Variational and Diffusion Problems in Random Walk Spaces (Progress in Nonlinear Differential Equations and Their Applications, 103) 303133583X, 9783031335839

This book presents the latest developments in the theory of gradient flows in random walk spaces. A broad framework is e


English Pages 401 [396] Year 2023


Table of contents :
Preface
Contents
1 Random Walks
1.1 Markov Chains
1.1.1 φ-Essential Irreducibility
1.2 Random Walk Spaces
1.3 Examples
1.4 The Nonlocal Gradient, Divergence and Laplace Operators
1.5 The Nonlocal Boundary, Perimeter and Mean Curvature
1.6 Poincaré-Type Inequalities
1.6.1 Global Poincaré-Type Inequalities
1.6.2 Poincaré-Type Inequalities on Subsets
2 The Heat Flow in Random Walk Spaces
2.1 The m-Heat Flow
2.2 Infinite Speed of Propagation
2.3 Asymptotic Behaviour
2.4 Ollivier-Ricci Curvature
2.5 The Bakry-Émery Curvature-Dimension Condition
2.6 Logarithmic-Sobolev Inequalities
2.7 Transport Inequalities
2.8 The m-Heat Content
2.8.1 Probabilistic Interpretation
2.8.2 The Spectral m-Heat Content
3 The Total Variation Flow in Random Walk Spaces
3.1 The m-Total Variation
3.2 The m-1-Laplacian and the m-Total Variation Flow
3.3 Asymptotic Behaviour
3.4 m-Cheeger and m-Calibrable Sets
3.5 The Eigenvalue Problem for −Δ_1^m
3.6 Isoperimetric Inequality
3.7 The m-Cheeger Constant
3.8 The m-Cheeger Constant and the m-Eigenvalues of −Δ_1^m
4 ROF-Models in Random Walk Spaces
4.1 The m-ROF Model with L2-Fidelity Term
4.1.1 The Gradient Descent Method
4.2 The m-ROF-Model with L1-Fidelity Term
4.2.1 The Geometric Problem
4.2.2 Regularity of Solutions in Terms of the Nonlocal Curvature
4.2.3 Thresholding Parameters
4.2.4 The Gradient Descent Method
5 Least Gradient Functions in Random Walk Spaces
5.1 The Nonlocal Least Gradient Problem
5.2 Nonlocal Median Value Property
5.3 Nonlocal Poincaré Inequality
6 Doubly Nonlinear Nonlocal Stationary Problems of Leray-Lions Type with Nonlinear Boundary Conditions
6.1 Nonlocal Diffusion Operators of Leray-Lions Type and Nonlocal Neumann Boundary Operators
6.2 Nonlocal Stationary Problems with Neumann Boundary Conditions of Gunzburger-Lehoucq Type
6.2.1 Existence of Solutions of an Approximate Problem
6.2.2 Some Estimates on the Solutions of the Approximate Problems
6.2.3 Monotonicity of the Solutions of the Approximate Problems
6.2.4 An Lp-Estimate for the Solutions of the Approximate Problems
6.2.5 Proof of the Existence Result
6.3 Neumann Boundary Conditions of Dipierro-Ros-Oton-Valdinoci Type
7 Doubly Nonlinear Nonlocal Diffusion Problems of Leray-Lions Type with Nonlinear Boundary Conditions
7.1 Evolution Problems with Neumann Boundary Conditions of Gunzburger-Lehoucq Type
7.1.1 Nonlinear Dynamical Boundary Conditions
7.1.2 The Evolution Problem for a Nonlocal Dirichlet-to-Neumann Operator
7.1.3 Doubly Nonlinear Boundary Conditions
7.1.4 Nonhomogeneous Boundary Conditions
7.2 Evolution Problems Under Neumann Boundary Conditions of Dipierro-Ros-Oton-Valdinoci Type
A Nonlinear Semigroups
A.1 Introduction
A.2 Abstract Cauchy Problems
A.3 Mild Solutions
A.4 Accretive Operators
A.5 Existence and Uniqueness Theorem
A.6 Regularity of the Mild Solution
A.7 Completely Accretive Operators
A.8 Yosida Approximation of Maximal Monotone Graphs in R × R
Bibliography
Index
Index of notations


Progress in Nonlinear Differential Equations and Their Applications 103

José M. Mazón · Marcos Solera-Diana · J. Julián Toledo-Melero

Variational and Diffusion Problems in Random Walk Spaces

Progress in Nonlinear Differential Equations and Their Applications Volume 103

Series Editor
Haïm Brezis, Rutgers University, New Brunswick, NJ, USA

Editorial Board Members
Luigi Ambrosio, Scuola Normale Superiore, Pisa, Italy
Henri Berestycki, École des Hautes Études en Sciences Sociales, Paris, France
Xavier Cabré, Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
Luis Caffarelli, University of Texas at Austin, Austin, TX, USA
Sun-Yung Alice Chang, Princeton University, Princeton, NJ, USA
Jean-Michel Coron, Université Pierre et Marie Curie, Paris, France
Manuel Del Pino, University of Bath, Bath, UK
Lawrence C. Evans, University of California, Berkeley, Berkeley, CA, USA
Alessio Figalli, ETH Zürich, Zürich, Switzerland
Rupert Frank, Mathematisches Institut, LMU Munich, München, Germany
Nicola Fusco, Università di Napoli “Federico II”, Naples, Italy
Sergiu Klainerman, Princeton University, Princeton, NJ, USA
Robert Kohn, Courant Institute of Mathematical Sciences, New York, NY, USA
Pierre-Louis Lions, Collège de France, Paris, France
Andrea Malchiodi, Scuola Normale Superiore, Pisa, Italy
Jean Mawhin, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
Frank Merle, Université de Cergy-Pontoise, Cergy, France
Giuseppe Mingione, Università di Parma, Parma, Italy
Felix Otto, MPI MIS Leipzig, Leipzig, Germany
Paul Rabinowitz, University of Wisconsin, Madison, WI, USA
John Toland, University of Bath, Bath, UK
Michael Vogelius, Rutgers University, Piscataway, NJ, USA

José M. Mazón • Marcos Solera-Diana • J. Julián Toledo-Melero

Variational and Diffusion Problems in Random Walk Spaces

José M. Mazón Department of Mathematical Analysis University of Valencia Burjassot, Spain

Marcos Solera-Diana Department of Mathematical Analysis University of Valencia Burjassot, Spain

J. Julián Toledo-Melero Department of Mathematical Analysis University of Valencia Burjassot, Spain

ISSN 1421-1750 ISSN 2374-0280 (electronic) Progress in Nonlinear Differential Equations and Their Applications ISBN 978-3-031-33583-9 ISBN 978-3-031-33584-6 (eBook) https://doi.org/10.1007/978-3-031-33584-6 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

J. M. Mazón dedicates this book to Fuensanta, who would have improved it, and to Claudia, who would have composed music to go with it. M. Solera-Diana dedicates this book to his family. J. J. Toledo-Melero dedicates this book to his family.

Preface

The digital world has brought with it many different kinds of data of increasing size and complexity. Indeed, modern devices allow us to easily obtain images of higher resolution, as well as to collect data on internet searches, healthcare analytics, social networks, geographic information systems, business informatics, etc. Consequently, the study and treatment of these big data sets is of great interest and value. In this respect, weighted discrete graphs provide a natural and flexible workspace in which to represent the data. In this context, a vertex represents a data point, and each edge is weighted according to an appropriately chosen measure of “similarity” between the corresponding vertices. Historically, the main tools for the study of graphs came from combinatorial graph theory. However, following the implementation of the graph Laplacian in the development of spectral clustering in the 1970s, the theory of partial differential equations on graphs has yielded important results in this field (see, for example, [72, 141] and the references therein). This has prompted a big surge in research on partial differential equations on graphs. Moreover, interest has been further bolstered by the study of problems in image processing. In this area of research, pixels are taken as the vertices and the “similarity” between pixels as the weights. The way in which these weights are defined depends on the problem at hand (see, for instance, [93] and [140]). On another note, let J : R^N → R be a nonnegative, radially symmetric and continuous function with ∫_{R^N} J(z) dz = 1. Nonlocal evolution equations of the form

(E)   u_t(x, t) = ∫_{R^N} J(y − x) u(y, t) dy − u(x, t),

and variations of it, have naturally arisen in various scientific fields as a means of modelling a wide range of diffusion processes: for example, in biology ([61, 160]), particle systems ([41]), coagulation models ([99]), nonlocal anisotropic models for phase transition ([3, 4]), mathematical finance using optimal control theory ([39, 127]), image processing ([106, 131]), etc. An intuitive reasoning for the wide applicability of this model comes from thinking of u(x, t) as the density of a

“population” at a point x at time t and of J(y − x) as the probability distribution of moving from y to x in one “jump”. Then ∫_{R^N} J(y − x) u(y, t) dy is the rate at which “individuals” are arriving at x from anywhere else, and −u(x, t) = −∫_{R^N} J(y − x) u(x, t) dy is the rate at which they are leaving location x. Therefore, in the absence of external or internal sources, we are led to equation (E) as a model for the evolution of the population density over time. An extensive study of this problem can be found in [21].

In the previous two paragraphs, we have brought forward two instances in which there is great interest in the study of partial differential equations in a nonlocal (or discrete) setting. Further interest has arisen following the analysis of the peridynamic formulation of continuum mechanics (see [137] and [179]), the study of Markov jump processes and other nonlocal models. References on all of the topics mentioned thus far are given all along the book (see also [46, 73, 88, 93, 101, 102, 107, 131, 140, 175, 192, 196, 202]).

The aim of this book is to unify into a broad framework the study of many of the previously mentioned problems. In order to do so, we note that there is a strong relation between some of these problems and probability theory, and it is in this field that we find the appropriate spaces in which to develop this unifying study. Let (X, B) be a measurable space and P : X × B → [0, 1] a transition probability kernel on X (see Sect. 1.1). Then, a Markovian transition function can be defined as follows: for any x ∈ X, B ∈ B, let

P_t(x, B) := e^{−t} ∑_{n=0}^{+∞} (t^n / n!) P^n(x, B),   t ∈ R^+,

where P^n denotes the n-step transition probability kernel. The associated family of operators, P_t f(x) := ∫ f(y) P_t(x, dy), satisfies

(∂/∂t) P_t f(x) = ∫ P_t f(y) P(x, dy) − P_t f(x).

Moreover, if we consider a Markov process (X_t)_{t≥0} associated to the Markovian transition function (P_t)_{t≥0}, and if we denote by μ_t the distribution of X_t, then the family (μ_t)_{t≥0} also satisfies a linear equation of the form

(∂/∂t) μ_t = ∫ P(y, ·) μ_t(dy) − μ_t.

In this setting, particular choices of the measurable space (X, B) and of P lead to some of the previous problems. For example, if X = R^N and P(x, dy) = J(y − x) dy, we recover equation (E). Moreover, taking X to be the set of vertices of a weighted discrete graph and appropriately defining the transition probability function in terms of the weights (see Example 1.41), we are also able to recover the heat equation on graphs.
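On a finite state space, the series defining P_t can be computed directly and the generator equation checked numerically. The sketch below uses an arbitrary illustrative 3-state kernel (our own choice, not an example from the book):

```python
import numpy as np

# Hypothetical 3-state transition kernel P: row x is the measure P(x, .).
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.4, 0.3],
              [0.2, 0.2, 0.6]])

def P_t(t, P, terms=60):
    """Markovian transition function P_t = e^{-t} sum_n (t^n/n!) P^n."""
    out = np.zeros_like(P)
    Pn = np.eye(P.shape[0])        # P^0
    coeff = 1.0                    # t^0 / 0!
    for n in range(terms):
        out += coeff * Pn
        Pn = Pn @ P
        coeff *= t / (n + 1)
    return np.exp(-t) * out

t, h = 1.0, 1e-5
Pt = P_t(t, P)

# P_t is again a transition kernel: each row is a probability measure.
assert np.allclose(Pt.sum(axis=1), 1.0)

# d/dt P_t f(x) = integral P_t f(y) P(x, dy) - P_t f(x),
# checked by a central finite difference in t.
f = np.array([1.0, -2.0, 0.5])
lhs = (P_t(t + h, P) @ f - P_t(t - h, P) @ f) / (2 * h)
rhs = P @ (Pt @ f) - Pt @ f
assert np.allclose(lhs, rhs, atol=1e-6)
print("P_1 =", np.round(Pt, 4))
```

Equivalently, P_t = e^{t(P − I)} on a finite space, so the same matrix could be obtained from a matrix exponential; the truncated series above simply follows the formula in the text.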

The previous remarks suggest that the appropriate setting in which to fulfill our goal of unifying a wide variety of nonlocal models into a broad framework is provided by random walk spaces. These spaces are constituted by a measurable space (X, B) and a transition probability kernel P on X which encodes the jumps of a Markov process. We adopt the notation m_x := P(x, ·) ∈ P(X, B) for each x ∈ X (here P(X, B) denotes the space of probability measures on (X, B)). Additionally, we require a kind of stability property to hold for these spaces, that is, the existence of an invariant measure ν (see Definition 1.8). Then, we say that [X, B, m, ν] is a random walk space. Owing to the generality of these spaces, the results that we obtain are applicable to a large spectrum of evolution problems arising in a variety of scientific fields.

In recent years, and with the aforementioned goal in mind, we have studied some gradient flows in the general framework of a random walk space. In particular, we have studied the heat flow, the total variation flow, and evolution problems of Leray-Lions type with different types of nonhomogeneous boundary conditions. Specifically, together with the existence and uniqueness of solutions to these problems and the asymptotic behaviour of their solutions, a wide variety of their properties have been studied, as well as the nonlocal diffusion operators involved in them. Our results have been published in [110, 150–153] and [180].

Let us summarize the contents of the book. To start with, in Chap. 1, we introduce the general framework of a random walk space. Then, in Sect. 1.1, we relate it to classical notions in Markov chain theory and provide a list of results which we hope aid the reader in getting a good idea of the properties which these spaces enjoy. After introducing a stability property for random walk spaces, called m-connectedness, we devote Sect. 1.2 to exploring the characteristics enjoyed by this notion and relate it to known concepts of ergodicity. We then provide a list of examples of random walk spaces of particular interest, such as those that were mentioned at the beginning of the introduction. The rest of the chapter is dedicated to introducing the nonlocal counterparts of classical notions like those of gradient, divergence, boundary, perimeter and mean curvature, as well as of the Laplace operator. In this context, associated with the random walk m = (m_x)_{x∈X}, the Laplace operator Δ_m is defined as

Δ_m f(x) := ∫_X (f(y) − f(x)) dm_x(y).
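For a locally finite weighted graph, Δ_m reduces to a matrix computation. The sketch below uses toy weights of our own choosing: m_x is encoded as a row of a stochastic matrix, Δ_m f is evaluated, and two basic facts are checked — constants are harmonic, and the degree measure is invariant for the walk.

```python
import numpy as np

# Toy symmetric weights w(x, y) on four vertices (illustrative values only).
w = np.array([[0., 1., 2., 0.],
              [1., 0., 1., 1.],
              [2., 1., 0., 1.],
              [0., 1., 1., 0.]])
d = w.sum(axis=1)        # weighted degrees
m = w / d[:, None]       # row x is the jump probability m_x({y}) = w(x,y)/d(x)

def laplacian_m(f):
    """Delta_m f(x) = integral_X (f(y) - f(x)) dm_x(y)."""
    return m @ f - f

f = np.array([3.0, -1.0, 0.0, 2.0])
print("Delta_m f =", np.round(laplacian_m(f), 4))

# Constant functions are harmonic: Delta_m c = 0.
assert np.allclose(laplacian_m(np.ones(4)), 0.0)
# The degree measure nu = d is invariant for this walk: nu P = nu.
assert np.allclose(d @ m, d)
```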

As we do, we obtain the nonlocal homologues of classical results. Moreover, we obtain further characterizations of the m-connectedness of a random walk space. We also spend some time finding sufficient conditions for Poincaré-type inequalities to hold and relate them both to the spectral gap of −Δ_m and to isoperimetric inequalities.

Chapter 2 is concerned with the heat flow in a random walk space [X, B, m, ν]. Assuming that the invariant measure ν satisfies a reversibility condition with respect to the random walk, the heat flow is the Markovian semigroup (e^{−tΔ_m})_{t≥0} generated by the operator −Δ_m. We obtain a complete expansion of (e^{−tΔ_m})_{t≥0} in terms of the n-step transition probability kernel of the random walk, and we characterize the infinite speed of propagation of (e^{−tΔ_m})_{t≥0} in terms of the m-connectedness of the random walk space. Moreover, we study its asymptotic behaviour, obtaining a rate of convergence by means of the Poincaré inequality. For the particular case of graphs, see [113] and [130], and for the case of nonsingular kernels in R^N, see [21]. Furthermore, we introduce the concepts of Ollivier-Ricci curvature and the Bakry-Émery curvature-dimension condition and study their relation with the Poincaré-type inequality. We also study the logarithmic-Sobolev inequality and the modified logarithmic-Sobolev inequality, which relate the entropy with the Fisher-Donsker-Varadhan information. Moreover, we show that, under the positivity of the Bakry-Émery curvature-dimension condition or the Ollivier-Ricci curvature, a transport-information inequality holds. Finally, we study the m-heat content and give a probabilistic interpretation of it.

In Chap. 3, we study the total variation flow in random walk spaces. For this purpose, we introduce the 1-Laplacian operator associated with a random walk space and obtain various characterizations of it. We then proceed to prove existence and uniqueness of solutions of the total variation flow in random walk spaces and to study its asymptotic behaviour with the help of some Poincaré-type inequalities. As a result of our study, we generalize results obtained in [148] and [149] as well as some results in graph theory. Moreover, in Chap. 3, we introduce the concepts of Cheeger and calibrable sets in random walk spaces and characterize the calibrability of a set by using the 1-Laplacian operator. Furthermore, we study the eigenvalue problem of the minus 1-Laplacian and relate it to the optimal Cheeger cut problem. These results apply, in particular, to locally finite weighted connected discrete graphs, complementing the results given in [65–67] and [120].
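The rate of convergence for the heat flow mentioned above can be observed numerically: for a finite reversible random walk, the L²(ν) distance of e^{−tΔ_m}f from its mean decays at least like e^{−λt}, where λ is the spectral gap of −Δ_m. A minimal sketch, with toy weights of our own choosing:

```python
import numpy as np

# Reversible random walk on a toy weighted graph (illustrative weights).
w = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])
d = w.sum(axis=1)
P = w / d[:, None]
nu = d / d.sum()                 # invariant (reversible) probability measure

# For a nu-reversible walk, D^{1/2} P D^{-1/2} is symmetric (D = diag(nu)),
# so the heat semigroup e^{-t Delta_m} = e^{t(P - I)} diagonalises.
Dh = np.diag(np.sqrt(nu))
Dhi = np.diag(1 / np.sqrt(nu))
evals, U = np.linalg.eigh(Dh @ P @ Dhi)
gap = 1 - np.sort(evals)[-2]     # spectral gap of -Delta_m

def heat(t, f):
    return Dhi @ U @ np.diag(np.exp(t * (evals - 1))) @ U.T @ Dh @ f

f = np.array([1.0, 0.0, -1.0])
mean = nu @ f
for t in [1.0, 5.0]:
    dev = np.sqrt(nu @ (heat(t, f) - mean) ** 2)       # L^2(nu) distance
    bound = np.exp(-gap * t) * np.sqrt(nu @ (f - mean) ** 2)
    assert dev <= bound + 1e-9
print("spectral gap:", round(gap, 4))
```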
The 1-Laplacian operator also plays an essential role in the next two chapters. Chapter 4 is dedicated to the study of the Rudin-Osher-Fatemi model in random walk spaces both with .L2 and with .L1 fidelity terms. We obtain the Euler-Lagrange equations of these minimization problems and proceed to obtain a wide range of results on the properties enjoyed by the minimizers. In the case of the .L1 fidelity term, we show the relation of the ROF model with a family of geometric problems and we obtain thresholding parameters for the minimizers; these are special values of the scale parameter at which the scale space suffers a sudden transition. Chapter 5 is concerned with the nonlocal least gradient problem. This consists in finding the least gradient function on a set which extends a fixed function on its nonlocal boundary. Some results obtained in [145] are improved and extended to this general framework; thus, in particular, to locally finite weighted discrete graphs. In Chap. 6, we introduce a class of Leray-Lions type operators that generalize the p-Laplacian operator on random walk spaces. Furthermore, two types of Neumann boundary operators are presented. The main aim of this chapter is the study of a large class of stationary doubly nonlinear nonlocal problems of p-Laplacian type with Neumann boundary conditions or homogenous Dirichlet boundary conditions. This study serves as the basis for the study of diffusion problems in the following chapter.

Chapter 7 is devoted to the study of p-Laplacian type evolution problems in random walk spaces. We study evolution problems like the one given by the following model equation:

u_t(t, x) = ∫_{Ω ∪ ∂_m Ω} |u(y) − u(x)|^{p−2} (u(y) − u(x)) dm_x(y),   x ∈ Ω, 0 < t < T,

with nonhomogeneous Neumann boundary conditions, where Ω ∈ B and ∂_m Ω is the m-boundary of Ω. This reference model can be regarded as the nonlocal counterpart of the classical evolution problem driven by the equation

u_t = div(|∇u|^{p−2} ∇u),   x ∈ U, 0 < t < T,

with nonhomogeneous Neumann boundary conditions, where U is a bounded smooth domain in R^N. In fact, our study develops with a far greater generality which allows us to cover a wide variety of problems such as: obstacle problems, the nonlocal counterpart of Stefan-like problems, diffusion problems in porous media, Hele-Shaw type problems, evolution problems with dynamical boundary conditions and the evolution problem for a nonlocal Dirichlet-to-Neumann operator.

As already mentioned, we study these variational and diffusion problems in the framework of random walk spaces (or of metric random walk spaces). In order to develop this work, we usually assume the reversibility of the invariant measure with respect to the random walk or the m-connectedness of the random walk space, or both. In fact, the reader can assume both of them all along in the interest of facilitating the reading. In some cases, we also suppose that the invariant measure is finite, and in these cases, we generally assume that it is a probability measure, since there is no loss of generality in doing this rescaling in the results obtained. If some of these assumptions are made in the entirety of a chapter or section, we will state them at the beginning of that chapter or section so as to simplify the statements given.

The book ends with an appendix which provides an outline of nonlinear semigroup theory. This is a powerful theory for studying nonlinear evolution problems, as is shown in the various chapters where it is used.

In the bibliography, we list the sources used in the preparation of this monograph, including references to related works. However, it may be far from complete and we apologize for any omission. Finally, an index and an index of notations are included for ease of reference.

It is a pleasure to acknowledge W. Górny, to whom we express our gratitude for being a co-author of part of the work presented.
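As a toy illustration of the model p-Laplacian equation displayed earlier in this preface, the following is a naive explicit-Euler discretisation on a three-point state space. The kernel and the scheme are our own illustrative choices, not taken from the book; the run exhibits the expected conservation of mass and flattening towards the mean.

```python
import numpy as np

# Toy uniform kernel m_x on three states; naive explicit Euler in time.
p = 3.0
m = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

def p_laplacian_m(u):
    """integral |u(y)-u(x)|^{p-2} (u(y)-u(x)) dm_x(y) on a finite space."""
    diff = u[None, :] - u[:, None]        # diff[x, y] = u(y) - u(x)
    flux = np.abs(diff) ** (p - 2) * diff
    return (m * flux).sum(axis=1)         # integrate against dm_x

u = np.array([2.0, 0.0, -2.0])
mass0 = u.sum()
dt, steps = 0.01, 2000
for _ in range(steps):
    u = u + dt * p_laplacian_m(u)

# A symmetric kernel makes the fluxes antisymmetric, so total mass is
# conserved, and the solution flattens towards the conserved mean.
assert abs(u.sum() - mass0) < 1e-8
assert u.max() - u.min() < 0.1
print("u(T) =", np.round(u, 4))
```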
JMM, MSD and JJTM are partially supported by the Spanish MICIU and FEDER, project PGC2018-094775-B-100, and by the “Conselleria d’Innovació, Universitats, Ciència i Societat Digital”, project AICO/2021/223. MSD also acknowledges support by the Spanish MICIU under grant BES-2016-079019 (which was supported by the European FSE); by the Spanish “Ministerio de Universidades” and NextGenerationEU, programme “Recualificación del sistema universitario español” under grant UP2021-044 (Margarita Salas); and by the “Conselleria d’Innovació, Universitats, Ciència i Societat Digital”, programme “Subvenciones para la contratación de personal investigador en fase postdoctoral” (APOSTD 2022) under grant CIAPOS/2021/28.

Burjassot, Spain
November 2021

José M. Mazón · Marcos Solera-Diana · J. Julián Toledo-Melero


Chapter 1

Random Walks

This chapter introduces the central definitions used in the book. The central object of the spaces on which the book is built is the random walk. This notion is thoroughly covered in Sect. 1.1, where we delve into the classical theory of Markov chains. Section 1.2 then focuses on introducing random walk spaces, while a list of examples is given in Sect. 1.3. Note that no previous knowledge of random walks is required; all necessary definitions and results are given in the text.

Definition 1.1 Let (X, B) be a measurable space such that the σ-algebra B is countably generated. A random walk on (X, B) is a family of probability measures (m_x)_{x∈X} on B such that x ↦ m_x(B) is a measurable function on X for each fixed B ∈ B.

The notation and terminology chosen in this definition come from [166]. However, we achieve a more general framework by considering measurable spaces (with countably generated σ-algebras) instead of Polish metric spaces (X, d) (see Definition 1.26). As noted in that paper, geometers may think of m_x as a replacement for the notion of balls around x, while in probabilistic terms we can rather think of these probability measures as defining a Markov chain whose transition probability from x to y in n steps is

dm_x^{∗n}(y) := ∫_{z∈X} dm_z(y) dm_x^{∗(n−1)}(z),  n ≥ 1,  (1.1)

and m_x^{∗0} = δ_x, the Dirac measure at x. In the next section, we develop this latter perspective, which will be the main one throughout our work; the former perspective will play an important role in Sect. 2.4. We therefore take a momentary break from the construction of what will be our framework space for the book to recall some results of the classical theory of Markov chains which we believe will aid in providing motivation.
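On a finite state space, the convolution (1.1) reduces to matrix multiplication: the kernel is a row-stochastic matrix and m_x^{∗n} is the x-th row of its n-th power. The following minimal sketch illustrates this; the 3-state kernel P is a made-up example, not taken from the text.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, n):
    """Return the n-step kernel; the 0-step kernel is the identity (Dirac measures)."""
    size = len(P)
    Q = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]  # m^{*0} = delta_x
    for _ in range(n):
        Q = matmul(Q, P)  # one more convolution with the kernel, as in (1.1)
    return Q

# hypothetical 3-state kernel: row x lists the measure m_x on {0, 1, 2}
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

P3 = n_step(P, 3)
for row in P3:  # each m_x^{*3} is again a probability measure
    assert abs(sum(row) - 1.0) < 1e-12
```

The semigroup identity m_x^{∗(n+k)} = m^{∗k} followed by m^{∗n}, stated below in the text, corresponds to P^{n+k} = P^n P^k for the matrix powers.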

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. M. Mazón et al., Variational and Diffusion Problems in Random Walk Spaces, Progress in Nonlinear Differential Equations and Their Applications 103, https://doi.org/10.1007/978-3-031-33584-6_1


With the exception of the definitions, the reader may skip the following section, since it is mainly motivational (Definitions 1.2, 1.3, 1.4 and 1.6 may also be skipped).

1.1 Markov Chains

In this section we immerse ourselves in the probabilistic terminology which houses our work. We expect this to be particularly useful for readers with a probabilistic background, as it serves to clarify where exactly our work falls in this field. Furthermore, we recall well-known results to provide further insight into the nature of random walks. To this aim we start by giving the definition of a discrete-time, time-homogeneous Markov chain and the Markov property which it satisfies. The results in this section can be found in [87, 121, 155] or [201].

Definition 1.2 Let (Ω, F, P) be a probability space and (X, B) a measurable space such that the σ-algebra B is countably generated. A discrete-time, time-homogeneous Markov chain is a sequence of X-valued random variables {X_n : n = 0, 1, 2, ...} defined on Ω such that

P(X_{n+1} ∈ B | X_0, X_1, ..., X_n) = P(X_{n+1} ∈ B | X_n)  ∀B ∈ B, n = 0, 1, ...  (1.2)

The identity (1.2) is called the Markov property. It indicates that, given its present value, the future of the process is independent of the past, so that, intuitively, we can say that the process is "memoryless". A large variety of examples of Markov chains may be found in [87, 121] or [155].

In this book we never work directly with the random variables and instead use a different, but equivalent, approach to Markov chains. For each x ∈ X and B ∈ B, let

P(x, B) := P(X_{n+1} ∈ B | X_n = x).  (1.3)

This defines a stochastic kernel on X, which means that:
• P(x, ·) is a probability measure on B for each fixed x ∈ X, and
• P(·, B) is a measurable function on X for each fixed B ∈ B.

The stochastic kernel P(x, B) is also known as a (Markov) transition probability kernel. The "time homogeneity" of the Markov chain refers to the fact that P(x, B) as defined in (1.3) is independent of n. This stochastic kernel is what we have previously called a random walk, so that, in our terminology, m_x(B) = P(x, B). Similarly, we denote by m_x^{∗n} the n-step transition probability kernel P^n(x, B) := P(X_n ∈ B | X_0 = x), x ∈ X, B ∈ B, which, as before, can be recursively defined by






P^n(x, B) = ∫_X P^{n−1}(y, B) P(x, dy) = ∫_X P(y, B) P^{n−1}(x, dy)

for B ∈ B, x ∈ X and n = 1, 2, ..., with P^0(x, ·) = δ_x(·). Note that, as can be easily proved by induction,

dm_x^{∗(n+k)}(y) = ∫_{z∈X} dm_x^{∗k}(z) dm_z^{∗n}(y)

for every n, k ∈ N.

It can then be shown that, in fact, starting with a random walk (or stochastic kernel) on X, we can construct a Markov chain (X_n) such that its transition probability kernels coincide with the given random walk (see [155, Theorem 3.4.1]). In particular, it is proven that, for a given initial probability distribution μ on B, one can construct the probability measure P_μ on F so that P_μ(X_0 ∈ B) = μ(B) for B ∈ B and, moreover, for every n = 0, 1, ..., x ∈ X and B ∈ B:

P_μ(X_{n+1} ∈ B | X_n = x) = m_x(B).

When μ is the Dirac measure at x ∈ X, we denote P_μ by P_x. In the same way, the corresponding expectation operators are denoted by E_μ and E_x, respectively.

We denote the characteristic function of a subset A of a set X by χ_A, i.e. χ_A : X → {0, 1} is defined by

χ_A(x) := 1 if x ∈ A,  0 if x ∉ A.

We now define some general concepts of stability for Markov chains that will be used all along this work. These notions offer, in some way, an insight into the long-term behaviour of the process as it evolves with time. To this aim let us introduce the following concepts.

Definition 1.3
(i) Let A ∈ B. The occupation time η_A is the number of visits of the Markov chain to A after time zero:

η_A := Σ_{n=1}^∞ χ_{{X_n ∈ A}}.

(ii) For any set A ∈ B, the random variable

τ_A := min{n ≥ 1 : X_n ∈ A}

is called the first return time to A. We use the convention inf ∅ = ∞.
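The occupation time and the first return time are easy to observe by simulation. The sketch below uses a hypothetical 2-state kernel (not from the text) and truncates τ_A at a finite horizon to stand in for the convention inf ∅ = ∞.

```python
import random

# hypothetical 2-state kernel: row x lists the measure m_x on {0, 1}
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, rng):
    """Draw the next state from the measure m_state."""
    return 0 if rng.random() < P[state][0] else 1

def first_return_time(start, A, horizon, rng):
    """tau_A = min{n >= 1 : X_n in A}, truncated at `horizon` (standing in for inf)."""
    state = start
    for n in range(1, horizon + 1):
        state = step(state, rng)
        if state in A:
            return n
    return float("inf")

rng = random.Random(0)
times = [first_return_time(0, {1}, 10_000, rng) for _ in range(500)]
# from state 0 the chain reaches {1} with positive probability at each step,
# so returns occur in finite time in every simulated run
assert all(1 <= t < float("inf") for t in times)
```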


The first and least restrictive concept of stability is that of ϕ-irreducibility, where ϕ is some measure on B. With this concept we are requiring that, no matter what the starting point, we will be able to reach any "important" set in a finite number of jumps. The "important" sets will be understood as those which have positive measure with respect to the measure ϕ. We may also understand this as asking that the Markov chain does not in truth consist of two separate chains. Let

U(x, A) := Σ_{n=1}^∞ m_x^{∗n}(A) = E_x[η_A].

Definition 1.4 Let ϕ be a measure on B. A random walk m is ϕ-irreducible if, for every x ∈ X:

ϕ(A) > 0 ⇒ U(x, A) > 0.

Alternatively (see [155, Proposition 4.2.1]), we may also understand this concept by using τ_A. Then, m is ϕ-irreducible if, for every x ∈ X,

ϕ(A) > 0 ⇒ P_x(τ_A < ∞) > 0,

i.e. starting from any point x ∈ X, we have a positive probability of arriving at any set of positive measure in finite time.

Instead of taking some possibly arbitrary measure to define the irreducibility of the random walk, we can take the maximal irreducibility measure, which defines the range of the chain more completely. This is done through the following proposition. Let

K_{a_{1/2}}(x, A) := Σ_{n=0}^∞ (1/2^{n+1}) m_x^{∗n}(A),  x ∈ X, A ∈ B(X);

this kernel defines, for each x ∈ X, a probability measure equivalent to m_x^{∗0}(·) + U(x, ·) (a measure which may be infinite for many sets), i.e. they agree on which sets have measure zero.

Proposition 1.5 ([155, Proposition 4.2.2]) Given a ϕ-irreducible random walk m on (X, B), there exists a probability measure ψ on B such that:
(i) m is ψ-irreducible;
(ii) for any other measure ϕ′, the random walk m is ϕ′-irreducible if and only if ϕ′ ≪ ψ;
(iii) if ψ(A) = 0, then ψ({y : P_y(τ_A < ∞) > 0}) = 0;
(iv) the probability measure ψ is equivalent to


ψ′(A) := ∫_X K_{a_{1/2}}(y, A) dϕ′(y)

for any finite irreducibility measure ϕ′.

A measure satisfying the conditions of the previous proposition is called a maximal irreducibility measure. For notational convenience, we say that m is ψ-irreducible if it is ϕ-irreducible for some measure ϕ and the measure ψ is a maximal irreducibility measure.

A stronger stability notion can be obtained by asking, not only that U(x, A) > 0, but that U(x, A) = ∞ for every x ∈ X and every set A with ϕ(A) > 0. Alternatively, we may strengthen the requirement that there is a positive probability of reaching every set of ϕ-positive measure wherever we start from, and instead require that, in fact, this has to eventually happen. These approaches lead to the various concepts of recurrence.

Definition 1.6
(i) Let A ∈ B. The set A is called recurrent if U(x, A) = E_x[η_A] = ∞ for all x ∈ A.
(ii) The Markov chain is called recurrent if it is ψ-irreducible and U(x, A) = E_x[η_A] = ∞ for every x ∈ X and every A ∈ B(X) with ψ(A) > 0.
(iii) Let A ∈ B. The set A is called Harris recurrent if

P_x[η_A = ∞] = 1

for all x ∈ A.
(iv) The chain is Harris recurrent if it is ψ-irreducible and every A ∈ B(X) with ψ(A) > 0 is Harris recurrent.

It follows that any Harris recurrent set is recurrent. Indeed, for recurrence we require that the expected number of visits is infinite, while Harris recurrence requires that the number of visits is infinite almost surely. Notably, by [155, Theorem 9.0.1], a recurrent chain differs by a ψ-null set from a Harris recurrent chain. Moreover, it is proved that there is a dichotomy in the sense that irreducible Markov chains cannot be "partially stable": either they possess these stability properties uniformly in x, or the chain is unstable in a well-defined way (we will, however, not enter into this; see [155] for details).

Another stability property that we will use is given by the existence of an invariant measure. This is a measure which provides a distribution such that, if the chain starts distributed in this way, then it remains so. Furthermore, these measures turn out to be the ones which define the long-term behaviour of the chain.

Definition 1.7 If m is a random walk on (X, B) and μ is a σ-finite measure on X, the convolution of μ with m on X is the measure defined as follows:


μ ∗ m(A) := ∫_X m_x(A) dμ(x)  ∀A ∈ B,

which is the image of μ by the random walk m.

Definition 1.8 If m is a random walk on (X, B), a σ-finite measure ν on X is invariant with respect to the random walk m if

ν ∗ m = ν.
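On a finite state space, the convolution of Definition 1.7 is a vector-matrix product, and an invariant probability measure can be approximated by iterating it. The kernel below is a hypothetical illustration (not from the book); for it, power iteration converges because the chain is irreducible and aperiodic.

```python
# hypothetical 3-state kernel: row x lists the measure m_x
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

def convolve(nu, P):
    """(nu * m)(j) = sum_i nu(i) m_i({j}): Definition 1.7 on a finite space."""
    n = len(P)
    return [sum(nu[i] * P[i][j] for i in range(n)) for j in range(n)]

nu = [1.0 / 3] * 3            # any starting probability distribution
for _ in range(10_000):       # iterate until (numerically) stationary
    nu = convolve(nu, P)

# invariance: nu * m = nu, as in Definition 1.8
for a, b in zip(convolve(nu, P), nu):
    assert abs(a - b) < 1e-10
```

For this particular kernel the limit is (1/4, 1/2, 1/4), which can be checked directly by hand from the balance equations.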

Of course, if an invariant measure is finite, then it can and will be normalized to a (stationary) probability measure. Therefore, all along this book, whenever we require the finiteness of the invariant measure, we directly consider an invariant probability measure.

Remark 1.9 Note that if m is a random walk on (X, B), then m^{∗n} is also a random walk on (X, B) for every n ∈ N. Moreover, if ν is invariant with respect to m, then ν is invariant with respect to m^{∗n} for every n ∈ N.

Bringing irreducibility and the existence of an invariant measure together, we get the following results.

Proposition 1.10 ([155, Proposition 10.0.1]) If the random walk m is recurrent, then it admits a unique (up to constant multiples) invariant measure.

Proposition 1.11 ([155, Proposition 10.1.1]) If the random walk m is ψ-irreducible and admits an invariant probability measure, then it is recurrent; thus, in particular, the invariant probability measure is unique.

Furthermore, we give the following theorem relating the invariant and maximal irreducibility measures for recurrent chains.

Theorem 1.12 ([155, Theorem 10.4.9]) If the random walk m is recurrent, then the unique (up to constant multiples) invariant measure ν with respect to m is equivalent to ψ (thus, ν is a maximal irreducibility measure).

Another well-known concept is that of an ergodic measure.

Definition 1.13 A set B ∈ B is said to be invariant (or absorbing, or stochastically closed) with respect to m if m_x(B) = 1 for every x ∈ B. An invariant probability measure ν is said to be ergodic (with respect to m) if ν(B) = 0 or ν(B) = 1 for every invariant set B ∈ B.

A profound study of ergodic theory for Markov chains can be found in [87, Chapter 5]. There one can find the construction of a dynamical system associated with a Markov chain such that the previous notion of ergodicity is equivalent to the classical notion of ergodicity for this dynamical system ([87, Theorem 5.2.11]).
The following result ensures that uniqueness of the invariant measure implies its ergodicity.


Proposition 1.14 ([121, Proposition 2.4.3]) Let m be a random walk on X. If m has a unique invariant probability measure ν, then ν is ergodic.

This, together with Proposition 1.11 and Theorem 1.12, implies the following.

Corollary 1.15 If m is a ψ-irreducible random walk that admits an invariant probability measure, then the invariant probability measure is unique, ergodic and equivalent to ψ.

Finally, the last property that we introduce is the existence of a measure which is reversible with respect to the Markov chain. This reversibility condition on a measure is stronger than the invariance condition. We first define the tensor product of a σ-finite measure and a stochastic kernel.

Definition 1.16 If ν is a σ-finite measure on (X, B) and N is a stochastic kernel on (X, B), we define the tensor product of ν and N, denoted by ν ⊗ N, as the measure on (X × X, B × B) given by

ν ⊗ N(A × B) := ∫_A N(x, B) dν(x),  (A, B) ∈ B × B.
Using our notation m for the random walk, we denote the tensor product of .ν and m by .ν ⊗ mx . Definition 1.17 A .σ -finite measure .ν on .B is reversible with respect to the random walk m if the measure .ν ⊗ mx on .B × B is symmetric, i.e. for all .(A, B) ∈ B × B: ν ⊗ mx (A × B) = ν ⊗ mx (B × A).

.

Equivalently, .ν is reversible with respect to m if, for all bounded measurable functions f defined on .(X × X, B × B):  

  f (x, y)dmx (y)dν(x) =

.

X

X

f (y, x)dmx (y)dν(x). X

X

Note that if .ν is reversible with respect to m, then it is invariant with respect to m (see [87, Proposition 1.5.2]). Associated with a Markov chain, we can define the following operator which will play a very important role in many of our developments. Definition 1.18 If .ν is an invariant measure with respect to m, we define the linear operator .Mm on .L1 (X, ν) into itself as follows:  Mm f (x) :=

f (y)dmx (y), f ∈ L1 (X, ν).

.

X

Mm is called the averaging operator on .[X, B, m] (see, e.g. [166]).

.

Note that, if .f ∈ L1 (X, ν) then, using the invariance of .ν with respect to m



∫_X ∫_X |f(y)| dm_x(y) dν(x) = ∫_X |f(x)| dν(x) < ∞,

so f ∈ L^1(X, m_x) for ν-a.e. x ∈ X; thus M_m is well defined from L^1(X, ν) into itself.

Remark 1.19 Let ν be an invariant measure with respect to m. It follows that

‖M_m f‖_{L^1(X,ν)} ≤ ‖f‖_{L^1(X,ν)}  ∀f ∈ L^1(X, ν),

so that M_m is a contraction on L^1(X, ν). In fact, since M_m f ≥ 0 if f ≥ 0, M_m is a positive contraction on L^1(X, ν). Moreover, by Jensen's inequality, for f ∈ L^1(X, ν) ∩ L^2(X, ν),

‖M_m f‖²_{L²(X,ν)} = ∫_X ( ∫_X f(y) dm_x(y) )² dν(x) ≤ ∫_X ∫_X f(y)² dm_x(y) dν(x) = ∫_X f(x)² dν(x) = ‖f‖²_{L²(X,ν)}.

Therefore, M_m is a linear operator in L^2(X, ν) with domain

D(M_m) = L^1(X, ν) ∩ L^2(X, ν).

Consequently, if ν is a probability measure, M_m is a bounded linear operator from L^2(X, ν) into itself satisfying ‖M_m‖_{B(L^2(X,ν), L^2(X,ν))} ≤ 1.

Note that, making use of this operator, B ∈ B is invariant with respect to m (Definition 1.13) if, and only if, M_m χ_B ≥ χ_B. We may slightly weaken this notion as follows.

Definition 1.20 We say that B ∈ B is ν-invariant (with respect to m) if M_m χ_B = χ_B ν-a.e.

Similarly, we define the notion of a harmonic (or ν-invariant) function.

Definition 1.21 A function f ∈ L^1(X, ν) is said to be harmonic (with respect to m) if M_m f = f ν-a.e.

We may therefore recall a classic result which characterizes the ergodicity of ν (see, e.g. [121, Lemma 5.3.2]).

Proposition 1.22 Let ν be an invariant probability measure. Then, ν is ergodic if, and only if, every harmonic function is constant ν-a.e.
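On a finite state space, M_m f is the matrix-vector product P f. The sketch below (hypothetical kernel and test function, not from the text) checks the L^1(ν)-contraction of Remark 1.19 and the fact that constant functions are harmonic.

```python
# hypothetical 3-state kernel with invariant measure nu
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
nu = [0.25, 0.5, 0.25]

def averaging(P, f):
    """M_m f(x) = integral of f against m_x = (P f)(x) on a finite space."""
    return [sum(P[x][y] * f[y] for y in range(len(f))) for x in range(len(P))]

def l1_norm(f, nu):
    return sum(abs(fx) * nx for fx, nx in zip(f, nu))

f = [3.0, -1.0, 2.0]
Mf = averaging(P, f)
assert l1_norm(Mf, nu) <= l1_norm(f, nu) + 1e-12   # contraction on L^1(nu)

c = [7.0, 7.0, 7.0]
assert averaging(P, c) == c                        # constants are harmonic
```

Since this kernel is irreducible (hence its invariant measure is ergodic), Proposition 1.22 says the constants are in fact the only harmonic functions here.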


1.1.1 ϕ-Essential Irreducibility

A different direction may be taken to define the irreducibility of a random walk (see [164] or [174, Definition 4.4]).

Definition 1.23 Let ϕ be a measure on B. A random walk m is ϕ-essentially irreducible if, for ϕ-a.e. x ∈ X:

ϕ(A) > 0 ⇒ U(x, A) > 0.

While this change may appear small, it actually leads to a wider and wilder class of "irreducible" models. However, there is a nice dichotomy result.

Proposition 1.24 ([164, Proposition 2]) Let ν be an invariant measure with respect to the random walk m such that m is ν-essentially irreducible. Then, only one of the following two cases may happen:
(i) there exists X₁ ∈ B such that ν(X \ X₁) = 0, X₁ is invariant with respect to m and

ν ≪ U(x, ·) for every x ∈ X₁,

i.e. the restriction of the Markov chain to X₁ is ν-irreducible;
(ii) there exists X₂ ∈ B such that ν(X \ X₂) = 0, X₂ is invariant with respect to m and

ν ⊥ U(x, ·) for every x ∈ X₂.

Since most of the examples of application of our results actually fall into the first category of this dichotomy, the previous results in this section are applicable to them. However, some extremal examples fall into the second category, a case which the classic literature does not usually cover (see [174, Chapter 4] for a discussion of some of the results obtained using this weakened form). Therefore, we now proceed to develop some of the results that we have given for ϕ-irreducible Markov chains, but for ϕ-essentially irreducible Markov chains (assuming, in addition, that ϕ is an invariant measure). At this point we recover our terminology, in which Markov chains are referred to as random walks and the notion of ϕ-essential irreducibility is called m-connectedness.
Since most of the examples of application of our results actually fall into the first category in this theorem, the previous results in this section are applicable. However, some extremal examples fall into the second category, a case which classic literature does not usually cover (see [174, Chapter 4] for a discussion about some of the results obtained using this weakened form). Therefore, we now proceed to develop some of the results that we have given for .ϕ-irreducible Markov chains but for .ϕessentially irreducible Markov chains (assuming, in addition, that .ϕ is an invariant measure). At this point we recover our terminology, in which Markov chains are referred to as random walks and the notion of .ϕ-essential irreducibility is called m-connectedness.

1.2 Random Walk Spaces

We continue developing the spaces in which we will work.


Definition 1.25 Let (X, B) be a measurable space where the σ-algebra B is countably generated. Let m be a random walk on (X, B) (see Definition 1.1) and ν an invariant measure with respect to m (see Definition 1.8). The measurable space together with m and ν is then called a random walk space and denoted by [X, B, m, ν].

Definition 1.26 Let [X, B, m, ν] be a random walk space. If (X, d) is a Polish metric space (a separable completely metrizable topological space), B is its Borel σ-algebra and ν is a Radon measure (i.e. ν is inner regular and locally finite), then we say that [X, B, m, ν] is a metric random walk space, and we denote it by [X, d, m, ν]. Moreover, as is done in [166], when necessary, we also assume that each measure m_x has finite first moment, i.e. for some (hence any) z ∈ X and for any x ∈ X one has ∫_X d(z, y) dm_x(y) < +∞.

Definition 1.27 Let [X, B, m, ν] be a random walk space. We say that [X, B, m, ν] is m-connected if, for every D ∈ B with ν(D) > 0 and ν-a.e. x ∈ X:

Σ_{n=1}^∞ m_x^{∗n}(D) > 0,

i.e. m is ν-essentially irreducible.

Note that, in this definition, we are requiring that the random walk is ν-essentially irreducible with the additional requirement that ν is actually an invariant measure (as was done in Proposition 1.24). However, the irreducibility measure and the invariant measure are usually introduced separately, as seen in the previous section. Nonetheless, this provides a simpler all-in-one notion whose choice is, moreover, justified by Theorem 1.12. Note that, as mentioned in the previous section, the fundamental concept is that all parts of the space can be reached after a certain number of jumps, no matter what the starting point (except for, at most, a ν-null set of points).

We now recall how this notion was originally introduced in [150]. This serves to introduce notation which we will use in some results.

Definition 1.28 Let [X, B, m, ν] be a random walk space and D ∈ B. We denote (recall (1.1))

N_D^m := {x ∈ X : m_x^{∗n}(D) = 0 ∀n ∈ N}.

For n ∈ N, we also define

H_{D,n}^m := {x ∈ X : m_x^{∗n}(D) > 0}

and

H_D^m := ∪_{n∈N} H_{D,n}^m = {x ∈ X : Σ_{n=1}^∞ m_x^{∗n}(D) > 0}.

With this notation, [X, B, m, ν] is m-connected if, and only if, ν(N_D^m) = 0 for every D ∈ B such that ν(D) > 0. Note that N_D^m and H_D^m are disjoint and

X = N_D^m ∪ H_D^m.

Observe also that N_D^m, H_{D,n}^m and H_D^m belong to B. In the next result, we see that N_D^m is invariant and H_D^m is ν-invariant (recall Definitions 1.13 and 1.20).

Proposition 1.29 Let [X, B, m, ν] be a random walk space and let D ∈ B. If N_D^m ≠ ∅, then:
(i) m_x^{∗n}(H_D^m) = 0 (thus, m_x^{∗n}(N_D^m) = 1) for every x ∈ N_D^m and n ∈ N, i.e. N_D^m is invariant with respect to m.
(ii) m_x^{∗n}(H_D^m) = 1 (thus, m_x^{∗n}(N_D^m) = 0) for ν-almost every x ∈ H_D^m and for every n ∈ N, i.e. H_D^m is ν-invariant with respect to m.

Consequently, for every x ∈ N_D^m and ν-a.e. y ∈ H_D^m, we have m_x ⊥ m_y, i.e. m_x and m_y are mutually singular.

Proof (i): Suppose that m_x^{∗k}(H_D^m) > 0 for some x ∈ N_D^m and k ∈ N; then, since H_D^m = ∪_n H_{D,n}^m, there exists n ∈ N such that m_x^{∗k}(H_{D,n}^m) > 0 and, therefore,

m_x^{∗(n+k)}(D) = ∫_{z∈X} m_z^{∗n}(D) dm_x^{∗k}(z) ≥ ∫_{z∈H_{D,n}^m} m_z^{∗n}(D) dm_x^{∗k}(z) > 0

since m_z^{∗n}(D) > 0 for every z ∈ H_{D,n}^m, and this contradicts that x ∈ N_D^m.

(ii): Fix n ∈ N. Using the invariance of ν with respect to m^{∗n} and statement (i), we have that

ν(N_D^m) = ∫_X m_x^{∗n}(N_D^m) dν(x) = ∫_{H_D^m} m_x^{∗n}(N_D^m) dν(x) + ∫_{N_D^m} dν(x) = ∫_{H_D^m} m_x^{∗n}(N_D^m) dν(x) + ν(N_D^m).

Consequently, m_x^{∗n}(N_D^m) = 0 for ν-a.e. x ∈ H_D^m. □

This result exemplifies how a random walk m which is not m-connected is in reality composed of two (or more) separate random walks, one whose jumps occur in H_D^m and the other in N_D^m. Moreover, we may restrict the invariant measure to any of these subsets in order to obtain invariant measures for the restricted random walks, as seen in the following result.


Proposition 1.30 Let [X, B, m, ν] be a random walk space and let D ∈ B. For every n ∈ N and A ∈ B,

ν(A ∩ H_D^m) = ∫_{H_D^m} m_x^{∗n}(A) dν(x)

and

ν(A ∩ N_D^m) = ∫_{N_D^m} m_x^{∗n}(A) dν(x).

Proof By the invariance of ν with respect to m^{∗n} and Proposition 1.29, we have that, for any A ∈ B,

ν(A ∩ H_D^m) = ∫_X m_x^{∗n}(A ∩ H_D^m) dν(x) = ∫_{H_D^m} m_x^{∗n}(A ∩ H_D^m) dν(x) = ∫_{H_D^m} m_x^{∗n}(A) dν(x).

Similarly, one proves the other statement. □

In the following result, we see that, given a random walk space [X, B, m, ν], if we start at ν-almost any point x of a set D ∈ B of ν-positive measure, then there is a positive probability that we eventually return to D. In the terms of the previous section, P_x(τ_D < ∞) > 0 for ν-a.e. x ∈ D.

Corollary 1.31 Let [X, B, m, ν] be a random walk space. For any D ∈ B,

ν(D ∩ N_D^m) = 0.

Consequently, if ν(D) > 0, then D ⊂ H_D^m up to a ν-null set; therefore, for ν-a.e. x ∈ D, there exists n ∈ N such that m_x^{∗n}(D) > 0.

Proof By Proposition 1.30,

ν(D ∩ N_D^m) = ∫_{N_D^m} m_x^{∗n}(D) dν(x) = 0. □

Finally, we now give another approach to define an m-connected random walk space. This approach requires the notion of m-interaction between sets and is very good at providing intuition not only for the concept of m-connectedness but also for the reversibility condition of a measure. Definition 1.32 Let .[X, B, m, ν] be a random walk space and let A, .B ∈ B. We define the m-interaction between A and B as


L_m(A, B) := ∫_A ∫_B dm_x(y) dν(x) = ∫_A m_x(B) dν(x).

Note that, whenever L_m(A, B) < +∞, if ν is reversible with respect to m, then

L_m(A, B) = L_m(B, A).
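On a finite state space, the m-interaction is a finite sum, and the symmetry under reversibility is immediate to verify numerically. A sketch with a made-up kernel and reversible measure (not from the text):

```python
# hypothetical 3-state kernel with a measure nu that is reversible for it
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
nu = [0.25, 0.5, 0.25]

def interaction(A, B):
    """L_m(A, B) = sum over x in A of nu({x}) m_x(B) on a finite space."""
    return sum(nu[x] * sum(P[x][y] for y in B) for x in A)

A, B = {0}, {1, 2}
# symmetry L_m(A, B) = L_m(B, A) holds because nu is reversible for P
assert abs(interaction(A, B) - interaction(B, A)) < 1e-12
```

In the population reading given below, both quantities count the same flow of individuals across the interface between A and B in one jump.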

A possible interpretation of this notion is the following: for a population which is originally distributed according to ν and which moves according to the law provided by the random walk m, L_m(A, B) measures how many individuals move from A to B in one jump. Then, if ν is reversible with respect to m, this is equal to the number of individuals moving from B to A in one jump.

In order to facilitate notation, we make the following definition.

Definition 1.33 Let [X, B, m, ν] be a random walk space. We say that [X, B, m, ν] is a reversible random walk space if ν is reversible with respect to m (see Definition 1.17). Moreover, if [X, d, m, ν] is a metric random walk space and ν is reversible with respect to m, then we say that [X, d, m, ν] is a reversible metric random walk space.

The following result gives a characterization of m-connectedness in terms of the m-interaction between sets.

Proposition 1.34 Let [X, B, m, ν] be a random walk space. The following statements are equivalent:
(i) [X, B, m, ν] is m-connected.
(ii) If A, B ∈ B satisfy A ∪ B = X and L_m(A, B) = 0, then either ν(A) = 0 or ν(B) = 0.
(iii) If A ∈ B is a ν-invariant set, then either ν(A) = 0 or ν(X \ A) = 0.

Proof (i) ⇒ (ii): Assume that [X, B, m, ν] is m-connected and let A, B be as in statement (ii). If

0 = L_m(A, B) = ∫_A ∫_B dm_x(y) dν(x),

then there exists N₁ ∈ B, ν(N₁) = 0, such that

m_x(B) = 0 for every x ∈ A \ N₁.

Now, since ν is invariant with respect to m,

0 = ν(N₁) = ∫_X m_x(N₁) dν(x),


and, consequently, there exists N₂ ∈ B, ν(N₂) = 0, such that

m_x(N₁) = 0 ∀x ∈ X \ N₂.

Hence, for x ∈ A \ (N₁ ∪ N₂),

m_x^{∗2}(B) = ∫_X χ_B(y) dm_x^{∗2}(y) = ∫_X ( ∫_X χ_B(y) dm_z(y) ) dm_x(z) = ∫_X m_z(B) dm_x(z)
= ∫_A m_z(B) dm_x(z) + ∫_B m_z(B) dm_x(z)   [the second integral is 0, since x ∈ A \ N₁]
= ∫_{A\N₁} m_z(B) dm_x(z) + ∫_{N₁} m_z(B) dm_x(z)   [the first is 0 since z ∈ A \ N₁, the second since x ∈ A \ N₂]
= 0.

Working as above, we find N₃ ∈ B, ν(N₃) = 0, such that

m_x(N₁ ∪ N₂) = 0 ∀x ∈ X \ N₃.

Hence, for x ∈ A \ (N₁ ∪ N₂ ∪ N₃),

m_x^{∗3}(B) = ∫_X χ_B(y) dm_x^{∗3}(y) = ∫_X ( ∫_X χ_B(y) dm_z^{∗2}(y) ) dm_x(z) = ∫_X m_z^{∗2}(B) dm_x(z)
= ∫_A m_z^{∗2}(B) dm_x(z) + ∫_B m_z^{∗2}(B) dm_x(z)   [the second integral is 0, since x ∈ A \ (N₁ ∪ N₂)]
= ∫_{A\(N₁∪N₂)} m_z^{∗2}(B) dm_x(z) + ∫_{N₁∪N₂} m_z^{∗2}(B) dm_x(z)   [the first is 0 since z ∈ A \ (N₁ ∪ N₂), the second since x ∈ A \ N₃]
= 0.

Inductively, we obtain that

m_x^{∗n}(B) = 0 for ν-a.e. x ∈ A and every n ∈ N.

Consequently,

A ⊂ N_B^m

up to a ν-null set; thus ν(B) > 0 implies that ν(A) = 0.

(ii) ⇒ (iii): Note that if A is ν-invariant, then L_m(A, X \ A) = 0.

(iii) ⇒ (i): Let D ∈ B with ν(D) > 0. Then, by Proposition 1.29, H_D^m is ν-invariant but, by Corollary 1.31, ν(H_D^m) ≥ ν(D) > 0; thus ν(N_D^m) = 0. □


Note that this result also justifies the choice of the terminology used, since the characterization of m-connectedness given is in some way reminiscent of the definition of a connected topological space. Let us also use this moment to introduce the notion of m-connectedness for a subset of a reversible random walk space.

Definition 1.35 Let [X, B, m, ν] be a reversible random walk space, and let Ω ∈ B with ν(Ω) > 0. Let B_Ω be the following σ-algebra:

B_Ω := {B ∈ B : B ⊂ Ω}.

We say that Ω is m-connected (with respect to ν) if L_m(A, B) > 0 for every pair of non-ν-null sets A, B ∈ B_Ω such that A ∪ B = Ω.

If a random walk space [X, B, m, ν] is not m-connected, then we may obtain non-trivial decompositions of X as follows.

Definition 1.36 Let [X, B, m, ν] be a reversible random walk space and Ω ∈ B with 0 < ν(Ω) < ν(X). Suppose that Ω₁, Ω₂ ∈ B satisfy: Ω = Ω₁ ∪ Ω₂, Ω₁ ∩ Ω₂ = ∅, ν(Ω₁) > 0, ν(Ω₂) > 0 and L_m(Ω₁, Ω₂) = 0. Then, we write Ω = Ω₁ ⊔_m Ω₂.

We are now able to characterize the m-connectedness of a random walk space in terms of the ergodicity of the invariant measure (recall Corollary 1.15).

Theorem 1.37 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. Then

[X, B, m, ν] is m-connected ⇔ ν is ergodic with respect to m.

Proof (⇒): Let B ∈ B be invariant. Then, B is ν-invariant; thus, by Proposition 1.34, ν(B) = 0 or ν(B) = 1.

(⇐): Let D ∈ B with ν(D) > 0. By Proposition 1.29, N_D^m is invariant with respect to m. Then, since ν is ergodic, ν(N_D^m) = 0 or ν(N_D^m) = 1. Now, since ν(D) > 0, we have that, by Corollary 1.31, ν(N_D^m) = 0 and, consequently, [X, B, m, ν] is m-connected. □

Finally, let us give a sufficient condition for the ϕ-irreducibility of a random walk. This involves the following definition (see, e.g. [121, Section 7.2]).

Definition 1.38 Let [X, d, m, ν] be a metric random walk space. We say that [X, d, m, ν] has the strong-Feller property at x₀ ∈ X if

m_{x₀}(A) = lim_{n→+∞} m_{x_n}(A) for every Borel set A ⊂ X

whenever x_n → x₀ in (X, d) as n → +∞. We say that [X, d, m, ν] has the strong-Feller property if it has the strong-Feller property at every point of X.


Proposition 1.39 Let [X, d, m, ν] be a metric random walk space such that

supp ν = X.

Suppose further that [X, d, m, ν] has the strong-Feller property and that (X, d) is connected. Then, m is ν-irreducible (thus m-connected).

Proof Recall that setwise convergence of a sequence of probability measures is equivalent to the convergence of the integrals against bounded measurable functions (in fact, by [97, Theorem 2.3], convergence on open or closed sets is enough). Therefore, since [X, d, m, ν] has the strong-Feller property and

m_x^{∗k}(A) = ∫_X m_y^{∗(k−1)}(A) dm_x(y),  x ∈ X, A ∈ B,

[X, d, m^{∗k}, ν] also has the strong-Feller property for any k ∈ N.

Let D ∈ B with ν(D) > 0. Let us see first that H_D^m is open or, equivalently, that N_D^m is closed. If (x_n)_{n≥1} ⊂ N_D^m is a sequence such that lim_{n→∞} x_n = x ∈ X, then

m_x^{∗k}(D) = lim_{n→∞} m_{x_n}^{∗k}(D) = 0

for any k ∈ N; thus x ∈ N_D^m, as desired.

However, H_D^m is also closed. Indeed, if m_x(H_D^m) < 1 for some x ∈ H_D^m, then, since [X, d, m, ν] has the strong-Feller property, there exists r > 0 such that m_y(H_D^m) < 1 for every y ∈ B_r(x) ⊂ H_D^m. Therefore, by Proposition 1.29, ν(B_r(x)) = 0, which is in contradiction with supp ν = X. Hence:

m_x(H_D^m) = 1 ⇔ x ∈ H_D^m.

Then, given (x_n)_{n≥1} ⊂ H_D^m such that lim_{n→∞} x_n = x ∈ X, we have

m_x(H_D^m) = lim_{n→∞} m_{x_n}(H_D^m) = 1,

1.3 Examples Example 1.40 (Nonlocal Continuum Problems) Consider the metric measure space (RN , d, LN ), where d is the Euclidean distance and LN the Lebesgue measure on RN (which we also denote by |.|). For simplicity, we write dx instead of dLN (x). Let J : RN → [0,  +∞[ be a measurable, nonnegative and radially symmetric function verifying RN J (x)dx = 1. Let mJ be the following random walk on


(R^N, d):

m_x^J(A) := ∫_A J(x − y) dy for every x ∈ R^N and every Borel set A ⊂ R^N.
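Concretely, one step of this random walk from x lands at x + Z, where Z is drawn from the density J. The sketch below takes N = 1 and J = (1/2)·1_{[−1,1]}, a uniform radial kernel chosen purely for illustration (it is an assumption of the example, not a choice made in the text).

```python
import random

def step(x, rng):
    """One jump of m^J in dimension 1 with J(z) = 1/2 on [-1, 1], 0 elsewhere."""
    return x + rng.uniform(-1.0, 1.0)

rng = random.Random(0)
xs = [0.0]
for _ in range(1000):
    xs.append(step(xs[-1], rng))

# every jump has length at most 1, reflecting the compact support of this J
assert all(abs(b - a) <= 1.0 for a, b in zip(xs, xs[1:]))
```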

Then, applying Fubini's theorem, it is easy to see that the Lebesgue measure L^N is reversible with respect to m^J. Therefore, [R^N, d, m^J, L^N] is a reversible metric random walk space.

An interpretation, similar to the one given in Sect. 1.2 for the m-interaction between sets, can be given for m^J. In this case, if in R^N we consider a population such that each individual starting at location x jumps to location y according to the probability density J(x − y), then, for a Borel set A in R^N, m_x^J(A) measures the proportion of individuals who start at x and arrive in A after one jump.

Example 1.41 (Weighted Discrete Graphs) Consider a locally finite weighted discrete graph

G = (V(G), E(G)),

where V(G) is the vertex set, E(G) is the edge set and each edge (x, y) ∈ E(G) (we write x ∼ y if (x, y) ∈ E(G)) has a positive weight w_xy = w_yx assigned. Suppose further that w_xy = 0 if (x, y) ∉ E(G). Note that there may be loops in the graph, that is, we may have (x, x) ∈ E(G) for some x ∈ V(G) and, therefore, w_xx > 0. Recall that a graph is locally finite if every vertex is only contained in a finite number of edges.

A finite sequence {x_k}_{k=0}^n of vertices of the graph is called a path if x_k ∼ x_{k+1} for all k = 0, 1, ..., n − 1. The length of a path {x_k}_{k=0}^n is defined as the number n of edges in the path. With this terminology, G = (V(G), E(G)) is said to be connected if, for any two vertices x, y ∈ V(G), there is a path connecting x and y, that is, a path {x_k}_{k=0}^n such that x_0 = x and x_n = y. Finally, if G = (V(G), E(G)) is connected, the graph distance d_G(x, y) between any two distinct vertices x, y is defined as the minimum of the lengths of the paths connecting x and y. Note that this metric is independent of the weights.

For x ∈ V(G), we define the weight at x as

d_x := Σ_{y∼x} w_xy = Σ_{y∈V(G)} w_xy,

and the neighbourhood of x as N_G(x) := {y ∈ V(G) : x ∼ y}. Note that, by the definition of a locally finite graph, the sets N_G(x) are finite. When all the non-null weights are 1, d_x coincides with the degree of the vertex x in the graph, that is, the number of edges containing x.

For each x ∈ V(G), we define the following probability measure:

m_x^G := (1/d_x) Σ_{y∼x} w_xy δ_y.

It is not difficult to see that the measure ν_G defined as

ν_G(A) := Σ_{x∈A} d_x,  A ⊂ V(G),

is a reversible measure with respect to this random walk. Therefore, [V(G), B, m^G, ν_G] is a reversible random walk space (B is the σ-algebra of all subsets of V(G)) and [V(G), d_G, m^G, ν_G] is a reversible metric random walk space.

Proposition 1.42 Let [V(G), d_G, m^G, ν_G] be the metric random walk space associated with a connected locally finite weighted discrete graph G = (V(G), E(G)). Then, m^G is ν_G-irreducible.

Proof Take D ⊂ V(G) with νG(D) > 0, and let us see that N^{m^G}_D = ∅. Suppose that there exists y ∈ N^{m^G}_D; this implies that

(m^G)^{*n}_y(D) = 0  ∀ n ∈ N.  (1.4)

Now, given x ∈ D, there exists a path {x, z1, z2, ..., z_{k−1}, y} (x ∼ z1 ∼ z2 ∼ ··· ∼ z_{k−1} ∼ y) of length k connecting x and y, and, therefore,

(m^G)^{*k}_y({x}) ≥ (w_{y z_{k−1}} w_{z_{k−1} z_{k−2}} ··· w_{z2 z1} w_{z1 x}) / (dy d_{z_{k−1}} d_{z_{k−2}} ··· d_{z2} d_{z1}) > 0,

which is in contradiction with (1.4). □
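For a finite weighted graph, the random walk m^G and the reversible measure νG above are plain matrix objects. The following sketch (NumPy; the 3-vertex weight matrix is an invented toy example, not taken from the text) checks that each m^G_x is a probability measure and that νG satisfies the detailed balance condition νG({x}) m^G_x({y}) = νG({y}) m^G_y({x}).

```python
import numpy as np

# symmetric weight matrix of a toy 3-vertex graph (our own example)
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])

d = W.sum(axis=1)          # vertex weights d_x = sum_y w_xy
M = W / d[:, None]         # transition kernel m^G_x({y}) = w_xy / d_x
nu = d                     # reversible measure nu_G({x}) = d_x

assert np.allclose(M.sum(axis=1), 1.0)                    # each m^G_x is a probability
assert np.allclose(nu[:, None] * M, (nu[:, None] * M).T)  # detailed balance
```

Reversibility reduces to the symmetry of νG({x}) m^G_x({y}) = wxy, which is why the weighted graph is the model case of a reversible random walk space.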

In Machine Learning Theory [101, 102], an example of a weighted discrete graph is a point cloud in R^N, V = {x1, ..., xn}, with edge weights wxi,xj given by

wxi,xj := η(|xi − xj|),  1 ≤ i, j ≤ n,

where the kernel η : [0, ∞) → [0, ∞) is a radial profile satisfying:
(i) η(0) > 0, and η is continuous at 0,
(ii) η is non-decreasing,
(iii) the integral ∫_0^∞ η(r) r^N dr is finite.

Example 1.43 (Markov Chains) Let K : X × X → R be a Markov kernel on a countable space X, i.e.

K(x, y) ≥ 0  ∀ x, y ∈ X,   ∑_{y∈X} K(x, y) = 1  ∀ x ∈ X.

Then, if

m^K_x(A) := ∑_{y∈A} K(x, y),  x ∈ X, A ⊂ X,

and B is the σ-algebra of all subsets of X, m^K is a random walk on (X, B).

Recall that, in discrete Markov chain theory terminology, a measure π on X satisfying

∑_{x∈X} π(x) = 1  and  π(y) = ∑_{x∈X} π(x)K(x, y)  ∀ y ∈ X,

is called a stationary probability measure (or steady state) on X. Of course, π is a stationary probability measure if, and only if, π is an invariant probability measure with respect to m^K. Consequently, if π is a stationary probability measure on X, then [X, B, m^K, π] is a random walk space. Furthermore, a stationary probability measure π is said to be reversible for K if the following detailed balance equation holds:

K(x, y)π(x) = K(y, x)π(y)  for x, y ∈ X.

This balance condition is equivalent to

dm^K_x(y) dπ(x) = dm^K_y(x) dπ(y)  for x, y ∈ X.

Note that, given a locally finite weighted discrete graph G = (V(G), E(G)) as in Example 1.41, there is a natural definition of a Markov chain on the vertices. Indeed, define the Markov kernel KG : V(G) × V(G) → R as

KG(x, y) := wxy / dx.

Then, m^G and m^{KG} define the same random walk. If νG(V(G)) is finite, the unique reversible probability measure with respect to m^G is given by

πG(x) := (1/νG(V(G))) ∑_{z∈V(G)} wxz.

Usually, the study of a set of objects with pairwise relationships is carried out in the workspace of weighted graphs. However, in many real-world problems, it is not convenient to represent a set of more complex relations between objects in such a way. A more general framework for these situations is provided by hypergraphs. In these, instead of edges, hyperedges are considered, which are subsets of the set of vertices. Hypergraphs have been recognized in several fields, such as computer vision [124, 165], bioinformatics [132, 191] or information retrieval [55, 105], as the proper workspace in which to study higher-order relations and have aided in improving the learning performance.

Example 1.44 (Weighted Hypergraphs) Let V be a countable set and let E be a subset of 2^V satisfying that e is finite for every e ∈ E and that V = ∪_{e∈E} e. H = (V, E) together with a weight function w : E → [0, ∞) is called a weighted hypergraph.

The elements of V are called vertices and those of E are called hyperedges. A hyperedge e ∈ E is said to be incident with a vertex x ∈ V when x ∈ e. Given a finite hypergraph H, we define IH, the incidence matrix of H, as the |V| × |E| matrix with the following entries:

h(x, e) = 1 if x ∈ e,  h(x, e) = 0 if x ∉ e.

The degree of a vertex x ∈ V is defined as

d^w_x = dx := ∑_{e∈E : x∈e} w(e),

and the degree of a hyperedge e ∈ E is defined by δ(e) := |e|, i.e. the cardinality of e. If V = {xn : n ∈ N} is countably infinite, we assume that ∑_{n=1}^∞ dxn < ∞. The volume of a subset of vertices A ⊂ V is defined by

vol(A) := ∑_{x∈A} dx.

Given two distinct vertices x, y ∈ V, an alternate sequence of distinct vertices and hyperedges

{x1, e1, x2, e2, ..., e_{k−1}, xk}

such that x1 = x, xk = y and {xi, x_{i+1}} ⊂ ei for 1 ≤ i ≤ k − 1, is called a hyperpath between x and y. A hypergraph is connected if there exists a hyperpath between every pair of distinct vertices. A connected hypergraph can be equipped with the standard shortest hyperpath distance dH, that is, dH(x, x) = 0 and, if x ≠ y, dH(x, y) is the minimum number of hyperedges in hyperpaths between x and y.

A Markov kernel is defined on a connected hypergraph by the following transition rule (see [204]; see also [75] and [89]): given a starting vertex x ∈ V, first select a hyperedge e randomly over all hyperedges incident with x, with probability proportional to w(e), and then choose a vertex y ∈ e uniformly at random over all vertices in e. If K denotes the transition probability matrix of this Markov kernel, then each entry of K is given by

K(x, y) = (1/dx) ∑_{e∈E} w(e) h(x, e)h(y, e)/δ(e).

The stationary probability measure on H is given by

π = (1/vol(V)) ∑_{x∈V} dx δx.

Indeed, ∑_{x∈V} π(x) = 1 and, for y ∈ V:

∑_{x∈V} π(x)K(x, y) = ∑_{x∈V} (dx/vol(V)) (1/dx) ∑_{e∈E} w(e) h(x, e)h(y, e)/δ(e)
  = (1/vol(V)) ∑_{x∈V} ∑_{e∈E} w(e) h(x, e)h(y, e)/δ(e)
  = (1/vol(V)) ∑_{e∈E} w(e) (h(y, e)/δ(e)) ∑_{x∈V} h(x, e)
  = (1/vol(V)) ∑_{e∈E} w(e)h(y, e) = dy/vol(V) = π(y).

Observe that π is, in fact, reversible for K, i.e. it verifies the detailed balance equation:

K(x, y)π(x) = K(y, x)π(y)  for x, y ∈ V.

Therefore, we may define the random walk m^H = m^K as in Example 1.43, that is, for x ∈ V and A ⊂ V:

m^H_x(A) = ∑_{y∈A} K(x, y) = (1/dx) ∑_{y∈A} ∑_{e∈E} w(e) h(x, e)h(y, e)/δ(e) = (1/dx) ∑_{y∈A} ∑_{e∈E : x,y∈e} w(e)/δ(e).

Consequently, [V, dH, m^H, νH] is a reversible metric random walk space, where νH = π. Note that

∫_V f(y) dm^H_x(y) = (1/dx) ∑_{y∈V} f(y) ∑_{e∈E : x,y∈e} w(e)/δ(e).
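The two-step transition rule above is easy to realize as a matrix computation. A minimal sketch (NumPy; the vertex set, hyperedges and weights form an invented toy example) builds K from the incidence matrix and checks both the stationarity normalization and detailed balance:

```python
import numpy as np

# toy hypergraph: 4 vertices, hyperedges e0={0,1,2}, e1={2,3}, e2={0,3} (our own example)
edges = [[0, 1, 2], [2, 3], [0, 3]]
w = np.array([1.0, 2.0, 0.5])             # hyperedge weights w(e)
n = 4
H = np.zeros((n, len(edges)))             # incidence matrix h(x, e)
for j, e in enumerate(edges):
    H[e, j] = 1.0

delta = H.sum(axis=0)                     # hyperedge degrees delta(e) = |e|
d = H @ w                                 # vertex degrees d_x = sum_{e : x in e} w(e)
K = (H * (w / delta)) @ H.T / d[:, None]  # K(x,y) = (1/d_x) sum_e w(e) h(x,e) h(y,e) / delta(e)
pi = d / d.sum()                          # stationary measure pi(x) = d_x / vol(V)

assert np.allclose(K.sum(axis=1), 1.0)                    # K is a Markov kernel
assert np.allclose(pi[:, None] * K, (pi[:, None] * K).T)  # pi is reversible for K
```

Note that K(x, x) > 0 here, since the second step may return to the starting vertex; this matches the transition rule in the text.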

Example 1.45 (ε-Step Random Walks) From a metric measure space (X, d, μ), we can obtain a random walk, the so-called ε-step random walk associated with μ, as follows. Assume that balls in X have finite measure and that supp(μ) = X. Given ε > 0, the ε-step random walk on X, starting at x ∈ X, consists in randomly jumping in the ball of radius ε around x, with probability proportional to μ, namely,

m^{μ,ε}_x := μ⌞B(x, ε) / μ(B(x, ε)),

where μ⌞B(x, ε) denotes the restriction of μ to B(x, ε) (or, more precisely, to B_{B(x,ε)}, where B is the Borel σ-algebra associated with (X, d)).

If μ(B(x, ε)) = μ(B(y, ε)) for every x, y ∈ X, then μ is reversible with respect to m^{μ,ε}; thus, [X, d, m^{μ,ε}, μ] is a reversible metric random walk space.

Corollary 1.46 Let (X, d, ν) be a metric measure space such that (X, d) is a length space (see [59, Chapter 2]), supp(ν) = X and ν is a probability measure. Suppose that ν(B(x, ε)) = ν(B(y, ε)) for some ε > 0 and every x, y ∈ X, and consider the reversible metric random walk space [X, d, m^{ν,ε}, ν], where m^{ν,ε} is the ε-step random walk associated with ν. Then, ν is ergodic.

Proof To prove this result, we use Proposition 1.34 (ii). Let A, B ⊆ X be Borel sets with A ∪ B = X, ν(A) > 0 and ν(B) > 0, and let us see that L_{m^{ν,ε}}(A, B) > 0. Without loss of generality, we may assume that A and B are disjoint. Indeed, if ν(A) = 1 or ν(B) = 1, then we can easily see that L_{m^{ν,ε}}(A, B) > 0; otherwise, we can consider A′ = A \ B and B′ = B \ A, obtaining that L_{m^{ν,ε}}(A, B) ≥ L_{m^{ν,ε}}(A′, B′).

Let us first see that there exists a ball B(x0, δ) with δ ≤ ε/4 such that

0 < ν(A ∩ B(x0, δ)), ν(B ∩ B(x0, δ)) < ν(B(x0, δ)).  (1.5)

Otherwise, for every x ∈ X, we have that either x ∈ Ã or x ∈ B̃, where

à := {x ∈ X : for every δ ≤ ε/4, ν(A ∩ B(x, δ)) = ν(B(x, δ))}

and

B̃ := {x ∈ X : for every δ ≤ ε/4, ν(B ∩ B(x, δ)) = ν(B(x, δ))}.

Then, since X is a length space, d(Ã, B̃) = 0. Indeed, if d(Ã, B̃) > c > 0, then the assumption that X is a length space implies that there exists a point with positive distance to both à and B̃. Therefore, there exists a ball with positive distance to both à and B̃, and this is in contradiction with the fact that à ∪ B̃ = X, so d(Ã, B̃) = 0.

Now, take x0 ∈ Ã and y0 ∈ B̃ such that d(x0, y0) < ε/8. Then, by the definition of Ã, ν(A ∩ B(x0, ε/4)) = ν(B(x0, ε/4)); similarly, by the definition of B̃, ν(B ∩ B(y0, ε/8)) = ν(B(y0, ε/8)). Since B(y0, ε/8) ⊂ B(x0, ε/4), this yields a contradiction because A and B are disjoint. Hence, there exists a ball B(x0, δ) with δ ≤ ε/4 which satisfies condition (1.5). Then,

L_{m^{ν,ε}}(A, B) = ∫_A ∫_B dm^{ν,ε}_x(y) dν(x) ≥ ∫_{A∩B(x0,δ)} ∫_B dm^{ν,ε}_x(y) dν(x)
  = ∫_{A∩B(x0,δ)} (1/ν(B(x, ε))) ∫_B χ_{{y : d(x,y)<ε}}(y) dν(y) dν(x)
  ≥ ν(A ∩ B(x0, δ)) ν(B ∩ B(x0, δ)) / ν(B(x0, 2ε)) > 0,

which concludes the proof. □
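On spaces where all ε-balls have the same measure the construction is very concrete. A simulation sketch (pure Python; the choice of the circle R/Z with normalized arc length is our own illustrative example, not from the text): the ε-step walk is a uniform jump of size at most ε, and, consistently with the ergodicity just proved, a long trajectory equidistributes.

```python
import random

# epsilon-step random walk on the circle R/Z with arc-length measure:
# every ball B(x, eps) has the same measure, so the measure is reversible.
eps = 0.1
random.seed(0)
x = 0.0
visits_upper_half = 0
N = 200_000
for _ in range(N):
    x = (x + random.uniform(-eps, eps)) % 1.0   # uniform jump in B(x, eps)
    visits_upper_half += (x < 0.5)

freq = visits_upper_half / N
assert abs(freq - 0.5) < 0.05   # the trajectory spends ~half its time in each half
```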

Example 1.47 Given a random walk space [X, B, m, ν] and Ω ∈ B with ν(Ω) > 0, let

m^Ω_x(A) := ∫_A dmx(y) + ( ∫_{X\Ω} dmx(y) ) δx(A)  for every A ∈ B_Ω and x ∈ Ω.

Then, m^Ω is a random walk on (Ω, B_Ω) and it is easy to see that ν⌞Ω is invariant with respect to m^Ω. Therefore, [Ω, B_Ω, m^Ω, ν⌞Ω] is a random walk space. Moreover, if ν is reversible with respect to m, then ν⌞Ω is reversible with respect to m^Ω. Of course, if ν is a probability measure, we may normalize ν⌞Ω to obtain the random walk space

[Ω, B_Ω, m^Ω, (1/ν(Ω)) ν⌞Ω].

Note that if [X, d, m, ν] is a metric random walk space and Ω is closed, then [Ω, d, m^Ω, ν⌞Ω] is also a metric random walk space, where we abuse notation and denote by d the restriction of d to Ω. In particular, in the context of Example 1.40, if Ω is a closed and bounded subset of R^N, we obtain the metric random walk space [Ω, d, m^{J,Ω}, L^N], where m^{J,Ω} := (m^J)^Ω; that is,

m^{J,Ω}_x(A) := ∫_A J(x − y) dy + ( ∫_{R^N\Ω} J(x − z) dz ) δx(A)

for every Borel set A ⊂ Ω and x ∈ Ω.

Using this last example, we can characterize m-connected sets as follows (recall Definition 1.35).

Proposition 1.48 Let [X, B, m, ν] be a random walk space and let Ω ∈ B with ν(Ω) > 0. Then:

Ω is m-connected ⇒ [Ω, B_Ω, m^Ω, ν⌞Ω] is m^Ω-connected.

.

Moreover, if there exists Ω′ ∈ B_Ω such that 0 < ν(Ω′) < ν(Ω), then

Ω is m-connected ⇔ [Ω, B_Ω, m^Ω, ν⌞Ω] is m^Ω-connected.

Proof In general, if A, B ∈ B_Ω, then

Lm(A, B) ≤ L_{m^Ω}(A, B),

so the left to right implication follows. Let A, B ∈ B_Ω. Then, if x ∈ A \ B,

m^Ω_x(B) = mx(B) + mx(X \ Ω)δx(B) = mx(B).

Therefore, if ν(A ∩ B) = 0,

L_{m^Ω}(A, B) = Lm(A, B).

Now, assume that there exists Ω′ ∈ B_Ω such that 0 < ν(Ω′) < ν(Ω). Let us see the right to left implication. Let A, B ∈ B_Ω be such that Lm(A, B) = 0 and A ∪ B = Ω. If ν(A ∩ B) = 0, then L_{m^Ω}(A, B) = Lm(A, B) = 0; thus, by Proposition 1.34, ν(A) = 0 or ν(B) = 0. Otherwise, ν(A \ B) > 0, ν(B \ A) > 0 or ν(A) = ν(B) = ν(Ω). In the first two cases, we have that Lm(A \ B, B) ≤ Lm(A, B) = 0 or Lm(A, B \ A) ≤ Lm(A, B) = 0, respectively, so we conclude in the same way. If ν(A) = ν(B) = ν(Ω), then Lm(Ω, Ω) = 0; thus, L_{m^Ω}(Ω′, Ω \ Ω′) = 0. Therefore, ν(Ω′) = 0 or ν(Ω′) = ν(Ω), which is in contradiction with 0 < ν(Ω′) < ν(Ω). □
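In matrix terms, restricting a finite-state random walk to Ω means adding the escaping mass back as an atom at the current state. A sketch of the construction of m^Ω (NumPy; the 4-state weight matrix is an invented toy example):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 3.0, 0.0],
              [2.0, 3.0, 0.0, 2.0],
              [1.0, 0.0, 2.0, 0.0]])
M = W / W.sum(axis=1)[:, None]       # random walk m on X = {0, 1, 2, 3}

Omega = [0, 1, 2]                    # keep the first three states
M_sub = M[np.ix_(Omega, Omega)]
escape = 1.0 - M_sub.sum(axis=1)     # mass m_x(X \ Omega)
M_Omega = M_sub + np.diag(escape)    # m^Omega_x = m_x restricted + m_x(X\Omega) delta_x

nu_Omega = W.sum(axis=1)[Omega]      # nu restricted to Omega
assert np.allclose(M_Omega.sum(axis=1), 1.0)
# nu restricted to Omega is still reversible for m^Omega:
assert np.allclose(nu_Omega[:, None] * M_Omega, (nu_Omega[:, None] * M_Omega).T)
```

Off the diagonal, νΩ({x}) m^Ω_x({y}) is still the symmetric weight wxy, which is why reversibility survives the restriction.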

1.4 The Nonlocal Gradient, Divergence and Laplace Operators

Let us introduce the nonlocal counterparts of some classical concepts.

Definition 1.49 Let [X, B, m, ν] be a random walk space. Given a function u : X → R, we define its nonlocal gradient ∇u : X × X → R as

∇u(x, y) := u(y) − u(x)  ∀ x, y ∈ X.

Moreover, given z : X × X → R, its m-divergence divm z : X → R is defined as

(divm z)(x) := (1/2) ∫_X (z(x, y) − z(y, x)) dmx(y).

We define the (nonlocal) Laplace operator as follows (recall the definition of the averaging operator given in Definition 1.18).

Definition 1.50 Let [X, B, m, ν] be a random walk space. We define the m-Laplace operator (or m-Laplacian) from L^1(X, ν) into itself as Δm := Mm − I, i.e.

Δm u(x) = ∫_X u(y) dmx(y) − u(x) = ∫_X (u(y) − u(x)) dmx(y),  u ∈ L^1(X, ν).

The m-Laplace operator is also called the drift operator (see [155, Chapter 8]). Note that

Δm f(x) = divm(∇f)(x).

Remark 1.51 It holds that ‖Δm f‖_1 ≤ 2‖f‖_1 and

∫_X Δm f(x) dν(x) = 0  ∀ f ∈ L^1(X, ν).  (1.6)

As in Remark 1.19, we obtain that Δm is a linear operator in L^2(X, ν) with domain

D(Δm) = L^1(X, ν) ∩ L^2(X, ν).

Moreover, if ν is a probability measure, Δm is a bounded linear operator in L^2(X, ν) satisfying ‖Δm‖ ≤ 2.

In the case of the random walk space associated with a locally finite weighted discrete graph G = (V, E) (as defined in Example 1.41), the Laplace operator coincides with the graph Laplacian (also called the normalized graph Laplacian) studied by many authors (see, e.g. [29, 30, 86] or [128]):

Δu(x) := (1/dx) ∑_{y∼x} wxy (u(y) − u(x)),  u ∈ L^2(V, νG), x ∈ V.
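On a finite graph, Δm = Mm − I acts as an ordinary matrix on functions, and the two properties listed in Remark 1.51 can be checked directly (NumPy; the weight matrix and test function are invented toy data):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
d = W.sum(axis=1)
M = W / d[:, None]                   # averaging operator M_m
L = M - np.eye(3)                    # Delta_m = M_m - I

f = np.array([1.0, -2.0, 0.5])
Lf = L @ f                           # Delta_m f(x) = sum_y (f(y) - f(x)) m_x({y})

nu = d / d.sum()                     # normalized reversible (hence invariant) measure
assert abs(np.sum(Lf * nu)) < 1e-12                                   # (1.6): integral is 0
assert np.sum(np.abs(Lf) * nu) <= 2 * np.sum(np.abs(f) * nu) + 1e-12  # ||Delta_m f||_1 <= 2 ||f||_1
```

The vanishing integral in (1.6) is exactly the invariance of ν: averaging f against the kernel and integrating against ν gives back the integral of f.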

Proposition 1.52 (Integration by Parts Formula) Let [X, B, m, ν] be a reversible random walk space. Then:

∫_X f(x) Δm g(x) dν(x) = −(1/2) ∫_{X×X} ∇f(x, y) ∇g(x, y) d(ν ⊗ mx)(x, y)  (1.7)

for f, g ∈ L^1(X, ν) ∩ L^2(X, ν).

for .f, g ∈ L1 (X, ν) ∩ L2 (X, ν). Proof Since, by the reversibility of .ν with respect to m,  

  f (x)(g(y) − g(x))dmx (y)dν(x) =

.

X

X

f (y)(g(x) − g(y))dmx (y)dν(x) X

X

26

1 Random Walks

we get that 

  f (x) m g(x)dν(x) =

X

1 = 2

f (x)(g(y) − g(x))dmx (y)dν(x) X

 

X

f (x)(g(y) − g(x))dmx (y)dν(x) X

X

  1 + f (x)(g(y) − g(x))dmx (y)dν(x) 2 X X .   1 = f (x)(g(y) − g(x))dmx (y)dν(x) 2 X X   1 + f (y)(g(x) − g(y))dmx (y)dν(x) 2 X X  1 =− ∇f (x, y)∇g(x, y)d(ν ⊗ mx )(x, y). 2 X×X

 
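Formula (1.7) is easy to sanity-check numerically in the finite setting, where both sides are finite sums (NumPy; the graph and the random test functions are our own choices):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
d = W.sum(axis=1)
M = W / d[:, None]
nu = d / d.sum()
L = M - np.eye(3)                    # Delta_m

rng = np.random.default_rng(0)
f, g = rng.standard_normal(3), rng.standard_normal(3)

lhs = np.sum(f * (L @ g) * nu)       # int f Delta_m g dnu
grad_f = f[None, :] - f[:, None]     # nabla f(x, y) = f(y) - f(x)
grad_g = g[None, :] - g[:, None]
rhs = -0.5 * np.sum(grad_f * grad_g * (nu[:, None] * M))   # against nu(dx) m_x(dy)
assert abs(lhs - rhs) < 1e-10        # integration by parts (1.7)
```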

In fact, we may prove, in the same way, the following more general result, which will be useful in Chap. 7.

Lemma 1.53 Let q ≥ 1. If Q ⊂ X × X is a symmetric set (i.e. (x, y) ∈ Q ⇐⇒ (y, x) ∈ Q) and Φ : Q → R is a ν ⊗ mx-a.e. antisymmetric function (i.e. Φ(x, y) = −Φ(y, x) for ν ⊗ mx-a.e. (x, y) ∈ Q) with Φ ∈ L^q(Q, ν ⊗ mx) and u ∈ L^{q′}(X, ν), then

∫_Q Φ(x, y)u(x) d(ν ⊗ mx)(x, y) = −(1/2) ∫_Q Φ(x, y)(u(y) − u(x)) d(ν ⊗ mx)(x, y).

In particular, if Φ ∈ L^1(Q, ν ⊗ mx),

∫_Q Φ(x, y) d(ν ⊗ mx)(x, y) = 0.

We are now able to characterize the m-connectedness of a random walk space in terms of the ergodicity of the Laplace operator. Following Bakry, Gentil and Ledoux [25], we give the following definition.

Definition 1.54 Let [X, B, m, ν] be a random walk space. We say that Δm is ergodic if, for u ∈ D(Δm),

Δm u = 0 ν-a.e. ⇒ u is a constant ν-a.e.

(this constant being 0 if ν is not finite), i.e. every harmonic function in D(Δm) (recall Definition 1.21) is a constant ν-a.e.


Theorem 1.55 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. Then:

[X, B, m, ν] is m-connected ⇔ Δm is ergodic.

Proof (⇐): Let D ∈ B with ν(D) > 0 and recall that, by Corollary 1.31, ν(H^m_D) ≥ ν(D) > 0. Consider the function

u(x) := χ_{H^m_D}(x),  x ∈ X,

and note that, since ν is finite, u ∈ L^2(X, ν). Now, since, by Proposition 1.29, H^m_D is ν-invariant, we have that Δm u = 0 ν-a.e. Thus, by the ergodicity of Δm and recalling that ν(H^m_D) > 0, u = χ_{H^m_D} = 1 ν-a.e. and, therefore, ν(N^m_D) = 0.

(⇒): Suppose now that [X, B, m, ν] is m-connected and let u ∈ L^2(X, ν) be such that u is not ν-a.e. a constant; let us see that Δm u is not ν-a.e. 0. We may find U ∈ B with 0 < ν(U) < 1 such that u(x) < u(y) for every x ∈ U and y ∈ X \ U. Then, since Lm(U, X \ U) > 0,

∫_X ∫_X ∇u(x, y)² dmx(y) dν(x) ≥ ∫_U ∫_{X\U} ∇u(x, y)² dmx(y) dν(x) > 0,

but

∫_X ∫_X ∇u(x, y)² dmx(y) dν(x) = −2 ∫_X u(x) Δm u(x) dν(x),

so Δm u is not ν-a.e. 0. □

This result together with Theorem 1.37 shows that both concepts of ergodicity, the one for the invariant measure and the one for the Laplace operator, are equivalent. Such a relation recovers Proposition 1.22.

1.5 The Nonlocal Boundary, Perimeter and Mean Curvature

In this section we introduce the nonlocal counterparts of the notions of boundary, perimeter and mean curvature. The following notion of nonlocal boundary will play the role of the classical boundary when we consider the nonlocal counterparts of classical equations in Chap. 7, that is, boundary conditions will be imposed on this set.

Definition 1.56 Let [X, B, m, ν] be a random walk space and Ω ∈ B. We define the m-boundary of Ω by

∂m Ω := {x ∈ X \ Ω : mx(Ω) > 0}

and its m-closure as

Ω̄_m := Ω ∪ ∂m Ω.

In Chap. 3 the following notion of nonlocal perimeter will be widely used.

Definition 1.57 Let [X, B, m, ν] be a random walk space and E ∈ B. The m-perimeter of E is defined by

Pm(E) := Lm(E, X \ E) = ∫_E ∫_{X\E} dmx(y) dν(x).

In regard to the interpretation given for the m-interaction between sets (Definition 1.32), this notion of perimeter can be interpreted as measuring the total flux of individuals that cross the “boundary” (in a very weak sense) of a set in one jump.

Lemma 1.58 Let [X, B, m, ν] be a random walk space and let E ∈ B be ν-finite. Then:

Pm(E) = ν(E) − ∫_E ∫_E dmx(y) dν(x).  (1.8)

Furthermore, Pm(E) = Pm(X \ E) and

Pm(E) = (1/2) ∫_X ∫_X |χE(y) − χE(x)| dmx(y) dν(x) = (1/2) ∫_X ∫_X |∇χE(x, y)| dmx(y) dν(x).

Proof Equation (1.8) is straightforward. Now,

Pm(E) − Pm(X \ E) = ∫_E ∫_{X\E} dmx(y) dν(x) − ∫_{X\E} ∫_E dmx(y) dν(x)
  = ∫_E ∫_X dmx(y) dν(x) − ∫_E ∫_E dmx(y) dν(x) − ∫_X ∫_E dmx(y) dν(x) + ∫_E ∫_E dmx(y) dν(x)
  = ∫_E dν(x) − ∫_X ∫_X χE(y) dmx(y) dν(x)
  = ν(E) − ∫_X χE(x) dν(x) = ν(E) − ν(E) = 0,

where the second-to-last equality follows by the invariance of ν with respect to m.

For the last statement, note that

∫_X ∫_X |χE(y) − χE(x)| dmx(y) dν(x) = Pm(E) + Pm(X \ E) = 2Pm(E). □

Example 1.59 Let [R^N, d, m^J, L^N] be the metric random walk space given in Example 1.40. Then:

P_{m^J}(E) = (1/2) ∫_{R^N} ∫_{R^N} |χE(y) − χE(x)| J(x − y) dy dx,

which coincides with the concept of J-perimeter introduced in [148]. Furthermore:

P_{m^{J,Ω}}(E) = (1/2) ∫_Ω ∫_Ω |χE(y) − χE(x)| J(x − y) dy dx.

Note that, in general, P_{m^{J,Ω}}(E) ≠ P_{m^J}(E) (recall the definition of m^{J,Ω} given in Example 1.47). Moreover:

P_{m^{J,Ω}}(E) = L^N(E) − ∫_E ∫_E dm^{J,Ω}_x(y) dx
  = L^N(E) − ∫_E ∫_E J(x − y) dy dx − ∫_E ∫_{R^N\Ω} J(x − z) dz dx

and, therefore,

P_{m^{J,Ω}}(E) = P_{m^J}(E) − ∫_E ∫_{R^N\Ω} J(x − z) dz dx,  ∀ E ⊂ Ω.  (1.9)

Example 1.60 Let [V(G), dG, m^G, νG] be the metric random walk space associated (as in Example 1.41) with a finite weighted discrete graph G. Given A, B ⊂ V(G), Cut(A, B) is defined as

Cut(A, B) := ∑_{x∈A, y∈B} wxy = L_{m^G}(A, B),

and the perimeter of a set E ⊂ V(G) is given by

|∂E| := Cut(E, E^c) = ∑_{x∈E, y∈V\E} wxy.

Consequently,

|∂E| = P_{m^G}(E)  for all E ⊂ V(G).  (1.10)
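Identity (1.10) is a one-line computation in the finite case: the m^G-perimeter of E, computed from the definition L_{m^G}(E, X \ E), equals the total cut weight (NumPy; the 4-vertex weight matrix and the set E are invented toy data):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0, 0.5],
              [1.0, 0.0, 3.0, 0.0],
              [2.0, 3.0, 0.0, 2.0],
              [0.5, 0.0, 2.0, 0.0]])
d = W.sum(axis=1)
M = W / d[:, None]
nu = d                               # unnormalized reversible measure nu_G

E = np.array([True, True, False, False])
# P_m(E) = int_E m_x(X \ E) dnu(x)
perim = np.sum(nu[E] * M[E][:, ~E].sum(axis=1))
cut = W[E][:, ~E].sum()              # |dE| = sum of weights across the cut
assert np.isclose(perim, cut)
```

The degrees cancel, νG({x}) m^G_x({y}) = wxy, which is the whole content of (1.10).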

Example 1.61 Let [V, dH, m^H, νH] be the reversible metric random walk space associated with a weighted hypergraph H (see Example 1.44). The hyperedge boundary ∂A of a vertex subset A ⊂ V is defined as

∂A := {e ∈ E : e ∩ A ≠ ∅, e ∩ (V \ A) ≠ ∅}.

The volume of ∂A is defined by

vol(∂A) := ∑_{e∈∂A} w(e) |e ∩ A| |e ∩ (V \ A)| / δ(e).

For any S ⊂ V, its m^H-perimeter is given by

PH(S) := P_{m^H}(S) = ∫_S ∫_{V\S} dm^H_x(y) dνH(x)
  = (1/vol(V)) ∑_{x∈S} ∑_{y∈V\S} ∑_{e∈E : {x,y}⊂e} w(e)/δ(e)
  = (1/vol(V)) ∑_{x∈S} ∑_{e∈E} w(e) |e ∩ (V \ S)| h(x, e)/δ(e)
  = (1/vol(V)) ∑_{e∈E} w(e) |e ∩ (V \ S)| |e ∩ S| / δ(e).

Then, since |e ∩ (V \ S)| |e ∩ S| = 0 for e ∉ ∂S, we get:

PH(S) = (1/vol(V)) ∑_{e∈∂S} w(e) |e ∩ (V \ S)| |e ∩ S| / δ(e) = vol(∂S)/vol(V).

We now give some properties of the m-perimeter.

Proposition 1.62 Let [X, B, m, ν] be a reversible random walk space and let A, B ∈ B be sets with finite m-perimeter such that ν(A ∩ B) = 0. Then:

Pm(A ∪ B) = Pm(A) + Pm(B) − 2Lm(A, B).

Proof

Pm(A ∪ B) = ∫_{A∪B} ∫_{X\(A∪B)} dmx(y) dν(x)
  = ∫_A ( ∫_{X\A} dmx(y) − ∫_B dmx(y) ) dν(x) + ∫_B ( ∫_{X\B} dmx(y) − ∫_A dmx(y) ) dν(x)
  = Pm(A) − ∫_A ∫_B dmx(y) dν(x) + Pm(B) − ∫_B ∫_A dmx(y) dν(x);

thus, by the reversibility of ν with respect to m,

Pm(A ∪ B) = Pm(A) + Pm(B) − 2 ∫_A ∫_B dmx(y) dν(x). □

Corollary 1.63 Let [X, B, m, ν] be a reversible random walk space and let A, B, C ∈ B be sets with pairwise ν-null intersections. Then,

Pm(A ∪ B ∪ C) = Pm(A ∪ B) + Pm(A ∪ C) + Pm(B ∪ C) − Pm(A) − Pm(B) − Pm(C).

Proposition 1.64 (Submodularity) Let [X, B, m, ν] be a reversible random walk space and let A, B ∈ B. Then:

Pm(A ∪ B) + Pm(A ∩ B) = Pm(A) + Pm(B) − 2Lm(A \ B, B \ A).

Consequently,

Pm(A ∪ B) + Pm(A ∩ B) ≤ Pm(A) + Pm(B).

Proof By Corollary 1.63,

Pm(A ∪ B) = Pm((A \ B) ∪ (B \ A) ∪ (A ∩ B))
  = Pm((A \ B) ∪ (B \ A)) + Pm(A) + Pm(B) − Pm(A \ B) − Pm(B \ A) − Pm(A ∩ B).

Hence:

Pm(A ∪ B) + Pm(A ∩ B) = Pm(A) + Pm(B) + Pm((A \ B) ∪ (B \ A)) − Pm(A \ B) − Pm(B \ A).

Now, by Proposition 1.62,

Pm((A \ B) ∪ (B \ A)) − Pm(A \ B) − Pm(B \ A) = −2Lm(A \ B, B \ A). □

Definition 1.65 Let [X, B, m, ν] be a random walk space and let E ∈ B. For a point x ∈ X, we define the m-mean curvature of ∂E at x as

H^m_∂E(x) := mx(X \ E) − mx(E) = 1 − 2mx(E).  (1.11)

Note that H^m_∂E(x) can be computed for every x ∈ X, not only for points in ∂E. Furthermore, if ν(E) < ∞,

∫_E H^m_∂E(x) dν(x) = ∫_E ( 1 − 2 ∫_E dmx(y) ) dν(x) = ν(E) − 2 ∫_E ∫_E dmx(y) dν(x);  (1.12)

hence, having in mind (1.8), we obtain that

∫_E H^m_∂E(x) dν(x) = 2Pm(E) − ν(E).  (1.13)

Observe also that

H^m_∂E(x) = −H^m_∂(X\E)(x).  (1.14)
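Identity (1.13) is likewise immediate to verify in the finite graph setting (NumPy; toy weights and set as in the earlier sketches):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
d = W.sum(axis=1)
M = W / d[:, None]
nu = d                                   # reversible measure

E = np.array([True, False, True])
H = 1.0 - 2.0 * M[:, E].sum(axis=1)      # H^m_dE(x) = 1 - 2 m_x(E), defined for every x
lhs = np.sum(H[E] * nu[E])               # int_E H^m_dE dnu
perim = np.sum(nu[E] * M[E][:, ~E].sum(axis=1))   # P_m(E)
assert np.isclose(lhs, 2 * perim - nu[E].sum())   # identity (1.13)
```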

Remark 1.66 Let [Ω, B_Ω, m^Ω, ν⌞Ω] be the random walk space given in Example 1.47. Then:

H^{m^Ω}_∂E(x) = mx(Ω \ E) + mx(X \ Ω)δx(Ω \ E) − mx(E) − mx(X \ Ω)δx(E);

thus,

H^{m^Ω}_∂E(x) = mx(Ω \ E) − mx(E) + mx(X \ Ω)  if x ∈ Ω \ E,
H^{m^Ω}_∂E(x) = mx(Ω \ E) − mx(E) − mx(X \ Ω)  if x ∈ E.

In particular, for the random walk space [Ω, d, m^{J,Ω}, L^N] (see also Example 1.47), we get:

H^{m^{J,Ω}}_∂E(x) = ∫_{Ω\E} J(x − y)dy − ∫_E J(x − y)dy + ∫_{R^N\Ω} J(x − y)dy  if x ∈ Ω \ E,
H^{m^{J,Ω}}_∂E(x) = ∫_{Ω\E} J(x − y)dy − ∫_E J(x − y)dy − ∫_{R^N\Ω} J(x − y)dy  if x ∈ E.

Finally, in Theorem 1.68 we give another characterization of the ergodicity of Δm in terms of geometric properties.

Lemma 1.67 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. Then, for D ∈ B, the following statements are equivalent:

(i) D is ν-invariant.
(ii) Δm χD = 0 ν-a.e.
(iii) Pm(D) = 0.
(iv) ∫_D H^m_∂D(x) dν(x) = −ν(D).

Proof (i) ⇔ (ii): By definition of a ν-invariant set and of the Laplace operator.

(ii) ⇒ (iii): By hypothesis, mx(D) = Mm χD(x) = χD(x) for ν-a.e. x ∈ X; thus, in particular, mx(X \ D) = 0 for ν-a.e. x ∈ D and, therefore,

Pm(D) = Lm(D, X \ D) = 0.

(iii) ⇒ (ii): Suppose that Pm(D) = 0. Then, by (1.8),

ν(D) = ∫_D ∫_D dmx(y) dν(x) = ∫_D mx(D) dν(x);

thus,

mx(D) = 1 for ν-a.e. x ∈ D.

Moreover, by the invariance of ν with respect to m,

ν(D) = ∫_X mx(D) dν(x) = ∫_D mx(D) dν(x) + ∫_{X\D} mx(D) dν(x) = ν(D) + ∫_{X\D} mx(D) dν(x);

thus,

mx(D) = 0 for ν-a.e. x ∈ X \ D.

Consequently,

Mm χD(x) = mx(D) = χD(x) for ν-a.e. x ∈ X,

as desired.

(iv) ⇔ (iii): By (1.13),

∫_D H^m_∂D(x) dν(x) = 2Pm(D) − ν(D);

thus, Pm(D) = 0 if, and only if, ∫_D H^m_∂D(x) dν(x) = −ν(D). □

Theorem 1.68 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. The following statements are equivalent:

(i) Δm is ergodic.
(ii) For every D ∈ B, Δm χD = 0 ν-a.e. ⇒ ν(D) = 0 or ν(D) = 1.
(iii) For every D ∈ B, 0 < ν(D) < 1 ⇒ Pm(D) > 0.
(iv) For every D ∈ B,

0 < ν(D) < 1 ⇒ (1/ν(D)) ∫_D H^m_∂D(x) dν(x) > −1.

Proof (i) ⇒ (ii): Straightforward.

(ii) ⇒ (iii): If Pm(D) = 0, then, by Lemma 1.67, Δm χD = 0 ν-a.e.; thus, (ii) implies that ν(D) = 0 or ν(D) = 1.

(iii) ⇒ (ii): Let D ∈ B. If Δm χD = 0 ν-a.e., then, by Lemma 1.67, Pm(D) = 0; thus, (iii) implies that ν(D) = 0 or ν(D) = 1.

(ii) ⇒ (i): Suppose that Δm is not ergodic. Then, by Theorem 1.55, [X, B, m, ν] is not m-connected, so there exists D ∈ B with ν(D) > 0 such that 0 < ν(N^m_D) < 1 (recall Corollary 1.31). However, by Proposition 1.29, Δm χ_{N^m_D}(x) = 0 and, by hypothesis, this implies that ν(N^m_D) = 0 or ν(N^m_D) = 1, which is a contradiction.

(iii) ⇔ (iv): This equivalence follows by (1.13) and Lemma 1.67. □

1.6 Poincaré-Type Inequalities

Poincaré-type inequalities like those defined in Definition 1.71 and Definition 1.85 (see also Remark 1.86) will play a very important role in this book. Assuming that a Poincaré-type inequality is satisfied, we will be able to obtain results on the rates of convergence of both the heat flow and the total variation flow. Moreover, we will also assume that an inequality of this type holds in order to prove existence of solutions to some of the problems in Chap. 7. We first introduce some notation.

Definition 1.69 Let (X, B, ν) be a probability space. We denote the mean value of f ∈ L^1(X, ν) (or the expected value of f) with respect to ν by

ν(f) := Eν(f) = ∫_X f(x) dν(x).

Moreover, given f ∈ L^2(X, ν), we denote its variance with respect to ν by

Varν(f) := ∫_X (f(x) − ν(f))² dν(x) = (1/2) ∫_{X×X} (f(x) − f(y))² dν(y) dν(x).

In general, if ν(X) < ∞, we also denote the mean of a function f ∈ L^1(X, ν) by ν(f), i.e.

ν(f) := (1/ν(X)) ∫_X f(x) dν(x).

We now introduce the nonlocal counterpart of the Dirichlet energy.

Definition 1.70 Let [X, B, m, ν] be a random walk space. We define the energy functional Hm : L^2(X, ν) → [0, +∞] by

Hm(f) := (1/4) ∫_{X×X} (f(x) − f(y))² dmx(y) dν(x)  if f ∈ L^1(X, ν) ∩ L^2(X, ν),
Hm(f) := +∞  otherwise,

and denote D(Hm) := L^1(X, ν) ∩ L^2(X, ν).

Note that, by Proposition 1.52, if ν is reversible with respect to m, then

Hm(f) = −(1/2) ∫_X f(x) Δm f(x) dν(x)  for every f ∈ D(Hm).  (1.15)

1.6.1 Global Poincaré-Type Inequalities

Definition 1.71 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. We say that [X, B, m, ν] satisfies a Poincaré inequality if there exists λ > 0 such that

λ Varν(f) ≤ Hm(f)  for all f ∈ L^2(X, ν),

or, equivalently,

λ ‖f‖²_{L²(X,ν)} ≤ Hm(f)  for all f ∈ L^2(X, ν) with ν(f) = 0.

More generally, we say that [X, B, m, ν] satisfies a (p, q)-Poincaré inequality (p, q ∈ [1, +∞[) if there exists a constant C > 0 such that, for any f ∈ L^q(X, ν):

‖f‖_{L^p(X,ν)} ≤ C ( ( ∫_X ∫_X |f(y) − f(x)|^q dmx(y) dν(x) )^{1/q} + | ∫_X f dν | ),

or, equivalently, there exists a C > 0 such that

‖f‖_{L^p(X,ν)} ≤ C ‖∇f‖_{L^q(X×X, d(ν⊗mx))}  for all f ∈ L^q(X, ν) with ν(f) = 0.

For simplicity, when [X, B, m, ν] satisfies a (p, 1)-Poincaré inequality, we say that [X, B, m, ν] satisfies a p-Poincaré inequality.

The spectral gap of the Laplace operator is closely related to the Poincaré inequality.

Definition 1.72 Let [X, B, m, ν] be a random walk space and suppose that ν is a probability measure. The spectral gap of −Δm is defined as

gap(−Δm) := inf { 2Hm(f)/Varν(f) : f ∈ D(Hm), Varν(f) ≠ 0 }
  = inf { 2Hm(f)/‖f‖²_{L²(X,ν)} : f ∈ D(Hm), ‖f‖_{L²(X,ν)} ≠ 0, ∫_X f dν = 0 }.  (1.16)

Observe that, as mentioned in Remark 1.51, since ν is a probability measure, D(Hm) = L^2(X, ν).
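For a finite reversible chain, the spectral gap in (1.16) is an ordinary matrix eigenvalue: conjugating −Δm = I − Mm by D^{1/2}, where D = diag(dx), produces a symmetric matrix whose smallest eigenvalue is 0 (constants) and whose second-smallest eigenvalue is gap(−Δm). A sketch (NumPy; the toy weight matrix is our own choice):

```python
import numpy as np

W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 3.0, 0.0]])
d = W.sum(axis=1)
M = W / d[:, None]

# I - M is self-adjoint in L^2(nu); symmetrize: S = D^{1/2} (I - M) D^{-1/2}
Dh = np.diag(np.sqrt(d))
Dh_inv = np.diag(1.0 / np.sqrt(d))
S = Dh @ (np.eye(3) - M) @ Dh_inv          # = I - D^{-1/2} W D^{-1/2}, symmetric
eigs = np.sort(np.linalg.eigvalsh(S))

assert abs(eigs[0]) < 1e-10    # 0 is always an eigenvalue (eigenvector D^{1/2} 1)
gap = eigs[1]                  # gap(-Delta_m): infimum of the Rayleigh quotient on H(X, nu)
assert 0 < gap <= 2.0          # positive because the graph is connected; spectrum lies in [0, 2]
```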

Remark 1.73 If gap(−Δm) > 0, then [X, B, m, ν] satisfies a Poincaré inequality with λ = (1/2)gap(−Δm):

(1/2) gap(−Δm) Varν(f) ≤ Hm(f)  ∀ f ∈ L^2(X, ν),

(1/2)gap(−Δm) being the best constant in the Poincaré inequality. Therefore, we are interested in studying when the spectral gap of −Δm is positive.

Remark 1.74 Suppose that X is a Polish metric space and that ν is reversible with respect to m. Sufficient conditions for the existence of a Poincaré inequality can be found in, for example, [166, Corollary 31] or [187, Theorem 1]. In [166] the positivity of the coarse Ricci curvature (see Theorem 2.28) is assumed, while in [187] the hypothesis is the following Foster-Lyapunov condition:

Mm V ≤ (1 − λ)V + b χK,   Mm 1A(x) ≥ α μ(A) χK(x)  ∀ A ∈ B,

for a positive function V : R^d → [1, ∞), numbers b < ∞, α, λ > 0, a set K ⊂ X, and a probability measure μ. Moreover, in Theorem 2.32, in relation to another notion of Ricci curvature bounded from below, we find further sufficient conditions for the existence of a Poincaré inequality.

If (X, d, μ) is a length space, μ is doubling (i.e. if there exists a constant C > 0 such that 0 < μ(B(x, 2r)) ≤ C μ(B(x, r)) < ∞ for all x ∈ X and r > 0) and [X, d, m^{μ,ε}, μ] (recall Example 1.45) is a metric random walk space, sufficient conditions for the existence of a “modified” Poincaré inequality can be found in Proposition 5.15.

Definition 1.75 Let (X, B, ν) be a probability space. We denote by H(X, ν) the subspace of L^2(X, ν) consisting of the functions which are orthogonal to the constants, i.e.

H(X, ν) := { f ∈ L^2(X, ν) : ν(f) = 0 }.

Remark 1.76 Let [X, B, m, ν] be a reversible random walk space and suppose that ν is a probability measure. Since the operator −Δm : H(X, ν) → H(X, ν) is self-adjoint and nonnegative, and ‖Δm‖ ≤ 2, by [52, Proposition 6.9], we have that the spectrum σ(−Δm) of −Δm in H(X, ν) satisfies

σ(−Δm) ⊂ [α, β] ⊂ [0, 2],

where

α := inf { ⟨−Δm u, u⟩ : u ∈ H(X, ν), ‖u‖_{L²(X,ν)} = 1 } ∈ σ(−Δm)

and

β := sup { ⟨−Δm u, u⟩ : u ∈ H(X, ν), ‖u‖_{L²(X,ν)} = 1 } ∈ σ(−Δm).

Let us see that gap(−Δm) = α. By definition we have that gap(−Δm) ≤ α (recall that 2Hm(u) = ⟨−Δm u, u⟩). Now, for the opposite inequality, let f ∈ L^2(X, ν) with Varν(f) ≠ 0. Then, u := f − ν(f) ≠ 0 belongs to H(X, ν), so

α ≤ 2Hm( u/‖u‖_{L²(X,ν)} ) = 2Hm(u)/‖u‖²_{L²(X,ν)} = 2Hm(f)/Varν(f),

and, therefore, gap(−Δm) ≥ α. As a consequence, we obtain that

gap(−Δm) > 0 ⇔ 0 ∉ σ(−Δm).

With this remark at hand, we are able to obtain the following result.

Proposition 1.77 Let [X, B, m, ν] be an m-connected reversible random walk space and suppose that ν is a probability measure. If −Δm is the sum of an invertible and a compact operator in H(X, ν), then

gap(−Δm) > 0.

Consequently, if the averaging operator Mm is compact in H(X, ν), then gap(−Δm) > 0.

Proof If we assume that −Δm is the sum of an invertible and a compact operator in H(X, ν), then, if 0 ∈ σ(−Δm), by Fredholm’s alternative theorem, there exists u ∈ H(X, ν), u ≠ 0, such that −Δm u = (I − Mm)u = 0. Then, since [X, d, m, ν] is m-connected, by Theorem 1.55, we obtain that Δm is ergodic, so u is ν-a.e. a constant. Therefore, since u ∈ H(X, ν), we must have u = 0 ν-a.e., which is a contradiction. □

Example 1.78 If G = (V(G), E(G)) is a finite connected weighted discrete graph, then, obviously, M_{m^G} is compact and, consequently, gap(−Δ_{m^G}) > 0. In this situation, it is well known that, if #(V(G)) = N, the spectrum of −Δ_{m^G} is 0 < λ1 ≤ λ2 ≤ ··· ≤ λ_{N−1} and 0 < λ1 = gap(−Δ_{m^G}).

In fact, we can easily prove that [V(G), dG, m^G, νG] satisfies a (p, q)-Poincaré inequality for any p, q ∈ [1, ∞[. Indeed, let p, q ∈ [1, ∞[ and suppose that a (p, q)-Poincaré inequality does not hold. Then, there exists a sequence (un)_{n∈N} ⊂ L^p(V(G), νG) with ‖un‖_{L^p(V(G),νG)} = 1 and ∫_{V(G)} un(x) dνG(x) = 0 ∀ n ∈ N, such that

lim_{n→∞} ∑_{x∈V(G)} ∑_{y∼x} wxy |un(x) − un(y)|^q = 0.

Hence:

lim_{n→∞} |un(x) − un(y)| = 0 for every x, y ∈ V(G), x ∼ y.  (1.17)

Moreover, since ‖un‖_{L^p(V(G),νG)} = 1, we have that, up to a subsequence,

lim_{n→∞} un(x) = u(x) ∈ R for every x ∈ V(G).

However, since the graph is connected, we get, by (1.17), u(x) = u(y) for every x, y ∈ V(G), i.e. there exists λ ∈ R such that u(x) = λ for every x ∈ V(G); thus, un → λ in L^p(V(G), νG). Therefore, since ∫_{V(G)} un(x) dνG(x) = 0, we get that λ = 0, which is in contradiction with ‖un‖_{L^p(V(G),νG)} = 1.

1.6 Poincaré-Type Inequalities

39

Example 1.79 Let $\Omega$ be a bounded domain in $\mathbb{R}^N$ and let $J$ be a kernel such that $J \in C(\mathbb{R}^N,\mathbb{R})$ is nonnegative and radially symmetric, with $J(0) > 0$ and $\int_{\mathbb{R}^N} J(x)\,dx = 1$. Consider the reversible metric random walk space $[\Omega, \mathcal{B}_\Omega, m^{J,\Omega}, \mathcal{L}^N]$ as defined in Example 1.47 (recall also Example 1.40). Then, $-\Delta_{m^{J,\Omega}}$ is the sum of an invertible and a compact operator. Indeed,

$$-\Delta_{m^{J,\Omega}} f(x) = \left(\int_\Omega J(x-y)\,dy\right) f(x) - \int_\Omega f(y)\,J(x-y)\,dy, \quad x \in \Omega,$$

where $f \mapsto \left(\int_\Omega J(\cdot-y)\,dy\right) f(\cdot)$ is an invertible operator in $H(\Omega,\mathcal{L}^N)$ ($J$ is continuous, $J(0) > 0$ and $\Omega$ is a domain; thus, $\int_\Omega J(x-y)\,dy > 0$ for every $x \in \Omega$) and $f \mapsto \int_\Omega f(y)\,J(\cdot-y)\,dy$ is a compact operator in $H(\Omega,\mathcal{L}^N)$ (this follows by the Arzelà-Ascoli theorem). Hence, in this case, $\mathrm{gap}(-\Delta_{m^{J,\Omega}})$ is equal to (see also [21])

$$\inf\left\{ \frac{\frac12 \int_\Omega \int_\Omega J(x-y)\,(u(y)-u(x))^2\,dx\,dy}{\int_\Omega u(x)^2\,dx} \,:\, u \in L^2(\Omega),\ \|u\|_{L^2(\Omega)} > 0,\ \int_\Omega u\,dx = 0 \right\} > 0.$$

Let us point out that the condition $J(0) > 0$ is necessary since, otherwise, the function $\int_\Omega J(\cdot-y)\,dy$ may be $0$ on a set of positive measure (see [21, Remark 6.20]).
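The Rayleigh quotient above can be estimated numerically by discretizing the operator on a grid; a minimal sketch (the choice $\Omega = (0,1)$, the Epanechnikov kernel $J(x) = \frac34(1-x^2)^+$ and the grid size are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Discretize -Delta_{m^{J,Omega}} on Omega = (0,1) for the illustrative kernel
# J(x) = (3/4)(1 - x^2)^+ : continuous, radially symmetric, nonnegative,
# J(0) > 0 and integrating to 1 over R.
N = 200
h = 1.0 / N
x = (np.arange(N) + 0.5) * h              # midpoints of a uniform grid on (0,1)

J = lambda r: 0.75 * np.maximum(1.0 - r ** 2, 0.0)
K = J(x[:, None] - x[None, :]) * h        # quadrature weights J(x_i - x_j) h

# A f = (int_Omega J(x - y) dy) f(x) - int_Omega J(x - y) f(y) dy
A = np.diag(K.sum(axis=1)) - K            # symmetric, since J is even

ev = np.sort(np.linalg.eigvalsh(A))
# constants lie in the kernel (rows of A sum to zero); the next eigenvalue
# approximates gap(-Delta_{m^{J,Omega}}) > 0
print(ev[0], ev[1])
```

The second eigenvalue approximates the infimum of the Rayleigh quotient over mean-zero functions, which Example 1.79 shows to be strictly positive.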



Another result in which we provide sufficient conditions for the positivity of $\mathrm{gap}(-\Delta_m)$ is the following. In the proof we use that, as a consequence of a result by Miclo [156], $\mathrm{gap}(-\Delta_m) > 0$ if $m$ is ergodic and $M_m$ is hyperbounded, that is, if there exists $q > 2$ such that $M_m$ is bounded from $L^2(X,\nu)$ to $L^q(X,\nu)$.

Proposition 1.80 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space and suppose that $\nu$ is a probability measure. Assume that $m$ is ergodic and that $m_x \ll \nu$ for every $x \in X$. If there exist $p > 2$ and a constant $K$ such that

$$\int_X \left\| \frac{dm_x}{d\nu} \right\|_{L^2(X,\nu)}^p d\nu(x) \le K < \infty,$$

then $\mathrm{gap}(-\Delta_m) > 0$.

Proof Let $f_x := \frac{dm_x}{d\nu} \in L^1(X,\nu)$, $x \in X$. Let us see that $M_m$ is hyperbounded. Given $u \in L^2(X,\nu)$, by the Cauchy-Schwarz inequality,

$$\|M_m u\|_p^p = \int_X |M_m u(x)|^p\,d\nu(x) = \int_X \left| \int_X u(y)\,dm_x(y)\right|^p d\nu(x) = \int_X \left| \int_X u(y)\,f_x(y)\,d\nu(y) \right|^p d\nu(x) \le \|u\|_{L^2(X,\nu)}^p \int_X \|f_x\|_{L^2(X,\nu)}^p\,d\nu(x),$$

hence

$$\|M_m u\|_p \le K^{\frac1p}\,\|u\|_{L^2(X,\nu)}.$$

Therefore, $M_m$ is hyperbounded as desired. $\square$

In the next examples, we give random walk spaces for which a Poincaré inequality does not hold.

Example 1.81 Let $[V(G), d_G, m^G, \nu_G]$ be the metric random walk space associated with the locally finite weighted discrete graph $G$ with vertex set $V(G) := \{x_3, x_4, x_5, \ldots, x_n, \ldots\}$ and weights

$$w_{x_{3n},x_{3n+1}} = \frac{1}{n^3}, \quad w_{x_{3n+1},x_{3n+2}} = \frac{1}{n^2}, \quad w_{x_{3n+2},x_{3n+3}} = \frac{1}{n^3}, \quad \text{for } n \ge 1,$$

and $w_{x_i,x_j} = 0$ otherwise (recall Example 1.41).

(i) Let

$$f_n(x) = \begin{cases} n & \text{if } x = x_{3n+1}, x_{3n+2}, \\ 0 & \text{else.} \end{cases}$$

Note that $\nu_G(V(G)) < +\infty$ (we avoid its normalization to a probability measure for simplicity). Now,

$$4H_m(f_n) = \int_{V(G)}\int_{V(G)} (f_n(x)-f_n(y))^2\,dm^G_x(y)\,d\nu_G(x)$$
$$= d_{x_{3n}} \int_{V(G)} (f_n(x_{3n}) - f_n(y))^2\,dm^G_{x_{3n}}(y) + d_{x_{3n+1}} \int_{V(G)} (f_n(x_{3n+1}) - f_n(y))^2\,dm^G_{x_{3n+1}}(y)$$
$$+\, d_{x_{3n+2}} \int_{V(G)} (f_n(x_{3n+2}) - f_n(y))^2\,dm^G_{x_{3n+2}}(y) + d_{x_{3n+3}} \int_{V(G)} (f_n(x_{3n+3}) - f_n(y))^2\,dm^G_{x_{3n+3}}(y)$$
$$= d_{x_{3n}}\, n^2\, \frac{1/n^3}{d_{x_{3n}}} + d_{x_{3n+1}}\, n^2\, \frac{1/n^3}{d_{x_{3n+1}}} + d_{x_{3n+2}}\, n^2\, \frac{1/n^3}{d_{x_{3n+2}}} + d_{x_{3n+3}}\, n^2\, \frac{1/n^3}{d_{x_{3n+3}}} = \frac{4}{n}.$$

However, we have

$$\int_{V(G)} f_n(x)\,d\nu_G(x) = n\,(d_{x_{3n+1}} + d_{x_{3n+2}}) = 2n\left(\frac{1}{n^2} + \frac{1}{n^3}\right) = \frac{2}{n}\left(1 + \frac{1}{n}\right),$$

thus

$$\nu_G(f_n) = \frac{\frac{2}{n}\left(1+\frac{1}{n}\right)}{\nu_G(V(G))} = O\!\left(\frac{1}{n}\right),$$

where we use the notation

$$\varphi(n) = O(\psi(n)) \iff \limsup_{n\to\infty}\left|\frac{\varphi(n)}{\psi(n)}\right| = C \neq 0.$$

Therefore,

$$(f_n(x) - \nu_G(f_n))^2 = \begin{cases} O(n^2) & \text{if } x = x_{3n+1}, x_{3n+2}, \\[2pt] O\!\left(\dfrac{1}{n^2}\right) & \text{otherwise.} \end{cases}$$

Finally,

$$\mathrm{Var}_{\nu_G}(f_n) = \int_{V(G)} (f_n(x)-\nu_G(f_n))^2\,d\nu_G(x) = O\!\left(\frac{1}{n^2}\right)\sum_{x \neq x_{3n+1},x_{3n+2}} d_x + O(n^2)\,(d_{x_{3n+1}} + d_{x_{3n+2}})$$
$$= O\!\left(\frac{1}{n^2}\right) + 2\,O(n^2)\left(\frac{1}{n^2}+\frac{1}{n^3}\right) = O(1).$$

Consequently, [V (G), dG , mG , νG ] does not satisfy a Poincaré inequality.
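The computations in Example 1.81(i) can be reproduced numerically on a truncated version of the graph; a minimal sketch (the truncation level `NMAX` is an implementation choice, not part of the example):

```python
# Example 1.81(i): graph with vertices x_3, x_4, ... and weights
# w(x_{3n}, x_{3n+1}) = 1/n^3, w(x_{3n+1}, x_{3n+2}) = 1/n^2,
# w(x_{3n+2}, x_{3n+3}) = 1/n^3, truncated at a large index.
NMAX = 30000

def edges():
    n = 1
    while 3 * n + 3 <= NMAX:
        yield (3 * n, 3 * n + 1, 1.0 / n ** 3)
        yield (3 * n + 1, 3 * n + 2, 1.0 / n ** 2)
        yield (3 * n + 2, 3 * n + 3, 1.0 / n ** 3)
        n += 1

deg = {}
for a, b, w in edges():
    deg[a] = deg.get(a, 0.0) + w
    deg[b] = deg.get(b, 0.0) + w
nu_total = sum(deg.values())            # nu_G(V(G)) < +infinity

out = {}
for n in (10, 100):
    f = {3 * n + 1: float(n), 3 * n + 2: float(n)}
    # 4 H_m(f_n) = sum over ordered pairs (x, y) of w_xy (f_n(x) - f_n(y))^2
    energy = sum(2.0 * w * (f.get(a, 0.0) - f.get(b, 0.0)) ** 2
                 for a, b, w in edges())
    mean = sum(deg[i] * f.get(i, 0.0) for i in deg) / nu_total
    var = sum(deg[i] * (f.get(i, 0.0) - mean) ** 2 for i in deg)
    out[n] = (energy, var)
    print(n, energy, var)               # energy = 4/n -> 0, var stays ~ 2
```

The energy $4H_m(f_n) = 4/n$ vanishes while the variance stays bounded away from zero, which is exactly why no Poincaré inequality can hold.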


(ii) Let

$$f_n(x) := \begin{cases} n^2 & \text{if } x = x_{3n+1}, x_{3n+2}, \\ 0 & \text{else.} \end{cases}$$

With similar computations (see also [151, Example 4.7]), we get that

$$\int_{V(G)}\int_{V(G)} |f_n(x)-f_n(y)|\,dm^G_x(y)\,d\nu_G(x) = \frac{4}{n}$$

and

$$\int_{V(G)} |f_n(x) - \nu_G(f_n)|\,d\nu_G(x) = O(1).$$

Therefore, $[V(G), d_G, m^G, \nu_G]$ does not satisfy a 1-Poincaré inequality.

Example 1.82 Consider the metric random walk space $[\mathbb{R}, d, m^J, \mathcal{L}^1]$ (recall Example 1.40), where $d$ is the Euclidean distance and $J = \frac12\chi_{[-1,1]}$. Define, for $n \in \mathbb{N}$,

$$u_n := \frac{1}{2^{n+1}}\,\chi_{[2^n,\,2^{n+1}]} - \frac{1}{2^{n+1}}\,\chi_{[-2^{n+1},\,-2^n]}.$$

Then, $\|u_n\|_1 = 1$, $\int_{\mathbb{R}} u_n(x)\,dx = 0$ and it is easy to see that, for $n$ large enough,

$$\frac12 \int_{\mathbb{R}}\int_{\mathbb{R}} |u_n(y)-u_n(x)|\,dm^J_x(y)\,dx = \frac{1}{2^{n+1}}.$$

Therefore, $[\mathbb{R}, d, m^J, \mathcal{L}^1]$ does not satisfy a 1-Poincaré inequality.

Let us now see that, if $\mathrm{gap}(-\Delta_m) > 0$, then $m$ is ergodic.

Proposition 1.83 Let $[X,\mathcal{B},m,\nu]$ be a random walk space and assume that $\nu$ is a probability measure. If $[X,\mathcal{B},m,\nu]$ satisfies a Poincaré inequality, then $m$ is ergodic (i.e. $[X,\mathcal{B},m,\nu]$ is m-connected).

Proof Let $f \in D(\Delta_m)$ be such that $\Delta_m f = 0$ $\nu$-a.e. Then,

$$H_m(f) = -\frac12 \int_X f(x)\,\Delta_m f(x)\,d\nu(x) = 0$$

and, therefore, if $[X,\mathcal{B},m,\nu]$ satisfies a Poincaré inequality,

$$\mathrm{Var}_\nu(f) = \int_X (f(x)-\nu(f))^2\,d\nu(x) = 0,$$


thus $f$ is $\nu$-a.e. equal to a constant:

$$f(x) = \int_X f\,d\nu \quad \text{for } \nu\text{-a.e. } x \in X. \qquad \square$$

Example 1.81 shows that the reverse implication does not hold in general.

Finally, we give the following result, which may aid in finding lower bounds for $\mathrm{gap}(-\Delta_m)$.

Theorem 1.84 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space such that $\nu$ is a probability measure. Assume that $m$ is ergodic. Then,

$$\mathrm{gap}(-\Delta_m) = \sup\left\{\lambda \ge 0 \,:\, \lambda H_m(f) \le \frac12 \int_X (-\Delta_m f)^2\,d\nu \ \ \forall f \in L^2(X,\nu)\right\}.$$

Proof By Remark 1.76 we know that $\mathrm{gap}(-\Delta_m) = \alpha$, where

$$\alpha := \inf\left\{\langle -\Delta_m u, u\rangle : u \in H(X,\nu),\ \|u\|_{L^2(X,\nu)} = 1\right\} \in \sigma(-\Delta_m).$$

Let also, as in that remark,

$$\beta := \sup\left\{\langle -\Delta_m u, u\rangle : u \in H(X,\nu),\ \|u\|_{L^2(X,\nu)} = 1\right\} \in \sigma(-\Delta_m).$$

Set

$$A := \sup\left\{\lambda \ge 0 \,:\, \lambda H_m(f) \le \frac12 \int_X (-\Delta_m f)^2\,d\nu \ \ \forall f \in L^2(X,\nu)\right\}.$$

Let us see that $\alpha \le A$. Let $(P_\lambda)_{\lambda\ge0}$ be the spectral projection of the self-adjoint and positive operator $-\Delta_m : H(X,\nu) \to H(X,\nu)$. By the spectral theorem [172, Theorem VIII.6], we have, for any $f \in H(X,\nu)$,

$$2H_m(f) = \langle -\Delta_m f, f\rangle = \int_\alpha^\beta \lambda\, d\langle P_\lambda f, f\rangle$$

and

$$\int_X (-\Delta_m f)^2\,d\nu = \langle -\Delta_m f, -\Delta_m f\rangle = \int_\alpha^\beta \lambda^2\, d\langle P_\lambda f, f\rangle.$$

Hence, for any $f \in H(X,\nu)$,

$$\int_X (-\Delta_m f)^2\,d\nu \ge \alpha \int_\alpha^\beta \lambda\, d\langle P_\lambda f, f\rangle = \alpha\,2H_m(f),$$

and we get $\alpha \le A$ (note that, for any $f \in L^2(X,\nu)$, we may take $g := f - \nu(f) \in H(X,\nu)$ so that $\Delta_m g = \Delta_m f$).

Finally, let us see that $A \le \alpha$. Since $\alpha \in \sigma(-\Delta_m)$, given $\varepsilon > 0$, there exists $0 \neq f \in \mathrm{Range}(P_{\alpha+\varepsilon})$ and, consequently, $P_\lambda f = f$ for $\lambda \ge \alpha + \varepsilon$. Then, since $m$ is ergodic, $-\Delta_m f \neq 0$ ($0 \neq f \in H(X,\nu)$ is not $\nu$-a.e. equal to a constant), thus

$$0 < \int_X (-\Delta_m f)^2\,d\nu = \int_\alpha^{\alpha+\varepsilon} \lambda^2\, d\langle P_\lambda f, f\rangle \le (\alpha+\varepsilon)\int_\alpha^{\alpha+\varepsilon} \lambda\, d\langle P_\lambda f, f\rangle = (\alpha+\varepsilon)\,2H_m(f) < (\alpha+2\varepsilon)\,2H_m(f).$$

This implies that $\alpha + 2\varepsilon$ does not belong to the set

$$\left\{\lambda \ge 0 \,:\, \lambda H_m(f) \le \frac12 \int_X (-\Delta_m f)^2\,d\nu \ \ \forall f \in L^2(X,\nu)\right\},$$

thus $A < \alpha + 2\varepsilon$. Therefore, since $\varepsilon > 0$ was arbitrary, we have $A \le \alpha$. $\square$
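On a finite graph, the characterization of Theorem 1.84 can be verified against a direct eigenvalue computation; a minimal sketch (the random weighted complete graph and the sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# random symmetric weights => reversible random walk on a 6-vertex complete graph
N = 6
w = rng.uniform(0.1, 1.0, size=(N, N))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)

d = w.sum(axis=1)                       # the reversible measure nu
P = w / d[:, None]
L = np.eye(N) - P                       # -Delta_m

# spectrum via the symmetrization D^{1/2} L D^{-1/2}
S = np.eye(N) - np.sqrt(d)[:, None] * P / np.sqrt(d)[None, :]
ev, V = np.linalg.eigh(S)
gap = ev[1]

# Theorem 1.84 reads: gap is the infimum, over f with Delta_m f != 0, of
#   int (-Delta_m f)^2 dnu / (2 H_m(f))  =  <Lf, Lf>_nu / <Lf, f>_nu
def ratio(f):
    Lf = L @ f
    return (d * Lf * Lf).sum() / (d * Lf * f).sum()

f1 = V[:, 1] / np.sqrt(d)               # eigenfunction for the gap eigenvalue
assert abs(ratio(f1) - gap) < 1e-9      # the infimum is attained there
assert all(ratio(rng.standard_normal(N)) >= gap - 1e-9 for _ in range(500))
print(gap)
```

Expanding $f$ in eigenfunctions shows the quotient is a $\lambda$-weighted average of nonzero eigenvalues, hence always $\ge$ the gap, with equality at the gap eigenfunction.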

1.6.2 Poincaré-Type Inequalities on Subsets

Let us now consider Poincaré-type inequalities on subsets.

Definition 1.85 Let $[X,\mathcal{B},m,\nu]$ be a random walk space and let $A, B \in \mathcal{B}$ be disjoint sets such that $\nu(A) > 0$. Let $Q := ((A\cup B)\times(A\cup B))\setminus(B\times B)$. We say that $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(p,q)$-Poincaré-type inequality ($p,q \in [1,+\infty)$) on $(A,B)$ if, given $0 < l \le \nu(A\cup B)$, there exists a constant $\Lambda > 0$ such that, for any $u \in L^q(A\cup B,\nu)$ and any $Z \in \mathcal{B}_{A\cup B}$ with $\nu(Z) \ge l$,

$$\|u\|_{L^p(A\cup B,\nu)} \le \Lambda\left(\left(\int_Q |u(y)-u(x)|^q\,dm_x(y)\,d\nu(x)\right)^{\frac1q} + \left|\int_Z u\,d\nu\right|\right).$$

Remark 1.86 These notations allow us to cover many situations. For example:

(i) If $A = X$, $B = \emptyset$ and $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(2,2)$-Poincaré-type inequality on $(X,\emptyset)$, then $[X,\mathcal{B},m,\nu]$ satisfies a Poincaré inequality as defined in Definition 1.71.

(ii) Let $\Omega \in \mathcal{B}$. If $A := \Omega$, $B := \partial_m\Omega$ and we assume that a $(p,p)$-Poincaré-type inequality on $(A,B)$ holds, then the inequality takes the following form:

$$\|u\|_{L^p(\Omega_m,\nu)} \le \Lambda\left(\left(\int_{(\Omega_m\times\Omega_m)\setminus(\partial_m\Omega\times\partial_m\Omega)} |u(y)-u(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + \left|\int_Z u\,d\nu\right|\right),$$

which will be extensively used in Chap. 7. Moreover, if $A := \Omega_m$ and $B := \emptyset$, we obtain

$$\|u\|_{L^p(\Omega_m,\nu)} \le \Lambda\left(\left(\int_{\Omega_m\times\Omega_m} |u(y)-u(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + \left|\int_Z u\,d\nu\right|\right),$$

which will also be widely used in Chap. 7.

In Theorem 1.88 we give sufficient conditions for a random walk space to satisfy inequalities of this kind. Let us first prove the following lemma.

Lemma 1.87 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. Let $A, B \in \mathcal{B}$ be disjoint sets such that $B \subset \partial_m A$, $\nu(A) > 0$ and $A$ is m-connected. Suppose that $\nu(A\cup B) < \infty$ and that

$$\nu\left(\{x \in A\cup B : (m_x \llcorner A) \perp (\nu \llcorner A)\}\right) = 0.$$

Let $q \ge 1$ and let $\{u_n\}_n \subset L^q(A\cup B,\nu)$ be a bounded sequence in $L^1(A\cup B,\nu)$ satisfying

$$\lim_{n\to\infty} \int_Q |u_n(y)-u_n(x)|^q\,dm_x(y)\,d\nu(x) = 0, \tag{1.18}$$

where, as in Definition 1.85, $Q = ((A\cup B)\times(A\cup B))\setminus(B\times B)$. Then, there exists $\lambda \in \mathbb{R}$ such that

$$u_n(x) \to \lambda \quad \text{for } \nu\text{-a.e. } x \in A\cup B,$$

$$\|u_n - \lambda\|_{L^q(A,m_x)} \to 0 \quad \text{for } \nu\text{-a.e. } x \in A\cup B,$$

and

$$\|u_n - \lambda\|_{L^q(A\cup B,m_x)} \to 0 \quad \text{for } \nu\text{-a.e. } x \in A.$$

Proof If $B = \emptyset$ (or $\nu(B) = 0$), one can skip some steps of the proof. Let

$$F_n(x,y) = |u_n(y) - u_n(x)|, \quad (x,y) \in Q,$$

$$f_n(x) = \int_A |u_n(y)-u_n(x)|^q\,dm_x(y), \quad x \in A\cup B,$$

and

$$g_n(x) = \int_{A\cup B} |u_n(y)-u_n(x)|^q\,dm_x(y), \quad x \in A.$$


Let

$$N_\perp := \{x \in A\cup B : (m_x \llcorner A) \perp (\nu \llcorner A)\}.$$

From (1.18), it follows that

$$f_n \to 0 \quad \text{in } L^1(A\cup B,\nu)$$

and

$$g_n \to 0 \quad \text{in } L^1(A,\nu).$$

Passing to a subsequence if necessary, we can assume that

$$f_n(x) \to 0 \quad \text{for every } x \in (A\cup B)\setminus N_f, \text{ where } N_f \subset A\cup B \text{ is } \nu\text{-null}, \tag{1.19}$$

and

$$g_n(x) \to 0 \quad \text{for every } x \in A\setminus N_g, \text{ where } N_g \subset A \text{ is } \nu\text{-null}. \tag{1.20}$$

On the other hand, by (1.18), we also have that $F_n \to 0$ in $L^q(Q, \nu\otimes m_x)$. Therefore, we can suppose that, up to a subsequence,

$$F_n(x,y) \to 0 \quad \text{for every } (x,y) \in Q\setminus C, \text{ where } C \subset Q \text{ is } \nu\otimes m_x\text{-null}. \tag{1.21}$$

Let $N_1 \subset A$ be a $\nu$-null set satisfying that, for all $x \in A\setminus N_1$, the section $C_x := \{y \in A\cup B : (x,y) \in C\}$ of $C$ is $m_x$-null, and let $N_2 \subset A\cup B$ be a $\nu$-null set satisfying that, for all $x \in (A\cup B)\setminus N_2$, the section $C_x := \{y \in A : (x,y) \in C\}$ of $C$ is $m_x$-null.

Now, since $A$ is m-connected and $B \subset \partial_m A$,

$$D := \{x \in A\cup B : m_x(A) = 0\}$$

is $\nu$-null. Indeed, by the definition of $D$, $L_m(A\cap D, A) = 0$; thus, since $A$ is m-connected, we must have $\nu(A\cap D) = 0$. Now, since $B \subset \partial_m A$, $m_x(A) > 0$ for every $x \in B$, thus $\nu(B\cap D) = 0$.

Set $N := N_\perp \cup N_f \cup N_g \cup N_1 \cup N_2 \cup D$ (note that $\nu(N) = 0$). Fix $x_0 \in A\setminus N$. Up to a subsequence, $u_n(x_0) \to \lambda$ for some $\lambda \in [-\infty,+\infty]$; let

$$S := \{x \in A\cup B : u_n(x) \to \lambda\}$$

and let us see that $\nu((A\cup B)\setminus S) = 0$. By (1.21), since $u_n(x_0) \to \lambda$, we also have that $u_n(y) \to \lambda$ for every $y \in (A\cup B)\setminus C_{x_0}$. However, since $x_0 \notin N_\perp$ and $m_{x_0}(C_{x_0}) = 0$, we must have that $\nu(A\setminus C_{x_0}) > 0$; thus, $\nu(A\cap S) \ge \nu(A\setminus C_{x_0}) > 0$. Note that, if $x \in (A\cap S)\setminus N$, then, by (1.21) again, $(A\cup B)\setminus C_x \subset S$, thus $m_x((A\cup B)\setminus S) \le m_x(C_x) = 0$; therefore,

$$L_m(A\cap S, (A\cup B)\setminus S) = 0.$$

In particular, $L_m(A\cap S, A\setminus S) = 0$, but, since $A$ is m-connected and $\nu(A\cap S) > 0$, we must have $\nu(A\setminus S) = 0$, i.e. $\nu(A) = \nu(A\cap S)$.

Now, suppose that $\nu(B\setminus S) > 0$. Let $x \in B\setminus(S\cup N)$. By (1.21), $A\setminus C_x \subset A\setminus S$, i.e. $A\cap S \subset C_x$, thus $m_x(A\cap S) = 0$. Therefore, since $x \notin N_\perp$, we must have $\nu(A\setminus S) > 0$, which is in contradiction with what we have already obtained. Consequently, we have obtained that $u_n$ converges $\nu$-a.e. in $A\cup B$ to $\lambda$:

$$u_n(x) \to \lambda \quad \text{for every } x \in S, \qquad \nu((A\cup B)\setminus S) = 0.$$

Since $\{\|u_n\|_{L^1(A\cup B,\nu)}\}_n$ is bounded, by Fatou's lemma, we must have that $\lambda \in \mathbb{R}$.

On the other hand, by (1.19),

$$F_n(x,\cdot) \to 0 \quad \text{in } L^q(A, m_x)$$

for every $x \in (A\cup B)\setminus N_f$. In other words, $\|u_n(\cdot) - u_n(x)\|_{L^q(A,m_x)} \to 0$, thus

$$\|u_n - \lambda\|_{L^q(A,m_x)} \to 0 \quad \text{for } \nu\text{-a.e. } x \in A\cup B.$$

Similarly, by (1.20),

$$\|u_n - \lambda\|_{L^q(A\cup B,m_x)} \to 0 \quad \text{for } \nu\text{-a.e. } x \in A. \qquad \square$$

Theorem 1.88 Let $p \ge 1$. Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. Let $A, B \in \mathcal{B}$ be disjoint sets such that $B \subset \partial_m A$, $\nu(A) > 0$ and $A$ is m-connected. Suppose that $\nu(A\cup B) < \infty$ and that

$$\nu\left(\{x \in A\cup B : (m_x \llcorner A) \perp (\nu \llcorner A)\}\right) = 0.$$


Assume further that, given a $\nu$-null set $N \subset A$, there exist $x_1, x_2, \ldots, x_L \in A\setminus N$ and a constant $C > 0$ such that $\nu \llcorner (A\cup B) \le C\,(m_{x_1} + \cdots + m_{x_L}) \llcorner (A\cup B)$. Then, $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(p,p)$-Poincaré-type inequality on $(A,B)$.

Proof Let $p \ge 1$ and $0 < l \le \nu(A\cup B)$. We want to prove that there exists a constant $\Lambda > 0$ such that

$$\|u\|_{L^p(A\cup B,\nu)} \le \Lambda\left(\left(\int_Q |u(y)-u(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + \left|\int_Z u\,d\nu\right|\right)$$

for every $u \in L^p(A\cup B,\nu)$ and every $Z \in \mathcal{B}_{A\cup B}$ with $\nu(Z) \ge l$. Suppose that this inequality is not satisfied for any $\Lambda$. Then, there exists a sequence $\{u_n\}_{n\in\mathbb{N}} \subset L^p(A\cup B,\nu)$, with $\|u_n\|_{L^p(A\cup B,\nu)} = 1$, and a sequence $Z_n \in \mathcal{B}_{A\cup B}$ with $\nu(Z_n) \ge l$, $n \in \mathbb{N}$, satisfying

$$\lim_n \int_Q |u_n(y)-u_n(x)|^p\,dm_x(y)\,d\nu(x) = 0$$

and

$$\lim_n \int_{Z_n} u_n\,d\nu = 0.$$

Therefore, by Lemma 1.87, there exist $\lambda \in \mathbb{R}$ and a $\nu$-null set $N \subset A$ such that

$$\|u_n - \lambda\|_{L^p(A\cup B,m_x)} \xrightarrow{n} 0 \quad \text{for every } x \in A\setminus N.$$

Now, by hypothesis, there exist $x_1, x_2, \ldots, x_L \in A\setminus N$ and $C > 0$ such that $\nu \llcorner (A\cup B) \le C\,(m_{x_1} + \cdots + m_{x_L})$. Therefore,

$$\|u_n - \lambda\|^p_{L^p(A\cup B,\nu)} \le C \sum_{i=1}^L \|u_n - \lambda\|^p_{L^p(A\cup B,m_{x_i})} \xrightarrow{n} 0.$$

Moreover, since $\{\chi_{Z_n}\}_n$ is bounded in $L^{p'}(A\cup B,\nu)$, there exists $\varphi \in L^{p'}(A\cup B,\nu)$ such that, up to a subsequence, $\chi_{Z_n} \rightharpoonup \varphi$ weakly in $L^{p'}(A\cup B,\nu)$ (weakly-$*$ in $L^\infty(A\cup B,\nu)$ in the case $p = 1$; note that $L^1(X,\nu)$ is separable since $\nu$ is $\sigma$-finite and $\mathcal{B}$ is countably generated). In addition, $\varphi \ge 0$ $\nu$-a.e. in $A\cup B$ and

$$0 < l \le \lim_{n\to+\infty} \nu(Z_n) = \lim_{n\to+\infty} \int_{A\cup B} \chi_{Z_n}\,d\nu = \int_{A\cup B} \varphi\,d\nu.$$

Then, since $u_n \xrightarrow{n} \lambda$ in $L^p(A\cup B,\nu)$ and $\chi_{Z_n} \rightharpoonup \varphi$ weakly in $L^{p'}(A\cup B,\nu)$ (weakly-$*$ in $L^\infty(A\cup B,\nu)$ in the case $p = 1$),

$$0 = \lim_{n\to+\infty} \int_{Z_n} u_n = \lim_{n\to+\infty} \int_{A\cup B} \chi_{Z_n} u_n = \lambda \int_{A\cup B} \varphi\,d\nu,$$

thus $\lambda = 0$. This is in contradiction with $\|u_n\|_{L^p(A\cup B,\nu)} = 1$ $\forall n \in \mathbb{N}$, since $u_n \xrightarrow{n} \lambda$ in $L^p(A\cup B,\nu)$. $\square$

Remark 1.89 Note that the assumption

$$\nu\left(\{x \in A\cup B : (m_x \llcorner A) \perp (\nu \llcorner A)\}\right) = 0$$

means that we find ourselves in case (i) of Proposition 1.24, i.e. disregarding a $\nu$-null set, the random walk is $\nu$-irreducible.

Remark 1.90

(i) The assumption that, given a $\nu$-null set $N \subset A$, there exist $x_1, x_2, \ldots, x_L \in A\setminus N$ and $C > 0$ such that $\nu \llcorner (A\cup B) \le C\,(m_{x_1}+\cdots+m_{x_L}) \llcorner (A\cup B)$ is not as strong as it seems. Indeed, this is trivially satisfied by connected locally finite weighted discrete graphs and is also satisfied by $[\mathbb{R}^N, d, m^J, \mathcal{L}^N]$ (recall Example 1.40) if, for a domain $A \subset \mathbb{R}^N$, we take $B \subset \partial_{m^J} A$ such that $\mathrm{dist}(B, \mathbb{R}^N\setminus A_{m^J}) > 0$. Moreover, in the following example, we see that if we remove this hypothesis, then the statement is not true in general.

Consider the metric random walk space $[\mathbb{R}, d, m^J, \mathcal{L}^1]$, where $d$ is the Euclidean distance and $J := \frac12\chi_{[-1,1]}$ (recall Example 1.40). Let $A := [-1,1]$ and $B := \partial_{m^J} A = [-2,2]\setminus A$. Then, if $N = \{-1,1\}$, we may not find points in $A\setminus N$ satisfying the aforementioned assumption. In fact, the statement of the theorem does not hold for any $p \ge 1$, as can be seen by taking

$$u_n := \left(\frac{n}{2}\right)^{\frac1p}\left(\chi_{[-2,\,-2+\frac1n]} - \chi_{[2-\frac1n,\,2]}\right) \quad \text{and} \quad Z := A\cup B.$$

Indeed, first note that $\|u_n\|_{L^p([-2,2],\mathcal{L}^1)} = 1$ and $\int_{[-2,2]} u_n\,d\mathcal{L}^1 = 0$ for every $n \in \mathbb{N}$. Now, $\mathrm{supp}(m^J_x) = [x-1,x+1]$ for $x \in [-1,1]$ and, therefore, for $x \in [-1,1]$,

$$\int_{[-2,2]} |u_n(y)-u_n(x)|^p\,dm^J_x(y) = \int_{[-2,-2+\frac1n]\cap[x-1,x+1]} \frac{n}{2}\,dm^J_x(y) + \int_{[2-\frac1n,2]\cap[x-1,x+1]} \frac{n}{2}\,dm^J_x(y)$$
$$= \frac{n}{4}\left(-x-1+\frac1n\right)\chi_{[-1,\,-1+\frac1n]}(x) + \frac{n}{4}\left(x-1+\frac1n\right)\chi_{[1-\frac1n,\,1]}(x).$$

Consequently,

$$\int_{[-1,1]}\int_{[-2,2]} |u_n(y)-u_n(x)|^p\,dm^J_x(y)\,d\mathcal{L}^1(x) = 2\cdot\frac{n}{4}\int_{[1-\frac1n,1]}\left(x-1+\frac1n\right)d\mathcal{L}^1(x) = \frac{n}{2}\cdot\frac{1}{2n^2} = \frac{1}{4n}.$$

Finally, by the reversibility of $\mathcal{L}^1$ with respect to $m^J$,

$$\int_{[-2,2]}\int_{[-1,1]} |u_n(y)-u_n(x)|^p\,dm^J_x(y)\,d\mathcal{L}^1(x) = \frac{1}{4n},$$

thus

$$\int_Q |u_n(y)-u_n(x)|^p\,dm^J_x(y)\,d\mathcal{L}^1(x) \le \frac{1}{2n} \xrightarrow{n} 0.$$

(ii) However, in this example, as we mentioned before, we can take $B \subset \partial_m A$ such that $\mathrm{dist}(B, \mathbb{R}\setminus[-2,2]) > 0$ to avoid this problem and to ensure that the hypotheses of the theorem are satisfied, so that $[\mathbb{R}, d, m^J, \mathcal{L}^1]$ satisfies a generalized $(p,p)$-Poincaré-type inequality on $(A,B)$.

In the following example, the metric random walk space $[X,d,m,\nu]$ defined satisfies $m_x \perp \nu$ for every $x \in X$ (thus falling into case (ii) of Proposition 1.24), and a Poincaré-type inequality does not hold.

Example 1.91 Let $p > 1$. Let $S^1 = \{e^{2\pi i\alpha} : \alpha \in [0,1)\}$ and let $T_\theta : S^1 \to S^1$ denote the irrational rotation map $T_\theta(x) = xe^{2\pi i\theta}$, where $\theta$ is an irrational number. On $S^1$ consider the Borel $\sigma$-algebra $\mathcal{B}$ and the one-dimensional Hausdorff measure $\nu := \mathcal{H}^1 \llcorner S^1$. It is well known (see [91, Example 4.11]) that $T_\theta$ is a uniquely ergodic measure-preserving transformation on $(S^1,\mathcal{B},\nu)$.

Now, denote $X := S^1$ and let $m_x := \frac12\delta_{T_{-\theta}(x)} + \frac12\delta_{T_\theta(x)}$, $x \in X$. Then, $[X,d,m,\nu]$ is a reversible metric random walk space, where $d$ is the metric given by the arc length. Indeed, given a bounded measurable function $f$ on $(X\times X, \mathcal{B}\times\mathcal{B})$,

$$\int_{S^1}\int_{S^1} f(x,y)\,dm_x(y)\,d\nu(x) = \frac12\int_{S^1} f(x,T_{-\theta}(x))\,d\nu(x) + \frac12\int_{S^1} f(x,T_\theta(x))\,d\nu(x)$$
$$= \frac12\int_{S^1} f(T_\theta(x),x)\,d\nu(x) + \frac12\int_{S^1} f(T_{-\theta}(x),x)\,d\nu(x) = \int_{S^1}\int_{S^1} f(y,x)\,dm_x(y)\,d\nu(x).$$

Let us see that $[X,d,m,\nu]$ is m-connected. First note that, for $x \in X$,

$$m^{*2}_x = \frac12\delta_x + \frac14\delta_{T^2_{-\theta}(x)} + \frac14\delta_{T^2_\theta(x)} \ge \frac14\delta_{T^2_\theta(x)}$$

and, by induction, it is easy to see that

$$m^{*n}_x \ge \frac{1}{2^n}\,\delta_{T^n_\theta(x)}.$$

Now, let $A \in \mathcal{B}$ be such that $\nu(A) > 0$. By the pointwise ergodic theorem,

$$\lim_{n\to+\infty} \frac1n \sum_{k=0}^{n-1} \chi_A\!\left(T^k_\theta(x)\right) = \frac{\nu(A)}{\nu(X)} > 0$$

for $\nu$-a.e. $x \in X$. Consequently, for $\nu$-a.e. $x \in X$, there exists $k \in \mathbb{N}$ such that

$$m^{*k}_x(A) \ge \frac{1}{2^k}\,\delta_{T^k_\theta(x)}(A) = \frac{1}{2^k}\,\chi_A\!\left(T^k_\theta(x)\right) > 0,$$

thus $[X,d,m,\nu]$ is m-connected.

Let us see that $[X,d,m,\nu]$ does not satisfy a $(p,p)$-Poincaré inequality. For $n \in \mathbb{N}$, let

$$I^n_k := \left\{e^{2\pi i\alpha} : k\theta - \delta(n) < \alpha < k\theta + \delta(n)\right\}, \quad -1 \le k \le 2n,$$

where $\delta(n) > 0$ is chosen so that

$$I^n_{k_1} \cap I^n_{k_2} = \emptyset \quad \text{for every } -1 \le k_1, k_2 \le 2n,\ k_1 \neq k_2$$

(note that $e^{2\pi i(k_1\theta-\delta(n))} \neq e^{2\pi i(k_2\theta-\delta(n))}$ for every $k_1 \neq k_2$ since $\theta$ is irrational). Consider the following sequence of functions:

$$u_n := \sum_{k=0}^{n-1} \chi_{I^n_k} - \sum_{k=n}^{2n-1} \chi_{I^n_k}, \quad n \in \mathbb{N}.$$

Then,

$$\int_X u_n\,d\nu = 0 \quad \text{for every } n \in \mathbb{N},$$

and

$$\int_X |u_n|^p\,d\nu = 4n\delta(n) \quad \text{for every } n \in \mathbb{N}.$$

Fix $n \in \mathbb{N}$; let us see what happens with

$$\int_X\int_X |u_n(y)-u_n(x)|^p\,dm_x(y)\,d\nu(x).$$

If $1 \le k \le n-2$ or $n+1 \le k \le 2n-2$ and $x \in I^n_k$, then

$$\int_X |u_n(y)-u_n(x)|^p\,dm_x(y) = \frac12|u_n(T_{-\theta}(x))-u_n(x)|^p + \frac12|u_n(T_\theta(x))-u_n(x)|^p = 0,$$

since $T_{-\theta}(x) \in I^n_{k-1}$ and $T_\theta(x) \in I^n_{k+1}$. Now, if $x \in I^n_0$, then $T_{-\theta}(x) \in I^n_{-1}$; thus,

$$\int_X |u_n(y)-u_n(x)|^p\,dm_x(y) = \frac12|u_n(T_{-\theta}(x))-u_n(x)|^p + \frac12|u_n(T_\theta(x))-u_n(x)|^p = \frac12|-1|^p = \frac12,$$

and the same holds if $x \in I^n_{2n-1}$ (then $T_\theta(x) \in I^n_{2n}$). For $x \in I^n_{n-1}$, we have $T_\theta(x) \in I^n_n$, thus

$$\int_X |u_n(y)-u_n(x)|^p\,dm_x(y) = \frac12|u_n(T_{-\theta}(x))-u_n(x)|^p + \frac12|u_n(T_\theta(x))-u_n(x)|^p = \frac12|-2|^p = 2^{p-1},$$

and the same result is obtained for $x \in I^n_n$. Similarly, if $x \in I^n_{-1}$ or $x \in I^n_{2n}$,

$$\int_X |u_n(y)-u_n(x)|^p\,dm_x(y) = \frac12|u_n(T_{-\theta}(x))-u_n(x)|^p + \frac12|u_n(T_\theta(x))-u_n(x)|^p = \frac12.$$

Finally, if $x \notin \bigcup_{k=-1}^{2n} I^n_k$, then $T_{-\theta}(x), T_\theta(x) \notin \bigcup_{k=0}^{2n-1} I^n_k$, thus

$$\int_X |u_n(y)-u_n(x)|^p\,dm_x(y) = 0.$$

Consequently,

$$\int_X\int_X |u_n(y)-u_n(x)|^p\,dm_x(y)\,d\nu(x) = \frac12\,(4\cdot2\delta(n)) + 2^{p-1}\,(2\cdot2\delta(n)) = (4+2^{p+1})\,\delta(n).$$

Therefore, there is no $\Lambda > 0$ such that

$$\left\|u_n - \frac{1}{2\pi}\int_X u_n\,d\nu\right\|^p_{L^p(X,\nu)} \le \Lambda \int_X\int_X |u_n(y)-u_n(x)|^p\,dm_x(y)\,d\nu(x) \quad \forall n \in \mathbb{N},$$

since this would imply

$$4n\delta(n) \le \Lambda\,(4+2^{p+1})\,\delta(n) \;\Rightarrow\; n \le \Lambda\,(1+2^{p-1}), \quad \forall n \in \mathbb{N}.$$
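Since $u_n$ and its rotates are piecewise constant in the angular variable, the quantities in Example 1.91 can be computed exactly for a concrete $n$; a minimal sketch (working in the coordinate $\alpha \in [0,1)$ with Lebesgue measure, and with the golden-ratio rotation and a particular admissible $\delta(n)$ as illustrative assumptions):

```python
from math import sqrt

# Example 1.91 in the angular coordinate alpha in [0,1): u_n is piecewise
# constant, so all integrals reduce to sums over a finite partition of arcs.
theta = (sqrt(5) - 1) / 2        # an irrational rotation angle (illustrative)
n, p = 4, 2

def circ_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

centers = [(k * theta) % 1.0 for k in range(-3, 2 * n + 3)]
# delta(n): small enough that all the arcs involved stay pairwise disjoint
delta = min(circ_dist(a, b) for i, a in enumerate(centers)
            for b in centers[:i]) / 4

def u(alpha):
    for k in range(2 * n):
        if circ_dist(alpha, (k * theta) % 1.0) < delta:
            return 1.0 if k < n else -1.0
    return 0.0

# breakpoints of u, u(. - theta) and u(. + theta) cut the circle into arcs
# on which the integrand below is constant
bps = sorted({((k * theta + s) % 1.0 + shift) % 1.0
              for k in range(-3, 2 * n + 3) for s in (-delta, delta)
              for shift in (0.0, theta, -theta)})

norm_p, energy = 0.0, 0.0
for a, b in zip(bps, bps[1:] + [bps[0] + 1.0]):
    mid, length = ((a + b) / 2) % 1.0, b - a
    ux = u(mid)
    norm_p += length * abs(ux) ** p
    # int_X |u(y) - u(x)|^p dm_x(y)
    #   = (1/2)|u(x - theta) - u(x)|^p + (1/2)|u(x + theta) - u(x)|^p
    energy += length * 0.5 * (abs(u((mid - theta) % 1.0) - ux) ** p
                              + abs(u((mid + theta) % 1.0) - ux) ** p)

print(norm_p, 4 * n * delta)               # ||u_n||_p^p = 4 n delta(n)
print(energy, (4 + 2 ** (p + 1)) * delta)  # = (4 + 2^{p+1}) delta(n)
```

The ratio of the two quantities is $4n/(4+2^{p+1})$, independent of $\delta(n)$, so it grows linearly in $n$: no Poincaré constant can work for all $n$.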

Finally, we provide another result in which we give sufficient conditions for a generalized $(p,q)$-Poincaré inequality to hold.

Theorem 1.92 Let $1 \le p < q$. Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. Let $A, B \in \mathcal{B}$ be disjoint sets such that $B \subset \partial_m A$, $\nu(A) > 0$ and $A$ is m-connected. Suppose that $\nu(A\cup B) < \infty$ and $m_x \ll \nu$ for every $x \in A\cup B$. Assume further that, given a $\nu$-null set $N \subset A$, there exist $x_1, x_2, \ldots, x_L \in A\setminus N$ and $\Omega_1, \Omega_2, \ldots, \Omega_L \in \mathcal{B}_{A\cup B}$ such that $A\cup B = \bigcup_{i=1}^L \Omega_i$ and, if $g_i := \frac{dm_{x_i}}{d\nu}$ on $\Omega_i$, then $g_i^{-\frac{p}{q-p}} \in L^1(\Omega_i,\nu)$, $i = 1,2,\ldots,L$. Then, $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(p,q)$-Poincaré-type inequality on $(A,B)$.

Proof Let $0 < l \le \nu(A\cup B)$. Starting as in the proof of Theorem 1.88, if we suppose that a generalized $(p,q)$-Poincaré-type inequality on $(A,B)$ does not hold, then there exists a sequence $\{u_n\}_{n\in\mathbb{N}} \subset L^q(A\cup B,\nu)$, with $\|u_n\|_{L^p(A\cup B,\nu)} = 1$, and a sequence $Z_n \in \mathcal{B}_{A\cup B}$ with $\nu(Z_n) \ge l$, $n \in \mathbb{N}$, satisfying

$$\lim_n \int_Q |u_n(y)-u_n(x)|^q\,dm_x(y)\,d\nu(x) = 0$$

and

$$\lim_n \int_{Z_n} u_n\,d\nu = 0.$$

Therefore, by Lemma 1.87, there exist $\lambda \in \mathbb{R}$ and a $\nu$-null set $N \subset A$ such that

$$\|u_n - \lambda\|_{L^q(A\cup B,m_x)} \xrightarrow{n} 0 \quad \text{for every } x \in A\setminus N.$$

Now, by hypothesis, there exist $x_1, x_2, \ldots, x_L \in A\setminus N$ and $\Omega_1, \Omega_2, \ldots, \Omega_L \in \mathcal{B}_{A\cup B}$ such that $A\cup B = \bigcup_{i=1}^L \Omega_i$ and, if $g_i := \frac{dm_{x_i}}{d\nu}$ on $\Omega_i$, then $g_i^{-\frac{p}{q-p}} \in L^1(\Omega_i,\nu)$, $i = 1,2,\ldots,L$. Therefore, by Hölder's inequality with exponents $\frac{q}{p}$ and $\frac{q}{q-p}$,

$$\|u_n-\lambda\|^p_{L^p(A\cup B,\nu)} \le \sum_{i=1}^L \int_{\Omega_i} |u_n(y)-\lambda|^p\,d\nu(y) = \sum_{i=1}^L \int_{\Omega_i} |u_n(y)-\lambda|^p\,g_i(y)^{\frac{p}{q}}\,\frac{1}{g_i(y)^{\frac{p}{q}}}\,d\nu(y)$$
$$\le \sum_{i=1}^L \left(\int_{\Omega_i} |u_n(y)-\lambda|^q\,g_i(y)\,d\nu(y)\right)^{\frac{p}{q}}\left(\int_{\Omega_i} \frac{1}{g_i(y)^{\frac{p}{q-p}}}\,d\nu(y)\right)^{\frac{q-p}{q}}$$
$$= \sum_{i=1}^L \left(\int_{\Omega_i} |u_n(y)-\lambda|^q\,dm_{x_i}(y)\right)^{\frac{p}{q}}\left\|\frac{1}{g_i^{\frac{p}{q-p}}}\right\|^{\frac{q-p}{q}}_{L^1(\Omega_i,\nu)} = \sum_{i=1}^L \|u_n-\lambda\|^p_{L^q(\Omega_i,m_{x_i})}\left\|\frac{1}{g_i^{\frac{p}{q-p}}}\right\|^{\frac{q-p}{q}}_{L^1(\Omega_i,\nu)} \xrightarrow{n} 0.$$

We now finish the proof in the same way as for Theorem 1.88. $\square$

The following two lemmas are used in Chap. 7. Their proofs are similar to the proof of [15, Lemma 4.2].

Lemma 1.93 Let $p \ge 1$. Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space and let $A, B \in \mathcal{B}$ be disjoint sets. Assume that $\nu(A\cup B) < \infty$ and that $A\cup B$ is m-connected. Suppose that $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(p,p)$-Poincaré-type inequality on $(A,B)$. Let $\alpha$ and $\tau$ be maximal monotone graphs in $\mathbb{R}^2$ such that $0 \in \alpha(0)$ and $0 \in \tau(0)$. Let $\{u_n\}_{n\in\mathbb{N}} \subset L^p(A\cup B,\nu)$, $\{z_n\}_{n\in\mathbb{N}} \subset L^1(A,\nu)$ and $\{\omega_n\}_{n\in\mathbb{N}} \subset L^1(B,\nu)$ be such that, for every $n \in \mathbb{N}$, $z_n \in \alpha(u_n)$ $\nu$-a.e. in $A$ and $\omega_n \in \tau(u_n)$ $\nu$-a.e. in $B$. Finally, let $Q_1 := (A\cup B)\times(A\cup B)$.

(i) Suppose that

$$R^+_{\alpha,\tau} := \nu(A)\sup\mathrm{Ran}(\alpha) + \nu(B)\sup\mathrm{Ran}(\tau) = +\infty$$

(by convention $0\cdot\infty = 0$) and that there exists $M > 0$ such that

$$\int_A z_n^+\,d\nu + \int_B \omega_n^+\,d\nu < M \quad \text{for every } n \in \mathbb{N}.$$

Then, there exists a constant $K = K(A,B,M,\alpha,\tau)$ such that

$$\|u_n^+\|_{L^p(A\cup B,\nu)} \le K\left(\left(\int_{Q_1} |u_n^+(y)-u_n^+(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + 1\right) \quad \forall n \in \mathbb{N}.$$

(ii) Suppose that

$$R^-_{\alpha,\tau} := \nu(A)\inf\mathrm{Ran}(\alpha) + \nu(B)\inf\mathrm{Ran}(\tau) = -\infty$$

and that there exists $M > 0$ such that

$$\int_A z_n^-\,d\nu + \int_B \omega_n^-\,d\nu < M \quad \text{for every } n \in \mathbb{N}.$$

Then, there exists a constant $\widetilde K = \widetilde K(A,B,M,\alpha,\tau)$ such that

$$\|u_n^-\|_{L^p(A\cup B,\nu)} \le \widetilde K\left(\left(\int_{Q_1} |u_n^-(y)-u_n^-(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + 1\right) \quad \forall n \in \mathbb{N}.$$

Proof (i) Since $R^+_{\alpha,\tau} = +\infty$, $\nu(A)\sup\mathrm{Ran}(\alpha) = +\infty$ or $\nu(B)\sup\mathrm{Ran}(\tau) = +\infty$. Suppose first that $\nu(A)\sup\mathrm{Ran}(\alpha) = +\infty$ (thus, in particular, $\nu(A) > 0$). By hypothesis,

$$\int_A z_n^+\,d\nu < M \quad \text{for every } n \in \mathbb{N}.$$

Let $\widetilde A_n := \left\{x \in A : z_n^+(x) < \frac{2M}{\nu(A)}\right\}$. Then,

$$0 \le \int_{\widetilde A_n} z_n^+\,d\nu = \int_A z_n^+\,d\nu - \int_{A\setminus\widetilde A_n} z_n^+\,d\nu \le M - \left(\nu(A)-\nu(\widetilde A_n)\right)\frac{2M}{\nu(A)} = \nu(\widetilde A_n)\,\frac{2M}{\nu(A)} - M,$$

thus

$$\nu(\widetilde A_n) \ge \frac{\nu(A)}{2}.$$

Moreover, since $\alpha(u_n^+(x)) \ni z_n^+(x) < \frac{2M}{\nu(A)}$ for $x \in \widetilde A_n$,

$$u_n^+(x) \le \sup\alpha^{-1}\!\left(\frac{2M}{\nu(A)}\right) \quad \text{for } x \in \widetilde A_n.$$

This inequality and the generalized $(p,p)$-Poincaré-type inequality on $(A,B)$ yield the thesis. If $\nu(B)\sup\mathrm{Ran}(\tau) = +\infty$, we proceed similarly.

(ii) Follows similarly to (i). $\square$

Lemma 1.94 Let $p \ge 1$. Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space and let $A, B \in \mathcal{B}$ be disjoint sets. Assume that $0 < \nu(A\cup B) < \infty$ and that $A\cup B$ is m-connected. Suppose that $[X,\mathcal{B},m,\nu]$ satisfies a generalized $(p,p)$-Poincaré-type inequality on $(A,B)$. Let $\alpha$ and $\tau$ be maximal monotone graphs in $\mathbb{R}^2$ such that $0 \in \alpha(0)$ and $0 \in \tau(0)$. Let $\{u_n\}_{n\in\mathbb{N}} \subset L^p(A\cup B,\nu)$, $\{z_n\}_{n\in\mathbb{N}} \subset L^1(A,\nu)$


and $\{\omega_n\}_{n\in\mathbb{N}} \subset L^1(B,\nu)$ be such that, for every $n \in \mathbb{N}$, $z_n \in \alpha(u_n)$ $\nu$-a.e. in $A$ and $\omega_n \in \tau(u_n)$ $\nu$-a.e. in $B$. Finally, let $Q_1 := (A\cup B)\times(A\cup B)$ and let $R^+_{\alpha,\tau}$ and $R^-_{\alpha,\tau}$ be defined as in Lemma 1.93.

(i) Suppose that $R^+_{\alpha,\tau} < +\infty$ and that there exist $M \in \mathbb{R}$ and $h > 0$ such that

$$\int_A z_n\,d\nu + \int_B \omega_n\,d\nu < M < R^+_{\alpha,\tau} \quad \text{for every } n \in \mathbb{N},$$

and

$$\max\left\{\int_{\{x\in A\,:\,z_n<-h\}} (-z_n)\,d\nu,\ \int_{\{x\in B\,:\,\omega_n<-h\}} (-\omega_n)\,d\nu\right\} < \frac{R^+_{\alpha,\tau}-M}{8} \quad \forall n \in \mathbb{N}.$$

Then, there exists a constant $K = K(A,B,M,h,\alpha,\tau)$ such that

$$\|u_n^+\|_{L^p(A\cup B,\nu)} \le K\left(\left(\int_{Q_1} |u_n^+(y)-u_n^+(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + 1\right) \quad \forall n \in \mathbb{N}.$$

(ii) Suppose that $R^-_{\alpha,\tau} > -\infty$ and that there exist $M \in \mathbb{R}$ and $h > 0$ such that

$$\int_A z_n\,d\nu + \int_B \omega_n\,d\nu > M > R^-_{\alpha,\tau} \quad \text{for every } n \in \mathbb{N},$$

and

$$\max\left\{\int_{\{x\in A\,:\,z_n>h\}} z_n\,d\nu,\ \int_{\{x\in B\,:\,\omega_n>h\}} \omega_n\,d\nu\right\} < \frac{M-R^-_{\alpha,\tau}}{8} \quad \forall n \in \mathbb{N}.$$

Then, there exists a constant $\widetilde K = \widetilde K(A,B,M,h,\alpha,\tau)$ such that

$$\|u_n^-\|_{L^p(A\cup B,\nu)} \le \widetilde K\left(\left(\int_{Q_1} |u_n^-(y)-u_n^-(x)|^p\,dm_x(y)\,d\nu(x)\right)^{\frac1p} + 1\right) \quad \forall n \in \mathbb{N}.$$

Proof (i) Let $\delta := R^+_{\alpha,\tau} - M$. By assumption,

$$\int_A z_n\,d\nu + \int_B \omega_n\,d\nu < \nu(A)\sup\mathrm{Ran}(\alpha) + \nu(B)\sup\mathrm{Ran}(\tau) - \delta = \left(\nu(A)\sup\mathrm{Ran}(\alpha)-\frac{\delta}{2}\right) + \left(\nu(B)\sup\mathrm{Ran}(\tau)-\frac{\delta}{2}\right).$$

Therefore, for each $n \in \mathbb{N}$, either

$$\int_A z_n\,d\nu < \nu(A)\sup\mathrm{Ran}(\alpha) - \frac{\delta}{2} \tag{1.22}$$

or

$$\int_B \omega_n\,d\nu < \nu(B)\sup\mathrm{Ran}(\tau) - \frac{\delta}{2}. \tag{1.23}$$

For $n \in \mathbb{N}$ such that (1.22) holds, let $K_n := \left\{x \in A : z_n(x) < \sup\mathrm{Ran}(\alpha) - \frac{\delta}{4\nu(A)}\right\}$. Then,

$$\int_{K_n} z_n\,d\nu = \int_A z_n\,d\nu - \int_{A\setminus K_n} z_n\,d\nu < -\frac{\delta}{4} + \nu(K_n)\left(\sup\mathrm{Ran}(\alpha)-\frac{\delta}{4\nu(A)}\right),$$

and, on the other hand,

$$\int_{K_n} z_n\,d\nu \ge -\int_{K_n\cap\{x\in A\,:\,z_n<-h\}} (-z_n)\,d\nu - h\,\nu(K_n) \ge -\frac{\delta}{8} - h\,\nu(K_n).$$

Combining the two estimates, and since $h - \frac{\delta}{4\nu(A)} + \sup\mathrm{Ran}(\alpha) > 0$, we obtain

$$\nu(K_n) \ge \frac{\delta/8}{h - \frac{\delta}{4\nu(A)} + \sup\mathrm{Ran}(\alpha)}.$$

Consequently, since

$$u_n^+(x) \le \sup\alpha^{-1}\!\left(\sup\mathrm{Ran}(\alpha) - \frac{\delta}{4\nu(A)}\right) \quad \text{for } x \in K_n,$$

by the generalized $(p,p)$-Poincaré-type inequality on $(A,B)$ we are done. Similarly for $n \in \mathbb{N}$ such that (1.23) holds.

(ii) Follows similarly. $\square$

Chapter 2

The Heat Flow in Random Walk Spaces

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. J. M. Mazón et al., Variational and Diffusion Problems in Random Walk Spaces, Progress in Nonlinear Differential Equations and Their Applications 103, https://doi.org/10.1007/978-3-031-33584-6_2

This chapter focuses on the study of the heat flow in random walk spaces. Assuming that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space, the operator $-\Delta_m$ generates in $L^2(X,\nu)$ a Markovian semigroup $(e^{-t\Delta_m})_{t\ge0}$ called the heat flow in $[X,\mathcal{B},m,\nu]$. This is shown in Sect. 2.1 where, in addition, we obtain a complete expansion of the solution of the heat flow in terms of the n-step transition probabilities of the random walk. In Sect. 2.2 we characterize the infinite speed of propagation of the heat flow in terms of the m-connectedness of the random walk space. Moreover, in Sect. 2.3, we study the asymptotic behaviour of the semigroup $(e^{-t\Delta_m})_{t\ge0}$ and, with the help of a Poincaré inequality, we obtain rates of convergence of $(e^{-t\Delta_m})_{t\ge0}$ as $t \to \infty$. Section 2.4 is devoted to the Ollivier-Ricci curvature and its relation with the Poincaré-type inequality. We prove that if the Ollivier-Ricci curvature $\kappa$ of a metric random walk space is positive, then the space is m-connected and that, under additional assumptions, $\kappa$ is a lower bound of the spectral gap of $-\Delta_m$. In Sect. 2.5 we introduce the Bakry-Émery curvature-dimension condition and study its relation to the Poincaré inequality. Furthermore, in Sect. 2.6 we study the logarithmic-Sobolev inequality and the modified logarithmic-Sobolev inequality, which relate the entropy with the Fisher-Donsker-Varadhan information. Additionally, the relation between these two inequalities and the Poincaré inequality is also covered. Section 2.7 focuses on the study of transport inequalities in relation with the Bakry-Émery curvature-dimension condition and the Ollivier-Ricci curvature. Lastly, in Sect. 2.8 we study the heat content.

2.1 The m-Heat Flow

In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space. In Chap. 1, we have introduced the (nonlocal) m-Laplacian operator $\Delta_m$ (Definition 1.50):

$$\Delta_m f(x) = \int_X (f(y)-f(x))\,dm_x(y) = \mathrm{div}_m(\nabla f)(x), \quad f \in L^1(X,\nu),$$

and the energy functional $H_m$ (Definition 1.70):

$$H_m(f) = \begin{cases} \dfrac14 \displaystyle\int_{X\times X} (f(x)-f(y))^2\,dm_x(y)\,d\nu(x) & \text{if } f \in L^1(X,\nu)\cap L^2(X,\nu), \\[4pt] +\infty & \text{else.} \end{cases}$$

In the following theorem, we prove that the subdifferential of $H_m$ is $-\Delta_m$.

Theorem 2.1 $H_m$ is proper, convex and lower semicontinuous in $L^2(X,\nu)$. Moreover,

$$\partial H_m = -\Delta_m.$$

Consequently, $-\Delta_m$ is m-accretive in $L^2(X,\nu)$. Recall that the domain of $-\Delta_m$, viewed as an operator on $L^2(X,\nu)$, is equal to $L^1(X,\nu)\cap L^2(X,\nu)$, and hence it is dense in $L^2(X,\nu)$.

Proof It is easy to see that $H_m$ is proper and convex. Let us see that it is lower semicontinuous as an operator on $L^2(X,\nu)$. Let $f_n \xrightarrow{n} f \in L^2(X,\nu)$; we can assume that there exists a $\nu$-null set $N$ such that $f_n(x) \to f(x)$ for all $x \in X\setminus N$. Then,

$$(f_n(x)-f_n(y))^2 \to (f(x)-f(y))^2 \tag{2.1}$$

for all $(x,y) \in (X\setminus N)\times(X\setminus N) = (X\times X)\setminus[(N\times X)\cup(X\times N)]$. Now, since $\nu$ is invariant with respect to $m$,

$$\nu\otimes m_x\left([(N\times X)\cup(X\times N)]\right) \le \int_N\left(\int_X dm_x(y)\right)d\nu(x) + \int_X\left(\int_X \chi_N(y)\,dm_x(y)\right)d\nu(x) = \nu(N) + \int_X \chi_N(y)\,d\nu(y) = 2\nu(N) = 0.$$

Therefore, by (2.1), Fatou's lemma yields the thesis.

Let us now see that $\partial H_m = -\Delta_m$. Take $(f,g) \in \partial H_m$; then, for any $h \in L^1(X,\nu)\cap L^2(X,\nu)$ and any $\lambda \in \mathbb{R}$,

$$H_m(f+\lambda h) \ge H_m(f) + \int_X \lambda g h\,d\nu.$$

Therefore, for $\lambda > 0$,

$$\int_X gh\,d\nu \le \frac{1}{\lambda}\left(H_m(f+\lambda h) - H_m(f)\right),$$

thus, taking limits as $\lambda \downarrow 0$, we get

$$\int_X gh\,d\nu \le \frac12 \int_{X\times X} \nabla f(x,y)\,\nabla h(x,y)\,d(\nu\otimes m_x)(x,y) = -\int_X \Delta_m f\, h\,d\nu.$$

Working with $\lambda < 0$, we obtain the opposite inequality and, therefore, equality holds in the previous equation. Consequently, since $h \in L^1(X,\nu)\cap L^2(X,\nu)$ was arbitrary,

$$g = -\Delta_m f.$$

Let us now see that, for $f \in L^1(X,\nu)\cap L^2(X,\nu)$, $(f, -\Delta_m f) \in \partial H_m$. Take $h \in L^1(X,\nu)\cap L^2(X,\nu)$; then,

$$H_m(h) \ge H_m(f) - \int_X \Delta_m f\,(h-f)\,d\nu$$

if, and only if,

$$H_m(h) \ge H_m(f) + \frac12\int_{X\times X} \nabla f(x,y)\left(\nabla h(x,y) - \nabla f(x,y)\right)dm_x(y)\,d\nu(x)$$

if, and only if,

$$\frac14\int_{X\times X} (\nabla h(x,y))^2\,dm_x(y)\,d\nu(x) + \frac14\int_{X\times X} (\nabla f(x,y))^2\,dm_x(y)\,d\nu(x) \ge \frac12\int_{X\times X} \nabla f(x,y)\,\nabla h(x,y)\,dm_x(y)\,d\nu(x),$$

which trivially holds true (by Young's inequality). $\square$
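On a finite state space, Theorem 2.1 reduces to the statement that the gradient of $H_m$ in the $\nu$-weighted inner product is $-\Delta_m$; this can be checked by finite differences. A minimal sketch (the 5-state reversible chain is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# a reversible chain on 5 states built from symmetric weights
N = 5
w = rng.uniform(0.5, 1.5, size=(N, N))
w = (w + w.T) / 2
np.fill_diagonal(w, 0.0)
nu = w.sum(axis=1)
P = w / nu[:, None]

def H(f):
    # H_m(f) = (1/4) sum_x nu_x sum_y P_xy (f_x - f_y)^2
    return 0.25 * (nu[:, None] * P * (f[:, None] - f[None, :]) ** 2).sum()

def laplacian(f):
    return P @ f - f                 # Delta_m f

f = rng.standard_normal(N)
eps = 1e-6
for z in range(N):
    e = np.zeros(N); e[z] = 1.0
    fd = (H(f + eps * e) - H(f - eps * e)) / (2 * eps)
    # the Euclidean partial derivative equals nu_z * (-Delta_m f)_z, i.e.
    # the gradient of H_m in L^2(X, nu) is -Delta_m f
    assert abs(fd - nu[z] * (-laplacian(f)[z])) < 1e-6
print("OK")
```

The reversibility $\nu_x P_{xy} = \nu_y P_{yx}$ is what makes the two boundary terms in the differentiation of $H_m$ combine into $-\nu_z(\Delta_m f)_z$.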

From the above result, and as a consequence of Theorem A.27 and the Lumer-Phillips theorem (Theorem A.15), we obtain the following result.

Theorem 2.2 $\Delta_m$ generates a $C_0$-semigroup $(T_t^m)_{t\ge0}$ in $L^2(X,\nu)$ (see Definition A.7 and the subsequent remarks), given by the Hille-Yosida exponential formula:

$$T_t^m u_0 = \lim_{n\to+\infty}\left(I - \frac{t}{n}\Delta_m\right)^{-n} u_0, \quad \forall u_0 \in L^2(X,\nu).$$

Moreover, for any $u_0 \in L^2(X,\nu)$, $u(t) := T_t^m u_0$ is the unique strong solution of the heat equation (which we may also refer to as the m-heat equation):

$$\begin{cases} u'(t) = \Delta_m u(t) & \text{for every } t \in (0,+\infty), \\ u(0) = u_0, \end{cases} \tag{2.2}$$

that is, $u \in C([0,+\infty) : L^2(X,\nu)) \cap W^{1,1}((0,+\infty) : L^2(X,\nu))$ and satisfies

$$\begin{cases} \dfrac{du}{dt}(t)(x) = \displaystyle\int_X (u(t)(y)-u(t)(x))\,dm_x(y) & \text{for a.e. } t > 0 \text{ and } \nu\text{-a.e. } x \in X, \\ u(0) = u_0. \end{cases}$$

Definition 2.3 We denote $e^{-t\Delta_m} := T_t^m$ and say that $\{e^{-t\Delta_m} : t \ge 0\}$ is the heat flow in the random walk space $[X,\mathcal{B},m,\nu]$ or, alternatively, the m-heat flow.

Remark 2.4 Given $(f,g) \in \partial H_m$ and $q \in P_0$ (see Sect. A.7), by (1.7), we have

$$\int_X q(f(x))\,g(x)\,d\nu(x) = \frac12\int_{X\times X} (q(f(y))-q(f(x)))(f(y)-f(x))\,dm_x(y)\,d\nu(x) \ge 0.$$

Therefore, by Corollary A.32, $\partial H_m$ is completely accretive, thus m-completely accretive (see Minty's theorem in Appendix A.4). Consequently, $\{e^{-t\Delta_m}\}_{t\ge0}$ is a Markovian semigroup (i.e. $0 \le e^{-t\Delta_m}u_0 \le 1$ $\nu$-a.e. whenever $u_0 \in L^2(X,\nu)$ and $0 \le u_0 \le 1$ $\nu$-a.e.). Indeed, by Proposition A.39, since $\partial H_m = -\Delta_m$ is m-completely accretive, we have that $e^{-t\Delta_m}$ is a complete contraction for every $t \ge 0$ (see Definition A.28); thus, in particular,

$$\left\|\left(e^{-t\Delta_m}u_0\right)^-\right\|_{L^2(X,\nu)} \le \|(u_0)^-\|_{L^2(X,\nu)} \quad \forall u_0 \in L^2(X,\nu) \text{ and } t \ge 0,$$

which implies that $e^{-t\Delta_m}u_0 \ge 0$ if $u_0 \ge 0$; and

$$\left\|\left(e^{-t\Delta_m}u_0 - 1\right)^+\right\|_{L^2(X,\nu)} \le \|(u_0-1)^+\|_{L^2(X,\nu)} \quad \forall u_0 \in L^2(X,\nu) \text{ and } t \ge 0,$$

which yields that $e^{-t\Delta_m}u_0 \le 1$ if $u_0 \le 1$. Moreover, another consequence is that

$$\left\|(I+\partial H_m)^{-1}f\right\|_{L^1(X,\nu)} \le \|f\|_{L^1(X,\nu)} \quad \forall f \in L^2(X,\nu).$$

We can also consider the heat flow in $L^1(X,\nu)$.

Definition 2.5 On $L^1(X,\nu)$ we define the operator $A$ as follows: $Af = g \iff g(x) = -\Delta_m f(x)$ for all $x \in X$.


Theorem 2.6 The operator $A$ is m-completely accretive in $L^1(X,\nu)$, with domain equal to $L^1(X,\nu)$.

Proof The complete accretivity of $A$ is proved as in the previous remark. The domain of this operator has been studied in Chap. 1. Let us see that $A$ is m-completely accretive in $L^1(X,\nu)$. By Theorem 2.1, we have:
$$R(I+\partial\mathcal{H}_m) = L^2(X,\nu).$$
Therefore, given $f \in L^1(X,\nu)\cap L^2(X,\nu)$, there exists $g \in L^2(X,\nu)$ such that
$$f = (I+\partial\mathcal{H}_m)g,$$
but, by Remark 2.4, $g \in L^1(X,\nu)\cap L^2(X,\nu)$, thus
$$L^1(X,\nu)\cap L^2(X,\nu) \subset R(I+A).$$
Then, by (A.9),
$$L^1(X,\nu) = \overline{L^1(X,\nu)\cap L^2(X,\nu)}^{\,L^1(X,\nu)} \subset \overline{R(I+A)}^{\,L^1(X,\nu)} \subset R\big(I+\overline{A}\big).$$
However, using the same argument as in the proof of (2.1), it is easy to see that $A$ is closed, so
$$L^1(X,\nu) = R(I+A). \qquad\square$$

By the above result and Theorem A.39, $A$ generates a $C_0$-semigroup $(S(t))_{t\geq 0}$ in $L^1(X,\nu)$. Moreover:
$$\|S(t)u_0\|_{L^p(X,\nu)} \leq \|u_0\|_{L^p(X,\nu)}, \qquad \|(S(t)u_0)^+\|_{L^p(X,\nu)} \leq \|(u_0)^+\|_{L^p(X,\nu)}, \tag{2.3}$$
for every $1 \leq p \leq +\infty$ and every $u_0 \in L^1(X,\nu)\cap L^p(X,\nu)$. If $\nu(X) < \infty$, $S(t)$ is an extension to $L^1(X,\nu)$ of the heat flow $e^{-t\Delta_m}$ in $L^2(X,\nu)$; moreover, by Corollary A.41, for every $u_0 \in L^1(X,\nu)$, the mild solution $S(t)u_0$ of problem (2.2) is a strong solution. Nevertheless, we can prove that, for $u_0 \in L^1(X,\nu)$, $e^{-t\Delta_m}u_0 := S(t)u_0$ is a strong solution regardless of the finiteness of $\nu(X)$. In fact, it follows from the following result that $u$ belongs to $C([0,+\infty) : L^1(X,\nu)) \cap C^\infty((0,T) : L^1(X,\nu))$.

Theorem 2.7 Let $u_0 \in L^1(X,\nu)$. Then (see (1.1) for the definition of $m_x^{*n}$),

$$e^{-t\Delta_m}u_0(x) = e^{-t}\sum_{n=0}^{+\infty}\left(\int_X u_0(y)\,dm_x^{*n}(y)\right)\frac{t^n}{n!} \quad \text{for every } x \in X \text{ and } t>0. \tag{2.4}$$
In particular, for $D \in \mathcal{B}$ with $\nu(D) < +\infty$, we have:
$$e^{-t\Delta_m}\chi_D(x) = e^{-t}\sum_{n=0}^{+\infty} m_x^{*n}(D)\,\frac{t^n}{n!} \quad \text{for every } x \in X \text{ and } t>0.$$

Proof Recall that $m_x^{*0} = \delta_x$, $x \in X$. Then, for $u_0 \in L^1(X,\nu)$, set
$$u(t)(x) := e^{-t}\sum_{n=0}^{+\infty}\left(\int_X u_0(y)\,dm_x^{*n}(y)\right)\frac{t^n}{n!} \quad \text{for } x \in X \text{ and } t>0.$$

Let us see that $u$ is well defined. Recall that, by the invariance of $\nu$ with respect to $m_x^{*n}$, since $u_0 \in L^1(X,\nu)$, we have that $u_0 \in L^1(X,m_x^{*n})$ for $\nu$-a.e. $x \in X$ and every $n \in \mathbb{N}$. Moreover, for $k \in \mathbb{N}$ and $t>0$,

$$\begin{aligned}\int_X \sum_{n=0}^{k}\left|\int_X u_0(y)\,dm_x^{*n}(y)\right|\frac{t^n}{n!}\,d\nu(x) &\leq \sum_{n=0}^{k}\int_X\int_X |u_0(y)|\,dm_x^{*n}(y)\,d\nu(x)\,\frac{t^n}{n!}\\ &= \sum_{n=0}^{k}\frac{t^n}{n!}\int_X |u_0(x)|\,d\nu(x) \leq e^t\,\|u_0\|_{L^1(X,\nu)}.\end{aligned}$$

Then, if
$$f_k(x,t) := \sum_{n=0}^{k}\left|\int_X u_0(y)\,dm_x^{*n}(y)\right|\frac{t^n}{n!}, \quad (x,t) \in X\times(0,+\infty),$$
we have that $0 \leq f_k(x,t) \leq f_{k+1}(x,t)$ and $\int_X f_k(x,t)\,d\nu(x) \leq e^t\|u_0\|_{L^1(X,\nu)}$ for every $k \in \mathbb{N}$, $x \in X$ and $t>0$. Therefore, we may apply the monotone convergence theorem to get that the function
$$x \mapsto \sum_{n=0}^{+\infty}\left|\int_X u_0(y)\,dm_x^{*n}(y)\right|\frac{t^n}{n!}, \quad x \in X,$$
belongs to $L^1(X,\nu)$ for each $t>0$ (hence, in particular, it is $\nu$-a.e. finite) with
$$\int_X \sum_{n=0}^{+\infty}\left|\int_X u_0(y)\,dm_x^{*n}(y)\right|\frac{t^n}{n!}\,d\nu(x) \leq e^t\,\|u_0\|_{L^1(X,\nu)}, \quad t>0.$$

Note that the same applies to the function
$$x \mapsto \sum_{n=0}^{+\infty}\left(\int_X |u_0(y)|\,dm_x^{*n}(y)\right)\frac{t^n}{n!}, \quad x \in X.$$
From this we get that $u(t)(x)$ is well defined, and also the uniform convergence of the series for $t$ in compact subsets of $[0,+\infty)$. Hence:
$$\frac{du}{dt}(t)(x) = -u(t)(x) + e^{-t}\sum_{n=1}^{+\infty}\left(\int_X u_0(y)\,dm_x^{*n}(y)\right)\frac{t^{n-1}}{(n-1)!}.$$

Therefore, to prove (2.4), we only need to show that
$$e^{-t}\sum_{n=1}^{+\infty}\left(\int_X u_0(y)\,dm_x^{*n}(y)\right)\frac{t^{n-1}}{(n-1)!} = \int_X u(t)(z)\,dm_x(z).$$
Recall that, for every $x \in X$ and $n \in \mathbb{N}$,
$$\int_X u_0(y)\,dm_x^{*n}(y) = \int_X\left(\int_X u_0(y)\,dm_z^{*(n-1)}(y)\right)dm_x(z).$$

Consequently,
$$\begin{aligned} e^{-t}\sum_{n=1}^{+\infty}\left(\int_X u_0(y)\,dm_x^{*n}(y)\right)\frac{t^{n-1}}{(n-1)!} &= e^{-t}\sum_{n=1}^{+\infty}\int_X\left(\int_X u_0(y)\,dm_z^{*(n-1)}(y)\right)dm_x(z)\,\frac{t^{n-1}}{(n-1)!}\\ &= \int_X\left(e^{-t}\sum_{n=1}^{+\infty}\left(\int_X u_0(y)\,dm_z^{*(n-1)}(y)\right)\frac{t^{n-1}}{(n-1)!}\right)dm_x(z)\\ &= \int_X u(t)(z)\,dm_x(z), \end{aligned}$$

where we have interchanged the series and the integral thanks to the dominated convergence theorem, since
$$\left|e^{-t}\sum_{n=1}^{k}\left(\int_X u_0(y)\,dm_z^{*(n-1)}(y)\right)\frac{t^{n-1}}{(n-1)!}\right| \leq e^{-t}\sum_{n=1}^{+\infty}\left(\int_X |u_0(y)|\,dm_z^{*(n-1)}(y)\right)\frac{t^{n-1}}{(n-1)!} =: F(z,t)$$
and $F(\cdot,t)$ belongs to $L^1(X,\nu)$, thus to $L^1(X,m_x)$ for $\nu$-a.e. $x \in X$ and every $t>0$. $\square$
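On a finite state space, where $m_x^{*n}$ is the $n$-step transition of a Markov matrix $K$ (cf. Example 2.10 (2) below), formula (2.4) can be checked numerically. The following sketch, in plain Python with a hypothetical 3-state reversible kernel, compares the truncated series with a forward-Euler integration of $u' = \Delta_m u$; the kernel, the initial datum, and all tolerances are illustrative assumptions, not part of the text.

```python
import math

# Heat flow on a finite state space: u(t) = e^{-t} * sum_n (t^n/n!) K^n u0,
# which solves u'(t) = (K - I) u(t) = Delta_m u(t).
# K is a hypothetical 3-state birth-death kernel, reversible w.r.t. (1/4, 1/2, 1/4).
K = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def heat_series(K, u0, t, terms=60):
    # truncated version of (2.4): e^{-t} sum_{n<terms} (t^n/n!) K^n u0
    u = [0.0] * len(u0)
    Knu0 = list(u0)          # K^0 u0
    coeff = 1.0              # t^n / n!
    for n in range(terms):
        u = [ui + coeff * ki for ui, ki in zip(u, Knu0)]
        Knu0 = mat_vec(K, Knu0)
        coeff *= t / (n + 1)
    return [math.exp(-t) * ui for ui in u]

def heat_euler(K, u0, t, steps=20000):
    # forward Euler for u' = (K - I) u, as a cross-check
    dt = t / steps
    u = list(u0)
    for _ in range(steps):
        Ku = mat_vec(K, u)
        u = [ui + dt * (kui - ui) for ui, kui in zip(u, Ku)]
    return u

u0 = [1.0, 0.0, 0.0]         # indicator of the first state
t = 1.5
u_series = heat_series(K, u0, t)
u_ode = heat_euler(K, u0, t)
```

The two answers agree up to the Euler error, and the $\nu$-mass $\sum_i \nu_i u_i(t)$ stays constant in $t$, in line with (2.5).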

The above result generalizes the following result, given in [147] in the context of Example 1.40 and proven by means of the Fourier transform: if $D \subset \mathbb{R}^N$ has finite Lebesgue measure, then
$$e^{-t\Delta_{m^J}}\chi_D(x) = e^{-t}\sum_{n=0}^{\infty}\left(\int_D (J*)^n(x-y)\,dy\right)\frac{t^n}{n!} \quad \text{for every } x \in \mathbb{R}^N \text{ and } t>0,$$
where $(J*)^1 := J$, $(J*)^2 := J*J$ (the convolution of $J$ with itself) and $(J*)^{n+1}$ is defined inductively by $(J*)^{n+1} := (J*)^n * J$ for $n \geq 2$.

Remark 2.8 In the above result, we can easily replace $L^1(X,\nu)$ with any other $L^p(X,\nu)$ space, $1 < p < +\infty$.

As a consequence of (1.6), we have the following.

Theorem 2.9 If $\nu$ is a probability measure, the m-heat flow $(e^{-t\Delta_m})_{t\geq 0}$ conserves the mass.

Proof Let $u_0 \in L^1(X,\nu)$. Then,

$$\frac{d}{dt}\int_X e^{-t\Delta_m}u_0(x)\,d\nu(x) = \int_X \Delta_m e^{-t\Delta_m}u_0(x)\,d\nu(x) = 0,$$
and, therefore,
$$\int_X e^{-t\Delta_m}u_0(x)\,d\nu(x) = \int_X u_0(x)\,d\nu(x) \quad \forall t>0. \tag{2.5}$$
$\square$

Example 2.10 (1) If we consider the metric random walk space $[\mathbb{R}^N, d, m^J, \mathcal{L}^N]$ as defined in Example 1.40, the Laplacian is given by
$$\Delta_{m^J}f(x) := \int_{\mathbb{R}^N}\big(f(y)-f(x)\big)\,J(x-y)\,dy.$$
Then, given $u_0 \in L^2(\mathbb{R}^N,\mathcal{L}^N)$, $u(t) := e^{-t\Delta_{m^J}}u_0$ is the solution of the $J$-nonlocal heat equation:
$$\begin{cases}\dfrac{du}{dt}(t,x) = \displaystyle\int_{\mathbb{R}^N}\big(u(t)(y)-u(t)(x)\big)\,J(x-y)\,dy & \text{in } \mathbb{R}^N\times(0,+\infty),\\[4pt] u(0)=u_0.\end{cases} \tag{2.6}$$
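A toy discrete analogue of (2.6) can be obtained by sampling $J$ on a periodic 1-D grid. The following sketch (explicit Euler, hypothetical 5-point symmetric kernel with unit mass) illustrates that the antisymmetric interaction term conserves total mass and keeps $0 \le u \le 1$; the grid, kernel, and step sizes are assumptions for illustration only, not a convergent discretization taken from the text.

```python
# Explicit-Euler sketch of the J-nonlocal heat equation (2.6) on a periodic
# 1-D grid.  J is a hypothetical symmetric probability kernel on offsets;
# the scheme conserves total mass because the coupling is antisymmetric.

N = 64                      # grid points on the periodic interval [0, 1)
h = 1.0 / N
J = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}   # symmetric, sums to 1

u = [1.0 if N // 4 <= i < N // 2 else 0.0 for i in range(N)]  # indicator datum
mass0 = sum(u) * h

dt = 0.01
for _ in range(200):
    unew = []
    for i in range(N):
        # du/dt(x_i) = sum_k J(k) (u(x_{i+k}) - u(x_i))   (discrete (2.6))
        rate = sum(w * (u[(i + k) % N] - u[i]) for k, w in J.items())
        unew.append(u[i] + dt * rate)
    u = unew

mass = sum(u) * h
```

Each Euler step is a convex combination of neighbouring values, which is why the Markov property $0 \le u \le 1$ survives the discretization.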


(2) Consider the random walk space $[X,\mathcal{B},m^K,\pi]$ associated with a Markov kernel $K$ (as in Example 1.43) and assume that the stationary probability measure $\pi$ is reversible. Then, the Laplacian $\Delta_{m^K}$ acting on $f \in L^2(X,\pi)$ is given by
$$\Delta_{m^K}f(x) := \int_X f(y)\,dm_x^K(y) - f(x) = \sum_{y\in X}K(x,y)f(y) - f(x), \quad x \in X.$$
Consequently, given $u_0 \in L^2(X,\pi)$, $u(t) := e^{-t\Delta_{m^K}}u_0$ is the solution of the equation:
$$\begin{cases}\dfrac{du}{dt}(t,x) = \displaystyle\sum_{y\in X}K(x,y)u(t)(y) - u(t)(x) & \text{on } (0,+\infty)\times X,\\[4pt] u(0)=u_0.\end{cases}$$
Therefore, $e^{-t\Delta_{m^K}} = e^{-t(I-K)}$ is the heat semigroup on $X$ with respect to the geometry determined by the Markov kernel $K$. In the case that $X$ is a finite set, we have:
$$e^{-t\Delta_{m^K}} = e^{-t(I-K)} = e^{-t}\sum_{n=0}^{+\infty}\frac{t^nK^n}{n!}.$$

(3) Let $[V, d_H, m^H, \nu_H]$ be the metric random walk space associated with a weighted hypergraph $H=(V,E)$ (as in Example 1.44) and let $\Delta_H := \Delta_{m^H}$. Then,
$$\begin{aligned}\Delta_H f(x) &:= \int_V \big(f(y)-f(x)\big)\,dm_x^H(y)\\ &= \sum_{y\in V}\big(f(y)-f(x)\big)\,\frac{1}{d_x}\sum_{e\in E} w(e)\,\frac{h(x,e)\,h(y,e)}{\delta(e)}\\ &= \frac{1}{d_x}\sum_{y\in V}\ \sum_{e\in E:\,x,y\in e}\big(f(y)-f(x)\big)\,\frac{w(e)}{\delta(e)},\end{aligned}$$
and, for every initial datum $u_0 \in L^2(V,\nu_H)$, the heat equation
$$\begin{cases}\dfrac{du}{dt} = \Delta_H u(t) & \text{in } (0,+\infty)\times V,\\[4pt] u(0)=u_0,\end{cases} \tag{2.7}$$
has a unique solution $u$ in the sense that $u \in C([0,+\infty) : L^2(V,\nu_H)) \cap C^1((0,+\infty) : L^2(V,\nu_H))$ and satisfies (2.7), that is,
$$\begin{cases}\dfrac{du}{dt}(t,x) = \dfrac{1}{d_x}\displaystyle\sum_{e\in E}\sum_{y\in V} w(e)\,\frac{h(x,e)\,h(y,e)}{\delta(e)}\,\big(u(t)(y)-u(t)(x)\big) & \text{in } (0,+\infty)\times V,\\[4pt] u(0)=u_0.\end{cases}$$


For every $u_0 \in L^2(V,\nu_H)$, we denote $e^{-t\Delta_H}u_0 := u(t)$, where $u(t)$ is the unique solution of the heat equation (2.7), and we call $(e^{-t\Delta_H})_{t\geq 0}$ the heat flow on the hypergraph $H$. Note that, by Theorem 2.7, for every $u_0 \in L^2(V,\nu_H)\cap L^1(V,\nu_H)$,
$$e^{-t\Delta_H}u_0(x) = e^{-t}\sum_{n=0}^{\infty}\left(\int_V u_0(y)\,d(m_x^H)^{*n}(y)\right)\frac{t^n}{n!},$$
where $\int_V u_0(y)\,d(m_x^H)^{*0}(y) = u_0(x)$.
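The two expressions obtained above for $\Delta_H$ can be checked on a small example. The sketch below assumes the conventions of Example 1.44 (not reproduced here): $h(x,e)$ is the 0/1 incidence, $\delta(e) = |e|$, and $d_x = \sum_e w(e)h(x,e)$; the toy hypergraph and weights are hypothetical.

```python
# Delta_H on a toy weighted hypergraph: two equivalent expressions.
# Assumed conventions: h(x,e) is 0/1 incidence, delta(e) = |e|,
# d_x = sum_e w(e) h(x,e).

V = [0, 1, 2, 3]
E = [frozenset({0, 1, 2}), frozenset({2, 3}), frozenset({0, 3})]
w = {E[0]: 2.0, E[1]: 1.0, E[2]: 0.5}

def d(x):
    return sum(w[e] for e in E if x in e)

f = {0: 1.0, 1: -2.0, 2: 0.5, 3: 3.0}

def lap_via_kernel(x):
    # Delta_H f(x) = sum_y (f(y)-f(x)) (1/d_x) sum_e w(e) h(x,e)h(y,e)/delta(e)
    return sum((f[y] - f[x]) / d(x) * sum(w[e] / len(e) for e in E if x in e and y in e)
               for y in V)

def lap_collapsed(x):
    # Delta_H f(x) = (1/d_x) sum_y sum_{e: x,y in e} (f(y)-f(x)) w(e)/delta(e)
    return sum((f[y] - f[x]) * w[e] / len(e)
               for y in V for e in E if x in e and y in e) / d(x)

vals1 = [lap_via_kernel(x) for x in V]
vals2 = [lap_collapsed(x) for x in V]
```

The weighted sum $\sum_x d_x\,\Delta_H f(x)$ vanishes by antisymmetry, reflecting the invariance of $\nu_H$ (which is proportional to the degrees $d_x$).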

(4) If $\Omega$ is a closed bounded subset of $\mathbb{R}^N$ and we consider the metric random walk space $[\Omega, d, m^{J,\Omega}, \mathcal{L}^N]$ (see Example 1.47),
$$\Delta_{m^{J,\Omega}}f(x) = \int_\Omega \big(f(y)-f(x)\big)\,dm_x^{J,\Omega}(y) = \int_\Omega J(x-y)\big(f(y)-f(x)\big)\,dy.$$
Then, $u(t) := e^{-t\Delta_{m^{J,\Omega}}}u_0$ is the solution of the homogeneous Neumann problem for the $J$-nonlocal heat equation:
$$\begin{cases}\dfrac{du}{dt}(t,x) = \displaystyle\int_\Omega \big(u(t)(y)-u(t)(x)\big)\,J(x-y)\,dy & \text{in } (0,+\infty)\times\Omega,\\[4pt] u(0)=u_0.\end{cases} \tag{2.8}$$
See [21] for a comprehensive study of problems (2.6) and (2.8).

(5) Observe that, in general, given a reversible random walk space $[X,\mathcal{B},m,\nu]$ and $\Omega \in \mathcal{B}$, we have that $u(t) := e^{-t\Delta_{m^{\Omega}}}u_0$ is the solution of
$$\begin{cases}\dfrac{du}{dt}(t)(x) = \displaystyle\int_\Omega \big(u(t)(y)-u(t)(x)\big)\,dm_x^{\Omega}(y) & \text{in } (0,+\infty)\times\Omega,\\[4pt] u(0)=u_0,\end{cases}$$
which, like (2.8), is a homogeneous Neumann problem for the $m$-heat equation. In Chap. 7 we deal with other types of Neumann problems; see in particular Example 7.2.

2.2 Infinite Speed of Propagation

Let us see that the infinite speed of propagation of the heat flow is equivalent to the m-connectedness of the random walk space. In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space.


Theorem 2.11 $[X,\mathcal{B},m,\nu]$ is m-connected if, and only if, for every non-$\nu$-null $0 \leq u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$, we have that $e^{-t\Delta_m}u_0(x) > 0$ for $\nu$-a.e. $x \in X$ and all $t>0$.

Proof ($\Rightarrow$): Given a non-$\nu$-null $0 \leq u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$, there exist $D \in \mathcal{B}$ with $\nu(D)>0$ and $\alpha>0$ such that $u_0 \geq \alpha\chi_D$. Therefore, by Theorem 2.7 and the m-connectedness of $[X,\mathcal{B},m,\nu]$,
$$e^{-t\Delta_m}u_0(x) \geq \alpha\, e^{-t\Delta_m}\chi_D(x) = \alpha\, e^{-t}\sum_{n=0}^{\infty} m_x^{*n}(D)\,\frac{t^n}{n!} > 0$$

for $\nu$-a.e. $x \in X$ and every $t>0$. Indeed, since $[X,\mathcal{B},m,\nu]$ is m-connected, we have that $\sum_{n=1}^{\infty} m_x^{*n}(D) > 0$ $\nu$-a.e.

($\Leftarrow$): Take $D \in \mathcal{B}$ with $\nu(D)>0$. By Theorem 2.7,
$$e^{-t\Delta_m}\chi_D(x) = e^{-t}\sum_{n=0}^{\infty} m_x^{*n}(D)\,\frac{t^n}{n!} > 0 \quad \text{for } \nu\text{-a.e. } x\in X \text{ and every } t>0.$$
Therefore, since $m_x^{*0} = \delta_x$, $\sum_{n=1}^{\infty} m_x^{*n}(D) > 0$ for $\nu$-a.e. $x \in X\setminus D$. However, by Corollary 1.31, we already have that $\sum_{n=1}^{\infty} m_x^{*n}(D) > 0$ for $\nu$-a.e. $x \in D$. $\square$

Remark 2.12 In the preceding proof, if $m$ is $\nu$-irreducible (Definition 1.4), we obtain that, in fact,
$$e^{-t\Delta_m}u_0(x) > 0 \quad \text{for all } x\in X \text{ and all } t>0.$$

Moreover, when $X$ is a topological space, we introduce a weaker notion of m-connectedness which serves to characterize the infinite speed of propagation of the heat flow for continuous initial data.

Definition 2.13 Let $[X,\mathcal{B},m,\nu]$ be a random walk space. Assume that $X$ is equipped with a topology and that $\mathcal{B}$ is the associated Borel $\sigma$-algebra. We say that $[X,\mathcal{B},m,\nu]$ is weakly m-connected if, for every open set $D \in \mathcal{B}$ with $\nu(D)>0$ and $\nu$-a.e. $x \in X$,
$$\sum_{n=1}^{\infty} m_x^{*n}(D) > 0,$$
i.e. if, for every open set $D \in \mathcal{B}$ with $\nu(D)>0$, we have that $\nu(N_D^m) = 0$.

In the same way as m-connectedness is reminiscent of $\varphi$-irreducibility (in fact, of $\varphi$-essential irreducibility), this weaker notion of m-connectedness for topological spaces is evocative of classical notions like that of open set irreducibility (see [155, Chapter 6.1.2]). For the next result, we recall that a topological space $X$ is a normal space if, given any disjoint closed sets $E$ and $F$, there are disjoint neighbourhoods $U$ of $E$ and $V$ of $F$.


Theorem 2.14 Assume that $X$ is a normal space, $\mathcal{B}$ is the associated Borel $\sigma$-algebra and $\nu$ is inner regular. Then, $[X,\mathcal{B},m,\nu]$ is weakly m-connected if, and only if, for every non-$\nu$-null $0 \leq u_0 \in L^1(X,\nu)\cap L^2(X,\nu)\cap C(X)$, we have that $e^{-t\Delta_m}u_0 > 0$ $\nu$-a.e. for all $t>0$.

Proof ($\Rightarrow$): Similar to the proof of the left-to-right implication in Theorem 2.11.

($\Leftarrow$): Take $D \in \mathcal{B}$ open with $\nu(D)>0$. Since $\nu$ is inner regular, there exists a compact set $K \subset D$ with $\nu(K)>0$. By Urysohn's lemma, we may find a continuous function $0 \leq u_0 \leq 1$ such that $u_0 = 0$ on $X\setminus D$ and $u_0 = 1$ on $K$; thus, $u_0 \leq \chi_D$. Hence:
$$e^{-t}\sum_{n=0}^{\infty} m_x^{*n}(D)\,\frac{t^n}{n!} \geq e^{-t\Delta_m}u_0(x) > 0 \quad \text{for } \nu\text{-a.e. } x\in X \text{ and every } t>0.$$
So we conclude as in Theorem 2.11. $\square$
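On a finite state space, the dichotomy behind Theorem 2.11 is easy to observe with the truncated series (2.4): $e^{-t\Delta_m}\chi_D$ is strictly positive everywhere for every $t>0$ when the chain is m-connected, and vanishes identically on components that $D$ does not meet. A sketch with two hypothetical 4-state kernels (a connected path walk and a two-component walk):

```python
import math

# e^{t Delta} chi_D on a finite chain via the truncated series (2.4):
# strictly positive at all states for a connected kernel, identically zero
# across components of a disconnected one.

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def heat(K, u0, t, terms=60):
    u = [0.0] * len(u0); p = list(u0); c = 1.0
    for n in range(terms):
        u = [a + c * b for a, b in zip(u, p)]
        p = mat_vec(K, p)
        c *= t / (n + 1)
    return [math.exp(-t) * a for a in u]

K_path = [[0.5, 0.5, 0.0, 0.0],
          [0.5, 0.0, 0.5, 0.0],
          [0.0, 0.5, 0.0, 0.5],
          [0.0, 0.0, 0.5, 0.5]]        # connected (m-connected) walk

K_split = [[0.5, 0.5, 0.0, 0.0],
           [0.5, 0.5, 0.0, 0.0],
           [0.0, 0.0, 0.5, 0.5],
           [0.0, 0.0, 0.5, 0.5]]       # two disconnected components

chi_D = [1.0, 0.0, 0.0, 0.0]           # D = {0}
t = 0.01                               # even tiny times propagate everywhere
u_conn = heat(K_path, chi_D, t)
u_split = heat(K_split, chi_D, t)
```

For the connected kernel the state farthest from $D$ already carries mass of order $t^3/3!$, which is the discrete trace of the infinite speed of propagation.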

2.3 Asymptotic Behaviour

In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space. It is easy to see that $\Delta_m$ is ergodic if, and only if,
$$f \in L^1(X,\nu)\cap L^2(X,\nu),\ \ e^{-t\Delta_m}f = f\ \ \forall t\geq 0 \ \Longrightarrow\ f \text{ is } \nu\text{-a.e. equal to a constant}.$$
Indeed, just recall that
$$\frac{d}{dt}\,e^{-t\Delta_m}f = \Delta_m e^{-t\Delta_m}f = e^{-t\Delta_m}(\Delta_m f).$$

Moreover, we have the following result.

Proposition 2.15 For every $f \in L^2(X,\nu)$,
$$\lim_{t\to\infty} e^{-t\Delta_m}f = f_\infty \in \{u \in L^2(X,\nu) : \Delta_m u = 0\}.$$
Suppose that $[X,\mathcal{B},m,\nu]$ is m-connected; then:

(i) if $\nu(X) = +\infty$, $f_\infty = 0$ $\nu$-a.e.;
(ii) if $\nu$ is a probability measure, $f_\infty = \int_X f(x)\,d\nu(x)$ $\nu$-a.e.

Proof Since $\mathcal{H}_m$ is a proper and lower semicontinuous function on $L^2(X,\nu)$ attaining its minimum at the zero function and, moreover, $\mathcal{H}_m$ is even, we have, by [54, Theorem 5], that the strong limit in $L^2(X,\nu)$ of $e^{-t\Delta_m}f$ exists and is a minimum point of $\mathcal{H}_m$, i.e.
$$f_\infty \in \{u \in L^2(X,\nu) : \Delta_m u = 0\}.$$
The second part is a consequence of the ergodicity of $\Delta_m$ (recall Theorem 1.55) and the conservation of mass (2.5). See also [25, Proposition 3.1.13]. $\square$

If a Poincaré inequality holds, it follows, with a proof similar to the one in the continuous setting (see, for instance, [25]), that, if $\mathrm{gap}(-\Delta_m) > 0$, then $e^{-t\Delta_m}u_0$ converges to $\nu(u_0)$ with exponential rate $\mathrm{gap}(-\Delta_m)$.

Theorem 2.16 Assume that $\nu$ is a probability measure. The following statements are equivalent:

(i) There exists $\lambda > 0$ such that
$$\lambda\,\mathrm{Var}_\nu(f) \leq \mathcal{H}_m(f) \quad \text{for all } f \in L^2(X,\nu).$$
(ii) For every $f \in L^2(X,\nu)$,
$$\big\|e^{-t\Delta_m}f - \nu(f)\big\|_{L^2(X,\nu)} \leq e^{-2\lambda t}\,\|f-\nu(f)\|_{L^2(X,\nu)} \quad \text{for all } t\geq 0;$$
or, equivalently, for every $f \in L^2(X,\nu)$ with $\nu(f)=0$,
$$\big\|e^{-t\Delta_m}f\big\|_{L^2(X,\nu)} \leq e^{-2\lambda t}\,\|f\|_{L^2(X,\nu)} \quad \text{for all } t\geq 0.$$

Proof (i) $\Rightarrow$ (ii): Given $f \in L^2(X,\nu)$, if $u(t) = e^{-t\Delta_m}f$, by (2.5) and (1.15),
$$\begin{aligned}\frac{d}{dt}\big\|u(t)-\nu(f)\big\|^2_{L^2(X,\nu)} &= 2\int_X \big(u(t)-\nu(f)\big)\,\frac{\partial u(t)}{\partial t}\,d\nu = 2\int_X \big(u(t)-\nu(f)\big)\,\Delta_m u(t)\,d\nu = -4\,\mathcal{H}_m(u(t))\\ &\leq -4\lambda\,\mathrm{Var}_\nu(u(t)) = -4\lambda\,\big\|u(t)-\nu(f)\big\|^2_{L^2(X,\nu)};\end{aligned}$$
from this we get
$$\big\|u(t)-\nu(f)\big\|_{L^2(X,\nu)} \leq \|f-\nu(f)\|_{L^2(X,\nu)}\,e^{-2\lambda t} \quad \text{for all } t\geq 0.$$

(ii) $\Rightarrow$ (i): Given $f \in D(\mathcal{H}_m)$, let $w(t) := e^{-t\Delta_m}f - \nu(f)$. Then, by hypothesis,
$$\|w(t)\|_{L^2(X,\nu)} \leq \|w(0)\|_{L^2(X,\nu)}\,e^{-2\lambda t} \quad \text{for all } t\geq 0, \tag{2.9}$$
and the first-order Taylor expansion of $w$ at $t=0$ is given by
$$w(t) = w(0) + t\,w'(0) + o(t) = w(0) + t\,\Delta_m f + o(t).$$


Hence:
$$\|w(t)\|^2_{L^2(X,\nu)} = \int_X w(t)^2\,d\nu = \int_X w(0)^2\,d\nu + 2t\int_X f\,\Delta_m f\,d\nu + o(t) = \|w(0)\|^2_{L^2(X,\nu)} - 4t\,\mathcal{H}_m(f) + o(t);$$
thus, by (2.9),
$$\|w(0)\|^2_{L^2(X,\nu)}\,\big(1 - e^{-4\lambda t}\big) \leq 4t\,\mathcal{H}_m(f) + o(t).$$
Then, taking limits as $t \downarrow 0$, we obtain that
$$\lambda\,\mathrm{Var}_\nu(f) = \lambda\,\|w(0)\|^2_{L^2(X,\nu)} \leq \mathcal{H}_m(f). \qquad\square$$

Remark 2.17 Assume that $\nu$ is a probability measure. Let $\mu_1$, $\mu_2 \in \mathcal{P}(X)$. We denote by $\|\mu_1-\mu_2\|_{TV}$ the total variation distance between $\mu_1$ and $\mu_2$, i.e.
$$\|\mu_1-\mu_2\|_{TV} := \sup\{|\mu_1(A)-\mu_2(A)| : A \in \mathcal{B}\}.$$
Then, for $f \in L^2(X,\nu)$ with $\nu(f) = 1$ and $\mu_t = \big(e^{-t\Delta_m}f\big)\nu$, we have:
$$\|\mu_t-\nu\|_{TV} \leq \|f-1\|_{L^2(X,\nu)}\,e^{-\mathrm{gap}(-\Delta_m)\,t}.$$
Indeed, by Theorem 2.16, for any $A \in \mathcal{B}$,
$$\left|\int_A e^{-t\Delta_m}f\,d\nu - \nu(A)\right| = \left|\int_A \big(e^{-t\Delta_m}f - 1\big)\,d\nu\right| \leq \left(\int_X \big|e^{-t\Delta_m}f - 1\big|^2\,d\nu\right)^{\frac12} \leq e^{-\mathrm{gap}(-\Delta_m)\,t}\,\|f-1\|_{L^2(X,\nu)}.$$
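For a two-state chain the decay rate in Theorem 2.16 can be computed in closed form: with $K = \bigl(\begin{smallmatrix}1-p & p\\ q & 1-q\end{smallmatrix}\bigr)$, the invariant measure is $\nu = (q,p)/(p+q)$, $\mathrm{gap}(-\Delta_m) = p+q$, and, since the $\nu$-mean-zero subspace is one-dimensional, the exponential decay is exact. A sketch (the values of $p$, $q$, $f$, $t$ are illustrative assumptions):

```python
import math

# Two-state check of the spectral-gap decay in Theorem 2.16:
# ||e^{t Delta} f - nu(f)||_{L^2(nu)} = e^{-(p+q) t} ||f - nu(f)||_{L^2(nu)}.

p, q = 0.3, 0.1
nu = (q / (p + q), p / (p + q))       # invariant measure (q, p)/(p+q)
gap = p + q                           # nonzero eigenvalue of -Delta_m = I - K

f = (2.0, -1.0)
nuf = nu[0] * f[0] + nu[1] * f[1]

def heat(f, t, terms=80):
    # e^{t(K - I)} f via the series e^{-t} sum_n (t^n/n!) K^n f
    a, b = f
    out = [0.0, 0.0]; c = 1.0
    for n in range(terms):
        out[0] += c * a; out[1] += c * b
        a, b = (1 - p) * a + p * b, q * a + (1 - q) * b    # apply K once more
        c *= t / (n + 1)
    return (math.exp(-t) * out[0], math.exp(-t) * out[1])

def l2(g):
    # L^2(nu) norm on the two-point space
    return math.sqrt(nu[0] * g[0] ** 2 + nu[1] * g[1] ** 2)

t = 2.0
ut = heat(f, t)
lhs = l2((ut[0] - nuf, ut[1] - nuf))
rhs = math.exp(-gap * t) * l2((f[0] - nuf, f[1] - nuf))
```

Equality (rather than mere inequality) holds here because $f - \nu(f)$ is itself an eigenfunction of $\Delta_m$.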

2.4 Ollivier-Ricci Curvature

As we have seen in Theorem 2.16, an important tool in the study of the speed of convergence of the heat flow to equilibrium is a global Poincaré inequality. In the case of Riemannian manifolds and Markov diffusion semigroups, a usual condition required to obtain this functional inequality is the positivity of the corresponding Ricci curvature of the underlying space (see [25] and [200]). In [24], Bakry and Émery found a way to define lower Ricci curvature bounds through the heat flow. Moreover, von Renesse and Sturm [173] proved that, on a Riemannian manifold M, the Ricci curvature is bounded from below by some constant $K \in \mathbb{R}$ if, and only if, the Boltzmann-Shannon entropy is K-convex along geodesics in the


2-Wasserstein space of probability measures on M. This was the key observation, used simultaneously by Lott and Villani [139] and Sturm [183], to give a notion of a lower Ricci curvature bound in the general context of length metric measure spaces. In these spaces, a relation between the Bakry-Émery curvature-dimension condition and the notion of the Ricci curvature bound introduced by Lott-Villani-Sturm was obtained by Ambrosio, Gigli and Savaré in [10], where they proved that these two notions of Ricci curvature coincide under certain assumptions on the metric measure space. When the space under consideration is discrete, for instance in the case of a graph, the previous concept of a Ricci curvature bound is not as clearly applicable as in the continuous setting. Indeed, the definition by Lott-Villani-Sturm does not apply if the 2-Wasserstein space over the metric measure space does not contain geodesics, which is the case if the underlying space is discrete. Recently, Erbar and Maas [95], in the framework of Markov chains on discrete spaces and in order to circumvent the nonexistence of 2-Wasserstein geodesics, replaced the 2-Wasserstein metric by a different metric, which was introduced by Maas in [142]. Here we do not consider this notion of Ricci curvature. Nevertheless, the use of the Bakry-Émery curvature-dimension condition to obtain a valid definition of a Ricci curvature bound for Markov chains was considered by Schmuckenschläger [178]. In [138], Lin and Yau applied this idea to graphs. Subsequently, this concept of curvature in the discrete setting has been frequently used (see [133] and the references therein). In Sect. 2.5, we relate the Bakry-Émery curvature-dimension condition to a Poincaré inequality. Having said that, in the discrete case there is another well-suited concept of a Ricci curvature bound, introduced by Y. Ollivier in [166]. In this section, following the work of Ollivier, we see that the positivity of this curvature bound ensures that a Poincaré inequality holds.

In order to introduce the coarse Ricci curvature defined by Ollivier, we first recall the Monge-Kantorovich transportation problem. The main references used on optimal transport theory are the monographs [176] and [199]. Let $(X,d)$ be a Polish metric space; we denote by $\mathcal{P}(X)$ the set of probability measures on $X$. If $(Y,\tilde{d})$ is a Polish metric space and $T : Y \to X$ is a Borel function, the push-forward operator
$$T\# : \mathcal{P}(Y) \to \mathcal{P}(X)$$
is defined by
$$T\#\mu(B) := \mu\big(T^{-1}(B)\big) \quad \text{for all Borel sets } B \subset X.$$

Given $\mu$, $\nu \in \mathcal{P}(X)$, the Monge-Kantorovich problem is the minimization problem
$$\min\left\{\int_{X\times X} d(x,y)\,d\gamma(x,y) : \gamma \in \Pi(\mu,\nu)\right\},$$
$$\Pi(\mu,\nu) := \{\gamma \in \mathcal{P}(X\times X) : \pi_0\#\gamma = \mu,\ \pi_1\#\gamma = \nu\},$$
where $\pi_\alpha\#\gamma$ denotes the push-forward of $\gamma$ by $\pi_\alpha$, and $\pi_\alpha : X\times X \to X$ is defined as $\pi_\alpha(x,y) := x + \alpha(y-x)$ for $\alpha \in \{0,1\}$. For $1 \leq p < \infty$, the $p$-Wasserstein distance between $\mu$ and $\nu$ is defined as
$$W_p^d(\mu,\nu) := \min\left\{\left(\int_{X\times X} d(x,y)^p\,d\gamma(x,y)\right)^{\frac1p} : \gamma \in \Pi(\mu,\nu)\right\}. \tag{2.10}$$
The Wasserstein space of order $p$ is defined as
$$\mathcal{P}_p(X) := \left\{\mu \in \mathcal{P}(X) : \int_X d(x_0,x)^p\,d\mu(x) < +\infty\right\},$$
where $x_0 \in X$ is arbitrary. This space does not depend on the choice of the point $x_0$, and $W_p^d$ defines a (finite) distance on $\mathcal{P}_p(X)$. The Monge-Kantorovich problem has a dual formulation, which can be stated as follows (see, for instance, [199, Theorem 1.14]).

Kantorovich-Rubinstein's Theorem Let $\mu, \nu \in \mathcal{P}_1(X)$. Then,
$$W_1^d(\mu,\nu) = \sup\left\{\int_X u\,d(\mu-\nu) : u \in K_d(X)\right\} = \sup\left\{\int_X u\,d(\mu-\nu) : u \in K_d(X)\cap L^\infty(X,\nu)\right\},$$
where
$$K_d(X) := \{u : X \to \mathbb{R} : |u(y)-u(x)| \leq d(y,x)\}.$$

For the proof of the following result, see, for instance, [200, Theorem 6.9].

Theorem 2.18 Let $(X,d)$ be a Polish space. The Wasserstein distance $W_1^d$ metrizes the weak convergence in $\mathcal{P}_1(X)$.

In Riemannian geometry, positive Ricci curvature is characterized by the fact that "small balls are closer, in the 1-Wasserstein distance, than their centers are" (see [173]). In the framework of metric random walk spaces, inspired by this, Y. Ollivier [166] introduced the concept of coarse Ricci curvature, substituting the measures $m_x$ for the balls and using the 1-Wasserstein distance to measure the distance between them.

Definition 2.19 Given a random walk $m$ on a Polish metric space $(X,d)$ such that each measure $m_x$ has finite first moment, for any two distinct points $x, y \in X$, the Ollivier-Ricci curvature (or coarse Ricci curvature) of $[X,d,m]$ along $(x,y)$ is defined as

$$\kappa_m(x,y) := 1 - \frac{W_1^d(m_x,m_y)}{d(x,y)}.$$
The Ollivier-Ricci curvature of $[X,d,m]$ is defined by
$$\kappa_m := \inf_{\substack{x,y\in X\\ x\neq y}} \kappa_m(x,y).$$
We will write $\kappa(x,y)$ instead of $\kappa_m(x,y)$, and $\kappa = \kappa_m$, if the context allows no confusion.

Observe that, in principle, the metric $d$ and the random walk $m$ of a metric random walk space $[X,d,m,\nu]$ have no relation between them aside from the fact that $m$ is defined on the Borel $\sigma$-algebra associated with $d$ and that each $m_x$, $x \in X$, has finite first moment. Therefore, we cannot expect to obtain strong results on the properties of $m$ by imposing conditions only in terms of $d$. For example, as we will see in Example 3.34, balls in random walk spaces are not necessarily m-calibrable (see Definition 3.32). However, imposing conditions on $\kappa$, like $\kappa > 0$, effectively creates a strong relation between the random walk and the metric, which allows us to prove results like Theorem 2.26. In [166] one can find many interesting examples with $\kappa > 0$.

Remark 2.20 If $(X,d,\mu)$ is a smooth complete Riemannian manifold and $(m_x^{\mu,\varepsilon})$ is the $\varepsilon$-step random walk associated with $\mu$ given in Example 1.45, then it is proved in [173] (see also [166]) that, up to scaling by $\varepsilon^2$, $\kappa_{m^{\mu,\varepsilon}}(x,y)$ gives back the ordinary Ricci curvature as $\varepsilon \to 0$.

Example 2.21 Let $[\mathbb{R}^N, d, m^J, \mathcal{L}^N]$ be the metric random walk space given in Example 1.40. Let us see that $\kappa(x,y) = 0$ for every $x, y \in \mathbb{R}^N$ with $x \neq y$. Given $x, y \in \mathbb{R}^N$, $x \neq y$, by Kantorovich-Rubinstein's Theorem, we have:
$$\begin{aligned}W_1^d(m_x^J, m_y^J) &= \sup\left\{\int_{\mathbb{R}^N} u(z)\big(J(x-z)-J(y-z)\big)\,dz : u \in K_d(\mathbb{R}^N)\right\}\\ &= \sup\left\{\int_{\mathbb{R}^N} \big(u(x+z)-u(y+z)\big)\,J(z)\,dz : u \in K_d(\mathbb{R}^N)\right\}.\end{aligned}$$
Now, for $u \in K_d(\mathbb{R}^N)$,
$$\int_{\mathbb{R}^N}\big(u(x+z)-u(y+z)\big)\,J(z)\,dz \leq \|x-y\|,$$
thus $W_1^d(m_x^J, m_y^J) \leq \|x-y\|$. On the other hand, if $u(z) := \frac{\langle z,\,x-y\rangle}{\|x-y\|}$, then $u \in K_d(\mathbb{R}^N)$, hence
$$W_1^d(m_x^J, m_y^J) \geq \int_{\mathbb{R}^N}\big(u(x+z)-u(y+z)\big)\,J(z)\,dz = \|x-y\|.$$
Therefore,
$$W_1^d(m_x^J, m_y^J) = \|x-y\|,$$
and, consequently, $\kappa(x,y) = 0$.

Example 2.22 Let $[V(G), d_G, m^G, \nu_G]$ be the metric random walk space associated to a locally finite weighted discrete graph $G = (V(G), E(G))$ as defined in Example 1.41, and recall that $N_G(x) := \{z \in V(G) : z \sim x\}$ for $x \in V(G)$. Let $x, y \in V(G)$ be distinct vertices. Then, the Ollivier-Ricci curvature along $(x,y) \in E(G)$ is
$$\kappa(x,y) = 1 - \frac{W_1^{d_G}(m_x,m_y)}{d_G(x,y)},$$
where
$$W_1^{d_G}(m_x,m_y) = \inf_{\mu\in A}\ \sum_{z_1\sim x}\,\sum_{z_2\sim y} \mu(z_1,z_2)\,d_G(z_1,z_2),$$
$A$ being the set of all matrices with entries indexed by $N_G(x)\times N_G(y)$ such that $\mu(z_1,z_2) \geq 0$ and
$$\sum_{z_2\sim y}\mu(z_1,z_2) = \frac{w_{xz_1}}{d_x}, \qquad \sum_{z_1\sim x}\mu(z_1,z_2) = \frac{w_{yz_2}}{d_y}, \qquad \text{for } (z_1,z_2) \in N_G(x)\times N_G(y).$$
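When the state space is $\mathbb{Z}$ with $d(i,j) = |i-j|$, $W_1$ has the closed form $W_1(\mu,\eta) = \sum_k |F_\mu(k) - F_\eta(k)|$ (the standard one-dimensional formula in terms of cumulative distributions), so $\kappa(x,y)$ can be evaluated exactly without solving a transport program. The sketch below uses it to check the discrete analogue of Example 2.21: a translation-invariant walk $m_x = J(\cdot - x)$ has $\kappa \equiv 0$. The jump kernel $J$ is a hypothetical choice.

```python
# Exact W1 on Z via the cumulative-distribution formula
# W1(mu, eta) = sum_k |F_mu(k) - F_eta(k)|   (metric d(i, j) = |i - j|),
# and the coarse curvature kappa(x, y) = 1 - W1(m_x, m_y)/|x - y|.

def w1(mu, eta):
    # mu, eta: dicts mapping integers to probability masses summing to 1
    keys = set(mu) | set(eta)
    Fm = Fe = total = 0.0
    for k in range(min(keys), max(keys) + 1):
        Fm += mu.get(k, 0.0)
        Fe += eta.get(k, 0.0)
        total += abs(Fm - Fe)
    return total

# hypothetical translation-invariant jump kernel on Z
J = {-1: 0.25, 0: 0.5, 1: 0.25}

def m(x):
    return {x + k: p for k, p in J.items()}

kappa_03 = 1 - w1(m(0), m(3)) / 3     # coarse curvature along (0, 3)
```

Translation invariance forces $W_1(m_x, m_y) = |x-y|$ exactly, so the computed curvature vanishes, mirroring the continuous computation above.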

There is an extensive literature on Ollivier-Ricci curvature on discrete graphs (see, for instance, [30, 38, 71, 112, 128, 138, 166–168] and [170]). Y. Ollivier in [166, Proposition 229] gives the following result.

Proposition 2.23 Let $m$ be a random walk on a Polish metric space $(X,d)$ and assume that each measure $m_x$ has finite first moment. Then, $\kappa(x,y) \geq \kappa \in \mathbb{R}$ for all distinct $x,y \in X$ if, and only if, for every $h$-Lipschitz function $f : X \to \mathbb{R}$, the function $M_m f$ is $h(1-\kappa)$-Lipschitz.

Proof Suppose that $\kappa(x,y) \geq \kappa \in \mathbb{R}$ for all $x,y \in X$ with $x \neq y$. Let $x,y \in X$ with $x \neq y$ and let $\gamma_{xy} \in \Pi(m_x,m_y)$ be an optimal plan, i.e.
$$\int_{X\times X} d(z,z')\,d\gamma_{xy}(z,z') = W_1^d(m_x,m_y).$$
Then, if $f$ is an $h$-Lipschitz function, we have:
$$\begin{aligned}M_m f(x) - M_m f(y) &= \int_X f(z)\,dm_x(z) - \int_X f(z')\,dm_y(z') = \int_{X\times X}\big(f(z)-f(z')\big)\,d\gamma_{xy}(z,z')\\ &\leq h\int_{X\times X} d(z,z')\,d\gamma_{xy}(z,z') = h\,W_1^d(m_x,m_y) = h\,(1-\kappa(x,y))\,d(x,y) \leq h\,(1-\kappa)\,d(x,y),\end{aligned}$$
and, exchanging the roles of $x$ and $y$, we conclude that $M_m f$ is $h(1-\kappa)$-Lipschitz.

Conversely, suppose that whenever $f$ is 1-Lipschitz, $M_m f$ is $(1-\kappa)$-Lipschitz. Then, applying Kantorovich-Rubinstein's theorem, we have, for $x,y \in X$ with $x \neq y$:
$$W_1^d(m_x,m_y) = \sup\left\{\int_X f\,d(m_x-m_y) : f \in K_d(X)\right\} = \sup\{M_m f(x)-M_m f(y) : f \in K_d(X)\} \leq (1-\kappa)\,d(x,y),$$
from which it follows that
$$\kappa_m(x,y) = 1 - \frac{W_1^d(m_x,m_y)}{d(x,y)} \geq \kappa. \qquad\square$$



Y. Ollivier in [166, Proposition 20] gives the following $W_1^d$ contraction property.

Proposition 2.24 Let $m$ be a random walk on a Polish metric space $(X,d)$ and assume that each measure $m_x$ has finite first moment. Then, $\kappa(x,y) \geq \kappa \in \mathbb{R}$ for all distinct $x,y \in X$ if, and only if, for any $\mu, \mu' \in \mathcal{P}_1(X)$ one has (recall Definition 1.8):
$$W_1^d(\mu * m, \mu' * m) \leq (1-\kappa)\,W_1^d(\mu,\mu'). \tag{2.11}$$
Moreover, in this case, if $\mu \in \mathcal{P}_1(X)$, then $\mu * m \in \mathcal{P}_1(X)$.

Proof First, suppose that (2.11) holds and let $x,y \in X$ with $x \neq y$. Then, since $\delta_x * m = m_x$ and $\delta_y * m = m_y$, by (2.11) we have:
$$W_1^d(m_x,m_y) \leq (1-\kappa)\,W_1^d(\delta_x,\delta_y) = (1-\kappa)\,d(x,y),$$
so $\kappa(x,y) \geq \kappa$.

Let us see the converse. For each $x,y \in X$ with $x \neq y$, let $\gamma_{xy} \in \Pi(m_x,m_y)$ be such that
$$W_1^d(m_x,m_y) = \int_{X\times X} d(x',y')\,d\gamma_{xy}(x',y').$$

According to [200, Corollary 5.22], we can choose .γxy to depend measurably on the pair .(x, y).


Let $\Lambda \in \Pi(\mu,\mu')$ be such that
$$W_1^d(\mu,\mu') = \int_{X\times X} d(x',y')\,d\Lambda(x',y').$$
Then,
$$\int_{X\times X} \gamma_{x'y'}\,d\Lambda(x',y') \in \Pi(\mu*m, \mu'*m)$$
and, consequently,
$$\begin{aligned}W_1^d(\mu*m,\mu'*m) &\leq \int_{X\times X}\left(\int_{X\times X} d(x,y)\,d\gamma_{x'y'}(x,y)\right)d\Lambda(x',y')\\ &= \int_{X\times X} W_1^d(m_{x'},m_{y'})\,d\Lambda(x',y') = \int_{X\times X}\big(1-\kappa(x',y')\big)\,d(x',y')\,d\Lambda(x',y') \leq (1-\kappa)\,W_1^d(\mu,\mu'),\end{aligned}$$
by Fubini's theorem applied to $d(x,y)\,d\gamma_{x'y'}(x,y)\,d\Lambda(x',y')$.

Finally, to see that in this situation $\mathcal{P}_1(X)$ is preserved by the random walk, fix some origin $x_0 \in X$ and note that, for any $\mu \in \mathcal{P}_1(X)$,
$$\int_X d(x_0,x)\,d(\mu*m)(x) = W_1^d(\delta_{x_0},\mu*m) \leq W_1^d(\delta_{x_0},m_{x_0}) + W_1^d(m_{x_0},\mu*m) \leq W_1^d(\delta_{x_0},m_{x_0}) + (1-\kappa)\,W_1^d(\delta_{x_0},\mu) < \infty.$$
Indeed, by Definition 1.26, $W_1^d(\delta_{x_0},m_{x_0}) < \infty$ and, since $\mu \in \mathcal{P}_1(X)$, $W_1^d(\delta_{x_0},\mu) < \infty$. $\square$

As an immediate consequence of this contraction property, we obtain the following result.

Corollary 2.25 Let $m$ be a random walk on a Polish metric space $(X,d)$ and assume that each measure $m_x$ has finite first moment. Suppose that $\kappa(x,y) \geq \kappa > 0$ for any two distinct points $x,y \in X$. Then, the random walk has a unique invariant measure $\nu \in \mathcal{P}_1(X)$. Moreover, for any probability measure $\mu \in \mathcal{P}_1(X)$, the sequence $\mu * m^{*n}$ tends exponentially fast to $\nu$ in the $W_1^d$ distance. Namely:
$$\text{(i)}\quad W_1^d(\mu*m^{*n},\nu) \leq (1-\kappa)^n\,W_1^d(\mu,\nu) \quad \forall n\in\mathbb{N},\ \forall\mu\in\mathcal{P}_1(X); \tag{2.12}$$
$$\text{(ii)}\quad W_1^d(m_x^{*n},\nu) \leq (1-\kappa)^n\,\frac{W_1^d(\delta_x,m_x)}{\kappa} \quad \forall n\in\mathbb{N},\ \forall x\in X.$$
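For a two-state space with $d(a,b)=1$ one has $W_1(\mu,\eta) = |\mu(\{a\})-\eta(\{a\})|$, and the contraction in (2.12) holds with equality: the deviation from the invariant measure is multiplied by $1-\kappa = |1-p-q|$ at each step of the walk. A sketch (the parameters are illustrative assumptions):

```python
# Two-state illustration of (2.12): with d(a, b) = 1 and
# K = [[1-p, p], [q, 1-q]], W1(mu, eta) = |mu(a) - eta(a)|, the curvature is
# kappa = 1 - W1(m_a, m_b) = 1 - |1 - p - q|, and W1(mu * m^{*n}, nu) decays
# exactly like (1 - kappa)^n W1(mu, nu).

p, q = 0.3, 0.2
pi_a = q / (p + q)                      # invariant mass at state a
kappa = 1 - abs(1 - p - q)

def step(mu_a):
    # one application of the walk: (mu * m)({a})
    return mu_a * (1 - p) + (1 - mu_a) * q

mu_a = 1.0                              # mu = delta_a
w1_0 = abs(mu_a - pi_a)
dists = []
for n in range(1, 6):
    mu_a = step(mu_a)
    dists.append(abs(mu_a - pi_a))
```

Here $W_1(m_a, m_b) = |(1-p) - q|$ follows directly from the one-dimensional formula, since the two measures live on two points at distance 1.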


In the next result, we see that metric random walk spaces with positive Ollivier-Ricci curvature are m-connected.

Theorem 2.26 Let $[X,d,m,\nu]$ be a metric random walk space such that $\nu$ is a probability measure and each measure $m_x$ has finite first moment. Assume that $\kappa$, the Ollivier-Ricci curvature of $[X,d,m]$, satisfies $\kappa > 0$. Then, $[X,d,m,\nu]$ is m-connected, i.e. $\nu$ is ergodic.

Proof By (2.11) we have that, for any $\mu, \mu' \in \mathcal{P}_1(X)$ and $n \in \mathbb{N}$,
$$W_1^d(\mu*m^{*n}, \mu'*m^{*n}) \leq (1-\kappa)^n\,W_1^d(\mu,\mu'). \tag{2.13}$$
Moreover, by Corollary 2.25, there exists a unique invariant measure $\nu \in \mathcal{P}_1(X)$ with respect to $m$, satisfying (2.12). Then, applying Theorem 2.18,
$$\mu * m^{*n} \rightharpoonup \nu \quad \text{weakly as measures, } \forall\mu\in\mathcal{P}_1(X); \tag{2.14}$$
thus, taking $\mu = \delta_x$, we obtain that
$$m_x^{*n} \rightharpoonup \nu \quad \text{weakly as measures, } \forall x\in X.$$

Let us now see that $[X,d,m,\nu]$ is m-connected if $\kappa > 0$. Take $D \subset X$ a Borel set with $\nu(D)>0$ and suppose that $\nu(N_D^m) > 0$. By Proposition 1.31, we have $\nu(H_D^m) > 0$. Let
$$\mu := \frac{1}{\nu(H_D^m)}\,\nu\llcorner H_D^m \in \mathcal{P}(X) \qquad \text{and} \qquad \mu' := \frac{1}{\nu(N_D^m)}\,\nu\llcorner N_D^m \in \mathcal{P}(X).$$
Now, by Proposition 1.30,
$$\mu * m^{*n} = \mu \qquad \text{and} \qquad \mu' * m^{*n} = \mu',$$
but then, by (2.13), we get:
$$W_1(\mu,\mu') = W_1(\mu*m^{*n}, \mu'*m^{*n}) \leq (1-\kappa)^n\,W_1(\mu,\mu'),$$
which is only possible if $W_1(\mu,\mu') = 0$, since $1-\kappa < 1$. Hence,
$$\mu = \mu',$$
and this implies $1 = \mu'(N_D^m) = \mu(N_D^m) = 0$, which is a contradiction. Therefore, $\nu(N_D^m) = 0$, as desired. $\square$

Remark 2.27 (i) Observe that if $D$ is open and $\nu(D)>0$, then $N_D^m = \emptyset$, i.e.
$$\sum_{n=1}^{\infty} m_x^{*n}(D) > 0 \quad \text{for every } x \in X.$$
Indeed, if $x \in N_D^m$, by (2.14) we would have
$$0 < \nu(D) \leq \liminf_n m_x^{*n}(D) = 0.$$

(ii) By Proposition 1.14, uniqueness of the invariant probability measure implies its ergodicity. Consequently, Theorem 2.26 follows from [166, Corollary 21] (recall also Theorem 1.37). We have presented the result for the sake of completeness, and using the framework of m-connectedness.

The following result was given by Y. Ollivier in [166, Corollary 31].

Theorem 2.28 Let $[X,d,m,\nu]$ be a reversible metric random walk space and assume that $\nu$ is a probability measure. Suppose that
$$\sigma := \int_X\int_X\int_X d(y,z)^2\,dm_x(y)\,dm_x(z)\,d\nu(x) < +\infty.$$
If $\kappa$, the Ollivier-Ricci curvature of $[X,d,m]$, is positive, then the spectrum of $-\Delta_m$ in $H(X,\nu)$ is contained in $[\kappa,\infty)$; thus
$$0 < \kappa \leq \mathrm{gap}(-\Delta_m).$$
Consequently, $[X,d,m,\nu]$ satisfies the Poincaré inequality
$$\frac{\kappa}{2}\,\mathrm{Var}_\nu(f) \leq \mathcal{H}_m(f) \quad \forall f \in L^2(X,\nu).$$

Proof Let us first prove that there exists a constant $C$, depending on $\sigma$ and $\kappa$, such that, for every 1-Lipschitz function $f$,
$$\mathrm{Var}_\nu(f) \leq C = C(\kappa,\sigma). \tag{2.15}$$


Suppose for now that $\|f\|_{L^\infty(X,\nu)} \leq A \in \mathbb{R}$, so that $\mathrm{Var}_\nu(f) < \infty$. First we prove that
$$\lim_{n\to\infty}\mathrm{Var}_\nu\big(M_m^n(f)\big) = 0. \tag{2.16}$$
Let $B_r$ be a ball of radius $r$ in $X$. By Proposition 2.23, we have that $M_m^n(f)$ is $(1-\kappa)^n$-Lipschitz on $B_r$ and bounded by $A$ on $X\setminus B_r$. Therefore:
$$\mathrm{Var}_\nu\big(M_m^n(f)\big) = \frac12\int_{X\times X}\big(M_m^n(f)(x) - M_m^n(f)(y)\big)^2\,d\nu(x)\,d\nu(y) \leq 2(1-\kappa)^{2n}r^2 + 2A^2\,\nu(X\setminus B_r).$$
Taking $r = (1-\kappa)^{-\frac{n}{2}}$ shows that (2.16) holds. Now, since
$$\mathrm{Var}_\nu(f) = \int_X \mathrm{Var}_{m_x}(f)\,d\nu(x) + \mathrm{Var}_\nu\big(M_m(f)\big),$$

by induction, and having (2.16) in mind, we get that
$$\mathrm{Var}_\nu(f) = \sum_{n=0}^{\infty}\int_X \mathrm{Var}_{m_x}\big(M_m^n(f)\big)\,d\nu(x). \tag{2.17}$$
Now, since $f$ is 1-Lipschitz,
$$\mathrm{Var}_{m_x}(f) = \frac12\int_{X\times X}\big(f(y)-f(z)\big)^2\,dm_x(y)\,dm_x(z) \leq \frac12\int_{X\times X} d(y,z)^2\,dm_x(y)\,dm_x(z),$$
and, since $M_m^n(f)$ is $(1-\kappa)^n$-Lipschitz,
$$\mathrm{Var}_{m_x}\big(M_m^n(f)\big) \leq (1-\kappa)^{2n}\,\frac12\int_{X\times X} d(y,z)^2\,dm_x(y)\,dm_x(z).$$

Hence, by (2.17), (2.15) holds for bounded functions; if $f$ is unbounded, (2.15) follows by a limiting argument.

We return to the proof of the theorem. First, note that it is enough to prove that the spectral radius of the averaging operator $M_m$ acting on $H(X,\nu)$ is at most $1-\kappa$. Let $f$ be a $k$-Lipschitz function. As a consequence of (2.15), Lipschitz functions belong to $H(X,\nu)$ and, since $\sigma < +\infty$, the Lipschitz norm controls the $L^2$-norm. By Proposition 2.23, we have that $M_m^n(f)$ is $k(1-\kappa)^n$-Lipschitz, so there exists a constant $Q$ such that
$$\mathrm{Var}_\nu\big(M_m^n(f)\big) \leq Q\,k^2(1-\kappa)^{2n};$$


thus
$$\lim_{n\to\infty}\ \big\|M_m^n(f)\big\|_{L^2(X,\nu)}^{1/n} \leq 1-\kappa.$$
Consequently, the spectral radius of $M_m$ is at most $1-\kappa$ on the subspace of Lipschitz functions. Now, since $M_m$ is bounded and self-adjoint, its spectral radius is controlled by its value on a dense subspace, using the spectral decomposition. Therefore, since Lipschitz functions are dense in $L^2(X,\nu)$ (note that $\nu$ is a probability measure on a metric space and, therefore, regular), the proof is finished. $\square$
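The variance decomposition $\mathrm{Var}_\nu(f) = \int_X \mathrm{Var}_{m_x}(f)\,d\nu(x) + \mathrm{Var}_\nu(M_m(f))$ used in the proof above is the law of total variance, valid because $\nu$ is invariant; it can be verified numerically on a toy reversible 3-state chain (all values below are illustrative assumptions):

```python
# Numerical check of the variance decomposition used in the proof:
# Var_nu(f) = int_X Var_{m_x}(f) dnu(x) + Var_nu(M_m f),
# for a hypothetical 3-state chain with invariant measure nu.

K = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
nu = [0.25, 0.5, 0.25]                  # invariant (and reversible) for K
f = [1.0, -1.0, 2.0]

def var(weights, g):
    mean = sum(w * gi for w, gi in zip(weights, g))
    return sum(w * (gi - mean) ** 2 for w, gi in zip(weights, g))

Mf = [sum(K[i][j] * f[j] for j in range(3)) for i in range(3)]  # M_m f
lhs = var(nu, f)
rhs = sum(nu[i] * var(K[i], f) for i in range(3)) + var(nu, Mf)
```

The identity needs only the invariance $\nu = \int m_x\,d\nu(x)$, so that the second coordinate of the pair $(X,Y)$, $Y \sim m_X$, is again $\nu$-distributed.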



2.5 The Bakry-Émery Curvature-Dimension Condition

To deal with the Bakry-Émery curvature-dimension condition, one needs a carré du champ $\Gamma$ (see [25, Subsection 1.4.2]). In the framework of Markov diffusion semigroups, in order to get good inequalities from this curvature-dimension condition, it is essential that the generator $A$ of the semigroup satisfies the chain rule formula
$$A\big(\Phi(f)\big) = \Phi'(f)\,A(f) + \Phi''(f)\,\Gamma(f) \quad \text{for } f \in D(A) \text{ and smooth } \Phi : \mathbb{R}\to\mathbb{R},$$
which characterizes diffusion operators in the continuous setting (see [25]). Unfortunately, this chain rule does not hold in the discrete setting, and this is one of the main difficulties that arises when working with this curvature-dimension condition in metric random walk spaces. Following [25, Definition 1.4.2], we make the following definition.

Definition 2.29 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. The bilinear map
$$\Gamma(f,g)(x) := \frac12\big(\Delta_m(fg)(x) - f(x)\,\Delta_m g(x) - g(x)\,\Delta_m f(x)\big), \quad x \in X,\ f,g \in L^2(X,\nu),$$
is called the carré du champ operator of $\Delta_m$.

With this notion, and following the theory developed in [25], we can study the Bakry-Émery curvature-dimension condition in reversible random walk spaces; in particular, we study its relation with the spectral gap. According to Bakry and Émery [24], we define the Ricci curvature operator $\Gamma_2$ by iterating $\Gamma$ as follows:
$$\Gamma_2(f,g) := \frac12\big(\Delta_m\Gamma(f,g) - \Gamma(f,\Delta_m g) - \Gamma(\Delta_m f, g)\big),$$
which is well defined for $f,g \in L^2(X,\nu)$. We write, for $f \in L^2(X,\nu)$,

.

 1 m (f, g) − (f, m g) − (m f, g) , 2

which is well defined for .f, g ∈ L2 (X, ν). We write, for .f ∈ L2 (X, ν),

2.5 The Bakry-Émery Curvature-Dimension Condition

(f ) := (f, f ) =

.

83

1 m (f 2 ) − f m f 2

and 2 (f ) := 2 (f, f ) =

.

1 m (f ) − (f, m f ). 2

It is easy to see that, for .x ∈ X, 1 .(f, g)(x) = 2

 ∇f (x, y)∇g(x, y)dmx (y) X

and 1 .(f )(x) = 2

 |∇f (x, y)|2 dmx (y). X

Consequently (recall Definitions 1.70)  (f )(x)dν(x) = 2Hm (f ).

.

(2.18)

X

Furthermore, by (1.6) and (2.18), we get:  2 (f ) dν =

.

X

1 2



 (m (f ) − 2(f, m f )) dν = − X

(f, m f ) dν, X

thus 

 2 (f ) dν =

.

X

(m f )2 dν.

(2.19)

X

Definition 2.30 Let .[X, B, m, ν] be a reversible random walk space. The operator m satisfies the Bakry-Émery curvature-dimension condition .BE(K, n) for .n ∈ (1, +∞) and .K ∈ R if

.

2 (f ) 

.

1 (m f )2 + K(f ) n

∀ f ∈ L2 (X, ν).

(2.20)

The constant n is the dimension of the operator .m , and K is the lower bound of the Ricci curvature of the operator .m . If there exists .K ∈ R such that 2 (f )  K(f )

.

∀ f ∈ L2 (X, ν),

(2.21)

then it is said that the operator .m satisfies the Bakry-Émery curvature-dimension condition .BE(K, ∞).


Observe that if $\Delta_m$ satisfies the Bakry-Émery curvature-dimension condition $BE(K,n)$, then it also satisfies the Bakry-Émery curvature-dimension condition $BE(K,m)$ for $m>n$. Definition 2.30 is motivated by the well-known fact that on a complete $n$-dimensional Riemannian manifold $(M,g)$, the Laplace-Beltrami operator $\Delta_g$ satisfies $BE(K,n)$ if, and only if, the Ricci curvature of the Riemannian manifold is bounded from below by $K$ (see, e.g., [25, Appendix C.6]). As mentioned at the beginning of Sect. 2.4, the use of the Bakry-Émery curvature-dimension condition as a possible definition of a Ricci curvature bound in Markov chains was first considered in [178].

We now relate this concept with a global Poincaré inequality. Integrating (2.20) over $X$ with respect to $\nu$ yields
$$\int_X \Gamma_2(f)\,d\nu \ge \frac1n\int_X(\Delta_m f)^2\,d\nu + K\int_X \Gamma(f)\,d\nu.$$

Now, by (2.18) and (2.19), this inequality can be rewritten as
$$\int_X(\Delta_m f)^2\,d\nu \ge \frac1n\int_X(\Delta_m f)^2\,d\nu + 2K\mathcal{H}_m(f),$$
or, equivalently, as
$$K\,\frac{n}{n-1}\,\mathcal{H}_m(f) \le \frac12\int_X(\Delta_m f)^2\,d\nu. \qquad (2.22)$$
Similarly, integrating (2.21) over $X$ with respect to $\nu$, we get
$$K\,\mathcal{H}_m(f) \le \frac12\int_X(\Delta_m f)^2\,d\nu. \qquad (2.23)$$

Definition 2.31 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space. Let $n\in(1,+\infty)$ and $K\in\mathbb{R}$. The operator $\Delta_m$ satisfies the integrated Bakry-Émery curvature-dimension condition $IBE(K,n)$ if the inequality (2.22) holds for every $f\in L^2(X,\nu)$. Moreover, if (2.23) holds for every $f\in L^2(X,\nu)$, we say that $\Delta_m$ satisfies the integrated Bakry-Émery curvature-dimension condition $IBE(K,\infty)$.

On account of Theorem 1.84, we can rewrite the Poincaré inequality via the integrated Bakry-Émery curvature-dimension conditions as follows (see [25, Theorem 4.8.4]; see also [28, Theorem 2.1]).

Theorem 2.32 Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space and assume that $m$ is ergodic (or, equivalently, that $[X,\mathcal{B},m,\nu]$ is $m$-connected). Let $n\in(1,+\infty)$ and $K>0$. Then:

(1) $\Delta_m$ satisfies $IBE(K,n)$ if, and only if, $[X,\mathcal{B},m,\nu]$ satisfies a Poincaré inequality with constant $K\,\frac{n}{n-1}$.


(2) $\Delta_m$ satisfies $IBE(K,\infty)$ if, and only if, $[X,\mathcal{B},m,\nu]$ satisfies a Poincaré inequality with constant $K$.

Therefore, if $\Delta_m$ satisfies $BE(K,n)$, then
$$\mathrm{gap}(-\Delta_m) \ge K\,\frac{n}{n-1}, \qquad (2.24)$$
and if $\Delta_m$ satisfies $BE(K,\infty)$, then
$$\mathrm{gap}(-\Delta_m) \ge K. \qquad (2.25)$$

In the next example, we see that, in general, an integrated Bakry-Émery curvature-dimension condition $IBE(K,n)$ with $K>0$ does not imply a Bakry-Émery curvature-dimension condition $BE(K,n)$ with $K>0$.

Example 2.33 Consider the weighted discrete graph $G=(V(G),E(G))$ with vertex set $V(G)=\{a,b,c\}$ and weights $w_{a,b}=w_{b,c}=1$ and $w_{i,j}=0$ otherwise. Let $[V(G),d_G,m^G,\nu_G]$ be the associated metric random walk space and let $\Delta := \Delta_{m^G}$. Writing $\delta f(a) := f(b)-f(a)$ and $\delta f(c) := f(b)-f(c)$, a simple calculation gives
$$\Gamma(f)(a) = \frac12(f(b)-f(a))^2 = \frac12(\delta f(a))^2, \qquad \Gamma(f)(c) = \frac12(f(b)-f(c))^2 = \frac12(\delta f(c))^2,$$
and
$$\Gamma(f)(b) = \frac14(f(b)-f(a))^2 + \frac14(f(b)-f(c))^2 = \frac12\big(\Gamma(f)(a)+\Gamma(f)(c)\big).$$
Moreover,
$$\Gamma_2(f)(a) = \frac18(\delta f(c))^2 + \frac58(\delta f(a))^2 + \frac14\,\delta f(a)\,\delta f(c) \qquad (2.26)$$
and
$$\Gamma_2(f)(c) = \frac18(\delta f(a))^2 + \frac58(\delta f(c))^2 + \frac14\,\delta f(a)\,\delta f(c). \qquad (2.27)$$

Having in mind (2.26) and (2.27), the inequality
$$\Gamma_2(f)(v) \ge \frac1n(\Delta f)^2(v) + K\,\Gamma(f)(v)$$
holds true for $v\in\{a,c\}$ and every $f\in L^2(V(G),\nu_G)$ if, and only if,
$$\frac14 y^2 + \frac54 x^2 + \frac12 xy \ge \frac2n x^2 + Kx^2 \quad \forall\,x,y\in\mathbb{R}. \qquad (2.28)$$

Now, since (2.28) is true for $x=0$, (2.28) holds if, and only if,
$$K \le \inf_{x\ne0,\ y}\ \frac{\frac14 y^2 + \frac54 x^2 + \frac12 xy - \frac2n x^2}{x^2}.$$
Moreover, taking $y=\lambda x$, we obtain that the following inequality must be satisfied:
$$K \le \inf_{\lambda}\left(\frac14\lambda^2 + \frac12\lambda + \frac54\right) - \frac2n = 1 - \frac2n.$$

In fact, it is easy to see that (2.28) is true for any $K \le 1-\frac2n$. On the other hand, for $f\in L^2(V(G),\nu_G)$,
$$\Gamma_2(f)(b) = \frac12(\Delta f(b))^2 + \Gamma(f)(b),$$
and it is easy to see that
$$\Gamma_2(f)(b) \ge \frac1n(\Delta f(b))^2 + K\,\Gamma(f)(b) \quad \text{for every } n>1 \text{ and } K \le 1-\frac2n.$$

Therefore, this graph Laplacian satisfies the Bakry-Émery curvature-dimension condition
$$BE\Big(1-\frac2n,\,n\Big) \quad \text{for every } n>1,$$
$K = 1-\frac2n$ being the best constant for a fixed $n>1$. Now, it is easy to see that $\mathrm{gap}(-\Delta)=1$; thus, by Theorem 2.32, $\Delta$ satisfies the integrated Bakry-Émery curvature-dimension condition $IBE(K,n)$ with $K = 1-\frac1n > 1-\frac2n$. Note that $\Delta$ satisfies the Bakry-Émery curvature-dimension condition $BE(1,\infty)$; hence, in this example, the bound in (2.25) is sharp but there is a gap in the bound (2.24).

It is well known that in the case of diffusion semigroups, the Bakry-Émery curvature-dimension condition $BE(K,\infty)$ of the generator is characterized by gradient estimates on the semigroup (see, for instance, [23] or [25]). The same characterization is also true for locally finite weighted discrete graphs (see, e.g., [62] and [133]). With a similar proof, we have that this characterization also holds in the general context of metric random walk spaces.

Theorem 2.34 Let $[X,d,m,\nu]$ be a reversible metric random walk space and let $(T_t)_{t>0} = (e^{-t\Delta_m})_{t>0}$ be the heat semigroup. Then $\Delta_m$ satisfies the Bakry-Émery curvature-dimension condition $BE(K,\infty)$ with $K>0$ if, and only if,
$$\Gamma(T_t f) \le e^{-2Kt}\,T_t(\Gamma(f)) \quad \forall\, t\ge0 \text{ and } f\in L^2(X,\nu). \qquad (2.29)$$

Proof Fix $t>0$. For $s\in[0,t)$, we define the function
$$g(s,x) := e^{-2Ks}\,T_s\big(\Gamma(T_{t-s}f)\big)(x), \quad x\in X.$$
The same computations as in [133] show that
$$\frac{\partial g}{\partial s}(s,x) = 2e^{-2Ks}\,T_s\big(\Gamma_2(T_{t-s}f) - K\,\Gamma(T_{t-s}f)\big)(x).$$
Then, if $\Delta_m$ satisfies the Bakry-Émery curvature-dimension condition $BE(K,\infty)$ with $K>0$, we have that $\frac{\partial g}{\partial s}(s,x)\ge0$, which is equivalent to (2.29). On the other hand, if (2.29) holds, then $\frac{\partial g}{\partial s}(0,x)\ge0$, which is equivalent to
$$\Gamma_2(T_t f) - K\,\Gamma(T_t f) \ge 0.$$
Then, letting $t\to0$, we get $\Gamma_2(f) - K\,\Gamma(f)\ge0$. $\square$
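The curvature computations of Example 2.33, the identity (2.19), and the spectral gap can all be reproduced numerically on the three-vertex path graph. The following sketch is illustrative only: the matrix encoding of the random walk and the test function are the only assumptions beyond the definitions in the text.

```python
import numpy as np

# Path graph a-b-c with unit weights (Example 2.33): m_a = delta_b,
# m_b = (delta_a + delta_c)/2, m_c = delta_b; reversible measure nu ~ degrees.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
nu = np.array([1.0, 2.0, 1.0]) / 4.0
L = P - np.eye(3)  # Delta_m f(x) = int_X (f(y) - f(x)) dm_x(y)

def Gamma(f, g):
    # carre du champ operator (Definition 2.29)
    return 0.5 * (L @ (f * g) - f * (L @ g) - g * (L @ f))

def Gamma2(f, g):
    # iterated operator Gamma_2
    return 0.5 * (L @ Gamma(f, g) - Gamma(f, L @ g) - Gamma(L @ f, g))

rng = np.random.default_rng(0)
f = rng.standard_normal(3)
x, y = f[1] - f[0], f[1] - f[2]  # delta f(a), delta f(c)

# formula (2.26) for Gamma_2(f)(a)
assert np.isclose(Gamma2(f, f)[0], y**2 / 8 + 5 * x**2 / 8 + x * y / 4)
# identity (2.19): integral of Gamma_2(f) equals integral of (Delta_m f)^2
assert np.isclose(nu @ Gamma2(f, f), nu @ (L @ f) ** 2)
# gap(-Delta) = 1, so the BE(1, infinity) bound (2.25) is attained
gap = np.sort(np.linalg.eigvals(-L).real)[1]
assert np.isclose(gap, 1.0)
```

The eigenvalues of $-\Delta$ on this graph are $\{0,1,2\}$, so the spectral gap is exactly 1, matching $BE(1,\infty)$.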

2.6 Logarithmic-Sobolev Inequalities

In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space and, moreover, that $\nu$ is a probability measure.

Definition 2.35 Let $\mu$ be a measure on $X$; if $\mu\ll\nu$ with $\frac{d\mu}{d\nu}=f$, we write $\mu=f\nu$. Let $\mu$ be a nonnegative Radon measure on $X$ with $\mu\ll\nu$. We define the relative entropy of $\mu$ with respect to $\nu$ by
$$\mathrm{Ent}_\nu(\mu) := \begin{cases}\displaystyle\int_X f\log f\,d\nu - \nu(f)\log\nu(f) & \text{if } \mu=f\nu,\ f\ge0,\ f\log f\in L^1(X,\nu),\\[2mm] +\infty & \text{otherwise},\end{cases}$$
with the usual convention that $f(x)\log f(x)=0$ if $f(x)=0$; for simplicity we write $\mathrm{Ent}_\nu(\mu) = \mathrm{Ent}_\nu(f)$.


For $0\le u_0\in L^2(X,\nu)$, let $u(t)=e^{-t\Delta_m}u_0$. Then, if $\Delta_m u(t)\log u(t)\in L^1(X,\nu)$, by (1.6) and (2.5) we have
$$\frac{d}{dt}\mathrm{Ent}_\nu(u(t)) = \int_X \frac{du}{dt}\,(\log u(t)+1)\,d\nu = \int_X \Delta_m u(t)\,(\log u(t)+1)\,d\nu = \int_X \Delta_m u(t)\,\log u(t)\,d\nu. \qquad (2.30)$$
We define the functional $\mathcal{F}_m : L^2(X,\nu)\to[0,+\infty]$ as follows:
$$\mathcal{F}_m(f) := \begin{cases}\displaystyle\frac12\int_X\int_X \nabla f(x,y)\,\nabla\log f(x,y)\,dm_x(y)\,d\nu(x) & \text{if the integral is finite},\\[2mm] +\infty & \text{otherwise}.\end{cases}$$
The functional $\mathcal{F}_m$ is called the modified Fisher information. It corresponds to the entropy production functional of the heat flow $(e^{-t\Delta_m})_{t\ge0}$. Indeed, from the integration by parts formula (1.7), we have that, for $f\in L^2(X,\nu)^+$ with $\log f\in L^2(X,\nu)$,
$$\mathcal{F}_m(f) = -\int_X \log f\,\Delta_m f\,d\nu.$$
Therefore, by (2.30), the time-derivative of the entropy along the heat flow verifies
$$\frac{d}{dt}\mathrm{Ent}_\nu(e^{-t\Delta_m}u_0) = -\mathcal{F}_m(e^{-t\Delta_m}u_0). \qquad (2.31)$$

(2.31)

Definition 2.36 The Fisher-Donsker-Varadhan information of a probability measure $\mu\in\mathcal{P}(X)$ with respect to $\nu$ is defined by
$$I_\nu(\mu) := \begin{cases} 2\mathcal{H}_m(\sqrt f) & \text{if } \mu=f\nu,\ f\ge0,\\ +\infty & \text{otherwise}.\end{cases}$$
Observe that
$$D(I_\nu) = \{\mu : \mu=f\nu,\ f\in L^1(X,\nu)^+\},$$
since $\sqrt f\in L^2(X,\nu)=D(\mathcal{H}_m)$ whenever $f\in L^1(X,\nu)^+$.

In the continuous setting $I_\nu(f\nu) = \mathcal{F}_m(f)$; that is, the modified Fisher information and the Fisher-Donsker-Varadhan information coincide. However, this is not true in general in the discrete setting. We will see in (2.33) that $I_\nu(f\nu) \le \mathcal{F}_m(f)$.


Definition 2.37 We say that $[X,\mathcal{B},m,\nu]$ satisfies a logarithmic-Sobolev inequality if there exists $\lambda>0$ such that
$$\lambda\,\mathrm{Ent}_\nu(f) \le I_\nu(f\nu) \quad \text{for all } f\in L^1(X,\nu)^+,$$
or, equivalently,
$$\lambda\,\mathrm{Ent}_\nu(f) \le 2\mathcal{H}_m(\sqrt f) \quad \text{for all } f\in L^1(X,\nu)^+.$$
We denote
$$LS(m,\nu) := \inf\left\{\frac{2\mathcal{H}_m(\sqrt f)}{\mathrm{Ent}_\nu(f)} : 0\ne\mathrm{Ent}_\nu(f)<+\infty\right\} = \inf\left\{\frac{2\mathcal{H}_m(f)}{\mathrm{Ent}_\nu(f^2)} : 0\ne\mathrm{Ent}_\nu(f^2)<+\infty\right\}.$$
If $[X,\mathcal{B},m,\nu]$ satisfies a logarithmic-Sobolev inequality, then, for any function $f\in L^1(X,\nu)^+$, we have that $f\log f\in L^1(X,\nu)$.

Definition 2.38 We say that $[X,\mathcal{B},m,\nu]$ satisfies a modified logarithmic-Sobolev inequality if there exists $\lambda>0$ such that
$$\lambda\,\mathrm{Ent}_\nu(f) \le \mathcal{F}_m(f) \quad \text{for all } f\in D(\mathcal{F}_m).$$
We denote
$$MLS(m,\nu) := \inf\left\{\frac{\mathcal{F}_m(f)}{\mathrm{Ent}_\nu(f)} : f\in D(\mathcal{F}_m),\ 0\ne\mathrm{Ent}_\nu(f)<+\infty\right\}.$$
The modified logarithmic-Sobolev inequality was introduced by Ledoux [135] in connection with the concentration of measure phenomenon. In [42], Bobkov and Tetali illustrate the fact that, for a non-diffusion Dirichlet form, the modified logarithmic-Sobolev inequality can give better results than the classical log-Sobolev inequality.

Having in mind (2.31), with a proof similar to that of Theorem 2.16, we have the following result.

Theorem 2.39 If there exists $\lambda>0$ such that
$$\lambda\,\mathrm{Ent}_\nu(f) \le \mathcal{F}_m(f) \quad \forall f\in D(\mathcal{F}_m),$$
then for every $0\le f\in L^2(X,\nu)$,
$$\mathrm{Ent}_\nu(e^{-t\Delta_m}f) \le \mathrm{Ent}_\nu(f)\,e^{-\lambda t} \quad \forall t\ge0.$$


As a consequence of the previous theorem, if $MLS(m,\nu)>0$, then we have exponential decay of the entropy with exponential rate $MLS(m,\nu)$. More precisely, if $MLS(m,\nu)>0$,
$$\mathrm{Ent}_\nu(e^{-t\Delta_m}f) \le \mathrm{Ent}_\nu(f)\,e^{-MLS(m,\nu)\,t} \quad \forall t\ge0 \text{ and } \forall\,0\le f\in L^2(X,\nu). \qquad (2.32)$$
The next result was obtained for the particular case of reversible Markov chains with a finite state space in [42, Proposition 3.6].

Theorem 2.40 If the constants $LS(m,\nu)$, $MLS(m,\nu)$ and $\mathrm{gap}(-\Delta_m)$ are positive, then
$$2\,LS(m,\nu) \le \frac12\,MLS(m,\nu) \le \mathrm{gap}(-\Delta_m).$$

Proof Let us first see that
$$8\mathcal{H}_m(\sqrt f) \le \mathcal{F}_m(f). \qquad (2.33)$$
Since $\log x \ge 2-\frac{4}{x+1}$ for every $x>1$, we get that
$$(a-b)(\log a - \log b) \ge 4\big(\sqrt a - \sqrt b\big)^2 \quad \forall\,a,b>0.$$
Hence, for $f\in D(\mathcal{F}_m)$,
$$\mathcal{F}_m(f) = \frac12\int_X\int_X\big(f(y)-f(x)\big)\big(\log f(y)-\log f(x)\big)\,dm_x(y)\,d\nu(x) \ge \frac12\int_X\int_X 4\big(\sqrt{f(y)}-\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x) = 8\mathcal{H}_m(\sqrt f).$$
Now, as a consequence of (2.33), we have $4\,LS(m,\nu) \le MLS(m,\nu)$.

Let us see the other inequality. Given $f\in L^\infty(X,\nu)$ with $\int_X f\,d\nu=0$, take $g=1+\varepsilon f$ with $\varepsilon>0$ small enough so that $g\ge0$. By Taylor's formula we have
$$\log(1+\varepsilon f(x)) = \varepsilon f(x) - \frac12\varepsilon^2 f(x)^2 + o(\varepsilon^2),$$
which, moreover, implies that $\log g\in L^\infty(X,\nu)$. Then
$$\mathrm{Ent}_\nu(g) = \frac12\varepsilon^2\int_X f(x)^2\,d\nu(x) + o(\varepsilon^2),$$


and
$$\mathcal{F}_m(g) = -\int_X\log(1+\varepsilon f(x))\,\Delta_m(1+\varepsilon f)(x)\,d\nu(x) = -\int_X\Big(\varepsilon f(x)-\frac12\varepsilon^2 f(x)^2+o(\varepsilon^2)\Big)\,\varepsilon\,\Delta_m f(x)\,d\nu(x)$$
$$= -\varepsilon^2\int_X f(x)\,\Delta_m f(x)\,d\nu(x) + o(\varepsilon^2) = 2\varepsilon^2\mathcal{H}_m(f) + o(\varepsilon^2).$$
Now, since $MLS(m,\nu)>0$,
$$MLS(m,\nu)\,\mathrm{Ent}_\nu(g) \le \mathcal{F}_m(g).$$
Thus, using the previous equalities, we get
$$MLS(m,\nu)\left(\frac12\varepsilon^2\int_X f(x)^2\,d\nu(x) + o(\varepsilon^2)\right) \le 2\varepsilon^2\mathcal{H}_m(f) + o(\varepsilon^2).$$
Then, dividing by $\varepsilon^2$ and letting $\varepsilon\downarrow0$, we obtain that
$$\frac12\,MLS(m,\nu)\int_X f(x)^2\,d\nu(x) \le 2\mathcal{H}_m(f).$$
Now, take $f\in L^2(X,\nu)$ with $\nu(f)=\int_X f(x)\,d\nu(x)=0$. By the previous computation,
$$MLS(m,\nu)\,\mathrm{Var}_\nu(T_kf) = MLS(m,\nu)\int_X\big(T_kf(x)-\nu(T_kf)\big)^2\,d\nu(x) \le 4\mathcal{H}_m(T_kf) \le 4\mathcal{H}_m(f),$$
where $T_k(s)=\min\{\max\{s,-k\},k\}$ for $s\in\mathbb{R}$ and $T_kf(x)=T_k(f(x))$ for $k\in\mathbb{N}$. Now, by the dominated convergence theorem,
$$\lim_k\int_X\big(T_kf(x)-\nu(T_kf)\big)^2\,d\nu(x) = \int_X f(x)^2\,d\nu(x).$$
Therefore, we get $MLS(m,\nu) \le 2\,\mathrm{gap}(-\Delta_m)$. $\square$

In [42] there are examples showing that, in general, the two inequalities of the previous theorem are not equalities. As a consequence of Theorem 2.40 and (2.32), we have
$$\mathrm{Ent}_\nu(e^{-t\Delta_m}f) \le \mathrm{Ent}_\nu(f)\,e^{-4LS(m,\nu)\,t} \quad \forall t\ge0 \text{ and } \forall\,0\le f\in L^2(X,\nu).$$


Remark 2.41 If $\nu$ and $\mu$ are probability measures on $X$, we have the classical Pinsker-Csizsár-Kullback inequality (see [200]):
$$\|\mu-\nu\|_{TV} \le \sqrt{\frac{\mathrm{Ent}_\nu(\mu)}{2}}, \qquad (2.34)$$
where $\|\mu-\nu\|_{TV}$ is the total variation distance
$$\|\mu-\nu\|_{TV} := \sup\{|\mu(A)-\nu(A)| : A\in\mathcal{B}\}.$$
Therefore, if $f\in L^2(X,\nu)$ and $\mu=f\nu\in\mathcal{P}(X)$ (so that $\mu_t = e^{-t\Delta_m}f\,\nu\in\mathcal{P}(X)$ for every $t\ge0$), then, by (2.32) and (2.34),
$$\|\mu_t-\nu\|_{TV} \le \sqrt{\frac{\mathrm{Ent}_\nu(f)}{2}}\,e^{-\frac{MLS(m,\nu)}{2}t}.$$
Consequently, we get exponential convergence of the solution of the heat flow to the invariant measure $\nu$ (in the total variation sense). In fact, without even making use of the Pinsker-Csizsár-Kullback inequality, it is possible to obtain a stronger result. Indeed, if $\mathrm{gap}(-\Delta_m)>0$,
$$\|\mu_t-\nu\|_{TV} \le \|f-1\|_{L^2(X,\nu)}\,e^{-\mathrm{gap}(-\Delta_m)\,t},$$
even if $MLS(m,\nu)=0$. Actually, by Theorem 2.16, for any $A\in\mathcal{B}$,
$$\left|\int_A e^{-t\Delta_m}f\,d\nu - \nu(A)\right| \le \int_A\big|e^{-t\Delta_m}f-1\big|\,d\nu \le \left(\int_X\big(e^{-t\Delta_m}f-1\big)^2\,d\nu\right)^{\frac12} \le \|f-1\|_{L^2(X,\nu)}\,e^{-\mathrm{gap}(-\Delta_m)\,t}.$$
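The total-variation decay just stated can be checked numerically; the sketch below (illustrative only — the path-graph chain and the initial density $f$ are arbitrary choices with $\nu(f)=1$) propagates the heat flow spectrally and compares $\|\mu_t-\nu\|_{TV}$ with the bound:

```python
import numpy as np

# Path graph a-b-c, reversible and ergodic; gap(-Delta_m) = 1 for this chain.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
nu = np.array([0.25, 0.5, 0.25])
L = P - np.eye(3)
gap = np.sort(np.linalg.eigvals(-L).real)[1]

f = np.array([2.0, 0.5, 1.0])          # initial density: nu(f) = 1
w, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
for t in (0.5, 1.0, 2.0):
    ft = (V @ np.diag(np.exp(t * w.real)) @ Vinv).real @ f  # e^{-t Delta_m} f
    tv = 0.5 * nu @ np.abs(ft - 1.0)                        # ||mu_t - nu||_TV
    bound = np.sqrt(nu @ (f - 1.0) ** 2) * np.exp(-gap * t)
    assert tv <= bound + 1e-10
```

Mass conservation keeps $e^{-t\Delta_m}f$ a probability density with respect to $\nu$, so $\|\mu_t-\nu\|_{TV} = \frac12\int_X|e^{-t\Delta_m}f-1|\,d\nu$.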

2.7 Transport Inequalities

In this section we show that, under the positivity of the Bakry-Émery curvature-dimension condition or of the Ollivier-Ricci curvature, a transport-information inequality holds. Moreover, we prove that if a transport-information inequality holds, then a transport-entropy inequality is also satisfied and that, in general, the converse implication is not true. The interest in this type of inequalities goes back to the works by Marton and Talagrand [143, 188]; see the survey [111]. In this section we assume that $[X,d,m,\nu]$ is a reversible metric random walk space such that $\nu$ is a probability measure.


Definition 2.42 We define (recall (2.10))
$$\sigma(x) := \frac12 W_2^d(\delta_x,m_x)^2 = \frac12\int_X d(x,y)^2\,dm_x(y), \quad x\in X,$$
and
$$\sigma_m := \operatorname*{ess\,sup}_{x\in X}\ \sigma(x).$$
Note that if $\mathrm{diam}(X)$ is finite, then, since $\sigma(x)\le\frac12\big(\mathrm{diam}(\mathrm{supp}(m_x))\big)^2$, we have $\sigma_m\le\frac12(\mathrm{diam}(X))^2$. Observe also that
$$\|\Gamma(f)\|_\infty = \sup_{x\in X}\ \frac12\int_X(f(x)-f(y))^2\,dm_x(y) \le \sigma_m\,\|f\|_{\mathrm{Lip}}^2. \qquad (2.35)$$
Example 2.43 Given a metric measure space $(X,d,\mu)$ as in Example 1.45, if $m^{\mu,\varepsilon}$ is the $\varepsilon$-step random walk associated with $\mu$, that is,
$$m_x^{\mu,\varepsilon} := \frac{\mu\llcorner B(x,\varepsilon)}{\mu(B(x,\varepsilon))} \quad \text{for } x\in X,$$
then
$$\sigma_{m^{\mu,\varepsilon}} \le \frac12\varepsilon^2.$$

Following [166], we define the jump of a random walk as follows.

Definition 2.44 The jump of the random walk at $x$ is defined by
$$J_m(x) := W_1^d(\delta_x,m_x) = \int_X d(x,y)\,dm_x(y).$$
Example 2.45 Let $[V(G),d_G,m^G,\nu_G]$ be the metric random walk space associated to a locally finite discrete graph as in Example 1.41. Then, for $x\in V(G)$,
$$J_{m^G}(x) = \frac{1}{d_x}\sum_{y\sim x,\,y\ne x}w_{xy} \le 1,$$
thus
$$\sigma(x) = \frac12 J_{m^G}(x) = \frac{1}{2d_x}\sum_{y\sim x,\,y\ne x}w_{xy} \le \frac12.$$
If $[\Omega,d,m^{J,\Omega},\mathcal{L}^N]$ is the metric random walk space given in Example 1.47 with $J(x)=\frac{1}{|B_r(0)|}\chi_{B_r(0)}(x)$, then, for $x\in\Omega$,
$$\sigma(x) \le \frac{N}{2(N+2)}\,r^2.$$
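The second-moment bound in the last example can be checked by one-dimensional quadrature; the sketch below is illustrative only, and it computes the case of a point whose ball $B_r(x)$ lies entirely inside $\Omega$, for which the bound is attained:

```python
import numpy as np

# sigma(x) = (1/2) E|y - x|^2 for the uniform kernel on B_r(0) in R^N.
# The radial density of |y - x| is proportional to s^(N-1) on [0, r].
def sigma_uniform_ball(N, r, num=200001):
    s = np.linspace(0.0, r, num)
    w = s ** (N - 1)
    second_moment = np.trapz(s**2 * w, s) / np.trapz(w, s)
    return 0.5 * second_moment

# matches N r^2 / (2(N+2)) in every dimension tested
for N in (1, 2, 3, 5):
    assert np.isclose(sigma_uniform_ball(N, 2.0),
                      N * 2.0**2 / (2 * (N + 2)), rtol=1e-4)
```

For points closer than $r$ to $\partial\Omega$ the kernel is truncated and $\sigma(x)$ can only be smaller, which is why the statement is an inequality.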

In the next result, we show that the Bakry-Émery curvature-dimension condition $BE(K,\infty)$ with $K>0$ implies a transport-information inequality, a result that was obtained for the particular case of Markov chains in discrete spaces in [96].

Theorem 2.46 Assume that $\sigma_m$ is finite. If $\Delta_m$ satisfies the Bakry-Émery curvature-dimension condition $BE(K,\infty)$ with $K>0$, then $\nu$ satisfies the transport-information inequality
$$W_1^d(\mu,\nu) \le \frac{2\sqrt{\sigma_m}}{K}\,\sqrt{I_\nu(\mu)} \quad \text{for all probability measures } \mu\ll\nu \text{ on } X. \qquad (2.36)$$

Proof Let $\mu\in\mathcal{P}(X)$ be such that $\mu\ll\nu$ and set $\mu=f\nu$. By Kantorovich-Rubinstein's theorem,
$$W_1^d(\mu,\nu) = \sup\left\{\int_X g(x)(f(x)-1)\,d\nu(x) : \|g\|_{\mathrm{Lip}}\le1 \text{ and } g\in L^\infty(X,\nu)\right\}.$$
Let $T_t=e^{-t\Delta_m}$ be the heat semigroup. Given $g\in L^\infty(X,\nu)$ with $\|g\|_{\mathrm{Lip}}\le1$ and having in mind Proposition 2.15, we get
$$\int_X g(x)(f(x)-1)\,d\nu(x) = -\int_0^{+\infty}\frac{d}{dt}\int_X(T_tg)(x)f(x)\,d\nu(x)\,dt = -\int_0^{+\infty}\int_X\Delta_m(T_tg)(x)f(x)\,d\nu(x)\,dt = \int_0^{+\infty}\int_X\Gamma(T_tg,f)(x)\,d\nu(x)\,dt.$$
Now, using the Cauchy-Schwarz inequality, the reversibility of $\nu$ with respect to $m$, and the elementary bound
$$\big(\sqrt{f(y)}+\sqrt{f(x)}\big)^2 \le 2\big(f(x)+f(y)\big),$$
we obtain the following:
$$\int_X\Gamma(T_tg,f)(x)\,d\nu(x) = \frac12\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)\big(f(y)-f(x)\big)\,dm_x(y)\,d\nu(x)$$
$$= \frac12\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)\big(\sqrt{f(y)}-\sqrt{f(x)}\big)\big(\sqrt{f(y)}+\sqrt{f(x)}\big)\,dm_x(y)\,d\nu(x)$$
$$\le \left(\frac14\int_{X\times X}\big(\sqrt{f(y)}-\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x)\right)^{\frac12}\left(\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)^2\big(\sqrt{f(y)}+\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x)\right)^{\frac12}$$
$$\le \big(\mathcal{H}_m(\sqrt f)\big)^{\frac12}\left(8\int_X\Gamma(T_tg)(x)\,f(x)\,d\nu(x)\right)^{\frac12},$$
where in the last step we used
$$\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)^2\big(\sqrt{f(y)}+\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x) \le 4\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)^2 f(x)\,dm_x(y)\,d\nu(x) = 8\int_X\Gamma(T_tg)(x)f(x)\,d\nu(x).$$
Then, applying Theorem 2.34, we get
$$\int_X g(x)(f(x)-1)\,d\nu(x) \le \big(8\mathcal{H}_m(\sqrt f)\big)^{\frac12}\int_0^{+\infty}\left(e^{-2Kt}\int_X T_t(\Gamma(g))(x)f(x)\,d\nu(x)\right)^{\frac12}dt.$$
Now, by (2.3) and (2.35), we have
$$|T_t(\Gamma(g))(x)| \le \|\Gamma(g)\|_\infty \le \sigma_m.$$
Hence,
$$\int_X g(x)(f(x)-1)\,d\nu(x) \le \big(8\mathcal{H}_m(\sqrt f)\big)^{\frac12}\int_0^{+\infty}e^{-Kt}\sqrt{\sigma_m}\,dt = \frac{\sqrt{8\,\sigma_m\,\mathcal{H}_m(\sqrt f)}}{K} = \frac{2\sqrt{\sigma_m}}{K}\sqrt{I_\nu(\mu)}.$$
Finally, taking the supremum over all functions $g\in L^\infty(X,\nu)$ with $\|g\|_{\mathrm{Lip}}\le1$, we get (2.36). $\square$

Remark 2.47 If $\nu$ satisfies the transport-information inequality
$$W_1^d(\mu,\nu) \le \lambda\sqrt{2\mathcal{H}_m(\sqrt f)} \quad \text{for all } \mu=f\nu \text{ with } f\in L^1(X,\nu)^+, \qquad (2.37)$$
then $\nu$ is ergodic. Indeed, if $\nu$ is not ergodic, then, by Theorem 1.68, there exists $D\in\mathcal{B}$ with $0<\nu(D)<1$ such that $\Delta_m\chi_D=0$ $\nu$-a.e. Now, if $\mu := \frac{1}{\nu(D)}\chi_D\,\nu$, then $\mu\ne\nu$ and, therefore, by (2.37), $\mathcal{H}_m(\chi_D)>0$, which is in contradiction with $\Delta_m\chi_D=0$.

.

As a consequence of the previous Remark and Theorem 2.46, the positivity of the Bakry-Émery curvature-dimension condition implies the ergodicity of .m . Therefore, by Theorem 2.32, we have the following result. Theorem 2.48 Assume that .m is finite. Then: (1) if .m satisfies the Bakry-Émery curvature-dimension condition .BE(K, n), gap(−m )  K

.

n . n−1

(2) if .m satisfies the Bakry-Émery curvature-dimension condition .BE(K, ∞), gap(−m )  K.

.

The next result shows that a transport-information inequality implies a transport-entropy inequality and, therefore, normal concentration (see, e.g., [40, 136]).

Theorem 2.49 Suppose that $\sigma_m$ is finite and that there exists some $x_0\in X$ such that $\int_X d(x,x_0)\,d\nu(x)<\infty$. Then, the transport-information inequality
$$W_1^d(\mu,\nu) \le \frac1K\sqrt{I_\nu(\mu)} \quad \text{for all } \mu\in\mathcal{P}(X) \text{ such that } \mu\ll\nu \qquad (2.38)$$
implies the transport-entropy inequality
$$W_1^d(\mu,\nu) \le \sqrt{\frac{\sqrt{\sigma_m}}{K}\,\mathrm{Ent}_\nu(\mu)} \quad \text{for all } \mu\in\mathcal{P}(X) \text{ such that } \mu\ll\nu. \qquad (2.39)$$

Proof By [40, Theorem 1.3], (2.39) holds if, and only if,
$$\int_X e^{\lambda f(x)}\,d\nu(x) \le e^{\lambda^2\frac{\sqrt{\sigma_m}}{4K}} \qquad (2.40)$$
for every function $f$ with $\|f\|_{\mathrm{Lip}}\le1$ and $\nu(f)=0$, and every $\lambda\in\mathbb{R}$. Let $f\in L^\infty(X,\nu)$ with $\|f\|_{\mathrm{Lip}}\le1$ and $\nu(f)=0$. We define $\Phi:\mathbb{R}\to\mathbb{R}$ by
$$\Phi(\lambda) := \int_X e^{\lambda f(x)}\,d\nu(x),$$
and the probability measures $\mu_\lambda$, $\lambda\in\mathbb{R}$, as follows:
$$\mu_\lambda := \frac{1}{\Phi(\lambda)}\,e^{\lambda f}\,\nu.$$
By Kantorovich-Rubinstein's theorem and the assumption (2.38), we have
$$\frac{d}{d\lambda}\log(\Phi(\lambda)) = \frac{1}{\Phi(\lambda)}\int_X f(x)e^{\lambda f(x)}\,d\nu(x) = \int_X f(x)\,\big(d\mu_\lambda(x)-d\nu(x)\big) \le W_1^d(\mu_\lambda,\nu)$$
$$\le \frac1K\sqrt{2\mathcal{H}_m\Big(\sqrt{\tfrac{1}{\Phi(\lambda)}e^{\lambda f}}\Big)} = \frac1K\left(\int_X\frac{1}{\Phi(\lambda)}\,\Gamma\big(e^{\frac{\lambda f}{2}}\big)(x)\,d\nu(x)\right)^{\frac12}.$$
Now, since $1-\frac1a\le\log a$ for every $a\ge1$, and having in mind the reversibility of $\nu$, we get that, for any $g\in L^2(X,\nu)$,
$$\int_X\Gamma(g)(x)\,d\nu(x) \le \int_X g^2(x)\,\Gamma(\log g)(x)\,d\nu(x),$$
and, consequently, (2.35) yields
$$\frac{d}{d\lambda}\log(\Phi(\lambda)) \le \frac1K\left(\int_X\frac{1}{\Phi(\lambda)}\,e^{\lambda f(x)}\,\Gamma\Big(\frac{\lambda f}{2}\Big)(x)\,d\nu(x)\right)^{\frac12} = \frac{\lambda}{2K}\left(\int_X\Gamma(f)(x)\,d\mu_\lambda(x)\right)^{\frac12} \le \frac{\sqrt{\sigma_m}}{2K}\,\lambda.$$
Then, integrating this inequality, we get (2.40).


Now, for a general $f$ with $\|f\|_{\mathrm{Lip}}\le1$ and $\nu(f)=0$, let $f_n := f\wedge n - \nu(f\wedge n)\in L^\infty(X,\nu)$ for $n\in\mathbb{N}$, which satisfy $\|f_n\|_{\mathrm{Lip}}\le1$ and $\nu(f_n)=0$ for every $n\in\mathbb{N}$. Then, Fatou's lemma yields
$$\int_X e^{\lambda f(x)}\,d\nu(x) \le \liminf_n\int_X e^{\lambda f_n(x)}\,d\nu(x) \le e^{\lambda^2\frac{\sqrt{\sigma_m}}{4K}},$$
as desired. $\square$

In the next example, we see that, in general, a transport-entropy inequality does not imply a transport-information inequality.

Example 2.50 Let $\Omega := [-1,0]\cup[2,3]$ and consider the metric random walk space $[\Omega,d,m^{J,\Omega},\frac12\mathcal{L}^1]$, with $d$ the Euclidean distance and $J=\frac12\chi_{[-1,1]}$ (see Example 1.47). By the Gaussian integrability criterion [84, Theorem 2.3], $\nu$ satisfies a transport-entropy inequality. However, $\nu$ does not satisfy a transport-information inequality, since this would imply that $\nu$ is ergodic (see Remark 2.47), and it is easy to see that $[\Omega,d,m^{J,\Omega},\frac12\mathcal{L}^1]$ is not $m$-connected (thus, by Theorem 1.37, $\nu$ is not ergodic).

By Theorems 1.37 and 2.26, the metric random walk space of Example 2.50 has non-positive Ollivier-Ricci curvature. In the next theorem, we see that, under positive Ollivier-Ricci curvature, a transport-information inequality holds. First, we need the following result.

Lemma 2.51 If $f\in L^2(X,\nu)$ with $\|f\|_{\mathrm{Lip}}\le1$, then $\|e^{-t\Delta_m}f\|_{\mathrm{Lip}}\le e^{-t\kappa_m}$.

Proof By [166, Proposition 25],
$$\kappa_{m^{*(n+l)}} \ge \kappa_{m^{*n}} + \kappa_{m^{*l}} - \kappa_{m^{*n}}\,\kappa_{m^{*l}} \quad \forall\,n,l\in\mathbb{N},$$
where $\kappa_{m^{*1}} = \kappa_m$. Hence,
$$1-\kappa_{m^{*n}} \le (1-\kappa_m)^n \quad \forall\,n\in\mathbb{N}. \qquad (2.41)$$
By Theorem 2.7 and Eq. (2.41), we have
$$|e^{-t\Delta_m}f(x) - e^{-t\Delta_m}f(y)| = \left|e^{-t}\sum_{n=0}^{+\infty}\frac{t^n}{n!}\int_X f(z)\,\big(dm_x^{*n}(z)-dm_y^{*n}(z)\big)\right| \le e^{-t}\sum_{n=0}^{+\infty}W_1^d(m_x^{*n},m_y^{*n})\,\frac{t^n}{n!}$$
$$\le e^{-t}\sum_{n=0}^{+\infty}(1-\kappa_{m^{*n}})\,d(x,y)\,\frac{t^n}{n!} \le e^{-t}\sum_{n=0}^{+\infty}\frac{t^n}{n!}(1-\kappa_m)^n\,d(x,y) = e^{-t}\,e^{t(1-\kappa_m)}\,d(x,y) = e^{-t\kappa_m}\,d(x,y),$$
from which it follows that $\|e^{-t\Delta_m}f\|_{\mathrm{Lip}}\le e^{-t\kappa_m}$. $\square$



Theorem 2.52 Assume that each measure $m_x$ has finite first moment and that $\sigma_m$ is finite. If $\kappa_m>0$, then the following transport-information inequality holds:
$$W_1^d(\mu,\nu) \le \frac{2\sqrt{\sigma_m}}{\kappa_m}\,\sqrt{I_\nu(\mu)} \quad \text{for all } \mu\in\mathcal{P}(X) \text{ such that } \mu\ll\nu.$$

Proof Let $T_t=e^{-t\Delta_m}$ be the heat semigroup and let $\mu=f\nu$ be a probability measure on $X$. We use, as in the proof of Theorem 2.46, Kantorovich-Rubinstein's theorem. Let $g\in L^\infty(X,\nu)$ with $\|g\|_{\mathrm{Lip}}\le1$. Having in mind Lemma 2.51, we have
$$\int_X g(x)(f(x)-1)\,d\nu(x) = -\int_0^{+\infty}\frac{d}{dt}\int_X(T_tg)(x)f(x)\,d\nu(x)\,dt = -\int_0^{+\infty}\int_X\Delta_m(T_tg)(x)f(x)\,d\nu(x)\,dt$$
$$= \int_0^{+\infty}\frac12\int_{X\times X}\big((T_tg)(y)-(T_tg)(x)\big)\big(f(y)-f(x)\big)\,dm_x(y)\,d\nu(x)\,dt$$
$$\le \int_0^{+\infty}\|T_tg\|_{\mathrm{Lip}}\,dt\ \cdot\ \frac12\int_{X\times X}d(x,y)\,|f(y)-f(x)|\,dm_x(y)\,d\nu(x)$$
$$\le \int_0^{+\infty}e^{-t\kappa_m}\,dt\ \cdot\ \frac12\int_{X\times X}d(x,y)\,|f(y)-f(x)|\,dm_x(y)\,d\nu(x)$$
$$= \frac{1}{2\kappa_m}\int_{X\times X}d(x,y)\,\big|\sqrt{f(y)}-\sqrt{f(x)}\big|\,\big(\sqrt{f(y)}+\sqrt{f(x)}\big)\,dm_x(y)\,d\nu(x)$$
$$\le \frac{1}{\kappa_m}\sqrt{\mathcal{H}_m(\sqrt f)}\left(\int_{X\times X}d^2(x,y)\big(\sqrt{f(y)}+\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x)\right)^{\frac12}.$$
Now, using the reversibility of $\nu$ with respect to $m$,
$$\int_{X\times X}d^2(x,y)\big(\sqrt{f(y)}+\sqrt{f(x)}\big)^2\,dm_x(y)\,d\nu(x) \le 2\int_{X\times X}d^2(x,y)\,\big(f(x)+f(y)\big)\,dm_x(y)\,d\nu(x) = 4\int_{X\times X}d^2(x,y)\,f(x)\,dm_x(y)\,d\nu(x) \le 8\sigma_m.$$
Therefore, we get
$$\int_X g(x)(f(x)-1)\,d\nu(x) \le \frac{2\sqrt{\sigma_m}}{\kappa_m}\sqrt{2\mathcal{H}_m(\sqrt f)},$$
thus taking the supremum over $g$ yields
$$W_1^d(\mu,\nu) \le \frac{2\sqrt{\sigma_m}}{\kappa_m}\sqrt{2\mathcal{H}_m(\sqrt f)} = \frac{2\sqrt{\sigma_m}}{\kappa_m}\sqrt{I_\nu(\mu)}. \qquad\square$$

2.8 The m-Heat Content

We begin this section with a brief description of the classical heat content for the heat equation. In [193] (see also [194]), the heat content of a Borel set $D\subset\mathbb{R}^N$ at time $t$ is defined as
$$H_D(t) := \int_D T(t)\chi_D(x)\,dx,$$
where $(T(t))_{t\ge0}$ is the heat semigroup in $L^2(\mathbb{R}^N)$. Therefore, $H_D(t)$ represents the amount of heat in $D$ at time $t$ if in $D$ the initial temperature is 1 and in $\mathbb{R}^N\setminus D$ the initial temperature is 0. The following result was given in [193] (see also [159]) for the heat content: given an open subset $D$ of $\mathbb{R}^N$ with finite Lebesgue measure and finite perimeter $\mathrm{Per}(D)$,
$$H_D(t) = |D| - \frac{\sqrt t}{\sqrt\pi}\,\mathrm{Per}(D) + o(\sqrt t), \quad t\downarrow0. \qquad (2.42)$$
Let $[X,\mathcal{B},m,\nu]$ be a reversible random walk space and let $(e^{-t\Delta_m})_{t\ge0}$ be the $m$-heat flow on $[X,\mathcal{B},m,\nu]$.


Definition 2.53 Given $D\in\mathcal{B}$ with $\nu(D)<\infty$, we define the $m$-heat content of $D$ in $X$ at time $t$ by
$$H_D^m(t) := \int_D e^{-t\Delta_m}\chi_D(x)\,d\nu(x).$$
Note that $H_D^m(0) = \nu(D)$.

Remark 2.54 If we consider the metric random walk space $[\mathbb{R}^N,d,m^J,\mathcal{L}^N]$ defined in Example 1.40, the $m^J$-heat content coincides with the $J$-heat content introduced in [147] (see also [149, Chapter 6]; see [1] and [2] for other heat processes). We can extend, to the framework of random walk spaces, some of the results given there.

Proposition 2.55 Given $D\in\mathcal{B}$ with $\nu(D)<\infty$, we have
$$H_D^m(t) = \big\|e^{-\frac t2\Delta_m}\chi_D\big\|_{L^2(X,\nu)}^2.$$
Proof Since the operator $\Delta_m$ is self-adjoint, the operators $e^{-t\Delta_m}$ are self-adjoint too. Therefore,
$$\big\|e^{-\frac t2\Delta_m}\chi_D\big\|_{L^2(X,\nu)}^2 = \big\langle e^{-\frac t2\Delta_m}\chi_D,\,e^{-\frac t2\Delta_m}\chi_D\big\rangle = \big\langle e^{-t\Delta_m}\chi_D,\,\chi_D\big\rangle = H_D^m(t). \qquad\square$$

From Theorem 2.7, we obtain the following complete expansion of $H_D^m$ by integrating (2.4) over $D$ with respect to $\nu$.

Theorem 2.56 For $D\in\mathcal{B}$ with $\nu(D)<\infty$, we have
$$H_D^m(t) = e^{-t}\sum_{n=0}^{\infty}\left(\int_D\int_D dm_x^{*n}(z)\,d\nu(x)\right)\frac{t^n}{n!} = e^{-t}\sum_{n=0}^{\infty}\left(\int_D m_x^{*n}(D)\,d\nu(x)\right)\frac{t^n}{n!}. \qquad (2.43)$$
Note that
$$(H_D^m)'(0) = -P_m(D).$$
Therefore, as a consequence of the above theorem, we obtain the following nonlocal counterpart of (2.42); observe the difference in the time scale.


Corollary 2.57 For $D\in\mathcal{B}$ with $\nu(D)<\infty$, we have
$$\lim_{t\to0^+}\frac1t\int_{X\setminus D}e^{-t\Delta_m}\chi_D(x)\,d\nu(x) = P_m(D),$$
or, equivalently,
$$H_D^m(t) = \nu(D) - P_m(D)\,t + o(t), \quad t\downarrow0.$$
Proof As a consequence of the mass conservation property (2.5),
$$\lim_{t\to0^+}\frac1t\int_{X\setminus D}e^{-t\Delta_m}\chi_D(x)\,d\nu(x) = \lim_{t\to0^+}\frac1t\left(\int_X e^{-t\Delta_m}\chi_D(x)\,d\nu(x) - \int_D e^{-t\Delta_m}\chi_D(x)\,d\nu(x)\right)$$
$$= \lim_{t\to0^+}\frac1t\big(\nu(D)-H_D^m(t)\big) = -(H_D^m)'(0) = P_m(D). \qquad\square$$

2.8.1 Probabilistic Interpretation

Let us see that Formula (2.43) has a probabilistic interpretation. Since the transition probability kernel from $x$ to $y$ in $k$ steps (or jumps) is given by
$$dm_x^{*k}(y) := \int_{z\in X}dm_x^{*(k-1)}(z)\,dm_z(y),$$
the transition probability from $x$ to $D$ in $k$ steps is
$$F^{(k)}(x,D) := \int_D dm_x^{*k}(y) = m_x^{*k}(D),$$
where
$$F^{(0)}(x,D) = \chi_D(x).$$
Observe that, following the intuitive interpretation given after Definition 1.32, $F^{(k)}(x,D)$ measures how many individuals are moving from $x$ to $D$ in $k$ jumps, even for $k=0$. Moreover, if we define
$$f(0) := \int_D\int_D dm_x^{*0}(y)\,d\nu(x) = \nu(D)$$
and
$$f(k) := \int_D\int_D dm_x^{*k}(y)\,d\nu(x), \quad k=1,2,\ldots,$$
we have that $f(k)$ measures the amount of individuals that start in $D$ and end up in $D$ after $k$ jumps, for any $k\ge0$.

From (2.4) it follows that $e^{-t\Delta_m}\chi_D(x)$ is the expected value of the amount of individuals that, at time $t$ and having started at $x$, end up in $D$ by successively jumping according to $m$, if the number of jumps $N_t$ they make up to time $t$ follows a Poisson distribution with rate $t$:
$$e^{-t\Delta_m}\chi_D(x) = \sum_{k=0}^{+\infty}F^{(k)}(x,D)\,\frac{e^{-t}t^k}{k!}.$$
It is well known that this function is the transition probability density of a pseudo-Poisson process of intensity 1 (see [98, Chapter X]). Moreover, by (2.43),
$$H_D^m(t) = \sum_{k=0}^{+\infty}f(k)\,\frac{e^{-t}t^k}{k!} = \mathbb{E}\big(f(N_t)\big)$$
is the expected value of the amount of individuals that start and end in $D$ at time $t$, when these individuals move by successively jumping according to $m$ and the number of jumps up to time $t$ follows a Poisson distribution with rate $t$.
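On a finite state space, the series (2.43) can be evaluated directly and compared with the semigroup $e^{-t\Delta_m}\chi_D$; the sketch below is an illustrative finite-state example (the three-vertex path graph and $D=\{a,b\}$ are arbitrary choices):

```python
import math
import numpy as np

# Path graph a-b-c; rows of P are the measures m_x, nu ~ degrees.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
nu = np.array([0.25, 0.5, 0.25])
L = P - np.eye(3)                  # Delta_m
chiD = np.array([1.0, 1.0, 0.0])   # D = {a, b}
t = 0.7

# f(n) = int_D m_x^{*n}(D) dnu(x); H_D(t) = e^{-t} sum_n f(n) t^n / n!
f = lambda n: nu @ (chiD * (np.linalg.matrix_power(P, n) @ chiD))
H_series = sum(math.exp(-t) * t**n / math.factorial(n) * f(n)
               for n in range(40))

# direct computation of e^{-t Delta_m} chi_D via spectral decomposition
w, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)
u = (V @ np.diag(np.exp(t * w.real)) @ Vinv).real @ chiD
H_direct = nu @ (chiD * u)

assert np.isclose(H_series, H_direct)   # the two computations of (2.43) agree
assert np.isclose(f(0), nu @ chiD)      # f(0) = nu(D)

# Corollary 2.57: (nu(D) - H_D(t)) / t -> P_m(D); here P_m(D) = nu(b) m_b({c}) = 0.25
ts = 1e-4
us = (V @ np.diag(np.exp(ts * w.real)) @ Vinv).real @ chiD
assert abs((nu @ chiD - nu @ (chiD * us)) / ts - 0.25) < 1e-3
```

The truncation at 40 Poisson terms is far beyond the precision needed at $t=0.7$.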

2.8.2 The Spectral m-Heat Content

For $k\ge0$, let $g(k)$ be the measure of the amount of individuals that, starting in $D$, end up in $D$ after $k$ jumps without ever leaving $D$, and let
$$Q_D^m(t) := \sum_{k=0}^{+\infty}g(k)\,\frac{e^{-t}t^k}{k!}, \qquad (2.44)$$
i.e., the expected value of the amount of individuals that start in $D$ and end in $D$ at time $t$ without ever leaving $D$, when these individuals move by successively jumping according to $m$ and the number of jumps made up to time $t$ follows a Poisson distribution with rate $t$. $Q_D^m(t)$ is called the spectral $m$-heat content of $D$ and, as in [149], it is easy to see that
$$Q_D^m(t) = \int_D v(t,x)\,d\nu(x),$$
where $v(t,x)$ is the solution of the homogeneous Dirichlet problem for the $m$-heat equation
$$\begin{cases}\dfrac{dv}{dt}(t,x) = \displaystyle\int_X\big(v(t,y)-v(t,x)\big)\,dm_x(y), & (t,x)\in(0,+\infty)\times D,\\ v(t,x) = 0, & (t,x)\in(0,+\infty)\times\partial_m D,\\ v(0,x) = 1, & x\in D.\end{cases} \qquad (2.45)$$
The existence and uniqueness of solutions of problem (2.45) is studied in Chap. 7. Observe that $Q_D^m(t)$ agrees with its local version, the spectral heat content, which is given by $Q_D(t) = \int_D u(t,x)\,dx$, where $u$ is the solution of the heat equation with homogeneous Dirichlet conditions and initial datum 1 in $D$:
$$\begin{cases}u_t(t,x) = \Delta u(t,x), & (t,x)\in(0,\infty)\times D,\\ u(t,x) = 0, & (t,x)\in(0,\infty)\times\partial D,\\ u(0,x) = 1, & x\in D.\end{cases}$$
For smooth bounded domains, it is shown in [195] that
$$Q_D(t) = |D| - \frac{2}{\sqrt\pi}\,\mathrm{Per}(D)\,\sqrt t + \frac{N-1}{2}\int_{\partial D}H_{\partial D}(x)\,d\mathcal{H}^{N-1}(x)\,t + O(t^{3/2}), \quad t\downarrow0,$$
where $H_{\partial D}$ is the mean curvature of $\partial D$. For the spectral $m$-heat content, we have the complete expansion given in (2.44), which yields the following expansion in geometrical terms:
$$Q_D^m(t) = \nu(D) - P_m(D)\,t + \frac12\int_{\partial_m D}H_{\partial_m D}^m(x)\,d\nu(x)\,t^2 + \frac12\int_D\int_D\int_D dm_y(z)\,dm_x(y)\,d\nu(x)\,t^2 + O(t^3), \quad t\downarrow0.$$

Chapter 3

The Total Variation Flow in Random Walk Spaces

Since its introduction as a means of solving the denoising problem in the seminal work by Rudin, Osher and Fatemi [175], the total variation flow has remained one of the most popular tools in image processing. From the mathematical point of view, the study of the total variation flow in $\mathbb{R}^N$ was established in [14]. Furthermore, the use of neighbourhood filters by Buades, Coll and Morel in [56], originally proposed by P. Yaroslavsky [202], has led to an extensive literature on nonlocal models in image processing (see, for instance, [46, 107, 131, 140] and the references therein). Consequently, there is great interest in studying the total variation flow in the nonlocal context. Moreover, a different line of research considers an image as a weighted discrete graph, where the pixels are taken as the vertices and the "similarity" between pixels as the weights. The way in which these weights are defined depends on the problem at hand; see, for instance, [93] and [140]. Therefore, the study of the 1-Laplacian operator and the total variation flow in random walk spaces has a potentially broad scope of application.

Further motivation for the study of the 1-Laplacian operator comes from spectral clustering. Partitioning data into sensible groups is a fundamental problem in machine learning, computer science, statistics and science in general. In these fields, it is usual to face large amounts of empirical data, and getting a first impression of the data by identifying groups with similar properties has proved to be very useful. One of the most popular approaches to this problem is to find the best balanced cut of a graph representing the data, such as the Cheeger ratio cut [69], which we now introduce. Consider a finite weighted connected graph $G=(V,E)$, where $V=\{x_1,\ldots,x_n\}$ is the set of vertices (or nodes) and $E$ the set of edges, which are weighted by a function $w_{ji}=w_{ij}\ge0$, $(x_i,x_j)\in E$.
In this context, the Cheeger cut value of a partition $\{S,S^c\}$ ($S^c := V\setminus S$) of $V$ is defined as
$$\mathcal{C}(S) := \frac{\mathrm{Cut}(S,S^c)}{\min\{\mathrm{vol}(S),\mathrm{vol}(S^c)\}},$$

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. M. Mazón et al., Variational and Diffusion Problems in Random Walk Spaces, Progress in Nonlinear Differential Equations and Their Applications 103, https://doi.org/10.1007/978-3-031-33584-6_3


where $\mathrm{Cut}(A,B) = \sum_{x_i\in A,\,x_j\in B}w_{ij}$ (recall Example 1.60) and $\mathrm{vol}(S)$ is the volume of $S$, defined as $\mathrm{vol}(S) := \sum_{x_i\in S}d_{x_i}$, $d_{x_i}$ being the weight at the vertex $x_i$ (see Example 1.41). Then
$$h(G) := \min_{S\subset V}\mathcal{C}(S)$$
is called the Cheeger constant, and a partition $\{S,S^c\}$ of $V$ is called a Cheeger cut of $G$ if $h(G)=\mathcal{C}(S)$. Unfortunately, the Cheeger minimization problem of computing $h(G)$ is NP-hard [120, 184]. However, it turns out that $h(G)$ can be approximated by the first positive eigenvalue $\lambda_1$ of $-\Delta_{m^G}$, thanks to the following Cheeger inequality [72]:
$$\frac{\lambda_1}{2} \le h(G) \le \sqrt{2\lambda_1}. \qquad (3.1)$$

This motivates the spectral clustering method [141], which, in its simplest form, thresholds the eigenfunction associated with the first positive eigenvalue of $-\Delta_{m^G}$ to get an approximation of the Cheeger constant and, moreover, of a Cheeger cut. In order to achieve a better approximation than the one provided by the classical spectral clustering method, a spectral clustering based on the graph $p$-Laplacian was developed in [58], where it is shown that the second eigenvalue of the graph $p$-Laplacian tends to the Cheeger constant $h(G)$ as $p\to1^+$. In [184] the idea was further developed by directly considering the variational characterization of the Cheeger constant $h(G)$:
$$h(G) = \min_{u\in L^1}\frac{|u|_{TV}}{\|u-\mathrm{median}(u)\|_1}, \qquad (3.2)$$
where
$$|u|_{TV} := \frac12\sum_{i,j=1}^{n}w_{ij}\,|u(x_i)-u(x_j)|.$$

Note that the subdifferential of the energy functional $|\cdot|_{TV}$ is minus the graph 1-Laplacian. Using the nonlinear eigenvalue problem $\lambda\,\mathrm{sign}(u) \in -\Delta_1 u$, the theory of 1-spectral clustering is developed in [65–67] and [120].

The aim of this chapter is to study the total variation flow in reversible random walk spaces, obtaining general results that can be applied, as mentioned above, to the different points of view in image processing. In this regard, we introduce the 1-Laplacian operator associated with a random walk space (see Definition 3.16) and obtain various characterizations of it (see Theorem 3.13). In doing so, we generalize results obtained in [148] and [149] for the particular case of $[\mathbb{R}^N, d, m^J, \mathcal{L}^N]$ and, moreover, generalize results in graph theory. We then prove existence and uniqueness of solutions of the total variation flow in random walk spaces and study its asymptotic behaviour with the help of Poincaré-type inequalities. Furthermore, we introduce the concepts of Cheeger and calibrable sets in random walk spaces and characterize calibrability by using the 1-Laplacian operator. Moreover, in Sect. 3.5, in connection with 1-spectral clustering, we study the eigenvalue problem of the operator $-\Delta_1^m$ and relate it to the optimal Cheeger cut problem. These results apply, in particular, to locally finite weighted connected graphs, complementing the results obtained in the previously mentioned papers [65–67] and [120]. Lastly, in Sect. 3.7, we obtain a generalization of the Cheeger inequality (3.1) and of the variational characterization of the Cheeger constant (3.2). The Cheeger problem in the fractional case is studied in [49].

3.1 The m-Total Variation

Definition 3.1 Let $[X, \mathcal{B}, m, \nu]$ be a random walk space. We define the space of nonlocal bounded variation functions $BV_m(X,\nu)$ as follows:
$$BV_m(X,\nu) := \left\{ u : X \to \mathbb{R} \text{ measurable} : \int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x) < \infty \right\}.$$
Moreover, the m-total variation of a function $u \in BV_m(X,\nu)$ is defined by
$$TV_m(u) := \frac{1}{2}\int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x) = \frac{1}{2}\int_{X\times X} |u(y)-u(x)|\,d(\nu\otimes m_x)(x,y).$$

Note that $L^1(X,\nu) \subset BV_m(X,\nu)$. In fact, by the invariance of $\nu$ with respect to $m$, $TV_m(u) \leq \|u\|_{L^1(X,\nu)}$ for every $u \in L^1(X,\nu)$. Observe also that, by Lemma 1.58,
$$P_m(E) = TV_m(\chi_E). \tag{3.3}$$

The space $BV_m(X,\nu)$ is the nonlocal counterpart of the classical local bounded variation spaces (BV-spaces). Note further that, in the local context, given a Lebesgue measurable set $E \subset \mathbb{R}^n$, its perimeter is equal to the total variation of its characteristic function (see [9]), and Eq. (3.3) provides the nonlocal analogue. However, although they represent analogous concepts in different settings, the local classical BV-spaces and the nonlocal BV-spaces are of a different nature. For example, in this nonlocal framework $L^1(X,\nu) \subset BV_m(X,\nu)$, in contrast with the classical local bounded variation spaces, which are, by definition, contained in $L^1$.

Example 3.2 Let $[V(G), d_G, m^G, \nu_G]$ be the metric random walk space given in Example 1.41. Then:


$$TV_{m^G}(u) = \frac{1}{2}\int_{V(G)}\int_{V(G)} |u(y)-u(x)|\,dm^G_x(y)\,d\nu_G(x) = \frac{1}{2}\int_{V(G)}\left(\frac{1}{d_x}\sum_{y\in V(G)} |u(y)-u(x)|\,w_{xy}\right)d\nu_G(x)$$
$$= \frac{1}{2}\sum_{x\in V(G)} d_x\left(\frac{1}{d_x}\sum_{y\in V(G)} |u(y)-u(x)|\,w_{xy}\right) = \frac{1}{2}\sum_{x\in V(G)}\sum_{y\in V(G)} |u(y)-u(x)|\,w_{xy},$$
which coincides with the anisotropic total variation defined in [103].

In the following results, we give some properties of the m-total variation.

Proposition 3.3 Let $[X,\mathcal{B},m,\nu]$ be a random walk space and $\phi : \mathbb{R} \to \mathbb{R}$ a Lipschitz continuous function. If $u \in BV_m(X,\nu)$, then $\phi(u) \in BV_m(X,\nu)$ and
$$TV_m(\phi(u)) \leq \|\phi\|_{\mathrm{Lip}}\, TV_m(u).$$

Proof
$$TV_m(\phi(u)) = \frac{1}{2}\int_X\int_X |\phi(u)(y)-\phi(u)(x)|\,dm_x(y)\,d\nu(x) \leq \|\phi\|_{\mathrm{Lip}}\,\frac{1}{2}\int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x) = \|\phi\|_{\mathrm{Lip}}\,TV_m(u). \qquad\square$$

Proposition 3.4 Let $[X,\mathcal{B},m,\nu]$ be a random walk space. Then $TV_m$ is convex and 1-Lipschitz continuous in $L^1(X,\nu)$.

Proof The convexity of $TV_m$ follows easily. Let us see that it is 1-Lipschitz continuous. Let $u, v \in L^1(X,\nu)$. Since $\nu$ is invariant with respect to $m$,
$$|TV_m(v) - TV_m(u)| = \frac{1}{2}\left|\int_X\int_X \big(|v(y)-v(x)| - |u(y)-u(x)|\big)\,dm_x(y)\,d\nu(x)\right|$$
$$\leq \frac{1}{2}\left(\int_X\int_X |v(y)-u(y)|\,dm_x(y)\,d\nu(x) + \int_X |v(x)-u(x)|\,d\nu(x)\right)$$
$$= \frac{1}{2}\left(\int_X |v(y)-u(y)|\,d\nu(y) + \int_X |v(x)-u(x)|\,d\nu(x)\right) = \|v-u\|_{L^1(X,\nu)}. \qquad\square$$


As in the local case, we have a coarea formula relating the m-total variation of a function with the m-perimeter of its superlevel sets.

Theorem 3.5 (Coarea Formula) Let $[X,\mathcal{B},m,\nu]$ be a random walk space. For any $u \in BV_m(X,\nu)$, let $E_t(u) := \{x \in X : u(x) > t\}$. Then:
$$TV_m(u) = \int_{-\infty}^{+\infty} P_m(E_t(u))\,dt.$$

Proof Let $u \in BV_m(X,\nu)$. Since
$$u(x) = \int_0^{+\infty} \chi_{E_t(u)}(x)\,dt - \int_{-\infty}^0 \big(1 - \chi_{E_t(u)}(x)\big)\,dt \quad \forall x \in X,$$
we have
$$u(y) - u(x) = \int_{-\infty}^{+\infty} \chi_{E_t(u)}(y) - \chi_{E_t(u)}(x)\,dt \quad \forall x, y \in X.$$
Moreover, since $u(y) \leq u(x)$ implies $\chi_{E_t(u)}(y) \leq \chi_{E_t(u)}(x)$, we obtain that
$$|u(y) - u(x)| = \int_{-\infty}^{+\infty} \big|\chi_{E_t(u)}(y) - \chi_{E_t(u)}(x)\big|\,dt.$$

Therefore, we get:
$$TV_m(u) = \frac{1}{2}\int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x) = \frac{1}{2}\int_X\int_X\int_{-\infty}^{+\infty} \big|\chi_{E_t(u)}(y)-\chi_{E_t(u)}(x)\big|\,dt\,dm_x(y)\,d\nu(x)$$
$$= \int_{-\infty}^{+\infty} \frac{1}{2}\int_X\int_X \big|\chi_{E_t(u)}(y)-\chi_{E_t(u)}(x)\big|\,dm_x(y)\,d\nu(x)\,dt = \int_{-\infty}^{+\infty} P_m(E_t(u))\,dt,$$
where the Tonelli–Hobson theorem is used in the third equality. $\square$
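The coarea formula can be verified numerically on a weighted graph (again with $\nu(x) = d_x$): since $P_m(E_t(u))$ is piecewise constant in $t$, the integral reduces to a finite sum over consecutive values of $u$. A small sketch with assumed random data:

```python
import numpy as np

# Graph with nu(x) = d_x, so TV_m(u) = (1/2) sum_{x,y} w_xy |u(y) - u(x)|
# and P_m(E) = TV_m(chi_E) = sum_{x in E, y not in E} w_xy.
rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
u = rng.normal(size=n)

tv = 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

# Coarea: E_t(u) = {u > t} is constant on each interval between consecutive
# values of u, so the integral of P_m(E_t(u)) dt becomes a finite sum.
vals = np.sort(np.unique(u))
coarea = 0.0
for a, b in zip(vals[:-1], vals[1:]):
    E = u > a                               # superlevel set for t in (a, b)
    coarea += (b - a) * W[np.ix_(E, ~E)].sum()

assert abs(tv - coarea) < 1e-10
```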

Lemma 3.6 Let $[X,\mathcal{B},m,\nu]$ be an m-connected random walk space. Then
$$TV_m(u) = 0 \iff u \text{ is constant } \nu\text{-a.e.}$$

Proof (⇐) Suppose that $u$ is $\nu$-a.e. equal to a constant $k \in \mathbb{R}$. Then, since $\nu$ is invariant with respect to $m$, we have:
$$TV_m(u) = \frac{1}{2}\int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x) \leq \int_X\int_X |u(y)-k|\,dm_x(y)\,d\nu(x) = \int_X |u(x)-k|\,d\nu(x) = 0.$$

(⇒) Suppose that
$$0 = TV_m(u) = \frac{1}{2}\int_X\int_X |u(y)-u(x)|\,dm_x(y)\,d\nu(x).$$
Then $\int_X |u(y)-u(x)|\,dm_x(y) = 0$ for $\nu$-a.e. $x \in X$; thus
$$|\Delta_m u(x)| = \left|\int_X \big(u(y)-u(x)\big)\,dm_x(y)\right| \leq \int_X |u(y)-u(x)|\,dm_x(y) = 0$$
for $\nu$-a.e. $x \in X$, so we conclude by Theorem 1.55. $\square$

3.2 The m-1-Laplacian and the m-Total Variation Flow

In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space.

Definition 3.7 For $p \geq 1$, we denote
$$X_m^p(X,\nu) := \left\{ z \in L^\infty(X\times X, \nu\otimes m_x) : \mathrm{div}_m z \in L^p(X,\nu) \right\}.$$

The following proposition follows similarly to Proposition 1.52.

Proposition 3.8 (Green’s Formula) Let $1 \leq p \leq \infty$, $u \in BV_m(X,\nu)\cap L^{p'}(X,\nu)$ and $z \in X_m^p(X,\nu)$. Then:
$$\int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) = -\frac{1}{2}\int_{X\times X} \nabla u(x,y)\,z(x,y)\,d(\nu\otimes m_x)(x,y). \tag{3.4}$$

In the next result, we characterize the m-total variation and the m-perimeter using the m-divergence operator (see, e.g., [9, Proposition 3.6] for the analogous result in the local case). Let us denote by $\mathrm{sign}_0(r)$ the usual sign function and by $\mathrm{sign}(r)$ the multivalued sign function:
$$\mathrm{sign}_0(r) := \begin{cases} 1 & \text{if } r > 0, \\ 0 & \text{if } r = 0, \\ -1 & \text{if } r < 0; \end{cases} \qquad \mathrm{sign}(r) := \begin{cases} 1 & \text{if } r > 0, \\ [-1,1] & \text{if } r = 0, \\ -1 & \text{if } r < 0. \end{cases} \tag{3.5}$$

Proposition 3.9 Let $1 \leq p \leq \infty$. For $u \in BV_m(X,\nu)\cap L^{p'}(X,\nu)$, we have:
$$TV_m(u) = \sup\left\{ \int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) : z \in X_m^p(X,\nu),\ \|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1 \right\}. \tag{3.6}$$
In particular, for any $E \in \mathcal{B}$ with $\nu(E) < \infty$, we have:
$$P_m(E) = \sup\left\{ \int_E (\mathrm{div}_m z)(x)\,d\nu(x) : z \in X_m^1(X,\nu),\ \|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1 \right\}.$$

Proof Let $u \in BV_m(X,\nu)\cap L^{p'}(X,\nu)$. Given $z \in X_m^p(X,\nu)$ with $\|z\|_\infty \leq 1$, Green’s formula (3.4) yields:
$$\int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) = -\frac{1}{2}\int_{X\times X} \nabla u(x,y)\,z(x,y)\,d(\nu\otimes m_x)(x,y) \leq \frac{1}{2}\int_{X\times X} |u(y)-u(x)|\,d(\nu\otimes m_x)(x,y) = TV_m(u).$$
Therefore:
$$\sup\left\{ \int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) : z \in X_m^p(X,\nu),\ \|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1 \right\} \leq TV_m(u).$$
On the other hand, since $\nu$ is $\sigma$-finite, there exists a sequence of measurable sets $K_1 \subset K_2 \subset \cdots \subset K_n \subset \cdots$ of finite $\nu$-measure such that $X = \bigcup_{n=1}^\infty K_n$. Then, if we define $z_n(x,y) := \mathrm{sign}_0(u(y)-u(x))\,\chi_{K_n\times K_n}(x,y)$, we have that $z_n \in X_m^p(X,\nu)$ with $\|z_n\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ and


$$TV_m(u) = \frac{1}{2}\int_{X\times X} |u(y)-u(x)|\,d(\nu\otimes m_x)(x,y) = \lim_{n\to\infty} \frac{1}{2}\int_{K_n\times K_n} |u(y)-u(x)|\,d(\nu\otimes m_x)(x,y)$$
$$= \lim_{n\to\infty} \frac{1}{2}\int_{X\times X} \nabla u(x,y)\,z_n(x,y)\,d(\nu\otimes m_x)(x,y) = \lim_{n\to\infty} \int_X u(x)(\mathrm{div}_m(-z_n))(x)\,d\nu(x)$$
$$\leq \sup\left\{ \int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) : z \in X_m^p(X,\nu),\ \|z\|_\infty \leq 1 \right\}. \qquad\square$$

Corollary 3.10 $TV_m$ is lower semicontinuous with respect to weak convergence in $L^2(X,\nu)$.

Proof If $u_n \rightharpoonup u$ weakly in $L^2(X,\nu)$, then, given $z \in X_m^2(X,\nu)$ with $\|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$, we have that
$$\int_X u(x)(\mathrm{div}_m z)(x)\,d\nu(x) = \lim_{n\to\infty}\int_X u_n(x)(\mathrm{div}_m z)(x)\,d\nu(x) \leq \liminf_{n\to\infty} TV_m(u_n)$$
by Proposition 3.9. Now, taking the supremum over $z$ in this inequality (and by Proposition 3.9 again), we get:
$$TV_m(u) \leq \liminf_{n\to\infty} TV_m(u_n). \qquad\square$$

We now introduce the 1-Laplacian operator in random walk spaces. To this aim, we first prove Theorem 3.13, which requires the following definitions.

Definition 3.11 We define $\mathcal{F}_m : L^2(X,\nu) \to\; ]-\infty,+\infty]$ by
$$\mathcal{F}_m(u) := \begin{cases} TV_m(u) & \text{if } u \in L^2(X,\nu)\cap BV_m(X,\nu), \\[4pt] +\infty & \text{if } u \in L^2(X,\nu)\setminus BV_m(X,\nu). \end{cases}$$

Consider the formal nonlocal evolution equation:
$$u_t(x,t) = \int_X \frac{u(y,t)-u(x,t)}{|u(y,t)-u(x,t)|}\,dm_x(y), \quad x \in X,\ t \geq 0. \tag{3.7}$$


In order to study the Cauchy problem associated with this equation, we show in Theorem 3.19 that it can be rewritten as the gradient flow in $L^2(X,\nu)$ of the functional $\mathcal{F}_m$, which is convex and lower semicontinuous. Following the method used in [14], we characterize the subdifferential of the functional $\mathcal{F}_m$.

Definition 3.12 Let $(X,\nu)$ be a measure space. Given a functional $\Phi : L^2(X,\nu) \to [0,\infty]$, we define $\tilde{\Phi} : L^2(X,\nu) \to [0,\infty]$ as
$$\tilde{\Phi}(v) := \sup\left\{ \frac{\displaystyle\int_X v(x)w(x)\,d\nu(x)}{\Phi(w)} : w \in L^2(X,\nu) \right\},$$
with the convention that $\frac{0}{0} = \frac{0}{\infty} = 0$. Obviously, if $\Phi_1 \leq \Phi_2$, then $\tilde{\Phi}_2 \leq \tilde{\Phi}_1$.

Theorem 3.13 Let $u \in L^2(X,\nu)$ and $v \in L^2(X,\nu)$. The following assertions are equivalent:

(i) $v \in \partial\mathcal{F}_m(u)$;

(ii) there exists $z \in X_m^2(X,\nu)$ with $\|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ such that
$$v = -\mathrm{div}_m z \tag{3.8}$$
and
$$\int_X u(x)v(x)\,d\nu(x) = \mathcal{F}_m(u);$$

(iii) there exists $z \in X_m^2(X,\nu)$ with $\|z\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ such that (3.8) holds and
$$\frac{1}{2}\int_{X\times X} \nabla u(x,y)\,z(x,y)\,d(\nu\otimes m_x)(x,y) = \mathcal{F}_m(u);$$

(iv) there exists an antisymmetric $g \in L^\infty(X\times X,\nu\otimes m_x)$ with $\|g\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ such that
$$v(x) = -\int_X g(x,y)\,dm_x(y) \quad \text{for } \nu\text{-a.e. } x \in X \tag{3.9}$$
and
$$-\int_X\left(\int_X g(x,y)\,dm_x(y)\right)u(x)\,d\nu(x) = \mathcal{F}_m(u); \tag{3.10}$$

(v) there exists an antisymmetric $g \in L^\infty(X\times X,\nu\otimes m_x)$ satisfying (3.9) and
$$g(x,y) \in \mathrm{sign}(u(y)-u(x)) \quad \text{for } (\nu\otimes m_x)\text{-a.e. } (x,y) \in X\times X. \tag{3.11}$$

Proof Since $\mathcal{F}_m$ is convex, lower semicontinuous and positively homogeneous of degree 1, by Andreu et al. [14, Theorem 1.8], we have:
$$\partial\mathcal{F}_m(u) = \left\{ v \in L^2(X,\nu) : \tilde{\mathcal{F}}_m(v) \leq 1,\ \int_X u(x)v(x)\,d\nu(x) = \mathcal{F}_m(u) \right\}. \tag{3.12}$$
We define, for $v \in L^2(X,\nu)$,
$$\Psi(v) := \inf\left\{ \|z\|_{L^\infty(X\times X,\nu\otimes m_x)} : z \in X_m^2(X,\nu),\ v = -\mathrm{div}_m z \right\}. \tag{3.13}$$
Observe that $\Psi$ is convex, lower semicontinuous and positively homogeneous of degree 1. Moreover, it is easy to see that, if $\Psi(v) < \infty$, the infimum in (3.13) is attained, i.e., there exists some $z \in X_m^2(X,\nu)$ such that $v = -\mathrm{div}_m z$ and $\Psi(v) = \|z\|_{L^\infty(X\times X,\nu\otimes m_x)}$. Let us see that
$$\Psi = \tilde{\mathcal{F}}_m.$$
We begin by proving that $\tilde{\mathcal{F}}_m(v) \leq \Psi(v)$. If $\Psi(v) = +\infty$, then this assertion is trivial. Therefore, suppose that $\Psi(v) < +\infty$. Let $z \in L^\infty(X\times X,\nu\otimes m_x)$ be such that $v = -\mathrm{div}_m z$. Then, for $w \in L^2(X,\nu)$, we have:
$$\int_X w(x)v(x)\,d\nu(x) = \frac{1}{2}\int_{X\times X} \nabla w(x,y)\,z(x,y)\,d(\nu\otimes m_x)(x,y) \leq \|z\|_\infty\,\mathcal{F}_m(w).$$
Taking the supremum over $w$, we obtain that $\tilde{\mathcal{F}}_m(v) \leq \|z\|_{L^\infty(X\times X,\nu\otimes m_x)}$. Now, taking the infimum over $z$, we get $\tilde{\mathcal{F}}_m(v) \leq \Psi(v)$. To prove the opposite inequality, let us denote
$$D := \{\mathrm{div}_m z : z \in X_m^2(X,\nu)\}.$$

Then, by (3.6), we have that, for $v \in L^2(X,\nu)$,
$$\tilde{\Psi}(v) = \sup_{w\in L^2(X,\nu)} \frac{\displaystyle\int_X w(x)v(x)\,d\nu(x)}{\Psi(w)} \geq \sup_{w\in D} \frac{\displaystyle\int_X w(x)v(x)\,d\nu(x)}{\Psi(w)} \geq \sup_{z\in X_m^2(X,\nu)} \frac{\displaystyle\int_X \mathrm{div}_m z(x)\,v(x)\,d\nu(x)}{\|z\|_{L^\infty(X\times X,\nu\otimes m_x)}} = \mathcal{F}_m(v).$$
Thus, $\mathcal{F}_m \leq \tilde{\Psi}$, which implies, by Andreu et al. [14, Proposition 1.6], that $\Psi = \tilde{\tilde{\Psi}} \leq \tilde{\mathcal{F}}_m$. Therefore, $\Psi = \tilde{\mathcal{F}}_m$, and, consequently, from (3.12), we get:
$$\partial\mathcal{F}_m(u) = \left\{ v \in L^2(X,\nu) : \Psi(v) \leq 1,\ \int_X u(x)v(x)\,d\nu(x) = \mathcal{F}_m(u) \right\}.$$

Hence, from the observation before (3.13), the equivalence between (i) and (ii) follows. To get the equivalence between (ii) and (iii), we only need to apply Proposition 3.8. On the other hand, to see that (iii) implies (iv), it is enough to take $g(x,y) = \frac{1}{2}(z(x,y)-z(y,x))$. To see that (iv) implies (ii), it is enough to take $z(x,y) = g(x,y)$ (observe that, from (3.9), $-\mathrm{div}_m(g) = v$, so $g \in X_m^2(X,\nu)$). Finally, to see that (iv) and (v) are equivalent, we need to show that (3.10) and (3.11) are equivalent. Now, since $g$ is antisymmetric with $\|g\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ and $\nu$ is reversible with respect to $m$, we have:
$$-2\int_X\left(\int_X g(x,y)\,dm_x(y)\right)u(x)\,d\nu(x) = \int_{X\times X} g(x,y)\big(u(y)-u(x)\big)\,d(\nu\otimes m_x)(x,y),$$

from which the equivalence between (3.10) and (3.11) follows. $\square$

Remark 3.14 The next space, in its local version, was introduced in [154]. Set
$$G_m(X,\nu) := \left\{ f \in L^2(X,\nu) : \exists z \in X_m^2(X,\nu) \text{ such that } f = \mathrm{div}_m(z) \right\}$$
and consider in $G_m(X,\nu)$ the norm
$$\|f\|_{m,*} := \inf\{\|z\|_{L^\infty(X\times X,\nu\otimes m_x)} : f = \mathrm{div}_m(z)\}.$$
Following the proof of Theorem 3.13, we obtain that
$$\|f\|_{m,*} = \sup\left\{ \int_X f(x)u(x)\,d\nu(x) : u \in L^2(X,\nu),\ TV_m(u) \leq 1 \right\} \tag{3.14}$$
and
$$\partial\mathcal{F}_m(u) = \left\{ v \in G_m(X,\nu) : \|v\|_{m,*} \leq 1,\ \int_X u(x)v(x)\,d\nu(x) = \mathcal{F}_m(u) \right\}.$$
In particular,
$$\partial\mathcal{F}_m(0) = \{ v \in G_m(X,\nu) : \|v\|_{m,*} \leq 1 \}. \tag{3.15}$$

We have the following result.


Proposition 3.15 $\partial\mathcal{F}_m$ is an m-completely accretive operator in $L^2(X,\nu)$.

Proof Since $\partial\mathcal{F}_m$ is maximal monotone, we only need to show that $\partial\mathcal{F}_m$ is completely accretive; that is, for $q \in P_0$ and $(u_i,v_i) \in \partial\mathcal{F}_m$, $i = 1,2$, we need to show that
$$\int_X \big(v_1(x)-v_2(x)\big)\,q\big(u_1(x)-u_2(x)\big)\,d\nu(x) \geq 0. \tag{3.16}$$
In fact, by Theorem 3.13, there exist antisymmetric $g_i \in L^\infty(X\times X,\nu\otimes m_x)$ satisfying
$$v_i(x) = -\int_X g_i(x,y)\,dm_x(y) \quad \text{for } \nu\text{-a.e. } x \in X$$
and
$$g_i(x,y) \in \mathrm{sign}(u_i(y)-u_i(x)) \quad \text{for } (\nu\otimes m_x)\text{-a.e. } (x,y) \in X\times X.$$
Hence, we have:
$$\int_X \big(v_1(x)-v_2(x)\big)\,q\big(u_1(x)-u_2(x)\big)\,d\nu(x) = -\int_X\int_X \big(g_1(x,y)-g_2(x,y)\big)\,q\big(u_1(x)-u_2(x)\big)\,dm_x(y)\,d\nu(x)$$
$$= \frac{1}{2}\int_X\int_X \big(g_1(x,y)-g_2(x,y)\big)\Big(q\big(u_1(y)-u_2(y)\big) - q\big(u_1(x)-u_2(x)\big)\Big)\,dm_x(y)\,d\nu(x),$$
and this last integral splits into three integrals, over the sets of pairs $(x,y)$ where $u_1(y) \neq u_1(x)$ and $u_2(y) \neq u_2(x)$, where $u_1(y) = u_1(x)$ and $u_2(y) \neq u_2(x)$, and where $u_1(y) \neq u_1(x)$ and $u_2(y) = u_2(x)$ (the set where both equalities hold contributes nothing, since there $g_1 - g_2$ can be taken to cancel). Using the monotonicity of $q$ and the fact that $g_i(x,y) \in \mathrm{sign}(u_i(y)-u_i(x))$, each of these three integrals is nonnegative, so (3.16) holds. $\square$


Definition 3.16 We define in $L^2(X,\nu)$ the multivalued operator $\Delta_1^m$ by
$$(u,v) \in \Delta_1^m \iff -v \in \partial\mathcal{F}_m(u),$$
and we call it the m-1-Laplacian. As usual, we write $v \in \Delta_1^m u$ for $(u,v) \in \Delta_1^m$.

Chang in [65] and Hein and Bühler in [120] define a similar operator in the particular case of finite graphs.

Example 3.17 Let $[V(G), d_G, m^G, \nu_G]$ be the metric random walk space given in Example 1.41. By Theorem 3.13, we have $(u,v) \in \Delta_1^{m^G}$ if and only if there exists an antisymmetric $g \in L^\infty(V(G)\times V(G), \nu_G\otimes m_x^G)$ such that $\|g\|_{L^\infty(V(G)\times V(G),\nu_G\otimes m_x^G)} \leq 1$,
$$\frac{1}{d_x}\sum_{y\in V(G)} g(x,y)\,w_{xy} = v(x) \quad \forall x \in V(G),$$
and $g(x,y) \in \mathrm{sign}(u(y)-u(x))$ for $(\nu_G\otimes m_x^G)$-a.e. $(x,y) \in V(G)\times V(G)$.

The next example shows that the operator $\Delta_1^{m^G}$ is indeed multivalued. Let $V(G) = \{a,b\}$, $0 < p < 1$, $w_{aa} = w_{bb} = p$ and $w_{ab} = w_{ba} = 1-p$. Then $(u,v) \in \Delta_1^{m^G}$ if and only if there exists an antisymmetric $g \in L^\infty(\{a,b\}\times\{a,b\}, \nu_G\otimes m_x^G)$ such that $\|g\|_{L^\infty(\{a,b\}\times\{a,b\},\nu_G\otimes m_x^G)} \leq 1$,
$$g(a,a)p + g(a,b)(1-p) = v(a), \qquad g(b,b)p + g(b,a)(1-p) = v(b),$$
and $g(a,b) \in \mathrm{sign}(u(b)-u(a))$. Now, since $g$ is antisymmetric, we get:
$$v(a) = g(a,b)(1-p), \qquad v(b) = -g(a,b)(1-p) \qquad \text{and} \qquad g(a,b) \in \mathrm{sign}(u(b)-u(a)).$$
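The multivaluedness in the two-node example can be made concrete in a few lines. The sketch below enumerates the admissible values of $(v(a), v(b))$, discretizing the interval $[-1,1]$ when $u(a) = u(b)$:

```python
import numpy as np

# Two-node example (V = {a, b}, w_aa = w_bb = p, w_ab = 1 - p):
# v(a) = g(1-p), v(b) = -g(1-p) with g in sign(u(b) - u(a)).
def one_laplacian_elements(ua, ub, p, g_grid=np.linspace(-1, 1, 5)):
    """Admissible pairs (v(a), v(b)); a discretized sample when multivalued."""
    if ub > ua:
        gs = [1.0]
    elif ub < ua:
        gs = [-1.0]
    else:
        gs = list(g_grid)       # sign(0) = [-1, 1]: the operator is multivalued
    return [(g * (1 - p), -g * (1 - p)) for g in gs]

# Away from u(a) = u(b) the operator is single-valued:
assert one_laplacian_elements(0.0, 1.0, p=0.25) == [(0.75, -0.75)]
# At u(a) = u(b), every v(a) in [-(1-p), 1-p] (with v(b) = -v(a)) is admissible:
vals = one_laplacian_elements(1.0, 1.0, p=0.25)
assert len(vals) > 1
```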

Proposition 3.18 For any $(u,v) \in \Delta_1^m$, it holds that
$$-\int_X v\,w\,d\nu \leq TV_m(w) \quad \text{for all } w \in BV_m(X,\nu)\cap L^2(X,\nu) \tag{3.17}$$
and
$$-\int_X v\,u\,d\nu = TV_m(u). \tag{3.18}$$


Proof Since $-v \in \partial\mathcal{F}_m(u)$, given $w \in BV_m(X,\nu)\cap L^2(X,\nu)$,
$$-\int_X v\,w\,d\nu \leq \mathcal{F}_m(u+w) - \mathcal{F}_m(u) \leq \mathcal{F}_m(w),$$
so we get (3.17). On the other hand, (3.18) is given in Theorem 3.13. $\square$

As a consequence of Theorem 3.13 and Proposition 3.15, on account of Theorem A.27 [51, Theorem 3.6] and by the complete accretivity of the operator (see the results in Appendix A.7), we can give the following existence and uniqueness result for the Cauchy problem:
$$\begin{cases} u_t - \Delta_1^m u \ni 0 & \text{in } (0,T)\times X, \\ u(0,x) = u_0(x) & x \in X, \end{cases} \tag{3.19}$$
which is a rigorous rewriting of the formal expression (3.7).

Theorem 3.19 For every $u_0 \in L^2(X,\nu)$ and any $T > 0$, there exists a unique solution of the Cauchy problem (3.19) in $(0,T)$ in the following sense: $u \in W^{1,1}(0,T; L^2(X,\nu))$, $u(0,\cdot) = u_0$ in $L^2(X,\nu)$ and, for almost all $t \in (0,T)$,
$$u_t(t,\cdot) - \Delta_1^m u(t) \ni 0.$$
Moreover, we have the following contraction and maximum principle in any $L^q(X,\nu)$ space, $1 \leq q \leq \infty$:
$$\|(u(t)-v(t))^+\|_{L^q(X,\nu)} \leq \|(u_0-v_0)^+\|_{L^q(X,\nu)} \quad \forall\, 0 < t < T,$$
for any pair of solutions $u$ and $v$ of problem (3.19) with initial data $u_0$ and $v_0$, respectively.

Definition 3.20 Given $u_0 \in L^2(X,\nu)$, we denote by $e^{-t\Delta_1^m}u_0$ the unique solution of problem (3.19). We call the semigroup $\{e^{-t\Delta_1^m}\}_{t\geq 0}$ in $L^2(X,\nu)$ the total variation flow in $[X,\mathcal{B},m,\nu]$ or, alternatively, the m-total variation flow.

In the next result, we give an important property of the total variation flow in random walk spaces.

Proposition 3.21 The total variation flow satisfies the mass conservation property: for $u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$,
$$\int_X e^{-t\Delta_1^m}u_0\,d\nu = \int_X u_0\,d\nu \quad \text{for every } t \geq 0.$$

Proof By Proposition 3.18, we have:
$$\frac{d}{dt}\int_X e^{-t\Delta_1^m}u_0\,d\nu \leq TV_m(1) = 0$$
and
$$-\frac{d}{dt}\int_X e^{-t\Delta_1^m}u_0\,d\nu \leq TV_m(-1) = 0.$$
Hence
$$\frac{d}{dt}\int_X e^{-t\Delta_1^m}u_0\,d\nu = 0,$$
and, consequently,
$$\int_X e^{-t\Delta_1^m}u_0\,d\nu = \int_X u_0\,d\nu \quad \text{for any } t \geq 0. \qquad\square$$
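On a finite weighted graph, mass conservation can also be seen numerically: in an explicit Euler step for (3.19), the update is driven by an antisymmetric field, so the $\nu$-weighted mean is conserved at every step up to rounding. A minimal sketch (assumed example, with $\nu(x) = d_x$ and the selection $\mathrm{sign}_0$, i.e. $\mathrm{sign}(0) = 0$):

```python
import numpy as np

# Explicit Euler for the graph TV flow u_t = Delta_1 u (sketch):
#   du/dt (x) = (1/d_x) * sum_y w_xy * sign(u(y) - u(x)),
# with vertex measure nu(x) = d_x. Since sign(u(y)-u(x)) is antisymmetric
# and W is symmetric, sum_x d_x * du/dt(x) = 0: nu-mass is conserved.
rng = np.random.default_rng(2)
n = 8
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)

u = rng.normal(size=n)
mass0 = np.sum(d * u)
dt = 1e-3
for _ in range(2000):
    g = np.sign(u[None, :] - u[:, None])     # g(x,y) = sign(u(y) - u(x))
    u = u + dt * (W * g).sum(axis=1) / d

assert abs(np.sum(d * u) - mass0) < 1e-9
```

The explicit scheme is only a heuristic discretization of the multivalued flow, but the conservation identity it illustrates is exact for any antisymmetric selection.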

3.3 Asymptotic Behaviour

In this section we assume that $[X,\mathcal{B},m,\nu]$ is a reversible random walk space.

Proposition 3.22 For every $u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$, there exists
$$u_\infty \in \{ u \in L^1(X,\nu)\cap L^2(X,\nu) : 0 \in \Delta_1^m(u) \}$$
such that
$$\lim_{t\to\infty} e^{-t\Delta_1^m}u_0 = u_\infty \quad \text{in } L^2(X,\nu).$$
Suppose further that $[X,\mathcal{B},m,\nu]$ is m-connected. Then:
(i) if $\nu(X) = \infty$, then $u_\infty = 0$ $\nu$-a.e.;
(ii) if $\nu$ is a probability measure, then
$$u_\infty = \int_X u_0(x)\,d\nu(x) \quad \nu\text{-a.e.}$$

Proof Since $\mathcal{F}_m$ is a proper and lower semicontinuous function attaining its minimum at the zero function and, moreover, $\mathcal{F}_m$ is even, by Bruck [54, Theorem 5] the strong limit in $L^2(X,\nu)$ of $e^{-t\Delta_1^m}u_0$ exists and is a minimum point of $\mathcal{F}_m$, i.e.,

$$u_\infty \in \{ u \in L^1(X,\nu)\cap L^2(X,\nu) : 0 \in \Delta_1^m(u) \}.$$
Suppose now that $[X,\mathcal{B},m,\nu]$ is m-connected. Then, since $0 \in \Delta_1^m(u_\infty)$, we have $TV_m(u_\infty) = 0$; thus, by Lemma 3.6, $u_\infty$ is constant $\nu$-a.e. Therefore, if $\nu(X) = \infty$, we must have $u_\infty = 0$ $\nu$-a.e., and, if $\nu$ is a probability measure, Proposition 3.21 yields:
$$u_\infty = \int_X u_0(x)\,d\nu(x). \qquad\square$$

Let us see that we can specify a rate of convergence of the total variation flow $(e^{-t\Delta_1^m})_{t\geq 0}$ when a Poincaré-type inequality holds. If $[X,\mathcal{B},m,\nu]$ satisfies a p-Poincaré inequality (see Definition 1.71), we write
$$\lambda^p_{[X,\mathcal{B},m,\nu]} := \inf\left\{ \frac{TV_m(u)}{\|u\|_{L^p(X,\nu)}} : \|u\|_{L^p(X,\nu)} \neq 0,\ \int_X u(x)\,d\nu(x) = 0 \right\}.$$

The following result was proved in [21, Theorem 7.11] for the particular case of the metric random walk space $[\Omega, d, m^{J,\Omega}, \mathcal{L}^N]$ given in Example 1.40, where $\Omega \subset \mathbb{R}^N$ is a bounded smooth domain. We generalize the result to any reversible random walk space. Let us first recall the definition of a Lyapunov functional for a continuous semigroup.

Definition 3.23 Let $X$ be a metric space and $S$ a continuous semigroup on $X$. A Lyapunov functional for $S$ (see [81]) is a map $V : X \to \mathbb{R}$ such that $V(S(t)u) \leq V(u)$ for any $u \in X$, $t \geq 0$.

Theorem 3.24 Assume that $\nu$ is a probability measure. If $[X,\mathcal{B},m,\nu]$ satisfies a 1-Poincaré inequality, then, for any $u_0 \in L^2(X,\nu)$,
$$\left\| e^{-t\Delta_1^m}u_0 - \nu(u_0) \right\|_{L^1(X,\nu)} \leq \frac{1}{2\lambda^1_{[X,\mathcal{B},m,\nu]}}\,\frac{\|u_0\|^2_{L^2(X,\nu)}}{t} \quad \text{for all } t > 0.$$

Proof Let $u_0 \in L^2(X,\nu)$. The complete accretivity of the operator $-\Delta_1^m$ (recall Definition A.30) implies that
$$L(u) := \|u - \nu(u_0)\|_{L^1(X,\nu)}, \quad u \in L^2(X,\nu),$$
is a Lyapunov functional for the semigroup $\{e^{-t\Delta_1^m} : t \geq 0\}$. Indeed, by Proposition A.39, $e^{-t\Delta_1^m}$ is a complete contraction (see Definition A.28) for $t \geq 0$; thus, taking the normal functional $N : L^1(X,\nu) \to (-\infty,+\infty]$ defined by $N(u) := \|u - \nu(u_0)\|_{L^1(X,\nu)}$, since $e^{-t\Delta_1^m}0 = 0$ for $t \geq 0$, we get that

$$\left\| e^{-t\Delta_1^m}u - \nu(u_0) \right\|_{L^1(X,\nu)} \leq \|u - \nu(u_0)\|_{L^1(X,\nu)}, \quad u \in L^2(X,\nu),\ t \geq 0.$$

In particular, if $v(t) := e^{-t\Delta_1^m}u_0 - \nu(u_0)$, then
$$\|v(t)\|_{L^1(X,\nu)} \leq \|v(s)\|_{L^1(X,\nu)} \quad \text{for } t \geq s. \tag{3.20}$$

Now, by Proposition 3.21, $\nu(u(t)) = \nu(u_0)$ for all $t \geq 0$, so the 1-Poincaré inequality yields:
$$\lambda^1_{[X,\mathcal{B},m,\nu]}\,\|v(s)\|_{L^1(X,\nu)} \leq TV_m(v(s)), \quad s \geq 0. \tag{3.21}$$
This inequality is of course also true if $\|v(s)\|_{L^1(X,\nu)} = 0$. Therefore, by (3.20) and (3.21),
$$t\,\|v(t)\|_{L^1(X,\nu)} \leq \int_0^t \|v(s)\|_{L^1(X,\nu)}\,ds \leq \frac{1}{\lambda^1_{[X,\mathcal{B},m,\nu]}}\int_0^t TV_m(v(s))\,ds, \quad t \geq 0. \tag{3.22}$$

On the other hand, by Proposition 3.18,
$$-\frac{1}{2}\frac{d}{dt}\left\| e^{-t\Delta_1^m}u_0 \right\|^2_{L^2(X,\nu)} = -\int_X e^{-t\Delta_1^m}u_0\,\frac{d}{dt}e^{-t\Delta_1^m}u_0\,d\nu = TV_m\big(e^{-t\Delta_1^m}u_0\big),$$
and then
$$\frac{1}{2}\left\| e^{-t\Delta_1^m}u_0 \right\|^2_{L^2(X,\nu)} - \frac{1}{2}\|u_0\|^2_{L^2(X,\nu)} = -\int_0^t TV_m\big(e^{-s\Delta_1^m}u_0\big)\,ds = -\int_0^t TV_m(v(s))\,ds,$$
which implies
$$\int_0^t TV_m(v(s))\,ds \leq \frac{1}{2}\|u_0\|^2_{L^2(X,\nu)}, \quad t \geq 0.$$

Hence, by (3.22),
$$\|v(t)\|_{L^1(X,\nu)} \leq \frac{1}{2\lambda^1_{[X,\mathcal{B},m,\nu]}}\,\frac{\|u_0\|^2_{L^2(X,\nu)}}{t}, \quad t > 0,$$
which concludes the proof. $\square$

Remark 3.25 If $\nu(X) = \infty$ and $u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$ with $\nu(u_0) = 0$, we may proceed similarly (substituting $\nu(u_0)$ by 0) to obtain that
$$\left\| e^{-t\Delta_1^m}u_0 \right\|_{L^1(X,\nu)} \leq \frac{1}{2\lambda^1_{[X,\mathcal{B},m,\nu]}}\,\frac{\|u_0\|^2_{L^2(X,\nu)}}{t} \quad \text{for all } t > 0.$$

On account of Theorem 3.24, we obtain the following result on the asymptotic behaviour of the total variation flow.

Corollary 3.26 Under the hypotheses of Theorem 1.88 with $p = 1$, $A = X$, $B = \emptyset$, and assuming that $\nu$ is a probability measure, if $u_0 \in L^2(X,\nu)$, then
$$\left\| e^{-t\Delta_1^m}u_0 - \nu(u_0) \right\|_{L^1(X,\nu)} \leq \frac{1}{2\lambda^1_{[X,\mathcal{B},m,\nu]}}\,\frac{\|u_0\|^2_{L^2(X,\nu)}}{t} \quad \text{for all } t > 0.$$

Let us see that, when $[X,\mathcal{B},m,\nu]$ satisfies a 2-Poincaré inequality, the solution of the total variation flow reaches the steady state in finite time.

Theorem 3.27 Assume that $\nu$ is a probability measure and that $[X,\mathcal{B},m,\nu]$ satisfies a 2-Poincaré inequality. Then, for any $u_0 \in L^2(X,\nu)$,
$$\left\| e^{-t\Delta_1^m}u_0 - \nu(u_0) \right\|_{L^2(X,\nu)} \leq \left( \|u_0 - \nu(u_0)\|_{L^2(X,\nu)} - \lambda^2_{[X,\mathcal{B},m,\nu]}\,t \right)^+ \quad \text{for all } t \geq 0.$$
Consequently,
$$e^{-t\Delta_1^m}u_0 = \nu(u_0) \quad \forall\, t \geq \hat{t} := \frac{\|u_0 - \nu(u_0)\|_{L^2(X,\nu)}}{\lambda^2_{[X,\mathcal{B},m,\nu]}}.$$

Proof Let $u_0 \in L^2(X,\nu)$, $u(t) := e^{-t\Delta_1^m}u_0$ and $v(t) := u(t) - \nu(u_0)$. Since $\Delta_1^m u(t) = \Delta_1^m v(t)$,
$$\frac{d}{dt}v(t) \in \Delta_1^m v(t), \quad t > 0.$$

Note that $v(t) \in BV_m(X,\nu)$ for every $t > 0$. Indeed, since $-\Delta_1^m = \partial\mathcal{F}_m$ is a maximal monotone operator in $L^2(X,\nu)$, by [51, Theorem 3.7] with $H = L^2(X,\nu)$ we have that $v(t) \in D(\Delta_1^m) \subset BV_m(X,\nu)$ for every $t > 0$. Consequently, by Theorem 3.13, there exists an antisymmetric $g_t \in L^\infty(X\times X,\nu\otimes m_x)$ with $\|g_t\|_{L^\infty(X\times X,\nu\otimes m_x)} \leq 1$ such that
$$\int_X g_t(x,y)\,dm_x(y) = \frac{d}{dt}v(t)(x) \quad \text{for } \nu\text{-a.e. } x \in X \text{ and every } t > 0 \tag{3.23}$$
and
$$-\int_X\left(\int_X g_t(x,y)\,dm_x(y)\right)v(t)(x)\,d\nu(x) = \mathcal{F}_m(v(t)) = TV_m(u(t)) \quad \text{for every } t > 0. \tag{3.24}$$

Then, multiplying (3.23) by $v(t)$, integrating over $X$ with respect to $\nu$ and having (3.24) in mind, we get:
$$\frac{1}{2}\frac{d}{dt}\int_X v(t)^2\,d\nu + TV_m(v(t)) = 0.$$
Now, by Proposition 3.21, $\nu(u(t)) = \nu(u_0)$ for all $t \geq 0$, and, since $[X,\mathcal{B},m,\nu]$ satisfies a 2-Poincaré inequality,
$$\lambda^2_{[X,\mathcal{B},m,\nu]}\,\|v(t)\|_{L^2(X,\nu)} \leq TV_m(v(t)) \quad \text{for all } t \geq 0.$$
Therefore,
$$\frac{1}{2}\frac{d}{dt}\|v(t)\|^2_{L^2(X,\nu)} + \lambda^2_{[X,\mathcal{B},m,\nu]}\,\|v(t)\|_{L^2(X,\nu)} \leq 0 \quad \text{for all } t \geq 0.$$
Integrating this ordinary differential inequality, we get:
$$\|v(t)\|_{L^2(X,\nu)} \leq \left( \|v(0)\|_{L^2(X,\nu)} - \lambda^2_{[X,\mathcal{B},m,\nu]}\,t \right)^+ \quad \text{for all } t \geq 0,$$
that is,
$$\|u(t) - \nu(u_0)\|_{L^2(X,\nu)} \leq \left( \|u_0 - \nu(u_0)\|_{L^2(X,\nu)} - \lambda^2_{[X,\mathcal{B},m,\nu]}\,t \right)^+ \quad \text{for all } t \geq 0. \qquad\square$$
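For the two-node graph of Example 3.17 the flow can be written in closed form: the larger value decreases and the smaller increases at the constant rate $1-p$ until they meet at the mean, at the finite time $|u_0(a)-u_0(b)|/(2(1-p))$, illustrating the finite extinction of Theorem 3.27. A small sketch (assumed normalization with equal vertex weights):

```python
# Closed-form TV flow on the two-node graph of Example 3.17 (sketch):
# while u(a) != u(b), du(a)/dt = g(1-p) with g = sign(u(b)-u(a)), and
# du(b)/dt = -g(1-p); the two values meet at the mean in finite time.
def tv_flow_two_nodes(ua0, ub0, p, t):
    mean = (ua0 + ub0) / 2
    rate = 1 - p
    gap = abs(ua0 - ub0) / 2          # distance of each value to the mean
    t_hat = gap / rate                # finite extinction time
    if t >= t_hat:
        return mean, mean
    shift = rate * t
    if ua0 > ub0:
        return ua0 - shift, ub0 + shift
    return ua0 + shift, ub0 - shift

ua, ub = tv_flow_two_nodes(2.0, 0.0, p=0.5, t=10.0)
assert ua == ub == 1.0                # steady state = mean, reached in finite time
ua, ub = tv_flow_two_nodes(2.0, 0.0, p=0.5, t=1.0)
assert (ua + ub) / 2 == 1.0           # mass conservation along the flow
```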

Remark 3.28 As before, if $\nu(X) = \infty$ and $u_0 \in L^1(X,\nu)\cap L^2(X,\nu)$ with $\int_X u_0\,d\nu = 0$, we obtain that
$$\left\| e^{-t\Delta_1^m}u_0 \right\|_{L^2(X,\nu)} \leq \left( \|u_0\|_{L^2(X,\nu)} - \lambda^2_{[X,\mathcal{B},m,\nu]}\,t \right)^+ \quad \text{for all } t \geq 0.$$

Definition 3.29 Assume that $\nu$ is a probability measure. We define the extinction time as
$$T^*(u_0) := \inf\{ t > 0 : e^{-t\Delta_1^m}u_0 = \nu(u_0) \}, \quad u_0 \in L^2(X,\nu).$$
Under the conditions of Theorem 3.27,
$$T^*(u_0) \leq \frac{\|u_0 - \nu(u_0)\|_{L^2(X,\nu)}}{\lambda^2_{[X,\mathcal{B},m,\nu]}}.$$


To obtain a lower bound on the extinction time, we use the norm $\|\cdot\|_{m,*}$ introduced in Remark 3.14.

Theorem 3.30 Assume that $\nu$ is a probability measure. Then:
$$T^*(u_0) \geq \|u_0 - \nu(u_0)\|_{m,*} \quad \forall u_0 \in L^2(X,\nu).$$

Proof We may assume that $T^*(u_0) < \infty$. Let $u_0 \in L^2(X,\nu)$. If $u(t) := e^{-t\Delta_1^m}u_0$, we have:
$$u_0 - \nu(u_0) = -\int_0^{T^*(u_0)} u'(t)\,dt.$$
Then, by Proposition 3.18, we get:
$$\|u_0 - \nu(u_0)\|_{m,*} = \sup\left\{ \int_X w\,(u_0 - \nu(u_0))\,d\nu : TV_m(w) \leq 1 \right\}$$
$$= \sup\left\{ \int_X w \int_0^{T^*(u_0)} -u'(t)\,dt\,d\nu : TV_m(w) \leq 1 \right\} = \sup\left\{ \int_0^{T^*(u_0)}\int_X -w\,u'(t)\,d\nu\,dt : TV_m(w) \leq 1 \right\}$$
$$\leq \sup\left\{ \int_0^{T^*(u_0)} TV_m(w)\,dt : TV_m(w) \leq 1 \right\} = T^*(u_0). \qquad\square$$

As a consequence of Example 1.78 and Theorem 3.27, we get the following result.

Theorem 3.31 Let $G = (V(G), E(G))$ be a finite weighted connected discrete graph and let $[V(G), d_G, m^G, \nu_G]$ be the associated metric random walk space (recall Example 1.41). Then:
$$\left\| e^{-t\Delta_1^{m^G}}u_0 - \nu(u_0) \right\|_{L^2(V(G),\nu_G)} \leq \lambda^2_{[V(G),d_G,m^G,\nu_G]}\left( \hat{t} - t \right)^+,$$
where $\hat{t} := \dfrac{\|u_0 - \nu(u_0)\|_{L^2(V(G),\nu_G)}}{\lambda^2_{[V(G),d_G,m^G,\nu_G]}}$. Consequently,
$$e^{-t\Delta_1^{m^G}}u_0 = \nu(u_0) \quad \text{for all } t \geq \hat{t}.$$


3.4 m-Cheeger and m-Calibrable Sets

In this section we assume that $[X,\mathcal{B},m,\nu]$ is an m-connected reversible random walk space.

Definition 3.32 Given $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$, we define the m-Cheeger constant of $\Omega$ as
$$h_1^m(\Omega) := \inf\left\{ \frac{P_m(E)}{\nu(E)} : E \in \mathcal{B}_\Omega,\ \nu(E) > 0 \right\}. \tag{3.25}$$
The notation $h_1^m(\Omega)$ is chosen together with the one that we will use for the classical Cheeger constant $h_1(\Omega)$ (see (3.26)). In both of these, the subscript 1 is there to further distinguish them from the upcoming notation $h^m(X)$ for the m-Cheeger constant of $X$ (see (3.49)). If a set $E \in \mathcal{B}_\Omega$ minimizes (3.25), then $E$ is said to be an m-Cheeger set of $\Omega$. Furthermore, we say that $\Omega$ is m-calibrable if it is an m-Cheeger set of itself, that is, if
$$h_1^m(\Omega) = \frac{P_m(\Omega)}{\nu(\Omega)}.$$
Note that, by (1.8), $h_1^m(\Omega) \leq 1$. For ease of notation, given $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$, we denote
$$\lambda_\Omega^m := \frac{P_m(\Omega)}{\nu(\Omega)}.$$

Remark 3.33
(i) Let $[\mathbb{R}^N, \mathcal{B}, m^J, \mathcal{L}^N]$ be the random walk space given in Example 1.40. Then, the concepts of $m^J$-Cheeger set and $m^J$-calibrable set coincide with the concepts of J-Cheeger set and J-calibrable set introduced in [148] (see also [149]).
(ii) Let $[V(G), d_G, m^G, \nu_G]$ be the metric random walk space associated with a locally finite weighted discrete graph $G = (V(G), E(G))$ having more than two vertices and no loops (i.e., $w_{xx} = 0$ for all $x \in V$). Then, any subset consisting of two vertices is $m^G$-calibrable. Indeed, let $\Omega = \{x,y\}$; by (1.8) we have:
$$\frac{P_{m^G}(\{x\})}{\nu_G(\{x\})} = 1 - \frac{1}{\nu_G(\{x\})}\int_{\{x\}}\int_{\{x\}} dm^G_x(z)\,d\nu_G(x) = 1 \geq \frac{P_{m^G}(\Omega)}{\nu_G(\Omega)}$$
and, similarly,
$$\frac{P_{m^G}(\{y\})}{\nu_G(\{y\})} = 1 \geq \frac{P_{m^G}(\Omega)}{\nu_G(\Omega)}.$$

Therefore, $\Omega$ is $m^G$-calibrable.

In [148] it is proved that, for the metric random walk space $[\mathbb{R}^N, d, m^J, \mathcal{L}^N]$ given in Example 1.40, each ball is a J-calibrable set. In the next example, we see that this result is not true in general.

Example 3.34 Let $G = (V(G), E(G))$ be the finite weighted discrete graph with vertex set $V(G) = \{x_1, x_2, \ldots, x_7\}$ and the following weights:
$$w_{x_1,x_2} = 2,\quad w_{x_2,x_3} = 1,\quad w_{x_3,x_4} = 2,\quad w_{x_4,x_5} = 2,\quad w_{x_5,x_6} = 1,\quad w_{x_6,x_7} = 2,$$

and $w_{x_i,x_j} = 0$ otherwise. Let $[V(G), d_G, m^G, \nu_G]$ be the associated metric random walk space. Then, if $E_1 = B(x_4, \frac{5}{2}) = \{x_2, x_3, \ldots, x_6\}$,
$$\frac{P_{m^G}(E_1)}{\nu_G(E_1)} = \frac{w_{x_1 x_2} + w_{x_6 x_7}}{d_{x_2} + d_{x_3} + d_{x_4} + d_{x_5} + d_{x_6}} = \frac{1}{4}.$$

However, taking $E_2 = B(x_4, \frac{3}{2}) = \{x_3, x_4, x_5\} \subset E_1$, we have:
$$\frac{P_{m^G}(E_2)}{\nu_G(E_2)} = \frac{w_{x_2 x_3} + w_{x_5 x_6}}{d_{x_3} + d_{x_4} + d_{x_5}} = \frac{1}{5}.$$
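The two ratios above are easy to verify numerically, and a brute-force search confirms that the Cheeger constant of $E_1$ lies strictly below $P_{m^G}(E_1)/\nu_G(E_1)$. A small sketch:

```python
import itertools
import numpy as np

# Path graph of the 7-vertex example above (weights along consecutive edges).
edge_w = [2, 1, 2, 2, 1, 2]            # w_{x1 x2}, ..., w_{x6 x7}
n = 7
W = np.zeros((n, n))
for i, w in enumerate(edge_w):
    W[i, i + 1] = W[i + 1, i] = w
d = W.sum(axis=1)                       # d = (2, 3, 3, 4, 3, 3, 2)

def ratio(E):
    Ec = [j for j in range(n) if j not in E]
    return W[np.ix_(list(E), Ec)].sum() / d[list(E)].sum()   # P(E)/nu(E)

E1 = [1, 2, 3, 4, 5]                    # {x2, ..., x6}
E2 = [2, 3, 4]                          # {x3, x4, x5}
assert abs(ratio(E1) - 1 / 4) < 1e-12
assert abs(ratio(E2) - 1 / 5) < 1e-12

# Brute force over subsets of E1: its Cheeger constant is below 1/4,
# so E1 is not calibrable.
h = min(ratio(S) for r in range(1, 6)
        for S in itertools.combinations(E1, r))
assert h < ratio(E1)
```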

Consequently, the ball $B(x_4, \frac{5}{2})$ is not $m^G$-calibrable.

In the next example, we see that there exist random walk spaces with sets that do not contain m-Cheeger sets.

Example 3.35 Let $G = (V(G), E(G))$ be the weighted discrete graph defined in Example 3.79 (1), i.e., $V(G) = \{x_0, x_1, \ldots, x_n, \ldots\}$ with weights
$$w_{x_{2n} x_{2n+1}} := \frac{1}{2^n}, \qquad w_{x_{2n+1} x_{2n+2}} := \frac{1}{3^n} \qquad \text{for } n = 0, 1, 2, \ldots,$$

and $w_{x_i,x_j} = 0$ otherwise. If $\Omega := \{x_1, x_2, x_3, \ldots\}$, then $\frac{P_{m^G}(D)}{\nu_G(D)} > 0$ for every $D \subset \Omega$ with $\nu_G(D) > 0$, but, working as in Example 3.79, we get $h_1^m(\Omega) = 0$. Therefore, $\Omega$ has no m-Cheeger set.

It is well known (see [100]) that, for a bounded smooth domain $\Omega \subset \mathbb{R}^N$, the classical Cheeger constant
$$h_1(\Omega) := \inf\left\{ \frac{\mathrm{Per}(E)}{|E|} : E \subset \Omega,\ |E| > 0 \right\} \tag{3.26}$$

is an optimal Poincaré constant; namely, it coincides with the first eigenvalue of the minus 1-Laplacian:
$$h_1(\Omega) = \lambda_1(\Omega) := \inf\left\{ \frac{\displaystyle\int_\Omega |Du| + \int_{\partial\Omega} |u|\,d\mathcal{H}^{N-1}}{\|u\|_{L^1(\Omega)}} : u \in BV(\Omega),\ \|u\|_{L^\infty(\Omega)} = 1 \right\}.$$

In order to get a nonlocal version of this result, we introduce the following constant.

Definition 3.36 For $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$, we define
$$\lambda_1^m(\Omega) := \inf\left\{ TV_m(u) : u \in L^1(X,\nu),\ u = 0 \text{ in } X\setminus\Omega,\ u \geq 0,\ \int_X u(x)\,d\nu(x) = 1 \right\}$$
$$= \inf\left\{ \frac{TV_m(u)}{\displaystyle\int_X u(x)\,d\nu(x)} : u \in L^1(X,\nu)\setminus\{0\},\ u = 0 \text{ in } X\setminus\Omega,\ u \geq 0 \right\}.$$

.

Proof Given .E ∈ B with .ν(E) > 0, we have: .

T Vm (χE ) Pm (E) . = ν(E) χE L1 (X,ν)

m Therefore, . m 1 ( )  h1 ( ). For the opposite inequality, we follow an idea used 1 in [100]. Given .u ∈ L (X, ν) \ {0}, with .u = 0 in .X \ and .u  0, we have:





uL∞ (X,ν)

Pm (Et (u)) ν(Et (u)) dt ν(Et (u)) 0 0  +∞  m ( ) ν(E (u)) dt = h ( ) u(x)dν(x)  hm t 1 1

T Vm (u) = .

+∞

Pm (Et (u)) dt =

0

X

where the first equality follows by the coarea formula (Theorem 3.5) and the last one by Cavalieri’s principle. Taking the infimum over u in the above expression, we m get . m  1 ( )  h1 ( ). Let us recall that, in the local case, a set . ⊂ RN is called calibrable if   Per( ) Per(E) . : E ⊂ , E with finite perimeter, |E| > 0 . = inf |E| | |

128

3 The m-total Variation Flow

The following characterization of convex calibrable sets is proved in [8]. Theorem 3.38 ([8]) Given a bounded convex set . ⊂ RN of class .C 1,1 , the following assertions are equivalent: (a) . is calibrable.  ( ) Du (b) .χ satisfies .−1 χ = Per | | χ , where .1 u := div |Du| . Per( ) . (c) .(N − 1)ess supH∂ (x)  | | x∈∂ In the following results, we see that the nonlocal counterparts of some of the implications in this theorem also hold true in our setting, while others do not. Remark 3.39 (i) Let . ∈ B with .0 < ν( ) < ν(X) and assume that there exists a constant .λ > 0 and a measurable function .τ : X → R such that .τ (x) = 1 for .x ∈ and .

− λτ ∈ m 1 χ in X.

∞ Let us see that .λ = λm . By Theorem 3.13, there exists .g ∈ L (X × X, ν ⊗ mx ) ∞ antisymmetric with .gL (X×X,ν⊗mx )  1 satisfying

 .



g(x, y) dmx (y) = λτ (x)

for ν-a.e x ∈ X

X

and   .



g(x, y)dmx (y) χ (x)dν(x) = Fm (χ ) = Pm ( ). X

X

Then:  λν( ) =

λτ (x)χ (x)dν(x) X

 

.



=−

g(x, y) dmx (y) χ (x)dν(x) X

X

= Pm ( ) and, consequently, λ=

.

Pm ( ) = λm . ν( )

(ii) Let . ∈ B with .0 < ν( ) < ν(X), and .τ : X → R a measurable function with .τ (x) = 1 for .x ∈ . Then:

3.4 m-Cheeger and m-Calibrable Sets .

m − λm τ ∈ 1 χ

129 m in X ⇔ −λm τ ∈ 1 0

in X.

(3.27)

Indeed, the left to right implication follows from the fact that, for .u ∈ L2 (X, ν), ∂Fm (u) ⊂ ∂Fm (0),

.

and, for the converse implication, we have that there exists .g ∈ L∞ (X × X, ν ⊗ mx ) antisymmetric with .gL∞ (X×X,ν⊗mx )  1 and satisfying  .

− λm τ (x) =

g(x, y) dmx (y)

for ν-a.e. x ∈ X.

X

Now, multiplying this equation by .χ and integrating over X with respect to .ν, since .ν is reversible with respect to m and .g is antisymmetric, we get:  Pm ( ) =

.

λm ν( )

=

  =− =

1 2

1  2

λm

τ (x)χ (x)dν(x) X

g(x, y)χ (x)dmx (y)dν(x) X

X

X

 

X

X

X

 

g(x, y)(χ (y) − χ (x))dmx (y)dν(x) |χ (y) − χ (x)| dmx (y)dν(x) = Pm ( ).

Therefore, the previous inequality is, in fact, an equality, thus g(x, y) ∈ sign(χ (y) − χ (x))

.

for (ν ⊗ mx )-a.e. (x, y) ∈ X × X,

and, consequently, .

m − λm τ ∈ 1 χ

in X.
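The chain of equalities above rests on two facts: antisymmetry of $g$ together with reversibility of $\nu$ lets one symmetrize the double integral, and $|g| \leq 1$ then bounds it by $P_m(\Omega)$. A small numerical sanity check on a hypothetical finite space (the path graph, weights, and variable names are assumptions for illustration only):

```python
import random

# Illustrative finite random walk space: nu(x) m_x(y) = w_xy is symmetric,
# so both double integrals below reduce to sums against the edge weights.
random.seed(7)
X = range(5)
W = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}

def w(x, y):
    return W.get(frozenset((x, y)), 0.0)

omega = {1, 2, 3}
chi = lambda x: 1.0 if x in omega else 0.0
P = sum(w(x, y) for x in omega for y in X if y not in omega)  # P_m(Omega)

# an arbitrary antisymmetric g with |g| <= 1
g = [[0.0] * 5 for _ in X]
for x in X:
    for y in X:
        if x < y:
            g[x][y] = random.uniform(-1.0, 1.0)
            g[y][x] = -g[x][y]

# -\int\int g chi_Omega  equals  (1/2)\int\int g (chi(y)-chi(x)),
# and the latter is bounded by P_m(Omega)
lhs = -sum(w(x, y) * g[x][y] * chi(x) for x in X for y in X)
mid = 0.5 * sum(w(x, y) * g[x][y] * (chi(y) - chi(x)) for x in X for y in X)

print(abs(lhs - mid) < 1e-12, mid <= P + 1e-12)
```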

The next result is the nonlocal version of the fact that (a) is equivalent to (b) in Theorem 3.38.

Theorem 3.40 Let $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$. Then, the following assertions are equivalent:

(i) $\Omega$ is m-calibrable;

(ii) there exist $\lambda > 0$ and a measurable function $\tau : X \to \mathbb{R}$ equal to $1$ in $\Omega$ such that
\[
-\lambda \tau \in \Delta_1^m \chi_\Omega \quad \text{in } X; \qquad (3.28)
\]

(iii) $-\lambda_\Omega^m \tau^* \in \Delta_1^m \chi_\Omega$ in $X$, for
\[
\tau^*(x) := \begin{cases} 1 & \text{if } x \in \Omega, \\[1mm] -\dfrac{1}{\lambda_\Omega^m}\, m_x(\Omega) & \text{if } x \in X \setminus \Omega. \end{cases}
\]
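One can sanity-check the normalization built into $\tau^*$: since $\int_{X\setminus\Omega} m_x(\Omega)\, d\nu(x) = P_m(\Omega)$, the function $\tau^*$ has zero $\nu$-mean, which is what makes it a usable test function against $\partial F_m(0)$. A quick check on the same illustrative path-graph space used earlier (all names and weights are toy assumptions):

```python
# Illustrative finite check (not from the book): tau* integrates to zero.
X = range(5)
W = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}

def w(x, y):
    return W.get(frozenset((x, y)), 0.0)

def d(x):  # nu(x) = d(x), the weighted degree
    return sum(w(x, y) for y in X)

omega = {1, 2, 3}
P = sum(w(x, y) for x in omega for y in X if y not in omega)   # P_m(Omega)
lam = P / sum(d(x) for x in omega)                             # lambda_Omega^m

def m_mass(x, A):  # m_x(A)
    return sum(w(x, y) for y in A) / d(x)

tau_star = {x: 1.0 if x in omega else -m_mass(x, omega) / lam for x in X}
integral = sum(tau_star[x] * d(x) for x in X)                  # int_X tau* d(nu)
print(abs(integral) < 1e-12)
```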

Proof Observe that, since $[X, \mathcal{B}, m, \nu]$ is m-connected, by Proposition 1.68, we have $P_m(\Omega) > 0$ and, therefore, $\lambda_\Omega^m > 0$.

(iii) $\Rightarrow$ (ii): Trivial.

(ii) $\Rightarrow$ (i): Suppose that there exists a measurable function $\tau : X \to \mathbb{R}$ equal to $1$ in $\Omega$ satisfying (3.28). Then, by Remark 3.39 (i), $\lambda = \lambda_\Omega^m$. Hence, there exists $g \in L^\infty(X \times X, \nu \otimes m_x)$ antisymmetric with $\|g\|_{L^\infty(X \times X, \nu \otimes m_x)} \leq 1$ satisfying
\[
-\int_X g(x,y)\, dm_x(y) = \lambda_\Omega^m \tau(x) \quad \text{for } \nu\text{-a.e. } x \in X
\]
and
\[
-\int_X \left( \int_X g(x,y)\, dm_x(y) \right) \chi_\Omega(x)\, d\nu(x) = P_m(\Omega).
\]
Then, if $F \in \mathcal{B}$ with $F \subset \Omega$ and $\nu(F) > 0$, since $g$ is antisymmetric, by using the reversibility of $\nu$ with respect to $m$, we get:
\[
\begin{aligned}
\lambda_\Omega^m \nu(F) &= \int_X \lambda_\Omega^m \tau(x)\, \chi_F(x)\, d\nu(x) = -\int_X \int_X g(x,y)\, \chi_F(x)\, dm_x(y)\, d\nu(x) \\
&= \frac{1}{2} \int_X \int_X g(x,y)\left(\chi_F(y) - \chi_F(x)\right) dm_x(y)\, d\nu(x) \leq P_m(F).
\end{aligned}
\]
Therefore, $h_1^m(\Omega) = \lambda_\Omega^m$ and, consequently, $\Omega$ is m-calibrable.

(i) $\Rightarrow$ (iii): Suppose that $\Omega$ is m-calibrable. Take
\[
\tau^*(x) = \begin{cases} 1 & \text{if } x \in \Omega, \\[1mm] -\dfrac{1}{\lambda_\Omega^m}\, m_x(\Omega) & \text{if } x \in X \setminus \Omega. \end{cases}
\]
We claim that $-\lambda_\Omega^m \tau^* \in \Delta_1^m 0$, that is,
\[
\lambda_\Omega^m \tau^* \in \partial F_m(0). \qquad (3.29)
\]


Indeed, take $w \in L^2(X,\nu)$ with $F_m(w) < +\infty$. Since
\[
w(x) = \int_0^{+\infty} \chi_{E_t(w)}(x)\, dt - \int_{-\infty}^0 \left(1 - \chi_{E_t(w)}(x)\right) dt
\]
and
\[
\int_X \tau^*(x)\, d\nu(x) = \int_\Omega 1\, d\nu(x) - \frac{1}{\lambda_\Omega^m} \int_{X\setminus\Omega} m_x(\Omega)\, d\nu(x) = \nu(\Omega) - \frac{1}{\lambda_\Omega^m} P_m(\Omega) = 0,
\]
we have:
\[
\int_X \lambda_\Omega^m \tau^*(x)\, w(x)\, d\nu(x) = \lambda_\Omega^m \int_{-\infty}^{+\infty} \int_X \tau^*(x)\, \chi_{E_t(w)}(x)\, d\nu(x)\, dt. \qquad (3.30)
\]
Now, using that $\tau^* = 1$ in $\Omega$ and that $\Omega$ is m-calibrable, we get that
\[
\begin{aligned}
\lambda_\Omega^m \int_{-\infty}^{+\infty} &\int_X \tau^*(x)\, \chi_{E_t(w)}(x)\, d\nu(x)\, dt \\
&= \lambda_\Omega^m \int_{-\infty}^{+\infty} \nu(E_t(w) \cap \Omega)\, dt + \lambda_\Omega^m \int_{-\infty}^{+\infty} \int_{E_t(w)\setminus\Omega} \tau^*(x)\, d\nu(x)\, dt \\
&\leq \int_{-\infty}^{+\infty} P_m(E_t(w) \cap \Omega)\, dt + \lambda_\Omega^m \int_{-\infty}^{+\infty} \int_{E_t(w)\setminus\Omega} \tau^*(x)\, d\nu(x)\, dt. \qquad (3.31)
\end{aligned}
\]

By Proposition 1.62 and the coarea formula (Theorem 3.5), we get:
\[
\begin{aligned}
\int_{-\infty}^{+\infty} & P_m(E_t(w) \cap \Omega)\, dt \\
&= \int_{-\infty}^{+\infty} P_m(E_t(w) \cap \Omega)\, dt + \int_{-\infty}^{+\infty} P_m(E_t(w) \setminus \Omega)\, dt - \int_{-\infty}^{+\infty} 2 L_m(E_t(w)\setminus\Omega,\, E_t(w)\cap\Omega)\, dt \\
&\quad - \int_{-\infty}^{+\infty} P_m(E_t(w) \setminus \Omega)\, dt + \int_{-\infty}^{+\infty} 2 L_m(E_t(w)\setminus\Omega,\, E_t(w)\cap\Omega)\, dt \\
&= \int_{-\infty}^{+\infty} P_m(E_t(w))\, dt - \int_{-\infty}^{+\infty} P_m(E_t(w)\setminus\Omega)\, dt + \int_{-\infty}^{+\infty} 2 L_m(E_t(w)\setminus\Omega,\, E_t(w)\cap\Omega)\, dt \\
&= F_m(w) - \int_{-\infty}^{+\infty} P_m(E_t(w)\setminus\Omega)\, dt + \int_{-\infty}^{+\infty} 2 L_m(E_t(w)\setminus\Omega,\, E_t(w)\cap\Omega)\, dt.
\end{aligned}
\]


Hence, if we prove that
\[
I := -\int_{-\infty}^{+\infty} P_m(E_t(w)\setminus\Omega)\, dt + \int_{-\infty}^{+\infty} 2 L_m(E_t(w)\setminus\Omega,\, E_t(w)\cap\Omega)\, dt + \lambda_\Omega^m \int_{-\infty}^{+\infty} \int_{E_t(w)\setminus\Omega} \tau^*(x)\, d\nu(x)\, dt
\]
is non-positive then, by (3.30) and (3.31), we get:
\[
\int_X \lambda_\Omega^m \tau^*(x)\, w(x)\, d\nu(x) \leq F_m(w),
\]
which proves (3.29). Now, since
\[
\begin{aligned}
P_m(E_t(w)\setminus\Omega) &= L_m\big(E_t(w)\setminus\Omega,\; X \setminus (E_t(w)\setminus\Omega)\big) \\
&= L_m\big(E_t(w)\setminus\Omega,\; (E_t(w)\cap\Omega) \cup (X\setminus E_t(w))\big) \\
&= L_m\big(E_t(w)\setminus\Omega,\; E_t(w)\cap\Omega\big) + L_m\big(E_t(w)\setminus\Omega,\; X\setminus E_t(w)\big),
\end{aligned}
\]
and $\tau^*(x) = -\frac{1}{\lambda_\Omega^m}\, m_x(\Omega)$ for $x \in X\setminus\Omega$, we have:
\[
\begin{aligned}
I &= -\int_{-\infty}^{+\infty} L_m\big(E_t(w)\setminus\Omega,\; X\setminus E_t(w)\big)\, dt + \int_{-\infty}^{+\infty} L_m\big(E_t(w)\setminus\Omega,\; E_t(w)\cap\Omega\big)\, dt \\
&\quad - \int_{-\infty}^{+\infty} \int_{E_t(w)\setminus\Omega} \int_\Omega dm_x(y)\, d\nu(x)\, dt \\
&\leq \int_{-\infty}^{+\infty} L_m\big(E_t(w)\setminus\Omega,\; E_t(w)\cap\Omega\big)\, dt - \int_{-\infty}^{+\infty} L_m\big(E_t(w)\setminus\Omega,\; \Omega\big)\, dt \leq 0,
\end{aligned}
\]
as desired. Finally, by (3.27), (3.29) yields:
\[
-\lambda_\Omega^m \tau^* \in \Delta_1^m \chi_\Omega \quad \text{in } X,
\]
and this concludes the proof. $\square$
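The set-algebra identity behind the last computation, $P_m(E) = P_m(E\cap\Omega) + P_m(E\setminus\Omega) - 2L_m(E\cap\Omega,\, E\setminus\Omega)$ (a consequence of Proposition 1.62), is easy to confirm exhaustively on a finite space. A hypothetical check on the toy path-graph space used earlier, over every $E \subseteq X$:

```python
from itertools import combinations

# Illustrative exhaustive check of
#   P_m(E) = P_m(E ∩ Omega) + P_m(E \ Omega) - 2 L_m(E ∩ Omega, E \ Omega)
X = list(range(5))
W = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}

def w(x, y):
    return W.get(frozenset((x, y)), 0.0)

def L(A, B):  # L_m(A,B) = sum_{x in A, y in B} nu(x) m_x(y) = sum of w_xy
    return sum(w(x, y) for x in A for y in B)

def P(A):     # P_m(A) = L_m(A, X \ A)
    return L(A, [y for y in X if y not in A])

omega = {1, 2, 3}
ok = True
for k in range(len(X) + 1):
    for E in combinations(X, k):
        A = [x for x in E if x in omega]       # E ∩ Omega
        B = [x for x in E if x not in omega]   # E \ Omega
        ok = ok and abs(P(E) - (P(A) + P(B) - 2.0 * L(A, B))) < 1e-12
print(ok)
```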

Even though, in principle, the m-calibrability of a set is a nonlocal concept which may, therefore, depend on the whole of $X$, in the next result we see that the m-calibrability of a set depends only on the set itself.

Theorem 3.41 Let $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$. Then, $\Omega$ is m-calibrable if, and only if, there exists an antisymmetric function $g$ in $\Omega \times \Omega$ such that
\[
-1 \leq g(x,y) \leq 1 \quad \text{for } (\nu \otimes m_x)\text{-a.e. } (x,y) \in \Omega \times \Omega, \qquad (3.32)
\]
and
\[
\lambda_\Omega^m = -\int_\Omega g(x,y)\, dm_x(y) + 1 - m_x(\Omega) \quad \text{for } \nu\text{-a.e. } x \in \Omega. \qquad (3.33)
\]
Observe that, on account of (1.8), (3.33) is equivalent to
\[
m_x(\Omega) = \frac{1}{\nu(\Omega)} \int_\Omega m_z(\Omega)\, d\nu(z) - \int_\Omega g(x,y)\, dm_x(y) \quad \text{for } \nu\text{-a.e. } x \in \Omega.
\]

Proof By Theorem 3.40, we have that $\Omega$ is m-calibrable if, and only if, there exists $g \in L^\infty(X \times X, \nu \otimes m_x)$ antisymmetric with $g(x,y) \in \operatorname{sign}(\chi_\Omega(y) - \chi_\Omega(x))$ for $(\nu \otimes m_x)$-a.e. $(x,y) \in X \times X$, satisfying
\[
-\int_X g(x,y)\, dm_x(y) = \lambda_\Omega^m \quad \text{for } \nu\text{-a.e. } x \in \Omega
\]
and
\[
m_x(\Omega) = \int_X g(x,y)\, dm_x(y) \quad \text{for } \nu\text{-a.e. } x \in X \setminus \Omega.
\]
Now, having in mind that $g(x,y) = -1$ if $x \in \Omega$ and $y \in X \setminus \Omega$, we have that, for $\nu$-a.e. $x \in \Omega$,
\[
\begin{aligned}
\lambda_\Omega^m &= -\int_X g(x,y)\, dm_x(y) = -\int_\Omega g(x,y)\, dm_x(y) - \int_{X\setminus\Omega} g(x,y)\, dm_x(y) \\
&= -\int_\Omega g(x,y)\, dm_x(y) + m_x(X\setminus\Omega) = -\int_\Omega g(x,y)\, dm_x(y) + 1 - m_x(\Omega).
\end{aligned}
\]
Therefore, we have obtained (3.32) and (3.33).

Let us now suppose that we have an antisymmetric function $g$ in $\Omega \times \Omega$ satisfying (3.32) and (3.33). To check that $\Omega$ is m-calibrable, we need to find $\tilde g(x,y) \in \operatorname{sign}(\chi_\Omega(y) - \chi_\Omega(x))$ antisymmetric such that
\[
\begin{cases}
-\lambda_\Omega^m = \displaystyle\int_X \tilde g(x,y)\, dm_x(y), & x \in \Omega, \\[3mm]
m_x(\Omega) = \displaystyle\int_X \tilde g(x,y)\, dm_x(y), & x \in X \setminus \Omega,
\end{cases}
\]
which is equivalent to
\[
\begin{cases}
-\lambda_\Omega^m = \displaystyle\int_\Omega \tilde g(x,y)\, dm_x(y) - m_x(X\setminus\Omega), & x \in \Omega, \\[3mm]
m_x(\Omega) = \displaystyle\int_{X\setminus\Omega} \tilde g(x,y)\, dm_x(y) + m_x(\Omega), & x \in X \setminus \Omega,
\end{cases}
\]
since, necessarily, $\tilde g(x,y) = -1$ for $x \in \Omega$ and $y \in X \setminus \Omega$, and $\tilde g(x,y) = 1$ for $x \in X \setminus \Omega$ and $y \in \Omega$. Now, the second equality in this system is satisfied if we take $\tilde g(x,y) = 0$ for $x, y \in X \setminus \Omega$, and the first one is just a rewrite of (3.33) if we take $\tilde g(x,y) = g(x,y)$ for $x, y \in \Omega$. $\square$
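Theorem 3.41 turns m-calibrability into a feasibility problem posed purely on $\Omega \times \Omega$. On the illustrative path-graph space used earlier (with $\Omega = \{1,2,3\}$ and $\lambda_\Omega^m = 1/3$), a suitable $g$ can be found by hand, and the extension $\tilde g$ from the proof then satisfies the full divergence system; the space, the particular $g$, and all names below are toy assumptions, not data from the book:

```python
# Illustrative calibrability certificate (Theorem 3.41) on a toy path graph.
X = range(5)
W = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 4)]}

def w(x, y):
    return W.get(frozenset((x, y)), 0.0)

def d(x):
    return sum(w(x, y) for y in X)

def m(x, y):  # m_x({y})
    return w(x, y) / d(x)

omega = {1, 2, 3}
P = sum(w(x, y) for x in omega for y in X if y not in omega)
lam = P / sum(d(x) for x in omega)   # lambda_Omega^m = 1/3 for this example

# antisymmetric g on omega x omega, found by hand, satisfying (3.33)
g = {(1, 2): 1/3, (2, 1): -1/3, (3, 2): 1/3, (2, 3): -1/3, (1, 3): 0.0, (3, 1): 0.0}
for x in omega:
    mxo = sum(m(x, y) for y in omega)
    lhs = -sum(g[(x, y)] * m(x, y) for y in omega if y != x) + 1.0 - mxo
    assert abs(lam - lhs) < 1e-12   # condition (3.33)

# extend to g~ as in the proof: -1 from omega outward, +1 inward, 0 outside
def gt(x, y):
    if x in omega and y in omega:
        return g.get((x, y), 0.0)
    if x in omega:
        return -1.0
    if y in omega:
        return 1.0
    return 0.0

for x in X:
    div = sum(gt(x, y) * m(x, y) for y in X)
    target = -lam if x in omega else sum(m(x, y) for y in omega)
    assert abs(div - target) < 1e-12   # full divergence system of the proof
print("calibrability certificate verified")
```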

Corollary 3.42 A set $\Omega \in \mathcal{B}$ is m-calibrable if, and only if, it is $m^{\Omega_m}$-calibrable as a subset of $[\Omega_m, \mathcal{B}_{\Omega_m}, m^{\Omega_m}, \nu_{\Omega_m}]$ (recall Example 1.47).

Remark 3.43 (i) Let $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < \nu(X)$. Observe that, as we have proved,
\[
\Omega \ \text{is m-calibrable} \quad \Longleftrightarrow \quad -\lambda_\Omega^m \chi_\Omega + m_{(\cdot)}(\Omega)\, \chi_{X\setminus\Omega} \in \Delta_1^m \chi_\Omega. \qquad (3.34)
\]

(ii) Let $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < 1$. Let us see that if $\nu$ is a probability measure, then the equation
\[
-\lambda_\Omega^m \chi_\Omega \in \Delta_1^m \chi_\Omega \quad \text{in } X
\]
does not hold true. Indeed, suppose that
\[
-\lambda_\Omega^m \chi_\Omega + h\, \chi_{X\setminus\Omega} \in \Delta_1^m \chi_\Omega
\]
for some measurable function $h : X \to \mathbb{R}$. Then, there exists $g \in L^\infty(X \times X, \nu \otimes m_x)$ antisymmetric with
\[
g(x,y) \in \operatorname{sign}(\chi_\Omega(y) - \chi_\Omega(x)) \quad \text{for } (\nu \otimes m_x)\text{-a.e. } (x,y) \in X \times X
\]
and
\[
-\lambda_\Omega^m \chi_\Omega(x) + h(x)\, \chi_{X\setminus\Omega}(x) = \int_X g(x,y)\, dm_x(y) \quad \text{for } \nu\text{-a.e. } x \in X. \qquad (3.35)
\]
Hence, since $g$ is $\nu \otimes m_x$-integrable, we have that
\[
\int_{X\setminus\Omega} h(x)\, d\nu(x) = P_m(\Omega).
\]


Indeed, integrating (3.35) over $X$ with respect to $\nu$, we get:
\[
-\underbrace{\lambda_\Omega^m \nu(\Omega)}_{=P_m(\Omega)} + \int_{X\setminus\Omega} h(x)\, d\nu(x) = \int_X \int_X g(x,y)\, dm_x(y)\, d\nu(x).
\]
Now, since $g$ is antisymmetric and $\nu \otimes m_x$-integrable, the integral on the right-hand side is zero and, consequently, we get
\[
\int_{X\setminus\Omega} h(x)\, d\nu(x) = P_m(\Omega). \qquad (3.36)
\]
Then, since $[X, \mathcal{B}, m, \nu]$ is m-connected, Theorem 1.68 yields that $P_m(\Omega) > 0$; thus, by (3.36), $h$ is non-$\nu$-null. Therefore, if $\nu$ is a probability measure (so that, in particular, $g$ is $\nu \otimes m_x$-integrable), the equation
\[
-\lambda_\Omega^m \chi_\Omega \in \Delta_1^m \chi_\Omega \quad \text{in } X \qquad (3.37)
\]

does not hold true for any set $\Omega \in \mathcal{B}$ with $0 < \nu(\Omega) < 1$. However, if $\nu(X) = +\infty$, then (3.37) may be satisfied, as shown in the next example.

Example 3.44 Consider the metric random walk space $[\mathbb{R}, d, m^J, \mathcal{L}^1]$ with $J = \frac{1}{2}\chi_{[-1,1]}$ (as defined in Example 1.40). Let us see that
\[
-\lambda_{[-1,1]}^{m^J} \chi_{[-1,1]} \in \Delta_1^{m^J} \chi_{[-1,1]},
\]
where $\lambda_{[-1,1]}^{m^J} = \frac{1}{4}$. Take $g(x,y)$ to be antisymmetric and defined as follows for $y < x$:

1 g(x, y) := − χ{y