
Andrey A. Dorogovtsev
Measure-valued Processes and Stochastic Flows

De Gruyter Series in Probability and Stochastics



Edited by
Itai Benjamini, Israel
Jean Bertoin, Switzerland
Michel Ledoux, France
René L. Schilling, Germany

Volume 3

Andrey A. Dorogovtsev

Measure-valued Processes and Stochastic Flows

Mathematics Subject Classification 2020
Primary: 60H17, 60H25; Secondary: 60J90, 60G57

Author
Prof. Dr. Andrey A. Dorogovtsev
National Academy of Sciences of Ukraine
Institute of Mathematics
Department of Theory of Random Processes
Tereshenkovskaya str., 3
Kiev 01601
Ukraine
[email protected]

ISBN 978-3-11-099758-3
e-ISBN (PDF) 978-3-11-098651-8
e-ISBN (EPUB) 978-3-11-098655-6
ISSN 2512-9007
Library of Congress Control Number: 2023942986

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2024 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com



To my parents

Preface

This monograph is the English translation of the author's book "Measure-valued processes and stochastic flows,"¹ which was published by the Institute of Mathematics of the National Academy of Sciences of Ukraine in 2007. The material is mostly the same, but since 2007 new results in the area have been obtained by the author, his PhD students and colleagues. Accordingly, an Appendix has been added at the end of the monograph. It is a survey article by the author which covers some recent developments in stochastic flows and measure-valued processes.

The main objects of investigation in this book are stochastic flows, random measures on spaces of functions, and measure-valued Markov processes. Each of these subjects is connected with a variety of new ideas and methods, which arise both from applications and from the internal logic of the development of the theory of random processes. For example, stochastic flows arise naturally as a generalization of homeomorphic flows of solutions to ordinary differential equations. But despite this similarity, stochastic flows have new properties. In particular, solutions to ordinary stochastic differential equations with good coefficients can have more than linear growth with respect to the spatial variable. On the other hand, a stochastic flow in a finite-dimensional Euclidean space can be treated as a family of diffusion or more general Markov processes which are connected in some way. This point of view arises in connection with the development of mathematical models for interacting particle systems. In this book, we consider stochastic flows whose evolution depends on the measure transferred by the flow. This measure can be treated as the mass distribution of an interacting particle system. Constructed in such a way, the measure-valued process is Markovian. It must be noted that Markov measure-valued processes have been actively studied since the 1980s. There are several known classes of such processes. One of these describes systems of independent branching Markov processes. Such processes have changing mass and are often called superprocesses [38, 37, 11]. Superprocesses are used for the probabilistic representation of solutions to nonlinear parabolic equations. Another well-known class of measure-valued processes is the Fleming–Viot process. Here, the total mass remains constant, and the evolution of the mass distribution is achieved via random jumps of particles over the space. In mathematical biology, such jumps are related to mutations. One of the simplest models of this type is the Ehrenfest model [6, 9]. In this book, we unite stochastic flows and measure-valued processes. Namely, the future infinitesimal increments of the stochastic flow depend on the mass

1 "Meroznachnye protsessy i stokhasticheskie potoki" (Russian) [Measure-valued processes and stochastic flows]. Proceedings of the Institute of Mathematics of the NAS of Ukraine. Mathematics and its Applications, 66. Kiev, 2007. 290 pp. ISBN 978-966-02-4540-2.

distribution which it carries. The investigation of these stochastic flows allows us to develop a theory for the description of different interacting particle systems and to study families of correlated Markov processes. In certain cases, the evolution of mass can be complicated. For example, one particle can turn into several or even into a cloud. To describe such a situation, it is convenient to use measures on the space of functions with values in the initial phase space. Such an idea is commonly used in fluid dynamics. In the monograph [93], space-time statistical solutions to the Navier–Stokes equation were introduced. These solutions are measures on the space of trajectories. In more general situations, analogous measures as solutions to evolution equations in a Hilbert space were introduced by Young [92, 54]. Random measures on the space of trajectories are studied in this book from different points of view. We consider the structure of random measures which are stationary or adapted to the filtration of the Wiener process. Equations driven by random measures on the space of trajectories are a natural generalization of the equations for stochastic flows with interaction. To treat such equations, we derive a Fubini theorem for Itô stochastic integrals and integrals with respect to an adapted random measure. Also, in Chapter 4, we propose the adapted random measure as a substitute for the weak solution to a stochastic differential equation. It appears that this measure, as a solution to the initial stochastic differential equation, satisfies the Hopf system of equations. In accordance with these ideas, the book is divided into chapters. The first chapter contains examples which lead to the notion of a flow with interaction. Since the interaction is expressed by the dependence of the trajectory of each particle on the total mass distribution, we need some facts about metric spaces of probability measures. So, we present basic notions and statements about the Wasserstein distance. Also, in Chapter 1, random measures on the space of trajectories are considered. We investigate the structure of such measures in the stationary case or when the random measure is adapted to the Wiener flow of σ-fields. In Chapter 2, we introduce a new class of stochastic differential equations in the finite-dimensional Euclidean space. These are equations with interaction. In such equations, the coefficients depend on the measure which is transferred by the flow of solutions. The existence of a strong solution and its properties are discussed. The solution to an equation with interaction consists of two objects: one is the stochastic flow, and the other is the measure-valued process which describes the transportation of the mass by the flow. The obtained measure-valued process has the Markov property. Also, finite sets of particles together with the total mass distribution form a Markov process. Consequently, it is natural to define evolutionary Markov measure-valued processes as processes with the above-mentioned properties. Evolutionary measure-valued processes can be built with the help of a family of consistent stochastic kernels which describes the motion of finite sets of particles depending on the total mass distribution. The form of the infinitesimal operator for an evolutionary measure-valued process is also established. It appears that in certain cases the stochastic flow


with interaction can be obtained using a random change of time in the flow of solutions to ordinary stochastic differential equations. This means that the interaction leads to a change in the speed of the particles, which move along the trajectories of the initial dynamical system. In Chapter 4, random measures on the space of functions are used for the description of the mass evolution. Here, we present an analog of the Fubini theorem for the integral with respect to an adapted random measure and the Itô integral. Since the random field which is integrated with respect to the random measure is now defined on the infinite-dimensional space of functions, the choice of a measurable modification can influence the value of the integral. This difficulty is overcome by using finite-dimensional approximations. In this chapter, the random measure adapted to the flow of σ-fields generated by the Wiener process is proposed as a substitute for the weak solution to a stochastic differential equation. Chapter 5 is devoted mostly to the question of the existence of stationary solutions to equations with interaction. We begin with new criteria of weak compactness for measure-valued processes. There are many such criteria in the literature [9, 38], but we propose conditions which are easy to check for processes that arise as solutions to equations with interaction. The sufficient conditions for the existence of a stationary solution to an equation with interaction are as follows. As for ordinary stochastic differential equations, one can suppose the presence of an external force which pushes all particles toward the origin (for example) and dominates, in a certain sense, the interaction and the noise perturbations. It appears that a stationary solution exists in this case but has a very poor structure. Namely, the corresponding random measures at every moment are δ-measures. All particles collapse into one point, which moves in a stationary way. To avoid such effects, two different approaches are proposed in Chapter 5. We introduce a set of attracting centers. Each particle has its own center of attraction. In this case, the stationary solution has a more complicated structure. Also, when only interaction and noise perturbations are present, a stationary regime is not expected. For this reason, we introduce the new notion of shift-compactness for a family of measures. Shift-compactness means that when each measure from the family is shifted by a certain vector, the obtained family becomes weakly compact. This property of the measure-valued solutions to an equation with interaction can serve as a mathematical description of the so-called partial synchronization phenomenon. In Chapter 6, we try to consider evolutionary measure-valued processes on abstract spaces. The first step is the description of equations with interaction in an infinite-dimensional Hilbert space. Here, a new problem arises in comparison with the finite-dimensional Euclidean space. Namely, it cannot be expected that a stochastic flow on an infinite-dimensional space will have a continuous modification. Using standard arguments, one can prove only the existence of a measurable modification. But in an infinite-dimensional space, measurable modifications can be very different. To avoid such difficulties, we propose to consider conditional distributions instead of transferred measures. The obtained equations are very close to mean-field or McKean–Vlasov

equations. This occurs only in Chapter 6. Let us note (this is an appropriate place for such a remark) that equations with interaction differ from the McKean–Vlasov equation because their aim is to describe simultaneously the motion of all particles in the space and their total mass distribution. In contrast with the McKean–Vlasov equation, the equation with interaction describes a flow on the initial phase space. At the end of Chapter 6, local times for Markov measure-valued processes are discussed. Chapter 7 is devoted to a special stochastic flow with singular interaction. It is called the Arratia flow and can be informally described as a family of Brownian motions starting independently from each point of the real line and coalescing when they meet. It was introduced in 1979 by R. Arratia as a weak limit of scaled coalescing random walks. In Chapter 5, it appears as the weak limit of the flows which correspond to stochastic differential equations. Since an uncountable family of independent random processes cannot exist on a good probability space, one expects that at any positive moment of time the particles of the Arratia flow coalesce into a countable set, and this is indeed true. In the first section, we prove that the total time of free motion of all particles which start from a bounded interval is finite. This allows us to prove an analog of the Girsanov theorem for the Arratia flow. This monograph is an English version of the author's book, which was published in Russian at the Institute of Mathematics of the NAS of Ukraine in 2007. Since then, new results have been obtained by the author, his students and colleagues. A survey article on recent achievements in the field of stochastic flows is included in this book. In addition, this book contains exercises devoted to technical details or interesting facts. For an understanding of the material, it is enough to know basic courses in random processes, functional analysis and measure theory. This book can serve as a basis for courses for both Master and PhD students, such as those delivered by the author several times at the Institute of Mathematics of the NAS of Ukraine, National Taras Shevchenko University and Jilin University.

Contents

Preface

1 Examples of random measures and flows in the description of dynamical systems
1.1 Examples of stochastic flows with interaction
1.2 Random measures and maps
1.3 Stochastic kernels on the spaces of measures
1.4 Random measures on the space of trajectories

2 Stochastic differential equations for the flows with interactions
2.1 Equation with interactions
2.2 The Brownian sheet and related stochastic calculus
2.3 The existence of the solution for the equation with the interaction
2.4 The stochastic flows with interaction and the moments of initial mass distribution

3 The evolutionary measure-valued processes
3.1 Construction of the evolutionary measure-valued processes with the help of transition probabilities
3.2 Evolutionary measure-valued processes on the discrete space
3.3 The stochastic flows with interaction and the evolutionary processes
3.4 The stochastic semigroups and evolutionary processes
3.5 The generator of the evolutionary process

4 Stochastic differential equations driven by the random measures on the space of trajectories
4.1 Definition of the integral with respect to the random measure
4.2 The equations driven by the random measures
4.3 Random measures as solutions to stochastic differential equations
4.4 Equations with the random measures on the space of trajectories

5 Stationary measure-valued processes
5.1 Weak compactness of measure-valued processes
5.2 SDE on the real line. Existence of the stationary solution
5.3 The stationary solution in the presence of motionless attracting centers
5.4 Shift compactness of the random measures
5.5 Weak limits of the processes with the interaction

6 Evolutionary measure-valued processes on the infinite-dimensional space
6.1 Hilbert-valued càdlàg processes and their conditional distributions
6.2 Stochastic differential equation for the measure-valued process
6.3 The local times for the measure-valued processes related to the stochastic flow with interaction

7 Stochastic calculus for flows with singular interaction
7.1 Total time of free motion in Arratia flow
7.2 Stochastic integral with respect to Arratia flow
7.3 Girsanov theorem for Arratia flow

8 Historical comments

9 Appendix
9.1 Stochastic flows and measure-valued processes
9.2 Some properties of stochastic flows
9.3 Gaussian properties of the Arratia flow
9.4 Stochastic semigroups and widths
9.5 Discrete time approximation of coalescing stochastic flows
9.6 The iterated logarithm law and the fractional step method
9.7 Approximations with SDEs
9.8 Point densities
9.9 Brownian particles with singular interaction
9.10 Random dynamical systems generated by coalescing stochastic flows on the real line
9.11 Stationary points in coalescing stochastic flows
9.12 Duality for coalescing stochastic flows on the real line

Bibliography
Index

1 Examples of random measures and flows in the description of dynamical systems

1.1 Examples of stochastic flows with interaction

This chapter is introductory in character. Its aim is to show how the random motion of interacting particles can be described via their total mass distribution. The evolution of this mass distribution can be investigated using the flow which carries it. We begin the chapter with the case when the system of particles forms a Markov chain.

Example 1.1.1. Markov chains and stochastic flows. Let (X, ρ) be a complete separable metric space with a Borel σ-field ℬ. A Markov chain on such a space can be defined using a transition kernel K : X × ℬ → [0; 1]. For every x ∈ X, K(x, ⋅) is a probability measure on ℬ, and for every Δ ∈ ℬ, K(⋅, Δ) is a measurable function on X. Starting from the point u ∈ X, one can define the Markov chain {xn, n ≥ 0} using K as a transition kernel:

x0 = u,  P{xn+1 ∈ Δ | x0, . . . , xn} = K(xn, Δ),  n ≥ 0.

The Markov chain {xn; n ≥ 0} can be viewed as a random motion on X according to the rule K. At first glance, this situation looks similar to the deterministic dynamical system driven by an ordinary differential equation in the Euclidean space ℝd:

dx(t) = a(x(t))dt.

This analogy is fruitful and leads to various deep results. But there is also an essential difference between these two situations. The "probability law" K does not govern the joint motion of particles which start from different points of the space. In fact, it does not define it at all. In contrast, the differential equation, under relatively weak conditions on its right-hand side, gives us a family of solutions starting from every point of the space. In order to obtain the same possibility for the Markov chain, let us recall the well-known representation for a stochastic kernel on a complete separable space [41]. There exists a measurable function f : X × [0; 1] → X such that for a random variable ξ uniformly distributed on [0; 1] and for arbitrary u ∈ X, Δ ∈ ℬ,

K(u, Δ) = P{f(u, ξ) ∈ Δ}.

Let {ξn; n ≥ 1} be independent random variables uniformly distributed on [0; 1]. Consider the sequence {un; n ≥ 1} obtained via the recurrence

un+1 = f(un, ξn),  n ≥ 1.  (1.1)

This is a Markov chain with transition kernel K. Note that now (1.1) describes the joint behavior of sequences which start from different points of the space. Namely, for u1, . . . , um ∈ X, we can consider the recurrent sequence

(u^1_{n+1}, . . . , u^m_{n+1}) = (f(u^1_n, ξn), . . . , f(u^m_n, ξn)),  n ≥ 1,  (1.2)

which is a Markov chain in X^m. The joint distribution of the sequences {u^k_n; n ≥ 1}, k = 1, . . . , m is not determined by the kernel K. It depends on our choice of the function f.

Exercise 1.1.1. Let X = {0, 1}. Now the stochastic kernel K is a matrix. Consider the case

K = ( 1/2  1/2
      1/2  1/2 ).

Build two different functions f related to K for which the sequences (1.2) starting from 0 and 1 have different behavior. (A simulation sketch illustrating the effect of the choice of f is given below.)

Roughly speaking, the function f is the rule of motion on X. Now we can build a stochastic flow on X as a sequence of random maps determined by the recurrent equation

g1(u) = u,  gn+1(u) = f(gn(u), ξn),  n ≥ 1, u ∈ X.
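The following minimal Python sketch (ours, not from the book; the names f1, f2, run_coupled are hypothetical) shows how one shared noise sequence {ξn} drives the coupled recursion (1.2), and how two different representations f of the same kernel K from Exercise 1.1.1 produce different joint behavior even though each single chain has the same law.

```python
import random

# Transition kernel K on X = {0, 1}: K(u, .) = (1/2, 1/2) for both u.
# Two different representations f with P{f(u, xi) = j} = 1/2 for j = 0, 1.

def f1(u, s):
    # Ignores the current state: both chains jump to the same point,
    # so the two sequences coincide after the first step.
    return 1 if s > 0.5 else 0

def f2(u, s):
    # Flips the current state with probability 1/2: chains started at
    # 0 and 1 keep opposite values forever, although each one is still
    # a Markov chain with kernel K.
    return (u + (1 if s > 0.5 else 0)) % 2

def run_coupled(f, n_steps=10, seed=1):
    # One shared noise sequence {xi_n} drives both chains, as in (1.2).
    rng = random.Random(seed)
    x, y = 0, 1
    path = [(x, y)]
    for _ in range(n_steps):
        s = rng.random()          # xi_n, uniform on [0; 1]
        x, y = f(x, s), f(y, s)
        path.append((x, y))
    return path

print(run_coupled(f1))  # pairs coincide from step 1 on
print(run_coupled(f2))  # pairs always differ
```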

The flow {gn; n ≥ 1} has the following properties:
(1) For every u ∈ X, the sequence {gn(u); n ≥ 1} is a Markov chain with transition kernel K.
(2) For every u1, . . . , um ∈ X, the sequence {(gn(u1), . . . , gn(um)), n ≥ 1} is a Markov chain in X^m.

It is easy to see that the distribution of g is determined by the family of transition probabilities of the following kind:

Qm(u1, . . . , um)(Δ) = P{(f(u1, ξ), . . . , f(um, ξ)) ∈ Δ}.

Here, u1, . . . , um ∈ X, and Δ is a Borel subset of X^m, m ≥ 1. The next simple lemma shows that the family Q defines a flow on X.

Lemma 1.1.1. Let {Qm(u1, . . . , um); u1, . . . , um ∈ X, m ≥ 1} be a family of consistent finite-dimensional distributions on X indexed by the elements of X. Suppose that for every u ∈ X,

lim_{v→u} ∬_{X2} (ρ(p, q) / (1 + ρ(p, q))) Q2(u, v)(dp, dq) = 0.  (1.3)


Then there exists a sequence of random maps {gn; n ≥ 1} on X such that for every u1, . . . , um ∈ X, the sequence {(gn(u1), . . . , gn(um)); n ≥ 1} is a Markov chain with Qm as the transition probability.

Proof. Due to the lemma condition, there exists a measurable random process η on X with values in X which has {Qm} as its finite-dimensional distributions. Consider a sequence {ξn; n ≥ 1} of independent copies of η. Then define

g1(u) = u,  gn+1(u) = ξn(gn(u)),  n ≥ 1, u ∈ X.

One can easily check that {gn; n ≥ 1} has the desired properties. The lemma is proved.

From Exercise 1.1.1, one can see that for different families {Qm} the law of motion of the single particles can be the same. But there are situations when the law of the joint motion of the particles can be obtained from the original description of the model, exactly as in the deterministic case.

Example 1.1.2. Stochastic flow corresponding to an SDE. Consider the usual stochastic differential equation in ℝd:

dx(t) = a(x(t))dt + b(x(t))dw(t),  (1.4)

where w is a Wiener process in ℝd, and the coefficients a and b have continuous bounded derivatives. It is known (cf. [67]) that under such conditions there is a family of random homeomorphisms {φs,t; 0 ≤ s ≤ t < +∞} of ℝd with the properties:
(1) φs,s(u) = u, u ∈ ℝd, s ≥ 0.
(2) For arbitrary u ∈ ℝd, the random process {φs,t(u), t ≥ s} is the solution to the Cauchy problem for (1.4) with the initial condition x(s) = u.

Now φs,t(u) is a diffusion process which has the same generator for every u. But in contrast to the previous example, we have a description of the joint motion of the particles starting from different points of the space. In fact, for arbitrary x1, . . . , xm ∈ ℝd, the random process {(φ0,t(x1), . . . , φ0,t(xm)), t ≥ 0} is a diffusion process in ℝd×m. Note that the flow {φ} is not the unique family of random maps of ℝd under which the single particles have a diffusion motion with a given generator. To see this, consider the case d = 2. Assume that a = 0 and b is the identity matrix. Then for all u ∈ ℝ2, φ0,t(u) = u + w(t), i. e., {φ0,t(u); t ≥ 0} is the Wiener process starting at the point u. Now suppose that a = 0 and

b(x) = ( sin x1   cos x1
         cos x1  −sin x1 ),

where x = (x1, x2). Denote the flow corresponding to these coefficients by {ψs,t; 0 ≤ s ≤ t < +∞}. Then for arbitrary u ∈ ℝ2 and λ ∈ ℝ2, the scalar process {(λ, ψ0,t(u)); t ≥ 0} is a continuous martingale with respect to the filtration of the initial Wiener process, starting at (λ, u), with characteristic ‖λ‖2 ⋅ t. So, {ψ0,t(u); t ≥ 0} is a Wiener process starting at u (cf. [73]). Pay attention to the fact that now the joint motion of two particles can be different from the previous case. Indeed, the generators of the processes {(φ0,t(x1), φ0,t(x2)); t ≥ 0} and {(ψ0,t(x1), ψ0,t(x2)); t ≥ 0} are different.

μ0 = ∑ ak δxk , k=1

then μt can be viewed as the mass distribution of the particles, which start from the points x1 , . . . , xm and move accordingly to (1.4). The same terminology will be used in the general case when the initial measure is arbitrary. We will say that the process {μt ; t ≥ 0} describes the evolution of the mass when the points of the space move accordingly with (1.4). Note that in this case the behavior of the single particle in the phase space does not depend on the whole mass distribution. One of the main purposes of this book is to define and study such stochastic flows where the motion of the single particle and

1.1 Examples of stochastic flows with interaction

� 5

of the set of particles will depend on the mass, which is carried by all particles. Also, it is necessary to emphasize that the measure-valued process {μt ; t ≥ 0} does not contain all the information about the mass transportation. To see this, let us consider the following simple example in ℝ2 . Consider two deterministic differential equations: dx(t) = 0,

0 1

with T = (

dy(t) = Ty(t)dt,

−1 ). 0

Let μ0 be the standard Gaussian measure in ℝ2 with the zero mean and identity covariation operator. It can be easily seen that the stochastic flows corresponding to our equations are of the following kind: φ0,t (u) = u, ψ0,t (u) = Qt u,

u ∈ ℝ2 , t ≥ 0,

where cos t sin t

Qt = (

− sin t ). cos t

Consequently, −1 ∀t ≥ 0 : μ0 ∘ φ−1 0,t = μ0 ∘ ψ0,t = μ0 .

That is why in some cases it is useful to gather the information about the stochastic flows and the initial mass distribution and not build the measure-valued process but the random measure on the space of the trajectories of stochastic flow. In our case, this measure can be defined via its finite-dimensional distributions: νt1 ,...,tn (Δ) = μ0 (u : (φ0,t1 (u), . . . , φ0,tn (u)) ∈ Δ),

(1.5)

where Δ is a Borel subset of ℝd×n . Exercise 1.1.4. Check that under fixed ω from the probability space the set {νt1 ⋅⋅⋅tn } is the family of the consistent finite-dimensional distributions. In the future, corresponding random measures on X[0;+∞) will be studied in this book. Now consider an example of the stochastic flow and related measure-valued process where the motion of the single particle depends on the whole mass distribution. Example 1.1.4. Heavy particles on the finite space. Let the space X be finite, X = {1, . . . , N}. Consider the set of stochastic matrices on X, N

P(k) = (pij (k))ij=1 ,

k = 1, . . . , N.

(1.6)

6 � 1 Examples of random measures and flows in the description of dynamical systems Let μ be a probability measure on X, i. e., μ is a set of the nonnegative numbers μ1 , . . . , μN such that N

∑ μk = 1.

k=1

Define the random map fμ on X by the probabilities N

N

i=1

k=1

P{fμ (1) = j1 , . . . , fμ (N) = jN } = pj1 ...jN = ∏( ∑ piji (k)μk ).

(1.7)

It can be easily seen that N

∑ pj1 ...jN = 1.

j1 ...jN

So, (1.7) defines the distribution of the certain random map on X. This map can be roughly described as follows. It moves all mass placed in the certain point i according to stochastic matrix, which depends on the whole mass distribution on the space, and independently from the mass placed in other points. Our aim is to define the measurevalued process {μn ; n ≥ 0} on the space X in such a way that μn+1 will be obtained from μn as an image under the map fμn . To do this, let us consider the space XX of all maps from X to X and the space M of all probability measures on X. The last space can be treated as a simplex in ℝN and is equipped with Euclid distance. Build the stochastic kernel in the product M × XX by the following way: K((μ, g), Δ) = P{(μ ∘ fμ−1 , fμ (g)) ∈ Δ}, where fμ is the random map corresponding to μ in the previous sense and Δ is a Borel subset of M × XX . It can be easily checked that K is the stochastic kernel. Exercise 1.1.5. Prove this. Consider the Markov chain {(μn , gn ); n ≥ 0} in M × XX with transition probability K and initial state (μ0 , e), where e is identity map on X. Let us prove that for every n ≥ 1 with probability one, μn = μ0 ∘ gn−1 .

(1.8)

Use the induction. For n = 1, (1.8) results from the definition of K. Suppose that (1.8) holds for the certain n. It follows from the Markov property that for every Borel Δ ∈ M × XX , P{(μn+1 , gn+1 ) ∈ Δ} = EK((μn , gn ), Δ). So,

1.1 Examples of stochastic flows with interaction

� 7

N

󵄨 −1 󵄨󵄨 E ∑ 󵄨󵄨󵄨μkn+1 − (μ0 ∘ gn+1 )󵄨󵄨 k=1

=E

N

∫ M×XX

k󵄨 󵄨 E ∑ 󵄨󵄨󵄨νk − (μ0 ∘ h−1 ) 󵄨󵄨󵄨K((μn , gn ), dν, dh) k=1

N

k −1 k 󵄨 󵄨 = E Ẽ ∑ 󵄨󵄨󵄨(μn ∘ fμ−1 ) − (μ0 ∘ (fμn (gn )) ) 󵄨󵄨󵄨 n k=1 N

−1 k −1 k 󵄨 󵄨 = E Ẽ ∑ 󵄨󵄨󵄨(μ0 ∘ (fμn (gn )) ) − (μ0 ∘ (fμn (gn )) ) 󵄨󵄨󵄨 = 0. k=1

Here, fμn is the random map, which is built on another probability space under fixed ω from the initial probability space and Ẽ is related to the expectation with respect to fμn . Consequently, (1.8) holds for all n. Note that the motion of the single particle under the stochastic flows {gn } depends on every step from all mass distribution μn . As a numerical example, let us consider the space X = {1, 2} and stochastic matrix P(k), k = 1, 2 of the form 1 0

P(1) = (

0 ), 1

0 P(2) = ( 1

1 ). 0

Let the initial measure μ0 be equal to ( 31 ; 32 ). Then it is obvious that μ0 can be transformed by our stochastic flow only into one of the measures ( 31 ; 32 ), ( 32 ; 31 ), (1; 0), (0; 1). Denote these measures by ν1 = ( 31 ; 32 ), ν2 = ( 32 ; 31 ), ν3 = (1; 0), ν4 = (0; 1). Exercise 1.1.6. Prove that now {μn } is a homogeneous Markov chain with transition matrix: ν1 ν2 ν3 ν4

ν1

ν2

ν3

ν4

0 0

0 0

1 1

0 0

1 9 1 9

4 9 4 9

2 9 2 9

2 9 2 9

Exercise 1.1.7. Suppose that for stochastic matrices P(k), k = 1, . . . , N the following condition holds: min pij (k) > 0. i,j,k

Prove that for every initial measure μ0 the sequence of the random measures {μn } has the property: P{ω : ∃n = n(ω) ≥ 0 : ∀m ≥ n ∃im = im (ω) : μm = δim } = 1. This means that our stochastic flow gathers all the mass in the single particle, which moves randomly in the space.

8 � 1 Examples of random measures and flows in the description of dynamical systems

1.2 Random measures and maps This part of the chapter is devoted to the investigation of the spaces of probability measures on the complete separable metric space and the random elements in such spaces. This will serve as the basis for us in the next chapters when the general construction related to Example 1.1.4 will be considered. Denote the space of all probability measures on the Borel σ-field ℬ(X) in the complete separable metric space (X, ρ) by M. For two measures μ, ν ∈ M, define the set C(μ, ν) as the set of all probability measures on the Borel σ-field in X2 , which have μ and ν as its marginal projections. Definition 1.2.1. The Wasserstein distance of order zero [36] between μ and ν is the value: γ0 (μ, ν) =

inf ∬

ϰ∈C(μ,ν)

X2

ρ(u, v) ϰ(du, dv). 1 + ρ(u, v)

Exercise 1.2.1. Prove that γ0 is the distance and (M, γ0 ) is a complete separable metric space. Exercise 1.2.2. Prove that convergence in γ0 is equivalent to the weak convergence. Now let us define the distance similar to γ0 for measures, which have the moments. Denote for n ≥ 1, Mn = {μ ∈ M : ∀u ∈ X : ∫ ρ(u, v)n μ(dv) < +∞}. X

If X is a linear normed space, then Mn is a set of all measures, which have the finite absolute moment of order n. Definition 1.2.2. The Wasserstein distance of order n between μ, ν ∈ Mn is the value: n

1 n

γ0 (μ, ν) = ( inf ∬ ρ(u, v) ϰ(du, dv)) . ϰ∈C(μ,ν)

(1.9)

X2

Exercise 1.2.3. Prove that the integral on the right side of (1.9) is always finite. Exercise 1.2.4. Prove that (Mn , γn ) is a complete separable metric space. The next lemma gives the conditions of compactness in the space (Mn , γn ). Lemma 1.2.1. Let n ≥ 1. The closed set F in Mn is compact if and only if for every increasing sequence of compacts {Km ; m ≥ 1} in X such that

1.2 Random measures and maps

� 9



⋃ Km = X

m=1

the following relation holds: lim sup ∫ ρ(Km , u)n μ(du) = 0.

m→∞ μ∈F

(1.10)

X

Remark. An existence of only one sequence {Km ; m ≥ 1} satisfying the lemma condition is sufficient for F to be compact. Proof. Necessity. Suppose that F is a compact set in Mn . Let ε > 0 be fixed. Consider net for F{μ1 , . . . , μN }. For every i = 1, . . . , N,

1

εn 2

lim ∫ ρ(Km , u)n μi (du) = 0.

m→∞

X

This relation follows from the dominated convergence theorem and the lemma conditions. Take m0 ∈ ℕ such that ∀m ≥ m0 : ∀i = 1, . . . , N :

∫ ρ(Km , u)n μi (du) < X

ε . 2n

Now let μ ∈ F. Then there is i such that 1

εn γn (μ, μi ) < . 2 So, for the certain ϰ ∈ C(μ, μi ), ∬ ρ(u, v)n ϰ(du, dv) < X2

ε . 2n

Hence, for m ≥ m0 , ∫ ρ(Km , u)n μ(du) = ∬ ρ(Km , u)n ϰ(du, dv) X

X2 n

≤ ∬[ρ(Km , v) + ρ(u, v)] ϰ(du, dv) X2

≤ 2n−1 ∬ ρ(Km , v)n ϰ(du, dv) + 2n−1 ∬ ρ(u, v)n ϰ(du, dv) X2

≤ 2n−1 ( The necessity is proved.

ε ε + ) = ε. 2n 2n

X2

10 � 1 Examples of random measures and flows in the description of dynamical systems Sufficiency. Let the conditions of the lemma hold. Fix ε > 0 and build the finite ε-net for F. Choose m0 in a such way that ∀μ ∈ F: n

ε ∫ ρ(Km0 , u)n μ(du) < ( ) . 2

X

The compact Km0 in X has the finite ε2 -net {x1 , . . . , xN }. Take the measurable function f : X → {x1 , . . . , xN } such that ∀u ∈ X: ρ(u, f (u)) = min ρ(u, xi ). i=1,...,N

Exercise 1.2.5. Build the function f with the mentioned property. Put G = {μ ∘ f −1 : μ ∈ F}. G is sure to have a compact closure in Mn . So, there is ε2 -net {μj ; j = 1, . . . , L} for G. Let us check that {μj ; j = 1, . . . , L} is √2ε-net for F. For arbitrary μ ∈ F, consider n

n

γ0 (μ, μ ∘ f −1 ) ≤ ∫ ρ(u, f (u)) μ(du) X

n

1 ≤ ∫[ρ(u, Km0 ) + (ε)] μ(du) 2 X

≤ 2n−1 (∫ ρ(u, Km0 )n μ(du) + X

εn εn )< . n 2 2

Note that μ ∘ f −1 ∈ G. So, there is such i from {1, . . . , L} that n

γ0 (μ ∘ f −1 , μi )
c}

ρ(u, v)n μ(dv) = 0.

(1.11)

1.2 Random measures and maps

� 11

Exercise 1.2.6. Prove this statement. Corollary 1.2.2. Let the space X be unbound, i. e., sup ρ(u, v) = +∞.

u,v∈X

Then every ball in Mn is not a compact set. Proof. Consider the closed ball B(μ, r) in Mn . Take the sequence of the points {xm ; m ≥ 1} in X with the property, ∀u ∈ X : ρ(u, xm ) → +∞,

m → ∞.

Take x0 ∈ X and δ > 0 such that α = μ(B(x0 , δ)) > 0. Define μm = 1X\B(x0 ,δ) μ + αm δx0 + (α − αm )δxm , where 1,

1X\B(x0 ,δ) (u) = {

0,

u ∈ X \ B(x0 , δ), u ∉ X \ B(x0 , δ),

αm is a certain number from (0; α) and δx0 is the probability measure concentrated at the point x0 . Then γn (μ, μm )n ≤ αm ∫ ρ(x0 , v)n μ(du) + (α − αm ) ∫ ρ(x0 , v)n μ(du). B(x0 ,δ)

B(x0 ,δ)

Choosing {αm ; m ≥ 1}, x0 and δ such a way that r δ< , 2

n

r lim (α − αm ) ∫ ρ(xm , v)n μ(du) ≤ ( ) , m→∞ 2 B(x0 ,δ)

and we get ∃_0 : ∀m ≥ m0 : μm ∈ B(μ, r), sup m≥1



ρ(x0 , v)n μm (dv) ≥ lim (α − αm )ρ(xm , x0 ) > 0. m→∞

{ρ(x0 ,v)>c}

These calculations show that (1.11) does not hold. The statement is proved.

12 � 1 Examples of random measures and flows in the description of dynamical systems In the case when the compacts in X have the simple description, the statement of Lemma 1.2.1 can be written in the more acceptable form. As an example, consider the case when X is the separable Hilbert space. Let {ek ; k ≥ 1} be an orthonormal basis in X. Denote the orthogonal projection on the linear span of {e1 , . . . , em } by Qm . Lemma 1.2.2. The closed set F in M2 is compact if and only if the following relations hold: (1) limc→+∞ supF ∫{‖u‖>c} ‖u‖2 μ(du) = 0. (2) limm→∞ supF ∫X ‖u − Qm u‖2 μ(du) = 0.

Proof. Necessity. Let F be a compact set. The first condition holds due to Corollary 1.2.1. Let us check the second one. Fix ε > 0. Due to Lemma 1.2.1, there exists the compact K in X such that ε sup ∫ ρ(u, K)2 μ(du) < . 2 F X

From the characterization of compacts in Hilbert space, it follows that lim sup ‖u − Qm u‖ = 0.

m→∞ K

Consequently, there is such a number m0 , that for all m ≥ m0 , ε sup ‖u − Qm u‖ < √ . 2 K Then for arbitrary measure μ ∈ F and m ≥ m0 , ε ∫ ‖u − Qm u‖2 μ(du) ≤ 2 ∫ ρ(u, K)2 μ(du) + 2 < 2ε. 2

X

X

The necessity is proved. Sufficiency. Let the conditions of the lemma hold. Choose for m ≥ 1, Km = {u : u = Qm u, ‖u‖ ≤ m}. Then for μ ∈ F, ∫ ρ(Km , u)2 μ(du) ≤ ∫ ‖u − Qm u‖2 μ(du) + ∫ ‖u‖2 μ(du). X

X

‖u‖≥m

To complete the proof, it is sufficient to note that ∞

⋃ Km = X.

m=1

The lemma is proved.

1.2 Random measures and maps

� 13

Remark. Note that Lemma 1.2.1 remains to be true in the space M if ρ(u, v)n is replaced by ρ(u, v) . 1 + ρ(u, v) Exercise 1.2.7. Check this statement. In the future, we will mark M0 as M. Definition 1.2.3. The random measure of finite order n (n ≥ 0) is the random element in the space Mn . For further consideration, we are in need of some lemmas about the random measures and random maps. Lemma 1.2.3. Let μ ∈ M, f : X × Ω → X be the measurable function. Then an image μ ∘ f −1 is a random measure. Exercise 1.2.8. Prove Lemma 1.2.3 using the next Lemma 1.2.4. Lemma 1.2.4. The function μ : Ω → M is the random element in Mn if and only if: (1) ∀ω ∈ Ω : μω ∈ Mn . (2) For every Δ ∈ ℬ(X) μ(Δ) is a random variable. Proof. Necessity. If μ is the random element in Mn , then condition (1) holds automatically. To check condition (2), consider the bounded function φ : X → ℝ, which satisfies the Lipshitz condition. Then for arbitrary ν1 , ν2 ∈ Mn , ϰ ∈ C(ν1 , ν2 ), 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨∫ φ(x)ν1 (dx) − ∫ φ(x)ν2 (dx)󵄨󵄨󵄨 ≤ ∬󵄨󵄨󵄨φ(u) − φ(v)󵄨󵄨󵄨ϰ(du, dv). 󵄨󵄨 󵄨󵄨 2 X

X

X

Consequently, 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨∫ φ(x)ν1 (dx) − ∫ φ(x)ν2 (dx)󵄨󵄨󵄨 ≤ L ⋅ γn (ν1 , ν2 ), 󵄨󵄨 󵄨󵄨 X

X

where the constant L depends on the Lipshitz constant of function φ and (in the case when n = 0) sup |φ|. Hence, the map, Mn ∋ ν 󳨃→ ∫ φ(x)ν(dx) ∈ ℝ, X

is continuous. Consequently, the integral ∫ φ(x)μ(du) X

14 � 1 Examples of random measures and flows in the description of dynamical systems is the random variable. Now condition (2) results from the standard arguments. Sufficiency. To prove this part of the lemma, we are in need of the description of the topology in the space Mn . Note that (Mn , γn ) is a metric space. So, if we define another metric λn in Mn with the property γn (μm , μ0 ) → 0,

m→∞

m → ∞,

λ(μm , μ0 ) → 0,

⇐⇒

then the open sets in (Mn , γn ) and in (Mn , λn ) must coincide. Exercise 1.2.9. Prove this statement. Let us consider the sequence of bounded Lipshitz functions {φm ; m ≥ 1} on X such that convergence of integrals from these functions leads to the weak convergence. The existence of such a sequence results from the separability of X. Define the metric λn on Mn in such a manner ∀ν1 , ν2 ∈ Mn : | ∫X φm dν1 − ∫X φm dν2 |

1 m 2 m=1 1 + | ∫X φm dν1 − ∫X φm dν2 | 󵄨󵄨󵄨 + sup󵄨󵄨󵄨 ∫ ρ(u0 , v)n ν1 (dv) − ∫ 󵄨 󵄨 k≥1 ∞

λn (ν1 , ν2 ) := ∑



{ρ(u0 ,v)>k}

{ρ(u0 ,v)>k}

󵄨󵄨 󵄨 ρ(u0 , v)n ν2 (dv)󵄨󵄨󵄨, 󵄨󵄨

where u0 is a certain fixed point in X. Let us verify that the convergence in λn is the same as the convergence in γn . Consider the sequence {μk ; k ≥ 0} in Mn such that γn (μk , μ0 ) → 0,

k → ∞.

Then as in previous considerations, ∀m ≥ 1: 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 󵄨󵄨∫ φm dμk − ∫ φm dμ0 󵄨󵄨󵄨 → 0, 󵄨󵄨 󵄨󵄨 X

k → ∞.

(1.12)

ρ(u0 , v)n μk (dv) = 0.

(1.13)

X

Moreover, it follows from Corollary 1.2.1 that lim sup

i→∞ k≥1

∫ {ρ(u0 ,v)>i}

From (1.12) and (1.13), it is easy to get that λn (μk , μ0 ) → 0,

k → ∞.

Now suppose that the sequence {μk ; k ≥ 1} converges to μ0 in the metric λn . Then {μk ; k ≥ 1} weakly converges to μ0 and (1.13) holds. From the Skorokhod theorem (cf. [55]), it

1.2 Random measures and maps

� 15

follows that there exists the sequence of random elements {ξk ; k ≥ 0} in X defined on the certain probability space such that: (1) For every k ≥ 0, ξk has a distribution μk . (2) {ξk ; k ≥ 1} converges almost surely to ξ0 . Now use (1.13) to see that Eρ(ξk , ξ0 )n → 0,

k → ∞.

(1.14)

For each k ≥ 1, let us build the measure ϰk on X2 as the joint distribution of ξk and ξ0 . Then (1.14) turns into ∬ ρ(u, v)n ϰk (du, dv) → 0,

k → ∞.

X2

So, γn (μk , μ0 ) → 0,

k → ∞.

Now to complete the proof of the lemma let us note that the map μ : Ω → M, which satisfies conditions of the lemma, is measurable with respect to Borel σ-field in (Mn , λn ). The lemma is proved. Lemma 1.2.5. Let μ ∈ M, φ, ψ : X × Ω → X be the measurable maps such that ∀x ∈ X : φ(x) = ψ(x)

a. s.

Then μ ∘ φ−1 = μ ∘ ψ−1

a. s.

Proof. Consider the bounded continuous function f : X → ℝ. Then 󵄨󵄨 󵄨󵄨 󵄨 󵄨 E 󵄨󵄨󵄨∫ f (φ(u))μ(du) − ∫ f (ψ(u))μ(du)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 X

X

󵄨 󵄨 ≤ E ∫󵄨󵄨󵄨f (φ(u)) − f (ψ(u))󵄨󵄨󵄨μ(du) X

󵄨 󵄨 = ∫ E 󵄨󵄨󵄨f (φ(u)) − f (ψ(u))󵄨󵄨󵄨μ(du) = 0. X

Hence, ∫ f (φ(u))μ(du) = ∫ f (ψ(u))μ(du) X

X

a. s.

(1.15)

16 � 1 Examples of random measures and flows in the description of dynamical systems Since (X, ρ) is separable, we can choose the sequence of bounded continuous functions {fm ; m ≥ 1}, which defines the measures on X. So, (1.15) proves the lemma.

1.3 Stochastic kernels on the spaces of measures Here we present the general construction for the stochastic kernel on the space Mn , which is related to the random map on the phase space X. Our purpose is to generalize Example 1.1.4 and to define the stochastic kernel using the finite-dimensional distributions of the correspondent random map. Let {Q(x1 , . . . , xn ); x1 , . . . , xn ∈ X, n ≥ 1} be the family of consistent finite-dimensional distributions on X indexed by the elements of X. Suppose that lim ∬

u→u0

X

ρ(x, y) Q(u, u0 )(dx, dy) = 0. 1 + ρ(x, y)

(1.16)

The family {Q} corresponds to the random process φ in X indexed by the elements of X. Due to (1.16), φ has a measurable modification. So, we can define the random measure μ ∘ φ−1 (Lemma 1.2.3) for arbitrary μ ∈ M. From Lemma 1.2.5, it follows that μ ∘ φ−1 does not depend on the choice of the modification of φ. Hence, we can consider the function: K(μ, Δ) = P{μ ∘ φ−1 ∈ Δ},

μ ∈ M, Δ ∈ ℬ(M).

K is the probability measure, relatively the second argument in a natural way. Let us prove that K is measurable with respect to the first argument under a fixed second one. As we have seen in the proof of Lemma 1.2.4, it is enough to check that for the every bounded Lipshitz function f : X → ℝ the function E ∫ f (φ(u))μ(du),

μ∈M

X

is continuous with respect to μ. Let μn → μ, n → ∞ in M. Then, as it has already been mentioned above, {μn ; n ≥ 1} weakly converges to μ. Note that the function Ef (φ(u)),

u∈X

is now continuous on X due to (1.16). So, ∫ Ef (φ(u))μn (du) → ∫ Ef (φ(u))μ(du), X

Hence, K is the stochastic kernel on M.

X

n → ∞.

1.3 Stochastic kernels on the spaces of measures � 17

Exercise 1.3.1. Let X = [0; +∞). As the random function, φ considers the Poisson process. Find all such probability measures μ on X for which μ ∘ φ−1 is the random element in Mn (Hint: note that there exists the limit lim

t→+∞

φ(t) t

almost surely.) In this construction, the family {Q} can be considered as the probability law, which describes the motion of the finite systems of the particles in X. In other words, Q(x1 , . . . , xn )(Γ) is the probability that the particles placed in x1 , . . . , xn jump into the set Γ ⊂ Xn in one step. Note that not every stochastic kernel on X can be obtained in the way described above. Example 1.3.1. Let X = {1, 2}. Now the space Mn for all n is the space of pairs (α; 1 − α), α ∈ [0; 1]. Exercise 1.3.2. Prove that 1 γ0 ((α1 , 1 − α1 ), (α2 , 1 − α2 )) = |α1 − α2 |, 2 1

γn ((α1 , 1 − α1 ), (α2 , 1 − α2 )) = |α1 − α2 | n ,

n ≥ 1.

Consider on M the stochastic kernel of the following type: K(μ, ⋅) = δ( 1 ; 1 ) , 2 2

i. e., the probability measure K(μ, ⋅) is concentrated on the measure ( 21 ; 21 ) for every μ ∈ M. Such a kernel cannot be obtained as a result of our previous construction. The measure μ = (1; 0) under the action of arbitrary map on X can be transformed only into (0; 1) or into itself. Exercise 1.3.3. Describe all stochastic kernels on M, which corresponds to the random maps in the case X = {1, 2}. In the above considered construction, the probabilities describing the motion of the finite systems of the particles do not depend on the measure, which is carried by the random map. Now suppose that such a dependence takes place. In other words, let us consider the family of consistent finite-dimensional distributions on X {Q(μ, x1 , . . . , xn ); x1 , . . . , xn ∈ X, n ≥ 1, μ ∈ M}, which contains the measure μ ∈ M as an additional parameter. If this family under fixed μ satisfies condition (1.16), then we can define as above the random map fμ and the probability measure K(μ, ⋅) on M. The question is when K will be measurable relatively to μ. The next lemma presents the sufficient conditions for this. Before we formulate the next lemma, some notation are to be made.

18 � 1 Examples of random measures and flows in the description of dynamical systems Denote for every n ≥ 1 by Xn the Deskart product of X by itself n times. Consider in X the metric n

ρn ((x1 , . . . , xn ); (y1 , . . . , yn )) = max ρ(xi , yi ). i=1,...,n

̃n , γ̃n ). On the space (Xn , ρn ), build the analog of the space M. Denote this space by (M The following statement holds. Lemma 1.3.1. Let the family {Q(μ, x1 , . . . , xn ); x1 , . . . , xn ∈ X, n ≥ 1, μ ∈ M} satisfy the conditions: (1) Under fixed, μ ∈ M{Q} is the consistent family of the finite-dimensional distributions. (2) ∀μ0 ∈ M, x0 ∈ X: lim

sup

sup ∬

ε→0 γ(μ,μ ) 0 such that ϰN ∈ C(Q(μ0 , x1 , . . . , xN ), Q(μ, x1 , . . . , xN )), ∬ X2N

ρN ((u1 , . . . , uN ), (v1 , . . . , vN )) ϰ (du1 , . . . , duN , dv1 , . . . , dvN ) ≤ Cγ(μ0 , μ) + ε. 1 + ρN ((u1 , . . . , uN ), (v1 , . . . , vN )) N

Now for each y1 , . . . , yl ∈ X define the conditional measures {q(μ0 , u1 , . . . , uN )} on Xl corresponding to Q(μ0 , x1 , . . . , xN , y1 , . . . , yl ) and {q(μ, u1 , . . . , uN )} corresponding to Q(μ, x1 , . . . , xN , y1 , . . . , yl ). That is, for every Δ ∈ ℬ(XN+l ), Q(μ0 , x1 , . . . , xN , y1 , . . . , yl )(Δ) = ∫ q(μ0 , u1 , . . . , uN )({(v1 , . . . , vl ) : (u1 , . . . , uN ), (v1 , . . . , vl ) ∈ Δ}) XN

× Q(μ0 , x1 , . . . , xN )(du1 , . . . , duN ) and similarly for Q(μ, x1 , . . . , xN , y1 , . . . , yl ). Now define the family of consistent finitedimensional distributions on X2 in such a manner ∀q(μ0 , u1 , . . . , uN ), . . . , zm ∈ X : ∀Δ ∈ ℬ(X2N+l+m ), R(x1 , . . . , xN , y1 , . . . , yl , z1 , . . . , zm )(Δ) = ∬ q(μ0 , u1 , . . . , uN ) ⊗ q(μ, v1 , . . . , vN )({(s1 , . . . , sl , t1 , . . . , tm ) : X2N

(u1 , . . . , uN , s1 , . . . , sl , v1 , . . . , vN , t1 , . . . , tm ) ∈ Δ})ϰ(du1 , . . . , duN , dv1 , . . . , dvN ). Exercise 1.3.6. Prove that {R} is the family of consistent finite-dimensional distributions. Using the family {R}, it is possible to find the probability space on which the pair fμ0 and fμ is built. Now 󵄨󵄨 −1 −1 󵄨 󵄨󵄨E0 φ(μN ∘ fμ0 ) − Eφ(μ ∘ fμ0 )󵄨󵄨󵄨 󵄨 󵄨 = 󵄨󵄨󵄨E ′ φ(μN ∘ fμ−1 ) − E ′ φ(μ ∘ fμ−1 )󵄨󵄨󵄨 0 ≤ LE ′ γ(μN ∘ fμ−1 ⋅ μN ∘ fμ−1 ) 0 ≤L

1 N ′ ρ(fμ0 (xk ), fμ (xk )) ∑E N k=1 1 + ρ(fμ0 (xk ), fμ (xk ))

≤ LE ′ max

k=1,...,N

ρ(fμ0 (xk ), fμ (xk ))

1 + ρ(fμ0 (xk ), fμ (xk ))

1.3 Stochastic kernels on the spaces of measures � 21

=L∬ X2N

ρN ((u1 , . . . , uN ), (v1 , . . . , vN )) ϰ (du1 , . . . , duN , dv1 , . . . , dvN ) 1 + ρN ((u1 , . . . , uN ), (v1 , . . . , vN )) N

≤ L(Cγ(μ0 , μ) + ε). Here, E ′ denotes the mathematical expectation related to the new probability space. Taking into account the fact that ε is arbitrary and passing to the limit under N → ∞, we get 󵄨󵄨 −1 −1 󵄨 󵄨󵄨E0 φ(μ ∘ fμ0 ) − Eφ(μ ∘ fμ )󵄨󵄨󵄨 ≤ LCγ(μ0 , μ). So, the second summand also is continuous with respect to the weak convergence. The lemma is proved. Consider examples of the stochastic kernels on M, which are built with the help of this lemma. Example 1.3.2. Let X = {1, . . . , N}. Consider the compound Poisson process on X with the certain matrix Π of jumps and intensity of jumps equal to 1. Let P(t) be the transition matrix of our process at the time t, i. e., ∞

P(t) = ∑ e−t k=0

tk k Π . k!

Now define the family {Q} by N

Q(μ, 1, . . . , N)(i1 , . . . , iN ) = ∏ P(μk )(k, ik ). k=1

(1.18)

Here, Q(μ, 1, . . . , N)(i1 , . . . , iN ) is the notation for the probability that the particles placed in the points 1, . . . , N jump into the points i1 , . . . , iN if the mass distribution of all particles is μ = (μ1 , . . . , μN ). Also, P(t)(k, j) is the element of the matrix P(t) indexed by k and j. It is easy to check it with μ fixed, N

∑ Q(μ, 1, . . . , N)(i1 , . . . , iN ) = 1.

i1 ,...,iN

So, condition (1) of Lemma 1.3.1 is fulfilled. It is not necessary to check condition (2) because X is the finite space. To verify condition (3), it is sufficient to note now that for each (k, j)P(⋅)(k, j) is a bounded Lipshitz function on [0; +∞). So, by Lemma 1.3.1 the family {Q} defines the stochastic kernel K on M. The Markov chain on M corresponding to K can be described in nonrigorous way. In every step, all particles are placed in the certain point of the space jump simultaneously according to the law {P(t)} for the time equal to their joint mass and independently from the particles placed in other points.

22 � 1 Examples of random measures and flows in the description of dynamical systems Note that Example 1.1.4 can be described in the terms of Lemma 1.3.1. Other examples will be considered in the future.

1.4 Random measures on the space of trajectories In Section 1.1, it was mentioned that the measure-valued process, constructed as a description of the mass-distribution in the single moments of the time, does not contain all of the information about the dynamics of the mass on the space. Due to this circumstance, the random measures on the space of trajectories of initial stochastic flow arise. So, in this section, we will study the random measures on the spaces C([0; 1], X), XN , C(ℝ, X) and so on. First of all, consider the adapted random measures on the space C([0; 1], X). Define the distance in this space by the formula ̃ (f , g) = max ρ(f (t), g(t)). ∀f , g ∈ X : ρ [0;1]

̃ the space of all probability measures on C([0; 1], X), which is equipped Denote by M by Wasserstein distance γ̃. The certain filtration (ℱt ; t ∈ [0; 1]) on the initial probability space is supposed to be given. ̃ Definition 1.4.1. The random measure μ on C([0; 1], X) (i. e., the random element in M) is adapted to the filtration (ℱt ; t ∈ [0; 1]), if for all 0 ≤ s1 ≤ ⋅ ⋅ ⋅ ≤ sn ≤ 1, n ≥ 1 and for all Δ ∈ ℬ(Xn ) the random variable ∫

1Δ (u(s1 ), . . . , u(sn ))μ(du)

C([0;1],X)

is measurable with respect to ℱsn . Consider the examples of the adapted random measures. Example 1.4.1. Let the continuous random processes {ξi , i = 1, . . . , N} in 𝕏 be adapted to the flow {ℱt }. Then the random measure μ=

1 N ∑δ N k=1 εk

is adapted. Now the integral from Definition 1.4.1 looks like ∫ C([0;1],X)

1Δ (u(s1 ), . . . , u(sn ))μ(du) =

1 N ∑ 1 (ξ (s ), . . . , ξk (sn )). N k=1 Δ k 1

The obtained value is measurable with respect to ℱsn . The next example is related to Example 1.1.2.

1.4 Random measures on the space of trajectories

� 23

Example 1.4.2. Let X = ℝd . Consider the stochastic differential equation: dx(t) = a(x(t))dt + b(x(t))dw(t), where a, b and w are the same as in Example 1.1.2. Denote by {φ0,t } the stochastic flow, which corresponds to this equation. Let {ℱt ; t ∈ [0; 1]} be the flow of σ-fields generated by the Wiener process. Consider the probability measure ν on ℝd and define the random measure μ on C([0; 1], ℝd ) via its finite-dimensional distributions: ∫

1Δ (u(s1 ), . . . , u(sn ))μ(du) = ∫ 1Δ (φ0,s1 (x), . . . , φ0,sn (x))ν(dx).

C([0;1],ℝd )

ℝd

The measure μ is an image of the measure ν under the random map ℝd ∋ x 󳨃→ φ0,⋅ (x) ∈ C([0; 1], ℝd ). Note that μ is an adapted random measure. Example 1.4.3. Random diffusion measure. Let {ℱt } be a certain fixed filtration on the initial probability space. Let (Ω′ , ℱ ′ , P′ ) be another probability space. Consider the ℝd valued Wiener process {w′ (t); t ∈ [0; ]1}, which is defined on the space (Ω′ , ℱ ′ , P′ ). Suppose that the functions a : Ω × [0; 1] × ℝd → ℝd×d and b satisfy the following conditions: (1) Under fixed ω, a and b satisfy all conditions from Example 1.1.2. (2) For every t ∈ [0; 1], the restrictions of a and b on the interval [0; t] are ℱt × ℬ([0; t]) × ℬ(ℝd ) measurable. Under the fixed ω ∈ Ω Cauchy problem for the stochastic differential equation, dx(t) = a(ω, t, x(t))dt + b(ω, t, x(t))dw′ (t),

{

x(0) = 0

has the unique strong solution on the probability space (Ω′ , ℱ ′ , P′ ). Denote the distribution of this solution in the space C([0; 1], ℝd ) by μ. Note that this distribution depends on ω ∈ Ω. Exercise 1.4.1. Prove that μ is the random measure adapted to the filtration {ℱt }. (Hint: use the Euler approximation to the solution of SDE.) In addition to the adapted random measures, we will consider the stationary random measures. Let us use the space C(ℝ; X) with the distance of the uniform convergence on the compact sets: ∀f , g ∈ C(ℝ; X): ∞

̃ (f , g) = ∑ max ρ k=1

[−k;k]

ρ(f (t), g(t)) 1 ⋅ . 1 + ρ(f (t), g(t)) 2k

The following definition will be useful in the future.

24 � 1 Examples of random measures and flows in the description of dynamical systems Definition 1.4.2. Two random measures μ and ν on the complete separable metric space (𝒴 , λ) are called equidistributed if the random vectors (μ(Δ1 ), . . . , μ(Δn )) and (ν(Δ1 ), . . . , ν(Δn )) have the same distributions for arbitrary set Δ1 , . . . , Δn of disjoint Borel subsets of 𝒴 . Exercise 1.4.2. Prove that if μ and ν have the same distribution, then for arbitrary Borel subsets Δ1 , . . . , Δn the random vectors (μ(Δ1 ), . . . , μ(Δn )) and (ν(Δ1 ), . . . , ν(Δn )) have the same distribution. Exercise 1.4.3. Prove that the random measures μ and ν have the same distributions if and only if for every bounded measurable function φ on 𝒴 the integrals ∫ φdμ,

∫ φdν

𝒴

𝒴

have the same distributions. Definition 1.4.3. The random measure μ on C(ℝ, X) is called stationary if for arbitrary s > 0 the shift of μ on s is equidistributed with μ. In this definition, we use the notion of the shift relative to the time parameter, which can be defined on C(ℝ, X) in a natural and obvious way. Exercise 1.4.4. Formulate the definition of the stationary random measure on the spaces C([0; +∞), X), Xℕ , Xℤ . Similar to the notion of adapted random measure, the simplest examples of the stationary random measures are the discrete random measures concentrated on the stationary random processes. More complicated examples will be considered later. The aim of this section is to study the structure of the adapted or stationary random measures. Let us begin with the adapted measures. It is useful to start with an abstract situation. Let {ℱt ; t ∈ [0; 1]} be the certain filtration on the initial probability space, H be a separable Hilbert space, {Gt ; t ∈ [0; 1]} be the family of the spectral projectors related to the self-adjoint operator A in H with the simple spectrum [5]. Definition 1.4.4. The random element η in H is called adapted ({Gt }-adapted) if Gt η is the random element measurable with respect to ℱt for every t ∈ [0; 1]. The following lemma describes the structure of the adapted random elements in the terms of spectral integrals. Lemma 1.4.1. Let η be an adapted random element in H such that E‖η‖2 < +∞. Then there are both h ∈ H and the measurable function f : Ω × [0; 1] → ℝ such that: (1) The restriction of f on the interval [0; t] is ℱt ⊗ ℬ([0; t]) measurable for every t ∈ [0; 1].

1.4 Random measures on the space of trajectories

� 25

1

(2) E ∫0 f 2 (t)d(Gt h, h) < +∞. 1

(3) η = ∫0 f (t)dGt h.

Proof. Since A has the simple spectrum, then there is such h ∈ H that for every ω ∈ Ω the measure with the differential d(G⋅ η(ω), η(ω)) is absolutely continuous relatively to measure with the differential d(G⋅ h, h). Denote by ̃f the Radon–Nycodim derivative: ̃f (ω, t) = d(Gt η(ω), η(ω)) . dG⋅ h, h Using the differentiation with respect to net [82], it is easy to check that there exists such a version f of f̃, which satisfies condition (1) of the lemma. The following relations hold: 1

η(ω) = ∫ f (ω, t)dGt h, 0

1

󵄩󵄩 󵄩2 2 󵄩󵄩η(ω)󵄩󵄩󵄩 = ∫ f (ω, t)d(Gt h, h), 0

1

󵄩 󵄩2 E 󵄩󵄩󵄩η(ω)󵄩󵄩󵄩 = E ∫ f 2 (ω, t)d(Gt h, h) < +∞. 0

So, conditions (2) and (3) of the lemma are also performed. The lemma is proved. Exercise 1.4.5. Describe the adapted random elements in the finite-dimensional H. We will use the notion of the adapted random element in the case when H is the space of square-integrable functions on the certain space with the probability measure. Let (𝒴 , A) be some space with the fixed σ-field of its subsets. Suppose that ν is the probability measure on A and put H = L2 (𝒴 , A, ν). Consider the flow of σ-fields {At ; t ∈ [0; 1]} on 𝒴 such that: (1) A0 = {0, 𝒴 }. (2) {At } is right continuous. (3) A1 = A. For every t ∈ [0; 1], define the projector Gt in H as an operator of conditional mathematical expectation with respect to 𝒜t . Define also G0− as the projector on the zero subspace. Let us check that {Gt } is right continuous in the sense of the strong convergence. Let f ∈ H, t0 ∈ [0; 1). Consider an arbitrary sequence tn ↘ t0 , n → ∞. The sequence fn = Gtn f ,

n≥1

26 � 1 Examples of random measures and flows in the description of dynamical systems is the backward {Atn } martingale and At0 = ⋂ Atn . n≥1

Since f is square-integrable, then [73] fn tends to f0 = Gt0 f in the square mean under n → ∞. This proves the right continuity of {Gt }. In the case when 𝒴 = C([0; 1], X) with the Borel σ-field A, one can choose the filtration {At ; t ∈ [0; 1]} in the canonical way. Namely, define for each t ∈ [0; 1], At = σ(⟨⋅⟩s ; s ≤ t), where ∀u ∈ C([0; 1], X): ⟨u⟩s = u(s). Note that the flow {At } is not right continuous in general. Exercise 1.4.6. Prove this statement. (Hint: In the usual space, C([0; 1]) Consider the set of the functions 1 1 1 Γ = {u ∈ C([0; 1]) : ∃δ ∈ (0; ] : ∀s ∈ [ ; + δ] u(s) ≥ 0}. 2 2 2 Show that Γ ∈ At , t >

1 2

and Γ ≠ A 1 .) 2

In spite of the result of this exercise for the certain probability measures, the flow {At } can be right continuous in the sense that for every t ∈ [0; 1), At = ⋂ As , s>t

where the equality is understood up to the sets of measure zero. For example, this is the case of Wiener measure on C([0; 1]) [73]. For such probability measures, the corresponding family of conditional expectations {Gt } is right continuous. Let us consider such probability measure ν on C([0; 1], X), that the corresponding flow {Gt } is right continuous. Lemma 1.4.2. Let μ be the random probability measure on A such that ∀ω ∈ Ω : μω ≪ ν, and the Radon–Nycodim density

1.4 Random measures on the space of trajectories � 27

f (ω, u) =

dμω (u) dν

is square-integrable with respect to ν. Then the measure μ is {ℱt }-adapted if and only if there exists the random element η in L2 (C([0; 1], X), ν) with the properties: (1) η(ω, u) = f (ω, u) a. s. with respect to P × ν. (2) η is an adapted random element relative to the family {Gt } and filtration {ℱt }. Proof. Necessity. Let μ be an adapted random measure. Then analogously to the previous lemma, one can find such a version of the density ̃f that for every t ∈ [0; 1] the projection of ̃f on At is ℱt × At measurable. Hence, ̃f is an adapted random element. Sufficiency. From the lemma conditions, it follows that for every t ∈ [0; 1] the restriction of μ on At is ℱt -measurable. The lemma is proved. Now we can present the theorem, which describes the structure of the adapted random measures in the certain cases. Suppose that the nonrandom probability measure μ on C([0; 1], X) satisfies the same condition with respect to the flow {At } as in the previous lemma. Theorem 1.4.1. Suppose that the adapted random measure μ on C([0; 1], X) satisfies the conditions: (1) ∀ω ∈ Ω : μω ≪ ν. (2) E ∫C([0;1],X) ( dμ )2 dν < +∞. dν Then there exists the sequence {gn ; n ≥ 1} of {At }-martingales on C([0; 1], X) and the sequence {fn ; n ≥ 1} of the measurable functions from Ω × C([0; 1], X) × [0; 1] → ℝ such that: (1) ∀n ≥ 1: 1

E ∫ fn (t)2 d[gn ]t < +∞, 0

(2) for all n ≥ 1 and t ∈ [0; 1], the restriction of fn on [0; t] is measurable with respect to ℱt × At × ℬ([0; t]), 1 (3) dμ = ∑∞ n=1 ∫0 fn (t)dgn (t), dν where the series converge in the square mean. Remark. [gn ] is the quadratic variation of the martingale gn . Proof. Consider the Hilbert space L2 (C([0; 1], X), ν). Due to Lemma 1.4.2, the density dμ/dν is the adapted random element in this space with respect to the flow {Gt } and the filtration {ℱt }. From the general spectral theory [5], it follows that for the family {Gt } there exists the sequence (possibly finite) {hn , n ≥ 1} of elements of L2 (C([0; 1], X), ν) such that

28 � 1 Examples of random measures and flows in the description of dynamical systems L2 (C([0; 1], X), ν) = ⊕Hn , where Hn is the closure of the linear span of the set {Gt hn , t ∈ [0; 1]}. It is easy to check that the projections of dμ/dν on Hn are the adapted random elements. Exercise 1.4.7. Verify this statement. For each n ≥ 1, {Gt } has the simple spectrum on Hn . Now the statement of the theorem follows from Lemma 1.4.1. The theorem is proved. In certain cases the structure of {At }-martingales on the space C([0; 1], X) with the measure ν is known. Example 1.4.4. Adapted random measures absolutely continuous with respect to Wiener measure. Let X = ℝ and ν be the Wiener measure on the space C([0; 1]). The flow {At } is known to be continuous on (C([0; 1]), ν) [53]. Also, every square-integrable {At }-martingale has the representation: t

m(t) = m(0) + ∫ φ(s, u)du(s),

u ∈ C([0; 1]).

0

Here, the integral in the right part is the Itô stochastic integral with respect to the coordinate process u, which is the Wiener process under the measure ν. The function φ satisfies the conditions: (1) For every t ∈ [0; 1], the restriction of φ on [0; t] is measurable with respect to At × ℬ([0; t]). 1 (2) ∫C([0;1]) ∫0 φ(t, u)2 dtν(du) < +∞. In order to understand whether the flow {Gt } has the simple spectrum in the space L2 (C([0; 1]), ν) or not, the following lemma will be useful. We will formulate this lemma in the more probabilistic notation. Let {w(t); t ∈ [0; 1]} be the one-dimensional Wiener process on the probability space (Ω, ℱ , P) and the flow {ℱt } is defined as ℱt = σ(w(s); s ≤ t),

t ∈ [0; 1].

Suppose also that ℱ = ℱ1 . Lemma 1.4.3. For the arbitrary random variable ξ ∈ L2 (Ω, ℱ , P), there exists the random variable α ∈ L2 (Ω, ℱ , P) such that: (1) Eα = 0. (2) ∀t ∈ [0; 1] : EαE(ξ/ℱt ) = 0. Proof. As it was mentioned above, the square-integrable martingale {E(ξ/ℱt ), t ∈ [0; 1]} admits the representation

1.4 Random measures on the space of trajectories

� 29

1

{E(ξ/ℱt )} = Eξ + ∫ φ(s)dw(s), 0

with the square-integrable and adapted function φ. Similarly, the random variable α has the so-called Clark representation [53], 1

α = Eα + ∫ ψ(s)dw(s). 0

Put Eα = 0. Then for the proof of the lemma, it is enough to find such adapted squareintegrable function ψ that ∀t ∈ [0; 1] : Eφ(t)ψ(t) = 0. Let us use the Itô–Wiener expansion of the function φ [53]: ∞

φ(t) = ∑ Ik (ak (t)). k=0

Here, ak (t) for every k ≥ 1 is the nonrandom symmetric function from L2 ([0; t]k ), Ik is k-tuple Wiener integral, a0 (t) = Eφ(t). The function ψ, which we are looking for, also has the same representation with the unknown kernels {bk }. Now ∞

t

k=0

0

Eφ(t)ψ(t) = ∑ ∫ . k. . ∫ ak (t)(τ1 , s, τk )bk (t)(τ1 , s, τk )dτ1 , s, dτk . Define the function rk : [0; 1] → [0; 1] for the every fixed k ≥ 1 in the following way: t

rk (t) = inf{r : ∫ . k. . ∫ ak (t)(τ1 , s, τk )2 dτ1 , s, dτk = 0

t

1 k ∫ . . . ∫ ak (t)(τ1 , s, τk )2 dτ1 , s, dτk }. 2 0

Note that rk is the measurable function. Now set b0 = 0, bk (t)(τ1 , . . . , τk ) = sign( max τi − rk (t))ak (t)(τ1 , . . . , τk ), i=1,...,k

k ≥ 1.

Then the random function Γ with the kernels {bk ; k ≥ 0} is exactly what we are looking for and the random variable, 1

α = ∫ ψ(t)dw(t), 0

30 � 1 Examples of random measures and flows in the description of dynamical systems satisfies the condition of the lemma. But it can happen that α = 0. So, we have to separately consider the case when all ak ≡ 0 for k ≥ 1, i. e., when ξ is jointly Gaussian with the process w. This simple situation is left to the reader as an exercise. The lemma is proved. The previous lemma shows that the family {Gt } has not a simple spectrum. But the representation of {At }-martingales and Theorem 1.4.1 gives us the density structure for the density of the adapted random measure μ with respect to the Wiener measure ν, 1

dμ (u) = 1 + ∫ ρ(ω, u, s)du(s). dν 0

Remark. All previous considerations and conclusions remain to be true not only for the adapted random measure itself but for its absolutely continuous part with respect to ν as well. Exercise 1.4.8. Prove that the absolutely continuous part of the adapted measure relative to the fixed nonrandom measure is again the adapted measure (in general nonprobability). (Hint: Use the differentiation of the measures relative to net [82].) Now let us consider the stationary random measures on the space C(ℝ, X). Let μ be the such stationary random measure. Exercise 1.4.9. Prove that Eμ is the probability measure, which is invariant with respect to the time shifts on C(ℝ, X). It appears that the stationary random measures can be obtained with the help of the measure-preserving transformations like the deterministic case. Consider the product Ω × C(ℝ, X) with the correspondent product of σ-fields. Suppose that the measure P̃ on Ω × C(ℝ, X) is such that its projection on Ω coincide with P. Let also {Tt ; t ∈ ℝ} be the family of transformations on Ω × C(ℝ, X) with the properties: (1) Tt preserves the measure P̃ for every t ∈ ℝ. (2) Tt (ω, u) = (gt (ω), ut ), where gt is the certain transformation on Ω and ut is the shift of the function u on the time t. Consider the family of conditional distributions ̃ which is correspondent to the partition of Ω × C(ℝ, X) {μω , ω ∈ Ω} of the measure P, on the sets ω × C(ℝ, X), ω ∈ Ω. Such a family has the measurable version if we suppose that ℱ is countably generated. The family {μω , ω |∈ Ω} can be considered as the random measure on C(ℝ, X). Let us prove that μ is the stationary random measure. To check this, consider for t ∈ ℝ the measure μ and shifted measure μt . It follows from the conditional measures definition that for each measurable Γ in the product Ω × C(ℝ, X), ̃ P(Γ) = ∫ μω (Γω )P(dω), Ω

where

1.4 Random measures on the space of trajectories

� 31

Γω = {u : (ω, u) ∈ Γ}. Then for every t ∈ ℝ and for the set, Γt = {(ω, ut ) : (ω, u) ∈ Γ}, the following equality holds: ̃ t ) = ∫ μ−1 (Γω )P(dω). P(Γ ω Ω

From another side, ̃ t ) = P(T ̃ −t (Γt )) = P({(g ̃ P(Γ −t (ω), u) : (ω, u) ∈ Γ}) ′ ′ ̃ = P({(ω , u) : (gt (ω ), u) ∈ Γ}) = ∫ μω′ ({u : (gt (ω′ ), u) ∈ Γ})P(dω′ ) Ω

= ∫ μg−t(ω) ({u : (ω, u) ∈ Γ})Pg−1t (dω) Ω

= ∫ μg−t(ω) (Γω )P(dω). Ω

Consequently, for arbitrary Γ, ∫ μ−t ω (Γω )P(dω) = ∫ μg−t(ω) (Γω )P(dω).

Ω

Ω

Hence, μ−t ω = μg−t(ω)

a. s.

Now the distribution of μ−t and μ coincide due to the fact that {gt } preserve the measure P. Before considering the concrete examples of the described construction of the stationary measures, note that this construction is in general in the sense that every stationary measure can be obtained in this way. Lemma 1.4.4. Let μ be the stationary random measures on C(ℝX). Then there exist the new probability space (Ω′ , ℱ ′ , P′ ), the family of transformations {Tt ; t ∈ ℝ} on Ω′ ×C(ℝ, X) and the probability measure P̃ ′ such that the random measure μ′ , which is built in the previous way and has the same distribution as μ. Proof. The proof of the lemma follows the idea of the corresponding statement for the ̃ fM of all probability measures on ordinary stationary process. Consider the space m C(ℝ, X). The random measures {μt ; t ∈ ℝ} form a stationary process in this space.

32 � 1 Examples of random measures and flows in the description of dynamical systems Exercise 1.4.10. Prove the stationarity of {μt ; t ∈ ℝ}. (Hint: Use the fact that the time ̃ which preserve the distribution of μ.) shifts from the group of transformations on M, ̃ℝ , σ-field ℱ ′ , which is generated by the cylindrical sets in a usual way. Put Ω′ = M Define the probability measure P′ on Ω′ as the distribution of the process {μt ; t ∈ ℝ}. Define for t ∈ ℝ the map Tt on the product Ω′ × C(ℝ, X) in the following way: t

Tt (ω′ , u) = (ω′ , ut ), t

where ω′ is the shift of the measure and ut is the shift of the function on the time t. Also, define the measure P̃ by the formula ̃ P(Γ) = ∫ ω′ (Γω′ )P(dω′ ). Ω′

Let us prove that the map Tt preserves the measure P̃ for the every t ∈ ℝ. Indeed ′ ̃ P̃ ∘ Tt−1 (Γ) = P({(ω , u) : Tt (ω′ , u) ∈ Γ}) t ′ ̃ = P({(ω , u) : (ω′ , ut ) ∈ Γ}) t

= ∫ ω′ ({u : (ω′ , ut ) ∈ Γ})P′ (dω′ ) Ω′ −t

= ∫ ω′ ({u : (ω′ , ut ) ∈ Γ})P′ t (dω′ ) Ω′

= ∫ ω′ ({u : (ω′ , u) ∈ Γ})P′ t (dω′ ) Ω′

̃ = ∫ ω′ ({u : (ω′ , u) ∈ Γ})P′ (dω′ ) = P(Γ). Ω′

It is easy to check that the conditional measure of the measure P̃ on the set {(ω′ , u) : u ∈ C(ℝ, X)} coincide with ω′ and that the one-dimensional distribution of ω′ under the measure P′ coincide with the distribution of μ. The lemma is proved. Consider the examples of the described construction. Example 1.4.5. Let {(xt , yt ); t ∈ ℝ} be the stationary random process in X × ℝ, which has the continuous components. Consider Ω′ = (X × ℝ)ℝ with the sigma-field, which is generated by the cylindrical sets. Define the measure P′ as the distribution of the process {(xt , y − t); t ∈ ℝ}. Then the family of the time-shift operators preserve the measure P′ . It follows now from the previous considerations that the conditional distributions of the process x under fixed y form the stationary random measure on C(ℝ, X). Exercise 1.4.11. Show the detailed proof of the previous statement.

1.4 Random measures on the space of trajectories �

33

Example 1.4.6. Suppose that {(xt , yt ); t ∈ ℝ} is the Gaussian stationary process in ℝ2 with the continuous projections. Then the conditional distributions of x under fixed y form the stationary random measure μ as it was stated above. Now the measure μ looks like μ = ν0 ∗ δξ , where ν0 is the deterministic stationary Gaussian measure on C(ℝ) and ξ is the stationary Gaussian process. Exercise 1.4.12. Prove this statement. Example 1.4.7. Dynamic systems and the stationary random measures. Let {(xt , yt ); t ∈ ℝ} be the stationary process in X × ℝ. Suppose that x is the trajectory of the dynamic system in X controlled by y and perturbed by the random noise. Assume that there exists the family of measurable maps {φs,t : Ω × X × C(ℝ) → X; −∞ < s ≤ t < +∞} such that: (1) ∀s1 ≤ s2 ≤ s3 ∀z ∈ C(ℝ) ∀x ∈ X ∀ω ∈ Ω: φs2 ,s3 (ω, z, φs1 ,s2 (ω, z, x)) = φs1 ,s3 (ω, z, x). (2) For arbitrary s1 ≤ s2 ≤ ⋅ ⋅ ⋅ ≤ sn and z ∈ C(ℝ), the random maps φs1 ,s2 (z, ⋅), . . . , φsn−1 ,sn (z, ⋅) are jointly independent. (3) For arbitrary t ∈ ℝ and z ∈ C(ℝ), φt,t (z, ⋅) is an identity map on X. (4) The family {φs,t (z, ⋅); −∞ < s ≤ t < +∞, z ∈ C(ℝ)} is independent from y. Suppose that the process {(xt , yt ); t ∈ ℝ} is such that for every t1 ≤ t2 , x(t2 ) = φt1 ,t2 (y, x(t1 )) a. s. Consider the conditional distribution of x under fixed y. As it was proved above, this distribution is a stationary random measure on C(ℝ). Denote it by μ. It occurs that the measure μ satisfies certain relations. Let for z ∈ C(ℝ) and r ∈ Xνt1 ,t2 (z, r) be the distribution of φt1 ,t2 (z, r) in X. Then for arbitrary t1 ≤ t2 ≤ ⋅ ⋅ ⋅ ≤ tn and bounded measurable function f on Xn , ∫ f (u(t1 ), . . . , u(tn ))μ(du) C(ℝ,X)

=



∫ f (u(t1 ), r2 , . . . , rn )μ(du)νt1 ,t2 (y, u(t1 ))(dr2 ) ⋅ ⋅ ⋅ νtn−1 ,tn (y, rn−1 )(drn ).

C(ℝ,X) Xn−1

That is, the measure μ is Markov under fixed y. If the Chapman–Kolmogorov equation can be substituted by the certain differential relation, then for measure μ the differential equation can be written also. Such equations will be considered in the future, where the equations driven by the random measures will be studied.

2 Stochastic differential equations for the flows with interactions 2.1 Equation with interactions This chapter is devoted to the study of the new type of stochastic differential equations on the finite-dimensional Euclid space. In the usual stochastic differential equations, the flow on the phase space arise after considerations of all trajectories, which start from the all points of the space. Here, the initial equation is already the equation for the flow. Moreover, the trajectory of the one particle, which starts from the fixed point of the space, cannot be determined without the calculations of the trajectories of all other points. Let us start from the formal description of our equation. Consider the space ℝd as a main phase space. Suppose that {wk ; k ≥ 1} is the sequence of independent ℝd -valued Wiener processes, which are determined on the certain probability space (Ω, ℱ , P). These processes will describe the random media in which our stochastic flow moves. Let μ0 be a probability measure on the space ℝd . Our stochastic differential equation has the form ∞

dx(u, t) = a(x(u, t), μt , t)dt + ∑k=1 bk (x(u, t), μt , t)dwk (t), { { { x(u, 0) = u, u ∈ ℝd , { { { −1 {μt = μ0 ∘ x(⋅, t) .

(2.1)

Here, μt = μ0 ∘ x(⋅, t)−1 is an image of μ0 under the map x(⋅, t) : ℝd → ℝd . So, the equation for the x(u, t), t ≥ 0, i. e., for the trajectory of the particle, which starts from the point u, contains as μt the information about the particles, which start from other points. Equation (2.1) is written in the formal differential form. Before solving (2.1), let us consider some partial cases and examples and study some other appropriate form for the infinite sum of integrals with respect to wk , k ≥ 1. Example 2.1.1. Nonrandom equations for the description of the motion of interacting system of the particles. Suppose that in (2.1) all bk are equal to 0 and the coefficient a has the form a(r, μ, t) = ∫ f (r, v)μ(dv), ℝd

where f is a bounded continuous function on ℝd × ℝd . Let the measure μ0 be discrete, i. e., N

μ0 = ∑ pk δuk , k=1

https://doi.org/10.1515/9783110986518-002

2.1 Equation with interactions

� 35

where pk > 0, p1 + ⋅ ⋅ ⋅ + pN = 1, u1 , . . . , uN are the different points in ℝd . In this case for every t > 0, the measure μt will have the form N

μt = ∑ pk δx(uk ,t) . k=1

Hence, (2.1) can be rewritten as dx(u, t) = ∑Nk=1 pk f (x(u, t), x(uk , t))dt,

{

x(u, 0) = u,

u ∈ ℝd .

(2.2)

For k = 1, . . . , N, denote xk (t) = x(uk , t),

t ≥ 0.

Then for xk , k = 1, . . . , N, we get the usual system of the differential equations dxk (t) = ∑Nj=1 pj f (xk (t))dt,

{

xk (0) = uk ,

k = 1, . . . , N.

(2.3)

In (2.3), the unknown functions xk , k = 1, . . . , N can be considered as the trajectories of heavy particles, which interact with each other. The interaction is described by the function f . From the physical point of view, (2.3) has the fault. This equation contains only pairwise interactions. This problem can be easily overcome if we consider the coefficient a of the more complicated form ∞

a(r, μ, t) = ∑ ∫ . .j . ∫ fj (r, v1 , . . . , vj )μ(dv1 ) ⋅ ⋅ ⋅ μ(dvl ). j=0

ℝd

After such a modification, (2.3) turns out to be a system N {dxk (t) = ∑∞ j=0 ∑i1 ...ij =1 f (xk (t), xi1 (t), . . . , xij (t)) ⋅ pi1 ⋅ ⋅ ⋅ pij dt, { x (0) = uk , k = 1, . . . , N. { k

(2.4)

If the function f satisfies the Lipshitz condition, then one can conclude that (2.2) has the solution, which is unique. But the initial equation (2.1) contains the information not only about the motion of the heavy particles, but also about the motion of the particles, which start from the arbitrary point of the space. So, to completely solve (2.1), we have to add to (2.2) the equation dx(u, t) = ∑Nk=1 pk f (x(u, t); xk (t))dt,

{

x(u, 0) = u,

u ∈ ℝd ,

(2.5)

36 � 2 Stochastic differential equations for the flows with interactions where x1 , . . . , xN are already known. Note that in the case u = uk the solution x(uk , t) of (2.5) coincide with xk (t) due to the uniqueness of the (2.2) and (2.5) solutions. Let us consider the simple numerical example. Put d = 1, a(r, μ, t) = ∫(r − v)μ(dv) = r − ∫ vμ(dv). ℝ



Such a choice of the coefficient a corresponds to the case when all the particles are pushed away from the joint center of the mass. Take 1 1 μ0 = δ−1 + δ1 . 2 2 Consider the trajectories of the heavy particles. Put x1 (t) = x(−1, t), x2 (t) = x(1, t). For these functions, we get the following Cauchy problem: dx1 (t) = (x1 (t) − 21 (x1 (t) + x2 (t)))dt, { { { dx2 (t) = (x2 (t) − 21 (x1 (t) + x2 (t)))dt, { { { {x1 (0) = −1, x2 (0) = 1.

(2.6)

Hence, x1 (t) = −et ,

x2 (t) = et .

Now for the particle starting from the arbitrary point u ∈ ℝ the equation gets the form dx(u, t) = x(u, t)dt. So, now the solution of our equation is x(u, t) = uet ,

u ∈ ℝ, t ≥ 0.

And the measure μt is of the kind 1 1 μt = δ−et + δet , 2 2

t ≥ 0.

If we change the initial measure μ0 , then the flow will also change. Put μ0 = pδ−1 + qδ1 , where p, q > 0, p + q = 1. In this case, instead of (2.6) we get dx1 (t) = q(x1 (t) − x2 (t))dt, { { { dx (t) = p(x2 (t) − x1 (t) + x2 (t))dt, { { 2 { {x1 (0) = −1, x2 (0) = 1.

2.1 Equation with interactions

� 37

So, x1 (t) = −2qe−t + (q − p),

x2 (t) = 2pet + (q − p). Consequently,

x(u, t) = (u + p − q)et + (q − p). The corresponding measure is 1 1 μt = δx1 (t) + δx2 (t) . 2 2 Example 2.1.2. The interacting Brownian particles. Let d = 1 as before. Put a = 0 and b1 (r, μ, t) = cos(∫ f (r, u)μ(du)), ℝ

b2 (r, μ, t) = sin(∫ f (r, u)μ(du)), bk = 0,



k > 2.

Suppose that the initial measure is 1 1 μ0 = δ−1 + δ1 . 2 2 For the random processes x1 (t) = x(−1, t), x2 (t) = x(1, t), we get the Cauchy problem: dx1 (t) = cos( 21 f (x1 (t), x1 (t)) + 21 f (x1 (t), x2 (t)))dw1 (t) { { { { { { + sin( 21 f (x1 (t), x1 (t)) + 21 f (x1 (t), x2 (t)))dw2 (t), { { { dx (t) = cos( 21 f (x2 (t), x1 (t)) + 21 f (x2 (t), x2 (t)))dw1 (t) { { 2 { { { + sin( 21 f (x2 (t), x1 (t)) + 21 f (x2 (t), x2 (t)))dw2 (t), { { { { {x1 (0) = −1, x2 (0) = 1.

(2.7)

If the function f satisfies the Lipshitz condition, then (2.7) has the solution, which is unique. Now for the stochastic flow x, we get the equation 1 1 dx(u, t) = cos( f (x(u, t), x1 (t)) + f (x(u, t), x2 (t)))dw1 (t) 2 2 1 1 + sin( f (x(u, t), x1 (t)) + f (x(u, t), x2 (t)))dw2 (t). 2 2

(2.8)

Note that (2.7) has the solution as a usual Itô equation. It can be proved [53] that x has the modification, which is continuous with respect u and t, and is the homeomorphism of ℝ

38 � 2 Stochastic differential equations for the flows with interactions under fixed t. So, the particles, which start from the different points of the space, never meet each other. Also, note that for every fixed u the random process {x(u, t), t ≥ 0} is the continuous martingale with the characteristic ⟨x(u, ⋅)⟩t = t, i. e., [73] x(u, ⋅) is the Wiener process with the initial point u. So, in this case (2.1) describes the motion of interacting Brownian particles.

2.2 The Brownian sheet and related stochastic calculus This section is devoted to the appropriate (from both mathematical and physical point of view) form of the description of the infinite sum of stochastic integrals in (2.1). So, this is purely a technical part, and the reader, who knows the corresponding material, can omit it. First, consider the case when d = 1. Denote the Lebesgue measure in ℝ by λ. Let {ek ; k ≥ 1} be an orthonormal basis in L2 (ℝ, λ). Consider the Borel set Δ ⊂ ℝ × [0; +∞). Suppose that Δ has a finite Lebesgue measure. It follows from the Fubini theorem that for almost all t the set, Δt = {u ∈ ℝd : (u, t) ∈ Δ}, has the property 1Δt ∈ L2 (ℝ, λ).

Hence, 1Δt can be expanded in the series ∞

1Δt = ∑ fk (t)ek . k=1

Exercise 2.2.1. Prove that the functions {fk } can be chosen to be measurable by modification on the set of measure zero. Note that ∞ +∞

∑ ∫

k=1 0

fk2 (t)dt

+∞

= ∫ ∫ 1Δt (u)dudt = λ2 (Δ), 0 ℝ

where λ2 is the Lebesgue measure in ℝ2 . Put the Gaussian random variable ∞ +∞

W (Δ) = ∑ ∫ fk (t)dwk (t) k=1 0

into the correspondence with the set Δ.

2.2 The Brownian sheet and related stochastic calculus

� 39

Exercise 2.2.2. Prove that the following relations take place: (1) EW (Δ) = 0, EW (Δ)2 = λ2 (Δ). (2) If Δ1 and Δ2 have the finite Lebesgue measure, then EW (Δ1 )W (Δ2 ) = λ2 (Δ1 ∩ Δ2 ). (3) If Δ1 ∩ Δ2 = 0, then W (Δ1 ) + W (Δ2 ) = W (Δ1 ∪ Δ2 ). From the Exercise 2.2.2, one can see that W is the Gaussian orthogonal measure with the Lebesgue measure as a structure measure. Sometimes in literature, W is called the Wiener sheet. We also will use this name. Since W is defined on the space ℝ × [0; +∞), which contains the time component, then it is natural to associate the flow of σ-fields, ℱt = σ{W (Δ) : Δ ⊂ ℝ × [0; t]},

(2.9)

with W . Exercise 2.2.3. Prove that ∀t ≥ 0: ℱt = σ{wk (s) : k ≥ 1, s ≤ t}.

Now consider the random function f from L2 (ℝ × [0; +∞), λ2 ) such that the restriction of f on ℝ × [0; t] is ℱt -measurable for every t ≥ 0. Let us build the stochastic integral +∞

∫ ∫ f (s, u)W (ds, du). 0 ℝ

Consider the sequence of the functions, n

fn (s, u) = ∑ φnk (u)1[tn ;tn ] (s), k k+1

k=0

such that: +∞ (1) E ∫0 ∫ℝ (fn − f )2 duds → 0, n → ∞. (2) For every k φnk is ℱtn -measurable. k (3) For every k, n−1

n φnk (u) = ∑ αkj 1Δn , j=0

j

where Δnj , j = 0, . . . , n are the disjoint subsets of ℝ with finite measure.

40 � 2 Stochastic differential equations for the flows with interactions Exercise 2.2.4. Prove the existence of such a sequence. For every n ≥ 1, put +∞

n

n

n n W ([tkn ; tk+1 ) × Δnj ). ∫ ∫ fn (s, u)W (ds, du) = ∑ ∑ αkj k=0 j=0

0 ℝ

The correctness of this definition can be checked as usual. Exercise 2.2.5. Prove that +∞

E ∫ ∫ fn (s, u)W (ds, du) = 0, 0 ℝ

2

+∞

+∞

E( ∫ ∫ fn (s, u)W (ds, du)) = E ∫ ∫ fn (s, u)2 ds, du. 0 ℝ

0 ℝ

Now consider for every k ≥ 1, gk (t) = ∫ fn (t, u)ek (u)du,

t ≥ 0.



Then for every k ≥ 1, gk is adapted to the flow {ℱt }, and ∞

+∞

k=1

0

+∞

∑ E ∫ gk (t)2 dt = E ∫ ∫ fn (s, u)2 ds, du. 0 ℝ

Using the definition of W, one can check that +∞

∞ +∞

∫ ∫ fn (s, u)W (ds, du) = ∑ ∫ gk (s)dwk (s), 0 ℝ

k=1 0

(2.10)

where the series in the right part converges in the square mean. Now we can define the stochastic integral from f with respect to W as a limit in the square mean +∞

+∞

∫ ∫ f (s, u)W (ds, du) := lim ∫ ∫ fn (s, u)W (ds, du). 0 ℝ

n→∞

0 ℝ

It can be checked as usual that the value of the limit does not depend on the choice of the approximating sequence {fn ; n ≥ 1} and that the obtained integral has all the properties, which were pointed to in Exercise 2.2.5. Moreover, the representation of the integral as an infinite series (2.10) also remains to be true. Also, note that the random process

2.2 The Brownian sheet and related stochastic calculus

� 41

t

∫ ∫ f (s, u)W (ds, du),

t≥0

0 ℝ

is {ℱt }-martingale. This fact can be easily checked by using the representation (2.10). Let us write the Itô formula for this martingale. Suppose that the deterministic function F : [0; +∞) × ℝ → ℝ has two continuous bounded derivatives, and consider the value t

F(t, ∫ ∫ f (s, u)W (ds, du)). 0 ℝ

Using the representation (2.10), this value can be written as a limit (e. g., in probability), t

n

t

F(t, ∫ ∫ f (s, u)W (ds, du)) = lim F(t, ∑ ∫ gk (s)dwk (s)). n→∞

0 ℝ

k=1 0

Now write the usual Itô formula: t

n

F(t, ∑ ∫ gk (s)dwk (s)) k=1 0

t

= F(0, 0) + n

∫ F1′ (s, 0

t

n

s

n

∑ ∫ gk (τ)dwk (τ))ds

k=1 0

s

+ ∑ ∫ F2′ (s, ∑ ∫ gj (τ)dwj (τ))gk (s)dwk (s) k=1 0

j=1 0

t

s

n 1 n ′′ + ∑ ∫ F22 (s, ∑ ∫ gj (τ)dwj (τ))gk2 (s)d(s). 2 k=1 j=1 0

0

Note that n

t

n

s

lim ∑ ∫ F2′ (s, ∑ ∫ gj (τ)dwj (τ))gk (s)dwk (s)

n→∞

k=1 0 t

s

j=1 0

= ∫ ∫ F2′ (s, ∫ ∫ f (τ, u)W (dτ, du))f (s, v)W (ds, dv), 0 ℝ

0 ℝ

and n

t

n

s

t

n

s

′′ ′′ lim ∑ ∫ F22 (s, ∑ ∫ gj (τ)dwj (τ))gk2 (s)d(s) = ∫ ∫ F22 (s, ∑ ∫ gj (τ))f 2 (s, u)duds.

n→∞

k=1 0

j=1 0

0 ℝ

j=1 0

42 � 2 Stochastic differential equations for the flows with interactions ′′ Here, the first limit is in the square mean and the second is in probability. Also, F1′ , F2′ , F22 denote the partial derivatives of F with respect to the first or second variables. Consequently, we have the following Itô formula for the integral relative to the Wiener sheet: t

F(t, ∫ ∫ f (s, u)W (ds, du)) 0 ℝ

t

s

= F(0, 0) + ∫ F1′ (s, ∫ ∫ f (τ, u)W (dτ, du))ds 0

t

0 ℝ

s

+ ∫ F2′ (s, ∫ ∫ f (τ, u)W (dτ, du))f (s, v)W (ds, dv) 0

+

t

0 ℝ

s

1 ′′ (s, ∫ ∫ f (τ, u)W (dτ, du))f 2 (s, v)dsdv. ∫ ∫ F22 2 0 ℝ

(2.11)

0 ℝ

We will need the notions of the ℝd -valued Wiener sheet and the integrals relative to it. All this machinery can be produced in the same way as above. So, we leave the precise formulations to the reader. Here, we only briefly say that the Wiener sheet in ℝd is the set from d independent one-dimensional Wiener sheets. The properties of the stochastic integral with respect to the Wiener sheet allow to consider the stochastic differential equations with this integral in a usual manner.

2.3 The existence of the solution for the equation with the interaction This section contains the rigorous formulation of (2.1) and the description of the successful approximation method when being applied to this equation. Let us first write (2.1) with the integral with respect to the ℝ-valued Wiener sheet W , dx(u, t) = a(x(u, t), μt , t)dt + ∫ℝ b(x(u, t), μt , t, q)W (dt, dq), { { { x(u, 0) = u, u ∈ ℝd , { { { −1 {μt = μ0 ∘ x(⋅, t) , t ≥ 0.

(2.12)

Definition 2.3.1. The solution to the Cauchy problem (2.12) corresponding to the coefficients a, b and initial measure μ0 is the random ℝd -valued field x(u, t), u ∈ ℝd , t ∈ [0; +∞) such that: (1) For every t ≥ 0, the restriction of x on the interval [0; t] is ℬd × ℬ[0;t] × ℱt -measurable (here ℬd and ℬ[0;t] are the Borel σ-fields in ℝd and [0; t] correspondingly). (2) Under fixed u ∈ ℝd and t ≥ 0, the integral analog of equality from (2.12) is performed. (3) The initial condition holds for x.

2.3 The existence of the solution for the equation with the interaction

� 43

The following theorem is the main statement of this section. Theorem 2.3.1. Let the coefficients a and b satisfy the condition ∃C > 0 : ∀u1 , u2 ∈ ℝd , ν1 , ν2 ∈ M, t ≥ 0: 1

2 󵄩 󵄩2 󵄩 󵄩󵄩 󵄩󵄩a(u1 , ν1 , t) − a(u2 , ν2 , t)󵄩󵄩󵄩 + ( ∫ 󵄩󵄩󵄩b(u1 , ν1 , t, q) − b(u2 , ν2 , t, q)󵄩󵄩󵄩 dq)

ℝd

≤ C(‖u1 − u2 ‖ + γ(ν1 , ν2 )).

Suppose also that a and b are continuous with respect to variables x, μ and t (the coefficient b is continuous in the L2 -norm). Then (2.12) has the solution, which is unique. Proof. Let us use the method of successful approximation. For the future, fix T > 0 and consider the equation on the interval [0; T]. All constants, which will arise, depend on this T. The statement of the theorem will result from the standard arguments. Define x0 (u, t) = u for all u ∈ ℝd , t ≥ 0. Correspondingly, μ0t = μ0 for all t ≥ 0. Consider now the equation t

x1 (u, t) = u +

∫ a(x1 (u, s), μ0s , s)ds 0

t

+ ∫ ∫ b(x1 (u, s), μ0s , s, q)W (ds, dq). 0 ℝd

The existence of the unique x1 follows from the existence theorem for the Itô stochastic differential equation [53]. Let us check that x1 has the measurable modification. Note that for u1 , ui nℝd , 󵄩󵄩 󵄩2 󵄩󵄩x1 (u1 , t) − x1 (u2 , t)󵄩󵄩󵄩

t

2

󵄩 󵄩 ≤ 3‖u1 − u2 ‖2 + 3(∫󵄩󵄩󵄩a(x1 (u1 , s), μ0 , s) − a(x1 (u2 , s), μ0 , s)󵄩󵄩󵄩) 0

󵄩󵄩 t 󵄩󵄩2 󵄩 󵄩 + 3󵄩󵄩󵄩∫ ∫ (b(x1 (u1 , s), μ0 , s, q) − b(x1 (u2 , s), μ0 , s, q))W (ds, dq)󵄩󵄩󵄩 . 󵄩󵄩 󵄩󵄩 d 0 ℝ

Using the Burkholder inequality for the stochastic integral (which is a martingale) and the Lipshitz condition on the coefficients, we get 󵄩 󵄩2 E sup󵄩󵄩󵄩x(u1 , s) − x(u2 , s)󵄩󵄩󵄩 ≤ c1 ‖u1 − u2 ‖2 . [0;T]

(2.13)

Hence, x1 has continuous modification with respect to u and t. Now due to Lemma 1.2.3, μ1t = μ0 ∘ x1 (⋅, t)−1 is the random measure for every t ≥ 0. Exercise 2.3.1. Prove that {μ1t ; t ≥ 0} is continuous with the probability one process.

44 � 2 Stochastic differential equations for the flows with interactions Now define x2 as the solution to the equation: t

t

x2 (u, t) = u + ∫ a(x2 (u, s), μ1s , s)ds + ∫ ∫ b(x2 (u, s), μ1s , s, q)W (ds, dq), 0

0 ℝd

and so on. As a result of such a procedure, we obtain the sequence of the random fields {xn ; n ≥ 1} and the corresponding measure-valued processes {μn ; n ≥ 1}. In this construction, every xn satisfies the estimation (2.13) with the same c1 . Now for n ≥ 1, t ≥ 0, 󵄩 󵄩2 E sup󵄩󵄩󵄩xn+1 (u, s) − xn (u, s)󵄩󵄩󵄩 [0;t]

t

t

2 󵄩 󵄩2 ≤ c2 ∫ E sup󵄩󵄩󵄩xn+1 (u, τ) − xn (u, τ)󵄩󵄩󵄩 ds + c2 ∫ E sup γ(μnτ , μn−1 τ ) ds. [0;s]

0

0

[0;s]

So, due to the Gronwall–Bellman inequality, t

2 󵄩 󵄩2 E sup󵄩󵄩󵄩xn+1 (u, s) − xn (u, s)󵄩󵄩󵄩 ≤ c3 ∫ E sup γ(μnτ , μn−1 τ ) ds. [0;t]

0

[0;s]

Define for r ≥ 0, φ(r) =

r . 1+r

Then 󵄩󵄩 󵄩󵄩 γ(μns , μn−1 s ) ≤ ∫ φ(󵄩 󵄩xn (u, s) − xn−1 (u, s)󵄩󵄩)μ0 (du). ℝd

Hence, E sup γ(μns , μn−1 s )

2

[0;t]

󵄩 󵄩 ≤ E sup( ∫ φ(󵄩󵄩󵄩xn (u, s) − xn−1 (u, s)󵄩󵄩󵄩)μ0 (du)) [0;t]

ℝd

󵄩 󵄩 ≤ E sup ∫ φ(󵄩󵄩󵄩xn (u, s) − xn−1 (u, s)󵄩󵄩󵄩)μ0 (du) [0;t]

ℝd

󵄩 󵄩 ≤ E ∫ sup φ(󵄩󵄩󵄩xn (u, s) − xn−1 (u, s)󵄩󵄩󵄩)μ0 (du) ℝd

[0;t]

󵄩 󵄩2 ≤ ∫ E sup󵄩󵄩󵄩xn (u, t) − xn−1 (u, t)󵄩󵄩󵄩 μ0 (du) ℝd

[0;t]

2

2.3 The existence of the solution for the equation with the interaction

� 45

t

n−2 ≤ c3 ∫ E sup γ(μn−1 τ , μτ )ds. 0

[0;s]

(2.14)

Now the convergence of {μn } and {xn } can be checked in the standard way. The uniqueness of the solution is explained by the estimations similar to (2.14). The theorem is proved. Consider some simple particular cases and examples to this theorem. Example 2.3.1. The system of interacting particles moving in the random media. Let N

μ0 = ∑ μk δx k . k=1

0

Consider a(x, μ, t) = ∫ f (x, v)μ(dv), ℝd

where the function f is bounded and satisfies the Lipshitz condition relative to both variables. Let us check whether a satisfies the condition of the theorem. For arbitrary μ1 , μ2 ∈ M and ϰ ∈ C(μ1 , μ2 ), from the condition on f we have 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩 ∫ f (x, v)μ1 (dv) − ∫ f (x, v)μ2 (dv)󵄩󵄩󵄩 󵄩󵄩 󵄩󵄩 d d ℝ



󵄩 󵄩 ≤ ∫ ∫ 󵄩󵄩󵄩f (x, v1 ) − f (x, v2 )󵄩󵄩󵄩ϰ(dv1 , dv2 ) ℝd ℝd

≤ C ∫ ∫ ϰ(‖v1 − v2 ‖)ϰ(dv1 , dv2 ), ℝd ℝd

where the constant C depends on 󵄩 󵄩 sup󵄩󵄩󵄩f (x, v)󵄩󵄩󵄩 x,v

and the Lipshitz constant for f . Since ϰ was arbitrary, then 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩 ∫ f (x, v)μ1 (dv) − ∫ f (x, v)μ2 (dv)󵄩󵄩󵄩 ≤ Cγ(μ1 , μ2 ). 󵄩󵄩 󵄩󵄩 d d ℝ



So, our coefficient a satisfies the conditions of Theorem 2.3.1. Assume that the coefficient b does not depend on μ and satisfies the conditions of Theorem 2.3.1. Now equation (2.12) has the following form:

46 � 2 Stochastic differential equations for the flows with interactions dx(u, t) = ∑Nk=1 pk f (x(u, t), x(x0k , t))dt + ∫ℝd b(x(u, t), t, q)W (dt, dq), { x(u, 0) = u, u ∈ ℝd . Consider the motion of heavy particles. Put for every k = 1, . . . , N, xk (t) = x(x0k , t), t ≥ 0. For the processes xk (t), k = 1, . . . , N, we get the system dxk (t) = ∑Nj=1 pj f (xk (t), xj (t))dt + ∫ℝd b(xk (t), t, q)W (dt, dq), { xk (0) = x0k , k = 1, . . . , N. This system describes the motion of the N interacting particles in the random media. Note the two important features of our model. It is obvious now that the particle, which starts from different points of the space, cannot couple in the finite time (as was mentioned above, x is the homeomorphism under every fixed t). Such models in ℝd with continuous time and coupling will be obtained in the future as weak limits of the processes, which satisfy (2.12). Second, note that from equation (2.12) we have the information about the behavior of the particles, which starts not only from the heavy points but from all points of the space. Now consider the measure-valued process {μt ; t ≥ 0}, which arise under the solution of equation (2.12). The following lemma shows that the solution of (2.12) continuously depends on the initial measure μ0 . Lemma 2.3.1. Let x and {xn ; n ≥ 1} be the solutions to the equation (2.12), which are corresponding to the initial measures {μn ; n ≥ 1}. Suppose that μn → μ0 ,

n→∞

in the space M. Then for every T > 0 and u ∈ ℝd , 󵄩 󵄩2 E sup󵄩󵄩󵄩x(u, t) − xn (u, t)󵄩󵄩󵄩 → 0,

n → ∞,

E sup γ(μt , μnt ) → 0,

n → ∞.

[0;T]

[0;T]

Proof. Similar to the proof of Theorem 2.3.1, the following inequality can be obtained: ∀t ∈ [0; T], t

2 󵄩 󵄩2 E sup󵄩󵄩󵄩x(u, s) − xn (u, s)󵄩󵄩󵄩 ≤ C ∫ E sup γ(μτ , μnτ ) ds. [0;T]

0

[0;s]

To estimate the distance γ(μτ , μnτ ), consider arbitrary ϰ ∈ C(μ, μn ). Then 2

E sup γ(μs , μns ) [0;t]

2.3 The existence of the solution for the equation with the interaction

� 47

2

󵄩 󵄩 ≤ E sup( ∫ ∫ φ(󵄩󵄩󵄩x(u, s) − xn (v, s)󵄩󵄩󵄩)ϰ(du, dv)) [0;t]

ℝd ℝd

󵄩 󵄩 ≤ 2E sup( ∫ ∫ φ(󵄩󵄩󵄩x(v, s) − xn (v, s)󵄩󵄩󵄩)ϰ(du, dv)) [0;t]

2

ℝd ℝd

2

󵄩 󵄩 + 2E sup( ∫ ∫ φ(󵄩󵄩󵄩x(u, s) − x(v, s)󵄩󵄩󵄩)ϰ(du, dv)) [0;t]

ℝd ℝd

󵄩 󵄩 ≤ 2E sup( ∫ φ(󵄩󵄩󵄩x(u, s) − xn (u, s)󵄩󵄩󵄩)μ(du)) [0;t]

ℝd

2

2

󵄩 󵄩 + 2E ∫ ∫ φ(sup󵄩󵄩󵄩x(u, s) − x(v, s)󵄩󵄩󵄩) ϰ(du, dv). ℝd ℝd

[0;T]

Now analogously to the proof of Theorem 2.3.1, we have from the Gronwall–Bellman inequality 2 󵄩 󵄩 2 E sup γ(μs , μns ) ≤ C1 ∫ ∫ Eφ(sup󵄩󵄩󵄩x(u, s) − x(v, s)󵄩󵄩󵄩) ϰ(du, dv). [0;T]

ℝd ℝd

[0;T]

Note that it follows from the proof of Theorem 2.3.1 that the function 󵄩 󵄩 2 Eφ(sup󵄩󵄩󵄩x(u, s) − x(v, s)󵄩󵄩󵄩) , [0;T]

u, v ∈ ℝd

is bounded and continuous on both variables. Let us take the sequence of the measures {ϰn ; n ≥ 1} on ℝd × ℝd in the such way, that: (1) ∀n ≥ 1 : ϰn ∈ C(μ, μn ). (2) ϰn weakly converges to the measure concentrated on the diagonal in ℝd × ℝd . Then 2

󵄩 󵄩 lim ∫ ∫ Eφ(sup󵄩󵄩󵄩x(u, s) − x(v, s)󵄩󵄩󵄩) ϰ(du, dv) = 0.

n→∞

ℝd ℝd

[0;T]

Hence, the conclusion of the lemma holds. The lemma is proved. This continuous dependence of the solution to (2.12) from the initial measure will be used in the future under the investigation of the Markov structure of the measurevalued process {μt ; t ≥ 0}. Another important remark about (2.12) is that we can consider the Cauchy problem with the initial conditions, which are random and independent from the Wiener sheet. More precisely, the following statement holds.

48 � 2 Stochastic differential equations for the flows with interactions Theorem 2.3.2. Suppose that the random map ψ : ℝd → ℝd is independent from the Wiener sheet W and ∃L > 0 : ∀u, v ∈ ℝd : 󵄩2 󵄩 E 󵄩󵄩󵄩ψ(u) − ψ(v)󵄩󵄩󵄩 ≤ L‖u − v‖2 ,

󵄩2 󵄩 E 󵄩󵄩󵄩ψ(0)󵄩󵄩󵄩 < +∞.

Let the random measure μ0 in M is independent from the W . Then the Cauchy problem dx(u, t) = a(x(u, t), μt , t)dt + ∫ℝd b(x(u, t), μt , q)W (dt, dq), { { { x(u, 0) = ψ(u), u ∈ ℝd , { { { −1 {μt = μ0 ∘ x(⋅, t) , t > 0 has the solution, which is unique, under the same conditions on the coefficients a, b as in Theorem 2.3.1. The proof of this theorem repeats the outline of the proof of Theorem 2.3.1, and we left it to the reader. Now let us speak about the properties of the stochastic flow x, which is the solution of (2.12). Suppose that the process {μt ; t ≥ 0} is already known to us. Note that this is continuous measure-valued process. Denote for every r ∈ ℝd , t

t

F(r, t) = ∫ a(r, μs , s)ds + ∫ ∫ b(r, μs , s, q)W (ds, dq). 0

0 ℝd

Then, in the terminology of [67], F is spatial semimartingale and (2.12) can be rewritten in the form dx(u, t) = F(x(u, t), dt), { x(u, 0) = u, u ∈ ℝd . Now the properties of x can be derived from the properties of the semimartingale F. The bounded variation part of F has the kind t

∫ a(r, μs , s)ds,

t≥0

0

and the joint characteristics of the martingale parts for the different r1 and r2 is t

⟨F(r1 , ⋅), F(r2 , ⋅)⟩t = ∫ ∫ b∗ (r1 , μs , s, q)b(r2 , μs , s, q)dqds, 0 ℝd

where b∗ is the adjoint matrix. Denote B(r1 , r2 , s) = ∫ b∗ (r1 , μs , s, q)b(r2 , μs , s, q)dq. ℝd

2.4 The stochastic flows with interaction and the moments of initial mass distribution

� 49

Exercise 2.3.2. Prove that the function B is bounded and satisfies the Lipshitz condition under the conditions of Theorem 2.3.1. From Theorem 4.5.1 [67], we now have the following statement. Theorem 2.3.3. Under the conditions of Theorem 2.3.1, the solution x has the modification, which is with the probability one flow of the homeomorphisms ℝd on ℝd . This property of the flow x can be investigated more deeply if we suppose that a and b have the derivatives with respect to a spatial variable. In this case, x appears to be the diffeomorphism. We will use this property to study the local times of the process {μt ; t ≥ 0} and the absolute continuity of the measures μt under the fixed t and ω.

2.4 The stochastic flows with interaction and the moments of initial mass distribution In this section, we will consider the initial mass distribution μ0 not from M but from Mm , where m ≥ 1. Theorem 2.4.1. Suppose that in equation (2.12) ∃C > 0 : ∀u1 , u2 ∈ ℝd , ν1 , ν2 ∈ Mm , t ≥ 0: 1

2 󵄩2 󵄩󵄩 󵄩 󵄩 󵄩󵄩a(u1 , ν1 , t) − a(u2 , ν2 , t)󵄩󵄩󵄩 + ( ∫ 󵄩󵄩󵄩b(u1 , ν1 , t, q) − b(u2 , ν2 , t, q)󵄩󵄩󵄩 dq)

ℝd

≤ C(‖u1 − u2 ‖ + γm (ν1 , ν2 ))

and the functions a and b are continuous with respect to all variables (the measure-valued variable is from the space Mm ). Then (2.12) has the solution, which is unique, and for every t > 0 the measure μt = mu0 ∘ x(⋅, t)−1 is the random element in Mm . Proof. Similar to Theorem 2.3.1, use the method of successful approximation. Let {xn ; n ≥ 1} and {μnt ; n ≥ 1} be the correspondent sequences of the stochastic fields and measurevalued processes. Let us prove that for every fixed T > 0, 󵄩 󵄩m sup E sup ∫ 󵄩󵄩󵄩xn (u, s)󵄩󵄩󵄩 μ0 (du) < +∞. n≥1

[0;T]

(2.15)

ℝd

Note that similar to the proof of Theorem 2.3.1, we now have 󵄩 󵄩m E sup󵄩󵄩󵄩xn (u1 , s) − xn (u2 , s)󵄩󵄩󵄩 ≤ C‖u1 − u2 ‖m , [0;T]

where the constant C depends on T but does not depend on u1 , u2 ∈ ℝd and n ≥ 1. Consequently,

50 � 2 Stochastic differential equations for the flows with interactions 󵄩m 󵄩 󵄩m 󵄩 E sup ∫ 󵄩󵄩󵄩xn (u, s)󵄩󵄩󵄩 μ0 (du) ≤ C1 ∫ ‖u‖m μ0 (du) + C1 E sup󵄩󵄩󵄩xn (0, s)󵄩󵄩󵄩 . [0;T]

[0;T]

ℝd

ℝd

(2.16)

For the second summand, the following inequality holds for t ∈ [0; T]: t

t

m 󵄩m 󵄩 󵄩m 󵄩 E sup󵄩󵄩󵄩xn (0, s)󵄩󵄩󵄩 ≤ C2 + C3 ∫ E sup󵄩󵄩󵄩xn (0, τ)󵄩󵄩󵄩 ds + C3 ∫ E sup γm (μn−1 τ , δ0 ) ds. [0;t]

0

[0;s]

0

[0;s]

(2.17)

It is obvious that m 󵄩󵄩 󵄩󵄩m E sup γm (μn−1 τ , δ0 ) = E sup ∫ 󵄩 󵄩xn−1 (u, τ)󵄩󵄩 μ0 (du). [0;s]

[0;s]

(2.18)

ℝd

From (2.16)–(2.18), the relation (2.15) can be easily derived. The proof of the convergence of approximating sequences and the uniqueness of the solution is standard with using (2.15), and is left to the reader. The theorem is proved. Remark. It can be checked that the Lipshitz condition in the metrics γm is not necessary for the conclusion μt ∈ Mm . Exercise 2.4.1. Prove that under the conditions of Theorem 2.3.1 for every initial measure μ0 ∈ Mm , we have μt ∈ Mm , t ≥ 0. Let us consider the case when μ0 is concentrated in the points {ak ; k ≥ 1} with the weights {pk ; k ≥ 1}. Then the inclusion μ0 ∈ Mm means that ∞

∑ pk ‖ak ‖m < +∞.

k=1

It follows from Theorem 2.3.1 under appropriate conditions in the coefficients μt ∈ Mm . Now it means that ∞

󵄩 󵄩m ∑ pk 󵄩󵄩󵄩x(ak , t)󵄩󵄩󵄩 < +∞

k=1

a. s.

This conclusion almost coincides with the behavior of the stochastic flows from [67], where it is mentioned that under fixed t ≥ 0, ‖x(u, t)‖ = 0, 1 + ‖u‖1+ε ‖u‖ lim = 0, a. s. ‖u‖→∞ 1 + ‖x(u, t)‖1+ε lim

‖u‖→∞

Now let us consider the dependence from the initial measure in the distance γm .

2.4 The stochastic flows with interaction and the moments of initial mass distribution

� 51

Lemma 2.4.1. Let the conditions of Theorem 2.4.1 be fulfilled. Suppose that μ′0 and μ′′ 0 are the initial measures from Mm . Then for every T > 0 there exists such C > 0, that for the corresponding solution of (2.12) the following inequalities hold: m

m

′ ′′ E sup γm (μ′t , μ′′ t ) ≤ Cγm (μ0 , μ0 ) , [0;T]

m

E sup ‖x (u, t) − x ′′ (u, t)‖m ≤ Cγm (μ′0 , μ′′ 0) , ′

u ∈ ℝd .

[0;T]

Proof. Consider m

γm (μ′t , μ′′ t ) ≤

inf

ϰ∈C(μ′0 ,μ′′ 0)

≤ 2m−1

󵄩 󵄩m ∫ ∫ 󵄩󵄩󵄩x ′ (u, t) − x ′′ (v, t)󵄩󵄩󵄩 ϰ(du, dv)

ℝd ℝd

inf′

ϰ∈C(μ0 ,μ′′ 0)

+ 2m−1

inf′

󵄩 󵄩m ∫ ∫ 󵄩󵄩󵄩x ′ (u, t) − x ′′ (u, t)󵄩󵄩󵄩 ϰ(du, dv)

ℝd ℝd

ϰ∈C(μ0 ,μ′′ 0)

󵄩 󵄩m ∫ ∫ 󵄩󵄩󵄩x ′′ (u, t) − x ′′ (v, t)󵄩󵄩󵄩 ϰ(du, dv).

ℝd ℝd

As it was mentioned in the proof of Theorem 2.4.1, 󵄩 󵄩m E sup󵄩󵄩󵄩x ′′ (u1 , t) − x ′′ (u2 , t)󵄩󵄩󵄩 ≤ C1 ‖u1 − u2 ‖m , [0;T]

u1 , u2 ∈ ℝd .

Hence, E sup

inf

′ ′′ [0;T] ϰ∈C(μ0 ,μ0 )

m 󵄩 󵄩m ∫ ∫ 󵄩󵄩󵄩x ′′ (u, t) − x ′′ (v, t)󵄩󵄩󵄩 ϰ(du, dv) ≤ C1 γm (μ′0 , μ′′ 0) .

ℝd ℝd

Now for t ∈ [0; T], 󵄩 󵄩m E sup󵄩󵄩󵄩x ′ (u, t) − x ′′ (u, t)󵄩󵄩󵄩 [0;t]

t

t

m 󵄩 󵄩m ≤ C2 ∫ E sup󵄩󵄩󵄩x ′ (u, τ) − x ′′ (u, τ)󵄩󵄩󵄩 ds + C2 ∫ E sup γm (μ′τ , μ′′ τ ) ds. 0

[0;s]

0

[0;s]

Consequently, t

m 󵄩 󵄩m E sup󵄩󵄩󵄩x ′ (u, t) − x ′′ (u, t)󵄩󵄩󵄩 ≤ C3 ∫ E sup γm (μ′τ , μ′′ τ ) ds. [0;t]

So,

0

[0;s]

(2.19)

52 � 2 Stochastic differential equations for the flows with interactions

m E sup γm (μ′τ , μ′′ τ) [0;t]



m C4 γm (μ′0 , μ′′ 0)

t

m

+ C4 ∫ E sup γm (μ′τ , μ′′ τ ) ds. 0

[0;s]

From this inequality, the last inequality of the lemma follows. The first can be received from (2.19). The lemma is proved. The statement of this lemma will be used in the next chapter for the proof of the Markov structure of the measure-valued process {μt ; t ≥ 0} is the space Mm . Exercise 2.4.2. Let μ1 , μ2 ∈ M and the functions f1 , f2 : ℝd → ℝd be the measurable bijections, which have the measurable inverse. Consider ν1 = μ1 ∘ f1−1 , ν2 = μ2 ∘ f2−1 . Prove that C(ν1 , ν2 ) = {λ : λ = ϰ ∘ (f1 , f2 )−1 , ϰ ∈ C(μ1 , μ2 )}. Exercise 2.4.3. Consider d = 1 and the simple equation dx(u, t) = ∫ℝ φ(x(u, t) − q)W (dt, dq)

{

x(u, 0) = u,

u ∈ ℝ,

where the function φ ∈ C01 (ℝ) and ∫ φ2 (v)dv = 1. ℝ

Consider the sequence of the measures μn =

n 1 ∑ δk , 2n + 1 k=−n

n ≥ 1.

Let μnt = μn ⋅ x(⋅, t)−1 , n ≥ 1, t ≥ 0 be the correspondent measure-valued processes. Prove that ∀t ≥ 0 : E ∫ |u|m μt (du) < +∞, ℝ

and that the sequence {μnt ; n ≥ 1} is not weakly compact in arbitrary Mm for m ≥ 0.

3 The evolutionary measure-valued processes 3.1 Construction of the evolutionary measure-valued processes with the help of transition probabilities This chapter is devoted to the Markov measure-valued processes with the constant mass (for convenience we always suppose that this mass is equal to one). The class of processes, which is the subject of our investigation, can be described briefly in the following way. The measure on the space is carried by the sequence or the flow of random maps. One can consider this flow as a description of the motion of the particles on the phase space. But in our models the motion of the one particle or even of the finite system of the particles is not Markov. It becomes Markov only after consideration together with all mass, which is distributed on the space. In this chapter, we start with the definition of our measure-valued processes and construct the distribution of such process from the probabilities, which describes the behavior of the finite systems of the particles. Then we consider the processes corresponding to the flows with interactions as an important example. Other discrete and continuous time examples are considered. Let us start with the main definition. Definition 3.1.1. The random process {μt ; t ≥ 0} in the space M is called evolutionary Markov with respect to the flow of σ-fields {ℱt ; t ≥ 0} if there exists the measurable function f : Ω × [0; +∞) × X → X such that: (1) For every t ≥ 0, the restriction of f on [0; t] is ℱt × ℬ([0; t]) × ℬ(X)-measurable. (2) For every t ≥ 0, μt = μ0 ∘ ft−1

a. s.

(3) For arbitrary n ≥ 0, u1 , . . . , un ∈ X the random process {(μt , ft (u1 ), . . . , ft (un )), t ≥ 0} is Markov with respect to the flow {ℱt }. Definition 3.1.2. Stochastic flow {ft ; t ≥ 0} from the previous definition is called the stochastic flow with interaction, which is corresponds to the Markov measure-valued process {μt ; t ≥ 0}. Remark. In Definition 3.1.1, we use the notation ft (u) instead f (t, u), as usual, to associate it with μt . From the previous definition, one can see that for Γ ∈ Xn , u1 , . . . , un ∈ X the regular version of the conditional probability P{(ft+s (u1 ), . . . , ft+s (un )) ∈ Γ/ℱt }, if it exists, must depend only on u1 , . . . , un , t, s and μt . Suppose that the regular version exists. Consider from now the time-homogeneous case. Then https://doi.org/10.1515/9783110986518-003

54 � 3 The evolutionary measure-valued processes P{(ft+s (u1 ), . . . , ft+s (un )) ∈ Γ/ℱt } = Q(μt , s, u1 , . . . , un )(Γ), where the kernel Q is measurable with respect to the first arguments, and is the measure with respect to the set-valued argument. In the first chapter, the stochastic kernels, which depend on the measures, were treated. Our aim here is to present the conditions on the family {Q} under which it gives us the possibility to reconstruct the process {μt ; t ≥ 0} and the stochastic flow {ft ; t ≥ 0}. As it was mentioned in the first chapter, the space M of all probability measures on X has in general more complicated structure than the space X itself. So, it will be more convenient to write the analog of the Chapman–Kolmogorov equation on the space X than on M. It occurs that under the continuity condition on the family {Q}, with respect to the measure-valued argument, we can write the replacement of the Chapman–Kolmogorov equation for {Q} on the powers of X. The reason for this is exactly the same as in examples from Sections 2.1 and 2.3. It lies in the fact that the motion of the heavy particles describes the motion of all systems. Indeed, suppose that {μt ; t ≥ 0} is the evolutionary measure-valued Markov process with the corresponding stochastic flow {ft ; t ≥ 0}. Suppose also that the measure μ0 is concentrated in the finite number of the points N

μ0 = ∑ pk δuk . k=1

Then the random process {(ft (u1 ), . . . , ft (uN )); t ≥ 0} is Markov with respect to the flow {ℱt }. For arbitrary t, s ≥ 0 and the Borel subset Δ ⊂ XN , P{(ft+s (u1 ), . . . , ft+s (uN )) ∈ Γ/ℱt } = P{μt+s ∈ M, (ft+s (u1 ), . . . , ft+s (uN )) ∈ Γ/ℱt } = Q(μt , s, ft (u1 ), . . . , ft (uN ))(M × Δ) N

= Q( ∑ pk δft (uk ) , s, ft (u1 ), . . . , ft (uN ))(M × Δ) k=1

= R(ft (u1 ), . . . , ft (uN ), s)(Δ). Here, R(⋅, s)(⋅) is the stochastic kernel on XN . This means that the process {(ft (u1 ), . . . , ft (uN )); t ≥ 0} has the Markov property. Exercise 3.1.1. Prove that now the process {(ft (u1 ), . . . , ft (uN ), ft (v1 ), . . . , ft (vm )); t ≥ 0} also has the Markov property. Hence, from the usual Chapman–Kolmogorov equation one can get that the family {Q} satisfies following relation: ∀m, n ≥ 1, t, s ≥ 0, u1 , . . . , un , v1 , . . . , vm ∈ X, Δ ∈ ℬ(Xn+m ),

3.1 Construction of the evolutionary measure-valued processes

� 55

1 n ∑ δ , t + s, u1 , . . . , un , v1 , . . . , vm )(Δ) n k=1 uk

= Q(

= ∫ Q( Xn+m

⋅ Q(

1 n ∑ δ , s, r1 , . . . , rn , q1 , . . . , qm )(Δ) n k=1 rk

1 n ∑ δ , t, u1 , . . . , un , v1 , . . . , vm )(dp1 , . . . , dpn , dq1 , . . . , dqm ). n k=1 uk

(3.1)

It turns out that with the certain continuity conditions the equality (3.1) is enough for family {Q} to describe the evolutionary measure-valued process. Before we consider the example of the families, which satisfy (3.1), pay some attention to the following exercise. Exercise 3.1.2. Formulate Definitions 3.1.1 and 3.1.2 in the case when the time is discrete. Example 3.1.1. Let X = {1, . . . , N} and the time is discrete. Suppose that for every μ ∈ M {p(μ, i1 , . . . , iN ); i1 , . . . , iN = 1, . . . , N} is the distribution of the certain random map fμ : X → X as is Example 1.1.3. Define the Markov chain {(νn , gn ); n ≥ 0} in M × XX with the following transition kernel: K(ν, g)(Δ) = P{(μ ∘ fμ−1 , fμ (g)) ∈ Δ}. Now put Q(μ, n, 1, . . . , N)(i1 , . . . , iN ) = P{gn (1) = i1 , . . . , gn (N) = iN /ν0 = μ, g0 = e}, where e is an identity map. Check that {Q} satisfies (3.1). Note that in our case (3.1) has the form N

Q( ∑ pk δk , n + m, 1, . . . , N)(i1 , . . . , iN ) k=1

N

=

N



j1 ,...,jN =1

Q( ∑ pk δk , n, 1, . . . , N)(j1 , . . . , jN )

N





r1 ,...,rN =1

k=1

N

Q( ∑ pk δjk , m, 1, . . . , d)(r1 , . . . , rN ) ⋅ 1{rj

1

k=1

=i1 ,...,rjN =iN } .

Now N

Q( ∑ pk δk , n + m, 1, . . . , N)(i1 , . . . , iN ) k=1

N

= P{gn+m (1) = i1 , . . . , gn+m (N) = iN /ν0 = ∑ pk δk , g0 = e} k=1

56 � 3 The evolutionary measure-valued processes N

=

N



j1 ,...,jN =1

Q( ∑ pk δk , n, 1, . . . , N)(j1 , . . . , rN ) k=1

N

= P{gjm = i1 , . . . , gm (jN ) = iN /ν0 = ∑ pk δjk , g0 = e} k=1

N

=

N



j1 ,...,jN =1

Q( ∑ pk δk , n, 1, . . . , N)(j1 , . . . , rN )

N





r1 ,...,rN =1

k=1

N

Q( ∑ pk δjk , m, 1, . . . , N)(r1 , . . . , rN ) ⋅ 1{rj

1

k=1

=i1 ,...,rjN =iN } .

Note that the additional summation with respect to r1 , . . . , rN arises due to the condition that j1 must get i1 , j2 must get i2 and so on. The construction of this example can be spread to the case of countable X. Exercise 3.1.3. Give the precise description of {Q} and verify (3.1) in this case. Example 3.1.2. Let it be as above X = {1, . . . , N}. Let K be the stochastic kernel from the previous example. Build the compound Poisson process {(ν, gt ); t ≥ 0} M × XX with the transition probability (λt)k k K , k! k=0 ∞

Lt = e−λt ∑ where λ is the fixed positive number. Define

Q(μ, t, 1, . . . , N)(i1 , . . . , iN ) = P{gt (1) = i1 , . . . , gt (N) = iN /μ0 = μ, g0 = e}. Exercise 3.1.4. Check whether this family satisfies (3.1). Exercise 3.1.5. Prove that for the Markov process {(νt , gt ); t ≥ 0}, which started from the point (μ, e), and the following relation holds: μt = μ ∘ gt−1

a. s.

(Hint: Use the discrete approximation with respect to time.) Now let us consider the main result of this section. Theorem 3.1.1. Let the family {Q(μ, t, u1 , . . . , un ); μ ∈ M; t ≥ 0, u1 , . . . , un ∈ X, n ≥ 1} satisfy under every fixed t ≥ 0 the conditions of Lemma 1.3.1 and (3.1) holds. Then for the every initial measure μ0 there exists the evolutionary Markov process {μt ; t ≥ 0} and the correspondent stochastic flow {ft ; t ≥ 0} such that ∀v1 , . . . , vn ∈ X, t, s ≥ 0, Δ ∈ ℬ(Xn )

3.1 Construction of the evolutionary measure-valued processes � 57

P{(ft+s (v1 ), . . . , ft+s (vn )) ∈ Δ/μτ , fτ , τ ≤ t} = Q(μt , sft (v1 ), . . . , ft (vn ))(δ). Proof. For every n ≥ 0, build the set of stochastic kernels on M×Xn in the following way. Fix t ≥ 0, μ ∈ M and u1 , . . . , un ∈ X. Using the set of the finite consistent distributions {Q(μ, t, . . .)}, build on the certain probability space the random map φt,μ : X → X such that Q(μ, t, v1 , . . . , vk )(Γ) = P{(φt,μ (v1 ), . . . , φt,μ (vk )) ∈ Γ}, Γ ∈ ℬ(Xk ). Then define for Δ ∈ ℬ(M × Xn ), Kn (t, μ, u1 , . . . , un )(Δ) = P{(μ ∘ φ−1 t,μ , φt,μ (u1 ), . . . , φt,μ (un )) ∈ Δ}. Due to Lemma 1.2.3, Kn (t, ⋅)(⋅) is the stochastic kernel on M×Xn . Now fix the measure μ and define the following family of consistent finite-dimensional distributions. For 0 ≤ t0 < t1 < ⋅ ⋅ ⋅ < tm , u1 , . . . , un ∈ X, m

n

R(t0 , . . . , tm , u1 , . . . , un )(∏ ∏ Δkj × Γkj ) k=0 j=1

n

= ∏(1Δ0 (μ)1Γ0 (uk )) k

k=1

k

∫ Kn (t1 − t0 , μ, u1 , . . . , un )(dν1 , dv11 , . . . , dv1n )



M×Xn n

⋅ ∏(1Δ1 (ν1 )1Γ1 (v1k )) ⋅ k=1

k

k

∫ Kn (t2 − t1 , v1 , v11 , . . . , v1n )(dv1 , dv21 , . . . , dv2n ) ⋅ ⋅ ⋅

M×Xn

n

⋅ ∏(1Δm (νm )1Γm (vm k )). k=1

k

k

From the proof of Lemma 1.3.1, it follows that it is enough to check the consistency of such distributions in the case when μ=

1 N ∑δ . N k=1 uk

Note that for such measures, consistency follows from (3.1). Consider now the random process {(μ(t, u), φ(t, u)), t ≥ 0, u ∈ X}, which has the distributions {R}. Due to the form of {R}, {μ(t, u)} can be chosen independent from u. That is, we get the random process {(μ(t), φ(t, u)); t ≥ 0, u ∈ X}. Moreover, it is possible to check that under fixed t, φ(t, ⋅) is continuous with respect to u and that for arbitrary u1 , . . . , uN ∈ X {(μ(t), φ(t, u1 ), . . . , φ(t, un )); t ≥ 0} is the Markov process. Let us check that for every t ≥ 0,

58 � 3 The evolutionary measure-valued processes Eγ(μ(t), μ ∘ φ(t, ⋅)−1 ) = 0. Due to the stochastic continuity of φ(t, ⋅), Eγ(μ(t), μ ∘ φ(t, ⋅)−1 ) = lim E(μ(t), μn ∘ φ(t, ⋅)−1 ), n→∞

where {μn ; n ≥ 1} is the sequence of the discrete measures, which weakly converges to μ. Let μn =

1 n ∑δ , n k=1 xk

n ≥ 1.

Then Eγ(μ(t), μ ∘ φ(t, ⋅)−1 ) =

∫ γ(ν, M×Xn

1 n ∑ δ )K (t, μ, x1 , . . . , xn )(dν, dy1 , . . . , dyn ). n k=1 yk n

(3.2)

Using the random map φt,μ , which was considered when constructing the kernel Kn (3.2), can be rewritten as follows: ∫ γ(ν, M×Xn

1 n −1 ̃ ∘ φ−1 ∑ δ )K (t, μ, x1 , . . . , xn )(dν, dy1 , . . . , dyn ) = Eγ(μ t,μ , μn ∘ φt,μ ), n k=1 yk n

where Ẽ is the mathematical expectation on the probability space correspondent to φt,μ . The last expression tends to zero since φt,μ is stochastically continuous on X. We have proved such a fact in Section 1.3. So, μ(t) = μ ∘ φ(t, ⋅)−1

a. s.

The theorem is proved. The examples of the evolutionary measure-valued processes will be treated in the next section of this chapter.

3.2 Evolutionary measure-valued processes on the discrete space This section contains the models first described on the physical level and then they are interpreted as an evolutionary process. Example 3.2.1. Let X = ℤ. Consider N particles on X which move independently as a symmetric random walk. Suppose that the motion of the whole system satisfies the following rule. If the two particles meet one another, then they will move together after meeting. Note that this description is only physical. What really does “move inde-

3.2 Evolutionary measure-valued processes on the discrete space

� 59

pendently” mean if after meeting particles move together? In order to understand the mathematical meaning of this rule, the following exercise will be useful. Example 3.2.2. Let {xn ; n ≥ 1} and {yn ; n ≥ 1} be the independent symmetric random walks on ℤ. Define ∀n ≥ 1 : ℱn = σ(xk , yk ; k ≤ n) and τ = inf{n : xn = yn }. Prove that a) τ is the Markov moment with respect to {ℱn }. b) P{τ < +∞} = 1 for arbitrary initial values x1 and y1 . c) The new sequence xn , n ≤ τ, zn = { yn , n > τ is again the symmetric random walk on ℤ. After this exercise, it is clear that one can describe our model in a rigorous way as a Markov process in ℤN with the special transition probabilities. But this way is rather complicated. In order to see this, the reader can solve the following exercise. Exercise 3.2.1. Find the transition probabilities and write it as simple as possible for N = 3. If we are interested only in the mass distribution of the our system and considering the fact that all particles have the same weight, we can apply the definition of the evolutionary process to the description of this model. Now it is enough to define the family {Q(μ, 1, u1 , . . . , un ); μ ∈ M, u1 , . . . , un ∈ ℤ, n ≥ 1}. Put m

Q(μ, 1, u1 , . . . , un )(j1 , . . . , jn ) = ∏ h(uk , jk ), k=1

where 1 , |u − j| = 1, h(u, j) = { 2 0, |u − j| ≠ 1.

(3.3)

It is easy to check that this family satisfies all the conditions of Theorem 3.1.1. So, we can build the corresponding measure-valued evolutionary process {μn ; n ≥ 0} and the

60 � 3 The evolutionary measure-valued processes stochastic flow {fn ; n ≥ 0}. Now the distribution of the random maps {fn ; n ≥ 0} does not depend on the measure μ0 , and for every n ≥ 1, fn can be represented as fn = gn gn−1 ⋅ ⋅ ⋅ ⋅ ⋅ g1 , where {gn ; n ≥ 1} are independent random maps on ℤ with the distribution (3.3). Note that for μ0 =

1 N ∑δ N k=1 uk

the random measures μn = μ0 ∘ fn−1 describe the required system evolution. Also, note that our approach gives us the possibility to consider the infinite system of the particles (when the measure μ0 is concentrated on the countable set), but with the different weights. Example 3.2.3. Modify the rules of the previous example in the following way. Let the particles remain without moving the place where they meet. In this case, the probabilities {Q} can be defined by the formula n

Q(μ, 1, u1 , . . . , un )(j1 , . . . , jn ) = ∏ h(μ({uk }), uk , jk ). k=1

Here, μ({uk }) is the weight, which corresponds to the point uk under the measure μ, h : [0; 1] × ℤ2 → [0; 1], is the function continuous on the first argument under the fixed others such that 0, r ≥ N2 , i ≠ j, { { { h(r, i, j) = {0, |i − j| ≠ 1, { {1 1 { 2 , r ≤ N , |i − j| = 1. Exercise 3.2.2. Check that the family {Q} defines the evolutionary measure-valued process and that for μ0 =

1 N ∑δ N k=1 uk

the measures {μn ; n ≥ 0} describe the desired system evolution. Other examples of the evolutionary processes were considered in Sections 1.3 and 3.1.

3.3 The stochastic flows with interaction and the evolutionary processes

� 61

3.3 The stochastic flows with interaction and the evolutionary processes In this section, we will consider the stochastic differential equations with the interaction from Chapter 2. The corresponding measure-valued processes appear to be the evolutionary processes. Note that for the process {μt ; t ≥ 0} from equation (2.12) the stochastic flow x already exists. All we need to check is that for every u1 , . . . , un ∈ ℝd the process {(μt , x(u1 , t), . . . , x(un , t)); t ≥ 0} has the Markov property relative to the flow {ℱt } generated by the Wiener sheet. Theorem 3.3.1. Let the conditions of Theorem 2.4.1 hold. Then {μt ; t ≥ 0} is the homogeneous Markov process in Mm relative to the flow {ℱt } generated by the Wiener sheet. Proof. For μ ∈ Mm , t ≥ 0, Δ ∈ ℬ(Mm ), define Kt (μ, Δ) = P{μt ∈ Δ}, where {μt ; t ≥ 0} is the measure-valued process obtained from the solution of (2.12) with the initial condition μ. Let us verify that Kt is the stochastic kernel on Mm . It is enough to check the measurability with respect to μ. It follows from Lemma 2.4.1 that Kt (⋅, ⋅) is weakly continuous with respect to the first argument. Hence, for every bounded continuous function f : Mm → ℝ, the integral ∫ f (ν)Kt (μ, dν) Mm

is the continuous function of μ. From here, the measurability of Kt with respect to μ can be obtained in the obvious way. Let us check that ∀t, s ≥ 0 ∀Δ ∈ ℬ(M): P{μt+s ∈ Δ/ℱt } = Ks (μt , Δ). As it was mentioned above after Theorem 2.3.1, the statements of Theorem 2.3.1 (and 2.4.1) remain to be true in the case when the initial condition is random but independent from the Wiener sheet. Consider equation (2.12) on the interval [t; t + s] with the initial measure μt . Denote the corresponding solution by y(u, τ), u ∈ ℝd , τ ∈ [t; t + s] and the measure-valued process by {ντ ; τ ∈ [t; t + s]}. Now

τ

ντ = μt ∘ y(⋅, τ)−1 ,

y(u, t) = u,

u ∈ ℝd ,

τ

y(u, τ) = u + ∫ a(y(u, τ1 ), ντ1 , τ1 )dτ1 + ∫ ∫ b(y(u, τ1 ), ντ1 , τ1 , q)W (dτ1 , dq). t

t ℝd

(3.4)

62 � 3 The evolutionary measure-valued processes At the same time, the stochastic flow x being the solution of the initial equation satisfies [t; t + s] in the following equation: τ

τ

x(u, τ) = x(u, t) + ∫ a(x(u, τ1 ), μτ1 , τ1 )dτ1 + ∫ ∫ b(x(u, τ1 ), μτ1 , τ1 , q)W (dτ1 , dq). t

t ℝd

Consider the new process z(u, τ) = x(x(u, t)−1 , τ), τ ∈ [t; t + s]. Here, x(⋅, t)−1 denotes the inverse of the random homeomorphism x(⋅, t). Exercise 3.3.1. Prove that x(⋅, t)−1 is jointly measurable relative to u and ω. Hint!: Use the homeomorphic property of x. Let B be a closed ball in ℝd , {vk ; k ≥ 1} be the dense sequence in B. Then {(ω, u) : xω (u, t)−1 ∈ B} = {(ω, u) : u ∈ xω (B, t)} = {(ω, u) : inf ‖u − xω (vk , t)‖ = 0}. k≥1

The process z satisfies the equation τ

τ

z(u, τ) = u + ∫ a(z(u, τ1 ), μτ1 , τ1 )dτ1 + ∫ ∫ b(z(u, τ1 ), μτ1 , τ1 , q)W (dτ1 , dq), t

t ℝd

μτ − μt ∘ z(⋅, τ) . −1

Since the solution of (3.4) is unique, then ∀τ ∈ [t; t + s]: y(u, τ) = z(u, τ)

a. s.

So, μt+s = μt ∘ y(⋅, s)−1

a. s.

Due to the independence of the measure μt and the increments of W in [t; t + s], P{μt+s ∈ Δ/ℱt } = P{μt ∘ y(⋅, s)−1 ∈ Δ/ℱt } = Ks (μt , Δ). The theorem is proved. Corollary 3.3.1. Under the conditions of Theorem 2.4.1, the process {μt ; t ≥ 0} is the evolutionary measure-valued process. Proof. It is enough to check the Markov property for the process of such a kind {(μt , x(u1 , t), . . . , x(un , t)); t ≥ 0}. This can be carried out with the help of the representation

3.4 The stochastic semigroups and evolutionary processes

x(uk , t + s) = y(x(uk , t), s),

� 63

k = 1, . . . , n

similar to the proof of Theorem 3.3.1. Remark. The statement of Corollary 3.3.1 remains to be true if we consider the situation of Theorem 2.3.1 instead of 2.4.1. The continuity of K in the weak sense holds now due to Lemma 2.3.1.

3.4 The stochastic semigroups and evolutionary processes In this section, we present the construction of the evolutionary processes from the systems of the particles, which move along the orbits of the stochastic semigroup with the speed depending on the mass distribution on the space. Consider the homogeneous stochastic semigroup {Gst ; 0 ≤ s < t < +∞} of continuous mappings on X. Precisely, {Gst } is such a family of continuous random mappings on X that (1) Gss is the identity map for every s ≥ 0. s s s (2) Gs23 (Gs12 ) = Gs13 a. s. for every 0 ≤ s1 ≤ s2 ≤ s3 < +∞. sn s (3) For arbitrary 0 ≤ s1 ≤ s2 ≤ ⋅ ⋅ ⋅ ≤ sn , the random maps Gs12 , . . . , Gsn−1 are independent. s1 +t s2 +t (4) For arbitrary s1 , s2 , t ≥ 0, the mappings Gs1 and Gs2 have the same distributions. Remark. When we say “the distribution of the random map,” we mean the corresponding family of the finite-dimensional distributions. The semigroup related to the ordinary stochastic differential equation is very important and a well-known example. Example 3.4.1. Consider the ordinary stochastic differential equation in ℝd , dξ(t) = a(ξ(t))dt + b(ξ(t))dw(t),

(3.5)

with the Wiener process w and the coefficients a, b, which satisfy the Lipshitz condition. Define for 0 ≤ s ≤ t and u ∈ ℝd Gst (u) as the solution to the Cauchy problem for (3.5) with the initial condition u at the time s. It is well known (see, e. g., [53]) that there exists the modification of {Gst }, which is continuous with probability one on s, t and u. Exercise 3.4.1. Prove that {Gst } satisfies now all conditions (1)–(4). The semigroup {Gst } allows us to construct the homogeneous Markov process in the space M × C(X, X). Suppose that X is σ-compact. In C(X, X), consider the topology of the uniform convergence on the compact sets. Then the maps {Gst } become the random elements in C(X, X). Exercise 3.4.2. Prove that for arbitrary f ∈ C(X, X) the composition Gst (f ) is the random element in C(X, X).

64 � 3 The evolutionary measure-valued processes For μ ∈ M and f ∈ C(X, X), define μt = μ ∘ C0t ,

ft = G0t (f ),

t ≥ 0.

Exercise 3.4.3. Verify that the process {(μt , ft ); t ≥ 0} is the homogeneous Markov process relative to the flow of σ-fields, τ

ℱt = σ(G0 ; τ ≤ t),

t ≥ 0.

Consider the positive continuous bounded function φ on X. Let inf φ(x) > 0. Build the homogeneous additive functional [41] with φ from {(μt , ft ); ty ≥ 0}, t

Ist = ∫ ∫ φ(u)μτ (du)dτ,

0 ≤ s < t < +∞.

s X

Define the random time τ(t) = inf{r ≥ 0 : I0r = t}. Note that due to the condition on φ, ∀t ≥ 0 : τ(t) < +∞

a. s.

It is known [41] that the new process νt = μτ(t) ,

t≥0

is the Markov process. Also, for every u1 , . . . , un ∈ X, ν1 , fτ(t) (u1 ), . . . , fτ(t) (un ),

t≥0

is the Markov process. Note that if we set f0 to be equal to the identity map, then −1 νt = ν0 ∘ fτ(t) ,

t ≥ 0.

So, {νt ; t ≥ 0} is the evolutionary process. This process can be interpreted in the following way. For every u ∈ X, define its “orbit,” i. e., the random process {G0t (u); t ≥ 0}. Then the measure-valued process {νt ; t ≥ 0} describes the motion of the particles along their “orbits” but with the speed depending on the whole mass distribution. Consider the concrete example of the above construction. Example 3.4.2. Consider the semigroup from Example 3.4.1 first in the case b = 0. Now {Gst } is the evolutionary family related to the equation dξ(t) = a(ξ(t))dt.

3.4 The stochastic semigroups and evolutionary processes

� 65

Let n

μ0 = ∑ pk δuk . k=1

Then t n

I0t = ∫ ∑ pk φ(G0s (uk ))ds. 0 k=1

The function τ is the inverse function for I. So, d(G0τ(t) (u))

=

a((G0τ(t) (u)))

−1

n

⋅ (∑

k=1

pk φ(G0τ(t) (uk )))

dt.

Let us denote νt = μ0 ∘ (G0τ(t) )

−1

x(u, t) = G0τ(t) (u),

n

= ∑ pk δGτ(t) (u ) , k=1

0

u ∈ ℝd ,

k

t ≥ 0.

Then dx(u, t) = a(x(u, t)) ⋅ (∫ℝd φ(v)νt (dv))−1 dt,

{

νt = ν0 ∘ x(⋅, t)−1 ,

x(⋅, 0) = u,

u ∈ ℝd , t ≥ 0.

So, we can see that {νt ; t ≥ 0} now is corresponding to the solution of the equation for the flow with interaction. Now consider the stochastic case. Let b ≠ 0 and the measure μ0 is the same one as mentioned above. Then the function t n

I0t = ∫ ∑ pk φ(G0s (uk ))ds 0 k=1

is random. Denote by τ the inverse to I0 . Then dG0τ(t) (u)

=

−1 n τ(t) τ(t) a(G0 (u))( ∑ pk φ(G0 (uk ))) dt k=1

̃ (t). + b(G0τ(t) (u))d w

̃ (t) = w(τ(t)) is the local square-integrable martingale. Here, w ̃. Exercise 3.4.4. Prove this property of w

66 � 3 The evolutionary measure-valued processes ̃ can be represented as an Itô stochastic integral, Consequently, in [73], w t

n

̃ (t) = ∫( ∑ w 0

k=1

−1 τ(t) pk φ(G0 (uk ))) dw1 (s),

t ≥ 0,

where w1 is the new Wiener process. Hence, the flow {G0τ(t) } satisfies the following equation: ̃ τ(t) (u), ν )dw (t), dG0τ(t) (u) = ã(G0τ(t) (u), νt )dt + b(G t 1 0

νt = ν0 ∘ (G0τ(t) ) . −1

Consequently, after the random change of the time in the flow related to the stochastic semigroup with independent increments, we obtain the flow with interaction, which satisfies an equation of the type (2.12).

3.5 The generator of the evolutionary process Here, we consider the infinitesimal characteristics of the evolutionary process. This allows us to compare the evolutionary process with other measure-valued processes of constant mass. Let us begin with the situation of the general complete separable metric space (X, ρ). Consider the evolutionary measure-valued process {μt ; t ≥ 0} on X. From the general point of view, it can be treated as the Markov process in the space M. As we have already seen in Chapter 1, the space M is not locally compact in general. So, usual sets of the “good” functions are not dense in the space Cb (M) of bounded continuous functions on M with the uniform distance. But, fortunately, in order to define the transition semigroups corresponding to the evolutionary process, it is enough to know its generator on the poor set of the functions, e. g., on the polynomials. Define the set 𝒫 of the polynomials on M as follows. Let φk ∈ C(Xk ) be bounded and symmetric for k = 1, . . . , n, φ0 ∈ ℝ. Definition 3.5.1. The polynomial on M with the kernels {φk ; k = 0, . . . , n} is the function n

R(μ) = φ0 + ∑ ∫ ⋅ ⋅ ⋅ ∫ φk (u1 , . . . , uk )μ(du1 ) ⋅ ⋅ ⋅ μ(duk ). k=1

Xk

Exercise 3.5.1. Show that the polynomials separate the points of M. Exercise 3.5.2. Prove the continuity of the polynomials on every space Mm , m ≥ 0. The polynomials are differentiable on M in the following sense. Let μ ∈ M, u ∈ X and ε ∈ ℝ. Consider n

R(μ + εδu ) := φ0 + ∑ ∫ ⋅ ⋅ ⋅ ∫ φk (v1 , . . . , vk )(μ + εδu )(dv1 ) ⋅ ⋅ ⋅ (μ + εδu )(dvk ). k=1

Xk

3.5 The generator of the evolutionary process

� 67

Definition 3.5.2 ([9]). The derivative of the polynomial R at the measure μ and point u is the value 󵄨󵄨 d δ 󵄨 R(μ) := R(μ + εδu )󵄨󵄨󵄨 . 󵄨󵄨ε=0 δu dε Note that

δ R(μ) δu

always exists and n δ R(μ) = ∑ k ∫ ⋅ ⋅ ⋅ ∫ φk (u, v2 , . . . , vk )μ(dv2 ) ⋅ ⋅ ⋅ μ(dvk ). δu k=1 Xk−1

The derivatives of the higher order can be defined by induction. Exercise 3.5.3. Give the rigorous definition for the derivative δm R(μ) δu1 ⋅ ⋅ ⋅ δum and find it. Now consider the examples of the evolutionary processes and try to find its generator on the polynomials. Example 3.5.1. Let {Gst } be a stochastic semigroup on X with the independent stationary increments as in the previous section. Consider the process {G0t (u); t ≥ 0}. This is the Markov process. Suppose that A is its generator in Cb (X). Define the evolutionary measure-valued process {μt ; t ≥ 0} simply as μt = μ0 ∘ (G0t ) , −1

t ≥ 0.

The transition semigroup of this Markov process in M and denote by {T̃t ; t ≥ 0}. Take R ∈ 𝒫 and find lim

t→0+

1 ̃ (T R − R)(μ). t t

Let R have the kernels {φk , k = 0, . . . , n} of the special kind. Suppose that for every k = 0, . . . , n and every u1 , . . . , uk−1 ∈ X the function φk (u1 , . . . , uk−1 , ⋅) belongs to the domain of definition 𝒟(A). Then 1 ̃ (T R − R)(μ) t t 1 n = lim E ∑ ∫ ⋅ ⋅ ⋅ ∫(φk (G0t (v1 ), . . . , G0t (vk )) − φk (v1 , . . . , vk ))μ(dv1 ) ⋅ ⋅ ⋅ μ(dvk ) t→0+ t k=1

lim

t→0+

Xk

n

1 = lim ∑ ∫ ⋅ ⋅ ⋅ ∫ E(φk (G0t (v1 ), . . . , G0t (vk )) − φk (v1 , . . . , vk ))μ(dv1 ) ⋅ ⋅ ⋅ μ(dvk ). (3.6) t→0+ t k=1 Xk

68 � 3 The evolutionary measure-valued processes Now φk (G0t (v1 ), . . . , G0t (vk )) − φk (v1 , . . . , vk ) k

= ∑(φk (G0t (v1 ), . . . , G0t (vk ), vj+1 , . . . , vk ) − φk (G0t (v1 ), . . . , G0t (vj−1 ), vj+1 , . . . , vk )). j=1

For future calculations, the following lemma will be useful. Note that for v1 , . . . , vn ∈ X the process {(G0t (v1 ), . . . , G0t (vn )); t ≥ 0} is Markov. Denote its generator by An . Suppose that for every k = 1, . . . , n, φk ∈ 𝒟(Ak ). Then the limit in (3.6) is equal to n

∑ ∫ ⋅ ⋅ ⋅ ∫ Ak φk (v1 , . . . , vk )μ(dv1 ) ⋅ ⋅ ⋅ μ(dvk ).

k=1

(3.7)

Xk

The next example shows the concrete realization of (3.7). Example 3.5.2. Let X = ℝ. Consider the semigroup {Gst } related to the stochastic differential equation dξ(t) = ∫ φ(ξ(t) − q)W (dt, dq), ℝ

where φ ∈ C01 (ℝ). It follows from the Itô formula for the Wiener sheet (see Section 2.2) that for every n ≥ 1, C02 (ℝn ) ⊂ 𝒟(An ) and for every f ∈ C02 (ℝn ), An f (v1 , . . . , vn ) =

1 n ′′ ∑ f ∫ φ(vi − q)φ(vj − q)dq. 2 ij=1 ij ℝ

Hence, in this case (3.6) turns to be k 1 n ∑ ∫ . k. . ∫ ∑(φk )′′ kj ∫ φ(vi − q)φ(vj − q)dqμ(dv1 ) ⋅ ⋅ ⋅ μ(dvk ). 2 k=1 ij ℝ



The next example is devoted to the flow with interaction in the deterministic case. Example 3.5.3. Consider the equation with the interaction on ℝ, dx(u, t) = a(x(u, t), μt )dt,

{

x(u, 0) = u,

μt = μ0 ∘ x(⋅, t)−1 ,

u ∈ ℝ, t ≥ 0.

In this case, suppose that for every k = 1, . . . , n, φk ∈ C01 (ℝk ). Then (3.6) has the value

3.5 The generator of the evolutionary process

� 69

n

∑ k ∫ . k. . ∫(φk )′1 (v1 , . . . , vk )a(v1 , μ)μ(dv1 ) ⋅ ⋅ ⋅ μ(dvk ).

k=1



Example 3.5.4. Let X = {1, . . . , N}. Consider the measure-valued process {μt ; t ≥ 0} from the Example 3.2.2. Then for the symmetric function φk : Xk → ℝ, 1 E(∫ . k. . ∫ φk (v1 , . . . , vk )μt (dv1 ) ⋅ ⋅ ⋅ μt (dvk ) t→0+ t lim

X

− ∫ . k. . ∫ φk (v1 , . . . , vk )μ0 (dv1 ) ⋅ ⋅ ⋅ μ0 (dvk )) X

= lim

t→0+

1 k ∫ . . . ∫ E(φk (gt (v1 ), . . . , gt (vk )) − φk (v1 , . . . , vk )μ0 (dv1 ) ⋅ ⋅ ⋅ μ0 (dvk ). t X

Here, {gt ; t ≥ 0} is the stochastic flow related to the process {μt ; t ≥ 0}. Using the kernel Kt , we can write lim

t→0+

1 k ∫ . . . ∫ E(φk (gt (v1 ), . . . , gt (vk )) − φk (v1 , . . . , vk )μ0 (dv1 ) ⋅ ⋅ ⋅ μ0 (dvk ) t X

= lim

t→0+

1 k ∫...∫ t R



(φk (f (v1 ), . . . , f (vk )) − φk (v1 , . . . , vk ))Kt (μ0 , e)(dν, df )

M×Xm fX

= λẼ ∫ . k. . ∫(φk (fμ0 (v1 ), . . . , fμ0 (vk )) − φk (v1 , . . . , vk )μ0 (dv1 ) ⋅ ⋅ ⋅ μ0 (dvk )). X

Here, fμ0 is the random map related to μ0 and Ẽ is the corresponding expectation. All previous examples show that to define the action of the generator on polynomials we have to control the behavior of the probabilities Q(μ, t, u1 , . . . , un ) under t → 0+. Let Bn denote the space of the bounded real-valued measurable functions on Xn for every n ≥ 1. Denote by Ttn the following operator on M × Bn : t(μ, f )(u1 , . . . , un ) = ∫ f (v1 , . . . , vn )Q(μ, t, u1 , . . . , un )(dv1 , . . . , dvn ). Xn

This operator is similar to the transition operator of the Markov process. But now it has different properties. The reason is that now the finite number of the particles do not form a closed system with the Markov motion. The properties of {Ttn ; t ≥ 0, n ≥ 1} are collected in the next lemma. Lemma 3.5.1. The family {Ttn ; t ≥ 0, n ≥ 1} has the properties: (1) If f is nonnegative, then Ttn (μ, f ) is also nonnegative. (2) Ttn (μ, 1) = 1, where 1 is the function totally equal to one.

70 � 3 The evolutionary measure-valued processes (3) For the function f (u1 , . . . , un ) = g(u1 , . . . , un−1 ), Ttn (μ, f )(u1 , . . . , un )

=

u1 , . . . , un ∈ X, n−1 Tt (μ, f )(u1 , . . . , un ).

(4) For s, t ≥ 0, n Tt+s (μ, f )(u1 , . . . , un ) = ETtn (μ, f )(φs (u1 ), . . . , φs (un )),

where {φt ; t ≥ 0} is the stochastic flow related to the evolutionary process. Proof. The properties (1)–(3) are evident. We need only to prove the last statement. Let {μt ; t ≥ 0} and {φt ; t ≥ 0} be the evolutionary process and related stochastic flow. Then, by definition, n Tt+s (μ, f )(u1 , . . . , un ) = Ef (φt+s (u1 ), . . . , φt+s (un )).

Using the Markov property of the process {(μt , φt (u1 ), . . . , φt (un )), t ≥ 0}, we can write Ef (φt+s (u1 ), . . . , φt+s (un ))

= EE(μs ,φs (u1 ),...,φs (un )) f (φt (φs (u1 )), . . . , φt (φs (un ))) = ETtn (μs , f )(φs (u1 ), . . . , φs (un )).

Here, the symbol E(μs ,φs (u1 ),...,φs (un )) denotes the expectation for the process, which starts from (μs , f )(φs (u1 ), . . . , φs (un )). The lemma is proved. Using the family {Ttn }, we can construct the family of Markov semigroups. For n ≥ 1, define the semigroup in Bn by the rule Stn (f )(u1 , . . . , un ) = Ttn (

1 n ∑ δ , f )(u1 , . . . , un ). n k=1 uk

Lemma 3.5.2. For the fixed n ≥ 1, {Stn ; t ≥ 0} is the Markov semigroup in Bn . Proof. It is enough to check only the semigroup property. Due to the previous lemma, n n St+s (f )(u1 , . . . , un ) = Tt+s (

= ETtn (

1 n ∑ δ , f )(u1 , . . . , uk ) n k=1 uk 1 n ∑ δφ (u ), f )(φs (u1 ), . . . , φs (uk )) n k=1 s k

= EStn (f )(φs (u1 ), . . . , φs (uk )) = Stn (Stn (f ))(u1 , . . . , uk ). Here, we use the symbol of the mathematical expectation in the case when

3.5 The generator of the evolutionary process

μ0 =

� 71

1 n ∑δ . n k=1 uk

The lemma is proved. The Markov properties of {Stn ; t ≥ 0} are related to the reason that was already mentioned in Chapter 2. Namely, the system containing all the heavy particles has the Markov behavior. Suppose now that for the certain f there exists the limit in the uniform norm, lim

t→0+

1 n T (μ, f ) = An (μ, f ). t t

Then 1 n 1 n St (f )(u1 , . . . , un ) = An ( ∑ δφuk , f )(u1 , . . . , un ). t→0+ t n k=1 lim

Hence, formally, if we know {An (μ)} then we can find the semigroups {Stn }, and finally the probabilities {Q} using the continuity property. So, the question is how can we can reconstruct {An (μ)} from the values of the generator on the polynomials? Here, we consider only the algebraic side of this problem because the analytic aspect related to the domain of definition requires special consideration in every case. Note first that under the conditions of Theorem 3.1.1, to define the transition probabilities of the evolutionary process in the space of measures, we only need the probabilities {Q(

1 n ∑ δ , t, u1 , . . . , un ); n ≥ 1, u1 , . . . , un ∈ X, t ≥ 0}. n k=1 uk

And, moreover, it is important to know only the values Q(

1 n ∑ δ , t, u1 , . . . , un )(Δ) n k=1 uk

for the symmetric sets Δ. For every t ≥ 0 and μ ∈ M, we need are in the reconstruction of the transition probability K(t, μ, ⋅), which is the weak limit, K(t, μ, ⋅) = lim K(t, μn , ⋅), n→∞

where the discrete measure μn for each n ≥ 1 has the form μn =

1 n ∑ δ n. n k=1 uk

72 � 3 The evolutionary measure-valued processes The transition probability K(t, μn , ⋅) can be calculated as follows. For every Borel set Γ ⊂ M, K(t, μn , Γ) = ∫ 1Γ ( Xn

1 n ∑ δ )Q(μn , t, u1n , . . . , unn )(dv1 , . . . , dvn ). n k=1 vk

(3.8)

Note that the function Xn ∋ (v1 , . . . , vn ) 󳨃→ 1Γ (

1 n ∑ δ ) ∈ ℝ+ n k=1 vk

is symmetric with respect to the permutations of v1 , . . . , vn . So, in (3.8) the integral can be obtained using the values of Q(μn , t, u1 , . . . , un ) only in the symmetric sets. The previous reasoning leads to the following lemma. Lemma 3.5.3. Suppose that {μt ; t ≥ 0} and {νt ; t ≥ 0} are such evolutionary processes for ̃ t, . . .)} satisfy the conditions of Theorem 3.1.1 which their families {Q(μ, t, . . .)} and {Q(ν, and the generators A and B of these processes have the following property: ̃ : Af = Bf . ∀f ∈ 𝒫

̃, 𝒟(A) ∩ 𝒫 = 𝒟(B) = 𝒫

Then the transition probabilities for {μt } and {νt } coincide. Proof. As it was mentioned above, it is enough to verify that for arbitrary u1 , . . . , un ∈ X, t ≥ 0 and symmetric bounded function φn ∈ C(Xn ), the following equality holds: ∫ φn (v1 , . . . , vn )Q( Xn

1 n ∑ δ , t, u1 , . . . , un )(dv1 , . . . , dvn ) n k=1 uk

̃ = ∫ φn (v1 , . . . , vn )Q( Xn

1 n ∑ δ , t, u1 , . . . , un )(dv1 , . . . , dvn ). n k=1 uk

To do this, it is sufficient to prove the equality An (

n 1 n ̃ n ( 1 ∑ δu , φn )(u1 , . . . , un ) ∑ δuk , φn )(u1 , . . . , un ) = A n k=1 n k=1 k

(3.9)

for arbitrary u1 , . . . , un ∈ X and symmetric function φn , which belong to the domain of the definition of generators of {Stn } and {S̃tn }. ̃ of degree Let us check (3.9) by induction. For n = 1, consider the polynomial g ∈ 𝒫 one without a free member. Then ∀μ ∈ M : g1 (μ) = ∫ φ1 (u)μ(du). Xn

3.5 The generator of the evolutionary process

� 73

According to the lemma condition, for every u ∈ X, ̃ 1 (δu ). Ag1 (δu ) = Ag Hence, for every u ∈ X, ̃ 1 (δu , φ1 )(u), A1 (δu , φ1 )(u) = A and φ1 belong to the domain of the generator’s definition of {St1 } and {S̃t1 }. Now let n = 2. ̃, which has the form Take g2 ∈ 𝒫 g2 (μ) = ∬ φ2 (u1 , u2 )μ(du1 )μ(du2 ). X2

According to the lemma condition, for every u ∈ X, ̃ 2 (δu ). Ag2 (δu ) = Ag

(3.10)

Define the new function ψ1 (u) = φ2 (u, u),

u ∈ X.

It follows from (3.10) that ψ1 belongs to the domain of the definition of a generator for both {St1 } and {S̃t1 } and ̃ 1 (δu , ψ1 )(u). A1 (δu , ψ1 )(u) = A Note that 1 1 A1 (δu , ψ1 )(u) = A2 ( δu + δu , φ2 )(u, u) 2 2 ̃ 2 . Since g2 ∈ 𝒫 ̃, there exists the limit and the same relation holds for A 2 2 1 1 1 1 1 [∬ ∑ φ2 (vi , vj )Q( δu1 + δu2 , t)(dv1 , dv2 ) − ∑ φ2 (ui , uj )] t→0+ t 4 2 2 4 ij=1 ij=1

lim

X2

1 1 = Ag2 ( δu1 + δu2 ). 2 2 Due to (3.11) and to the symmetry of φ2 for every u1 , u2 , there exists the limit 2 1 1 1 1 ∬ ∑ φ2 (vi , vj )Q( δu1 + δu2 , t)(dv1 , dv2 ), t→0+ t 4 2 2 ij=1

lim

X2

(3.11)

74 � 3 The evolutionary measure-valued processes ̃ This proves that (3.9) is true for n = 2. For which coincides with the same limit for Q. the next values, the proof is similar. In order to complete the proof of the lemma, let us prove the following auxiliary statement. The symmetric function φn : Xn → ℝ belongs to the domain of the generator for n {St } if and only if the correspondent polynomial gn with the kernel φn belongs to the domain of the definition of A. Let gn lie in the domain of definition of A. Then by using arguments similar to those previously, we can check that φn lies in the domain of the generator definition for {Stn }. If φn is such that there exists the limit uniformly on Xn , 1 n 1 (∫ ⋅ ⋅ ⋅ ∫ φn (v1 , . . . , vn )Q( ∑ δuk , t, u1 , . . . , un )(dv1 , . . . , dvn ) − φn (u1 , . . . , un )). t→0+ t n k=1 lim

X2

Then it is evident that gn belongs to the domain of the definition of A. When using this simple conclusion, we have that for the every n ≥ 1 the sets of symmetric functions on Xn , which belong to the domain of the definition of the generator for {Stn } or {S̃tn } coincide. Consider the stochastic kernels on the symmetrization ΛXn of Xn , which are defined as a factor space obtained from Xn under the action of the group of coordinate permutations. Define ΛStn (u, Δ) for u ∈ ΛXn , Δ ⊂ ΛXn , ΛStn (u, Δ) =

1 ∑ S n (σ(u), σ(Δ)). n! σ∈S t n

Prove that {ΛStn } is the stochastic semigroup on ΛXn . So, from the previous considerations it follows that ΛStn = ΛS̃tn ,

t ≥ 0,

which completes the proof of the lemma. Let us consider the generator of the evolutionary process on a simple phase space. Example 3.5.5. Let X = {1, . . . , d} and {μt ; t ≥ 0} be an evolutionary measure-valued process on X. Denote the generator of {μ} by A. For the probability measure ν on X, build the set of measures 𝒯ν = {ϰ = ν ∘ g

−1

: g ∈ XX }.

Since X is finite, the set 𝒯ν is also finite. Note that the evolutionary process, which starts from the measure ν, has the values only in 𝒯n u. The structure of the infinitesimal operator of the stochastically continuous homogeneous Markov process on the finite set is well known (see, e. g., [53]). So, we can conclude that the operator A now is bounded and has the form

3.5 The generator of the evolutionary process

� 75

Af (ν) = ∑ a(ν, ϰ)f (ϰ). ϰ∈𝒯ν

Here, a(ν, ϰ) ≥ 0,

ν ≠ ϰ,

∑ a(ν, ϰ) = 0.

ϰ∈𝒯ν

This representation can be rewritten in the case when f is a polynomial. By aν , denote the following signed measure on X: aν = ∑ a(ν, ϰ)ϰ. ϰ∈𝒯ν

Then for every polynomial f and ν ∈ M, Af (ν) = f (aν ). This formula describes the generator of the evolutionary process in the case of the finite state space.

4 Stochastic differential equations driven by the random measures on the space of trajectories 4.1 Definition of the integral with respect to the random measure The material in this chapter develops the ideas from Section 1.4 and Chapter 2. Instead of the measure-valued process related to the certain stochastic flow, we will consider here the random measure on the space of initial dynamical system trajectories. The generalization of equation (2.12) will be investigated correspondingly to this aim. In this section, we study the random measures on the space C([0; 1]) or C([0; 1], X) and the integrals from the random functions with respect to such measures. To describe the problems arising here, let us pay attention to the following simple situation. Let μ be the random probability measure on the Borel σ-field of some complete separable metric space 𝒴 . Suppose that the function f : Ω × 𝒴 → ℝ is bounded and jointly measurable. It is easy to check that the integral ∫ f (ω, u)μ(ω, du)

(4.1)

𝒴

is measurable with respect to ω, i. e., is the random variable. From another side, if the function f is not jointly measurable, then (4.1) can be nonmeasurable. Exercise 4.1.1. Build the corresponding example. So, the random functions, which will be integrated relative to the random measures, must be measurable. This circumstance is not so important in the case when 𝒴 is the space with the simple structure, e. g., Euclidean space. In such cases, the random function determined as a set of random variables corresponding to the points of the space is often measurable automatically or it has the unique measurable modification with good properties. In the case of the general space, this is not so. Let us consider the classical example [86]. Example 4.1.1. Let {w(t); t ∈ [0; 1]} be the standard Wiener process. Define the random function on the space C([0; 1]) as follows: 1

f (u) = ∫ u(t)dw(t),

u ∈ C([0; 1]).

0

It is evident that f is stochastically continuous and, consequently, has the measurable modification. We will construct two different modifications of f . Define for every n ≥ 1 and u ∈ C([0; 1]), https://doi.org/10.1515/9783110986518-004

4.1 Definition of the integral with respect to the randommeasure

� 77

n−1 k+1 k k fn1 (u) = ∑ u( )(w( ) − w( )), n n n k=0 n−1

fn2 (u) = ∑ u( k=0

k+1 k k+1 )(w( ) − w( )). n n n

Consider the subset L 1 of C([0; 1]), which consists of the functions that satisfy the Hölder 4

condition with the exponent 41 .

Exercise 4.1.2. Prove that L 1 is the measurable subset of C([0; 1]). 4

Take the arbitrary measurable modification ̃f of f . Define for u ∉ L 1 , ω ∈ Ω, 4

f i (u, ω) = ̃f (u, ω),

i = 1, 2.

For u ∈ L 1 and ω ∈ Ω, put 4

f i (u, ω) = lim f2im (u, ω),

i = 1, 2

m→∞

(4.2)

if the limit exists and f i (u, ω) = 0,

i = 1, 2

if the limit does not exist. Let us check that f i is the modification of f for i = 1, 2. For u ∉ L1 , 4

f i (u) = ̃f (u) = f (u)

a. s.

i = 1, 2,

by the definition. Consider u ∈ L 1 . Note that 4

2

1 2 E(fni (u) − f (u)) ≤ ωu ( ) , n

n ≥ 1,

where ωu is the modulo of continuity of u. So, the limit (4.2) exists a. s. due to the Borel– Cantelli lemma and f i (u) = f (u) a. s.

i = 1, 2.

The measurability of f i is evident. Now consider the random measure μ on C([0; 1]) of the following kind: μ = δω . For every ω ∈ Ω, the measure μω is the measure concentrated in one point, which is the trajectory of w corresponding to this ω. It seems that the integral

78 � 4 Stochastic differential equations driven by the random measures



f (u)μ(du)

C([0;1])

can be defined as an integral from the measurable modification of f . But now ∫

f 1 (u)μ(du) = f 1 (w) ≠ f 2 (w) =

C([0;1])

f 2 (u)μ(du).

∫ C([0;1])

It is known [55] that with the probability one w ∈ L 1 . So, 4

f i (w) = lim f2im (w), m→∞

i = 1, 2,

and 1

1 1 f (w) = ∫ w(s)dw(s) = w(1)2 − , 2 2 1

0

1

2m −1

f 2 (w) = ∫ w(s)dw(s) + lim ∑ Δw( 0

m→∞

k=0

2

1 1 k ) = w(1)2 + , 2m 2 2

where we use the Itô integral. This example shows that the definition of the integral from the random function with respect to the random measure must be built carefully. We have to take the canonical (in some sense) way of the choice of the measurable modification. Another problem arising here is the relation between the integrals with respect to the random measure and the stochastic integrals. In this section, we will consider the appropriate definition of the integral with respect to the random measure and establish some kind of Fubini theorem for the random measures and stochastic integrals. The main space in this section will be C([0; 1]) with the uniform distance. So, then C([0; 1]) will be denoted by C, for short. Let us begin with the random measures on C, which are concentrated on the finite-dimensional subspaces of C. This restriction arises because of the following circumstances. Being the most important for us, the random functions constructed with the help of the stochastic integral have continuous modification on every finite-dimensional subspace of C. Exercise 4.1.3. Prove that the random function from Example 4.1.1 has a continuous modification on every finite-dimensional subspace of C. (Hint: Use the Kolmogorov condition.) Exercise 4.1.4. Let L be the finite-dimensional subspace of C and f1 , f2 be the modifications of the random function f , which are continuous on L. Prove that in the set of probability one the restrictions f1 and f2 on L coincide.

4.1 Definition of the integral with respect to the randommeasure � 79

Due to the result of Exercise 4.1.4, the continuous modification on the finitedimensional subset is unique. So, the following definition is correct. Definition 4.1.1. Let the real-valued random function f on C, random measure μ and the finite-dimensional subspace L of C be such that: (1) f has a continuous modification ̃f on L. (2) μ(L) = 1 a. s. Then the integral from f with respect to μ is ∫ ̃f (u, ω)μω (du). C

Remark. The obtained integral is an extended random variable (it can have infinite values or ̃f (⋅, ω) under certain ω, it can be nonintegrable relatively to μω , but all these events are measurable). Now let us consider the integrals with respect to the adapted random measures. Suppose that we have the flow of σ-fields {ℱt ; t ∈ [0; 1]} generated by the Wiener process {w(t); t ∈ [0; 1]}. Take the random measure μ, which is adapted to the flow {ℱt } and concentrated on the certain finite-dimensional subspace L of C. We will prove the analog of the Fubini theorem for such measures and the integrals relative to the Wiener process. But let us first consider all the examples of the adapted random measures concentrated on the finite-dimensional subspaces. Example 4.1.2. Let n ≥ 1 be fixed and L be a set of the continuous functions on [0; 1], which are linear on the each of intervals [0; n1 ], [ n1 ; n2 ], . . . , [ n−1 ; 1]. This is the finiten dimensional subspace of C. Consider the set of random variables {αkl ; 1 ≤ k ≤ n, 1 ≤ l ≤ m} such that αkl is measurable with respect to ℱ k−1 . Define the random function n

ηl , l = 1, . . . , m in [0; 1] as an element of L such that ηl (0) = 0, ηl ( kn ) = αkl , k = 1, . . . , n. For the positive numbers pl , l = 1, . . . , m, satisfying relation p1 + ⋅ ⋅ ⋅ + pm = 1 define the random measure μ as m

μ = ∑ pl δη . l=1

It is easy to check that μ is an adapted random measure concentrated on L. Exercise 4.1.5. Consider as the subspace L the set of all polynomials, which have a degree not greater than n: Prove that all adapted measures concentrated on L are nonrandom. (Hint: For the adapted measure μ concentrated on L, check that μ is ℱ0 measurable.)

80 � 4 Stochastic differential equations driven by the random measures The last exercise shows that to have the nontrivial examples of the adapted measures concentrated on L this space must have the basis with the special properties. Namely, it is desirable that the functions from such basis cannot be uniquely determined by their values on the small part of the interval [0; 1]. Exercise 4.1.6. Construct the examples of nondiscrete adapted measures concentrated on the finite-dimensional subspace of C. Now consider the adapted random map from C to C. Definition 4.1.2. The adapted random map F from C to C is the jointly measurable function F : Ω × C → C, which satisfies the following conditions: (1) For every t ∈ [0; 1] and u ∈ C, F(u)(t) = F(ut )(t)

a. s.,

where ut (s) = u(t ∧ s),

s ∈ [0; 1].

(2) For every t ∈ [0; 1] and u ∈ C, F(u)(t) is ℱt -measurable. The examples of the adapted random maps naturally arise in the content of the stochastic differential equation. Example 4.1.3. Let the functions a, b : ℝ → ℝ satisfy the Lipshitz condition. Consider for every u ∈ C the following stochastic integral equation: t

t

x(t) = ∫ a(x(s))ds + ∫ b(x(s))dw(s) + u(t). 0

0

According to our condition, this equation has the unique solution, which can be obtained by the iteration method. Denote this solution by F(u). It can be checked that F(u) is continuous with the probability one. Exercise 4.1.7. Prove that F is continuous in probability with respect to u, i. e., for arbitrary u ∈ C, P

F(v) → F(u),

v → u.

Let us check that F has a modification, which is an adapted random map. Pay attention first of all to the following simple assertion. Lemma 4.1.1. Let the sequence of the adapted random maps {Fn ; n ≥ 1} and the random map F in C satisfy the relation ∀R > 0:

4.1 Definition of the integral with respect to the randommeasure � 81

󵄩2 󵄩 sup{E 󵄩󵄩󵄩Fn (u) − F(u)󵄩󵄩󵄩 ; ‖u‖ ≤ R} → 0,

n → ∞.

Then F has a modification, which is an adapted random map. Proof. Choosing the subsequence if it is necessary, we can suppose that 1 󵄩2 󵄩 sup{E 󵄩󵄩󵄩Fn (u) − F(u)󵄩󵄩󵄩 ; ‖u‖ ≤ n} ≤ n , 2

n ≥ 1.

(4.3)

For ω ∈ Ω, u ∈ C, define ̃ F(u)(ω) = lim Fn (u)(ω), n→∞

if the limit exists and ̃ F(u)(ω) =0 in the opposite case. Note that F̃ is the measurable modification of F. Property (1) from Definition 4.1.2 holds immediately. To check property (2), note that ̃ F(u)(t) = lim Fn (u)(t) n→∞

a. s.

The lemma is proved. Now to check the statement of the example define the sequence of the random maps in such a way, t

t

Fn+1 (u, t) = ∫ a(Fn (u, s))ds + ∫ b(Fn (u, s))dw(s) + u(t), 0

0

F0 (u, t) = u(t). Exercise 4.1.8. Prove for every n ≥ 1 that Fn is an adapted random map. Our next purpose is the Fubini theorem for the integrals from the adapted random maps. Theorem 4.1.1. Let the adapted random map F from C to C and the adapted random measure μ on C be such that: (1) There exist the functions h, K : [1; +∞) → [0; +∞) for which lim h(p) = +∞,

p→+∞

∀p ≥ 1, u, v ∈ C: 󵄩 󵄩p E 󵄩󵄩󵄩F(u) − F(v)󵄩󵄩󵄩 ≤ K(p)‖u − v‖h(p) .

82 � 4 Stochastic differential equations driven by the random measures (2) ∀p ≥ 1, 󵄩p 󵄩 E 󵄩󵄩󵄩F(0)󵄩󵄩󵄩 < +∞. (3) ∃α > 0: E ∫ exp{α‖u‖2 }μ(du) < +∞. C

(4) μ is concentrated on the finite-dimensional subspace L. Then: (1) There exists the modification of F, which is continuous on L with the probability one and the integral ∫ F(u)μ(du) is the adapted continuous random process. (2) There exists the modification of the random map 1

C ∋ u 󳨃→ ∫ F(u)(t)dw(t), 0

which is continuous on L with the probability one and the following integral is welldefined via Definition 4.1.1: 1

∫(∫ F(u)(t)dw(t))μ(du). C

0

(3) The following equation holds: 1

1

∫(∫ F(u)(t)dw(t)) = ∫(∫ F(u)(t)μ(du))dw(t), C

0

0

a. s.

C

Proof. Let d be the dimension of the subspace L. For every u ∈ L, the vector of the coordinates of u in the certain fixed basis of L and denote by (u1 , . . . , ud ). Then there exist such positive constants c1 and c2 that d

∀u ∈ L : c1 ‖u‖ ≤ ∑ |ui | ≤ c2 ‖u‖. i=1

4.1 Definition of the integral with respect to the randommeasure

� 83

Due to condition (1) of the theorem for such p that h(p) ≥ 1, d

󵄩p 󵄩 ∀u, v ∈ L : E 󵄩󵄩󵄩F(u) − F(v)󵄩󵄩󵄩 ≤ R(p) ∑ |ui − vi |h(p) . i=1

Here, the constant R(p) depends only on p. Since h(p) → +∞, p → +∞, it is possible to choose such p that d < 1. h(p) Then from the multidimensional variant of the Kolmogorov condition [55] follows that F has the continuous modification on L. Let us check whether F is integrable relative to μ with probability one. Consider the new random field on the unit ball in L with the center in zero, F ′ (u) = ‖u‖β (F(

u ) − F(0)), ‖u‖2

u ≠ 0,

F ′ (0) = 0.

It can be easily checked that under the condition, β > 11 +

h(p) p

F ′ satisfies the relation, ∀u, v ∈ L, ‖u‖, |v‖ ≤ 1: 󵄩 󵄩p E 󵄩󵄩󵄩F ′ (u) − F ′ (v)󵄩󵄩󵄩 ≤ K(p, β)‖u − v‖h(p) , where the constant K(p, β) depends only on p and β. Consequently, F ′ has the modification, which is continuous with probability one. Now by choosing the appropriate β, we can conclude that the initial random field F has the modification, which is continuous on L, and satisfies the relation lim

‖u‖→+∞, u∈L

F(u) = 0, ‖u‖3

a. s.

Now from condition (3) of the theorem, it follows that 󵄩 󵄩 ∫󵄩󵄩󵄩F(u)󵄩󵄩󵄩μ(du) < +∞, L

Consequently, the integral

a. s.

84 � 4 Stochastic differential equations driven by the random measures ∫ F(u)μ(du) C

is well-defined as an integral from the C-valued function under every ω and is the random element in C. Let us check that this is the {ℱt }-adapted random process. Denote the coordinate functional on C by δt for t ∈ [0; 1], C ∋ u 󳨃→ δt (u) = u(t). Then δt (∫ F(u)μ(du)) = ∫ F(u)(t)μ(du). C

C

Since F is an adapted random map, F(u)(t) = F(ut )(t)

a. s.

From the theorem condition, it follows that the map L ∋ u 󳨃→ F(ut )(t) has the continuous modification on L. Due to the uniqueness of the continuous modification, ∫ F(u)(t)μ(du) = ∫ F(ut )(t)μ(du) C

a. s.

C

The last integral is ℱt -measurable. So, the Itô stochastic integral, 1

∫(∫ F(u)(t)μ(du))dw(t), 0

C

is well-defined. Consider the sequence of partitions of [0; 1], 0 = t0n < t1n < ⋅ ⋅ ⋅ < tnn = 1,

n≥1

with the property n max (tk+1 − tkn ) → 0,

k=0,...,n−1

n → ∞.

Denote n−1

n Sn (u) = ∑ F(u)(tkn )(w(tk+1 − w(tkn ))), k=0

n ≥ 1,

4.1 Definition of the integral with respect to the randommeasure

� 85

1

S(u) = ∫ F(u)(t)dw(t). 0

Due to the Burgholder inequality for p ≥ 1, ∀n ≥ 1 ∀u, v ∈ C, 󵄩p 󵄩 E 󵄩󵄩󵄩Sn (u) − Sn (v)󵄩󵄩󵄩 ≤ K1 (p)‖u − v‖h(p) , 󵄩p 󵄩 E 󵄩󵄩󵄩S(u) − S(v)󵄩󵄩󵄩 ≤ K1 (p)‖u − v‖h(p) , where the constant K1 (p) depends only on p. In a similar way, we can prove that for every p ≥ 1 there is the constant Q(p), which ∀n ≥ 1 ∀u ∈ C, 󵄩 󵄩p E 󵄩󵄩󵄩Sn (u)󵄩󵄩󵄩 ≤ Q(p)‖u‖h(p) , 󵄩 󵄩p E 󵄩󵄩󵄩S(u)󵄩󵄩󵄩 ≤ Q(p)‖u‖h(p) . From here [4], it follows that the continuous modifications of the random fields {Sn ; n ≥ 1} are weakly compact in any space C(B), where B is a compact subset of L. Construct the random fields {Sn′ ; n ≥ 1} for the random map F ′ in the same way. This fields are weakly compact in the space of continuous functions over the closed unit ball with the center in zero in L. Consequently, there is the subsequence {Sn(k) ; k ≥ 1}, which satisfies the relation ∫ Sn(k) (u)μ(du) → ∫ S(u)μ(du), L

k→∞

a. s.

L

From another side, n(k)−1

n(k) ) − w(tjn(k) )), ∫ Sn(k) (u)μ(du) = ∑ ∫ F(u)(tjn(k) )μ(du) ⋅ (w(tj+1 j=0 L

L

So, P

1

∫ Sn(k) (u)μ(du) → ∫(∫ F(u)(t)μ(du))dw(t). L

0

L

Finally, 1

1

∫(∫ F(u)(t)dw(t))μ(du) = ∫(∫ F(u)(t)μ(du))dw(t). L

0

The theorem is proved.

0

L

k ≥ 1.

86 � 4 Stochastic differential equations driven by the random measures We will consider this theorem in the infinite-dimensional framework later. But let us first consider an example in which the structure of the finite-dimensional adapted measures is cleared up. Example 4.1.4. Assume that L is a finite-dimensional subspace of C. Build the random adapted measure on L as follows. For every t ∈ [0; 1], define the linear operator Qt : C → C([0; t]) as a restriction operator. Let d(t) be a dimension of Qt (L) for every t ∈ [0; 1]. Then d is nondecreasing and left continuous. In order to check this, consider t1 < t2 . Then Qt1 (L) = Qt1 (Qt2 (L)). So, d(t1 ) ≤ d(t2 ). Now suppose that for certain t0 ∈ (0; 1], d(t0 −) < d(t0 ). This means that there are such points s1 , . . . , sd(t0 )−1 ∈ [0; t0 ) and functions f1 , . . . , fd(t0 ) ∈ L for which f1 (s1 ) .. det ( . fd(t0 ) (s1 )

f1 (s2 ) .. . fd(t0 ) (s2 )

⋅⋅⋅ ⋅⋅⋅

f1 (sd(t0 )−1 ) .. . fd(t0 ) (sd(t0 )−1 )

f1 (t0 ) .. ) ≠ 0, . fd(t0 ) (t0 )

(4.4)

′ and for every s1′ , . . . , sd(t ∈ [0; t0 ), 0)

f1 (s1′ ) .. det ( . fd(t0 ) (s1′ )

f1 (s2′ ) .. . fd(t0 ) (s2′ )

⋅⋅⋅ ⋅⋅⋅

′ f1 (sd(t ) 0) .. ) = 0. . ′ fd(t0 ) (sd(t0 ) )

(4.5)

The last statement can be checked with the help of the following lemma. Lemma 4.1.2. The functions f1 , . . . , fn ∈ C are linearly dependent if and only if ∀t1 , . . . , tn ∈ [0; 1]: f1 (t1 ) . det ( .. fn (t1 )

f1 (t2 ) .. . fn (t2 )

⋅⋅⋅ ⋅⋅⋅

f1 (tn ) .. . ) = 0. fn (tn )

(4.6)

Proof. The necessity is obvious. Now assume that (4.6) holds. Then for every t1 , . . . , tn ∈ [0; 1], the following relation holds:

4.1 Definition of the integral with respect to the randommeasure � 87

f1 (t1 ) . rank ( .. fn (t1 )

f1 (t2 ) .. . fn (t2 )

⋅⋅⋅ ⋅⋅⋅

f1 (tn ) .. . ) < n. fn (tn )

Let f1 (t1 ) . m = max rank ( .. t1 ...tn fn (t1 )

f1 (t2 ) .. . fn (t2 )

⋅⋅⋅ ⋅⋅⋅

f1 (tn ) .. . ) fn (tn )

and f1 (t10 ) . rank ( .. fn (t10 )

f1 (t20 ) .. . fn (t20 )

⋅⋅⋅ ⋅⋅⋅

f1 (tn0 ) .. . ) = m. fn (tn0 )

There are m linearly independent vectors among f1 (ti0 ) . ( .. ) , fn (ti0 )

i = 1, . . . , n.

Without any restrictions, we can assume that these are the vectors f1 (ti0 ) . ( .. ) , fn (ti0 )

i = 1, . . . , m.

By means of H, denote the linear span of these vectors in ℝn . From the definition of m, it follows that ∀t ∈ [0; 1]: f1 (t) . ( .. ) ∈ H. fn (t) Since dim H = m < n, then there exists the nonzero vector a ∈ ℝn , which is orthogonal to H, i. e., ∀t ∈ [0; 1]: n

∑ ai fi (t) = 0. i=1

This means that the functions f1 , . . . , fn are linearly dependent. The lemma is proved.

88 � 4 Stochastic differential equations driven by the random measures Coming back to our considerations, note that (4.5) contradicts with (4.4) because of the continuity of the functions f1 , . . . , fd(t0 ) . So, our hypothesis d(t0 −) < d(t0 ) is false. Now let t1 < t2 < ⋅ ⋅ ⋅ < tn be the jumps points for the function d. Denote the kernel of Qtk in L by Mk . Note that M1 = L and the following inclusions hold: M1 ⊃ M2 ⊃ ⋅ ⋅ ⋅ ⊃ Mn . Consider the orthogonal differences: Nk = Mk ⊖ Mk+1 ,

k = 1, . . . , n − 1,

N0 = L ⊖ M1 ,

Then

Nn = Mn .

n

L = ⊕ ∑ Nk . k=0

(4.7)

Assume that μ is the random measure on L adapted to the flow {ℱt }. Take s ∈ (tk ; tk+1 ] for the fixed k. According to Definition 1.4.1, the values of μ on the cylindrical sets being based on [0; s] must be ℱs -measurable. From the previous considerations, it follows that Qs is the linear operator with the kernel Mk+1 . Consequently, the projection of the measure μ on the space L ⊖ Mk+1 must be ℱs -measurable for every s ∈ (tk ; tk+1 ]. Now use the continuity of the flow generated by some [73]. Since ℱtk + = ℱtk , that is why the projection of μ on L ⊖ Mk+1 must be ℱtk -measurable. It is evident that this requirement for every k is sufficient for measure μ to be adapted. Consider the special case. Let ν0 be the deterministic probability measure on N0 and for every k = . . . , n, νk is the random measure on Nk measurable relative to ℱtk−1 . Define the measure μ as a product n

μ = ⊗ νk . k=1

Then μ is an adapted measure. Consider now the adapted random map F on L. Again, take s ∈ (tk ; tk+1 ] for the fixed k. From the Definition 4.1.2, it follows that ∀u ∈ L: F(u)(s) = F(us )(s) a. s.

(4.8)

Denote for u ∈ L its elements in the expansion (4.7) by (u0 , u1 , . . . , un ). So, from (4.8) and previous considerations, we obtain that ∀u ∈ L: F(u)(s) = F(u0 , u1 , . . . , uk , 0, . . . , 0)(s) a. s. Now for measure μ previously constructed and the map F, we have 1

n

tk+1

n

tk+1

∫ F(u)(t)dw(t) = ∑ ∫ F(u)(t)dw(t) = ∑ ∫ F(u0 , . . . , uk , 0, . . . , 0)(t)dw(t), 0

k=0 t k

k=0 t k

4.1 Definition of the integral with respect to the randommeasure

� 89

where tn+1 = 1. By the definition of the adapted map, {F(u0 , . . . , uk , 0, . . . , 0)(t), t ∈ [tk ; tk+1 ]} is independent from the increments of w on the same interval. Hence, tk+1

∫ ⋅ ⋅ ⋅ ∫ ∫ F(u0 , . . . , uk , 0, . . . , 0)(t)dw(t)ν0 (du0 ) . . . νk (duk ) M0

Mk tk

tk+1

= ∫ ( ∫ ⋅ ⋅ ⋅ ∫ F(u0 , . . . , uk , 0, . . . , 0)(t)dw(t)ν0 (du0 ) . . . νk (duk ))dw(t). tk

M0

Mk

So, in this situation the statement of Theorem 4.1.1 is clear. In order to consider the general case of the infinite-dimensional measure, we are in need of the definition of the integral from the random map. Recall that there is a problem with the choice of the measurable modification. Here, we suggest the definition for the adapted random maps and measures. Let {πn ; n ≥ 1} be the sequence of the bounded linear finite-dimensional operators in C with the following properties: (1) πn strongly converges to the identity operator when n tends to infinity. (2) ∀u ∈ C, ∈ [0; 1]: πn (u)(t) = πn (ut )(t). Here, we present one example of the sequence with such properties. Example 4.1.5. Let t0n = 0 < t1n < ⋅ ⋅ ⋅ < tnn = 1 be the partition of [0; 1] for every n ≥ 1. Assume that n max (tk+1 − tkn ) → 0,

k=0,...,n−1

n → ∞.

n For u ∈ C, define πn (u) as a function, which is linear on every interval [tkn ; tk+1 ], k = 0, . . . , n − 1 and

πn (u)(tkn )

πn (u)(0) = 0,

n = u(tk−1 ),

k = 1, . . . , n.

n It is evident that condition (1) holds. Take t ∈ [tkn ; tk+1 ] for the fixed k. Then πn (u)(t) is n n defined by the values u(tk − 1) and u(tk ) if k ≥ 1 or u(tkn ). So, the condition (2) also holds.

Let the adapted random map F have the continuous modification on every finitedimensional subspace of C. Definition 4.1.3. The integral from the map F with respect to the adapted random measure μ is the limit in probability

90 � 4 Stochastic differential equations driven by the random measures P- lim ∫ F(u)μπn−1 (du) n→∞

C

if this limit exists and does not depend on the choice of the sequence {πn ; n ≥ 1}. Example 4.1.6. Consider the situation from Example 4.1.1. That is, let ∀u ∈ C: t

F(u)(t) = ∫ u(s)dw(s),

t ∈ [0; 1],

0

μ = δw(⋅) , where {w(t); t ∈ [0; 1]} is the Wiener process. Take the sequence {πn ; n ≥ 1} of the finitedimensional operators, which has the above mentioned properties. Then ̃ n (w))(t), ∫ F(u)(t)μπn−1 (du) = F(π C

where F̃ is the continuous modification of F on πn (C). Use the description of the space πn (C), which was built in Example 4.1.4. Let v0 , . . . , vm be the corresponding coordinates for v ∈ πn (C) and t0 , . . . , tm be the jump points for dimension function. Then for t ∈ [tk ; tk+1 ], ̃ n (u))(t) = F(π ̃ n (u)0 , . . . , πn (u)k , 0, . . . , 0)(t). F(π Consequently, k−1

ti+1

t

̃ n (u))(t) = ∑ ∫ φi (πn (u)0 , . . . , πn (u)i )(s)dw(s)+ ∫ φk (πn (u)0 , . . . , πn (u)k )(s)dw(s) F(π i=0 t i

a. s.

tk

̃ n (w)) coincides with the Here, φ0 , . . . , φn−1 are the deterministic linear operators. So, F(π Itô integral. Then there exists the limit 1

̃ n (u)) = ∫ w(s)dw(s). P- lim F(π n→∞

0

We see that in this case the difficulties related to the choice of the modification do not arise. Now we will construct a wide class of the random adapted maps and measures for which the integral is well-defined. In order to do this, we need the technical corollaries from Theorem 4.1.1.

4.1 Definition of the integral with respect to the randommeasure � 91

Theorem 4.1.2. Assume that in addition to the conditions of Theorem 4.1.1 the following relation holds: 󵄨 󵄨 sup esssup󵄨󵄨󵄨F(u)(t)󵄨󵄨󵄨 < +∞. u,t

Then 1

1

E ∫ exp{∫ F(u)(t)dw(t) − C

0

1 ∫ F(u)(t)2 dt}μ(du) = 1. 2 0

Proof. Denote t

t

0

0

1 g(u)(t) = exp{∫ F(u)(s)dw(s) − ∫ F(u)(s)2 ds}, 2 t

gn (u)(t) = exp{∫ Fn (u)(s)dw(s) − 0

t

1 ∫ Fn (u)(s)2 ds}, 2 0

n−1

Fn (u)(s) = ∑ F(u)(tkn )1(tn ;tn ] (s), k k+1

k=0

0 = t0n < ⋅ ⋅ ⋅ < tnn = 1,

n ≥ 1.

Since {gn ; n ≥ 1} and g satisfy the stochastic differential equation, dgn (u)(t) = gn (u)(t)Fn (u)(t)dw(t), dg(u)(t) = g(u)(t)F(u)(t)dw(t), gn (u)(0) = g(u)(0) = 1,

by using the conditions on the function F and the Gronwall–Bellman lemma, we can check whether {gn ; n ≥ 1} and g satisfy conditions (1) and (2) of Theorem 4.1.1 with the constants independent from n. Similar to the proof of Theorem 4.1.1, we can select the subsequence {gnk ; k ≥ 1} such that ∀R > 0: 󵄩 󵄩 max 󵄩󵄩󵄩gnk (u) − g(u)󵄩󵄩󵄩 → 0,

u∈L,‖u‖≤R

k→∞

a. s.

Therefore, due to the Fatou lemma, we have E ∫ g(u)(1)μ(du) ≤ lim E ∫ g(u)(1)μ(du). C

For fixed n, consider

n→∞

C

92 � 4 Stochastic differential equations driven by the random measures t

t

0

0

1 gn (u)(t) = exp{∫ Fn (u)(s)dw(s) − ∫ Fn (u)(s)2 ds} 2 t n−1

= exp{∫ ∑ F(u)(tkn )1(tn ;tn ] (s)dw(s) − k k+1

0 k=0

t

1 n−1 ∫ ∑ F(u)(tkn )1(tn ;tn ] (s)ds}. k k+1 2 k=0 0

From the condition of this theorem, it follows that 󵄩 󵄩4 E ∫󵄩󵄩󵄩gn (u)󵄩󵄩󵄩 μ(du) < +∞. C

Therefore, 1

E ∫{∫ gn (u)(t)Fn (u)(t)μ(du)}dw(t) = 0. 0

C

Consequently, according to Theorem 4.1.1, we get 1

E ∫ gn (u)(1)μ(du) = E ∫{1 + ∫ gn (u)(t)Fn (u)(t)dw(t)}μ(du) C

C

0

1

= 1 + E ∫ ∫ gn (u)(t)Fn (u)(t)μ(du)dw(t) = 1. 0 C

Thus, from the Fatou lemma, E ∫ gn (u)(1)μ(du) ≤ 1. C

Using the condition of the theorem, we get from here 1

∀β ∈ ℝ : E ∫ exp β ∫ F(u)(t)dw(t)μ(du) < +∞. C

0

Taking this into account and using the Hölder inequality, we can verify that 1

E ∫ ∫ g(u)(t)4 μ(du)dt < +∞. 0 C

Hence, by analogy with the argument presented above, we obtain E ∫ g(u)(1)μ(du) = 1. C

The theorem is proved.

4.1 Definition of the integral with respect to the randommeasure

� 93

Consider some corollaries from this theorem. Corollary 4.1.1. If the conditions of Theorem 4.1.2 are satisfied, then 2p

1

∀p ∈ ℕ :

E ∫(∫ F(u)(t)dw(t)) μ(du) < +∞. C

0

Lemma 4.1.3. Assume that an adapted random measure μ and random map F satisfy the conditions of Theorem 4.1.1 and for certain p ∈ 𝒩 we have 1

E ∫ ∫ F(u)(t)2p dtμ(du) < +∞. C 0

Then 1

2p

p

1

E ∫(∫ F(u)(t)dw(t)) μ(du) ≤ [p(2p − 1)] E ∫ ∫ F(u)(t)2p dtμ(du). C

0

C 0

Now let us define the class of the diffusion random maps. Assume that a, b : C × ℝ × [0; 1] → ℝ are continuous functions with respect to the collection of variables and satisfy the following conditions: (1) a and b satisfy the Lipshitz condition with respect to the first two variables uniformly in t ∈ [0; 1]. (2) For any t ∈ [0; 1] and u ∈ C, we have a(u, ⋅, t) = a(ut , ⋅, t),

b(u, ⋅, t) = b(ut , ⋅, t).

Assume also that the function φ : ℝ → ℝ satisfies the Lipshitz condition. Definition 4.1.4. The solution to the stochastic differential equation dx(u)(t) = a(u, x(u)(t), t)dt + b(u, x(u)(t), t)dw(t) x(u)(0) = φ(u(0))

is called a diffusion random map. Exercise 4.1.9. Prove that every diffusion random map is adapted and satisfying the conditions of Theorem 4.1.1. Theorem 4.1.3. Let the adapted random measure μ satisfy the conditions of Theorem 4.1.1. Let the sequence of the finite-dimensional operators {πn : n ≥ 1} be such as above and x be a diffusion random map. Then for any p ≥ 1, there exists

94 � 4 Stochastic differential equations driven by the random measures Lp - lim ∫ x(πn u)(t)μ(du). n→∞

C

Proof. For p ∈ 𝒩 and u1 , u2 ∈ C, we consider the difference (x(u1 )(t) − x(u2 )(t))

2p

≤ C1 ⋅ ‖u1 − u2 ‖2p t

2p

+ C1 ⋅ ∫(x(u2 )(s) − x(u2 )(s)) ds 0

2p

t

+ C1 (∫[b(u1 , x(u1 )(s), s) − b(u2 , x(u2 )(s), s)]dw(s)) . 0

According to Lemma 4.1.3 for any n, m ≥ 1, we have 2p

E ∫(x(πn u)(t) − x(πm u)(t)) μ(du) C

t

≤ C1 E ∫ ‖πn u − πm u‖2p μ(du) + C1 ∫ E ∫(x(πn u)(s) C

0

t

2p

C

− x(πm u)(s)) μ(du)ds + C2 ∫ E ∫(b(πn u, x(πn u)(s), s) 2p

0

C

− b(πm u, x(πm u)(s), s)) μ(du)ds t

2p

≤ C3 ∫ ‖πn u − πm u‖2p μ(du) + C4 ∫ E ∫(x(πn u)(s) − x(πm u)(s)) μ(du)ds. C

0

C

Consequently, from the Gronwall–Bellman lemma, we get 2p

E ∫(x(πn u)(t) − x(πm u)(t)) μ(du) ≤ C5 ∫ ‖πn u − πm u‖2p μ(du). C

C

Here, the constant C5 does not depend on n, m and μ. Therefore, by virtue of the Lebesgue dominated convergence theorem, the sequence {∫(x(πn u)(t))μ(du); n ≥ 1} C

is fundamental in any space Lp , and hence, there exists the limit Lp - lim ∫ x(πn u)(t)μ(du). n→∞

C

4.2 The equations driven by the random measures � 95

It is obvious that this limit does not depend on the choice of the sequence {πn ; n ≥ 1}. The theorem is proved.

4.2 The equations driven by the random measures Finite-dimensional case. The main objective of this chapter is the following equation: dx(u)(t) = ∫ a(t, u, x(u)(t), x(v)(t))μ(dv)dt + b(t, x(u)(t))dw(t).

(4.9)

C

Here, as in the previous section, C is the space of continuous functions on the integral [0; 1], u and v are the variables from this space, w is the standard Wiener process on [0; 1], μ is the random measure on C adapted to the flow of σ-fields generated by w. The initial condition for (4.10) is the following: x(u)(0) = u(0),

u ∈ C.

(4.10)

This equation can be treated as the description of the motion of the continuum system of the particles in the random media. The trajectories of the particles depend on the trajectories of another particle system (described by the measure μ). Let us see what form equation (4.10) has in the case of the special choice of the measure μ. Example 4.2.1. The equation for the stochastic flow with the interaction. Assume that the measure μ is concentrated on the constant functions. Then it must be nonrandom in order to be adapted. Then equation (4.10) turns into the following equation: dx(u)(t) = ∫ a(t, u, x(u)(t), x(v̂)(t))μ(dv)dt + b(t, x(u)(t))dw(t). ℝ

Here, v̂ is the function totally equal to v ∈ ℝ. This equation can be considered at first only for such functions u, which are totally constant. In this case, we get the following equation: dx(u)(t) = ∫ a(t, u, x(u)(t), x(v)(t))μ(dv)dt + b(t, x(u)(t))dw(t). ℝ

The first term in the right part can be rewritten in such a way, ∫ a(t, u, x(u)(t), v)μt (dv)dt, ℝ

where μt = μ ∘ x(⋅)(t)−1 . So, equation (4.10) turns into equation (2.12).

96 � 4 Stochastic differential equations driven by the random measures Example 4.2.2. The case of the discrete measure μ. Assume that μ is of the kind, n

μ = ∑ pk δηk ,

(4.11)

k=1

where {ηk ; k = 1, . . . , n} are the adapted processes. Using the arguments similar to Example 4.1.6, we can show that the integrals from the random adapted maps relative to μ are of the following kind: n

∫ F(u)μ(du) = ∑ pk F(ηk ). k=1

C

Hence, for the random measure μ, which has the form (4.11), equation (4.10) turns into n

dx(u)(t) = ∑ pk a(t, u, x(u)(t), ηk )dt + b(t, x(u)(t))dw(t). k=1

(4.12)

Having solved (4.12), consider new random processes yk (t) = x(ηk )(t),

t ∈ [0; 1], k = 1, . . . , n.

Let us check whether the processes yk , k = 1, . . . , n satisfy the following system of equations: n

dyk (t) = ∑ pj a(t, ηk , yk (t), ηj )dt + b(t, yk (t))dw(t), j=1

k = 1, . . . , n.

(4.13)

Assume that x satisfies the integral form of (4.12). For k fixed, consider the random measure νk = δηk . Integrating (4.12) with respect to νk and using the stochastic Fubini theorem, we get (4.13). Now let us consider the existence of the solution for (4.10). We will study at first the following partial case: dx(u)(t) = ∫ f (x(u)(t), x(v)(t))μ(dv)dt + b(x(u)(t))dw(t), C

x(u)(0) = x0 (u). Assume that the measure μ is concentrated on the finite-dimensional subspace L. Theorem 4.2.1. Let the functions f , b and x0 satisfy the following conditions:

(4.14)

4.2 The equations driven by the random measures

1) 2) 3)

� 97

f , b satisfy the Lipshitz condition relative to all their variables. b is bounded. x0 is Lipshitz.

Then the Cauchy problem (4.14) has the solution, which is unique. Proof. Use the iteration method. Define t

t

x1 (u)(t) = x0 (u) + ∫ ∫ f (x0 (u), x0 (v))μ(dv)ds + ∫ b(x0 (u))dw(s). 0 C

0

It is possible to check that the random map x1 satisfies the technical conditions of the previous section. So, we can define the next iteration x2 in the same way and so on. Consider E(xm+1 (u)(t) − xm (u)(t))

2

t

t

0

0

2

2 󵄨 󵄨 ≤ C1 ∫ E(xm (u)(s) − xm−1 (u)(s)) ds + C2 ∫ E(∫󵄨󵄨󵄨xm (v)(s) − xm−1 (v)(s)󵄨󵄨󵄨μ(dv)) ds. C

Consequently, t

2

E(xm+1 (u)(t) − xm (u)(t)) ≤ C3 ∫ eC3 (t−s) E ∫((xm (v)(s) − xm−1 (v)(s))) μ(dv)ds. C

0

Let us estimate 2

E ∫(xm+1 (u)(s) − xm (u)(s)) μ(du). L

In order to do this, consider again (xm+1 (u)(t) − xm (u)(t))

2

t

2

≤ C4 ∫ ∫(f (xm (u)(s), xm (v)(s)) − f (xm−1 (u)(s), xm−1 (v)(s)) μ(dv)ds 0 C

2

t

+ C4 {∫(b(xm (u)(s)) − b(xm−1 (u)(s)))dw(s)} t

0 2

≤ C5 ∫(xm (u)(s) − xm−1 (u)(s)) ds 0

98 � 4 Stochastic differential equations driven by the random measures t

2

+ C5 ∫ ∫(xm (v)(s) − xm−1 (v)(s)) μ(dv)ds 0 C t

+ C6 ∫ ∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) 0 C

⋅ (b(xm (u)(s)) − b(xm−1 (u)(s)))dw(s) t

1 2 + C6 ∫(b(xm (u)(s)) − b(xm−1 (u)(s))) ds 2 0

t

2

≤ C7 ∫(xm (u)(s) − xm−1 (u)(s)) ds 0

t

2

+ C5 ∫ ∫(xm (v)(s) − xm−1 (v)(s)) μ(dv)ds 0 C

t s

+ C6 ∫ ∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) 0 0

⋅ (b(xm (u)(s)) − b(xm−1 (u)(s)))dw(s). Let us integrate the obtained inequality relative to the measure μ and use the stochastic Fubini theorem: 2

∫(xm+1 (u)(t) − xm (u)(t)) μ(du) C

t

2

≤ C8 ∫ ∫(xm (u)(s) − xm−1 (u)(s)) μ(du)ds 0 C

t

s

+ C6 ∫ ∫ ∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) 0 C 0

⋅ (b(xm (u)(s)) − b(xm−1 (u)(s)))μ(du)dw(s). Using the statement of Lemma 4.1.3 that can be easily adapted to the case of a bounded nonanticipating integrand, we can get 2

s

sup E{∫ ∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) ⋅ (b(xm (u)(s)) − b(xm−1 (u)(s)))μ(du)} [0;1]

C 0

< +∞. Hence,

4.2 The equations driven by the random measures � 99 t

2

2

E ∫(xm+1 (u)(t) − xm (u)(t)) μ(du) ≤ C8 ∫ E ∫(xm (u)(s) − xm−1 (u)(s)) μ(du)ds. C

0

C

Similarly, we can obtain the inequality not only for the second but for an arbitrary moment and check the existence of such a random field x(u)(t), u ∈ C, t ∈ [0; 1] for which 󵄨2p 󵄨 E sup󵄨󵄨󵄨x(u)(s) − xm (u)(s)󵄨󵄨󵄨 → 0,

m → ∞, p ≥ 1, u ∈ C.

[0;1]

Moreover, for arbitrary u1 , u2 ∈ C and p ≥ 1, E sup(xm+1 (u1 )(s) − xm+1 (u2 )(s)) [0;1]

2p

t

2p

2p

≤ C1 ∫ E sup(xm (u1 )(τ) − xm (u2 )(τ)) ds + C1 (x0 (u1 ) − x0 (u2 )) . (p)

0

(p)

[0;s]

Consequently, due to the Gronwall–Bellman lemma, E sup(x(u1 )(t) − x(u2 )(t)) [0;1]

2p

≤ C2 ‖u1 − u2 ‖2p . (p)

So, x can be integrated with respect to the measure μ. Using the arguments similar to the proof of the stochastic Fubini theorem, we can find for every u ∈ C the subsequence {xnj ; j ≥ 1}, which has the property 󵄨󵄨 󵄨󵄨 󵄨 󵄨 lim sup󵄨󵄨󵄨∫(f (xnj (u)(s), xnj (v)(s)) − f (x(u)(s), x(v)(s)))μ(dv)󵄨󵄨󵄨 = 0 󵄨󵄨 j→∞ [0;1] 󵄨󵄨

a. s.

C

Consequently, x satisfies the initial equation. The uniqueness of the solution can be obtained as usual. Assume that x ′ and x ′′ are two different solutions of our problem. Then similar to the previous calculations, we can get 2

t

2

E ∫(x (u)(t) − x (u)(t)) μ(du) ≤ C8 ∫ E ∫(x ′ (u)(t) − x ′′ (u)(t)) μ(du)dt. ′

′′

L

0

C

Hence, 2

E ∫(x ′ (u)(t) − x ′′ (u)(t)) μ(du) = 0 C

and 2

E(x ′ (u)(t) − x ′′ (u)(t)) = 0. The theorem is proved.

100 � 4 Stochastic differential equations driven by the random measures It is useful for future consideration to have the estimation of the difference between two solutions corresponding to the different measures. Assume that μ is an adapted random measure on C and π1 , π2 are such finite-dimensional operators in C, that μ1 = μ∘π1−1 , μ2 = μ ∘ π2−1 are adapted. Lemma 4.2.1. Let x1 and x2 be the solutions of (4.14) for the measures μ1 and μ2 with the same initial condition x0 . Then for every p ∈ 𝒩 there is the constant Cp , which does not depend on π1 , π2 , μ such that ∀u ∈ C: 2p

E sup(x1 (u)(t) − x2 (u)(t)) [0;1]

≤ Cp ∫ ‖π1 v − π2 v‖2 + ‖π1 v − π2 v‖2p μ(dv). C

Proof. Consider 2p

E sup(x1 (u)(t) − x2 (u)(t)) [0;t]

t

2p

≤ D1 ∫ E(x1 (u)(s) − x2 (u)(s)) ds 0

t

2p

+ D1 ∫ E(∫ f (x1 (u)(s), x1 (v)(s))μ1 (dv) − ∫ f (x2 (u)(s), x2 (v)(s))μ2 (dv)) ds t

0

C

C 2p

≤ D2 ∫ E(x1 (u)(s) − x2 (u)(s)) ds 0

t

2p

+ D2 ∫ E ∫(x1 (v)(s) − x2 (v)(s)) μ1 (dv)ds 0

t

C 2p

+ D2 ∫ E(∫ f (x1 (u)(s), x1 (v)(s))μ(dv) − ∫ f (x1 (u)(s), x1 (v)(s))μ2 (dv)) ds. 0

C

C

Let us estimate the last summand: 2p

E(∫ f (x1 (u)(s), x1 (v)(s))μ1 (dv) − ∫ f (x1 (u)(s), x1 (v)(s))μ2 (dv)) C

C 2p

≤ D3 E ∫(x1 (π1 (v))(s) − x1 (π2 (v))(s)) μ(dv). C

Now 2p

(x1 (π1 (v))(s) − x1 (π2 (v))(s))

t

2p 󵄩 󵄩2p ≤ D4 󵄩󵄩󵄩π1 (v) − π2 (v)󵄩󵄩󵄩 + D4 ∫(x1 (π1 (v))(s) − x1 (π2 (v))(s)) ds 0

4.2 The equations driven by the random measures t

� 101

2p

+ D4 ∫(b(x1 (π1 (v))(s) − b(x1 (π2 (v))(s)))dw(s)) . 0

Using the stochastic Fubini theorem and Itô’s formula, t

2p

E ∫ ∫(b(x1 (π1 (v))(s) − b(x1 (π2 (v))(s)))dw(s)) μ(dv) C 0

t

s

2p

≤ D5 ∫ E ∫ ∫(b(x1 (π1 (v))(τ) − b(x1 (π2 (v))(τ)))dw(τ)) μ(dv)ds ≤ ⋅ ⋅ ⋅ 0

t

C 0 2

≤ D6 ∫ E ∫(x1 (π1 (v))(s) − x1 (π2 (v))(s)) μ(dv)ds. 0

C

For p = 1, we can get from the Gronwall–Bellman lemma, 2 󵄩 󵄩2 E ∫(x1 (π1 (v))(s) − x1 (π2 (v))(s)) μ(dv) ≤ D7 E ∫󵄩󵄩󵄩π1 (v) − π2 (v)󵄩󵄩󵄩 μ(dv). C

C

So, for the arbitrary p ≥ 1, 2 󵄩 󵄩2 󵄩 󵄩2p E ∫(x1 (π1 (v))(t) − x1 (π2 (v))(t)) μ(dv) ≤ D8 E ∫󵄩󵄩󵄩π1 (v) − π2 (v)󵄩󵄩󵄩 + 󵄩󵄩󵄩π1 (v) − π2 (v)󵄩󵄩󵄩 μ(dv). C

C

Now the statement of the lemma can be obtained with the usual arguments. This lemma shows that for the certain adapted measures the solution corresponding to the finite-dimensional approximations can have the limit. This means the existence of the solutions for the infinite-dimensional measure in the certain sense. Note that under the conditions of Lemma 4.2.1 we have the following estimation ∀p ≥ 1 ∃Mp > 0 : ∀u1 , u2 ∈ C: 2p

E sup(xi (u1 )(t) − xi (u2 )(t)) [0;1]

≤ Mp ‖u1 − u2 ‖2p ,

i = 1, 2.

Now we can consider equation (4.12) for the arbitrary (not necessary finite-dimensional) random adapted measure μ. Definition 4.2.1. The random adapted map x : C → C, which satisfies the conditions of Theorem 4.1.1 is the solution to equation (4.10) if for every u ∈ C the integral form (4.10) holds, where the integral relative to μ is defined via Definition 4.1.3. The following statement is true.

102 � 4 Stochastic differential equations driven by the random measures Theorem 4.2.2. If the random adapted measure μ is such that ∃α > 0 : E ∫ exp(α‖u‖2 )μ(du) < +∞ C

then equation (4.12) has the solution, which is unique. Proof. Let us consider the sequence of finite-dimensional linear operators {πn ; n ≥ 1} as before. Then for every n ≥ 1 there is the solution of (4.10) xn corresponding to the measure μn = μ ∘ πn−1 . It follows from Lemma 4.2.1 that for every p ≥ 1 there is such Cp > 0 that ∀m, n ≥ 1 ∀u ∈ C: 2p

E sup(xm (u)(t) − xn (u)(t)) [0;1]

≤ Cp E ∫(‖πn v − πm v‖2 + ‖πn v − πm v‖2p)μ(dv), C

∀u1 , u2 ∈ C: E sup(xn (u1 )(t) − xn (u2 )(t))

2p

[0;1]

≤ Cp ‖u1 − u2 ‖2p .

Hence, there exists the random map x : C → C such that ∀p ≥ 1 ∀u ∈ C: E sup(xn (u)(t) − x(u)(t))

2p

→ 0,

2p

≤ Cp ‖u1 − u2 ‖2p .

[0;1]

n → ∞,

∀u1 , u2 ∈ C: E sup(x(u1 )(t) − x(u2 )(t)) [0;1]

It can be verified using the Fatou lemma that ∀u ∈ C ∀m, n ≥ 1: 2p

E sup(∫ f (x(u)(t), x(πm v)(t))μ(dv) − ∫ f (x(u)(t), x(πn v)(t))μ(dv)) [0;1]

C

C

≤ Cp E ∫(‖πm v − πn v‖2 + ‖πm v − πn v‖2p )μ(dv). C

So, the integral from x with respect to μ is well-defined. Now, using the usual limit arguments, we can check that x is the solution to (4.10). The uniqueness of x can be verified considering the difference between another solution and xn similar to Lemma 4.2.1. The theorem is proved. Let us consider the properties of the solution in the some partial cases of (4.10). First, note that the following fact takes place.

4.2 The equations driven by the random measures �

103

Remark. Let x be the solution to (4.10) under the conditions of Theorem 4.2.2. Then for the every adapted random measure μ, which satisfies the technical restriction, 2

∃α > 0 : E ∫ exp(α‖u‖) ν(du) < +∞ C

the integral ∫ x(u)(t)ν(du) C

is well-defined. Proof. Let {πn ; n ≥ 1} be the sequence of the finite-dimensional linear operators with the above mentioned properties. Consider the difference x(πn u)(t) − x(πm u)(t). It has been mentioned after the proof of Lemma 4.2.1 that the solution x has the property, ∀p ≥ 1 : ∃Cp > 0 : ∀u1 , u2 ∈ C: 2p

E sup(x(u1 )(t) − x(u2 )(t)) [0;1]

≤ Cp ‖u1 − u2 ‖2p .

Hence, similar to the proof of Lemma 4.2.1, 󵄨 󵄨2p E ∫󵄨󵄨󵄨x(πn u)(t) − x(πm u)(t)󵄨󵄨󵄨 ν(du) C

≤ D1 E ∫ ‖πn u − πm u‖2p ν(du) C

t

󵄨 󵄨2p + D1 E ∫ ∫󵄨󵄨󵄨x(πn u)(s) − x(πm u)(s)󵄨󵄨󵄨 ν(du)ds 0 C

t

2p

+ D1 E ∫(∫(b(x(πn u)(s)) − b(x(πm u)(s)))dw(s)) ν(du) C

0

≤ ⋅ ⋅ ⋅ ≤ D1 E ∫ ‖πn u − πm u‖2p ν(du) t

C

󵄨 󵄨2p + D1 E ∫ ∫󵄨󵄨󵄨x(πn u)(s) − x(πm u)(s)󵄨󵄨󵄨 ν(du)ds 0 C t

󵄨 󵄨2 + D2 E ∫ ∫󵄨󵄨󵄨x(πn u)(s) − x(πm u)(s)󵄨󵄨󵄨 ν(du)ds. 0 C

104 � 4 Stochastic differential equations driven by the random measures Putting first p = 1 and using the Gronwall–Bellman lemma, one can get 󵄨2p 󵄨 E ∫󵄨󵄨󵄨x(πn u)(t) − x(πm u)(t)󵄨󵄨󵄨 ν(du) ≤ D3 E ∫ ‖πn u − πm u‖2 + ‖πn u − πm u‖2p ν(du). C

C

This inequality shows the desired statement. Note that the same estimation can be obtained for 󵄨 󵄨2p E sup ∫󵄨󵄨󵄨x(πn u)(t) − x(πm u)(t)󵄨󵄨󵄨 ν(du). [0;1]

C

Now let us consider the concrete examples. Example 4.2.3. Let the random measure μ in (4.10) be the same as in Example 4.2.2, i. e., n

μ = ∑ ak δηk k=1

where {ηk ; k = 1, . . . , n} are the adapted random processes. Assume that for certain α > 0 the following relation holds: ∀k = 1, . . . , n: E exp α‖ηk ‖2 < +∞. Then μ satisfies the condition of Theorem 4.2.2. Let x be a solution to (4.10) with the measure μ. It follows from the previous considerations that the following integrals are well-defined: ∫ x(u)(t)δηk (du),

k = 1, . . . , n.

C

It is natural to denote these integrals by x(ηk )(t), k = 1, . . . , n. Using the stochastic Fubini theorem, we get that x(ηk ) satisfies the equation n

dx(ηk )(t) = ∑ aj f (x(ηk )(t), x(ηj )(t))dt + b(t, x(ηk )(t))dw(t). j=1

Hence, for the random processes x(ηk ), k = 1, . . . , n, we get the usual system of stochastic differential equations. The more deep examples of the equations like (4.10) and its solutions will be considered in the next chapter under the investigation of the stationary solutions.

4.3 Random measures as solutions to stochastic differential equations

� 105

4.3 Random measures as solutions to stochastic differential equations In this section, we are trying to use random measure as a strong solution to the stochastic differential equation in the case when equation has no strong solution in common sense. To explain the idea, let us consider the following example. Example 4.3.1. Suppose that {ξn ; n ∈ ℤ} are independent and uniformly distributed on [0; 1] random variables. Consider the difference equation: xn+1 = {xn + ξn },

n ∈ ℤ.

(4.15)

Here, {y} mean the fractional part of y for y ∈ ℝ. Define for n ∈ ℤ, ℱn = σ(ξk ; k ≤ n).

Note that (4.15) has solutions adapted to the flow ℱn , n ∈ ℤ. Indeed, put x0 = 0 and calculate xn for n ≥ 1 by the formula (4.15) and take xn = {xn+1 − ξn } for n < 0. Let us check that (4.15) has no solution with the following two properties: (1) {(xn , ξn ); n ∈ ℤ} is the stationary sequence. (2) xn is ℱn -measurable for every n ∈ ℤ. To prove the absence of such solution, let us suppose for a moment that it exists and get a contradiction. Put yn = ei2πxn ,

n ∈ ℤ.

Then ∀n ∈ ℤ :

yn+1 = yn ei2πξn+1 .

By our assumption, {yn ; n ∈ ℤ} is stationary and ℱ -adapted. Hence, ∀n ∈ ℤ: Eyn+1 = Eyn Eei2πξn+1 = 0. Then ∀m ≤ n E(yn /ξm , . . . , ξn ) = Eyn−m ei2π(ξm +⋅⋅⋅+ξn ) = 0. But

106 � 4 Stochastic differential equations driven by the random measures yn = lim E(yn /ξm , . . . , ξn ). m→−∞

Consequently, yn = 0, n ∈ ℤ. This is a contradiction. If we omit condition (2), then the stationary solution (4.15) can be found easily. Indeed, take η the uniformly distributed on [0; 1] random variable and put x0 = η. Then calculate other xn , n ∈ ℤ using the equation. It can be checked that xn is uniformly distributed on [0; 1]. Moreover, for n = 1, α ∈ [0; 1], P{x1 < α/ξ} = P{{η + ξ1 } < α/ξ} = P{η < α} = P{x1 < α}. Consequently, x1 is independent from the sequence ξ. The same can be proved for every n ∈ ℤ. Hence for n ∈ ℤ, m ≥ 0 the random vector {(x0 , ξ0 ), . . . , (xm , ξm )} and {(xn , ξn ), . . . , (xn+m , ξn+m )} are equidistributed. Exercise 4.3.1. Prove that {(xn , ξn ); n ∈ ℤ} is unique in distribution sequence, such that (4.15) holds and for every n ∈ ℤ xn is independent from {ξn+1 , ξn+2 , . . .}. The previous exercise means that the weak stationary solution to (4.15) is unique. Note that xn is not measurable with respect to ℱn = σ(ξk ; k ≤ n). Here, we propose to substitute {xn ; n ∈ ℤ} by the random measure μ, which is defined as the following: iμ({yn ; n ∈ ℤ} : yk ∈ Δk , . . . , yk+m ∈ Δk+m ) = P(xk ∈ Δk , . . . , xk+m∈Δm )/ξk , . . . , ξk+m . (4.16) Here, Δk , . . . , Δk+m are Borel subsets of [0; 1]. Due to the structure of {(xn , ξn ); n ∈ ℤ}, the following equality holds: P(xk ∈ Δk , . . . , xk+m ∈ Δm /ξk , . . . , ξk+m ) = P(xk ∈ Δk , . . . , xk+m ∈ Δm /ξ).

(4.17)

This means that (4.17) is family of consistent distributions and the random measure μ is well-defined. It is easy to see that μ is a stationary and adapted to the flow (ℱ )n∈ℤ in a sense of Section 1.4. Let us see what kind of equation is satisfied by the measure μ. Consider characteristic for finite-dimensional distributions of μ. Denote by X the set of sequences [0; 1]ℤ . It follows from (4.15) that m

∫ e2πi ∑k=0 λk yk+n μ(dy) X m

= (e2πi ∑k=0 λk xk+n /ξ) m−1

= e2πiλn+m ξn+m (e2πi ∑k=0 λk xk+n /ξ) m−1

= e2πiλn+m ξn+m ∫ e2πi ∑k=0 λk yy+n μ(dy). X

(4.18)

4.3 Random measures as solutions to stochastic differential equations

� 107

The obtained relation allows to find finite-dimensional distributions of μ subsequently starting from one-dimensional, which is simply uniform on [0; 1]. Consequently, (4.18) can be considered as a substitution of (4.15) for random stationary solution μ. Note that (4.18) is similar to the Hopf equation for space-time statistical solution to the Navier–Stokes equation [93]. Now consider the general situation starting from the difference equations. Consider the difference equation xn+1 = φ(xn , ξn+1 ),

n ∈ ℤ,

(4.19)

where {ξn ; n ∈ ℤ} are independent and equidistributed random variables, φ : ℝ2 → ℝ is a Borel function. The previous example shows us that the stationary solution to (4.19) adapted to the flow of σ-fields generated by {ξn } not always exists. Exercise 4.3.2. Prove that equation 1 xn+1 = xn + ξn+1 , 2

n ∈ ℤ,

has a stationary adapted solution under the assumption E|ξ1 | < +∞. Definition 4.3.1. Stationary random measure μ on ℝℤ is called by the strong solution to equation (4.19), if: (1) μ is adapted to the flow generated by {ξn }. (2) For arbitrary m ≥ 1, n ∈ ℤ, λ1 , . . . , λm , ρ ∈ ℝ the following relation holds: m

∫ exp{i ∑ λk un+k + iρun+m+1 }μ(du) k=1

ℝℤ

m

= ∫ exp{i ∑ λk un+k + iρφ(un+m , ξn+m+1 )}ξμ(du). ℝℤ

k=1

The next theorem describes the connection between the strong measure-valued and weak usual solution to (4.19). Theorem 4.3.1. Equation (4.19) has a stationary weak solution if and only if it has a strong stationary measure-valued solution. In case if the weak stationary solution is unique, that stationary measure-valued solution also is unique.

108 � 4 Stochastic differential equations driven by the random measures Proof. Suppose that {xn } is a weak stationary solution to (4.19). Define the random measure on ℝℤ by its finite-dimensional distributions, μ({u : (un , . . . , un+m ) ∈ Δ}) = P{(xn , . . . , xn+m ) ∈ Δ/ξ}. The existence of the regular version of conditional probability follows from the fact that ℝℤ is a complete separable space with the distance corresponding to pointwise convergence. The stationarity of μ follows from the stationarity of sequence {(xn , ξn )}. Check that μ is adapted. In addition to σ-fields, ℱn = σ{ξk ; k ≤ n}

consider Gn = σ{(xk , ξk ); k ≤ n}. By the definition of the weak solution for every n ∈ ℤ, the σ-fields Gn and σ{ξk ; k > n} are independent. Let us check that for every integrable random variable ζ , which is measurable with respect to {ξn }, the following relation holds: E(ζ /Gn ) = E(ζ /ℱn ).

(4.20)

To see this, it is enough to consider ζ of the kind, m

ζ = ∏ f (ξk ), k=1

where 1 ≤ n ≤ m, fk , k = 1, . . . , m are bounded Borel functions. In this case, (4.20) follows from the above mentioned independency of Gn and σ(ξj ; n < j). Now take the bounded cylindrical function h on ℝℤ , which depends on the coordinates with numbers not greater then n. Define the random variable: α = ∫ h(u)μ(du) = M(h(x)/ξ). ℝℤ

Then for every bounded random variable ζ , which is measurable, ζ is measurable with respect {ξn }, Eαζ = Eh(x)ζ = EE(h(x)ζ /Gn ) = Eh(x)E(ζ /Gn ) = Eh(x)Eh(x)E(ζ /ℱn ). Hence, α = E(h(x)/ℱn ).

4.3 Random measures as solutions to stochastic differential equations

� 109

Consequently, α is ℱn -measurable. The relationship (4.20) can be easily checked. Then μ is a solution to (4.20) in a sense of Definition 4.3.1. Now suppose that random measure μ is a measure-valued solution to (4.19). Consider (possibly on an extended probability space) the sequence of random variables {xn } such that for every Δ ⊂ ℝℤ , P{x ∈ Δ/ξ} = μ(Δ). Since μ satisfies (4.19), then ∀n ∈ ℤ, ρ ∈ ℝ: μ{u : exp iρun+1 = exp iρφ(un , ξn+1 )} = 1.

(4.21)

Consequently, ∀n ∈ ℤ: P{xn+1 = φ(xn , ξn )} = Eμ{u : un+1 = φ(un , ξn+1 )} = 1. Also, ∀k ≥ 0, n ∈ ℤ: P{(xn−k , . . . , xn ) ∈ Δ}

= Eμ{u : (un−k , . . . , un ) ∈ Δ},

P{(xn−k , . . . , xn ) ∈ Δ/ξn+1 , . . .}

= E(E(1xn−k , . . . , xn ) ∈ Δ/ξ)/ξn+1 , . . .) = Eμ{u : (un−k , . . . , un ) ∈ Δ} = P{(xn−k , . . . , xn ) ∈ Δ}.

Since Δ is arbitrary, then the “past” of {(xn , ξn )ξn } is independent from the “future” of {ξn }. Hence, {(xn , ξn )} is a weak stationary solution. The simultaneous uniqueness of weak and measure-valued stationary solutions follows from the previous contractions of μ and {xn }. The theorem is proved. Now consider the case of the continuous time parameter. We will consider onedimensional stochastic differential equations. Consider the following Cauchy problem: dx(t) = a(xt , t)dt + b(xt , t)dw(t), x(0) = x0 .

(4.22)

Here, x is an unknown random continuous function on [0; 1]. Coefficients a, b are measurable functions acting from C([0; 1]) × [0; 1] to ℝ, xt (s) = x(t ∧ s),

s ∈ [0; 1],

110 � 4 Stochastic differential equations driven by the random measures and w is a standard Wiener process. Let us recall the definition of the weak solution to (4.22). ̃ ) defined on a certain Definition 4.3.2. The pair of continuous random processes (x̃, w ̃ℱ ̃ is called by the weak solution to (4.22) if: ̃, P) probability space (Ω, ̃ is a square-integrable martingale with characteristics ⟨w ̃ ⟩(t) = t, t ∈ [0; 1] with (1) w respect to joint filtration, ̃ ̃ (s); s ≤ t), ̃t = ∑(X(s), W ℱ

t ∈ [0; 1].

(2) The integral form of (4.22) holds. In contrast to the strong solution, the weak solution is not functional from the initial noise. Similar to the discrete time case, one can define the measure-valued solution to (4.22). Definition 4.3.3. The random measure μ on C([0; 1]) is called by the solution to the Cauchy problem (4.22) if the following conditions hold: (1) μ adapted to the filtration generated by w. (2) μ({u : u(0) = x0 }) = 1 a. s. (3) ∀0 ≤ t1 < t2 < ⋅ ⋅ ⋅ < tn+1 ≤ 1 ∀λ1 , . . . , λn+1 ∈ ℝ; n+1

∫ exp i{ ∑ λk u(tk )}μ(du) k=1

C

n

= ∫ exp i{ ∑ λk u(tk ) + λk+1 u(tk+1 )}μ(du) C

k=1

tn+1

n

+ iλn+1 ∫ ∫ exp i{ ∑ λk u(tk ) + λn+1 u(s)}a(us , s)μ(du)ds tn C

k=1

tn+1

n λ2 − n+1 ∫ ∫ exp i{ ∑ λk u(tk ) + λn+1 u(s)}b2 (us , s)μ(du)ds 2 k=1 tn C

tn+1

n

+ in+1 ∫ ∫ exp i{ ∑ λk u(tk ) + λn+1 u(s)}b(us , s)μ(du)dw(s). tn C

k=1

Let us prove the analog of Theorem 4.3.1, which allows us to construct measurevalued solutions.

4.3 Random measures as solutions to stochastic differential equations

� 111

Theorem 4.3.2. The problem (4.22) has a measure-valued solution if and only if it has a weak solution. Both measure-valued and a weak solution have the uniqueness property simultaneously. Proof. The proof follows the main steps from the proof of Theorem 4.3.1 but due to continuous time and the presence of stochastic integrals some new technical difficulties arise. Hence, we will present the main ideas and new moments in the proof. Suppose that (4.22) has a weak solution (x, w). Define the random measure μ as a condition distribution x under fixed w. For arbitrary 0 ≤ t1 < ⋅ ⋅ ⋅ < tn+1 ≤ 1, consider n+1

∫ exp i{ ∑ λk u(tk )}μ(du) C

k=1

n+1

= E(exp i{ ∑ λk x(tk )}/μ) k=1 n

= E(exp i{ ∑ λk x(tk ) + λn+1 x(tn )}/μ) k=1

tn +1

n

+ iλn+1 E( ∫ exp i{ ∑ λk x(tk ) + λn+1 x(s)}a(xs , s)ds/μ) k=1

tn

tn +1

n

+ iλn+1 E( ∫ exp i{ ∑ λk x(tk ) + λn+1 x(s)}b(xs , s)dw(s)/w) k=1

tn

tn +1

n 1 − λ2n+1 E( ∫ exp i{ ∑ λk x(tk ) + λn+1 x(s)}b2 (xs , s)ds/w). 2 k=1

(4.23)

tn

In the obtained sum, first and third summands can be easily rewritten in terms of measure μ. To do the same with the second, we need to check that μ is adapted random measure. Consider set Δ ⊂ C([0; 1]), which belongs to σ-field generated by the coordinate functionals up to the moment t. Then for arbitrary bounded random variable α, which is measurable with respect to w, one can write Eμ(Δ)α = EE(1Δ (x)/w)α

= E 1Δ (x)α = EE(1Δ (x)α/ℱt )

= E 1Δ (x)E(α/ℱt ) = E 1Δ (x)E(α/Gt ).

Here, ℱt = σ(w(s), x(s); s ≤ t),

Gt = σ(w(s); s ≤ t).

(4.24)

112 � 4 Stochastic differential equations driven by the random measures It follows from (4.24) that μ(Δ) = E(μ(Δ)/Gt ). Consequently, μ is adapted. Now the second summand from the right-hand side of (4.24) can be written as tn +1

n

iλn+1 E( ∫ exp i{ ∑ λk x(tk ) + λn+1 x(s)}b(xs , s)dw(s)/w) k=1

tn

tn +1

n

= iλn+1 ∫ ∫ exp i{ ∑ λk u(tk ) + λn+1 u(s)}b(us , s)dw(s)μ(du). C tn

k=1

Now it is desirable to interchange the places of dw(s) and μ(du). Since μ is adapted, then we can use Theorem 4.3.2. It allows us to do this for the good functions b, but after that one can use approximation arguments. So, μ appears to be a measure-valued solution to (4.23). Now let (4.23) have a measure-valued solution μ. Consider a pair of random processes (x, w) such that w is a Wiener process and the conditional distribution of x under fixed x is μ. Prove that (x, w) is a weak solution to (4.23). First, prove that the increments of w after t do not depend on the σ-field, ℱt = σ(x(s), w(s); s ≤ t).

Take the random variable β measurable with respect to increments of w after t. Then for s1 , . . . , sn ≤ t and bounded measurable φ : nℝ2n → ℝ, Eφ(x(s1 ), . . . , x(sn )w(s1 ), . . . , w(sn ))

= EE(φ(x(s1 ), . . . , x(sn )w(s1 ), . . . , w(s1 ), . . . , w(sn ))β/w)

= Eβ ∫ φ(u(s1 ), . . . , u(sn ), w(s1 ), . . . , w(sn ))μ(du) C

= EE(β ∫ φ(u(s1 ), . . . , u(sn ), w(s1 ), . . . , w(sn ))μ(du)/Gt ) C

= Eβ ⋅ E ∫ φ(u(s1 ), . . . , u(sn ), w(s1 ), . . . , w(sn ))μ(du) C

= EβEφ(x(s1 ), . . . , x(sn ), w(s1 ), . . . , w(sn )). Here, as before Gt = σ(w(s); s ≤ t).

4.3 Random measures as solutions to stochastic differential equations

� 113

Hence, the “future” increments of w do not depend on the “past” of (x, w). Let us check that x satisfies the integral form of (4.23). Using the same arguments as before, one can check that for arbitrary function φ : C([0; 1]) × ℝ → ℝ under smoothness conditions with respect to the second variable the following relation holds: ∀o ≤ t1 ≤ t2 ≤ 1 ∫ φ(ut1 , u(t2 ))μ(du) = ∫ φ(ut1 , u(t1 ))μ(du) C

C

t2

+ ∫ ∫ φ′2 (ut1 , u(s))a(us , s)dsμ(du) C t1

t2

+ ∫ ∫ φ′2 (ut1 , u(s))b(us , s)dw(s)μ(du) C t1

t2

1 2 + ∫ ∫ φ′′ 22 (ut1 , u(s))b(us , s) dsμ(du). 2

(4.25)

C t1

Let use (4.25) to check that x satisfies the integral form of (4.23). For simplicity, suppose that x0 = 0. Then 2

t

t

E(x(t) − ∫ a(xs , s) − ∫ b(xs , s)sw(s)) 0

0

t

t

0

0

= E(x(t)2 − 2x(t) ∫ a(xs , s)ds − 2x(t) ∫ b(xs , s)dw(s) t

t

+ 2 ∫ a(xs , s)ds ∫ b(xs , s)dw(s) 0

t

0 2

2

t

+ (∫ a(xs , s) + (∫ b(xs , s)dw(s)) ). 0

0

Calculate the expectation for every summand separately. For the first one, Ex(t)2 − EE(x(t)2 /w) = E ∫ u(t)2 μ(du) C

t

= 2E ∫ ∫ u(s)a(us , s)dsμ(du) C 0

114 � 4 Stochastic differential equations driven by the random measures t

2

+ E ∫ ∫ b(us , u(s)) dsμ(du) C 0

t

+ 2E ∫ ∫ u(s)b(us , u(s))dw(s)μ(du) C 0 t

= 2E ∫ ∫ u(s)a(us , s)dsμ(du) C 0

t

2

+ E ∫ ∫ b(us , u(s)) dsμ(du)). C 0

Here, we use Theorem 4.3.2 and the properties of the Itô integral. For the second summand, t

t

Ex(t) ∫ a(xs , s)ds − E ∫ u(t) ∫ a(us , s)dsμ(du) 0

C

t

0

= E ∫ u(t)a(us , s)μ(du)ds 0

t

= E ∫ ∫ u(s)a(us , s)μ(du)ds 0 C

t

t

+ E ∫ ∫ ∫ a(ur , r)dra(us , s)μ(du)ds 0 C s t

t

+ E ∫ ∫ ∫ b(ur , r)dw(r)a(us , s)μ(du)ds 0 C s

t

t

1 = E ∫ ∫ u(s)a(us , s)dsμ(du) + E ∫(∫ a(us , s)ds)2μ(du). 2 C 0

C

0

Using the difference approximation, one can check that t

t

t

t

0

C 0

0

C 0

Ex(t) ∫ b(xs , s)dw(s) = E ∫ ∫ a(ur , r)dr ∫ b(us , s)μ(du) + E ∫ ∫ b(ur , r)2 drμ(du). Proceeding in the same way, one check that t

t

2

E(x(t) − ∫ a(xs , s)ds − ∫ b(xs , s)ds) = 0. 0

0

4.3 Random measures as solutions to stochastic differential equations

� 115

Consequently, x is a weak solution to (4.23). Uniqueness is considered similar to the discrete time case. The theorem is proved. Consider an example when the measure-valued solution can be found explicitly. Example 4.3.2. On the time interval [0; 1], consider the following Cauchy problem: dx(t) = sign(x(t)+)dw(t) { x(0) = 0.

(4.26)

Here, 1, (r+) = { −1,

r ≥ 0, r < 0.

It is well known that in [53] (4.26) has only a weak solution. Consider this solution (x, w) and try to find measure-valued solution μ as a conditional distribution x with respect to w. It can be checked that 󵄨󵄨 󵄨 󵄨󵄨x(t)󵄨󵄨󵄨 = w(t) − min w(s). [0;t]

(4.27)

Indeed, from the Tanaka formula [53] one can have 󵄨 󵄨 w(t) = 󵄨󵄨󵄨x(t)󵄨󵄨󵄨 − lx (t),

(4.28)

where lx (t) is a local time of the process x at the point 0. Since lx is nondecreasing, then it follows from (4.28), lx (t) ≥ − min w(s), [0;t]

and 󵄨󵄨 󵄨 󵄨󵄨x(t) ≥ w(t)󵄨󵄨󵄨 − min w(s). [0;t] Since |x(t)| and w(t) − min[0;t] w(s) are equidistributed [53], then taking the expectation one can check that (4.27) is true. Consequently, knowledge about w gives us information about the position and form of excursions of x but not about their sign. Since x is a Wiener process itself and Wiener excursions have symmetric distribution in C([0; 1]) [53], then the random measure μ is constructed as follows. Let {φn ; n ≥ 1} be the set of excursions of w enumerated in some way. Consider the sequence {ε; n ≥ 1}. Let εn ; n ≥ 1 be a sequence of independent random variables taking values ±1 with probability 21 . Now μw is the distribution of the random function, which has excursions

116 � 4 Stochastic differential equations driven by the random measures {εn φn ; n ≥ 1} on the same places where w has {φn ; n ≥ 1}. To check this, let us recall [53] that under fixed positions, excursions of w are independent and have symmetric distribution in C([0; 1]).

4.4 Equations with the random measures on the space of trajectories In this section, we consider the following equation: dx(u)(t) = ∫ a(t, u, x(u)(t), x(v)(t))μ(dv)dt + b(t, x(u)(t)).

(4.29)

C

Here, as before, C is a short notation for the space C([0; 1]), u, v denote elements of this space and w is standard Brownian motion. The measure μ is random and adapted to the filtration of w. As the initial condition for (4.29), let us take x(u)(0) = x0 (u(0)),

u ∈ C.

(4.30)

In (4.29), w plays the role of random perturbations and the random measure μ is information about another system, which has connection to x. Consider the examples. Example 4.4.1. Suppose that μ is concentrated on the constants (i. e., on the functions, which remain to be constant on [0; 1]). Then μ must be deterministic. In this case for the restriction of x on constants, one can obtain the equation with interaction from Chapter 2. Then prove the existence of the solution to (4.29) and let us begin with the autonomous case and assume that μ is concentrated on the finite-dimensional subspace L ⊂ C. Theorem 4.4.1. Suppose that a, b, x0 satisfy the Lipshitz condition and b is bounded. The Cauchy problem (4.29), (4.30) has a solution, which is unique. Proof. Construct the sequence of successful approximations t

t

xn+1 (u)(t) = x0 (u) + ∫ ∫ a(xn (u)(s), xn (v)(s))μ(dv)ds + ∫ b(xn (u)(s))dw(s), n ≥ 0. 0 C

0

It can be checked that for every n, xn satisfy conditions of Section 4.1. Hence, the sequence {xn ; n ≥ 0} is well-defined. From the Gronwall–Bellman lemma, one can get 2

t

2

E(xm+1 (u)(t) − xm (u)(t)) ≤ C1 ∫ eC1 (t−s) E ∫(xm (v)(s) − xm−1 (v)(s)) μ(dv)ds. 0

C

4.4 Equations with the random measures on the space of trajectories

� 117

Now apply the stochastic Fubini theorem from Section 4.1. Using Itô’s formula, one can get t

2

2

(xm+1 (u)(t) − xm (u)(t)) ≤ C2 ∫(xm (u)(s) − xm−1 (u)(s)) ds 0

t

2

+ C3 ∫ ∫(xm (v)(s) − xm−1 (v)(s)) μ(dv)ds 0 C

t s

+ C4 ∫ ∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) 0 0

⋅ (b(xm (u)(s)) − b(xm−1 (u)(s)))dw(s). Let us integrate the obtained inequality with respect to μ using the Fubini theorem, 2

∫(xm+1 (u)(s) − xm (u)(s)) μ(du) C

t

2

≤ C5 ∫ ∫(xm (u)(s) − xm1 (u)(s)) μ(du)ds 0 C

t

s

+ C4 ∫ ∫(∫(b(xm (u)(τ)) − b(xm−1 (u)(τ)))dw(τ) 0 C

0

⋅ (b(xm (u)(s)) − b(xm−1 (u)(s))))μ(du)dw(s). Taking into account Theorem 4.1.1 and calculating the expectation, one can get 2

t

2

E ∫(xm+1 (u)(t) − xm (u)(t)) μ(du) ≤ C6 ∫ E ∫(xm (u)(s) − xm−1 (u)(s)) μ(du)ds. C

0

C

Similar estimations can be obtained not only for the second, but for larger moments. Then, using Burkholder–Davis–Gundy inequality, one can prove the existence of such random mapping x that 󵄨 󵄨2p E sup󵄨󵄨󵄨X(u)(s) − Xm (u)(s)󵄨󵄨󵄨 → 0, [0;1]

m → ∞,

∀u1 , u2 ∈ C: 󵄨 󵄨2p E sup󵄨󵄨󵄨X(u1 )(s) − X(u2 )(s)󵄨󵄨󵄨 ≤ Cp ‖u1 − u2 ‖2p . [0;1]

Consequently, X is integrable with respect to μ. It can be easily checked that X is a unique solution to (4.29). The theorem is proved.

118 � 4 Stochastic differential equations driven by the random measures Before considering the general case (i. e., when the measure μ has no finite-dimensional support), let us prove a priori estimation for the difference of solutions corresponding to different random measures. Let μ be an adapted random measure on C, π1 , π2 be a finite-dimensional operators, which were built in Section 4.1. Lemma 4.4.1. For all p ∈ 𝒩 , there exists a constant Cp such that for any solutions x1 , x2 , which correspond to initial condition x0 and measures μ ∘ π1−1 , μ ∘ π2−1 the following inequality holds: ∀u ∈ C: E sup(x1 (u)(t) − x2 (u)(t)) [0;1]

2p

≤ Cp E ∫(‖π1 v − π2 v‖2 + ‖π1 v − π2 v‖2 p)μ(dv). C

Proof. From the Burkholder–Davis–Gundy inequality, 2p

E sup(x1 (u)(s) − x2 (u)(s)) [0;t]

t

2p

≤ D1 ∫ E(x1 (u)(s) − x2 (u)(s)) ds 0

t

+ D1 ∫ E(∫ f (x1 (u)(s), x1 (v)(s))μ1 (dv) 0

C

2p

− ∫ f (x2 (u)(s), x2 (v)(s))μ2 (dv)) ds C

t

2p

≤ D2 ∫ E(x1 (u)(s) − x2 (u)(s)) ds 0

t

2p

+ D2 ∫ E ∫(x1 (v)(s) − x2 (v)(s)) μ1 (dv)ds 0

t

C 2p

+ D2 ∫ E(∫ f (x1 (u)(s), x1 (v)(s))(μ1 (dv) − μ2 (dv))) ds. 0

C

Consider every summand separately. Note that E(∫ f (x1 (u)(s), x1 (v)(s))(μ1 (dv) − μ2 (dv)))

2p

C 2p

≤ D3 E ∫(x1 (π1 v)(s) − x1 (π2 v)(s)) μ(dv). C

Now

4.4 Equations with the random measures on the space of trajectories

� 119

2p

(x1 (π1 v)(t) − x1 (π2 v)(t))

t

2p

2p

≤ D4 ‖π1 v − π2 v‖ + D4 ∫(x1 (π1 v)(s) − x1 (π2 v)(s)) ds 0

2p

+ D4 ((b(x1 (π1 v)(s)) − b(x1 (π2 v)(s)))dw(s)) . Using Itô’s formula and the stochastic Fubini theorem, one can get 2p

t

E ∫(∫(b(x1 (π1 v)(s)) − b(x1 (π2 v)(s))dw(s))) μ(dv) C

0

2p−2

s

t

≤ D5 ∫ E ∫(∫(b(x1 (π(v))(s1 ) − b(x1 (π2 v)(s1 ))))dw(s)) 0

t

C

μ(dv)ds ≤ ⋅ ⋅ ⋅

0 2

≤ D6 ∫ E ∫(x1 (π1 v)(s) − x1 (π2 v)(s)) μ(dv)ds. 0

C

Due to the Gronwall–Bellman lemma, 2

E ∫(x1 (π1 v)(t) − x1 (π2 v)(t)) μ(dv) ≤ D7 E ∫ ‖π1 v − π2 v‖2 μ(dv). C

C

Hence, for arbitrary p ≥ 1, 2p

E ∫(x1 (π1 v)(t) − x1 (π2 v)(t)) μ(dv) ≤ D8 E ∫(‖π1 v − π2 v‖2 + ‖π1 v − π2 v‖2p )μ(dv). C

C

The estimation of other summands and the end of proof is standard. The statement of the lemma gives us possibility to obtain the solution to (4.29) for general random measure μ. Theorem 4.4.2. Suppose that the adapted random measure μ satisfies condition: ∃α > 0 : E ∫ exp α‖u‖2 μ(du) < +∞.

(4.31)

C

Then (4.31) has a solution, which is unique. Proof. Note that under conditions of Lemma 4.4.1 for every p ∈ 𝒩 there exists Kp > 0 such that 2p

E sup(xi (u1 )(t) − xi (u2 )(t)) [0;1]

≤ Kp ‖u1 − u2 ‖2p .

(4.32)

120 � 4 Stochastic differential equations driven by the random measures Here, xi is a solution for μi = μ∘πi−1 . Consider a sequence {πn ; n ≥ 1} of finite-dimensional linear operators in C such that πn strongly converges to identity and maps μ into an adapted random measure. Let xn be a solution for the measure μ ∘ πn . It follows from Lemma 4.4.1 and inequality (4.32) that there exists a random mapping x : C → C such that for every p ∈ 𝒩 , E sup(xn (u)(t) − x(u)(t)) [0;1]

E sup(x(u1 )(t) − x(u2 )(t)) [0;1]

2p

2p

→ 0,

n → ∞,

≤ Kp ‖u1 − u2 ‖2p ,

u, u1 , u2 ∈ C.

In can be checked that integrals in (4.29) with respect to μ is well-defined for x. Then x is a solution to (4.29). Uniqueness of the solution can be proved exactly as in the finitedimensional way. The theorem is proved.

5 Stationary measure-valued processes 5.1 Weak compactness of measure-valued processes In this chapter, we consider stationary solutions to equations with interaction and related subjects. One of common ways to obtain a stationary solution is to construct it as a weak limit of other solutions. So, it is useful to find comfortable criteria of weak compactness for measure-valued processes. At our time, there are a lot of weak compactness criteria both for processes in general metric spaces and especially for measurevalued processes (see [9, 38, 39]). We propose criteria, which can be easily applied in case when the measure-valued process describes the transformation of the measure by some stochastic flow. Let us begin with the problem of weak compactness of random processes with the values in a space of decreasing to zero sequences. Denote by c0+ the set of sequences of real number (xk )k≥1 , which satisfy the following conditions: (1) xk ≥ xk+1 ≥ 0. (2) limk→∞ xk = 0. Define the uniform distance on c0+ : ∀(xk )k≥1 , (yk )k≥1 ∈ c0+ : d(x, y) = max |xk − yk |. k

(c0+ , d) is a complete separable metric space. The next lemma gives a compactness criteria in C([0; 1], c0+ ). Lemma 5.1.1. The closed set F is a compact in C([0; 1], c0+ ) if and only if the following conditions hold: (1) The set of all coordinate functions wtF = {φk ; φ ∈ F, k ≥ 1} ∪ {0} is compact in C([0; 1]). (2) ∀t ∈ [0; 1]: sup φk (t) → 0, F

k → ∞.

Here, {φ; k ≥ 1} is the sequence of coordinates of φ ∈ C([0; 1], c0+ ). Proof. Necessity. Let F be a compact set. Then for every ε > 0 there exists finite ε-net for F, i. e., such set {x i , i = 1, . . . , N} ⊂ C([0; 1], c0+ ), that ∀φ ∈ F ∃i = 1, . . . , N: d(x i , φ) < ε. Due to Dini’s theorem, https://doi.org/10.1515/9783110986518-005

122 � 5 Stationary measure-valued processes max max xki → 0,

i=1,...,N [0;1]

k → ∞.

Then there exists k0 such that max max xki 0 < ε.

i=1,...,N [0;1]

If k > k0 , take i0 such that i 󵄨 󵄨 max max󵄨󵄨󵄨φj (t) − xj 0 (t)󵄨󵄨󵄨 < ε. [0;1]

j≥1

Then by the choice of k0 , max φk (t) ≤ max φk0 (t) < 2ε. [0;1]

[0;1]

Hence, i i 󵄨 󵄨 max󵄨󵄨󵄨φk (t) − xk0 (t)󵄨󵄨󵄨 ≤ max [0; 1]φk (t) + max [0; 1]xk0 (t) < 3ε. [0;1]

Note also that F̃ is a closed set in C([0; 1]), which can be easily checked. Hence, F̃ is compact by Haussdorf criteria. Now check that condition (2) holds. Fix arbitrary t ∈ [0; 1] and ε > 0. Let {x i ; i = 1, . . . , N} be an ε-net for F. Choose the number k0 as before. Then for arbitrary φ ∈ F, i

φk0 (t) ≤ xk0 (t) + ε < 2ε, 0

where d(φ, x i0 ) < ε. Consequently, condition (2) holds. Sufficiency. Suppose that (1) and (2) hold true. Then the family F is uniformly equicontinuous. To check this note, by condition (1), now F̃ has such property. Then for arbitrary positive ε, there exists δ such that ∀t1 , t2 ∈ [0; 1],

|t1 − t2 | < δ;

∀φ ∈ F, k ≥ 1: 󵄨󵄨 󵄨 󵄨󵄨φk (t1 ) − φk (t2 )󵄨󵄨󵄨 < ε. Hence, ∀φ ∈ F:

(5.1)

5.1 Weak compactness of measure-valued processes

d(φ(t1 ), φ(t2 )) ≤ ε.

� 123

(5.2)

Consider for fixed t ∈ [0; 1], the set Ft = {φ(t) : φ ∈ F} ⊂ c0+ . Since F is closed, then Ft is closed in c0+ . It follows from condition (2) that Ft is compact. Now the fact that F is compact follows from the equicontinuity and from the compactness of Ft for all t ∈ [0; 1] in a standard way. The lemma is proved. As usual, when we know what are the compacts in the metric space, we can get the criteria of weak convergence in this space. Let {ξα ; α ∈ A} be a family of random elements in C([0; 1], c0+ ). Theorem 5.1.1. {ξα ; α ∈ A} is weakly compact if and only if the following conditions hold: (1) The family ξα,k , α ∈ A, k ≥ 1 is weakly compact in C([0; 1]); (2) ∀t ∈ [0; 1] ∀ε > 0: sup P{ξα,t > ε} → 0, α

k → ∞.

Proof. Necessity. Let {ξα ; α ∈ A} be weakly compact. Then, due to the Prokhorov theorem [4], for every ε > 0 there exists a compact K ⊂ C([0; 1], c0+ ) such that inf P{ξα ∈ K} ≥ 1 − ε.

α∈A

(5.3)

Consider a family ̃ = {φk ; φ ∈ K, k ≥ 1} ∪ {0}. K ̃ is a compact set in C([0; 1]). From (5.3), Due to Lemma 5.1.1, K ̃ ≥ 1 − ε. inf inf P{ξα,k ∈ K}

α∈A k≥1

Hence condition (1) of the theorem holds true. Also, from Lemma 5.1.1 for arbitrary t ∈ [0; 1], sup φk (t) → 0, K

k → ∞.

Consequently, there exists k0 such that sup φk0 (t) < ε. K

Then it follows from (5.3) that ∀k ≥ k0 , sup P{ξα,k (t) > ε} ≤ ε. Hence, condition (2) holds.

124 � 5 Stationary measure-valued processes Sufficiency. Suppose that conditions (1) and (2) of the theorem hold true. For arbitrary ε > 0, due to condition (1), there exists such compact set F̃ ⊂ C([0, 1]) that ̃ ≥ 1 − ε. inf inf P{ξα,k ∈ K} 3

α∈A k≥1

(5.4)

Since the family F̃ is uniformly equicontinuous, then there exists such partition t0 = 0 < t1 . . . < tn = 1 that ∀f ∈ F̃ ∀t ∈ [0; 1]: 󵄨 󵄨 ε min 󵄨󵄨󵄨f (t) − f (ti )󵄨󵄨󵄨 < . 2

i=0,...,n

(5.5)

Due to condition (2), there exists such number k0 that ε 1 sup P{ max ξα,k0 (ti ) > } < ε. i=0,...,n 2 3 α∈A

(5.6)

Then, from (5.4)–(5.6) one can get 2 sup P{max ξα,k0 (ti ) > ε} < ε. [0;1] 3 α∈A

(5.7)

̃ ⊂ C([0; 1]), ℝk0 that Due to condition (1), there exists such compact G ̃ > 1 − ε. inf P{ξα,1 , . . . , ξα,k0 ) ∈ G} 3

α∈A

(5.8)

Let us build compact set G in C([0; 1], c0+ ) as follows: ̃ φk +1 = φk +2 = ⋅ ⋅ ⋅ = 0}. G = {φ : (φ1 , . . . , φk0 ) ∈ G, 0 0 From (5.7) and (5.8), one can get that inf P{ρ(ξα , G) < ε} > 1 − ε.

α∈A

(5.9)

Here, as usual ρ(ξα , G) = inf max d(ξα (t), φ(t)). φ∈G [0;1]

It follows from (5.9) that family {ξα ; α ∈ A} is uniformly tight. Since, by Prokhorov criteria, {ξα , α ∈ A} is weakly compact. The theorem is proved. Now we are ready to discuss weak compactness of measure-valued processes. In the space M of all probability measures on ℝd , we will introduce the new distance related to the space c0+ . Consider two sequences of functions on ℝd {fk ; k ≥ 1} and {gk ; k ≥ 1} such that:

5.1 Weak compactness of measure-valued processes

� 125

(1) fk ∈ C0 (ℝd ), gk ∈ C(ℝd ), k ≥ 1. (2) ∀f ∈ C0 (ℝd ) ∀ε > 0 ∃k ≥ 1: 󵄨 󵄨 max󵄨󵄨󵄨f (x) − fk (x)󵄨󵄨󵄨 < ε. ℝd

(3) gk (x) ∈ [0; 1], x ∈ ℝd , gk (x) = 0,

‖x‖ ≤ k,

gk (x) = 1,

‖x‖ ≥ k + 1.

(4) ∀k ≥ 1 ∀u, v ∈ ℝd : 󵄨󵄨 󵄨 󵄨󵄨gk (u) − gk (v)󵄨󵄨󵄨 ≤ ‖u − v‖. Note that for every μ ∈ M, { ∫ gk (x)μ(dx); k ≥ 1} ∈ c0+ , ℝd

{ ∫ fk (x)μ(dx); k ≥ 1} ∈ ℝ∞ .

(5.10)

ℝd

We already defined distance in c0+ . In ℝ∞ , define the distance in a usual way 1 |xk − yk | . k 1x −y | | k k k=1 2 ∞

ρ((xk ), (yk )) = ∑

The mapping, which to every measure μ ∈ M put into correspondence sequences (5.10), is a continuous injection. More over, convergence in M is equivalent to convergence of these sequences. This is the reason why the following statement holds. Lemma 5.1.2. The set K ⊂ M is compact if and only if its images (5.10) are compact in c0+ and ℝ∞ . The proof of the lemma is left to the reader. Now we are able to formulate weak compactness criteria in C([0; 1], M). Theorem 5.1.2. The family {ξα , α ∈ A} of random elements in C([0; 1], M) is weakly compact if and only if the following conditions hold: (1) For every k ≥ 1, the family of random processes { ∫ fk dξα ; α ∈ A}, ℝd

is weakly compact in C([0; 1]).

126 � 5 Stationary measure-valued processes (2) The family { ∫ gk dξα ; k ≥ 1, α ∈ A} ℝd

is weakly compact in C([0; 1]). (3) ∀t ∈ [0; 1] ∀ε > 0: sup{ ∫ gk dξα (t) > ε} → 0, α∈A

k → ∞.

ℝd

Proof. Note that condition (1) is equivalent to weak compactness of the image of our family in C([0; 1], ℝ∞ ), and conditions (2) and (3) are equivalent to weak compactness in C([0; 1], c0+ ) due to Lemma 5.1.1. Now the statement of the theorem follows from Lemma 5.1.2. Now let us describe weak compactness in C([0; 1], Mn ) where Mn , as in Section 1.2, is a set of probability measures on ℝd with the finite nth moment and corresponding Wasserstein distance of order n. As it was noted in Section 1.2, the sequence {μk ; k ≥ 1} converges in Mn to μ if and only if: (1) {μk ; k ≥ 1} converges to μ in M (i. e., weakly). (2) supk≥1 ∫ℝd ‖u‖n μk (du) < +∞. (3) supk≥1 ∫‖u‖≥m ‖u‖n μk (du) → 0, m → ∞. So, Theorem 5.1.1 can be easily adapted for the space C([0; 1], Mn ). Let us substitute functions {gk ; k ≥ 1} by the functions hk (u) = gk (u)‖u‖n ,

u ∈ ℝd , k ≥ 1.

Then, similar to Theorem 5.1.2, one can formulate the next statement. Theorem 5.1.3. Family of random elements {ξα ; α ∈ A} is weakly compact in C([0; 1], Mn ) if and only if: (1) For every k ≥ 1, the family { ∫ fk dξα ; α ∈ A} ℝd

is weakly compact in C([0; 1]). (2) The family { ∫ hk dξα ; α ∈ A, k ≥ 1} ℝd

is weakly compact in C([0; 1]).

5.1 Weak compactness of measure-valued processes

� 127

(3) ∀t ∈ [0; 1] ∀ε > 0: sup P{ ∫ hk dξα (t) > ε} → 0, α∈A

k → ∞.

ℝd

The proof is left to the reader. Despite the similar arguments, the cases C([0; 1], M) and C([0; 1], Mn ) have one essential difference. It consists of properties of mappings from M and Mn to the space c0+ . In the first case, it is uniformly continuous. Indeed, for M uniform continuity follows from the inequality, 󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨 ∫ gk (u)μ(du) − ∫ gk (v)μ(dv)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 d d ℝ



󵄨󵄨 󵄨󵄨 󵄨 󵄨 = 󵄨󵄨󵄨∬(gk (u) − gk (v))ϰ(du, dv)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 d ℝ

≤ 2∬ ℝd

‖u − v‖ , 1 + ‖u − v‖ϰ(dudv)

which holds for arbitrary ϰ ∈ C(μ, ν). Hence, 󵄨󵄨 󵄨󵄨 󵄨 󵄨 sup󵄨󵄨󵄨 ∫ gk (u)μ(du) − ∫ gk (v)μ(dv)󵄨󵄨󵄨 ≤ 2γ(μ, ν). 󵄨 󵄨󵄨 k≥1 󵄨 d d ℝ



The next example shows that for n ≥ 1 uniform continuity does not hold. Example 5.1.1. Let d = 1, p ∈ 𝒩 . Consider two sequences of random variables {ξn ; n ≥ 1}, {ηn ; n ≥ 1} such that P{ξn = 0} = 1 −

1 , n

1

P{ξn = [n p ]} = 1

1

1

1 , n

ηn = [(n + n 2 )] p /[n p ]ξn . Then E|ξn − ηn |p → 0,

n → ∞.

But for k ≥ 1, 1

Eξnp gk (ξn )

{ 1 [n p ]p , = {n {0,

1

k ≤ [n p ] − 1 1

k > [n p ] − 1

128 � 5 Stationary measure-valued processes 1

1

{ 1 [(n + n 2 ) p ]p , Eηpn gk (ηn ) = { n {0,

1

1

1

1

k ≤ [(n + n 2 ) p ]p k > [(n + n 2 ) p ]p

Consequently, 󵄨 󵄨 sup󵄨󵄨󵄨Eξnp gk (ξn ) − Eηpn gk (ηn )󵄨󵄨󵄨 → 1, k≥1

n → ∞.

Denote by μn and νn the distributions of ξn and ηn . Then one can conclude from the above considerations that γp (μn , νn ) → 0, 󵄨󵄨 󵄨󵄨 󵄨 󵄨 sup󵄨󵄨󵄨 ∫ hk dμn − ∫ hk dνn 󵄨󵄨󵄨 → 1, 󵄨 󵄨󵄨 k≥1 󵄨 d d ℝ

n → ∞, n → ∞.



The noted property means that the sufficient Kolmogorov condition of the weak compactness must be adapted for the space C([0; 1], Mp ). Theorem 5.1.4. Suppose that the family {ξα ; α ∈ 𝒰 } of random elements in C([0; 1], Mn ) satisfies the following conditions: > 1 : ∀t1 , t2 ∈ [0; 1] ∀α ∈ 𝒰 : (1) ∃L > 0, a > n, (b+1)(n+(n−1)a) na2 a

Eγn (ξα (t1 ), ξα (t2 )) ≤ L|t1 − t2 |1+b . (2) ∀t ∈ [0; 1] ∀ε > 0: sup P{⟨ξα (t); hk ⟩ > ε} → 0,

α∈cU

k → ∞.

(3) supα∈𝒰 sup[0;1] Eγn (ξα (t), δ0 )n < +∞. Then the set {ξα ; α ∈ 𝒰 } is weakly relatively compact in C([0; 1], Mn ). Proof. First of all, note that the functions {fk ; k ≥ 1} from the beginning of this section can be chosen from the space C01 (ℝd ) of finite continuously differentiable functions. Fix k ≥ 1 and consider the set {⟨ξα ; fk ⟩; α ∈ 𝒰 } of the random processes in C([0; 1]). From condition (3) of the theorem, it follows that sup P{⟨ξα (0); fk ⟩ > C} → 0, α∈𝒰

C → +∞.

Now for t1 , t2 ∈ [0; 1] and α ∈ 𝒰 the inequality given below is true: a 󵄨 󵄨2 E 󵄨󵄨󵄨⟨ξα (t1 ); fk ⟩ − ⟨ξα (t2 ); fk ⟩󵄨󵄨󵄨 ≤ EK a γn (ξα (t1 ), ξα (t2 )) ,

where K is the Lipshitz constant for fk . For arbitrary μ, ν ∈ Mn and ϰ ∈ C(μ, ν),

(5.11)

5.1 Weak compactness of measure-valued processes

� 129

󵄨a 󵄨󵄨 󵄨󵄨⟨μ, fk ⟩ − ⟨ν, fk ⟩󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨a 󵄨 󵄨 = 󵄨󵄨󵄨∬(fk (u) − fk (v))ϰ(du, dv)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 ℝd

a

≤ K a (∬ |u − v|ϰ(du, dv)) ℝd a

a n

n

≤ K (∬ |u − v| ϰ(du, dv)) . ℝd

Hence, 󵄨󵄨 󵄨a a a 󵄨󵄨⟨μ, fk ⟩ − ⟨ν, fk ⟩󵄨󵄨󵄨 ≤ K γn (μ, ν) . Note that the same inequality can be obtained in the space M because a 󵄨󵄨a 󵄨󵄨 󵄨 󵄨󵄨 ̃a (∬ |u − v| ϰ(du, dv)) , 󵄨󵄨∬(fk (u) − fk (v))ϰ(du, dv)󵄨󵄨󵄨 ≤ K 󵄨󵄨 󵄨󵄨 1 + |u − v| d d ℝ



̃ depends on the Lipshitz constant of fk and on where K max |fk |. ℝd

Consequently, from condition (1) of the theorem we have that 󵄨 󵄨a E 󵄨󵄨󵄨⟨ξα (t1 ), fk ⟩ − ⟨ξα (t2 ), fk ⟩󵄨󵄨󵄨 ≤ K a L|t1 − t2 |1+b .

(5.12)

From (5.11) and (5.12), it follows that the family {⟨ξα , fk ⟩; k ≥ 1} is weakly relatively compact in C([0; 1]). So, condition (1) of Theorem 5.1.3 is fulfilled. In order to verify condition (2), take the sequence of the functions {gk ; k ≥ 1} satisfying the Lipshitz condition with constant 1. Now for μν ∈ Mn and ϰ ∈ C(μ, ν), 󵄨󵄨 󵄨 󵄨󵄨⟨μ, hk ⟩ − ⟨ν, hk ⟩󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨 = 󵄨󵄨󵄨∬(|u|n gk (u) − |v|n gk (u))ϰ(du, dv)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 ℝd

≤ L ∬ |u − v|(|u| ∨ |v|)

n−1

ϰ(du, dv)

ℝd n

1 n

n

≤ (∬ |u − v| ϰ(du, dv)) (∬(|u| ∨ |v|) ϰ(du, dv)) ℝd

Consequently,

ℝd

n−1 n

.

130 � 5 Stationary measure-valued processes

󵄨 󵄨󵄨 n n 󵄨󵄨⟨μ, hk ⟩ − ⟨ν, hk ⟩󵄨󵄨󵄨 ≤ Lγn (μ, ν)( ∫ |u| μ(du) + ∫ |v| ν(du)) ℝd

So, for δ =

na n+(n−1)a

n−1 n

.

ℝd

we have

󵄨 󵄨δ E 󵄨󵄨󵄨⟨ξα (t1 ), hk ⟩ − ⟨ξα (t2 ), hk ⟩󵄨󵄨󵄨 δ

n

≤ Lδ Eγn (ξα (t1 ), ξα (t2 )) ⋅ (γn (ξα (t1 ), δ0 ) + γn (ξα (t1 ), δ0 )) δ

1+b1

≤ L K1 |t1 − t2 |

n

(2 max Eγn (ξα (t), δ0 ) ) [0;1]

n−1 δ n

n−1 δ n

,

where b1 > 0. Hence, condition (2) of the Theorem 5.1.3 also holds. The theorem is proved. Remark. In the case n = 0, the last statement can be reformulated in such a manner. Theorem 5.1.5. Suppose that the family {ξα ; α ∈ 𝒰 } of random elements in C([0; 1], M) satisfies the following conditions: (1) ∃L > 0, a, b > 0 : ∀t1 , t2 ∈ [0; 1] ∀α ∈ 𝒰 : a

Eγn (ξα (t1 ), ξα (t2 )) ≤ L|t1 − t2 |1+b . (2) ∀t ∈ [0; 1] ∀ε > 0: sup P{⟨ξα , gk ⟩ > ε} → 0,

α∈cU

k → ∞.

The proof of this theorem is similar to the previous case. So, we leave it for the reader.

5.2 SDE on the real line. Existence of the stationary solution In this section, we study the stationary measure-valued processes related to the stochastic flow with interaction given in Chapter 2. Let us consider the following equation: dx(u, t) = a(x(u; t), νt )dt + ∫ b(x(u, t), νt , q)W (dq, dt), ℝd

νt = νs ∘ x(⋅, t) . −1

In Chapter 2, it was mentioned that this equation can be treated as a description of the evolution of the mass, which is carried by the infinite system of the particles in ℝd . The trajectories of the stochastic flow {x(u, ⋅); u ∈ ℝd } describe the motion of the particles, which start from the different points of the space. Note that this interpretation and the

5.2 SDE on the real line. Existence of the stationary solution



131

definition of the solution to (2.12) strongly depend on the initial point of time. So, to deal with the stationary case on the whole real line, we have to define the solution for (5.13) on the real line and formulate the corresponding definition of the evolutionary process. In this section, we introduce the next definition. Definition 5.2.1. The random process {μt ; t ∈ ℝ} is called the solution to (5.13) if the following conditions hold: (1) {μt ; t ∈ ℝ} is adapted to the flow of σ-fields generated by W . (2) For the every s ∈ ℝ, there is the solution to (5.13) with the initial measure μs at time s such that νt = μt

a. s.

for all t ≥ s. Consider an example in the deterministic situation. Example 5.2.1. Let equation (5.13) be of the following kind: dx(u, t) = x(u, t)dt − ∫ vνt (dv)dt.

(5.13)

ℝd

Find the solution to this equation corresponding to the initial measure ν0 ∈ M1 at the time s. Denote mt = ∫ vνt (dv). ℝd

Then dmt = 0. So, mt = ms , t ≥ s. Hence, x(u, t) = uet−s + ms (1t−s e ).

(5.14)

Define the measure-valued function {μt ; t ∈ ℝ} in the following way. Let for every t ∈ ℝ μt be a normal distribution on ℝd with the mean a and the covariation operator e2t Id. Here, Id is the identity matrix. When using (5.14), it is easy to check whether {μt ; t ∈ ℝ} is the solution to (5.13) in the sense of Definition 5.2.1. Exercise 5.2.1. Give another example of the solution to (5.13). The purpose of this section is to present the conditions under which the stationary measure-valued solution to (5.13) exists and to study the properties of such a solution. The following statement holds.

132 � 5 Stationary measure-valued processes Theorem 5.2.1. Let the coefficients a and b in (5.13) satisfy the conditions: (1) a(v, μ) = −αv + f (v, μ), α > 0. (2) f and b satisfy the Lipshitz condition with respect to both variables from ℝd and M. (3) α > max(12L, 6L2 (1 + α1 )), where L is the joint Lipshitz constant for f and b. Then (5.13) has the stationary solution, which is unique. Proof. Fix a certain measure ν ∈ M. Denote as xs (u, t) the flow satisfying (5.13) for the initial measure ν at the time s. Let {νts ; t ≥ s} be the corresponding measure-valued process. For s1 ≤ s2 < t1 < t2 , consider E sup γ(ντs1 , ντs2 ). [t1 ;t2 ]

In order to estimate this expectation, note that 󵄩 󵄩 γ(ντs1 , ντs2 ) ≤ ∫ φ(󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩)ν(du), ℝd

where φ(r) = r/1 + r. Consequently, E sup γ(ντs1 , ντs2 ) [t1 ;t2 ]

1

2 󵄩 󵄩 ≤ ( ∫ E sup φ(󵄩󵄩󵄩xs1 (u, τ) − xs2 (u, τ)󵄩󵄩󵄩)ν(du))

[t1 ;t2 ]

ℝd

1

2 󵄩 󵄩2 ≤ ( ∫ E sup 󵄩󵄩󵄩xs1 (u, τ) − xs2 (u, τ)󵄩󵄩󵄩 ν(du)) .

[t1 ;t2 ]

ℝd

Now 󵄩 󵄩2 E sup 󵄩󵄩󵄩xs1 (u, τ) − xs2 (u, τ)󵄩󵄩󵄩 [t1 ;t2 ]

󵄩 󵄩2 ≤ C1 E 󵄩󵄩󵄩xs1 (u, t1 ) − xs2 (u, t1 )󵄩󵄩󵄩 t

󵄩 󵄩2 + C1 ∫ E sup 󵄩󵄩󵄩xs1 (u, r) − xs2 (u, r)󵄩󵄩󵄩 dτ t1

[t1 ,τ]

t

2

+ C1 ∫ E sup γ(ντs1 , ντs2 ) dτ. t1

So,

[t1 ,τ]

5.2 SDE on the real line. Existence of the stationary solution

2 󵄩2 󵄩 E sup γ(ντs1 , ντs2 ) ≤ C2 E ∫ 󵄩󵄩󵄩xs1 (u, t1 ) − xs2 (u, t1 )󵄩󵄩󵄩 ν(du). [t1 ,t2 ]

ℝd

For further consideration, we are in need of the estimation for 󵄩2 󵄩 E 󵄩󵄩󵄩xs (u, t)󵄩󵄩󵄩 . Using the initial equation, we can get that 󵄩 󵄩2 E 󵄩󵄩󵄩xs (u, t)󵄩󵄩󵄩 ≤ 3‖u‖2 e−2α(t−s) t

+ 3E(∫ e t

s 󵄩 󵄩󵄩f (xs (u, τ), ντ )󵄩󵄩󵄩dτ)

2

−α(t−τ) 󵄩 󵄩

s

󵄩 󵄩2 + 3E ∫ ∫ 󵄩󵄩󵄩b(xs (u, τ), ντs , q)󵄩󵄩󵄩 dqe−α(t−τ) dτ. s ℝd

Hence, t

3 󵄩 󵄩2 󵄩 󵄩2 E 󵄩󵄩󵄩xs (u, t)󵄩󵄩󵄩 ≤ 3‖u‖2 e−2α(t−s) + E ∫ e−α(t−τ) 󵄩󵄩󵄩f (xs (u, τ), ντs )󵄩󵄩󵄩 dτ α s

t

󵄩 󵄩2 + 3E ∫ ∫ 󵄩󵄩󵄩b(xs (u, τ), ντs , q)󵄩󵄩󵄩 dqe−2α(t−τ) dτ s ℝd

2 −2α(t−s)

≤ 3‖u‖ e

t

t

6 󵄩 󵄩2 + ∫ e−α(t−τ) 󵄩󵄩󵄩xs (u, τ)󵄩󵄩󵄩 dτ α s

󵄩 󵄩2 + 6L2 E ∫ e−2α(t−τ) 󵄩󵄩󵄩xs (u, τ)󵄩󵄩󵄩 dτ t

+

s

6 󵄩 󵄩2 E ∫ e−α(t−τ) 󵄩󵄩󵄩f (0, ντs )󵄩󵄩󵄩 dτ α s

t

󵄩 󵄩2 + 6E ∫ ∫ 󵄩󵄩󵄩b(0, ντs , q)󵄩󵄩󵄩 dqe−2α(t−τ) dτ. s ℝd

Note that 󵄩 󵄩2 󵄩 󵄩2 M = sup(E 󵄩󵄩󵄩f (0, ντs )󵄩󵄩󵄩 + E ∫ 󵄩󵄩󵄩b(0, ντs , q)󵄩󵄩󵄩 dq) ∞. t≥s

Now

ℝd

+



133

134 � 5 Stationary measure-valued processes t

6 6 1 󵄩2 󵄩2 󵄩 󵄩 E 󵄩󵄩󵄩xs (u, t)󵄩󵄩󵄩 ≤ 3‖u‖2 e−2α(t−s) + M( 2 + ) + 6L2 (1 + ) ∫ e−α(t−τ) E 󵄩󵄩󵄩xs (u, τ)󵄩󵄩󵄩 dτ. 2α α α s

When iterating this inequality, it is possible to get the following estimation: t

1 2 6 6 6 6 󵄩2 󵄩 E 󵄩󵄩󵄩xs (u, τ)󵄩󵄩󵄩 ≤ 3‖u‖2 + M( 2 + )+(3‖u‖2 + M( 2 + )) ∫ e(6L (1+ α −α)(t−τ) dτ ≤ D(u), 2α α 2α α

s

where D(u) ≤ D1 + D2 ‖u‖2 . Consequently, for s1 < s2 , 󵄩 󵄩2 E 󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩

≤ 6(D(u) + ‖u‖2 )e−α(t−s2 ) t

󵄩 󵄩2 + 6L ∫ E 󵄩󵄩󵄩xs1 (u, τ) − xs2 (u, τ)󵄩󵄩󵄩 e−α(t−τ) duτ s2

t

+ 6L ∫ Eγ(ντs1 , ντs2 )e−α(t−τ) dτ. s2

Hence, for ν ∈ M2 , 󵄩 󵄩2 ∫ E 󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩 ν(du) ℝd t

󵄩 󵄩2 ≤ D3 (ν)e−α(t−s2 ) + 12L ∫ ∫ E 󵄩󵄩󵄩xs1 (u, τ) − xs2 (u, τ)󵄩󵄩󵄩 ν(du)e−α(t−τ) dτ. s2 ℝd

So 󵄩 󵄩2 ∫ E 󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩 ν(du) → 0,

s1 , s2 → −∞.

(5.15)

ℝd

Similarly, s

s 2

Eγ(νt 1 , νt 2 ) , s1 , s2 → −∞, 󵄩 󵄩2 E 󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩 → 0, s1 , s2 → −∞, u ∈ ℝd . From the previous considerations and relations (5.15), (5.16), it follows that there exists the limit νt = lim νts . s→−∞

This limit is the random element in M. Moreover, for every fixed t1 < t2 ,

5.2 SDE on the real line. Existence of the stationary solution

2

E sup γ(νts , νt ) → 0,



s → −∞.

[t1 ;t2 ]

135 (5.16)

Analogously, for every u ∈ ℝd and t ∈ ℝ, there exists the limit in the square mean x(u, t) = lim xs (u, t). s→−∞

Moreover, for every fixed t1 < t2 , 󵄩 󵄩2 E sup 󵄩󵄩󵄩xs (u, t) − x(u, t)󵄩󵄩󵄩 → 0, [t1 ;t2 ]

s → −∞.

(5.17)

Consider the initial equation on the interval [t1 ; t2 ]: t

xs (u, t) = xs (u, t1 ) +

∫ a(xs (u, τ), ντs )dτ

t1

t

+ ∫ ∫ b(xs (u, τ), ντs , q)W (dq, dτ). t1 ℝd

From (5.16) and (5.17), it follows that t

t

x(u, t) = x(u, t1 ) + ∫ a(x(u, τ), ντ )dτ + ∫ ∫ b(x(u, τ), ντ , q)W (dq, dτ). t1

t1 ℝd

Now note that for every u1 , u2 ∈ ℝd , 󵄩 󵄩2 E 󵄩󵄩󵄩xs (u1 , t) − xs (u2 , t)󵄩󵄩󵄩 → 0,

s → −∞.

This relation can be shown in the same way as the previous estimations. So, x(u, t) = x(t), and νt = δx(t) . Finally x satisfies the equation: dx(t) = a(x(t), δx(t) )dt + ∫ b(x(t), δx(t) , q)W (dq, dt) ℝd

on the whole real line. Then {δx(t) ; t ∈ ℝ} is the measure-valued solution to (5.13). If we consider the equation, dy(u, t) = a(y(u, t), δx(t) )dt + ∫ b(y(u, t), δx(t) , q)W (dq, dt), ℝd

136 � 5 Stationary measure-valued processes on the interval [t1 ; t2 ] with the initial condition u ∈ ℝd ,

y(u, t1 ) = u, then due to the uniqueness of the solution

x(t) = y(x(t1 ), t) a. s. for t ∈ [t1 ; t2 ]. So, δx(t) = δx(t1 ) ∘ y(⋅, t)−1 ,

a. s.

Note that {δx(t) ; t ∈ ℝ} is the stationary process. Exercise 5.2.2. Prove the stationarity of {δx(t) ; t ∈ ℝ}. In order to check the uniqueness of the stationary solution, assume that {νt ; t ∈ ℝ} is the stationary solution to (5.13), which satisfies the theorem conditions. Define zs (u, t), t ≥ s, u ∈ ℝd as a solution to (5.13) with the initial random measure νs at the time s. Repeating the previous estimations, we can check whether 󵄩 󵄩2 E 󵄩󵄩󵄩x(t) − zs (u, t)󵄩󵄩󵄩 → 0,

s → −∞

for every u ∈ ℝd and 2

Eγ(δx(t) , νs ∘ zs (⋅, t)−1 ) → 0,

s → −∞.

But νs ∘ zs (⋅, t)−1 = νt ,

t ≥ s.

Hence, νt = δx(t) ,

t ∈ ℝ.

The conditions of the last theorem guarantees the existence of the stationary solution. But this solution has very poor structure. Roughly speaking, all the particles stick together and move as one heavy particle. The reason of such a phenomenon lies in the form of the shift coefficient in (5.13). Due to the theorem conditions, the force of attraction to the origin dominates the forces of interaction and influence of the random media. In order to avoid such a situation, we will consider in the next section the different model of the particle motion where each particle has its own center of attraction.

5.3 The stationary solution in the presence of motionless attracting centers

� 137

5.3 The stationary solution in the presence of motionless attracting centers In this section, we consider the following modification of (5.13): dx(u, t) = −α(x(u, t) − u)dt + f (x(u, t), μt )dt + ∫ b(x(u, t), μt , q)W (dq, dt).

(5.18)

ℝd

The first term in the right part describes attraction of each particle to its own center. We will see that such a structure of the equation enables us to derive the existence of the stationary measure-valued solution {μt ; t ∈ ℝ}. In contrast to the previous section, now μt can be distributed over the whole space ℝd . So, there appears the natural question about the smoothness of μt respectively to the smoothness of the initial mass distribution. To solve this problem, we prove the existence of the stationary stochastic flow x related to (5.18). The stationary measure-valued solution {μt ; t ∈ ℝ} can be obtained as an image of the initial measure under the map x. So, the question of the smoothness of the stationary solution can be reformulated now as the question about the properties of x. Let us start with the definition of the measure-valued solution to (5.18) related to the initial measure μ. Definition 5.3.1. The stochastic flow {x(u, t); u ∈ ℝd , t} is the solution to (5.18) related to the measure μ ∈ M if: (1) x is adapted to the flow of σ-field generated by W . (2) For every t1 < t2 , t2

x(u, t2 ) = x(u, t1 ) + ∫{−α(x(u, τ) − u) + f (x(u, τ), μτ )}dτ t1

t2

+ ∫ ∫ b(x(u, τ), μτ , q)W (dq, dτ), t1 ℝd

where μτ = μ ∘ x(⋅, τ)−1 . Definition 5.3.2. The measure-valued process {μt ; t ∈ ℝ} is the solution to (5.18) related to the measure μ if there is the stochastic flow, which is the solution to (5.18) such that μt = μ ∘ x(⋅, t)−1 ,

t ∈ ℝ.

The following statement gives the sufficient conditions for the existence of the stationary solution.

138 � 5 Stationary measure-valued processes Theorem 5.3.1. Assume that f and b satisfy the same conditions as in Theorem 5.2.1 and α > max(12L, 6L2 (1 + α1 )). Let μ ∈ M2 . Then there exists the stationary measure-valued solution to (5.18) related to μ and this solution is unique. Proof. The proof of this theorem is the slight modification of the proof of Theorem 5.2.1. So, we will only point out the main steps and leave the details for the reader. As in the previous case, consider the solutions to the Cauchy problem for (5.18), which start at the times s1 and s2 xs1 (u, t), xs2 (u, t). Then prove that 󵄩 󵄩2 E 󵄩󵄩󵄩xs (u, t)󵄩󵄩󵄩 ≤ D1 + D2 ‖u‖2 ,

s ≤ t.

This helps us to verify that 󵄩 󵄩2 E 󵄩󵄩󵄩xs1 (u, t) − xs2 (u, t)󵄩󵄩󵄩 → 0,

s1 , s2 → −∞,

and s

s 2

Eγ2 (νt 1 , νt 1 ) → 0,

s1 , s2 → −∞,

for the correspondent measure-valued solutions. Then repeat the arguments from the previous section. The theorem is proved. Remark. The following integral relation for the stationary flow corresponding to the stationary measure-valued solution will be used later, t

t

x(u, t) = u + ∫ e

−α(t−s)

f (x(u, s), μs )ds + ∫ e−α(t−s) ∫ b(x(u, s), μs , q)W (dq, ds). −∞

−∞

ℝd

This relation enables us to conclude the continuous dependence of the stationary solution from the initial measure. Lemma 5.3.1. Suppose that the conditions of Theorem 5.3.1 hold. Then for the stationary solutions related to the initial measures μ1 , μ2 , the following relation holds: 2

Eγ2 (μ1t , μ2t ) ≤ Cγ2 (μ1 , μ2 )2 , where the constant C depends only on the initial measures μ1 and μ2 . Exercise 5.3.1. Prove this statement. Now let us consider the question about the smoothness of the stationary solution. As it was mentioned above, one can get the desired property of the measure-valued solution if the smooth initial measure will be taken and the differentiability with respect to the spatial variable of the stationary flow will be checked. Denote the derivatives relatively 𝜕 𝜕 to the spatial argument by 𝜕u and 𝜕τ .

5.3 The stationary solution in the presence of motionless attracting centers

� 139

Theorem 5.3.2. Suppose that in addition to the conditions of Theorem 5.3.1 the following conditions hold: 𝜕 , which satisfies the Lipschitz condition in r with (1) There exists the first derivative 𝜕τ the same constant as f . (2) The function b has two derivatives with respect to r and 󵄩󵄩 𝜕 󵄩󵄩2 󵄩󵄩 𝜕2 󵄩󵄩2 󵄩 󵄩 󵄩 󵄩 ∫ (󵄩󵄩󵄩 b(r, μ, q)󵄩󵄩󵄩 + 󵄩󵄩󵄩 2 b(r, μ, q)󵄩󵄩󵄩 )dq ≤ L. 󵄩󵄩 𝜕r 󵄩󵄩 󵄩󵄩 𝜕r 󵄩󵄩 d



Then for all sufficiently large α and arbitrary initial measure μ ∈ M2 the equation (5.18) has the stationary solution x with the properties: (1) With probability one, x(u, t) is jointly continuous in t and u, and continuously differentiable relatively to u. 𝜕x (2) the derivative 𝜕u is a unique stationary solution of the matrix-valued equation d

𝜕 𝜕 𝜕 𝜕 x(u, t) = [−α( x(u, t) − I) + f (x(u, t), μt ) x(u, t)]dt 𝜕u 𝜕u 𝜕x 𝜕u 𝜕 𝜕 +∫ b(x(u, t), μt , q)W (dq, dt) x(u, t). 𝜕x 𝜕u ℝd

ted.

The proof of the theorem is analogous to that of Theorem 12 of [73] and is omit-

Corollary 5.3.1. Under the conditions of Theorem 5.3.2, the derivative t ∈ ℝ is given by

𝜕 x(u, t), u 𝜕u

∈ ℝd ,

t

𝜕 x(u, t) = α ∫ ℰst (u)ds, 𝜕u

(5.19)

−∞

where for every s ℰst (u), t ≥ s is the matrix solution of the equation d ℰst (u) = [−αI 𝜕 f (x(u, t), μt )]ℰst (u)dt + ∫ 𝜕x

ℝd

𝜕 b(x(u, t), μt , q)W (dq, du)ℰst (u), 𝜕x

with the initial condition s

ℰs (u) = I.

Exercise 5.3.2. Prove this corollary using the similar well-known representations for the derivative of the solution of the Cauchy problem and taking the limit as s → −∞.

140 � 5 Stationary measure-valued processes From this corollary, we can see that in the one-dimensional case the stationary solution x has a positive continuous spatial derivative with probability one. This allows us to formulate the following result in the one-dimensional case. Theorem 5.3.3. Let d = 1 and the conditions of Theorem 5.3.2 hold. Assume that the initial measure μ ∈ M2 is absolutely continuous. Then the stationary solution {μt ; t ∈ ℝ} is absolutely continuous with probability one. The statement of the theorem can be easily obtained from Theorem 5.3.2 and the positivity of the stochastic exponent in the representation (5.19).

5.4 Shift compactness of the random measures This section is devoted to the behavior of the solution to equation (2.12) on the infinity. As we can see in Section 5.2, the stationary solution can have the very simple structure. Namely, all the particles collide and move together. In order to avoid such a situation, we provide a new concept of shift compactness in the terms of which we describe the behavior of our system. We regard a family {μt − ξt ; t ≥ 0} of measures shifted by some random process. It may possess some good properties while {μt ; t ≥ 0} itself does not have. Let us consider the following definition. Definition 5.4.1. The set {μα ; α ∈ 𝒜} ⊂ Mn is refereed to as shift compact if for every α ∈ ℝd there is uα ∈ ℝd such that {μα − uα ; α ∈ 𝒜} is a compact set in Mn (the notation μα − uα is used for the measure, which is obtained from μα by shift on vector uα ). The next lemma is obtained directly from the characterization of the compact sets in Mn (see Section 1.2). Lemma 5.4.1. Let F ⊂ Mn be shift compact and {uα ; α ∈ 𝒜} be the correspondent family of vectors. Suppose that vectors {vα ; α ∈ 𝒜} are such that sup ‖uα − vα ‖ < +∞. 𝒜

Then the set {μα − να ; α ∈ 𝒜} is compact in Mn . The next technical definition will be useful later. Definition 5.4.2. A vector u ∈ ℝd is called the center of the measure μ ∈ M0 if γ0 (δu , μ) = ∫ ℝd

‖u − v‖ μ(dv) 1 + ‖u − v‖

= min γ0 (δp , μ). p∈ℝd

Exercise 5.4.1. Prove that an arbitrary measure from M0 has the center.

5.4 Shift compactness of the random measures � 141

Exercise 5.4.2. Give an example, which shows that the center can be nonunique. Lemma 5.4.2. Let the family of measures {μα ; α ∈ 𝒜} be shift compact in M0 . Then the family {μα − cα ; α ∈ 𝒜} is compact (here cα is the center of μα , α ∈ 𝒜). Proof. Let vectors {uα , α ∈ 𝒜} satisfy Definition 5.4.1 for {μα , α ∈ 𝒜}. By using Lemma 5.4.1, it is sufficient to prove that sup ‖cα − uα ‖ < +∞.

α∈𝒜

From the description of the compact sets in M0 , it follows that sup μα ({u : ‖u − uα ‖ > r}) → 0,

α∈𝒜

r → +∞.

Consequently, sup γ0 (δuα , μα ) < 1.

α∈𝒜

(5.20)

Assume that for a certain sequence lim ‖uαn − cαn ‖ = +∞.

n→∞

Then lim γ (δ , μ ) n→∞ 0 cαn αn

= 1,

which contradicts with the definition of the center and (5.20). The lemma is proved. The next example shows how the shift compact families can arise when solving the equations with interactions. Example 5.4.1. Let d = 1. Consider the following equation: dx(u, t) = πdt − ∫ arctg(x(u, t) − v)μt (dv)dt, ℝ

μt = μ0 ∘ x(⋅, t)−1 . Assume that μ0 ∈ M0 . According to Chapter 2, this equation has the unique solution. It is obvious now that ∀u ∈ ℝ : x(u, t) → +∞,

t → +∞,

and ∀u1 < u2 ,

t ≥ 0 : x(u1 , t) < x(u2 , t).

142 � 5 Stationary measure-valued processes So, the closure of the family {μt ; t ≥ 0} cannot be compact. Let us check that the family {μt ; t ≥ 0} is shift relatively compact in M0 . For arbitrary u1 , u2 ∈ ℝ, consider d(x(u1 , t) − x(u2 , t))

2

= 2(x(u1 , t) − x(u2 , t)) ⋅ ∫[arctg(x(u2 , t) − v) − arctg(x(u1 , t) − v)]μ0 (dv)dt ℝ

= 2 ∫[(x(u1 , t) − v) − (x(u2 , t) − v)] ℝ

⋅ [arctg(x(u2 , t) − v) − arctg(x(u1 , t) − v)]μ0 (dv)dt. Consequently, for every u1 , u2 ∈ ℝ, the distance between x(u1 , t) and x(u2 , t) is a nonincreasing function. Define for every u ∈ ℝ, f (u) = lim (x(u, t) − x(0, t)). t→+∞

Then the measures μt − x(0, t) = μ0 ∘ (x(⋅, t) − x(0, t))

−1

weakly converge to the measure ν = μ0 ∘ f −1 under t → +∞. This example shows that the notion of the shift compactness describes some regularization of the moving particles system with respect to some relative coordinates. Now consider the stochastic variant of the shift compactness. Definition 5.4.3. A set of random measures {μα ; α ∈ 𝒜} ⊂ Mn is called shift compact if for every α ∈ 𝒜 there is a random vector uα ∈ ℝd such that {μα − uα , α ∈ 𝒜} is weakly compact in Mn , i. e., for every ε > 0 there is a compact set F ⊂ Mn for which the following relation holds: ∀α ∈ 𝒜

P(μα − uα ∈ F) > 1 − ε.

Let us consider the conditions under which the measure-valued process related to the equation with the interaction will be the shift compact family of random measures. The next lemma contains the technical fact useful for the further studies. Lemma 5.4.3. Suppose that the family of random measures {μα ; α ∈ 𝒜} is constructed in the following way:

5.4 Shift compactness of the random measures

μα = μ0 ∘ xα−1 ,

� 143

α ∈ 𝒜,

where the random maps {xα ; α ∈ 𝒜} satisfy the condition ∃A > 0 ∀α ∈ 𝒜 ∀u, v ∈ ℝd : 󵄩n+1 󵄩 E 󵄩󵄩󵄩xα (u) − xα (v)󵄩󵄩󵄩 ≤ A‖u − v‖n+1 . Then for μ0 ∈ Mn+1 the family {μα ; α ∈ 𝒜} is shift compact in Mn . Proof. Denote mα = ∫ xα (u)μ0 (du).

(5.21)

ℝd

This integral exists due to the following estimation: 1

n+1 󵄩 󵄩 E ∫ 󵄩󵄩󵄩xα (u) − xα (0)󵄩󵄩󵄩μ0 (du) ≤ A( ∫ [‖u‖n+1 ]μ0 (du)) .

ℝd

ℝd

From this inequality, it follows that the following integral exists almost everywhere: ∫ (xα (u) − xα (0))μ0 (du). ℝd

But the last integral differs from another term, which does not depend on u and can be taken off from the integral sign. Now 󵄩 󵄩n+1 P{ ∫ 󵄩󵄩󵄩xα (u) − mα 󵄩󵄩󵄩 μ0 (du) < L} ℝd



1 AM 󵄩 󵄩n+1 E ∬󵄩󵄩󵄩xα (u) − xα (v)󵄩󵄩󵄩 μ0 (du)μ0 (dv) ≤ , L L ℝd

where M = ∬ ‖u − v‖n+1 μ0 (du)μ0 (dv). ℝd

Exercise 5.4.3. Prove that M < +∞. Thus, for every ε > 0 there is Lε > 0 such that P{ ∫ ‖u‖n+1 (μα − mα )(du) > Lε } < ε, ℝd

α ∈ 𝒜.

144 � 5 Stationary measure-valued processes Put Kε = {ν ∈ Mn : ∫ ‖u‖n 1{‖u‖>c} ν(du) ≤ ℝd

Lε for every c > 0}. c2

Kε is a compact set in Mn due to Lemma 1.2.1. For every α ∈ 𝒜, L 󵄩n 󵄩 P{μα − mα ∈ Kε } = P{ ∫ 󵄩󵄩󵄩xα (u) − mα 󵄩󵄩󵄩 ⋅ 1{‖xα (u)−mα ‖≥c} μ0 (du) ≤ 2ε for every c > 0} c ℝd

󵄩 󵄩n+1 ≥ P{ ∫ 󵄩󵄩󵄩xα (u) − mα 󵄩󵄩󵄩 μ0 (du) ≤ Lε } > 1 − ε. ℝd

The lemma is proved. Now let us return to the flows with the interactions. Consider the equation, dx(u, t) = ∫ φ(x(u, t) − v)μt (dv)dt + ∫ b(x(u, t), μt , q)W (dq, dt), ℝd

ℝd

μt = μ0 ∘ x(⋅, t) , −1

t ≥ 0.

Theorem 5.4.1. Assume that in (5.22), μ0 ∈ M2n+2 , φ satisfies the condition (u − v, φ(u) − φ(v)) ≤ −α‖u − v‖2 ,

u, v ∈ ℝd ,

b satisfies the Lipschitz condition relatively to both variables with constant B and 1 α − B2 (2n + 1) ≥ 0. 2 Then the set {μt ; t ≥ 0} is shift compact in M2n . Proof. Applying Itô’s formula, we have for u1 , u2 ∈ ℝd , 󵄩 󵄩2n+2 E 󵄩󵄩󵄩x(u1 , t) − x(u2 , t)󵄩󵄩󵄩 = ‖u1 − u2 ‖ k

2n+2

t

d

0

k=1

+ ∫ E{(2n + 2)‖x(u1 , s) − x(u2 , s)‖2n ∑ ∫ (φk (x(u1 , s) − v) k

ℝd

k

− φ (x(u2 , s) − v))(x (u1 , s) − x (u2 , s))μs (dv) d

+ (n + 1)‖x(u1 , s) − x(u2 , s)‖2n−2 ∑ (2n(x k (u1 , s) − x k (u2 , s))

2

k=1

2 󵄩 󵄩2 + 󵄩󵄩󵄩x(u1 , s) − x(u2 , s)󵄩󵄩󵄩 )(bk (x(u1 , s), μs ) − bk (x(u2 , s), μs )) }ds

(5.22)

5.5 Weak limits of the processes with the interaction

2n+2

≤ ‖u1 − u2 ‖

� 145

t

󵄩2n+2 󵄩 ds − 2α(n + 1) ∫ E 󵄩󵄩󵄩x(u1 , s) − x(u2 , s)󵄩󵄩󵄩 t

0

󵄩2n+2 󵄩 ds + B2 (n + 1)(2n + 1) ∫ E 󵄩󵄩󵄩x(u1 , s) − x(u2 , s)󵄩󵄩󵄩 0

≤ ‖u1 − u2 ‖2n+2 .

The statement of the theorem follows now from Lemma 5.4.3. Consider the example of using Theorem 5.4.1. Example 5.4.2. Assume that the conditions of Theorem 5.4.1 hold. Define the functional Φ on M2n as follows: Φ(μ) = ∫ . n. . ∫ f1 (u1 − u2 )f2 (u2 − u3 ) ⋅ ⋅ ⋅ ⋅ ⋅ fn−1 (un−1 − un )μ(du1 ) ⋅ ⋅ ⋅ μ(dun ), ℝd

where {fi } are the Lipschitz on ℝd . The family of the random variables {Φ(μt ); t ≥ 0} is weakly compact now. This follows from the relation Φ(μ) = Φ(μ + h), which is valid for the arbitrary μ ∈ M2n , h ∈ ℝd and from the continuity of Φ. In the concrete case, Φ(μ) = ∬ ‖u − v‖2 μ(du)μ(dv) ℝd

we can consider {x(u, t), u ∈ ℝd , t ∈ [0; +∞)} as the flow of velocities and conclude that internal energy of our system is given by the functional Φ. So, now from the previous theorem it follows that: sup P{Φ(μt ) > c} → 0, t≥0

c → +∞.

5.5 Weak limits of the processes with the interaction In this section, we will study the measure-valued processes, which arise as the solutions to the equations with interaction from Section 2.3. In order to explain the idea of the section, let us consider the following example. Let W be an ℝd -valued Wiener sheet on ℝd × [0; 1]. Assume that φ ∈ C0∞ (ℝd ) is the spherically symmetric nonnegative function with the property

146 � 5 Stationary measure-valued processes ∫ φ(u)du = 1. ℝd

Define for ε > 0, 1

1

φε (u) = ε− 2 φ(ε−1 u) 2 ,

u ∈ ℝd .

Note that φn ∈ C0∞ (ℝd ) for every ε > 0. Let us consider now the equation dxε (u, t) = ∫ℝd φε (xε (u, t) − q)W (dq, dt),

{

xε (u, 0) = u.

The solution xε has two important properties. The first property is that xε is the flow of the homeomorphisms and the second one is that for every u ∈ ℝd {xε (u, t); t ≥ 0} is the Wiener process. Denote by {ℱt ; t ≥ 0} the flow of σ-field generated by W in a usual way. Then {xε (u, t); t ≥ 0} is a continuous ℱt -martingale with the characteristics t

⟨xε (u, ⋅)⟩t = ∫ ∫ φ2ε (xε (u, s) − q)dqds = t. 0 ℝd

Hence, {xε (u, t); t ≥ 0} is the Wiener process. Note that for different u1 , u2 ∈ ℝd xε (u1 , ⋅) and xε (u2 , ⋅) are not independent. Their joint characteristic equals to t

⟨xε (u1 , ⋅), xε (u2 , ⋅)⟩t = ∫ ∫ φ2ε (xε (u1 , s) − q)φ2ε (xε (u2 , s) − q)dqds. 0 ℝd

So, the flow xε now consists of the Wiener processes, which do not stick together. Let us mention that the support φε tends to the origin when ε → 0+. So, one can expect that xε in the limit turns into the family of independent Wiener processes. But on the other hand, if u1 < u2 , then as it was mentioned above xε (u1 , t) < xε (u2 , t) for every t with probability one. Consequently, on the nonformal level xε in the limit turns into the family of the Wiener particles, which start from every point of the space and move independently up to their meeting after which continue the motion together. The formal realization of this idea on the level of the description of the particle motion meets some technical troubles. So, here we suggest to speak not about the particles but about the mass, which they carry. In other words, we will consider the measure-valued processes related to the flow xε and their weak limit under ε → 0+. Let us fix μ0 ∈ M. Define the measure-valued process {μεt ; t ∈ [0; 1]} in such a manner μεt = μ0 ∘ xε (⋅, t)−1 . Since xε is continuous relative to both variables, so με is a continuous process in M with probability one. The following statement holds.

5.5 Weak limits of the processes with the interaction

� 147

Theorem 5.5.1. Let the initial measure μ0 ∈ Mn , n > 2. Then the family {με ; ε > 0} is weakly compact in C([0; 1], M). Proof. Let us use the criterion of the weak compactness for the measure-valued processes from Theorem 5.1.2. Take the functions {fk ; k ≥ 1} and {gk ; k ≥ 1} in such a way that: (1) For every k ≥ 1, fk and gk satisfy the Lipshitz condition. (2) The Lipschitz constant for gk equals 1 for every k ≥ 1. Now check the weak compactness of the families {⟨fk , με ⟩; ε > 0} and {⟨gk , με ⟩; ε > 0, k ≥ 1} in C([0; 1]). Consider the function h on ℝd , which satisfies the Lipschitz condition with the constant C. For such a function, 󵄨 󵄨n E 󵄨󵄨󵄨⟨h, μεt1 ⟩ − ⟨h, μεt2 ⟩󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨n 󵄨 󵄨 = E 󵄨󵄨󵄨 ∫ (h(xε (u, t1 )) − h(xε (u, t2 )))μ0 (du)󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 d ℝ

n 󵄩 󵄩n ≤ C ∫ E 󵄩󵄩󵄩xε (u, t1 ) − xε (u, t2 )󵄩󵄩󵄩 μ0 (du) ≤ C n ⋅ Kn ⋅ |t2 − t1 | 2 ,

n

ℝd

where the constant Kn depends only on n and the dimension d. This estimation together with relation lim ⟨gk , μ0 ⟩ = 0

k→∞

gives us the fact that the conditions (1) and (2) of Theorem 5.1.2 hold. In order to check the condition (3), consider E ∫ ‖u‖n μεt (du) = E ℝd



󵄩󵄩 󵄩n ε n 󵄩󵄩u + (xε (u, t) − u)󵄩󵄩󵄩 μt (du) ≤ D( ∫ ‖u‖ μ0 (du) + 1),

mathbbRd

ℝd

where the constant D depends only on the dimensions d and n. Then lim sup P{⟨gk , μεt ⟩ > δ}

k→∞ ε>0

1 ≤ lim sup E⟨gk , μεt ⟩ k→∞ ε>0 δ 1 1 ≤ lim sup E ∫ ‖u‖n μεt (du) k→∞ ε>0 δ k n ℝd

≤ lim

k→∞

1 1 D( ∫ ‖u‖n μ0 (du) + 1) = 0 δ kn ℝd

for every t ∈ [0; 1] and δ > 0. So, the condition (3) holds and the theorem is proved.

148 � 5 Stationary measure-valued processes In order to understand how many limit points the family {με } admits under ε → 0+, let us study the behavior of the finite-point processes {x⃗s (t) = (xε (u1 , t), . . . , xε (un , t)); t ∈ [0; 1]}. From now on, we will consider the case d = 1. We begin with the construction of the Markov processes in ℝn , which can serve as a weak limit of x⃗ε . Nonformally, this process can be described in such a way. In the space ℝn , we consider the usual Wiener process up to the first time when some of its coordinates become equal. After this time, the process turns into the Wiener process on the hyperplane, where these coordinates remain equal. This procedure goes on until we get the one-dimensional Wiener process. From that moment, our process coincides with this Wiener process. In order to construct such a random process rigorously and check whether it is the unique weak limit of x⃗ε let us prove the next theorem. Theorem 5.5.2. The family {x⃗ε ; ε > 0} weakly converges under ε → 0+ in the space C([0; 1], ℝn ). Proof. The weak compactness of {x⃗ε ; ε > 0} follows from the arguments, which were mentioned in the proof of Theorem 5.5.1. So, we have nothing to do but prove that under ε → 0+ there is only one limit point. Fix ε > 0 and consider for x ∈ ℝ the function gε (x) = ∫ φε (x + q)φε (q)dq. ℝ

The process x⃗ε is a diffusion process in ℝn with zero drift and the diffusion matrix n

A(x)⃗ = (gε (xi − xj ))ij=1 , where x⃗ = (x1 , . . . , xn ). It results from the condition on φε that A coincide with the identity matrix on the set Gε = {x⃗ : |xi − xj | > 2ε, i ≠ j}. ⃗ Let us define the random moment τε as the first exit time from Gε . In our case x(0) ∈ Gε , because we take the different initial values u1 , . . . , un . As it results from Theorem 1.13.2 [55], the distribution of the process {x⃗ε (τε ∧ t); t ∈ [0; 1]} coincides with the distribution of the process {w⃗ ε (τε ∧ t); t ∈ [0; 1]}, where w⃗ is the Wiener process in ℝn starting from (u1 , . . . , un ) and τε has the same meaning for w⃗ as for x⃗ε . Now suppose that u1 < ⋅ ⋅ ⋅ < un without loss of generality. Consider in C([0; 1], ℝn ) the set 𝒢δ {f ⃗ : fi (0) = ui , i = 1, . . . , n, t ∈ [0; 1]}.

Note that 𝒢δ is an open set in C([0; 1], ℝn ) for sufficiently small δ > 0 (we use the usual uniform norm in the space of continuous functions). Let ϰ be the limit point of the distributions x⃗ε under ε → 0+. As it results from the previous considerations for all sufficiently small ε > 0, the restriction on 𝒢δ of the distribution of x⃗ε coincides with the

5.5 Weak limits of the processes with the interaction

� 149

restriction of the Wiener measure related to the initial value (u1 , . . . , un ). Consequently, ϰ coincide with the Wiener measure on the set 𝒢δ for arbitrary δ > 0. Denote by 𝒢 the closure of the union ⋃ 𝒢δ .

δ>0

Note that for all ε > 0, P{x⃗ε ∈ 𝒢 } = 1. So, by characterization of the weak convergence [4], ϰ(𝒢 ) = 1. It remains to describe ϰ on the boundary of 𝒢 . In order to do this, let us recall that the random process obtained from x⃗ε by choosing some of its coordinates, has the same properties as x⃗ε under ε → 0+. Hence, the measure ϰ has the following properties. Every coordinate has the Wiener distribution. Any two coordinates move as an independent Wiener processes up to their meeting and move together after this moment. Now the uniqueness of ϰ can be obtained by induction. The theorem is proved. This theorem has the following consequence. Corollary 5.5.1. The measure-valued processes {με } constructed in Theorem 5.5.1 converge weakly under ε → 0+. Proof. Due to Theorem 5.5.1, we have only to check the uniqueness of the limit point for {με } under ε → 0+. Let {νt ; t ∈ [0; 1]} be the measure-valued process representing the limit point of {με } under ε → 0+. Take t1 , . . . , td ∈ [0; 1] and for bounded continuous functions φ1 , . . . , φd consider the value d

E ∏ ∫ φk (u)νtk (du). k=1 ℝ

(5.23)

Note that the set of all such values uniquely define the distribution of {νt ; t ∈ [0; 1]}. Exercise 5.5.1. Prove this statement. Due to the previous theorem and the Lebesgue dominated convergence theorem, d

E ∏ ∫ φk (u)νtk (du) k=1 ℝ

d

ε

= lim E ∏ ∫ φk (u)μtkn (du) εn →0+

k=1 ℝ

150 � 5 Stationary measure-valued processes d

= lim E ∏ ∫ φk (xεn (u, tk ))μ0 (du) εn →0+

k=1 ℝ

d

= lim E ∫ ⋅ ⋅ ⋅ ∫ ∏ ∫ φk (xεn (uk , tk ))μ0 (du1 ) ⋅ ⋅ ⋅ μ0 (duk ) εn →0+

ℝd

k=1 ℝ d

= E ∫ ⋅ ⋅ ⋅ ∫ lim E ∏ φk (xεn (uk , tk ))μ0 (du1 ) ⋅ ⋅ ⋅ μ0 (duk ). ℝd

εn →0+

k=1

Since the limit in the last integral does not depend on the choice of the sequence εn → 0+, n → ∞, then the value of (5.23) is uniquely defined. Hence, we have only one limit point for {με } when ε → 0+ and our statement is proved.

6 Evolutionary measure-valued processes on the infinitely-dimensional space 6.1 Hilbert-valued Cádlág processes and their conditional distributions In this chapter, we will consider the stochastic differential equation of the type (2.12) in the infinitely-dimensional Hilbert space. Our aim is to build the corresponding measurevalued process. The main new problem, which arises here, is that in the general case there is no good stochastic flow related to the stochastic differential equation in the infinitely-dimensional case. Later we will show the corresponding example. Hence, in order to preserve the structure of the equation (2.12) we prefer to substitute the words “image of the measure” by some suitable notion for the infinite-dimensional case. It occurs that such a notion is the conditional distribution. So, we will reformulate the problem (2.12) in terms of the conditional distributions. This section contains the preliminary technical results and some known facts, which will be used later. We begin with the description of the Wiener process in the Hilbert space and the stochastic differential equations with this process. There exist many books dealing with this subject and the reader can find the detailed and deep theory (e. g., in the following monographs [7, 81, 87]). Here, we only mention the necessary constructions and facts. Let H be a separable real Hilbert space and {en ; n ≥ 1} be an orthonormal basis in H. Consider the sequence of the independent standard real-valued Wiener processes {wn ; n ≥ 1}. Definition 6.1.1. The generalized Wiener process with the identity covariation is the formal sum ∞

W (t) = ∑ wn (t)en .

(6.1)

n=1

Exercise 6.1.1. Prove that the series (6.1) does not converge in probability or weakly as the series of random elements in H. It occurs that we can integrate some random functions with respect to W and obtain the usual random elements. In order to describe the appropriate integrands, let us introduce some notation. Denote by σ2 (H) the class of all Hilbert-Schmidt operators in H. Let {ℱt ; t ≥ 0} be the flow of σ-fields generated by W , i. e., ℱt = σ(wn (s); s ≤ t, n ≥ 1),

t ≥ 0.

Denote by 𝒦 the set of all random functions φ defined on [0; +∞), which have the following properties: (1) φ takes values in σ2 (H). https://doi.org/10.1515/9783110986518-006

152 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space (2) The restriction of φ on the every interval [0; T] is jointly measurable with respect to ℱT and the Borel σ-field on [0; T]. (3) For every T > 0 T

E ∫ ‖φ‖22 ds < +∞, 0

where ‖ ⋅ ‖2 is the Hilbert–Schmidt norm in σ2 (H). It can be easily proved that the Itô construction of the stochastic integral can be applied for the functions from 𝒦 and the integral t

∫ φ(s)dW (s),

t≥0

(6.2)

0

has the usual properties. More precisely, for every t > 0 the integral (6.2) is the random element in H with the finite second moment of the norm and t

E ∫ φ(s)dW (s) = 0, 0

󵄩󵄩 t 󵄩󵄩2 t 󵄩󵄩 󵄩󵄩 󵄩 󵄩2 E 󵄩󵄩󵄩∫ φ(s)dW (s)󵄩󵄩󵄩 = ∫ E 󵄩󵄩󵄩φ(s)󵄩󵄩󵄩2 ds. 󵄩󵄩 󵄩󵄩 󵄩0 󵄩 0

The process (6.2) is the H-valued {ℱt }-martingale. Here and later when we consider the expectations of the H-valued random elements, it must be understood in the sense of Bochner integral [91]. Exercise 6.1.2. Check the mentioned properties of the integral (6.2). Example 6.1.1. Consider the integrand φ of the following kind: φ(t)x = (x, ek )ek ,

x ∈ H,

where k ≥ 1 is fixed and (⋅, ⋅) denotes the scalar product in H. Then t

∫ φ(s)dW (s) = wk (t)ek ,

t ≥ 0.

0

If ∞

φ(t)x = ∑ λk (x, ek )ek , k=1

were

x ∈ H,

6.1 Hilbert-valued Cádlág processes and their conditional distributions

� 153



∑ λ2k < +∞,

k=1

then t



∫ φ(s)dW (s) = ∑ λk wk (t)ek . k=1

0

Exercise 6.1.3. Prove that the last series converges in the square mean in H. More interesting examples can be obtained if we prove that the Wiener sheet from Section 2.1 and our formal Wiener process are the same ones. ̃ is the Wiener sheet on ℝ × [0; +∞). Example 6.1.2. Let H = L2 (ℝ). Suppose that W In order to construct the formal Wiener process, consider the orthonormal basis in L2 (ℝ) {ek ; k ≥ 1}. Then, as it was mentioned in Section 2.1, the processes t

̃ (du, ds), wk (t) = ∫ ∫ ek (u)W

t ≥ 0, k ≥ 1

0 ℝ

are independent Wiener processes. Hence, we can define the generalized Wiener process in L2 (ℝ) as the sum ∞

W (t) = ∑ wk (t)ek , k=1

t ≥ 0.

̃ are almost the same things. The random Now the integration with respect to W and W function f on ℝ×[0; +∞), which was chosen as an integrand in Section 2.1 can be viewed as the Hilbert–Schmidt operator acting from L2 (ℝ) to ℝ by the formula ∞

f (t)x = ∑ (x, ek )(f (t), ek ). k=1

Then it can be easily checked that t

t

̃ (du; ds). ∫ f (s)dW (s) = ∫ ∫ f (s, u)W 0

0 ℝ

Now let us consider the stochastic differential equation in the space H, dx(t) = a(x(t), t)dt + b(x(t), t)dW (t).

(6.3)

Suppose that the coefficients a : H × [0; +∞) → H, b : H × [0; +∞) → σ2 (H) are continuous functions from both variables and satisfy the Lipschitz condition on the first

154 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space variable uniformly with respect to the second variable on every closed interval. Then the Cauchy problem for (6.3) is known to have a unique strong solution (see, e. g., [7, 81]). Consider the solution to (6.3) x(u, t), which starts from the point u ∈ H at the time t = 0. It is possible to verify that similar to the finite-dimensional case the following estimation can be obtained: 󵄩2p 󵄩 E 󵄩󵄩󵄩x(u1 , t) − x(u2 , t)󵄩󵄩󵄩 ≤ C‖u1 − u2 ‖2p ,

t ∈ [0; T],

(6.4)

where the constant C depends only on the length of the interval [0; T] and the coefficients a and b. But now, in contrast with the finite-dimensional case, (6.4) does not allow us to conclude that x(⋅, t) has a continuous modification. The reason is that in the Kolmogorov condition the power related to the difference u1 −u2 depends on the dimension of the space (see [67]). Moreover, there are examples when x has no continuous modification. Example 6.1.3. Consider in (6.3) a = 0 and b of the following kind ∀x, y ∈ H, t ≥ 0: ∞

b(x, t)y = ∑ (x, ek )(y, ek )ek . k=1

It is easy to find out that b satisfies the above mentioned condition. Consequently, there exists the solution to the Cauchy problem for (6.3). Now it can be written in the explicit form. Namely, ∞

1

x(u, t) = ∑ ewk (t)− 2 t (u, ek )ek , k=1

t ≥ 0.

Assume that x(⋅, t) has a continuous modification. Then this modification must be a linear bounded operator in H. So, the following relation must be true: 1

sup ewk (t)− 2 t < +∞ k≥1

a. s.

But this is evidently false. Consequently, x has no a continuous modification relatively to u. In terminology of [86], x(⋅, t) now is a strong random operator in H. The previous example shows us the absence of a good modification for the stochastic flow. So, we will change the special terms used in this chapter. Instead, for the “the image of the probability measure ν under the map x(⋅, t),” we will say “the conditional distribution of the solution x(t) with the random initial condition, which has the distribution ν.” In other words, we will use only the measurability of x. So, the next material of this section regards the useful facts about the conditional distributions in Hilbert. Lemma 6.1.1. Let {ζ (t); t ≥ 0} be an H-valued martingale right continuous in the mean. Then ζ has a Cádlág modification (i. e., right continuous with the left-hand side limits).

6.1 Hilbert-valued Cádlág processes and their conditional distributions

� 155

Proof. Let {ek ; k ≥ 1} be an orthonormal basis in H, and for every n ≥ 1, Qn is an orthogonal projector on the linear span of {e1 , . . . , en }. Consider for the fixed n ≥ 1 the finitedimensional martingale Qn ζ . Due to the well-known result for the real-valued martingales [73], Qn ζ has a Cádlág modification, which we will denote by ζn . Note that due to the Doob inequality for every ε > 0 and T > 0, 1 󵄩 󵄩 1 󵄩 󵄩 󵄩 󵄩 P{sup󵄩󵄩󵄩ζn (t) − ζm (t)󵄩󵄩󵄩 ≥ ε} ≤ E 󵄩󵄩󵄩ζn (T) − ζm (T)󵄩󵄩󵄩 = E 󵄩󵄩󵄩(Qn − Qm )ζ (T)󵄩󵄩󵄩. ε ε [0;T]

(6.5)

Here, we use the fact that for the martingale ζn − ζm its norm is a Cádlág submartingale (see [55]). It follows from (6.5) that there exists subsequence {ζnk ; k ≥ 1} such that 󵄩 󵄩 sup󵄩󵄩󵄩ζnk (t) − ζnl (t)󵄩󵄩󵄩 → 0,

[0;T]

k, l → ∞,

a. s.

Denote by ζ̃ the uniform limit of the Cádlág functions ζnk under k → ∞ on [0; T], which exists on the set of probability one. Put ζ̃ ≡ 0 on other points of the probability space. Then ζ̃ is a Cádlág modification of ζ . Since T is arbitrary, the lemma is proved. Lemma 6.1.2. Assume that the Cádlág process ζ satisfies the relation 󵄩 󵄩2 ∀T > 0 : E sup󵄩󵄩󵄩ζ (t)󵄩󵄩󵄩 < +∞. [0;T]

Then the measure-valued random function {πt ; t ≥ 0}, which is defined by the rule πt (Δ) = P{ζ (t) ∈ Δ/ℱ1 }, where ℱ1 is some σ-field, has a Cádlág modification in the space M2 of all probability measures on H with the finite second moment. Proof. Consider the Skorokhod space of the Cádlág H-valued functions DH T = D([0; T], H). H It is known [4] that there is such a distance d in DH for which (D , d) is a complete sepT T arable space. Let Π be the regular version of the conditional distribution of the random element ζ in DH T under given ℱ1 . Then for every t ∈ [0; T] the one-dimensional distribution π̃t of Π at the moment t coincide with πt a. s. Now note that ∀ω ∈ Ω, 0 ≤ t0 < t ≤ T: 󵄩 󵄩2 γ22 (π̃t0 (ω), π̃t (ω)) ≤ ∫ 󵄩󵄩󵄩u(t0 ) − u(t)󵄩󵄩󵄩 Π(ω, dω). DHT

Due to the condition of the lemma, we can write that 󵄩 󵄩2 ∀ω ∈ Ω : ∫ sup󵄩󵄩󵄩u(t)󵄩󵄩󵄩 Π(ω, du) < +∞. DHT

[0;T]

156 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space So, due to the Lebesgue dominated convergence theorem, ∀ω ∈ Ω, t0 ∈ [0; T] : γ22 (π̃t0 (ω), π̃t (ω)) → 0, t → t0 +. In a similar way, ∀ω ∈ Ω, t0 ∈ [0; T] :

lim γ2 (π̃s (ω), π̃t (ω)) s,t→t0 − 2

= 0.

Hence, for every ω ∈ Ω, the trajectory π̃0 (ω) is Cádlág. The lemma is proved.

6.2 Stochastic differential equation for the measure-valued process Let us consider the following stochastic differential equation in H: dξ(t) = a(ξ(t), πt )dt + b(ξ(t), πt )dW (t) + ∫ f1 (ξ(t−), πt− , q)ν(dq, dt) [0;1]

+

∫ f2 (ξ(t−), πt− , q)ν̃(dq, dt),

ξ(0) = ξ0 ,

(6.6)

ℝ\[0;1]

where W is the generalized Wiener process in H with the identity covariance operator and ν is independent from W Poisson random measure on ℝ × [0; +∞), which has the Lebesgue measure as a structure measure. Here, the deterministic measurable functions a : H × M2 → H, b : H × M2 → σ(H), f1 : H × M2 → H, f2 : H × M2 × ℝ \ [0; 1] → H and the random initial condition ξ0 satisfy the following assumptions: (1) ∃L : ∀u, v ∈ H, μ1 , μ2 ∈ M2 : 󵄩󵄩 󵄩 󵄩󵄩a(u1 , μ1 ) − a(u2 , μ2 )󵄩󵄩󵄩 ≤ (‖u1 − u2 ‖ + γ2 (μ1 , μ2 )), 󵄩󵄩 󵄩󵄩 󵄩󵄩b(u1 , μ1 ) − b(u2 , μ2 )󵄩󵄩 ≤ (‖u1 − u2 ‖ + γ2 (μ1 , μ2 )), 󵄩2 󵄩 ∫ 󵄩󵄩󵄩f2 (u1 , μ1 , q) − f2 (u2 μ2 , q)󵄩󵄩󵄩 dq ≤ L2 (‖u1 − u2 ‖2 + γ22 (μ1 , μ2 )). ℝ\[0;1]

(2) ∀u ∈ H, μ ∈ M2 : 󵄩 󵄩2 ∫ 󵄩󵄩󵄩f2 (u, μ, q)󵄩󵄩󵄩 dq < +∞. ℝ\[0;1]

(3) ξ0 has a finite second moment of the norm and is independent from W and ν.

6.2 Stochastic differential equation for the measure-valued process

� 157

In equation (6.6), πt is the conditional distribution of ξt under given W and ν. The Cádlág modification of π in the space M2 is considered. Definition 6.2.1. The Cádlág process ξ in H is a solution to (6.6) if 󵄩2 󵄩 ∀t ≥ 0 : E sup󵄩󵄩󵄩ξ(s)󵄩󵄩󵄩 < +∞ [0;t]

and the integral form of (6.6) holds for every t ≥ 0 with the probability one. Theorem 6.2.1. The Cauchy problem (6.6) has the solution, which is unique. Proof. To begin with, prove the existence of the solution when f1 ≡ 0. Use the method of iterations. Consider the random processes ξ 0 (t) ≡ ξ0 ,

πt0 ≡ Pξ0−1 ,

t ≥ 0,

where Pξ0−1 is the distribution of ξ0 in H (due to the condition (3) πt0 ∈ M2 ). Define by the induction t

t

0

0

t

n ξ n+1 (t) = ξ0 + ∫ a(ξ n (s), πsn )ds + ∫ b(ξ n (s), πsn )dW (s) + ∫ ∫ f2 (ξ n (s−), πs− , q)ν̃(dq, dt), 0 ℝ\[0;1]

n ≥ 0. Here, πtn is the conditional distribution of ξ n (t) under given W and ν. Let us check that the processes ξ n and π n are correctly defined for every n and have the Cádlág modifications in the spaces H and M2 . For n = 0, this is obvious. By Lemma 6.1.1, the process ξ 1 has a Cádlág modification, and due to the usual arguments, ∀t ≥ 0;

󵄩 󵄩2 E sup󵄩󵄩󵄩ξ 1 (s)󵄩󵄩󵄩 < +∞. [0;t]

Then, due to Lemma 6.1.2, π 1 has a Cádlág modification and ∀t ≥ 0;

E sup γ22 (πt , δ0 ) < +∞. [0;t]

So, the process ξ 2 is correctly defined. Now ξ n and π n , n ≥ 2 can be built by induction. Let us consider for the fixed n ≥ 1 the difference ξ n+1 − ξ n . For the fixed t ≥ 0, t

󵄩󵄩 n+1 󵄩2 󵄩 n 󵄩2 n n n−1 n−1 󵄩󵄩ξ (t) − ξ (t)󵄩󵄩󵄩 ≤ C1 (∫󵄩󵄩󵄩a(ξ (s), π (s)) − a(ξ (s), π (s))󵄩󵄩󵄩 ds 0

󵄩󵄩 t 󵄩󵄩2 󵄩󵄩 󵄩󵄩 n n n−1 n−1 󵄩 + 󵄩󵄩∫(b(ξ (s), π (s)) − b(ξ (s), π (s)))dW (s)󵄩󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩0 󵄩

158 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space 󵄩󵄩2 󵄩󵄩 t 󵄩󵄩 󵄩󵄩 n n n−1 n−1 󵄩 ̃ + 󵄩󵄩∫ ∫ (f2 (ξ (s−), πs− , q) − f2 (ξ (s−), πs− , q))ν(dq, ds)󵄩󵄩󵄩 ). 󵄩󵄩 󵄩󵄩 󵄩 0 ℝ\[0;1] 󵄩 Since 󵄩 󵄩2 γ22 (πtn+1 , πtn ) ≤ E(󵄩󵄩󵄩ξ n+1 (t) − ξ n (t)󵄩󵄩󵄩 /W , ν) a. s., then 󵄩 󵄩2 E sup γ22 (πsn+1 , πsn ) ≤ E sup󵄩󵄩󵄩ξ n+1 (s) − ξ n (s)󵄩󵄩󵄩 . [0;t]

[0;t]

Denote for n ≥ 0, 󵄩 󵄩2 αn (t) = E sup󵄩󵄩󵄩ξ n+1 (s) − ξ n (s)󵄩󵄩󵄩 , [0;t]

βn (t) = E sup γ22 (πsn+1 , πsn ). [0;t]

Then similar to the proof of Theorem 2.3.1, we can get the inequalities t

t

βn (t) ≤ C2 (∫ αn−1 (s)ds + ∫ βn−1 (s)ds), 0

0

t

t

αn (t) ≤ C2 (∫ αn−1 (s)ds + ∫ βn−1 (s)ds). 0

0

From these inequalities, it follows that under n → ∞ there are the uniform limits ξ and π of ξ n and π n on every finite interval with the probability one. It is easy to check that ξ and π are the Cádlág processes in H and M and that πt is the conditional distribution of ξ(t) under the given W and ν. Now check the uniqueness of the solution. Let ξ and η be the solutions to the Cauchy problem (6.6) with the same initial condition ξ0 . Denote for t ≥ 0, 󵄩 󵄩2 γ(t) = E sup󵄩󵄩󵄩ξ(s) − η(s)󵄩󵄩󵄩 , [0;t]

δ(t) = E sup γ22 (πsξ , πsη ), [0;t]

where π ξ and π η are corresponding conditional distributions of ξ and η. Then similar to the previous considerations it is possible to get the inequalities t

t

γ(t) ≤ C(∫ γ(s)ds + ∫ δ(s)ds), 0

0

6.2 Stochastic differential equation for the measure-valued process t

� 159

t

δ(t) ≤ C(∫ γ(s)ds + ∫ δ(s)ds). 0

0

So, γ ≡ 0, δ ≡ 0. The uniqueness is proved. This completes the proof of the theorem in the case f1 ≡ 0. The general case can be considered using the random moments of jumps of ν, which lie in [0; 1] similar to [78]. Remark 6.2.1. It is useful to compare the two solutions to (6.6), which correspond to the different initial conditions ξ0 and η0 . For this case, the last inequalities from the proof of Theorem 6.2.1 can be rewritten as follows: t

t

2

γ(t) ≤ C(E‖ξ0 − η0 ‖ + ∫ γ(s)ds + ∫ δ(s)ds), 0

0

t

δ(t) ≤ C(γ22 (π0ξ , π0ξ ) ∫ δ(s)ds). 0

Consequently, for every 0 ≤ t ≤ T, γ(t) ≤ CT E‖ξ0 − η0 ‖2 , η

δ(t) ≤ CT γ22 (π0ξ , π0 ).

It results from the last inequality that if the distributions of ξ0 and η0 coincide, then δ(t) = 0 for every t ≥ 0. Remark 6.2.2. For the purpose of the construction Markov process in M2 with the help of equation (6.6), it is useful to consider this equation with the random initial condition for π. It can be done in the following way. Consider (6.6) on the time interval [T0 ; T1 ] with T0 > 0. Assume that the value of ξ at the time T0 depends on the σ-field ℱT0 generated by the values of W and ν up to the moment T0 and some σ-field ℱ ′ , which is independent from W and ν. Under this assumption, πT0 is a random element in M2 measurable relatively to ℱ ′ . The following statement can be checked. Theorem 6.2.2. Let ξ be the solution to (6.6) starting at the time 0. Then T

ξ(T2 ) = ξT 2 1

T

a. s.,

where ξT 2 is the solution to (6.6) starting at the time T1 from the value ξ(T1 ). 1

160 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space Now let us construct the Markov process in M2 with the help of equation (6.6). Let μ ∈ M2 . Define the process {x(t); t ≥ 0} in M2 in the following way. Consider the random variable ξ0 in H, which is independent from W and ν and has the distribution μ (the probability space can be enlarged if necessary). Then for ξ0 equation (6.6) has the unique solution {ξ(t); t ≥ 0}. Take x(t) = πtξ ,

t ≥ 0.

It follows from the previous considerations that x is a Cádlág process in M2 and uniquely determined by μ. Theorem 6.2.3. {x(t); t ≥ 0} is a Markov process with respect to the flow of σ-fields {ℱt ; t ≥ 0}. Proof. One can conclude from the remark after Theorem 6.2.1 that for every s ≥ 0 the correspondence M2 (H)3 ∋ μ → πs is correctly defined and continuous in the square mean. Denote by Ftt+s the analogous map on M2 , which is built by equation (6.6) if we take the Cauchy problem at the initial time t and consider the conditional distribution at the time t+s. Note that due to the usual t+s reasons Ftt+s has the measurable modification F̃ t . This modification is independent from ℱt . Now due to Theorem 6.2.2 we have the following relation: t+s x(t + s) = F̃ t (x(t)) a. s.

From here, the Markov property of x can be easily obtained. The theorem is proved. Now let us consider the action of the infinitesimal operator for the process x on a certain polynomial on M2 . For simplicity, assume that we have the jumps, which are described only by the noncompensable part of ν, i. e., f2 ≡ 0. Lemma 6.2.1. Let φ : H k → ℝ be a bounded continuous function with the two bounded continuous derivatives. Then the function Φ(μ) = . k. . φ(u1 , . . . , un )μ(du1 ) ⋅ ⋅ ⋅ μ(dun ) H

belongs to the domain of the infinitesimal operator A for x and k

(AΦ)(μ) = . k. . ∑(∇j φ, a(uj , μ))μ(du1 ) ⋅ ⋅ ⋅ μ(duk ) H

+

j=1

1 k k . . . ∑ tr b∗ (uj1 , μ)𝜕j1 ,j2 φb(uj2 , μ)μ(du1 ) ⋅ ⋅ ⋅ μ(duk ) 2 H j ,j =1 1 2

6.3 The local times for the measure-valued processes

� 161

1

+ ∫ . k. . φ(u1 + f1 (u1 , μ, q), . . . , uk + f1 (uk , μ, q))μ(du1 ) ⋅ ⋅ ⋅ μ(duk )dq. 0

H

Here, ∇j φ is the gradient of φ relatively to u and 𝜕j1 j2 is the operator of the second derivative relative to uj1 and uj2 . Sketch of the proof. This lemma can be easily checked by using Itô’s formula if we can get some representation for Φ(x(t)). It can be done in the following way. Let ξ 1 , . . . , ξ k be independent random elements in H having the same distribution μ. Consider k solutions to the Cauchy problem (6.6) with the initial conditions ξ 1 , . . . , ξ k . Then Φ(x(t)) = E(φ(ξ 1 (t), . . . , ξ k (t))/W , ν) a. s. Now the Itô formula can be applied to the right part of the last equality. Exercise 6.2.1. Complete the proof of the theorem. It is interesting to interpret the formula for A by the language of particles. If we omit in equation (6.6) the members corresponding to the continuous motion, i. e., put a and b equal to 0, then one can see that our particles have the same random clock. This means that they have the jumps at the same moment of time. This conclusion coincides with the considerations of Section 3.5 where the evolutionary processes on the finite space were considered.

6.3 The local times for the measure-valued processes related to the stochastic flow with interaction In this section, we consider the evolutionary measure-valued processes, which arise as the solution of equation (2.1), i. e., we will assume that the measure-valued process {μt ; } in the space M2 of all the probability measures on ℝd with the finite second moment is obtained from the equation dx(u, t) = a(x(u, t), μt )dt + ∫ b(x(u, t), μt , q)W (dq, dt), x(u, 0) = u,

ℝd d

u∈ℝ ,

(6.7)

μt = μ0 ⋅ x(⋅, t)−1

with the conditions on the coefficients described in Section 2.3. The main object of investigation in this section is the following value: t

L(u, t) = ∫ ∫ δu (v)μs (dv)ds. 0 ℝd

(6.8)

162 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space Here, δu is the δ-function concentrated in the point u of the space ℝd . The formal expression (6.8) is some kind of generalization of the notion of local time for the measure-valued process {μz ; t ≥ 0}. Let us consider some examples, which show us the reasons because of which this local time can exist. Example 6.3.1. Assume that the initial measure μ is discrete, i. e., N

μ = ∑ ak δxk . k=1

Then N

μt = ∑ ak δxk (t), k=1

where the heavy particles xk (t), k = 1, . . . , N satisfy the system of the stochastic differential equations (see Sections 2.1, 2.3). Now (6.8) has such a kind as N

t

k=1

0

L(u, t) = ∑ ak ∫ δu (xk (s))ds, i. e., in this case L(u, t) is the sum of the usual local times for different processes xk , k = 1, . . . , N. Example 6.3.2. Suppose that the initial measure μ0 has a continuous density with respect to the Lebesgue measure. Then, similar to the considerations from Sections 5.2 and 5.3, we can check whether under appropriate conditions on the coefficients the measures {μt } have the density {pt } with respect to the Lebesgue measure and this density has continuous modification relative to both spatial and time variables. It is obvious that in this case L(u, t) exists and is equal to t

L(u, t) = ∫ ps (u)ds. 0

These two simple examples are considered here with the purpose to emphasize two important reasons, which lead to the existence of the local time L(u, t). In the first case, the single particles trajectories, from which our process is composed, can have the usual local time at the point u. In the second case, our process can consist of the smooth measures. The first reason does not work in the space ℝd with d ≥ 2 because the trajectories of the single particles now have no local time at the point u. Also, we will consider here the case of the singular initial mass distribution. We will consider the measure μ0 , which is concentrated on the d − 1 dimensional surface S0 and is absolutely continuous relative to the surface measure. Then the existence of the local time L(u, t) can be achieved in

6.3 The local times for the measure-valued processes

� 163

such a manner. It can be proved that for the sufficiently smooth coefficients a and b the map x(⋅, t) : ℝd → ℝd is a diffeomorphism with probability one. Denote by St the image of S0 under x(⋅, t). Since the dimension of S0 (and correspondingly of St ) is d − 1, then it can be expected that there exists a nondecreasing random process t

ζ (t) = ∫ δSτ (u)dτ, 0

which is the “local time” when u is placed on St , t ≥ 0. Finally, L(u, t) can be expressed in the form t

L(u, t) = ∫ J(τ)p(x −1 (u, τ))dζ (τ), 0

where p is the density of μ0 with respect to initial surface measure on S0 , J is a correction term from the change of variables formula, and x −1 (⋅, τ) is an inverse map for x(⋅, τ). So, at first we will prove the existence of the “local time” ζ . Consider the random flow {x(⋅, t); t ∈ [0; 1]}, which is the solution to (6.7). In order to obtain the desired properties of x, and let us treat equation (6.7) as an equation with the random coefficients ã(r, t) = a(r, μt ),

̃ t) = b(r, μ ), b(r, t

r ∈ ℝd ,

t ∈ [0; 1].

Then under appropriate conditions on a, b we can derive some properties of x(⋅, t) as the properties of the stochastic flow in the sense of [67]. The next lemma is a direct consequence of Theorem 4.4.2, page 148 from [67]. Lemma 6.3.1. Let the coefficients a and b of (6.7) satisfy the following conditions: (1) ∀k = 0. . . . , 3 𝜕k a 𝜕k b , ∈ C(ℝd × M2 ). 𝜕uk 𝜕uk (2) ∀k = 0. . . . , 3 󵄩󵄩 𝜕k a 󵄩󵄩 󵄩󵄩 𝜕k b 󵄩󵄩 󵄩 󵄩 󵄩 󵄩 sup (󵄩󵄩󵄩 k 󵄩󵄩󵄩 + 󵄩󵄩󵄩 k 󵄩󵄩󵄩) < +∞. 󵄩 󵄩 󵄩 󵄩 ℝd ×M2 󵄩 𝜕u 󵄩 󵄩 𝜕u 󵄩 Then the random field x(u, t), u ∈ ℝd , t ∈ [0; 1] has such a modification that: (1) x ∈ C(ℝd × [0; 1]) a. s. (2) x(⋅, t) is C 2 -diffeomorphism ℝd onto ℝd for every t ∈ [0; 1] a. s. (3) The inverse mappings x −1 (⋅, t), t ∈ [0; 1] are such that for every u ∈ ℝd {x −1 (u, t), t ∈ [0; 1]} is the Itô process with the differential dx −1 (u, t) = α(u, t)dt + β(u, t, q)W (dq, dt), where α and β have continuous modifications relative to u and t.

164 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space Now let us assume that S0 = {u = (u1 , . . . , ud ) : ud = 0}. Denote by St the image of S0 under the random map x(⋅, t). Due to the properties of x(⋅, t), which were mentioned in Lemma 6.3.1, St is a smooth surface of codimension one. Our aim is to register the cases, when the fixed point (zero in the sequel without loss of generality) belongs to St . Thus, we will consider the functional t

ζ (t) = ∫ δ0 (x −1 (0, s)d )ds, 0

where δ0 is a δ-function on ℝ concentrated at the origin and x −1 (0, s)d is the last coordinate of x −1 (0, s). In order to define ζ in a proper way, let us use the Tanaka approach [53]. Consider the function h ∈ C(ℝ), which is even, nonnegative, compactly supported and satisfies the equality x

y

Fn (x) = ∫ ∫ hn (z)dzdy. −∞ −∞

We get t

Fn (x −1 (0, t)d ) = ∫ Fn′ (x −1 (0, s)d )α(0, s)d ds 0

t

+

d 1 2 ∫ hn (x −1 (0, s)d ) ∑ ∫ (β(0, s, q)dj ) dq 2 j=1 0

ℝd

t

d

0

j=1

+ ∫ Fn′ (x −1 (0, s)d ) ∑ ∫ β(0, s, q)dj W (dq, ds). ℝd

Having used the usual limit argument, we receive now that there exists the limit in the square mean t

d

0

j=1

L2 - lim ∫ hn (x −1 (0, s)d ) ∑ ∫ β(0, s, q)2dj dqds. n→∞

ℝd

It follows from the explicit representation of the stochastic differential of x −1 (0, t) in [67] that under the condition of nondegeneracy on b the sum d

̃ = ∑ ∫ β(0, s, q)2 dq β(s) dj j=1

ℝd

6.3 The local times for the measure-valued processes

� 165

has continuous modification. Let h > 0 satisfy the relation ∫ h(r)dr = 1. ℝ

Define for n ≥ 1, hn (r) = nh(rn),

r ∈ ℝ, n ≥ 1.

The following statement is true. Lemma 6.3.2. Let b be nondegenerate uniformly relative to u and t in addition to the conditions of Lemma 6.3.1. Then there exists the limit in probability t

ζ (t) = P- lim ∫ hn (x −1 (0, s)d )ds. n→∞

0

{ζ (t); t ∈ [0; 1]} has a modification, which is the nondecreasing right-continuous process. Proof. It results from Lemma 6.3.1 that −1

d

s

d

d

s

x (0, s) = ∫ α(0, τ) dτ + ∑ ∫ ∫ β(0, τ, q)dj Wj (dq, dτ) j=1 0 d ℝ

0

in the usual coordinate notation. Now we use Itô’s formula for the process x −1 (0, s)d and the function ̃ >0 min β(s)

a. s.

[0;1]

t

Now consider the arbitrary subsequence of {∫0 hn (x −1 (0, s)d )ds; n ≥ 1}. Using the diagonal method, we can choose from this subsequence subsubsequence, which will be t shortly denoted by {∫0 hnk (x −1 (0, s)d )ds; k ≥ 1} such that satisfies the following relations. There exists the set Ω0 ⊂ Ω of probability one such that for every ω ∈ Ω0 : ̃ (1) β(ω) ∈ C([0; 1]), ̃ min β(ω) > 0. [0;1]

(2) ∀r1 , r2 ∈ ℚ ∩ [0; t]: r2

̃ ∃ lim ∫ hnk (x −1 (0, s)d (ω))β(s)(ω)ds. k→∞

r1

166 � 6 Evolutionary measure-valued processes on the infinitely-dimensional space t

(3) supk≥1 ∫0 hnk (x −1 (0, s)d (ω))ds < +∞. It results from (1)–(3) that for every ω ∈ Ω0 and every φ ∈ C([0; t]) there exists a limit t

lim ∫ hnk (x −1 (0, s)d (ω))φ(s)ds.

k→∞

0

̃ Using uniform continuity of the functions φ and β(ω) for the fixed ω, we can construct the sequence of the functions {φm ; m ≥ 1} such that: (4) For every m ≥ 1, there exists the partition 0 = t0m < ⋅ ⋅ ⋅ < tNm = t by the rational m numbers (excluding t) for which ∀j = 0, . . . , Nm − 1 ∀s ∈ (tjm ; tj+1 ): φm (s) = φm (s) =

φ(tjm ) ̃ m) β(t j

̃ β(s),

φ(tNmm −1 ) ̃ m β(t N

m

−1 )

̃ β(s).

(5) sup[0;t] |φm − φ| → 0, m → ∞. (6) For every m ≥ 1, there is the limit t

lim ∫ hnk (x −1 (0, s)d (ω))φm (s)ds.

k→∞

0

Now, using (5) and (3), we conclude that there exists the limit t

lim ∫ hnk (x −1 (0, s)d (ω))φ(s)ds.

k→∞

0

So, by substituting φ ≡ 1, we prove the statement of the lemma. Now let us return to local time for our measure-valued process. The following statement holds. Theorem 6.3.1. Suppose that the coefficients of equation (6.7) satisfy the conditions of Lemma 6.3.2. Let the initial measure μ0 be concentrated on the surface S0 and have the continuous density p with respect to the Lebesgue measure on this surface. Then there is the limit in probability, t

P- lim ∫ ∫ fn (u)μs (du)ds, n→∞

0 ℝd

6.3 The local times for the measure-valued processes

� 167

where fn (v) = nd f (nv) and f ∈ C01 (ℝd ) nonnegative, spherically symmetric with ∫ f (v)dv = 1. ℝd

Proof. Let us consider the stochastic flow {x(⋅, s); s ∈ [0; 1]} related to the initial measure μ0 via equation (6.7). Then t

t

∫ ∫ fn (u)μs (du)ds = ∫ ∫ fn (u)μ0 (du)ds. 0 ℝd

0 ℝd

Let us define the functions φn (s, u) = ∫ f (x(v1 , . . . , vd−1 , u, s))p(v1 , . . . , vd−1 , 0)dv1 ⋅ ⋅ ⋅ dvd−1 ,

u ∈ ℝ, s ∈ [0; 1],

S0

ψn (s, u) = φn (s, u − x −1 (0, s)d ),

u ∈ ℝ, s ∈ [0; 1].

Using the diffeomorphic property of the flow x and the change of variables formula, it is possible to check whether there exists a continuous random process {J(t); t ∈ [0; 1]} and the deterministic sequence of functions {ϰn ; n ≥ 1} on ℝ such that: (1) ∀n ≥ 1 : supp ϰn ⊂ [−1; 1], ϰn ≥ 0. 1 (2) E ∫0 [ψn (s, x −1 (0, s)d ) − J(s)ϰn (x −1 (0, s)d )]2 ds → 0, n → ∞. (3) {ϰn ; n ≥ 1} satisfies the conditions of Lemma 6.3.2. Note that ∫ fn (v)μs (dv) = ψn (s, x −1 (0, s)d ) a. s. ℝd

So, due to Lemma 6.3.2, for every t ∈ [0; 1] there is the limit in probability t

n→∞

0

The theorem is proved.

d

t

P- lim ∫ ψn (s, x (0, s) )ds = P- lim ∫ J(s)ϰn (x −1 (0, s)d )ds. −1

n→∞

0

7 Stochastic calculus for flows with singular interaction 7.1 Total time of free motion in Arratia flow In this chapter, we study the Arratia flow of coalescing Brownian particles. Such flow was obtained in Chapter 5 as a weak limit of smooth stochastic flows. Since the Arratia flow consists of pieces of Brownian trajectories starting from the points of a real line, then it is natural to construct the analog of Itô’s integral using the summation of integrals along all pieces. In this chapter, we propose such integral. This allows to obtain the analog of Girsanov’s theorem for Arratia flow. Another tool of stochastic analysis is also developed in this chapter. All these constructions are possible due to an interesting fact. Namely, it appears that the total time of free motion is finite for the family of all particles, which start from the bounded interval. In this section, we prove this statement. Let {x(u, t); u ∈ ℝ, t ∈ [0; 1]} be the Arratia flow on the time interval [0; 1]. Here, we use the following description. Definition 7.1.1. {x(u, t); u ∈ ℝ, t ∈ [0; 1]} is called the Arratia flow if: (1) For every u ∈ ℝ, x(u, ⋅) is a Wiener martingale with respect to a joint filtration and x(u, 0) = u. (2) The joint quadratic characteristics t

⟨x(u1 , ⋅), x(u2 , ⋅)⟩(t) = ∫ 1τ(uu1 ,u2 )≤s ds, 0

where τ(u1 , u2 ) = min{1; t : x(u1 , t) = x(u2 , t)}. (3) For arbitrary u1 ≤ u2 x(u1 , t) ≤ x(u2 , t),

t ∈ [0; 1].

In Chapter 5, such flow was constructed as a weak limit of the flows corresponding to the stochastic differential equations with smooth coefficients. In the original work of Arratia, it was constructed as a weak limit of coalescing random walks [51]. The coalescence phenomena takes place in the limiting Arratia flow, too. Namely, for arbitrary u1 , u2 ∈ ℝ, ∀t ≥ τ(u1 , u2 ): x(u1 , t) = x(u2 , t). https://doi.org/10.1515/9783110986518-007

7.1 Total time of free motion in Arratia flow

� 169

If we have a finite family of starting points u1 , . . . , un , then one can define a total time of free motion for particles, which start from these points as follows. Put τ1 = 1, (7.1)

k−1

τk = min{1; t : ∏(x(uk , t) − xj (uj , t)) = 0}. j=1

Definition 7.1.2. The total time of free motion in the system x(uk , ⋅), k = 1, . . . , n is a sum, n

𝒯n = ∑ τk . k=1

It occurs that 𝒯 remains to be finite for any bounded sequence of starting points {un ; n ≥ 1}. To check this, let us begin with the following simple lemma. Lemma 7.1.1. Let {w(t); t ∈ [0; 1]} be a standard Wiener process such that w(0) = 0. For u > 0, define τu = inf{1; t : w(t) = u}. Then 2 Eτu ∼ 2√ u, π

u → 0+.

Proof. Since [55], ∀t ∈ [0; 1] u

P{τu ≥ t} = ∫ pt (v)dv, −u

where pt is a density of w(t), then 1

1 u

0

0 −u

v2

u 1

v2

1 e− 2t 1 e− 2t Eτu = ∫ P{τu ≥ t}dt = √ ∫ ∫ dvdt = √ ∫ ∫ dvdt. √t √t π π −u 0

Note that 1

v2

1

e− 2t dt lim ∫ dt = ∫ = 2. v→0 √t √t 0

0

Hence, 2 Eτu ∼ 2√ u, π The lemma is proved.

u → 0+.

170 � 7 Stochastic calculus for flows with singular interaction It follows from this lemma that there exists some positive C > 0, Eτu ≤ Cu,

u > 0.

Now consider a sequence {un ; n ≥ 1} of points from the interval [a; b]. Then for every n ≥ 1, n

n

k=1

k=1

∑ τk = ∑ σk ,

where {σk } are defined in the same way as {τk } but for the sequence u0 , . . . , u(n) , which is u1 , . . . , un ordered in the increasing order. But now n

n

n−1

k=1

k=1

k=1

E ∑ σk = ∑ Eσk ≤ C ∑ (u(k+1) − uk ) + 1 ≤ b − a + 1. Consequently, ∞

E ∑ τn < +∞. n=1

Now let us prove that the sum ∞

𝒯 = ∑ τn n=1

has the same value for all sequences, which are dense in [a; b]. To check this, define for every n ≥ 1 the new process νn (t), t ∈ [0; 1] as follows: n

νn (t) = ∑ 1t 0

ν(t) < +∞.

Definition 7.1.3. Random variable 𝒯 is called by the total time of free motion in Arratia flow for the particles, which start from the interval [a; b]. Since 𝒯 is finite with probability one and 1

𝒯 = ∫ ν(t)dt, 0

then ν(t) is finite with probability one for all positive t. Consequently, for t > 0 the image of the arbitrary bounded interval under the mapping x(⋅, t) is a finite set. Hence, with probability one, x(⋅, t) is a right-continuous step function with the finite number of jumps on every open interval.

7.2 Stochastic integral with respect to Arratia flow

In this section, we build the stochastic integral with respect to the parts of trajectories related to the periods of free motion in the flow. Let a : ℝ → ℝ be a bounded measurable function. Suppose that {u_n; n ≥ 1} is a dense subset of [0; U] containing 0 and U. Define the random moments {τ_n; n ≥ 1} as in the previous section.

Theorem 7.2.1. The series
$$ \sum_{n=1}^{\infty} \int_0^{\tau_n} a(x(u_n, s))\, dx(u_n, s) \tag{7.2} $$
converges in the square mean and its sum does not depend on the concrete choice of {u_n; n ≥ 1}.

Proof. First, note that the summands in (7.2) are noncorrelated. Indeed, for all n ≥ 1,
$$ E\int_0^{\tau_n} a(x(u_n, s))\, dx(u_n, s) = 0, $$
and for n ≠ m,
$$ E\int_0^{\tau_n} a(x(u_n, s))\, dx(u_n, s) \int_0^{\tau_m} a(x(u_m, s))\, dx(u_m, s) = E\int_0^{\tau_n \wedge \tau_m} a(x(u_n, s))\, a(x(u_m, s))\, \mathbb{1}_{s \ge \tau(u_n, u_m)}\, ds. $$
But τ_n ∧ τ_m ≤ τ(u_n, u_m). Hence,
$$ E\int_0^{\tau_n} a(x(u_n, s))\, dx(u_n, s) \int_0^{\tau_m} a(x(u_m, s))\, dx(u_m, s) = 0. $$

Consequently, it is enough to check that
$$ \sum_{n=1}^{\infty} E\int_0^{\tau_n} a(x(u_n, s))^2\, ds < +\infty. $$
But the last sum is less than or equal to c^2 E𝒯 ≤ c̃(U + 1), where c is such that |a(u)| ≤ c for all u ∈ ℝ. The fact that (7.2) does not depend on the choice of {u_n; n ≥ 1} can be checked similarly to the same statement for 𝒯. The theorem is proved.

Note that (7.2) can converge not only for bounded a.

Example 7.2.1. Let us prove that the series

$$ \sum_{n=1}^{\infty} \int_0^{\tau_n} x(u_n, s)\, dx(u_n, s) \tag{7.3} $$
converges in probability. For c > 0, define the random moment
$$ \sigma_c = \inf\{1, t : x(0, t) = -c \ \text{or}\ x(U, t) = c\}. $$
Now put, for n ≥ 1, τ'_n = τ_n ∧ σ_c. Similarly to the proof of Theorem 7.2.1, one can check that the series

$$ \sum_{n=1}^{\infty} \int_0^{\tau'_n} x(u_n, s)\, dx(u_n, s) $$
consists of noncorrelated summands and
$$ \sum_{n=1}^{\infty} E\Big(\int_0^{\tau'_n} x(u_n, s)\, dx(u_n, s)\Big)^2 = \sum_{n=1}^{\infty} E\int_0^{\tau'_n} x(u_n, s)^2\, ds \le c^2 E\mathcal{T}. $$

Now, for all n ≥ 1, τ'_n 1_{σ_c = 1} = τ_n 1_{σ_c = 1}. To complete the proof, it remains to note that
$$ P\{\sigma_c = 1\} \to 1, \quad c \to +\infty. $$
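As a numerical illustration of Theorem 7.2.1, a partial sum of the series (7.2) can be accumulated along a simulated flow. The sketch below reuses the hypothetical arratia_npoint from Section 7.1; detecting coalescence through exact equality of the simulated positions is a simplification tied to that sketch.

```python
import numpy as np

def arratia_integral(a, u, T=1.0, n_steps=2000, rng=None):
    """Discrete analogue of the series (7.2): Ito sums along each
    trajectory until it coalesces with a previously started one."""
    path = arratia_npoint(np.sort(np.asarray(u, float)), T, n_steps, rng)
    n = path.shape[1]
    free = np.ones(n, dtype=bool)       # leftmost particle is free forever
    total = 0.0
    for k in range(n_steps):
        x0, x1 = path[k], path[k + 1]
        inc = x1 - x0
        total += float(np.sum(a(x0[free]) * inc[free]))
        for i in range(1, n):           # freeze particles that coalesced
            if free[i] and x1[i] == x1[i - 1]:
                free[i] = False
    return total

# e.g. arratia_integral(np.sin, np.linspace(0.0, 1.0, 50))
```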

Definition 7.2.1. The sum (7.2) is called the stochastic integral with respect to the Arratia flow over the interval [0; U]. In the sequel, we will denote this integral as follows:
$$ \int_0^U \int_0^{\tau(u)} a(x(u, s))\, dx(u, s). $$

In this notation, the symbol τ(u) is not a particular moment of time. We only emphasize that the integral is a sum of stochastic integrals along the parts of trajectories until their meeting. The same reason explains the absence of a second differential. A small modification of the constructed stochastic integral gives us the possibility to establish the Clark representation theorem for functionals of the Arratia flow. It is known that every square-integrable random variable α, which is measurable with respect to the Wiener process {w(t); t ∈ [0; 1]}, can be uniquely represented as
$$ \alpha = E\alpha + \int_0^1 \xi(t)\, dw(t). $$
Here, in Itô's integral, the square-integrable adapted function ξ is uniquely defined by α [73]. Since the Arratia flow is constructed from the parts of trajectories of Wiener

processes, one can expect that the square-integrable functionals of the flow also have the Clark representation. To prove this, we will start from the following statement.

Lemma 7.2.1. Let {w(t); t ∈ [0; 1]} be a standard Wiener process and τ ≤ 1 a stopping moment. Suppose that the square-integrable random variable α is measurable with respect to {w(t ∧ τ); t ∈ [0; 1]}. Then α can be uniquely represented as
$$ \alpha = E\alpha + \int_0^{\tau} \xi(t)\, dw(t) $$
with the Itô integral of a square-integrable adapted function ξ.

Proof. Due to the Clark representation theorem,
$$ \alpha = E\alpha + \int_0^1 \eta(t)\, dw(t). $$
Hence, by the Doob optional sampling theorem [55],
$$ \alpha = E(\alpha/\mathcal{F}_\tau) = E\alpha + \int_0^{\tau} \eta(t)\, dw(t). $$
Here, ℱ_τ = σ(w(t ∧ τ); t ∈ [0; 1]).

Uniqueness of the obtained representation is trivial. The lemma is proved.

Now consider functionals of a finite set of independent Wiener processes w_k, k = 1, ..., n. We will say that a square-integrable random function ξ(t), t ∈ [0; 1], is measurable with respect to w_1, ..., w_{n−1} and adapted to the flow of w_n if for every t ∈ [0; 1] the restriction of ξ onto the interval [0; t] is measurable with respect to w_1, ..., w_{n−1} and w_n(s), s ≤ t. Such a random function has an Itô integral with respect to w_n. Now consider random times τ_1, ..., τ_n such that τ_k is a stopping moment with respect to the flow generated by w_1, ..., w_k. Suppose that a square-integrable random variable α is measurable with respect to {w_1(t ∧ τ_1), ..., w_n(t ∧ τ_n); t ∈ [0; 1]}.

Lemma 7.2.2. α can be uniquely represented as
$$ \alpha = E\alpha + \sum_{k=1}^{n} \int_0^{\tau_k} \xi_k(t)\, dw_k(t), $$
where for k = 1, ..., n the random function ξ_k is measurable with respect to w_1, ..., w_{k−1} and adapted to the flow of w_k.


Proof. Denote, for k = 1, ..., n,
$$ \tilde{w}_k(t) = w_k(t \wedge \tau_k), \quad t \in [0; 1]. $$
Consider the random variable
$$ \zeta = \alpha - E(\alpha/\tilde{w}_1, \ldots, \tilde{w}_{n-1}). $$
Under fixed w̃_1, ..., w̃_{n−1}, ζ is measurable with respect to w̃_n. Consequently,
$$ \zeta = \int_0^{\tau_n} \xi_n(t)\, dw_n(t), $$
where the random function ξ_n is measurable with respect to w_1, ..., w_{n−1} and adapted to the flow of w_n. Repeating the same arguments subsequently for
$$ E(\alpha/w_1, \ldots, w_k) - E(\alpha/w_1, \ldots, w_{k-1}), \quad k = n, n-1, \ldots, 1, $$

one can get the desired representation. Uniqueness is evident. The lemma is proved.

Now suppose that the sequence {u_n; n ≥ 0} is dense in [0; U] and contains 0 and U. For every n ≥ 0, define the stopping moment
$$ \tau_n = \inf\{1; t : x(u_n, t) \in \{x(u_0, t), \ldots, x(u_{n-1}, t)\}\}. $$
Since x is right-continuous with respect to the spatial variable, the σ-field generated by x on [0; U] is equal to the σ-field generated by {x(u_n, τ_n ∧ ⋅); n ≥ 0}. Hence, from Lemma 7.2.2 one can get the following statement.

Lemma 7.2.3. If the square-integrable random variable α is measurable with respect to the Arratia flow on the interval [0; U], then α can be uniquely represented as
$$ \alpha = E\alpha + \sum_{n=0}^{\infty} \int_0^{\tau_n} \xi_n(t)\, dx(u_n, t), \tag{7.4} $$
where for every n ≥ 0, ξ_n is measurable with respect to x(u_0, τ_0 ∧ ⋅), ..., x(u_{n−1}, τ_{n−1} ∧ ⋅) and adapted to the flow of x(u_n, τ_n ∧ ⋅).

Remark. Note that the moments τ_n, n ≥ 0, and the random functions ξ_n, n ≥ 0, depend on the choice and order of {u_n; n ≥ 0}.


7.3 Girsanov theorem for Arratia flow

In this section, we build the Arratia flow with drift and discuss the absolute continuity of its distribution with respect to the distribution of the Arratia flow without drift. Let a : ℝ → ℝ be a bounded Lipschitz function.

Definition 7.3.1. An Arratia flow with drift a is a family of random processes {x_a(u, t); t ≥ 0}, u ∈ ℝ, such that:
(1) For every u ∈ ℝ, x_a is a solution to the following Cauchy problem:
$$ dx_a(u, t) = a(x_a(u, t))\, dt + d\beta(u, t), \quad x_a(u, 0) = u, $$
where {β(u, t); t ≥ 0}, u ∈ ℝ, are Wiener martingales with respect to the joint filtration of x_a(u, ⋅), u ∈ ℝ, and
$$ \langle \beta(u_1, \cdot), \beta(u_2, \cdot)\rangle(t) = \int_0^t \mathbb{1}_{s \ge \tau(u_1, u_2)}\, ds. $$
Here, τ(u_1, u_2) = inf{t : x_a(u_1, t) = x_a(u_2, t)}.
(2) For any u_1 < ⋅⋅⋅ < u_n and t ≥ 0, x_a(u_1, t) ≤ ⋅⋅⋅ ≤ x_a(u_n, t).

Note that conditions (1) and (2) together with the strong Markov property guarantee that for all t ≥ τ(u_1, u_2), x_a(u_1, t) = x_a(u_2, t). Roughly speaking, the flow x_a consists of diffusion particles which start independently from each point of the real line, then coalesce and move together.

Lemma 7.3.1. The flow x_a exists and its distribution is uniquely defined by the function a and conditions (1) and (2).

Proof. Define the finite-dimensional distributions of x_a as a random C([0; +∞))-valued process indexed by the parameter u ∈ ℝ. Such distributions correspond to n-point motions in the flow. Consider u_1 < ⋅⋅⋅ < u_n. Let w_1, ..., w_n be independent Wiener processes. For every k = 1, ..., n, define the process y(u_k, t), t ≥ 0, as a solution to the following Cauchy problem:


$$ dy(u_k, t) = a(y(u_k, t))\, dt + dw_k(t), \quad y(u_k, 0) = u_k. $$
Define, for k = 2, ..., n, τ_k = inf{t : y(u_k, t) = y(u_{k−1}, t)} and put
$$ x_a(u_k, t) = \begin{cases} y(u_k, t), & t \le \tau_k, \\ y(u_{k-1}, t), & t > \tau_k. \end{cases} $$

Using the strong Markov property, one can check that the processes {x_a(u_k, t); t ≥ 0}, k = 1, ..., n, satisfy conditions (1) and (2). The uniqueness of the distribution of x_a(u_k, ⋅), k = 1, ..., n, can be easily checked. Moreover, one can check that the same distribution appears if we start from a nonordered set of points u_1, ..., u_n but use the other stopping moments σ_1 = 1,
$$ \sigma_k = \min\Big\{t : \prod_{j=1}^{k-1}\big(y(u_k, t) - x_a(u_j, t)\big) = 0\Big\} $$
and put
$$ x_a(u_k, t) = \begin{cases} y(u_k, t), & t \le \sigma_k, \\ x_a(u_{j^*}, t), & t > \sigma_k. \end{cases} $$
Here, j^* is the number such that y(u_k, σ_k) = x_a(u_{j^*}, σ_k). Due to this independence of the order of gluing, the obtained n-point distributions are consistent. Consequently, one can apply the Kolmogorov theorem in order to get the existence of the family x_a(u, ⋅), u ∈ ℝ, with the desired properties. The lemma is proved.

In this section, we compare the distribution of x_a with the distribution of the initial Arratia flow x. It can be checked [20] that x and x_a have right-continuous modifications as processes on ℝ with values in C([0; 1]). Then we can compare the distributions of x_a and x in the Skorokhod space D([0; U], C([0; 1])).

Theorem 7.3.1. The distribution of x_a is absolutely continuous with respect to the distribution of x with the density

$$ p(x) = \exp\Big\{\int_0^U \int_0^{\tau(u)} a(x(u, s))\, dx(u, s) - \frac{1}{2} \int_0^U \int_0^{\tau(u)} a(x(u, s))^2\, ds\Big\}. $$

Proof. Consider the dyadic points
$$ u_k = \frac{k}{2^n} U, \quad k = 0, 1, \ldots, 2^n = N. $$
Let us prove first that the distribution of x_a(u_k, ⋅), k = 0, ..., N, is absolutely continuous with respect to the distribution of x(u_k, ⋅), k = 0, ..., N, in C([0; 1], ℝ^{N+1}). Note that due to Girsanov's theorem [73] the distribution of the processes y_a(u_0, ⋅), ..., y_a(u_N, ⋅) is absolutely continuous with respect to the distribution of the processes u_k + w_k(⋅), k = 0, ..., N, with the density

$$ q_N(w) = \exp\Big\{\sum_{i=0}^{N} \int_0^1 a(w_i(t))\, dw_i(t) - \frac{1}{2} \sum_{i=0}^{N} \int_0^1 a(w_i(t))^2\, dt\Big\}. $$
Then the sets of processes x_a(u_0, ⋅), ..., x_a(u_N, ⋅) and x(u_0, ⋅), ..., x(u_N, ⋅) are constructed from y_a(u_0, ⋅), ..., y_a(u_N, ⋅) and u_0 + w_0(⋅), ..., u_N + w_N(⋅) with the help of the same gluing procedure. Such a mapping is measurable. If we denote it by F, then for an arbitrary bounded measurable function φ : C([0; 1], ℝ^{N+1}) → ℝ,
$$ E\varphi(x_a) = E\varphi(F(y)) = E\varphi(F(w)) q_N(w) = E\varphi(x) q_N(w). $$
Hence, the density of the distribution of x_a with respect to x is p_N(x) = E(q_N(w)/x).

Using the properties of the stochastic exponent and the strong Markov property, one can get
$$ p_N(x) = E(q_N(w)/x) = \exp\Big\{\sum_{i=0}^{N} \int_0^{\tau(u_i)} a(x(u_i, s))\, dx(u_i, s) - \frac{1}{2} \sum_{i=0}^{N} \int_0^{\tau(u_i)} a(x(u_i, s))^2\, ds\Big\}. \tag{7.5} $$
Passing to the limit as n → ∞ in (7.5), we obtain p(x). Consequently, to prove the statement of the theorem, it is enough to check the uniform integrability of (7.5) [41]. To do this, let us prove that
$$ \sup_N E\, p_N(x) \big|\ln p_N(x)\big| < +\infty. $$

This relation is equivalent to
$$ \sup_N E\, \Big|\sum_{i=0}^{N} \int_0^{\tau(u_i)} a(x_a(u_i, s))\, dx_a(u_i, s) - \frac{1}{2} \sum_{i=0}^{N} \int_0^{\tau(u_i)} a(x_a(u_i, s))^2\, ds\Big| < +\infty. \tag{7.6} $$
In the last formula, the moments τ(u_i), i = 0, ..., N, are built for x_a. To prove (7.6), we need the following statement.

Lemma 7.3.2. Let the processes z_i, i = 1, 2, be the solutions to the following Cauchy problem:
$$ dz_i(t) = a(z_i(t))\, dt + dw_i(t), $$

$$ z_1(0) = 0, \quad z_2(0) = u, \quad 0 < u \le 1. $$
Here, w_1, w_2 are independent Wiener processes. Define τ = inf{1, t : z_1(t) = z_2(t)}. Then, for some constant C which depends only on the function a,
$$ E\tau \le C \cdot u. $$

Proof. Denote sup_ℝ |a| = A. Note that
$$ z_2(t) - z_1(t) = u + \int_0^t \big(a(z_2(s)) - a(z_1(s))\big)\, ds + w_2(t) - w_1(t) \le u + 2At + w_2(t) - w_1(t) = \eta(t). $$
Hence, Eτ ≤ Eσ, where σ = inf{1, t : η(t) = 0}. Now
$$ \eta(t) = u + 2At + \sqrt{2}\, w_3(t), $$

with a standard Wiener process w_3. Due to Girsanov's theorem [73],
$$ E\sigma = E\zeta \exp\Big\{\frac{A}{\sqrt{2}}\, w_3(1) - \frac{A^2}{4}\Big\}, $$
where
$$ \zeta = \inf\Big\{1, t : w_3(t) = \frac{u}{\sqrt{2}}\Big\}. $$
Then
$$ E\sigma = E\zeta \exp\Big\{\frac{A}{\sqrt{2}}\, w_3(\zeta) - \frac{A^2}{4}\Big\} \le E\zeta \exp\Big\{\frac{Au}{\sqrt{2}}\Big\} \le e^{A/\sqrt{2}}\, E\zeta \le Cu. $$
The lemma is proved.

The proof of (7.6) can now be obtained similarly to the considerations from Section 7.2. The theorem is proved.
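The gluing construction from the proof of Lemma 7.3.1 can also be mimicked numerically: every free cluster makes an Euler step dx = a(x) dt + dw, and clusters are merged once their order would be violated. The sketch below is only an illustration under these discretization assumptions, not the construction itself.

```python
import numpy as np

def arratia_npoint_drift(u, a, T=1.0, n_steps=1000, rng=None):
    """Euler sketch of the n-point motion of an Arratia flow with drift a."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.sort(np.asarray(u, dtype=float))
    label = np.arange(len(x))
    dt = T / n_steps
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=len(x))
        x = x + a(x) * dt + dw[label]   # coalesced particles share position,
        for i in range(1, len(x)):      # drift and noise
            if x[i] <= x[i - 1]:
                x[i] = x[i - 1]
                label[label == label[i]] = label[i - 1]
        path.append(x.copy())
    return np.array(path)

# e.g. arratia_npoint_drift(np.linspace(0.0, 1.0, 20), lambda v: -np.tanh(v))
```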

8 Historical comments

This chapter is written with several aims: first, to compare the ideas of the book with similar results and constructions of other authors, and second, to point out problems and directions for further investigation. The material is discussed in the same order as it is presented in the book.

At the beginning of the first chapter, there are several examples of different stochastic flows with the same law of motion for separate particles. Such constructions are closely connected with the idea of coupling. This idea appears very fruitful in the investigation of the limit behavior of Markov processes and chains. The reader can find a detailed explanation in the monographs [71, 72]. These ideas naturally lead to the construction of stochastic flows starting from a family of Markov semigroups describing the motion of finite sets of particles in the flow. Such a construction was done in [8] and in many other works. One of the main ideas of the book is to study the case when the motion of one particle or a set of particles depends on the mass distribution of the whole family. Accordingly, Section 1.2 is devoted to random elements and stochastic kernels in the space of probability measures with the Wasserstein distance. This distance is connected with the weak convergence of probability measures and was studied in the works of different authors. Its properties and a description of compact sets are known; see, for example, [36]. We propose a criterion of compactness in the distance γ_n in a form which is suitable for further investigation. In such a form, this criterion is presented here for the first time.

Since the flows constructed in the book always transfer the mass distribution of particles, different descriptions of the mass evolution in the space are used. One of the most general descriptions considered here is the random measure on the space of trajectories. The one-dimensional distributions of such a measure describe the mass distributions at different moments of time. The multidimensional distributions correspond to the mechanism of mass evolution. In Section 1.3, we discuss the structure of the random measure in two cases: first, when it is adapted to the flow of σ-fields generated by the Wiener process and, second, when it is stationary. The Palm representation [55] is close to ours but, as it is easy to see, not the same.

Stochastic differential equations with interaction first appeared in the joint work of the author and Peter Kotelenez [29]. Before this, in the works of Peter Kotelenez [50, 64, 65], the weak limits of empirical distributions of finite systems of interacting particles in random media were considered. All the results from Chapter 2 belong to the author. Note that, despite the similarity with ordinary stochastic differential equations, the equations with interaction have new features. For example, it was proven in the paper of M. P. Karlikova [56] that the local Lipschitz condition on the coefficients together with an at most linear growth condition does not guarantee the uniqueness of the strong solution. In the works of A. Yu. Pilipenko and M. P. Karlikova (see the list of references), different generalizations of the equations with interaction are considered. In particular, the possibility to transfer not only a measure but a generalized function of the highest order was treated [57, 68, 69]. Also, the martingale problem for equations

with interaction was discussed [58]. In a paper of A. Yu. Pilipenko, the possibility of a change of the total mass was added to the equation with interaction. As solutions to the obtained equation, the Skorokhod measure-valued diffusions were obtained [88].

As was mentioned in Section 2.1, for the discrete initial measure
$$ \mu_0 = \frac{1}{N} \sum_{k=1}^{N} \delta_{u_k} $$
the equation with interaction contains the system of equations which describes the motion of heavy particles. If we consider the one-dimensional case and the coefficient
$$ a(u, \mu) = K \int_{\mathbb{R}} \sin(u - v)\, \mu(dv), $$
then for the corresponding deterministic equation with interaction the system for heavy particles looks like
$$ dy_i(t) = \frac{K}{N} \sum_{k=1}^{N} \sin(y_i(t) - y_k(t))\, dt, \quad y_i(0) = u_i, \quad i = 1, \ldots, N. $$
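A minimal numerical sketch of this system (Euler time stepping; the values of K, T, the step count and the initial phases are arbitrary illustrative choices):

```python
import numpy as np

def kuramoto(u, K=2.0, T=10.0, n_steps=2000):
    """Euler scheme for dy_i = (K/N) * sum_k sin(y_i - y_k) dt."""
    y = np.asarray(u, dtype=float).copy()
    dt = T / n_steps
    for _ in range(n_steps):
        y += (K / len(y)) * np.sin(y[:, None] - y[None, :]).sum(axis=1) * dt
    return y

# e.g. kuramoto(np.random.default_rng(0).uniform(0, 2 * np.pi, size=50))
```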

The displayed system is the well-known Kuramoto system [89], which is intensively studied in synchronization theory. When such a system is considered in the presence of random perturbations, it is commonly supposed that each process y_i is perturbed by its own noise w_i, where w_i, i = 1, ..., N, are independent. Such a model, under an appropriate scaling as N tends to infinity, leads to the mean-field equation [59]. In contrast with this approach, in the equation with interaction the noises for different particles are dependent. Such limiting schemes for measure-valued processes were considered by Peter Kotelenez [63, 64]. He also obtained a stochastic partial differential equation for the densities of the limiting measure-valued processes [65]. The equation with interaction for the limiting process was not obtained there. Such an equation, simultaneously for the stochastic flow and the measures transported by the flow, was proposed by the author.

In Chapter 3, we consider Markov measure-valued processes of constant mass, which inherit the properties of the measure-valued solutions to equations with interaction. The main feature is that the flow carries the mass. Flows which are solutions to equations with interaction are a partial case of the flows from this chapter. For example, the coalescence of particles is now allowed. Here, we present a construction of the stochastic flow with interaction via a random time change in a stochastic semigroup. Such an approach seems to be general enough. It would be interesting to describe all stochastic flows with interaction which can be obtained in this way. All results from this chapter belong to the author.

The main object of investigation in Chapter 4 is the random measures on the space of trajectories. As was already pointed out, such measures can describe the general evolution of the mass distribution on the phase space. We start with the Fubini theorem for such


measures and the Itô integral. Such a theorem first appeared in the author's paper. Note that for Stratonovich's integral a similar statement was obtained in [2]. Note also that different measure-valued solutions for various kinds of differential equations were treated by many authors [93, 54]. As a substitute for the weak solution to a stochastic differential equation, such a measure is proposed here for the first time. The statements about the equations with random measures on the space of trajectories belong to the author [12, 13, 18]. Such equations are close to the equation of the optimal nonlinear filtering problem [10].

Chapter 5 is devoted to the asymptotic behavior of stochastic flows with interaction. It appears that the conditions for the existence of a stationary solution which are typical for ordinary stochastic differential equations are not suitable for equations with interaction. For example, the presence of external forces which push all the particles toward the origin usually leads to the existence of a stationary solution to an ordinary stochastic differential equation. In the case of an equation with interaction, this can also be true, but the stationary solution becomes very simple [77]. Namely, all the mass distribution is concentrated in one point, which floats over the space in a stationary regime. To make the resulting stationary process richer, attracting centers (different for different particles) are introduced in this chapter. All obtained results belong to the author [17]. In Chapter 5, we also propose the notion of shift-compactness for a family of measures, which is in some sense a substitution for stationarity. This notion was introduced in [28] and can serve as a mathematical formulation of the partial synchronization phenomenon, which was discovered for the Kuramoto system. The Arratia flow is obtained as a weak limit in different approximation schemes by many authors. As a limit of the solutions to stochastic differential equations with smooth coefficients, it was obtained by the author [19].

Chapter 6 is devoted mostly to the equations with interaction in the Hilbert space [15, 16, 60, 61]. The Hilbert space is chosen not just for the sake of generalization but with the following purpose. One of the approaches to studying random processes in compact spaces is the description of such processes with the help of an infinite number of coordinates [87]. Hence, the statements of Chapter 6 can be used for the construction of measure-valued evolutionary processes in compact spaces. The local times for evolutionary processes can be used for studying the additive functionals of these processes [14, 27].

Chapter 7 contains the construction of the new stochastic integral with respect to the Arratia flow. All proposed statements belong to the author. The analog of Girsanov's theorem for stochastic flows was proved by the author together with T. V. Malovichko, who was a PhD student of the author at that time. Since this book is a translation of the monograph [23], we include the Appendix, where the recent investigations of the author, his colleagues and students related to stochastic flows with interaction are briefly described.

9 Appendix

9.1 Stochastic flows and measure-valued processes

This chapter presents results regarding one-dimensional Brownian stochastic flows obtained in the department of the theory of random processes during the past 15 years. One-dimensional stochastic flows describe the mutual motion of diffusion particles on the real line. A good example of such flows is given by the flows generated by solutions to SDEs. But there exists a large set of flows which cannot be obtained from SDEs because of the possibility for particles to coalesce. The survey is concentrated mainly on such flows. Here, we discuss questions of two types. The first group is devoted to traditional SDE problems like large deviations, discrete approximations, the Krylov–Veretennikov expansion and the Girsanov theorem. We present the corresponding results, which have new features due to the coalescence phenomenon. Another set of questions is related to the point structures which arise in a coalescing set of Brownian motions. Here, the properties of the corresponding point measures are discussed.

Introduction

This chapter contains results devoted to stochastic flows on the real line and related point processes. This subject was actively studied in the department of random processes from 2005 to 2021. Here, some statements are collected in order to describe the problems which arise in this field, along with the approaches developed in this area. The paper is organized as follows. The first four sections are based on the previous survey [30], which was written for COSA. The other sections contain material obtained after [30] was written.

9.2 Some properties of stochastic flows

Let (X, ρ) be a complete separable metric space and (Ω, ℱ, P) a complete probability space.

Definition 9.2.1. A measurable map φ : X × Ω → X is called a random map in X.

Definition 9.2.2. A family {φ_{s,t}; 0 ≤ s ≤ t < +∞} of random maps in X is referred to as a stochastic flow if the following conditions hold:
(1) For any 0 ≤ s_1 ≤ s_2 ≤ ⋅⋅⋅ ≤ s_n, the maps φ_{s_1,s_2}, ..., φ_{s_{n−1},s_n} are independent.
(2) For any s, t, r ≥ 0, the maps φ_{s,t} and φ_{s+r,t+r} are equidistributed.
(3) For any 0 ≤ s_1 ≤ s_2 ≤ s_3 and u ∈ X, φ_{s_1,s_3}(u) = φ_{s_2,s_3} φ_{s_1,s_2}(u).
(4) For any s ≥ 0 and u ∈ X, φ_{s,s}(u) = u.


Remark 9.2.1. Note that due to the separability of X, the superposition of random maps is defined correctly. Furthermore, a superposition of independent random maps does not depend on the choice of their modifications (up to a modification).

Below we consider point motions, or trajectories of individual particles in a stochastic flow, i.e., random processes of the form x(u, t) = φ_{0,t}(u), u ∈ X, t ≥ 0. If this does not lead to confusion, sometimes x will also be called a stochastic flow. One of the most famous examples of a stochastic flow is the flow of diffeomorphisms corresponding to a stochastic differential equation (SDE) with smooth coefficients. Let X = ℝ^n with the usual metric. Consider the equation
$$ dz(t) = a(z(t))\, dt + b(z(t))\, dw(t), \tag{9.1} $$
where w is a standard ℝ^n-valued Wiener process and a : ℝ^n → ℝ^n, b : ℝ^n → ℝ^{n×n} are continuously differentiable functions with bounded derivatives. It is well known [67] that this equation generates a stochastic flow {φ_{s,t}; 0 ≤ s ≤ t < +∞} in ℝ^n, consisting of random diffeomorphisms, such that for any u ∈ ℝ^n, s ≥ 0, φ_{s,t}(u) is the solution of the Cauchy problem for (9.1) started at the point u at time s. In addition to the smoothness of its component maps, the flow corresponding to equation (9.1) has a number of "good" properties. For example, for the solutions of equation (9.1), large deviations, Girsanov's theorem and the Krylov–Veretennikov representation (or the related Chen–Strichartz formula [3]) are known. All of these properties are due to the fact that the flow is obtained from the Wiener process via the Itô map generated by the vector fields corresponding to the coefficients a and b. Thus, the random maps forming the flow inherit properties of the Wiener process. In general, this is not the case. A stochastic flow can be organized in a more complicated way. As an example of a flow with a richer structure, one can consider the Harris flows. Let ψ : ℝ → ℝ be a continuous positive definite function.

Definition 9.2.3 ([51, 74]). The Harris flow with local characteristic ψ is a family {x(u, t); u ∈ ℝ} of Wiener martingales with respect to the joint filtration such that:
(1) For every u ∈ ℝ, x(u, 0) = u.
(2) For any u ≤ v and t ≥ 0, x(u, t) ≤ x(v, t).
(3) The joint characteristic of x(u, ⋅) and x(v, ⋅) has the form
$$ d\langle x(u, \cdot), x(v, \cdot)\rangle(t) = \psi(x(u, t) - x(v, t))\, dt. $$

Here, we do not consider the family {φ_{s,t}; 0 ≤ s ≤ t} but a set of the one-point motions {x(u, t); u ∈ ℝ, t ≥ 0}. Roughly speaking, the Harris flow describes the joint evolution of an ordered family of Brownian particles interacting with each other.

Remark 9.2.2. For a smooth function ψ, the Harris flow can be constructed as a solution to an SDE. Let W be a Wiener sheet on ℝ × ℝ_+ (a random Gaussian measure with mean zero, independent values on disjoint sets and structural measure equal to the Lebesgue measure). For a function f from the Schwartz space of rapidly decreasing infinitely differentiable functions, consider the following equation:
$$ dx(u, t) = \int_{\mathbb{R}} f(x(u, t) - p)\, W(dp, dt). \tag{9.2} $$
This equation generates a flow of random diffeomorphisms in ℝ [67]. In addition, for fixed u, x(u, ⋅) is a continuous square-integrable martingale with respect to the filtration generated by W, and
$$ d\langle x(u, \cdot), x(v, \cdot)\rangle(t) = \int_{\mathbb{R}} f(x(u, t) - p) f(x(v, t) - p)\, dp\, dt = \int_{\mathbb{R}} f(x(u, t) - x(v, t) - q) f(-q)\, dq\, dt = \psi(x(u, t) - x(v, t))\, dt. $$
If ψ(0) = 1, then for every u, x(u, ⋅) is a Wiener process due to Lévy's theorem [41]. The ordering of x with respect to the spatial variable follows from the smoothness of x. Thus, x is a Harris flow. Note that the function ψ is smooth here. One can, however, establish the existence of the Harris flow for a broader class of functions ψ. For example, it is known [51, 74] that the Harris flow exists for a function ψ which is continuous on ℝ and satisfies the Lipschitz condition outside any neighborhood of 0. The resulting flow may already have novel properties. In particular, it is known that under the condition
$$ \int_0^{\varepsilon} \big(1 - \psi(u)\big)^{-1} u\, du < +\infty $$
the particles started from different points of the line coalesce with each other. In this case, the maps x(⋅, t) are not smooth in the space variable. The noted property shows itself most clearly in the Arratia flow. The Arratia flow is the Harris flow corresponding to the discontinuous function ψ = 1_{{0}}. Roughly speaking, the Arratia flow consists of independent Wiener processes coalescing at the moment of meeting. It is known [74] that at every positive moment of time, any interval in the Arratia flow turns into a finite number of points. Thus, the maps x(⋅, t) are step functions with a finite number of jumps on each interval.

From the above examples, we can draw the following conclusions. First, under the same distributions of one-point motions (they can all be Wiener processes), flows can have completely different properties with respect to the spatial variable. Second, in some cases the flow is arranged by an external Gaussian noise and almost automatically inherits its properties, while in other cases only one-point motions are diffusion processes and the flow no longer generates a Gaussian noise. Since the Arratia


flow delivers one of the most striking examples of the latter situation, the following question is interesting: to what extent are the properties of the Gaussian white noise, and therefore of solutions of stochastic differential equations with smooth coefficients, inherited by the Arratia flow? The next section of the chapter is devoted to answering this question.

9.3 Gaussian properties of the Arratia flow

Let {x(u, t); u ∈ ℝ, t ≥ 0} be an Arratia flow. As mentioned in the previous section, this flow is composed of independent Wiener processes coalescing at the moment of meeting. The construction of the stochastic integral with respect to the Arratia flow is based on the fact that the processes started from a fixed interval coalesce within a finite total time. Formally, this property can be described as follows. Let λ = {0 = u_0 < ⋅⋅⋅ < u_n = 1} be a partition of [0; 1]. Denote τ_0 = 1, τ_k = min{1, s : x(u_k, s) = x(u_{k−1}, s)}, k = 1, ..., n. The sum
$$ S_\lambda = \sum_{k=0}^{n} \tau_k $$
can be regarded as the total time of free motion of the particles that started from the points of the partition λ. The following statement is true.

Theorem 9.3.1 ([21]). With probability one,
$$ \sup_\lambda S_\lambda = \lim_{|\lambda| \to 0} S_\lambda < +\infty. $$

Here, |λ| = max_{k=0,...,n−1}(u_{k+1} − u_k). Thus, in the Arratia flow the total time of free motion of particles that started from the interval [0; 1] (or any other interval) is finite. This allows us to build a stochastic integral along the pieces of free trajectories in the flow. Let a : ℝ → ℝ be a bounded measurable function.

Theorem 9.3.2 ([21]). There exist the following limits:

$$ \int_0^1 \int_0^{\tau_u} a(x(u, s))\, ds := P\text{-}\lim_{|\lambda| \to 0} \sum_{k=0}^{n} \int_0^{\tau_k} a(x(u_k, s))\, ds, $$
$$ \int_0^1 \int_0^{\tau_u} a(x(u, s))\, dx(u, s) := L_2\text{-}\lim_{|\lambda| \to 0} \sum_{k=0}^{n} \int_0^{\tau_k} a(x(u_k, s))\, dx(u_k, s). $$

188 � 9 Appendix Built integrals allow us to formulate an analogue of Girsanov’s theorem for the Arratia flow. Let a be a bounded measurable function. Definition 9.3.1 ([23]). An Arratia’s flow with a drift a is a stochastic process {xa (u, t); u ∈ ℝ, t ≥ 0} such that: (1) For a fixed u, xa (u, ⋅) is a diffusion process with a drift a and diffusion 1. (2) For any u ≤ v and t ≥ 0, xa (u, t) ≤ xa (v, t). (3) For any u1 < u2 < ⋅ ⋅ ⋅ < un and t ≥ 0, the restriction of the distribution of the stochastic processes xa (u1 , ⋅), . . . , xa (un , ⋅) on the set {f ∈ C([0; t], ℝn ) : fi (0) = ui , f1 (s) < ⋅ ⋅ ⋅ < fn (s), s ∈ [0; t]} coincides with the restriction to this set of the distribution of an n-dimensional diffusion process with the standard Wiener process with the drift (a, . . . , a); (4) For any u1 , u2 xa (u1 , t) = xa (u2 , t) when t ≥ τu1 u2 , where τu1 u2 = min{s : xa (u1 , s) = xa (u2 , s)}. The existence of such a flow is proved in [21, 22, 23]. Note that for a fixed t > 0 there exists a modification of xa in the space D([0; 1], C([0; t])). Denote by μa the distribution of xa in this space (μ0 is the distribution of the Arratia flow). Theorem 9.3.3 ([21]). The measure μa is absolutely continuous with respect to the measure μ0 and the density has the form 1 τu

1 τu

dμa 1 2 (x) = exp{∫ ∫ a(x(u, s))dx(u, s) − ∫ ∫ a(x(u, s)) ds}. dμ0 2 0 0

0 0

This result shows that under smooth perturbations of the motion of individual particles, the Arratia flow behaves like a Wiener process. This is because the flow is composed of “independent pieces” of Wiener processes, and the total time of free motion is finite. In view of the same reason, the Arratia flow satisfies the large deviations principle in an appropriate space. Let us denote by ℳ the space of functions acting from [0; 1]2 to [0; 1] and having the following properties: (1) For every u ∈ [0; 1], y(u, ⋅) ∈ C([0; 1]). (2) For all u1 ≤ u2 , t ∈ [0; 1], y(u1 , t) ≤ y(u2 , t).

9.3 Gaussian properties of the Arratia flow

� 189

(3) For any t ∈ [0; 1], y(⋅, t) is right continuous. (4) For all u1 , u2 ∈ [0; 1], y(u1 , t) = y(u2 , t),

t ≥ τu1 u2 ,

τu1 u2 = inf{s : y(u1 , s) = y(u2 , s)}. (5) For any u ∈ [0; 1], y(u, 0) = u. Each element of ℳ can be called a continual forest [31]. Let us introduce the metric in ℳ, ρ(y1 , y2 ) = max σ(y1 (⋅, t), y2 (⋅, t)), [0;1]

where σ is the Lévy–Prokhorov distance. For an Arratia flow {x(u, t); u ∈ [0; 1], t ∈ [0; 1]}, let us define new random fields x_ε via the time change
$$ x_\varepsilon(u, t) = x(u, \varepsilon t), \quad \varepsilon \in (0; 1). $$
The following theorem holds true.

Theorem 9.3.4 ([31]). The family x_ε, as ε → 0+, satisfies the LDP in the space ℳ with the rate function
$$ I(x) = \inf_{i(h) = x} I_0(h). $$
Here, h ranges over the set of real-valued functions defined on (ℚ ∩ [0; 1]) × [0; 1] and with the above properties (1)–(5), and
$$ I_0(h) = \frac{1}{2} \sum_{r \in \mathbb{Q} \cap [0;1]} \int_0^{\tau_r} h'(r, t)^2\, dt, \quad i(h)(u, t) = \inf_{r > u} h(r, t). $$

Like the previous theorem, this result shows that in the study of the asymptotics of large deviations of the Arratia flow, a major role is played by the fact that it is made up of Wiener trajectories. Emerging as Radon–Nikodym densities, the stochastic exponents are known [83] to form a total set in the space of all square-integrable functionals of the Wiener process. It turns out that a similar property holds true for the Arratia flow. The following statement holds.

Theorem 9.3.5 ([22]). The linear combinations of random variables of the form
$$ \exp\Big\{\int_0^1 \int_0^{\tau_u} f(u, s)\, dx(u, s) - \frac{1}{2} \int_0^1 \int_0^{\tau_u} f(u, s)^2\, ds\Big\}, \quad f \in C([0; 1]^2), $$
are dense in the space of square-integrable functionals of the Arratia flow {x(u, t); u, t ∈ [0; 1]}.

9.4 Stochastic semigroups and widths

In this part of the paper, we make an attempt to find a common approach to the study of the geometry of stochastic flows in both the smooth and the nonsmooth case. It is well known [3] that the shift of functions or differential forms by solutions to an SDE with smooth coefficients is described in terms of the Lie algebra generated by the vector fields which are the coefficients of the equation. If, however, a flow is composed of discontinuous maps, then such a flow may not preserve continuity. Instead, we propose to consider how the flow transforms finite-dimensional subspaces of functions, and to calculate the widths of functional compacts with respect to these subspaces. We consider in detail a model example of a stochastic semigroup consisting of finite-dimensional projections. We start with the definition of a strong random operator. Let H be a real separable Hilbert space.

Definition 9.4.1 ([85]). A strong random operator in H is a continuous in probability linear map from H to the set of all random elements in H.

Below is an appropriate example of a strong random operator.

Example 9.4.1. Let {x(u, t); u ∈ ℝ, t ≥ 0} be a Harris flow and H = L_2(ℝ). Define a strong random operator in H by the equality Af(u) = f(x(u, t)). Since
$$ E\int_{\mathbb{R}} f(x(u, t))^2\, du = \int_{\mathbb{R}} \int_{\mathbb{R}} f(v)^2 p_1(u - v)\, du\, dv = \int_{\mathbb{R}} f(v)^2 \int_{\mathbb{R}} p_1(u - v)\, du\, dv = \int_{\mathbb{R}} f(v)^2\, dv, $$
where p_1 is the density of the standard Gaussian distribution, A is continuous in the square mean.


It can be checked that in the case of the Arratia flow, the strong random operator constructed in the example has the following property: for this flow, there does not exist a set of bounded operators {Ã_ω, ω ∈ Ω} in H such that
$$ \forall f \in H: \quad Af = \tilde{A}_\omega f \ \text{a. s.} $$
Indeed, suppose that there exists such a set, i.e., that A is a bounded random operator in the sense of Skorokhod [85]. Let {f̃_n; n ≥ 1} be the Rademacher system of functions on [0; 1]. For n ≥ 1, denote f_n(u) = f̃_n(u), u ∈ [0; 1], and f_n(u) = 0, u ∉ [0; 1]. Then f_n converges weakly to 0 in H. On the other hand, the sequence {f_n; n ≥ 1} does not converge almost everywhere on [0; 1]. As we have already noted, in the Arratia flow the points of any finite interval turn into a finite number of points during any positive time interval. Therefore, with probability 1, there exists an interval Δ = {v : x(v, t) = x(0, t)} of positive Lebesgue measure. Then
$$ \int_\Delta (Af_n)(u)\, du = (f_n, A^*_\omega \mathbb{1}_\Delta) \to 0 \ \text{a. s.}, \quad n \to \infty. $$
On the other hand,
$$ \int_\Delta (Af_n)(u)\, du = |\Delta|\, f_n(x(0, t)). $$

u ∈ ℝd

192 � 9 Appendix Note that the consideration of stochastic semigroups associated with the flow, allows one to approach the study of the properties of flows with smooth and singular interaction in a unified way. The notion of a strong random operator was introduced by A. V. Skorokhod [85]. He also began to consider the semigroups of such operators and received sufficient conditions for the representation of these semigroups as the solutions of a linear SDE with operator-valued Wiener process [84]. This presentation is made possible, in part, because the noise generated by the semigroup is Gaussian. We present here two theorems about the structure of semigroups of strong random operators. One of them gives a description of the multiplicative operator functionals of Gaussian noise. The second one is devoted to semigroups of random finite-dimensional projections (here a Poisson noise arises). We start with a Gaussian case. Let {w(t); t ≥ 0} be a standard one-dimensional Wiener process. Suppose that the semigroup {Gs,t ; 0 ≤ s ≤ t < +∞} of strong random operators is a multiplicative homogeneous functional of w, i. e., the following conditions hold: (1) Gs,t is measurable with respect to ℱs,t = σ(w(r) − w(s); r ∈ [s; t]). (2) θr Gs,t = Gs+r,t+r , r ≥ 0, s ≤ t. Here, θr is the shift operator along the trajectories of w. Assume that Gs,t are squareintegrable, i. e., ∀u ∈ H : ∀s ≤ t : E‖Gs,t u‖2 < +∞. Define for all t, the mathematical expectation of G0,t as continuous linear operator in H, acting by the rule ∀u ∈ H : Tt u = EG0,t u. Note that for the continuous in the square-mean semigroup {Gs,t } the family {Tt } is a strongly continuous semigroup of operators in H, and thus, is uniquely determined by its infinitesimal operator. It turns out that in order to describe the semigroup {Gs,t } we also need a “stochastic” infinitesimal operator. Let us define it in the following way. For f ∈ H, consider Bf := lim

t→0+

1 EG fw(t). t 0,t

To make sure that B is densely defined, we consider the family of operators {Ft ; t ≥ 0} defined by the relation Ft f = EG0,t fw(t) = EGs,s+t f (w(t + s) − w(s)).

9.4 Stochastic semigroups and widths

� 193

It is easy to check that the following equalities hold: Ft G0,s = G0,s Ft ,

Ft+s = Fs Tt + Ts Ft .

Using these equalities, it is possible to check in a standard way that all elements of H of the form s

∫ Tr gdr,

g ∈ H, s > 0

0

belong to the domain of B. The following theorem is true. Theorem 9.4.1 ([24]). Suppose that for any t > 0, Tt (H) ⊂ D(B) and the kernels of the Itô–Wiener expansion for G0,t are continuous in time variables. Then for any f ∈ H the equality holds ∞

G0,t f = Tt f + ∑ ∫ Tt−τk BTτk −τk−1 B ⋅ ⋅ ⋅ BTτ1 fdw(τ1 ) ⋅ ⋅ ⋅ dw(τk ), k=1 Δ (0;t) k

where Δk (0; t) = {0 ≤ τ1 ≤ ⋅ ⋅ ⋅ ≤ τk ≤ t}. In the case when the semigroup {Gs,t } is generated by SDE, the statement of Theorem 9.4.1 is the well-known Krylov–Veretennikov expansion [66]. Example 9.4.2. Consider the following SDE in ℝ: dx(t) = a(x(t))dw(t), where w is a standard Wiener process, a ∈ C ∞ (ℝ) is bounded together with its derivative. To this, the equation corresponds a stochastic flow {φs,t ; 0 ≤ s ≤ t} of diffeomorphisms in ℝ. One can check that the operators defined by the relation (Gs,t f )(u) = f (φs,t (u)) on square-integrable functions, form a stochastic semigroup, which is a multiplicative functional on w. For f ∈ C02 (ℝ), Bf (u) = a(u)f ′ (u). Let inf a > 0. Then the operator Tt for t > 0 is an integral operator with an infinitely differentiable kernel, and thus the condition of Theorem 9.4.1 is satisfied. The claim of Theorem 9.4.1 now takes the form ∞

f (φ0,t (u)) = Tt f (u) + ∑ ∫ Tt−τk a k=1 Δ (0;t) k

d d T ⋅⋅⋅a T f (u)dw(τ1 ) ⋅ ⋅ ⋅ dw(τk ). dνk τk −τk−1 dν1 τ1

194 � 9 Appendix Here, ν1 , . . . , νk are variables on which the integration is performed by the action of {Tt }. The last formula coincides with the Krylov–Veretennikov representation [66]. It is not always the case that the noise associated with a stochastic semigroup is given by a Wiener process. In addition, semigroups corresponding to flows with coalescence can be composed of operators with values in finite-dimensional spaces. Revealing in terms of describing the structure and the asymptotic behavior is an example of a stochastic semigroup consisting of random finite-dimensional projections in a Hilbert space. Let H be a real Hilbert space. Under the projection in H, we understand the orthogonal projection on a subspace in H. A projector is called finite if the corresponding subspace is finite-dimensional. Definition 9.4.3. A random finite-dimensional projection G in H is a set of finitedimensional projections {Gω , ω ∈ Ω} such that for any u ∈ H, Gω u is a random element in H. It is evident that a random finite-dimensional projection is a strong random operator continuous in the square mean. The following theorem gives a complete description of the mean-square continuous stochastic semigroup consisting of random finitedimensional projections. Theorem 9.4.2 ([24]). Let {Gs,t ; 0 ≤ s ≤ t} be a mean-square continuous stochastic semigroup consisting of random finite-dimensional projections in H. Then there exist an orthonormal basis {en ; n ≥ 1} in H and a sequence of Poisson, with regard to the general filtration, random processes {νn ; n ≥ 1} such that: (1) ∑∞ n=1 P{νn (t) = 0} < +∞, t > 0. (2) Gs,t = ∑∞ n=1 1νn (t)=νn (s)=0 en ⊗ en . Remark 9.4.1. This theorem states that all projections of the semigroup have a common basis, the elements of which “are killed” in accordance with the Poisson regulation. If one postulates the existence of a common basis in advance, then the theorem is simple. Remark 9.4.2. The object discussed in Theorem 9.4.2 is significantly stochastic. It is easy to see that there is no deterministic strongly continuous semigroup consisting of finitedimensional projections in the infinite-dimensional space H. Since stochastic semigroups can be built on stochastic flows, it is natural to expect that the geometric properties of the maps that make up the flow affect the properties of the operators of the semigroup. For the characterization of such properties, it seems promising to use the notion of width. Here is the definition. Let K ⊂ H be compact and L ⊂ H a subspace in H.

9.4 Stochastic semigroups and widths

� 195

Definition 9.4.4 ([90]). The width of K with respect to L is the value max ρ(u, L), u∈K

where ρ(u, L) = infv∈L ‖u − v‖. Further, as an example, we consider the widths of some compacts with respect to subspaces of the form G0,t (H), t > 0, where {Gs,t } is a stochastic semigroup of finitedimensional projectors. Example 9.4.3 ([24]). Let the Poisson processes from the description of the semi-group {Gs,t } in Theorem 9.4.2 be independent and their intensities equal λn = n, n ≥ 1. Consider the following compacts in H: K1 = {u : (u, en )2 ≤

1 , n ≥ 1}, n2



K2 = {u : ∑ n2 (x, en )2 ≤ 1}. n=1

Define the widths αi (t) = max ‖u − G0,t u‖,

i = 1, 2,

u∈Ki

and the value d(t) = dim G0,t (H). Then α1 (t) P → 1, √t| ln t|

t → 0+,

2 lim α2 (t)√ l ln t ≥ 1, a. s., t

t→0+

td(t) ≤ 1, a. s., t→0+ 2| ln t| lim

lim 2t| ln t|d(t) ≥ 1, a. s.

t→0+

The example shows the dependence of the asymptotic behavior of the widths on the structure of the compacts and the semigroup. Thus, it can be expected that the proposed approach will give an opportunity to explore the geometry of both smooth and nonsmooth stochastic flows.

196 � 9 Appendix

9.5 Discrete time approximation of coalescing stochastic flows It is known that a solution to SDE with smooth coefficients may be obtained via a discrete time approximation. It appears that a discrete time approximation can also be built for coalescing stochastic flows, which may not be generated by SDEs. Here, we present a discrete time approximation for the Harris flow. Consider a sequence of independent stationary Gaussian processes {ξn (u); u ∈ ℝ, n ≥ 1} with zero mean and covariation function Γ. Suppose that Γ is continuous. Define a sequence of random mappings {xn ; n ≥ 0} by the rule x0 (u) = u,

xn+1 (u) = xn (u) + ξn+1 (xn (u)),

u ∈ ℝ.

Note that the continuity of Γ implies that the processes {ξn ; n ≥ 1} have measurable modifications. This allows substituting xn into ξn+1 . The independence of {ξn ; n ≥ 1} guarantees that ξn+1 (xn (u)) does not depend on the choice of these modifications. Let us also define the random functions k k+1 − t)xk (u) + n(t − )xk+1 (u), n n k k+1 t∈[ ; ], k = 0, . . . , n − 1. n n

yn (u, t) = n( u ∈ ℝ,

Theorem 9.5.1 ([25]). Let Γ be a continuous positive definite function on ℝ such that Γ(0) = 1 and Γ has two continuous bounded derivatives. Suppose that yn is built upon 1 a sequence {ξk ; k ≥ 1} with covariance √n Γ. Then for every u1 , . . . , ul ∈ ℝ the random processes {yn (uj , ⋅), j = 1, . . . , l} weakly converge in C([0; 1], ℝl ) to the l-point motion of the Harris flow with the local characteristic Γ.

9.6 The iterated logarithm law and the fractional step method Let {Xa (u, t) | u ∈ ℝ, t ≥ 0} be an Arratia flow with bounded Lipschitz continuous drift a [23]. One can define the cluster formed by all particles with positive starting points that have merged with the particle started from 0: L(t) = {u > 0 | Xa (0, t) = Xa (u, t)}. Theorem 9.6.1 ([26]). With probability 1, Leb(L(t)) ≥ 1, √2t ln ln t −1 Leb(L(t)) ≤ 1. lim supt→0+ 2√t ln ln t −1 lim supt→0+

9.6 The iterated logarithm law and the fractional step method

Let {X^a(u, t) | u ∈ ℝ, t ≥ 0} be an Arratia flow with a bounded Lipschitz continuous drift a [23]. One can define the cluster formed by all particles with positive starting points that have merged with the particle started from 0:
$$ L(t) = \{u > 0 \mid X^a(0, t) = X^a(u, t)\}. $$

Theorem 9.6.1 ([26]). With probability 1,
$$ \limsup_{t \to 0+} \frac{\operatorname{Leb}(L(t))}{\sqrt{2t \ln\ln t^{-1}}} \ge 1, \quad \limsup_{t \to 0+} \frac{\operatorname{Leb}(L(t))}{2\sqrt{t \ln\ln t^{-1}}} \le 1. $$

� 197

The Arratia flow is a part of the Brownian web {φt,⋅ (u) ∈ C([t; +∞)) | u, t ∈ ℝ} [40, 52]. The fractional step method is applied to the Brownian web perturbed by the flow of solutions to the deterministic equation dzt = a(zt )dt in [33]. For that, consider (n) − a sequence of partitions of [0; 1], {t0(n) , . . . , tN(n)(n) }, n ∈ ℕ, with λ(n) = maxk=0,N (n) −1 (tk+1 tk(n) )) → 0, n → ∞. Put

t

At (u) = u + ∫ a(As (u))ds,

u ∈ ℝ, t ≥ 0.

0 (n) Then define, for fixed n, k ∈ 0, N (n) − 1 and t ∈ [tk(n) ; tk+1 ), k

Xt(n) (u) = φt(n) ,t ( ∘ At(n) −t(n) (φt(n) ,t(n) (⋅)))(u), j=1

k

Δ(n) (u) k

=

(u) X (n) tk(n)



j

j−1

j−1 j

(u). X (n) tk(n) −

(n) Additionally, put X1(n) (u) = X1− (u).

Theorem 9.6.2. Let {Xta (u) | u ∈ ℝ, t ≥ 0} be an Arratia flow with drift a. Then for any N ∈ ℕ, (u1 , . . . , uN ) ∈ ℝN , (X (n) (u1 ), . . . , X (n) (uN )) ⇒ (X a (u1 , ⋅), . . . , X a (uN , ⋅)),

n → ∞,

in (D([0; 1]))N . The convergence cannot be strengthened. For any u, the sequence Nn −1

∑ (φt(n) ,t(n) (u) − u),

k=0

k

k+1

n ∈ ℕ,

does not contain convergence in probability subsequences. To obtain an estimate on the speed of the convergence, the following random measures are considered: μt = Leb ∘ (X a (, ⋅, t)) , −1

(n) μ(n) t = Leb ∘ (Xt ) ,

n ∈ ℕ.

−1

Consider ℳp (ℝ), the space of all probability measures on ℝ endowed with the Wasserstein metric W1 , and the space ℳ1 (ℳp (ℝ)), the corresponding Wasserstein metric in ℳ1 (ℳp (ℝ)) being denoted by W1,p . Theorem 9.6.3 ([1, Theorem 2.1]). Assume that the sequence {nδn }n∈ℕ is bounded. Then for every p ≥ 2, there exist a positive constant C and a number N ∈ ℕ such that for all n ≥ N, −1 W1,p (Law(μt ), Law(μ(n) t )) ≤ C(log log δn )

−1/p

.

Remark 9.6.1. The formulation of Theorem 2.1 in [34] is mistakenly missing the second logarithm.

198 � 9 Appendix

9.7 Approximations with SDEs Fix β ∈ (0; +∞), α ∈ (0; 2). In [94], a coalescing Harris [51] flow with infinitesimal covariance α

φ(x) = e−β|x| ,

x∈ℝ

and the corresponding dual flow are approximated with solutions to SDEs. The dual flow ̂ [62] is defined via X ̂ t1 , t2 , s) = X(x,

inf

X(y,r,t2 )≥x, y∈ℝ, r∈[t1 ;t1 +t2 −s]

X(y, r, t1 + t2 − s),

x ∈ ℝ, s ∈ [t1 ; t2 ].

(9.3)

Consider a sequence of twice continuously differentiable symmetric nonnegative definite functions {φε }ε∈(0;1) such that φε → φ, ε → 0+, uniformly on compact subsets of ℝ and φε (0) = 1. For any ε ∈ (0; 1), let Fε ≡ {Fε (x, t) | x ∈ ℝ, t ∈ ℝ+ } be a Gaussian process with Cov(Fε (t, x), Fε (s, y)) = min{t, s}φε (x − y). The process Fε is a continuous C(ℝ)-valued martingale in the sense of [67] and, therefore, the flow given via t

Xε (x, s, t) = x + ∫ Fε (Xε (x, s, r), dr) s

is a flow of homeomorphisms. For ε ∈ (0; 1), 0 ≤ s ≤ t ≤ T, define με (s, t) = Leb ∘ Xε (⋅, s, t)−1 ,

μ(s, t) = Leb ∘ X(⋅, s, t)−1 ,

̂ ε (s, t) = Leb ∘ (Xε−1 (⋅, 0, t, s)) , μ −1

̂ 0, t, s))−1 . ̂ (s, t) = Leb ∘ (X(⋅, μ

Let ℳ(ℝ) be a space of locally finite nonnegative Borel measures on the real line equipped with the vague topology. Given real numbers a, a1 , b: a < a1 < b and a function f ∈ C([a1 ; b]) put 𝒫a,b f (s) = 1s∈[a;a1 ] f (a1 ) + f (s)1s∈(a1 ;b] ,

s ∈ [a; b].

Theorem 9.7.1. Fix T > 0 and a set {(xn , tn )}n∈ℕ ∈ (ℝ × [0; T])∞ . Then (𝒫0,T Xε (x1 , t1 , ⋅), 𝒫0,T Xε−1 (x1 , t1 , T, T + t1 − ⋅), . . . , 𝒫0,T Xε (xN , tN , ⋅), 𝒫0,T Xε (xN , tN , T, T + tN − ⋅), . . .) −1

̂ 1 , t1 , T, T + t1 − ⋅), . . . , ⇒ (𝒫0,T X(x1 , t1 , ⋅), 𝒫0,T X(x ̂ N , tN , T, T + tN − ⋅), . . .), 𝒫0,T X(xN , tN , ⋅), 𝒫0,T X(x in (C([0; T]))∞ as ε → 0+.

9.8 Point densities

� 199

For any s1 ≤ ⋅ ⋅ ⋅ ≤ sN , t1 ≤ ⋅ ⋅ ⋅ ≤ tN , si ≤ ti , i = 1, N, N ∈ ℕ, ̂ ε (s1 , t1 ), . . . , μ ̂ ε (sN , tN )) (με (s1 , t1 ), . . . , με (sN , tN ), μ

̂ (s1 , t1 ), . . . , μ ̂ (sN , tN )), ⇒ (μ(s1 , t1 ), . . . , μ(sN , tN ), μ

in (ℳ(ℝ))2N as ε → 0+.

9.8 Point densities The point process associated with an Arratia flow {X a (u, t) | u ∈ [0; 1], t ∈ [0; T]} with bounded Lipshitz continuous drift a are discussed in [35] in terms of special (n, k)-point densities pa,n,k , k ≤ n, which were introduced in [34] and represent a further developt ment of the well-known notions used, for instance, in [75]. To describe sequences of collisions, the following reformulation of the scheme in [25, pp. 433–434] is used. Put Shn,k = {(j1 , . . . , jk ) | ji ∈ {1, . . . , n − i}, i = 1, k}, Shn = ⌀ ∨ ⋃ Shn,k , k=1,n−1

n ∈ ℕ.

k = 1, n,

Let ξ = (ξ1 , . . . , ξn ) to be a continuous function on [0; T] with coalescing coordinates and no triple collisions. Let n − ϰ be the number of distinct values in {ξi (T) | i = 1, n}. Let τ1 < τ2 < ⋅ ⋅ ⋅ < τϰ be moments of subsequent collisions of the coordinates of ξ. Put j1 = min{i | ∃j ≠ i ξj (τ1 ) = ξi (τ1 )} and define the process ξ n−1 by excluding the j1 -th coordinate from the vector ξ. Then put j2 = min{i | ∃j ≠ i ξjn−1 (τ2 ) = ξin−1 (τ2 )}, define ξ n−2 by excluding the j2 -th coordinate in ξ n−1 and repeat the procedure until a collection S(ξ) = (j1 , . . . , jϰ ) ∈ Shn,ϰ appears. We will call S(ξ) the coalescing scheme for ξ. Put Δn = {u ∈ ℝn | u1 < ⋅ ⋅ ⋅ < un },

Xat (u) = {X a (uk , t) | k = 1, n}, → 󳨀a → 󳨀a X (u, ⋅) ≡ X (u) = (X a (u1 , ⋅), . . . , X a (un , ⋅)),

u = (u1 , . . . , un ) ∈ ℝn , n ∈ ℕ.

Definition 9.8.1. Given an Arratia flow X a , a starting point u ∈ Δn and a coalescence scheme s ∈ 𝒮n,j for some j and k such that k ≤ n − j the corresponding (n, k)-point density pa,n,s,k (u; ⋅), j ≥ k, is a measurable function on ℝk such that for any bounded T nonnegative measurable f : ℝk → ℝ, → 󳨀a E 1(S( X (u) = s)



v1 ,...,vk ∈XaT (u),

v1 ,...,vk are distinct

f (v1 , . . . , vk ) = ∫ pa,n,s,k (u; y)f (y)dy. T ℝk

200 � 9 Appendix Definition 9.8.2. Given an Arratia flow X a , a starting point u ∈ Δn and k ∈ {1, . . . , n} the corresponding (n, k)-point density is a measurable function pa,n,k (u; ⋅) on ℝk such that T k for any bounded nonnegative measurable f : ℝ → ℝ, 󵄨 󵄨 E 1(󵄨󵄨󵄨Xat (u)󵄨󵄨󵄨 ≥ k)



v1 ,...,vk ∈Xat (u),

f (v1 , . . . , vk ) = ∫ pa,n,k (u; y)f (y)dy. T

(9.4)

ℝk

v1 ,...,vk are distinct

Then a. e. n−k

pa,n,k (u; ⋅) = ∑ ∑ pa,n,s,k (u; ⋅). T T l=0 s∈Shn,l

k The k-point density pa,k T (⋅) is defined as a measurable function on ℝ such that the analog a a of (9.4) holds with XT (u) replaced with the set {X (v, T) | v ∈ [0; 1]} and the condition |Xat (u)| ≥ k is dropped.

Theorem 9.8.1. Let u(n) = (u1(n) , . . . , un(n) ) ∈ Δn , n ∈ ℕ, be such that u1(n) = 0, un(n) = 1, n ∈ ℕ, (n+1) {u1(n) , . . . , un(n) } ⊂ {u1(n+1) , . . . , un+1 },

n ∈ ℕ,

and (n) − uj(n) ) 󳨀→ 0. max (uj+1

j=0,n−1

n→∞

Then for all k ∈ ℕ a. e. (u(n) ; ⋅) ↗ pa,k pa,n,k T , T

n → ∞.

Suppose ζk ∈ C([0; T]), k = 1, n. Put ζ̃1 = ζ1 , r1 = T and construct ζ̃k , k = 2, n as follows: rk = inf{T; t | ζ̃k−1 (t) = ζk (t)}, ζ̃k (t) = ζk (t)1(t < rk ) + ζ̃k−1 (t))1(t ≥ rk ),

k = 2, n.

Assume that S((ζ̃1 , . . . , ζ̃n )) = s. Define θij = inf{t | ζi (s) = ζj (s)},

θ00 = T.

j = 1, i − 1, i = 2, n,

Assume additionally that θij ≠ T for all pairs (i, j) ≠ (00). Then there exists a unique collection {λij (s) | i = 1, 2, j = 1, n} such that rk = θλ1k (s)λ2k (s) ,

k = 1, n.

9.8 Point densities

� 201

Consider independent Brownian bridges η = (η1 , . . . , ηn ) with ηk (0) = ηk (T) = 0, k = 1, n, and define ηu,y (t) = η(t) + (1 −

t t )u + y, T T

u ∈ Δn , y ∈ ℝn , t ∈ [0; T],

and put u,y

u,y

j = 1, i − 1, i = 2, n, u ∈ Δn , y ∈ ℝn ,

θij (u, y) = inf{t | ηi (t) = ηj (t)} ∧ T, θ00 (u, y) = T, eaT,n (u, y, s)

n

θλ1k (s)λ2k (s) (u,y)

= exp{ ∑

k=1

0

θλ1k (s)λ2k (s) (u,y)

n

+∑

k=1 n

∫ 0

u,y

a(ηk (t))dηk (t)



u,y

a(ηk (t))(

yk − uk 1 − ak (t, u, y, s))ds}, T 2

u ∈ Δ , y ∈ ℝn , s ∈ Shn . Fix some u ∈ ℝn and k ∈ {0, . . . , n − 1}. A partition of the set {1, . . . , n} is associated with a coalescence scheme s in a natural way. The blocks of the partition being π1 , . . . , πk , we define I(s) = {min πi | i = 1, n − k}. Given a set K = {k1 , . . . , km } ⊂ {1, . . . , n} and a point z ∈ ℝn , we denote by z−K the vector obtained by removing in the vector z all the coordinates whose numbers are in K; by zK , the vector obtained by removing all coordinates except those in K. We write zK1 ,±K2 for (zK1 )±K2 . Denote by gTm (u; ⋅) the m-dimensional Gaussian density with mean u and variance T Idm×m , where Idm×m is the unit square matrix of size m, m ∈ ℕ. Theorem 9.8.2. Assume u ∈ Δn and s ∈ Shn,n−k for some k ∈ {0, . . . , n − 1}. Then for each j ∈ {1, . . . , k} for all y ∈ Δk , a,n,s,j

pT

(u; y) =



L={l1 ,...,lj }⊂ {1,...,k}

j

k−j

gT (uI(s),L ; zI(s),L ) ∫ dzI(s),−L gT (uI(s),−L ; zI(s),−L ) ℝk−j

󵄨󵄨 󵄨 × ∫ dz−I(s) gTn−k (u−I(s) ; z−I(s) )(E 1(S(ηu,z ) = s)eaT,n (u, z, s))󵄨󵄨󵄨 󵄨󵄨 n−k ℝ

z∈ℝn , zI(s),L =y

.

Another representation of point densities is obtained in terms of stochastic exponentials for the Arratia flow. To formulate the corresponding result, the following notation is needed. Let U = {uk | k ∈ ℕ} be a dense subset of [0; 1], and define ρ1 = T,

k−1

󵄨 ρk = inf{s 󵄨󵄨󵄨 ∏(X 0 (uk , s) − X 0 (uj , s)) = 0} ∧ T, j=1

k ≥ 2,

202 � 9 Appendix and put for u(n) = (u1 , . . . , un ), n

ρk

In (u(n) ) = ∑ ∫ a(X 0 (uk , t))dX(uk , t), k=1 0 ρ n k

Jn (u(n) ) = ∑ ∫ a2 (X 0 (uk , t))dt,

n ∈ ℕ.

k=1 0

Then the following quantities are well-defined [23]: a

ℰ̃T = exp{L2 - lim In (u a

ℰ̃T,n (u

(n)

n→∞

(u)

1 ) − L2 - lim Jn (u(u) )}, 2 n→∞

) = exp{In (u(n) ) − Jn (u(n) )},

n ∈ ℕ.
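The quantities $I_n$, $J_n$ and the exponential $\tilde{\mathcal{E}}^a_{T,n}$ admit a straightforward Euler discretization. In the sketch below the coalescing scheme, the step sizes and the drift $a$ are illustrative assumptions; the stochastic and the quadratic integrals are accumulated only up to the pasting times $\rho_k$.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_exponential(u, a, T=1.0, dt=1e-3):
    """Euler sketch of I_n(u), J_n(u) and exp{I_n - J_n/2} along coalescing
    driving paths started from u; particle k contributes only until its
    time rho_k, i.e., until the step at which it glues to a predecessor."""
    x = np.array(sorted(u), dtype=float)
    alive = np.ones(x.size, dtype=bool)
    I = J = 0.0
    for _ in range(int(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal(x.size)
        for i in range(1, x.size):
            if x[i] == x[i - 1]:
                dw[i] = dw[i - 1]          # glued particles share the noise
        x_new = np.maximum.accumulate(x + dw)
        for k in range(x.size):
            if alive[k]:
                I += a(x[k]) * (x_new[k] - x[k])
                J += a(x[k]) ** 2 * dt
        for k in range(1, x.size):
            if x_new[k] == x_new[k - 1]:
                alive[k] = False           # rho_k has been reached
        x = x_new
    return np.exp(I - 0.5 * J)

print(stochastic_exponential((0.0, 0.5, 1.0), a=lambda y: -0.5 * y))
```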

Consider $u\in\Delta_n$. Suppose that the elements of the set $X_T(u)$ are listed in ascending order. Let $\varkappa$ be the cemetery state. Given a set $L = \{l_1,\dots,l_k\}$, $l_i\in\mathbb{N}$, $i=\overline{1,k}$, for some $k$, put the random vector $\mathcal{X}^L_T(u)$ equal to
$$\big((X_T(u))_{l_1},\dots,(X_T(u))_{l_k}\big) \qquad (9.5)$$
if $\max_{i=\overline{1,k}} l_i \le |X_T(u)|$, and to $\varkappa$ otherwise. We denote the density of $\mathcal{X}^L_T(u)$ in $\mathbb{R}^k$ by $q^L_T(u;\cdot)$.

Theorem 9.8.3. For any $u\in\Delta_n$ and any $k\in\{1,\dots,n\}$ a. e.,
$$p^{a,n,k}_T(u;y) = \sum_{L=\{l_1,\dots,l_k\},\ l_i\in\mathbb{N},\ i=\overline{1,k}} q^L_T(u;y)\,\mathrm{E}\big(\tilde{\mathcal{E}}^a_{T,n}(u) \,/\, \mathcal{X}^L_T(u) = y\big).$$

Replacing in (9.5) the set $X_T(u)$ with the set $\{X(v,T) \mid v\in[0;1]\}$, one defines, analogously to $\mathcal{X}^L_T(u)$, the random vector $\mathcal{X}^L_T$ with values in $\mathbb{R}^k\cup\{\varkappa\}$. The corresponding density being denoted by $q^L_T(\cdot)$, the following result holds.

Theorem 9.8.4. For any $k\in\mathbb{N}$ a. e.,
$$p^{a,k}_T(y) = \sum_{L=\{l_1,\dots,l_k\},\ l_i\in\mathbb{N},\ i=\overline{1,k}} q^L_T(y)\,\mathrm{E}\big(\tilde{\mathcal{E}}^a_T \,/\, \mathcal{X}^L_T = y\big).$$

9.9 Brownian particles with singular interaction

The discrete-time approximation $\{x^n_k(u),\ k = 0,\dots,n\}$ of the Arratia flow is given by a difference equation with random perturbation generated by a sequence of independent stationary Gaussian processes $\{\xi^n_k(u),\ u\in\mathbb{R},\ k = 0,\dots,n\}$ with covariance function $\Gamma_n$:
$$x^n_{k+1}(u) = x^n_k(u) + \frac{1}{\sqrt{n}}\,\xi^n_{k+1}\big(x^n_k(u)\big), \qquad x^n_0(u) = u,\ u\in\mathbb{R}.$$
Define the random process $\tilde{x}_n(u,\cdot)$ on $[0,1]$ as the polygonal line with vertices $(\frac{k}{n}, x^n_k(u))$, $k = 0,\dots,n$. It was proved by I. I. Nishchenko in [76] that if the covariance $\Gamma_n$ approximates the indicator function $\mathbb{1}_{\{0\}}$ in a suitable sense, then the $m$-point motion of $\tilde{x}_n$ converges weakly to the $m$-point motion of the Arratia flow. An explicit form of the Itô–Wiener expansion for $f(x_n(u_1),\dots,x_n(u_m))$ with respect to the noise produced by the processes $\{\xi^n_k(u),\ u\in\mathbb{R},\ k = 0,\dots,n\}_{n\ge1}$ was obtained by E. V. Glinyanaya in [42, 46]. This expansion can be regarded as a discrete-time analogue of the Krylov–Veretennikov representation formula. Let $\eta_i$ be the white noise corresponding to the process $\xi_i$. Define the operators $\{Q_k\}_{k\ge0}$ from the Itô–Wiener expansion
$$f\big(u_1 + \xi_1(u_1),\dots,u_m + \xi_1(u_m)\big) = \sum_{k=0}^{\infty} Q_k f(\vec{u};\underbrace{\eta_1,\dots,\eta_1}_{k}).$$

Theorem 9.9.1 ([46, E. Glinyanaya, 2015]). Let $\{x_n(u),\ u\in\mathbb{R}\}_{n\ge1}$ be the discrete-time flow
$$x_{n+1}(u) = x_n(u) + \xi_{n+1}\big(x_n(u)\big), \qquad x_0(u) = u.$$
Then for any $\varphi \in B(\mathbb{R}^m;\mathbb{R})$ the Itô–Wiener expansion of $\varphi(x_n(u_1),\dots,x_n(u_m))$ has the form
$$\varphi\big(x_n(u_1),\dots,x_n(u_m)\big) = \sum_{k=0}^{\infty}\ \sum_{\substack{l_1,\dots,l_n\ge0\\ l_1+\dots+l_n=k}} Q_{l_n} Q_{l_{n-1}} \cdots Q_{l_1}\,\varphi(\vec{u};\underbrace{\eta_n,\dots,\eta_n}_{l_n},\dots,\underbrace{\eta_1,\dots,\eta_1}_{l_1}).$$
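The recurrence for $x^n_k$ is easy to simulate once a concrete covariance is fixed. The sketch below assumes the Gaussian covariance $\Gamma(x) = \exp(-x^2/(2\ell^2))$, which for small $\ell$ serves as a smooth stand-in for $\mathbb{1}_{\{0\}}$: the field values at the current particle positions are drawn as one multivariate normal vector, so nearby particles receive almost identical increments and cluster. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def discrete_flow(u, n, ell=0.05):
    """x_{k+1}(u) = x_k(u) + n^{-1/2} xi_{k+1}(x_k(u)), where the xi_k are
    i.i.d. centered stationary Gaussian fields evaluated at the current
    positions; Gamma(x) = exp(-x^2/(2 ell^2)) is an illustrative choice."""
    x = np.array(u, dtype=float)
    for _ in range(n):
        cov = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * ell ** 2))
        xi = rng.multivariate_normal(np.zeros(x.size),
                                     cov + 1e-10 * np.eye(x.size))
        x = x + xi / np.sqrt(n)
    return x

u = np.linspace(0.0, 1.0, 20)
print(np.sort(discrete_flow(u, n=400)).round(3))  # nearly coalesced clusters
```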

In contrast to the flow of Brownian particles on the line, in the discrete-time approximations the order of the particles can change in time. A measure of disordering for the 2-point motion is defined, for $u_1 < u_2$, as
$$\Phi_n = \int_0^1 \mathbb{1}\big\{\tilde{x}_n(u_2,s) - \tilde{x}_n(u_1,s) < 0\big\}\,ds.$$

Theorem 9.9.2 ([43, E. Glinyanaya, 2012]). Suppose that the covariance functions $\Gamma_m$ and a sequence $(c_m)$ satisfy the conditions of [43]; in particular,
$$2 - 2\Gamma_m\Big(\sqrt{\tfrac{c_m}{m}}\Big) \ge \frac{1}{K^2}, \quad m \ge 1,$$
for some constant $K$. Then
$$\frac{\mathbb{P}\{\Phi_m > 0\}}{F\big(\sqrt{c_m/m}\big)} \le m,$$
and for any $\varepsilon > 0$ there exists a constant $A > 0$ such that
$$\lim_{m\to\infty} \frac{\mathbb{P}\{\Phi_m > \varepsilon\}}{F\big(K\sqrt{c_m/m}\big)} \ge A,$$
where $F(x) = \int_x^{+\infty} \frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}\,dy$.
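The disordering functional $\Phi_m$ can be estimated directly from the same discrete dynamics. In the sketch below the pair of field values at the two current positions is drawn from a bivariate normal law with correlation $\Gamma(x_2 - x_1)$; the covariance and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def disordering(u1, u2, m, ell=0.1, n_rep=1000):
    """Monte Carlo estimate of E Phi_m for the 2-point motion: the fraction
    of steps at which the particles started from u1 < u2 are in reversed
    order; Gamma(x) = exp(-x^2/(2 ell^2)) is an illustrative covariance."""
    total = 0.0
    for _ in range(n_rep):
        x1, x2, reversed_steps = u1, u2, 0
        for _ in range(m):
            g = np.exp(-(x2 - x1) ** 2 / (2.0 * ell ** 2))
            z1 = rng.standard_normal()
            z2 = g * z1 + np.sqrt(max(1.0 - g * g, 0.0)) * rng.standard_normal()
            x1 += z1 / np.sqrt(m)
            x2 += z2 / np.sqrt(m)
            reversed_steps += x2 - x1 < 0.0
        total += reversed_steps / m
    return total / n_rep

print(disordering(0.0, 0.05, m=200))
```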

E. V. Glinyanaya obtained an explicit form for the semigroup of the $m$-point motion of the Arratia flow in terms of binary forests that correspond to the order in which the trajectories coalesce [44]. For the Arratia flow, define its $m$-point motion semigroup $(Q_{m,t})_{t\ge0}$ by
$$Q_{m,t}f(\vec{u}) = \mathbb{E}\,f\big(x(\vec{u},t)\big).$$
An iterative scheme of boundary value problems for the functions $Q_{m,t}f$ was obtained.

Theorem 9.9.3.
$$\frac{\partial}{\partial t} Q_{m,t}f(\vec{u}) = \frac{1}{2}\Delta Q_{m,t}f(\vec{u}), \quad \vec{u}\in\Delta_m,\ t\ge0,$$
$$Q_{m,0}f(\vec{u}) = f(\vec{u}),$$
$$Q_{m,t}f(\vec{u}) = \big(Q_{m-1,t}\,f\circ\pi_i^{-1}\big)(\pi_i\vec{u}), \quad \vec{u}\in K_i^m, \qquad Q_{m,t}f(\cdot)\in D_m,$$
where $\Delta_m = \{\vec{u}\in\mathbb{R}^m : u_1 \le u_2 \le \dots \le u_m\}$, $K_i^m$ denotes the face $\{\vec{u}\in\Delta_m : u_i = u_{i+1}\}$ of $\Delta_m$,
$$D_m = \Big\{f\in C_0^2(\Delta_m) : \frac{\partial^2 f}{\partial x_i \partial x_j}\in C_0(\Delta_m),\ \frac{\partial^2 f}{\partial x_i \partial x_j}\,\mathbb{1}_{\{x_i = x_j\}}(\vec{x}) = 0,\ i\ne j\Big\},$$
and
$$\pi_i(u_1, u_2, \dots, u_m) = (u_1,\dots,u_i,u_{i+2},\dots,u_m), \qquad \pi_i^{-1}(u_1,\dots,u_{m-1}) = (u_1,\dots,u_i,u_i,u_{i+1},\dots,u_{m-1}).$$

A solution to the boundary value problem from the previous theorem is represented as a sum in which each summand is indexed by a binary forest corresponding to the order in which the trajectories coalesce.

Theorem 9.9.4 ([44, E. Glinyanaya, 2014]). Let $G_m$ be the Karlin–McGregor determinant. Then for any $f\in D_m$,
$$Q_{m,t}f(\vec{u}) = \int_{\Delta_m} f(\vec{y})\,G_m(\vec{u},\vec{y},t,0)\,d\vec{y} + \sum_{T\in\mathbb{T}^m_{m-1}} (-1)^{\varepsilon(T)} \int_0^t \int_{\Delta_{m-1}}\int_{\Delta_{m-1}} f_T(\vec{y})\,G_{m-1}(\vec{u}^{(m-1)},\vec{y},t_{m-1},0)\,\big|T(\vec{u},\vec{u}^{(m-1)},t,t_{m-1})\big|\,d\vec{u}^{(m-1)}\,d\vec{y}\,dt_{m-1}$$
$$+ \sum_{T\in\mathbb{T}^m_{m-2}} \int_0^t \int_0^{t_{m-1}} \int_{\Delta_{m-1}}\int_{\Delta_{m-2}}\int_{\Delta_{m-2}} (-1)^{\varepsilon(T)} f_T(\vec{y})\,G_{m-2}(\vec{u}^{(m-2)},\vec{y},t_{m-2},0)\,\big|T(\vec{u},\vec{u}^{(m-1)},\vec{u}^{(m-2)},t,t_{m-1},t_{m-2})\big|\,d\vec{y}\,d\vec{u}^{(m-2)}\,d\vec{u}^{(m-1)}\,dt_{m-2}\,dt_{m-1}$$
$$+ \dots + \sum_{T\in\mathbb{T}^m_1} (-1)^{\varepsilon(T)} \int_0^t \int_0^{t_{m-1}} \dots \int_0^{t_2} \int_{\Delta_{m-1}}\int_{\Delta_{m-2}} \dots \int_{\mathbb{R}}\int_{\mathbb{R}} f_T(y)\,G_1(u^{(1)},y,t_1,0)\,\big|T(\vec{u},\vec{u}^{(m-1)},\dots,\vec{u}^{(2)},\vec{u}^{(1)},t,t_{m-1},\dots,t_1)\big|\,dy\,du^{(1)}\dots d\vec{u}^{(m-1)}\,dt_1\dots dt_{m-1}.$$

The main tool in the investigation of limit theorems for functionals of stochastic flows is their mixing property with respect to the spatial variable. It was proved that the discrete-time flows under consideration, as well as flows with continuous time, are ergodic and mixing with respect to the spatial variable under certain conditions on the covariance function [45, 47, 48]. More precisely, an upper bound for the strong mixing coefficient was obtained.

Theorem 9.9.5 ([47, E. Glinyanaya, 2017]). Let $\alpha$ denote the strong mixing coefficient of the Arratia flow at time 1. Then for $h > 0$,
$$\alpha(h) \le 2\sqrt{\frac{2}{\pi}}\int_h^{\infty} e^{-x^2/2}\,dx.$$
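The leading term of the expansion in Theorem 9.9.4 is driven by the Karlin–McGregor determinant. Assuming $G_m(\vec{u},\vec{y},t,0)$ is the classical determinant $\det[g_t(u_i,y_j)]_{i,j}$ built from the Gaussian transition density (the standard convention, although the text above does not spell it out), it can be evaluated in a few lines:

```python
import numpy as np

def karlin_mcgregor(u, y, t):
    """det[g_t(u_i, y_j)]_{i,j} with the Gaussian kernel: the density of m
    independent Brownian paths going from u to y without intersecting."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    g = np.exp(-(y[None, :] - u[:, None]) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return float(np.linalg.det(g))

print(karlin_mcgregor([0.0, 1.0], [0.2, 1.1], t=0.5))
```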

These mixing properties allow one to obtain the central limit theorem for linear functionals of flows with coalescence. Namely, limit theorems for the number of clusters in the Arratia flow were obtained in [49]; here $\nu_t([0;n])$ is the number of clusters formed at time $t$ by the particles of the flow starting from $[0;n]$, that is, the cardinality of the set $\{X(u,t) \mid u\in[0;n]\}$. The following central limit theorem for $\nu_t([0;n])$ as $n\to\infty$ holds.

Theorem 9.9.6 ([49, E. Glinyanaya, V. Fomichov, 2018]). For any $t > 0$,
$$\frac{\nu_t([0;n]) - \mathbb{E}\nu_t([0;n])}{\sqrt{n}} \Longrightarrow \mathcal{N}(0;\sigma_t^2), \quad n\to\infty,$$
where $\sigma_t^2 := \frac{3 - 2\sqrt{2}}{\sqrt{\pi t}}$.

Furthermore, an estimate for the rate of this convergence was obtained in the form of the following Berry–Esseen-type inequality.

Theorem 9.9.7 ([49, E. Glinyanaya, V. Fomichov, 2018]). For any $n \ge 1$,
$$\sup_{z\in\mathbb{R}}\Bigg|\,\mathbb{P}\Big\{\frac{\nu_t([0;n]) - \mathbb{E}\nu_t([0;n])}{\sqrt{n}} \le z\Big\} - \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi\sigma_t^2}}\,e^{-r^2/(2\sigma_t^2)}\,dr\,\Bigg| \le C n^{-1/2}(\log n)^2.$$
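Theorem 9.9.6 can be checked against a crude simulation in which the interval $[0;n]$ is replaced by a fine grid of starting points; since coalesced particles never separate, it suffices to evolve one representative per cluster. All parameters below are illustrative, and the grid approximation only approaches $\nu_t([0;n])$ as the mesh is refined.

```python
import numpy as np

rng = np.random.default_rng(6)

def nu(n_len, t, mesh=0.05, dt=2e-3):
    """Cluster count of coalescing Brownian particles started from a grid
    on [0; n_len]; only one representative per cluster is evolved."""
    x = np.arange(0.0, n_len + mesh, mesh)
    for _ in range(int(t / dt)):
        x = x + np.sqrt(dt) * rng.standard_normal(x.size)
        x = np.unique(np.maximum.accumulate(x))  # merge crossed particles
    return x.size

t, n_len = 1.0, 20
sample = np.array([nu(n_len, t) for _ in range(200)], dtype=float)
# the normalized variance should be close to sigma_t^2 from Theorem 9.9.6
print(sample.var() / n_len, (3 - 2 * np.sqrt(2)) / np.sqrt(np.pi * t))
```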

9.10 Random dynamical systems generated by coalescing stochastic flows on the real line

Consider a sequence of transition probabilities $\{P^{(n)} : n\ge1\}$, where $\{P_t^{(n)} : t\ge0\}$ is a transition probability on $\mathbb{R}^n$. Assume that these transition probabilities satisfy the following conditions:

TP1 For each $n\ge1$, the expression
$$T_t^{(n)}f(x) = \int_{\mathbb{R}^n} f(y)\,P_t^{(n)}(x,dy), \quad x\in\mathbb{R}^n,$$
defines a Feller semigroup on $C_0(\mathbb{R}^n)$.

TP2 Given $\{i_1,\dots,i_k\}\subset\{1,\dots,n\}$, $B_k\in\mathcal{B}(\mathbb{R}^k)$ and $C_n = \{y\in\mathbb{R}^n : (y_{i_1},\dots,y_{i_k})\in B_k\}$, one has
$$P_t^{(n)}(x, C_n) = P_t^{(k)}\big((x_{i_1},\dots,x_{i_k}), B_k\big), \quad t\ge0,\ x\in\mathbb{R}^n.$$

TP3 For all $x\in\mathbb{R}$,
$$P_t^{(2)}\big((x,x),\Delta\big) = 1, \quad t\ge0,$$
where $\Delta = \{(y,y) : y\in\mathbb{R}\}$ is the diagonal in the space $\mathbb{R}^2$.

TP4 For all $x\in\mathbb{R}$ and $\varepsilon>0$, one has
$$t^{-1}\,P_t^{(1)}\big(x, (x-\varepsilon, x+\varepsilon)^c\big) \to 0, \quad t\to0.$$

Under this condition, the transition probability $P^{(1)}$ generates a continuous Feller process on $\mathbb{R}$. Conditions TP1 and TP2 imply that for each $n\ge1$ there exists a family $\{\mathbb{P}^{(n)}_x,\ x\in\mathbb{R}^n\}$ of probability measures on $C([0,\infty),\mathbb{R}^n)$ such that with respect to $\mathbb{P}^{(n)}_x$ the canonical process $X_t^{(n)}(f) = f(t)$, $f\in C([0,\infty),\mathbb{R}^n)$, is a continuous Markov process with transition probability $\{P_t^{(n)} : t\ge0\}$ and starting point $x$.

TP5 For each $c < c'$ and $t > 0$, there exists a continuous increasing function $m:\mathbb{R}\to\mathbb{R}$ such that for all $x, y$,
$$\mathbb{P}^{(2)}_{(x,y)}\big(\forall s\in[0,t]\ (X_1^{(2)}(s), X_2^{(2)}(s))\in[c,c']^2 \text{ and } X_1^{(2)}(s)\ne X_2^{(2)}(s)\big) \le \big|m(x) - m(y)\big|.$$

TP6 For all $t > 0$ and $x\in\mathbb{R}$, the measure $P_t^{(1)}(x,\cdot)$ has no atoms.

Theorem 9.10.1 ([80]). Consider a sequence of transition probabilities $\{P^{(n)} : n\ge1\}$ that satisfies conditions TP1–TP6 above. Then there exist a metric dynamical system $(\Omega,\mathcal{F},\mathbb{P},\{\theta_h,\ h\in\mathbb{R}\})$ and a perfect cocycle $\varphi:\mathbb{R}_+\times\Omega\times\mathbb{R}\to\mathbb{R}$ over $\theta$ such that $\psi(s,t,\omega,x) = \varphi(t-s,\theta_s\omega,x)$ is a stochastic flow of mappings generated by the transition probabilities $\{P^{(n)} : n\ge1\}$.
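Conditions TP2 and TP3 are easy to visualize for the coalescing Brownian family. In the following sketch (the discretization and all names are illustrative assumptions), the left particle is never modified, so its one-point marginal stays Brownian as the consistency condition TP2 requires, while a pair started on the diagonal never separates, as in TP3.

```python
import numpy as np

rng = np.random.default_rng(7)

def two_point(x1, x2, t, dt=2e-3):
    """Coalescing Brownian two-point motion started from x1 <= x2."""
    for _ in range(int(t / dt)):
        d1 = np.sqrt(dt) * rng.standard_normal()
        d2 = d1 if x1 == x2 else np.sqrt(dt) * rng.standard_normal()
        x1, x2 = x1 + d1, max(x1 + d1, x2 + d2)   # merge on crossing
    return x1, x2

print(two_point(0.3, 0.3, t=1.0))    # TP3: the pair stays on the diagonal
s = np.array([two_point(0.0, 0.1, t=1.0)[0] for _ in range(2000)])
print(s.var())                       # TP2: the first marginal has Var ~ t
```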


9.11 Stationary points in coalescing stochastic flows

The existence of a random dynamical system that generates a stochastic flow makes it possible to study invariant distributions and stationary points. A random variable $\eta$ is a stationary point for the random dynamical system $\varphi$ if there exists a forward-invariant set of full measure $\Omega_0\in\mathcal{F}$ (i.e., $\theta_t(\Omega_0)\subset\Omega_0$ for all $t\ge0$) such that for all $\omega\in\Omega_0$ and $t\ge0$,
$$\varphi(t,\omega,\eta(\omega)) = \eta(\theta_t\omega).$$
In [32], the existence of stationary points was studied for Arratia flows with drift. Consider the SDE
$$dX(t) = a(X(t))\,dt + dw(t),$$
where $w$ is a Wiener process and $a$ is a Lipschitz function. For every $x\in\mathbb{R}$, this equation has a unique strong solution $\{X_x(t) : t\ge0\}$ and defines a Feller semigroup of transition probabilities on $\mathbb{R}$,
$$P_t^{(1)}(x, A) = \mathbb{P}\big(X_x(t)\in A\big), \quad t\ge0,\ x\in\mathbb{R},\ A\in\mathcal{B}(\mathbb{R}).$$
Further,
$$P_t^{(n),\mathrm{ind.}}\big((x_1,\dots,x_n), A_1\times\dots\times A_n\big) = \prod_{i=1}^n P_t^{(1)}(x_i, A_i),$$
where $t\ge0$, $(x_1,\dots,x_n)\in\mathbb{R}^n$, $A_1,\dots,A_n\in\mathcal{B}(\mathbb{R})$, defines a Feller transition probability on $\mathbb{R}^n$ that corresponds to the $n$-dimensional SDE
$$dX_i(t) = a(X_i(t))\,dt + dw_i(t), \quad 1\le i\le n,$$
where $w_1,\dots,w_n$ are independent Wiener processes. The result of [70, Theorem 4.1] implies that there exists a unique consistent sequence of Feller transition semigroups $\{P_t^{(n),c} : n\ge1\}$ such that:
1. For every $n\ge1$, $\{P_t^{(n),c} : t\ge0\}$ is a transition semigroup on $\mathbb{R}^n$.
2. For all $x\in\mathbb{R}$ and $t\ge0$,
$$P_t^{(2),c}\big((x,x),\Delta\big) = 1,$$
where $\Delta = \{(y,y) : y\in\mathbb{R}\}$ is the diagonal.
3. Given $x\in\mathbb{R}^n$, let $X = (X_1,\dots,X_n)$ be an $\mathbb{R}^n$-valued Feller process with starting point $x$ and transition probabilities $\{P_t^{(n),c} : t\ge0\}$, and let $\tilde{X} = (\tilde{X}_1,\dots,\tilde{X}_n)$ be an $\mathbb{R}^n$-valued Feller process with starting point $x$ and transition probabilities $\{P_t^{(n),\mathrm{ind.}} : t\ge0\}$. Let
$$\tau = \inf\{t\ge0 \mid \exists\, i, j : 1\le i<j\le n,\ X_i(t) = X_j(t)\}$$
be the first meeting time for the processes $X_1,\dots,X_n$, and let
$$\tilde{\tau} = \inf\{t\ge0 \mid \exists\, i, j : 1\le i<j\le n,\ \tilde{X}_i(t) = \tilde{X}_j(t)\}$$
be the first meeting time for the processes $\tilde{X}_1,\dots,\tilde{X}_n$. Then the distributions of the stopped processes $\{(X_1(t\wedge\tau),\dots,X_n(t\wedge\tau)) : t\ge0\}$ and $\{(\tilde{X}_1(t\wedge\tilde{\tau}),\dots,\tilde{X}_n(t\wedge\tilde{\tau})) : t\ge0\}$ coincide.

By $\psi = \{\psi_{s,t} : -\infty < s \le t < \infty\}$ we denote a stochastic flow on $\mathbb{R}$ such that for all $s\in\mathbb{R}$, $n\ge1$ and $x = (x_1,\dots,x_n)\in\mathbb{R}^n$ the finite-point motion
$$t \to \big(\psi_{s,s+t}(x_1),\dots,\psi_{s,s+t}(x_n)\big), \quad t\ge0,$$
is a Feller process with starting point $x$ and transition probabilities $\{P_t^{(n),c} : t\ge0\}$. We assume that $\psi$ is generated by a random dynamical system $\varphi$.

Theorem 9.11.1 ([32]). Let $\varphi$ be a random dynamical system that corresponds to the Arratia flow with drift $a$. Assume that the drift $a$ is Lipschitz and that for some $\lambda > 0$ and all $x, y\in\mathbb{R}$ one has
$$(a(x) - a(y))(x - y) \le -\lambda(x - y)^2.$$
Then there exists a unique stationary point $\eta$ for the random dynamical system $\varphi$.

Theorem 9.11.2 ([32]). Let $\varphi$ be a random dynamical system that corresponds to the Arratia flow without drift (i.e., $a = 0$). Then there is no stationary point $\eta$ for the random dynamical system $\varphi$.
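The contraction mechanism behind Theorem 9.11.1 can already be seen in the $n$-point motion. The sketch below evolves several starting points under the drift $a(x) = -\lambda x$ with coalescing Brownian noise; the parameters are illustrative, and simulating the $n$-point motion of course exhibits only the collapse of trajectories, not the two-sided construction of $\eta$ itself.

```python
import numpy as np

rng = np.random.default_rng(8)

lam, dt, T = 1.0, 1e-3, 8.0
x = np.linspace(-3.0, 3.0, 7)              # several starting points
for _ in range(int(T / dt)):
    dw = np.sqrt(dt) * rng.standard_normal(x.size)
    for i in range(1, x.size):
        if x[i] == x[i - 1]:
            dw[i] = dw[i - 1]              # glued particles share the noise
    x = np.maximum.accumulate(x - lam * x * dt + dw)
print(np.unique(x.round(8)))               # typically a single value remains
```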

9.12 Duality for coalescing stochastic flows on the real line

A backward flow $\tilde{\psi} = \{\tilde{\psi}_{t,s} : -\infty < s \le t < \infty\}$ is dual to the flow $\psi = \{\psi_{s,t} : -\infty < s \le t < \infty\}$ if for all $s \le t$ and $x, y\in\mathbb{R}$,
$$\big(\psi_{s,t}(x) - y\big)\big(x - \tilde{\psi}_{t,s}(y)\big) \ge 0.$$
Consider a sequence of transition probabilities $\{P^{(n)} : n\ge1\}$ that satisfies conditions TP1–TP6 above. Given reals $a < b$ and $t > 0$, denote by $f_{a,b,t}(\varepsilon)$ the supremum, taken over pairs $a \le x_1 < x_2 \le b$ with $x_2 - x_1 \le \varepsilon$, of the probability that the two-point motion started from $(x_1, x_2)$ has not coalesced by time $t$, and let $w_{a,b}(\varepsilon,\delta)$ be the auxiliary modulus introduced in [79] (see [79] for the precise definitions of both quantities).

Theorem 9.12.1 ([79]). Assume additionally that for all $a < b$ and $t > 0$,
$$\liminf_{\varepsilon,\delta\to0} \frac{f_{a,b,t}(8\varepsilon)}{w_{a,b}(\varepsilon,\delta)} = 0.$$
Then there exist a metric dynamical system $(\Omega,\mathcal{F},\mathbb{P},\{\theta_h,\ h\in\mathbb{R}\})$, a perfect cocycle $\varphi$ over $\theta$ and a backward perfect cocycle $\tilde{\varphi}$ over $\theta$ such that:
1. The flow $\psi_{s,t}(\omega,x) = \varphi(t-s,\theta_s\omega,x)$ is a stochastic flow on $\mathbb{R}$ with finite-point motions determined by $\{P^{(n)} : n\ge1\}$.
2. The backward flow $\tilde{\psi}_{t,s}(\omega,x) = \tilde{\varphi}(t-s,\theta_s\omega,x)$ is a backward stochastic flow on $\mathbb{R}$.
3. The backward stochastic flow $\tilde{\psi}$ is dual to the stochastic flow $\psi$.
Moreover, the finite-point motions of $\tilde{\psi}$ are determined by a sequence $\{\tilde{P}^{(n)} : n\ge1\}$, which is the unique compatible sequence of coalescing Feller transition probabilities on $\mathbb{R}$ satisfying the duality relation
$$\tilde{P}^{(n)}_t\big(y, (x_1,x_2)\times(x_2,x_3)\times\dots\times(x_n,\infty)\big) = P^{(n)}_t\big(x, (-\infty,y_1)\times(y_1,y_2)\times\dots\times(y_{n-1},y_n)\big)$$
for all $n\ge1$, $t\ge0$ and $x, y\in\mathbb{R}^n$ such that $x_1 < y_1 < x_2 < y_2 < \dots < x_n < y_n$.
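For the Arratia flow the dual family is again a family of coalescing Brownian motions, so, assuming this self-duality, the duality relation can be tested by simulating only the forward two-point motion. The sketch below compares both sides of the relation for $n = 2$; agreement is expected within Monte Carlo error, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def pair(x1, x2, t, dt=2e-3):
    """Coalescing Brownian two-point motion started from x1 <= x2."""
    for _ in range(int(t / dt)):
        d1 = np.sqrt(dt) * rng.standard_normal()
        d2 = d1 if x1 == x2 else np.sqrt(dt) * rng.standard_normal()
        x1, x2 = x1 + d1, max(x1 + d1, x2 + d2)
    return x1, x2

x1, y1, x2, y2, t, n = -1.0, -0.3, 0.4, 1.2, 0.5, 3000
# P^{(2)}(x, (-inf, y1) x (y1, y2)):
rhs = np.mean([(lambda p: p[0] < y1 < p[1] < y2)(pair(x1, x2, t))
               for _ in range(n)])
# dual side, simulated with the same forward dynamics by self-duality:
lhs = np.mean([(lambda p: x1 < p[0] < x2 < p[1])(pair(y1, y2, t))
               for _ in range(n)])
print(lhs, rhs)   # the two sides should agree within Monte Carlo error
```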

Bibliography

[1] Adams, R. A. Sobolev Spaces. Academic Press, New York-San Francisco-London, 1975.
[2] Arnold, L., Scheutzow, M. Perfect cocycles through stochastic differential equations. Probab. Theory Relat. Fields 101: 65–88, 1995.
[3] Baudoin, F. An Introduction to the Geometry of Stochastic Flows. Imperial College Press, 2019.
[4] Billingsley, P. Convergence of Probability Measures, 2nd edition. A Wiley-Interscience Publication. John Wiley & Sons Inc., 1999, 296 pp.
[5] Birman, M. S., Solomiak, M. Z. Spectral Theory of Self-Adjoint Operators in Hilbert Space. Springer, Dordrecht, 1987, xvi+302 pp.
[6] Cerrai, S. Second Order PDE's in Finite and Infinite Dimension: A Probabilistic Approach. Lecture Notes in Math., vol. 1762. Springer, 2001.
[7] Da Prato, G., Zabczyk, J. Second Order Partial Differential Equations in Hilbert Spaces. London Mathematical Society Lecture Note Series, vol. 293. Cambridge University Press, Cambridge, 2002, xvi+379 pp.
[8] Darling, R. W. R. Constructing nonhomeomorphic stochastic flows. Mem. Am. Math. Soc. 70: 376, 1987, 97 pp.
[9] Dawson, D. A. Measure-valued Markov processes. In: Ecole d'Eté de Probabilités de Saint-Flour XXI – 1991, pp. 1–260. Lecture Notes in Math., vol. 1541. Springer, Berlin, 1993.
[10] Del Moral, P. Measure-valued processes and interacting particle systems. Application to nonlinear filtering problems. Ann. Appl. Probab. 8(2): 438–495, 1998.
[11] Doku, I. A limit theorem of superprocesses with non-vanishing deterministic immigration. Sci. Math. Jpn. 64(3): 563–579, 2006.
[12] Dorogovtsev, A. A. Conditional measures for diffusion processes and anticipating stochastic equations. Theory Stoch. Process. 4(20): 17–24, 1998.
[13] Dorogovtsev, A. A. Properties of the random measures. Theory Stoch. Process. 6(1–2): 26–33, 2000.
[14] Dorogovtsev, A. A. The local time for one class of measure-valued Markov processes. In: Ukrainian Mathematical Congress 2001. Section "Probability theory and mathematical statistics". Proceedings, pp. 38–43, 2001.
[15] Dorogovtsev, A. A. Measure-valued Markov processes and stochastic flows. Ukr. Math. J. 54(2): 218–232, 2002; translation from Ukr. Mat. Zh. 54(2): 178–189, 2002.
[16] Dorogovtsev, A. A. Stochastic flows with interactions and measure-valued processes. Int. J. Math. Math. Sci. 63: 3963–3977, 2003.
[17] Dorogovtsev, A. A. On a condition of weak compactness of a family of measure-valued processes. Ukr. Math. J. 56(7): 1054–1062, 2004.
[18] Dorogovtsev, A. A. On random measures on spaces of trajectories, and strong and weak solutions of stochastic differential equations. Ukr. Math. J. 56(5): 753–763, 2004.
[19] Dorogovtsev, A. A. One Brownian stochastic flow. Theory Stoch. Process. 10(3–4): 21–26, 2004.
[20] Dorogovtsev, A. A. Some remarks on Wiener flow with coalescence. Ukr. Math. J. 57(19): 1550–1558, 2005.
[21] Dorogovtsev, A. A. A stochastic integral with respect to Arratia's flow. Dokl. Akad. Nauk 410: 156–157, 2006.
[22] Dorogovtsev, A. A. Fourier–Wiener transform of functionals of Arratia flow. Ukr. Math. Bull. 4(3): 329–350, 2007.
[23] Dorogovtsev, A. A. Measure-Valued Processes and Stochastic Flows, vol. 66. Institute of Mathematics NAS of Ukraine, 2007.
[24] Dorogovtsev, A. A. Semigroups of finite-dimensional random projections. Lith. Math. J. 2: 329–350, 2011.
[25] Dorogovtsev, A. A. Krylov–Veretennikov expansion for coalescing stochastic flows. Commun. Stoch. Anal. 6(3): 421–435, 2012.
[26] Dorogovtsev, A. A., Gnegin, A. V., Vovchanskii, M. B. Iterated logarithm law for sizes of clusters in Arratia flow. Theory Stoch. Process. 18(2): 1–7, 2012.
[27] Dorogovtsev, A. A., Goncharuk, N. Yu. Local times for measure-valued Ito's stochastic processes. Preprint # 95-142. Case Western Reserve University, Cleveland, October 12, 1995.
[28] Dorogovtsev, A. A., Karlikova, M. P. Long-time behaviour of measure-valued processes correspondent to stochastic flows with interaction. Theory Stoch. Process. 9(25)(1–2): 52–59, 2003.
[29] Dorogovtsev, A. A., Kotelenez, P. Smooth stationary solutions of quasilinear stochastic partial differential equations: 1. Finite mass. Preprint # 97-145, Department of Mathematics, Case Western Reserve University, Cleveland, Ohio, 19 pp.
[30] Dorogovtsev, A. A., Nishchenko, I. I. An analysis of stochastic flows. Commun. Stoch. Anal. 8(3): 4, 2014.
[31] Dorogovtsev, A. A., Ostapenko, O. V. Large deviations for flows of interacting Brownian motions. Lith. Math. J. 10(3): 315–339, 2010.
[32] Dorogovtsev, A. A., Riabov, G. V., Schmalfuß, B. Stationary points in coalescing stochastic flows on ℝ. Stoch. Process. Appl. 130(8): 4910–4926, 2020.
[33] Dorogovtsev, A. A., Vovchanskii, M. B. Arratia flow with drift and Trotter formula for Brownian web. Commun. Stoch. Anal. 12(1): 89–108, 2018.
[34] Dorogovtsev, A. A., Vovchanskii, M. B. On approximations of the point measures associated with the Brownian web by means of the fractional step method and the discretization of the initial interval. Ukr. Math. J. 72(9): 1179–1194, 2020.
[35] Dorogovtsev, A. A., Vovchanskii, N. B. Representations of the finite-dimensional point densities in Arratia flows with drift. Theory Stoch. Process. 25(1): 25–36, 2020.
[36] Dudley, R. M. Real Analysis and Probability. Wadsworth & Brooks/Cole Mathematics Series. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1989.
[37] Dynkin, E. B. Superdiffusions and Positive Solutions of Nonlinear Partial Differential Equations. AMS, 2004.
[38] Etheridge, A. M. An Introduction to Superprocesses. University Lecture Series, vol. 20. American Mathematical Society, 2000, x+188 pp.
[39] Ethier, S. N., Kurtz, T. G. Markov Processes: Characterization and Convergence. Wiley, 1986.
[40] Fontes, L. R., Newman, C. M. The full Brownian web as scaling limit of stochastic flows. Stoch. Dyn. 6(2): 213–228, 2006.
[41] Gihman, I. I., Skorokhod, A. V. The Theory of Stochastic Processes I. Springer, 1986, xv+488 pp.
[42] Glinyanaya, E. V. Discrete analogue of the Krylov–Veretennikov expansion. Theory Stoch. Process. 17(33)(1): 39–49, 2011.
[43] Glinyanaya, E. V. Disordering asymptotics in the discrete approximation of an Arratia flow. Theory Stoch. Process. 18(34)(2): 8–14, 2012.
[44] Glinyanaya, E. V. Semigroups of m-point motions of the Arratia flow, and binary forests. Theory Stoch. Process. 19(35)(2): 31–41, 2014.
[45] Glinyanaya, E. V. Ergodicity with respect to the spatial variable of discrete-time stochastic flows. Dopov. Nats. Akad. Nauk Ukr. 8(1): 13–20, 2015.
[46] Glinyanaya, E. V. Krylov–Veretennikov representation for the m-point motion of a discrete-time flow. Theory Stoch. Process. 20(36)(1): 63–77, 2015.
[47] Glinyanaya, E. V. Spatial ergodicity of the Harris flows. Commun. Stoch. Anal. 11(2): 223–231, 2017.
[48] Glinyanaya, E. V. Mixing coefficient for discrete-time stochastic flow. J. Stoch. Anal. 1(1): 3, 2020.
[49] Glinyanaya, E. V., Fomichov, V. V. Limit theorems for the number of clusters of the Arratia flow. Theory Stoch. Process. 23(39)(2): 33–40, 2018.
[50] Goncharuk, N. Yu., Kotelenez, P. Fractional step method for stochastic evolution equations. Stoch. Process. Appl. 73(1): 1–45, 1998.
[51] Harris, T. E. Coalescing and noncoalescing stochastic flows in R¹. Stoch. Process. Appl. 17(2): 187–210, 1984.
[52] Howitt, C., Warren, J. Dynamics for the Brownian web and the erosion flow. Stoch. Process. Appl. 119(6): 2009–2051, 2009.
[53] Ikeda, N., Watanabe, S. Stochastic Differential Equations and Diffusion Processes, 2nd edition. Elsevier, 1992, 572 pp.
[54] Jakubowski, A., Kamenskii, M., Raynaud de Fitte, P. Existence of weak solutions to stochastic evolution inclusions. Stoch. Anal. Appl. 23(4): 723–749, 2005.
[55] Kallenberg, O. Foundations of Modern Probability, 2nd edition. Probability and Its Applications. Springer, New York, NY, 2002.
[56] Karlikova, M. P. The martingale problem for stochastic differential equations with interaction. Theory Stoch. Process. 11(27)(1–2): 69–73, 2005 (in English).
[57] Karlikova, M. P. On the shift of generalized functions by evolutionary flow. Ukr. Mat. Zh. 57(8): 1020–1029, 2005 (in Ukrainian); translation in Ukr. Math. J. 57(8): 1201–1213, 2005 (in English).
[58] Karlikova, M. P. On a weak solution of an equation for an evolution flow with interaction. Ukr. Mat. Zh. 57(7): 895–903, 2005 (in Russian); translation in Ukr. Math. J. 57(7): 1055–1065, 2005 (in English).
[59] Kolokoltsov, V. N. Nonlinear Markov Processes and Kinetic Equations. Cambridge Univ. Press, 2010.
[60] Komatsu, T. On the Malliavin calculus for SDE's on Hilbert spaces. Acta Appl. Math. 78: 223–232, 2003.
[61] Komatsu, T., Khatóno, F. On the Malliavin calculus for stochastic flows with interaction on Hilbert spaces. Preprint.
[62] Korenovskaya, Y. A. Properties of strong random operators constructed with respect to an Arratia flow. Ukr. Math. J. 69(2): 186–204, 2017.
[63] Kotelenez, P. On the semigroup approach to stochastic evolution equations. In: Arnold, L. and Kotelenez, P. (eds.): Stochastic Space-Time Models and Limit Theorems, pp. 95–139. D. Reidel, 1985.
[64] Kotelenez, P. A stochastic Navier–Stokes equation for the vorticity of a two-dimensional fluid. Ann. Appl. Probab. 5(4): 1126–1160, 1995.
[65] Kotelenez, P. A class of quasilinear stochastic partial differential equations of McKean–Vlasov type with mass conservation. Probab. Theory Relat. Fields 102(2): 159–188, 1995.
[66] Krylov, N. V., Veretennikov, A. Yu. Explicit formulae for the solutions of the stochastic differential equations. Math. USSR Sb. 29(2): 239–256, 1976.
[67] Kunita, H. Stochastic Flows and Stochastic Differential Equations. Cambridge Studies in Advanced Mathematics, vol. 24. Cambridge University Press, Cambridge, 1990.
[68] Kunita, H. Generalized solutions of a stochastic partial differential equation. J. Theor. Probab. 7(2): 279–308, 1994.
[69] Kunita, H. Stochastic flows acting on Schwartz distributions. J. Theor. Probab. 7(2): 247–278, 1994.
[70] Le Jan, Y., Raimond, O. Flows, coalescence and noise. Ann. Probab. 32(2): 1247–1315, 2004.
[71] Liggett, T. M. Interacting Particle Systems. Springer, 1985, xv+488 pp.
[72] Lindvall, T. Lectures on the Coupling Method. John Wiley & Sons, 1992, 258 pp.
[73] Liptser, R. S., Shiryaev, A. N. Statistics of Random Processes. I. Springer, 1977, x+395 pp.
[74] Matsumoto, H. Coalescing stochastic flows on the real line. Osaka J. Math. 26: 139–158, 1989.
[75] Munasinghe, R., Rajesh, R., Tribe, R., Zaboronski, O. Multi-scaling of the n-point density function for coalescing Brownian motions. Commun. Math. Phys. 268(3): 717–725, 2006.
[76] Nishchenko, I. I. Discrete time approximation of coalescing stochastic flows on the real line. Theory Stoch. Process. 17(33)(1): 70–78, 2011.
[77] Pilipenko, A. Yu. Stationary measure-valued processes generated by a flow of interacted particles. In: Ukrainian Mathematical Congress 2001, Section 9. Probability Theory and Mathematical Statistics, Proceedings, Kyiv, 2002, pp. 123–130.
[78] Protter, P. Stochastic Integration and Differential Equations. A New Approach. Applications of Mathematics, vol. 21. Springer-Verlag, Berlin, 1990.
[79] Riabov, G. V. Duality for coalescing stochastic flows on the real line. Theory Stoch. Process. 23(39)(2): 55–74, 2018.
[80] Riabov, G. V. Random dynamical systems generated by coalescing stochastic flows on ℝ. Stoch. Dyn. 18(4): 1850031, 2018.
[81] Rozovsky, B. L., Lototsky, S. V. Stochastic Evolution Systems. Linear Theory and Applications to Non-Linear Filtering. Springer, 2018, xvi+330 pp.
[82] Saks, S. Theory of the Integral. Dover Publications, 2005, 348 pp.
[83] Simon, B. P(φ)₂ Model of Euclidean Quantum Field Theory. "Mir", Moscow, 1976.
[84] Skorokhod, A. V. Operator-valued stochastic differential equations and stochastic semi-groups. Usp. Mat. Nauk 37(6(228)): 157–183, 1982.
[85] Skorokhod, A. V. Random Linear Operators. D. Reidel Publishing Company, Dordrecht, Holland, 1983.
[86] Skorokhod, A. V. Random Linear Operators. Springer, Dordrecht, 1984, xvi+200 pp.
[87] Skorokhod, A. V. Stochastic Equations for Complex Systems. Springer, Dordrecht, 1988, xvii+175 pp.
[88] Skorokhod, A. V. Measure-valued diffusion. Theory Stoch. Process. 3(19)(1–2): 7–12, 1997.
[89] Strogatz, S. H. From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators. Physica D 143: 1–20, 2000.
[90] Tikhomirov, V. M. Some Questions in Approximation Theory. Izdat. Moskov. Univ., 1976.
[91] Vakhania, N., Tarieladze, V., Chobanian, S. Probability Distributions on Banach Spaces. Springer, 1987.
[92] Valadier, M. A course on Young measures. Rend. Ist. Mat. Univ. Trieste 26(suppl.): 349–394, 1994.
[93] Vishik, M. J., Fursikov, A. V. Mathematical Problems of Statistical Hydromechanics. Springer, Dordrecht, 2011, ix+576 pp.
[94] Vovchanskii, M. B. Convergence of solutions of SDEs to Harris flows. Theory Stoch. Process. 23(2): 80–91, 2018.

Index

adapted random map 80, 93
Arratia flow 168, 189
center of probability measure 124, 140, 154
compact sets in C([0;1], c₀⁺) 121
conditional distribution in Hilbert space 123, 155
derivative of polynomial on the space of measures 66, 67
deterministic flow with interaction 53, 68
diffusion random map 93
generalized Wiener process in Hilbert space 151
homogeneous additive functional 64
infinitesimal operator of the evolutionary process 66
integral of random function with respect to random measure 89
local time for measure-valued process 161
Markov evolutionary process 1, 53, 58
martingale with values in Hilbert space 53
measure (ℱₜ)-adapted 22, 66
measure-valued process 4, 8
random probability measure 3, 13
– diffusion measure 23, 26
– measure on the space of trajectories 95
shift-compactness of measures 140
Skorokhod space 34, 155
stationary
– measure-valued solution to equation with interaction 130
– random measure 24
stochastic
– differential equation 156
– in Hilbert space 156
– with interaction 42
– integral with respect to Arratia flow 171
– kernel 1
– semi-group 32
stochastic flow 34, 48
strong measure-valued solution to stochastic differential equation 110
theorem
– Clark 173
– Fubini for integrals with respect to adapted random measures 78
– Girsanov for Arratia flow 176
total time of free motion in Arratia flow 168
Wasserstein distance 8
– of order 0 8
– of order n 8
weak compactness in
– C([0;1], c₀⁺) 126
– C([0;1], mₙ) 125
– m 13
– mₙ 8
Wiener sheet 38

De Gruyter Series in Probability and Stochastics

Volume 2
Yuliya Mishura, Kostiantyn Ralchenko
Discrete-Time Approximations and Limit Theorems. In Applications to Financial Markets, 2021
ISBN 978-3-11-065279-6, e-ISBN 978-3-11-065424-0, e-ISBN (ePUB) 978-3-11-065299-4

Volume 1
Abdelhamid Hassairi
Riesz Probability Distributions, 2021
ISBN 978-3-11-071325-1, e-ISBN 978-3-11-071337-4, e-ISBN (ePUB) 978-3-11-071345-9

www.degruyter.com