Random Walks on Boundary for Solving PDEs
K.K. Sabelfeld and N.A. Simonov
Utrecht, The Netherlands, 1994
VSP B V P.O. Box 346 3700 AH Zeist The Netherlands
© VSP BV 1994 First published in 1994 ISBN 90-6764-183-9
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
CIP-DATA KONINKLIJKE BIBLIOTHEEK, DEN HAAG. Sabelfeld, K.K. Random walks on boundary for solving PDEs / K.K. Sabelfeld and N.A. Simonov. - Utrecht : VSP. With ref. ISBN 90-6764-183-9 bound. NUGI 811. Subject headings: mathematical physics.
Printed in The Netherlands by Koninklijke Wöhrmann BV, Zutphen.
Contents

1. Introduction  1
2. Random walk algorithms for solving integral equations  8
2.1. Conventional Monte Carlo scheme  8
2.2. Biased estimators  15
2.3. Linear-fractional transformations and relations to iterative processes  17
2.4. Asymptotically unbiased estimators based on singular approximation of the kernel  24
2.5. Integral equation of the first kind  32
3. Random Walk on Boundary algorithms for solving the Laplace equation  36
3.1. Newton potentials and boundary integral equations of the electrostatics  36
3.2. The interior Dirichlet problem and isotropic Random Walk on Boundary process  38
3.3. Solution of the Neumann problem  45
3.4. Random estimators for the exterior Dirichlet problem  51
3.5. Third boundary value problem and alternative methods of solving the Dirichlet problem  55
3.6. Inhomogeneous problems  60
3.7. Calculation of the derivatives near the boundary  63
3.8. Normal derivative of a double layer potential  67
4. Walk on Boundary algorithms for the heat equation  70
4.1. Heat potentials and Volterra boundary integral equations  70
4.2. Nonstationary Walk on Boundary process  73
4.3. The Dirichlet problem  75
4.4. The Neumann problem  79
4.5. Third boundary value problem  80
4.6. Unbiasedness and variance of the Walk on Boundary algorithms  84
4.7. The cost of the Walk on Boundary algorithms  89
4.8. Inhomogeneous heat equation  90
4.9. Calculation of derivatives on the boundary  93
5. Spatial problems of elasticity  98
5.1. Elastopotentials and systems of boundary integral equations of the elasticity theory  98
5.2. First boundary value problem and estimators for singular integrals  101
5.3. Other boundary value problems for the Lame equations and regular integral equations  106
6. Variants of the Random Walk on Boundary for solving the stationary potential problems  108
6.1. The Robin problem and the ergodic theorem  108
6.2. Stationary diffusion equation with absorption  112
6.3. Stabilization method  113
6.4. Multiply connected domains  114
7. Random Walk on Boundary in nonlinear problems  123
7.1. Nonlinear Poisson equation  123
7.2. Boundary value problem for the Navier-Stokes equation  125
Bibliography  136
The monograph presents new probabilistic representations for classical boundary value problems of mathematical physics. In contrast to the well-known probabilistic representations in the form of Wiener and diffusion path integrals, the trajectories of the random walks in our representations are simulated on the boundary of the domain, as Markov chains generated by the kernels of the boundary integral equations equivalent to the original boundary value problem. The Monte Carlo methods based on the walk on boundary processes have a number of advantages: (1) high-dimensional problems can be solved; (2) the method is grid-free and gives the solution simultaneously at arbitrary points; (3) exterior and interior boundary value problems are solved by one and the same random walk on boundary process; (4) in contrast to the classical probabilistic representations, there is no ε-error generated by the approximation in the ε-boundary; and (5) parallel implementation of the walk on boundary algorithms is straightforward. This is the first book devoted to the walk on boundary algorithms. First introduced by K. Sabelfeld for solving the interior and exterior boundary value problems for the Laplace and heat equations, the method was then extended to all the main boundary value problems of the potential and elasticity theories. The book is intended for specialists in applied and computational mathematics, applied probabilists, and for students and post-graduates studying new numerical methods for solving PDEs.
Keywords: Markov chains, double layer potentials, heat and elasticity potentials, integral equations, random estimators, random walk on boundary.
Chapter 1
Introduction

It is well known that random walk methods for boundary value problems (BVPs) in high-dimensional domains with complex boundaries are quite efficient, especially if the solution is needed not at all points of a grid, but only at some marked points of interest. One of the most impressive features of Monte Carlo methods is the possibility of calculating probabilistic characteristics of the solutions to BVPs with random parameters (random boundary functions, random sources, and even random boundaries). Monte Carlo methods for solving PDEs are based: (a) on classical probabilistic representations in the form of Wiener or diffusion path integrals, or (b) on a probabilistic interpretation of integral equations equivalent to the original BVP, which results in representations of the solutions as expectations over Markov chains. In approach (a), diffusion processes generated by the relevant differential operator are simulated using numerical methods for solving ordinary stochastic differential equations (Friedmann, 1976). To achieve the desired accuracy, the discretization step must be taken small enough, which results in long simulated random trajectories. For PDEs with constant coefficients, however, it is possible to use the strong Markov property of the Wiener process and construct much more efficient algorithms, first devised for the Laplace equation (Müller, 1956) and known as the walk on spheres method (WSM). This algorithm was later justified in the framework of approach (b) by passing to an equivalent integral equation of the second kind with a generalized kernel and using the Markov chain which "solves" this equation. This approach was extended to general second-order scalar elliptic equations, higher-order equations related to the Laplace operator, and some elliptic systems (Sabelfeld, 1991).
We now briefly present two different approaches to constructing and justifying the walk on spheres algorithm: the first, conventional one, coming from approach (a), and the second based on a converse mean value relation. Let us start with a simple case, the Dirichlet problem for the Laplace equation in a bounded domain G ⊂ R³:

Δu(x) = 0,  x ∈ G,  (1.1)

u(y) = φ(y),  y ∈ Γ = ∂G.  (1.2)

We seek a regular solution to (1.1), (1.2), i.e., u ∈ C²(G) ∩ C(G ∪ Γ). Let d* := sup_{x∈G} d(x), where d(x) is the largest radius of the spheres S(x, d(x)) centered at x and contained in G.
Let w_x(t) be the Wiener process starting at the point x ∈ G, and let τ_Γ be the first passage time (the time of the first intersection of the process w_x(t) with the boundary Γ). We suppose that the boundary Γ is regular, so that (1.1), (1.2) has a unique solution. Then (Dynkin, 1963)

u(x) = E_x φ(w_x(τ_Γ)).  (1.3)
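Representation (1.3) already suggests the brute-force method of approach (a): discretize the Wiener process with a small time step and score the boundary data at the first exit point. Below is a minimal sketch (our illustration, not from the book; the domain, boundary data, starting point, and parameters are arbitrary choices) for the unit ball in R³ with φ(y) = y₁, whose harmonic extension is u(x) = x₁:

```python
import numpy as np

def euler_first_passage(x0, phi, dt=1e-3, n_walkers=800, seed=1):
    """Approach (a): step the Wiener process by sqrt(dt)*N(0, I) until it
    first leaves the unit ball, then score phi at the projected exit point."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = np.array(x0, dtype=float)
        for _ in range(200_000):             # safety cap; exit is a.s. finite
            x += np.sqrt(dt) * rng.standard_normal(3)
            if x @ x >= 1.0:                 # first passage through the sphere
                break
        total += phi(x / np.linalg.norm(x))  # project onto the boundary
    return total / n_walkers

# phi(y) = y_1, so by (1.3) the answer should be close to u(0.5,0,0) = 0.5
est = euler_first_passage((0.5, 0.0, 0.0), phi=lambda y: y[0])
```

Each trajectory here needs on the order of hundreds of time steps before it crosses the boundary, which is exactly the cost that the walk on spheres removes.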
Note that in (1.3), only the random points on the boundary are involved. We can thus formulate the following problem: how can these points be found without explicit simulation of the Wiener process inside the domain G? This problem was solved in (Müller, 1956) using the following considerations. In the sphere S(x, d(x)), representation (1.3) gives

u(x) = E_x u(w_x(τ_{S(x,d(x))})).  (1.4)

The same representation is valid for all points y ∈ S(x, d(x)), so we can use the strong Markov property and write the conditional expectation:

u(x) = E_x { E_y u(w_y(τ_{S(y,d(y))})) | w_x(0) = x, y = w_x(τ_{S(x,d(x))}) }.

Iterating this relation, we arrive at the walk on spheres: the Markov chain x₀ = x, x₁, x₂, …, in which x_{i+1} is distributed uniformly (isotropically) on the sphere S(x_i, d(x_i)). Since E{u(x_{n+1}) | x₁, …, x_n} = u(x_n), the sequence {u(x_n)} is also a martingale with respect to x₁, …, x_n (Motoo, 1959). In particular, for the number N_ε of the first step at which the chain enters the strip Γ_ε = {x ∈ G : d(x) ≤ ε},

E{u(x_{N_ε}) | x₁, …, x_k; N_ε > k} = E{u(x_{N_ε}) | x_k, N_ε > k} = u(x_k).

Hence, for the estimator ξ(x) = φ(x̄_{N_ε}), where x̄_{N_ε} is the point of Γ nearest to x_{N_ε},

E_x ξ(x) = u(x).
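The chain x₀, x₁, … is simple to implement whenever d(x) is available in closed form. A minimal sketch (our illustration, not from the book; the domain, boundary data, and parameters are arbitrary choices) of the walk on spheres in the unit ball, where d(x) = 1 − ‖x‖, again with φ(y) = y₁:

```python
import numpy as np

def walk_on_spheres(x0, phi, eps=1e-3, n_walkers=4000, seed=2):
    """Walk on spheres: x_{i+1} is uniform on the sphere S(x_i, d(x_i));
    the chain stops in the eps-strip and scores phi at the nearest
    boundary point."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = np.array(x0, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(x)      # distance to the unit sphere
            if d <= eps:                     # entered Gamma_eps: stop
                break
            v = rng.standard_normal(3)
            v /= np.linalg.norm(v)           # isotropic unit direction
            x += d * v                       # jump to the sphere S(x, d(x))
        total += phi(x / np.linalg.norm(x))  # nearest point of Gamma
    return total / n_walkers

est = walk_on_spheres((0.5, 0.0, 0.0), phi=lambda y: y[0])
```

In contrast to the discretized diffusion, each trajectory reaches Γ_ε in O(log(1/ε)) steps on average.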
Let L = sup_{x,y∈Ḡ} ‖x − y‖ and M = sup_{y∈Γ} |φ(y)|, and assume that u(x) satisfies the Lipschitz condition |u(x) − u(y)| ≤ c‖x − y‖. Then

E_x(ξ − u)² ≤ c²ε² + 4Mcε + c²L²,

and the sample mean (1/N) Σ_{j=1}^N ξ_j(x) approximates u(x) with an O(ε) bias.

The second approach is based on the following converse mean value theorem.

Proposition 1.1. Suppose that a function v ∈ C(G ∪ Γ) coincides with φ on Γ and, for each ε (ε > 0), satisfies the mean value relation at every point x ∈ G\Γ_ε for the sphere S(x, d(x)). Then v(x) is the unique solution to the problem (1.1), (1.2).
Proof. The proof uses the maximum principle (Courant and Hilbert, 1989). Since u also satisfies the mean value relation for every sphere contained in G, we conclude that the function u − v satisfies the mean value relation at every point x ∈ G\Γ_ε. Let F be the closed set of points x ∈ G\Γ_ε where u − v attains its maximum M. Let x₀ be a point of F which has the minimal distance from the surface Y = {y : d(y) = ε}. If x₀ were an interior point of G\Γ_ε, we could find a sphere S(x₀, r₀) ⊂ G for which the mean value relation holds, and thus u − v = M inside S(x₀, r₀). Therefore, x₀ must belong to Y. We repeat the same argument for the minimum of u − v. Since (u − v)|_Γ = 0, we have v = u in G. •
Using this proposition, we now derive an integral equation equivalent to the problem (1.1), (1.2). Let δ_x(y) be the generalized density describing the uniform distribution on the sphere S(x, d(x)), and let us define the kernel function as follows:

k(x, y) = δ_x(y)  if x ∈ G\Γ_ε,  and  k(x, y) = 0  if x ∈ Γ_ε.

We also define a function f_ε(x) in G:

f_ε(x) = 0  if x ∈ G\Γ_ε,  and  f_ε(x) = u(x)  if x ∈ Γ_ε.

By the proposition, we can write an equivalent integral equation

u(x) = K_ε u(x) + f_ε(x),  (1.9)

where the integral operator K_ε is defined by

K_ε ψ(x) = ∫_G k(x, y) ψ(y) dy  (1.10)

for each ψ(x) ∈ C(Ḡ).
Proposition 1.2. For any ε > 0, integral equation (1.9) has the unique solution given by

u(x) = f_ε(x) + Σ_{i=1}^∞ K_ε^i f_ε(x).  (1.11)

Moreover, this solution coincides with the solution to (1.1), (1.2).

Proof. Let ε be fixed. To show the convergence of the series

f_ε(x) + Σ_{i=1}^N K_ε^i f_ε(x),  (1.12)

it is sufficient to prove that ‖K_ε²‖ < 1 (this fact also implies the uniqueness of the solution to (1.11)). Let ν(ε) = ε²/4(d*)². For x ∈ G\Γ_ε we have

∫_G k(x, y) ( ∫_G k(y, y′) dy′ ) dy = ∫_G δ_x(y) ( ∫_G k(y, y′) dy′ ) dy = ∫_{G\Γ_ε} δ_x(y) dy ≤ 1 − ν(ε) < 1.

Let v(x) := f_ε(x) + Σ_{i=1}^∞ K_ε^i f_ε(x). It is clear that v satisfies

v(x) = K_ε v(x) + f_ε(x),

and hence, by Proposition 1.1, v coincides with the solution to (1.1), (1.2). •
Now we remark that, to use (1.9) for numerical purposes, it is necessary to know the solution of the boundary value problem in Γ_ε. However, we obtain an approximation if in Γ_ε we take

u(x) ≈ f̃_ε(x) = φ(x̄),  x ∈ Γ_ε,  (1.13)

where x̄ is the point of Γ nearest to x, since u ∈ C(G ∪ Γ). Instead of (1.13) we could use any continuous extension of φ to Γ_ε (the ideal case is the harmonic extension). Thus, let us consider the equation

u_ε(x) = K_ε u_ε(x) + f̃_ε(x),  x ∈ G\Γ_ε,  (1.14)

where f̃_ε is an approximation of u(x) in Γ_ε, e.g., given by (1.13). Note that the solution to (1.14) is not harmonic, but it is clear that |u(x) − u_ε(x)| = O(ω(ε)) as ε → 0, where ω(ε) is the continuity modulus of the function u(x) in Γ_ε. For example, for Lipschitz continuous functions we get |u(x) − u_ε(x)| ≤ cε.
where φ_{yx} is the angle between the vector x − y and n_y, the interior normal vector at the point y ∈ Γ, and μ(y) is a continuous function satisfying the boundary integral equation:
μ(y) = − ∫_Γ ρ(y, y′) μ(y′) dσ(y′) + …

If the exponents satisfy … > ρ − r(p − 1), then K is a completely continuous operator from L^p(Γ) to L^p(Γ) (Kantorowitsch and Akilow, 1964).
In particular, this is true for the kernels of the potential theory:

K(x, x′) = b(x, x′) / |x − x′|ⁿ,

where b(x, x′) is a bounded continuous function and n < m. If, in addition, p > m/(m − n), then K : L^p(Γ) → C(Γ).
We now define a Markov chain in the phase space Γ as a sequence of random points Y = {Y₀, Y₁, …, Y_N, …}, where the initial point Y₀ is distributed in Γ with an initial density p₀(x), and the next points are determined from the transition density

p(Y_{i−1} → Y_i) = r(Y_{i−1}, Y_i)(1 − g(Y_{i−1})).

Here r(Y_{i−1}, Y_i) is the conditional distribution density of Y_i with Y_{i−1} fixed, and g(Y_{i−1}) is the probability that the chain terminates in the state Y_{i−1} (i ≥ 1). Thus, the chain may have an infinite or finite number N of states, depending on the value of g. On the Markov chain Y, we define the following random weights:

Q₀ = f(Y₀) / p₀(Y₀)

(the initial weight), and

Q_i = Q_{i−1} k(Y_i, Y_{i−1}) / p(Y_{i−1} → Y_i),  i ≥ 1.

It is supposed that p₀ and p are consistent with f and k, respectively, i.e.,

p₀(x) ≠ 0 for {x : f(x) ≠ 0},  and  p(x, y) ≠ 0 for {y : k(y, x) ≠ 0}.

These suppositions are necessary in order that the weights Q_i be finite.
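On a finite state space the construction above reduces to matrix arithmetic, which makes a direct self-check possible. The sketch below (our illustration; the kernel, free term, densities, and termination probability are arbitrary choices) estimates the functional (φ, h) for the solution of φ = Kφ + f, with uniform initial and transition densities and the weights Q₀ = f(Y₀)/p₀(Y₀), Q_i = Q_{i−1} k(Y_i, Y_{i−1})/p(Y_{i−1} → Y_i):

```python
import numpy as np

# A small "integral equation" phi = K phi + f on three states.
K = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.1],
              [0.0, 0.2, 0.2]])     # k(y, x): row = new state, col = old
f = np.array([1.0, 0.5, 2.0])
h = np.array([0.2, 1.0, 0.5])

n_states, g = 3, 0.3                # g = termination probability
p0 = np.full(n_states, 1.0 / n_states)
p_trans = (1.0 - g) / n_states      # p(i -> j): uniform over survivors

rng = np.random.default_rng(3)
n_chains, acc = 100_000, 0.0
for _ in range(n_chains):
    y = rng.integers(n_states)      # Y_0 ~ p0
    Q = f[y] / p0[y]                # initial weight Q_0
    acc += Q * h[y]
    while rng.random() >= g:        # chain survives with probability 1 - g
        y_new = rng.integers(n_states)
        Q *= K[y_new, y] / p_trans  # adjoint weight k(Y_i, Y_{i-1}) / p
        y = y_new
        acc += Q * h[y]
est = acc / n_chains

exact = h @ np.linalg.solve(np.eye(n_states) - K, f)  # (phi, h) directly
```

The consistency conditions above guarantee that every ratio in the weight update is finite.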
Our goal is to evaluate a linear integral functional:

I_h = (φ, h).

… Then D = {λ : ℜλ > −α/2}. The Neumann series (2.24) converges if

|η(λ_k)| > |η*|,  k = 1, 2, …,

where {λ_k} are the characteristic numbers of K. Let λ = x + iy; then we rewrite this condition in the form … We now consider the transformation λ = α/(1 − βη). It maps the disk Δ onto the half-plane:
D = {λ : |λ − α| < …}.

… [1 − (γ, φ₂)]². Analogous arguments can be used for the random variable [1 − ξ₂^{(n)}]; hence, η is an asymptotically unbiased estimator for φ(x). The approach described above can be generalized to systems of integral equations; systems of linear algebraic equations can also be treated. Let us consider a system of integral equations
φ(x) = ∫_G K[x, y] φ(y) dy + f(x),  (2.48)

where f(x) is a known vector; K[x, y] is a matrix with elements {k_ij(x, y)}_{i,j=1}^m; and φ(x) = (φ¹(x), …, φ^m(x))ᵀ is the column vector of functions to be found. We will not make special assumptions about the properties of the integral operator K[·], the class of vector functions φ, f, and the domain G. We assume only that the system (2.48) is uniquely solvable, and now we generalize the representation (2.42). For simplicity, we first consider the case n = 1, i.e.,

M[x, y] = K[x, y] − δ(x) γᵀ(y),

or, in more detail,

m_ij(x, y) = k_ij(x, y) − δ^i(x) γ^j(y),  (2.49)

where δ(x) and γ(y) are some arbitrary column vectors with components δ¹(x), …, δ^m(x) and γ¹(y), …, γ^m(y), respectively. We introduce two auxiliary systems of integral equations:

ψ₀(x) = ∫_G M[x, y] ψ₀(y) dy + f(x),

ψ₁(x) = ∫_G M[x, y] ψ₁(y) dy + δ(x).

Then the following representation for the solution to (2.48) holds:

φ(x) = ψ₀(x) + ψ₁(x) · [ ∫_G γᵀ(y) ψ₀(y) dy ] / [ 1 − ∫_G γᵀ(y) ψ₁(y) dy ].  (2.50)
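Representation (2.50) can be verified directly in a finite-dimensional analogue, where the integral operator becomes a matrix and the auxiliary systems become linear solves. The sketch below (our illustration; the matrices and vectors are arbitrary choices) checks φ = ψ₀ + ψ₁ · (γᵀψ₀)/(1 − γᵀψ₁) against a direct solution:

```python
import numpy as np

# phi = K phi + f with a fixed 3x3 "kernel"; delta, gamma are arbitrary.
K = np.array([[0.2, 1.1, 0.0],
              [0.5, 0.3, 0.4],
              [0.0, 0.7, 0.1]])
f = np.array([1.0, 2.0, 0.5])
delta = np.array([1.0, 0.0, 1.0])
gamma = np.array([0.5, 0.2, 0.1])

M = K - np.outer(delta, gamma)        # M = K - delta gamma^T, cf. (2.49)
I = np.eye(3)

psi0 = np.linalg.solve(I - M, f)      # psi0 = M psi0 + f
psi1 = np.linalg.solve(I - M, delta)  # psi1 = M psi1 + delta

# (2.50): phi = psi0 + psi1 * (gamma^T psi0) / (1 - gamma^T psi1)
phi_repr = psi0 + psi1 * (gamma @ psi0) / (1.0 - gamma @ psi1)

phi_direct = np.linalg.solve(I - K, f)
```

The point of the splitting is that the Neumann series for M may converge even when that for K does not; here we only confirm the algebraic identity.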
This relation is a corollary of the general Theorem 2.6 proved below. Let

M[x, y] = K[x, y] − Σ_{i=1}^n δ_i(x) γ_iᵀ(y),  (2.51)

and consider the auxiliary systems

ψ₀(x) = ∫_G M[x, y] ψ₀(y) dy + f(x),
ψ₁(x) = ∫_G M[x, y] ψ₁(y) dy + δ₁(x),
…
ψ_n(x) = ∫_G M[x, y] ψ_n(y) dy + δ_n(x).  (2.52)

Denote by A the matrix with elements

a_ij = ∫_G γ_iᵀ(y) ψ_j(y) dy,  i, j = 1, …, n,

and denote by b the vector with components

b_i = ∫_G γ_iᵀ(y) ψ₀(y) dy,  i = 1, …, n.
Theorem 2.6. Assume that the system (2.52) is uniquely solvable and that the operator (I − A)⁻¹ exists. Then the solution to (2.48) is represented as

φ(x) = ψ₀(x) + Ψᵀ(x) J,

where Ψᵀ(x) J = Σ_{i=1}^n J_i ψ_i(x) and the vector J is determined from the system of linear algebraic equations

J = AJ + b.  (2.53)
Proof. Substituting K[x, y] from (2.51) into (2.48) yields

φ(x) = ∫_G M[x, y] φ(y) dy + f(x) + Σ_{i=1}^n δ_i(x) ∫_G γ_iᵀ(y) φ(y) dy …

…it is natural to use adjoint estimators (see Chapter 2) for the iterations of the integral operator K, and hence for the solution of the boundary value problem (3.5). As a result we have Q₀ = 2, Q_i = (−1)^i · 2, and
ξ(x; n) = 2 Σ_{i=0}^n l_i^{(n)} (−1)^i g(y_i)  (3.11)

is a δ(x; n)-biased estimator for u(x), where |δ(x; n)| ≤ const · qⁿ and q < 1; the coefficients l_i^{(n)} are defined by the concrete method of analytical continuation of the resolvent (see Chapter 2).
In this particular case the characteristic set χ(K) is well defined, and we can use simple enough domains D in the complex plane with the required properties. As an example, we put D = {λ : ℜλ > −1} (the half-plane) and take the function λ = 2η/(1 − η), mapping the unit disk onto D. In this case η* = 1/3, q = η*/η₀, η₀ = min_k |1 − …|, and

l_i^{(n)} = Σ_{k=i}^n …
Another appropriate function is λ = 4η/(1 − η)², mapping the unit disk onto the whole plane with the ray of real numbers less than or equal to −1 excluded. Here η* = 3 − 2√2, q = η*/η₀,

η₀ = min_k |1 + …|,

and

l_i^{(n)} = Σ_{k=i}^n …

Recall now that λ₁ = −1 is a simple pole of the resolvent and |λ₂| > 1. Thus

R′(λ) = (λ + 1) R_λ

is an analytic function of λ in the interior of the disk {λ : |λ| < |λ₂|}. Consequently, this function can be expanded in the convergent operator series

R′(λ) = K + λ(K + K²) + λ²(K² + K³) + …,

and since R₁ = ½ R′(1), we can proceed as follows. Having taken a finite number of terms, we can evaluate the remainder using the fact that the series for R′ converges at the speed of a geometric progression with multiplier q = 1/|λ₂| < 1. So we have

μ(y) = Σ_{i=0}^{n−1} K^i g(y) + ½ K^n g(y) + ε(n; y),

and the estimator based on this representation is (3.11), where

l_i^{(n)} = 1,  i = 1, …, n − 1;  l_n^{(n)} = ½.

It has been shown in Chapter 2 that some analytical continuation methods, and other methods of rearranging the initial integral equation, make it possible to construct new iterative algorithms for finding the solution. These iterative procedures can often be regarded as summation of the Neumann series for some new integral equation. In particular, linear-fractional mappings λ = … give rise to a class of alternative representations

μ(y) = Σ_{i=0}^∞ K₁^i f₁(y),

where

K₁ = (β/(α + β)) I + (α/(α + β)) K,  f₁ = (α/(α + β)) g.

Note that we cannot use the standard Monte Carlo procedure of constructing an estimator on the trajectories of a terminating Markov chain, since the Neumann series for K₁ does not converge. So we take, as usual, a finite number n of terms sufficient to attain the desired accuracy, that is,
‖μ − Σ_{i=0}^n K₁^i f₁‖ …
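The effect of weighting the last Neumann term by l_n^{(n)} = 1/2 is easy to observe in a matrix model. In the sketch below (our illustration; the matrix and free term are arbitrary choices) K has the eigenvalue −1, so the plain partial sums for μ = Kμ + g oscillate without converging, while the corrected sums converge geometrically at the rate set by the next eigenvalue:

```python
import numpy as np

# K has eigenvalues -1 and 0.5 (eigenvectors [1,0] and [1,1]),
# so the characteristic number lambda_1 = -1 lies on the unit circle.
K = np.array([[-1.0, 1.5],
              [ 0.0, 0.5]])
g = np.array([2.0, 1.0])
mu_exact = np.linalg.solve(np.eye(2) - K, g)

def partial_sums(n):
    """Plain Neumann sum S_n = sum_{i<n} K^i g, and the corrected sum
    S_n + (1/2) K^n g corresponding to l_i = 1, l_n = 1/2."""
    s, term = np.zeros(2), g.copy()
    for _ in range(n):
        s += term
        term = K @ term    # advance to the next power K^{i+1} g
    return s, s + 0.5 * term

plain, corrected = partial_sums(30)
err_plain = np.linalg.norm(plain - mu_exact)
err_corrected = np.linalg.norm(corrected - mu_exact)
```

Here err_plain stays bounded away from zero, while err_corrected decays like (1/2)ⁿ.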