New Monte Carlo Methods With Estimating Derivatives 9783112318935, 9783112307663




New Monte Carlo Methods with Estimating Derivatives G.A. Mikhailov


Utrecht, The Netherlands, 1995

VSP BV
P.O. Box 346
3700 AH Zeist
The Netherlands

© VSP BV 1995
First published in 1995
ISBN 90-6764-190-1

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

CIP-DATA KONINKLIJKE BIBLIOTHEEK, DEN HAAG

Mikhailov, G.A.
New Monte Carlo methods with estimating derivatives / G.A. Mikhailov. - Utrecht : VSP. - With ref.
ISBN 90-6764-190-1 bound
NUGI 811
Subject headings: Monte Carlo methods / mathematical physics.

Printed in The Netherlands by Koninklijke Wöhrmann bv, Zutphen

CONTENTS

Preface  vii

1. Estimation of integrals and solution of integral equations
1.1. Estimation of integrals  1
1.2. Recurrent estimates of the Monte Carlo method for the solution to an integral equation of the second kind  7
1.3. Variance of the basic unbiased estimate  11
1.4. Branching chains and solutions to nonlinear equations  15
1.5. Cost of various algorithms for solving integral equations  16
1.6. Solving problems with stochastic parameters  25

2. Estimation of derivatives
2.1. Vector Monte Carlo algorithms  30
2.2. Calculation of derivatives and perturbations with respect to parameters  35
2.3. Calculation of parametric derivatives in a special case  41

3. Solution of the Helmholtz equation
3.1. The 'walk on spheres' process  45
3.2. The use of probabilistic representation  47
3.3. The use of integral representations  52
3.4. New algorithms for variable c(r)  57
3.5. 'Walk on spheres' algorithms for solving the Helmholtz equation in the n-dimensional space  63
3.6. Solving difference equations by the Monte Carlo method  72
3.7. Additional remarks  75

4. Solution of metaharmonic equations and elliptic systems
4.1. Solution of metaharmonic equations by calculating the parametric derivatives  78
4.2. Solving metaharmonic equations of the form $\Delta^m u + cu = (-1)^m g$  84
4.3. Two-dimensional case  90
4.4. Calculation of the covariance function of the solution to the biharmonic equation  94
4.5. Monte Carlo solution of the Dirichlet problem for elliptic systems with variable parameters  101

5. Monte Carlo methods with calculating parametric derivatives in the radiation transport theory
5.1. Monovelocity transfer process  107
5.2. Calculations of derivatives and perturbations  113
5.3. Multivelocity radiation transport process with fission  120
5.4. Calculation of the derivatives with respect to cross-sections  124
5.5. Calculating critical values of the parameters: the critical density, the time constant of particle multiplication, the effective multiplication factor  128
5.6. Numerical examples  130
5.7. Monte Carlo calculations of critical systems with equalization of generations  135
5.8. Solving some inverse and stochastic problems of the transfer theory  140
5.9. The 'free-path' estimate for solving the transfer equation in total  144

6. Solution of nonlinear problems
6.1. Solution of nonlinear integral equations  147
6.2. Solution of the Dirichlet problem for elliptic equations  149
6.3. Minimization of cost of Monte Carlo methods in iterative solution of nonlinear problems  154
6.4. Iterative solution of a model kinetic equation  158

Appendix. Some simulation algorithms
A.1. Numerical simulation of random variables  164
A.2. Numerical simulation of random fields  170
A.3. Remarks about simulation algorithms with the use of multiprocessor systems  180

References  183

PREFACE

It is possible to use weighted Monte Carlo methods for solving many problems of mathematical physics (boundary value problems for elliptic equations, the Boltzmann equation, radiation transfer and diffusion equations). In this case an appropriate Markov chain is simulated on a computer and the necessary functionals are estimated using the weight which, after each transition, is multiplied by the ratio of the integral equation kernel and the transition density function. Weight estimates make it possible to evaluate special functionals, for example, derivatives with respect to parameters of a problem.

The triangular system of integral equations is obtained by multiple differentiation of the original integral equation with respect to a parameter. In this book new weak conditions are presented under which the corresponding vector Monte Carlo estimates are unbiased and their variances are finite. For the Helmholtz and radiation transfer equations, new recurrent scalar representations of these estimates are constructed. Based on linearization, parametric derivatives can be used for solving inverse, stochastic and nonlinear problems. It appears that by using direct differentiation of the Helmholtz and transfer equations, it is possible to construct effective iterative processes for estimating critical parameters of these equations. Corresponding new algorithms with numerical tests are considered in detail.

In addition, parametric derivatives for the Helmholtz equation represent solutions of metaharmonic equations. Consequently, the author has constructed new Monte Carlo methods for solving the Helmholtz equation with a nonconstant parameter, including the stationary Schrödinger equation. The corresponding estimates are of the power type with respect to the parameter, which is especially suitable for differentiation.

Using Green's functions, it is sometimes possible to make the weight dependent on a spatial point and to estimate the corresponding derivatives. On the other hand, it is possible to construct similar estimates on the basis of a sufficiently regular extension of standard point estimates of the solution. Based on the imbedding theorem, these estimates allow one to optimize Monte Carlo algorithms for a global estimate of the solution, for instance, in the uniform metric. In this book the corresponding new results for linear and nonlinear problems are presented. These results, in particular, demonstrate the advantage of the 'path estimate' as compared to the 'collision estimate' for solving linear and nonlinear radiation transfer equations in total.


In constructing the algorithms under study, the method of additional randomization is especially important. Therefore, some methods of random function simulation are considered in the special appendix. A new method of substantiating and optimizing the recurrent Monte Carlo estimates without using the Neumann series is presented in the Introduction. This method appears especially effective for substantiating Monte Carlo algorithms with branching, which are used when solving nonlinear problems.

This book will be of value and interest to specialists in the field of computational mathematics and physics, probability theory and mathematical statistics.

Chapter 1

Estimation of integrals and solution of integral equations

1.1. ESTIMATION OF INTEGRALS

Monte Carlo estimates for integrals are constructed on the basis of the relations (see, for example, [17, 21, 24]):

$$ J = \int_Y g(y)\,dy = \mathrm{E}\xi, \qquad \xi = g(\eta)/p(\eta), $$

where $p(y)$, $y \in Y$, is the distribution density for the random vector $\eta$, and

$$ \mathrm{E}\xi = \int_Y \frac{g(y)}{p(y)}\,p(y)\,dy = J. $$

To estimate the expected value of $\xi$ one usually uses a sequence $x_1, \dots, x_N$ of independent samples of $\xi$ obtained by Monte Carlo. Then, by the law of large numbers,

$$ \mathrm{E}\xi \approx \bar{x}_N = \frac{1}{N}\sum_{k=1}^{N} x_k. \tag{1.1} $$
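As a quick illustration of (1.1), here is a minimal sketch for a one-dimensional integral; the integrand $g$, the density $p$ and the sample size are illustrative choices, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices: Y = [0, inf), integrand g and sampling density p of eta.
g = lambda y: np.exp(-y) * np.cos(y)   # J = int_0^inf g(y) dy = 1/2 exactly
p = lambda y: np.exp(-y)               # standard exponential density

N = 100_000
eta = rng.exponential(1.0, size=N)     # independent samples of eta with density p
x = g(eta) / p(eta)                    # x_k = g(eta_k)/p(eta_k), each unbiased for J
print(x.mean())                        # the estimate (1.1); close to 0.5
```

Here $g/p$ is bounded, so $\mathrm{D}\xi < +\infty$ and the confidence interval discussed next applies.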

To evaluate the error of the estimate (1.1), a confidence interval is usually calculated which contains $\mathrm{E}\xi$ with probability $1 - \varepsilon$, where $0 < \varepsilon < 1$. The central limit theorem can be used to construct the confidence interval approximately, provided that $\mathrm{D}\xi < +\infty$. For $N$ sufficiently large, the following asymptotic equality holds:

$$ \mathrm{P}\bigl(|\bar{x}_N - J| < x\sqrt{\mathrm{D}\xi/N}\bigr) \approx 2\Phi(x) - 1, $$

where $\Phi$ is the standard normal distribution function.

$$ \cdots \; > \; \max_x \int_X k^2(x', x)\,dx'. $$

Note that the relation


$$ \cdots \;=\; O(\delta^{-k/2}) $$

can be considered to be in favour of the histogram method, as the greater number of trajectories simulated makes it possible to estimate with a greater precision certain special functionals, root-mean-square errors and also, for example, probabilistic characteristics of solutions to problems with random parameters when double randomization is used. Also note that the optimal relation between the parameters $n$ and $h$ in the histogram method can be estimated in a rather simple way. On the other hand, with the local estimate used, the values of $f(x)$ are estimated in a more dependent way, which makes it possible to estimate with a greater precision, for example, the quantity $\arg\sup f(x)$. In addition, when using local estimates, we can in principle utilize a smoother prolongation of order $m$, with the replacements $h \to h^{2/m}$, $k \to 2k/m$. As a result, we obtain

$$ h(m) = \left[\frac{k\,\delta^2}{(2m+k)\,c^{(m)}}\right]^{1/(2m)}, \qquad n(m) = \frac{d_1(2m+k)}{2m\,\delta^2}\left[\frac{(2m+k)\,c^{(m)}}{k\,\delta^2}\right]^{k/(2m)}, $$

$$ S(m) = t_1\, n(m) = \frac{t_1\,d_1(2m+k)}{2m\,\delta^{2+k/m}}\left[\frac{(2m+k)\,c^{(m)}}{k}\right]^{k/(2m)}. $$
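The trade-off behind these expressions can be examined numerically. The sketch below assumes the generic error model $\mathrm{error}^2(h, N) \approx c\,h^{2m} + d/(N h^k)$ (squared bias plus statistical error) and searches for the $h$ that minimizes the number of trajectories $N$ needed to reach a prescribed accuracy $\delta$; the constants $c$, $d$, $t$, the orders $m$, $k$ and the search grid are illustrative assumptions, not values from the book.

```python
import numpy as np

def optimal_cost(m, k, c, d, t, delta):
    """Minimize the cost S = t*N subject to c*h**(2m) + d/(N*h**k) <= delta**2 over h."""
    h_max = (delta**2 / c) ** (1.0 / (2 * m))        # beyond this the bias alone exceeds delta
    h = np.linspace(1e-3, 0.999 * h_max, 10_000)
    N = d / (h**k * (delta**2 - c * h ** (2 * m)))   # trajectories required for each h
    i = np.argmin(N)
    return h[i], N[i], t * N[i]

# Illustrative constants: smoothing order m = 2, dimension k = 3, accuracy delta = 0.05.
h_opt, N_opt, S_opt = optimal_cost(m=2, k=3, c=1.0, d=1.0, t=1.0, delta=0.05)

# For this error model the optimum satisfies h**(2m) = k*delta**2 / ((2m + k)*c).
h_exact = (3 * 0.05**2 / (7 * 1.0)) ** 0.25
print(h_opt, h_exact, S_opt)
```

Balancing the bias and statistical terms in this way is what produces the $(2m+k)$ factors in the expressions above; with a smoother prolongation (larger $m$) the optimal $h$ grows and the cost exponent in $\delta$ decreases.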

The minimization of the estimation cost for the solution $f(x)$ by the 'conjugate walks' method is obviously carried out in the same way as for the local estimates method and leads to the same expressions with the following replacement: $t_1 \to t'$, $d_1 \to d'$. The quantity $t'$ means the average cost of simulation of the conjugate trajectory and can be close to the average cost of simulation of a trajectory in the histogram method, i.e. to $t$. But the quantity $d'$ can be essentially superior to the corresponding coefficient $d$ in the histogram method. This is implied, for example, by the estimates of Eq. (1.31) and Eq. (1.33). The local estimates used for the solution of transport theory problems, as a rule, have infinite variances (see Section 5.1) because of the fact that

$$ \int_X k^2(x', x)\,dx' = +\infty. $$

We can overcome this difficulty by carrying out an '$\varepsilon$-cut' of the kernel when computing the local estimate, i.e. we can assume that

$$ k_\varepsilon(x', x) = \begin{cases} k(x', x), & |x' - x| > \varepsilon, \\ 0, & |x' - x| \le \varepsilon. \end{cases} $$

For $\varepsilon \to 0$ we also have the following asymptotic expressions: $\cdots$
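As an illustration of the $\varepsilon$-cut, here is a minimal numerical sketch with a model kernel $k(x', x) = 1/|x' - x|$ on $X = [0, 1]$; the kernel, the value of $\varepsilon$ and the quadrature grid are illustrative choices, not from the book. The square of the original kernel is not integrable in $x'$, while the cut kernel $k_\varepsilon$ has a finite integral of its square.

```python
import numpy as np

def k(xp, x):
    """Model kernel with a non-square-integrable singularity at x' = x."""
    return 1.0 / np.abs(xp - x)

def k_eps(xp, x, eps):
    """epsilon-cut of the kernel: zero inside the eps-neighbourhood of x."""
    return np.where(np.abs(xp - x) > eps, 1.0 / np.abs(xp - x), 0.0)

# Crude midpoint-rule check on X = [0, 1] that the cut makes int k_eps(x', x)**2 dx' finite.
M = 2_000_000
xp = (np.arange(M) + 0.5) / M
x, eps = 0.5, 1e-2
dx = 1.0 / M
print(np.sum(k_eps(xp, x, eps) ** 2) * dx)   # about 2/eps - 4 = 196: finite
print(np.sum(k(xp, x) ** 2) * dx)            # huge, and grows as the grid is refined
```

In a local estimate the cut makes the variance finite at the price of a bias that vanishes as $\varepsilon \to 0$.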

Compare now the optimal cost of the 'frequency polygon' method with that of the logarithmically biased local estimation. We have

$$ \cdots $$

If $\operatorname{mes}(X) \,\cdots\, d_0/q$, which practically does not affect the optimal values of $h$ and $n$. In this section the global estimates with respect to the metric $L_2(X)$ were considered. It is possible to obtain similar results for the metric $C(X)$ by using the theorems of imbedding into the space $C(D)$, provided $2l > k$. When using local estimates or the 'conjugate walks' method these results are the same as in Section 1.1 with specific constants. If the proper derivatives of the kernel exist, then the cost of the local estimates method is similar to the cost of dependent simulation of the 'conjugate walks'. The similar problem for the 'frequency polygon' will be considered in Section 5.7 in comparison with the 'free-path estimate' method.

1.6. MONTE CARLO METHODS FOR SOLVING PROBLEMS WITH STOCHASTIC PARAMETERS

Various examples of introducing additional randomness for constructing effective simulation algorithms can be found in the literature devoted to Monte Carlo methods (see, for example, [17]). This section is concerned with the optimization of randomized algorithms


for estimating probabilistic characteristics of equations with random parameters. In this connection, randomized models for random fields are suggested in the Appendix. Randomized estimation of the statistical moments of the solution is presented below. Assume a linear functional equation $L\varphi = f$ to be solved by the Monte Carlo method on the basis of simulation of a stochastic process (denote the trajectories of this process by $\omega$). This means that random variables $\xi_k(\omega)$ are constructed so that

$$ \mathrm{M}\xi_k(\omega) = J_k, \qquad k = 1, 2, \dots, m, $$

where $J_k$ are the functionals of $\varphi$ to be evaluated ($\mathrm{M}$ denotes the mathematical expectation). Let the operator $L$ and the function $f$ depend on a random field $\sigma$ (for example, a random medium in transfer theory, a random force in elasticity theory, etc.). Also, $\xi_k = \xi_k(\omega, \sigma)$, $J_k = J_k(\sigma)$, and

$$ \mathrm{M}[\xi_k(\omega, \sigma) \mid \sigma] = J_k(\sigma), $$

where the variables $\omega$ and $\sigma$ are generally not independent. Consider the problem of evaluating the quantities $\mathrm{E}J_k(\sigma)$, $k = 1, 2, \dots, m$.
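A minimal sketch of this double randomization for a toy problem (the scalar 'random field' $\sigma$, the estimator and the sample sizes are illustrative assumptions, not from the book): an outer loop samples realizations of $\sigma$, and for each realization an inner Monte Carlo over trajectories $\omega$ gives an unbiased estimate of $J(\sigma)$, so the overall average estimates $\mathrm{E}J(\sigma)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def xi(sigma, n_traj, rng):
    """Inner Monte Carlo: unbiased estimate of J(sigma) = E[omega | sigma] = 1/sigma,
    obtained from n_traj simulated 'trajectories' omega ~ Exp(sigma)."""
    omega = rng.exponential(1.0 / sigma, size=n_traj)
    return omega.mean()

# Double randomization: sample the random parameter sigma, then simulate trajectories.
M = 50_000                                   # number of sampled realizations of sigma
sigma = rng.uniform(1.0, 2.0, size=M)        # toy 'random field': a single random parameter
estimates = np.array([xi(s, n_traj=1, rng=rng) for s in sigma])

print(estimates.mean())                      # estimates E J(sigma) = ln 2 = 0.693...
```

Even a single inner trajectory per realization of $\sigma$ keeps the combined estimate unbiased; how to split the effort between the outer and inner sampling is the kind of optimization question addressed in this section.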



… then $\rho(K_p) = \cdots$; the proof is similar to the proof of Theorem 2.2.

It follows from Theorem 2.3 that the variances of the estimates of the solution components for system (2.11) are finite if, for example, $\|K_{p,i}\| < 1$, $i = 1, 2, \dots, m$. This gives a simple condition of variance finiteness for the vector algorithm for the multigroup transfer equations proposed in [52]. Here, the transition into another group is treated as absorption. Therefore the equality $\max_i \|K_{p,i}\| = \|K_p\|$ holds.

2.2. CALCULATION OF DERIVATIVES AND PERTURBATIONS WITH RESPECT TO PARAMETERS

The kernels of integral equations often depend on parameters describing certain variable characteristics of the original information, for example, properties of the medium in which the process is considered. In this connection, various problems can be considered using observations of some functionals of the solution, for example, inverse problems of predicting the real values of the parameters [2,28]. Besides, optimization problems with respect to the parameters for these functionals are considered. When the parameters are random, we also consider the problem of calculating probabilistic characteristics of the solution. It is not difficult to see that to solve these problems it is useful to have an algorithm for calculating perturbations of the solution or of the functionals under perturbations of the parameters. Sometimes it is more effective to calculate derivatives of the solution (or functionals) with respect to the parameters. Below, we present a special vector technique for calculating the derivatives and the perturbations mentioned. Consider an integral equation of the second kind with a kernel depending on a parameter $\lambda$:


$$ \varphi_\lambda(x) = \int_X k_\lambda(x', x)\,\varphi_\lambda(x')\,dx' + f_\lambda(x). $$

Differentiating this equation $n$ times with respect to $\lambda$ gives the triangular system

$$ \varphi_\lambda^{(n)} = \sum_{i=0}^{n} \binom{n}{i} K_\lambda^{(i)} \varphi_\lambda^{(n-i)} + f_\lambda^{(n)}, \qquad n = 0, 1, \dots, m, \tag{2.13} $$

where $K_\lambda^{(i)}$ is the integral operator with the kernel $\partial^i k_\lambda(x', x)/\partial\lambda^i$, or, in the operator form, $\Phi = \mathbf{K}\Phi + F$.
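To make the weighting scheme concrete, the following minimal sketch simulates a value-and-derivative (two-component) collision estimate for a toy equation of this form; the kernel $k_\lambda(x', x) = \lambda$, the source $f \equiv 1$, the functional with $h \equiv 1$ and the transition mechanism are illustrative choices, not the book's examples. The derivative weight is propagated by $Q'_n = Q'_{n-1}\,k_\lambda/p + Q_{n-1}\,(\partial k_\lambda/\partial\lambda)/p$, which is the $n = 1$ component of the triangular system (2.13).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem on X = [0, 1]:  phi = K_lam phi + f  with kernel k_lam(x', x) = lam,
# f(x) = 1, and the functional J(lam) = int h(x) phi(x) dx with h(x) = 1.
# Exact values: J(lam) = 1/(1 - lam),  dJ/dlam = 1/(1 - lam)**2.
lam, q = 0.5, 0.7            # parameter value and chain-continuation probability

def one_trajectory(rng):
    Q, dQ = 1.0, 0.0         # weight for phi and weight for its lambda-derivative
    score, dscore = Q, dQ    # collision estimates, scored at every collision (h = 1)
    while rng.random() < q:  # continue the chain with probability q, else absorb
        # next state is uniform on [0, 1]; this constant kernel does not depend on it
        dQ = dQ * lam / q + Q * 1.0 / q   # derivative weight: old dQ*k/p + old Q*(dk/dlam)/p
        Q = Q * lam / q                   # ordinary weight: kernel / transition density
        score += Q
        dscore += dQ
    return score, dscore

N = 200_000
est = np.array([one_trajectory(rng) for _ in range(N)]).mean(axis=0)
print(est)                   # approximately (2.0, 4.0) for lam = 0.5
```

For $\lambda = 1/2$ the exact values are $J = 2$ and $\mathrm{d}J/\mathrm{d}\lambda = 4$; both components are estimated from the same simulated trajectories, which is the practical point of the vector estimates.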