
Spectral Models of Random Fields in Monte Carlo Methods

SPECTRAL MODELS OF RANDOM FIELDS IN MONTE CARLO METHODS

S.M. PRIGARIN

VSP
UTRECHT · BOSTON · KÖLN · TOKYO
2001

VSP BV
P.O. Box 346
3700 AH Zeist
The Netherlands

Tel: +31 30 692 5790
Fax: +31 30 693 2081
[email protected]
www.vsppub.com

© VSP BV 2001
First published in 2001
ISBN 90-6764-343-2

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

Printed in The Netherlands by Ridderprint bv, Ridderkerk.

To my beloved

Grannies, Eugenia and Maria

Preface

The spectral models proposed here represent a new class of numerical methods aimed at the simulation of random processes and fields. Spectral models were developed in the 1970s, and they have proved to be highly promising for various applications. In Russia, the papers by Prof. G.A. Mikhailov gave impetus to a large number of investigations on spectral models of random fields. Nowadays spectral models are extensively used for stochastic simulation in atmosphere and ocean optics, turbulence theory, analysis of pollution transport in porous media, astrophysics, and other fields of science.

The scalar spectral models and some of their applications are presented in Chapter 1, together with a new technique of successive refinement of spectral models and conditional spectral models. Chapter 2 deals with vector-valued spectral models. Convergence of spectral models is studied in Chapter 3. Problems of optimization and convergence for functional Monte Carlo estimators are considered in Chapter 4. In addition, the monograph includes four appendices, where auxiliary information is presented and additional problems are discussed. The appendices deal with (a) properties and methods of simulation of Gaussian distributions, (b) numerical solution of boundary value problems for linear systems of stochastic differential equations, (c) interpolation of stationary random sequences and of their correlation functions, and (d) coding of multiplicative pseudorandom number generators in high-level programming languages.

The book is intended for experts in Monte Carlo methods, as well as for senior and postgraduate students interested in the theory and applications of stochastic simulation. In preparing the text, the author used the notes of the lectures he has given at Novosibirsk State University. Knowledge of the basic concepts of linear algebra, probability theory and functional analysis is desirable for reading the book. A number of "exercises" and "tasks" are offered for independent study. Some of the "tasks" contain comparatively difficult and unsolved problems that can be used as subjects of graduate and postgraduate research.

The author would welcome readers' remarks, which can be sent to the following address or by electronic mail: Institute of Computational Mathematics and Mathematical Geophysics, pr. Lavrentieva 6, Novosibirsk, 630090, Russia (E-mail: [email protected]).

Acknowledgements. My thanks are due to a number of my teachers, colleagues, and students. I am sincerely grateful to Gennadii Mikhailov for his encouragement and stimulating ideas. I would like to thank my colleagues Tatiana Averina, Klaus Hahn, Stefan Heinrich, Boris Kargin, Klaus Noack, Vasilii Ogorodnikov, Ulrich Oppel, Anton Voitishek, and Gerhard Winkler for close cooperation and fruitful discussions. Special thanks are due to my students at Novosibirsk State University. Finally, I take the opportunity to thank the Russian Foundation for Basic Research and INTAS for financial support (grants 00-01-00797, 00-05-65456, IR-97-1441).

Akademgorodok, Novosibirsk, Russia
September 2000

Sergei M. Prigarin

Contents

Preface

Chapter 1. Approximate modelling of homogeneous Gaussian fields on the basis of spectral decomposition
  1.1. Spectral models of random processes and fields
    1.1.1. Basic principles of constructing spectral models
    1.1.2. Generalized scheme. About numerical analysis of the error
    1.1.3. Examples of spectral models of stationary processes
    1.1.4. Examples of spectral models for isotropic fields on a plane
    1.1.5. Spectral models for isotropic fields in three-dimensional space
  1.2. Technique of successive refinement of spectral models on the same probability space
    1.2.1. Description of the algorithm
    1.2.2. Auxiliary statements and examples
  1.3. Conditional spectral models
    1.3.1. Statement of the problem
    1.3.2. Method of solving the problem
    1.3.3. On realization of the numerical algorithm
  1.4. Specialized models for isotropic fields on a k-dimensional space and on a sphere
    1.4.1. Models of isotropic fields on a k-dimensional space
    1.4.2. Spectral models of isotropic fields on a sphere
  1.5. Certain applications of scalar spectral models
    1.5.1. Simulation of clouds
    1.5.2. Spectral model of the sea surface undulation
  1.6. Further remarks
    1.6.1. Nonhomogeneous spectral models
    1.6.2. Approximate modelling of Gaussian vectors of stationary type by discrete Fourier transform

Chapter 2. Spectral models for vector-valued fields
  2.1. Spectral representations
    2.1.1. Spectral representations for complex-valued vector random fields
    2.1.2. Spectral representations for real-valued vector random fields
  2.2. Isotropy
  2.3. Simulation of random harmonics
    2.3.1. Complex-valued harmonics
    2.3.2. Real-valued harmonics
    2.3.3. About simulation of complex-valued Gaussian vectors
  2.4. Spectral models of homogeneous Gaussian vector-valued fields
  2.5. Examples of simulation
    2.5.1. Gradient of an isotropic scalar field
    2.5.2. Solenoidal and potential isotropic random fields
    2.5.3. Vector-valued isotropic fields on plane and in space

Chapter 3. Convergence of spectral models of random fields in Monte Carlo methods
  3.1. Introduction
  3.2. Conditions of weak convergence in spaces C and C^p
    3.2.1. Weak convergence in the space of continuous functions
    3.2.2. Weak convergence of probability measures in spaces of continuously differentiable functions
  3.3. Convergence of spectral models of Gaussian homogeneous fields
    3.3.1. Spectral models
    3.3.2. Weak convergence of spectral models in spaces of continuously differentiable functions
    3.3.3. On weak convergence in spaces of integrable functions
  3.4. Convergence of conditional spectral models
    3.4.1. Convergence of finite-dimensional distributions
    3.4.2. Weak convergence in spaces Lp and Cp
  3.5. Remark on allowance for bias of estimates constructed by approximate models

Chapter 4. On optimization and convergence of functional Monte Carlo estimators
  4.1. Asymptotic efficiency of estimators in the method of dependent tests. Convergence conditions
    4.1.1. Asymptotic computational cost
    4.1.2. Convergence conditions
  4.2. Optimal functional estimators in Sobolev's Hilbert spaces
    4.2.1. H-deviation of estimators in Sobolev's Hilbert spaces
    4.2.2. H-optimal estimators for computing integrals depending on a parameter
    4.2.3. H-optimal "absorption" estimator for computing a family of functionals of the solution of an integral equation of the second kind
    4.2.4. Investigation of the H-optimal "collision" estimator
  4.3. Convergence of collision and absorption functional estimators
  4.4. On convergence and optimization of local estimators
  4.5. Additional remarks
    4.5.1. On convergence in the method of dependent tests
    4.5.2. Discrete and generalized versions of optimization for Monte Carlo estimators in Sobolev's Hilbert spaces

Appendix A. Gaussian distributions: properties and simulation
  A.1. Gaussian distributions
  A.2. Conditional Gaussian distributions
  A.3. Schemes of moving average and autoregression
  A.4. Generalized Wiener process

Appendix B. Solution of boundary value problems for linear systems of stochastic differential equations
  B.1. General relations
  B.2. Boundary value problems for time-invariant linear systems
  B.3. On correctness of boundary value problems
  B.4. Stationary boundary value problems
  B.5. Boundary value problems for the second-order autonomous linear SDE

Appendix C. On interpolation of positive definite functions and stationary random sequences

Appendix D. Coding of multiplicative pseudorandom number generators
  D.1. Multiplicative generators: description and coding
  D.2. Procedures in Pascal, Fortran and C

Bibliography
Notation
Index

Chapter 1

Approximate modelling of homogeneous Gaussian fields on the basis of spectral decomposition

1.1. Spectral models of random processes and fields

1.1.1. Basic principles of constructing spectral models

Let us consider a real homogeneous Gaussian random field w(x), x ∈ ℝ^k, with zero mean, unit variance and correlation function R(x) = M w(x + y) w(y). The spectral representations of the random field and of its correlation function are of the form (see, for example, [101])

w(x) = ∫_P cos⟨x, λ⟩ ξ(dλ) + ∫_P sin⟨x, λ⟩ η(dλ),     (1.1)

R(x) = ∫_P cos⟨x, λ⟩ μ(dλ).     (1.2)

Here ξ(dλ), η(dλ) are real-valued orthogonal stochastic Gaussian measures on a half-space P that will be called the "spectral space" (i.e., P is a measurable set such that P ∩ (−P) = {0}, P ∪ (−P) = ℝ^k); μ(dλ) is the spectral measure of the random field w(x), and ⟨.,.⟩ denotes the scalar product in ℝ^k. In this case the following properties are fulfilled:

1) Mξ(A) = Mη(A) = 0;
2) Mξ(A)η(B) = 0;
3) Mξ²(A) = Mη²(A) = μ(A);
4) if A ∩ B = ∅, then Mξ(A)ξ(B) = Mη(A)η(B) = 0,

where A and B stand for measurable subsets of P.

The main idea underlying the methods of constructing spectral models is to take an approximation of the stochastic integral (1.1) as a numerical model of the random field w(x).


A simple spectral model may be constructed as follows. Let us fix some splitting of the "spectral space" P:

P = Q_1 + ... + Q_n,     Q_i ∩ Q_j = ∅ for i ≠ j.

As an approximation for (1.1) we consider

w_n(x) = Σ_{j=1}^{n} μ^{1/2}(Q_j) [ξ_j cos⟨λ_j, x⟩ + η_j sin⟨λ_j, x⟩].     (1.3)

Here ξ_j, η_j are independent Gaussian random variables,

Mξ_j = Mη_j = 0,     Mξ_j² = Mη_j² = 1,

while the vectors λ_j ∈ P belong to the corresponding sets Q_j. The random fields w_n(x) in formula (1.3) are homogeneous Gaussian fields with the correlation function

R_n(x) = Σ_{j=1}^{n} μ(Q_j) cos⟨λ_j, x⟩.

Remark. Formula (1.3) may be reduced to an equivalent, but more effective for modelling, form:

w_n(x) = Σ_{j=1}^{n} μ^{1/2}(Q_j) r_j cos(⟨λ_j, x⟩ + φ_j),     (1.4)

where r_j = (ξ_j² + η_j²)^{1/2} are Rayleigh-distributed random amplitudes and φ_j are independent random phases uniformly distributed in (0, 2π).

Another way of constructing spectral models is based on randomization of the spectrum. Consider a random field of the form

w(x) = r cos(⟨Λ, x⟩ + φ),     (1.5)

where the amplitude r, the frequency Λ, and the phase φ are random and mutually independent. If Mr² = 1, the value of φ is uniformly distributed in the interval (0, 2π), and the frequency Λ is distributed according to the measure μ, then one can easily show that the field (1.5) is homogeneous with the spectral measure μ.

Summing independent realizations of random harmonics (1.5) and normalizing the sum, we obtain an asymptotically Gaussian sequence of homogeneous random fields with the spectral measure μ:

w_n(x) = (1/n)^{1/2} Σ_{j=1}^{n} r_j cos(⟨Λ_j, x⟩ + φ_j).     (1.6)

Such models were discussed in [110]. Thus, the spectrum randomization principle (the frequencies λ_j are chosen at random according to the spectral measure μ) allows one to reproduce exactly the correlation function of the field (1.1), which is of particular importance for solving problems by the Monte Carlo method, when independent realizations of a random field are simulated many times in order to compute some functionals.

The most promising spectral model for Monte Carlo methods was proposed in [71]. This model combines the two principles described above: partitioning of the spectral space and randomization of the spectrum. The modelling formula coincides with (1.3), the essential distinction being that the vectors λ_j are chosen randomly in the corresponding domains Q_j according to the distributions generated by the spectral measure μ. As in the case of model (1.6), the spectrum randomization principle allows one to reproduce exactly the correlation function of the limit field (1.1). The possibility to vary the partitioning of the spectral space makes the model more flexible than model (1.6) and enables one to achieve higher accuracy with a smaller number of harmonics. We call models (1.3) and (1.6), constructed with the spectrum randomization principle, the randomized spectral models.

τ/j in formula (1.3) can be non-Gaussian

as well. Non-randomized model is non-Gaussian in this case and onedimensional distribution of randomized spectral model is non-Gaussian,

4

Chapter

1. Approximate

spectral

models

too. But the property of the models to be asymptotically Gaussian is retained. In particular, one can use spectral models with random variables 77¿ which take values +1, - 1 with probability 0.5 in formula (1.3) and, correspondingly, r j = 2 1 / 2 in formulas (1.4), (1.6). However, in what follows we shall assume that random variables £,·, ηj are Gaussian if the reverse is not mentioned. 1.1.2.

Generalized scheme. About numerical analysis of the error

Let us consider a generalized randomized scheme of approximate modelling of a homogeneous Gaussian random field w(x) with zero mean, unit variance and spectral measure μ: wn(x)

where

n = Σ aj 3=1

[6' c o s


sin
],

(1.7)

η^ are independent random variables, = MVj

= MbVk

= 0,

M g = Μη? = 1,

Xj are random vectors independent of (£,·, 77j)j=i...n and distributed according to such probability measures μ¿ that μ(ά\) = ^ α ^ ( ά \ ) , 3 =1

¿ a j = l.

(1.8)

3=1

Condition (1.8) guarantees the coincidence of spectra for the field w(x) and the approximate model (1.7), while from convergence max a,· —> 0 it follows that the approximate model is asymptotically j 0,

ζ ~ c

aexp(—az)

Va2 +12

^(az)e~a* (1 + az)

a2"+1 (a2 + i2)1/+1/2

a

Ki(az)

aV2(azr+V» Γ(Ϊ/ +1/2)2"

'

2"ν/πΓ(ι/ + 1/2) 1 - j (/ α — Xι),

2

χm + yps'mum) , 771 —

1

by δι, ¿2, Se for norms in spaces Li[0, Τ], Z^fO, Τ] and C[0, Γ] respectively. Here R(x, y)=J0(p^x2+y2), while mathematical expectation is taken with respect to joint distribution of random variables u m ) m=1,..., M. Table 1.4 presents the errors of spectral models ß l — R3 for Τ = 1, ρ = 10, M = 10. Randomized model RI has much smaller errors because of splitting spectrum and, particularly, because of dependence between random vectors \m = (p cos u>m, ρ sin um).

1.1.

Spectral

models

of random

processes

and

fields

15

Table 1.4 Model

Si

¿2

Sc

Rl R2 RS

0.00006 0.081 0.169

0.00028 0.104 0.207

0.0031 0.285 0.520

An arbitrary isotropic field on a plane can be represented as a superposition of isotropic fields with correlation functions J o ( p ^ x 2 + y 2 )· Therefore, an approximate model of an arbitrary isotropic field on a plane can be built as a weighted sum J2a p w\^(x,y) of spectral models of the form (1.12) (cf. (1.10), (1.11)). Example 4. (Figure 1.2). B(r)

=

exp(-ar),

g(p)

= ap(p2

G(p) = 1 — a(a 2 + p 2 ) - 1 / 2 ,

+ a 2 )~ 3/2 ,

ρ >

0,

p> 0,

Figure 1.2. Realization of spectral model of a homogeneous isotropic random field with exponential correlation function

16

Chapter

1. Approximate

spectral

models

Example 5. (Figure 1.3) B

r

=

( >

G(p)

sin(ar) ^ ar

r

'

G / 2 — /> 2U/2' i! 1 / a(cr )/ Ρ

9KP) =

0

= l-(a

2

-p

2

)l'

2

/a,

0 a

'

'

otherwise,

p€[0,a],

G- 1 (c) = a ( l - ( l - c ) 2 ) 1 / a ,

c e [0,1).

Below we present relations between spectral measure oo

F(dA) = f(X)d\,

/(A) = - f B(r) cos(Ar)dr, πy o

λ > 0,

of a stationary random process, which is obtained as a trace on a straight line of a homogeneous isotropic field on 1R2, and a radial spectral measure G(dp) = g(p)dp of the isotropic random field (for details see [83] and references presented there): oo

2 2

f(\)=lj(p -\ )~1/2dG(p),

λ > 0,

λ π/2

F(A)=-

oo

[ G(X/sm9)d9

= G(X) + - f &Tcsm(\/p)dG(p), η J

π J ο

A > 0,

λ

τγ/2

1 -G(p)

= p J f(p/

sin Θ)

s'm~2(θ)άθ, R

g(p)=p

2

2

(R

1/2

- p )~ f(R)

- I f { λ ) ( λ 2-

z'Y^dX

, p>

0.

In the last equation we assume that /(A) > 0 for A G [0, R) and /(A) = 0 for A € (R, +oo). In particular, these formulas imply that if the radial spectrum has a power density, i. e. the following is fulfilled const ρ r , ρ > ρ*, 9(P)

=

0

>0,

r >

1),

pe[o,p*),

then the spectral density /(A) is a power function, too, with the same index for A > p*\

17

1.1. Spectral models of random processes and fields

Figure 1.3. Realization of spectral model of a homogeneous isotropic random field with correlation function sin(cr)/(cr) G(p) = d [l - {p/p*)~r+l],

j

ττ/2

λ

Cl

κρ*

Ρ

P*i

>

-r+1

d0=c2+c3\~r+l,

sin0/

\>ρ*,

where Ci, C2, C3 are constants. 1.1.5.

Spectral models for isotropic fields in threedimensional space

Let w(x), χ G M 3 , be a homogeneous isotropic random field with zero mean and unit variance. Correlation function of the field w(x) can be represented as Μ ΐ φ Μ Ο ) = B{r) = JSmMdH(7), J

(1.13)

ΎΓ

where r = ||x|| is Euclidean norm in 1R, , H(7) is the function of probability distribution in [0,+00), 2 f

B(r)

π J

r

H(7) = — / [sin(7r) — (7r) cos(7r)]

dr.

Chapter 1. Approximate spectral models

18

The correlation function B{r) of a homogeneous isotropic field on ]R3 has the following properties (cf., for example, [124], [64]): 1) B(r) > -1/3; 2) function B(r) is differentiable in (—oo, +oo). Let us assume that distribution dH(7) has a density h(7), 1ι(η)άη = dH(7). Inverse transformation to (1.13) for the density is the following

MT)

=

2 œ ~ J(TR)

s'm^ r) Β (r) dr.

Functions h(7) and H(7) will be called radial spectral density and radial spectral function of a homogeneous isotropic random field in a three-dimensional space, while corresponding measure will be referred to as radial spectral measure. E x a m p l e 6. Let w(x) be a homogeneous isotropic field on 1R3 with zero mean, unit variance and radial spectral measure concentrated at a point 7, sin(7r) B(r) = ηΤ As a numerical approximation of the field w(x) we consider the following spectral model

wm(x)

=

M~1/2

M Σ [im COS < Xm, χ > +ηπι sin < \ m , χ > ] , m=1

where £ m , ηιη are independent standard normal variables and < .,. > denotes the scalar product in IR3. For a non-randomized model, vectors \m must be located "uniformly" on a hemisphere S in IR3 with radius 7 (i.e. S Π ( - 5 ) = 0, S U {-S) = {||λ|| = 7 » · For any randomized model it is necessary for vectors Xm to be distributed according to such M measures ^m(c?A) that measure Σ PmidX) is the uniform distribution m=1 on the hemisphere S. One version of the randomized model is presented below:

1.1. Spectral models of random processes ι ij

W/xjÍ®)=( ) ^»i

=

1 2 Kji

^

= 7

tij,

1/2

I

J cos

Σ Σ i= 1 j=l

3

tij — 1 —

< 2

® > —a

sin
],

«'j))

τ[•

'

= 7 V 1 - ti,cos

19

and fields

(i - Ai) j

λ^ = 7 \ Λ - t l sin [ ( i - ^ O j j Here a¿j, /?¿¿ (¿ € { 1 , . . . , /}, j € { 1 , . . . , «/}) are independent random variables uniformly distributed in (0,1). One of the methods to construct approximate models of homogeneous isotropic fields in ]R3 with an arbitrary spectral density /i(7) consists in using spectral models of the form Ν WNM(X) = Σ

n=1

a

nWMl\x,y),

where a) a2n = 1/N, 7„ are independent random variables identically distributed in [0, +oo) with density h(7) or 6n

b) 0 = b0 < bi < ... < bn = +00, a2n =

f

h(j)dy,

ηη

are random

6„_l

variables distributed in corresponding intervals sity h(-y)/a2n.

[ 6 „ _ i , bn)

with den-

There are some examples of correlation functions of isotropic random fields in three-dimensional space and the corresponding radial spectral densities: l.

MT)

B(r)

-,

7 £ (0,α),

0

otherwise,

a

Si(ar) = Si(ar) ar

Si(z)

f s'mt = / ——dt] J t

20

Chapter 1. Approximate

2

spectral

2a

·

.

= ^¡TfT). τ >

3.

M7) =

7 > o,

4.

B(r)=e~ar'

B(r)

=

=

models 1-

e~ar

^ ψ ϋ ί ; r/a

/ 1 ( 7 ) = 2 - ^ - 1 / V 3 / 2 7 2 exp(-72/(4a)),

7

> 0.

R e m a r k . Evidently, the time of simulation for spectral models increases for larger dimensions of parameter of a random field. If values of a spectral model are calculated in nodes of a regular grid, then it is necessary to get the multitude of numbers cos(kS + ψ) for many k G { 0 , 1 , 2 , . . . } . In this case the following relation allows to reduce considerably the time of computation cos((fc + 1)5 + φ) — 2 cos(¿) cos (kS + ) — cos((fc — 1)5 + φ). Below we present relations between spectral measure F(d\)

= f(X)d\,

9

00

/(A) = - f B{r) cos(Ar)dr, π J o

X > 0,

of a stationary process which is obtained as a trace on a straight line of a homogeneous isotropic field on IR3, and a radial spectral measure of the isotropic random field (cf., [25]): ι F ( A ) = J H(X/x)dx,

X > 0,

0 1

oo

/(A) = J h(X/x)x~1dx

= J h(t)t~ldt,

0 M7) = - 7 / ' ( 7 ) ,

A > 0,

A 7>0.

The last equation implies / ' ( 7 ) < 0. It is easy to show that (similar to isotropic fields on a plane) if the radial spectral density h is a power function, then the density f is a power function as well with the same index.

1.2. Successive refinement of spectral models

21

Task 1.1. Verify the following proposition: if g(p) denotes a radial spectral density of isotropic field on 1R2 obtained as a trace of an isotropic field on IR3 with radial spectral density h(7), then Y

h(>y)

Formulate and prove the corresponding statements for isotropic fields and their traces in spaces of arbitrary dimensions. Task 1.2. Develop methods of searching for optimal spectral models of homogeneous Gaussian fields having the minimal error for a fixed number of harmonics. Consider different ways to define the value of errors.

1.2.

1.2.1.

Technique of successive refinement of spectral models on the same probability space Description of the algorithm

The technique of conditional Gaussian distributions allows to develop algorithms of successive refinement of spectral models of random processes and fields. Let us illustrate it by the following example. Assume that for approximate simulation of a stationary Gaussian process w(t) with mean zero and spectral density / ( λ ) , Λ G [0, + 0 0 ) we use a spectral model η w

*( ) = Σ x

& cos(Aj-x) + % 8ίη(λ,·χ)],

(1.14)

3=1

where A¿ € A¿, U" =1 A¿ = [0,+oo), A j Π Afc = 0 for j φ k;

are

independent Gaussian random variables with mean zero and variance D 6 = Di/,· = /

f(X)d\.

To obtain better accuracy we have to increase the number of harmonics in formula (1.14). As a refined spectral model of w(t) we take

22

Chapter 1. Approximate spectral models

«>"(*) = ¿[ei1)cos(Af)x) + efcos(Af)a;) + 3= 1

where A ^ e ? Çj \ variance

X f e A?\ Af ] U A f

= A¿, A ^ O A f

= 0, and

^ a r e independent Gaussian variables with mean zero and D ( j m ) = Dq¡m) =

j

f(X)d\,

m = 1,2,

A(">

which are simulated under additional condition tf'

+ ^ f c ,

+

= m

(1.15)

(see formulas (1.18) below). Thus, the refined spectral model w**(t) contains twice as many harmonics as w*(t). The same improvement procedure can be performed for model w**(t), etc. Condition (1.15) gives possibility to build realisations on the same probability space and to gain convergence of trajectories of spectral models to the limit process w(t) (Figure 1.4).

Figure 1.4. Successive refinement of spectral model of stationary Gaussian process (number of harmonics: 4, 8, 16, 32, 64, 128, 256)

1.2.2.

A u x i l i a r y statements and examples

The following statement is actual for successive refinement of spectral models on the same probability space.

1.2. Successive refinement of spectral models

23

L e m m a 1.1. Suppose ξ is a k-dimensional Gaussian random vector with mathematical expectation m and nonsingular correlation matrix R, A is a I χ k-matrix, 0 < I < k, rang(A) = I, and b is a vector of dimension I. Then conditional distribution of the vector ξ provided that = b, is Gaussian with mean vector m + RAT(ARAT)~1 and correlation

(b — Am)

(1.16)

matrix R - RAT(ARAT)~1AR.

(1.17)

Proof. Formulas (1.16), (1.17) are true (see Appendix A) since random vector (ξ, Αξ)τ is Gaussian with mathematical expectation (m, Am)T and correlation matrix RAT ARAT

R AR

"

It remains now to ascertain that the square I x /-matrix ARAT singular: r a n g ( A / M T ) = rang (A\/RVRAT)

is non-

= r a n g ( A \ / ß ) = rang(A) = I.

Here we used the equality rang(i?) = r a n g ( ß ß * ) and the fact that the rank of a rectangular matrix is invariant under multiplication by a nonsingular square matrix (see, for example, [62], [119]). E x a m p l e A. Consider two-dimensional Gaussian random (£i>£2)T with mean zero and correlation matrix ri r

r r2

Conditional distribution of the vector (£i,£2) T provided Gaussian with expectation , [ Γχ + r cb r + r2 and correlation matrix

vector

+ £2 = b is

24

Chapter 1. Approximate

d —d

spectral

models

-d

d,

where c = l / ( n + 2r + Γ2), d — r^r2 — r 2 . Hence we obtain the following formulas for numerical modelling: , ( n r 2 - r 2 V/2 6 = ri + 2 r + r / 2

ε

, r1+r L + r +r~ô2r +I r x 2

(1.18)

Here ε is a standard normal random variable. Example B. Using expressions (1.18) one can easily derive formulas for simulation of η independent Gaussian random variables 6 , · · · > 6 with mean zero and variances σ\,..., σ 2 , provided that 6 + . . . + ξη = b: 6 = 6 = 6 =

2 + . . + σ 2\ 1/2 σιβι + 2 ;—-0, + . · + σ71 / *ι +

A

+ . · + σ¿ V 7 2 σ2ε2 + 2 σ + σ +. ·+ σ 2

σ

2

(6-6),

2 + . · + σ,2 \ 1/2 σ3 σ3ε3 + (6-6-6), + . · + σΗ / σί + - - - + σί 1/2

Cn-l^n-l +

6-1 — η—1

:(6-6-···-6-2),

6 = 6— 6 — 6 — ··· - 6-ι· Here ει, ε 2 , . . . , ε η _ι are independent standard normal random variables.

1.3.

Conditional spectral models

On the basis of the spectral decomposition, we develop here numerical algorithms that allow to model a Gaussian homogeneous random field

1.3. Conditional spectral models

25

subject to the known values of the field at fixed points (Figure 1.5). These algorithms can be used for extrapolation and interpolation of random functions.

Figure 1.5. Samples of conditional spectral models of a stationary random process

1.3.1.

Statement of the problem

Let w(x), χ e IR*, be a homogeneous real Gaussian random field with known expectation and spectral measure. We suppose that at the points X j

e Mk,

j =

i,...,J,

values of the random field are known: w(xj) = bj,

je

{1,2,...,

J}.

(1.19)

It is necessary to construct a numerical model of the random field w(x) that satisfies the condition (1.19), i.e. it is necessary to construct an algorithm for modelling the values of w(x), χ £ X, for a set X C These problems arise, for example, in constructing of stochastic models of natural objects, with the experimental data being taken into account. In particular, this problem is of interest for the dynamic-probability forecast of meteorological processes (see [29], [81]). The method proposed below is an approximate one and is based on the spectral decomposition of homogeneous random fields. Its peculiarity is that the number of the necessary arithmetic operations linearly

Chapter 1. Approximate spectral models

26

depends on the number of elements in the set X . The error of the spectral method is substantially defined by the space dimensions of this set rather than by the number of its elements. 1.3.2.

Method of solving the problem

For simplicity we assume that the mean value of the homogeneous random field w(x) is equal to zero. As the numerical approximation of the field we take a spectral model: Ν WN(X) = Σ°Η 71=1

cos

< λ„, a: >

sin < λ„, Χ > ] ,

(1.20)

where c„ > 0, Ση=ι cn < 0 0 ΐ ζ'η a r e independent standard normal random variables; λ η are vectors in ]Rfe, which are independent of the random variables ξη, ξ'η; and < .,. > denotes the inner product R f c . Thus, in order to model the random field w(x) we use the spectral model (1.20). At the first stage we determine the values of cn and simulate the vectors An according to the chosen spectral model. At the second stage we simulate the independent standard normal random variables ξη, Ç'n with the linear constraint

71=1

+

¿ = 1>···»4

(1-21)

where djn — cn cos < λ η , Xj >, d'jn — cn sin < Λ n ,Xj >. The following statements can be used to model the conditional Gaussian distributions. L e m m a 1.2. Suppose that the system of linear algebraic Αξ — b is a consistent system.

equations

A. Let ξ be a Gaussian random vector with expectation m and correlation matrix R such that b — Am € Ai2(H'!). Then the conditional distribution of the vector ξ, provided that = b, is the Gaussian one with expectation μ = m + R1/2{AR1/2)+{b = m + RAT(ARAT)+(b and correlation

matrix

- Am) = - Am)

(1.22)

1.3. Conditional spectral models B = R-

R1^2(AR1/2)+AR

= R-

RAT(ARAT)+AR.

27 (1.23)

B. If ξ is a vector of independent standard normal random variables, formulas (1.22), (1.23) have the form μ = A+b,

(1.24)

Β = I — A+A.

(1.25)

Hereafler we denote by A+ the pseudoinverse of the matrix A. We present a proof that is based on Lemma 1.1 and skeleton decomposition of a matrix. Let us prove item B. Suppose that A is the I χ k matrix of rank r and consider the skeleton decomposition of the matrix A: A

=

ΑΛΑ 1^2,

where A\ and A-¿ are matrices of rank r and dimensions I χ r and r χ k respectively. Random vector = Α^ζ of dimension r satisfies the equation = b. The conditions of the Lemma imply that the system = b is consistent and hence it is equivalent to the system Α^Αιξι = Ajb. Matrix AjAu as r χ r matrix of rank r, is nonsingular and t1 = (AjA1)-1Ajb

= A+b.

Therefore, equation Αξ = b is equivalent to = A+b, and we can apply Lemma 1.1: μ = A+A+b = A+b, B = I - A+A2 = I The latter equality is obvious:

A+A.

28

Chapter 1. Approximate spectral models A+A = A+A+A^

= At{Al A^-MfAiAa

= A+A2.

Item Β of Lemma 1.2 has been proved. To prove Item A we present vector ξ as ξ = Qe + m, where ε is a vector of independent standard normal variables, QQ* = R. Condition = A{Qe + m) = b implies (1.26)

AQe = b — Am.

This system is consistent if 6 — Am G Applying Item Β of the Lemma to the Gaussian vector ε, we obtain that under condition (1.26) vector ε has the following expectation and correlation matrix με =

(AQ)+(b-Am),

Be =

I-(AQ)+(AQ).

Hence, conditional distribution of the vector ξ = (¿ε +m has the expectation μ = Qμε + m = m + Q(AQ)+(b - Am) and the correlation matrix Β = QBeQ* = R - Q(AQ)+(AQ)Q*

= R-

Q(AQ)+AR.

If we put Q = R}!2 and take into account the relation (cf., for example, [1]) H+ = HT{HHT)+,

(1.27)

then we obtain (1.22), (1.23). Lemma 1.2 has been proved. L e m m a 1.3. Let ξ be a vector of independent standard normal random variables. Then the conditional distribution of the vector ξ, provided that ξ is the pseudo-solution of the system of linear algebraic equations Αξ = b, is the Gaussian one with the expectation (1.24) and the correlation matrix (1.25). Lemma 1.3 is a direct consequence of the Item Β in Lemma 1.2. Indeed, all the pseudo-solutions of the equation Αξ = b and only they

1.3.

Conditional

spectral

are the solutions of the consistent linear system into account (1.27), we get the required result. Let us denote by

ζ

the vector

(ξτ, (ζ')Τ)Τ,

• 6 •

ΑτΑξ

=

ATb.

Taking

where

II

• fí • II

0,

ν»€[0,2π],

= Τ1"-172 cos(m), tu(r,

ΜΊΓ)Φ(άΊ)·,

o 2) for k = 3 c 3 - >/2π,

/i(m, 3) = 2m + 1, oo . (m{eu...,ek-2,(:r), χ = (r,φ) (Ε Κ 2 , on the plane with correlation function B(r) = J0(jr): M 771 =

1

+ sin(m^)Jm(7r)d2)], where

(1.33)

Çffl are independent standard normal random variables.

The following properties of the model (1.33) can be pointed out: 1) MwM(r, =

2 re '2η)

with probability density proportional to Q from (1.38), r¿j and r[· are random variables with Rayleigh distribution, φ^ and φ'^ are uniformly distributed over [0,2ττ],

4 =

j j AiBj

StMMdp.

Random variables pi were simulated by the method of the inverse distribution function, and 9j were simulated by the rejection method (with the linear majorant for j > re/2 and the constant majorant for re being even). The results of simulation (1.40) of the surface j < of a wind-ruffled sea are presented in Figures 1.8, 1.9. The algorithm of modelling is defined by three parameters: m, re and p* which provide the required accuracy. The field w* has the requisite spectral density (1.39) and it is asymptotically Gaussian for max(m,re) —> oo,

p* —> oo.

These conditions are sufficient for the weak convergence of w* in the space of continuously differentiable functions (see Chapter 3). It may be noted that the proposed approach to numerical modelling of a wind-ruffled sea surface may be used in solving various applied problems. In particular, a spatio-temporal model of the form η wn(x,

ί)

=

Σ

airi

cos(


+

Ψΐ)ί

3=1

where /¿j are connected with \j by dispersion relation, seems to be promising for solving problems in laser remote sensing of a sea surface by numerical simulation (see [8]). A realization of the spatio-temporal model of the sea swell is presented by Figure 1.10. More detailed information on numerical models of clouds, sea swell, as well as some applications in study of radiation transfer processes, and comprehensive bibliography can be found in [96].

46

Chapter 1. Approximate

spectral

models

Figure 1.8. An example of the simulated topography of the sea surface roughness (spectral model)

Figure 1.9. Sunglint glitter on a wind-ruffled sea (results of simulation)

1.6. Further

remarks

47

F i g u r e 1.10. Results of simulation of the space - time structure of the sea surface (relief in a time order sequence)

1.6.

Further remarks

1.6.1.

N o n h o m o g e n e o u s spectral m o d e l s

The spectral models can be used for simulation of nonhomogeneous fields if, for example, one puts a} = aj(x), Xj — Xj(x) in formula (1.3). Some realizations of nonhomogeneous spectral models are shown in Figure 1.11. In general (see, for example, [31]) it is known that, if the correlation function B(x,y) of a nonhomogeneous random field w(x) is of the form: B{t,s)

= J g(t, X)g(s,

X)m(dX),

Λ

then the field w(x) allows the following representation: w(t)=

Jg(t,\)z{d\),

(1.41)

Λ

where z(dX) is an orthogonal stochastic measure such that m(dX) = M|.2:(efA)|2. The principles used for constructing spectral models can be applied for the approximate modelling of nonhomogeneous random Gaussian fields of the form of (1.41). 1.6.2.

A p p r o x i m a t e modelling of Gaussian vectors of stationary t y p e by discrete Fourier transform

The spectral method can be adapted to simulation of Gaussian vectors of the stationary type. A peculiarity of this approach is that all Ν

48

Chapter

1.

Approximate

spectral

models

Figure 1.11. Realizations of nonhomogeneous spectral models components of a Gaussian vector ( x i , . . . , x ^ ) can be computed by the algorithms of the fast Fourier transform. Below we present an interesting algorithm for simulation of Gaussian vectors of stationary type, that was reported in [11]. For even N, it was proposed to calculate the values of the Gaussian vector ( x i , . . . , χ ν ) by formula N—i xm

=

, 2π Cfcex

Σ

jfe=o

Ρ

v l j ^

\ 1 7 1

\ w

) ~

/

°o sign (A) - 0.5) + N/2-1

+(-1 )

N / 2

a

N / 2

sign(/?jv/2 - 0.5) + 2 £

ak

cos 2n(bk

+

m k / N ) .

k= 1

Here β \ , . . . , β^/2 are independent random variables uniformly distributed in (0,1),

Cfc =

a0 sign(/?o - 0.5),

k = 0,

akex])(i2nßk),

k =

1,2,..., η —

k =

N / 2 ,

aN/2 > c

N

sign{ßN/2

-

0.5),

-k,

1,

k = N / 2 + 1,..., Ν



1,

while coefficients α ϊ , . . . , α/v/2 are taken from ak

=

\ 2 ^ N -

l

f { 2 - K k / N ) ^

1 2

,

k =

0,...,

N / 2 .

It was shown in paper [11] that for the random vector ( χ χ , . . . , χ ν ) the following equalities are fulfilled

1.6. Further

remarks

49

Ν

(27riY)_1|¿ xmexp

km)

* = f(2nk/N),

(1.42)

m—l k = 0,...,

N/2.

Moreover, in the case of "white noise", when f(2nk/N)

_1 = (2π)-1 ,

we have the equality

k = 0 , . . . , N/2,

Ν

Ν-

1

Σ *m = 1.

m=1

(1.43)

It is asserted in [11] that «if the function / ( λ ) , 0 < λ < π, is continuous, then ^t can be proved that the finite-dimensional distributions of the sequence X \ , . . . ,xjq for Ν —> oo converge in distribution to the finite-dimensional distributions of a stationary Gaussian sequence with mean zero and spectral density / ( λ ) , 0 < λ < π". In spite of this assertion, we would hesitate to recommend the proposed method for widespread use, since the properties (1.42), (1.43) are too unnatural for Gaussian stationary sequences. In our opinion, it is more rational to use common discrete spectral models of the form

k=0

Chapter 2

Spectral models for vector-valued fields Nowadays numerical models of vector-valued random fields are extensively used in solving applied problems, and these models have become subject of many investigations (see, for example, [53], [65], [76], [84], [85], [86] [91], [94], [107], [106], [110]). This Chapter deals with methods of numerical modelling of homogeneous vector-valued random fields based on the spectral decomposition. General relations for complexvalued and real-valued spectral models are obtained, and particular algorithms for simulation of homogeneous, isotropic, solenoidal and potential vector fields of arbitrary dimensions are presented here.

2.1.

Spectral representations

2.1.1.

S p e c t r a l r e p r e s e n t a t i o n s for c o m p l e x - v a l u e d v e c t o r r a n d o m fields

Let w(x) be a complex vector-valued homogeneous field,

w(x) =

;

,

wr(x) E C, r = l , . . . , s , χ G

_ ws(x) _

with mean zero and (matrix-valued) correlation function K(x)

= Μ ι υ ( χ + y)w*(y)

=

= M (Re w(x + y)(Rew(y))

+ I m i i ; ( x + j/)(Im«)(j/))

+

T -(- Imu;(a; + ew(x + j/)(Imu;(y)) y)(Rew(y))Twith y). For a vector-\-i c €{— (DsRthe scalar random field c*w(x) is homogeneous correlation function c*K(x)c.

If the field w(x) is continuous in mean square, then [30] (2.1)

51

2.1. Spectral representations

where F(A) is a matrix-valued spectral measure in IR^, i.e., for any measurable set A C IR^ the complex-valued (s, s)-matrix F(A) is positive definite (hence F (A) = F*(A), ReF(A) = (Re F(A))T, ImF(A) = — (Im F ( A ) ) T ) , and for any vector c G C s the function of sets c*F(A)c fc is a finite measure in IR (evidently, it is the spectral measure of the field c*w(x)). Some properties of a correlation function are presented below: 1) K(0)

=

2) K(x)

=

F(Rfc); K*{-x)·

3) K{x)K*(x)


-ε||ζ||2)

K(x)dx.

In particular, spectral density exists if +0O

J

\\K(x)\\dx < + 0 0 ,

—oo

and in this case / ( Λ ) = (2π)-* J e x p ( - i < χ, A

>)K(x)dx.

The spectral representation of the field w(x) is of the form (cf. [30])

52

Chapter 2. Spectral models for vector-valued fields

w(x) = J exp(i < x,X K

>)z(dX),

k

where z(A) is a vector-valued spectral stochastic measure in IRA spectral stochastic measure satisfies the following properties:

1) Mz(A)

A

= 0;

2) if Α Π Β = 0, then z(A + B) = z(A) + 3) M z ( A ) z * ( B ) = F(ADB),

z(B);

i.e.

Re F(A Π Β) = M R e 2 ( A ) ( R e 2 ( 5 ) ) T + M I m z ( A ) ( I m z ( £ ) ) r , I m F ( A n B ) = M ( - R e z ( A ) ( I m 2 ( £ ) ) T + Im2(A)(Re*(ß))T) , where A and Β are measurable sets in IR^. Property 3 implies t h a t if Α Π Β = 0, then M z(A)z*(B) = 0.

2.1.2.

(2.2)

Spectral representations for real-valued vector random fields

For the vector field w(x) to be real-valued it is necessary and sufficient t h a t z(A) = z(—A) for all measurable A and in this case F (A) = F (—A) =

FT(-A).

From the orthogonality property of the stochastic spectral measure z(A) (cf. (2.2)) it follows t h a t if Α Π - A = 0, then M z ( A ) z T ( A ) = 0. Thus, under assumption Α Π —A = 0, we have M Re z(A)(Re z(A))T M Re ζ(Α)(\τη

z(A))T

- M lmz(A)(lmz(A))T

= 0,

+ M Im2(A)(Re^(A))T = 0

2.1. Spectral

53

representations

and, therefore, MRe¿(A)(Rez(A)) T = MImz(yl)(Imz(A)) T = ReF(A)/2, - M R e z ( A ) ( I m * ( A ) ) T = M Imz(A)(Re^(A)) T = ImF(A)/2. Spectral representations of the real vector-valued field w(x) and its correlation function may be written in the form w(x)=z{0}+2

J cos < x,X

ρ

> Rez(dA)—sin

= J cos < χ, X > R e z(dX)~

R* K(x)=F{0}+2

J cos

ρ

sin < x,X > Imz(oíA),

ReF(dX)

= J cos < χ, X > R e F(dX)

Imz(JA)=



sin < x,A > ImF(dA)=

- sin < χ, X > I m F(dX).

(2.3)

R* Here Ρ is a half-space of IR*, i.e., Ρ is a measurable set such that Ρ η ( - P ) = 0, Ρ + ( - P ) + {0} = IRfc. For spectral density of a real vector field we have / ( A ) = (2n)~k ( ^ j cos < χ, X > K(x)dx

+ i J sin < χ, X >

R*

K(x)dx^.

R*

Since K{—x) = KT(x) we can write /(λ) = ( 2 π c o s

ρ

< χ,λ

i J sin < χ, Χ > [Κ(χ)

ρ

> [Κ(χ)

-

+ KT{x)]dx

+

KT(x)]dx^.

Remark. If a random field has a real-valued correlation function, then it does not mean that the field is a real-valued one. A simple example is presented below. Let 2 be a complex-valued random variable such that Mz = 0, Mzz = 2A, Mzz = 0:

54

Chapter

2. Spectral models for vector-valued

M ( R e z)2 = M ( I m z)2 = A,

MRezlmz

fields

= 0.

Consider the random processes u(x) = exp(zÀx)2 + exp(—i\x)z = = 2(Rez

cos(Ax) — Imz sin(Ax)),

v(x) — exp(¿Ax)z — exp(—iXx)z = = 2¿(Imzcos(Aa;) + Re ζ sin(Ax)). One of the processes is purely imaginary and the other is real-valued, while both of them have the same correlation function K(x) = 4y4cos(Ax).

2.2.

Isotropy

A homogeneous complex vector-valued random field w(x) with correlation function K{x) is said to be isotropic if K{x)

= K(Vx)

=

5(H),

χ G IR*,

for any orthogonal transformation V (i.e., V is a combination of rota= F(A) and tions and reflections). For isotropic field we have F(V~1A) K(x)

=

K*(x).

The spectral representation (2.1) of correlation function of isotropic homogeneous field may be written in the form (cf. [37],[123]) oo B(p)

= I Yk(ΊΡ)0(άΊ),

(2.4)

o Yk(a) = 2 ^

2

T(k/2)(ap

k

-

2

y

2

J(fc_2)/2(a).

Here Jm are Bessel functions of the first kind and G(B) = ^(||λ|| G Β) is a matrix-valued measure in ]R. Note that Κι(α) = cos(a),

Υζ(α) =

J0(a),

>3(0) = s i n ( a ) / a ,

YA{O) = 2 a _ 1 Ji(a).

If the spectral measures F and G are absolutely continuous with respect to Lebesgue measure and / , g are the corresponding spectral densities, then

2.3. Simulation

of random

harmonics

55

g( 7) = S*(7)/(7e), where Sk{j) = (2nk/2/T(k/2))ryk~1 7 and e is a unit vector in R/5.

is area of a sphere in lRfc with radius

The transformations inverse to (2.4) are of the form (cf. [37], [114]): 00

G[0,7) =

2-( fc - 2 )/ 2 r- 1 (fc/2)

J(1p)^Jk/2(1p)p-1B(p)dpi 0

00 3(7) = 2-(*- 2 )/ 2 Γ- 1 (Α:/2)

J(1P)^J(k_2)/2(1P)B(p)dp. 0

For real-valued homogeneous isotropic vector fields the matrix-valued spectral measures G and F are real-valued: F (A)

= F (—A)

= F (A),

K(x)

=

KT{x).

2.3.

Simulation of random harmonics

2.3.1.

Complex-valued harmonics

A complex-valued vector random harmonic ξ(χ)

= exp(¿ < χ, Λ > ) ζ ,

£(x),2GCs,

(2.5)

where is a complex random vector with zero mean, has correlation function K(x)

= exp(i < χ,Λ > ) M ( z z * )

and its spectral measure is concentrated at the point λ: F (A)

= /{λ G

A}M{zz*),

Re F{ λ } = M Re z(Re zf Im F{\}

= - M Re z( Im zf

+ M Im z( Im z)T, + M Im ¿(Re z f .

The field £(:r) is strictly homogeneous (in other words, homogeneous in narrow sense, i.e., the finite-dimensional distributions are invariant with respect to shifts) if and only if the random vector 2 has the structure

56

Chapter 2. Spectral models for vector-valued fields ζ = exp(i0)zo,

where z0 is an arbitrary complex-valued random vector, while θ is a random variable independent of z0 and uniformly distributed in [0,2π]. Obviously, we have Mzz* -

MZ0ZQ.

A harmonic with a random frequency λ and amplitude z\ dependent on the frequency = exp(i < χ, Χ >)ζλ, Mzx = 0,

ΜζχζΙ =

(2.6)

F{dX)^{dX),

where λ is a random vector distributed in according to probability measure μ, absolutely continuous with respect to F, has the correlation function K(x) = j exp(¿ < χ,Χ

>)F(dX)

R* and the spectral measure F(dX). R e m a r k 1. Basically in the capacity of the measure μ for harmonic (2.6) one may take an arbitrary probability measure in IRfc absolutely continuous with respect to F. The authors of paper [107] propose μ(ά\) = ti F(dX)/ tv

F(Uk),

where tr denotes the trace of a matrix. The value t r F ( H f c ) may be interpreted as the "full energy" of the field and tr F(dX) as the "energy" of the frequencies dX. R e m a r k 2. If z\ = ζ does not depend on λ, then K(x) = J exp(i < χ , λ R*

>)μ(άΧ)Μ[ζζ*].

2.3. Simulation of random

2.3.2.

57

harmonics

Real-valued h a r m o n i c s

Let us consider a real-valued case. Vector harmonic £(:r) = exp(¿ < χ, Λ >)z + exp(—i < χ, λ > ) ζ = = 2 (cos < χ, λ > Re ζ — sin < χ, λ > Im ζ ) ,

(2-7)

where Μ ζ = 0, Μ ζζτ = 0, has correlation function Κ (χ) = 2 cos < ζ, λ > ReM[zz*] - 2 sin < χ, λ > I m M [ ^ * ] and spectral measure being concentrated at the points λ, — Λ, F{X} = M[zz*} =

F{-\}.

The equality M z z r — 0 means M Re ¿(Re z)T - M Im ζ (Im z f = 0, M Re z(liazf

+ M Im z( Re z f - 0

and, hence, M R e z ( R e z ) T = M I m z ( I m z ) T = ReF{Ä}/2, -MRez(Imz)T = MImz(Rez)T = If " Zi '

lmF{\}/2.

' Pi exp(tipi) _

" fi (χ) ' II

; psexp(iíps)

. ξ.(χ) .

then (2.7) may be rewritten in the form f m ( x ) = 2/> m cos(< x,\ > +

+φ^ + 0),

m =

l,...,s,

where the value of θ is independent of pm and φ^ is uniformly distributed in [0,2π].

58

Chapter 2. Spectral models for vector-valued fields

The harmonic ξ(χ)

= cos < χ, X > R e z ( A ) — sin < χ, X > I m ^ ( A ) /9ι(λ) cos ( < χ, X > +ι(λ)) (2.9) _ Ps(A) cos ( < x,\ > +s(A)) _ Ρι(λ)βχρ(»ν?ι(λ))

*ι(λ) Κλ) =

L

J

_ ps(X)exp(ips(X)) _ 2F(dX)^(dX),

K

'

M ¿ ( A ) = 0,

{ F{0}M0},

AyéO, λ = o,

Μ ζ ( λ ) / ( Α ) = 0,

where λ is a random variable in K f c distributed according to symmetric (μ(άΧ) = μ(-άλ)) probability measure μ (absolutely continuous with respect to F ) has the correlation function (2.3) and the spectral measure F(dX). Moreover, the following is fulfilled: M R e z ( A ) ( R e z ( A ) ) T = M l m z ( A ) (Im^(A))T = =

ReF{dX}/μ(άλ),

- M R e z ( A ) ( I m z ( X ) ) T = Mlmz(X)(Rez(X))T

=

= Im F{dA}//x(dA), M

' Re ¿(A) ' " R e z ( A ) ' Imz(A) Im 2 ( A )

μ(άΧ)

Re F{dX} Im F{cL\}

- I m F{dX} ReF{dX}

(2.10)

If F { 0 } = 0, then in the capacity of the measure μ one may take a measure concentrated on a half-space. In this case Μ ζ ( λ ) ζ * ( λ ) = 4F(dA)//¿(dA), and in the right-hand side of (2.10) coefficient 2 will appear.

2.3. Simulation of random

59

harmonics

R e m a r k . Sometimes for the real-valued case it is more convenient to use spectral representations in the following form (cf. (2.3)) w(x) = J cos < χ, X > zi(d\) — sin < χ, λ > z2{dX); ρ K(x) = J cos < χ, X > Fi(dX) - sin < χ, X > F2(dX)·, ρ Mzi(dX)zf

(dX) = Fi(dX),

-MZl(dX)z%(dX)

¿ = 1,2;

= Mz2(dX)z{(dX)

Fi(dX) = 2 Re F(dX),

=

F2(dX)]

F2(dX) = 2 Im F(dX),

X > 0;

where F\ and F2 are corresponding symmetric and skew-symmetric matrix-valued measures on the half-space P . In this context the harmonic (2.9) can be written in the form £(x) = Z\(X) cos < χ, X > —Z2(X) sin < χ, X > , where Λ is a vector distributed according the measure μ on the half-space P, MZi(X)Zf(X)

= Fi(dX) ¡ μ(άΧ),

- Μ Ζ 1 ( Λ ) Ζ 1 τ ( λ ) = ΜΖ2(Χ)Ζξ(Χ)

2.3.3.

¿=

1,2;

=

Ρ2{άΧ)/μ(άΧ).

About simulation of complex-valued Gaussian vectors

Modelling of a complex-valued random vector £ with zero mean and specified covariance matrix F = ~M.zz* (with additional condition M z z T = 0 for real-valued harmonics) is the basic problem for constructing homogeneous vector-valued harmonics (2.5)—(2.7), (2.9). A conventional solution of this problem consists of two stages: modelling of an orthonormal vector ε, Μ ε ε * = E and subsequent linear transformation ζ = Αε,

where AA* = F.

(2.11)

R e m a r k . Correlation matrix Myy* does not define uniquely correlation coefficients for all components Re y¿, Im y¿ of the complex-valued random vector y. Therefore, usually it is supposed (often without mentioning) that MyyT - 0.

60

Chapter 2. Spectral models for vector-valued

fields

In particular, for an orthonormal complex vector ε, Μ ε ε * = E , equation Μ ε ε τ = 0 means that real and imaginary parts of the vector have the same correlation matrix E¡2 and they are orthogonal to each other: = MIme(Ime)T =

MRee(Ree)T

E/2,

M R e e ( I m £ ) T = 0. Let us assume that the positive definite matrix F is not singular. The linear transformation A in (2.11) is uniquely defined up to a unitary operator U: vectors As and AU ε have the same covariance matrix F. If we require that matrix A is self-conjugated, A = A*, then A = In this case matrix A may be found by the following recurrent procedure: F1/2.

A 0 = 0,

A n + 1 = An + (2 II F y 1 / 2 )" 1 (F - A 2 )

(see, e.g., [104]), or by the formula A = W(DY^2W*, where D is a diagonal matrix with eigenvalues of the matrix F as diagonal elements, and W is a matrix, whose columns are eigenvectors of the matrix F , F = WDW*. If we require that matrix A be a lower triangular one, then it gives us the well-known recurrent formulas for the matrix elements Art /in = ( F n ^ e x p O V i ) , Art = ^Frt - Σ

Arr -

ArkÂfkj

M«,

t = 1, . . . , Γ - 1,

r_1 ( \1/2 ( Frr - Σ I Art I 2 J exp(iVr),

Γ = 2, . . . , S

(the diagonal elements of the matrix F are real), where i p i , . . . ,W

+ (E-

L(\)W(d\)],

(2.14)

where 7 = ||A||, Sk(j) is area of a sphere in IR,fc with radius 7, άσ(η) is an area element of this sphere, E is the k χ k identity matrix, L(λ) = λλ Γ /||Λ|| 2 , φ and ψ are some finite measures on [0,00). It is easy to verify that ^ ^ M | H | 2 = J tv F(d\)

R*

= Jv(df)

+ (k-

0

1) J

φ{άη).

0

For potential bi-isotropic field (2.12) the following is fulfilled: φ(άη) = 7 2 i/(¿7), φ(ά7) = 0. In general case a homogeneous potential bi-isotropic vector field is gradient of a locally homogeneous and locally isotropic scalar random field (see [125]). If w(x) is a bi-isotropic field with spectral measure (2.14), then the scalar field divu;(:r) is isotropic with "radial" spectral measure η 2 ψ{άη). So, the field w(x) is solenoidal (div w(x) = 0) if and only if φ(ά-γ) = 0. Finally, we notice that coincidence of measures φ{άη) = φ{άη) corresponds to a homogeneous random field which is isotropic and bi-isotropic simultaneously. Such a field consists of uncorrelated components. A general representation of correlation functions for bi-isotropic random fields is obtained in [125]. 2.5.3.

Vector-valued isotropic fields on plane and in space

In two-dimensional case any correlation function of a potential bi-isotropic field (2.12) may be written in the form 00

K(x)

= J {-ΜΊρ)Χ

+ ^(Ίρ){Ίρ)-ιΕ]

φ(άΊ),

o where ρ = ||x||, Χ = xx*/||x|| 2 and E is the 2 x 2 identity matrix.

64

Chapter 2. Spectral models for vector-valued fields

Realizations of spectral models of a scalar isotropic field on the plane with correlation function Jo(c||a;||) and the gradient of this field are presented in Figure 2.1. This case corresponds to the spectral measures ν, ψ concentrated at the point 7 = c. More general models may be obtained by summation of such fields of different scale (see Figure 2.2).

Figure 2.1. Realizations of spectral models of a scalar isotropic field with correlation function JQ(CX) and its gradient

.-."HE; ; ; ! ' ~ " ! « C ' , ; ; ' Í Í Í ! M K Í ; : : ' "A\

— · . - -UK i Ρ * - ; ; Hi i ^ p i j ' f e : ; ; : j j f $ ! : rr.iijiijrf.ii;;;; Ifjv «·>• oo, to a Gaussian vector w,i ι

w

w

e IR"1,

wl e

and correlation matrix of the vector w 2 is a nonsingular one. Then conditional distributions of the vectors w]y provided that ||tojy — 6|| —>• min (b is an arbitrary vector) weakly converge to conditional distribution of the vector w1 provided that w2 = b. Let us consider a sequence of partitionings of JR¿ into disjoint sets: Λ ^ + .,. + Λ ^ Κ * ,

(3.28)

(the superscripts are later omitted) and let us consider the spectral model (3.27) with c 2 = μ(Αη), \ n e An. (3.29) The following condition Ajv C {|λ| > dN},

d^oo,

maxdiam(An)

0

(3.30)

3·4· Convergence of conditional spectral models

93

is sufficient for convergence of finite-dimensional distributions of the spectral model (3.27)—(3.30) and, by Lemma 3.7, for convergence of finite-dimensional distributions of the corresponding conditional spectral model. In this case convergence occurs for an arbitrary algorithm of choice of Xj € Λj including randomized spectral models as well. Thus, the following statement holds. L e m m a 3.8. Finite-dimensional distributions of conditional spectral model (3.27)-(3.30) converge to the corresponding finite-dimensional b distributions of the random field w (x) (both for randomized and nonrandomized models).

3.4.2. Weak convergence in spaces Lp and Cp The problem is particularly simple for weak convergence of randomized and non-randomized spectral models in spaces LP(K), Κ is a compact set in R f c . L e m m a 3.9. If conditional spectral model wbN(x) converges to a Gaussian field wb(x) in the sense of convergence of finite-dimensional distributions, then it also converges in the sense of weak convergence in Lp(K),p> 1. By Teorem 3.5, it is sufficient to prove that P M w (x) " as iV —> oo and M wW(x)

s?'

sup M Ν

w$(x)

V

< oo.

But this is evident from convergence of one-dimensional distributions of the spectral model, which are Gaussian ones. We proceed to weak convergence of conditional spectral models in the space CP(K). As conditional spectral model wbN(x) we first consider a non-randomized model that is constructed by (3.27)-(3.30) under the additional condition |Ajv| = d¡γ. L e m m a 3.10. The conditional spectral model wbN(x) weakly converges in CP(K), ρ Ç. {1,2,...}, to the Gaussian field wb(x) if

94

CHAPTER 3. CONVERGENCE OF SPECTRAL MODELS OF RANDOM FIELDS

\Λ\Β+2ΡΜ(ΆΛ) < oo

J

(3.31)

FOR SOME Β > 0.

Proof. 1. Consider the case ρ = 0. Convergence of finite-dimensional distributions follows from Lemma 3.7. We have to prove weak compactness of the sequence of random fields WBN(X) in the space C(K). For the field WBN(X) we have: M IWBN(Y) - U)^(A;)|2=J5JV(A;, X)+BN(Y, +M2N(X)

Y)-2BN(X,

Y)+

+ M2N(Y) - 2MN{X)MN(Y)

= [BN(X, X)-Bn(X,

Y)] + [BN(Y,

+ [MN(X) - MN(Y)]2

=

Y)-BN(X,

Y)] +

,

where B¡V(X,Y) is correlation function and mjv(x) is expectation of the field WBN(X). Let us consider matrices ΒΝ =

R(N)

=

K{N)

=

Bn(X,X)

BN(X,Y)

. BN(X,Y)

BN(Y,Y)

Ä(AT)(0) _R{N)(x-y)

R(N)(X - Y) " R( N )( 0)

R(N)(X - ΧΙ) . . . i?(iv)(a; - XJ) . R(N)(Y - Χ Ι ) ••• R(N){Y - XJ) .

Here by Ä(jv)(x) we denote correlation functions of homogeneous fields Matrices BN and vectors MN

satisfy the relations

MN(X)

mjv(y).

95

3-4- Convergence of conditional spectral models Bn

1

RN — KnL%KJ],

raw =

KNL^Ò.

From condition (3.31) it follows that for all N

R_(N)(0) - R_(N)(x) ≤ C ||x||^γ,

where γ = min(2, β) and C is a constant (see, e.g., the proof of Theorem 3.4). Since for the correlation functions R_(N)(x) we have

|R_(N)(x) - R_(N)(y)|^2 ≤ 2 R_(N)(0) [R_(N)(0) - R_(N)(x - y)],

and

L_N^(-1) → L^(-1)   as N → ∞,

the following inequalities hold for all N:

|B_N(x,y) - B_N(x,x)| ≤ C' ||x - y||^γ,   |m_N(x) - m_N(y)|^2 ≤ C' ||x - y||^γ,

(a) For the space F = L_p(X, ν), where p ≥ 2 and (X, ν) is a measurable space with a σ-finite measure ν, the sufficient convergence condition is M||ε||^2_(L_p) < ∞.

(b) For the space F = C(X), ficient

convergence conditions

where X is a compactum

in H fc , suf-

are:

Μ ε 2 ( χ ) < oo

for all χ from

X,

and there exist a > 0 and a random variable (?(ω) > 0, M G 2 < oo, such that \ε(χ,ω) for all χ and y from

- ε{ν,ω)\

< β{ώ)\χ

- y\a

X.

(c) For the space F = CP(X), bounded closed domain in max M

0,

MG 2 m

,mk) such that |m| = ρ there exist < oo, and a number am > 0, for

which Vm

[ε{χ,ω)

-

are:

sup M [P m £(x)] 2 < oo, xex

τη — (ml5...

a random variable Gm(oj)

convergence conditions

is a

£(y,u>)}

< Gm(ω)\χ

- y\a™,

χ,y G Χ.

104

Chapter 4- Optimization

and convergence of

functional..

Here m = (mi,m2,...

,mk),

\m\ = mi + ... + nik,

and by Ò vm

we denote partial

•·· =

derivatives

dx?ldx?2

...dx^k

of trajectories

( d ) In a separable Hilbert

' ''

of a random

space F sufficient

field.

convergence

condition

is:

M||ε||^2_F < ∞.

Proof. Conditions (a)-(d) imply weak convergence in the space F of the random functions n^(1/2)(J*(x) - J(x)) to the Gaussian process w(x) with covariance function (4.6) (statements of the corresponding functional limit theorems can be found in [45], [87], [89], [93], [100]). Since the functional ||·||^2_F is continuous in F, we have weak convergence of n||J* - J||^2_F to ||w||^2_F, and the uniform integrability of the quantities n||J* - J||^2_F ensures convergence of the moments in (4.5), (4.7) (cf. [13]).

Remark 1. Let us recall the definition of weak convergence. A sequence of random elements ξ_n is said to converge weakly to a random element ξ in the space F with metric ρ_F if for any real functional g, continuous in the metric ρ_F, the sequence of random variables g(ξ_n) converges to the random variable g(ξ) in distribution. Because of the weak convergence of n^(1/2)(J*(x) - J(x)) to the Gaussian process w(x) in the space F we can state that the deviation of J* from J on X is of order n^(-1/2) in probability:

P{ρ_F(J*, J) < c n^(-1/2)} → P{ρ_F(w, 0) < c},   n → ∞.

Therefore, the conditions for weak convergence of n^(1/2)(J*(x) - J(x)) to w(x) in various functional spaces are of separate interest for the general theory of Monte Carlo methods. The conditions (a)-(d) of Statement 4.1 are general conditions which are sufficient for weak convergence in the corresponding spaces. On the basis of these conditions, one can obtain efficient criteria for weak convergence of various functional estimators that are used in Monte Carlo methods (see Item 4.3 below).
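The n^(-1/2) rate discussed above is easy to observe empirically. The following sketch is an illustration only (the integrand, the sampling scheme and all identifiers are invented for the example and are not taken from the text): it estimates the family of integrals J(x) = ∫_0^1 ω^x dω = 1/(x+1) by the estimator J*_n(x) = n^(-1) Σ ω_i^x and checks that n M||J*_n - J||^2 in L_2[1,3] stabilizes as n grows.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1.0, 3.0, 50)                      # parameter interval X = [1, 3]
dx = x[1] - x[0]
J = 1.0 / (x + 1.0)                                # J(x) = int_0^1 w^x dw

def estimate(n):
    # J*_n(x) = n^{-1} sum_i xi_i(x), with xi(x, w) = w**x and w uniform on (0, 1)
    w = rng.random(n)
    return (w[:, None] ** x).mean(axis=0)

for n in (100, 400, 1600, 6400):
    reps = 200
    err2 = 0.0
    for _ in range(reps):
        d = estimate(n) - J
        err2 += (d * d).sum() * dx                 # crude quadrature for ||J*_n - J||^2 in L_2
    print(n, n * err2 / reps)                      # n * M||J*_n - J||^2 should stabilize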


Remark 2. The uniform integrability of the quantities n||J* - J||^2_F in Statement 4.1 can be replaced by the following condition:

sup_n M|| n^(-1/2) (ε_1 + ... + ε_n) ||_F^(2+2β) < ∞,

where β > 0, and ε_i are independent realizations of ε = ξ - J.

Remark 3. Conventionally, in order to compare the quality of unbiased functional Monte Carlo estimators one uses the criterion of weighted variance [22], based on comparison of the quantities

∫_X Dξ(x) ν(dx),

or the minimax criterion proposed and studied by G.A. Mikhailov [74], based on comparison of the maxima of variances max_(x∈X) Dξ(x). Without analyzing in detail various criteria for optimality, we note that the new approach described here is quite reasonable and in the general case is not reduced to the minimax criterion or the criterion of weighted variance. The idea is that for comparing the quality of estimators we use the quantities

n M||J*_n - J||^2_F = M|| Σ_(i=1..n) [ξ_i(x) - J(x)] / n^(1/2) ||^2_F,

and their limit value

Dev(ξ, F) = M||w||^2_F.    (4.8)

The quantity (4.8) is essentially a functional of the correlation function (4.6). For a family of integrals

J(x) = ∫_Ω G(x, ω) ν(dω),   x ∈ X,

we consider the family of estimates

J*_n(x) = n^(-1) Σ_(i=1..n) ξ_i(x),   ξ(x, ω) = G(x, ω) dν/dλ(ω),    (4.12)

where ξ_i(x) = ξ(x, ω_i) are independent realizations of the random function ξ(x, ω), ω is a random variable distributed according to the probability measure dλ, and the measure dν is absolutely continuous with respect to dλ.

M||ξ||^2_H ≥ (M||ξ||_H)^2.    (4.14)

Since

ξ(x, ω) = G(x, ω) dν/dλ(ω),
M||ξ||^2_H = ∫_Ω ||G(·, ω)||^2_H dν/dλ(ω) dν(ω),
M||ξ||_H = ∫_Ω ||G(·, ω)||_H dν(ω),

the equality in (4.14) is reached if and only if

dν/dλ(ω) = M||ξ||_H / ||G(·, ω)||_H.

Hence

dλ(ω) = ||G(·, ω)||_H dν(ω) / M||ξ||_H.

Lemma has been proved.

Example. We consider

J(x) = ∫_0^∞ x exp(-ωx) dω ≡ 1,   x ∈ [a, b],

as an integral depending on a parameter, and we take the spaces

H_0 = L_2[a, b],   H_1 = H^1[a, b],   H_2 = H^2[a, b],   C = C[a, b],

as the functional space F. Assume that the distributions μ_0, μ_1 and μ_2 which are used in the definition of the spaces H_0, H_1 and H_2 are uniform probability distributions on [a, b]. By ξ_0, ξ_1, ξ_2 we denote the F-optimal estimators for the corresponding spaces F = H_0, H_1, H_2, and by ξ we denote the minimax estimator. The results of computations of the F-deviations are presented in the next table for a = 1, b = 3 (these results are borrowed from [77]).

  deviation          ξ_0     ξ_1     ξ_2     ξ
  H_0-deviation      0.17    0.24    0.45    0.20
  H_1-deviation      1.0     0.78    0.94    0.84
  H_2-deviation      5.6     2.23    1.64    6.39
  C-deviation        0.37    0.18    0.29    0.24

In this example the H_1-optimal estimator turns out to be the best one of the four estimators in terms of the C-deviation criterion. We emphasize here that in order to calculate the C-deviation we do not need to simulate the corresponding estimators, which may turn out to be a time-consuming procedure. It is sufficient to find the correlation function of the estimator and then to compute (by the Monte Carlo method) the expectation of the squared C-norm of a Gaussian process with zero mean and the obtained correlation function.

Remark. For the estimate (4.1) the problems of minimization of the absolute and relative errors coincide, while in the functional case the question arises: what should be chosen as a measure of the relative error? One possible approach is as follows. We assume that J(x) ≠ 0 for x ∈ X. The value

S_n = n t M||(J*_n - J)/J||^2_F = t M|| Σ_(i=1..n) [ε_i / J] / n^(1/2) ||^2_F

is said to be the relative computational cost of the estimate J*_n in the space F. Under additional assumptions analogous to those for the convergence (4.5), (4.7) we have

S_n → s_∞ = t M||w(x)/J(x)||^2_F   as n → ∞,

where w(x)/J(x) is the corresponding Gaussian function.
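For the example considered above the deviations can be estimated directly. The following sketch is an illustration only (the sampling densities and all identifiers are chosen here and are not taken from the text): it simulates the elementary estimator ξ(x, ω) = x exp(-ωx)/p(ω) of J(x) = ∫_0^∞ x exp(-ωx) dω ≡ 1 for two densities p on (0, ∞) and compares the resulting L_2[1,3]-deviations M||ε||^2, which is the quantity minimized by the H_0-optimal estimator (up to the normalization of the measure on [a, b]).

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1.0, 3.0, 61)                      # parameter interval [a, b] = [1, 3]
dx = x[1] - x[0]
n = 20000

def l2_deviation(omega, density):
    # Monte Carlo estimate of M||xi - J||^2 in L_2[1, 3] for
    # xi(x, omega) = x exp(-omega x) / p(omega) and J(x) = 1.
    xi = x[None, :] * np.exp(-np.outer(omega, x)) / density(omega)[:, None]
    return (((xi - 1.0) ** 2).sum(axis=1) * dx).mean()

p1 = lambda w: np.exp(-w)                          # exponential density with mean 1
p2 = lambda w: 0.5 * np.exp(-0.5 * w)              # exponential density with mean 2
print("L2-deviation, p1:", l2_deviation(rng.exponential(1.0, n), p1))
print("L2-deviation, p2:", l2_deviation(rng.exponential(2.0, n), p2))

For the C-deviation one would instead simulate a Gaussian process with the estimator's correlation function, as noted above.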

In the spaces F = H the relative computational cost of the estimates J*_n (under conditions analogous to (A)-(C)) is independent of n:

S_n = t M||ε/J||^2_H = t [ M||ξ/J||^2_H - C ],   C = ∫_X ν_0(dx),

and, as in Lemma 4.1, one can prove a statement about the minimum of the mean squared norm of the relative error M||ε/J||^2_H of the estimator (4.12) for an integral depending on a parameter.

4.2.3. H-optimal "absorption" estimator for computing a family of functionals of the solution of an integral equation of the second kind

In what follows we use the notation and terminology from [22], [74]. Let us consider the family of functionals

J(x) = (φ, h(x)) = ∫ φ(λ) h(x, λ) dλ,

the optimization criterion M||ε||^2_H → min, and ε(x) = ξ(x) - J(x). A generalization of the familiar result concerning the importance sampling technique for the absorption estimator to many functionals is formulated as follows.

Theorem 4.1. Assume that condition (d) is satisfied, and the strictly positive function

h_0(λ) = ||h(·, λ)||_H

=

Σ _|m|

7i=0

η

-π ,

χ* =

Σ n=0

>

Τ ^ 9

for the integral equations of the second kind

χ = Lχ + f/π,   χ* = L*χ* + ||h||^2_F / g,

Lχ(λ) = ∫ K^2(λ', λ) / p(λ', λ) χ(λ') dλ'.
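Both the absorption and the collision estimators considered in this and the next subsection estimate the whole family of functionals (φ, h(x)) from a single Markov chain. For orientation, the following toy discrete sketch (the kernel, the source term, the family h(x, λ) and the simulated transition probabilities are all invented for the illustration and are not taken from the text) implements the collision-type estimator ζ(x) = Σ Q_n h(x, λ_n) that appears in Remark 2 below and compares it with the exact solution.

import numpy as np

rng = np.random.default_rng(3)

# Toy discrete analogue of phi(lam) = sum_{lam'} K(lam', lam) phi(lam') + f(lam)
# and of the family of functionals J(x) = (phi, h(x)).
m = 5
K = 0.5 * rng.random((m, m)) / m                 # kernel K[lam', lam], spectral radius < 1
f = rng.random(m)
x = np.linspace(0.0, 1.0, 11)
h = np.exp(-np.outer(x, np.arange(m)))           # h[x, lam]
J_exact = h @ np.linalg.solve(np.eye(m) - K.T, f)

P = 0.8 * K / K.sum(axis=1, keepdims=True)       # simulated (sub-stochastic) transitions
pi0 = np.full(m, 1.0 / m)                        # initial distribution

def collision_estimate():
    # zeta(x) = sum_n Q_n h(x, lam_n) along one trajectory terminated by absorption
    lam = rng.choice(m, p=pi0)
    Q = f[lam] / pi0[lam]
    zeta = Q * h[:, lam]
    while True:
        c = np.cumsum(P[lam])
        u = rng.random()
        if u >= c[-1]:                           # absorption with probability 1 - sum(P[lam])
            return zeta
        nxt = int(np.searchsorted(c, u, side="right"))
        Q *= K[lam, nxt] / P[lam, nxt]
        lam = nxt
        zeta = zeta + Q * h[:, lam]

n = 20000
est = sum(collision_estimate() for _ in range(n)) / n
print(np.max(np.abs(est - J_exact)))             # error of order n^{-1/2}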


Remark 2. The problem of obtaining an H-optimal "collision" estimator

ζ(x) = Σ_(n=0..N) Q_n h(x, λ_n)

is a more complicated one. Note that if

ζ_0 = Σ_(n=0..N) |Q_n| ||h(·, λ_n)||_F,

then, evidently, ||ζ||_F ≤ ζ_0, and hence M||ζ||^2_F ≤ M ζ_0^2. This inequality gives grounds to use approximate information on the importance function with respect to the functional (φ, ||h||_F) = M ζ_0 in order to optimize the collision estimate ζ(x).

4.2.4. Investigation of the H-optimal "collision" estimator

The main principle of investigating the H-optimal collision estimator consists in reducing the problem to the known results for the non-functional case (as it was done for the absorption estimator). Therefore, we need the following assertion.

Lemma 4.2. Let us consider an integral equation of the second kind

φ(λ) = ∫_Λ K(λ', λ) φ(λ') dλ' + f(λ),   φ = Kφ + f,   f ∈ L^1 = L^1(Λ),   K : L^1 → L^1,

and the collision

estimator 6 = Σ