Adaptive Stochastic Methods: In Computational Mathematics and Mechanics (ISBN 9783110554632, 9783110553642)

This monograph develops adaptive stochastic methods in computational mathematics and mechanics. The authors discuss the basic ideas of the algorithms, ways to synthesise them, and analyse their properties and efficiency in evaluating multidimensional integrals and solving integral equations.


English, 290 pages, 2018


Table of contents :
Preface
Contents
Introduction: Statistical Computing Algorithms as a Subject of Adaptive Control
Part I: Evaluation of Integrals
1. Fundamentals of the Monte Carlo Method to Evaluate Definite Integrals
2. Sequential Monte Carlo Method and Adaptive Integration
3. Methods of Adaptive Integration Based on Piecewise Approximation
4. Methods of Adaptive Integration Based on Global Approximation
5. Numerical Experiments
6. Adaptive Importance Sampling Method Based on Piecewise Constant Approximation
Part II: Solution of Integral Equations
7. Semi-Statistical Method of Solving Integral Equations Numerically
8. Problem of Vibration Conductivity
9. Problem on Ideal-Fluid Flow Around an Airfoil
10. First Basic Problem of Elasticity Theory
11. Second Basic Problem of Elasticity Theory
12. Projectional and Statistical Method of Solving Integral Equations Numerically
Afterword
Bibliography
Index

Dmitry G. Arseniev, Vladimir M. Ivanov, Maxim L. Korenevsky
Adaptive Stochastic Methods

Also of interest Exact Finite-Difference Schemes Sergey Lemeshevsky, Piotr Matus, Dmitriy Poliakov, 2016 ISBN 978-3-11-048964-4, e-ISBN (PDF) 978-3-11-049132-6, e-ISBN (EPUB) 978-3-11-048972-9

Stochastic Methods for Boundary Value Problems. Numerics for High-dimensional PDEs and Applications Karl K. Sabelfeld, Nikolai A. Simonov, 2016 ISBN 978-3-11-047906-5, e-ISBN (PDF) 978-3-11-047945-4, e-ISBN (EPUB) 978-3-11-047916-4

Inside Finite Elements Martin Weiser, 2016 ISBN 978-3-11-037317-2, e-ISBN (PDF) 978-3-11-037320-2, e-ISBN (EPUB) 978-3-11-038618-9

Richardson Extrapolation. Practical Aspects and Applications Zahari Zlatev, Ivan Dimov, István Faragó, Ágnes Havasi, 2017 ISBN 978-3-11-051649-4, e-ISBN (PDF) 978-3-11-053300-2, e-ISBN (EPUB) 978-3-11-053198-5

Stochastic PDEs and Dynamics Boling Guo, Hongjun Gao, Xueke Pu, 2016 ISBN 978-3-11-049510-2, e-ISBN (PDF) 978-3-11-049388-7, e-ISBN (EPUB) 978-3-11-049243-9

Dmitry G. Arseniev, Vladimir M. Ivanov, Maxim L. Korenevsky

Adaptive Stochastic Methods

In Computational Mathematics and Mechanics

Mathematics Subject Classification 2010: 65C05, 65C20, 65R20

Authors
Prof. Dr. Dmitry G. Arseniev, Grazhdansky pr. 28, 195220 St. Petersburg, Russian Federation, [email protected]
Prof. Dr. Vladimir M. Ivanov, Ispytateley pr. 26, 197227 St. Petersburg, Russian Federation, [email protected]
Prof. Dr. Maxim L. Korenevsky, Polyarnikov str. 8, 192171 St. Petersburg, Russian Federation, [email protected]

Translator
Assoc. Prof. Andrei V. Kolchin, Leningradsky pr. 64, 125319 Moscow, Russian Federation, [email protected]

ISBN 978-3-11-055364-2
e-ISBN (PDF) 978-3-11-055463-2
e-ISBN (EPUB) 978-3-11-055367-3
Set-ISBN 978-3-11-055464-9

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2018 Walter de Gruyter GmbH, Berlin/Boston
Cover image: Andrei V. Kolchin
Typesetting: Dimler & Albroscheit, Müncheberg
Printing and binding: CPI books GmbH, Leck
Printed on acid-free paper
Printed in Germany
www.degruyter.com

Preface

This monograph is devoted to developing adaptive stochastic methods of computational mathematics with the use of adaptively controlled computational procedures. We consider the basic ideas of the algorithms, ways to synthesise them, and analyse their properties and efficiency in evaluating multidimensional integrals and solving integral equations of the theory of elasticity and the theory of heat conduction. The key feature of the approaches and results presented in this book is a comprehensive analysis of the mechanisms by which adaptive control is utilised in statistical evaluation procedures, which makes them converge much faster. This book is intended for all students of numerical methods, mathematical statistics, and methods of statistical simulation, as well as for specialists in the fields of computational mathematics and mechanics.


Contents

Preface | V
Introduction: Statistical Computing Algorithms as a Subject of Adaptive Control | 1

Part I: Evaluation of Integrals
1 Fundamentals of the Monte Carlo Method to Evaluate Definite Integrals | 9
1.1 Problem setup | 9
1.2 Essence of the Monte Carlo method | 10
1.3 Sampling of a scalar random variable | 11
1.3.1 The inverse function method | 11
1.3.2 The superposition method | 14
1.3.3 The rejection method | 15
1.4 Sampling of a vector random variable | 16
1.5 Elementary Monte Carlo method and its properties | 18
1.6 Methods of variance reduction | 20
1.6.1 Importance sampling | 20
1.6.2 Control variate sampling | 21
1.6.3 Advantages and relations between the methods of importance sampling and control variate sampling | 21
1.6.4 Symmetrisation of the integrand | 22
1.6.5 Group sampling | 23
1.6.6 Estimating with a faster rate of convergence | 24
1.7 Conclusion | 25
2 Sequential Monte Carlo Method and Adaptive Integration | 27
2.1 Sequential Monte Carlo method | 27
2.1.1 Basic relations | 27
2.1.2 Mean square convergence | 29
2.1.3 Almost sure convergence | 36
2.1.4 Error estimation | 39
2.2 Adaptive methods of integration | 41
2.2.1 Elementary adaptive method of one-dimensional integration | 42
2.2.2 Adaptive method of importance sampling | 44
2.2.3 Adaptive method of control variate sampling | 46
2.2.4 Generalised adaptive methods of importance sampling and control variate sampling | 47
2.2.5 On time and memory consumption | 47
2.2.6 Regression-based adaptive methods | 49
2.2.7 Note on notation | 56
2.3 Conclusion | 57
3 Methods of Adaptive Integration Based on Piecewise Approximation | 59
3.1 Piecewise approximations over subdomains | 59
3.1.1 Piecewise approximations and their orders | 59
3.1.2 Approximations for particular classes of functions | 60
3.1.3 Partition moments and estimates for the variances D_k | 62
3.1.4 Generalised adaptive methods | 63
3.2 Elementary one-dimensional method | 65
3.2.1 Control variate sampling | 66
3.2.2 Importance sampling | 67
3.2.3 Conclusions and remarks | 69
3.3 Sequential bisection | 70
3.3.1 Description of the bisection technique | 70
3.3.2 Control variate sampling | 72
3.3.3 Importance sampling | 76
3.3.4 Time consumption of the bisection method | 78
3.4 Sequential method of stratified sampling | 79
3.5 Deterministic construction of partitions | 80
3.6 Conclusion | 82
4 Methods of Adaptive Integration Based on Global Approximation | 83
4.1 Global approximations | 83
4.1.1 Approximations by orthonormalised functions: Basic relations | 84
4.1.2 Conditions for algorithm convergence | 86
4.2 Adaptive integration over the class S_p | 92
4.2.1 Haar system of functions and univariate classes of functions S_p | 92
4.2.2 Adaptive integration over the class S_p: One-dimensional case | 93
4.2.3 Expansion into parts of differing dimensionalities: Multidimensional classes S_p | 95
4.2.4 Adaptive integration over the class S_p: Multidimensional case | 97
4.3 Adaptive integration over the class E_s^α | 103
4.3.1 The classes of functions E_s^α | 104
4.3.2 Adaptive integration with the use of trigonometric approximations | 104
4.4 Conclusion | 108
5 Numerical Experiments | 111
5.1 Test problems setup | 111
5.1.1 The first problem | 111
5.1.2 The second problem | 112
5.2 Results of experiments | 113
5.2.1 The first test problem | 113
5.2.2 The second test problem | 121
6 Adaptive Importance Sampling Method Based on Piecewise Constant Approximation | 123
6.1 Introduction | 123
6.2 Investigation of efficiency of the adaptive importance sampling method | 123
6.2.1 Adaptive and sequential importance sampling schemes | 123
6.2.2 Comparison of adaptive and sequential schemes | 126
6.2.3 Numerical experiments | 128
6.2.4 Conclusion | 131
6.3 Adaptive importance sampling method in the case where the number of bisection steps is limited | 132
6.3.1 The adaptive scheme for one-dimensional improper integrals | 132
6.3.2 The adaptive scheme for the case where the number of bisection steps is limited | 134
6.3.3 Peculiarities and capabilities of the adaptive importance sampling scheme in the case where the number of bisection steps is fixed | 135
6.3.4 Numerical experiments | 136
6.3.5 Conclusion | 140
6.4 Solution of a problem of navigation by distances to pin-point targets with the use of the adaptive importance sampling method | 141
6.4.1 Problem setup | 141
6.4.2 Application of the adaptive importance sampling method to calculating the optimal estimator of the object position | 143
6.4.3 A numerical experiment | 145
6.4.4 Conclusion | 148

Part II: Solution of Integral Equations
7 Semi-Statistical Method of Solving Integral Equations Numerically | 151
7.1 Introduction | 151
7.2 Basic relations | 152
7.3 Recurrent inversion formulas | 154
7.4 Non-degeneracy of the matrix of the semi-statistical method | 155
7.5 Convergence of the method | 161
7.6 Adaptive capabilities of the algorithm | 163
7.7 Qualitative considerations on the relation between the semi-statistical method and the variational ones | 165
7.8 Application of the method to integral equations with a singularity | 165
7.8.1 Description of the method and peculiarities of its application | 165
7.8.2 Recurrent inversion formulas | 168
7.8.3 Error analysis | 168
7.8.4 Adaptive capabilities of the algorithm | 171
8 Problem of Vibration Conductivity | 173
8.1 Boundary value problem of vibration conductivity | 173
8.2 Integral equations of vibration conductivity | 174
8.3 Regularisation of the equations | 180
8.4 An integral equation with enhanced asymptotic properties at small β | 184
8.5 Numerical solution of vibration conductivity problems | 187
8.5.1 Solution of the test problem | 187
8.5.2 Analysis of the influence of the sphere distortion and the external stress character on the results of the numerical solution | 190
9 Problem on Ideal-Fluid Flow Around an Airfoil | 193
9.1 Introduction | 193
9.2 Setup of the problem on flow around an airfoil | 193
9.3 Analytic description of the airfoil contour | 195
9.4 Computational algorithm and optimisation | 198
9.5 Results of numerical computation | 199
9.5.1 Computation of the velocity around an airfoil | 199
9.5.2 Analysis of the density adaptation efficiency | 201
9.5.3 Computations on test cascades | 205
9.6 Conclusions | 207
9.7 A modified semi-statistical method | 208
9.7.1 Computational scheme | 209
9.7.2 Ways to estimate the variance in the computing process | 210
9.7.3 Recommendations and remarks to the scheme of the modified semi-statistical method | 211
9.7.4 Numerical experiment for a prolate airfoil | 211
10 First Basic Problem of Elasticity Theory | 215
10.1 Potentials and integral equations of the first basic problem of elasticity theory | 215
10.1.1 The force and pseudo-force tensors | 215
10.1.2 Integral equations of the first basic problem | 217
10.2 Solution of some spatial problems of elasticity theory using the method of potentials | 218
10.2.1 Solution of the first basic problem for a series of centrally symmetric spatial regions | 219
10.2.2 Solution of the first basic problem for a sphere | 220
10.2.3 Solution of the first basic problem for an unbounded medium with a spherical cavity | 220
10.2.4 Solution of the first basic problem for a hollow sphere | 221
10.3 Solution of integral equations of elasticity theory using the semi-statistical method | 223
10.4 Formulas for the optimal density | 225
10.5 Results of numerical experiments | 227
11 Second Basic Problem of Elasticity Theory | 231
11.1 Fundamental solutions of the first and second kind | 231
11.2 Boussinesq potentials | 234
11.3 Weyl tensor | 235
11.4 Weyl force tensors | 237
11.5 Arbitrary Lyapunov surface | 238
12 Projectional and Statistical Method of Solving Integral Equations Numerically | 241
12.1 Basic relations | 241
12.2 Recurrent inversion formulas | 244
12.3 Non-degeneracy of the matrix of the method | 246
12.4 Convergence of the method | 250
12.5 Advantages of the method and its adaptive capabilities | 253
12.6 Peculiarities of the numerical implementation | 255
12.7 Another computing technique: Averaging of approximate solutions | 257
12.8 Numerical experiments | 259
12.8.1 The test problem | 259
12.8.2 The problem on steady-state forced small transverse vibration of a pinned string caused by a harmonic force | 264
Afterword | 271
Bibliography | 273
Index | 277

Introduction: Statistical Computing Algorithms as a Subject of Adaptive Control

This monograph is devoted to the research and development of adaptive stochastic methods for solving a quite wide class of problems, formalised by the corresponding mathematical models as problems of evaluating integrals and solving integral equations. These methods possess a high convergence rate and allow deep control over the computing process. This is achieved by applying the ideas of the theory of adaptive control to such 'unconventional' objects as stochastic computing processes. The methods developed in this monograph can be thought of as a new class of hybrid self-adapting methods utilising the apparatus of adaptive control over the computing processes; they possess both the advantages of the stochastic methods (self-adaptation of the grid; recursive growth of its dimension; a way to control the error while performing the computation; a relatively weak dependence on the dimensionality of the problem) and those of the deterministic ones (as concerns the rate of convergence).

Recently, a great body of practical problems has been formalised by means of mathematical models whose analysis is carried out on the basis of grid methods. These problems concern, for example, the stress-strain analysis of a material, investigation of heat and vibration fields, and estimation of integral characteristics in various applied problems (filtration problems, analysis of complex control systems). However, with constant technological innovation and development, the need to design numerical methods for more and more complicated models becomes pressing, which heavily increases the demand for intellectual labour and computer resources. In particular, it is known that the so-called 'dimensionality curse' arises while analysing spatial constructions of complex configuration: in order to find satisfactory numerical solutions with the use of traditional grid methods one has to solve sets of equations of great dimensionality, which, in turn, becomes a source of systematic errors. The most promising approach to attack this problem consists of an efficient choice of the nodes of the grid and a special non-uniform organisation of the integration grid, as well as parallelisation of the computing processes. It seems quite obvious that in order to achieve a higher accuracy of solution on a grid with a constant number of nodes it suffices to construct a grid which is denser in those parts of the integration domain where the intensity of solution variation is the greatest.

The numerical methods existing up to now can be divided into two types, namely deterministic and statistical ones. The deterministic methods, whose best-known representatives are the method of finite elements (FEM) and the method of boundary elements (BEM), are rather well studied and have found widespread use. It is known that these methods yield a sufficient accuracy provided that the number of nodes of the grid is large enough, so their efficiency depends much on a good choice of the coordinates of the grid nodes, which requires a priori information concerning the properties of the solutions sought for and relies heavily on the skills of the researcher. While using

the deterministic methods, it is very difficult to solve the problem of optimal choice of the coordinates of the grid nodes for two reasons: first, the optimal placement would require the solutions sought for to be known in advance; and second, the dimensionality and the complexity of this problem are of the same order as those of the initial one, since the grid must be optimised with respect to the coordinates of each individual node.

Recently, statistical numerical methods have been under active development, which are based on statistical trials (the Monte Carlo methods). They possess a number of unique abilities: they depend only weakly on the dimensionality of the problem to be solved, and the process of building the grid is easily controlled because the grid nodes are not treated individually but are characterised by an ensemble-wide characteristic, namely the distribution density. In addition, the computing process can be organised recursively, with grid nodes successively added, as well as in the form of parallel computations, while the accuracy is estimated in the process of computation. Nevertheless, until recently the use of statistical methods was not so wide for two reasons: first, the convergence is not as fast as that of deterministic methods; second, there are very few algorithms for solving the problems of elasticity and heat conduction which often occur in engineering practice.

In this monograph, we consider computational algorithms from the viewpoint of adaptive control. The subjects under control are processes which compute multidimensional integrals and solve integral equations numerically with the use of statistical methods. The control tool for them is the function of the density of distribution of the nodes of the random grid of integration. As the criterion of optimality of the functioning we choose the criterion of accuracy of the computation, which possesses an analytic representation for both subjects under consideration. One of the key features of the statistical computational procedures is their recurrence, which makes them well suited to be treated as dynamic systems, for which control theory has been well developed (see [11, 43, 46]). The application of the control theory to subjects of this kind makes it possible to increase the computation efficiency considerably. In Figure 1, we give a design of the system under consideration, namely a stochastic computing algorithm.

Usually, the characteristics of computational algorithms are chosen starting from their convergence on a class of integrands or equations, which surely cannot take into account the individual properties of the particular subject we are applying our algorithm to. The idea of our approach consists of making use of the methods of adaptive control theory to carry out self-adaptation of the computational algorithms in the very process of their functioning. Such an approach turns out to be quite efficient because we are able to find analytic solutions of variational problems of optimisation of the accuracy of algorithms with respect to the function of the density of distribution of the nodes of the integration grid. In doing so, the optimal density of the distribution turns out to be heavily dependent on particular properties of the integrands and the integral equations. This dependence is precisely the cornerstone of the application of adaptive control methods, which makes it possible to raise the efficiency of the computing processes significantly.


[Fig. 1. Design of a stochastic computing algorithm: the problem and the a priori data are fed to the stochastic computing algorithm, which is coupled to an adaptation apparatus; the accuracy is estimated recursively, a halt criterion is checked, and the result is output.]

The monograph thus fills in these gaps. It presents statistical algorithms for problems which have not been solved earlier by statistical methods; known algorithms are improved, which yields faster convergence and a possibility of self-adaptation of the computing processes while solving basic problems of computational mathematics. This is achieved by using efficient deterministic procedures in statistical algorithms as well as by applying special adaptive computational methods.

The so-called semi-statistical method is one of them. It combines both statistical and deterministic operations and is intended for the numerical solution of integral equations. This method allows for a recursive increase of the number of nodes of the random integration grid, optimisation of its structure, and control over the accuracy of the estimates obtained in the computing process. Developing the semi-statistical method further, we arrive at the projection statistical method. The use of additional basis functions makes it possible to expand the adaptive capabilities and advantages of the hybrid algorithms.

In the process of developing the semi-statistical method we solve an auxiliary problem which is of interest in its own right: the problem of adaptive evaluation of an integral. The adaptive statistical algorithm of numerical integration obtained in this way converges at the same rate as its deterministic analogue, whereas the traditional statistical methods provide a much slower convergence rate. The suggested methods perform at their best when multidimensional integrals have to be evaluated. So, a big part of the book is devoted to the theory of this algorithm and to ways of making its convergence even faster.

This monograph consists of two parts. Part I deals with adaptive statistical methods to evaluate integrals in both the one-dimensional and the multidimensional case. In Part II we consider adaptive hybrid methods to solve integral equations as well as their applications to problems of continuum mechanics. Let us give a brief survey of the monograph.

In Chapter 1, we consider the basic well-known concepts and approaches of the Monte Carlo method. This chapter is of introductory nature and will help in the further presentation.

In Chapter 2, we give a description and thorough analysis of the computational scheme of the 'sequential Monte Carlo method' and present a general approach to constructing adaptive integration methods on its base.

Chapter 3 is devoted to the investigation of the adaptive method of construction of piecewise approximations of smooth functions of many variables and the corresponding algorithms of integration.

In Chapter 4, we suggest an adaptive method of construction of global approximations of functions admitting an expansion into a quickly converging Fourier series over some orthonormalised system, and investigate the corresponding methods of integration for some classes of functions.

Chapter 5 contains results of numerical experiments on solving a series of test problems by the suggested methods.

In Chapter 6, we study the efficiency of the adaptive method of importance sampling in its simplest version, where piecewise constant approximations are utilised, and we demonstrate its applications. This chapter is written in co-authorship with N. A. Berkovsky.

In Chapter 7, we present the theory of the semi-statistical method of solving integral equations numerically. We consider the algorithmic capabilities of the method and obtain formulas for the recurrent representation of the algorithms of the method. We prove the convergence of the suggested algorithms. We then investigate the question concerning the grid optimisation.

Chapter 8 is devoted to applying the semi-statistical method to the vibration conductivity problem. We give the integral equations. Upon carrying out numerical experiments for a series of test problems, we illustrate the potentialities of the method and the efficiency of adaptive optimisation of the structure of a random grid.

In Chapter 9, we discuss how to apply the semi-statistical method to the problem of ideal-fluid flow around a plate. We give the integral equations and discuss peculiarities of the contour definitions. The main attention is paid to the numerical investigation of the efficiency of the method for stretched contours and to ways of enhancing the accuracy of the evaluation by means of adaptive procedures to optimise the grid structure. This chapter is also written in co-authorship with N. A. Berkovsky.

In Chapter 10, we consider the peculiarities of application of the semi-statistical method to solving the integral equations of the first basic problem of elasticity theory. The corresponding integral equations are given in an invariant form. For a series


of classical bodies (a sphere, a spherical cavity, a hollow sphere), we give analytic solutions of the integral equation itself. For test problems we give results of numerical simulation.

Chapter 11 is devoted to applying the semi-statistical method to solving the integral equations of the second basic problem of elasticity theory. The Weyl integral equations we use are presented in an invariant form.

Chapter 12 deals with the statistical projection method for solving integral equations, which is a generalisation of the above-suggested semi-statistical method.

In conclusion, a brief summary of the results obtained in the monograph is given, and prospective directions of their further development are mentioned.

The authors are sincerely grateful to Professor A. V. Kolchin, who kindly agreed to translate the book into English.


Part I: Evaluation of Integrals

1 Fundamentals of the Monte Carlo Method to Evaluate Definite Integrals

The Monte Carlo methods (methods of statistical trials) play a quite important part in modern computational mathematics. They are based on methods of mathematical statistics and hence possess a series of unique properties which are hard to find in traditional deterministic ones. Having come into being in the middle of the twentieth century, they have found widespread use in actual practice. The development of the Monte Carlo methods relies heavily on works of S. M. Ermakov, I. M. Sobol, G. A. Mikhailov, J. Spanier, J. M. Hammersley, J. von Neumann, S. Ulam, D. C. Handscomb, J. H. Halton, and others.

The Monte Carlo methods can be divided into several groups. The first of them consists of the methods based on the simulation of the physical phenomenon under consideration; they are usually referred to as the imitation ones. Other methods are devoted to solving various problems of computational mathematics such as solution of simultaneous linear equations, integral equations, boundary value problems, approximation of a function, numerical integration, etc. The methods considered in this monograph fall into precisely this group and are mainly dedicated to the evaluation of definite integrals.

This chapter contains a brief introduction to the Monte Carlo methods to evaluate integrals. We describe basic techniques to simulate random variables and present necessary information from the Monte Carlo method theory. We also consider some of the most widely used approaches to raise the efficiency of integration by the Monte Carlo method.

1.1 Problem setup

The key problem under consideration is to find an approximate value of the integral

J = ∫_Ω f(x) dx   (1.1)

of a function f(x) ∈ L₂(Ω) (generally speaking, complex-valued) over a bounded closed domain Ω of the s-dimensional Euclidean space ℝ^s. The complexity of this problem depends essentially (for s > 1) on the geometrical peculiarities of the domain of integration Ω. Domains of rather complicated geometry are usually divided into subdomains which are close to standard shapes (such as a multidimensional cube, sphere, simplex) to which most known integration methods are applicable. Where possible, we make our reasoning independent of the geometry of the domain Ω. Nevertheless, while constructing an integration algorithm we often pay special attention to the particular case of the problem where the integration domain Ω is an s-dimensional hyperparallelepiped. Since by a linear change of variables the hyperparallelepiped reduces to a unit s-dimensional cube, without loss of generality we consider the problem to evaluate the integral

J = ∫_{K_s} f(x) dx,

where K_s = {x = (x_1, . . . , x_s) | 0 ≤ x_i ≤ 1, i = 1, . . . , s} is the unit cube in the space ℝ^s.

1.2 Essence of the Monte Carlo method

The cornerstone of the Monte Carlo method consists of the reduction of the problem under consideration to the calculation of some probabilistic characteristics of a certain random variable (as a rule, its mathematical expectation) and the subsequent statistical estimation of these characteristics. As an elementary example we consider the evaluation of the integral

J = ∫_0^1 f(x) p(x) dx

by the Monte Carlo method, where p(x) ≥ 0 for x ∈ [0, 1] and

∫_0^1 p(x) dx = 1.

Assume that some random variable ξ is given, which is distributed on the interval [0, 1] with density p(x). Then the random variable η defined by the equality η = f(ξ) obeys the relation

E{η} = E{f(ξ)} = ∫_0^1 f(x) p(x) dx = J,

that is, we construct a variable whose mathematical expectation is equal to the integral sought for. Let n independent realisations ξ_1, . . . , ξ_n of the random variable ξ be given. Then

J_n = (1/n) ∑_{k=1}^{n} f(ξ_k)

is an unbiased estimator of the integral J, which, by the law of large numbers [59], is, with high probability, close to J for a sufficiently large n.

This example demonstrates that in order to utilise the Monte Carlo methods one has to be able to simulate random variables distributed by given laws. Most generators of random numbers realised in modern computer software/hardware make it possible to obtain a sequence of independent realisations of a random variable which is uniformly distributed on the interval [0, 1] (as a rule, some 'pseudo-random' numbers are used in actual practice, which satisfy certain statistical tests for independence and uniformity). But it is possible to obtain realisations of a random variable distributed by an arbitrary law from the standard ones with the use of some transformations. We consider the most widely used ones in what follows.
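This elementary scheme translates almost directly into code. The following minimal sketch is illustrative only: the integrand, the uniform density p(x) ≡ 1 and the sample size are our own choices, not taken from the text.

import random
import math

def mc_expectation(f, sample, n=100_000):
    """Estimate E{f(xi)} = integral of f(x)p(x)dx by averaging f over draws of xi."""
    total = 0.0
    for _ in range(n):
        total += f(sample())
    return total / n

# Illustrative example: integral of sin(pi*x) over [0, 1] with p(x) = 1 (uniform density).
estimate = mc_expectation(lambda x: math.sin(math.pi * x), random.random)
print(estimate)  # close to 2/pi for large n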

1.3 Sampling of a scalar random variable

1.3.1 The inverse function method

Definition 1.3.1. Let F(x) be a function which possesses the distribution function properties (that is, F(x) is monotonically non-decreasing, left-continuous, 0 ≤ F(x) ≤ 1, F(x) → 0 as x → −∞, F(x) → 1 as x → ∞). We define the function G(y) by the relation

G(y) = inf{z | y < F(z)}  if 0 ≤ y < 1,   G(1) = lim_{y→1−0} G(y),   (1.2)

and let x = G(y) solve the equation F(x) = y for 0 ≤ y ≤ 1. The relation between F(x) and G(y) is given in Figure 1.1. Lemma 1.3.2. The inequality y < F(x) holds true if and only if y < 1 and x < G(y). Proof. Let y < F(x). Since F(x) ≤ 1, we see that y < 1. Since F(x) is left-continuous, there exists z < x such that y < F(z). Then, by definition (1.2), G(y) ≤ z < x, which is the desired result. Now let x < G(y) and y < 1. Assume that y ≥ F(x). Consider the set A = {z | y < F(z)}. It is non-empty, because y < 1. In addition, x < z for any z ∈ A, because F(x) ≤ y < F(z). Therefore, x ≤ inf{z | z ∈ A} = G(y). This contradiction completes the proof. Theorem 1.3.3. Let γ be a random variable which is uniformly distributed on the interval [0, 1]. Then the random variable ξ which solves the equation F(ξ) = γ, that is, ξ = G(γ), has the distribution function F(x). Proof. Let us find the distribution function of the variable ξ : P{ξ < x} = P{G(γ) < x} = P{G(γ) < x, γ < 1} + P{G(γ) < x, γ = 1}. The latter addend is zero because of continuity of the random variable γ. Therefore, P{ξ < x} = P{G(γ) < x, γ < 1} = P{γ < F(x)} = F(x), which is the desired result. This general theorem immediately yields two key algorithms of the inverse function method.

[Fig. 1.1. Relation between F(x) and G(y): the graph of y = F(x) and the graph of the generalised inverse x = G(y).]

(i) Let us simulate a discrete random variable ξ distributed by the law P{ξ = x_i} = p_i, i = 1, . . . , m. For the sake of definiteness, let x_1 < x_2 < ⋅ ⋅ ⋅ < x_m. From Theorem 1.3.3 it follows that the variable ξ has to be simulated by the formula

ξ = x_i   if   ∑_{k=1}^{i−1} p_k ≤ γ < ∑_{k=1}^{i} p_k.   (1.3)

The generalisation of this algorithm to the case of infinite m is obvious.

(ii) Let us simulate a continuous random variable ξ distributed on [a, b] with density p(x) > 0. Let F(x) denote its distribution function, which is of the form

F(x) = ∫_a^x p(t) dt

for x ∈ [a, b]. In this case F(x) monotonically increases on [a, b] and has the inverse function F^{−1}(y) on it, which coincides with G(y). Hence the variable ξ can be found by the formula

ξ = F^{−1}(γ).   (1.4)

Remark 1.3.4. In some cases it is more convenient to use the formula ξ = F^{−1}(1 − γ), which is equivalent to (1.4), since γ and 1 − γ are identically distributed.

Example 1.3.5. Let us simulate the discrete random variable ξ which is distributed by the geometrical law:

p_k = P{ξ = k} = p(1 − p)^k,   k = 0, 1, 2, . . .

It is easily seen that

∑_{k=0}^{i−1} p_k = p ∑_{k=0}^{i−1} (1 − p)^k = p (1 − (1 − p)^i)/(1 − (1 − p)) = 1 − (1 − p)^i.

We thus arrive at the following method to find ξ:

ξ = i   if   1 − (1 − p)^i ≤ γ < 1 − (1 − p)^{i+1}.

After elementary transformations we arrive at the expressions

ξ = ⌊ln(1 − γ)/ln(1 − p)⌋   or   ξ = ⌊ln γ/ln(1 − p)⌋,

which are equivalent since γ and 1 − γ are identically distributed.

Example 1.3.6. Let us simulate the exponential random variable ξ distributed for x_0 ≤ x < ∞ with density p(x) = a e^{−a(x−x_0)}. Since

F(x) = ∫_{x_0}^{x} a e^{−a(y−x_0)} dy = 1 − e^{−a(x−x_0)},

equation (1.4) used to find ξ takes the form 1 − e^{−a(ξ−x_0)} = γ, hence

ξ = x_0 − (1/a) ln(1 − γ)   or   ξ = x_0 − (1/a) ln γ.
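For illustration, both variants of the inverse function method are easy to program. The sketch below is our own and only assumes formulas (1.3) and (1.4); the particular parameter values are arbitrary.

import random
import math

def sample_exponential(a, x0=0.0):
    """Inverse function method, formula (1.4), for the density a*exp(-a*(x - x0))."""
    gamma = random.random()
    return x0 - math.log(1.0 - gamma) / a

def sample_discrete(values, probs):
    """Formula (1.3): return x_i when the cumulative sum of p_k first exceeds gamma."""
    gamma = random.random()
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if gamma < cumulative:
            return x
    return values[-1]  # guard against round-off in the last cumulative sum

print(sample_exponential(a=2.0))
print(sample_discrete([1, 2, 3], [0.2, 0.5, 0.3]))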

Exercise 1.3.7. Prove that the following relations are true. (i) The uniform distribution on the interval [a, b] is implemented by the formula ξ = a + γ(b − a).

(ii) Let ξ be the continuous random variable with the piecewise constant distribution density

p_ξ(x) = 0 if x < x_0;   p_ξ(x) = p_i if x_{i−1} ≤ x < x_i, i = 1, . . . , m;   p_ξ(x) = 0 if x ≥ x_m.   (1.5)

In order to simulate it one has to use the formula

ξ = x_{i−1} + (1/p_i) (γ − ∑_{k=1}^{i−1} p_k (x_k − x_{k−1}))   (1.6)

for

∑_{k=1}^{i−1} p_k (x_k − x_{k−1}) ≤ γ ≤ ∑_{k=1}^{i} p_k (x_k − x_{k−1}).

1.3.2 The superposition method

We assume that the distribution function of the random variable ξ under consideration admits the representation

F(x) = ∑_{k=1}^{m} c_k F_k(x),

where all F_k(x) are distribution functions as well and c_k > 0. As x → ∞, we see that c_1 + c_2 + ⋅ ⋅ ⋅ + c_m = 1. Therefore, we are able to introduce the discrete random variable η taking the values 1, 2, . . . , m and distributed by the law P{η = k} = c_k.

Theorem 1.3.8. Let γ_1, γ_2 be independent random variables uniformly distributed on the interval [0, 1]. If from γ_1 we draw the value η = k of the random variable η in accordance with formula (1.3), and then find ξ from the equation F_k(ξ) = γ_2, then the distribution function of ξ is equal to F(x).

Proof. It follows from Theorem 1.3.3 that the conditional distribution function of ξ under the condition η = k is equal to F_k(x). Hence, by the formula of total probability we obtain

P{ξ < x} = ∑_{k=1}^{m} P{ξ < x | η = k} P{η = k} = ∑_{k=1}^{m} F_k(x) c_k = F(x),

which is the desired result.

Exercise 1.3.9. Prove that one random number γ suffices in the superposition method if first from γ one draws the value η = k of the random variable η and then finds ξ from the equation F_k(ξ) = θ, where

θ = (γ − ∑_{j=1}^{k−1} c_j) c_k^{−1}

(the modified superposition method suggested by G. A. Mikhailov).

Hint: Show that the random variable θ is uniformly distributed on the interval [0, 1], and the variables θ and η are independent.

Exercise 1.3.10. Demonstrate that the random variable ξ with piecewise constant density (1.5) can be simulated as follows: ξ = x_{i−1} + γ_2 (x_i − x_{i−1}), where

∑_{k=1}^{i−1} p_k (x_k − x_{k−1}) ≤ γ_1 ≤ ∑_{k=1}^{i} p_k (x_k − x_{k−1}),   i = 1, 2, . . . , m.

Obtain a formula for simulating ξ by the modified superposition method and compare it with (1.6).
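A possible illustration of Theorem 1.3.8 in code follows; the two-component exponential mixture and the function names are our own illustrative choices, not an example from the book.

import random
import math

def sample_superposition(weights, inverse_cdfs):
    """Theorem 1.3.8: pick component k with probability c_k, then solve F_k(xi) = gamma2."""
    gamma1, gamma2 = random.random(), random.random()
    cumulative = 0.0
    for c, inv_f in zip(weights, inverse_cdfs):
        cumulative += c
        if gamma1 < cumulative:
            return inv_f(gamma2)
    return inverse_cdfs[-1](gamma2)

# Illustrative mixture: F(x) = 0.3*F_1(x) + 0.7*F_2(x) with exponential components.
draw = sample_superposition(
    [0.3, 0.7],
    [lambda y: -math.log(1.0 - y) / 1.0,   # inverse of 1 - exp(-x)
     lambda y: -math.log(1.0 - y) / 5.0],  # inverse of 1 - exp(-5x)
)
print(draw)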

1.3.3 The rejection method

Let us simulate a random variable ξ distributed on a finite interval [a, b] with a bounded density p(x) ≤ c.

Theorem 1.3.11. Let γ_1, γ_2 be independent random variables uniformly distributed on the interval [0, 1], and let ξ′ = a + γ_1(b − a), η′ = cγ_2. Then the random variable ξ defined by the condition

ξ = ξ′   if   η′ < p(ξ′)

has the probability density p(x).

Proof. We observe that the point (ξ′, η′) is uniformly distributed in the rectangle [a, b] × [0, c]. Calculate the conditional probability

P{ξ < z} = P{ξ′ < z | η′ < p(ξ′)} = P{ξ′ < z, η′ < p(ξ′)} / P{η′ < p(ξ′)}.

The denominator of the last fraction is the probability for the point (ξ′, η′) to fall below the curve y = p(x). Since the density of the point (ξ′, η′) is constant and equal to [c(b − a)]^{−1}, we see that

P{η′ < p(ξ′)} = ∫_a^b dx ∫_0^{p(x)} [c(b − a)]^{−1} dy = [c(b − a)]^{−1} ∫_a^b p(x) dx = [c(b − a)]^{−1}.

The numerator is equal to the probability that the point (ξ′, η′) finds itself below the curve and, moreover, ξ′ < z:

P{ξ′ < z, η′ < p(ξ′)} = ∫_a^z dx ∫_0^{p(x)} [c(b − a)]^{−1} dy = [c(b − a)]^{−1} ∫_a^z p(x) dx.

Thus,

P{ξ < z} = ∫_a^z p(x) dx,

which is the desired result.
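Theorem 1.3.11 reads directly as a loop: draw a candidate point uniformly in the rectangle [a, b] × [0, c] and keep its abscissa whenever it falls under the graph of p(x). A minimal sketch, with an arbitrary illustrative density:

import random

def sample_rejection(p, a, b, c):
    """Rejection method: accept xi' = a + gamma1*(b-a) when eta' = c*gamma2 < p(xi')."""
    while True:
        xi = a + random.random() * (b - a)
        eta = c * random.random()
        if eta < p(xi):
            return xi

# Illustrative density p(x) = 3x^2 on [0, 1], bounded by c = 3.
print(sample_rejection(lambda x: 3.0 * x * x, 0.0, 1.0, 3.0))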

1.4 Sampling of a vector random variable

Let Ω be a bounded closed subset of the s-dimensional Euclidean space ℝ^s, and let p(x) be some function such that

p(x) > 0 for x ∈ Ω,   and   ∫_Ω p(x) dx = 1.

Let us simulate independent realisations of a vector-valued random variable ξ ∈ ℝ^s whose values lie in Ω and whose distribution density is p(x). This problem can be solved, as above, by the methods of inverse functions, superposition, and rejection.

First, let us consider the extension of the method of inverse functions. The joint distribution density of the components of the vector ξ admits the representation as a product

p_ξ(x_1, . . . , x_s) = p_1(x_1) p_2(x_2 | x_1) p_3(x_3 | x_1, x_2) ⋅ ⋅ ⋅ p_s(x_s | x_1, . . . , x_{s−1}),

where p_i(x_i | x_1, . . . , x_{i−1}) is the conditional distribution density of the component ξ_i under the condition that ξ_1 = x_1, . . . , ξ_{i−1} = x_{i−1}. We introduce the conditional distribution functions

F_i(x_i | x_1, . . . , x_{i−1}) = ∫_{−∞}^{x_i} p_i(x | x_1, . . . , x_{i−1}) dx,   i = 2, . . . , s.

Theorem 1.4.1. Let γ_1, . . . , γ_s be independent random variables uniformly distributed on the interval [0, 1]. Then the family of the random variables ξ_1, ξ_2, . . . , ξ_s which result

from sequential solving of the equations

F_1(ξ_1) = γ_1,
F_2(ξ_2 | ξ_1) = γ_2,
. . .
F_s(ξ_s | ξ_1, . . . , ξ_{s−1}) = γ_s,

has the joint probability density p_ξ(x_1, . . . , x_s).

Proof. If the values ξ_1 = x_1, . . . , ξ_{i−1} = x_{i−1} are fixed, then, as follows from Theorem 1.3.3, the random variable ξ_i with distribution function F_i(x | x_1, . . . , x_{i−1}) can be defined from the equation F_i(ξ_i | x_1, . . . , x_{i−1}) = γ_i. Then the probability of the inequality x_i < ξ_i < x_i + dx_i is equal to

P{x_i < ξ_i < x_i + dx_i} = p_i(x_i | x_1, . . . , x_{i−1}) dx_i.

Therefore, to within infinitesimals of higher order, we obtain

P{x_1 < ξ_1 < x_1 + dx_1, . . . , x_s < ξ_s < x_s + dx_s}
= P{x_1 < ξ_1 < x_1 + dx_1} P{x_2 < ξ_2 < x_2 + dx_2 | ξ_1 = x_1} ⋅ ⋅ ⋅ P{x_s < ξ_s < x_s + dx_s | ξ_1 = x_1, . . . , ξ_{s−1} = x_{s−1}}
= p_1(x_1) dx_1 p_2(x_2 | x_1) dx_2 ⋅ ⋅ ⋅ p_s(x_s | x_1, . . . , x_{s−1}) dx_s
= p_ξ(x_1, . . . , x_s) dx_1 ⋅ ⋅ ⋅ dx_s,

which proves the theorem.

Theorem 1.4.1 allows us to decompose the problem to simulate a multidimensional random variable into s problems to simulate scalar random variables ξ_i. It is trivial to extend the superposition method to the multidimensional case. The rejection method is not altered much; but in order to realise it one has to simulate random variables ξ′ which are uniformly distributed in the domain Ω. To do this, we utilise the approach based on embedding the domain Ω into the s-dimensional rectangular parallelepiped Π. In order to find points ξ′ uniformly distributed in Ω, we can calculate points ξ″ uniformly distributed in Π, which should present no problems, and separate those points which fall into Ω. It is easily seen that the relation

P{ξ′ ∈ G} = P{ξ″ ∈ G | ξ″ ∈ Ω} = P{ξ″ ∈ G} / P{ξ″ ∈ Ω}

holds true for any domain G ⊂ Ω. Since ξ″ is uniformly distributed in Π, the probability that it falls into a domain is proportional to the measure of this domain, that is,

P{ξ″ ∈ G} = μ(G)/μ(Π),   P{ξ″ ∈ Ω} = μ(Ω)/μ(Π),

where μ(⋅) is the Lebesgue measure. Hence it follows that

P{ξ′ ∈ G} = μ(G)/μ(Ω),

in other words, ξ′ is uniformly distributed in Ω.
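The embedding argument above is itself a multidimensional rejection scheme and can be sketched as follows; the bounding box and the unit disc taken as Ω are illustrative assumptions, not an example from the book.

import random

def sample_uniform_in_domain(indicator, lower, upper):
    """Draw xi'' uniformly in the parallelepiped [lower, upper] and keep it when it lies in Omega."""
    while True:
        point = [lo + random.random() * (hi - lo) for lo, hi in zip(lower, upper)]
        if indicator(point):
            return point

# Illustrative Omega: the unit disc embedded in the square [-1, 1]^2.
in_disc = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
print(sample_uniform_in_domain(in_disc, [-1.0, -1.0], [1.0, 1.0]))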

1.5 Elementary Monte Carlo method and its properties

We consider the problem to evaluate the integral

J = ∫_Ω f(x) dx   (1.7)

of a function f(x) ∈ L₂(Ω) over a bounded closed domain Ω ⊂ ℝ^s. Let us use a generator of random numbers which outputs a sequence of independent realisations x_i of an s-dimensional random variable ξ distributed in the domain Ω with density p(x) such that

∫_Ω p(x) dx = 1   (1.8)

and p(x) > 0 for all x ∈ Ω such that f(x) ≠ 0. Such a density is referred to as admissible for f(x) (see [62, p. 108]). Along with the random variable ξ we consider the random variable η = f(ξ)/p(ξ), which obeys the equalities

E{η} = ∫_Ω (f(x)/p(x)) p(x) dx = J,   Var{η} = E{|η − J|²} = ∫_Ω (|f(x)|²/p(x)) dx − |J|².

Here and in what follows |a| means the absolute value of a complex number: |a|² = a ā, where the bar stands for the complex conjugation. In this case the integral J can be estimated by the formula

J ≈ J_n = (1/n) ∑_{i=1}^{n} η_i = (1/n) ∑_{i=1}^{n} f(x_i)/p(x_i).   (1.9)

Relation (1.9) describes the elementary Monte Carlo method to evaluate integral (1.7). The estimator J_n is unbiased, that is, E{J_n} = J, and consistent, with the variance

Var{J_n} = E{|J_n − J|²} = Var{η}/n = (1/n) (∫_Ω (|f(x)|²/p(x)) dx − |J|²).   (1.10)

In the simplest case where the random variable ξ is uniformly distributed in the domain Ω, that is, the density is p(x) ≡ (μ(Ω))^{−1}, formula (1.9) takes the form

J ≈ J_n = (μ(Ω)/n) ∑_{i=1}^{n} f(x_i),   (1.11)

which describes the 'crude' Monte Carlo method. The variance of the estimator J_n in this case is of the form

Var{J_n} = (1/n) (μ(Ω) ∫_Ω |f(x)|² dx − |J|²).   (1.12)

Under the condition that Var{η} is finite, the Chebyshev inequality [56] implies convergence of the estimators J_n in probability to the value of the integral J with the rate O(n^{−1/2}). Namely, for δ > 0 as small as desired we see that

P{|J_n − J| ≤ (1/δ) √(Var{η}/n)} ≥ 1 − δ².

By virtue of the strong law of large numbers, the estimators J_n converge, as n grows, to the value of the integral J almost surely. In addition, by virtue of the central limit theorem, the distribution of the variables J_n is close to the normal law for n large enough, which allows us to construct confidence intervals for them provided that we know (even approximately) their variances. In actual practice, in order to estimate the variance Var{J_n} its statistical estimator is used, namely the sample (or empirical) variance, which is of the form

σ_n² = (1/(n(n − 1))) ∑_{i=1}^{n} |η_i − J_n|².

Both J_n and the estimators of their variances σ_n² can be calculated recursively, which dictates the sequential character of the algorithm.

In summary, the basic properties of the elementary Monte Carlo method are as follows:
∙ it is simple and easily implemented;
∙ the estimator of the integral can be recursively calculated as n grows, while the accuracy of the estimator can be controlled by means of the central limit theorem;
∙ the convergence rate does not depend on the dimensionality of the problem.

While the first two properties are undeniably advantages of the method, the benefits of the third one are twofold. For small s (s ≤ 3), for sufficiently smooth integrands one can suggest deterministic integration methods which converge much faster than the Monte Carlo ones while consuming a comparable amount of labour. As the dimensionality of the problem grows, the efficiency of the deterministic methods in many cases decreases (or their complexity increases a lot), and use of the Monte Carlo methods becomes preferable. Moreover, the utilisation of special techniques allows for essential growth of the method accuracy, and it becomes possible to apply it even to problems of rather low dimensionality. Such techniques, called variance reduction methods (we speak about reducing the variance of the variable η averaged), are considered in detail below. It is worth emphasising that the mentioned convergence rate of the method takes place for any function in L₂(Ω); in other words, the method converges for a quite wide class of integrands.
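As an illustration of the 'crude' method (1.11) together with the sample variance σ_n² used for accuracy control, the following sketch (our own, with an arbitrary test integrand) accumulates the running sums needed for a sequential computation:

import random
import math

def crude_monte_carlo(f, dim, volume, n):
    """'Crude' Monte Carlo (1.11) over a unit hypercube scaled by `volume`,
    with the sample variance sigma_n^2 used as an accuracy control."""
    sum_f, sum_f2 = 0.0, 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]
        value = f(x)
        sum_f += value
        sum_f2 += value * value
    mean = sum_f / n
    estimate = volume * mean
    # sigma_n^2 = volume^2 / (n(n-1)) * sum (f_i - mean)^2
    sigma2 = volume * volume * (sum_f2 - n * mean * mean) / (n * (n - 1))
    return estimate, math.sqrt(sigma2)

# Illustrative test: the integral of x1*x2 over the unit square equals 1/4.
est, err = crude_monte_carlo(lambda x: x[0] * x[1], dim=2, volume=1.0, n=200_000)
print(est, "+/-", 3 * err)  # three-sigma confidence band via the central limit theorem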

1.6 Methods of variance reduction

In this section, we briefly describe the most widely used approaches to increasing the efficiency of the Monte Carlo method. The objective is to help the reader understand the essence, advantages, and deficiencies of these approaches, as well as to prepare the reader for the main contents of this monograph concerning adaptive statistical methods of numerical integration.

1.6.1 Importance sampling

The variance functional (1.10) of estimator (1.9) depends on the choice of the density p(x) of distribution of the random points, and the constant density corresponding to the 'crude' Monte Carlo method (1.11) by no means minimises it. So, choosing densities differing from a constant, we may try to decrease the variance. The minimisation of the variance functional with respect to the density under constraint (1.8) leads us to the known result (see [18, p. 117] and [62, p. 109]) that the minimum of the variance is attained at

p*(x) = |f(x)| (∫_Ω |f(x)| dx)^{−1}   (1.13)

and is equal to

Var{J_n} = (1/n) [(∫_Ω |f(x)| dx)² − |J|²].

For positive integrands this choice of the density implies the zero value of the variance, that is, the accurate evaluation of the integral. Although it is impossible to use the density p*(x) in actual practice, because the calculation of the normalising coefficient is as labour-consuming as the initial problem, its form allows us to suggest a way to approximate it. We have to find a non-negative function g(x) close to |f(x)| such that the integral of it is quite easily calculated. Following J. H. Halton [25], we refer to these functions as 'easy.' Then we set

p(x) = g(x) (∫_Ω g(x) dx)^{−1}.

This is precisely the essence of the importance sampling method. In this case the integral estimator is of the form

J_n = (∫_Ω g(x) dx) (1/n) ∑_{i=1}^{n} f(x_i)/g(x_i),   (1.14)

and its variance is

D_1 = Var{J_n} = (1/n) (∫_Ω g(x) dx ∫_Ω (|f(x)|²/g(x)) dx − |J|²).   (1.15)

The actual efficiency of this approach depends primarily on a priori information on the integrand: the more information, the easier to find the suitable ‘easy’ function.
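A minimal sketch of importance sampling with an 'easy' function g(x) follows; the particular choice g(x) = 1 + x for the integrand e^x on [0, 1], and the closed-form inverse of its distribution function, are illustrative assumptions of ours.

import random
import math

def importance_sampling(f, g, g_integral, sample_from_g, n):
    """Importance sampling estimator (1.14): (integral of g) * (1/n) * sum f(x_i)/g(x_i),
    where the points x_i are drawn with density proportional to g."""
    total = 0.0
    for _ in range(n):
        x = sample_from_g()
        total += f(x) / g(x)
    return g_integral * total / n

# Illustrative problem: integral of e^x over [0, 1] with the 'easy' function g(x) = 1 + x.
# The density p(x) = (1 + x)/(3/2) has the inverse distribution function x = sqrt(1 + 3*gamma) - 1.
sample = lambda: math.sqrt(1.0 + 3.0 * random.random()) - 1.0
est = importance_sampling(math.exp, lambda x: 1.0 + x, 1.5, sample, 100_000)
print(est)  # close to e - 1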

1.6.2 Control variate sampling

The variance functional (1.12) of estimator (1.11) is a positive quadratic functional of the integrand f(x) and becomes equal to zero for f(x) ≡ 0. What this means is that the absolute error of integration by the Monte Carlo method is the smaller, the closer the integrand is to zero. So, if one succeeded in finding a function g(x) which is close to f(x) and easily integrable, then only the integral of the difference between g and f should be estimated by the Monte Carlo method, hence decreasing the variance. This method is referred to as either 'correlated sampling' or 'control variate sampling.' The estimator of the integral then takes the form

J_n = ∫_Ω g(x) dx + (μ(Ω)/n) ∑_{i=1}^{n} (f(x_i) − g(x_i)),   (1.16)

and its variance is equal to

D_2 = Var{J_n} = (1/n) [μ(Ω) ∫_Ω |f(x) − g(x)|² dx − |J − ∫_Ω g(x) dx|²].   (1.17)

As in the importance sampling method, a priori information on the integrand is of importance, which helps to find the most efficient ‘easy’ function g(x).
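A corresponding sketch of control variate sampling, using the same illustrative 'easy' function as above (again our own assumption for demonstration, not an example from the book):

import random
import math

def control_variate(f, g, g_integral, volume, n):
    """Control variate estimator (1.16): integrate g exactly and estimate only f - g by crude Monte Carlo."""
    total = 0.0
    for _ in range(n):
        x = random.random()          # uniform point in the (one-dimensional) domain [0, 1]
        total += f(x) - g(x)
    return g_integral + volume * total / n

# Illustrative problem: integral of e^x over [0, 1], with g(x) = 1 + x and integral of g equal to 3/2.
print(control_variate(math.exp, lambda x: 1.0 + x, 1.5, 1.0, 100_000))  # close to e - 1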

1.6.3 Advantages and relations between the methods of importance sampling and control variate sampling The advantage of the importance sampling method consists primarily of the possibility to apply it to improper integrals: the singularity of the integrand can be embedded into the density. In addition, the ‘optimal’ density (1.13) must vanish in those subdomains of Ω where f(x) ≡ 0. Then these domains are automatically excluded from integration.

The control variate sampling method is simpler to implement because, in contrast to the importance sampling method, there is no need to simulate a random variable with a given distribution. In order to simulate the uniformly distributed random variable used in this method one can make use of standard software routines. In addition, as the approximation g(x) becomes infinitely close to the integrand f(x), the variance of the control variate sampling method becomes infinitesimal, while in the importance sampling method this holds true in the case of non-negative integrands only.

It is interesting which method is more accurate provided that the integrand is non-negative and the same 'easy' function g(x) is used. This question was analysed by Halton in [24]. Consider expressions (1.17) and (1.15) and investigate the difference

D_2 − D_1 = (1/n) (μ(Ω) ∫_Ω |f(x) − g(x)|² dx − |J − ∫_Ω g(x) dx|² − ∫_Ω g(x) dx ∫_Ω (|f(x)|²/g(x)) dx + |J|²)
= (1/n) (μ(Ω) ∫_Ω (|f(x) − g(x)|²/g(x)) g(x) dx − ∫_Ω (|f(x) − g(x)|²/g(x)) dx ∫_Ω g(x) dx).

Investigating this expression, we conclude that if the approximation of g(x) to f(x) is ‘absolutely’ uniform, that is, |f(x) − g(x)| ≈ const, then D2 < D1 , and the control variate sampling method is more accurate, and if it is ‘relatively’ uniform, that is, |f(x) − g(x)| ≈ const ⋅ g(x), then D2 > D1 , and the importance sampling method becomes more accurate.

1.6.4 Symmetrisation of the integrand

This method was suggested in 1956 by K. W. Morton and J. M. Hammersley [26] as the so-called 'antithetic variates method.' It is based on the following simple idea: find a function g(x) such that the integral of it is equal to the integral J sought for and the variance (1.12) decreases under changing f(x) for g(x). Then in order to evaluate the integral J in formula (1.11) one uses g(x) instead of f(x), hence decreasing the variance Var{J_n}.

Let us demonstrate how to use the symmetrisation method on a one-dimensional example. Let

J = ∫_0^1 f(x) dx   and   g(x) = (1/2)[f(x) + f(1 − x)].

It is easily seen that

∫_0^1 g(x) dx = J,   ∫_0^1 |g(x)|² dx ≤ ∫_0^1 |f(x)|² dx,

and hence the estimator

J_n = (1/(2n)) ∑_{k=1}^{n} [f(x_k) + f(1 − x_k)],   (1.18)

where x_k are independent identically distributed points in [0, 1], is unbiased, and its variance does not exceed the variance of the elementary Monte Carlo method. But to carry out calculation by formula (1.18) we need twice as many computations of the function f(x), which may make it not so efficient.

As an immediate extension of estimator (1.18) to integration over the unit s-dimensional cube K_s we obtain the estimator

J_n = (1/(2n)) ∑_{k=1}^{n} [f(x_k) + f(1 − x_k)],   (1.19)

where 1 is the point all whose coordinates are equal to one. There are other ways of symmetrisation, but most of them are convenient and efficient for one-dimensional problems only, and become quite cumbersome and hard to analyse as the dimension grows.
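Estimator (1.19) is equally simple to program; the following sketch with an illustrative integrand of our own shows the symmetrised averaging over the unit cube.

import random
import math

def antithetic_estimate(f, dim, n):
    """Symmetrised estimator (1.19) over the unit cube: average of f(x) and f(1 - x)."""
    total = 0.0
    for _ in range(n):
        x = [random.random() for _ in range(dim)]
        x_reflected = [1.0 - xi for xi in x]
        total += 0.5 * (f(x) + f(x_reflected))
    return total / n

# Illustrative integrand exp(x1 + x2) over the unit square; the exact value is (e - 1)^2.
print(antithetic_estimate(lambda x: math.exp(x[0] + x[1]), dim=2, n=100_000))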

1.6.5 Group sampling

The idea of the group sampling method, also known as the stratified sampling method, is close to that of the importance sampling one: one also has to choose many points in the 'essential' domains. But the choice is defined not by the density but by a direct description of the number of points drawn at various parts of the integration domain.

We assume that the integration domain Ω is divided into m subdomains Ω_1, Ω_2, . . . , Ω_m and the integral sought for admits the representation

J = ∫_Ω f(x) dx = ∑_{j=1}^{m} J_j,   J_j = ∫_{Ω_j} f(x) dx.

Evaluating each integral J_j with the use of the elementary Monte Carlo method with n_j random points x_{jk}, we arrive at the estimator

J_n = ∑_{j=1}^{m} (μ(Ω_j)/n_j) ∑_{k=1}^{n_j} f(x_{jk}),   (1.20)

where x_{jk} are independent points uniformly distributed in Ω_j. It is easy to see that

Var{J_n} = ∑_{j=1}^{m} D_j,   D_j = (1/n_j) (μ(Ω_j) ∫_{Ω_j} |f(x)|² dx − |J_j|²).

The minimum of this expression over n_j under the condition n = n_1 + n_2 + ⋅ ⋅ ⋅ + n_m is attained in the case where the n_j are proportional to √D_j, that is, we have to take more points in those subdomains where the integrand changes more dramatically. In actual problems, the variances D_j are, as a rule, unknown. In this case we choose n_j to be proportional to μ(Ω_j), which also decreases the variance of the estimator J_n as compared with the elementary Monte Carlo method.

Exercise 1.6.1. Demonstrate that if we choose

n_j = n μ(Ω_j)/μ(Ω),

1.6.6 Estimating with a faster rate of convergence The above approaches allow us to increase the integration accuracy, sometimes to a great extent, but in the general case they do not improve the order of the convergence rate of the Monte Carlo method. The integration error (for any given probability) decreases as O(n−1/2 ), as in the elementary case. Nevertheless, in some cases it is possible to get estimators whose convergence rate is higher at certain classes of functions. For example, S. Heinrich [27] suggested such an estimator in the correlated sampling method. Assume that the integration domain coincides with the unit cube K s and f(x) ∈ C m (a), in other words, the integrand possesses all partial derivatives up to order m inclusive in K s bounded by some constant a. Divide K s into n = M s subcubes with sides M −1 , where M is some positive integer. It is well known that on this uniform grid it is possible to construct an approximation g(x) of the function f(x) such that |f(x) − g(x)| ≤ CM −m = Cn−m/s ,

x ∈ Ks ,

where the constant C does not depend on M. Substituting this inequality into (1.17), we find that the variance of the estimator of the correlated sampling method with the approximation g(x) is of order Var{J n } = O(n−1−2m/s ).

(1.21)

The group sampling method permits also to construct Monte Carlo estimators with higher convergence rate. Such estimators were studied in [7, 14, 21, 22]. Again, divide the unit cube into n = M s small subcubes Ω1 , Ω2 , . . . , Ω n , and in each subcube Ω j choose a single uniformly distributed random point xj , that is, in the notation of the preceding section, n j = 1. In this case formula (1.20) takes the form n

J n = ∑ μ(Ω j )f(xj ) = j=1

1 n ∑ f(xj ). n j=1

(1.22)

1.7 Conclusion

| 25

Formally speaking, this formula coincides with that for the elementary Monte Carlo method (1.11), with the difference that each of the points xj is uniformly distributed not in the whole domain K s but in its own subcube Ω j . This formula can be treated as the stochastic analogue of the rectangular cubature formula. It is easily seen that in the case f(x) ∈ C1 (a) the variance of this estimator is of order Var{J n } = O(n−1−2/s ). Using the symmetrisation idea, Haber [21, 22] constructed a similar estimator for functions in the class C2 (a): Jn =

󸀠 1 n f(xj ) + f(xj ) , ∑ n j=1 2

(1.23)

where x󸀠j stands for the point symmetric to xj about the centre of the subcube Ω j (compare with (1.19)). Formula (1.23) can be treated as the stochastic version of the trapezoidal rule. It is not difficult to see that on the class of functions C2 (a) the variance of this estimator is of order Var{J n } = O(n−1−4/s ), which is the same as in (1.21) and higher than that of the elementary Monte Carlo method. The above methods, although converging rapidly, lose the key feature of the elementary Monte Carlo method which consists of the sequential nature of calculation. What this means is that the required accuracy has not been attained for chosen M and n = M s , the whole uniform grid must be re-constructed and both the integral estimator and empirical variance must be re-calculated. In this monograph we consider other approaches to constructing statistical methods of integration with higher convergence rate which allow for sequential calculation. They are described and investigated in the following chapters.

1.7 Conclusion

In this chapter, we considered those questions of the theory of the Monte Carlo method for evaluating integrals that are most essential to our presentation. Such important topics as the generation of uniformly distributed random variables (pseudo-random numbers), simulation of random variables with the use of Markov chains (Markov chain Monte Carlo), estimation of integrals depending on a parameter, evaluation of continual integrals, as well as such variance reduction techniques as stochastic cubature formulas and splitting methods, and many others, are beyond the scope of this chapter; the interested reader should consult the specialised literature on Monte Carlo methods, e.g., the books [15, 18, 47, 49, 62].

2 Sequential Monte Carlo Method and Adaptive Integration

This chapter is the theoretical basis for the subsequent construction of adaptive integration algorithms based on the Monte Carlo methods. We thoroughly discuss the scheme of the so-called sequential Monte Carlo method. It was suggested in 1962 by J. Halton in [23] and for a long time had no practical use. Halton himself used this method to solve some problems of linear algebra. In 1986, O. Yu. Kulchitsky and S. V. Skrobotov [40] proposed an elementary adaptive Monte Carlo algorithm for calculating a one-dimensional integral based on the importance sampling method. This approach was then developed in [5]. Later it was found that this algorithm could be treated as a special version of the sequential Monte Carlo method. This led to a re-evaluation of Halton's ideas, so that the Kulchitsky–Skrobotov adaptive method was generalised and new efficient statistical integration algorithms were designed. In this chapter we investigate the convergence of the sequential Monte Carlo method, extending Halton's results, and on this basis propose several general approaches to constructing adaptive statistical methods of integration.

2.1 Sequential Monte Carlo method

2.1.1 Basic relations

We consider problem (1.1) of evaluating the integral
$$ J = \int_{\Omega} f(x)\,dx. $$
Assume that a statistical simulation procedure outputs a sequence of random points $x_1, x_2, \ldots, x_n, \ldots$ in the domain $\Omega$, which yields some sequence of unbiased estimators $S_1, S_2, \ldots, S_n, \ldots$ of the integral $J$ such that for each $k$ the estimator $S_k$ depends on the set of points $x_1, x_2, \ldots, x_k$ only. Such estimators (they are, as a rule, simple enough) are called the primary ones. On the basis of the primary estimators we construct more complex estimators defined by the equality
$$ J_n = \beta_n S_n + (1 - \beta_n) J_{n-1}, \qquad \beta_1 = 1, \quad 0 \le \beta_n \le 1, \qquad (2.1) $$
which we refer to as secondary ones. Relation (2.1) immediately yields unbiasedness of the secondary estimators. It defines the sequential Monte Carlo method to evaluate the integral $J$.
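A minimal sketch of recursion (2.1) might look as follows (Python; the primary estimators and the coefficient rule in the example correspond to the elementary Monte Carlo case and are our own illustrative choices, not a prescription of the text).

    def sequential_monte_carlo(primary_estimators, betas):
        """Combine a stream of primary estimators S_1, S_2, ... into secondary
        estimators J_n by recursion (2.1): J_n = beta_n*S_n + (1-beta_n)*J_{n-1}.

        `primary_estimators` is an iterable of S_k values; `betas(n)` returns
        the coefficient beta_n (with betas(1) == 1)."""
        J = 0.0
        for n, S in enumerate(primary_estimators, start=1):
            b = 1.0 if n == 1 else betas(n)
            J = b * S + (1.0 - b) * J
            yield J

    # Example: elementary Monte Carlo for the integral of x**2 over [0, 1]
    # corresponds to S_k = f(x_k) with uniform x_k and beta_n = 1/n.
    if __name__ == "__main__":
        import random
        random.seed(1)
        primaries = (random.random() ** 2 for _ in range(100000))
        for J in sequential_monte_carlo(primaries, lambda n: 1.0 / n):
            pass
        print(J)   # should be close to 1/3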

Let us study sequential scheme (2.1). We observe that the secondary estimators admit the representation
$$ J_n = \sum_{k=1}^{n} \alpha_k^{(n)} S_k, \qquad (2.2) $$
where
$$ \alpha_k^{(n)} = \beta_k \prod_{j=k+1}^{n} (1 - \beta_j). \qquad (2.3) $$
The coefficients $\alpha_k^{(n)}$ are non-negative for all $k$ and $n$, and their sum over $k$ is equal to one:
$$ \alpha_k^{(n)} \ge 0, \qquad \sum_{k=1}^{n} \alpha_k^{(n)} = 1. \qquad (2.4) $$
Thus, the secondary estimator $J_n$ is a convex combination of the primary estimators numbered from 1 to $n$.

Thus, recurrence relation (2.1) completely determines the dependence of the coefficients of the convex combination $\alpha_k^{(n)}$ on $\beta_k$. In addition, the equality $\alpha_n^{(n)} = \beta_n$ holds true. It is not hard to see that the reverse is also true.

Exercise 2.1.1. Prove that if the secondary estimators are defined by relation (2.2) and the coefficients $\alpha_k^{(n)}$ obey the equality
$$ \alpha_k^{(n)} = \alpha_k^{(k)} \prod_{j=k+1}^{n} \bigl(1 - \alpha_j^{(j)}\bigr), \qquad (2.5) $$
then recurrence relation (2.1) holds true with $\beta_n = \alpha_n^{(n)}$.

Most of the versions of the Monte Carlo method considered in the preceding chapter can be treated as particular cases of the sequential method. For example, the elementary Monte Carlo method (1.9) with density $p(x)$ is associated with the primary estimators
$$ S_k = \frac{f(x_k)}{p(x_k)}. $$
The importance sampling method (1.14) based on the ‘easy’ function $g(x)$ is obtained with the primary estimators
$$ S_k = \int_{\Omega} g(x)\,dx \; \frac{f(x_k)}{g(x_k)}, $$
and the correlated sampling method (1.16), with the primary estimators
$$ S_k = \int_{\Omega} g(x)\,dx + \mu(\Omega)\bigl[f(x_k) - g(x_k)\bigr]. $$
The coefficients are the same in all cases: $\beta_k = \frac{1}{k}$, and hence $\alpha_k^{(n)} = \frac{1}{n}$. It is seen that the primary estimators in all these cases depend only on the current realisation $x_k$ and are independent of each other. Precisely this case was studied by Halton.


In the general case, though, primary estimators can depend on all preceding realisations, that is, they may use all the information collected so far (say, on the values of the integrand at the points drawn). From a priori reasoning it becomes clear that by taking this information into account the convergence rate of the method may be made higher. But the analysis of convergence then becomes much more complicated. For most methods presented in this monograph, the primary estimators $S_k$ are not independent but only uncorrelated. In this connection, in what follows we study the convergence of the sequential Monte Carlo method for the case of uncorrelated primary estimators, thus generalising Halton's results.

2.1.2 Mean square convergence

We thus assume that the primary estimators $S_k$ are uncorrelated, that is,
$$ \mathrm{E}\bigl\{(S_i - J)\overline{(S_j - J)}\bigr\} = 0, \qquad i \ne j. $$
As above, the bar stands for the complex conjugation operation. In this case the variance of the secondary estimator $J_n$ is of the form
$$ \operatorname{Var}\{J_n\} = \sum_{k=1}^{n} \bigl(\alpha_k^{(n)}\bigr)^2 D_k, \qquad (2.6) $$
provided that the variances of the primary estimators $D_k = \operatorname{Var}\{S_k\} = \mathrm{E}\{|S_k - J|^2\}$ are finite. The mathematical expectation in the expression of $D_k$ is, generally speaking, over the totality of realisations $x_1, x_2, \ldots, x_k$ which the primary estimator $S_k$ depends on.

From (2.6) it follows that, by making a suitable choice of the coefficients $\alpha_k^{(n)}$, it is worth trying to reduce the variance of the estimator $J_n$ for given $D_k$. With the use of methods of the calculus of variations it is not difficult to demonstrate that the minimum of expression (2.6) in $\alpha_k^{(n)}$ under conditions (2.4) is attained at
$$ \alpha_k^{(n)} = \frac{1}{D_k} \Bigl( \sum_{i=1}^{n} \frac{1}{D_i} \Bigr)^{-1}. \qquad (2.7) $$
While minimising, we have used condition (2.4) only. Thus, the solutions $\alpha_k^{(n)}$ do not necessarily satisfy relation (2.5). But direct verification shows that it holds true in this case. Therefore, computational scheme (2.2) with coefficients (2.7) is a sequential Monte Carlo scheme (2.1) with the coefficients
$$ \beta_n = \alpha_n^{(n)} = \frac{1}{D_n} \Bigl( \sum_{i=1}^{n} \frac{1}{D_i} \Bigr)^{-1}. \qquad (2.8) $$
The variances of the secondary estimators under such a choice of the coefficients are of the form
$$ \operatorname{Var}\{J_n\} = \Bigl( \sum_{i=1}^{n} \frac{1}{D_i} \Bigr)^{-1}. $$
The true values of $D_k$ are, as a rule, unknown, so in actual computational practice one uses their upper estimators $\hat{D}_k$. In this case the coefficients $\alpha_k^{(n)}$ and $\beta_n$ have to be chosen by formulas (2.7) and (2.8) with $D_k$ changed for $\hat{D}_k$, and the best possible estimate of the variance of $J_n$ takes the form
$$ \operatorname{Var}\{J_n\} \le \Bigl( \sum_{i=1}^{n} \frac{1}{\hat{D}_i} \Bigr)^{-1}. \qquad (2.9) $$
Analysing expression (2.9), we see that the sequential scheme with coefficients (2.8) converges in the mean square sense, in other words, the variances of the secondary estimators tend to zero as $n \to \infty$, if, as $n$ grows, the $D_k$ decrease, remain constant, or even grow not too fast. In particular, for $D_k = O(k^{-\gamma})$ from (2.9) we arrive at the estimate
$$ \operatorname{Var}\{J_n\} = \begin{cases} O(n^{-1-\gamma}) & \text{if } \gamma > -1, \\ O(1/\ln n) & \text{if } \gamma = -1. \end{cases} \qquad (2.10) $$

Of interest is only the case $\gamma \ge 0$, which will be considered below. The importance sampling, correlated sampling, and elementary Monte Carlo methods fall into the case $\gamma = 0$.

Let us find an explicit expression of the coefficients $\beta_n$ in the case where $D_k = O(k^{-\gamma})$. Using $\hat{D}_k = A k^{-\gamma}$ in formula (2.8) instead of $D_k$, we obtain
$$ \beta_n = n^{\gamma} \Bigl( \sum_{i=1}^{n} i^{\gamma} \Bigr)^{-1}. $$
Direct summation in this expression for arbitrary $\gamma > 0$ is impossible, so our interest is in finding coefficients $\beta_n$ which, while not optimal in the sense of minimising the variance of the estimator $J_n$, still provide the convergence rate (2.10). We need the following lemma.

Lemma 2.1.2. For any $x \ge 1$, $\gamma \ge 0$, the inequalities
$$ \min\{\Gamma(1+\gamma), 1\} \le \frac{\Gamma(x+\gamma)}{\Gamma(x)\, x^{\gamma}} \le \max\{\Gamma(1+\gamma), 1\} $$
are true.

Here and in what follows $\Gamma(\cdot)$ and $\psi(\cdot)$ stand for the gamma function (the Euler integral of the second kind) and the psi function (the logarithmic derivative of the gamma function), respectively.

Proof. We consider the auxiliary function
$$ g(x, \gamma) = \ln\Bigl( \frac{\Gamma(x+\gamma)}{\Gamma(x)\, x^{\gamma}} \Bigr) = \ln \Gamma(x+\gamma) - \ln \Gamma(x) - \gamma \ln x. $$


Since [57, p. 775]
$$ \frac{\partial^2 g}{\partial \gamma^2} = \psi'(x+\gamma) = \sum_{m=0}^{\infty} \frac{1}{(m+x+\gamma)^2} > 0, $$
the function $g(x, \gamma)$ for fixed $x$ is convex downward in $\gamma$. Since $g(x, 0) = g(x, 1) = 0$, by virtue of Rolle's theorem, $g(x, \gamma)$ has a unique minimum in $\gamma$ lying inside the interval $[0, 1]$. Therefore, for $0 \le \gamma \le 1$,
$$ g(x, \gamma) \le g(x, 1) = 0, \qquad (2.11) $$
while for $\gamma > 1$,
$$ g(x, \gamma) \ge g(x, 1) = 0. \qquad (2.12) $$
Furthermore,
$$ \frac{\partial g}{\partial x} = \psi(x+\gamma) - \psi(x) - \frac{\gamma}{x} = \sum_{m=0}^{\infty} \frac{\gamma}{(m+x)(m+x+\gamma)} - \frac{\gamma}{x}, $$
by virtue of [57, (5.1.6.2)]. Therefore, for $0 \le \gamma \le 1$,
$$ \frac{\partial g}{\partial x} \ge \sum_{m=0}^{\infty} \frac{\gamma}{(m+x)(m+x+1)} - \frac{\gamma}{x} = 0, $$
and for $\gamma > 1$,
$$ \frac{\partial g}{\partial x} \le \sum_{m=0}^{\infty} \frac{\gamma}{(m+x)(m+x+1)} - \frac{\gamma}{x} = 0. $$
Hence, for fixed $0 \le \gamma \le 1$ the function $g(x, \gamma)$ does not decrease in $x$, in other words,
$$ g(x, \gamma) \ge g(1, \gamma) = \ln \Gamma(1+\gamma), \qquad x \ge 1, \qquad (2.13) $$
while for $\gamma > 1$ it does not increase in $x$, in other words,
$$ g(x, \gamma) \le g(1, \gamma) = \ln \Gamma(1+\gamma), \qquad x \ge 1. \qquad (2.14) $$
To complete the proof, it remains to combine inequalities (2.11)–(2.14).

The right-hand side of the inequality obtained in Lemma 2.1.2 shows that, rather than $\hat{D}_k = A k^{-\gamma}$, one can choose
$$ \hat{D}_k = \hat{A}\, \frac{\Gamma(k)}{\Gamma(k+\gamma)} $$
in relation (2.8) for the coefficients $\beta_n$. In this case
$$ \beta_n = \frac{\Gamma(n+\gamma)}{\Gamma(n)} \Bigl( \sum_{k=1}^{n} \frac{\Gamma(k+\gamma)}{\Gamma(k)} \Bigr)^{-1} = \frac{\Gamma(n+\gamma)}{\Gamma(n)\Gamma(\gamma+1)} \Bigl( \sum_{k=0}^{n-1} \binom{k+\gamma}{\gamma} \Bigr)^{-1}, $$
where
$$ \binom{a}{b} = \frac{\Gamma(a+1)}{\Gamma(b+1)\Gamma(a-b+1)} $$
are the binomial coefficients. Using [57, (4.2.1.28)], i.e.,
$$ \sum_{k=0}^{n} \binom{k+\gamma}{\gamma} = \binom{n+\gamma+1}{n}, $$
we obtain
$$ \beta_n = \frac{\Gamma(n+\gamma)}{\Gamma(n)\Gamma(\gamma+1)} \binom{n+\gamma}{n-1}^{-1} = \frac{\Gamma(n+\gamma)}{\Gamma(n)\Gamma(\gamma+1)} \cdot \frac{\Gamma(n)\Gamma(\gamma+2)}{\Gamma(n+\gamma+1)} = \frac{\gamma+1}{n+\gamma}, \qquad (2.15) $$
$$ \alpha_k^{(n)} = \beta_k \prod_{j=k+1}^{n} (1 - \beta_j) = (\gamma+1)\, \frac{\Gamma(n)\Gamma(k+\gamma)}{\Gamma(k)\Gamma(n+\gamma+1)}. \qquad (2.16) $$
Variance (2.9) under such a choice of $\hat{D}_k$ is estimated as follows:
$$ \operatorname{Var}\{J_n\} \le \beta_n \hat{D}_n = \hat{A}\, \frac{\gamma+1}{n+\gamma} \frac{\Gamma(n)}{\Gamma(n+\gamma)} = \hat{A}\, \frac{(\gamma+1)\Gamma(n)}{\Gamma(n+\gamma+1)}. \qquad (2.17) $$

Besides, in view of the left-hand side of the inequality of Lemma 2.1.2, this estimate is of the order given by relation (2.10). Thus, the following theorem is true.

Theorem 2.1.3. Let the primary estimators $S_k$ in sequential scheme (2.1) be uncorrelated and let their variances $D_k$ be of order $O(k^{-\gamma})$ for some $\gamma \ge 0$. If the coefficients $\beta_n$ in the sequential scheme are chosen in accordance with formula (2.15), then the variances of the secondary estimators $J_n$ are of order $O(n^{-1-\gamma})$, and this order cannot be improved by the choice of the coefficients.

Thus, upon a suitable choice of the coefficients, the decrease of the variances of the secondary estimators is an order of magnitude faster than that of the primary estimators.

We observe that the coefficients $\beta_n$ and $\alpha_k^{(n)}$ defined by equalities (2.15) and (2.16) depend on the parameter $\gamma$ which determines the rate of decrease of the variances $D_k$. In what follows we will emphasise this dependence by denoting the right-hand sides of these equalities by $\beta(n, \gamma)$ and $\alpha(n, k, \gamma)$, respectively. The point is that the exact value of $\gamma$ in integration algorithms generally depends on the properties of the integrand and may not be known in advance. In this connection, the question arises: having only a rough idea of the rate of decrease of $D_k$, is it possible to choose the coefficients of the sequential scheme in such a way that Theorem 2.1.3 remains true? The positive answer is given by Theorem 2.1.5. In order to prove it, we need the following elementary lemma.

Lemma 2.1.4. Let $\gamma'$ be chosen in such a way that $2\gamma' + 1 > \gamma$. Then
$$ g(k) = \frac{\Gamma^2(k+\gamma')}{\Gamma(k+\gamma)\Gamma(k+2\gamma'-\gamma)} \le \frac{\Gamma^2(n+\gamma')}{\Gamma(n+\gamma)\Gamma(n+2\gamma'-\gamma)} $$
for all $k = 1, 2, \ldots, n$.

Proof. It suffices to consider the relation
$$ \frac{g(k+1)}{g(k)} = \frac{(k+\gamma')^2}{(k+\gamma)(k+2\gamma'-\gamma)} \ge 1, \qquad k \ge 1, \quad 2\gamma' + 1 > \gamma. $$
Hence, $g(k) \le g(n)$, which proves the lemma.

Theorem 2.1.5. Let the hypothesis of Lemma 2.1.4 be satisfied, and for some constant $A > 0$ let
$$ D_k \le A\, \frac{\Gamma(k)}{\Gamma(k+\gamma)}, \qquad k \ge 1. $$
If the coefficients of the sequential scheme are chosen as $\beta_n = \beta(n, \gamma')$ or $\alpha_k^{(n)} = \alpha(n, k, \gamma')$, respectively, then the bound
$$ \operatorname{Var}\{J_n\} \le A\, \frac{(\gamma'+1)^2}{(2\gamma'-\gamma+1)}\, \frac{\Gamma(n)}{\Gamma(n+\gamma+1)} \qquad (2.18) $$
holds true.

Proof. We use relation (2.6) and obtain
$$ \operatorname{Var}\{J_n\} = \sum_{k=1}^{n} \alpha^2(n, k, \gamma')\, D_k \le A (\gamma'+1)^2 \sum_{k=1}^{n} \frac{\Gamma^2(n)\Gamma^2(k+\gamma')}{\Gamma^2(k)\Gamma^2(n+\gamma'+1)}\, \frac{\Gamma(k)}{\Gamma(k+\gamma)} = A\, \frac{(\gamma'+1)^2 \Gamma^2(n)}{\Gamma^2(n+\gamma'+1)} \sum_{k=1}^{n} \frac{\Gamma^2(k+\gamma')}{\Gamma(k)\Gamma(k+\gamma)}. $$
Making use of Lemma 2.1.4, we obtain
$$ \sum_{k=1}^{n} \frac{\Gamma^2(k+\gamma')}{\Gamma(k)\Gamma(k+\gamma)} \le \frac{\Gamma^2(n+\gamma')}{\Gamma(n+\gamma)\Gamma(n+2\gamma'-\gamma)} \sum_{k=1}^{n} \frac{\Gamma(k+2\gamma'-\gamma)}{\Gamma(k)} = \frac{\Gamma^2(n+\gamma')}{\Gamma(n+\gamma)\Gamma(n)} \sum_{k=1}^{n} \frac{\Gamma(n)\Gamma(k+2\gamma'-\gamma)}{\Gamma(k)\Gamma(n+2\gamma'-\gamma)} $$
$$ = \frac{\Gamma^2(n+\gamma')}{\Gamma(n+\gamma)\Gamma(n)} \sum_{k=1}^{n} \frac{n+2\gamma'-\gamma}{2\gamma'-\gamma+1}\, \alpha(n, k, 2\gamma'-\gamma) = \frac{\Gamma^2(n+\gamma')(n+2\gamma'-\gamma)}{\Gamma(n+\gamma)\Gamma(n)(2\gamma'-\gamma+1)}. $$
Therefore,
$$ \operatorname{Var}\{J_n\} \le A\, \frac{(\gamma'+1)^2}{(2\gamma'-\gamma+1)}\, \frac{\Gamma(n)}{\Gamma(n+\gamma+1)} \cdot \frac{(n+2\gamma'-\gamma)(n+\gamma)}{(n+\gamma')^2}. $$
The last fraction in this bound does not exceed 1, which proves the theorem.

Thus, for any $\gamma' > (\gamma-1)/2$ the rate of convergence of the variances of the secondary estimators is an order of magnitude faster than that of the primary estimators, which gives us much more freedom in choosing the coefficients of the sequential scheme. But we emphasise that the factor $(\gamma'+1)^2/(2\gamma'-\gamma+1)$ entering into bound (2.18) attains its least value at $\gamma' = \gamma$; in other words, for known $\gamma$ the coefficients of the sequential scheme should be chosen in the form $\beta(n, \gamma)$. We also observe that bound (2.18) for $\gamma' = \gamma$ reduces to bound (2.17); in other words, Theorem 2.1.5 is a direct generalisation of Theorem 2.1.3.

If only a conjectural value $\hat{\gamma}$ of the convergence rate exponent is available, we can only recommend setting $\gamma' > (\hat{\gamma}-1)/2$. Then, if the true value $\gamma \le \hat{\gamma}$, such a choice of $\gamma'$ guarantees the best possible convergence rate of the secondary estimators, while in the case $\gamma > \hat{\gamma}$ the convergence rate depends on the ratio between $\gamma$ and $\gamma'$ but is higher than the expected one.

Since mean square convergence implies convergence in probability, the estimators $J_n$ under the hypotheses of Theorem 2.1.5 are consistent. Moreover, the Chebyshev inequality (see, e.g., [56]) yields the following estimate for the accuracy of the sequential scheme.

Corollary 2.1.6. Let the hypotheses of Theorem 2.1.5 be satisfied. Then for $\delta > 0$ as small as we wish,
$$ \mathrm{P}\Bigl\{ |J_n - J| \le \sqrt{\frac{\operatorname{Var}\{J_n\}}{\delta}} = O\bigl(n^{-(\gamma+1)/2}\bigr) \Bigr\} \ge 1 - \delta. \qquad (2.19) $$
Thus, for an arbitrary given confidence probability level our sequential scheme converges with the rate $O(n^{-(\gamma+1)/2})$.

The two theorems proved above concern the case where $D_k = O(k^{-\gamma})$. But in some cases one succeeds in finding a better estimate of $D_k$, of the form $o(k^{-\gamma})$ as $k \to \infty$. Let us demonstrate that the choice of the coefficients suggested in Theorem 2.1.5 provides us with quite good results in this case as well.

Theorem 2.1.7. Let the hypotheses of Lemma 2.1.4 be satisfied, and
$$ D_k = o\Bigl( \frac{\Gamma(k)}{\Gamma(k+\gamma)} \Bigr), \qquad k \to \infty. $$
If the coefficients of the sequential scheme are chosen in the form $\beta_n = \beta(n, \gamma')$ or $\alpha_k^{(n)} = \alpha(n, k, \gamma')$, respectively, then the estimate
$$ \operatorname{Var}\{J_n\} = o\Bigl( \frac{\Gamma(n)}{\Gamma(n+\gamma+1)} \Bigr) $$
holds true as $n \to \infty$.

Proof. We introduce
$$ A_k = D_k\, \frac{\Gamma(k+\gamma)}{\Gamma(k)} $$


and consider the product
$$ \operatorname{Var}\{J_n\}\, \frac{\Gamma(n+\gamma+1)}{\Gamma(n)} = \frac{\Gamma(n+\gamma+1)}{\Gamma(n)} \sum_{k=1}^{n} \bigl(\alpha_k^{(n)}\bigr)^2 D_k = (\gamma'+1)^2\, \frac{\Gamma(n+\gamma+1)}{\Gamma(n)} \sum_{k=1}^{n} \frac{\Gamma^2(n)\Gamma^2(k+\gamma')}{\Gamma^2(k)\Gamma^2(n+\gamma'+1)}\, \frac{\Gamma(k)}{\Gamma(k+\gamma)}\, A_k $$
$$ = (\gamma'+1)^2\, \frac{\Gamma(n+\gamma+1)\Gamma(n)}{\Gamma^2(n+\gamma'+1)} \sum_{k=1}^{n} \frac{\Gamma^2(k+\gamma')}{\Gamma(k)\Gamma(k+\gamma)}\, A_k. $$
Using Lemma 2.1.4, we find that
$$ \operatorname{Var}\{J_n\}\, \frac{\Gamma(n+\gamma+1)}{\Gamma(n)} \le (\gamma'+1)^2\, \frac{\Gamma(n+\gamma+1)\Gamma(n)}{\Gamma^2(n+\gamma'+1)} \sum_{k=1}^{n} \frac{\Gamma(k+2\gamma'-\gamma)\Gamma^2(n+\gamma')}{\Gamma(k)\Gamma(n+\gamma)\Gamma(n+2\gamma'-\gamma)}\, A_k $$
$$ = (\gamma'+1)^2\, \frac{(n+2\gamma'-\gamma)(n+\gamma)}{(n+\gamma')^2} \sum_{k=1}^{n} \frac{\Gamma(n)\Gamma(k+2\gamma'-\gamma)}{\Gamma(k)\Gamma(n+2\gamma'-\gamma+1)}\, A_k \le \frac{(\gamma'+1)^2}{2\gamma'-\gamma+1} \sum_{k=1}^{n} \alpha(n, k, 2\gamma'-\gamma)\, A_k $$
$$ = \frac{(\gamma'+1)^2}{2\gamma'-\gamma+1} \Bigl[ \sum_{k=1}^{m} \alpha(n, k, 2\gamma'-\gamma)\, A_k + \sum_{k=m+1}^{n} \alpha(n, k, 2\gamma'-\gamma)\, A_k \Bigr], $$
where $m$ is an integer between 1 and $n$. Since $A_k$ tend to zero as $k \to \infty$, there exists a constant $A > 0$ such that $A_k \le A$ for all $k > 0$. Therefore,
$$ \sum_{k=1}^{m} \alpha(n, k, 2\gamma'-\gamma)\, A_k \le A \sum_{k=1}^{m} \alpha(n, k, 2\gamma'-\gamma) \le A m \max_k \alpha(n, k, 2\gamma'-\gamma) = A m\, \beta(n, 2\gamma'-\gamma) = \frac{A m (2\gamma'-\gamma+1)}{n+2\gamma'-\gamma}. $$
Furthermore,
$$ \sum_{k=m+1}^{n} \alpha(n, k, 2\gamma'-\gamma)\, A_k \le \max_{k > m} A_k \sum_{k=m+1}^{n} \alpha(n, k, 2\gamma'-\gamma) \le \max_{k > m} A_k. $$
Thus,
$$ \operatorname{Var}\{J_n\}\, \frac{\Gamma(n+\gamma+1)}{\Gamma(n)} \le \frac{(\gamma'+1)^2}{2\gamma'-\gamma+1} \Bigl[ \frac{A m (2\gamma'-\gamma+1)}{n+2\gamma'-\gamma} + \max_{k > m} A_k \Bigr]. $$
Choosing $m = \lfloor \sqrt{n} \rfloor$, where $\lfloor\cdot\rfloor$ denotes the integer part of a number, we see that the right-hand side tends to zero as $n \to \infty$, which is the desired result.

It is possible to prove assertions similar to Theorems 2.1.5 and 2.1.7 in the case where, rather than $D_k = O(k^{-\gamma})$ and $D_k = o(k^{-\gamma})$, we have $D_k = O(k^{-\gamma} \ln^{\delta} k)$ and $D_k = o(k^{-\gamma} \ln^{\delta} k)$, respectively, with some $\delta > 0$. The proofs remain the same, and the estimates for the variances take the form $\operatorname{Var}\{J_n\} = O(n^{-\gamma-1} \ln^{\delta} n)$ and $\operatorname{Var}\{J_n\} = o(n^{-\gamma-1} \ln^{\delta} n)$, respectively. Thus, the results of this section can be summarised as follows.

Theorem 2.1.8. Let the primary estimators $S_k$ be pairwise uncorrelated, and let there exist constants $\gamma, \delta \ge 0$ such that, as $k \to \infty$,
$$ D_k = O(k^{-\gamma} \ln^{\delta} k) \quad \text{or} \quad D_k = o(k^{-\gamma} \ln^{\delta} k). $$
If the coefficients of our sequential scheme are chosen as
$$ \beta_n = \frac{1+\gamma'}{n+\gamma'}, \qquad \gamma' > \frac{\gamma-1}{2}, $$
then the estimates
$$ \operatorname{Var}\{J_n\} = O(n^{-\gamma-1} \ln^{\delta} n) \quad \text{or} \quad \operatorname{Var}\{J_n\} = o(n^{-\gamma-1} \ln^{\delta} n), $$
respectively, hold true as $n \to \infty$.

Exercise 2.1.9. Demonstrate that the coefficients of the sequential scheme which minimise the variances of the secondary estimators are defined by relation (2.7).
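As a rough numerical illustration of Theorem 2.1.8 (not part of the original text), one can feed synthetic uncorrelated primary estimators with variances proportional to $k^{-\gamma}$ into the recursion with $\beta_n = (1+\gamma')/(n+\gamma')$ and observe the $O(n^{-1-\gamma})$ decay of the secondary variance. The Gaussian noise model in the following Python sketch is purely a convenience assumption.

    import numpy as np

    def secondary_variance(gamma, gamma_prime, n, trials=2000, rng=np.random.default_rng(0)):
        """Empirical variance of J_n when the synthetic, uncorrelated primary
        estimators satisfy Var{S_k} = k**(-gamma) and
        beta_n = (1 + gamma')/(n + gamma')."""
        k = np.arange(1, n + 1)
        beta = (1.0 + gamma_prime) / (k + gamma_prime)
        J = np.zeros(trials)
        for i, (b, var) in enumerate(zip(beta, k.astype(float) ** (-gamma))):
            S = rng.normal(0.0, np.sqrt(var), size=trials)   # primary error S_k - J
            J = b * S + (1.0 - b) * J if i else S
        return J.var()

    if __name__ == "__main__":
        for n in (100, 200, 400, 800):
            print(n, secondary_variance(gamma=1.0, gamma_prime=1.0, n=n))
        # the printed variances should decrease roughly like n**(-2), i.e. O(n**(-1-gamma))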

2.1.3 Almost sure convergence

The question concerning almost sure convergence (with probability one) is more complex and requires, in the general case, invoking additional constraints. A theorem on convergence of the sequential scheme with probability one was proven by Halton in [23], and we give it below. We nevertheless emphasise that Halton's proof utilises the assumption that the primary estimators are derived from independent realisations $x_i$, although this assumption (which does not necessarily hold true in actual practice) is not explicitly formulated. Upon analysing this proof we succeeded in altering it in such a way that it remains true in the general case. It is precisely this proof that we present below.

Theorem 2.1.10. Let the following conditions be satisfied:
(i) the variables $\kappa_n = \max_k \alpha_k^{(n)}$ tend to zero as $n \to \infty$;
(ii) the variances of the primary estimators $D_k$ are of order $O(k^{-\gamma})$;
(iii) there exists $t > 0$, $2t + 1 < \gamma$, such that the series $\sum_{k=1}^{\infty} \beta_k k^{-t}$ converges.
Then the secondary estimators $J_n$ constructed in accordance with (2.1) converge to $J$ with probability one.

Proof. We apply the Chebyshev inequality to the primary estimators and find that
$$ \mathrm{P}\{|S_k - J| \ge \theta\} \le \frac{\operatorname{Var}\{S_k\}}{\theta^2} $$
with $\theta = k^{-t}$, and then make use of the second condition of the theorem. Then
$$ \mathrm{P}\{|S_k - J| \ge k^{-t}\} \le B k^{2t-\gamma} $$
for some $B > 0$. For an arbitrary $m$, we estimate the probability
$$ \mathrm{P}\{\text{the inequality } |S_k - J| \ge k^{-t} \text{ holds for some } k > m\} \le \sum_{k=m+1}^{\infty} \mathrm{P}\{|S_k - J| \ge k^{-t}\} \le \sum_{k=m+1}^{\infty} B k^{2t-\gamma} \le B \int_{m}^{\infty} x^{2t-\gamma}\,dx = \frac{B m^{2t+1-\gamma}}{\gamma - 2t - 1}. $$
Hence,
$$ \mathrm{P}\{|S_k - J| < k^{-t} \text{ for any } k > m\} \ge 1 - \frac{B m^{2t+1-\gamma}}{\gamma - 2t - 1}. $$
Thus, with probability as close to one as we wish,
$$ \Bigl| \sum_{k=m_1}^{m_2} \beta_k (S_k - J) \Bigr| \le \sum_{k=m_1}^{m_2} \beta_k |S_k - J| < \sum_{k=m_1}^{m_2} \beta_k k^{-t} $$
for any $m_2 > m_1 > m$, which, by virtue of the third condition of the theorem, implies convergence of the series
$$ \sum_{k=1}^{\infty} \beta_k (S_k - J). \qquad (2.20) $$
Thus, under the second and third conditions of the theorem, series (2.20) converges with probability one.

Now let us demonstrate that under the first condition of the theorem, convergence of series (2.20) implies convergence of the sequence of the secondary estimators to $J$. The following proof is close to the corresponding reasoning by Halton [23, p. 62]. We consider an arbitrary sequence of the primary estimators $S_k$ such that series (2.20) converges, and let $S$ denote its limit. Since all $\alpha_k^{(n)}$ are non-negative and their sum over $k$ is equal to one, we see that
$$ \Bigl| \sum_{k=1}^{n} \alpha_k^{(n)} \sum_{l=1}^{k} \beta_l (S_l - J) - S \Bigr| = \Bigl| \sum_{k=1}^{n} \alpha_k^{(n)} \Bigl\{ \sum_{l=1}^{k} \beta_l (S_l - J) - S \Bigr\} \Bigr| = \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) \sum_{k=l}^{n} \alpha_k^{(n)} - S \Bigr| \le \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) - S \Bigr| \xrightarrow[n \to \infty]{} 0. \qquad (2.21) $$
Furthermore,
$$ |J_n - J| = \Bigl| \sum_{k=1}^{n} \alpha_k^{(n)} (S_k - J) \Bigr| = \Bigl| \sum_{k=1}^{n} \frac{\alpha_k^{(n)}}{\beta_k} \Bigl\{ \sum_{l=1}^{k} \beta_l (S_l - J) - \sum_{l=1}^{k-1} \beta_l (S_l - J) \Bigr\} \Bigr| $$
$$ = \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) + \sum_{k=2}^{n} \Bigl( \frac{\alpha_{k-1}^{(n)}}{\beta_{k-1}} - \frac{\alpha_k^{(n)}}{\beta_k} \Bigr) \sum_{l=1}^{k-1} \beta_l (S_l - J) \Bigr| = \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) - \sum_{k=2}^{n} \beta_k \Bigl\{ \prod_{j=k+1}^{n} (1 - \beta_j) \Bigr\} \sum_{l=1}^{k-1} \beta_l (S_l - J) \Bigr| $$
$$ = \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) - \sum_{k=2}^{n} \alpha_k^{(n)} \sum_{l=1}^{k-1} \beta_l (S_l - J) \Bigr| = \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) - \sum_{k=1}^{n} \alpha_k^{(n)} \sum_{l=1}^{k} \beta_l (S_l - J) + \sum_{k=1}^{n} \alpha_k^{(n)} \beta_k (S_k - J) \Bigr| $$
$$ \le \Bigl| \sum_{l=1}^{n} \beta_l (S_l - J) - S \Bigr| + \Bigl| \sum_{k=1}^{n} \alpha_k^{(n)} \sum_{l=1}^{k} \beta_l (S_l - J) - S \Bigr| + \kappa_n \Bigl| \sum_{k=1}^{n} \beta_k (S_k - J) \Bigr|. $$
The first addend tends to zero as $n \to \infty$ by virtue of convergence of series (2.20), the second, by (2.21), and the third, by the first condition of the theorem, since the partial sums of a convergent series are bounded. Therefore, $|J_n - J| \to 0$ as $n \to \infty$, which is the desired result.

If we choose the coefficients of the sequential scheme by formula (2.15), the conditions for almost sure convergence become simpler. It is easy to show that in this case $\kappa_n = \beta_n$, so the first condition of the theorem is always satisfied. Besides, the third condition of the theorem is satisfied for any $t > 0$, so the following assertion is valid.

Corollary 2.1.11. Let the variances of the primary estimators $D_k$ be of order $O(k^{-\gamma})$, and let the coefficients $\beta_n$ be chosen in accordance with (2.15). Then the secondary estimators $J_n$ converge to the true value of the integral $J$ almost surely for any $\gamma > 1$.

The condition $\gamma > 1$ obtained in Corollary 2.1.11 can be essentially weakened if the coefficients of the sequential scheme are chosen by formula (2.15). Indeed, it is easily seen that for any $\gamma > 0$ the variances of the secondary estimators decrease as $O(k^{-\gamma-1})$, hence the series formed of them converges. It turns out that this suffices for convergence of the secondary estimators with probability one.


Lemma 2.1.12. Let the random variables $X_1, X_2, \ldots, X_k, \ldots$ be non-negative, and let the series of their mathematical expectations converge:
$$ \sum_{k=1}^{\infty} \mathrm{E}\{X_k\} < \infty. $$
Then $X_k$ tend to zero with probability one.

Proof. Since all $X_k$ are non-negative, the partial sums
$$ S_n^X = \sum_{k=1}^{n} X_k $$
converge monotonically, with probability one, to the limit
$$ S^X = \sum_{k=1}^{\infty} X_k, $$
which is not necessarily finite. Therefore, by virtue of the theorem on monotone convergence of the Lebesgue–Stieltjes integral, $\mathrm{E}\{S_n^X\} \to \mathrm{E}\{S^X\}$ as $n \to \infty$. On the other hand,
$$ \mathrm{E}\{S_n^X\} = \sum_{k=1}^{n} \mathrm{E}\{X_k\} \to \sum_{k=1}^{\infty} \mathrm{E}\{X_k\} < \infty, $$
again because of the non-negativity of $X_k$. Therefore, $\mathrm{E}\{S^X\} < \infty$, hence the random variable $S^X$ is finite with probability one, which proves the lemma.

It is clear that as $X_k$ we can choose $(J_k - J)^2$ with $\gamma > 0$. The following theorem is thus true.

Theorem 2.1.13. Let the variances of the primary estimators $D_k$ be of order $O(k^{-\gamma})$, and let the coefficients $\beta_n$ be chosen in accordance with (2.15). Then the secondary estimators $J_n$ converge to $J$ almost surely for any $\gamma > 0$.

2.1.4 Error estimation

From relation (2.19) it follows that the measure of the error of the numerical result obtained is its mean square deviation. In order to estimate the mean square deviation in the computation process, the empirical (sample) variance is used. The following requirements are imposed upon the empirical variance: it must be a non-negative and, wherever possible, unbiased and consistent estimator of the value we are calculating.

Let us find the empirical variance with the use of the method of undetermined coefficients in the form
$$ \operatorname{Var}\{J_n\} \approx \sigma_n^2 = \sum_{k=1}^{n} \nu_k |S_k - J_n|^2, $$
where $\nu_k \ge 0$ are some coefficients we have to find. We calculate the mathematical expectation of this estimator under the assumption that the primary estimators are uncorrelated:
$$ \mathrm{E}\{\sigma_n^2\} = \sum_{k=1}^{n} \nu_k\, \mathrm{E}\{|(S_k - J) - (J_n - J)|^2\} = \sum_{k=1}^{n} \nu_k \bigl[ D_k + \operatorname{Var}\{J_n\} - \mathrm{E}\{(S_k - J)\overline{(J_n - J)}\} - \mathrm{E}\{\overline{(S_k - J)}(J_n - J)\} \bigr]. $$
Recalling relation (2.2), we obtain
$$ \mathrm{E}\{(S_k - J)\overline{(J_n - J)}\} = \alpha_k^{(n)} D_k, $$
hence
$$ \mathrm{E}\{\sigma_n^2\} = \sum_{k=1}^{n} \nu_k \bigl[ D_k \bigl(1 - 2\alpha_k^{(n)}\bigr) + \operatorname{Var}\{J_n\} \bigr] = \sum_{k=1}^{n} \nu_k \bigl(1 - 2\alpha_k^{(n)}\bigr) D_k + C \operatorname{Var}\{J_n\}, $$
where
$$ C = \sum_{j=1}^{n} \nu_j. $$
To ensure that the variance estimator $\sigma_n^2$ is unbiased, in view of (2.6), we set
$$ \nu_k = \frac{\bigl(\alpha_k^{(n)}\bigr)^2 (1 - C)}{1 - 2\alpha_k^{(n)}}. $$
Finding the constant $C$, we finally arrive at
$$ \nu_k = \frac{\bigl(\alpha_k^{(n)}\bigr)^2 / \bigl(1 - 2\alpha_k^{(n)}\bigr)}{1 + \sum_{j=1}^{n} \bigl(\alpha_j^{(n)}\bigr)^2 / \bigl(1 - 2\alpha_j^{(n)}\bigr)}. $$
Therefore, the unbiased estimator of $\operatorname{Var}\{J_n\}$ is defined by the equality
$$ \sigma_n^2 = \frac{1}{1 + \sum_{k=1}^{n} \bigl(\alpha_k^{(n)}\bigr)^2 / \bigl(1 - 2\alpha_k^{(n)}\bigr)} \sum_{k=1}^{n} \frac{\bigl(\alpha_k^{(n)}\bigr)^2}{1 - 2\alpha_k^{(n)}}\, |S_k - J_n|^2. $$
Unfortunately, the estimator obtained is entirely unsuited for sequential calculation as $n$ changes. In this connection, we suggest using the estimator obtained by defining $\nu_k$ by the relation $\nu_k = \bigl(\alpha_k^{(n)}\bigr)^2$. Then the variance estimator takes the form
$$ \hat{\sigma}_n^2 = \sum_{k=1}^{n} \bigl(\alpha_k^{(n)}\bigr)^2 |S_k - J_n|^2. \qquad (2.22) $$

This estimator is surely biased: n

(n) 2

E{σ̂ 2n } = Var{J n } + Var{J n } ∑ α k k=1

n

(n) 3

− 2 ∑ αk k=1

Dk ,

but the bias is easily estimated and is an order of magnitude less than the principal term: n

(n) 2

∑ αk

n

k

k=1

(n) 3

∑ αk

k=1

n

(n)

(n)

(n)

≤ max α k ∑ α k = max α k , (n)

k

k=1 n

(n) 2

D k ≤ max α k ∑ α k k

k=1

(n)

D k = max α k Var{J n }. k

(n)

If the coefficients α k are chosen as in Section 2.1.2, then (n)

max α k = β n = O(n−1 ), k

hence

(1 − 2β n ) Var{J n } ≤ E{σ̂ 2n } ≤ (1 + β n ) Var{J n }.

Thus, σ̂ 2n is an asymptotically unbiased estimator of the variance Var{J n }. Under some additional constraints it is also consistent. The obtained estimator (2.22) is well suited for sequential calculation as n changes: introducing the variables n

(n) 2

Un = ∑ αk k=1

,

n

(n) 2

Vn = ∑ αk k=1

|S k |2 ,

n

(n) 2

Wn = ∑ αk k=1

Sk

and making use of representation (2.3), we obtain U n = β2n + (1 − β n )2 U n−1 ,

V n = β2n |S n |2 + (1 − β n )2 V n−1 ,

W n = β2n S n + (1 − β n )2 W n−1 , σ̂ 2n = V n − (J n W n + J n̄ W n ) + U n |J n |2 . We do not need boundary conditions for U n , V n , and W n because β1 = 1.
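A minimal sketch of this recursive computation (Python, written for the real-valued case, in which the last relation reduces to $\hat{\sigma}_n^2 = V_n - 2 J_n W_n + U_n J_n^2$; the example integrand and the choice $\beta_n = 1/n$ are our own illustrative assumptions) might look as follows.

    def recursive_variance_stream(primaries, betas):
        """Recursive computation of J_n and the empirical variance estimate (2.22)
        via the auxiliary sums U_n, V_n, W_n (real-valued case)."""
        J = U = V = W = 0.0
        for n, S in enumerate(primaries, start=1):
            b = 1.0 if n == 1 else betas(n)
            J = b * S + (1.0 - b) * J
            U = b * b + (1.0 - b) ** 2 * U
            V = b * b * S * S + (1.0 - b) ** 2 * V
            W = b * b * S + (1.0 - b) ** 2 * W
            sigma2 = V - 2.0 * J * W + U * J * J     # real-valued case of (2.22)
            yield J, sigma2

    if __name__ == "__main__":
        import random
        random.seed(2)
        primaries = (random.random() ** 2 for _ in range(50000))
        for J, s2 in recursive_variance_stream(primaries, lambda n: 1.0 / n):
            pass
        print(J, s2 ** 0.5)   # estimate of the integral of x**2 on [0,1] and its standard error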

2.2 Adaptive methods of integration

In this section we suggest a number of general approaches to constructing sequential methods of integration based on the sequential Monte Carlo method.

As we have said, it seems that the most efficient way consists of using those versions of the sequential Monte Carlo method which take full advantage of the information on the integrand collected in the course of simulation. One such method was suggested by Halton, the author of the sequential scheme, and consists in the following: having computed a set of values of the integrand, one constructs its ‘easy’ approximation which is then used to get a primary estimator. As the volume of the collected information grows, these approximations are expected to converge to the integrand. Among other things, this means that the primary estimators derived from these approximations by the importance sampling or correlated sampling methods are being progressively refined, which, as we have concluded in the preceding section, leads to faster convergence of the secondary estimators. The advantage of this way consists also of the fact that it requires no a priori information on the behaviour of the integrand in the integration domain. On the other hand, while constructing the approximations, one may take into account special features of the integrand related to its belonging to a certain class of functions (say, the number of continuous derivatives). Given a way of constructing the approximations, the algorithm adapts itself to the behaviour of the particular integrand, taking advantage of the collected information on it and so adapting to the peculiarities of the particular problem.

Once he had proposed this procedure, Halton did not study it theoretically. This was probably because it did not fit the framework of the sequential Monte Carlo method theory whose basics he developed for the case where the primary estimators are constructed from separate independent realisations of a random variable. It seems likely that the fact that in the scheme just described the primary estimators intricately depend on all realisations drawn frightens most scientists away. Such an adaptive procedure in its simplest form was for the first time thoroughly investigated and analysed for convergence by O. Yu. Kulchitsky and S. V. Skrobotov in 1986 in [40]. The following section briefly describes the ideas and results of this study.

2.2.1 Elementary adaptive method of one-dimensional integration

Let us consider the following simple problem setup: evaluate the integral
$$ J = \int_a^b f(x)\,dx $$
of a positive function $f(x)$ over some interval $[a, b] \subset \mathbb{R}$. The restriction $f(x) > 0$ poses no practical difficulty because one can always add a sufficiently large constant to $f(x)$.

To compute this integral, the estimator of the form
$$ J_n = \sum_{k=1}^{n} \alpha_k^{(n)} \frac{f(x_k)}{p_k(x_k)} $$
was proposed by O. Yu. Kulchitsky and S. V. Skrobotov in [40], where $\alpha_k^{(n)}$ are some coefficients obeying condition (2.4) and $x_k$, $k = 1, 2, \ldots, n$, are realisations of random variables distributed on the interval $[a, b]$ with conditional densities $p_k(x) = p(x \mid x_1, x_2, \ldots, x_{k-1})$ such that
$$ p_k(x) > 0, \quad x \in [a, b], \qquad \int_a^b p_k(x)\,dx = 1. $$
It was suggested to construct the densities $p_k(x)$ by the formulas
$$ p_k(x) = \frac{f_k(x)}{I_k}, \qquad I_k = \int_a^b f_k(x)\,dx, $$
where $f_k(x)$ are piecewise constant approximations of the integrand based on the random points $x_1, x_2, \ldots, x_{k-1}$, in other words,
$$ f_k(x) = f(x_{(i)}), \qquad x \in [x_{(i)}, x_{(i+1)}), \quad i = 0, 1, \ldots, k-1, $$
where $x_{(0)} = a$, $x_{(k)} = b$, and $x_{(1)}, x_{(2)}, \ldots, x_{(k-1)}$ are the order statistics of the sample $x_1, x_2, \ldots, x_{k-1}$. The process of constructing the first several approximations $f_k(x)$ is shown in Figure 2.1, where the thick curve stands for the function $f(x)$ being approximated.

From this description it is easily seen that this method can be treated as a version of the sequential Monte Carlo method where the primary estimators are obtained on the basis of the importance sampling method:
$$ S_k = \frac{f(x_k)}{p_k(x_k)}. $$
The distribution densities of the random points are built upon the ‘easy’ approximations $f_k(x)$, so the primary estimators depend on all random points drawn before. Nevertheless, it can be shown (see Section 2.2.2) that the primary estimators obtained in such a way are uncorrelated, and hence the results of Section 2.1.2, deduced under this condition, remain fully valid. Furthermore, under some additional requirements, namely, boundedness of $f(x)$ from above and from below by positive constants and existence of its piecewise continuous bounded first derivative, Kulchitsky and Skrobotov succeeded in finding estimates for the variances of the variables $S_k$ of the form $D_k = O(1/k)$. So, in the case where the coefficients $\alpha_k^{(n)}$ are chosen by the formula
$$ \alpha_k^{(n)} = \frac{2k}{n(n+1)}, $$
we establish the convergence of the estimators $J_n$ in variance with rate $O(n^{-2})$.

The subsequent study of the method in [5, 29, 30] shows that these estimates are quite rough and the constraints on $f(x)$ can be weakened. Namely, instead of differentiability it suffices to require Lipschitz continuity; then $D_k$ is estimated as $O(k^{-2})$, so $J_n$ converge in variance to $J$ with rate $O(n^{-3})$ under a suitable choice of the coefficients $\alpha_k^{(n)}$.

Exercise 2.2.1. Prove the above-formulated assertion on convergence. If a difficulty arises, go to Section 3.2.

Fig. 2.1. Construction of piecewise constant approximations (four panels showing the successive approximations f1(x), f2(x), f3(x), f4(x) of f(x) on [a, b] as the points x1, x2, x3 are drawn).
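A compact sketch of this one-dimensional scheme (Python; the coefficient choice corresponding to γ' = 2, the test integrand, and the deliberately simple linear search over intervals are our own illustrative assumptions) is given below.

    import bisect, random

    def adaptive_piecewise_importance(f, a, b, n, gamma_prime=2.0, seed=0):
        """Sketch of the adaptive scheme of Section 2.2.1: piecewise constant
        approximation f_k built on the points drawn so far, sampling from the
        density p_k proportional to f_k, and recursion (2.1) with
        beta_k = (1 + gamma')/(k + gamma').  Assumes f > 0 on [a, b]."""
        random.seed(seed)
        knots = [a, b]                # current partition points (sorted)
        J = 0.0
        for k in range(1, n + 1):
            heights = [f(knots[i]) for i in range(len(knots) - 1)]   # f_k on each interval
            widths = [knots[i + 1] - knots[i] for i in range(len(knots) - 1)]
            areas = [h * w for h, w in zip(heights, widths)]
            I_k = sum(areas)
            # draw x from p_k(x) = f_k(x)/I_k: pick an interval with probability
            # proportional to its area, then a uniform point inside it
            r, i = random.random() * I_k, 0
            while r > areas[i]:
                r -= areas[i]
                i += 1
            x = knots[i] + random.random() * widths[i]
            S = f(x) * I_k / heights[i]              # S_k = f(x_k)/p_k(x_k)
            beta = (1.0 + gamma_prime) / (k + gamma_prime)   # beta_1 = 1
            J = beta * S + (1.0 - beta) * J
            bisect.insort(knots, x)                  # adaptation: refine the partition
        return J

    if __name__ == "__main__":
        import math
        print(adaptive_piecewise_importance(lambda x: math.exp(x), 0.0, 1.0, 2000))  # ~ e - 1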

2.2.2 Adaptive method of importance sampling

In this section, we present the general idea of the adaptive procedure based on the importance sampling method, which is a direct generalisation of the method given in the preceding section.

Let us evaluate the integral
$$ J = \int_{\Omega} f(x)\,dx $$
of a function $f(x)$ over a domain $\Omega \subset \mathbb{R}^s$. To do this, we utilise the sequential Monte Carlo method with the primary estimators chosen by the formulas
$$ S_k = \frac{f(x_k)}{p_k(x_k)}, \qquad (2.23) $$
where $x_k$ are realisations of random variables distributed in the domain $\Omega$ with the conditional densities $p_k(x) = p(x \mid x_1, x_2, \ldots, x_{k-1})$ such that
$$ p_k(x) > 0, \quad x \in \Omega, \qquad \int_{\Omega} p_k(x)\,dx = 1. $$
The densities $p_k(x)$ obey the formulas
$$ p_k(x) = \frac{f_k(x)}{I_k}, \qquad I_k = \int_{\Omega} f_k(x)\,dx, \qquad (2.24) $$
where $f_k(x)$ are some non-negative approximations of the integrand depending on $x_1, x_2, \ldots, x_{k-1}$.

By the definition of the densities $p_k(x)$, the joint density of the random points $x_1, x_2, \ldots, x_k$ satisfies the relation
$$ p(x_1, x_2, \ldots, x_k) = p_1(x_1)\, p_2(x_2) \cdots p_k(x_k), \qquad (2.25) $$
which obviously shows that the estimators $S_k$ are unbiased. Let us prove that they are uncorrelated. For fixed $k$ and $i < k$ consider the covariance $\operatorname{cov}\{S_i, S_k\} = \mathrm{E}\{(S_i - J)\overline{(S_k - J)}\}$. Since $S_i$ does not depend on $x_k$, we see that
$$ \operatorname{cov}\{S_i, S_k\} = \mathrm{E}_{x_1, x_2, \ldots, x_{k-1}}\bigl\{ (S_i - J)\, \mathrm{E}_{x_k}\{\overline{(S_k - J)} \mid x_1, x_2, \ldots, x_{k-1}\} \bigr\}. $$
But in view of (2.23) and (2.25) the inner mathematical expectation is equal to zero, which proves that the primary estimators are indeed uncorrelated.

Let us derive an explicit expression of the variances $D_k$ of primary estimators (2.23) provided that the densities obey formulas (2.24):
$$ D_k = \mathrm{E}\Bigl\{ \Bigl| \frac{f(x_k)}{p_k(x_k)} - J \Bigr|^2 \Bigr\} = \mathrm{E}\Bigl\{ \Bigl| \frac{f(x_k) - f_k(x_k)}{p_k(x_k)} - (J - I_k) \Bigr|^2 \Bigr\} = \mathrm{E}\Bigl\{ \frac{|f(x_k) - f_k(x_k)|^2}{p_k^2(x_k)} - 2\Re\Bigl( (J - I_k)\, \overline{\frac{f(x_k) - f_k(x_k)}{p_k(x_k)}} \Bigr) + |J - I_k|^2 \Bigr\}, $$
where $\Re(x) = (x + \bar{x})/2$ is the real part of $x$. We observe that $|J - I_k|$ does not depend on $x_k$, so, making use of (2.25), we separate out the integration over this variable in the expression of the mathematical expectation. After collecting similar terms we obtain
$$ D_k = \mathrm{E}_{x_1, x_2, \ldots, x_{k-1}}\Bigl\{ \int_{\Omega} \frac{|f(x) - f_k(x)|^2}{p_k(x)}\,dx - |J - I_k|^2 \Bigr\}. \qquad (2.26) $$

Remark 2.2.2. Expression (2.26) reveals that the closer the approximation $f_k(x)$ is to the integrand $f(x)$ (hence, $I_k$ to $J$), the smaller is the variance of the primary estimator $S_k$. But it is clear that the integrand can be successfully approximated by non-negative $f_k$ only if it is non-negative itself. So in what follows, when speaking about applications of the adaptive importance sampling method, we always keep in mind that $f(x) \ge 0$ in the integration domain.

2.2.3 Adaptive method of control variate sampling

We consider the same problem as in the preceding section, without assuming that the function $f(x)$ is positive. Again, let a sequence of approximations $f_k(x)$ (not necessarily positive) be given, each depending only on the points $x_1, x_2, \ldots, x_{k-1}$ drawn earlier. We make use of the sequential Monte Carlo method, choosing the primary estimators in the form
$$ S_k = I_k + \mu(\Omega)\bigl[f(x_k) - f_k(x_k)\bigr], $$
where the variables $I_k$ are defined by relation (2.24) and $x_1, x_2, \ldots, x_k$ are independent realisations of the random variable uniformly distributed on $\Omega$. The estimators $S_k$ are of the same form as in the elementary method of control variate sampling, but the ‘easy’ approximation here varies from step to step. It is obvious that these primary estimators are unbiased, while the fact that they are uncorrelated is proved in precisely the same way as in Section 2.2.2.

Let us find an explicit expression of the variances of these estimators:
$$ D_k = \operatorname{Var}\{S_k\} = \mathrm{E}\bigl\{ |I_k - J + \mu(\Omega)[f(x_k) - f_k(x_k)]|^2 \bigr\} = \mathrm{E}\bigl\{ |J - I_k|^2 - 2\mu(\Omega)\Re\bigl( (J - I_k)\,\overline{[f(x_k) - f_k(x_k)]} \bigr) + \mu^2(\Omega)|f(x_k) - f_k(x_k)|^2 \bigr\}. $$
Since $|J - I_k|$ does not depend on $x_k$, using the independence of the realisations $x_1, x_2, \ldots, x_k$, we separate out the integration over this variable in the mathematical expectation. After collecting similar terms we obtain
$$ D_k = \mathrm{E}_{x_1, x_2, \ldots, x_{k-1}}\Bigl\{ \mu(\Omega) \int_{\Omega} |f(x) - f_k(x)|^2\,dx - |J - I_k|^2 \Bigr\}. \qquad (2.27) $$

As in expression (2.26), we again see that the better the approximation $f_k(x)$ is in the norm $L_2(\Omega)$, the smaller are the variances $D_k$. In this section we impose no constraints on the sign of the integrand, so the area of application of the adaptive correlated sampling method is much wider than that of the adaptive importance sampling method. This approach also has the advantage that it does not require drawing random points with prescribed densities $p_k(x)$.

In order to carry out further estimation of the variances $D_k$ for both adaptive methods, we need to be more specific about the properties of the domain and the integrand, as well as about the way the approximations $f_k(x)$ are constructed. These questions are discussed in the chapters below.
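A one-dimensional sketch of the adaptive control variate method (Python; the piecewise constant ‘easy’ approximation, the coefficient choice, and the test integrand are illustrative assumptions of ours, not the authors' implementation) might look as follows.

    import bisect, random

    def adaptive_control_variate(f, a, b, n, gamma_prime=2.0, seed=0):
        """Adaptive control variate method of Section 2.2.3 in one dimension,
        with a piecewise constant 'easy' approximation f_k built on the uniform
        points drawn so far; f need not be positive here."""
        random.seed(seed)
        mu = b - a                      # measure of the domain
        knots = [a, b]
        values = [f(a)]                 # f_k value on each interval [knots[i], knots[i+1])
        J = 0.0
        for k in range(1, n + 1):
            I_k = sum(v * (knots[i + 1] - knots[i]) for i, v in enumerate(values))
            x = a + mu * random.random()              # uniform point in [a, b]
            i = bisect.bisect_right(knots, x) - 1     # interval containing x
            S = I_k + mu * (f(x) - values[i])         # primary estimator of Section 2.2.3
            beta = (1.0 + gamma_prime) / (k + gamma_prime)   # beta_1 = 1
            J = beta * S + (1.0 - beta) * J
            # adaptation: split the interval containing x at the point x
            knots.insert(i + 1, x)
            values.insert(i + 1, f(x))                # new right piece takes the value f(x)
        return J

    if __name__ == "__main__":
        import math
        print(adaptive_control_variate(lambda x: math.cos(x), 0.0, 1.0, 2000))   # ~ sin(1)

Note that, unlike the importance sampling variant, the points here are plain uniform draws; only the ‘easy’ approximation is updated as information accumulates.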

2.2.4 Generalised adaptive methods of importance sampling and control variate sampling

The primary estimators of the adaptive methods considered above turn out to be uncorrelated because they obey the relation
$$ \mathrm{E}_{x_k}\{S_k \mid x_1, x_2, \ldots, x_{k-1}\} = J, \qquad (2.28) $$
that is, they are unbiased with respect to the last point drawn under the condition that the preceding points are fixed. Relation (2.28) is, of course, not necessary for the estimators $S_k$ to be uncorrelated, but it is of rather general form and provides a way of constructing the primary estimators. In particular, the primary estimators presented in Sections 2.2.2 and 2.2.3 allow for a comprehensive generalisation in the framework of condition (2.28). Namely, it remains valid if either
$$ S_k = \frac{g_k(x_k)}{p_k(x_k)} \qquad (2.29) $$
or
$$ S_k = I_k + \mu(\Omega)\bigl[g_k(x_k) - f_k(x_k)\bigr], \qquad (2.30) $$
where $f_k(x)$, $p_k(x)$, $I_k$ are the same as above, and $g_k(x)$ are some functions, possibly depending on $x_1, x_2, \ldots, x_{k-1}$, such that for all $k$,
$$ \int_{\Omega} g_k(x)\,dx = \int_{\Omega} f(x)\,dx. $$

Formulas (2.29) and (2.30) define the generalised methods of importance sampling and control variate sampling, respectively. It is clear that they reduce to the methods of the preceding sections as g k (x) = f(x). It is easily seen that the variances of the primary estimators (2.29) and (2.30) are defined by expressions (2.26) and (2.27), respectively, with f(x) changed for g k (x). Thus, in the generalised methods the sequence f k (x) must approximate not the function f(x) but the sequence of ‘integrands’ g k (x). The use of such generalised methods may be of advantage in the case where it is more convenient to use f k (x) to approximate the sequence of functions g k (x) rather than the initial integrand f(x). In the next chapter we give examples of construction of generalised adaptive methods. We observe that the idea of generalisation of adaptive methods presented is close to that providing the basis of the integrand symmetrisation method (see Section 1.6.4), and known symmetrisation methods can be used to increase the efficiency of the suggested adaptive integration algorithms.

2.2.5 On time and memory consumption

Let us estimate the time required to execute the adaptive algorithms described above. At each ($k$th) step of the algorithm, we need to carry out the following operations:
(i) draw the random point $x_k$;
(ii) calculate the value of the primary estimator $S_k$;
(iii) calculate the value of the secondary estimator $J_k$ and the empirical variance $\hat{\sigma}_k^2$;
(iv) carry out the adaptation, that is, construct the next ‘easy’ approximation $f_{k+1}(x)$, and the next function $g_{k+1}(x)$ in the case of a generalised method.

Let $t_k^{(i)}$ be the time needed for the $i$th stage, $i = 1, 2, 3, 4$, at the $k$th step of the algorithm. Then the total time needed for $n$ steps of the algorithm is
$$ T_n = \sum_{k=1}^{n} \bigl( t_k^{(1)} + t_k^{(2)} + t_k^{(3)} + t_k^{(4)} \bigr) = n t^{(3)} + \sum_{k=1}^{n} \bigl( t_k^{(1)} + t_k^{(2)} + t_k^{(4)} \bigr), $$
since the consumption at the third stage, $t^{(3)}$, is constant.

Since the primary estimators $S_k$ in the adaptive methods depend on all points $x_1, x_2, \ldots, x_k$ drawn before, it is clear that in order to implement these algorithms one has to store all these points. Thus, the memory required to carry out $n$ steps is proportional to $n$. If the convergence is not too fast or the accuracy requirements are high, a computer memory shortage can occur. One way to solve this problem consists of storing the points drawn in a magnetic disk file whose size can be much larger than that of the operating memory. But because of the big difference in access speed this change may not warrant the extra time expenditure.

Another solution consists of the following. Assume that the adaptation (the change of the ‘easy’ function $f_k$ as a new random point is drawn) is carried out not at each step but with some discretisation, say, every $M$ steps. This can be formalised as follows:
$$ f_{kM+M}(x) \equiv f_{kM+M-1}(x) \equiv \cdots \equiv f_{kM+2}(x) \equiv f_{kM+1}(x;\, x_M, x_{2M}, \ldots, x_{kM}) $$
for each $k \ge 0$. In this case, $D_{kM+M} = D_{kM+M-1} = \cdots = D_{kM+1}$, that is, the variances of the primary estimators derived from the same ‘easy’ function coincide and are equal to the variance $D_k$ in the case $M = 1$. Hence we conclude that if the variances of the primary estimators of the initial algorithm are of order $O(k^{-\gamma})$, which depends only on the method of constructing the ‘easy’ approximations and the joint density of the random points they are based on, then the $D_k$ are of the same order for $M \ne 1$. What this means is that under the same choice of the coefficients of the sequential scheme the secondary estimators preserve the orders of their convergence rates. But the memory required to carry out $n$ steps of the algorithm is lowered by a factor of $M$.

We arrive at a similar result in the case where we calculate the arithmetical average of $M$ primary estimators derived from the same ‘easy’ function and use the obtained ‘averaged’ estimator for recursive calculation of the secondary estimator and the variance estimator. In this case it is clear that the variance of the secondary estimator at the $(n-1)$st adaptation step (that is, constructed from $nM$ random points) is an $M$th of that of the secondary estimator at the $n$th step of the initial algorithm. The memory needs are the same, but the time required to find this estimator is
$$ \tilde{T}_n = n t^{(3)} + \sum_{k=1}^{n} \bigl( M t_k^{(1)} + M t_k^{(2)} + t_k^{(4)} \bigr). $$
Hence we see that such a modification of the algorithm can be advantageous from the time consumption viewpoint if the time $t_k^{(4)}$ required to carry out the adaptation stage is large in comparison with the other terms. In addition, such a scheme can be implemented advantageously on parallel computers (where $M$ is the number of processors).

f(ξ) } = Eξ {g(ξ)}, p(ξ)

where ξ is a random variable distributed with a density p(x) such that p(x) > 0,

x ∈ Ω,

∫ p(x) dx = 1. Ω

We represent the function g(x) as follows: g(x) =

f(x) = θ T ψ(x) + ∆g(x), p(x)

(2.31)

where θ ∈ ℝm is an unknown vector of parameters, ψ(x) ∈ ℝm is a known vector function whose components are basis functions which are linearly independent in Ω; ∆g(x) is the minimum in variance remainder of the approximation with zero mean. Then J = ∫ g(x)p(x) dx = θ T ∫ ψ(x)p(x) dx + ∫ ∆g(x)p(x) dx. Ω



(2.32)



The parameters θ are chosen so that they minimise the variance functional ∆g(x): σ2∆g = E{∆g2 } = E{(g(x) − θ T ψ(x))2 }. Let us prove a series of elementary assertions about this representation.

(2.33)

50 | 2 Sequential Monte Carlo Method and Adaptive Integration Lemma 2.2.3. The minimum of functional (2.33) is attained at θ = θ∗ such that θ∗ = arg min E{(g(x) − θ T ψ(x))2 } = F −1 ψ Jψ , θ

where

F ψ = ∫ ψ(x)ψ T (x)p(x) dx,

(2.34)

J ψ = ∫ f(x)ψ(x) dx.





Proof. Removing the brackets in formula (2.33), we arrive at the expression σ2∆g = ∫ g2 (x)p(x) dx − 2θ T ∫ g(x)ψ(x)p(x) dx + θ T ∫ ψ(x)ψ T (x)p(x) dx θ. Ω





σ2∆g

The derivative of with respect to θ vanishes at θ = The lemma is proved.

θ∗

defined by formula (2.33).

Lemma 2.2.4. If the coefficients θ in regression scheme (2.31) are set to θ = θ∗ defined by formula (2.34), then ∫ ∆g(x)ψ(x)p(x) dx = 0.

(2.35)



Proof. We consider approximation (2.31) with θ = θ∗ , in other words, g(x) =

f(x) = θ∗T ψ(x) + ∆g(x). p(x)

(2.36)

Multiply both the left-hand and right-hand sides of expression (2.36) by ψ(x)p(x) and integrate over x ∈ Ω. We arrive at the identity ∫ f(x)ψ(x) dx = ∫ ψ(x)ψ T (x)p(x) dx θ∗ + ∫ ∆g(x)ψ(x)p(x) dx or







J ψ = F ψ θ∗ + ∫ ∆g(x)ψ(x)p(x) dx.

(2.37)



To prove the lemma, it suffices to substitute the expression of θ∗ into (2.37). Lemma 2.2.5. Let the hypotheses of Lemma 2.2.4 be satisfied and ψ(x) = [1, φ T (x)]T ∈ ℝm ,

θ = [θ1 , ϑ T ]T ∈ ℝm .

Then integral (2.32) transforms to J = θ T ∫ ψ(x)p(x) dx. Ω

Proof. This assertion immediately follows from (2.32) and (2.35), because for ψ(x) satisfying the hypotheses of the lemma the relation ∫ ∆g(x)p(x) dx = 0 Ω

is valid.

2.2 Adaptive methods of integration | 51

From Lemma 2.2.4 it follows that if the parameters θ∗ are known, then it is possible to calculate the integral J exactly. But the formula for the optimal values θ∗ includes the integrals J ψ whose evaluation is comparable in complexity with the initial problem. We thus again come up against the situation where the accuracy of the solution depends on the accuracy of the choice of the parameters whose optimal values depend on the solution of the initial problem. This in turn means that it is possible to carry out an adaptive tuning of the algorithm. There are two ways: first, the adaptive tuning of the coefficients θ for a chosen density p(x), and second, the adaptive change of the density for a fixed scheme of estimation of the parameters θ; their combinations are also possible. In Chapter 4, we proceed along the first way. Below we give some results we may meet on the second way. The parameters θ of approximation (2.31) can be estimated with the use of the least squares method: n

2 θ̂ n = arg min { ∑ (g(xi ) − θ̂ T ψ(xi )) }, θ̂

i=1

where xi are independent realisations of the random variable ξ distributed with density p(x). In this case the estimator of θ obeys the relation θ̂ n = Q n Φ n G n ,

(2.38)

where Φ n = [ψ(x1 ), . . . , ψ(xn )], G n = [g(x1 ), . . . , g(xn )]T = Φ Tn θ + V n ,

V T = [V1 , . . . , V n ] = [∆g(x1 ), . . . , ∆g(xn )]T , n

F n = Φ n Φ Tn = ∑ ψ(xi )ψ T (xi ), i=1

Q n = F −1 n .

Properties of the estimator θ̂ n : (i) The mathematical expectation of θ̂ n is E{θ̂ n } = E{Q n Φ n G n } = E{Q n Φ n (Φ Tn θ + V n )}

= E{Q n Φ n Φ Tn θ∗ } + E{Q n Φ n V n } = θ∗ + E{Q n Φ n V n }.

Thus, under the condition E{Q n Φ n V n } = 0 the estimator of θ is unbiased. (ii) The estimator θ̂ n converges in probability to the true value of parameter θ∗ . (iii) The variance of the unbiased estimator of θ n is Var{θ̂ n } = E{(θ − θ̂ n )(θ − θ̂ n )T } = E{Q n ΦVV T Φ T Q n }. After estimating the parameters θ, the integral J is estimated as follows: J n̂ = ∫ f n̂ (x) dx = θ̂ Tn ∫ ψ(x)p(x) dx = θ̂ Tn h, Ω

(2.39)



where f n̂ (x) = p(x)θ̂ n ψ(x) and the integral h = ∫Ω ψ(x)p(x) dx is assumed to be known.

52 | 2 Sequential Monte Carlo Method and Adaptive Integration Relations (2.38) and (2.39) determine a particular integration method with a fixed density p(x). The performance criterion is the error variance functional D J n̂ = h T D θ n h.

(2.40)

The assertion below provides us with a good method to give an explicit expression of functional (2.40) via the density function. Lemma 2.2.6. Let ψ(x) = [1, φ T (x)]T ∈ ℝm , and

E{φ(ξ)} = ∫ φ(x)p(x) dx = 0,

θ = [θ1 , ϑ T ]T ∈ ℝm ,

F φ = ∫ φ(x)φ T (x)p(x) dx > 0.





Then D J n̂ {p, f} =

1 f 2 (x) 1 dx − J 2 − J φT F −1 (∫ φ J φ ) + o( ), n p(x) n

(2.41)



where

J φ = ∫ f(x)φ(x) dx. Ω

Proof. In view of property (iii) of the parameters θ̂ n and (2.40), the expression of the variance of J n̂ takes the form D J n̂ = E{h T Q n ΦVV T Φ T Q n h}.

(2.42)

Along with the matrix F n , we consider the normalised matrix F n = F n /n, which converges, as n → ∞, in the mean square sense to the matrix F ∞ = ∫ ψ(x)ψ T (x)p(x) dx, Ω

and represents the matrix F n as follows: F n = F ∞ + ∆F. Since, by the hypotheses of the theorem, h T = [1, 0, . . . , 0] = e1T , −1

it is easily seen that F ∞ h = e1 and −1 F −1 n h = (F ∞ + ∆F) h −1

−1

−1

= (F ∞ − F ∞ ∆FF ∞ + O(∆F 2 ))h = e1 + δ n + o(δ n ),

(2.43)

2.2 Adaptive methods of integration | 53

where

−1

−1

−1

δ n = F ∞ ∆FF ∞ h = F ∆Fe1

and E{δ2n } → 0 as n → ∞. Taking account for (2.43), we rewrite formula (2.42) as follows: D J n̂ =

1 ΦVV T Φ T 1 E{e1T e1 } + o( ). n n n

Since by virtue of the hypotheses of the theorem e1T Φ = [1, . . . , 1], we find that E{e1T

2 ΦVV T Φ T 1 n 1 n 1 n e1 } = E{ ( ∑ V i ) } = E{ ∑ V i2 } = ∑ E{V i2 } = σ2∆g , n n i=1 n i=1 n i=1

where σ2∆g = E{∆g2 } = E{(g(x) − θ T ψ(x))2 } = E{(g(x) − θ1 − ϑ T φ(x))2 }.

(2.44)

The true value of θ minimises functional (2.40). It is easily seen that θ1 = J and ϑ = F −1 φ J φ . Removing the brackets in (2.44) and substituting the expression of θ into (2.42), we arrive at the desired result. The lemma is proved. Remark 2.2.7. It is well known (see Section 1.5) that if one evaluates an integral with the use of the conventional Monte Carlo method then the error variance is determined by two leading terms in (2.41). The introduction of the regression dependence ϑ T φ(x) in (2.38) leads us, as we see from (2.41), to a decrease of the integral evaluation error variance, but in those cases only where not all basis functions φ i (x) are orthogonal to the function f(x), since otherwise the variance remains the same as for the conventional approach. Next, we can carry out the optimisation of the functional D J n̂ with respect to the density in the same way as we did in the importance sampling method. But since D J n̂ depends on the integrand f(x) and the integrals J φ , the optimal density popt (x) depends on them, too, and hence, cannot be determined beforehand. Nevertheless, finding an explicit expression of popt {x, f} can be used to organise a process of adaptive tuning of the computation procedure. Lemma 2.2.8. Under the hypotheses of Lemma 2.2.6, the optimal in the sense of minimisation of functional (2.41) density p∗ (x) obeys the non-linear integral equation p∗ (x) =

|f(x)| √λ +

μ T φ(x)

+

(φ T (x)( ∫Ω

−1

2

φ(y)φ T (y)p∗ (y) dy) J φ )

,

(2.45)

where λ is chosen from the normalisation condition and μ ∈ ℝ1 , from the condition ∫ φ(x)p∗ (x) dx = 0. Ω

54 | 2 Sequential Monte Carlo Method and Adaptive Integration Proof. The optimisation of functional (2.41) in p(x) must be carried out with regard to the conditions p(x) > 0,

x ∈ Ω,

∫ p(x) dx = 1,

∫ φ(x)p(x) dx = 0.





We introduce the Lagrange function taking into account the second and third conditions only; the first condition, as we will see, is always satisfied: L=∫ Ω

f 2 (x) dx − J 2 − J φT F −1 J + λ( ∫ p(x) dx − 1) + μ T ( ∫ φ(x)p(x) dx). p(x) Ω

(2.46)



Let p∗ (x) be the optimal density. Let p(x) = p∗ (x) + ε∆p(x), where ε is as small as we wish. Then F φ = F ∗φ + ε∆F φ , where F ∗φ = ∫ φ(x)φ T (x)p∗ (x) dx, Ω

∆F φ = ∫ φ(x)φ T (x)∆p(x) dx,

(2.47)

Ω ∗−1 ∗−1 ∗−1 F −1 φ = F φ − εF φ ∆F φ F φ + o(ε).

The function L thus depends on three variables ε, λ, μ, and for ε = 0 it takes its optimal value with respect to the density p(x), hence the derivative of L(ε, λ, μ) with respect to ε must be zero at ε = 0. Substituting (2.47) into (2.46) and calculating the derivative of L with respect to ε at ε = 0, we arrive at the expression 󵄨󵄨 󵄨󵄨 ∂ f 2 (x) ∂ ∂ L{p∗ + ε∆p, λ, μ}󵄨󵄨󵄨󵄨 dx󵄨󵄨󵄨󵄨 =∫ − J φT F(ε)J φ ∗ 2 ∂ε ∂ε (p (x) + ε∆p(x)) ∂ε 󵄨ε=0 󵄨ε=0 Ω



󵄨󵄨 ∂ ∫(p∗ (x) + ε∆p(x)) dx󵄨󵄨󵄨󵄨 ∂ε 󵄨ε=0 Ω

󵄨󵄨 + μ ∫ φ(x)(p∗ (x) + ε∆p(x)) dx󵄨󵄨󵄨󵄨 , 󵄨ε=0 T



where

F φ (ε) = ∫ φ(x)φ T (x)(p∗ (x) + ε∆p(x)) dx. Ω

2.2 Adaptive methods of integration | 55

From the obvious identity

F φ (ε)F −1 φ (ε) = I

it follows that

∂ −1 ∂ F φ (ε)F −1 F (ε) = 0, φ (ε) + F φ (ε) ∂ε ∂ε φ so, setting ε = 0, we obtain ∂ −1 󵄨󵄨󵄨 T −1 = −F −1 F (ε)󵄨󵄨󵄨 φ (0) ∫ φ(x)φ (x)∆p(x) dx F φ (0). ∂ε φ 󵄨ε=0

(2.48)



Substituting (2.48) into (2.47), we obtain 󵄨󵄨 ∂ f 2 (x) T ∗−1 dx − J φT F ∗−1 L{p∗ + ε∆p, λ, μ}󵄨󵄨󵄨󵄨 = −∫ 2 φ ∫ φ(x)φ (x)∆p(x) dx F φ J φ ∂ε p (x) 󵄨ε=0 Ω



+ λ ∫ ∆p(x) dx + μ T ∫ φ(x)∆p(x) dx Ω



f 2 (x) 2 T = − ∫( 2 − (J φT F ∗−1 φ φ(x)) − λ − μ φ(x))∆p(x) dx p (x) Ω

= 0. Since ∆p(x) is arbitrary, we set the factor at ∆p(x) equal to zero and arrive at integral equation (2.45) in p∗ (x) which obviously satisfies the condition p∗ (x) ≥ 0. The lemma is proved. The obtained equation in p∗ (x) allows us to construct a strategy of adaptive control over the integration procedure. In what follows, it is convenient to rewrite equation (2.45) as follows: |f(x)| p∗ (x) = , T √ λ + μ φ(x) + (ϑ∗T φ(x))2 where

ϑ∗ = arg min E{(g(x)θ T ψ(x))2 }. θ

The strategy of adaptive control over the computation procedure consists of the following: on the base of k preceding series of computations of the integrand f(x) at individual points we construct its approximation f k̂ (x) in the whole domain Ω, and calculate the estimators θ̂ k and J k̂ of the parameters θ∗ and J in accordance with procedure (2.38)–(2.39). The density of the grid points in the next (k + 1)st series is chosen in accordance with (2.45) by the formula p k+1 (x) =

|f k̂ (x)| √ λ + μ T φ(x) + (ϑ̂ Tk φ(x))2

.

(2.49)

56 | 2 Sequential Monte Carlo Method and Adaptive Integration The difference between adaptation algorithms in this case is due to the difference between methods of approximation of the function f(x). The most obvious way consists of using approximation (2.31) of the function f(x). Then the first k series of computations yield f k (x) = p k (x)θ Tk ψ(x).

(2.50)

Substitution of (2.50) into (2.49) yields the following recurrence relation for the density of the (k + 1)st generation of the grid points: p k+1 (x) =

|θ̂ Tk ψ(x)| √ λ + μ T φ(x) + (ϑ̂ Tk φ(x))2

p k (x).

This approach turns out to be advantageous in the case where the initial approximation (2.31) of the function f(x) is sufficiently accurate and reflects its essential changes which the density p(x) must take account of.
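To make the regression representation (2.31) and the least-squares estimation of θ concrete, the following sketch (Python) fits θ by ordinary least squares on a sample and estimates the integral as θ̂ᵀh; the uniform density, the affine basis ψ(x) = (1, x₁ − 1/2, …, x_s − 1/2), and the test integrand are our own illustrative assumptions, not the authors' implementation, and no adaptive update of the density is performed here.

    import numpy as np

    def regression_integral(f, dim, n, rng=np.random.default_rng(0)):
        """Estimate J = integral of f over [0,1]**dim via the regression
        representation g(x) = f(x)/p(x) = theta^T psi(x) + Delta g(x) with p
        uniform on the unit cube and psi(x) = (1, x_1 - 1/2, ..., x_dim - 1/2).
        For this basis h = integral of psi*p equals (1, 0, ..., 0), so the
        integral estimate is simply the fitted intercept."""
        x = rng.random((n, dim))
        g = f(x)                                      # g = f/p with p(x) = 1
        psi = np.hstack([np.ones((n, 1)), x - 0.5])   # basis values at the sample points
        theta_hat, *_ = np.linalg.lstsq(psi, g, rcond=None)   # least squares fit of theta
        h = np.zeros(dim + 1); h[0] = 1.0             # h = integral of psi(x) p(x) dx
        return float(theta_hat @ h)

    if __name__ == "__main__":
        f = lambda x: np.exp(x.sum(axis=1))
        print(regression_integral(f, dim=3, n=20000))   # exact value (e - 1)**3, about 5.07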

2.2.7 Note on notation

Recently, several interpretations of the term ‘adaptation’ have arisen. Usually, adaptation is taken to mean a progressive accumulation of knowledge about the properties of the problem to be solved and the utilisation of this knowledge in the computation procedure. From this viewpoint, the methods suggested above are surely adaptive because they simulate the behaviour of the integrand and use the obtained approximations to construct the primary estimators. But there is another usage where a method is said to be adaptive only when the knowledge accumulated, along with being used in computation, corrects the subsequent behaviour of the algorithm itself. As applied to integration, this means that the position of the next integration node depends on where the preceding nodes have found themselves. From this viewpoint, the suggested methods of importance sampling and of density refinement based on the regression scheme are adaptive, because the points which have been drawn determine the distribution density of the subsequent ones, while the control variate sampling methods are not, because all random points are uniformly distributed and independent. So, the name ‘sequential method of importance sampling’ is probably more correct, but, as the authors believe, it does not reflect the major advantage of this algorithm, namely that it does adapt to the peculiarities of the problem, so for all the suggested methods we preserve the term ‘adaptive.’


2.3 Conclusion

In this chapter, we considered the sequential Monte Carlo method and obtained theorems on convergence in various senses (mean-square, in probability, almost sure). We suggested a way to choose the parameters and constructed an empirical estimator of the variance which allows for easy recursive calculation. We proposed a number of approaches to constructing the primary estimators in the sequential Monte Carlo method based on the ideas of importance sampling and correlated sampling. All of them imply constructing a sequence of easily integrable approximations of the integrand, each depending on all random points drawn beforehand. We obtained expressions for the variances of the primary estimators in terms of approximation errors in the norm L2(Ω) or a norm close to it. The development of integration methods thus reduces to constructing a sequence of easily integrable approximations of the integrand which converge to it in the norm L2(Ω). So all approaches suggested in what follows can also be treated as methods of approximation of functions of many independent variables.

3 Methods of Adaptive Integration Based on Piecewise Approximation

This chapter is devoted to the investigation of adaptive methods of integration based on the sequential Monte Carlo method where the ‘easy’ approximations are piecewise (locally defined) functions on partitions of the integration domain. These methods extend the approach described in Section 2.2.1 to the case of piecewise approximations of a general form for both one- and multidimensional integrals. We suggest a series of techniques to construct such approximations, investigate the convergence of the corresponding adaptive algorithms and discuss details of their numerical implementation.

3.1 Piecewise approximations over subdomains

3.1.1 Piecewise approximations and their orders

At the kth step of the adaptive algorithm (when k − 1 random points have been drawn), let the integration domain Ω be the union of some number N_k of subdomains Ω_j^k with non-overlapping interiors. The set of these subdomains is called a partition of the domain Ω. By a piecewise approximation we mean an approximation f_k(x) whose behaviour in each subdomain Ω_j^k is determined only by the behaviour of the integrand f(x) in this subdomain. An example of such an approximation is the piecewise constant approximation described in Section 2.2.1. The partition in this case is the set of intervals [x^{(j−1)}, x^{(j)}] between the random points drawn beforehand.

Different classes of functions admit approximations of variable accuracy. Generally speaking, the smoother the integrand is, the better is its piecewise approximation. We say that a sequence of piecewise approximations f_k(x) approximates the integrand f(x) with approximation order l > 0 if there exists a constant C > 0 such that

$$|f(x) - f_k(x)| \le C[\mu(\Omega_j^k)]^l, \qquad x\in\Omega_j^k, \qquad (3.1)$$

for any k > 0 and j = 1, 2, ..., N_k. The approximation order l need not be an integer. Approximations of form (3.1) exist for a quite wide class of integrands. In order to demonstrate this, we recall the method from Section 2.2.1 and assume that the integrand satisfies the Lipschitz condition with constant L > 0. In this case for x ∈ [x^{(j−1)}, x^{(j)}] the bound

$$|f(x) - f_k(x)| = |f(x) - f(x^{(j-1)})| \le L(x - x^{(j-1)}) \le L(x^{(j)} - x^{(j-1)})$$

holds true, that is, condition (3.1) is satisfied with order l = 1 and constant C = L.
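As a purely numerical illustration of this bound (added here, not part of the original argument), the following small sketch builds the piecewise constant approximation on the intervals between sorted random points and checks condition (3.1) with l = 1 and C = L; the test function, interval and number of points are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.sin                     # test integrand; Lipschitz with constant L = 1
L = 1.0
a, b = 0.0, 2.0 * np.pi

k = 20                         # k - 1 random points give a partition into k intervals
nodes = np.sort(np.concatenate(([a], rng.uniform(a, b, k - 1), [b])))

for j in range(k):
    left, right = nodes[j], nodes[j + 1]
    xs = np.linspace(left, right, 200)
    # piecewise constant approximation: the value of f at the left endpoint of the interval
    err = np.max(np.abs(f(xs) - f(left)))
    assert err <= L * (right - left) + 1e-12   # condition (3.1) with l = 1, C = L

print("condition (3.1) holds with l = 1, C = L on all", k, "intervals")
```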

3.1.2 Approximations for particular classes of functions

In this section we consider classes of functions admitting an approximation by sequences satisfying condition (3.1) and particular ways to construct them. For the sake of simplicity we consider the case where the subdomains Ω_j^k are hyperparallelepipeds of the space ℝ^s whose sides are parallel to the coordinate axes.

It seems likely that the most general class of functions admitting an approximation satisfying condition (3.1) is the class of functions H(m, a, λ) whose partial derivatives of order up to m inclusive are bounded in absolute value by a constant a and whose derivatives of order m are Hölder-continuous with exponent λ, 0 < λ ≤ 1:

$$\left|\frac{\partial^m f(y)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}} - \frac{\partial^m f(z)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\right| \le a\sum_{q=1}^{s}|y_q - z_q|^{\lambda}$$

for any y, z ∈ Ω. We consider the hyperparallelepiped Ω_j^k with sides d_1, d_2, ..., d_s and measure μ(Ω_j^k) = d_1 d_2 ⋯ d_s. An arbitrary integrand in H(m, a, λ) admits the Taylor series expansion near one of the points x^0 of the subdomain Ω_j^k preserving the terms of orders up to m − 1 and the remainder in the Lagrange form:

$$f(x) = \sum_{i=0}^{m-1}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^i f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}
+ \sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=m}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^m f(y)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q},$$

where y = x^0 + θ(x − x^0) with some θ ∈ [0, 1]. Inside Ω_j^k we set

$$f_k(x) = \sum_{i=0}^{m}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^i f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}. \qquad (3.2)$$

Then for any x ∈ Ω_j^k,

$$|f_k(x) - f(x)| = \left|\sum_{\substack{m_1,\dots,m_s\ge 0\\ m_1+\cdots+m_s=m}} \frac{\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}}{m_1!\cdots m_s!}\left(\frac{\partial^m f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}} - \frac{\partial^m f(y)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\right)\right|$$
$$\le \Big(a\sum_{q=1}^{s}|y_q - x_q^0|^{\lambda}\Big)\sum_{\substack{m_1,\dots,m_s\ge 0\\ m_1+\cdots+m_s=m}} \frac{\prod_{q=1}^{s}|x_q - x_q^0|^{m_q}}{m_1!\cdots m_s!}
\le \Big(a\sum_{q=1}^{s}d_q^{\lambda}\Big)\frac{1}{m!}\Big(\sum_{q=1}^{s}|x_q - x_q^0|\Big)^{m}
\le \Big(a\sum_{q=1}^{s}d_q^{\lambda}\Big)\frac{1}{m!}\Big(\sum_{q=1}^{s}d_q\Big)^{m}.$$

It is not difficult to show that

$$d_i \le \Big(\frac{\max_q d_q}{\min_q d_q}\Big)^{(s-1)/s}[\mu(\Omega_j^k)]^{1/s} = (\kappa_j^k)^{(s-1)/s}[\mu(\Omega_j^k)]^{1/s},$$

where κ_j^k characterises the oblongness of Ω_j^k. Assume that all κ_j^k for any fixed k are bounded above by some constant κ which does not depend on j and k. Then

$$|f_k(x) - f(x)| \le \frac{1}{m!}\,a s\kappa^{(s-1)\lambda/s}[\mu(\Omega_j^k)]^{\lambda/s}\big(s\kappa^{(s-1)/s}[\mu(\Omega_j^k)]^{1/s}\big)^{m} = \frac{a s^{m+1}}{m!}\,\kappa^{(m+\lambda)(s-1)/s}[\mu(\Omega_j^k)]^{(m+\lambda)/s},$$

in other words, condition (3.1) is satisfied with order l = (m + λ)/s. In order to calculate a value of f_k(x) by formula (3.2) one has to evaluate as many values of f(x) and its derivatives as there are monomials of degree not greater than m in s variables, that is, $\binom{m+s}{m}$ (see [51, p. 25]).

As λ → 0, the class H(m, a, λ) reduces to the class C^m(a) of those functions whose partial derivatives of orders up to m inclusive are bounded by some constant a. For the functions of this class one may use approximation (3.2), but it is more obvious that condition (3.1) is satisfied if one chooses as f_k(x) the part of the Taylor series up to order m − 1:

$$f_k(x) = \sum_{i=0}^{m-1}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^i f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}. \qquad (3.3)$$

In both cases it is easy to see that condition (3.1) holds with order l = m/s.

A particular case of the class H(m, a, λ) with m = 0 is the class of functions which are Hölder-continuous with exponent λ:

$$|f(y) - f(z)| \le a\sum_{q=1}^{s}|y_q - z_q|^{\lambda}$$

for any y, z ∈ Ω. For these functions the approximation satisfying condition (3.1) is of the form f_k(x) = f(x^0), x ∈ Ω_j^k, where x^0 is an arbitrary point in Ω_j^k. Thus, in this case the approximation f_k(x) is piecewise constant in Ω.

It is worth emphasising that in the same way we can construct a sequence of approximations satisfying condition (3.1) in more general cases, namely, for functions whose smoothness is broken at hypersurfaces of dimensionality smaller than s. This is essential for integration over domains differing from hyperparallelepipeds. In these cases, the integrand is continued by zero outside Ω. In doing so, we inevitably break the smoothness of the function at the boundary of the integration domain. In this case, the approximations in the subdomains overlapping the boundary have to be constructed intelligently: outside Ω they should be set equal to zero, and inside Ω, in the form of, say, a part of the Taylor series near one of the points of the subdomain which finds itself inside Ω as well. It is easy to show that condition (3.1) remains valid in this case.

3.1.3 Partition moments and estimates for the variances D_k

Imposing constraint (3.1) on the approximations aids greatly in studying the adaptive methods of integration suggested in the preceding chapter. Consider expression (2.27) for the variances of the primary estimators in the adaptive method of control variate sampling. Discarding the second term, partitioning the integral over Ω into N_k integrals over Ω_j^k, and taking into account (3.1), we obtain

$$D_k \le C^2\mu(\Omega)\,E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{N_k}[\mu(\Omega_j^k)]^{2l+1}\Big\}. \qquad (3.4)$$

The mathematical expectation entering into the right-hand side of inequality (3.4) depends on the joint law of distribution of the random points x_1, x_2, ..., x_{k−1}, as well as on the method used to construct the partition (by the values of x_i). In what follows, the variable

$$M_k^{\gamma} = E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{N_k}[\mu(\Omega_j^k)]^{\gamma}\Big\}$$

will be referred to as the moment of order γ of the partition {Ω_j^k}_{j=1}^{N_k} with respect to the random points x_1, x_2, ..., x_{k−1}. We need the following elementary assertion.

Lemma 3.1.1. For any γ > 1, the inequality

$$[\mu(\Omega)]^{\gamma}N_k^{1-\gamma} \le \sum_{j=1}^{N_k}[\mu(\Omega_j^k)]^{\gamma} \le [\mu(\Omega)]^{\gamma}$$

holds true; the equality is attained in the case where all domains Ω_j^k are of equal area, that is, μ(Ω_j^k) = μ(Ω)N_k^{−1}, j = 1, 2, ..., N_k.

Proof. The left-hand inequality is easily established by minimisation of the sum with respect to the measures of Ω_j^k under the condition

$$\sum_{j=1}^{N_k}\mu(\Omega_j^k) = \mu(\Omega),$$

while the right-hand one follows from the inequality α^γ + β^γ ≤ (α + β)^γ, which holds for any α, β > 0, γ > 1.

This lemma provides us with a lower bound for the partition moments of order γ > 1:

$$M_k^{\gamma} \ge [\mu(\Omega)]^{\gamma}N_k^{1-\gamma}.$$

The variances of the primary estimators in the adaptive method of importance sampling can be expressed via partition moments, too, under one more assumption. Let all approximations f_k(x) be uniformly bounded below by a positive number, in other words, let there be a constant m > 0 such that for any k > 0,

$$f_k(x) \ge m \qquad (3.5)$$

for all x ∈ Ω. In this case, all densities p_k(x) of the adaptive method of importance sampling constructed in accordance with (2.24) turn out to be bounded below as well. We easily see, making use of (3.1) and Lemma 3.1.1, that

$$I_k = \int_\Omega f_k(x)\,dx \le |J| + \int_\Omega |f_k(x) - f(x)|\,dx \le |J| + C\sum_{j=1}^{N_k}[\mu(\Omega_j^k)]^{l+1} \le |J| + C[\mu(\Omega)]^{l+1}.$$

Hence

$$p_k(x) = \frac{f_k(x)}{I_k} \ge \frac{m}{|J| + C[\mu(\Omega)]^{l+1}} = \frac{r}{\mu(\Omega)}, \qquad (3.6)$$

where r ≤ 1 in view of the normalisation condition. We turn back to expression (2.26) and make use of inequality (3.6). Then, after the above-described transformations, in the context of the adaptive method of importance sampling we are able to estimate D_k as follows:

$$D_k \le \frac{C^2\mu(\Omega)}{r}\,E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{N_k}[\mu(\Omega_j^k)]^{2l+1}\Big\}. \qquad (3.7)$$

3.1.4 Generalised adaptive methods

In this section we suggest an approach to constructing the generalised adaptive methods presented in a rather general form in Section 2.2.4. In some cases, this approach leads to an essential decrease in the time needed to carry out the algorithm. As we have said in Section 2.2.4, in order to construct an adaptive method one has to choose a sequence of functions g_k(x) = g_k(x; x_1, x_2, ..., x_k) such that

$$\int_\Omega g_k(x)\,dx = \int_\Omega f(x)\,dx = J. \qquad (3.8)$$

At the kth step of an adaptive algorithm, let some partition of the domain Ω into hyperparallelepipeds Ω_j^k, j = 1, 2, ..., N_k, have been constructed. For x ∈ Ω_j^k, let

$$g_k(x) = \frac{f(x) + f(x')}{2},$$

where x' is symmetric to the point x about the centre x^0 of the subdomain Ω_j^k, in other words, x' = 2x^0 − x. It is obvious that

$$\int_{\Omega_j^k} g_k(x)\,dx = \int_{\Omega_j^k} f(x)\,dx,$$

which immediately yields (3.8). We assume that f(x) ∈ H(m, a, λ) (this class is defined in Section 3.1.2). In this case, expanding g_k(x) into the Taylor series in a neighbourhood of the centre of the subdomain Ω_j^k, we easily obtain

$$g_k(x) = \sum_{i=0}^{\lfloor (m-1)/2\rfloor}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=2i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{2i} f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}
+ \sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=m}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{m} f(y)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q},$$

where y = x^0 + θ(x − x^0) with some θ ∈ [0, 1] (all terms of odd total degree in the expansions of f(x) and f(x') cancel). So, choosing f_k(x) as

$$f_k(x) = \sum_{i=0}^{\lfloor (m-1)/2\rfloor}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=2i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{2i} f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}
+ \sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=m}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{m} f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}$$

and repeating the reasoning given in Section 3.1.2, we obtain

$$|g_k(x) - f_k(x)| \le C[\mu(\Omega_j^k)]^{(m+\lambda)/s}, \qquad x\in\Omega_j^k, \qquad (3.9)$$

where the constant C depends only on a, s, and the maximum oblongness κ of the subdomains Ω_j^k for all k and j.

As we have said in Section 2.2.4, the variances of the primary estimators in adaptive methods are defined by expressions (2.26) and (2.27), respectively, where f(x) is replaced by g_k(x). Thus, in view of (3.9), estimation of the variances D_k in the adaptive method of control variate sampling (and in the adaptive method of importance sampling, too, provided that condition (3.5) is satisfied) reduces to estimation of the partition moments M_k^γ with γ = 2(m + λ)/s + 1. Thus, generalised adaptive methods possess the same convergence rate order as the conventional ones in the classes H(m, a, λ).

The advantage of the generalised methods over the conventional ones makes itself evident in integrating functions of the classes C^{2m}(a) (see Section 3.1.2). For these functions, the approximation which guarantees the convergence rate of order 2m/s for

the adaptive methods in relation (3.9) can be chosen as follows:

$$f_k(x) = \sum_{i=0}^{m-1}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=2i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{2i} f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}. \qquad (3.10)$$

In the conventional methods, the approximation which yields the same convergence rate has to be chosen in accordance with (3.3) as follows:

$$f_k(x) = \sum_{i=0}^{2m-1}\;\sum_{\substack{m_1,m_2,\dots,m_s\ge 0\\ m_1+m_2+\cdots+m_s=i}} \frac{1}{m_1!\cdots m_s!}\,\frac{\partial^{i} f(x^0)}{\partial x_1^{m_1}\cdots\partial x_s^{m_s}}\prod_{q=1}^{s}(x_q - x_q^0)^{m_q}. \qquad (3.11)$$

Comparing expressions (3.10) and (3.11), we see that the number of operations needed to calculate the approximation in the generalised methods is smaller by a factor of O(s), a difference which becomes quite significant for large s.

The reasonings in Sections 3.1.3 and 3.1.4 reveal that the subsequent analysis of convergence of the adaptive methods based on piecewise approximations reduces to an investigation of the behaviour of the partition moments M_k^γ depending on the distribution of the random points and the method used to partition the domain. The most straightforward are the one-dimensional algorithms, where the piecewise approximations are defined on the set of intervals formed by the random points drawn. These are investigated in the next section.

3.2 Elementary one-dimensional method

In this section we study the convergence of adaptive methods of one-dimensional integration which extend the method of Section 2.2.1 to the case of piecewise approximations of arbitrary order. The problem consists of evaluating the integral

$$J = \int_a^b f(x)\,dx.$$

At the kth step of the algorithm we construct a piecewise approximation f_k(x) on a partition of the interval [a, b] consisting of the set of intervals [x^{(j−1)}, x^{(j)}], j = 1, 2, ..., k, where x^{(0)} = a, x^{(k)} = b, and x^{(1)}, x^{(2)}, ..., x^{(k−1)} are the order statistics of the sample x_1, x_2, ..., x_{k−1}. As an illustration, the first four piecewise linear approximations are given in Figure 3.1 (compare with Figure 2.1). Thus, in this case N_k = k, and the moments of this partition can be represented as follows:

$$M_k^{\gamma} = E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{k}\Delta_j^{\gamma}\Big\},$$

where Δ_j = x^{(j)} − x^{(j−1)} is the length of the jth interval of the partition.

Fig. 3.1. Construction of piecewise linear approximations.

3.2.1 Control variate sampling

In the control variate sampling method, the random points x_1, x_2, ..., x_{k−1} are independent and uniformly distributed on the interval [a, b]. For these points, it is well known how the lengths of the intervals between the order statistics are distributed [12]. Namely, the variables Δ_j are identically distributed on the interval [0, b − a] with the density

$$p(\Delta) = \frac{k-1}{b-a}\Big(1 - \frac{\Delta}{b-a}\Big)^{k-2}. \qquad (3.12)$$

Lemma 3.2.1. Let the random variable Δ be distributed with density (3.12). Then

$$E\{\Delta^{\gamma}\} = (b-a)^{\gamma}\,\frac{\Gamma(\gamma+1)(k-1)!}{\Gamma(k+\gamma)}.$$

Proof. It is not difficult to see that

$$E\{\Delta^{\gamma}\} = \int_0^{b-a}\Delta^{\gamma}\,\frac{k-1}{b-a}\Big(1 - \frac{\Delta}{b-a}\Big)^{k-2}d\Delta
= (k-1)(b-a)^{\gamma}\int_0^1 \delta^{\gamma}(1-\delta)^{k-2}\,d\delta
= (k-1)(b-a)^{\gamma}B(\gamma+1, k-1) = (b-a)^{\gamma}\,\frac{\Gamma(\gamma+1)(k-1)!}{\Gamma(k+\gamma)},$$

which is the desired result.

Remark 3.2.2. For k = 1 the variable Δ_1 is not distributed by law (3.12) but takes the value b − a with probability one. Nevertheless, one easily sees that the expression for the mathematical expectation obtained in the lemma remains true in this case as well.

Since the variables Δ_j are identically distributed, from the above lemma we immediately arrive at the following representation of the partition moments:

$$M_k^{\gamma} = (b-a)^{\gamma}\,\frac{\Gamma(\gamma+1)\,k!}{\Gamma(k+\gamma)}.$$

Substituting this expression into (3.4), we obtain a bound for D_k:

$$D_k \le C^2(b-a)^{2l+2}\,\frac{\Gamma(2l+2)\,k!}{\Gamma(k+2l+1)}.$$
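The spacing moment of Lemma 3.2.1 is easy to check numerically; the following sketch (an illustration added here, with arbitrary values of k and γ) compares the empirical mean of Δ^γ with the closed-form expression.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

a, b = 0.0, 1.0
k, gamma = 10, 2.5            # k - 1 uniform points; moment order gamma (assumed values)
n_rep = 200_000

# spacings between the order statistics of k - 1 uniform points on [a, b]
pts = np.sort(rng.uniform(a, b, size=(n_rep, k - 1)), axis=1)
nodes = np.hstack([np.full((n_rep, 1), a), pts, np.full((n_rep, 1), b)])
deltas = np.diff(nodes, axis=1)

empirical = np.mean(deltas[:, 0] ** gamma)      # all spacings are identically distributed
exact = ((b - a) ** gamma * math.gamma(gamma + 1)
         * math.factorial(k - 1) / math.gamma(k + gamma))
print(f"empirical {empirical:.6f}  vs  Lemma 3.2.1 value {exact:.6f}")
```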

3.2.2 Importance sampling

The investigation of partition moments in the adaptive method of importance sampling is somewhat more complicated. In view of (3.6), the densities p_j for all j > 0 admit the representation

$$p_j(x) = \frac{r}{\mu(\Omega)} + (1-r)\tilde p_j(x),$$

where the p̃_j(x) possess all properties of a distribution density:

$$\tilde p_j(x) \ge 0, \qquad \int_a^b \tilde p_j(x)\,dx = 1.$$

Hence the realisations x_j of the random variables with densities p_j(x) can be treated as generated in accordance with the following recursive procedure:
1. A random variable z_j is drawn, which is independent of x_1, x_2, ..., x_{j−1} and takes two values: z_j = 0 with probability r and z_j = 1 with probability 1 − r;
2. if z_j = 0, then we generate x_j uniformly distributed on [a, b];
3. if z_j = 1, then we generate x_j distributed with the density p̃_j(x).
(A small illustrative sketch of this two-step draw is given below.)

Let us consider the whole set of realisations x_1, x_2, ..., x_{k−1}, and separate those for which the corresponding z_j were zeros; let them be x̃_j, j = 1, 2, ..., k̃ − 1, and let the intervals between the neighbouring terms of the set of order statistics be Δ̃_j:

$$\tilde\Delta_j = \tilde x_{(j)} - \tilde x_{(j-1)}, \qquad j = 1,\dots,\tilde k; \qquad \tilde x_{(0)} = a, \quad \tilde x_{(\tilde k)} = b.$$
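A minimal sketch of steps 1–3 above (added for illustration); the density p̃_j is a stand-in here, since in the actual algorithm it is built from the current approximation of the integrand.

```python
import numpy as np

rng = np.random.default_rng(2)

a, b, r = 0.0, 1.0, 0.3       # r is the uniform-mixture weight from (3.6) (assumed value)

def sample_tilde():
    """Stand-in for a draw from p~_j(x); here simply a Beta(2, 5) variable rescaled to [a, b]."""
    return a + (b - a) * rng.beta(2.0, 5.0)

def draw_point():
    if rng.random() < r:               # step 1: z_j = 0 with probability r
        return rng.uniform(a, b)       # step 2: uniform draw on [a, b]
    return sample_tilde()              # step 3: draw from the remaining density p~_j

xs = np.array([draw_point() for _ in range(10)])
print(xs)
```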

The variable k̃ is random and takes the values from 1 to k with the probabilities

$$P\{\tilde k = j\} = \binom{k-1}{j-1}r^{j-1}(1-r)^{k-j}. \qquad (3.13)$$

Hence the partition moment M_k^γ admits the representation

$$M_k^{\gamma} = E_{\tilde k}\Big\{E\Big\{\sum_{j=1}^{k}\Delta_j^{\gamma}\,\Big|\,\tilde k\Big\}\Big\}.$$

Since every interval [x̃_{(j−1)}, x̃_{(j)}] contains one or more intervals [x^{(j−1)}, x^{(j)}], for fixed k̃ the inequality

$$\sum_{i=1}^{k}\Delta_i^{\gamma} \le \sum_{i=1}^{\tilde k}\tilde\Delta_i^{\gamma}$$

holds true for any γ > 1 (see the proof of Lemma 3.1.1). Hence, recalling the uniformity and Lemma 3.2.1, we obtain

$$E\Big\{\sum_{j=1}^{k}\Delta_j^{\gamma}\,\Big|\,\tilde k\Big\} \le E\Big\{\sum_{j=1}^{\tilde k}\tilde\Delta_j^{\gamma}\,\Big|\,\tilde k\Big\} = (b-a)^{\gamma}\,\frac{\Gamma(\gamma+1)\,\tilde k!}{\Gamma(\tilde k+\gamma)}.$$

Therefore,

$$M_k^{\gamma} \le E_{\tilde k}\Big\{(b-a)^{\gamma}\,\frac{\Gamma(\gamma+1)\,\tilde k!}{\Gamma(\tilde k+\gamma)}\Big\}.$$

A bound for the obtained mathematical expectation is given in the lemma below.

Lemma 3.2.3. Let k̃ be distributed by (3.13). Then

$$E_{\tilde k}\Big\{\frac{\tilde k!}{\Gamma(\tilde k+\gamma)}\Big\} \le r^{-\gamma}\,\frac{(k-1)!}{\Gamma(k+\gamma-1)}.$$

Proof. It is not difficult to see that

$$E_{\tilde k}\Big\{\frac{\tilde k!}{\Gamma(\tilde k+\gamma)}\Big\} = \sum_{j=1}^{k}\binom{k-1}{j-1}\frac{j!}{\Gamma(j+\gamma)}r^{j-1}(1-r)^{k-j}
= \sum_{j=1}^{k}\frac{(k-1)!\,j}{(k-j)!\,\Gamma(j+\gamma)}r^{j-1}(1-r)^{k-j}$$
$$\le \sum_{j=1}^{k}\frac{(k-1)!}{(k-j)!\,\Gamma(j+\gamma-1)}r^{j-1}(1-r)^{k-j}
= \frac{(k-1)!}{\Gamma(k+\gamma-1)}\sum_{j=1}^{k}\frac{\Gamma(k+\gamma-1)}{(k-j)!\,\Gamma(j+\gamma-1)}r^{j-1}(1-r)^{k-j}.$$

We consider the auxiliary function

$$g(r) = \sum_{j=1}^{k}\frac{\Gamma(k+\gamma-1)}{(k-j)!\,\Gamma(j+\gamma-1)}\,r^{j+\gamma-1}(1-r)^{k-j}$$

and study its behaviour in the interval r ∈ [0, 1]. We see that

$$g'(r) = \sum_{j=1}^{k}\frac{\Gamma(k+\gamma-1)}{(k-j)!\,\Gamma(j+\gamma-1)}(j+\gamma-1)r^{j+\gamma-2}(1-r)^{k-j}
- \sum_{j=1}^{k}\frac{\Gamma(k+\gamma-1)}{(k-j)!\,\Gamma(j+\gamma-1)}(k-j)r^{j+\gamma-1}(1-r)^{k-j-1}$$
$$\ge \sum_{j=0}^{k-1}\frac{\Gamma(k+\gamma-1)}{(k-j-1)!\,\Gamma(j+\gamma-1)}r^{j+\gamma-1}(1-r)^{k-j-1}
- \sum_{j=1}^{k-1}\frac{\Gamma(k+\gamma-1)}{(k-j-1)!\,\Gamma(j+\gamma-1)}r^{j+\gamma-1}(1-r)^{k-j-1}$$
$$= \frac{\Gamma(k+\gamma-1)}{(k-1)!\,\Gamma(\gamma-1)}\,r^{\gamma-1}(1-r)^{k-1} \ge 0, \qquad 0\le r\le 1.$$

Therefore, g(r) ≤ g(1) = 1 for all r ∈ [0, 1], which proves the lemma.

Using Lemma 3.2.3, we finally arrive at the bound for M_k^γ:

$$M_k^{\gamma} \le \Big(\frac{b-a}{r}\Big)^{\gamma}\,\frac{\Gamma(\gamma+1)(k-1)!}{\Gamma(k+\gamma-1)}. \qquad (3.14)$$

Substituting bound (3.14) into formula (3.7), we obtain the following bound for D_k in the adaptive method of importance sampling:

$$D_k \le C^2\Big(\frac{b-a}{r}\Big)^{2l+2}\,\frac{\Gamma(2l+2)(k-1)!}{\Gamma(k+2l)}.$$

3.2.3 Conclusions and remarks

In the preceding sections we have seen that under condition (3.1) (and condition (3.5) in the method of importance sampling) the variances D_k of the primary estimators in both adaptive methods decrease with the rate O(k^{−2l}). Therefore, by virtue of Theorem 2.1.3, if the coefficients of the sequential scheme are chosen as

$$\beta_n = \frac{2l+1}{n+2l},$$

then the variances of the secondary estimators are of order O(n^{−2l−1}), while the error of the method under a given confidence level (see Corollary 2.1.6) is of order O(n^{−l−1/2}). From Theorem 2.1.13 it follows that the secondary estimators converge to the true value of the integral with probability one for any l > 0.

Thus, for any function of one variable f(x) in the class H(m, a, λ), on the basis of the suggested approaches one is able to construct a sequential Monte Carlo method which converges (with a given confidence) with the rate O(n^{−m−λ−1/2}), and the almost sure convergence takes place either for any m > 0 or for λ > 0 if m = 0. In order to execute n steps of the algorithm, one needs O(n) calculations of the integrand and its derivatives. In [7], N. S. Bakhvalov shows that in the class of functions H(m, a, λ) no indeterministic integration method which uses only the information on values of f(x) and its derivatives at O(n) points has a convergence rate for a given confidence level better than O(n^{−(m+λ)/s−1/2}). So, the methods we suggest are optimal one-dimensional integration algorithms on the class H(m, a, λ) with respect to the convergence rate.

Of most interest to applications is constructing multidimensional algorithms of integration, that is, the case s > 1. As a rule, while investigating multidimensional algorithms one restricts the attention to the case where the integration domain is a multidimensional hyperparallelepiped or, without loss of generality, the unit hypercube in the space ℝ^s. The analysis of the two one-dimensional algorithms we have just considered shows that it is rather difficult to extend them directly to the multidimensional case. First, it is not clear how, having the random points drawn, to build up a convenient partition of the integration domain. By direct analogy, after drawing a random point one can partition the domain by means of hyperplanes which pass through it and are orthogonal to the coordinate axes (an example for the two-dimensional case exists in [5]); but this leads to very rapid (as k^s) growth of the number N_k of the subdomains at the kth step of the algorithm, and hence of the computational labour. (The partitioning should not change too fast, otherwise it would be hard to organise the recursive calculation of the values of the integrals I_k.) So this approach seems to be inefficient even for small s. In this connection, it is important to develop a method to partition the domain such that the number of subdomains N_k grows slowly as k does, independently of the dimensionality (we mean partitioning into hyperparallelepipeds), while preserving the tendency of fast decrease of the partition moments as k grows. One method of partitioning which satisfies these requirements is thoroughly studied in the next section.

3.3 Sequential bisection

In this section we study a method of constructing sequences of partitions of the integration domain which guarantees the same convergence rate of the adaptive methods as the method presented in Section 3.2 in the one-dimensional case but remains simple and efficient when extended to the multidimensional case.

3.3.1 Description of the bisection technique

The essence of the bisection method consists of the following simple rule: each subsequent partition of the integration domain is derived from the preceding one by bisection (division into two equal parts) of that subdomain of the current partition where the current random point finds itself. We observe that the initial partition consists only of the domain Ω itself, so the partition resulting from drawing the random point x_1 always consists of two halves of the domain Ω, independently of which of its parts this point falls into. The first steps of the bisection method in the two-dimensional case are shown in Figure 3.2.

Fig. 3.2. Sequential bisection in the two-dimensional case.

In order to guarantee the uniqueness of the partitions obtained over a given sequence of random points, we have to agree on the bisection technique. We assume that the partitioning of a given subdomain is carried out by means of a hyperplane which is orthogonal to that coordinate axis along which the subdomain is of the greatest length, and if there are several such axes, to that of the smallest ordinal under the given Euclidean basis. Along with uniqueness, this technique provides us with one more advantageous peculiarity of the partitions so obtained, namely, their oblongness κ_j^k (see Section 3.1.2) either decreases, or increases by no more than a factor of two, if at all. In turn, the constant κ bounding the oblongness κ_j^k exists and can be set to κ = max{2, κ_Ω}, where κ_Ω stands for the oblongness of the whole integration domain Ω. It is clear that the number N_k of the subdomains of the partition formed up to the kth step of the algorithm is equal precisely to k. Thus, the former desired property of the partitions mentioned in the conclusion of Section 3.2 holds. (A small sketch of the bisection rule is given below.)
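A minimal sketch of this bisection rule for axis-aligned boxes (our illustration; a flat list of subdomains is used here instead of the binary tree introduced below, and all names are ours).

```python
import numpy as np

rng = np.random.default_rng(3)

def bisect_containing(boxes, point):
    """Find the box containing `point` and halve it along its longest side."""
    for idx, (lo, hi) in enumerate(boxes):
        if np.all(lo <= point) and np.all(point <= hi):
            side = np.argmax(hi - lo)        # longest side; ties go to the smallest axis index
            mid = 0.5 * (lo[side] + hi[side])
            hi_left, lo_right = hi.copy(), lo.copy()
            hi_left[side] = mid
            lo_right[side] = mid
            boxes[idx] = (lo, hi_left)
            boxes.append((lo_right, hi))
            return
    raise ValueError("point outside the integration domain")

# unit square; after drawing j points the partition consists of j + 1 subdomains
boxes = [(np.zeros(2), np.ones(2))]
for _ in range(5):
    bisect_containing(boxes, rng.uniform(size=2))
print(len(boxes), "subdomains")
```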

The latter property, consisting of a fast decrease of the partition moments as k grows, is studied in the next sections.

For the sake of convenience, we represent a partition resulting from applying the bisection method as a growing binary tree [70] whose vertices are subdomains of Ω. The root of the tree is the domain Ω itself. At each step of the algorithm, the leaves of the tree, that is, the vertices with no descendants, are all subdomains of the current partition. Upon drawing the next random point, the subdomain into which the point falls is determined, as well as the corresponding leaf of the tree; then the leaf gives birth to two descendants which are the halves of the subdomain partitioned.

3.3.2 Control variate sampling

We recall that in this method the points x_k are independent and uniformly distributed in the domain Ω. This allows us to get a recurrence relation for the moments M_k^γ of the partitions generated by the bisection method.

Lemma 3.3.1. For any k ≥ 1, the following relation is true:

$$M_{k+1}^{\gamma} = M_k^{\gamma} - \frac{1}{\mu(\Omega)}\Big(1 - \frac{1}{2^{\gamma-1}}\Big)M_k^{\gamma+1}. \qquad (3.15)$$

Proof. Let us index all partitions obtained for different realisations of the random points x_1, x_2, ..., x_{k−1} by elements t of some set T. Let S_t^γ stand for the value of Σ_{j=1}^{N_k}[μ(Ω_j^k)]^γ calculated at a partition t, and let p_t be the probability of occurrence of a partition t. Then M_k^γ admits the representation

$$M_k^{\gamma} = \sum_{t\in T} S_t^{\gamma}\,p_t.$$

The next point x_k drawn may find itself inside one of the k existing subdomains of partition t, so all partitions generated by k random points can be indexed by a double index t, i with t ∈ T, i = 1, ..., k. It is clear that under such an indexation scheme some partitions are repeated, but for the sake of convenience we treat them as distinct. The variables S_{t,i}^γ and p_{t,i} so defined obey the relations

$$S_{t,i}^{\gamma} = S_t^{\gamma} - [\mu(\Omega_i^k)]^{\gamma} + 2\Big(\frac{\mu(\Omega_i^k)}{2}\Big)^{\gamma} = S_t^{\gamma} - [\mu(\Omega_i^k)]^{\gamma}\Big(1 - \frac{1}{2^{\gamma-1}}\Big),
\qquad
p_{t,i} = p_t\,\frac{\mu(\Omega_i^k)}{\mu(\Omega)}.$$

The latter one follows from the uniform distribution of x_k in the domain Ω. Therefore,

$$M_{k+1}^{\gamma} = \sum_{t\in T}\sum_{i=1}^{k} S_{t,i}^{\gamma}\,p_{t,i}
= \sum_{t\in T}\sum_{i=1}^{k}\Big[S_t^{\gamma} - [\mu(\Omega_i^k)]^{\gamma}\Big(1 - \frac{1}{2^{\gamma-1}}\Big)\Big]p_t\,\frac{\mu(\Omega_i^k)}{\mu(\Omega)}$$
$$= \sum_{t\in T} S_t^{\gamma}p_t - \frac{1}{\mu(\Omega)}\Big(1 - \frac{1}{2^{\gamma-1}}\Big)\sum_{t\in T}p_t\sum_{i=1}^{k}[\mu(\Omega_i^k)]^{\gamma+1}
= \sum_{t\in T} S_t^{\gamma}p_t - \frac{1}{\mu(\Omega)}\Big(1 - \frac{1}{2^{\gamma-1}}\Big)\sum_{t\in T}S_t^{\gamma+1}p_t
= M_k^{\gamma} - \frac{1}{\mu(\Omega)}\Big(1 - \frac{1}{2^{\gamma-1}}\Big)M_k^{\gamma+1},$$

which is the desired result.

Lemma 3.3.1 together with the initial condition M_1^γ = [μ(Ω)]^γ allows us to get a closed representation of the moments M_k^γ. But it is rather cumbersome and very unsuitable for estimation purposes. At the same time, direct evaluation of some first moments (k = 2, 3, 4, ...) with the use of relation (3.15) shows that the M_k^γ admit the representation

$$M_k^{\gamma} = [\mu(\Omega)]^{\gamma}\sum_{j=0}^{k-1}\frac{a_{kj}}{2^{\gamma j}}, \qquad (3.16)$$

where a_{kj} are some numbers which do not depend on γ. Substituting this representation into (3.15) yields the recurrence relation for a_{kj}:

$$a_{(k+1)j} = \Big(1 - \frac{1}{2^{j}}\Big)a_{kj} + \frac{1}{2^{j-2}}\,a_{k(j-1)}, \qquad j = 1, 2, \dots, k-1,$$

with initial conditions of the form

$$a_{10} = 1, \qquad a_{k0} = 0, \quad k > 1.$$

We easily see that this recurrence relation is valid for all j > 0 if we set a_{kj} = 0 for j ≥ k. Furthermore, from the initial conditions and the recurrence relation it follows that the a_{kj} are non-negative. The form of a_{kj} is described in the following lemma.

Lemma 3.3.2. The following representation holds true for all j > 0:

$$a_{kj} = 2^{-j(j-3)/2}\sum_{m=1}^{j}\frac{\big(1 - \tfrac{1}{2^m}\big)^{k-2}}{\prod_{\substack{i=1\\ i\ne m}}^{j}\big(\tfrac{1}{2^i} - \tfrac{1}{2^m}\big)}. \qquad (3.17)$$

This lemma is proved by direct verification.

Exercise 3.3.3. Deduce formula (3.17). Hint: Consider the series

$$A_j(z) = \sum_{k=1}^{\infty} a_{kj}z^{k}$$

in a neighbourhood of the point z = 0 and, using the recurrence relation for a_{kj}, obtain a fractional rational expression for the function A_j(z). Its expansion into a series in powers of z yields the representation of a_{kj} sought for.

Combining (3.16) and (3.17), we again are able to get a closed expression for M_k^γ. But we are interested in estimating these values, so we need to find upper bounds for the coefficients a_{kj}.

Lemma 3.3.4. The following upper bound for a_{kj} holds true:

$$a_{kj} \le \frac{2^{j}\,\big(1 - \tfrac{1}{2^{j}}\big)^{k-2}}{\prod_{i=1}^{j-1}\big(1 - \tfrac{1}{2^{i}}\big)}.$$

Proof. The sum entering into representation (3.17) has alternating signs because of the variation of the number of negative factors in the denominator. Consider the ratio of the (m+1)st to the mth term of this sum:

$$\frac{\big(1 - \tfrac{1}{2^{m+1}}\big)^{k-2}}{\prod_{\substack{i=1\\ i\ne m+1}}^{j}\big(\tfrac{1}{2^i} - \tfrac{1}{2^{m+1}}\big)} :
\frac{\big(1 - \tfrac{1}{2^{m}}\big)^{k-2}}{\prod_{\substack{i=1\\ i\ne m}}^{j}\big(\tfrac{1}{2^i} - \tfrac{1}{2^{m}}\big)}
= \frac{\big(1 - \tfrac{1}{2^{m+1}}\big)^{k-2}}{\big(1 - \tfrac{1}{2^{m}}\big)^{k-2}}\cdot
\frac{\prod_{\substack{i=1\\ i\ne m}}^{j}\big(\tfrac{1}{2^i} - \tfrac{1}{2^{m}}\big)}{\prod_{\substack{i=1\\ i\ne m+1}}^{j}\big(\tfrac{1}{2^i} - \tfrac{1}{2^{m+1}}\big)}$$
$$= \frac{\big(1 - \tfrac{1}{2^{m+1}}\big)^{k-2}}{\big(1 - \tfrac{1}{2^{m}}\big)^{k-2}}\cdot
\frac{\tfrac{1}{2^{j}} - \tfrac{1}{2^{m}}}{\tfrac{1}{2^{j-1}}\big(1 - \tfrac{1}{2^{m}}\big)}
= \frac{\big(1 - \tfrac{1}{2^{m+1}}\big)^{k-2}}{\big(1 - \tfrac{1}{2^{m}}\big)^{k-2}}\cdot
\frac{\tfrac{1}{2} - \tfrac{1}{2^{m-j+1}}}{1 - \tfrac{1}{2^{m}}}.$$

It is easily seen that the absolute value of this ratio is greater than 1 for all m < j − 1, in other words, the absolute value of each term, with the possible exception of the last, exceeds that of the preceding one. This means that the whole sum is no greater than the last summand (because of its positivity and the fact that the sum is alternating), in other words,

$$a_{kj} \le 2^{-j(j-3)/2}\,\frac{\big(1 - \tfrac{1}{2^{j}}\big)^{k-2}}{\prod_{i=1}^{j-1}\big(\tfrac{1}{2^{i}} - \tfrac{1}{2^{j}}\big)}
= \frac{2^{-j(j-3)/2}}{2^{-j(j-1)/2}}\cdot\frac{\big(1 - \tfrac{1}{2^{j}}\big)^{k-2}}{\prod_{i=1}^{j-1}\big(1 - \tfrac{1}{2^{j-i}}\big)}
= \frac{2^{j}\,\big(1 - \tfrac{1}{2^{j}}\big)^{k-2}}{\prod_{i=1}^{j-1}\big(1 - \tfrac{1}{2^{i}}\big)},$$

2j (1 −

which is the desired result. Substituting the result of the lemma into representation (3.16), we arrive at the bound γ

1 k−2 ) 2j , j−1 j(γ−1) ∏i=1 (1 − 21i ) j=1 2

k−1

M k ≤ [μ(Ω)]γ ∑

(1 −

which holds true for all k > 1 (we drop the term with j = 0 because of the initial conditions on a kj ). We introduce the auxiliary function ∞

φ(t) = ∏(1 − i=1

1 ), ti

which is non-negative and non-decreasing for t > 1. Then γ

Mk ≤

[μ(Ω)]γ k−1 1 1 k−2 ∑ j(γ−1) (1 − j ) , φ(2) j=1 2 2

k > 1.

(3.18) γ

It is hard to derive an information concerning the behaviour of M k from expression (3.18) for large k. The lemma below makes it somewhat more simple.

Lemma 3.3.5. Let the function g(x) be defined for x ≥ 0 by the equality

$$g(x) = 2^{-x(\gamma-1)}(1 - 2^{-x})^{k-2}, \qquad k > 1.$$

Then

$$\sum_{j=1}^{k-1} g(j) \le \frac{1}{\ln 2}\,\frac{(k-2)!\,\Gamma(\gamma-1)}{\Gamma(k+\gamma-2)} + \frac{(k-2)^{k-2}(\gamma-1)^{\gamma-1}}{(k+\gamma-3)^{k+\gamma-3}}.$$

Proof. It is obvious that g(x) > 0 for all positive x. Furthermore, it is easily seen that the function g(x) has a single maximum in [0, ∞) and its greatest value is equal to

$$g_0 = \frac{(k-2)^{k-2}(\gamma-1)^{\gamma-1}}{(k+\gamma-3)^{k+\gamma-3}}.$$

Hence, the bound

$$\sum_{j=1}^{k-1} g(j) \le \int_1^{k-1} g(x)\,dx + g_0$$

is true. It is easily proved by dividing the sum into two parts, before the point of maximum and after it, and replacing these parts by integrals over the corresponding intervals. We estimate the integral as follows:

$$\int_1^{k-1} g(x)\,dx \le \int_0^{\infty} g(x)\,dx = \frac{1}{\ln 2}\int_0^{1}(1-x)^{k-2}x^{\gamma-2}\,dx
= \frac{1}{\ln 2}\,B(k-1, \gamma-1) = \frac{1}{\ln 2}\,\frac{(k-2)!\,\Gamma(\gamma-1)}{\Gamma(k+\gamma-2)},$$

which proves the lemma.

Using the result of Lemma 3.3.5 in bound (3.18), we finally arrive at the following bound for the partition moments in the method of sequential bisection:

$$M_k^{\gamma} \le \frac{[\mu(\Omega)]^{\gamma}}{\varphi(2)}\Big[\frac{1}{\ln 2}\,\frac{(k-2)!\,\Gamma(\gamma-1)}{\Gamma(k+\gamma-2)} + \frac{(k-2)^{k-2}(\gamma-1)^{\gamma-1}}{(k+\gamma-3)^{k+\gamma-3}}\Big]. \qquad (3.19)$$

Remark 3.3.6. In the above approach, the subdomain where the drawn random point finds itself is divided in two. Along with bisection one may consider the ‘t-section,’ where the subdomain is partitioned into t, t ≥ 2, equal parts. In this case the partition tree is t-ary, the number of subdomains is N_k = 1 + (t − 1)(k − 1), and the constant κ which characterises the oblongness of the subdomains is set to κ = max{t, κ_Ω}. The investigation of this approach repeats the above one almost word for word. The final bound for the partition moments in this case is

$$M_k^{\gamma} \le \frac{[\mu(\Omega)]^{\gamma}}{\varphi(t)}\Big[\frac{1}{\ln t}\,\frac{(k-2)!\,\Gamma(\gamma-1)}{\Gamma(k+\gamma-2)} + \frac{(k-2)^{k-2}(\gamma-1)^{\gamma-1}}{(k+\gamma-3)^{k+\gamma-3}}\Big]. \qquad (3.20)$$

Formulas (3.19) and (3.20) show that for large k the partition moments are of order O(k^{−(γ−1)}). From bound (3.4) we conclude that the variances D_k of the primary estimators in the adaptive method of control variate sampling based on sequential bisection are of order O(k^{−2l}), where l is the approximation order (see condition (3.1)).

3.3.3 Importance sampling

In order to investigate the behaviour of the partition moments in the adaptive method of importance sampling, we again use the idea suggested in Section 3.2.2, which consists of reducing the realisations of x_1, x_2, ..., x_k, ..., which are dependent and distributed by a rather complex joint law, to a family of a random number of independent realisations of a uniformly distributed random variable. Given x_1, x_2, ..., x_{k−1}, let some partition Ω_1^k, Ω_2^k, ..., Ω_k^k be built with the use of sequential bisection. We have to find a bound for the partition moments

$$M_k^{\gamma} = E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{k}[\mu(\Omega_j^k)]^{\gamma}\Big\}.$$

Repeating the reasoning presented in Section 3.2.2, we see that some number k̃ − 1 of points among x_1, x_2, ..., x_{k−1} can be treated as independent realisations of a random variable uniformly distributed in Ω. The variable k̃ is random itself and is distributed by law (3.13). We fix k̃ and the mentioned realisations x̃_1, x̃_2, ..., x̃_{k̃−1} and imagine that only those have been drawn. We let Ω̃_1^{k̃}, Ω̃_2^{k̃}, ..., Ω̃_{k̃}^{k̃} stand for the partition so obtained by the bisection method and refer to it as an incomplete partition. Let us show that any subdomain of the incomplete partition contains one or more subdomains of the complete one, that is, for any i ≤ k there exists j ≤ k̃ such that Ω_i^k ⊂ Ω̃_j^{k̃}. In other words, the following assertion is true.

Lemma 3.3.7. The tree of subdomains of the complete partition includes entirely the tree of subdomains of the incomplete partition, no matter what k̃ is and how the points x̃_1, x̃_2, ..., x̃_{k̃−1} are chosen.

Proof. We consider two processes in parallel, namely building the complete partition over the points x_j, j = 1, ..., k − 1, and building the incomplete partition over the points x̃_j, j = 1, ..., k̃ − 1. We compare the partition trees at the instants both processes find themselves at the same point x̃_j = x_{i_j}, j = 1, ..., k̃, and demonstrate that at each of these instants the tree of the complete partition includes entirely the tree of the incomplete one. We use mathematical induction. Let j = 1. It is clear that in this case the tree of the incomplete partition consists of three vertices only, the domain Ω itself and its two halves. It is, of course, included in the tree of the complete partition, where the domain Ω has been divided into two halves at the point x_1.

Now we assume that the desired fact has been proved at the instant the points up to x̃_{j−1} = x_{i_{j−1}} inclusive have been drawn in both processes, and consider the instant the points up to x̃_j = x_{i_j} inclusive have been drawn. The tree of the incomplete partition, as compared with the previous instant, contains only two new vertices, which are descendants of the vertex containing the subdomain where the newly drawn point x̃_j has found itself (denote it by Ω̂_j). Let us show that they enter into the tree of the complete partition as well. If they have been there at the induction assumption instant, then the desired result is proved. Otherwise, before this instant the subdomain Ω̂_j was untouched, but because at least one of the points x_{i_{j−1}+1}, ..., x_{i_j} (at least the last one) falls into this domain, it must become bisected.

Thus, the tree of the incomplete partition Ω̃_1^{k̃}, Ω̃_2^{k̃}, ..., Ω̃_{k̃}^{k̃} is contained in the tree of the complete partition built over the random points x_1, x_2, ..., x_{i_{k̃−1}}, and hence in the final tree of the complete partition Ω_1^k, Ω_2^k, ..., Ω_k^k, which is the desired result.

By virtue of the lemma just proved and the inequality used to prove Lemma 3.1.1, the relation

$$\sum_{j=1}^{k}[\mu(\Omega_j^k)]^{\gamma} \le \sum_{j=1}^{\tilde k}[\mu(\tilde\Omega_j^{\tilde k})]^{\gamma}$$

is valid. Therefore,

$$M_k^{\gamma} = E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{k}[\mu(\Omega_j^k)]^{\gamma}\Big\}
= E_{\tilde k}\Big\{E_{x_1,x_2,\dots,x_{k-1}}\Big\{\sum_{j=1}^{k}[\mu(\Omega_j^k)]^{\gamma}\,\Big|\,\tilde k\Big\}\Big\}
\le E_{\tilde k}\Big\{E_{\tilde x_1,\tilde x_2,\dots,\tilde x_{\tilde k-1}}\Big\{\sum_{j=1}^{\tilde k}[\mu(\tilde\Omega_j^{\tilde k})]^{\gamma}\,\Big|\,\tilde k\Big\}\Big\}.$$

But the inner mathematical expectation is nothing more nor less than the moment M_{k̃}^γ of the partition constructed with the use of the bisection method over the uniform random points x̃_1, x̃_2, ..., x̃_{k̃−1}. Its behaviour has been thoroughly studied in the preceding section and is determined by bound (3.19):

$$E_{\tilde x_1,\tilde x_2,\dots,\tilde x_{\tilde k-1}}\Big\{\sum_{j=1}^{\tilde k}[\mu(\tilde\Omega_j^{\tilde k})]^{\gamma}\,\Big|\,\tilde k\Big\}
\le \frac{[\mu(\Omega)]^{\gamma}}{\varphi(2)}\Big[\frac{1}{\ln 2}\,\frac{(\tilde k-2)!\,\Gamma(\gamma-1)}{\Gamma(\tilde k+\gamma-2)} + \frac{(\tilde k-2)^{\tilde k-2}(\gamma-1)^{\gamma-1}}{(\tilde k+\gamma-3)^{\tilde k+\gamma-3}}\Big].$$

As we have seen, this bound is of order O(k̃^{−(γ−1)}), in other words, there exists a constant C̃ such that (see Lemma 2.1.2)

$$E_{\tilde x_1,\tilde x_2,\dots,\tilde x_{\tilde k-1}}\Big\{\sum_{j=1}^{\tilde k}[\mu(\tilde\Omega_j^{\tilde k})]^{\gamma}\,\Big|\,\tilde k\Big\} \le \tilde C\,\frac{\tilde k!}{\Gamma(\tilde k+\gamma)}.$$

Hence, by virtue of Lemma 3.2.3, the following bound holds true:

$$M_k^{\gamma} \le \tilde C\,r^{-\gamma}\,\frac{(k-1)!}{\Gamma(k+\gamma-1)}. \qquad (3.21)$$

Bound (3.21) shows that for large k the partition moment M_k^γ is of order O(k^{−(γ−1)}), as in the preceding section. Relying on bound (3.7), we conclude that the variances D_k of the primary estimators in the adaptive method of importance sampling based on sequential bisection are of order O(k^{−2l}).

Remark 3.3.8. As in the preceding section, all reasonings, including Lemma 3.3.7, can be extended to the case of ‘t-sectioning’ of the domains with no essential changes. The order of magnitude of the variances D_k remains the same. It seems likely that the question of which way of partitioning the domains is more advantageous has to be settled by experiment, because, as t increases, the estimators of the variances D_k become better, but, at the same time, the computational labour grows.

3.3.4 Time consumption of the bisection method

Let us briefly discuss the time consumption of adaptive algorithms based on the sequential bisection method. As we have said, it is convenient to store the partition of the domain as a binary tree which grows during the process of computation. To evaluate the values of the piecewise approximation and to adapt the current partition, at each step one must find the subdomain of the partition where the last drawn random point has fallen. It is clear that when sufficiently many points have been drawn and the partition tree has become overgrown, the time is mostly consumed by searching the tree for the subdomain containing the drawn point. The search time in a tree is determined by its depth [70].

In the adaptive method of control variate sampling, all points x_k are uniformly distributed in the domain Ω. It is intuitively clear that in this case the partition tree is, with high probability, close to a balanced one, that is, a tree of the minimum possible depth. Since the depth of a balanced binary tree with k vertices is equal to ⌊log_2 k⌋ + 1 and the partition tree at the kth step of the bisection method has 2k + 1 vertices, the time consumed by searching over this tree is, with high probability, of order O(log_2 k). Hence it follows that the time to carry out n steps of the algorithm is, with high probability, O(n log_2 n), that is, the execution time of the algorithm grows almost linearly with the sample size n.

In the adaptive method of importance sampling, more random points are drawn, and hence the domain is divided more, in those places where the integrand takes large values. So the partition tree may become very long, and the search time may become quite high. But actual computational practice shows that the time needed to carry out n steps of the adaptive algorithm grows as O(n log_2 n) in this case, too. This is obviously due to condition (3.6), which does not allow the partition tree to stretch too much (since some constant fraction of the random points fall into unessential domains as well).

3.4 Sequential method of stratified sampling

In this section, the idea of the method of sequential bisection is utilised to develop a sequential version of the method of stratified sampling (group sampling). As we have said in Section 1.6.6, the known methods of group sampling possessing a high convergence rate have the essential weakness that they are not sequential. The use of the bisection method allows us to develop a similar method of sequential kind. This section has no direct concern with the adaptive integration methods considered above but rather illustrates the possibilities provided by the sequential bisection method.

We assume that in the integration domain Ω, which is thought of as an s-dimensional hyperparallelepiped, n − 1 uniformly distributed random points y_1, y_2, ..., y_{n−1} are drawn, and a partition Ω_1^n, Ω_2^n, ..., Ω_n^n of the domain is built over them with the use of the sequential bisection method. In each subdomain Ω_j^n, independently of each other, we draw a uniformly distributed point x_j and consider the estimators for the integral

$$J_n^1 = \sum_{j=1}^{n}\mu(\Omega_j^n)f(x_j) \qquad (3.22)$$

and

$$J_n^2 = \sum_{j=1}^{n}\mu(\Omega_j^n)\,\frac{f(x_j) + f(x_j')}{2}, \qquad (3.23)$$

where x_j' stands for the point which is symmetric to x_j about the centre of the subdomain Ω_j^n. It is clear that these estimators reduce to (1.22) and (1.23) in the case where the domains are of equal area. It is obvious that these estimators are unbiased. Let us find their variances. It is easily seen that

$$J - J_n^1 = \int_\Omega f(x)\,dx - \sum_{j=1}^{n}\mu(\Omega_j^n)f(x_j) = \sum_{j=1}^{n}\Big(\int_{\Omega_j^n} f(x)\,dx - \mu(\Omega_j^n)f(x_j)\Big)
= \sum_{j=1}^{n}\Big(\int_{\Omega_j^n}(f(x) - f(x_j^0))\,dx - \mu(\Omega_j^n)(f(x_j) - f(x_j^0))\Big),$$

where x_j^0 denotes the centre of the subdomain Ω_j^n. Hence, after elementary transformations, we obtain

$$\operatorname{Var}\{J_n^1\} = E_{y_1,\dots,y_{n-1}}\{E_{x_1,\dots,x_n}\{|J - J_n^1|^2 \mid y_1,\dots,y_{n-1}\}\}
\le E_{y_1,\dots,y_{n-1}}\Big\{\sum_{j=1}^{n}\mu(\Omega_j^n)\int_{\Omega_j^n}|f(x) - f(x_j^0)|^2\,dx\Big\}.$$

Similarly we obtain

$$\operatorname{Var}\{J_n^2\} \le E_{y_1,\dots,y_{n-1}}\Big\{\sum_{j=1}^{n}\mu(\Omega_j^n)\int_{\Omega_j^n}\Big|\frac{f(x) + f(2x_j^0 - x)}{2} - f(x_j^0)\Big|^2\,dx\Big\},$$

where the point 2x_j^0 − x is symmetric to x about the centre of the subdomain Ω_j^n. If either f(x) ∈ C^1(a) or f(x) ∈ H(0, a, 1) (that is, f satisfies the Lipschitz condition with constant a), then

$$|f(x) - f(x_j^0)| \le c_1[\mu(\Omega_j^n)]^{1/s}, \qquad x\in\Omega_j^n,$$

where c_1 depends on a, s, and the maximum oblongness of the subdomains of the partition (see Section 3.1.2). Similarly, if either f(x) ∈ C^2(a) or f(x) ∈ H(1, a, 1), then

$$\Big|\frac{f(x) + f(2x_j^0 - x)}{2} - f(x_j^0)\Big| \le c_2[\mu(\Omega_j^n)]^{2/s}, \qquad x\in\Omega_j^n.$$

Hence it follows that under the corresponding constraints on f(x) the bounds for the variances

$$\operatorname{Var}\{J_n^m\} \le C\,E_{y_1,\dots,y_{n-1}}\Big\{\sum_{j=1}^{n}[\mu(\Omega_j^n)]^{2+2m/s}\Big\}$$

are true, where the constant C does not depend on n. Thus, estimation of the variances reduces to estimation of partition moments of order 2 + 2m/s, whose behaviour in the sequential bisection method was thoroughly studied in Section 3.3. Using the results of that section, we obtain

$$\operatorname{Var}\{J_n^m\} = O(n^{-1-2m/s}). \qquad (3.24)$$

This rate of decrease of the variances coincides with that of the methods considered in Section 1.6.6, as follows immediately from the above bound with μ(Ω_j^n) = 1/n, but the methods suggested in this section can easily be organised sequentially. Indeed, upon drawing a random point y_n, only one term in sums (3.22) and (3.23) must be altered, namely the one corresponding to the subdomain into which the point y_n falls. Calculation of the empirical variance in an effort to estimate the integration error can also be organised sequentially.
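A hedged sketch of estimator (3.22) for a two-dimensional test integrand (our illustration: the partition is rebuilt from scratch by bisection rather than updated term by term, and the integrand and sample size are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(4)

f = lambda x: np.exp(-np.sum(x ** 2))   # test integrand on the unit square (our choice)
n = 64

# Build a partition of [0,1]^2 by sequential bisection over n - 1 uniform points y_j.
boxes = [(np.zeros(2), np.ones(2))]
for _ in range(n - 1):
    y = rng.uniform(size=2)
    for idx, (lo, hi) in enumerate(boxes):
        if np.all(lo <= y) and np.all(y <= hi):
            side = np.argmax(hi - lo)
            mid = 0.5 * (lo[side] + hi[side])
            hi_l, lo_r = hi.copy(), lo.copy()
            hi_l[side] = mid
            lo_r[side] = mid
            boxes[idx] = (lo, hi_l)
            boxes.append((lo_r, hi))
            break

# Estimator (3.22): one uniform point x_j per subdomain, weighted by its measure.
J1 = 0.0
for lo, hi in boxes:
    x = rng.uniform(lo, hi)
    J1 += np.prod(hi - lo) * f(x)
print("stratified estimate:", J1)
```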

3.5 Deterministic construction of partitions

In the sequential bisection method suggested above, the sequence of partitions obtained depends on the random points drawn: the domain is divided precisely at those places where they fall. Because of this, in the adaptive method of control variate sampling the partitioning of the integration domain is uniform, so the partition moments decrease quite quickly. In the adaptive method of importance sampling, though, more points find themselves in the domains where the density is greater, so there the domain is divided more intensively, and in such ‘essential’ subdomains the integrand is approximated more accurately. These properties of an algorithm are of great importance, but, as we will see below, one can arrive at a convergence rate of the same order as in the method of sequential bisection with the use of other ways to construct the partition which do not depend on the random points drawn, that is, are deterministic in some sense.

Since, by virtue of Lemma 3.1.1, the partition moment is minimal if the measures of all subdomains coincide, we suggest to bisect at each step that subdomain of the partition whose measure is maximal at this instant (if there are several such domains, we proceed with the first of them according to some ordering). One of the possible ways to partition a two-dimensional domain is shown in Figure 3.3.

Fig. 3.3. Deterministic construction of partitions.

It is clear that for such a partition there are precisely k subdomains at the kth step of the algorithm, when k − 1 random points have been drawn, and for k = 2^m these subdomains are of equal area with μ(Ω_j^k) = 2^{−m}μ(Ω) = k^{−1}μ(Ω). Hence, the inequality

$$\mu(\Omega_j^k) \le \frac{2}{k}\,\mu(\Omega)$$

holds true for all k > 0, j = 1, ..., k, which immediately yields the bound for the partition moments

$$M_k^{\gamma} \le 2^{\gamma}[\mu(\Omega)]^{\gamma}k^{1-\gamma},$$

whose order coincides with those of (3.19) and (3.21) obtained for the method of sequential bisection.

Thus, deterministic partition procedures of the above kind allow us to obtain the same convergence rates in the adaptive methods of importance sampling and control variate sampling. In this case, the partitions take account neither of the behaviour of the integrand nor of the distribution of the drawn random points, so it is not improbable that we will need to carry out more steps than in the approaches suggested earlier in order to achieve the required accuracy. But the deterministic partition procedure has an important advantage: there exists a simple algorithm to determine the subdomain which contains a given point at a given step. This means that there is no need to store the tree of subdomains, that is, the problems related to shortage of operating memory are thus eliminated and the execution time becomes linear in the number of steps. At the same time, the method remains fully sequential, that is, it admits recursive calculation of the integral and the empirical variance after drawing the next random point.
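A sketch of the rule just described, always halving the largest subdomain along its longest side (a max-heap keyed by measure is one simple way to organise this; the implementation details and names are ours).

```python
import heapq
import numpy as np

def deterministic_partition(n, s=2):
    """Return n axis-aligned boxes of [0,1]^s obtained by always halving the largest box."""
    # heap of (-measure, counter, lower, upper); the counter breaks ties deterministically
    heap = [(-1.0, 0, np.zeros(s), np.ones(s))]
    counter = 1
    while len(heap) < n:
        neg_meas, _, lo, hi = heapq.heappop(heap)   # box of largest measure
        side = np.argmax(hi - lo)                   # its longest side
        mid = 0.5 * (lo[side] + hi[side])
        for new_lo, new_hi in ((lo, np.where(np.arange(s) == side, mid, hi)),
                               (np.where(np.arange(s) == side, mid, lo), hi)):
            heapq.heappush(heap, (-np.prod(new_hi - new_lo), counter, new_lo, new_hi))
            counter += 1
    return [(lo, hi) for _, _, lo, hi in heap]

boxes = deterministic_partition(8)
print([float(np.prod(hi - lo)) for lo, hi in boxes])   # all measures equal 1/8 when n = 2^m
```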

3.6 Conclusion

In this chapter, we considered ways to build piecewise approximations of the integrand and investigated adaptive methods of integration on their base. We suggested the method of sequential bisection of the integration domain, which guarantees the optimal rates of decrease of the variances for the adaptive methods both in the one- and multidimensional cases. On its base we developed sequential versions of the group sampling method which possess the same rates of convergence as their non-sequential analogues. Furthermore, we demonstrated that on the classes of functions H(m, a, λ) one is able to construct adaptive methods with piecewise approximations whose error decreases as O(n^{−(m+λ)/s−1/2}) for any given confidence level. N. S. Bakhvalov proved in [7] that this convergence rate is optimal in order on this class for all indeterministic methods of integration which use the information on the function at O(n) points. Thus, the suggested adaptive methods are optimal in order on the classes H(m, a, λ). S. Heinrich in [27] proposed non-sequential methods guaranteeing the same order but consuming no operating memory (see Section 1.6.6). The methods proposed in this chapter which utilise a deterministic procedure of partitioning the domain also have modest memory requirements but allow for sequential computing.

4 Methods of Adaptive Integration Based on Global Approximation

In this chapter we suggest a way to construct a sequence of approximations of functions represented by rapidly convergent Fourier series, and on its base we elaborate adaptive methods of integration for various classes of integrands. The approximations investigated in this chapter are linear combinations of some number of orthonormalised functions and are global, that is, they are defined by one and the same expression on the whole integration domain. In Section 4.1 we suggest a method to build a global approximation, discuss questions concerning its implementation, and prove theorems on its convergence in the norm L2 and on the convergence of the corresponding adaptive methods of integration. In the next sections the suggested methods of approximation and integration are analysed for some important particular classes of integrands.

4.1 Global approximations

In Chapter 3, we considered piecewise approximations of the integrand over a partition of the integration domain. By imposing constraint (3.1) and making a suitable choice of the way of partitioning, we succeeded in achieving a fast decrease of the variances D_k of the primary estimators in sequential Monte Carlo methods based on such approximations. We separated classes of functions such that condition (3.1) was satisfied and specified a particular way to build a piecewise approximation. But one frequently deals with integrands for which such a sequence of piecewise approximations satisfying condition (3.1) either does not exist or is inefficient. In connection with this, we have to look for other ways to approximate the integrand which guarantee a rapid decrease of the variances D_k. In this section we suggest a way to build global approximations of the integrand, which are of one and the same form in the whole integration domain, and investigate its properties.

While constructing a global approximation, some system of functions is usually fixed, and the approximation is sought for in the form of a linear combination of these functions. The coefficients of the linear combination are chosen in such a way that the approximation is most accurate in some sense. In order to construct a global approximation, orthonormalised systems of functions are frequently used.

Before we start to describe and analyse a way to construct an approximation, we make an important remark. The point is that while using linear combinations of a priori given functions it may happen that the ‘easy’ approximation f_k(x) changes its sign even in the case of an integrand of constant sign. Because of this, we cannot use the adaptive method of importance sampling. So, the presentation below concerns exclusively the adaptive method of control variate sampling.

4.1.1 Approximations by orthonormalised functions: Basic relations

Consider expression (2.27) for the variances of the primary estimators in the adaptive method of control variate sampling. Dropping the second term, we arrive at the bound

$$D_k \le \hat D_k = \mu(\Omega)\,E_{x_1,x_2,\dots,x_{k-1}}\Big\{\int_\Omega |f(x) - f_k(x)|^2\,dx\Big\},$$

which shows that the closer the functions f_k(x) and f(x) are to each other in the norm L2(Ω), the smaller are the variances D_k. Assume that we are given a complete system of functions {φ_i(x)}_{i=1}^∞ orthonormalised in L2(Ω), that is,

$$\int_\Omega \varphi_i(x)\varphi_j(x)\,dx = \delta_{ij} = \begin{cases} 1 & \text{if } i = j,\\ 0 & \text{if } i \ne j. \end{cases}$$

Due to completeness, any function in L2(Ω) can be represented by a Fourier series in this system which converges to it in L2:

$$f(x) = \sum_{i=1}^{\infty} c_i\varphi_i(x), \qquad c_i = \int_\Omega f(x)\varphi_i(x)\,dx. \qquad (4.1)$$



It is well known that the best L2-approximation to f(x) in the form of a linear combination of the functions φ_i(x) is attained in the case where the coefficients of the linear combination coincide with the Fourier coefficients c_i. So it is wise to choose the approximations f_k(x) in the form of parts of the Fourier series for the integrand f(x) whose length (the number of functions φ_i(x) used) grows with k. The rate of decrease of the variances D_k then depends on how quickly Fourier series (4.1) converges for f(x).

It is nevertheless clear that one cannot directly use the above method to build approximations, because the coefficients c_i are integrals unknown to us and their evaluation is equivalent in complexity to the initial problem of integration of f(x). If they were evaluated crudely, then the approximation error would depend on two factors: how long the Fourier series part is and how accurately the Fourier coefficients are calculated. Since, as k grows, the approximations f_k should tend to the integrand, the coefficients of the expansion should be successively adjusted so as to come closer and closer to the true Fourier coefficients. The accuracy of calculation of the coefficients is rigidly bound up with the time consumption of the algorithm. So, we have to find a compromise between the rate of convergence of the approximations and the computational labour. While calculating the Fourier coefficients for the next approximation, it would be reasonable to use the information on the integrand we have gathered before.

The method of building an approximation we will discuss below can be illustrated as follows. We assume that we are at the kth step of the algorithm and have to construct the next approximation f_k(x) in the form

$$f_k(x) = \sum_{i=1}^{r_k}\hat c_{ki}\varphi_i(x), \qquad (4.2)$$

where r_k is the length of the part of the Fourier series for the kth approximation and ĉ_ki are approximate values of the Fourier coefficients c_i. By this time, the random points x_1, x_2, ..., x_{k−1} have been drawn, and all preceding approximations f_1(x), f_2(x), ..., f_{k−1}(x) have been built up. The coefficients ĉ_ki are in essence approximate values of the integrals c_i whose integrands are the products f(x)φ_i(x). For an advantageous usage of the computational information we have accumulated, these integrals should obviously be calculated with the help of the adaptive method of control variate sampling by means of the realisations x_j, j = 1, 2, ..., k − 1, which we have had, and approximations of the integrands of the form f_j(x)φ_i(x). Thus, it is suggested to calculate ĉ_ki by the formulas

$$\hat c_{ki} = \sum_{j=1}^{k-1}\alpha_j^{(k-1)}S_{ij},$$

where

$$S_{ij} = \int_\Omega f_j(x)\varphi_i(x)\,dx + \mu(\Omega)\big[f(x_j)\varphi_i(x_j) - f_j(x_j)\varphi_i(x_j)\big]$$

are the primary estimators of the integral c i of the adaptive method of control variate (k−1) sampling based on the approximation f j (x)φ i (x), and α j are the coefficients of the sequential Monte Carlo scheme. Let us turn to the question concerning the time consumption of such a procedure. Taking into account relation (4.2) for f j (x) and the fact that the functions φ i (x) are orthonormalised, we arrive at the following expression of S ij : {ĉ ji S ij = μ(Ω)[f(xj ) − f j (xj )]φ i (xj ) + { 0 {

if i ≤ r j , if i > r j .

Hence, for i > r k−1 , k−1

(k−1) ĉ ki = ∑ α j μ(Ω)[f(xj ) − f j (xj )]φ i (xj ), j=1

while for i ≤ r k−1 , k−1

k−2

(k−1) (k−1) (k−1) ĉ ki = ∑ α j S ij = α k−1 S i(k−1) + ∑ α j S ij j=1

j=1

k−2

(k−2)

= β k−1 S i(k−1) + (1 − β k−1 ) ∑ α j j=1

S ij

86 | 4 Global Approximation = β k−1 (μ(Ω)[f(xk−1 ) − f k−1 (xk−1 )]φ i (xk−1 ) + ĉ (k−1)i ) + (1 − β k−1 )ĉ (k−1)i = ĉ (k−1)i + β k−1 μ(Ω)[f(xk−1 ) − f k−1 (xk−1 )]φ i (xk−1 ). Thus, ĉ (k−1)i + β k−1 μ(Ω)[f(xk−1 ) − f k−1 (xk−1 )]φ i (xk−1 ) { { { k−1 ĉ ki = { (k−1) { μ(Ω)[f(xj ) − f j (xj )]φ i (xj ) { ∑ αj { j=1

if i ≤ r k−1 , if i > r k−1 ,

(4.3)

that is, the coefficients of the expansion of f k (x) can be recursively calculated. To compute the set of coefficients ĉ ki we need to know the preceding set ĉ (k−1)i and the values of f j (xj ) for j = 1, 2, . . . , k − 1, which requires storage space of order O(k + r k−1 ). It makes sense to store not f j (xj ) themselves but the differences f(xj ) − f j (xj ) because these are precisely what we are using to calculate both the coefficients ĉ ki and the primary estimators of the integral sought for. After evaluating the set of coefficients ĉ ki , we calculate f k (xk ) and I k by rk

f k (xk ) = ∑ ĉ ki φ i (xk ), i=1

rk

I k = ∫ f(x) dx = ∑ ĉ ki ∫ φ i (x) dx. Ω

i=1



Evaluation of the coefficients ĉ ki by formula (4.3) requires O(r k−1 + k(r k − r k−1 )) arithmetical operations, while the calculation of f k (xk ) and I k requires O(k) arithmetical operations. Thus, the total number of arithmetical operations in n steps of the algorithm is of order O(nr n ). Besides, we have to calculate nr n values φ i (xk ) and n values of the integrand f(xk ). If the integrand is ‘hard,’ that is, the cost to calculate its value is much more than that to calculate φ i (xk ), then for not so large n the linear in n component of the time consumption is dominating.

4.1.2 Conditions for algorithm convergence Now let us estimate the error of the suggested algorithm. Comparing (4.1) and (4.2), we see that rk



i=1

i=r k +1

f(x) − f k (x) = ∑ [c i − ĉ ki ]φ i (x) + ∑ c i φ i (x).

(4.4)

Multiplying by the conjugate and integrating over Ω, taking into account the orthonormality and completeness of φ i (x), we arrive at the Parseval equality rk



i=1

i=r k +1

∫ |f(x) − f k (x)|2 dx = ∑ |c i − ĉ ki |2 + ∑ |c i |2 . Ω

Therefore,

rk



i=1

i=r k +1

̂ D k = μ(Ω)( ∑ Ex1 ,x2 ,...,xk−1 {|c i − ĉ ki |2 } + ∑ |c i |2 ).

4.1 Global approximations | 87

Since ĉ ki is the estimator of c i by the adaptive method of control variate sampling in k − 1 steps, we find that k−1

(k−1) 2

Ex1 ,x2 ,...,xk−1 {|c i − ĉ ki |2 } = Var{ĉ ki } = ∑ α j j=1

D ij ,

where D ij = Var{S ij } = Ex1 ,x2 ,...,xj−1 {μ(Ω) ∫ |f(x) − f j (x)|2 |φ i (x)|2 dx − |c i − ĉ ji |2 }. Ω

Thus,

r k k−1



2

(k−1) (1) (2) ̂ D k = μ(Ω)( ∑ ∑ α j D ij + ∑ |c i |2 ) = μ(Ω)(̂ Dk + ̂ D k ), i=1 j=1

i=r k +1

(4.5)

where r k k−1

2

(1) (k−1) ̂ Dk ≤ ∑ ∑ αj Ex1 ,x2 ,...,xj−1 {μ(Ω) ∫ |f(x) − f j (x)|2 |φ i (x)|2 dx} i=1 j=1

k−1

= ∑

j=1



(k−1) 2 αj Ex1 ,x2 ,...,xj−1 {μ(Ω) ∫ |f(x)

rk

− f j (x)|2 ∑ |φ i (x)|2 dx}, i=1





(2) ̂ D k = ∑ |c i |2 . i=r k +1

Theorem 4.1.1. Let the following conditions be satisfied: (i) There exist constants C1 , C2 , γ1 , γ2 > 0 such that for any r > 0 the bounds r

∑ |φ i (x)|2 ≤ C1 r γ1 ,

i=1

x ∈ Ω,



∑ |c i |2 ≤ C2 r−γ2

(4.6) (4.7)

i=r+1

are true. (ii) The coefficients of the sequential Monte Carlo method to calculate ĉ ki are chosen as follows (see relation (2.16) and the remark to Theorem 2.1.3): (n)

α k = α(n, k, γ󸀠 ), where

2γ󸀠 + 1 > γ =

(iii) The values of r k are chosen by the formula

γ2 . γ1

r k = ⌊(λ(k − 1))1/γ1 ⌋, where ⌊x⌋ stands for the integer part of x, and λ > 0 is chosen in such a way that q = C1 μ(Ω)λ

(γ󸀠 + 1)2 < 1. (2γ󸀠 − γ + 1)

88 | 4 Global Approximation Then there exists a constant A > 0 such that Γ(k) ̂ Dk ≤ A Γ(k + γ)

(4.8)

for any k > 0, in other words, D k = O(k−γ ). Proof. Since r k ≤ (λ(k − 1))1/γ1 , for all k and all x ∈ Ω we see that rk

∑ |φ i (x)|2 ≤ C1 λ(k − 1).

i=1

Therefore, k−1

(1) (k−1) 2 󵄨 󵄨2 ̂ D k ≤ C1 λ(k − 1) ∑ α j Ex1 ,x2 ,...,xj−1 {μ(Ω) ∫󵄨󵄨󵄨f(x) − f j (x)󵄨󵄨󵄨 dx} j=1

k−1



(k−1) 2 ̂

= C1 λ(k − 1) ∑ α j

Dj .

j=1

(4.9)

(2) Let us demonstrate that ̂ D k = O(k−γ ). Let

{2 + ⌊1/λ⌋ if 1/λ is not an integer, k0 = { 1 + 1/λ if 1/λ is an integer. { Then for any k ≥ k0 we see that r k ≥ 1, and therefore, −γ (2) ̂ D k ≤ C2 r k 2 .

In addition, for k ≥ k0 + 1 we obtain k − 1 = (1 −

1 1 k0 k, )k ≥ (1 − )k = k k0 + 1 k0 + 1

hence r k ≥ (λ(k − 1))1/γ1 − 1 ≥ (λ

1/γ1 k0 k) −1 k0 + 1

1/γ1 k0 1 − 1/γ }k1/γ1 ) k0 + 1 k 1 1/γ 1 k0 1 ≥ {(λ − 1/γ }k1/γ1 , ) k0 + 1 k0 1

= {(λ

while for k = k0 it is true that r k = 1. Hence there exists a constant B0 > 0 such that r k ≥ B0 k1/γ1 for all k ≥ k0 , that is, −γ (2) ̂ D k ≤ C2 B0 2 k−γ .

4.1 Global approximations | 89

In the case 1 ≤ k < k0 we obtain r k = 0, that is, ∞

γ (2) ̂ D k = ∑ c2i = ∫ f 2 (x) dx ≤ ( ∫ f 2 (x) dx)k0 k−γ . i=1





Thus, there exists a constant B1 > 0 such that (2) ̂ D k ≤ B1 k−γ

for all k, which is the desired result. Finally, there exists a constant B2 > 0 such that Γ(k) B2 (2) ̂ Dk ≤ μ(Ω) Γ(k + γ)

(4.10)

for all k. Substituting bounds (4.9) and (4.10) into expression (4.5), we obtain k−1 Γ(k) (k−1) 2 ̂ ̂ D j + B2 D k ≤ C1 μ(Ω)λ(k − 1) ∑ α j . Γ(k + γ) j=1

(4.11)

Let us prove with the use of induction that the theorem is true for A=̂ D1 Γ(1 + γ) +

B2 . 1−q

For k = 1, bound (4.8) is obvious. We assume that it has been proved for all j = 1, 2, . . . , k − 1 with some k > 1. In this case k−1

(k−1) 2 ̂

∑ αj

j=1

k−1

(k−1) 2

Dj ≤ A ∑ αj j=1

Γ(j) . Γ(j + γ)

Repeating the reasoning of Theorem 2.1.5, we arrive at the bound k−1

(k−1) 2

∑ αj

j=1

Γ(j) (γ󸀠 + 1)2 Γ(k − 1) ≤ , Γ(j + γ) (2γ󸀠 − γ + 1) Γ(k + γ)

hence we obtain (γ󸀠 + 1)2 Γ(k) ̂ D k ≤ (C1 μ(Ω)λ A + B2 ) 󸀠 (2γ − γ + 1) Γ(k + γ) Γ(k) . = (qA + B2 ) Γ(k + γ) But qA + B2 = q ̂ D1 Γ(1 + γ) + B2 ≤̂ D1 Γ(1 + γ) + which proves the theorem.

q + B2 1−q

B2 = A, 1−q

90 | 4 Global Approximation Remark 4.1.2. In the case where the integrand f(x) is a finite piece of a Fourier series, the convergence rate of the suggested method depends only on the choice of the coefficients of the sequential scheme and can be as high as we wish. This immediately follows from Theorem 4.1.1, since conditions (4.6) and (4.7) are satisfied for arbitrary positive values of the constants γ1 and γ2 . If, in addition, it is known which functions φ i (x) enter into the expansion of f(x) (without loss of generality, these are the first functions of the system), then in the adaptive algorithm we can set r k = m for all k, where m is the amount of those functions φ i . In this case, the time consumed by n steps of the algorithm is of order O(n) rather than O(nr n ) as in the initial case, and the convergence rate is again determined by the choice of the coefficients and can be as high as we wish. Theorem 4.1.1 can be strengthened if instead of (4.7) we use the more strong condition: as r → ∞, ∞

∑ |c i |2 = o(r−γ2 ).

(4.12)

i=r+1

Theorem 4.1.3. Let the conditions of Theorem 4.1.1 be satisfied, but let condition (4.12) be satisfied rather than (4.7). Then Γ(k) ̂ D k = o( ). Γ(k + γ) D k we obtain a bound similar Proof. Repeating the reasoning of Theorem 4.1.1, for ̂ to (4.11): k−1 Γ(k) (k−1) 2 ̂ ̂ D k ≤ C1 μ(Ω)λ(k − 1) ∑ α j Dj + Bk , (4.13) Γ(k + γ) j=1 where B k = o(1) → 0 as k → ∞. We introduce Ak = ̂ Dk

Γ(k + γ) Γ(k)

and show that they tend to zero as k grows. By virtue of (4.13), A k ≤ C1 μ(Ω)λ(k − 1) = C1 μ(Ω)λ

Γ(k + γ) k−1 (k−1) 2 ̂ Dj + Bk ∑α Γ(k) j=1 j

Γ(k + γ) k−1 (k−1) 2 ̂ Dj + Bk . ∑α Γ(k − 1) j=1 j

In order to estimate the sum, we repeat the reasoning used to prove Theorem 2.1.7 and thus arrive at the bound A k ≤ C1 μ(Ω)λ = q[

(γ󸀠 + 1)2 k−1 ∑ α(k − 1, j, 2γ󸀠 − γ)A k + B k 2γ󸀠 − γ + 1 j=1

A⌊√k⌋(2γ󸀠 − γ + 1) + max A j ] + B k , k − 1 + 2γ󸀠 − γ j>⌊√k⌋

k > 1,

4.1 Global approximations | 91

where A k ≤ A (the existence of A is proved in the previous theorem). The first and third terms tend to zero as k grows. Hence, for an arbitrary ε > 0 there exists an integer m > 0 such that for all k > m, Aq⌊√k⌋(2γ󸀠 − γ + 1) ε + B k < (1 − q). 󸀠 k − 1 + 2γ − γ 2 Then for all k > m,

A k ≤ q max A j + j>⌊√k⌋

Therefore,

max A j ≤ qA + j>m

ε (1 − q). 2

(4.14)

ε (1 − q). 2

Using (4.14) for k > m2 , we obtain ε ε (1 − q) ≤ q max A j + (1 − q) 2 2 j>m ε ε ε ≤ q(qA + (1 − q)) + (1 − q) = q2 A + (1 − q2 ), 2 2 2

A k ≤ q max A j + j>⌊√k⌋

hence

max A j ≤ q2 A + j>m2

ε (1 − q2 ). 2

Similarly, using (4.14) for k > m4 , we obtain ε ε (1 − q) ≤ q max A j + (1 − q) 2 2 j>m2 ε ε ε ≤ q(q2 A + (1 − q2 )) + (1 − q) = q3 A + (1 − q3 ), 2 2 2

A k ≤ q max A j + j>⌊√k⌋

t

and so on. Finally, we conclude that for all k > m2 , t ≥ 0, A k ≤ q t+1 A +

ε (1 − q t+1 ). 2

By virtue of the condition q < 1, there exists T ≥ 0 such that for all t ≥ T it is true that T 2q t+1 A < ε. Therefore, A k < ε for all k > m2 , which is the desired result. Remark 4.1.4. In Theorems 4.1.1 and 4.1.3, we obtain, in fact, orders of rate of convergence of the approximations f k (x) to the function f(x) in the norm L2 (Ω), because ̂ D k = E{‖f − f k ‖L2 }. Remark 4.1.5. Repeating the proofs of Theorems 4.1.1 and 4.1.3, we obtain similar assertions for the case where, instead of conditions (4.7) and (4.12), the estimates ∞

∑ |c i |2 = O(r−γ2 lnδ r)

i=r+1

(4.15)

92 | 4 Global Approximation and



∑ |c i |2 = o(r−γ2 lnδ r)

i=r+1

(4.16)

hold true, respectively, as r → ∞, with some δ > 0. In this case the variances are D k = O(k−γ lnδ r) and D k = o(k−γ lnδ r), respectively. Thus, the results of this section can be united in a single summarising assertion as follows. Theorem 4.1.6. Let there exist constants C1 , C2 , γ1 , γ2 , δ > 0 such that for any r > 0 estimates (4.6) and either (4.15) or (4.16) hold true, and let conditions (ii) and (iii) of Theorem 4.1.1 be satisfied. Then the variances of the primary estimators in the adaptive method of control variate sampling with global approximations are of order either D k = O(k−γ lnδ r) or

D k = o(k−γ lnδ r).

Thus, under the conditions of Theorem 4.1.6, we find ourselves exactly under the conditions of Theorem 2.1.8 on convergence of the sequential Monte Carlo method. So, under an appropriate choice of the coefficients of the sequential scheme to evaluate the initial integral, the variance of their estimators in the method we suggested is of order O(n−γ−1 ) or o(n−γ−1 ), respectively. We emphasise that in order to evaluate the initial integral one needs not to choose the coefficients of the sequential scheme to be set equal to the coefficients of the expansion ĉ ki (that is, with different γ󸀠 ). But the choice of identical coefficients under the condition 2γ󸀠 + 1 > γ also leads us to the result just formulated.

4.2 Adaptive integration over the class S p This section is devoted to constructing and analysing the adaptive method of control variate sampling based on global approximations applied to integrating functions of the classes S p with rapidly converging Fourier–Haar series.

4.2.1 Haar system of functions and univariate classes of functions S p The Haar system of functions plays an important part in the modern numerical analysis, in particular in problems concerning convergence of quadrature processes. A thorough presentation of the theory of Haar functions can be found in Sobol’s monograph [60]. The Haar functions χ i (x) form a complete orthonormalised in L2 system on the interval [0, 1]. The first function of the Haar system, χ1 (x) ≡ 1, and the others are grouped and numbered by two indices m and j, where m = 1, 2, . . . , ∞, j = 1, 2, . . . , 2m−1 , and the relation between the indices is of the form i = 2m−1 + j. In order to define the Haar

4.2 Adaptive integration over the class S p

| 93

functions, we introduce the notion of a binary interval. A binary interval is an interval of the form l mj = [

j−1 j , ), 2m−1 2m−1

m = 1, 2, . . . , ∞, j = 1, 2, . . . , 2m−1 .

For j = 2m−1 , the interval l mj is considered closed at both sides. Let l−mj and l+mj denote the left-hand and right-hand halves of the interval (which are also left-closed and right-open), and define a Haar function as follows: m−1

2 2 { { { m−1 χ mj (x) = {−2 2 { { { 0

if x ∈ l−mj , if x ∈ l+mj ,

(4.17)

if x ∉ l mj .

In [60], Haar functions are utilised to analyse the convergence of deterministic quadrature formulas on various classes of functions and to construct new integration methods. The most deep results have been obtained for the classes S p of functions with rapidly converging Fourier–Haar series. By the Fourier–Haar series of a function f(x) ∈ L2 ([0, 1]) we mean the Fourier series in terms of Haar functions: ∞

∞ 2m−1

i=1

m=1 j=1

f(x) = ∑ c i χ i (x) = c1 + ∑ ∑ c mj χ mj (x), where

1

1

c i = ∫ f(x)χ i (x) dx,

c mj = ∫ f(x)χ mj (x) dx.

0

0

By virtue of completeness of the system of Haar functions, this series converges in L2 ([0, 1]) to the function f(x). We say that a function f(x) belongs to the class S p (A), p ≥ 1, if ∞

∑ 2

m=1

m−1 2

2m−1

1/p

{ ∑ |c mj |p } j=1

≤ A.

(4.18)

4.2.2 Adaptive integration over the class S p : One-dimensional case Let us evaluate the integral

1

J = ∫ f(x) dx 0

of a function f(x) ∈ S p (A). As the basis for construction of global approximations in the adaptive method of control variate sampling, we take the system of Haar functions χ i (x). In this case, rk

f k (x) = ∑ ĉ ki χ i (x), i=1

94 | 4 Global Approximation where c ki are calculated by the recurrence formulas ĉ (k−1)i + β k−1 [f(x k−1 ) − f k−1 (x k−1 )]χ i (x k−1 ) if i ≤ r j , { { { ĉ ki = {k−1 (k−1) { if i > r j . [f(x j ) − f j (x j )]χ i (x j ) { ∑ αj { j=1 Recalling the definition of Haar functions (4.17), we easily see that 1

{1 ∫ χ i (x) dx = { 0 0 { hence

if i = 1, if i > 1,

1

I k = ∫ f k (x) dx = ĉ k1 . 0

Let us look how the conditions of Theorems 4.1.1 and 4.1.3 stand in this case. It is easily seen that the first n groups of Haar functions together with the function χ1 contain precisely 2n functions, and the identity 2m−1

∑ χ2mj (x) = 2m−1

(4.19)

j=1

holds true for any m. Hence, if 2n−1 < r ≤ 2n , then r



i=1

χ2i (x)

2n

≤∑

i=1

χ2i (x)

n 2m−1

n

= 1 + ∑ ∑ χ2mj (x) = 1 + ∑ 2m−1 = 2n < 2r. m=1 j=1

m=1

(4.20)

Thus, Haar functions satisfy condition (4.6) with C1 = 2, γ1 = 1. Let us estimate the series ∞

∑ |c i |2 .

i=r+1

Let p ≥ 2. Using Hölder’s inequality, for 2n ≤ r < 2n+1 we obtain the bound ∞



i=r+1

i=2n +1

2m−1



∑ |c i |2 ≤ ∑ |c i |2 = ∑

∑ |c mj |2

m=n+1 j=1



2m−1

m=n+1

j=1

2/p

≤ ∑ { ∑ |c mj |p } ∞

≤ ∑ 2

m−1

m=n+1 ∞

≤( ∑ 2 0. In view of (4.20), the conditions on the choice of the constant λ can be represented as follows: 2λ

(γ󸀠 + 1)2 < 1, (2γ󸀠 − γ + 1)

λ
1 (see Section 4.2.3), and the class E2s (see Section 4.3.1). So, virtually all adaptive methods suggested in our monograph can be applied to it. The integral of f(x) is easily evaluated: J = ∫ f(x) dx = 1 Ks

for all a k . At the same time, the behaviour of f(x) in the integration domain depends essentially on the set of parameters a k . While carrying out the numerical experiments, we take four sets of the parameters: https://doi.org/10.1515/9783110554632-005

112 | 5 Numerical Experiments

2

✻ a = 10

2

1

0



a=1

2

1

1





a = 0.1

1

0

1



0

1



Fig. 5.1. Behaviour of the function h(x) for different values of a.

(i) a1 = a2 = ⋅ ⋅ ⋅ = a s = 0.01; (ii) a1 = a2 = ⋅ ⋅ ⋅ = a s = 1; (iii) a k = k, 1 ≤ k ≤ s; (iv) a k = k2 , 1 ≤ k ≤ s. From the viewpoint of numerical integration, the behaviour of the function f(x) differs essentially in these four cases. Consider its greatest value s

2 + ak , 1 + ak k=1

f ∗ = sup f(x) = ∏ x∈K s

which is attained at each of the vertices of the cube K s . It is easily seen that in the first case f ∗ = (2.01/1.01)s grows very fast with s. What this means is that for s large enough the cube contains domains with large gradients of the function f(x), where numerical integration methods usually behave badly. In the second case, the maximum value f ∗ = 1.5s also grows exponentially, but slower. In the third case, f ∗ = 1 + 2s is linear with respect to the dimensionality, and in the fourth case the value of f ∗ is bounded for all s. Because of this, one should expect that numerical integration methods converge better as we go from the first case to the fourth.

5.1.2 The second problem The second test problem is taken from [15, 16] and consists of integration over the unit cube K s , s = 5, of the function f(x) = exp{−xT Ax}, where A is a positive definite matrix of the form 13/16 0 A = (3√2/16 0 3/16

0 13/16 0 3√3/16 0

3√2/16 0 5/8 0 −3√2/16

0 √ 3 3/16 0 7/16 0

3/16 0 −3√2/16) . 0 13/16

5.2 Results of experiments | 113

Thus, the integrand, to within a factor, is equal to the density of the quintamensional normal law with parameters depending on the elements of the matrix A. The integral sought for is, to within the same factor, equal to the probability that the corresponding random variable falls into the unit cube. In the mentioned studies, this example illustrated the application of random interpolation quadrature formulas.

5.2 Results of experiments 5.2.1 The first test problem In the course of experiments, calculations were carried out for all four sets of parameters a k in the first test problem for s = 1, 3, 6. In these examples, we compare most of the methods implemented in our program: 1. trapezoidal cubature formula Trap; 2. Sobol’s LPτ -sequences method LP; 3. elementary Monte Carlo method ElMC; 4. elementary importance sampling method ElIS; 5. elementary control variate sampling method ElCVS; 6. adaptive control variate sampling method with piecewise approximations AdCVS-P; 7. generalised control variate sampling method with piecewise approximations MAdCVS-P; 8. adaptive importance sampling method with piecewise approximations AdIS-P; 9. adaptive control variate sampling method with global trigonometric approximations AdCVS-Trig. In the elementary methods of control variate sampling and importance sampling, the ‘easy’ approximation was chosen in the form s π 󵄨󵄨 x − 1 󵄨󵄨 + 2 − 󵄨 k 2 󵄨󵄨 g(x) = ∏ 󵄨 1 + ak k=1

π 2

+ ak

;

each factor in g(x) is a piecewise linear three-point interpolation of the corresponding factors in f(x), and in adaptive methods with piecewise approximations we use the piecewise constant (AdCVS-P0, AdIS-P0, MAdCVS-P), the piecewise linear (AdCVS-P1, AdIS-P1), and the piecewise quadratic (AdCVS-P2, AdIS-P2) approximations. Although, along with the elementary Monte Carlo method, for comparison purpose we choose the elementary methods of control variate sampling and importance sampling, it seems to be most correct to compare the adaptive methods precisely with the elementary Monte Carlo method because the methods of control variate sampling and importance sampling use a priori information about the behaviour of the integrand, while the adaptive methods require no such information.

114 | 5 Numerical Experiments Tab. 5.1. Time and nodes number for a given accuracy. s = 1, a k = 0.01 Method

Trap LP ElMC ElCVS ElIS AdCVS-P0 AdCVS-P1 AdCVS-P2 MAdCVS-P AdIS-P0 AdIS-P1 AdIS-P2 AdCVS-Trig

ε=

10−3

s = 3, a k = 0.01

ε=

n

t

29 1520 2.1⋅106 88963 1.5⋅105 455 39 15 39 510 43 15 260

1.9⋅10−5

0.110 162.4 6.480 13.73 0.073 0.007 0.005 0.007 0.110 0.011 0.007 0.280

10−4

ε = 10−2

n

t

92 23900 2.1⋅108 8.9⋅106 1.5⋅107 2128 99 33 101 2232 109 25 988

6.4⋅10−5

1.490 16380 633.6 1382 0.330 0.017 0.011 0.017 0.507 0.028 0.012 3.190

n 4096 21200 80065 13813 4554 3621 464 79 487 3214 418 62 1701

ε = 10−3 t

n

t

0.007 1.820 7.512 1.080 0.770 0.564 0.120 0.083 0.091 0.605 0.130 0.120 3.242

1.3⋅105

0.198 79.97 625.9 107.7 60.53 10.46 0.920 0.940 0.720 11.47 1.412 1.253 107.7

8.6⋅105 7.7⋅106 1.4⋅106 4.6⋅105 61555 3680 878 3675 53715 3185 743 8602

Tab. 5.2. Time and nodes number for a given accuracy. s = 6, a k = 0.01 Method

ε = 10−1 n

Trap LP ElMC ElCVS ElIS AdCVS-P0 AdCVS-P1 AdCVS-P2 MAdCVS-P AdIS-P0 AdIS-P1 AdIS-P2 AdCVS-Trig

2.6⋅105

1.9⋅105 2100 3480 97 852 220 198 348 1190 420 218 2450

s = 6, a k = 1

ε = 10−2 t

n

1.480 18.13 0.170 0.331 0.043 0.180 0.103 0.660 0.110 0.353 0.681 4.15 5.290

1.5⋅108

2.2⋅106 2.2⋅105 3.5⋅105 9415 63975 15696 5229 15065 42219 10733 5361 42055

ε = 10−2 t

n

634.5 229.9 16.15 27.14 3.790 13.15 7.210 17.31 4.930 12.26 4.560 41.70 1017

1.7⋅108

1.8⋅105 36324 12874 1719 14050 1689 690 2261 11406 2044 897 8103

ε = 10−3 t

n

t

56.25 16.92 3.963 1.041 0.387 2.860 0.684 1.830 0.605 3.742 1.460 5.430 58.88

1.6⋅1010

60375 222.7 306.0 99.91 29.72 84.49 16.02 40.23 9.460 105.6 29.86 80.10 4023

2.1⋅106 3.6⋅106 1.3⋅106 1.7⋅105 4.6⋅105 39951 12264 35511 4.2⋅105 37331 11265 67020

Tables 5.1–5.3 are given, which contain data on time consumption and the number of nodes required to attain the required accuracy with the use of the above methods. The accuracy of statistical methods is controlled by the empirical variance on the basis of the three-sigma rule. The correctness of its use for the suggested adaptive methods is not proved, but in all calculations the actual error was in the limits determined by this rule.

5.2 Results of experiments | 115 Tab. 5.3. Time and nodes number for a given accuracy. s = 6, a k = k Method

ε = 10−1 n

Trap LP ElMC ElCVS ElIS AdCVS-P0 AdCVS-P1 AdCVS-P2 MAdCVS-P AdIS-P0 AdIS-P1 AdIS-P2 AdCVS-Trig

3⋅106

4600 10854 1057 622 3892 561 198 512 3876 482 177 4323

s = 6, a k = k 2

ε = 10−2 t

n

14.83 0.440 0.920 0.084 0.050 0.751 0.253 0.640 0.144 1.015 0.250 1.103 15.71

2.6⋅109

2.1⋅105 1.1⋅106 1.1⋅105 55390 1.3⋅105 10132 2913 9717 1.2⋅105 8913 2808 26459

ε = 10−2 t

n

9914 19.89 94.86 8.400 6.760 24.26 4.480 9.390 2.680 32.37 4.710 17.46 686.0

5.3⋅105

840 6552 249 290 1897 168 42 164 2110 292 39 3856

ε = 10−3 t

n

t

3.07 0.081 0.543 0.023 0.035 0.390 0.057 0.130 0.043 0.570 0.250 0.233 12.90

4.8⋅108

1862 1.210 55.70 2.360 3.740 14.86 1.000 2.490 0.710 18.86 2.470 4.590 374.3

12800 6.5⋅105 30069 34075 72471 2971 834 2747 70294 2870 771 19794

All calculations were performed on a computer equipped with a 450 MHz Intel® Pentium® II processor and 32 MB RAM. (Intel and Pentium are trademarks of Intel Corporation in the U.S. and/or other countries.) The time is given in seconds. For the sake of illustration, the time consumption is pictured in Figures 5.2–5.7. Finally, in Figure 5.8 we present a typical example how an adaptive method (AdCVS-P1) converges in the problem with s = 6, a k = 0.01. Dependencies of |J − J n | ⋅ n5/6 and σ̂ n ⋅ n5/6 on n are shown in the interval n = [1, 100 000]; here σ̂ n is the empirical mean square error. The analysis of the given results allows us to make a series of conclusions. First, the rate of actual convergence of the suggested adaptive methods fits well the obtained theoretical estimates. This is well seen in the picture where the line corresponding to the mean square error lies on the horizontal asymptote (the theoretical convergence rate of the method is O(n−5/6 )). In addition, from the picture we see that the integration error always lies in the 3σ n limits, and for n > 20 000 it does not exceed 2σ n . Second, as it must be, the efficiency of adaptive methods as compared with the cubature formulas rises steeply with the dimensionality, and as compared with the Monte Carlo and quasi-Monte Carlo ones it decreases gradually (for the adaptive methods with piecewise approximations). For s = 1 the cubature formulas work much faster than the adaptive methods despite the difference in convergence rates (the error in the trapezoid method decreases as O(n−2 ), and that of the adaptive method with piecewise quadratic approximations does as O(n−3.5 )). For s = 3 the difference becomes not so perceptible, and for s = 6 the cubature formulas behave much worse than the statistical methods at all. At the same time, the considerable superiority of the adaptive methods with

116 | 5 Numerical Experiments

1.0E+05 1.0E+04 1.0E+03 1.0E+02 1.0E+01 1.0E+00 1.0E 01 1.0E 02 1.0E 03 1.0E 04

Ad

CV S

Ad I

S

Tr ig

P2

P1

Ad

IS

IS Ad

Ad

CV S

CV S

P0

P

P2

eps = 1e 3

M

Ad

Ad

CV S

P1

P0

IS

CV S

Ad

EL

M

C

eps = 1e 4 EL CV S

EL

LP

TR

AP

1.0E 05

Fig. 5.2. Time consumption. One-dimensional problem, a k = 0.01.

1.0E+03

1.0E+02

1.0E+01

1.0E+00

1.0E 01 1.0E 02

Tr ig

P2 Ad

CV S

IS Ad

IS

P1

eps= 1e 2 Ad

P0 IS Ad

M

Ad

CV S

P

P2 CV S

Ad

CV S

P1

P0 Ad

CV S

EL

IS

eps= 1e 3

Ad

EL CV S

C M

LP

EL

TR

AP

1.0E 03

Fig. 5.3. Time consumption. Three-dimensional problem, a k = 0.01.

5.2 Results of experiments | 117

1.0E+04

1.0E+03

1.0E+02

1.0E+01

1.0E+00 1.0E 01

Tr ig

P2 Ad

CV S

P1 IS Ad

Ad

Ad M

Ad IS

P0

eps = 1e 1

IS

CV S

P

P2

P1

CV S Ad

P0

Ad CV S

EL

IS

eps = 1e 2 Ad CV S

C

EL CV S

EL M

LP

TR

AP

1.0E 02

Fig. 5.4. Time consumption. Six-dimensional problem, a k = 0.01. 1.0E+05

1.0E+04

1.0E+03

1.0E+02 1.0E+01 1.0E+00

eps = 1e 3

Tr ig

P2

CV S Ad

Ad IS

P1 IS

Ad

dI S KA

Ad

CV S

P0

P

P2

eps = 1e 2

M

P1

Ad CV S

CV S Ad

CV S

P0

S IE LI

Ad

EL CV S

C

LP

EL M

TR

AP

1.0E 01

Fig. 5.5. Time consumption. Six-dimensional problem, a k = 1.

118 | 5 Numerical Experiments

1.0E+04

1.0E+03

1.0E+02

1.0E+01

1.0E+00 1.0E 01

Tr ig

P2 IS Ad

CV S

P1 Ad

IS

Ad IS

eps= 1e 2 Ad

P0

P

P2

M Ad CV S

Ad CV S

CV S

P1

P0 Ad

Ad

CV S

EL

IS

eps= 1e 3 ˉI

C M EL

EL CV S

LP

TR

AP

1.0E 02

Fig. 5.6. Time consumption. Six-dimensional problem, a k = k. 1.0E+04

1.0E+03

1.0E+02

1.0E+01

1.0E+00 1.0E 01

eps = 1e 3

Tr ig

IS

P2

Ad CV S

Ad

P1

S KA dI

Fig. 5.7. Time consumption. Six-dimensional problem, a k = k 2 .

Ad IS

P0

P

eps = 1e 2

VS Ad C M

Ad

CV S

P2

P1

P0

CV S Ad

VS

IS EL

Ad C

EL CV S

C M EL

LP

TR

AP

1.0E 02

5.2 Results of experiments | 119

12

Eps Sigma

10 8 6 4 2 0

0

10000

20000

30000

40000

50000 n

60000

70000

80000

90000 100000

Fig. 5.8. Dependence of eps = |J − J n | ⋅ n5/6 , sigma = σ̂ n ⋅ n5/6 on n (AdCVS-P1, s = 6).

piecewise approximations over the elementary Monte Carlo methods and the method of LPτ -sequences, which takes place for s = 1, decreases gradually as the dimensionality grows. This is due to the fact that the convergence rate of these methods decreases to O(n−1/2 ) as s grows, whereas the convergence rates of the traditional Monte Carlo and quasi-Monte Carlo methods do not depend on the dimensionality (or depend very weakly). Analysing the diagrams, we easily observe that for s = 6 under not-so-high requirement for accuracy the Monte Carlo methods and their modifications are frequently more efficient than the adaptive algorithms. This arises from the fact that the surplus to the convergence in the adaptive methods is not so large, and in the Monte Carlo methods less labour is required to calculate the primary estimators and none is needed to carry out the adaptation. But as the requirements for accuracy grow, the situation changes: because of a higher convergence rate, the adaptive methods require significantly less nodes and thus become more preferable. The not-so-high efficiency of the adaptive method with global trigonometric approximations have engaged our attention. It is due to the quadratic increase of the execution time (see Section 4.1). In addition, as the accuracy grows ten times, the number of nodes increases more than √10 times, that is, the theoretical convergence rate O(n−2 lns−1 n) has not manifested itself for these n. Nevertheless, the number of nodes grows slow enough, so this gives grounds to expect that the adaptive trigonometric approximation will be efficient for complicated functions which demand significantly more computations per single value.

120 | 5 Numerical Experiments Finally, as it is expected, the convergence of all integration methods becomes better from the first to the fourth sets of parameter (that is, as the smoothness of the integrand increases). We observe that in the first two sets (a k = 0.01 and a k = 1) for s = 6 Sobol’s method of LPτ -sequences does not achieve the theoretical convergence rate O(n−1+δ ), which arises from the quite large value of the total variation of the integrand, so the asymptotics has not had time to manifest itself. This fact for analogous integrands was observed in [58]. Accepting some advantages of the cubature formulas (for small dimensionalities) and quasi-Monte Carlo methods (for great dimensionalities) over the adaptive methods, we should remember that the adaptive methods are of sequential kind and provide us with a convenient mechanism to estimate the error at each step of the algorithm which enables us to stop the computation process as soon as the required accuracy has been attained. The quasi-Monte Carlo methods, being of sequential nature, are not feasible for a posteriori estimation of the error, though. Concerning the cubature formulas, the known techniques of sequential evaluation of the integral and control over the accuracy during the computation involve severe growth of the number of nodes. We stress again that one or another method of approximation of the integrand underlies all suggested adaptive methods. Hence, all adaptive methods may find use not only in integration but in approximation problems as well, which is not true for the other methods presented in Tables 5.1–5.3. Summarising the aforesaid, we formulate the following recommendations for application of the suggested adaptive algorithms: ∙ In problems of small dimensionality (s = 1, 2, 3, 4), the adaptive methods with piecewise approximations compare favourably both with the deterministic methods and various versions of the Monte Carlo methods. ∙ In problems of dimensionality s = 5, 6, . . . , 15, the adaptive methods with piecewise approximations are efficient under rather high requirements for the accuracy of evaluation of the integral, particularly in the case of absence of a priori information about the integrand. The use of adaptive methods in approximation problems would also be of advantage. ∙ The adaptive methods with global approximations may find a use in problems of approximation of functions of several variables where the cost of calculation of each value is high. ∙ In problems of large dimensionality (s > 15), the adaptive methods with piecewise approximations have probable advantage over the elementary Monte Carlo method and its versions only in the case of highly smooth integrands, high order of approximation, and high requirements for accuracy. In such problems, while constructing an adaptive algorithm, the main attention must be paid to the choice of the technique of approximation of the integrand, namely, finding a compromise between its simplicity and accuracy. Also of use may be the algorithm modifications presented in Section 2.2.5 and the generalised adaptive methods (see Sections 2.2.4 and 3.1.4).

5.2 Results of experiments | 121

5.2.2 The second test problem Let us evaluate the integral J = ∫ exp{−xT Ax} dx, K5

where A is a given positive definite matrix (see Section 5.1.2). In [15, 16], stochastic interpolation quadrature formulas were suggested to calculate this integral on the base of the functions φ1 (x) ≡ 1,

φ2 (x) = xT Ax − c1 ,

φ3 (x) = (xT Ax)2 − c2 , . . .

φ n (x) = (xT Ax)m − c m−1 , where the constants c i are determined from the condition of orthogonality of the functions φ i+1 (x) to the function φ1 (x). It was observed in these studies that, as the number m of functions grows, the variance of a stochastic quadrature formula decreases very rapidly. Namely, the following results were obtained in [15]: m

σ

1 2 3 4

0.166 0.045 0.0054 0.0004

Here the case m = 1 corresponds to the elementary Monte Carlo method, and σ stands for the single trial variance. Thus, the variance of the quadrature sum while 1 1 one uses one, two, and three additional functions becomes, respectively, 3.7 , 31.3 , and 1 of the variance of the elementary Monte Carlo method. It is clear that the time 415 required to attain a given accuracy by these methods differs not so radically, since the increase of the number of functions used makes both the quadrature formula itself and the joint density of distribution of its nodes more complex (hence the algorithm of their simulation becomes more complicated). Nevertheless, such information bears witness to high efficiency of the method of stochastic quadrature formulas applied to this problem. In order to compare these with the adaptive algorithms suggested in our study, we have carried out a series of calculations to evaluate the initial integral by the elementary Monte Carlo method and all adaptive methods with piecewise approximations considered in the preceding section (AdCVS-P, MAdCVS-P, and AdIS-P). The results are gathered in the diagram given in Figure 5.9, which represents the time needed to attain a given level of empirical mean square error σ̂ n . Along with the column corresponding to the elementary Monte Carlo method, there are columns named SQF2, SQF3, and SQF4 which correspond to the stochastic quadrature formulas with m = 2, 3, 4, respectively. We emphasise that we carried out no experiments with these methods because

122 | 5 Numerical Experiments

1.E+07 1.E+06 1.E+05 1.E+04 1.E+03 1.E+02 1.E+01 1.E+00 sigma = 1e 6

P2 Ad lS

P1 Ad lS

CV S Ad M

Ad lS

P

P2 CV S

Ad

CV S

P0

sigma = 1e 4

P1

P0 Ad

CV S

Ad

SQ F3

2 SQ F

F1

sigma = 1e 5 SQ

EL

M

C

1.E 01

Fig. 5.9. Time consumption for different accuracy levels.

of the absence of corresponding software modules and assumed that a single trial there took the same time as in the Monte Carlo method. Since the variance of stochastic quadrature formulas decreases as O(n−1 ), the number of trials required to attain the given variance is easily found provided that the single-trial variance is known. Thus, the data in columns SQF2, SQF3, SQF4 are somewhat less than in actual practice. The analysis of the diagram shows that the adaptive algorithms are well compared with the stochastic quadrature formulas in this problem, which is well seen in the case of high requirements for accuracy.

6 Adaptive Importance Sampling Method Based on Piecewise Constant Approximation 6.1 Introduction In this chapter, we study the efficiency of the adaptive method of importance sampling in its simplest version where piecewise constant approximations are utilised, and demonstrate its applications. We assume that the approximate density taken as the ‘optimal’ one is proportional to the absolute values of the integrand at the centres of the rectangles formed in the bisection process. We point out the fact that in many applied problems integrals arise such that in order to evaluate them one has to generate a quite large number of random points, so the bisection process yields a great amount of hyper-rectangles which partition the integration domain. This may overflow the memory limits of the computer. In addition, if a partition is too refined, the computation becomes very slow. In this connection, in Section 6.3, we suggest a technique which is well worth using in the case where the number of steps of the bisection process appears to be limited. In Section 6.2, we study the efficiency of the adaptive importance sampling (AIS) method and the sequential importance sampling (SIS) method in the sense of quality of the ‘optimal’ densities obtained. We state that for functions with large gradients, most notably in the multidimensional case, the adaptive method is well suited. In Section 6.4, the adaptive method is applied to the planar trilaterational problem (navigation by distances to pin-point targets). We see that, while solving navigation problems involving Bayesian estimation, the following difficulty arises: since, as a rule, a posteriori density possesses sharp maxima at a small amount of points while being close to zero in other parts of the domain, the generated grid nodes may appear to be distributed densely enough under a half but not the whole peak. To overcome this, we introduce a small constant added to the absolute value of the integrand. Another way to solve this problem is the use of those piecewise constant approximations which preserve more information about the behaviour of the integrand. For example, as the local constants one can take not the values of the function at the rectangle centres but the arithmetical means of its values at the rectangle corners. Chapter 6 is written in co-authorship with N. A. Berkovsky.

6.2 Investigation of efficiency of the adaptive importance sampling method 6.2.1 Adaptive and sequential importance sampling schemes For the sake of clarity, we give a detailed adaptive importance sampling algorithm used in this section. Let K be the unit cube in ℝs . We have to evaluate the multidimensional https://doi.org/10.1515/9783110554632-006

124 | 6 Adaptive Importance Sampling integral I = ∫ f(x) dx. K

The function f(x) must, generally speaking, be integrable on the cube K. 1. Two random points x1 , x2 ∈ K uniformly distributed in K are drawn. We calculate the primary statistical estimators of the integral I by the formulas S1 = f(x1 ), We set

2.

S2 = f(x2 ).

p1 (x) = p2 (x) = 1,

x ∈ K.

We store the points x1 , x2 ∈ K. The first bisection: draw the hyperplane x1 =

x11 + x12 , 2

that is, divide the cube into two parts K1 and K2 along the first axis. Set the counter of the steps n equal to 3. Next, we start the cycle 3–11 with the halt criterion in item 9. Let n = k and let K1 , . . . , K k−1 be the partition of the cube obtained by bisection at the preceding k − 1 iterations. (For n = 3 these are K1 and K2 obtained above.) 3. Construct the piecewise constant approximation of f(x) by the rule ̃ = f(sm ), f(x) 4.

x ∈ Km ,

m = 1, 2, . . . , k − 1,

where sm stands for the centre of the multidimensional rectangle K m . ̃ over the cube: Calculate the normalising coefficient I, which is the integral of f(x) k−1

I = ∑ f(sm )μ(K m ). m=1

5.

Calculate the improved distribution density p k (x) =

6.

7.

1 ̃ f(x). I

Draw the random vector xk distributed with the density p k (x) and store it. Find the ordinal j of that multidimensional rectangle K j where xk finds itself, j ∈ {1, . . . , k − 1}. Obtain the primary estimator of the integral I by the formula Sk =

f(xk ) . p k (xk )

6.2 Efficiency of the adaptive importance sampling | 125

8.

Obtain the secondary estimator of the integral I by the formula k

Ik = ∑ am Sm , m=1

9.

where a m are the weights of the estimators S m , a m > 0, ∑km=1 a m = 1; in our case, a m = 1k , m = 1, . . . , k. Calculate the unbiased estimator of the variance of I k : k

σ2k = ∑ d m (S m − I k )2 , m=1

where d m ≥ 0 are some coefficients. In our study, we set dm =

1 , k(k − 1)

m = 1, 2, . . . , k.

If σ2k is smaller than a prescribed threshold determined by the required accuracy, then the procedure halts and outputs I ≈ I k ; otherwise go to 10. 10. Find the ordinal r of the axis along which the rectangle K j in item 6 is of the greatest width; if there are several such axes, choose the first one. 11. In K j , draw the hyperplane xrk + xrj xr = 2 thus implementing a bisection, arrive at a new partition of the cube K1 , . . . , K k , and end the cycle. Let n = k + 1 and go to 3. Now let us turn to the sequential importance sampling which differs from the adaptive one in the way how the density p k (x) is defined at each iteration. 1. Draw a random point x1 ∈ K uniformly distributed in K. Calculate the primary estimator of the integral I by the formula S1 = f(x1 ). Let 2.

p1 (x) = 1,

x ∈ K.

The first bisection: draw the hyperplane x1 =

1 , 2

that is, divide the cube into two equal parts K1 and K2 along the first axis. Set the counter of the steps n equal to 2. Next, we start the cycle 3–11 with the halt criterion in item 9. Let n = k and K1 , . . . , K k−1 be the partition of the cube obtained by bisection at the preceding k − 1 iterations. (For n = 2 these are K1 and K2 obtained above.) Items 3–5 and 7–9 are the same as in the scheme for the adaptive importance sampling.

126 | 6 Adaptive Importance Sampling 6. Draw the random vector xk distributed with the density p k (x). 10. Find the ordinal j of the subdomain K j , j ∈ {1, 2, . . . , k − 1}, of the greatest sdimensional volume, and in this subdomain, find the ordinal r of the axis along which K j is of the maximal width. If there are several such j, r, choose those which go first. 11. In K j , draw the hyperplane lr + lr xr = 1 2 , 2 thus implementing a bisection, where K j = {x ∈ ℝs | xr ∈ [l1r , l2r ]},

l ri ∈ [0, 1], i = 1, 2, r = 1, 2, . . . , s,

arrive at a new partition of the cube K1 , . . . , K k , and end the cycle. Let n = k + 1 and go to 3.

6.2.2 Comparison of adaptive and sequential schemes Comparing the algorithms, we find that the sequential one is more simple, because (a) there is no need to store the random vectors xk at steps 1–2; (b) the bisection is implemented more simply than in the adaptive algorithm. In practice, this means that the computation speed gets faster and the memory consumption is lower. Of course, it is desirable to recognise the cases where the transition from the sequential scheme to the adaptive one is worth the complication. Since the essence of both methods consists of progressive improvement of the distribution density p k (x) and the only difference is the way to define this density, it would be reasonable to compare the methods in the quality of the density p k (x) in the sense of its closeness to the optimal one after the same number of iterations of both methods. This can be done by making the block of re-definition of the density inactive after the same number of iterations in both methods and then continuing the computation with the use of the conventional Monte Carlo method. So, we arrive at two densities pAIS and pSIS after n iterations of the adaptive importance sampling method and the sequential importance sampling method, respectively. Next, we make use of the conventional Monte Carlo method with these densities and look where the convergence is better. This is inferred from the number of iterations needed to attain the required accuracy and from the sample variance estimated during the computation. Let us take a look into what we might expect in a numerical experiment. Let f N (x) be a piecewise constant approximation of the function f(x) on a partition of the cube into N parts. We assume that L ≤ f(x) ≤ M (6.1)

6.2 Efficiency of the adaptive importance sampling | 127

for all x ∈ K. These bounds are obviously true for f N (x). We construct the density p N (x) by normalising f N (x) as follows: p N (x) =

f N (x)

∫K f N (z) dz

.

The variance of the random variable f(x) p N (x)

g(x) =

being averaged in the Monte Carlo method is evaluated by the well-known formula [4] Var{g} = ∫ K

2 f 2 (x) dx − ( ∫ f(x) dx) . p N (x)

(6.2)

K

We transform (6.2) with the use of inequalities (6.1) and obtain Var{g} = ∫ K

2 f 2 (x) dx ∫ f N (x) dx − ( ∫ f(x) dx) f N (x) K

K

2 f 2 (x) − f N2 (x) + f N2 (x) =∫ dx ∫ f N (x) dx − ( ∫ f(x) dx) f N (x) K

K

K

2 2 f 2 (x) − f N2 (x) dx ∫ f N (x) dx + ( ∫ f N (x) dx) − ( ∫ f(x) dx) =∫ f N (x) K

K

K

K

M 󵄨󵄨 2 󵄨 󵄨 󵄨 ≤ ∫󵄨f (x) − f N2 (x)󵄨󵄨󵄨 dx + ∫󵄨󵄨󵄨f N (x) − f(x)󵄨󵄨󵄨 dx ∫(f N (x) + f(x)) dx L 󵄨 K

K

2M 󵄨 󵄨 ≤( + 2M) ∫󵄨󵄨󵄨f N (x) − f(x)󵄨󵄨󵄨 dx. L

K

(6.3)

K

From the last expression in (6.3) we conclude that the closer to each other in the mean f N (x) and f(x) are, the smaller is the variance. Besides, we observe that the greater the values of f(x) are, the more dense is the grid of multidimensional rectangles generated by the adaptive importance sampling method. The points xk indeed fall more frequently to the domains where f N (x) is large, hence in those parts of the cube the rectangles are subject to a more frequent partitioning (so the adaptation manifests itself). Concerning the sequential importance sampling method, the grid is uniformly refined, so the size of the greatest rectangle tends to zero. The question thus arises, in which cases it is worth making the rectangles more dense at the places where the function takes its greatest values. The answer is given in Figure 6.1 where the one-dimensional case b ∫a f(x) dx is shown. In Figure 6.1 we see that in the case of a peak-like maximum of the integrand (f1 (x)) it is beneficial to concentrate the rectangles at the places where the function takes great values, whereas in the case of a smooth change of the function and absence

128 | 6 Adaptive Importance Sampling f1 (x)

f2 (x)

a

X

b

a

X

b

Fig. 6.1. On optimal partitioning of the integration domain. Tab. 6.1. Comparison of the elementary Monte Carlo to adaptive and sequential variants of importance sampling for the 2-dimensional problem after 16 iterations. Method

Result

Number of iterations

Sample standard deviation

Elementary Adaptive IS Sequential IS

6.167 6.142 6.163

166138 30641 55624

12.22 5.25 7.06

of large gradients (f2 (x)), the uniform partitioning of the cube is good. The numerical experiments confirm that it is wise to apply the adaptive scheme to the functions possessing a sharp peak-like maximum, whereas the sequential one does better in the case of a smoothly changing function.

6.2.3 Numerical experiments For the sake of clarity, we begin with double integrals. Let us calculate the integral I = ∫ f(x) dx,

2

f(x) = 100e−100(x−0.3) + 3.

K2

The graph of f(x) is given in Figure 6.2. The number of subdomains N which the square K2 is partitioned into is equal to 16. The calculation is carried out with the use of the conventional Monte Carlo method with three initial densities: first, the uniform one (elementary Monte Carlo); second, resulting from 16 iterations of the adaptive algorithm; and third, the density obtained after 16 iterations of the sequential algorithm. The results are presented in Table 6.1, the absolute error is 0.1, and the true value of the integral is equal to 6.145.

6.2 Efficiency of the adaptive importance sampling | 129

100

80

60

40

20

0.8 0.6 0.4 0.2 0 0

0.2

0.4

0.6

0.8

Fig. 6.2. The graph of an integrand possessing a sharp maximum.

(a) adaptive IS

(b) sequential IS

Fig. 6.3. Distribution of the points with the density obtained after 16 iterations.

Table 6.1 shows that the adaptive importance sampling method yields the best results. In Figure 6.3, the distributions of the points drawn with the densities obtained by the adaptive and sequential methods are shown (the number of subdomains is 16). In Figure 6.3, we see rectangles which are quite densely filled by random points. The centre of the densely filled rectangle in the adaptive scheme is more close to the point (0.3, 0.3) than the centre of such rectangle in the sequential scheme. Taking into account the fact that f(x) attains its maximum at the point (0.3, 0.3), we see that the density obtained with the use of the adaptive importance sampling algorithm follows better the behaviour of the integrand, although both methods put more points in the maximum neighbourhood. We observe that if we increase the number N of iterations to get the density from 16 to 400, the advantage of the adaptive

130 | 6 Adaptive Importance Sampling Tab. 6.2. Comparison of the elementary Monte Carlo to adaptive and sequential variants of importance sampling for the 2-dimensional problem after 400 iterations. Method

Result

Number of iterations

Sample standard deviation

Elementary Adaptive IS Sequential IS

6.143 6.143 6.146

16379341 129815 139783

12.14 1.082 1.122

Tab. 6.3. Comparison of the elementary Monte Carlo to adaptive and sequential variants of importance sampling for the 6-dimensional problem after 36 iterations. Method

Result

Number of iterations

Sample standard deviation

Elementary Adaptive IS Sequential IS

3.174 3.183 3.180

261141 80184 147714

1.53 0.85 1.15

method over the sequential one becomes subtle; the corresponding results are given in Table 6.2, the absolute error is 0.01. This is because the partition is fine enough to approximate accurately the integrand at the ‘dangerous’ regions of large gradients. But in the multidimensional case we technically are not able to partition K into sufficiently small parts, and the adaptive importance sampling method is hence more efficient. In what follows, we give results obtained for s = 6 and s = 12 for integrands behaving as above. In the six-dimensional case, we calculate I = ∫ f(x) dx,

2

f(x) = 100e−25(x−0.3) + 3.

K6

The true value of the integral is equal to −3.18, the absolute error is 0.01. The number of iterations to get the densities is 36 = 729. The results are given in Table 6.3. In the twelve-dimensional case, we calculate I = ∫ f(x) dx,

2

f(x) = 1000e−16(x−0.3) + 3.

K12

The true value of the integral is equal to −3.33, the absolute error is 0.01. The number of iterations to get the densities is 312 = 531 441. The results are given in Table 6.4. Analysing the tables, we conclude that in all above cases the adaptive algorithm provides us with the required accuracy making less calls to the integrand as compared with the sequential one, and the variance of the random variable gAIS (x) = f(x)/pAIS (x) is less than the variance of the random variable gSIS (x) = f(x)/pSIS (x), which is seen from the values of the sample standard deviation. Therefore, the density pAIS (x) is

6.2 Efficiency of the adaptive importance sampling | 131 Tab. 6.4. Comparison of the elementary Monte Carlo to adaptive and sequential variants of importance sampling for the 12-dimensional problem after 312 iterations. Method

Result

Number of iterations

Sample standard deviation

Elementary Adaptive IS Sequential IS

3.330 3.324 3.326

9211254 1125190 1504759

9.10 3.18 3.68

Tab. 6.5. Comparison of the elementary Monte Carlo to adaptive and sequential variants of importance sampling for the smooth 12-dimensional problem after 212 iterations. Method

Result

Number of iterations

Sample standard deviation

Elementary Adaptive IS Sequential IS

3.999 3.324 3.995

118387 48617 35942

1.03 0.66 0.56

‘more optimal’ than pSIS (x), which is what we might expect looking at inequalities (6.3). The advantages of the adaptive method and the sequential method over the elementary Monte Carlo method in the sense of the amount of calls to the integrand are obvious; as concern the computation speed, in the examples above it was also faster, although this cannot be thought of as a universal law. Now let us turn to integration of a smoothly changing function f(x) = x2 for s = 12 (for s = 2 and s = 6 the results are similar). The true value of ∫K f(x) dx is 4, the 12 absolute error is set to 0.01. The number of iterations to get the densities is 212 = 4096. The results are given in Table 6.5. From Table 6.5 we conclude that it is beneficial to apply the sequential importance sampling algorithm to smooth functions.

6.2.4 Conclusion Starting from the theoretical considerations and results of numerical experiments, we arrive at the conclusion that the adaptive importance sampling is a quite efficient technique of evaluation of multidimensional integrals. In the case of a function of large gradient (non-oscillating) and large dimensionality, it is more efficient than a method such as the sequential importance sampling and, of course, the elementary Monte Carlo method.

132 | 6 Adaptive Importance Sampling

6.3 Adaptive importance sampling method in the case where the number of bisection steps is limited In this section we investigate the peculiarities of application of the adaptive importance sampling method if there is a limitation on how fine the partition of the integration domain can be. We elaborate an algorithm which allows for an efficient utilisation of the adaptive scheme under the condition that only a small number of bijection steps can occur. The abilities of the algorithm are demonstrated on the example of a wellknown one-dimensional problem of filtration theory and are compared with the results obtained by means of the Monte Carlo method with Gauß distribution density as well as by the importance sampling method. We analyse in full detail the example where the adaptive scheme turns out to be more efficient than the importance sampling method. We point out that the adaptation in stochastic integration known in the literature can be based on other ideas as well [13].

6.3.1 The adaptive scheme for one-dimensional improper integrals Let us present a one-dimensional version of the adaptive importance sampling procedure under the assumption that the integral is improper with infinite limits. Let us evaluate approximately the integral ∞

I = ∫ f(x) dx, −∞

where the function f(x) is chosen in such a way that (i) it is integrable on [−∞, ∞]; (ii) f(x) ≠ 0 almost everywhere on the real axis; (iii) the values of f(x) outside a fixed interval [a, b] are negligible as compared with those inside [a, b], so their contribution to the integral is unessential. It is possible to give a more rigorous formulation of assumption (iii), but there is no need for this. In addition, let f(x) be such that the variance of p(x)/f(x) is finite for all densities p(x) used in this section. Consider the following computational scheme. 1. Choose a distribution density d(x) such that d(x) ≩ 0 on the real axis. 2. Partition [a, b] into two equal parts [a, c] and [c, b], choose a small enough δ ≥ 0, and calculate 󵄨󵄨 a + c 󵄨󵄨 󵄨󵄨 b + c 󵄨󵄨 f1 = 󵄨󵄨󵄨󵄨f ( )󵄨󵄨󵄨󵄨 + δ, f2 = 󵄨󵄨󵄨󵄨f ( )󵄨󵄨󵄨 + δ. 2 󵄨 2 󵄨󵄨 󵄨 󵄨 Denote ∆1 = [a, c], ∆2 = [c, b].

6.3 Limited number of bisection steps | 133

Consider the function {f1 χ ∆1 (x) + f2 χ ∆2 (x) if x ∈ [a, b], f1̂ (x) = { d(x) if x ∉ [a, b]. { where χ ∆ k stands for the indicator of the interval ∆ k . Introduce the function p1 (x) =

̂

f1 (x) , ∞ ∫−∞ f1̂ (y) dy

which we choose as the first approximation of the optimal distribution density, and thus initialise the iterative procedure. 3. At the Nth step, draw a random point x N distributed with the density p N (x). Calculate the primary estimator f(x N ) . SN = p N (x N ) From the primary estimators S1 , . . . , S N which have been computed, we find the secondary estimator N

IN = ∑ αk Sk , k=1

where α k are the weight coefficients; in the case of piecewise constant approximation they are 3k(k + 1) αk = . N(N + 1)(N + 2) If (a) N > N0 , and (b) σ N < 2ε (or σ N < 3ε ), where ε is the given accuracy, then the procedure halts and outputs I ≈ I N . If the conjunction (a) & (b) is not true, then go to 4. 4. Carry out the bisection by dividing the interval ∆ k where x N finds itself in two equal halves ∆1k and ∆2k . In the existing partition τ N = {∆1 , . . . , ∆ N+1 }, substitute the intervals ∆1k and ∆2k for ∆ k and obtain the new partition τ N+1 = {∆1 , . . . , ∆ N+2 }. Let ξ k stand for the centre of ∆ k , let f k = |f(ξ k )|, and define the function { N+2 ̂ (x) = ∑k=1 (f k + δ)χ ∆ k (x) if x ∈ [a, b], f N+1 { d(x) if x ∉ [a, b]. { Then introduce p N+1 (x) =

̂

f N+1 (x) ∞ ̂ ∫−∞ f N+1 (y) dy

and go to 3 while increasing the counter N of steps by one.

134 | 6 Adaptive Importance Sampling Let us explain the roles of δ, N0 , and d(x). For the method to work adequately for those integrands which can be very close to zero inside [a, b], one should introduce the parameter δ; the number N0 , which is large enough, helps to get rid of inadequate estimators at the first steps of the procedure; and the function d(x) defines the density p N (x) outside the interval [a, b] where the adaptive bisection procedure works.

6.3.2 The adaptive scheme for the case where the number of bisection steps is limited

As we have seen, we cannot increase the cardinality of the partition τ_N = {∆_1, . . . , ∆_N} without bound. We assume that we are able to partition [a, b] into no more than N pieces. As actual practice shows, in some problems, acting in accordance with the procedure given in Section 6.2, after N iterations we sometimes get a precise result, but sometimes not. We observe that if the variance of the estimator I_N = I_N(x_1, . . . , x_N) is small, we can go over to the estimator

   Ī_M = (1/M) ∑_{k=1}^{M} I_N^k,

where I_N^k is the kth realisation of I_N = I_N(x_1, . . . , x_N) and M is the number of realisations I_N^k. But this technique is inefficient in the case where the variance of I_N = I_N(x_1, . . . , x_N) is large.
Let p_N^k(x) be the random density related to the kth realisation I_N^k. For each p_N^k(x), we consider the random variables

   J_L^k = (1/L) ∑_{i=1}^{L} f(x_i)/p_N^k(x_i),

which are the standard estimators of I = ∫_{−∞}^{∞} f(x) dx by the Monte Carlo method with the density p_N^k(x). Depending on p_N^k(x), their variance varies from realisation to realisation, as does the behaviour of the sample standard deviation σ{f(x)/p_N^k(x)}. The worst case is where it makes relatively large jumps at high-count iterations, so the monotone decrease of the standard error σ{J_L^k} does not occur, and the program execution time grows heavily. The requirements on the optimal density p_N^opt(x) chosen out of the random densities p_N^k(x) can be formulated as follows:
(i) the random variable f(x)/p_N^opt(x) minimises the variance over all densities p_N^k(x);
(ii) σ{J_L^k} must monotonically decrease as L grows.
In practice, this means that if some computationally efficient density p_N^k(x) occurs at some N with some not-so-small probability, then one can try to ‘catch’ it and then perform the computation on its basis with the standard Monte Carlo method. The way of searching for such a density can vary. In our study, the search is implemented by statistical means, which might appear at first glance to take plenty of time. But,


as numerical experiments show, in some cases our technique allows us to compute the integral much faster and with fewer calls to the integrand as compared with the standard Monte Carlo method and the importance sampling method.
The application of the adaptive importance sampling scheme in the case of a limited number N of steps of the bisection procedure can be described as follows:
1. Generate a small number (3–5) of random densities p_N^k(x) and by a small sample (200–300 points) estimate σ{f(x)/p_N^k(x)}. Store the density p_N^med(x) which corresponds to the median med(σ{|f(x)|/p_N^k(x)}). It is not wise to choose the density with minimum σ{|f(x)|/p_N^k(x)}, because the small sample size may result in an inadequate estimation. Set the upper bound σ̄ for the subsequent values of σ{|f(x)|/p_N^k(x)} equal to this median (or 120–150% of it, to be safe).
2. Start the computation by the standard Monte Carlo method with the density p_N^med(x), that is, calculate the estimator

   J_L^med = (1/L) ∑_{i=1}^{L} f(x_i)/p_N^med(x_i)

and its sample standard deviation σ{J_L^med}. At each step, beginning with some L_0 (which usually lies between 300 and 500), calculate the ratio

   R = σ_L{f(x)/p_N^med(x)} / σ_{L−1}{f(x)/p_N^med(x)}.

If R < R_0, then go to 3, where the check number R_0 is chosen to fall between 1 and 1.5. In addition, every 100 iterations compute diff = σ{J_L^med} − σ{J_{L−100}^med}; if diff < 0, then go to 3. This double check is not necessary if the required accuracy ε is almost attained, say, if σ{J_L^med} < mε/2, where 1 < m < 1.5. If σ{J_L^med} < ε/2 at the Lth iteration, then the computation halts and outputs I ≈ J_L^med.
3. Generate new densities p_N^k(x) up to the first density p_N^new such that σ{J_L^new} < σ̄. We estimate σ{f(x)/p_N^k(x)} by a small sample (of 200–300 points). Next, for the new density p_N^new passing the check we repeat 2 with p_N^new replacing p_N^med, and so on.
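The selection in step 1 can be sketched as follows (our own illustration; make_adaptive_density is a hypothetical helper standing in for the bisection procedure of Section 6.3.1, and f and the returned pdf are assumed to be vectorised):

import numpy as np

def select_median_density(make_adaptive_density, f, n_candidates=5, probe=300, seed=0):
    # Probe a few random adaptive densities with small samples and keep the one
    # whose sample deviation is the median; 130% of that value is the bound sigma_bar.
    rng = np.random.default_rng(seed)
    candidates, sigmas = [], []
    for _ in range(n_candidates):
        sampler, pdf = make_adaptive_density(rng)       # returns (sampler, pdf), an assumption
        x = np.array([sampler(rng) for _ in range(probe)])
        sigmas.append(np.std(np.abs(f(x)) / pdf(x), ddof=1))
        candidates.append((sampler, pdf))
    order = np.argsort(sigmas)
    med = order[len(order) // 2]                         # density with the median sigma
    sigma_bar = 1.3 * sigmas[med]                        # upper bound for later candidates
    return candidates[med], sigma_bar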

6.3.3 Peculiarities and capabilities of the adaptive importance sampling scheme in the case where the number of bisection steps is fixed

We observe that the procedure described in Section 6.3.2 does not necessarily halt. It can search through the densities endlessly. In order to tackle this, one introduces limits on running time, implements output of various messages concerning the stages of procedure execution, etc. The suggested scheme is easily combined with the importance sampling method because the density constructed in accordance with the latter can go through the same selection procedure as the densities p_N^new in Section 6.3.2.

Now the adaptive importance sampling scheme with unlimited N (see Section 6.3.1) amounts, in essence, to a technique of adjusting the density whose key feature is that the intervals ∆_1, . . . , ∆_N automatically become more dense in those domains where the integrand takes its greatest absolute values, which helps to calculate integrals of functions possessing sharp isolated maxima. The estimators I_N = ∑_{k=1}^{N} α_k S_k are not calculated, although an algorithm probably exists, resembling that of Section 6.3.2, in which they would find a good use. In the bisection procedure, the piecewise constant approximations can be replaced by more precise ones using, say, the Taylor formula. This algorithm does well for functions possessing sharp, almost singular maxima whose location is known to within a rather large subdomain containing the extremum. As examples of such functions we will consider the integrands in the problem of optimal estimation below. The parameters δ, N_0, L_0, R_0, and the auxiliary density d(x) can be varied to suit the operator's need; certain defaults can also be set.

6.3.4 Numerical experiments

Consider the following one-dimensional optimal estimation problem. Let the random variable X be distributed by the normal law with the density p(x) = N(x, x̄, σ), and let the measurement result Y for X = x be distributed with the density p(y | x) = N(y, x, r). Assume that the measurement of the position of X yields the result y. Under this condition, we have to find an optimal estimator of the value of X. This estimator is given by the formula

   x_opt = I_1/I_2 = ∫_{−∞}^{∞} x p(y | x) p(x) dx / ∫_{−∞}^{∞} p(y | x) p(x) dx = ∫_{−∞}^{∞} f_1(x) dx / ∫_{−∞}^{∞} f_2(x) dx.   (6.4)
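As a quick illustration of (6.4), the following sketch (our own, not code from the text) estimates x_opt by the plain Monte Carlo method, drawing points from the prior and averaging the likelihood; it assumes the test values used later in this section (prior N(x, 1, 1), measurement y = 1.1, r = 0.1).

import numpy as np

rng = np.random.default_rng(0)
y, r = 1.1, 0.1
x = rng.normal(1.0, 1.0, size=200_000)                # draws from the prior p(x)
w = np.exp(-(y - x) ** 2 / (2 * r ** 2))              # p(y | x) up to a constant factor
x_opt = np.sum(x * w) / np.sum(w)                      # I_1 / I_2; the constant cancels
print(x_opt)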

The problem is solved for X distributed by the normal law with the density p(x) = N(x, 1, 1) under the condition that one knows the result of measurement y = 1.1. The graphs of the integrands in the numerator and denominator in (6.4) are shown in Figure 6.4. As we see in Figure 6.4, the smaller r is, the closer both integrands are to the delta function, and hence the harder they are to compute. We carry out the calculation in the Matlab software package on a computer equipped with a 3 GHz processor. Three methods are considered.
MC is the standard Monte Carlo method with the distribution density of the points drawn p(x) = N(x, 1, 1).
IS is the importance sampling method where I = ∫_{−∞}^{∞} f(x) dx is evaluated with the density

   p_N(x) = f̃_N(x) / ∫_{−∞}^{∞} f̃_N(y) dy,


Fig. 6.4. Graphs of the integrands for various r: (a) r = 1, (b) r = 0.1, (c) r = 0.01, (d) r = 0.001.

where

   f̃_N(x) = { ∑_{k=1}^{N} (f_k + δ) χ_{∆_k}(x)  if x ∈ [−2, 4],
             { N(x, 1, 1)                         if x ∉ [−2, 4],

   ∆_k = [−2 + 6(k − 1)/N, −2 + 6k/N],

and the f_k are the absolute values of the integrand at the centres of the ∆_k, so we have a piecewise constant approximation on [−2, 4] in which the pieces ∆_k, on each of which the approximation is constant, are of equal length. The interval [−2, 4] is chosen because the random variable X almost surely falls into it by the three-sigma rule. As in Section 6.3.1, δ ≥ 0 is introduced to get rid of large variances for integrands close to zero in [−2, 4], which indeed occur in our case.
AIS is the adaptive importance sampling method applied in full accordance with the scheme described in Section 6.3.2 with d(x) = N(x, 1, 1).
In both AIS and IS, we set N = 1000 and δ = 0.005. We are interested in the question in which cases it is worthwhile to pass from IS to the algorithmically more complex AIS in terms of computational cost. The results of the numerical experiments are given in Tables 6.6–6.9 below.
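A sketch of constructing and sampling such a piecewise constant IS density is given below (our own illustration; for simplicity it ignores the small normal mass of f̃_N outside [−2, 4], so the points are always drawn from inside the interval; the helper names are assumptions).

import numpy as np

def make_piecewise_density(f, a=-2.0, b=4.0, n_cells=1000, delta=0.005):
    edges = np.linspace(a, b, n_cells + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    heights = np.abs(f(centres)) + delta               # |f(centre_k)| + delta
    widths = np.diff(edges)
    norm = np.sum(heights * widths)

    def sample(rng, size):
        k = rng.choice(n_cells, size=size, p=heights * widths / norm)
        return rng.uniform(edges[k], edges[k + 1])      # uniform inside the chosen cell

    def pdf(x):
        k = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_cells - 1)
        return heights[k] / norm

    return sample, pdf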

Tab. 6.6. Root mean square deviation of measurements r = 1.

Method   Absolute error   Calls for functions (thousands)   Time (sec)
MC       1.66 × 10⁻³      378                                9
IS       1.04 × 10⁻³      86                                 48
AIS      1.03 × 10⁻³      90                                 45

Tab. 6.7. Root mean square deviation of measurements r = 0.1.

Method   Absolute error   Calls for functions (thousands)   Time (sec)
MC       0                9337                               208
IS       1.6 × 10⁻⁴       116                                35
AIS      1 × 10⁻⁴         123                                35

Tab. 6.8. Root mean square deviation of measurements r = 0.01.

Method   Absolute error   Calls for functions (thousands)   Time (sec)
MC       1 × 10⁻⁵         106957                             2738
IS       1 × 10⁻⁵         168                                47
AIS      3 × 10⁻⁵         171                                48

Tab. 6.9. Root mean square deviation of measurements r = 0.001.

Method   Absolute error   Calls for functions (thousands)   Time (sec)
MC       0                1081095                            24252
IS       0                110342                             30281
AIS      0                305                                92

Since simple and complex functions alike can play the role of the integrands, we compare the methods both by the computing time and by the number of calls to the integrands. The f-gain column contains the ratio of the number of calls to the integrands in the MC method to that in the method under consideration, while the t-gain column contains the same characteristic for the computing time. In all methods, the computation halts as soon as the accuracy 0.001 is attained for both integrals. In the IS and AIS methods, for r = 1 we approximate the densities in both the numerator and the denominator; for other r we approximate only the optimal density in the denominator, due to the similarity of the integrands, which allows us to somewhat cut down the processing time. The values for the AIS are averaged over 100 computations.


Fig. 6.5. Histograms of densities found by the AIS and IS methods, r = 0.01: (a) ‘bad’ AIS density, (b) ‘good’ AIS density, (c) ‘good’ IS density.

From Tables 6.6–6.9 we see that the standard Monte Carlo method beats the AIS in time, because of its quite simple implementation, for r = 1 only. As r decreases, this advantage vanishes. In addition, as r decreases, the benefit of AIS over MC both in time and in the number of calls to the integrands grows sharply. A more complex situation arises when we compare AIS and IS: if r > 0.001, the results are almost identical, while for r = 0.001 the AIS method turns out to be much more efficient. Numerical experiments show that the average number of densities which the AIS method looks over is one for r > 0.01, three for r = 0.01, and thirty for r = 0.001. So, if the cardinality of the partition is limited by N = 1000, for r > 0.01 a ‘good’ density is surely generated; for r = 0.01, a ‘good’ density is found with probability 1/3; while for r = 0.001, a ‘good’ density is generated with a rather small probability 1/30. In Figure 6.5, we present histograms of two densities generated by the AIS method for r = 0.01, one ‘bad’, which is removed as the result of selection, and the other ‘good’ for computation. In addition, we present the histogram of a density obtained in accordance with the IS method.


Fig. 6.6. Histograms of densities found by the AIS and IS methods, r = 0.001: (a) ‘good’ AIS density, (b) ‘good’ IS density.

We observe that the ‘bad’ density in Figure 6.5 is indeed very unfavourable. It occurs with a small probability, although the selection criterion of the AIS method rejects even densities more similar to the normal one; in other words, it is very strict indeed. In Figure 6.5 we see that the density obtained by the IS method does not fit the integrand as well as the ‘good’ density generated by the AIS method. But owing to the uniform refinement of the integration interval, the IS method provides us with a quite precise approximation of the integrand behaviour, so the IS method is quite efficient for r = 0.01. A different situation arises in the case shown in Figure 6.6. Figure 6.6 shows that if the partition is not dense enough, then the density constructed in accordance with the IS method fails to reflect the key feature of the integrand behaviour, namely the sharp maximum at x = 1.1, so the IS method turns out to be inefficient for r = 0.001. The histogram of one of the few densities passing the selection in the AIS method, also shown in Figure 6.6, almost precisely reproduces the behaviour of the integrand. In the multidimensional case, a similar situation was observed in [3]. The selection procedure takes some amount of processing time, but it is, on average, much less than that required to carry out the computation based on a single density either by the Monte Carlo or the importance sampling method.

6.3.5 Conclusion We suggested an algorithm of numerical stochastic integration based on the concept of adaptation of the distribution density. As compared with the early sources, the stress is laid not on the convergence rate at infinity but on capabilities of the adaptive approach under natural conditions of limited ability to refine the partition of the integration domain.


This algorithm should be applied to complex problems which defy all attempts at solution by means of either the standard Monte Carlo method or the importance sampling method. The suggested scheme includes the above methods as its particular cases. Using the example of the optimal estimation problem, we demonstrated the advantages of the proposed scheme and explained how these benefits arise.

6.4 Solution of a problem of navigation by distances to pin-point targets with the use of the adaptive importance sampling method The problem of navigation by distances to pin-point targets consists of finding the coordinates of an object (a ship, a mobile robot) from results of measurement of the distances to two pin-point targets whose coordinates are known. As these targets one can choose a light tower, a reference stake, a landmark, etc. It is assumed that the object whose position we must determine possesses an ability to measure the distance to the remote points by means of a kind of a range finder. We consider the planar problem, that is, the object has two coordinates. In the case of exact measurement of the distance to two targets, these coordinates are uniquely determined as the point of intersection of two circles with centres at the pin-points and the radii equal to the distance measured. But in actual practice the measurements are subject to noise, that is, include a random error. In the framework of the Bayesian approach [13, 64, 65], instead of an exact place of the coordinates of the object on the plane we have a probabilistic distribution, while the optimal Bayesian estimator is defined as the mathematical expectation corresponding to this distribution. The direct evaluation of this estimator implies calculation of the quotient of two integrals which, as a rule, are computed only approximately. We consider the application of the adaptive importance sampling method [3, 4] to the evaluation of the integrals involved in the formula for the optimal Bayesian estimator. Moreover, we compare the results with the calculation following the scheme used to compute a posteriori means in the importance sampling method [9, 13], which is one of the primary sequential Monte Carlo methods. Numerical experiments show the benefits of the suggested method.

6.4.1 Problem setup

The aim is to determine the coordinates (x_1, x_2) of an object from known results of measurement of the distances to two stationary pin-point targets whose coordinates (x_1^1, x_2^1) and (x_1^2, x_2^2) are presumed to be known. Let there be m pairs of measurements from the same position (one measurement for each target in a pair). Introducing the noise, we write out the results of the measurements as follows:

   y_i^1 = √((x_1 − x_1^1)² + (x_2 − x_2^1)²) + ν_i^1,
   y_i^2 = √((x_1 − x_1^2)² + (x_2 − x_2^2)²) + ν_i^2,    i = 1, . . . , m.

The measurement errors are assumed to be identically distributed, Gaussian, independent, and centred, in other words, ν_i^k ∼ N(0, r), i = 1, . . . , m, k = 1, 2. Here N(0, r) stands for the normal distribution on the real axis with centre at zero and variance r². Following the Bayesian approach, we have to define some a priori distribution of the object position on the plane; in our study, it is defined by the function p(x_1, x_2) = N(x_1, x_2, x_1^0, x_2^0, σ_1, σ_2, 0), where N(x_1, x_2, x_1^0, x_2^0, σ_1, σ_2, 0) denotes the normal distribution on the plane with independent components distributed by the laws N(x_1^0, σ_1) and N(x_2^0, σ_2). The a posteriori density of the vector (x_1, x_2) conditioned upon the measurements y_i^1 and y_i^2, i = 1, . . . , m, is defined by the formula

   p(x_1, x_2 | y_i^1, y_i^2) = C p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r),

where N(y_k^s, √((x_1 − x_1^s)² + (x_2 − x_2^s)²), r) stands for the value of the density of the normal law N(√((x_1 − x_1^s)² + (x_2 − x_2^s)²), r) at the point y_k^s, k = 1, . . . , m, s = 1, 2, and C is the normalising factor. Hence we arrive at the formulas for the optimal Bayesian estimators of the coordinates x̂_1 and x̂_2 of the position of the object:

   x̂_1 = ∬ x_1 p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r) dx_1 dx_2
          × ( ∬ p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r) dx_1 dx_2 )^{-1},   (6.5)

   x̂_2 = ∬ x_2 p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r) dx_1 dx_2
          × ( ∬ p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r) dx_1 dx_2 )^{-1}.   (6.6)


We apply the adaptive importance sampling method to the approximate calculation of (6.5) and (6.6).

6.4.2 Application of the adaptive importance sampling method to calculating the optimal estimator of the object position

Consider the application of the adaptive importance sampling method to calculating the first coordinate of the object defined by formula (6.5). For brevity, let x = (x_1, x_2), x ∈ ℝ², and consider the following functions of this vector-valued argument:

   f_1(x) = x_1 p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r),

   f_2(x) = p(x_1, x_2) ∏_{k=1}^{m} N(y_k^1, √((x_1 − x_1^1)² + (x_2 − x_2^1)²), r) N(y_k^2, √((x_1 − x_1^2)² + (x_2 − x_2^2)²), r).   (6.7)

Then formula (6.5) takes the form

   x̂_1 = I_1/I_2 = ∫_{ℝ²} f_1(x) dx / ∫_{ℝ²} f_2(x) dx.   (6.8)

Here and in what follows the symbol dx stands for the element of area in the plane. Assume that the estimator is calculated with the use of the Monte Carlo method in the form

   x̃̂_1 = Ĩ_1/Ĩ_2,    Ĩ_1 = (1/N) ∑_{k=1}^{N} f_1(x_k)/p(x_k),    Ĩ_2 = (1/N) ∑_{k=1}^{N} f_2(x_k)/p(x_k),

and the sequence x_1, . . . , x_N consists of independent random vectors in ℝ² distributed with some density p(x). By virtue of the generalised central limit theorem (by the delta method, see [20, 37]), for sufficiently large N the distribution of x̃̂_1 is close to the normal law

   N(x̃̂_1, x̃̂_1 √(σ²_{Ĩ_1}/I_1² + σ²_{Ĩ_2}/I_2² − 2 cov(Ĩ_1, Ĩ_2)/(I_1 I_2))),

where σ_{Ĩ_1}, σ_{Ĩ_2} are the mean square deviations of the random variables Ĩ_1 and Ĩ_2. Introducing

   ξ = f_1(x)/p(x),    η = f_2(x)/p(x),

where the random vector x is distributed with the density p(x), we obtain

   cov(Ĩ_1, Ĩ_2) = (E{ξη} − E{ξ} E{η})/N = cov(ξ, η)/N,

hence the approximate distribution of x̃̂_1 is indeed of the form

   N(x̃̂_1, x̃̂_1 (1/√N) √(σ²{ξ}/I_1² + σ²{η}/I_2² − 2 cov(ξ, η)/(I_1 I_2))).

Thus, the mean square error σ{x̃̂_1} obeys the approximate equality

   σ{x̃̂_1} ≈ x̃̂_1 (1/√N) √(σ²{ξ}/I_1² + σ²{η}/I_2² − 2 cov(ξ, η)/(I_1 I_2)).

The smaller this deviation is, the better x̃̂ 1 approximates the Bayesian estimator x̂ 1 of the first coordinate of the object sought for. It is easily seen that for fixed N the variance σ2 {x̃̂ 1 } depends on the density p(x) not in a simple or obvious way. In filtration theory problems, in particular while using the importance sampling method, as p(x) they frequently choose the a priori density of distribution of the object coordinates, in our case it is equal to p(x) = p(x1 , x2 ) = N(x1 , x2 , x01 , x02 , σ1 , σ2 , 0). Such a choice is convenient for the subsequent on-line estimation of the position of the moving object in the framework of a Markov model [13]. In the case of measurements being recorded from a fixed position, though, this choice of the density p(x) is not optimal. In a numerical experiment below we demonstrate that in the case where the density p(x) is constructed by the adaptive importance sampling algorithm, the variance defined by formula (6.1) is much smaller than in the case of computation utilising a priori density p(x). The adaptive importance sampling algorithm (which amounts to a sequential bisection) has been described in Chapters 2 and 3 of this monograph. We introduce some modifications to the algorithm as compared with that given there and in [3]. First, we search for the ‘optimal’ density with the use of the bisection method only for the integral entering the denominator in (6.8). Thus, the optimal density takes care of both coordinates. One can choose an alternative where the density is optimised separately for the numerators entering (6.5) and (6.6) and the common denominator. But, as the experiment shows, this leads to some algorithmic complication which increases the execution time but does not yield any visible benefit in accuracy. Second, the method is used not in its full strength; namely, only the technique to find the optimal density is utilised which guarantees that the grid nodes layout quickly becomes more and more dense in the domains where the absolute values of the integrand are the greatest. After some number of iterations, the density obtained is fixed in order to avoid the computation deceleration because of too fine partitioning of the domain resulting in a slow generation of the random vectors distributed with the corresponding piecewise constant density. Third, since the integrands in (6.5) and (6.6) are very close to zero at most points inside the range of definition and possess sharp maxima, the bisection algorithm is applied not to the integrands themselves but to their shifts upward by some positive constant as in the solution of the one-dimensional test navigation problem in [4]. Unfortunately, without this protective measure the adaptive importance sampling


method may, with a non-zero probability, yield a density which is completely unsuitable for calculation. The shift by a constant is probably not the most efficient, but a quite simple way to get rid of this unwanted effect. Fourth, we observe that the piecewise constant density is constructed starting from a large square including the domain where the object a priori finds itself with probability exceeding 0.9999; outside of this square, the density is set to zero, which does not cause a noticeable loss of accuracy.

6.4.3 A numerical experiment

We consider the following navigation problem. We have to evaluate the optimal Bayesian estimator of the position of a ship from the results of five pairs of measurements of the distances to two light towers with known coordinates (3000, 0) and (0, 3000); the unit of measurement is the metre. The results of the measurements are the vectors (2981, 2978, 2985, 3017, 2993) and (2969, 3002, 2977, 3021, 2995) for the first and the second targets. The measurement errors are assumed to be identically distributed, Gaussian, independent, and centred, in other words, ν_i^k ∼ N(0, 30), i = 1, . . . , 5, k = 1, 2. The a priori density p(x) of the distribution of the vector x = (x_1, x_2) of the coordinates of the ship is assumed to be Gaussian: p(x) = N(x, 0, 0, 100, 100, 0).
The situation under discussion is shown in Figure 6.7: a priori the ship finds itself inside a circle of radius 400 m with centre at the origin. By the results of measurements, it can be either near the point M or near the point N. But the case of the point M is unfeasible because this point lies outside that circle. Therefore, the a posteriori density should possess a sharp maximum in a neighbourhood of the point N and take almost-zero values at the other points of the plane. In Figure 6.8, the graph of the integrand f_2(x) entering formula (6.7) is given; this function is proportional to the a posteriori density. In this graph, a sharp maximum at the most probable position of the ship found from the measurements is clearly seen.
Two methods are used to carry out the computation, the adaptive importance sampling and a version of the Monte Carlo method where as the distribution density one takes the a priori density of the distribution of the ship coordinates. We pick the latter for comparison because it is used to calculate the estimator sought for in the classical importance sampling scheme, which has found widespread use in solving navigation problems. For brevity, we refer to this method as the Monte Carlo a priori method. The characteristic for comparison is

   RE = σ{x̃̂_1} √N (x̃̂_1)^{-1} = √(σ²{ξ}/I_1² + σ²{η}/I_2² − 2 cov(ξ, η)/(I_1 I_2)),   (6.9)

146 | 6 Adaptive Importance Sampling

Fig. 6.7. The ship and the light towers.

f2 (x1, x2)

1–10–15

5–10–16

0 400

200

0 x2

–200

–400

400

0 200 x1

–400 –200

Fig. 6.8. The graph of the integrand f2 (x).

which is directly proportional to the relative error of the computation provided that the number of iterations is fixed. The smaller the value of RE is, the more efficient is the method. The true values under the square root in (6.9) are replaced by their statistical estimators. The calculations are carried out under the 95% confidence level for each coordinate; the accuracy is set to 0.1 m in each coordinate. The calculation halts as soon as both coordinates are estimated within the given accuracy.

Tab. 6.10. Comparison of methods of estimation.

Method                 Ship coordinates    Number of     Variance              Correlation of numerator
                       x̂_1     x̂_2        iterations    characteristic RE     and denominator
                                                          x̂_1     x̂_2         x̂_1     x̂_2
Monte Carlo a priori   9.06    7.03        1 × 10⁶       5.53    7.21          0.7     0.6
AIS                    9.06    7.07        1.1 × 10⁵     1.76    2.29          0.25    0.18

Fig. 6.9. Distributions of 5000 points generated by the Monte Carlo a priori method and the adaptive importance sampling method: (a) Monte Carlo distribution, (b) AIS distribution.

Hence we easily derive that both coordinates are found within the given accuracy with probability no smaller than 90%. For the adaptive importance sampling method, due to its probabilistic nature, we use the mean values of all statistical characteristics entering (6.9) averaged over 300 computations. The iteration count for the adaptive importance sampling method includes the iterations required to generate the random density that is piecewise constant on K = [−400, 400] × [−400, 400]; the number of bisection steps is 10 000. The results of the numerical experiments are gathered in Table 6.10. Analysing this table, we see that the adaptive importance sampling method requires about one tenth as many iterations as the Monte Carlo a priori method, which is supported by the comparison of the corresponding values of RE: (5.53/1.76)² = 9.87 and (7.21/2.29)² = 9.91. In addition, we see that the Monte Carlo a priori method yields a greater correlation between the estimators of the numerator and denominator, which has a beneficial effect on the computation in view of relation (6.9). But, because of the decrease of the variances of the numerator and denominator due to a well-chosen density of distribution of the generated points, the adaptive importance sampling possesses much smaller

RE than the Monte Carlo a priori method. In Figure 6.9, the distribution of the points generated by both methods is given. We see that the adaptive importance sampling method allocates the points in a much denser pattern in precisely that quite small subdomain where the maximum of the a posteriori density finds itself. This is why the adaptive importance sampling method is more efficient than the Monte Carlo method using the a priori density.

6.4.4 Conclusion

In Section 6.4, we studied how to apply the adaptive importance sampling method to solving the problem of navigation by distances to pin-point targets. In a numerical example we compared the efficiency of the adaptive importance sampling method and of the algorithm commonly used to solve such problems; the advantage of the adaptive method was demonstrated. We pointed out the peculiarities of the application of the adaptive importance sampling method to Bayesian estimation problems, namely, the necessity of some sophistication in the case of close-to-zero integrands and of the use of the delta method to estimate the error in the obtained ratio.


Part II: Solution of Integral Equations

7 Semi-Statistical Method of Solving Integral Equations Numerically

7.1 Introduction

The numerical methods of solving regular integral equations known today can conventionally be divided into purely deterministic and purely statistical ones. The classical deterministic methods, which have been theoretically analysed in depth, have found wide and successful use today in a great body of applied problems. But their efficiency essentially depends on the volume and reliability of the underlying a priori information. For example, the variational methods assume quite accurate representations of the equation kernels over a basis of rather small dimensionality to be known [50]; the use of quadrature formulas requires an appropriate allocation of the interpolation nodes in the integration domain [39], etc. The control of accuracy, though, is carried out either by means of rough theoretical estimates or by solving the problem several times with successive complication until the approximations turn out to practically coincide.
The existing statistical methods [15, 17, 19, 62] are much less sensitive to the presence of a priori information concerning both the kernel and the integration domain. They allow for control of accuracy in the computation process and are very convenient in the cases where a local evaluation of the solution is required, say, at a single point or several points. But these methods appear to be not so efficient if the whole field of solutions must be found.
In [6, 32, 54], a method of mixed kind was suggested to solve integral equations, which contains both deterministic and statistical operations, and is hence called semi-statistical. As in the deterministic methods, the problem reduces to solving a set of algebraic equations, but the approximate replacement of the integral by a finite sum is carried out by means of the Monte Carlo method. This approach provides us with a series of algorithmic advantages. The suggested method, from the algorithmic point of view, is weakly sensitive to the spectral properties of the kernel and allows for recursive refinement of the solution with control of accuracy in the computation process, as well as an automatised choice of a suitable allocation of grid nodes in the integration domain based on a preliminary estimation of the solution.



7.2 Basic relations

Let a bounded closed domain D be given in the s-dimensional Euclidean space ℝ^s. We consider the Fredholm integral equation of the second kind

   φ(x) − λ ∫_D K(x, y)φ(y) dy = f(x)   (7.1)

with the kernel K(x, y) ∈ L₂(D × D) and f(x) ∈ L₂(D); λ is a real number such that equation (7.1) has a unique solution φ(x) in L₂(D). For the sake of brevity, in what follows we will also use the operator notation

   Hφ = φ − λKφ,    Kφ = ∫_D K(x, y)φ(y) dy,

where K and H are operators acting from L₂(D) into itself. First we assume that the solution φ(x) of equation (7.1) is known and a family of statistically independent random vectors x_1, x_2, . . . , x_N ∈ D ⊂ ℝ^s is given which is referred to as a random integration grid with distribution density p(x):

   p(x) > 0 for x ∈ D    and    ∫_D p(x) dx = 1.   (7.2)

Then, upon application of the elementary Monte Carlo method to evaluate an integral, equation (7.1) takes the form

   φ(x) − (λ/N) ∑_{j=1}^{N} [K(x, x_j)/p(x_j)] φ(x_j) = f(x) + λρ(x),   (7.3)

and for x = x_i, i = 1, 2, . . . , N, the form

   (E_N − λ/(N − 1) K_N) φ = f̄ + λρ̄,   (7.4)

where ρ(x) is the random error of evaluation of the integral by N trials:

   ρ(x) = ∫_D K(x, y)φ(y) dy − (1/N) ∑_{j=1}^{N} [K(x, x_j)/p(x_j)] φ(x_j),

E_N is the N × N unit matrix, K_N = ‖K_ij‖_{i,j=1}^{N},

   K_ij = { K(x_i, x_j)/p(x_j)  if i ≠ j,
          { 0                   if i = j,   (7.5)

   φ = [φ(x_1), . . . , φ(x_N)]^T,   (7.6)

   f̄ = [f(x_1), . . . , f(x_N)]^T,   (7.7)

   ρ̄ = [ρ_1(x_1), . . . , ρ_N(x_N)]^T,    ρ_i(x) = ∫_D K(x, y)φ(y) dy − (1/(N − 1)) ∑_{j=1, j≠i}^{N} [K(x, x_j)/p(x_j)] φ(x_j).   (7.8)

Under quite general assumptions on the functions K(x, y) and φ(y), the error ρ(x) is by construction unbiased, and its variance tends to zero as N → ∞. This gives rise to the expectation that the φ̃_i obtained from the simultaneous equations

   H_N φ̃ = (E_N − λ/(N − 1) K_N) φ̃ = f̄,   (7.9)

where φ̃ = [φ̃_1, . . . , φ̃_N]^T, are in a sense close to φ(x_i), and the solution of equation (7.1) for any x ∈ D can be expressed in terms of the φ̃_i by the formula

   φ̃(x) = (λ/N) k^T(x) φ̃ + f(x),   (7.10)

where

   k^T(x) = [K(x, x_1)/p(x_1), . . . , K(x, x_N)/p(x_N)].

An expression of the exact solution φ(x) is obtained by substituting the φ(x_i) found from equations (7.4) into (7.3):

   φ(x) = (λ/N) k^T(x) (E_N − λ/(N − 1) K_N)^{-1} f̄ + f(x)
          + (λ²/N) k^T(x) (E_N − λ/(N − 1) K_N)^{-1} ρ̄ + λρ(x).   (7.11)

Thus, the problem of approximate solution of integral equation (7.1) with given f(x) reduces to solving the simultaneous linear algebraic equations (7.9), and the problem of approximate inversion of the integral operator corresponding to equation (7.1) reduces to inversion of the matrix E_N − λ/(N − 1) K_N. Sufficient conditions for the existence of (E_N − λ/(N − 1) K_N)^{-1} are given in Theorem 7.4.6. This technique of approximate inversion of integral operators has well-known deterministic prototypes [34] based on quadrature formulas. But, provided that a quite powerful computer is used, the method we suggest possesses the following essential algorithmic advantages:
(i) automatisation of the allocation of nodes of the integration grid;
(ii) combination of a successive increase of the grid size with recurrent inversion of the matrix E_N − λ/(N − 1) K_N;
(iii) control over the accuracy of the approximate solution in the process of recurrent inversion with an explicitly given stopping rule;
(iv) optimisation of the grid structure by a suitable choice of the function p(x) by means of a preliminary estimation of the solution.
These peculiarities of the suggested method are studied in detail in the following sections.
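A minimal sketch of the method of (7.9)–(7.10) is given below (our own illustration). The test kernel, right-hand side, one-dimensional domain D = [0, 1], and the uniform density p(x) = 1 are assumptions chosen so that the exact solution is known.

import numpy as np

def semi_statistical(kernel, f, lam, N=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=N)               # random grid, p(x) = 1 on [0, 1]
    p = np.ones(N)
    K = kernel(x[:, None], x[None, :]) / p[None, :]
    np.fill_diagonal(K, 0.0)                         # K_ij = 0 for i = j
    H = np.eye(N) - lam / (N - 1) * K                # matrix of system (7.9)
    phi_nodes = np.linalg.solve(H, f(x))             # approximations of phi(x_i)

    def phi(t):                                      # continuation by formula (7.10)
        k = kernel(np.atleast_1d(t)[:, None], x[None, :]) / p[None, :]
        return lam / N * k @ phi_nodes + f(np.atleast_1d(t))

    return x, phi_nodes, phi

# Test equation (an assumption): phi(x) - 0.5 * int_0^1 x*y*phi(y) dy = x has the
# exact solution phi(x) = 1.2 x, so phi(0.5) should come out close to 0.6.
# _, _, phi = semi_statistical(lambda x, y: x * y, lambda x: x, 0.5)
# print(phi(np.array([0.5])))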


7.3 Recurrent inversion formulas

In actual practice, the matrix (E_N − λ/(N − 1) K_N)^{-1} can be evaluated for small N with the use of standard computer software. For large enough N, this approach becomes impractical. Then there is a good reason to utilise the method of recurrent inversion. Its algorithmic advantages arise from the fact that it allows for a steady increase of the solution accuracy, with a minimum expenditure at each step, by means of successive addition of new nodes to the grid, stopping as soon as the required accuracy is attained.
The transition from the matrix (E_N − λ/(N − 1) K_N)^{-1} to (E_{N+m} − λ/(N − 1 + m) K_{N+m})^{-1}, where m is the number of added nodes of the integration grid, must be carried out in two stages. First, from the known matrix (E_N − λ/(N − 1) K_N)^{-1} we calculate (E_N − λ/(N − 1 + m) K_N)^{-1} with the use of the following relations (see [31, 32]):

   T_{N+1} = (E_N − λ/(N − 1 + m) K_N)^{-1},   (7.12)

   T_{j+1} = T_j − [m/(N − 1 + m)] / [1 + (m/(N − 1 + m)) l_j^T T_j l_j] ⋅ T_j l_j l_j^T T_j,   j = 1, . . . , N,   (7.13)

   T_1 = ((N − 1 + m)/(N − 1)) (E_N − λ/(N − 1) K_N)^{-1},

where the T_j are N × N matrices and

   [l_i]_j = δ_ij = { 0 if i ≠ j,
                    { 1 if i = j.

Then, with the use of the bordering method [8], we find (E_{N+m} − λ/(N − 1 + m) K_{N+m})^{-1} by the formulas

   (E_{N+m} − λ/(N − 1 + m) K_{N+m})^{-1} = [ C_11  C_12 ]
                                            [ C_21  C_22 ],   (7.14)

where

   C_11 = T_{N+1} − T_{N+1} A_12 C_21,    C_22 = (A_22 − A_21 T_{N+1} A_12)^{-1},
   C_21 = −C_22 A_21 T_{N+1},             C_12 = −T_{N+1} A_12 C_22,

and A_12, A_21, A_22 are matrices of sizes N × m, m × N, and m × m, respectively; in addition, [A_12; A_22] and [A_21  A_22] are the last m columns and the last m rows of the matrix E_{N+m} − λ/(N − 1 + m) K_{N+m}. Formulas (7.12)–(7.14) are proved as explained in the exercises at the end of this section. Sufficient conditions for correctness of algorithm (7.12)–(7.14) are the non-singularity of the matrices E_N − λ/(N − 1) K_N and E_{N+m} − λ/(N − 1 + m) K_{N+m} and the constraint

   (m/(N − 1 + m)) ‖(E_N − λ/(N − 1) K_N)^{-1}‖ < 1.


As the criterion for accuracy of the obtained solution we choose the functional

   U{∆φ} = ∫_D E{|φ(x) − φ̃(x)|} dx.

At each step of recurrent inversion of the matrix E_N − λ/(N − 1) K_N, we calculate U{∆φ} by formulas (7.23) and (7.24) below, which allow us to estimate the accuracy level attained and to halt the computation as soon as the accuracy reaches the desired value.
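The first stage (7.12)–(7.13) amounts to N rank-one (Sherman–Morrison type) corrections of the already known inverse; the following sketch (our own illustration, with a brute-force check as a comment) shows this update.

import numpy as np

def rescale_inverse(T_old, N, m):
    # T_old = (E_N - lam/(N-1) K_N)^{-1}; returns (E_N - lam/(N-1+m) K_N)^{-1}
    c = m / (N - 1 + m)
    T = (N - 1 + m) / (N - 1) * T_old                 # T_1 of (7.13)
    for j in range(N):                                 # N rank-one corrections
        col = T[:, j].copy()                           # T_j l_j
        row = T[j, :].copy()                           # l_j^T T_j
        T -= (c / (1.0 + c * T[j, j])) * np.outer(col, row)
    return T

# Brute-force check on a random small example (an illustrative test, not the book's):
# rng = np.random.default_rng(0); N, m, lam = 6, 3, 0.5
# K = rng.normal(size=(N, N)); np.fill_diagonal(K, 0.0)
# T_old = np.linalg.inv(np.eye(N) - lam / (N - 1) * K)
# assert np.allclose(rescale_inverse(T_old, N, m),
#                    np.linalg.inv(np.eye(N) - lam / (N - 1 + m) * K))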

Exercise 7.3.1. Prove the validity of the formula

   (A + a b^T)^{-1} = A^{-1} − A^{-1} a b^T A^{-1} / (1 + a^T A^{-1} b),

where A is a square matrix, and a, b are vectors.

Exercise 7.3.2. Prove the validity of formulas (7.12), (7.13) with the use of the following matrix recurrence relations:

   S_{j+1} = S_j + (m/(N − 1 + m)) l_j l_j^T,   j = 1, . . . , N,
   S_1 = ((N − 1)/(N − 1 + m)) (E_N − λ/(N − 1) K_N).

Hint: Denote T_j = S_j^{-1} and use the formula from the previous exercise.

Exercise 7.3.3. Prove (7.14) with the use of the above result and the formulas for inversion of block matrices [8].

7.4 Non-degeneracy of the matrix of the semi-statistical method In this and the subsequent sections, we assume that the numerical parameter λ of equation (7.1) differs from the eigenvalue of the integral operator with kernel K(x, y). In this case, the integral equation has a unique solution for any right-hand side f(x) (see [50]). As the integral operator is approximated by a finite-dimensional operator depending on the sample, it is natural to expect that the spectrum becomes biased. Besides, it may happen that one of the eigenvalues of the matrix K N coincides with (N − 1)/λ, so the matrix H N of the semi-statistical method becomes singular. It is clear, though, that the greater N is, in other words, the better the approximation is, the smaller is the probability of this event. In this section we prove this fact. Let us formulate some elementary assertions which will be widely used in what follows.

Proposition 7.4.1. Let A and B be random events, and let the probabilities of their occurrences be such that

   P{A} ≥ 1 − p_A    and    P{B} ≥ 1 − p_B.

Then

   P{AB} ≥ 1 − p_A − p_B.

Proposition 7.4.2. Let A and B be random events, and let

   P{A} ≥ 1 − p_A    and    P{B | A} ≥ 1 − p_B.

Then

   P{AB} ≥ 1 − p_A − p_B.

Proposition 7.4.3. Let X_1 and X_2 be essentially positive random variables, and let

   P{X_1 ≤ a_1} ≥ 1 − p_1    and    P{X_2 ≤ a_2} ≥ 1 − p_2.

Then

   P{X_1 + X_2 ≤ a_1 + a_2} ≥ 1 − p_1 − p_2,    P{X_1 ⋅ X_2 ≤ a_1 a_2} ≥ 1 − p_1 − p_2.

Proposition 7.4.4. Let A and B be square matrices of identical dimension, and let (AB)^{-1} and (BA)^{-1} exist. Then the matrix A is invertible, and A^{-1} = B(AB)^{-1} = (BA)^{-1} B.

The proofs of the above assertions are left to the reader as exercises. Let us turn to the proof of the key assertion of this section. We introduce the set

   Ω = D × D × ⋅ ⋅ ⋅ × D = D^N   (N factors)

of all possible samples ω = (x_1, x_2, . . . , x_N) with independent components. It is clear that the matrix H, the vectors φ̃, ρ̄, and the functions ρ(x) and φ̃(x) implicitly depend on ω. The density of the joint distribution of the elements of the sample

   p(ω) = ∏_{i=1}^{N} p(x_i)

defines a probability measure on the set Ω. In Theorem 7.4.6, we show that the probability that the matrix H of the semi-statistical method is invertible (in other words, the probability measure of the set of those samples for which H is invertible) tends to one as the sample size N grows.


For the sake of convenience, we introduce a special norm in ℝ^N which depends on the sample. For a fixed sample ω = (x_1, x_2, . . . , x_N), we define the norm of a vector V in ℝ^N by the formula

   ‖V‖²_ω = ∑_{i=1}^{N} v_i² / p(x_i).

Since for any square matrix A,

   ‖AV‖²_ω = ∑_{i=1}^{N} (1/p(x_i)) ( ∑_{j=1}^{N} a_ij v_j )²
            ≤ ∑_{i=1}^{N} (1/p(x_i)) [ ∑_{j=1}^{N} a_ij² p(x_j) ] ∑_{j=1}^{N} v_j²/p(x_j)
            = ‖V‖²_ω ∑_{i=1}^{N} ∑_{j=1}^{N} a_ij² p(x_j)/p(x_i),

the following bound for the induced operator norm of the matrix holds true:

   ‖A‖_ω = sup_{V ≠ 0} ‖AV‖_ω/‖V‖_ω ≤ √( ∑_{i=1}^{N} ∑_{j=1}^{N} a_ij² p(x_j)/p(x_i) ) = T(A).
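A small numerical illustration of this norm and of the bound ‖A‖_ω ≤ T(A) (our own check, not part of the text):

import numpy as np

rng = np.random.default_rng(0)
N = 8
p = rng.uniform(0.5, 2.0, size=N)                   # values p(x_i) for a fixed sample omega
A = rng.normal(size=(N, N))
V = rng.normal(size=N)

norm_omega = lambda v: np.sqrt(np.sum(v ** 2 / p))
T_A = np.sqrt(np.sum(A ** 2 * (p[None, :] / p[:, None])))   # weights p(x_j)/p(x_i)

assert norm_omega(A @ V) <= T_A * norm_omega(V) + 1e-12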

In the theory of integral equations it is known [50] that if λ is not an eigenvalue of the integral operator in equation (7.1), then there exists an integral operator, referred to as the resolvent, with the kernel K^−(x, y), which satisfies the equality

   λ ∫_D K(x, z) K^−(z, y) dz = λ ∫_D K^−(x, z) K(z, y) dz = K^−(x, y) − K(x, y),   (7.15)

which is referred to as the resolvent equation. By the kernel K^−(x, y) we construct the matrices

   K^−_N = {K^−_ij}_{i,j=1}^{N},    K^−_ij = { 0                       if i = j,
                                              { K^−(x_i, x_j)/p(x_j)   if i ≠ j,

   H^−_N = E_N + λ/(N − 1) K^−_N,

and consider the matrix

   H_N H^−_N = (E_N − λ/(N − 1) K_N)(E_N + λ/(N − 1) K^−_N)
             = E_N + λ/(N − 1) (K^−_N − K_N) − λ²/(N − 1)² K_N K^−_N = E_N + A.

The elements of the matrix A are of the form

   a_ij = (λ/(N − 1)) (K^−(x_i, x_j) − K(x_i, x_j))/p(x_j) (1 − δ_ij) − (λ²/(N − 1)²) ∑_{l=1, l≠i,j}^{N} K(x_i, x_l) K^−(x_l, x_j) / (p(x_l) p(x_j)),

where δ_ij is the Kronecker symbol. Consider

   (T(A))² = ∑_{i=1}^{N} ∑_{j=1}^{N} a_ij² p(x_j)/p(x_i)

and calculate its mathematical expectation over ω.

Lemma 7.4.5. Let the following integrals converge:

   ∫∫∫_{D×D×D} K²(x, z) K^{−2}(z, y) / p(z) dz dx dy,    ∫∫_{D×D} K²(x, y) K^{−2}(y, x) / (p(x) p(y)) dx dy.

Then E{(T(A))2 } is of order O( N1 ) as N → ∞. Proof. We see that E{(T(A))2 } =

N N λ4 K(xi , xl )K − (xl , xi ) 2 ) } ∑ E{( ∑ 4 p(xl )p(xi ) (N − 1) i=1 l=1 l=i̸

+



(N −

N

N

λ2

1)2

∑ ∑ E{

i=1 j=1 j=i̸

p(xj ) K − (xi , xj ) − K(xi , xj ) ( p(xi ) p(xj )

N K(xi , xl )K − (xl , xj ) 2 λ ) } ∑ N − 1 l=1 p(xl )p(xj ) l=i,j ̸

=

λ4 (N −

1)4 N

N

N

i=1

l=1 l=i̸

∑ [ ∑ E{ N

+ ∑ ∑ E{ l=1 m=1 ̸ l=i̸ m=i,l

+

K 2 (xi , xl )K − 2 (xl , xi ) } p2 (xl )p2 (xi )

K(xi , xl )K − (xl , xi )K(xi , xm )K − (xm , xi ) }] p(xl )p(xm )p2 (xi )

N N (K − (xi , xj ) − K(xi , xj ))2 λ2 } ∑ ∑ [E{ 2 p(xi )p(xj ) (N − 1) i=1 j=1 j=i̸



N

(K − (xi , xj ) − K(xi , xj ))K(xi , xl )K − (xl , xj ) 2λ } ∑ E{ N − 1 l=1 p(xi )p(xl )p(xj ) l=i,j ̸

+

+

λ2

(N − 1)2

N

N

∑ ∑ E{

l=1 m=1 ̸ l=i,j ̸ m=i,j,l

K(xi , xl )K − (xl , xj )K(xi , xm )K − (xm , xj ) } p(xi )p(xl )p(xm )p(xj )

N K 2 (xi , xl )K − 2 (xl , xj ) λ2 E{ }]. ∑ (N − 1)2 l=1 p(xi )p2 (xl )p(xj ) l=i,j ̸

7.4 Non-degeneracy of the matrix | 159

Evaluating the mathematical expectations and using equality (7.15), we obtain E{(T(A))2 } =

λ4 K 2 (x, y)K − 2 (y, x) dx dy N[(N − 1) ∫ ∫ p(x)p(y) (N − 1)4 D D

2 1 ( ∫ K(x, z)K − (z, x)) dx] p(x)

+ (N − 1)(N − 2) ∫ D

D

2 λ4 − + N(N − 1)[ K(x, z)K (z, y) dz) dx dy ( ∫ ∫ ∫ (N − 1)2 D D

D

2 2 − (N − 2) ∫ ∫ ( ∫ K(x, z)K − (z, y) dz) dx dy N−1 D

D D

+

2 1 (N − 2)(N − 3) ∫ ∫ ( ∫ K(x, z)K − (z, y) dz) dx dy 2 (N − 1) D D

D

K 2 (x, z)K − 2 (z, y) 1 dz) dx dy] + (N − 2) ( ∫ ∫ ∫ p(z) (N − 1)2 =

λ2 N (N −

D D

1)2

∫ D

D

2 1 N−2 [ ( ∫ K(x, z)K − (z, x) dz) p(x) N − 1 D

K 2 (x, y)K − 2 (y, x) 1 dy] dx + ∫ N−1 p(y) D

+

λ2 N (N −

1)2

∫ ∫[ D D

λ2 (N − 2) K 2 (x, z)K − 2 (z, y) dz ∫ N−1 p(z) D

2 N−3 − ( ∫ K(x, z)K − (z, y) dz) ] dx dy. N−1 D

The lemma is proved. By virtue of the Chebyshev inequality, from this lemma it follows that for any a > 0, P{T(A) > a} = O(

1 ), √N

that is, T(A) converges in probability to zero as N → ∞. Therefore, for any ε > 0 there exists N A such that for all N > N A , P{T(A) < But if T(A) ≤ 12 , then ‖A‖ω ≤ and the bound

1 2

1 } ≥ 1 − ε. 2

as well, hence the matrix (HH − ) = (E N + A) is invertible,

‖(HH − )−1 ‖ω ≤

1 ≤2 1 − ‖A‖ω

160 | 7 Semi-Statistical Method holds. Thus, the following assertion is true: for N > N A , P{∃(HH − )−1 , ‖(HH − )−1 ‖ω ≤ 2} ≥ 1 − ε.

(7.16)

Considering the matrix H − H = E N + B,

B=

λ2 λ (K − − K) − K − K, N−1 (N − 1)2

one is able to prove an assertion concerning T(B) which is similar to the above lemma. Reasoning in the same manner, we find that for any ε > 0 there exists N B such that for all N > N B , P{∃(H − H)−1 , ‖(H − H)−1 ‖ω ≤ 2} ≥ 1 − ε. (7.17) By virtue of Proposition 7.4.4, for T(A), T(B) ≤ the following bound holds: ‖H −1 ‖ω ≤ 2‖H − ‖ω ≤ 2(1 +

1 2

the matrix H is invertible, and

|λ| T(K − )). N−1

(7.18)

It is easily seen that E{(T(K − ))2 } = N(N − 1)‖K − (x, y)‖2L2 (D×D) . Hence, by virtue of the Chebyshev inequality, for any ε > 0 the relation P{T(K − ) ≤

1 √ N(N − 1)‖K − (x, y)‖L2 (D×D) } ≥ 1 − ε ε

(7.19)

is true. Combining (7.16)–(7.19), introducing N0 = max{N A , N B }, and making use of Proposition 7.4.1, we arrive at the following theorem. Theorem 7.4.6. Let the following integrals converge: ∫∫∫ D D D

∫∫ D D

K 2 (x, z)K − 2 (z, y) dz dx dy, p(z)

K 2 (x, y)K − 2 (y, x) p(x)p(y)

∫∫∫ D D D

K − 2 (x, z)K 2 (z, y) dz dx dy, p(z)

dx dy.

Then for any ε > 0 there exists N0 (ε) such that for all N > N0 the inequality P{∃H −1 , ‖H −1 ‖ω ≤ C} ≥ 1 − 3ε holds true, where C = 2(1 +

|λ| N0 √ ‖K − (x, y)‖L2 (D×D) ). ε N0 − 1

(7.20)


7.5 Convergence of the method From formulas (7.10) and (7.11) we arrive at the following expression of the error of the approximate solution: λ2 T (7.21) k (x)H N−1 ρ.̄ N Let us estimate this expression under the assumption that the matrix H of the method is invertible and ‖H −1 ‖ω ≤ C. We introduce the diagonal matrix φ(x) − ̃ φ(x) = λρ(x) +

P = diag{√ p(x1 ), √ p(x2 ), . . . , √ p(xN )}, and obtain |k T (x)H −1 ρ|̄ = |k T (x)P ⋅ P−1 H −1 ρ|̄ ≤ ‖k T (x)P‖‖P−1 H −1 ρ‖̄ = ‖k T (x)P‖‖H −1 ρ‖̄ ω ≤ C‖k T (x)P‖‖ρ‖̄ ω . Here and in what follows ‖ ⋅ ‖ stands for the standard Euclidean norm of a vector and the norm of a matrix it induces on the Euclidean space of the corresponding dimensionality (ℝN in our case). Hence, integrating the absolute value of the error, from (7.21) we arrive at φ(x)| dx ≤ |λ| ∫ |ρ(x)| dx + ∫ |φ(x) − ̃ D

D

λ2 C‖ρ‖̄ ω ∫ ‖k(x)P‖ dx = R(ω). N D

Let us estimate E{R(ω)}. It is not so difficult to see that 1/2

E{R(ω)} ≤ |λ|( ∫ E{ρ2 (x)} dx)

1/2 λ2 C(E{‖ρ‖̄ 2ω })1/2 ( ∫ E{‖k(x)P‖2 } dx) . N

+

D

D

Let us consider separately the terms on the right-hand side. Elementary transformations yield E{ρ2 (x)} = E{( ∫ K(x, y)φ(y) dy − D 2

= ( ∫ K(x, y)φ(y) dy) − D

1 N K(x, xl )φ(xl ) 2 ) } ∑ N l=1 p(xl ) N 2 K(x, xl )φ(xl ) } ∫ K(x, y)φ(y) dy E{ ∑ N p(xl ) l=1 D

+

N

N

1 K(x, xl )K(x, xm )φ(xl )φ(xm ) [N(N − 1) ∑ ∑ E{ } p(xl )p(xm ) N2 l=1 m=1 m=l̸

N

+ N ∑ E{ l=1

=

K 2 (x, x

l )φ

p2 (x

2 (x

l)

l)

}]

2 1 K 2 (x, y)φ2 (y) dy − ( ∫ K(x, y)φ(y) dy) ]. [∫ N p(y) D

D

162 | 7 Semi-Statistical Method Therefore, ∫ E{ρ2 (x)} dx = D

where ∆2 = ∫ ∫ D D

2 K 2 (x, y)φ2 (y) dy dx − ∫ ( ∫ K(x, y)φ(y) dy) dx. p(y) D

Similarly we obtain E{

E{‖ρ‖̄ 2ω } = Finally,

(7.22)

D

ρ2i (xi ) ∆2 , }= p(xi ) N−1

therefore,

N

E{‖k(x)P‖2 } = ∑ E{ i=1

hence,

∆2 , N

N ∆2 . N−1

K 2 (x, xi ) } = N ∫ K 2 (x, y) dy, p(xi ) D

∫ E{‖k(x)P‖2 } dx = N‖K(x, y)‖2L2 (D×D) . D

We thus arrive at the bound E{R(ω)} ≤ where

|λ| + λ2 C‖K(x, y)‖L2 (D×D) M(C, ∆) ∆= , √N − 1 √N − 1

M(C, ∆) = (|λ| + λ2 C‖K(x, y)‖L2 (D×D) )∆.

Hence we obtain U{∆φ} ≤

M(C, ∆) . √N − 1

Applying the Chebyshev inequality to R(ω), we obtain P{R(ω) ≤

M(C, ∆) } ≥ 1 − ε. ε√N − 1

Thus, the following theorem is true. Theorem 7.5.1. Let the following integral converge: ∫∫ D D

K 2 (x, y)φ2 (y) dy dx. p(y)

Then P{ ∫ |φ(x) − ̃ φ(x)| dx ≤ D

M(C, ∆) | ∃H −1 , ‖H −1 ‖ω ≤ C} ≥ 1 − ε. ε√N − 1

(7.23) (7.24)


Combining the results of Theorems 7.4.6 and 7.5.1 on the base of Proposition 7.4.2, we conclude that the following theorem on convergence of the semi-statistical method is valid. Theorem 7.5.2. Let the hypotheses of Theorems 7.4.6 and 7.5.1 be satisfied. Then for any ε > 0 there exists N0 (ε) such that for all N > N0 the inequality P{∃H −1 , ∫ |φ(x) − ̃ φ(x)| dx ≤ D

M(C, ∆) } ≥ 1 − 4ε ε√N − 1

is true, where C, ∆, and M(C, ∆) are defined by equalities (7.20), (7.22), and (7.23). Theorem 7.5.2 shows that for a given confidence probability level the L1 -norm of the error of approximate solution decreases as O(1/√N) as N grows.

7.6 Adaptive capabilities of the algorithm The estimate obtained in the preceding section for the convergence rate of the semistatistical method, which is equal to 1/√N, alone does not guarantee that we attain the required accuracy because N is not so large (of order of magnitude of a hundred) since matrices of large dimension must be inverted on a computer with rather limited memory. But, as we know [62], the base technique in the Monte Carlo method which permits to attain the required accuracy in actual practice is not the unlimited growth of the number N but a reasonable choice of the density of distribution of the random nodes of the grid. In the suggested semi-statistical method, there is also a way to tune the density with the aim to decrease the errors of estimators of the obtained solution. Looking at the structure of the functional U{∆φ} in (7.24), which characterises the error of the obtained solution, we see that the optimal density popt (x) must minimise expression (7.22). Solving this variational problem under condition (7.2), we find that popt (x) = α√ Q1 (x, x)φ2 (x), where

(7.25)

Q1 (x, z) = ∫ K(y, x)K(y, z) dy, D

and α is the normalising factor determined by condition (7.2). Relation (7.25) opens the way for successive refinement of the choice of the distribution density if one continues to substitute the current approximation ̃ φ(x) treated as the true solution φ(x) into (7.25). The effect of convergence speed-up under such a tuning of the density p(x) has been investigated in Chapters 2–4 on the example of evaluation of integrals by the adaptive method of importance sampling.

164 | 7 Semi-Statistical Method During actual computation, the integrals entering (7.25) and (7.22) should be evaluated by the Monte Carlo method by the same sample as in (7.9). The corresponding formulas are of the form popt (xi ) ≃

∆2 ≈

α

N

[∑ √N − 1 √

j=1 j=i̸

K 2 (xj , xi ) 2 φi , ]̃ p(xj )

(7.26)

N ̃ φ2i N K 2 (xj , xi ) 1 ∑ ∑ N(N − 1) i=1 p(xi ) j=1 p(xj ) j=i̸

N N N ̃ K(xk , xi )K(xk , xj ) φi ̃ φj 1 − . ∑∑ ∑ N(N − 1)(N − 2) i=1 j=1 p(xi )p(xj ) k=1 p(xk ) j=i̸

(7.27)

k=i,j ̸

In view of the aforesaid, the following computation procedure seems to be most efficient. The density p(x) is given a priori, and an estimator of the solution is evaluated on a rather small sample by means of inversion procedures implemented in the standard software of a particular computer used. Then by formula (7.26) we find the next approximation to the density p(x), and the process continues until the functional ∆ practically ceases to change. Then we perform a recursive increase of the number of nodes in the sample and estimation of the characteristics of the solution we are interested in. At the same time we estimate the evaluation accuracy by formula (7.27). This process continues until the desired accuracy is attained. If the further increase of the number of nodes is impractical, say, due to either growth of errors of matrix inversion or excessive computational labour required for each step, then the allocation of the grid nodes can be optimised again. If, finally, the necessary number of grid nodes is known beforehand, then it makes sense to form the matrix of appropriate size at once, invert it by the bordering method without intermediate estimation of the solution and of the accuracy, and then recursively refine the obtained solution. Then the need for the first iteration stage of inversion (7.13) vanishes, which really saves some computer resources. In [10], attempts were made to refine the grid in the method of finite elements. But they lead to an essential growth of the dimensionality of the system one solved. The algorithm we suggested in (7.26), (7.27) does not possess such deficiencies. Exercise 7.6.1. Prove formula (7.25).
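One refinement step of the node density via formula (7.26) can be sketched as follows (our own illustration; the inputs, nodes x, current density values p(x_i), kernel, and current solution values φ̃_i, are assumed to come from a previous run of the method, and the function returns only the relative values of the refined density at the nodes, since the normalising factor α and the constant 1/√(N − 1) cancel on normalisation; turning these values into an actual density, for example a piecewise constant one, is left to the surrounding code).

import numpy as np

def refine_density(x, p, kernel, phi_nodes):
    Kji2 = kernel(x[:, None], x[None, :]) ** 2     # element [j, i] = K(x_j, x_i)^2
    np.fill_diagonal(Kji2, 0.0)                    # exclude the term j = i in (7.26)
    q = np.sqrt((Kji2 / p[:, None]).sum(axis=0)) * np.abs(phi_nodes)
    return q / q.sum()                             # relative values of p_opt at the nodes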


7.7 Qualitative considerations on the relation between the semi-statistical method and the variational ones On the physical level of rigour, the suggested method can be treated as a variational one, where the part of the coordinate functions is played by the step functions as Haar functions with a random step width. The conventional application of variational methods consists of a felicitous choice of the coordinate functions by means of analytic solution of some particular cases of the problem and subsequent finding of the coefficients of expansion of the solution over these functions in a small part of this series (three-five terms). In the method we suggest, the form of the coordinate functions is chosen in advance by means of fixing the density p(x), while the required accuracy is achieved by dealing with a large number of terms of the series. So, the complexity decreases at the cost of a more complete utilisation of the computer resources. At the same time, all a priori information concerning the solution can be embedded into the density p(x) whose good choice would allow for a quite fast achievement of the required accuracy. Here an automatic refinement of the density is possible, too, with the use of relation (7.26). Thus, the method may work well with different volumes of a priori information.

7.8 Application of the method to integral equations with a singularity 7.8.1 Description of the method and peculiarities of its application As a rule, the integral equations which most boundary problems reduce to possess a singularity. Because of this, one must take extensive care in order to make a good use of the semi-statistical method. Integral representations of boundary problems are derived with the use of the potential theory [42, 63]. The solution S(z) of the problem for a body V bounded by a surface D is represented in the form of a potential S(z) = ∫ R1 (z, x)φ(x) dx,

z ∈ V + D,

(7.28)

D

and the unknown potential density φ(x) is determined by solving the integral equation φ(x) − λ ∫ K1 (x, y)φ(y) dy = f(x).

(7.29)

D

Since the kernels K1 (x, y), R1 (z, x) have a singularity, before applying the semistatistical procedure one must regularise the above integral relations with the use of

166 | 7 Semi-Statistical Method the technique described in [55], in other words, one adds and subtracts terms which have the same singularity as the kernels in (7.28), (7.29). As a result, these relations take the form (1 + Φ(x))φ(x) − λ ∫[K1 (x, y)φ(y) + K2 (x, y)φ(x)] dy = f(x),

(7.30)

D

S(z) = Ψ(z)φ(z∗ ) + ∫[R1 (z, x)φ(x) − R2 (z, x)φ(z∗ )] dx,

(7.31)

D

where x, y, z∗ ∈ D and z ∈ V + D. If the point z finds itself on the surface D, then z ≡ z∗ . Here f(x), Φ(x), Ψ(z) are given functions defined in the whole domain of variation of their arguments, Φ(x) = λ ∫ K2 (x, y) dy,

(7.32)

D

Ψ(z) = ∫ R2 (z, x) dx,

(7.33)

D

and φ(x) and S(z) are the functions we search for, which are defined in D and V + D, respectively. Regarding φ(x) and the kernels K i (x, y), R i (z, x), i = 1, 2, we assume the following: The kernels K i (x, y) and R i (z, x) have singularities at points x = y and z∗ = x, respectively, such that (i) all integrals entering relations (7.30)–(7.33) exist, even at z = z∗ , x = y; (ii) the following integrals converge: ∫ D

[K1 (x, y)φ(y) + K2 (x, y)φ(x)]2 dy, p(y)

∫ D

[R1 (z∗ , x)φ(x) − R2 (z∗ , x)φ(z∗ )]2 dx; p(x)

(iii) the following integrals diverge: ∫ D

K 2i (x, y) dy, p(y)

∫ D

R2i (z∗ , x) dx, p(x)

i = 1, 2.

Here the function p(x) is defined in D, and 0 ≤ p(x) ≤ p0 < ∞,

∫ p(x) dx = 1.

(7.34)

D

Moreover, we assume that λ is a real number.

First we assume that the solution φ(x) of equation (7.30) is known. Then the elementary version of the Monte Carlo method applied to the evaluation of the integrals entering (7.30) and (7.32) yields the formulas

∫_D [K1(x, y)φ(y) + K2(x, y)φ(x)] dy = (1/N) ∑_{j=1}^{N} [K1(x, xj)φ(xj) + K2(x, xj)φ(x)] / p(xj) + α_N(x)   (7.35)

and

Φ(x) = (λ/N) ∑_{j=1}^{N} K2(x, xj) / p(xj) + λκ_N^{(2)}(x).

Here xj ∈ D are independent random vectors with the distribution density p(x) obeying conditions (7.34); α_N(x) and κ_N^{(2)}(x) are random errors of evaluation of the corresponding integrals. In view of the obtained relations, equation (7.30) for any given x ∈ D takes the form

(1 + λκ_N^{(2)}(x))φ(x) − (λ/N) ∑_{j=1}^{N} [K1(x, xj) / p(xj)] φ(xj) = f(x) + λα_N(x),   (7.36)

and for x = xi, i = 1, . . . , N, the form

(1 + λκ_{N−1}^{(2)}(xi))φ(xi) − (λ/(N−1)) ∑_{j=1, j≠i}^{N} [K1(xi, xj) / p(xj)] φ(xj) = f(xi) + λα_{N−1}(xi).   (7.37)

Since, as we will see below, for large N the error α_N(x) is small, we should expect that the values φ̃_N(xi), i = 1, . . . , N, obtained from the simultaneous linear equations

(1 + λκ_{N−1}^{(2)}(xi))φ̃_N(xi) − (λ/(N−1)) ∑_{j=1, j≠i}^{N} [K1(xi, xj) / p(xj)] φ̃_N(xj) = f(xi)   (7.38)

are close to φ(xi); from them, by means of interpolation, we find the solution of the integral equation in the whole domain D.

We observe that a relation similar to (7.37) can be derived from the non-regularised equation (7.29). But in that case dropping the random error κ_{N−1}^{(1)}(xi) leads to a large error because its variance is infinite: the integral

∫_D K1²(xi, y) / p(y) dy

diverges.

Similarly, applying the Monte Carlo method to the integral entering (7.31), we obtain

S(z) = Ψ(z)φ(z*) + (1/N) ∑_{j=1}^{N} [R1(z, xj)φ(xj) − R2(z, xj)φ(z*)] / p(xj) + β_N(z).   (7.39)

It is obvious that there is a good reason to use the same sample of random vectors xj as in (7.37). As we will see, the random error β_N(z) becomes small as N grows. So, in order to find an approximate value of S(z), it suffices to substitute the solution φ̃_N(xj) of simultaneous equations (7.38) and the value of the function φ(z*) obtained by interpolation into formula (7.39) and drop the infinitesimal error β_N(z).

In the same way as for the integral equation, we can demonstrate that in order to evaluate S(z*) at the points z* on the surface D it is better to use relation (7.39) than the analogous relation derived from the non-regularised integral transform. If one needs to find the solution S(z) at points z inside V, then the regularisation is, generally speaking, not necessary, because in this case the kernels R_i(z, x), i = 1, 2, have no singularities.

7.8.2 Recurrent inversion formulas

We rewrite relations (7.36), (7.37) in matrix form:

(D_N − λ/(N−1) K_N) φ_N = f̄_N + λᾱ_N,   (7.40)
(D_N − λ/(N−1) K_N) φ̃_N = f̄_N,   (7.41)

where

D_N = diag{1 + λκ_{N−1}^{(2)}(x1), . . . , 1 + λκ_{N−1}^{(2)}(xN)},
K_ij = K1(xi, xj)/p(xj) if i ≠ j,   K_ij = 0 if i = j,   K_N = ‖K_ij‖_{i,j=1}^{N},

and

φ_N = [φ(x1), . . . , φ(xN)]^T,   φ̃_N = [φ̃_N(x1), . . . , φ̃_N(xN)]^T,
f̄_N = [f(x1), . . . , f(xN)]^T,   ᾱ_N = [α_{N−1}(x1), . . . , α_{N−1}(xN)]^T.

If the matrix D_N − λ/(N−1) K_N is non-degenerate, then from (7.40), (7.41) it follows that

φ_N = (D_N − λ/(N−1) K_N)^{−1} (f̄_N + λᾱ_N);
φ̃_N = (D_N − λ/(N−1) K_N)^{−1} f̄_N.

Thus, in order to obtain the estimate φ̃_N of the solution of equation (7.30) for an arbitrary function f(x), it in fact suffices to evaluate the matrix (D_N − λ/(N−1) K_N)^{−1}, which, as in Section 7.3, should be computed by the recurrent inversion formulas. These relations are similar to those given in Section 7.3 with E_N replaced by D_N.
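A minimal computational sketch of system (7.41) is given below in Python. The kernel, the regularising kernel K2 = −K1, the density p(x) ≡ 1 on D = [0, 1], the value of λ and the right-hand side are illustrative assumptions chosen only so that the matrices D_N and K_N can be assembled in closed form; they are not the problems treated in this book, and a direct solver is used in place of the recurrent inversion formulas.

```python
import numpy as np

# Toy illustration of system (7.38)/(7.41) on D = [0, 1] with p(x) = 1,
# K1(x, y) = |x - y|^(-1/2) (weakly singular) and K2 = -K1, so that
# K1(x, y)phi(y) + K2(x, y)phi(x) is regular; lambda and f are arbitrary.
rng = np.random.default_rng(0)
lam, N = 0.1, 200
x = rng.uniform(0.0, 1.0, N)                        # nodes with density p(x) = 1
p = np.ones(N)
f = 1.0 + x                                         # right-hand side at the nodes

# Phi(x) = lam * integral of K2(x, y) dy over [0, 1], known in closed form here
Phi = -lam * 2.0 * (np.sqrt(x) + np.sqrt(1.0 - x))

diff = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(diff, 1.0)                         # dummy value, diagonal is zeroed below
K = 1.0 / np.sqrt(diff) / p[None, :]                # K_ij = K1(x_i, x_j) / p(x_j)
np.fill_diagonal(K, 0.0)                            # K_ii = 0 (Section 7.8.2)

# D_N of (7.40): 1 + lam*kappa_{N-1}(x_i) = 1 + Phi(x_i) + lam/(N-1) * sum_{j!=i} K1/p,
# because K2 = -K1 in this toy example.
D = np.diag(1.0 + Phi + lam / (N - 1) * K.sum(axis=1))

phi_tilde = np.linalg.solve(D - lam / (N - 1) * K, f)   # estimate of phi at the nodes
print(phi_tilde[:5])
```

In an actual application the matrix inverse would be updated recurrently as new nodes are drawn, exactly as described above.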

7.8.3 Error analysis

For equations possessing a singularity, the analysis of convergence of the semi-statistical method is much more complicated than in the regular case. In particular, the proof that the matrix of the method has an inverse with large probability for N large enough can be carried out only under very strict constraints on the kernels K_i(x, y), i = 1, 2. Nevertheless, under the assumption that the matrix of the method is invertible and the norm of the inverse matrix is bounded for large N, we are able to obtain some estimates of the error of the approximate solution.

The objective consists of the evaluation of S(z). If the function φ(x) is known, then in order to compute S(z) one uses relation (7.39). Let us demonstrate that the error β_N(z) in this formula is asymptotically infinitesimal as N → ∞. For this purpose we consider the ‘worst’ case where the point z finds itself on the surface, that is, z = z*. From the definition of β_N(z*) we arrive at the following expressions for the mathematical expectation and variance of this random error:

E{β_N(z*)} = E{ ∫_D [R1(z*, x)φ(x) − R2(z*, x)φ(z*)] dx − (1/N) ∑_{j=1}^{N} [R1(z*, xj)φ(xj) − R2(z*, xj)φ(z*)] / p(xj) } = 0,

E{β_N²(z*)} = E{ ( ∫_D [R1(z*, x)φ(x) − R2(z*, x)φ(z*)] dx − (1/N) ∑_{j=1}^{N} [R1(z*, xj)φ(xj) − R2(z*, xj)φ(z*)] / p(xj) )² }
= (1/N) [ ∫_D [R1(z*, x)φ(x) − R2(z*, x)φ(z*)]² / p(x) dx − ( ∫_D [R1(z*, x)φ(x) − R2(z*, x)φ(z*)] dx )² ].   (7.42)

By virtue of the constraints on the kernels R_i(z, x) imposed in Section 7.8.1, from (7.42) we immediately find that E{β_N²(z*)} → 0 as N → ∞.

Thus, the error of approximate evaluation of S(z) by formula (7.39) can be made as small as we wish. But in actual practice the function φ(x) in (7.39) is given with some uncertainty because of the approximation of integral equation (7.30) by the simultaneous algebraic equations (7.38). Hence an additional component of the error arises, whose level cannot be rigorously analysed due to the quite complicated statistical nature of the function φ̃(x). At the same time, it is obvious that if the error ∆φ(x) = φ̃(x) − φ(x) is small enough, then the error introduced by this component into (7.39) is small, too. Let us consider the question of how small ∆φ(x) is. Since the value of φ̃(x) at an arbitrary point x ∈ D is obtained by means of interpolation between the neighbouring nodes, most errors are due to inaccuracies in the computation of φ̃(xi) at the nodes xi ∈ D, i = 1, . . . , N, because the errors of the interpolation formulas are small for a sufficiently fine grid [8].

Let us demonstrate that the solution φ̃(xi) of approximate system (7.38) at the nodes xi ∈ D converges in the mean to the solution of integral equation (7.30), that is, the functional

U_N = (1/N) ∑_{i=1}^{N} E{|∆φ(xi)|} → 0 as N → ∞.

From relations (7.40), (7.41) it follows that the error ∆φ_N is of the form

∆φ_N = φ_N − φ̃_N = λ (D_N − λ/(N−1) K_N)^{−1} ᾱ_N.

We estimate U_N as follows:

U_N = (1/N) ∑_{i=1}^{N} E{|∆φ(xi)|} ≤ (1/√N) E{√(‖∆φ_N‖²)}
 ≤ (|λ|/√N) E{ ‖(D_N − λ/(N−1) K_N)^{−1}‖ ‖ᾱ_N‖ }
 ≤ (|λ|/√N) √(E{‖(D_N − λ/(N−1) K_N)^{−1}‖²}) √(E{‖ᾱ_N‖²}).

Furthermore, taking into account (7.35) and applying the Euclidean norm, we obtain

E{‖ᾱ_N‖²} = ∑_{i=1}^{N} E{α_{N−1}²(xi)} = ∑_{i=1}^{N} E{ ∫_D α_{N−1}²(x) dx }
= N/(N−1) ∫_D dx [ ∫_D [K1(x, y)φ(y) + K2(x, y)φ(x)]² / p(y) dy − ( ∫_D [K1(x, y)φ(y) + K2(x, y)φ(x)] dy )² ].

Changing the order of integration, we arrive at

E{‖ᾱ_N‖²} = N/(N−1) [ ∫_D L(y, y)/p(y) dy − ∫_D ∫_D L(x, y) dx dy ],

where

L(x, y) = ∫_D [K1(ξ, x)φ(x) + K2(ξ, x)φ(ξ)][K1(ξ, y)φ(y) + K2(ξ, y)φ(ξ)] dξ.

Thus,

U_N ≤ (|λ|/√(N−1)) √(E{‖(D_N − λ/(N−1) K_N)^{−1}‖²}) √( ∫_D L(y, y)/p(y) dy − ∫_D ∫_D L(x, y) dx dy ).

The obtained bound, under the condition that the norm of the inverse matrix (D_N − λ/(N−1) K_N)^{−1} is bounded in the mean square sense, allows us not only to prove the convergence of U_N to zero as N → ∞ but also to estimate the convergence rate. Namely, if there exists N0 > 0 such that the inequality

E{‖(D_N − λ/(N−1) K_N)^{−1}‖²} ≤ κ²

holds true for all N ≥ N0, then the bound

U_N ≤ (|λ|κ/√(N−1)) √( ∫_D ( L(y, y)/p(y) − ∫_D L(x, y) dx ) dy )   (7.43)

is valid for all N ≥ N0.

As we have said at the beginning of this section, the proof that the expectation E{‖(D_N − λ/(N−1) K_N)^{−1}‖²} is indeed bounded can be carried out only under very strict constraints imposed on the kernels K_i(x, y), i = 1, 2. It should be emphasised that a similar problem of establishing convergence arises in all well-known variational methods such as the Ritz or the Galerkin method. The actual computational practice shows, though, that for sufficiently large N and small computing errors no degeneration occurs, provided that λ differs from the eigenvalues of the integral operator.

7.8.4 Adaptive capabilities of the algorithm

From the structure of the right-hand side of inequality (7.43) it follows that if the density p(y) of the random variable y is chosen from the condition

p(y) = α | L(y, y) / ∫_D L(x, y) dx |,   (7.44)

where the constant α is defined by the equality ∫_D p(y) dy = 1, then the average variance of the error ∆φ_N is minimal. Thus, it becomes possible to choose the density p(y) automatically in the course of computation, taking as the true solution φ(y) the sequentially obtained estimates φ̃(y) of the solution. Such a choice is most efficient if

L(y, y) / ∫_D L(x, y) dx ≥ 0,

because in this case the functional U_N becomes equal to zero.

In numerical calculations, it makes sense to evaluate the integrals entering (7.44) by the Monte Carlo method on the same sample as in (7.37). The final formula is

p̃(y) = α (N − 1) [ ∑_{j=1}^{N} K12²(xj, y)/p(xj) ] / | ∑_{j=1}^{N} { K12(xj, y)/p(xj) ∑_{i=1, i≠j}^{N} [K1(xj, xi)φ̃(xi) + K2(xj, xi)φ̃(xj)] } |,   (7.45)

where

K12(xj, y) = K1(xj, y)φ̃(y) + K2(xj, y)φ̃(xj).
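The following sketch shows how (7.45) might be evaluated on a grid of candidate points for a one-dimensional domain. The kernel callables, the grid, and the interpolation of φ̃(y) are placeholder assumptions introduced only for illustration; the normalising constant α is fixed at the end by a simple Riemann sum.

```python
import numpy as np

def adaptive_density(y_grid, x_nodes, p_nodes, phi_nodes, K1, K2):
    """Adaptive density p~(y) of (7.45) on a uniform grid y_grid (1-D sketch).

    x_nodes, p_nodes, phi_nodes: current sample, its density values and the
    current estimate phi~ at the nodes; K1, K2: vectorised kernel callables
    (all of these are assumptions of the sketch, not the book's problems)."""
    N = len(x_nodes)
    order = np.argsort(x_nodes)
    # the inner sums over i != j do not depend on y, so precompute them
    inner = np.array([
        np.sum(np.delete(K1(x_nodes[j], x_nodes) * phi_nodes
                         + K2(x_nodes[j], x_nodes) * phi_nodes[j], j))
        for j in range(N)
    ])
    dens = np.empty_like(y_grid, dtype=float)
    for k, y in enumerate(y_grid):
        phi_y = np.interp(y, x_nodes[order], phi_nodes[order])   # phi~(y)
        K12 = K1(x_nodes, y) * phi_y + K2(x_nodes, y) * phi_nodes
        num = np.sum(K12 ** 2 / p_nodes)
        den = abs(np.sum(K12 / p_nodes * inner))
        dens[k] = (N - 1) * num / den
    h = y_grid[1] - y_grid[0]                 # uniform grid assumed
    return dens / (np.sum(dens) * h)          # fix alpha so that the density integrates to 1
```

The resulting density would then be used to draw the nodes of the next iteration, with the current estimate φ̃ playing the part of the unknown solution.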

Let us turn to the question of how to estimate the accuracy in the process of computation. From formula (7.39) we easily arrive at the statistical estimator of the solution

S̃(z) = Ψ(z)φ̃(z*) + (1/N) ∑_{j=1}^{N} [R1(z, xj)φ̃(xj) − R2(z, xj)φ̃(z*)] / p(xj).

It is natural to take its variance as the accuracy estimate. Using the formula for estimation of the empirical variance [68], we obtain

σ̃²_S̃(z) = 1/(N(N−1)) ∑_{j=1}^{N} [ Ψ(z)φ̃(z*) + R1(z, xj)/p(xj) φ̃(xj) − R2(z, xj)/p(xj) φ̃(z*) − S̃(z) ]².   (7.46)

Formula (7.46) defines a function of z, but in practice we need some numerical characteristic of it, which can be chosen from rational considerations. If we are solving only the problem of finding the extremal values of the function S(z), then it is wise to calculate the value of the variance at the extremum point only (of course, if it is known):

ε = max_{z∈V} σ̃²_S̃(z).

If we are interested in the behaviour of the function in the whole domain of variation of the argument, then we may use the integral characteristic

ε = ∫_D q(z) σ̃²_S̃(z) dz,

where q(z) is the weight function which accounts for various requirements on the accuracy depending on the point where the computation is performed. At this point, the description of the semi-statistical method may be considered complete. In the next sections we discuss various boundary problems which can be solved by the suggested method, consider the process of construction of equivalent integral equations, their regularisation, analysis of orders of singularities of kernels of the equations and ways to apply the semi-statistical method to them. We give results of numerical experiments which reveal the utility of this method.
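Before turning to those boundary value problems, here is a minimal sketch of the estimator (7.39) and the accuracy estimate (7.46). The kernels R1, R2, the function Ψ(z) and the interpolated value φ̃(z*) are passed in as placeholders; nothing here is tied to a particular problem.

```python
import numpy as np

def estimate_with_variance(z, phi_zstar, Psi_z, x_nodes, p_nodes, phi_nodes, R1, R2):
    """S~(z) of (7.39) and its empirical variance (7.46).

    phi_nodes: phi~ at the sample nodes; phi_zstar: interpolated phi~(z*);
    R1, R2: vectorised kernel callables; Psi_z: value of Psi at z.
    All inputs are problem-specific placeholders."""
    N = len(x_nodes)
    contrib = (R1(z, x_nodes) * phi_nodes - R2(z, x_nodes) * phi_zstar) / p_nodes
    S = Psi_z * phi_zstar + np.mean(contrib)            # estimator (7.39)
    terms = Psi_z * phi_zstar + contrib                  # per-node terms of (7.46)
    var = np.sum((terms - S) ** 2) / (N * (N - 1))       # empirical variance (7.46)
    return S, var
```

The square root of the returned variance gives the pointwise accuracy estimate used in the numerical experiments of the following chapters.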

8 Problem of Vibration Conductivity

In this chapter we formulate the boundary value problem of vibration conductivity and pose the question of the properties of its solution. We obtain equivalent integral equations possessing different asymptotic properties depending on a parameter β, and carry out their regularisation. We then solve a series of test problems with the use of the semi-statistical method.

8.1 Boundary value problem of vibration conductivity

In a great body of problems, computation of the vibration state of complex dynamic bodies with the use of classical methods does not yield any satisfactory result. In this connection, a way to obtain an aggregated description was proposed in [52, 53]. The key characteristic of a vibration field is its spectral characteristic. In what follows we deal with vibrational acceleration fields only. Concerning the external stress, we assume that it is a stationary stochastic process; an example of such a stress is a body-mounted working engine. Hence the key characteristic of the obtained random acceleration field is the spectral density of acceleration of the points of the medium, S(x, y, z). With regard to the analogy between heat and vibration phenomena, the following boundary value problem for S in an anisotropic medium was obtained in [52]. Inside the volume V occupied by the body, the equation

∇ ⋅ K ⋅ ∇S − β²S = 0   (8.1)

must hold, and on the boundary of the body O, the condition

n ⋅ K ⋅ ∇S = f   (8.2)

must be satisfied. Here K is the vibration conductivity tensor which characterises the anisotropic properties of the medium; it is symmetric and positive definite, and in this case it is, in addition, considered constant. In the vibration conductivity theory it is assumed that all bodies under consideration are orthotropic; in other words, through every point of the body pass three mutually perpendicular planes of symmetry of the vibration conductivity properties (or, what is the same, three orthogonal principal directions). Hence, in these principal coordinates the vibration conductivity tensor K is of the diagonal form

K = K_x ii + K_y jj + K_z kk,

where i, j, k are the basis vectors of the orthotropy axes. In what follows, the coordinate axes always lie along the principal directions. Let S be the weighted average of the frequency values of the vibration accelerations along the orthotropy axes S_x, S_y, S_z

with weight coefficients equal to the corresponding velocities of propagation of the axial disturbances a_x, a_y, a_z, that is,

S = a_x S_x + a_y S_y + a_z S_z.

Up to a factor, f is the first invariant of the spectral density tensor of the external stress:

f = ω²(S_NN + S_N1 + S_N2).

Here S_NN is the spectral density of the stress normal to the boundary, S_N1 and S_N2 are the spectral densities of the stresses tangential to the boundary which act along two arbitrary orthogonal directions, and ω is the excitation frequency. In (8.1) and (8.2), β is a positive constant characterising the degree of spatial damping, n is the vector of outward normal to the surface O, and ∇ is the Hamiltonian operator.

The components of the vibration conductivity tensor K and the damping coefficient β must be determined by specially designed experiments, in the same way as in the heat conduction theory. The best suited are those where S varies along one of the orthotropic directions, say, along the x-axis. Such a vibration state can be realised in a body stretched along the x-axis with the stress applied to one of the ends. Having an experimental curve S = S(x), we choose β and K_x so as to approximate it by the theoretical curve which can be obtained by solving problem (8.1), (8.2) directly (see [53]). Performing such one-dimensional experiments for different orthotropic directions, we find all components of the vibration conductivity tensor K and the damping coefficient β.

In what follows, it is convenient to divide equations (8.1), (8.2) by the minimum diagonal component of the tensor K. Then all coefficients of the tensor become dimensionless, while the diagonal components are no smaller than 1. The isotropic case corresponds to transition of the tensor K to the unit tensor E. In order to reduce boundary value problem (8.1), (8.2) to equivalent integral equations of the second kind we suggest using the apparatus of the potential theory [63].

8.2 Integral equations of vibration conductivity

The general idea of how to deduce integral equations from boundary value problem (8.1), (8.2) with the use of the potential method consists of the construction of a fundamental solution which satisfies the initial differential equation (8.1); the function S is then sought in the form of a potential, that is, the integral over the boundary O of the product of the fundamental solution and an unknown function referred to as the density. Making use of the properties of the constructed potential, upon transition to boundary condition (8.2) we arrive at an integral equation for the unknown density.

Let there be a body V bounded by a closed Lyapunov surface O (see [63]). In order to specify the positions of the points of the body, we introduce a coordinate system XYZ


Fig. 8.1. Principal orthotropic directions.

with the origin placed at some point W ∈ V and the axes lying along the principal orthotropic directions (see Figure 8.1). We introduce the following notation: R_PQ is the vector drawn from the point P to the point Q; r_P is the radius vector of the point P; n_P is the vector of outward normal to the surface O at the point P; ∇_P is the Hamiltonian operator which acts on the set of points P. It is obvious that

∇_P R_PQ = −∇_Q R_PQ.   (8.3)

Let the symbol ∫ ⋅ dO_P mean that the integration is over the surface O on the set of points P. Direct verification shows that the function

S*(N, N0) = e^{−βR1_{N0N}} / R1_{N0N}   (8.4)

is a fundamental solution of equation (8.1), where

R1_{N0N} = √(R_{N0N} ⋅ K^{−1} ⋅ R_{N0N}).   (8.5)

Here N0 ∈ V + O is the observation point and N ∈ O is the current point; the introduced vectors are shown in Figure 8.1. In the isotropic case, K = E, hence R1_{N0N} = R_{N0N}. Thus, R1_{N0N} defined by formula (8.5) is a quasi-distance between the points N0 and N. In other words, it is the distance between the points N0 and N in the deformed body V1 obtained from V by compressing along the orthotropy axes by the factors √K_x, √K_y, √K_z, respectively. (We recall that all diagonal components of the tensor K are no smaller than one.) Thus, for any two points N and N0 belonging to the body, the following inequality holds true:

R_{N0N} ≥ R1_{N0N}.   (8.6)
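The quasi-distance (8.5) and the fundamental solution (8.4) are simple to evaluate for a diagonal tensor K written in the principal axes; the short sketch below does this in Python. The numerical values are illustrative only.

```python
import numpy as np

def quasi_distance(P, Q, K_diag):
    """Quasi-distance R1 of (8.5) for a diagonal tensor K = diag(Kx, Ky, Kz)
    written in the principal (orthotropic) axes."""
    R = np.asarray(Q, dtype=float) - np.asarray(P, dtype=float)
    return np.sqrt(np.sum(R * R / np.asarray(K_diag, dtype=float)))

def fundamental_solution(P, Q, K_diag, beta):
    """Fundamental solution (8.4): exp(-beta * R1) / R1."""
    R1 = quasi_distance(P, Q, K_diag)
    return np.exp(-beta * R1) / R1

# Illustration: in the isotropic case K = E the quasi-distance is the ordinary
# distance, and for diagonal components >= 1 one always has R1 <= R, cf. (8.6).
P, Q = (0.0, 0.0, 0.0), (1.0, 2.0, 2.0)
print(quasi_distance(P, Q, (1.0, 1.0, 1.0)))   # 3.0, the Euclidean distance
print(quasi_distance(P, Q, (1.0, 4.0, 9.0)))   # smaller, since Ky, Kz >= 1
```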

Since the fundamental solution is known, we search for the solution of boundary value problem (8.1), (8.2) in the potential form

S(N0) = ∫_O μ(N) (e^{−βR1_{N0N}} / R1_{N0N}) dO_N,   (8.7)

where μ(N) is the density at the current point N ∈ O, which is yet unknown.

Let us investigate the properties of the introduced potential and its derivatives. In our case, the part of the Gauß integral used in the classical theory of the Newton potential [63] is played by

Ω(P) = − ∫_O n_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dO_Q,   (8.8)

where Q ∈ O. Let us calculate Ω(P) for various positions of the point P. We assume first that the point P lies outside the closed surface O. Then the integrand in (8.8) has no singularity, so the surface integral can be transformed into a volume integral by the Ostrogradsky formula:

Ω(P) = − ∫_O n_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dO_Q = − ∫_V ∇_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dV.

Alternatively, taking account of differential equation (8.1), we obtain Ω(P) = −Φ(P), where

Φ(P) = β² ∫_V (e^{−βR1_{PQ}} / R1_{PQ}) dV_Q.   (8.9)

Now we assume that the point P lies inside the surface O. In this case the direct application of the Ostrogradsky formula is impossible, because the integrand goes to infinity at the point P = Q. In order to isolate the singularity, we consider the body V − V_δ, which is the body under consideration with a sphere of small radius δ centred at the point P excluded (see Figure 8.2). Everywhere inside this body the integrand is finite, so, applying the Ostrogradsky formula to the integral

Ω1 = − ∫_{O+O_δ} n_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dO_Q,


Fig. 8.2. Sphere surface.

where O_δ is the sphere surface, we obtain

Ω(P) = − ∫_{O+O_δ} n_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dO_Q = − ∫_{V−V_δ} ∇_Q ⋅ K ⋅ ∇_Q (e^{−βR1_{PQ}} / R1_{PQ}) dV.   (8.10)

In order to evaluate the integral over the sphere surface O_δ, we go to the spherical system of coordinates with the basis vectors e_R, e_ϑ, e_φ and the origin at the point P. In this coordinate system, the equalities

R1_{PQ} = R_{PQ} c1,   (8.11)

c1 = √( sin²ϑ cos²φ/K_x + sin²ϑ sin²φ/K_y + cos²ϑ/K_z ),   (8.12)

and n_Q = −e_R are true. Carrying out a direct evaluation of the integral over O_δ with account of these relations and passing to the limit as δ → 0 in (8.10), we obtain

Ω(P) = 4π√(K_x K_y K_z) − Φ(P),

where Φ(P) is defined by formula (8.9). Finally, if the point P lies on the surface O, then, applying the Ostrogradsky formula to the body under consideration with an excluded hemisphere of radius δ centred at the point P, evaluating the integral over the surface of this hemisphere, and passing to the limit as δ → 0, we obtain

Ω(P) = 2π√(K_x K_y K_z) − Φ(P).

Fig. 8.3. Convex contour.

Gathering the obtained results, we see that

Ω(P) = 4π√(K_x K_y K_z) − Φ(P) if P ∈ V;   Ω(P) = 2π√(K_x K_y K_z) − Φ(P) if P ∈ O;   Ω(P) = −Φ(P) if P ∉ V + O.   (8.13)

Now let us evaluate Φ(P). First we assume that the point P is in V. In (8.9), we pass to the spherical coordinates with the origin at the point P. Taking into account (8.11) and (8.12), we find that

Φ(P) = β² ∫_0^{2π} ∫_0^π ∫_0^R e^{−βRc1} R dR (sin ϑ / c1) dϑ dφ.

The inner integral is taken along the ray R, more precisely, along that part of it which lies inside the body V. The corresponding part of the ray is shown as a continuous line in Figure 8.3. Carrying out the integration along R, we obtain

Φ(P) = ∫_0^{2π} ∫_0^π { ∑_k [1 − (1 + βR_{PQk} c1) e^{−βR_{PQk} c1}] sign cos(n_{Qk}, R) } (sin ϑ / c1³) dϑ dφ,

where R_{PQk} is the distance between the point P and the points at which the ray R corresponding to the fixed angles ϑ and φ intersects the surface O; the summation is over all these points. By analogy with (8.11), let R1_{PQk} = R_{PQk} c1. Thus, for Φ(P) we obtain

Φ(P) = ∫_0^{2π} ∫_0^π { ∑_k [1 − (1 + βR1_{PQk}) e^{−βR1_{PQk}}] sign cos(n_{Qk}, R) } (sin ϑ / c1³) dϑ dφ.   (8.14)

In the general case, it is obvious that the subsequent integration in formula (8.14) can be carried out only numerically. Now let the point P find itself on the surface O. There is no need to treat the passage to the limit in (8.14) in a special way as the point P approaches the surface; we only have to write out the equation of the surface in the form R_{PQk} = R_{PQk}(ϑ, φ). If the point P falls onto the surface O, then every radius R_{PQk}(ϑ, φ) becomes equal to zero in a particular region of the angles ϑ and φ, which does not cause any difficulties while performing calculations in accordance with formula (8.14). This reasoning remains valid in the case where P is outside the body. Hence, for any position of the point P, formula (8.14) can be utilised to evaluate the function Φ(P). In order to prove the continuity of the function Φ(P), we make use of the analogous theorem on the continuity of the single-layer potential [63].

Making use of relation (8.13) and taking into account the continuity of Φ(P), we easily analyse the properties of potential (8.7) and its derivatives. To do this, we repeat the reasoning applied to the theory of Newton's potential in [63]. We thus conclude that
(i) potential (8.7) exists and is continuous in the whole space;
(ii) the derivatives of the potential have a jump at the points of the surface:

n ⋅ K ⋅ ∇S|_O^i = n ⋅ K ⋅ ∇S|_O + 2π√(K_x K_y K_z) μ(N),   (8.15)
n ⋅ K ⋅ ∇S|_O^e = n ⋅ K ⋅ ∇S|_O − 2π√(K_x K_y K_z) μ(N),   (8.16)

where the symbols i and e stand for the limit values of the derivatives from the inside (that is, as the point N0 approaches N ∈ O remaining inside the body V) and from the outside; the direct value of the derivative has no special notation.

Thus, S(N0) in the form (8.7) solves equation (8.1). It remains to satisfy boundary condition (8.2), which should be treated as the limit as the point N0 tends to the surface O from inside:

lim_{N0→O, N0∈V} n ⋅ K ⋅ ∇S = f.

Substituting (8.7) into this condition and making use of property (8.15), we arrive at the following integral equation of the second kind for the potential density μ:

∫_O μ(M) n_N ⋅ K ⋅ ∇_N (e^{−βR1_{NM}} / R1_{NM}) dO_M + 2π√(K_x K_y K_z) μ(N) = f(N),   (8.17)

where N is the point of observation on the surface O and M is the current point on the surface (see Figure 8.1). Thus, the problem of reducing the boundary value problem to integral equation (8.17) is solved. But in order to successfully apply numerical methods, the equations obtained must be regular (see Section 7.8), so we have to isolate all singularities which occur in the integrals.

Everywhere in what follows, except for very special cases, for the sake of brevity we omit the subscripts of R1. In the integral equation for the density μ, R1 means the quasi-distance between the points N and M, while in the expression for the potential S, R1 is the quasi-distance between N0 and N. The same applies to R.

8.3 Regularisation of the equations

First, we consider expression (8.7). If the point N0 is inside V, no difficulty related to the evaluation of the potential occurs, because the integration in (8.7) is over the surface O. But if the point N0 falls onto the surface O, then the integrand suffers a singularity, which makes the application of standard numerical integration methods rather difficult. In this connection, the problem arises of finding a regular representation of integral (8.7) no matter where the point N0 finds itself; this will remove the need for two distinct integration procedures. In order to isolate the singularity, we introduce the integral

Π(N0) = ∫_O n_N ⋅ r_N (e^{−βR1} / R1) dO_N.   (8.18)

We recall that R1 stands for the quasi-distance between the points N0 and N. First, consider the case where N0 finds itself on the surface O, and write S(N0) in the form

S(N0) = (μ(N0)/(n0 ⋅ r0)) Π(N0) + ∫_O [ μ(N) − μ(N0) (n_N ⋅ r_N)/(n0 ⋅ r0) ] (e^{−βR1} / R1) dO_N,   (8.19)

where n0 is the vector of outward normal to the surface at the point N0, and r0 is the radius vector of the point N0. Thus, for a differentiable μ(N) the integrand now has no singularity at the point N0.

Now let us consider some sequence of observation points N0 approaching the surface O along some line L. The point of intersection of the line L and the surface O is denoted by N* (see Figure 8.1). As in (8.19), we represent S in the form

S(N0) = (μ(N*)/(n* ⋅ r*)) Π(N0) + ∫_O [ μ(N) − μ(N*) (n_N ⋅ r_N)/(n* ⋅ r*) ] (e^{−βR1} / R1) dO_N,   (8.20)

where n* is the vector of outward normal to the surface O at the point N*, and r* is the radius vector of the point N*. In this case the integrand has no singularity, no matter where the point N0 finds itself, including the case where N0 → N*. We put


Fig. 8.4. Spherical coordinate system.

a particular emphasis on the condition whose violation makes formulas (8.19) and (8.20) senseless: n0 ⋅ r0 ≠ 0, n* ⋅ r* ≠ 0. This condition poses a constraint on the choice of the point W inside the body.

Let us turn to the problem of evaluating the function Π(N0). Using the Ostrogradsky formula, we transform surface integral (8.18) into a volume integral and obtain

∫_O n_N ⋅ r_N (e^{−βR1} / R1) dO_N = ∫_V ∇_N ⋅ ( r_N e^{−βR1} / R1 ) dV.

There is no need for a special treatment of the singularity R1 = 0 because it is a weak one. Thus,

Π(N0) = ∫_V ( 3 e^{−βR1}/R1 − (1 + βR1)/R1² e^{−βR1} r_N ⋅ ∇_N R1 ) dV.   (8.21)

In order to evaluate integral (8.21) we pass to the spherical coordinate system with the basis vectors e_R, e_ϑ, e_φ and the origin at the point N0 such that the z′-axis is directed along r0 (see Figure 8.4). Then in this coordinate system we have r0 ⋅ e_φ = 0,

R1 = R c2,   (8.22)

where

c2 = √( α_x²/K_x + α_y²/K_y + α_z²/K_z )

and

(α_x, α_y, α_z)^T = ( cos ϑ0 cos φ0, −sin φ0, sin ϑ0 cos φ0; cos ϑ0 sin φ0, cos φ0, sin ϑ0 sin φ0; −sin ϑ0, 0, cos ϑ0 ) (sin ϑ cos φ, sin ϑ sin φ, cos ϑ)^T.

The essential complication of the form of c2 as compared with (8.12) is due to the fact that the tensor K ceases to be diagonal after the rotation of the coordinate axes. For these reasons, since (see Figure 8.1) r_N = R + r0, we obtain

Π(N0) = ∫_0^{2π} ∫_0^π ∫_0^R [ 2Rc2 − D(ϑ, φ) − βR²c2² − βRc2 D(ϑ, φ) ] e^{−βRc2} dR (sin ϑ / c2²) dϑ dφ,   (8.23)

where

D(ϑ, φ) = r0 c2 [ cos ϑ − (1/c2)(∂c2/∂ϑ) sin ϑ ].   (8.24)

Keeping in mind the fact that r0 remains constant as one goes along the ray R, we carry out the integration along R in (8.23) and obtain

Π(N0) = ∫_0^{2π} ∫_0^π [ ∑_k { [ (R1²_{N0Nk} + D(ϑ, φ) R1_{N0Nk}) e^{−βR1_{N0Nk}} − (2D(ϑ, φ)/β)(1 − e^{−βR1_{N0Nk}}) ] sign cos(n_{Nk}, R) } ] (sin ϑ / c2³) dϑ dφ,

where

R1_{N0Nk} = R_{N0Nk} c2.   (8.25)

The other symbols have the same meaning as in (8.14).

Now let us consider integral equation (8.17). The integrand possesses a singularity of order 1/R1 at the point N = M (see formula (8.30) below and the comments to it), which makes the computer evaluation much more difficult. In order to isolate this singularity, we make use of relations (8.13), (8.14) for P ≡ N ∈ O and Q ≡ M ∈ O. We obtain

− ∫_O (1 + βR1)/R1² e^{−βR1} n_M ⋅ K ⋅ ∇_M R1 dO_M + 2π√(K_x K_y K_z) − Φ(N) = 0,   (8.26)

where

Φ(N) = ∫_0^{2π} ∫_0^π { ∑_k [1 − (1 + βR1_{NMk}) e^{−βR1_{NMk}}] sign cos(n_{Mk}, R) } (sin ϑ / c1³) dϑ dφ.   (8.27)

We multiply both sides of relation (8.26) by μ(N) and subtract it from both sides of equation (8.17). In view of (8.3) we obtain

μ(N)Φ(N) + ∫_O (1 + βR1)/R1² e^{−βR1} [ μ(M) n_N ⋅ K ⋅ ∇_M R1 + μ(N) n_M ⋅ K ⋅ ∇_M R1 ] dO_M = f(N).   (8.28)

Let us demonstrate that the integrand of (8.28) does not suffer a singularity. For this purpose we represent the factor of the integrand which is not defined as R1 → 0 in the form

[ μ(M) n_N ⋅ K ⋅ ∇_M R1 + μ(N) n_M ⋅ K ⋅ ∇_M R1 ] / R1²
= [ (μ(M) − μ(N))/R1 ] ⋅ [ n_N ⋅ K ⋅ ∇_M R1 / R1 ] + μ(N) [ n_N ⋅ K ⋅ ∇_M R1 + n_M ⋅ K ⋅ ∇_M R1 ] / R1².   (8.29)

In a neighbourhood of the point N, let the surface be approximated by the equation

z + (1/2)(K1 x² + K2 y²) + ax³ + bx²y + cxy² + dy³ + ⋅⋅⋅ = 0.

Here it is assumed that the z-axis is directed along the outward normal n_N, while the x- and y-axes are tangent to the lines of curvature of the surface at the point N. We see that

n_N ⋅ K ⋅ ∇_M R1 = n_Nx K_x x/(K_x R1) + n_Ny K_y y/(K_y R1) + n_Nz K_z z/(K_z R1) = z/R1,

because n_Nx = n_Ny = 0, n_Nz = 1. Hence

n_N ⋅ K ⋅ ∇_M R1 / R1 = z/R1² = − [ (1/2)(K1 x² + K2 y²) + O(x³, y³) ] / R1².   (8.30)

Therefore, the result is bounded as R1 → 0 (hence the integrand in equation (8.17) has a singularity of order 1/R1). The assumption on the existence of the derivative of μ implies that the whole first addend in (8.29) is bounded. Let us demonstrate that the second addend is bounded:

n_M ⋅ K ⋅ ∇_M R1 = n_Mx K_x x/(K_x R1) + n_My K_y y/(K_y R1) + n_Mz K_z z/(K_z R1)
= [ z + K1 x² + K2 y² + 3(ax³ + bx²y + cxy² + dy³) ] / [ R1 √( 1 + (K1 x + 3ax² + 2bxy + cy²)² + (K2 y + bx² + 2cxy + 3dy²)² ) ].

Preserving the terms up to the third order in x and y, we obtain

[ n_N ⋅ K ⋅ ∇_M R1 + n_M ⋅ K ⋅ ∇_M R1 ] / R1² = (ax³ + bx²y + cxy² + dy³) / R1³,

that is, both the numerator and the denominator are of the third order in x and y, so the result is bounded as R1 → 0.

Let us turn to the analysis of the asymptotic behaviour of the term outside the integral in (8.28) for both large and small β. Let β be as small as desired. Since

1 − (1 + βR1) e^{−βR1} = β²R1²/2 + O(β³),

from (8.27) it follows that

Φ(N) = β² B(N) + O(β³),

where

B(N) = (1/2) ∫_0^{2π} ∫_0^π [ ∑_k R1²_{NMk} sign cos(n_{Mk}, R) ] (sin ϑ / c1³) dϑ dφ.

Hence, the contribution of the term outside the integral to (8.28) is asymptotically infinitesimal as β → 0, which renders this equation virtually useless from the numerical calculation viewpoint because it degenerates into an integral equation of the first kind. In Section 8.4 we consider another integral equation which behaves better for small β.

We observe that equation (8.28) for large β is suitable for numerical solution. For the angles ϑ and φ lying in the same half-space as the tangent plane, the integrand in (8.27) vanishes because R = 0, while within the other half-space it is close to sin ϑ/c1³ + O(1/β). Hence

Φ(N) = ∫_0^{2π} ∫_{π/2}^π (sin ϑ / c1³) dϑ dφ + O(1/β) = 2π√(K_x K_y K_z) + O(1/β).

8.4 An integral equation with enhanced asymptotic properties at small β As the fundamental solution, we choose not (8.4) but the function G=

cosh β(a − R1 ) − R1 (cosh βa −

sinh β(a−R1 ) βa sinh βa βa )

,

where a is a parameter chosen from the condition max R1 ≤ a, N0 ,N

(8.31)

that is, the value of a must not exceed the diameter of the deformed body V1 described in Section 8.1. In view of inequality (8.6), we are able to state that a sufficient condition

8.4 An integral equation at small β

for validity of (8.31) is

| 185

max R ≤ a, N0 ,N

that is, as a one may choose the diameter of the body V. But in order to get a better accuracy in numerical solution one should choose a from inequality (8.31) making it close to an equality. It is not difficult to see that the introduced function solves differential equation (8.1) and has a singularity of the form 1/R1 . Consider now the integral over the body surface 1) cosh β(a − R1 ) − sinh β(a−R βa dO N . (8.32) S(N0 ) = ∫ μ(N) βa R1 (cosh βa − sinh βa ) O Expression (8.32) satisfies vibration conductivity equation (8.1), and due to the structure of the integrand, integral (8.32) is an analogue of potential (8.7), so properties (8.15), (8.16) remain valid. Substituting (8.20) into boundary condition (8.2), we obtain 2π√K x K y K z μ(N) + ∫ μ(M)n N ⋅ K ⋅ ∇N [

cosh β(a − R1 ) −

O

R1 (cosh βa −

sinh β(a−R1 ) βa sinh βa βa )

] dO M = f(N).

Performing the isolation of the singularity in the last equation in the same way as we did for equation (8.17), we ultimately obtain μ(N)Ψ(N) + ∫ Θ(R1 )[μ(M)n N ⋅ K ⋅ ∇M R1 + μ(N)n M ⋅ K ⋅ ∇M R1 ] O

dO M = f(N), (8.33) R21

where Θ(R1 ) =

(1 −

R1 a ) cosh β(a

1 − βR1 ) sinh β(a − R1 ) − R1 ) − ( βa

cosh βa −

sinh βa βa

2π π

Ψ(N) = ∫ ∫ { ∑ l[1 − Θ(R1NM k )] sign cos(n M k , R)} 0 0

k

sin ϑ c31

,

(8.34)

dϑ dφ.

(8.35)

The function Θ(R1 ) does not suffer a singularity, as well as the second factor of the integrand, which was proved in Section 8.3. Thus, equation (8.33) can be solved with the use of a computer. Let us dwell on the asymptotic behaviour of Ψ(N) in (8.35). For large β we obtain Θ(R1 ) ≃ (1 + βR1 )e−βR1 . Hence, as in Section 8.3,

Ψ(N) ≃ 2π√K x K y K z .

(8.36)

Therein lies a similarity between integral equation (8.33) and equation (8.28). Now

186 | 8 Problem of Vibration Conductivity let β be small. Then

R31 . a3 Keeping this in mind and passing again to the three-dimensional integration in (8.35), we obtain Θ(R1 ) ≃ 1 −

2π π

R3 sin ϑ 1 Ψ(N) = 3 ∫ ∫ 1 3 dϑ dφ a c1 0 0

2π π R

=

3V 3 ∫ ∫ ∫ R2 sin ϑ dϑ dφ dR = 3 , a3 a

(8.37)

0 0 0

where V is the volume of the body. Substituting asymptotic expressions (8.36) and (8.37) into equation (8.33), we see that R31 μ(M)n M ⋅ K ⋅ ∇M R1 + μ(N)n M ⋅ K ⋅ ∇M R1 3V − μ(N) + ] dO M = f(N). (1 )[ ∫ a3 a3 R21 O

The asymptotic contributions of the term outside the integral and the integral on the left-hand side of the last equation coincide, which shows that equation (8.33) is better suited for the numerical solution than (8.34). In order to evaluate integral (8.32) whose integrand suffers a weak singularity, we introduce 1) cosh β(a − R1 ) − sinh β(a−R βa H(N0 ) = ∫ n N ⋅ r N dO N . sinh βa (cosh βa − βa )R1 O

We represent S in the form

sinh β(a−R1 )

S(N0 ) =

cosh β(a − R1 ) − μ(N∗ ) βa H(N0 ) + ∫ sinh βa n∗ ⋅ r∗ cosh βa −

[μ(N) − μ(N∗ )

βa

O

n N ⋅ r N dO N . ] n∗ ⋅ r∗ R1

The notation is explained in Section 8.2. Provided that μ(N) is differentiable, the integral entering the last equation does not suffer a singularity and is hence suitable for evaluation with the use of a computer. In order to calculate H(N0 ), as in Section 8.2, we transform the surface integral to the volume 1 by the Ostrogradsky formula, and go to the spherical coordinate system shown in Figure 8.4. Evaluating the integral over R, we obtain 2π π

H(N0 ) = ∫ ∫ { ∑[(R21N0 N k + R1N0 N k D(ϑ, φ))Θ1 (R1N0 N k ) − k

0 0

×

cosh βa−cosh β(a−R1N0 N k ) βa cosh βa−sinh βa βa

sinh βa − sinh β(a − R1N0 N k ) −

× sign cos(n N k , R)}

2D(ϑ, φ) β

sin ϑ c32

dϑ dφ,

]

8.5 Numerical solution | 187

where R1N0 N k , D(ϑ, φ), and c2 are calculated by formulas (8.25), (8.24), (8.22), respectively, and 1) cosh β(a − R1 ) − sinh β(a−R βa Θ1 (R1 ) = G(R1 )R1 = . sinh βa (cosh βa − βa ) We thus obtain regular integral equations for vibration conductivity which obey the conditions for applicability of the semi-statistical method. Let us turn to its numerical implementation.

8.5 Numerical solution of vibration conductivity problems

8.5.1 Solution of the test problem

As a control example, we consider the problem of an isotropic solid sphere exposed to a unit stress applied to the whole surface. This is probably the unique problem which admits an analytic solution both for the function S sought for and for the potential density μ, so a direct estimation of the accuracy of the solution of integral equation (8.33) is possible. In the isotropic case K = E, boundary value problem (8.1), (8.2) takes the form

∆S − β²S = 0,   (8.38)
dS/dn = f,   (8.39)

where ∆ is the Laplacian and d/dn is the derivative along the outward normal. For a solid sphere exposed to a uniform stress, because of symmetry we are able to assert that in the spherical coordinate system r, ϑ, φ the value of S does not depend on the angular coordinates ϑ and φ; hence equations (8.38), (8.39) for a sphere of radius b take the form

(1/r) d²(rS)/dr² − β²S = 0,   dS/dr|_{r=b} = 1,   S|_{r=0} is bounded.   (8.40)

The function

S(r) = b² / (βb cosh βb − sinh βb) ⋅ sinh βr / r   (8.41)

solves this problem. Thus, we have found an analytic solution for S. Now let us obtain a solution of the integral equation corresponding to this problem. It is not difficult to see that integral equation (8.33) for an isotropic body takes the form

μ(N)Ψ(N) + ∫_O Θ(R) [ μ(M) cos(R, n_N) + μ(N) cos(R, n_M) ] dO_M / R² = f(N).   (8.42)
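The analytic solution (8.41) is easy to evaluate and to check against the boundary condition of (8.40); the short sketch below does this with a one-sided finite difference. The numerical values of β and b are illustrative only.

```python
import numpy as np

# Evaluate the analytic solution (8.41) and verify dS/dr = 1 at r = b, cf. (8.40).
beta, b = 2.0, 1.5                     # illustrative values

def S(r):
    return b**2 / (beta * b * np.cosh(beta * b) - np.sinh(beta * b)) * np.sinh(beta * r) / r

h = 1e-6
dS_at_b = (S(b) - S(b - h)) / h        # one-sided difference at the boundary
print(S(b), dS_at_b)                    # the derivative should be close to 1
```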


Fig. 8.5. The sphere case.

For K = E we indeed obtain R1 = R,

n_N ⋅ K ⋅ ∇_M R1 = n_N ⋅ ∇_M R = −n_N ⋅ ∇_N R = cos(R, n_N),
n_M ⋅ K ⋅ ∇_M R1 = n_M ⋅ ∇_M R = cos(R, n_M),

which implies (8.42). We transform the expression in square brackets as follows:

μ(M) cos(R, n_N) + μ(N) cos(R, n_M) = [μ(M) − μ(N)] cos(R, n_N) + μ(N)[cos(R, n_N) + cos(R, n_M)].

But for the sphere we have (see Figure 8.5) cos(R, n_N) = −cos(R, n_M), and from (8.35) it follows that

Ψ(N) = ∫_0^{2π} ∫_0^π [1 − Θ(R)] sin ϑ dϑ dφ,   (8.43)

where

R = 0 if 0 ≤ ϑ < π/2,   R = −2b cos ϑ if π/2 ≤ ϑ ≤ π.   (8.44)

The parameter a entering expression (8.34) for Θ(R) can be set to

a = 2b.   (8.45)

Performing the integration in (8.43) with regard to (8.44), we obtain

Ψ(N) = Ψ0 = 2π ( 1 − [ (2/(β²a²))(1 − cosh βa) + (2/(βa)) sinh βa − 1 ] / [ cosh βa − sinh βa/(βa) ] ),

where a is defined by (8.45); that is, Ψ(N) = const. Therefore, equation (8.42) for the given problem takes the form

Ψ0 μ(N) + ∫_O Θ(R) cos(R, n_N) [μ(M) − μ(N)] dO_M / R² = 1.   (8.46)

There is no difficulty in seeing that the constant

μ(N) = μ0 = 1/Ψ0   (8.47)

solves this equation.

If the analytic solution of integral equation (8.46) is known, it is straightforward to find the optimum density p(ϑ, φ) which is then used to generate the random points. For this purpose we use not relation (7.44) but the condition from which it was derived:

L(y, y)/p(y) − ∫_O L(x, y) dx = 0.   (8.48)

In view of the fact that the solution φ(x) is the constant φ(x) = μ0, we obtain

L(y, y) = μ0² ∫_O [K1(ξ, y) + K2(ξ, y)]² dξ,
∫_O L(x, y) dx = μ0² ∫_O [K1(ξ, y) + K2(ξ, y)] { ∫_O [K1(ξ, x) + K2(ξ, x)] dx } dξ,

where

K1(ξ, y) = −K2(ξ, y) = Θ(R) cos(R, n_N) / R².

Therefore,

L(y, y) ≡ ∫_O L(x, y) dx ≡ 0.

Hence, any function p(y) obeying relation (7.34) satisfies condition (8.48). By virtue of the symmetry of the problem, there is no need to make the grid denser in any particular place, so we use the uniform law of distribution of the random points:

p(y) = 1/(4π²).

Besides, for the comparison purpose we consider the law given in Figure 8.6. This example is treated as a particular case of the problem on an anisotropic ellipsoid exposed to a unit stress, all whose semi-axes are equal to b and with anisotropy coefficients K x = K y = K z = 1.

Fig. 8.6. An example of a step function.

The results of numerical solution show that even if the number of generated points is equal to ten, the numerical solutions for μ and S coincided with the analytical results obtained by formulas (8.47), (8.41) with accuracy to four significant digits for both densities. This accuracy is attained on the surface of the sphere where the errors are maximal. This result is supported by the statistical estimation of the error. The mean square deviation σ̃ S̃ of the function S̃ calculated by formula (7.46) is one-fourth of the function itself.

8.5.2 Analysis of the influence of the sphere distortion and the external stress character on the results of the numerical solution

The good coincidence of the numerical solution with the analytic solution of the problem on the solid sphere exposed to a constant stress obtained in the previous section is probably due to the simplicity of the problem, which did not allow the statistical peculiarities of the method to manifest themselves. In this connection, problems are of interest where the solution of integral equation (8.40) differs from a constant. The simplest of them are the problems on an ellipsoid under a constant stress and on a solid sphere under an impulse one. While solving the former problem, we come up against the question concerning the influence of the oblongness of the ellipsoid on the value of the statistical error. The generation of the random points is performed in accordance with the uniform distribution law. The results of the solution for the maximum number of points N = 160 are given in Figure 8.7. The continuous line stands for the function S̃(ϑ, φ) sought for in the


Fig. 8.7. Results of calculations for an ellipsoid.

lower half of the ellipsoid, while the dashed lines show the limits of the statistical error, ̃ that is, S(ϑ, φ) ± σ̃ S (ϑ, φ). At the points located far away from the ellipsoid ends (see Figure 8.7), the statistical error is very small even in the inner part for the semi-axes ratio 1 : 6 : 1. But, as one approaches the ends, the relative error increases noticeably and becomes 3.8% for the ellipsoid with semi-axes ratio 1 : 2 : 1, 18.7% for that with the ratio 1 : 4 : 1, and 16.6% for the ratio 1 : 6 : 1. Thus, the above procedure works well for 160 uniformly distributed random points provided that the ellipsoid is not too oblong (with the semi-axes ratio about 1 : 2 : 1). For more oblong bodies one has either to increase the number of points generated, hence calling for external storage units, or to refine the density of distribution of the random points with the use of relation (7.45), where as the first approximation one can choose the solution of the integral equation we have found. While solving the problem on a sphere which is subject to a unit stress applied to a small area of its surface, it is of interest to study the influence of the stress application area size to the value of statistical error. It is wise, though, to do away with the uniform distribution density p(y), because it is clear that the random grid must be more dense in the domain of rapid variation of the solution, that is, in the area exposed to the stress. So, as the first approximation we choose the density p(y) shown in Figure 8.6. The results of solution of this problem are shown in Figure 8.8 (the notation is the same as in Figure 8.7). From the given graphs it is seen that far away from the stress area the relative error is relatively small and is about 5% both in the case where the stress area S f is equal to 0.125 of the whole sphere area S0 and in the case where K S = S f /S0 = 0.045. But in the stress zone and in its neighbourhood the statistical


Fig. 8.8. Load on small surface area.

error grows depending essentially on the degree of impulsiveness of the stress. For K_S = 0.125 the relative error is 8%, while for K_S = 0.045 it is 18.8%. Therefore, to guarantee a sufficient accuracy in problems where the stress is applied to a small surface area (K_S < 0.1), one has to increase the number of generated points and to optimise their allocation.

9 Problem on Ideal-Fluid Flow Around an Airfoil

9.1 Introduction

In this chapter we consider the problem of ideal-fluid flow around an airfoil cascade. With the help of the semi-statistical method, we obtain sufficiently accurate results for cascades whose parameters are taken from actual design practice. Our results are compared with solutions obtained by means of other numerical methods. The prediction that the convergence rate of the method depends essentially on the geometrical characteristics of the cascade is borne out. Attempts to speed up the convergence lead us to a modification of the method (removal of outliers from the averaged sum). In order to better understand the dependence of the convergence rate on the geometrical properties of the cascade, we also study the application of the method to a cascade of test airfoils (ellipses) with varying parameters of the airfoils and different cascade spans. In the end, a satisfactory accuracy is achieved in all problems under consideration; the adaptive algorithm allocates the integration nodes in full accordance with the theoretical pattern. But in some cases, solving by the semi-statistical method appears to be more time-consuming than by deterministic methods, which is likely due both to a lack of polish of the software implementation and to the need for further research on ways to speed up the convergence of the semi-statistical method, in particular, to improve the optimisation mechanism.

Chapter 9 is written in co-authorship with N. A. Berkovsky.

9.2 Setup of the problem on flow around an airfoil

Let a flat blade (airfoil, hydrofoil) cascade with span t be given (see Figure 9.1), such that from infinity an ideal fluid flows into it at the angle β1 and then leaves it at the angle β2. We have to find the absolute value of the normalised velocity of the flow at the airfoil contours. This problem reduces (see [72]) to the solution of an integral equation of the form

w(s) + ∮_L (K(s, l) − 1/L) w(l) dl = b(s),   (9.1)

where w(s) is the normalised flow velocity,

K(s, l) = (1/t) [ ∂x/∂s ⋅ sin((2π/t)(y(s) − y(l))) − ∂y/∂s ⋅ sinh((2π/t)(x(s) − x(l))) ] / [ cos((2π/t)(y(s) − y(l))) − cosh((2π/t)(x(s) − x(l))) ],

b(s) = −2 ∂x/∂s − (cot β1 + cot β2) ∂y/∂s + t/L.

Here s and l are the arc lengths at two distinct points of the airfoil contour; the arcs are measured from the centre of the airfoil trailing edge in the positive direction


Fig. 9.1. A flat airfoil cascade.

(counterclockwise); x(l) and y(l) are the coordinates of the point of the contour whose arc length is l; L is the airfoil contour and L is its length. The direction of the unit tangent vector (∂x/∂s, ∂y/∂s) is chosen in such a manner that the contour is traversed counterclockwise. As distinct from [72], here the cascade front is not along the abscissa axis but along the ordinate axis. In addition, the velocity in [72] is normalised in such a way that the flow rate at the exit is equal to one, while we carry out the normalisation in such a way that the absolute value of the velocity vector at the exit of the cascade is equal to one. This is achieved by multiplying the velocity obtained upon solving equation (9.1) by sin(β2). It was precisely the latter normalisation that was utilised in the software designed at the Ural Polytechnic Institute, where the computation was implemented by the method of rectangles with optimal allocation of integration nodes [28, 67]. In this chapter, solutions obtained with the use of this software are used for comparison purposes.

Thus, in the given problem the part of the surface S is played by the contour L, and integral equation (9.1) must be solved for the unknown function w(s). If we let w_k(s) denote the averaged solution after k iterations and W_m(s) the value obtained at the iteration numbered m upon solving integral equation (9.1) at the N points generated, then

w_k(s) = (1/k) ∑_{m=1}^{k} W_m(s).

The empirical standard deviation at the iteration numbered k is calculated by the formula

δ_k(s) = √( (1/k²) ∑_{l=1}^{k} D_l(s) ),

where D_l is the sample variance resulting from a single iteration numbered l:

D_l(s) = 1/(N(N−1)) ∑_{m=1}^{N} ( (K(s, l_m) − 1/L)/p(l_m) ⋅ W_k(l_m) + b(s) − W_k(s) )².

9.3 Analytic description of the airfoil contour Integral equation (9.1) for a smooth contour L is a Fredholm integral equation and possesses a unique solution [72]. The kernel of equation (9.1) in the case of twice differentiable contour is treated as continuous because it has a removable singularity at s = l (see [72]). In our study, though, the contour is determined by a spline whose first derivative suffers discontinuity at finitely many points. This fact implies discontinuities of the kernel at the points of discontinuity of the first derivative and yet does not influence the quality of computation. In addition, it is always possible to approximate a spline by a part of the Fourier series and to solve the problem along an infinitely differentiable contour as in [28]. We make attempts to use both techniques; the solutions for a spline and for a segment of the Fourier series are found to differ very little. We apply the semi-statistical method to integral equation (9.1) in full accordance with the general scheme which was presented in detail in Chapter 7, with no background like a regularisation. The velocity values are calculated at one hundred points of the contour, fifty equidistant points at the trough and the ridge each (see Figures 9.2 and 9.4). They are then multiplied by sin(β2 ); the result is compared with the solution obtained by the method of rectangles over the same contour. Both results are also compared with the solution computed by the software developed in the Ural Polytechnic Institute. In that software, the airfoil contour is defined in a somewhat different way, which explains the subtle differences seen on the graphs given in Section 9.5. In order to define the airfoil analytically, the researchers from the Ural Polytechnic Institute used the Cartesian coordinates of some number of points at the contour of the blade which suffices for the subsequent interpolation as well as the radii and coordinates of the centres of the circles of the leading and trailing edges of the airfoil. First, we calculate the coordinates of the points where the trough and the ridge join with the edges. In this chapter we use the same technique as used in the Ural Polytechnic Institute; the junction points are found by means of straightforward geometrical constructions. We do not give here the formulas because they are quite bulky. As soon as we find the junction points, we choose twenty equidistant points at the resulting

196 | 9 Ideal-Fluid Flow leading and trailing edges each, which, together with the initial points of the trough and ridge form the point contour which we have to interpolate. The initial points at the trough and ridge, as well as forty points at the edges, are the interpolation nodes. The interpolating contour is constructed as follows. 1. The contour points are numerated counterclockwise; the first point is the centre of the trailing edge, and it is the last one as well; there are N + 1 points in total. 2. We approximately calculate the length L of the contour as the length of a closed polygonal path passing through the mentioned points by the formula N+1

L = ∑ √(x i − x i−1 )2 + (y i − y i−1 ). i=1

3. The point (x k , y k ) numbered k is associated with the number uk =

2π k ∑ √(x i − x i−1 )2 + (y i − y i−1 ) L i=1

for k > 1; we set u1 = 0. As the result, u k is the (approximated) length of the arc of the contour from the centre of the trailing edge to the point (x k , y k ) normalised by 2π. 4. With the use of spline interpolation, on the interval 0 ≤ u ≤ 2π we construct the interpolating functions x(u) and y(u) which obey the conditions x(u k ) = x k ,

y(u k ) = y k ,

k = 1, 2, . . . , N + 1.

These functions are of period 2π. 5. The parametrically defined curve x = x(u),

y = y(u),

where x(u) and y(u) are the functions from the preceding item, is precisely the analytical description of the airfoil contour. By the example of x(u) we demonstrate how to build up a spline function. For this purpose we introduce the cubic functions with undetermined coefficients g k (u) = A k u3 + B k u2 + C k u + D k , where A k , B k , C k , D k are constants satisfying the simultaneous equations g k (u k−2 ) = x k−2 ,

g k (u k−1 ) = x k−1 ,

g k (u k+1 ) = x k+1 ,

g k (u k+2 ) = x k+2 ,

where k = 1, 2, . . . , N; in addition, x−2 = x N−1 ,

x−1 = x N ,

x N+2 = x1 .

Upon finding the undetermined coefficients, let d k = g󸀠 (u k ). Then we consider other cubic functions with undetermined coefficients f k (u) = a k u3 + b k u2 + c k u + d k ,

9.3 Analytic description of the airfoil contour | 197

Fig. 9.2. Initial data about the contour.

Fig. 9.3. Interpolating spline contour.

Fig. 9.4. The points where the velocity is calculated.

198 | 9 Ideal-Fluid Flow where a k , b k , c k , d k satisfy the simultaneous equations f k (u k ) = x k ,

f k󸀠 (u k ) = d k ,

f k (u k+1 ) = x k+1 ,

f k󸀠 (u k+1 ) = d k+1 ,

where k = 1, 2, . . . , N. Now we are able to define the spline function x(u) = f k (u) for u k ≤ x ≤ u k+1 . The function x(u) is continuously differentiable and of period 2π. But the second derivative of this function suffers discontinuities at the points u k , hence the interpolating contour shows no curvature at these points, so the continuity of the kernel of integral equation (9.1) becomes broken at the points u k . It is well known that there exists a way to build up a spline with continuous second derivative but it does not work in the case of an airfoil because there is too much fluctuation between the interpolation nodes. Another way to get a more smooth contour consists of approximating the obtained above function x(u) by its Fourier series. But, in actual practice such an improvement of the differential properties of the contour gives us little real gain in computation quality, so the results presented in this chapter are obtained from a spline contour whose second derivative is discontinuous at the points u k . The interpolating contour obtained as above together with the interpolation nodes is shown in Figure 9.3.

9.4 Computational algorithm and optimisation The computation is performed iteratively. At each iteration, a certain number of random points is drawn in the interval [0, 2π] whose density is determined from the results of preceding iterations (an adaptive algorithm). At the first iteration, the points are generated with the uniform density in the interval [0, 2π], which under our parametrisation means a virtually uniform distribution of the points along the airfoil contour. The results are averaged over the iterations; as the approximate solution after the ith iteration we take the mean arithmetical of the solutions obtained in the preceding iterations. Given this approximate solution, we compute the optimal density by the method suggested in Chapter 7. This algorithm is more economical than that described in Section 7.6, because it utilises a more accurate approximation to the true solution. As the practice shows, on strongly prolate contours very extreme outliers may occur at particular iterations, which cannot be smoothed by averaging even for a quite large number of iterations. But it turns out that if the solution is very inaccurate, then the sample variances are large at the test points which are calculated in the process of computation. One may introduce a rule which will ‘trace’ those terms possessing a very large variance. We design the algorithm in such a manner that the averaging is performed not over all iterations but over those where the relative error determined by the sample variance does not exceed 100%. The solutions obtained at other iterations (usually there are not more than one percent of them, except for points neighbouring the edges) are treated as outliers and are not included into the averaged sum. In the case of a strongly prolate airfoil this modification provides us with a sure gain in com-

9.5 Results of numerical computation | 199

putational quality, which we demonstrate in Section 9.5 devoted to results of numerical calculations (see Figure 9.8). With the use of the semi-statistical method, we evaluate the velocities at 150 points allocated equidistantly with respect to parameter u along the airfoil contour (that is, with virtually equal span in the arc length); the velocity values at the test points (which are distributed not uniformly along the contour) are calculated with the use of interpolation. As the characteristic of the accuracy of the current approximation we choose the sample variance. It appears that the most time is consumed by computing the values of the kernel at the generated points, so the question arises how to decrease the number of generated points preserving the accuracy. While using the semi-statistical method, this is achieved by optimisation of the grid structure.

9.5 Results of numerical computation 9.5.1 Computation of the velocity around an airfoil We observe that the semi-statistical method appears to be very sensitive to the geometry of the contour and to the span of the cascade. The chosen examples emphasise this dependence. For a cascade of airfoils, the span is determined by the linear sizes of the contour, so we change the form of the contour only. It is a priori obvious that the convergence rate of the semi-statistical method worsens at a quite prolate contour, and this has been confirmed by our experiments. For an airfoil, as the prolateness characteristics one may choose k1 =

d , r1

k2 =

d , r2

where d is the distance between the centres of the circles of the airfoil edges, r1 is the radius of the leading edge, r2 is the radius of the trailing edge; the greater these values are, the more prolate is the blade (Figure 9.5). In Figure 9.6, we give graphs of three contours for which we perform our calculations; their parameters are as follows: for a non-cooled airfoil, k1 = 40.6,

k2 = 131.2,

t = 110.012,

β1 = 46.76∘ ,

β2 = 22.93∘ ;

for a cooled airfoil, k1 = 12.4,

t = 0.659,

β1 = 52.21∘ ,

t = 210.012,

β1 = 90∘ ,

k2 = 26.3,

β2 = 29.6∘ ;

for the test contour, k1 = 2.8,

k2 = 6.4,

Here t stands for the cascade span.

β2 = 32.93∘ .

200 | 9 Ideal-Fluid Flow

Fig. 9.5. To the definition of characteristics k1 and k2 of prolateness of the airfoil. 0.5

0

–50

0 100

150 –50

0

50

100

150

(a) Non-cooled airfoil

–0.5

0

(b) Cooled airfoil

40 20 0 –20 –40 –60 –20

0

20

40

60

80

(c) Test contour resembling an airfoil but somewhat rounded Fig. 9.6. Contours of different prolateness.

–0.5

1

9.5 Results of numerical computation | 201

Let us introduce the following notation. In Figures 9.7–9.15, the variable m denotes the observation point label, w m is the velocity at the point labelled m calculated by the semi-statistical method, w1m is the velocity at the point labelled m calculated by the method of rectangles, w2m is the velocity at the point labelled m calculated with the use of the Ural Polytechnic Institute technique, |w m − w1m | is the absolute error of calculation of the velocity at the point labelled m with the use the semi-statistical method as compared with the method of rectangles. In what follows, the words ‘calculated by the semi-statistical method (4 × 400)’ will mean that the semi-statistical method was used to perform 4 iterations to calculate the velocity on 400 points drawn in every iteration. Furthermore, in Figures 9.7–9.11 we show the results of numerical experiment on two airfoil cascades (cooled and non-cooled) and on a test contour. From these examples we see that the semi-statistical method evaluates the velocity with a good accuracy at all points of the contour except for a few at the edges which are not of significance for practical purposes. We also see that the more prolate the contour is, the more iterations we need to attain the required accuracy. Namely, for a non-cooled airfoil we need 300 iterations on 400 points each, for a cooled airfoil, 150 iterations on 400 points each, while for the test contour we have to perform as few as 10 iterations on 300 points each.

9.5.2 Analysis of the density adaptation efficiency It is of interest to study how the adaptive algorithm works while choosing the optimal density. As we see, the points are more dense at the edges and the ridge, in other words, precisely where the quality of calculations at first iterations is the worst. This is illustrated in Figure 9.11. In Figure 9.11 (c) the symbol ‘×’ denotes the boundaries of the intervals numbered 1, 2, . . . , 10 in the histogram presented in Figure 9.11 (a). The bold points are the test ones (placed every ten points) numbered 1, 11, 21, . . . , 91. In Figure 9.12 we present the results of numerical experiments for a cooled airfoil obtained after five iterations with an adaptive algorithm used to find the optimal density and the results after five iterations under a uniform distribution of the generated points. It is easily seen that for one and the same number of the points drawn, the adaptive algorithm yields more accurate results. In Figure 9.13, the former graph demonstrates the sample mean square error at test points calculated after five iterations with the use of an adaptive algorithm, while the latter graph shows the sample mean square error at test points calculated after five iterations under uniform distribution of the points drawn. In Figure 9.13 we see that when employing an adaptive algorithm, the sample mean square error decreases faster than with calculations on uniform samples. So, in order to attain the required accuracy one may draw a smaller amount of points.

202 | 9 Ideal-Fluid Flow 2

1.5 1.35 1.2 1.05 0.9 0.75 0.6 0.45 0.3 0.15 0

w2m w1m

1.67 1.33 1 0.67 0.33 0

0

25

50

75 m

100

125

150

(a) The graph of velocity calculated by the method of rectangles, and the graph of velocity found with the use of Ural’s method. 0.4 0.37 0.35 0.32 0.29 0.27 0.24 0.21 0.19 0.16 0.13 0.11 0.08 0.053 0.027 0

wm w1m

0

25

50

75 m

100

125

150

(b) The graph of velocity calculated by the semi-statistical method, 300 iterations with 400 points each, and the graph of velocity calculated by the method of rectangles.

|wm – w1m|

0

25

50

75 m

100

125

150

(c) The graph of the absolute error as the velocity calculated by the semi-statistical method (300 × 400) is compared with the method of rectangles. Fig. 9.7. The results of numerical experiment on a non-cooled airfoil. 15

wm w1m

13.5 12 10.5 9 7.5 6 4.5 3 1.5 0

0

25

50

75 m

100

125

150

(a) The graph of velocity calculated by the semi-statistical method (300 × 400), and the graph of velocity calculated by the method of rectangles.

15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0

|wm – w1m|

0

25

50

75 m

100

125

150

(b) The graph of the absolute error as the velocity calculated by the semi-statistical method (300 × 400) is compared with the method of rectangles.

Fig. 9.8. The result obtained for a non-cooled airfoil as the outliers dropped are again included into the sum; the sample is the same as in Figure 9.7.

9.5 Results of numerical computation | 203

2

w2m w1m

1.67

1.33

1.33 1

1

0.67

0.67

0.33

0.33

0

0

25

50

75 m

100

125

150

(a) The graph of velocity calculated by the method of rectangles, and the graph of velocity found with the use of Ural’s method. 0.3 0.28 0.26 0.24 0.22 0.2 0.18 0.16 0.14 0.12 0.1 0.08 0.06 0.04 0.02 0

wm w1m

1.67

0

0

25

50

75 m

100

125

150

(b) The graph of velocity calculated by the semi-statistical method, 150 iterations with 400 points each, and the graph of velocity calculated by the method of rectangles.

|wm – w1m|

0

25

50

75 m

100

125

150

(c) The graph of the absolute error as the velocity calculated by the semi-statistical method (150 × 400) is compared with the method of rectangles. Fig. 9.9. The results of numerical experiment on a cooled airfoil. 3

wm w1m

2.5 2 1.5 1 0.5 0

0

25

50

75 m

100 125 150

(a) The graph of velocity calculated by the semi-statistical method (10 × 300), and the graph of velocity calculated by the method of rectangles.

0.15 0.14 0.12 0.11 0.09 0.075 0.06 0.045 0.03 0.015 0

|wm – w1m|

0

25

50

75 m

100

125

150

(b) The graph of the absolute error as the velocity calculated by the semi-statistical method (10 × 300) is compared with the method of rectangles.

Fig. 9.10. The results of numerical experiment on a test contour.

204 | 9 Ideal-Fluid Flow 3

wm w1m

2.5

0.3

2

0.2

1.5

pi

1

0.1

0.5

0

0

1

2

3

4

5 i

6

8 9 10

7

(a) The histogram of the optimal density after two iterations.

0

0 10 20 30 40 50 60 70 80 90100 m

(b) The graph of the velocity evaluated by the semi-statistical method (2 iterations on 400 points each) and the graph of the velocity evaluated by the method of rectangles.

0.5 5 51

0

–0.5

41

7

6

0

4

31

61

0.2

71 8

3

21

81 9 91

0.4

0.6

2

11 1

10 0.8

1 1

(c) Boundaries of the intervals in the histogram given in part (a). Fig. 9.11. The results of numerical experiments for a cooled airfoil.

2

4

wm w1m

1.6 1.2

2.4

0.8

1.6

0.4

0.8

0

0

20

wm w1m

3.2

40

m

60

80

100

(a) The graph of the velocity evaluated by the semi-statistical method (5 × 400) with the use of an adaptive algorithm and the graph of the velocity evaluated by the method of rectangles.

0

0

20

40

m

60

80

100

(b) The graph of the velocity evaluated by the semi-statistical method (5 × 400) without resort to an adaptive algorithm and the graph of the velocity evaluated by the method of rectangles.

Fig. 9.12. The results of numerical experiments for a cooled airfoil.

9.5 Results of numerical computation | 205 0.15 0.14 0.12 0.11 0.09 0.075 0.06 0.045 0.03 0.015 0

δm

0

m

100

(a) m is the test point number, δ m is the sample mean square error at the mth point evaluated by the semi-statistical method (5 × 400) with the use of an adaptive algorithm.

4 3.6 3.2 2.8 2.4 2 1.6 1.2 0.8 0.4 0

δm

0

m

100

(b) m is the test point number, δ m is the sample mean square error at the mth point evaluated by the semi-statistical method (5 × 400) without resort to an adaptive algorithm.

Fig. 9.13. The results of numerical experiments for a cooled airfoil.

9.5.3 Computations on test cascades We make an attempt to study the convergence of the method on more simple test cascades, in particular for a cascade of ellipses with frontal approach flow (see Figure 9.14). We change not only the linear sizes of the ellipses but the cascade span as well. We discover that the convergence becomes much worse as the blades approach each other. As earlier, the method suits better for the ellipses whose eccentricities are close to zero, that is, less prolate. The ellipses are determined by the equation in Cartesian coordinates x2 y2 + = 1. a2 b2 In Figures 9.16–9.18 we present results of numerical experiments for cascades of ellipses with varying semi-axes a and b and varying cascade span t. The figures show that there is a not-so-simple dependence between the rate of convergence of the semistatistical method and the cascade span: the numerical experiment shows that the best results are achieved for the span comparable with the linear size of the ellipses. For widely spanned blades (Figure 9.16), and especially for blades closely approaching each other (Figure 9.17) in order to attain a high accuracy we need much more points than for the cascade whose span is comparable with the blades sizes (this is exactly what we see in actual conditions). As in the case of airfoils, the convergence rate decreases for prolate blades (Figure 9.18).

206 | 9 Ideal-Fluid Flow β1 = β2 = 90°

150

116.67

83.33

Y

w

w

w

w

w

w

50

16.67

–16.67

–50 –50 –40

30

–20 –10

0 X

10

20

30

40

50

Fig. 9.14. A cascade of ellipses with frontal approach flow. 3

wm w1m

2

0.15

|wm – w1m|

0.13 0.1 0.075

1

0.05 0.025

0

0

25

50

75 m

100

125

150

(a) The graph of velocity calculated by the semi-statistical method (2 × 300) and the graph of velocity calculated by the method of rectangles.

0

0

25

50

75 m

100

125

150

(b) The graph of absolute error of velocity calculation by the semi-statistical method (2 × 300) compared with the method of rectangles.

Fig. 9.15. The results of computation on a cascade of ellipses with parameters a = 50, b = 10, t = 40.

9.6 Conclusions | 207

1.5

wm

w1m

1

0.1

|wm – w1m|

0.083 0.067 0.05

0.5

0.033 0.017

0

0

25

50

m

75

100

125

(a) The graph of velocity calculated by the semi-statistical method (2 × 300) and the graph of velocity calculated by the method of rectangles.

1.5

wm

w1m

1

0

25

0

50

75 m

100

125

150

(b) The graph of absolute error of velocity calculation by the semi-statistical method (2 × 300) compared with the method of rectangles.

0.015

|wm – w1m|

0.0125 0.01 0.0075

0.5

0.005 0.0025

0

0

25

50

m

75

100

125

(c) The graph of velocity calculated by the semi-statistical method (50 × 300) and the graph of velocity calculated by the method of rectangles

0

0

25

50

75 100 125 150 m

(d) The graph of absolute error of velocity calculation by the semi-statistical method (50 × 300) compared with the method of rectangles.

Fig. 9.16. The results of computation on a cascade of ellipses with parameters a = 50, b = 10, t = 10000 (virtually single blade).

9.6 Conclusions Summarising the results of our numerical experiments, we make the following conclusions. (i) With the use of the semi-statistical method, we are able to obtain sufficiently accurate results in the problem of potential flow over an airfoil cascade. (ii) The adaptive algorithm of optimisation of the integration grid works in full accordance with the theoretical constructs. It makes the convergence more fast while decreasing the sample variance.

208 | 9 Ideal-Fluid Flow 10

wm w1m

2

|wm – w1m|

1.67 1.33

5

1 0.67 0.33

0

0

25

50

75 m

100

125

150

(a) The graph of velocity calculated by the semi-statistical method (2 × 300) and the graph of velocity calculated by the method of rectangles. 6

wm w1m

4

0

0

25

50

75 m

100

125

150

(b) The graph of absolute error of velocity calculation by the semi-statistical method (2 × 300) compared with the method of rectangles. 0.15

|wm – w1m|

0.13 0.1 0.075

2

0.05 0.025

0

0

25

50

75 m

100

125

150

(c) The graph of velocity calculated by the semi-statistical method (350 × 300) and the graph of velocity calculated by the method of rectangles.

0

0

25

50

75 m

100

125

150

(d) The graph of absolute error of velocity calculation by the semi-statistical method (350 × 300) compared with the method of rectangles.

Fig. 9.17. The results of computation on a cascade of ellipses with parameters a = 50, b = 10, t = 25.

(iii) For prolate bodies, the convergence rate turns out to be not so high. The problem thus arises to make the convergence more fast, which is necessary for the semistatistical method to compare favourably with deterministic methods in terms of computation speed, so we will somewhat modify the adaptive optimisation algorithm.

9.7 A modified semi-statistical method Subsequent investigations show that there is a more theoretically justified and efficient way to modify the method than the one suggested in the previous sections. The key

9.7 A modified semi-statistical method | 209 3

1.5

wm w1m

|wm – w1m|

1.25 1

2

0.75

1

0.5 0.25

0

0

25

50

m

75

100

125

(a) The graph of velocity calculated by the semi-statistical method (2 × 300) and the graph of velocity calculated by the method of rectangles. 3

wm w1m

2

0

25

0

50

75 m

100

125

150

(b) The graph of absolute error of velocity calculation by the semi-statistical method (2 × 300) compared with the method of rectangles. 0.15

|wm – w1m|

0.13 0.1 0.075

1

0.05 0.025

0

0

25

50

75 m

100

125

(c) The graph of velocity calculated by the semi-statistical method (10 × 300) and the graph of velocity calculated by the method of rectangles.

0

0

25

50

75 m

100

125

150

(d) The graph of absolute error of velocity calculation by the semi-statistical method (10 × 300) compared with the method of rectangles.

Fig. 9.18. The results of computation on a cascade of ellipses with parameters a = 130, b = 10, t = 40.

difference is the following: as the criterion for a realisation to be included into the averaged sum we choose not the sample variance but one of the norms of the inverse matrix. This allows us, in particular, to preserve the smoothness of the approximate solutions.

9.7.1 Computational scheme We give a general scheme of the modified semi-statistical method without going into detail.

210 | 9 Ideal-Fluid Flow (i) For a fixed N, carry out L0 iterations of the semi-statistical method, store their results, and at each iteration calculate the norm of the inverse matrix associated with the cubic norm in ℝN . (ii) Making use of some criterion for elimination, exclude from consideration those iterations at which the norm of the inverse matrix is too large (exceeds the threshold set by this criterion). (iii) Evaluate the approximate solution at the observation point by means of averaging the approximate solutions obtained at this point by the semi-statistical method on the non-eliminated iterations. (iv) By the statistical data gathered in the computing process, estimate the variation of the approximate estimator of the true solution and, depending on its value, decide either to stop or continue the computation increasing the number of iterations. The formula to compute the approximate solution is the following: φ∗ (x) ≈ φ L,N (x) | B N =

1 L ∑ {φ N (x) | B N }k , L k=1

(9.2)

where L is the number of selected iterations and the symbol B N denotes the event which must occur for the realisation φ N (x) to be included in sum (9.2).

9.7.2 Ways to estimate the variance in the computing process There are two ways to control the variance of the estimator φ L,N (x) | B N in the computing process. First, at each iteration satisfying the elimination condition we calculate δ1i (x) = √

N 2 1 K(x, xn ) φ n − φ N (X)) , ∑ (f(x) + N(N − 1) n=1 p(xn )

i = 1, 2, . . . , L.

Here L is the number of iterations which remain after the elimination. Next, we calculate δ1 (x) =

1 L √ ∑ (δ1i (x))2 . L i=1

As the second way, we choose the conventional method of calculation of the standard error of sample mean of normally distributed data. The estimator φ L,N (x) | B N can be surely taken as normally distributed because it is a sum of a large number N of identically distributed random variables 1 K(x, xi ) φ i } | B N ), ({f(x) + N p(xi )

i = 1, 2, . . . , N.

We thus obtain δ2 (x) =

1 1 L 2 √ ∑ ({φ N (x) | B N }i − {φ L,N (x) | B N }) . √L L − 1 i=1

9.7 A modified semi-statistical method | 211

Furthermore, on the non-zero number of iterations which have not been rejected we construct confidence intervals for E{φ N (x) | B N } with confidence level α by the formulas

or

φ N (x) | B N − t α δ1 (x) ≤ E{φ N (x) | B N } ≤ φ N (x) | B N − t α δ1 (x)

(9.3)

φ N (x) | B N − t α δ2 (x) ≤ E{φ N (x) | B N } ≤ φ N (x) | B N − t α δ2 (x).

(9.4)

In these formulas, we have t α = s−1 L−1 (

1−α ), 2

where s−1 L−1 (x) is the inverse Student distribution with L degrees of freedom. We observe that the estimate of the standard error obtained by formula (9.3) is likely more trustworthy because it utilises more information concerning the structure of the random function φ N (x) | B N . If an estimate of φ N (x) | B N is biased by a sufficiently small value (the small value of the bias is determined by the required accuracy of calculation) then we can treat the confidence intervals obtained by formulas (9.3) and (9.4) as confidence bounds for φ∗ (x).

9.7.3 Recommendations and remarks to the scheme of the modified semi-statistical method There can be many criteria for rejecting the excessive iterations. But it must not eliminate too many of them and make the averaged sum consisting of too few addends. In the numerical examples below, we eliminate those iterations for which the matrix norm associated with the cubic norm in ℝN exceeds its median after L0 iterations by more than 1.5 of the interquartile range. After the elimination, we take the average over the remaining L iterations as the approximate solution. The number N should not be too large, otherwise large sets of simultaneous equations must be solved, which requires a lot of time. However, it also must not be too small in order to keep the bias of the estimator of the solution in limits required to guarantee the necessary computation accuracy. If N satisfies the latter condition, then, as the number of iterations grow, the variance of the approximate solution tends to zero and the error is due to the bias of the estimator of the solution obtained by the semi-statistical method for given N and the density of the generated random points.

9.7.4 Numerical experiment for a prolate airfoil Let us demonstrate the practical efficiency of two key components of the upgrade of the semi-statistical method, namely the averaging technique and introduction of a rejection criterion, on the example of a flow over airfoil cascade.

212 | 9 Ideal-Fluid Flow In Figures 9.19–9.24, we show the results of calculations by the modified semistatistical method for a non-cooled airfoil, which, since it is very prolate, appears to be quite difficult for the method. These solutions are compared with the solution obtained by the method of quadratures which is considered true; it coincides with the solution calculated by the software developed by the Ural Polytechnic Institute (which has been introduced into production). The words ‘Method (N × M)’ will everywhere mean that we carry out M iterations of the modified semi-statistical method over N random points drawn at each iteration; the iterations are counted with no regard for rejection, the amount of eliminated iterations is specified separately. In all graphs given in Figures 9.19–9.24 the notation is the same, namely, the observation points’ ordinals are laid off along the horizontal axis, while the values of various characteristics at these points are plotted on the vertical axis: ∙ 1 is the graph of the approximate solution by the semi-statistical method. ∙ 2 is the graph of the solution by the method of quadratures, which is considered the true solution. ∙ 3 and 4 are the confidence bounds calculated in accordance with formula (9.4). ∙ 5 and 6 are the confidence bounds calculated in accordance with formula (9.3). The significance level is set to 0.05. The scale in all illustrations is set up automatically by the Mathcad® software. In Figures 9.20 and 9.23, large values at extremal points are not shown. In Figure 9.19 we see that a single iteration of the semi-statistical method does not provide us with a good approximation even over 900 points drawn. Meanwhile, for such a great number of generated points the calculation is quite slow, and the further increase of the number of points is obviously impractical. Further, in Figure 9.20 we present the results of computation by the method (900 × 50) without rejection, that is, the results of 50 iterations on 900 points each are merely averaged. From Figure 9.20 we see that averaging without rejection does not lead to our objective; besides, we see that a single iteration may yield a completely inadequate approximate solution with a not-so-small probability. In Figure 9.21, we present the graph of a solution obtained by the method (900 × 100) with rejection implemented in accordance with the above general scheme of the modified semi-statistical method (that is, the iterations with large first norms of inverse matrices are rejected). The sequence of iterations is the same as in Figures 9.19 and 9.20; 12 iterations are eliminated. Figure 9.21 clearly shows a good fit to the true solution. We observe that the confidence bounds obtained by formula (9.4) may be both narrower and wider than those obtained by (9.3) (this is seen in Figure 9.20). So, the dependence of these bounds on each other is observed to be inconsistent. From purely theoretical grounds, the bounds obtained by formula (9.3) deserve more confidence because they gather more information on the random variable φ N (x) | B N .

9.7 A modified semi-statistical method | 213

Fig. 9.19. Method (900 × 1).

Fig. 9.20. Method (900 × 100) without rejection.

Fig. 9.21. Method (900 × 100) with rejection; 12 iterations are eliminated.

Fig. 9.22. Method (250 × 2) without rejection.

In Figures 9.22–9.24 we present the results of numerical calculations where 250 points are generated at each iteration. This experiment is motivated by the fact that the computation is faster in the case where there are not so many points generated at each iteration but the amount of iterations is quite large. In other words, the method (200 × 50) is much faster than (1000 × 10). In Figures 9.22–9.24 we see that 300 iterations over 250 points yield quite accurate results, with wider confidence bounds as compared with a (900 × 50) method, though. As in Figures 9.19–9.21, in Figures 9.22–9.24 we clearly see the necessity of application of the technique of averaging and introducing some rejection criterion. We

214 | 9 Ideal-Fluid Flow

Fig. 9.23. Method (250 × 300) without rejection.

Fig. 9.24. Method (250 × 300) with rejection; 54 iterations are eliminated.

can conclude that in this problem the small bias of the approximate solution is achieved on a quite small amount of points, so the calculation errors are due to the variance of the approximate solution and tend to zero after averaging the iteration process while removing those iterations for which the matrix norm of the inverse matrix associated with the cubic norm in ℝN is large. In view of this, it is probably more efficient to solve this problem at a small amount of points generated, but in many iterations.

10 First Basic Problem of Elasticity Theory We consider the first basic problem of elasticity theory which is formulated in [45] as follows: inside a body V, one has to find the displacement vector u satisfying the Lamé differential equation 1 ∇∇ ⋅ u + ∆u = 0 (10.1) 1 − 2ν and the following condition on the surface O of the body: 󵄨 u󵄨󵄨󵄨O = f. (10.2) Here ν is the Poisson coefficient, ∇ and ∆ are the Hamiltonian and Laplacian operators, respectively. We assume that the volume force is absent. It is known [45] that the basic problems of elasticity theory reduce to two-dimensional singular integral Fredholm equations of the second kind. It is shown in [41, 42, 55] that with the use of some artificial technique the first basic problem of elasticity theory reduces to regular integral d 1 equations with a weak singularity of the type dn R , where R is the distance between two points on the body surface, n is the vector of outward normal to the surface at the current point. In the first section of this chapter we derive regular integral equations for the first basic problem of elasticity theory in invariant form [2]. In the second section, for a series of classical bodies such as a sphere, hollow sphere, or unbounded medium with a spherical cavity, we find analytic solutions of the first basic problem by direct application of the potential method, that is, by means of finding the potential density from the integral equation with subsequent utilisation of integral transformation. These problems may be used as test ones in numerical experiments. In conclusion, we present results of numerical experiments.

10.1 Potentials and integral equations of the first basic problem of elasticity theory In this section we present potentials of elasticity theory used to solve the first basic problem and establish their properties. We obtain regular differential equations equivalent to the first basic problem. We observe that many of the results we present here have been obtained in the studies [41, 42, 55], but we represent them in a new invariant form which is convenient for the subsequent development.

10.1.1 The force and pseudo-force tensors It is well known that the stress vector τ n Q at a point Q on a plane with the normal n Q is determined via the displacement vector u at the point Q as follows [45]: τ n Q = n Q ⋅ τ = λn Q ∇Q ⋅ u + 2μn Q ⋅ ∇Q u + μn Q × (∇Q × u), https://doi.org/10.1515/9783110554632-010

216 | 10 First Problem of Elasticity Theory where τ is the stress tensor, λ, μ are Lamé’s coefficients, and λ=

2μν . 1 − 2ν

(10.3)

We introduce ∆

Tn Q u = λn Q ∇Q ⋅ u + 2μn Q ⋅ ∇Q u + μn Q × (∇Q × u).

(10.4)

Here Tn Q can be treated as the differential stress operator whose action on the displacement u at point Q yields the stress vector τ n Q . Now we introduce the generalised stress operator [41] ∆

Pn Q u = (λ + μ − α)n Q ∇Q ⋅ u + (μ + α)n Q ⋅ ∇Q u + αn Q × (∇Q × u),

(10.5)

where α is some constant. It is obvious that for α = μ the generalised stress operator reduces to stress operator (10.4): Pn Q u|α=μ ≡ Tn Q u = τ n Q . (10.6) It is known (see [45]) that the displacement u of a point Q of an unbounded elastic medium which is subject to a unit point force e(P) at P is expressed through the Kelvin– Somigliana tensor U: u = U(P, Q) ⋅ e(P), (10.7) where U(P, Q) =

RR 1 1 [(3 − 4ν)E + 2 ], 16πμ(1 − ν) R R

R = rQ − rP ,

R = |R|.

(10.8) (10.9)

Here r Q and r P are the radius vectors of the points P and Q, respectively, and E is the unit tensor. We substitute relation (10.7) into (10.5) and, in view of (10.8) and (10.9), obtain [2, 32] P n Q u = P(P, Q) ⋅ e(P). (10.10) The tensor ∆

P(P, Q) =

1 [n Q R(3α − μ − 4να) + Rn Q (μ − 3α + 4να) 16πμ(1 − ν)R3 nQ ⋅ R − n Q ⋅ RE(3μ − α − 4νμ) − 3 2 RR(α + μ)] (10.11) R

will be referred to as a generalised force tensor. Let α = μ. Then from (10.10), taking into account (10.6), we obtain nQ ⋅ R 1 󵄨 Pn Q u󵄨󵄨󵄨α=μ = τ n Q = [(1 − 2ν)(n Q R − Rn Q − n Q ⋅ RE) − 3 2 RR] ⋅ e(P) 3 8π(1 − ν)R R ∆

= Φ(P, Q) ⋅ e(P),

10.1 Potentials and integral equations | 217

where the tensor nQ ⋅ R 1 [(1 − 2ν)(n Q R − Rn Q − n Q ⋅ RE) − 3 2 RR] 8π(1 − ν)R3 R n Q R − Rn Q RR d 1 1 = [(1 − 2ν)( ] (10.12) ) + ((1 − 2ν)E + 3 2 ) 3 8π(1 − ν) R R dn Q R ∆

Φ(P, Q) =

is called the force tensor [45]. Following [42], we set α= or, taking into account (10.3),

α=

μ(λ + μ) , λ + 3μ μ . 3 − 4ν

(10.13)

Substituting expression (10.13) of α into (10.11), we arrive at μ ∆ = N(P, Q) 3 − 4ν 3 RR 1 =− [(1 − 2ν)E + ]n Q ⋅ R 2 R2 2π(3 − 4ν)R3 1 3 RR d 1 = . ] [(1 − 2ν)E + 2π(3 − 4ν) 2 R2 dn Q R

󵄨 P(P, Q)󵄨󵄨󵄨α =

(10.14)

The tensor N(P, Q) defined by formula (10.14) will be referred to as pseudo-force one. The pseudo-force tensor is remarkable for the property that the derivative along the normal dndQ R1 admits a ‘pure’ representation in contrast to the force tensor Φ(P, Q) (10.12). This is exactly the property which allows us in the next subsection to obtain regular integral equations of the first basic problem of elasticity theory.

10.1.2 Integral equations of the first basic problem We will seek for a solution of Lamé’s differential equation (10.1) in the form of the second-kind potential of elasticity theory [41, 42, 55]: ∆

u(P) = ∫ N(P, Q) ⋅ Ψ(Q) dO Q = B II (P),

P ∈ Vi + Ve ,

(10.15)

O

where N(P, Q) is a pseudo-force tensor (10.14). Then we arrive at integral equations of the first internal I i and first external I e basic problems, Ii :

1 − Ψ(P0 ) + ∫ N(P0 , Q) ⋅ Ψ(Q) dO Q = f(P0 ), 2

P0 ∈ O,

(10.16)

1 Ψ(P0 ) + ∫ N(P0 , Q) ⋅ Ψ(Q) dO Q = f(P0 ), 2

P0 ∈ O,

(10.17)

O

e

I :

O

218 | 10 First Problem of Elasticity Theory

Ve P* Pn

P2 P1

L

0

Vi

Fig. 10.1. The approach points to the surface O along line L.

which are known as the Lauricella equations [55]. Since the derivative along the normal dnd Q R1 in the pseudo-force tensor can be expressed in a ‘pure’ form, its singularity as Q → P0 is of order O(1/R α ), 1 ≤ α < 2. Therefore, integral equations (10.16), (10.17) are regular with a weak singularity. It is worth noticing that if a solution of the first basic problem (10.1), (10.2) is represented in the form of the second kind potential of elasticity theory, then in view of the potential’s properties the obtained solution u(P) satisfies Lamé’s equation (10.1) inside both domains V i and V e but not at the surface O and suffers a discontinuity while passing through the layer O. With the use of a somewhat artificial technique we can make the solution u(P) in the second kind potential form satisfy Lamé’s equations in the domains V i + O and V e + O and decrease the order of singularity of the potential as the point P goes to the surface O. Let us consider a sequence of observation points P1 , P2 , . . . , P n approaching the surface O along some line L. Let P∗ denote the point of intersection of the line L and the surface O (see Figure 10.1). Making use of the generalised Gauß theorem, we rewrite potential (10.15) as follows: for the internal problem I i , where P ∈ V i + O, u(P) = −Ψ(P∗ ) + ∫ N(P, Q) ⋅ (Ψ(Q) − Ψ(P∗ )) dO Q ;

(10.18)

O

for the external problem I e , where P ∈ V e + O, u(P) = ∫ N(P, Q) ⋅ (Ψ(Q) − Ψ(P∗ )) dO Q .

(10.19)

O

We observe that potentials (10.18), (10.19) are more convenient for computation because the order of singularity of the integrand is lower [55].

10.2 Solution of some spatial problems of elasticity theory using the method of potentials In this section, with the use of the method of potential we find analytic solutions of the first basic problem of elasticity theory for a series of classical bodies, namely a

10.2 Method of potentials | 219

W

a 0

0

W

a1

a

1

a2 W

2

01

02 3

Fig. 10.2. Series of classical bodies.

sphere, unbounded medium with a spherical cavity, and a hollow sphere under radial symmetric deformation.

10.2.1 Solution of the first basic problem for a series of centrally symmetric spatial regions We give known solutions of the first basic problem for some spatial regions. Radial symmetric deformation of a sphere. See Figure 10.2 (1). The boundary conditions are 󵄨 󵄨 u󵄨󵄨󵄨O = u󵄨󵄨󵄨r=a = be r , b = const, (10.20) where e r is the unit vector directed along the radius e r (Q) = r Q /r Q , Q ∈ O, a is the sphere radius. The solution is b u(P) = r P . (10.21) a Radial symmetric deformation of an unbounded elastic medium with a spherical cavity. See Figure 10.2 (2). The boundary conditions are 󵄨 󵄨 u󵄨󵄨󵄨O = u󵄨󵄨󵄨r=a = be r , b = const. The displacements sought for are u(P) = b

a2 e r (P). r2P

(10.22)

Radial symmetric deformation of a hollow sphere. See Figure 10.2 (3). The boundary conditions are 󵄨 󵄨 󵄨 󵄨 u󵄨󵄨󵄨O1 = u󵄨󵄨󵄨r=a1 = b1 e r , u󵄨󵄨󵄨O2 = u󵄨󵄨󵄨r=a2 = b2 e r , b1 , b2 = const, (10.23) where a1 and a2 are the radii of the external and internal limiting spheres O1 and O2 , respectively, a1 > a2 . The solution is u(P) = ((a21 b1 − a22 b2 )r P + (a1 b2 − b1 a2 )

a21 − a22 r2P

)

e r (P)

a31 − a32

.

(10.24)

These results will be used in what follows to control solutions of these problems by the method of potential.

220 | 10 First Problem of Elasticity Theory 10.2.2 Solution of the first basic problem for a sphere We assume that a constant radial displacement (10.20) is given at the surface of the sphere. The solution u(P) inside the sphere is sought for as the second kind potential (10.15): u(P) = ∫ N(P, Q) ⋅ Ψ(Q) dO Q ,

P ∈ Vi .

(10.25)

O

We find the unknown density Ψ(Q) from integral equation (10.16) of the first internal boundary problem. Taking into account boundary condition (10.20), from (10.16) we arrive at −

1 Ψ(P0 ) + ∫ N(P0 , Q) ⋅ Ψ(Q) dO Q = be r (P0 ), 2

P0 ∈ O.

(10.26)

O

In view of the central symmetry of the problem, we seek for the density of the potential in integral equation (10.26) in the form Ψ(P0 ) = c1 e r (P0 ), where

P0 ∈ O,

e r (P0 ) =

c1 = const,

(10.27)

r P0 . a

Substituting (10.27) into integral equation (10.26), we find c1 (see [2]): c1 = b

3(3 − 4ν) . 4(2ν − 1)

Thus, we have found the density of the potential in integral equation (10.26): Ψ(P0 ) = b

3(3 − 4ν) e r (P0 ), 4(2ν − 1)

P0 ∈ O.

(10.28)

Substituting (10.28) into (10.25), we obtain the displacement field inside the sphere u(P) = ∫[(AE + B O

RR d 1 c1 4(2ν − 1) b rP = rP . ) ] ⋅ c1 e r (Q) dO Q = 2 dn R a 3(3 − 4ν) a R Q

(10.29)

We observe that displacement field (10.29) found with the use of the method of potential coincides with the known solution (10.21) of this problem.

10.2.3 Solution of the first basic problem for an unbounded medium with a spherical cavity We assume that a uniform radial displacement is given at the boundary of the spherical cavity O.

10.2 Method of potentials | 221

Considering this problem as the first external basic problem I e on a spherical surface O and reasoning as in Section 10.2.2, we arrive at the following expression of the density of the potential [2]: Ψ(P0 ) = c2 e r (P0 ) = b

3(3 − 4ν) e r (P0 ). 5 − 4ν

Substituting this potential density into (10.15), we find the displacement field in the unbounded body V e : u(P) = ∫[(AE + B O

RR d 1 a2 ⋅ c e (Q) dO = b e r (P). ] ) 2 r Q R2 dn Q R r2P

We observe that it coincides with solution (10.22) known in the classical elasticity theory.

10.2.4 Solution of the first basic problem for a hollow sphere We consider a hollow sphere bounded by spheres O1 and O2 of radii a1 and a2 , respectively, a1 > a2 (see Figure 10.2 (3)). We assume that uniform radial displacements (10.23) are given at the surfaces O1 and O2 . Let us find the displacement field u(P) inside the hollow sphere. We seek for the solution in the form of the second kind elasticity potential taking into account the fact that the surface O in the problem under consideration is constituted by two spherical surfaces O1 and O2 : u(P) = ∫ N(P, Q) ⋅ Ψ1 (Q) dO1Q + ∫ N(P, Q) ⋅ Ψ2 (Q) dO2Q , O1

(10.30)

O2

where N(P, Q) is defined by relation (10.14); in the integral over O1 , the expression of N includes the outward normal n1Q to the sphere O1 at the point Q ∈ O1 , while in the integral over O2 it includes the outward normal n2Q to O2 at the point Q ∈ O2 . We observe that for the domain bounded by the sphere O2 the problem under consideration is external, whereas for the domain bounded by the sphere O1 it is internal. Hence the integral equations corresponding to boundary conditions (10.30) can be written out as follows: −

1 Ψ1 (P0 ) + ∫ N(P0 , Q) ⋅ Ψ1 (Q) dO1Q + ∫ N(P0 , Q) ⋅ Ψ2 (Q) dO2Q 2 O1

O2

= b1 e r (P0 ), P0 ∈ O1 , 1 Ψ2 (P0 ) + ∫ N(P0 , Q) ⋅ Ψ1 (Q) dO1Q + ∫ N(P0 , Q) ⋅ Ψ2 (Q) dO2Q 2 O1

= b2 e r (P0 ),

(10.31)

O2

P0 ∈ O2 .

(10.32)

222 | 10 First Problem of Elasticity Theory Taking into account the symmetry of the problem, we seek for the solution of integral equations (10.31), (10.32) in the form {

Ψ1 (P0 ) = c1 e r (P0 ),

c1 = const,

P0 ∈ O1 ,

Ψ2 (P0 ) = c2 e r (P0 ),

c2 = const,

P0 ∈ O2 .

(10.33)

Substituting (10.33) into (10.31), (10.32) and solving the resulting simultaneous equations in c1 and c2 , we obtain c1 = − c2 =

3(3 − 4ν) (b1 a21 − b2 a22 )a1 , 4(1 − 2ν) a31 − a32

3(3 − 4ν) (b2 a1 − a2 b1 )a21 . 5 − 4ν a31 − a32

Thus, the densities of potentials in integral equations (10.31), (10.32) are defined by the formulas Ψ1 (P0 ) = − Ψ2 (P0 ) =

3(3 − 4ν) (b1 a21 − b2 a22 )a1 e r (P0 ), 4(1 − 2ν) a31 − a32

3(3 − 4ν) (b2 a1 − a2 b1 )a21 e r (P0 ), 5 − 4ν a31 − a32

P0 ∈ O1 , P0 ∈ O2 .

Substituting the obtained densities Ψ1 and Ψ2 into potential (10.30), we calculate the displacement field u(P) in the hollow sphere. We observe that in the case where the point P belongs to the hollow sphere, i.e., a2 < r P < a1 , it is outside the sphere O2 and inside the sphere O1 . Hence we obtain u(P) = ∫ N(P, Q) ⋅ Ψ1 (Q) dO1Q + ∫ N(P, Q) ⋅ Ψ2 (Q) dO2Q O1

O2

RR d 1 = ∫ [(AE + B 2 ) ] ⋅ c1 e r (Q) dO1Q R dn1Q R O1

+ ∫ [(AE + B O2

=− =

RR d 1 ) ] ⋅ c2 e r (Q) dO2Q 2 R dn2Q R

4(1 − 2ν) r P 5 − 4ν a22 c1 e r (P) + c2 e r (P) 3(3 − 4ν) a1 3(3 − 4ν) r2P

a31

1



a32

[(b1 a21 − b2 a22 )r P + (b2 a1 − b1 a2 )

a21 a22 r2P

e r (P)].

(10.34)

We observe that displacement field (10.34) obtained with the use of the method of potential coincides with the known solution (10.24) of the problem under consideration.

10.3 Semi-statistical method | 223

10.3 Solution of integral equations of elasticity theory using the semi-statistical method Let us consider the class of integral equations corresponding to the first basic problem of elasticity theory. As we have seen in Section 10.1, the problem reduces to solving the regular integral equation (δ +

1 )Ψ(P0 ) + ∫ N(P0 , Q) ⋅ Ψ(Q) dO Q = f(P0 ), 2

P0 ∈ O,

(10.35)

O

and evaluating the potential u(P) = δΨ(P∗ ) + ∫ N(P, Q) ⋅ (Ψ(Q) − Ψ(P∗ )) dO Q ,

P ∈ V + O.

(10.36)

O

Here δ = −1 for the internal problem, and δ = 0 for the external one, N(P, Q) is the pseudo-force tensor (10.14), O is the Lyapunov closed surface bounding the body V, P0 , P∗ , Q ∈ O, P is an observation point belonging to either O or V, and P ≡ P∗ as the point P goes to the surface O, f is a given vector, and Ψ, u are the functions sought for. We introduce a fixed Cartesian system of axes X1 , X2 , X3 with the directing unit vectors i1 , i2 , i3 . The position of a point P ∈ V + O in this system is determined by the spherical coordinates r, θ, φ, that is, P = P(r, θ, φ). Then for any closed surface O as soon as the origin W of coordinates finds itself inside a finite body bounded by O, the integration domain in (10.35), (10.36) appears to be a rectangle with sides π and 2π. In addition, for points of convex surfaces two coordinates θ and φ uniquely determine the third r = r(θ, φ), in other words, for the points Q ∈ O we may write Q = Q(θ, φ). Let us apply the semi-statistical method to solving integral equation (10.35). We draw N random independent points θ i ∈ [0, π) and φ i ∈ [0, 2π). They correspond to a set of N random independent points Q i = Q(θ i , φ i ) which form a random integration ̃ grid. In accordance with the semi-statistical method, the approximate solution Ψ(Q) of integral equation (10.35) on the grid Q i , i = 1, . . . , N, is determined by the set of simultaneous equations (δ +

1 ̃ 1 N ̃ j ) 1 = f(Q i ), )Ψ(Q i ) + ∑ N(Q i , Q j ) ⋅ Ψ(Q 2 N − 1 j=1 p(Q j )

i = 1, . . . , N.

(10.37)

i=j̸

Here p(Q j ) is the density of distribution of the random nodes of the grid Q i . We expand the vectors Ψ, f , u, and the tensor N into components over the basis i1 , i2 , i3 and obtain 3

{ { Ψ(Q) = ∑ Ψ k (Q)i k , { { { { k=1 { 3 { { { { { u(P) = ∑ u k (Q)i k , k=1 {

3

f(Q) = ∑ f k (Q)i k , k=1 3

N(P, Q) = ∑ N ke (P, Q)i k i e , k,e=1

Q ∈ O, (10.38) P ∈ V + O.

224 | 10 First Problem of Elasticity Theory In view of (10.38), the set of vector relations (10.37) in projections onto the coordinate axes X1 , X2 , X3 takes the form (δ +

1 ̃ 1 N 3 N ke (Q i , Q j ) ̃ Ψ e (Q j ) = f k (Q i ) )Ψ k (Q i ) + ∑∑ 2 N − 1 j=1 e=1 p(Q j )

(10.39)

j=i̸

for i = 1, . . . , N and k = 1, 2, 3. We introduce the 3N-dimensional vectors ̃ 1 (Q1 ), Ψ ̃ 2 (Q1 ), Ψ ̃ 3 (Q1 ), . . . , Ψ ̃ 1 (Q N ), Ψ ̃ 2 (Q N ), Ψ ̃ 3 (Q N )]T , Ψ = [Ψ f = [f1 (Q1 ), f2 (Q1 ), f3 (Q1 ), . . . , f1 (Q N ), f2 (Q N ), f3 (Q N )]

T

and the (3N × 3N) matrix H whose elements are defined as follows: N ke (Q i ,Q j )

H

3(i−1)+k,3(j−1)+e i,j=1,...,N, k,e=1,2,3

{ { { (N−1)p(Q j ) = {δ + 21 { { {0

if i ≠ j, if i = j, k = e, if i = j, k ≠ e.

Then equations (10.39) take the form ̃ = f. HΨ

(10.40)

̃ of integral equation (10.35) at the nodes of the Thus, the approximate solution Ψ grid Q i , i = 1, . . . , N, is evaluated by the formula ̃ = H −1 f. Ψ

(10.41)

̃ As the approximate solution Ψ(Q) on the whole surface O we take the function obtained ̃ i ), i = 1, . . . , N, by means of, say, by interpolation between the calculated values Ψ(Q drawing a plane through three nearest node points, where Ψ(Q) is sought for (piecewise linear interpolation). In order to evaluate potential (10.36), one may use any numerical integration method, including the deterministic methods. But it seems most advantageous to make use of the Monte Carlo method on the same sample of random points Q i , i = 1, . . . , N, as in (10.37). Then we obtain u(P) = δΨ(P∗ ) +

1 N 1 + β(P), ∑ N(P, Q j ) ⋅ (Ψ(Q j ) − Ψ(P∗ )) N j=1 p(Q j )

(10.42)

where β is the random error of evaluation of the integral by the Monte Carlo method in (10.36) with N trials. ̃ In order to obtain the estimator u(P) of the potential, it suffices to substitute the ̃ approximate values Ψ(Q i ) calculated by formula (10.41) and the value of the function ̃ ∗ ) calculated by interpolation into (10.42) and drop the random error β(P): Ψ(P N ̃ ∗ ) + 1 ∑ N(P, Q j ) ⋅ (Ψ(Q ̃ j ) − Ψ(P ̃ ∗ )) 1 , ̃ u(P) = δ Ψ(P N j=1 p(Q j )

10.4 Formulas for the optimal density | 225

or, in the coordinate form, with regard to (10.38), N 3 ̃ k (P∗ ) + 1 ∑ ∑ N ke (P, Q j )(Ψ(Q ̃ j) − Ψ ̃ e (P∗ )) 1 , ̃ k (P) = δ Ψ u N j=1 e=1 p(Q j )

k = 1, 2, 3.

̃ The accuracy of the approximate solution u(P) can be estimated via the sample ̃ k which is determined by the formula variance σ2k (P) of each of its components u σ2k (P) = =

N 3 2 ̃ ̃ 1 ̃ k (P∗ ) + ∑ N ke (P, Q j )(Ψ e (Q j ) − Ψ e (P∗ )) − u ̃ k (P)] ∑ [δ Ψ N(N − 1) j=1 p(Q j ) e=1 3 2 ̃ ̃ 1 N 1 ̃ k (P∗ ) + ∑ N ke (P, Q j )(Ψ e (Q j ) − Ψ e (P∗ )) ) − u ̃ 2k (P)]. [ ∑ (δ Ψ N N − 1 j=1 p(Q ) j e=1

̃ As the estimator of the error of the approximation u(P), it is sometimes convenient to take ∑3 g k (P)σ2k (P) , ε(P) = k=13 ∑k=1 g k (P) where g k (P) are the components of the cost function g(P) which takes into account various requirements on the accuracy of evaluation of the components of the vector u(P) depending on the point where the evaluation is carried out. We observe that the integral entering into (10.36) can be evaluated with the use of the Monte Carlo method over a sample of size exceeding N. In this case, the values of ̃ i ) for i > N at the extra nodes are calculated by means of interpolation the density Ψ(Q ̃ i ) at N nodes. between the known values Ψ(Q

10.4 Formulas for the optimal density For the sake of definiteness, we consider integral equation (10.35) for the first internal problem in the form − Ψ(P0 ) + ∫ N(P0 , Q) ⋅ (Ψ(Q) − Ψ(P0 )) dO Q = f(P0 ).

(10.43)

O

Upon applying the Monte Carlo method formulas to evaluating the integral entering (10.43), this equation takes the form −Ψ(P0 ) +

1 N 1 N(P0 , Q j ) ⋅ (Ψ(Q j ) − Ψ(P0 )) = f(P0 ) + α(P0 ). ∑ N j=1 p(Q j )

We introduce the random variable N ̃ 0 ) = 1 ∑ 1 N(P0 , Q j ) ⋅ (Ψ(Q j ) − Ψ(P0 )). J(P N j=1 p(Q j )

(10.44)

226 | 10 First Problem of Elasticity Theory Then the statistical error resulting from replacement of the integral J(P0 ) = ∫ N(P0 , Q) ⋅ (Ψ(Q) − Ψ(P0 )) dO Q O

by its approximate value (10.44), and hence, the statistical error of solution of equation (10.43) occurring due to such a replacement can be described in terms of the variance ̃ 0 ). In this connection, of the estimator J(P ̃ 0 )} = J(P0 ), E{J(P ∆ 1 ̃ 0 )} = VarN {J(P D(P0 , P) N (Ψ(Q) − Ψ(P0 )) ⋅ N T (P0 , Q)N(P0 , Q) ⋅ (Ψ(Q) − Ψ(P0 )) 1 = [∫ dO Q N p(Q) O

− J(P0 )J(P0 )]. Consider the problem of choosing the optimal density p(P0 ) which minimises a certain functional Φ: popt (P0 ) = arg min Φ(D(P0 , P)). We introduce

D(P) = ∫ D(P0 , P) dO p .

(10.45)

O

As the criterion for optimality we take the criterion of weighted sum of variances [48] Φ(D(P)) = tr L ⋅ D(P) = L ⋅ ⋅D, where

L = λ2K e K e K .

This criterion is also known as L-optimality [71]. Then we see that Φ(D(P)) = ∫ O

I(Q) dO Q − ∫ J(P0 ) ⋅ L ⋅ J(P0 ) dO P0 , p(Q)

(10.46)

O

where I(Q) = ∫(Ψ(Q) − Ψ(P0 )) ⋅ N T (P0 , Q) ⋅ L ⋅ N(P0 , Q) ⋅ (Ψ(Q) − Ψ(P0 )) dO P0 . O

Theorem 10.4.1. The optimal density popt (Q) which minimises functional (10.46) (the so-called L-optimal density) for the problem to evaluate the integral by the Monte Carlo method is of the form popt (Q) = C√ I(Q), (10.47) where C is the normalising factor. The corresponding minimum value of the functional is 2

Φ∗ (D) = [ ∫ √ I(Q) dO Q ] − ∫ J(P0 ) ⋅ L ⋅ J(P0 ) dO P0 . O

O

(10.48)

10.5 Results of numerical experiments | 227

Theorem 10.4.1 is proved in [1]. Remark 10.4.2. If all components of the vector Ψ(P0 ) are considered equivalent, then the tensor L reduces the unit tensor E, so formula (10.47) is simplified. Remark 10.4.3. As the optimality criterion, one may take the minimax criterion for the variance of estimators of each of the components of the vector J(P0 ) (the so-called MV-optimality [1]): Φ2 (D(P)) = min max D KK (P), (10.49) P

K

where D KK (P) are the components of the tensor D(P) given in (10.45). In this case, the following assertion holds true [1]: Lemma 10.4.4. The optimal density in the sense of criterion (10.49) is of the form popt (Q) = C√ I∗ (Q), where I∗ (Q) = ∫(Ψ(Q) − Ψ(P0 )) ⋅ N T (P0 , Q) ⋅ L∗ ⋅ N(P0 , Q) ⋅ (Ψ(Q) − Ψ(P0 )) dO P0 , O

L = LK eK eK ,

3

∑ L K = 1,

K=1

L K ≥ 0, 2

L∗ = arg max{( ∫ √ I(Q) dO Q ) − ∫ J(P0 ) ⋅ L ⋅ J(P0 ) dO P0 }. L

O

O

Remark 10.4.5. In numerical computations, it is wise to evaluate the integrals entering into (10.47), (10.48) by the Monte Carlo method on the same sample as for the evaluation of Ψ(Q i ).

10.5 Results of numerical experiments In numerical experiments, as the test problem we choose the first basic problem for the sphere under uniform load. We assume that the elastic sphere is subject to a radial symmetric deformation, and a constant radial displacement is defined on its surface: 󵄨 u󵄨󵄨󵄨r=a = be r ,

b = const,

where a is the sphere radius. In view of the aforesaid, the solution of this problem reduces to evaluation of the potential u(P) = −Ψ(P∗ ) + ∫ N(P, Q) ⋅ (Ψ(Q) − Ψ(P∗ )) dO Q , O

P ∈ V i + O,

228 | 10 First Problem of Elasticity Theory whose density satisfies integral equation (10.26). For our testing purposes, we set b = −1,

ν = 0.3,

(10.50)

and apply the uniform law of distribution of the nodes of the random integration grid p(Q) =

1 , 2π2

Q ∈ O.

(10.51)

The results of numerical computations are compared with the true analytic solution (10.28), (10.29) which for given parameters (10.50) is of the form Ψ(Q) = 3.375e r (Q), rP u(P) = − e r (P), a

Q ∈ O,

(10.52)

P ∈ V i + O.

(10.53)

The main results of the numerical experiments are the following. Evaluation of the potential density Ψ(Q). The number of random points Q i at the sphere surface distributed by uniform law (10.51) is set to N = 50. Upon solving sĩ for three multaneous algebraic equations (10.40) we obtain an array of estimators Ψ

components of the vector Ψ(Q) at 50 nodes of the random grids. In order to estimate ̃ as N realisations of a the accuracy of the obtained result, we consider the array Ψ random vector, and calculate the mathematical expectations E and variances Var of each component of the density Ψ: E{Ψ r } = 3.5144, { { { E{Ψ θ } = 0.1576, { { { {E{Ψ φ } = 0.0015,

VarΨ r = 0.0388, (10.54)

VarΨ θ = 0.0346, VarΨ φ = 0.0369.

Here Ψ r , Ψ θ , Ψ φ are the projections of the density vector Ψ onto moving coordinate axes with directing unit vectors e r , e θ , e φ . Upon comparing numerical estimators (10.54) with analytic solution (10.52), we conclude that in the case where the number of nodes of the uniform integration grid is N = 50, the relative error of the solution of integral equation (10.37), say, in Ψ r , does not exceed 4.1%. The relative statistical error in variance is not more than 5.8%. For subsequent decrease of the error of evaluation of the density Ψ one can either increase the number N of points drawn or successively optimise their allocation by means of an appropriate choice of the function p(Q). Evaluation of the potential u(P). The potential u(P) in (10.36) is evaluated at 200 points, and the density Ψ(Q) at the extra integration grid nodes is calculated by piecewise linear interpolation between the values of Ψ calculated at 50 nodes. In Figure 10.3, we give numerically evaluated radial displacements of points P of the sphere lying along the rays: ∗ : θ = 0, φ = 0,

∙: θ =

π , φ = 0, 3

∆: θ =

π π , φ= . 2 2

10.5 Results of numerical experiments | 229

u2 1

2 • * ∆

0.5

*





*









*

0

0

0.5

1

ζ1 a

Fig. 10.3. Results of numerical experiment.

Non-radial displacements of the sphere points turn out to be equal to zero to twoplace accuracy. The straight line in Figure 10.3 corresponds to the true analytic solution (10.53). We see that far away from the surface (r P < 12 a) the computed displacement u(P) virtually coincides with the exact solution, but in case of approaching the surface, the relative error grows up to 7–9%. In order to study the question on efficient finding of the optimal density of distribution of the random grid points, we consider the problem on a sphere with displacements given on a small area of the lateral surface. As the first approximation we choose the uniform distribution. Solving the problem we arrive at a new density which is approximated by a piecewise linear function. Repeating the solution procedure for the density just found leads to a decrease of functional (10.48) by 35%. Repeating again the above procedure, we decrease functional (10.48) by 2.1%, which means a quite fast approaching to the optimum integration grid. The calculation is carried out under the assumption that the components of the vector Ψ(Q) are equivalent, in other words, the tensor L is set equal to the unit tensor E.

11 Second Basic Problem of Elasticity Theory As we have seen in Chapters 7 and 10, in order to successfully employ the semistatistical method, the kernels of the corresponding integral equations should have no worse than a weak singularity. It is well known [42, 63] that the application of fundamental solutions of the first and second kind leads to singular integral equations for the second basic problem of elasticity theory. In this regard, a fundamental solution of the third kind (in Kupradze’s notation) stands out, which is also referred to as the Weyl tensor [69], as well as the generalised force tensor derived from it. Following [33], we present them in an invariant form, suggest a more general mechanical interpretation, and explore the singularities. It is worth noticing that recent studies [35, 36] assert that the Weyl integral equations are not consistent with the equations of motion. Our study demonstrates that this is not true.

11.1 Fundamental solutions of the first and second kind Let a unit point force e be applied to a point M of a uniform isotropic unbounded medium. Let the displacement vector u at an observation point N admit the representation u(N) = Γ(N, M) ⋅ e(M), Γ(N, M) =

1 1 [(3 − 4ν)E + e R e R ], 16πμ(1 − ν) R

where

(11.1) (11.2)

R , R = |R|. R Here Γ(N, M) is the Kelvin–Somigliana tensor, E is the unit tensor, R denotes the vector radius of the point N starting from the ‘source’ point M; ν is the Poisson coefficient, μ is the shear modulus. Invariant representation (11.1), (11.2) was first proposed by ∘ A. I. Lurie [45]. V. D. Kupradze in [41, 42] called the Cartesian matrix ‖Γ‖ of the tensor ∘ Γ = 4πΓ the matrix of fundamental solutions. The tensor Γ in elasticity theory plays the part of fundamental harmonic potential 1/R (it is clear that the expression in square brackets in (11.2) is O(1) for any R); it serves as the kernel of the corresponding single layer potential of the first kind [41, 42, 45]. Solutions corresponding to other force singularities can be of use; they are obtained by formulas (11.1) and (11.2) if the force source point p is moved to a point M 󸀠 close to M with the vector radius R󸀠 = R − ρ, and calculations are made in terms of first order eR =

https://doi.org/10.1515/9783110554632-011

232 | 11 Second Problem of Elasticity Theory in ρ. The result is represented by the superposition of four terms [45]: u1 (N) = Γ(N, M) ⋅ p(M), { { { { { { {u2 (N) = − 1 1 e R × m M , { { { 8πμ R2 { eR 1 − 2ν { { I (F), {u3 (N) = { 2 1 { 24πμ(1 − ν) R { { { { 1 1 { {u4 (N) = (2(1 − 2ν)E + 3e R e R ) ⋅ Dev F ⋅ e R , 16πμ(1 − ν) R2 {

(11.3)

where m M = ρ × p(M),

I1 (F) = ρ ⋅ p,

F=

1 (ρp + pρ), 2

Dev F = F −

1 I1 (F)E. 3

After the passage to the limit ρ → 0, p → ∞ preserving the end coordinates of the dyad ρp, the force factors in (11.3) turn out to be localised at M, and they are referred to as follows: the point moment m M , the intensity of the centre of expansion ρ ⋅ p, and the force tensor F (the deviator of this tensor Dev F enters into u4 ). The displacements induced by these factors decrease as R−2 as one recedes from the source point. By the fundamental solution of the first kind Kupradze [41, 42] means the force 1 tensor 4π T(N, M) which determines the stress vector τ N at a point N on a plane with the normal n N in accordance with (11.1) and (11.2): τ N = 2μ

∂u + λn N (∇ ⋅ u N ) + μn N × (∇ × u N ) ∂n N

= T(N, M) ⋅ e(M),

(11.4)

∂Γ 1 T(N, M) = 2μ + λn N (∇ ⋅ Γ) + μn N × (∇ × Γ) 4π ∂n N 1 1 = [(1 − 2ν)(n N e R − e R n N − n N ⋅ e R E) − 3n N ⋅ e R e R e R ], 8π(1 − ν) R2 (11.5) where ∇ is the Hamiltonian. The tensor T is the kernel of the double layer potential of the first kind. But it is not a full analogue of the double layer potential in harmonic analysis since it explicitly includes not only the normal derivative of the harmonic potential nN ⋅ eR ∂ 1 1 , (11.6) ( ) = nN ⋅ ∇ = − ∂n N R R R2 but a worse than weak singularity of the form n N e R /R2 and e R n N /R2 as well. In [45] it is shown that the double layer in elasticity theory is formed by expansion centres and deviators of force tensors distributed at the surface. This potential indeed admits the


representation

V^I(N) = (1/(4π)) ∫_O b(M) ⋅ T(M, N) dO_M
   = (1/(8π(1 − ν))) ∫_O [(1 − 2ν)(b n_M + n_M b − n_M ⋅ b E) + 3 e_R ⋅ b n_M ⋅ e_R E] (e_R/R²) dO_M.   (11.7)

Here b(M) is the potential density and n_M is the normal to the surface O at the current point M; the change of sign as compared with (11.5) is due to the interchange of N and M in the kernel T while the source point M is preserved. If we introduce the force tensors

F = (1/2)(n_M b + b n_M),

then the expression in square brackets in (11.7) takes the form

[⋅] = −(2/3)(1 + ν) I_1(F) E − (2(1 − 2ν)E + 3 e_R e_R) ⋅ Dev F,

which, in view of (11.4), points to the presence of the above-mentioned singularities in (11.7), whereas the force and moment singularities are absent.

In elasticity theory, with the goal of constructing a double layer containing only normal derivative (11.6), following Lauricella [44], Kupradze introduced the generalised force tensor

(1/(4π)) P_α(N, M) = (α + μ) ∂Γ/∂n_N + (λ + μ − α) n_N (∇ ⋅ Γ) + α n_N × (∇ × Γ),   (11.8)

which reduces to (11.5) at α = μ, while for α = μ/(3 − 4ν) it turns into the so-called pseudo-stress tensor N(N, M) (see [41, 55]). Substitution of formula (11.2) into (11.8) yields the expression

(1/(4π)) P_α(N, M) = (1/(16πμ(1 − ν))) (1/R²) [(n_N e_R − e_R n_N)(3α − 4να − μ) + ((α + 4νμ − 3μ)E − 3(α + μ) e_R e_R) n_N ⋅ e_R].

It is clear that the difference of the dyads n_N e_R − e_R n_N vanishes at α = μ/(3 − 4ν), so the pseudo-stress tensor N(N, M), called the fundamental solution of the second kind, contains a weak singularity ∂(1/R)/∂n_N only:

(1/(4π)) N(N, M) = −(1/(4π(3 − 4ν))) [2(1 − 2ν)E + 3 e_R e_R] n_N ⋅ e_R/R².   (11.9)

Despite the simpler structure of (11.9), the double layer potential of the second kind [41, 42, 55], whose kernel is (11.9), is formed both by moment singularities

m_M = n_M × b

and by complete force tensors

F = (1/2)(n_M b + b n_M)

allocated on the surface:

V^{II}(N) = (1/(4π)) ∫_O b(M) ⋅ N(M, N) dO_M
   = (1/(4π(3 − 4ν))) ∫_O [(2(1 − 2ν)E + 3 e_R e_R) ⋅ F ⋅ e_R − (1 − 2ν) e_R × m_M] (dO_M/R²).

So only force singularities are absent.

11.2 Boussinesq potentials

While constructing a fundamental solution of the third kind, an essential part is played by the Boussinesq harmonic potentials, which are partial solutions of the governing equations of elasticity theory for an unbounded medium with a half-line removed along which expansion centres of constant intensity q = I_1(F) are allocated (q is the first invariant of the spherical force tensor defined by three dipoles of identical intensity q/3 along mutually perpendicular directions). The displacement of the medium point N with the radius vector R_N = R − Se, where R and the abscissa S are counted from the origin of the half-line directed along the unit vector e, is determined as follows [45]:

u(N) = A lim_{S→∞} ∫_0^S (R_N/R_N³) dS = A ∇_N Φ_1 = (A/R) (e_R − e)/(1 − e_R ⋅ e),   (11.10)

with

A = (1 − 2ν) q / (24πμ(1 − ν))

and the first Boussinesq potential Φ_1(N) = ln(R − R ⋅ e). Displacement field (11.10) is solenoidal and irrotational because there is neither volume expansion nor rotation of the medium:

θ = ∇ ⋅ u = 0,   ω = (1/2) ∇ × u = 0.

The potential Φ_1 increases with R as ln R, whereas displacement (11.1) decreases as R⁻¹.


It is instructive to give an electrostatic analogy of solution (11.10). Treating the half-line as a charged line of uniform density A, we may think of the solution as the attracting force at the point N produced by the field of this 'antenna.' If we choose the negative z-axis of the space as the half-line, then in this special case e = −k (here and in what follows k is the directing unit vector of the z-axis, R ⋅ e = −z, Φ_1(N) = ln(R + z)). Then the second Boussinesq potential, which is also a harmonic function, is found by direct integration:

Φ_2(N) = ∫ Φ_1(N) dz = zΦ_1 − R.   (11.11)

In [55], the generalised Boussinesq potentials

Φ_2*(N) = −R ⋅ n_N Φ_1*(N) − R,   Φ_1*(N) = ln(R − R ⋅ n_M)   (11.12)

are introduced, where n_M is, as above, the normal at the source point M ∈ O to the surface bounding the given bounded body. It is clear that the use of (11.12) is admissible only for almost convex surfaces, that is, those for which the extension of the outward normal to infinity never intersects (or even touches) O.
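A quick numerical check of the statements about Φ_1 may be helpful; the sketch below (illustrative, not from the monograph) uses the convention e = −k, so Φ_1 = ln(R + z), and verifies by central differences that Φ_1 is harmonic away from the singular half-line, which is why the field u = A∇Φ_1 has zero divergence there.

```python
import numpy as np

def phi1(x, y, z):
    """First Boussinesq potential for the half-line along the negative z-axis: Phi_1 = ln(R + z)."""
    R = np.sqrt(x * x + y * y + z * z)
    return np.log(R + z)

def laplacian(f, p, h=1e-3):
    """Central-difference Laplacian of f at the point p = (x, y, z)."""
    p = np.asarray(p, dtype=float)
    total = 0.0
    for i in range(3):
        dp = np.zeros(3); dp[i] = h
        total += (f(*(p + dp)) - 2.0 * f(*p) + f(*(p - dp))) / h ** 2
    return total

# close to zero (finite-difference error only): Phi_1 is harmonic off the negative z-axis
print(laplacian(phi1, (0.7, -0.4, 0.9)))
```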

11.3 Weyl tensor

A fundamental solution of the third kind (in Kupradze's notation) is represented in [41, 42, 55] as a superposition of two Cartesian matrices which are proportional, respectively, to the matrix ‖Γ̊‖ and to some matrix ‖Z‖ whose elements are expressed via second derivatives of the potential Φ_2. We refer to this solution as the Weyl tensor and write it out in the following form:

W(N, M) = (1/(3(λ + μ))) [Z(N, M) − (1/2)(λ + 2μ) Γ̊(N, M)],   (11.13)

where Γ̊(N, M) = 4πΓ, with Γ being the Kelvin–Somigliana tensor, and the tensor Z is expressed (as can easily be seen) in terms of particular derivatives of the potential:

Z(N, M) = z∇∇Φ_1 − k∇Φ_1 + ∇kΦ_1 + (2/R)E − ∇∇R.   (11.14)

Substitution of (11.2) and (11.14) into (11.13) yields

W(N, M) = (1/(6μ)) [(1 − 2ν)(z∇∇Φ_1 + ∇kΦ_1 − k∇Φ_1) − (2/R)E + 2ν∇∇R].

This unexpanded form of the Weyl tensor is preferable when performing invariant operations on it. The mechanical sense of the Weyl tensor consists in determining the displacement u(N) in the Boussinesq problem [45] (a unit normal force k is applied to the point M on the boundary of the half-space z ≥ 0):

u(N) = −(3/(2π)) W(N, M) ⋅ k(M)
   = (1/(4πμR)) [(3 − 4ν)k + (z/R)∇R − (1 − 2ν)R∇Φ_1].

This displacement is a superposition of two solutions: the former (the first two terms) corresponds to the displacement of the point N in an unbounded medium under the action of the force 4(1 − ν)k applied to the origin of coordinates M, while the latter (the last term) is due to the action of expansion centres of intensity (1 − 2ν)/(4πμ) uniformly distributed along the negative z-axis.

We point out that the Weyl tensor admits a more expansive interpretation which has not been mentioned so far in the literature: with its use, the displacement u(N) is determined in the Boussinesq–Cerruti problem on the action of an arbitrarily directed unit force e applied to the boundary of the half-space z ≥ 0:

u(N) = −(3/(2π)) W(N, M) ⋅ e(M).   (11.15)

This assertion is proved by direct calculation of (11.15) and subsequent comparison of the result with the solution of this problem given in [45] in the general setup under an arbitrary distribution of the surface load q̃ + pk. The invariant form of that solution is

4πμ ũ(N) = −2φ̃ − ∇̃[z(ω − ∇̃ ⋅ φ̃_1) + (1 − 2ν)ω_1 + 2ν ∇̃ ⋅ φ̃_2],
4πμ w(N) = z(∇̃ ⋅ φ̃ − ∂ω/∂z) − (1 − 2ν)∇̃ ⋅ φ̃_1 + 2(1 − ν)ω.   (11.16)

Here the tilde marks the 'flat' vectors (ã ⋅ k = 0), so that ũ is the displacement of the point N in the plane z = const while w is the displacement along the z-axis. Besides, the following potentials are introduced:

φ̃(N) = ∫_Ω q̃ (dO/R),   φ̃_1(N) = ∫_Ω q̃ Φ_1 dO,   φ̃_2(N) = ∫_Ω q̃ Φ_2 dO,
ω(N) = ∫_Ω p (dO/R),   ω_1(N) = ∫_Ω p Φ_1 dO.

We see that the potential Φ_2 enters only into ũ, and only in the presence of a tangential load q̃. Under the action of the concentrated force e = ẽ + e ⋅ k k, we should set

q̃(M′) = ẽ(M′)δ(M − M′),   p(M′) = k ⋅ e(M′)δ(M − M′),

where δ(M − M′) is the δ function and M′ is the current point of Ω. After evaluating (11.16), the superposition ũ + wk = u is formed.


11.4 Weyl force tensors

The force tensor Q corresponding to the Weyl tensor is derived by formula (11.5) in which Γ is replaced by W defined by (11.13):

(1/(4π)) Q(N, M) = 2μ ∂W/∂n_N + λ n_N (∇ ⋅ W) + μ n_N × (∇ × W).

We thus obtain

τ_N = −(3/(2π)) Q(N, M) ⋅ e(M),

Q(N, M) = ((1 − 2ν)/3) {n_M ⋅ [z∇∇∇Φ_1 − ∇∇Φ_1 k + R³(∇(1/R))(∇∇(1/R))] − (∇(1/R)) n_N} − e_R e_R n_N ⋅ ∇(1/R).   (11.17)

The tensor Q(N, M) defines the stress vector on a plane with the normal n_N in the half-space z ≥ 0. Formally, Q(N, M) exists (as does W(N, M)) everywhere in the space except for the axis z ≤ 0. In the general case, using the potential Φ_1* given in (11.12) instead of Φ_1, we arrive at the generalised force tensor

Q*(N, M) = −((1 − 2ν)/3) {n_N ⋅ [n_M ⋅ R∇∇∇Φ_1* + (∇∇Φ_1*) n_M − R³(∇(1/R))(∇∇(1/R))] + (∇(1/R)) n_N} − e_R e_R n_N ⋅ ∇(1/R),   (11.18)

where n_M is the outward normal to the closed surface O. For an arbitrary location of M on O, the tensor Q* is defined only at the interior points of the body bounded by O. In contrast to the pseudo-stress tensor N(N, M), the Weyl force tensors (11.17) and (11.18) contain three essentially different normal derivatives, including the normal derivative of the fundamental harmonic potential 1/R. This fact predetermines the character of the singularities in (11.17) and (11.18) as the points N and M approach each other.

First, let n_N = −k, that is, we are dealing with planes parallel to the boundary of the half-space z ≥ 0. With the use of this assumption and the formulas

k ⋅ ∇ = ∂/∂z,   R³ k ⋅ ∇(1/R) = −z,   ∇∇Φ_1 ⋅ k = ∇(1/R),   k ⋅ z∇∇∇Φ_1 = z∇∇(∂Φ_1/∂z) = z∇∇(1/R),

we reduce tensor (11.17) to its elementary form

Q(N, M) = −e_R e_R n_M ⋅ ∇(1/R).

In order to study the singularity of Q*(N, M) at a surface O with a continuous normal n_M, we use the expansion of the normal n_N into a Taylor series with Lagrange's remainder,

n_N = n_M + R ⋅ ∇n_{M′},   r_{M′} = r + θR,   0 < θ < 1,   (11.19)

where ∇n_M is the curvature tensor, N, M, M′ ∈ O. Substitution of (11.19) into (11.18) yields the following representation of Q*(N, M):

Q*(N, M)|_{n_N → n_M} = −e_R e_R n_N ⋅ ∇(1/R) + A R n_M ⋅ ∇(1/R) + B (1/R),   (11.20)

where A and B are bounded tensors. For example, in the case where O is a sphere of radius a (hence ∇n_M = −(1/a)E), their explicit expressions in expanded form are

A = (2(1 − 2ν)/(3a)) [e_R e_R − E + (1 + e_R ⋅ n_M)⁻¹ (e_R + n_M)](1 + e_R ⋅ n_M)⁻¹,
B = (1/(3a)) [(1 − 2ν) e_R e_R − E + (1 + e_R ⋅ n_M)⁻¹ (e_R + n_M) n_M].   (11.21)

Thus, from (11.20), as well as from (11.21), it follows that the generalised force tensor Q*(N, M) as N → M contains not only the normal derivative ∂(1/R)/∂n_M, but also terms of lower order, in contrast to the pseudo-stress tensor N(N, M), in which that derivative is the only one.

11.5 Arbitrary Lyapunov surface

For an arbitrary Lyapunov surface O, where the outward normal n_M is allowed to intersect the surface at several points, the construction of analogues of the Boussinesq potentials Φ_1 and Φ_2 should be carried out with the use of the technique suggested in [69] and developed in [41]. Now the expansion centres determining partial solution (11.10) are allocated not along the whole infinite outward normal but on a segment of length σ, where σ is the least length of a normal segment that does not meet the surface O except at the source point M. Then formula (11.10) takes the form u = A∇_N Φ_1**, with

Φ_1**(N) = ln(R − R ⋅ n_M) − ln(R* + σ − R ⋅ n_M),

where R* is the distance from the end of the normal segment of length σ to the current point N. Similarly to (11.11), (11.12), we introduce the second potential

Φ_2**(N) = −R ⋅ n_N ln(R − R ⋅ n_M) − R + R* − (σ − R ⋅ n_M) ln(R* + σ − R ⋅ n_M).

The construction of the corresponding force tensor Q** is carried out by the same formula (11.18) with Φ_1* replaced by Φ_1**. For almost convex surfaces (as σ → ∞), Q** reduces to Q*.

Thus, for the second basic problem of elasticity theory we have constructed potentials suitable for application of the semi-statistical method, since their kernels possess no worse than weak singularities.


Exercise 11.5.1. With the use of the obtained force tensors, deduce integral equations corresponding to the second basic problem of elasticity theory both for the case of almost convex surfaces and for arbitrary Lyapunov surfaces.

12 Projectional and Statistical Method of Solving Integral Equations Numerically

In this chapter, we suggest and investigate one more method of solving integral equations numerically. A special case of this method is the semi-statistical one studied in Chapter 7. In the method considered here, a solution of an integral equation is sought for in the form of a linear combination of known basis functions, as in projection methods. Besides, one takes into account an auxiliary component of the solution evaluated with the use of the Monte Carlo method. We give a thorough description of the method, establish its convergence, and estimate the convergence rate. A detailed comparison with the semi-statistical method is carried out, peculiarities of numerical implementation are discussed, and results of application of this method to a series of test problems are given.

12.1 Basic relations

We consider the problem of solving a linear Fredholm integral equation of the second kind:

φ(x) − λ ∫_D K(x, y)φ(y) dy = f(x),   x ∈ D,   (12.1)

where f(x) ∈ L_2(D), K(x, y) ∈ L_2(D × D), and D is a bounded closed subset of the Euclidean space ℝ^s. We assume that we know some number J of functions φ_i(x) which are orthonormalised in L_2(D). We seek a solution of equation (12.1) in the form

φ(x) = α_1 φ_1(x) + α_2 φ_2(x) + ⋯ + α_J φ_J(x) + ∆φ(x) = Σ_{j=1}^{J} α_j φ_j(x) + ∆φ(x),   (12.2)

where α_j ∈ ℝ are the unknown coefficients of the expansion of φ(x) in the orthogonal functions φ_j(x), j = 1, …, J, and ∆φ(x) is the error of this expansion, while

∫_D φ_i(x)φ_j(x) dx = δ_ij = {1 if i = j, 0 if i ≠ j},   i, j = 1, …, J,
∫_D φ_i(x)∆φ(x) dx = 0,   i = 1, …, J.

We introduce

∫_D φ_i(x)K(x, y) dx = L_i(y),   ∫_D K(x, y)φ_j(y) dy = R_j(x),   (12.3), (12.4)

and

∫_D ∫_D φ_i(x)K(x, y)φ_j(y) dy dx = D_ij,   ∫_D φ_i(x)f(x) dx = F_i.

In this notation, taking into account (12.2), we rewrite equation (12.1) as follows:

Σ_{j=1}^{J} α_j φ_j(x) + ∆φ(x) − λ Σ_{j=1}^{J} α_j R_j(x) − λ ∫_D K(x, y)∆φ(y) dy = f(x).   (12.5)

Multiplying this equality by φ_i(x) and integrating over D with the use of (12.3) and (12.4), we obtain

α_i − λ Σ_{j=1}^{J} α_j D_ij − λ ∫_D L_i(y)∆φ(y) dy = F_i,   i = 1, …, J.   (12.6)

If ∆φ(x) were known, then (12.6) would be a set of simultaneous linear algebraic equations in α_i. At the same time, equality (12.5) with known α_i is an integral equation in ∆φ(x). We assume, as in the semi-statistical method (see Chapter 7), that we are given a family of statistically independent random variables x_1, x_2, …, x_N distributed in D with a density p(x) such that p(x) > 0 for x ∈ D and ∫_D p(x) dx = 1.
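The construction below rests on the elementary Monte Carlo estimate of an integral over D from the sample x_1, …, x_N; as a reminder, here is a minimal illustrative sketch of that estimate (not from the book), with an arbitrary density and integrand chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integral(g, sample, p):
    """Elementary Monte Carlo estimate of the integral of g over D from points with density p."""
    return float(np.mean(g(sample) / p(sample)))

# example on D = [0, 1] with the non-uniform density p(x) = 2x (sampled by inversion of the CDF x^2)
p = lambda x: 2.0 * x
sample = np.sqrt(rng.random(10_000))
g = lambda x: np.exp(-0.3 * x)                     # example integrand
print(mc_integral(g, sample, p), (1.0 - np.exp(-0.3)) / 0.3)   # estimate vs exact value
```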

Then the integrals entering into (12.5) and (12.6) can be approximated by the formula of the Monte Carlo method in its elementary version [62], and formula (12.5) takes the form

Σ_{j=1}^{J} α_j φ_j(x) + ∆φ(x) − λ Σ_{j=1}^{J} α_j R_j(x) − (λ/N) Σ_{l=1}^{N} K(x, x_l)∆φ(x_l)/p(x_l) − λρ(x) = f(x),   (12.7)

while at the points x = x_i, i = 1, …, N,

Σ_{j=1}^{J} α_j φ_j(x_i) + ∆φ(x_i) − λ Σ_{j=1}^{J} α_j R_j(x_i) − (λ/(N − 1)) Σ_{l=1, l≠i}^{N} K(x_i, x_l)∆φ(x_l)/p(x_l) − λρ_i(x_i) = f(x_i).   (12.8)

Formula (12.6) is thus rewritten as follows:

α_i − λ Σ_{j=1}^{J} α_j D_ij − (λ/N) Σ_{l=1}^{N} L_i(x_l)∆φ(x_l)/p(x_l) − λβ_i = F_i,   (12.9)

where

ρ(x) = ∫_D K(x, y)∆φ(y) dy − (1/N) Σ_{l=1}^{N} K(x, x_l)∆φ(x_l)/p(x_l),
ρ_i(x) = ∫_D K(x, y)∆φ(y) dy − (1/(N − 1)) Σ_{l=1, l≠i}^{N} K(x, x_l)∆φ(x_l)/p(x_l),
β_i = ∫_D L_i(x)∆φ(x) dx − (1/N) Σ_{l=1}^{N} L_i(x_l)∆φ(x_l)/p(x_l)

are the errors of evaluation of the corresponding integrals by the Monte Carlo method on the realisations of the variables x_i. Simultaneous equations (12.8) and (12.9) can be represented in the matrix form

(Φ − λR)ᾱ + (E_N − (λ/(N − 1)) K_N)∆φ − λρ̄ = f̄,
(E_J − λD)ᾱ − (λ/N) L ∆φ − λβ̄ = F,

where

∆φ = [∆φ(x_1), …, ∆φ(x_N)]^T,   ᾱ = [α_1, …, α_J]^T,   F = [F_1, …, F_J]^T,   β̄ = [β_1, …, β_J]^T,
R = ‖R_j(x_i)‖ (i = 1, …, N, j = 1, …, J),   Φ = ‖φ_j(x_i)‖ (i = 1, …, N, j = 1, …, J),
L = ‖L_i(x_j)/p(x_j)‖ (i = 1, …, J, j = 1, …, N),   D = ‖D_ij‖ (i, j = 1, …, J),

the matrix K_N is defined by formulas (7.5), (7.6), and the vectors f̄, ρ̄ by (7.7), (7.8). Thus, K_N is of the same form as in the semi-statistical method. We introduce the block matrix

A = [ E_N − (λ/(N − 1)) K_N    Φ − λR  ]
    [ −(λ/N) L                 E_J − λD ]   (12.10)

and obtain the simultaneous equations in ᾱ and ∆φ:

A [∆φ; ᾱ] = [f̄; F] + λ [ρ̄; β̄],   (12.11)

where [∆φ; ᾱ] denotes the column vector composed of the stacked blocks ∆φ and ᾱ.

λ T k (x)∆φ + λρ(x), N

(12.13)

244 | 12 Projectional and Statistical Method where R(x) = [R1 (x), . . . , R J (x)]T ,

k(x) = [

K(x, xN ) T K(x, x1 ) ,..., ] . p(x1 ) p(xN )

Therefore, upon solving equations (12.12), the function φ(x) sought for can be estimated at any point x ∈ D as ̃ ̃+ φ(x) = f(x) + λR T (x)α

λ T k (x)∆ ̃ φ. N

(12.14)

Thus, the problem of approximate inversion of the integral operator in equation (12.1) reduces to solution of a set of simultaneous linear algebraic equations (12.12) (that is, inversion of the matrix A) and subsequent estimation of the function φ(x) by formula (12.14). The question whether or not the matrix A can be inverted is discussed in Section 12.3 The above description of the idea of the method reveals the essence of its name. The solution is sought for in the form of a linear combination of known basis functions as in the projection methods, but with an additional component evaluated with the use of the Monte Carlo method. From the aforesaid it becomes obvious that if a basis is absent (hence, ∆φ ≡ φ), then the statistical projection method reduces to the semi-statistical one which has been studied in Chapter 7. This relation will frequently be used in what follows in order to compare the methods and sometimes, in an indirect way, to verify our theoretical reasoning. Let us derive an expression of the error of the approximate solution at a point provided that the matrix A of the method is non-degenerate. For this purpose we solve simultaneous equations (12.11) and (12.12) and substitute their solutions into expressions (12.13) and (12.14), respectively. We thus obtain φ(x) = f(x) + λ ( N1 k T (x) ̃ φ(x) = f(x) + λ ( N1 k T (x)

ρ̄ f̄ R T (x)) A−1 [( ) + λ ( ̄ )] + λρ(x), F β f̄ R T (x)) A−1 ( ) , F

hence φ(x) − ̃ φ(x) = λ ( N1 k T (x)

ρ̄ R T (x)) A−1 ( ̄ ) + λρ(x). β

(12.15)

(12.16)

As the matrix of the semi-statistical method, the matrix A of the set of simultaneous equations of the statistical projection method admits a recurrent inversion whose formulas are given in the next section.

12.2 Recurrent inversion formulas We immediately observe that not the matrix A but the matrix B derived from it by some permutation of blocks of rows and columns is more suitable for recurrent inversion.

12.2 Recurrent inversion formulas | 245

This poses no difficulty because if −1

A−1 = [

A1 A3

A2 ] A4

B−1 = [

A4 A2

A3 ] A1

then

=[

B1 B3

B2 ], B4

=[

B4 B2

B3 ]. B1

− Nλ L

]

−1

(12.17)

The transition from the matrix B

−1

=[ [

E J − λD Φ − λR

EN −

−1

λ N−1 K N ]

of dimension N + J to the matrix B−1 N+m

=[ [

λ L N+m − N+m

E J − λD Φ N+m − λR N+m

E N+m −

−1

]

λ N−1+m K N+m ]

of dimension N + m + J, where m is the number of added nodes of the integration grid, must be carried out in two stages. We observe that the matrices L N+m , Φ N+m , and R N+m are built up on the base of N + m random points. Stage 1: We make use of representation (12.17) and set λ N−1+m N−1 (− N L)

E J − λD T1 = [ Φ − λR [

N−1+m N−1 (E N



−1

]

λ N−1 K N )]

B4 N−1 N−1+m B 2

=[

B3 ]. N−1 N−1+m B 1

Now calculate the matrix λ − N+m L

E J − λD

T N+1 = [ Φ − λR [

EN −

λm(2N−1+m)

N(N−1)(N+m) aj = [ m [ N−1+m l j

L∙j

],

]

λ N−1+m K N ]

with the use of the recurrent relations 1 T j+1 = T j − T j a j b Tj T j , 1 + b Tj T j a j where

−1

j = 1, . . . , N,

0 bj = [ ] , lj

(12.18)

[l j ]i = δ ij . ] Here and in what follows, B∙j stands for the jth column of the matrix B, and B i∙ for its ith row. Validity of relations (12.18) follows from the formula (A + ab T )−1 = A−1 − of Exercise 7.3.1.

1 A−1 ab T A−1 1 + b T A−1 a

246 | 12 Projectional and Statistical Method Stage 2: The obtained matrix T N+1 is a leading submatrix of dimension N + J of the matrix B−1 N+m . So to find this matrix sought for, it suffices to utilise the bordering method similar to (7.14). For J = 0 the recurrent inversion formulas given here reduce to formulas for the semi-statistical method presented in Chapter 7.

12.3 Non-degeneracy of the matrix of the method

Let us turn to the question whether or not the matrix A of the statistical projection method is invertible. In what follows, it will be convenient to use the formula of the bordering method in the form A

−1

A1 =[ A3

A2 ] A4

−1

where

=[

A−1 1 0

0 A−1 A2 −1 ]+[ 1 ] S [A3 A−1 1 0 −E

−E] ,

(12.19)

S = A4 − A3 A−1 1 A2

is the Schur complement of the matrix A, and the dimension of the unit matrices E coincides with that of A4 . Formula (12.19) shows that for a square 2 × 2 matrix to be invertible it is sufficient that its upper left block and the Schur complement are invertible. As applied to the matrix A defined by relation (12.10), this means invertibility of the matrices HN = EN −

λ KN , N−1

S = I J − λD +

λ LH N−1 (Φ − λR). N

As shown in Theorem 7.4.6, the matrix H N is invertible with probability as close to one as we wish for N large enough, and the norm of the inverse matrix does not exceed some constant. Let us demonstrate that a similar assertion holds true for the matrix S as well. Let us transform S. We introduce 1 γ ij = {H N Φ − (Φ − λR)}ij λ 1 N K(x i , x l )φ j (x l ) = ∫ K(x i , x)φ j (x) dx − , ∑ N − 1 l=1 p(x l ) D

l=i̸

which is the error of evaluation of R j (x i ) by the elementary Monte Carlo method. For the matrix Γ of these elements we find that Φ − λR = H N Φ − λΓ, therefore, S = I J − λD +

λ λ2 LΦ − LH N−1 Γ. N N


Now we introduce θ ij = {D −

1 N L i (x l )φ j (x l ) 1 LΦ} = ∫ L i (y)φ j (y) dy − ∑ , N N l=1 p(x l ) ij D

which is the error of evaluation of the integral D ij by the Monte Carlo method, and with the use of the matrix Θ of these elements we arrive at the following representation of the matrix S: λ2 S = I J − λΘ − LH N−1 Γ. (12.20) N We assume that the matrix H N is invertible and the inequality ‖H N−1 ‖ω ≤ C is true. 2 Let us estimate ‖λΘ + λN LH N−1 Γ‖, where ‖ ⋅ ‖ denotes the operator norm of the matrix induced by the Euclidean norm of a vector in ℝJ . It is not difficult to see that 2 2 󵄩󵄩 󵄩 󵄩󵄩λΘ + λ LH −1 Γ󵄩󵄩󵄩 ≤ |λ|‖Θ‖ + λ ‖LH −1 Γ‖ 󵄩󵄩 󵄩 N 󵄩 N N N 󵄩 󵄩 J

J

1/2

≤ |λ|( ∑ ∑ θ2ij ) i=1 j=1

+

1/2 λ2 J J ( ∑ ∑ {LH N−1 Γ}2ij ) . N i=1 j=1

(12.21)

We estimate each addend individually. An elementary transformation yields E{θ2ij } = E{( ∫ L i (x)φ j (x) dx − D 2

= ( ∫ L i (x)φ j (x) dx) − D

=

N L i (x l )φ j (x l ) 2 } ∫ L i (x)φ j (x) dx ∑ E{ N p(x l ) l=1 D

N L2i (x l )φ2j (x l ) L i (x l )L i (x m )φ j (x l )φ j (x m ) 1 N + 2 ∑ [ ∑ E{ } + E{ }] p(x l )p(x m ) N l=1 m=1 p2 (x l )

1 [∫ N D

=

1 N L i (x l )φ j (x l ) 2 ) } ∑ N l=1 p(x l )

1 [∫ N D

m=l̸ 2 2 L i (x)φ j (x)

p(x)

L2i (x)φ2j (x) p(x)

2

dx − ( ∫ L i (x)φ j (x) dx) ] D

dx − D2ij ].

By virtue of the Chebyshev inequality, for any ε > 0 the bound P{θ2ij ≤

L2i (x)φ2j (x) 1 dx − D2ij ]} ≥ 1 − ε [∫ εN p(x) D

holds true, hence L2i (x)φ2j (x) 1 J J P{‖Θ‖ ≤ dx − D2ij ]} ≥ 1 − J 2 ε. ∑ ∑[∫ εN i=1 j=1 p(x) 2

D

(12.22)

248 | 12 Projectional and Statistical Method Let us turn to the second addend. It is clear that {LH N−1 Γ}ij = L i∙ H N−1 Γ∙j , where M i∙ and M∙j denote the ith row and the jth column of the matrix M, respectively. Therefore, introducing the diagonal matrix P = diag{√ p(x1 ), √ p(x2 ), . . . , √ p(x N )}, we obtain |{LH N−1 Γ}ij | = |L i∙ H N−1 Γ∙j | = |L i∙ PP−1 H N−1 Γ∙j |

≤ ‖L i∙ P‖‖P−1 H N−1 Γ∙j ‖ = ‖L i∙ P‖‖H N−1 Γ∙j ‖ω ≤ C‖L i∙ P‖‖Γ∙j ‖ω .

Since

N

E{‖L i∙ P‖2 } = E{ ∑

l=1

L2i (x l ) } = N ∫ L2i (x) dx, p(x l ) D

we see that for any ε > 0, P{‖L i∙ P‖2 ≤

N ∫ L2i (x) dx} ≥ 1 − ε. ε D

Similarly, since N

E{‖Γ∙j ‖2ω } = ∑ E{ i=1

1 N K(x i , x l )φ j (x l ) 2 1 ( ∫ K(x i , x)φ j (x) dx − ) } ∑ p(x i ) N − 1 l=1 p(x l ) D

= ⋅⋅⋅ =

N ∫[∫ N−1

K 2 (x, y)φ2j (y) p(y)

D

D

l=i̸

dy − R2j (x)] dx,

the following inequality holds for any ε > 0: P{‖Γ∙j ‖2ω ≤

K 2 (x, y)φ2j (y) N dy − R2j (x)] dx} ≥ 1 − ε. ∫[∫ ε(N − 1) p(y) D

D

We thus conclude that P{{LH N−1 Γ}2ij ≤

K 2 (x, y)φ2j (y) C2 N 2 2 L (x) dx dy − R2j (x)] dx} ≥ 1 − 2ε. [ ∫ ∫ ∫ i p(y) ε2 (N − 1) D

D

D

Hence we obtain P{‖LH N−1 Γ‖2 ≤

J J K 2 (x, y)φ2j (y) C2 N 2 2 L (x) dx dy − R2j (x)] dx} [ ∑ ∑ ∫ ∫ ∫ i p(y) ε2 (N − 1) i=1 j=1 2

≥ 1 − 2J ε.

D

D

D

(12.23)

12.3 Non-degeneracy of the matrix | 249

Strictly speaking, the last two formulas contain conditional probabilities under the condition of existence of H N−1 and boundedness of its norm. Combining now relations (12.22) and (12.23) in (12.21), we arrive at the following probabilistic estimate of the norm: 󵄩󵄩 󵄩󵄩 C1 λ2 P{󵄩󵄩󵄩󵄩λΘ + LH N−1 Γ󵄩󵄩󵄩󵄩 ≤ | ∃H N−1 : ‖H N−1 ‖ω ≤ C} ≥ 1 − 3J 2 ε, N 󵄩 √ ε(N − 1) 󵄩 where J

J

C1 = |λ|( ∑ ∑ [ ∫ i=1 j=1

L2i (x)φ2j (x)

D

p(x)

J

dx − D2ij ]) J

+ Cλ2 ( ∑ ∫ L2i (x) dx ∑ ∫ [ ∫ i=1 D

j=1 D

1/2

K 2 (x, y)φ2j (y)

D

p(y)

1/2

dy − R2j (x)] dx)

.

Thus, for any ε > 0 there exists N S (ε) such that the relation 󵄩󵄩 󵄩󵄩 1 λ2 P{󵄩󵄩󵄩󵄩λΘ + LH N−1 Γ󵄩󵄩󵄩󵄩 ≤ | ∃H N−1 : ‖H N−1 ‖ω ≤ C} ≥ 1 − ε N 󵄩 󵄩 2 holds true for all N > N S . But, in view of representation (12.20), under the condition 2 󵄩󵄩 󵄩 󵄩󵄩λΘ + λ LH −1 Γ󵄩󵄩󵄩 ≤ 1 󵄩󵄩 󵄩󵄩 N N 󵄩 󵄩 2

the matrix S is invertible and ‖S−1 ‖ ≤ 2. Therefore, the relation P{∃S−1 : ‖S−1 ‖ ≤ 2 | ∃H N−1 : ‖H N−1 ‖ω ≤ C} ≥ 1 − ε

(12.24)

holds true for all N > N S . We observe that relation (12.24) itself does not depend on the constant C. It affects only the value of N S starting from which (12.24) holds. Combining the obtained result with Theorem 7.4.6 on the base of Proposition 7.4.2, and setting N1 = max(N0 , N S ), we arrive at the following theorem. Theorem 12.3.1. Let the hypotheses of Theorem 7.4.6 be satisfied. Let the following integrals converge for all j = 1, 2, . . . , J: ∫∫ D D

K 2 (x, y)φ2j (y) p(y)

dy dx.

Then for any ε > 0 there exists N1 (ε) such that P{∃S−1 : ‖S−1 ‖ ≤ 2 | ∃H N−1 : ‖H N−1 ‖ω ≤ C} ≥ 1 − 4ε for all N > N1 .


12.4 Convergence of the method For the purposes of further investigation of the convergence, we transform expression (12.16) of the error of approximate solution taking into account representation (12.19) as follows: λ2 T k (x)H −1 ρ̄ N λ 1 ̄ + λ2 ( k T (x)H −1 (Φ − λR) − R T (x))S−1 (− LH −1 ρ̄ − β). N N

φ(x) − ̃ φ(x) = λρ(x) +

Taking into account the representation Φ − λR = HΦ − λΓ obtained in the above section, we see that λ2 T k (x)H −1 ρ̄ N 1 λ λ ̄ + λ2 (− k T (x)Φ + k T (x)H −1 Γ + R T (x))S−1 ( LH −1 ρ̄ + β). N N N

φ(x) − ̃ φ(x) = λρ(x) +

We introduce the vector

1 ̄ δ(x) = R(x) − k̄ T (x)Φ N

whose components are

δ j (x) = R j (x) −

1 N K(x, xl )φ j (xl ) , ∑ N l=1 p(xl )

which are errors of evaluation of R j (x) by the Monte Carlo methods on the realisations x1 , x2 , . . . , xN . Thus we arrive at the final expression of the error: λ2 T k (x)H −1 ρ̄ N λ λ ̄ + λ2 (δ̄ T (x) + k T (x)H −1 Γ)S−1 ( LH −1 ρ̄ + β). N N

φ(x) − ̃ φ(x) = λρ(x) +

(12.25)

We assume that the matrices H and S are invertible and the bounds for the norms ≤ 2, ‖H −1 ‖ω ≤ C hold. In this case we introduce, as above, the diagonal matrix P with the elements √ p(xi ), and from (12.25) arrive at ‖S−1 ‖

|φ(x) − ̃ φ(x)| ≤ |λρ(x)| +

λ2 C T ‖k (x)P‖‖ρ‖̄ ω N

J |λ|C T ̄ + 2λ2 (‖δ(x)‖ + ‖k (x)P‖√ ∑ ‖Γ∙j ‖2ω ) N j=1

×(

|λ|C J ̄ √ ∑ ‖L i∙ P‖2 ‖ρ‖̄ ω + ‖β‖). N i=1

12.4 Convergence of the method | 251

Integration over D yields φ(x)| dx ≤ |λ| ∫ |ρ(x)| dx + ∫ |φ(x) − ̃ D

λ2 C ‖ρ‖̄ ω ∫ ‖k T (x)P‖ dx N D

D

+ 2λ2 (

|λ|C J ̄ √ ∑ ‖L i∙ P‖2 ‖ρ‖̄ ω + ‖β‖) N i=1

̄ × ( ∫ ‖δ(x)‖ dx + D

|λ|C J √ ∑ ‖Γ∙j ‖2ω ∫ ‖k T (x)P‖ dx) N j=1 D

= R1 (ω) + 2λ2 R2 (ω)R3 (ω).

We see that ρ(x) and ρ̄ coincide with those values in the semi-statistical method with φ(⋅) replaced by ∆φ(⋅). So the estimate for R1 (ω) coincides with the estimate for R(ω) in the semi-statistical method with the same replacement: P{R1 (ω) ≤ where ∆21 = ∫ ∫ D D

|λ| + λ2 C‖K(x, y)‖L2 (D×D) ∆1 } ≥ 1 − ε, ε√N − 1

(12.26)

2 K 2 (x, y)∆φ2 (y) dy dx − ∫ ( ∫ K(x, y)∆φ(y) dy) dx. p(y) D

D

The mathematical expectation of R2 (ω) is estimated as follows: E{R2 (ω)} ≤

|λ|C J √ ∑ E{‖L i∙ P‖2 } √E{‖ρ‖̄ 2ω } + √E{‖β‖̄ 2 }. N i=1

The constituents of the first term have been estimated, too (see Section 7.5): E{‖ρ‖̄ 2ω } = J

N ∆2 , N−1 1 J

∑ E{‖L i∙ P‖2 } = N ∑ ∫ L2i (x) dx.

i=1

i=1 D

In order to find an estimate for ‖β‖̄ we observe that β i coincides with θ ij where φ j (⋅) is replaced by ∆φ(⋅). Therefore, E{β2i } =

2 L2 (x)∆φ2 (x) 1 dx − ( ∫ L i (x)∆φ(x) dx) ], [∫ i N p(x) D

D

hence we obtain 2 L2 (x)∆φ2 (x) 1 J E{‖β‖̄ 2 } = dx − ( ∫ L i (x)∆φ(x) dx) ]. ∑[∫ i N i=1 p(x) D

D

252 | 12 Projectional and Statistical Method Therefore,

1/2

E{R2 (ω)} ≤ where

J

∆22 = ∑ [ ∫ i=1

D

|λ|C∆1 ( ∑Ji=1 ∫D L2i (x) dx)

+ ∆2

√N − 1

,

2 L2i (x)∆φ2 (x) dx − ( ∫ L i (x)∆φ(x) dx) ]. p(x) D

Thus, by virtue of the Chebyshev inequality, 1/2

P{R2 (ω) ≤

|λ|C∆1 ( ∑Ji=1 ∫D L2i (x) dx)

+ ∆2

ε√N − 1

} ≥ 1 − ε.

(12.27)

Finally, let us estimate the mathematical expectation of R3 (ω): |λ|C J ̄ 2 } dx + √ ∑ E{‖Γ∙j ‖2ω } √ ∫ E{‖k T (x)P‖2 } dx. E{R3 (ω)} ≤ √ ∫ E{‖δ(x)‖ N j=1 D

D

The constituents of the second term have been estimated: J

∑ E{‖Γ∙j ‖2ω } =

j=1 T

J K 2 (x, y)φ2j (y) N dy − R2j (x)] dx, ∑∫[∫ N − 1 j=1 p(y) D

2

∫ E{‖k (x)P‖ } dx =

D

N‖K(x, y)‖2L2 (D×D) .

D

In order to find an estimate for the first term, we observe that δ j (x) coincides with θ ij where L i (⋅) is replaced by K(x, ⋅). Therefore, E{δ2j (x)} =

2 K 2i (x, y)φ2j (y) 1 dy − ( ∫ K(x, y)φ j (y) dy) ] [∫ N p(y) D

=

1 [∫ N

K 2 (x, y)φ2j (y) p(y)

D

hence

dy − R2j (x)],

K 2 (x, y)φ2j (y) 1 J 2 ̄ dy − R2j (x)] dx. ∫ E{‖δ(x)‖ } dx = ∑ ∫ [ ∫ N j=1 p(y) D

D

Therefore, E{R3 (ω)} ≤ where

D

J

∆23 = ∑ ∫ [ ∫ j=1 D

D

D

1 + |λ|C‖K(x, y)‖L2 (D×D) ∆3 , √N − 1 K 2 (x, y)φ2j (y) p(y)

dy − R2j (x)] dx.

12.5 Advantages and adaptive capabilities | 253

Thus, P{R3 (ω) ≤

1 + |λ|C‖K(x, y)‖L2 (D×D) ∆3 } ≥ 1 − ε. ε√N − 1

(12.28)

Gathering relations (12.26), (12.27), and (12.28), we arrive at the following assertion. Theorem 12.4.1. Let the matrices S and H be invertible, their norms obey the bounds ‖S−1 ‖ ≤ 2, ‖H −1 ‖ω ≤ C, and let ∆1 , ∆2 , ∆3 have a meaning. Then for any ε > 0 the conditional probability satisfies P{ ∫ |φ(x) − ̃ φ(x)| dx ≤ D

where

|λ| + λ2 C‖K(x, y)‖L2 (D×D) ∆4 } ≥ 1 − 3ε, ε√N − 1 1/2

∆4 = ∆1 + 2|λ|∆3

|λ|C∆1 ( ∑Ji=1 ∫D L2i (x) dx) ε√N − 1

+ ∆2

.

Combining the results of Theorems 12.3.1 and 12.4.1 on the base of Proposition 7.4.2, we arrive at the following theorem on convergence of the statistical projection method. Theorem 12.4.2. Let the hypotheses of Theorem 12.3.1 be satisfied, and let the following integral converge: K 2 (x, y)∆φ2 (y) dy dx. ∫∫ p(y) D D

Then for any ε > 0 there exists N1 (ε) such that P{∃H −1 , S−1 : ∫ |φ(x) − ̃ φ(x)| dx ≤ D

|λ| + λ2 C‖K(x, y)‖L2 (D×D) ∆4 } ≥ 1 − 7ε ε√N − 1

for all N > N1 .

12.5 Advantages of the method and its adaptive capabilities

The expression ∆_1² entering Theorem 12.4.1 is a quadratic functional of ∆φ which vanishes at ∆φ ≡ 0. Therefore, the closer ∆φ is to zero, the smaller is the coefficient of N^{−1/2} in the estimate of the method error; in other words, the better is the approximation of the solution of equation (12.1) by a part of its expansion over the basis φ_1, …, φ_J. This leads us to expect that the statistical projection method has some advantages over the semi-statistical one.

Let us look at other virtues of the statistical projection method. First, the account for the discrepancy ∆φ allows us to find a good approximate solution even under a quite 'bad' choice of the basis. In the worst case, where φ falls outside the subspace spanned by φ_1, …, φ_J, we still get some solution with the use of the semi-statistical method, while the projection method gives us no result at all. Second, the utilisation of the Monte

Carlo method to approximate the integrals containing an unknown discrepancy allows us to apply this method successfully to solving integral equations over complicated domains of large dimensionalities.

To the advantages of the method we add the possibility, as in the semi-statistical method, of recurrent inversion of the matrix A, which permits successive refinement of the random grid x_1, …, x_N while adjusting the solution, with a stop as soon as the required accuracy has been attained.

Unfortunately, the rather complicated form of the estimate of the approximate solution error given in Theorem 12.4.1 makes it hard to optimise it with respect to the density p(x), as was done for the semi-statistical method. Nevertheless, the second term in the expression of ∆_4 decreases at a higher rate, so we expect that the density minimising the functional ∆_1 (the principal term of the error) is quite 'good.' The expression of this density is obtained by direct replacement of φ by ∆φ in (7.25) and is of the form

p(x) = α (∆φ²(x) ∫_D K²(y, x) dy)^{1/2},

̂ φ J+1 (x) = φ(x) − ∑ φ j (x) ∫ φ(x)φ j (x) dx. j=1

D

2. Normalise the resulting function in L2 (D): −1/2

φ J+1 (x) = ̂ φ J+1 (x)( ∫ ̂ φ2J+1 (x) dx)

.

D

There is a somewhat different interpretation of this approach. In order to find a true solution of an integral equation by the statistical projection method, we indeed have to set ∆φ(J) (x) φ J+1 (x) = , ‖∆φ(J) (x)‖L2

12.6 Numerical implementation | 255

where

J

J

j=1

j=1

∆φ(J) (x) = φ(x) − ∑ α j φ j (x) = φ(x) − ∑ φ j (x) ∫ φ(x)φ j (x) dx, D

∆φ(J+1) (x)

since in this case vanishes. But the true solution needs to be known. Since φ, it is wise to utilise it instead the true one and arrive we have an approximate solution ̃ at an extra basis element which is not optimal but likely close to it. We emphasise the following point: if the new basis element so arisen corresponds to a very small constituent of the solution (that is, ̂ φ J+1 (x) is infinitesimal in norm), then the next application of the method should not be over the same sample. Otherwise the solution differs very little from the preceding one and the new basis element is of even smaller weight. Thus, a waste of computing time occurs, as well as accumulation of errors related to normalisation of small functions (which may even lead to the loss of orthogonality). In the end of Section 12.8 we will present a series of additional considerations concerning the eventual basis adaptation.

12.6 Peculiarities of the numerical implementation It is not difficult to see that in order to implement the statistical projection method we have to be able to evaluate the matrix A whose components are, in particular, R j (xi ), L i (xj ), and D ij , which are some integrals. In the case where they cannot be analytically evaluated, approximate integration methods should be utilised. It is clear that the more accurately these integrals are found, the closer are the results we obtain to the theoretical ones given above. In the case of a complex geometry or high dimensionality of the domain D, it is hard to construct quadrature formulas for them. In this connection, the question arises how to apply to their evaluation a rather easily implementable formula of the Monte Carlo method. We point out one peculiarity of application of the Monte Carlo method, namely, how to use it on an already given sample x1 , . . . , xN . We set ̃ ij = R

1 N K(xi , xl )φ j (xl ) ≈ R j (xi ), ∑ N − 1 l=1 p(xl ) l=i̸

̃ ij = L

1 N φ i (xl )K(xl , xj ) ≈ L i (xj ), ∑ N − 1 l=1 p(xl ) l=i̸

̃ D ij =

N N φ i (xl )K(xl , xk )φ j (xk ) 1 ≈ D ij , ∑∑ N(N − 1) l=1 k=1 p(xl )p(xk ) k=l̸

256 | 12 Projectional and Statistical Method and construct the matrix EJ − λ̃ D Ã = [ ̃ Φ − λR [ where

̃ − Nλ L EN −

󵄩 ̃ 󵄩J, N ̃ = 󵄩󵄩󵄩󵄩 L ij 󵄩󵄩󵄩󵄩 L , 󵄩󵄩 p(xj ) 󵄩󵄩i=1,j=1

̃ = ‖R ̃ ij ‖N, J , R i=1,j=1

],

λ N−1 K N ]

̃ D = ‖̃ D ij ‖Ji,j=1 .

̃ ̃ 1 (x), . . . , R ̃ J (x)]T , where = [R We also introduce the vector R(x) N ̃ j (x) = 1 ∑ K(x, xl )φ j (xl ) . R N l=1 p(xl )

As the approximate solution we now choose expression (12.15) where A is replaced by ̃ Ã and R(x) by R(x): ̃ T (x) ̃ φ(x) = f(x) + λ [R

F 1 T Ã −1 [ ̄ ] . N k (x)] f

One might expect no degeneration of the matrices G and S for large enough N, that is, for accurate enough evaluation of the integrals, the analogue of formula (12.19) holds: ̃ G−1 Ã −1 = [ 0 where

λ ̃−1 ̃ G L −1 0 ̃ ̃ ] − [N ] S [(Φ − λ R) G−1 EN 0

̃ G = (E J − λ ̃ D),

Then ̃ T (x)G−1 F − ̃ φ(x) = f(x) + λ R

−E N ] ,

λ ̃ ̃ ̃ G−1 L. S̃ = H N + (Φ − λ R) N

λ ̃ T ̃−1 ̃ ̃ ̃ (λ R (x)G L + k T (x))S−1 ((Φ − λ R) G−1 F − f ̄). N

It is easily seen that ̃ T (x) = k T (x)Φ, NR

̃ = H N Φ, (Φ − λ R)

therefore, S = H N (E N +

λ ̃−1 ̃ Φ G L), N

λ T λ λ ̃ + E N )S̃ −1 (H N Φ ̃ k (x)Φ ̃ G−1 F − k T (x)( Φ ̃ G−1 L G−1 F − f)̄ N N N λ = f(x) + k T (x)H N−1 f.̄ N

̃ φ(x) = f(x) +

The solution takes the above form no matter how the integrals F j are approximated, and even no matter how L i and D ij are. Moreover, precisely this approximated solution is provided by the semi-statistical method (see Section 7.2).


Thus, the application of the Monte Carlo method to evaluating even only the integrals R_j on the same sample x_1, …, x_N sacrifices almost all the advantages of the suggested statistical projection method over the semi-statistical one. At first glance, the complete independence of the obtained solution φ̃ of the basis {φ_j(x)} seems surprising, given the pivotal role of the basis in the construction. This phenomenon is nevertheless easy to explain. Let us refer to formulas (12.7) and (12.8), and change R_j(x) for R̃_j(x) and R_j(x_i) for R̃_j(x_i). Dropping the random errors ρ(x) and ρ_i(x_i), after elementary transformations we arrive at the formulas

φ(x) − (λ/N) Σ_{l=1}^{N} K(x, x_l)φ(x_l)/p(x_l) = f(x),
φ(x_i) − (λ/(N − 1)) Σ_{l=1, l≠i}^{N} K(x_i, x_l)φ(x_l)/p(x_l) = f(x_i),

which are exactly those which lie at the heart of the semi-statistical method (see Section 7.2). In view of the aforesaid, it would be of advantage to evaluate the integrals L i , R j , D ij , and F j by the Monte Carlo method but on samples of sufficiently large size which absolutely differ from the initial one in order to guarantee the necessary accuracy. A wise choice of the distribution of the nodes of integration is also of great importance.
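A sketch of the recommendation just made (illustrative, assuming the uniform density p = 1 on [0, 1]): the integrals R_j, L_i, D_ij and F_i are estimated by the elementary Monte Carlo formula on a separate sample y_1, …, y_M that is independent of the collocation sample x_1, …, x_N.

```python
import numpy as np

def mc_functionals(K, f, basis, x, M=5000, rng=None):
    """Monte Carlo estimates of R_j(x_i), L_i(x_j), D_ij and F_i on an independent
    uniform sample of size M over [0, 1] (density p = 1)."""
    rng = rng or np.random.default_rng(1)
    y = rng.random(M)                               # integration sample, independent of x
    Phi_y = np.column_stack([b(y) for b in basis])  # M x J values of the basis functions
    R = K(x[:, None], y[None, :]) @ Phi_y / M       # R~[i, j] approximates R_j(x_i)
    L = Phi_y.T @ K(y[:, None], x[None, :]) / M     # L~[i, j] approximates L_i(x_j)
    K_yy = K(y[:, None], y[None, :])
    np.fill_diagonal(K_yy, 0.0)                     # drop the l = k terms, as in the text
    D = Phi_y.T @ K_yy @ Phi_y / (M * (M - 1))      # D~[i, j] approximates D_ij
    F = Phi_y.T @ f(y) / M                          # F~[i] approximates F_i
    return R, L, D, F
```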

12.7 Another computing technique: Averaging of approximate solutions

In order to deduce the equations of the statistical projection method, we approximate the integrals by the formula of the elementary Monte Carlo method, which provides an unbiased estimator of an integral. In this connection, it is hoped that the approximate solutions we obtain have the true solution of the integral equation as their mathematical expectation. This suggests the following scheme: instead of recurrent inversion of matrices and progressive refinement of the random grid, one calculates the sample mean of a number of approximate solutions obtained on independent identically distributed samples of rather small size.

Formally, let there be m approximate solutions φ̃_1(x), …, φ̃_m(x) obtained on m independent identically distributed samples, and let

φ̃*_m(x) = (1/m) Σ_{i=1}^{m} φ̃_i(x).

Provided that

E{φ̃_i(x)} = φ(x),

we see that

E{φ̃*_m(x)} = φ(x),   Var{φ̃*_m(x)} = E{(φ̃*_m(x) − φ(x))²} = (1/m) Var{φ̃_i(x)}.

By virtue of the law of large numbers, φ̃*_m(x) tends to φ(x) in probability at every point x (see [62]). Moreover, if the variance of φ̃_i(x) is finite, then, by virtue of the central limit theorem, the distribution of φ̃*_m(x) tends to the normal law at every point x ∈ D, and the approximate equality

P{|φ̃*_m(x) − φ(x)| < α √((1/m) Var{φ̃_i(x)})} ≈ Φ(α)

holds true for m large enough, where

Φ(α) = √(2/π) ∫_0^α e^{−t²/2} dt

is the probability integral. Given the confidence level Φ(α), determining from it the corresponding value of α, and knowing the variance of φ̃_i(x), we are able to determine the number of trials m required to attain a desired accuracy with a given probability. In actual practice, the variance of the approximate solutions can be estimated empirically by the sample variance:

Var{φ̃_i(x)} ≈ S_m(x) = (1/m) Σ_{i=1}^{m} (φ̃_i(x) − φ̃*_m(x))² = (1/m) Σ_{i=1}^{m} φ̃_i²(x) − (φ̃*_m(x))².

We notice that there remains a possibility of recursive calculation of ̃ φ∗m (x) and S m (x) after any trial. Thus, the scheme just introduced allows us to control the accuracy during the computation process and to stop the calculation as soon as the required accuracy has been attained. Let us point out other advantages of this technique. The most time-consuming part of the implementation of the statistical projection method is the inversion of a matrix (solution of a set of simultaneous equations) of dimension N + J, where N ≫ J. Since we deal with a full matrix, the time consumption increases very fast with the sample size N (as N 3 in the case of the Gauß method). Moreover, the need to store the current inverse matrix (in the process of recurrent inversion) in the computer operating memory poses essential constraints on its dimension. The use of a small sample erases the problem on operating memory. In addition, in the time needed to solve the problem at a sample of large size N, we are able to solve it several times at a small sample (for example, one thousand times at a sample of size N/10 using the Gauß method) and arrive at a quality result after averaging. One more advantage of the suggested


technique is the fact that the averaging smoothes out the statistical fluctuations and outliers in approximate solutions, while a one-pass solution on a large sample can yield an inadequate result due to an ‘unfavourable’ sample. Of course, the technique presented in this section is equally applicable to the semi-statistical method, which is demonstrated in experiments in Section 9.7.
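A sketch of the averaging technique (illustrative; it assumes a routine `solve_once(sample_size, rng)`, not shown here, that returns a callable approximate solution built on a fresh independent sample): independent runs are averaged, and the empirical variance yields a pointwise confidence half-width used for accuracy control.

```python
import numpy as np

def averaged_solution(solve_once, m, sample_size, xs, rng=None):
    """Average m approximate solutions obtained on independent samples (Section 12.7)."""
    rng = rng or np.random.default_rng(2)
    vals = np.array([solve_once(sample_size, rng)(xs) for _ in range(m)])   # m x len(xs)
    mean = vals.mean(axis=0)                  # phi*_m at the evaluation points xs
    var = vals.var(axis=0)                    # empirical Var{phi_i(x)}, cf. S_m(x)
    half_width = 1.96 * np.sqrt(var / m)      # approximate 95% confidence half-width
    return mean, half_width
```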

12.8 Numerical experiments

In this section we present results of numerical experiments consisting of application of the semi-statistical and statistical projection methods to Fredholm integral equations of the second kind. This section is divided into two parts; in the former part we consider a test problem, while in the latter, physical one, we deal with vibration of a pinned string. While working through the test problem, we thoroughly study peculiarities of the utilised methods; the results obtained for the test problem thus demonstrate the functionality of the methods. In the latter part, by means of the example of the equation of transverse vibration of a pinned string, we consider problems whose analytic solution is known, as well as obtain approximate solutions of some problems which we failed to solve analytically. For the sake of simplicity, in both parts we consider one-dimensional problems. When we studied the statistical projection method, we asserted nothing on the suitable choice of the sample distribution density, so here we consider it constant (in other words, we everywhere deal with uniform samples).

12.8.1 The test problem

In this section we consider the application of the statistical projection method to the integral equation

φ(x) − ∫_0^1 K(x, y)φ(y) dy = f(x),   x ∈ [0, 1],

whose kernel is

K(x, y) = e^{−xy}.

It is obvious that the kernel does not exceed 1 inside the rectangle [0, 1] × [0, 1], hence ‖K‖L2 < 1. The equation thus has a unique solution for any right-hand side f(x), and this solution can be obtained by the method of successive approximations. The numerical experiments are carried out in two cases which differ in the structure of the true solution. In both cases, we are given a true solution of the integral equation, and the corresponding right-hand side is calculated, then the numerical method is applied.

The former solution

φ^(1)(x) = e^{2x}

is infinitely smooth on [0, 1] and monotonically increasing. It corresponds to the right-hand side

f^(1)(x) = e^{2x} − (e^{2−x} − 1)/(2 − x).

The latter solution

φ^(2)(x) = e^{|4x−2|}

is symmetric about the centre of the interval [0, 1], has a unique minimum, and a well-marked knee (derivative discontinuity) at x = 0.5. It corresponds to the right-hand side

f^(2)(x) = e^{|4x−2|} + (e^{−x/2} − e²)/(4 + x) + (e^{−x/2} − e^{2−x})/(4 − x).

To apply the statistical projection method to each of the above cases, we choose basis functions of two kinds, polynomial and trigonometric. For both kinds of basis, the number of basis functions used is limited to three. The polynomial basis consists of the functions

φ1 (x) ≡ 1,

p

φ2 (x) = √3(2x − 1),

p

φ3 (x) = √5(6x2 − 6x + 1),

and the trigonometric basis consists of the functions φ1t (x) ≡ 1,

φ2t (x) = √2 sin(2πx),

φ3t (x) = √2 cos(2πx).
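Before moving on, here is a small sketch of the test-problem data (illustrative, not the authors' code); it restates the kernel, the two true solutions and the right-hand sides given above, the second right-hand side being written in the equivalent form derived above, and checks them by a midpoint quadrature.

```python
import numpy as np

K = lambda x, y: np.exp(-x * y)
phi1 = lambda x: np.exp(2.0 * x)                                        # smooth solution
f1 = lambda x: np.exp(2.0 * x) - (np.exp(2.0 - x) - 1.0) / (2.0 - x)
phi2 = lambda x: np.exp(np.abs(4.0 * x - 2.0))                          # solution with a knee
f2 = lambda x: (phi2(x) + (np.exp(-x / 2.0) - np.exp(2.0)) / (4.0 + x)
                + (np.exp(-x / 2.0) - np.exp(2.0 - x)) / (4.0 - x))

# check that phi1, phi2 satisfy the integral equation with the stated right-hand sides
t = (np.arange(20000) + 0.5) / 20000
x0 = 0.3
for phi, f in [(phi1, f1), (phi2, f2)]:
    print(phi(x0) - np.mean(K(x0, t) * phi(t)) - f(x0))   # residuals at the quadrature-error level
```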

The orthonormality of the trigonometric basis on the interval [0, 1] is well known, while the orthonormality of the polynomial basis immediately follows from the fact that φ_i^p(x) = √2 P_{i−1}(2x − 1), where P_i(x) are the Legendre polynomials orthonormalised on the interval [−1, 1] with weight 1.

The principal difference between the solutions φ^(1)(x) and φ^(2)(x), from the viewpoint of application of the statistical projection method to the corresponding equations, is that it is much more difficult to approximate the solution with a knee by a partial sum of its Fourier series than a smooth one. So one might expect that the accuracy of the statistical projection method is somewhat worse in the case with a knee than in the smooth case. Furthermore, it is well known that the convergence of trigonometric Fourier series is rather slow in the general case, so for an efficient application of the method on the trigonometric basis one has to retain a large enough number of basis elements. Nevertheless, already with three basis elements we succeed in revealing the advantage of the statistical projection method over the semi-statistical one and the trend towards a more rapid convergence as the number of basis elements grows.

We point out one more detail: the second elements of both bases are odd functions about the centre of the interval [0, 1]. Hence the coefficient α_2 in the expansion of the solution φ^(2) over either basis is equal to zero, while the corresponding discrepancies of the expansions with one and with two basis elements merely coincide. In this connection we expect (and this is supported by


     

Fig. 12.1. An example of application of the statistical projection method.

the experimental results) that the application of the method with either one or two basis functions to the case with a knee over one and the same sample yields very close results. Typical examples of how a version of the method works are shown in Figure 12.1. The graphs correspond to an application of the method on samples of 40 points. In Figure 12.1, for both problems under consideration the spread of approximate solutions obtained on independent samples is shown, as well as the sample mean of the approximate solutions over 20 trials and confidence intervals for the true solution for confidence levels 0.8 and 0.99. The quantitative comparison of different versions of the method is based on the characteristic u{φ, ̃ φ} = ∫ E{|̃ φ(x) − φ(x)|} dx D

calculated as the sample mean over 100 trials. In Table 12.1 we give the values of u{φ, φ̃} calculated for N = 40, 80, …, 200 and the following versions of the method:
∙ SS, the semi-statistical method;
∙ SP-1, the statistical projection method with a single basis element φ_1 ≡ 1;
∙ SP-P2, the statistical projection method with two polynomial basis elements φ_1^p and φ_2^p;
∙ SP-P3, the statistical projection method with three polynomial basis elements φ_1^p, φ_2^p, and φ_3^p;
∙ SP-T2, the statistical projection method with two trigonometric basis elements φ_1^t and φ_2^t;
∙ SP-T3, the statistical projection method with three trigonometric basis elements φ_1^t, φ_2^t, and φ_3^t.

The data given in Table 12.1 demonstrate the decrease of error as the sample size grows. For a particular N, the error level for each version of the method depends primarily on how good the true solution of the integral equation is approximated by a part of its expansion over the basis functions, in other words, how close to the true solution the approximation given by the projection method is. For comparison with the table, in Figure 12.2 we present the results of application of the corresponding variants of the projection method. Let us briefly describe the results obtained while investigating the technique of adaptive tuning of the basis as defined in Section 12.5. For each of the test problems, we carry out experiments of the following form: first the problem is solved by the semi-statistical method, then five steps of basis adaptation are performed. In each next trial of the statistical projection method, a new sample is used, which is independent of the preceding ones. We thus observe the following clear trend: the best approximation to the true solution is attained after 2–3 steps of basis adaptation (that is, on 2–3 basis elements so constructed). The subsequent adaptation steps worsen the approximation quality. This is likely due to the fact that orthogonalisation and normalisation of new basis elements (which are of small weight) introduce round-off errors which break the orthogonality of the basis and the method becomes ill-defined. As an illustration, the approximate values of u{φ, ̃ φ} calculated at each step of adaptation of the basis over 1000 trials are given in Table 12.2. We can suggest another technique of adaptation of the basis set in the process of computation. Namely, instead of progressive increase of the number of basis functions we restrict ourselves to some constant number of them. At each step, one of the functions (say, with the least absolute value of its coefficient α̃ j ) is dropped out and replaced by a new one in accordance with formulas of Section 12.5. If, for example, we restrict ourselves to a single basis function, then it is corrected at each step in accordance with the approximate solution being calculated (it is itself an approximate solution normalised in L2 (D)). This technique possesses the advantage that, working with only a few basis functions of not-so-small weights, we preserve the orthonormality of the basis and do not make the set of linear equations of the method ill-conditioned. It makes sense to proceed as follows. Upon finding the ‘draft’ solution by the semistatistical method and obtaining the first basis function, it is refined by the above scheme during some amount of iterations. Then, starting from the current estimator of the solution, one generates the second basis element and during some iterations corrects the set of two basis functions, and so on.

12.8 Numerical experiments | 263 Tab. 12.1. Values of u{φ, ̃ φ} on the test problem. N

SS

SP-1

SP-P2

SP-P3

SP-T2

SP-T3

0.5643 0.4062 0.3097 0.2750 0.2808

0.6172 0.4347 0.3247 0.2780 0.2762

0.9611 0.7017 0.5557 0.5004 0.4372

0.3032 0.2137 0.1494 0.1546 0.1295

Smooth solution 40 80 120 160 200

0.7719 0.6030 0.4196 0.3417 0.2851

1.0337 0.6568 0.5676 0.4022 0.4237

0.2056 0.1475 0.1366 0.1093 0.1050

0.0432 0.0282 0.0228 0.0178 0.0204

Solution with a knee 40 80 120 160 200

1.1565 0.6426 0.5941 0.4773 0.4327

0.8627 0.7237 0.5862 0.5223 0.4093

1.0230 0.6889 0.5625 0.5058 0.4581

0.0415 0.0266 0.0219 0.0209 0.0182

Tab. 12.2. Values of u{φ, φ̃} on several basis adaptation steps.

                               Smooth solution    Solution with a knee
Semi-statistical method        0.741345           1.039011
after 1 adaptation step        0.494815           0.255568
after 2 adaptation steps       0.001748           0.005728
after 3 adaptation steps       0.002825           0.002321
after 4 adaptation steps       0.014496           0.017211
after 5 adaptation steps       0.021772           0.035652

 

                                      

Fig. 12.2. Results of application of the projection method.

 

                 

12.8.2 The problem on steady-state forced small transverse vibration of a pinned string caused by a harmonic force

It is well known [66] that forced small transverse vibration of a pinned string is governed by the equation

ρ(x) ∂²u/∂t² − T_0 ∂²u/∂x² = g(x, t),   x ∈ [0, l],   t ≥ 0,

where u = u(x, t) is the transverse displacement of the string, ρ(x) is the linear density of the string, g(x, t) is the linear density of the exciting force, T0 is the string tension force which is set to a constant. If the string is pinned at the points x = 0 and x = l, we have to pose the boundary condition u(0, t) = 0,

u(l, t) = 0,

t ≥ 0.

We also need to pose the initial condition u(x, 0) = u0 (x),

x ∈ [0, l],

where u0 (x) is the initial displacement of the string, which is a given function. We assume that the external influence is a harmonic function g(x, t) = φ(x) cos(ωt). In this case, the string executes a forced vibration, and the steady-state solution is a harmonic oscillation of the same frequency ω: u∗ (x, t) = v(x) cos(ωt), where v(x) means the amplitude of the steady oscillation. If precisely the steady-state solution is sought for, then one should ignore the initial condition, while the substitution of u∗ (x, t) into the oscillation equation and the boundary conditions lead us to the boundary problem T0 v󸀠󸀠 = −ρ(x)ω2 v − φ(x),

x ∈ [0, l],

v(0) = v(l) = 0 for an ordinary differential equation of second order. Without loss of generality, in what follows we set T0 = 1. It is not so difficult to reduce this problem to the Fredholm integral equation of the second kind. Indeed, let us carry out the integration from 0 to x: x

v󸀠 (x) = − ∫(ρ(y)ω2 v(y) + φ(y)) dy + C, 0

12.8 Numerical experiments | 265

where C is an integration constant. The repeated integration yields x

x z

v(x) = ∫ v (z) dz = − ∫ ∫(ρ(y)ω2 v(y) + φ(y)) dy dz + Cx + D. 󸀠

0

0 0

Let us change the order of integration. Then x

v(x) = − ∫(ρ(y)ω2 v(y) + φ(y))(x − y) dy + Cx + D. 0

Satisfying the boundary condition, we obtain l

1 C = ∫(ρ(y)ω2 v(y) + φ(y))(l − y) dy, l

D = 0,

0

hence v(x) =

l

x

0

0

x ∫(ρ(y)ω2 v(y) + φ(y))(l − y) dy − ∫(ρ(y)ω2 v(y) + φ(y))(x − y) dy. l

Combining these two integrals into one, we arrive at l

v(x) = ∫ G(x, y)(ρ(y)ω2 v(y) + φ(y)) dy 0 l

l

0

0

= ω2 ∫ G(x, y)ρ(y)v(y) dy + ∫ G(x, y)φ(y) dy, where

{ x(l−y) if 0 ≤ x ≤ y ≤ l, l G(x, y) = { y(l−x) if 0 ≤ y ≤ x ≤ l. { l Thus, we arrive at a Fredholm one-dimensional integral equation of the second kind in the function v(x) with the kernel K(x, y) = G(x, y)ρ(y).

It is not difficult to see that by an appropriate length scaling the equation reduces to the interval [0, 1]; furthermore, by an appropriate time scaling we get ω = 1. So, without loss of generality we consider the boundary value problem
\[
v'' + \rho(x)v = -\varphi(x), \qquad x \in [0, 1], \qquad v(0) = v(1) = 0,
\]
or the equivalent Fredholm integral equation of the second kind
\[
v(x) - \int_0^1 G(x, y)\rho(y)v(y)\,dy = \int_0^1 G(x, y)\varphi(y)\,dy, \qquad x \in [0, 1],
\]
where
\[
G(x, y) =
\begin{cases}
x(1 - y) & \text{if } 0 \le x \le y \le 1,\\
y(1 - x) & \text{if } 0 \le y \le x \le 1.
\end{cases}
\]
This is the object of our numerical experiments.

The linear density ρ(x) of a string with constant volume density is obviously proportional to the area of its cross section. The simplest model, a string of constant cross-section area, corresponds to the case ρ(x) = const. We can also consider a more complex model in which the string under strong tension has a smaller cross section in the middle and thickens towards the ends; this case can be simulated with a parabolic density. Accordingly, the numerical experiments are carried out for the cases ρ(x) ≡ 1 and ρ(x) = x² − x + 1. For each of these cases, we consider the following problems.

(i) Two problems with known analytic solutions, obtained by prescribing the true solution and calculating the corresponding right-hand side φ(x) from the differential equation. As the true solutions we take the functions
\[
v^{(1)}(x) = x - x^2, \qquad v^{(2)}(x) = \frac{\sin(2\pi x)}{4},
\]
to which there correspond the right-hand sides
\[
\varphi^{(1)}(x) = x^2 - x + 2, \qquad \varphi^{(2)}(x) = \Bigl(\pi^2 - \frac{1}{4}\Bigr)\sin(2\pi x)
\]
for the constant density, and
\[
\varphi^{(1)}(x) = (x^2 - x)(x^2 - x + 1) + 2, \qquad \varphi^{(2)}(x) = \Bigl(\pi^2 - \frac{x^2 - x + 1}{4}\Bigr)\sin(2\pi x)
\]
for the parabolic density.
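The correspondence between these solutions and right-hand sides is easy to verify symbolically. A possible SymPy check (not part of the original text; names are illustrative) is the following.

```python
import sympy as sp

x = sp.symbols("x")
v1, v2 = x - x**2, sp.sin(2 * sp.pi * x) / 4
rho_const, rho_par = sp.Integer(1), x**2 - x + 1

def rhs(v, rho):
    """phi generated by a manufactured solution v of  v'' + rho*v = -phi."""
    return -(sp.diff(v, x, 2) + rho * v)

checks = [
    (rhs(v1, rho_const), x**2 - x + 2),
    (rhs(v2, rho_const), (sp.pi**2 - sp.Rational(1, 4)) * sp.sin(2 * sp.pi * x)),
    (rhs(v1, rho_par), (x**2 - x) * (x**2 - x + 1) + 2),
    (rhs(v2, rho_par), (sp.pi**2 - (x**2 - x + 1) / 4) * sp.sin(2 * sp.pi * x)),
]
assert all(sp.simplify(a - b) == 0 for a, b in checks)
print("all four right-hand sides confirmed")
```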

(ii) Problems with a given external influence
\[
\varphi^{(3)}_\varepsilon(x) =
\begin{cases}
0 & \text{if } x \notin \bigl[\tfrac{1}{2} - \varepsilon,\ \tfrac{1}{2} + \varepsilon\bigr],\\[1mm]
\dfrac{1}{2\varepsilon} & \text{if } x \in \bigl[\tfrac{1}{2} - \varepsilon,\ \tfrac{1}{2} + \varepsilon\bigr],
\end{cases}
\qquad 0 < \varepsilon \le \tfrac{1}{2}.
\]
The case ε = 1/2 corresponds to an exciting force that is constant along the whole string and equal to 1. As ε decreases, the region where the force is applied narrows and its magnitude grows; as ε tends to zero, φ^{(3)}_ε tends to a unit point force applied at x = 1/2. In our experiments we consider ε = 0.5, 0.3, 0.1, 0.05.
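As a deterministic reference for these load cases (this is a plain Nyström-type discretization on a uniform grid, not the semi-statistical or statistical projection methods studied in this chapter; the grid size and names are illustrative), the integral equation can be solved for each ε as follows.

```python
import numpy as np

def G(x, y):
    """Kernel G(x, y) on [0, 1]: x(1 - y) for x <= y and y(1 - x) for y <= x."""
    a, b = np.minimum(x, y), np.maximum(x, y)
    return a * (1.0 - b)

def solve_string(rho, phi, n=401):
    """Trapezoidal Nystrom solve of  v - int_0^1 G*rho*v dy = int_0^1 G*phi dy."""
    xs = np.linspace(0.0, 1.0, n)
    w = np.full(n, xs[1] - xs[0]); w[[0, -1]] /= 2.0
    K = G(xs[:, None], xs[None, :])
    A = np.eye(n) - K * (w * rho(xs))[None, :]
    return xs, np.linalg.solve(A, K @ (w * phi(xs)))

def phi_eps(eps):
    """Step force of unit total magnitude concentrated on [1/2 - eps, 1/2 + eps]."""
    return lambda x: np.where(np.abs(x - 0.5) <= eps, 1.0 / (2.0 * eps), 0.0)

rho_const = lambda x: np.ones_like(x)
for eps in (0.5, 0.3, 0.1, 0.05):
    xs, v = solve_string(rho_const, phi_eps(eps))
    print(f"eps = {eps:>4}: max amplitude {v.max():.6f} at x = {xs[np.argmax(v)]:.3f}")
```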


Since the equation is considered on the interval [0, 1], in the application of the statistical projection method we use the same set of basis functions as for the test problem discussed in Section 12.8.1. As an example, Figure 12.3 presents graphs of the solutions obtained by applying various versions of the method to the problem on a string of variable density; along with the graphs on the whole interval, zoomed graphs are also shown. All solutions are obtained on samples of size 40. Figure 12.4 contains similar graphs for two problems with a step-wise stress applied to a string of constant density. Tables 12.3 and 12.4 are similar to Table 12.1 and contain the values of u{φ̃, φ} for the problems with known solution. The tables show that, for any version of the method, the maximum relative deviation of the approximate solution from the true one does not exceed 2%.
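To give a feeling for what a statistical estimate on a sample of this size looks like (the sketch below is only a crude Monte Carlo estimate of the free term of the integral equation, not the statistical projection method itself; the seed, grid and function names are arbitrary choices), one can compare a 40-point estimate of f(x) = ∫₀¹ G(x, y)φ(y) dy with an accurate quadrature.

```python
import numpy as np

def G(x, y):
    a, b = np.minimum(x, y), np.maximum(x, y)
    return a * (1.0 - b)

phi = lambda y: y**2 - y + 2.0                 # first test right-hand side
xs = np.linspace(0.0, 1.0, 101)                # points where the free term is evaluated

# crude Monte Carlo estimate with a sample of size N = 40 (uniform random nodes)
rng = np.random.default_rng(0)
nodes = rng.random(40)
f_mc = G(xs[:, None], nodes[None, :]) @ phi(nodes) / nodes.size

# accurate reference value by a fine trapezoidal quadrature
ys = np.linspace(0.0, 1.0, 20001)
w = np.full(ys.size, ys[1] - ys[0]); w[[0, -1]] /= 2.0
f_ref = G(xs[:, None], ys[None, :]) @ (w * phi(ys))

print("max |Monte Carlo - quadrature|:", np.abs(f_mc - f_ref).max())
```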

Fig. 12.3. Problems with known solution for a string of variable density.

Fig. 12.4. Step-wise stresses applied to a string of constant density.

Tab. 12.3. Values of u{φ̃, φ} for the problem on a string of constant density.

N

SS

SP-1

SP-P2

SP-P3

SP-T2

SP-T3

0.000729 0.000547 0.000376 0.000375 0.000328

0.000194 0.000149 0.000104 0.000100 0.000095

0.000000 0.000000 0.000000 0.000000 0.000000

0.000000 0.000000 0.000000 0.000000 0.000000

Parabolic stress 40 80 120 160 200

0.001973 0.001047 0.001095 0.000851 0.000722

0.000679 0.000516 0.000483 0.000414 0.000348

0.000769 0.000535 0.000366 0.000363 0.000291

0.000000 0.000000 0.000000 0.000000 0.000000

Sinusoidal stress 40 80 120 160 200

0.002331 0.001802 0.001520 0.001234 0.001053

0.002351 0.001692 0.001499 0.001256 0.001070

0.001590 0.000842 0.000804 0.000694 0.000614

0.001219 0.000865 0.000813 0.000656 0.000619

Tab. 12.4. Values of u{φ̃, φ} for the problem on a string of variable density.

N

SS

SP-1

SP-P2

SP-P3

SP-T2

SP-T3

0.003685 0.003916 0.004002 0.003861 0.003911

0.004001 0.003950 0.003963 0.003971 0.003986

0.001517 0.001516 0.001516 0.001516 0.001516

0.001516 0.001516 0.001516 0.001516 0.001516

Parabolic stress 40 80 120 160 200

0.004163 0.003895 0.004071 0.004059 0.003879

0.003858 0.003944 0.003912 0.003881 0.003898

0.003782 0.003898 0.003912 0.003980 0.003988

0.003975 0.003976 0.003976 0.003976 0.003976

Sinusoidal stress 40 80 120 160 200

0.002302 0.002225 0.001890 0.001831 0.001698

0.002772 0.002111 0.001879 0.001831 0.001710

0.002064 0.001691 0.001622 0.001568 0.001605

0.001792 0.001722 0.001602 0.001621 0.001574

Afterword

In this monograph, we have considered methods of statistical numerical analysis: we have presented the underlying ideas, the ways to construct and analyse such methods, and techniques for their further development and enhancement, using as examples the evaluation of integrals and the solution of integral equations and boundary value problems. The key feature of the suggested approach is the use of adaptive procedures in the course of the statistical calculations. This approach yields a considerable gain in convergence speed and renders the methods competitive in this respect with deterministic ones.

Among the directions of further development of these methods, applications to spatial problems seem the most promising. Our analysis of particular spatial problems of heat conductivity and elasticity theory has demonstrated the capabilities of these methods. Extending the algorithms suggested in this monograph to general spatial problems of elasticity theory and heat conductivity is of both theoretical and applied interest, provided a suitable numerical implementation. Another natural next step is the integration of these methods with deterministic procedures of numerical analysis, such as the finite and boundary element methods.

https://doi.org/10.1515/9783110554632-013

Bibliography

[1] D. G. Arsenjev, Development of statistical methods of solution of integral equations in elasticity theory (in Russian), Cand. Techn. Sci. Thesis, St. Petersburg, 1994.
[2] D. G. Arsenjev and V. M. Ivanov, Solution of integral equations of the first basic problem of elasticity theory by the semi-statistical method (in Russian), Report No. 6644-B86, VINITI, 1986.
[3] D. G. Arsenjev, V. M. Ivanov and N. A. Berkovsky, Analysis of efficiency of the adaptive importance sampling method (in Russian), St. Petersburg State Polytech. Univ. J. 88 (2009), no. 4.
[4] D. G. Arsenjev, V. M. Ivanov and N. A. Berkovsky, Adaptive importance sampling method in the case where the number of steps of the bijection process is limited (in Russian), St. Petersburg State Polytech. Univ. J. 98 (2010), no. 2.
[5] D. G. Arsenjev, V. M. Ivanov and O. Y. Kulchitsky, Semi-statistical method of numerical solution of integral equation, in: International Conference on Systems, Signals, Control, Computers – SSCC'98, Springer, Durban (1998), 87–90.
[6] D. G. Arsenjev, V. M. Ivanov and O. Y. Kulchitsky, Adaptive Methods of Computing Mathematics and Mechanics. Stochastic Variant, World Scientific, Singapore, 1999.
[7] N. S. Bakhvalov, On the approximate calculation of multiple integrals, J. Complexity 31 (2015), no. 4, 502–516.
[8] I. S. Berezin and N. P. Zhidkov, Computing Methods. Vols. I–II, Pergamon, Oxford, 1965.
[9] N. Bergman, Recursive Bayesian Estimation. Navigation and Tracking Applications, Linköping Stud. Sci. Technol. Diss. 579, Linköping University, Linköping, 1999.
[10] C. Carstensen, E. Stein, B. Seifert and S. Ohnimus, Adaptive finite element analysis of geometrically non-linear plates and shells, especially buckling, Internat. J. Numer. Methods Engrg. 37 (1994), 2631–2655.
[11] F. L. Chernousko, I. M. Ananievski and S. A. Reshmin, Control of Nonlinear Dynamical Systems. Methods and Applications, Springer, Berlin, 2008.
[12] H. A. David, Order Statistics, Wiley, New York, 1970.
[13] A. Doucet, N. Freitas and N. Gordon, Sequential Monte Carlo Methods in Practice, Springer, New York, 2001.
[14] V. Dupač, Stochastické početni metody, Čas. pro pešt. Mat. 81 (1956), no. 1, 55–68.
[15] S. M. Ermakov, The Monte Carlo Method and Related Questions (in Russian), Nauka, Moscow, 1971.
[16] S. M. Ermakov and G. A. Ilyushina, On evaluation of one class of integrals with the use of stochastic interpolation quadrature formulas (in Russian), in: Monte Carlo Methods in Computational Mathematics and Mathematical Physics, Nauka, Novosibirsk (1974), 67–71.
[17] S. M. Ermakov and V. B. Melas, Design and Analysis of Simulation Experiments, Kluwer, Dordrecht, 1995.
[18] S. M. Ermakov and G. A. Mikhailov, A Course in Statistical Modeling (in Russian), Nauka, Moscow, 1976.
[19] S. M. Ermakov, V. V. Nekrutkin and A. S. Sipin, Random Processes for Classical Equations of Mathematical Physics, Kluwer, Dordrecht, 1989.
[20] J. Geweke, Bayesian inference in econometric models using Monte-Carlo integration, Econometrica 57 (1989), no. 6, 1317–1339.
[21] S. Haber, A modified Monte Carlo quadrature, Math. Comp. 20 (1967), no. 95, 361–368.
[22] S. Haber, A modified Monte Carlo quadrature. II, Math. Comp. 21 (1967), no. 99, 388–397.
[23] J. H. Halton, Sequential Monte Carlo, Proc. Cambridge Philos. Soc. 58 (1962), no. 1, 57–78.

https://doi.org/10.1515/9783110554632-014

[24] J. H. Halton, On the relative merits of correlated and importance sampling for Monte Carlo integration, Proc. Cambridge Philos. Soc. 61 (1965), no. 2, 497–498.
[25] J. H. Halton, A retrospective and prospective survey of the Monte Carlo method, SIAM Rev. 12 (1970), no. 1, 1–63.
[26] J. M. Hammersley and K. W. Morton, A new Monte Carlo technique: Antithetic variates, Proc. Cambridge Philos. Soc. 52 (1956), no. 3, 449–475.
[27] S. Heinrich, Random approximation in numerical analysis, in: Proceedings of the Conference “Functional Analysis”, Lecture Notes Pure Appl. Math. 150, Marcel Dekker, New York (1994), 123–171.
[28] S. N. Isakov, N. U. Tugushev and I. N. Pirogova, Software for Computation of the Field of Velocities and Profile Losses in a Turbine Airfoil Cascade (in Russian), Ural Polytechnic Institute, Sverdlovsk, 1984.
[29] V. M. Ivanov and M. L. Korenevsky, Superconvergent adaptive stochastic method for numerical integration, in: Proceedings of the 2nd IMACS Seminar on Monte Carlo Methods, CLPP-BAS, Sofia (1999), 16–17.
[30] V. M. Ivanov, M. L. Korenevsky and O. Y. Kulchitsky, Adaptive schemes for the Monte Carlo method of an enhanced accuracy, Dokl. Math. 60 (1999), no. 1, 90–93.
[31] V. M. Ivanov and O. Y. Kulchitsky, Method for the numerical solution of integral equations on a random mesh, Differ. Equ. 26 (1990), no. 2, 259–265.
[32] V. M. Ivanov and O. Y. Kulchitsky, Statistical Methods of Numerical Analysis with Adaptation (in Russian), St. Petersburg Polytechnic University, St. Petersburg, 1994.
[33] V. M. Ivanov and V. A. Pupyrev, On the Weyl tensor (in Russian), Proc. Leningrad Polytechnic University 425 (1988), 125–129.
[34] L. V. Kantorovich, Functional analysis and applied mathematics (in Russian), Uspekhi Mat. Nauk 3 (1948), no. 6(28), 89–185.
[35] V. P. Klepikov, On integral equations of elasticity theory with regular kernels (in Russian), Report no. 7939-B88, VINITI, 1988.
[36] V. P. Klepikov and N. A. Trubaev, Application of regular integral equations to solving problems of elasticity theory (in Russian), Theory Anal. Structures (1989), no. 5, 54–56.
[37] A. Kong, A note on importance sampling using standardized weights, Technical Report 348, University of Chicago, Chicago, 1992.
[38] N. M. Korobov, Number-Theoretic Methods in Approximate Analysis (in Russian), Fizmatgiz, Moscow, 1963.
[39] V. I. Krylov, Approximate Calculation of Integrals, Macmillan, New York, 1962.
[40] O. Y. Kulchitsky and S. V. Skrobotov, Adaptive algorithm of Monte Carlo type for calculating the integral characteristics of complex systems, Autom. Remote Control 47 (1986), 812–818.
[41] V. D. Kupradze, Boundary problems of the theory of steady elastic vibrations (in Russian), Uspekhi Mat. Nauk 8 (1953), no. 3, 35–49.
[42] V. D. Kupradze, Potential Methods in the Theory of Elasticity, Davey, New York, 1965.
[43] A. B. Kurzhanski, Control and Observation Under Uncertainty (in Russian), Nauka, Moscow, 1977.
[44] G. Lauricella, Alcune applicazioni della teoria delle equazioni funzioni alla fisica-matematica, Nuovo Cimento 13 (1907), no. 5, 104–118, 155–174, 237–262, 501–518.
[45] A. I. Lurie, Nonlinear Theory of Elasticity, North-Holland, Amsterdam, 1990.
[46] V. M. Matrosov, S. N. Vasilyev and A. I. Moskalenko, Nonlinear Control Theory and its Applications. Dynamics, Control, Optimizations (in Russian), Fizmatlit, Moscow, 2003.
[47] G. A. Mikhailov, Some Questions of the Theory of the Monte Carlo Methods (in Russian), Nauka, Novosibirsk, 1974.


[48] G. A. Mikhailov, Minimax theory of weighted Monte Carlo methods, USSR Comput. Math. Math. Phys. 24 (1984), no. 5, 8–13.
[49] G. A. Mikhailov, Optimization of Weighted Monte Carlo Methods, Springer, Berlin, 1992.
[50] S. G. Mikhlin, Integral Equations and Their Applications to Certain Problems in Mechanics, Mathematical Physics and Technology, Pergamon, London, 1957.
[51] I. P. Mysovskikh, Interpolational Cubature Formulas (in Russian), Nauka, Moscow, 1981.
[52] V. A. Palmov, Description of high-frequency vibration of complex dynamic objects by methods of the theory of thermal conduction (in Russian), in: Selected Methods of Applied Mechanics, VINITI, Moscow (1974), 210–215.
[53] V. A. Palmov and A. K. Belyaev, Vibration conductivity theory (in Russian), in: Problems of Dynamics and Durability of Mechanisms, Riga (1978), 302–310.
[54] V. A. Palmov, O. Y. Kulchitsky and V. M. Ivanov, Integral equations of the theory of vibration conductivity and the semi-statistical method of their numerical solution (in Russian), Report no. 2369-79, VINITI, 1979.
[55] V. Z. Parton and P. I. Perlin, Integral Equations in Elasticity, Mir, Moscow, 1982.
[56] Y. V. Prokhorov and Y. A. Rozanov, Probability Theory. Basic Concepts. Limit Theorems. Random Processes, Springer, Berlin, 1969.
[57] A. P. Prudnikov, Y. A. Brychkov and O. I. Marichev, Integrals and Series: Elementary Functions. Vol. 1, Gordon & Breach, New York, 1986.
[58] I. Radović, I. M. Sobol and R. F. Tichy, Quasi-Monte Carlo methods for numerical integration: Comparison of different low-discrepancy sequences, Monte Carlo Methods Appl. 2 (1996), no. 1, 1–14.
[59] A. N. Shiryaev, Probability, Springer, New York, 1984.
[60] I. M. Sobol, Multidimensional Quadrature Formulas and Haar Functions (in Russian), Nauka, Moscow, 1969.
[61] I. M. Sobol, Probability estimate of the integration error for Πτ-nets, USSR Comput. Math. Math. Phys. 13 (1974), no. 4, 259–262.
[62] I. M. Sobol, A Primer for the Monte Carlo Method, CRC, Boca Raton, 1994.
[63] L. N. Sretensky, Theory of Newton Potential (in Russian), Gostekhizdat, Moscow, 1946.
[64] O. A. Stepanov, Fundamentals of Estimation Theory With Applications to Problems on Processing of Navigation Information. I: Introduction to Estimation Theory (in Russian), Elektropribor, St. Petersburg, 2009.
[65] O. A. Stepanov and A. B. Toropov, Investigation of a linear optimal estimator, in: Proc. 17th World Congress Math., Seoul (2008), 2750–2755.
[66] A. N. Tikhonov and A. A. Samarsky, Equations of Mathematical Physics, Pergamon, Oxford, 1963.
[67] S. I. Vokhmyanin, E. G. Roost and I. A. Bogov, Design of Cooling Systems of Combustion Turbine Blades. Software System (in Russian), International High School Academy St. Petersburg, Institute of Mechanical Engineering (VTUZ-LMZ), St. Petersburg, 1997.
[68] E. B. Vulgakov, Handbook of Aviation Gear Assemblies (in Russian), Mashinostroenie, Moscow, 1981.
[69] H. Weyl, Selected Works. Mathematics. Theoretical Physics (in Russian), Nauka, Moscow, 1984.
[70] R. J. Wilson, Introduction to Graph Theory, Oliver & Boyd, Edinburgh, 1972.
[71] A. A. Zhiglyavsky, Theory of Global Random Search, Kluwer, Dordrecht, 1991.
[72] M. I. Zhukovsky, Estimation of Aerodynamic Flow in Axial Turbomachines (in Russian), Mashinostroenie, Leningrad, 1967.

Index

adapting, 42 adaptive capability, 163 adaptive method of integration, 41 – adaptive method of control variate sampling, 46 – adaptive method of importance sampling, 44 – generalised adaptive method, 63 airfoil contour, 195 almost sure convergence, 36 almost surely, 19 antithetic variates method, 22 averaging, 257

Lamé’s coefficient, 216 Lauricella equation, 218 law of large numbers, 10 – strong law of large numbers, 19 LPτ -sequence, 113 Lyapunov surface, 174

boundary problem, 173 Boussinesq potential, 234

navigation problem, 141

Chebyshev inequality, 19 class E sα , 103 class S p , 92 class of functions H(m, a, λ), 60 control variate sampling, 21 correlated sampling, 21 crude Monte Carlo method, 19 definite integral, 9 easy function, 21 elasticity theory, 215 elementary Monte Carlo, 18 expectation, 10 first basic problem, 215 force tensor, 217 Fourier trigonometric series, 103 Fourier–Haar series, 92 global approximation, 83 group sampling, 23 Haar system of functions, 92 ideal-fluid flow, 193 imitation, 9 importance sampling, 20 improper integral, 132 integral equation, 150 – Fredholm integral equation, 152 inverse function method, 11 https://doi.org/10.1515/9783110554632-015

Kelvin–Somigliana tensor, 216 Kronecker symbol, 158

method of potentials, 218 Monte Carlo method, 9

orthonormalised function, 84 partition moment, 62 piecewise approximation, 59 Poisson coefficient, 215 potential, 215 primary estimator, 27 problem of vibration conductivity, 173 projectional and statistical method, 241 pseudo-force tensor, 217 pseudo-random, 11 random variable, 9 recurrent inversion, 154 regularisation, 180 rejection method, 15 resolvent, 157 resolvent equation, 157 sampling, 11 second basic problem, 231 secondary estimator, 27 semi-statistical method, 151 sequential bisection, 70 sequential Monte Carlo method, 27 simulation, 9 singularity, 165 statistical trial, 9 stratified sampling, 79 superposition method, 14 symmetrisation of the integrand, 22 trigonometric approximation, 104

unit cube, 10 variance, 19 – empirical variance, 19 – sample variance, 19 variance reduction, 20

vector random variable, 16 vibration conductivity tensor, 173 vibration of a pinned string, 264 Weyl force tensor, 237 Weyl tensor, 235