Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications (Mathematics and Statistics) [1 ed.] 178630547X, 9781786305473

This book is the first of two volumes on random motions in Markov and semi-Markov random environments. This first volume


English Pages 256 [244] Year 2021


Random Motions in Markov and Semi-Markov Random Environments 1

Series Editor Nikolaos Limnios

Random Motions in Markov and Semi-Markov Random Environments 1 Homogeneous Random Motions and their Applications

Anatoliy Pogorui Anatoliy Swishchuk Ramón M. Rodríguez-Dagnino

First published 2021 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2021

The rights of Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2020946634

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library

ISBN 978-1-78630-547-3

Contents

Preface
Acknowledgments
Introduction

Part 1. Basic Methods

Chapter 1. Preliminary Concepts
1.1. Introduction to random evolutions
1.2. Abstract potential operators
1.3. Markov processes: operator semigroups
1.4. Semi-Markov processes
1.5. Lumped Markov chains
1.6. Switched processes in Markov and semi-Markov media

Chapter 2. Homogeneous Random Evolutions (HRE) and their Applications
2.1. Homogeneous random evolutions (HRE)
2.1.1. Definition and classification of HRE
2.1.2. Some examples of HRE
2.1.3. Martingale characterization of HRE
2.1.4. Analogue of Dynkin’s formula for HRE
2.1.5. Boundary value problems for HRE
2.2. Limit theorems for HRE
2.2.1. Weak convergence of HRE
2.2.2. Averaging of HRE
2.2.3. Diffusion approximation of HRE
2.2.4. Averaging of REs in reducible phase space: merged HRE
2.2.5. Diffusion approximation of HRE in reducible phase space
2.2.6. Normal deviations of HRE
2.2.7. Rates of convergence in the limit theorems for HRE

Part 2. Applications to Reliability, Random Motions, and Telegraph Processes

Chapter 3. Asymptotic Analysis for Distributions of Markov, Semi-Markov and Random Evolutions
3.1. Asymptotic distribution of time to reach a level that is infinitely increasing by a family of semi-Markov processes on the set N
3.2. Asymptotic inequalities for the distribution of the occupation time of a semi-Markov process in an increasing set of states
3.3. Asymptotic analysis of the occupation time distribution of an embedded semi-Markov process (with increasing states) in a diffusion process
3.4. Asymptotic analysis of a semigroup of operators of the singularly perturbed random evolution in semi-Markov media
3.5. Asymptotic expansion for distribution of random motion in Markov media under the Kac condition
3.5.1. The equation for the probability density of the particle position performing a random walk in Rn
3.5.2. Equation for the probability density of the particle position
3.5.3. Reduction of a singularly perturbed evolution equation to a regularly perturbed equation
3.6. Asymptotic estimation for application of the telegraph process as an alternative to the diffusion process in the Black–Scholes formula
3.6.1. Asymptotic expansion for the singularly perturbed random evolution in Markov media in case of disbalance
3.6.2. Application to an economic model of stock market

Chapter 4. Random Switched Processes with Delay in Reflecting Boundaries
4.1. Stationary distribution of evolutionary switched processes in a Markov environment with delay in reflecting boundaries
4.2. Stationary distribution of switched process in semi-Markov media with delay in reflecting barriers
4.2.1. Infinitesimal operator of random evolution with semi-Markov switching
4.2.2. Stationary distribution of random evolution in semi-Markov media with delaying boundaries in balance case
4.2.3. Stationary distribution of random evolution in semi-Markov media with delaying boundaries
4.3. Stationary efficiency of a system with two unreliable subsystems in cascade and one buffer: the Markov case
4.3.1. Introduction
4.3.2. Stationary distribution of Markov stochastic evolutions
4.3.3. Stationary efficiency of a system with two unreliable subsystems in cascade and one buffer
4.3.4. Mathematical model
4.3.5. Main mathematical results
4.3.6. Numerical results for the symmetric case
4.4. Application of random evolutions with delaying barriers to modeling control of supply systems with feedback: the semi-Markov switching process
4.4.1. Estimation of stationary efficiency of one-phase system with a reservoir
4.4.2. Estimation of stationary efficiency of a production system with two unreliable supply lines

Chapter 5. One-dimensional Random Motions in Markov and Semi-Markov Media
5.1. One-dimensional semi-Markov evolutions with general Erlang sojourn times
5.1.1. Mathematical model
5.1.2. Solution of PDEs with constant coefficients and derivability of functions ranged in commutative algebras
5.1.3. Infinite-dimensional case
5.1.4. The distribution of one-dimensional random evolutions in Erlang media
5.2. Distribution of limiting position of fading evolution
5.2.1. Distribution of random power series in cases of uniform and Erlang distributions
5.2.2. The distribution of the limiting position
5.3. Differential and integral equations for jump random motions
5.3.1. The Erlang jump telegraph process on a line
5.3.2. Examples
5.4. Estimation of the number of level crossings by the telegraph process
5.4.1. Estimation of the number of level crossings for the telegraph process in Kac’s condition

References

Index

Summary of Volume 2

Preface

Motion is an essential element of our daily life. Thoughts on motion can already be found in the ancient Greek philosophers; however, we must look many centuries later for relevant mathematical models. During the late 16th and 17th centuries, Galileo Galilei, Johannes Kepler and Isaac Newton made remarkable advances in the construction of mathematical models for deterministic motion. Further advances in this line were made by Leonhard Euler and William Rowan Hamilton, and in 1788 Joseph-Louis Lagrange proposed a new formulation of classical mechanics. This formulation is based on the optimization of energy functionals, and it allows us to solve more sophisticated motion problems in a systematic manner; it is also the basis for modern quantum mechanics and the physics of high-energy particles. Albert Einstein, in 1905, through his special theory of relativity, introduced fine corrections for velocities close to the speed of light. All of these fundamental mathematical models deal with sophisticated motions: astronomical objects, satellites (natural and artificial), particles in intense electromagnetic fields, particles under gravitational forces, the behavior of light, and so on. However complicated these problems may be, all of them involve deterministic paths of motion.

In 1827, the Scottish botanist Robert Brown, looking through a microscope at pollen of the plant Clarkia pulchella immersed in water, described a special kind of random motion produced by the interaction of many particles, and it was recognized that this type of motion could not be fully explained by modeling the motion of each particle or molecule. Albert Einstein, 78 years later, in 1905, published a seminal paper in which he modeled the motion of the pollen as driven by collisions with individual water molecules. In this work, the diffusion equation was introduced as a convenient mathematical model for this random phenomenon. A
related model in this direction was presented in 1906 by Marian Smoluchowski, and experimental verification was carried out by Jean Baptiste Perrin in 1908. A similar model for Brownian motion had been proposed in 1900 by Louis Bachelier in his PhD thesis, The Theory of Speculation, in which he presented a stochastic analysis for valuing stock options in financial markets. This novel application of a stochastic model faced criticism at first, but Bachelier’s advisor Henri Poincaré fully supported the visionary idea. This shows the close relationship between models of random phenomena in physics (statistical mechanics) and in financial analysis, as well as in many other areas.

A notable contribution of the American mathematician Norbert Wiener was to establish the mathematical foundations of Brownian motion, which for that reason is also known as the Wiener process. Great mathematicians such as Paul Lévy, Andrey Kolmogorov and Kiyosi Itô, among many other brilliant experts in the new field of probability, laid the basis of the theory of stochastic processes. For example, the famous Black–Scholes formula of mathematical finance rests on both diffusion processes and Itô’s ideas.

In spite of its success in modeling many types of random motion and other random quantities, the Wiener process has some drawbacks when it comes to capturing the physics of many applications. For instance, its modulus of velocity is infinite at almost every instant, it has a free path length of zero, its path is almost surely non-differentiable at every point, and its Hausdorff dimension is 1.5, i.e. the path is fractal. However, the actual movement of a physical particle and the actual evolution of share prices are hardly justified as fractal quantities.
Taking these considerations into account, in this book we propose and develop other stochastic processes that are closer to the actual physical behavior of random motion in many situations. Instead of the diffusion process (Brownian motion), we consider telegraph processes, Poisson and Markov processes, and renewal and semi-Markov processes. Markov (and semi-Markov) processes are named after the Russian mathematician Andrey Markov, who introduced them around 1906. These processes have the important property of changing states under certain rules, i.e. they allow for abrupt changes (or switching) in the random phenomenon. As a result, these models are more appropriate for capturing random jumps, velocities that alternate after a random travel distance, random environments through the formulation of random evolutions, random motion with random changes of direction, interaction of particles with non-zero free paths, reliability of storage systems, and so on. In addition, we model financial markets with Markov and semi-Markov volatilities, as well as price covariance and correlation swaps. Numerical evaluations of variance, volatility, covariance and correlation swaps with semi-Markov volatility
are also presented. The novelty of these results lies in the pricing of volatility swaps in closed form, and in the pricing of covariance and correlation swaps in a market with two risky assets.

Anatoliy POGORUI
Zhytomyr State University, Ukraine

Anatoliy SWISHCHUK
University of Calgary, Canada

Ramón M. RODRÍGUEZ-DAGNINO
Tecnologico de Monterrey, Mexico

October 2020

Acknowledgments

Anatoliy Pogorui was partially supported by the State Fund for Basic Research (Ministry of Education and Science of Ukraine, 20.02.2017, letter no. 12).

Anatoliy Pogorui

I would like to thank NSERC for its continuing support, my research collaborators, and my current and former graduate students. I also thank my family for their inspiration and unconditional support.

Anatoliy Swishchuk

I would like to thank Tecnologico de Monterrey for providing me the time and support for these research activities. I also appreciate the support given by Conacyt through project no. SEP-CB-2015-01-256237. The time and loving support given to me by my wife Saida, my three daughters Dunia, Melissa and R. Melina, as well as my son Ramón Martín, are invaluable.

Ramón M. Rodríguez-Dagnino

Introduction

I.1. Overview

The theory of dynamical systems is one of the fields of modern mathematics under intensive study. Within the theory of stochastic processes, researchers actively study dynamical systems operating under the influence of random factors, and a good representative of such systems is the theory of random evolutions. The first results in this field were obtained by Goldstein (1951) and Kac (1974), who studied the movement of a particle on a line whose speed changes sign at the epochs of a Poisson process. This process subsequently became known as the telegraph process, or the Goldstein–Kac process. Further developments of this theory were presented in the works of Griego and Hersh (1969, 1971), Hersh and Pinsky (1972), and Hersh (1974, 2003), which gave a definition of stochastic evolutions in a general setting.

Important advances in the theory of stochastic evolutions have been made in the formulation of limit theorems and their refinement, the latter consisting of obtaining asymptotic expansions. These problems are studied in the works of Korolyuk and Turbin (1993), Skorokhod (1989), Turbin (1972, 1981), Korolyuk and Swishchuk (1986), Korolyuk and Limnios (2009, 2005), Shurenkov (1989, 1986), Dorogovtsev (2007b), Anisimov (1977), Girko (1982), Hersh and Pinsky (1972), Papanicolaou (1971a,b), Kertz (1978), Watkins (1984, 1985), Balakrishnan et al. (1988), Yeleyko and Zhernovyi (2002), Pogorui (1989, 1994, 2009a) and Pogorui and Rodríguez-Dagnino (2006, 2010a).

Among the many methods for establishing limit theorems, we should mention a class that could be called asymptotic average schemes. By these methods, a semi-Markov evolution can be reduced to a random evolution in a lumped state space with Markov switching, and can thus be studied within a Markov scheme. Most of the results in this field are presented in the following books and papers: Korolyuk and Swishchuk (1995b), Korolyuk and Korolyuk (1999), Korolyuk and Turbin (1993),
Korolyuk and Limnios (2005), Turbin (1972), Pogorui (2004a, 2012a), and Rodríguez-Said et al. (2007). Furthermore, this theory was developed further in the study of asymptotic expansions for functionals of random evolutions in the phase averaging and diffusion approximation schemes, the main subject of the following works: Korolyuk and Limnios (2009, 2004), Turbin (1981), Samoilenko (2005), Albeverio et al. (2009), Pogorui (2010a), Pogorui and Rodríguez-Dagnino (2010a), and Nischenko (2001). The asymptotic average scheme has been applied to a semi-Markov evolution for computing the effectiveness of a multiphase system with a couple of storage units by Pogorui (2003, 2004a), Pogorui and Turbin (2002), and Rodríguez-Said et al. (2007).

Research on random evolutions has also been carried out using martingale methods. The founders of this approach appear to be Stroock and Varadhan (1969, 1979), and further developments can be found in the works of Skorokhod (1989), Pinsky (1991), Korolyuk and Korolyuk (1999), Sviridenko (1989), Swishchuk (1989), Hersh and Papanicolaou (1972), Iksanov and Rösler (2006), and Griego and Korzeniowski (1989).

In addition to the successful development of abstract stochastic evolutions in the decade 1980–1990, several scholars studied various generalizations of the Goldstein–Kac telegraph process to multidimensional spaces. In this connection, the results of Gorostiza (1973), Gorostiza and Griego (1979), Orsingher (1985), Orsingher and Somella (2004), Turbin (1998), Orsingher and Ratanov (2002), Samoilenko (2001) and Lachal (2006) should be noted. In most of these works, the authors considered a finite number of directions of particle movement and obtained differential equations for the probability density functions. In the works of Pogorui (2007) and Pogorui et al. (2014), the authors proposed a method for solving such equations by using monogenic functions, after associating a particular commutative algebra with them.

Masoliver et al. (1993) studied the telegraph process with reflecting or partially reflecting boundaries and its distribution over a fixed interval of time. In papers by Pogorui (2005, 2006), and Pogorui and Rodríguez-Dagnino (2006, 2010b), the authors also study the stationary distribution of some Markov and semi-Markov evolutions with delaying boundaries. De Gregorio et al. (2005), and Stadje and Zacks (2004) considered a generalization of the telegraph process on a line in which there is a discrete set of particle velocities and, at Poisson epochs, a new velocity is chosen from this set. Pogorui (2010b), Pogorui and Rodríguez-Dagnino (2009b) and Samoilenko (2002) investigated fading evolutions, where the velocity of a particle tends to zero as the number of switches grows to infinity.
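The Goldstein–Kac process described at the start of this overview is easy to simulate directly. The following Monte Carlo sketch is ours, not part of the book, and all parameter values are illustrative; it checks the sample second moment against the closed-form value E[X(t)²] = v²(t/λ − (1 − e^(−2λt))/(2λ²)).

```python
import math
import random

def telegraph_sample(t, v, lam, rng):
    """One realization of the Goldstein-Kac telegraph process X(t): a particle
    starts at the origin with velocity +v or -v (equally likely), and the
    velocity changes sign at the epochs of a Poisson process of rate lam."""
    x, elapsed = 0.0, 0.0
    direction = rng.choice((-1.0, 1.0))
    while True:
        tau = rng.expovariate(lam)           # exponential time between switches
        if elapsed + tau >= t:
            return x + direction * v * (t - elapsed)
        x += direction * v * tau
        elapsed += tau
        direction = -direction               # the speed changes its sign

rng = random.Random(7)
t, v, lam, n = 1.0, 1.0, 2.0, 20000
samples = [telegraph_sample(t, v, lam, rng) for _ in range(n)]
mean = sum(samples) / n
second_moment = sum(x * x for x in samples) / n
# closed-form second moment: E[X(t)^2] = v^2 (t/lam - (1 - e^{-2 lam t})/(2 lam^2))
exact = v ** 2 * (t / lam - (1.0 - math.exp(-2.0 * lam * t)) / (2.0 * lam ** 2))
```

By symmetry E[X(t)] = 0, and with these parameters exact ≈ 0.377. Under Kac's condition (v, λ → ∞ with v²/λ fixed), the same process approximates a Brownian motion, which is the diffusion approximation recurring throughout the book.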


Pogorui and Rodríguez-Dagnino (2005a) also studied a generalization of the telegraph process to the case of Erlang interarrival times between successive switches of the particle velocity. For such processes, a differential equation for the probability density function (pdf) of the particle position on the line was obtained. In addition, in Pogorui (2011a) a method for solving such equations, using monogenic functions associated with the equation on a commutative algebra, was developed.

Orsingher and De Gregorio (2007), Stadje (2007) and others studied the motion of a particle in multidimensional spaces with constant absolute velocity and directions uniformly distributed on the unit sphere, changing at Poisson epochs. The authors obtained explicit formulas for the distribution of the particle position in two- and four-dimensional space, and investigated the “explosive effect” of the pdf of the position of a particle approaching the singularity sphere in the plane and in three-dimensional space.

In recent years, much attention has been paid to the Pearson random walk with gamma-distributed steps and to the associated walks of Pearson–Dirichlet type, whose steps have the Dirichlet distribution. Franceschetti (2007) developed explicit formulas for the conditional pdf of the position of a particle at any number of steps for a walk in Rn (n = 1, 2) with uniformly distributed directions and steps with the Dirichlet distribution with parameter q = 1. Beghin and Orsingher (2010a) obtained an expression for the conditional distribution of the position of a particle of the Pearson–Dirichlet walk with parameter q = 2 in the plane. Le Caër (2010, 2011) generalized these results to the multidimensional Pearson–Dirichlet random walk with arbitrary parameter q, introducing the concept of a “Hyperspherical Uniform” (HU) random walk. The HU walk is a motion whose endpoint distribution is identical to the distribution of the projection onto the walk space of a point chosen at random on the surface of the unit hypersphere of a higher-dimensional space. By using properties of the HU random walk, Le Caër found walks for which the conditional probability density can be expressed in closed form. We should also mention the recent papers by De Gregorio (2014) and Letac and Piccioni (2014), which obtained a generalization, and a simplification of the proofs, of the results stated by Le Caër.

In all of the above-mentioned papers, the authors study the conditional distributions of the particle position at renewal epochs of the switching direction process. Corresponding results can also be found for a non-Markov switching process; the technique is to replace the distribution of the non-Markov process by that of the Markov chain walk embedded in this process. By using this approach, Pogorui and Rodríguez-Dagnino (2011a) obtained a recursive expression for the conditional characteristic functions of a random walk with Erlang switching, that is, with a non-Markov switching process. Namely, they studied the changes in the conditional characteristic functions of the particle position not only at the instants of direction change, but at all Poisson epochs. Pogorui (2011a) studied the
particular case of an Erlang-distributed stay of the switching process in its states. Further results in this direction were obtained by Pogorui and Rodríguez-Dagnino (2012), where the authors also studied multidimensional random motion with random velocities. For some distributions of the random velocity, they observed an “explosive effect” for the pdf of the position of a particle approaching the singularity sphere in four-dimensional space.

Other directions of random walk theory that have been studied intensively in recent years are fractional Brownian motion and the fractional generalization of the telegraph process. These processes have been studied by Qian et al. (1998), Cahoy (2007), Beghin and Orsingher (2010b), Orsingher and Beghin (2009), D’Ovidio et al. (2014), and others.

A set of interacting particles, each moving on a line according to a telegraph process up to collision with another particle, was studied by Pogorui (2012b); during a collision, the particles exchange momenta. In this book, the author calculates the distribution of the time of the first collision of two telegraph particles that start simultaneously from different points on a line, and investigates the limit of this distribution under Kac’s condition. The author also investigates a system of particles with Markov switching bounded by reflecting boundaries, and obtains the distribution of the positions of the particles of the system at a fixed time. The limiting properties of these distributions, and an estimate of the number of collisions in the system with and without reflecting boundaries, are also studied. Such a system of particles can be interpreted as a model of one-dimensional gas; it is a kind of one-dimensional generalization of the deterministic billiard-type models of gas studied, for example, by Kornfeld et al. (1982). The velocity of particles in these models is finite, a major difference from systems where the position of a particle is described by a diffusion process, such as the Arratia flow. We should note that models with finite particle speeds, moving under the influence of forces of mutual attraction, were studied by Sinai (1992), Lifshits and Shi (2005), Giraud (2001, 2005), Bertoin (2002) and Vysotsky (2008).

I.2. Description of the book

The book is divided into two volumes, each containing two parts. Part 1 of Volume 1 consists of basic concepts and methods developed for random evolutions. These methods are the elementary tools for the rest of the book, and they include many results on potential operators and a description of some techniques for finding closed-form expressions in relevant applications.

Part 2 of Volume 1 comprises three chapters (3, 4 and 5) dealing with asymptotic results (Chapter 3) and applications ranging from random motion with different types
of boundaries, reliability of storage systems, telegraph processes, an alternative formulation of the Black–Scholes formula in finance, fading evolutions, jump telegraph processes, and estimation of the number of level crossings for telegraph processes (Chapters 4 and 5).

Part 1 of Volume 2 extends many of the results of the latter part of Volume 1 to higher dimensions and consists of two chapters (1 and 2). Chapter 1 presents novel results on random motion in the realistic three-dimensional case, which has barely been treated in the literature. Chapter 2 deals with the interaction of particles in Markov and semi-Markov media, a topic in which many researchers have a strong interest.

Part 2 of Volume 2 discusses applications of Markov and semi-Markov motions in mathematical finance across three chapters (3, 4 and 5). It includes applications of the telegraph process in modeling stock price dynamics (Chapter 3), the pricing of variance, volatility, covariance and correlation swaps with Markov volatility (Chapter 4), and the pricing of the same swaps with semi-Markov volatility (Chapter 5).

The following is a general overview of the chapters and sections of the book. Chapters 1 and 2 of Volume 1 review the literature on random evolutions and outline the main areas of research; many of these auxiliary results are used throughout the book. Section 1.1 outlines research directions in the theory of telegraph processes and their generalizations. In section 1.2, we introduce the notions of the projector operator and of the generalized inverse operator, or potential, for an invertible reduced operator, as used in the perturbation theory of linear operators. In turn, this theory is often used in the study of the asymptotic distribution of the probability of reaching a “hard to reach” domain. In section 1.3, we consider the notion of a semigroup of operators generated by a Markov process. We give the definitions of the infinitesimal operator, the stationary distribution and the potential of a Markov process. These concepts are used in Chapter 3 for the asymptotic analysis of large deviations of semi-Markov processes.

Section 1.4 provides a constructive definition of a semi-Markov process based on the concept of the Markov renewal process (MRP). The notion of the semi-Markov kernel, a key definition for MRPs, is considered. For a semi-Markov process, we introduce some auxiliary processes with which the semi-Markov process forms a two-component (or bivariate) Markov process, and for such a process the infinitesimal operator is presented. In section 1.5, we consider the notion of a lumped Markov chain and describe a phase merging scheme.
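The constructive definition summarized above — a semi-Markov process built from a Markov renewal process (ξₙ, θₙ) — can be sketched in a few lines of code. The two-state chain, its transition matrix and the sojourn laws below are illustrative choices of ours, not taken from the book:

```python
import random

def semi_markov_path(t_max, P, sojourn, x0, rng):
    """Sample a semi-Markov process x(t) from its Markov renewal process
    (xi_n, theta_n): xi_n is the embedded Markov chain with transition
    matrix P, and theta_n is the sojourn time spent in state xi_n, drawn
    from a state-dependent distribution."""
    t, state = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_max:
        t += sojourn[state](rng)              # theta_n: sojourn in current state
        u, cum, nxt = rng.random(), 0.0, 0
        for j, p in enumerate(P[state]):      # xi_{n+1}: embedded-chain step
            cum += p
            if u < cum:
                nxt = j
                break
        state = nxt
        times.append(t)
        states.append(state)
    return times, states

# illustrative two-state medium: Erlang(2) sojourns in state 0 (so the process
# is genuinely semi-Markov, not Markov), exponential sojourns in state 1
P = [[0.0, 1.0],
     [0.6, 0.4]]
sojourn = {0: lambda rng: rng.gammavariate(2, 0.5),  # Erlang, shape 2, mean 1
           1: lambda rng: rng.expovariate(1.0)}
rng = random.Random(1)
times, states = semi_markov_path(50.0, P, sojourn, 0, rng)
```

Because the sojourn in state 0 is not exponential, x(t) alone is not Markov; adjoining the time elapsed since the last renewal yields the two-component (bivariate) Markov process mentioned above.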


Section 1.6 describes stochastic switching processes in Markov and semi-Markov environments. We define the semigroup of operators associated with such a process and consider its infinitesimal operator. In addition, the concept of a superposition of independent semi-Markov processes is considered.

In Chapter 2 of Volume 1 we introduce homogeneous random evolutions (HRE): the elementary definitions, classification and some examples. We also present the martingale characterization and an analogue of Dynkin’s formula for HRE. Other important topics covered in this chapter are limit theorems, weak convergence and diffusion approximations, which are useful for Part 2 of Volume 2.

In Chapter 3 of Volume 1 we consider the asymptotic distribution of a functional of the time for reaching “hard to reach” areas of the phase space by a semi-Markov process on the line. Section 3.1 is devoted to the analysis of the asymptotic distribution of a functional related to the time to reach a level that increases without bound, for a semi-Markov process on the set of natural numbers. In section 3.2, we give asymptotic estimates for the distribution of the residence time of the semi-Markov process in an expanding set of states when the condition of existence of the functional A is not fulfilled. In section 3.3, we obtain the asymptotic expansion for the distribution of the first exit time from an expanding subset of the phase space of the semi-Markov process embedded in a diffusion process. In section 3.4, we obtain asymptotic expansions for the perturbed semigroups of operators of the corresponding three-variate Markov process (after the standard extension of the phase space of the perturbed random evolution uε(t, x) in semi-Markov media), provided that the evolution uε(t, x) converges weakly to a diffusion process as ε → 0.
In section 3.5, we obtain asymptotic expansions under Kac’s condition in the diffusion approximation for the distribution of a particle position, which performs a random walk in a multidimensional space with Markov switching. Section 3.6 describes a novel financial formula as an alternative to the well-known Black–Scholes formula for modeling the dynamic behavior of stock markets. This new formula is based on the asymptotic expansion for the singularly perturbed random evolution in Markov media. Chapter 4 of Volume 1 is devoted to the computation of the stationary distributions for random switched processes with reflecting boundaries in Markov


and semi-Markov environments. These results are used for the calculation of the efficiency of inventory control systems with feedback, and for some reliability problems of systems that are modeled by Markov and semi-Markov evolutions. In section 4.1, we derive the stationary distribution of transport processes with delay at the boundaries in Markov media. These results are applicable to the study of multiphase systems with several reservoirs. Section 4.2 deals with the transport process with semi-Markov switching. We find the stationary measure of this process, which is described by a differential equation on the corresponding interval, with constant vector-field values that depend on the switching semi-Markov process with a finite set of states. In section 4.3, we give examples of applications of this method for the calculation of the stationary distributions of random switched processes with reflecting boundaries. In particular, we compute the efficiency coefficient of single- and two-phase inventory control systems with feedback. In section 4.4, we apply random evolutions with delaying barriers to modeling the control of supply systems with feedback, by considering the semi-Markov switching process. In Chapter 5 of Volume 1 we study various models of stochastic evolutions that generalize the Goldstein–Kac telegraph process and investigate their distributions. Section 5.1 is devoted to one-dimensional semi-Markov evolutions in an Erlang environment. In section 5.1.1, we derive a hyperbolic differential equation of the telegraph type for the pdf of the position of a particle moving at finite velocity, where the interarrival times between two successive changes of velocity have an Erlang distribution. In section 5.1.2, a method of solution of this partial differential equation is developed by using monogenic functions associated with a finite-dimensional commutative algebra.
Section 5.1.3 extends the results of section 5.1.2 to infinite-dimensional commutative algebras. In section 5.1.4, by using the methods from sections 5.1.2 and 5.1.3, we obtain the distribution of a one-dimensional random evolution in Erlang media. In section 5.2, we find the distribution of limiting positions of a particle moving according to a fading evolution, assuming that the interarrival times of the switching process are Erlang or uniformly distributed. In section 5.3, differential and integral equations for jump random motions are introduced, and some examples are developed to illustrate the method.


Section 5.4 is devoted to the estimation of the number of level crossings by the telegraph process under Kac's condition. In Chapter 1 of Volume 2 we study random motions in higher dimensions. In section 1.1, we study the random walk of a particle with constant absolute velocity, which changes its direction according to a uniform distribution on the unit sphere at the renewal epochs of a switching process. Section 1.2 deals with random motion with uniformly distributed directions and random velocity for particles in one, two, three and four dimensions. Similar cases are considered in section 1.3 for the distribution of random motion at non-constant velocity in semi-Markov media, and in section 1.4 for Goldstein–Kac telegraph equations and random flights in higher dimensions, where the directions of the movement and the velocity change at the renewal epochs. The jump telegraph process in Rn is considered in section 1.5. In Chapter 2 of Volume 2 we study a system of interacting particles with Markov and semi-Markov switching. Section 2.1 is devoted to an ideal gas model with finite velocity of molecules. In this section, we find the distribution of the first meeting time of two telegraph particles on the line that started simultaneously from different points. In section 2.1.2, we estimate the number of particle collisions. In section 2.1.4, we obtain an asymptotic estimate of the number of collisions of the system of telegraph particles as time goes to infinity. Unlike the previous section, the system has no boundaries and particles can move arbitrarily far away in a straight line. Section 2.2 is devoted to the generalization of the results of section 2.1 to the case of semi-Markov switching processes. In section 2.2.1, we obtain a set of renewal-type equations for the Laplace transform of the first collision time of two particles.
In section 2.2.2, we consider an example regarding a particular case of semi-Markov switching of particle velocity, and we study the limiting properties of the distribution of the particle position. Section 2.2.3 is devoted to studying the case where the first collision time of two particles has finite expectation. We should note that in all previous examples, including the Markov case, the expectation of the first collision time of two particles is infinite. In Chapter 3 of Volume 2 the application of the telegraph process to option pricing, as an alternative to the diffusion process in the Black–Scholes formula, is further studied through asymptotic estimation of the corresponding operators. Some numerical results and plots are also presented. Pricing variance, volatility derivatives, covariance and correlation swaps for financial markets with Markov-modulated volatilities are the main topics of Chapter 4 of Volume 2. These results are extended to the case of semi-Markov modulated volatilities in Chapter 5 of Volume 2. Numerical results and some plots to illustrate these results are also presented in these last two chapters.

PART 1

Basic Methods

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.

1 Preliminary Concepts

1.1. Introduction to random evolutions

Evolution in a random environment means that the system depends on the state of the environment, and this occurs in many real systems in nature. If the evolution of the system does not affect the random environment, while the environment itself is described by a random process, say a Markov renewal process (MRP), then such systems are called stochastic. Some stochastic systems change their states abruptly; that is, in every state, the system spends a random holding time and then immediately transfers to another state. Such systems are called discrete-state systems. The simulation of discrete stochastic systems often uses jumping Markov and semi-Markov processes. However, in many applications stochastic systems change their states continuously, or in a combination of continuous and discrete ways, which makes it ineffective to model such systems with jumping Markov or semi-Markov processes. A better modeling strategy for such systems is the notion of a semi-Markov (Markov) evolution, which is given by two processes: the switching MRP, describing the random environment, and the switched process, which describes the evolution of the system. This book is aimed at studying both discrete systems, the models for which are Markov and semi-Markov processes, and continuous systems, which are simulated by random evolutions (RE). One of the relevant areas in the study of semi-Markov processes is the theory of large deviations, in particular, the asymptotic analysis of some functionals associated with the hitting time of a "hard-to-achieve" level. Such results are useful in queuing theory and reliability theory (Korolyuk and Turbin 1982). One of the earliest asymptotic analyses of semi-Markov processes was carried out by Korolyuk, who generalized the Vishyk–Lyusternik algorithm (Vasileva and


Butuzov 1973, 1990; Vishik and Lusternik 1960) to study the distributions of large deviations. Basically, the method consists of representing a distribution as an asymptotic series with two types of terms. One type depends regularly on a small parameter of perturbations of the Markov chain embedded in the semi-Markov process, and the other type consists of boundary-layer functions. This method allows us to study the asymptotic behavior of the residence time of a semi-Markov process in a fixed set of states. By using the approach of Korolyuk, the asymptotic analysis of the hitting time of a "hard-to-achieve" level by semi-Markov processes was carried out by Korolyuk et al. (1973), Korolyuk and Tadjiev (1977), Korolyuk and Borovskikh (1981), Korolyuk and Turbin (1993, 1982), and others. An alternative method for the asymptotic analysis of singularly perturbed semigroups of operators, which makes it possible to write the asymptotic expansion coefficients explicitly, bypassing recursive calculations, is presented in the papers by Turbin (1981) and Pogorui (1990, 1992, 1994). The search for limiting distributions and the asymptotic analysis of semi-Markov processes remain active areas of research. For instance, we should mention in this line the works of Foss (2007), Kabanov and Pergamenshchiskov (2003), Semenov (2008), Silvestrov (2004, 2007a,b), Soloviev (1993), and Yeleyko and Zhernovyi (2002). Limiting distributions for stationary processes and their application in the assessment of financial risks are studied by Novak (2007, 2011). The adapted Vishyk–Lyusternik algorithm also finds its application in the study of asymptotic expansions for functionals of random evolutions in the phase averaging and diffusion approximation. This topic is studied in the works of Albeverio et al. (2009), Korolyuk and Limnios (2009, 2005), Samoilenko (2005), Pogorui (2009a), and Pogorui and Rodríguez-Dagnino (2010a).
Another important approach in the study of REs is the convergence rate of random walks by using limit theorems based on the martingale nature of REs. This has been the topic of various books and articles (Pinsky 1991; Korolyuk 1993; Swishchuk 1989; Iksanov 2006; Iksanov and Rösler 2006; Sviridenko 1989). Stochastic processes with Markov and semi-Markov switching that simulate the motion of a particle in a finite-dimensional space are called REs. One of the first works along this line was a random evolution model describing the motion of a particle on a line at a constant absolute speed whose direction is switched by a Poisson process. Such an analysis was made by Goldstein (1951) and Kac (1951, 1974). This model was later called the telegraph process because the distribution of the motion of such particles is a solution of the telegraph equation. However, the telegraph process {x(t)} lacks some useful properties, which makes it difficult to study. For example, the telegraph process x(t), in contrast to the Wiener process, is neither Markov nor a martingale. In particular, these "defects" complicate the study


of systems of interacting particles whose trajectories are described by telegraph processes. For example, after a hard collision of a particle with another particle, its whole trajectory is no longer described by a telegraph process. Pogorui (2012b) studied a system of particles and specified a manner to overcome problems arising from the non-Markov trajectories of particles. The connections of the telegraph process with the random motion of particles have given rise to new research areas:

1) The generalization of telegraph processes to multidimensional spaces, especially on a plane with a finite number of symmetrical directions of particle movement, which change at random Poisson events. These studies were performed in the works of Orsingher (1985), Orsingher and Somella (2004) and Lachal (2006).

2) The telegraph processes with reflecting and partially reflecting boundaries and their distributions. This problem was studied by Masoliver et al. (1993). The stationary distribution of Markov and semi-Markov REs with delays at the boundaries (or sticky boundaries) and their applications to evaluate the effectiveness of multiphase inventory control systems with feedback were studied by Pogorui and Turbin (2002), Pogorui (2003, 2004a), and Pogorui and Rodríguez-Dagnino (2009b, 2010b).

3) The generalizations of telegraph processes to heterogeneous cases in which the absolute speed is variable or the distributions of the switching processes change over time. For example, the work of Stadje and Zacks (2004) is devoted to the generalization of the telegraph process to the case where the particle velocities are independent and identically distributed random variables that change at Poisson events. Di Crescenzo and Martinucci (2010) studied a generalized telegraph process with increasing parameters of the switching process.
Samoilenko (2002), Pogorui (2009b, 2010b), and Pogorui and Rodríguez-Dagnino (2009b, 2005b) studied fading evolutions, where the speed tends to zero as the number of switches increases infinitely. In the paper by Pogorui and Rodríguez-Dagnino (2005a), the authors study a generalization of the telegraph process to the case of general Erlang sojourn times of the switching process and obtain a hyperbolic partial differential equation for the distribution of this process. In addition, in Pogorui (2011a) and Pogorui et al. (2014) a method for solving such kinds of differential equations was developed.

4) Multidimensional random motion with an infinite number of directions of motion, which changes its direction at renewal epochs of the switching process, has been developed in some recent works. Apparently, one of the first papers in this field was that by Stadje (2007), where motion in the plane with a Poisson switching process is studied. Further works in this direction are by Orsingher and De Gregorio (2007), and Franceschetti (2007). Most of these works use the apparatus of characteristic functions to study the distribution of a particle in multidimensional spaces with constant absolute speed, when at Poisson events the particle changes its direction to a new one uniformly distributed on the unit sphere. In these papers,


the authors obtained the distribution of the particle position in explicit form for dimensions n = 2, 4, 6. Pogorui and Rodríguez-Dagnino (2012) study an isotropic random motion in multidimensional space with a random velocity, which is more natural from the physics point of view. When the velocity is random, an "explosive effect" is observed for some distributions. Pogorui and Rodríguez-Dagnino (2011b, 2013) studied an isotropic random motion with gamma steps in higher dimensions. Le Caër (2010, 2011), De Gregorio and Orsingher (2012), Beghin and Orsingher (2010a), De Gregorio (2014), and Letac and Piccioni (2014) considered a random motion in Rn by suggesting that the switching points 0 < τ1 < τ2 < ··· < τn < t have the Dirichlet distribution on the interval [0, t] and that the directions, which change at the epochs τi, have the uniform distribution on a sphere. In all of the papers mentioned above, the authors study the conditional distributions of the particle position at renewal epochs of the switching direction process.

5) Recently, there has been an increasing interest in studying stochastic flows in a system of interacting particles, the so-called Arratia flow. The first results in this field were found by Arratia (1979), where the author investigated a system of Wiener particles on a line that coalesce upon collision and continue to move as one particle. Many authors have continued this line of research and have made important contributions (Dorogovtsev 2007b,a, 2010; Dawson 1993; Le and Raimond 2004; Yu 1999; Konarovskii 2011). Pogorui (2012b) investigates a system of interacting particles with Markov switching. In particular, the author finds the distribution of the first collision time of two telegraph particles that started simultaneously from different points on the line, and he finds the limit of the distribution under the Kac condition.
Based on this result, the author investigates the free path time for a family of particles with elastic collisions, and he studies the distributions of particles both with and without reflecting boundaries, as well as the limiting properties of these distributions. Since semi-Markov REs appear as abstract models for a wide class of real stochastic systems (Korolyuk 1987, 1993; Korolyuk et al. 1973; Korolyuk and Swishchuk 1986; Korolyuk and Turbin 1982), the study of their properties is important not only from a theoretical point of view but also in practical problems. In papers by Pogorui and Turbin (2002), Pogorui (2003, 2004b), Rodríguez-Said et al. (2007, 2008), and Pogorui et al. (2006), the authors study the stationary distributions of random evolutions and their application to calculating the efficiency of inventory (or reservoir) control systems with feedback. Semi-Markov evolutions also find wide use in the modeling and study of stochastic processes in finance and queueing systems, for example, in the works of Korolyuk (1993), Korolyuk and Swishchuk (1995b), Mitra (1998), Pogorui and Rodríguez-Dagnino (2009a), Pogorui and


Rodríguez-Dagnino (2008), Ratanov (2007), López and Ratanov (2012), Swishchuk and Burdeinyi (1996), Swishchuk (2004), Maglaras and Zeevi (2004), and others.

1.2. Abstract potential operators

In the study of the asymptotic distribution of the probability of reaching a "hard-to-reach domain" by semi-Markov processes, the perturbation theory of linear operators is systematically used. As basic tools for this theory, it is necessary to introduce the notions of the projector and of the generalized inverse operator, or potential operator.

Let B be a Banach space. Two closed linear manifolds M1 and M2 in B are called supplemented if M1 ∩ M2 = {0}, where 0 is the zero element of B, and M1 + M2 = {x : x = x1 + x2, x1 ∈ M1, x2 ∈ M2} = B. Then we say that B is the direct sum of M1 and M2, and we use the usual notation B = M1 ⊕ M2. If M1 and M2 are closed subspaces of B, then to each such representation as a direct sum there corresponds a bounded projector P such that PB = M1, (I − P)B = M2.

Let A : B → B be a linear operator. We have the following definitions:

DEFINITION 1.1.– (Kato 1980) A linear operator A, defined on some linear manifold D(A), is called closed if xn → x (xn ∈ D(A)) and Axn → y imply that x ∈ D(A) and y = Ax.

DEFINITION 1.2.– (Kato 1980) A closed densely defined operator A (D̄(A) = B) is called normally solvable if its range R(A) is a subspace of B.

The following theorem characterizes normally solvable operators:

THEOREM 1.1.– (Kato 1980) A closed densely defined operator A, with R̄(A) = B, is normally solvable if and only if (iff)

⊥N(A*) = R(A),

where ⊥N(A*) is the subset of B such that f(x) = 0 for all f ∈ N(A*).

The following lemma states another important condition for normal solvability of operators:

LEMMA 1.1.– (Kato 1980; Krein 1971) Suppose A is a closed operator and there exists a closed linear subspace M in B such that B = M ⊕ R(A). Then the operator A is normally solvable.

The following characterization of normal solvability of an operator is well known (Cox 1976). A bounded linear operator A is normally solvable if and only if its restriction Ã to R(A) has a bounded inverse operator Ã⁻¹.


DEFINITION 1.3.– (Kato 1980) A normally solvable operator A is called a Fredholm operator if dim N(A*) = dim N(A) = r < ∞.

DEFINITION 1.4.– (Kato 1980; Krein 1971) A closed densely defined operator A on B is called reducible-invertible if B can be represented as follows:

B = N(A) ⊕ R(A).    [1.1]

Since R(A) is closed, a reducible-invertible operator is normally solvable. Hence, for a reducible-invertible operator A, we have R(A) ∩ N(A) = {0}, and every b ∈ B can be represented in the form b = f + u, where f ∈ R(A), u ∈ N(A), and N(A) and R(A) are subspaces of B. The decomposition in equation [1.1] generates a projective operator Π onto N(A) such that

ΠB = N(A),    (I − Π)B = R(A).    [1.2]

DEFINITION 1.5.– (Kato 1980) A projective operator Π, which satisfies the properties in equation [1.2] for some reducible-invertible operator A, is said to be the proper projector of A. The proper projector Π satisfies the following conditions for all ϕ ∈ B:
1) Π²ϕ = Πϕ;
2) ΠAϕ = AΠϕ = 0.

LEMMA 1.2.– (Kato 1980; Korolyuk and Turbin 1993) If A is a reducible-invertible operator, then there exists a bounded inverse operator (A + Π)⁻¹.

Let us consider an example of a proper projector of a reducible-invertible operator A. Let E be a countable set. Denote by l∞ the real (Banach) space of vectors u = {ui, i ∈ E} with the norm ‖u‖ = max_{i∈E} |ui|, and by l1 the real space of vectors ρ = {ρi, i ∈ E} with the norm ‖ρ‖ = Σ_{i∈E} |ρi|. For any ρ ∈ l1 and u ∈ l∞, define the scalar product

(ρ, u) = Σ_{i∈E} ρi ui.

The tensor product of u ∈ l∞ and ρ ∈ l1 is defined as follows: [u ⊗ ρ] = {ui ρj , i, j ∈ E} . It is easy to see that for f ∈ l∞ , we have [u ⊗ ρ] f = (ρ, f ) u.


Now, suppose that an operator A = {aij, i, j ∈ E} with real aij satisfies the condition

sup_{i∈E} Σ_{j∈E} |aij| = ‖A‖ < ∞.    [1.3]

Consider

Au = {Σ_{j∈E} aij uj, i ∈ E}.

It follows from equation [1.3] that A : l∞ → l∞. In addition, A is bounded on l∞ with the norm ‖A‖. The adjoint operator A* of A is defined on l1 as follows:

A*ρ = {Σ_{i∈E} aij ρi, j ∈ E}.

It is easily verified that A* is bounded on l1 and ‖A*‖ = ‖A‖. Suppose A is Fredholm, i.e. dim N(A) = dim N(A*) = r < ∞, where N(A) and N(A*) are the kernels of the operators A and A*, respectively. Let u(1), u(2), ..., u(r) be a basis of N(A) and ρ(1), ρ(2), ..., ρ(r) be a basis of N(A*). Suppose that (ρ(i), u(j)) = δij, where δij is the Kronecker symbol.

The operator Π = Σ_{k=1}^{r} [u(k) ⊗ ρ(k)] is the proper projector of A, and it satisfies the following properties:
1) Π² = Π;
2) Πu = u, u ∈ N(A);
3) ρΠ = ρ, ρ ∈ N(A*);
4) Πf = 0, f ∈ R(A);
5) ϕΠ = 0, ϕ ∈ R(A*);
6) ΠAϕ = AΠϕ, ϕ ∈ R(A).

LEMMA 1.3.– (Kato 1980; Korolyuk and Turbin 1993) Suppose that an operator A is reducible-invertible and Π is its proper projector. Then there exists the bounded inverse operator (A + Π)⁻¹.
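In finite dimensions this construction can be carried out numerically. The sketch below is illustrative only (plain NumPy; the helper names `null_space` and `proper_projector` are not from the book): it builds bases of N(A) and N(A*) by singular value decomposition, renormalizes them so that (ρ(i), u(j)) = δij, and assembles Π as the sum of tensor products u(k) ⊗ ρ(k).

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (as columns) of the null space of a square matrix M."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[s <= tol].T          # right singular vectors with zero singular value

def proper_projector(A):
    """Proper projector of a reducible-invertible matrix A: the projector onto
    N(A) along R(A), assembled as Pi = sum_k u^(k) (rho^(k))^T with the bases
    normalized so that (rho^(i), u^(j)) = delta_ij."""
    U = null_space(A)              # columns span N(A)
    R = null_space(A.T)            # columns span N(A*)
    R = R @ np.linalg.inv(R.T @ U).T   # renormalize: now R.T @ U = I
    return U @ R.T                 # sum of tensor products u^(k) x rho^(k)
```

By construction Π² = Π and ΠA = AΠ = 0, which are exactly the defining properties of the proper projector above.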


It follows from lemma 1.3 that (A + Π)⁻¹ − Π exists and is bounded.

DEFINITION 1.6.– (Korolyuk and Turbin 1976, 1993; Sato 1971) The operator R0 = (A + Π)⁻¹ − Π is called the generalized inverse operator or potential operator of A. The operator R0 is also called the abstract potential operator of the reducible-invertible operator A.

LEMMA 1.4.– (Kato 1980; Korolyuk and Turbin 1993) The operator R0 satisfies the following properties:
1) AR0 = R0 A = I − Π;
2) R0 Π = ΠR0 = 0;
3) ‖R0‖ = ‖Ã⁻¹‖, where Ã is the restriction of A to R(A).

The notion of a generalized inverse operator plays an important role in the study of Markov processes (Gikhman and Skorokhod 1975; Korolyuk and Turbin 1993, 1982). In particular, for a Markov chain with transition probabilities matrix P, the generalized inverse operator R0 of I − P is called the potential of this Markov chain.

EXAMPLE 1.1.– Consider a Markov chain {ξn, n ≥ 0} with transition probabilities matrix

P = ( 1    0    0     0
      1    0    0     0
      0    0   1/4   3/4
      0    0   1/2   1/2 ).

Denote by A the following matrix:

A = I − P = (  0    0    0     0
              −1    1    0     0
               0    0   3/4  −3/4
               0    0  −1/2   1/2 ).

The matrix A is called infinitesimal for the Markov chain ξn. It is easily verified that u(1) = (1, 1, 0, 0)ᵀ, u(2) = (0, 0, 1, 1)ᵀ is a basis of N(A) and ρ(1) = (1, 0, 0, 0), ρ(2) = (0, 0, 2/5, 3/5) is a basis of N(A*), where A* is the transpose matrix of A. Thus, dim N(A) = dim N(A*).

Let us calculate the tensor products, noting that the scalar products satisfy (ρ(i), u(j)) = δij:

[u(1) ⊗ ρ(1)] = (1, 1, 0, 0)ᵀ (1, 0, 0, 0) = ( 1  0   0    0
                                               1  0   0    0
                                               0  0   0    0
                                               0  0   0    0 ),

[u(2) ⊗ ρ(2)] = (0, 0, 1, 1)ᵀ (0, 0, 2/5, 3/5) = ( 0  0   0    0
                                                   0  0   0    0
                                                   0  0  2/5  3/5
                                                   0  0  2/5  3/5 ).

Thus, the proper projector of A is as follows:

Π = ( 1  0   0    0
      1  0   0    0
      0  0  2/5  3/5
      0  0  2/5  3/5 ).

Then,

A + Π = ( 1  0    0       0
          0  1    0       0
          0  0  23/20  −3/20
          0  0  −1/10  11/10 ).

Therefore,

(A + Π)⁻¹ = ( 1  0    0      0
              0  1    0      0
              0  0  22/25   3/25
              0  0   2/25  23/25 ).

From this we have the potential of the Markov chain ξn in the form of the generalized inverse operator of A:

R0 = (A + Π)⁻¹ − Π = (  0  0    0       0
                       −1  1    0       0
                        0  0  12/25  −12/25
                        0  0  −8/25    8/25 ).
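The computations of example 1.1 are easy to reproduce numerically; a minimal sketch (plain NumPy, matrix entries taken from the example above):

```python
import numpy as np

P = np.array([[1, 0, 0,    0   ],
              [1, 0, 0,    0   ],
              [0, 0, 0.25, 0.75],
              [0, 0, 0.5,  0.5 ]])
A = np.eye(4) - P                        # infinitesimal matrix of the chain
Pi = np.array([[1, 0, 0,   0  ],
               [1, 0, 0,   0  ],
               [0, 0, 0.4, 0.6],
               [0, 0, 0.4, 0.6]])        # proper projector of A
R0 = np.linalg.inv(A + Pi) - Pi          # potential of the Markov chain

# properties from lemma 1.4: A R0 = R0 A = I - Pi and R0 Pi = Pi R0 = 0
assert np.allclose(A @ R0, np.eye(4) - Pi)
assert np.allclose(R0 @ A, np.eye(4) - Pi)
assert np.allclose(R0 @ Pi, 0)
```

The resulting `R0` coincides with the matrix computed by hand in the example.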

1.3. Markov processes: operator semigroups

In this section, we study elementary concepts of semigroups of operators generated by Markov processes. We also define the infinitesimal operator and the


stationary distribution of Markov processes. These notions will be used for the asymptotic analysis (large deviations) of semi-Markov processes.

Denote by (Ω, F, P) a probability space and by (X, Σ) a complete separable metric space, where Σ is a σ-algebra of Borel subsets of X. Let {ξ(t), t ≥ 0} be a homogeneous Markov process on (Ω, F, P) with the phase space (X, Σ). Denote by P(t, x, B) = P{ξ(t) ∈ B | ξ(0) = x} the transition probability of ξ(t). It is well known that for every t ∈ T the probability P(t, x, B) is a stochastic kernel, i.e. for a fixed x ∈ X, P(t, x, B) is a measure on Σ with P(t, x, X) = 1, and for a fixed B, P(t, x, B) is a measurable function with respect to x.

Denote by B(Σ) the set of bounded and Σ-measurable functions on X. Consider the family of operators generated by the transition probabilities: for a function f ∈ B(Σ), we define

Tt f(x) = ∫_X f(y) P(t, x, dy).

It is easy to see that Tt : B(Σ) → B(Σ), and it follows from the Chapman–Kolmogorov equation that Ts+t = Ts Tt. Thus, the family of operators Tt, t ∈ T, is a semigroup.

Let D be a subset of B(Σ) such that the following limits exist for any ϕ ∈ D:

Aϕ = lim_{Δt→0+} (T_{Δt} ϕ − ϕ)/Δt,    lim_{Δt→0+} T_{Δt} ϕ = ϕ.

The operator A is called the infinitesimal operator of the semigroup of operators Tt and of the Markov process ξ(t) (Gikhman and Skorokhod 1975; Kovalenko 1980; Kovalenko et al. 1983), respectively.

DEFINITION 1.7.– It is said that ξ(t) has the stationary distribution π if for any B ∈ Σ

π(B) = ∫_X π(dx) P(t, x, B),    t ≥ 0.
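For a finite phase space, the semigroup is simply the matrix exponential of the generator, and the semigroup property and the stationarity condition can be checked directly. A small illustration (a hypothetical two-state generator, not from the book; plain NumPy, with e^{tA} computed by diagonalization):

```python
import numpy as np

# generator of a two-state jump Markov process (illustrative rates)
A = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def Tt(t):
    """Semigroup T_t = e^{tA}, computed by diagonalizing the generator A."""
    return ((V * np.exp(w * t)) @ Vinv).real

# stationary distribution: pi A = 0, components summing to 1
pi = np.array([1/3, 2/3])
```

Here T_{s+t} = T_s T_t (Chapman–Kolmogorov) and π T_t = π hold up to floating-point error.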


The stationary projector Π of ξ(t) is defined as follows:

Πf(x) = ∫_X π(ds) f(s) I(x),

where I(x) = 1 for all x ∈ X.

DEFINITION 1.8.– A Markov process ξ(t) is said to be uniformly ergodic if ‖Tt − Π‖ → 0 as t → ∞.

It is well known (Korolyuk and Turbin 1976, 1993) that the infinitesimal operator A of a uniformly ergodic Markov process ξ(t), t ≥ 0, is reducible-invertible. Thus, there exists the inverse operator (A + Π)⁻¹. Moreover, the operator R0 = (A + Π)⁻¹ − Π is called the potential of the stochastic process ξ(t).

Let us define Q = I − Π. Then the potential R0 satisfies the following properties (Korolyuk and Turbin 1976, 1993):
1) R0 Π = ΠR0 = 0;
2) R0 Q = QR0 = R0;
3) R0 A = AR0 = Q.

DEFINITION 1.9.– The number ω = lim_{t→∞} (1/t) ln ‖Tt‖ is called the type of a semigroup Tt.

The following property of the resolvent R(λ, A) of the operator A is important (Korolyuk and Turbin 1993):

THEOREM 1.2.– If the semigroup Tt is of type ω, then for any λ such that Re λ > ω we have

R(λ, A) f := (λI − A)⁻¹ f = ∫_0^∞ e^{−λt} T(t) f dt,    f ∈ B(Σ).

The resolvent of a reducible-invertible operator A has the following representation for 0 < |λ| < 1/‖R0‖ (Korolyuk and Turbin 1993, 1982):

R(λ, A) = (1/λ) Π + R0 (λR0 − I)⁻¹.

Hence, it follows that for a reducible-invertible operator A and 0 < |λ| < 1/‖R0‖,

ΠR(λ, A) = (1/λ) Π,    QR(λ, A) = R0 (λR0 − I)⁻¹.


Now, let us define Tt − Π = H(t). For a uniformly ergodic Markov process, the integral ∫_0^∞ H(t) dt converges and (Sato 1971):

lim_{λ↓0} ∫_0^∞ e^{−λt} H(t) dt = ∫_0^∞ H(t) dt.

It can be proved (Kato 1980; Korolyuk and Turbin 1993) that if, for all f ∈ D(R0), ∫_0^{+∞} ‖(Tt − Π) f‖ dt < +∞, then the following equality holds:

∫_0^{+∞} [Tt − Π] f dt = ∫_0^∞ H(t) f dt = −R0 f.
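This identity can be verified numerically for a finite ergodic chain: with H(t) = T_t − Π, a quadrature of ∫_0^∞ H(t) dt should reproduce −R0. A sketch under illustrative assumptions (a hypothetical two-state generator; plain NumPy):

```python
import numpy as np

A = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])            # generator (illustrative rates)
pi = np.array([1/3, 2/3])               # its stationary distribution
Pi = np.ones((2, 1)) * pi               # stationary projector: every row is pi
R0 = np.linalg.inv(A + Pi) - Pi         # potential of the process

# H(t) = T_t - Pi, with T_t = e^{tA} computed by diagonalizing A
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
ts = np.linspace(0.0, 10.0, 20001)
H = np.array([((V * np.exp(w * t)) @ Vinv).real - Pi for t in ts])

# trapezoidal quadrature of the matrix-valued integrand
dt = ts[1] - ts[0]
integral = (H[0] / 2 + H[1:-1].sum(axis=0) + H[-1] / 2) * dt
```

The tail beyond t = 10 is negligible here because ‖H(t)‖ decays like e^{−3t} for this generator, so the finite quadrature already matches −R0 to high accuracy.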

1.4. Semi-Markov processes

This section is devoted to providing a constructive definition of a semi-Markov process, which is based upon the notion of an MRP, and of a semi-Markov kernel, which is a crucial part of the definition of a semi-Markov process. In addition, we introduce an auxiliary process useful in reducing a semi-Markov process to a Markov process with the standard phase space extension. For this Markov process, the infinitesimal operator is presented.

DEFINITION 1.10.– A semi-Markov kernel in a measurable space (X, Σ) is defined by a function Q(x, B, t), x ∈ X, B ∈ Σ, t ∈ [0, +∞], which satisfies the following conditions:
1) Q(x, B, t) is non-decreasing and right-continuous with respect to t ≥ 0 for fixed x ∈ X and B ∈ Σ, and Q(x, B, 0) = 0 for x ∉ B;
2) for a fixed t ≥ 0, Q(x, B, t) is a semi-stochastic kernel, i.e. Q(x, X, t) ≤ 1 for all x ∈ X;
3) Q(x, B, +∞) is a stochastic kernel with respect to x and B, i.e. Q(x, X, +∞) = 1.

DEFINITION 1.11.– A two-component Markov chain {ξn, θn; n ≥ 0} on the phase space X × [0, +∞) is said to be an MRP if its transition probabilities depend only on the value of the first component ξn and are defined by a semi-Markov kernel Q(x, B, t) as follows:

P{ξn+1 ∈ B, θn+1 ≤ t | ξn = x} = Q(x, B, t).

The first component {ξn, n ≥ 0} of {ξn, θn; n ≥ 0} is a Markov chain with the transition probabilities P(x, B) = Q(x, B, +∞), and it is called the embedded Markov chain.


The non-negative random variables θn, n ≥ 0, in {ξn, θn; n ≥ 0} are the sojourn times of the MRP, and

τn = Σ_{k=1}^{n} θk,  n ≥ 1,  τ0 = θ0 = 0.

The distribution function of θn+1 depends on the state ξn as follows:

Gx(t) = P{θn+1 ≤ t | ξn = x} = Q(x, X, t).

A random variable θn+1 with the distribution function Gx(t) is interpreted as the holding time of the MRP in a state ξn = x, and it is convenient to denote it by θx. Thus, to complete the constructive definition of the MRP, it is necessary to define a semi-Markov kernel. Since for all t ≥ 0 and x ∈ X we have Q(x, B, t) ≤ P(x, B) for all B ∈ Σ, the measure Q is absolutely continuous with respect to P. Consequently, there exists a measurable function G(x, y, t) such that

Q(x, B, t) = ∫_B G(x, y, t) P(x, dy).

The function G(x, y, t) is the conditional distribution of θn+1 given that the embedded Markov chain transits from x to y:

G(x, y, t) = P{θn+1 ≤ t | ξn = x, ξn+1 = y}.

For a fixed trajectory of the Markov chain {ξn, n ≥ 0}, the random variables θn, n ≥ 0, are independent, and it can be said that they are conditionally independent. Indeed, by using the formula of total probability, we have

P{θ1 ≤ t1, ..., θn+1 ≤ tn+1 | ξ0 = x0, ξ1 = x1, ..., ξn+1 = xn+1} = ∏_{k=0}^{n} G(xk, xk+1, tk+1).    [1.4]

The conditional independence of θn, n ≥ 0, for a fixed trajectory of the Markov chain {ξn, n ≥ 0} makes it possible to give another definition of the MRP {ξn, θn, n ≥ 0}, which is based on a sequence of non-negative independent random variables {θn, n ≥ 0}. These random variables are defined on the Markov chain {ξn, n ≥ 0} by the joint distribution [1.4]. It was shown in Korolyuk and Turbin (1976) and Silvestrov (1980) that if X is a countable set then, without loss of generality, we may assume that G(x, y, t) does not depend on y, i.e. the semi-Markov kernel is of the following form:

Q(x, B, t) = P(x, B) Gx(t),

where Gx(t) = P(θx ≤ t | ξn = x) and θx is a sojourn time of the MRP at state x.
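This constructive definition translates directly into a simulator: from state ξn, draw the holding time θn+1 from G_{ξn} and the next state from P(ξn, ·). A minimal sketch for a finite state space with exponential holding times (the transition probabilities, rates and helper name `simulate_mrp` are illustrative, not from the book; standard-library Python only):

```python
import random

# embedded chain P(x, .) and exponential holding times G_x(t) = 1 - e^{-q[x] t}
# (illustrative three-state example)
P = {0: [(1, 1.0)],
     1: [(0, 0.3), (2, 0.7)],
     2: [(1, 0.5), (2, 0.5)]}
q = {0: 2.0, 1: 1.0, 2: 0.5}

def simulate_mrp(x0, n_steps, seed=1):
    """Sample an MRP path [(xi_n, theta_n)]: theta_{n+1} is the sojourn time
    in state xi_n, after which the embedded chain jumps according to P(xi_n, .)."""
    rng = random.Random(seed)
    path, x = [(x0, 0.0)], x0              # tau_0 = theta_0 = 0 by convention
    for _ in range(n_steps):
        theta = rng.expovariate(q[x])      # holding time with distribution G_x
        states, probs = zip(*P[x])
        x = rng.choices(states, weights=probs)[0]   # embedded-chain step
        path.append((x, theta))
    return path
```

Because the kernel factorizes as Q(x, B, t) = P(x, B) Gx(t), the next state and the holding time can be drawn independently, which is exactly what the loop does.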


Random Motions in Markov and Semi-Markov Random Environments 1

Let us introduce a counting process $\{\nu(t)\}$ as follows: $\nu(t) = \sup\{n : \tau_n \le t\}$. The process $\nu(t)$ counts the number of renewal occurrences in the interval $[0, t]$.

LEMMA 1.5.– The process $\xi(t) := \xi_{\nu(t)}$, $t \ge 0$, is semi-Markov.

If, in addition, for every $t \ge 0$, $P\{\nu(t) < \infty\} = 1$, then the process $\xi_{\nu(t)}$ is called regular. For a semi-Markov process $\xi(t)$, define $\tau(t) = t - \sup\{u \le t : \xi(u) \ne \xi(t)\}$. The process $\tau(t)$ is called an auxiliary process. Let us consider the bivariate process $\zeta(t) = (\xi(t), \tau(t))$ on the phase space $X \times [0, +\infty)$. It is well known that the process $\zeta(t)$ is a Markov process (Gikhman and Skorokhod 1975; Corlat et al. 1991; Korolyuk and Turbin 1982). Let $\varphi(x, t)$, $x \in X$, $t \in [0, +\infty)$, be a $\Sigma \times R_+$-measurable function differentiable with respect to $t$. Then the infinitesimal operator of $\zeta(t)$ is of the following form (Corlat et al. 1991; Korolyuk and Turbin 1982):
$$A\varphi(x, t) = \frac{d}{dt}\varphi(x, t) + \frac{g_x(t)}{1 - G_x(t)}\left[P\varphi(x, 0) - \varphi(x, t)\right],$$
where $G_x(t) = P(\theta_x < t)$, $g_x(t) = \frac{d}{dt}G_x(t)$ and $P(x, dy)$ is the transition probability of the Markov chain $\{\xi_n, n \ge 0\}$. In addition,
$$P\varphi(x, 0) = \int_X P(x, dy)\,\varphi(y, 0).$$

A jump Markov process is a particular case of a semi-Markov process, with semi-Markov kernel
$$Q(x, B, t) = P(x, B)\left(1 - e^{-q(x)t}\right), \quad q(x) > 0,\ t \ge 0.$$
The function $q(x)$, $x \in X$, determines the intensity of staying at a particular state $x$. The infinitesimal operator of the jump Markov process is given as (Korolyuk 1993; Korolyuk and Korolyuk 1999):
$$A\varphi(x) = q(x)\left[\int_X \varphi(y)\, P(x, dy) - \varphi(x)\right], \quad x \in X,$$

where ϕ is a real Σ-measurable function, which is bounded on X.
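On a finite state space, the operator $A$ is just the matrix $\mathrm{diag}(q)(P - I)$ acting on the column vector $(\varphi(1), \ldots, \varphi(M))$. A quick numerical sketch; the matrix $P$ and the intensities $q$ are illustrative assumptions:

```python
import numpy as np

# A phi(x) = q(x) * ( sum_y P(x, y) phi(y) - phi(x) ), i.e. A = diag(q) (P - I)
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])   # embedded-chain transition probabilities (illustrative)
q = np.array([2.0, 1.0])     # intensities q(x) > 0

A = np.diag(q) @ (P - np.eye(2))

phi = np.array([1.0, 3.0])
print(A @ phi)               # action of the infinitesimal operator on phi: [ 4. -1.]
```

Each row of $A$ sums to zero, as it must for the generator of a conservative jump Markov process.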


1.5. Lumped Markov chains

We will describe a phase merging or state lumping scheme for Markov chains. This scheme applies to the investigation of the probability distribution of reaching a "hard-to-reach domain" by semi-Markov processes. It was developed and introduced in the seminal works of Korolyuk and Turbin (1976, 1982). Lumping of states of the phase space of a Markov chain is described in Korolyuk et al. (1979a,b); we outline its basic issues here in order to use it as an approach for investigating the probability distribution of reaching a level that goes to infinity.

Let $\{\xi_n, n \ge 0\}$ be a homogeneous Markov chain with the phase or state space $E = \{1, 2, \ldots, M\}$ and the matrix of transition probabilities $P = \{p_{ij},\ i, j \in E\}$. We assume that all states of the Markov chain communicate, i.e. they are accessible to each other: $\forall\, i, j \in E$, $\exists\, l \ge 1$ such that $p_{ij}^{(l)} > 0$. Set $E_0 = \{1, 2, \ldots, m\}$, $m < M$, and $E_1 = E \setminus E_0$. Consider the following random time instants:
$$\nu_0 = 0, \qquad \nu_k = \min\{n > \nu_{k-1} : \xi_n \in E_0\}, \quad k \ge 1.$$
Now, we introduce the sequence $\xi_k^{E_0} = \xi_{\nu_k}$, $k \ge 0$, $\xi_0^{E_0} = \xi_0 \in E_0$. It is easily verified that $\xi_k^{E_0}$, $k \ge 0$, is a homogeneous Markov chain with the state space $E_0 = \{1, 2, \ldots, m\}$ and transition probabilities matrix $P^{E_0} = \{p_{ij}^{E_0},\ i, j \in E_0\}$. Moreover,
$$P^{E_0} = P_{00} + P_{01}\,(I - P_{11})^{-1}\,P_{10},$$
where $P_{kl}$, $k, l = 0, 1$, are the matrices with elements $p_{ij}$, $i \in E_k$, $j \in E_l$. Since the matrix $P_{11}$ is not stochastic by assumption, the inverse matrix $(I - P_{11})^{-1} = \sum_{k=0}^{\infty} P_{11}^k < \infty$ exists (Kemeny and Snell 1960).

DEFINITION 1.12.– The Markov chain $\xi_k^{E_0} = \xi_{\nu_k}$ with transition probabilities matrix $P^{E_0}$ is called the lumped Markov chain; it is obtained from the chain $\xi_n$ by state lumping from $E$ to $E_0$.
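The formula $P^{E_0} = P_{00} + P_{01}(I - P_{11})^{-1}P_{10}$ is straightforward to evaluate numerically: split $P$ into the blocks $P_{kl}$ and apply it directly. The 3-state chain below is an illustrative assumption; the assertions confirm that the lumped matrix is again stochastic.

```python
import numpy as np

# Full chain on E = {1, 2, 3}; lump onto E0 = {1, 2}, so E1 = {3}.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.6, 0.2, 0.2]])
m = 2                                    # |E0|

P00, P01 = P[:m, :m], P[:m, m:]
P10, P11 = P[m:, :m], P[m:, m:]

# P^{E0} = P00 + P01 (I - P11)^{-1} P10
P_E0 = P00 + P01 @ np.linalg.inv(np.eye(len(P) - m) - P11) @ P10
print(P_E0)
```

Here $(I - P_{11})^{-1}$ exists because $P_{11}$ is strictly substochastic, exactly as in the text.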


Let $\{\xi_n, n \ge 0\}$ be a homogeneous Markov chain with the state space $E = \{1, 2, \cdots\}$, which satisfies the following condition:

C1. The Markov chain $\{\xi_k\}$ is irreducible (Meyn and Tweedie 1993) and has a stationary distribution $\rho = \{\rho_i, i \in E\}$. In addition, set
$$\varepsilon_n = \sum_{i > n} \rho_i > 0.$$

Denote by $e_n$ the new state obtained after merging the set of states $E^n = \{n+1, n+2, \ldots\}$ (Korolyuk et al. 1979a,b); i.e. the transition probabilities of the lumped Markov chain $\{\tilde{\xi}_m,\ m \in \mathbb{N}\}$ on the state space $E_n \cup \{e_n\}$, where $E_n = \{1, 2, \ldots, n\}$, are as follows:
$$\tilde{p}_{ij} = \begin{cases} p_{ij}, & i, j \in E_n, \\ \sum_{k \in E^n} p_{ik}, & i \in E_n,\ j = e_n, \\ \frac{1}{\varepsilon_n}\sum_{k \in E^n} \rho_k\, p_{kj}, & i = e_n,\ j \in E_n, \\ 1 - \sum_{l \in E_n} \tilde{p}_{e_n l}, & i = e_n,\ j = e_n. \end{cases}$$
It is easily seen that the stationary distribution $\rho^{(n)}$ of the chain $\{\tilde{\xi}_m\}$ is of the form $\rho^{(n)} = (\rho_1, \rho_2, \cdots, \rho_n, \varepsilon_n)$.

Consider a Markov process $\{\xi_\varepsilon(t), t \ge 0\}$ on the state space $(X, \Sigma)$, where $\Sigma$ is a $\sigma$-algebra of subsets of $X$. We introduce the following finite partition of the state space: $X = \cup_{i=1}^r X_i$, $X_i \in \Sigma$, where $X_i \cap X_j = \emptyset$, $i \ne j$. The process $\xi_\varepsilon(t)$ depends on $\varepsilon > 0$ (the small series parameter), and its infinitesimal operator $A_\varepsilon$ is of the form $A_\varepsilon = A + \varepsilon A_1$, where $A$ is the infinitesimal operator of the support Markov process $\xi_0(t)$, $t \ge 0$, obtained from $\xi_\varepsilon(t)$ by letting $\varepsilon = 0$. We assume that $\xi_0(t)$ is uniformly ergodic in $X$ with transition probabilities $P(x, B)$, $x \in X$, $B \in \Sigma$, which satisfy the condition
$$P(x, X_i) = I_i(x) = \begin{cases} 1, & x \in X_i, \\ 0, & x \notin X_i. \end{cases}$$
We also assume that, for all $i = 1, 2, \ldots, r$, there is no proper subset $B_i \subset X_i$ such that $P(x, B_i) = 1\ \forall\, x \in B_i$ and $P(x, X_i \setminus B_i) = 1\ \forall\, x \in X_i \setminus B_i$. Denote by $\pi_i(B)$, $B \in \Sigma$, $\pi_i(X_i) = 1$, $1 \le i \le r$, the stationary distribution of the support process $\xi_0(t)$. As mentioned above, the infinitesimal operator $A$ of $\xi_0(t)$ is reducible-invertible (Korolyuk and Turbin 1993).


The kernel of $A$, say $\ker(A)$, has dimension $r$, and its proper projector $\Pi$ is as follows:
$$\Pi f(x) = \sum_{i=1}^{r} \left[\int_{X_i} f(y)\,\pi_i(dy)\right] I_i(x).$$

The contracted operator $\hat{A}_1$ is defined by $\Pi A_1 \Pi = \hat{A}_1 \Pi$. It was shown in Korolyuk and Turbin (1976) that the operator $\hat{A}_1$ can be represented in the matrix form $\hat{A}_1 = [a_{ij},\ i, j \in \{1, 2, \ldots, r\}]$, and that it is the infinitesimal operator of the new merged or lumped Markov process $\hat{\xi}(t)$, $t \ge 0$, with the state space $\hat{X} = \{1, 2, \ldots, r\}$.

Now, we define the following merging function: $k(x) = i$ for $x \in X_i$. It was proved in Korolyuk and Turbin (1976) and Korolyuk and Turbin (1982) that $k(\xi_\varepsilon(t/\varepsilon))$ converges weakly to $\hat{\xi}(t)$ as $\varepsilon \to 0$ for all $t > 0$, i.e. for all $t > 0$:
$$P\{k(\xi_\varepsilon(t/\varepsilon)) < x\} \to P\{\hat{\xi}(t) < x\} \quad \text{as } \varepsilon \to 0,\ x \in R.$$

1.6. Switched processes in Markov and semi-Markov media

In this section, we introduce the switched process driven by a Markov or semi-Markov process, which represents the stochastic media where the switched process evolves. The semigroup of operators associated with a switched process is defined and its infinitesimal operator is represented. In addition, we consider the superposition of independent semi-Markov processes.

Denote by $\{\xi(t), t \ge 0\}$ a semi-Markov process in a standard state or phase space $(X, \Sigma)$ given by the semi-Markov kernel $Q(x, B, t) = P(x, B) G_x(t)$. The evolutionary switched process $V(t)$ in a semi-Markov media $\xi(t)$ is defined as a solution of the evolution equation (Korolyuk 1993; Korolyuk and Swishchuk 1995b; Anisimov 1977):
$$\frac{dV(t)}{dt} = C(V(t), \xi(t)), \quad V(0) = v, \qquad [1.5]$$
where $C(v, x) : R \times X \to R$ satisfies the unique-solvability condition for equation [1.5]. A sufficient condition for this is that the function $C(v, x)$ satisfies a Lipschitz condition with respect to the variable $v$, uniformly in $x \in X$.


Such a stochastic process with reflecting boundaries is a good model for a multiphase supplying system with feedback, which will be studied in Volume 1, Chapter 3. For instance, the estimation of the effectiveness of a supplying system with feedback reduces to the calculation of the stationary distribution of the switched process that models the system.

Denote by $\tau_k$, $k = 1, 2, \ldots$, the renewal instants of the semi-Markov process $\xi(t)$. Then equation [1.5] can be solved sequentially on each of the segments $[\tau_n, \tau_{n+1})$, where it is deterministic, i.e.
$$\frac{dV(t; \xi_n)}{dt} = C(V(t; \xi_n), \xi_n), \quad t \in [\tau_n, \tau_{n+1}),$$
with $\xi_n = \xi(\tau_n)$ and initial condition $V(\tau_n; \xi_n) = V(\tau_n - 0; \xi_{n-1})$, $n \ge 1$. The following recurrence relation represents a solution of equation [1.5]:
$$V(t) = V(t; \xi_n), \quad t \in [\tau_n, \tau_{n+1}),\ n \ge 0.$$
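Between renewal instants, equation [1.5] is an ordinary differential equation with the environment frozen at $\xi_n$, so a trajectory can be built segment by segment. A minimal explicit-Euler sketch; the drift $C$, the environment path and the renewal instants are illustrative assumptions:

```python
def switched_value(C, xi, taus, v0, dt=1e-3):
    """Integrate dV/dt = C(V, xi_n) on each segment [tau_n, tau_{n+1}) by Euler.

    xi   -- environment states xi_0, ..., xi_{N-1}, one per segment
    taus -- instants tau_0 = 0 < tau_1 < ... < tau_N
    V is continuous across renewal instants: each segment starts from the
    previous segment's terminal value.  Returns V(tau_N - 0).
    """
    v = v0
    for n, x in enumerate(xi):
        t, t_end = taus[n], taus[n + 1]
        while t < t_end:
            h = min(dt, t_end - t)
            v += h * C(v, x)      # drift frozen at the current environment state
            t += h
    return v

# Illustrative drift C(v, x) = -x v: exact answer exp(-1*1) * exp(-2*1) = exp(-3)
v_end = switched_value(lambda v, x: -x * v, xi=[1.0, 2.0], taus=[0.0, 1.0, 2.0], v0=1.0)
```

With this linear drift, the piecewise-deterministic solution is an exponential with a state-dependent rate on each segment, so the Euler result can be compared against $e^{-3} \approx 0.0498$.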

A solution $V(t)$ of equation [1.5] depends on $x = \xi(t)$ and $v$, i.e. $V(t) = V(t, x, v)$. On the space of bounded functions $C_b(R^n)$, we introduce a family of operators $T_t$, defined as follows:
$$T_t(x) f(v) = f(V(t)) = f(V(t, x, v)). \qquad [1.6]$$

LEMMA 1.6.– (Korolyuk 1993; Korolyuk and Swishchuk 1995b) The family of operators $T_t(x)$, $t \ge 0$, introduced in equation [1.6], satisfies the semigroup property, namely, for any $t, s \ge 0$,
$$T_t(x)\, T_s(x) = T_{t+s}(x).$$

We should mention that the process $V(t)$ is not Markov even if $\xi(t)$ is Markov. In the case of a Markov switching process $\xi(t)$, the couple of processes $(V(t), \xi(t))$ is Markov, and its infinitesimal operator is given by the following lemma:

LEMMA 1.7.– (Korolyuk 1993; Korolyuk and Korolyuk 1999) The infinitesimal operator $G(x)$ of the couple of processes $(V(t), \xi(t))$ is of the following form:
$$Gf(v, x) = Qf(v, x) + C(v, x)\frac{\partial}{\partial v} f(v, x),$$
where $f(v, x)$ is a bounded function differentiable with respect to $v$, and
$$Qf(v, x) = \int_X Q(x, dy) f(v, y) - \lambda_x f(v, x)$$
is the infinitesimal operator of the Markov process $\xi(t)$.


As we mentioned above, the process $V(t)$ with delaying barriers models a multiphase system with feedback. In order to investigate the efficiency of such systems, it is enough to calculate the stationary distribution of the couple of processes $(V(t), \xi(t))$. For this purpose, we take the operator $G^*$ adjoint to the infinitesimal operator $G$ of $(V(t), \xi(t))$, with the same boundary conditions as the process $V(t)$, and find a solution of the equation
$$G^* \rho = 0.$$
The normalized solution $\rho$ of this equation is the stationary distribution of $(V(t), \xi(t))$.

Another important result used in the study of multiphase systems is the superposition of independent semi-Markov processes. Since a semi-Markov process is completely described by its corresponding MRP, we define a superposition of semi-Markov processes by using the superposition of MRPs.

Consider $N$ independent MRPs $\{\xi_n^{(i)}, \tau_n^{(i)}\}$, where $\tau_n^{(i)}$ are the renewal instants of the $i$th MRP, and denote by $\xi^{(i)}(t)$ the corresponding semi-Markov processes. Let us introduce an auxiliary process, the residual time until the next renewal of the $i$th process:
$$\gamma^{(i)}(t) = \inf\{s > 0 : \xi^{(i)}(t + s) \ne \xi^{(i)}(t)\}.$$
Consider the processes
$$\gamma(t) = \min\{\gamma^{(1)}(t), \gamma^{(2)}(t), \cdots, \gamma^{(N)}(t)\}; \qquad \zeta^{(i)}(t) = \gamma^{(i)}(t) - \gamma(t).$$
Then the renewal instants $\{\tau_n, n \ge 1\}$ of the superposition of the MRPs are defined by the relations
$$\gamma(\tau_n - 0) = 0, \quad n \ge 1.$$
Define
$$\zeta_n^{(i)} = \gamma^{(i)}(\tau_{n-1}) - \gamma(\tau_{n-1}), \quad n \ge 1.$$

DEFINITION 1.13.– (Korolyuk and Turbin 1982) The Markov renewal process $\{\xi_n, \zeta_n\}$ with components $\xi_n = (\xi_n^{(1)}, \xi_n^{(2)}, \cdots, \xi_n^{(N)})$ and $\zeta_n = (\zeta_n^{(1)}, \zeta_n^{(2)}, \cdots, \zeta_n^{(N)})$ is called the superposition of the Markov renewal processes $\{\xi_n^{(i)}, \tau_n^{(i)}\}$, $i = 1, 2, \cdots, N$.


The superposition of processes models a multiphase system with feedback, which consists of several consecutive aggregates such that in between any two successive aggregates there is a reservoir. This type of system will be studied in Volume 1, Chapter 3.
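For $N = 2$, the construction above amounts to pooling the renewal instants of the two MRPs and tracking, at each pooled instant, how far each component process is from its own next renewal. A small sketch with fixed, illustrative renewal instants (the residual-time reading of $\gamma^{(i)}$ is the assumption here):

```python
def superpose(taus1, taus2):
    """Pooled renewal instants of the superposition of two MRPs, together with
    the residual times zeta^{(i)} of each component just after each pooled
    instant (the last instant has no successor and is skipped)."""
    merged = sorted(set(taus1) | set(taus2))
    zetas = []
    for t in merged[:-1]:
        # gamma^{(i)}(t): residual time until the next renewal of process i
        g1 = min((s for s in taus1 if s > t), default=float("inf")) - t
        g2 = min((s for s in taus2 if s > t), default=float("inf")) - t
        g = min(g1, g2)                  # residual time of the superposition
        zetas.append((g1 - g, g2 - g))   # zeta^{(i)} = gamma^{(i)} - gamma
    return merged, zetas

merged, zetas = superpose([1.0, 2.5, 4.0], [0.5, 2.5, 3.0])
```

At every pooled renewal instant, at least one component residual is zero, which is the defining relation $\gamma(\tau_n - 0) = 0$.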

2 Homogeneous Random Evolutions (HRE) and their Applications

In mathematical language, a random evolution (RE) is a solution of a stochastic operator integral equation in a Banach space, whose operator coefficients depend on random parameters. In physical language, an RE is a model for a dynamical system whose state of evolution is subject to random variations. Such systems arise in many branches of science, e.g. random Hamiltonian and Schrödinger equations with random potential in quantum mechanics, Maxwell's equation with a random refractive index in electrodynamics, the transport equation and the storage equation. In addition, there are many applications of REs in financial and insurance mathematics (Swishchuk 2000). One of the recent applications of REs is associated with geometric Markov renewal processes (GMRPs), which are regime-switching models for a stock price in financial mathematics. These models will be studied extensively in the following chapters. Another interesting application of REs is a semi-Markov risk process in insurance mathematics (Swishchuk 2000). REs are also examples of more general mathematical objects, the multiplicative operator functionals (MOFs) (Pinsky 1991; Swishchuk 1997), which are random dynamical systems in a Banach space.

An RE can be described by two mathematical objects: (1) an operator dynamical system $V(t)$ and (2) a random process $x_t$. Depending on the structure of $V(t)$ and on the properties of the stochastic process $x_t$, we have different kinds of REs: continuous, discrete, Markov, semi-Markov, etc. In this chapter, we deal with various issues related to REs, including the martingale property, the asymptotic behavior of REs, averaging, merging, diffusion approximation, normal deviations, averaging and diffusion approximation in a reducible phase space for $x_t$, and rates of convergence in the limit theorems for REs. Note that inhomogeneous REs and their applications were considered in Swishchuk (2020).

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.


2.1. Homogeneous random evolutions (HRE)

2.1.1. Definition and classification of HRE

Let $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ be a probability space, $t \in R_+ := [0, +\infty)$, let $(X, \Xi)$ be a measurable phase space and let $(B, \mathcal{B}, \|\cdot\|)$ be a separable Banach space. Let us consider a Markov renewal process $(x_n, \theta_n;\ n \ge 0)$, $x_n \in X$, $\theta_n \in R_+$, $n \ge 0$, with stochastic kernel
$$Q(x, A, t) := P(x, A) G_x(t), \quad P(x, A) := P(x_{n+1} \in A \mid x_n = x), \quad G_x(t) := P(\theta_{n+1} \le t \mid x_n = x), \qquad [2.1]$$
$x \in X$, $A \in \Xi$, $t \in R_+$. The process $x_t := x_{\nu(t)}$ is called a semi-Markov process, where $\nu(t) := \max\{n : \tau_n \le t\}$, $\tau_n := \sum_{k=0}^{n} \theta_k$, $x_n = x_{\tau_n}$, and $P\{\nu(t) < +\infty,\ \forall\, t \in R_+\} = 1$. Note that if $G_x(t) = 1 - e^{-\lambda(x)t}$, where $\lambda(x)$ is a measurable and bounded function on $X$, then $x_t$ is called a jump Markov process.

Let $\{\Gamma(x);\ x \in X\}$ be a family of operators defined on the dense subspace $B_0 \subset B$, which is a common domain for the $\Gamma(x)$, independent of $x$; these operators are non-commuting and unbounded in general, and the map $\Gamma(x)f : X \to B$ is strongly $\Xi/\mathcal{B}$-measurable for all $f \in B$, $\forall\, t \in R_+$. Also, let $\{D(x, y);\ x, y \in X\}$ be a family of bounded linear operators on $B$ such that the map $D(x, y)f : X \times X \to B$ is $\Xi \times \Xi/\mathcal{B}$-measurable, $\forall\, f \in B$.

An RE is defined as the solution of a stochastic operator integral equation in the separable Banach space $B$:
$$V(t)f = f + \int_0^t V(s)\Gamma(x_s)f\, ds + \sum_{k=1}^{\nu(t)} V(\tau_k^-)[D(x_{k-1}, x_k) - I]f, \qquad [2.2]$$

where I is an identity operator on B, τk− := τk − 0, f ∈ B. If xt in [2.1] is a Markov or semi-Markov process, then the RE in [2.2] is called a Markov or semi-Markov RE, respectively. If D(x, y) ≡ I, ∀ x, y ∈ X, then V (t) in [2.2] is called a continuous RE. If Γ(x) ≡ 0, ∀ x ∈ X, is a zero operator on B, then V (t) in [2.2] is called a jump RE. RE Vn := V (τn ) is called a discrete RE.


The operators $\Gamma(x)$, $x \in X$, describe the continuous component $V^c(t)$ of the RE $V(t)$ in [2.2], and the operators $D(x, y)$ describe its jump component $V^d(t)$. In this manner, an RE is described by two objects: (1) an operator dynamical system $V(t)$ and (2) a random process $x_t$. Note that the operator $V(t)$ turns out to be (Korolyuk and Swishchuk 1995a,b)
$$V(t) = \Gamma_{x_t}(t - \tau_{\nu(t)}) \prod_{k=1}^{\nu(t)} D(x_{k-1}, x_k)\, \Gamma_{x_{k-1}}(\theta_k), \qquad [2.3]$$
where $\Gamma_x(t)$ are the semigroups of operators in $t$ generated by the operators $\Gamma(x)$, $\forall\, x \in X$. We also note that the RE in [2.2] is usually called a discontinuous RE. Under the above-introduced conditions, the solution $V(t)$ of equation [2.2] is unique and can be represented by the product [2.3], which can be proved by a constructive method (Korolyuk and Swishchuk 1995b).

REMARK 2.1.– From the definition of REs given above, it follows that REs are further examples of MOFs, as they satisfy all the conditions for MOFs.

2.1.2. Some examples of HRE

The connection of REs with applied problems can be explained by the generality of definition [2.2]: it includes any homogeneous linear evolutionary system. For instance, if
$$\Gamma(x) := v(x)\frac{d}{dz}, \quad D(x, y) \equiv I, \quad B = C^1(R),$$
then equation [2.2] is a transport equation that describes the motion of a particle with random velocity $v(x_t)$. Thus, various interpretations of the operators $\Gamma(x)$ and $D(x, y)$ give us many realizations of REs. To illustrate the flexibility and richness of this approach, we provide some examples.

EXAMPLE 2.1. (Impulse traffic process).– Let $B = C(R)$ and let the operators $\Gamma(x)$ and $D(x, y)$ be defined as
$$\Gamma(x)f(z) := v(z, x)\frac{d}{dz}f(z), \qquad D(x, y)f(z) := f(z + a(x, y)), \qquad [2.4]$$

where the functions $v(z, x)$ and $a(x, y)$ are continuous and bounded on $R \times X$ and $X \times X$, respectively, $\forall\, z \in R$, $\forall\, x, y \in X$, $f(z) \in C^1(R) := B_0$. Then equation [2.2] takes the form
$$f(z_t) = f(z) + \int_0^t v(z_s, x_s)\frac{d}{dz}f(z_s)\, ds + \sum_{k=1}^{\nu(t)}\left[f(z_{\tau_k^-} + a(x_{k-1}, x_k)) - f(z_{\tau_k^-})\right], \qquad [2.5]$$

and the RE $V(t)$ is defined by the relation $V(t)f(z) = f(z_t)$, $z_0 = z$. Equation [2.5] is a functional one for the impulse traffic process $z_t$, which satisfies the equation
$$z_t = z + \int_0^t v(z_s, x_s)\, ds + \sum_{k=1}^{\nu(t)} a(x_{k-1}, x_k). \qquad [2.6]$$
Note that the impulse traffic process $z_t$ in [2.6] is a realization of a discontinuous RE.

EXAMPLE 2.2. (Summation on a Markov chain).– Let us assume $v(z, x) \equiv 0$, $\forall\, z \in R$, $\forall\, x \in X$, in [2.6]. Then the process
$$z_t = z + \sum_{k=1}^{\nu(t)} a(x_{k-1}, x_k) \qquad [2.7]$$
is a summation on a Markov chain $(x_n;\ n \ge 0)$, and it is a realization of a jump RE. Let $z_n := z_{\tau_n}$ in [2.7]; then the discrete process
$$z_n = z + \sum_{k=1}^{n} a(x_{k-1}, x_k)$$
is a realization of a discrete RE.

EXAMPLE 2.3. (Diffusion process in random media).– Let $B = C(R)$, $B_0 = C^2(R)$, and let $P_x(t, z, A)$ be a Markov continuous distribution function with respect to the diffusion process $\xi(t)$, i.e. the solution of the stochastic differential equation in $R$ with semi-Markov switching:
$$d\xi(t) = \mu(\xi(t), x_t)\, dt + \sigma(\xi(t), x_t)\, dw_t,$$

$$\xi(0) = z, \qquad [2.8]$$
where $x_t$ is a semi-Markov process independent of the standard Wiener process $w_t$, and the coefficients $\mu(z, x)$ and $\sigma(z, x)$ are bounded and continuous functions on $R \times X$. Let us define the following contraction semigroups of operators on $B$:
$$\Gamma_x(t)f(z) := \int_R P_x(t, z, dy)\, f(y), \quad f(y) \in B,\ x \in X. \qquad [2.9]$$


Their infinitesimal operators $\Gamma(x)$ are of the following kind:
$$\Gamma(x)f(z) = \mu(z, x)\frac{d}{dz}f(z) + \frac{1}{2}\sigma^2(z, x)\frac{d^2}{dz^2}f(z), \quad f(z) \in B_0.$$
The process $\xi(t)$ is continuous; this is the main reason why the operators $D(x, y) \equiv I$, $\forall\, x, y \in X$, are identity operators. Hence, equation [2.2] takes the form
$$f(\xi(t)) = f(z) + \int_0^t \left[\mu(\xi(s), x_s)\frac{d}{dz} + \frac{\sigma^2(\xi(s), x_s)}{2}\frac{d^2}{dz^2}\right] f(\xi(s))\, ds, \qquad [2.10]$$
and the RE $V(t)$ is defined by the relation $V(t)f(z) = E[f(\xi(t)) \mid x_s;\ 0 \le s \le t;\ \xi(0) = z]$. Equation [2.10] is a functional for the diffusion process $\xi(t)$ in [2.8] in a semi-Markov random media $x_t$. Note that the diffusion process $\xi(t)$ in [2.8] is a realization of a continuous RE.

EXAMPLE 2.4. (The geometric Markov renewal process (GMRP)).– (Swishchuk and Islam 2010). Let $(x_n, \theta_n)_{n \in Z_+}$ be a Markov renewal process on the phase space $X \times R_+$ with the semi-Markov kernel $Q(x, A, t)$, and let $x(t) := x_{\nu(t)}$ be a semi-Markov process. Let $\rho(x)$ be a bounded continuous function on $X$ such that $\rho(x) > -1$. We define a stochastic functional $S_t$ with the Markov renewal process $(x_n; \theta_n)_{n \in Z_+}$ as follows:
$$S_t := S_0 \prod_{k=0}^{\nu(t)} (1 + \rho(x_k)), \qquad [2.11]$$

where $S_0 > 0$ is the initial value of $S_t$. We call the process $(S_t)_{t \in R_+}$ in [2.11] a GMRP, by analogy with the geometric compound Poisson process
$$S_t = S_0 \prod_{k=1}^{N(t)} (1 + Y_k), \qquad [2.12]$$
where $S_0 > 0$, $N(t)$ is a standard Poisson process and $(Y_k)_{k \in Z_+}$ are independent and identically distributed (i.i.d.) random variables. The GMRP is used as a trading model in many financial applications as a pure jump model (see Swishchuk and Islam (2010) and Part 2 of Volume 2 of this book).
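A direct simulation of the GMRP [2.11] only needs the embedded chain and the sojourn times. Everything numerical below (the two-state chain, the returns $\rho(x)$, the exponential holding laws) is an illustrative assumption:

```python
import random

def gmrp_path(P, rho, mean_hold, s0, x0, horizon, rng):
    """Simulate S_t = S_0 * prod_{k=0}^{nu(t)} (1 + rho(x_k)) up to `horizon`.

    P         -- dict of dicts: embedded-chain transition probabilities
    rho       -- dict: state -> return rho(x) > -1
    mean_hold -- dict: state -> mean of an exponential sojourn time
    Returns the jump times and values of S_t.
    """
    t, x = 0.0, x0
    s = s0 * (1.0 + rho[x])                 # the product in [2.11] starts at k = 0
    path = [(0.0, s)]
    while True:
        t += rng.expovariate(1.0 / mean_hold[x])
        if t > horizon:
            return path
        ys, ps = zip(*P[x].items())
        x = rng.choices(ys, weights=ps)[0]
        s *= 1.0 + rho[x]                   # multiplicative jump at a renewal instant
        path.append((t, s))

rng = random.Random(1)
P = {"u": {"d": 1.0}, "d": {"u": 1.0}}
rho = {"u": 0.05, "d": -0.04}
path = gmrp_path(P, rho, {"u": 0.5, "d": 0.5}, 100.0, "u", 10.0, rng)
```

Since $\rho(x) > -1$, every factor $1 + \rho(x_k)$ is positive and the simulated price stays strictly positive, as a pure jump price model should.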


Let $B := C_0(R_+)$ be the space of continuous functions on $R_+$ vanishing at infinity, and let us define a family of bounded contraction operators $D(x)$ on $C_0(R_+)$:
$$D(x)f(s) := f(s(1 + \rho(x))), \quad x \in X,\ s \in R_+. \qquad [2.13]$$
With these contraction operators $D(x)$, we define the following jump semi-Markov random evolution (JSMRE) $V(t)$ of the GMRP in [2.11]:
$$V(t) = \prod_{k=0}^{\nu(t)} D(x_k) := D(x_{\nu(t)}) \circ D(x_{\nu(t)-1}) \circ \cdots \circ D(x_1) \circ D(x_0). \qquad [2.14]$$
Using [2.13], we obtain from [2.14]
$$V(t)f(s) = \prod_{k=0}^{\nu(t)} D(x_k) f(s) = f\Big(s \prod_{k=0}^{\nu(t)} (1 + \rho(x_k))\Big) = f(S_t), \qquad [2.15]$$

where $S_t$ is defined in [2.11] and $S_0 = s$.

2.1.3. Martingale characterization of HRE

Martingale methods are among the main approaches to the study of REs. The main idea is that the process
$$M_n := V_n - I - \sum_{k=0}^{n-1} E[V_{k+1} - V_k \mid \mathcal{F}_k], \quad V_0 = I, \qquad [2.16]$$
is an $\mathcal{F}_n$-martingale in $B$, where
$$\mathcal{F}_n := \sigma\{x_k, \tau_k;\ 0 \le k \le n\}, \quad V_n := V(\tau_n),$$
and $E$ is the expectation operator for the probability measure $P$. The martingale $M_n$ in [2.16] can be represented as a sum of martingale differences,
$$M_n = \sum_{k=0}^{n-1} [V_{k+1} - E(V_{k+1} \mid \mathcal{F}_k)], \qquad [2.17]$$
which gives us the possibility to calculate the weak quadratic variation
$$\langle l(M_n f) \rangle := \sum_{k=0}^{n-1} E[l^2((V_{k+1} - V_k)f) \mid \mathcal{F}_k], \qquad [2.18]$$
where $l \in B^*$, and $B^*$ is a dual space to $B$, dividing points of $B$.
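In the scalar case $B = R$, with $V_n$ a product of factors $\gamma(x_k, x_{k+1})$, we have $E[V_{k+1} \mid \mathcal{F}_k] = V_k \sum_y P(x_k, y)\gamma(x_k, y)$, and the two representations [2.16] and [2.17] of $M_n$ coincide path by path by telescoping. A numerical check on one fixed path; the chain and the factors are illustrative assumptions:

```python
import numpy as np

P = np.array([[0.3, 0.7],
              [0.6, 0.4]])        # transition probabilities (illustrative)
g = np.array([[1.1, 0.9],
              [0.8, 1.2]])        # scalar factor gamma(x, y) of the evolution

xs = [0, 1, 1, 0, 1, 0, 0, 1]     # one fixed path of the chain
V = [1.0]                          # V_0 = I (here the scalar 1)
for xk, xk1 in zip(xs, xs[1:]):
    V.append(V[-1] * g[xk, xk1])

# E[V_{k+1} | F_k] = V_k * sum_y P(x_k, y) gamma(x_k, y)
cond = [V[k] * (P[xs[k]] @ g[xs[k]]) for k in range(len(xs) - 1)]

# [2.16] and [2.17] evaluated along this path
M16 = [V[n] - 1.0 - sum(cond[k] - V[k] for k in range(n)) for n in range(len(V))]
M17 = [sum(V[k + 1] - cond[k] for k in range(n)) for n in range(len(V))]
```

The two lists agree up to rounding, which is exactly the algebraic identity $\sum_{k<n}(V_{k+1} - V_k) = V_n - V_0$ behind [2.17].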


The martingale method for obtaining limit theorems for a sequence of REs rests on the solution of the following problems: (1) the weak compactness of the family of measures generated by the sequence of REs; (2) any limit point of this family of measures is a solution of the martingale problem; (3) the solution of the martingale problem is unique. Conditions (1) and (2) guarantee the existence of a weakly converging subsequence, and condition (3) gives the uniqueness of the weak limit. It follows from (1)–(3) that the sequence of REs converges weakly to the unique solution of the corresponding martingale problem. The weak convergence of REs in a series scheme is obtained from the criterion of weak compactness of processes with values in a separable Banach space (Korolyuk and Swishchuk 1995b). The limit of a sequence of REs is obtained from the solution of the corresponding martingale problem in the form of some integral operator equations in the Banach space $B$. We also use the representation
$$V_{k+1} - V_k = [\Gamma_{x_k}(\theta_{k+1})\, D(x_k, x_{k+1}) - I]\, V_k, \quad V_k := V(\tau_k), \qquad [2.19]$$
and the following expression for the semigroups of operators $\Gamma_x(t)$ (Korolyuk and Swishchuk 1995b):
$$\Gamma_x(t)f = f + \sum_{k=1}^{n-1} \frac{t^k}{k!}\Gamma^k(x)f + \frac{1}{(n-1)!}\int_0^t (t - s)^{n-1}\,\Gamma_x(s)\Gamma^n(x)f\, ds, \qquad [2.20]$$
$\forall\, x \in X$, $\forall\, f \in \cap_{x \in X} \mathrm{Dom}(\Gamma^n(x))$. Taking into account [2.16]–[2.20], we obtain the limit theorems for REs.

In the previous section, we considered the evolution equation associated with REs by using the jump structure of the semi-Markov or jump Markov process. In order to deal with more general driving processes and to consider other applications, it is useful to reformulate the treatment of REs in terms of a martingale problem. It has been shown by Stroock and Varadhan that the entire theory of multidimensional diffusion processes (and many other continuous-parameter Markov processes) can be formulated by following this approach. Suppose that we have an evolution equation of the form
$$\frac{df}{dt} = Gf. \qquad [2.21]$$
The martingale problem is to find a Markov process $x(t)$, $t \ge 0$, and an RE $V(t)$ such that, for all smooth functions $f$,
$$V(t)f(x(t)) - \int_0^t V(s)Gf(x(s))\, ds \quad \text{is a martingale.} \qquad [2.22]$$
This gives the required solution immediately. Indeed, the operator
$$f \mapsto T(t)f := E_x[V(t)f(x(t))]$$


defines a semigroup of operators on the Banach space $B$, whose infinitesimal generator can be computed by taking the expectation:
$$E_x[V(t)f(x(t))] - f(x) = E_x\left[\int_0^t V(s)Gf(x(s))\, ds\right],$$
and
$$\lim_{t \to 0} t^{-1}\big[E_x[V(t)f(x(t))] - f(x)\big] = \lim_{t \to 0} t^{-1} E_x\left[\int_0^t V(s)Gf(x(s))\, ds\right] = Gf(x).$$

REMARK 2.2.– In the case $V(t) \equiv I$, the identity operator, the above reduces to the usual martingale problem for a Markov process (Dynkin 1991).

REMARK 2.3.– In the case $B = R$, the problem reduces to the determination of a real-valued multiplicative functional, which is related to a Feynman–Kac type formula. In the case of the one-dimensional Wiener process, a wide class of multiplicative functionals is provided by
$$V(t) = \exp\left\{\int_0^t a(x(s))\, ds + \int_0^t b(x(s))\, dw(s)\right\},$$
where $w(t)$ is a standard Wiener process.

Now, let us illustrate the martingale problem for a discontinuous RE over a jump Markov process, a diffusion process, etc.

Martingale problem for discontinuous RE over a jump Markov process

Let $x(t)$, $t \ge 0$, be a conservative regular jump Markov process on a measurable state space $(X, \Xi)$ with rate function $\lambda(x) > 0$ and a family of probability measures $P(x, dy)$. In addition, let $V(t)$ be the discontinuous RE in [2.2]. For any Borel function $f$, we have the sum
$$f(x(t)) = f(x(0)) + \sum_{0 \le s \le t} [f(x(s + 0)) - f(x(s - 0))]. \qquad [2.23]$$

From this, we see that the product $V(t)f(x(t))$ satisfies the differential equation
$$\frac{d}{dt}\,V(t)f(x(t)) = V(t)\Gamma(x(t))f(x(t)) \quad \text{if } \tau_k < t < \tau_{k+1},$$
and the jump across $t = \tau_k$ is evaluated as
$$V(t)f(x(t))\Big|_{\tau_k^-}^{\tau_k^+} = V(\tau_k^-)\left[D(x(\tau_k^-), x(\tau_k^+))f(x(\tau_k^+)) - f(x(\tau_k^-))\right],$$


leading to the equation
$$V(t)f(x(t)) = f(x) + \int_0^t V(s)\Gamma(x(s))f(x(s))\, ds + \sum_{0 \le \tau_k \le t} V(\tau_k^-)\left[D(x(\tau_k^-), x(\tau_k^+))f(x(\tau_k^+)) - f(x(\tau_k^-))\right], \qquad [2.24]$$
$x(0) = x$, $\tau_k^{\pm} := \tau_k \pm 0$. To state this in the appropriate form of the martingale problem, we use the following identity from the theory of Markov processes: for any positive Borel-measurable function $\varphi(\cdot, \cdot)$, we have
$$E_x\left[\sum_{0 \le \tau_k \le t} \varphi(x(\tau_k^-), x(\tau_k^+))\right] = E_x\left[\int_0^t \lambda(x(s)) \int_X \varphi(x(s), y)\, P(x(s), dy)\, ds\right]. \qquad [2.25]$$
Note that the difference
$$\sum_{0 \le \tau_k \le t} \varphi(x(\tau_k^-), x(\tau_k^+)) - \int_0^t \lambda(x(s))(P\varphi)(x(s))\, ds$$
is a martingale, where $P$ is the operator generated by $P(x, A)$, $x \in X$, $A \in \Xi$. Applying this to the above computations, we see that
$$V(t)f(x(t)) = f(x) + \int_0^t V(s)Gf(x(s))\, ds + Z(t), \qquad [2.26]$$
where $Z(t)$, $t \ge 0$, is a martingale and
$$Gf(x) = \Gamma(x)f + \lambda(x)\int_X [D(x, y)f(y) - f(x)]\, P(x, dy).$$
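For a finite chain and $B = R$, with $\Gamma(x)$ acting as multiplication by $a(x)$ and $D(x, y)$ as multiplication by $d(x, y)$, the generator $G$ above becomes the matrix $\mathrm{diag}(a) + \mathrm{diag}(\lambda)(D \circ P - I)$, where $\circ$ is the entrywise product. A sketch with illustrative values:

```python
import numpy as np

a = np.array([0.2, -0.1])          # Gamma(x): multiplication by a(x) (illustrative)
lam = np.array([1.0, 2.0])         # jump intensities lambda(x)
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # embedded-chain transition probabilities
D = np.array([[1.0, 0.9],
              [1.1, 1.0]])         # D(x, y): multiplication by d(x, y)

# Gf(x) = a(x) f(x) + lambda(x) * sum_y [d(x, y) f(y) - f(x)] P(x, y)
G = np.diag(a) + np.diag(lam) @ (D * P - np.eye(2))

f = np.array([2.0, 1.0])
print(G @ f)                       # action of the generator on f
```

With $a \equiv 0$ and $d \equiv 1$, this collapses to the jump Markov generator $\lambda(x)[P - I]$, consistent with Remark 2.2.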

Martingale problem for discontinuous RE over a semi-Markov process

It is well known that the process $(x(t), \gamma(t))$ (with $\gamma(t) := t - \tau_{\nu(t)}$ and $x(t)$ a semi-Markov process) is a Markov process in $X \times R_+$ with infinitesimal operator
$$\hat{Q} := \frac{d}{dt} + \frac{g_x(t)}{\bar{G}_x(t)}[P - I],$$
where $g_x(t) := dG_x(t)/dt$, $\bar{G}_x(t) := 1 - G_x(t)$, $P$ is the operator generated by $P(x, A)$, $x \in X$, $A \in \Xi$, and $P(x, A)$ and $G_x(t)$ are defined in [2.1]. In the Markov case, $G_x(t) = 1 - \exp\{-\lambda(x)t\}$, $g_x(t) = \lambda(x)\exp\{-\lambda(x)t\}$, $\bar{G}_x(t) = \exp\{-\lambda(x)t\}$,


and $g_x(t)/\bar{G}_x(t) = \lambda(x)$, $\forall\, x \in X$. Hence, $\hat{Q} = \lambda(x)[P - I]$ is the infinitesimal operator of a jump Markov process $x(t)$ in $X$. Using the reasoning [2.23]–[2.26] of the previous example, for the Markov process $y(t) := (x(t), \gamma(t))$ in $X \times R_+$, we obtain that the solution of the martingale problem is given by the operator
$$Gf(x, t) = \frac{d}{dt}f(x, t) + \Gamma(x)f(x, t) + \frac{g_x(t)}{\bar{G}_x(t)}\int_X [D(x, y)f(y, 0) - f(x, t)]\, P(x, dy),$$
and the process $y(t)$.

Martingale problem for RE over a Wiener process

Let $w(t)$, $t \ge 0$, be the Wiener process in $R^d$ and consider the linear stochastic equation
$$V(t) = I + \int_0^t V(s)\Gamma_0(w(s))\, ds + \sum_{j=1}^{d}\int_0^t V(s)\Gamma_j(w(s))\, dw_j(s),$$

where the final term is a stochastic integral of the Itô class and $\Gamma_0, \ldots, \Gamma_d$ are bounded operators on a Banach space $B$. If $f$ is any $C^2$ function, then the formula of Itô gives
$$f(w(t)) = f(w(0)) + \frac{1}{2}\int_0^t \Delta f(w(s))\, ds + \sum_{j=1}^{d}\int_0^t \frac{\partial f}{\partial w_j}(w(s))\, dw_j(s).$$

Using the stochastic product rule
$$d(Mf) = M\, df + (dM)f + (dM)(df) \qquad [2.27]$$
and rearranging terms, we have
$$V(t)f(w(t)) = f(w(0)) + \int_0^t V(s)\left(\frac{1}{2}\Delta f + \sum_{j=1}^{d}\Gamma_j\frac{\partial f}{\partial w_j} + \Gamma_0 f\right)(w(s))\, ds + Z(t),$$
where
$$Z(t) := \sum_{j=1}^{d}\int_0^t V(s)\left[\frac{\partial f}{\partial w_j}(w(s)) + \Gamma_j(w(s))f(w(s))\right] dw_j(s),$$
which is a martingale. Therefore, we have obtained the solution of the martingale problem, with the infinitesimal generator
$$Gf = \frac{1}{2}\Delta f(w) + \sum_{j=1}^{d}\Gamma_j(w)\frac{\partial f}{\partial w_j}(w) + \Gamma_0(w)f(w).$$


This corresponds to the stochastic solution of the parabolic system
$$\frac{\partial u}{\partial t} = Gu.$$

Martingale problem for RE over a diffusion process

Let $\xi(t)$, $t \ge 0$, be the diffusion process in $R$:
$$d\xi(t) = a(\xi(t))\, dt + \sigma(\xi(t))\, dw(t),$$
and consider the linear stochastic equation
$$V(t) = I + \int_0^t V(s)\Gamma_0(\xi(s))\, ds + \int_0^t V(s)\Gamma_1(\xi(s))\, d\xi(s),$$
with bounded operators $\Gamma_0$ and $\Gamma_1$ on $B$. If $f$ is any $C^2$ function, the formula of Itô gives
$$f(\xi(t)) = f(\xi(0)) + \int_0^t \left[a(\xi(s))\frac{df(\xi(s))}{d\xi} + \frac{1}{2}\sigma^2(\xi(s))\frac{d^2 f(\xi(s))}{d\xi^2}\right] ds + \int_0^t \frac{\partial f(\xi(s))}{\partial \xi}\,\sigma(\xi(s))\, dw(s).$$

Using the stochastic product rule [2.27], we have
$$V(t)f(\xi(t)) = f(\xi(0)) + \int_0^t V(s)\left[a\frac{df}{d\xi} + 2^{-1}\sigma^2\frac{d^2 f}{d\xi^2} + \Gamma_1\frac{df}{d\xi} + \Gamma_0 f\right](\xi(s))\, ds + Z(t),$$
where
$$Z(t) := \int_0^t V(s)\left[\frac{df}{d\xi}\sigma + \Gamma_1 f\right](\xi(s))\, dw(s),$$
which is a martingale. Therefore, we have obtained the solution of the martingale problem with the operator
$$Gf = a\frac{df}{d\xi} + \frac{1}{2}\sigma^2\frac{d^2 f}{d\xi^2} + \Gamma_1\frac{df}{d\xi} + \Gamma_0 f.$$

Other solutions of martingale problems for RE will be obtained in the limit theorems for RE.


2.1.4. Analogue of Dynkin's formula for HRE

Let $x(t)$, $t \ge 0$, be a strongly measurable strong Markov process, let $V(t)$ be an MOF of $x(t)$ (Pinsky 1991; Swishchuk 1997), let $A$ be the infinitesimal operator of the semigroup
$$(T(t)f)(x) := E_x[V(t)f(x(t))], \qquad [2.28]$$
and let $\tau$ be a stopping time for $x(t)$. It is known (Swishchuk 1997) that if $Ah = g$ and $E_x[\tau] < +\infty$, then
$$E_x[V(\tau)h(x(\tau))] - h(x) = E_x\left[\int_0^{\tau} V(t)Ah(x(t))\, dt\right]. \qquad [2.29]$$
Formula [2.29] is an analogue of Dynkin's formula for MOFs (Swishchuk 1997). In fact, if we set $V(t) \equiv I$, the identity operator, then from [2.29] we obtain
$$E_x[h(x(\tau))] - h(x) = E_x\left[\int_0^{\tau} Qh(x(t))\, dt\right], \qquad [2.30]$$
where $Q$ is the infinitesimal operator of $x(t)$ (see [2.28]). Formula [2.30] is the well-known Dynkin formula.

Let $x(t)$, $t \ge 0$, be a continuous Markov process on $(X, \Xi)$ and $V(t)$ a continuous RE; then
$$\frac{d}{dt}V(t) = V(t)\Gamma(x(t)), \quad V(0) = I. \qquad [2.31]$$
It should be noted that the function $u(t, x) := E_x[V(t)f(x(t))]$ satisfies the following equation (Swishchuk 1997):
$$\frac{d}{dt}u(t, x) = Qu(t, x) + \Gamma(x)u(t, x), \quad u(0, x) = f(x), \qquad [2.32]$$
where $Q$ is the infinitesimal operator of $x(t)$. From [2.29] and [2.32], we obtain the analogue of Dynkin's formula for the continuous Markov RE $V(t)$ in [2.31], namely
$$E_x[V(\tau)h(x(\tau))] - h(x) = E_x\left[\int_0^{\tau} V(t)[Q + \Gamma(x(t))]h(x(t))\, dt\right]. \qquad [2.33]$$

Let $x(t)$, $t \ge 0$, be a jump Markov process with infinitesimal operator $Q$ and let $V(t)$ be the discontinuous Markov RE in [2.2]. In this case, the function $u(t, x) := E_x[V(t)f(x(t))]$ satisfies the equation (Swishchuk 1997):
$$\frac{d}{dt}u(t, x) = Qu(t, x) + \Gamma(x)u(t, x) + \lambda(x)\int_X P(x, dy)[D(x, y) - I]u(t, y), \qquad [2.34]$$


where $u(0, x) = f(x)$. From [2.29] and [2.34], we obtain the analogue of Dynkin's formula for the discontinuous Markov RE in [2.2]:
$$E_x[V(\tau)f(x(\tau))] - f(x) = E_x\left[\int_0^{\tau} V(t)\Big\{Q + \Gamma(x(t)) + \lambda(x(t))\int_X P(x(t), dy)\,(D(x(t), y) - I)\Big\} f(x(t))\, dt\right]. \qquad [2.35]$$

Let us finally assume that x(t), t ≥ 0, is a semi-Markov process, and V(t) is a semi-Markov RE in [2.2]. Let us define the process

γ(t) := t − τ_{ν(t)}.   [2.36]

Then, the process

y(t) := (x(t), γ(t))   [2.37]

is a Markov process in X × R₊ with infinitesimal operator (Korolyuk and Swishchuk 1995b)

Q̂ := d/dt + (g_x(t)/Ḡ_x(t))[P − I],   [2.38]

where g_x(t) := dG_x(t)/dt, Ḡ_x(t) := 1 − G_x(t), and P is the operator generated by the kernel P(x, A). Hence, the process (V(t)f; x(t); γ(t); t ≥ 0) ≡ (V(t)f; y(t); t ≥ 0) in B × X × R₊ is a Markov process with infinitesimal operator

L(x) := Q̂ + Γ(x) + (g_x(t)/Ḡ_x(t))∫_X P(x, dy)[D(x, y) − I],   [2.39]

where Q̂ is defined in [2.38].

Let f(x, t) be a function on X × R₊ bounded in x and differentiable in t, and let τ be a stopping time for y(t) = (x(t), γ(t)). Then for the semi-Markov RE V(t) in [2.2] we have from [2.29] and [2.36]–[2.39] the following analogue of Dynkin's formula:

E_y[V(τ)f(y(τ))] − f(y) = E_y[∫_0^τ V(t){Q̂ + Γ(x(t)) + (g_{x(t)}(t)/Ḡ_{x(t)}(t))∫_X P(x(t), dy)[D(x(t), y) − I]}f(y(t)) dt],   [2.40]

where y := y(0) = (x, 0) and f(y) = f(x, 0).


2.1.5. Boundary value problems for HRE

Let x(t), t ≥ 0, be a continuous Markov process in a semicompact state space (X, Ξ). Let V(t) be a continuous Markov RE in [2.31], and let G be an open set satisfying

∀ x ∈ G there exists U ∈ Ξ such that E_x[τ_U] < +∞, where τ_U := inf{t : x(t) ∉ U}, and P_x[τ_G = +∞] = 0, ∀ x ∈ X.   [2.41]

If f(x) is a bounded measurable function on ∂G (the boundary of G) and the function

b(x) := E_x[V(τ_G)f(x(τ_G))]   [2.42]

is continuous on X, then b(x) is the solution of the equation (Swishchuk 1997):

Qb(x) + Γ(x)b(x) = 0,  ∀ x ∈ G,   [2.43]

where Q is the infinitesimal operator of x(t). If the function

H(x) := E_x[∫_0^{τ_G} V(t)g(x(t)) dt]   [2.44]

is continuous and bounded, then this function satisfies the following equation (Swishchuk 1997):

QH(x) + Γ(x)H(x) = −g(x),  ∀ x ∈ X.   [2.45]

It follows from [2.41]–[2.44] that the boundary value problem

QH(x) + Γ(x)H(x) = −g(x),  H(x)|_{∂G} = f(x)   [2.46]

has the following solution:

H(x) = E_x[∫_0^{τ_G} V(s)g(x(s)) ds] + E_x[V(τ_G)f(x(τ_G))].   [2.47]

Now, let x(t), t ≥ 0, be a jump Markov process in (X, Ξ), let V(t) be a discontinuous Markov RE in [2.2] and let conditions [2.41] be satisfied. It follows from [2.41]–[2.47] that the boundary value problem

QH(x) + Γ(x)H(x) + ∫_X P(x, dy)[D(x, y) − I]H(y) = −g(x),  H(x)|_{∂G} = f(x)

has the solution

H(x) = E_x[∫_0^{τ_G} V(s)g(x(s)) ds] + E_x[V(τ_G)f(x(τ_G))].
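As a toy discrete illustration of [2.44]–[2.47] (our sketch, not from the book), take Γ ≡ 0, so that V(t) ≡ I, replace the continuous Markov process by a simple symmetric random walk on {0, …, 5}, and take G = {1, …, 4} with g ≡ 1. Then H(x) = E_x[τ_G] solves the discrete analogue (P − I)H = −1 on G with H = 0 on ∂G = {0, 5}, whose exact solution is H(x) = x(5 − x):

```python
import random

def mean_exit_time(x0, n_paths=20000, seed=7):
    """Monte Carlo estimate of H(x0) = E_x0[tau_G] for the walk on {0,...,5}."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x, steps = x0, 0
        while 0 < x < 5:                 # until the boundary {0, 5} is hit
            x += rng.choice((-1, 1))
            steps += 1
        total += steps
    return total / n_paths

for x in range(6):
    print(x, mean_exit_time(x), x * (5 - x))   # estimate vs exact H(x) = x(5 - x)
```

The same probabilistic representation, with a nontrivial multiplicative functional V, is what [2.47] asserts in the operator setting.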


2.2. Limit theorems for HRE

The main approach to the investigation of limit theorems for SMRE is the martingale method. The martingale method for obtaining the limit theorems (averaging and diffusion approximation) for a sequence of SMRE is based on the solution of the following problems:

1) weak compactness of the family of measures generated by the sequence of SMRE;

2) any limiting point of this family of measures is the solution of a martingale problem;

3) the solution of the martingale problem is unique.

Conditions (1) and (2) guarantee the existence of a weakly converging subsequence, and condition (3) gives the uniqueness of the weak limit. From (1)–(3), it follows that the SMRE converges weakly to the unique solution of the martingale problem.

2.2.1. Weak convergence of HRE

A weak convergence of SMRE in a series scheme can be obtained from the criterion of weak compactness of processes with values in a separable Banach space (Korolyuk and Swishchuk 1995b). The limit SMRE that we obtain from the solution of the corresponding martingale problem is a kind of integral operator equation in the Banach space B. The main idea is that the process

M_n := V_n − I − Σ_{k=0}^{n−1} E[(V_{k+1} − V_k) | F_k],  V_0 = I,   [2.48]

is an F_n-martingale in B, where

F_n := σ{x_k, τ_k; 0 ≤ k ≤ n},  V_n := V(τ_n),

and E is the expectation by the probability P on a probability space (Ω, F, P). The representation of the martingale M_n in the form of martingale differences

M_n = Σ_{k=0}^{n−1} [V_{k+1} − E(V_{k+1} | F_k)]   [2.49]


gives us the possibility to calculate the weak quadratic variation:

⟨l(M_n f)⟩ := Σ_{k=0}^{n−1} E[l²((V_{k+1} − V_k)f) | F_k],   [2.50]

where l ∈ B*, and B* is the dual space to B, dividing points of B. From [2.19], it follows that

V_{k+1} − V_k = [Γ_{x_k}(θ_{k+1})D(x_k, x_{k+1}) − I]·V_k.   [2.51]

Note that the following expansion for the semigroup of operators Γ_x(t) is fulfilled:

Γ_x(t)f = f + Σ_{k=1}^{n−1} (t^k/k!)Γ^k(x)f + (1/(n−1)!)∫_0^t (t − s)^{n−1} Γ_x(s)Γ^n(x)f ds,   [2.52]

∀ x ∈ X, ∀ f ∈ ∩_x Dom(Γ^n(x)).

Taking into account [2.48]–[2.52], we obtain the above-mentioned results. To be more precise, we have supposed that the following conditions are satisfied everywhere:

A) there exist Hilbert spaces H and H* compactly embedded in the Banach spaces B and B*, respectively: H ⊂ B, H* ⊂ B*, where B* is the dual space to B that divides points of B;

B) the operators Γ(x) and (Γ(x))* are dissipative on the Hilbert spaces H and H*, respectively;

C) the operators D(x, y) and D*(x, y) are contractive on the Hilbert spaces H and H*, respectively;

D) (x_n; n ≥ 0) is a uniformly ergodic Markov chain with stationary distribution ρ(A), A ∈ X;

E) m_i(x) := ∫_0^∞ t^i G_x(dt) are uniformly integrable, ∀ i = 1, 2, 3, where G_x(t) := P{ω : θ_{n+1} ≤ t | x_n = x};

F) ∫_X ρ(dx)‖Γ(x)f‖^k < +∞,  ∫_X ρ(dx)‖PD_j(x, ·)f‖^k < +∞,   [2.53]

∫_X ρ(dx)‖Γ(x)f‖^{k−1}·‖PD_j(x, ·)f‖^{k−1} < +∞,   [2.54]


∀ k = 1, 2, 3, 4, f ∈ B, where P is the operator generated by the transition probabilities P(x, A) of the Markov chain (x_n; n ≥ 0):

P(x, A) := P{ω : x_{n+1} ∈ A | x_n = x},   [2.55]

and {D_j(x, y); x, y ∈ X, j = 1, 2} is a family of some closed operators.

If B := C₀(R), then H := W^{l,2}(R) is a Sobolev space (Sobolev 1991), W^{l,2}(R) ⊂ C₀(R), and this embedding is compact. The same holds for the spaces B := L₂(R) and H := W^{l,2}(R).

It follows from the conditions (A) and (B) that the operators Γ(x) and (Γ(x))* generate strongly continuous contractive semigroups of operators Γ_x(t) and Γ*_x(t), ∀ x ∈ X, in H and H*, respectively. From the conditions (A)–(C), it follows that the SMRE V(t) in (1) is a contractive operator in H, ∀ t ∈ R₊, and ‖V(t)f‖_H is a semimartingale ∀ f ∈ H. Therefore, the conditions (A)–(C) supply the following result: the SMRE V(t)f is a tight process in B. Namely, ∀ Δ > 0 there exists a compact set K_Δ such that

P{V(t)f ∈ K_Δ; 0 ≤ t ≤ T} ≥ 1 − Δ.   [2.56]

This result follows from the Kolmogorov–Doob inequality (Jacod and Shiryaev 2010) for the semimartingale ‖V(t)f‖_H (Korolyuk and Swishchuk 1995b). Condition [2.56] is the main step in proving limit theorems and rates of convergence for the sequence of SMRE in a series scheme.

2.2.2. Averaging of HRE

Let us consider a SMRE in a series scheme:

V_ε(t)f = f + ∫_0^t Γ(x(s/ε))V_ε(s)f ds + Σ_{k=1}^{ν(t/ε)} [D_ε(x_{k−1}, x_k) − I] V_ε(ετ_k−)f,   [2.57]

where

D_ε(x, y) = I + εD₁(x, y) + O(ε),   [2.58]


{D₁(x, y); x, y ∈ X} is a family of closed linear operators, ‖O(ε)f‖/ε → 0 as ε → 0, ε is a small parameter, and

f ∈ B₀ := ∩_{x,y∈X} Dom(Γ²(x)) ∩ Dom(D₁²(x, y)).   [2.59]

Another form for V_ε(t) in [2.57] is

V_ε(t) = Γ_{x(t/ε)}(t − ετ_{ν(t/ε)}) Π_{k=1}^{ν(t/ε)} D_ε(x_{k−1}, x_k)Γ_{x_{k−1}}(εθ_k).   [2.60]

Under conditions (A)–(C), the sequence of SMRE V_ε(t)f is tight (see [2.56]) ρ-a.s. Under conditions (D), (E) with i = 2, and (F) with k = 2, j = 1, the sequence of SMRE V_ε(t)f is weakly compact ρ-a.s. in D_B[0, +∞) with limit points in C_B[0, +∞), f ∈ B₀.

Let us consider the following process in D_B[0, +∞):

M^ε_{ν(t/ε)} f^ε := V^ε_{ν(t/ε)} f^ε − f^ε − Σ_{k=0}^{ν(t/ε)−1} E_ρ[(V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k) | F_k],   [2.61]

where V^ε_n := V_ε(ετ_n) (see [2.19]),

f^ε := f + εf₁(x(t/ε)),  f^ε_k := f^ε(x_k),

and the function f₁(x) is defined from the equation

(P − I)f₁(x) = [(Γ̂ + D̂) − (m(x)Γ(x) + PD₁(x, ·))]f,   [2.62]

Γ̂ := ∫_X ρ(dx)m(x)Γ(x),  D̂ := ∫_X ρ(dx)PD₁(x, ·),  m(x) := m₁(x)

(see (E)), f ∈ B₀.

The process M^ε_{ν(t/ε)} f^ε is an F^ε_t-martingale with respect to the σ-algebra F^ε_t := σ{x(s/ε); 0 ≤ s ≤ t}. The martingale M^ε_{ν(t/ε)} f^ε in [2.61] has the asymptotic representation

M^ε_{ν(t/ε)} f^ε = V^ε_{ν(t/ε)} f − f − ε Σ_{k=0}^{ν(t/ε)} (Γ̂ + D̂)V^ε_k f + O_f(ε),   [2.63]


where Γ̂, D̂, f, f^ε are defined in [2.61]–[2.62], and ‖O_f(ε)‖/ε → const as ε → 0, ∀ f ∈ B₀. We have used [2.19] and [2.20] with n = 2, and the representations [2.51] and [2.61], in [2.63].

The families l(M^ε_{ν(t/ε)} f^ε) and l(Σ_{k=0}^{ν(t/ε)} E_ρ[(V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k) | F_k]) are weakly compact for all l ∈ B₀*, where B₀* is some dense subset of B*.

Let V₀(t) be a limit process for V_ε(t) as ε → 0. Since (see [2.60])

V_ε(t) − V^ε_{ν(t/ε)} = [Γ_{x(t/ε)}(t − ετ_{ν(t/ε)}) − I]·V^ε_{ν(t/ε)}   [2.64]

and the right-hand side in [2.64] tends to zero as ε → 0, it is clear that the limits for V_ε(t) and V^ε_{ν(t/ε)} are the same, i.e. V₀(t) ρ-a.s.

The sum ε Σ_{k=0}^{ν(t/ε)} (Γ̂ + D̂)V^ε_k f converges strongly as ε → 0 to the integral

(1/m)∫_0^t (Γ̂ + D̂)V₀(s)f ds.

The quadratic variation of the martingale l(M^ε_{ν(t/ε)} f^ε) tends to zero, and M^ε_{ν(t/ε)} f^ε → 0 as ε → 0, ∀ f ∈ B₀, ∀ l ∈ B₀*.

Passing to the limit in [2.63] as ε → 0 and taking into account all the previous reasoning, we obtain that the limit process V₀(t) satisfies the equation

0 = V₀(t)f − f − (1/m)∫_0^t (Γ̂ + D̂)V₀(s)f ds,   [2.65]

where m := ∫_X ρ(dx)m(x), f ∈ B₀, t ∈ [0, T].
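The averaging result [2.65] can be illustrated by a scalar sketch (ours, with made-up rates): for dV/dt = γ(x(t/ε))V driven by a two-state jump Markov process x, and with D_ε ≡ I (so D̂ = 0), the limit evolution is V₀(t) = exp{Γ̂t}, i.e. exp(γ̄t) with γ̄ the stationary average of γ.

```python
import math
import random

Q_RATES = [2.0, 3.0]                     # jump rates of states 0 and 1
GAMMA = [1.0, -0.5]                      # gamma(x) for x = 0, 1
PI = [Q_RATES[1] / sum(Q_RATES), Q_RATES[0] / sum(Q_RATES)]  # stationary law
GAMMA_BAR = PI[0] * GAMMA[0] + PI[1] * GAMMA[1]              # averaged gamma

def V_eps(t, eps, rng):
    """Solve dV/dt = gamma(x(t/eps)) V along one switching trajectory."""
    s, x, logV = 0.0, 0, 0.0
    while s < t:
        hold = eps * rng.expovariate(Q_RATES[x])  # holding time on scale t/eps
        dt = min(hold, t - s)
        logV += GAMMA[x] * dt
        s += dt
        x = 1 - x
    return math.exp(logV)

rng = random.Random(11)
est = sum(V_eps(1.0, 0.001, rng) for _ in range(400)) / 400
print(est, math.exp(GAMMA_BAR))          # the two values nearly coincide
```

For small ε the sample mean settles near exp(γ̄) = exp(0.4) ≈ 1.49 here; the fluctuations around it are of order √ε, which is exactly the normal-deviation regime of section 2.2.6.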


2.2.3. Diffusion approximation of HRE

Let us consider the SMRE V_ε(t/ε), where V_ε(t) is defined in [2.57] or [2.60], with the operators

D_ε(x, y) := I + εD₁(x, y) + ε²D₂(x, y) + O(ε²),   [2.66]

where {D_i(x, y); x, y ∈ X, i = 1, 2} are closed linear operators and ‖O(ε²)f‖/ε² → 0 as ε → 0,

∀ f ∈ B₀ := ∩_{x,y∈X} Dom(Γ⁴(x)) ∩ Dom(D₂(x, y)),

Dom(D₂(x, y)) ⊆ Dom(D₁(x, y)), ∀ x, y ∈ X;  D₁(x, y) ⊆ Dom(D₁(x, y));  Γ^i(x) ⊂ Dom(D₂(x, y)), i = 1, 2, 3.   [2.67]

Hence,

V_ε(t/ε) = Γ_{x(t/ε²)}(t/ε − ετ_{ν(t/ε²)}) Π_{k=1}^{ν(t/ε²)} D_ε(x_{k−1}, x_k)Γ_{x_{k−1}}(εθ_k),   [2.68]

where D_ε(x, y) are defined in [2.66].

Under conditions (A)–(C), the sequence of SMRE V_ε(t/ε)f is tight (see [2.56]) ρ-a.s. Under conditions (D), (E) with i = 3, and (F) with k = 4, the sequence of SMRE V_ε(t/ε)f is weakly compact ρ-a.s. in D_B[0, +∞) with limit points in C_B[0, +∞), f ∈ B₀.

Let us assume that the balance condition

∫_X ρ(dx)[m(x)Γ(x) + PD₁(x, ·)]f = 0,  ∀ f ∈ B₀,   [2.69]

is satisfied.

Let us consider the following process in D_B[0, +∞):

M^ε_{ν(t/ε²)} f^ε := V^ε_{ν(t/ε²)} f^ε − f^ε − Σ_{k=0}^{ν(t/ε²)−1} E_ρ[(V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k) | F_k],   [2.70]

where f^ε := f + εf₁(x(t/ε²)) + ε²f₂(x(t/ε²)), and the functions f₁ and f₂ are defined from the following equations:

(P − I)f₁(x) = −[m(x)Γ(x) + PD₁(x, ·)]f,   [2.71]

(P − I)f₂(x) = [L̂ − L(x)]f,  L̂ := ∫_X ρ(dx)L(x),   [2.72]

L(x) := (m(x)Γ(x) + PD₁(x, ·))(R₀ − I)(m(x)Γ(x) + PD₁(x, ·)) + m₂(x)Γ²(x)/2 + m(x)PD₁(x, ·)Γ(x) + PD₂(x, ·),

where R₀ is a potential operator of (x_n; n ≥ 0). The balance condition [2.69] and the condition Π(L̂ − L(x)) = 0 give the solvability of the equations in [2.71]–[2.72].

The process M^ε_{ν(t/ε²)} f^ε is an F^ε_t-martingale with respect to the σ-algebra F^ε_t := σ{x(s/ε²); 0 ≤ s ≤ t}.

This martingale has the asymptotic representation

M^ε_{ν(t/ε²)} f^ε = V^ε_{ν(t/ε²)} f − f − ε² Σ_{k=0}^{ν(t/ε²)−1} L̂V^ε_k f + O_f(ε),   [2.73]

where L̂ is defined in [2.72] and ‖O_f(ε)‖/ε → const as ε → 0, ∀ f ∈ B₀. We have used [2.19] and [2.20] with n = 3, and the representations [2.70] and [2.71], in [2.73].

The families l(M^ε_{ν(t/ε²)} f^ε) and l(Σ_{k=0}^{ν(t/ε²)} E_ρ[(V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k) | F_k]) are weakly compact for all l ∈ B₀*, f ∈ B₀.

Let us assume that V⁰(t) is the limit process for V_ε(t/ε) as ε → 0. From [2.60], we obtain that the limits for V_ε(t/ε) and V^ε_{ν(t/ε²)} are the same, namely, V⁰(t).

The sum ε² Σ_{k=0}^{ν(t/ε²)} L̂V^ε_k f converges strongly as ε → 0 to the integral (1/m)∫_0^t L̂V⁰(s)f ds.

Now, let us suppose that M⁰(t)f is the limit martingale for M^ε_{ν(t/ε²)} f^ε as ε → 0. Then, from [2.72]–[2.73] and the previous reasoning, we have as ε → 0:

M⁰(t)f = V⁰(t)f − f − (1/m)∫_0^t L̂V⁰(s)f ds.   [2.74]


The quadratic variation of the martingale M⁰(t)f is of the form

⟨l(M⁰(t)f)⟩ = ∫_0^t ∫_X l²(σ(x)Γ(x)V⁰(s)f) ρ(dx) ds,   [2.75]

where σ²(x) := [m₂(x) − m²(x)]/m.

The solution of the martingale problem for M⁰(t) (i.e. finding the representation of M⁰(t) with the quadratic variation [2.75]) is expressed by the integral over the Wiener orthogonal martingale measure W(dx, ds) with quadratic variation ρ(dx)·ds:

M⁰(t)f = ∫_0^t ∫_X σ(x)Γ(x)V⁰(s)f W(dx, ds).   [2.76]

Hence, the limit process V⁰(t) satisfies the following equation (see [2.74] and [2.75]):

V⁰(t)f = f + (1/m)∫_0^t L̂V⁰(s)f ds + ∫_0^t ∫_X σ(x)Γ(x)V⁰(s)f W(dx, ds).   [2.77]

If the operator L̂ generates the semigroup U(t), then the process V⁰(t)f in Cox (1976) satisfies the equation

V⁰(t)f = U(t)f + ∫_0^t ∫_X σ(x)U(t − s)Γ(x)V⁰(s)f W(dx, ds).   [2.78]

The uniqueness of the limit evolution V₀(t)f in the averaging scheme follows from equation [2.65] and the fact that if the operator Γ̂ + D̂ (see [2.62]) generates a semigroup, then V₀(t)f = exp{(Γ̂ + D̂)·t}f and this representation is unique.

The uniqueness of the limit evolution V⁰(t)f in the diffusion approximation scheme follows from the uniqueness of the solution of the martingale problem for V⁰(t)f (see [2.74]–[2.75]) (Stroock and Varadhan 1969). The latter is proved by the dual SMRE in the series scheme, by constructing the limit equation in the diffusion approximation and by using a dual identity (Korolyuk and Swishchuk 1995b).


2.2.4. Averaging of REs in reducible phase space: merged HRE

Suppose that the following conditions hold true:

a) decomposition of the phase space X (reducible phase space):

X = ∪_{u∈U} X_u,  X_u ∩ X_{u′} = ∅, u ≠ u′,   [2.79]

where (U, U) is some measurable phase space (the merged phase space);

b) the Markov renewal process (x^ε_n, θ_n; n ≥ 0) on (X, X) has the semi-Markov kernel

Q_ε(x, A, t) := P_ε(x, A)G_x(t),   [2.80]

where P_ε(x, A) = P(x, A) − ε^l P₁(x, A), x ∈ X, A ∈ X, l = 1, 2, and P(x, A) are the transition probabilities of the supporting non-perturbed Markov chain (x_n; n ≥ 0);

c) the stochastic kernel P(x, A) is adapted to the decomposition (Da Fonseca et al. 2009) in the following form:

P(x, X_u) = 1 if x ∈ X_u, and P(x, X_u) = 0 if x ∉ X_u, u ∈ U;

d) the Markov chain (x_n; n ≥ 0) is uniformly ergodic with stationary distributions ρ_u(B):

ρ_u(B) = ∫_{X_u} P(x, B)ρ_u(dx),  ∀ u ∈ U, ∀ B ∈ X;   [2.81]

e) there is a family {ρ^ε_u(A); u ∈ U, A ∈ X, ε > 0} of stationary distributions of the perturbed Markov chain (x^ε_n; n ≥ 0);

f) b(u) := ∫_{X_u} ρ_u(dx)P₁(x, X_u) > 0, ∀ u ∈ U, and

b(u, Δ) := −∫_{X_u} ρ_u(dx)P₁(x, X_Δ) > 0,  ∀ u ∉ Δ, Δ ∈ U;   [2.82]

g) the operators

Γ̂(u) := ∫_{X_u} ρ_u(dx)m(x)Γ(x)  and  D̂(u) := ∫_{X_u} ρ_u(dx)∫_{X_u} P(x, dy)D₁(x, y)   [2.83]


are closed ∀ u ∈ U with common domain B₀, and the operators Γ̂(u) + D̂(u) generate semigroups of operators ∀ u ∈ U.

Decomposition [2.79] in (a) defines the merging function

u(x) = u,  ∀ x ∈ X_u,  u ∈ U.   [2.84]

Note that the σ-algebras X and U are coordinated such that

X_Δ = ∪_{u∈Δ} X_u,  ∀ Δ ∈ U.   [2.85]

We set f̂(u) := ∫_{X_u} ρ_u(dx)f(x) and x^ε(t) := x^ε_{ν(t/ε²)}.

The SMRE in the reducible phase space X is defined by the solution of the equation

V_ε(t) = I + ∫_0^t Γ(x^ε(s/ε))V_ε(s) ds + Σ_{k=0}^{ν(t/ε)} [D_ε(x^ε_{k−1}, x^ε_k) − I]V_ε(ετ_k−),   [2.86]

where D_ε(x, y) are defined in [2.58].

Let us consider the martingale

M^ε_{ν(t/ε)} f^ε(x^ε(t/ε)) := V^ε_{ν(t/ε)} f^ε(x^ε(t/ε)) − f^ε(x) − Σ_{k=0}^{ν(t/ε)−1} E_{ρ^ε_u}[V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k | F^ε_k],   [2.87]

where F^ε_n := σ{x^ε_k, θ_k; 0 ≤ k ≤ n},

f^ε(x) := f̂(u(x)) + εf₁(x),  f̂(u) := ∫_{X_u} ρ_u(dx)f(x),   [2.88]

(P − I)f₁(x) = [−(m(x)Γ(x) + PD₁(x, ·)) + Γ̂(u) + D̂(u) + (Π_u − I)P₁]f̂(u),   [2.89]

f^ε_k := f^ε(x^ε_k),  V^ε_n := V_ε(ετ_n),

where V_ε(t) is defined in [2.86], and P₁ is the operator generated by P₁(x, A) (see [2.80]).


The following representation is true (Korolyuk and Swishchuk 1995b):

Π^ε_u = Π_u − ε^r Π_u P₁ R₀ + ε^{2r} Π^ε_u (P₁R₀)²,  r = 1, 2,   [2.90]

where Π^ε_u, Π_u and P₁ are the operators generated by ρ^ε_u, ρ_u and P₁(x, A), respectively, x ∈ X, A ∈ X, u ∈ U.

It follows from Di Crescenzo (2001) that for any continuous and bounded function f(x)

E_{ρ^ε_u}[f(x)] → E_{ρ_u}[f(x)]  as ε → 0,  ∀ u ∈ U,

and the whole calculation in section 2.2.2 can be used in this section after replacing E_{ρ_u} by E_{ρ^ε_u}; it reduces to the calculations with E_{ρ_u} as ε → 0.

Under conditions (A)–(C), the sequence of SMRE V_ε(t)f in [2.86], f ∈ B₀, is tight ρ_u-a.s., ∀ u ∈ U. Under conditions (D), (E) with i = 2, and (F) with k = 2, j = 1, the sequence of SMRE V_ε(t)f is weakly compact ρ_u-a.s., ∀ u ∈ U, in D_B[0, +∞) with limit points in C_B[0, +∞).

Note that u(x^ε(t/ε)) → x̂(t) as ε → 0, where x̂(t) is a merged jump Markov process in (U, U) with infinitesimal operator Λ(P̂ − I),

Λf̂(u) := [b(u)/m(u)]f̂(u),  P̂f̂(u) := ∫_U [b(u, du′)/b(u)]f̂(u′),   [2.91]

m(u) := ∫_{X_u} ρ_u(dx)m(x),

where b(u) and b(u, Δ) are defined in [2.82]. We also note that

Π_u P₁ = Λ(P̂ − I),   [2.92]

where Π_u is defined in [2.90], P₁ in [2.80], and Λ and P̂ in [2.91].
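The phase-merging mechanism behind [2.90]–[2.92] can be mimicked by a toy chain (our construction, not from the book): four states split into the groups X_0 = {0, 1} and X_1 = {2, 3}; within-group mixing is O(1), while cross-group transitions have probability ε and occur only from states 1 and 3. The group indicator u(x_n) then behaves like a slow two-state chain, and since the exit-capable state is occupied about half the time, the mean sojourn in a group is ≈ 2/ε steps, a discrete analogue of the merged jump rate b(u)/m(u).

```python
import random

EPS = 0.01                               # small cross-group transition probability

def step(x, rng):
    """One transition of the perturbed chain on {0, 1, 2, 3}."""
    if x in (1, 3) and rng.random() < EPS:
        return 2 if x == 1 else 0        # rare jump to the other group
    group = (0, 1) if x in (0, 1) else (2, 3)
    return rng.choice(group)             # fast uniform mixing inside the group

def mean_sojourn(n_excursions=2000, seed=9):
    """Average number of steps between consecutive group switches."""
    rng = random.Random(seed)
    x, length, sojourns = 0, 0, []
    while len(sojourns) < n_excursions:
        in_first = x < 2
        x = step(x, rng)
        length += 1
        if (x < 2) != in_first:          # group switch: record sojourn length
            sojourns.append(length)
            length = 0
    return sum(sojourns) / len(sojourns)

print(mean_sojourn())                    # near 2 / EPS = 200 steps
```

On the merged time scale the group indicator is (approximately) a Markov jump process, which is what the operator identity Π_u P₁ = Λ(P̂ − I) encodes in general.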

Using [2.19] and [2.20] with n = 2, [2.88]–[2.89], [2.90] with r = 1, and [2.92], we obtain the following representation:

M^ε_{ν(t/ε)} f^ε(x^ε(t/ε)) = V^ε_{ν(t/ε)} f̂(u(x^ε(t/ε))) − f̂(u(x)) − ε Σ_{k=0}^{ν(t/ε)} [m(u)Γ̂(u) + m(u)D̂(u) + m(u)Λ(P̂ − I)]V^ε_k f̂(u(x^ε_k)) + O_f(ε),   [2.93]

where ‖O_f(ε)‖/ε → const as ε → 0, ∀ f ∈ B₀.

Since the third term in [2.93] tends to the integral

∫_0^t [Λ(P̂ − I) + Γ̂(x̂(s)) + D̂(x̂(s))]V̂₀(s)f̂(x̂(s)) ds

and the quadratic variation of the martingale l(M^ε_{ν(t/ε)} f^ε(x^ε(t/ε))) tends to zero as ε → 0 (hence, M^ε_{ν(t/ε)} f^ε(x^ε(t/ε)) → 0 as ε → 0), ∀ l ∈ B₀*, we obtain from [2.93] that the limit evolution V̂₀(t) satisfies the equation

V̂₀(t)f̂(x̂(t)) = f̂(u) + ∫_0^t [Λ(P̂ − I) + Γ̂(x̂(s)) + D̂(x̂(s))]V̂₀(s)f̂(x̂(s)) ds.   [2.94]

The RE V̂₀(t) is called a merged RE in the averaging scheme.

2.2.5. Diffusion approximation of HRE in reducible phase space

Let us consider the SMRE V_ε(t/ε) with the expansion [2.66], where V_ε(t) is defined in [2.86], and conditions (A)–(F) (with i = 3, k = 4, j = 1, 2) and conditions (a)–(f) (with l = 2) are satisfied. Let us state that the balance condition

∫_{X_u} ρ_u(dx)[m(x)Γ(x) + PD₁(x, ·)]f = 0,  ∀ u ∈ U,   [2.95]

is also satisfied and that the operator

L(u) := ∫_{X_u} ρ_u(dx)L(x)/m(u)   [2.96]

generates a semigroup of operators, where L(x) is defined in [2.71] and m(u) in [2.91].

Now, consider the martingale

M^ε_{ν(t/ε²)} f^ε(x^ε(t/ε²)) = V^ε_{ν(t/ε²)} f^ε(x^ε(t/ε²)) − f^ε(x) − Σ_{k=0}^{ν(t/ε²)} E_{ρ^ε_u}[V^ε_{k+1} f^ε_{k+1} − V^ε_k f^ε_k | F^ε_k],   [2.97]


where f^ε(x) := f̂(u(x)) + εf₁(x) + ε²f₂(x),

(P − I)f₁(x) = [m(x)Γ(x) + PD₁(x, ·)]f̂(u),

(P − I)f₂(x) = [m(u)L(u) − L(x) + (Π_u − I)P₁]f̂(u),   [2.98]

and L(u) is defined in [2.96]. From the balance condition [2.95] and from the condition Π_u[L(u) − L(x) + (Π_u − I)P₁] = 0, it follows that the functions f_i(x), i = 1, 2, are uniquely defined.

Assume that V̂⁰(t) is the limit of V_ε(t/ε) as ε → 0. Thus, from [2.64] the limits for V_ε(t/ε) and V^ε_{ν(t/ε²)} can be obtained and they are the same, namely, V̂⁰(t). The weak compactness of V_ε(t/ε) is established analogously to section 2.2.3, with the use of [2.80] with l = 2 and [2.90] with r = 2. That is why the whole calculation in section 2.2.3 can be used in this section after replacing E_{ρ_u} by E_{ρ^ε_u}; it reduces to the calculations with E_{ρ_u} as ε → 0.

Using [2.19] and [2.20] with n = 3, as well as the representations [2.66] and [2.97]–[2.98], we have the following representation for M^ε f^ε:

M^ε_{ν(t/ε²)} f^ε = V^ε_{ν(t/ε²)} f̂(u(x^ε(t/ε²))) − f̂(u(x)) − ε² Σ_{k=0}^{ν(t/ε²)} [m(u)L(u(x^ε_k)) + Π_u P₁]V^ε_k f̂(u(x^ε_k)) + O_f(ε),   [2.99]

where L(u) is defined in [2.96], and ‖O_f(ε)‖/ε → const as ε → 0.

The sum in [2.99] converges strongly as ε → 0 to the integral

∫_0^t [Λ(P̂ − I) + L(x̂(s))]V̂⁰(s)f̂(x̂(s)) ds,   [2.100]

because of relation [2.92], where x̂(t) is a jump Markov process in (U, U) with infinitesimal operator Λ(P̂ − I), x̂(0) = u ∈ U.

Let M̂⁰(t)f be a limit martingale for M^ε_{ν(t/ε²)} f^ε(x^ε(t/ε²)) as ε → 0.


From [2.94]–[2.99], we have, as ε → 0, the equation

M̂⁰(t)f̂(x̂(t)) = V̂⁰(t)f̂(x̂(t)) − f̂(u) − ∫_0^t [Λ(P̂ − I) + L(x̂(s))]V̂⁰(s)f̂(x̂(s)) ds.   [2.101]

The quadratic variation of the martingale M̂⁰(t) has the form

⟨l(M̂⁰(t)f̂(u))⟩ = ∫_0^t ∫_{X_u} l²(σ(x, u)Γ(x)V̂⁰(s)f̂(u)) ρ_u(dx) ds,   [2.102]

where σ²(x, u) := [m₂(x) − m²(x)]/m(u).

The solution of the martingale problem for M̂⁰(t) is expressed by the integral

M̂⁰(t)f̂(x̂(t)) = ∫_0^t Ŵ(ds, x̂(s))V̂⁰(s)f̂(x̂(s)),   [2.103]

where

Ŵ(t, u)f := ∫_{X_u} W_{ρ_u}(t, dx)σ(x, u)Γ(x)f.

Finally, from [2.100]–[2.103], it follows that the limit process V̂⁰(t) satisfies the following equation:

V̂⁰(t)f̂(x̂(t)) = f̂(u) + ∫_0^t [Λ(P̂ − I) + L(x̂(s))]V̂⁰(s)f̂(x̂(s)) ds + ∫_0^t Ŵ(ds, x̂(s))V̂⁰(s)f̂(x̂(s)).   [2.104]

The RE V̂⁰(t) in [2.104] is called a merged RE in the diffusion approximation scheme. If the operator Û⁰(t) is a solution of the Cauchy problem

dÛ⁰(t)/dt = Û⁰(t)L(x̂(t)),  Û⁰(0) = I,

then the operator process V̂⁰(t)f̂(x̂(t)) satisfies the equation

V̂⁰(t)f̂(x̂(t)) = Û⁰(t)f̂(u) + ∫_0^t Û⁰(t − s)Λ(P̂ − I)V̂⁰(s)f̂(x̂(s)) ds + ∫_0^t Û⁰(t − s)Ŵ(ds, x̂(s))V̂⁰(s)f̂(x̂(s)).   [2.105]

The uniqueness of the limit of the RE V̂⁰(t) is established by the dual SMRE.

2.2.6. Normal deviations of HRE

The averaged evolution obtained by the averaging and merging schemes can be considered as the first approximation to the initial evolution. The diffusion approximation of the SMRE determines the second approximation to the initial evolution, since under the balance condition the first approximation, the averaged evolution, appears to be trivial.

Now, we consider the double approximation to the SMRE, i.e. both the averaged and the diffusion approximation, provided that the balance condition fails. We introduce the deviation process as the normalized difference between the initial and averaged evolutions. In the limit, we will obtain the normal deviations of the initial SMRE from the averaged one.

We consider the SMRE V_ε(t) in [2.57] and the averaged evolution V₀(t) in [2.65]. Let us also consider the deviation of the initial evolution V_ε(t)f from the averaged one V₀(t)f, i.e.

W_ε(t)f := ε^{−1/2}·[V_ε(t) − V₀(t)]f,  ∀ f ∈ B₀.   [2.106]

Taking into account equations [2.57] and [2.106], we obtain the relation for W_ε(t):

W_ε(t)f = ε^{−1/2} ∫_0^t [Γ(x(s/ε)) − Γ̂]V_ε(s)f ds + ∫_0^t Γ̂W_ε(s)f ds + ε^{−1/2}[V^d_ε(t) − ∫_0^t D̂V₀(s) ds]f,  ∀ f ∈ B₀,   [2.107]

where

V^d_ε(t)f := Σ_{k=1}^{ν(t/ε)} [D_ε(x_{k−1}, x_k) − I]V_ε(ετ_k−)f,

and Γ̂, D̂ are defined in [2.62].


If the process W_ε(t)f has the weak limit W₀(t)f as ε → 0, then we obtain

∫_0^t Γ̂W_ε(s)f ds → ∫_0^t Γ̂W₀(s)f ds,  as ε → 0.   [2.108]

Since the operator Γ(x) − Γ̂ satisfies the balance condition

Π(Γ(x) − Γ̂)f = 0,

the diffusion approximation of the first term in the right-hand side of [2.107] gives

ε^{−1/2} l(∫_0^t (Γ(x(s/ε)) − Γ̂)f ds) → l(σ₁f)w(t),  as ε → 0,   [2.109]

where

l²(σ₁f) = ∫_X ρ(dx)[m(x)l((Γ(x) − Γ̂)f)·(R₀ − I)m(x)l((Γ(x) − Γ̂)f) + (1/2)m₂(x)l²((Γ(x) − Γ̂)f)]/m,

∀ l ∈ B₀, and w(t) is a standard Wiener process.

Since Π(PD₁(x, ·) − D̂)f = 0, the diffusion approximation of the third term in the right-hand side of [2.107] gives the following limit:

ε^{−1/2} l(V^d_ε(t)f − ∫_0^t D̂V₀(s)f ds) → l(σ₂f)w(t),  as ε → 0,   [2.110]

where

l²(σ₂f) := ∫_X ρ(dx)l((PD₁(x, ·) − D̂)f)·(R₀ − I)l((PD₁(x, ·) − D̂)f).

The passage to the limit as ε → 0 in the representation [2.107], by using [2.108]–[2.110], allows us to arrive at the equation for W₀(t)f:

W₀(t)f = ∫_0^t Γ̂W₀(s)f ds + σf w(t),   [2.111]

where the variance operator σ is determined from the relation

l²(σf) := l²(σ₁f) + l²(σ₂f),  ∀ l ∈ B₀*,   [2.112]

and the operators σ₁ and σ₂ are defined in [2.109] and [2.110], respectively.


Thus, the double approximation of the SMRE has the form

V_ε(t)f ≈ V₀(t)f + √ε·W₀(t)f

for small ε, which perfectly fits the standard form of the CLT with non-zero limiting mean value.

2.2.7. Rates of convergence in the limit theorems for HRE

The rates of convergence in the averaging and diffusion approximation schemes for the sequence of SMRE are studied in this section.

Averaging scheme: The problem is to estimate the value

‖E_ρ[V_ε(t)f^ε(x(t/ε)) − V₀(t)f]‖,  ∀ f ∈ B₀,   [2.113]

where V₀(t), V_ε(t), f^ε, f and B₀ are defined in [2.65], [2.57], [2.61] and [2.59], respectively. We use the following representation:

‖E_ρ[V_ε(t)f^ε(x(t/ε)) − V₀(t)f]‖ ≤ ‖E_ρ[V_ε(t)f − V_ε(τ_{ν(t/ε)})f]‖ + ‖E_ρ[V_ε(τ_{ν(t/ε)})f − V₀(t)f]‖ + ε‖E_ρ[V_ε(t)f₁(x(t/ε))]‖,   [2.114]

which follows from [2.113] and [2.65], [2.61] and [2.59].

For the first term in the right-hand side of [2.114], we obtain (see [2.64] and [2.52] with n = 2):

‖E_ρ[V_ε(t)f − V_ε(τ_{ν(t/ε)})f]‖ ≤ ε C₁(T, f),  ∀ t ∈ [0, T],   [2.115]

where

C₁(T, f) := ∫_X ρ(dx)[C₀(T, x, f) + C₀²(T, x, f)],  C₀(T, x, f) := T m₂(x)‖Γ(x)f‖/2m,  ∀ f ∈ B₀.

For the second term in the right-hand side of [2.114], we have from [2.63] and [2.113] (since E_ρ[M^ε_{ν(t/ε)} f^ε(x(t/ε))] = 0):

‖E_ρ[V_ε(τ_{ν(t/ε)})f − V₀(t)f]‖ ≤ ε‖E_ρ[V^ε_{ν(t/ε)} − I]f₁(x(t/ε))‖ + ε‖E_ρ[Σ_{k=0}^{ν(t/ε)−1} (Γ̂ + D̂)V^ε_k f − (1/(εm))∫_0^t (Γ̂ + D̂)V₀(s)f ds]‖ + ε C₂(T, f),   [2.116]

where the constant C₂(T, f) is expressed by the algebraic sum of

∫_X m_i(x)‖Γ^i(x)f‖ρ(dx)  and  ∫_X m_i(x)‖PD₁(x, ·)·Γ^i(x)f‖ρ(dx),  i = 1, 2,  f ∈ B₀,

and R₀, which is a potential of the Markov chain (x_n; n ≥ 0).

For the third term in the right-hand side of [2.114], we obtain

‖E_ρ[f₁(x)]‖ ≤ 2C₃(f),   [2.117]

where

C₃(f) := ‖R₀‖∫_X ρ(dx)[m(x)‖Γ(x)f‖ + ‖PD₁(x, ·)f‖].

Finally, from [2.114]–[2.117], we obtain the estimate of the value in [2.113], namely, the rate of convergence in the averaging scheme for SMRE:

‖E_ρ[V_ε(t)f^ε(x(t/ε)) − V₀(t)f]‖ ≤ ε C(T, f),   [2.118]
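The O(ε) rate [2.118] can be observed numerically in a scalar switched-evolution toy model (ours, not from the book): u_i(t) := E[V_ε(t); x(t/ε) = i] solves the stiff linear system du/dt = diag(γ)u + ε^{−1}Qᵀu, so the averaging error |E[V_ε(1)] − exp(γ̄)| can be computed by direct ODE integration; halving ε should roughly halve it.

```python
import math

Q = [[-2.0, 2.0], [3.0, -3.0]]           # generator of the switching chain
GAMMA = [1.0, -0.5]
GAMMA_BAR = (3.0 * GAMMA[0] + 2.0 * GAMMA[1]) / 5.0   # stationary average = 0.4

def error(eps, n_steps=50000):
    """Averaging error |E[V_eps(1)] - exp(gamma_bar)| via RK4 on the ODE."""
    dt = 1.0 / n_steps
    u = [1.0, 0.0]                        # start in state 0 with V = 1

    def f(u):
        return [GAMMA[i] * u[i] + sum(Q[j][i] * u[j] for j in range(2)) / eps
                for i in range(2)]

    for _ in range(n_steps):              # classical 4th-order Runge-Kutta
        k1 = f(u)
        k2 = f([u[i] + 0.5 * dt * k1[i] for i in range(2)])
        k3 = f([u[i] + 0.5 * dt * k2[i] for i in range(2)])
        k4 = f([u[i] + dt * k3[i] for i in range(2)])
        u = [u[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(2)]
    return abs(u[0] + u[1] - math.exp(GAMMA_BAR))

e1, e2 = error(0.02), error(0.01)
print(e1, e2, e1 / e2)   # a ratio near 2 is consistent with an O(eps) rate
```

This mirrors the statement of [2.118]: the error is bounded by ε times a constant depending on T and f, so halving ε roughly halves the discrepancy (modulo higher-order terms).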

where the constant C(T, f) can be expressed as a function of the constants C_i(T, f), i = 1, 2, 3.

Diffusion approximation: In this case, the problem is to estimate the value

‖E_ρ[V_ε(t/ε)f^ε(x(t/ε²)) − V⁰(t)f]‖,  ∀ f ∈ B₀,   [2.119]

where V_ε(t/ε), f^ε, V⁰(t), f and B₀ are defined in [2.68], [2.70], [2.77] and [2.67], respectively. Here, we use the following representation:

‖E_ρ[V_ε(t/ε)f^ε(x(t/ε²)) − V⁰(t)f]‖ ≤ ‖E_ρ[V_ε(t/ε)f − V_ε(τ_{ν(t/ε²)})f]‖ + ‖E_ρ[V_ε(τ_{ν(t/ε²)})f − V⁰(t)f]‖ + ε‖E_ρ[V_ε(t/ε)f₁(x(t/ε²))]‖ + ε²‖E_ρ[V_ε(t/ε)f₂(x(t/ε²))]‖,   [2.120]

which follows from [2.119], [2.70] and [2.64], respectively.

First of all, we have for the fourth term in the right-hand side of [2.120]:

ε²‖E_ρ[V_ε(t/ε)f₂(x(t/ε²))]‖ ≤ ε²·2‖R₀‖∫_X ρ(dx)‖L(x)f‖ := ε² d₁(f),   [2.121]

where L(x) is defined in [2.71].

For the third term in the right-hand side of [2.120], we obtain:

ε‖E_ρ[V_ε(t/ε)f₁(x(t/ε²))]‖ ≤ ε d₂(f),   [2.122]

where

d₂(f) := 2‖R₀‖·∫_X ρ(dx)[m(x)‖Γ(x)f‖ + ‖PD₁(x, ·)f‖],  f ∈ B₀.

For the first term in the right-hand side of [2.120], we have from [2.115]:

‖E_ρ[V_ε(t/ε)f − V_ε(τ_{ν(t/ε²)})f]‖ ≤ ε C₁(T, f),   [2.123]

where C₁(T, f) is defined in [2.115].

For the second term in the right-hand side of [2.120], we use the asymptotic representation [2.73] for the martingale M^ε_{ν(t/ε²)} f^ε and the conditions

E_ρ[M^ε f^ε] = 0,  E_ρ[M⁰(t)f] = 0,  ∀ f ∈ B₀:   [2.124]

‖E_ρ[V_ε(τ_{ν(t/ε²)})f − V⁰(t)f]‖ ≤ ε‖E_ρ[V_ε(τ_{ν(t/ε²)})f₁ − f₁(x)]‖ + ε²‖E_ρ[V_ε(τ_{ν(t/ε²)})f₂ − f₂(x)]‖ + ε²‖E_ρ[Σ_{k=0}^{ν(t/ε²)−1} L̂V^ε_k f − (1/(ε²m))∫_0^t L̂V⁰(s)f ds]‖ + ε d₃(f),   [2.125]

where the constant d₃(f) is expressed by the algebraic sum of

∫_X m_i(x)‖Γ^j(x)PD_e(x, ·)f‖ρ(dx),  i = 1, 2, 3,  j = 0, 1, 2, 3,  e = 1, 2.

We should note that

‖E_ρ[Σ_{k=0}^{ν(t/ε²)−1} L̂V^ε_k f − ε^{−2}m^{−1}∫_0^t L̂V⁰(s)f ds]‖ ≤ d₄(T, f).   [2.126]

Finally, from [2.120]–[2.126], we obtain the estimate of the value in [2.119], namely, the rate of convergence in the diffusion approximation scheme for SMRE:

‖E_ρ[V_ε(t/ε)f^ε(x(t/ε²)) − V⁰(t)f]‖ ≤ ε d(T, f),   [2.127]

where the constant d(T, f) is expressed by d_i, i = 1, 2, 3, 4, and C₁(T, f), f ∈ B₀.

PART 2

Applications to Reliability, Random Motions, and Telegraph Processes

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.

3 Asymptotic Analysis for Distributions of Markov, Semi-Markov and Random Evolutions

This chapter is devoted to finding the probabilities of large deviations for trajectories of a semi-Markov process and random evolutions on the line. In particular, we estimate the asymptotic distribution of the time to reach an infinitely increasing level by a semi-Markov process on the line, once a discrete analogue of the action functional for the embedded Markov chain is obtained. We also find asymptotic estimates of that reaching time when the limit that defines the discrete analogue of the action functional does not exist. In the modern theory of random processes, considerable interest is devoted to problems related to the asymptotic behavior of the instants of reaching "hard to reach" areas of the phase space. Such problems can be divided into the following three mutually complementary groups: 1) Large deviation theorems: problems of this type have been studied in many research works, including large deviations of empirical measures of Markov processes, reported in fundamental works by Donsker and Varadhan (1975), and large deviations of diffusion processes, studied by Ventzel and Freydlin (1970). A series of works by Borovkov and Mogulsky (1980) and Foss (2007) is devoted to large deviations for sums of random vectors and fields. Large deviations for walks on a multidimensional lattice are studied by Molchanov and Yarovaya (2013), and large deviations for random particle systems are studied by Dawson and Gärtner (1994). This group also includes the recent results of Novak, who investigated the accuracy of the approximation of processes exceeding a threshold level, large deviations for Poisson processes and a Poisson approximation to study extremes (Novak 2007).


2) If a domain D of the phase space is fixed, and some local characteristics of the process depend on a small parameter ε > 0, then the probability of reaching D in a given time tends to zero as ε → 0. Such problems have also been the main focus of many research works. In particular, they were studied by Korolyuk and Turbin (1993), Gnedenko and Ushakov (1995), Skorokhod (1989), Kovalenko (1980), Soloviev (1993), Anisimov (1977), and others. 3) A semi-Markov process on a domain D_ε that depends on a parameter ε > 0, such that the probability for the process to reach the domain tends to zero as ε → 0. First, we refer to the works of Soloviev (1993) and Vinogradov (1994). Secondly, these problems have also been addressed by Korolyuk and Silvestrov (1983), Korolyuk and Turbin (1993), Silvestrov (2007a, 2007b) and others. In this chapter, we include some problems of the last group. As in most of the cases in this book, we consider problems for which it is not possible to specify the exact form of the distribution of hard-to-reach states. Therefore, we study the asymptotic distribution. These problems have multiple applications, for instance, in studying the probability of losing a customer in a queueing system, in models for storage management, in the theory of runs and so on. In section 3.1, we deal with the problem of finding an asymptotic distribution of functional type for the time to reach a level that is infinitely increasing by a semi-Markov process on the set of natural numbers. For this analysis, the existence of a discrete analogue of the action functional for the Markov chain embedded in the semi-Markov process arises as an important condition. In section 3.2, we find asymptotic estimates for the distribution of the time that a semi-Markov process spends in an infinitely expanding set of states, when the condition for the existence of a discrete analogue of the action functional is not fulfilled. 
Section 3.3 is devoted to the asymptotic expansion for the distribution of the first exit time from an expanding set of states by the semi-Markov process embedded in a diffusion process. In section 3.4, an asymptotic expansion is obtained for the semigroup of operators of the corresponding three-component Markov process, after the diffusion approximation of the process is stated. Section 3.5 deals with the asymptotic expansion under the diffusion approximation for the distribution of the position of a particle that performs a random walk in n-dimensional space with Markov switching, provided Kac's condition holds.

Asymptotic Analysis for Distributions of Evolutions


3.1. Asymptotic distribution of time to reach a level that is infinitely increasing by a family of semi-Markov processes on the set N

We will obtain the asymptotic expansion for the distribution of the time needed by a semi-Markov process with a discrete phase space to reach an infinitely increasing level. Let ξ(t), t ≥ 0, be a semi-Markov process on the phase space E = {1, 2, 3, . . .} whose embedded Markov chain ξ_m, m ∈ N, satisfies the following condition:

C1. The Markov chain ξ_m is irreducible and has the stationary distribution ρ = {ρ_i, i ∈ E}; moreover, ρ_k > 0 for any arbitrarily large k ∈ E.

Denote by e_n the single lumped state obtained by merging the set of states Ē_n = {n + 1, n + 2, . . .} (Korolyuk et al. 1979a, b); that is, the transition probabilities of the lumped Markov chain ξ̄_m^(n), m ∈ N, on the phase space E_n ∪ {e_n}, where E_n = {1, 2, . . . , n}, are of the form of definition 1.12. We will use the notion of the lumped Markov chain introduced in Chapter 1 of Volume 1. Let us fix a state k ∈ E: the transition probabilities π_ij, i, j ∈ {k, e_n}, of the lumped Markov chain ξ̂_m^(n), m ∈ N (Korolyuk et al. 1979b), obtained from the Markov chain ξ_m, are defined on the phase space E_k^(n) = {k, e_n}. Let us state the following condition:

C2. There exists

    lim_{n→∞} ρ_k π_{k e_n} / ε_n = A > 0,

where ε_n = Σ_{i>n} ρ_i.

REMARK 3.1.– In Korolyuk et al. (1979b), it was proved that 0 ≤ A ≤ 1, and it was shown that, in a certain sense, A is a discrete analogue of the action functional introduced by Ventzel and Freydlin (1970).

On the phase space E_0 = {0, 1, 2, . . .}, consider an ergodic irreducible Markov chain {ξ_m, m ∈ N} with a transition probability matrix formed from four blocks:

    P = ( [P_0]   B_1 )
        (  B_2    B_3 ),

where [P_0] is a k_0 × k_0 submatrix; B_1 is the block whose only non-zero entry is p_{k_0−1}, the probability of moving from state k_0 − 1 up to state k_0; B_2 is the block whose only non-zero entry is q_{k_0}, the probability of moving from state k_0 down to state k_0 − 1; and

    B_3 = (  0          p_{k_0}     0           · · · )
          (  q_{k_0+1}  0           p_{k_0+1}   · · · )
          (  0          q_{k_0+2}   0           · · · )
          (  . . .                                    ),

i.e. above the level k_0 the chain evolves as a birth-and-death chain with "up" probabilities p_n and "down" probabilities q_n. Let ρ = {ρ_i, i ∈ E_0} be the stationary distribution of {ξ_m}, and let e_n be the new state obtained after merging the set of states Ē_n = {n + 1, n + 2, · · · }, ε_n = Σ_{i>n} ρ_i. For a fixed n_0 < n, define E_{n_0} = {0, 1, · · · , n_0} and denote by π_ij, i, j ∈ E_{n_0} ∪ {e_n}, the transition probabilities of the lumped Markov chain.

LEMMA 3.1.– (Pogorui 1990). Suppose that there exists lim_{n→∞} p_n = p < 1/2. Then, with q = 1 − p, we have

    A = (q − p)² / q.

In our case, for n_0 ≥ k_0, we have

    Σ_{i ∈ E_{n_0}} π_{e_n i} = π_{e_n n_0} = (ρ_{n+1} q_{n+1} / ε_n) f^{e_n}_{n n_0},

where f^{e_n}_{n n_0} is the probability that, after starting from state n, the chain {ξ_m} reaches state n_0 before e_n.

Taking into account [3.1], for n ≥ k_0 we have

    ρ_{n+1} q_{n+1} / ε_n = q_{n+1} / (1 + p_{n+2}/q_{n+3} + (p_{n+2} p_{n+3})/(q_{n+3} q_{n+4}) + · · · ).

We need to show that

    lim_{n→∞} ρ_{n+1} q_{n+1} / ε_n = q − p.

Denote

    a^n_k = Π_{i=1}^{k} p_{n+i+1}/q_{n+i+2},   a^n_0 = 1.

Hence, ε_n / ρ_{n+1} = Σ_{k=0}^{∞} a^n_k. Since p_n → p < 1/2 as n → ∞, for any fixed ε > 0 there exist N_1 and N_2 such that Σ_{k=N_1}^{∞} a^n_k < ε/2 for all n > N_2. Then, there exists N_3 such that |Σ_{k=0}^{N_1} a^n_k − Σ_{k=0}^{N_1} (p/q)^k| < ε/2 for all n > N_3. By defining N_4 = max(N_2, N_3), we obtain for all n > N_4

    |Σ_{k=0}^{∞} a^n_k − Σ_{k=0}^{∞} (p/q)^k| < ε.

Therefore,

    lim_{n→∞} Σ_{k=0}^{∞} a^n_k = q / (q − p).

Since q_n → q as n → ∞, we have lim_{n→∞} ρ_{n+1} q_{n+1} / ε_n = q − p.

Now, it remains to show that lim_{n→∞} f^{e_n}_{n n_0} = (q − p)/q. It is easily verified that lim_{n→∞} f̄^{e_n}_{n n_0} = (q − p)/q, where f̄^{e_n}_{n n_0} is f^{e_n}_{n n_0} in the case when p_n = p, q_n = q for all n ≥ k_0.

Suppose ε > 0 is small enough that q − ε > p + ε. It is easily seen that lim_{n→∞} f^{e_n}_{n n_0} does not depend on n_0 (otherwise A would depend on n_0). Consider N_0 such that |p_n − p| < ε and |q_n − q| < ε for any n > N_0. Let f̌^{e_n}_{n n_0} be f^{e_n}_{n n_0} in the case where p_n = p − ε, q_n = q + ε for all n ≥ k_0, and let f̂^{e_n}_{n n_0} be f^{e_n}_{n n_0} in the case where p_n = p + ε, q_n = q − ε for all n ≥ k_0. It is then easily verified that

    (q − p − 2ε)/(q − ε) = lim_{n→∞} f̂^{e_n}_{n n_0} ≤ lim_{n→∞} f^{e_n}_{n n_0} ≤ lim_{n→∞} f̌^{e_n}_{n n_0} = (q − p + 2ε)/(q + ε).

Since ε > 0 can be arbitrarily small, we have lim_{n→∞} f^{e_n}_{n n_0} = (q − p)/q.

Now, let us consider the following example of a Markov chain, for which the constant A can be calculated directly.

EXAMPLE 3.1.– Let ξ_m, m ∈ N, be a Markov chain on the phase space E_0 = {0, 1, 2, · · · } with the transition probability matrix

    P = (  0    1    0    0    · · · )
        (  q_1  0    p_1  0    · · · )
        (  0    q_2  0    p_2  · · · )
        (  0    0    q_3  0    · · · )
        (  . . .                     ),

where p_i + q_i = 1 for all i ≥ 1. Now suppose that there exists lim_{n→∞} p_n = p < 1/2; then, by lemma 3.1, A = (q − p)²/q with q = 1 − p.

Denote by τ_n^(k), n > k, the time to reach the set Ē_n by ξ(t), assuming ξ(0) = k, i.e.

    τ_n^(k) = inf {t : ξ(t) ∈ Ē_n, ξ(0) = k}.
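As a numerical aside (a sketch of mine, not from the book), the limit lim_{n→∞} ρ_{n+1} q_{n+1}/ε_n = q − p used in the proof of lemma 3.1 can be checked for the chain of example 3.1 with constant p_n = p: the stationary weights follow from detailed balance, ρ_{i+1} q_{i+1} = ρ_i p_i, with ρ_1 q_1 = ρ_0. The truncation level n_max below is an assumption of the sketch.

```python
# Numerical sketch: for the chain of example 3.1 with constant p_n = p < 1/2,
# q_n = q = 1 - p, detailed balance gives rho_{i+1} = rho_i * p / q, and the
# proof of lemma 3.1 claims rho_{n+1} * q_{n+1} / eps_n -> q - p, where
# eps_n = sum_{i > n} rho_i.  Normalization cancels in the ratio.

def tail_ratio(p, n, n_max):
    q = 1.0 - p
    rho = [1.0, 1.0 / q]             # rho_1 = rho_0 / q_1 (state 0 jumps to 1 w.p. 1)
    for _ in range(1, n_max):
        rho.append(rho[-1] * p / q)  # rho_{i+1} = rho_i * p_i / q_{i+1}
    eps_n = sum(rho[n + 1:])         # tail mass beyond level n
    return rho[n + 1] * q / eps_n

print(tail_ratio(p=0.3, n=50, n_max=4000))   # close to q - p = 0.4
```

With p = 0.3 the printed ratio agrees with q − p = 0.4, and the corresponding constant of lemma 3.1 would be A = (q − p)²/q = 0.16/0.7.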


We are interested in the asymptotic expansion of the distribution of the random variable ε_n τ_n^(k) as n → ∞. Denote by θ_i the sojourn time of ξ(t) at state i ∈ E, and define a_i = E[θ_i]. Let the following condition be satisfied:

C3. sup_{i ∈ E} a_i = c < ∞.

Let ξ̂^(n)(t), t ≥ 0, be the lumped semi-Markov process obtained from ξ(t) and defined on the phase space E_k^(n), with the embedded Markov chain ξ̂_m^(n). Denote by G_n(t) the sojourn time distribution function of ξ̂^(n)(t) at state k, and assume that there exists the pdf g_n(t) = dG_n(t)/dt. We have removed the explicit state dependence in this cumulative distribution function (cdf) and pdf to ease the notation. Define the first moment as

    m_1^(n) = ∫_0^∞ t g_n(t) dt.

It was shown in Korolyuk et al. (1979b) that condition C3 ensures the existence of this first moment, and

    lim_{n→∞} m_1^(n) = Σ_{k≥1} ρ_k a_k.

We also have

    B_n = ρ_k π_{k e_n} / (m_1^(n) ε_n),   V_n = m_1^(n) B_n,   R_0^(n)(t) = Σ_{i=1}^∞ g_n^(i)*(t) − (m_1^(n))^{−1},

where

    g_n^(i)*(t) = ∫_0^t g_n^((i−1))*(t − u) g_n(u) du,   g_n^(1)*(t) = g_n(t).

Consider

    F_{ε_n}(t) = P(ε_n τ_n^(k) > t),   F_0^(n)(t) = e^{−B_n t}.

LEMMA 3.2.– If the conditions C1–C3 are fulfilled, then

    F_{ε_n}(t) = e^{−B_n t} − ∫_0^t F_{ε_n}(t − u) [ V_n R_0^(n)(u/ε_n) − V_n B_n ∫_0^u R_0^(n)((u − v)/ε_n) e^{−B_n v} dv ] du.   [3.2]


PROOF.– The function F_{ε_n}(t) satisfies the following perturbed Markov renewal equation:

    F_{ε_n}(t) = Ḡ_n(t/ε_n) + ∫_0^{t/ε_n} g_n(s) (1 − π_{k e_n}) F_{ε_n}(t − ε_n s) ds,   [3.3]

where Ḡ_n(t) = 1 − G_n(t). Consider the following Laplace transforms:

    F̂_{ε_n}(λ) = ∫_0^∞ e^{−λt} F_{ε_n}(t) dt,   ĝ_n(ε_n λ) = ∫_0^∞ e^{−ε_n λ t} g_n(t) dt,
    F̂_0^(n)(λ) = ∫_0^∞ e^{−λt} F_0^(n)(t) dt,   R̂_0^(n)(ε_n λ) = ∫_0^∞ e^{−ε_n λ t} R_0^(n)(t) dt,   [3.4]
    Q̂^(n)(ε_n λ) = ĝ_n(ε_n λ)(1 − π_{k e_n}),   Ḡ̂_n(ε_n λ) = ∫_0^∞ e^{−ε_n λ t} Ḡ_n(t) dt.

Taking into account equations [3.4], from equation [3.3] it follows that

    F̂_{ε_n}(λ) = ε_n (1 − Q̂^(n)(ε_n λ))^{−1} Ḡ̂_n(ε_n λ).

Let us define S^(n)(ε_n λ) by the equality

    ε_n S^(n)(ε_n λ) = m_1^(n) (Ḡ̂_n(ε_n λ))^{−1} − 1,   [3.5]

i.e. Ḡ̂_n(ε_n λ) = m_1^(n) (1 + ε_n S^(n)(ε_n λ))^{−1}. It is easily seen that

    Ḡ̂_n(ε_n λ) = ∫_0^∞ e^{−ε_n λ t} Ḡ_n(t) dt = 1/(ε_n λ) − (1/(ε_n λ)) ∫_0^∞ e^{−ε_n λ t} g_n(t) dt = (1 − ĝ_n(ε_n λ))/(ε_n λ).

Whence, taking into account equations [3.4]–[3.5], we have

    Q̂^(n)(ε_n λ) = ĝ_n(ε_n λ)(1 − π_{k e_n}) = ( 1 − ε_n λ m_1^(n) (1 + ε_n S^(n)(ε_n λ))^{−1} )(1 − π_{k e_n}).

Therefore,

    F̂_{ε_n}(λ) = ε_n (1 − Q̂^(n)(ε_n λ))^{−1} Ḡ̂_n(ε_n λ) = ( λ + B_n + ε_n (S^(n)(ε_n λ) − λ m_1^(n)) B_n )^{−1}.   [3.6]

From equation [3.5], it follows that

    ε_n S^(n)(ε_n λ) = ε_n λ m_1^(n) / (1 − ĝ_n(ε_n λ)) − 1
                     = ε_n λ m_1^(n) [ 1 + R̂_0^(n)(ε_n λ) + 1/(ε_n λ m_1^(n)) ] − 1
                     = ε_n λ m_1^(n) + ε_n λ m_1^(n) R̂_0^(n)(ε_n λ).   [3.7]

Combining equations [3.6]–[3.7], we get

    F̂_{ε_n}(λ) = ( λ + B_n + ε_n λ m_1^(n) B_n R̂_0^(n)(ε_n λ) )^{−1}.

As F̂_0^(n)(λ) = 1/(λ + B_n), then

    F̂_{ε_n}^{−1}(λ) F̂_0^(n)(λ) = 1 + ε_n λ m_1^(n) B_n R̂_0^(n)(ε_n λ) F̂_0^(n)(λ).

Taking into account

    B_n / (λ(λ + B_n)) = 1/λ − 1/(λ + B_n),

we have

    F̂_{ε_n}(λ) = F̂_0^(n)(λ) − F̂_{ε_n}(λ) ε_n m_1^(n) R̂_0^(n)(ε_n λ) B_n ( 1 − B_n F̂_0^(n)(λ) ).   [3.8]

By taking the inverse Laplace transform of equation [3.8], we conclude the validity of lemma 3.2. □

By a component of a distribution F(x), x ∈ R, we understand a non-negative measure M, not identically zero, with the property M ≤ F.

DEFINITION 3.1.– (Sevastyanov 1971). A distribution function F(x) is said to be spread-out if there exists k ≥ 1 such that the k-fold convolution F^{*k}(x) has a component M that is absolutely continuous (i.e. it has a density m).
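As a hedged numerical illustration (the Erlang example and grid parameters are mine, not the book's), the function R_0^(n)(t) = Σ_i g_n^(i)*(t) − 1/m_1^(n) introduced above is the renewal density of g_n minus its long-run level, and can be computed from the discretized renewal equation h = g + g ∗ h. For an Erlang(2, μ) density g(t) = μ² t e^{−μt}, one has m_1 = 2/μ and m_2 = 6/μ², so ∫_0^∞ R_0(t) dt should equal m_2/(2m_1²) − 1 = −1/4 (cf. equation [3.11] of lemma 3.3 below).

```python
import numpy as np

# Renewal density h for an Erlang(2, mu) sojourn density, via the discretized
# renewal equation h(t) = g(t) + int_0^t g(t - s) h(s) ds, and then
# R0(t) = h(t) - 1/m1.  Sketch only; dt and T control the accuracy.
mu, dt, T = 1.0, 0.01, 40.0
t = np.arange(0.0, T, dt)
g = mu**2 * t * np.exp(-mu * t)      # Erlang(2, mu) pdf
m1 = 2.0 / mu

h = np.zeros_like(g)
for k in range(1, len(t)):
    h[k] = g[k] + np.dot(g[1:k + 1][::-1], h[:k]) * dt   # Riemann convolution

R0 = h - 1.0 / m1
integral = np.sum(R0) * dt
print(h[-1], integral)   # h settles near 1/m1 = 0.5; integral near -1/4
```

The closed form h(t) = (μ/2)(1 − e^{−2μt}) for this example makes both checks explicit: h(∞) = 1/m_1 and ∫_0^∞ R_0(t) dt = −1/4; the analogous check of [3.12] gives ∫_0^∞ t R_0(t) dt = −1/(8μ).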


We will also need the following theorem:

THEOREM 3.1.– (Sevastyanov 1971). If the function F(x) is spread-out and there exist the moments m_i = ∫_0^∞ t^i dF(t), i = 1, 2, then

    ∫_0^∞ | dH(t) − dt/m_1 | < ∞,

where H(t) = Σ_{n=1}^∞ F_n(t), F_1(t) = F(t), F_{n+1}(t) = ∫_0^t F_n(t − u) dF(u).

Let us find the asymptotic expansion for F_{ε_n}(t) = P(ε_n τ_n^(k) > t) up to O(ε_n³).

THEOREM 3.2.– Let the conditions C1–C3 be fulfilled. In addition, let the following condition hold:

C4. For all n > k, G_n(t) is spread-out and there exist the moments

    m_2^(n) = ∫_0^∞ t² g_n(t) dt.

Then, for t ∈ [δ, T], 0 < δ < T < ∞,

    F_{ε_n}(t) = e^{−B_n t} − V_n ∫_0^t e^{−B_n(t−u)} R_0^(n)(u/ε_n) du
    + V_n B_n ∫_0^t ∫_0^u e^{−B_n(t−(u−y))} R_0^(n)((u−y)/ε_n) dy du
    + V_n² ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} R_0^(n)(v/ε_n) R_0^(n)(u/ε_n) dv du
    − V_n² B_n ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} R_0^(n)(v/ε_n) ∫_0^u R_0^(n)((u−y)/ε_n) e^{−B_n y} dy dv du
    − V_n² B_n² ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} ∫_0^v R_0^(n)((v−y)/ε_n) e^{−B_n y} dy dv × ∫_0^u R_0^(n)((u−z)/ε_n) dz du
    − V_n² B_n ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} ∫_0^v R_0^(n)((v−y)/ε_n) e^{−B_n y} dy dv R_0^(n)(u/ε_n) du
    + O(ε_n³).   [3.9]

PROOF.– From theorem 3.1 and condition C4, it follows that

    ∫_0^∞ |R_0^(n)(s)| ds < ∞.

After two iterations of equation [3.2], we have

    F_{ε_n}(t) = e^{−B_n t} − ∫_0^t { e^{−B_n(t−u)} − ∫_0^{t−u} ( e^{−B_n(t−u−v)} − ∫_0^{t−u−v} F_{ε_n}(t−u−v−z)
        × [ V_n R_0^(n)(z/ε_n) − V_n B_n ∫_0^z R_0^(n)((z−x)/ε_n) e^{−B_n x} dx ] dz )
        × [ V_n R_0^(n)(v/ε_n) − V_n B_n ∫_0^v R_0^(n)((v−y)/ε_n) e^{−B_n y} dy ] dv }
        × [ V_n R_0^(n)(u/ε_n) − V_n B_n ∫_0^u R_0^(n)((u−y)/ε_n) e^{−B_n y} dy ] du.   [3.10]

After opening the parentheses in equation [3.10], it is easy to see that all terms, except those containing the unknown function F_{ε_n}(t) under the integrals, are included in equation [3.9]. Assuming conditions C2, C4 and F_{ε_n}(t) ≤ 1, we obtain

    sup_{n≥1} V_n³ | ∫_0^t ∫_0^{t−u} ∫_0^{t−u−v} F_{ε_n}(t−u−v−z) R_0^(n)(z/ε_n) R_0^(n)(v/ε_n) R_0^(n)(u/ε_n) du dz dv |
    ≤ sup_{n≥1} V_n³ ( ∫_0^t |R_0^(n)(s/ε_n)| ds )³ ≤ ε_n³ sup_{n≥1} V_n³ ( ∫_0^{+∞} |R_0^(n)(s)| ds )³ < ∞.

Here, we are using the fact that V_n = ρ_k π_{k e_n}/ε_n ≤ 1. By similar arguments, the other terms in equation [3.10] that contain F_{ε_n}(t) are also of order O(ε_n³). Hence, theorem 3.2 is proved. □

Now, let us state the following condition:

C5. For all n > k, there exist the moments

    m_3^(n) = ∫_0^∞ t³ g_n(t) dt.


LEMMA 3.3.– Let the conditions C1–C5 hold. Then, for all n > k,

    ∫_0^∞ R_0^(n)(t) dt = ( m_2^(n) − 2(m_1^(n))² ) / ( 2(m_1^(n))² )   [3.11]

and

    ∫_0^∞ t R_0^(n)(t) dt = ( 2 m_1^(n) m_3^(n) − 3(m_2^(n))² ) / ( 12(m_1^(n))³ ).   [3.12]

PROOF.– Define ĝ_n(s) = ∫_0^∞ e^{−st} g_n(t) dt. From conditions C1–C5, it follows that

    ∫_0^∞ R_0^(n)(t) dt = lim_{s→0} ∫_0^∞ e^{−st} R_0^(n)(t) dt = lim_{s→0} [ ĝ_n(s)/(1 − ĝ_n(s)) − 1/(s m_1^(n)) ] = ( m_2^(n) − 2(m_1^(n))² ) / ( 2(m_1^(n))² ).

Similarly, we can prove equation [3.12]. □

To simplify expressions, the superscript (n) in m_i^(n), i = 1, 2, 3, will be omitted.

THEOREM 3.3.– Let conditions C1–C5 be satisfied. Then, uniformly with respect to t ∈ [δ, T], 0 < δ < T < ∞,

    P(ε_n τ_n^(k) > t) = e^{−B_n t} − ε_n V_n (t B_n − 1)( m_2/(2m_1²) − 1 ) e^{−B_n t}
    + ε_n V_n e^{−B_n t} ∫_{t/ε_n}^∞ R_0^(n)(s) ds
    − ε_n V_n B_n e^{−B_n t} ∫_0^t ∫_{u/ε_n}^∞ R_0^(n)(s) ds du − ε_n² B_n V_n e^{−B_n t} (2m_1 m_3 − 3m_2²)/(12m_1³)
    + ε_n² e^{−B_n t} [ (2m_1 m_3 − 3m_2²)/(12m_1³) V_n B_n² t + V_n² ( m_2/(2m_1²) − 1 )² ]
    − ε_n² V_n² ( m_2/(2m_1²) − 1 )² ( 3 − 3e^{−B_n t} − 2B_n t e^{−B_n t} + B_n t e^{−B_n t} ) + o(ε_n²).


Note that the terms

    ε_n V_n B_n e^{−B_n t} ∫_0^t ∫_{u/ε_n}^∞ R_0^(n)(s) ds du   and   ε_n V_n e^{−B_n t} ∫_{t/ε_n}^∞ R_0^(n)(s) ds

of this expansion are called boundary layer terms (Korolyuk and Turbin 1982).

PROOF.– Let us analyze the terms on the right side of equation [3.9]. From equations [3.11]–[3.12] and condition C5, it follows that

    ∫_0^t e^{−B_n(t−u)} R_0^(n)(u/ε_n) du = ε_n ∫_0^{t/ε_n} ( e^{−B_n(t−ε_n s)} − e^{−B_n t} ) R_0^(n)(s) ds + ε_n e^{−B_n t} ∫_0^{t/ε_n} R_0^(n)(s) ds
    = ε_n² B_n e^{−B_n t} ∫_0^{t/ε_n} s R_0^(n)(s) ds + ε_n e^{−B_n t} ∫_0^{t/ε_n} R_0^(n)(s) ds + O(ε_n³)
    = ε_n ( m_2/(2m_1²) − 1 ) e^{−B_n t} + ε_n² B_n e^{−B_n t} (2m_1 m_3 − 3m_2²)/(12m_1³) − ε_n e^{−B_n t} ∫_{t/ε_n}^∞ R_0^(n)(s) ds + o(ε_n²).   [3.13]

Similarly, for the following integral we have

    ∫_0^t ∫_0^u e^{−B_n(t−(u−y))} R_0^(n)((u−y)/ε_n) dy du = ∫_0^t e^{−B_n(t−u)} ∫_0^u e^{−B_n(u−v)} R_0^(n)(v/ε_n) dv du
    = ε_n² B_n t e^{−B_n t} (2m_1 m_3 − 3m_2²)/(12m_1³) + ε_n t ( m_2/(2m_1²) − 1 ) e^{−B_n t}
      + ε_n e^{−B_n t} ∫_0^t ∫_{u/ε_n}^∞ R_0^(n)(s) ds du + o(ε_n²).   [3.14]

Let us show that

    ε_n ∫_0^{t/ε_n} e^{−B_n(t−ε_n z)} ∫_{t/ε_n − z}^∞ R_0^(n)(s) ds R_0^(n)(z) dz → 0 as ε_n → 0.

Indeed,

    | ∫_0^{t/ε_n} e^{−B_n(t−ε_n z)} ∫_{t/ε_n − z}^∞ R_0^(n)(s) ds R_0^(n)(z) dz | ≤ ∫_0^{t/ε_n} ∫_{t/ε_n − z}^∞ |R_0^(n)(s)| ds |R_0^(n)(z)| dz.

Since ∫_0^∞ |R_0^(n)(z)| dz converges, for every ε > 0 there exists T > 0 such that ∫_T^∞ |R_0^(n)(z)| dz < ε. There also exists L > 0 such that, for sufficiently small ε_n, t/ε_n − L > T and ∫_L^∞ |R_0^(n)(z)| dz < ε. Thus,

    ∫_0^{t/ε_n} ∫_{t/ε_n − z}^∞ |R_0^(n)(s)| ds |R_0^(n)(z)| dz = ( ∫_0^L + ∫_L^{t/ε_n} ) ∫_{t/ε_n − z}^∞ |R_0^(n)(s)| ds |R_0^(n)(z)| dz ≤ 2ε ∫_0^∞ |R_0^(n)(s)| ds.

Therefore, taking into account equation [3.13], we get

    ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} R_0^(n)(v/ε_n) R_0^(n)(u/ε_n) dv du
    = ∫_0^t [ ε_n ( m_2/(2m_1²) − 1 ) e^{−B_n(t−u)} + ε_n² B_n e^{−B_n(t−u)} (2m_1 m_3 − 3m_2²)/(12m_1³)
              − ε_n e^{−B_n(t−u)} ∫_{(t−u)/ε_n}^∞ R_0^(n)(s) ds ] R_0^(n)(u/ε_n) du + o(ε_n²)
    = ε_n² e^{−B_n t} ( m_2/(2m_1²) − 1 )² + o(ε_n²).   [3.15]

For the following term, we have

    V_n² B_n ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} R_0^(n)(v/ε_n) ∫_0^u R_0^(n)((u−y)/ε_n) e^{−B_n y} dy dv du
    = ε_n² V_n² ( m_2/(2m_1²) − 1 )² ( 1 − e^{−B_n t} ) + o(ε_n²).   [3.16]

Then,

    V_n² B_n² ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} ∫_0^v R_0^(n)((v−y)/ε_n) e^{−B_n y} dy dv ∫_0^u R_0^(n)((u−z)/ε_n) dz du
    = ε_n² V_n² B_n² ( m_2/(2m_1²) − 1 )² ∫_0^t (t−u) e^{−B_n(t−u)} du + o(ε_n²)
    = ε_n² V_n² ( m_2/(2m_1²) − 1 )² ( 1 − e^{−B_n t} − B_n t e^{−B_n t} ) + o(ε_n²).   [3.17]

Finally,

    V_n² B_n ∫_0^t ∫_0^{t−u} e^{−B_n(t−u−v)} ∫_0^v R_0^(n)((v−y)/ε_n) e^{−B_n y} dy dv R_0^(n)(u/ε_n) du
    = ε_n² V_n² B_n t ( m_2/(2m_1²) − 1 )² + o(ε_n²).   [3.18]

'

By adding equations [3.13]–[3.18], we conclude the proof of theorem 3.3. □

3.2. Asymptotic inequalities for the distribution of the occupation time of a semi-Markov process in an increasing set of states

We will assume that condition C2 of section 3.1 is not fulfilled. We then establish estimates for the distribution of the first hitting time of an infinitely remote level (or hard-to-reach state) of a semi-Markov process. We number the states on the semi-axis of positive integers. Since, for all n > k,

    0 ≤ ρ_k π_{k e_n} / ε_n ≤ 1,

the following lower and upper limits exist:

    liminf_{n→∞} ρ_k π_{k e_n} / ε_n = A_0,   limsup_{n→∞} ρ_k π_{k e_n} / ε_n = A_1.   [3.19]

It was proved in Korolyuk et al. (1979b) that if the limit

    lim_{n→∞} ρ_k π_{k e_n} / ε_n = A   [3.20]

exists, then A does not depend on k. Similarly, we can show that A_0 and A_1 do not depend on k.

THEOREM 3.4.– Let conditions C1, C3, C4 and C5 of section 3.1 be satisfied. Furthermore, suppose that

    lim_{n→∞} m_2^(n) = m_2 < ∞.


Then, uniformly with respect to t ∈ [σ, T], 0 < σ < T,

    exp{ −t sup_{i≥n} B_i } + ε_n min_{λ∈[λ_0, λ_1]} f(λ, t, m_1, m_2) + o(ε_n) ≤ P(ε_n τ_n^(k) ≥ t)
    ≤ exp{ −t inf_{i≥n} B_i } + ε_n max_{λ∈[λ_0, λ_1]} f(λ, t, m_1, m_2) + o(ε_n),   [3.21]

where

    λ_0 = A_0/m_1,   λ_1 = A_1/m_1,   m_1 = Σ_{i=1}^∞ ρ_i a_i,
    f(λ, t, m_1, m_2) = (λ²t − λ)( m_1 − m_2/(2m_1) ) e^{−λt}.

PROOF.– By iterating equation [3.2], we obtain

    F_{ε_n}(t) = e^{−B_n t} − ∫_0^t { e^{−B_n(t−u)} − ∫_0^{t−u} F_{ε_n}(t−u−y) [ V_n R_0^(n)(y/ε_n) − V_n B_n ∫_0^y R_0^(n)((y−s)/ε_n) e^{−B_n s} ds ] dy }
        × [ V_n R_0^(n)(u/ε_n) − V_n B_n ∫_0^u R_0^(n)((u−v)/ε_n) e^{−B_n v} dv ] du.   [3.22]

Let us analyze the terms on the right-hand side of equation [3.22]. Taking into account condition C4, we have

    V_n ∫_0^t e^{−B_n(t−u)} R_0^(n)(u/ε_n) du = ε_n V_n ∫_0^{t/ε_n} ( e^{−B_n(t−ε_n s)} − e^{−B_n t} ) R_0^(n)(s) ds + ε_n V_n e^{−B_n t} ∫_0^{t/ε_n} R_0^(n)(s) ds
    = ε_n (m_1^(n))^{−2} ( m_2^(n)/2 − (m_1^(n))² ) V_n e^{−B_n t} + o(ε_n),   [3.23]

where we have used the result

    ∫_0^∞ R_0^(n)(s) ds = m_2^(n) / (2(m_1^(n))²) − 1.

Random Motions in Markov and Semi-Markov Random Environments 1

Similarly,



Bn V n

t 0

e−Bn (t−u) 

t



u 0

(n)

R0 ((u − v)/εn ) e−Bn v dvdu

e−Bn (t−u)



u/εn

(n)

R0 (s) e−Bn (u−εn s) dsdu 0 0  −2 ' 2 (  (n) (n) (n) −Bn v m2 /2 − m1 + o (εn ) . m1 = εn Bn Vn te

= ε n B n Vn

[3.24]

Considering that Fεn (t) ≤ 1, and condition C4, we obtain 2 t  t−u 2 2 2 (n) (n) 22 Fεn (t − u − y) R0 (y/εn ) R0 (u/εn ) dydu22 Vn 2 0

≤ ε2n

0

'

∞ 0

2 2 (2 2 (n) 2 2R0 (s)2 ds = o (εn ) .

[3.25]

It was proved in Korolyuk et al. (1979b) that (n)

lim m1

n→∞

= m1 =

∞ 

ρi ai .

[3.26]

i=1

Taking into account equation [3.26] and condition C3, we have 2    y 2 2 2 t t−u (n) 2Bn Vn F (t − u − y) R0 ((y − s)/εn ) e−Bn s dsdy ε n 2 0

 ×

u 0

(n) R0

t ≤ 2 ε2n m1

0

0

((u − v)/εn ) e

'

∞ 0

−Bn v

2 2 dvdu22

2 2 (2 2 (n) 2 2R0 (s)2 ds = o (εn ) .

[3.27]

In a similar manner, we can prove that all other terms in the right-hand side of equation [3.22] containing Fεn (t) are of the order o (εn ). Passing to the upper and lower limits in equations [3.23]–[3.24] as n → ∞, and taking into account both equations [3.25]–[3.27] and condition C4, we can now finish the proof of theorem 3.4. As a result of equation [3.21], we have   e−λ1 t + o (1) ≤ P εn τn(k) ≥ t ≤ e−λ0 t + o (1) , where o (1) → 0 as n → +∞.


R EMARK 3.2.– It was proved in Korolyuk et al. (1979b) that if   lim Vn = A > 0 then lim P εn τn(k) ≥ t = e−λt n→∞

n→∞

where λ = A/m1 . We have obtained the generalization of this result without considering the requirement of the existence of the limit for A. Now, the results of section 3.1, with some  additional conditions depending on l, allow us to write inequalities [3.21] up to o εln , ∀ l ≥ 1. C OROLLARY 3.1.– Let us assume that a semi-Markov process {ξ (t)} satisfies conditions C1, C3 and C4 of theorem 3.2. Then, uniformly, with respect to t ∈ [σ,S], 0 < σ < S < ∞,   (n) P πken τn(k) ≥ t = e−t/m1 3 +πken

4

t (n) m1



⎛ (n) m2

⎟ −t/m(n) ⎜ 1 −1 ⎝  + o (πken ) . 2 − 1 ⎠ e (n) 2 m1 (n)

It is easily seen that if m2 = limn→∞ m2 , then ' (' (   (n) t m2 P πken τn(k) ≥ t = e−t/m1 + πken −1 − 1 e−t/m1 m1 2m21 +o (πken ) , where m1 =

∞ i=1

ρi ai (formula [3.26]).

3.3. Asymptotic analysis of the occupation time distribution of an embedded semi-Markov process (with increasing states) in a diffusion process

Let {η(t), t ≥ 0} be a diffusion process with the infinitesimal operator

    A = (σ²(x)/2) d²/dx² + a(x) d/dx,   [3.28]

where σ²(x), a(x) are continuous functions on R. Let us state the following condition:


A1. Suppose

    ∫_{−∞}^{∞} (σ²(x))^{−1} exp{ ∫_{−∞}^{x} 2a(u)/σ²(u) du } dx < ∞.

It is well-known (Gikhman and Skorokhod 1975; Feller 1971) that under condition A1 there exists the stationary distribution of the process η(t). Let us define a semi-Markov process embedded in the diffusion process η(t). We fix a real number Δ > 0 to define the following partition: X = {. . . , −2Δ, −Δ, 0, Δ, 2Δ, . . .}. Assume that η(0) = 0. Consider a sequence of random variables {τ_j, j ≥ 0} as follows: set τ_0 = 0, and denote by τ_1 the first exit time of the process η(t) from (−Δ, Δ); next, if τ_i is the first exit time of η(t) from ((l−1)Δ, (l+1)Δ), then τ_{i+1} is the first exit time of η(t) from (lΔ, (l+2)Δ) if η(τ_i) = (l+1)Δ, or the first exit time of η(t) from ((l−2)Δ, lΔ) if η(τ_i) = (l−1)Δ. Thus, for i ≥ 1,

    τ_i = inf { t > τ_{i−1} : η(t) ∉ [η(τ_{i−1}) − Δ, η(τ_{i−1}) + Δ] }.

Since η(t) is a diffusion process and therefore strictly Markov, the sequence {η(τ_j), j ≥ 0} is a Markov walk on the phase space X. Define h(t) = η(τ_j) for τ_j ≤ t < τ_{j+1}. It is easy to see that the process {h(t)} is semi-Markov with the embedded Markov chain {η(τ_j), j ≥ 0}. It is also easily verified that if h(0) = η(0), then the first time h(t) reaches a level mΔ, m ∈ Z, coincides with the first time the corresponding diffusion process η(t) reaches the same level. In order to apply corollary 3.1 to the semi-Markov process h(t), we define π_{0 e_T}, m_1^T, m_2^T, where T ∈ X (T = kΔ, k > 0) and e_T is the lumped state obtained after merging the set of states X̄_T = X \ X_T, where

    X_T = {−(k−1)Δ, −(k−2)Δ, · · · , −Δ, 0, Δ, · · · , (k−2)Δ, (k−1)Δ};

m_1^T, m_2^T are the first and second moments, respectively, of the hitting time of the set X̄_T by the process η(t).
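The embedded construction above can be sketched in simulation (my illustrative code; the OU coefficients a(x) = −x, σ²(x) = 2 matching example 3.2 are an assumption of the demo): run Euler–Maruyama steps until the path leaves (anchor − Δ, anchor + Δ), snap the exit to the lattice X, and record the sojourn time; the embedded walk then moves by exactly ±Δ.

```python
import numpy as np

# Sketch of the embedded semi-Markov walk {h(t)}: an OU path (assumed
# coefficients a(x) = -x, sigma^2(x) = 2) is run until it exits
# (anchor - Delta, anchor + Delta); the exit point is snapped to +-Delta.
rng = np.random.default_rng(0)
delta, dt = 0.5, 0.001
anchor, walk, sojourns = 0.0, [0.0], []

for _ in range(50):                          # 50 embedded transitions
    x, elapsed = anchor, 0.0
    while abs(x - anchor) < delta:           # diffuse until the +-Delta exit
        x += -x * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        elapsed += dt
    anchor += delta if x > anchor else -delta
    walk.append(anchor)
    sojourns.append(elapsed)

steps = np.diff(walk)
print(sorted(set(np.round(steps, 12))))      # every jump is exactly +-Delta
```

By construction the recorded jumps are ±Δ, so {walk} is a nearest-neighbor walk on X, while the sojourn times carry the diffusion's exit-time law.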


Denote by f_{0,Δ} (respectively, f_{0,−Δ}) the probability that the diffusion process η(t), started from 0, reaches Δ (respectively, −Δ) before reaching −Δ (respectively, Δ). The function f_{0,Δ} is obtained from the solution of the equation

    (σ²(x)/2) d²f/dx² + a(x) df/dx = 0   [3.29]

with the boundary conditions f(−Δ) = 0, f(Δ) = 1 (Gikhman and Skorokhod 1975). By solving equation [3.29], we have

    f_{0,Δ} = ∫_{−Δ}^{0} e^{−∫_{−Δ}^{y} 2a(u)/σ²(u) du} dy / ∫_{−Δ}^{Δ} e^{−∫_{−Δ}^{y} 2a(u)/σ²(u) du} dy.   [3.30]

Similarly, solving equation [3.29] with the boundary conditions f(−Δ) = 1, f(Δ) = 0, we have

    f_{0,−Δ} = ∫_{0}^{Δ} e^{−∫_{−Δ}^{y} 2a(u)/σ²(u) du} dy / ∫_{−Δ}^{Δ} e^{−∫_{−Δ}^{y} 2a(u)/σ²(u) du} dy.

Denote by f_{Δ,T} (respectively, f_{−Δ,−T}) the probability that the diffusion process η(t), started from Δ (respectively, −Δ), reaches the set X̄_T before reaching 0. The function f_{Δ,T} is a solution of equation [3.29] with the boundary conditions f(0) = 0, f(T) = 1:

    f_{Δ,T} = ∫_{0}^{Δ} e^{−∫_{0}^{y} 2a(u)/σ²(u) du} dy / ∫_{0}^{T} e^{−∫_{0}^{y} 2a(u)/σ²(u) du} dy.

Similarly, the function f_{−Δ,−T} is a solution of equation [3.29] with the boundary conditions f(0) = 0, f(−T) = 1:

    f_{−Δ,−T} = ∫_{−Δ}^{0} e^{−∫_{−T}^{y} 2a(u)/σ²(u) du} dy / ∫_{−T}^{0} e^{−∫_{−T}^{y} 2a(u)/σ²(u) du} dy.

Thus, π0 eT = f0,Δ fΔ,T + f0,−Δ f−Δ,−T .

[3.31]
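A hedged numerical sketch (my code; the OU coefficients a(x) = −x, σ²(x) = 2 from example 3.2 are assumed): the scale-function ratios of the form [3.30] can be evaluated by elementary quadrature. For this symmetric drift, f_{0,Δ} = f_{0,−Δ} = 1/2, which the computation reproduces.

```python
import numpy as np

# Evaluate the scale-function ratio [3.30] for a(x) = -x, sigma^2(x) = 2,
# so that int_base^y 2a(u)/sigma^2(u) du = -(y^2 - base^2)/2.
def scale_int(lo, hi, base, n=20001):
    y = np.linspace(lo, hi, n)
    integrand = np.exp((y**2 - base**2) / 2.0)   # exp(- inner integral)
    return np.sum((integrand[:-1] + integrand[1:]) / 2.0) * (y[1] - y[0])

delta = 0.5
denom = scale_int(-delta, delta, -delta)
f_plus = scale_int(-delta, 0.0, -delta) / denom    # f_{0, +Delta}
f_minus = scale_int(0.0, delta, -delta) / denom    # f_{0, -Delta}
print(f_plus, f_minus)   # 0.5 and 0.5: the two exit probabilities sum to 1
```

Any other drift can be substituted by changing the inner integral; the two ratios always sum to 1, since they solve [3.29] with complementary boundary conditions.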


The expectation m_1^T of the occupation time of the process h(t) in the set X_T coincides with the expectation of the first hitting time of the set X̄_T by the process η(t). This expected value can be obtained from the following equation:

    (σ²(x)/2) d²m/dx² + a(x) dm/dx = −1,   [3.32]

with the boundary conditions m(−T) = 0, m(T) = 0 (Gikhman and Skorokhod 1975). By solving equation [3.32], we have (Gikhman and Skorokhod 1975)

    m_1^T = m(0) = −2 ∫_{−T}^{0} (φ(0) − φ(z)) / (σ²(z) φ′(z)) dz + 2 (φ(0)/φ(T)) ∫_{−T}^{T} (φ(T) − φ(z)) / (σ²(z) φ′(z)) dz,   [3.33]

where

    φ(x) = ∫_{−T}^{x} exp{ −∫_{−T}^{y} 2a(u)/σ²(u) du } dy.

[3.34]

with boundary conditions m2 (−T ) = 0, m2 (T ) = 0. By solving equation [3.34], we have    0 s 2 exp − 2a (u)/(σ (u)) du ds −T −T    mT2 = m2 (0) = C  T s 2 exp − −T 2a (u)/(σ (u)) du ds −T  −

0 −T





s −T

2m1 (z) exp 

× exp −

z −T

" 2 2a (u)/(σ (u)) du dz

s −T

2

2a (u)/(σ (u)) du ds,

where  C=

T −T





s −T

2m1 (z) exp

× exp −



s −T

"

z −T

" 2 2a (u)/(σ (u)) du dz 2

"

2a (u)/(σ (u)) du ds.

[3.35]


Let ν_0^T be the sojourn time at state 0 of the lumped semi-Markov process h_{0T}(t), defined on the phase space X_{0T} = {0} ∪ {e_T}, where e_T is the lumped state obtained after merging the set of states X̄_T.

Denote G_T(t) = P(ν_0^T ≤ t). Then m_i^T = E[(ν_0^T)^i], i = 1, 2, · · · . The moments m_j^T = m_j(0) can be calculated as follows (Gikhman and Skorokhod 1975):

    (σ²(x)/2) d²m_j/dx² + a(x) dm_j/dx = −j m_{j−1}(x),   m_j(−T) = m_j(T) = 0.   [3.36]

Let us state the following condition:

A2. For any sufficiently large T > 0, the distribution G_T(t) is spread-out and there exists the moment m_2^T = ∫_0^∞ t² dG_T(t).

Denote by θ_T^(b) the first exit time from X_T of the diffusion process η(t), provided η(0) = b ∈ X, |b| < T, i.e.

    θ_T^(b) = inf { t : |η(t)| = T, η(0) = b }.

THEOREM 3.5.– Suppose conditions A1 and A2 are fulfilled. Then, uniformly with respect to t ∈ [σ, S], 0 < σ < S < ∞,

    P(ε_T θ_T^(0) ≥ t) = e^{−t/m_1^T} + ε_T ( t/m_1^T − 1 )( m_2^T/(2(m_1^T)²) − 1 ) e^{−t/m_1^T} + o(ε_T),

where ε_T = π_{0 e_T} is defined in equation [3.31] and m_1^T, m_2^T can be obtained from equations [3.33]–[3.36].

PROOF.– The result follows by applying corollary 3.1 to the semi-Markov process h(t), taking into account that the hitting time of the set X̄_T by h(t), provided h(0) = 0, is equal in distribution to θ_T^(0). □

EXAMPLE 3.2.– Let τ_{[−T,T]}, T > 0, be the exit time from [−T, T] of the Ornstein–Uhlenbeck process {ξ(t)} with the infinitesimal operator A = ∂²/∂x² − x ∂/∂x. The process ξ(t) satisfies conditions A1–A2, and we can apply theorem 3.5 to the random variable θ_T^(0) = inf {t : |ξ(t)| = T, ξ(0) = 0}.
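As a hedged numerical companion to example 3.2 (my code and parameters, not the book's): for A = ∂²/∂x² − x ∂/∂x, the mean exit time m(0) from [−T, T] solves m″ − x m′ = −1, m(±T) = 0, and symmetry (m′(0) = 0) reduces it to the quadrature m(0) = ∫_0^T e^{y²/2} ∫_0^y e^{−u²/2} du dy, which a crude Euler–Maruyama simulation of dX = −X dt + √2 dW should roughly reproduce.

```python
import numpy as np

# Mean exit time of the OU process of example 3.2 from [-T, T], started at 0:
# quadrature of m(0) = int_0^T exp(y^2/2) int_0^y exp(-u^2/2) du dy, versus an
# Euler-Maruyama Monte Carlo estimate (biased slightly upward, O(sqrt(dt)),
# because intra-step boundary crossings are missed).
def mean_exit_quadrature(T, n=4000):
    y = np.linspace(0.0, T, n)
    dy = y[1] - y[0]
    inner = np.cumsum(np.exp(-y**2 / 2.0)) * dy   # ~ int_0^y exp(-u^2/2) du
    return np.sum(np.exp(y**2 / 2.0) * inner) * dy

def mean_exit_mc(T, n_paths=4000, dt=0.002, seed=1):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    tau = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        k = alive.sum()
        x[alive] += -x[alive] * dt + np.sqrt(2.0 * dt) * rng.standard_normal(k)
        tau[alive] += dt
        alive &= np.abs(x) < T                    # freeze exited paths
    return tau.mean()

print(mean_exit_quadrature(1.0), mean_exit_mc(1.0))  # ~0.6 and a nearby value
```

The quadrature value plays the role of m_1^T in theorem 3.5; the second moment m_2^T could be obtained the same way from [3.34]–[3.36].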


3.4. Asymptotic analysis of a semigroup of operators of the singularly perturbed random evolution in semi-Markov media

In this section, we deal with a random evolution u_ε(t, x) in semi-Markov media ξ(t/ε²) that depends on a small parameter ε > 0. It was proved in Korolyuk (1993), Korolyuk and Swishchuk (1995b) and Korolyuk and Limnios (2004) that, under the balance condition, for a fixed t > 0 the process u_ε(t, x) converges weakly to a diffusion process. By using a common method for solutions of singularly perturbed equations, we obtain asymptotic expansions, in the form of a series of regular and singular terms, for the semigroup of operators of the corresponding three-component Markov process, provided that u_ε(t, x) converges weakly to a diffusion process as ε → 0.

Let {ξ(t), t ≥ 0} be a semi-Markov process on the phase space (E, F) with the semi-Markov kernel Q(x, B, t) = P(x, B) G_x(t), B ∈ F, where P(x, B) are the transition probabilities of the embedded Markov chain {ξ_n, n ≥ 0}, and G_x(t) is the cdf of the sojourn time (holding time) of ξ(t) in x ∈ E. Now, let us assume a function C(u, x), u ∈ R, x ∈ E, such that the following evolution equation is uniquely solvable:

    du(t, x)/dt = C(u(t, x), ξ(t)),   [3.37]

u (0, x) = u0 .

In addition, we assume that the derivative ∂C(u, x)/∂u is bounded for all x ∈ E.

For the fixed parameter ε > 0, considerthe following random transport process  uε (t) in the scaled semi-Markov medium ξ t/ε2   duε (t) 1  = C uε (t) , ξ t/ε2 , dt ε

uε (0) = u0 .

Let us assume that the following four conditions hold: x (t) and the first two moments B1. There exists the pdf gx (t) = dGdt ∞ ∞ 2 (2) = 0 t gx (t) dt, mx = 0 t gx (t) dt for all x ∈ E.

(1) mx

B2. The embedded Markov chain ξn , n = 0, 1, 2, . . . is uniformly ergodic (Kartashov 1985) with the stationary distribution ρ (·), i.e.   n  1    lim  P (i) − Π0  = 0, n→∞  n  i=1

Asymptotic Analysis for Distributions of Evolutions

83

 (i−1) where P (i) (x, (x, dz) P (z, y) with P (z, y) = P (1) (z, y), and  y) = E P Π0 h (x) = E ρ (ds) h (s)1 (x) is the projector of the Markov chain ξn (Volume 1, Chapter 1, section 1.3) (Korolyuk and Turbin 1993). B3. The following supremum is bounded: ∞ Gx (t)dt sup u < ∞, Gx (t) = 1 − Gx (t) . Gx (u) x,u B4. The following moments are positive:  m ˆ (k) = ρ (dx)m(k) x > 0, k = 1, 2. X

Denote by X = E × [0, ∞ ) and by X = F × B+ , where B+ is a Borel σ-algebra on [0, ∞ ). On the phase space X, we consider the following couple of processes: {ς (t) = (ξ (t) , τ (t)) , t ≥ 0} , where τ (t) = t − sup {u ≤ t: ξε (u) = ξ ε (t)}. Denote by B the Banach space of bounded X -measurable functions on X with supremum norm ϕ = sup |ϕ (x, τ )| < ∞. x,τ

It is well-known that ς (t) is a Markov process and its infinitesimal operator Q can be written as (see Volume 1, Chapter 1, section 1.4): Qϕ (x, τ ) = rx (τ ) [P ϕ (x, 0) − ϕ (x, τ )] +

∂ ϕ (x, τ ) , ∂τ

where gx (t) , 1−Gx (t)  P (x, dy) ϕ (y, 0), P ϕ (x, 0) = rx (t) =

E

and ϕ (x, τ ) ∈ B is a continuously differentiable function with respect to τ . Let us introduce the operator Π1 f = (π, f )I, where f ∈ B and I (x, τ ) = 1, for all (x, τ ) ∈ X,   s 1 π (B × [0, s]) = (1) ρ (dx) Gx (s)ds, m ˆ B 0

84

Random Motions in Markov and Semi-Markov Random Environments 1

and the inner product  (π, g) = π (dz) g (z) = X

1 m ˆ (1)

  X

∞ 0

ρ (dx) Gx (s)g (x, s) ds.

It is well known that $\Pi_1$ is the projection operator onto $N(Q)$ (Volume 1, Chapter 1, section 1.3), (Korolyuk and Turbin 1982), i.e. $\Pi_1 Q = Q\Pi_1 = 0$.

As mentioned in Volume 1, Chapter 1, section 1.3, let $R_0$ be the potential operator of the embedded Markov chain $\xi_n$, i.e. $R_0 = \Pi_0 - (P + \Pi_0)^{-1}$. It is proved in lemma 4.2 of Korolyuk and Turbin (1982) that, under conditions C1–C4, the potential operator $R_0$ of $\varsigma(t)$ is given by
\[ R_0 f(x,u) = \frac{\int_0^\infty \bar{G}_x(t) f(x,t)\,dt}{\bar{G}_x(u)} - (\pi,f)\,\frac{\int_0^\infty \bar{G}_x(t)\,dt}{\bar{G}_x(u)} - \frac{1}{\hat{m}^{(1)}} \int_E \rho(dx) \int_0^\infty\!\!\int_y^\infty \bar{G}_x(z) f(x,z)\,dz\,dy + \frac{\hat{m}^{(2)}}{2\hat{m}^{(1)}}\,(\pi,f) + (I - \Pi_1) P R_0 \int_0^\infty \bar{G}_x(z) f(x,z)\,dz - (\pi,f)\,(I - \Pi_1) P R_0\, m(x). \]

We recall that the potential operator of $\varsigma(t)$ is the generalized inverse operator of $Q$, satisfying $R_0 Q = Q R_0 = I - \Pi_1$ (Volume 1, Chapter 1, section 1.3).

It is also well known that the three-component process $\varsigma(t) = (u(t,x), \xi(t), \tau(t))$ is a Markov process on the phase space $\mathbb{R} \times E \times [0,\infty)$ with the infinitesimal operator $A$ given by (Volume 1, Chapter 1, section 1.4), (Korolyuk and Turbin 1982; Corlat et al. 1991):
\[ A\varphi(u,x,\tau) = C(u)\varphi(u,x,\tau) + Q\varphi(u,x,\tau) = C(u)\varphi(u,x,\tau) + r_x(\tau)\left[ P\varphi(u,x,0) - \varphi(u,x,\tau) \right] + \frac{\partial}{\partial \tau}\varphi(u,x,\tau), \]
where $\varphi(u,z,\tau)$ is continuously differentiable with respect to $u$ and $\tau$ and belongs to the domain of the operator $A$. Here
\[ C(u) f(u,x) = C(u,x)\,\frac{\partial}{\partial u} f(u,x), \]

Asymptotic Analysis for Distributions of Evolutions

\[ P\varphi(u,z,0) = \int_E P(z,dy)\,\varphi(u,y,0). \]

Let us consider the scaled process $\varsigma_\varepsilon(t) = \left( u_\varepsilon(t,x),\ \xi(t/\varepsilon^2),\ \tau(t/\varepsilon^2) \right)$. It is easily verified that its infinitesimal operator satisfies
\[ A_\varepsilon \varphi(u,x,\tau) = \frac{1}{\varepsilon}\, C(u)\varphi(u,x,\tau) + \frac{1}{\varepsilon^2}\, r_x(\tau)\left[ P\varphi(u,x,0) - \varphi(u,x,\tau) \right] + \frac{1}{\varepsilon^2}\,\frac{\partial}{\partial \tau}\varphi(u,x,\tau). \]

By defining $\varphi_\varepsilon(t,u,x,\tau) = E\left[ \varphi(\varsigma_\varepsilon(t)) \mid \varsigma_\varepsilon(0) = (u,x,\tau) \right]$, we have (for notational conciseness we write $\theta$ for $(t,u,x,\tau)$ in the rest of this chapter)
\[ \frac{\partial}{\partial t}\varphi_\varepsilon(\theta) = A_\varepsilon \varphi_\varepsilon(\theta) = \frac{1}{\varepsilon}\, C(u)\varphi_\varepsilon(\theta) + \frac{1}{\varepsilon^2}\, r_x(\tau)\left[ P\varphi_\varepsilon(t,u,x,0) - \varphi_\varepsilon(\theta) \right] + \frac{1}{\varepsilon^2}\frac{\partial}{\partial \tau}\varphi_\varepsilon(\theta), \qquad [3.38] \]
with the boundary condition $\varphi_\varepsilon(0,u,x,\tau) = \varphi^{(0)}$. We will assume $\varphi^{(0)}(u) \in C^\infty$.

THEOREM 3.6.– Suppose that conditions C1–C4 are fulfilled. In addition, assume that the following balance condition holds:

B5. $\Pi_1 C(u) \Pi_1 = 0$.

Then, the solution of [3.38] can be expanded in the following form:
\[ \varphi_\varepsilon(\theta) = \varphi^{(0)}(\theta) + \sum_{n=1}^{\infty} \varepsilon^n \left[ \varphi^{(n)}(\theta) + g^{(n)}\!\left( \frac{t}{\varepsilon^2}, u, x, \tau \right) \right], \]
where the first regular term $\varphi^{(0)}(\theta)$ satisfies the diffusion equation
\[ \frac{\partial}{\partial t}\varphi^{(0)}(\theta) + \Pi_1 C(u,x)\frac{\partial}{\partial u}\, R_0\, C(u,x)\frac{\partial}{\partial u}\,\Pi_1 \varphi^{(0)}(\theta) = 0, \]
and all the terms $\varphi^{(k)}(\theta)$, $k \ge 1$, can be calculated recursively. The first singular term has the form
\[ g^{(1)}(\theta) = g^{(1)}(0,u,x,\tau)\,\exp_0\{Qt\}, \]
where $\exp_0\{Qt\} = \exp\{Qt\} - \Pi_1$, and all the terms with $k \ge 2$ can be calculated recursively in the form
\[ g^{(k+1)}(\theta) = \exp_0\{Qt\}\, g^{(k+1)}(0,u,x,\tau) + \int_0^t \exp_0\{Q(t-s)\}\, C(u,x)\frac{\partial}{\partial u} g^{(k)}(\theta)\,ds. \]


PROOF.– By applying the method for singularly perturbed equations developed by Korolyuk and presented in Korolyuk and Turbin (1982), we can find a solution of [3.38] in the form
\[ \varphi_\varepsilon(\theta) = \varphi^{(0)}(\theta) + \sum_{n=1}^{\infty} \varepsilon^n \left[ \varphi^{(n)}(\theta) + g^{(n)}\!\left( \frac{t}{\varepsilon^2}, u, x, \tau \right) \right], \]
with
\[ \varphi^{(0)}(0,u,x,\tau) + \sum_{n=1}^{\infty} \varepsilon^n \left[ \varphi^{(n)}(0,u,x,\tau) + g^{(n)}(0,u,x,\tau) \right] = \varphi^{(0)}, \qquad [3.39] \]
where $\varphi^{(n)}(\theta)$, $n \ge 0$, are the regular terms and $g^{(n)}(t/\varepsilon^2, u, x, \tau)$, $n \ge 1$, are the singular terms of expansion [3.39].

Substituting [3.39] into [3.38], we have for the regular term $\varphi^{(0)}(\theta)$:
\[ Q\varphi^{(0)}(\theta) = r_x(\tau)\left[ P\varphi^{(0)}(t,u,x,0) - \varphi^{(0)}(\theta) \right] + \frac{\partial}{\partial \tau}\varphi^{(0)}(\theta) = 0. \]
Therefore, $\varphi^{(0)}(\theta) \in \ker(Q)$. Then, we can obtain for $\varphi^{(1)}(\theta)$:
\[ C(u,x)\frac{\partial}{\partial u}\varphi^{(0)}(\theta) + r_x(\tau)\left[ P\varphi^{(1)}(t,u,x,0) - \varphi^{(1)}(\theta) \right] + \frac{\partial}{\partial \tau}\varphi^{(1)}(\theta) = 0. \qquad [3.40] \]
Hence,
\[ \varphi^{(1)}(\theta) = -R_0\, C(u,x)\frac{\partial}{\partial u}\varphi^{(0)}(\theta) + n_1(\theta), \qquad [3.41] \]
where $n_1(\theta) \in \ker(Q)$ depends on the initial conditions of equation [3.39]. Now, for $k \ge 2$ we have
\[ \frac{\partial}{\partial t}\varphi^{(k-2)}(\theta) = C(u,x)\frac{\partial}{\partial u}\varphi^{(k-1)}(\theta) + r_x(\tau)\left[ P\varphi^{(k)}(t,u,x,0) - \varphi^{(k)}(\theta) \right] + \frac{\partial}{\partial \tau}\varphi^{(k)}(\theta). \]
Hence,
\[ Q\varphi^{(k)}(\theta) = \frac{\partial}{\partial t}\varphi^{(k-2)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(k-1)}(\theta). \qquad [3.42] \]


For the specific case $k = 2$, we have
\[ Q\varphi^{(2)}(\theta) = \frac{\partial}{\partial t}\varphi^{(0)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(1)}(\theta) = \frac{\partial}{\partial t}\varphi^{(0)}(\theta) + C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\varphi^{(0)}(\theta) - C(u,x)\frac{\partial}{\partial u} n_1(\theta). \]
Now, by applying the operator $\Pi_1$ on the left we obtain
\[ \Pi_1 Q\varphi^{(2)}(\theta) = 0 = \frac{\partial}{\partial t}\Pi_1 \varphi^{(0)}(\theta) - \Pi_1 C(u,x)\frac{\partial}{\partial u}\varphi^{(1)}(\theta), \]
and, taking into account that $\varphi^{(0)}(\theta) \in \ker(Q)$, i.e. $\Pi_1 \varphi^{(0)}(\theta) = \varphi^{(0)}(\theta)$, we obtain
\[ \frac{\partial}{\partial t}\varphi^{(0)}(\theta) + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \varphi^{(0)}(\theta) + \Pi_1 C(u,x)\frac{\partial}{\partial u}\Pi_1 n_1(\theta) = 0. \]
Since the operator $\Pi_1$ does not depend on $u$, the term $\Pi_1 C(u,x)\frac{\partial}{\partial u}\Pi_1 n_1(\theta)$ equals $\Pi_1 C(u,x)\Pi_1 \frac{\partial}{\partial u} n_1(\theta)$, which vanishes by the balance condition B5, $\Pi_1 C(u,x)\Pi_1 = 0$. Therefore,
\[ \frac{\partial}{\partial t}\varphi^{(0)}(\theta) + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \varphi^{(0)}(\theta) = 0. \qquad [3.43] \]
Similarly, it follows from [3.42] that
\[ \varphi^{(k)}(\theta) = R_0\left[ \frac{\partial}{\partial t}\varphi^{(k-2)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(k-1)}(\theta) \right] + n_k(\theta), \qquad [3.44] \]
where $n_k(\theta) \in \ker(Q)$.


To find $n_k(\theta)$, we use the fact that $\varphi^{(0)}(\theta) \in \ker(Q)$, and we set $n_0(\theta) = \varphi^{(0)}(\theta)$. From [3.41] and [3.44], we have for $k = 2$:
\[ \varphi^{(2)}(\theta) = R_0 \frac{\partial}{\partial t} n_0(\theta) + R_0\, C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} n_0(\theta) - R_0\, C(u,x)\frac{\partial}{\partial u} n_1(\theta) + n_2(\theta). \qquad [3.45] \]
By letting $k = 3$ in [3.42], we obtain
\[ Q\varphi^{(3)}(\theta) = \frac{\partial}{\partial t}\varphi^{(1)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(2)}(\theta). \]
Then, by using [3.41] and [3.45], it follows that
\[ Q\varphi^{(3)}(\theta) = -\frac{\partial}{\partial t} R_0\, C(u,x)\frac{\partial}{\partial u} n_0(\theta) + \frac{\partial}{\partial t} n_1(\theta) - C(u,x)\frac{\partial}{\partial u} R_0 \frac{\partial}{\partial t} n_0(\theta) - C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} n_0(\theta) + C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} n_1(\theta) - C(u,x)\frac{\partial}{\partial u} n_2(\theta). \qquad [3.46] \]
Taking into account the balance condition B5, and multiplying [3.46] by $\Pi_1$, we have
\[ \left[ \frac{\partial}{\partial t} + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \right] n_1(\theta) - \left[ \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0 \frac{\partial}{\partial t}\Pi_1 + \Pi_1 \frac{\partial}{\partial t} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \right] n_0(\theta) = 0. \qquad [3.47] \]
After solving [3.47], we can express $n_1(\theta)$ in terms of $n_0(\theta)$. Now, we consider [3.42] for $k = 4$, namely
\[ Q\varphi^{(4)}(\theta) = \frac{\partial}{\partial t}\varphi^{(2)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(3)}(\theta). \qquad [3.48] \]

Let us substitute
\[ \varphi^{(3)}(\theta) = R_0\left[ \frac{\partial}{\partial t}\varphi^{(1)}(\theta) - C(u,x)\frac{\partial}{\partial u}\varphi^{(2)}(\theta) \right] + n_3(\theta) \]

and $\varphi^{(2)}(\theta)$ from [3.45] into [3.48], and then multiply both sides of the resulting equation by $\Pi_1$, using the balance condition B5 in the same manner. We then obtain a differential equation relating $n_1(\theta)$ and $n_2(\theta)$; after solving it, we can express $n_2(\theta)$ in terms of $n_1(\theta)$. The same procedure can be applied to obtain $n_i(\theta)$ for all $i = 0, 1, 2, \ldots$ as follows:
\[ \left[ \frac{\partial}{\partial t} + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \right] n_{i+1}(\theta) - \left[ \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0 \frac{\partial}{\partial t}\Pi_1 + \Pi_1 \frac{\partial}{\partial t} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 + \Pi_1 C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u} R_0\, C(u,x)\frac{\partial}{\partial u}\Pi_1 \right] n_i(\theta) = 0. \]
By solving [3.43] and substituting $\varphi^{(0)}(\theta)$, $\varphi^{(1)}(\theta)$ into [3.42], we obtain $\varphi^{(2)}(\theta)$. Continuing the iterative process in equation [3.42], we obtain $\varphi^{(k)}(\theta)$ for all $k = 0, 1, 2, \ldots$.

Then, regarding the first singular term, we have
\[ \frac{\partial}{\partial t} g^{(1)} = Q g^{(1)}, \qquad [3.49] \]
and for $k \ge 1$ we have
\[ \frac{\partial}{\partial t} g^{(k+1)} - Q g^{(k+1)} = C(u,x)\frac{\partial}{\partial u} g^{(k)}. \qquad [3.50] \]
By solving [3.49], we obtain $g^{(1)}(\theta) = \exp_0\{Qt\}\, g^{(1)}(0,u,x,\tau)$, where the modified exponent $\exp_0\{Qt\} = \exp\{Qt\} - \Pi_1$ satisfies
\[ \lim_{t\to\infty} \left\| \exp\{Qt\} - \Pi_1 \right\| = 0. \]


Thus, $\lim_{t\to\infty} \| g^{(1)}(\theta) \| = 0$. It is easily verified that the semigroup of operators of the Markov process $(\xi(t), \tau(t))$ is a solution of equation [3.49]. By solving equation [3.50], we have
\[ g^{(k+1)}(\theta) = \exp_0\{Qt\}\, g^{(k+1)}(0,u,x,\tau) + \int_0^t \exp_0\{Q(t-s)\}\, C(u,x)\frac{\partial}{\partial u} g^{(k)}(s,u,x,\tau)\,ds. \]
Thus, $g^{(k)}$ can be obtained from [3.50] recursively for all $k \ge 1$, and the theorem is proved.

3.5. Asymptotic expansion for distribution of random motion in Markov media under the Kac condition

In this section, we study an asymptotic expansion for the distribution of the random motion of a particle driven by a Markov process under a diffusion approximation. We show that the singularly perturbed equation of a Markov random motion can be reduced to a regularly perturbed equation for the distribution of the random motion.

3.5.1. The equation for the probability density of the particle position performing a random walk in Rn

Let us consider the random motion of a particle in $\mathbb{R}^n$ driven by a Markov process $\{\xi(t)\}$, whose sojourn times at states are exponentially distributed with rate $\lambda > 0$, and with transition probabilities $p_{ij} = \frac{1}{2n-1}(1 - \delta_{ij})$, $i, j \in E = \{1, 2, \ldots, 2n\}$, where $E$ is the phase space of $\xi(t)$.

Let $e_1, \ldots, e_n$ be a Cartesian basis of $\mathbb{R}^n$. Define $e_{n+k} = -e_k$, $k = 1, 2, \ldots, n$, and $v_i = v\, e_i$, $i = 1, 2, \ldots, 2n$, where $v > 0$ is the constant speed of the particle. We assume that the particle moves in $n$-dimensional space in the following manner: if at some instant $t$ the particle has velocity $v_i$, then at a renewal epoch of the Markov process the particle takes a new velocity $v_j$, $j \ne i$, with probability $p_{ij} = \frac{1}{2n-1}$. The particle continues its motion with velocity $v_j$ until the next renewal epoch of the Markov process, and so on. Let us denote by $r(t) = (x_1(t), x_2(t), \ldots, x_n(t))$, $t \ge 0$, the particle position at time $t$.

Consider the function $C(i) = (C_1(i), C_2(i), \ldots, C_n(i)) = v_i$, $i \in E$. Then the position of the particle at time $t$ can be expressed as
\[ r(t) = r(0) + \int_0^t C(\xi(s))\,ds. \]
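The dynamics just described are easy to simulate exactly: sojourn times are exponential and the path is piecewise linear along the coordinate axes. The following sketch (plain Python; all numerical parameter values are illustrative choices of ours, not taken from the text) samples $r(T)$ and checks two facts: the total path length up to time $t$ equals $vt$ exactly, and the per-coordinate variance is close to $2Dt$ with $D = (2n-1)v^2/(2n^2\lambda)$, the unscaled counterpart of the diffusion coefficient $(2n-1)/(2n^2)$ that appears in section 3.5.3.

```python
import random

def sample_endpoint(n, v, lam, T, rng):
    """One exact realization of r(T): directions +-e_1,...,+-e_n, sojourn
    times Exp(lam), and a uniform jump to one of the other 2n - 1
    directions at every renewal epoch of the driving Markov chain."""
    pos = [0.0] * n
    k = rng.randrange(2 * n)              # current direction index
    t, length = 0.0, 0.0
    while True:
        dt = rng.expovariate(lam)
        last = t + dt >= T
        if last:
            dt = T - t                    # truncate the final sojourn at T
        axis, sign = k % n, (1.0 if k < n else -1.0)
        pos[axis] += sign * v * dt
        length += v * dt
        if last:
            return pos, length
        t += dt
        j = rng.randrange(2 * n - 1)      # pick one of the remaining directions
        k = j if j < k else j + 1

rng = random.Random(7)
n, v, lam, T, N = 3, 1.5, 2.0, 20.0, 4000
first_coord = []
for _ in range(N):
    pos, length = sample_endpoint(n, v, lam, T, rng)
    assert abs(length - v * T) < 1e-6     # constant speed: path length == v*T
    first_coord.append(pos[0])
mean = sum(first_coord) / N
var_hat = sum((x - mean) ** 2 for x in first_coord) / N
D = (2 * n - 1) * v * v / (2 * n * n * lam)   # = (5/18) v^2/lam for n = 3
```

Under the scaling $\lambda = \varepsilon^{-2}$, $v = \varepsilon^{-1}$ of section 3.5.3, this $D$ reduces to $(2n-1)/(2n^2)$, i.e. $5/18$ for $n = 3$.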

3.5.2. Equation for the probability density of the particle position

Let us consider the bivariate stochastic process $\varsigma(t) = (r(t), \xi(t))$ with the phase space $\mathbb{R}^n \times E$. It is well known that this process is a Markov process with the generating operator (Korolyuk 1993; Korolyuk and Korolyuk 1999):
\[ A\varphi(r,i) = C(i)\varphi(r,i) + \lambda\left[ P\varphi(r,i) - \varphi(r,i) \right], \qquad i \in E, \qquad [3.51] \]
where
\[ C(i)\varphi(r,i) = C_1(i)\frac{\partial}{\partial x_1}\varphi(r,i) + C_2(i)\frac{\partial}{\partial x_2}\varphi(r,i) + \ldots + C_n(i)\frac{\partial}{\partial x_n}\varphi(r,i) \]
and
\[ P\varphi(r,i) = \frac{1}{2n-1}\sum_{\substack{j\in E \\ j\ne i}} \varphi(r,j). \]
Now, let us consider the density function
\[ f_i(t,x_1,\ldots,x_n)\,dx_1\ldots dx_n = P\{x_1 \le x_1(t) \le x_1 + dx_1, \ldots, x_n \le x_n(t) \le x_n + dx_n;\ \xi(t) = i\}. \]
It is easily verified that
\[ f(t,x_1,\ldots,x_n) = \sum_{i\in E} f_i(t,x_1,\ldots,x_n) \]

is the probability density of the particle position in $\mathbb{R}^n$ at time $t$.

LEMMA 3.4.– The function $f$ satisfies the following differential equation:
\[ \prod_{k=1}^{2n}\left[ \frac{\partial}{\partial t} + (-1)^k v \frac{\partial}{\partial x_k} + \frac{2n\lambda}{2n-1} \right] f - \frac{\lambda}{2n-1}\sum_{l=1}^{2n} \prod_{\substack{k=1 \\ k\ne l}}^{2n} \left[ \frac{\partial}{\partial t} + (-1)^k v \frac{\partial}{\partial x_k} + \frac{2n\lambda}{2n-1} \right] f = 0. \]

PROOF.– For $i \in E$, the function $f_i$ satisfies the first Kolmogorov equation, namely
\[ \frac{\partial}{\partial t} f_i(t,x_1,\ldots,x_n) = A f_i(t,x_1,\ldots,x_n), \qquad i \in E, \qquad [3.52] \]
with initial conditions $f_i(0,x_1,\ldots,x_n) = f_i^{(0)}$.

Equation [3.52] can be written in more detail as follows:
\[ \frac{\partial f_i(t,x_1,\ldots,x_n)}{\partial t} + (-1)^i v \frac{\partial f_i(t,x_1,\ldots,x_n)}{\partial x_i} + \lambda f_i(t,x_1,\ldots,x_n) - \frac{\lambda}{2n-1}\sum_{j\in E\setminus i} f_j(t,x_1,\ldots,x_n) = 0, \qquad i \in E. \qquad [3.53] \]
Let us define
\[ \mathbf{f}(t,x_1,\ldots,x_n) = \{ f_i(t,x_1,\ldots,x_n),\ i \in E \}. \]
The set of equations [3.53] can be written in the form $L_{2n}\mathbf{f} = 0$, where $L_{2n} = (l_{ij})_{i,j\in E}$ with
\[ l_{kk} = \frac{\partial}{\partial t} + (-1)^k v \frac{\partial}{\partial x_k} + \lambda, \qquad l_{ik} = -\frac{\lambda}{2n-1}, \quad i \ne k, \quad i,k \in E. \]
The function $f$ satisfies the equation
\[ \det(L_{2n})\, f = 0, \qquad [3.54] \]
with the initial condition $f(0,x_1,\ldots,x_n) = f_0 = \sum_{i\in E} f_i^{(0)}$.

The determinant of the matrix $L_{2n}$ is well known and has the form
\[ \det(L_{2n}) = \prod_{k=1}^{2n}\left[ \frac{\partial}{\partial t} + (-1)^k v \frac{\partial}{\partial x_k} + \frac{2n\lambda}{2n-1} \right] - \frac{\lambda}{2n-1}\sum_{l=1}^{2n} \prod_{\substack{k=1 \\ k\ne l}}^{2n}\left[ \frac{\partial}{\partial t} + (-1)^k v \frac{\partial}{\partial x_k} + \frac{2n\lambda}{2n-1} \right]. \]
Since $v\frac{\partial}{\partial x_k}$ and $-v\frac{\partial}{\partial x_k}$ appear symmetrically in $L_{2n}$, it is easy to see that all monomials of the polynomial $\det(L_{2n})$ contain $v^k$ only for even powers $k \ge 0$.
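The closed form for $\det(L_{2n})$ rests on the standard determinant identity for a matrix with diagonal entries $\delta_k$ and constant off-diagonal entries $-c$: $\det = \prod_k(\delta_k + c) - c\sum_l \prod_{k\ne l}(\delta_k + c)$. Since the operators $\partial/\partial t$ and $\partial/\partial x_k$ commute, the identity can be checked as a polynomial identity. The sketch below verifies it exactly, with rational arithmetic, on arbitrary test values standing in for the symbols (the values are hypothetical, chosen only for the check).

```python
from fractions import Fraction as F

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    m = len(M)
    sign = F(1)
    for i in range(m):
        if M[i][i] == 0:                      # pivot if needed
            for r in range(i + 1, m):
                if M[r][i] != 0:
                    M[i], M[r] = M[r], M[i]
                    sign = -sign
                    break
            else:
                return F(0)
        for r in range(i + 1, m):
            f = M[r][i] / M[i][i]
            for c2 in range(i, m):
                M[r][c2] -= f * M[i][c2]
    p = sign
    for i in range(m):
        p *= M[i][i]
    return p

def prod(xs):
    p = F(1)
    for x in xs:
        p *= x
    return p

# arbitrary rational stand-ins for delta_k = d/dt + (-1)^k v d/dx_k + lambda
# and for c = lambda/(2n - 1)
delta = [F(a, b) for a, b in [(3, 2), (-1, 3), (5, 7), (2, 1), (-4, 5), (7, 6)]]
c = F(2, 9)
m = len(delta)
M = [[delta[i] if i == j else -c for j in range(m)] for i in range(m)]

lhs = det(M)
rhs = prod(d + c for d in delta) - c * sum(
    prod(delta[k] + c for k in range(m) if k != l) for l in range(m)
)
```

With $\delta_k + c = \partial/\partial t + (-1)^k v\,\partial/\partial x_k + 2n\lambda/(2n-1)$ and $c = \lambda/(2n-1)$, this is exactly the formula used above.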


3.5.3. Reduction of a singularly perturbed evolution equation to a regularly perturbed equation

Let us define $\lambda = \varepsilon^{-2}$, $v = c\varepsilon^{-1}$, where $\varepsilon > 0$ is a small parameter. It is well known (Kac 1974) that the solution of equation [3.54] in the hydrodynamic limit (as $\varepsilon \to 0$) converges weakly to the corresponding functional of a Wiener process.

As mentioned before, we can find an asymptotic expansion of the solution of equation [3.54], consisting of regular and singular terms. However, this technique may involve tedious calculations, so we follow an alternative approach here.

PROPOSITION 3.1.– The equation $\det(L_{2n})f = 0$ is regularly perturbed, i.e. if we multiply it by $\varepsilon^{4n-2}$, then we obtain
\[ \frac{\partial}{\partial t} f = \frac{2n-1}{2n^2}\left[ \frac{\partial^2}{\partial x_1^2} + \ldots + \frac{\partial^2}{\partial x_n^2} \right] f + D_\varepsilon f, \qquad [3.55] \]
where $D_\varepsilon = \varepsilon^2 D_1 + \varepsilon^4 D_2 + \ldots$, and $D_i$, $i = 1, 2, \ldots$, are the respective differential operators.

PROOF.– To avoid cumbersome expressions, we consider the case $n = 3$. Let us define $x = x_1$, $y = x_2$, $z = x_3$, and abbreviate the coordinates $(x, y, z)$ as $\psi$ in this section. In this case, equation [3.53] has the form
\[ \frac{\partial}{\partial t} f_i(t,\psi) + (-1)^i v \frac{\partial}{\partial x_i} f_i(t,\psi) + \lambda f_i(t,\psi) - \frac{\lambda}{5}\sum_{\substack{j\in E \\ j\ne i}} f_j(t,\psi) = 0, \qquad i \in E = \{1,\ldots,6\}. \qquad [3.56] \]

With $v = \varepsilon^{-1}$ and $\lambda = \varepsilon^{-2}$, we obtain the singularly perturbed system of equations
\[ \frac{\partial}{\partial t} f_i(t,\psi) + (-1)^i \varepsilon^{-1}\frac{\partial}{\partial x_i} f_i(t,\psi) + \varepsilon^{-2} f_i(t,\psi) - \frac{\varepsilon^{-2}}{5}\sum_{\substack{j\in E \\ j\ne i}} f_j(t,\psi) = 0, \qquad i \in E. \]
Let us consider $\det(L_6)f = 0$, where $f(t,x_1,x_2,x_3) = \sum_{i\in E} f_i(t,x_1,x_2,x_3)$.


It is easy to see that the elements of the matrix $L_6 = (l_{ij})_{i,j\in E}$ are
\[ l_{ii} = \frac{\partial}{\partial t} + (-1)^i v \frac{\partial}{\partial x_i} + \lambda, \qquad l_{ij} = -\frac{\lambda}{5}, \quad i \ne j, \quad i,j \in \{1,2,\ldots,6\}. \]
Hence, the equation $\det(L_6)f = 0$ takes the form
\[ \det(L_6) f(t,\psi) = \left\{ \frac{7776}{3125}\varepsilon^{-10}\frac{\partial}{\partial t} - \frac{432}{625}\varepsilon^{-10}\Delta + \frac{1296}{125}\varepsilon^{-8}\frac{\partial^2}{\partial t^2} - \frac{432}{125}\varepsilon^{-8}\frac{\partial}{\partial t}\Delta + \frac{24}{25}\varepsilon^{-8}\Delta^{(2)} + \frac{432}{25}\varepsilon^{-6}\frac{\partial^3}{\partial t^3} - \frac{144}{25}\varepsilon^{-6}\frac{\partial^2}{\partial t^2}\Delta + 2\varepsilon^{-6}\frac{\partial}{\partial t}\Delta^{(2)} - \varepsilon^{-6}\frac{\partial^6}{\partial x^2\,\partial y^2\,\partial z^2} + \frac{72}{5}\varepsilon^{-4}\frac{\partial^4}{\partial t^4} - 4\varepsilon^{-4}\frac{\partial^3}{\partial t^3}\Delta + \varepsilon^{-4}\frac{\partial^2}{\partial t^2}\Delta^{(2)} + 6\varepsilon^{-2}\frac{\partial^5}{\partial t^5} - \varepsilon^{-2}\frac{\partial^4}{\partial t^4}\Delta + \frac{\partial^6}{\partial t^6} \right\} f(t,\psi) = 0, \qquad [3.57] \]
with the initial condition $f(0,\psi) = f_0$, where
\[ \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \qquad \Delta^{(2)} = \frac{\partial^4}{\partial x^2\,\partial y^2} + \frac{\partial^4}{\partial y^2\,\partial z^2} + \frac{\partial^4}{\partial x^2\,\partial z^2}. \]
Multiplying [3.57] by $\varepsilon^{10}$ (and normalizing the leading coefficient), we obtain
\[ \left\{ \left[ \frac{\partial}{\partial t} - \frac{5}{18}\Delta \right] + \varepsilon^2\left[ \frac{25}{6}\frac{\partial^2}{\partial t^2} - \frac{1}{3}\frac{\partial}{\partial t}\Delta + \frac{5}{54}\Delta^{(2)} \right] + \varepsilon^4\left[ \frac{125}{18}\frac{\partial^3}{\partial t^3} - \frac{1}{3}\frac{\partial^2}{\partial t^2}\Delta + \frac{5}{216}\frac{\partial}{\partial t}\Delta^{(2)} \right] + \varepsilon^6\left[ \frac{625}{216}\frac{\partial^4}{\partial t^4} - \frac{5}{18}\frac{\partial^3}{\partial t^3}\Delta + \frac{5}{72}\frac{\partial^2}{\partial t^2}\Delta^{(2)} \right] + \varepsilon^8\left[ \frac{3125}{1296}\frac{\partial^5}{\partial t^5} - \frac{1}{6}\frac{\partial^4}{\partial t^4}\Delta \right] + \varepsilon^{10}\,\frac{3125}{7776}\frac{\partial^6}{\partial t^6} \right\} f = 0. \qquad [3.58] \]

LEMMA 3.5.– The solution of equation [3.58] with the initial condition $f(0,\psi) = u_0(0,\psi) + \varepsilon^2 u_1(0,\psi) + \varepsilon^4 u_2(0,\psi) + \ldots$ has the asymptotic expansion
\[ f(t,\psi) = u_0(t,\psi) + \varepsilon^2 u_1(t,\psi) + \varepsilon^4 u_2(t,\psi) + \ldots, \qquad [3.59] \]
where the main term $u_0(t,\psi)$ is the solution of the equation
\[ \frac{\partial}{\partial t} u_0(t,\psi) = \frac{5}{18}\Delta u_0(t,\psi). \]


PROOF.– To find the asymptotic expansion of the solution of equation [3.58], we apply the method used in the proof of theorem 3.6. In conformity with this method, the solution of [3.58] can be expanded into the series [3.59], where $\varepsilon > 0$ is a small parameter. Substituting [3.59] into [3.58], we obtain the following equations for the computation of $u_i$, $i \ge 0$:
\[ \frac{\partial}{\partial t} u_0(t,\psi) = \frac{5}{18}\Delta u_0(t,\psi); \]
\[ \frac{\partial}{\partial t} u_1(t,\psi) = \frac{5}{18}\Delta u_1(t,\psi) - \left[ \frac{25}{6}\frac{\partial^2}{\partial t^2} - \frac{1}{3}\frac{\partial}{\partial t}\Delta + \frac{5}{54}\Delta^{(2)} \right] u_0(t,\psi); \]
\[ \frac{\partial}{\partial t} u_2(t,\psi) = \frac{5}{18}\Delta u_2(t,\psi) - \left[ \frac{25}{6}\frac{\partial^2}{\partial t^2} - \frac{1}{3}\frac{\partial}{\partial t}\Delta + \frac{5}{54}\Delta^{(2)} \right] u_1(t,\psi) - \left[ \frac{125}{18}\frac{\partial^3}{\partial t^3} - \frac{1}{3}\frac{\partial^2}{\partial t^2}\Delta + \frac{5}{216}\frac{\partial}{\partial t}\Delta^{(2)} \right] u_0(t,\psi); \]
\[ \vdots \]
and, for $m \ge 0$,
\[ \left[ \frac{\partial}{\partial t} - \frac{5}{18}\Delta \right] u_{m+5} + \left[ \frac{25}{6}\frac{\partial^2}{\partial t^2} - \frac{1}{3}\frac{\partial}{\partial t}\Delta + \frac{5}{54}\Delta^{(2)} \right] u_{m+4} + \left[ \frac{125}{18}\frac{\partial^3}{\partial t^3} - \frac{1}{3}\frac{\partial^2}{\partial t^2}\Delta + \frac{5}{216}\frac{\partial}{\partial t}\Delta^{(2)} \right] u_{m+3} + \left[ \frac{625}{216}\frac{\partial^4}{\partial t^4} - \frac{5}{18}\frac{\partial^3}{\partial t^3}\Delta + \frac{5}{72}\frac{\partial^2}{\partial t^2}\Delta^{(2)} \right] u_{m+2} + \left[ \frac{3125}{1296}\frac{\partial^5}{\partial t^5} - \frac{1}{6}\frac{\partial^4}{\partial t^4}\Delta \right] u_{m+1} + \frac{3125}{7776}\frac{\partial^6}{\partial t^6}\, u_m = 0. \]
Let us consider $f_k^\varepsilon(t,\psi) = u_0(t,\psi) + \varepsilon^2 u_1(t,\psi) + \ldots + \varepsilon^{2k} u_k(t,\psi)$. The solution of a singularly perturbed equation of the type [3.53], and the remainder of the asymptotic expansion in the context of the diffusion approximation, were studied in Korolyuk and Turbin (1993). Taking into account that $f(t,\psi) = \sum_{i\in E} f_i(t,\psi)$, it follows from the estimate of the remainder given in Korolyuk and Turbin (1993) that $f(t,\psi) - f_k^\varepsilon(t,\psi) = O(\varepsilon^k)$.

3.6. Asymptotic estimation for application of the telegraph process as an alternative to the diffusion process in the Black–Scholes formula

In this section, we study the one-dimensional transport process in the case of disbalance. In the hydrodynamic limit, this process approximates a diffusion process on the line. The application of the telegraph process to option pricing was studied in Ratanov (2007). By using the asymptotic expansion for the singularly perturbed random evolution in Markov media with no balance condition, we study the accuracy of using the telegraph process in the Black–Scholes formula as an alternative to the diffusion process.

3.6.1. Asymptotic expansion for the singularly perturbed random evolution in Markov media in case of disbalance

Let $\{\xi(t),\ t \ge 0\}$ be a Markov process on the phase space $\{0,1\}$ with the infinitesimal matrix
\[ Q = \lambda \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}. \]

where v (t) =

v0 , when ξ (t) = 0; v1 , when ξ (t) = 1.

The infinitesimal operator A of the bivariate process {ς (t) = (x (t) , ξ (t)) , t ≥ 0} is of the following form: Aϕ (x, 0) = v0

∂ ϕ (x, 0) + λϕ (x, 1) − λϕ (x, 0) ∂x

Aϕ (x, 1) = v1

∂ ϕ (x, 1) + λϕ (x, 0) − λϕ (x, 1) , ∂x

where ϕ ∈ D(A) is the domain of the operator A and x ∈ R.

[3.60]


We can interpret this operator in the following equivalent manner: let $Z = \mathbb{R} \times \{0,1\}$ and
\[ T_t \varphi(x,i) = \int_{\mathbb{R}} \varphi(z)\, P\{\varsigma(t) \in dz \mid \varsigma(0) = (x,i)\}, \qquad i \in \{0,1\}. \]
Then,
\[ A\varphi(x,i) = \lim_{t\to 0^+} \frac{T_t \varphi(x,i) - \varphi(x,i)}{t}. \]
Let us define $u_i(x,t) = T_t \varphi(x,i)$. Thus,
\[ \frac{\partial}{\partial t} u_i(x,t) = A T_t \varphi(x,i) = A u_i(x,t). \]
Now, let us consider the scaled evolution
\[ x_\varepsilon(t) = x_0 + \frac{1}{\varepsilon}\int_0^t v\!\left( \frac{s}{\varepsilon^2} \right) ds, \]
with the corresponding velocities $v_i/\varepsilon$, $i \in \{0,1\}$. Since the Markov process $v(t/\varepsilon^2)$ runs on the scaled time, its infinitesimal operator is $\frac{1}{\varepsilon^2}Q$.

Hence, we obtain the following system of backward Kolmogorov differential equations:
\[ \frac{\partial}{\partial t} u_0^\varepsilon(x,t) = \frac{v_0}{\varepsilon}\frac{\partial}{\partial x} u_0^\varepsilon(x,t) + \frac{\lambda}{\varepsilon^2} u_1^\varepsilon(x,t) - \frac{\lambda}{\varepsilon^2} u_0^\varepsilon(x,t), \]
\[ \frac{\partial}{\partial t} u_1^\varepsilon(x,t) = \frac{v_1}{\varepsilon}\frac{\partial}{\partial x} u_1^\varepsilon(x,t) + \frac{\lambda}{\varepsilon^2} u_0^\varepsilon(x,t) - \frac{\lambda}{\varepsilon^2} u_1^\varepsilon(x,t). \qquad [3.61] \]
Equations [3.61] can be written in matrix form as
\[ \frac{\partial}{\partial t}\mathbf{u}^\varepsilon(x,t) = \frac{1}{\varepsilon}\, V\nabla\, \mathbf{u}^\varepsilon(x,t) + \frac{1}{\varepsilon^2}\, Q\, \mathbf{u}^\varepsilon(x,t), \qquad [3.62] \]
where
\[ \mathbf{u}^\varepsilon(x,t) = \begin{pmatrix} u_0^\varepsilon(x,t) \\ u_1^\varepsilon(x,t) \end{pmatrix}, \qquad V = \begin{pmatrix} v_0 & 0 \\ 0 & v_1 \end{pmatrix}, \qquad Q = \lambda \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}, \]
and
\[ V\nabla = \begin{pmatrix} v_0 \frac{\partial}{\partial x} & 0 \\ 0 & v_1 \frac{\partial}{\partial x} \end{pmatrix}. \]


Let us define
\[ R_0 = \int_0^\infty \left( e^{Qt} - \Pi \right) dt, \]
where $e^{Qt} = \{p_{ij}(t);\ i,j \in \{0,1\}\}$ is the matrix of time-dependent transition probabilities, and
\[ \Pi = \begin{pmatrix} \frac12 & \frac12 \\ \frac12 & \frac12 \end{pmatrix} \]
is the projector onto $N(Q) = \ker(Q)$ (Volume 1, Chapter 1, section 1.2). Thus, $R_0 = \Pi - (Q + \Pi)^{-1}$ is the potential operator of $\xi(t)$ (Volume 1, Chapter 1, section 1.3) (Korolyuk and Turbin 1976).

The balance condition states that $v_0 + v_1 = 0$, i.e. $v_1 = -v_0$. This balance condition can also be expressed as $\Pi V \Pi = 0$ (see condition B5 of theorem 3.6).

Now, we consider the following disbalance condition: $v_0 = v + \Delta_1$ and $v_1 = -v - \Delta_2$, where $\Delta_i = \varepsilon a_i$, $i = 1,2$, $\varepsilon > 0$. It is easily verified that the infinitesimal operator of the process $\zeta_\varepsilon(t) = \left( x_\varepsilon(t), \xi(t/\varepsilon^2) \right)$ is of the form
\[ A_\varepsilon = \frac{1}{\varepsilon}\, V\nabla + A\nabla + \frac{1}{\varepsilon^2}\, Q, \]
where now $V = \operatorname{diag}(v, -v)$ is the balanced part of the velocity matrix and
\[ A = \begin{pmatrix} a_1 & 0 \\ 0 & -a_2 \end{pmatrix}. \]
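For this two-state $Q$, the objects $\Pi$, $e^{Qt}$ and $R_0$ are concrete enough to verify numerically. The sketch below ($\lambda = 0.8$ is an arbitrary test value of ours) checks that $\Pi - (Q+\Pi)^{-1}$ coincides with $\int_0^\infty (e^{Qt} - \Pi)\,dt$, computing the matrix exponential by scaling and squaring and truncating the integral at a time where $e^{Qt}$ has essentially reached $\Pi$.

```python
lam = 0.8
Q  = [[-lam, lam], [lam, -lam]]
Pi = [[0.5, 0.5], [0.5, 0.5]]
I2 = [[1.0, 0.0], [0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B, s=1.0):
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(A, s):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def expm(A, s=20, terms=18):
    """Matrix exponential by scaling and squaring with a Taylor series."""
    B = mat_scale(A, 1.0 / 2 ** s)
    E, term = I2, I2
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, B), 1.0 / k)
        E = mat_add(E, term)
    for _ in range(s):
        E = mat_mul(E, E)
    return E

# definition 1: R0 = Pi - (Q + Pi)^{-1}
R0_a = mat_add(Pi, inv2(mat_add(Q, Pi)), s=-1.0)

# definition 2: R0 = integral_0^inf (e^{Qt} - Pi) dt, midpoint rule on [0, T]
T, h = 12.0, 0.01
R0_b = [[0.0, 0.0], [0.0, 0.0]]
t = 0.0
while t < T:
    mid = mat_add(expm(mat_scale(Q, t + h / 2)), Pi, s=-1.0)
    R0_b = mat_add(R0_b, mat_scale(mid, h))
    t += h
```

For this $Q$ one finds $R_0 = \frac{1}{4\lambda}\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}$, so in particular $\Pi R_0 = R_0 \Pi = 0$.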

Denote
\[ u_i^\varepsilon(x,t) = T_t^\varepsilon \varphi(x,i) = \int_Z \varphi(z)\, P\{\zeta_\varepsilon(t) \in dz \mid \zeta_\varepsilon(0) = (x,i)\}. \]
Then, similarly to the above, we can write the matrix equation
\[ \frac{\partial}{\partial t}\mathbf{u}^\varepsilon(x,t) = \frac{1}{\varepsilon}\, V\nabla\, \mathbf{u}^\varepsilon(x,t) + A\nabla\, \mathbf{u}^\varepsilon(x,t) + \frac{1}{\varepsilon^2}\, Q\, \mathbf{u}^\varepsilon(x,t). \qquad [3.63] \]
By using the technique of asymptotic expansion, as in the proof of theorem 3.6, we look for a solution of equation [3.63] in the form
\[ \mathbf{u}^\varepsilon(x,t) = \mathbf{u}^{(0)}(x,t) + \sum_{n=1}^{\infty} \varepsilon^n \left[ \mathbf{u}^{(n)}(x,t) + \mathbf{v}^{(n)}\!\left( x, t/\varepsilon^2 \right) \right], \qquad [3.64] \]
where $\mathbf{u}^{(n)}(x,t)$, $n = 0, 1, 2, \ldots$, are the regular terms of the expansion, whereas $\mathbf{v}^{(n)}(x, t/\varepsilon^2)$, $n = 1, 2, \ldots$, are the singular ones.


Then, by substituting equation [3.64] into [3.63], we obtain Qu(0) (x, t) = 0,

Qu(1) (x, t) + V ∇u(0) (x, t) = 0,

∂ (0) [3.65] u (x, t) = 0, ∂t ∂ Qu(k+2) (x, t) + V ∇u(k+1) (x, t) + A∇u(k) (x, t) − u(k) (x, t) = 0, ∂t for k ≥ 0. Qu(2) (x, t) + V ∇u(1) (x, t) + A∇u(0) (x, t) −

Hence, u(0) (x, t) ∈ N (Q), i.e. Πu(0) (x, t) = u(0) (x, t). It follows from equation [3.65] that u(1) (x, t) = R0 V ∇u(0) (x, t) + c1 (t),

[3.66]

where c1 (t) ∈ N (Q). Similarly, Qu(2) (x, t) = =

∂ (0) u (x, t) − V ∇u(1) (x, t) ∂t

∂ (0) u (x, t) − V ∇R0 V ∇u(0) (x, t) − V ∇c1 (t) − A∇u(0) (x, t). ∂t

[3.67]

Multiplying equation [3.67] by the operator Π, we obtain ΠQu(2) (x, t) = 0 =

∂ (0) u (x, t) − ΠV ∇R0 V ∇Πu(0) (x, t) − ΠA∇Πu(0) (x, t). ∂t

[3.68]

Here we use the fact that ΠV ∇c1 (t) = ΠV ∇Πc1 (t) = 0. Therefore, the main term of the expansion [3.64] satisfies the diffusion equation [3.68]. We can obtain similar equations for the rest of the terms, but we are just interested in the main term u(0) (x, t). Now, let us write the matrix equation [3.63] in the following form: 3 4 ∂ v ∂ λ λ ∂ ∂t − ε ∂x − a1 ∂x + ε2 ε2 u(ε) (x, t) ∂ λ ∂ v ∂ λ + + a + 2 2 2 ε ∂t ε ∂x ∂x ε = Ψu(ε) (x, t) = 0.

100

Random Motions in Markov and Semi-Markov Random Environments 1

It is easy to verify that the function fε (x, t) = uε0 (x, t) + uε1 (x, t) is the solution of the following equation det (Ψ) fε (x, t) = 0.

[3.69]

Writing equation [3.69] in more detail, we have ( ' 2 ∂ ∂ 2 ∂ 2λ − v + λ (a2 − a1 ) ∂t ∂x2 ∂x ' 2 ( ∂2 ∂ ∂2 v (a2 + a1 ) ∂ 2 2 +ε + (a2 − a1 ) − a1 a2 2 − ∂t2 ∂x∂t ε ∂x2 ∂x ×fε (x, t) = 0.

[3.70]

Let us define the notation u(0) (x, t) = (u0 (x, t), u1 (x, t))and f0 (x, t) = u0 (x, t) + u1 (x, t). Since uε (x, t) > u(0) (x, t) as ε > 0 (Korolyuk and Korolyuk 1999), we have limε→0 fε (x, t) = f0 (x, t). Hence, it follows from equation [3.70] that ' ( ∂ a 2 − a1 ∂ v2 ∂ 2 f0 (x, t) = 0. + − ∂t 2λ ∂x2 2 ∂x

[3.71]

So, if a2 = a1 , then f0 (x, t) satisfies the diffusion equation [3.71] with drift v2 2 coefficient a1 −a and diffusion coefficient 2λ . 2 3.6.2. Application to an economic model of stock market The well-known Black–Scholes formula (Dineen 2005) gives the price Xt of a stock at time t such that Xt = X0 exp(μt + σWt ),

[3.72]

where μ is the drift, σ is the volatility of the stock, and {Wt } is the Wiener process. 2 By substituting μ = a1 −a and σ = √v2λ in equation [3.69], we obtain the 2 diffusion equation for the process {μt + σWt }. Therefore, the Black–Scholes formula is obtained by considering an exponential Brownian motion for the share price Xt .

The Wiener process provides a mathematical consistent model for Brownian motion. However, it has the following drawbacks in capturing the physics of many applications:

Asymptotic Analysis for Distributions of Evolutions

101

1) modulus of the velocity is almost always infinite at any instant in time; 2) free path length of zero; 3) the path function of a particle is almost surely non-differentiable at any given point, and its Hausdorff dimension is equal to 1.5, i.e. the path function is fractal. However, the actual movement of a physical particle as well as the actual evolution of share prices Xt are barely justified as fractal quantities. Taking into account these considerations, we propose to use the process given by equation [3.70] instead of the diffusion process given in equation [3.71] for most applications. This recommendation is based on the fact that this process does not suffer from the drawbacks mentioned above. For instance, the process in equation [3.70] has almost always a finite modulus of velocity Vε = vε , non-zero free paths that depend on Λε = ελ2 , and its trajectories are almost surely differentiable. Some difficulties may arise in calculating the coefficients Vε and Λε . For such a purpose, we suggest considering the time between two consecutive renewal epochs (or, equivalently, two consequent impacts of a particle or two consequent changes of price) as an exponential distributed random variable with parameter Λε . Then, Λε can be estimated from experimental data as well as the velocity Vε . Hence, instead of the Black–Scholes formula, we propose the following formula for the price Xt of a stock at time t: '  t   ( s v 2 ds , Xt = X0 exp x0 + ε−1 ε 0 with the corresponding coefficients Vε and Λε .

4 Random Switched Processes with Delay in Reflecting Boundaries

In this chapter, we deal with the problem of finding the stationary distributions for the random switching processes in both a Markov and semi-Markov environment with delay in reflecting boundaries. These results can be used for the calculation of the stationary effectiveness parameter of multiphase supplying systems with feedback, as well as for determining the reliability of systems modeled by Markov and semi-Markov evolutions. Examples illustrating the application of these results are presented. In section 4.1, we obtain the stationary distribution of the transport process with delaying boundaries in Markov media. In particular, these results are applicable to the study of multiphase systems with several buffers. Section 4.2 deals with the transport process in semi-Markov switching. We find the stationary distribution of a random evolution, which is defined by the differential equation with the phase space on the interval [V0 , V1 ] and a constant vector field, whose values depend on the semi-Markov switching process with a finite set of states. In section 4.3, we give a detailed example of the method of calculation for the stationary distributions of random switched processes, with delaying boundaries for the Markov case. In section 4.4, we provide the corresponding calculations for the semi-Markov switching process. By using these results, we calculate the stationary effectiveness parameters for single- and two-phase supplying systems with reservoirs and feedback.

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.

104

Random Motions in Markov and Semi-Markov Random Environments 1

4.1. Stationary distribution of evolutionary switched processes in a Markov environment with delay in reflecting boundaries We will calculate the stationary distribution of an evolutionary switched process (or a transport process) with delaying barriers in a Markov environment. These results have particular application in the study of multiphase systems with several buffers (Pogorui and Turbin 2002; Pogorui 2004a). Consider the following evolutionary switched process in a Markov environment: dv(t) = C (v(t), κ(t)) dt

[4.1]

where κ(t) is a Markov process. Let G = X ∪ Y,

X = {x1 , x2 , . . . , xn } ,

Y = {y1 , y2 , . . . , ym }

be the phase space of κ(t), and P = {pαβ , α, β ∈ G} be the transition probabilities matrix of the embedded (into κ(t)) ergodic Markov chain κl , l ∈ N, i.e. pαβ = P {κl+1 = β| κl = α}. A sojourn time τα of κ(t) at α ∈ G is a random variable with exponential distribution Fα (t) = 1 − exp (−λα t). Assume that the function C on the right-hand side of equation [4.1] is such that C1. For xi ∈ X, C (v, xi ) =

−ai , V0 < v ≤ V1 , 0, v = V0

i = 1, 2, . . . , n,

b j , V 0 ≤ v < V1 , 0, v = V1

j = 1, 2, . . . , m,

for yj ∈ Y , C (v, yj ) =

where V0 , V1 , ai , bj ∈ R, V0 < V1 , ai > 0, bj > 0, i = 1, 2, . . . , n, j = 1, 2, . . . , m. We also assume the following: C2. The phase space G is composed of exactly one ergodic class (Korolyuk and Turbin 1982; Nummelin 1984). Thus, the process {v(t)} develops in the segment (V0 , V1 ) with constant velocities that are switched by the Markov process κ(t). At point V0 (respectively,

Random Switched Processes with Delay in Reflecting Boundaries

105

V1 ), the process v(t) delays up to the time when κ(t) reaches the set Y (respectively, X). We introduce a bivariate process {ξ(t)} on the phase space Z = G × [V0 , V1 ], i.e. ξ(t) = (v(t), κ(t)). You should note that ξ(t) is a Markov process (Korolyuk and Limnios 2005). Our aim is to study the stationary distribution of ξ(t). The process ξ(t) has an infinitesimal operator given by Korolyuk (1993); Korolyuk and Limnios (2005): Aϕ (v, α) = C (v, α)

d ϕ (v, α) + Qϕ(v, α), dv

where Qϕ (v, α) = λα [P ϕ (v, α) − ϕ(v, α)], and P ϕ (v, α) =



pαβ ϕ (v, β).

β∈G

If the process ξ(t) has the stationary distribution ρ, then for any function ϕ (·) from the domain of A we have  Aϕ (z) ρ (dz) = 0. [4.2] Z

Since the process ξ(t) experiences delays at the points (V0 , x) , x ∈ X and (V1 , y) y ∈ Y , it follows that there are singularities (atoms) of the distribution ρ at these points. We will denote these singularities as ρ [V0 , x], ρ [V1 , y], respectively, and by ρ (v, α), α ∈ G as the absolute continuous components (or the probability density function) of the distribution ρ. We assume that the following limits exist:     ρ V0+ , α = lim ρ (v, α) , ρ V1− , α = lim ρ(v, α). v↓V0

v↑V1

Let A∗ be the adjoint operator of A. By changing the order of integration in equation [4.2], we obtain the following expression for A∗ ρ = 0, namely: 5    V1 d C (v, α) ϕ (v, α) Aϕ (z) ρ (dz) = dv Z V0 α∈G

6 +λα (P ϕ (v, α) − ϕ (v, α)) ρ (v, α) dv

106

Random Motions in Markov and Semi-Markov Random Environments 1

=



    ϕ (V1 , α) C (V1 , α) ρ V1− , α − ϕ (V0 , α) C (V0 , α) ρ V0+ , α

α∈G

 −

V1

ϕ (v, α) C (v, α) V0

+



d ρ (v, α) dv) dv

ρ (v, α) λα (P ϕ (v, α) − ϕ (v, α)) = 0,

[4.3]

α∈G

where   ρ V0+ , α = lim ρ (v, α) , v↓V0

  ρ V1− , α = lim ρ(v, α). v↑V1

Thus, taking into account [4.2], for the pdf ρ (α, v) we have: C (v, α)

d ρ (v, α) + Q∗ ρ (v, α) = 0, dv

α ∈ G,

V 0 < v < V1 ,

where Q∗ is the adjoint operator of Q, i.e.  λβ pβα ρ (v, β) + λα ρ(v, α). Q∗ ρ (v, α) = β∈G

Denoted by ρ [V0 ] = (ρ [V0 , x1 ] , . . . , ρ [V0 , xn ] , 0, . . . , 0), ρ [V1 ] = (0, . . . , 0, ρ [V1 , y1 ] , . . . , ρ[V1 , ym ]), ρ (v) = (ρ (v, x1 ) , . . . , ρ (v, xn ) , ρ (v, y1 ) , . . . , ρ (v, ym )) vectors with respective (n + m) components, and by C = diag (−a1 , −a2 , . . . , −an , b1 , b2 , . . . , bm ) the (n + m) × (n + m) diagonal matrix. The matrix representation of the operator Q is as follows: ⎞ ⎛ −λx1 λx1 px1 x2 . . . λx1 px1 ym ⎜ λx2 px2 x1 −λx2 . . . λx2 px2 ym ⎟ ⎟ ⎜ Q=⎜ . ⎟, .. . .. .. ⎠ ⎝ .. . . λym pym x1 λym pym x2 . . . −λym where we assume that pαα = 0, α ∈ G.

[4.4]

Random Switched Processes with Delay in Reflecting Boundaries

It follows from [4.3], that for the atoms of the distribution ρ, we have:     ρ [V0 ] Q = ρ V0+ C, ρ [V1 ] Q = ρ V1− C.

107

[4.5]

Let us consider equation [4.4] in the matrix form: d ρ (v) C + ρ (v) Q = 0, dv

v ∈ (V0 , V1 ).

By solving equation [4.6], we have     ρ (v) = ρ V0+ exp QC −1 (V0 − v) ,

v ∈ (V0 , V1 ).

[4.6]

[4.7]

  From [4.5] we can find the values of ρ V0+ and the atoms ρ [V0 ], ρ [V1 ] up to a constant c. Indeed, by taking into account condition C2, the equation Q x = 0 has ∗ the unique solution x = 1, where 1 = (1, . . . , 1) is a column vector with (n + m) components. Therefore, formulas   [4.5] consist of 2 (n + m) − 1 independent equations, and the vectors ρ V0+ , ρ [V0 ], ρ [V1 ] contain 2 (n + m) components.   It follows from [4.7], that the vector ρ V1− is uniquely expressed in terms of the  + vector ρ V0 . A normalization constant c is determined by the following condition:  V1 ρ (v) 1 dv + (ρ [V0 ] + ρ [V1 ]) 1 = 1. [4.8] V0

Thus, we have the following theorem.

THEOREM 4.1.– (Pogorui 2005) If conditions C1 and C2 are satisfied, then the Markov process $\xi(t)=(v(t),\kappa(t))$ has the stationary distribution $\rho(\cdot)$, with the absolutely continuous part $\rho(v)=\rho(V_0^+)\exp\bigl(QC^{-1}(V_0-v)\bigr)$, $v\in(V_0,V_1)$, and singular part (atoms) $\rho[V_0]$, $\rho[V_1]$ determined by [4.5].

EXAMPLE 4.1.– Let us consider the case where $X=\{x\}$, $Y=\{y_1,y_2,\dots,y_m\}$. Then $\rho[V_0]=(\rho[V_0,x],0,\dots,0)$, $\rho[V_1]=(0,\rho[V_1,y_1],\dots,\rho[V_1,y_m])$, and the matrices $C$ and $Q$ are of the following form:
\[
C=\operatorname{diag}(-a,b_1,\dots,b_m),\qquad
Q=\begin{pmatrix}
-\lambda_x & \lambda_x p_{xy_1} & \dots & \lambda_x p_{xy_m}\\
\lambda_{y_1}p_{y_1x} & -\lambda_{y_1} & \dots & \lambda_{y_1}p_{y_1y_m}\\
\vdots & \vdots & \ddots & \vdots\\
\lambda_{y_m}p_{y_mx} & \lambda_{y_m}p_{y_my_1} & \dots & -\lambda_{y_m}
\end{pmatrix}.
\]
From equation [4.5] it follows that
\[
\rho(V_0^+,x)=\frac{\lambda_x\,\rho[V_0,x]}{a},\qquad
\rho(V_0^+,y_1)=\frac{\lambda_x p_{xy_1}\rho[V_0,x]}{b_1},\ \dots,\ \rho(V_0^+,y_m)=\frac{\lambda_x p_{xy_m}\rho[V_0,x]}{b_m},
\]
and
\[
(0,\rho[V_1,y_1],\dots,\rho[V_1,y_m])\,Q=\rho(V_1^-)\,C=\rho(V_0^+)\exp\bigl(-QC^{-1}(V_1-V_0)\bigr)C.
\tag{4.9}
\]
Denote by $Q_Y$ and $C_Y$ the two matrices obtained from $Q$ and $C$, respectively, after removing the first column and the first row, and consider $\rho_Y(V_1^-)=(\rho(V_1^-,y_1),\dots,\rho(V_1^-,y_m))$. It follows from condition C2 that the inverse matrix $Q_Y^{-1}$ exists. Thus, it is easily verified that
\[
(\rho[V_1,y_1],\dots,\rho[V_1,y_m])=(\rho(V_1^-,y_1),\dots,\rho(V_1^-,y_m))\,C_Y Q_Y^{-1}.
\tag{4.10}
\]

Since the vector $\rho_Y(V_1^-)$ is uniquely expressed in terms of the vector $\rho(V_0^+)$, up to the constant $c=\rho[V_0,x]$ we have the absolutely continuous part of the stationary distribution
\[
\rho(v)=\Bigl(\frac{\lambda_x c}{a},\frac{\lambda_x p_{xy_1}c}{b_1},\dots,\frac{\lambda_x p_{xy_m}c}{b_m}\Bigr)\exp\bigl(QC^{-1}(V_0-v)\bigr).
\]
The atoms of the stationary distribution are determined by equation [4.10], and the constant $c=\rho[V_0,x]$ can be obtained from equation [4.8].

For the estimation of the efficiency of some storage systems, it is enough to calculate the value of $\rho_s(v)=\sum_{i=1}^{n}\rho(v,x_i)$. In order to obtain $\rho_s(v)$, let us consider the following differential operator:
\[
C(v,\alpha)\frac{d}{dv}+Q^{*}=
\begin{pmatrix}
-\lambda_{x_1}-a_1\frac{d}{dv} & \lambda_{x_2}p_{x_2x_1} & \dots & \lambda_{y_m}p_{y_mx_1}\\
\lambda_{x_1}p_{x_1x_2} & -\lambda_{x_2}-a_2\frac{d}{dv} & \dots & \lambda_{y_m}p_{y_mx_2}\\
\vdots & \vdots & \ddots & \vdots\\
\lambda_{x_1}p_{x_1y_m} & \lambda_{x_2}p_{x_2y_m} & \dots & -\lambda_{y_m}+b_m\frac{d}{dv}
\end{pmatrix}.
\tag{4.11}
\]
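Treating $\frac{d}{dv}$ as a formal indeterminate, the determinant of the operator [4.11] can be expanded symbolically. Below is a sketch using sympy for the case $X=\{x\}$, $Y=\{y_1,y_2\}$ treated in example 4.2 below; the symbols are the example's parameters, and the sketch confirms the two leading coefficients of the resulting differential operator.

```python
import sympy as sp

D = sp.symbols('D')                      # stands for d/dv
a, b1, b2 = sp.symbols('a b1 b2', positive=True)
lx, l1, l2 = sp.symbols('lambda_x lambda_y1 lambda_y2', positive=True)
pxy1, pxy2, py1x, py2x, py1y2, py2y1 = sp.symbols(
    'p_xy1 p_xy2 p_y1x p_y2x p_y1y2 p_y2y1')

# operator matrix C(v, alpha) d/dv + Q* for X = {x}, Y = {y1, y2}
M = sp.Matrix([
    [-lx - a*D,  l1*py1x,      l2*py2x],
    [lx*pxy1,    -l1 + b1*D,   l2*py2y1],
    [lx*pxy2,    l1*py1y2,     -l2 + b2*D]])

det = sp.expand(M.det())
# leading coefficient of D^3 should be -a*b1*b2
print(sp.simplify(det.coeff(D, 3) + a*b1*b2) == 0)
# coefficient of D^2 should be -lambda_x b1 b2 + lambda_y1 a b2 + lambda_y2 a b1
print(sp.simplify(det.coeff(D, 2) - (-lx*b1*b2 + l1*a*b2 + l2*a*b1)) == 0)
```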

Let the differential operator $D$ be defined as the formal determinant of the operator $C(v,\alpha)\frac{d}{dv}+Q^{*}$, i.e. $D=\det\bigl(C(v,\alpha)\frac{d}{dv}+Q^{*}\bigr)$. As was mentioned in Volume 1, Chapter 3 (see the proof of lemma 3.4), the function $\rho_s(v)$ satisfies the differential equation of order $n+m$: $D\rho_s(v)=0$.

EXAMPLE 4.2.– Let us consider a case similar to that of example 4.1, with $m=2$, i.e. $X=\{x\}$, $Y=\{y_1,y_2\}$. Then
\[
C(v,\alpha)\frac{d}{dv}+Q^{*}=
\begin{pmatrix}
-\lambda_x-a\frac{d}{dv} & \lambda_{y_1}p_{y_1x} & \lambda_{y_2}p_{y_2x}\\
\lambda_x p_{xy_1} & -\lambda_{y_1}+b_1\frac{d}{dv} & \lambda_{y_2}p_{y_2y_1}\\
\lambda_x p_{xy_2} & \lambda_{y_1}p_{y_1y_2} & -\lambda_{y_2}+b_2\frac{d}{dv}
\end{pmatrix}.
\]
Hence,
\[
\begin{aligned}
D\rho_s(v)&=\det\Bigl(C(v,\alpha)\frac{d}{dv}+Q^{*}\Bigr)\rho_s(v)\\
&=-ab_1b_2\frac{d^3}{dv^3}\rho_s(v)+\bigl(-\lambda_x b_1b_2+\lambda_{y_1}ab_2+\lambda_{y_2}ab_1\bigr)\frac{d^2}{dv^2}\rho_s(v)\\
&\quad+\bigl(\lambda_x\lambda_{y_1}b_2(1-p_{xy_1}p_{y_1x})+\lambda_x\lambda_{y_2}b_1(1-p_{xy_2}p_{y_2x})-\lambda_{y_1}\lambda_{y_2}a(1-p_{y_1y_2}p_{y_2y_1})\bigr)\frac{d}{dv}\rho_s(v)\\
&\quad+\lambda_x\lambda_{y_1}\lambda_{y_2}\bigl(p_{y_1y_2}p_{y_2y_1}+p_{xy_1}p_{y_1x}+p_{xy_2}p_{y_2x}+p_{xy_1}p_{y_1y_2}p_{y_2x}+p_{xy_2}p_{y_2y_1}p_{y_1x}-1\bigr)\rho_s(v)=0.
\end{aligned}
\]

4.2. Stationary distribution of switched process in semi-Markov media with delay in reflecting barriers

This section deals with a switched process that has semi-Markov switching (in other words, a switched process in a semi-Markov medium) (Korolyuk and Limnios 2005). We find the stationary measure of this process, which is defined by a differential equation with phase space the interval $[V_0,V_1]$ and a constant vector field of values that depends on the semi-Markov switching process with a finite set of states. The results obtained in this section can be used in reliability theory for the calculation of the stationary effectiveness of systems (Pogorui 2011b; Rodríguez-Said et al. 2007, 2008). In particular, they can be used in the study of multiphase systems with storage that can be modeled by stochastic transport processes with delay in barriers, with a Markov or semi-Markov switching process

(Pogorui and Turbin 2002; Pogorui 2004a). If the switching process is semi-Markov, then we can obtain the stationary distribution of the corresponding three-variate Markov process, in which the first, temporal, component is the time elapsed since the last change of state of the switching process, the second component corresponds to the switching process, and the third is the spatial component describing the fullness of the storage facility. We should remark that the problem of finding the stationary distribution of a switched process in semi-Markov media is nontrivial, even in the simplest case of an alternating switching process (Pogorui 2011b).

4.2.1. Infinitesimal operator of random evolution with semi-Markov switching

We will study the infinitesimal operator of a stochastic switched process in a random environment, which can be seen as the superposition of independent semi-Markov processes. Such a process is defined by an evolutionary differential equation, which depends on the given superposition. These types of processes are applied to the calculation of the efficiency of various feedback control systems (Pogorui and Turbin 2002; Pogorui 2003), and in some queueing systems (Foss 2007). Another example of the application of such processes is considered in Volume 2, Chapter 5, where the distribution of the first crossing time of two independent telegraph processes on the line is studied.

Let us consider the evolutionary switched process $\{V(t)\}$, defined as a solution of the following evolutionary differential equation:
\[
\frac{dV(t)}{dt}=C\bigl(V(t),\xi_1(t),\xi_2(t),\dots,\xi_n(t)\bigr),\qquad V(0)=v,
\tag{4.12}
\]
where $\xi_i(t)$, $i=1,2,\dots,n$, is a family of independent regular semi-Markov processes with the respective phase spaces $X_i=\{x_1^{(i)},x_2^{(i)},\dots,x_{m_i}^{(i)}\}$ and semi-Markov matrices $Q^{(i)}(t)=\{p_{x_k^{(i)}x_l^{(i)}}G_{x_k^{(i)}}(t);\ 1\le k,l\le m_i\}$, and the function $C(v,x_1,x_2,\dots,x_n)$, $x_i\in X_i$, is continuously differentiable on $\mathbb{R}\times\prod_{i=1}^{n}X_i$ with respect to $v$ for all $x_i\in X_i$, $i=1,2,\dots,n$.

Denote by $p_{yz}$, $y,z\in X_i$, the transition probabilities of the embedded (in $\xi_i(t)$) Markov chain $\xi_n^{(i)}=\xi_i\bigl(\tau_n^{(i)}\bigr)$, where $\tau_n^{(i)}$, $n=0,1,2,\dots$, are the renewal epochs of the renewal process $\xi_i(t)$, and $G_{x_k^{(i)}}(t)$ is the sojourn time distribution function of the process $\xi_i(t)$ at $x_k^{(i)}$.

Let us assume that the pdf $g_{x_k^{(i)}}(t)=\frac{dG_{x_k^{(i)}}(t)}{dt}$ exists for all $x_k^{(i)}\in X_i$, $i=1,2,\dots,n$, and define
\[
r_{x_k^{(i)}}(\tau_i)=\frac{g_{x_k^{(i)}}(\tau_i)}{1-G_{x_k^{(i)}}(\tau_i)}.
\]
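The function $r$ defined above is the hazard (failure) rate of the sojourn time distribution. As a quick illustration (the distributions here are chosen arbitrarily, not taken from the text): for an exponential sojourn time the hazard rate is constant, whereas for an Erlang-2 sojourn time it increases with $\tau$.

```python
import numpy as np

tau = np.linspace(0.01, 5, 500)
lam = 2.0

# exponential sojourn: G(t) = 1 - exp(-lam t)  =>  r(t) = lam (constant)
g_exp = lam * np.exp(-lam * tau)
r_exp = g_exp / np.exp(-lam * tau)

# Erlang-2 sojourn: g(t) = lam^2 t exp(-lam t)  =>  r(t) = lam^2 t / (1 + lam t)
g_erl = lam**2 * tau * np.exp(-lam * tau)
G_erl = 1 - (1 + lam * tau) * np.exp(-lam * tau)
r_erl = g_erl / (1 - G_erl)

print(np.allclose(r_exp, lam))   # constant hazard rate for the exponential case
print(r_erl[0] < r_erl[-1])      # increasing hazard rate for Erlang-2
```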

Now, let us consider $\tau_i(t)=t-\inf\{u\le t:\xi_i(u)=\xi_i(t)\}$, $i=1,2,\dots,n$. Since the bivariate process $(\xi_i(t),\tau_i(t))$ is a Markov process (Corlat et al. 1991; Gikhman and Skorokhod 1975; Korolyuk and Turbin 1982), it is easy to verify that the process $\zeta(t)=(V(t),\xi_1(t),\tau_1(t),\xi_2(t),\tau_2(t),\dots,\xi_n(t),\tau_n(t))$ is a Markov process as well.

LEMMA 4.1.– The infinitesimal operator of the process $\zeta(t)$ is as follows:
\[
\begin{aligned}
Gf(v,x_1,\tau_1,\dots,x_n,\tau_n)={}&C(v,x_1,\dots,x_n)\frac{\partial}{\partial v}f(v,x_1,\tau_1,\dots,x_n,\tau_n)
+\sum_{i=1}^{n}\frac{\partial f(v,x_1,\tau_1,\dots,x_n,\tau_n)}{\partial\tau_i}\\
&+r_{x_1}(\tau_1)\bigl(P_1f(v,x_1,0,x_2,\tau_2,\dots,x_n,\tau_n)-f(v,x_1,\tau_1,\dots,x_n,\tau_n)\bigr)\\
&+r_{x_2}(\tau_2)\bigl(P_2f(v,x_1,\tau_1,x_2,0,\dots,x_n,\tau_n)-f(v,x_1,\tau_1,\dots,x_n,\tau_n)\bigr)\\
&+\dots\\
&+r_{x_n}(\tau_n)\bigl(P_nf(v,x_1,\tau_1,\dots,x_n,0)-f(v,x_1,\tau_1,\dots,x_n,\tau_n)\bigr),
\end{aligned}
\]
where $f$ is in the domain of the operator $G$ and
\[
P_if(v,x_1,\tau_1,\dots,x_i,0,\dots,x_n,\tau_n)=\sum_{y\in X_i}p_{x_iy}\,f(v,x_1,\tau_1,\dots,y,0,\dots,x_n,\tau_n).
\]

REMARK 4.1.– Let us consider the particular case
\[
\frac{dV(t)}{dt}=C\bigl(V(t),\xi(t)\bigr),\qquad V(0)=v,
\]
that is, the "superposition" consists of one semi-Markov process $\xi(t)$.

Let us set $\tau(t)=t-\sup\{u\le t:\xi(u)=\xi(t)\}$. On the phase space $\mathbb{R}\times X\times[0,+\infty)$, introduce the three-variate process $\zeta(t)=(V(t),\xi(t),\tau(t))$. Then $\zeta(t)$ is a Markov process and its infinitesimal operator $G$ is given by
\[
Gf(v,x,t)=C(v,x)\frac{\partial}{\partial v}f(v,x,t)+r_x(t)\bigl[Pf(v,x,0)-f(v,x,t)\bigr]+\frac{\partial}{\partial t}f(v,x,t),
\]
where $f(v,x,\tau)$ is a bounded and continuously differentiable function with respect to $v$ and $\tau$.

PROOF.– Consider the proof of lemma 4.1 for the case of one switching process, as in remark 4.1. Suppose $\xi(0)=x$, $\tau(0)=t$, $\xi_n(0)=x_{k_n}^{(n)}$, $\tau_n(0)=t_n$. Taking into account the definition of the infinitesimal operator of a Markov process (Gikhman and Skorokhod 1975), we have
\[
Gf(v,x,t)=\lim_{\Delta\to0}\frac{\mathbb{E}_{v,x,t}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))\bigr]-f(v,x,t)}{\Delta}.
\]

To simplify expressions involving $\mathbb{E}_{v,x,t}$, the subscript $(v,x,t)$ will be omitted. We write this expectation in the following form:
\[
\begin{aligned}
\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))\bigr]={}&f(v,x,t)\\
&+\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,\xi(\Delta),\tau(\Delta))\bigr]\\
&+\mathbb{E}\bigl[f(v,\xi(\Delta),\tau(\Delta))-f(v,x,t)\bigr].
\end{aligned}
\]
Denoting the sojourn time of $\xi(t)$ at the state $x$ by $\theta_x$, we obtain
\[
\begin{aligned}
\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,x,t)\bigr]
={}&\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,\xi(\Delta),\tau(\Delta))\bigr]I(\theta_x>\Delta)\\
&+\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,\xi(\Delta),\tau(\Delta))\bigr]I(\theta_x\le\Delta)\\
&+\mathbb{E}\bigl[f(v,\xi(\Delta),\tau(\Delta))-f(v,x,t)\bigr].
\end{aligned}
\]
Since the semi-Markov process $\xi(t)$ is regular and $f$ is differentiable with respect to $v$, we have
\[
\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,\xi(\Delta),\tau(\Delta))\bigr]I(\theta_x\le\Delta)=o(\Delta).
\]
Then, using the fact that $V(\Delta)=v+\int_0^{\Delta}C(V(t),\xi(t))\,dt$, for differentiable $f$ with respect to $v$ we obtain
\[
\begin{aligned}
\lim_{\Delta\to0}\Delta^{-1}\,&\mathbb{E}\bigl[f(V(\Delta),\xi(\Delta),\tau(\Delta))-f(v,\xi(\Delta),\tau(\Delta))\bigr]I(\theta_x>\Delta)\\
&=\lim_{\Delta\to0}\Delta^{-1}\,\mathbb{E}\Bigl[f\Bigl(v+\int_0^{\Delta}C(V(t),\xi(t))\,dt,\ \xi(\Delta),\tau(\Delta)\Bigr)-f(v,\xi(\Delta),\tau(\Delta))\Bigr]I(\theta_x>\Delta)\\
&=C(v,x)\frac{\partial}{\partial v}f(v,x,t).
\end{aligned}
\tag{4.13}
\]

It is well known that the bivariate process $(\xi(t),\tau(t))$ is a Markov process and that its infinitesimal operator is of the following form (Volume 1, Chapter 1, section 1.4) (Corlat et al. 1991; Korolyuk and Turbin 1982):
\[
Ah(x,t)=\lim_{\Delta\to0}\frac{\mathbb{E}_{x,t}\bigl[h(\xi(\Delta),\tau(\Delta))\bigr]-h(x,t)}{\Delta}
=r_x(t)\bigl[Ph(x,0)-h(x,t)\bigr]+\frac{\partial}{\partial t}h(x,t),
\]
where $h(x,t)$ is bounded and differentiable with respect to $t$. Hence,
\[
\lim_{\Delta\to0}\frac{\mathbb{E}\bigl[f(v,\xi(\Delta),\tau(\Delta))\bigr]-f(v,x,t)}{\Delta}
=r_x(t)\bigl[Pf(v,x,0)-f(v,x,t)\bigr]+\frac{\partial}{\partial t}f(v,x,t).
\tag{4.14}
\]
Taking into account [4.13] and [4.14], we obtain the proof of lemma 4.1 for one switching process. For the general case, the proof follows the same idea.

4.2.2. Stationary distribution of random evolution in semi-Markov media with delaying boundaries in balance case

In this section, we study a switched process with semi-Markov switching, defined by an evolution differential equation with a finite vector field. The phase space of the differential equation is a segment of the real line, and its vector field consists of constants. We obtain the stationary distribution of the switched process given by the evolution differential equation.

Let $\{\kappa(t)\}$ be a semi-Markov process with phase space $G=X\cup Y$, $X=\{x_1,x_2,\dots,x_n\}$, $Y=\{y_1,y_2,\dots,y_m\}$, and $P=\{p_{\alpha\beta},\ \alpha,\beta\in G\}$ as the

transition probabilities matrix of the embedded (into $\kappa(t)$) Markov chain $\kappa_l$, $l\in\mathbb{N}$, i.e. $p_{\alpha\beta}=P\{\kappa_{l+1}=\beta\mid\kappa_l=\alpha\}$. We denote by $\tau_\alpha$ the sojourn time of $\kappa(t)$ at $\alpha\in G$, and by $F_\alpha(t)$ the cumulative distribution function of $\tau_\alpha$. Consider the evolutionary switched process $v(t)$ defined as a solution of the following evolutionary equation:
\[
\frac{dv(t)}{dt}=C\bigl(\kappa(t),v(t)\bigr).
\tag{4.15}
\]
The process $v(t)$ is also called a random evolution (or a transit process) in a semi-Markov medium $\kappa(t)$ (Korolyuk 1993; Korolyuk and Swishchuk 1995b; Korolyuk and Limnios 2005).

Now, let us introduce the following condition:

D1. The density function $f_\alpha(t)=\frac{dF_\alpha(t)}{dt}$ and the moments
\[
m_\alpha=\int_0^{\infty}t\,f_\alpha(t)\,dt,\qquad m_\alpha^{(2)}=\int_0^{\infty}t^2f_\alpha(t)\,dt
\]
exist for all $\alpha\in G$.

Consider $V_0,V_1,b_i,a_j\in\mathbb{R}$ with $V_0<V_1$, $b_i>0$, $a_j>0$, $i=1,2,\dots,n$, $j=1,2,\dots,m$, and assume that the function $C$ on the right-hand side of equation [4.15] satisfies, for $x_i\in X$, $i=1,2,\dots,n$, and $y_j\in Y$, $j=1,2,\dots,m$,
\[
C(x_i,v)=\begin{cases}b_i, & V_0\le v<V_1,\\ 0, & v=V_1,\end{cases}
\qquad
C(y_j,v)=\begin{cases}-a_j, & V_0<v\le V_1,\\ 0, & v=V_0.\end{cases}
\]
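Under these conditions the trajectory $v(t)$ moves with the current state's speed and is delayed (held) at a boundary until the switching process reverses the direction. A minimal simulation sketch with two alternating states and hypothetical parameters (exponential sojourn times are used here purely for simplicity):

```python
import random

def simulate(T=100.0, dt=1e-3, V0=0.0, V1=1.0, b=1.5, a=1.0, rate=2.0, seed=1):
    """Alternating two-state switching: state 0 moves up at speed b,
    state 1 moves down at speed a; v(t) is delayed at V0 and V1."""
    rng = random.Random(seed)
    v, state, t = V0, 0, 0.0
    t_next = rng.expovariate(rate)       # epoch of the next switch
    vs = []
    while t < T:
        if t >= t_next:                  # switching epoch: reverse direction
            state = 1 - state
            t_next = t + rng.expovariate(rate)
        speed = b if state == 0 else -a
        v = min(V1, max(V0, v + speed * dt))   # delay (stick) at the boundaries
        vs.append(v)
        t += dt
    return vs

path = simulate()
print(min(path) >= 0.0 and max(path) <= 1.0)   # True: the path stays in [V0, V1]
```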

On the phase space $Z=[0,\infty)\times G\times[V_0,V_1]$, consider the three-variate process $\xi(t)=(\tau(t),\kappa(t),v(t))$, where $\tau(t)=t-\sup\{u\le t:\kappa(u)=\kappa(t)\}$. Our aim is to calculate the stationary distribution of $\xi(t)$. The process $\xi(t)$ is a Markov process and its infinitesimal operator is of the following form:
\[
A\varphi(\tau,\alpha,v)=\frac{\partial}{\partial\tau}\varphi(\tau,\alpha,v)+r_\alpha(\tau)\bigl[P\varphi(0,\alpha,v)-\varphi(\tau,\alpha,v)\bigr]+C(\alpha,v)\frac{\partial}{\partial v}\varphi(\tau,\alpha,v),
\]
with the boundary conditions $\varphi'_\tau(\tau,x,V_0)=\varphi'_\tau(\tau,y,V_1)=0$, $x\in X$, $y\in Y$, where
\[
r_\alpha(\tau)=\frac{f_\alpha(\tau)}{1-F_\alpha(\tau)},\qquad
P\varphi(0,\alpha,v)=\sum_{\beta\in G}p_{\alpha\beta}\,\varphi(0,\beta,v).
\]
If the process $\xi(t)$ has the stationary distribution $\rho(\cdot)$, then for any function $\varphi(\cdot)$ from the domain of $A$, we have
\[
\int_Z A\varphi(z)\,\rho(dz)=0.
\tag{4.16}
\]

Consider the following condition:

D2. The pdf $\rho(\tau,\alpha,v)$ of the absolutely continuous part of the stationary distribution of the process $\xi(t)$ exists. In addition, the derivatives $\frac{\partial\rho(\tau,\alpha,v)}{\partial\tau}$, $\frac{\partial\rho(\tau,\alpha,v)}{\partial v}$ exist everywhere, and the limits $\lim_{\tau\to+\infty}\rho(\tau,\alpha,v)$, $\lim_{v\uparrow V_1}\rho(\tau,\alpha,v)$ and $\lim_{v\downarrow V_0}\rho(\tau,\alpha,v)$ exist.

The analysis of the properties of the process $\xi(t)$ leads to the conclusion that the stationary distribution $\rho(\cdot)$ has singularities at the points $(\tau,x,V_1)$, $x\in X$, and $(\tau,y,V_0)$, $y\in Y$, which are denoted by $\rho[\tau,x,V_1]$, $\rho[\tau,y,V_0]$.

Now, let $A^{*}$ be the adjoint operator of $A$. Then, by changing the order of integration in equation [4.16] (integrating by parts), we obtain the following expressions for the non-singular part of $A^{*}\rho=0$:
\[
C(\alpha,v)\frac{\partial}{\partial v}\rho(\tau,\alpha,v)+r_\alpha(\tau)\,\rho(\tau,\alpha,v)+\frac{\partial}{\partial\tau}\rho(\tau,\alpha,v)=0,\qquad \rho(\infty,\alpha,v)=0,\quad \alpha\in G,
\tag{4.17}
\]
\[
\sum_{\beta\in G}\int_0^{\infty}r_\beta(\tau)\,\rho(\tau,\beta,v)\,d\tau\;p_{\beta\alpha}=\rho(0,\alpha,v),\qquad \alpha\in G.
\tag{4.18}
\]
For the singular part we have $\rho[\infty,x,V_1]=0$, $x\in X$, $\rho[\infty,y,V_0]=0$, $y\in Y$, and
\[
\frac{d}{d\tau}\rho[\tau,x,V_1]+r_x(\tau)\,\rho[\tau,x,V_1]-b_x\,\rho(\tau,x,V_1^-)=0,
\tag{4.19}
\]
\[
\frac{d}{d\tau}\rho[\tau,y,V_0]+r_y(\tau)\,\rho[\tau,y,V_0]-a_y\,\rho(\tau,y,V_0^+)=0,
\tag{4.20}
\]
where $\rho(\tau,x,V_1^-)=\lim_{v\uparrow V_1}\rho(\tau,x,v)$ and $\rho(\tau,y,V_0^+)=\lim_{v\downarrow V_0}\rho(\tau,y,v)$.

Then,
\[
\begin{aligned}
&\sum_{x\in X}\int_0^{\infty}r_x(\tau)\,\rho[\tau,x,V_1]\,d\tau\;p_{xz}=\rho[0,z,V_1], && z\in X,\\
&\sum_{y\in Y}\int_0^{\infty}r_y(\tau)\,\rho[\tau,y,V_0]\,d\tau\;p_{yz}=\rho[0,z,V_0], && z\in Y,\\
&\sum_{y\in Y}\int_0^{\infty}r_y(\tau)\,\rho[\tau,y,V_0]\,d\tau\;p_{yx}=b_x\int_0^{\infty}\rho(\tau,x,V_0^+)\,d\tau, && x\in X,\\
&\sum_{x\in X}\int_0^{\infty}r_x(\tau)\,\rho[\tau,x,V_1]\,d\tau\;p_{xy}=a_y\int_0^{\infty}\rho(\tau,y,V_1^-)\,d\tau, && y\in Y.
\end{aligned}
\tag{4.21}
\]
By solving equation [4.17], we have
\[
\rho(\tau,x,v)=f_x(v-b_x\tau)\,e^{-\int_0^{\tau}r_x(s)ds},\quad x\in X;\qquad
\rho(\tau,y,v)=f_y(v+a_y\tau)\,e^{-\int_0^{\tau}r_y(s)ds},\quad y\in Y.
\]

The balance case is characterized by the independence of the non-singular part of the stationary distribution $\rho(\cdot)$ from $v$. In this case, the functions $f_x(v-b_x\tau)=c_x$ and $f_y(v+a_y\tau)=c_y$ are constants. Hence, taking into account that $e^{-\int_0^{\tau}r_\alpha(s)ds}=1-F_\alpha(\tau)$, we have the following solution to equation [4.17]:
\[
\rho(\tau,\alpha,v)=c_\alpha\bigl(1-F_\alpha(\tau)\bigr),\qquad \alpha\in G.
\tag{4.22}
\]
If condition D1 is fulfilled, then $\frac{\partial\rho(\tau,\alpha,v)}{\partial\tau}=-c_\alpha f_\alpha(\tau)$ and $\lim_{\tau\to+\infty}\rho(\tau,\alpha,v)=0$. Since $\rho(\tau,\alpha,v)$ does not depend on $v$, it is easily verified that the solution [4.22] satisfies condition D2. By substituting $\rho(\tau,\alpha,v)=c_\alpha(1-F_\alpha(\tau))$ into [4.18], we have
\[
\sum_{\alpha\in G}c_\alpha\,p_{\alpha\beta}=c_\beta.
\tag{4.23}
\]
We suppose that the following condition holds:

D3. The embedded (into $\kappa(t)$) Markov chain is irreducible, ergodic and has the stationary distribution $\rho_\alpha$, $\alpha\in G$.
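Condition [4.23] states that the vector $(c_\alpha)$ is a left fixed point of $P$, i.e. proportional to the stationary distribution $\rho_\alpha$ of the embedded chain required in D3. A quick numerical check with a hypothetical transition matrix:

```python
import numpy as np

# Hypothetical embedded-chain transition matrix on G = {x1, y1, y2}
P = np.array([[0.0, 0.5, 0.5],
              [0.7, 0.0, 0.3],
              [0.9, 0.1, 0.0]])

# stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
rho = np.real(V[:, np.argmin(np.abs(w - 1))])
rho /= rho.sum()

c = 2.0                     # any normalizing factor
c_alpha = c * rho
print(np.allclose(c_alpha @ P, c_alpha))   # [4.23]: sum_a c_a p_ab = c_b
```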

Then a solution of [4.23] is as follows:
\[
c_\alpha=c\,\rho_\alpha,\qquad \alpha\in G.
\tag{4.24}
\]
Hence,
\[
\int_0^{\infty}\rho(\tau,\alpha,v)\,d\tau=c\,\rho_\alpha\int_0^{\infty}\bigl(1-F_\alpha(\tau)\bigr)d\tau=c\,\rho_\alpha m_\alpha,\qquad \alpha\in G.
\]
By solving [4.19] and [4.20], and considering [4.22] and [4.24], we obtain
\[
\rho[\tau,x,V_1]=c\,\rho_x\bigl(1-F_x(\tau)\bigr)\Bigl(b_x\tau+\frac{\rho[0,x,V_1]}{c\,\rho_x}\Bigr),
\tag{4.25}
\]
\[
\rho[\tau,y,V_0]=c\,\rho_y\bigl(1-F_y(\tau)\bigr)\Bigl(a_y\tau+\frac{\rho[0,y,V_0]}{c\,\rho_y}\Bigr).
\tag{4.26}
\]
Substituting [4.25] and [4.26] into [4.21], we have
\[
\begin{aligned}
&\sum_{x\in X}\bigl(c\,\rho_x b_x m_x+\rho[0,x,V_1]\bigr)p_{xz}=\rho[0,z,V_1], && z\in X,\\
&\sum_{y\in Y}\bigl(c\,\rho_y a_y m_y+\rho[0,y,V_0]\bigr)p_{yz}=\rho[0,z,V_0], && z\in Y,\\
&\sum_{y\in Y}\bigl(c\,\rho_y a_y m_y+\rho[0,y,V_0]\bigr)p_{yx}=c\,b_x m_x\rho_x, && x\in X,\\
&\sum_{x\in X}\bigl(c\,\rho_x b_x m_x+\rho[0,x,V_1]\bigr)p_{xy}=c\,a_y m_y\rho_y, && y\in Y.
\end{aligned}
\tag{4.27}
\]

Denote $P_X=\{p_{xz},\ x,z\in X\}$, $P_Y=\{p_{yz},\ y,z\in Y\}$, $G_X=(I-P_X)^{-1}=\{g_{xz},\ x,z\in X\}$, $G_Y=(I-P_Y)^{-1}=\{g_{yz},\ y,z\in Y\}$. The matrices $G_X$, $G_Y$ resemble the potential, as described in Kemeny and Snell (1960); Korolyuk and Turbin (1993). By solving the first two equations of [4.27], we obtain
\[
\rho[0,z,V_1]=c\sum_{x\in X}\rho_x b_x m_x\sum_{k\in X}p_{xk}g_{kz},\qquad z\in X,
\]
\[
\rho[0,z,V_0]=c\sum_{y\in Y}\rho_y a_y m_y\sum_{k\in Y}p_{yk}g_{kz},\qquad z\in Y.
\]
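These expressions are an instance of a simple linear-algebra identity: if $u=wP_XG_X$ with $G_X=(I-P_X)^{-1}$, then $(w+u)P_X=u$, matching the structure of the first equation of [4.27] with $w_x=c\rho_x b_x m_x$. A minimal numerical sanity check with hypothetical values:

```python
import numpy as np

# Hypothetical sub-stochastic matrix P_X (jumps within X) and inflow vector w
P_X = np.array([[0.1, 0.3],
                [0.2, 0.2]])
w = np.array([0.4, 0.7])      # stands in for w_x = c rho_x b_x m_x

G_X = np.linalg.inv(np.eye(2) - P_X)   # potential matrix
u = w @ P_X @ G_X                       # u_z stands in for rho[0, z, V1]

# first equation of [4.27]: sum_x (w_x + u_x) p_xz = u_z
print(np.allclose((w + u) @ P_X, u))
```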


After the substitution of these relations into the last two equations of [4.27], we obtain the following balance condition:

D4.
\[
\sum_{x\in X}b_x m_x\rho_x p_{xy}+\sum_{x\in X}\rho_x b_x m_x\sum_{k\in X}p_{xk}\sum_{z\in X}g_{kz}p_{zy}=a_y m_y\rho_y,\qquad y\in Y,
\]
\[
\sum_{y\in Y}a_y m_y\rho_y p_{yx}+\sum_{y\in Y}\rho_y a_y m_y\sum_{k\in Y}p_{yk}\sum_{z\in Y}g_{kz}p_{zx}=b_x m_x\rho_x,\qquad x\in X.
\tag{4.28}
\]

Thus, we have proved the following theorem.

THEOREM 4.2.– If conditions D1–D4 are satisfied, then the stationary distribution $\rho(\cdot)$ of $\xi(t)$ has the following form: for the continuous part ($x\in X$ and $y\in Y$)
\[
\rho(\tau,x,v)=c\,\rho_x\bigl(1-F_x(\tau)\bigr);\qquad \rho(\tau,y,v)=c\,\rho_y\bigl(1-F_y(\tau)\bigr),
\]
and for the singular part (atoms)
\[
\rho[\tau,x,V_1]=c\,\rho_x\bigl(1-F_x(\tau)\bigr)\Bigl(b_x\tau+\frac{\rho[0,x,V_1]}{c\,\rho_x}\Bigr),\qquad
\rho[\tau,y,V_0]=c\,\rho_y\bigl(1-F_y(\tau)\bigr)\Bigl(a_y\tau+\frac{\rho[0,y,V_0]}{c\,\rho_y}\Bigr),
\]
where
\[
\rho[0,z,V_1]=c\sum_{x\in X}\rho_x b_x m_x\sum_{k\in X}p_{xk}g_{kz},\quad z\in X;\qquad
\rho[0,z,V_0]=c\sum_{y\in Y}\rho_y a_y m_y\sum_{k\in Y}p_{yk}g_{kz},\quad z\in Y.
\]
The constant $c$ is determined from the condition $\int_Z\rho(dz)=1$.

Now, let us show that the measure $\rho$ of theorem 4.2 is unique, i.e. that there is no other stationary distribution satisfying equation [4.16], even if condition D2 is not fulfilled. The coupling method is well known in the theory of random processes and is widely used in the investigation of Markov processes and related processes (Lindvall 2002; Yu. 2002). The following lemma is true:

LEMMA 4.2.– (Yu. 2002) Suppose that a process $\zeta(t)$, with phase space $\Psi$, satisfies the following condition:

(C) there exist $T>0$, $\gamma>0$ and a two-dimensional vector Markov process $\theta(t)=(\theta_1(t),\theta_2(t))$ on the phase space $\Psi^2$ for which the following assertions are true:
1) both components of the process $\theta(t)$ have the same distribution as the process $\zeta(t)$;
2) for any $x$ and $y$, one has $P\{\theta_1(t)=\theta_2(t)\mid\theta_1(0)=x,\ \theta_2(0)=y\}\ge\gamma$.

Then the process $\zeta(t)$ has at most one stationary distribution.

Let us show how the coupling method can be used in the present case.

I. Let $\kappa(t)$ be a Markov process; that is, instead of the three-variate process $\xi(t)=(\tau(t),\kappa(t),v(t))$, we can consider the bivariate process $\xi(t)=(\kappa(t),v(t))$. Without loss of generality, we assume that the embedded (into $\kappa(t)$) Markov chain $\{\kappa_l,\ l\ge1\}$ satisfies the condition
\[
p_{\alpha\beta}=P\{\kappa_{l+1}=\beta\mid\kappa_l=\alpha\}\ge\delta>0,\qquad\forall\,\alpha,\beta\in G.
\]
Let $\kappa'_l$, $l\ge1$, and $\kappa''_l$, $l\ge1$, be two independent Markov chains with common phase space $G$, transition probability matrix $P=\{p_{\alpha\beta},\ \alpha,\beta\in G\}$ and simultaneous jumps. It is easy to verify that
\[
P\{\kappa'_1=\alpha_0,\ \kappa''_1=\alpha_0\mid\kappa'_0=\alpha,\ \kappa''_0=\beta\}\ge\delta^2>0,\qquad\forall\,\alpha_0,\alpha,\beta\in G.
\]
Thus, after the first jump, the chains $\kappa'_l$ and $\kappa''_l$ are coupled with a probability not less than $\delta^2>0$. Now, consider two independent Markov processes $\kappa'(t)$ and $\kappa''(t)$ with embedded chains $\kappa'_l$ and $\kappa''_l$, respectively, and the same sojourn intensities as for $\kappa(t)$. Furthermore, in the period of time preceding a certain epoch $T_0>0$, the process $\kappa(t)$ jumps with a probability not less than $\delta_1=\lambda\int_0^{T_0}e^{-\lambda s}\,ds$, where $\lambda=\min_{\alpha\in G}\lambda_\alpha$ ($\lambda_\alpha$ is the sojourn intensity of the process $\kappa(t)$ at the state $\alpha\in G$). Thus, in the period of time preceding $T_0$, the processes $\kappa'(t)$ and $\kappa''(t)$ are coupled with a probability not less than $\delta_1\delta^2>0$.

Let us denote $v_{\min}=\min_{i,j}(a_i,b_j)$ and $T=2\frac{V_1-V_0}{v_{\min}}$. Jumps are absent after $T_0$ with a probability $\delta_2=\lambda\int_{T_0}^{\infty}e^{-\lambda s}\,ds$. Therefore, for identical $\kappa'(t)$ and $\kappa''(t)$, in a period of time that does not exceed $T$, the components $v'(t)$ and $v''(t)$, where $v'(t)$ satisfies equation [4.15] driven by the process $\kappa'(t)$, and $v''(t)$ satisfies equation [4.15] driven by the process $\kappa''(t)$, are coupled in an atom. Thus, in the period of time preceding $T+T_0$, the processes $\xi'(t)=(\kappa'(t),v'(t))$ and $\xi''(t)=(\kappa''(t),v''(t))$ are coupled with a probability not less than $\delta_2\delta_1\delta^2>0$.

II. We now consider the case where $\kappa(t)$ is a semi-Markov process. Consider the independent processes
\[
\xi'(t)=(\tau'(t),\kappa'(t),v'(t)),\qquad \xi''(t)=(\tau''(t),\kappa''(t),v''(t)),
\]
where $\kappa'(t)$ and $\kappa''(t)$ are independent semi-Markov processes with embedded chains $\kappa'_l$ and $\kappa''_l$, respectively, and with the same sojourn times as for $\kappa(t)$. As before, $v'(t)$ satisfies equation [4.15] governed by the process $\kappa'(t)$, and $v''(t)$ satisfies equation [4.15] governed by the process $\kappa''(t)$. By using lemma 4.2, we will show that, under certain conditions, the processes $(\tau'(t),\kappa'(t))$ and $(\tau''(t),\kappa''(t))$ can be coupled.

Assume that $\mu_{t,x}(ds)=P\{\tau'\in ds\mid\tau=t,\ \kappa=x\}$ and that a continuous density function $p_{t,x}(s)>0$ exists, i.e. $\mu_{t,x}(ds)=p_{t,x}(s)\,ds$. Also assume that, for a certain $T_0>0$, we have
\[
\inf_{\substack{(t_1,x_1),(t_2,x_2)\\ t_1,t_2\le T_0}}\int_0^{T_0}\min\bigl(p_{t_1,x_1}(s),\,p_{t_2,x_2}(s)\bigr)\,ds=\delta_3>0,
\qquad
\inf_{t,x}\ \mu_{t,x}([0,T_0])=\delta_4>0.
\tag{4.29}
\]

By virtue of [4.29] and lemma 4.2, the processes $(\tau'(t),\kappa'(t))$ and $(\tau''(t),\kappa''(t))$ can be coupled prior to $T_0$, with a probability not less than $\delta_3\delta_4\delta^2$. Furthermore, by analogy with the Markov case, we wait until the components $v'(t)$ and $v''(t)$ are coupled in an atom. If $\max(t_1,t_2)>T_0$, then, with a probability not less than
\[
\delta_5=\int_0^{\infty}\bigl(F_{x_1}(t_1+T_0+t)-F_{x_1}(t_1+t)\bigr)f_{x_2}(t_2+t)\,dt,
\]
we have $\tau'=0$, $\tau''\le T_0$ at the epoch of a jump of $\kappa'(t)$ and, taking [4.29] into account, we can use the coupling lemma 4.2.

For practical applications, e.g. for the calculation of the stationary reliability of a system, it is necessary to know the function $\rho(\alpha,v)$ corresponding to the stationary sojourn of the bivariate process $\zeta(t)=(\kappa(t),v(t))$ at $(\alpha,v)$ (Pogorui 2011b).

This function is determined as follows:
\[
\rho(\alpha,v)=\int_0^{\infty}\rho(\tau,\alpha,v)\,d\tau=c\,\rho_\alpha m_\alpha,\qquad\alpha\in G;
\]
for $x\in X$ we have
\[
\rho[x,V_1]=\int_0^{\infty}\rho[\tau,x,V_1]\,d\tau
=c\Bigl(\frac{1}{2}\rho_x b_x m_x^{(2)}+m_x\sum_{z\in X}\rho_z b_z m_z\sum_{k\in X}p_{zk}g_{kx}\Bigr),
\]
whereas for $y\in Y$ we have
\[
\rho[y,V_0]=\int_0^{\infty}\rho[\tau,y,V_0]\,d\tau
=c\Bigl(\frac{1}{2}\rho_y a_y m_y^{(2)}+m_y\sum_{z\in Y}\rho_z a_z m_z\sum_{k\in Y}p_{zk}g_{ky}\Bigr),
\]
where
\[
c^{-1}=(V_1-V_0)\sum_{\beta\in G}\rho_\beta m_\beta
+\sum_{x\in X}\Bigl(\frac{1}{2}\rho_x b_x m_x^{(2)}+m_x\sum_{z\in X}\rho_z b_z m_z\sum_{k\in X}p_{zk}g_{kx}\Bigr)
+\sum_{y\in Y}\Bigl(\frac{1}{2}\rho_y a_y m_y^{(2)}+m_y\sum_{z\in Y}\rho_z a_z m_z\sum_{k\in Y}p_{zk}g_{ky}\Bigr).
\]

4.2.3. Stationary distribution of random evolution in semi-Markov media with delaying boundaries

Now, we will study the problem of section 4.2.2 without the balance condition D4. We keep the notation of section 4.2.2 and assume that conditions D1–D3 are satisfied. Thus, by solving equation [4.17], we have
\[
\rho(\tau,x,v)=f_x(v-b_x\tau)\,e^{-\int_0^{\tau}r_x(s)ds},\quad x\in X;\qquad
\rho(\tau,y,v)=f_y(v+a_y\tau)\,e^{-\int_0^{\tau}r_y(s)ds},\quad y\in Y.
\]
Considering equations [4.18], we will seek the functions $f_\alpha(\cdot)$, $\alpha\in G$, in the form
\[
f_\alpha(u)=c_\alpha e^{su},\qquad c_\alpha>0,\ \alpha\in G.
\tag{4.30}
\]
By substituting [4.30] into [4.18], we obtain
\[
\sum_{\beta\in G}c_\beta\int_0^{\infty}f_\beta(\tau)\,e^{sk_\beta\tau}\,d\tau\;p_{\beta\alpha}=c_\alpha,\qquad \alpha\in G,
\tag{4.31}
\]
where $f_\beta(\tau)$ in the integral denotes the sojourn time pdf of the state $\beta$.

Here
\[
k_\beta=\begin{cases}-b_\beta, & \beta\in X,\\ a_\beta, & \beta\in Y.\end{cases}
\]
Denote $\tilde f_\beta(sk_\beta)=\int_0^{\infty}f_\beta(\tau)e^{sk_\beta\tau}\,d\tau$. Then it is easily seen that [4.31] has a non-zero solution $(c_\alpha,\ \alpha\in G)$ if and only if the following condition holds:
\[
\det\Bigl(\bigl\{\tilde f_\beta(sk_\beta)\,p_{\beta\alpha}\bigr\}_{\beta,\alpha\in G}-I\Bigr)=0,
\tag{4.32}
\]
where the determinant is of order $n+m$ and $I$ is the identity matrix.
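One root of [4.32] can be exhibited immediately: at $s=0$ every transform $\tilde f_\beta$ equals one, so the determinant reduces to $\det(P-I)$, which vanishes for any stochastic matrix $P$. A numerical sketch (the matrix $P$ is hypothetical):

```python
import numpy as np

# Hypothetical embedded-chain transition matrix (rows sum to 1)
P = np.array([[0.0, 0.6, 0.4],
              [0.8, 0.0, 0.2],
              [0.5, 0.5, 0.0]])

F0 = np.eye(3)   # diag of transforms at s = 0: every f~_beta(0) = 1
print(abs(np.linalg.det(F0 @ P - np.eye(3))) < 1e-12)   # s = 0 is always a root
```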

It is clear that $s=0$ is a solution of equation [4.32]. Let $\{s_0=0,s_1,\dots,s_l\}$, $l\ge1$, be the set of solutions of [4.32]. Each $s_i$, $0\le i\le l$, corresponds to a solution $(c_\alpha^i,\ \alpha\in G)$ of equation [4.31]; for example, $s_0=0$ corresponds to $c_\alpha^0=c\rho_\alpha$, where $c$ is a normalizing factor. Thus, $\rho(\tau,\alpha,v)$ will be sought in the following form:
\[
\rho(\tau,x,v)=\sum_{i=0}^{l}c_x^i\,e^{s_i(v-b_x\tau)}\bigl(1-F_x(\tau)\bigr),\quad x\in X;\qquad
\rho(\tau,y,v)=\sum_{i=0}^{l}c_y^i\,e^{s_i(v+a_y\tau)}\bigl(1-F_y(\tau)\bigr),\quad y\in Y.
\]
Then, by solving [4.19] and [4.20], we obtain
\[
\rho[\tau,x,V_1]=\sum_{i=1}^{l}\frac{c_x^i}{s_i}\bigl(1-F_x(\tau)\bigr)e^{s_iV_1}\bigl(1-e^{-b_xs_i\tau}\bigr)+c\,\rho_x b_x\tau\bigl(1-F_x(\tau)\bigr)+\rho[0,x,V_1]\bigl(1-F_x(\tau)\bigr),
\]
\[
\rho[\tau,y,V_0]=\sum_{i=1}^{l}\frac{c_y^i}{s_i}\bigl(1-F_y(\tau)\bigr)e^{s_iV_0}\bigl(e^{a_ys_i\tau}-1\bigr)+c\,\rho_y a_y\tau\bigl(1-F_y(\tau)\bigr)+\rho[0,y,V_0]\bigl(1-F_y(\tau)\bigr).
\tag{4.33}
\]

Substituting [4.33] into the first two equations of [4.21], we obtain
\[
\rho[0,z,V_1]=\sum_{x\in X}\Bigl[\sum_{i=1}^{l}\frac{e^{s_iV_1}}{s_i}\,c_x^i\bigl(1-\tilde f_x(-s_ib_x)\bigr)+c\,\rho_x b_x m_x\Bigr]\sum_{k\in X}p_{xk}g_{kz},\qquad z\in X,
\]
\[
\rho[0,z,V_0]=\sum_{y\in Y}\Bigl[\sum_{i=1}^{l}\frac{e^{s_iV_0}}{s_i}\,c_y^i\bigl(\tilde f_y(s_ia_y)-1\bigr)+c\,\rho_y a_y m_y\Bigr]\sum_{k\in Y}p_{yk}g_{kz},\qquad z\in Y.
\]
To find the required values $(c_\alpha^i,\ \alpha\in G)$, we substitute these expressions into the last two equations of [4.21]. Hence, we have the following condition:

E. Among the solutions $(c_\alpha^i,\ \alpha\in G)$ of [4.31] corresponding to $s_i$, $i=0,1,\dots,l$, those that are non-zero, $(c_\alpha^{i_r},\ \alpha\in G,\ r=0,1,\dots,d)$, satisfy the following equations:
\[
\sum_{x\in X}\Bigl[\sum_{r=1}^{d}\frac{e^{s_{i_r}V_1}}{s_{i_r}}\,c_x^{i_r}\bigl(1-\tilde f_x(-s_{i_r}b_x)\bigr)+c\,\rho_x b_x m_x\Bigr]\sum_{z\in X}g_{xz}p_{zy}
=\sum_{r=1}^{d}\frac{e^{s_{i_r}V_1}}{s_{i_r}}\,c_y^{i_r}\bigl(\tilde f_y(s_{i_r}a_y)-1\bigr)+c\,a_y m_y\rho_y,\qquad\forall\,y\in Y,
\]
\[
\sum_{y\in Y}\Bigl[\sum_{r=1}^{d}\frac{e^{s_{i_r}V_0}}{s_{i_r}}\,c_y^{i_r}\bigl(\tilde f_y(s_{i_r}a_y)-1\bigr)+c\,\rho_y a_y m_y\Bigr]\sum_{z\in Y}g_{yz}p_{zx}
=\sum_{r=1}^{d}\frac{e^{s_{i_r}V_0}}{s_{i_r}}\,c_x^{i_r}\bigl(1-\tilde f_x(-s_{i_r}b_x)\bigr)+c\,b_x m_x\rho_x,\qquad\forall\,x\in X.
\]

Thus, we have proved the following theorem: T HEOREM 4.3.– If conditions D2 – D3 and E are satisfied, then the Markov process ξ(t) has the stationary distribution ρ (·) with the continuous part: ρ (τ, x, v) = σ1

d 

cixr esir (v+ax τ ) (1 − Fx (τ )),

r=0

ρ (τ, y, v) = σ1

d 

ciyr esir (v−by τ ) (1 − Fy (τ )),

r=0

where

c0α

x ∈ X, y ∈ Y, and the singular part (atoms): 3 d 5 6   esir V0   cixr f˜ (sir ax ) pxk gkz − cixr pxk gkz ρ [0, z, V0 ] = σ1 s i r r=1 = cρα ,

x∈X

k∈X

k∈X

124

Random Motions in Markov and Semi-Markov Random Environments 1

+



c ρ x a x mx

x∈X

4



pxk gkz

,

z ∈ X,

k∈X

⎛ 5 6 d    esir V1  ir ˜ ir ⎝ ρ [0, z, V0 ] = σ1 cy f (sir ay ) pyk gkz − cy pxk gkz s ir r=1 y∈Y

+



c ρy ay my

y∈Y



k∈Y



pxk gkz ⎠ ,

k∈Y

z ∈ Y,

k∈Y

where σ1 is determined from the condition

 Z

ρ (z) dz = 1.

4.3. Stationary efficiency of a system with two unreliable subsystems in cascade and one buffer: the Markov case 4.3.1. Introduction In this section, we study the stationary efficiency, or product transfer effectiveness, of two subsystems that are interconnected in a cascade (series), and where the first subsystem is also connected to a buffer. The buffer delivers product to the second subsystem in the event of the failure of the first subsystem. Both subsystems deliver product to its destination through reliable lines, but these two subsystems are considered unreliable. We model the dynamics of the overall system by using a Markov evolution environment, and we derive design formulas involving the main parameters. We include numerical analysis for some specific cases as well. This basic scheme can be extended in several manners, and as an extension, we analyze the parallel interconnection of these two basic systems. The availability of repairable systems consisting of several stages (or subsystems) in a cascade, also called series systems, has been investigated by several researchers since the early 1960s, by using jump Markov stochastic processes, with exponential distribution for faultless and restoration periods, i.e. up and down periods, as seen in Chapters 6–8 of Gnedenko and Ushakov (1995) (see also, Barlow et al. (1965)). A typical approach to increase the availability of such a series system is to introduce redundancy in some of the subsystems. However, when a subsystem is expensive, such an approach produces prohibitive costs. In this section, we explore another possibility to increase the availability of the system, and as a result, the effectiveness for the transfer of product, by introducing a reliable buffer. We should mention that the random evolutions approach has been proposed before, for modeling the data fluid flow in a single buffer (Kulkarni 1997). However, such models are aimed at capturing the dynamics of statistical multiplexers (Anick et al. 
1982) and leaky-bucket mechanisms for congestion control in data communications

Random Switched Processes with Delay in Reflecting Boundaries

125

(Elwalid and Mitra 1994); see also, Sericola (1998) and Reibman et al. (1989) for further related results. The most important performance measure in such cases is the tail probability for the loss of information. Spectral methods and asymptotic results are used for such a purpose, which are different to our solution method. We remark that in our single buffer scheme, there is no loss of information in the buffer, which can be inferred from conditions 1 and 2 in this section. Our buffer is a spare subsystem aimed at increasing the availability of the whole system. Besides the C(z) driving function given in equation [4.49], typical in the random evolution formulation (Kulkarni 1997), we introduce an additional function f (z) in equation [4.51] to capture the functionality of our scheme. These functions, f (z) and C(z), allow us to model more sophisticated systems, such as the parallel system presented in the second half of this section. The dynamics of this linear system can be captured by a first-order differential equation having a random process. In probabilistic literature, these models are called random evolutions (Korolyuk and Korolyuk 1999). These systems have been analyzed before by using the random evolutions approach. For instance, the cascade of an arbitrary number of subsystems and reservoirs has been investigated in Pogorui (2003), where some asymptotic results have been obtained. Similarly, a cascade of two subsystems and two buffers has been analyzed in Pogorui and Turbin (2002), where a closed-form expression for the effectiveness has been obtained for the so-called “balance” case. However, the analysis becomes too cumbersome to obtain such an effectiveness, and it is not clear which is the fault tolerance gain of the system, by introducing two buffers. In sections 4.3.2–4.3.4, we consider the single system. In section 4.3.2, we elaborate our mathematical modeling. 
In section 4.3.3, we present our main mathematical results, including a closed-form expression for K (effectiveness), whereas numerical results for several special cases are shown in section 4.3.4. 4.3.2. Stationary distribution of Markov stochastic evolutions In reliability problems, where we like to find the stationary parameters of a system’s effectiveness, we need to calculate the stationary distribution of the random processes that are modeling the given system. In particular, we will deal with this problem in the Markov case in this section. Consider an evolutionary switched process v(t) that is determined by the equation dv(t) = C(κ(t), v(t)), dt

[4.34]

where κ(t) is a Markov process with phase space G = X ∪ Y, X = {x1 , . . . , xn }, Y = {y1 , . . . , ym }. Let us assume that in κ(t) there is a homogeneous Markov chain κl , l ≥ 1 embedded, which has a transition probabilities matrix P = {pαβ , α, β ∈ G},

126

Random Motions in Markov and Semi-Markov Random Environments 1

pαβ = P{κl+1 = β | κl = α}, and the holding (or sojourn) time τα of the process κ(t) in a state α has the cumulative distribution function P{τα ≤ t} = 1 − e^{−λα t}. It is known (Korolyuk and Swishchuk 1995b; Korolyuk and Korolyuk 1999) that equation [4.34] determines a stochastic random evolution in a Markov medium. Let the following conditions be satisfied:

– C1) The phase space of the embedded Markov chain κl consists of one ergodic class G.

– C2) For real numbers V0 < V1, ai > 0, bj > 0, i = 1, ..., n; j = 1, ..., m, the function on the right-hand side of equation [4.34] satisfies:

i) for xi ∈ X, i = 1, ..., n: C(xi, v) = −ai for V0 < v ≤ V1, and C(xi, V0) = 0;

ii) for yj ∈ Y, j = 1, ..., m: C(yj, v) = bj for V0 ≤ v < V1, and C(yj, V1) = 0.

Let us consider the two-component (or bivariate) process ξ(t) = (κ(t), v(t)) on the phase space Z = G × [V0, V1]. Our purpose is to investigate the stationary distribution of the Markov process ξ(t). The process ξ(t) has an infinitesimal operator A of the following form (Korolyuk and Swishchuk 1995b; Korolyuk and Korolyuk 1999):

Aϕ(α, v) = C(α, v) (d/dv)ϕ(α, v) + Qϕ(α, v),

where Qϕ(α, v) = λα [Pϕ(α, v) − ϕ(α, v)], and Pϕ(α, v) = Σ_{β∈G} pαβ ϕ(β, v).

If the process ξ(t) has a stationary distribution ρ(·), then for any function ϕ(·) belonging to the domain of definition of the operator A, we have

∫_Z Aϕ(z) ρ(dz) = 0.   [4.35]

The analysis of the properties of the process ξ(t) leads to the conclusion that, at the points (x, V0), x ∈ X, and (y, V1), y ∈ Y, of the phase space Z, the stationary distribution ρ(·) has atoms in v.

Random Switched Processes with Delay in Reflecting Boundaries

Henceforth, we shall denote these atoms by ρ[x, V0] and ρ[y, V1], and the probability density by ρ(α, v), α ∈ G, V0 < v < V1. Changing the order of integration in [4.35], we obtain the equation A∗ρ = 0, where A∗ is the conjugate (dual, or adjoint) operator of A. Namely,

∫_Z Aϕ(z) ρ(dz)
  = Σ_{α∈G} ∫_{V0}^{V1} ( C(α, v) (d/dv)ϕ(α, v) + λα (Pϕ(α, v) − ϕ(α, v)) ) ρ(α, v) dv
  = Σ_{α∈G} ( ϕ(α, V1) C(α, V1) ρ(α, V1−) − ϕ(α, V0) C(α, V0) ρ(α, V0+)
      − ∫_{V0}^{V1} ϕ(α, v) C(α, v) (d/dv)ρ(α, v) dv )
    + Σ_{α∈G} λα ∫_{V0}^{V1} ρ(α, v) (Pϕ(α, v) − ϕ(α, v)) dv = 0,   [4.36]

where ρ(α, V0+) = lim_{v↓V0} ρ(α, v) and ρ(α, V1−) = lim_{v↑V1} ρ(α, v).

As a result, for the density we have

C(α, v) (d/dv)ρ(α, v) + Q∗ρ(α, v) = 0,   α ∈ G, V0 < v < V1,   [4.37]

where Q∗ is the conjugate operator of Q:

Q∗ρ(α, v) = Σ_{β∈G} λβ pβα ρ(β, v) − λα ρ(α, v).

Denote by ρ[V0] = (ρ[x1, V0], ..., ρ[xn, V0], 0, ..., 0) and ρ[V1] = (0, ..., 0, ρ[y1, V1], ..., ρ[ym, V1]) the corresponding (n + m)-dimensional vectors, and by C = diag(−a1, −a2, ..., −an, b1, b2, ..., bm) the (n + m) × (n + m) diagonal matrix. We denote by Q the matrix representation of the operator Q. Thus,

Q = ⎛ −λx1        λx1 px1x2   ...   λx1 px1ym ⎞
    ⎜ λx2 px2x1   −λx2        ...   λx2 px2ym ⎟
    ⎜ ...         ...         ...   ...       ⎟
    ⎝ λym pymx1   λym pymx2   ...   −λym      ⎠,


where it is assumed that pαα = 0 for all α ∈ G. For the atoms, it follows from [4.36] that

ρ[V0]Q = ρ(V0+)C,   ρ[V1]Q = ρ(V1−)C.   [4.38]

We write equations [4.37] in vector–matrix form:

(d/dv)ρ(v) C + ρ(v) Q = 0,   v ∈ (V0, V1).   [4.39]

Now, by solving [4.39] we obtain

ρ(v) = ρ(V0+) exp(QC⁻¹(V0 − v)),   v ∈ (V0, V1).   [4.40]

The measure ρ(V0+) and the atoms ρ[V0] and ρ[V1] can be obtained from equations [4.38]. These values are determined up to a normalizing factor c. This fact is a consequence of condition C1, which implies that the equation Qx = 0 has a unique solution x = 1 (up to a normalizing factor), where 1 = (1, ..., 1)∗ is an (n + m)-vector. Hence, the matrix formulas [4.38] contain 2(n + m) − 2 independent equations, while the vectors ρ(V0+), ρ[V0], ρ[V1] contain 2(n + m) − 1 independent variables; this is because the vector ρ(V1−) is uniquely determined in terms of ρ(V0+) by formula [4.40]. The normalizing factor c can be obtained from the following equation:

∫_{V0}^{V1} ρ(v)1 dv + (ρ[V0] + ρ[V1])1 = 1.   [4.41]
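As a quick numerical sanity check on the construction [4.38]–[4.40] (this sketch is not part of the original text), the density [4.40] can be evaluated with a matrix exponential. The two-state generator below, with one down-drift state of rate a and one up-drift state of rate b, is a hypothetical example; all parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state medium: state x drifts at -a, state y at +b,
# with holding intensities lam_x, lam_y and V0 = 0, V1 = 1.
lam_x, lam_y = 1.0, 2.0
a, b, V0, V1 = 1.0, 1.5, 0.0, 1.0

Q = np.array([[-lam_x, lam_x],
              [lam_y, -lam_y]])       # generator: rows sum to zero
C = np.diag([-a, b])                  # drift matrix from condition C2

# Boundary density from the atom equations [4.38], unnormalized (c = rho[x, V0] = 1):
rho_V0 = np.array([lam_x / a, lam_x / b])

def rho(v):
    """Stationary density rho(v) = rho(V0+) exp(Q C^{-1} (V0 - v)), eq. [4.40]."""
    return rho_V0 @ expm(Q @ np.linalg.inv(C) * (V0 - v))

assert np.allclose(Q.sum(axis=1), 0.0)   # generator property
assert np.all(rho(0.5) > 0)              # a density must be positive
```

A finite-difference check confirms that this ρ(v) satisfies the balance equation [4.39] on (V0, V1).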

Thus, the following theorem has been proved.

THEOREM 4.4.– If conditions C1 and C2 hold, then the Markov process ξ(t) = (κ(t), v(t)) has a stationary distribution ρ(·) with density determined by the formula ρ(v) = ρ(V0+) exp(QC⁻¹(V0 − v)), v ∈ (V0, V1), and atoms ρ[V0], ρ[V1] determined by equations [4.38].

EXAMPLE 4.3.– Let us consider the special case n = 1, i.e. X = {x}, Y = {y1, y2, ..., ym}. Then

ρ[V0] = (ρ[x, V0], 0, ..., 0),   ρ[V1] = (0, ρ[y1, V1], ..., ρ[ym, V1]),

and the matrices Q and C are given by

Q = ⎛ −λx        λx pxy1     ...   λx pxym   ⎞
    ⎜ λy1 py1x   −λy1        ...   λy1 py1ym ⎟
    ⎜ ...        ...         ...   ...       ⎟
    ⎝ λym pymx   λym pymy1   ...   −λym      ⎠,

C = diag(−a, b1, ..., bm).

The first formula of [4.38] implies

ρ(x, V0+) = λx ρ[x, V0]/a,   ρ(y1, V0+) = λx pxy1 ρ[x, V0]/b1,   ...,   ρ(ym, V0+) = λx pxym ρ[x, V0]/bm.

The second equation of [4.38] takes the following form:

(0, ρ[y1, V1], ..., ρ[ym, V1])Q = ρ(V0+) exp(QC⁻¹(V0 − V1))C.   [4.42]

Denote by QY and CY the matrices obtained from Q and C after removing the first row and the first column, and set ρY(V1−) = (ρ(y1, V1−), ..., ρ(ym, V1−)). Condition C2 implies the existence of the inverse matrix QY⁻¹ of QY. Consequently, taking into account [4.38], we have

(ρ[y1, V1], ..., ρ[ym, V1]) = (ρ(y1, V1−), ..., ρ(ym, V1−)) CY QY⁻¹.   [4.43]

Now, since ρY(V1−) is uniquely expressed in terms of ρY(V0+), we obtain the stationary distribution of ξ(t) with density

ρ(v) = (λx c/a, λx pxy1 c/b1, ..., λx pxym c/bm) exp(QC⁻¹(V0 − v)),   v ∈ (V0, V1).   [4.44]

The atoms are then determined by formula [4.43], and the normalizing factor c = ρ[x, V0] is determined by [4.41].

4.3.3. Stationary efficiency of a system with two unreliable subsystems in cascade and one buffer

In this section, as an important example of the previous one, we analyze a series system with two subsystems and a single buffer, as shown in Figure 4.1. We also obtain a closed-form expression for the effectiveness of this system and establish the fault-tolerance gain provided by the introduction of the buffer.



Figure 4.1. A system of two unreliable subsystems, say S1 and S2 , connected in series with a buffer B with capacity V . For a color version of this figure, see www.iste.co.uk/pogorui/random1.zip

Let us describe the main elements of this system. The two subsystems, or transmission subsystems, transferring information are denoted by S1 and S2, with a reservoir (storage or buffer) B, which can accumulate a maximum amount V of information or product (its capacity). All of the interconnection lines are considered reliable, and the time for failure detection is negligible. The non-failure operating time τi(1) of subsystem Si (up period) is an exponentially distributed random variable with failure rate λi > 0, i = 1, 2, and the renewal (downtime or restoration) time τi(2) of subsystem Si is also an exponentially distributed random variable with repair rate μi, i = 1, 2. The system functions as follows:

1) If subsystems S1 and S2 are working properly and the reservoir B is full, then the product is transferred from subsystem S1 to subsystem S2 and then to its destination D (or consumer), with velocity a.

2) If subsystems S1 and S2 are working and the buffer B is not full, then the product goes from subsystem S1 to subsystem S2, and then to its destination D, with velocity a. In addition, the product goes from subsystem S1 to the buffer B, with velocity b.

3) If subsystem S1 is not working, subsystem S2 is working and the buffer B is not empty, then the product flows from the buffer B to subsystem S2, and then to its destination D, with velocity a.

4) If subsystem S2 is not working, then the product is not transferred to its destination D. However, if subsystem S1 is working and the buffer B is not full, then the product flows from S1 to B, with velocity b.

5) If subsystem S1 is not working, subsystem S2 is working and the buffer B is empty, then the product does not flow to the destination D.


Typically, fault tolerance has been provided by adding redundant spares to the system. However, in this scheme we provide some degree of fault tolerance by introducing a buffer. As we know, data memories are becoming cheaper on the market and, at present, they are very reliable, both because of the technology itself and because of the error-correcting codes added in their electronic design.

Now, let us denote by V(T) the volume or amount of product delivered to destination D in the time interval [0, T]. Thus, we can define K = lim_{T→∞} V(T)/T (when it exists) as the steady-state parameter for system performance (or effectiveness). Our main purpose in this subsection is to determine K as a function of the system parameters a, b, V, λ1, λ2, μ1 and μ2.

4.3.4. Mathematical model

Let us introduce the following stochastic process {χ(t)}:

χ(t) := ⎧ 00, if S1 and S2 are not working;
        ⎨ 01, if S2 is working, and S1 is not working;
        ⎨ 10, if S1 is working, and S2 is not working;
        ⎩ 11, if S1 and S2 are working.
   [4.45]

The stochastic process χ(t) is a Markov process on the phase space (set of states) Z = {00, 01, 10, 11}. To simplify notation, we make the following correspondence: 00 ⇔ 0, 01 ⇔ 1, 10 ⇔ 2, and 11 ⇔ 3. Hence, the generating operator (or matrix) of χ(t) can be written as (Korolyuk and Korolyuk 1999):

Q = q[P − I] = ⎡ −(μ1 + μ2)   μ2           μ1           0          ⎤
               ⎢ λ2           −(μ1 + λ2)   0            μ1         ⎥
               ⎢ λ1           0            −(λ1 + μ2)   μ2         ⎥
               ⎣ 0            λ1           λ2           −(λ1 + λ2) ⎦,
   [4.46]

where q = [qi δij; i, j ∈ {0, 1, 2, 3}] is the diagonal matrix of the sojourn-time intensities of the different states, with q0 = μ1 + μ2, q1 = μ1 + λ2, q2 = λ1 + μ2, q3 = λ1 + λ2. Here, as usual, the Kronecker delta is defined as

δij := 1 if i = j;   0 if i ≠ j.   [4.47]
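The relation Q = q[P − I] of [4.46] can be checked numerically. The following sketch (not part of the original text; rates are hypothetical) builds Q, recovers the embedded-chain matrix P = I + q⁻¹Q, and verifies the basic generator properties.

```python
import numpy as np

# Hypothetical failure rates l1, l2 and repair rates m1, m2.
l1, l2, m1, m2 = 0.5, 0.7, 3.0, 4.0

Q = np.array([
    [-(m1 + m2), m2,          m1,          0.0],
    [l2,         -(m1 + l2),  0.0,         m1],
    [l1,         0.0,         -(l1 + m2),  m2],
    [0.0,        l1,          l2,          -(l1 + l2)],
])

q = np.diag([m1 + m2, m1 + l2, l1 + m2, l1 + l2])  # sojourn-time intensities

# Recover the embedded-chain transition matrix from Q = q(P - I):
P = np.linalg.inv(q) @ Q + np.eye(4)

assert np.allclose(Q.sum(axis=1), 0.0)   # generator rows sum to zero
assert np.allclose(P.sum(axis=1), 1.0)   # P is a stochastic matrix
assert np.allclose(np.diag(P), 0.0)      # p_aa = 0 for the embedded chain
```

The recovered P coincides with the matrix [4.48] below, e.g. P[0, 1] = μ2/(μ1 + μ2).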


The elements of the matrix P are the transition probabilities of the Markov chain embedded in the Markov process χ(t), i.e.

P = ⎡ 0               μ2/(μ1 + μ2)   μ1/(μ1 + μ2)   0            ⎤
    ⎢ λ2/(μ1 + λ2)   0              0              μ1/(μ1 + λ2) ⎥
    ⎢ λ1/(λ1 + μ2)   0              0              μ2/(λ1 + μ2) ⎥
    ⎣ 0               λ1/(λ1 + λ2)  λ2/(λ1 + λ2)   0            ⎦.
   [4.48]

Consider a function C(z) on the space Z = {00, 01, 10, 11} × [0, V ], defined as

C(z) := ⎧ b,   z = {11, v} or z = {10, v}, v < V;
        ⎨ −a,  z = {01, v}, v > 0;
        ⎩ 0,   in other cases.
   [4.49]

Let ν(t) be the amount of information (or product, e.g. oil) in the reservoir B at time t; it is easily verified that ν(t) obeys the ordinary differential equation

(d/dt) ν(t) = C(χ(t), ν(t)),   [4.50]

with initial condition ν(0) = ν0 ∈ [0, V]. It can be said that equation [4.50] determines the random evolution of the system in the Markov medium χ(t) (Korolyuk and Korolyuk 1999). Now, let us define the function f(z) on Z as follows:

f(z) := ⎧ a,  if z = {11, v}, 0 ≤ v ≤ V;
        ⎨ a,  if z = {01, v}, v > 0;
        ⎩ 0,  in other cases.
   [4.51]

Consider now the joint stochastic process with two-dimensional phase space ξ(t) = (χ(t), ν(t)). Then, we can state the following equality:

K = lim_{T→∞} V(T)/T = lim_{T→∞} (1/T) ∫₀ᵀ f(ξ(t)) dt.   [4.52]


It follows from ergodic theory (Gikhman and Skorokhod 1975; Korolyuk and Turbin 1993; Reed and Simon 1972) that if the process ξ(t) has a stationary distribution, say ρ(·), then

lim_{T→∞} (1/T) ∫₀ᵀ f(ξ(t)) dt = ∫_Z f(z) dρ(z).   [4.53]

Hence, by using equation [4.52] we have

K = ∫_Z f(z) dρ(z) = ∫_Z f(z) ρ(z) dz.   [4.54]

In summary, the calculation of the parameter K can be reduced to the calculation of the stationary distribution ρ of the process ξ(t).

4.3.5. Main mathematical results

The sojourn-time distribution functions, say Qij(t), have the following form for the different states: Q00(t) = 1 − e^{−(μ1+μ2)t}, Q01(t) = 1 − e^{−(μ1+λ2)t}, Q10(t) = 1 − e^{−(λ1+μ2)t} and Q11(t) = 1 − e^{−(λ1+λ2)t}. Now, denote qx(t) = (d/dt)Qx(t) and rx = qx(t)/(1 − Qx(t)) for all x ∈ Z, i.e. r00 = μ1 + μ2, r01 = μ1 + λ2, r10 = λ1 + μ2 and r11 = λ1 + λ2. Then, the two-component process ξ(t) = (ν(t), χ(t)) is a Markov process with the generator (Korolyuk and Korolyuk 1999; Pogorui and Turbin 2002)

Aϕ(x, v) = C(x, v) (∂/∂v)ϕ(x, v) + rx [Pϕ(x, v) − ϕ(x, v)],   [4.55]

where Pϕ(x, v) = Σ_{y∈Z} pxy ϕ(y, v), or equivalently

Aϕ(x, v) = C(x, v) (∂/∂v)ϕ(x, v) + Qϕ(x, v).   [4.56]

Hence, since ρ(·) is the stationary distribution of the process ξ(t), for every function ϕ(·) belonging to the domain of the operator A we have

∫_Z Aϕ(z) ρ(dz) = 0.   [4.57]

CONDITION 4.1.– Since at the atoms (01, 0) and (10, V) the subsystems S1 and S2 are not involved in the functioning of the system, we assume that they are reliable there, i.e. λ1 = λ2 = 0 at (01, 0) and (10, V).


The analysis of the properties of the process ξ(t) leads to the conclusion that the stationary distribution ρ has atoms at the points (01, 0), (10, V) and (11, V), which we denote by ρ[01, 0], ρ[10, V] and ρ[11, V], respectively. The continuous part of the stationary distribution ρ is denoted by ρ(ij, v), i, j ∈ {0, 1}. Let us write equation [4.57] in more detail, as follows:

∫_Z Aϕ(z)ρ(dz)
  = ∫₀^V { [−(μ1 + μ2)ϕ(00, v) + μ2 ϕ(01, v) + μ1 ϕ(10, v)] ρ(00, v)
    + [C(01, v)(∂/∂v)ϕ(01, v) + λ2 ϕ(00, v) − (λ2 + μ1)ϕ(01, v) + μ1 ϕ(11, v)] ρ(01, v)
    + [C(10, v)(∂/∂v)ϕ(10, v) + λ1 ϕ(00, v) − (λ1 + μ2)ϕ(10, v) + μ2 ϕ(11, v)] ρ(10, v)
    + [C(11, v)(∂/∂v)ϕ(11, v) + λ1 ϕ(01, v) + λ2 ϕ(10, v) − (λ1 + λ2)ϕ(11, v)] ρ(11, v) } dv
  + (ϕ(11, 0) − ϕ(01, 0)) μ1 ρ[01, 0] + (ϕ(11, V) − ϕ(10, V)) μ2 ρ[10, V]
  + (λ1 ϕ(01, V) + λ2 ϕ(10, V) − (λ1 + λ2)ϕ(11, V)) ρ[11, V] = 0.
   [4.58]

Now, let A∗ be the conjugate operator of A. Then, by changing the order of integration in equation [4.58], we can obtain the following expressions for the continuous part of A∗ρ = 0:

⎧ a (∂/∂v)ρ(01, v) + μ2 ρ(00, v) + λ1 ρ(11, v) − (λ2 + μ1)ρ(01, v) = 0
⎨ −b (∂/∂v)ρ(10, v) + μ1 ρ(00, v) + λ2 ρ(11, v) − (λ1 + μ2)ρ(10, v) = 0
⎨ −b (∂/∂v)ρ(11, v) + μ1 ρ(01, v) + μ2 ρ(10, v) − (λ1 + λ2)ρ(11, v) = 0
⎩ −(μ1 + μ2)ρ(00, v) + λ2 ρ(01, v) + λ1 ρ(10, v) = 0.
   [4.59]

Hence, it follows that

aρ(01, v) − bρ(10, v) − bρ(11, v) = c0 = constant.   [4.60]

Similarly, from equation [4.57] we obtain

⎧ aρ(01, 0+) − μ1 ρ[01, 0] = 0
⎨ μ1 ρ[01, 0] − bρ(11, 0+) = 0
⎩ ρ(10, 0+) = 0,
   [4.61]


⎧ ⎪ ⎪ ⎪ ⎪ ⎨ ⎪ ⎪ ⎪ ⎪ ⎩

−aρ(01, V − ) + λ1 ρ[11, V ]

135

= 0

bρ(11, V − ) + μ2 ρ[10, V ] − (λ1 + λ2 )ρ[11, V ] = 0 .

[4.62]



bρ(10, V ) − μ2 ρ[10, V ] + λ2 ρ[11, V ] = 0

In these equations, we have defined the notation ρ(ij, 0+ ) := lim ρ(ij, v) and v↓0

ρ(ij, V − ) := lim ρ(ij, v) v↑V

for all i, j ∈ {0, 1}. Combining equations [4.61] and [4.62], we obtain ⎧ + + + ⎪ ⎨ aρ(01, 0 ) − bρ(10, 0 ) − bρ(11, 0 ) = 0 − − − ⎪ ⎩ aρ(01, V ) − bρ(10, V ) − bρ(11, V ) = 0.

[4.63]

It follows from equations [4.61] and [4.63] that c0 = 0 in equation [4.60], so aρ(01, v) − bρ(10, v) − bρ(11, v) = 0.

[4.64]

Hence, the first two equations of the set of equations [4.59] can be written in the following form: ⎧ ∂ρ(01, v) ⎪ ⎪ = c11 ρ(01, v) + c12 ρ(10, v) ⎪ ⎪ ⎨ ∂v [4.65] ∂ρ(10, v) ⎪ ⎪ ρ(01, v) + c ρ(10, v), = c ⎪ 21 22 ⎪ ⎩ ∂v where c11 = − c12 = c21 =

λ1 μ 1 λ2 μ1 + + , b a(μ1 + μ2 ) a

μ1 λ 1 , a(μ1 + μ2 ) μ1 λ2 aλ2 , + b2 b(μ1 + μ2 )

c22 = −

λ2 μ 2 λ1 μ2 − − . b b(μ1 + μ2 ) b

[4.66]


Solving the set of equations [4.65], we obtain

⎧ ρ(01, v) = c1 e^{δ1 v} + c2 e^{δ2 v}
⎩ ρ(10, v) = (c1/c12)(δ1 − c11) e^{δ1 v} + (c2/c12)(δ2 − c11) e^{δ2 v},
   [4.67]

where

δ1 = [ c11 + c22 + √((c22 − c11)² + 4 c12 c21) ]/2,   [4.68]
δ2 = [ c11 + c22 − √((c22 − c11)² + 4 c12 c21) ]/2.   [4.69]

We know from equation [4.61] that ρ(10, 0+) = 0, and from the second equation of the set [4.67] we have

c2 = c1 (c11 − δ1)/(δ2 − c11).   [4.70]

Hence, the set of equations [4.67] can be rewritten as

⎧ ρ(01, v) = c ( e^{δ1 v} + ((c11 − δ1)/(δ2 − c11)) e^{δ2 v} )
⎩ ρ(10, v) = (c/c12)(δ1 − c11)( e^{δ1 v} − e^{δ2 v} ).
   [4.71]

The factor c can be calculated from the normalization equation

∫_Z ρ(z) dz = 1.   [4.72]

For such a purpose, we need to find a few more results for the stationary distribution ρ(·). For instance, by rewriting equation [4.64] and using equation [4.71], we have the solution for ρ(11, v):

ρ(11, v) = (a/b) ρ(01, v) − ρ(10, v).   [4.73]

Similarly, by rewriting the last equation of the set [4.59], we obtain the solution for ρ(00, v) as

ρ(00, v) = ( λ2 ρ(01, v) + λ1 ρ(10, v) )/(μ1 + μ2).   [4.74]
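The solution [4.66]–[4.71] is easy to verify numerically. The sketch below (illustrative only; all rates are hypothetical) evaluates the coefficients, the roots δ1, δ2 and the unnormalized densities, and checks them against the ODE system [4.65] by finite differences.

```python
import numpy as np

# Hypothetical rates: failure l1, l2; repair m1, m2; velocities a, b.
l1, l2, m1, m2, a, b = 0.5, 0.7, 3.0, 4.0, 1.0, 1.0

c11 = -l1/b + m1*l2/(a*(m1 + m2)) + m1/a      # coefficients [4.66]
c12 = m1*l1/(a*(m1 + m2))
c21 = a*l2/b**2 + m1*l2/(b*(m1 + m2))
c22 = -l2/b - m2*l1/(b*(m1 + m2)) - m2/b

disc = np.sqrt((c22 - c11)**2 + 4*c12*c21)
d1, d2 = (c11 + c22 + disc)/2, (c11 + c22 - disc)/2   # roots [4.68], [4.69]

c = 1.0  # unnormalized; fixed later by the normalization [4.72]
rho01 = lambda v: c*(np.exp(d1*v) + (c11 - d1)/(d2 - c11)*np.exp(d2*v))
rho10 = lambda v: (c/c12)*(d1 - c11)*(np.exp(d1*v) - np.exp(d2*v))

assert abs(rho10(0.0)) < 1e-12                # boundary condition rho(10, 0+) = 0
h, v = 1e-6, 0.3
d_rho01 = (rho01(v + h) - rho01(v - h))/(2*h)
assert np.isclose(d_rho01, c11*rho01(v) + c12*rho10(v), atol=1e-5)
```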


The corresponding equations for the atoms of the stationary distribution ρ of ξ(t) can be obtained from equations [4.61] and [4.62]. For instance, ρ[01, 0] can be obtained directly from the first equation of the set [4.61], and it is given by

ρ[01, 0] = (a/μ1) ρ(01, 0+).   [4.75]

In the same manner,

ρ[11, V] = (a/λ1) ρ(01, V−),   [4.76]

ρ[10, V] = (b/μ2) ρ(10, V−) + (aλ2/(λ1 μ2)) ρ(01, V−),   [4.77]

are obtained from the set of equations [4.62]. Therefore, substituting [4.71] and [4.73]–[4.77] into the normalization equation [4.72], we obtain the following closed-form expression for c:

1/c = ((e^{δ1 V} − 1)/δ1) [ 1 + a/b + λ2/(μ1 + μ2) + ((δ1 − c11)/c12) λ1/(μ1 + μ2) ]
    + ((e^{δ2 V} − 1)/δ2) [ ((c11 − δ1)/(δ2 − c11)) (1 + a/b + λ2/(μ1 + μ2)) − ((δ1 − c11)/c12) λ1/(μ1 + μ2) ]
    + (a/μ1) ( 1 + (c11 − δ1)/(δ2 − c11) )
    + (a/λ1)(1 + λ2/μ2) ( e^{δ1 V} + ((c11 − δ1)/(δ2 − c11)) e^{δ2 V} )
    + (b/μ2) ((δ1 − c11)/c12) ( e^{δ1 V} − e^{δ2 V} ).
   [4.78]

Now, from the formula

K = a ∫₀^V ( ρ(11, v) + ρ(01, v) ) dv + aρ[11, V],   [4.79]

we have

K/(ac) = ((e^{δ1 V} − 1)/δ1) ( 1 + a/b − (δ1 − c11)/c12 )
       + ((e^{δ2 V} − 1)/δ2) ( ((c11 − δ1)/(δ2 − c11)) (1 + a/b) + (δ1 − c11)/c12 )
       + (a/λ1) ( e^{δ1 V} + ((c11 − δ1)/(δ2 − c11)) e^{δ2 V} ).
   [4.80]
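The closed forms [4.78] and [4.80] can be exercised numerically. The following sketch (not from the original text; symmetric rates λ = 1, μ = 10 are illustrative) evaluates 1/c and K, and checks that K tends to the bufferless efficiency K0 of [4.81] as V → 0.

```python
import numpy as np

l1 = l2 = 1.0; m1 = m2 = 10.0; a = b = 1.0   # hypothetical symmetric case

c11 = -l1/b + m1*l2/(a*(m1 + m2)) + m1/a     # coefficients [4.66]
c12 = m1*l1/(a*(m1 + m2))
c21 = a*l2/b**2 + m1*l2/(b*(m1 + m2))
c22 = -l2/b - m2*l1/(b*(m1 + m2)) - m2/b
disc = np.sqrt((c22 - c11)**2 + 4*c12*c21)
d1, d2 = (c11 + c22 + disc)/2, (c11 + c22 - disc)/2   # roots [4.68], [4.69]
k = (c11 - d1)/(d2 - c11)                             # ratio c2/c1, eq. [4.70]

def E(d, V):
    """int_0^V e^{d v} dv."""
    return (np.exp(d*V) - 1.0)/d

def inv_c(V):
    """Right-hand side of [4.78], i.e. 1/c."""
    return (E(d1, V)*(1 + a/b + l2/(m1 + m2) + (d1 - c11)/c12*l1/(m1 + m2))
            + E(d2, V)*(k*(1 + a/b + l2/(m1 + m2)) - (d1 - c11)/c12*l1/(m1 + m2))
            + a/m1*(1 + k)
            + a/l1*(1 + l2/m2)*(np.exp(d1*V) + k*np.exp(d2*V))
            + b/m2*(d1 - c11)/c12*(np.exp(d1*V) - np.exp(d2*V)))

def K(V):
    """Efficiency, eq. [4.80] multiplied by ac."""
    return a/inv_c(V)*(E(d1, V)*(1 + a/b - (d1 - c11)/c12)
                       + E(d2, V)*(k*(1 + a/b) + (d1 - c11)/c12)
                       + a/l1*(np.exp(d1*V) + k*np.exp(d2*V)))

K0 = a/(1 + l1/m1 + l2/m2)      # eq. [4.81], V = 0
assert np.isclose(K(1e-9), K0, rtol=1e-5)
assert K0 < K(5.0) < a          # the buffer strictly improves the efficiency
```

For these rates (λ/μ = 0.1), K(5.0) comes out ≈ 0.90909, in agreement with the corresponding entry of Table 4.1 below; summing the continuous mass and the atoms [4.75]–[4.77] also reproduces the normalization [4.72].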


4.3.6. Numerical results for the symmetric case

It is interesting to mention that when V = 0, we obtain an efficiency K, which we call K0, given by

K0 = a / ( 1 + λ1/μ1 + λ2/μ2 ).   [4.81]


On the other hand, it is well known that the availability of a series system, say Av, similar to the system in Figure 4.1, is given by

Av = [ (1/λ1)/(1/λ1 + 1/μ1) ] [ (1/λ2)/(1/λ2 + 1/μ2) ] = 1 / ( 1 + λ1/μ1 + λ2/μ2 + (λ1/μ1)(λ2/μ2) ).   [4.82]


Figure 4.2. Efficiency parameter K as a function of reservoir size V for different (λ, μ) combinations. We have fixed a = 1 and b = 1

Hence, the efficiency of this series system, say K0∗, is given by aAv. We should remark that K0∗ ≈ K0 for most practical systems, since λi ≪ μi. However, they are slightly different due to condition 4.1.


In order to gain some insight into the behavior of this system, we analyze numerical results for the symmetric case, i.e. λ1 = λ2 = λ and μ1 = μ2 = μ.


In Figure 4.2, we plot the efficiency K as a function of reservoir size V , for three different values of the ratio λ/μ, namely λ = 1, and μ = 100, 10 and 1. We assume a = b = 1. We can observe that K increases as the buffer capacity V increases. The maximum value for K is achieved for smaller buffer capacities, when the ratio λ/μ is smaller (systems with higher availabilities). However, the fault tolerance gain provided by the buffer (Kmax − K0∗ ) is also smaller in these cases.


Figure 4.3. Efficiency parameter K as a function of reservoir size V for different (λ, μ) combinations. We have fixed a = 1 and b = 1

A similar behavior can be observed in Figure 4.3, where we plot K for three different values of the ratio λ/μ, namely λ = 0.1, and μ = 0.1, 1, and 10. The influence of the parameter λ on K is that the maximum is achieved at larger buffer capacities. This is stated with more detail in Table 4.1, where we can observe that the maximum value of K is not the same at V = 5, being smaller when λ = 0.1 for the


same λ/μ ratios. This means that larger buffer capacities are required to achieve the same fault-tolerance gain when the system has smaller values of λ.

λ    μ    λ/μ   max K at V = 5   min K = K0   K0∗
1    100  0.01  0.990099         0.9803922    0.980296
1    10   0.1   0.909090         0.8333333    0.8264463
1    1    1     0.49775          0.3333333    0.25
0.1  10   0.01  0.990099         0.9803922    0.980296
0.1  1    0.1   0.9085053        0.8333333    0.8264463
0.1  0.1  1     0.4132633        0.3333333    0.25

Table 4.1. Maximum and minimum values of K for different λ/μ ratios. We have fixed a = 1 and b = 1

In Table 4.2, we state the effect on the maximum K of varying the parameter b. Variation of this parameter around 1 is not important for λ/μ ratios of 0.1 and 0.01. It does matter in cases of low availability, and in such cases a ≈ b is recommended.

b    λ   max K when μ = 1   max K when μ = 10   max K when μ = 100
0.1  1   0.3413731          0.9090294           0.990099
0.3  1   0.4196899          0.9090909           0.990099
0.6  1   0.4876704          0.9090909           0.990099
0.9  1   0.496913           0.9090909           0.990099
1.1  1   0.4982768          0.9090909           0.990099
2.0  1   0.4995211          0.9090909           0.990099
5.0  1   0.4998276          0.9090909           0.990099
10   1   0.4998797          0.9090909           0.990099
100  1   0.4999138          0.9090909           0.990099

b    λ    max K when μ = 0.1   max K when μ = 1   max K when μ = 10
0.1  0.1  0.3414187            0.8989422          0.990099
0.3  0.1  0.3859017            0.9076991          0.990099
0.6  0.1  0.4050227            0.9083301          0.990099
0.9  0.1  0.4118787            0.9084797          0.990099
1.1  0.1  0.4143976            0.9085254          0.990099
2.0  0.1  0.4194981            0.9086085          0.990099
5.0  0.1  0.4232016            0.9086621          0.990099
10   0.1  0.4244218            0.9086788          0.990099
100  0.1  0.4255116            0.9086933          0.990099

Table 4.2. Maximum values of K at V = 5 for different λ/μ ratios and different values of b. We have fixed a = 1


4.4. Application of random evolutions with delaying barriers to modeling control of supply systems with feedback: the semi-Markov switching process

We will calculate the stationary efficiency of a supply system with feedback. The system considered consists of one or more unreliable flows, or flows with interruptions (or aggregates), and a corresponding number of reservoirs (or buffers). As we will see, these systems can be modeled by random evolutions in Markov or semi-Markov environments, and the calculation of the stationary efficiency of such systems can often be reduced to finding the stationary distribution of the corresponding random evolution.

Some of the first investigations on the reliability theory of systems with storages or reservoirs were done by Sevastyanov (1962), Buzacott (1967) and Cherkesov (1974). These works used the technique of jump Markov processes. More works related to this topic are given in Kulkarni (1997), Malathornas et al. (1983) and Mitra (1998), among others. In some works, for instance Korolyuk and Turbin (1982), the requirement of exponential distributions of faultless operation and restoration is removed, and semi-Markov models of operation systems with reservoirs are studied.

A limitation of the methods mentioned above is the assumption that the reserves are added instantaneously to the reservoirs, which is a rather strong assumption. In Pogorui and Turbin (2002) and Pogorui (2003), the methodology of Markov evolutions is used for the study of a two-phase system with two reservoirs. This method does not need the assumption of instantaneous addition of reserves to the reservoirs. In Pogorui (2004a), system effectiveness is studied by using the phase merging scheme (Korolyuk and Swishchuk 1995b; Korolyuk and Korolyuk 1999; Korolyuk and Limnios 2005). So, in this section, the operation of the system is modeled by a random evolution in an alternating semi-Markov medium.

4.4.1. Estimation of stationary efficiency of a one-phase system with a reservoir

Consider a system that consists of a supply line or flow A, a reservoir (accumulator, container or buffer) B, and a customer C. The supply line A is unreliable, with ON–OFF cycles: the time of non-failure operation (ON period) τ1 of A is a random variable with a general distribution function G1(t), and the time of failure (OFF period) τ0 of A is a random variable with a general distribution function G0(t). Let us denote by V the volume (or capacity) of the reservoir B. The system operates in the following manner:

1) If the supply line A is ON and the reservoir B is full, then the product goes from A to C at rate a0.

2) If the supply line A is ON and the reservoir B is not full, then the product goes from A to C at rate a0. In addition, the product goes from A to B at rate b.

3) If the supply line A is OFF and the reservoir B is not empty, then the product goes from B to C at rate a1.

4) If the reservoir B is full, then the supply line does not provide any product to the reservoir.

5) If the supply line A is OFF and the reservoir B is empty, then the customer C does not receive any product.

Denote by VT the total amount of product received by the customer during the time interval [0, T]. The limit K = lim_{T→∞} VT/T, if it exists, will be called the stationary average productivity of the system.

Our purpose is to calculate K as a function of the following system parameters: a0, a1, b, V, and the distributions G0(t), G1(t). Let us consider the following random process {n(t)}:

n(t) = 0, if the supply line A is OFF;   1, if the supply line A is ON.

The process n(t) is semi-Markov, with phase space E = {0, 1}. The sojourn time of n(t) in state 0 (respectively, 1) is τ0 (respectively, τ1). Let v(t) be the volume of product in the reservoir B at time t. We introduce the following functions on W = {0, 1} × [0, V]:

C(w) = ⎧ b,    w = (1, v), v < V,
       ⎨ −a1,  w = (0, v), v > 0,
       ⎩ 0,    in other cases;

f(w) = ⎧ a0,  w = (1, v),
       ⎨ a1,  w = (0, v), v > 0,
       ⎩ 0,   w = (0, 0).

We can see that v(t) satisfies the following evolution equation:

dv(t)/dt = C(n(t), v(t)),   v(0) = v0 ∈ [0, V].

143

Let us consider the following bivariate random process ξ(t) = (n(t), v(t)). Then we have  T f (ξ(t)) dt VT = 0

and  lim

T →∞

T 0

f (ξ(t)) dt = K

provided that K exists. If the random process ξ(t) has the stationary distribution ρ (·), then according to the ergodic theory, we have   1 T lim f (ξ(t)) dt = f (w) ρ (dw) . T →∞ T 0 W Hence, to compute the stationary productivity K of the system, it suffices to determine the stationary distribution ξ(t). We assume that Gi (t) is non-degenerate and that gi (t) ∞ mi = 0 tgi (t)dt, i = 0, 1 exists. Let us state as  gˆi (s) =



0

est gi (t)dt,

ri (t) =

gi (t) , 1 − Gi (t)

=

dGi (t) dt

and

i = 0, 1.

Suppose the following conditions are fulfilled: F1 . If (m0 a1 − m1 b) = 0, then there exists s0 = 0 such that gˆ0 (a1 s0 ) gˆ1 (−b s0 ) = 1, F2 .

 I0 =  I1 =

∞ 0 ∞ 0

[4.83]

 exp a1 s0 t −

t 0

 exp −b s0 t −

" r0 (l) dl dt < ∞, t

0

" r1 (l) dl dt < ∞.

In the case when (m0 a1 − m1 b) = 0 instead of F2 , we will assume that the following condition is satisfied:

144

Random Motions in Markov and Semi-Markov Random Environments 1

F2  . There exists  ∞  (2) (2) m0 = t2 g0 (t)dt, m1 = 0

∞ 0

t2 g1 (t)dt.

L EMMA 4.3.– If m0 a1 = m1 b and l1 < l2 , p1 < p2 , σ1 > 0, σ2 > 0 exist, such that g0 (t) ≥ σ1 , t ∈ [l1 , l2 ], g1 (t) ≥ σ2 , t ∈ [p1 , p2 ] and b p1 < a1 l2 , a1 l1 < b p2 , then s0 = 0 which satisfies condition F1 exists. P ROOF.– Let us define f (s) = gˆ0 (a1 s) gˆ0 (−b s). We f  (0) = (m0 a1 − m1 b) = 0 and f (0) = 1. If (m0 a1 − m1 b) < 0, then  l2  p2 a1 s t f (s) ≥ e g0 (t)dt e−b s0 t g1 (t)dt l1



have

p1

  σ 1 σ2  a 1 l 2 s − ea1 l1 s e−b p1 s − e−b p2 s → +∞ as s → +∞. e a1 b

The case f  (0) = (m0 a1 − m1 b) > 0 can be proved in a similar manner. R EMARK 4.2.– We should note that condition F2 is a consequence of condition F1 , provided that δ1 , δ2 > 0 and T < ∞ exist such that ri (t) ≥ δi for t > T , i = 1, 2. T HEOREM 4.5.– (Pogorui 2011b) If m0 a1 = m1 b and s0 = 0 exists, such that conditions F1 , F2 , are fulfilled, then the stationary distribution of ξ(t), with the absolutely continuous part exists: ρ (0, v) = c s0 I0 es0 v ,

v > 0;

ρ (1, v) = c gˆ0 (a1 s0 ) s0 I1 es0 v ,

v < V.

and the singular part (or atoms) ρ [0, 0], ρ [1, V ] at points (0, 0), (1, V ), respectively: ρ [0, 0] = c (I0 − m0 ) ,

ρ [1, V ] = c gˆ0 (a1 s) (m1 − I1 ) es0 V ,

where −1  . c = I0 es0 V − I1 gˆ0 (a1 s0 ) − m0 + m1 gˆ0 (a1 s0 ) es0 V If m0 a1 = m1 b and instead of F2 , condition F2  is fulfilled, then the stationary distribution of ξ(t) with the absolutely continuous part exists: ρ (i, v) = c mi , i = 0, 1 and the singular part (or atoms) at points (0, 0), (1, V ): ρ [0, 0] = c a1

m20 , 2

ρ [1, V ] = c b

where c−1 = V (m0 + m1 ) + a1

m20 2

+b

m21 , 2

m21 2 .

Random Switched Processes with Delay in Reflecting Boundaries

145

P ROOF.– Let us consider the three-variate process ξ(t) = (τ (t), n(t), v(t)) in the phase space Z = [0, ∞) × {0, 1} × [0, V ], where τ (t) = t − sup {u < t : x (u) = x(t)} . It is well known that the process ξ(t) is a Markov process and its infinitesimal operator is of the following form: Aϕ (τ, n, v) = +C (n, v)

∂ ϕ (τ, n, v) + rx (τ ) [P ϕ (0, n, v) − ϕ (τ, n, v)] ∂τ

∂ ϕ(τ, n, v), ∂v

where the function ϕ (τ, n, v) is continuously differentiable with respect to τ and v. In addition, ϕ (τ, n, v) satisfies the boundary conditions ϕτ  (τ, 1, 0) = ϕτ  (τ, 0, V ) = 0. Here, (τ, n, v) ∈ Z, P ϕ (0, 1, v) = ϕ (0, 0, v), P ϕ (0, 0, v) = ϕ (0, 1, v). In more details, the operator A has the form Aϕ (τ, 1, v) =

∂ ϕ (τ, 1, v) + r1 (τ ) [ϕ (0, 0, v) − ϕ (τ, 1, v)] ∂τ

∂ ϕ (τ, 1, v) , 0 ≤ v ≤ V, ∂v ∂ Aϕ (τ, 0, v) = ϕ (τ, 0, v) + r0 (τ ) [ϕ (0, 1, v) − ϕ (τ, 0, v)] ∂τ ∂ −a1 ϕ (τ, 0, v) , 0 ≤ v ≤ V, ∂v ∂ Aϕ (τ, 0, 0) = ϕ (τ, 0, 0) + r0 (τ ) [ϕ (0, 1, 0) − ϕ(τ, 0, 0)], ∂τ ∂ Aϕ (τ, 1, V ) = ϕ (τ, 1, V ) + r1 (τ ) [ϕ (0, 0, V ) − ϕ(τ, 1, V )], ∂τ +b

ϕτ  (τ, 1, 0) = ϕτ  (τ, 0, V ) = 0. If the stationary distribution ρ (·) of ξ(t) exists, then for any function ϕ (·) from the domain of A, we have  Aϕ (z) ρ (dz) = 0. [4.84] Z

The analysis of the properties of the process ξ(t) leads to the conclusion that the stationary distribution ρ (·) has singularities at points (τ, 0, 0) and (τ, 1, V ), which are denoted by ρ [τ, 0, 0] and ρ [τ, 1, V ].

146

Random Motions in Markov and Semi-Markov Random Environments 1

After the conversion of the operator A into the conjugate operator A∗ , and using equation [4.84], we have ∂ ∂ ρ (τ, 0, v) + r0 (τ ) ρ (τ, 0, v) − a1 ρ (τ, 0, v) = 0, ∂τ ∂v ∂ ∂ ρ (τ, 1, v) + r1 (τ ) ρ (τ, 1, v) + b ρ (τ, 1, v) = 0. ∂τ ∂v  ∞ r1 (τ ) ρ (τ, 1, v) dτ = ρ (0, 0, v) , 

[4.85] [4.86] [4.87]

0



r0 (τ ) ρ (τ, 0, v) dτ = ρ (0, 1, v) ,

0

[4.88]

ρ (∞, 0, v) = ρ (∞, 1, v) = 0, ∂ ρ [τ, 0, 0] + r0 (τ ) ρ [τ, 0, 0] − a1 ρ (τ, 0, 0+) = 0, ∂τ ∂ ρ [τ, 1, V ] + r1 (τ ) ρ [τ, 1, V ] − bρ (τ, 1, V −) = 0, ∂τ ρ [∞, 1, V ] = ρ [∞, 0, 0] = ρ [0, 1, V ] = ρ [0, 0, 0] = 0. Now, by considering the boundary conditions, we obtain  ∞  ∞ r0 (τ ) ρ [τ, 0, 0] dτ = b ρ (τ, 1, 0+) dτ , 

0

[4.89] [4.90]

[4.91]

0





r1 (τ ) ρ [τ, 1, V ] dτ = a1

0

∞ 0

ρ (τ, 0, V −) dτ .

[4.92]

Solving equations [4.85] and [4.86] gives for fi ∈ C 1 , i = 0, 1, ρ (τ, 0, v) = f0 (v + a1 τ ) e− ρ (τ, 1, v) = f1 (v − bτ ) e−



τ 0

0

r0 (t)dt

r1 (t)dt

,

.

Substituting [4.93] and [4.94] into [4.87] and [4.88], we obtain  ∞ t f0 (v + a1 t) e− 0 r0 (s)ds r0 (t)dt =f1 (v), 

[4.93] [4.94]

[4.95]

0

∞ 0

f1 (v − bt) e−

t 0

r1 (s)ds

r1 (t)dt =f0 (v).

[4.96]

Random Switched Processes with Delay in Reflecting Boundaries

It follows from equations [4.95] and [4.96], for i = 0, 1, that  ∞ ∞ t l fi (v − bt + a1 l) e− 0 r1 (s)ds e− 0 r0 (s)ds r1 (t)r0 (l) dtdl = fi (v). 0

147

[4.97]

0

By combining equations [4.95]–[4.97], we will find the function fi (v) in the following form: fi (v) = ci esv ,

ci > 0,

i = 0, 1.

[4.98]

Next, substituting [4.93] into [4.92], and since e−

t

ri (s)ds

0

= 1 − Gi (t), we have

gˆ0 (a1 s) gˆ1 (−bs) = 1. It follows from condition F1 and equations [4.93]–[4.96] that ρ (τ, 0, v) = c0 es0 v+a1 s0 τ e−

τ 0

r0 (t)dt

ρ (τ, 1, v) = c0 gˆ0 (a1 s) es0 v−bs0 τ e−

,

τ 0

[4.99] r1 (t)dt

.

[4.100]

It is not hard to find that if m0 a1 ≠ m1 b, then condition F1 is satisfied by a unique s0 ≠ 0; in addition, this s0 also satisfies equations [4.91] and [4.92]. In the particular case m0 a1 = m1 b, equations [4.91] and [4.92] have the solution s0 = 0.

Substituting [4.99] and [4.100] into [4.89] and [4.90], respectively, we have

ρ[τ, 0, 0] = c0 a1 e^{−∫₀^τ r0(s) ds} ∫₀^τ e^{a1 s0 t} dt,   [4.101]

ρ[τ, 1, V] = c0 ĝ0(a1 s0) b e^{s0 V} e^{−∫₀^τ r1(s) ds} ∫₀^τ e^{−b s0 t} dt.   [4.102]

The normalizing constant c0 of equations [4.99]–[4.102] can be obtained from the condition

Σ_{i=0}^{1} ∫₀^V ∫₀^∞ ρ(τ, i, v) dτ dv + ∫₀^∞ ρ[τ, 0, 0] dτ + ∫₀^∞ ρ[τ, 1, V] dτ = 1.

To complete the proof, note that

ρ(i, v) = ∫₀^∞ ρ(τ, i, v) dτ, i = 0, 1,   ρ[0, 0] = ∫₀^∞ ρ[τ, 0, 0] dτ,   ρ[1, V] = ∫₀^∞ ρ[τ, 1, V] dτ.


Random Motions in Markov and Semi-Markov Random Environments 1

It follows from theorem 4.5 that if m0 a1 ≠ m1 b, then the stationary average productivity of the system has the following form:

K = c { a0 I0 (e^{s0 V} − 1) + a1 ĝ0(a1 s0) I1 (e^{s0 V} − 1) + a0 ĝ0(a1 s0) (m1 − I1) e^{s0 V} },

where

c = [ I0 e^{s0 V} − I1 ĝ0(a1 s0) − m0 + m1 ĝ0(a1 s0) e^{s0 V} ]^{−1}.

In the case m0 a1 = m1 b, which we call the balance case, we have

K = c { (m0 a1 + m1 b) V + a0 b m1^{(2)}/2 },

where

c^{−1} = V (m0 + m1) + a1 m0^{(2)}/2 + b m1^{(2)}/2.

EXAMPLE 4.4.– Let g0(t) = q² t e^{−qt}, q > 0, t ≥ 0, and g1(t) = p² t e^{−pt}, p > 0. In this case, equation [4.83] takes the form

ĝ0(a1 s0) ĝ1(−b s0) = (q/(q − a1 s0))² (p/(p + b s0))² = 1,   [4.103]

with the conditions

a1 s0 < q,   b s0 > −p.   [4.104]

By solving equation [4.103], we obtain the following solutions satisfying conditions [4.104]: s0 = 0 and s0′ = (qb − a1 p)/(a1 b).

Since r0(t) = q² t/(1 + qt) → q > 0 as t → ∞ and r1(t) = p² t/(1 + pt) → p > 0 as t → ∞, it follows from remark 4.2 that the integrals I0 and I1 exist.

Hence, we can apply theorem 4.5. In the balance case, i.e. qb = a1 p, we have

K = [ 2V (a1 p^{−1} + b q^{−1}) + 3 a0 b q^{−2} ] / [ 2V (p^{−1} + q^{−1}) + 3 a1 p^{−2} + 3 b q^{−2} ].

In the unbalance case, i.e. qb ≠ a1 p and s0′ = (qb − a1 p)/(a1 b), we have

K = c { [ a0 I0 + a1 ĝ0((qb − a1 p)/b) I1 ] (e^{s0′ V} − 1) + a0 ĝ0(a1 s0′) (m1 − I1) e^{s0′ V} },

where, with a1 s0′ = (qb − a1 p)/b and, for these Erlang-2 densities, m0 = 2/q and m1 = 2/p,

c^{−1} = I0 e^{s0′ V} − I1 ĝ0((qb − a1 p)/b) − m0 + m1 ĝ0((qb − a1 p)/b) e^{s0′ V},

and

ĝ0(λ) = (q/(q + λ))²,

I0 = ∫₀^∞ (1 + qt) e^{−(a1 p/b) t} dt = (q b² + a1 p b)/(a1² p²),

I1 = ∫₀^∞ (1 + pt) e^{−(qb/a1) t} dt = (p a1² + a1 q b)/(b² q²).
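The roots of [4.103] are easy to check numerically. In the sketch below the parameter values are arbitrary illustrations (not from the text); it verifies that both s0 = 0 and s0′ = (qb − a1 p)/(a1 b) make the product of the two squared factors equal to 1:

```python
# Numerical check of the roots of [4.103]:
#   (q/(q - a1*s))**2 * (p/(p + b*s))**2 = 1.
q, p, a1, b = 3.0, 2.0, 1.5, 4.0   # illustrative parameter values

def balance(s):
    return (q / (q - a1 * s))**2 * (p / (p + b * s))**2

s0_prime = (q * b - a1 * p) / (a1 * b)

assert abs(balance(0.0) - 1.0) < 1e-12
assert abs(balance(s0_prime) - 1.0) < 1e-12
# Conditions [4.104] also hold for these values: a1*s0' < q and b*s0' > -p.
assert a1 * s0_prime < q and b * s0_prime > -p
```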

4.4.2. Estimation of stationary efficiency of a production system with two unreliable supply lines

Now, we will deal with a system consisting of two unreliable supply lines A1 and A2, a customer C, and two reservoirs B1 and B2 located between A1 and A2, and between A2 and C, respectively. The ON time period of each supply line Ai is a random variable distributed exponentially with parameter λi > 0, and the OFF periods are random variables distributed exponentially with parameter μi > 0. The system functions as follows:

1) If the supply lines A1 and A2 are ON and the reservoirs B1 and B2 are not full, then the flow (or product) arrives from A1 to A2 at rate a0 + b2 and from A2 to C at rate a0. In addition, the reservoir B1 is receiving a flow from A1 at rate b1, and the reservoir B2 is receiving a flow from A2 at rate b2. If the reservoir B1 is full and B2 is not full, then the flow goes from A1 to A2 and from A2 to C at rate a0, and to the reservoir B2 at rate b2 (there is no flow to reservoir B1). If B2 is full and B1 is not full, then the flow goes from A1 to A2 and from A2 to C at rate a0, and to the reservoir B1 at rate b1 (there is no flow going into reservoir B2). If B1 and B2 are full, then the flow goes from A1 to A2, and from A2 to C at rate a0.

2) If A1 is OFF, reservoir B1 is not empty, and the reservoir B2 is not full, then the flow goes from the non-empty B1 to A2 at rate a1 > b2, from A2 to the reservoir B2 at rate b2, and to C at rate a1 − b2. If the reservoir B2 is full, then the flow goes from A2 to C at rate a1, and there is no flow going into B2.


3) If A2 is OFF, then the flow goes from the non-empty reservoir B2 to the customer C at rate a2. If B2 is empty, then the customer does not receive any flow or product. The supply line A1 functions according to items 1 and 2, with the difference that the product does not arrive from A1 to A2.

As mentioned before, let VT be the volume of the product received by the customer during the interval of time [0, T]. The limit K = lim_{T→∞} VT/T (if it exists) is called the stationary average productivity of the system. Our purpose is to calculate K as a function of the following system parameters: a1, a2, b1, b2, μ1, μ2, λ1, λ2, V1, V2.

Let us introduce the following random process at time t:

χ(t) = 11 if the supply lines A1 and A2 are ON; 10 if A1 is ON and A2 is OFF; 01 if A1 is OFF and A2 is ON; 00 if the supply lines A1 and A2 are OFF.

The process {χ(t)} is a Markov process. Denote by vi(t) the volume (or amount) of the product in Bi at time t, and by Vi the volume or capacity of the reservoir Bi, i = 1, 2. Now, consider the three-variate process ζ(t) = (χ(t), v1(t), v2(t)). It is easily seen that ζ(t) is a Markov process with the following phase space:

X = {(ij, v1, v2), i, j ∈ {0, 1}, 0 ≤ v1 ≤ V1, 0 ≤ v2 ≤ V2}.

Let us define the following function on X:

f(x) = 0 if x = (i0, v1, 0), i ∈ {0, 1}, 0 ≤ v1 ≤ V1;
f(x) = a2 if x = (i0, v1, v2), i ∈ {0, 1}, 0 ≤ v1 ≤ V1, v2 > 0;
f(x) = a0 if x = (11, v1, v2);
f(x) = a1 − b2 if x = (01, v1, v2), v1 > 0, v2 < V2;
f(x) = a1 if x = (01, v1, V2), v1 > 0.

Then, we have

VT = ∫₀^T f(ζ(t)) dt

and

lim_{T→∞} VT/T = lim_{T→∞} (1/T) ∫₀^T f(ζ(t)) dt = K.


The process ζ(t) is uniformly ergodic, and we denote by ρ(·) its stationary distribution. According to ergodic theory, we have

K = lim_{T→∞} (1/T) ∫₀^T f(ζ(t)) dt = ∫_X f(x) ρ(dx).

Thus, to compute the stationary productivity K of the system, it suffices to calculate ρ(·). Let us introduce the following functions on the space X:

C1(x, v1, v2) = 0 if x = (0,0), or if x = (1,1) and v1 = V1;
C1(x, v1, v2) = b1 if x = (1,i), i ∈ {0,1}, v1 < V1;
C1(x, v1, v2) = −a1 if x = (0,1), v1 > 0;

C2(x, v1, v2) = b2 if x = (1,1), or if x = (0,1), v1 > 0, v2 < V2;
C2(x, v1, v2) = −a2 if x = (0,1), v1 = 0, v2 > 0, or if x = (0,0), v2 > 0;
C2(x, v1, v2) = 0 in the other cases.

It is easily verified that

dv1(t)/dt = C1(χ(t), v1(t), v2(t)),   dv2(t)/dt = C2(χ(t), v1(t), v2(t)),

i.e. γ(t) = (v1(t), v2(t)) is a stochastic transport process (Korolyuk 1993; Korolyuk and Limnios 2005). The infinitesimal operator of the process ζ(t) = (χ(t), v1(t), v2(t)) is as follows (Corlat et al. 1991; Korolyuk and Turbin 1982):

Aφ(x, v1, v2) = C1(x, v1, v2) ∂φ(x, v1, v2)/∂v1 + C2(x, v1, v2) ∂φ(x, v1, v2)/∂v2 + (fx/(1 − Fx)) (P − I) φ(x, v1, v2),

where Fx is the cdf and fx is the pdf of a sojourn time of the process χ(t) at state x ∈ {11, 10, 01, 00}, P is the operator of the transition probabilities of χ(t) and I is the identity operator. By direct calculation, we obtain

f00/(1 − F00) = μ1 + μ2,   f01/(1 − F01) = μ1 + λ2,   f10/(1 − F10) = μ2 + λ1,   f11/(1 − F11) = λ1 + λ2.   [4.105]

Random Motions in Markov and Semi-Markov Random Environments 1

And the matrix P is as follows: ⎛ μ2 μ1 0 μ2 +μ1 μ2 +μ1 ⎜ λ2 0 0 ⎜ 1 P = ⎜ λ2λ+μ 1 0 0 ⎝ μ2 +λ1 λ1 λ2 0 λ1 +λ2 λ1 +λ2

0 μ1 λ2 +μ1 μ2 μ2 +λ1

⎞ ⎟ ⎟ ⎟ ⎠

[4.106]

0

We use the following ordering in the binary system: 00 ⇔ 0, 01 ⇔ 1, 10 ⇔ 2 and 11 ⇔ 3. The points of the phase space X, at which the functions C1 (·) and C2 (·) are simultaneously equal to zero, are atoms of the stationary distribution. We denote these atoms by ρ [·]. The points at which the function C1 (·) or function C2 (·) (but not both) are equal to zero, are said to be semiatoms, and we denote them by ρ [·). At other points of the space X, the measure ρ (·) is absolutely continuous with respect to the Lebesgue measure on [0, V1 ] × [0, V2 ]. By obtaining the adjoint operator A∗ corresponding to the operator A, and using the fact that A∗ ρ = 0, we have a1

∂ ∂ ρ (01,v1 , v2 ) − b2 ρ (01,v1 , v2 ) +μ2 ρ (00, v1 , v2 ) +λ1 ρ (11,v1 , v2 ) ∂v1 ∂v2

− (λ2 + μ1 ) ρ (01,v1 , v2 ) = 0; a2

∂ ρ (00, v1 , v2 ) +λ1 ρ (10,v1 , v2 ) +λ2 ρ (01,v1 , v2 ) ∂v2

− (μ1 + μ2 ) ρ (00, v1 , v2 ) = 0; −b1

∂ ∂ ρ (10,v1 , v2 ) + a2 ρ (10,v1 , v2 ) +μ1 ρ (00, v1 , v2 ) +λ2 ρ (11,v1 , v2 ) ∂v1 ∂v2

− (μ2 + λ1 ) ρ (10,v1 , v2 ) = 0; −b1

∂ ∂ ρ (11,v1 , v2 ) −b2 ρ (11,v1 , v2 ) +μ1 ρ (01,v1 , v2 ) +μ2 ρ (10,v1 , v2 ) ∂v1 ∂v2

− (λ1 + λ2 ) ρ (11,v1 , v2 ) = 0.

[4.107]

We can also obtain equalities for atoms and semiatoms of the distribution ρ(·). However, it is difficult to solve such a set of equations. Therefore, we propose to consider a "corrected" model instead. Namely, it is assumed that if v1 > 0, then C1(00, v1, v2) = −a1. That is, if both supply lines are OFF, then the product leaves B1 at rate a1, and neither the customer nor reservoir B2 receives it. We denote the stationary productivity of such a system by K0. It is obvious that K0 ≤ K.


COMMENT.– For a known value of K0, an upper estimate for K is easily obtained if we assume that, after the product leaves B1 in a state (00, v1, v2), v1 > 0, the customer receives it. We also note that the larger the value of max(μ1, μ2), the smaller the difference between K0 and K.

The set of equations for the "corrected" model, in comparison with equation [4.107], has one additional term, a1 ∂ρ(00, v1, v2)/∂v1, in the second equation, and it is given by

a1 ∂ρ(01, v1, v2)/∂v1 − b2 ∂ρ(01, v1, v2)/∂v2 + μ2 ρ(00, v1, v2) + λ1 ρ(11, v1, v2) − (λ2 + μ1) ρ(01, v1, v2) = 0;

a1 ∂ρ(00, v1, v2)/∂v1 + a2 ∂ρ(00, v1, v2)/∂v2 + λ1 ρ(10, v1, v2) + λ2 ρ(01, v1, v2) − (μ1 + μ2) ρ(00, v1, v2) = 0;

−b1 ∂ρ(10, v1, v2)/∂v1 + a2 ∂ρ(10, v1, v2)/∂v2 + μ1 ρ(00, v1, v2) + λ2 ρ(11, v1, v2) − (μ2 + λ1) ρ(10, v1, v2) = 0;

−b1 ∂ρ(11, v1, v2)/∂v1 − b2 ∂ρ(11, v1, v2)/∂v2 + μ1 ρ(01, v1, v2) + μ2 ρ(10, v1, v2) − (λ1 + λ2) ρ(11, v1, v2) = 0.   [4.108]

Hence, by solving [4.108], we have

ρ(00, v1, v2) = c exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) v2],

ρ(01, v1, v2) = c (a2/b2) exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) v2],

ρ(10, v1, v2) = c (a1/b1) exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) v2],

ρ(11, v1, v2) = c (a1 a2/(b1 b2)) exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) v2],   [4.109]
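Since each density in [4.109] is a constant multiple of exp(αv1 + βv2) with α = μ1/a1 − λ1/b1 and β = μ2/a2 − λ2/b2, the system [4.108] reduces to algebraic identities in the prefactors. The sketch below (arbitrary illustrative rates) checks all four residuals:

```python
# Check that the densities [4.109] satisfy the "corrected" system [4.108].
# For rho = C * exp(alpha*v1 + beta*v2): d/dv1 rho = alpha*rho, d/dv2 rho = beta*rho,
# so each PDE becomes an algebraic identity in the prefactors C (we set c = 1).
lam1, lam2, mu1, mu2 = 0.7, 1.1, 0.9, 1.3   # illustrative rates
a1, a2, b1, b2 = 2.0, 1.5, 1.0, 0.8         # illustrative flow rates

alpha = mu1 / a1 - lam1 / b1
beta = mu2 / a2 - lam2 / b2
C00, C01, C10, C11 = 1.0, a2 / b2, a1 / b1, a1 * a2 / (b1 * b2)

r01 = a1 * alpha * C01 - b2 * beta * C01 + mu2 * C00 + lam1 * C11 - (lam2 + mu1) * C01
r00 = a1 * alpha * C00 + a2 * beta * C00 + lam1 * C10 + lam2 * C01 - (mu1 + mu2) * C00
r10 = -b1 * alpha * C10 + a2 * beta * C10 + mu1 * C00 + lam2 * C11 - (mu2 + lam1) * C10
r11 = -b1 * alpha * C11 - b2 * beta * C11 + mu1 * C01 + mu2 * C10 - (lam1 + lam2) * C11

assert all(abs(r) < 1e-12 for r in (r01, r00, r10, r11))
```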

where c is a normalization constant.

The set of equations for the singular part of the "corrected" system is as follows:

−b1 dρ[11, v1, V2)/dv1 + μ1 ρ[01, v1, V2) − (λ1 + λ2) ρ[11, v1, V2) + b2 ρ(11, v1, V2−) = 0;

−b2 dρ[11, V1, v2)/dv2 + μ2 ρ[10, V1, v2) − (λ1 + λ2) ρ[11, V1, v2) + b1 ρ(11, V1−, v2) = 0;

a1 dρ[01, v1, V2)/dv1 + λ1 ρ[11, v1, V2) − (λ2 + μ1) ρ[01, v1, V2) + b2 ρ(01, v1, V2−) = 0;

a2 dρ[10, V1, v2)/dv2 + λ2 ρ[11, V1, v2) − (μ2 + λ1) ρ[10, V1, v2) + b1 ρ(10, V1−, v2) = 0;

λ1 ρ[10, V1, v2) − (μ1 + μ2) ρ[00, V1, v2) = 0;

λ2 ρ[11, v1, V2) − a2 ρ(10, v1, V2−) = 0;

λ2 ρ[01, v1, V2) + μ2 ρ[00, V1, v2) − a2 ρ(00, v1, V2−) = 0;

λ1 ρ[11, V1, v2) − a1 ρ(01, V1−, v2) = 0,   [4.110]

where

ρ(x, V1−, v2) = lim_{v1→V1} ρ(x, v1, v2),   ρ(x, v1, V2−) = lim_{v2→V2} ρ(x, v1, v2).

By considering [4.109], we have the following solution of [4.110]:

ρ[11, v1, V2) = c (a1 a2/(λ2 b1)) exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) V2],

ρ[01, v1, V2) = c (a2/λ2) exp[(μ1/a1 − λ1/b1) v1 + (μ2/a2 − λ2/b2) V2],

ρ[11, V1, v2) = c (a1 a2/(λ1 b2)) exp[(μ1/a1 − λ1/b1) V1 + (μ2/a2 − λ2/b2) v2],

ρ[10, V1, v2) = c (a1/λ1) exp[(μ1/a1 − λ1/b1) V1 + (μ2/a2 − λ2/b2) v2].   [4.111]

We can write a system of equations for semiatoms, when the first reservoir is empty for the “corrected” system, as follows:   μ1 ρ [00, 0, v2 ) − b1 ρ 10, 0+ ,v2 = 0;   [4.112] μ1 ρ [01, 0,v2 ) − b1 ρ 11, 0+ ,v2 = 0; a2

d ρ [01, 0,v2 ) − (μ1 + λ2 ) ρ [01, 0,v2 ) + μ2 ρ [00, 0, v2 ) dv2 +a1 ρ (01, 0,v2 ) = 0;

Random Switched Processes with Delay in Reflecting Boundaries

a2

155

d ρ [00, 0, v2 ) − (μ1 + μ2 ) ρ [00, 0, v2 ) + λ2 ρ [01, 0,v2 ) dv2 +a1 ρ (00, 0, v2 ) = 0

and when the second reservoir is empty:   μ2 ρ [00, v1 , 0) − b2 ρ 01,v1 , 0+ = 0; d ρ [10,v1 , 0) − (λ1 + μ2 ) ρ [10,v1 , 0) + μ1 ρ [00, v1 , 0) dv1   +a2 ρ 10,v1 , 0+ = 0;   μ2 ρ [10,v1 , 0) − μ2 ρ 11,v1 , 0+ = 0; −b1

[4.113]

d ρ [00, v1 , 0) − (μ1 + μ2 ) ρ [00, v1 , 0) + λ1 ρ [10,v1 , 0) dv1   +a2 ρ 00, v1 , 0+ = 0. a1

By solving [4.112] and [4.113], and considering [4.111], we have ( ( '' a1 μ2 λ2 v2 , − ρ [00, 0, v2 ) = c exp μ1 a b2 ( ( ''2 a 1 a2 μ2 λ2 v2 , ρ [01, 0,v2 ) = c exp − μ1 b2 a b2 ( ( '' 2 a2 μ1 λ1 v1 , ρ [00, v1 , 0) = c exp − μ2 a1 b1 ( ( '' a1 a2 μ1 λ1 v1 . ρ [10,v1 , 0) = c exp − μ2 b1 a1 b1

[4.114]

Next, we will write the corresponding equations for atoms. If the reservoirs are full, then we have the system:     − (λ1 + λ2 ) ρ [11,V1 , V2 ] + b2 ρ 11,V1 , V2− + b1 ρ 11,V1− , V2 = 0;   λ1 ρ [11,V1 , V2 ] − a1 ρ 01,V1− , V2 = 0;   λ2 ρ [11,V1 , V2 ] − a2 ρ 10,V1 , V2− = 0. By applying the condition [4.111], we solve the system and it has the following solution: ( ( ( '' ' a1 a 2 μ1 λ1 μ2 λ2 V1 + V2 . [4.115] exp − − ρ [11,V1 , V2 ] = c λ1 λ2 a1 b1 a2 b2


The system of equations for the atom ρ [00, 0, 0] can be written as     − (μ1 + μ2 ) ρ [00, 0, 0] + a2 ρ 00, 0, 0+ + a1 ρ 00, 0+ , 0 = 0;   μ1 ρ [00, 0, 0] − b1 ρ 00, 0+ , 0 = 0;   μ2 ρ [00, 0, 0] − b2 ρ 00, 0, 0+ = 0. By solving this system, after considering [4.114], the solution is given by ρ [00, 0, 0] =

a1 a2 c. μ1 μ 2

[4.116]

And, finally, we obtain the following system of equations for the atom ρ[10, V1, 0]:

−(μ1 + μ2) ρ[10, V1, 0] + a2 ρ(10, V1−, 0) + a1 ρ(10, V1, 0+) = 0;

μ1 ρ[10, V1, 0] − b1 ρ(00, V1−, 0) = 0;

μ2 ρ[10, V1, 0] − b2 ρ(11, V1, 0+) = 0.

By solving this system and considering [4.114], we have

ρ[10, V1, 0] = c (a1 a2/(λ1 μ2)) exp[(μ1/a1 − λ1/b1) V1].   [4.117]

The coefficient c is determined from the relation ∫_X ρ(dx) = 1. Namely,

ρ[00, 0, 0] + ρ[10, V1, 0] + ρ[11, V1, V2]
+ ∫₀^{V1} ρ[11, v1, V2) dv1 + ∫₀^{V1} ρ[01, v1, V2) dv1 + ∫₀^{V1} ρ[00, v1, 0) dv1 + ∫₀^{V1} ρ[10, v1, 0) dv1
+ ∫₀^{V2} ρ[00, 0, v2) dv2 + ∫₀^{V2} ρ[01, 0, v2) dv2 + ∫₀^{V2} ρ[11, V1, v2) dv2 + ∫₀^{V2} ρ[10, V1, v2) dv2
+ Σ_{ij} ∫₀^{V1} ∫₀^{V2} ρ(ij, v1, v2) dv2 dv1 = 1,

where the last sum runs over the four states ij ∈ {00, 01, 10, 11}.

Substituting the values that have been found for ρ into this formula, we obtain

c^{−1} = b1 b2/(μ1 μ2) + b1 b2/μ2 + a1 a2/(μ1 μ2)
+ [λ1 (b2 + a2)/(μ1 μ2)] V1 + [(b2 + a2)/μ2] V1
+ [λ2 (b1 + a1)/(μ1 μ2)] V2 + [(b1 + a1)/μ1] V2
+ [λ1 λ2/(μ1 μ2) + λ1/μ1 + λ2/μ2 + 1] V1 V2.

Substituting the value of c into [4.108], [4.110] and [4.113]–[4.116], we calculate

K0 = ∫_X f(x) ρ(dx)
= a0 { ρ[11, V1, V2] + ∫₀^{V2} ρ[11, V1, v2) dv2 + ∫₀^{V1} ρ[11, v1, V2) dv1 + ∫₀^{V1} ∫₀^{V2} ρ(11, v1, v2) dv2 dv1 }
+ a2 { ∫₀^{V2} ρ[01, 0, v2) dv2 + ∫₀^{V2} ρ[00, 0, v2) dv2 + ∫₀^{V1} ∫₀^{V2} ρ(00, v1, v2) dv2 dv1 + ∫₀^{V1} ∫₀^{V2} ρ(10, v1, v2) dv2 dv1 + ∫₀^{V2} ρ[10, V1, v2) dv2 }
+ (a1 − b2) ∫₀^{V1} ∫₀^{V2} ρ(01, v1, v2) dv2 dv1 + a1 ∫₀^{V1} ρ[01, v1, V2) dv1.

Let us introduce the function

f1(x) = f(x) for x ∈ X, x ≠ (00, v1, v2);   f1(x) = a1 for x = (00, v1, v2).

Then, it is easily seen that

∫_X f1(x) ρ(dx) = K0 + a1 λ1 λ2/(μ1 μ2).

Therefore, we have

K0 ≤ K ≤ K0 + a1 λ1 λ2/(μ1 μ2).

The approach that we use in this section can be applied to study the stationary efficiency of a production system with an arbitrary number of unreliable supply lines and reservoirs in between them.

5 One-dimensional Random Motions in Markov and Semi-Markov Media

In this chapter, several models of random evolutions which generalize the telegraph process of Goldstein–Kac are considered, and the distributions of such evolutions are studied. In particular, section 5.1 is devoted to one-dimensional semi-Markov evolutions in a generalized Erlang environment. This distribution is selected because, in this case, we are able to write a partial differential equation (PDE) of hyperbolic type for the density of the particle position, which can be solved by a method described in section 5.1.2. In section 5.1.1, we obtain a hyperbolic differential equation for the pdf of a random motion with Erlang-distributed velocity alternations. In section 5.1.2, we describe the solution method for the PDE derived in section 5.1.1. The solution method generalizes the connection between holomorphic functions and harmonic functions through relationships between monogenic functions on a commutative algebra and associated PDEs. By using this method, in section 5.1.3 we find the pdf at a fixed time of the random walk described in section 5.1.1. Section 5.2 is aimed at the study of fading evolutions, where, in contrast to the Goldstein–Kac model, a particle moving at variable speed slows down to zero as time goes to infinity, i.e. the particle slows down its chaotic movement up to the freezing point as t → ∞. These processes simulate the motion of a particle under the influence of external forces, such as friction. The distribution of the limiting position of a fading evolution is obtained in the cases of Erlang-2 or uniform distributions for the interarrival times of the renewal process.

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.


In section 5.3, we obtain a differential equation for the characteristic function of random jump motion on the line, where the direction alternations and random jumps occur according to the renewal epochs of the Erlang distribution. Section 5.4 is devoted to the estimation of the number of level crossings by the telegraph process. By passing to the limit in Kac's condition, an estimate for the level crossings of the Wiener process is obtained.

5.1. One-dimensional semi-Markov evolutions with general Erlang sojourn times

In this section, we study a one-dimensional random motion with a general Erlang distribution for the sojourn times, with velocities v1 > 0 and v2 < 0. This stochastic process generalizes the well-known Goldstein–Kac process because it considers the evolution in an Erlang (i.e. non-Markov) medium. We should mention that many published papers on random motions make the typical assumption that the motion is driven by a homogeneous Poisson process, so the resulting processes are Markov (Di Crescenzo 2001; Orsingher and De Gregorio 2007; Pinsky 1991). Thus, their mathematical techniques cannot, in general, be extended to the non-Markov case. For instance, for a non-Markov switching velocity process, it is not evident how to use the theory of infinitesimal operators and obtain Kolmogorov equations for the distribution of the random motion. However, in the Erlang case we can extend some of the same ideas by enlarging the phase space, and as a consequence we can reduce the non-Markov switching to the well-known Markov case. We also develop a novel method based on commutative algebras for solving the resulting hyperbolic PDE and obtain the distribution of the particle position.

5.1.1. Mathematical model

We assume that the particle moves on the line ℝ in the following manner: at each instant, it moves with one of two velocities, namely v1 > 0 or v2 < 0.
Starting at the position x0 ∈ ℝ, the particle first moves with velocity v1 > 0 during the random time τ1 = ξλ1 + … + ξλn, n ≥ 1, where {ξλi} is a set of independent exponential random variables with parameters λi, respectively. Then the particle moves with velocity v2 < 0 during the random time τ2 = ξμ1 + … + ξμm, m ≥ 1, where {ξμi} is a set of independent exponential random variables with parameters μi, respectively. The particle then keeps alternating between the velocities v1 > 0 and v2 < 0: after an even renewal epoch it moves with velocity v1, and after an odd renewal epoch with velocity v2.


Thus, we study this one-dimensional random motion by assuming a general Erlang distribution for the sojourn times, meaning that the random variables τ1 and τ2 have general Erlang probability densities with Laplace transforms

f̂τ1(s) = ∫₀^∞ e^{−st} fτ1(t) dt = Π_{i=1}^{n} λi/(s + λi),

f̂τ2(s) = ∫₀^∞ e^{−st} fτ2(t) dt = Π_{i=1}^{m} μi/(s + μi).

We will show that the movement of this particle can be determined in terms of a random evolution. Let us consider, on the phase space T = {1, 2}, an alternating semi-Markov stochastic process η(t), t ≥ 0, with sojourn time τi corresponding to the state i ∈ T and with the transition probability matrix of its embedded Markov chain

P = ⎛ 0 1 ⎞
    ⎝ 1 0 ⎠.

Denote by y(t), t ≥ 0, the particle position at time t. Now, consider the function C1 on T given by

C1(y) = v1 if y = 1,  v2 if y = 2.   [5.1]

Then, the position of the particle at any time t can be expressed as

y(t) = x0 + ∫₀^t C1(η(s)) ds.   [5.2]
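The product form of these transforms can be cross-checked numerically. The sketch below is an illustration under stated assumptions: it takes n = 2 distinct rates (arbitrary values, not from the text), for which the density of ξλ1 + ξλ2 has the well-known two-term hypoexponential form, and compares its numerically integrated Laplace transform with the product λ1 λ2 / ((s + λ1)(s + λ2)):

```python
import math

# Density of the sum of two independent exponentials with distinct rates:
#   f(t) = l1*l2/(l2 - l1) * (exp(-l1*t) - exp(-l2*t)).
l1, l2, s = 1.0, 2.5, 0.7   # illustrative values

def density(t):
    return l1 * l2 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

# Laplace transform at s by midpoint-rule integration over [0, 60].
steps, upper = 400000, 60.0
h = upper / steps
transform = h * sum(math.exp(-s * (k + 0.5) * h) * density((k + 0.5) * h)
                    for k in range(steps))

product = (l1 / (s + l1)) * (l2 / (s + l2))
assert abs(transform - product) < 1e-5
```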

Now, we show that the evolution y(t) in the semi-Markov medium η(t) can be reduced to a random evolution in a Markov medium. Denote by {ξ(t), t ≥ 0} a Markov chain with the phase space E = {1, 2, …, n, n+1, …, n+m} and the infinitesimal operator Q = q[P − I], where q = diag(λ1, λ2, …, λn, μ1, …, μm) is the matrix of intensities and the transition probability matrix is the cyclic matrix

P = ⎛ 0 1 0 … 0 ⎞
    ⎜ 0 0 1 … 0 ⎟
    ⎜ ⋮ ⋮ ⋮ ⋱ ⋮ ⎟
    ⎜ 0 0 0 … 1 ⎟
    ⎝ 1 0 0 … 0 ⎠.

We will assume that the initial distribution is P{ξ(0) = 1} = 1.
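This reduction is easy to exercise numerically. The following sketch (with illustrative parameter values, not from the text) simulates the enlarged Markov chain by drawing the exponential stage durations and integrates the velocity as in [5.3]; since the speed never exceeds max(|v1|, |v2|), every simulated position must lie in [v2 t, v1 t]:

```python
import random

# Sketch of the random evolution x(t): the enlarged chain visits states
# 1..n (velocity v1, exponential rates lambda_i) and n+1..n+m (velocity v2,
# rates mu_j) cyclically, reproducing the general Erlang sojourn times.
def simulate_position(t, v1, v2, lambdas, mus, x0=0.0, rng=random):
    rates = list(lambdas) + list(mus)      # q = diag(lambda_1, ..., mu_m)
    n = len(lambdas)
    x, clock, state = x0, 0.0, 0           # P{xi(0) = 1} = 1
    while True:
        stay = rng.expovariate(rates[state])
        v = v1 if state < n else v2
        if clock + stay >= t:
            return x + v * (t - clock)
        x += v * stay
        clock += stay
        state = (state + 1) % len(rates)   # cyclic transition matrix P

random.seed(7)
t, v1, v2 = 3.0, 2.0, -1.0
xs = [simulate_position(t, v1, v2, lambdas=(1.0, 1.5), mus=(2.0,))
      for _ in range(1000)]
assert all(v2 * t - 1e-9 <= x <= v1 * t + 1e-9 for x in xs)
```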


Now, let us introduce the following function C on E:

C(i) = v1 if i = 1, …, n,  v2 if i = n+1, …, n+m.

Then the equation

x(t) = x0 + ∫₀^t C(ξ(s)) ds   [5.3]

determines the random evolution in the Markov medium {ξ(t), t ≥ 0}. The random processes y(t) and x(t) are stochastically equivalent, and they model the motion of a particle whose velocity alternates between v1 and v2, the random times between velocity switchings having probability density function fτ1(t) if the velocity is v1, and fτ2(t) when the velocity of the particle is v2. Basically, we have reduced the semi-Markov case given by equation [5.2] to the Markov case given by equation [5.3]. So, we now need to study the stochastic process given by equation [5.3].

Let us consider the bivariate stochastic process ζ(t) = (x(t), ξ(t)) with phase space Z = ℝ × E, where E = {1, …, n, n+1, …, n+m}. It is well known that the infinitesimal operator of ζ(t) is of the following form:

Aφ(x, i) = C(x, i) ∂φ(x, i)/∂x + q(Pφ(x, i) − φ(x, i)),

where φ ∈ D(A), the domain of the operator A, x ∈ ℝ, i ∈ E, and C(x, i) = C(i) in this case. Let us write the operator A in more detail:

Aφ(x, 1) = v1 ∂φ(x, 1)/∂x + λ1 (φ(x, 2) − φ(x, 1)),
Aφ(x, 2) = v1 ∂φ(x, 2)/∂x + λ2 (φ(x, 3) − φ(x, 2)),
…
Aφ(x, n) = v1 ∂φ(x, n)/∂x + λn (φ(x, n+1) − φ(x, n)),
Aφ(x, n+1) = v2 ∂φ(x, n+1)/∂x + μ1 (φ(x, n+2) − φ(x, n+1)),
…
Aφ(x, n+m) = v2 ∂φ(x, n+m)/∂x + μm (φ(x, 1) − φ(x, n+m)).


Now, let us consider the density function

f(t, x, n) dx = P{x ≤ x(t) ≤ x + dx, ξ(t) = n},   n ∈ E.

This function satisfies the first Kolmogorov equation, namely

∂f(t, x, n)/∂t = A f(t, x, n).   [5.4]

Equation [5.4] can be written in more detail as follows:

∂f(t, x, 1)/∂t = v1 ∂f(t, x, 1)/∂x + λ1 (f(t, x, 2) − f(t, x, 1)),
∂f(t, x, 2)/∂t = v1 ∂f(t, x, 2)/∂x + λ2 (f(t, x, 3) − f(t, x, 2)),
…
∂f(t, x, n)/∂t = v1 ∂f(t, x, n)/∂x + λn (f(t, x, n+1) − f(t, x, n)),
∂f(t, x, n+1)/∂t = v2 ∂f(t, x, n+1)/∂x + μ1 (f(t, x, n+2) − f(t, x, n+1)),
…
∂f(t, x, n+m)/∂t = v2 ∂f(t, x, n+m)/∂x + μm (f(t, x, 1) − f(t, x, n+m)).   [5.5]

The set of equations [5.5] can be represented as Mf = 0, where f = (f(t, x, 1), f(t, x, 2), …, f(t, x, n+m)) and M is the (n+m) × (n+m) operator matrix

M = ⎛ ∂/∂t − v1 ∂/∂x + λ1   −λ1   0   …   0 ⎞
    ⎜ 0   ∂/∂t − v1 ∂/∂x + λ2   −λ2   …   0 ⎟
    ⎜ ⋮   ⋮   ⋮   ⋱   ⋮ ⎟
    ⎜ 0   0   …   ∂/∂t − v2 ∂/∂x + μ_{m−1}   −μ_{m−1} ⎟
    ⎝ −μm   0   …   0   ∂/∂t − v2 ∂/∂x + μm ⎠,

i.e. the diagonal entries are ∂/∂t − v1 ∂/∂x + λi for i = 1, …, n and ∂/∂t − v2 ∂/∂x + μj for j = 1, …, m, each row carries the corresponding −λi or −μj on the superdiagonal, and −μm stands in the lower-left corner.

The probability density function of the particle position at time t is given by

f(t, x) = Σ_{i=1}^{n+m} f(t, x, i).


LEMMA 5.1.– The function f(t, x) satisfies the following equation:

[ Π_{i=1}^{n} (∂/∂t − v1 ∂/∂x + λi) Π_{j=1}^{m} (∂/∂t − v2 ∂/∂x + μj) − Π_{i=1}^{n} λi Π_{j=1}^{m} μj ] f(t, x) = 0.   [5.6]

PROOF.– It is well known that f(t, x) satisfies

(det(M)) f = 0.   [5.7]

It is easily seen that

det(M) = Π_{i=1}^{n} (∂/∂t − v1 ∂/∂x + λi) Π_{j=1}^{m} (∂/∂t − v2 ∂/∂x + μj) − Π_{i=1}^{n} λi Π_{j=1}^{m} μj.

Since the velocity vi is finite for each time t > 0, the process {x(t)} is restricted to the interval [v2 t, v1 t]. The function f(t, x) can be expressed as f(t, x) = fc(t, x) + fs(t, x), where fc(t, x) is the absolutely continuous part of the distribution of the process x(t) with respect to the Lebesgue measure on the line, and fs(t, x) is the singular part of the distribution.

Now, let us consider v1 = −v2 = v, n = m, λi = μi = λ.

LEMMA 5.2.– The absolutely continuous part fc(t, x) is bounded for each t > 0 and the singular part fs(t, x) is given by

fs(t, x) = e^{−λt} [ Σ_{i=0}^{m−1} (λt)^i / i! ] δ(tv − x).   [5.8]

PROOF.– The singular part [5.8] reflects the fact that the particle starts at x = 0 with velocity v and the velocity direction does not change before time t. Let us show that for |x| < vt the distribution f(t, x) has no singularity with respect to the Lebesgue measure on ℝ. Denote by νk the random event "k changes of velocity occur". For Δx = [x, x + Δ], where Δ > 0, consider the probability

Pν₀{x(t) ∈ Δx} = Σ_{k≥1} P{x(t) ∈ Δx, νk},


where Pν₀{x(t) ∈ Δx} is the probability that x(t) ∈ Δx and at least one change of velocity occurs. Now, we show that for all t > 0 there exists a constant Ct < ∞ such that

sup_x Pν₀{x(t) ∈ Δx}/Δ < Ct.

Denote by θk, k ≥ 1, the time period between the (k−1)th and kth changes of velocity direction. Recall that θk, k ≥ 1, are independent random variables with Erlang-m distribution. It is easily seen that

Pν₀{x(t) ∈ Δx}
= Σ_{k≥1} P{ [Σ_{i=1}^{k} (−1)^{i+1} θi] v + (−1)^k (t − Σ_{i=1}^{k} θi) v ∈ Δx, Σ_{i=1}^{k} θi < t }
= Σ_{k≥1} P{ [Σ_{i=1}^{k} ((−1)^{i+1} − (−1)^k) θi] v ∈ Δx − (−1)^k vt, Σ_{i=1}^{k} θi < t }
= Σ_{l≥0} P{ 2v(θ1 + θ3 + ⋯ + θ2l+1) ∈ Δx + vt, Σ_{i=1}^{2l+1} θi < t }
+ Σ_{l≥0} P{ −2v(θ2 + θ4 + ⋯ + θ2l+2) ∈ Δx − vt, Σ_{i=1}^{2l+2} θi < t }
≤ sup_{|x|≤vt} Σ_{l≥0} P{ 2v(θ1 + θ3 + ⋯ + θ2l+1) ∈ Δx, 2v(θ2 + θ4 + ⋯ + θ2l) < 2vt − x }
+ sup_{|x|≤vt} Σ_{l≥0} P{ −2v(θ2 + θ4 + ⋯ + θ2l+2) ∈ Δx, 2v(θ1 + θ3 + ⋯ + θ2l+1) < 2vt + x + Δ }.

Since |x| ≤ vt, and since for all m ≥ 1 the density function pm(x, λ) of the Erlang-m distribution with parameter λ satisfies pm(x, λ) ≤ λ, we have

Σ_{l≥1} P{ 2v(θ1 + θ3 + ⋯ + θ2l+1) ∈ Δx, 2v(θ2 + θ4 + ⋯ + θ2l) < 2vt − x } ≤ (λΔ/2v) Σ_{l≥1} P{θ2 + θ4 + ⋯ + θ2l < 2t}.   [5.9]



λΔ  P {θ2 + θ4 + . . . + θ2l < 2t}. 2v l≥1

[5.9]

166

Random Motions in Markov and Semi-Markov Random Environments 1

Taking into account the distribution of θi , we have for 2lm + 1 > 2λt 4 3 2lm i  (2λt) e−2λt P {θ2 + θ4 + . . . + θ2l < 2t} ≤ e2λt − i! i=0 ≤

2lm+1

e−2λt (2λt) . 2lm!(2lm + 1 − 2λt)

Thus, considering [5.9], there exists a constant At such that  P {−2v (θ1 + θ3 + . . . + θ2l−1 ) ∈ Δx, sup |x|≤vt l≥1

2v (θ2 + θ4 + . . . + θ2l ) < 2vt − x + Δ} ≤ At Δ. Similarly, we can show that there exists a constant Bt such that  P {2v (θ2 + θ4 + . . . + θ2l ) ∈ Δx, sup x

l≥1

2v (θ1 + θ3 + . . . + θ2l−1 ) < 2vt − x + Δ} ≤ Bt Δ. The proof of lemma 5.2 concludes by defining Ct = At + Bt . It is easily seen that if v1 = −v2 = v, n = m, λi = μi = λ, then equation [5.6] is as follows: ' (m ' (m ∂ ∂ ∂ ∂ −v +λ +v + λ fc (t, x) − λ2m fc (t, x) = 0. [5.10] ∂t ∂x ∂t ∂x C OROLLARY 5.1.– The absolute continuous part fc (t, x) satisfies equation [5.10] for |x| < tv. Now, let us study the behavior of the continuous part fc (t, x) close to the lines t = ± xv . To avoid cumbersome expressions, we choose v = 1. L EMMA 5.3.– For m ≥ 2, we have lim

P {0 < t − x (t) < ε} λm tm−1 e−λt = , ε 2 (m − 1)!

lim

P {t + x (t) < ε} = 0. ε

ε↓0

ε↓0

One-dimensional Random Motions in Markov and Semi-Markov Media

167

P ROOF.– It is easily verified that   ε P {0 < t − x (t) ≤ ε} = P t − ≤ θ1 < t 2  t   ε + P θ3 ≥ t − u, θ2 ≤ , θ1 ∈ du + o(ε), 2 0 random variables with where θi , i = 1, 2,3 are independent m-Erlang distributed  t parameter λ. Since 0 P θ3 ≥ t − u, θ2 ≤ 2ε , θ1 ∈ du = o(ε), then after applying the limit we obtain P {0 < t − x (t) < ε} [5.11] ε 33m−1 4 3m−1    44  (λt)i  λ t− ε i e−λt λm tm−1 e−λt λ 2ε 2 −e = = lim . ε↓0 ε i! i! 2 (m − 1)! i=0 i=0

lim ε↓0

  Similarly, P {t + x (t) ≤ ε} = P θ2 ≥ t − 2ε , θ1 ≤ 2ε + o (ε), and, as is easily seen, limε↓0 P {t+x(t) 0 we have −t f (t, x)dx = 1. lim fc (t, x) = x↑t

For a small ε > 0, consider the probability P {0 < t − x (t) < ε}. Let us verify that lim fc (t, x) = lim x↑t

ε↓0

P {0 < t − x (t) < ε} ε

or equivalently, lim ε↓0

 P {0 < t − x (t) < ε} 1  −t = e + te−t . ε 2

Indeed, it is easily seen that   ε P {0 < t − x (t) ≤ ε} = P t − ≤ θ1 < t 2  t   ε + P θ3 ≥ t − u, θ2 ≤ , θ1 ∈ du + o(ε), 2 0 where θi , i = 1, 2, 3 are independent exponentially distributed random variables. The random variable θ1 represents the time of the first velocity alternation, θ2 is the time between the first and the second velocity alternations, and θ3 is the time between the second and the third velocity alternations.   ε We have that P t − 2ε ≤ θ1 < t = e−t+ 2 − e−t . It is then easy to calculate 

t 0

    t −t+u −u ε − 2ε P θ3 ≥ t − u, θ2 ≤ , θ1 ∈ du = 1 − e e e du 2 0  ε = 1 − e− 2 te−t . 

One-dimensional Random Motions in Markov and Semi-Markov Media

179

Hence, it is evident that lim ε↓0

 P {0 < t − x (t) < ε} 1  −t = e + te−t . ε 2

Similarly,  ε ε + o(ε). P {t + x(t) ≤ ε} = P θ2 ≥ t − , θ1 ≤ 2 2 This implies that lim ε↓0

P {t + x (t) < ε} 1 = e−t = lim fc (t, x). x↓−t ε 2

Therefore, fc (t, x) is a solution of the Goursat problem for the linear second-order hyperbolic equation that ensures the uniqueness of the solution of equation [5.25]. It means that f (t, x) is the pdf of the particle position at time t for m = 1. It is relevant to remark that f (t, x) coincides with the result obtained in Pinsky (1991). We now turn to the case m = 2 and continue to calculate integrals of ulk . From u40 = 2 

t

−t

∂u12 ∂t

u04 dy = 2

− 2u00 , it follows that '

∂ ∂t



t

−t

( u12 dy − u12 (t, t) − u12 (t, −t) − 2sinh t − 2sin t

= 4 (sinh t + sin t − t) − 2sinh t − 2sin t = 2sinh t + 2sin t − 4t. Next, from u24 = 2 

t

−t

u24 dy = 2

∂u32 ∂t

'

∂ ∂t

− 2u20 , it follows that 

t

−t

( u32 dy − u32 (t, t) − u32 (t, −t) − 2sinh t + 2sin t

= 4sinh t − 4sin t − 2sinh t + 2sin t = 2sinh t − 2sin t. For t ≤ |y|, we introduce the function g (t, y) = gc (t, y) + gs (t, y), where uc (t, y)  1 1 1 2 u (t, y) + u (t, y) + u31 (t, y) + u12 (t, y) + u32 (t, y) + u04 (t, y) , 2 0 4 1 [5.26] gs (t, y) = δ (t − y) + tδ (t − y) . =

180

Random Motions in Markov and Semi-Markov Random Environments 1

By construction, the function gc (t, y) is a solution of the following equation: '

∂2 ∂2 − 2 2 ∂t ∂y

(2 g (t, y) − g (t, y) = 0.

[5.27]

Therefore, the function fc (t, x) = e−t gc (t, x) is a solution of equation [5.10] for m = 2 (λ = 1, v = 1). We establish that f (t, x) = fc (t, x) + e−t gs (t, x). Taking into account the values of integrals of functions that are involved in the expression for gc (t, y), we have that t f (t, x)dx = 1, for all t ≥ 0. −t Let us show that lim fc (t, x) = lim x↑t

ε↓0

P {0 < t − x (t) < ε} ε

and lim fc (t, x) = lim

x↓−t

ε↓0

P {t + x (t) < ε} . ε

From lemma 5.2, it follows that for m = 2 we have lim

P {0 < t − x (t) < ε} 1 = te−t . ε 2

lim

P {t + x (t) < ε} = 0. ε

ε↓0

[5.28]

and ε↓0

It can be easily verified that limy↑t u04 (t, y) = 0, limy↑t u20 (t, y) = 0, and consequently lim gc (t, y) = lim y↑t

y↑t

t+y t I1 (Λy ) = , 2Λy 2

lim gc (t, y) = lim

y↓−t

y↓−t

t+y I1 (Λy ) = 0. 2Λy

[5.29]

Thus, lim fc (t, x) = x↑t

P {t − x (t) < ε} 1 −t te = lim , ε↓0 2 ε

lim fc (t, x) = 0 = lim

x↓−t

ε↓0

P {t + x (t) < ε} . ε

[5.30]

One-dimensional Random Motions in Markov and Semi-Markov Media

181

Let us show that conditions [5.29], in addition to condition  t g (t, y) e−t dx = 1, −t

ensure the uniqueness of the solution gc (t, y) for equation [5.27], and, consequently, the uniqueness of the solution fc (t, x) of equation [5.21]. It is not hard to verify that each solution of equation [5.25] is a solution of equation [5.24]. By changing the variables s = t + y, p = t − y, we reduce equation [5.24] to ∂4 G (s, p) − G (s, p) = 0. ∂s2 ∂p2

[5.31]

ˆ (0, α) = 0. G

[5.32]

 ˆ (s, α) = ∞ G (s, p) eiαp dp in equation Let us take the Fourier transform G −∞ [5.31], thus we obtain the ordinary differential equation of order 4. Now, by considering that limy↓−t gc (t, y) = 0, we have

Hence, at most four independent solutions of the ordinary differential equation [5.31] satisfy the initial condition [5.32] for each α. By taking the inverse Fourier transform, we obtain four independent solutions of equation [5.27] with the condition limx↓−t gc (t, x) = 0, and just two of them satisfy equation [5.27], but not equation [5.24]. By construction, one of these solutions gc (t, y) is given by equation [5.26]. As another solution, we can take g2 (t, y) = u20 (t, y) + u04 (t, y) . We can easily verify that no linear combination c(t, y) of functions gc (t, y) and g2 (t, x) satisfies conditions [5.30] and  t (c (t, x) + gs (t, y)) e−t dx = 1 −t

for all t > 0 except the solution gc (t, y). Similarly, with the pdf f (t, x) of the particle position for m = 2 we can also obtain solutions of equation [5.10] with conditions [5.8] and [5.12] for each m > 2. 5.2. Distribution of limiting position of fading evolution This section deals with the distribution of the following random series: ∞  n=1

an θn ,

182

Random Motions in Markov and Semi-Markov Random Environments 1

where 0 < a < 1, and θn , n = 1, 2, . . . are i.i.d. random variables with uniform or Erlang distributions. Based on this result, we obtain the distribution of the limiting position of a particle whose trajectory is determined by the fading random evolution in Erlang or uniform media on the line or in multidimensional space. 5.2.1. Distribution of random power series in cases of uniform and Erlang distributions We calculate the distribution of a random power series in the cases of the uniform and 2-Erlang distributions of terms, and by using this distribution we obtain the distribution of a fading evolution.  2n−2 Let us consider a random power series ν = ∞ θ2n−1 , where 0 < a < 1, n=1 a and θn are i.i.d. random variables. Let θ1 be uniformly distributed on the interval [0, 1]. Denote by Fν (x) the distribution function of the sum of the power series ν. L EMMA 5.4.– The function Fν (x) is given by for 0 ≤ x ≤ 1, Fν (x) = c0

3 +∞ 

x

(−1) an(n+1) e− a2n n

n=0

+

+∞ 

(−1)

4

m+1 m(m+1) −xa2m

a

e

,

[5.33]

m=1

and for 1 < x ≤

1 1−a2 ,

Fν (x) = c0

3 +∞ 

(−1)

n−1 n(n+1)

a

n=0

+

+∞ 

m m(m+1)

(−1) a

 1−x  x e a2n − e− a2n

4   (1−x)a2m −xa2m e , −e

[5.34]

m=1

where c0

−1

=

+∞ 

3 (−1)

n−1 n(n+1)

a

e

a2 a2n (a2 −1)

−e

n=0

+

+∞  m=1

' m m(m+1)

(−1) a

e

a2m+2 a2 −1

−e

a2m a2 −1

( .

1 a2n (a2 −1)

4

One-dimensional Random Motions in Markov and Semi-Markov Media

P ROOF.– For Fν (x), we have 



 Fν (x) = P θ1 + a ν ≤ x =  =

x 0

2

x−u a2

η ≤

P



  P u + a2 ν  ≤ x du

x 0

"

183

du.

where η  is a random variable with the same distribution as η. Therefore, we have  x Fν (x) = Fν

x−u a2

0

 Fν (x) =

1

x−u a2



0

" du, 0 ≤ x ≤ 1, " du, x > 1.

[5.35]

We will find a solution of equation [5.35] in the following form: Fν (x) =

+∞ 

x

cn e− a2n +

n=0

+∞ 

bm e−xa

2m

.

[5.36]

m=1

Substituting [5.36] into [5.35], we have for 0 ≤ x ≤ 1 +∞ 

x

cn e− a2n +

n=0

=

bm e−xa

2m

m=1

+∞ 

 cn

n=0

=

+∞ 

x 0

x−u

e− a2n+2 du +

+∞ 

 bm

m=1

x 0

+∞ 

+∞    x cn a2n+2 1 − e− a2n+2 +

n=0

m=1

e−(x−u)a bm

a2m−2

2m−2

du

  2m−2 1 − e−xa .

After equating coefficients of like exponential terms, we have cn a2n+2 = −cn+1 , b1 = −c0 ,

n = 0, 1, 2, . . . ,

bm = −a2m−2 bm−1 , n n(n+1)

Therefore, cn = (−1) a

m = 2, 3, . . . .

c0 and bm+1 = (−1)

m+1 m(m+1)

a

c0 .

It is easily verified that +∞  n=0

cn a2n+2 +

+∞  m=1

bm 2m−2 a

= 0.

[5.37]

184

Random Motions in Markov and Semi-Markov Random Environments 1

Since θn is an absolutely continuous random variable, then the function Fν (x) is d absolutely continuous, i.e. there exists  the pdf fν (x) = dx Fν (x). It iseasily seen that 1 the support of fν (x) is the segment 0, 1−a 2 , and, consequently, Fν

1 1−a2

= 1.

Thus, we have Fν (x) = c0

3 +∞ 

x n n(n+1) − 2n a

(−1) a

e

+

n=0

+∞ 

4 (−1)

m+1 m(m+1) −xa2m

a

e

,

m=1

for 0 ≤ x ≤ 1. It is easily verified that Fν (0) = 0. From [5.36], it follows that Fν (x) = c0

+∞ 

(−1)

n−1 n(n+1)

a

 1−x  x e a2n − e− a2n

n=0

+c0

+∞ 

  2m 2m m . (−1) am(m+1) e(1−x)a − e−xa

m=1 1 1−a2 .

for 1 < x ≤

 Taking into account that Fν c0 =

3 +∞ 



1 1−a2

= 1, we obtain

3 (−1)

n−1 n(n+1)

a

e

a2 a2n (a2 −1)

−e

1 a2n (a2 −1)

4

n=0

+

+∞ 

' m m(m+1)

(−1) a

e

a2m+2 a2 −1

−e

a2m a2 −1

(4−1 .

m=1

Next, it follows from [5.35] that the function Fν (x) is monotonic. Indeed, as a consequence of lemma 5.5, all coefficients of the expansion [5.36] are bounded. Therefore, Fν (x) is continuously differentiable. Let

d dx Fν

(x) = fν (x), and using equations [5.35], we have the pdf ' (   u x−u 1 x 1 x −(x−u) du = du. (x − u) e f fν (x) = 2 fν ν a 0 a2 a2 0 a2

[5.38]

Since fν (x) is continuous, then it follows from [5.38] that fν (x) is continuously  differentiable, i.e. there exists fν (x), x ≥ 0. Suppose that for a segment [0, α0 ], α0 > 0 the function fν (x) ≥ 0, i.e. Fν (x) is a non-decreasing function on [0, α0 ]. It  is easily verified that there exists α1 > 0 such that 0 < α1 < α0 and fν (x) ≥ 0 for

One-dimensional Random Motions in Markov and Semi-Markov Media

185

each x ∈ [0, α1 ]. By continuing this procedure, we obtain that for any n ∈ N, there (n) exists an αn > 0 such that fν (x) ≥ 0 for all x ∈ [0, αn ]. (n)

Hence, all derivatives fν consequently 

fν (x) =

(0), n ≥ 1, of Fη (x) are non-negative and

∞ (n)  fν (0) n−1 ≥ 0, x ≥ 0. x (n − 1)! n=1

Therefore, Fη (x) is a non-decreasing function for all x ≥ 0. The case where Fν (x) is non-increasing on an initial segment [0, α0 ] can be studied in a similar manner. Thus, the solution Fν (x) of equation [5.35] is a monotonic function. From [5.34] and [5.36], it follows that limx→+∞ Fν (x) = 1. Hence, Fν (x) is a distribution function. Let us show that there is not another distribution function satisfying equation [5.35]. Indeed, suppose a distribution function F˜ν (x) that satisfies [5.35] and F˜ν (x) = Fν (x), then the function G (x) = Fν (x) − F˜ν (x) is a solution of equation [5.35], but this is impossible as G (x) is not a monotonic function, except for the case where G (x) = 0. ∞ Now consider the random power series η = n=1 a2n−2 θ2n−1 , where 0 < a < 1, and θn are i.i.d. random variables with the Erlang-2 distribution of the following form: fk (t) =

dFk (t) = λ2 te−λt I{t≥0} , dt

λ > 0.

Denote by Fη (x) = P {η ≤ x} the cdf of η. To simplify expressions, we consider λ = 1. L EMMA 5.5.– The function Fη (x) has the form x

Fη (x) = 1 + (a01 x + a02 ) e−x + (a11 x + a12 ) e− a2 + . . . x

+ (an1 x + an2 ) e− a2n + . . . ,

186

Random Motions in Markov and Semi-Markov Random Environments 1

where for n= 1, 2, . . . an1 = an−11

a4n−2 (a2n − 1)

an2 = −an−11

2;

2a6n−2 (a2n − 1)

3

+ an−12

a4n

2;

(a2n − 1)

and S2 + S3 S1 S3 + S2 S4

a01 = where

3 S1 =

1+

a02 =

a2 (a2 − 1)

2

+

S4 − S1 , S1 S3 + S2 S4

a8 2

(a2 − 1) (a4 − 1) 4

a18

+

2

2 2 2 + ... (a2 − 1) (a4 − 1) (a6 − 1) 3 a18 a8 + +2 3 3 2 (a2 − 1) (a4 − 1) (a2 − 1) (a4 − 1) (a6 − 1) 4 a18 + ... ; + 2 3 (a2 − 1) (a4 − 1) (a6 − 1)

S2 =

a2 a8 a18 + + 2 2 2 2 2 4 2 1−a (1 − a ) (1 − a ) (1 − a ) (1 − a4 ) (1 − a6 )

+...; S3 =

2a2 3

(a2 − 1)

+

2

(a2 − 1) (a4 − 1) 2a24

2

2

(a2 − 1) (a4 − 1) (a6 − 1)

S4 = 1 + +

2a12

+

a4 2

(a2 − 1)

+

3

3

(a2 − 1) (a4 − 1)

+ ...;

a12 2

2

(a2 − 1) (a4 − 1)

a24 2

3

2a12

+

2

(a2 − 1) (a4 − 1) (a6 − 1)

2

+ ...;

2

One-dimensional Random Motions in Markov and Semi-Markov Media

187

P ROOF.– For Fη (x), we have 

2







Fη (x) = P θ1 + a η ≤ x =  =

x

0

ue−u P

η ≤

x 0

x−u a2

  ue−u P u + a2 η  ≤ x du

"

du.

where η  is a random variable with the same distribution as that of η. Hence,  Fη (x) =

x 0

ue−u Fη

x−u a2

" du.

[5.39]

It is easy to see that Fη (0) = 0. We will find Fη (x) in the form of the following series: x

Fη (x) = 1 + (a01 x + a02 ) e−x + (a11 x + a12 ) e− a2 + . . . x

+ (an1 x + an2 ) e− a2n + . . .

[5.40]

Substituting [5.40] into [5.39], we have (  '  x x−u x−u −u 1 + a01 2 + a02 e− a2 Fη (x) = ue a 0 ( ( ' '  x−u x−u x−u x−u + a11 2 + a12 e− a4 + a21 2 + a22 e− a6 + . . . du a a  2  −x  2  x 2 e + a x − x − 2a2 e− a2 −x −x 2 a x − x + 2a = 1 − xe − e + a01 a 3 (a2 − 1)   x x − a2 x − a2 e−x + a2 e− a2 +a02 a2 2 (a2 − 1)  4  −x  4  x 4 e + a x − x + 2a4 e− a4 6 a x − x + 2a +a11 a 3 (a4 − 1)   x x − a4 x − a4 e−x + a4 e− a4 +a12 a4 2 (a4 − 1)  6  −x  6  x 6 e + a x − x + 2a6 e− a6 10 a x − x + 2a +a21 a 3 (a6 − 1)   x x − a6 x − a6 e−x + a6 e− a6 + .... [5.41] +a22 a6 2 (a6 − 1)

188

Random Motions in Markov and Semi-Markov Random Environments 1

Whence, taking into account [5.40], for the term xe−x to be held constant, we have: a01 = −1 + a01

+a21

a10 (a6 − 1)

a2 (a2 − 1) + a22

2

2

+ a02

a2 a6 a4 + a + a + ... 11 12 2 1 − a2 1 − a4 (a4 − 1)

a6 a4n+2 a2n+2 + . . . + an1 + an2 2 6 1−a 1 − a2n+2 (a2n+2 − 1)

+.... For the term e−x to be held constant: a02 = −1 + a01

+a21

2a16 (a6 − 1)

3

2a4 (a2

− 1)

− a22

3

− a02

a12 (a6 − 1)

2

a4 (a2

2

− 1)

+ . . . + an1

+ a11

2a10 (a4

− 1)

2a6n+4 (a2n+2 − 1)

3

3

− a12

− an2

a8 (a4

2

− 1)

a4n+4 2

(a2n+2 − 1)

+.... x

For the term x e− a2n to be held constant: an1 = an−11

a4n−2 (a2n−1 )

2;

x

and for the term e− a2n to be held constant: an2 = −an−11

2a6n−2 3 (a2n−1 )

+ an−12

a4n (a2n−1 )

2;

n = 1, 2, . . . .

This implies the following relations: 53 a2 a8 1 = a01 1+ + 2 2 2 (a2 − 1) (a2 − 1) (a4 − 1) a18

4

2 2 2+... (a2 − 1) (a4 − 1) (a6 − 1) 3 a18 a8 + +2 3 3 2 (a2 − 1) (a4 − 1) (a2 − 1) (a4 − 1) (a6 − 1) 46 a18 + ... + 2 3 (a2 − 1) (a4 − 1) (a6 − 1)

+

[5.42]

One-dimensional Random Motions in Markov and Semi-Markov Media

3

a2 a8 + 2 1 − a2 (1 − a2 ) (1 − a4 )

+a02

+

a18 2

and

3 1 = a01

+

2a2 3

(a2 − 1)

+

4

+ ... ;

2

(1 − a2 ) (1 − a4 ) (1 − a6 )

189

2a12 2

3

(a2 − 1) (a4 − 1)

+

[5.43]

2a12 3

2

(a2 − 1) (a4 − 1)

2a24 2

2

3

(a2 − 1) (a4 − 1) (a6 − 1) 2a24

+

4

2a24

2 3 2 + 3 2 2 + ... (a2 − 1) (a4 − 1) (a6 − 1) (a2 − 1) (a4 − 1) (a6 − 1) 3 a4 a12 + −a02 1 + 2 2 2 (a2 − 1) (a2 − 1) (a4 − 1) 4 a24 + [5.44] 2 2 2 + ... . (a2 − 1) (a4 − 1) (a6 − 1)

By using the D’Alambert test for convergence, it is easy to check that all series involved in equations [5.43] and [5.44] are convergent. We should note that coefficient a01 in [5.43] is related to the series 1+

a2 2

(a2 − 1) =

+

a8 2

2

(a2 − 1) (a4 − 1)

+

a18 2

2

2

(a2 − 1) (a4 − 1) (a6 − 1)

+ ...

∞   −1 . 1−a2k k=1

By solving equations [5.43] and [5.44], we have a01 =

S2 + S3 ; S1 S3 + S2 S4

a02 =

S4 − S1 . S1 S3 + S2 S4

Whence, by using [5.42], we calculate all other coefficients of expansion [5.40].

190

Random Motions in Markov and Semi-Markov Random Environments 1

From [5.42], it also follows that Fη (0) = 1 + a02 + a12 + a22 + . . . = 0. Next, it follows from [5.39] that the function Fη (x) is monotonic. The proof is much the same as for the case of the uniform distribution. From [5.40] and [5.42], it follows that limx→+∞ Fη (x) = 1. Therefore, Fη (x) is a unique distribution function satisfying equation [5.39] and is the distribution of η. R EMARK 5.7.– It is easily verified that in the case of an arbitrary parameter λ > 0 of Erlang-2, the distribution function Fη (x) has the following form: λx

Fη (x) = 1 + (a01 x + a02 ) e−λx + (a11 x + a12 ) e− a2 + . . . λx

+ (an1 x + an2 ) e− a2n + . . . . 5.2.2. The distribution of the limiting position Let {ξ (t)} be a renewal process, = n which is defined by ξ (t) max {n ≥ 0 : τn ≤ t}, t>0, where τn = k=0 θk and θk > 0, k = 0, 1, 2, . . . are i.i.d. random variables. The process {x (t)}, which is defined by the following equation for 0 < a < 1,  t ξ(s) x (t) = v (−a) ds, 0

is called a fading random evolution with the switching process ξ (t) (or in semi-Markov media ξ (t)). A fading evolution describes the random movement of a particle such that after each switching direction of the particle movement its absolute velocity decreases and, therefore, the particle loses energy with time. Thus, a fading evolution may be a model of one-dimensional movement, where a particle loses its energy under the action of friction or loses energy colliding with other particles. Over time, the particle stops its movement and our aim is to obtain the distribution of a stopping point for the particle, i.e. the distribution of the following random variable:  ∞ ξ(s) σ=v (−a) ds. 0

Since θk , k = 1, 2, . . . are identically distributed and σ = (θ1 + a2 θ3 + . . .) − a(θ2 + a2 θ4 + . . .),

One-dimensional Random Motions in Markov and Semi-Markov Media

191

the cdf Fσ (x) = P {σ ≤ x}, which is the distribution of the limiting position of the fading evolution x (t), and can be represented as follows:      Fσ (x) = P {σ ≤ x} = P θ1 + a2 θ3 + . . . − a θ2 + a2 θ4 + . . . ≤ x * )     θ1 + a2 θ3 + . . . − x 2 =P θ2 + a θ 4 + . . . ≥ a ( '   x y−x = , dFs (y) 1 − Fs a 0 where s is the sum of the series s=

∞ 

ak+2 θk .

k=1

Suppose θ1 has the uniform distribution on [0, 1], then the cdf Fs (x) = Fν (x), where Fν (x) is calculated in lemma 5.4. In the case where θ1 has the Erlang-2 distribution, the cdf Fs (x) = Fη (x), where Fη (x) is calculated in lemma 5.5. 5.3. Differential and integral equations for jump random motions In this section, we obtain a differential equation for the characteristic function of random jump motion on the line, where the direction alternations and random jumps occur according to the renewal epochs of the Erlang distribution. We also study random jump motion in higher dimensions and we obtain a renewal-type equation for the characteristic function of the process. In the three-dimensional case, we obtain the telegraph-type differential equation for jump random motion, where the direction alternations and random jumps occur according to the renewal epochs of the Erlang-2 distribution. In 1951 and 1956, Goldstein and Kac solved the problem of a one-dimensional random motion driven by a Poisson process, i.e. defined by the movement of a particle at a constant velocity v traveling in some direction a random distance drawn from an exponential distribution. After that, the direction of the particle changed in the opposite direction under the same stochastic conditions. The problem can be seen as a random motion governed by a switching Poisson process with alternating directions and having exponentially distributed sojourn times. Goldstein and Kac found that the solution of this problem satisfies the one-dimensional telegraph-type equation, which is similar to the Heaviside telegraph equation for wave propagation in transmission lines. Namely, ∂ 2 f (t, x) ∂ 2 f (t, x) ∂ + 2λ f (t, x) = v 2 , 2 ∂t ∂t ∂x2

[5.45]

192

Random Motions in Markov and Semi-Markov Random Environments 1

and it is called the Goldstein–Kac telegraph equation. Many variations of this basic idea have been approached by many researchers worldwide, and in this section we focus on the important case of the jump telegraph process on a line and in higher dimensions. The one-dimensional jump telegraph process, which is a generalization of the telegraph process, was introduced and studied (including its applications in financial market theory) by Ratanov (2007, 2010), and Ratanov and Melnikov (2008). Some limit theorems for the jump telegraph process on a line were studied by Di Crescenzo and Martinucci (2013). The jump telegraph process with random jumps was introduced and studied by Di Crescenzo et al. (2013). Some previous results regarding Erlang distributions in the telegraph process context can be found in Di Crescenzo (2001), Pogorui and Rodríguez-Dagnino (2005a) and Pogorui and Rodríguez-Dagnino (2012), random motions in multidimensional spaces in De Gregorio (2012, 2014), connections between random motion and PDEs in physics in Garra and Orsingher (2014) and recent results on jump-telegraph processes governed by alternating fractional Poisson process in Di Crescenzo and Meoli (2018). In this section, we obtain differential equations for the one-dimensional jump telegraph process in the case of Erlang distributed switching times for velocities, and arbitrary distributed random jumps. We also study the jump telegraph process in Rn , n > 1, and we obtain the Volterra integral equation of convolution type for the characteristic function of the process. As a particular case, we consider a differential equation for the jump process in R3 and study its solution. 5.3.1. The Erlang jump telegraph process on a line Let us consider the random motion of a particle on the line described in the following manner: the particle moves according to one of two velocities c > 0 or −v < 0. 
Starting from the point x0 ∈ R, the particle moves with velocity c during the random time τ1 = ξλ1 + . . . + ξλn , n ≥ 1, where ξλi is exponentially distributed with rate λi > 0 and random variables {ξλi , i = 1, . . . , n}. Then, the particle jumps a random length η1 and continues its motion with velocity −v during the random time τ2 = ξμ1 + . . . + ξμm , m ≥ 1, where ξμi is exponentially distributed with rate μi > 0. Furthermore, the particle jumps a random length η2 and moves with velocity c, and so on. It is assumed that random variables  η1 , η2 , ξλi , ξμj , i = 1, . . . , n, j = 1, . . . , m are mutually independent. Thus, after even renewal events the particle has velocity c and after odd events velocity −v. The motion of this particle can be described by using the theory of random evolutions in the following form. Denote by ψ (t) , t ≥ 0, the alternating

One-dimensional Random Motions in Markov and Semi-Markov Media

193

semi-Markov process on the phase (or state) space T = {1, 2}, with sojourn time τi at state i ∈ T , and with transition probabilities matrix of the embedded Markov chain ' ( 0 1 P = . 1 0 Denote by y (t) , t ≥ 0, the position of the particle at time t. Let us introduce the following function C1 on T C1 (y) =

c, if y = 1, −v, if y = 2.

[5.46]

Thus, the process y (t) is a random evolution in the semi-Markov medium ψ (t) satisfying the following equation (Korolyuk and Swishchuk 1995b):  y (t) = x0 +

0

t

C1 (ψ (s)) ds +

ν(t) 

ηk ,

[5.47]

k=1

where ν (t) is the number of renewal events of ψ (t), and ηk has the same distribution as η1 for odd k, and ηk has the same distribution as η2 for even k. In addition, the random variables η1 , η2 , . . . are mutually independent. This implies that 2 2 2 ν(t) 2 2 k=1 ηk 2 < +∞ almost surely for any t ≥ 0. Now, let us show that the evolution y (t) can be represented as an evolution driven by a Markov process. Denote by ξ (t) , t ≥ 0, a Markov chain in the phase space E = {1, 2, . . . , n, n + 1, . . . , n + m} with the infinitesimal operator Q = q [P − I], where q = diag (λ1 , λ2 , . . . , λn , μ1 , . . . , μm ) and P is the following (n + m) × (n + m) transition probabilities matrix: ⎞ ⎛ 0 1 0 ... 0 ⎜0 0 1 ... 0⎟ ⎟ ⎜ ⎟ ⎜ P = pij = ⎜ ... ... ... . . . ... ⎟ . ⎟ ⎜ ⎝0 0 0 ... 1⎠ 1 0 0 ... 0 We assume that P [ξ (0) = 1] = 1. Let us introduce the following function C on E: C (i) =

c, if i = 1, . . . , n, −v, if i = n + 1, . . . , n + m

194

Random Motions in Markov and Semi-Markov Random Environments 1

and consider the equation  x (t) = x0 +

0

t

C (ξ (s)) ds +

n(t) 

αk ,

[5.48]

k=1

where n (t) is the number of renewal events of ξ (t), and αkn+(k−1)m , k = 1, 2, . . . has the same distribution as η1 , and αkm+kn , k = 1, 2, . . . has the same distribution as η2 , and αk = 0 in other cases. We suppose that there exists the pdf gηi (y), i = 1, 2, and we assume gαi (y) = δ (y) for αi = 0, where δ (·) is the delta function. Equation [5.48] describes a random evolution in Markov media ξ (t) , t ≥ 0. In addition, it is easily verified that P [y (t) = x (t)] = 1 for each t ≥ 0. Thus, we reduce the semi-Markov case described by equation [5.47] to the Markov case described by equation [5.48]. In the rest of this section, we will study the process x(t), which is described by equation [5.48], and we will consider the two-component process ζ (t) = (x (t) , ξ (t)) in the phase space Z = R × E, where R = (−∞, +∞), E = {1, . . . , n, n + 1, . . . , n + m}. It is well-known (Korolyuk and Korolyuk 1999; Korolyuk and Swishchuk 1995b) that the process ζ (t) = (x (t) , ξ (t)) has the following infinitesimal operator: Aϕ (x, i) = C (x, i)

∂ ϕ (x, i) + λi (P ϕ (x, i) − ϕ (x, i)) + A0 ϕ (x, i) , ∂x

where ϕ (x, i) is a differential function with respect to x ∈ R for all i ∈ E and ∞ such that there exists g (y) ϕ (x + y, j) dy for any x ∈ R, i ∈ E, P ϕ (x, i) = α i −∞  p ϕ (x, j) and the operator A0 is of the following form, for i = n or i = n+m, ij j∈E ⎛ A0 ϕ (x, i) = λi ⎝

 j∈E

 pij

∞ −∞

⎞ gαi (y) [ϕ (x + y, j) − ϕ (x, i)] dy ⎠

and for other i ∈ E A0 ϕ (x, i) = 0. Let us write the operator A in more detail: Aϕ (x, 1) = c

∂ ϕ (x, 1) + λ1 (ϕ (x, 2) − ϕ (x, 1)) , ∂x

Aϕ (x, 2) = c

∂ ϕ (x, 2) + λ2 (ϕ (x, 3) − ϕ (x, 2)) , ∂x

One-dimensional Random Motions in Markov and Semi-Markov Media

195

.. . Aϕ (x, n) = c  +λn

∂ ϕ (x, n) + λn (ϕ (x, n + 1) − 2ϕ (x, n)) ∂x

∞ −∞

gη1 (y) ϕ (x + y, n + 1) dy,

Aϕ (x, n + 1) = −v

∂ ϕ (x, n + 1) + μ1 (ϕ (x, n + 2) − ϕ(x, n + 1)), ∂x

.. . Aϕ (x, n + m) = −v  +μm

∂ ϕ (x, n + m) + μm (ϕ (x, 1) − 2ϕ (x, n + m)) ∂x

∞ −∞

gη2 (y) ϕ (x + y, 1) dy.

Consider f (t, x, k) dx = P {x ≤ x (t) ≤ x + dx, ξ (t) = k}, k ∈ E. The function f (t, x, k) satisfies the Kolmogorov backward equation ∂f (t, x, k) = A f (t, x, k) , ∂t with the initial condition  f (0, x, k) = δ(x). k∈E

Passing, in equation [5.49], into characteristic functions  ∞ eiωx f (t, x, k)dx, Hk (t, ω) = −∞

we obtain ∂H1 (t, ω) = −iωcH1 (t, ω) + λ1 (H2 (t, ω) − H1 (t, ω)), ∂t ∂H2 (t, ω) = −iωcH2 (t, ω) + λ2 (H3 (t, ω) − H2 (t, ω)), ∂t .. . ∂Hn (t, ω) = −iωcHn (t, ω) + λn (Hn+1 (t, ω) − 2Hn (t, ω)) ∂t +λn gˆη1 (−ω) Hn+1 (t, ω),

[5.49]

196

Random Motions in Markov and Semi-Markov Random Environments 1

∂Hn+1 (t, ω) = iωvHn+1 (t, ω) + μ1 (Hn+2 (t, ω) − Hn+1 (t, ω)), ∂t .. . ∂Hn+m (t, ω) = iωvHn+m (t, ω) + μm (H1 (t, ω) − 2Hn+m (t, ω)) ∂t +μm gˆη2 (−ω) H1 (t, ω) .

[5.50]

Let us write equation [5.50] in the matrix form M H (t, ω) = 0, 

where H (t, ω) = (H1 (t, ω) , H2 (t, ω) , . . . , Hn+m (t, ω)) and the matrix M follows: ⎛ ∂ −λ1 0 ··· 0 ∂t + iωc + λ1 ∂ ⎜ + iωc + λ −λ · · · 0 0 2 2 ∂t ⎜ M =⎜ .. .. .. .. . . ⎝ . . . . . −μm − μm gˆη2 (−ω)

···

0

···

∂ ∂t

is as ⎞ ⎟ ⎟ ⎟ ⎠

− iωv + 2μm

n+m Consider the function H (t, ω) = i=1 Hi (t, ω). It is easily seen that H (t, ω) is n+m the characteristic function of f (t, x), where f (t, x) = i=1 f (t, x, i). The function f (t, x) is the pdf of the position x (t) of the particle at time t. T HEOREM 5.4.– The function H (t, ω) satisfies the following differential equation: n−1 '

( m−1 (  '∂ ∂ + iωc + λk − iωv + μl ∂t ∂t k=1 l=1 (' ( ' ∂ ∂ × + iωc + 2λn − iωv + 2μm H (t, ω) ∂t ∂t m n    − (1 + gˆη1 (−ω)) 1 + gˆη2 (−ω) λi μj H (t, ω) = 0. i=1

[5.51]

j=1

D EMONSTRATION .– It is well-known (Pogorui and Rodríguez-Dagnino 2005a) that H (t, ω) satisfies the equation (det (M )) H (t, ω) = 0,

[5.52]

where det (M ) is the determinant of the matrix M . It is easily verified that ( ( n ' m '  ∂ ∂ + iωc + λk − iωv + μl det (M ) = ∂t ∂t k=1

l=1

One-dimensional Random Motions in Markov and Semi-Markov Media

' ×

∂ + iωc + 2λn ∂t

('

∂ − iωv + 2μm ∂t

197

(

m n    − (1 + gˆη1 (−ω)) 1 + gˆη2 (−ω) λi μj . i=1

j=1

 Consider the case where c = v, n = m, λi = μi = λ. In this case, equation [5.51] has the following form: '

∂2 ∂ + 2λ + ω 2 c2 + λ2 2 ∂t ∂t

(n−1 '

( ∂2 ∂ 2 2 2 + 4λ + ω c + 4λ H (t, ω) ∂t2 ∂t

− (1 + gˆη1 (−ω)) (1 + gˆη2 (−ω)) λ2n H (t, ω) = 0. [5.53] Without loss of generality, we assume that P (ξ (0) = 1) = 1, i.e. the particle starts moving with velocity c. Therefore, H (0, ω) = H1 (0, ω) = 1. It follows from the first and the last equations of the set of [5.50] that (all derivatives are taken with respect to t)  H  (0, ω) = H1 (0, ω) + Hn+m (0, ω) = −iωc + λˆ gη2 (−ω).

Similarly, we can calculate H (n) (0, ω) for any n ∈ N. Now, the pdf f (t, x) can be obtained through the inverse Fourier transform applied to equation [5.53]. Hence, '

∂2 ∂2 ∂ + 2λ − c2 2 + λ2 2 ∂t ∂t ∂x

(n−1 '

( 2 ∂2 ∂ 2 ∂ 2 f (t, x) + 4λ + 4λ − c ∂t2 ∂t ∂x2

−λ2n f (t, x)  ∞ 2n (gη1 (−x + u) + gη2 (−x + u) − gη1 +η2 (−x + u))f (t, u) du = 0, +λ −∞

  with initial  conditions f (0, x) = δ (x) , f (0, x) = −cδ (x) + λgη2 (−x) and so on,

where operator

∂2 ∂t2

2

∂ ∂t2

2

∂ ∂ 2 + 2λ ∂t − c2 ∂x 2 + λ 2

∂ ∂ + 2λ ∂t − c2 ∂x 2

m

is a formal m-times product of the differential  2 m ∂ ∂ 2m + λ2 , and it is assumed that ∂t = ∂t 2 2m .

198

Random Motions in Markov and Semi-Markov Random Environments 1

5.3.2. Examples 5.3.2.1. Exponential case In the case where n = 1, equation [5.53] can be written in the following form:  2  ∂ ∂ 2 2 2 + 4λ + ω c + 4λ − (1 + gˆη1 (−ω)) (1 + gˆη2 (−ω)) ∂t2 ∂t ×H (t, ω) = 0,

[5.54]

gη2 (−ω). with the initial conditions H (0, ω) = 1, H  (0, ω) = −iωc + λˆ By solving equation [5.54], we obtain the characteristic function ' ( 2λ − iωc + λˆ gη2 (−ω) H (t, ω) = e−2λt cosh (A(ω)t) + sinh (A(ω)t) , A(ω) 7 where A(ω) = λ2 (1 + gˆη1 (−ω) + gˆη2 (−ω) + gˆη1 (−ω) gˆη2 (−ω)) − ω 2 c2 . 5.3.2.2. Erlang-2 case In the case where n = 2, equation [5.53] can be written in the following form: ' 2 (' 2 ( ∂ ∂ ∂ ∂ 2 2 2 2 2 2 H (t, ω) + 2λ c + λ + 4λ c + 4λ + ω + ω ∂t2 ∂t ∂t2 ∂t −λ4 (1 + gˆη1 (−ω)) (1 + gˆη2 (−ω)) H (t, ω) = 0, with the initial conditions H (0, ω) = H1 (0, ω) = 1; ∂H (0, ω) ∂H1 (0, ω) ∂H4 (0, ω) = + = −iωc + λˆ gη2 (−ω) ; ∂t ∂t ∂t ∂ 2 H (0, ω) ∂ 2 H1 (0, ω) ∂ 2 H3 (0, ω) ∂ 2 H4 (0, ω) = + + ∂t2 ∂t2 ∂t2 ∂t2 = −ω 2 c2 − λ2 + λ2 gˆη2 (−ω) − 2iωcλˆ gη2 (−ω) ; ∂ 3 H (0, ω) ∂ 3 H1 (0, ω) ∂ 3 H2 (0, ω) ∂ 3 H3 (0, ω) ∂ 3 H4 (0, ω) = + + + 3 ∂t ∂t3 ∂t3 ∂t3 ∂t3 = 6icλ2 ωˆ gη2 (−ω) − 3c2 λω 2 gˆη2 (−ω) + 3icλ2 ω + λ3 (ˆ gη1 (−ω) +5ˆ gη2 (−ω) + 3) + ic3 ω 3 . Thus, the characteristic function is given by H (t, ω) = e−

3λ 2 t

(C1 cosh (A(ω)t) + C2 cosh (B(ω)t)

+C3 sinh (A(ω)t) + C4 sinh(B(ω)t)),

One-dimensional Random Motions in Markov and Semi-Markov Media

where

9

A(ω) =

L − 4λ

8

199

λ2 (1 + gˆη1 (−ω) + gˆη2 (−ω) + gˆη1 (−ω) gˆη2 (−ω)) + 2ω 2 c2 ,

9 B(ω) =

8 L + 4λ λ2 (1 + gˆη1 (−ω) + gˆη2 (−ω) + gˆη1 (−ω) gˆη2 (−ω)) + 2ω 2 c2

where L = λ2 − 4ω 2 c2 , and the coefficients Ci , i = 1, 2, 3, 4 can be calculated by using initial conditions. 5.4. Estimation of the number of level crossings by the telegraph process It is well-known that for almost all sample paths of Brownian motion W (t), (W (0) = 0) (that is, for a set of sample paths having probability one), for all δ > 0, the path has infinitely many zeros in the interval (0, δ). Under Kac’s condition, the telegraph process weakly converges to the Wiener process. In this section, we estimate the number of level crossings by the telegraph process and, passing to the limit in Kac’s condition, we obtain the estimation for a level crossing by the Wiener process. Let {ξ (t) , t ≥ 0} be a Markov process on the phase space {0, 1} with the generative matrix ' ( −1 1 Q=λ . 1 −1 D EFINITION 5.2.– x (t) is the telegraph process if d ξ(t) x (t) = v(−1) , v = const > 0, dt

x (0) = x0 .

[5.55]

In the sequel, we need some auxiliary results. The explicit form for the distribution of the first passage time of a level by a telegraph processes on the line. For a fixed l, let us define that Δ(t) = l − x (t). We assume that z = l − x0 > 0. Suppose ξ (0) = k and define τk (z) = inf {t ≥ 0 : Δ (t) = 0} , k ∈ {0, 1} . It is easily seen that τk (z) is the first passage time of the level l by x (t) assuming ξ (0) = k. Denote by fk (t, z) dt = P (τk (z) ∈ dt) the probability density function of τk (z).

200

Random Motions in Markov and Semi-Markov Random Environments 1

T HEOREM 5.5.– For t ≥ f0 (t, z) = e

−λt

z v

 √  zλ −λt I1 λv v 2 t2 − z 2 √ δ (z − vt) + 2 e , v v 2 t2 − z 2

z  e−λt   z I1 λ t − t− v v  √   t −λt e I1 (λ (t − u)) I1 λv v 2 u2 − z 2 √ du, + zλ (t − u) v 2 u2 − z 2 z/v

f1 (t, z) =

[5.56]

[5.57]

where I1 (·) is the modified Bessel function of the first kind. P ROOF.– Let us consider the Laplace transforms of τk (z), k ∈ {0, 1}.   ϕk (s, z) = E e−sτk (z) , s > 0. By using the renewal theory, we can obtain the following system of integral equations for these Laplace transforms:  s+λ λ z − s+λ u ϕ0 (s, z) = e− v z + e v ϕ1 (s, z − u) du v 0  λ − s+λ z z s+λ u − s+λ z v v =e + e e v ϕ1 (s, u) du, v 0  ∞ s+λ λ s+λ ϕ1 (s, z) = e v z e− v u ϕ0 (s, u) du. v z These equations are then differentiated to obtain the following system: ∂ s+λ λ ϕ0 (s, z) = − ϕ0 (s, z) + ϕ1 (s, u) ∂z v v ∂ s+λ λ ϕ1 (s, z) = ϕ1 (s, z) − ϕ0 (s, u). ∂z v v It is well-known (Pogorui and Rodríguez-Dagnino 2005a) that ϕ0 (s, z) and ϕ1 (s, z) satisfy the following equation: ' ∂ ( + s+λ − λv ∂z v det f (z) = 0. ∂ s+λ λ v ∂z − v By calculating the determinant, we have ∂2 s2 + 2λs f (z) − f (z) = 0. 2 ∂z v2

One-dimensional Random Motions in Markov and Semi-Markov Media

201

and the solution of this equation can be written as f (z) = C1 e

√ z s2 +2λs v

+ C2 e −



z s2 +2λs v

.

The constants obtained from the system of integral equations yields z

ϕ0 (s, z) = e− v

√ s2 +2λs

and ϕ1 (s, z) =

s+λ−

√ λ

,

s2 + 2λs

[5.58]

z

e− v



s2 +2λs

.

[5.59]

The inverse Laplace transform of ϕ(0,1) (s, z) with respect to s yields the following pdf (Korn and Korn 2000, p. 239, formula 88):  z√ 2  f0 (t, z) = L−1 e− v s +2λs , s [5.60] λ√  2 2 2 −λt −λt I1 v v t − z √ = e δ (z − vt) + zλe , v 2 t2 − z 2 z t≥ . v Hence, λ√  2 2 2 −λt −λt I1 v v t − z √ dt, [5.61] P (τ0 (z) ∈ dt) = e δ (z − vt) dt + zλe v 2 t2 − z 2 where t ≥ vz . It follows from (Korn and Korn 2000, p. 237), we have 3 4 √ s + λ − s2 + 2λs e−λt −1 L ,s = I1 (λt). λ t Therefore, e−λt I1 (λt) . t Taking into account equations [5.60] and [5.61], we have for t ≥ vz  t −λ(t−u) e f1 (t, z) = I1 (λ (t − u)) t−u z/v 5 λ√ 6 2 u2 − z 2 I v 1 v √ du × e−λu δ (z − vu) + zλe−λu v 2 u2 − z 2  √   t −λt e I1 (λ (t − u)) I1 λv v 2 u2 − z 2 z  e−λt   √ + zλ I1 λ t − du. = t − vz v (t − u) v 2 u2 − z 2 z/v f1 (t, 0) =

202

Random Motions in Markov and Semi-Markov Random Environments 1

5.4.1. Estimation of the number of level crossings for the telegraph process under Kac's condition

Denote by $C_k(t, z)$ the number of crossings of the level $z$ by the particle $x(t)$ during $(0, t)$, $t > 0$, given that $\xi(0) = k \in \{0, 1\}$, and consider the renewal function $H_k(t, z) = E[C_k(t, z)]$.

Let us consider the so-called Kac condition (or the hydrodynamic limit): set $\lambda = \varepsilon^{-2}$, $v = c\varepsilon^{-1}$ and let $\varepsilon \to 0$, so that $\lambda \to \infty$ and $v \to \infty$ with $v^2/\lambda = c^2$. It was proved in Kac (1974) that, under Kac's condition, the telegraph process $x(t)$ converges weakly to the Wiener process $W(t) \sim N(0, ct)$.

THEOREM 5.6.– Under Kac's condition, we have
\[
\lim_{\lambda \to \infty} \frac{H_1(t, 0)}{\sqrt{\lambda}} = \lim_{v \to \infty} \frac{c\, H_1(t, 0)}{v} = \lim_{\varepsilon \to 0} \varepsilon H_1(t, 0) = \sqrt{2t/\pi}.
\]

PROOF.– By using the Laplace transform of the general renewal function (Cox 1970, p. 56, formula 2), it follows from equation [5.59] that the Laplace transform $\hat H_1(s, 0) = L(H_1(t, 0), t)$ of $H_1(t, 0)$ with respect to $t$ has the form
\[
\hat H_1(s, 0) = \frac{1}{s}\sum_{k=0}^{\infty}\left(\frac{s + \lambda - \sqrt{s^2 + 2\lambda s}}{\lambda}\right)^k = \frac{\lambda}{s\sqrt{s^2 + 2\lambda s} - s^2}.
\]
It is easily verified that
\[
E[C_1(t, 0)] = L^{-1}\left(\frac{\lambda}{s\sqrt{s^2 + 2\lambda s} - s^2}\right) = \frac{1}{2} + \left(\left(\frac{1}{2} + \lambda t\right) I_0(\lambda t) + \lambda t\, I_1(\lambda t)\right) e^{-\lambda t}.
\]
Hence,
\[
\lim_{\lambda \to \infty} \frac{E[C_1(t, 0)]}{\sqrt{\lambda}} = \sqrt{2t/\pi}.
\]
Taking into account $\lambda = \varepsilon^{-2}$, $v = c\varepsilon^{-1}$, we have
\[
H_1(t, 0) \sim \sqrt{2t/\pi}\,\sqrt{\lambda} = \frac{v}{c}\,\sqrt{2t/\pi}, \quad \text{as } v \to \infty.
\]
Therefore, for a fixed $t > 0$, the number of level crossings by the telegraph process under Kac's condition goes to infinity as the velocity $v$ does.
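The closed form for $E[C_1(t, 0)]$ can be tested numerically. Since $\hat H_1(s, 0)\left(1 - \varphi_1(s, 0)\right) = 1/s$ by the geometric series, the function $H(t) = E[C_1(t, 0)]$ must satisfy the renewal identity $H(t) = 1 + \int_0^t f_1(u)\, H(t-u)\, du$ with $f_1(u) = e^{-\lambda u} I_1(\lambda u)/u$, and $H(t)/\sqrt{\lambda}$ must approach $\sqrt{2t/\pi}$ for large $\lambda$. A SciPy sketch with illustrative parameter values (scipy.special.ive(n, x) = $e^{-x} I_n(x)$ avoids overflow for large arguments):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive  # ive(n, x) = exp(-x) * I_n(x)

lam = 2.0  # illustrative switching intensity

def f1(u):
    """Interarrival density f_1(u, 0) = e^{-lam*u} I_1(lam*u) / u (limit lam/2 at 0)."""
    return lam / 2.0 if u == 0.0 else ive(1, lam * u) / u

def H(t):
    """Closed form for E[C_1(t, 0)]."""
    x = lam * t
    return 0.5 + ((0.5 + x) * ive(0, x) + x * ive(1, x))

# Renewal identity H(t) = 1 + (f1 * H)(t)
t = 3.0
conv, _ = quad(lambda u: f1(u) * H(t - u), 0.0, t, limit=200)
print(H(t), 1.0 + conv)        # the two values agree

# Kac limit: H(t)/sqrt(lam) -> sqrt(2t/pi); the closed form depends on lam*t only
x_big = 1.0e4                  # here lam * t = 1e4
H_big = 0.5 + ((0.5 + x_big) * ive(0, x_big) + x_big * ive(1, x_big))
print(H_big / np.sqrt(2.0 * x_big / np.pi))  # close to 1
```

The renewal-equation check validates the Bessel argument $\lambda t$ in the closed form, which is exactly what makes the limit $\sqrt{2t/\pi}$ come out.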


Denote
\[
F_1(x) = \int_0^x f_1(t, 0)\, dt = \int_0^x \frac{e^{-\lambda t}}{t}\, I_1(\lambda t)\, dt.
\]

THEOREM 5.7.–
\[
P\left(\left(1 - \int_0^{\lambda x} \frac{e^{-u}}{u}\, I_1(u)\, du\right) C_1(x, 0) \ge \frac{1}{\sqrt{y}}\right) \to G_{1/2}(y) \quad \text{as } \lambda \to \infty,
\]
where $G_{1/2}(y)$ is the one-sided stable distribution satisfying the condition $y^{1/2}\left(1 - G_{1/2}(y)\right) \to \frac{1}{\sqrt{\pi}}$ as $y \to \infty$ (Feller 1971).

PROOF.– It is easily seen that
\[
\lim_{x \to \infty}\left(1 - F_1(x)\right)\sqrt{x} = \lim_{x \to \infty}\left(1 - \int_0^x \frac{e^{-\lambda t}}{t}\, I_1(\lambda t)\, dt\right)\sqrt{x} = 2\lim_{x \to \infty} e^{-\lambda x}\, x^{1/2} I_1(\lambda x) = \sqrt{\frac{2}{\pi\lambda}}.
\]
Therefore, $L(x) = \left(1 - F_1(x)\right)\sqrt{x}$ is slowly varying and
\[
1 - F_1(x) = x^{-1/2} L(x).
\]
Since
\[
F_1(x) = \int_0^x \frac{e^{-\lambda t}}{t}\, I_1(\lambda t)\, dt = \int_0^{\lambda x} \frac{e^{-u}}{u}\, I_1(u)\, du
\]
and $\lambda \to \infty$, we have $1 - F_1(x) = 1 - F(s) = s^{-1/2} L(s)$, where, for a fixed $x > 0$, $s = \lambda x \to \infty$. By using the result in (Feller 1971, Chapter XI.5, p. 373), we obtain
\[
P\left(\left(1 - \int_0^{\lambda x} \frac{e^{-u}}{u}\, I_1(u)\, du\right) C_1(x, 0) \ge \frac{1}{\sqrt{y}}\right) \to G_{1/2}(y) \quad \text{as } \lambda \to \infty,
\]
where $G_{1/2}(y)$ is the one-sided stable distribution satisfying the condition $y^{1/2}\left(1 - G_{1/2}(y)\right) \to \frac{1}{\sqrt{\pi}}$ as $y \to \infty$. Therefore, under Kac's condition, the number of level crossings $C_1(x, 0)$ by the telegraph process is of the order of magnitude $\sqrt{\lambda} = v/c$.

References

Albeverio, S., Korolyuk, V., Samoilenko, I. (2009). Asymptotic expansion of semi-Markov random evolutions. Stochastics, 81(5), 477–502. Anick, D., Mitra, D., Sondhi, M.M. (1982). Stochastic theory of a data-handling system. Bell System Technical Journal, 61(8), 1871–1894. Anisimov, V.V. (1977). Switched processes. Kibernetika, 4, 111–115. Arratia, R.A. (1979). Coalescing Brownian motions on the line. PhD Thesis, University of Wisconsin, Madison. Balakrishnan, V., Van den Broeck, C., Hänggi, P. (1988). First-passage of non-Markovian processes: The case of a reflecting boundary. Physical Review A, 38, 4213–4222. Barlow, R.E., Proschan, F., Hunter, L. (1965). Mathematical Theory of Reliability. John Wiley & Sons, New York. Bateman, H. (1954). Tables of Integral Transforms, McGraw Hill, New York. Beghin, L. and Orsingher, E. (2010a). Moving randomly amid scattered obstacles. Stochastics, 82, 201–229. Beghin, L. and Orsingher, E. (2010b). Poisson-type processes governed by fractional and higher-order recursive differential equations. Electronic Journal of Probability, 15, 684–709. Bertoin, J. (2002). Self-attracting Poisson clouds in an expanding universe. Communications in Mathematical Physics, 232, 59–81. Borovkov, A.A. and Mogulsky, A.A. (1980). On large deviation probabilities in topological spaces. II. Siberian Mathematical Journal, 21(5), 12–26. Buzacott, J.A. (1967). Prediction of the efficiency of production systems without internal storage. International Journal of Production Research, 6(3), 173–188. Cahoy, D.O. (2007). Fractional Poisson process in terms of α-stable densities. PhD Thesis, Case Western University, Cleveland. Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.


Cherkesov, G.N. (1974). Reliability of Engineering Systems with Temporal Redundancy. Sovetskoe Radio, Moscow.
Corlat, A.N., Kuznetsov, V.N., Novikov, M.M., Turbin, A.F. (1991). Semi-Markov Models of Renewal and Queuing Systems. Shtiintsa, Chisinau.
Cox, D.R. (1970). Renewal Theory. Methuen & Co., London.
Cox, J. and Ross, R. (1976). Valuation of options for alternative stochastic processes. Journal of Financial Economics, 3(1–2), 145–166.
Dawson, D. (1993). Measure-valued Markov Processes. Springer-Verlag, Berlin.
Dawson, D.A. and Gärtner, J. (1994). Multilevel large deviations and interacting diffusions. Probability Theory and Related Fields, 98, 423–487.
Da Fonseca, J., Ielpo, F., Grasselli, M. (2009). Hedging (co)variance risk with variance swaps. Available at: http://ssrn.com/abstract=1341811.
De Gregorio, A. (2012). On random flights with non-uniformly distributed directions. Journal of Statistical Physics, 147(2), 382–411.
De Gregorio, A. (2014). A family of random walks with generalized Dirichlet steps. Journal of Mathematical Physics, 55, 023302.
De Gregorio, A. and Orsingher, E. (2012). Flying randomly in Rd with Dirichlet displacements. Stochastic Processes and their Applications, 122, 676–713.
De Gregorio, A., Orsingher, E., Sakhno, L. (2005). Motions with finite velocity analyzed with order statistics and differential equations. Theory of Probability and Mathematical Statistics, 71, 63–79.
Di Crescenzo, A. (2001). On random motions with velocities alternating at Erlang-distributed random times. Advances in Applied Probability, 33, 690–701.
Di Crescenzo, A. and Martinucci, B. (2010). A damped telegraph random process with logistic stationary distribution. Journal of Applied Probability, 47(1), 84–96.
Di Crescenzo, A. and Martinucci, B. (2013). On the generalized telegraph process with deterministic jumps. Methodology and Computing in Applied Probability, 15(1), 215–235.
Di Crescenzo, A. and Meoli, A. (2018).
On a jump-telegraph process driven by an alternating fractional Poisson process. Journal of Applied Probability, 55(1), 94–111.
Di Crescenzo, A., Iuliano, A., Martinucci, B., Zacks, S. (2013). Generalized telegraph process with random jumps. Journal of Applied Probability, 50(2), 450–463.
Dineen, S. (2005). Probability Theory in Finance: A Mathematical Guide to the Black-Scholes Formula. American Mathematical Society, Rhode Island.


Donsker, M.D. and Varadhan, S.R.S. (1975). Large deviations for Markov processes and the asymptotic evaluation of certain Markov process expectations for large times. In Probabilistic Methods in Differential Equations, Lecture Notes in Mathematics, Volume 451, Pinsky, M.A. (ed.). Springer, Berlin, Heidelberg. Available at: https://doi.org/10.1007/BFb0068580.
Dorogovtsev, A.A. (2007a). Fourier-Wiener transforms of functionals on Arratia flow. Ukrainian Mathematical Bulletin, 4(3), 333–354.
Dorogovtsev, A.A. (2007b). Measure-valued stochastic processes and flows. Institute of Mathematics NAS of Ukraine, Kyiv.
Dorogovtsev, A.A. (2010). One Brownian stochastic flow. Theory of Stochastic Processes, 10, 21–25.
D'Ovidio, M., Orsingher, E., Toaldo, B. (2014). Fractional telegraph-type equations and hyperbolic Brownian motion. Statistics & Probability Letters, 89, 131–137.
Dynkin, E.B. (1991). Markov Processes. Springer-Verlag, Berlin, Heidelberg.
Elliott, R. and Swishchuk, A.V. (2007). Pricing options and variance swaps in Markov-modulated Brownian markets. In Hidden Markov Models in Finance, Mamon, R., Elliott, R. (eds). Springer, New York.
Elwalid, A.I. and Mitra, D. (1994). Statistical multiplexing with loss priorities in rate-based congestion control of high-speed networks. IEEE Transactions on Communications, 42(11), 2989–3002.
Feller, W. (1968). An Introduction to Probability Theory and Its Applications, Volume 1, 3rd edition. John Wiley & Sons, New York.
Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Volume 2, 2nd edition. John Wiley & Sons, New York.
Foss, S. (2007). On exact asymptotics for a stationary sojourn time distribution in a tandem of queues for a class of light-tailed distributions. Problems of Information Transmission, 43(4), 93–108.
Franceschetti, M. (2007). When a random walk of fixed length can lead uniformly anywhere inside a hypersphere. Journal of Theoretical Probability, 20, 813–823.
Garra, R. and Orsingher, E. (2014).
Random flights governed by Klein-Gordon-type partial differential equations. Stochastic Processes and their Applications, 124, 2171–2187.
Gikhman, I.I. and Skorokhod, A.V. (1973). The Theory of Stochastic Processes II. Nauka, Moscow.
Giraud, C. (2001). Clustering in a self-gravitating one-dimensional gas at zero temperature. Journal of Statistical Physics, 105, 585–604.
Giraud, C. (2005). Gravitational clustering and additive coalescence. Stochastic Processes and their Applications, 115, 1302–1322.


Girko, V.L. (1982). Limit theorems for products of random matrices with positive elements. Teor. Veroyatnost. i Primenen., 27, 777–783. Gnedenko, B. and Ushakov, I. (1995). Probabilistic Reliability Engineering. John Wiley & Sons, New York. Goldstein, S. (1951). On diffusion by discontinuous movements and on the telegraph equation. The Quarterly Journal of Mechanics and Applied Mathematics, 4, 129–156. Gorostiza, L.G. (1973). The central limit theorem for random motions of d-dimensional Euclidean space. The Annals of Probability, 1(4), 603–612. Gorostiza, L.G. and Griego, R.J. (1979). Strong approximations of diffusion processes by transport processes. Kyoto Journal of Mathematics, 19(1), 91–103. Gradshteyn, I.S. and Ryzhik, I.M. (1980). Tables of Integrals, Sums, Series and Products. Academic Press, San Diego. Griego, R. and Hersh, R. (1969). Random evolutions, Markov chains, and systems of partial differential equations. Proceedings of the National Academy of Sciences, 62, 305–308. Griego, R. and Hersh, R. (1971). Theory of random evolutions and applications to partial differential equations. Transactions of the American Mathematical Society, 156, 405–418. Griego, R. and Korzeniowski, A. (1989). On principal eigenvalues for random evolutions. Stochastic Analysis and Applications, 7, 35–45. Hersh, R. (1974). Random evolutions: A survey of results and problems. Rocky Mountain Journal of Mathematics, 4, 443–477. Hersh, R. (2003). The birth of random evolutions. The Mathematical Intelligencer, 25(1), 53–60. Hersh, R. and Papanicolaou, G. (1972). Non-commuting random evolutions, and an operator-valued Feynman-Kac formula. Communications on Pure and Applied Mathematics, 30, 337–367. Hersh, R. and Pinsky, M. (1972). Random evolutions are asymptotically Gaussian. Communications on Pure and Applied Mathematics, 25, 33–44. Iksanov, A.M. (2006). On the rate of convergence of a regular martingale related to the branching random walk. 
Ukrainian Mathematical Journal, 58(3), 326–342.
Iksanov, A.M. and Rösler, U. (2006). Some moment results about the limit of a martingale related to the supercritical branching random walk and perpetuities. Ukrainian Mathematical Journal, 58(4), 451–471.
Jacod, J. and Shiryaev, A.N. (2010). Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin.
Kabanov, Y. and Pergamenshchikov, S. (2003). Two-Scale Stochastic Systems: Asymptotic Analysis and Control. Springer-Verlag, Berlin.


Kac, M. (1951). On some connections between probability theory and differential and integral equations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 12, 189–215.
Kac, M. (1974). A stochastic model related to the telegrapher's equation. Rocky Mountain Journal of Mathematics, 4, 497–509.
Kartashov, N.V. (1985). Criteria for uniform ergodicity and strong stability of Markov chains with a common phase space. Theory of Probability and Mathematical Statistics, 30, 71–89.
Kato, T. (1980). Perturbation Theory for Linear Operators, 2nd edition. Springer-Verlag, Berlin.
Kemeny, J.G. and Snell, J.L. (1960). Finite Markov Chains. Van Nostrand, New York.
Kertz, R. (1978). Random evolutions with underlying semi-Markov processes. Publications of the Research Institute for Mathematical Sciences, 14(3), 589–614.
Konarovskii, V.V. (2011). On infinite system of diffusing particles with coalescing. Theory of Probability and Its Applications, 55(1), 134–144.
Korn, G.A. and Korn, T.M. (2000). Mathematical Handbook for Scientists and Engineers, 2nd edition. Dover Publications, New York.
Korolyuk, D.V. and Silvestrov, D.S. (1983). Time of reaching asymptotically receding domains for ergodic Markov chains [in Russian, with an English summary]. , 28(2), 410–420.
Kornfeld, I.P., Sinai, Ya.G., Fomin, S.V. (1982). Ergodic Theory. Springer-Verlag, New York.
Korolyuk, V.S. (1987). Evolution of a semi-Markov medium [in Russian]. Cybernetics, 5, 106–109.
Korolyuk, V.S. (1993). Stochastic Models of Systems [in Ukrainian]. Lybid, Kyiv.
Korolyuk, V.S. and Borovskikh, Y.V. (1981). Analytical Problems for Asymptotics of Probability Distributions [in Russian]. Naukova Dumka, Kyiv.
Korolyuk, V.S. and Korolyuk, V.V. (1999). Stochastic Models of Systems. Kluwer Academic, Dordrecht.
Korolyuk, V.S. and Limnios, N. (2004). Average and diffusion approximation for evolutionary systems in an asymptotic split phase space. The Annals of Applied Probability, 14(1), 489–516.
Korolyuk, V.S. and Limnios, N. (2005). Stochastic Systems in Merging Phase Space. World Scientific Publishing Company, New Jersey.
Korolyuk, V.S. and Limnios, N. (2009). Poisson approximation of processes with locally independent increments with Markov switching. Theory of Stochastic Processes, 15(1), 40–48.
Korolyuk, V.S. and Swishchuk, A.V. (1986). The central limit theorem for semi-Markov random evolutions. Ukrainian Mathematical Journal, 38, 286–289.


Korolyuk, V.S. and Swishchuk, A.V. (1995a). Evolution of Systems in Random Media. Chapman & Hall/CRC Press, London.
Korolyuk, V.S. and Swishchuk, A.V. (1995b). Semi-Markov Random Evolution. Kluwer Academic Publishers, Dordrecht.
Korolyuk, V.S. and Tadjiev, A. (1977). Asymptotic expansion for the distribution of absorption time of a semi-Markov process. Reports of the USSR Academy of Sciences, 12, 133–135.
Korolyuk, V.S. and Turbin, A.F. (1976). Semi-Markov Processes and their Applications. Naukova Dumka, Kyiv.
Korolyuk, V.S. and Turbin, A.F. (1982). Markov Renewal Processes in Problems of Systems Reliability [in Russian]. Naukova Dumka, Kyiv.
Korolyuk, V.S. and Turbin, A.F. (1993). Mathematical Foundations of the State Lumping of Large Systems. Kluwer Academic Publishers, Dordrecht.
Korolyuk, V.S., Penev, I.P., Turbin, A.F. (1973). Asymptotic expansions for the distribution of the absorption time of a Markov chain. Cybernetics, 4, 133–135.
Korolyuk, V.S., Turbin, A.F., Tomusyak, A.A. (1979a). The phase merging scheme of Markov chains based on phase lumping [in Russian]. In Analytical Methods in the Theory of Probability. Naukova Dumka, Kyiv.
Korolyuk, V.S., Turbin, A.F., Tomusyak, A.A. (1979b). Residence time of semi-Markov process in expanding set of states [in Russian]. In Analytical Methods in the Theory of Probability. Naukova Dumka, Kyiv.
Kovalenko, I.N. (1980). Analysis of Rare Events in Evaluation of System Effectiveness and Reliability. Sovetskoe Radio, Moscow.
Kovalenko, I.N., Kuznetsov, N.Yu., Shurenkov, V.M. (1983). Random Processes: Manual. Naukova Dumka, Kyiv.
Krein, S.G. (1971). Linear Differential Equations in Banach Space. Translations of Mathematical Monographs, 29, American Mathematical Society, Rhode Island.
Kulkarni, V.G. (1997). Fluid models for single buffer systems. In Frontiers in Queueing: Models and Applications in Science and Engineering, Dshalalow, J.H. (ed.). CRC Press, Florida.
Lachal, A. (2006). Cyclic random motions in Rd-space with n directions. ESAIM: Probability and Statistics, 10, 277–316.
Le Jan, Y. and Raimond, O. (2004). Flows, coalescence and noise. The Annals of Probability, 32(2), 1247–1315.
Le Caër, G. (2010). A Pearson-Dirichlet random walk. Journal of Statistical Physics, 140, 728–751.
Le Caër, G. (2011). A new family of solvable Pearson-Dirichlet random walks. Journal of Statistical Physics, 144, 23–45.


Letac, G. and Piccioni, M. (2014). Dirichlet random walks. Journal of Applied Probability, 51, 1081–1099. Lifshits, M. and Shi, Z. (2005). Aggregation rates in one-dimensional stochastic systems with adhesion and gravitation. The Annals of Probability, 33(1), 53–81. Lindvall, T. (2002). Lectures on the Coupling Method. Dover Publications, New York. López, O. and Ratanov, N. (2012). Option pricing under jump-telegraph model with random jumps. Journal of Applied Probability, 49(3), 838–849. Maglaras, C. and Zeevi, A. (2004), Diffusion approximation for a multiclass Markovian service system with “guaranteed” and “best-effort” service level. Mathematics of Operations Research, 9(4), 786–813. Malathornas, J.P., Perkins, J.D., Smith, R.L. (1983). The availability of a system of two unreliable machines connected by an intermediate storage tank. AIIE Transactions, 15(3), 195–201. Masoliver, J., Porra, J.M., Weiss, G.H. (1993). Solution to the telegrapher’s equation in the presence of reflecting and partly reflecting boundaries. Physical Review, 48(2), 939–944. Mel’nichenko, I.P. and Plaksa, S.A. (2008). Commutative Algebras and Spatial Potential Fields. Institute of Mathematics NAS of Ukraine, Kyiv. Meyn, S.P. and Tweedie, R.L. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, New York. Mitra, D. (1998). Stochastic theory of a fluid model of producers and consumers coupled by a buffer. Advances in Applied Probability, 20, 646–676. Molchanov, S.A. and Yarovaya, E.B. (2013). Large deviations for a symmetric branching random walk on a multidimensional lattice. Proceedings of the Steklov Institute of Mathematics, 282, 186–201. Available at: https://doi.org/10.1134/S0081543813060163. Nischenko, I.I. (2001). On asymptotic representation normalizing multiplier for random matrix-valued evolution. Theory of Probability and Mathematical Statistics, 64, 129–135. Novak, S.Y. (2007). Measures of financial risks and market crashes. 
Theory of Stochastic Processes, 13(29), 182–193. Novak, S.Y. (2011). Extreme Value Methods with Applications to Finance. Chapman & Hall/CRC Press, London. Nummelin, E. (1984). General Irreducible Markov Chain and the Non-negative Operators. Cambridge University Press, New York. Orsingher, E. (1985). Hyperbolic equations arising in random models. Stochastic Processes and their Applications, 21, 93–106. Orsingher, E. and Beghin, L. (2009). Fractional diffusion equations and processes with randomly-varying time. The Annals of Probability, 37(1), 206–249.


Orsingher, E. and De Gregorio, A. (2007). Random flights in higher spaces. Journal of Theoretical Probability, 20, 769–806. Orsingher, E. and Ratanov, N. (2002). Planar random motions with drift. J. Appl. Math. Stoch., 15, 205–221. Orsingher, E. and Somella, A.M. (2004). A cyclic random motion in R3 with four directions and finite velocity. Stochastics and Stochastics Reports, 76, 113–133. Papanicolaou, G. (1971a). Asymptotic analysis of transport processes. Bulletin of the American Mathematical Society, 81, 330–391. Papanicolaou, G. (1971b). Motion of a particle in a random field. Journal of Mathematical Physics, 12(24), 473–489. Pinsky, M. (1991). Lectures on Random Evolutions. World Scientific, Singapore. Pogorui, A.A. (1989). Distribution of reaching time of infinitely increasing level by family of semi-Markov processes with extending phase space. Analytical Methods of Research of Evolution Stochastic Systems. Institute of Mathematics of NAS of Ukraine, Kyiv. Pogorui, A.A. (1990), Distribution of reaching time of infinitely increasing level by diffusion process on line. Asymptotic and Applied Problems of Stochastic Evolution Theory. Institute of Mathematics of NAS of Ukraine, Kyiv. Pogorui, A.A. (1992). Asymptotic decomposition of distribution of reaching time of infinitely increasing level by semi-Markov process. Stochastic Evolutions: Theoretical and Applied Problems. Institute of Mathematics of NAS of Ukraine, Kyiv. Pogorui, A.A. (1994). Asymptotic inequalities for the distribution of the time of stay of a semi-Markov process in an expanding set of states. Ukrainian Mathematical Journal, 46(11), 1757–1762. Pogorui, A.A. (2003). Estimation of the stationary efficiency of a two-phase system with two storages. Journal of Automation and Information Sciences, 35, 16–23. Pogorui, A.A. (2004a). Estimation of the efficiency of a data transmission line with feedback. Journal of Automation and Information Sciences, 36, 44–50. Pogorui, A.A. (2004b). 
Reliability of two-phase transfer lines in parallel in Markov medium. Int. Seminar Analisis: Norte-Sur, Mexico, Cinvestav, National Polytechnic Institute, 96–97. Pogorui, A.A. (2005). Stationary distribution of the random shift process with delaying in reflecting boundaries. Scientific Journal of National Pedagogical Dragomanov University, Series 1, 6, 168–172. Pogorui, A.A. (2006). Stationary distribution of a process of random semi-Markov evolution with delaying screens in the case of balance. Ukrainian Mathematical Journal, 58(3), 430–437. Pogorui, A.A. (2007). Hyperholomorphic functions in commutative algebras. Complex Variables and Elliptic Equations, 52(12), 1155–1159.


Pogorui, A.A. (2009a). Asymptotic expansion for distribution of Markovian random motion. Random Operators and Stochastic Equations, 17(2), 189–196. Pogorui, A.A. (2009b). Stationary distributions of fading evolutions. Ukrainian Mathematical Journal, 61(3), 425–431. Pogorui, A.A. (2010a). Asymptotic analysis for phase averaging of transport process. Ukrainian Mathematical Journal, 62(2), 190–198. Pogorui, A.A. (2010b). Fading evolution in multidimensional spaces. Ukrainian Mathematical Journal, 62(11), 1577–1582. Pogorui, A.A. (2011a). The distribution of random evolution in Erlang semi-Markov media. Theory of Stochastic Processes, 17(33), 90–99. Pogorui, A.A. (2011b). Estimation of stationary productivity of one-phase system with a storage. Markov Processes and Related Fields, 17(2), 305–314. Pogorui, A.A. (2012a). Evolution in multidimensional spaces. Random Operators and Stochastic Equations, 20(2), 135–141. Pogorui, A.A. (2012b). System of interactive particles with Markovian switching. Theory of Stochastic Processes, 18(34–2), 83–95. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2005a). One-dimensional semi-Markov evolution with general Erlang sojourn times. Random Operators and Stochastic Equations, 13(4), 399–405. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2005b). Semi-Markov evolution with general Erlang sojourn times. Int. Seminar: Analisis: Norte-Sur, Mexico, Cinvestav, National Polytechnic Institute, 54–55. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2006). Limiting distribution of random motion in an n-dimensional parallelepiped. Random Operators and Stochastic Equations, 14(4), 385–392. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2008). Evolution process as an alternative to diffusion process and Black-Scholes formula. 5th Conference in Actuarial Science & Finance on Samos, 4–7, Samos, Greece. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2009a). Evolution process as an alternative to diffusion process and Black-Scholes formula. 
Random Operators and Stochastic Equations, 17(1), 61–68. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2009b). Limiting distribution of fading evolution in some semi-Markov media. Ukrainian Mathematical Journal, 61(12), 1720–1724. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2010a). Asymptotic expansion for transport processes in semi-Markov media. Theory of Probability and Mathematical Statistics, 83, 127–134. Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2010b). Stationary distribution of random motion with delay in reflecting boundaries. Applied Mathematics, 1(1), 24–28.


Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2011a). Isotropic random motion at finite speed with k-Erlang distributed direction alternations. Journal of Statistical Physics, 145(1), 102–112.
Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2011b). Multidimensional random motion with uniformly distributed changes of direction and Erlang steps. Ukrainian Mathematical Journal, 63(4), 572–577.
Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2012). Random motion with uniformly distributed directions and random velocity. Journal of Statistical Physics, 147(6), 1216–1225.
Pogorui, A.A. and Rodríguez-Dagnino, R.M. (2013). Random motion with gamma steps in higher dimensions. Statistics & Probability Letters, 83, 1638–1643.
Pogorui, A.A. and Turbin, A.F. (2002). Estimation of stationary efficiency of a production line with two unreliable aggregates. Cybernetics and Systems Analysis, 38, 823–829.
Pogorui, A.A., Rodríguez-Dagnino, R.M., Shapiro, M. (2014). Solutions for PDEs with constant coefficients and derivability of functions ranged in commutative algebras. Mathematical Methods in the Applied Sciences, 37(17), 2799–2810.
Qian, H., Raymond, G.M., Bassingthwaighte, J.B. (1998). On two-dimensional fractional Brownian motion and fractional Brownian random field. Journal of Physics A: Mathematical and General, 31, 1527–1535.
Ratanov, N. (2007). A jump telegraph model for option pricing. Quantitative Finance, 7(5), 575–583.
Ratanov, N. (2010). Option pricing model based on a Markov-modulated diffusion with jumps. The Brazilian Journal of Probability and Statistics, 24, 413–431.
Ratanov, N. and Melnikov, A. (2008). On financial markets based on telegraph processes. Stochastics, 80, 247–268.
Reed, M. and Simon, B. (1972).
Methods of Modern Mathematical Physics, Vol. 1. Academic Press, New York. Reibman, A., Smith, R., Trivedi, K.S. (1989). Markov and Markov reward model transient analysis: An overview of numerical approaches. European Journal of Operational Research, 40, 257–267. Rodríguez-Said, R.D., Pogorui, A.A., Rodríguez-Dagnino, R.M. (2007). Stationary probability distribution of a system with n equal customers with bursty demands connected to a single buffer. Random Operators and Stochastic Equations, 15(2), 181–204.


Rodríguez-Said, R.D., Pogorui, A.A., Rodríguez-Dagnino, R.M. (2008). Stationary effectiveness of an information server with a single buffer and bursty demands of two different customers. Stochastic Models, 24, 246–269. Samoilenko, I.V. (2001). Markovian random evolution in Rn . Random Operators and Stochastic Equations, 9, 247–257. Samoilenko, I.V. (2002). Fading Markov random evolution. Ukrainian Mathematical Journal, 54(3), 448–459. Samoilenko, I.V. (2005). Asymptotic expansion for the functional of markovian evolution in Rd in the circuit of diffusion approximation. Journal of Applied Mathematics and Stochastic Analysis, 3, 247–257. Sato, K. (1971). Potential operators for Markov processes. Proceedings of the Sixth Berkeley Symposium on Math, Statistics and Probability, 193–211, Berkeley, CA. Semenov, A.T. (2008). Asymptotic expansions of the ruin probability for a generalized Poisson process. Bulletin of Tomsk State University, Mathematics and Mechanics, 3(4), 230–31. Sericola, B. (1998). Transient analysis of stochastic fluid models. Performance Evaluation, 1(32), 245–263. Sevastyanov, B.A. (1962). The effect of the bunker capacity on idle mean time of an automatic machine tool line problem [in Russian]. , 7(4), 11–24. Sevastyanov, B.A. (1971). Branching Processes [in Russian]. Nauka, Moscow. Shurenkov, V.M. (1986). Markov intervention of chance and limit theorems. Mathematics of the USSR-Sbornik, 54, 1. Shurenkov, V.M. (1989). Ergodic Markov Processes [in Russian]. Nauka, Moscow. Silvestrov, D.S. (1980). Semi-Markov Processes with a Discrete State Space. Library for an Engineer in Reliability. Sovetskoe Radio, Moscow. Silvestrov, D.S. (2004). Limit Theorems for Randomly Stopped Stochastic Processes (Probability and Its Applications). Springer-Verlag, London. Silvestrov, D.S. (2007a). Asymptotic expansions for distributions of the surplus prior and at the time of ruin. Theory of Stochastic Processes, 13(29), 183–88. Silvestrov, D.S. (2007b). 
Asymptotic expansions for quasi-stationary distributions of nonlinearly perturbed semi-Markov processes. Theory of Stochastic Processes, 13(29), 267–271. Sinai, Y. G. (1992). Distribution of some functionals of the integral of a random walk. Theoretical and Mathematical Physics, 90, 219–241. Skorokhod, A.V. (1989). Asymptotic Methods in the Theory of Stochastic Differential Equations. American Mathematical Society, Rhode Island. Sobolev, S.L. (1991). Some Applications of Functional Analysis in Mathematical Physics. American Mathematical Society, Rhode Island.


Soloviev, A.D. (1993). Asymptotic methods for highly reliable repairable systems. In Handbook of Reliability Engineering, Ushakov, I.A., Harrison, R. (eds). John Wiley & Sons, New York. Stadje, W. (2007). The exact probability distribution of a two-dimensional random walk. Journal of Statistical Physics, 56, 207–216. Stadje, W. and Zacks, S. (2004). Telegraph processes with random velocities. Journal of Applied Probability, 41, 665–678. Stroock, D.W. and Varadhan, S.R.S. (1969). Diffusion processes with continuous coefficients. Communications on Pure and Applied Mathematics, 22(3), 345–400. Stroock, D.W. and Varadhan, S.R.S. (1979). Multidimensional Diffusion Processes. Springer-Verlag, Berlin. Sviridenko, M.N. (1989). Martingale approach to limit theorems for semi-Markov processes. Theory of Probability and its Applications, 34(3), 540–545. Swishchuk, A.V. (1989). Weak convergence of semi-Markov random evolutions in an averaging scheme (martingale approach). Ukrainian Mathematical Journal, 41(3), 1450–1456. Swishchuk, A.V. (1997). Random Evolutions and their Applications. Kluwer Academic Publishers, Dordrecht. Swishchuk, A.V. (2000). Random Evolutions and their Applications: New Trends. Kluwer Academic Publishers, Dordrecht. Swishchuk, A.V. (2004). Modeling of variance and volatility swaps for financial markets with stochastic volatilities. Willmott Magazine, 2, 64–72. Swishchuk, A.V. (2020). Inhomogeneous Random Evolutions and their Applications. CRC Press, New York. Swishchuk, A.V. and Burdeinyi, A.G. (1996). Stability of semi-Markov evolution systems and its application in financial mathematics. Ukrainian Mathematical Journal, 48(10), 1574–1591. Swishchuk, A.V. and Islam, S. (2010). The geometric Markov renewal processes with applications to finance. Stochastic Analysis and Applications, 29(4), 684–705. Turbin, A.F. (1972). The solution of some problems on perturbing the occurrence of matrices. Reports of the National Academy of Sciences of Ukraine A, 539–541. 
Turbin, A.F. (1981). Limit theorems for perturbed semigroups and Markov processes in the scheme of asymptotic phase lumping [in Russian]. Preprint no. 80.18. Institute of Mathematics, Ukrainian National Academy of Sciences, Kyiv, 133–147. Turbin, A.F. (1998). Mathematical models of one-dimensional Brownian motion as alternatives to the mathematical models of Einstein, Wiener and Levy [in Russian]. Fractal Analysis and Related Fields, 2, 47–60. Vasileva, A.B. and Butuzov, V.F. (1973). Asymptotic Expansion of Solutions of Singularly Perturbed Equations [in Russian]. Nauka, Moscow.


Vasileva, A.B. and Butuzov, V.F. (1990). Asymptotic Methods in the Theory of Singular Perturbation [in Russian]. Vysshaya Shkola, Moscow.
Ventzel, A.D. and Freydlin, M.I. (1970). On small perturbations of dynamical systems [in Russian]. Uspekhi Matematicheskikh Nauk, 25(1), 3–55.
Vinogradov, O.P. (1994). On certain asymptotic properties of waiting time in a multiserver queueing system with identical times. Theory of Probability and its Applications, 39(4), 714–718.
Vishik, M. and Lusternik, L.A. (1960). The asymptotic behavior of solutions of linear differential equations with large or quickly changing coefficients and boundary conditions. Russian Mathematical Surveys, 15, 23–91.
Vysotsky, V.V. (2008). Clustering in a stochastic model of one-dimensional gas. The Annals of Applied Probability, 18(3), 1026–1058.
Watkins, J. (1984). A central limit theorem in random evolution. The Annals of Probability, 12(2), 480–514.
Watkins, J. (1985). Limit theorems for stationary random evolutions. Stochastic Processes and their Applications, 19, 189–224.
Yeleyko, Y.I. and Zhernovyi, Y.V. (2002). Asymptotic properties of stochastic evolutions described by the solutions of the ordinary differential equations partially solved with respect to the higher derivatives. Bulletin of the Lviv University, Series: Physics and Mathematics, 60, 60–66.
Yu, P.A. (1999). The evolution of a system of particles and measure-valued processes. Theory of Stochastic Processes, 5(3–4), 188–197.
Yu, V.A. (2002). Coupling method for Markov chains under integral Doeblin type condition. Theory of Stochastic Processes, 8(24), 383–391.

Index

A, B
adjoint operator, 9, 105, 115
analogue of Dynkin’s formula for RE, 34
averaging of random evolutions, 39
Black–Scholes formula, 96

C, D
Chapman–Kolmogorov equation, 12
commutative algebras, 168
compact set, 39
continuous RE, 24
coupling method, 118
criterion of weak compactness, 37
diffusion
  approximation, 95
    of random evolutions, 42
  process, 77
    in random media, 26
discrete RE, 24

F, G, H
family of operators, 24
Fredholm operator, 8
Gateaux differentiability, 168
generalized
  Erlang environment, 159
  inverse operator, 7
geometric Markov renewal process (GMRP), 27

Goldstein–Kac model, 159, 191
hard-to-reach domain, 17

I, J, K
impulse traffic process, 25
jump RE, 24
jump telegraph process, 192
Kolmogorov backward differential equations, 97

L, M
level crossing by a telegraph process, 199
limit theorems for random evolutions, 37
Markov
  environment, 104
  processes, 10
  renewal process, 14, 24
martingale
  characterization of random evolutions, 28
  method, 37
  problem, 29
    for discontinuous RE over a jump Markov process, 30
    for discontinuous RE over a semi-Markov process, 31
    for RE over diffusion process, 33
    for RE over Wiener process, 32

Random Motions in Markov and Semi-Markov Random Environments 1: Homogeneous Random Motions and their Applications, First Edition. Anatoliy Pogorui, Anatoliy Swishchuk and Ramón M. Rodríguez-Dagnino. © ISTE Ltd 2021. Published by ISTE Ltd and John Wiley & Sons, Inc.


merged
  diffusion RE, 48
  RE, 45
monogenic functions, 168

N, O
normal deviated RE, 51
normally solvable operator, 7
operator dynamical system, 23, 25
Ornstein–Uhlenbeck process, 81

P, R
potential operator, 7, 98
projective operator, 8
proper projector, 9, 11
random
  evolutions, 24
  jump motion, 191
rate of convergence of RE, 53
  in averaging scheme, 53
  in diffusion approximation, 54
reducible-invertible operator, 8
resolvent, 13

S
semi-Markov
  kernel, 14
  media, 110, 114
  process, 14, 24, 61
singularly perturbed system, 93
sojourn time(s), 15, 161
stationary projector, 13
summation on a Markov chain, 26
superposition of Markov renewal processes, 21

T, W
telegraph process, 110, 159, 191
tight process, 39
weak convergence of RE, 29, 37

Summary of Volume 2

Preface
Acknowledgments
Introduction
Part 1. Higher-dimensional Random Motions and Interactive Particles
Chapter 1. Random Motions in Higher Dimensions
1.1. Random motion at finite speed with semi-Markov switching directions process
1.1.1. Erlang-K-distributed direction alternations
1.1.2. Some properties of the random walk in a semi-Markov environment and its characteristic function
1.2. Random motion with uniformly distributed directions and random velocity
1.2.1. Renewal equation for the characteristic function of isotropic motion with random velocity in a semi-Markov media
1.2.2. One-dimensional case
1.2.3. Two-dimensional case
1.2.4. Three-dimensional case
1.2.5. Four-dimensional case
1.3. The distribution of random motion at non-constant velocity in semi-Markov media
1.3.1. Renewal equation for the characteristic function
1.3.2. Two-dimensional case
1.3.3. Three-dimensional case


1.3.4. Four-dimensional case
1.4. Goldstein–Kac telegraph equations and random flights in higher dimensions
1.4.1. Preliminaries about our modeling approach
1.4.2. Two-dimensional case
1.4.3. Three-dimensional case
1.4.4. Five-dimensional case
1.5. The jump telegraph process in Rn
1.5.1. The jump telegraph process in R3
1.5.2. Conclusions and final remarks
Chapter 2. System of Interactive Particles with Markov and Semi-Markov Switching
2.1. Description of the Markov model
2.1.1. Distribution of the first meeting time of two telegraph processes
2.1.2. Estimate of the number of particle collisions
2.1.3. Free path times of a family of particles
2.1.4. Estimation of the number of particle collisions for systems with boundaries
2.1.5. Estimation of the number of particle collisions for systems without boundaries
2.2. Interaction of particles governed by generalized integrated telegraph processes: a semi-Markov case
2.2.1. Laplace transform of the distribution of the first collision of two particles
2.2.2. Semi-Markov case
2.2.3. Distribution of the first collision of two particles with finite expectation
Part 2. Financial Applications
Chapter 3. Asymptotic Estimation for Application of the Telegraph Process as an Alternative to the Diffusion Process in the Black–Scholes Formula
3.1. Asymptotic expansion for the singularly perturbed random evolution in Markov media in the case of disbalance
3.2. Application: Black–Scholes formula
Chapter 4. Variance, Volatility, Covariance and Correlation Swaps for Financial Markets with Markov-modulated Volatilities
4.1. Volatility derivatives
4.1.1. Types of volatilities
4.1.2. Models for volatilities


4.1.3. Variance and volatility swaps
4.1.4. Covariance and correlation swaps
4.1.5. A brief literature review
4.2. Martingale representation of a Markov process
4.3. Variance and volatility swaps for financial markets with Markov-modulated stochastic volatilities
4.3.1. Pricing variance swaps
4.3.2. Pricing volatility swaps
4.4. Covariance and correlation swaps for two risky assets for financial markets with Markov-modulated stochastic volatilities
4.4.1. Pricing covariance swaps
4.4.2. Pricing correlation swaps
4.4.3. Correlation swap made simple
4.5. Example: variance, volatility, covariance and correlation swaps for stochastic volatility driven by two state continuous Markov chain
4.6. Numerical example
4.6.1. S&P 500: variance and volatility swaps
4.6.2. S&P 500 and NASDAQ-100: covariance and correlation swaps
4.7. Appendix 1
4.7.1. Correlation swaps: first-order correction
Chapter 5. Modeling and Pricing of Variance, Volatility, Covariance and Correlation Swaps for Financial Markets with Semi-Markov Volatilities
5.1. Introduction
5.2. Martingale representation of semi-Markov processes
5.3. Variance and volatility swaps for financial markets with semi-Markov stochastic volatilities
5.3.1. Pricing of variance swaps
5.3.2. Pricing of volatility swaps
5.3.3. Numerical evaluation of variance and volatility swaps with semi-Markov volatility
5.4. Covariance and correlation swaps for two risky assets in financial markets with semi-Markov stochastic volatilities
5.4.1. Pricing of covariance swaps
5.4.2. Pricing of correlation swaps
5.5. Numerical evaluation of covariance and correlation swaps with semi-Markov stochastic volatility


5.6. Appendices
5.6.1. Appendix 1. Realized correlation: first-order correction
5.6.2. Appendix 2. Discussions of some extensions
References
Index

Other titles from ISTE in Mathematics and Statistics

2020

BARBU Vlad Stefan, VERGNE Nicolas
Statistical Topics and Stochastic Models for Dependent Data with Applications

CHABANYUK Yaroslav, NIKITIN Anatolii, KHIMKA Uliana
Asymptotic Analyses for Complex Evolutionary Systems with Markov and Semi-Markov Switching Using Approximation Schemes

KOROLIOUK Dmitri
Dynamics of Statistical Experiments

MANOU-ABI Solym Mawaki, DABO-NIANG Sophie, SALONE Jean-Jacques
Mathematical Modeling of Random and Deterministic Phenomena

2019

BANNA Oksana, MISHURA Yuliya, RALCHENKO Kostiantyn, SHKLYAR Sergiy
Fractional Brownian Motion: Approximations and Projections

GANA Kamel, BROC Guillaume
Structural Equation Modeling with lavaan

KUKUSH Alexander
Gaussian Measures in Hilbert Space: Construction and Properties

LUZ Maksym, MOKLYACHUK Mikhail
Estimation of Stochastic Processes with Stationary Increments and Cointegrated Sequences

MICHELITSCH Thomas, PÉREZ RIASCOS Alejandro, COLLET Bernard, NOWAKOWSKI Andrzej, NICOLLEAU Franck
Fractional Dynamics on Networks and Lattices

VOTSI Irene, LIMNIOS Nikolaos, PAPADIMITRIOU Eleftheria, TSAKLIDIS George
Earthquake Statistical Analysis through Multi-state Modeling
(Statistical Methods for Earthquakes Set – Volume 2)

2018

AZAÏS Romain, BOUGUET Florian
Statistical Inference for Piecewise-deterministic Markov Processes

IBRAHIMI Mohammed
Mergers & Acquisitions: Theory, Strategy, Finance

PARROCHIA Daniel
Mathematics and Philosophy

2017

CARONI Chrysseis
First Hitting Time Regression Models: Lifetime Data Analysis Based on Underlying Stochastic Processes
(Mathematical Models and Methods in Reliability Set – Volume 4)

CELANT Giorgio, BRONIATOWSKI Michel
Interpolation and Extrapolation Optimal Designs 2: Finite Dimensional General Models

CONSOLE Rodolfo, MURRU Maura, FALCONE Giuseppe
Earthquake Occurrence: Short- and Long-term Models and their Validation
(Statistical Methods for Earthquakes Set – Volume 1)

D’AMICO Guglielmo, DI BIASE Giuseppe, JANSSEN Jacques, MANCA Raimondo
Semi-Markov Migration Models for Credit Risk
(Stochastic Models for Insurance Set – Volume 1)

GONZÁLEZ VELASCO Miguel, del PUERTO GARCÍA Inés, YANEV George P.
Controlled Branching Processes
(Branching Processes, Branching Random Walks and Branching Particle Fields Set – Volume 2)

HARLAMOV Boris
Stochastic Analysis of Risk and Management
(Stochastic Models in Survival Analysis and Reliability Set – Volume 2)

KERSTING Götz, VATUTIN Vladimir
Discrete Time Branching Processes in Random Environment
(Branching Processes, Branching Random Walks and Branching Particle Fields Set – Volume 1)

MISHURA Yuliya, SHEVCHENKO Georgiy
Theory and Statistical Applications of Stochastic Processes

NIKULIN Mikhail, CHIMITOVA Ekaterina
Chi-squared Goodness-of-fit Tests for Censored Data
(Stochastic Models in Survival Analysis and Reliability Set – Volume 3)

SIMON Jacques
Banach, Fréchet, Hilbert and Neumann Spaces
(Analysis for PDEs Set – Volume 1)

2016

CELANT Giorgio, BRONIATOWSKI Michel
Interpolation and Extrapolation Optimal Designs 1: Polynomial Regression and Approximation Theory

CHIASSERINI Carla Fabiana, GRIBAUDO Marco, MANINI Daniele
Analytical Modeling of Wireless Communication Systems
(Stochastic Models in Computer Science and Telecommunication Networks Set – Volume 1)

GOUDON Thierry
Mathematics for Modeling and Scientific Computing

KAHLE Waltraud, MERCIER Sophie, PAROISSIN Christian
Degradation Processes in Reliability
(Mathematical Models and Methods in Reliability Set – Volume 3)

KERN Michel
Numerical Methods for Inverse Problems

RYKOV Vladimir
Reliability of Engineering Systems and Technological Risks
(Stochastic Models in Survival Analysis and Reliability Set – Volume 1)

2015

DE SAPORTA Benoîte, DUFOUR François, ZHANG Huilong
Numerical Methods for Simulation and Optimization of Piecewise Deterministic Markov Processes

DEVOLDER Pierre, JANSSEN Jacques, MANCA Raimondo
Basic Stochastic Processes

LE GAT Yves
Recurrent Event Modeling Based on the Yule Process
(Mathematical Models and Methods in Reliability Set – Volume 2)

2014

COOKE Roger M., NIEBOER Daan, MISIEWICZ Jolanta
Fat-tailed Distributions: Data, Diagnostics and Dependence
(Mathematical Models and Methods in Reliability Set – Volume 1)

MACKEVIČIUS Vigirdas
Integral and Measure: From Rather Simple to Rather Complex

PASCHOS Vangelis Th.
Combinatorial Optimization – 3-volume series – 2nd edition
Concepts of Combinatorial Optimization / Concepts and Fundamentals – Volume 1
Paradigms of Combinatorial Optimization – Volume 2
Applications of Combinatorial Optimization – Volume 3

2013

COUALLIER Vincent, GERVILLE-RÉACHE Léo, HUBER Catherine, LIMNIOS Nikolaos, MESBAH Mounir
Statistical Models and Methods for Reliability and Survival Analysis

JANSSEN Jacques, MANCA Oronzio, MANCA Raimondo
Applied Diffusion Processes from Engineering to Finance

SERICOLA Bruno
Markov Chains: Theory, Algorithms and Applications

2012

BOSQ Denis
Mathematical Statistics and Stochastic Processes

CHRISTENSEN Karl Bang, KREINER Svend, MESBAH Mounir
Rasch Models in Health

DEVOLDER Pierre, JANSSEN Jacques, MANCA Raimondo
Stochastic Methods for Pension Funds

2011

MACKEVIČIUS Vigirdas
Introduction to Stochastic Analysis: Integrals and Differential Equations

MAHJOUB Ridha
Recent Progress in Combinatorial Optimization – ISCO2010

RAYNAUD Hervé, ARROW Kenneth
Managerial Logic

2010

BAGDONAVIČIUS Vilijandas, KRUOPIS Julius, NIKULIN Mikhail
Nonparametric Tests for Censored Data

BAGDONAVIČIUS Vilijandas, KRUOPIS Julius, NIKULIN Mikhail
Nonparametric Tests for Complete Data

IOSIFESCU Marius et al.
Introduction to Stochastic Models

VASSILIOU P.-C.G.
Discrete-time Asset Pricing Models in Applied Stochastic Finance

2008

ANISIMOV Vladimir
Switching Processes in Queuing Models

FICHE Georges, HÉBUTERNE Gérard
Mathematics for Engineers

HUBER Catherine, LIMNIOS Nikolaos et al.
Mathematical Methods in Survival Analysis, Reliability and Quality of Life

JANSSEN Jacques, MANCA Raimondo, VOLPE Ernesto
Mathematical Finance

2007

HARLAMOV Boris
Continuous Semi-Markov Processes

2006

CLERC Maurice
Particle Swarm Optimization