139 22 6MB
English Pages 288 [277] Year 2023
Lecture Notes in Mathematics 2338
Kôhei Uchiyama
Potential Functions of Random Walks in ℤ with Infinite Variance Estimates and Applications
Lecture Notes in Mathematics Volume 2338
Editors-in-Chief Jean-Michel Morel, Ecole Normale Supérieure Paris-Saclay, Paris, France Bernard Teissier, IMJ-PRG, Paris, France Series Editors Karin Baur, University of Leeds, Leeds, UK Michel Brion, UGA, Grenoble, France Rupert Frank, LMU, Munich, Germany Annette Huber, Albert Ludwig University, Freiburg, Germany
Davar Khoshnevisan, The University of Utah, Salt Lake City, UT, USA Ioannis Kontoyiannis, University of Cambridge, Cambridge, UK Angela Kunoth, University of Cologne, Cologne, Germany Ariane Mézard, IMJ-PRG, Paris, France Mark Podolskij, University of Luxembourg, Esch-sur-Alzette, Luxembourg Mark Policott, Mathematics Institute, University of Warwick, Coventry, UK László Székelyhidi Germany
, Institute of Mathematics, Leipzig University, Leipzig,
Gabriele Vezzosi, UniFI, Florence, Italy Anna Wienhard, Ruprecht Karl University, Heidelberg, Germany
This series reports on new developments in all areas of mathematics and their applications - quickly, informally and at a high level. Mathematical texts analysing new developments in modelling and numerical simulation are welcome. The type of material considered for publication includes: 1. Research monographs 2. Lectures on a new field or presentations of a new angle in a classical field 3. Summer schools and intensive courses on topics of current research.
Texts which are out of print but still in demand may also be considered if they fall within these categories. The timeliness of a manuscript is sometimes more important than its form, which may be preliminary or tentative. Please visit the LNM Editorial Policy (https://drive.google.com/file/d/19XzCzDXr0FyfcV-nwVojWYTIIhCeo2LN/ view?usp=sharing) Titles from this series are indexed by Scopus, Web of Science, Mathematical Reviews, and zbMATH.
Kôhei Uchiyama
Potential Functions of Random Walks in ℤ with Infinite Variance Estimates and Applications
Kôhei Uchiyama Tokyo Institute of Technology Tokyo, Japan
ISSN 0075-8434 ISSN 1617-9692 (electronic) Lecture Notes in Mathematics ISBN 978-3-031-41019-2 ISBN 978-3-031-41020-8 (eBook) https://doi.org/10.1007/978-3-031-41020-8 Mathematics Subject Classification (2020): 60G50, 60G52, 60K05 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
This treatise consists of nine chapters and an Appendix. Chapters 3–5 are adaptations from the preprint [85]; Chapters 6–7 are edited versions of the preprints [86] and [87] and can be read independently of the preceding three chapters, except for occasional uses of the results of the latter. Chapters 8–9 and the Appendix are written for the present treatise. Chapter 1 is the introduction, where the main results of the treatise are summarised. In Chapter 2, some preliminary results used throughout the later chapters are presented. We are concerned with a one-dimensional random walk (r.w.) in the integer lattice Z with a common step distribution function 𝐹. For the most part, the r.w. will be assumed to be recurrent and have infinite variance, and we are primarily concerned with the fluctuation theory of the r.w. In the classical book [71], F. Spitzer derived many beautiful formulae, especially those that involve the potential function, 𝑎(𝑥), under very natural assumptions on 𝐹. If 𝐹 has a finite variance when 𝑎(𝑥) has a simple asymptotic form, they give explicit asymptotic expressions involving 𝑎(𝑥) for objects like hitting probabilities, Green functions, occupation time distributions, etc., of some subsets of Z. Even if the results do not explicitly involve 𝑎(𝑥), the asymptotics of 𝑎(𝑥) often play crucial roles in their derivation. If the variance of 𝐹 is infinite, the behaviour of the function 𝑎(𝑥), possibly varying in diverse ways depending on the tails of 𝐹, is known in a few particular cases. In this treatise, we shall obtain some estimates of 𝑎(𝑥) under conditions valid for a broad class of 𝐹, and with the estimates obtained, we shall address typical problems of the fluctuation theory, where the potential function is at the centre of the analysis. Many features of the r.w. in Z may be expected to be shared by more general realvalued stochastic processes with independent stationary increments, such as Lévy processes and real-valued random walks. Our principal concern in this treatise is the asymptotic form of the potential function of the r.w. and its applications. S.C. Port and C.J. Stone showed that there exists a potential operator that corresponds to 𝑎(𝑥) for recurrent r.w.’s on 𝑑-dimensional Euclidean space (necessarily 𝑑 = 1 or 2) [59] and for Lévy processes [60], and thereby developed a potential theory for these processes and established analogues of some of the basic formulae obtained v
vi
Preface
for arithmetic random walks. However, it is worth studying the problem in the simple setting of arithmetic r.w.’s. Some significant problems that require delicate analytic estimates remain open for general processes, yet they are solved for the arithmetic r.w.’s. Even when an extension to a broader class of processes is possible, one often needs to figure out additional problems that make the exposition much longer or more complicated – by nature, some problems for r.w.’s may not even exist for any Lévy process other than compound Poisson processes. By the functional limit theorem, many r.w. results yield those on a stable process and vice versa; the proof for the r.w. case is usually elementary, direct, and intuitively comprehensible, with the principal results established by Spitzer taken for granted, while the proof for the stable process often requires an advanced analytic theory. Tokyo April, 2023
Kôhei Uchiyama
Contents
1
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1
2
Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Basic Facts, Notation, and Conventions . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Known Results Involving 𝑎(𝑥) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Domain of Attraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Relative Stability, Distribution of 𝑍 and Overshoots . . . . . . . . . . . . . . 2.4.1 Relative Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.2 Overshoots and Relative Stability of 𝑍 . . . . . . . . . . . . . . . . . . 2.5 Overshoot Distributions Under (AS) . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.6 Table for Modes of the Overshoot 𝑍 (𝑅) Under (AS) . . . . . . . . . . . . .
7 7 11 13 14 14 15 16 17
3
Bounds of the Potential Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Statements of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Auxiliary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Proofs of Theorems 3.1.1 to 3.1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.1 Proof of Theorem 3.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.2 Proof of Theorem 3.1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Proof of Proposition 3.1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Proof of Theorem 3.1.6 and Proposition 3.1.7 . . . . . . . . . . . . . . . . . . . 3.6 An Example Exhibiting Irregular Behaviour of 𝑎(𝑥) . . . . . . . . . . . . .
19 19 23 31 31 34 36 39 45
4
Some Explicit Asymptotic Forms of 𝒂(𝒙) . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Relatively Stable Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Distributions in Domains of Attraction . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Asymptotics of 𝑎(𝑥) I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Asymptotics of 𝑎(𝑥) II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Asymptotics of 𝑎(𝑥 + 1) − 𝑎(𝑥) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
51 51 56 57 62 68
vii
viii
Contents
5
Applications Under 𝒎 + /𝒎 → 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 5.1 Some Asymptotic Estimates of 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] . . . . . . . . . . . . . . . . . . . 76 5.2 Relative Stability of 𝑍 and Overshoots . . . . . . . . . . . . . . . . . . . . . . . . . 79 5.3 The Two-Sided Exit Problem Under 𝑚 + /𝑚 → 0 . . . . . . . . . . . . . . . . . 85 5.4 Spitzer’s Condition and the Regular Variation of 𝑉d . . . . . . . . . . . . . . 88 5.5 Comparison Between 𝜎𝑅 and 𝜎[𝑅,∞) and One-Sided Escape From Zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 5.6 Escape Into (−∞, −𝑄] ∪ [𝑅, ∞) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 5.7 Sojourn Time of a Finite Set for 𝑆 with Absorbing Barriers . . . . . . . . 101
6
The Two-Sided Exit Problem – General Case . . . . . . . . . . . . . . . . . . . . . . 105 6.1 Statements of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 6.2 Upper Bounds of 𝑃 𝑥 (Λ𝑅 ) and Partial Sums of 𝑔𝛺 (𝑥, 𝑦) . . . . . . . . . . 111 6.3 Proof of Theorem 6.1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 6.3.1 Case (C2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 6.3.2 Cases (C3) and (C4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 6.4 Miscellaneous Lemmas Under (C3), (C4) . . . . . . . . . . . . . . . . . . . . . . 117 6.5 Some Properties of the Renewal Functions 𝑈a and 𝑉d . . . . . . . . . . . . . 124 6.6 Proof of Proposition 6.1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 6.7 Note on the Over- and Undershoot Distributions I . . . . . . . . . . . . . . . . 131
7
The Two-Sided Exit Problem for Relatively Stable Walks . . . . . . . . . . . 135 7.1 Statements of the Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 7.2 Basic Facts from Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 7.3 Proof of Theorem 7.1.1 and Relevant Results . . . . . . . . . . . . . . . . . . . 143 7.4 Proof of Proposition 7.1.3 (for 𝐹 Recurrent) and Theorem 7.1.4 . . . . 148 7.4.1 Preliminary Estimates of 𝑢 a and the Proof of Proposition 7.1.3 in the Case When 𝐹 is Recurrent . . . . . . . . . . . . . . . . . . 148 7.4.2 Asymptotic Forms of 𝑢 a and 𝑃0 (Λ𝑅 ) . . . . . . . . . . . . . . . . . . . . 151 7.4.3 Proof of Theorem 7.1.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 and Theorem 7.1.6 . . . . 157 7.5 Proof of Proposition 7.1.3 (for 𝐹 Transient) 7.6 Estimation of 𝑃 𝑥 𝑆 𝑁 (𝑅) = 𝑦 Λ𝑅 and the Overshoots . . . . . . . . . . . 162 7.7 Conditions Sufficient for (7.7) or ℓ ∗ (𝑥)ℓ♯ (𝑥) ∼ 𝐴(𝑥) . . . . . . . . . . . . . . 168
8
Absorption Problems for Asymptotically Stable Random Walks . . . . . 171 8.1 Strong Renewal Theorems for the Ladder Height Processes . . . . . . . . 172 8.2 The Green Function 𝑔𝛺 (𝑥, 𝑦) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 8.3 Asymptotics of 𝑃 𝑥 [𝜎𝑅 < 𝑇 | Λ𝑅 ] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180 8.4 Asymptotic Form of the Green Function 𝑔 𝐵(𝑅) (𝑥, 𝑦) . . . . . . . . . . . . . 185 8.5 The Scaling Limit of 𝑔 𝐵(𝑅) (𝑥, 𝑦) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189 8.6 Asymptotics of 𝑔 𝐵(𝑅) (𝑥, 𝑦) Near the Boundary . . . . . . . . . . . . . . . . . . 196 8.7 Note on the Over- and Undershoot Distributions II . . . . . . . . . . . . . . . 203
Contents
9
ix
Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 9.1 The Potential Function for a Finite Set . . . . . . . . . . . . . . . . . . . . . . . . . 208 9.1.1 Recurrent Walks with 𝜎 2 = ∞ . . . . . . . . . . . . . . . . . . . . . . . . . 208 9.1.2 Case 𝜎 2 < ∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210 9.1.3 Transient Walks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.2 The r.w. Conditioned to Avoid a Finite Set; Statements of Results . . 211 9.3 Proof of Theorem 9.2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 9.4 The Distribution of the Starting Site of a Large Excursion . . . . . . . . . 225 9.5 An Application to the Escape Probabilities From a Finite Set . . . . . . 227 9.6 Proof of Theorem 9.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230 9.7 Proof of Theorem 9.2.6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234 9.8 Random Walks Conditioned to Avoid a Finite Set Forever . . . . . . . . . 235 9.9 Some Related Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 9.9.1 The r.w. Avoiding a Finite Set in the Case 𝜎 2 < ∞ . . . . . . . . 241 9.9.2 Uniform Estimates of 𝑄 𝑛𝐵 (𝑥, 𝑦) in the Case 1 < 𝛼 < 2 . . . . . . 242 9.9.3 Asymptotic Properties of 𝔭𝑡0 (𝜉, 𝜂) and 𝔮𝑡 (𝜉) and Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244 9.9.4 The r.w. Killed Upon Entering the Negative Half-Line . . . . . 247
Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 A.1 Relations Involving Regularly Varying Functions . . . . . . . . . . . . . . . . 249 A.1.1 Results on s.v. Functions I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 A.1.2 Results on s.v. Functions II . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250 A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252 A.2.1 Strong Renewal Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 A.2.2 Renewal Function and Over- and Undershoot Distributions . 257 A.3 The First Ladder Epoch and Asymptotics of 𝑈a . . . . . . . . . . . . . . . . . . 260 A.3.1 Stability of 𝜏1 and Spitzer’s Condition . . . . . . . . . . . . . . . . . . . 261 A.3.2 Asymptotics of 𝑈a Under (AS) . . . . . . . . . . . . . . . . . . . . . . . . . 262 A.4 Positive Relative Stability and Condition (C3) . . . . . . . . . . . . . . . . . . . 264 A.5 Some Elementary Facts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266 A.5.1 An Upper Bound of the Tail of a Trigonometric Integral . . . . 266 A.5.2 A Bound of an Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 A.5.3 On the Green Function of a Transient Walk . . . . . . . . . . . . . . 267 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 Notation Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Chapter 1
Introduction
The potential function for the recurrent random walk (r.w.) in Z𝑑 (𝑑 = 1, 2), denoted by 𝑎(𝑥), was introduced by Frank Spitzer at the beginning of the 1960s. He established its existence, derived its fundamental properties, and developed a potential theory of the recurrent r.w. analogous to those for Brownian motion or transient Markov processes that had been studied by S. Kakutani, J. Doob, M. Kac, C. Hunt, and others. As in the classical case of transient Markov processes, the function 𝑎(𝑥) appears in various objects of probabilistic significance like the hitting distribution and Green function for a subset of Z, and it determines their asymptotic forms. (The definition and fundamental properties of 𝑎(𝑥) and some of the previous results inherently involving 𝑎(𝑥) are presented in Sections 2.1 and 2.2, respectively.) For 𝑑 = 1, Spitzer distinguishes two cases: one with 𝜎 2 < ∞ and the other with 𝜎 2 = ∞, where 𝜎 2 denotes the variance of the common distribution, 𝐹, of increments of the r.w., because of the sharp differences between them: 𝑎(𝑥) is a positive harmonic function for the recurrent r.w., absorbed as the r.w. visits the origin, and in the first case, there is a continuum of positive harmonic functions other than 𝑎(𝑥), while in the second case, there are no others. If 𝜎 2 < ∞ and 𝑑 = 1, we have a simple asymptotic form of 𝑎(𝑥) as |𝑥| → ∞, asymptotically linear growth with slopes ±1/𝜎 2 as 𝑥 → ±∞; the ladder processes, playing a vital role in the fluctuation theory, are easy to deal with since the ladder height variables have finite expectations. These, sometimes combined with the central limit theorem, allow the theory of r.w.’s to be described in a unified way with rather explicit formulae under the single condition 𝜎 2 < ∞. On the other hand, if 𝜎 2 = ∞ there are a variety of circumstances that are characterised by features different from one another; the exact asymptotic behaviour of 𝑎(𝑥) for large 𝑥 has been given only in some special cases; lim 𝑥→∞ 𝑎(𝑥)/𝑥 ¯ = 0 and lim 𝑥→∞ 𝑎(𝑥) ∈ [0, ∞] always exist, but there is a wide range of ways in which 𝑎(𝑥) can behave as 𝑥 → ±∞, depending on the behaviour of the tails of 𝐹 (it can behave very violently with lim sup 𝑎(𝑟𝑥)/𝑎(𝑥) = ∞ for each 𝑟 ∈ (0, 1); see Section 5.6), and it is desirable to find conditions under which 𝑎(𝑥) behaves in a specific way. The ladder height, at least the ascending or descending ladder height, has infinite mean.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8_1
1
2
1 Introduction
The central limit theorem (convergence to stable laws) requires a specific regularity of tails of 𝐹 that depend on the exponent of a limit stable law. At the time when Spitzer introduced 𝑎(𝑥), several works appeared which studied problems closely related to or involving the potential function, both in its application (H. Kesten, F. Spitzer, B. Belkin) and its extension to other processes (D. Ornstein, S.G. Port, C.J. Stone). For instance, Kesten and Spitzer [49] obtained certain ratio limit theorems for the distributions of the hitting times and sojourn times of a finite set. Kesten [41] gave refinements under mild additional assumptions. (An excellent exposition on the principal contents of [70], [49], [41] is given in Chapter VII of Spitzer’s book [71]; extensions to non-lattice random walks are given by Ornstein [55], Port and Stone [59], and Stone [74].) However, in the last half-century, only a few works have appeared related to the potential function of one-dimensional r.w.’s. During the same period, in contrast, many papers have appeared that address subjects concerning r.w.’s, recurrent or transient, but not involving the potential function, like renewal theory, r.w.’s conditioned to stay positive, large deviations, issues related to Spitzer’s condition, and so on. In this treatise, we shall deal with one-dimensional arithmetic r.w.’s and obtain upper and lower bounds of the potential function of the r.w. under a widely applicable condition on the tails of 𝐹. We will then use these to solve several problems that are significant from both a probabilistic and an analytic viewpoint. It would be hard to obtain precise results without extra conditions in the case 𝜎 2 = ∞, so in our applications we shall assume some specific conditions on the tails of 𝐹 that, in particular, allow us to compute exact asymptotic forms of 𝑎(𝑥). Based on the estimates of 𝑎(𝑥) obtained, we shall delve into the detailed study of the fluctuation theory in the following cases: • 𝐹 is attracted to a strictly stable law. • The r.w. is relatively stable, or one tail of 𝐹 is negligible relative to the other in a certain average. Many authors have studied various aspects of the r.w. for the first case and established remarkable results. However, important questions still need to be answered, some of which we shall resolve satisfactorily. On the other hand, the second case has been much less studied. With the help of results obtained in the last half century on the ladder processes associated with a random walk, especially those on renewal theory, overshoot estimates, and relatively stable r.w.’s, in addition to the classical theory of stochastic processes, we shall derive some precise asymptotic results on hitting probabilities of finite sets, Green functions on long finite intervals or a half-line of Z, absorption probabilities of two-sided exit problems and the like.
1 Introduction
3
Summary We let 𝑆 𝑛 = 𝑆0 + 𝑋1 + 𝑋2 + · · · + 𝑋𝑛 be a random walk in Z where the starting position 𝑆0 is an unspecified integer and the increments 𝑋1 , 𝑋2 , . . . are i.i.d. random variables. 𝑋 denotes a random variable having the same law as 𝑋1 , 𝐹 the distribution function of 𝑋, 𝑃 𝑥 the probability of the random walk with 𝑆0 = 𝑥 and 𝐸 𝑥 the expectation with respect to 𝑃 𝑥 . We assume the r.w. 𝑆 is oscillatory and irreducible (as a Markov chain with the state space Z). Chapter 2 collects previously obtained results that are used later in this treatise. In particular, the first section presents the fundamental results and notation used throughout that readers should read through and absorb. The other sections may be read when referred to or when the current problems become relevant to them. In Chapter 3, we are concerned with the upper and lower bounds of 𝑎(𝑥) that are of crucial importance in its applications, but we previously only know lim 𝑥→∞ [𝑎(𝑥) + 𝑎(−𝑥)] = ∞ in general under the condition 𝐸 𝑋 2 = ∞. We shall show that whenever 𝐸 𝑋 = 0 (which we tacitly understand is valid only if 𝐸 |𝑋 | < ∞), 𝑎(𝑥) + 𝑎(−𝑥) ≥ 𝑐 ∗ 𝑥/𝑚(𝑥) (𝑥 > 0), ∫ 𝑥 ∞ where 𝑚(𝑥) = 0 d𝑡 𝑡 𝑃 [|𝑋 | > 𝑠] d𝑠 and 𝑐 ∗ is a universal positive constant, while we give a set of sufficient conditions – also necessary if 𝑋 is in the domain of attraction – for the upper bound ∫
𝑎(𝑥) + 𝑎(−𝑥) ≤ 𝐶𝑥/𝑚(𝑥) ∫𝑥 ∫∞ to be true. Putting 𝑚 + (𝑥) = 0 d𝑡 𝑡 𝑃 [𝑋 > 𝑠] d𝑠 we shall also show that 𝑎(−𝑥)/𝑎(𝑥) → 0 if
𝑚 + (𝑥)/𝑚(𝑥) → 0.
In Chapter 4, we compute a precise asymptotic form of 𝑎(𝑥) under some regularity assumptions on the tails of 𝐹. In the first section, we verify that if 𝐴(𝑥) →∞ 𝑥𝑃[|𝑋 | > 𝑥]
(∗) where 𝐴(𝑥) =
∫ 0
𝑥
(𝑥 → ∞),
(𝑃[𝑋 > 𝑡] − 𝑃[𝑋 < −𝑡]) d𝑡, then ∫
𝑎(𝑥) ∼ 0
𝑥
𝑃[𝑋 < −𝑡] d𝑡 𝐴2 (𝑡)
and
𝑎(𝑥) − 𝑎(−𝑥) ∼
1 . 𝐴(𝑥)
The condition (∗) is satisfied if 𝐸 𝑋 = 0 and as 𝑥 → ∞ ∫∞ 𝑃[𝑋 > 𝑡]d𝑡 𝐿 (𝑥) 𝑥 and lim sup ∫ ∞ (♯) 𝑃[𝑋 < −𝑥] ≍ < 1, 𝑥 𝑃[𝑋 < −𝑡]d𝑡 𝑥
4
1 Introduction
where 𝐿 is some slowly varying function. For 𝑋 in the domain of attraction with exponent 1 < 𝛼 < 2, an asymptotic form of 𝑎(𝑥) expressed in terms of 𝐹 has been obtained, whereas for 𝛼 = 1, no result like it is found in the existing literature. In the second section, we compute exact asymptotic forms of 𝑎(𝑥) for all the cases 1 ≤ 𝛼 ≤ 2 (under some additional side conditions in extreme cases). We also obtain some estimates of the increments of 𝑎(𝑥) in the third section. We shall consider the case (∗) in detail in Chapter 7, focusing on the two-sided exit problem and related issues. We deal with the case when two tails of d𝐹 are not comparable in Chapter 5, where we mainly consider the case when the negative tail is dominant in the sense that 𝑚 + (𝑥)/𝑚(𝑥) → 0 and apply the estimates of 𝑎(𝑥) obtained in Chapters 3 and 4 to evaluate some hitting probabilities. Denote by 𝜎𝐵 the first hitting time of 𝐵 ⊂ R: 𝜎𝐵 = inf{𝑛 ≥ 1 : 𝑆 𝑛 ∈ 𝐵}, and by 𝑈a (𝑥) and 𝑉d (𝑥), the renewal functions of the strictly ascending and weakly descending ladder height processes, respectively. Among other things, we verify in Section 5.3 that uniformly for 0 ≤ 𝑥 ≤ 𝑅, as 𝑅 → ∞, (∗∗)
𝑃 𝑥 (Λ𝑅 ) ∼ 𝑉d (𝑥)/𝑉d (𝑅),
provided 𝑚 + (𝑥)/𝑚(𝑥) → 0. Here Λ𝑅 = 𝜎[𝑅+1,∞) < 𝜎(−∞,−1] . The formula (∗∗) does not seem to have appeared previously in the literature, except in [76], although its analogue is known to hold for spectrally negative Lévy processes [22]. In Sections 5.5–5.7, we study estimations of 𝑃 𝑥 [𝜎𝐵 < 𝜎0 ], the probability of the r.w. escaping from 0 into a set 𝐵 ⊂ Z for, e.g., 𝐵 = [𝑅, ∞), 𝐵 = (−∞, −𝑄] ∪ [𝑅, ∞) (with 𝑅, 𝑄 positive integers). For 𝑥 = 0, the asymptotic form of this probability is of particular interest since it determines the rate of increase of the sojourn time of 𝑆 spent before visiting 𝐵 in each finite set (outside 𝐵) as 𝑅 (and 𝑄) becomes large. The results obtained about them will be complemented by those given in subsequent chapters. In Chapter 6, we provide a sufficient condition for (∗∗) to hold that is also necessary if 𝑋 belongs to the domain of attraction of a stable law and is satisfied if either (∗) holds or 𝑚 + (𝑥)/𝑚(𝑥) → 0 (provided 𝐸 𝑋 = 0). Suppose that 𝑋 belongs to the domain of attraction of a stable law of exponent 0 < 𝛼 ≤ 2 and the limit 𝜌 := lim 𝑃0 [𝑆 𝑛 > 0] exists. Then we see that if (𝛼 ∨ 1) 𝜌 = 1, the sufficient condition obtained holds; if 0 < (𝛼 ∨ 1) 𝜌 < 1, there exist positive constants 𝜃 ∗ < 𝜃 ∗ < 1 such that 𝜃 ∗ < 𝑃 𝑥 (Λ𝑅 )𝑉d (𝑅)/𝑉d (𝑥) < 𝜃 ∗ for 0 ≤ 𝑥 < 𝛿𝑅; and if 𝜌 = 0, 𝑃 𝑥 (Λ𝑅 )𝑉d (𝑅)/𝑉d (𝑥) → 0 as 𝑅 → ∞ uniformly for 0 ≤ 𝑥 < 𝛿𝑅, where 𝛿 is an arbitrarily fixed constant less than 1. If 0 < 𝜌 < 1, the asymptotic form of 𝑃 𝑥 (Λ𝑅 )
1 Introduction
5
as 𝑥/𝑅 → 𝜉 ∈ (0, 1) is known,1 which, however, does not yield the uniform estimate near the endpoints of the interval, as given in our result mentioned above. Chapter 7 addresses, as mentioned above, the two-sided exit problem under (∗) (which implies 𝑃 ⌊𝑅/2⌋ (Λ𝑅 ) → 1). We verify, among other things, that if (∗) holds then ∫ 𝑅 𝑈a (𝑅 − 𝑥)𝑉d (𝑅) 1 − 𝑃 𝑥 (Λ𝑅 ) ∼ 𝑃[𝑋 < −𝑡]d𝑡 𝑅−𝑥−1 𝑥−1 (uniformly for 𝜀𝑅 < 𝑥 ≤ 𝑅) and 𝑣d (𝑥) := 𝑉d (𝑥) − 𝑉d (𝑥 − 1) ∼
𝑉d (𝑥)𝑃[𝑋 < −𝑥] , 𝐴(𝑥)
under some additional regularity condition on 𝑃[𝑋 < −𝑥] for large 𝑥, which is satisfied at least if 𝐸 𝑋 = 0 and (♯) holds. Of the above two relations, the latter is especially interesting since in general we know very little about 𝑣d (𝑥), except that 𝑣d (𝑥) → 0, from the fact that 𝑉d is slowly varying (the case under (∗)). Chapter 8 mainly concerns the evaluation of the Green function 𝑔 𝐵(𝑅) (𝑥, 𝑦), 𝐵(𝑅) := R \ [0, 𝑅], of the r.w. killed as it exits a long interval [0, 𝑅]. When the law of 𝑋 is symmetric, the asymptotics of 𝑔 𝐵(𝑅) (𝑥, 𝑦) as 𝑅 → ∞ were studied by Spitzer and Stone [72] in the case 𝐸 𝑋 2 < ∞ and by Kesten [39], [40] under the condition 𝐸 [1 − e𝑖 𝜃 𝑋 ]/|𝜃| 𝛼 −→ 𝑄 (𝜃 → 0) with 1 ≤ 𝛼 ≤ 2 and 𝑄 > 0.2 In particular, Kesten proved that under this condition, if 1 < 𝛼 < 2, then for (𝜉, 𝜂) ∈ (0, 1) 2 , as 𝑥/𝑅 → 𝜉 and 𝑦/𝑅 → 𝜂,3 (♭)
𝑔 𝐵(𝑅) (𝑥, 𝑦) |𝜂 − 𝜉 | 𝛼−1 −→ 𝑅 𝛼−1 𝑄 [Γ(𝛼/2)] 2
n
∫
min
𝜉 (1−𝜂) 𝜂 (1− 𝜉 )
,
𝜂 (1− 𝜉 𝜉 (1−𝜂)
o 1
(1 − 𝑤) −𝛼 𝑤 2 𝛼−1 d𝑤
0
(Theorem 3 of [40]), and if 𝛼 = 1 and 𝑥/𝑅 is bounded away from 0 and 1, 𝑔 𝐵(𝑅) (𝑥, 𝑥)/log 𝑅 −→ (𝜋𝑄) −1 (Theorem 3 of [39]). Kesten [40] left the case 𝛼 = 1 with 𝜉 ≠ 𝜂 open, although he seemed to be convinced of the truth of the convergence in (♭), saying “There are good reasons to believe” that it is valid also for 𝛼 = 1. We shall generalise these results to the r.w. that is attracted to a stable law with exponent 1 ≤ 𝛼 ≤ 2, and refine them, the asymptotic formulae obtained being valid uniformly for 0 ≤ 𝑥, 𝑦 ≤ 𝑅 if 𝛼 > 1 and for |𝑦 − 𝑥|/(𝑥 ∨ 𝑦) > 𝜀 (𝜀 > 0) if 𝛼 = 1 (Theorem 8.4.3, Remark 8.4.4). Chapter 9 concerns the r.w. conditioned to avoid a finite set when 𝑋 is in the domain of attraction. For the r.w. conditioned to avoid a finite set up to time 𝑛, Belkin [3] proved a functional limit theorem, under appropriate scaling, as 𝑛 → ∞. We shall prove an extension of it using a different approach from [3], and apply 1 The asserted formula in the literature is wrong, although it and its derivation can be easily rectified (see Remark 6.1.4). 2 In [39], the asymmetric case where ℜ𝑄 > 0 is also considered. 3 If 𝜎 2 < ∞, this reduces to the result of [72] (cf. (8.2)), though not explicitly stated in [40].
6
1 Introduction
the same method to estimate an escape probability from the set. We also obtain a functional limit theorem of the r.w. conditioned to avoid the set forever. Some related results obtained earlier are briefly reviewed. Finally, we provide an appendix where miscellaneous results are presented (either known, or easily derived from known facts). We shall give proofs of many of the results that are not currently available in textbooks.
Chapter 2
Preliminaries
Here we briefly review several previously obtained results – mostly when the variance is infinite – that are used or closely related to the topics treated in this book. In the Appendix, we shall give proofs of some of them, mainly those not found in textbooks, to make this treatise as self-contained as possible. The first section gives the basic notation, concepts, and facts used throughout.
2.1 Basic Facts, Notation, and Conventions Throughout, we shall apply the following facts from Spitzer’s book [71], and employ the functions and the random variables involved in them repeatedly, usually without explicit reference. Let 𝑆 𝑛 = 𝑆0 + 𝑋1 + 𝑋2 + · · · + 𝑋𝑛 be a random walk in Z, where the starting position 𝑆0 is an unspecified integer and the increments 𝑋1 , 𝑋2 , . . . are independent and identically distributed random variables defined on some probability space (Ω, F , 𝑃) and taking values in Z. Let 𝑋 be a random variable having the same law as 𝑋1 . 𝐹 denotes the distribution function of 𝑋 and 𝐸 indicates integration with respect to 𝑃 as usual. Denote by 𝑃 𝑥 the probability of the random walk with 𝑆0 = 𝑥 and 𝐸 𝑥 the expectation with respect to 𝑃 𝑥 . Our basic assumption is ∞ is oscillatory and irreducible (as a Markov chain on Z), the r.w. 𝑆 = (𝑆 𝑛 ) 𝑛=0
namely 𝑃0 [lim sup 𝑆 𝑛 = − lim inf 𝑆 𝑛 = ∞] = 1 and ∀𝑦 ∈ Z, sup 𝑃0 [𝑆 𝑛 = 𝑦] > 0, 𝑛
which we always suppose, except on a few occasions when explicitly stated otherwise.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8_2
7
8
2 Preliminaries
Definition of the potential function For 𝑥 ∈ Z, put 𝑝 𝑛 (𝑥) = 𝑃0 [𝑆 𝑛 = 𝑥], write 𝑝(𝑥) for 𝑝 1 (𝑥) and define the function 𝑎(𝑥) by ∞ ∑︁ 𝑎(𝑥) = ( 𝑝 𝑛 (0) − 𝑝 𝑛 (−𝑥)). 𝑛=0
Here the series on the RHS is convergent for every r.w. (cf. Spitzer [71, P28.8]). When the r.w. is recurrent, 𝑎(𝑥), called the potential function of 𝑆, plays a fundamental role in the potential theory of recurrent random walks, especially for random walks which are killed as they hit the origin, 𝑎(𝑥) being a non-negative harmonic function,1 i.e., ∑︁ 𝑎(𝑥) = 𝑝(𝑦 − 𝑥)𝑎(𝑦) 𝑥 ≠ 0, 𝑦 ∈Z\{0}
and the Green function of the killed r.w. being expressed in a simple form in terms of 𝑎, as described in the next paragraph.
The first hitting time and the Green function of a set 𝐵 For a subset 𝐵 of the whole real line R such that 𝐵 ∩ Z ≠ ∅, put 𝜎𝐵 = inf{𝑛 ≥ 1 : 𝑆 𝑛 ∈ 𝐵}, and define the Green function 𝑔 𝐵 (𝑥, 𝑦) of the r.w. killed as it hits 𝐵 by 𝑔 𝐵 (𝑥, 𝑦) =
∞ ∑︁
𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 > 𝑛].
(2.1)
𝑛=0
Our definition of 𝑔 𝐵 is not standard: 𝑔 𝐵 (𝑥, 𝑦) = 𝛿 𝑥,𝑦 + 𝐸 𝑥 [𝑔 𝐵 (𝑆1 , 𝑦); 𝑆1 ∉ 𝐵] not only for 𝑥 ∉ 𝐵 but for 𝑥 ∈ 𝐵, while 𝑔 𝐵 (𝑥, 𝑦) = 𝛿 𝑥,𝑦 for all 𝑥 ∈ Z, 𝑦 ∈ 𝐵. Here 𝛿 𝑥,𝑦 denotes the Kronecker delta symbol. The function 𝑔 𝐵 (·, 𝑦) restricted to 𝐵 equals the hitting distribution of 𝐵 by 𝑆ˆ ( 𝑦) := 𝑦 − 𝑋1 − 𝑋2 − · · · − 𝑋𝑛 , the dual r.w. started at 𝑦, in particular (2.2) 𝑔 {0} (0, 𝑦) = 1 (𝑦 ∈ Z). In [71] Spitzer defined the Green function of the r.w. killed on hitting 0 by 𝑔(𝑥, 𝑦) = 𝑔 {0} (𝑥, 𝑦) − 𝛿0, 𝑥 (so that 𝑔(0, ·) = 𝑔(·, 0) ≡ 0) and proved that 𝑔(𝑥, 𝑦) = 𝑎(𝑥) + 𝑎(−𝑦) − 𝑎(𝑥 − 𝑦)
1 The uniqueness holds if and only if 𝜎 2 = ∞.
(𝑥, 𝑦 ∈ Z).
(2.3)
2.1 Basic Facts, Notation, and Conventions
9
The ladder height variables and renewal functions Let 𝑍 be the first strictly ascending ladder height that is defined by 𝑍 = 𝑆 𝜎[𝑆0 +1,∞) − 𝑆0 .
(2.4)
We also define 𝑍ˆ = 𝑆 𝜎(−∞,𝑆0 −1] − 𝑆0 , the first strictly descending ladder height. As 𝑆 is oscillatory, both 𝑍 and − 𝑍ˆ are proper random variables whose distributions concentrate on positive integers 𝑥 = 1, 2, . . .. Let 𝑈a (resp. 𝑉d ) be the renewal function of the strictly ascending (resp. weakly descending) ladder height process: 𝑈a (𝑥) =
∞ ∑︁
𝑃[𝑍1 + · · · + 𝑍 𝑛 ≤ 𝑥];
𝑉d (𝑥) = 𝑣◦
∞ ∑︁
𝑃[ 𝑍ˆ 1 + · · · + 𝑍ˆ 𝑛 ≥ −𝑥]
𝑛=1
𝑛=1
ˆ and for 𝑥 = 1, 2, . . ., where 𝑍 𝑘 (resp 𝑍ˆ 𝑘 ) are i.i.d. copies of 𝑍 (resp 𝑍) o n ∑︁ 𝑘 −1 𝑝 𝑘 (0) . 1/𝑣0 = 𝑃0 [𝑆 𝜎 (−∞,0] < 0] = 𝑃0 [𝑆 𝜎 [0,∞) > 0] = exp − We put 𝑈a (0) = 1, 𝑉d (0) = 𝑣◦ and 𝑈a (𝑥) = 𝑉d (𝑥) = 0 for 𝑥 < 0. The product 𝐸 𝑍 𝐸 | 𝑍ˆ | equals [2𝑣◦ ] −1 𝜎 2 ≤ ∞ (cf. [71, Section 18]). Put ∫ 𝑥 ∫ 𝑥 1 ℓ ∗ (𝑥) = 𝑃[𝑍 > 𝑡] d𝑡 and ℓˆ∗ (𝑥) = ◦ 𝑃[− 𝑍ˆ > 𝑡] d𝑡. 𝑣 0 0 𝑈a varies regularly with index 1 if and only if ℓ ∗ is slowly varying (s.v.) (Theorem A.2.3) and if this is the case 𝑢 a (𝑥) := 𝑈a (𝑥) − 𝑈a (𝑥 − 1) ∼ 1/ℓ ∗ (𝑥)
(𝑥 → ∞)
(2.5)
[83] (see Lemma A.2.1 of the Appendix for a proof applicable in the current setting). Let 𝑣d (𝑥) = 𝑉d (𝑥) − 𝑉d (𝑥 − 1) and 𝛺 = (−∞, −1]. Together with (2.3), the following identity [71, Propositions 18.7, 19.3] 𝑔𝛺 (𝑥, 𝑦) =
𝑥∧𝑦 ∑︁
𝑣d (𝑥 − 𝑘)𝑢 a (𝑦 − 𝑘)
for 𝑥, 𝑦 ≥ 0.
(2.6)
𝑘=0
is fundamental to our investigation. We shall refer to (2.6) as Spitzer’s formula. Remark 2.1.1 If 𝑈d denotes the renewal function of the strictly descending ladder height process, then 𝑈d (𝑥) = 𝑉d (𝑥)/𝑉d (0) (cf., e.g., [31, Sections XII.9, XVIII.3]). It is natural to work with the pair (𝑈a , 𝑉d ) instead of (𝑈a , 𝑈d ) because of Spitzer’s formula (2.6) (see also, e.g., Lemma 6.5.1).
10
2 Preliminaries
Time reversal and Feller’s duality Duality for a random walk with step law 𝑝(𝑥) usually refers to the relation between properties of the r.w. and corresponding properties of its dual walk (i.e., the r.w. with step law 𝑝(−𝑥)). Feller [31] (Section XII.2) discusses a different sort of ‘duality’, the duality between a random walk path with increments (𝑋1 , . . . , 𝑋𝑛 ) and that obtained by taking 𝑋 𝑘 in reverse order. Clearly, a path has the same probability as its ‘dual’ path. A little reflection shows that time-reversed paths and Feller’s dual paths are symmetric with respect to the time axis. In many cases, the two dualities have essentially the same effects. For instance, Feller’s duality yields 𝑔 𝐵 (𝑥, 𝑦) = 𝑔−𝐵 (−𝑦, −𝑥). On the other hand, the corresponding duality relation (in the sense of time-reversal) reads 𝑔 𝐵 (𝑥, 𝑦) = 𝑔ˆ 𝐵 (𝑦, 𝑥). Here and throughout the treatise, 𝑔ˆ 𝐵 denotes the dual of 𝑔 𝐵 , namely, the Green function of the dual walk killed as it enters 𝐵.
Additional notation and conventions In the above, we have introduced the functions 𝐹 (𝑥), 𝑎(𝑥), 𝑉d (𝑥), 𝑝 𝑛 (𝑥) and 𝑔(𝑥, 𝑦) and the random variables 𝑍, 𝑍ˆ and 𝜎𝐵 . In addition to these, the following notation is used throughout. • 𝐹 (𝑡) = 𝑃[𝑋 ≤ 𝑡] (𝑡 ∈ R),
𝑝(𝑥) = 𝑃[𝑋 = 𝑥] (𝑥 ∈ Z), and
𝜇− (𝑡) = 𝑃[𝑋 < −𝑡], 𝜇+ (𝑡) = 𝑃[𝑋 > 𝑡],
𝜇(𝑡) = 𝜇− (𝑡) + 𝜇+ (𝑡)
(0 ≤ 𝑡 < ∞).
• A positive Borel function 𝑓 defined on a neighbourhood of +∞ is said to be slowly varying (s.v.) if lim 𝑓 (𝜆𝑥)/ 𝑓 (𝑥) = 1 as 𝑥 → ∞ for each 𝜆 > 1, regularly varying with index 𝛼 ∈ R if 𝑥 −𝛼 𝑓 (𝑥) is s.v., and almost decreasing (increasing) if there exist numbers 𝐶 ≥ 1 and 𝑥0 such that 𝑓 (𝑦)/ 𝑓 (𝑥) ≤ 𝐶 (≥ 𝐶 −1 ) whenever 𝑦 > 𝑥 ≥ 𝑥0 . variation if An almost decreasing positive function 𝑓 is said to be of dominated ∫𝑥 −1 /d𝑡 𝜀 (𝑡)𝑡 lim inf 𝑥→∞ 𝑓 (2𝑥)/ 𝑓 (𝑥) > 0. An s.v. 𝑓 (𝑥) is represented as 𝑐(𝑥)e 1 with 𝑐(𝑡) → 1 and 𝜀(𝑡) → 0 as 𝑡 → ∞. If 𝑐(𝑡) ≡ 1, 𝑓 is called normalised. (See Feller [31] and/or Bingham, Goldie and Teugels [8] for properties of these functions.)2 • 𝑥 ∼ 𝑦 (resp. 𝑥 ≍ 𝑦) designates that the ratio of 𝑥 and 𝑦 approaches 1 (resp. is bounded away from zero and infinity). A formula of the form 𝑓 (𝑥) ∼ 𝐶𝑔(𝑥) with an extended real number 0 ≤ 𝐶 ≤ ∞ and 𝑔(𝑥) > 0 is understood to mean 𝑓 (𝑥)/𝑔(𝑥) → 𝐶. [We often write 𝑓 /𝑔 → 𝐶 for this if doing so causes no ambiguity.] • For non-negative functions 𝑓 and 𝑔 of 𝑥 ≥ 0, we sometimes write 𝑓 (𝑥) ≪ 𝑔(𝑥) for 𝑓 (𝑥) = 𝑜(𝑔(𝑥)), i.e., 𝑓 (𝑥)/𝑔(𝑥) → 0; and similarly for 𝑔(𝑥) ≫ 𝑓 (𝑥). These expressions tacitly entail 𝑔(𝑥) > 0 for all 𝑥 large enough. 2 In these books, functions of dominated variation are presumed monotone.
2.2 Known Results Involving 𝑎 ( 𝑥)
11
• 𝑥 ∧ 𝑦 and 𝑥 ∨ 𝑦 respectively denote the minimum and maximum of 𝑥 and 𝑦; for a real number 𝑥, ⌊𝑥⌋ is its integer part, 𝑥 + = 𝑥 ∨ 0, 𝑥 − = (−𝑥) ∨ 0; sgn(𝑥) = 𝑥/|𝑥|. For 𝑥 ≠ 0, 𝑥/0 = +∞ (−∞) if 𝑥 > 0 (𝑥 < 0). • We shall write 𝐸 𝑋 = 0 with the understanding that 𝐸 |𝑋 | < ∞ is implicitly assumed. We often omit ‘𝑥 → ∞’ and/or ‘𝑅 → ∞’ and the like when it is obvious. We shall write 0 < 𝑥, 𝑦 < 𝑅 for the conjunction not of 0 < 𝑥 and 𝑦 < 𝑅 but of 0 < 𝑥 < 𝑅 and 0 < 𝑦 < 𝑅. • The letters 𝑥, 𝑦, 𝑧, 𝑤 designate space variables that may be real numbers or integers. Their exact nature will be specified only when significant ambiguity arises. For Í Í real numbers 𝑎, 𝑏, the summation sign 𝑎 ≤𝑥 ≤𝑏 will often be written 𝑏𝑎 when the contributions of the boundary terms to the sum are negligible. • By 𝐶, 𝐶 ′, 𝐶1 , etc., we denote finite positive constants whose values may change from line to line. For a subset 𝐵 of R we write −𝐵 = {−𝑥 : 𝑥 ∈ 𝐵} and 𝐵 + 𝑦 = {𝑥 + 𝑦 : 𝑥 ∈ 𝐵} (𝑦 ∈ R) and ♯𝐵 stands for the cardinality of 𝐵. • 1(S) equals 1 or 0 according as a statement S is true or false; 𝛿 𝑥,𝑦 = 1(𝑥 = 𝑦). • 𝜗𝑡 denotes the shift operator of sample paths: 𝑆 ◦ 𝜗𝑛 = 𝑆 𝑛+· for 𝑛 = 0, 1, 2, . . .; 𝑓 ◦ 𝜗𝑡 = 𝑓 (𝑡 + ·) for a real function 𝑓 (𝑡), 𝑡 ≥ 0. • ‘−→ 𝑃 ’ (‘⇒’) designates convergence in probability (distribution).
2.2 Known Results Involving 𝒂(𝒙) Among the known results, the following are remarkable: for 𝑥, 𝑦 ∈ Z, lim
𝑛→∞
𝑃 𝑥 [𝜎0 > 𝑛] = 𝑎 † (𝑥) 𝑃0 [𝜎0 > 𝑛]
(2.7)
(valid for every irreducible r.w., recurrent or transient, of dimension 𝑑 ≥ 1); and lim
𝑛→∞
𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎0 ≥ 𝑛] 𝑥𝑦 = 𝑎 † (𝑥)𝑎 † (−𝑦) + 4 , 𝑃0 [𝜎0 = 𝑛] 𝜎
(2.8)
provided 𝐹 is recurrent and strongly aperiodic.3 Here 𝜎𝑥 = inf{𝑛 ≥ 1 : 𝑆 𝑛 = 𝑥}, 𝑥𝑦/∞ is understood to be zero, 𝑎 † (0) = 1
and
𝑎 † (𝑥) = 𝑎(𝑥)
for 𝑥 ≠ 0,
and 𝐹 is called strongly aperiodic if for every 𝑥 there exists an 𝑛0 ≥ 1 such that 𝑃0 [𝑆 𝑛 = 𝑥] > 0 for all 𝑛 ≥ 𝑛0 .
(2.9)
The first identity (2.7) is Theorem 32.1 of [71]. The formula (2.8) follows from Theorem 6a (𝜎 2 < ∞) and Theorem 7 (𝜎 2 = ∞) of Kesten [41]. Some extensions and refinements of (2.8) are given in [76], [79] in the case when 𝜎 2 < ∞ (see Section 3 We call 𝐹 recurrent (or transient) if the r.w. 𝑆 is recurrent (resp. transient).
12
2 Preliminaries
9.9.1) and in [82] for the stable walks with exponent 1 < 𝛼 < 2 (see Section 9.9.3). (See [78] in the case 𝜎 2 < ∞ for further detailed estimates of 𝑃 𝑥 [𝜎0 = 𝑛].) For recurrent r.w.’s, the following basic properties of 𝑎(𝑥) hold [71, Sections 28, 29]: (2.10) 𝑎(𝑥 + 1) − 𝑎(𝑥) → ±1/𝜎 2 as 𝑥 → ±∞, ¯ := 𝑎(𝑥) and 𝑥 𝑎(𝑥) − 2 𝜎
1 [𝑎(𝑥) + 𝑎(−𝑥)] → ∞ 2
= 0 for all 𝑥 > 0 if 𝑃[𝑋 ≤ −2] = 0, > 0 for all 𝑥 > 0 otherwise
(2.11)
(2.12)
(for the strict positivity in the second case of (2.12), not given in [71] when 𝜎 2 < ∞, see, e.g., [76, Eq(2.9)]). When 𝜎 2 < ∞ (2.10) entails the exact asymptotic 𝑎(𝑥) ∼ |𝑥|/𝜎 2 , whereas if 𝜎 2 = ∞ it gives only 𝑎(𝑥) = 𝑜(|𝑥|) and sharper asymptotic estimates are highly desirable. Below we state some results concerning the properties of 𝑎(𝑥) that the present author has recently obtained, and which will be relevant to the main focus of this treatise. Let 𝜎 2 = ∞. In [84] it is shown that for every recurrent r.w., the limits lim 𝑎(𝑥) ≤ ∞ and lim 𝑎(−𝑥) ≤ ∞ exist; 𝑥→ ∞
𝑥→ ∞
|𝑋 | < ∞, 𝑃[𝑋 ≥ 2] > 0 and ∫𝐸 ∞ [𝑡/𝑚 − (𝑡)] 2 [1 − 𝐹 (𝑡)] d𝑡 < ∞, 0
lim 𝑎(−𝑥) < ∞ 0 < 𝑥→ ∞
if
lim 𝑎(−𝑥) = ∞ 𝑥→ ∞
otherwise;
lim 𝑎(−𝑥) < ∞ if and only if 0
𝑥]𝑎(𝑥) < ∞;
(2.13)
(2.14)
𝑥=1
(Theorem 2.4 and Corollary 2.6 of [84]), and analogously for lim 𝑥→∞ 𝑎(𝑥); and 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] )] 𝑎(𝑥) → 1/𝐸 𝑍, → 0 (𝑥 → ∞) if 𝐸 𝑍 < ∞, 𝑉d (𝑥) 𝑎(𝑥) (2.15) 𝑎(𝑥) = 0, lim inf 𝑎(𝑥) = 𝐸 [𝑎(𝑆 )] (𝑥 > 0) otherwise 𝑥 𝜎 (−∞,0] 𝑥→+∞ 𝑉d (𝑥) ∫𝑥 ∫∞ (Corollary 2.2 of [84]). [Here 𝑚 − (𝑥) = 0 d𝑡 𝑡 𝐹 (−𝑠) d𝑠 (see the beginning of Section 3.1).] Note that if 𝐸 |𝑋 | = ∞, then 𝐸 𝑋+ = 𝐸 𝑋− = ∞ because of the assumed oscillation of the walk. In (2.15), lim inf can be replaced by lim in various cases, and no counter-example contrary to it has been found so far.
2.3 Domain of Attraction
13
2.3 Domain of Attraction We shall be concerned with the asymptotic stable walks that satisfy the condition (a) 𝑋 is attracted to a stable law of exponent 0 < 𝛼 ≤ 2, (AS) (b) there exists the limit 𝜌 := lim 𝑃0 [𝑆 𝑛 > 0]. Note that the assumption of 𝑆 being oscillating (in particular 𝐸 𝑋 = 0 if 𝐸 |𝑋 | < ∞) is always in force. Condition (a) holds if and only if 𝐸 [𝑋 2 ; |𝑋 | < 𝑥] ∼ 𝐿 (𝑥) 𝜇+ (𝑥) ∼ 𝑝𝐿(𝑥)𝑥 −𝛼 and
𝜇− (𝑥) ∼ 𝑞𝐿(𝑥)𝑥 −𝛼
for 𝛼 = 2, for 𝛼 < 2,
(2.16)
for some s.v. function 𝐿 and constants 𝑝 ∈ [0, 1], 𝑞 = 1 − 𝑝 (cf. [31]).4 In (b), 𝜌 ranges exactly over [1 − 𝛼−1 ] ∨ 0 ≤ 𝜌 ≤ 𝛼−1 ∧ 1. For 𝛼 ≠ 1, (b) follows from (a), and according to [91], 𝜌 = 2−1 + (𝜋𝛼) −1 arctan [( 𝑝 − 𝑞) tan(𝛼𝜋/2)] , while for 𝛼 = 1 (b) holds with 𝜌 = 1 at least if either 𝑝 < 1/2, 𝐸 𝑋 = 0 or 𝑝 > 1/2, 𝐸 |𝑋 | = ∞ and analogously for the case 𝜌 = 0; for 𝑝 = 1/2, 0 < 𝜌 < 1 is equivalent to the existence of lim 𝑛𝐸 [sin(𝑋/𝑐 𝑛 )] ∈ R as well as to that of lim 𝑛𝑐−1 𝑛 𝐸 [𝑋; |𝑋 | < 𝑐 𝑛 ] ∈ R with constants 𝑐 𝑛 as specified below. (See [31, Sections XVII.5 and IX.8]; also (4.29).) Let (𝑐 𝑛 ) be a positive sequence chosen so that 𝑛𝐿(𝑐 𝑛 )/𝑐 𝑛𝛼 → 𝑐 ♯
(2.17)
for some arbitrary positive constant 𝑐 ♯ . Then if 𝛼 ≠ 1, 𝑆 𝑛 /𝑐 𝑛 converges in law to a strictly stable variable. For 𝛼 = 1, with the centring constants given by 𝑏 𝑛 = 𝑛𝐸 [sin(𝑋/𝑐 𝑛 )], 𝑆 𝑛 /𝑐 𝑛 − 𝑏 𝑛 converges in law to a stable variable. Denote by 𝑌 ◦ = (𝑌 ◦ (𝑡))𝑡 ≥0 the limit stable process (assumed to be defined on the same probability ◦ space as 𝑋). The characteristic exponent Φ(𝜃) := − log 𝐸 [e𝑖 𝜃𝑌 (1) | 𝑌 ◦ (0) = 0] (𝜃 ∈ R) is then given by ( 𝐶Φ |𝜃| 𝛼 1 − 𝑖(sgn 𝜃) ( 𝑝 − 𝑞) tan 21 𝛼𝜋 if 𝛼 ≠ 1, 1 Φ(𝜃) = (2.18) 𝐶Φ |𝜃| 2 𝜋 + 𝑖(sgn 𝜃)( 𝑝 − 𝑞) log |𝜃| if 𝛼 = 1, where 𝐶Φ = 𝑐 ♯ Γ(1 − 𝛼) cos 12 𝛼𝜋 if 𝛼 ∉ {1, 2}, 𝐶Φ = 𝑐 ♯ /𝛼 if 𝛼 ∈ {1, 2}, and sgn 𝜃 = 𝜃/|𝜃| (cf. [31, (XVII.3.18-19)]).5 We shall often write 𝜌ˆ for 1 − 𝜌. 4 We have chosen the formulation (2.16), which differs from the standard one (as given in [31]) where one has the factor (2 − 𝛼)/𝛼 on the RHS in the case 𝛼 < 2, since it makes many constants simpler. 5 If 𝛼 ∉ {1, 2}, then the difference in the choice of 𝐿 from [31] means that the constant 𝐶Φ must be replaced by [ (2 − 𝛼)/𝛼]𝐶Φ in the standard formulation.
14
2 Preliminaries
For convenience, we shall assume the norming sequence 𝑐 𝑛 to be extended to a continuous function on [0, ∞) by linear interpolation.
2.4 Relative Stability, Distribution of 𝒁 and Overshoots The class of relatively stable r.w.’s contains the extreme case 𝜌 𝜌ˆ = 0 of 𝛼 = 1 in (AS), which has been least studied when the problem is treated in relation to limit stable processes. Here we summarise some of the known results on the relatively stable r.w.’s directly related to the topics treated in this treatise; see [48] for more details.
2.4.1 Relative Stability We say a random walk 𝑆 is relatively stable (abbreviated as r.s.) if there exists a (nonrandom) sequence 𝐵𝑛 ≠ 0 such that 𝑆 𝑛 /𝐵𝑛 → 1 in probability under 𝑃0 . Here 𝐵𝑛 is necessarily either ultimately positive or ultimately negative (cf. [48, Theorem 2.3], [45, p. 1806]). In the positive (negative) case, after [48], we say that 𝑆 is positively (negatively) relatively stable (abbreviated as p.r.s. (n.r.s.)). Put ∫ 𝑥 𝐴± (𝑥) = (2.19) 𝜇± (𝑦) d𝑦 and 𝐴(𝑥) = 𝐴+ (𝑥) − 𝐴− (𝑥). 0
According to [53], [65] (see also the remark around Eq(1.15) of [48]), 𝑆 is p.r.s. if and only if 𝐸 [𝑋; |𝑋 | < 𝑥]/𝑥𝜇(𝑥) → ∞ (𝑥 → ∞), or, what amounts to the same, 𝐴(𝑥) −→ ∞ 𝑥𝜇(𝑥)
as 𝑥 → ∞;
(2.20)
and in this case 𝐵𝑛 can be chosen so that 𝐵𝑛 ∼ 𝑛𝐴(𝐵𝑛 ) [for a proof, see the beginning of Section A.4]; clearly 𝐴 ′ (𝑥) = 𝑜( 𝐴(𝑥)/𝑥) (𝑥 → ∞),∫so 𝐴 is slowly varying (s.v.) · at infinity, and the same is true for 𝐴+ and 𝐴+ + 𝐴− = 0 𝜇(𝑦) d𝑦. If 𝐹 belongs to a domain of attraction of a stable law of exponent 𝛼 ∈ (0, 2], (2.20) holds if and only if 𝛼 = 1 and 𝑃[𝑆 𝑛 > 0] → 1. (See Remark 6.1.2 and Section 7.1 for more about related matters.) Under (2.20) the function 𝐴 is positive from some point on and in the sequel 𝑥 0 is a positive integer such that 𝐴(𝑥) > 0 for 𝑥 ≥ 𝑥0 . According to [63] the relative stability of 𝑆 implies that for any 𝜀 > 0 " # 𝑃0
sup |𝑆 ⌊𝑛𝑡 ⌋ /𝐵𝑛 − 𝑡| −→ 1
(𝑛 → ∞)
0≤𝑡 0] −→ 1; (b) Px (Λ2x ) −→ 1; (c) Sn −→P +∞,
(2.22)
where Λ𝑅 stands for the event {𝜎(𝑅,∞) < 𝜎(−∞,0) }. The converse holds under (AS), but not in general. An exact solution is obtained by Kesten and Maller [46], where they do not assume the oscillation of 𝑆, nor any moment condition. From Theorem 2.1 and Lemma 4.3 of [46] it follows that if either 𝐸 𝑋−2 = ∞ or 𝜇− (𝑥) > 0 for all 𝑥 < 0 and if 𝜎 2 = ∞, then each of (a) to (c) above is equivalent to ∃ 𝑥0 > 0, 𝐴(𝑥) > 0 for 𝑥 ≥ 𝑥0 and
𝑥𝜇− (𝑥) →0 𝐴(𝑥)
(2.23)
(see also [47, Theorem 3] for another criterion).6 [If ∫ ∞𝐸 𝑋 = 0, (2.23) is rephrased as lim[𝜂− (𝑥) − 𝜂+ (𝑥)]/𝑥𝜇− (𝑥) = ∞ (where 𝜂± (𝑥) = 𝑥 𝜇± (𝑦) d𝑦), which implies that 𝜂− (𝑥)/𝑥𝜇− (𝑥) → ∞, hence 𝜂− is s.v. at infinity. Note that if 𝜇− (𝑥) = 0 for 𝑥 large enough (so that 𝐸 𝑋 = 0 under our basic hypothesis), (2.23) is impossible.]
2.4.2 Overshoots and Relative Stability of 𝒁 We define the overshoot that the r.w. 𝑆 makes beyond the level 𝑅 by 𝑍 (𝑅) := 𝑆 𝜎[𝑅+1,∞) − 𝑅.
(2.24)
Let 𝑆0 = 0. Then 𝑍 (𝑅) coincides with the overshoot made by the ladder height process 𝐻𝑛 = 𝑍1 + · · · + 𝑍 𝑛 , where 𝑍 𝑘 are i.i.d. copies of 𝑍. When 𝐸 𝑍 < ∞, the law of the overshoot itself accordingly converges weakly as 𝑅 → ∞ to a proper probability distribution by standard renewal theory [31, (XI.3.10)]. On the other hand, if 𝐸 𝑍 = ∞, according to Kesten [43] (or [44, Section 4]), lim sup𝑛→∞ 𝑍 𝑛 /𝐻𝑛−1 = ∞ a.s., or, what amounts to the same, lim sup 𝑍 (𝑅)/𝑅 = ∞ a.s. In particular, it follows that lim 𝑍 (𝑅)/𝑅 = 0 a.s. if and only if 𝐸 𝑍 < ∞. 𝑅→∞
It is important to find the corresponding criterion for convergence in probability, which holds under a much weaker condition. In this respect, the following result due to Rogozin [63] is relevant: (i) the following are equivalent (a) 𝑍 (𝑅)/𝑅 −→ 𝑃 0;
(b) 𝑍 is r.s.;
(c) 𝑈a (𝑥)/𝑥 is s.v.
(2.25)
Note that (c) holds if and only if 𝑢 a is s.v. (see (2.5)). In [63], it is also shown that
6 Under 𝜎 2 = ∞ Griffin and McConnell [35] obtain an analytic condition equivalent to (b) that is quite different in appearance from (2.23), but equivalent to it, which Kesten and Maller proved in (iii) of Remarks to their Theorem 2.1.
16
2 Preliminaries
(ii) the following are equivalent (a) 𝑍 (𝑅)/𝑅 −→ 𝑃 ∞;
(b) 𝑈a (𝑥) is s.v.
(2.26)
(See Theorem A.2.3 for a proof of (i) and (ii).) By (2.21) each of (a) to (c) in (2.25) holds if the r.w. 𝑆 is p.r.s. and each of (a) and (b) of (2.25) holds if 𝑆 is n.r.s. (see Remark 6.1.2 for the latter). In Section 5.2 we shall obtain a reasonably fine sufficient condition for (2.25) to hold that complements the sufficiency of 𝑆 being p.r.s. stated above.
2.5 Overshoot Distributions Under (AS) Suppose that (AS) holds and that 0 < 𝜌 < 1 if 𝛼 = 1 (so that 𝑃0 [𝑆 𝑛 /𝑐 𝑛 ∈ ·] converges to a non-degenerate stable law for appropriate constants 𝑐 𝑛 ; the results below do not depend on the choice of 𝑐 𝑛 ). The distribution of the overshoot 𝑍 (𝑅) is a rather classical result as given in [31] (see (A.16) of Theorem A.2.3), and here we consider the overshoot made by the r.w. 𝑆 when it exits the interval 𝐵(𝑅) = [0, 𝑅]. Let 𝑌 = (𝑌 (𝑡))𝑡 ≥0 be a limit stable process with probabilities 𝑃𝑌𝜉 , 𝜉 ∈ R (i.e., the limit law for 𝑆 ⌊𝑛𝑡 ⌋ /𝑐 𝑛 as 𝑆0 /𝑐 𝑛 → 𝜉). Denote by Λ𝑌𝑟 (𝑟 > 0) the event for 𝑌 defined by Λ𝑌𝑟 = {the first exit of 𝑌 from the interval [0, 𝑟] is on the upper side} . Rogozin [64] established the overshoot law given in (2.27) below. Put 𝑍 𝑌(𝑟) = 𝑌 𝜎𝑌 (𝑟 ,∞) − 𝑟, the overshoot of 𝑌 as 𝑌 crosses the level 𝑟 > 0 from the left to the right of it (the analogue of 𝑍 (𝑅) given in (2.24)). The law of 𝑍 𝑌(𝑟) is obtained by Blumenthal, Getoor and Ray [9], Widom [90], Watanabe [89] and Kesten [40] for the symmetric case, by Port [61] for the spectrally one-sided case and by Rogozin [64] for the general case. We have only to consider the problem in the case when Λ𝑌𝑟 occurs. The result may read as follows: if 0 < 𝛼𝜌 ≤ 1, then for 0 ≤ 𝜉 ≤ 𝑟 and 𝜂 > 0, ∫ ∞ −𝛼𝜌 sin 𝛼𝜌𝜋 𝑡 (𝑡 + 𝑟) −𝛼𝜌ˆ (𝑟 − 𝜉) 𝛼𝜌 𝜉 𝛼𝜌ˆ d𝑡. 𝑃𝑌𝜉 𝑍 𝑌 (𝑟) > 𝜂, Λ𝑌𝑟 = (2.27) 𝜋 𝑡 +𝑟 −𝜉 𝜂 By the functional limit theorem, it follows that as 𝑦/𝑐 𝑛 → 𝜂 > 0 ∫ ∞ −𝛼𝜌 sin 𝛼𝜌𝜋 𝑡 (𝑡 + 1) −𝛼𝜌ˆ (1 − 𝜉) 𝛼𝜌 𝜉 𝛼𝜌ˆ d𝑡, 𝑃 𝑥 [𝑍 (𝑅) > 𝑦, Λ𝑅 ] −→ 𝜋 𝑡+1−𝜉 𝜂
(2.28)
where Λ𝑅 = {𝜎[𝑅 + 1, ∞) < 𝜎𝛺 }. [Note that if 𝛼𝜌 = 1, the RHS vanishes. The above convergence is true also for 𝛼 = 𝜌 = 1, the case excluded in [64]. Indeed we shall obtain somewhat stronger results (see Lemma 5.2.4 and Proposition 7.6.4).]
2.6 Table for Modes of the Overshoot 𝑍 (𝑅) Under (AS)
17
2.6 Table for Modes of the Overshoot 𝒁(𝑹) Under (AS) Let 𝑆0 = 0. The following table7 summarises the modes of 𝑍 (𝑅) valid under (AS), in which 𝜃 𝑍 (𝑥) stands for 𝑃[𝑍 > 𝑥], 𝑅 𝜈 stands for the set of regularly varying functions (at infinity) with index 𝜈, and 𝜁 (𝑠) is a random variable having the density (𝜋 −1 sin 𝜋𝑠) [𝑥 𝑠 (1 + 𝑥)] −1 , 𝑥 > 0 (cf. Theorems A.2.3 and A.3.2 and Lemmas 6.4.8 (for 𝛼 = 1) and A.2.1 (for 𝛼𝜌 = 1)). 𝑍 (𝑅)/𝑅 −→ 𝑃 ∞
𝑍 (𝑅)/𝑅 =⇒ 𝜁 ( 𝛼𝜌)
𝜃 𝑍 ∈ 𝑅0
𝜃 𝑍 ∈ 𝑅−𝛼𝜌
𝑍 (𝑅)/𝑅 −→ 𝑃 0 ∫ · 𝜃 (𝑢) d𝑢 ∈ 𝑅0 0 𝑍
𝜌=0
0 < 𝛼𝜌 < 1
𝛼𝜌 = 1
𝑈a (𝑥) ∼ 1/ℓˆ♯ (𝑥)
𝑈a (𝑥) ∼
𝑥 𝛼𝜌 /ℓ(𝑥)
𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) 0 ≤ 𝑝 ≤ 1, 𝜌 =
𝛼=2
∗
∗
1 0 ¯ ≥ 𝑐 ∗ 𝑥/𝑚(𝑥) 𝑎(𝑥)
for all sufficiently large 𝑥 ≥ 1.
We bring in the following condition to obtain an upper bound of 𝑎(𝑥): ¯ (H)
𝛿 𝐻 := lim inf 𝑡 ↓0
𝛼(𝑡) + |𝛾(𝑡)| > 0. 𝜂(1/𝑡)
Note that 𝛼(𝑡) + |𝛾(𝑡)| ≍ |1 − 𝜓(𝑡)|/𝑡 (0 < 𝑡 ≤ 𝜋). ∗ depending Theorem 3.1.2 (i) If condition (H) holds, then for some constant 𝐶 𝐻 only on 𝛿 𝐻 , ∗ 𝑎(𝑥) ¯ ≤ 𝐶𝐻 𝑥/𝑚(𝑥)
for all sufficiently large 𝑥 ≥ 1.
𝛼(𝑡) + |𝛾(𝑡)| 𝑎(𝑥) ¯ → ∞ (𝑥 → ∞). → 0 ( 𝑡 ↓ 0), then 𝑡 𝑚(1/𝑡) 𝑥/𝑚(𝑥) [The converse is not true; see the end of Section 3.6 for a counter-example.]
(ii) If
According to Theorems 3.1.1 and 3.1.2, if (H) holds, then 𝑎(𝑥) ¯ ≍ 𝑥/𝑚(𝑥) as 𝑥 → ∞ (i.e., the ratio of the two sides is bounded away from zero and infinity), which entails some regularity of 𝑎(𝑥) ¯ like its being almost increasing and of dominated variation, while in general, 𝑎(𝑥) ¯ may behave very irregularly, as will be exhibited by an example in Section 3.6. Theorem 3.1.1 plainly entails the implication ∫ ∞ ∑︁ 𝑥𝜇+ (𝑥) d𝑥 < ∞ 𝑎(𝑥)𝜇 ¯ (3.1) + (𝑥) < ∞ =⇒ 𝑚(𝑥) 1 which bears immediate relevance to the criteria for the summability of the first ascending ladder height∫𝑍. The integrability condition on the RHS of (3.1), implying ∞ 𝑥𝜂+ (𝑥)/𝑚(𝑥) → 0 and 1 𝑐 + (𝑦)𝑑 (−1/𝑚(𝑦)) < ∞, and hence 𝑚 + (𝑥)/𝑚(𝑥) → 0, is equivalent to the necessary and sufficient condition for 𝐸 𝑍 < ∞ due to Chow [15] ((3.1) is verified in [77] and constitutes one of the key observations there that lead to the following equivalences 1 One may also use another expression 1 −Í 𝜓 (𝑡) = [ 𝛼 ◦ (𝑡) + 𝑖𝛾 ◦ (𝑡) ] sin 𝑡, where 𝛼 ◦ (𝑡) = Í ∞ 1 1 𝛾 ◦ (𝑡) = ∞ 0 𝜇 (𝑛) [sin 𝑡 𝑛 + tan 2 𝑡 cos 𝑡 𝑛], 0 [𝜇− (𝑛) − 𝜇+ (𝑡) ] [cos 𝑡 𝑛 − tan 2 𝑡 sin 𝑡 𝑛], which, though preferable for some problems, is not applied in this treatise.
3.1 Statements of Results
∑︁
21
∫ 𝑎(𝑥)𝜇 ¯ + (𝑥) < ∞ ⇐⇒ 𝐸 𝑍 < ∞ ⇐⇒ 1
∞
𝑥𝜇+ (𝑥) d𝑥 < ∞ 𝑚 − (𝑥)
(3.2)
without recourse to Chow’s result (see Section 4 (especially Lemma 4.1) of [77]; cf. also [84, Remark 2.3(b)]). By the same token, it follows from the dual of (2.13) (see also (3.6)) that ∑︁ 2 [ 𝑎(𝑥)] ¯ 𝜇+ (𝑥) < ∞ ⇐⇒ lim sup 𝑎(−𝑥) < ∞. (3.3) For condition (H) to hold each of the following conditions (3.4) to (3.8) is sufficient: 1 1 𝜇− (𝑥) 𝜇+ (𝑥) < or lim sup < ; (3.4) either lim sup 𝜇(𝑥) 2 𝜇(𝑥) 2 𝑥→∞ 𝑥→∞ lim sup 𝑥→∞
lim
𝑥→∞
𝑥𝜂+ (𝑥) 𝑥𝜂− (𝑥) 𝑥𝜂(𝑥) < 1 or lim sup < 1 or lim sup < 1; 𝑚 + (𝑥) 𝑚 (𝑥) 𝑥→∞ 𝑥→∞ 𝑚(𝑥) −
(3.5)
𝑚 + (𝑥) 1 converges as 𝑥 → ∞ to a number ≠ ; 𝑚(𝑥) 2
(3.6)
𝑥𝜂(𝑥) 1 1 𝜂− (𝑥) 𝜂+ (𝑥) = 1 and either lim sup < or lim sup < ; 𝑚(𝑥) 𝜂(𝑥) 2 𝜂(𝑥) 2 𝑥→∞ 𝑥→∞
(3.7)
lim sup 𝑥→∞
𝑥 [𝜂+ (𝑥) ∧ 𝜂− (𝑥)] 1 ≤ . 𝑚 + (𝑥) ∨ 𝑚 − (𝑥) 9
(3.8)
That (3.4) is sufficient for (H) follows from lim inf 𝑡 ↓0 𝛽(𝑡)/𝜂(1/𝑡) > 0 (cf. (3.12b)) since lim inf 𝑡 ↓0 |𝛾(𝑡)|/𝛽(𝑡) > 0 under (3.4). The sufficiency of the conditions (3.5) and (3.8) will be verified in Section 3.2 (Lemmas 3.2.9 and 3.2.5 for (3.5) and Lemma 3.2.10 for (3.8)); as for (3.6) and (3.7), see Remark 3.3.7 and the comment given immediately after Theorem 4.1.1, respectively. In (3.7), the first condition is equivalent to the slow variation of 𝜂, which together with the second one implies either 𝜂+ or 𝜂− is s.v. (so that 𝑆 is r.s. under (3.7); see Theorem 4.1.1 for consequences on 𝑎(𝑥)). We do not know whether (H) is necessary for 𝑎(𝑥)𝑚(𝑥)/𝑥 ¯ to be bounded. The next result entails that the necessity of (H) follows when the distribution of 𝑋 is nearly symmetric in the sense that the limit in (3.6) equals 1/2, namely 𝑚 + (𝑥)/𝑚(𝑥) → 1/2
as 𝑥 → ∞.
(3.9)
Theorem 3.1.3 Suppose (3.9) holds. Then (i) lim𝑡 ↓0 𝛾(𝑡)/[𝑡 𝑚(1/𝑡)] = 0; and (ii) each of the three inequalities in the disjunction (3.5) is necessary (as well as sufficient) in order that lim sup 𝑥→∞ 𝑎(𝑥)𝑚(𝑥)/𝑥 ¯ < ∞. Corollary 3.1.4 Under (3.9), lim sup 𝑎(𝑥)𝑚(𝑥)/𝑥 ¯ < ∞ if and only if (H) holds.
22
3 Bounds of the Potential Function
Proof If $\limsup \bar a(x)m(x)/x < \infty$, then by Theorem 3.1.3(ii) $\limsup x\eta(x)/m(x) < 1$, or equivalently, $\liminf c(x)/m(x) > 0$, which implies $\liminf \alpha(t)/\eta(1/t) > 0$, as we shall see (Lemma 3.2.5), whence (H) holds. The converse follows from Theorem 3.1.2. □

In practice, in most cases, Theorem 3.1.3 together with (3.4) to (3.8) provides the criterion, expressed in terms of $\mu_\pm$, $m_\pm$ and/or $\eta_\pm$, to judge whether the condition (H) holds. When $F$ is attracted to a stable law, the result is simplified so that $\lim \bar a(x)m(x)/x = \infty$ if $x\eta(x)/m(x)\to1$ and $m_+(x)/m(x)\to1/2$; otherwise $\bar a(x) \sim Cx/m(x)$ (see Proposition 4.2.1(i, iv) and Remark 4.2.2(ii)).

Proposition 3.1.5 If (H) holds, then for $0 < x < 2R$,
  $\bigl|\,1 - \bar a(x)/\bar a(R)\,\bigr| \le C\,|1 - x/R|^{1/4}$
for some constant $C$ that depends only on $\delta_H$.

From Proposition 3.1.5 it easily follows that $\bar a(x)/\bar a(R) \to 1$ as $x/R\to1$ if (H) holds, which fact, not trivial, is what we need for some of our applications. [Proposition 4.3.1 in Section 4.3 entails that the exponent $1/4$ can be replaced by $1$ if $\liminf c(x)/m(x) > 0$, but does not ensure $\bar a(x)/\bar a(R)\to1$ otherwise.]

In the next result, as well as the applications in Chapter 5, we consider the r.w. $S$ under the condition $m_+(x)/m(x)\to0$ as $x\to+\infty$, which for simplicity we abbreviate as $m_+/m\to0$. (Similar conventions will also apply to $\eta_+/\eta$, $c/m$, etc.)

Theorem 3.1.6 Suppose $m_+/m\to0$. Then (H) holds with $\delta_H = 1$, and
  $a(-x)/a(x) \to 0$ as $x\to+\infty$;
in particular $a(x) \sim 2\bar a(x) \asymp x/m(x)$.

In the proof of Theorem 3.1.6 we shall see that if $m_+/m\to0$ and
  $b_\pm(x) = \dfrac1\pi\int_0^\pi \dfrac{t\,\beta_\pm(t)\sin xt}{|1-\psi(t)|^2}\,dt,$
then $\bar a(x) \sim b_-(x)$ and $b_+(x) = o(b_-(x))$, which together indeed imply that $a(-x)/a(x)\to0$ since $\tfrac12[a(x) - a(-x)] = b_-(x) - b_+(x)$.

Applications of the above theorems are made mostly in the case $a(-x)/a(x)\to0$ in this treatise. In general, however, it is significant to consider the r.w. $S$ under the condition
  $\delta_* := \liminf_{x\to\infty} a(-x)/\bar a(x) > 0$ and/or $\delta^* := \limsup_{x\to\infty} a(-x)/\bar a(x) < 1$,
but we do not know when this condition holds except for some restricted classes of r.w.'s (as studied in Chapter 4; see also Section 7.7 for $\delta^* < 1$) and for the case $\delta_* = \delta^* = 1$ dealt with in the following
Proposition 3.1.7 If (H) holds and $m_+/m \to \tfrac12$, then $a(\pm x)/\bar a(x) \to 1$.

In the next section we derive some fundamental facts about $a(x)$ and the functionals introduced above, which incidentally yield (i) of Theorem 3.1.2 (see Lemmas 3.2.3 and 3.2.6) and the sufficiency for (H) of (3.5) and (3.8) (Lemmas 3.2.5, 3.2.9 and 3.2.10); also the lower bound of Theorem 3.1.1 is verified under a certain side condition. The proof of Theorem 3.1.1 is more involved and given in Section 3.3, in which we also prove Theorems 3.1.2(ii) and 3.1.3. Proposition 3.1.5 is proved in Section 3.4, and Theorem 3.1.6 and Proposition 3.1.7 are proved in Section 3.5. In the last section we present an example of $F$ for which $\bar a(x)$ is not almost increasing and $c(x)/m(x)$ oscillates between $\varepsilon$ and $\delta$ for any $0 < \varepsilon < \delta < 1$.
3.2 Auxiliary Results

In this section we first present some known or easily derived facts and then give several lemmas, in particular, Lemmas 3.2.3 and 3.2.7, which together assert that $\bar a(x) \asymp x/m(x)$ under the last inequality in (3.5) and whose proofs involve typical arguments that are implicitly used in Sections 3.3 to 3.5. As in [77], we bring in the following functionals of $F$ in addition to those introduced in Section 3.1:
  $\tilde c(x) = \dfrac1x\int_0^x y^2\mu(y)\,dy, \qquad \tilde m(x) = \dfrac2x\int_0^x y\,\eta(y)\,dy,$
  $h_\varepsilon(x) = \int_0^{\varepsilon x} y\,[\mu(y) - \mu(\pi x + y)]\,dy \quad (0 < \varepsilon \le \pi/2);$
also $\tilde c_\pm(x)$ and $\tilde m_\pm(x)$ are defined with $\mu_\pm$ in place of $\mu$, so that $\tilde c(x) = \tilde c_-(x) + \tilde c_+(x)$, etc. Since $\sigma^2 = \infty$, $c(x)$ and $h_\varepsilon(x)$ tend to infinity as $x\to\infty$; $\eta$, $c$ and $h_\varepsilon$ are monotone and $\Im\psi(t) = -t\gamma(t)$. Here and throughout the rest of this section and Sections 3.3 to 3.5, $x \ge 0$ and $t \ge 0$. We shall be concerned with the behaviour of these functions as $x\to\infty$ or $t\downarrow0$ and omit "$x\to\infty$" or "$t\downarrow0$" when it is obvious.

As noted previously, the function $m$ admits the decomposition $m(x) = x\eta(x) + c(x)$. The function $m$ is relatively tractable: it is increasing and concave, hence subadditive, and for any $k > 1$, $m(kx) \le km(x)$ and $m(x)/k \le m(x/k)$. However $c$, though increasing, may vary quite differently. The ratio $c(x)/m(x)$ may converge to $0$ or to $1$ as $x\to\infty$ depending on $\mu$ and possibly oscillates asymptotically between $0$ and $1$; and
  $c(kx) = k^2\int_0^x \mu(ku)\,u\,du \le k^2 c(x) \quad (k > 1),$
where the factor $k^2$ cannot be replaced by $o(k^2)$ for the upper bound to be valid (cf. Section 3.6). It also follows that $\tilde m$ is increasing and concave and that $\tilde m(x) = x\eta(x) + \tilde c(x)$. For $x > 0$, $c(x) = \tilde c(x) + x^{-1}\int_0^x c(y)\,dy$, in particular $\tilde m(x) < m(x)$.

It holds that
  $\bar a(x) = \dfrac1{2\pi}\int_{-\pi}^{\pi} \Re\dfrac{1}{1-\psi(t)}\,(1-\cos xt)\,dt$
(cf. [71, Eq(28.2)]). Recalling $1-\psi(t) = t\alpha(t) + it\gamma(t)$, we have
  $\dfrac{1}{1-\psi(t)} = \dfrac{\alpha(t) - i\gamma(t)}{\alpha^2(t)+\gamma^2(t)}\cdot\dfrac1t.$
Hence
  $\bar a(x) = \dfrac1\pi\int_0^\pi \dfrac{\alpha(t)}{[\alpha^2(t)+\gamma^2(t)]\,t}\,(1-\cos xt)\,dt.$   (3.10)
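The functional identities collected above ($m = x\eta + c$, $\tilde m = x\eta + \tilde c$, $c = \tilde c + x^{-1}\int_0^x c$) are easy to check numerically. The following minimal Python sketch is an illustration added here, not part of the original text; it assumes the Section 3.1 definitions $\eta(x) = \int_x^\infty \mu(y)\,dy$ and $c(x) = \int_0^x y\mu(y)\,dy$, and uses one concrete infinite-variance tail $\mu$.

# Illustration only: numerical check of the functional identities above
# for mu(y) = (1+y)^(-3/2); eta and c are as assumed in the lead-in.
import numpy as np
from scipy.integrate import quad

mu   = lambda y: (1.0 + y) ** -1.5
eta  = lambda x: 2.0 * (1.0 + x) ** -0.5                         # integral of mu over (x, inf)
c    = lambda x: quad(lambda y: y * mu(y), 0, x)[0]
c_t  = lambda x: quad(lambda y: y * y * mu(y), 0, x)[0] / x      # tilde c
m    = lambda x: x * eta(x) + c(x)
m_t  = lambda x: 2.0 * quad(lambda y: y * eta(y), 0, x)[0] / x   # tilde m

for x in (5.0, 50.0, 500.0):
    ok1 = np.isclose(m_t(x), x * eta(x) + c_t(x))                # tilde m = x*eta + tilde c
    ok2 = np.isclose(c(x), c_t(x) + quad(c, 0, x)[0] / x)        # c = tilde c + x^{-1} int_0^x c
    print(x, ok1, ok2, m_t(x) < m(x))                            # expect: True True True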
Note that $\alpha_\pm(t)$ and $\beta_\pm(t)$ are all positive (for $t>0$); by Fatou's lemma
  $\liminf_{t\downarrow0} \alpha(t)/t = \liminf_{t\downarrow0}\int_0^\infty \dfrac{1-\cos tx}{t^2}\,d(-\mu(x)) \ge \tfrac12\sigma^2,$
so that $\alpha(t)/t \to \infty$ under the present setting. To find the asymptotics of $\bar a$ we need to know those of $\alpha(t)$ and $\beta(t)$ as $t\downarrow0$ (which entail those of $\alpha_\pm$ and $\beta_\pm$ as functionals of $\mu_\pm$). We present some of them in the following two lemmas. Although the arguments made therein are essentially the same as in [77], we provide complete proofs since some constants in [77] are wrong or inadequate for the present need and must be rectified – the values of the constants involved are not significant in [77], but some of them turn out to be of crucial importance for our proof of Theorem 3.1.1.

Lemma 3.2.1 For $0 < \varepsilon \le \pi/2$ and $0 < t \le \pi$,
  $[\varepsilon^{-1}\sin\varepsilon]\,h_\varepsilon(1/t) < \alpha(t)/t < [5c(1/t)] \wedge m(1/t).$   (3.11)
Proof By monotonicity of $\mu$ it follows that
  $\alpha(t) > \Bigl(\int_0^{\varepsilon/t} + \int_{\pi/t}^{(\pi+\varepsilon)/t}\Bigr)\mu(z)\sin tz\,dz = \int_0^{\varepsilon/t} [\mu(z) - \mu(\pi/t + z)]\sin tz\,dz,$
which by $\sin tz \ge \varepsilon^{-1}(\sin\varepsilon)\,tz$ $(tz \le \varepsilon)$ shows the first inequality of the lemma. Splitting the defining integral at $1/t$ we see $\alpha(t) < tc(1/t) + \eta(1/t) = tm(1/t)$. Similarly we see
  $\alpha(t) < \int_0^{\pi/t}\mu(z)\sin tz\,dz \le tc(\tfrac12\pi/t) + \mu(1/t)\int_{\pi/2t}^{\pi/t}\sin tz\,dz \le \tfrac14\pi^2\,tc(1/t) + t^{-1}\mu(1/t) < 5tc(1/t).$
Thus the second inequality follows. □
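A small numerical sanity check of (3.11) for the same kind of tail can be carried out as follows; this is an illustration added here, not in the original, assuming $\alpha(t) = \int_0^\infty \mu(z)\sin tz\,dz$ and the definition of $h_\varepsilon$ given at the beginning of this section.

# Illustration only: check the two-sided bound (3.11) numerically.
import numpy as np
from scipy.integrate import quad

mu    = lambda y: (1.0 + y) ** -1.5
eta   = lambda x: 2.0 * (1.0 + x) ** -0.5
c     = lambda x: quad(lambda y: y * mu(y), 0, x)[0]
m     = lambda x: x * eta(x) + c(x)
alpha = lambda t: quad(mu, 0, np.inf, weight='sin', wvar=t)[0]   # oscillatory (QAWF) rule
h     = lambda eps, x: quad(lambda y: y * (mu(y) - mu(np.pi * x + y)), 0, eps * x)[0]

eps = 1.0
for t in (0.2, 0.05, 0.01):
    lo = (np.sin(eps) / eps) * h(eps, 1.0 / t)
    hi = min(5.0 * c(1.0 / t), m(1.0 / t))
    print(t, lo < alpha(t) / t < hi)                              # expect: True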
Lemma 3.2.2 For $0 < t \le \pi$,
  (a)  $\tfrac12\tilde m(1/t)\,t \le \beta(t) \le 2\tilde m(1/t)\,t$,
  (a′) $|\beta(t) - \eta(1/t)| \le 4tc(1/t)$,   (3.12)
  (b)  $\tfrac13 m(1/t)\,t \le \alpha(t) + \beta(t) \le 3m(1/t)\,t$.

Proof Integrating by parts and using the inequality $\sin u \ge (2/\pi)u$ $(u < \tfrac12\pi)$ in turn we see
  $\beta(t) = t\int_0^\infty \eta(x)\sin tx\,dx \ge \dfrac{2t^2}{\pi}\int_0^{\pi/2t}\eta(x)\,x\,dx + t\int_{\pi/2t}^\infty \eta(x)\sin tx\,dx.$
Observing that the first term of the rightmost member equals $\tfrac12 t\,\tilde m(\pi/2t) \ge \tfrac12 t\,\tilde m(1/t)$ and the second one equals $-\int_{\pi/2t}^\infty \mu(x)\cos tx\,dx > 0$, we obtain the left-hand inequality of (3.12a). As for the right-hand inequality of (3.12a), we split the defining integral of $\beta$ at $1/t$ to see that $\beta(t) \le \tfrac12 t\,\tilde c(1/t) + 2\eta(1/t) \le 2\tilde m(1/t)\,t$. The proof of (a′) is similar (rather simpler): one has only to note that $\int_{1/t}^\infty \mu(x)\cos tx\,dx$ is less than $t^{-1}\mu(1/t)(1-\sin 1)$ and larger than $-2t^{-1}\mu(\pi/2t)$, and that $t^{-1}\mu(1/t) \le 2tc(1/t)$.

The upper bound of (3.12b) is immediate from those in (3.11) and (a) proved above since $\tilde m(x) \le m(x)$. To verify the lower bound, use (3.11) and the inequalities $h_1(x) \ge c(x) - \tfrac12 x^2\mu(x)$ and $\sin 1 > 5/6$ to obtain
  $[\alpha(t)+\beta(t)]/t > (5/6)\bigl[c(1/t) - \mu(1/t)/2t^2\bigr] + 2^{-1}\bigl[\tilde c(1/t) + \eta(1/t)/t\bigr].$
Since $x^2\mu(x) \le [2c(x)] \wedge [3\tilde c(x)]$ it follows that $(5/6)\,\mu(1/t)/2t^2 \le \tfrac12 c(1/t) + \tfrac12\tilde c(1/t)$ and hence
  $\alpha(t) + \beta(t) > \bigl[\tfrac13 c(1/t) + \tfrac12\eta(1/t)/t\bigr]\,t > \tfrac13 m(1/t)\,t,$
as desired. □

For $t > 0$ define
  $f_m(t) = \dfrac{1}{t^2 m^2(1/t)} \quad\text{and}\quad f^\circ(t) = \dfrac{1}{\alpha^2(t)+\gamma^2(t)}.$   (3.13)
Observe that
  $\Bigl(\dfrac{x}{m(x)}\Bigr)' = \dfrac{c(x)}{m^2(x)},$   (3.14)
which, in particular, entails $x/m(x)$ is increasing and $f_m(t)$ is decreasing.

Lemma 3.2.3 For some universal constant $C$
  $\int_0^\pi \dfrac{f_m(t)\alpha(t)}{t}\,(1-\cos xt)\,dt \le C\,\dfrac{x}{m(x)},$   (3.15)
in particular if $\liminf f_m(t)[\alpha^2(t)+\gamma^2(t)] > \delta$ for some $\delta > 0$, then for all sufficiently large $x$, $\bar a(x) < C[\pi\delta]^{-1}\,x/m(x)$.
Proof Break the integral on the LHS of (3.15) into two parts
  $J(x) = \int_0^{\pi/2x} \dfrac{f_m(t)\alpha(t)}{t}(1-\cos xt)\,dt \quad\text{and}\quad K(x) = \int_{\pi/2x}^{\pi} \dfrac{f_m(t)\alpha(t)}{t}(1-\cos xt)\,dt.$
Using $\alpha(t) \le 5c(1/t)t$ we then observe
  $0 \le \dfrac{K(x)}{5} \le \int_{\pi/2x}^{\pi} \dfrac{2c(1/t)}{t^2m^2(1/t)}\,dt = \int_{1/\pi}^{2x/\pi} \dfrac{2c(y)}{m^2(y)}\,dy = \Bigl[\dfrac{2y}{m(y)}\Bigr]_{y=1/\pi}^{2x/\pi} < \dfrac{2x}{m(x)}.$   (3.16)
Similarly
  $\dfrac{J(x)}{5} \le \dfrac{x^2}{2}\int_0^{\pi/2x} f_m(t)c(1/t)\,t^2\,dt = \dfrac{x^2}{2}\int_{2x/\pi}^{\infty} \dfrac{c(y)}{y^2m^2(y)}\,dy$
and, observing
  $\int_x^\infty \dfrac{c(y)}{y^2m^2(y)}\,dy \le \int_x^\infty \dfrac{dy}{y^2m(y)} \le \dfrac{1}{xm(x)},$   (3.17)
we have $J(x) \le \tfrac52\,x/m(x)$, finishing the proof. □
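As a rough illustration (again not from the book), the boundedness asserted in (3.15) can be observed numerically for the sample tail used above; the quadrature below is deliberately crude, and the printed ratios are only meant to stay of the same order as $x$ grows.

# Illustration only: crude evaluation of the LHS of (3.15) against x/m(x).
import numpy as np
from scipy.integrate import quad

mu    = lambda y: (1.0 + y) ** -1.5
eta   = lambda x: 2.0 * (1.0 + x) ** -0.5
c     = lambda x: quad(lambda y: y * mu(y), 0, x)[0]
m     = lambda x: x * eta(x) + c(x)
alpha = lambda t: quad(mu, 0, np.inf, weight='sin', wvar=t)[0]
f_m   = lambda t: 1.0 / (t * m(1.0 / t)) ** 2

def lhs_315(x, n=3000):
    ts = np.linspace(1e-4, np.pi, n)
    vals = np.array([f_m(t) * alpha(t) * (1.0 - np.cos(x * t)) / t for t in ts])
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts))    # trapezoid rule

for x in (20.0, 100.0, 400.0):
    print(x, lhs_315(x) / (x / m(x)))                            # ratios stay O(1)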
If there exists a constant $B_0 > 0$ such that for all $t$ small enough,
  $\alpha(t) \ge B_0\,c(1/t)\,t,$   (3.18)
then the estimation of $\bar a$ becomes much easier. Unfortunately, condition (3.18) may fail to hold in general: in fact, the ratio $\alpha(t)/[c(1/t)t]$ may oscillate between $1-\varepsilon$ and $\varepsilon$ as $t\downarrow0$ for any $0 < \varepsilon < 1$ (cf. Section 3.6). The following lemma will be used crucially to cope with such a situation.

Lemma 3.2.4 Let $0 \le \varepsilon \le \pi/2$. Then for all $x > 0$,
  $h_\varepsilon(x) \ge c(\varepsilon x) - (2\pi)^{-1}\varepsilon^2 x\,[\eta(\varepsilon x) - \eta(\pi x + \varepsilon x)].$

Proof On writing $h_\varepsilon(x) = c(\varepsilon x) - \int_0^{\varepsilon x} u\,\mu(\pi x + u)\,du$, integration by parts yields
  $h_\varepsilon(x) - c(\varepsilon x) = -\int_0^{\varepsilon x} [\eta(\pi x + u) - \eta(\pi x + \varepsilon x)]\,du.$
By monotonicity and convexity of $\eta$ it follows that if $0 < u \le \varepsilon x$,
  $0 \le \eta(\pi x + u) - \eta(\pi x + \varepsilon x) \le \dfrac{\varepsilon x - u}{\pi x}\,[\eta(\varepsilon x) - \eta(\pi x + \varepsilon x)],$
and substitution leads to the inequality of the lemma. □

Lemma 3.2.5 Let $0 < \delta \le 1$, $0 < t < \pi$ and put $s = [1 \wedge (\delta\pi)]\,t$. Then
(i) if $c(1/t) \ge \delta\eta(1/t)/t$, then $\alpha(s) > \lambda(\pi^{-1}\wedge\delta)^2\eta(1/s)$ with $\lambda := \tfrac12\pi\sin 1 > 1$; in particular if $\delta := \liminf c(x)/x\eta(x) > 0$, then $\alpha(t) > (\pi^{-1}\wedge\delta)^2\eta(1/t)$ for all sufficiently small $t > 0$ – so that (H) holds;
(ii) if $c(1/t) \ge \delta m(1/t)$, then $\alpha(s) > \lambda(\pi^{-1}\wedge\delta)^2\,s\,m(1/s)$ $(\lambda = \tfrac12\pi\sin 1)$.

Proof Suppose $\delta\pi \le 1$ and take $\varepsilon = \delta\pi$ in Lemma 3.2.4. Then $\varepsilon^{-1}\sin\varepsilon > \sin 1 = 2\lambda/\pi$, while the premise of the first statement of the lemma implies
  $h_\varepsilon(1/\varepsilon t) > c(1/t) - \dfrac{\varepsilon\eta(1/t)}{2\pi t} \ge \dfrac{\delta\eta(1/t)}{2t} \ge \dfrac{\delta\eta(1/\varepsilon t)}{2t} = \dfrac{\pi\delta^2\eta(1/\delta\pi t)}{2\delta\pi t},$
whence by Lemma 3.2.1 $\alpha(s)/s \ge [\varepsilon^{-1}\sin\varepsilon]\,h_\varepsilon(1/s) > \lambda\delta^2\eta(1/s)/s$ for $s = \delta\pi t$. Similarly, if $\delta\pi > 1$, then taking $\varepsilon = 1$ we have
  $\alpha(t)/t > (\sin 1)\,(\delta - (2\pi)^{-1})\,\eta(1/t)/t > \lambda\pi^{-2}\eta(1/t)/t,$
showing (i). The same proof applies to (ii). □

Lemma 3.2.6 If (H) holds, then
  (H′)  $\alpha(t) + |\gamma(t)| > [\pi^{-2}\wedge\tfrac13\delta_H]\,t\,m(1/t)$  for all sufficiently small $t > 0$,
and
  $\int_0^1 \dfrac{\beta(t)}{\alpha^2(t)+\gamma^2(t)}\,dt < \infty.$   (3.19)
[By Lemma 3.2.2, (H′) ensures that $f^\circ(t) \asymp f_m(t)$.]

Proof If $\tfrac12 m(1/t) \le c(1/t)$, then by Lemma 3.2.5(ii), $\alpha(t) > \tfrac14\,t\,m(1/t)$, entailing (H′) (for this $t$), while if $\tfrac12 m(1/t) > c(1/t)$, then $\eta(1/t)/t > 2^{-1}m(1/t)$. Hence (H) implies (H′). By Lemma 3.2.2, (H′) implies that the integral in (3.19) is at most a constant multiple of
  $\int_0^1 \dfrac{\tilde m(1/t)}{t\,m^2(1/t)}\,dt = \int_1^\infty \dfrac{\tilde m(x)}{x\,m^2(x)}\,dx = \int_1^\infty \dfrac{2\int_0^x y\eta(y)\,dy}{x^2m^2(x)}\,dx.$
Interchanging the order of integration, one deduces that the last integral above equals
  $2\int_0^1 y\eta(y)\,dy \int_1^\infty \dfrac{dx}{x^2m^2(x)} + \int_1^\infty 2y\eta(y)\,dy \int_y^\infty \dfrac{dx}{x^2m^2(x)} < \infty,$
the finiteness following since $\int_y^\infty dx/[x^2m^2(x)] \le 1/[y\,m^2(y)]$ and $\int_1^\infty \eta(y)\,dy/m^2(y) \le 1/m(1)$. □

Lemma 3.2.7 If $\delta := \liminf c(x)/m(x) > 0$, then for some constant $C > 0$ that depends only on $\delta$,
  $C^{-1}\,x/m(x) \le \bar a(x) \le C\,x/m(x)$  for all $x$ large enough.
Proof By Lemmas 3.2.5(ii) and 3.2.6 condition (H′) is satisfied, provided $\delta > 0$. Hence the upper bound follows from Lemma 3.2.3. Although the lower bound follows from Theorem 3.1.1, which will be shown independently of Lemma 3.2.7 in the next section, here we provide a direct proof. Let $K(x)$ be as in the proof of Lemma 3.2.3. By Lemma 3.2.5(ii) we may suppose that $\alpha(t)/t \ge B_1 c(1/t)$ with a constant $B_1 > 0$. Since both $c(1/t)$ and $f_m(t)$ are decreasing and hence so is their product, we see that
  $\dfrac{K(x)}{B_1} \ge \int_{\pi/2x}^{\pi} f_m(t)\,c(1/t)\,(1-\cos xt)\,dt \ge \int_{\pi/2x}^{\pi} \dfrac{c(1/t)}{2\,t^2m^2(1/t)}\,dt,$
from which we deduce, as in (3.16), that
  $\dfrac{K(x)}{B_1} \ge \Bigl[\dfrac{y}{2m(y)}\Bigr]_{y=1/\pi}^{2x/\pi} \ge \dfrac1\pi\cdot\dfrac{x}{m(x)} - \dfrac{1/\pi}{m(1/\pi)}.$   (3.20)
Thus the desired lower bound obtains. □
Lemma 3.2.8 Suppose that $0 = \liminf c(x)/m(x) < \limsup c(x)/m(x)$. Then for any $\varepsilon > 0$ small enough there exists an unbounded sequence $x_n > 0$ such that
  $c(x_n) = \varepsilon m(x_n)$  and  $\alpha(t)/t > 2^{-1}\varepsilon^2 m(x_n)$  for $0 < t \le 1/x_n$.

Proof Put $\lambda(x) = c(x)/m(x)$ and $\delta = \tfrac12\limsup\lambda(x)$. Let $0 < \varepsilon < \delta^2$. Then there exist two sequences $x_n$ and $x_n'$ such that
  $x_n \to \infty$, $x_n' < x_n$, $\lambda(x_n) = \varepsilon < \lambda(x)$ for $x_n' < x < x_n$ and $\delta = \lambda(x_n')$,   (3.21)
provided $\liminf\lambda(x) = 0$. Observing $x_n'/x_n \le m(x_n')/m(x_n) < \lambda(x_n)/\lambda(x_n') = \varepsilon/\delta < \delta$, we see that $1/x_n < \delta/x_n'$ and then by using (3.21),
  $\dfrac{\lambda(x_n)}{x_n} < \dfrac{1}{x_n} < \dfrac{\lambda(x_n')}{x_n'}.$
Hence the intermediate value theorem ensures that there exists a solution of the equation $\lambda(x) = x/x_n$ in the interval $x_n' < x < x_n$. Let $y_n$ be the largest solution and
put $\varepsilon_n = y_n/x_n$. Then
  $\varepsilon < \lambda(y_n) = c(y_n)/m(y_n) = \varepsilon_n < 1$  and  $y_n = \varepsilon_n x_n.$
Hence by Lemma 3.2.4
  $h_{\varepsilon_n}(x_n) \ge c(y_n) - \dfrac{\varepsilon_n}{2\pi}\,m(y_n) = \dfrac{2\pi-1}{2\pi}\,\varepsilon_n\,m(\varepsilon_n x_n) > \dfrac56\,\varepsilon^2 m(x_n).$
Since $h_\varepsilon$ is non-decreasing, we have for $0 < t \le 1/x_n$,
  $\alpha(t)/t \ge (\sin 1)\,h_{\varepsilon_n}(1/t) > \dfrac56\,h_{\varepsilon_n}(x_n) \ge 2^{-1}\varepsilon^2 m(x_n).$   (3.22)
Thus the lemma is verified. □

Lemma 3.2.9 For any $0 < \delta \le 1/\pi$ and $0 < t < \pi$, one has the implication
  $c_+(1/t)/m_+(1/t) \ge \delta \;\Longrightarrow\; \alpha_+(s) + |\gamma(s)| \ge 3^{-1}\delta^2\beta(s)$  for $s = \delta\pi t$.   (3.23)
In particular, if $\liminf c_+(x)/m_+(x) > 0$, then (H) holds.

Proof For any positive numbers $\beta_+$, $\beta_-$ and $\delta < 2$, we have
  $\delta\beta_+ + |\beta_+ - \beta_-| \ge \tfrac12\,\delta(\beta_+ + \beta_-).$   (3.24)
If the LHS inequality of (3.23) holds, Lemma 3.2.5(ii) applied with $\mu_+$ in place of $\mu$ shows that for $s = \delta\pi t$, $\alpha_+(s)/s \ge \tfrac12(5/6)\pi\delta^2 m_+(1/s)$. In conjunction with Lemma 3.2.2(a) this entails $\alpha_+(s) \ge \tfrac23\delta^2\beta_+(s)$, so that, by (3.24), the RHS inequality of (3.23) follows. The second assertion is immediate from (3.23) given Lemma 3.2.2(b). □

Lemma 3.2.10 For (H) to hold, it is sufficient that
  $\limsup \dfrac{x[\eta_+(x)\wedge\eta_-(x)]}{m_+(x)\vee m_-(x)} < \dfrac14\Bigl(1-\dfrac1\pi\Bigr)^2.$   (3.25)
Proof We apply Lemma 3.2.9 with $\delta = 1/\pi$, so that $s = t$ in (3.23). Write $x = 1/s$. Then, using the identity $m_\pm(x) = c_\pm(x) + x\eta_\pm(x)$, we see that, with $\delta_* = 1/(\pi-1)$,
  (∗)  if either $c_+(x) \ge \delta_* x\eta_+(x)$ or $c_-(x) \ge \delta_* x\eta_-(x)$, then $\alpha(s) + |\gamma(s)| \ge \tfrac13\pi^{-2}\beta(s)$.
Put
  $\lambda(x) := \dfrac{\tilde m_+(x)\wedge\tilde m_-(x)}{\tilde m_+(x)\vee\tilde m_-(x)}, \qquad \omega(s) = \dfrac{\beta_+(s)\wedge\beta_-(s)}{\beta_+(s)\vee\beta_-(s)}.$
Then applying Lemma 3.2.2(a) to obtain $\omega(s) \le 4\lambda(x)$, we infer that
  $|\gamma(s)| = [1-\omega(s)]\,(\beta_+(s)\vee\beta_-(s)) \ge 2^{-1}[1-4\lambda(x)]\,\beta(s).$   (3.26)
Suppose $c_+(x) < \delta_* x\eta_+(x)$ and $c_-(x) < \delta_* x\eta_-(x)$, complementarily to the case of (∗). Then $m_\pm(x)/(1+\delta_*) < x\eta_\pm(x) < \tilde m_\pm(x)$, so that the ratio under the lim sup on the LHS of (3.25) is larger than $(1+\delta_*)^{-2}\lambda(x)$. It therefore follows that (3.25) implies $\limsup\lambda(x) < 1/4$. Hence $|\gamma(s)| > \varepsilon\beta(s)$ with some $\varepsilon > 0$ for all sufficiently small $s > 0$. Combined with (∗) this shows (H) to be valid, since $\beta(s) > \eta(1/s)$. □

The following lemma is used in order to handle the oscillating part of the integrals defining $\alpha_\pm(t)$ and/or $\beta_\pm(t)$ in Sections 3.4, 3.5 and 4.3 (see (3.54)).

Lemma 3.2.11 Let $0 < s < t \le \pi$. Then
  $|\alpha(t)-\alpha(s)| \vee |\beta(t)-\beta(s)| \le [(t+7s)\,c(1/s)] \wedge [5(t-s)\,m(1/s)].$
If (H) holds, then by Lemma 3.2.6 the second bound entails that for some constant $C$ depending only on $\delta_H$,
  $|f^\circ(t) - f^\circ(s)| \le C\,[(t-s)/t]\,f^\circ(t) \qquad (s < t < 2s).$   (3.27)

Proof By definition $\beta(s) - \beta(t) = \int_0^\infty \mu(z)(\cos tz - \cos sz)\,dz$, and we deduce
  $\Bigl|\int_0^{1/s} \mu(z)(\cos tz - \cos sz)\,dz\Bigr| \le (t-s)\int_0^{1/s} z\mu(z)\,dz = (t-s)\,c(1/s),$
and
  $\Bigl|\int_{1/s}^{\infty} \mu(z)(\cos tz - \cos sz)\,dz\Bigr| \le \Bigl(\dfrac2s + \dfrac2t\Bigr)\mu(1/s) \le 8s\cdot c(1/s).$   (3.28)
Evidently we have the corresponding bound for $\alpha(t)-\alpha(s)$, hence the first bound of the lemma. For the second one, substituting $\cos tz - \cos sz = -\int_s^t z\sin z\tau\,d\tau$ and interchanging the order of integration in turn result in
  $\int_{1/s}^{\infty} \mu(z)(\cos tz - \cos sz)\,dz = -\int_s^t d\tau \int_{1/s}^{\infty} z\mu(z)\sin\tau z\,dz.$   (3.29)
According to (A.38) of Section A.5.1, the inner integral on the RHS is at most $4\tau^{-1}s\cdot m(1/s)$, so that the RHS of (3.28) may be replaced by $4[s\log(t/s)]\,m(1/s)$. The same bound is valid if the cos's and sin's are interchanged. Since $s\log(t/s) \le t-s$, we now obtain the second bound of the lemma. □
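To close the section, here is one more added illustration (not in the original): a numerical check of the two bounds of Lemma 3.2.11 for the same sample tail, assuming $\beta(t) = \int_0^\infty\mu(z)(1-\cos tz)\,dz$ and $\alpha(t) = \int_0^\infty\mu(z)\sin tz\,dz$ as in Section 2.2.

# Illustration only: check |alpha(t)-alpha(s)| v |beta(t)-beta(s)| against
# the bounds (t+7s)c(1/s) and 5(t-s)m(1/s) for a few pairs 0 < s < t.
import numpy as np
from scipy.integrate import quad

mu    = lambda y: (1.0 + y) ** -1.5
eta   = lambda x: 2.0 * (1.0 + x) ** -0.5
c     = lambda x: quad(lambda y: y * mu(y), 0, x)[0]
m     = lambda x: x * eta(x) + c(x)
alpha = lambda t: quad(mu, 0, np.inf, weight='sin', wvar=t)[0]
beta  = lambda t: eta(0.0) - quad(mu, 0, np.inf, weight='cos', wvar=t)[0]

for s, t in ((0.01, 0.015), (0.05, 0.08), (0.2, 0.3)):
    d = max(abs(alpha(t) - alpha(s)), abs(beta(t) - beta(s)))
    print(s, t, d <= (t + 7 * s) * c(1.0 / s), d <= 5 * (t - s) * m(1.0 / s))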
3.3 Proofs of Theorems 3.1.1 to 3.1.3

The main content of this section consists of the proof of Theorem 3.1.1. Theorem 3.1.2 is verified after it: part (i) of Theorem 3.1.2 was virtually proved in the preceding section, whereas part (ii) is essentially a corollary of the proof of Theorem 3.1.1. The proof of Theorem 3.1.3, which partly uses Theorem 3.1.2(ii), is given at the end of the section.
3.3.1 Proof of Theorem 3.1.1

By virtue of the right-hand inequality of (3.12b), $f^\circ(t) \ge \tfrac19 f_m(t)$, and for the present purpose, it suffices to bound the integral in (3.15) from below by a positive multiple of $x/m(x)$. As a lower bound we take the contribution to the integral from the interval $\pi/2x < t < 1$. We also employ the lower bound $\alpha(t) \ge \int_0^{2\pi/t}\mu(z)\sin tz\,dz$ and write down the resulting inequality as follows: with the constant $B = (9\pi)^{-1}$,
  $\dfrac{\bar a(x)}{B} \ge \int_{\pi/2x}^1 \dfrac{f_m(t)\alpha(t)}{t}(1-\cos xt)\,dt \ge \int_{\pi/2x}^1 \dfrac{f_m(t)}{t}(1-\cos xt)\,dt\int_0^{2\pi/t}\mu(z)\sin tz\,dz = K_{\mathrm I}(x) + K_{\mathrm{II}}(x) + K_{\mathrm{III}}(x),$
where
  $K_{\mathrm I}(x) = \int_{\pi/2x}^1 \dfrac{f_m(t)}{t}\,dt \int_0^{\pi/2t}\mu(z)\sin tz\,dz,$
  $K_{\mathrm{II}}(x) = \int_{\pi/2x}^1 \dfrac{f_m(t)}{t}\,dt \int_{\pi/2t}^{2\pi/t}\mu(z)\sin tz\,dz$
and
  $K_{\mathrm{III}}(x) = \int_{\pi/2x}^1 \dfrac{f_m(t)}{t}(-\cos xt)\,dt \int_0^{2\pi/t}\mu(z)\sin tz\,dz.$

Lemma 3.3.1 $K_{\mathrm I}(x) \ge \dfrac{5}{3\pi}\cdot\dfrac{x}{m(x)} - \dfrac{1}{m(1)}.$
Proof Since $\sin 1 \ge 5/6$ it follows that
  $\int_0^{\pi/2t}\mu(z)\sin tz\,dz \ge \int_0^{1/t}\mu(z)\sin tz\,dz > \dfrac56\,t\,c(1/t),$
and hence
  $K_{\mathrm I}(x) > \dfrac56\int_{\pi/2x}^1 \dfrac{c(1/t)}{t^2m^2(1/t)}\,dt = \dfrac56\int_1^{2x/\pi}\dfrac{c(z)}{m^2(z)}\,dz \ge \dfrac{5}{3\pi}\cdot\dfrac{x}{m(2x/\pi)} - \dfrac{5}{6m(1)},$
implying the inequality of the lemma because of the monotonicity of $m$. □
Lemma 3.3.2 𝐾III (𝑥) ≥ −4𝜋 𝑓𝑚 (1/2)/𝑥 for all sufficiently large 𝑥. ∫ 2 𝜋/𝑡 Proof Put 𝑔(𝑡) = 𝑡 −1 0 𝜇(𝑧) sin 𝑡𝑧 d𝑧. One sees that 0 ≤ 𝑔(1) < 2. We claim that 𝑔 is decreasing. Observe that d −1 𝑡 d𝑡
∫
2 𝜋/𝑡
sin 𝑡𝑧 d𝑧 = 𝑡 −2
0
∫
2 𝜋/𝑡
(𝑡𝑧 cos 𝑡𝑧 − sin 𝑡𝑧) d𝑧 = 0 0
and
∫
1 𝑔 (𝑡) = 2 𝑡 ′
2 𝜋/𝑡
𝜇(𝑧)(𝑡𝑧 cos 𝑡𝑧 − sin 𝑡𝑧) d𝑧, 0
and that 𝑢 cos 𝑢 − sin 𝑢 < 0 for 0 < 𝑢 < 𝜋 and the integrand of the last integral has a unique zero in the open interval (0, 2𝜋/𝑡). Then the monotonicity of 𝜇 leads to 𝑔 ′ (𝑡) < 0, as claimed. Now 𝑓 being decreasing, it follows that 𝐾III (𝑥) ≥ ∫1 − (2𝑛+ 1 ) 𝜋/𝑥 𝑓𝑚 (𝑡)𝑔(𝑡) d𝑡 for any positive integer 𝑛 such that (2𝑛 + 12 )𝜋/𝑥 ≤ 1. Since 2
one can choose 𝑛 so that 0 ≤ 1 − (2𝑛 + 12 )𝜋/𝑥 ≤ 2𝜋/𝑥, the inequality of the lemma obtains. □ Lemma 3.3.3 𝐾II (𝑥) ≥ −
4 𝑥 · . 3𝜋 𝑚(𝑥)
Proof Since 𝜇 is non-increasing, we have ∫
∫
2 𝜋/𝑡
2 𝜋/𝑡
𝜇(𝑧) sin 𝑡𝑧 d𝑧 ≥
𝜇(𝑧) sin 𝑡𝑧 d𝑧, 3 𝜋/2𝑡
𝜋/2𝑡
so that
∫
1
𝐾II (𝑥) ≥ 𝜋/2𝑥
d𝑡 𝑓𝑚 (𝑡) 𝑡
∫
2 𝜋/𝑡
𝜇(𝑧) sin 𝑡𝑧 d𝑧. 3 𝜋/2𝑡
We wish to integrate with respect to 𝑡 first. Observe that the region of the double integral is included in {3𝜋/2 ≤ 𝑧 ≤ 4𝑥; 3𝜋/2𝑧 < 𝑡 < 2𝜋/𝑧}, where the integrand of the inner integral is negative. Hence ∫
∫
4𝑥
𝐾II (𝑥) ≥
2 𝜋/𝑧
𝑓𝑚 (𝑡)
𝜇(𝑧) d𝑧 3 𝜋/2
3 𝜋/2𝑧
sin 𝑡𝑧 d𝑡. 𝑡
Put 𝜆 = 3𝜋/2, so that 𝑡 ≥ 𝜆/𝑧. Then, since 𝑓𝑚 is decreasing, the RHS is further bounded below by ∫
∫
4𝑥
2 𝜋/𝑧
𝜇(𝑧) 𝑓𝑚 (𝜆/𝑧) d𝑧 3 𝜋/2𝑧
𝜆
The inner integral being equal to change of variable we obtain ∫
∫ 2𝜋 3 𝜋/2
sin 𝑢 d𝑢/𝑢, which is larger than −1/𝜆, after a ∫
4𝑥/𝜆
𝐾II (𝑥) ≥ −
𝜇(𝜆𝑧) 𝑓𝑚 (1/𝑧) d𝑧 ≥ − 1
− 1
𝑥
𝑥
𝜇(𝜆𝑧) 𝑓𝑚 (1/𝑧) d𝑧. 1
Recall 𝑓𝑚 (1/𝑧) = 𝑧2 /𝑚 2 (𝑧). Since ∫
sin 𝑡𝑧 d𝑡. 𝑡
∫∞ 𝑥
𝜇(𝜆𝑧) d𝑧 = 𝜆−1 𝜂(𝜆𝑥), by integration by parts,
𝑥 ∫ 𝑥 2 𝑧𝜂(𝜆𝑧)𝑐(𝑧) 1 𝜂(𝜆𝑧)𝑧 2 − d𝑧 𝜇(𝜆𝑧) 𝑓𝑚 (1/𝑧) d𝑧 = 2 𝜆 𝑚 (𝑧) 𝑧=1 𝜆 1 𝑚 3 (𝑧) ∫ 𝑥 𝑧𝜂(𝑧)𝑐(𝑧) 𝜂(1) 2 ≥− d𝑧 − . 3 𝜆 1 𝑚 (𝑧) 𝜆𝑚 2 (1)
Noting that 𝑧𝜂(𝑧) = 𝑚(𝑧) − 𝑐(𝑧), we have 𝑐(𝑧) 𝑐2 (𝑧) 𝑧𝜂(𝑧)𝑐(𝑧) = − . 𝑚 3 (𝑧) 𝑚 2 (𝑧) 𝑚 3 (𝑧) Since 𝑚(1) > 𝜂(1), we conclude ∫ 𝑥 𝑐(𝑧) 1 1 2 4 𝑥 d𝑧 − + 𝐾II (𝑥) ≥ − =− , 𝜆 1 𝑚 2 (𝑧) 𝜆𝑚(1) 3𝜋 𝑚(𝑥) 𝜆𝑚(1) hence the inequality of the lemma.
□
Completion of the proof of Theorem 3.1.1. Combining Lemmas 3.3.1 to 3.3.3 we obtain ∫ 1 1 − cos 𝑥𝑡 1 𝑥 1 𝑎(𝑥) ¯ ≥ 𝑓𝑚 (𝑡)𝛼(𝑡) d𝑡 ≥ · − + 𝑂 (1/𝑥), (3.30) 𝐵 𝑡 3𝜋 𝑚(𝑥) 𝑚(1) 𝜋/2𝑥 showing Theorem 3.1.1.
□
Proof (of Theorem 3.1.2) The first part (i) of Theorem 3.1.2 is obtained by combining Lemmas 3.2.3 and 3.2.6. As for (ii) its premise implies that for any 𝜀 > 0 there exists a 𝛿 > 0 such that 𝑓 ◦ (𝑡) > 𝜀 −1 𝑓𝑚 (𝑡) for 0 < 𝑡 < 𝛿, which concludes the ∫1 result in view of the second inequality of (3.30), since 𝛿 𝑓𝑚 (𝑡)𝛼(𝑡) (1 − cos 𝑥𝑡) d𝑡/𝑡 is bounded for each 𝛿 > 0. □
3.3.2 Proof of Theorem 3.1.3 Lemma 3.3.4 Suppose that there exists 𝑝 := lim 𝑚 + (𝑥)/𝑚(𝑥). Then as 𝑡 ↓ 0 𝛽+ (𝑡) = 𝑝𝛽(𝑡) + 𝑜 (𝑡𝑚(1/𝑡))
and
𝛼+ (𝑡) = 𝑝 𝛼(𝑡) + 𝑜 (𝑡𝑚(1/𝑡)) .
𝑥 ≥ 0, denote by 𝑚, ˜ 𝜂, ˜ ˜ 𝛼˜ and Proof Take a non-increasing summable function 𝜇(𝑥), ˜ → 1. It suffices to show that 𝛽˜ the corresponding functions and suppose 𝑚/𝑚 ˜ = 𝑜(𝑡𝑚(1/𝑡)) 𝛽(𝑡) − 𝛽(𝑡)
and
˜ = 𝑜(𝑡𝑚(1/𝑡)) 𝛼(𝑡) − 𝛼(𝑡)
(3.31)
as 𝑡 ↓ 0.2 Pick a positive number 𝑀 such that sin 𝑀 = 1. Then ∫ ∞ 𝜇(𝑀/𝑡) 4𝑡 4 𝑡𝑦 𝜇(𝑦) cos d𝑦 ≤ 2 𝑐(𝑀/𝑡) ≤ 𝑡𝑚(1/𝑡), ≤2 𝑡 𝑀 𝑀 𝑀/𝑡 while by integrating by parts, ∫
∫
𝑀/𝑡
𝑀/𝑡
(𝜇 − 𝜇) ˜ (𝑦) (1 − cos 𝑡𝑦) d𝑦 = −(𝜂 − 𝜂) ˜ (𝑀/𝑡) + 𝑡 0
(𝜂 − 𝜂) ˜ (𝑦) sin 𝑡𝑦 d𝑦. 0
∫∞ Since 𝑀/𝑡 (𝜇 − 𝜇) ˜ (𝑦) d𝑦 = (𝜂 − 𝜂) ˜ (𝑀/𝑡) (which will cancel out the first term on the RHS above), these together yield ∫ ˜ =𝑡 𝛽(𝑡) − 𝛽(𝑡)
𝑀/𝑡
(𝜂 − 𝜂) ˜ (𝑦) sin 𝑡𝑦 d𝑦 + 𝑡𝑚(1/𝑡) × 𝑂 (1/𝑀) 0
and integrating by parts the last integral leads to ∫ 𝑀/𝑡 ˜ 𝛽(𝑡) − 𝛽(𝑡) = (𝑚 − 𝑚)(𝑀/𝑡) ˜ −𝑡 (𝑚 − 𝑚) ˜ (𝑦) cos 𝑡𝑦 d𝑦 + 𝑚(1/𝑡) × 𝑂 (1/𝑀). 𝑡 0 If 𝑚/𝑚 ˜ → 1, the first two terms on the RHS are 𝑜(𝑚(1/𝑡)) for each 𝑀 fixed, and we conclude the first relation of (3.31) since 𝑀 can be made arbitrarily large. The second relation is verified in the same way but with 𝑀 taken from 𝜋Z. □ Lemma 3.3.5 Suppose lim |𝛾(𝑡)|/𝑡𝑚(1/𝑡) = 0. If 𝑐/𝑚 → 0, then 𝑎(𝑥)𝑚(𝑥)/𝑥 ¯ diverges to infinity, and if lim inf 𝑐(𝑥)/𝑚(𝑥) = 0, then lim sup 𝑎(𝑥)𝑚(𝑥)/𝑥 ¯ = ∞. Proof The first half follows from Theorem 3.1.2(ii) and Lemma 3.2.1. For the proof of the second one, we apply the trivial lower bound ∫ 𝜋 𝑎(𝜋𝑥 ¯ 𝑛) = 0
𝜋
𝛼(𝑡) (1 − cos 𝜋𝑥 𝑛 𝑡) d𝑡 ≥ [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡
∫
1/𝑥𝑛
1/2𝑥𝑛
𝛼(𝑡) d𝑡 , + 𝛾 2 (𝑡)]𝑡
[𝛼2 (𝑡)
2 We may let 𝑝 > 0, for if 𝑝 = 0 the result follows from Lemma 3.2.2.
(3.32)
valid for any sequence 𝑥 𝑛 ∈ Z/𝜋. Suppose that lim sup 𝑐(𝑥)/𝑚(𝑥) > 0 in addition to lim inf 𝑐(𝑥)/𝑚(𝑥) = 0, so that we can take a constant 𝜀 > 0 and an unbounded sequence 𝑥 𝑛 as in Lemma 3.2.8, according to which 𝑐(𝑥 𝑛 ) = 𝜀𝑚(𝑥 𝑛 ) and 𝛼(𝑡)/𝑡 ≥ 4−1 𝜀 2 𝑚(1/𝑡)
if 1/2𝑥 𝑛 < 𝑡 ≤ 1/𝑥 𝑛 .
(3.33)
Because of the assumption of the lemma, the second relation above leads to |𝛾(𝑡)| = 𝑜(𝑚(1/𝑡)𝑡) = 𝑜(𝛼(𝑡))
(𝑛 → ∞),
(3.34)
while, in conjunction with (3.21) and Lemma 3.2.1, the first one shows that for 1/2𝑥 𝑛 ≤ 𝑡 ≤ 1/𝑥 𝑛 , 𝛼(𝑡)/𝑡 ≤ 5𝑐(1/𝑡) ≤ 5𝑐(2𝑥 𝑛 ) ≤ 4 · 5𝑐(𝑥 𝑛 ) = 20𝜀𝑚(𝑥 𝑛 ). These together yield 𝛼(𝑡) 1 1 1 ≥ · ∼ , 𝑡 20𝜀𝑚(𝑥 𝑛 ) 𝛼2 (𝑡) + 𝛾 2 (𝑡) 𝛼(𝑡) and substitution into (3.32) shows that for all 𝑛 large enough 𝜋 𝑎(𝜋𝑥 ¯ 𝑛) ≥
1 30𝜀𝑚(𝑥 𝑛 )
∫
1/𝑥𝑛
1/2𝑥𝑛
𝑥𝑛 d𝑡 = . 2 30𝜀𝑚(𝑥 𝑛 ) 𝑡
Thus 𝑎(𝑥)𝑥/𝑚(𝑥) ¯ is unbounded, 𝜀 being made arbitrarily small.
□
Remark 3.3.6 Suppose 𝑚 + /𝑚 → 1/2. Then for the second assertion of Lemma 3.3.5, the condition lim inf 𝑐(𝑥)/𝑚(𝑥) = 0 can be replaced by lim inf 𝑐 + (𝑥)/𝑚 + (𝑥) = 0.
(3.35)
Indeed, if 𝑐 + /𝑚 + → 0, then 𝛼(𝑡) = 𝑜(𝑡𝑚(1/𝑡)) owing to Lemma 3.3.4, and by Theorem 3.1.2(ii) we have only to consider the case lim sup 𝑐 + (𝑥)/𝑚 + (𝑥) > 0. Take 𝑥 𝑛 as above but with 𝑐 + , 𝑚 + , 𝛼+ in place of 𝑐, 𝑚, 𝛼. Then, for 1/2𝑥 𝑛 ≤ 𝑡 ≤ 1/𝑥 𝑛 , the same argument as leading to (3.33) verifies 𝑚 + (1/𝑡)𝑡 = 𝑂 (𝛼+ (𝑡)), so that we have (3.34) and, by Lemma 3.3.4 again, 𝛼+ (𝑡) ∼ 𝛼− (𝑡), and hence 𝛼(𝑡)/[𝛼2 + 𝛾 2 ] (𝑡) ∼ 1/2𝛼+ (𝑡). We can follow the proof of Lemma 3.3.5 for the rest. Remark 3.3.7 Let 𝑝 < 1/2 in Lemma 3.3.4. Then we have the following. (i) 𝛾(𝑡) = (2𝑝 − 1) 𝛽(𝑡) + 𝑜(𝑡𝑚(1/𝑡)), so that (H) holds owing to Lemma 3.2.2(a). (ii) If 𝑐/𝑚 → 0, or equivalently 𝑚(𝑥) ∼ 𝑥𝜂(𝑥), then, by 𝑐(𝑥) ≥ 12 𝑥 2 𝜇(𝑥), 𝑥𝜇(𝑥)/𝜂(𝑥) → 0 so that 𝜂 is s.v. due to Karamata’s theorem. Hence 𝑚 − (𝑥) ∼ (1 − 𝑝)𝑥𝜂(𝑥) ∼ 𝑥𝜂− (𝑥). These properties of 𝜂− and 𝜂 entail that 𝐹 is positively relatively stable, so 𝑎(𝑥) admits a simple explicit expression for its asymptotic form (see Section 4.1).
Completion of the proof of Theorem 3.1.3. The assertion (i) follows from Lemma 3.3.4; the necessity of the condition lim sup 𝑥𝜂(𝑥)/𝑚(𝑥) < 1 asserted in (ii) is immediate from Lemma 3.3.5. The necessity of the conditions lim sup 𝑥𝜂± (𝑥)/𝑚 ± (𝑥) < 1 is verified in Remark 3.3.6. □
3.4 Proof of Proposition 3.1.5

Suppose (H) holds. Since then $\bar a(x) \asymp x/m(x)$ by Theorems 3.1.1 and 3.1.2, we may suppose $R/2 < x < R$ and on writing $\delta = (R-x)/x$ the assertion to be shown can be rephrased as
  $|\bar a(R) - \bar a(x)| \le C\delta^{1/4}\,[x/m(x)] \qquad (R/2 < x < R).$   (3.36)
¯ For 𝑀 > 1 and 𝑅/2 < 𝑥 ≤ 𝑅 we make the decomposition 𝑎(𝑥) = 𝑢 𝑀 (𝑥) + 𝑣 𝑀 (𝑥), where ∫ 𝑀/𝑥 𝛼(𝑡) 𝑓 ◦ (𝑡) 𝑀 𝑢 (𝑥) = (1 − cos 𝑥𝑡) d𝑡, 𝑡 ∫0 𝜋 𝛼(𝑡) 𝑓 ◦ (𝑡) (1 − cos 𝑥𝑡) d𝑡. 𝑣 𝑀 (𝑥) = 𝑡 𝑀/𝑥 By the inequality |cos 𝑥𝑡 − cos 𝑅𝑡| ≤ |𝑅𝑡 − 𝑥𝑡| it is then easy to show |𝑢 𝑀 (𝑥) − 𝑢 𝑀 (𝑅)| < 𝐶 𝑀𝛿[𝑥/𝑚(𝑥)]
(3.37)
for a (universal) constant 𝐶 (as we shall see shortly), whereas to obtain a similar estimate for 𝑣 𝑀 we cannot help exploiting the oscillation of cos 𝑥𝑡. To the latter purpose one may seek some appropriate smoothness of 𝛼(𝑡) 𝑓 ◦ (𝑡),∫ which, however, ∞ is a property difficult to verify because of the intractable part 1/𝑡 𝜇(𝑦) sin 𝑡𝑦 d𝑦 involved in the integral defining 𝛼(𝑡). In order to circumvent it, for each positive integer 𝑛 we bring in the function ∫ 𝛼𝑛 (𝑡) :=
𝑛 𝜋/𝑡
𝜇(𝑦) sin 𝑡𝑦 d𝑦 0
and make use of the inequalities 𝛼2𝑛 (𝑡) < 𝛼(𝑡) < 𝛼2𝑛+1 (𝑡).
If 𝑣 𝑀 (𝑥) ≤ 𝑣 𝑀 (𝑅), then ∫
𝜋
[𝛼2𝑛+1 (𝑡) − 𝛼2𝑛 (𝑡)]
0 ≤ 𝑣 𝑀 (𝑅) − 𝑣 𝑀 (𝑥) ≤
𝑀/𝑥 ∫ 𝜋
+
𝛼2𝑛 (𝑡) 𝑓 ◦ (𝑡) (cos 𝑥𝑡 − cos 𝑅𝑡) d𝑡 𝑡
𝑀/𝑥 ∫ 𝑀/𝑥
+ 𝑀/𝑅
𝑓 ◦ (𝑡) (1 − cos 𝑅𝑡) d𝑡 𝑡 (3.38)
𝛼2𝑛+1 (𝑡) 𝑓 ◦ (𝑡) (1 − cos 𝑅𝑡) d𝑡 𝑡
= I𝑛 + II 𝑛 + III 𝑛
(say);
and if 𝑣 𝑀 (𝑥) > 𝑣 𝑀 (𝑅), we have an analogous inequality. We consider the first case only, the other one being similar. Since 0 ≤ 𝛼𝑛 (𝑡) ≤ 𝛼(𝑡) ≤ 𝜋 2 𝑡𝑐(1/𝑡) and by Lemma 3.2.6, which may read 𝑓 ◦ (𝑡) ≤ 𝐶1 /[𝑚 2 (1/𝑡)𝑡 2 ] under (H), we see that for any 𝑛, ∫
𝑀/𝑥
𝑀/𝑅
𝛼𝑛 (𝑡) 𝑓 ◦ (𝑡) d𝑡 ≤ 𝐶 𝑡
∫
𝑅/𝑀
𝑥/𝑀
𝑐(𝑦) d𝑦 (𝑅 − 𝑥)/𝑀 𝛿𝑥 ≤𝐶 ≤𝐶 , 𝑚(𝑥/𝑀) 𝑚(𝑥) 𝑚 2 (𝑦)
(3.39)
so that III 𝑛 admits a bound small enough for the present purpose. Now we are able to verify (3.37). Noting −(1/𝑥𝑚(𝑥)) ′ > 1/𝑥 2 𝑚(𝑥), we have ∫ ∞ 1 1 d𝑦 < . (3.40) 2 𝑥𝑚(𝑥) 𝑦 𝑚(𝑦) 𝑥 Since |cos 𝑥𝑡 − cos 𝑅𝑡| = |[1 − cos(𝑅 − 𝑥)𝑡] cos 𝑥𝑡 + sin 𝑥𝑡 sin(𝑅 − 𝑥)𝑡| ≤ 2𝛿(𝑥𝑡) 2 it therefore follows that ∫ ∫ ∞ 1/𝑥 𝛼(𝑡) 𝑓 ◦ (𝑡) 𝐶𝛿𝑥 𝑐(𝑦) d𝑦 (cos 𝑥𝑡 − cos 𝑅𝑡) d𝑡 ≤ 𝐶𝛿𝑥 2 ≤ . 2 (𝑦)𝑦 2 0 𝑡 𝑚(𝑥) 𝑚 𝑥 Similarly, using |cos 𝑥𝑡−cos 𝑅𝑡| ≤ |𝑅−𝑥|𝑡 one infers that the integral over [1/𝑥, 𝑀/𝑥] is dominated in absolute value by ∫ 𝑥 ∫ 𝑥 𝐶 𝑀𝛿𝑥 𝑐(𝑦) d𝑦 𝑐(𝑦) d𝑦 ≤ 𝐶 𝑀𝛿 ≤ , 𝐶 (𝑅 − 𝑥) 2 (𝑦)𝑦 2 (𝑦) 𝑚(𝑥) 𝑚 𝑚 𝑥/𝑀 𝑥/𝑀 so that
∫ 𝑀/𝑥 𝛼(𝑡) 𝑓 ◦ (𝑡) 𝐶 ′ 𝑀𝛿𝑥 (cos 𝑅𝑡 − cos 𝑥𝑡) d𝑡 ≤ , 0 𝑡 𝑚(𝑥)
which concludes the proof of (3.37). Lemma 3.4.1 Under (H), |I𝑛 | ≤ 𝐶 [𝑥/𝑚(𝑥)]/𝑛 whenever 𝑀 > 2𝑛𝜋. Proof The integrand of the integral defining I𝑛 is less than 2 [𝛼2𝑛+1 (𝑡) − 𝛼2𝑛 (𝑡)] 𝑓 ◦ (𝑡)/𝑡 (≥ 0).
∫
Noting that 𝛼2𝑛+1 (𝑡) − 𝛼2𝑛 (𝑡) = order of integration, we infer that ∫
(2𝑛+1) 𝜋 𝑥/𝑀
I𝑛 ≤ 2
∫
𝜇(𝑦) sin 𝑦𝑡 d𝑦 and interchanging the
[ (2𝑛+1) 𝜋/𝑦 ]∧ 𝜋
𝜇(𝑦) d𝑦 2𝑛
2𝑛 𝜋/𝑦
𝑓 ◦ (𝑡) sin 𝑦𝑡 d𝑡. 𝑡
Let (H) be satisfied. Then, by Lemma 3.2.6 again, in the range of the inner integral, where 𝑦/2𝑛𝜋 ≥ 1/𝑡 ≥ 𝑦/(2𝑛 + 1)𝜋, we have 𝑓 ◦ (𝑡)/𝑡 ≤ 𝐶2 (𝑦/𝑛) 3 /𝑚 2 (𝑦/𝑛) ≤ 𝐶2 𝑛−1 𝑦 3 /𝑚 2 (𝑦), whereas the integral of sin 𝑦𝑡 over 0 < 𝑡 < 𝜋/𝑦 equals 2/𝑦. Thus I𝑛 ≤
4𝐶2 𝑛
∫
(2𝑛+1) 𝜋 𝑥/𝑀
𝑦2 𝜇(𝑦) 𝑚 2 (𝑦)
2𝑛
d𝑦.
By 𝜇(𝑦)𝑦 2 ≤ 2𝑐(𝑦) ∫
𝑧
0
𝜇(𝑦)𝑦 2 d𝑦 ≤ 2 𝑚 2 (𝑦)
∫ 0
𝑧
𝑐(𝑦) 2𝑧 d𝑦 = . 𝑚(𝑧) 𝑚 2 (𝑦)
Hence we obtain the bound of the lemma because of the monotonicity of 𝑥/𝑚(𝑥).□ Lemma 3.4.2 Under (H) it holds that if 12 𝑅 < 𝑥 ≤ 𝑅 and 1 < 𝑀 < 𝑥, ∫
𝜋
𝑀/𝑥
𝐶𝑛2 𝑥 𝛼2𝑛 (𝑡) 𝑓 ◦ (𝑡) cos 𝑥𝑡 d𝑡 ≤ · . 𝑡 𝑀 𝑚(𝑥)
∫ 𝑛 𝜋/𝑡 Proof Put 𝑔(𝑡) = 𝑓 ◦ (𝑡)/𝑡. Since 𝛼𝑛′ (𝑡) = 0 𝑦𝜇(𝑦) cos 𝑡𝑦 d𝑦, we have |𝛼𝑛′ (𝑡)| ≤ 𝑐(𝑛𝜋/𝑡) ≤ 𝜋 2 𝑛2 𝑐(1/𝑡). From this and Lemmas 3.2.11 and 3.2.6 (see (3.27)) we infer that if 𝑠 < 𝑡 < 2𝑠, |𝛼2𝑛 (𝑡)𝑔(𝑡) − 𝛼2𝑛 (𝑠)𝑔(𝑠)| ≤ 𝛼2𝑛 (𝑡)|𝑔(𝑡) − 𝑔(𝑠)| + |𝛼2𝑛 (𝑡) − 𝛼2𝑛 (𝑠)|𝑔(𝑠) ≤ 𝐶1 𝛼2𝑛 (𝑡)𝑔(𝑡)|𝑡 − 𝑠|/𝑠 + 𝐶2 𝑛2 𝑐(1/𝑡)𝑔(𝑡)|𝑡 − 𝑠| ≤ 𝐶 𝑛2 |𝑡 − 𝑠|/𝑠 𝑐(1/𝑡)𝑔(𝑡)𝑡. Put 𝑠 𝑘 = (𝑀 + 2𝜋𝑘)/𝑥 for 𝑘 = 0, 1, 2, . . . . Then i h √ |𝛼2𝑛 (𝑡)𝑔(𝑡) − 𝛼2𝑛 (𝑠 𝑘 )𝑔(𝑠 𝑘 )| ≤ 𝐶 1/ 𝑀 + 𝑛2 /𝑀 𝑐(1/𝑡)𝑔(𝑡)𝑡 for 𝑠 𝑘−1 ≤ 𝑡 ≤ 𝑠 𝑘 (𝑘 ≥ 1) and if 𝑁 = ⌊𝑥/2−𝑀/2𝜋⌋, then on noting
∫
𝑠𝑘 𝑠𝑘−1
cos 𝑥𝑡 d𝑡 = 0,
∫
𝑠𝑁
𝑀/𝑥
∑︁ ∫ 𝑠𝑘 𝑁 𝛼2𝑛 (𝑡) 𝑓 ◦ (𝑡) [𝛼2𝑛 (𝑡)𝑔(𝑡) − 𝛼2𝑛 (𝑠 𝑘 )𝑔(𝑠 𝑘 )] cos 𝑥𝑡 d𝑡 cos 𝑥𝑡 d𝑡 = 𝑡 𝑘=1 𝑠𝑘−1 𝑁 ∫ 𝑠𝑘 ∑︁ ≤ 𝐶 2𝜋𝑛2 /𝑀 𝑐(1/𝑡)𝑔(𝑡)𝑡 d𝑡 𝑘=1
′
𝑠𝑘−1
≤ 𝐶 𝑛2 /𝑀 [𝑥/𝑚(𝑥)], where condition (H′) is used for the last inequality. Since 0 ≤ 𝜋 − 𝑠 𝑁 = 𝑂 (1/𝑥), this gives the bound of the lemma. □ Proof (of Proposition 3.1.5) By Lemmas 3.4.1 and 3.4.2 I𝑛 + II 𝑛 ≤ 𝐶 ′ 𝑀 −1/3 [𝑥/𝑚(𝑥)]
for ⌊𝑛 = 𝑀 1/3 ⌋.
Hence taking 𝑀 = 𝛿−3/4 and recalling (3.39), we have |𝑣 𝑀 (𝑥) − 𝑣 𝑀 (𝑅)| ≤ 𝐶 ′′ 𝛿1/4 [𝑥/𝑚(𝑥)], which together with (3.37) yields the required bound (3.36). □
3.5 Proof of Theorem 3.1.6 and Proposition 3.1.7 If (H) holds, then by (3.19) of Lemma 3.2.6 ℜ (1 − e𝑖 𝑥𝑡 )/[1 − 𝜓(𝑡)] is integrable over |𝑡| < 𝜋, which ensures ∫ 𝜋 1 − e𝑖𝑥𝑡 1 ℜ d𝑡 (3.41) 𝑎(𝑥) = 2𝜋 − 𝜋 1 − 𝜓(𝑡) ∫𝜋 Í 𝑛 𝑛 𝑛 since − 𝜋 ℜ (1 − e𝑖𝑥 𝑦 )/(1 − 𝑠𝜓(𝑡)) d𝑡 = 2𝜋 ∞ 𝑛=0 𝑠 [ 𝑝 (0) − 𝑝 (−𝑥)] → 𝑎(𝑥) (𝑠 ↑ 1) by virtue of Abel’s lemma (alternatively see the proof of ([71, P28.4(a)]), and it follows that ∫ 𝜋 𝛼(𝑡) (1 − cos 𝑥𝑡) − 𝛾(𝑡) sin 𝑥𝑡 1 𝑎(𝑥) = d𝑡. (3.42) 𝜋 0 [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡 Recalling 𝛾(𝑡) = 𝛽+ (𝑡) − 𝛽− (𝑡), we put ∫ 𝜋 𝛽± (𝑡) sin 𝑥𝑡 1 𝑏 ± (𝑥) = d𝑡 𝜋 0 [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡
(3.43)
so that 𝑎(𝑥) = 𝑎(𝑥) ¯ + 𝑏 − (𝑥) − 𝑏 + (𝑥).
(3.44)
Proof (of Proposition 3.1.7) Suppose (H) holds. By (3.44) and Theorem 3.1.1 𝑎(𝑥) ∼ 𝑎(−𝑥) if and only if 𝑏 − (𝑥) − 𝑏 + (𝑥) = 𝑜(𝑥/𝑚(𝑥)). Let 𝑚 + /𝑚 → 1/2. Since then 𝛾(𝑡) = 𝑜 (𝑡𝑚(1/𝑡)) (𝑡 ↓ 0) according to Theorem 3.1.3(i), it suffices to show ∫ 𝜋 𝑥 . (3.45) 𝑓 ◦ (𝑡)𝑚(1/𝑡)|sin 𝑥𝑡| d𝑡 ≤ 𝐶 𝑚(𝑥) 0
By virtue of Theorem 3.1.3(ii) and Lemma 3.2.6, 𝑚(𝑥) ≤ 𝐶1 𝑐(𝑥) and 𝑓 ◦ ≤ 𝐶2 𝑓𝑚 . It therefore follows that ∫ 𝑥 ∫ 𝜋 ∫ 𝜋 𝑐(𝑦) 𝐶 ′′𝑥 ◦ ′′ ′ 𝑓 (𝑡)𝑚(1/𝑡)|sin 𝑥𝑡| d𝑡 ≤ 𝐶 𝑓𝑚 (𝑡)𝑐(1/𝑡) d𝑡 ≤ 𝐶 d𝑦 ≤ . 2 𝑚(𝑥) 1/ 𝜋 𝑚 (𝑦) 1/𝑥 1/𝑥 ∫∞ Noting that 𝑥 𝜂(𝑦) d𝑦/𝑚 2 (𝑦) = 1/𝑚(𝑥), we have the same bound for the integral over 𝑡 ∈ [0, 1/𝑥] in view of (3.46) below, which we take for granted to conclude (3.45). □ The following inequalities hold whenever 𝐸 𝑋 = 0: 1 2
∫
∞
∫
1/𝑥
𝑓𝑚 (𝑡) 𝛽+ (𝑡) d𝑡 ≤ 0
𝑥
𝑚˜ + (𝑦) 𝑚˜ + (𝑥) + d𝑦 ≤ 2 𝑦𝑚 2 (𝑦) 𝑚 (𝑥)
∫
∞
𝑥
2𝜂+ (𝑦) d𝑦. 𝑚 2 (𝑦)
(3.46)
The first inequality of (3.46) is∫ immediate from (3.12(a)) by changing the variable ∞ of integration. Putting 𝑔(𝑥) = 𝑥 d𝑦/[𝑦 2 𝑚(𝑦)] and integrating by parts we have ∫ 𝑥
∞
∞ ′ ∫ ∞ 𝑦 𝑚˜ + (𝑦) 𝑚˜ + (𝑦) 𝑦 𝑚˜ + (𝑦) + 𝑔(𝑦) d𝑦 d𝑦 = − 𝑔(𝑦) 𝑚(𝑦) 𝑦=𝑥 𝑚(𝑦) 𝑦𝑚 2 (𝑦) 𝑥
as well as 𝑔(𝑦) = On observing
𝑦 𝑚˜ + (𝑦) 𝑚(𝑦)
1 − 𝑦𝑚(𝑦)
′ =
∫ 𝑦
∞
𝜂(𝑢) d𝑢. 𝑢𝑚 2 (𝑢)
2𝑦𝜂+ 𝑦 𝑚˜ + 𝜂 2𝑦𝜂+ (𝑦) − ≤ 𝑚 𝑚(𝑦) 𝑚2
substitution leads to the second inequality of (3.46). ∫ 1/𝑥 Lemma 3.5.1 If 𝑚 + /𝑚 → 0, then 0 𝑓 ◦ (𝑡) 𝛽+ (𝑡) d𝑡 = 𝑜(1/𝑚(𝑥)). Proof Let 𝑚 + /𝑚 → 0. Then (H) holds so that we may replace 𝑓 ◦ by 𝑓𝑚 owing to Lemma 3.2.6. The first term on the right of (3.46) is 𝑜(1/𝑚(𝑥)) since 𝑚˜ + ≤ 𝑚 + . On the other hand, integrating by parts yields ∫ ∞ ∫ ∞ 𝑚 + (𝑥) 𝑚 + (𝑦)𝜂(𝑦) 𝜂+ (𝑦) d𝑦 = − + d𝑦 = 𝑜(1/𝑚(𝑥)). 𝑚 2 (𝑦) 𝑚 2 (𝑥) 𝑚 3 (𝑦) 𝑥 𝑥 Thus the lemma is verified.
□
In preparation for the proof of Theorem 3.1.6, choose a positive integer 𝑁 such that 𝐸 [ |𝑋 |; |𝑋 | > 𝑁] ≤ 𝑃[ |𝑋 | ≤ 𝑁] and define a function 𝑝 ∗ (𝑥) on Z by 𝑝 ∗ (1) = 𝐸 [ |𝑋 |; |𝑋 | > 𝑁], 𝑝 ∗ (0) = 𝑃[ |𝑋 | ≤ 𝑁] − 𝑝 ∗ (1) and 𝑝(𝑘) + 𝑝(−𝑘) if 𝑘 < −𝑁, 𝑝 ∗ (𝑘) = 0 if −𝑁 ≤ 𝑘 ≤ −1 or 𝑘 ≥ 2,
where 𝑝(𝑘) = 𝑃[𝑋 = 𝑘]. Then 𝑝 ∗ is a probability distribution on Z with zero mean. Denote the corresponding functions by 𝑎 ∗ , 𝑏 ∗± , 𝛼∗ , 𝛼∗± , etc. Since 𝑝 ∗ (𝑧) = 0 for 𝑧 ≥ 2 and 𝜎∗2 = ∞, we have for 𝑥 > 0, 𝑎 ∗ (−𝑥) = 0, so that 𝑎 ∗ (𝑥) = 𝑎 ∗ (𝑥)/2 + 𝑏 ∗− (𝑥) − 𝑏 ∗+ (𝑥), hence 𝑎¯ ∗ (𝑥) = 2−1 𝑎 ∗ (𝑥) = 𝑏 ∗− (𝑥) − 𝑏 ∗+ (𝑥). We shall show that if 𝑚 + /𝑚 → 0, then ∼ 𝑎¯ ∗ (𝑥), 𝑎(𝑥) ¯
¯ |𝑏 + (𝑥)| + |𝑏 ∗+ (𝑥)| = 𝑜( 𝑎(𝑥))
(3.47)
and ¯ 𝑏 ∗− (𝑥) = 𝑏 − (𝑥) + 𝑜( 𝑎(𝑥)).
(3.48)
These together with (3.44) yield 𝑏 − (𝑥) = 𝑎¯ ∗ (𝑥){1 + 𝑜(1)} = 𝑎(𝑥){1 ¯ + 𝑜(1)},
(3.49)
+ 𝑜(1)} + 𝑏 − (𝑥) ∼ 2𝑎(𝑥), and hence 𝑎(𝑥) = 𝑎(𝑥){1 ¯ which shows 𝑎(−𝑥)/𝑎(𝑥) → 0. ¯ The rest of this section is devoted to the proof of (3.47) and (3.48). It is easy to see that 𝛼∗ (𝑡) = 𝛼(𝑡) + 𝑂 (𝑡), 𝛽∗− (𝑡) = 𝛽− (𝑡) + 𝛽+ (𝑡) + 𝑂 (𝑡 2 ), ∫1 𝛽∗+ (𝑡) = 𝑝 ∗ (1) 0 (1 − cos 𝑡𝑥) d𝑥 = 𝑂 (𝑡 2 )
(3.50)
(as 𝑡 ↓ 0). Let 𝛥(𝑡), 𝑡 > 0, denote the difference 𝛥(𝑡) := 𝑓 ◦ (𝑡) − 𝑓∗◦ (𝑡) =
1 1 − 𝛼2 (𝑡) + 𝛾 2 (𝑡) 𝛼∗2 (𝑡) + 𝛾∗2 (𝑡)
= {(𝛼∗2 − 𝛼2 ) (𝑡) + (𝛾∗2 − 𝛾 2 ) (𝑡)} 𝑓 ◦ (𝑡) 𝑓∗◦ (𝑡). Observe that (𝛾∗ + 𝛾) (𝑡) = −2𝛽− (𝑡) + 𝑜(𝑡 2 ), (𝛾∗ − 𝛾) (𝑡) = −2𝛽+ (𝑡) + 𝑜(𝑡 2 ); (𝛼∗2 − 𝛼2 ) (𝑡) = 2𝛼(𝑡) × 𝑂 (𝑡)
and
(𝛾∗2 − 𝛾 2 ) (𝑡) = 4𝛽− (𝑡) 𝛽+ (𝑡) + 𝑜(𝑡 2 ). (3.51)
Now we suppose 𝑚 + /𝑚 → 0. Then it follows that 𝑓𝑚 (𝑡) ≍ 𝑓 ◦ (𝑡) ≍ 𝑓∗◦ (𝑡) and 𝛽− (𝑡) 𝛽+ (𝑡)/[𝛼2 (𝑡) + 𝛽2 (𝑡)] → 0, and hence that 𝛥(𝑡) = 𝑜( 𝑓𝑚 (𝑡)), which implies that 𝑎(𝑥) ¯ ∼ 𝑎¯ ∗ (𝑥), the first relation of (3.47). The proofs of the second relation in (3.47) and of (3.48) are somewhat involved since we need to take advantage of the oscillating nature of the integrals defining 𝛽± (𝑡). First, we dispose of the non-oscillatory parts of these integrals. By (3.12b) applied to 𝛼± + 𝛽± in place of 𝛼 + 𝛽 it follows that if 𝑚 + /𝑚 → 0, then + (𝑡) lim𝑡 ↓0 𝛼𝛼−+ (𝑡)+𝛽 (𝑡)+𝛽− (𝑡) = 0 (the converse is also true), which entails 𝛼(𝑡) − 𝛾(𝑡) ∼ 𝛼(𝑡) + 𝛽(𝑡) ≍ 𝑚(1/𝑡)𝑡.
(3.52)
Lemma 3.5.2 If 𝑚 + /𝑚 → 0, then ∫ 𝑥 𝑐 + (𝑦) 𝑥 d𝑥 = 𝑜 . 2 𝑚(𝑥) 1 𝑚 (𝑦) Proof The assertion of the lemma follows from the following identity for primitive functions ∫ ∫ 𝑐(𝑥) 𝑚 + (𝑥) 𝑥𝑚 + (𝑥) 𝑐 + (𝑥) d𝑥 = 2 · d𝑥 − 2 . (3.53) 2 2 𝑚 (𝑥) 𝑚 (𝑥) 𝑚(𝑥) 𝑚 (𝑥) This identity may be verified by differentiation or by integration by parts, the latter giving ∫
𝑚 +2 (𝑥) 𝑥 𝑐 + (𝑥) d𝑥 = · −2 𝑚 + (𝑥) 𝑚 2 (𝑥) 𝑚 2 (𝑥)
∫
𝑥 (𝜂+ 𝑐 − 𝑐 + 𝜂)𝑚 + (𝑥) · d𝑥, 𝑚 + (𝑥) 𝑚3
from which we deduce (3.53) by an easy algebraic manipulation.
□
From Lemma 3.2.11 it follows that for 𝑥 ≥ 4, |𝛼(𝑡) − 𝛼(𝑠)| ∨ |𝛽(𝑡) − 𝛽(𝑠)| ≤ 9𝑐(1/𝑡)𝑡
if
𝑡 ≥ 𝜋/𝑥 and 𝑠 = 𝑡 + 𝜋/𝑥.
(3.54)
We shall apply Lemma 3.2.11 only in this form in this section. Proof (of (3.47)) We prove 𝑏 + (𝑥) = 𝑜( 𝑎(𝑥)) ¯ only, 𝑏 ∗+ being dealt with in the same way. In view of Theorem 3.1.1 and Lemma 3.5.1 it suffices to show that ∫ 𝜋 ◦ 𝑓 (𝑡) 𝛽+ (𝑡) 𝑥 sin 𝑥𝑡 d𝑡 = 𝑜 . (3.55) 𝑡 𝑚(𝑥) 𝜋/𝑥 We make the decomposition ∫ 𝜋 ( 𝑓 ◦ 𝛽+ ) (𝑡) sin 𝑥𝑡 d𝑡 2 𝑡 𝜋/𝑥 ∫ 𝜋 ∫ ( 𝑓 ◦ 𝛽+ ) (𝑡) sin 𝑥𝑡 d𝑡 − = 𝑡 0 𝜋/𝑥
𝜋− 𝑥𝜋
( 𝑓 ◦ 𝛽+ )(𝑡 + 𝜋/𝑥) sin 𝑥𝑡 d𝑡 𝑡 + 𝜋/𝑥
= 𝐼 (𝑥) + 𝐼𝐼 (𝑥) + 𝐼𝐼𝐼 (𝑥) + 𝑟 (𝑥), where
∫
𝜋
𝐼 (𝑥) = 𝜋/𝑥
∫
𝑓 ◦ (𝑡) − 𝑓 ◦ (𝑡 + 𝜋/𝑥) 𝛽+ (𝑡) sin 𝑥𝑡 d𝑡, 𝑡
𝜋
𝛽+ (𝑡) − 𝛽+ (𝑡 + 𝜋/𝑥) sin 𝑥𝑡 d𝑡, 𝑡 𝜋/𝑥 ∫ 𝜋 1 1 ◦ − sin 𝑥𝑡 d𝑡, 𝐼𝐼𝐼 (𝑥) = 𝑓 (𝑡 + 𝜋/𝑥) 𝛽+ (𝑡 + 𝜋/𝑥) 𝑡 𝑡 + 𝜋/𝑥 𝜋/𝑥 𝐼𝐼 (𝑥) =
and
𝑓 ◦ (𝑡 + 𝜋/𝑥)
(3.56)
∫ 𝑟 (𝑥) = − 0
𝜋/𝑥
( 𝑓 ◦ 𝛽+ ) (𝑡 + 𝜋/𝑥) sin 𝑥𝑡 d𝑡 + 𝑡 + 𝜋/𝑥
∫
𝜋 𝜋− 𝜋/𝑥
( 𝑓 ◦ 𝛽+ ) (𝑡 + 𝜋/𝑥) sin 𝑥𝑡 d𝑡. 𝑡 + 𝜋/𝑥
From (3.54) (applied not only with 𝜇 but with 𝜇± in place of 𝜇) we obtain |𝛽+ (𝑡 + 𝜋/𝑥) − 𝛽+ (𝑡)| ≤ 𝜅 ◦𝛼 𝑐 + (1/𝑡)𝑡 and
| 𝑓 ◦ (𝑡 + 𝜋/𝑥) − 𝑓 ◦ (𝑡)| ≤ 𝐶1 𝑐(1/𝑡) [ 𝑓𝑚 (𝑡)] 3/2 𝑡
for 𝑡 > 𝜋/𝑥. From the last inequality together with 𝑓 3/2 (𝑡)𝑡 = 1/𝑡 2 𝑚 3 (𝑡), 𝛽+ (𝑡) ≤ 𝐶3 𝑚˜ + (1/𝑡)𝑡 and 𝑚˜ + (𝑥) ≤ 𝑚 + (𝑥) = 𝑜(𝑚(𝑥)) we infer that ∫ 𝑥/2 𝑥 𝑚˜ + (𝑦) 𝑐(𝑦) · d𝑦 = 𝑜 . |𝐼 (𝑥)| ≤ 𝐶 𝑚(𝑦) 𝑚 2 (𝑦) 𝑚(𝑥) 1/2 Similarly ∫
𝑥/2
|𝐼𝐼 (𝑥)| ≤ 𝐶 1/2
and 𝐶 |𝐼𝐼𝐼 (𝑥)| ≤ 𝑥
∫
𝑥/ 𝜋
1/ 𝜋
𝑥 𝑐 + (𝑦) d𝑦 = 𝑜 𝑚(𝑥) 𝑚 2 (𝑦) 𝑚˜ + (𝑦)𝑦 𝑥 d𝑦 = 𝑜 , 𝑚(𝑥) 𝑚 2 (𝑦)
where the equalities follow from Lemma 3.5.2 and the monotonicity of 𝑦/𝑚(𝑦) in the bounds of |𝐼𝐼 (𝑥)| and |𝐼𝐼𝐼 (𝑥)|, respectively. Finally ∫
𝑥/ 𝜋
|𝑟 (𝑥)| ≤ 𝐶 𝑥/2 𝜋
𝑥 𝑚˜ + (𝑦) d𝑦 + 𝑂 (1/𝑥) = 𝑜 . 𝑚(𝑥) 𝑚 2 (𝑦)
Thus we have verified (3.55) and accordingly (3.47).
□
Proof (of (3.48)) Recalling 𝛥(𝑡) = 𝑓 ◦ (𝑡) − 𝑓∗◦ (𝑡) we have 𝜋 [𝑏 − (𝑥) − 𝑏 ∗− (𝑥)] ∫ 𝜋 ∫ 𝛽− (𝑡) sin 𝑥𝑡 d𝑡 + = 𝛥(𝑡) 𝑡 0 0 = 𝐽 (𝑥) + 𝐾 (𝑥) (say).
𝜋
𝑓∗◦ (𝑡)
𝛽− (𝑡) − 𝛽∗− (𝑡) sin 𝑥𝑡 d𝑡 𝑡
Suppose 𝑚 + /𝑚 → 0. Since 𝛽∗− (𝑡) − 𝛽− (𝑡) = 𝛽+ (𝑡) + 𝑂 (𝑡 2 ) and 𝑓∗◦ is essentially of the same regularity as 𝑓 ◦ , the proof of (3.55) and Lemma 3.5.1 applies to 𝐾 (𝑥) on the RHS above to yield 𝐾 (𝑥) = 𝑜(𝑥/𝑚(𝑥)). As for 𝐽 (𝑥), we first observe that in view of (3.51), | 𝛥(𝑡)| ≤ 𝐶 [ 𝑓𝑚 (𝑡)] 3/2 (𝑡 + 𝛽+ (𝑡)), (3.57) so that the integral defining 𝐽 (𝑥) restricted to [0, 𝜋/𝑥] is 𝑜(𝑥/𝑚(𝑥)) in view of Lemma 3.5.1. It remains to show that
∫
𝜋 𝜋/𝑥
𝛽− (𝑡) 𝑥 𝛥(𝑡) sin 𝑥𝑡 d𝑡 = 𝑜 . 𝑡 𝑚(𝑥)
(3.58)
We decompose 𝛥(𝑡) = 𝐷 1 (𝑡) + 𝐷 2 (𝑡), where 𝐷 1 (𝑡) = (𝛼∗2 − 𝛼2 ) (𝑡) + (𝛾∗2 − 𝛾 2 ) (𝑡) − 4(𝛽− 𝛽+ ) (𝑡) 𝑓 ◦ (𝑡) 𝑓∗◦ (𝑡), 𝐷 2 (𝑡) = 4(𝛽− 𝛽+ ) (𝑡) 𝑓 ◦ (𝑡) 𝑓∗◦ (𝑡). By (3.51) |𝐷 1 (𝑡) 𝛽− (𝑡)/𝑡| ≤ 𝐶1 (𝛼(𝑡) + 𝑡) [ 𝑓𝑚 (𝑡)] 3/2 ≤ 𝐶2 𝑐(1/𝑡)/[𝑡 2 𝑚 3 (1/𝑡)] and ∫ 𝜋 ∫ 𝑥/ 𝜋 𝑥 𝛽− (𝑡) 𝑐(𝑦) ≤ 𝐶2 sin 𝑥𝑡 d𝑦 = 𝑜 𝐷 (𝑡) d𝑡 , 1 3 𝑡 𝑚(𝑥) 𝜋/𝑥 1/ 𝜋 𝑚 (𝑦) as is easily verified. For the integral involving 𝐷 2 we proceed as in the proof of (3.55). To this end it suffices to evaluate the integrals corresponding to 𝐼 (𝑥) and 𝐼𝐼 (𝑥), namely, ∫ 𝜋 𝐷 2 (𝑡) − 𝐷 2 (𝑡 + 𝜋/𝑥) 𝛽− (𝑡) sin 𝑥𝑡 d𝑡 𝐽𝐼 (𝑥) := 𝑡 𝜋/𝑥 and
∫
𝜋
𝐽𝐼𝐼 (𝑥) :=
𝐷 2 (𝑡 + 𝜋/𝑥) 𝜋/𝑥
𝛽− (𝑡) − 𝛽− (𝑡 + 𝜋/𝑥) sin 𝑥𝑡 d𝑡, 𝑡
the other integrals being easily dealt with as before. By (3.54) the integrand for 𝐽𝐼𝐼 (𝑥) is dominated in absolute value by a constant multiple of [ 𝑓𝑚 (𝑡)] 3/2 𝛽+ (𝑡)𝑐(1/𝑡), from which it follows immediately that 𝐽𝐼𝐼 (𝑥) = 𝑜(𝑥/𝑚(𝑥)). For the evaluation of 𝐽𝐼 (𝑥), observe that |(𝛽− 𝛽+ ) (𝑡) − (𝛽− 𝛽+ ) (𝑡 + 𝜋/𝑥)| ≤ 𝐶1 {𝛽+ (𝑡)𝑐(1/𝑡)𝑡 + 𝛽− (𝑡)𝑐 + (1/𝑡)𝑡} and
|( 𝑓 ◦ 𝑓𝑚 ) (𝑡) − ( 𝑓 ◦ 𝑓𝑚 ) (𝑡 + 𝜋/𝑥)| 𝛽− (𝑡) ≤ 𝐶1 𝑐(1/𝑡)𝑡 [ 𝑓𝑚 (𝑡)] 2
so that |𝐷 2 (𝑡) − 𝐷 2 (𝑡 + 𝜋/𝑥)|𝛽− (𝑡) ≤ 𝐶 {𝛽+ (𝑡)𝑐(1/𝑡) + 𝛽− (𝑡)𝑐 + (1/𝑡)} [ 𝑓𝑚 (𝑡)] 3/2 𝑡 𝑐 + (1/𝑡) 𝑚˜ + (1/𝑡)𝑐(1/𝑡) + 𝐶′ 2 2 . ≤ 𝐶′ 𝑡 2 𝑚 3 1/𝑡) 𝑡 𝑚 (1/𝑡) The integral of the first term of the last expression is immediately evaluated and that of the second by Lemma 3.5.2, showing ∫ 𝜋 ∫ 𝑥/ 𝜋 𝑥 𝛽− (𝑡) 𝑚˜ + (𝑦)𝑐(𝑦) 𝑐 + (𝑦) sin 𝑥𝑡 + d𝑦 = 𝑜 . 𝐷 (𝑡) d𝑡 ≤ 𝐶 2 2 𝑡 𝑚(𝑥) 𝑚 3 (𝑦) 𝑚 2 (𝑦) 𝜋/𝑥 1/ 𝜋 The proof of (3.48) is complete.
□
3.6 An Example Exhibiting Irregular Behaviour of a(x)

We give a recurrent symmetric r.w. such that $E[|X|^\alpha/\log(|X|+2)] < \infty$ and $P[|X| \ge x] = O(|x|^{-\alpha})$ for some $1 < \alpha < 2$, and such that for every $0 < r < 1$,
  $\bar a(rx_n)/\bar a(x_n) \longrightarrow \infty$ as $n\to\infty$   (3.59)
for some sequence $x_n \uparrow \infty$, $x_n \in 2\mathbb{Z}$, which provides an example of an irregularly behaving $\bar a(x)$, as forewarned in the paragraph immediately after Theorem 3.1.2. In fact for the law $F$ defined below it holds that for any $0 < \delta < 2-\alpha$ there exists a positive constant $c_*$ such that for all sufficiently large $n$,
  $\bar a(x)/\bar a(x_n) \ge c_* n$  for all integers $x$ satisfying $2^{-\delta n} < x/x_n \le 1 - 2^{-\delta n}$   (3.60)
(showing 𝑎¯ diverges to ∞, fluctuating with valleys at the points 𝑥 𝑛 – having relatively steep (gentle) slopes on their left (right) – and very wide plateaus in between). Put 2 2 𝑥 𝑛 = 2𝑛 , 𝜆 𝑛 = 𝑥 𝑛−𝛼 = 2−𝛼𝑛 (𝑛 = 0, 1, 2, . . .), 𝐴𝜆 𝑛 if 𝑥 = ±𝑥 𝑛 (𝑛 = 0, 1, 2, . . .), 𝑝(𝑥) = 0 otherwise, where 𝐴 is the constant chosen so as to make 𝑝(·) a probability. (1) Denote by 𝜂 𝑛,𝑘,𝑡 the value of 1 − 2(1 − cos 𝑢)/𝑢 2 at 𝑢 = 𝑥 𝑛−𝑘 𝑡 so that uniformly for |𝑡| < 1/𝑥 𝑛−1 and 𝑘 = 1, 2, . . . , 𝑛, 𝜆 𝑛−𝑘 [1 − cos(𝑥 𝑛−𝑘 𝑡)] 1− and
(1) 𝜂 𝑛,𝑘,𝑡
(1) 𝜂 𝑛,𝑘,𝑡 = 𝑜(1)
for
=
2 𝜆 𝑛−𝑘 𝑥 𝑛−𝑘
2𝑥 𝑛2
𝑘≠1
(𝑥 𝑛 𝑡) 2 =
1 −(2−𝛼) (2𝑛−𝑘) 𝑘 2 𝜆 𝑛 (𝑥 𝑛 𝑡) 2 2
(1) and 0 ≤ 𝜂 𝑛,1,𝑡 < (𝑥 𝑛−1 𝑡) 2 /12.
Then for |𝑡| < 1/𝑥 𝑛−1 , 𝑥∑︁ 𝑛−1
𝑝(𝑥) (1 − cos 𝑥𝑡) =
𝑥=1
𝑛 ∑︁
𝑝(𝑥 𝑛−𝑘 ) (1 − cos 𝑥 𝑛−𝑘 𝑡)
𝑘=1 (1) = 𝐴𝜀 𝑛 𝜆 𝑛 (𝑥 𝑛 𝑡) 2 {1 − 𝜂 𝑛,1,𝑡 + 𝑜(1)}
with
𝜀 𝑛 = 21−𝛼 2−2(2−𝛼)𝑛 ,
since 𝑝(𝑥 𝑛−1 ) (1 − cos 𝑥 𝑛−1 𝑡), the last term of the series, is dominant over the rest. On the other hand ∑︁ 𝐴𝜆 𝑛 + 𝑂 (𝜆 𝑛+1 ) (𝑥 𝑛−1 ≤ 𝑥 < 𝑥 𝑛 ), 𝜇+ (𝑥) = 𝑝(𝑦) = (3.61) 𝑜(𝜆 𝑛 𝜀 𝑛 ) (𝑥 ≥ 𝑥 𝑛 ). 𝑦>𝑥
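The following small Python sketch is an illustration added here (not in the original): it computes the normalising constant $A$ of the law just defined (atoms $A\lambda_n$ at $\pm x_n$ with $x_n = 2^{n^2}$, $\lambda_n = x_n^{-\alpha}$) and checks the first case of (3.61) numerically; the choice $\alpha = 1.5$ is an arbitrary exponent in $(1,2)$.

# Illustration only: the normalising constant A and the upper tail mu_+
# of the example law; for x_{n-1} <= x < x_n one has mu_+(x) ~ A*lam_n.
import numpy as np

alpha_ = 1.5                      # any exponent with 1 < alpha < 2
N = 12                            # atoms n = 0, ..., N-1 (enough before underflow)
lam = np.array([2.0 ** (-alpha_ * n * n) for n in range(N)])
xn  = np.array([2.0 ** (n * n) for n in range(N)])
A = 1.0 / (2.0 * lam.sum())       # makes p(+-x_n) = A*lam_n a probability

def mu_plus(x):                   # mu_+(x) = sum of p(y) over y > x
    return A * lam[xn > x].sum()

n = 4
for x in (xn[n - 1], 0.5 * (xn[n - 1] + xn[n]), xn[n] - 1.0):
    print(x, mu_plus(x) / (A * lam[n]))   # ratios are 1 + O(lam_{n+1}/lam_n)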
Í 𝑥𝑛 𝑝(𝑥) (1 − cos 𝑡𝑥) + 𝑂 (𝜇(𝑥 𝑛 )), observe that uniformly On writing 1 − 𝜓(𝑡) = 2 𝑥=1 for |𝑡| < 1/𝑥 𝑛−1 , h i (1) ) + 𝑜(𝜆 𝑛 𝜀 𝑛 ). (3.62) 1 − 𝜓(𝑡) = 2𝐴𝜆 𝑛 1 − cos 𝑥 𝑛 𝑡 + 𝜀 𝑛 (𝑥 𝑛 𝑡) 2 (1 − 𝜂 𝑛,1,𝑡 Also, 2𝐴𝜆 𝑛 (1 − cos 𝑥 𝑛 𝑡) = 𝐴𝜆 𝑛 (𝑥 𝑛 𝑡) 2 (1 − 𝜂 (2) (𝑛, 𝑡)), with 0 ≤ 𝜂 (2) (𝑛, 𝑡) ≤ 1/12 for |𝑡| < 1/𝑥 𝑛 . Recall that ∫ 𝜋 1 1 − cos 𝑥𝑡 = 𝑎(𝑥) ¯ d𝑡. 𝜋 0 1 − 𝜓(𝑡) We break the integral into three parts ∫
∫
1/𝑥𝑛
(𝜋 𝐴) 𝑎(𝑥) ¯ =
∫
1/𝑥𝑛−1
+
𝜋
+
0
1/𝑥𝑛
1/𝑥𝑛−1
= I (𝑥) + II (𝑥) + III (𝑥)
1 − cos 𝑥𝑡 d𝑡 [1 − 𝜓(𝑡)]/𝐴
(say).
Upper bound of 𝑎(𝑥 ¯ 𝑛 ). By the trivial inequality 1 − 𝜓(𝑡) ≥ 2𝑝(𝑥 𝑛 ) (1 − cos 𝑥 𝑛 𝑡) one sees I (𝑥) ≤ 1/(2𝜆 𝑛 𝑥 𝑛 ) = 𝑥 𝑛𝛼−1 /2. Put 𝑟 = 𝑟 (𝑛, 𝑥) = 𝑥/𝑥 𝑛 . Then by (3.62) it follows that for sufficiently large 𝑛, ∫
1/𝑥𝑛−1
1 − cos 𝑥𝑡 d𝑡 2𝜆 𝑛 (1 − cos 𝑥 𝑛 𝑡) + 𝜀 𝑛 𝜆 𝑛 (𝑥 𝑛 𝑡) 2
II (𝑥) ≤ 1/𝑥𝑛
=
1 𝜆𝑛𝑥𝑛
∫
𝑥𝑛 /𝑥𝑛−1
1
1 − cos 𝑟𝑢 d𝑢. 2(1 − cos 𝑢) + 𝜀 𝑛 𝑢 2
(3.63)
If 𝑥 = 𝑥 𝑛 , so that 𝑟 = 1, this leads to II (𝑥 𝑛 ) ≤
𝑥 𝑛𝛼−1
22𝑛 ∫ ∑︁ 𝑘=0
𝜋
−𝜋
1 − cos 𝑢 d𝑢. 2(1 − cos 𝑢) + 𝜀 𝑛 (𝑢 + 2𝜋𝑘) 2
Noting that 𝑢 + 2𝜋𝑘 > 2𝜋𝑘 − 𝜋 and 2(𝑢/𝜋) 2 < 1 − cos 𝑢 < 12 𝑢 2 for −𝜋 < 𝑢 < 𝜋, one observes ∫ 𝜋 ∫ ∞ ∫ 𝜋 𝜋/2 𝜋 3 /4 𝑢2 d𝑢 = 𝑢 d𝑢 = d𝑘 √ √ 𝜀𝑛 0 𝜀𝑛 𝑢2 + 𝜀 𝑛 𝑘 2 0 0 to conclude II (𝑥 𝑛 ) ≤
𝐶1 𝑥 𝑛𝛼−1 . √ 𝜀𝑛
For the evaluation of III we apply (3.62) with 𝑛 − 1 in place of 𝑛 to deduce that ∫
1/𝑥𝑛−2
1/𝑥𝑛−1
1 − cos 𝑥 𝑛 𝑡 d𝑡 ≤ [1 − 𝜓(𝑡)]/𝐴
∫
1/𝑥𝑛−2
2 d𝑡 2 1/𝑥𝑛−1 2𝜆 𝑛−1 (1 − cos 𝑥 𝑛−1 𝑡) + 𝜀 𝑛−1 𝜆 𝑛−1 (𝑥 𝑛−1 𝑡) ∫ 𝑥𝑛−1 /𝑥𝑛−2 1 2 = d𝑢. 𝜆 𝑛−1 𝑥 𝑛−1 1 2(1 − cos 𝑢) + 𝜀 𝑛−1 𝑢 2
The last integral is less than ∫ 1
𝜋
22𝑛
∑︁ 1 d𝑢 + 2 2 𝑢 /3 + 𝜀 𝑛−1 𝑢 𝑘=1
≤ 𝐶1
22𝑛 ∑︁ 𝑘=1
√
∫
𝜋
−𝜋
1 d𝑢 𝑢 2 /3 + 𝜀 𝑛−1 (−𝜋 + 2𝜋𝑘) 2
𝑛 1 ≤ 𝐶2 √ . 𝜀𝑛 𝑘 𝜀𝑛
Since 𝜆 𝑛 𝑥 𝑛 /𝜆 𝑛−1 𝑥 𝑛−1 = 2−( 𝛼−1) (2𝑛−1) and the remaining part of III is smaller, √ III (𝑥 𝑛 ) ≤ 𝐶3 𝑛2−2( 𝛼−1)𝑛 𝑥 𝑛𝛼−1 / 𝜀 𝑛 . Consequently
√ 𝑎(𝑥 ¯ 𝑛 ) ≤ 𝐶𝑥 𝑛𝛼−1 / 𝜀 𝑛 .
(3.64)
Lower bound of 𝑎(𝑥). ¯ By (3.62) it follows that for 𝑛 sufficiently large, [1 − 𝜓(𝑡)]/𝐴 ≤ 2𝜆 𝑛 (1 − cos 𝑥 𝑛 𝑡) + 3𝜀 𝑛 𝜆 𝑛 (𝑥 𝑛 𝑡) 2 Let
1 2
for 1/𝑥 𝑛 ≤ |𝑡| < 1/𝑥 𝑛−1 .
< 𝑥/𝑥 𝑛 ≤ 1 and put 𝑏 = 𝑏(𝑛, 𝑥) = 𝑦/𝑥 𝑛 = 1 − 𝑥/𝑥 𝑛 (< 1/2). Then II (𝑥) ≥ 𝑥 𝑛𝛼−1
∫
𝑥𝑛 /𝑥𝑛−1
1
1 − cos(1 − 𝑏)𝑢 d𝑢. 2(1 − cos 𝑢) + 3𝜀 𝑛 𝑢 2
(3.65)
The RHS is bounded from below by 22𝑛−1 ∑︁/2 𝜋 ∫ 𝜋 𝑘=1
−𝜋
1 − cos [(1 − 𝑏)𝑢 − 2𝜋𝑏𝑘] d𝑢. 𝑢 2 + 3𝜋 2 𝜀 𝑛 (2𝑘 + 1) 2
√ √ If 𝑏 ≤ 𝜀 𝑛 = 2 (1−𝛼)/2 2−(2−𝛼)𝑛 then, restricting the summation to 𝑘 < 1/ 𝜀 𝑛 , one sees that √ 1/∑︁ 𝜀𝑛 ∫ II (𝑥) ≥ 𝐶 𝑥 𝑛𝛼−1 0 𝑘=1
𝜋
𝑢2 + 𝑏2 𝑘 2 𝐶𝜋/2 d𝑢 ≥ √ 2 2 𝜀𝑛 𝑢 + 𝜀𝑛 𝑘
√ 𝑥 if 1 − 𝜀 𝑛 ≤ ≤ 1. 𝑥𝑛
(3.66)
√ For 𝑏 ≥ 𝜀 𝑛 , we restrict the summation to the intervals (𝜈 + 41 )/𝑏 ≤ 𝑘 ≤ (𝜈 + 34 )/𝑏, 𝜈 = 0, 1, 2, . . . , 2 (2−𝛼)𝑛 𝑏/2 to see that
√ √ 1/∑︁ 𝜀𝑛 ∫ 𝜋/2 log 4𝑏/ 𝜀 𝑛 II (𝑥) d𝑢 ≥𝐶 ≥𝐶 √ 𝜀𝑛 𝑢2 + 𝜀 𝑛 𝑘 2 𝑥 𝑛𝛼−1 𝑘=1/4𝑏 − 𝜋/2
if
√ 1 𝑥 ≤ ≤ 1 − 𝜀𝑛 . 2 𝑥𝑛 (3.67)
In particular this shows that if 0 < 𝛿 < 2 − 𝛼, then √ 𝑎(𝑥) ¯ ≥ 𝐶𝑛𝑥 𝑛𝛼−1 / 𝜀 𝑛
for 2−1 < 𝑥/𝑥 𝑛 < 1 − 2− 𝛿𝑛 .
Let 1 ≤ 𝑗 < (2 − 𝛼)𝑛 + 1 and 2− 𝑗−1 < 𝑥/𝑥 𝑛 ≤ 2− 𝑗 . Put ( 𝑗)
𝑏 = 𝑦/𝑥 𝑛 = 1 − 2 𝑗 𝑥/𝑥 𝑛 . Then we have (3.65) with (1 − 𝑏) replaced by 2− 𝑗 (1 − 𝑏) and making the changes of variable 𝑢 = 𝑠 + 2𝜋(2 𝑗−1 + 𝑘) in (3.65) with 𝑘 = 0, 1, . . . , we infer that 𝐼𝐼 (𝑥) ≥ 𝑐1 𝑥 𝑛𝛼−1
𝑗 [22𝑛 −2 ∑︁]/4 𝜋 ∫ 𝜋
𝑘=0
−𝜋
1 − cos 𝜃 𝑗, 𝑥 (𝑠, 𝑘) 2(1 − cos 𝑠) + 3𝜀 𝑛 [2𝜋(2 𝑗 + 𝑘)] 2
d𝑠,
where 𝜃 𝑗, 𝑥 (𝑠, 𝑘) = 2− 𝑗 (1 − 𝑏)𝑠 + 𝜋 + 2𝜋 2− 𝑗 (1 − 𝑏)𝑘 − 21 𝑏 . Noting 𝑏 < 21 , we see that for 𝜈 = 0, 1, 2, . . . 1 cos 𝜃 𝑗, 𝑥 (𝑠, 𝑘) ≤ √ 2
if |𝑠|
𝜀𝑛𝑥𝑛, (𝑥 𝑛−1 ) 2−𝛼 𝑥 < 𝜀 𝑛 𝑥 𝑛 ,
(3.68)
and then that 𝑥𝜂(𝑥) −→ 𝑐(𝑥)
for 𝑥 with 1 − 𝑜(1) < 𝑥/𝑥 𝑛−1 < 1 + 𝑜(22( 𝛼−1)𝑛 ), for 𝑥 with 22( 𝛼−1)𝑛 𝑥 𝑛−1 ≪ 𝑥 ≪ 𝑥 𝑛 .
0 ∞
From (3.62) one also infers that 𝛼(𝑡)/[𝑡𝑐(1/𝑡)] oscillates between 8 and 4𝜀 𝑛 𝑡 2 𝑥 𝑛2 {1 − 𝑜(1)} about ⌊𝑀/2𝜋⌋ times as 𝑡 ranges over the interval [1/𝑥 𝑛 , 𝑀/𝑥 𝑛 ] and that 𝛼(2𝜋/𝑥 𝑛 )/𝛼(𝜋/𝑥 𝑛 ) = 𝑂 (𝜀 𝑛 ). Thus 𝛼(𝑡) behaves quite irregularly. This example also shows that the converse of Theorem 3.1.2(ii) is untrue. From what we observed above, it follows that lim inf
𝛼(𝑡) =0 𝜂(1/𝑡)
and
lim sup
𝛼(𝑡) 𝛼(𝑡) ≥ lim sup > 0. 𝜂(1/𝑡) 𝑡𝑚(1/𝑡)
On the other hand lim
𝑎(𝑥) ¯ = ∞ if 𝛼 < 4/3, 𝑥/𝑚(𝑥)
(3.69)
as is verified below. Thus the condition lim 𝛼(𝑡)/𝑡𝑚(1/𝑡) = 0 is not necessary for the ratio 𝑎(𝑥)/[𝑥/𝑚(𝑥)] ¯ to diverge to infinity. [For 𝛼 > 4/3 the ratio 𝑎(𝑥)/[𝑥/𝑚(𝑥)] ¯ is bounded along the sequence 𝑥 𝑛∗ = 𝜀 𝑛 𝑥 𝑛 – as one may easily show.] To verify (3.69) we prove √ 𝑎(𝑥) ¯ ≥ 𝑐 1 [𝑥 𝑛−1 ] 𝛼−1 / 𝜀 𝑛 for 𝑥 ≥ 𝑥 𝑛−1 (3.70) with a constant 𝑐 1 > 0. Observe that 𝛼 < 4/3 entails – in fact, is equivalent to – each √ 𝛼−1 /√𝜀 ) and 𝜀 𝑥 = 𝑜(𝑥 of the relations 𝑥 𝑛𝛼−1 = 𝑜(𝑥 𝑛−1 𝑛 𝑛 𝑛 𝑛−1 / 𝜀 𝑛 ). Then, comparing the above lower bound of 𝑎(𝑥) ¯ to the upper bound of 𝑥/𝑚(𝑥) obtained from (3.68) ensures the truth of (3.69). For the proof of (3.70) we evaluate III (𝑥). Changing the variable 𝑡 = 𝑢/𝑥, we have III (𝑥) ≥
∫
1 3𝜆 𝑛−1 𝑥
𝑥/𝑥𝑛−2
𝑥/𝑥𝑛−1
(1 − cos 𝑢) d𝑢 . 1 − cos(𝑥 𝑛−1 𝑢/𝑥) + 𝜀 𝑛−1 (𝑥 𝑛−1 𝑢/𝑥) 2
The integral on the RHS is easily evaluated to be larger than 1 4
∫
𝑥/𝑥𝑛−2
𝑥/𝑥𝑛−1
d𝑢
𝑥/𝑥 𝑛−1 √ 2 = 4√𝜀 𝑛 1 + 𝜀 𝑛 (𝑥 𝑛−1 /𝑥)𝑢
showing (3.70) since 𝑎(𝑥) ¯ ≥ III (𝑥)/(𝜋 𝐴).
∫
√ 22𝑛−3 𝜀𝑛
√ 𝜀𝑛
d𝑠 𝜋𝑥 ∼ √ , 2 8𝑥 1+𝑠 𝑛−1 𝜀 𝑛
Chapter 4
Some Explicit Asymptotic Forms of 𝒂(𝒙)
Here we compute asymptotic forms of 𝑎(𝑥), explicit in terms of 𝐹, in two cases, first when 𝐹 is relatively stable and second when 𝐹 belongs to the domain of attraction of a stable law, in Sections 4.1 and 4.2, respectively. In Section 4.3, we give some estimates of the increments 𝑎(𝑥 + 1) − 𝑎(𝑥). Throughout this chapter 𝑆 will be recurrent and 𝜎 2 = ∞ except in (b) of Proposition 4.3.5. We continue to use the notation introduced in Chapter 3.
4.1 Relatively Stable Distributions When the r.w. 𝑆 is r.s. – we shall also call 𝐹 r.s. – one can obtain some exact asymptotic forms of 𝑎(𝑥). Recall that 𝑆 is p.r.s. if and only if 𝐴(𝑥)/𝑥𝜇(𝑥) −→ ∞ as 𝑥 → ∞, ∫𝑥 where 𝐴(𝑥) = 𝐴+ (𝑥) − 𝐴− (𝑥), 𝐴± (𝑥) = 0 𝜇± (𝑦) d𝑦; and in this case 𝐵𝑛 can be chosen so that 𝐵𝑛 ∼ 𝑛𝐴(𝐵𝑛 ); clearly 𝐴 ′ (𝑥) = 𝑜( 𝐴(𝑥)/𝑥) (𝑥 → ∞), entailing 𝐴 and 𝐴+ are s.v. at infinity; we have chosen an integer 𝑥0 > 0 so that 𝐴(𝑥) > 0 for 𝑥 ≥ 𝑥0 (see Section 2.4). Let 𝑀 (𝑥) be a function of 𝑥 ≥ 𝑥 0 defined by ∫ 𝑥 𝜇± (𝑦) d𝑦 and 𝑀 (𝑥) = 𝑀+ (𝑥) + 𝑀− (𝑥). (4.1) 𝑀± (𝑥) = 2 𝑥0 𝐴 (𝑦) Eq(2.20)
Theorem 4.1.1 Suppose that 𝐹 is recurrent and p.r.s. Then as 𝑥 → ∞ (i) 𝑎(𝑥) − 𝑎(−𝑥) ∼ 1/𝐴(𝑥); (ii) 2𝑎(𝑥) ¯ ∼ 𝑀 (𝑥); (iii) 𝑎(−𝑥)/𝑎(𝑥) → 1 if and only if 𝑀+ (𝑥)/𝑀 (𝑥) → 1/2; and 𝑎(𝑥) − 𝑎(−𝑥) (iv) if 𝐸 𝑋 = 0 and 𝑚 + /𝑚 → 1/2, then → ∞. 𝑥/𝑚(𝑥)
Suppose that 𝐸 𝑋 = 0 and (2.20) holds. Then 𝐴(𝑥) = 𝜂− (𝑥) − 𝜂+ (𝑥) and 𝛼(𝑡) ≪ |𝛾(𝑡)| ∼ | 𝐴(1/|𝑡|)|, (𝑡 → 0) (see (P2) below), hence condition (H) holds if and only if lim sup 𝜂+ (𝑥)/𝜂(𝑥) < 1/2, and if this is the case, by Theorems 3.1.1 and 3.1.2, ¯ 𝑎(𝑥) ≍ 𝑥/𝑚(𝑥) [which also follows from (ii) above]. ∫ 𝑥 Let (2.20) hold and put 𝐾 (𝑥) = 𝜇+ (𝑥) − 𝜇− (𝑥). Then log[ 𝐴(𝑥)/𝐴(𝑥0 )] = 𝜀(𝑡) d𝑡/𝑡 with 𝜀(𝑡) = 𝑡𝐾 (𝑡)/𝐴(𝑡), which approaches zero as 𝑡 → ∞, hence 𝑥0 𝐴(𝑥) is a normalised s.v. function. By (1/𝐴) ′ (𝑥) = −𝐾 (𝑥)/𝐴2 (𝑥), we have ∫ 𝑥 1 1 −𝐾 (𝑦) − = d𝑦 = 𝑀− (𝑥) − 𝑀+ (𝑥), (4.2) 2 𝐴(𝑥) 𝐴(𝑥 0 ) 𝑥0 𝐴 (𝑦) in particular 𝑀− (𝑥) ≥ 𝑀+ (𝑥) ∨ [1/𝐴(𝑥)] − 1/𝐴(𝑥 0 ). Because of the recurrence of the r.w., we have 𝑀− (𝑥) → ∞ (see (P1) below), and from (i) and (ii) of Theorem 4.1.1 one infers that, as 𝑥 → ∞, 𝑎(𝑥) ∼ 𝑀− (𝑥)
and
𝑎(−𝑥) = 𝑀+ (𝑥) + 𝑜(𝑀− (𝑥));
(4.3)
it also follows that 𝑥𝜇(𝑥)/𝐴2 (𝑥) = 𝑜 (1/𝐴(𝑥)) = 𝑜(𝑀− (𝑥)), so that both 𝑀− and 𝑀 are s.v. Thus Theorem 4.1.1 yields the following Corollary 4.1.2 Suppose that (2.20) holds. Then both 𝑎(𝑥) and 𝑎(𝑥) ¯ are s.v. at infinity, and (4.3) holds. The proof of Theorem 4.1.1 rests on the following results obtained in [83]: If (2.20) holds, then ∫∞ (P1) 𝑆 is recurrent if and only if 𝑥 𝜇(𝑥) d𝑥/𝐴2 (𝑥) = ∞, 0 (P2) 𝛼(𝜃) = 𝑜(𝛾(𝜃)) and 𝛾(𝜃) ∼ −𝐴(1/𝜃) (𝜃 ↓ 0),1 (P3) there exists a constant 𝐶 such that for any 𝜀 > 0 and all 𝑥 large enough, ∫ ∫ 𝜋 𝐶𝜀 𝑓 ◦ (𝑡)𝛼(𝑡) cos 𝑥𝑡 𝜋 𝑓 ◦ (𝑡)𝛾(𝑡) sin 𝑥𝑡 d𝑡 + d𝑡 ≤ 𝐴(𝑥) , (4.4) 𝑡 𝑡 1/𝜀 𝑥 1/𝜀 𝑥 where 𝑓 ◦ (𝑡) = 1/[𝛼2 (𝑡) + 𝛾 2 (𝑡)] as given in (3.13). (See Lemmas 4, 6, and 8 of [83] for (P1), (P2), and (P3), respectively.) The proofs of (P1) and (P2) are rather standard. We give a proof of (P3) at the end of this section. Proof (of Theorem 4.1.1) Proof of (i). By (3.42), which is valid since 1/|𝛾(𝑡)| is summable over [0, 1] because of (P2), ∫ 𝜋 −𝛾(𝑡) 𝑓 ◦ (𝑡) sin 𝑥𝑡 2 d𝑡. 𝑎(𝑥) − 𝑎(−𝑥) = 𝜋 0 𝑡 By (P2)
𝑓 ◦ (𝑡) ∼ 1/𝐴2 (1/𝑡)
and
− 𝛾(𝑡) 𝑓 ◦ (𝑡) ∼ 1/𝐴(1/𝑡)
(4.5)
1 If 𝐸 |𝑋 | = ∞, the integrals defining 𝛼(𝑡) and 𝛾 (𝑡) are regarded as improper integrals, which are evidently convergent.
(𝑡 ↓ 0). Let 0 < 𝜀 < 1. For the above integral restricted to 𝑡 ≥ 1/𝜀𝑥 we have the bound 𝐶𝜀/𝐴(𝑥) because of (P3), while that restricted to 𝑡 < 𝜀/𝑥 is dominated in ∫ 𝜀/𝑥 absolute value by a constant multiple of 𝑥 0 d𝑡/𝐴(1/𝑡) ∼ 𝜀/𝐴(𝑥). These reduce our task to showing that, for each 𝜀, ∫
1/𝜀 𝑥
lim 𝐴(𝑥)
𝑥→∞
𝜀/𝑥
−𝛾(𝑡) 𝑓 ◦ (𝑡) sin 𝑥𝑡 d𝑡 = 𝑡
∫
1/𝜀 𝜀
sin 𝑡 d𝑡, 𝑡
(4.6)
for as 𝜀 ↓ 0 the integral on the RHS converges to 𝜋/2. The change of the variable 𝑡 = 𝑤/𝑥 transforms the integral under the above limit to ∫
1/𝜀
𝑔(𝑤/𝑥) 𝜀
sin 𝑤 d𝑤 𝑤
where 𝑔(𝑡) = −𝛾(𝑡) 𝑓 ◦ (𝑡).
By the slow variation of 𝐴 together with (4.5) it follows that 𝑔(𝑤/𝑥) 𝐴(𝑥) → 1 as 𝑥 → ∞ uniformly for 𝜀 ≤ 𝑤 ≤ 1/𝜀. Thus we have (4.6). Proof of (ii). Given a constant 0 < 𝜀 < 1, we decompose the integral in the representation of 𝑎¯ given in (3.10) as follows: 𝜋 𝑎(𝑥) ¯ = I (𝑥) − II (𝑥) + III (𝑥), where ∫
𝜋
I (𝑥) = 1/𝜀 𝑥
and
∫ 𝜋 𝑓 ◦ (𝑡)𝛼(𝑡) 𝑓 ◦ (𝑡)𝛼(𝑡) cos 𝑥𝑡 d𝑡, d𝑡, II (𝑥) = 𝑡 𝑡 1/𝜀 𝑥 ∫ 1/𝜀 𝑥 ◦ 𝑓 (𝑡)𝛼(𝑡) (1 − cos 𝑥𝑡) d𝑡. III (𝑥) = 𝑡 0
By (2.20) it follows that ∫
𝑥
𝑦𝜇(𝑦) d𝑦 = 𝑜 (𝑥 𝐴(𝑥)) .
𝑐(𝑥) =
(4.7)
0
Using 𝑓 ◦ (𝑡) ∼ 1/𝐴2 (1/𝑡) (see (4.5)) as well as 𝛼(𝑡) ≤ 𝐶1 𝑡𝑐(1/𝑡), we then deduce that ∫ ∞ ∫ 1/𝜀 𝑥 1 𝑐(1/𝑡)𝑡 2 d𝑦 2 d𝑡 = 𝑥 d𝑦 × 𝑜(1) = 𝑜 . III (𝑥) ≤ 𝐶1 𝑥 2 3 𝐴(𝑥) 𝐴2 (1/𝑡) 𝜀 𝑥 𝑦 𝐴(𝑦) 0 By (P3) we also have |II (𝑥)| ≤ 𝐶𝜀/𝐴(𝑥). Hence for the proof of (ii) it suffices to verify I (𝑥) ∼ 12 𝜋𝑀 (𝑥). Since 𝛼(𝑡) is positive, this follows if we can show ∫ 𝑏 𝜋 𝛼(𝑡) d𝑡 ∼ 𝑀 (𝑥) (4.8) I˜ (𝑥) := 2 2 1/𝜀 𝑥 𝐴 (1/𝑡)𝑡
for some and, therefore, any positive constant 𝑏 ≤ (1/𝑥0 ) ∧ 𝜋. We decompose the integral that defines 𝛼(𝑡) by splitting its range, ∫
∫
𝜀/𝑡
∞
+
+
𝛼(𝑡) =
∫
1/𝜀𝑡
0
𝜇(𝑦) sin 𝑡𝑦 d𝑦 1/𝜀𝑡
𝜀/𝑡
= 𝛼1 (𝑡) + 𝛼2 (𝑡) + 𝛼3 (𝑡)
(say),
and accordingly ∫
𝑏
[𝛼1 (𝑡) + 𝛼2 (𝑡) + 𝛼3 (𝑡)]
𝐼˜(𝑥) = 1/𝜀 𝑥
∫
Obviously 𝛼1 (𝑡) = to ∫
𝑏
𝛼1 (𝑡) 1/𝜀 𝑥
.
𝜀/𝑡
𝜇(𝑦) sin 𝑡𝑦 d𝑦 ≤ 𝑡𝑐(𝜀/𝑡), and integrating by parts leads
∫
𝜀𝑥
0
d𝑡 ≤ 2 𝐴 (1/𝑡)𝑡
d𝑡 𝐴2 (1/𝑡)𝑡
1/𝑏
=−
𝑐(𝜀𝑦) d𝑦 𝐴2 (𝑦)𝑦 2
𝑐(𝜀 2 𝑥){1 + 𝑜(1)} + 𝑂 (1) + 𝜀𝑥 𝐴2 (𝜀𝑥)
∫
𝜀𝑥
1/𝑏
𝜀 2 𝜇(𝜀𝑦) d𝑦 {1 + 𝑜(1)} 𝐴2 (𝑦)
≤ 𝜀𝑀 (𝑥){1 + 𝑜(1)}. Noting that 𝛼2 (𝑡) = ∫
∫ 1/𝜀𝑡 𝜀/𝑡
𝑏
𝛼2 (𝑡) 1/𝜀 𝑥
(4.9)
𝜇(𝑦) sin 𝑡𝑦 d𝑦 = 𝑡 −1
d𝑡 = 𝐴2 (1/𝑡)𝑡
∫
∫ 1/𝜀 𝜀
∫
1/𝜀
𝜇(𝑤/𝑡) sin 𝑤 d𝑤, we infer 𝜀𝑥
sin 𝑤 d𝑤 1/𝑏
𝜀
𝜇(𝑤𝑦) d𝑦 . 𝐴2 (𝑦)
Changing the variable 𝑦 = 𝑧/𝑤 transforms the last repeated integral into ∫
1/𝜀 𝜀
sin 𝑤 d𝑤 𝑤
∫
𝜀𝑤𝑥
𝑤/𝑏
𝜇(𝑧) d𝑧 . 𝐴2 (𝑧/𝑤)
Because of the slow variation of 𝐴, the inner integral is asymptotically equivalent to 𝑀 (𝜀𝑤𝑥) as 𝑥 → ∞ uniformly for 𝜀 ≤ 𝑤 ≤ 1/𝜀. Since 𝑀 (𝑥) is s.v., we can conclude that for each 𝜀, as 𝑥 → ∞, ∫ 1/𝜀 ∫ 𝑏 d𝑡 sin 𝑤 𝛼2 (𝑡) 2 ∼ d𝑤 𝑀 (𝑥). (4.10) 𝑤 𝐴 (1/𝑡)𝑡 1/𝜀 𝑥 𝜀 By definition of 𝛼3 , for any 0 < 𝛿 ≤ 𝑏, ∫
𝑏
𝛼3 (𝑡) 1/𝜀 𝑥
d𝑡 = 2 𝐴 (1/𝑡)𝑡
∫
𝛿
1/𝜀 𝑥
d𝑡 2 𝐴 (1/𝑡)𝑡
∫
∞
𝜇(𝑦) sin 𝑡𝑦 d𝑦 + 𝐶𝑏, 𝛿 . 1/𝜀𝑡
Splitting the inner integral at 𝑦 = 𝑥 and interchanging the order of integration transform the repeated integral on the RHS into
∫
∫
𝑥
𝛿
𝜇(𝑦) d𝑦 1/𝜀𝑦
1/𝜀 𝛿
sin 𝑡𝑦 d𝑡 + 𝐴2 (1/𝑡)𝑡
∫
∞
∫
𝛿
𝜇(𝑦) d𝑦 𝑥
1/𝜀 𝑥
sin 𝑡𝑦 d𝑡 . 𝐴2 (1/𝑡)𝑡
For 𝑡 small enough, 1/𝐴2 (1/𝑡)𝑡 is decreasing and we choose 𝛿 > 0 so that this is true for 𝑡 ≤ 𝛿. Then the inner integrals of the first and second repeated integrals above are bounded by 𝜋𝜀/𝐴2 (𝜀𝑦) and 𝜋𝜀𝑥/[𝑦 𝐴2 (𝜀𝑥)], respectively. Observe that ∫ ∞ ∫ ∞ 𝐴(𝑦) 𝐴(𝑥) 𝜇(𝑦) d𝑦 ≤ d𝑦 × 𝑜(1) = 𝑜 . 𝑦 𝑥 𝑦2 𝑥 𝑥 Now, we can easily see that ∫
𝑏
𝛼3 (𝑡) 1/𝜀 𝑥
d𝑡 ≤ 𝜀 [𝐶 𝑀 (𝑥) + 𝑜 (1/𝐴(𝑥))] + 𝐶 𝛿,𝑏 . 𝐴2 (1/𝑡)𝑡
In conjunction with (4.9) and (4.10) this shows (4.8), finishing the proof of (ii). Proof of (iii) and (iv). (iii) follows by combining (i), (ii) and (4.2) since 𝑀− (𝑥) → ∞. Let 𝐸 𝑋 = 0. Then 𝜂− is s.v. and 𝐴(𝑥) = 𝜂− (𝑥) −𝜂+ (𝑥). If 𝑚 + /𝑚 − → 1, then 𝑚 ± (𝑥) ∼ 𝑥𝜂− (𝑥), so that 𝜂+ (𝑥) ∼ 𝜂− (𝑥) by the monotone density theorem. Hence 𝐴(𝑥) = 𝑜(𝑚(𝑥)/𝑥), and the result follows from (i). The proof of Proposition 4.1.1 is complete. □ Proof (of (P3).) Here we show that for some constant 𝐶 and any 𝜀 > 0, ∫ ∫ 𝜋 𝐶𝜀 𝑓 ◦ (𝑡)𝛼(𝑡) cos 𝑥𝑡 𝜋 cos 𝑥𝑡 d𝑡 = d𝑡 ≤ ; 𝑡 𝐴(|𝑥|) 1/𝜀 𝑥 1 − 𝜓(𝑡) 1/𝜀 𝑥
(4.11)
the bound of the integral with 𝛾(𝑡) sin 𝑥𝑡 in place of 𝛼(𝑡) cos 𝑥𝑡 is derived similarly. We make the decomposition 1 − 𝜓(𝑡) = 𝜔 𝑥 (𝑡) − 𝑟 𝑥 (𝑡), where 𝜔 𝑥 (𝑡) = 1 − 𝐸 e𝑖𝑡 𝑋 : |𝑋 | < 𝑥 , 𝑟 𝑥 (𝑡) = 𝐸 e𝑖𝑡 𝑋 : |𝑋 | ≥ 𝑥 . Since |𝑟 𝑥 (𝑡)| ≤ 𝜇(𝑥) = 𝑜( 𝐴(𝑥)/𝑥) (𝑥 → ∞), 1 − 𝜓(𝑡) ∼ −𝑖𝑡 𝐴(1/|𝑡|) (𝑡 → 0) by (P2), and ℜ𝜓(𝑡) < 1 for 𝑡 ∈ (0, 𝜋], it follows that 𝑟 𝑥 (𝑡) = 𝑜(𝜔 𝑥 (𝑡)) as 𝑥 → ∞ uniformly for 1/𝑥 < |𝑡| < 𝜋, and there exists a 𝛿 > 0 such that 𝜔 𝑥 (𝑡) = −𝑖𝑡 𝐴(1/|𝑡|){1 + 𝑜(1)}
uniformly for 1/𝑥 < |𝑡| < 𝛿,
−1 where ∫ 𝜋 𝑜(1) → 0 as 𝑡 ∧ 𝑥 → ∫ 0. Using these bounds we find that as 𝑥 → ∞, 𝑟 𝑥 (𝑡)/𝜔2 (𝑡) d𝑡 ≤ 𝐶 𝜇(𝑥) 𝑥 [ 𝐴(𝑦)] −2 d𝑦 = 𝑜(1/𝐴(𝑥)) (for some 𝑥0 > 1), and 𝑥 1/𝑥 𝑥0 hence ∫ ∫ cos 𝑥𝑡 1 cos 𝑥𝑡 d𝑡 = d𝑡 + 𝑜 . (4.12) 𝐴(𝑥) 1/𝜀 | 𝑥 |< |𝑡 |< 𝜋 𝜔 𝑥 (𝑡) 1/𝜀 | 𝑥 |< |𝑡 |< 𝜋 1 − 𝜓(𝑡)
□
n
Lemma 4.1.3 Under (2.20), |𝜔 ′𝑥 (𝑡)| = 𝐴(𝑥) 1 + 𝑜 for |𝑡| > 1/𝑥.
√︁
|𝑡|𝑥
Proof Performing differentiation we have ∫ 𝑥 ∫ 𝑥 ∫ ′ 𝑖𝑡 𝑦 𝜔 𝑥 (𝑡) = −𝑖 𝑦e d𝐹 (𝑦) = −𝑖 𝑦 d𝐹 (𝑦) + 𝑖 −𝑥
−𝑥
o
as 𝑥 → ∞ uniformly
𝑥
𝑦(1 − e𝑖𝑡 𝑦 ) d𝐹 (𝑦).
−𝑥
The first integral on the RHS is asymptotically √︁ equivalent to 𝐴(𝑥) under (2.20). As for the second one, on noting |e𝑖𝑦𝑡 − 1| ≤ 2 |𝑡|𝑦 (𝑦 > 0), observe ∫ 𝑥 √︁ ∫ 𝑥 √︁ ∫ 𝑥 √ 3/2 𝑖𝑡 𝑦 ≤ 2 |𝑡| |𝑦| d𝐹 (𝑦) |𝑡| 𝑦 𝜇(𝑦) d𝑦, 𝑦(1 − e ) d𝐹 (𝑦) ≤ 3 −𝑥
−𝑥
of which the final expression is
√︁
0
√
√ |𝑡|𝑥 × 𝑜( 𝐴(𝑥)) since 𝑦𝜇(𝑦) = 𝑜 𝐴(𝑦)/ 𝑦 .
□
To the integral on the RHS of (4.12) we apply integration by parts. Noting that 𝑥|𝜔 𝑥 (1/𝜀𝑥)| ∼ 𝐴(𝑥)/𝜀 we infer ∫ 𝜋 ∫ 𝜋 𝜔 ′𝑥 (𝑡) sin 𝑥𝑡 𝜀 1 cos 𝑥𝑡 d𝑡 = 𝑂 + d𝑡. (4.13) 𝐴(𝑥) 𝑥 1/𝜀 𝑥 𝜔2𝑥 (𝑡) 1/𝜀 𝑥 𝜔 𝑥 (𝑡) The contribution to the second term of the last integral restricted to 𝛿 < 𝑡 ≤ 𝜋 is negligible, while by Lemma 4.1.3 the contribution of the other part of the integral is dominated in absolute value by a constant multiple of √ √︁ ∫ 𝛿 1 + 𝑜 𝑡𝑥 ∫ 𝜀 𝑥 1 + 𝑜 𝑥/𝑡 𝐴(𝑥) 𝜀 𝐴(𝑥) d𝑡 = d𝑡 ∼ . 2 𝐴2 (1/𝑡) 2 (𝑡) 𝑥 𝑥 𝐴(𝑥) 𝑡 𝐴 1/𝜀 𝑥 1/ 𝛿 This together with (4.13) verifies (4.11) since 𝜀 may be made arbitrarily small.
4.2 Distributions in Domains of Attraction In this section we suppose that 𝑋 belongs to the domain of attraction of a stable law with exponent 1 ≤ 𝛼 ≤ 2 and let 𝐿, 𝑝 and 𝑞 be given as in the condition (2.16) of Section 2.3, namely ∫𝑥 (a) −𝑥 𝑦 2 d𝐹 (𝑦) ∼ 𝐿(𝑥) if 𝛼 = 2, (4.14) −𝛼 −𝛼 (b) 𝜇− (𝑥) ∼ 𝑞𝑥 𝐿 (𝑥) and 𝜇+ (𝑥) ∼ 𝑝𝑥 𝐿(𝑥) if 1 ≤ 𝛼 < 2, as 𝑥 → ∞ (0 ≤ 𝑝, 𝑞 ≤ 1, 𝑝 + 𝑞 = 1 and 𝐿 is s.v. at infinity). In the sequel we suppose, without causing any loss of generality, that 𝐿 is positive and chosen ∫ ∞ so that 𝐿 is absolutely continuous and 𝐿 ′ (𝑥) = 𝑜(𝐿(𝑥)/𝑥). It is assumed that 1 𝐿 (𝑥)𝑥 −1 d𝑥 < ∞ if 𝛼 = 1 and lim 𝐿(𝑥) = ∞ if 𝛼 = 2 so that 𝐸 |𝑋 | < ∞ (hence 𝐸 𝑋 = 0) and
𝐸 𝑋 2 = ∞ unless the contrary is stated explicitly (in some cases we allow 𝐸 |𝑋 | = ∞ as in Proposition 4.2.1(iv) and Remark 4.2.2(i)). ¯ 𝛼(𝑡) and 𝛽(𝑡) introduced in Sections 3.1 and We use the notation 𝑐(𝑥), 𝑐(𝑥), ˜ 𝑎(𝑥), 2.2. Note that Spitzer’s condition (b) of (2.16) is not assumed.
4.2.1 Asymptotics of 𝒂(𝒙) I Put 𝐿 ∗ (𝑥) =
∫∞ 𝑥
𝑦 −1 𝐿 (𝑦) d𝑦. Then
𝑜(𝑐(𝑥)/𝑥) 𝜂(𝑥) = (𝛼 − 1) −1 𝑥𝜇(𝑥){1 + 𝑜(1)} ∗ 𝐿 (𝑥){1 + 𝑜(1)} 𝑐(𝑥) = 𝛼−1 𝑥 2−𝛼 𝐿(𝑥){1 + 𝑜(1)} and 𝐿(𝑥)/2 𝑥 2−𝛼 𝐿(𝑥) 𝑚(𝑥) ∼ (𝛼 − 1) (𝛼 − 2) ∗ 𝑥𝐿 (𝑥)
if
if
𝛼 = 2,
if 1 < 𝛼 < 2, if 𝛼 = 1,
(4.15)
(1 ≤ 𝛼 ≤ 2)
(4.16)
𝛼 = 2,
if 1 < 𝛼 < 2, if
𝛼 = 1.
The derivation is straightforward. If 𝛼 = 2, condition (4.14a) is equivalent to 𝑥 2 𝜇(𝑥) = 𝑜(𝐿 (𝑥)) as well as to 𝑐(𝑥) ∼ 𝐿 (𝑥)/2 (Section A.1.1), which together show 𝑚(𝑥) ˜ = 𝑥𝜂(𝑥) + 𝑐(𝑥) ˜ = 𝑜(𝑚(𝑥)) if 𝛼 = 2. (4.17) The asymptotics of 𝛼(𝑡) and 𝛽(𝑡) as 𝑡 ↓ 0 are given as follows: 𝑡 𝐿(1/𝑡)/2 ′ 𝛼−1 𝐿 (1/𝑡) 𝛼(𝑡) ∼ 𝜅 𝛼 𝑡 1 𝜋𝐿 (1/𝑡) 2
if
𝛼 = 2,
if 1 < 𝛼 < 2, if
𝛼 = 1,
and 𝑜(𝛼(𝑡)) 𝛽(𝑡) = 𝜅 ′′𝛼 𝑡 𝛼−1 𝐿 (1/𝑡){1 + 𝑜(1)} 𝐿 ∗ (1/𝑡){1 + 𝑜(1)}
(4.18)
if
𝛼 = 2,
if 1 < 𝛼 < 2, if 𝛼 = 1,
(4.19)
where 𝜅 ′𝛼 = Γ(1 − 𝛼) cos 21 𝜋𝛼 and 𝜅 ′′𝛼 = −Γ(1 − 𝛼) sin 21 𝜋𝛼; in particular, for 1 < 𝛼 < 2, 1 − 𝐸e𝑖𝑡 𝑋 = 𝑡𝛼(𝑡) + 𝑖𝑡𝛾(𝑡) ∼ (𝜅 ′𝛼 + 𝑖( 𝑝 − 𝑞)𝜅 ′′𝛼 )𝑡 𝛼 𝐿(1/𝑡)
(𝑡 ↓ 0).
(4.20)
For verification, see [58], [8, Theorems 4.3.1–2] if 1 ≤ 𝛼 < 2. The estimate in the case 𝛼 = 2 is deduced from (4.15) and (4.16). Indeed, uniformly for 𝜀 > 0,
∫
𝜀/𝑡
𝜇(𝑥) sin 𝑡𝑥 d𝑥 + 𝑂 (𝜂(𝜀/𝑡)) = 𝑡𝑐(𝜀/𝑡) 1 + 𝑂 (𝜀 2 ) + 𝑜(1) ,
𝛼(𝑡) = 0
so that 𝛼(𝑡) ∼ 𝑡𝑐(1/𝜀𝑡) ∼ 𝑡 𝐿(1/𝑡)/2; as for 𝛽(𝑡), use (3.12a) and (4.17). In the case 𝛼 = 1, we shall need the following second-order estimate: 𝛽(𝑡) = 𝜂(1/𝑡) + 𝐶 ∗ 𝐿(1/𝑡){1 + 𝑜(1)},
(4.21)
where 𝐶 ∗ is Euler’s constant, as is shown below. It holds that ∫ ∞ ∫ 1/𝑡 𝜇(𝑦) cos 𝑡𝑦 d𝑦 ∼ 𝐶1 𝐿 (1/𝑡) and 𝜇(𝑦) (1 − cos 𝑡𝑦) d𝑦 ∼ 𝐶2 𝐿(1/𝑡), 1/𝑡
0
(4.22) where 𝐶1 = 1 cos 𝑠 d𝑠/𝑠, 𝐶2 = 0 (1 − cos 𝑠) d𝑠/𝑠. Indeed, for each 𝑀 > 1, by ∫ ∞ monotonicity of 𝜇 we have 𝑀/𝑡 𝜇(𝑦) cos 𝑡𝑦 d𝑦 ≤ 2𝜇(𝑀/𝑡)/𝑡 ∼ 2𝐿 (1/𝑡)/𝑀, while ∫∞
∫
∫1
∫
𝑀/𝑡
1/𝑡
𝑀
𝐿 (𝑧/𝑡) cos 𝑧
𝜇(𝑦) cos 𝑡𝑦 d𝑦 = 1
d𝑧 ∼ 𝐶1 𝐿 (1/𝑡) 𝑧
as 𝑡 ↓ 0 and 𝑀 → ∞ in this order, showing the first formula of (4.22). One can deduce the second one similarly. Since 𝐶2 − 𝐶1 = 𝐶 ∗ , (4.21) follows. The next proposition is also valid in the case 𝐸 |𝑋 | = ∞, as long as 𝑆 is recurrent. This remark is relevant only for its last assertion (iv), the r.w. being recurrent under 𝐸 |𝑋 | = ∞ only if 𝛼 = 2𝑝 = 1 (see (4.33) for the recurrence criterion). Proposition 4.2.1 Suppose that (4.14) is satisfied. Then as 𝑥 → ∞ (i) except when 𝛼 = 2𝑝 = 1, 𝑎(𝑥) ¯ ∼ 𝜅 −1 (4.23) 𝛼 𝑥/𝑚(𝑥), where 𝜅 𝛼 = 2 1 − 4𝑝𝑞 sin2 21 𝜋𝛼 Γ(𝛼)Γ(3 − 𝛼) [note that 1 − 4𝑝𝑞 sin2 𝑡 = cos2 𝑡 + [( 𝑝 − 𝑞) sin 𝑡] 2 and 𝜅 𝛼 = 0 ⇔ 𝛼 = 2𝑝 = 1]; (ii) if 1 ≤ 𝛼 < 2 and 𝑝 ≠ 1/2, then if 𝛼 = 1, 𝑎(−𝑥) ∼ 2𝑝 𝑎(𝑥), ¯ (4.24) 𝑎(𝑥) ∼ 2𝑞 𝑎(𝑥), ¯ where the sign ‘ ∼’ is interpreted in the obvious way if 𝑝𝑞 = 0; and (iii) if 𝛼 = 2 and 𝑚 + /𝑚 → 𝑝 (0 ≤ 𝑝 ≤ 1), then (4.24) holds; (iv) if 𝛼 = 2𝑝 = 1 and there exists 𝜌 := lim 𝑃0 [𝑆 𝑛 > 0], then ∫ 𝑥 d𝑦 2 sin2 𝜌 𝜋 2 2 𝜋 1 𝑦 𝜇(𝑦) 𝑎(𝑥) ¯ ∼ ∫ 𝑥 𝜇(𝑦) 1 d𝑦 2 2 (𝑦) 𝐴 𝑥 0
if 0 < 𝜌 < 1, (4.25) if
𝜌 = 0 or 1;
𝑎(𝑥) ∼ 𝑎(−𝑥); and if 𝐸 |𝑋 | < ∞.
𝑥/𝑚(𝑥) = 𝑜( 𝑎(𝑥)) ¯
[𝐴(𝑥) is given in (2.19) and 𝑥 0 is the same constant as in (4.1); see Remark 4.2.2(ii) when the existence of lim 𝑃0 [𝑆 𝑛 > 0] is not assumed.] ∫𝑥 If 𝛼 = 1, 𝑝 ≠ 𝑞 and 𝐸 𝑋 = 0, then 1 𝜇(𝑦) d𝑦/𝐴2 (𝑦) ∼ 𝑥/( 𝑝 − 𝑞) 2 𝑚(𝑥), so that the second expression on the RHS of (4.25) is a natural extension of that on the RHS of (4.23). Proof (i) and (ii) are given in [3, Lemma 3.3] (cf. also [82, Lemma 3.1] for 1 < 𝛼 < 2) except for the case 𝛼 = 1, and implied by Theorem 4.1.1 for 𝛼 = 1. + 𝑏 − (𝑥) − 𝑏 + (𝑥), where ¯ Proof of (iii). Recall that 𝑎(𝑥) = 𝑎(𝑥) ∫ 𝜋 1 𝛽± (𝑡) sin 𝑥𝑡 𝑏 ± (𝑥) = d𝑡 (4.26) · 𝜋 0 [𝛼2 (𝑡) + 𝛾 2 (𝑡)] 𝑡 (see (3.44)). By Theorem 3.1.6 we can suppose 𝑝𝑞 > 0. Let 𝛼 = 2. Since 𝛽(𝑡)/𝛼(𝑡) → 0 (𝑡 → 0) and 𝛼(𝑡) ∼ 21 𝑡 𝐿(1/𝑡) ∼ 𝑡𝑚(1/𝑡) (cf. Section A.1.1 for the latter equivalence), for each 𝜀 > 0, on using |sin 𝑥𝑡| ≤ 1, the contribution from 𝑡 > 𝜀/𝑥 to the above integral can be written as ∫
𝜋 𝜀/𝑥
∫
1 d𝑡 × 𝑜(1) = 𝑡 2 [𝐿(1/𝑡)]
𝑥/𝜀
1/ 𝜋
𝑥/𝜀 𝑥 1 d𝑦 × 𝑜(1) = × 𝑜(1) = 𝑜 . 𝐿 (𝑦) 𝐿(𝑥/𝜀) 𝑚(𝑥)
(Note that the integral over 𝛿 < 𝑡 ≤ 𝜋 tends to zero as 𝑥 → 0 for each 𝛿.) Thus 𝑥 𝑏 + (𝑥) = 𝜋
∫
𝜀/𝑥
0
𝛽+ (𝑡) 𝑥 2 , d𝑡{1 + 𝑂 (𝜀 )} + 𝑜 𝑚(𝑥) [𝑡𝑚(1/𝑡)] 2
(4.27)
and similarly for 𝑏 − (𝑥). To evaluate the above integral, we proceed analogously to the proof of Proposition 4.1.1(ii). We claim that if 𝑚 + (𝑥)/𝑚(𝑥) → 𝑝, ∫
𝜀/𝑥
𝐼 (𝑥) := 0
𝛽+ (𝑡) 𝜋𝑝 {1 + 𝑂 (𝜀) + 𝑜(1)} d𝑡 = 2 2𝑚(𝑥) [𝑡𝑚(1/𝑡)]
(as 𝑥 → ∞ and 𝜀 ↓ 0 in this order). Define 𝜁 𝑗 (𝑡), 𝑗 = 1, 2, 3, by 𝛽+ (𝑡) = 𝑡
∫
∫
𝜀/𝑡
0
∫
1/𝜀𝑡
+
∞
+ 𝜀/𝑡
𝜂+ (𝑦) sin 𝑡𝑦 d𝑦
1/𝜀𝑡
= 𝜁1 (𝑡) + 𝜁2 (𝑡) + 𝜁3 (𝑡), so that 𝐼 (𝑥) =
3 ∑︁ 𝑗=1
∫ 𝐼 𝑗 (𝑥),
where
𝐼 𝑗 (𝑥) = 0
𝜀/𝑥
𝜁 𝑗 (𝑡) d𝑡. 𝑡𝑚 2 (1/𝑡)
(4.28)
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
60
Noting that the integrand of 𝐼1 (𝑥) is at most ∫
∞
∫
𝜀𝑦
0
𝐼1 (𝑥) ≤ 𝑥/𝜀 ∫𝑥 𝜀 0
∫
𝜀/𝑡
0
𝜂+ (𝑧)𝑧 d𝑧/𝑚 2 (1/𝑡) we see
𝜂+ (𝑧)𝑧 d𝑧
𝑦 2 𝑚 2 (𝑦) 𝜂(𝑧)𝑧 d𝑧
d𝑦 ∫
∞
𝜂(𝜀𝑦) {1 + 𝑜(1)} d𝑦. 𝑚 2 (𝑦) ∫∞ The last integral being asymptotically equivalent to 𝜀 𝑥 𝜂(𝑧) d𝑧/𝑚 2 (𝑧) = 𝜀/𝑚(𝑥), it follows that 𝐼1 (𝑥) ≤ {𝜀 + 𝑜(1)}/𝑚(𝑥). ∫ 1/𝜀 Changing the variable 𝑡𝑦 = 𝑤 we have 𝜁2 (𝑡) = 𝑡 −1 𝜀 𝜂+ (𝑤/𝑡) sin 𝑤 d𝑤, and hence ∫ 1/𝜀 ∫ ∞ ∫ 1/𝜀 ∫ ∞ sin 𝑤 𝜂+ (𝑤𝑦) 𝜂+ (𝑧) d𝑦 = d𝑤 d𝑧. 𝐼2 (𝑥) = sin 𝑤 d𝑤 2 2 𝑤 𝜀 𝑥𝑤/𝜀 𝑚 (𝑧/𝑤) 𝜀 𝑥/𝜀 𝑚 (𝑦) =
𝑥𝑚 2 (𝑥)
{1 + 𝑜(1)} + 𝜀 2
𝑥/𝜀
The inner integral of the last double integral is ∼ 𝑝/𝑚(𝑥) owing to the condition 𝑚 + (𝑥)/𝑚(𝑥) → 𝑝, for it implies that uniformly for 𝑤 ∈ [𝜀, 1/𝜀], ∫ ∞ ∫ ∞ 𝑚 + (𝑥) 𝑚 + (𝑦)𝜂(𝑦) 𝑝 𝜂+ (𝑦) = − + 2 d𝑦 = {1 + 𝑜(1)}. 2 2 3 𝑚(𝑥) 𝑚 (𝑦) 𝑚 (𝑥) 𝑚 (𝑦) 𝑥 𝑥 Since
∫ 1/𝜀 𝜀
sin 𝑤 d𝑤/𝑤 = 21 𝜋 + 𝑂 (𝜀), we can now conclude 𝐼2 (𝑥) =
𝑝𝜋/2 {1 + 𝑂 (𝜀) + 𝑜(1)}. 𝑚(𝑥)
Since 𝜁3 (𝑡) ≤ 2𝑡 −1 𝜂+ (1/𝜀𝑡), ∫ 𝐼3 (𝑥) ≤ 2 0
𝜀/𝑥
𝜂(1/𝜀𝑡) d𝑡 = 2 𝑡 2 𝑚 2 (1/𝑡)
∫
∞
𝑥/𝜀
𝜂(𝑦/𝜀) d𝑦 ∼ 2𝜀/𝑚(𝑥). 𝑚 2 (𝑦)
Combining the estimates of 𝐼1 to 𝐼3 obtained above yields (4.28). Thus 𝑏 + (𝑥) ∼ In the same way 𝑏 − (𝑥) ∼ 21 𝑞𝑥/𝑚(𝑥) and by (i) 𝑎(𝑥) ¯ ∼ 12 𝑥/𝑚(𝑥). Consequently 𝑎(𝑥) ∼ 𝑞𝑥/𝑚(𝑥) ∼ 2𝑞 𝑎(𝑥), ¯ and similarly 𝑎(−𝑥) ∼ 2𝑝 𝑎(𝑥). ¯ The proof of (iii) is complete. Proof of (iv). Here 𝐸 |𝑋 | may be infinite. Note that 𝛼(𝑡) ∼ 𝜋2 𝐿 (1/𝑡) remains valid under 𝐸 |𝑋 | = ∞. If 𝜌 = 1, then the assertion follows from Theorem 4.1.1 as a special case; the same is true when 𝜌 = 0, by duality. Below, we let 0 < 𝜌 < 1. Let 𝑐 𝑛 be determined by (2.17) with 𝑐 ♯ = 1, namely, 𝑐 𝑛 /𝑛 ∼ 𝐿(𝑐 𝑛 ) as 𝑛 → ∞. Then the laws of 𝑆 𝑛 /𝑐 𝑛 constitute a tight family under 𝑃0 if and only if 𝜌 𝑛 := 𝑃0 [𝑆 𝑛 > 0] is bounded away from 0 and 1. The convergence 𝜌 𝑛 → 𝜌 ∈ (0, 1) is equivalent to the convergence 1 2 𝑝𝑥/𝑚(𝑥).
𝑏 𝑛 := 𝑛𝐸 [sin{𝑋/𝑐 𝑛 }] → 𝑏
for some − ∞ < 𝑏 < ∞.
(4.29)
4.2 Distributions in Domains of Attraction
61
Observing 𝛾(𝑡 + 𝑠) − 𝛾(𝑡) = 𝑜(𝐿 (𝑡)) as 𝑠/𝑡 → 0, from (4.29) we infer − 𝛾(𝑡) = 𝑡 −1 𝐸 [sin 𝑋𝑡] ∼ 𝑏𝐿(1/|𝑡|)
(𝑡 → 0).
(4.30)
Recalling 𝛼(𝑡) ∼ 21 𝜋𝐿(1/𝑡), we accordingly obtain 1 − 𝜓(𝑡) ∼
1 2 𝜋|𝑡|
− 𝑖𝑏𝑡 𝐿(1/|𝑡|)
(𝑡 → 0). ◦
By our choice of 𝑐 𝑛 , 𝐶Φ = 1 in (2.18) so that if 𝑆 𝑛 /𝑐 𝑛 − 𝑏 𝑛 ⇒ 𝑌 ◦ , 𝐸 [e𝑖𝑡𝑌 ] = 𝜋 lim 𝐸 [e𝑖𝑡 (𝑆𝑛 /𝑐𝑛 −𝑏𝑛 ) ] = e− 2 |𝑡 | , showing that the values of 𝜌 and 𝑏 are related by 𝜌 = 𝑃[𝑌 ◦ + 𝑏 > 0].
(4.31)
−1 The law of 𝑌 ◦ has the density given by 12 𝑥 2 + 14 𝜋 2 , and solving (4.31) for 𝑏, one finds 𝑏 = 12 𝜋 tan 𝜋(𝜌 − 21 ) . By (4.30) 𝛼(𝑡) 2/𝜋 2 sin2 𝜌 𝜋 ∼ = , 𝜋𝐿(1/𝑡) 𝛼2 (𝑡) + 𝛾 2 (𝑡) (1 + tan2 [𝜋(𝜌 − 12 )])𝐿(1/𝑡) and as before (see the argument used at (∗) in the next paragraph), we deduce 𝑎(𝑥) ¯ ∼
2 sin2 𝜌 𝜋 𝜋2
∫ 1
𝑥
d𝑦 . 𝑦𝐿 (𝑦)
For the proof of 𝑎(𝑥) ∼ 𝑎(−𝑥), which is equivalent to 𝑎(𝑥) − 𝑎(−𝑥) = 𝑜( 𝑎(𝑥)), ¯ it suffices to show that ∫ 𝜋 −𝛾(𝑡) sin 𝑥𝑡 2 1 (𝑥) , (4.32) 𝑎(𝑥) − 𝑎(−𝑥) = d𝑡 = 𝑜 𝜋 0 [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡 𝐿 ∗ ∫𝑥 where [1/𝐿] ∗ (𝑥) = 1 [𝑦𝐿 (𝑦)] −1 d𝑦. Denote the above integral restricted to [0, 1/𝑥] and [1/𝑥, 𝜋] by 𝐽1/𝑥 , respectively. By (4.30) and 𝛼(𝑡) ∼ 21 𝜋𝐿(1/𝑡) it follows that ∫ 1/𝑥 𝐶 d𝑡 ∼ , 𝐽1/𝑥 = 𝑜 ([1/𝐿] ∗ (𝑥)). Thus (4.32) is verified. In the case 𝐸 |𝑋 | < ∞, by (4.25) and 𝑚(𝑥)/𝑥 ∼ 𝐿 ∗ (𝑥) we can readily see that (see Remark 4.2.2(ii) below). This finishes the proof of (iv). □ 𝑥/𝑚(𝑥) = 𝑜( 𝑎(𝑥)) ¯ Remark 4.2.2 (i) Let 𝛼 = 1 in (4.14) and 𝐸 |𝑋 | = ∞. Then 𝐹 is recurrent if and only if ∫ ∞ 𝜇(𝑦) d𝑦 = ∞ (4.33) (𝐿 (𝑦) ∨ | 𝐴(𝑦)|) 2 1 ∫𝑦 (cf. [83, Section 5.3]). If 𝑝 ≠ 1/2, then 𝐹 is transient, | 𝐴(𝑦)| ∼ | 𝑝 − 𝑞| 0 𝜇(𝑡) d𝑡 → ∫∞ ∞ and 𝜇(𝑥) d𝑥 𝐴2 (𝑥) < ∞. (ii) Let 𝛼 = 2𝑝 = 1 in (4.14) and 𝐸 𝑋 = 0. Observing that 𝛾(𝑡) = −𝐴(1/𝑡) + 𝑜(𝐿(1/𝑡)) (see (4.21)), and hence 𝛼2 (𝑡) + 𝛾 2 (𝑡) ∼ ( 12 𝜋) 2 𝐿 2 (1/𝑡) + 𝐴2 (1/𝑡) (𝑡 ↓ 0), one infers that ∫ 𝑥 𝜇(𝑦) d𝑦 1 𝑎(𝑥) ¯ ∼ (4.34) 1 2 2 1 ( 2 𝜋) 𝐿 2 (𝑦) + 𝐴2 (𝑦) without assuming the existence of lim 𝑃0 [𝑆 𝑛 > 0]. If 𝐸 𝑋 = 0, then 𝐿(𝑦) ∨ | 𝐴(𝑦)| = ∗ (𝑦)), which shows that 𝑎(𝑥)/[𝑥/𝑚(𝑥)] ¯ → ∞, for 𝑥/𝑚(𝑥) ∼ 1/𝐿 ∗ (𝑥) ∼ ∫𝑜 (𝐿 𝑥 ∗ 2 𝜇(𝑦) d𝑦/[𝐿 (𝑦)] . This is also deduced from Theorem 3.1.2(ii). 1
4.2.2 Asymptotics of 𝒂(𝒙) II For any subset 𝐵 of R such that 𝐵 ∩ Z is non-empty, define 𝐻 𝐵𝑥 (𝑦) = 𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑦]
(𝑦 ∈ 𝐵),
(4.35)
the hitting distribution of 𝐵 for the r.w. 𝑆. We continue to assume (4.14) to hold. ˆ = ∞, under Suppose 𝑚 + /𝑚 → 0 throughout this subsection. In particular 𝐸 (− 𝑍) which it holds that 𝑎(−𝑥) =
∞ ∑︁
𝐻 −𝑥 [0,∞) (𝑦)𝑎(𝑦),
𝑥>0
(4.36)
𝑦=1
(see (2.15)). Using this identity we will derive the asymptotic form of 𝑎(−𝑥) as 𝑥 → ∞ when 𝜇+ (𝑥) varies regularly at infinity. Recall that 𝑉d (𝑥) (𝑈a (𝑥)) denotes
4.2 Distributions in Domains of Attraction
63
the renewal function for the weakly descending (strictly ascending) ladder height process and that, on writing 𝑣◦ for 𝑉d (0), ∫ 𝑥 ∫ 𝑥 1 ℓ ∗ (𝑥) = 𝑃[𝑍 > 𝑡] d𝑡 and ℓˆ∗ (𝑥) = ◦ 𝑃[− 𝑍ˆ > 𝑡] d𝑡. 𝑣 0 0 We know that ℓ ∗ (𝑥) (resp. ℓˆ∗ (𝑥)) is s.v. as 𝑥 → ∞ if 1 ≤ 𝛼 ≤ 2 (resp. 𝛼 = 2) (under the present setting), that 𝑃[− 𝑍ˆ > 𝑥] is s.v. if 𝛼 = 1 and that 𝑥 𝑈a (𝑥) ∼ ∗ ℓ (𝑥)
𝑥/ℓˆ∗ (𝑥) 𝛼 = 2, ′ 𝛼−1 ∗ and 𝑉d (𝑥) ∼ 𝜅 𝛼 𝑥 ℓ (𝑥)/𝐿(𝑥) 1 < 𝛼 < 2, 𝑣◦ /𝑃[− 𝑍ˆ > 𝑥] 𝛼 = 1,
(4.37)
where 𝜅 ′𝛼 = −𝛼[(2 − 𝛼)𝜋] −1 sin 𝛼𝜋 – provided (4.14) holds (see2 the table in Section 2.6.) Proposition 4.2.3 Suppose that (4.14) holds and 𝑚 + /𝑚 → 0 and that 𝜇+ (𝑥) is regularly varying at infinity with index −𝛽 and lim 𝑥→∞ 𝑎(−𝑥) = ∞. Then
𝑎(−𝑥) ∼
∞ ∑︁ 𝜇+ (𝑧)𝑉d (𝑧)𝑎(𝑧) 𝑈 (𝑥) a 𝑧
if
𝛼 = 𝛽 = 2,
𝑧=𝑥
𝑥 𝑈a (𝑥) ∑︁ 𝐶 𝜇+ (𝑧)𝑉d (𝑧)𝑎(𝑧) 𝛼,𝛽 𝑥 𝑧=1
otherwise
as 𝑥 → ∞, where 𝐶 𝛼,𝛽 = [Γ(𝛼)] 2 Γ(𝛽 − 2𝛼 + 2)/Γ(𝛽). [More explicit expressions of the RHS are given in the proof: see (4.42) to (4.45). By virtue of Theorem 3.1.6 and Proposition 4.2.1(i) it follows, under 𝑚 + /𝑚 → 0, that 𝑎(𝑥) ∼ [Γ(𝛼)Γ(3 − 𝛼)] −1 𝑥/𝑚(𝑥).] Proof Put 𝑢 a (𝑥) = 𝑈a (𝑥) − 𝑈a (𝑥 − 1), 𝑥 > 0, 𝑣d (𝑧) = 𝑉d (𝑧) − 𝑉d (𝑧 − 1), 𝑢 a (0) = 𝑈a (0) = 1 and 𝑣d (0) = 𝑉d (0) = 𝑣◦ . Then 𝐺 (𝑥1 , 𝑥2 ) := 𝑢 a (𝑥2 − 𝑥 1 ) is the Green function of the strictly increasing ladder process killed on its exiting the half-line (−∞, 0] and by the last exit decomposition we obtain 𝐻 −𝑥 [0,∞) (𝑦) =
𝑥 ∑︁
𝑢 a (𝑥 − 𝑘)𝑃[𝑍 = 𝑦 + 𝑘]
(𝑥 ≥ 1, 𝑦 ≥ 0).
(4.38)
𝑘=1
Suppose the conditions of the proposition to hold and let 𝜇+ (𝑥) ∼ 𝐿 + (𝑥)/𝑥 𝛽 with 𝛽 ≥ 𝛼 and 𝐿 + slowly varying at infinity. Then noting 𝑔 [1,∞) (0, −𝑧) = 𝑣d (𝑧) and summing by parts one deduces
2 In Lemma 5.3.1(i)) we shall see that 𝑉d ( 𝑥) ∼ 𝑎 ( 𝑥)ℓ ∗ ( 𝑥) under 𝑚+ /𝑚 → 0, so that the formula (4.37) for 𝑉d with 1 < 𝛼 < 2 also follows from Proposition 4.2.1.
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
64
𝑃[𝑍 > 𝑦] =
∞ ∑︁
𝑣d (𝑧)𝜇+ (𝑦 + 𝑧) ∼ 𝛽
∞ ∑︁
𝑉d (𝑧)
𝑧=0
𝑧=0
𝜇+ (𝑦 + 𝑧) 𝑦+𝑧+1
∼ 𝐶0𝑉d (𝑦)𝜇+ (𝑦) and
∞
∫
𝑡 𝛼−1 (1 + 𝑡) −𝛽−1 d𝑡 =
𝐶0 = 𝛽 0
(𝑦 → ∞),
(4.39)
Γ(𝛼)Γ(𝛽 − 𝛼 + 1) . Γ(𝛽)
Í
If 𝛽 > 2𝛼 − 1, then 𝜇+ (𝑥) [𝑎(𝑥)] 2 < ∞, so that 𝑎(−𝑥) converges to a constant. Hence we may consider only the case 𝛼 ≤ 𝛽 ≤ 2𝛼 − 1. Let 𝛼 > 1. Recall 𝑎 varies regularly with index 𝛼 − 1 at infinity. Then returning to (4.36), performing summation by parts (w.r.t. 𝑦) and then substituting the above equivalence into (4.38) lead to 𝑎(−𝑥) ∼ (𝛼 − 1)
∞ ∑︁ 𝑥 ∑︁
𝑢 a (𝑥 − 𝑘)𝑃[𝑍 ≥ 𝑦 + 𝑘]
𝑦=1 𝑘=0 ∞ ∑︁ 𝑥 ∑︁
∼ (𝛼 − 1)𝐶0
𝑎(𝑦) 𝑦
𝑢 a (𝑥 − 𝑘)𝑉d (𝑦 + 𝑘)𝜇+ (𝑦 + 𝑘)
𝑎(𝑦) . 𝑦
(4.40)
𝑦=1 𝑘=0
Owing to (2.5), namely 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥), one can replace 𝑢 a (𝑥 − 𝑘) by 1/ℓ ∗ (𝑥) in (4.40), the inner sum over (1 − 𝜀)𝑥 < 𝑘 ≤ 𝑥 being negligible as 𝑥 → ∞ and 𝜀 → 0. After changing the variables by 𝑧 = 𝑦 + 𝑘, the last double sum restricted to 𝑦 + 𝑘 ≤ 𝑥 accordingly becomes asymptotically equivalent to 𝑥 𝑧−1 𝑥 ∑︁ ∑︁ 𝑎(𝑧 − 𝑘) 1 1 ∑︁ 𝑉 (𝑧)𝜇 (𝑧) ∼ 𝑉d (𝑧)𝜇+ (𝑧)𝑎(𝑧). d + ℓ ∗ (𝑥) 𝑧=1 𝑧−𝑘 (𝛼 − 1)ℓ ∗ (𝑥) 𝑧=1 𝑘=0
(4.41)
If 𝛽 = 2𝛼 − 1 > 1 (entailing lim ℓ ∗ (𝑥) = 𝐸 𝑍 < ∞), then 𝐶0 = 𝐶 𝛼,𝛽 , the sum on the RHS of (4.41) varies slowly and the remaining part of the double sum in (4.40) is negligible, showing 𝑎(−𝑥) ∼
𝑥 𝑥 𝐶 𝛼,𝛽 ∑︁ 𝐿 + (𝑧) 𝐶0 ∑︁ 𝑉 (𝑧)𝜇 (𝑧)𝑎(𝑧) , ∼ d + ∗ ∗ ℓ (𝑥) 𝑧=1 ℓ (𝑥) 𝑧=1 ℓ∗ (𝑧)𝑧
(4.42)
where ( ℓ∗ (𝑧) =
ℓˆ∗ (𝑧)𝐿 (𝑧)/2 (𝛼2 𝜅
𝛼 /[2(𝛼
𝛼 = 2, − 1) (2 −
𝛼) 3 𝜅 ′𝛼 ]) [𝐿 (𝑧)] 2 /ℓ ∗ (𝑧)
1 < 𝛼 < 2.
Let 𝛼 ≤ 𝛽 < 2𝛼 − 1. If |𝛼 − 2| + |𝛽 − 2| ≠ 0, then the outer sum in (4.40) over 𝑦 > 𝑀𝑥 as well as that over 𝑦 < 𝑥/𝑀 becomes negligibly small as 𝑀 becomes large and one can easily infer that
4.2 Distributions in Domains of Attraction
𝑎(−𝑥) ∼
65
𝑥 𝐶 𝛼,𝛽 ∑︁ (𝛼 − 1)𝐶1 𝐶0 𝐿 + (𝑥)𝑥 2𝛼−1−𝛽 𝑉d (𝑧)𝜇+ (𝑧)𝑎(𝑧), ∼ ℓ ∗ (𝑥)ℓ∗ (𝑥)(2𝛼 − 1 − 𝛽) ℓ ∗ (𝑥) 𝑧=1
(4.43)
where ℓ∗ is as above, and ( ∫1 ∫∞ ∫∞ 𝐶1 = (2𝛼 − 1 − 𝛽) 0 d𝑠 0 (𝑠 + 𝑡) −𝛽+𝛼−1 𝑡 𝛼−2 d𝑡 = 0 (1 + 𝑡) −𝛽+𝛼−1 𝑡 𝛼−2 d𝑡, 𝐶 𝛼,𝛽 = (𝛼 − 1)𝐶1 𝐶0 = 𝐶0 Γ(𝛼)Γ(𝛽 − 2𝛼 + 2)/Γ(𝛽 − 𝛼 + 1). If 𝛼 = 𝛽 = 2, then 𝑎(𝑥) ∼ 2𝑥/𝐿 (𝑥) and uniformly for 0 ≤ 𝑘 ≤ 𝑥, ∞ ∑︁
𝑉d (𝑦 + 𝑘)𝜇+ (𝑦 + 𝑘)
𝑦=𝑥
∞ ∞ ∑︁ ∑︁ 𝐿 + (𝑦 + 𝑘){1 + 𝑜(1)} 𝑎(𝑦) 𝐿 + (𝑦) = 𝑣◦ , ∼ ◦ ∗ ˆ 𝑦 ℓ (𝑦)𝑦 𝑦=𝑥 (𝑦 + 𝑘)𝑣 ℓ (𝑦 + 𝑘)𝐿(𝑦)/2 𝑦=𝑥 ∗
Í𝑥 Í𝑥 while 𝑘=1 𝑢 a (𝑥 − 𝑘) 𝑦=1 𝑉d (𝑦 + 𝑘)𝜇+ (𝑦 + 𝑘)𝑎(𝑦)/𝑦 ≤ 𝐶𝑥𝐿 + (𝑥)/[ℓ ∗ (𝑥)ℓ∗ (𝑥)] (split the outer sum at 𝑘 = 𝑥/2 and use (4.41)). With these two bounds, from (4.40) we deduce that 𝑎(−𝑥) ∼
∞ ∞ 𝑎(𝑦) 𝑥 ∑︁ 𝐿 + (𝑦) 𝐶0 𝑥 ∑︁ 𝑉 (𝑦)𝜇 (𝑦) ∼ d + ℓ ∗ (𝑥) 𝑦=𝑥 𝑦 ℓ ∗ (𝑥) 𝑦=𝑥 ℓ∗ (𝑦)𝑦
(4.44)
(since (𝛼 − 1)𝐶0 = 𝐶0 = 1). The relations (4.42) to (4.44) together show those of Proposition 4.2.3 in the case 1 < 𝛼 ≤ 2 since 𝑈a (𝑥) ∼ 𝑥/ℓ ∗ (𝑥). It remains to deal with the case 𝛼 = 𝛽 = 1 when in place of (4.40) we have 𝑎(−𝑥) ∼
∞ ∑︁ 𝑥 ∑︁ 𝑢 a (𝑥 − 𝑘)𝜇+ (𝑦 + 𝑘) 𝑦=1 𝑘=0
𝑃[− 𝑍ˆ > 𝑦 + 𝑘]/𝑣◦
·
d 1 . d𝑦 𝐿 ∗ (𝑦)
d Note 𝑃[− 𝑍ˆ > 𝑥] is s.v. and d𝑦 [1/𝐿 ∗ (𝑦)] = 𝐿 (𝑦)/𝑦[𝐿 ∗ (𝑦)] 2 . Then one sees that the above double sum restricted to 𝑦 + 𝑘 ≤ 𝑥 is asymptotically equivalent to 𝑥 𝐿 + (𝑧) 𝑣◦ ∑︁ , ℓ ∗ (𝑥) 𝑧=1 𝑧𝑃[− 𝑍ˆ > 𝑧] 𝐿 ∗ (𝑧)
(4.45)
hence s.v., while the outer sum over 𝑦 > 𝑥 is negligible. It, therefore, follows that the above formula represents Í 𝑥 the asymptotic form of 𝑎(−𝑥) and may be written alternatively as 𝑥 −1𝑈a (𝑥) 𝑦=1 𝑉d (𝑦)𝜇+ (𝑦)𝑎(𝑦), as required. □ ∫∞ Remark 4.2.4 Let 𝛼 = 𝛽 = 1. Then 𝑃[− 𝑍ˆ > 𝑥]/𝑣◦ ∼ 𝑥 [𝜇− (𝑡)/ℓ ∗ (𝑡)] d𝑡 according Í to Lemma 5.4.5. If 𝐸 𝑍 < ∞ and 𝑥>0 𝜇+ (𝑥)𝑎 2 (𝑥) = ∞ in addition (so that ℓ ∗ (𝑥) → 𝐸 𝑍 and 𝑎(−𝑥) → ∞), by evaluating the sum in (4.45) one finds ∫𝑥 ∫𝑥 𝑎(−𝑥) ∼ 1 𝜇+ (𝑡) [1/𝐿 ∗ (𝑡)] 2 d𝑡 ∼ 𝑥 𝜇+ (𝑡) [1/𝐴(𝑡)] 2 d𝑡. 0
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
66
Remark 4.2.5 Let the assumption of Proposition 4.2.3 be satisfied and 𝑀 be an arbitrarily given number > 1. In Sections 5.1, 5.5 and 5.6 (see Lemma 5.1.3, Propositions 5.5.1 and 5.6.1) we shall be concerned with the condition 𝑎(−𝑅) − 𝑎(−𝑅 + 𝑥) −→ 0 as 𝑅 → ∞ 𝑎 † (𝑥)
(𝑥 ≤ 𝑅),
(4.46)
and, in particular, we will see that if 𝑚 + /𝑚 → 0, then 𝑃 𝑥 [𝜎𝑅 < 𝜎0 | 𝜎[𝑅,∞) < 𝜎0 ] → 1 (𝑅 → ∞) uniformly for 𝑥 < 𝑅 satisfying (4.46) (cf. Proposition 5.5.1). Below we show the following. (i) If 2𝛼 − 1 ≤ 𝛽, then (4.46) holds uniformly for −𝑀 𝑅 < 𝑥 < 𝑅, while if 𝛼 ≤ 𝛽 < 2𝛼 − 1 (entailing 𝛼 > 1) (4.46) holds whenever 𝑥/𝑅 → 0, but fails for 𝑥 < −𝑅/𝑀. (ii) Let 𝛼 = 2. Then 𝑃 𝑥 [𝜎𝑅 < 𝜎0 | 𝜎[𝑅,∞) < 𝜎0 ] → 1 as 𝑅 → ∞ uniformly for −𝑀 𝑅 < 𝑥 < 𝑅, although (4.46) fails for 𝑥 < −𝑅/𝑀 if 2 ≤ 𝛽 < 3. (iii) Let 𝛼 ≤ 𝛽 < 2𝛼 − 1 < 3. Then 𝛿 < 𝑃 𝑥 [𝜎𝑅 < 𝜎0 | 𝜎[𝑅,∞) < 𝜎0 ] < 1 − 𝛿 for −𝑅𝑀 < 𝑥 < −𝑅/𝑀 for some 𝛿 = 𝛿 𝑀 > 0. [If 𝛼 ≤ 𝛽 < 2𝛼 − 1, then for lim 𝑃 𝑥 [𝜎𝑅 < 𝜎0 | 𝜎[𝑅,∞) < 𝜎0 ] = 1 to hold (4.46) is necessary or not according as 𝛼 < 2 or = 2 as a consequence of (i) to (iii).] For the proof we use results from Chapters 5 and 6. Write 𝑍 ∗ for 𝑆 𝜎 [0,∞) and, first of all, note that Lemma 5.1.3 entails 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎0 ] ∼ 𝐸 𝑥 [𝑎(𝑍 ∗ ); 𝑍 ∗ < 𝑅] 𝑎(𝑅) + 𝑃 𝑥 [𝑍 ∗ ≥ 𝑅], (4.47) 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ∼ 𝐸 𝑥 [𝑎(𝑍 ∗ ); 𝑍 ∗ < 𝑅] 𝑎(𝑅) + 𝑃 𝑥 [𝜎𝑅 < 𝜎0 , 𝑍 ∗ ≥ 𝑅] Í for 𝑥 < 0 and alsoÍthat for − 21 𝑅 < 𝑥 < 0, 𝑅 𝑧=1 𝑔 [0,∞) (𝑥, −𝑧) ≍ 𝑈a (𝑥)𝑉d (𝑅) (by Lemma 6.2.1) and ∞ 𝑣 (𝑧)𝜇 (𝑧) ∼ [𝛼 𝜌/(𝛽 ˆ − 𝛼 𝜌)]𝑉 ˆ d (𝑅)𝜇+ (𝑅), yielding d + 𝑅 ∗
𝑃 𝑥 [𝑍 > 𝑅] =
∞ ∑︁
𝑔 [0,∞) (𝑥, −𝑧)𝜇+ (𝑧 + 𝑅) ≍ 𝑈a (𝑥)𝑉d (𝑅)𝜇+ (𝑅).
(4.48)
𝑧=1
If 𝛽 < 2𝛼 − 1, then from Propositions 4.2.1 and 4.2.3 it follows that 𝑎(−𝑥) varies regularly with a positive index, which implies that (4.46) holds whenever 𝑅/𝑥 → −∞ because of the inequalities in (5.8), and fails for 𝑥 < −𝑅/𝑀. Suppose 𝛽 ≥ 2𝛼 − 1. Then 𝑎(−𝑦), 𝑦 > 0 is s.v., so that (4.46) holds uniformly for −𝑀 𝑅 < 𝑥 < −𝑅/𝑀. If 𝛽 > 2𝛼 − 1, 𝑎(−𝑅) converges to a positive constant; hence the assertion is trivial. Let 𝛽 = 2𝛼 − 1 and − 21 𝑅 < 𝑥 < 0, and put 𝐿 1 (𝑦) = 𝑦𝑉d (𝑦)𝜇+ (𝑦)𝑎(𝑦). Then, noting that 𝐿 1 is s.v., we deduce from (4.48) and Proposition 4.2.3 𝑃 𝑥 [𝑍 ∗ > 𝑅]𝑎(𝑅) 𝐸 𝑥 [𝑎(𝑍 ∗ ); 𝑍 ∗ > 𝑅] ≍ ≍ ∫ 𝑎(𝑥) 𝑎(𝑥) 1
𝐿 1 (𝑅) |𝑥 |
𝐿 1 (𝑧)𝑧 −1 d𝑧
·
|𝑥| . 𝑅
4.2 Distributions in Domains of Attraction
67
∫ |𝑥 | Since 1 𝐿 1 (𝑧)𝑧−1 d𝑧 ≫ 𝐿 1 (|𝑥|), the rightmost member above approaches zero uniformly for − 21 𝑅 < 𝑥 < 0. In view of (4.47) and the dual of (2.15), this shows 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ]𝑎(𝑅) ∼ 𝑎(𝑥), an equivalent of (4.46). These verify (i). For the proof of (ii), we show that if 𝛽 > 1, then for any 𝜀 > 0 there exists a 𝑛 𝜀 ≥ 1 such that 𝑃𝑥 𝑍 ∗ > 𝑛 𝜀 𝑅 𝑍 ∗ ≥ 𝑅 ≤ 𝜀 for − 𝑀 𝑅 < 𝑥 < −𝑅/𝑀. (4.49) To this end, use the inequality 𝑔 [0,∞) (𝑥, 𝑧) ≤ 𝑔 {0} (𝑥, 𝑧) ≤ 𝑔 {0} (𝑥, 𝑥) = 2𝑎(𝑥) ¯ to see 𝑥 𝐻 [0,∞) (𝑦) := 𝑃 𝑥 [𝑍 ∗ = 𝑦] ≤ 2𝑎(𝑥)𝜇 ¯ + (𝑦),
showing the upper bound: for 𝑛 ≥ 1 ∑︁
𝑃 𝑥 [𝑍 ∗ > 𝑛𝑅] ≤ 2𝑎(𝑥) ¯
𝜇+ (𝑦).
(4.50)
𝑦 ≥𝑛𝑅
For the lower bound, noting that 𝑔 𝐵 (𝑥, −𝑧) = 𝑔−𝐵 (𝑧, −𝑥), we apply (5.24) to obtain 𝑔 [0,∞) (𝑥, −𝑧) ≥ 𝑔 {0} (𝑧, −𝑥) − 𝑎(𝑥){1 + 𝑘 } = 𝑎 † (𝑧) − 𝑎(𝑧 + 𝑥) − 𝑜(𝑎(|𝑥|)), and hence that for |𝑥|/2 ≤ 𝑧 ≤ |𝑥|, 𝑔 [0,∞) (𝑥, −𝑧) ≥ 𝑎(|𝑥|){1 + 𝑜(1)}. This yields 𝑥 𝐻 [0,∞) (𝑦) ≥ 𝐶1 𝑎(|𝑥|)𝜇+ (𝑦)
if − 𝑥 ≍ 𝑦,
(4.51)
provided 𝜇+ varies regularly with index −𝛽. Thus for −𝑀 𝑅 < 𝑥 < −𝑅/𝑀, ∑︁ 𝑃 𝑥 [𝑅 < 𝑆 𝜎 [0,∞) ≤ 2𝑅] ≥ 𝐶1′ 𝑎(𝑥) ¯ 𝜇+ (𝑦), 𝑦 ≥𝑅
which together with (4.50) shows (4.49). If 𝛼 = 2, then uniformly for 𝑅 ≤ 𝑦 ≤ 𝑛𝑅 (𝑛 = 1, 2, . . .), 𝑎(𝑦) − 𝑎(−𝑅 + 𝑦) = 𝑎(𝑅) + 𝑜(𝑎(𝑦)) in view of Proposition 4.2.1(i), and hence 𝑃 𝑦 [𝜎𝑅 < 𝜎0 ] → 1, so that by (4.49) 𝑃 𝑥 [𝜎𝑅 < 𝜎0 , 𝑍 ∗ ≥ 𝑅] ∼ 𝑃 𝑥 [𝑍 ∗ ≥ 𝑅] in (4.47). In conjunction with the result for 𝑥 ≥ 0 given in (5.53) (as well as with (4.47) and (4.49)) this shows (ii). The proof of (iii) is similar. Note that 1 < 𝛼 < 2 under the assumption of (iii). Observe that 𝑃 𝑦 [𝜎0 < 𝜎𝑅 ] ∧ 𝑃 𝑦 [𝜎0 > 𝜎𝑅 ] is bounded away from zero for 2𝑅 ≤ 𝑦 ≤ 3𝑅, which together with (4.51) shows that for some 𝑐 > 0, for −𝑀 𝑅 ≤ 𝑥 ≤ −𝑅/𝑀, 𝑃 𝑥 [𝑍 ∗ ≥ 𝑅, 𝜎𝑅 < 𝜎0 ] ∧ 𝑃 𝑥 [𝑍 ∗ ≥ 𝑅, 𝜎0 < 𝜎𝑅 ] ≥ 𝑐𝑎(|𝑥|)𝜇+ (𝑅)𝑅. It also holds that 𝑃 𝑥 [𝑍 ∗ > 𝑅] ≤ 𝐶𝑎(|𝑥|)𝜇+ (𝑅)𝑅 (by (4.50)) and 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ≍ 𝑎(−𝑅)/𝑎(𝑅) (since by (4.43) 𝑎(𝑥) varies regularly with index 2𝛼 − 1 − 𝛽 ∈ (0, 1)). By (4.43) one has 𝑎(−𝑅)/𝑎(𝑅) ≍ 𝑎(𝑅)𝜇+ (𝑅)𝑅. In view of (4.47), these together
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
68
show 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎0 ] ≍ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ≍ 𝑎(𝑅)𝜇+ (𝑅)𝑅. Now one can conclude that both 𝑃 𝑥 [𝜎0 < 𝜎𝑅 | 𝜎[𝑅,∞) < 𝜎0 ] and 𝑃 𝑥 [𝜎0 > 𝜎𝑅 | 𝜎[𝑅,∞) < 𝜎0 ] are bounded away from zero. Thus (iii) is verified.
4.3 Asymptotics of 𝒂(𝒙 + 1) − 𝒂(𝒙) It is sometimes useful to know the bounds of the increments of 𝑎(±𝑥). From (3.10), whether 𝑆 is recurrent or not,3 it follows that if 𝐸 𝑋 = 0, ∫ 𝜋 1 cos 𝑥𝑡 − cos(𝑥 + 1)𝑡 ¯ + 1) − 𝑎(𝑥) = 𝑎(𝑥 ¯ ℜ d𝑡 (4.52) 𝜋 0 1 − 𝜓(𝑡) ∫ 𝜋 1 sin 𝑡 sin 𝑥𝑡 + (1 − cos 𝑡) cos 𝑥𝑡 = 𝛼(𝑡) d𝑡. 𝜋 0 [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡 Proposition 4.3.1 Let 𝐸 𝑋 = 0. If condition (H) (introduced in Section 3.1) holds, then for some constant 𝐶, as |𝑥| → ∞ ∫ |𝑥 | d𝑦 𝐶 |𝑎(𝑥 + 1) − 𝑎(𝑥)| ≤ |𝑥| 1 𝑚(𝑦) = 𝑂 |𝑥| − 𝛿 𝑎(𝑥) ¯ (for each 𝛿 < 1); in particular |𝑎(𝑥 + 1) − 𝑎(𝑥)| = 𝑂 |𝑥| −1 𝑎(𝑥) ¯ if lim inf 𝑐(𝑥)/𝑚(𝑥) > 0. [Cf. Proposition 4.3.5 below for the case 𝑐/𝑚 → 0.] Proof We split the range of the second integral in (4.52) at 𝜋/𝑥. Denote by 𝐼∗ (𝑥) and by 𝐼 ∗ (𝑥) the contributions to it from 𝑡 ≤ 𝜋/𝑥 and from 𝑡 > 𝜋/𝑥, respectively. Under (H) we have 𝑓 ◦ (𝑡) = 1/[𝛼2 (𝑡)+𝛾 2 (𝑡)] ≤ 𝐶1 /[𝑡 2 𝑚 2 (1/𝑡)] for 𝑡 small enough (Lemma 3.2.6). Using this together with 𝛼(𝑡) ≤ 5𝑡𝑐(1/𝑡) (Lemma 3.2.1), one deduces ∫
𝜋/𝑥
|𝐼∗ (𝑥)| ≤ 𝑥
2𝑡𝛼(𝑡) d𝑡 + 𝛾 2 (𝑡) 𝑐(𝑦) d𝑦 𝑚 2 (𝑦)𝑦 2
𝛼2 (𝑡)
0
∫
∞
≤ 𝐶𝑥 𝑥
𝐶 ≤ , 𝑚(𝑥) where for the last inequality the monotonicity of 𝑚 is used.
3 Even if 𝐸 |𝑋 | = ∞, 𝛾 may be well defined, although 𝛽± is not.
4.3 Asymptotics of 𝑎 ( 𝑥 + 1) − 𝑎 ( 𝑥)
69
For the evaluation of 𝐼 ∗ (𝑥) we apply Lemma 3.2.11, which entails |𝛼(𝑡) − 𝛼(𝑡 + 𝜋/𝑥)| ≤ 𝐶𝑚(1/𝑡)/𝑥 and | 𝑓 ◦ (𝑡) − 𝑓 ◦ (𝑡 + 𝜋/𝑥)| ≤ 𝐶 𝑓 ◦ (𝑡)/𝑡𝑥 for 1/𝑥 < 𝑡 < 𝜋. By a decomposition of the integral over [𝜋/𝑥, 𝜋] similar to that made in (3.56) these bounds yield ∫ 𝜋 𝐶 [𝑚(1/𝑡) + 𝑐(1/𝑡)] 𝑓 ◦ (𝑡) d𝑡 |𝐼 ∗ (𝑥)| ≤ 𝑥 𝜋/𝑥 ∫ 𝑥/ 𝜋 d𝑦 𝐶′ . ≤ 𝑥 1/ 𝜋 𝑚(𝑦) Thus we have
| 𝑎(𝑥 ¯ + 1) − 𝑎(𝑥)| ¯ ≤ 𝐶0 𝑥 −1
∫ 1
𝑥
d𝑦 𝑚(𝑦).
As for the increments of 𝑎(±𝑥), it suffices to obtain a similar bound for the increment of 𝜋[𝑏 + (𝑥) − 𝑏 − (𝑥)] because of (3.44). Split the range of the integral representing it at 𝜋/𝑥 and let 𝐽∗ (𝑥) + 𝐽 ∗ (𝑥) be ∫the corresponding decomposition. In 𝑥 the same way as above we get |𝐽 ∗ (𝑥)| ≤ 𝐶𝑥 −1 1 d𝑦/𝑚(𝑦). Putting ∫ 𝐾± (𝑥) = 0
𝜋/𝑥
𝛽± (𝑡) (sin (𝑥 + 1)𝑡 − sin 𝑥𝑡) d𝑡, + 𝛾 2 (𝑡)]𝑡
[𝛼2 (𝑡)
(4.53)
by elementary computation (see Lemma 4.3.2 below) we deduce that |𝐽∗ (𝑥)| = |𝐾− (𝑥) − 𝐾+ (𝑥)| ≤ 𝜋|𝑏 − (𝑥) − 𝑏 + (𝑥)|/𝑥 + 𝑂 (1/𝑚(𝑥)). that |𝑏 − (𝑥) − 𝑏 + (𝑥)| = |𝑎(𝑥) − 𝑎(−𝑥)|/2 ≤ 𝐶2 𝑥/𝑚(𝑥) under (H) and that ∫Noting 𝑥 d𝑦/𝑚(𝑦) is 𝑜 𝑥 2− 𝛿 /𝑚(𝑥) in general and bounded above by 𝑥/𝑚(𝑥) if 𝑚(𝑥) ≤ 0 𝑂 (𝑐(𝑥)), we can now conclude the assertion of the proposition. □ Lemma 4.3.2 Let (H) hold and 𝐾± (𝑥) be as given in (4.53). Then uniformly for 𝑥 > 1, ∫ 𝜋/𝑥 𝛽± (𝑡) sin 𝑥𝑡 1 1 . (4.54) 𝐾± (𝑥) = d𝑡 + 𝑂 𝑥 0 𝑚(𝑥) [𝛼2 (𝑡) + 𝛾 2 (𝑡)]𝑡 Proof The result follows from the equality sin(𝑥 + 1)𝑡 − sin 𝑥𝑡 = sin 𝑡 cos 𝑥𝑡 + 𝑂 (𝑥𝑡 3 ) = 𝑥 −1 sin 𝑥𝑡 + 𝑂 (𝑥 2 𝑡 3 ), valid uniformly for 𝑥 > 1, 𝑡 > 0 satisfying 0 < 𝑥𝑡 ≤ 𝜋.
□
If the tails of 𝐹 possess an appropriate regularity, we can obtain the exact asymptotics of the increments of 𝑎(±𝑥).
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
70
Proposition 4.3.3 Suppose that (4.14) is satisfied with 1 < 𝛼 ≤ 2. Then 𝑎(𝑥 + 1) − 𝑎(𝑥) ∼ (𝛼 − 1)𝑎(𝑥)/𝑥
if lim inf 𝑎(𝑥)/𝑎(𝑥) > 0, ¯
𝑎(−𝑥 − 1) − 𝑎(−𝑥) ∼ (𝛼 − 1)𝑎(−𝑥)/𝑥
> 0, ¯ if lim inf 𝑎(−𝑥)/𝑎(𝑥)
𝑥→∞
𝑥→∞
as 𝑥 → ∞; and 𝑎(𝑥 + 1) − 𝑎(𝑥) = 0 if 𝑥→±∞ 𝑎(𝑥)/|𝑥| ¯
𝑎(𝑥) = 0, 𝑥→±∞ 𝑎(𝑥) ¯
lim
lim
where both upper or both lower signs should be chosen in the double signs. Let 1 < 𝛼 ≤ 2. If 1 < 𝛼 < 2, the assertion of the above proposition is verified in [82, Lemma 3.1] by a relatively simple argument, whereas for 𝛼 = 2, the proof is somewhat involved. In preparation for this, we introduce some notation. Put 𝜙 𝑐 (𝜃) := ℜ[1 − 𝜓(𝜃)], ∫
𝜙 𝑠 (𝜃) := ℑ[1 − 𝜓(𝜃)],
∞
𝑊+ (𝑥) :=
∫ 𝑦 d𝐹 (𝑦),
𝑦 d𝐹 (𝑦), −∞
𝑥+0
and
−𝑥−0
𝑊− (𝑥) := −
𝑊 (𝑥) := 𝑊+ (𝑥) + 𝑊− (𝑥).
It follows that 1 − 𝜓(𝜃) = 𝜙 𝑐 (𝜃) + 𝑖𝜙 𝑠 (𝜃), 𝜙 𝑐 (𝜃) = 𝜃𝛼(𝜃), 𝜙 𝑠 (𝜃) = 𝜃𝛾(𝜃), ∫ 𝑥 𝑊± (𝑥) = 𝑥𝜇± (𝑥) + 𝜂± (𝑥) and 𝑊± (𝑦) d𝑦 = 𝑐 ± (𝑥) + 𝑚 ± (𝑥). 0
Integrating by parts leads to ∫ ∞ 𝜙 𝑐′ (𝜃) = 𝜃 𝑊 (𝑥) cos 𝜃𝑥 d𝑥,
𝜙 𝑠′ (𝜃) = 𝜃
0
∫
∞
[𝑊+ (𝑥) − 𝑊− (𝑥)] sin 𝜃𝑥 d𝑥, 0
for 𝜃 ≠ 0, while 𝜙 𝑐′ (0+) = 𝜙 𝑐′ (0) = 0, 𝜙 𝑠′ (0+) = 𝜙 𝑠′ (0) = 0. Here we have used the assumption 𝐸 𝑋 = 𝑊+ (0) − 𝑊− (0) = 0 in deriving the expression of 𝜙 𝑠′ (𝜃). Lemma 4.3.4 Let 𝛼 = 2 in (4.14). (i) 𝜙 𝑐′ (𝜃) ∼ 2𝜃𝑚(1/𝜃){1 + 𝑜(1)}, 𝜙 𝑠′ (𝜃) = 𝑜(𝜃𝑚(1/𝜃)) as 𝜃 ↓ 0. (ii) As 𝑥 → ∞ ′ ∫ 𝜋 𝑥 ℜ 1 𝑡 d𝑡 = 𝑂 (𝑡) uniformly for 𝑀 > 1, 1−𝜓 𝑀𝑚(𝑥/𝑀) 𝑀/𝑥 ∫
𝜋 𝜀/𝑥
′ 𝑥 𝑡 d𝑡 = 𝑜 ℑ 1 (𝑡) 1−𝜓 𝑚(𝑥)
[The symbol ′ denotes differentiation.]
for each 𝜀 > 0.
4.3 Asymptotics of 𝑎 ( 𝑥 + 1) − 𝑎 ( 𝑥)
71
Proof∫ Split the range of the integral representing 𝜙 𝑐′ (𝜃) and 𝜙 𝑠′ (𝜃) at 1/𝜃, and note ∞ that 1/𝜃 𝑊 (𝑥) cos 𝜃𝑥 d𝑥 ≤ 2𝜃 −1𝑊 (1/𝜃). Then, the formulae of (i) readily follow from 𝑚(𝑥) ∼ 𝑐(𝑥) ∼ 𝐿(𝑥)/2 and 𝑥𝑊 (𝑥) = 𝑜(𝑚(𝑥)) (cf. Section A.1.1). Since |𝜙 𝑠 (𝑡)| ≪ 𝜙 𝑐 (𝑡) ∼ 𝑡𝛼(𝑡) ∼ 𝑡 2 𝑚(1/𝑡), by (i) one obtains ′ [(𝜙2𝑠 − 𝜙2𝑐 )𝜙 𝑐′ − 2𝜙 𝑐 𝜙 𝑠 𝜙 𝑠′ 𝜙 𝑐 ] (𝑡) − 𝜙 𝑐′ (𝑡) 1 −2 ℜ (𝑡) = ∼ 2 ∼ 3 , 2 2 2 1−𝜓 𝑡 𝑚(1/𝑡) [𝜙 𝑐 + 𝜙 𝑠 ] (𝑡) 𝜙 𝑐 (𝑡) hence the first bound of (ii). The proof of the second one is similar.
□
Proof (of Proposition 4.3.3 in the case 𝜶 = 2) Recall the increment 𝑎(𝑥 ¯ + 1) − 𝑎(𝑥) ¯ is given by (4.52). Putting 𝜀 = [(𝑛 + 21 )𝜋] −1 with a (large) integer 𝑛, we infer that its last integral restricted to 𝑡 ≤ 1/𝜀𝑥 is written as ∫ 0
1/𝜀 𝑥
𝜋/2 1 + 𝑜(1) sin 𝑥𝑡 + 𝑂 (1) d𝑡 = {1 + 𝑂 (𝜀) + 𝑜(1)} 𝑚(1/𝑡) 𝑡 𝑚(𝑥)
∫ 1/𝜀 𝑥 (see the derivation of (4.28); also note that 𝜀/𝑥 𝑡 −1 sin 𝑥𝑡 d𝑡 = 21 𝜋 + 𝑂 (𝜀)), while by the first formula of Lemma 4.3.4(ii) ′ ∫ 𝜋 ∫ 𝜋 1 𝑡 sin 𝑥𝑡 𝑡 d𝑡 = cos 𝑥𝑡 d𝑡 + 𝑂 (1/𝑥) ℜ ℜ 1 − 𝜓(𝑡) 𝑥 1/𝜀 𝑥 1 − 𝜓(𝑡) 1/𝜀 𝑥 𝜀 =𝑂 . 𝑚(𝜀𝑥) From these estimates we can easily conclude that 𝑎(𝑥 ¯ + 1) − 𝑎(𝑥) ¯ ∼ 𝑎(𝑥)/𝑥. ¯ We still need to identify the asymptotic form of increments of 𝑏 ± (𝑥). From the second formula of Lemma 4.3.4(ii) we deduce, as above, that the integral representing 𝑏 ± (𝑥 + 1) − 𝑏 ± (𝑥) over 𝜀/𝑥 ≤ 𝑡 ≤ 𝜋 is 𝑜(1/𝑚(𝑥)). Since 𝑚/𝑚 ˜ → 0, we can replace 𝑂 (1/𝑚(𝑥)) by 𝑜(1/𝑚(𝑥)) on the RHS of (4.54). Hence, recalling (4.26) (the integral representation of 𝑏 + (𝑥)) we obtain 1 𝑏 + (𝑥) +𝑜 , 𝑏 + (𝑥 + 1) − 𝑏 + (𝑥) = 𝑥 𝑚(𝑥) and similarly for 𝑏 − (𝑥). Now the result follows immediately from 𝑎(±𝑥) = 𝑎(𝑥) ¯ ± [𝑏 − (𝑥) − 𝑏 + (𝑥)]. □ If (4.14) is valid with 𝛼 = 1, one may expect to have |𝑎(𝑥 + 1) − 𝑎(𝑥)| = 𝑂 ( 𝑎(𝑥)/|𝑥|) ¯ (or rather = 𝑜 ( 𝑎(𝑥)/𝑥)) ¯ as |𝑥| → ∞, in Propositions 4.3.1 and 4.3.3. However, without some nice regularity condition of 𝜇, it seems hard to show such a∫ result. The difficulty arises mainly from that of getting a proper evaluation of ∞ 𝑦𝜇± (𝑦)e𝑖𝑡 𝑦 d𝑦. Still, we can get one under a mild additional condition on 𝜇± , as 1/𝑡 given below.
4 Some Explicit Asymptotic Forms of 𝑎 ( 𝑥)
72
Put 𝐿 ± (𝑥) = 𝑥𝜇± (𝑥) and 𝐿 (𝑥) = 𝑥𝜇(𝑥) and bring in the condition: as 𝑥 → ∞ ∫𝑥 Í (∗) 0 𝐿 (𝑦) d𝑦 = 𝑂 (𝑥𝐿(𝑥)) and 𝑛>𝑥 |𝐿 ± (𝑛) − 𝐿 ± (𝑛 − 1)| = 𝑂 (𝐿(𝑥)).4 as |𝑥| → ¯ Proposition 4.3.5 Suppose (∗) holds. Then 𝑎(𝑥 + 1) − 𝑎(𝑥) = 𝑂 ( 𝑎(𝑥)/|𝑥|) ∞, under each of the conditions (a) 𝐸 𝑋 = 0 and (H) holds; (b) 𝐹 is r.s. (𝐹 may be transient); (c) 𝑥𝜇(𝑥) is s.v. and | 𝐴(|𝑥|)| = 𝑂 (𝑥𝜇(𝑥)) (𝑥 → ∞). [In case (c), 𝐹 is recurrent under (∗) (by the well-known recurrence criterion) and 𝑃[𝑆 𝑛 > 0] is bounded away from 0 and 1 if 𝜇− /𝜇+ → 1 (cf. [83, Section 4.3]).] Proof Under (∗) we have ∫ ∞ 𝐶 𝑖𝑡 𝑦 𝐿 ± (𝑦)e d𝑦 ≤ 𝐿(𝑥) 𝑡 𝑥
and
∫
𝑥 𝑖𝑡 𝑦
𝐿 ± (𝑦)e
0
𝐶 d𝑦 ≤ 𝐿 (1/𝑡) 𝑡
(4.55)
for 1/𝑥 < 𝑡 < 𝜋, 𝑥 > 1.5 Before proving (4.55), we verify the assertion of the proposition by taking it for granted. In case (a), one has only to follow the proof of Proposition 4.3.1. For by the first bound of (4.55) the inner integral on the RHS of (3.29) is bounded above by a constant multiple of 𝜏 −1 𝐿 (1/𝑠) ≤ 𝑐(1/𝑠) (𝜏 > 𝑠) so that the second bound of Lemma 3.2.11 can be replaced by const.(𝑡 − 𝑠)𝑐(1/𝑠). For cases (b) and (c) we bring in the functions ∫ 𝑥 ∫ 𝑥 𝛼 𝑥 (𝑡) = 𝜇(𝑦) sin 𝑡𝑦 d𝑦, 𝛾 𝑥 (𝑡) = [𝜇− (𝑦) − 𝜇+ (𝑦)] cos 𝑡𝑦 d𝑦. 0
0
Clearly |𝛼(𝑡) − 𝛼 𝑥 (𝑡)| ∨ |𝛾(𝑡) − 𝛾 𝑥 (𝑡)| ≤ 2𝜇(𝑥)/𝑡. By the second bound of (4.55) |𝛼 𝑥′ (𝑡)|, |𝛾 𝑥′ (𝑡)| ≤ 𝑂 (𝐿(1/𝑡)/𝑡) .
(4.56)
Let (b) hold. Then, 𝑓 ◦ (𝑡) ∼ | 𝐴(1/𝑡)| −2 (see (P2) of Section 4.1) and, noting that by (∗) 𝐿 (𝑥) ≤ 𝑂 (𝐿(1/𝑡)) for 𝑡 > 1/𝑥, we see ∫
𝜋
1/𝑥
𝐶 [𝛼(𝑡) − 𝛼 𝑥 (𝑡)] 𝑓 (𝑡) d𝑡 ≤ 𝑥 ◦
∫
1/𝑥0
1/𝑥
𝐿 (1/𝑡) d𝑡 2𝐶 𝑎(𝑥) ¯ . ∼ 𝑥 𝑡 𝐴2 (1/𝑡)
By the bound of 𝛼 𝑥′ in (4.56) it follows that for 𝑡 > 1/𝑥 and for a positive constant 𝑏 both small enough 4 The second condition holds if 𝑛𝜇± (𝑛) (𝑛 = 1, 2, 3, . . .) are ultimately non-increasing, while it implies that 𝐿 is almost decreasing. To avoid any misunderstanding, we remark that none of 𝐿± or 𝐿 are assumed to be r.s. Í Í 5 Alternatively derive the corresponding bounds of 𝑛>𝑥 𝐿± (𝑛)e𝑖𝑡𝑛 , 0≤𝑛≤𝑥 𝐿± (𝑛)e𝑖𝑡𝑛 (much easier than (4.55)) and use the expression for 1 − 𝜓 in the footnote on page 20.
4.3 Asymptotics of 𝑎 ( 𝑥 + 1) − 𝑎 ( 𝑥)
∫
73
𝑏
|𝛼 𝑥′ (𝑡)| ∨ |𝛾 𝑥′ (𝑡)| 𝑓 ◦ (𝑡) d𝑡 ≤ 𝐶 ′
∫
𝑏
1/𝑥
1/𝑥
𝐿 (1/𝑡) d𝑡 ∼ 2𝐶 ′ 𝑎(𝑥), ¯ 𝑡 𝐴2 (1/𝑡)
showing that the contribution from 1/𝑥 < 𝑡 ≤ 𝜋 to the second integral in (4.52) is 𝑂 ( 𝑎(𝑥)/𝑥). ¯ The contribution from 0 ≤ 𝑡 ≤ 1/𝑥 is at most ∫ 1/𝑥 ∫ 1/𝑥 1 1 𝑎(𝑥) ¯ . 2 𝛼(𝑡) 𝑓 ◦ (𝑡) d𝑡 = 𝑜 d𝑡 = 𝑜 =𝑜 | 𝐴(1/𝑡)| 𝑥 𝐴(𝑥) 𝑥 0 0 Thus 𝑎(𝑥 ¯ + 1) − 𝑎(𝑥) ¯ = 𝑂 ( 𝑎(𝑥)/𝑥). ¯ In the same way we see that the increment of 𝑏 − (𝑥) − 𝑏 + (𝑥) is dominated, in absolute value, by a constant multiple ¯ ∫ 𝑥of 𝑎(𝑥)/𝑥. If (c) holds, then 𝛼(𝑡) ∼ 21 𝜋𝐿 (1/𝑡), 𝛾(𝑡) = 𝑂 (𝐿 (1/𝑡)) and 𝑎(𝑥) ¯ ≍ 1 [𝑦𝐿(𝑦)] −1 d𝑦 (see the derivation of (4.34)), and as above, one can easily obtain ∫ 𝜋 ∫ 𝜋 d𝑡 𝐶1 [1/𝐿] ∗ (𝑥) , [𝛼(𝑡) − 𝛼 𝑥 (𝑡)] 𝑓 ◦ (𝑡) d𝑡 ≤ ∼ 𝐶1 𝑥 1/𝑥 𝑡 𝐿(1/𝑡) 𝑥 1/𝑥 ∫ 𝑏 ∫ 𝑏 ′ 1 |𝛼 𝑥 (𝑡)| ∨ |𝛾 𝑥′ (𝑡)| 𝑓 ◦ (𝑡) d𝑡 ≤ 𝐶1 d𝑡 ∼ 𝐶1 [1/𝐿] ∗ (𝑥), 𝑡 𝐿(1/𝑡) 1/𝑥 1/𝑥 and
∫ 1/𝑥 0
𝛼 𝑓 ◦ d𝑡 ≤ 𝐶
∫∞ 𝑥
[𝑦 2 𝐿(𝑦)] −1 d𝑦 = 𝑜( 𝑎(𝑥)/𝑥), ¯ to conclude the result.
□
Proof (of (4.55)) We have only to prove the formulae for 𝐿 + . Note that 𝜇(𝑦) = 𝜇(𝑛) for 𝑛 ≤ 𝑦 < 𝑛 + 1. Then, writing 𝑦e𝑖𝑡 𝑦 = (𝑤e𝑖𝑡𝑤 + 𝑛e𝑖𝑡𝑤 )e𝑖𝑡 𝑛 , 𝑤 = 𝑦 − 𝑛, one infers that for 𝑥 > 1/𝑡, ∫ 𝑥 𝑦𝜇+ (𝑦)e𝑖𝑡 𝑦 d𝑦 = 𝜆1 (𝑡)I (𝑡, 𝑥) + 𝜆2 (𝑡)II (𝑡, 𝑥) + 𝑂 (𝐿(1/𝑡)), ⌊1/𝑡 ⌋
where 𝜆1 (𝑡) =
∫1 0
𝑤e𝑖𝑡𝑤 d𝑤 =
I (𝑡, 𝑥) =
⌊𝑥 ⌋ ∑︁ 𝑛= ⌊1/𝑡 ⌋
1 2
+ 𝑂 (𝑡), 𝜆2 (𝑡) =
𝜇+ (𝑛)e
𝑖𝑡 𝑛
∫1 0
e𝑖𝑡𝑤 d𝑤 = 1 + 𝑂 (𝑡),
and II (𝑡, 𝑥) =
⌊𝑥 ⌋ ∑︁
𝐿 + (𝑛)e𝑖𝑡 𝑛 .
𝑛= ⌊1/𝑡 ⌋
Summing by parts and using (∗) one obtains II (𝑡, 𝑥) = 𝑂 (𝐿(1/𝑡)/𝑡), while I (𝑡, 𝑥) = 𝑂 (𝜇(1/𝑡)/𝑡) by monotonicity of 𝜇+ . One can therefore conclude that under (∗) ∫ 𝑥 ∫ ⌊1/𝑡 ⌋ ∫ 𝑥 𝐶′ 𝑖𝑡 𝑦 𝑖𝑡 𝑦 ≤ 𝐿 (1/𝑡), 𝐿 (𝑦)e d𝑦 ≤ 𝐿 (𝑦) d𝑦 + 𝐿 (𝑦)e d𝑦 + + + 𝑡 0 0 ⌊1/𝑡 ⌋ showing the second bound of (4.55). The first one is shown similarly.
□
Chapter 5
Applications Under 𝒎 + /𝒎 → 0
This chapter is devoted to applications of the results of Chapter 3. We apply Theorems 3.1.1, 3.1.2 and 3.1.6 to obtain some asymptotic estimates of the upwards overshoot distribution of the r.w. over a high level, 𝑅 say. We also estimate the probability of the r.w. escaping the origin, i.e., going beyond the level 𝑅 > 1 or either of the levels 𝑅 or −𝑅 without visiting zero. It seems hard to obtain sharp results for these things in a general setting. Under the condition lim 𝑥→∞ 𝑚 + (𝑥)/𝑚(𝑥) = 0 (abbreviated as 𝑚 + /𝑚 → 0, as before), however, our theorems about 𝑎(𝑥) are effectively used to yield natural results. The overshoot distribution is related to the relative stability of the ladder height variables. We shall obtain a sufficient condition for the relative stability of the ascending ladder height (Proposition 5.2.1) and the asymptotic estimates of the escape probabilities as mentioned above (Proposition 5.5.1 (one-sided) and Propositions 5.6.1 and 5.6.6 (two-sided)). As a byproduct of these results, we deduce that if 𝑚 + /𝑚 → 0, then both 𝑎(𝑥) and 𝑎(−𝑥) are asymptotically increasing as 𝑥 → ∞ (see Corollary 5.1.6) as well as the following result concerning the classical twosided exit problem, which has not been satisfactorily answered in the case 𝜎 2 = ∞. Denote by 𝜎𝐵 the first entrance time into a set 𝐵 ⊂ Z of the r.w. and by 𝑉d (𝑥) the renewal function for the weakly descending ladder height process of the r.w. It turns out that if 𝑚 + /𝑚 → 0, then the ratio 𝑉d (𝑥)/𝑎(𝑥) is s.v. at infinity and uniformly for 1 ≤ 𝑥 ≤ 𝑅, as 𝑅 → ∞, 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎(−∞,−1] ] ∼ 𝑉d (𝑥)/𝑉d (𝑅)
(5.1)
(Lemma 5.3.1(i) and Proposition 5.3.2, respectively). For Lévy processes having no upward jumps, there is an identity for the corresponding probability (cf. [22, Section 9.4]), and (5.1) is an exact analogue of it for the r.w. In Chapter 6, the asymptotic equivalence (5.1) will be obtained under conditions other than 𝑚 + /𝑚 → 0, and related matters will be addressed. Recall that if the r.w. is left-continuous (i.e., 𝑃[𝑋 ≤ −2] = 0), then 𝑎(𝑥) = 𝑥/𝜎 2 for 𝑥 > 0; analogously 𝑎(𝑥) = −𝑥/𝜎 2 for 𝑥 < 0 for right-continuous r.w.’s (i.e., r.w.’s satisfying 𝑃[𝑋 > 2 = 0]) and that there exists lim 𝑥→∞ 𝑎(𝑥) ≤ ∞ and inf 𝑥>0 𝑎(𝑥) > 0 except for the left-continuous r.w.’s (cf. (2.12), 2.13), 2.15)). Most of the results © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8_5
75
5 Applications Under 𝑚+ /𝑚 → 0
76
obtained in this chapter, except for those in the last two sections, are thought of as examining to what extent the results which are simple or well-established for the right-continuous r.w.’s can be generalised to those satisfying 𝑚 + /𝑚 → 0. In later chapters, we shall discuss the subjects treated in this chapter in some other settings. In [71], the Green function of the r.w. killed on hitting zero is defined by 𝑔(𝑥, 𝑦) = 𝑔 {0} (𝑥, 𝑦) − 𝛿0, 𝑥 , ¯ = 21 [𝑎(𝑥) + 𝑎(−𝑥)]. so that 𝑔(0, ·) = 𝑔(·, 0) = 0. Recall 𝑎 † (𝑥) = 𝑎(𝑥) + 𝛿0, 𝑥 and 𝑎(𝑥) The function 𝑎(𝑥) bears relevance to 𝑔 {0} through the identity (2.3), or, what is the same, 𝑔 {0} (𝑥, 𝑦) = 𝑎 † (𝑥) + 𝑎(−𝑦) − 𝑎(𝑥 − 𝑦). (5.2) Since 𝑔 {0} (𝑥, 𝑦) = 𝑃 𝑥 [𝜎𝑦 < 𝜎0 ]𝑔(𝑦, 𝑦) if 𝑥 ≠ 𝑦 ≠ 0, from (5.2) one infers that 𝑃 𝑥 [𝜎𝑥 > 𝜎0 ] = 𝑃0 [𝜎𝑥 < 𝜎0 ] = 1/𝑔(𝑥, 𝑥) (𝑥 ≠ 0), and 𝑃 𝑥 [𝜎𝑦 < 𝜎0 ] =
𝑎 † (𝑥) + 𝑎(−𝑦) − 𝑎 † (𝑥 − 𝑦) 2𝑎(𝑦) ¯
(𝑦 ≠ 0).
(5.3)
The identities (5.2) and (5.3) will be fundamental in this chapter and will occasionally be used in later chapters. Throughout this chapter, we shall tacitly assume 𝐸 𝑋 = 0 (resp. 𝑆 is recurrent) when the function 𝑚(𝑥) (resp. 𝑎(𝑥)) is involved.
5.1 Some Asymptotic Estimates of 𝑷𝒙 [𝝈𝑹 < 𝝈0 ] The potential function satisfies the functional equation ∞ ∑︁
𝑝(𝑦 − 𝑥)𝑎(𝑦) = 𝑎 † (𝑥)
(5.4)
𝑦=−∞
(cf. [71, p.352]), which restricted to 𝑥 ≠ 0 states that 𝑎 is harmonic there, so that the process 𝑀𝑛 := 𝑎(𝑆 𝜎0 ∧𝑛 ) is a martingale (under 𝑃 𝑥 ) for each 𝑥 ≠ 0, and by the optional sampling theorem and Fatou’s lemma 𝑎(𝑥) ≥ 𝐸 𝑥 [𝑎(𝑆 𝜎0 ∧𝜎𝑦 )] = 𝑎(𝑦)𝑃 𝑥 [𝜎𝑦 < 𝜎0 ]
(𝑥 ≠ 0).
(5.5)
if 𝑎(−𝑦) ≠ 0.
(5.6)
Lemma 5.1.1 For all 𝑥, 𝑦 ∈ Z, −
𝑎(𝑦) 𝑎(𝑥) ≤ 𝑎(𝑥 + 𝑦) − 𝑎(𝑦) ≤ 𝑎(𝑥) 𝑎(−𝑦)
This is Lemma 3.2 of [84]. The right-hand inequality of (5.6), stating the subadditivity of 𝑎(·) that is given in [71], is the same as 𝑔(𝑥, −𝑦) ≥ 0. The left-hand inequality, which seems much less familiar, will play a significant role in the sequel.
5.1 Some Asymptotic Estimates of 𝑃𝑥 [ 𝜎𝑅 < 𝜎0 ]
77
It follows readily from (5.3) and (5.5). Indeed, with variables suitably chosen, these relations together yield 𝑎(𝑥) + 𝑎(𝑦) − 𝑎(𝑥 + 𝑦) 𝑎(𝑥) ≤ 𝑎(𝑦) + 𝑎(−𝑦) 𝑎(−𝑦)
(𝑎(−𝑦) ≠ 0),
(5.7)
which, after rearrangement, becomes the left-hand inequality of (5.6). Remark 5.1.2 (a) The left-hand inequality of (5.6) may yield useful upper as well as lower bounds of the middle term. Here we write down such bounds in a form to be used later: −
𝑎(𝑥 − 𝑅) 𝑎(−𝑥) ≤ 𝑔 {0} (𝑥, 𝑅) − 𝑎 † (𝑥) 𝑎(𝑅 − 𝑥) = 𝑎(−𝑅) − 𝑎(𝑥 − 𝑅) ≤
𝑎(−𝑅)𝑎(𝑥) , 𝑎(𝑅)
(5.8)
provided 𝑎(𝑅)𝑎(𝑅 − 𝑥) ≠ 0. Both inequalities are deduced from the left-hand inequality of (5.6): the lower bound follows by replacing 𝑦 and 𝑥 with 𝑥 − 𝑅 and −𝑥, respectively, and the upper bound by replacing 𝑦 with −𝑅. For 𝑥 ≠ 𝑅 (≥ 1), since 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] = 𝑔 {0} (𝑥, 𝑅)/2𝑎(𝑅), ¯ −
𝑎(𝑥 − 𝑅) 𝑎(−𝑥) 𝑎 † (𝑥) 𝑎(−𝑅)𝑎(𝑥) · . ≤ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] − ≤ 𝑎(𝑅 − 𝑥) 2𝑎(𝑅) 2𝑎(𝑅) 2𝑎(𝑅) 𝑎(𝑅) ¯ ¯ ¯
(5.9)
(5.8) is quite efficient in the case 𝑎(−𝑅)/𝑎(𝑅) → 0 and will be used later. (b) By (5.3) and the subadditivity of 𝑎, 𝑃 𝑥 [𝜎0 < 𝜎𝑦 ] =
𝑎(𝑥 − 𝑦) + 𝑎(𝑦) − 𝑎 † (𝑥) 𝑎(𝑦 ¯ − 𝑥) ≤ 2𝑎(𝑦) 𝑎(𝑦) ¯ ¯
(𝑦 ≠ 0, 𝑥).
Lemma 5.1.3 Suppose lim 𝑎(−𝑧)/𝑎(𝑧) ¯ = 0. Then 𝑧→∞
(i) uniformly for 𝑥 ≤ −𝑅, as 𝑅 → ∞, 𝑎(𝑥) − 𝑎(𝑥 + 𝑅) −→ 0 and 𝑎(𝑅)
𝑃 𝑥 [𝜎−𝑅 < 𝜎0 ] −→ 1;
(ii) uniformly for −𝑀 < 𝑥 < 𝑅 with any fixed 𝑀 > 0, as 𝑅 → ∞, 𝑎(−𝑅) − 𝑎(𝑥 − 𝑅) −→ 0 and 𝑎 † (𝑥)
𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] =
𝑎 † (𝑥) {1 + 𝑜(1)}. 𝑎(𝑅)
Proof Suppose lim𝑧→∞ 𝑎(−𝑧)/𝑎(𝑧) ¯ = 0. This excludes the possibility of the leftcontinuity of the r.w. so that 𝑎 † (𝑥) > 0 for all 𝑥 and (ii) follows immediately from (5.8) and (5.9). (i) is deduced from (5.6) as above (substitute 𝑥 + 𝑅 and −𝑅 for 𝑥 and 𝑦, respectively, for the lower bound; use sub-additivity of 𝑎 for the upper bound). □
5 Applications Under 𝑚+ /𝑚 → 0
78
By virtue of Proposition 3.1.5 and Theorem 3.1.6, Lemma 5.1.3(ii) entails the following Corollary 5.1.4 Suppose 𝑚 + /𝑚 → 0. Then 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] → 1 as 𝑥/𝑅 ↑ 1. ¯ which is a By Remark 5.1.2(b) we know that 𝑃 𝑅−𝑦 [𝜎𝑅 < 𝜎0 ] ≥ 1 − 𝑎(𝑦)/ 𝑎(𝑅), ¯ better estimate than the one given above in most cases but does not generally imply ¯ possibly approaches unity even 𝑎(𝑅) the consequence in Corollary 5.1.4 since 𝑎(𝑦)/ ¯ if 𝑦/𝑅 ↓ 0 (cf. Lemma 5.3.1(i)). By the same token 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] may approach zero when 𝑥/𝑅 increases to 1 in a suitable way (under 𝑚 − /𝑚 → 0). Lemma 5.1.5 Suppose that 𝑃[𝑋 ≥ 2] > 0 and 𝛿 := lim sup 𝑎(−𝑦)/𝑎(𝑦) < 1. ¯ 𝑦→∞
Then lim inf
𝑥→∞ 𝑦 ≥𝑥
𝑎(−𝑦) ≥ 1 − 𝛿. 𝑎(−𝑥)
In particular, if 𝛿 = 0, then 𝑎(−𝑥) is asymptotically increasing with 𝑥 in the sense that there exists an increasing function 𝑓 (𝑥) such that 𝑎(−𝑥) = 𝑓 (𝑥){1 + 𝑜(1)} (𝑥 → ∞). Proof For any 𝛿 ′ ∈ (𝛿, 1) choose 𝑁 > 0 such that 𝑎(−𝑧)/𝑎(𝑧) < 𝛿 ′ for all 𝑧 ≥ 𝑁 and let 𝑁 ≤ 𝑥 ≤ 𝑦 − 𝑁 and 𝑧 = 𝑦 − 𝑥. If 𝑎(−𝑥) ≤ 𝑎(−𝑧), then on using (5.6) 𝑎(−𝑦) − 𝑎(−𝑧) = 𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧) 𝑎(−𝑧) 𝑎(−𝑧) 𝑎(−𝑥) ≥ − 𝑎(−𝑧), ≥− 𝑎(𝑧) 𝑎(𝑧)
(5.10)
which by a simple rearrangement leads to 𝑎(−𝑧) 𝑎(−𝑧) ≥ (1 − 𝛿 ′)𝑎(−𝑧) ≥ (1 − 𝛿 ′)𝑎(−𝑥). 𝑎(−𝑦) ≥ 1 − 𝑎(𝑧) If 𝑎(−𝑥) ≥ 𝑎(−𝑧), then interchanging the roles of 𝑥 and 𝑧 in (5.10), we have 𝑎(−𝑦) ≥ (1 − 𝛿 ′)𝑎(−𝑥) for 𝑥 > 𝑁. Since lim 𝑥→∞ 𝑎(−𝑦)/𝑎(−𝑥) = 1 uniformly for 𝑥 ≤ 𝑦 < 𝑥 + 𝑁 and 𝛿 ′ − 𝛿 can be made arbitrarily small, we conclude the first inequality of the lemma. □ By the subadditivity of 𝑎(−𝑥) the inequality of Lemma 5.1.5 entails that for some positive constant 𝐶 𝐶 −1 𝑎(−𝑥) ≤ 𝑎(−𝜆𝑥) ≤ (𝜆 + 𝐶)𝑎(−𝑥)
for 𝑥 > 0, 𝜆 ≥ 1.
(5.11)
By Lemma 5.1.3 it follows that if 𝑎(−𝑥)/𝑎(𝑥) → 0, then 𝑃 𝑥 [𝜎(−∞,−𝑅] < 𝜎0 ] ∼ 𝑃 𝑥 [𝜎−𝑅 < 𝜎0 ], in particular 𝑃0 [𝜎(−∞,−𝑅] < 𝜎0 ] ∼ 1/𝑎(𝑅).
(5.12)
5.2 Relative Stability of 𝑍 and Overshoots
79
This together with Lemma 5.1.5 yields Corollary 5.1.6 If 𝑚 + /𝑚 → 0, then both 𝑎(𝑥) and 𝑎(−𝑥) are asymptotically increasing. If 𝑎(−𝑥), 𝑥 > 0, is asymptotically increasing, then by the subadditivity of 𝑎(𝑥) it follows that uniformly for 𝑦 ≥ 0, as 𝑥 → ∞, 𝑜 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] )] + 𝑎(−𝑦) ≤ 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] − 𝑦)] − 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] )] ≤ 𝑎(−𝑦). Hence, by (2.15), 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] − 𝑦)] − 𝐸 𝑥 [𝑎(𝑆 𝜎 (−∞,0] )] = 𝑜(𝑎(𝑥)) + 𝑂 (𝑎(−𝑦)). Noting that 𝑔 (−∞,0] (𝑥, 𝑦) = 𝑔(𝑥, 𝑦) − 𝐸 𝑥 [𝑔(𝑆 𝜎 (−∞,0] , 𝑦)] and 𝑎(·) is subadditive, we have the second corollary of Lemma 5.1.5 Corollary 5.1.7 If 𝑎(−𝑥)/𝑎(𝑥) ¯ → 0 (𝑥 → ∞), then as 𝑥 → ∞, uniformly for 0 ≤ 𝑦 < 𝑀𝑥 (for each 𝑀 > 1), 𝑔 (−∞,0] (𝑥, 𝑦) = 𝑎(𝑥){1 + 𝑜(1)} − 𝑎(𝑥 − 𝑦), in particular 𝑔 (−∞,0] (𝑥, 𝑥) ∼ 𝑎(𝑥). Remark 5.1.8 If 𝜎 2 < ∞ (with 𝐸 𝑋 = 0), then uniformly for 𝑥 > 0, 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ∼ (2𝑅) −1 𝜎 2 𝑎 † (𝑥) + 𝑥 ∧ 1, ( (2𝑅) −1 𝜎 2 𝑎 † (−𝑥) − 𝑥 if 𝐸 [𝑋 3 ; 𝑋 > 0] < ∞, 𝑃−𝑥 [𝜎𝑅 < 𝜎0 ] ∼ Í otherwise 𝑅 −1 𝑅𝑦=0 𝑃−𝑥 [𝑆 𝜎 [0,∞) > 𝑦] as 𝑅 ∧ 𝑥 → ∞. The first equivalence (valid also for 𝑥 = 0) is immediate from (2.10) − 𝑎(𝑧 − 𝑅) ∼ 𝑅/𝜎 2 for 𝑧 > 𝑅 (𝑅 → ∞), and (5.3). For the second one, on noting 𝑎(𝑧) observe that for 𝑥 > 0, 𝑔(−𝑥, 𝑅) = 𝐸 −𝑥 𝑔(𝑆 𝜎 [0,∞) , 𝑅) can be written as 𝑅 ∑︁
h i 𝑦 2𝑅 + 𝑜(𝑅) 𝐻 −𝑥 ; [0,∞) (𝑦) 𝑎(𝑦) + 2 + 𝑜(𝑦) + 𝑃−𝑥 [𝑆 𝜎 [0,∞) > 𝑅] 𝜎 𝜎2 𝑦=0
Í −𝑥 2 2 then use the fact that ∞ 𝑦=0 𝐻 [0,∞) (𝑦) [𝑎(𝑦) + 𝑦/𝜎 ] = 𝑎(−𝑥) − 𝑥/𝜎 (𝑥 > 0) and sup 𝑥 𝐸 −𝑥 |𝑆 𝜎 [0,∞) | < ∞ ⇔ 𝐸 [𝑋 3 ; 𝑋 > 0] < ∞ [76, Eq(2.9), Corollary 2.1] for the first case; perform summation by parts for the second case.
5.2 Relative Stability of 𝒁 and Overshoots Here we are concerned with a sufficient condition for 𝑍 to be r.s. As stated in Section 2.4.2, this is related to the overshoot 𝑍 (𝑅) = 𝑆 𝜎(𝑅,∞) − 𝑅. By (2.25) and what is stated at the end of Section 2.4.2 we have 𝑍 is r.s. if and only if 𝑍 (𝑅)/𝑅 −→ 𝑃 0 ; and for this to be the case it is sufficient that 𝑋 is p.r.s.
(5.13)
5 Applications Under 𝑚+ /𝑚 → 0
80
By combining Theorem 3.1.6 and a known criterion for the positive relative stability of 𝑋 (see Section 2.4.1), we obtain a reasonably fine sufficient condition for 𝑍 to be r.s. For condition (C.I) in the following result, we neither assume 𝐸 |𝑋 | < ∞ nor the ∫𝑥 recurrence of 𝐹. Recall 𝐴(𝑥) = 0 {𝜇+ (𝑦) − 𝜇− (𝑦)} d𝑦 [as defined in (2.19)]. Proposition 5.2.1 For 𝑍 to be r.s. each of the following conditions is sufficient. (C.I) (C.II)
𝜇(𝑥) > 0 for all 𝑥 ≥ 0 and lim 𝐴(𝑥)/[𝑥𝜇(𝑥)] = ∞, 𝑥→∞
𝑥𝜂+ (𝑥) 𝐸 𝑋 = 0 and lim = 0. 𝑥→∞ 𝑚(𝑥)
Proof As mentioned in Section 2.4, condition (C.I) is equivalent to the positive relative stability of 𝑋, and hence it is a sufficient condition for the relative stability of 𝑍 in view of (5.13). As for (C.II), expressing 𝑃[𝑍 (𝑅) > 𝜀𝑅] as the infinite series ∑︁ 𝑔 (𝑅,∞) (0, 𝑅 − 𝑤)𝑃[𝑋 > 𝜀𝑅 + 𝑤], 𝑤≥0
one observes first that 𝑔 [𝑅,∞) (0, 𝑅 − 𝑤) < 𝑔 {0} (−𝑅, −𝑤) ≤ 𝑔 {0} (−𝑅, −𝑅) = 2𝑎(𝑅). ¯ Note that (C.II) entails (3.8) and accordingly (H) holds so that 𝑎(𝑥) ¯ ≍ 𝑥/𝑚(𝑥) by Theorems 3.1.1 and 3.1.2. Hence for any 𝜀 > 0 𝑃0 [𝑍 (𝑅) > 𝜀𝑅] ≤ 2𝑎(𝑅) ¯
∑︁
𝑃[𝑋 > 𝜀𝑅 + 𝑤] ≍
𝑅𝜂+ (𝜀𝑅) 𝑅𝜂+ (𝜀𝑅) ≤ . 𝑚(𝑅) 𝑚(𝜀𝑅)
𝑤≥0
Thus (C.II) implies 𝑍 (𝑅)/𝑅 → 𝑃 0, concluding the proof in view of (5.13) again. □ Remark 5.2.2 (a) Condition (C.I) is stronger than (2.23), namely lim 𝐴(𝑥)/𝑥𝜇− (𝑥) = ∞, which is necessary and sufficient in order that 𝑃[𝑆 𝑛 > 0] → 1 according to [46], so that (C.I) is only of relevance in such a case. It is also pointed out that if 𝑆 is recurrent and lim sup 𝑎(−𝑥)/𝑎(𝑥) < 1, then the sufficiency of (C.I) for 𝑍 to be r.s. follows from the trivial bound 𝑔−𝛺 (−𝑅, 𝑥) ≤ 2𝑎(𝑅) ¯ (which entails 𝑃0 [𝑍 (𝑅) > 𝜀𝑅] ≤ 2𝑅 𝑎(𝑅)𝜇(𝜀𝑅)), ¯ since 𝑎(𝑥) − 𝑎(−𝑥) ∼ 1/𝐴(𝑥) and 𝐴 is s.v. under (C.I) (cf. Theorem 4.1.1). (b) Condition (C.II) is satisfied if 𝑚 + /𝑚 → 0. The converse is of course not true – (C.II) may be fulfilled even if 𝑚 + /𝑚 → 1 – and it seems hard to find any simpler substitute for (C.II). Under the restriction 𝑚 + (𝑥) ≍ 𝑚(𝑥), however, (C.II) holds if and only if 𝑚 + is s.v. (which is the case if 𝑥 2 𝜇+ (𝑥) ≍ 𝐿(𝑥) with an s.v. 𝐿). (c) If 𝐹 belongs to the domain of attraction of a stable law and Spitzer’s condition1 holds, then that either (C.I) or (C.II) holds is also necessary for 𝑍 to be r.s., as will be discussed at the end of this section.
Í 1 The condition that there exists 𝜌 = lim 𝑛−1 𝑛𝑘=1 𝑃0 [𝑆𝑘 > 0] is called Spitzer’s condition. It is equivalent to 𝜌 = lim 𝑃 [𝑆𝑛 > 0], according to [6].
5.2 Relative Stability of 𝑍 and Overshoots
81
(d) If 𝜇+ (𝑥) is positive for all 𝑥 > 0 and of dominated variation, then (∗)
𝑍 is r.s. ⇐⇒ lim 𝑈a (𝑅)𝐸 [𝑉d (𝑋); 𝑋 ≥ 𝑅] = 0. Í For the proof, splitting the sum ∞ 𝑦=0 𝑔−Ω (−𝑅, −𝑦)𝜇+ (𝑦 + 𝜀𝑅) at 𝑦 = 2𝑅 and using the dominated variation of 𝜇+ (with the help of (6.16)), one infers that for each 𝜀 > 0 𝑃0 [𝑍 (𝑅) > 𝜀𝑅] ≍ 𝑈a (𝑅)𝑉d (𝑅)𝜇+ (𝑅) + 𝑈a (𝑅)
∞ ∑︁
𝑣d (𝑦)𝜇+ (𝑦),
𝑦=𝑅
Í of which the RHS equals 𝑈a (𝑅) ∞ 𝑦=𝑅 𝑉d (𝑦) 𝑝(𝑦){1 + 𝑜(1)}. Hence, (∗) follows. (e) The situation is simplified if we are concerned with an (oscillatory) Lévy process 𝑌 (𝑡). According to [24] (cf. also [22]) 𝑍 𝑌𝑅 /𝑅 → 0 in probability if and only if the law of 𝑍 𝑌𝑅 converges to a proper probability law, or, what ∫ ∞amounts to the same, the ascending ladder height has a finite mean, which imposes 1 𝜇𝑌+ (𝑥)𝑥 d𝑥/𝑚𝑌 (𝑥) < ∞ (see (d)), a more restrictive condition than that expected by considering the r.w. ∞ , a remarkable distinction from general r.w.’s. Here 𝑚𝑌 (𝑥) and 𝜇𝑌 (𝑥) (𝑌 (𝑛)) 𝑛=0 + ∫𝑥 ∫∞ stand for 0 d𝑦 𝑦 𝑃[|𝑌 (1)| > 𝑡] d𝑡 < ∞ and 𝑃[𝑌 (1) > 𝑥], respectively. The following result, used in the next section, concerns an overshoot estimate for the r.w. 𝑆 conditioned to avoid the origin. Put 𝑎¯ † (𝑥) :=
1 † 𝑎 (𝑥) + 𝑎(−𝑥) . 2
Lemma 5.2.3 (i) If 𝐸 𝑋 = 0, then for 𝑧 ≥ 0 and 𝑥 ∈ Z, 𝑃 𝑥 𝑍 (𝑅) > 𝑧, 𝜎[𝑅,∞) < 𝜎0 ≤ 2𝑎¯ † (𝑥)𝜂+ (𝑧).
(5.14)
(ii) If 𝑚 + /𝑚 → 0, then for each 𝜀 > 0, uniformly for 0 ≤ 𝑥 ≤ 𝑅, 𝑃 𝑥 𝑍 (𝑅) ≥ 𝜀𝑅 𝜎[𝑅,∞) < 𝜎0 → 0 (𝑅 → ∞).
(5.15)
[See Lemma 5.2.4, (2.28), Proposition 7.6.4 for (5.15) with 𝜎0 replaced by 𝑇 = 𝜎(−∞,0) in the conditioning.] By the trivial inequality 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎0 ] ≥ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ], (5.14) implies 𝑃 𝑥 𝑍 (𝑅) > 𝑧 𝜎[𝑅,∞) < 𝜎0 ≤ 2𝑎¯ † (𝑥)𝜂+ (𝑧)/𝑃 𝑥 [𝜎𝑅 < 𝜎0 ]. (5.16) Proof Put
𝑟 (𝑧) = 𝑃 𝑥 𝑍 (𝑅) > 𝑧, 𝜎[𝑅,∞) < 𝜎0 .
Plainly 𝑔 {0}∪[𝑅,∞) (𝑥, 𝑧) ≤ 𝑔 {0} (𝑥, 𝑧) ≤ 2𝑎¯ † (𝑥). Hence ∑︁ ∑︁ 𝑟 (𝑧) = 𝑔 {0}∪[𝑅,∞) (𝑥, 𝑅 − 𝑤)𝑃[𝑋 > 𝑧 + 𝑤] ≤ 2𝑎¯ † (𝑥) 𝑃[𝑋 > 𝑧 + 𝑤]. 𝑤>0
𝑤>0
5 Applications Under 𝑚+ /𝑚 → 0
82
Í Since 𝑤>0 𝑃[𝑋 > 𝑧 + 𝑤] = 𝜂+ (𝑧) it therefore follows that 𝑟 (𝑧) ≤ 2𝑎¯ † (𝑥)𝜂+ (𝑧). Suppose 𝑚 + /𝑚 → 0. Then 𝑎(𝑥) ¯ ≍ 𝑥/𝑚(𝑥) and 𝑎(−𝑥)/𝑎(𝑥) → 0 (𝑥 → ∞) owing to Theorem 3.1.2 and Theorem 3.1.6, respectively. We can accordingly apply Lemma 5.1.3 to see that 𝑃 𝑥 [𝜎 𝑅 < 𝜎0 ] = 𝑎¯ † (𝑥)/𝑎(𝑅){1 ¯ for 0 ≤ 𝑥 < 𝑅. + 𝑜(1)} uniformly + (𝑧){1 + 𝑜(1)} on the one Hence, by (5.16), 𝑃 𝑥 𝑍 (𝑅) > 𝑧 𝜎[𝑅,∞) < 𝜎0 ≤ 2𝑎(𝑅)𝜂 ¯ hand. On the other hand for 𝑧 > 0, recalling 𝜂+ (𝑧) < 𝑚 + (𝑧)/𝑧 we deduce that ¯ 𝑎(𝑅)𝜂 + (𝑧) ≤ 𝐶
𝑚 + (𝑧)𝑅 , 𝑚(𝑅)𝑧
(5.17) □
of which the RHS with 𝑧 = 𝜀𝑅 tends to zero. Thus (5.15) follows.
The following result – used only for the proof of Proposition 8.6.4(i) – is placed here, although, in its proof, we use some results from the next section and chapter. Lemma 5.2.4 Let Λ𝑅 = {𝜎[𝑅+1,∞) < 𝜎𝛺 }. Each of the following (a) 𝑚 + /𝑚 → 0; (b) 𝑆 is p.r.s.; (c) 𝑆 is attracted to the standard normal law, implies that for each 𝜀 > 0, 𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅 | Λ𝑅 ] → 0
uniformly for 0 ≤ 𝑥 ≤ 𝑅.
Proof If 𝑈a is regularly varying, then by Lemma 6.2.1 there exists a positive constant 𝑐 0 such that for 0 ≤ 𝑥 ≤ 𝑦, 𝑐 0 𝑉d (𝑥/2)𝑈a (𝑦) ≤
𝑦 ∑︁
𝑔𝛺 (𝑥, 𝑧) ≤ 𝑉d (𝑥)𝑈a (𝑦).
(5.18)
𝑧=0
Under either of (a) to (c), 𝑈a (𝑥) ∼ 𝑥/ℓ ∗ (𝑥)) in view of Proposition 5.2.1 and 𝑃 𝑥 (Λ𝑅 ) ∼ 𝑉d (𝑥)/𝑉d (𝑅) according to Proposition 5.3.2 (for (a)) and Theorem 6.1.1 (for (b), (c)) (a p.r.s. 𝑆 satisfies (C3) as noted in Remark 6.1.2); see also Remark 6.3.7. In particular (5.18) is applicable, and it easily follows that 𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅, Λ𝑅 ] ≤
𝑅 ∑︁
𝑔𝛺 (𝑥, 𝑅 − 𝑤)𝜇+ (𝑤 + 𝜀𝑅) ≤ 𝑉d (𝑥)𝑈a (𝑅)𝜇+ (𝜀𝑅).
𝑤=0
If (b) or (c) holds, then Lemmas 6.3.4 and 6.5.1(i) imply 𝑉d (𝑅)𝑈a (𝑅)𝜇(𝜀𝑅) → 0 and the result follows. By the left-hand inequality of (5.18) we see that 𝑉d (𝑅/2)𝑈a (𝑅) < 2𝑅 𝑎(𝑅/2)/𝑐 ¯ 0. If 𝑚 + /𝑚 → 0, then 𝑎(𝑥) ¯ ≤ 𝐶1 𝑥/𝑚(𝑥), and noting 𝑥 2 𝜇+ (𝑥) < 2𝑐 + (𝑥) we see that −2 𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅 | Λ𝑅 ] ≤ 𝐶2 𝑅 𝑎(𝑅)𝜇 ¯ + (𝜀𝑅) ≤ 𝐶𝜀 𝑐 + (𝜀𝑅)/𝑚(𝑅) → 0.
Thus the lemma is verified.
□
5.2 Relative Stability of 𝑍 and Overshoots
83
We shall need an estimate of overshoots as the r.w. exits from the half-line (−∞, −𝑅] after having entering into it. Put (5.19) 𝜏(𝑅) = inf 𝑛 > 𝜎(−∞,−𝑅] : 𝑆 𝑛 ∉ (−∞, −𝑅] , the first time when the r.w. exits from (−∞, −𝑅] after entering it. Lemma 5.2.5 Suppose 𝑚 + /𝑚 → 0. Then for each constant 𝜀 > 0, uniformly for 𝑥 > −𝑅 satisfying 𝑃 𝑥 [𝜎(−∞,−𝑅] < 𝜎0 ] ≥ 𝜀 𝑎¯ † (𝑥)/𝑎(𝑅), ¯ as 𝑅 → ∞, 𝑃 𝑥 𝑆 𝜏 (𝑅) > −𝑅 + 𝜀𝑅 𝜎(−∞,−𝑅] < 𝜎0 → 0. Proof Denoting by E 𝑅 the event {𝜎(−∞,−𝑅] < 𝜎0 } we write down 𝑃 𝑥 𝑆 𝜏 (𝑅) > −𝑅 + 𝜀𝑅 E 𝑅 ∑︁ 𝑃 𝑥 𝑆 𝜎 (−∞,−𝑅] = 𝑤 E 𝑅 𝑃𝑤 𝑆 𝜎 (−𝑅, ∞) > −𝑅 + 𝜀𝑅 . =
(5.20)
𝑤≤−𝑅
¯ by Lemma 5.2.3(i) (applied to −𝑆 · ), If 𝑃 𝑥 [E 𝑅 ] ≥ 𝜀 𝑎¯ † (𝑥)/𝑎(𝑅), 𝑃 𝑥 𝑆 𝜎 (−∞,−𝑅] < −𝑅 − 𝑧 E 𝑅 ≤ 2𝜀 −1 𝑚(𝑧) 𝑎(𝑅)/𝑧. ¯ Given 𝛿 > 0 (small enough) we define 𝜁 = 𝜁 (𝛿, 𝑅) (> 𝑅) by the equation 𝑚(𝜁)𝑅 =𝛿 𝑚(𝑅)𝜁 (uniquely determined since 𝑥/𝑚(𝑥) is increasing), so that by 2𝑎(𝑅) ¯ < 𝐶 𝑅/𝑚(𝑅) 𝑃 𝑥 𝑆 𝜎 (−∞,−𝑅] < −𝑅 − 𝜁 E 𝑅 ≤ 2𝜀 −1 𝑚(𝜁) 𝑎(𝑅)/𝜁 ¯ ≤ (𝐶𝜀 −1 )𝛿. For −𝑅 − 𝜁 ≤ 𝑤 ≤ −𝑅, ∑︁
𝑃𝑤 𝑆 𝜎 (−𝑅,∞) > −𝑅 + 𝜀𝑅 =
∑︁
𝑔 (−𝑅, ∞) (𝑤, 𝑧) 𝑝(𝑦 − 𝑧)
𝑦>−𝑅+𝜀𝑅 𝑧 ≤−𝑅
∑︁ ∑︁
=
𝑔 [1,∞) (𝑤 + 𝑅, 𝑧) 𝑝(𝑦 − 𝑧)
𝑦> 𝜀𝑅 𝑧 ≤0
≤ 𝐶1 𝑎(𝜁)𝜂 ¯ + (𝜀𝑅) ≤ 𝛿−1 𝐶 ′ 𝑅𝜂+ (𝜀𝑅)/𝑚(𝑅), where the first inequality follows from 𝑔 [1,∞) (𝑤 + 𝑅, 𝑧) ≤ 𝑔 {1} (𝑤 + 𝑅, 𝑧) ≤ 𝑎(𝑤 ¯ + 𝑅 − 1) and the second from 𝑎(𝜁) ¯ ≤ 𝐶2 𝜁/𝑚(𝜁) and the definition of 𝜁. Now, returning to
5 Applications Under 𝑚+ /𝑚 → 0
84
(5.20) we apply the bounds derived above to see that 𝑃 𝑥 𝑆 𝜏 (𝑅) > −𝑅 + 𝜀𝑅 𝜎(−∞,−𝑅] < 𝜎0 ≤ (𝐶𝜀 −1 )𝛿 + 𝛿−1 𝐶 ′ 𝑅𝜂+ (𝜀𝑅)/𝑚(𝑅). Since 𝑅𝜂+ (𝜀𝑅)/𝑚(𝑅) → 0 and 𝛿 > 0 may be arbitrarily small, this concludes the proof. □ If E is an event depending only on {𝑆 𝑛 , 𝑛 ≥ 𝜏(𝑅)}, then 𝑃 𝑥 [E, 𝜎0 < 𝜎(−∞,−𝑅] ] = 𝑃0 (E)𝑃 𝑥 [𝜎0 < 𝜎(−∞,−𝑅] ]. Putting 𝑥 = 0 in this identity and subtracting both sides from 𝑃0 [E] one obtains 𝑃0 [E | 𝜎0 > 𝜎(−∞,−𝑅] ] = 𝑃0 (E); in particular, 𝑃0 𝑆 𝜏 (𝑅) = 𝑥 = 𝑃0 𝑆 𝜏 (𝑅) = 𝑥 𝜎(−∞,−𝑅] < 𝜎0 . (5.21) Since 𝑃0 [𝜎(−∞, −𝑅] < 𝜎0 ] > 𝑃0 [𝜎−𝑅 < 𝜎0 ] =
1 † 𝑎 (0)/𝑎(𝑅), ¯ 2
Lemma 5.2.5 yields the following Corollary 5.2.6 Suppose 𝑚 + /𝑚 → 0. Then for any 𝜀 > 0, as 𝑅 → ∞, 𝑃0 [𝑆 𝜏 (𝑅) > −𝑅 + 𝜀𝑅] → 0.
Overshoots for 𝐹 in the domain of attraction We suppose that (4.14) holds and examine the behaviour of the overshoot 𝑍 (𝑅). We neither assume the condition 𝐸 |𝑋 | < ∞ nor the recurrence of the r.w., but assume that if 𝐸 |𝑋 | = ∞, then 𝜎[1,∞) < ∞ a.s.(𝑃0 ),2 which, according to Erickson [29, Corollary 2]), is equivalent to ∫ ∞ 𝑥 d𝐹 (𝑥) ∫𝑥 = ∞. (5.22) 1 + 0 𝜇− (𝑡) d𝑡 0 We shall observe that under (4.14), the sufficient condition of Proposition 5.2.1 is also necessary for 𝑍 to be r.s. Put 𝜌 = 𝑃𝑌0 [𝑌 > 0], which, if 𝛼 ≠ 1, 𝑃0 [𝑆 𝑛 > 0] approaches as 𝑛 → ∞. Then we have the following: (i) If either 1 < 𝛼 < 2 and 𝑝 = 0 or 𝛼 = 2, then 𝑥𝜂+ (𝑥) = 𝑜(𝑚(𝑥)), so that 𝑍 (𝑅)/𝑅 → 𝑃 0 as 𝑅 → ∞ according to Proposition 5.2.1. (In this case, we have 𝛼𝜌 = 1, and the same result also follows from Theorems A.2.3 and A.3.2.) (ii) Let 𝛼 ∈ (0, 1)∪(1, 2) and 𝑝 > 0. Then 0 < 𝛼𝜌 < 1, which implies that 𝑃[𝑍 > 𝑥] is regularly varying of index 𝛼𝜌 and the distribution of 𝑍 (𝑅)/𝑅 converges weakly to the probability law determined by the density 𝐶 𝛼𝜌 /𝑥 𝛼𝜌 (1 + 𝑥), 𝑥 > 0 (see Theorems A.2.3 and A.3.2, [31, Theorem XIV.3]), so that 𝑍 is not r.s.
2 This condition, becoming relevant only in (iv) below, is automatic under our assumption of oscillation, which is not needed in (iii).
5.3 The Two-Sided Exit Problem Under 𝑚+ /𝑚 → 0
85
(iii) Let 𝛼 = 1. If 𝑝 = 1/2, we also suppose that Spitzer’s condition is satisfied, or what amounts to the same thing, there exists lim 𝑃0 [𝑆 𝑛 > 0] = 𝑟.
(5.23)
For 𝑝 ≠ 1/2, (5.23) holds with 𝑟 ∈ {0, 1}: 𝑟 = 1 if either 𝑝 < 1/2 and 𝐸 𝑋 = 0 or 𝑝 > 1/2 and 𝐸 |𝑋 | = ∞; 𝑟 = 0 in the other case of 𝑝 ≠ 1/2.3 It follows that 𝑟 = 1 if and only if (C.I) of Proposition 5.2.1 holds, and if this is the case, 𝑍 is r.s., so that 𝑍 (𝑅)/𝑅 → 𝑃 0. If 𝑝 = 1/2, 𝑟 may take all values from [0, 1] and if 0 < 𝑟 < 1, then the same convergence of 𝑍 (𝑅)/𝑅 as mentioned in (ii) holds. These together entail that 𝑍 is r.s. if and only if 𝑟 = 1 (if 𝛼 = 1). (iv) Let 𝛼 ≤ 1 and suppose that 𝑟 = 0 if 𝛼 = 1 and 𝑝 = 0 if 𝛼 < 1. Then, according to [81] (see the table in Section 2.6),4 𝑃[𝑍 > 𝑥] is s.v. at infinity, which is equivalent to 𝑍 (𝑅)/𝑅 −→ 𝑃 ∞. Thus, if 𝛼 > 1, condition (C.II) (which then becomes equivalent to 𝛼𝜌 = 1) works as a criterion of whether 𝑍 is r.s., while if 𝛼 = 1, 𝑍 can be r.s. under 𝑥𝜂+ (𝑥) ≍ 𝑚(𝑥) (so that (C.II) does not hold) and condition (C.I) must be employed for the criterion. Since 𝑍 is not r.s. if either 𝑟 < 1 = 𝛼 or 𝛼𝜌 < 1 ≠ 𝛼, the validity of (C.I) or (C.II) is necessary and sufficient for 𝑍 to be r.s.
5.3 The Two-Sided Exit Problem Under 𝒎 + /𝒎 → 0 Put 𝑘 := sup 𝑥 ≥1
𝑎(−𝑥) , 𝑎(𝑥)
with the understanding that 𝑘 = ∞ if the r.w. is left-continuous. The following bounds that play a crucial role in this section are taken from [84, Lemma 3.4]: 0 ≤ 𝑔 {0} (𝑥, 𝑦) − 𝑔 (−∞,0] (𝑥, 𝑦) ≤ (1 + 𝑘)𝑎(−𝑦)
(𝑥, 𝑦 ∈ Z).
(5.24)
For 0 ≤ 𝑥 ≤ 𝑦 we shall apply (5.24) in the slightly weaker form − 𝑎(−𝑥) ≤ 𝑎 † (𝑥) − 𝑔 (−∞,0] (𝑥, 𝑦) ≤ (1 + 𝑘)𝑎(−𝑦) + 𝑘𝑎(−𝑥),
(5.25)
where the subadditivity 𝑎(−𝑦) − 𝑎(𝑥 − 𝑦) ≤ 𝑎(−𝑥) is used for the lower bound and the inequalities 𝑎(𝑥 − 𝑦) − 𝑎(−𝑦) ≤ [𝑎(𝑥 − 𝑦)/𝑎(𝑦 − 𝑥)]𝑎(−𝑥) ≤ 𝑘𝑎(−𝑥) for the upper bound.
3 This is easily deduced by examining (C.I) (entailing 𝑟 = 1; see the comment given after (2.21)). 4 See also Remark 6.1.2 (𝛼 = 1) and Lemma 6.3.1 (𝛼 < 1) of Chapter 6 for another argument and related results.
5 Applications Under 𝑚+ /𝑚 → 0
86
Recall 𝑢 a (0) = 1, 𝑢 a (𝑥) = 𝑈a (𝑥) − 𝑈a (𝑥 − 1) (𝑥 ≥ 1), 𝑣◦ = 𝑣d (0) = 𝑉d (0) and ∫ 𝑥 ∫ 𝑥 1 ℓ ∗ (𝑥) = 𝑃[𝑍 > 𝑡] d𝑡, ℓˆ∗ (𝑥) = ◦ 𝑃[− 𝑍ˆ > 𝑡] d𝑡. (5.26) 𝑣 0 0 In the rest of this section we assume 𝑍 is r.s.
and
𝑎(−𝑥) →0 𝑎(𝑥) ¯
(𝑥 → ∞).
(5.27)
By Theorem 3.1.6 and Proposition 5.2.1 this condition holds if 𝑚 + /𝑚 → 0. The first half of (5.27) is equivalent to the slow variation of ℓ ∗ 5 ([31, Theorem VII.7.3], Theorem A.2.3), and if it is the case, it follows (Section A.2.1) that 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥).
(5.28)
Lemma 5.3.1 Suppose (5.27) holds. Then ℓ ∗ is s.v., and the following hold (i) 𝑎(𝑥) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑥) as 𝑥 → ∞;6 (ii) uniformly for 1 ≤ 𝑥 ≤ 𝑦, as 𝑦 → ∞, 𝑔 (−∞,0] (𝑥, 𝑦) ∼ 𝑉d (𝑥 − 1)/ℓ ∗ (𝑦);
(5.29)
(iii) for each 𝑀 > 1, as 𝑥 → ∞,7 𝑔 (−∞,0] (𝑥, 𝑦)/𝑎(𝑥) → 1 uniformly for 𝑥 ≤ 𝑦 ≤ 𝑀𝑥.
(5.30)
(iv) For 𝑥 ≤ 𝑦, ℓ ∗ (𝑥)/ℓ ∗ (𝑦) → 1 as 𝑥 → ∞ along with 𝑎(−𝑦)/𝑎(𝑥) → 0. Proof Under the second condition of (5.27), which allows us to apply the right-hand inequality of (5.11), (5.30) follows immediately from (5.25). Let 𝑍 be r.s., so that ℓ ∗ varies slowly. We write Spitzer’s formula as 𝑔 (−∞,0] (𝑥, 𝑦) =
𝑥−1 ∑︁
𝑣d (𝑘)𝑢 a (𝑦 − 𝑥 + 𝑘)
(1 ≤ 𝑥 ≤ 𝑦),
(5.31)
𝑘=0
(𝑣d (𝑥) = 𝑉d (𝑥) − 𝑉d (𝑥 − 1), 𝑥 ≥ 1) and owing to (5.28) we see 𝑔 (−∞,0] (𝑥, 2𝑥) ∼ 𝑉d (𝑥 − 1)/ℓ ∗ (𝑥).
(5.32)
5 That ℓ ∗ is s.v. (equivalently, 𝑥 𝑃∫[𝑍 > 𝑥 ] = 𝑜 (ℓ ∗ ( 𝑥))) implies 𝑓 ( 𝑥) := 𝐸 [𝑍; 𝑍 ≤ 𝑥 ] is ∞ s.v.; the converse is also true: since 0 e−𝜆𝑥 d 𝑓 ( 𝑥) = (𝐸 [e−𝜆𝑍 ]) ′ , the slow variation of 𝑓 implies ∫𝜆 ∫∞ 1− 𝐸 [e−𝜆𝑍 ] = 0 𝑠 d𝑠 0 e−𝑠𝑥 𝑓 ( 𝑥) d𝑥 ∼ 𝜆 𝑓 (1/𝜆) (𝜆 ↓ 0), so that ℓ ∗ ( 𝑥) ∼ 𝑓 ( 𝑥) by Karamata’s Tauberian Theorem. 6 See Remarks 6.4.6 and 7.1.2(b,d) for more about the behaviour of 𝑎 ( 𝑥)ℓ ∗ ( 𝑥)/𝑉d ( 𝑥). 7 In fact, it suffices to assume 𝑎 (−𝑀 𝑥)/𝑎 ( 𝑥) → 0 (even if 𝑀 → ∞) instead of 𝑥 → ∞.
5.3 The Two-Sided Exit Problem Under 𝑚+ /𝑚 → 0
87
If 𝑎(−𝑥)/𝑎(𝑥) → 0, by (5.30) we also have 𝑔 (−∞,0] (𝑥, 2𝑥) ∼ 𝑎(𝑥), whence the equivalence relation of (i) follows. (ii) follows from (5.31) and the slow variation of 𝑢 a for 1 ≤ 𝑥 < 𝑦/2, and from (5.30) in conjunction with (i) and the bound 𝑔 (−∞,0] (𝑥, 𝑦) ≤ 𝑔 (−∞,0] (𝑦, 𝑦) for 𝑦/2 ≤ 𝑥 ≤ 𝑦. In view of (5.25), (iv) follows from (i) and (ii). □ Define a function 𝑓𝑟 on Z by 𝑓𝑟 (𝑥) =
Í∞
𝑦=1 𝑉d (𝑦
𝑓𝑟 (𝑥) = 𝑉d (𝑥 − 1),
− 1) 𝑝(𝑦 − 𝑥) for 𝑥 ≤ 0 and
𝑥 ≥ 1.
(5.33)
Then 𝑓𝑟 (𝑥) = 𝑃[ 𝑍ˆ < 𝑥] (𝑥 ≤ 0) (cf. [84, Eq(2.3)]) and Í 𝑓𝑟 (𝑥) is a harmonic function of the r.w. killed as it enters (−∞, 0], in the sense that 𝑦 ≥1 𝑓𝑟 (𝑦) 𝑝(𝑦 − 𝑥) = 𝑓𝑟 (𝑥) (𝑥 ≥ 1). For each 𝑥 ∈ Z, 𝑀𝑛 := 𝑓𝑟 (𝑆 𝑛 )1(𝑛 < 𝜎(−∞,0] ) is a martingale under 𝑃 𝑥 , so that by optional stopping theorem 𝑓𝑟 (𝑥) ≥ lim inf 𝐸 𝑥 𝑀𝑛∧𝜎[𝑅,∞) ≥ 𝑓𝑟 (𝑅)𝑃 𝑥 𝜎[𝑅,∞) < 𝜎(−∞,0] . 𝑛→∞
Hence 𝑓𝑟 (𝑥)/ 𝑓𝑟 (𝑅) ≥ 𝑃 𝑥 𝜎[𝑅,∞) < 𝜎(−∞,0] ≥ 𝑃 𝑥 𝜎𝑅 < 𝜎(−∞,0] .
(5.34)
The last probability equals 𝑔 (−∞,0] (𝑥, 𝑅)/𝑔 (−∞,0] (𝑅, 𝑅) (𝑥 ≠ 𝑅), which is asymptotically equivalent to 𝑓𝑟 (𝑥)/ 𝑓𝑟 (𝑅) because of Lemma 5.3.1(ii) (note that (5.29) extends to 𝑥 ≤ 0 if 𝑓𝑟 (𝑥) replaces 𝑉 (𝑥 − 1)). This leads to the following Proposition 5.3.2 If (5.27) holds, then uniformly for 𝑥 ≤ 𝑅, as 𝑅 → ∞, 𝑃 𝑥 𝜎𝑅 < 𝜎(−∞,0] ∼ 𝑃 𝑥 𝜎[𝑅,∞) < 𝜎(−∞,0] ∼ 𝑓𝑟 (𝑥)/ 𝑓𝑟 (𝑅),
(5.35)
and for 𝑥 = 0, in particular, 𝑃0 (Λ𝑅 ) ∼ 1/𝑉d (𝑅). Remark 5.3.3 If (5.27) holds, then uniformly for 1 ≤ 𝑥 ≤ 𝑅, as 𝑅 → ∞, 𝑓𝑟 (𝑥)/ 𝑓𝑟 (𝑅) 1 −→ 0 𝑎(𝑥)/𝑎(𝑅)
as ℓ ∗ (𝑥)/ℓ ∗ (𝑅) → 1, as ℓ ∗ (𝑥)/ℓ ∗ (𝑅) → 0,
so that both probabilities in (5.35) are asymptotically equivalent to, or of a smaller order of magnitude than, 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] according as ℓ ∗ (𝑥)/ℓ ∗ (𝑅) tends to 1 or 0, because of Lemma 5.1.3. If 𝑚 + /𝑚 → 0 (entailing (5.27)), then by Proposition 3.1.5 it especially follows that as 𝑥/𝑅 → 1, 𝑉d (𝑥)/𝑉d (𝑅) → 1 and 𝑃 𝑥 [𝜎𝑅 < 𝜎(−∞,0] ] → 1.
5 Applications Under 𝑚+ /𝑚 → 0
88
5.4 Spitzer’s Condition and the Regular Variation of 𝑽d Combined with [77, Theorem 1.1], formula (5.1) will lead to the following result. Theorem 5.4.1 Suppose 𝜎 2 = ∞ and 𝑚 + /𝑚 → 0. Then ∫𝑥 (i) ℓ ∗ (𝑥) := 0 𝑃[𝑍 > 𝑡] d𝑡 is s.v. and 𝑎(𝑥) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑥) (𝑥 → ∞); (ii) for a constant 𝛼 ≥ 1 the following (a) to (e) are equivalent (a) 𝑃0 [𝑆 𝑛 > 0] → 1/𝛼, (b) 𝑚(𝑥) is regularly varying with index 2 − 𝛼, (c) 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎(−∞,0] ] → 𝜉 𝛼−1 as 𝑥/𝑅 → 𝜉 for each 0 < 𝜉 < 1, (d) 𝑉d (𝑥) is regularly varying with index 𝛼 − 1, (e) 𝑎(𝑥) is regularly varying with index 𝛼 − 1, and each of (a) to (e) implies that 𝑎(𝑥) ∼ 𝐶 𝛼 𝑥/𝑚(𝑥),
where 𝐶 𝛼 = 1/Γ(3 − 𝛼)Γ(𝛼).
(5.36)
[If 𝜎 2 < ∞, then all the statements (a) to (e) above are valid with 𝛼 = 2, and (i) and (5.36) are also valid but with 𝑎(𝑥) replaced by 2𝑎(𝑥) since lim 𝑚(𝑥) = 21 𝜎 2 .] It is unclear how conditions (a) to (e) are related when the condition 𝑚 + /𝑚 → 0 fails. For 𝛼 > 1, however, this condition seems to be a reasonable restriction – if no additional one on the tails of 𝐹 is supposed – for describing the relation of the quantities of interest. The situation is different for 𝛼 = 1, where (a) and (c) are equivalent to each other quite generally, and under the condition lim inf 𝜇− (𝑥)/𝜇(𝑥) > 0, (c) implies that 𝐹 is r.s. – hence the other conditions – owing to what is stated at (2.23) (cf. Theorems 4.1.1 and A.4.1). Corollary 5.4.2 (i) Suppose 𝑚 + /𝑚 → 0. Then 𝐹 is attracted to the normal law if and only if each of conditions (a) to (e) of Theorem 5.4.1 holds with 𝛼 = 2. (ii) Suppose 𝜇+ /𝜇 → 0 and 1 < 𝛼 < 2. Then 𝐹 belongs to the domain of attraction of a stable law of exponent 𝛼 if and only if each of (a) to (e) holds. This corollary follows immediately from Theorem 5.4.1 because of the well-known characterisation theorem for the domain of attraction (for (i) see Lemma A.1.2 in Section A.1.1). The condition lim 𝑃0 [𝑆 𝑛 > 0] = 𝜌 – apparently stronger than Spitzer’s condition Í 𝑛−1 𝑛𝑘=1 𝑃0 [𝑆 𝑘 > 0] → 𝜌 but equivalent to it (cf. [22], [6]) – plays an essential role in the study of fluctuations of r.w.’s. The condition holds if 𝐹 belongs to the domain of attraction of a stable law of exponent 𝛼 ≠ 1 (for 𝛼 = 1, see the comments after (AS) given in Section 2.3). It is of great interest to find a condition for the reverse implication to be true. Doney [20] and Emery [27] observed that for 𝛼 > 1, Spitzer’s condition implies that 𝐹 is attracted to a stable law if one tail outweighs the other overwhelmingly – conditions much stronger than 𝜇+ /𝜇 → 0. (ii) of Corollary 5.4.2 sharpens their results. For 𝛼 = 1, however, the assertion corresponding to (ii) fails since the slow variation of 𝜂− does not imply the regular variation of 𝜇− .
5.4 Spitzer’s Condition and the Regular Variation of 𝑉d
89
Remark 5.4.3 In Theorem 5.4.1 the assertion that (a) is equivalent to (b) under 𝑚 + /𝑚 → 0 is essentially Theorem 1.1 of [77] (see (5.37) below), so that Corollary 5.4.2 restricted to conditions (a) and (b) (with “each of (a) to (e)” replaced by “each of (a) and (b)” in the statements) should be considered as its corollary, though not stated in [77] as such. The equivalence of (c) and (d) and that of (d) and (e) follow from (5.1) and Lemma 5.3.1 (i), respectively (recall 𝑍 is r.s. under 𝑚 + /𝑚 → 0 by Proposition 5.2.1). The equivalence of (a) and (c) is valid if 𝛼 = 1, as mentioned above. Therefore the essential content supplemented by Theorem 5.4.1 is that (a) and (b) are equivalent to the conditions (c) to (e) in the case 𝛼 > 1. According to [77, Theorem1.1] it follows under 𝑚 + /𝑚 → 0 that for (a) in (ii) of Theorem to hold, i.e., lim 𝑃0 [𝑆 𝑛 > 0] → 1/𝛼, it is necessary and sufficient that 1 ≤ 𝛼 ≤ 2 and for some s.v. function 𝐿 (𝑥) at infinity, ∫0
𝑡 2 d𝐹 (𝑡) ∼ 2𝐿(𝑥) 𝜇− (𝑥) ∼ (𝛼 − 1) (2 − 𝛼)𝑥 −𝛼 𝐿(𝑥) ∫ −𝑥 (−𝑡) d𝐹 (𝑡) ∼ 𝐿 (𝑥) −∞
if 𝛼 = 2, if 1 < 𝛼 < 2,
−𝑥
if
(5.37)
𝛼 = 1.
Here we have chosen the coefficients – differently from [77] – so that the above condition is expressed for all 1 ≤ 𝛼 ≤ 2 by the single formula 𝑚 − (𝑥) ∼ 𝑥 2−𝛼 𝐿 (𝑥).8
(5.38)
This, in particular, shows that (a) and (b) are equivalent. Here we provide its proof adapted from [77]. Proof (of (a) ⇔ (b)) This equivalence is well-established for right-continuous r.w.’s (cf., e.g, [18], [8, Proposition 8.9.16] (in the case 𝛼 > 1); see also the comment made right before Corollary 5.4.2 for 𝛼 = 1). Let 𝑝 ∗ be the probability defined in the proof of Lemma 3.5.1 and Í denote the corresponding objects by putting the suffix ∗ on them as before. Then 𝑥 𝑝 ∗ (𝑥) = 0 and 𝑝 ∗ (𝑥) = 0, 𝑥 ≥ 2, and 𝑚 ∗ (𝑥) ∼ 𝑚(𝑥), and by virtue of a Tauberian theorem (and the Bertoin–Doney result on Spitzer’s condition) it suffices to show that (1 − 𝑠) 𝛥(𝑠) → 0 as 𝑠 ↑ 1, where 𝛥(𝑠) :=
∞ ∑︁ 𝑛=0
𝑠
𝑛
0 ∑︁
𝑦=−∞
𝑛
( 𝑝 (𝑦) −
𝑝 ∗𝑛 (𝑦))
1 = 2𝜋
∫
𝜋
−𝜋
1 1 d𝑡 − . 1 − 𝑠𝜓(𝑡) 1 − 𝑠𝜓∗ (𝑡) 1 − e𝑖𝑡
By (b) of Lemma 3.2.2 (applied with 𝜇± separately) we see that 𝛼+ (𝑡) + 𝛽+ (𝑡) ≪ 𝛼− (𝑡) + 𝛽− (𝑡) ≍ 𝑡𝑚(1/𝑡) (𝑡 → 0), and the same relations for 𝛼∗± (𝑡) + 𝛽∗± (𝑡). Hence from (3.50) it follows that 8 Standard arguments verify the equivalence between (5.38) and (5.37). Indeed, if 1 < 𝛼 < 2 this is immediate by ∫the monotone density theorem; for 𝛼 = 1 observe that the condition in (5.37) implies −𝑥 that 𝑥 𝜇− ( 𝑥)/ −∞ (−𝑡) d𝐹 (𝑡) approaches zero (see e.g., Theorem VIII.9.2 of [31]), and then that 𝜂− ( 𝑥) ∼ (2 − 𝛼) 𝐿 ( 𝑥) for 𝛼 < 2 – the converse implication is easy; for 𝛼 = 2 see Lemma A.1.1 of Section A.1.1.
5 Applications Under 𝑚+ /𝑚 → 0
90
|𝜓∗ (𝑡) − 𝜓(𝑡)| ≍ 𝑂 (𝑡 2 ) + |𝑡 𝛽+ (𝑡)| ≪ 𝑡 2 𝑚(1/𝑡) ≍ |1 − 𝜓(𝑡)| ∼ |1 − 𝜓∗ (𝑡)| as 𝑡 → 0. Using |1 − 𝑠𝜓| 2 ≥ (1 − 𝑠) 2 + 𝑠2 |1 − 𝜓| 2 , valid for any complex number 𝜓 with ℜ𝜓 ≤ 1, for any 𝜀 > 0 one can accordingly choose a positive constant 𝛿 so that ∫ (1 − 𝑠)| 𝛥(𝑠)| ≤ 𝜀 0
𝛿
(1 − 𝑠)𝑚(1/𝑡)𝑡 d𝑡 + 𝑜(1) (1 − 𝑠) 2 + [𝑚(1/𝑡)𝑡 2 ] 2
(𝑠 ↑ 1).
Writing ℎ(𝑡) for 𝑚(1/𝑡)𝑡 2 , one sees that ℎ ′ (𝑡) = [𝑐(1/𝑡) + 𝑚(1/𝑡)]𝑡 > ℎ(𝑡)/𝑡 (𝑡 > 0), which entails that the above integral is less than 𝜋/2 (see Section A.5.2). Thus (1 − 𝑠)| 𝛥(𝑠)| → 0 as 𝑠 ↑ 1, as desired. □ Because of what has been mentioned in Remark 5.4.3, for the proof of Theorem 5.4.1 now we have only to show the equivalence of (b) and (d), which is involved in the following Proposition 5.4.4 Let 𝑚 + /𝑚 → 0. Then as 𝑥 → ∞, (i) 𝑉d (𝑥)𝑈a (𝑥) ∼ 𝑥𝑎(𝑥) ≍ 𝑥 2 /𝑚(𝑥) and 𝑉d (𝑥)𝑈a (𝑥)𝜇+ (𝑥) → 0; and (ii) in order for 𝑚 to vary regularly with index 2 − 𝛼 (∈ [0, 1]), it is necessary and sufficient that 𝑉d varies regularly with index 𝛼 − 1; in this case it holds that 𝑉d (𝑥)𝑈a (𝑥) ∼ 𝐶 𝛼 𝑥 2 /𝑚(𝑥),
(5.39)
where 𝐶 𝛼 = 1/Γ(𝛼)Γ(3 − 𝛼). When 𝐹 is in the domain of attraction of a stable law of exponent 𝛼 ∈ (0, 2] and there exists 𝜌 := lim 𝑃0 [𝑆 𝑛 > 0], one can readily derive from Eq(15) and Lemma 12 of Vatutin & Wachtel [88] the asymptotic form (explicit in terms of 𝐹) of 𝑉d (𝑥)𝑈a (𝑥) if 𝜌(1 − 𝜌) ≠ 0 – in particular (5.39) follows as a special case of it except for the expression of 𝐶 𝛼 ; our proof, quite different from theirs, rests on Lemma 5.3.1. In Lemma 6.5.1 the result will be extended to the case 𝜌(1 − 𝜌) = 0 unless 𝛼 = 2𝑝 = 1. The first half of (i) of Proposition 5.4.4 follows immediately from Lemma 5.3.1(i) (since 𝑈a (𝑥) ∼ 𝑥/ℓ ∗ (𝑥)). The second one follows from the first since 𝑥 2 𝜇+ (𝑥) ≤ 2𝑐 + (𝑥). The proof of (ii) will be based on the identity 𝑃[ 𝑍ˆ ≤ −𝑥] = 𝑣d (0)
∞ ∑︁
𝑢 a (𝑦)𝐹 (−𝑦 − 𝑥)
(𝑥 > 0),
(5.40)
𝑦=0
which one can derive by using (5.31) (or by the duality lemma [31, Section XII.1], which says 𝑢 a (𝑦) = 𝑔 (−∞,−1] (0, 𝑦), 𝑦 ≥ 0). Recall (5.28), i.e., 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) and that ℓ ∗ is s.v. (at least under 𝑚 + /𝑚 → 0). Here we also note that for 0 ≤ 𝛾 ≤ 1, if either 𝑉d (𝑥) or 1/𝑃[ 𝑍ˆ ≤ −𝑥] varies regularly as 𝑥 → ∞ with index 𝛾, then 𝑃[ 𝑍ˆ ≤ −𝑥]𝑉d (𝑥)/𝑣◦ −→ 1/Γ(1 − 𝛾)Γ(𝛾 + 1), where 𝑣◦ = 𝑣d (0) (cf. Theorem A.2.3).
(5.41)
5.4 Spitzer’s Condition and the Regular Variation of 𝑉d
91
Proof (of the necessity part of Proposition 5.4.4(ii)) By virtue of (5.35) 𝑉d is s.v. if and only if 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎(−∞,0] ] → 1 as 𝑅 → ∞ for 𝑥 ≥ 𝑅/2, which is equivalent to lim 𝑃0 [𝑆 𝑛 > 0] = 1 due to Kesten & Maller [46], as noted in (2.22). For 𝛼 = 1, by what is mentioned right after (5.28), this verifies the first half of (ii). [Here, the sufficiency part is also proved, which we shall prove without resorting to [46] (cf. Lemma 5.4.6).] For 1 < 𝛼 < 2, by (5.40) the condition (5.37) implies that ∞ ∑︁ 𝑃[ 𝑍ˆ < −𝑥] 𝐿 (𝑥 + 𝑦) (2 − 𝛼)𝐿(𝑥) ∼ , (𝛼 − 1) (2 − 𝛼) ∼ ∗ ◦ ∗ 𝛼 𝑣 ℓ (𝑦)(𝑥 + 𝑦) ℓ (𝑥)𝑥 𝛼−1 𝑦=0
(5.42)
hence 𝑉d varies regularly with index 𝛼 − 1 because of (5.41). Let 𝛼 = 2. Then the slow variation of 𝑚 − entails that 𝐹 is in the domain of attraction of a normal law (cf. Section A.1.1)) and we have seen (Proposition 4.2.1(i)) that this entails 2𝑎(𝑥) ¯ ∼ 𝑥/𝑚 ( 𝑥), hence 𝑉d (𝑥) ∼ 𝑎(𝑥)ℓ ∗ (𝑥) ∼ 𝑥ℓ ∗ (𝑥)/𝑚 − (𝑥) owing to Lemma 5.3.1, showing 𝑉d (𝑥)/𝑥 is s.v., as required. □ In the following proof of the sufficiency part of Proposition 5.4.4(ii), we shall be concerned with the condition ˆ 𝑉d (𝑥) ∼ 𝑥 𝛼−1 /ℓ(𝑥)
for some ℓˆ that is s.v. at infinity.
(5.43)
The next lemma deals with the case 1 ≤ 𝛼 < 2, when (5.41) is valid with 𝛾 = 𝛼 − 1 (under (5.43)). By (5.40) and (5.41) the above equivalence relation may be rewritten as ∞ ∑︁ ˆ 𝑢 a (𝑦)𝐹 (−𝑦 − 𝑥) = 𝐶 𝛼′ 𝑥 1−𝛼 ℓ(𝑥){1 + 𝑜(1)}, (5.44) 𝑦=0
where
𝐶 𝛼′
= (2 − 𝛼)𝐶 𝛼 = 1/Γ(𝛼)Γ(2 − 𝛼). Put for 1 ≤ 𝛼 < 2, ∫ ∞ 𝜇− (𝑡) d𝑡. ℓ♯ (𝑥) = 𝑥 𝛼−1 ℓ ∗ (𝑡) 𝑥
(5.45)
Lemma 5.4.5 If 𝑍 is r.s. and (5.43) holds with 1 ≤ 𝛼 < 2, then ˆ ℓ(𝑥) ∼ Γ(𝛼)Γ(2 − 𝛼)ℓ♯ (𝑥) and 𝑃[ 𝑍ˆ ≤ −𝑥] 𝑣◦ ∼ 𝑥 1−𝛼 ℓ♯ (𝑥). Proof For this proof, we need neither the condition 𝐸 |𝑋 | < ∞ nor the recurrence of the r.w., but we do need the oscillation of the r.w. so that both 𝑍 and 𝑍ˆ are welldefined proper random variables and Spitzer’s representation (5.31) of 𝑔 (−∞,0] (𝑥, 𝑦) is applicable. Let 𝑍 be r.s., so that ℓ ∗ is s.v. and 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥). Put ∫ ∞ ∫ ∞ ℓ♯ (𝑥) 𝜇− (𝑡) 𝐹 (−𝑡 − 𝑥) d𝑡 and Δ(𝑥) = 𝛼−1 = d𝑡. Σ(𝑥) = ∗ (𝑡) ℓ ℓ ∗ (𝑡) 𝑥 𝑥 1
5 Applications Under 𝑚+ /𝑚 → 0
92
In view of (5.44) it suffices to show the implication Σ(𝑥) varies regularly with index 1 − 𝛼 =⇒ Σ(𝑥) ∼ Δ(𝑥). (5.46) ∫𝑥 Clearly Σ(𝑥) ≥ 𝐹 (−2𝑥) 1 d𝑡/ℓ ∗ (𝑡) ∼ 𝑥𝐹 (−2𝑥)/ℓ ∗ (𝑥). Replacing 𝑥 by 𝑥/2 in this inequality and noting that Σ(𝑥/2) ∼ 2 𝛼−1 Σ(𝑥) because of the premise of (5.46) we obtain (5.47) 𝑥𝜇− (𝑥)/ℓ ∗ (𝑥) ≤ 2 𝛼 Σ(𝑥){1 + 𝑜(1)}. From the defining expressions of Σ and Δ one observes first that for each 𝜀 > 0, as 𝑥 → ∞, 𝜀𝑥 Σ(𝑥) ≤ ∗ 𝜇− (𝑥){1 + 𝑜(1)} + Δ ((1 + 𝜀)𝑥) , ℓ (𝑥) then, by substituting (5.47) into the RHS, that [1 − 2 𝛼 𝜀] Σ(𝑥){1 + 𝑜(1)} ≤ Δ ((1 + 𝜀)𝑥) ≤ Δ(𝑥) ≤ Σ(𝑥){1 + 𝑜(1)}. Since 𝜀 may be arbitrarily small, we can conclude Δ(𝑥)/Σ(𝑥) → 1, as desired.
□
Lemma 5.4.6 If 𝑉d is s.v. and 𝑥𝜂+ (𝑥)/𝑚 − (𝑥) → 0, then 𝑍 is r.s., 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥), 𝜂− is s.v. and 𝜂− (𝑡) ∼ 𝐴(𝑡) ∼ ℓ ∗ (𝑡)ℓ♯ (𝑡). Proof The relative stability of 𝑍 follows from Proposition 5.2.1 and, together with the slow variation of 𝑉d , implies 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥) (with 𝛼 = 1 in (5.45)) according to the preceding lemma. The slow variation of ℓ♯ implies 𝑥𝜇− (𝑥)/ℓ ∗ (𝑥) ≪ ℓ♯ (𝑥), which in turn shows 𝜂− is s.v. since ℓ♯ (𝑥) ≤ 𝜂− (𝑥)/ℓ ∗ (𝑥). It accordingly follows that 𝜂− (𝑥) ∼ 𝑚 − (𝑥)/𝑥 ≫ 𝜂+ (𝑥), hence 𝐴(𝑥) ∼ 𝜂− (𝑥). We postpone the proof of ℓ ∗ (𝑥)ℓ♯ (𝑥) ∼ 𝐴(𝑥) to the next chapter (Lemma 6.4.3), where the result is extended to the case when 𝑉d is s.v. and 𝑍 is r.s. □ Completion of the proof of Proposition 5.4.4. We show that if 𝑉d (𝑥) varies regularly, ˆ then (5.37) and (5.39) hold. Suppose that (5.43), that is 𝑉d (𝑥) ∼ 𝑥 𝛼−1 /ℓ(𝑥), holds for 1 ≤ 𝛼 ≤ 2. Case 1 ≤ 𝛼 < 2. Let 𝐶 𝛼′ = (2 − 𝛼)𝐶 𝛼 = 1/[Γ(𝛼)Γ(2 − 𝛼)] as before. By Lemma 5.4.5 ∫ ∞ ˆ ℓ♯ (𝑥) 𝐶 𝛼′ ℓ(𝑥) 𝜇− (𝑡) ∼ 𝛼−1 = d𝑡. (5.48) 𝛼−1 ℓ ∗ (𝑡) 𝑥 𝑥 𝑥 Let 1 < 𝛼 < 2. Then by the monotone density theorem – applicable since ∗ (𝑥) (with ˆ 𝜇− (𝑡)/ℓ ∗ (𝑡) is decreasing – the above relation leads to 𝜇− (𝑥) ∼ 𝐶𝑥 −𝛼 ℓ(𝑥)ℓ ∗ (𝑥) ˆ 𝐶 = [(2 − 𝛼) (𝛼 − 1)𝐶 𝛼 ]), which is equivalent to (5.37) with 𝐿(𝑥) = 𝐶 𝛼 ℓ(𝑥)ℓ and shows (5.39) again (which has been virtually seen already in (5.42)). If 𝛼 = 1, then 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) and, by (5.48) and Lemma 5.4.6, 1/𝑉d (𝑥) ∼ ∗ (𝑥)) and (5.39), as one can ˆ 𝜂− (𝑥)/ℓ ∗ (𝑥), which implies (5.37) (with 𝐿(𝑥) = ℓ(𝑥)ℓ readily check.
5.4 Spitzer’s Condition and the Regular Variation of 𝑉d
93
Case 𝛼 = 2. We have ∫ 𝑥 ∫ 𝑥 ∑︁ ∞ 1 ∗ ˆ ˆ ℓ (𝑥) = ◦ 𝑃[− 𝑍 > 𝑡] d𝑡 = d𝑡 𝑢 a (𝑦)𝐹 (−𝑦 − 𝑡) 𝑣 0 0 𝑦=0
(5.49)
and, instead of (5.41), ℓˆ∗ (𝑥)𝑉d (𝑥) ∼ 𝑥 (Theorem A.2.3), so that ˆ ℓˆ∗ (𝑥) ∼ 𝑥/𝑉d (𝑥) ∼ ℓ(𝑥).
(5.50)
To complete the proof it suffices to show that 𝑚 is s.v., or ∫equivalently, 𝑥 𝑥𝜂− (𝑥)/𝑚(𝑥) → 0, because the slow variation of 𝑚 entails that of −𝑥 𝑦 2 d𝐹 (𝑦) and hence (5.39) (see the proof of the necessity part). We shall verify the latter condition, and to this end, we shall apply the inequalities 1/𝐶 ≤ ℓˆ∗ (𝑥)ℓ ∗ (𝑥)/𝑚(𝑥) ≤ 𝐶
(𝑥 > 1)
(5.51)
(for some positive constant 𝐶), which follow from (5.50) and Lemma 5.3.1(i), the latter entailing 𝑉d (𝑥)/ℓ ∗ (𝑥) ≍ 𝑥/𝑚(𝑥) in view of Theorems 3.1.1 and 3.1.2. Using (5.49) and 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) as well as the monotonicity of ℓ ∗ and 𝐹, one observes that for each 𝑀 > 1, as 𝑥 → ∞, ∫ 𝑥 ∫ ( 𝑀−1) 𝑥 𝐹 (−𝑦 − 𝑡) ℓˆ∗ (𝑥) − ℓˆ∗ (𝑥/2) ≥ d𝑡 {1 + 𝑜(1)} d𝑦 ℓ ∗ (𝑦) ∨ 1 𝑥/2 0 ∫ 𝑥 ∫ ( 𝑀−1) 𝑥 𝐹 (−𝑦 − 𝑡) d𝑦{1 + 𝑜(1)} ≥ d𝑡 ℓ ∗ (𝑦 + 𝑥) 𝑥/2 0 𝑥/2 [𝜂− (𝑥) − 𝜂− (𝑀𝑥)] {1 + 𝑜(1)}, ≥ ∗ ℓ (𝑥) which implies
𝑥𝜂− (𝑥) ≤ 𝑥𝜂− (𝑀𝑥) + ℓˆ∗ (𝑥)ℓ ∗ (𝑥) × 𝑜(1),
for by (5.50) ℓˆ∗ is s.v. Combining this with (5.51) one infers that 𝑚(𝑀𝑥) {1 + 𝑜(1)} 𝑀 𝐶 [ℓˆ∗ ℓ ∗ ] (𝑥) {1 + 𝑜(1)} ≤ 𝑀 𝐶 2 𝑚(𝑥) {1 + 𝑜(1)} ≤ 𝑀
𝑥𝜂− (𝑥) ≤
and concludes that 𝑥𝜂− (𝑥)/𝑚(𝑥) → 0, hence the desired slow variation of 𝑚.
□
5 Applications Under 𝑚+ /𝑚 → 0
94
5.5 Comparison Between 𝝈𝑹 and 𝝈[𝑹,∞) and One-Sided Escape From Zero The following proposition gives the asymptotic form in terms of the potential kernel 𝑎 of the probability of one-sided escape of the r.w. that is killed as it hits 0. ¯ = 0, then as 𝑅 → ∞, Proposition 5.5.1 (i) If lim 𝑥→∞ 𝑎(−𝑥)/𝑎(𝑥) 𝑃 𝑥 [𝜎(−∞,−𝑅] < 𝜎0 ] ∼ 𝑃 𝑥 [𝜎−𝑅 < 𝜎0 ]
uniformly for 𝑥 ∈ Z.
(5.52)
(ii) Let 𝑚 + /𝑚 → 0 and 𝜎 2 = ∞. Then lim 𝑥→∞ 𝑎(−𝑥)/𝑎(𝑥) ¯ = 0 and as 𝑅 → ∞, 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎0 ] ∼ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ∼ 𝑎 † (𝑥)/𝑎(𝑅)
(5.53)
uniformly for 𝑥 ≤ 𝑅, subject to the condition 𝑎(−𝑅) − 𝑎(−𝑅 + 𝑥) −→ 0 and 𝑎 † (𝑥)
𝑎 † (𝑥) > 0.
(5.54)
Under 𝑚 + /𝑚 → 0, condition (5.54) holds at least for 0 ≤ 𝑥 < 𝑅 by Lemma 5.1.3(ii) and for −𝑅 ≪ 𝑥 < 0 subject to [𝑎(−𝑥)/𝑎(𝑥)] · [𝑎(−𝑅)/𝑎(𝑅)] → 0, 𝑎(𝑥) > 0 by (5.8). [As to the necessity of (5.54) for the first equivalence of (5.53), see the exceptional cases addressed in Remark 4.2.5.] In Section 5.7 we shall state a corollary – Corollary 5.7.1 – of Proposition 5.5.1 that concerns the asymptotic distribution of ♯{𝑛 < 𝜎[𝑅,∞) : 𝑆 𝑛 ∈ 𝐼}, the number of visits of a finite set 𝐼 ⊂ Z by the r.w. before entering [𝑅, ∞). Proof Let E 𝑅 stand for the event {𝜎[𝑅,∞) < 𝜎0 }. Note that for each 𝑥, 𝑃 𝑥 [𝜎[𝑅,∞) < 𝜎0 ] ∼ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ⇐⇒ lim 𝑃 𝑥 𝜎𝑅 < 𝜎0 E 𝑅 = 1. 𝑅→∞
(5.55)
By an analogous equivalence, (i) is immediate from Lemma 5.1.3(i). For the proof of (ii), let 𝑚 + /𝑚 → 0. By (5.3) condition (5.54) is then equivalent to the second equivalence of (5.53) under the assumption of (ii). For the proof of the first, take a small constant 𝜀 > 0. Then for any 𝑅 large enough we can choose 𝑢 > 0 so that 𝑢/𝑚(𝑢) = 𝜀𝑅/𝑚(𝑅); put 𝑧 = 𝑧(𝑅, 𝜀) = ⌊𝑢⌋. Since 𝑎(𝑥) ¯ ≍ 𝑥/𝑚(𝑥), this entails 𝜀𝐶 ′ ≤ 𝑎(𝑧)/ ¯ 𝑎(𝑅) ¯ < 𝜀𝐶 ′′ for some positive constants 𝐶 ′, 𝐶 ′′. Now define ℎ 𝑅, 𝜀 via ∑︁ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 | E 𝑅 ] = ℎ 𝑅, 𝜀 + 𝑃 𝑥 𝑆 𝜎 [𝑅,∞) = 𝑦 E 𝑅 𝑃 𝑦 [ 𝜎 ˜ 𝑅 < 𝜎0 ] 𝑅 ≤𝑦 ≤𝑅+𝑧
5.6 Escape Into (−∞, −𝑄] ∪ [𝑅, ∞)
95
with 𝜎 ˜ 𝑅 = 0 if 𝑆0 = 𝑅 and 𝜎 ˜ 𝑅 = 𝜎𝑅 otherwise. Then by using 𝜎[𝑅,∞) ≤ 𝜎𝑅 together with (5.54) we infer from Lemma 5.2.3(i) (see (5.17) and its derivation) 𝑚 + (𝑧) 𝑎(𝑅) ¯ 𝑚 + (𝑧) ℎ 𝑅, 𝜀 ≤ 𝑃 𝑥 𝑆 𝜎 [𝑅,∞) > 𝑅 + 𝑧 E 𝑅 ≤ 𝐶 · ≤ [𝐶/𝜀𝐶 ′] , 𝑚(𝑧) 𝑎(𝑧) ¯ 𝑚(𝑧) whereas 1 − 𝑃 𝑦 [ 𝜎 ˜ 𝑅 < 𝜎0 ] ≤ 𝑎(𝑦 ¯ − 𝑅)/𝑎(𝑅) ¯ < 𝜀𝐶1 for 𝑅 ≤ 𝑦 ≤ 𝑅 + 𝑧 (see Remark 5.1.2(b) for the first inequality). Since 𝑚 + (𝑧)/𝑚(𝑧) → 0 so that ℎ 𝑅, 𝜀 → 0, and since 𝑃𝑦 [𝜎 ˜ 𝑅 < 𝜎0 ] ∼ 𝑎(𝑦)/𝑎(𝑅), we conclude lim inf inf 𝑃 𝑥 𝜎𝑅 < 𝜎0 𝜎[𝑅,∞) < 𝜎0 > 1 − 𝜀𝐶1 , 𝑅→∞ 0≤𝑥 1 this condition is superfluous. Note that inf 𝑧∉(−𝑄,𝑅) 𝑃𝑧 [ 𝜎−𝑄 < 𝜎0 ] > 0 if 𝑝 = 0 and 𝑄/𝑅 is bounded above.
5.6 Escape Into (−∞, −𝑄] ∪ [𝑅, ∞)
97
𝑃0 [𝜎−𝑅,𝑅 < 𝜎0 ] ∼
(2 𝛼−1 )/(1 − 𝑐2𝛼 𝑝𝑞) . 2𝑎(𝑅) ¯
The computation rests on Proposition 4.2.1. Combining (4.23) and (4.24) one observes that 𝑃 𝑅 [𝜎−𝑅 < 𝜎0 ] → 𝑐 𝛼 𝑞, 𝑃−𝑅 [𝜎𝑅 < 𝜎0 ] → 𝑐 𝛼 𝑝 [in the case 𝛼 = 2, use the fact that both 𝑍 and − 𝑍ˆ are r.s.], so that 𝑃0 [𝜎−𝑅 < 𝜎0 ] = 𝑃0 [𝜎𝑅 < 𝜎0 ] ∼ 𝜆− + 𝑐 𝛼 𝑞𝜆 + ∼ 𝜆+ + 𝑐 𝛼 𝑝𝜆− , showing (1 − 𝑐 𝛼 𝑝)𝜆− ∼ (1 − 𝑐 𝛼 𝑞)𝜆+ . This together with 𝑃0 [𝜎−𝑅 < 𝜎0 ] ∼ 1/2𝑎(𝑅), ¯ yields (5.57). Recall that 𝐻 𝐵𝑥 (·) denotes the hitting distribution of a set 𝐵 for the r.w. 𝑆 under 𝑃𝑥 : 𝐻 𝐵𝑥 (𝑦) = 𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑦] (𝑦 ∈ 𝐵) (see (4.35)). Put 𝐵(𝑄, 𝑅) = {−𝑄, 0, 𝑅}. Then (5.56) is rephrased as 𝑥 𝑃 𝑥 [𝜎Z\(−𝑄,𝑅) < 𝜎0 ] ∼ 1 − 𝐻 𝐵(𝑄,𝑅) (0).
(5.58)
By using Theorem 30.2 of Spitzer [71] one can compute an explicit expression of 𝑥 𝐻 𝐵(𝑄,𝑅) (0) in terms of 𝑎(·), which though useful, is pretty complicated. We derive the following result without using it. Lemma 5.6.3 For 𝑥 ∈ Z 𝑥 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ] ≥ 1 − 𝐻 𝐵(𝑄,𝑅) (0) − 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ]𝑃 𝑅 [𝜎0 < 𝜎−𝑄 ]
≥ 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ]𝑃−𝑄 [𝜎0 < 𝜎𝑅 ]. Proof Write 𝐵 for 𝐵(𝑄, 𝑅). Plainly we have 1 − 𝐻 𝐵𝑥 (0) = 𝑃 𝑥 [𝜎−𝑄,𝑅 < 𝜎0 ] (5.59) = 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ] + 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] − 𝑃 𝑥 [𝜎−𝑄 ∨ 𝜎𝑅 < 𝜎0 ]. Let E denote {𝜎𝑅 < 𝜎0 } ∩ {𝑆 𝜎𝑅 + · hits −𝑄 before 0}, the event that 𝑆 · visits −𝑄 after the first hitting of 𝑅 without visiting 0. Then E ⊂ {𝜎𝑅 ∨ 𝜎−𝑄 < 𝜎0 }
and
{𝜎−𝑄 ∨ 𝜎𝑅 < 𝜎0 } \ E ⊂ {𝜎−𝑄 < 𝜎𝑅 < 𝜎0 }.
By 𝑃 𝑥 [𝜎−𝑄 < 𝜎𝑅 < 𝜎0 ] ≤ 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ]𝑃−𝑄 [𝜎𝑅 < 𝜎0 ] it therefore follows that 0 ≤ 𝑃 𝑥 [𝜎−𝑄 ∨ 𝜎𝑅 < 𝜎0 ] − 𝑃 𝑥 (E) ≤ 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ]𝑃−𝑄 [𝜎𝑅 < 𝜎0 ].
(5.60)
5 Applications Under 𝑚+ /𝑚 → 0
98
The left-hand inequality together with 𝑃 𝑥 (E) = 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ]𝑃 𝑅 [𝜎−𝑄 < 𝜎0 ] leads to 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] − 𝑃 𝑥 [𝜎−𝑄 ∨ 𝜎𝑅 < 𝜎0 ] ≤ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] − 𝑃 𝑥 (E) = 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ]𝑃 𝑅 [𝜎0 < 𝜎−𝑄 ], which, substituted into (5.59), yields the first inequality of the lemma after the trite transposition of a term. In a similar way, the second one is deduced by substituting the right-hand inequality of (5.60) into (5.59). □ → 0 (𝑥 → ∞), then as 𝑄 ∨ 𝑅 → ∞, Lemma 5.6.4 If 𝑎(−𝑥)/𝑎(𝑥) ¯ 0 (0) ∼ 1 − 𝐻 𝐵(𝑄,𝑅)
𝑎(𝑄 + 𝑅) . 𝑎(𝑄)𝑎(𝑅)
(5.61)
Proof On taking 𝑥 = 0 in Lemma 5.6.3, its second inequality reduces to 𝑎(𝑄 + 𝑅) + 𝑎(−𝑄) − 𝑎(𝑅) 𝑎(−𝑄 − 𝑅) + 𝑎(𝑅) − 𝑎(−𝑄) + 4𝑎(𝑅) 4𝑎(𝑄) ¯ 𝑎(𝑄) ¯ ¯ 𝑎(𝑅) ¯ 𝑎(𝑄 ¯ + 𝑅) = 2𝑎(𝑄) ¯ 𝑎(𝑅) ¯ 𝑎(𝑄 + 𝑅) {1 + 𝑜(1)}, = 𝑎(𝑄)𝑎(𝑅)
0 1 − 𝐻 𝐵(𝑄,𝑅) (0) ≥
and its first inequality gives the upper bound 0 1 − 𝐻 𝐵(𝑄,𝑅) (0) ≤
𝑎(𝑄 + 𝑅) + 𝑎(−𝑄) + 𝑎(−𝑅) 𝑎(𝑄 + 𝑅) {1 + 𝑜(1)}, = 4𝑎(𝑄) 𝑎(𝑄)𝑎(𝑅) ¯ 𝑎(𝑅) ¯ □
showing (5.61).
According to Lemma 3.10 of [84], for any finite set 𝐵 ⊂ Z that contains 0 we have the identity 1 − 𝐻 𝐵𝑥 (0) = [1 − 𝐻 𝐵0 (0)]𝑎 † (𝑥) +
(5.62) ∑︁
[𝑎(−𝑧) − 𝑎(𝑥 − 𝑧)]𝐻 𝐵𝑧 (0), 𝑥 ∉ 𝐵 \ {0}.
𝑧 ∈𝐵\{0}
Recalling 𝑃 𝑥 [𝜎0 < 𝜎𝑥 ] = 1/2𝑎(𝑥) ¯ = 𝑃0 [𝜎𝑥 < 𝜎0 ] (𝑥 ≠ 0) we see that −𝑄 𝑅 0 𝐻 𝐵(𝑄,𝑅) (0) ∨ 𝐻 𝐵(𝑄,𝑅) (0) ≤ 𝑃0 [𝜎𝑅 < 𝜎0 ] ∨ 𝑃0 [𝜎−𝑄 < 𝜎0 ] ≤ 1 − 𝐻 𝐵(𝑄,𝑅) (0).
For 𝐵 = 𝐵(𝑄, 𝑅), the second term on the RHS of (5.62) is negligible in comparison to the first under the following condition: as 𝑄 ∧ 𝑅 → ∞
5.6 Escape Into (−∞, −𝑄] ∪ [𝑅, ∞)
99
|𝑎(𝑄) − 𝑎(𝑥 + 𝑄)| + |𝑎(−𝑅) − 𝑎(𝑥 − 𝑅)| = 𝑜(𝑎 † (𝑥)) and 𝑎 † (𝑥) ≠ 0.10
(5.63)
This gives us the following result. Lemma 5.6.5 Uniformly for −𝑄 < 𝑥 < 𝑅 subject to (5.63), as 𝑄 ∧ 𝑅 → ∞, h i 𝑥 0 1 − 𝐻 𝐵(𝑄,𝑅) (0) . (0) ∼ 𝑎 † (𝑥) 1 − 𝐻 𝐵(𝑄,𝑅) (5.64) Condition (5.63) – always satisfied for each 𝑥 (fixed) if 𝜎 2 = ∞ and only for 𝑥 = 0 otherwise – is necessary and sufficient for the following condition to hold: ¯ 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ] ∼ 𝑎 † (𝑥)/2𝑎(𝑄)
and
𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ∼ 𝑎 † (𝑥)/2𝑎(𝑅). ¯
(5.65)
If 𝑚 + /𝑚 → 0, then |𝑎(−𝑅) − 𝑎(𝑥 − 𝑅)| = 𝑜(𝑎(𝑥) ∨ 1) uniformly for 0 ≤ 𝑥 < 𝑅 (as noted after Proposition 5.5.1) while by Proposition 4.3.1, |𝑎(𝑄) − 𝑎(𝑥 + 𝑄)| = ∫𝑄 𝑜(𝑎(𝑥) ∨ 1) for 𝑥 ≥ 0 whenever 𝑚(𝑥)𝑄 −1 1 d𝑦/𝑚(𝑦) → 0. From Proposition 5.6.1 and Lemmas 5.6.4 and 5.6.5 we have the following Proposition 5.6.6 Let 𝑚 + /𝑚 → 0 and 𝜎 2 = ∞. Then, as 𝑄 ∧ 𝑅 → ∞, 0 1 − 𝐻 𝐵(𝑄,𝑅) (0) ∼
𝑎(𝑄 + 𝑅) 𝑎(𝑄)𝑎(𝑅)
and uniformly for −𝑄 < 𝑥 < 𝑅 subject to condition (5.63), 0 𝑃 𝑥 [𝜎Z\(−𝑄,𝑅) < 𝜎0 ] ∼ 𝑎 † (𝑥) (1 − 𝐻 𝐵(𝑄,𝑅) (0)).
(5.66)
Proposition 5.6.7 Suppose 𝑚 + /𝑚 → 0 and 𝜎 2 = ∞. Then uniformly for −𝑄 < 𝑥 < 𝑅 subject to (5.63), as 𝑄 ∧ 𝑅 → ∞, 𝑥 (i) 𝐻 𝐵(𝑄,𝑅) (𝑅) ∼ 𝑎 † (𝑥)/𝑎(𝑅) 11 and
𝑃 𝑥 𝜎𝑅 < 𝜎−𝑄 𝜎−𝑄,𝑅 < 𝜎0 ∼ 𝑎(𝑄)/𝑎(𝑄 + 𝑅);
(5.67)
(ii) if 𝑄/𝑅 < 𝑀 for some 𝑀 > 1 in addition, 𝑃 𝑥 𝜎[𝑅,∞) < 𝜎(−∞,−𝑄] 𝜎Z\(−𝑄,𝑅) < 𝜎0 ∼
𝑎(𝑄) . 𝑎(𝑄 + 𝑅)
Proof Let 𝑚 + /𝑚 → 0 and (5.63) be valid. By decomposing {𝜎𝑅 < 𝜎0 } = {𝜎−𝑄 < 𝜎𝑅 < 𝜎0 } + {𝜎𝑅 < 𝜎−𝑄 ∧ 𝜎0 }
(5.68)
(‘+’ designates the disjoint union) it follows that 10 The case 𝑎† ( 𝑥) = 0 (i.e., 𝑆 is right-continuous and 𝑥 < 0), where 𝑃𝑥 [ 𝜎Z\(−𝑄,𝑅) < 𝜎0 ] = 𝑥 1 − 𝐻𝐵(𝑄,𝑅) (0) = [𝑎 (𝑄) − 𝑎 (𝑄 + 𝑥) ]/𝑎 (𝑄), is not interesting. 11 This is true for −𝑄 ≪ 𝑥 ≤ 𝑅 subject to 𝑎 (−𝑄)/𝑎 (𝑄) ≪ 𝑎† ( 𝑥)/𝑎† ( |𝑥 |) regardless of (5.63). [Use 𝑃𝑥 [ 𝜎−𝑄 < 𝜎0 ] ≤ 𝑎¯ † ( 𝑥)/ 𝑎¯ (𝑄) instead of the bound used for (5.70).]
5 Applications Under 𝑚+ /𝑚 → 0
100
𝑥 (𝑅) = 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] − 𝑃 𝑥 [𝜎−𝑄 < 𝜎𝑅 < 𝜎0 ]. 𝐻 𝐵(𝑄,𝑅)
(5.69)
The second probability on the RHS is equal to 𝑃 𝑥 [𝜎−𝑄 < 𝜎0,𝑅 ]𝑃−𝑄 [𝜎𝑅 < 𝜎0 ], hence less than 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ]𝑃−𝑄 [𝜎𝑅 < 𝜎0 ]. By Lemma 5.1.1 (see (5.7)) 𝑃−𝑄 [𝜎𝑅 < 𝜎0 ] ≤ 𝑎(−𝑄)/𝑎(𝑅), while owing to (5.63) we have 𝑃 𝑥 [𝜎−𝑄 < 𝜎0 ] ∼ 𝑎 † (𝑥)/𝑎(𝑄). Hence 𝑃 𝑥 [𝜎−𝑄 < 𝜎𝑅 < 𝜎0 ] ≤
𝑎 † (𝑥)𝑎(−𝑄) {1 + 𝑜(1)}. 𝑎(𝑅)𝑎(𝑄)
(5.70)
The RHS being 𝑜 𝑎 † (𝑥)/𝑎(𝑅) , we obtain 𝑥 𝐻 𝐵(𝑄,𝑅) (𝑅) ∼ 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ] ∼ 𝑎 † (𝑥)/𝑎(𝑅).
Now noting 𝑥 𝑃 𝑥 [𝜎𝑅 < 𝜎−𝑄 | 𝜎{−𝑄,𝑅 } < 𝜎0 ] = 𝐻 𝐵(𝑄,𝑅) (𝑅)
𝑥 1 − 𝐻 𝐵(𝑄,𝑅) (0)
one deduces (5.67) immediately from (5.64). For the proof of (ii) let 𝜏(𝑄) be the first time 𝑆 exits from (−∞, −𝑄] after its entering this half-line (see (5.19)) and E𝑄 denote the event {𝜎(−∞,−𝑄] < 𝜎0 }. Then ∑︁ 𝑃 𝑥 [𝜎(−∞,−𝑄] < 𝜎[𝑅,∞) < 𝜎0 ] ≤ 𝑃 𝑥 [𝑆 𝜏 (𝑄) = 𝑦, E𝑄 ]𝑃 𝑦 [𝜎[𝑅,∞) < 𝜎0 ] −𝑄 𝑅. This together with 𝑃 𝑥 (Λ𝑅 ) → 0 shows 𝑔 𝐵(𝑅) (𝑥, 𝑦) ∼ 𝑔𝛺 (𝑥, 𝑦) (cf. (7.54)). The case when 𝑆 is p.r.s. is disposed of by duality. □
5.7 Sojourn Time of a Finite Set for 𝑆 with Absorbing Barriers
103
Remark 5.7.4 (i) Using the relation noted in Remark 5.6.8 we see that if 𝜎 2 < ∞, 𝑒 𝑅 ∼ 1/[2𝑎(𝑅)]
and
𝑒 𝑄,𝑅 ∼ 𝑎(𝑄 + 𝑅)/[2𝑎(𝑄)𝑎(𝑅)].
For asymptotically stable walks with exponent 1 ≤ 𝛼 ≤ 2 having 𝜌 = lim 𝑃0 [𝑆 𝑛 > 0], we shall compute the asymptotics of 𝑔𝛺 (𝑥, 𝑦) and 𝑔 𝐵(𝑅) (𝑥, 𝑦) in Chapter 8 (Theorems 8.2.1 and 8.4.3), which give asymptotic forms of 𝑒 𝑅 and 𝑒 𝑅,𝑄 in view of (5.74) (see (8.14) and Theorem 8.4.1) and in particular show that the equivalences in (5.72) and (5.73) hold for 𝛼 = 1 with 0 < 𝜌 < 1 (where 𝑎(𝑥) ∼ 𝑎(−𝑥)). See also Proposition 9.5.1 and Theorem 9.5.5. (ii) If 𝑚 + /𝑚 → 0, then by Theorem 3.1.6 and (5.24), as 𝑥 → ∞ 𝑔𝛺 (𝑥, 𝑥) ∼ 𝑎(𝑥). (iii) If 𝑎(−𝑥), 𝑥 ≥ 1, is almost increasing, then by using (2.15) (see also (7.35) and (7.34)) one can easily deduce the lower bound 𝑔𝛺 (𝑥, 𝑥) ≥ 𝑐[𝑎(𝑥) ∨ 𝑎(−𝑥)] (with 𝑐 > 0), entailing 𝑔𝛺 (𝑥, 𝑥) ≍ 𝑔 {0} (𝑥, 𝑥). This “ ≍ ” is not generally true, Í𝑥 since 𝑔𝛺 (𝑥, 𝑥) = 0 𝑣d (𝑘)𝑢 a (𝑘) is increasing, whereas it is possible that = 0 (see Section 3.6), so that lim inf [𝑔𝛺 (𝑥, 𝑥)/𝑔(𝑥, 𝑥)] = 0. lim inf 𝑎(2𝑥)/ ¯ 𝑎(𝑥) ¯
Chapter 6
The Two-Sided Exit Problem – General Case
Put 𝛺 = (−∞, −1]
and 𝑇 = 𝜎𝛺 .
In this and the next chapter, we are concerned with the asymptotic form as 𝑅 → ∞ of the probability 𝑃 𝑥 (Λ𝑅 ), where Λ𝑅 = {𝜎[𝑅+1,∞) < 𝑇 }, the event that the r.w. 𝑆 exits from an interval [0, 𝑅] on the upper side. The classical result given in [71, Theorem 22.1] says that if the variance 𝜎 2 := 𝐸 𝑋 2 is finite, then 𝑃 𝑥 (Λ𝑅 ) − 𝑥/𝑅 → 0 uniformly for 0 ≤ 𝑥 ≤ 𝑅. This can be refined to 𝑃 𝑥 (Λ𝑅 ) ∼
𝑉d (𝑥) 𝑉d (𝑅)
uniformly for 0 ≤ 𝑥 ≤ 𝑅 as 𝑅 → ∞
(6.1)
as is given in [76, Proposition 2.2] (the proof is easy: see Section 6.3.1 of the present chapter). Here 𝑉d (𝑥) denotes as before the renewal function of the weakly descending ladder height process. Our primary purpose in this chapter is to find a sufficient condition for (6.1) to hold that applies to a wide class of r.w.’s in Z with 𝜎 2 = ∞. When 𝑆 is attracted to a stable process of index 0 < 𝛼 ≤ 2 and there exists 𝜌 = lim 𝑃0 [𝑆 𝑛 > 0], we shall see that the sufficient condition obtained is also necessary for (6.1) and fulfilled if and only if (𝛼 ∨ 1) 𝜌 = 1, and also provide some asymptotic estimates of 𝑃 𝑥 (Λ𝑅 ) in the case (𝛼 ∨ 1) 𝜌 ≠ 1. If the distribution of 𝑋 is symmetric and belongs to the domain of normal attraction of a stable law with exponent 0 < 𝛼 ≤ 2, the problem is treated by Kesten [40]: for 0 < 𝛼 ≤ 2 he derived (among other things) the explicit analytic expression of the limit of the law of the suitably scaled overshoot (beyond the level −1 or 𝑅 + 1) and thereby identified the limit of 𝑃 𝑥 (Λ𝑅 ) as 𝑥 ∧ 𝑅 → ∞ under 𝑥/𝑅 → 𝜆 ∈ (0, 1). Rogozin [64] studied the corresponding problem for strictly stable processes and obtained an analytic expression of the probability of the process exiting the unit interval on the upper side (based on the law of the overshoot distribution given in (2.27)) and, as an application of the result, obtained the scaled limit of 𝑃 𝑥 (Λ𝑅 ) for © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8_6
105
106
6 The Two-Sided Exit Problem – General Case
asymptotically stable r.w.’s (see Remark 6.1.4), extending the result of [40] mentioned above. There are a few other results concerning the asymptotic behaviour of 𝑃 𝑥 (Λ𝑅 ). Griffin and McConnell [35] and Kesten and Maller [46] gave a criterion for 𝑃 𝑥 (Λ2𝑥 ) to approach unity (see Section 2.4). When 𝑋 is in the domain of attraction of a stable law, Doney [19] derived the asymptotic form of 𝑃0 (Λ𝑅 ). Kesten and Maller [47] gave an analytic condition for lim inf 𝑃 𝑥 (Λ2𝑥 ) ∧ [1 − 𝑃 𝑥 (Λ2𝑥 )] > 0, i.e., for 𝑆 started at the centre of a long interval to exit either side of it with positive probability (see Remark 6.1.6).
6.1 Statements of Results ∫∞ In the case 𝐸 |𝑋 | < ∞, we use the notations 𝜂± (𝑥) = 𝑥 𝜇± (𝑡) d𝑡, 𝜂 = 𝜂− + 𝜂+ , as ∫𝑥 well as 𝑚 ± (𝑥) = 0 𝜂± (𝑡) d𝑡 and 𝑚 = 𝑚 − + 𝑚 + . In the sequel we always assume 𝜎 2 = 𝐸 𝑋 2 = ∞ for simplicity. Because of our basic assumption that 𝑆 is oscillatory, 𝐸 𝑋 = 0 if 𝐸 |𝑋 | < ∞. The main result of this chapter is stated in the following theorem. We provide some results complementary to it in Proposition 6.1.3. Theorem 6.1.1 For (6.1) to be true each of the following is sufficient. (C1) (C2) (C3) (C4)
𝐸 𝑋 = 0 and 𝑚 + (𝑥)/𝑚(𝑥) → 0 as 𝑥 → ∞. 𝐸 𝑋 = 0 and 𝑚(𝑥) is s.v. as 𝑥 → ∞. 𝑍 is r.s. and 𝑉d is s.v. 𝜇− (𝑥)/𝜇+ (𝑥) → 0 as 𝑥 → ∞ and 𝜇+ (𝑥) is regularly varying at infinity with index −𝛼, 0 < 𝛼 < 1.
Recall that the ladder height 𝑍 is relatively stable (r.s.) if and only if 𝑢 a (𝑥) is s.v., in which case 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) (cf. (5.28)). On the other hand 𝑉d is s.v. if and only if 𝑃0 [ 𝑍ˆ (𝑅)/𝑅 > 𝑀] → 1 for every 𝑀 > 1 (see Theorem A.2.3). The sufficiency of (C1) follows from Proposition 5.3.2, and the primary task of this chapter is to verify the sufficiency of (C2) to (C4); the proof will be made independently of the result under (C1). It is warned that the second condition in (C4) does not mean that 𝜇− (𝑥) may get small arbitrarily fast as 𝑥 → ∞, the random walk 𝑆 being assumed to oscillate, so that the dual of the growth condition (5.22) must be satisfied. In case (C3) (6.1) is especially of interest when 𝑥/𝑅 → 0, for if 𝑉d is s.v., (6.1) entails 𝑃 𝑥 (Λ𝑅 ) → 1 whenever 𝑥/𝑅 is bounded away from zero. Condition (C2) holds if and only if 𝐹 belongs to the domain of attraction of the standard normal law (cf. Section A.1.1). If 𝐹 is attracted to a stable law of exponent 𝛼, then (C3) is equivalent to 𝛼 = 1 = lim 𝑃0 [𝑆 𝑛 > 0] [81] (see the table in Section 2.6). (C4) holds if and only if 𝐹 is attracted to a stable law of exponent 𝛼 (∈ (0, 1)) and 𝜇+ /𝜇 → 1, and if this is the case lim 𝑃0 [𝑆 𝑛 > 0] = 1.
6.1 Statements of Results
107
Remark 6.1.2 The condition (C3) holds if 𝑆 is p.r.s., or, what amounts to the same under 𝜎 2 = ∞, 𝐴(𝑥)/[𝑥𝜇(𝑥)] −→ ∞ (𝑥 → ∞) (see Section 2.4.1), because the positive relative stability entails that 𝑍 is r.s. and 𝑃 𝑥 (Λ2𝑥 ) → 1 (see (2.21)), the latter implies 𝑉d is s.v. by virtue of (6.12) below. Note that in view of Theorem 6.1.1, (C3) implies 𝑃 𝑥 (Λ2𝑥 ) → 1. Combining this with the result given at (2.22) we shall see, in Theorem A.4.1, that (C.3) implies – hence is equivalent to – the positive relative stability of 𝑆 (this fact will not be used throughout). As before put 𝑣◦ = 𝑣d (0) (= 𝑉d (0)), ∫ 𝑥 ∫ 𝑥 1 ∗ ∗ ˆ 𝑃[𝑍 > 𝑡] d𝑡 and ℓ (𝑥) = ◦ ℓ (𝑥) = 𝑃[− 𝑍ˆ > 𝑡] d𝑡. 𝑣 0 0 Under (C3) and (C4), we bring in the function ∫ ∞ 𝑈a (𝑡)𝜇− (𝑡) d𝑡 ℓ♯ (𝑥) = 𝛼 𝑡 𝑥
(𝑥 > 0, 𝛼 < 1),
in which the summability of the integral is assured by 𝑈a (𝑥) = (see (6.10) given in the dual form and (6.19)). Then
(6.2)
Í0
−∞ 𝑈a (𝑦) 𝑝(𝑦
∗ 𝑎(𝑥)ℓ (𝑥) in case (C1), ∗ ˆ 𝑉d (𝑥) ∼ 𝑥/ℓ (𝑥) in case (C2), 1/ℓ (𝑥) in cases (C3), (C4). ♯
+ 𝑥)
(6.3)
[See Lemma 5.3.1, Theorem A.3.2 and Lemma 6.3.1 for the first, second and third formulae of (6.3), respectively.] In each case of (6.3), ℓ ∗ or ℓˆ∗ or ℓ♯ is s.v. – hence in case (C4) (like (C3)) 𝑃 𝑥 (Λ2𝑥 ) → 1. Under (C1) 𝑎(𝑥) ≍ 𝑥/𝑚(𝑥) (cf. Section 3.1). Since (C3) holds if 𝑆 is p.r.s., combining the third equivalence of (6.3) with Theorem 4.1.1 leads to the following extension of the first one (see Remark 6.4.4): if 𝑝 := lim 𝜇+ (𝑥)/𝜇(𝑥) < 1/2 and 𝐹 is recurrent and p.r.s., then 𝑉d (𝑥) ∼ 1 − 𝑞 −1 𝑝 𝑎(𝑥)ℓ ∗ (𝑥) (𝑞 = 1 − 𝑝). In all cases (C1) to (C4) it follows that 𝑈a (𝑥) ∼ 𝑥 𝛼∧1 /ℓ(𝑥), where 𝛼 ≥ 1 in cases (C1) to (C3) and ℓ is some s.v. function which is chosen to be a normalised one1 and that either 𝑍 is r.s. or 𝑃[𝑆 𝑛 > 0] → 1 (see Theorem 5.4.1 (i) for the case (C1)). In the rest of this section, we shall suppose the asymptotic stability condition (AS) together with (2.16) introduced in Section 2.3 to hold and state results as to upper and lower bounds of 𝑃 𝑥 (Λ𝑅 ). If (AS) holds, then it follows that 𝛼𝜌 ≤ 1; 𝑆 is p.r.s. if and only if 𝜌 = 1 (see (2.23)); 𝑍 is r.s. if and only if 𝛼𝜌 = 1; 𝜌 = 1/2 for 𝛼 = 2, and that (C2) ⇔ 𝛼 = 2; (C3) ⇔ 𝛼 = 𝜌 = 1; (C4) ⇔ 𝛼 < 1 = 𝑝 ⇒ 𝜌 = 1 ∫
𝑥
1 An s.v. function of the form e 1
𝑓 (𝑡 ) d𝑡
with 𝑓 (𝑡) = 𝑜 (1/𝑡) is called normalised (see [8]).
(6.4)
108
6 The Two-Sided Exit Problem – General Case
(cf. Section 2.6); in particular, either one of (C1) to (C4) holds if and only if (𝛼 ∨ 1) 𝜌 = 1. We consider the case (𝛼 ∨ 1) 𝜌 < 1 in the next proposition, which combined with Theorem 6.1.1 shows that (6.1) holds if and only if (𝛼 ∨ 1) 𝜌 = 1. Proposition 6.1.3 Suppose (AS) holds with 0 < 𝛼 < 2. Let 𝛿 be a constant arbitrarily chosen so that 12 < 𝛿 < 1. (i) If 0 < (𝛼 ∨ 1) 𝜌 < 1 (equivalently, 𝑝 > 0 for 𝛼 > 1 and 0 < 𝜌 < 1 for 𝛼 ≤ 1), then there exist constants 𝜃 ∗ > 0 and 𝜃 ∗ < 1 such that for all sufficiently large 𝑅, 𝜃∗ ≤
𝑃 𝑥 (Λ𝑅 )𝑉d (𝑅) ≤ 𝜃∗ 𝑉d (𝑥)
for 0 ≤ 𝑥 ≤ 𝛿𝑅.
(6.5)
(ii) Let 𝛼 ≤ 1. In (6.5), one can choose 𝜃 ∗ and 𝜃 ∗ so that 𝜃 ∗ → 0 as 𝜌 → 0 and 𝜃 ∗ → 1 as 𝜌 → 1. [The statement “ 𝜃 ∗ → 0 as 𝜌 → 0” means that for any 𝜀 > 0 there exists a 𝛿 > 0 such that 𝜃 ∗ < 𝜀 for any ‘admissible’ 𝐹 with 0 < 𝜌 < 𝛿, and similarly for “ 𝜃 ∗ → 1 as 𝜌 → 1”.] If 𝜌 = 0, then 𝑃 𝑥 (Λ𝑅 )𝑉d (𝑅)/𝑉d (𝑥) → 0 ˆ uniformly for 0 ≤ 𝑥 < 𝛿𝑅, and 𝑉d (𝑥) ∼ 𝑥 𝛼 /ℓ(𝑥) for some s.v. function ℓˆ (for ∗ ˆ ˆ 𝛼 = 1, one can take ℓ for ℓ (see (6.49).) Remark 6.1.4 Suppose that (AS) holds and that 0 < 𝜌 < 1 if 𝛼 = 1 (so that 𝑃0 [𝑆 𝑛 /𝑐 𝑛 ∈ ·] converges to a non-degenerate stable law for appropriate constants 𝑐 𝑛 ; the results below do not depend on the choice of 𝑐 𝑛 ). Let 𝑌 = (𝑌 (𝑡))𝑡 ≥0 be a limit stable process with probabilities 𝑃𝑌𝜉 , 𝜉 ∈ R (i.e., the limit law for 𝑆 ⌊𝑛𝑡 ⌋ /𝑐 𝑛 as 𝑆0 /𝑐 𝑛 → 𝜉). Denote by 𝜎Δ𝑌 the first hitting time of 𝑌 to a ‘nice’ set Δ ⊂ R and by Λ𝑌𝑟 (𝑟 > 0) the event corresponding to Λ𝑅 for 𝑌 : Λ𝑌𝑟 = {𝜎𝑌(𝑟 ,∞) < 𝜎𝑌(−∞,0) }, and put 𝑄 𝑟 (𝜉) = 𝑃𝑌𝜉 (Λ𝑌𝑟 ) (0 < 𝜉 < 1). Rogozin [64] established the overshoot law (2.27) of Section 2.5 and from it derived an analytic expression of 𝑄 1 (𝜉) which, if 0 < 𝜌 < 1, must read ∫ 𝜉 1 ˆ 𝑄 1 (𝜉) = 𝑡 𝛼𝜌−1 (1 − 𝑡) 𝛼𝜌−1 d𝑡 (0 < 𝜉 < 1), (6.6) 𝐵(𝛼𝜌, 𝛼 𝜌) ˆ 0 where 𝜌ˆ = 1 − 𝜌 and 𝐵(𝑠, 𝑡) = Γ(𝑠)Γ(𝑡)/Γ(𝑠 + 𝑡), the beta function. (When 𝜌 𝜌ˆ = 0, 𝑄 1 (𝜉) = 1 or 0 (𝜉 > 0) according as 𝜌 = 1 or 0.) By the functional limit theorem, one deduces that for each 0 < 𝜉 < 1, 𝑃 𝜉 𝑐𝑛 (Λ𝑐𝑛 ) → 𝑃𝑌𝜉 (Λ𝑌1 ), as is noted in [64]. This convergence is uniform since 𝑃 𝑥 (Λ𝑅 ) is monotone in 𝑥 and 𝑄(𝜉) is continuous. The verification of (6.6), given in [64], seems to be unfinished or wrongly given, one more non-trivial step to evaluate a certain integral being skipped that results in the incorrect formula – 𝜌 and 𝜌ˆ are interchanged in (6.6). Here we provide a proof of (6.6), based on (2.27). Suggested by the proof of Lemma 1 of [40], we prove the following equivalent of (6.6): 1 𝑄 1+𝑐 (1) = 𝐵(𝛼𝜌, 𝛼 𝜌) ˆ
∫
1/(1+𝑐) ˆ 𝑡 𝛼𝜌−1 (1 − 𝑡) 𝛼𝜌−1 d𝑡
(𝑐 > 0),
(6.7)
0
an approach which makes the necessary computations plain to see. By (2.27) the conditional probability on the LHS is expressed as
6.1 Statements of Results
109
sin 𝜋𝛼𝜌 𝛼𝜌 𝑐 𝜋
∫
∞ −𝛼𝜌 𝑡 (𝑡
0
+ 1 + 𝑐) −𝛼𝜌ˆ d𝑡. 𝑡+𝑐
On changing the variable 𝑡 = 𝑐𝑠, this yields ∫ ∞ −𝛼𝜌 𝑠 sin 𝜋𝛼𝜌 (1 + (1 + 𝑠)𝑐) −𝛼𝜌ˆ 𝑌 𝑌 d𝑠. 𝑃0 (Λ1+𝑐 ) = 𝜋 1+𝑠 0 ∫∞ Using the identity 0 𝑠 𝛾−1 (1 + 𝑠) −𝜈 d𝑠 = 𝐵(𝛾, 𝜈 − 𝛾) (𝜈 > 𝛾 > 0) one can easily deduce that the above expression and the RHS of (6.7) are both equal to 1 for 𝑐 = 0 and their derivatives with respect to 𝑐 coincide, showing (6.7). Remark 6.1.5 If 𝛼𝜌 = 1, from Lemma 5.2.4 it follows that inf 𝑃0 [𝑍 (𝑅) ≤ 𝜂𝑅|Λ𝑅 ] → 1 for 𝜂 > 0.
0≤𝑥 ≤𝑅
If 𝜌ˆ = 𝛼 = 1, 𝑃 𝑥 [𝑍 (𝑅) ≤ 𝜂𝑅] → 0 for every 𝜂 > 0 while, as we shall see in Proposition 7.6.4, 𝑃 𝜉 𝑅 [𝑍 (𝑅)/𝑅 ∈ ·|Λ𝑅 ] converges to a proper probability law for every 𝜉 ∈ (0, 1) under some additional condition on 𝜇+ if 𝐹 is transient. Remark 6.1.6 For general random walks (not restricted to arithmetic ones), Kesten and Maller [47] gave an analytic equivalent for the r.w. 𝑆 to exit from symmetric intervals on the upper side with positive 𝑃0 -probability. Theorem 1 of [47] entails that lim sup 𝑃 𝑥 (Λ2𝑥 ) < 1 if and only if lim sup 𝑃0 [𝑆 𝑛 ≥ 0] < 1, provided that 𝜇− (𝑥) > 0 for all 𝑥 > 0, and each of these∫ two is equivalent 𝑥 to lim sup 𝑥→∞ 𝐴(𝑥) [𝜇− (𝑥)𝑐(𝑥)] −1/2 < ∞ (recall 𝑐(𝑥) = 0 𝑡 𝜇(𝑡) d𝑡). Under 𝜇+ (𝑥) > 0 for all 𝑥 > 0, this result may be paraphrased as follows: lim inf 𝑃 𝑥 (Λ2𝑥 ) > 0 if and only if 𝑥→∞
lim inf 𝑃0 [𝑆 𝑛 ≥ 0] > 0, 𝑛→∞
(6.8)
and this is the case if and only if lim inf 𝑥→∞ 𝐴(𝑥) [𝜇+ (𝑥)𝑐(𝑥)] −1/2 > −∞. As mentioned in Remark 6.1.4, the formula (6.6) yields the explicit asymptotic form of 𝑃 𝑥 (Λ𝑅 ) if 𝑥/𝑅 is bounded away from zero unless 𝛼 = 1 and 𝜌 𝜌ˆ = 0, but when 𝑥/𝑅 → 0 it says only 𝑃 𝑥 (Λ𝑅 ) → 0. One may especially be interested in the case 𝑥 = 0 (cf. Remark 8.3.7). By extending the idea of Bolthausen [10], Doney [19, Corollary 3] proved that if (AS) holds with 0 < 𝜌 < 1, then 𝑃0 max 𝑆 𝑛 ≥ 𝑏 𝑛 = 𝑃0 (Λ𝑏𝑛 ) ∼ 𝑐/𝑛, (6.9) 𝑛 𝑡 ] instead of 𝑉d ( 𝑥).
110
6 The Two-Sided Exit Problem – General Case
The corollary below gives the corresponding consequence of Proposition 6.1.3, which improves on the result of [19] mentioned right above by extending it to the case 𝜌 ∈ {0, 1} and providing information on how the limit depends on 𝜌. Suppose (AS) holds. We distinguish the following three cases: (𝛼 ∨ 1) 𝜌 = 1. Case I: Case II: 0 < (𝛼 ∨ 1) 𝜌 < 1. Case III: 𝜌 = 0. If 𝛼 = 2, I is always the case. For 1 < 𝛼 < 2, I or II is the case according as 𝑝 = 0 or 𝑝 > 0. If 𝛼 = 1, we have Cases I (III) if 𝑝 < 1/2 (𝑝 > 1/2), and all the cases I to III are possible if 𝑝 = 1/2. For 𝛼 < 1, I or III is the case according as 𝑝 = 1 or 𝑝 = 0, otherwise we have Case II. The following result is obtained as a corollary of Theorem 6.1.1 and Proposition 6.1.3, except for the case 𝛼 > 1 of its last assertion. Corollary 6.1.7 Suppose (AS) holds with 𝛼 < 2. Then (i) there exists 𝑐 := lim 𝑃0 (Λ𝑅 )𝑉d (𝑅)/𝑣◦ ; (ii) 𝑐 = 1 in Case I, 0 < 𝑐 < 1 in Case II and 𝑐 = 0 in Case III; (iii) 𝑐 → 0 as 𝜌 → 0 (necessarily 𝛼 ≤ 1), and 𝑐 → 1 as 𝜌 → 1/(𝛼 ∨ 1). Proof As to the proof of (i) and (ii), Cases I and III are covered by Theorem 6.1.1 and by the second half of Proposition 6.1.3(ii), respectively, and Case II by the above mentioned result of [19]3 combined with Proposition 6.1.3. The assertion (iii) follows from Proposition 6.1.3(ii) if 𝛼 ≤ 1. If 1 < 𝛼 < 2, we need to use a result from Chapter 8. There we shall obtain the asymptotic form of 𝑃 𝑥 [𝜎𝑦 < 𝑇 | Λ𝑅 ] as 𝑅 → ∞ for 𝑥 < 𝑦 in Proposition 8.3.1(i) (deduced from the estimates of 𝑢 a and 𝑣d (see Lemma 8.1.1) proved independently of the present arguments), which entails that as 𝑅 → ∞, 𝑃0 (Λ𝑅 )𝑉d (𝑅) 𝑃0 [𝜎𝑅 < 𝑇]𝑉d (𝑅) 𝛼−1 ≥ −→ . ◦ ◦ 𝑣 𝑣 𝛼(1 − 𝜌) Thus the second half of (iii) also follows for 𝛼 > 1.
□
Remark 6.1.8 For 𝛼 ≥ 1 (with an additional condition if 𝛼 = 2𝜌 = 1), we shall obtain an explicit asymptotic form of 𝑃 𝑥 (Λ𝑅 ) as 𝑥/𝑅 → 0, which will in particular give 𝑐 = [𝛼 𝜌𝐵(𝛼𝜌, ˆ 𝛼 𝜌)] ˆ −1 (see (8.69) of Proposition 8.6.4 in the case 0 < 𝛼𝜌 < 1). The rest of this chapter is organised as follows. Section 6.2 states several known facts and provides a lemma; these will be fundamental in the remainder of this treatise. A proof of Theorem 6.1.1 is given in Section 6.3. In Section 6.4, we suppose (C3) or (C4) and prove lemmas as to miscellaneous matters. In Section 6.5 we prove Proposition 6.1.3 by further showing several lemmas. 3 In [19] 𝐸𝑋 = 0 is assumed if 𝛼 = 1. However, by examining the arguments therein, one sees that this assumption is superfluous, so that (6.9) is valid under (AS) with 0 < 𝜌 < 1.
6.2 Upper Bounds of 𝑃𝑥 (Λ𝑅 ) and Partial Sums of 𝑔𝛺 ( 𝑥, 𝑦)
111
6.2 Upper Bounds of 𝑷𝒙 (𝚲 𝑹 ) and Partial Sums of 𝒈 𝜴 (𝒙, 𝒚) Let 𝑣d denote the renewal sequence for the weakly descending ladder height process: 𝑣d (𝑥) = 𝑉d (𝑥) − 𝑉d (𝑥 − 1) and 𝑣d (0) = 𝑣◦ . Recall 𝑉d (𝑥) is set to be zero for 𝑥 ≤ −1. Similarly, let 𝑈a (𝑥) and 𝑢 a (𝑥) be the renewal function and sequence for the strictly ascending ladder height process. The function 𝑉d is harmonic for the r.w. 𝑆 killed as it enters 𝛺 in the sense that ∞ ∑︁
𝑉d (𝑦) 𝑝(𝑦 − 𝑥) = 𝑉d (𝑥)
(𝑥 ≥ 0),
(6.10)
𝑦=0
where 𝑝(𝑥) = 𝑃[𝑋 = 𝑥] [71, Proposition 19.5], so that the process 𝑀𝑛 = 𝑉d (𝑆 𝑛∧𝑇 ) is a martingale under 𝑃 𝑥 for 𝑥 ≥ 0. By the optional stopping theorem one deduces 𝐸 𝑥 𝑉d (𝑆 𝜎 (𝑅,∞) ); Λ𝑅 = 𝑉d (𝑥). (6.11) Indeed, on passing to the limit in 𝐸 𝑥 [𝑀𝑛∧𝜎 (𝑅,∞) ] = 𝑉d (𝑥), Fatou’s lemma shows that the expectation on the LHS is less than or equal to 𝑉d (𝑥), which, in turn, shows that the martingale 𝑀𝑛∧𝜎(𝑅,∞) is uniformly integrable since it is bounded by the summable random variable 𝑉d (𝑅) ∨ 𝑀 𝜎(𝑅,∞) . Obviously the expectation in (6.11) is not less than 𝑉d (𝑅)𝑃 𝑥 (Λ𝑅 ), so that 𝑃 𝑥 (Λ𝑅 ) ≤ 𝑉d (𝑥)/𝑉d (𝑅). If either 𝑉d or
∫
𝑥
0
(6.12)
𝑃[− 𝑍ˆ > 𝑡] d𝑡 is regularly varying with index 𝛾, it follows that 𝑉d (𝑥) 𝑣◦ 𝑥
∫
𝑥
𝑃[− 𝑍ˆ > 𝑡] d𝑡 −→ 0
1 Γ(1 + 𝛾)Γ(2 − 𝛾)
(6.13)
(see Theorem A.2.3). For a set 𝐵 ⊂ R with 𝐵 ∩ Z ≠ ∅ we have defined by (2.1) the Green function 𝑔 𝐵 (𝑥, 𝑦) of the r.w. killed as it hits 𝐵. We restate Spitzer’s formula Eq(2.6)
𝑔𝛺 (𝑥, 𝑦) =
𝑥∧𝑦 ∑︁
𝑣d (𝑥 − 𝑘)𝑢 a (𝑦 − 𝑘)
for 𝑥, 𝑦 ≥ 0.
𝑘=0
It follows that for 𝑥 ≥ 1, 𝑃[ 𝑍ˆ = −𝑥] =
∞ ∑︁
𝑔𝛺 (0, 𝑦) 𝑝(−𝑥 − 𝑦) = 𝑣◦
𝑦=0
∞ ∑︁
𝑢 a (𝑦) 𝑝(−𝑥 − 𝑦)
(6.14)
𝑦=0
and, by the dual relation 𝑔 [1,∞) (0, −𝑦) = 𝑔ˆ 𝛺 (0, 𝑦) = 𝑣d (𝑦), 𝑃[𝑍 = 𝑥] =
∞ ∑︁ 𝑦=0
𝑔 [1,∞) (0, −𝑦) 𝑝(𝑥 + 𝑦) =
∞ ∑︁ 𝑦=0
𝑣d (𝑦) 𝑝(𝑥 + 𝑦).
(6.15)
112
6 The Two-Sided Exit Problem – General Case
Lemma 6.2.1 For 𝑥 ≥ 0 and 𝑅 ≥ 1, 𝑅 ∑︁
𝑔𝛺 (𝑥, 𝑦) < 𝑉d (𝑥)𝑈a (𝑅).
(6.16)
𝑦=0
Take a positive constant 𝛿 < 1 arbitrarily. Then 𝛿𝑥 ∑︁
𝑔𝛺 (𝑥 − 𝑤, 𝑥) ≤
𝑤=0
𝛿𝑥 ∑︁
𝑔𝛺 (𝑥, 𝑥 + 𝑤) ≤ 𝑉d (𝑥)𝑈a (𝛿𝑥).
(6.17)
𝑤=0
If 𝑈a is regularly varying, then there exists a constant 𝑐 > 0 such that for 0 ≤ 𝑥 ≤ 𝛿𝑅, 𝑅 ∑︁
𝑔𝛺 (𝑥, 𝑦) ≥ 𝑐 𝑉d (𝑥)𝑈a (𝑅),
𝑦=0
and if 𝑈a is s.v., then uniformly for 0 ≤ 𝑥 < 𝛿𝑅, 𝑅 ∑︁
𝑔𝛺 (𝑥, 𝑦) ∼ 𝑉d (𝑥)𝑈a (𝑅).
𝑦=0
Í𝑥 𝑣d (𝑘)𝑢 a (𝑤 + 𝑘) = 𝑔𝛺 (𝑥, 𝑥 + 𝑤) for 𝑤 ≥ 0. By Proof One has 𝑔𝛺 (𝑥 − 𝑤, 𝑥) ≤ 𝑘=0 the subadditivity of 𝑈a (· − 1) that entails 𝑈a (𝑘 + 𝛿𝑥) −𝑈a (𝑘 − 1) ≤ 𝑈a (𝛿𝑥), summing over 𝑤 gives (6.17). Split the sum on the LHS of (6.16) at 𝑥. By (2.6) the sum over 𝑦 > 𝑥 and that over 𝑦 ≤ 𝑥 are equal to, respectively, 𝑥 𝑅 ∑︁ ∑︁
𝑣d (𝑘)𝑢 a (𝑦 − 𝑥 + 𝑘) =
𝑥 𝑥 ∑︁ ∑︁
𝑣d (𝑘) [𝑈a (𝑅 − 𝑥 + 𝑘) − 𝑈a (𝑘)]
𝑘=0
𝑦=𝑥+1 𝑘=0
and
𝑥 ∑︁
𝑣d (𝑥 − 𝑦 + 𝑘)𝑢 a (𝑘) =
𝑥 ∑︁
[𝑉d (𝑥) − 𝑉d (𝑘 − 1)] 𝑢 a (𝑘).
𝑘=0
𝑘=0 𝑦=𝑘
Summing by parts yields 𝑥 ∑︁
[𝑉d (𝑥) − 𝑉d (𝑘 − 1)] 𝑢 a (𝑘) =
𝑥 ∑︁
𝑣d (𝑘)𝑈a (𝑘),
𝑘=0
𝑘=0
so that 𝑅 ∑︁ 𝑦=0
𝑔𝛺 (𝑥, 𝑦) =
𝑥 ∑︁
𝑣d (𝑘)𝑈a (𝑅 − 𝑥 + 𝑘),
𝑘=0
from which the assertions of the lemma other than (6.17) follow immediately.
□
6.3 Proof of Theorem 6.1.1
113
Lemma 6.2.2 Suppose 𝑆 is recurrent. (i) For each 𝑧 ∈ Z, as 𝑅 → ∞, 𝑃 𝑅+𝑧 [𝜎𝑅 < 𝑇] → 1, or, what is the same thing, 𝑔𝛺 (𝑅 + 𝑧, 𝑅)/𝑔𝛺 (𝑅, 𝑅) → 1. (ii) If 𝑢 a is asymptotically monotone, then 𝑔𝛺 (𝑥, 𝑅) is asymptotically monotone in 0 ≤ 𝑥 ≤ 𝑅 in the sense that 𝑔𝛺 (𝑥, 𝑅) ≤ 𝑔𝛺 (𝑥 ′, 𝑅){1 + 𝑜(1)} where 𝑜(1) → 0 as 𝑅 → ∞ uniformly for 0 ≤ 𝑥 ≤ 𝑥 ′ ≤ 𝑅. Similarly if 𝑣d is asymptotically monotone, then 𝑔𝛺 (𝑥, 𝑅) is asymptotically monotone in 𝑥 ≥ 𝑅 in the sense that 𝑔𝛺 (𝑥, 𝑅) ≥ 𝑔𝛺 (𝑥 ′, 𝑅){1 + 𝑜(1)} for 𝑅 ≤ 𝑥 < 𝑥 ′. Proof One can easily verify (i) by comparing the hitting time distributions of 𝜎𝑅 and 𝑇 under 𝑃 𝑅+𝑧 as 𝑅 → ∞. Let 𝑆 be recurrent and 𝑢 a asymptotically monotone. By (i) we may suppose 𝑅 − 𝑥 ′ > 𝑀 for an arbitrarily chosen constant 𝑀. For any 𝜀 > 0, choose 𝑀 so that 𝑢 a (𝑧 ′) < 𝑢 a (𝑧) (1 + 𝜀) if 𝑧 > 𝑧 ′ > 𝑀. Then for 𝑀 < 𝑥 ≤ 𝑅, 𝑔𝛺 (𝑥, 𝑅) =
𝑥 ∑︁
𝑣d (𝑘)𝑢 a (𝑅 − 𝑥 + 𝑘) ≤
𝑘=0
𝑥 ∑︁
𝑣d (𝑘)𝑢 a (𝑅 − 𝑥 ′ + 𝑘) (1 + 𝜀).
𝑘=0
The RHS being less than 𝑔𝛺 (𝑥 ′, 𝑅) (1 + 𝜀) shows the first half of (ii). The proof of the second half is similar. □
6.3 Proof of Theorem 6.1.1 The proof of Theorem 6.1.1 is given in two subsections, the first for case (C2) and the second for cases (C3) and (C4).
6.3.1 Case (C2) ∫∞ Suppose 𝐸 𝑋 = 0 and recall 𝜂± (𝑥) = 𝑥 𝑃[±𝑋 > 𝑡] d𝑡 and 𝜂 = 𝜂− +𝜂+ . It follows that if lim 𝑥𝜂+ (𝑥)/𝑚(𝑥) = 0, then 𝑍 is r.s. (Proposition 5.2.1), so that 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) is s.v. (see (2.5)). Thus (C2),Íwhich is equivalent to lim 𝑥𝜂(𝑥)/𝑚(𝑥) = 0, entails both 𝑦 𝑣d and 𝑢 a are s.v., so that 𝑘=0 𝑣d (𝑘)𝑢 a (𝑘) ∼ 𝑦𝑣d (𝑦)𝑢 a (𝑦) ∼ 𝑉d (𝑦)𝑢 a (𝑦) (𝑦 → ∞) and one can easily deduce that as 𝑅 → ∞, 𝑔𝛺 (𝑥, 𝑅) ∼
𝑥 ∑︁
𝑉d (𝑥) 𝑣d (𝑘) ∼ ∗ − 𝑥 + 𝑘) ℓ (𝑅)
ℓ ∗ (𝑅
uniformly for 0 ≤ 𝑥 ≤ 𝑅.
𝑘=0
This leads to the lower bound 𝑃 𝑥 (Λ𝑅−1 ) ≥ 𝑃 𝑥 [𝜎{𝑅 } < 𝑇] =
𝑔𝛺 (𝑥, 𝑅) 𝑉d (𝑥) {1 + 𝑜(1)}, = 𝑔𝛺 (𝑅, 𝑅) 𝑉d (𝑅)
which combined with (6.12) shows (6.1), as desired.
(6.18)
114
6 The Two-Sided Exit Problem – General Case
6.3.2 Cases (C3) and (C4) Each of conditions (C3) and (C4) implies that 𝑈a (𝑥)/𝑥 𝛼 (necessarily 0 < 𝛼 ≤ 1) is s.v. Let ℓ be an s.v. function such that 𝑈a (𝑥) ∼ 𝑥 𝛼 /ℓ(𝑥)
if 𝛼 < 1,
and ℓ = ℓ ∗
if 𝛼 = 1.
(6.19)
Here and in the sequel 𝛼 = 1 in case (C3). In cases (C1) and (C2) we have 𝑃 𝑥 [𝜎𝑅 < 𝑇] ∼ 𝑉d (𝑥)/𝑉d (𝑅) which, in conjunction with (6.12), entails (6.1), but in cases (C3) and (C4) this equivalence does not generally hold (see Proposition 8.3.1) and for the proof of (6.1) we shall make use of (6.11) not only for the upper bound but for the lower bound. Lemma 6.3.1 Suppose that either (C3) or (C4) holds. Then ℓ♯ (𝑥) is s.v., and (𝑎) 𝑃[− 𝑍ˆ ≥ 𝑥] ∼ 𝑣◦ ℓ♯ (𝑥)
and
(𝑏) 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥).
Proof We have only to show (a), in addition to ℓ♯ being s.v., (a) entailing (b) in view of (6.13) applied with 𝛼 𝜌ˆ = 0. If (C3) or (C4) holds, then 𝑉d is s.v. (cf. [83] in case (C4)), hence by (5.41) 𝑃[− 𝑍ˆ ≥ Í 𝑥] is also s.v. Because of this fact, together with the expression 𝑃[− 𝑍ˆ ≥ 𝑥] = 𝑣◦ ∞ 𝑦=0 𝑢 a (𝑦)𝐹 (−𝑥 − 𝑦), the assertion follows from a general result involving regularly varying functions that we give in Section A.1.2 (Lemma A.1.3) so as not to break the flow of the discourse. [Under (C3), Lemma □ 6.3.1 is a special case of Lemma 5.4.5, where a different proof is employed.] Lemma 6.3.2 (i) Let (C3) hold. Then for any 𝛿 > 0, 𝑃[𝑍 > 𝑥] ≥ 𝑉d (𝑥)𝜇+ (𝑥 + 𝛿𝑥){1 + 𝑜(1)}.
(6.20)
If in addition lim sup 𝜇+ (𝜆𝑥)/𝜇+ (𝑥) < 1 for some 𝜆 > 1, then 𝑃[𝑍 > 𝑥] ≤ 𝑉d (𝑥)𝜇+ (𝑥){1 + 𝑜(1)}.
(6.21)
(ii) If (C4) holds, then 𝑃[𝑍 > 𝑥] ∼ 𝑉d (𝑥)𝜇+ (𝑥). Proof First, we prove (ii). Suppose (C4) holds and let 𝜇+ (𝑥) ∼ 𝐿 + (𝑥)/𝑥 𝛼 with some s.v. function 𝐿 + . By (6.15) 𝑃[𝑍 > 𝑥] =
∞ ∑︁
𝑣d (𝑦)𝜇+ (𝑥 + 𝑦).
𝑦=0
The sum over 𝑦 ≤ 𝑥 is asymptotically equivalent to 𝑥 ∑︁ 𝑣d (𝑦)𝐿 + (𝑥 + 𝑦)
(𝑥 + 𝑦) 𝛼 𝑦=0
∼ 𝐿 + (𝑥)
𝑥 ∑︁ 𝑣d (𝑦) 𝑉d (𝑥)𝐿 + (𝑥) {1 + 𝑜(1)} , = 𝛼 (𝑥 + 𝑦) 𝑥𝛼 𝑦=0
(6.22)
6.3 Proof of Theorem 6.1.1
115
Í𝑥 since 𝑉d is s.v. owing to Lemma 6.3.1 and hence 𝑦=𝜀 𝑥 𝑣d (𝑦) = 𝑜(𝑉d (𝑥)) for any 𝜀 > 0. Choosing 𝐿 + so that 𝐿 +′ (𝑥) = 𝑜(𝐿 + (𝑥)/𝑥) we also see that the sum over 𝑦 > 𝑥 is 𝑜 (𝑉d (𝑥)𝜇+ (𝑥)). Thus the asserted equivalence follows. The inequality (6.20) is obtained by restricting the range of summation in (6.20) to 𝑦 ≤ 𝛿𝑥. (6.21) follows from 𝑃[𝑍 > 𝑥] ≤ 𝑉d (𝑥)𝜇+ (𝑥) +
∞ ∑︁
𝑣d (𝑦)𝜇+ (𝑦).
(6.23)
𝑦=𝑥
Indeed, under the assumption for (6.21), the sum of the infinite series is of a smaller order of magnitude than 𝑉d (𝑥)𝜇+ (𝑥), according to a general result, Lemma A.1.4 of the Appendix, concerning s.v. functions. □ Remark 6.3.3 Lemma 6.3.2 says that if the positive tail of 𝐹 satisfies a mild regularity condition∫(such as (2.16)), then 𝑃[𝑍 ∫ 𝑥 > 𝑥] ∼ 𝜇+ (𝑥)/ℓ♯ (𝑥) under (C3). As for the 𝑥 integrals 0 𝜇+ (𝑡)/ℓ♯ (𝑡) d𝑡 and 0 𝑃[𝑍 > 𝑡]ℓ♯ (𝑡) d𝑡, we have the corresponding relations without assuming any additional condition (see Lemmas 6.4.1 and 6.4.2). ∫𝑥 The slow variation of ℓ ∗ (𝑥) = 0 𝑃[𝑍 > 𝑡] d𝑡 implies 𝑃[𝑍 > 𝑥] = 𝑜(ℓ ∗ (𝑥)/𝑥), hence lim 𝑈a (𝑥)𝑃[𝑍 > 𝑥] =Í0 if 𝑍 is r.s. (the converse is true (see (A.24)). This combined with 𝑃[𝑍 > 𝑥] = ∞ 𝑦=0 𝑣d (𝑦)𝜇+ (𝑦 + 𝑥) ≥ 𝑉d (𝑥)𝜇+ (𝑥) shows 𝑍 is r.s. =⇒ lim 𝑉d (𝑥)𝑈a (𝑥)𝜇+ (𝑥) = 0.
(6.24)
Lemma 6.3.4 (i) Under (C3), 𝑉d (𝑥)𝑈a (𝑥)𝜇(𝑥) → 0.
(6.25)
(ii) Under (C4), 𝑉d (𝑥)𝑈a (𝑥)𝜇(𝑥) → (sin 𝜋𝛼)/𝜋𝛼. One may compare the results above with the known result under (AS) with 0 < 𝜌 < 1 stated in (6.55) below; (i) will be refined in Lemma 6.5.1(iii) under (AS) and in Remark 7.1.2(d) for p.r.s. r.w.’s. Proof Under (C4), Lemma 6.3.2(ii) says 𝑃[𝑍 > 𝑥] ∼ 𝑉d (𝑥)𝜇+ (𝑥) so that by Lemma 6.3.1 𝑃[𝑍 > 𝑥] is regularly varying with index −𝛼, which implies that 𝑃[𝑍 > 𝑥]𝑈a (𝑥) approaches (sin 𝜋𝛼)/𝜋𝛼 (cf. Theorem A.2.3). Hence we have (ii). As for (i), on recalling the definition of ℓ♯ , the slow variation of ℓ♯ entails 𝑥𝜇− (𝑥)/ℓ ∗ (𝑥) = 𝑜(ℓ♯ (𝑥)), so that 𝑉d (𝑥)𝑈a (𝑥)𝜇− (𝑥) ∼ 𝑉d (𝑥)𝑥𝜇− (𝑥)/ℓ ∗ (𝑥) ∼ 𝑥𝜇− (𝑥)/[ℓ ∗ (𝑥)ℓ♯ (𝑥)] → 0. Thus by (6.24) (i) follows.
(6.26) □
116
6 The Two-Sided Exit Problem – General Case
Remark 6.3.5 (6.25) holds also under (C2). Indeed, we know ∫ 𝑥 𝑉d (𝑥)𝑈a (𝑥) ∼ 𝑥 2 𝑡 𝜇(𝑡) d𝑡 0
(see Lemma 6.5.1(ii)), and hence (6.25) follows since 𝑥 2 𝜇(𝑥) = 𝑜
∫
𝑥
0
𝑡 𝜇(𝑡) d𝑡 .
Recall 𝑍 (𝑅) = 𝑆 𝜎 [𝑅+1,∞) − 𝑅, the overshoot beyond the level 𝑅. Lemma 6.3.6 (i) Under (C3), for each 𝜀 > 0 as 𝑅 → ∞, 𝐸 𝑥 𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) ≥ 𝜀𝑅, Λ𝑅 /𝑉d (𝑥) → 0 uniformly for 0 ≤ 𝑥 ≤ 𝑅. (6.27) (ii) Under (C4), as 𝑀 ∧ 𝑅 → ∞, 𝐸 𝑥 𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) ≥ 𝑀 𝑅, Λ𝑅 /𝑉d (𝑥) → 0 uniformly for 0 ≤ 𝑥 ≤ 𝑅. (6.28) Proof We first prove (ii). The expectation on the LHS of (6.28) is less than ∑︁
𝑅−1 ∑︁
𝑔𝛺 (𝑥, 𝑧) 𝑝(𝑤 − 𝑧)𝑉d (𝑤)
(6.29)
𝑤≥𝑅+𝑀 𝑅 𝑧=0
because of the trivial inequality 𝑔Z\[0,𝑅) (𝑥, 𝑧) < 𝑔𝛺 (𝑥, 𝑧). By Lemma 6.3.1 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥) and the derivative (1/ℓ♯ ) ′ (𝑥) = 𝑜(1/𝑥ℓ♯ (𝑥)). Hence for each 𝑀 > 0 there exists a constant 𝑅0 such that for all 𝑅 > 𝑅0 and 𝑧 < 𝑅, ∑︁
𝑝(𝑤 − 𝑧)𝑉d (𝑤) ≤
𝑤≥𝑅+𝑀 𝑅
=
∑︁ 𝜇+ (𝑤) 𝜇+ (𝑀 𝑅) + × 𝑜(1) ℓ♯ (𝑅){1 + 𝑜(1)} 𝑤≥𝑀 𝑅 𝑤ℓ♯ (𝑤)
(6.30)
𝜇+ (𝑅) 𝑉d (𝑅)𝜇+ (𝑅) {1 + 𝑜(1)} ≤ 2 , 𝑀 𝛼 ℓ♯ (𝑅) 𝑀𝛼
where we have employed summation by parts and the trivial bound 𝑤 < 𝑅 + 𝑤 for the inequality and (C4) for the equality. Owing to Lemma 6.3.4(ii) the last expression is at most 3/𝑀 𝛼𝑈a (𝑅) and hence on applying Lemma 6.2.1 the double sum in (6.29) is dominated by 3𝑉d (𝑥)/𝑀 𝛼 , showing (6.28). In case (C3), by Lemma 6.3.4(i) 𝜇+ (𝑥)/ℓ♯ (𝑥) ≪ 1/𝑈a (𝑥) ∼ ℓ ∗ (𝑥)/𝑥, and we infer that the second expression in (6.30) is 𝑜(1/𝑈a (𝑅)). Thus the sum on the left of (6.30) is 𝑜(1/𝑈a (𝑅)) for any 𝑀 > 0 and on returning to (6.29) we conclude (6.27) by Lemma 6.2.1 again. □ Proof (of Theorem 6.1.1 in cases (C3), (C4)) We have the upper bound (6.12) of 𝑃 𝑥 (Λ𝑅 ). To obtain the lower bound, we observe that for 𝑥 ≤ 𝑅/2, 𝑉d (𝑅) (1 − 𝜀)𝑉d (𝑥) ≥ 𝐸 𝑥 𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) < 𝑀 𝑅 Λ𝑅 ≥ . 1 + 𝑜(1) 𝑃 𝑥 (Λ𝑅 ){1 + 𝑜(1)}
6.4 Miscellaneous Lemmas Under (C3), (C4)
117
The left-hand inequality follows immediately from the slow variation of 𝑉d (see Lemma 6.3.1 in case (C4)), and the right-hand one from (6.11) and Lemma 6.3.6. The inequalities above yield the lower bound for 𝑥 ≤ 𝑅/2 since 𝜀 can be made arbitrarily small. Because of the slow variation of 𝑉d the result for ⌊𝑅/2⌋ ≤ 𝑥 < 𝑅 □ follows from 𝑃 ⌊𝑅/2⌋ (Λ𝑅 ) → 1. The proof is complete. Remark 6.3.7 (a) Theorem 6.1.1 together with Lemma 6.3.6 shows that, uniformly for 0 ≤ 𝑥 ≤ 𝑅, if (C3) holds, 𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅 | Λ𝑅 ] → 0
as 𝑅 → ∞ for each 𝜀 > 0;
if (C4) holds, 𝑃 𝑥 [𝑍 (𝑅) > 𝑀 𝑅 | Λ𝑅 ] → 0 as 𝑅 ∧ 𝑀 → ∞.
(6.31)
(b) Let (AS) hold with 𝛼𝜌 = 1. Then (6.27) holds, since 𝑉d (𝑥)𝑈a (𝑥)𝜇+ (𝑥) → 0 (see Remark 6.3.5), entailing 𝑉d ((1 + 𝜀)𝑅)𝜇+ (𝜀(𝑅)) = 𝑜 (1/𝑈a (𝑅)), and the same proof given under (C3) applies. The first formula of (6.31) also holds.
6.4 Miscellaneous Lemmas Under (C3), (C4) In this section, we suppose (C3) or (C4) to hold and provide several lemmas for later usage, not only in this chapter but also in Chapters 6 and 7. ∫𝑥 Lemma 6.4.1 Under (C3), ℓ ∗ (𝑥) ∼ 0 𝑉d (𝑡)𝜇+ (𝑡) d𝑡. Proof Summation by parts yields 𝑃[𝑍 > 𝑠] =
∞ ∑︁
𝑣d (𝑦)𝜇+ (𝑦 + 𝑠) =
𝑦=0
Since
∫ 0
𝑥
∞ ∑︁
𝑉d (𝑦) 𝑝(𝑦 + 1 + ⌊𝑠⌋).
𝑦=0
𝑝(𝑦 + 1 + ⌊𝑠⌋) d𝑠 = 𝐹 (𝑦 + 1 + 𝑥) − 𝐹 (𝑦) + 𝑂 ( 𝑝( ⌊𝑥⌋ + 𝑦 + 1)), putting ∫
∞
𝑉d (𝑡) [𝐹 (𝑡 + 𝑥) − 𝐹 (𝑡)] d𝑡
˜ ℓ(𝑥) := 0
and noting
Í
𝑦 ≥0 𝑉d (𝑦) 𝑝(𝑦) ∗
< ∞ we see that ∫
𝑥
˜ + 𝑜(1). 𝑃[𝑍 > 𝑠] d𝑠 = ℓ(𝑥)
ℓ (𝑥) = 0
After replacing 𝐹 (𝑡 + 𝑥) − 𝐹 (𝑡) by 𝜇+ (𝑡) − 𝜇+ (𝑡 + 𝑥), a rearrangement of terms and a change of the variable yield ∫ ∞ ∫ 𝑥 ˜ (𝑉d (𝑡 + 𝑥) − 𝑉d (𝑡)) 𝜇+ (𝑡 + 𝑥) d𝑡. 𝑉d (𝑡)𝜇+ (𝑡) d𝑡 + (6.32) ℓ(𝑥) = 0
0
˜ we deduce Similarly, taking the difference for the integral defining ℓ(𝑥),
118
6 The Two-Sided Exit Problem – General Case
˜ 1 𝑥) ˜ − ℓ( ℓ(𝑥) 2 ∫ ∞ = 𝑉d (𝑡) 𝜇+ (𝑡 + 12 𝑥) − 𝜇+ (𝑡 + 𝑥) d𝑡
(6.33)
0
∫
∫
𝑥/2
𝑉d (𝑡)𝜇+ (𝑡 + 21 𝑥) d𝑡 +
= 0
∞
0
𝑉d (𝑡 + 12 𝑥) − 𝑉d (𝑡) 𝜇+ (𝑡 + 𝑥) d𝑡.
Since the integrals in the last expression of (6.33) are positive and ℓ˜ is s.v., both must ˜ be of smaller order than ℓ(𝑥), in particular the ∫second integral restricted to 𝑡 ≥ 12 𝑥 is ∞ ˜ 𝑜( ℓ(𝑥)), hence, by the monotonicity of 𝜇+ (𝑡), 𝑥 (𝑉d (𝑡 + 𝑥) − 𝑉d (𝑡)) 𝜇+ (𝑡 + 𝑥) d𝑡 = ∫𝑥 ˜ 𝑜( ℓ(𝑥)). We also have 0 𝑉d (𝑡 +𝑥)𝜇+ (𝑡 +𝑥) d𝑡 ≤ 𝑥𝑉d (2𝑥)𝜇+ (𝑥) ≪ 𝑥/𝑈a (𝑥) ∼ ℓ ∗ (𝑥). Thus ∫ ∞ (𝑉d (𝑡 + 𝑥) − 𝑉d (𝑡)) 𝜇+ (𝑡 + 𝑥) d𝑡 = 𝑜(ℓ ∗ (𝑥)). 0
Returning to (6.32) we can now conclude
∫ 0
𝑥
𝑉d (𝑡)𝜇+ (𝑡) d𝑡 ∼ ℓ ∗ (𝑥) as desired.
Lemma 6.4.2 Suppose (C3) holds. Then ∫ 𝑥 ∫ ℓ♯ (𝑡)𝑃[𝑍 > 𝑡] d𝑡 ∼ 0
□
𝑥
𝜇+ (𝑡) d𝑡;
(6.34)
0
and if in addition 𝐸 𝑋 = 0, then ∫ ∞ ℓ♯ (𝑡)𝑃[𝑍 > 𝑡] d𝑡 = 𝜂+ (𝑥){1 + 𝑜(1)} + 𝑜 (𝑥𝜇+ (𝑥)) .
(6.35)
𝑥
[In case (C4), (6.34) holds, following immediately from Lemma 6.3.2(ii).] Proof Denote by 𝐽 (𝑥) the integral on the LHS of (6.34). We first derive the lower bound for it. By (6.20) and lim ℓ♯ (𝑥)𝑉d (𝑥) → 1, for any 𝜆 > 1, 𝐽 (𝑥) ≥ 1 + 𝑜(1)
∫
𝑥
ℓ♯ (𝑡)𝑉d (𝑡)𝜇+ (𝜆𝑡) d𝑡 = 0
1 𝜆
∫
𝜆𝑥
𝜇+ (𝑡) d𝑡{1 + 𝑜(1)}
(𝑥 → ∞).
0
∫𝑥 Since 𝜆 may be arbitrarily close to unity, we have 𝐽 (𝑥) ≥ 0 𝜇+ (𝑡) d𝑡{1 + 𝑜(1)}. According to Lemma A.1.5 given in the Appendix, by the slow variation of 𝑉d and the monotonicity of 𝜇+ it follows that ∞ ∑︁ 𝑦=𝑥
∫ 𝑣d (𝑦)𝜇+ (𝑦) = 𝑜 𝑉d (𝑥)𝜇+ (𝑥) + 𝑥
∞
d𝑡 . 𝑉d (𝑡)𝜇+ (𝑡) 𝑡
In view of (6.23), it therefore suffices for the upper bound to show that ∫ 𝑥 ∫ 𝑥 ∫ ∞ d𝑡 𝜇+ (𝑡) d𝑡. ℓ♯ (𝑠) d𝑠 𝑉d (𝑡)𝜇+ (𝑡) ≤ 𝐶 𝑡 0 0 𝑠
(6.36)
6.4 Miscellaneous Lemmas Under (C3), (C4)
119
Integrating by parts we may write the LHS as ∫ ∞ ∫ 𝑥 d𝑡 𝑉d (𝑡)𝜇+ (𝑡) + 𝑥ℓ♯ (𝑥) 𝜇+ (𝑡) d𝑡 {1 + 𝑜(1)}. 𝑡 𝑥 0 Integrating by parts again and using Lemma 6.4.1, one∫ can easily deduce that the ∞ first integral in the large square brackets is less than 𝑥 ℓ ∗ (𝑡)𝑡 −2 {1 + 𝑜(1)} d𝑡 ∼ ∫𝑥 ℓ ∗ (𝑥)/𝑥 ≤ 𝑉d (𝑥) 0 𝜇+ (𝑡) d𝑡 × 𝑂 (1/𝑥). Thus we obtain (6.36), hence (6.34). Let 𝐸 𝑋 = 0. One can analogously show (6.35). Indeed, observing that ∫ 1 ∞ 1 𝜇+ (𝑡) d𝑡 ≥ [𝜂+ (𝑥) − (𝜆 − 1)𝑥𝜇+ (𝑥)] 𝜆 𝜆𝑥 𝜆 one can readily obtain the lower bound, while the proof of the upper bound is only simplified. □ Lemma 6.4.3 If (C3) holds, then as 𝑡 → ∞, ∫ 𝑡 ∫ 𝑡 ℓ ∗ (𝑡)ℓ♯ (𝑡) = − 𝜇− (𝑠) d𝑠 + 𝑃[𝑍 > 𝑠]ℓ♯ (𝑠) d𝑠 0 0 ∫ 𝑡 = 𝐴(𝑡) + 𝜇+ (𝑠) d𝑠 × 𝑜(1)
(6.37)
0
and if 𝐸 𝑋 = 0, both 𝜂− and 𝜂 are s.v. and ∫ ∞ ∗ ℓ (𝑡)ℓ♯ (𝑡) = 𝜇− (𝑠) − 𝑃[𝑍 > 𝑠]ℓ♯ (𝑠) d𝑠 = 𝐴(𝑡){1 + 𝑜(1)} + 𝑜(𝜂+ (𝑡)). 𝑡
(6.38) ∫𝑡
Proof Recall ℓ ∗ (𝑡) = 0 𝑃[𝑍 > 𝑠] d𝑠 and note that as 𝑡 ↓ 0, ℓ ∗ (𝑡) = 𝑂 (𝑡) and ℓ♯ (𝑡) ∼ 𝑂 (log 1/𝑡). Then integration by parts leads to ∫
𝑡
𝑃[𝑍 > 𝑠]ℓ♯ (𝑠) d𝑠 = ℓ ∗ (𝑡)ℓ♯ (𝑡) +
0
∫
𝑡
𝜇− (𝑠) d𝑠, 0
hence the first equality of (6.37). The second one follows from (6.34). Let 𝐸 𝑋 = 0. Then, ℓ ∗ (𝑡)ℓ♯ (𝑡) → 0 because of (6.37), and the first equality of (6.38) is deduced from the identity [ℓ ∗ (𝑡)ℓ♯ (𝑡)] ′ = 𝑃[𝑍 > 𝑡]ℓ♯ (𝑡) − 𝜇− (𝑡)
(6.39) (valid for almost every 𝑡 > 1). For the second one, noting that 𝑥𝜇(𝑥) = 𝑜 ℓ♯ (𝑥)ℓ ∗ (𝑥) by virtue of Lemma 6.3.4(i), we have only to apply (6.35). Lemma 6.3.4(i) together with (b) of Lemma 6.3.1 entails that under (C3) 𝑥𝜇(𝑥) ≪ ℓ ∗ (𝑥)ℓ♯ (𝑥) ≤ 𝜂− (𝑥). Thus both 𝜂− and 𝜂 are s.v.
□
120
6 The Two-Sided Exit Problem – General Case
Remark 6.4.4 Suppose that 𝑆 is p.r.s. and 𝜇+ (𝑥)/𝜇(𝑥) → 𝑝, and let 𝑞 = 1 − 𝑝. Then combining Lemmas 6.3.1 and 6.4.3 shows that for 𝑝 ≠ 1/2, 𝐴(𝑥) ∼ ℓ♯ (𝑥)ℓ ∗ (𝑥) ∼ 𝑥/[𝑈a (𝑥)𝑉d (𝑥)]. (See Section (necessarily 𝑝 ≤ 1/2), we have ∫ 𝑥 7.7 formore details.) If 𝐹 is ∫recurrent 𝑥 𝑎(𝑥) ∼ 𝑥 𝜇− (𝑡) d𝑡 𝐴2 (𝑡) and 𝑎(−𝑥) = 𝑥 𝜇+ (𝑡) d𝑡 𝐴2 (𝑡) + 𝑜(𝑎(𝑥)) as 𝑥 → ∞ 0 0 according to Corollary 4.1.2. Hence ∫ 𝑥 1 𝜇− (𝑡) − 𝜇+ (𝑡) 1 = d𝑡 + ∼ (1 − 𝑞 −1 𝑝)𝑎(𝑥) 2 (𝑡) 𝐴(𝑥) 𝐴(𝑥 𝐴 0) 𝑥0 (with the usual interpretation if 𝑝 = 𝑞), so that 𝑈a (𝑥)𝑉d (𝑥)/𝑥 ∼ (1 − 𝑞 −1 𝑝)𝑎(𝑥) (𝑝 < 𝑞). Lemma 6.4.5 If 𝐹 is p.r.s., 𝐸 𝑋 = 0, lim inf 𝜇− (𝑥)/𝜇(𝑥) > 0, and either 𝜇+ is regularly varying at infinity with index −1 or lim sup 𝑎(−𝑥)/𝑎(𝑥) < 1, then ∫ 𝑥 𝜇− (𝑡) d𝑡 ∼ 𝑔𝛺 (𝑥, 𝑥). ∗ (𝑡)ℓ (𝑡)] 2 [ℓ 1 ♯ Proof Suppose 𝑥𝜇+ (𝑥) is s.v. [See Remark 6.4.6(a) below for the other case.] Then [ℓ ∗ (𝑡)] ′ = 𝑃[𝑍 > 𝑡] ∼ 𝜇+ (𝑡)/ℓ♯ (𝑡) (Remark 6.3.3) and we observe that 𝑥 ∑︁ 𝑣d (𝑘) {1 + 𝑜 𝑘 (1)} ∗ (𝑘) ℓ 𝑘=0 𝑘=0 𝑥 ∑︁ 1 𝑉d (𝑥) 1 {1 + 𝑜(1)} + − {1 + 𝑜 𝑘 (1)} = ∗ 𝑉d (𝑘) ∗ ℓ (𝑥 + 1) ℓ (𝑘) ℓ ∗ (𝑘 + 1) 𝑘=0
𝑔𝛺 (𝑥, 𝑥) =
𝑥 ∑︁
𝑣d (𝑘)𝑢 a (𝑘) =
𝑥
=
∑︁ 𝜇+ (𝑘) 1 {1 + 𝑜(1)} + {1 + 𝑜 𝑘 (1)}, 2 ˜ ˜ 𝐴(𝑥) 𝑘=1 [ 𝐴(𝑘)]
˜ = ℓ ∗ (𝑡)ℓ♯ (𝑡) and 𝑜 𝑡 (1) → 0 as 𝑡 → ∞. By 𝑃[𝑍 > 𝑡]ℓ♯ (𝑡) ∼ 𝜇+ (𝑡) again where 𝐴(𝑡) and (6.39) it follows that ∫ 𝑥 ∫ 𝑥 𝜇− (𝑡) − 𝜇+ (𝑡) 1 𝜇+ (𝑡) × 𝑜 𝑡 (1) 1 = (6.40) d𝑡 + d𝑡 + 2 2 ˜ ˜ ˜ ˜ 𝐴(𝑥) [ 𝐴(𝑡)] [ 𝐴(𝑡)] 𝐴(1) 1 1 ˜ and hence, on absorbing 1/ 𝐴(1) into the second integral on the RHS, ∫ 𝑥 ∫ 𝑥 𝜇− (𝑡) − 𝜇+ (𝑡) 𝜇+ (𝑡) 𝑔𝛺 (𝑥, 𝑥) = d𝑡{1 + 𝑜(1)} + {1 + 𝑜 𝑡 (1)} d𝑡, (6.41) 2 2 ˜ ˜ [ 𝐴(𝑡)] 1 1 [ 𝐴(𝑡)] which leads to the equivalence of the lemma.
□
Remark 6.4.6 (a) Let 𝐹 be recurrent and p.r.s. We shall see that 𝑔𝛺 (𝑥, 𝑥) ∼ 𝑎(𝑥) (𝑥 → ∞) in Chapter 7 (see Theorem 7.1.1), so that the equivalence in Lemma 6.4.5,
6.4 Miscellaneous Lemmas Under (C3), (C4)
121
which we do not use in this treatise, gives ∫ 𝑥 ∫ 𝑥 𝜇− (𝑡) d𝑡 𝜇− (𝑡) d𝑡 . ∼ 𝑎(𝑥) ∼ ∗ (𝑡)ℓ (𝑡)] 2 [ℓ 𝐴2 (𝑡) 1 𝑥0 ♯ These asymptotic relations are of interest in the critical case lim 𝑎(−𝑥)/𝑎(𝑥) = 1, when we have little information about the behaviour of 𝐴(𝑥)/ℓ ∗ (𝑥)ℓ♯ (𝑥). (b) Under (C3) we have 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥) and [1/ℓ♯ ] ′ (𝑡) = 𝜇− (𝑡)/[ℓ♯2 (𝑡)ℓ ∗ (𝑡)], so ∫𝑥 that 𝑉d (𝑥) ∼ 1 ℓ ∗ (𝑡)𝜇− (𝑡) [ℓ♯ (𝑡)ℓ ∗ (𝑡)] −2 d𝑡. Combined with the first equivalence in (a) this shows (see also (8.11) if 𝜌 > 0) that under the assumption of Lemma 6.4.5, 𝑎(𝑥)/𝑉d (𝑥) −→ 1/𝐸 𝑍
and
𝑉d (𝑥) ≤ ℓ ∗ (𝑥)𝑎(𝑥){1 + 𝑜(1)}.
Lemma 6.4.7 If either (C3) or (C4) holds, then for any 𝛿 < 1, as 𝑅 → ∞ 𝑈a (𝑥) uniformly for 0 ≤ 𝑥 < 𝛿𝑅. 1 − 𝑃 𝑅−𝑥 (Λ𝑅 ) = 𝑜 𝑈a (𝑅) Proof We prove the dual assertion. Suppose 𝑈a and 𝑣d are s.v. We show 𝑃 𝑥 (Λ𝑅 )𝑉d (𝑅)/𝑉d (𝑥) → 0
uniformly for 0 ≤ 𝑥 < 𝛿𝑅.
(6.42)
Since for 31 𝑅 ≤ 𝑥 < 𝛿𝑅, 𝑉d (𝑅)/𝑉d (𝑥) is bounded and 𝑃 𝑥 (Λ𝑅 ) → 0 as 𝑅 → ∞, we have only to consider the case 𝑥 < 13 𝑅. Putting 𝑅 ′ = ⌊𝑅/3⌋ and 𝑅 ′′ = 2𝑅 ′, we have
where 𝐽1 (𝑅) = 𝑃 𝑥 Obviously 𝐽1 (𝑅) ≤
𝑃 𝑥 (Λ𝑅 ) = 𝐽1 (𝑅) + 𝐽2 (𝑅), 𝑆 𝜎 (𝑅′ ,∞) > 𝑅 ′′, Λ𝑅 and 𝐽2 (𝑅) = 𝑃 𝑥 𝑆 𝜎 (𝑅′ ,∞) ≤ 𝑅 ′′, Λ𝑅 .
𝑅′ ∑︁
𝑔Z\[0,𝑅′ ] (𝑥, 𝑧)𝑃[𝑋 > 𝑅 ′′ − 𝑧] ≤ 𝜇+ (𝑅 ′)
𝑧=0
𝑅′ ∑︁
𝑔𝛺 (𝑥, 𝑧).
𝑧=0
The last sum is at most 𝑉d (𝑥)𝑈a (𝑅) by Lemma 6.2.1. Using Lemma 6.3.4(i) we accordingly infer that 𝑉d (𝑅) ≤ 𝑉d (𝑅)𝑈a (𝑅)𝜇+ (𝑅 ′) −→ 0. 𝑉d (𝑥) Í ′′ As for 𝐽2 , we decompose 𝐽2 (𝑅) = 𝑃 𝑥 (Λ𝑅′ ) 𝑅 𝑧=𝑅′ 𝑃 𝑥 𝑆 [𝑅′ ,∞) = 𝑧 Λ 𝑅′ 𝑃 𝑧 (Λ 𝑅 ). On applying the upper bound (6.12) to 𝑃 𝑥 (Λ𝑅′ ) it then follows that as 𝑅 → ∞ 𝐽1 (𝑅)
𝑅′′ 𝑉d (𝑅) 𝑉d (𝑅) ∑︁ ≤ 𝐽2 (𝑅) 𝑃 𝑥 𝑆 [𝑅′ ,∞) = 𝑧 Λ𝑅′ 𝑃 𝑧 (Λ𝑅 ) −→ 0, ′ 𝑉d (𝑥) 𝑉d (𝑅 ) 𝑧=𝑅′
for 𝑃 𝑧 (Λ𝑅 ) → 0 uniformly for 𝑅 ′ ≤ 𝑧 ≤ 𝑅 ′′. Thus (6.42) is verified.
(6.43)
122
6 The Two-Sided Exit Problem – General Case
ˆ In case (C4), 𝑉d (𝑥) ∼ 𝑥 𝛼 /ℓ(𝑥) and 𝑈a (𝑥) ∼ 1/ℓˆ♯ (𝑥) (see (6.48) and (6.49)) and we can proceed as above. □ The following lemma presents some of the results obtained above in a neat form under condition (AS). Lemma 6.4.8 Suppose (AS) holds with 𝛼 = 1. (i) The following are equivalent (a) 𝑃[− 𝑍ˆ ≥ 𝑥] is s.v.; (b) 𝑍 is r.s. ; (c) 𝜌 = 1, and each of (a) to (c) above implies 𝑃[− 𝑍ˆ ≥ 𝑥] ∼ 𝑣◦ /𝑉d (𝑥) ∼ 𝑣◦ ℓ♯ (𝑥); in particular these two asymptotic equivalences hold if either 𝐸 𝑋 = 0, 𝑝 < 1/2 or 𝐸 |𝑋 | = ∞, 𝑝 > 1/2. (ii) If either of (a) to (c) of (i) holds, then for each 𝜀 > 0; and sup 𝑥> (1+𝜀) 𝑅 𝑃 𝑥 [𝜎{𝑅 } < 𝑇] → 0 𝑃[𝑍 > 𝑥] ∼ 𝑉d (𝑥)𝜇+ (𝑥) if 𝑝 > 0, 𝑃[𝑍 > 𝑥] = 𝑜 (𝑉d (𝑥)𝜇− (𝑥)) if 𝑝 = 0.
(6.44)
Proof Each of (a) and (b) follows from (c) as noted in Remark 6.1.2 (since 𝑆 is p.r.s. under (AS) with 𝜌 = 1; cf. (2.23)). The converses follow from Theorem A.3.2. The other assertion of (i) is contained in Lemma 6.3.1. From the slow variation of 𝑉d it follows that lim 𝑃 𝑦 [𝑆 𝜎 (−∞,0] < −𝑀 𝑦] = 1 𝑦→∞
for each 𝑀 > 1 (the dual of the third case in (A.16)), which entails the first relation of (ii). The second one follows immediately from Lemma 6.3.2(i) if 𝑝 > 0, since one can take 𝛿 = 0Í in (6.20) under (AS). Í In the case 𝑝 = 0, examine the proof of (6.21) by noting that 𝑦 𝑣d (𝑦)𝜇+ (𝑦) ≪ 𝑦 𝑣d (𝑦)𝜇− (𝑦). □ Under (C3) one can derive the exact asymptotic form of 𝑣d if the negative tail of 𝐹 satisfies the following regularity conditions: (a) ∃ 𝜆 > 1, lim sup 𝑥→∞
𝜇− (𝜆𝑥) < 1 and 𝜇− (𝑥)
(b) lim sup 𝑥→∞
𝑝(−𝑥)𝑥 < ∞.4 𝜇− (𝑥)
The next result, not used in this chapter, plays a crucial role in Section 7.5 (where 𝑆 is transient). Its proof is based on an extension – given in Section A.2.1.2 – of Theorem 1.1 of Nagaev [54]. 4 (a) implies that there exist positive constants 𝐶, 𝜃 and 𝑥1 such that 𝜇− ( 𝑦)/𝜇− ( 𝑥) ≤ 𝐶 ( 𝑦/𝑥) −𝜃 (Obviously the converse is true.) If lim sup 𝑥→∞ and 𝐶 = 1/ 𝛿.
whenever 𝜇 (𝜆𝑥) 𝜇 ( 𝑥)
𝑦 > 𝑥 > 𝑥1 .
< 𝛿 < 1, one may take 𝜃 = −(log 𝛿)/log 𝜆
6.4 Miscellaneous Lemmas Under (C3), (C4)
123
Lemma 6.4.9 If (a) and (b) above hold in addition to (C3), then (i) 𝑃[− 𝑍ˆ = 𝑥]/𝑣◦ ∼ 𝜇− (𝑥)/ℓ ∗ (𝑥); and (ii) 𝑣d (𝑥) ∼ [𝑉d (𝑥)] 2 𝑃[− 𝑍ˆ = 𝑥]/𝑣◦ ∼ [𝑉d (𝑥)] 2 𝑢 a (𝑥)𝜇− (𝑥). [If 𝐸 𝑋 = 0 and the positive and negative tails of 𝐹 are not balanced, condition (b) can be replaced by much weaker one in order for (ii) to be valid (see Theorem 7.1.4).] Proof After changing the variable of summation, write (6.14) in the form 𝑃[− 𝑍ˆ = 𝑥]/𝑣◦ =
∞ ∑︁
𝑢 a (𝑧 − 𝑥) 𝑝(−𝑧) =
𝑧=𝑥
(1+𝜀) ∑︁ 𝑥 𝑧=𝑥
+
∑︁ 𝑧> (1+𝜀) 𝑥
= 𝐼∗ (𝑥) + 𝐼 ∗ (𝑥)
(say).
Since 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) under (C3), for each 𝜀 ∈ (0, 21 ), we have ∗
𝐼 (𝑥) =
𝑥/𝜀 ∑︁ 𝑧=𝑥+𝜀 𝑥
𝜇− (𝑥/𝜀) 𝜇− (𝑥 + 𝜀𝑥) + 𝑂 (𝜇− (𝑥/𝜀)) 𝑝(−𝑧){1 + 𝑜(1)} +𝑂 = , ℓ ∗ (𝑧 − 𝑥) ℓ ∗ (𝑥) ℓ ∗ (𝑥){1 + 𝑜(1)}
while (b) entails that 𝐼∗ (𝑥) ≤ 𝐶𝑈a (𝜀𝑥)𝜇− (𝑥)/𝑥 ≤ 𝐶 ′ 𝜀𝜇− (𝑥)/ℓ ∗ (𝑥) and 𝜇− (𝑥 + 𝜀𝑥) = 𝜇− (𝑥){1 + 𝑂 (𝜀)}. Hence by (a) we can conclude (i). Using the ‘continuity’ of 𝜇− (𝑥) noted right above, one deduces from (i) that lim sup sup 𝑃[− 𝑍ˆ = 𝑦] − 𝑃[− 𝑍ˆ = 𝑥] 𝑃[− 𝑍ˆ = 𝑥] → 0 as 𝛿 ↑ 1. (6.45) 𝑥→∞
𝛿 𝑥 1 so that 𝐼 ∗ (𝑥) ≥ 𝑢 a (𝜆𝑥) [𝜇− (𝑥 + 𝜀𝑥) − 𝜇− (𝑥 + 𝜆𝑥)] ≥ 2−1 𝜆 𝛼−1 (𝑢 a 𝜇− ) (𝑥 + 𝜀𝑥), hence 𝐼∗ (𝑥) + 𝐼∗ (𝑦) ≤ 𝐶 ′ 𝜀 𝛼 𝑥 𝛼−1 𝜇− (𝑥) ≤ 𝐶𝜀 𝛼 𝐼 ∗ (𝑥). Now we obtain (6.45), and applying Lemma A.2.2 gives the first half of the lemma. Let 𝜇− vary regularly with index −𝛼. Then as 𝑥 → ∞, the measure 𝜁 𝑥 (d𝑡) := −𝑑𝑡 𝜇− (𝑥(1 + 𝑡))/𝜇− (𝑥) weakly converges to 𝛼(1 + 𝑡) −𝛼−1 d𝑡 on the interval [𝜀, 1/𝜀] for each 𝜀 > 0. Hence 𝑥/𝜀 ∑︁
∫
∫
1/𝜀
𝜀
𝜀
𝑦=𝜀 𝑥
1/𝜀
𝑢 a (𝑥𝑡)𝜁 𝑥 (d𝑡) ∼ 𝑢 a (𝑥)𝜇− (𝑥)
𝑢 a (𝑦) 𝑝(−𝑥 − 𝑦) ∼
𝛼𝑡 −1+𝛼 d𝑡 . (1 + 𝑡) 1+𝛼
As 𝜀 ↓ 0, the last integral tends to 𝐶 𝛼,𝛽 , and the sums over 𝑦 < 𝜀𝑥 and 𝑦 > 𝑥/𝜀 become negligible because of (b). Thus the second half of the lemma follows. □
6.5 Some Properties of the Renewal Functions 𝑼a and 𝑽d Here we make a slight digression from the main topic of the present discussion to collect some facts regarding 𝑈a and 𝑉d that are used in the next section and Chapter 8 as well. Throughout this and the next section, we suppose (AS) to hold. Let 𝑝, 𝑞 and 𝐿 be as in (2.16). Recall that 𝛼𝜌 ranges exactly over the interval [(𝛼 − 1) ∨ 0, 𝛼 ∧ 1]. It is known that 𝑃[𝑍 > 𝑥] varies regularly at infinity with index −𝛼𝜌 if 𝛼𝜌 < 1 (which, for 1 < 𝛼 < 2, is equivalent to 𝑝 > 0), and 𝑍 is r.s. if 𝛼𝜌 = 1 (see [63, Theorem 9] for 𝜌 > 0 and Lemma 6.3.1 for 𝜌 = 0, see also Theorems A.2.3 and A.3.2); and in either case ∫𝑥 𝑈a (𝑥) 0 𝑃[𝑍 > 𝑡] d𝑡 1 −→ (6.46) 𝑥 Γ(1 + 𝛼𝜌)Γ(2 − 𝛼𝜌) (the dual of (6.13)). We choose an s.v. function ℓ(𝑥) so that ℓ(𝑥) = ℓ ∗ (𝑥)
if 𝛼𝜌 = 1,
sin 𝜋𝛼𝜌 −𝛼𝜌 𝑃[𝑍 > 𝑥] ∼ 𝑥 ℓ(𝑥) 𝜋𝛼𝜌
if 𝛼𝜌 < 1.
(6.47)
Here 𝑡 −1 sin 𝑡 is understood to equal unity for 𝑡 = 0. [The constant factor in the second formula above is chosen so as to have the choice conform to that in (6.19) – see (6.49) below.] As the dual relations we have an s.v. function ℓˆ such that ℓˆ = ℓˆ∗
(𝛼 𝜌ˆ = 1);
𝑃[− 𝑍ˆ > 𝑥]/𝑣◦ ∼
sin 𝜋𝛼 𝜌ˆ −𝛼𝜌ˆ ˆ 𝑥 ℓ(𝑥) 𝜋𝛼 𝜌ˆ
(𝛼 𝜌ˆ < 1).
6.5 Some Properties of the Renewal Functions 𝑈a and 𝑉d
125
Note that according to Lemma 6.3.1, ˆ ∼ ℓ♯ (𝑥) ℓ(𝑥)
if 𝜌 = 1;
similarly ℓ(𝑥) ∼ ℓˆ♯ (𝑥) if 𝜌 = 0 (necessarily 𝛼 ≤ 1), where ∫ ∞ 𝜇+ (𝑡) ℓˆ♯ (𝑥) = 𝛼 d𝑡 (𝑥 > 0). 1−𝛼 ˆ 𝑡 ℓ(𝑡) 𝑥
(6.48)
It then follows that for all 0 ≤ 𝜌 ≤ 1, 𝑈a (𝑥) ∼ 𝑥 𝛼𝜌 /ℓ(𝑥)
ˆ and 𝑉d (𝑥) ∼ 𝑥 𝛼𝜌ˆ /ℓ(𝑥).
(6.49)
The two s.v. functions ℓ and ℓˆ are linked, as shown below in Lemma 6.5.1. We bring in the constant Γ(𝛼)𝜋 −1 sin 𝜋𝛼𝜌 𝑝Γ(𝛼𝜌 + 1)Γ(𝛼 𝜌ˆ + 1) Γ(𝛼)𝜋 −1 sin 𝜋𝛼 𝜌ˆ = 𝑞Γ(𝛼𝜌 + 1)Γ(𝛼 𝜌ˆ + 1) Γ(𝛼) [sin 𝜋𝛼𝜌 + sin 𝜋𝛼 𝜌] ˆ = , 𝜋Γ(𝛼𝜌 + 1)Γ(𝛼 𝜌ˆ + 1)
𝜅=
where only the case 𝑝 > 0 or 𝑞 > 0 of the first two expressions above is adopted if 𝑝𝑞 = 0; if 𝑝𝑞 ≠ 0 the two coincide, namely 𝑝 −1 sin 𝜋𝛼𝜌 = 𝑞 −1 sin 𝜋𝛼 𝜌ˆ = sin 𝜋𝛼𝜌 + sin 𝜋𝛼 𝜌ˆ
( 𝑝𝑞 ≠ 0),
as is implicit in the proof of the next result or directly derived (cf. [82, Appendix (A)]). Note that 𝜅 = 0 if and only if either 𝛼 = 1 and 𝜌 𝜌ˆ = 0 or 𝛼 = 2.5 𝛼 𝜇(𝑥)] → 1/𝜅, in other words ˆ Lemma 6.5.1 (i) For all 0 < 𝛼 ≤ 2, ℓ(𝑥) ℓ(𝑥)/[𝑥
𝑈a (𝑥)𝑉d (𝑥) =
𝜅 + 𝑜(1) . 𝜇(𝑥)
(ii) If 𝛼 = 2, then ℓ ∗ (𝑥) ℓ̂ ∗ (𝑥) ∼ ∫_0^𝑥 𝑡𝜇(𝑡) d𝑡 and, because of ∫_0^𝑥 𝑡𝜇(𝑡) d𝑡 ∼ 𝑚(𝑥) ∼ 𝐿(𝑥)/2, 𝑈a (𝑥)𝑉d (𝑥) ∼ 𝑥^2/𝑚(𝑥) ∼ 2𝑥^2/𝐿(𝑥).
(iii) If 𝛼 = 1 and 𝑝 ≠ 1/2 (entailing 𝜌 ∈ {0, 1} so that 𝜅 = 0), then
𝑈a (𝑥)𝑉d (𝑥) ∼ 𝑥/| 𝐴(𝑥)| ∼ 𝑥 / [| 𝑝 − 𝑞| ∫_𝑥^∞ 𝜇(𝑡) d𝑡] (𝐸 𝑋 = 0),
𝑈a (𝑥)𝑉d (𝑥) ∼ 𝑥/| 𝐴(𝑥)| ∼ 𝑥 / [| 𝑝 − 𝑞| ∫_0^𝑥 𝜇(𝑡) d𝑡] (𝐸 |𝑋 | = ∞).
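The statement of (i) can be read directly from (6.49); the following one-line check, added here and not part of the original text, makes the equivalence explicit (recalling 𝜌̂ = 1 − 𝜌):
\[
U_{\mathrm a}(x)V_{\mathrm d}(x)\ \sim\ \frac{x^{\alpha\rho}}{\ell(x)}\cdot\frac{x^{\alpha\hat\rho}}{\hat\ell(x)}
\ =\ \frac{x^{\alpha}}{\ell(x)\hat\ell(x)},
\]
so the convergence ℓ(𝑥)ℓ̂(𝑥)/[𝑥^𝛼 𝜇(𝑥)] → 1/𝜅 is the same statement as 𝑈a (𝑥)𝑉d (𝑥)𝜇(𝑥) → 𝜅.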
⁵ In fact, 𝜅/(2 − 𝛼) → 1 as 𝛼 ↑ 2; if 𝛼 = 1, 𝜅/(𝜌 ∧ 𝜌̂) → 2 as 𝜌 ∧ 𝜌̂ → 0.
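The first claim of the footnote can be checked from the third expression for 𝜅; the following sketch is an added aside (it only uses 𝛼𝜌 + 𝛼𝜌̂ = 𝛼 and 𝛼𝜌, 𝛼𝜌̂ ≤ 1):
\[
\alpha\rho=1-s,\quad \alpha\hat\rho=1-t,\quad s+t=2-\alpha\downarrow 0\ \ (\alpha\uparrow 2);\qquad
\sin\pi(1-s)=\sin\pi s\sim\pi s,
\]
\[
\kappa=\frac{\Gamma(\alpha)\,[\sin\pi\alpha\rho+\sin\pi\alpha\hat\rho]}{\pi\,\Gamma(2-s)\Gamma(2-t)}
\ \sim\ \frac{\Gamma(\alpha)\,\pi(s+t)}{\pi}\ \sim\ 2-\alpha .
\]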
For the identification of the asymptotic form of 𝑈a (𝑥)𝑉d (𝑥), explicit in terms of 𝐹, Lemma 6.5.1 covers all the cases of (AS) other than the case 𝛼 = 2𝑝 = 𝜌 ∨ 𝜌̂ = 1.
Proof (iii) follows immediately from Lemmas 6.3.1 and 6.4.3 (see also Remark 6.4.4). For the rest, we show (ii) first, which we need for the proof of (i).
Proof of (ii). Let 𝛼 = 2. Then ℓ = ℓ ∗ , ℓ̂ = ℓ̂ ∗ . Suppose that 𝐸 𝑍 = 𝐸 |𝑍̂ | = ∞ so that ℓ ∗ (𝑥) ∧ ℓ̂ ∗ (𝑥) → ∞, otherwise the proof below merely being simplified. Recalling (6.14) we have
𝑃[−𝑍̂ ≥ 𝑥]/𝑣◦ = ∑_{𝑧=0}^{∞} 𝑢 a (𝑧)𝐹 (−𝑥 − 𝑧), (6.50)
by which and its dual
(ℓℓ̂)′(𝑥) = ℓ ∗ (𝑥)𝑃[−𝑍̂ > 𝑥]/𝑣◦ + ℓ̂ ∗ (𝑥)𝑃[𝑍 > 𝑥]
 = ℓ ∗ (𝑥) ∫_0^∞ [𝜇− (𝑥 + 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 {1 + 𝑜(1)} + ℓ̂ ∗ (𝑥) ∫_1^∞ [𝜇+ (𝑥 + 𝑦)/ℓ̂ ∗ (𝑦)] d𝑦 {1 + 𝑜(1)}. (6.51)
We claim that for any constant 𝜀 ∈ (0, 1),
∫_0^𝑥 ℓ ∗ (𝑡) d𝑡 ∫_0^∞ [𝜇− (𝑡 + 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 ≤ [𝜀 ∫_0^𝑥 𝑡𝜇− (𝑡) d𝑡 + ∫_0^𝑥 𝜂− (𝑡) d𝑡] {1 + 𝑜(1)},
∫_0^𝑥 ℓ ∗ (𝑡) d𝑡 ∫_0^∞ [𝜇− (𝑡 + 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 ≥ ∫_0^𝑥 𝑡𝜇− (𝑡) d𝑡 {1 − 𝜀 + 𝑜(1)}. (6.52)
By splitting the inner integral of the LHS at 𝜀𝑡 and using the monotonicity of ℓ ∗ and 𝜇− it follows that
ℓ ∗ (𝑡) ∫_0^∞ [𝜇− (𝑡 + 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 ≤ [𝜀𝑡𝜇− (𝑡) + 𝜂− (𝑡)] {1 + 𝑜(1)},
showing the first inequality of (6.52). On the other hand the LHS of (6.52) is greater than
∫_0^𝑥 ℓ ∗ (𝑡) d𝑡 ∫_0^{𝑥−𝑡} [𝜇− (𝑡 + 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 = ∫_0^𝑥 𝜇− (𝑤) d𝑤 ∫_0^{𝑤} [ℓ ∗ (𝑤 − 𝑦)/(ℓ ∗ (𝑦) ∨ 1)] d𝑦 ≥ ∫_0^𝑥 𝜇− (𝑤) d𝑤 ∫_0^{(1−𝜀)𝑤} {1 + 𝑜(1)} d𝑦,
which yields the second inequality of (6.52).
Put 𝐿 ∗ (𝑥) = ∫_0^𝑥 𝑡𝜇(𝑡) d𝑡. Then (AS) (with 𝛼 = 2) is equivalent to the slow variation of 𝐿 ∗ and in this case 𝑚(𝑥) = ∫_0^𝑥 𝜂(𝑡) d𝑡 ∼ 𝐿 ∗ (𝑥) (cf. Section A.1.1), which entails 𝑥𝜂(𝑥) = 𝑜(𝐿 ∗ (𝑥)). By (6.52) and its dual, which follows because of
the symmetric roles of the two tails of 𝐹, integrating the second expression of (ℓℓ̂)′(𝑥) in (6.51) leads to
ℓ(𝑥) ℓ̂(𝑥) ≤ {1 + 𝜀 + 𝑜(1)}𝐿 ∗ (𝑥) and ℓ(𝑥) ℓ̂(𝑥) ≥ {1 − 𝜀 + 𝑜(1)}𝐿 ∗ (𝑥),
which shows the formula of (ii) since 𝜀 is arbitrary.
Proof of (i). We have only to consider the case 0 < 𝜌 < 1 of 𝛼 < 2, since for 𝜌 = 1 the result follows from Lemma 6.3.4 and for 𝜌 = 0 by duality, and since for 𝛼 = 2, 𝑥^2 𝜇(𝑥) = 𝑜(∫_0^𝑥 𝑡𝜇(𝑡) d𝑡), so that the result follows from (ii). By symmetry, we may suppose 𝑞 > 0 (entailing 𝛼𝜌̂ < 1). Denoting by 𝑈a {d𝑦} the measure d𝑈a (𝑦) and writing the sum on the RHS of (6.50) as ∫_0^∞ 𝜇− (𝑥 + 𝑥𝑡)𝑈a {𝑥 d𝑡}, we see that
𝑃[−𝑍̂ ≥ 𝑥]/𝑣◦ ∼ [𝑞𝑈a (𝑥)𝐿(𝑥)/𝑥^𝛼] ∫_0^∞ [𝐿(𝑥 + 𝑥𝑡)/(𝐿(𝑥)(1 + 𝑡)^𝛼)] · [𝑈a {𝑥 d𝑡}/𝑈a (𝑥)].
Observe that 𝑈a {𝑥 d𝑡}/𝑈a (𝑥) weakly converges as 𝑥 → ∞ to the measure 𝛼𝜌 𝑡^{𝛼𝜌−1} d𝑡 on every finite interval and that ∫_𝑀^∞ 𝑡^{−𝛼+𝜀} 𝑈a {𝑥 d𝑡}/𝑈a (𝑥) → 0 as 𝑀 → ∞ uniformly in 𝑥 for each 0 < 𝜀 < 𝛼𝜌̂. With the help of Potter’s bound 𝐿(𝑥 + 𝑥𝑡)/𝐿(𝑥) = 𝑂 ((1 + 𝑡)^𝜀 ), valid for any 𝜀 > 0 (cf. [8]), we then see that the integral above converges to 𝛼𝜌 ∫_0^∞ 𝑡^{𝛼𝜌−1} (1 + 𝑡)^{−𝛼} d𝑡 = 𝛼𝜌𝐵(𝛼𝜌, 𝛼𝜌̂). We accordingly obtain
𝑃[−𝑍̂ ≥ 𝑥]/𝑣◦ ∼ 𝑞𝛼𝜌𝐵(𝛼𝜌, 𝛼𝜌̂) 𝑥^{−𝛼𝜌̂} 𝐿(𝑥)/ℓ(𝑥). (6.53)
Thus by (6.13) (the dual of (6.46)) and the identity Γ(1 − 𝑡)Γ(1 + 𝑡) = 𝜋𝑡/sin 𝜋𝑡 (|𝑡| < 1),
𝑉d (𝑥) ∼ [𝜋^{−1} sin 𝜋𝛼𝜌̂ / (𝛼^2 𝜌𝜌̂ 𝐵(𝛼𝜌, 𝛼𝜌̂))] · [𝑥^{𝛼𝜌̂} ℓ(𝑥)/(𝑞𝐿(𝑥))] = [Γ(𝛼)𝜋^{−1} sin 𝜋𝛼𝜌̂ / (Γ(𝛼𝜌 + 1)Γ(𝛼𝜌̂ + 1))] · [𝑥^{𝛼𝜌̂} ℓ(𝑥)/(𝑞𝐿(𝑥))], (6.54)
which combined with 𝑈a (𝑥) ∼ 𝑥^{𝛼𝜌}/ℓ(𝑥) shows the asserted convergence. The proof of Lemma 6.5.1 is complete. □
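The second equality in (6.54) rests on a standard Beta–Gamma identity; the following short verification is added here for the reader (with 𝜌 + 𝜌̂ = 1):
\[
B(\alpha\rho,\alpha\hat\rho)=\frac{\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)}{\Gamma(\alpha\rho+\alpha\hat\rho)}
=\frac{\Gamma(\alpha\rho)\Gamma(\alpha\hat\rho)}{\Gamma(\alpha)},
\qquad\text{hence}\qquad
\alpha^{2}\rho\hat\rho\,B(\alpha\rho,\alpha\hat\rho)
=\frac{\Gamma(\alpha\rho+1)\Gamma(\alpha\hat\rho+1)}{\Gamma(\alpha)} .
\]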
Remark 6.5.2 In the above, (6.53) is shown under the condition 𝑞𝜌 𝜌ˆ > 0. Here we prove that (6.53) holds (under (AS) with 𝛼 < 2) unless 𝑞 = 𝜌ˆ = 0 with the understanding that 𝑠𝐵(𝑠, 𝑡) = 1 for 𝑠 = 0, 𝑡 > 0 and 𝐵(𝑠, 0) = ∞ for 𝑠 > 0, and ∼ is interpreted according to our convention if the constant factor of its right-hand side equals zero or infinity. The proof is carried out in the following three cases (a) 𝛼 = 𝜌 = 1 > 𝑝; (b) either 𝛼 = 𝜌ˆ = 1 or 𝛼 < 1 = 𝑞; (c) 𝛼 > 1 = 𝑝. Suppose (a) holds. Then ℓ = ℓ ∗ and 𝑃[− 𝑍ˆ > 𝑥]/𝑣◦ ∼ ℓ♯ (𝑥) ≫ 𝑥𝜇− (𝑥)/ℓ ∗ (𝑥). Since 𝑞 > 0 = 𝜌ˆ (entailing 𝐵(𝛼𝜌, 𝛼 𝜌) ˆ = ∞), this shows (6.53). If (b) holds, then 𝜌 = 0, ℓ = ℓˆ♯ and 𝑈a (𝑥) ∼ 1/ℓˆ♯ (𝑥), and by the representation 𝑃[− 𝑍ˆ > 𝑥]/𝑣◦ = Í∞ 𝑦=0 𝑢 a (𝑦)𝜇− (𝑥 + 𝑦) we deduce that
𝑃[−𝑍̂ > 𝑥]/𝑣◦ = 𝑈a (𝑥)𝜇− (𝑥){1 + 𝑜(1)} ∼ 𝑞𝑥^{−𝛼} 𝐿(𝑥)/ℓ(𝑥) if 𝑞 > 0,
𝑃[−𝑍̂ > 𝑥]/𝑣◦ = 𝑜(𝜇(𝑥)/ℓ(𝑥)) if 𝑞 = 0,
showing (6.53). As for the case (c) we have 𝛼𝜌 = 1, ℓ = ℓ ∗ , and 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥), and see that ∑_{𝑦=0}^{∞} 𝑢 a (𝑦)𝜇− (𝑥 + 𝑦) ≪ 𝑈a (𝑥)𝜇(𝑥) ∼ 𝑥^{−𝛼𝜌̂} 𝐿(𝑥)/ℓ(𝑥) since 𝜇−/𝜇 → 0.
If the sequence (𝑐 𝑛 ) is determined as in (2.17) then the asymptotic form of 𝑈a𝑉d given in Lemma 6.5.1 may also be expressed as
𝑈a (𝑐 𝑛 )𝑉d (𝑐 𝑛 ) / (𝑛/𝑐 ♯) −→ 𝜅 or 2 according as 0 < 𝛼 < 2 or 𝛼 = 2 (6.55)
and is known if 0 < 𝜌 < 1 [23, Eq(15)], [88, Eq(15, 31)] apart from the explicit expression of the limit value.
6.6 Proof of Proposition 6.1.3

This section assumes that (AS) is valid, as mentioned before.

Lemma 6.6.1 Suppose 𝜌 > 0. Then for any 𝜀 > 0, there exists a constant 𝑀 > 0 such that for all sufficiently large 𝑅,
𝐸 𝑥 [𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) ≥ 𝑀 𝑅, Λ𝑅 ] ≤ 𝜀𝑉d (𝑥) (0 ≤ 𝑥 < 𝑅),
and, if 𝛼𝜌 = 1,
lim sup_{𝑅→∞} sup_{0≤𝑥<𝑅} 𝐸 𝑥 [𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) ≥ 𝑀 𝑅, Λ𝑅 ]/𝑉d (𝑥) = 0 for each 𝑀 > 0.
Proof For any 𝑀 > 0, for all sufficiently large 𝑅,
∑_{𝑤≥𝑅+𝑀 𝑅} 𝑝(𝑤 − 𝑧)𝑉d (𝑤) ≤ 𝐶∗ 𝑉d (𝑅)𝜇(𝑅)/𝑀^{𝛼𝜌} (0 ≤ 𝑧 < 𝑅) (6.56)
for some constant 𝐶∗ which one can take to be 2/𝛼𝜌. Using Lemmas 6.2.1 and 6.5.1(i) in turn one then sees that for any 𝑀 > 0
[1/𝑉d (𝑥)] 𝐸 𝑥 [𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) ≥ 𝑀 𝑅, Λ𝑅 ] = [1/𝑉d (𝑥)] ∑_{𝑧=0}^{𝑅} 𝑔 𝐵(𝑅) (𝑥, 𝑧) ∑_{𝑤≥(𝑀+1)𝑅} 𝑝(𝑤 − 𝑧)𝑉d (𝑤)
 ≤ 𝐶∗ 𝑈a (𝑅)𝑉d (𝑅)𝜇(𝑅)/𝑀^{𝛼𝜌} = 𝐶∗ (𝜅 + 𝑜(1))/𝑀^{𝛼𝜌} (6.57)
(𝐵(𝑅) := R \ [0, 𝑅]), so that the first half of the lemma is ensured by taking
𝑀 = [2𝐶∗ 𝜅/𝜀]^{1/𝛼𝜌}. (6.58)
If 𝛼𝜌 = 1, we have either 𝜅 = 0 or 1 < 𝛼 < 2 with 𝑝 = 0 and the second half follows from (6.57), for if 𝑝 = 0, 𝜇(𝑅) can be replaced by 𝑜(𝜇(𝑅)) in (6.56) and thereafter. □ Lemma 6.6.2 Suppose 𝜌 > 0. Then there exists a constant 𝜃 > 0 such that 𝑃 𝑥 (Λ𝑅 ) ≥ 𝜃𝑉d (𝑥)/𝑉d (𝑅) for 0 ≤ 𝑥 < 𝑅 and for 𝑅 large enough. If 𝛼 ≤ 1, then 𝜃 can be chosen so as to approach unity as 𝜌 → 1, and for any 𝛿 > 0, for 𝑥 ≥ 𝛿𝑅, 𝑃 𝑥 (Λ𝑅 ) → 1 as 𝑅 → ∞ and 𝜌 → 1 in this order. Proof Because of the identity (6.11) (due to the optional stopping theorem), Lemma 6.6.1 shows that for any 𝜀 > 0 we can choose a constant 𝑀 > 0 so that (1 − 𝜀)𝑉d (𝑥) ≤ 𝐸 𝑥 𝑉d (𝑆 𝜎 (𝑅,∞) ); 𝑍 (𝑅) < 𝑀 𝑅 Λ𝑅 𝑃 𝑥 (Λ𝑅 ). Since the conditional expectation on the RHS is less than 𝑉d (𝑅 + 𝑀 𝑅), which is asymptotically equivalent to (1 + 𝑀) 𝛼𝜌ˆ 𝑉d (𝑅), we have the lower bound of 𝑃 𝑥 (Λ𝑅 ) with 𝜃 = (1 − 𝜀) (1 + 𝑀) −𝛼𝜌ˆ . Noting that 𝑀 may be determined by (6.58) and that as 𝜌 → 1 (possible only when 𝛼 ≤ 1), 𝜃 → 1 − 𝜀, it follows that 𝜃 → 1 since 𝜀 may be arbitrarily small. Since we also have 𝑉d (𝑥)/𝑉d (𝑅) ∼ (𝑥/𝑅) 𝛼𝜌ˆ → 1 (uniformly □ for 𝛿𝑅 < 𝑥 < 𝑅), the second half of the lemma follows. Lemma 6.6.3 Suppose 0 < (𝛼 ∨ 1) 𝜌 < 1. Then for any 𝛿 < 1, uniformly for 0 ≤ 𝑥 < 𝛿𝑅, as 𝑅 → ∞ and 𝜀 ↓ 0 (interchangeably), 𝑃 𝑥 𝑍 (𝑅) ≤ 𝜀𝑅 Λ𝑅 → 0. Proof By the restriction on (𝛼 ∨ 1) 𝜌 it follows that 𝛼 < 2 and (a) 𝑝 > 0 so that 𝜇 is regularly varying; + (b) 𝑃[𝑍 > 𝑥] is regularly varying with index ∈ (−1, 0); (c) 𝑉d (𝑥)/𝑉d (𝑅) ≤ 𝐶𝑃 𝑥 (Λ𝑅 ) and ∀𝜀 > 0, inf inf 𝑃 𝑥 (Λ𝑅 ) > 0. 𝑅 ≥1 𝑥 ≥ 𝜀𝑅
(6.59)
The reasoning for (a) has already been given, while (b) and (c) are due to (6.47) and Lemma 6.6.2, respectively. By the generalised arcsine law (cf. Theorem A.2.3) (b) implies that 𝑃[𝑍 (𝑟) < 𝜀𝑟] → 0 as 𝜀 → 0 uniformly for 𝑟 ≥ 1. This, in particular, shows the asserted convergence of the lemma to be valid for 41 𝑅 ≤ 𝑥 < 𝛿𝑅, therein 𝑃 𝑥 (Λ𝑅 ) being bounded away from zero by (c). For 𝑥 < 14 𝑅, take a constant 0 < ℎ < 1/4 arbitrarily and let E 𝜀,𝑅 stand for the event {(1 − ℎ)𝑅 ≤ 𝑆 𝜎 [𝑅/4,∞) < (1 + 𝜀)𝑅}. It holds that ( the event Λ𝑅 ∩ {𝑍 (𝑅) < 𝜀𝑅} is contained in (6.60) Λ𝑅/4 ∩ {𝑆 𝜎 [𝑅/4,∞) < (1 − ℎ)𝑅, 𝑍 (𝑅) < 𝜀𝑅} ∪ E 𝜀,𝑅 ,
and that
𝑃 𝑥 (Λ𝑅/4 ∩ E 𝜀,𝑅 ) ≤ ∑_{0≤𝑦<𝑅/4} 𝑔𝛺 (𝑥, 𝑅/4 − 𝑦) 𝑃[(1 − ℎ)𝑅 ≤ 𝑦 + 𝑋 < (1 + 𝜀)𝑅]. (6.61)
𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅 | Λ𝑅 ] ≥ 1/2 (𝑥 < 𝛿𝑅) (6.64)
for some 𝜀 > 0. Writing the equality (6.11) as
𝑉d (𝑥)/𝑃 𝑥 (Λ𝑅 ) = ∑_{𝑦=𝑅+1}^{∞} 𝑃 𝑥 [𝑆 𝜎 (𝑅,∞) = 𝑦 | Λ𝑅 ]𝑉d (𝑦),
one deduces from (6.64) that the sum above is larger than [𝑉d (𝑅) + 𝑉d ((1 + 𝜀)𝑅)]/2. By (6.49) lim sup 𝑉d ((1 + 𝜀)𝑅)/𝑉d (𝑅) = (1 + 𝜀) 𝛼𝜌ˆ > 1, since (𝛼 ∨ 1) 𝜌 < 1 entails 𝜌ˆ > 0. Hence 𝑃 𝑥 (Λ𝑅 ) ≤ 2/[1 + (1 + 𝜀) 𝛼𝜌ˆ ] 𝑉d (𝑥)/𝑉d (𝑅){1 + 𝑜(1)}. This verifies the upper bound. (ii) follows from Lemmas 6.6.2 and 6.6.4. □
6.7 Note on the Over- and Undershoot Distributions I

Recall that condition (C3), saying both 𝑢 a and 𝑉d are s.v., holds if and only if 𝑃0 [𝑍 (𝑅) < 𝜀𝑅] → 1 and 𝑃0 [𝑍̂ (𝑅) < −𝑅/𝜀] → 1 (𝑅 → ∞) for any 𝜀 > 0. With the help of a few results from what has been obtained in the present chapter, we can read off exact modes of these convergences in terms of 𝑉d or ℓ ∗ in the following proposition, where (AS) is not supposed. Put
𝑁 + (𝑅) = inf{𝑛 ≥ 0 : 𝑆 𝑛+1 > 𝑅}, 𝑁 − (𝑅) = inf{𝑛 ≥ 0 : 𝑆 𝑛+1 < −𝑅}
and 𝑍̂ (𝑅) = 𝑆 𝑁 − (𝑅)+1 + 𝑅. Their derivations are based on the following identities
𝑃0 [𝑍 (𝑅) = 𝑥] = ∑_{𝑦=0}^{𝑅} 𝑢 a (𝑦)𝑃[𝑍 = 𝑥 + 𝑅 − 𝑦], (6.65)
𝑃0 [𝑆 𝑁 + (𝑅) = 𝑅 − 𝑥] = 𝑔𝛺 (𝑥, 𝑅)𝜇+ (𝑥) (6.66)
(since 𝑔 (𝑅,∞) (0, 𝑅 − 𝑥) = 𝑔𝛺 (𝑥, 𝑅)), and their duals. Let 𝛿 be a constant arbitrarily chosen from (0, 1).
Proposition 6.7.1 (i) Uniformly for 1 ≤ 𝑥 ≤ 𝑅,⁶
(a) if 𝑢 a is s.v., 𝑃0 [𝑍 (𝑅) ≤ 𝑥] = [ℓ ∗ (𝑥)/ℓ ∗ (𝑅)] + 𝑜(1) as 𝑅 → ∞,⁷
(b) under (C3), 𝑃0 [𝑆 𝑁 + (𝑅) ≥ 𝑅 − 𝑥] ∼ ℓ ∗ (𝑥)/ℓ ∗ (𝑅) as 𝑥 → ∞, and uniformly for 0 ≤ 𝑥 < 𝛿𝑅,
𝑃0 [𝑆 𝑁 + (𝑅) = 𝑅 − 𝑥] ∼ 𝑉d (𝑥)𝜇+ (𝑥) / ∫_0^𝑅 𝑉d (𝑦)𝜇+ (𝑦) d𝑦 as 𝑅 → ∞.
(ii) Uniformly for 𝑥 ≥ 1, as 𝑅 → ∞
(a) if 𝑉d is s.v., 𝑃0 [𝑍̂ (𝑅) ≤ −𝑥] ∼ 𝑉d (𝑅)/𝑉d (𝑥 ∨ 𝑅),
(b) under (C3), 𝑃0 [𝑆 𝑁 − (𝑅) ≥ −𝑅 + 𝑥] ∼ 𝑉d (𝑅)/𝑉d (𝑥 ∨ 𝑅) and
𝑃0 [𝑆 𝑁 − (𝑅) = −𝑅 + 𝑥] ∼ [𝜇− (𝑥)/ℓ ∗ (𝑥)] / ∫_𝑅^∞ [𝜇− (𝑦)/ℓ ∗ (𝑦)] d𝑦 uniformly for 𝑥 > 𝑅/𝛿.
[(ii.b) entails that for any 𝑀𝑅 ≫ 𝑅 such that 𝑉d (𝑀𝑅 )/𝑉d (𝑅) → 1, both −𝑍̂ (𝑅)/𝑀𝑅 and 𝑆 𝑁 − (𝑅) /𝑀𝑅 diverge to +∞ in probability.]
Proof By (6.65), 𝑃0 [𝑍 (𝑅) > 𝑥] = ∑_{𝑦=0}^{𝑅} 𝑢 a (𝑦)𝑃[𝑍 > 𝑥 + 𝑅 − 𝑦]. If 𝑢 a is s.v., then 𝑢 a ∼ 1/ℓ ∗ , and, recalling the definition of ℓ ∗ , one sees that 𝑥𝑃[𝑍 > 𝑥]/ℓ ∗ (𝑥) → 0 (𝑥 → ∞), so that the above sum over 𝑦 < 𝑅/2, being less than 𝑈a (𝑅)𝑃[𝑍 ≥ 𝑅/2], tends to zero. Hence
𝑃0 [𝑍 (𝑅) > 𝑥] = 𝑜(1) + [(1 + 𝑜(1))/ℓ ∗ (𝑅)] [ℓ ∗ (𝑥 + 𝑅/2) − ℓ ∗ (𝑥)] = 1 − ℓ ∗ (𝑥)/ℓ ∗ (𝑅) + 𝑜(1),
showing (a) of (i). For the proof of (ii.a) let 𝑉d be s.v. Then 𝑃[𝑍̂ < −𝑥] ∼ 𝑣◦ /𝑉d (𝑥), so that from the dual of (6.65) one deduces that uniformly for 𝑥 > 𝑅,
𝑃0 [𝑍̂ (𝑅) < −𝑥] = ∑_{𝑦=0}^{𝑅} [𝑣d (𝑦)/𝑉d (𝑥 + 𝑅 − 𝑦)] {1 + 𝑜(1)} ∼ 𝑉d (𝑅)/𝑉d (𝑥).
As for (i.b), under (C3) we have 𝑔𝛺 (𝑥, 𝑅) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑅) for 𝑥 ≤ 𝛿𝑅, and hence, by (6.66), 𝑃0 [𝑆 𝑁 + (𝑅) = 𝑅 − 𝑥] = 𝑉d (𝑥)𝜇+ (𝑥)/ℓ ∗ (𝑅), which together yield the equivalences of (i.b), since ∑_{𝑦=0}^{𝑥} 𝑉d (𝑦)𝜇+ (𝑦) ∼ ℓ ∗ (𝑥) (Lemma 6.4.1) and the probability of the LHS of the first equivalence is increasing in 𝑥. For the proof of (ii.b), let (C3) hold and observe that for each 0 < 𝛿 < 1,
𝑃0 [𝑆 𝑁 − (𝑅) = −𝑅 + 𝑥] = 𝑔𝛺 (𝑅, 𝑥)𝜇− (𝑥) ∼ 𝑉d (𝑅)𝜇− (𝑥)/ℓ ∗ (𝑥),
for 𝑥 > 𝑅/𝛿,
6 To be precise, 𝑥 must be restricted to integers if lim ℓ ∗ ( 𝑥) = 𝐸 𝑍 < ∞. 7 Virtually the same formula (together with closely related results) is obtained by Erickson [28, Eq(9.3)] under the condition: 𝑥 𝑃 [𝑍 > 𝑥 ] is s.v.
where the equality is the dual of (6.66) and the asymptotic equivalence is uniform in 𝑥. Since 1/𝑉d (𝑥) ∼ ℓ♯ (𝑥) = ∫_𝑥^∞ 𝜇− (𝑡) d𝑡/ℓ ∗ (𝑡), one can accordingly conclude both relations of (ii.b). □
Chapter 7
The Two-Sided Exit Problem for Relatively Stable Walks
This chapter is a continuation of Chapter 6. We use the same notation as therein. As in Chapter 6, we shall be primarily concerned with the event Λ𝑅 = {𝜎(𝑅,∞) < 𝑇 }, (𝑇 = 𝜎𝛺 , 𝛺 = (−∞, −1]). In Chapter 6 we obtained several sufficient conditions in order for (6.1) to hold, namely 𝑃 𝑥 (Λ𝑅 ) ∼ 𝑉d (𝑥)/𝑉d (𝑅)
uniformly for 0 ≤ 𝑥 < 𝑅 as 𝑅 → ∞.
If the r.w. 𝑆 is p.r.s. (positively relatively stable), then one such condition – condition (C3) in Theorem 6.1.1 – is fulfilled (cf. Remark 6.1.2), hence (6.1) holds. At the same time, in Chapter 6, we also showed that if 𝑆 is n.r.s., then 𝑃 𝑥 (Λ𝑅 ) = 𝑜 𝑉d (𝑥) 𝑉d (𝑅) uniformly for 0 ≤ 𝑥 < 𝛿𝑅 (0 < 𝛿 < 1). In this chapter, we obtain the precise asymptotic form of 𝑃 𝑥 (Λ𝑅 ) in the case when 𝑆 is n.r.s. under some additional regularity condition on the positive tail of 𝐹 that is satisfied at least when 𝐹 is in the domain of attraction of a stable law with exponent 1, 𝐸 𝑋 = 0 and lim sup 𝑚 − (𝑥)/𝑚(𝑥) < 1/2. The derivation of the asymptotic form of 𝑃0 (Λ𝑅 ) is accompanied by those of the renewal sequence 𝑢 a (𝑥) and the Green function 𝑔𝛺 (𝑥, 𝑦). Indeed, the gist of the problem is to obtain the asymptotic form of 𝑢 a (𝑥), which yields estimates of 𝑔𝛺 (𝑥, 𝑦) sufficient for the derivation of the asymptotics of 𝑃 𝑥 (Λ𝑅 ). We suppose 𝜎 2 = ∞, and for later citations, we bring in the conditions (PRS) (NRS)
𝐴(𝑥)/𝑥𝜇(𝑥) → ∞ as 𝑥 → ∞, 𝐴(𝑥)/𝑥𝜇(𝑥) → −∞ as 𝑥 → ∞
∫𝑥 (𝐴(𝑥) = 0 [𝜇+ − 𝜇− (𝑡)] d𝑡). As mentioned previously, condition (PRS) holds if and only if 𝑆 is p.r.s. We shall say 𝐹 is p.r.s. (n.r.s., recurrent or transient) if so is the r.w. 𝑆. If 𝐹 is p.r.s. (n.r.s.) both 𝑢 a and 𝑉d (both 𝑣d and 𝑈a ) are s.v. at infinity. When 𝐹 is transient, the case of interest as much as the case of recurrence in this chapter, the Green kernel 𝐺 (𝑥) defined by
𝐺 (𝑥) = ∑_{𝑛=0}^{∞} 𝑃0 [𝑆 𝑛 = 𝑥], (7.1)
plays a significant role. Under the relative stability condition, 𝐹 is transient if and only if
∫_{𝑥0}^∞ [𝜇(𝑡)/𝐴^2 (𝑡)] d𝑡 < ∞
and in this case | 𝐴(𝑥)| → ∞ (cf. [83]). We shall also seek the asymptotic estimates of 𝑃 𝑥 [𝜎𝑅 < 𝑇] or 𝑃 𝑥 [𝜎0 < 𝜎(𝑅,∞) ] which are not only of interest in themselves but are sometimes crucial for estimating 𝑃 𝑥 (Λ𝑅 ). If 𝐹 is n.r.s., the comparison of 𝑃0 [𝜎𝑅 < 𝑇] with 𝑃0 (Λ𝑅 ) leads to the determination of the asymptotic form of the renewal sequence – as well as these two probabilities – under some regularity condition on the positive tail of 𝐹. Applying the obtained asymptotic estimates of 𝑃 𝑥 [𝜎𝑅 < 𝑇] and 𝑃 𝑥 (Λ𝑅 ), we shall derive the asymptotic behaviour as 𝑅 → ∞ of the probability that 𝑅 is ever hit by 𝑆, conditioned to avoid the negative half-line forever.
7.1 Statements of the Main Results

The following result (together with its dual) is fundamental for later arguments.

Theorem 7.1.1 If 𝐹 is recurrent and p.r.s., then 𝑎(𝑥), 𝑥 > 0, is s.v. at infinity and as 𝑦 → ∞, 𝑣d (𝑦) = 𝑜(𝑎(𝑦)/𝑈a (𝑦)), and
𝑔𝛺 (𝑥, 𝑦) = 𝑎(𝑥) − 𝑎(𝑥 − 𝑦) + 𝑜(𝑎(𝑥)) uniformly for 𝑥 > 𝑦/2,
in particular 𝑔𝛺 (𝑦, 𝑦) ∼ 𝑎(𝑦); and for each constant 𝛿 < 1
𝑔𝛺 (𝑥, 𝑦) = [𝑉d (𝑥)/𝑉d (𝑦)] [𝑎(𝑦) − 𝑎(−𝑦) + 𝑜(𝑎(𝑦))] (𝑦 → ∞) uniformly for 0 ≤ 𝑥 ≤ 𝛿𝑦,
𝑔𝛺 (𝑥, 𝑦) = [𝑈a (𝑦)/𝑈a (𝑥)] · 𝑜(𝑎(𝑥)) (𝑥 → ∞) uniformly for 0 ≤ 𝑦 < 𝛿𝑥. (7.2)
[For an alternative asymptotic form in the first case of (7.2) see (7.5) below.]
(b) Under (PRS) some asymptotic estimates of 𝑎(𝑥) and 𝐺 (𝑥) as 𝑥 → ±∞ are obtained in Section 4.1 and [83], respectively; in particular it follows that as 𝑥 → ∞,
1/𝐴(𝑥) ∼ 𝑎(𝑥) − 𝑎(−𝑥). (7.3)
Note that by definition, 𝑎(𝑥) − 𝑎(−𝑥) = 𝐺 (𝑥) − 𝐺 (−𝑥) if 𝐹 is transient. (See (7.36) and (7.90) for more about 𝑎 and 𝐺, respectively.)
(c) In view of the identity 𝑔𝛺 (𝑥, 𝑦) = 𝑔̂𝛺 (𝑦, 𝑥), the dual statement of Theorem 7.1.1 may read as follows: If 𝐹 is recurrent and n.r.s., then as 𝑥 → ∞
𝑔𝛺 (𝑥, 𝑦) = 𝑎(−𝑦) − 𝑎(𝑥 − 𝑦) + 𝑜(𝑎(−𝑦)) uniformly for 𝑦 > 𝑥/2, (7.4)
in particular 𝑔𝛺 (𝑥, 𝑥) ∼ 𝑎(−𝑥); and
𝑔𝛺 (𝑥, 𝑦) = [𝑉d (𝑥)/𝑉d (𝑦)] · 𝑜(𝑎(−𝑦)) (𝑦 → ∞) uniformly for 0 ≤ 𝑥 < 𝛿𝑦,
𝑔𝛺 (𝑥, 𝑦) = [𝑈a (𝑦)/𝑈a (𝑥)] [𝑎(−𝑥) − 𝑎(𝑥) + 𝑜(𝑎(−𝑥))] (𝑥 → ∞) uniformly for 0 ≤ 𝑦 ≤ 𝛿𝑥.
(d) If both 𝑢 a and 𝑉d are s.v., then by Spitzer’s formula (2.6)
𝑔𝛺 (𝑥, 𝑦) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑦) ∼ 𝑉d (𝑥)𝑈a (𝑦)/𝑦 for 0 ≤ 𝑥 < 𝛿𝑦. (7.5)
Comparing this to (7.2) (with 𝑥 = 𝑦/2) and using (7.3) one sees that if 𝐹 is recurrent and p.r.s, then 𝑉d (𝑥)𝑈a (𝑥)/𝑥 = 1/𝐴(𝑥) + 𝑜(𝑎(𝑥)). For transient 𝐹, we shall give in Section 7.5 a formula analogous to (7.2), and according to it, the corresponding formula becomes 𝑉d (𝑥)𝑈a (𝑥)/𝑥 = 1/𝐴(𝑥) + 𝑜(𝐺 (𝑥)). As to the estimates of 𝑔𝛺 (𝑥, 𝑦), the result corresponding to Theorem 7.1.1 for the transient walk is much cheaper because 𝐺 is bounded – we shall give it in Section 7.5 as Lemma 7.5.1 (in the dual setting, i.e., for an n.r.s. walk), whereas the estimation of 𝑢 a (given in Theorem 7.1.6 below not for 𝑣a because of the dual setting) is more costly than for the recurrent walk. Here we state the well-established fact that if 𝐹 is transient, then 0 < 𝐺 (0) = 1/𝑃0 [𝜎0 = ∞] < ∞ and 𝑔𝛺 (𝑥, 𝑥) −→ 𝐺 (0). (See Section A.5.3 for the proof of the latter assertion.)
(7.6)
If 𝐹 is n.r.s., then 𝑃 𝑥 (Λ𝑅 ) → 0 (0 ≤ 𝑥 < 𝛿𝑅), and the exact estimation of 𝑃 𝑥 (Λ𝑅 ) seems demanding to obtain in general. However, if the positive and negative tails of 𝐹 are not balanced in the sense that if 𝐹 is recurrent, lim sup 𝑥→∞ 𝑎(𝑥)/𝑎(−𝑥) < 1 (7.7) lim sup 𝑥→∞ 𝐺 (𝑥)/𝐺 (−𝑥) < 1 if 𝐹 is transient and if the positive tail of 𝐹 satisfies an appropriate regularity condition, then we can compute the precise asymptotic form of 𝑔𝛺 (𝑥, 𝑦) for 0 ≤ 𝑥 < 𝛿𝑦 (which is lacking in the second formula in (c) of Remark 7.1.2), and thereby obtain that of 𝑃 𝑥 (Λ𝑅 ) for 𝐹 that is n.r.s. We need to assume ∃ 𝜆 > 1, lim sup
𝜇+ (𝜆𝑡) < 1. 𝜇+ (𝑡)
(7.8)
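As an illustration of (7.8), added here (the index β and the slowly varying factor L₀ are hypothetical, not notation used in the text): a positive tail with a genuinely negative index of regular variation satisfies the condition, while a slowly varying tail does not:
\[
\mu_+(t)=t^{-\beta}L_0(t),\ \beta>0:\qquad
\frac{\mu_+(\lambda t)}{\mu_+(t)}=\lambda^{-\beta}\,\frac{L_0(\lambda t)}{L_0(t)}\longrightarrow\lambda^{-\beta}<1\quad(\lambda>1),
\]
whereas for slowly varying μ₊ one has μ₊(λt)/μ₊(t) → 1, so (7.8) fails.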
Proposition 7.1.3 Suppose that 𝐹 is n.r.s. Then 𝑃0 (Λ𝑅 ) ≥ 𝑣◦𝑈a (𝑅)𝜇+ (𝑅){1 + 𝑜(1)}.
(7.9)
Let (7.8) hold in addition. Then as 𝑥 → ∞ 𝑢 a (𝑥) ≥
𝑈a (𝑥)𝜇+ (𝑥) {1 + 𝑜(1)} −𝐴(𝑥)
(7.10)
and under (7.7), 𝑈a (𝑥)𝑉d (𝑥) ∼ −𝑥/𝐴(𝑥) and 𝑢 a (𝑥) = 𝑜(𝑈a (𝑥)/𝑥) and ∃ 𝑐 > 0, 𝑃0 [𝜎𝑅 < 𝑇]/𝑃0 (Λ𝑅 ) > 𝑐, if 𝐹 is recurrent ; 𝑃0 [𝜎𝑅 < 𝑇] 1 + 𝑜(1) | 𝐴(𝑥)|𝑈a (𝑥) and ≥ 𝑢 a (𝑥) = 𝑜 , 𝑥 𝑃0 (Λ𝑅 ) | 𝐴(𝑅)|𝐺 (0) if 𝐹 is transient. In the next theorem, we obtain precise asymptotic forms of 𝑃0 (Λ𝑅 ) and 𝑢 a (𝑥) in the case when 𝐹 is recurrent by assuming, in addition to (NRS), (7.7) and (7.8), the ‘continuity’ condition lim lim sup 𝜇+ (𝛿𝑥)/𝜇+ (𝑥) = 1. 𝛿 ↑1
(7.11)
𝑥→∞
Theorem 7.1.4 If 𝐹 is recurrent and n.r.s. and if (7.11), (7.7) and (7.8) hold, then 𝑈a (𝑥)𝜇+ (𝑥) , −𝐴(𝑥) 𝑃0 (Λ𝑅 )/𝑣◦ ∼ 𝑈a (𝑅)𝜇+ (𝑅), 𝑢 a (𝑥) ∼
and for each 𝛿 < 1, uniformly for 0 ≤ 𝑥 < 𝛿𝑦 as 𝑦 → ∞,
(7.12) (7.13)
𝑔𝛺 (𝑥, 𝑦) ∼ [𝑉d (𝑥)𝑈a (𝑦)/(| 𝐴(𝑦)|(𝑥 + 1))] ∫_{𝑦−𝑥−1}^{𝑦} 𝜇+ (𝑡) d𝑡; (7.14)
and
𝑃 𝑥 [𝜎𝑦 < 𝑇]/𝑃 𝑥 [Λ 𝑦 ] ∼ [𝑎(−𝑦) − 𝑎(𝑦)]/𝑎(−𝑦). (7.15)
Remark 7.1.5 (a) Under (NRS), condition (7.7) follows if we suppose
lim sup 𝑥→∞ 𝑚 − (𝑥)/𝑚(𝑥) < 1/2 if 𝐸 𝑋 = 0, lim sup 𝑥→∞ 𝐴+ (𝑥)/𝐴(𝑥) < 1/2 if 𝐹 is transient, ∫𝑥 where 𝐴± (𝑥) = 0 𝜇± (𝑡) d𝑡 (𝑥 ≥ 0) [note 𝐴(𝑥) = 𝜂− (𝑥) − 𝜂+ (𝑥) if 𝐸 𝑋 = 0]. (See Section 7.7, where related matters are discussed in the dual setting.) (b) Suppose that 𝐸 𝑋 = 0, lim sup 𝜇− (𝑥)/𝜇(𝑥) < 21 and the asymptotic ‘continuity’ condition (7.11) holds. Then the assumption of Theorem 7.1.4 holds if 𝜇+ (𝑥) ≍ 𝐿 (𝑥)/𝑥
(𝑥 → ∞)
for some s.v. 𝐿,
(7.16)
as is readily verified (the converse is not true). If 𝐹 is transient one may expect the formulae parallel to those given in Theorem 7.1.4 to be true on an ad hoc basis, but the analysis runs into a difficulty. The next theorem ensures them under the following condition, more restrictive than the continuity condition (7.11): ∃ 𝐶∗ > 0,
𝑝(𝑥) ≤ 𝐶∗
𝜇+ (𝑥) 𝑥
(𝑥 ≥ 1).
(7.17)
Theorem 7.1.6 Let 𝐹 be transient and n.r.s. Suppose that (7.17) holds in addition to (7.7) and (7.8). Then the formulae (7.12) to (7.15) are valid, where (7.15) is more naturally written as 𝑃 𝑥 [𝜎𝑦 < 𝑇] 𝐺 (−𝑦) − 𝐺 (𝑦) ∼ . (7.18) 𝑃 𝑥 [Λ 𝑦 ] 𝐺 (0) Remark 7.1.7 Let 𝐸 |𝑋 | = ∞. Then by Erickson’s criterion (5.22), under (NRS) our hypothesis of 𝑆 being oscillatory is expressed as ∫ ∞ 𝜇+ (𝑡) d𝑡 = ∞. 𝐴− (𝑡) 1 Because of the identity 𝑃 𝑥 [𝜎𝑦 < 𝑇] = 𝑔𝛺 (𝑥, 𝑦)/𝑔𝛺 (𝑦, 𝑦) (𝑦 ≠ 𝑥), combining (7.6), (7.14) and (7.4) with the equivalence (7.15) or (7.18) leads to the following Corollary 7.1.8 If the assumption of Theorem 7.1.4 or that of Theorem 7.1.6 holds (according as 𝐹 is recurrent or transient), then uniformly for 0 ≤ 𝑥 < 𝛿𝑅, 𝑃 𝑥 (Λ𝑅 ) ∼ | 𝐴(𝑅)|𝑔𝛺 (𝑥, 𝑅) ∫ 𝑅 𝑉d (𝑥)𝑈a (𝑅) 𝜇+ (𝑡) d𝑡 ∼ 𝑥+1 𝑅−𝑥−1
(7.19) as 𝑅 → ∞.
In the dual setting, Theorem 7.1.4 is paraphrased as follows. If 𝐹 is recurrent and p.r.s., (7.11) and (7.8) hold with 𝜇− in place of 𝜇+ and lim sup 𝑎(−𝑥)/𝑎(𝑥) < 1, then 1 − 𝑃0 (Λ 𝑦 ) ∼ 𝑉d (𝑦)𝜇− (𝑦) ∼ 𝑣d (𝑦) 𝐴(𝑦)
(𝑦 → ∞),
(7.20)
uniformly for 0 ≤ 𝑥 ≤ 𝑅
𝑃 𝑥 [𝜎0 < 𝜎(𝑅,∞) ] ∼ 𝑔𝛺 (𝑅, 𝑅 − 𝑥)/𝑎(𝑅) and for each 𝜀 > 0, 𝑈a (𝑦)𝑉d (𝑥) 𝑔𝛺 (𝑥, 𝑦) ∼ 𝐴(𝑥) (𝑦 + 1)
∫
𝑥
𝜇− (𝑡) d𝑡
(𝑥 → ∞) (7.21)
𝑥−𝑦−1
uniformly for 0 ≤ 𝑦 < (1 − 𝜀)𝑥, 𝑃 𝑥 [𝜎0 < 𝜎(𝑅,∞) ] 𝑎(𝑅) − 𝑎(−𝑅) ∼ 𝑃 𝑥 [𝑇 < 𝜎(𝑅,∞) ] 𝑎(𝑅)
(𝑅 → ∞) uniformly for 𝜀𝑅 < 𝑥 ≤ 𝑅,
and 𝑈a (𝑅 − 𝑥)𝑉d (𝑅) 𝑅−𝑥+1 as 𝑅 → ∞ uniformly for 𝜀𝑅 < 𝑥 ≤ 𝑅;
∫
𝑅
𝜇− (𝑡) d𝑡
1 − 𝑃 𝑥 (Λ𝑅 ) ∼ 𝐴(𝑅)𝑔𝛺 (𝑅, 𝑅 − 𝑥) ∼
𝑥−1
(7.22)
in particular by (7.21) as 𝑥 → ∞ 𝑦/𝑉d (𝑦) 𝑦𝑉d (𝑥)𝜇− (𝑥) =𝑜 𝑃 𝑥 [𝜎𝑦 < 𝑇] ≍ 𝑉d (𝑦) 𝐴(𝑥) 𝑥/𝑉d (𝑥)
uniformly for 1 ≤ 𝑦 < 𝛿𝑥,
where for the last formula we have used 𝑉d (𝑦)𝑈a (𝑦) ≍ 𝑦𝑎(𝑦) (see L(3.1) of the next section and (7.39)). It would be straightforward to state the dual results of Theorem 7.1.6. Remark 7.1.9 In [84, Eq(2.22)] it is shown that if 𝐸 𝑍 < ∞, then 1 − 𝑃 𝑥 (Λ𝑅 ) ∼ [𝑉d (𝑅) − 𝑉d (𝑥)] 𝑉d (𝑅) as 𝑅 − 𝑥 → ∞ for 𝑥 ≥ 0, which, giving an exact asymptotic for 0 ≤ 𝑥 ≤ 𝛿𝑅, partially complements (7.22). From the estimates of 𝑢 a and 𝑔𝛺 (𝑥, 𝑦) of Theorems 7.1.4 and 7.1.6 we can derive some exact asymptotic estimates of 𝑃 𝑥 [𝑆 𝜎 (𝑅,∞)−1 = 𝑦 | Λ𝑅 ],
(7.23)
the conditional probability of 𝑆 exiting the interval [0, 𝑅] through 𝑦, given Λ𝑅 . We shall carry out the derivation in Section 7.6. The result obtained will lead to the evaluation of the distribution of the overshoot 𝑍 (𝑅) (Proposition 7.6.4). We conclude this section by giving the asymptotic forms of 𝑔 𝐵(𝑅) (𝑥, 𝑦), the results being essentially consequences of what we have stated above.
Theorem 7.1.10 Let 𝐹 be n.r.s. Put 𝐵(𝑅) = R\ [0, 𝑅] and take a constant arbitrarily and put 𝜀 = 1 − 𝛿.
1 2
𝛿𝑅). Proof (of Theorem 7.1.10) We show the duals of (i) and (ii). Let 𝐹 be p.r.s. Then by Lemma 5.7.3 𝑔 𝐵(𝑅) (𝑥, 𝑦) ∼ 𝑔𝛺 (𝑥, 𝑦) for 0 ≤ 𝑥 ≤ 𝑦 < 𝛿𝑅, which combined with (7.4) yields the dual of (7.24). This and (7.5) together show that if 0 ≤ 𝑥 < 𝛿𝑦 and 𝑦 < 𝛿𝑅, 𝑔 𝐵(𝑅) (𝑥, 𝑦) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑦), which conforms to (7.27). Because of the identity 𝑔 𝐵(𝑅) (𝑥, 𝑦) = 𝑔 𝐵(𝑅) (𝑅 − 𝑦, 𝑅 − 𝑥) it follows that as 𝑅 − 𝑥 → ∞, 𝑔 𝐵(𝑅) (𝑥, 𝑦) ∼ 𝑉d (𝑅 − 𝑦)/ℓ ∗ (𝑅 − 𝑥)
for 𝑅 − 𝑦 < 𝛿(𝑅 − 𝑥) and 𝑥 > 𝜀𝑅,
(7.29)
which also conforms to (7.27) and shows it for 𝑦 ≥ 𝛿𝑅 and 𝜀𝑅 < 𝑥 < 12 𝑅. Now one notices that (7.27) is verified under the additional restriction 𝑥 ∧ (𝑅 − 𝑦) < 𝜀𝑅, provided 𝐹 is merely p.r.s.
Put 𝑅 ′ = ⌊ 21 𝑅⌋. Then (7.27) follows if we can show that 𝑔 𝐵(𝑅) (𝑥, 𝑦) ∼ 𝑃 𝑥 (Λ𝑅′ )𝑔 𝐵(𝑅) (𝑅 ′, 𝑦) for 0 ≤ 𝑥 < 𝜀𝑦 and 𝛿𝑅 ≤ 𝑦 ≤ 𝑅.
(7.30)
Indeed, using 𝑃 𝑥 (Λ𝑅′ ) ∼ 𝑉d (𝑥)/𝑉d (𝑅 ′), valid under (PRS), one infers from (7.29) that the RHS is asymptotically equivalent to that of (7.27). Since for any 𝜀1 > 0, 𝑃 𝑥 [𝑍 (𝑅 ′) > 𝜀1 𝑅 | Λ′𝑅 ] → 0, in view of the representation ∑︁ 𝑔 𝐵(𝑅) (𝑥, 𝑦) = 𝑃 𝑥 𝑍 (𝑅 ′) = 𝑧; Λ′𝑅 𝑔 𝐵(𝑅) (𝑅 ′ + 𝑧, 𝑦) 𝑧
one may expect (7.30) to be true. However, sup𝑧 𝑔 𝐵(𝑅) (𝑅 ′ + 𝑧, 𝑦)/𝑔 𝐵(𝑅) (𝑅 ′, 𝑦) tends to infinity, so the problem is non-trivial and delicate without the additional assumption made for (7.27) in the theorem. We postpone the proof of (7.30) to Section 8.6 (given after the proof of Lemma 8.6.2, where we encounter a similar problem). The proof of (iii) rests on Theorems 7.1.4 and 7.1.6. Suppose that the assumption of Theorem 7.1.4 holds and 𝑥𝜇+ (𝑥) is s.v. We apply Lemma 7.6.1 proved in Section 7.6, which says that uniformly for 0 ≤ 𝑥 < 𝛿𝑅, 0 ≤ 𝑦 ≤ 𝑅, 𝑔 𝐵(𝑅) (𝑥, 𝑦) = 𝑔𝛺 (𝑥, 𝑦) − [𝑈a (𝑦)/𝑈a (𝑅)] 𝑔𝛺 (𝑥, 𝑅){1 + 𝑜(1)}.
(7.31)
Substitution of the asymptotic form of 𝑔𝛺 (𝑥, ·) in (7.14) leads to ∫ 𝑦 ∫ 𝑅 𝑉d (𝑥)𝑈a (𝑦) 1 + 𝑜(1) 1 + 𝑜(1) 𝑔 𝐵(𝑅) (𝑥, 𝑦) = 𝜇+ (𝑡) d𝑡 − 𝜇+ (𝑡) d𝑡 . 𝑥+1 | 𝐴(𝑦)| 𝑦−𝑥−1 | 𝐴(𝑅)| 𝑅−𝑥−1 Noting that | 𝐴(𝑦)| is s.v. and 𝑈a (𝑦)/| 𝐴(𝑦)| ∼ 𝑢 a (𝑦)/𝜇+ (𝑦), we can easily identify the RHS with that of (7.26) for 0 ≤ 𝑥 < 𝛿𝑦 and 𝜀𝑅 < 𝑦 < 𝛿𝑅. The proof under the assumption of Theorem 7.1.6 is similar. □ Section 7.2 states some of the results from Chapter 6 that are fundamental in the later discussions. The proof of Theorem 7.1.1 will be given in Section 7.3. In Section 7.4, we shall prove Proposition 7.1.3 (in the case when 𝐹 is recurrent) and Theorem 7.1.4 after showing miscellaneous lemmas in preparation for their proofs. Proposition 7.1.3 (in the case 𝐸 |𝑋 | = ∞) and Theorem 7.1.6 will be proved in Section 7.5.
7.2 Basic Facts from Chapter 6 We shall use the results obtained in Chapter 6, some of which we collect below. First of all recall (6.12), which says 𝑃 𝑥 (Λ𝑅 ) ≤ 𝑉d (𝑥)/𝑉d (𝑅), and that (6.1) holds, i.e., 𝑃 𝑥 (Λ𝑅 ) ∼ 𝑉d (𝑥)/𝑉d (𝑅), under
(C3) both 𝑉d (𝑥) and 𝑥𝑈a (𝑥) are s.v. as 𝑥 → ∞. For the convenience of later citations we write down the dual of (C3): c both 𝑥𝑉d (𝑥) and 𝑈a (𝑥) are s.v. as 𝑥 → ∞. ( C3) c from (NRS), as mentioned previThe condition (C3) follows from (PRS) and ( C3) ously. Put, for 𝑡 ≥ 0, ∫ 𝑡 ∫ 𝑡 1 ℓ ∗ (𝑡) = 𝑃[𝑍 > 𝑠] d𝑠 and ℓˆ∗ (𝑡) = ◦ 𝑃[− 𝑍ˆ > 𝑠] d𝑠 𝑣 0 0 (as in Chapter 6), and ∞
∫ ℓ♯ (𝑡) = 𝑡
𝜇− (𝑠) d𝑠 ℓ ∗ (𝑠)
∫ and ℓˆ♯ (𝑡) = 𝑡
∞
𝜇+ (𝑠) d𝑠, ℓˆ∗ (𝑠)
(7.32)
c respectively, in case (C3) and in case ( C3). The ladder height 𝑍 is r.s. if and only if 𝑥𝑈a (𝑥) is s.v. (Theorem A.2.3) and if this is the case, 𝑢 a is s.v. and 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) (cf. (5.28)). The following are of repeated use. 𝑅 ∑︁ L(2.1) For 𝑥 ≥ 0 and 𝑅 ≥ 1, 𝑔𝛺 (𝑥, 𝑦) < 𝑉d (𝑥)𝑈a (𝑅). 𝑦=0
ℓ∗
L(3.1) Under (C3), and ℓ♯ are s.v., 𝑢 a (𝑥) ∼ 1/ℓ ∗ (𝑥) and 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥). c ℓˆ∗ and ℓˆ♯ are s.v., By the duality this entails that under ( C3), 𝑣d (𝑥) ∼ 1/ℓˆ∗ (𝑥)
and 𝑈a (𝑥) ∼ 1/ℓˆ♯ (𝑥).
(7.33)
c holds, then 𝑉d (𝑥)𝑈a (𝑥)𝜇(𝑥) → 0. L(3.4) If either (C3) or ( C3) L(3.6) If (C3) holds, then for each 𝜀 > 0, 𝑃 𝑥 [𝑍 (𝑅) > 𝜀𝑅 | Λ𝑅 ] → 0 as 𝑅 → ∞ uniformly for 0 ≤ 𝑥 < 𝑅. c or (AS) with 𝛼 < 1 = 𝜌ˆ holds, then L(4.7) If either ( C3) 𝑃 𝑥 (Λ𝑅 ) = 𝑜 (𝑉d (𝑥)/𝑉d (𝑅)) (𝑅 → ∞) uniformly for 0 ≤ 𝑥 < 𝛿𝑅. These results follow from Lemmas 6.2.1, 6.3.1, 6.3.4, 6.3.6 and 6.4.7, respectively.
7.3 Proof of Theorem 7.1.1 and Relevant Results Here we shall suppose that 𝐹 is recurrent. Consider the Green function of 𝑆 killed as it hits (−∞, 0], instead of 𝛺 = (−∞, −1]. We make this choice for convenience in applying (2.15), which yields the relation (7.34) below. Recall (2.3) and (5.2): 𝑔(𝑥, 𝑦) = 𝑔 {0} (𝑥, 𝑦) − 𝛿 𝑥,0 = 𝑎(𝑥) + 𝑎(−𝑦) − 𝑎(𝑥 − 𝑦)
and use (2.15) to see that for 𝑥 ≥ 1, 𝑎(𝑥) if 𝐸 𝑍 = ∞, 𝐸 𝑥 [𝑎(𝑆 𝜎(−∞,0] )] = 𝑜 (𝑎(𝑥)) (𝑥 → ∞) if 𝐸 𝑍 < ∞; 𝑔 (−∞,0] (𝑥, 𝑦) = 𝑔 {0} (𝑥, 𝑦) − 𝐸 𝑥 [𝑔(𝑆 𝜎 (−∞,0] , 𝑦)],
(7.34) (7.35)
which take less simple forms for 𝐸 𝑥 [𝑎(𝑆𝑇 )] and 𝑔𝛺 (𝑥, 𝑦). The following conditions, assumed in the next proposition, are all satisfied if either (PRS) or (AS) with 𝛼 = 1 and 𝜌 > 0 holds (see Remark 7.3.2(a),(c) below): (1) 𝑎(𝑥) is almost increasing and 𝑎(−𝑥)/𝑎(𝑥) is bounded as 𝑥 → ∞; |𝑎(−𝑧 ′) − 𝑎(−𝑧)| (2) sup −→ 0 as 𝑧 → ∞; 𝑎(𝑧) 𝑧 ′ :𝑧 −𝜀𝑥] −→ 0 as 𝑥 → ∞ and 𝜀 ↓ 0 in this order. Proposition 7.3.1 Suppose that the conditions (1) to (3) above hold. Then for each 𝜀 > 0, lim sup 𝑥→∞ 𝑎(−𝑥)/𝑎(𝜀𝑥) ≤ 1 and as 𝑥 → ∞: (i) 𝑔𝛺 (𝑥, 𝑦) = 𝑎(𝑥) − 𝑎(𝑥 − 𝑦) + 𝑜 (𝑎(𝑥)) uniformly for 𝑥 > 𝜀𝑦 > 0, in particular, 𝑔𝛺 (𝑥, 𝑦) ∼ 𝑎(𝑥) if 𝑎(𝑥 − 𝑦) = 𝑜(𝑎(𝑥)) and 𝑥 > 𝜀𝑦 > 0; (ii) 𝑔𝛺 (𝑥, 𝑦) = 𝑎(𝑥) − 𝑎(−𝑥) + 𝑜 (𝑎(𝑥)) uniformly for 𝜀𝑦 ≤ 𝑥 ≤ (1 − 𝜀)𝑦. Remark 7.3.2 (a) Let (PRS) hold. Then, according to Corollary 4.1.2, 𝑎(𝑥) − 𝑎(−𝑥) ∼ 1/𝐴(𝑥) (> 0); 𝑎(𝑥) is∫s.v.; ∫ 𝑥 𝑥 𝜇− (𝑡) 𝜇+ (𝑡) 𝑎(𝑥) ∼ d𝑡, 𝑎(−𝑥) = d𝑡 + 𝑜(𝑎(𝑥)), 2 (𝑡) 2 𝐴 𝑥0 𝑥0 𝐴 (𝑡)
(7.36)
as 𝑥 → ∞. Hence, by (PRS), for 𝑥 0 < 𝑧 < 𝑧 ′ ≤ 2𝑧 ′
∫
𝑎(−𝑧 ) − 𝑎(−𝑧) = 𝑧
𝑧′
′ 𝜇+ (𝑡) 𝑧 −𝑧 d𝑡 + 𝑜(𝑎(𝑧)) = 𝑜 + 𝑜(𝑎(𝑧)), 𝐴(𝑧)𝑧 𝐴2 (𝑡)
and one sees that (2) is satisfied. We also know that by L(3.1) 𝑉d (𝑥) is s.v., or what amounts to the same thing, ∀𝑀 > 1, lim 𝑃 𝑥 [𝑆𝑇 < −𝑀𝑥] = 1
(7.37)
(cf. Theorem A.2.3), in particular (3) is satisfied. (1) is evident from (7.36). (b) Let 𝐹 be recurrent and p.r.s. By virtue of the Spitzer’s formula (2.6) we know 𝑔𝛺 (𝑥, 𝑦) ∼ 𝑉d (𝑥)/ℓ ∗ (𝑦) for 0 ≤ 𝑥 < 𝛿𝑦 (0 < 𝛿 < 1), which, in conjunction with (7.36), 𝑉d (𝑥) ∼ 1/ℓ♯ (𝑥) and the second half of Proposition 7.3.1, leads to the equivalence relations 𝑎(𝑥) ≍
𝑎(−𝑥) 1 1 ⇐⇒ lim sup < 1 ⇐⇒ 𝑎(𝑥) ≍ ∗ , 𝐴(𝑥) 𝑎(𝑥) ℓ (𝑥)ℓ♯ (𝑥)
(7.38)
as well as 1/ℓ ∗ (𝑥)ℓ♯ (𝑥) ∼ 𝑔𝛺 (𝑥, 𝑥/𝛿) = 𝑎(𝑥) − 𝑎(−𝑥) + 𝑜(𝑎(𝑥)), so that each of the conditions in (7.38) implies 𝐴(𝑥) ∼ ℓ ∗ (𝑥)ℓ♯ (𝑥)
(7.39)
and that lim 𝑎(−𝑥)/𝑎(𝑥) = 1 if and only if ℓ ∗ (𝑥)ℓ♯ (𝑥)𝑎(𝑥) → ∞. (c) Suppose that 𝐹 satisfies (AS) with 𝛼 = 1 and 𝜌 > 0. Then conditions (1) to (3) are satisfied. Indeed, for 𝜌 = 1, (PRS) is satisfied, while for 0 < 𝜌 < 1, 𝑎(𝑥) ∼ 𝑎(−𝑥) ∼ 𝑐 𝜌 [1/𝐿] ∗ (𝑥) ∫
(7.40)
𝑥
([1/𝐿] ∗ (𝑥) = 1 [𝑡 𝐿(𝑡)] −1 d𝑡) with a certain constant 𝑐 𝜌 > 0 (Proposition 4.2.1(iv)), entailing (1) and (2); moreover, 𝑉d is regularly varying with index 𝜌ˆ ∈ (0, 1), so that (3) is satisfied owing to the generalised arcsin law ([31, p. 374], Theorem A.2.3).Thus Proposition 7.3.1 is applicable. For 0 < 𝜌 < 1, (ii) can be refined to 𝑔𝛺 (𝑥, 𝑦) = 𝑜(𝑎(𝑦))
(𝑦 → ∞) uniformly for 0 ≤ 𝑥 < (1 − 𝜀)𝑦.
(7.41)
For 𝑥 > 𝜀𝑦, this is Í immediate from (7.40) and (i). For 0 < 𝑥 ≪ 𝑦, one has only to use 𝑔𝛺 (𝑥, 𝑦) = 𝑧 𝑃 𝑥 [𝑆 𝜎 [ 𝜀𝑦,∞) = 𝑧, Λ 𝜀 𝑦 ]𝑔𝛺 (𝑧, 𝑦). Indeed, by Proposition 6.1.3 the sum restricted to 𝑧 < 2𝑦 is 𝑜(𝑎(𝑦)). Since 𝑎(𝑧) − 𝑎(𝑧 − 𝑦) = 𝑜(𝑎(𝑧)) for 𝑧 ≥ 2𝑦, using 𝐻 −𝑥 (𝑎) = 𝑎(−𝑥) (valid if 𝐸 | 𝑍ˆ | = ∞), one sees that the sum over 𝑧 ≥ 2𝑦 is [0,∞) also 𝑜(𝑎(𝑦)). Hence (7.41) holds true. The following corollary follows immediately from Proposition 7.3.1 and Remark 7.3.2(a,c) because of 𝑃 𝑥 [𝜎𝑦 < 𝑇] =
𝑔𝛺 (𝑥, 𝑦)/𝑔𝛺 (𝑦, 𝑦) (𝑥 ≠ 𝑦). (7.42)
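This is the usual first-passage decomposition of the Green function; a one-line justification by the strong Markov property at σ_y is added here for convenience:
\[
g_{\Omega}(x,y)=\sum_{n\ge 0}P_x[S_n=y,\ n<T]
=E_x\bigl[\mathbf 1\{\sigma_y<T\}\bigr]\,g_{\Omega}(y,y)
=P_x[\sigma_y<T]\,g_{\Omega}(y,y)\qquad(x\neq y),
\]
since on {σ_y < T} every visit to 𝑦 before 𝑇 occurs from time σ_y on, and the walk restarted at σ_y makes an expected 𝑔𝛺 (𝑦, 𝑦) such visits.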
Corollary 7.3.3 Suppose that 𝐹 is recurrent and satisfies either (PRS) or (AS) with 𝜌ˆ < 𝛼 = 1. Then, for any 0 < 𝛿 < 1 and 𝜀 > 0, as 𝑦 → ∞, 𝑎(𝑥) − 𝑎(−𝑥) + 𝑜(1) uniformly for 𝜀𝑦 ≤ 𝑥 < 𝛿𝑦, 𝑃 𝑥 [𝜎𝑦 < 𝑇 ] = 𝑎(𝑥) 𝑜(1) uniformly for 𝑥 > 𝑦/𝛿; in particular under (AS) with 𝜌ˆ < 𝛼 = 1 = 2𝑝, 𝑃 𝑥 [𝜎𝑦 < 𝑇] → 0 as 𝑦 → ∞ uniformly for 𝑥 ∉ [𝛿𝑦, 𝑦/𝛿]. [For a transient r.w., 𝑎(𝑥) = 𝐺 (0) − 𝐺 (−𝑥), so that 𝑎(𝑥) − 𝑎(−𝑥) = 𝐺 (𝑥) − 𝐺 (−𝑥). If a transient 𝐹 is p.r.s., by the estimate of 𝐺 (𝑥) given in [83] (see (7.90) and Lemma 7.5.1 – given in the dual setting – of this chapter) one sees that a formula analogous to the above holds but for 𝑅/𝛿 < 𝑥 < 𝑅/𝜀 (resp. 𝑥 < 𝛿𝑅) instead of 𝜀𝑅 ≤ 𝑥 < 𝛿𝑅 (resp. 𝑥 > 𝑅/𝛿).] Proof One has only to examine the last statement in the case 𝑥 < 𝜀𝑦, which is verified similarly to the corresponding part of (7.41). [See Proposition 8.3.1 for a sharper result.] □
The proofs of Proposition 7.3.1 and Theorem 7.1.1 will be given after the following two lemmas. Lemma 7.3.4 If (1) and (2) hold, then for any 𝑀 ≥ 1, |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)| sup : 0 ≤ 𝑥 < 𝑀 𝑧 −→ 0 𝑎 † (𝑥) + 𝑎(−𝑧)
(𝑧 → ∞).
(7.43)
Proof From (1) and (2) it follows that |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)|/𝑎(𝑥) → 0
(𝑧 → ∞)
uniformly for 𝑧 ≤ 𝑥 < 𝑀 𝑧.
(7.44)
Pick 𝜀 > 0 arbitrarily and choose 𝑧0 – possible under (2) – so that |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)| < 𝜀 2 𝑎(𝑧)
whenever 𝑧 ≥ 𝑧 0 and 0 ≤ 𝑥 < 𝑧.
(7.45)
Let 0 < 𝑥 ≤ 𝑧 and 𝑧 > 𝑧0 . Then if 𝑎(−𝑧) > 𝜀𝑎(𝑧), |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)| < 𝜀 2 𝑎(𝑧) < 𝜀𝑎(−𝑧). On the other hand, if 𝑎(−𝑧) ≤ 𝜀𝑎(𝑧), by the inequalities −
𝑎(−𝑧) 𝑎(−𝑥 − 𝑧) 𝑎(−𝑥) ≤ 𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧) ≤ 𝑎(𝑥) 𝑎(𝑧) 𝑎(𝑧 + 𝑥)
(cf. Section 5.1), condition (1) entails that |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)| ≤ 𝜀𝐶𝑎(𝑥), where we have also used the bound (7.45) to have 𝑎(−𝑥 − 𝑧)/𝑎(𝑧 + 𝑥) ≤ 𝐶 ′ [𝑎(−𝑧) + 𝜀 2 𝑎(𝑧)]/𝑎(𝑧) < 2𝐶 ′ 𝜀. Thus we have |𝑎(−𝑥 − 𝑧) − 𝑎(−𝑧)| ≤ 𝜀𝐶 [𝑎(𝑥) + 𝑎(−𝑧)], showing the supremum in (7.43) restricted to 0 < 𝑥 ≤ 𝑧 tends to zero. Combined with (7.44) this concludes the proof. □ Lemma 7.3.5 If (1) to (3) hold, then for any 𝜀 > 0, as 𝑦 → ∞, 𝐸 𝑥 [𝑎(𝑆𝑇 − 𝑦)] = 𝐸 𝑥 [𝑎(𝑆𝑇 )] + 𝑜 (𝑎(𝑥))
uniformly for 𝑥 > 𝜀𝑦.
(7.46)
Proof By (3), for any 𝜀 > 0 and 𝜀1 > 0 we can choose 𝑀 > 1 and 𝑀 ′ > 1 so that 𝑃 𝑥 [𝑆𝑇 ≥ −𝑦/𝑀] < 𝜀1
if 𝑥 > 𝜀𝑦, 𝑦 > 𝑀 ′ .
(7.47)
Supposing (1) and (2) to hold we apply Lemma 7.3.4 to see that as 𝑦 → ∞, 𝑎(−𝑦 − 𝑧) = 𝑎(−𝑧) + 𝑜 (𝑎(−𝑧) + 𝑎(𝑦))
uniformly for 𝑧 > 𝑦/𝑀,
hence 𝐸 𝑥 [𝑎(𝑆𝑇 − 𝑦); 𝑆𝑇 < −𝑦/𝑀] = 𝐸 𝑥 [𝑎(𝑆𝑇 ); 𝑆𝑇 < −𝑦/𝑀]{1 + 𝑜(1)} + 𝑜(𝑎(𝑦)) = 𝐸 𝑥 [𝑎(𝑆𝑇 )]{1 + 𝑜(1)} + 𝑂 (𝜀1 𝑎(𝑦)) + 𝑜(𝑎(𝑦)).
Here (7.47) as well as (1) is used for the second equality, and by the same reasoning (with the help of (2)), the left-most expression is written as 𝐸 𝑥 [𝑎(𝑆𝑇 − 𝑦)] + 𝑂 (𝜀1 𝑎(𝑦)). Noting 𝐸 𝑥 [𝑎(𝑆𝑇 )] ∼ 𝐸 𝑥 [𝑎(𝑆 𝜎(−∞,0] )] ≤ 𝐶𝑎 † (𝑥), we can conclude that 𝐸 𝑥 [𝑎(𝑆𝑇 − 𝑦)] = 𝐸 𝑥 [𝑎(𝑆𝑇 )] + 𝑜 (𝑎(𝑥 ∨ 𝑦)) + 𝑂 (𝜀1 𝑎(𝑦)), which shows (7.46), 𝜀1 being arbitrary (note that 𝑎(𝑦) ≤ 𝐶𝑎(𝑥)/𝜀 because of subadditivity of 𝑎). □ Proof (of Proposition 7.3.1) Note that for the asymptotic estimates of our concern under 𝑃 𝑥 , 𝑇 and 𝜎(−∞,0] may be interchangeable as 𝑥 → ∞. Then, applying (7.35) and Lemma 7.3.5 in turn one sees that as 𝑦 → ∞ 𝑔 (−∞,0] (𝑥, 𝑦) = 𝑎(𝑥) − 𝑎(𝑥 − 𝑦) − 𝐸 𝑥 [𝑎(𝑆 (−∞,0] )] − 𝐸 𝑥 [𝑎(𝑆 (−∞,0] − 𝑦)] = 𝑎(𝑥) − 𝑎(𝑥 − 𝑦) + 𝑜(𝑎(𝑥)) uniformly for 𝑥 > 𝜀𝑦, showing the formula (i) of the proposition. The assumption (2) entails 𝑎(𝑥 − 𝑦) − 𝑎(−𝑥) = 𝑜(𝑎(𝑦)) for 𝜀𝑅 ≤ 𝑥 < (1 − 𝜀)𝑦, where, because of 𝑎(𝑛𝑥) ≤ 𝑛𝑎(𝑥) (𝑛 = 2, 3, . . .), we may replace 𝑜(𝑎(𝑦)) by 𝑜(𝑎(𝑥)). Hence (ii) □ follows from (i). Taking 𝑦 = 2𝑥 in (i) it follows that lim sup 𝑎(−𝑥)/𝑎(𝑥) ≤ 1. Proof (of Theorem 7.1.1) Taking (6.1) into account, it follows that under (PRS) for any 0 < 𝛿 < 1 𝑃 𝑥 (Λ 𝑦 ) [𝑎(𝑦) − 𝑎(−𝑦) + 𝑜(𝑎(𝑦))] uniformly for 0 ≤ 𝑥 < 𝛿𝑦, 𝑔𝛺 (𝑥, 𝑦) = 𝑜(𝑎(𝑦)) uniformly for 𝑥 > 𝑦/𝛿. Indeed, if 𝑥 ≥ 14 𝑦, this readily follows from Proposition 7.3.1, the slow variation of 𝑉d and (7.36), while uniformly for 0 ≤ 𝑥 < 41 𝑦 we have 𝑔𝛺 (𝑥, 𝑦) =
∑︁
𝑃 𝑥 [𝑆 𝜎 ( 1 𝑦,∞) = 𝑤, Λ 1 𝑦 ]𝑔𝛺 (𝑤, 𝑦) + 𝑜 𝑃 𝑥 (Λ 1 𝑦 )𝑔𝛺 (𝑦, 𝑦) 4
4
4
1 1 4 𝑦 ≤𝑤< 2 𝑦
= 𝑃 𝑥 (Λ 𝑦 ) [𝑎(𝑦) − 𝑎(−𝑦) + 𝑜(𝑎(𝑦)], by L(3.6) and what we have just shown. Thus the first formula of (7.2) obtains. From the dual of L(4.7) (applied with 𝑥 = 0) one deduces that 𝑣d (𝑥) = 𝑜 (𝑎(𝑥)/𝑈a (𝑥))
(7.48)
(in our setting the dual of (7.48) is easier to deal with: see (7.52)). The second formula of (7.2) follows from (7.48) in view of Spitzer’s formula for 𝑔𝛺 (𝑥, 𝑦). □ For later usage, here we state the following result – corresponding to Corollary 7.3.3 for the n.r.s. walk – that follows immediately from the dual of Theorem 7.1.1 (stated in Remark 7.1.2(c)):
If 𝐹 is n.r.s and recurrent, then as 𝑦 → ∞, 𝑃 𝑥 [𝜎𝑦 < 𝑇 ] (7.49) uniformly for 0 ≤ 𝑥 < 𝛿𝑦, 𝑜 𝑉d (𝑥) 𝑉d (𝑦) = 𝑈a (𝑦) 𝑎(−𝑥) − 𝑎(𝑥) 𝑎(−𝑥) +𝑜 uniformly for 𝑥 > 𝑦/𝛿. 𝑎(−𝑦) 𝑎(−𝑦) 𝑈a (𝑥) By Lemma 6.2.2(ii), the second case gives a lower bound for 𝑥 ∈ (𝑦, 𝑦/𝛿).
7.4 Proof of Proposition 7.1.3 (for 𝑭 Recurrent) and Theorem 7.1.4 This section consists of three subsections. In the first one we obtain some basic estimates of 𝑃0 (Λ𝑅 ) and 𝑢 a (𝑥) under (NRS) and prove Proposition 7.1.3 in the case when 𝐹 is recurrent. In the second we derive the asymptotic forms of 𝑃0 (Λ𝑅 ) and 𝑢 a (𝑥) asserted in Theorem 7.1.4. The rest of Theorem 7.1.4 is proved in the third subsection. Let 𝐵(𝑅) = R \ [0, 𝑅] as before and define 𝑁 (𝑅) = 𝜎𝐵(𝑅) − 1,
(7.50)
the first epoch at which the r.w. 𝑆 leaves the interval [0, 𝑅]. Throughout this section we assume that (NRS) holds, i.e., 𝑥𝜇(𝑥)/𝐴(𝑥) → −∞ (𝑥 → ∞). Recall that this entails 𝑃0 [𝑆 𝑛 > 0] → 0, 𝑉d (𝑥) ∼ 𝑥/ℓˆ∗ (𝑥) and 𝑈a (𝑥) ∼ 1/ℓˆ♯ (𝑥).
7.4.1 Preliminary Estimates of 𝒖 a and the Proof of Proposition 7.1.3 in the Case When 𝑭 is Recurrent In this subsection, we are mainly concerned with the case when 𝐹 is recurrent, but some of the results are also valid for transient 𝐹. Note that the function 𝑎(𝑥) is always well defined and approaches the constant 𝐺 (0) as |𝑥| → ∞ if 𝐹 is transient. If 𝐹 is n.r.s., then 𝑔𝛺 (𝑦, 𝑦) ∼ 𝑎(−𝑦) (by Theorem 7.1.1) so that as 𝑦 → ∞, 𝑃0 [𝜎𝑦 < 𝑇] ∼ 𝑔𝛺 (0, 𝑦)/𝑎(−𝑦) = 𝑣◦ 𝑢 a (𝑦)/𝑎(−𝑦),
(7.51)
hence by L(4.7) (applied with 𝑥 = 0), 𝑢 a (𝑦) = 𝑜(𝑎(−𝑦)/𝑉d (𝑦)).
(7.52)
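A minimal check of how (7.52) follows from (7.51) and L(4.7), spelled out here (it assumes the inclusion {σ_y < T} ⊂ Λ_{y−1} and 𝑉d (𝑦 − 1) ∼ 𝑉d (𝑦), which hold in the present setting):
\[
\frac{v^{\circ}u_{\mathrm a}(y)}{a(-y)}\ \sim\ P_0[\sigma_y<T]\ \le\ P_0(\Lambda_{y-1})
= o\!\left(\frac{1}{V_{\mathrm d}(y-1)}\right)=o\!\left(\frac{1}{V_{\mathrm d}(y)}\right),
\qquad\text{hence}\qquad
u_{\mathrm a}(y)=o\!\left(\frac{a(-y)}{V_{\mathrm d}(y)}\right).
\]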
Lemma 7.4.1 Let 𝐹 be recurrent and n.r.s. and suppose lim sup 𝑎(𝑥)/𝑎(−𝑥) < 1. Then 𝑢 a (𝑦) = 𝑜(𝑈a (𝑦)/𝑦). (7.53)
Proof Under lim sup 𝑎(𝑥)/𝑎(−𝑥) < 1, we have 𝑎(−𝑥) ≍ 1/𝐴(𝑥) ∼ 𝑉d (𝑥)𝑈a (𝑥)/𝑥 owing to (7.39) of Remark 7.3.2(b), so that (7.53) follows from (7.52).
□
Lemma 7.4.2 Suppose (NRS) holds. (i) 𝑃0 (Λ𝑅 ) ≥ 𝑈a (𝑅)𝜇+ (𝑅){𝑣◦ + 𝑜(1)}. (ii) If 𝜇+ is of dominated variation, then for each 0 < 𝜀 < 1, 𝑃0 𝜀𝑅 ≤ 𝑆 𝑁 (𝑅) < (1 − 𝜀)𝑅 Λ𝑅 → 0 and 𝑃0 𝑆 𝑁 (𝑅) < 𝜀𝑅, Λ𝑅 ≍ 𝑈a (𝑅)𝜇+ (𝑅). [Here a non-increasing 𝑓 is of dominated variation if lim inf 𝑓 (2𝑥)/ 𝑓 (𝑥) > 0.] Proof We have the trivial inequality 𝑔 𝐵(𝑅) (𝑥, 𝑦) ≤ 𝑔𝛺 (𝑥, 𝑦). We need to find an appropriate lower bound of 𝑔 𝐵(𝑅) (𝑥, 𝑦). To this end we use the identity 𝑔 𝐵(𝑅) (𝑥, 𝑦) = 𝑔𝛺 (𝑥, 𝑦) − 𝐸 𝑥 [𝑔𝛺 (𝑆 𝜎 (𝑅,∞) , 𝑦); Λ𝑅 ]
(0 ≤ 𝑥, 𝑦 ≤ 𝑅).
(7.54)
By Spitzer’s formula (2.6) we see that for 𝑧 ≥ 𝑅, 1 ≤ 𝑦 < 𝛿𝑅 (𝛿 < 1), 𝑔𝛺 (𝑧, 𝑦) ≤ 𝑈a (𝑦) ℓˆ∗ (𝑅){1 + 𝑜(1)}, so that for 0 ≤ 𝑥, 𝑦 < 𝛿𝑅, 𝑔 𝐵(𝑅) (𝑥, 𝑦) ≥ 𝑔𝛺 (𝑥, 𝑦) − 𝑈a (𝑦)𝑃 𝑥 (Λ𝑅 ) ℓˆ∗ (𝑅){1 + 𝑜(1)}.
(7.55)
Since 𝑈a is s.v., 𝑅/ℓˆ∗ (𝑅) ∼ 𝑉d (𝑅) and 𝑃0 (Λ𝑅 )𝑉d (𝑅) → 0 by virtue of L(4.7), letting 𝑥 = 0 in (7.55) and summing both sides of it lead to 𝑅/2 ∑︁
𝑔 𝐵(𝑅) (0, 𝑦) =
𝑅/2 ∑︁
𝑔𝛺) (0, 𝑦) + 𝑜(𝑈a (𝑅)) ∼ 𝑣◦𝑈a (𝑅).
(7.56)
𝑦=0
𝑦=0
Hence 𝑃0 (Λ𝑅 ) ≥
𝑅/2 ∑︁
𝑔 𝐵(𝑅) (0, 𝑦)𝜇+ (𝑅 − 𝑦) ≥ 𝑈a (𝑅)𝜇+ (𝑅){𝑣◦ + 𝑜(1)},
(7.57)
𝑦=0
showing (i). Í (1−𝜀) 𝑅 𝑢 a (𝑦)𝜇+ (𝑅 − 𝑦). Let 0 < 𝜀 < 12 . The first probability in (ii) is less than 𝑦=𝜀𝑅 After summing by parts, this sum may be expressed as [𝑈a (𝑦)𝜇+ (𝑅 −
(1−𝜀) 𝑅 𝑦)] 𝑦=𝜀𝑅
∫
(1−𝜀) 𝑅
−
𝑈a (𝑡)𝑑𝜇+ (𝑅 − 𝑦) 𝑦=𝜀𝑅
(7.58)
apart from the error term of a smaller order of magnitude than 𝑈a (𝑅)𝜇+ (𝜀𝑅). Because of the slow variation of 𝑈a , the above difference is 𝑜 (𝑈a (𝑅)𝜇+ (𝜀𝑅)). By (i) we, therefore, obtain the first relation of (ii), provided that 𝜇+ is of dominated variation. The second relation of (ii) follows from the first and (7.56). □ Lemma 7.4.2(ii) says that, given Λ𝑅 , the conditional law of 𝑆 𝑁 (𝑅) /𝑅, the position of departure of the scaled r.w. 𝑆 𝑛 /𝑅 from the interval [0, 1], tends to concentrate near the boundary. If the positive tail of 𝐹 satisfies the continuity condition (7.11), we shall see that such a concentration should be expected to occur only about the lower boundary (see Lemma 7.4.6); otherwise, this may be not true. Lemma 7.4.3 Suppose lim sup 𝜇+ (𝜆𝑥)/𝜇+ (𝑥) < 1 for some 𝜆 > 1. Then sup 𝑃 𝑥 [𝑍 (𝑅) > 𝑀 𝑅 | Λ𝑅 ] → 0 as 𝑀 → ∞ uniformly for 𝑅.
(7.59)
0≤𝑥 ≤𝑅
Proof For any integer 𝑀 > 1, writing 𝑀 ′ = 𝑀 + 1 we have 𝑦=0
𝑔 𝐵(𝑅) (𝑥, 𝑦)𝜇+ (𝑀 ′ 𝑅 − 𝑦)
Í𝑅
𝑔 𝐵(𝑅) (𝑥, 𝑦)𝜇+ (𝑅 − 𝑦)
Í𝑅 𝑃 𝑥 [𝑍 (𝑅) > 𝑀 𝑅 | Λ𝑅 ] ≤
𝑦=0
≤
𝜇+ (𝑀 𝑅) , 𝜇+ (𝑅)
of which the last expression approaches zero as 𝑀 → ∞ uniformly in 𝑅 under the supposition of the lemma. □ Proof (of Proposition 7.1.3 (recurrent case)) Suppose that 𝐹 is recurrent and the assumption of Proposition 7.1.3 holds, namely (a) 𝐹 is n.r.s.; (b) 𝐹 is recurrent and lim sup 𝑥→∞ 𝑎(𝑥)/𝑎(−𝑥) < 1; (c) ∃ 𝜆 > 1, lim sup 𝜇+ (𝜆𝑥)/𝜇+ (𝑥) < 1.
(7.60)
Then Lemmas 7.4.2(i) and 7.4.3 are applicable, of which the former one gives the lower bound of 𝑃0 (Λ𝑅 ) asserted in Proposition 7.1.3. By (7.51) we have 𝑣◦ 𝑢 a (𝑅) ∼ 𝑎(−𝑅)𝑃0 [𝜎𝑅 < 𝑇] ≥ −𝑃0 (Λ𝑅 )/𝐴(𝑅){1 + 𝑜(1).}
(7.61)
Here, for the inequality we have employed, in turn, (7.59), (b) of (7.60), the second half of (7.49) and 𝑎(−𝑥) − 𝑎(𝑥) ∼ −1/𝐴(𝑥). Since 𝑎(−𝑅) ≍ 1/𝐴(𝑅) because of (b), we have 𝑃0 [𝜎𝑅 < 𝑇] ≥ 𝑐𝑃0 (Λ𝑅 ) for some constant 𝑐 > 0, showing the required lower bound of 𝑢 a . The rest of the assertions follow from Remark 7.1.2(d) (or (7.39)) and Lemma 7.4.1. □ The following lemma is used in the next subsection.
Lemma 7.4.4 Suppose that (7.60) holds. (i) For any 1/2 ≤ 𝛿 < 1, as 𝑦 → ∞ = 𝑉d (𝑤) × 𝑜 𝑈a (𝑦) 𝑦 𝑔𝛺 (𝑤, 𝑦) ≤ 𝑎(−𝑦){1 + 𝑜(1)} ≤ 𝐶/| 𝐴(𝑦)| ∼ 𝑈 (𝑦)/ℓˆ∗ (𝑤) a (ii) sup
∞ ∑︁
0 ≤ 𝑤 < 𝛿𝑦, 𝛿𝑦 ≤ 𝑤 ≤ 𝑦/𝛿, 𝑤 > 𝑦/𝛿 > 0.
(7.62)
𝑔𝛺 (𝑤, 𝑦)𝜇+ (𝑤) < ∞.
𝑦 ≥0 𝑤=0
Proof Let 𝐹 be recurrent. By (2.6), Spitzer’s representation of 𝑔𝛺 (𝑤, 𝑦), one can easily deduce (i) with the help of (7.53) and Theorem 7.1.1 (see (7.67) below). For the convenience of later citations, we note that (i) entails that for some constant 𝐶 ≤ 𝐶𝑉d (𝑤)𝑈a (𝑦) 𝑦 0 ≤ 𝑤 < 𝑦/2, 𝑔𝛺 (𝑤, 𝑦) (7.63) ≤ 𝐶𝑈a (𝑦)/ℓˆ∗ (𝑤) 𝑤 ≥ 2𝑦 > 0. By (7.62),
Í∞ 𝑤=0
𝑔𝛺 (𝑤, 𝑦)𝜇+ (𝑤) is bounded above by a constant multiple of
𝑦/2 ∞ ∑︁ 𝑈a (𝑦) ∑︁ 𝑦𝜇+ (𝑦/2) 1 𝑉d (𝑤)𝜇+ (𝑤) + 𝜇+ (𝑤) + 𝑈a (𝑦) ∗ ˆ 𝑦 𝑤=0 | 𝐴(𝑦)| 𝑤=2𝑦 ℓ (𝑤)
(7.64)
for 𝑦 ≥ 𝑥0 . The first two terms above approach zero as 𝑦 → ∞, for by L(3.4) 𝑉d (𝑤)𝜇+ (𝑤) = 𝑜(1/𝑈a (𝑤)), while the last term tends to unity since the second sum equals ℓˆ♯ (2𝑦) ∼ 1/𝑈a (𝑦). Thus (ii) follows. □
7.4.2 Asymptotic Forms of 𝒖 a and 𝑷0 (𝚲𝑹 ) Throughout this subsection we suppose (7.60) holds, so that the results obtained in the preceding subsection are applicable; in particular, we have the bound on 𝑔𝛺 (𝑥, 𝑦) in the last lemma, as well as the following bounds 𝑢 a (𝑦) = 𝑜 𝑈a (𝑦) 𝑦 ; and (7.65) 𝑃0 (Λ 𝑦 ) ≍ 𝑃0 [𝜎𝑦 < 𝑇] ∼ 𝑣◦ 𝑢 a (𝑦)/𝑎(−𝑦). Recalling 𝐵(𝑅) = R \ [0, 𝑅], one sees that 𝑃 𝑥 (Λ𝑅 ) =
𝑅 ∑︁
𝑔 𝐵(𝑅) (𝑥, 𝑅 − 𝑤)𝜇+ (𝑤).
𝑤=0
For each 𝑟 = 1, 2, . . ., 𝑔 𝐵(𝑟) (𝑥, 𝑟 − 𝑦) is symmetric in 𝑥, 𝑦 ∈ 𝐵(𝑟): 𝑔 𝐵(𝑟) (𝑥, 𝑟 − 𝑦) = 𝑔 𝐵(𝑟) (𝑦, 𝑟 − 𝑥),
(7.66)
ˆ since both sides are equal to 𝑔−𝐵(𝑟) (−𝑟 + 𝑦, −𝑥). Recall ∫ ∞that under (NRS), 𝑍 is r.s., ∗ ∗ ˆ ˆ ˆ ˆ 𝑣d (𝑥) ∼ 1/ℓ (𝑥) and 𝑈a (𝑥) ∼ 1/ℓ♯ (𝑥), where ℓ♯ (𝑡) = 𝑡 𝜇+ (𝑡) d𝑡/ℓ (𝑡). By the dual of Theorem 7.1.1 and Remark 7.3.2(b) it follows that under (7.60), 𝑎(−𝑥) − 𝑎(𝑥) ∼
−1 1 𝑉d (𝑥)𝑈a (𝑥) ∼ ∼ . 𝐴(𝑥) ℓˆ∗ (𝑥) ℓˆ♯ (𝑥) 𝑥
(7.67)
The next lemma is crucial for proving Theorem 7.1.4. Recall that 𝑁 (𝑅) = 𝜎(𝑅,∞) − 1 and that the continuity condition (7.11), stronger than dominated variation, states lim 𝛿↑1 lim sup 𝑥→∞ 𝜇+ (𝛿𝑥)/𝜇+ (𝑥) = 1. Lemma 7.4.5 Suppose that (7.60) holds and 𝜇+ varies dominatedly. Then for some constant 𝐶, 𝑃0 𝑆 𝑁 (𝑅) ≥ 18 𝑅, Λ𝑅 ≤ 𝐶𝑈a (𝑅)𝜇+ (𝑅) + 𝑜 𝑃0 (Λ𝑅/2 ) , (7.68) and if one further supposes the continuity condition (7.11), 𝑃0 𝑆 𝑁 (𝑅) ≥ 81 𝑅, Λ𝑅 = 𝑜 (𝑈a (𝑅)𝜇+ (𝑅)) + 𝑜 𝑃0 (Λ𝑅/2 ) .
(7.69)
Proof We use the representation ∑︁
𝑃0 𝑆 𝑁 (𝑅) ≥ 81 𝑅, Λ𝑅 =
𝑔 𝐵(𝑅) (0, 𝑅 − 𝑤)𝜇+ (𝑤).
(7.70)
0≤𝑤≤𝑅/8
Splitting the r.w. paths by the landing points, 𝑦 say, when 𝑆 (started at the origin) exits [0, 12 𝑅], we obtain for 0 ≤ 𝑤 < 𝑅/8, ∑︁ 𝑔 𝐵(𝑅) (0, 𝑅 − 𝑤) = 𝑃0 [𝑆 𝜎𝐵(𝑅/2) = 𝑦]𝑔 𝐵(𝑅) (𝑦, 𝑅 − 𝑤). 𝑅/2 0. If 𝑞 = 0, suppose that 𝑘 𝜆 := lim ℓˆ∗ (𝑥)/ℓˆ∗ (𝑐 𝑛 ) exists and u 𝐵 (𝑥) > 0 for all 𝑥 ∈ Z. Then, as 𝑥 ∧ 𝑛 → ∞ so that 𝑥 = 𝑜(𝑐 𝑛 ) and 𝐿(𝑥)/𝐿(𝑐 𝑛 ) → 𝜆 ∈ [0, 1] if 𝑞 > 0 and lim inf ℓ ∗ (𝑥)/ℓ ∗ (𝑐 𝑛 ) > 0 if 𝑞 = 0, 𝑃 𝑥 [𝑌𝑛 (1) ∈ d𝜂 | 𝜎𝐵 > 𝑛] =⇒ 2−1 [1 + sgn(𝜂)𝑘 𝜆 ] |𝜂|e−𝜂
2 /2𝑐 ♯
d𝜂/𝑐 ♯ .
(9.40)
Remark 9.2.12 The local versions of (9.33), (9.35) and (9.40) hold (see the proof of Proposition 9.6.1). As in the proof of Proposition 9.6.3,4 one can accordingly show that if 𝜆 ′ = lim 𝐿(|𝑦|)/𝐿(𝑐 𝑛 ) and 0 < 𝑞 < 1, then for 𝑥 ∨ |𝑦| = 𝑜(𝑐 𝑛 ) with 𝑥 > −𝑀, √︂ 𝑃−𝑦 [𝜎𝐵 > 𝑛] 𝜋 [1 + sgn(𝑦)𝑘 𝜆 𝑘 𝜆′ ] . 𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎𝐵 > 𝑛] ∼ 8𝑐 ♯ 𝑐𝑛 4 One needs to perform some additional algebraic manipulations.
Proof (of Proposition 9.2.9) The second relation of (i) follows from the first (see the proof of Lemma 9.3.5). For the verification of the latter, from (9.30) and (9.31) one infers that ∑︁ 𝑃 𝑥 [𝜎𝐵 > 𝑛] ∑︁ 𝑃−𝑧 [𝜎−𝐵 > 𝑛] = 𝑔 𝐵 (𝑥, 𝑦) 𝑃−𝑧 [𝑆 𝑛+1 = −𝑦 | 𝜎−𝐵 > 𝑛] . (9.41) 𝑃0 [𝜎0 > 𝑛] 𝑃0 [𝜎0 > 𝑛] 𝑧 ∈𝐵 𝑦 ∈Z Í The last ratio approaches u 𝐵 (−𝑧) (see (9.46)) and 𝑧 ∈𝐵 u−𝐵 (−𝑧) = 1. Let 𝑚 − /𝑚 → 𝑞 > 0 and u 𝐵 (𝑥) > 0. Then by (9.36), (9.6) and Proposition 4.3.3, 𝑔 𝐵 (𝑥, 𝑦) 1 − 𝐿 (𝑥)/𝐿 (𝑐 𝑛 ) for 𝑦 < −𝜀𝑐 𝑛 , = (9.42) 1 + 𝑝𝐿(𝑥)/[𝑞𝐿(𝑐 𝑛 )] for 𝑦 > 𝜀𝑐 𝑛 , u 𝐵 (𝑥){1 + 𝑜(1)} provided 0 < 𝑥 = 𝑜(𝑐 𝑛 ). With the help of the bound 𝑔 𝐵 (𝑥, 𝑦) ≤ 2𝑎(𝑥) ¯ ≍ 𝑎(𝑥) together with (9.42) and the above expression of 𝑃 𝑥 [𝜎𝐵 > 𝑛]/𝑃0 [𝜎0 > 𝑛], an 𝑛,𝐵 application of the weak convergence P−𝑧 ⇒ P0 deduces the first relation of (i). For the proof of (ii) we may suppose 𝐵 ⊂ (−∞, 0]. Let 𝑚 − /𝑚 → 0. Then 𝑃 𝑥 [𝜎𝐵 > 𝑛] = 𝑃 𝑥−1 [𝑇 > 𝑛] +
𝑛 ∑︁ ∑︁
𝑃 𝑥 [𝑆 𝑘 = 𝑦, 𝜎(−∞,0] = 𝑘]𝑃 𝑦 [𝜎𝐵 > 𝑛 − 𝑘]
𝑘=1 𝑦∉𝐵
for 𝑥 > 0. Observe that the above double sum restricted to 𝑘 > 𝜀𝑛 is negligible in comparison with 𝑃 𝑥 [𝑇 > 𝑛] as 𝑛 → ∞ for every 𝜀 > 0. Then, using (9.32) and 2𝑎(𝑥) ¯ ∼ 𝑎(−𝑥) (𝑥 → ∞), valid under the present assumption, one sees that for any 𝑀 > 1, the double sum may be written as 𝜀𝑛 ∑︁
∑︁
𝑃 𝑥 [𝑆 𝑘 = 𝑦, 𝜎(−∞,0] = 𝑘]𝑃 𝑦 [𝜎𝐵 > 𝑛] +
𝑘=1 −𝑀 𝑥 𝑛]), 𝑛
Í 𝑥 where 𝑟 𝑀, 𝑥 ≤ 𝐶 𝑦 𝑛] ∼ u 𝐵 (𝑦)𝑃0 [𝜎0 > 𝑛] for −𝑀𝑥 < 𝑦 ≤ 0, provided 𝐿(𝑥)/𝐿(𝑐 𝑛 ) → 0, while by the dual Í 𝑥 of (9.9), 𝑦∉𝐵 𝐻 (−∞,0] (𝑦) u 𝐵 (𝑦) = u 𝐵 (𝑥) for 𝑥 > 0. These together verify the second half of (ii). The first half is similar (but much simpler). □ Remark 9.2.13 Let 𝛼 = 1 (with 𝜌 𝜌ˆ > 0). If 𝑎(𝑦 +1) − 𝑎(𝑦) = 𝑂 (𝑎(𝑦)/|𝑦|) (|𝑦| → ∞) (see Proposition 4.3.5 for a sufficient condition), then by (9.41) it follows that for |𝑥| = 𝑜(𝑐 𝑛 ) 𝑃 𝑥 [𝜎𝐵 > 𝑛] ∼ u 𝐵 (𝑥)𝑃0 [𝜎0 > 𝑛] and the formulae asserted above for 𝑥 fixed are extended to 𝑥 = 𝑜(𝑐 𝑛 ) (observe 𝑎(𝑥)𝐿(𝑥) → ∞).
9.3 Proof of Theorem 9.2.1 The proof will be given by a series of lemmas. Let 𝑐 𝑛 be given as in (2.17) and write 𝑥 𝑛 for 𝑥/𝑐 𝑛 and similarly for 𝑦 𝑛 . Let 𝜏𝑛𝐵 stand for the epoch when the first excursion of length larger than 𝑛 starts: 𝜏𝑛𝐵 = inf{𝑘 : 𝑆 𝑘+ 𝑗 ∉ 𝐵 for 𝑗 = 1, . . . , 𝑛}.
(9.43)
The following lemma, adapted from [10, Lemma 3.1], is central to the present approach. Lemma 9.3.1 For any positive integers 𝑛 and 𝑚 and real numbers 𝑎 1 , . . . , 𝑎 𝑚 , 𝑃 𝑥 [𝑆 𝜏𝑛𝐵 +𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚] = 𝑃 𝑥 [𝑆 𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚 | 𝜎𝐵 > 𝑛]𝑃 𝑥 [𝜎𝐵 > 𝑛] ∑︁ 𝑃 𝑥 [𝑆 𝜏𝑛𝐵 = 𝑦, 𝜎𝐵 ≤ 𝑛]𝑃 𝑦 [𝑆 𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚 | 𝜎𝐵 > 𝑛]. + 𝑦 ∈𝐵
Proof Writing 𝜏𝑛 for 𝜏𝑛𝐵 , one has 𝑃 𝑥 [𝑆 𝜏𝑛 +𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚] ∞ ∑︁ ∑︁ 𝑃 𝑥 [𝑆 𝑗+𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚 | 𝜏𝑛 = 𝑗, 𝑆 𝑗 = 𝑦]𝑃 𝑥 [𝜏𝑛 = 𝑗, 𝑆 𝑗 = 𝑦]. = 𝑗=0 𝑦 ∈Z
Ñ 𝑗−1 For 𝑗 ≥ 1, observe that {𝜏𝑛 ≥ 𝑗 } = 𝑖=0 {𝑖 + 1 ≤ ∃ 𝑟 ≤ 𝑗 ∧ (𝑖 + 𝑛), 𝑆𝑟 ∈ 𝐵}, which depends only on 𝑋1 , . . . , 𝑋 𝑗 , and that {𝜏𝑛 = 𝑗, 𝑆 𝑗 = 𝑦} = {𝜏𝑛 ≥ 𝑗 } ∩ {𝑆 𝑗 = 𝑦, 𝑆 𝑗+𝑘 ∉ 𝐵, 𝑘 = 1, . . . , 𝑛}
(𝑦 ∈ 𝐵).
One then infers that the conditional probability above is expressed as 𝑃 𝑦 [𝑆 𝑘 ≤ 𝑎 𝑘 , 𝑘 = 1, . . . , 𝑚 | 𝜎𝐵 > 𝑛]. Í Since ∞ 𝑗=1 𝑃 𝑥 [𝜏𝑛 = 𝑗, 𝑆 𝑗 = 𝑦] = 𝑃 𝑥 [𝑆 𝜏𝑛 = 𝑦, 𝜎𝐵 ≤ 𝑛], 𝑃 𝑥 [𝑆 𝑗 ∈ 𝐵 | 𝜏𝑛 = 𝑗] = 1 (𝑦 ∈ 𝐵, 𝑗 > 0) and {𝜏𝑛 = 0} = {𝜎𝐵 > 𝑛}, we obtain the asserted identity. □ Let Pexc(𝑛,𝐵) stand for the probability measure on 𝐷 [0,∞) induced from 𝑃 𝑥 by the 𝑥 excursion process 𝑌𝑛exc(𝑛,𝐵) (𝑡) := 𝑆 𝜏𝑛𝐵 + ⌊𝑛𝑡 ⌋ , 𝑡 ≥ 0. Lemma 9.3.2 Let 𝜎 2 = ∞. Then for each 𝑥 with u 𝐵 (𝑥) > 0, Pexc(𝑛,𝐵) ⇒ P0 . In 𝑥 exc(𝑛, {0}) 𝑛, {0} ⇒ P0 . = P0 particular P0 Proof For 𝑓 ∈ 𝐷 [0,∞) , we have denoted by 𝔱 𝑓 the left end point of the first excursion interval (away from zero) of 𝑓 of length > 1. Put 𝐵(𝑛) = ∪ 𝑦 ∈𝐵 [𝑦 − 21 , 𝑦 + 12 ]/𝑐 𝑛 and denote by 𝔱 𝐵(𝑛) ( 𝑓 ) the left point of the first excursion away from 𝐵(𝑛) of length larger than 1. Note that 𝔱 𝐵(𝑛) (𝑌𝑛exc( 𝐵) ) = 𝜏𝑛𝐵 , the augmentation of 𝐵 having no effect
on the excursion intervals of Z-valued paths. Define the map ℎ and ℎ 𝑛 from 𝐷 [0,∞) into itself, respectively, by ℎ( 𝑓 ) = 𝑓 (𝔱 𝑓 + · )
and
ℎ 𝐵(𝑛) ( 𝑓 ) = 𝑓 (𝔱 𝐵(𝑛) + · ), 𝑓
so that P0 = 𝑃𝑌0 ◦ ℎ(𝑌 ) −1 and 𝑌𝑛exc( 𝐵) = ℎ 𝐵(𝑛) (𝑌𝑛 ). Since 𝑃 𝑥 ◦ 𝑌𝑛−1 ⇒ 𝑃𝑌0 , by the continuity theorem (Theorem 5.5 of [7]) it suffices to show that if 𝑓𝑛 → 𝑓 , then ℎ 𝐵(𝑛) ( 𝑓𝑛 ) → ℎ( 𝑓 ) for P0 -almost every 𝑓 ∈ 𝐷 [0,∞) for any sequence ( 𝑓𝑛 ) ⊂ 𝐷 [0,∞) , where the convergence is, of course, in Skorokhod’s topology. Below we apply some sample path properties of 𝑌 , whose proofs are postponed to the end of this section. Since 𝑌 can reach zero if and only if it continuously approaches zero (a.s.) (Lemmas 9.3.8) and since every excursion interval of 𝑓 to the left of 𝔱 is of length less than 1 (Lemma 9.3.9), both 𝔱 : 𝐷 [0,∞) ↦→ (0, ∞) and ℎ : 𝐷 [0,∞) ↦→ 𝐷 [0,∞) are continuous. One also sees that if 𝑓𝑛 → 𝑓 , then for 𝑃𝑌𝑥 ◦ 𝑌 −1 -almost all 𝑓 , → 0 and 𝔱 𝐵(𝑛) → 𝔱𝑓 . 𝔱 𝐵(𝑛) − 𝔱 𝐵(𝑛) 𝑓 𝑓 𝑓𝑛 is defined with 𝐵(𝑛) instead of 𝐵/𝑐 𝑛 . Since Here it is effectively applied that 𝔱 𝐵(𝑛) 𝑓 𝐵(𝑛) ◦ 𝑓𝑛 (0) = 𝑂 (1/𝑐 𝑛 ) → 0 = ℎ ◦ 𝑓 (0), our task reduces to showing that for ℎ sequences of 𝑓𝑛 ∈ 𝐷 [0,∞) and positive real numbers 𝑠 𝑛 , if 𝑓𝑛 → 𝑓 , 𝑠 𝑛 → 𝑠 and 𝑓𝑛 (𝑠 𝑛 ) → 𝑓 (𝑠), then 𝑓𝑛 (𝑠 𝑛 + ·) → 𝑓 (𝑠 + ·). This is an elementary matter concerning the convergence of a sequence in the space (𝐷 [0,∞) , 𝑑), so we omit its proof. □ Lemma 9.3.3 Suppose 𝐹 is recurrent. Then ♯ 𝜅 𝑐 /𝑛 𝑃0 [𝜎0 > 𝑛] ∼ 𝛼 𝑛 ˜ 1/[1/ 𝐿] ∗ (𝑛)
if 1 < 𝛼 ≤ 2, if 𝛼 = 1.
∫ −1 and 𝜅 ♯ is given in (9.17). ˜ ˜ ˜ ∗ (𝑛) = 𝑛 [𝑡 𝐿(𝑡)] = 𝐿 (𝑐 𝑘 ) and [1/ 𝐿] Here 𝐿(𝑘) 𝛼 0 Í∞ Í Proof Put 𝜑(𝑠) = 𝑛=0 𝑃0 [𝑆 𝑛 = 0]e−𝑠𝑛 and 𝜔(𝑠) = 0∞ 𝑃0 [𝜎0 > 𝑛]e−𝑠𝑛 . Then 𝜑(𝑠)𝜔(𝑠) = 1/(1 − e−𝑠 ). When 1 < 𝛼 ≤ 2, from the local limit theorem (9.1), Í −1/𝛼 , and a ˜ one obtains 𝜑(𝑠) ∼ 𝔭1 (0) e−𝑠𝑛 /𝑐 𝑛 ∼ 𝔭1 (0)Γ(1 − 1/𝛼)𝑠1/𝛼−1 [ 𝐿(1/𝑠)] standard Tauberian argument leads to the result for 1 < 𝛼 ≤ 2. ˜ Let 𝛼 = 1. Then by (9.1) again one has 𝑃[𝑆 𝑛 = 0] ∼ 𝔭1 (0)/𝑛 𝐿(𝑛), so that ˜ ∗ (1/𝑠), hence 𝜑(𝑠) ∼ 𝔭1 (0) [1/ 𝐿]
˜ ∗ (1/𝑠) 𝑠𝜔(𝑠) ∼ 1/[1/ 𝐿]
(𝑠 ↓ 0).
To use the Tauberian argument, we need a Tauberian condition. To this end we apply Lemma 9.3.2 to see that the logarithm of 𝑟 (𝑡) := 𝑃0 [𝜎0 > 𝑡] is slowly decreasing,
that is inf 𝑡 ′ >𝑡:𝑡 ′ −𝑡 ≤ 𝜀𝑡 log[𝑟 (𝑡 ′)/𝑟 (𝑡)] → 0 as 𝑡 → ∞ and 𝜀 ↓ 0, which serves as the Tauberian condition (cf. [8, Theorem 1.7.5]). This condition is paraphrased as lim lim sup 𝑃0 [𝑡 ≤ 𝜎0 ≤ 𝑡 + 𝜀𝑡 | 𝜎0 > 𝑡] = 0, 𝜀↓0
𝑡→∞
□
whose validity one can easily ascertain by using Lemma 9.3.2.
Remark 9.3.4 When 𝛼 = 𝜌 = 1, 𝑆 ⌊𝑛𝑡 ⌋ /𝐵𝑛 converges to 𝑡 locally uniformly in probability (cf. Theorem A.4.1(d)), so that by Lemma 9.3.1 the conditional process 𝑆 ⌊𝑛· ⌋ /𝐵𝑛 under the law 𝑃0 [· | 𝜎0 > 𝑛] also converges to 𝑡. The local limit theorem implies merely 𝑃0 [𝑆 𝑛 = 0] = 𝑜(1/𝑐 𝑛 ). Even though this does not allow us to follow the proof above, we can still prove that 𝑃 𝑥 [𝑆 ⌊𝑛· ⌋ /𝐵𝑛 ∈ · | 𝜎𝐵 > 𝑛] converges to the law concentrating at the single path 𝑓 (𝑡) ≡ 𝑡. To see this we show that (∗)
𝑃0 [𝜎0 > 𝑛] is s.v. as 𝑛 → ∞.
To this end, it suffices, in view of the proof of Lemma 9.3.3, to show 𝜑(𝑠) is s.v. as 𝑠 ↓ 0. For simplicity, assume 𝐹 is strongly aperiodic. Then 2𝜋𝜑(𝑠) =
∞ ∑︁ 𝑛=0
By 1 − 𝜓(𝜃) =
e−𝑠𝑛
∫
∫
𝜋
𝛿
𝜓 𝑛 (𝜃) d𝜃 ∼ ℜ −𝛿
−𝜋
𝜋 2 𝜃 [𝐿(1/𝜃)
d𝜃 1 − e−𝑠 𝜓(𝜃)
for any 𝛿 > 0.
+ 𝑖 𝐴(1/𝜃)]{1 + 𝑜(1)} (𝜃 → 0), one infers ∫
𝛿
2𝜋𝜑(𝑠) ∼ 𝜁 (𝑠; 𝛿) := −𝛿
[𝑠 +
𝑠 + 𝜋2 𝜃 𝐿(1/𝜃) 𝜋 2 2 2 𝜃 𝐿(1/𝜃)] + [𝜃 𝐴(1/𝜃)]
d𝜃.
Since 𝐴 is s.v., one has 𝜁 (𝜆𝑠; 𝛿) ∼ 𝜁 (𝑠; 𝛿/𝜆) ∼ 𝜁 (𝑠; 𝛿) for any 𝜆 > 0. Thus 𝜑(𝑠) is s.v., as desired. Now (∗) is applicable. For the rest, one can proceed as in the sequel. In order to obtain the convergence of the conditional process P𝑛,𝐵 to P0 we must 𝑥 verify the next lemma (due to Belkin [3]). For this purpose we employ the analytic approach based on the harmonic analysis as in [2] but with the help of Lemma 9.3.2. Lemma 9.3.5 For any 𝑥 ∈ Z with 𝑎 † (𝑥) > 0, as 𝑛 → ∞, {0} P𝑛, [𝔶1 ∈ ·] =⇒ P0 [𝔶1 ∈ ·]. 𝑥
Proof Put 𝑟 𝑛𝑥 = 𝑃 𝑥 [𝜎0 > 𝑛] and 𝑓𝑛𝑥 = 𝑃 𝑥 [𝜎0 = 𝑛]. We first make the decomposition ! 𝑛 ∑︁ 1 𝑥 𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎0 > 𝑛] = 𝑥 𝑃 𝑥 [𝑆 𝑛 = 𝑦] − 𝑓 𝑘 𝑃0 [𝑆 𝑛−𝑘 = 𝑦] , 𝑛 ≥ 1. 𝑟𝑛 𝑘=1 𝑖 𝜃 𝑆𝑛 | 𝜎 > 𝑛]. Then, from (Note both sides vanish for 𝑦 = 0.) Put 𝜙 𝑛,𝑥 (𝜃) = 𝐸 𝑥 [eÍ 0 𝑛 𝑖 𝜃 𝑥 the above identity one obtains 𝜙 𝑛,𝑥 (𝜃) = 𝜓 (𝜃)e − 𝑛𝑘=1 𝑓 𝑘𝑥 𝜓 𝑛−𝑘 (𝜃) /𝑟 𝑛𝑥 , and after summing by parts one gets
𝜙 𝑛,𝑥 (𝜃) = [e^{𝑖𝜃𝑥} − 1] 𝜓^𝑛 (𝜃)/𝑟 𝑛^𝑥 + 1 − ∑_{𝑘=0}^{𝑛−1} (𝑟 𝑘^𝑥/𝑟 𝑛^𝑥) [𝜓^{𝑛−𝑘−1} (𝜃) − 𝜓^{𝑛−𝑘} (𝜃)]. (9.44)
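The summation by parts behind (9.44) can be spelled out as follows; this verification is an added aside (it uses 𝑓 𝑘^𝑥 = 𝑟 𝑘−1^𝑥 − 𝑟 𝑘^𝑥 and 𝑟 0^𝑥 = 𝑃 𝑥 [𝜎0 > 0] = 1):
\[
\sum_{k=1}^{n}f_k^{x}\psi^{n-k}
=\sum_{k=1}^{n}\bigl(r_{k-1}^{x}-r_k^{x}\bigr)\psi^{n-k}
=\sum_{k=0}^{n-1}r_k^{x}\bigl(\psi^{n-k-1}-\psi^{n-k}\bigr)+r_0^{x}\psi^{n}-r_n^{x},
\]
\[
\text{hence}\qquad
\phi_{n,x}(\theta)=\frac{\psi^{n}(\theta)\,e^{i\theta x}-\sum_{k=1}^{n}f_k^{x}\psi^{n-k}(\theta)}{r_n^{x}}
=\frac{e^{i\theta x}-1}{r_n^{x}}\,\psi^{n}(\theta)+1-\sum_{k=0}^{n-1}\frac{r_k^{x}}{r_n^{x}}\bigl(\psi^{n-k-1}(\theta)-\psi^{n-k}(\theta)\bigr).
\]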
Now, putting Ψ𝑛𝑥 (𝜃) = 𝐸 𝑥 [e𝑖 𝜃𝑌𝑛 (1) | 𝜎0 > 𝑛] = 𝜙 𝑛,𝑥 (𝜃/𝑐 𝑛 ), this is rewritten as Ψ𝑛𝑥 (𝜃)
=1−
𝑛−1 𝑥 ∑︁ 𝑟 𝑘 𝑥 𝑟 𝑛 𝑘=0
𝜃 1−𝜓 𝑐𝑛
𝜓
𝑛−𝑘−1
e𝑖 𝜃 𝑥/𝑐𝑛 − 1 𝑛 𝜃 𝜃 + 𝜓 . 𝑐𝑛 𝑟 𝑛𝑥 𝑐𝑛
(9.45)
By Lemma 9.3.3 the last term tends to zero as 𝑛 → ∞ (note 𝑛/𝑐2𝑛 ∼ 1/𝐿(𝑐 𝑛 ) → 0 for 𝛼 = 2). Let 𝑎 † (𝑥) > 0. Then, by (2.7), 𝑟 𝑘𝑥 /𝑟 𝑛𝑥 ∼ 𝑟 𝑛0 /𝑟 𝑛0 if 𝜀𝑛 < 𝑘 < 𝑛, and by (4.18, 4.19) (and (4.30) if 𝛼 = 1) 1 − 𝜓(𝜃) ≍ 𝜃 𝛼 𝐿(1/𝜃), which shows Í (1 − 𝜓(𝜃/𝑐 𝑛 )) = 𝑂 (1/𝑛) (recall 𝑐 𝑛𝛼 /𝐿(𝑐 𝑛 ) ∼ 𝑛). Noting 1𝜀𝑛 𝑟 𝑘0 ≤ 𝐶𝜀 1/𝛼 𝑛𝑟 𝑛0 we can therefore conclude Ψ𝑛𝑥 (𝜃) = Ψ𝑛0 (𝜃) +𝑜(1). We know Ψ𝑛0 (𝜃) tends to the characteristic function of 𝔮 owing to Lemma 9.3.2. Thus we have for 𝜉 ∈ R, {0} P𝑛, [𝔶1 ≤ 𝜉] = 𝑃 𝑥 [𝑆 𝑛 /𝑐 𝑛 ≤ 𝜉 | 𝜎0 > 𝑛] −→ P0 [𝔶1 ≤ 𝜉]. 𝑥
□
{0} Our proof of the following lemma, which extends the result on P𝑛, in Lemma 𝑥 𝑛,𝐵 9.3.5 to P 𝑥 for any finite set 𝐵, rests on the formula due to Kesten and Spitzer [49] that states 𝑃 𝑥 [𝜎𝐵 > 𝑛] lim = u 𝐵 (𝑥). (9.46) 𝑛→∞ 𝑃0 [𝜎0 > 𝑛]
Lemma 9.3.6 For any 𝑥 ∈ Z with u 𝐵 (𝑥) > 0, 0 lim P𝑛,𝐵 𝑥 [𝔶1 ≤ 𝜉] = P [𝔶1 ≤ 𝜉]
𝑛→∞
(𝜉 ∈ R).
(9.47)
{𝑤} Proof By Lemma 9.3.5 we have P𝑛, ⇒ P0 if u {𝑤} (𝑥) = 𝑎 † (𝑥 − 𝑤) > 0. Let 𝑥 u 𝐵 (𝑥) > 0. Suppose 0 ∈ 𝐵 for simplicity and observe that for any real number 𝜉, the probability P𝑛,𝐵 𝑥 [𝔶1 ≤ 𝜉] is expressed as {0} P𝑛, [𝔶1 ≤ 𝜉] 𝑥
−
𝑃 𝑥 [𝜎0 > 𝑛] 𝑃 𝑥 [𝜎𝐵 > 𝑛]
∞ ∑︁
𝜉𝑐 𝑛 𝑃𝑤 [𝜎0 > 𝑛 − 𝑘] 𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑤, 𝜎𝐵 = 𝑘]P𝑤𝑛−𝑘, {0} 𝔶1 ≤ . 𝑐 𝑛−𝑘 𝑃 𝑥 [𝜎𝐵 > 𝑛] 𝑤∈𝐵,𝑤≠0 𝑘=1 ∑︁
Noting that 𝑃 𝑥 [𝜎𝐵 > 𝑀] → 0 as 𝑀 → ∞ and the ratio in the summand above approaches 𝑎(𝑤)/u 𝐵 (𝑥) for 𝑘 ≤ 𝑀 (for each 𝑀), we pass to the limit as 𝑛 → ∞ to find 0 † lim P𝑛,𝐵 𝑥 [𝔶1 ≤ 𝜉] = P [𝔶1 ≤ 𝜉]{𝑎 (𝑥) − 𝐸 𝑥 [𝑎(𝑆 𝜎𝐵 )]}/u 𝐵 (𝑥).5 Since 𝑎 † (𝑥) − 𝐸 𝑥 [𝑎(𝑆 𝜎𝐵 )] = u 𝐵 (𝑥) under the present supposition that 0 ∈ 𝐵, we 0 obtain P𝑛,𝐵 □ 𝑥 [𝔶1 ≤ 𝜉] → P [𝔶1 ≤ 𝜉]. 5 If 𝑆 is transient, 𝑎 (𝑆 𝜎𝐵 ) is understood to vanish when 𝜎𝐵 = ∞.
Proof (of Theorem 9.2.1) To complete the proof we use Theorem 9.2.2,6 which we shall deduce in Section 9.6 from relation (9.47) together with Lemma 9.3.3. As mentioned previously, Theorem 9.2.2 entails the convergence of finite-dimensional distributions of P𝑛,𝐵 𝑥 , and in particular lim 𝑃 𝑥 [|𝑆 ⌊𝑡 𝑛⌋ | > 𝜂𝑐 𝑛 | 𝜎𝐵 > 𝑛] = P0 [𝔶𝑡 > 𝜂].
𝑛→∞
(9.48)
We show the tightness of the sequence (P𝑛,𝐵 𝑥 ) in the next lemma, and hence conclude □ the convergence P𝑛,𝐵 ⇒ P0 . 𝑥 Lemma 9.3.7 The sequence (P𝑛,𝐵 𝑥 ) is tight for every 𝑥 ∈ Z. Proof If 𝑆 is transient, then 𝑃 𝑥 [𝜎𝐵 > 𝑛] → u 𝐵 (𝑥)/𝐺 (0) and the assertion follows from the result for the unconditional law. Suppose 𝑆 is recurrent in the sequel. Since the excursion paths are right-continuous and start at zero a.s. (P0 ), by (9.48) it follows that for any 𝜀 > 0, as 𝑛 → ∞ and 𝛿 ↓ 0 in this order 𝑃 𝑥 [|𝑆 ⌊ 𝛿𝑛⌋ | > 𝜀𝑐 𝑛 | 𝜎𝐵 > 𝑛] −→ 0.
(9.49)
This intuitively obvious result is crucial to the proof. Put 𝜏𝑛, 𝜀 = inf{𝑘 : |𝑆 𝑘 | > 𝜀𝑐 𝑛 }. In view of the tightness criterion as given by [7, Theorem 14.4], for the proof of the lemma it suffices to show that 𝑃 𝑥 [𝜏𝑛, 𝜀 ≤ 𝛿𝑛 | 𝜎𝐵 > 𝑛] −→ 0 as 𝑛 → 0 and 𝛿 ↓ 0 in this order
(9.50)
′ ′ since (P𝑛,𝐵 𝑦 ) satisfies the tightness criterion uniformly for |𝑦| > 𝜀 𝑐 𝑛 for each 𝜀 > 0 ′ ′ and 𝑃 𝑥 [|𝑆 ⌊ 𝛿𝑛⌋ | > 𝜀 𝑐 𝑛 ] ↑ 1 as 𝜀 ↓ 0 by Lemma 9.3.6. Make the decomposition
𝑃 𝑥 [𝜏𝑛, 𝜀 ≤ 𝛿𝑛, 𝜎𝐵 > 𝑛] ∑︁ ∑︁ = 𝑃 𝑥 [𝜏𝑛, 𝜀 = 𝑘, 𝑆 𝑘 = 𝑦, 𝜎𝐵 > 𝑘]𝑃 𝑦 [𝜎𝐵 > 𝑛 − 𝑘]. |𝑦 |> 𝜀𝑐𝑛 0≤𝑘 ≤ 𝛿𝑛
Using the functional limit theorem for the unconditioned walk, observe that there exists a constant 𝐶 = 𝐶 (𝜀) such that for all (𝑘, 𝑦) in the range of the double sum above 𝑃 𝑦 [𝜎𝐵 > 𝑛 − 𝑘] < 𝐶𝑃 𝑦 [𝑆 ⌊ 𝛿𝑛⌋−𝑘 > 𝜀𝑐 𝑛 , 𝜎𝐵 > 𝑛 − 𝑘] for all sufficiently large 𝑛. Hence 𝑃 𝑥 [𝜏𝑛, 𝜀 < 𝛿𝑛, 𝜎𝐵 > 𝑛] is less than ∑︁ ∑︁ 𝑃 𝑥 [𝜏𝑛, 𝜀 = 𝑘, 𝑆 𝑘 = 𝑦, 𝜎𝐵 > 𝑘]𝑃 𝑦 [𝑆 ⌊ 𝛿𝑛⌋−𝑘 > 𝜀𝑐 𝑛 , 𝜎𝐵 > 𝑛 − 𝑘]. 𝐶 |𝑦 |> 𝜀𝑐𝑛 0≤𝑘< 𝛿𝑛
Noting this double sum equals 𝑃 𝑥 [|𝑆 ⌊ 𝛿𝑛⌋ | > 𝜀𝑐 𝑛 , 𝜎𝐵 > 𝑛], one finds that (9.50) □ follows from (9.49).
6 What we actually need is (9.20) rather than the local theorems such as (9.22) and (9.19).
Some sample path properties of the limit stable process In our proof of Lemma 9.3.2 we have taken for granted the sample property of 𝑌 that 𝑌 can reach any given point if and only if it continuously approaches the point. Here we provide a proof where we use the fact that the bridge 𝑃𝑌𝜉 [· | 𝑌 (1) = 𝜂] has the version, denoted by 𝑃𝑌𝜉 , 𝜂 [·], which is uniquely defined for every pair of 𝜉, 𝜂 ∈ R by (∗)
𝑃𝑌𝜉 , 𝜂 [Γ] = lim 𝑃𝑌𝜉 Γ |𝑌 (1) − 𝜂| < 𝜀 𝜀↓0
valid for every event Γ depending on (𝑌 (𝑡))𝑡 < 𝛿 for any 𝛿 < 1 (cf. [4, §III.3]). Lemma 9.3.8 (i) For every 𝜉, 𝑃𝑌𝜉 [∃𝑡 > 0, 𝑌 (𝑡−) ≠ 𝑌 (𝑡) = 0] = 0. (ii) For all 𝜉, 𝜂 ∈ Z, 𝑃𝑌𝜉 , 𝜂 [∃𝑡 ∈ (0, 1], 𝑌 (𝑡−) ≠ 𝑌 (𝑡) = 0] = 0. (iii) For every 𝜉, 𝑃𝑌𝜉 [∃𝑡 > 0, 𝑌 (𝑡) ≠ 𝑌 (𝑡−) = 0] = 0. Proof For the proof of (i) it is sufficient to show that 𝑃𝑌𝜉 [∃𝑡 > 0, 𝑌 (𝑡−) < −𝜀 and 𝑌 (𝑡) = 0] = 0 for any 𝜀 > 0; 7
(9.51)
by symmetry this entails 𝑃𝑌𝜉 [∃ 𝑡 > 0, |𝑌 (𝑡−)| > 𝜀 and 𝑌 (𝑡) = 0] = 0. The event under the symbol 𝑃𝑌𝜉 in (9.51) is contained in ∪0 0 (automatically holds if 𝛼 > 1). It follows from (i) that 𝑃𝑌𝜉 ,𝜁 [∃𝑡 ∈ (0, 1), 𝑌 (𝑡−) ≠ 𝑌 (𝑡) = 0] = 0 for almost every 𝜁 (w.r.t. the Lebesgue measure), since 𝑃𝑌𝜉 [𝑌 (1) ∈ d𝜁] has a positive continuous density. By virtue of (∗) we accordingly obtain 𝑃𝑌𝜉 , 𝜂 [∃𝑡 ∈ (0, 𝛿), 𝑌 (𝑡−) ≠ 𝑌 (𝑡) = 0] = 0. Letting 𝛿 ↑ 1, we have the desired result. One can easily modify the above argument to deal with the case 𝜌 𝜌ˆ = 0. The proof of (iii) follows from (ii) by time reversal. Put for 0 ≤ 𝑟 < 𝑠 ≤ 1, Γ(𝑟 ,𝑠) (𝑌 ) = {∃𝑡 ∈ (𝑟, 𝑠), 𝑌 (𝑡−) ≠ 𝑌 (𝑡) = 0}, and
Γ∗(𝑟 ,𝑠) (𝑌 ) = {∃𝑡 ∈ (𝑟, 𝑠), 𝑌 (𝑡) ≠ 𝑌 (𝑡−) = 0}.
Then, Γ∗(0,𝑠) (𝑌 ) = {∃𝑡 ∈ (1 − 𝑠, 1), 𝑌 (1 − 𝑡) ≠ 𝑌 ((1 − 𝑡)−) = 0}, so that, on writing 𝑌 ∗ (𝑡) := 𝑌 ((1 − 𝑡)−), Γ∗(0,𝑠) (𝑌 ) = Γ(1−𝑠,1) (𝑌 ∗ ). Since, by a theorem concerning duality, the law of (𝑌 ∗ (𝑡))0≤𝑡 ≤1 under 𝑃𝑌𝜉 , 𝜂 coincides with that of (−𝑌 (𝑡))0≤𝑡 ≤1 under 𝑃𝑌−𝜂,− 𝜉 [4, Corollary II.1.3] and since Γ(𝑟 ,𝑠) (𝑌 ) = Γ(𝑟 ,𝑠) (−𝑌 ), we can conclude that 7 This can be deduced from Proposition III.2.2(i) of [4] since 0 is regular for (0, ∞) w.r.t. Y.
9.4 The Distribution of the Starting Site of a Large Excursion
225
𝑃𝑌𝜉 , 𝜂 [Γ∗(0,𝑠) (𝑌 )] = 𝑃𝑌𝜉 , 𝜂 [Γ(1−𝑠,1) (𝑌 ∗ )] = 𝑃𝑌−𝜂,− 𝜉 [Γ(1−𝑠,1) (−𝑌 )] = 𝑃𝑌𝜉 , 𝜂 [Γ(1−𝑠,1) (𝑌 )] = 0. Thus (iii) has been verified, since 𝑠 can be made arbitrarily close to 1.
□
Lemma 9.3.9 For each 𝜉 ∈ R and 𝑐 > 0, with 𝑃𝑌𝜉 -probability 1 there is no excursion interval, away from zero, of 𝑌 whose length equals 𝑐. Proof For any rational number 𝑟 > 0, let (𝑔, 𝑑) be the excursion interval containing 𝑟, which exists a.s. (𝑃𝑌𝜉 ) because of the trivial fact that 𝑌 (𝑟) ≠ 0 a.s. Since 𝑟 − 𝑔 is measurable w.r.t. F𝑟 := 𝜎(𝑌 (𝑠) : 0 ≤ 𝑠 ≤ 𝑟), it follows that 𝑃𝑌𝜉 [𝑑 − 𝑔 = 𝑐] = ∫ □ 𝑃𝑌𝜂 [𝑌 (𝑐 − 𝑟 + 𝑡) = 0]𝑃𝑌𝜉 [𝑌 (𝑟) ∈ d𝜂, 𝑔 ∈ d𝑡] = 0.
9.4 The Distribution of the Starting Site of a Large Excursion Though not used in this treatise, the following proposition is of independent interest.. Recall 𝜏𝑛𝐵 is the random time defined in (9.43). Proposition 9.4.1 If 𝑆 is recurrent, 𝜎 2 = ∞ and u 𝐵 (𝑥) > 0 for all 𝑥, then lim𝑛→∞ 𝑃 𝑥 [𝑆 𝜏𝑛𝐵 = 𝑦] = u 𝐵 (𝑦) for 𝑦 ∈ 𝐵 and 𝑥 ∈ Z. To prove this result we need to use the following lemma. In the sequel suppose u 𝐵 (𝑥) > 0 for all 𝑥.
Lemma 9.4.2 If 𝑆 is recurrent, for any 𝑥, 𝑦 ∈ Z and 𝑤, 𝑤 ′ ∈ 𝐵, Í∞ 𝑃 𝑥 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑤] lim Í∞𝑚=0 = 1. 𝐵 ′ 𝑛→∞ 𝑚=0 𝑃 𝑦 [𝜏𝑛 ≥ 𝑚, 𝑆 𝑚 = 𝑤 ] Proof It suffices to show the asserted relation when 𝑥 ∈ 𝐵. Note that u 𝐵 (𝑥) > 0 implies 𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑤] > 0 for all 𝑤 ∈ 𝐵. Let 𝑥 ∈ 𝐵 and u 𝐵 (𝑥) > 0. If 𝑚 ≤ 𝑛, then 𝑃 𝑥 [𝜏𝑛 ≥ 𝑚, 𝑆 𝑚 = 𝑤] = 𝑃 𝑥 [𝑆 𝑚 = 𝑤] ∼ 1/𝑐 𝑚 as 𝑚 → ∞. Noting that 𝜎𝐵 ≤ 𝑚 ∧ 𝑛 if 𝜏𝑛 ≥ 𝑚 and 𝑆 𝑚 = 𝑤, we see 𝑃 𝑥 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑤] =
∑︁ 𝑛∧𝑚 ∑︁ 𝑦
𝑃 𝑥 [𝜎𝐵 = 𝑗, 𝑆 𝑗 = 𝑦]𝑃 𝑦 [𝜏𝑛 ≥ 𝑚 − 𝑗, 𝑆 𝑚− 𝑗 = 𝑤].
𝑗=0
Í 𝐵 Put 𝐽𝑛 (𝑥, 𝑤) = ∞ 𝑚=0 𝑃 𝑥 [𝜏𝑛 ≥ 𝑚, 𝑆 𝑚 = 𝑤]. Then after interchanging the order of summation and making a change of variables, we obtain 𝐽𝑛 (𝑥, 𝑤) =
𝑛 ∑︁ ∑︁
𝑃 𝑥 [𝜎𝐵 = 𝑗, 𝑆 𝑗 = 𝑦]𝐽𝑛 (𝑦, 𝑤)
𝑦 ∈𝐵 𝑗=0
=
∑︁ 𝑦 ∈𝐵
𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑦]{1 + 𝑜(1)}𝐽𝑛 (𝑦, 𝑤).
226
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
We apply this last relation to 𝐽𝑛 (𝑦, 𝑤) and iterate the same procedure. The trace of the r.w. on 𝐵 constitutes a Markov chain with the one-step transition probability 𝑝 𝑥,𝑦 = 𝑃 𝑥 [𝑆 𝜎𝐵 = 𝑦], which is ergodic. If 𝜋 stands for the unique invariant measure of the chain, then it follows that ∑︁ 𝐽𝑛 (𝑥, 𝑤) = 𝜋(𝑦){1 + 𝑜(1)}𝐽𝑛 (𝑦, 𝑤), 𝑦 ∈𝐵
showing 𝐽𝑛 (𝑥, 𝑤) ∼ 𝐽𝑛 (𝑦, 𝑤) for every 𝑥, 𝑦 ∈ 𝐵. This result applies to the dual walk so that 𝐽ˆ𝑛 (𝑥, 𝑤) ∼ 𝐽ˆ𝑛 (𝑦, 𝑤), which in turn yields 𝐽ˆ𝑛 (𝑤, 𝑥) ∼ 𝐽ˆ𝑛 (𝑤, 𝑦). Since 𝑤 is □ arbitrary, we have the relation of the lemma. Proof (of Proposition 9.4.1) We may suppose 0 ∈ 𝐵. Put 𝜁 𝑛 = inf{𝑘 > 𝜏𝑛𝐵 : 𝑆 𝑘 = 0}, the epoch when the excursion that begins at 𝜏𝑛𝐵 terminates. Í Put ℎ 𝑛 (𝑦, 𝑤) = 𝑃 𝑥 [𝑆 𝜏𝑛𝐵 = 𝑦, 𝑆 𝜁𝑛 = 𝑤] (𝑤 ∈ 𝐵) so that 𝑃 𝑥 [𝑆 𝜏𝑛𝐵 = 𝑦] = 𝑤∈𝐵 ℎ 𝑛 (𝑦, 𝑤). We have ℎ 𝑛 (𝑦, 𝑤) = =
∞ ∑︁ 𝑚=𝑛+1 ∞ ∑︁
𝑃 𝑥 [𝑆 𝜏𝑛𝐵 = 𝑦, 𝜁 𝑛 = 𝑚, 𝑆 𝑚 = 𝑤] 𝑚 ∑︁
𝑃ˆ 𝑤 [𝜎𝐵 = 𝑘, 𝑆 𝜎𝐵 = 𝑦] 𝑃ˆ 𝑦 [𝜏𝑛𝐵 ≥ 𝑚 − 𝑘, 𝑆 𝑚−𝑘 = 𝑥],
𝑚=𝑛+1 𝑘=𝑛+1
where the second equality is obtained by considering the dual walk. After changing the order of summation, the repeated sum above becomes 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛, 𝑆 𝜎𝐵 = 𝑦]
∞ ∑︁
𝑃ˆ 𝑦 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑥].
𝑚=0
Í [𝑆 𝑛 = 𝑧, 𝑆 𝜎𝐵 = 𝑦 | 𝜎𝐵 > 𝑛] 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛], Write the first probability above as 𝑧 𝑃ˆ 𝑤Í ˆ which we further rewrite as 𝑃𝑤 [𝜎𝐵 > 𝑛] 𝑧∉𝐵 𝑃ˆ 𝑤 [𝑆 𝑛 = 𝑧 | 𝜎𝐵 > 𝑛] 𝑃ˆ 𝑧 [𝑆 𝜎𝐵 = 𝑦]. Since 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛] ≍ 𝑃ˆ 𝑤 [𝜎0 > 𝑛] and {𝜎𝐵 > 𝑛} ⊂ {𝜎0 > 𝑛}, from Lemma 9.3.5 √ we have 𝑃ˆ 𝑤 [|𝑆 𝑛 | > 𝑐 𝑛 | 𝜎𝐵 > 𝑛] → 1 as 𝑛 → ∞. Combining this with the fact that 𝑃 𝑧 [𝑆 𝜎𝐵 = 𝑦] → u 𝐵 (𝑦) (as |𝑧| → ∞ under 𝜎 2 = ∞) shows that as 𝑛 → ∞, 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛, 𝑆 𝜎𝐵 = 𝑦] = u 𝐵 (𝑦) 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛]{1 + 𝑜(1)}, hence ℎ 𝑛 (𝑦, 𝑤) = u 𝐵 (𝑦) 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛]
∞ ∑︁ 𝑚=0
𝑃ˆ 𝑦 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑥]{1 + 𝑜(1)}.
9.5 An Application to the Escape Probabilities From a Finite Set
227
By Lemma 9.4.2 we infer that the probability under the summation sign can be replaced by 𝑃ˆ 𝑥 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑤], so that ℎ 𝑛 (𝑦, 𝑤) = u 𝐵 (𝑦) 𝑃ˆ 𝑤 [𝜎𝐵 > 𝑛]
∞ ∑︁
𝑃ˆ 𝑥 [𝜏𝑛𝐵 ≥ 𝑚, 𝑆 𝑚 = 𝑤]{1 + 𝑜(1)}.
(9.52)
𝑚=0
Í∞ ˆ Í 𝐵 ˆ Since 1 = 𝑃ˆ 𝑥 [𝜏𝑛𝐵 < ∞] Í = 𝑤∈𝐵 𝑚=0 𝑃 𝑥 [𝜏𝑛 ≥ 𝑚, 𝑆 𝑚 = 𝑤] 𝑃𝑤 [𝜎𝐵 > 𝑛], we can □ accordingly conclude 𝑤∈𝐵 ℎ 𝑛 (𝑦, 𝑤) ∼ u 𝐵 (𝑦), the relation of the proposition.
9.5 An Application to the Escape Probabilities From a Finite Set Shimura [68] (in the case 𝜎 2 < ∞) and Doney [19] (under (AS) with some restriction) applied Bolthausen’s idea [10] to the r.w. conditioned on various events that depend only on the path (𝑆 𝑛 )0≤𝑛≤ 𝜎𝛺 and obtained the corresponding conditional limit theorems. Here we follow them to derive asymptotics of the escape probability from a finite set 𝐵 ⊂ Z that are addressed in Section 5.6 in the special case 𝐵 = {0} under 𝑚 + /𝑚 → 0. We continue to suppose that (AS) holds and 𝜌 𝜌ˆ ≠ 0 if 𝛼 = 1. We also suppose that 𝐹 is recurrent, the present problem being trivial for transient 𝐹. As in Section 5.6, let 𝑄, 𝑅 be two positive integers and put 𝛥(𝑄, 𝑅) = (−∞, −𝑄] ∪ [𝑅, ∞). Proposition 9.5.1 As 𝑅 → ∞ along with 𝑄/𝑅 → 𝜆 ∈ (0, ∞), 𝐶 (𝜆)𝑅 1−𝛼 𝐿 (𝑅) if 1 < 𝛼 ≤ 2, 𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ ˜ ∗ (𝑅) 1/[1/ 𝐿] if 𝛼 = 1 and 𝐹 is recurrent,
(9.53)
for a certain constant 𝐶 (𝜆) > 0 (see Remark 9.5.3 for 𝐶 (𝜆)). The proof is given after Lemma 9.5.2 below. The result will be extended to a general finite set at the end of the section. We consider the conditioning on the following two events: ◦ Γ𝑛,𝑠 = {𝜎0 > 𝑠𝑛}
and
∗ = {𝜎𝛥(𝑄,𝑅) < 𝜎0 }. Γ𝑄,𝑅
𝑓
For 𝑓 ∈ 𝐷 [0,∞) denote by 𝜎𝐾 the first hitting time to 𝐾 ⊂ R of the path 𝑓 and put 𝐴◦𝑠 = { 𝑓 : 𝜎0 > 𝑠}
∗ 𝐴𝑞,𝑟 = { 𝑓 ; 𝜎𝛥(𝑞,𝑟) < 𝜎0 } 𝑓
𝑓
and
𝑓
(𝑠, 𝑞, 𝑡 > 0)
∗ ◦ = {𝑌 ∈ 𝐴◦ } and Γ∗ so that Γ𝑛,𝑠 𝑛 𝑠 𝑄,𝑅 = {𝑌𝑛 ∈ 𝐴𝑄/𝑐𝑛 ,𝑅/𝑐𝑛 }. Corresponding to each ◦ ∗ of 𝐴𝑠 and 𝐴𝑞,𝑟 we can consider excursions of 𝑓 , away from 0, that belong to them and define
(𝑔𝑠◦ ( 𝑓 ), 𝑑 𝑠◦ ( 𝑓 )) ∗ ∗ (𝑔𝑞,𝑟 ( 𝑓 ), 𝑑 𝑞,𝑟 ( 𝑓 ))
: the first excursion interval (𝑔, 𝑑) of 𝑓 s.t. 𝑓 (𝑔 + ·) ∈ 𝐴◦𝑠 , ∗ : the first excursion interval (𝑔, 𝑑) of 𝑓 s.t. 𝑓 (𝑔 + ·) ∈ 𝐴𝑞,𝑟 .
228
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
Below we shall often omit 𝑓 from notation: e.g., 𝑌𝑛 (𝑔𝑟 ) is written for 𝑌𝑛 (𝑔𝑟◦ (𝑌𝑛 )) as usual. Observe for 𝑗 = 1, 2, . . . ∗ {𝑔𝑄,𝑅 (𝑆 ⌊𝑛· ⌋ ) = 𝑗, 𝑆 𝑗 = 0} = {𝜎𝐾 (𝑄,𝑅) > 𝑗, 𝜎𝐾 (𝑄,𝑅) ◦ 𝜗 𝑗 < 𝜎0 ◦ 𝜗 𝑗 , 𝑆 𝑗 = 0}.
With the help of this, as in the proof of Lemma 9.3.1, we obtain ∗ ∗ ] = 𝑃0 𝑌𝑛 (𝑔𝑄/𝑐 + ·) ∈ Γ 𝑃0 [𝑌𝑛 ∈ Γ | Γ𝑄,𝑅 𝑛 ,𝑅/𝑐𝑛
(9.54)
for any Γ ∈ B∞ . From (9.54) we obtain, in the same way as before, the following analogue of Lemma 9.3.2. Throughout the rest of this subsection, we suppose 1 ≤ 𝛼 ≤ 2 and 𝐹 is recurrent if 𝛼 = 1. Lemma 9.5.2 For each 𝑞, 𝑟 ∈ (0, ∞), as 𝑄/𝑐 𝑛 → 𝑞, 𝑅/𝑐 𝑛 → 𝑟, ∗ ∗ + · ) ∈ · ]. ] =⇒ 𝑃𝑌0 [𝑌 (𝑔𝑞,𝑟 𝑃0 [𝑌𝑛 ∈ · | Γ𝑄,𝑅
(9.55)
Proof Since 0 is a regular point of 𝑌 , Lemma 9.3.2 and (9.55) together imply that as 𝑛 → ∞, ◦ ∗ | Γ𝑛,𝑠 𝑃0 [Γ𝑄,𝑅 ] −→ 𝑃𝑌0 [𝜎𝛥(𝑞,𝑟) ◦ 𝜗𝑔𝑠◦ < 𝑑 𝑠◦ − 𝑔𝑠◦ ],
(9.56)
◦ 𝑃0 [Γ𝑛,𝑠
(9.57)
|
∗ Γ𝑄,𝑅 ]
−→
∗ 𝑃𝑌0 [𝑑 𝑞,𝑟
−
∗ 𝑔𝑞,𝑟
> 𝑠].
Put 𝜆 = 𝑞/𝑟. Dividing (9.56) by (9.57) we have ∗ ◦ )/𝑃0 (Γ𝑛,𝑠 ) → 𝐶𝜆 (𝑟, 𝑠), 𝑃0 (Γ𝑄,𝑅
where 𝐶𝜆 (𝑟, 𝑠) =
𝑃𝑌0 [𝜎𝛥(𝜆𝑟 ,𝑟) ◦ 𝜗𝑔𝑠◦ < 𝑑 𝑠◦ − 𝑔𝑠◦ ] ∗ − 𝑔 ∗ > 𝑠] 𝑃𝑌0 [𝑑 𝑞,𝑟 𝑞,𝑟
.
Now, on putting 𝐶𝜆 (𝑟) = 𝐶𝜆 (𝑟, 1) ∗ ◦ , substitution from Lemma 9.3.3 yields and recalling the definitions of Γ𝑄,𝑅 and Γ𝑛,1
𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ 𝐶𝜆 (𝑟)𝑃0 [𝜎0 > 𝑛] ∼ 𝜅 ♯𝛼 𝐶𝜆 (𝑟)𝑐 𝑛 /𝑛. 𝛼−1 𝑅 1−𝛼 𝐿(𝑅)/𝑐 . By 𝑐 𝑛 /𝑛 ∼ 𝑐1−𝛼 ♯ 𝑛 𝐿 (𝑐 𝑛 )/𝑐 ♯ and 𝑐 𝑛 ∼ 𝑅/𝑟, it follows that 𝑐 𝑛 /𝑛 ∼ 𝑟 Hence 𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ [𝜅 ♯𝛼 /𝑐 ♯ ]𝐶𝜆 (𝑟)𝑟 𝛼−1 𝑅 1−𝛼 𝐿(𝑅). (9.58)
Since this equivalence does not depend on the choice of 𝑟, we conclude 𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ [𝜅 ♯𝛼 /𝑐 ♯ ]𝐶𝜆 (1)𝑅 1−𝛼 𝐿 (𝑅),
(9.59)
9.5 An Application to the Escape Probabilities From a Finite Set
229
showing the formula of Proposition 9.5.1 with 𝐶 (𝜆) = [𝜅 ♯𝛼 /𝑐 ♯ ]𝐶𝜆 (1) for 𝛼 > 1. The case when 𝛼 = 1 and 𝐹 is recurrent is treated in a similar (simpler) way. □ Remark 9.5.3 (a) Let P (𝑞,𝑟) stand for the law on 𝐷 [0,∞) induced from 𝑃𝑌0 by the map 0 ∗ + ·) ∈ 𝐷 (𝜆,1) [𝜎 > 1] 𝜔 ∈ Ω ↦→ 𝑌 (𝑔𝑞,𝑟 0 [0,∞) . Then 𝐶𝜆 (1) = P [𝜎𝛥(𝜆,1) < 𝜎0 ]/P can be written as ∫ P0 [𝜎𝛥(𝜆,1) ≤ 1] + R P0 [𝜎𝛥(𝜆,1) > 1, 𝔶1 ∈ d𝜂]𝑃𝑌𝜂 [𝜎𝛥(𝜆,1) < 𝜎0 ] ∫ . P (𝜆,1) [𝜎𝛥(𝜆,1) ≥ 1] + [0,1]×R P (𝜆,1) [(𝜎𝛥(𝜆,1) , 𝔶𝑡 ) ∈ d𝑡d𝜂]𝑃𝑌𝜂 [𝜎0 > 1 − 𝑡] Note that comparing (9.60) and (9.61) shows that 𝐶𝜆 (𝑟)𝑟 𝛼−1 = 𝐶𝜆 (1). −1 + [2𝑎(𝑅)] −1 . (b) Let 𝛼 = 2. It is easy to see 𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ [2𝑎(𝑄)] ¯ ¯ Since 2𝑎(𝑅) ¯ ∼ 2𝑅/𝐿(𝑅), it follows that 𝑃0 [𝜎𝛥(𝑄,𝑅) < 𝜎0 ] ∼ 2−1 (1 + 𝜆−1 )𝐿(𝑅)/𝑅. √︁ By Proposition 9.5.1 and since 𝜅 ♯𝛼 = 2𝑐 ♯ /𝜋, this yields √︃ 𝐶𝜆 (1) = (1 + 𝜆−1 ) 𝑐 ♯ 𝜋/8. To extend Proposition 9.5.1 to 𝜎𝐵 in place of 𝜎0 we show the following Lemma 9.5.4 On the LHS of (9.55) 𝑃0 may be replaced by 𝑃 𝑥 for any 𝑥 ∈ Z. Proof Let 0 = 𝑡 0 < 𝑡 1 < · · · < 𝑡 𝑚 and 𝜑 be a bounded continuous function on R𝑚 and put 𝐻 ( 𝑓 ) = 𝜑(𝔶𝑡0 , . . . , 𝔶𝑡𝑚 ), 𝑡 > 0 and 𝑊𝑛,𝑡 = 𝐻 (𝑌𝑛 (𝑡 + · )) for 𝑡 > 0. Let 𝑅/𝑐 𝑛 → 𝑟 > 0 and 𝑄/𝑅 → 𝜆 > 0 and put ∗ 𝐽 𝜀 (𝑥, 𝑛, 𝑡) = 𝐸 𝑥 [𝑊𝑛,𝑡 , 𝜎𝛥 ∧ 𝜎0 > 𝜀𝑛 | Γ𝑄,𝑅 ],
where 𝛥 = 𝛥(𝑄, 𝑅). Then by (9.54) it follows that ∗ lim lim 𝐽 𝜀 (𝑥, 𝑛, 𝑡) = 𝐸 𝑥 [𝑊𝑛,𝑡 | Γ𝑄,𝑅 ]. 𝜀↓0 𝑛→∞
(9.60)
Í ∗ ∗ Writing 𝑃 𝑥 Γ𝑄,𝑅 = 𝑦 𝑃 𝑥 [𝑌𝑛 (𝜀) = 𝑦/𝑐 𝑛 , 𝜎𝛥 ∧ 𝜎0 > 𝜀𝑛] 𝑃 𝑦 Γ𝑄,𝑅 {1 + 𝑜 𝜀 (1)}, where 𝑜 𝜀 (1) → 0 as 𝜀 ↓ 0 (uniformly in 𝑛), one infers that if 𝜀 < 𝑡, 𝐽 𝜀 (𝑥, 𝑛, 𝑡) (9.61) Í ∗ 𝑦 𝑃 𝑥 𝑌𝑛 (𝜀) = 𝑦/𝑐 𝑛 , 𝜎𝛥 > 𝜀𝑛 𝜎0 > 𝜀𝑛 𝐸 𝑦 𝑊𝑛,𝑡−𝜀 ; Γ𝑄,𝑅 = {1 + 𝑜 𝜀 (1)}. Í ∗ 𝑦 𝑃 𝑥 𝑌𝑛 (𝜀) = 𝑦/𝑐 𝑛 , 𝜎𝛥 > 𝜀𝑛 𝜎0 > 𝜀𝑛 𝑃 𝑦 Γ𝑄,𝑅 ∗ For 𝜂 ∈ R put ℎ 𝑛 (𝜂) = 𝐸 𝑐𝑛 𝜂 𝑊𝑛,𝑡−𝜀 ; Γ𝑄,𝑅 . Let 𝑞 𝑛 = 𝑄/𝑐 𝑛 , 𝑟 𝑛 = 𝑅/𝑐 𝑛 . Then ℎ 𝑛 (𝑦 𝑛 ) = 𝐸 𝑦 [𝐻 (𝑌𝑛 (𝑡 − 𝜀 + ·)); 𝜎𝛥(𝑞𝑛 ,𝑟𝑛 ) (𝑌𝑛 ) < 𝜎0 ], where 𝑦 𝑛 = 𝑦/𝑐 𝑛 . Since any 𝜉 is regular for (𝜉, ∞) and (−∞, 𝜉), we have ℎ 𝑛 (𝑦 𝑛 ) −→ 𝐸 𝑌𝜂 [𝐻 (𝑌 (𝑡 − 𝜀 + · )); 𝜎𝛥(𝑞,𝑟) < 𝜎0 ]
230
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
if 𝜂 ∉ {𝑞, 𝑟, 0}. Since 𝐸 𝑥 [𝜑(𝑌𝑛 (𝜀)); 𝜎𝛥(𝑄,𝑅) > 𝜎0 | 𝜎0 > 𝜀𝑛] converges weakly 𝑓 𝑓 to E0 [𝜑(𝜀 1/𝛼 𝔶1 ); 𝜎𝛥(𝑞,𝑟) > 𝜎0 ] for any bounded continuous function 𝜑 by the continuity theorem, from (9.61) one infers that as 𝑛 → ∞, 𝐽 𝜀 (𝑥, 𝑛, 𝑡) = ∫ P0 [𝔶1 ∈ d𝜂, 𝜎𝛥(𝑞,𝑟) > 𝜀]E0𝜀 1/𝛼 𝜂 [𝐻 (𝔶𝑡−𝜀+· ); 𝜎𝛥(𝑞,𝑟) < 𝜎0 ] ∫ {1 + 𝑜 𝜀 (1)}, P0 [𝔶1 ∈ d𝜂, 𝜎𝛥(𝑞,𝑟) > 𝜀]P0𝜀 1/𝛼 𝜂 [𝜎𝛥(𝑞,𝑟) < 𝜎0 ] with the self-evident notation of E0𝜂 and P0𝜂 and 𝑓 suppressed from 𝜎 𝑓 . As 𝜀 ↓ 0, the RHS above converges to a limit since the LHS converges for 𝑥 = 0. Since the limit does not depend on 𝑥, (9.60) shows the convergence of finite-dimensional distributions of the conditional law. The tightness of it is verified as before. □ With this lemma, we can follow the arguments given after Lemma 9.5.2 to conclude the following extension of Proposition 9.5.1. Theorem 9.5.5 As 𝑅 → ∞ along with 𝑄/𝑅 → 𝜆 ∈ (0, ∞), for each 𝑥 ∈ Z, 𝐶 (𝜆) u 𝐵 (𝑥)𝑅 1−𝛼 𝐿 (𝑅) if 1 < 𝛼 ≤ 2 and u 𝐵 (𝑥) > 0, 𝑃 𝑥 [𝜎𝛥(𝑄,𝑅) < 𝜎𝐵 ] ∼ ˜ ∗ (𝑅) u 𝐵 (𝑥)/[1/ 𝐿] if 𝛼 = 1 and 𝐹 is recurrent, where 𝐶 (𝜆) = [𝜅 ♯𝛼 /𝑐 ♯ ]𝐶𝜆 (1). [𝐶𝜆 (1) is given in Remark 9.5.3. Compare the result with (5.66); see also Remark 5.6.2.]
9.6 Proof of Theorem 9.2.2 Let 𝐹 be recurrent and 𝜎 2 = ∞. Write 𝑥 𝑛 = 𝑥/𝑐 𝑛 for 𝑥 ∈ Z and put 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 ≥ 𝑛]. Note that for 𝑦 ∈ 𝐵, 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 = 𝑛] by definition. In the sequel, 𝑀 denotes an arbitrarily fixed constant larger than 1. Proposition 9.6.1 Suppose 𝐹 is strongly aperiodic and 𝜎 2 = ∞. (i) Uniformly for |𝑦| ∈ [𝑀 −1 𝑐 𝑛 , 𝑀𝑐 𝑛 ], as 𝑛 → ∞, 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝜅 ♯𝛼 u 𝐵 (𝑥)𝔮(𝑦 𝑛 )/𝑛{1 + 𝑜(1)} and 𝑃−𝑦 [𝜎𝐵 = 𝑛] =
∑︁
for 𝑥 ∈ Z with u 𝐵 (𝑥) > 0 (9.62)
𝑛 𝑄 −𝐵 (𝑥, 𝑦) ∼ 𝜅 ♯𝛼𝔮(𝑦 𝑛 )/𝑛.
𝑥 ∈−𝐵
(ii) For each 𝑥 ∈ Z, lim lim sup sup 𝑛𝑄 𝑛𝐵 (𝑥, 𝑦) = 0. 𝜀↓0
𝑛→∞
|𝑦 |< 𝜀𝑐𝑛
(9.63)
9.6 Proof of Theorem 9.2.2
231
Using Lemma 9.3.3 and Theorem 9.2.1 we obtain the following: If 𝜑 is a continuous function on R and 𝐼 is a finite interval of the real line, then uniformly for |𝑦| < 𝑀𝑐 𝑛 , as 𝑛 → ∞ ∫ ∑︁ 𝑐𝑛 𝑄 𝑛𝐵 (𝑥, 𝑦−𝑤)𝜑(𝑤/𝑐 𝑛 ) = 𝜅 ♯𝛼 u 𝐵 (𝑥) 𝔮(𝑦 𝑛 +𝜉)𝜑(𝜉) d𝜉{1+𝑜(1)}. (9.64) 𝑛 𝐼 𝑤:𝑤/𝑐𝑛 ∈𝐼
The proof of (9.62) is based on this relation and Gnedenko’s local limit theorem. Proof (of Proposition 9.6.1) 8 Taking 𝑚 = ⌊𝜀 2𝛼 𝑛⌋ with a small 𝜀 > 0 we decompose ∑︁ 𝑚 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑄 𝑛−𝑚 𝐵 (𝑥, 𝑧)𝑄 𝐵 (𝑧, 𝑦). 𝑧 ∈Z\𝐵
Note that 𝑐 𝑚 /𝑐 𝑛−𝑚 ∼ 𝑛
∑︁
𝜀 2 /(1
𝑄 𝑛−𝑚 𝐵 (𝑥, 𝑦 − 𝑤)
|𝑤|< 𝜀𝑐𝑛−𝑚
−
𝜀 2𝛼 ) 1/𝛼 .
We apply (9.64) in the form
𝜅 ♯ u 𝐵 (𝑥) 𝜑(𝑤/𝑐 𝑚 ) ∼ 𝛼 2𝛼 𝑐𝑚 1−𝜀
∫ 𝔮(𝑦 𝑛−𝑚 + 𝜉) | 𝜉 |< 𝜀
𝜑(𝜉/𝜀˜2 ) d𝜉, 𝜀˜2 (9.65)
valid for each 𝜀 > 0 fixed, where 𝜀˜ = 𝜀/(1 − 𝜀 2𝛼 ) 1/2𝛼 ∼ 𝜀 (𝜀 → 0). Let |𝑦 𝑛 | ∈ [1/𝑀, 𝑀]. It is easy to see that sup
𝑚 𝑚 |𝑄 𝑚 𝐵 (𝑧, 𝑦) − 𝑝 (𝑦 − 𝑧)| = 𝑝 (𝑦 − 𝑧)| × 𝑜 𝜀 (1).
𝑧:|𝑧−𝑦 |< 𝜀𝑐𝑛
Since 𝜀𝑐 𝑛 /𝑐 𝑚 ∼ 1/𝜀 and 𝔭1 (±1/𝜀) = 𝑂 (𝜀 𝛼+1 ) according to [92] (cf. [67] Eq(14.34–35)), we also have for all sufficiently large 𝑛, sup
𝛼+1 𝑄𝑚 /𝑐 𝑚 , 𝐵 (𝑧, 𝑦) < 𝐶𝜀
𝑧:|𝑧−𝑦 | ≥ 𝜀𝑐𝑛−𝑚
which, combined with the preceding bound, yields ∑︁ 𝑛 𝑛−𝑚 𝑚 𝑄 (𝑥, 𝑦) − 𝑄 (𝑥, 𝑧) 𝑝 (𝑦 − 𝑧){1 + 𝑜 (1)} 𝜀 𝐵 𝐵 |𝑧−𝑦 |< 𝜀𝑐𝑛−𝑚
≤ 𝐶′
𝑃 𝑥 [𝜎𝐵 > 𝑛 − 𝑚]𝜀 𝛼+1 . 𝑐𝑚
On the LHS, 𝑝 𝑚 (𝑦 − 𝑧) may be replaced by 𝔭1 ((𝑦 − 𝑧)/𝑐 𝑚 )/𝑐 𝑚 (whenever 𝜀 is fixed), whereas the RHS is dominated by a constant multiple of u 𝐵 (𝑥) [𝑐 𝑛 /𝑛𝑐 𝑚 ]𝜀 𝛼+1 ∼ u 𝐵 (𝑥)𝜀 𝛼−1 /𝑛 (provided u 𝐵 (𝑥) > 0). Hence, after a change of variable 𝑛𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑛
∑︁
𝑄 𝑛−𝑚 𝐵 (𝑥, 𝑦 − 𝑤)
|𝑤|< 𝜀𝑐𝑛−𝑚
8 This proof of (9.62) is borrowed from [82].
𝔭1 (𝑤/𝑐 𝑚 ) {1 + 𝑜 𝜀 (1)} + u 𝐵 (𝑥) × 𝑜 𝜀 (1). 𝑐𝑚
232
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
Now, on letting 𝑛 → ∞ and 𝜀 → 0 in this order, (9.65) shows that the RHS can be written as ∫ 𝔭1 (𝜉/𝜀 2 ) 𝜅 ♯𝛼 u 𝐵 (𝑥) 𝔮(𝑦 𝑛 − 𝜉) d𝜉 + 𝑜 (1) = 𝜅 ♯𝛼 u 𝐵 (𝑥){𝔮(𝑦 𝑛 ) + 𝑜 𝜀 (1)}. 𝜀 𝜀2 | 𝜉 |< 𝜀 (9.66) This verifies the formula (9.62). Time reversal shows (9.63). Thus (i) is verified. Using the bound 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑂 (1/𝑐 𝑛 ) and taking 𝑦 = 0, 𝐼 =∫ [−𝜀, 𝜀] and 𝜑 ≡ Í 𝜀 1/𝑐 we have |𝑧 | ≤ 𝜀𝑐𝑛 𝑄Í𝑛𝐵 (𝑥, 𝑧)𝑄 𝑛𝐵 (𝑧, 𝑦) ≤ 𝐶𝑎 † (𝑥)𝑛−1 −𝜀 𝔮(𝜂) d𝜂. Since Í 𝑛 in (9.64) 𝑛 𝑛 |𝑧 |> 𝜀𝑐𝑛 𝑄 𝐵 (𝑥, 𝑧) = 𝑂 (1/𝑛) and |𝑧 |> 𝜀𝑐𝑛 𝑄 𝐵 (𝑧, 𝑦) → 0 as 𝑛 → ∞, 𝑦/𝑐 𝑛 → 0, the sum over |𝑧| > 𝜀𝑐 𝑛 is of a smaller order of magnitude than 1/𝑛, showing (ii). □ Lemma 9.6.2 If 1 < 𝛼 ≤ 2, then 𝜅 ♯𝛼𝔮(𝜂) = 𝔣−𝜂 (1). Proof According to [4, Lemma VIII.13], ∫ 𝑡 ∫ 𝜅 ♯𝛼 𝔣−𝜂 (𝑠) d𝑠 = 0
𝑡
𝑠−1+1/𝛼𝔭𝑡−𝑠 (𝜂).
0
After taking Fourier transforms in 𝜂 of both sides, differentiate with respect to 𝑡. Then ∫ ∞ ∫ 𝑡 𝜅 ♯𝛼 𝔣−𝜂 (𝑡)e𝑖 𝜃 𝜂 d𝜂 = 𝑡 −1+1/𝛼 − Φ(𝜃) 𝑠−1+1/𝛼 e−(𝑡−𝑠)Φ( 𝜃) d𝑠. −∞
0
The RHS with 𝑡 = 1 coincides with the expression of the characteristic function of 𝔮 in (9.29). Thus the identity of the lemma follows. Alternatively, one can proceed as follows. For 𝜀 > 0, let 𝐽𝑛 (𝜀) stand for the expectation 𝐸 0 [𝜑(𝑌𝑛 (1)) < 𝜀, 𝑛 ≤ 𝜎0 < 𝑛 + 𝑛𝜀 | 𝜎0 > 𝑛] and make the decomposition ∑︁ 𝐽𝑛 (𝜀) = 𝑃 𝑦 [𝑛 ≤ 𝜎0 < 𝑛 + 𝜀𝑛]𝑃0 [𝑆 𝑛 = 𝑦 | 𝜎0 > 𝑛]𝜑(𝑦/𝑐 𝑛 ). 𝑦
As 𝑛 → ∞, by Theorem 9.2.1, (9.62) and the equality in (9.63) the sum above equals Í Í −1 𝜅 ♯𝛼 𝑦 𝑛+𝜀𝑛 𝑘=𝑛 𝔮(−𝑦 𝑛+𝑘 ) (𝑛 + 𝑘) 𝔮(𝑦 𝑛 )𝜑(𝑦 𝑛 )/𝑐 𝑛 {1 + 𝑜(1)}, which converges to 𝜅 ♯𝛼
∫
∞
∫
1+𝜀
𝔮(𝜂)𝜑(𝜂) d𝜂 −∞
1
𝔮(−𝜂(1 + 𝑡) −1/𝛼 ) d𝑡, 1+𝑡
𝑓
while 𝐽𝑛 (𝜀) approaches E0 [𝜑(𝔶1 ), 1 ≤ 𝜎0 ◦ 𝜗1 < 1 + 𝜀], which can be written ∫∞ as −∞ 𝑃𝑌𝜂 [1 ≤ 𝜎0 < 1 + 𝜀]𝔮(𝜂)𝜑(𝜂) d𝜂. Dividing these two expressions by 𝜀 and letting 𝜀 ↓ 0 we find 𝔣 𝜂 (1) = 𝜅 ♯𝛼𝔮(−𝜂), the dual of the required identity.
□
9.6 Proof of Theorem 9.2.2
233
Proposition 9.6.3 If 𝐹 is strongly aperiodic, for 𝑥, 𝑦 ∈ Z with u 𝐵 (𝑥) u 𝐵 (−𝑦) > 0, 𝑄 𝑛𝐵 (𝑥, 𝑦) ∼ u 𝐵 (𝑥)𝑃0 [𝜎0 = 𝑛] u−𝐵 (−𝑦). Í 𝑚 Proof Suppose 𝑛 is even and decompose 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑄 𝑚 𝐵 (𝑥, 𝑧)𝑄 𝐵 (𝑧, 𝑦), where 𝑚 = 𝑛/2. For any 𝜀 > 0, the sum restricted to 𝜀𝑐 𝑛 < |𝑧| < 𝑐 𝑛 /𝜀 is written as ∑︁ 𝑧 −𝑧 ♯ 1 + 𝑜(1) ♯ 𝜅 𝛼 u 𝐵 (𝑥)𝔮 𝔮 𝜅 u−𝐵 (−𝑦) 𝑐𝑚 𝑐𝑚 𝛼 𝑚2 𝜀𝑐 < |𝑧 | 0, for 𝑡 > 0, ∫ ∞ (1 − 𝛼−1 ) . 𝔮(𝜂)𝔣 𝜂 (𝑡) d𝜂 = lim 𝑃 𝑥 [𝜎𝐵 = 𝑘 + 𝑛 | 𝜎𝐵 > 𝑘] = 𝑛/𝑘→𝑡 (1 + 𝑡) 2−1/𝛼 −∞ Recalling that 𝐼1 ( 𝑓 ) denotes the first excursion interval of 𝑓 whose length is larger than 1, one easily infers that the above equalities entail the following Corollary 9.6.4 If 1 < 𝛼 ≤ 2, P0 [the length of 𝐼1 is larger than 𝑡] = 𝑡 −1+1/𝛼 for 𝑡 ≥ 1.
234
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
9.7 Proof of Theorem 9.2.6 Put ⟨𝐵] = (−∞, min 𝐵] ∪ 𝐵. Recall 𝑇 = 𝜎𝛺 and 𝛺 = (−∞, −1]. Lemma 9.7.1 There exists the limit 𝑉𝐵 (𝑥) := lim 𝑘→∞ 𝐸 𝑥 𝑉𝑑 (𝑆 𝑘 ); 𝑘 < 𝜎⟨𝐵] , and 𝑃 𝑥 [𝜎⟨𝐵] > 𝑛]/𝑃0 [𝑇 > 𝑛] → 𝑉𝐵 (𝑥)/𝑣◦ . If 𝑥 ≥ 0 and 𝛺 ⊂ ⟨𝐵], then 𝑉𝐵 (𝑥) = 𝑉d (𝑥) − 𝐸 𝑥 [𝑉d (𝑆 𝜎⟨𝐵] )]. [The assertion is valid for every r.w. satisfying the basic assumption of this treatise.] Proof The proof rests on Theorem 10 of [44], a weak version of which reads 𝑃 𝑥 [𝑇 > 𝑛] 𝑉d (𝑥) −→ 𝑃0 [𝑇 > 𝑛] 𝑣◦
𝑎𝑛𝑑
𝑃0 [𝑇 > 𝑛 + 1] −→ 1 9 𝑃0 [𝑇 > 𝑛]
as 𝑛 → ∞. For each 𝑘 < 𝑛, ∑︁ 𝑃 𝑥 [𝜎⟨𝐵] > 𝑛] = 𝑃 𝑥 [𝑆 𝑘 = 𝑦, 𝜎⟨𝐵] > 𝑘]𝑃 𝑦 [𝜎⟨𝐵] > 𝑛 − 𝑘].
(9.68)
(9.69)
𝑦
For any 𝑀 > 1, 𝑃 𝑥 [𝑆 𝑘 < 𝑀 | 𝜎⟨𝐵] > 𝑘] → 0 as 𝑘 → ∞ and 𝑉d (𝑦)/𝑉d (𝑦 + 1) → 1 as 𝑦 → ∞. It also holds that 𝑃 𝑦−𝑟𝐵 −1 [𝑇 > 𝑛 − 𝑘] ≤ 𝑃 𝑦 [𝜎⟨𝐵] > 𝑛 − 𝑘] ≤ 𝑃 𝑦−𝑙𝐵 [𝑇 > 𝑛 − 𝑘] (where 𝑙 𝐵 = min 𝐵, 𝑟 𝐵 = max 𝐵). Substituting these inequalities into the sum in (9.69) we then apply (9.68) to see that if one defines 𝑟 (𝑛, 𝑘) by 𝑃 𝑥 [𝜎⟨𝐵] > 𝑛] 𝐸 𝑥 [𝑉d (𝑆 𝑘 ), 𝜎⟨𝐵] > 𝑘] {1 + 𝑟 (𝑛, 𝑘)}, = 𝑃0 [𝑇 > 𝑛] 𝑣◦ then for any 𝜀 > 0, one can choose 𝑘 so that lim sup𝑛→∞ |𝑟 (𝑛, 𝑘)| < 𝜀. This shows the convergence of the LHS as 𝑛 → ∞, which, in turn, shows the first half of the lemma. If 𝑥 ≥ 0 and 𝛺 ⊂ ⟨𝐵], then under 𝑃 𝑥 , 𝑉d (𝑆 𝑛∧𝜎 ⟨𝐵] ) is equal to 𝑉d (𝑆 𝑛∧𝑇∧𝜎 ⟨𝐵] ), hence a martingale, and letting 𝑛 → ∞ in the identity 𝑉d (𝑥) = 𝐸 𝑥 [𝑉d (𝑆 𝑛∧𝜎 ⟨𝐵] )] = 𝐸 𝑥 [𝑉d (𝑆 𝜎 ⟨𝐵] ); 𝜎⟨𝐵] ≤ 𝑛] + 𝐸 𝑥 [𝑉d (𝑆 𝑛 ); 𝜎⟨𝐵] > 𝑛] leads to the second half. □ Lemma 9.7.2 For any 𝑥 ∈ Z the conditional law 𝑃 𝑥 [𝑌𝑛 ∈ · | 𝑇 > 𝑛] converges weakly to the law of the stable meander. Proof10 For 𝑥 = 0 this is the first example given in Section 3 of [19]. Given a ‘nice’ set Γ ∈ B∞ that depends only on paths after an arbitrarily chosen 𝑡0 ∈ (0, 1), decompose 𝑃1 [𝑌𝑛 ∈ Γ | 𝑇 > 𝑛] as the sum ∑︁
𝑃1 [𝜎0 = 𝑘, 𝑇 > 𝑘]𝑃0 [𝑌𝑛 ∈ 𝜗𝑘/𝑛 (Γ) | 𝑇 > 𝑛 − 𝑘]
𝑘 𝑛]
𝑃0 [𝑇 > 𝑛 − 𝑘] 𝑃1 [𝑇 > 𝑛]
𝑃1 [𝜎𝛺+1 > 𝑛] + 𝜂𝑛 , 𝑃1 [𝑇 > 𝑛]
9 In [44] strong aperiodicity is assumed, which is not needed for this weak version. 10 An alternative proof can be given by examining those in [19], [68].
9.8 Random Walks Conditioned to Avoid a Finite Set Forever
235
with 0 ≤ 𝜂 𝑛 ≤ 𝑃1 [𝑡0 𝑛 ≤ 𝜎0 ≤ 𝑛 | 𝑇 > 𝑛] → 0. Then, noting 𝑃1 [𝜎𝛺+1 > 𝑛] = 𝑃0 [𝑇 > 𝑛] and employing (9.68), one infers that lim 𝑃1 [𝑌𝑛 ∈ Γ | 𝑇 > 𝑛] =
𝑛→∞
𝑃1 [𝜎0 < 𝑇] + 1 lim 𝑃0 [𝑌𝑛 ∈ Γ | 𝑇 > 𝑛]. 𝑛→∞ 𝑉d (1)/𝑣◦
Since 𝑃1 [𝜎0 < 𝑇] = 𝑃0 [𝑆𝑇 = −1] = 𝑣d (1)/𝑣◦ , the ratio on the RHS equals unity, and the assertion of the lemma is verified for 𝑥 = 1. Induction generalises this to any 𝑥 > 0 instead of 1, which, in turn, implies the result for any 𝑥 < 0. □ Proof (of Theorem 9.2.6) Let Γ ∈ B∞ be as in proof of Lemma 9.7.2. We may suppose min 𝐵 = 0. Let u 𝐵 (𝑥) = 0. Then 𝑥 > 0 and 𝜎𝐵 ≤ 𝑇 a.s.(𝑃 𝑥 ) and one sees that 𝑃 𝑥 [𝑌𝑛 ∈ Γ, 𝜎⟨𝐵] > 𝑛] = 𝑃 𝑥 [𝑌𝑛 ∈ Γ, 𝑇 > 𝑛] −
𝑛 ∑︁ ∑︁
𝑃 𝑥 [𝜎𝐵 = 𝑘, 𝑆 𝑘 = 𝑦]𝑃 𝑦 [𝑌𝑛 ∈ 𝜗𝑘/𝑛 (Γ), 𝑇 > 𝑛 − 𝑘].
𝑘=1 𝑦 ∈𝐵
The second probability in the summand of the double sum is asymptotically equivalent to 𝑃 𝑦 [𝑌𝑛 ∈ Γ | 𝑇 > 𝑛] (𝑉d (𝑦)/𝑣◦ )𝑃0 [𝑌 > 𝑛] for each 𝑘 fixed. After dividing both sides of the identity above by 𝑃 𝑥 [𝜎𝐵 > 𝑛], let 𝑛 → ∞ and apply the above two lemmas to see lim 𝑃 𝑥 [𝑌𝑛 ∈ Γ | 𝜎⟨𝐵] > 𝑛] = lim 𝑃0 [𝑌𝑛 ∈ Γ | 𝑇 > 𝑛]. This implies the convergence of finite-dimensional distribution when 𝑥 is fixed. The tightness can be shown similarly to Lemma 9.3.7. □
9.8 Random Walks Conditioned to Avoid a Finite Set Forever Here we suppose u 𝐵 (𝑥) > 0 for all 𝑥 ∈ Z.11 For any recurrent walk there exists the limit law ∞ lim 𝑃 𝑥 [(𝑆 𝑛 ) 𝑛=0 ∈ · | 𝜎𝐵 > 𝑛] 𝑛→∞
(in the sense of finite-dimensional distributions); indeed it is a harmonic transform of the r.w. killed upon hitting 𝐵, the 𝑛-step transition law being given by the Doob ℎ-transform: 𝑄 𝑛𝐵 (𝑥, 𝑦) u 𝐵 (𝑦)/u 𝐵 (𝑥) (𝑥, 𝑦 ∉ 𝐵), (9.70) where 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 ≥ 𝑛] as in Section 9.6.12 Let 𝑃∞,𝐵 𝑥 , 𝑥 ∈ Z, stand for the above limit law, which may be regarded as the probability law of the conditional process 𝑆 𝑛 given that it never visits the origin.
11 If u 𝐵 ( 𝑥) = 0 for some 𝑥, the r.w. stays in a half-line forever once it enters this half-line, and the problem addressed in this subsection essentially becomes the problem for a half-line. 𝑛 ( 𝑥, 𝑦) = 𝑃 [𝑆 = 𝑦, 𝜎 > 𝑛] for 𝑦 ∉ 𝐵. 12 Note that 𝑄 𝐵 𝑥 𝑛 𝐵
236
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
It is observed in [84, Section 7] that the conditional law 𝑃 𝑥 [ · | 𝜎𝛥(𝑅) < 𝜎𝐵 ] converges to 𝑃∞,𝐵 as 𝑅 → ∞, 𝑥
(9.71)
where 𝛥(𝑅) = (−∞, −𝑅] ∪ [𝑅, ∞), and that if 𝜎 2 = ∞, for every 𝑥 ∈ Z, (a) 𝑃∞,𝐵 [ lim 𝑆 𝑛 = +∞] = 1 𝑥 (b)
[ lim sup 𝑆 𝑛 𝑃∞,𝐵 𝑥
if 𝐸 𝑍 < ∞,
= +∞, lim inf 𝑆 𝑛 = −∞] = 1 if 𝐸 𝑍 = 𝐸 | 𝑍ˆ | = ∞;
(9.72)
and if 𝜎 2 < ∞, either lim 𝑆 𝑛 = +∞ or lim 𝑆 𝑛 = −∞ with 𝑃∞,𝐵 𝑥 -probability one and 𝑃∞,𝐵 [ lim 𝑆 𝑛 = +∞] = 𝑥
u 𝐵 (𝑥) + w 𝐵 (𝑥)
(𝑥 ∈ Z).
2u 𝐵 (𝑥)
(9.73)
One notices that if 𝜎 2 = ∞, there is only one harmonic function, hence a unique Martin boundary point: lim |𝑦 |→∞ 𝑔 {0} (·, 𝑦)/𝑔 {0} (𝑥0 , 𝑦) = 𝑎(·)/𝑎(𝑥0 ), so that two geometric boundary points +∞ and −∞ are not distinguished in the Martin boundary whereas the walk itself discerns them provided that either 𝐸 𝑍 or 𝐸 𝑍ˆ is finite. In the sequel we suppose that (AS) holds with 1 < 𝛼 ≤ 2, 𝜎 2 = ∞, and 𝑝 := lim 𝜇+ (𝑥)/𝜇(𝑥) (even if 𝛼 = 2), and consider the scaling limit of 𝑆 𝑛 under 𝑃∞,𝐵 𝑥 . By (9.70) we have for Γ ∈ B1 , 𝑃∞,𝐵 [𝑌𝑛 ∈ Γ] 𝐸 𝑥 [u 𝐵 (𝑆 𝑛 ); 𝑌𝑛 ∈ Γ | 𝜎𝐵 > 𝑛] 𝑥 . = 𝑃 𝑥 [𝜎𝐵 > 𝑛] u 𝐵 (𝑥)
(9.74)
For simplicity we let 𝑐 ♯ = 1. [For general 𝑐 ♯ , one has only to replace 𝔣 𝜉 (𝑡) by 𝔣 𝜉 (𝑐 ♯ 𝑡)𝑐 ♯ , 𝔮(𝜉) by 𝔮(𝑐−1/𝛼 𝜉)𝑐−1/𝛼 , ♯ ♯ 𝔭𝑡0 by 𝔭𝑐0♯ 𝑡 , 𝐿 by 𝐿/𝑐 ♯ and so on.] As one may expect, we shall see that the limit is an ℎ-transform of the process on R \ {0} whose transition probability is given by 𝔭𝑡0 (𝜉, 𝜂). Put 𝔞(𝜉) =
1 − sgn(𝜉) ( 𝑝 − 𝑞) ♯ 𝛼−1 𝜅 𝛼 |𝜉 | 𝜅 ◦𝛼
(1 < 𝛼 ≤ 2),
(9.75)
where 𝜅 ◦𝛼 =
1
if
𝑞) 2 tan2 21 𝛼𝜋
𝜋 1 + (𝑝 − 𝜅𝛼 = (2 − 𝛼)(𝛼 − 1) | tan 21 𝛼𝜋|
𝛼 = 2,
(9.76) if 1 < 𝛼 < 2.
It follows that 𝔞(𝜉) = 0 if either 𝑝 = 1 and 𝜉 > 0 or 𝑝 = 0 and 𝜉 < 0. By (4.23), 𝑎(𝑥) ¯ ∼ [𝜅 ◦𝛼 𝐿 (𝑥)] −1 |𝑥| 𝛼−1 (|𝑥| ↑ ∞), hence u 𝐵 (𝑐 𝑛 ) ∼ 𝑎(𝑐 𝑛 ) ∼ [2𝑞/𝜅 ◦𝛼 ]𝑛/𝑐 𝑛 and
9.8 Random Walks Conditioned to Avoid a Finite Set Forever
237
similarly for u 𝐵 (−𝑐 𝑛 ); one accordingly has 𝔞(𝜉) = lim u 𝐵 (𝜉𝑐 𝑛 )𝑃 𝑥 [𝜎𝐵 > 𝑛]/u 𝐵 (𝑥) 13 𝑛→∞
if u 𝐵 (𝑥) > 0.
(9.77)
Theorem 9.8.1 Let 1 < 𝛼 ≤ 2, 𝜎 2 = ∞ and 𝑝 = lim 𝑥→∞ 𝑚 + (𝑥)/𝑚(𝑥). (i) 𝔞 is harmonic w.r.t. 𝔭𝑡0 : ∫
∞
𝔭𝑡0 (𝜉, 𝜂)𝔞(𝜂) d𝜂
𝔞(𝜉) =
(𝜉 ∈ R).
(9.78)
−∞
(ii) The probability law 𝑃∞,𝐵 ◦ 𝑌𝑛−1 on 𝐷 [0,∞) converges weakly to the law of a 𝑥 Markov process on R of which the transition function is given by the ℎ-transform 𝔭𝑡𝔞 (𝜉, 𝜂) := 𝔭𝑡0 (𝜉, 𝜂)𝔞(𝜂)/𝔞(𝜉) (𝜉𝜂 ≠ 0) and the entrance law by ∫ 𝜂 0 𝜂] ≤ [𝔞(𝔶 𝔮𝑡 (𝜉)𝔞(𝜉) d𝜉. (9.79) lim 𝑃∞,𝐵 = [𝑌 (𝑡) E = 𝜂] ); ≤ 𝔶 𝑡∨1 𝑛 𝑡 𝑥 −∞
(E0
P0
is the expectation with respect to that is characterised by Corollary 9.2.4(i).) (iii) lim 𝜉 ′ →0 𝔭𝑡0 (𝜉 ′, 𝜉)/𝔞(𝜉 ′) = 𝔮𝑡 (𝜉), with the understanding that when 𝛼 ≠ 2, the limit is taken along 𝜉 ′ > 0 if 𝑝 = 0 and 𝜉 ′ < 0 if 𝑝 = 1 (so that 𝔞(𝜉 ′) > 0). (9.78) (as well as Proposition 9.8.3 below) is a special case of Theorem 2(iii) of Pantí [56]. We shall derive it from the corresponding property of 𝑎. Remark 9.8.2 (a) Let 𝑝 = 1. Then 𝔞(𝜉) is positive for 𝜉 < 0 and zero for 𝜉 ≤ 0, and (9.79) gives a proper probability law that concentrates on the negative half-line 0 – a remarkable difference from the limit of P𝑛,𝐵 𝑥 . Note that for 𝜉 > 0, 𝔭𝑡 (𝜉, 𝜂) = 𝔭𝑡(∞,0] (𝜉, 𝜂), so that both sides of (9.78) vanish. (b) If 1 < 𝛼 ≠ 2, 𝔞 is a unique non-negative harmonic function in the sense that any non-negative function satisfying (9.78) is a constant multiple of 𝔞 (Lemma 9.9.3). First, we explain how (i) and (ii) of Theorem 9.8.1 are derived, setting aside any necessary technical results, which we shall provide later. We derive (i) from the identity Í (∗) 𝑦 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎0 > 𝑛]𝑎(𝑦) = 𝑎(𝑥) (𝑥 ≠ 0). Let 𝐿 ≡ 1 for simplicity. Then 𝑎(𝑐 𝑛 𝜉)/𝑐 𝑛𝛼−1 → 𝑐𝔞(𝜉) as 𝑛 → ∞. Divide both sides Íof (∗) by 𝑐 𝑛𝛼−1 and then let 𝑛 → ∞ along with 𝑥 𝑛 = 𝑥/𝑐 𝑛 → 𝜉. If sup𝑛 𝑦 𝑝 𝑛 (𝑥, 𝑦)|𝑦/𝑐 𝑛 | 𝜈 < ∞ for some 𝜈 > 𝛼 − 1 (which we shall show in Proposition 9.8.5), by the trivial bound 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎0 > 𝑛] ≤ 𝑝 𝑛 (𝑥, 𝑦) an application of (9.21) (in Corollary 9.2.3) yields the identity of (i) for 𝑡 = 1. For general 𝑡 > 0, the result follows the second scaling relation in (9.26). 13 For general 𝑐♯ ≠ 1, this limit becomes 𝔞 ( 𝜉 )/𝑐♯ .
238
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
For the proof of (ii), one has only to show the convergence of (𝑌𝑛 (𝑡))0≤𝑡 ≤𝑀 to the process specified therein under 𝑃 𝑥𝐵 for each 𝑀 ≥ 1. To this end it suffices to show it for 𝑀 = 1; the general case follows from this particular case. From (9.74) it follows that for any finite interval 𝐼 and Γ ∈ B1 , 𝑃∞,𝐵 [𝑌𝑛 ∈ Γ, 𝑌𝑛 (1) ∈ 𝐼] = 𝑥
𝐸 𝑥 [𝑟 𝑛𝐵,𝑥 u 𝐵 (𝑆 𝑛 ), 𝑌𝑛 ∈ Γ, 𝑌𝑛 (1) ∈ 𝐼 | 𝜎𝐵 > 𝑛] , u 𝐵 (𝑥)
where 𝑟 𝑛𝐵, 𝑥 = 𝑃 𝑥 [𝜎𝐵 > 𝑛]. Combined with (9.77) and Theorem 9.2.1 this shows lim 𝑃∞,𝐵 [𝑌𝑛 ∈ Γ, 𝑌𝑛 (1) ∈ 𝐼] = E0 [𝔞(𝔶1 ); Γ, 𝔶1 ∈ 𝐼]. 𝑥
(9.80)
For Γ = {𝜉 ≤ 𝔶𝑡 ≤ 𝜂} (−∞ < 𝜉 < 𝜂 < ∞) with 𝑡 ≤ 1, this becomes (9.79) – but with the range (−∞, 𝜂] replaced by [𝜉, 𝜂] – because of (9.24), (9.25) and (9.78) (for 𝑡 > 1, see (ii) of Corollary 9.2.4). To complete the proof of (ii) of Theorem 9.8.1 it suffices to show the following proposition (taking it for granted that 𝔞 is harmonic), for it entails the uniform integrability of 𝑟 𝑛𝐵,𝑥 u 𝐵 (𝑆 𝑛 ) under 𝑃 𝑛,𝐵 := 𝑃 𝑥 [· | 𝜎𝐵 > 𝑛]. 𝑥 Proposition 9.8.3 For every 1 < 𝛼 ≤ 2, E0 [𝔞(𝔶1 )] =
∫∞ −∞
𝔞(𝜂)𝔮(𝜂) d𝜂 = 1.
Since 𝑟 𝑛𝐵,𝑥 u 𝐵 (𝑆 𝑛 )/u 𝐵 (𝑥) ∼ 𝔞(𝑌𝑛 (1)), taking Γ = 𝐷 [0,∞) in (9.74) gives 1 = [ u 𝐵 (𝑥)] −1 𝐸 𝑥 [u 𝐵 (𝑆 𝑛 ); 𝜎𝐵 > 𝑛] ∼ 𝐸 𝑥 [𝔞(𝑌𝑛 (1)) | 𝜎𝐵 > 𝑛]. This yields the result asserted in Proposition 9.8.3 if 𝐸 𝑥 [|𝑌𝑛 (1)| 𝛼−1+ 𝛿 | 𝜎𝐵 > 𝑛] is bounded for some 𝛿 > 0, which we show at the end of this subsection in the case 𝑥 = 0, 𝐵 = {0}, which is enough. Alternatively, one may try to carry out a direct computation of E0 [𝔞(𝔶1 )] =
∫ ∞ ∫ 0 2𝜅 ♯𝛼 𝛼−1 𝛼−1 𝑞 |𝜂| 𝔮(𝜂) d𝜂 + 𝑝 |𝜂| 𝔮(𝜂) d𝜂 . 𝜅 ◦𝛼 0 −∞
(9.81)
If 𝛼 = 2, then 𝔮(𝜂) = 𝔮(−𝜂) and 𝜅 ◦𝛼 = 1, so that the RHS of (9.81) becomes ∫∞ 2 𝜅 ♯𝛼 0 𝜂𝔮(𝜂) d𝜂. Observing 𝜅 ♯𝛼𝔮(𝜂) = 𝔣 𝜂 (1) = (2𝜋) −1/2 𝜂e−𝜂 /2 one can easily see that it equals unity, as desired. If 𝑝 = 𝑞, one can compute the RHS of (9.81), without much difficulty, by using Belkin’s expression (9.29) for Ψ(𝜃) and the Parseval relation. The details are omitted. Unfortunately, this approach runs into a serious difficulty in the case 𝑝 ≠ 𝑞. The proof of (iii) is postponed to the next section (see Lemma 9.9.4), where we shall give alternative proofs of (i) of Theorem 9.8.1 and Proposition 9.8.3 (Lemma 9.9.3(ii)).
9.8 Random Walks Conditioned to Avoid a Finite Set Forever
239
Lemma 9.8.4 Let 0 < 𝛼 ≤ 2. If 0 < 𝜈 < 𝛼, then as 𝑛 → ∞. ∞ ∑︁
𝑝 𝑛 (𝑥)|𝑥| 𝜈 =
𝐶 (𝜈) 𝜋
∫
𝑥=−∞
∞
−∞
1 − 𝜓 𝑛 (𝜃) sin 𝜃/2 · d𝜃{1 + 𝑜(1)}, 𝜃/2 |𝜃| 𝜈+1
(9.82)
where 𝐶 (𝜈) = −Γ(𝜈 + 1) cos[(𝜈 + 1)𝜋/2]. ∫∞ Proof Let 𝜆 > 0 and put 𝜒𝜆 (𝑠) = |𝑠| 𝜈 e−𝜆|𝑠 | and 𝜒ˆ 𝜆 (𝜃) = −∞ 𝜒𝜆 (𝑠)e𝑖 𝜃 𝑠 d𝑠. Writing √ −1 𝑏 := (𝜆 − 𝑖𝜃) = 𝜆2 + 𝜃 2 e−𝑖 tan 𝜃/𝜆 , substitute 𝑠 = 𝑡/𝑏 (so that −𝜆𝑠 + 𝑖𝜃𝑠 = −𝑡) to obtain ∫ ∞ Γ(𝜈 + 1) cos[(𝜈 + 1) tan−1 𝜃/𝜆] 𝜒ˆ 𝜆 (𝜃) = ℜ 𝑏 −𝜈−1 e−𝑡 𝑡 𝜈 d𝑡 = ; 2 (𝜆2 + 𝜃 2 ) (𝜈+1)/2 0 in particular 𝜒ˆ 𝜆 (𝜃)|𝜃| 𝜈+1 is bounded and tends to 2𝐶 (𝜈) as 𝜆 ↓ 0. Let ℎ be the Z-valued function on R defined by ℎ(𝑡) = 𝑥 for − 21 < 𝑡 − 𝑥 ≤ 12 , 𝑥 ∈ Z. Then ∫
∞
𝑝 𝑛 (ℎ(𝑡))e𝑖 𝜃𝑡 d𝑡 = −∞
∑︁ ∫ 𝑦
𝑦+ 12
𝑦− 21
𝑝 𝑛 (𝑦)e𝑖 𝜃𝑡 d𝑡 = 𝜓 𝑛 (𝜃)
sin 𝜃/2 . 𝜃/2
∫∞ Noting that 𝜒𝜆 (𝜃) is integrable, one can easily verify 𝜒𝜆 (𝑡) −∞ 𝜒𝜆 (𝑡) d𝑡 = 0. By virtue of Parseval’s identity we accordingly find ∫ ∞ ∫ ∞ 1 sin 𝜃/2 (𝜓 𝑛 (𝜃) − 1) 𝜒ˆ 𝜆 (𝜃) d𝜃. 𝑝 𝑛 (ℎ(𝑡)) 𝜒𝜆 (𝑡) d𝑡 = 2𝜋 𝜃/2 −∞ −∞ ∫∞ Í 𝑛 𝜈 𝑛 𝜈 It is easy to see ∞ 𝑥=−∞ 𝑝 (𝑥)|𝑥| = −∞ 𝑝 (ℎ(𝑡))|𝑡| d𝑡{1+ 𝑜(1)}. Thus letting 𝜆 ↓ 0 one concludes (9.82). □ Proposition 9.8.5 Let 1 < 𝛼 < 2. If 0 ≤ 𝜈 < 𝛼, then ∫ ∞ ∞ 1 ∑︁ 𝑛 𝜈 𝑝 (𝑥)|𝑥| = 𝔭1 (𝜉)|𝜉 | 𝜈 d𝜉. 𝜈 𝑛→∞ 𝑐 𝑛 −∞ 𝑥=−∞ lim
Proof Suppose 𝛼 ≠ 2, the case 𝛼 = 2 being similar. First, observe that the integral in (9.82) restricted to |𝜃| > 𝑀/𝑐 𝑛 is dominated by 𝐶 (𝑐 𝑛 /𝑀) 𝜈 , so that we have only to evaluate the integral over |𝜃| < 𝑀/𝑐 𝑛 . From (4.20) it follows that as 𝜃 → ±0 (1 − 𝜓(𝜃))/|𝜃| 𝛼 𝐿 (1/|𝜃|) −→ 𝜁 ± := Γ(1 − 𝛼) [cos 21 𝛼𝜋 ∓ 𝑖( 𝑝 − 𝑞) sin 21 𝛼𝜋], ±
hence 𝜓 𝑛 (𝜃) = e−𝑛[1−𝜓 ( 𝜃) ] {1+𝑜(1) } = e−𝑛𝜁 | 𝜃 | 𝐿 (1/𝜃) {1+𝑜(1) } . Substitute this expression into (9.82) and observe that 𝑐−𝜈 𝑛 times the integral restricted on {|𝜃| < 𝜀/𝑐 𝑛 } tends to zero as 𝑛 → ∞ and 𝜀 ↓ 0 in this order, which leads to 𝛼
∫ ∞ ∞ 1 − exp{−𝜁 + 𝑡 𝛼 } 1 ∑︁ 𝑛 2𝐶 (𝜈) 𝜈 𝑝 (𝑥)|𝑥| ℜ d𝑡. = 𝜈 𝑛→∞ 𝑐 𝑛 𝜋 𝑡 𝜈+1 0 𝑥=−∞ lim
240
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
Thus, for every 0 ≤ 𝜈 < 𝛼, 𝐸 0 |𝑌𝑛 (1)| 𝜈 is bounded, which entails the uniform integrability of |𝑌𝑛 (1)| 𝜈 under 𝑃0 and hence the identity of the theorem. □ As mentioned right after its statement, Proposition 9.8.3 follows from the next Lemma 9.8.6 If 1 < 𝛼 ≤ 2, lim 𝐸 0 [|𝑌𝑛 (1)| 𝜈 | 𝜎0 > 𝑛] = E0 [|𝔶1 | 𝜈 ] < ∞ for 0 ≤ 𝜈 < 𝛼. Proof We use the notation in the proof of Lemma 9.3.5. For 𝑥 = 0, (9.44) reduces to 𝜙 𝑛,0 (𝜃) = 𝐸 0𝑛, {0} [e𝑖 𝜃 𝑆𝑛 ] = 1 −
𝑛−1 0 ∑︁ 𝑟 𝑘
𝑘=0
𝑟 𝑛0
[1 − 𝜓(𝜃)]𝜓 𝑛−𝑘−1 (𝜃).
(9.83)
Let ℎ be the function defined in the proof of Lemma 9.8.4. Then ∫ ∞ sin 𝜃/2 . 𝑃0 [𝑆 𝑛 = ℎ(𝑡) | 𝜎0 > 𝑛]e𝑖 𝜃𝑡 d𝑡 = 𝜙 𝑛,0 (𝜃) 𝜃/2 −∞ As in the proof of Lemma 9.8.4, one therefore obtains ∫ ∞ ∫ ∞ 1 − 𝜙 𝑛,0 (𝜃) sin 𝜃/2 𝜈 · d𝜃, |𝑡| 𝑃0 [𝑆 𝑛 = ℎ(𝑡) | 𝜎0 > 𝑛] d𝑡 = 𝐶1 𝜃/2 |𝜃| 𝜈+1 −∞ −∞ and one sees that the integral above restricted to |𝜃| > 1/𝑐 𝑛 is bounded by a constant multiple of 𝑐 𝑛 . By Lemma 9.3.3 𝑟 𝑘0 /𝑟 𝑛0 ∼ (𝑘/𝑛) 1/𝛼−1 𝑙 (𝑘)/𝑙 (𝑛) (as 𝑘∧ → ∞) for some s.v. 𝑙 while |1 − 𝜓(𝜃)| = 𝑂 (|𝜃| 𝛼 𝐿 (1/|𝜃|)). Hence for a fixed large number 𝑀, 𝐸 0𝑛, {0} [|𝑌𝑛 (1)| 𝜈 ]
𝐶4 ≤ 𝐶3 + 𝜈 𝑐𝑛
∫
1/𝑐𝑛
𝜃 𝛼−𝜈−1 𝐿 (1/𝜃) d𝜃 0
𝑛 ∑︁ 𝑘 1/𝛼−1 𝑙 (𝑘) . 𝑛1/𝛼−1 𝑙 (𝑛) 𝑘=𝑀
It is easy to see that the integral and the sum above are bounded by a constant multiple of 𝑐 𝑛𝜈 /𝑛 and 𝑛, respectively. This shows that |𝑌𝑛 (1)| 𝜈 is uniformly integrable under 𝑃0𝑛, {0} . Hence the identity of the lemma follows from the convergence of 𝑃0𝑛, {0} . □
9.9 Some Related Results Here we state, without proof except for Lemmas 9.9.1 to 9.9.4, the principal results of Uchiyama [79], [82], Doney [19] and Caravenna and Chaumont [13]. The first two works obtain precise asymptotic forms of the transition probability for the walk killed when it hits the finite set, in the cases when 𝜎 2 < ∞ and when 𝐹 is in the domain of attraction of a stable law of exponent 1 < 𝛼 < 2, respectively. In [19], the walk killed on entering the negative half-line is studied, and the corresponding result on the transition probability and a local limit theorem for the hitting time of the half-line for the killed walk are obtained. The estimates obtained in these works are uniform in the space variables within the natural space-time region 𝑥 = 𝑂 (𝑐 𝑛 ),
9.9 Some Related Results
241
where 𝑐 𝑛 is a norming sequence associated with the walk. In [13] the functional limit theorem for the r.w. conditioned to stay positive forever is obtained.
9.9.1 The r.w. Avoiding a Finite Set in the Case 𝝈 2 < ∞ This subsection continues Section 9.1.2, and we use the notation introduced there. Let 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 ≥ 𝑛] as in Section 9.6, which entails 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝐵 = 𝑛] (𝑛 ≥ 1, 𝑦 ∈ 𝐵, 𝑥 ∈ Z) as well as 𝑄 0𝐵 (𝑥, 𝑦) = 𝛿 𝑥,𝑦 . Suppose 𝜎 2 < ∞ and let 𝑀 be an arbitrarily chosen positive number > 1. The following results (i) to (iii) are shown in [79]: √ √ (i) Uniformly for (𝑥, 𝑦) ∈ [−𝑀, 𝑀√ 𝑛] 2 ∪ [−𝑀 𝑛, 𝑀] 2 with K 𝐵 (𝑥, 𝑦) ≠ 0, as 𝑛 → ∞ along with |𝑥| ∧ |𝑦| = 𝑜( 𝑛), 𝜎2 K 𝐵 (𝑥, 𝑦)𝑃 𝑥 [𝑆 𝑛 = 𝑦]{1 + 𝑜(1)}; 𝑛 √ (ii) as 𝑥 ∧ 𝑦 ∧ 𝑛 → ∞ under 𝑥 ∨ 𝑦 < 𝑀 𝑛 along with 𝑃 𝑥 [𝑆 𝑛 = 𝑦] > 0, 𝑄 𝑛𝐵 (𝑥, 𝑦) =
2 2 2 2 𝑑∗ 𝑄 𝑛𝐵 (𝑥, 𝑦) = √ e−( 𝑦−𝑥) /2𝜎 𝑛 − e−( 𝑥+𝑦) /2𝜎 𝑛 {1 + 𝑜(1)}, 2𝜋𝜎 2 𝑛 where 𝑑∗ denotes the temporal period of the r.w. 𝑆; √ (iii) uniformly for |𝑥| < 𝑀 𝑛 and for 𝑦 0 ∈ 𝐵 with K 𝐵 (𝑥, 𝑦 0 ) ≠ 0, as 𝑛 → ∞, 𝑃 𝑥 [𝜎𝐵 = 𝑛, 𝑆 𝜎𝐵 = 𝑦 0 ] = 𝜎 2 K 𝐵 (𝑥, 𝑦 0 )
𝑃 𝑥 [𝑆 𝑛 = 𝑦 0 ] {1 + 𝑜(1)}. 𝑛
By (9.12), which may read K 𝐵 (𝑥, 𝑦) = u 𝐵 (𝑥) u−𝐵 (−𝑦) − w 𝐵 (𝑥) w−𝐵 (−𝑦), one finds that the formula of (i) is quite analogous to (2.8) (the corresponding one for the case 𝐵 = {0}) valid for fixed 𝑥, 𝑦 for which 𝑃0 [𝜎0 = 𝑛] ∼ 𝜎 2 𝑃 𝑥 [𝑆 𝑛 = 𝑦]/𝑛. Í On noting 𝑦0 ∈𝐵 K 𝐵 (𝑥, 𝑦 0 ) = 2u 𝐵 (𝑥), an elementary computation deduces from (iii) √︂ ∫ | 𝑥 |+ /√𝑛 2 2 2𝜎 2 u 𝐵 (𝑥) e−𝑦 /2𝜎 d𝑦, 𝑃 𝑥 [𝜎𝐵 > 𝑛] ∼ 𝜋 |𝑥| + 0 where |𝑥| + = |𝑥| ∨ 1. Below, combining this and the formula in (i), we derive the asymptotic form of 𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎𝐵 > 𝑛]. In the sequel suppose K 𝐵 (𝑥, 𝑦) > 0 for all 𝑥, 𝑦 for simplicity. Since 𝑔 −𝐵 (−𝑦) ∼ 2𝑦/𝜎 2 (resp. = 𝑜(𝑦)) as 𝑦 → +∞ (resp. −∞) and similarly for 𝑔 +𝐵 (−𝑦), it follows that + 𝑦𝑔 𝐵 (𝑥) + 𝑜(𝑦) 𝑦 → ∞, 2 𝜎 K 𝐵 (𝑥, 𝑦) = (9.84) −𝑦𝑔 −𝐵 (𝑥) + 𝑜(|𝑦|) 𝑦 → −∞. Note that u 𝐵 (𝑥) + sgn(𝑦) w 𝐵 (𝑥) equals 𝑔 +𝐵 (𝑥) or 𝑔 −𝐵 (𝑥) according as 𝑦 > 0 or 𝑦 < 0, and, therefore, is bounded away from zero as |𝑥| → ∞ by virtue of Lemma 9.1.2(c,c′).
242
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
√ √ Then writing 𝑦 𝑛 = 𝑦/ 𝑛, one sees that uniformly for |𝑦 𝑛 | < 𝑀, as 𝑥/ 𝑛 → 0 and |𝑦| → ∞ along with 𝑃 𝑥 [𝑆 𝑛 = 𝑦] > 0, 𝑑∗ [u 𝐵 (𝑥) + sgn(𝑦) w 𝐵 (𝑥)] |𝑦 𝑛 |e−𝑦𝑛 /2𝜎 {1 + 𝑜(1)}, · √ 𝑛 2𝜋𝜎 2 2
𝑄 𝑛𝐵 (𝑥, 𝑦) = and
|𝑦 𝑛 |e−𝑦𝑛 /2𝜎 {1 + 𝑜(1)}, √ 2𝜎 2 𝑛 2
𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎𝐵 > 𝑛] = 𝑑∗ 𝑘 𝑦 (𝑥) where 𝑘 𝑦 (𝑥) := 1 +
2
(9.85)
2
(9.86)
sgn(𝑦) w 𝐵 (𝑥) . u 𝐵 (𝑥)
Note that 𝑘 𝑦 (𝑥) → 2 or ∼ (𝜎 2 𝑎(𝑥) − |𝑥|)/|𝑥| according as sgn(y)𝑥 → ∞ or −∞. Thus (9.85) provides the exact asymptotics of 𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎√𝐵 > 𝑛] by means of 𝑎, indicating how the limit of the conditional ‘density’ 𝑃 𝑥 [𝑆 𝑛 / 𝑛 ∈ d𝜂 | 𝜎𝐵 > 𝑛]/ d𝜂 depends on the initial site 𝑥. The assertion of Lemma 9.3.2 is valid also in the case 𝜎 2 < ∞ and accordingly so is Lemma 9.8.6. Thus from (9.86) we deduce its integral version: ∫ 2 2 1 𝑃 𝑥 [𝑆 𝑛 /𝑐 𝑛 ∈ 𝐼 | 𝜎𝐵 > 𝑛] ∼ 𝑘 𝜂 (𝑥)|𝜂|e−𝜂 /2𝜎 d𝜂, 2 2𝜎 𝐼 √ valid as 𝑥/ 𝑛 → 0 for any finite interval 𝐼. From this the corresponding functional limit theorem also follows (see the proof of Lemma 9.3.7 for the tightness).
9.9.2 Uniform Estimates of 𝑸 𝒏𝑩 (𝒙, 𝒚) in the Case 1 < 𝜶 < 2 Below we state the results in [82] on the asymptotic form of 𝑄 𝑛𝐵 (𝑥, 𝑦) in the spacetime region |𝑥| ∨ |𝑦| < 𝑀𝑐 𝑛 with an arbitrarily given constant 𝑀. It differs in a significant way according as 𝑝𝑞 ≠ 0 or 𝑝𝑞 = 0. First, we state the result for the case 𝑝𝑞 ≠ 0, which can be formulated in a neat form. We write 𝑥 𝑛 for 𝑥/𝑐 𝑛 as before. Let 𝑝𝑞 ≠ 0. Then for each 𝜀 > 0 and uniformly for |𝑥| ∨ |𝑦| < 𝑀𝑐 𝑛 , as 𝑛 → ∞
𝑄 𝑛𝐵 (𝑥, 𝑦) ∼
u 𝐵 (𝑥)𝑃0 [𝜎0 = 𝑛] u 𝐵 (−𝑦) 𝔣 𝑥𝑛 (1) u 𝐵 (−𝑦)/𝑛 u 𝐵 (𝑥) 𝔣 −𝑦𝑛 (1)/𝑛 0 𝔭1 (𝑥 𝑛 , 𝑦 𝑛 )/𝑐 𝑛
(|𝑥 𝑛 | ∨ |𝑦 𝑛 | → 0), (𝑦 𝑛 → 0, |𝑥 𝑛 | > 𝜀), (𝑥 𝑛 → 0, |𝑦 𝑛 | > 𝜀), (|𝑥 𝑛 | ∧ |𝑦 𝑛 | > 𝜀).
One can deduce from (9.87) that uniformly for |𝑥| ∨ |𝑦| < 𝑀𝑐 𝑛 , 𝑄 𝑛𝐵 (𝑥, 𝑦) ∼ 𝔭10 (𝑥 𝑛 , 𝑦 𝑛 )/𝑐 𝑛 = 𝔭𝑛0 (𝑥, 𝑦)
as 𝑛 ∧ |𝑥| ∧ |𝑦| → ∞.14
14 Use Lemma 9.9.2 if 𝑦𝑛 → 0, |𝑥𝑛 | ∈ [ 𝜀, 𝑀 ] and (2.23) of [82] if |𝑥𝑛 | ∨ |𝑦𝑛 | → 0.
(9.87)
9.9 Some Related Results
243
Summing over 𝑦 ∈ 𝐵 in the first two formulae of (9.87), one sees that as 𝑛 → ∞ u (𝑥)𝑃0 [𝜎0 = 𝑛] as 𝑥 𝑛 → 0, (9.88) 𝑃 𝑥 [𝜎𝐵 = 𝑛] ∼ 𝑥𝐵𝑛 𝔣 (1)/𝑛 uniformly for |𝑥 𝑛 | ∈ [𝜀, 𝑀]. (In [82] it is shown 𝑃0 [𝜎0 = 𝑛] ∼ (1 − 𝛼−1 )𝜅 ♯ 𝑐 𝑛 /𝑛2 .) From this, one can easily show 𝑃 𝑥 [𝜎𝐵 > 𝑛] ∼ u 𝐵 (𝑥)𝑃0 [𝜎0 > 𝑛] as 𝑥 𝑛 → 0, and recalling 𝜅 ♯𝔮(𝜂) = 𝔣−𝜂 (1), one accordingly obtains that if 𝑝𝑞 ≠ 0, 𝑐 𝑛 𝑃 𝑥 [𝑆 𝑛 = 𝑥 | 𝜎𝐵 > 𝑛] −→ 𝔮(𝜂)
as 𝑥 𝑛 → 0 and 𝑦 𝑛 → 𝜂 ≠ 0.
(9.89)
In the case 𝑝𝑞 = 0 we have a somewhat delicate situation. Let 𝑞 = 0 so that 𝔭𝑛0 (𝑥, 𝑦) = 𝔭𝑛(−∞,0] (𝑥, 𝑦). If u 𝐵 (𝑥0 ) = 0 for some 𝑥0 (necessarily 𝑥 0 > 0), then (9.87) holds for 𝑥 with u 𝐵 (𝑥) > 0 and 𝑄 𝑛𝐵 (𝑥, 𝑦) ∼ 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝑇 > 𝑛] as 𝑥 ∧ 𝑦 → ∞ when u 𝐵 (𝑥) = 0 (so, (9.102) below applies). Let u 𝐵 (𝑥) > 0 for all 𝑥. Then 𝑄 𝑛𝐵 (𝑥, 𝑦) behaves quite differently. (9.87) holds at least for (𝑥, 𝑦) (with |𝑥| ∨ |𝑦| < 𝑀𝑐 𝑛 ) contained in the region (9.90) Z × (−𝑀, ∞) \ 𝐷 𝑛,𝑀 ∪ (−∞, 𝑀] × Z \ 𝐸 𝑛,𝑀 , where 𝐷 𝑛,𝑀 = (𝑀, 𝜀𝑐 𝑛 ) × [−𝑀, ∞) and 𝐸 𝑛,𝑀 = (−∞, 𝑀] × (−𝜀𝑐 𝑛 , −𝑀). In a part of the region 𝐷 𝑛,𝑀 ∪ 𝐸 𝑛,𝑀 , (9.87) fails, while (9.87) is still valid in a large subregion of it that depends on the behaviour of 𝑎(𝑥) as 𝑥 → ∞. By duality, we only consider 𝐷 𝑛,𝑀 . We make the decomposition 𝑛 𝑄 𝑛𝐵 (𝑥, 𝑦) = 𝑄 𝛺 (𝑥, 𝑦) + 𝑅 𝑛𝐵 (𝑥, 𝑦).
(9.91)
The second term denotes the remainder, i.e., the probability of the event {𝑆 𝑛 = 𝑦, 𝑇 < 𝑛 < 𝜎𝐵 }. It can be shown that 𝑅 𝑛𝐵 (𝑥, 𝑦) ∼ 𝜅 ♯𝛼 u 𝐵 (𝑥)𝔮(𝜂)/𝑛 as 𝑥 𝑛 → 0, 𝑦 𝑛 → 𝜂 ≠ 0 (as one may expect from (9.87) together with lim 𝑥𝑛 ↓0 𝑃 𝑥 [𝑇 < 𝑛 < 𝜎[𝑐𝑛 ,∞) ] = 1). Put 𝑉d (𝑥)/u 𝐵 (𝑥) 1 𝜆(𝑛, 𝑥) = ♯ (𝑥 > 0). · 𝜅 𝛼 Γ(1 − 1/𝛼)|Γ(1 − 𝛼)| 1/𝛼 𝑉d (𝑐 𝑛 )𝑐 𝑛 /𝑛 Then 𝑄 𝑛𝛺 (𝑥, 𝑦) ∼ 𝜅 ♯𝛼 u 𝐵 (𝑥)𝑛−1 𝜆(𝑛, 𝑥)𝔮mnd (𝜂), where 𝔮mnd denotes the stable meander (see (9.102) and (A.32)), so that as 𝑥 𝑛 ↓ 0, 𝑦 𝑛 → 𝜂 > 0, 𝑄 𝑛𝐵 (𝑥, 𝑦) ∼ 𝜅 ♯𝛼 u 𝐵 (𝑥)𝑛−1 [𝔮(𝜂) + 𝜆(𝑛, 𝑥)𝔮mnd (𝜂)].
(9.92)
Hence the two terms on the RHS in (9.91) are comparable to each other under 𝜆(𝑛, 𝑥) ≍ 1. If 𝑥 𝑛 ≍ 1, the first term of it is negligible compared to the second since by 𝑎(𝑥)/𝑎(−𝑥) → 0 (𝑥 → ∞) we have 𝜆(𝑛, 𝑐 𝑛 ) → 0. One can verify the equivalence 𝜆(𝑛, 𝑥) ≪ 1 ⇔ 𝑥 ≪ 𝑎 † (𝑥)𝑐2𝑛 /𝑛. These in particular show that in (9.90) 𝑀 can be replaced by 𝑐2𝑛 /𝑛 if 𝑎(𝑥) → ∞ (𝑥 → ∞); while if 𝑎(𝑥) is bounded on 𝑥 > 0, then 𝐸 | 𝑍ˆ | < ∞, 𝑉d (𝑥) ∼ 𝑣◦ 𝑥/𝐸 | 𝑍ˆ |, so that 𝜆(𝑛, 𝑥) ∼ 𝐶𝑥𝑛/𝑐2𝑛 . In any case from (9.92) one obtains
244
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
𝑃 𝑥 [𝜎𝐵 > 𝑛] ∼ u 𝐵 (𝑥)𝜅 ♯𝛼 𝑐 𝑛 𝑛−1 [1 + 𝜆(𝑛, 𝑥)]
(𝑥 𝑛 ↓ 0),
and hence 𝑃 𝑥 [𝑆 𝑛 = 𝑦 | 𝜎𝐵 > 𝑛] ∼
𝔮(𝜂) + 𝜆(𝑛, 𝑥)𝔮mnd (𝜂) 1 + 𝜆(𝑛, 𝑥)
(𝑥 𝑛 ↓ 0, 𝑦 𝑛 → 𝜂).
We know that 𝔮mnd (𝜉) ∼ 𝐶 ′ 𝜉 (𝜉 ↓ 0), 𝐶 ′′ 𝜉 −𝛼 (𝜉 → ∞) ([17, Remark 5 and Eq(33)]), which together with (9.96, 9.97) yields 𝔮(𝜉)/𝔮mnd (𝜉) ∼ 𝐶1 𝜉 𝛼−2 (𝜉 ↓ 0), 𝐶2 𝜉 −1 (𝜉 → ∞). In particular, this shows that (9.89) fails if 𝑞 = 0.
9.9.3 Asymptotic Properties of 𝖕𝒕0 (𝝃, 𝜼) and 𝖖𝒕 (𝝃) and Applications We rewrite the scaling relation (9.23) as 𝔣 𝜉 (𝜆𝑡) = 𝔣 𝜉 /𝑡
1/𝛼
(𝜆)/𝑡 = 𝔣sgn( 𝜉 ) (𝜆𝑡/|𝜉 | 𝛼 )/|𝜉 | 𝛼
(𝜉 ≠ 0, 𝑡 > 0, 𝜆 > 0). (9.93)
We know that if 𝑝 = 1, then 𝑉d varies regularly with index 1 and 𝔣 𝜉 (𝑡) = 𝑃𝑌𝜉 [𝜎(−∞,0) ∈ d𝑡]/d𝑡 = 𝜉𝑡 −1𝔭𝑡 (−𝜉)
(𝜉 > 0)
(9.94)
(cf. [4, Corollary 7.3]). In the case 𝑝𝑞 = 0, expansions of 𝔣 𝜉 (𝑡)𝑡 = 𝔣 𝜉 /𝑡 (1) into power series of 𝜉/𝑡 1/𝛼 are known. Indeed, if 𝑝 = 1, owing to (9.94) the series expansion for 𝜉 > 0 is obtained from that of 𝑡 1/𝛼𝔭𝑡 (−𝜉), which is found in [31], while for 𝜉 < 0, the series expansion is derived by Peskir [57]. In the recent paper [50, Theorem 3.14] Kuznetsov et al. obtain a similar series expansion for all cases. Here we let 1 < 𝛼 < 2 and we give the leading term for 𝔣1 (𝑡) and an error estimate (as 𝑡 → ∞ and 𝑡 ↓ 0) that are deduced from the series expansions of 𝔣 𝜉 (𝑡) mentioned above. To have neat expressions we put 1/𝛼
◦ 𝜅 𝑎,± 𝛼 = [1 ∓ ( 𝑝 − 𝑞)]/𝜅 𝛼 ,
(9.95)
𝛼−1 /𝐿(𝑥) as 𝑥 → ±∞; see Proposition 4.2.1, also (9.76) for 𝜅 ◦ . so that 𝑎(𝑥) ∼ 𝜅 𝑎,± 𝛼 𝑥 𝛼 (In (9.95) both upper or both lower signs should be chosen in the double signs.) In the rest of this subsection let 𝑐 ♯ = 1 as in Section 9.8. Then, as 𝑡 → ∞,
( ±1
𝔣 (𝑡) =
𝜅 ∗𝛼 𝑡 −1−1/𝛼 {1 + 𝑂 (𝑡 −1/𝛼 )}
if
𝜅 𝑎,± 𝛼 = 0,
−2+1/𝛼 {1 + 𝑂 (𝑡 1−2/𝛼 )} 𝜅 𝔣,± 𝛼 𝑡
if
𝜅 𝑎,± 𝛼 > 0,
(9.96)
−1 ♯ 𝑎,± ∗ 1/𝛼 /|Γ(−1/𝛼)|. Note that 𝜅 𝑎,+ > 0 where 𝜅 𝔣,± 𝛼 = (1−𝛼 )𝜅 𝛼 𝜅 𝛼 and 𝜅 𝛼 = |Γ(1−𝛼)| 𝛼 𝑎,− (𝜅 𝛼 > 0) if and only if 𝑞 > 0 (𝑝 > 0). The first formula above follows from [31, Lemma XVII.6.1] in view of (9.94). In [50, Theorem 3.14] and [82, Lemma 8.2] an asymptotic expansion as 𝑡 → 0 is obtained which entails that
9.9 Some Related Results
𝔣1 (𝑡) =
245
𝐶Φ 𝛼2 Γ(𝛼)𝜅 ♯𝛼 sin 𝛼 𝜌𝜋 ˆ 1/𝛼 𝑡 + 𝑂 (𝑡 1+1/𝛼 ) 𝜋 cos( 𝜌ˆ − 𝜌)𝛼𝜋
(𝑡 ↓ 0),
(9.97)
if 𝑞 > 0, and 𝔣1 (𝑡) = 𝑜(𝑡 𝑀 ) for all 𝑀 > 1 if 𝑞 = 0. We note that the leading term in the second formula of 𝔣1 (𝑡) in (9.96) is deduced from (9.88) together with 𝑃0 [𝜎0 > 𝑛] ∼ 𝛼−1 (𝛼 − 1)𝜅 ♯𝛼 𝑐 𝑛 /𝑛; lim 𝜉 ↓0
𝑛 𝑓 𝑥 (𝑛) (𝛼 − 1)𝜅 ♯𝛼 𝑎(𝑥) 𝛼 − 1 𝑎,+ ♯ 𝔣 𝜉 (1) = lim lim = lim lim 𝜅 𝜅 = 𝜅 𝔣,+ = 𝛼 , 𝜉 ↓0 𝑥𝑛 → 𝜉 𝜉 𝛼−1 𝜉 ↓0 𝑥𝑛 → 𝜉 𝛼 𝛼 𝛼 𝜉 𝛼−1 𝛼𝜉 𝛼−1 𝑛/𝑐 𝑛
giving what we wanted because of the scaling relation (9.93). ∫∞ Lemma 9.9.1 (i) 𝛼 0 [𝔭1 (0) − 𝔭1 (±𝑢)]𝑢 −𝛼 d𝑢 = 𝜅 𝑎,∓ 𝛼 . (ii) If 𝜑(𝑡) is a continuous function on 𝑡 ≥ 0, then for 𝑡 0 > 0 ∫ 𝑡0 𝔭𝑡 (0) − 𝔭𝑡 (𝜂) 𝜑(𝑡) d𝑡 = 𝜅 𝑎,∓ lim 𝛼 𝜑(0). 𝜂→±0 0 |𝜂| 𝛼−1 Proof Since 𝔭1 is bounded and differentiable, the existence of the integral in (i) is obvious. We must identify it as 𝜅 𝑎,∓ 𝛼 , which we do in the next lemma. Using the scaling relation as above and changing the variable 𝑡 = (|𝜂|/𝑠) 𝛼 we obtain ∫ ∞ ∫ 𝑡0 𝔭𝑡 (0) − 𝔭𝑡 (𝜂) 𝔭1 (0) − 𝔭1 (sgn(𝜂)𝑢) 𝜑(𝑡) d𝑡 = 𝛼 𝜑(|𝜂| 𝛼 /𝑢 𝛼 ) d𝑢. 𝛼−1 1/𝛼 𝑢𝛼 |𝜂| | 𝜂 |/𝑡0 0 Dominated convergence shows the formula of (ii) by virtue of (i).
□
Lemma 9.9.2 For any 𝑡 0 > 0, uniformly for |𝜉 | > 0 and 𝑡 > 𝑡0 , as 𝜂 → ±0, 𝜉 𝔭𝑡0 (𝜉, 𝜂)/|𝜂| 𝛼−1 −→ 𝜅 𝑎,∓ 𝛼 𝔣 (𝑡).
Proof Although the result follows from use the latter only for the iden∫ ∞ (9.87), we 1 (0) d𝑢 in this proof, which is based tification of the constants 𝑏 ±𝛼,𝛾 = 𝛼 0 𝔭1 (±𝑢)−𝔭 𝛼 |𝑢 | on ∫ 𝑡 𝔭𝑡0 (𝜉, 𝜂) = 𝔭𝑡 (𝜂 − 𝜉) − 𝔣 𝜉 (𝑡 − 𝑠)𝔭𝑠 (𝜂) d𝑠. 0
Subtracting from this equality the same one, except with 𝜂 = 0, for which the LHS vanishes, and then dividing by |𝜂| 𝛼−1 , one obtains 𝔭𝑡0 (𝜉, 𝜂) 𝔭𝑡 (𝜂 − 𝜉) − 𝔭𝑡 (−𝜉) = + |𝜂| 𝛼−1 |𝜂| 𝛼−1
∫ 0
𝑡
𝔭𝑠 (0) − 𝔭𝑠 (𝜂) 𝜉 𝔣 (𝑡 − 𝑠) d𝑠. |𝜂| 𝛼−1
(9.98)
As 𝜂 → 0, the first term on the RHS tends to zero, and Lemma 9.9.1 applied to the second term yields the equality of the lemma. The uniformity of the convergence is checked by noting that the above integral restricted to 𝑠 > 𝑡/2 is negligible.
246
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
𝑎,∓ 𝑎,∓ To show 𝑏 ±𝛼,𝛾 = 𝜅 𝑎,∓ 𝛼 , suppose 𝜅 𝛼 ≠ 0; if 𝜅 𝛼 = 0, the required identity follows from what we have just verified. By the second case of (9.87), for any 𝜀 > 0, there exists 0 < 𝛿 < 𝜀 such that for |𝑦 𝑛 | < 𝛿, lim sup 𝑄 𝑛{0} (𝑐 𝑛 , 𝑦)𝑛[𝔣1 (1)𝑎(−𝑦)] −1 −1 < 𝜀. Hence, by the fourth case of (9.87), for 𝜀𝛿 < |𝑦 𝑛 | < 𝛿, 𝔭0 (1, 𝑦 𝑛 )𝑛/𝑐 𝑛 𝔭0 (1, 𝑦 𝑛 ) 𝑄 𝑛{0} (𝑐 𝑛 , 𝑦)𝑛 1 1 lim sup 1 − 1 = lim sup 𝑛 · 1 − 1 ≤ 𝜀. 𝔣 (1)𝑎(−𝑦) 𝑄 {0} (𝑐 𝑛 , 𝑦)𝑐 𝑛 𝔣 (1)𝑎(−𝑦)
Letting 𝜀 ↓ 0, this entails that as 𝑦 → ±∞ under 𝑦 𝑛 → ±0, so that 𝐿 (𝑐 𝑛 )/𝐿(|𝑦|) → 1, 1∼
𝔭10 (1, 𝑦 𝑛 )𝑛/𝑐 𝑛 𝔣1 (1)𝑎(−𝑦)
∼
𝔭10 (1, 𝑦 𝑛 )𝑐 𝑛𝛼−1 /𝐿(𝑐 𝑛 ) 1 𝛼−1 /𝐿 (|𝑦|) 𝜅 𝑎,∓ 𝛼 𝔣 (1)|𝑦|
∼
𝔭10 (1, 𝑦 𝑛 ) . 𝑎,∓ 1 𝜅 𝛼 𝔣 (1)|𝑦 𝑛 | 𝛼−1
Thus 𝑏 ±𝛼,𝛾 = [𝔣1 (1)] −1 lim 𝜂→±0 𝔭10 (1, 𝜂)/|𝜂| 𝛼−1 = 𝜅 𝑎,∓ 𝛼 , finishing the proof.
□
The following lemma shows that 𝔞 is the ‘unique’ non-negative harmonic function for the transition function 𝔭𝑡0 (𝜉, 𝜂) in view of the theory of Martin boundaries. Lemma 9.9.3 Put 𝔤0 (𝜉, 𝜂) =
∫∞
𝔭𝑡0 (𝜉, 𝜂)𝑑𝑡. Then ∫ ∞ d𝑟 𝛼[1 − sgn(𝜉) ( 𝑝 − 𝑞)] 𝔣∓𝑟 (1) |𝜉 | 𝛼−1 ; (i) lim 𝔤0 (𝜉, 𝜂) = 𝜂→±∞ 2𝜅 ◦𝛼 𝑟 0 −1 ♯ 𝑎,± (ii) lim 𝔤0 (𝜉, 𝜂)/𝔤0 (±1, 𝜂) = 𝜅 𝛼 𝜅 𝛼 𝔞(𝜉), 0
| 𝜂 |→∞
where either sign applies if 𝑝𝑞 > 0 whereas the upper (resp. lower) sign is allowed only if 𝑞 > 0 (resp. 𝑝 > 0) so that 𝔤0 (±1, 𝜂) > 0 for all 𝜂 ≠ 0. Proof We show (i) in its dual form, which reads (∗)
lim 𝔤0 (𝜉, 𝜂) =
𝜉 →±∞
𝛼[1 + sgn(𝜂) ( 𝑝 − 𝑞)] 2𝜅 ◦𝛼
∫ 0
∞
𝔣±𝑟 (1)
d𝑟 𝛼−1 |𝜂| . 𝑟
By the second scaling relation in (9.26) and the change of variable 𝑡 = (|𝜉 |/𝑟) 𝛼 , ∫ ∞ d𝑟 𝛼−1 (9.99) 𝔤0 (𝜉, 𝜂) = 𝛼|𝜉 | 𝔭10 (sgn(𝜉)𝑟, |𝜉 | −1 𝜂𝑟) 𝛼 . 𝑟 0 A formal application of Lemma 9.9.2 gives the formula of (i), which must be justified because of the singularity of the integrand at zero. For the justification, resuming the procedure leading to Lemma 9.9.2, we apply the decomposition of 𝔭𝑡0 (𝜉, 𝜂) in (9.98). Let I 𝜉 and II 𝜉 denote the contribution to, respectively, the RHS above from the part corresponding to the first and second terms of the RHS of (9.98). Let 𝜉 > 0 and 𝛿 = 𝜉 −1 𝜂. First, we dispose of I 𝜉 . Using |(𝑟 + 𝛿𝑟) −𝛼 − 𝑟 −𝛼 | < 𝐶 |𝛿|𝑟 −𝛼 , we observe that
9.9 Some Related Results
∫
0
∞
247
∫ 1 ∫ ∞ 𝔭1 (𝛿𝑟 − 𝑟) − 𝔭1 (−𝑟) 1−𝛼 −𝛼 d𝑟 ≤ 𝐶 |𝛿| 𝑟 d𝑟 + 𝑟 d𝑟 ≤ 𝐶2 |𝜂|/𝜉, 1 𝑟𝛼 0 1−| 𝛿 |
provided |𝛿| < 1/2. Thus I 𝜉 → 0. To evaluate II 𝜉 we write it as ∫ II 𝜉 = 𝛼|𝜂|
∞
𝛼−1 0
d𝑟 𝑟
∫ 0
1
𝔭𝑠 (0) − 𝔭𝑠 (𝛿𝑟) 𝑟 𝔣 (1 − 𝑠) d𝑠. (|𝛿|𝑟) 𝛼−1
Since 𝔭1 is continuously differentiable, for every 𝜀 > 0, this repeated integral restricted to 𝑟 > 𝜀 and 1/2 < 𝑠 ≤ 1 is 𝑂 (𝛿2−𝛼 ), hence tends to zero as 𝜉 → ∞. In the remaining range of integration 𝔣𝑟 (1 − 𝑠) is bounded by ∫a constant multiple of 𝑟 𝛼−1 , ∞ 𝑟 hence the repeated integral over that converges to 𝜅 𝑎,∓ 𝔣 (1)𝑟 −1 d𝑟 by virtue of 𝛼 𝜀 Lemma 9.9.1(ii), where ± accords with the sign of 𝜂. Letting 𝜀 ↓ 0 we see that (∗) follows when 𝜉 → ∞. The case 𝜉 → −∞ is similar. The second assertion (ii) is immediate from (i) and the definition of 𝔞. □ Lemma 9.9.4 (i) lim | 𝜂 |→0 𝔭𝑡0 (𝜉, 𝜂)/𝔞(−𝜂) = 𝔮(−𝜉), where the limit is restricted to 𝜂 such that 𝔞(−𝜂)∫ > 0 when 𝑝𝑞 = 0. ∞ (ii) For every 𝑡 > 0, −∞ 𝔞(𝜉)𝔮𝑡 (𝜉) d𝜉 = 1. ♯ 𝛼−1 Proof Recalling the definitions in (9.75) and (9.95), we have 𝔞(𝜂) = 𝜅 𝑎,± 𝛼 𝜅 𝛼 |𝜂| (± accords with the sign of 𝜂) and find that (i) follows from Lemma 9.9.2 because 𝔣 𝜉 (𝑡) = 𝜅 ♯𝛼𝔮𝑡 (−𝜉). To verify (ii), write down the dual of what we have just obtained in the form lim 𝔭𝑡0 (𝜂, 𝜉)/𝔞(𝜂) = 𝔮𝑡 (𝜉), | 𝜂 |→0
multiply both sides by 𝔞(𝜉) and integrate over 𝜉 ∈ R. Then a formal application of (9.78) gives the result, where the limit procedure can be justified as in the proof of the preceding lemma. □
9.9.4 The r.w. Killed Upon Entering the Negative Half-Line Concerning the conditional law of the r.w. given 𝑇 (= 𝜎𝛺 ) > 𝑛, there are many works that have studied the functional limit theorem [10], [11], [13], [17], [19], [38], [68] and the local limit theorems [1], [11], [12], [17], [88] in various frameworks. Below we state the main results from [11], [13], [17] and [88] in our setting of the arithmetic r.w. Suppose that |1 − 𝛼| ∧ 𝜌 𝜌ˆ > 0 and 𝐹 is strongly aperiodic. We know that ˆ for some ℓˆ s.v. at infinity (Theorem A.3.1). Let 𝔮mnd (𝜂) 𝑃0 [𝑇 > 𝑛] ∼ 𝑡 −𝜌ˆ ℓ(𝑡) mnd ˆ and 𝔮 (𝜂), 𝜂 ≥ 0 denote the densities of the stable meander of length 1 at time 1 associated with 𝑌 and −𝑌 , respectively (see [4] for the definition), and 𝑄 𝑛𝛺 (𝑥, 𝑦) := 𝑃 𝑥 [𝑆 𝑛 = 𝑦, 𝜎𝛺 > 𝑛]. Vatutin and Wachtel [88] proved that
248
9 Asymptotically Stable Random Walks Killed Upon Hitting a Finite Set
( 𝑛 (0, 𝑦) 𝑄𝛺
∼
𝑉d (0)𝑈a (𝑦)𝔭1 (0)/(𝑛𝑐 𝑛 )
(𝑦 𝑛 ↓ 0),
𝑃[𝑇 > 𝑛]𝔮mnd (𝑦 𝑛 )/𝑐 𝑛
(𝑦 𝑛 > 𝜀),
(9.100)
and based on it, derived ˆ 𝑃0 [𝑇 = 𝑛] ∼ 𝜌𝑛 ˆ −1−𝜌ˆ ℓ(𝑛).
(9.101)
Partially based on the analysis made in [88], Doney [17] extended the above result to the general initial point. The result reads (given in the dual setting in [17]): for each 𝜀 > 0,
𝑛 𝑄𝛺 (𝑥, 𝑦) ∼
𝑉d (𝑥)𝑈a (𝑦)𝔭1 (0)/(𝑛𝑐 𝑛 ) 𝑈a (𝑦)𝑃0 [𝜎 ˆ mnd (𝑥 𝑛 )/𝑐 𝑛 [1,+∞) > 𝑛] 𝔮 𝑉d (𝑥)𝑃0 [𝜎(−∞,0] > (−∞,0] (𝑥 , 𝑦 )/𝑐 𝔭1 𝑛 𝑛 𝑛
𝑛]𝔮mnd (𝑦
𝑛 )/𝑐 𝑛
(𝑥 𝑛 ↓ 0, 𝑦 𝑛 ↓ 0), (𝑦 𝑛 ↓ 0, 𝑥 𝑛 > 𝜀), (𝑥 𝑛 ↓ 0, 𝑦 𝑛 > 𝜀),
(9.102)
(𝑥 𝑛 ∧ 𝑦 𝑛 ≥ 𝜀)
uniformly for 𝑥 ∨ 𝑦 < 𝑐 𝑛 /𝜀. Based on this result, the following was derived ( 𝜌𝑉d (𝑥)𝑃0 [𝑇 = 𝑛]/𝑣◦ as 𝑥 𝑛 ↓ 0, (9.103) 𝑃 𝑥 [𝑇 = 𝑛] ∼ 𝑥𝑛 (1) uniformly for 𝑥 𝑛 ∈ [𝜀, 𝜀 −1 ]. 𝑛−1 𝔣 (−∞,0] (The derivations of (9.101) from (9.100), and (9.103) from (9.102), constitute essential parts of [88] and [17], respectively.) The conditional limit theorem for the r.w. given 𝑇 = ∞ was studied by Bryn-Jones and Doney [11] and Caravenna and Chaumont [13]. The conditional process is an ℎ-transform of 𝑆 by the renewal function 𝑉d , which is harmonic on Z \ 𝛺. Define the probability laws on 𝐷 [0,∞) by P𝑛,𝑉 𝑥 (Γ) = 𝑃 𝑥 [𝑉d (𝑆 ⌊𝑛𝑡 ⌋ ); 𝑌𝑛 ∈ Γ, 𝑇 > 𝑛𝑡]/𝑉d (𝑥) 𝑥 = 0, 1, 2, . . . , ( 𝐸 𝑌𝜉 [(𝑌 (𝑡)) 𝛼𝜌ˆ ; 𝑌 ∈ Γ, 𝜎𝑌(−∞,0] > 𝑡]/𝜉 𝛼𝜌ˆ 𝜉 > 0, P 𝜉 (Γ) = 𝛼𝜌ˆ 𝛼𝜌ˆ Emnd [𝔶𝑡 ; Γ]/Emnd [𝔶𝑡 ] 𝜉 = 0. 𝑡 𝑡 stands for the expectation with respect to the law Here for 𝑡 > 0, Γ ∈ B𝑡 , and Emnd 𝑡 of the stable meander of length 𝑡. Bryn-Jones and Doney [11] established that for ⇒ P 𝜉 as 𝑥/𝑐 𝑛 → 𝜉 ≥ 0 in the case 𝛼 = 2 by showing the local 0 < 𝜌 < 1, P𝑛,𝑉 𝑥 limit theorem, and Caravenna and Chaumont [13] extended it to the case 0 < 𝛼 ≤ 2 by employing the earlier result (cf. [10], [19], [38], [68]) that the r.w. conditioned on 𝑃0 [𝑇 > 𝑛] converges in law to the stable meander.
Appendix
A.1 Relations Involving Regularly Varying Functions A.1.1 Results on s.v. Functions I Here we present some results derived from standard facts about the s.v. functions. Lemmas A.1.1 and A.1.2 below are used for the proof of Theorem 5.4.1 as well as for the deduction of Corollary 5.4.2(i) from Theorem 5.4.1. Lemma A.1.1 Let 𝐿 (𝑥) > 0 be an s.v. function at infinity. The following are equivalent (a) 𝑚(𝑥) ∼ 𝐿(𝑥)/2. ∫𝑥 (b) 𝑥𝜂(𝑥) = 𝑜 (𝑚(𝑥)), 𝑐(𝑥) := 0 𝑡 𝜇(𝑡) d𝑡 ∼ 𝑚(𝑡). (c) 𝑐(𝑥) ∼ 𝐿 ∫(𝑥)/2. 𝑥 (d) 𝑥 2 𝜇(𝑥) −𝑥 𝑡 2 d𝐹 (𝑡) → 0. ∫𝑥 (e) −𝑥 𝑡 2 d𝐹 (𝑡) ∼ 𝐿 (𝑥). Proof The implication (a) ⇒∫ (b) follows from 21 𝑥𝜂(𝑥) ≤ 𝑚(𝑥) − 𝑚( 12 𝑥) and its 𝑥
converse from 𝑚(𝑥) = 𝑚(1)e 1 𝜀 (𝑡) d𝑡/𝑡 , where 𝜀(𝑡) = 𝑡𝜂(𝑡)/𝑚(𝑡). Combining (a) and (b) shows (c) in view of 𝑚(𝑥) = 𝑐(𝑥) +∫𝑥𝜂(𝑥); conversely, (c) entails 𝑥 2 𝜇(𝑥) = ∞ 𝑜(𝐿(𝑥)), by which one deduces 𝑥𝜂(𝑥) ≪ 𝑥 𝑥 𝐿 (𝑡)𝑡 −2 d𝑡 ∼ 𝐿 (𝑥). Thus (b) ⇔ (𝑐). The equivalence of (d) and (e) follows ∫ 𝑥+ from Theorem VIII.9.2 of [31] and shows the equivalence of (e) and (c) since −𝑥− 𝑡 2 d𝐹 (𝑡) = −𝑥 2 𝜇(𝑥) + 2𝑐(𝑥). □ ∫𝑥 Lemma A.1.2 Let 𝑚 + /𝑚 → 0. Then by 0 𝑡 2 d𝐹 (𝑡) ≤ 2𝑐 + (𝑥) = 𝑜(𝑚 − (𝑥)) the slow ∫𝑥 ∫0 variation of 𝑚 − (𝑥) entails −𝑥 𝑡 2 d𝐹 (𝑡) ∼ −𝑥 𝑡 2 d𝐹 (𝑡) ∼ 2𝑚 − (𝑥) because of the equivalence (a) ⇔ (c) in (1).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8
249
250
Appendix
A.1.2 Results on s.v. Functions II We prove three lemmas involving s.v. functions, which are used in the proofs of Lemmas 6.3.1, 6.3.2 and 6.4.2, respectively. Lemma A.1.3 Let 𝛼 be a positive constant, ℓ(𝑡) (𝑡 ≥ 0) an s.v.∫ function, and 𝑢(𝑡) and 𝑥 𝐺 (𝑡) positive measurable functions of 𝑡 ≥ 0. Suppose that 0 𝑢(𝑡) d𝑡 ∼ 𝑥 𝛼 /𝛼ℓ(𝑥) (𝑥 → ∞), 𝐺 (𝑡) is non-increasing and both 𝑢 and 1/ℓ are locally bounded, and put ∫ ∞ 𝑡 𝛼−1 𝑢(𝑡) ˜ = , ℎ♯ (𝑥) = 𝑢(𝑡)𝐺 ˜ (𝑡) d𝑡, ℓ(𝑡) 𝑥 ∫ ∞ ∫ ∞ ˜ ℎ(𝑥) = 𝑢(𝑡)𝐺 (𝑥 + 𝑡) d𝑡, and ℎ(𝑥) = 𝑢(𝑡)𝐺 ˜ (𝑥 + 𝑡) d𝑡. 0
0
˜ If one of ℎ(1), ℎ(1) and ℎ♯ (1) is finite, then so are the other two and ˜ (i) if either ℎ or ℎ˜ is regularly varying with index > −1, then ℎ(𝑥) ∼ ℎ(𝑥); ˜ (ii) if either ℎ or ℎ♯ is s.v., then ℎ(𝑥) ∼ ℎ♯ (𝑥). Proof ∫ 𝑥We may suppose ∫ 𝑥 ℓ is positive and continuous. Integrating by parts verifies ˜ that 0 𝑢(𝑡)𝐺 ˜ (𝑡) ≍ 0 𝑢(𝑡)𝐺 (𝑡) d𝑡, so that ℎ(1), ℎ(1) and ℎ♯ (1) are finite if either of them is finite. ∫𝑡 ˜ Note that ℎ(1) < ∞ entails 𝐺 (𝑡) = 𝑜(𝑡 −𝛼 ℓ(𝑡)), so that 𝐺 (𝑡) 0 𝑢(𝑠) d𝑠 → 0 ∫𝑡 ∫𝑡 (𝑡 → ∞). Put 𝑈 (𝑥) = 0 𝑢(𝑠) d𝑠 and 𝑈˜ (𝑡) = 0 𝑢(𝑠) ˜ d𝑠. Then, interchanging the order of integration and integrating by parts, in turn, one deduces ∫ 𝑥 ∫ ∞ ∫ 𝑥+𝑡 ∫ ∞ ℎ(𝑡) d𝑡 = 𝑢(𝑡) d𝑡 𝐺 (𝑠) d𝑠 = [𝐺 (𝑡) − 𝐺 (𝑥 + 𝑡)]𝑈 (𝑡) d𝑡, 0
0
and similarly
∫
0
𝑡
∫
𝑥
0
∞
[𝐺 (𝑡) − 𝐺 (𝑥 + 𝑡)]𝑈˜ (𝑡) d𝑡.
˜ d𝑡 = ℎ(𝑡) 0
Since 𝐺 is monotone and 𝑈 (𝑡) ∼ 𝑈˜ (𝑡), these identities together show ∫ 𝑥 ∫ 𝑥 ˜ ℎ(𝑠) d𝑠 ∼ ℎ(𝑠) d𝑠, 0
∫
0
∫
𝑥 𝑥 ˜ provided 0 ℎ(𝑠) d𝑠 or 0 ℎ(𝑠) d𝑠 diverges to infinity as 𝑥 → ∞. It therefore follows ˜ that if one of ℎ or ℎ is regularly varying with index > −1, then so is the other and ˜ ℎ(𝑥) ∼ ℎ(𝑥) owing to the monotone density theorem [8]. Thus (i) is verified. For the proof of (ii), suppose either ℎ˜ or ℎ♯ is s.v. Take 𝑀 > 2 large. Then we see
∫ 0
and
( 𝑀−1) 𝑥
𝑢(𝑡)𝐺 ˜ (𝑥 + 𝑡) d𝑡 = (𝑀 − 1) 𝛼
˜ ℎ(𝑥) ≥
𝑥𝛼 𝐺 (𝑀𝑥){1 + 𝑜(1)} 𝛼ℓ(𝑥)
A.1 Relations Involving Regularly Varying Functions
∫
∞
∫
𝑥
(𝑀𝑥 + 𝑡) d𝑡 ≤ 𝑢(𝑡)𝐺 ˜
+
˜ ℎ(𝑀𝑥) = 0
𝑥
251
𝑥𝛼 𝐺 (𝑀𝑥) + ℎ♯ (𝑥). 𝛼ℓ(𝑥)
(A.1)
˜ ˜ If ℎ˜ is s.v., so that ℎ(𝑀𝑥) ∼ ℎ(𝑥), these inequalities lead to ˜ [1 + 𝑂 (𝑀 −𝛼 ){1 + 𝑜(1)}] . ℎ♯ (𝑥) ≥ ℎ(𝑥)
(A.2)
On the other hand, if ℎ♯ is s.v., then ∫ 𝑥 𝐺 (𝑥)𝑥 𝛼 ℎ♯ (𝑥) ∼ ℎ♯ (𝑥/2) ≥ 𝑢(𝑡)𝐺 ˜ (𝑡) d𝑡 ≥ (1 − 2−𝛼 ) , 𝛼ℓ(𝑥) 𝑥/2 so that by (A.1) applied with 𝑥/𝑀 in place of 𝑥, one obtains ˜ ℎ(𝑥) ≤ [1 + 𝑂 (𝑀 −𝛼 ){1 + 𝑜(1)}] ℎ♯ (𝑥), ˜ hence (A.2). Since 𝑀 can be made arbitrarily large, one concludes ℎ(𝑥) ≤ ℎ♯ (𝑥){1 + 𝑜(1)} in either case. The reverse inequality is verified as follows. If 𝛼 ≥ 1, then taking 𝜀 ∈ (0, 1) so small that for 0 < 𝑠 < 1, (1 − 𝜀𝑠) 𝛼−1 > 1 − 𝛼𝜀, one deduces ∫ ∞ (𝑡 − 𝜀𝑥) 𝛼−1 ˜ ℎ(𝜀𝑥) ≥ 𝐺 (𝑡) d𝑡 ≥ (1 − 𝛼𝜀)ℎ♯ (𝑥){1 + 𝑜(1)}, ℓ(𝑡 − 𝜀𝑥) 𝑥 ˜ which shows ℎ(𝑥) ≥ ℎ♯ (𝑥){1 + 𝑜(1)}. If 𝛼 < 1, one may suppose ℓ to be normalised ˜ so that for all sufficiently large 𝑥, 𝑢(𝑡) ˜ ≥ 𝑢(𝑥 ˜ + 𝑡) (𝑡 ≥ 0), hence ℎ(𝑥) ≥ ℎ♯ (𝑥) as ˜ well. Now we can conclude that ℎ(𝑥) ∼ ℎ♯ (𝑥), and hence ℎ(𝑥) ∼ ℎ♯ (𝑥) by (i). □ Lemma A.1.4 Let 𝑉 (𝑥) and 𝜇(𝑥) be positive functions on 𝑥 ≥ 0. If 𝜇 is nonincreasing, ∫ ∞lim sup 𝜇(𝜆𝑥)/𝜇(𝑥) < 1 for some 𝜆 > 1 and 𝑉 is non-decreasing and s.v., then 1 𝜇(𝑡) d𝑉 (𝑡) < ∞ and ∫
∞
𝜇(𝑡) d𝑉 (𝑡) = 𝑜 (𝑉 (𝑥)𝜇(𝑥))
(𝑥 → ∞).
𝑥
Proof We may suppose that 𝑉 is continuous and there exists constants 𝛿 < 1 and 𝑥 0 such that 𝜇(𝜆𝑥) < 𝛿𝜇(𝑥) for 𝑥 ≥ 𝑥0 . Let 𝑥 > 𝑥0 . It then follows that ∫
∞
∫
∞ ∫ 𝜆𝑛+1 𝑥
𝜇(𝑡) d𝑉 (𝑡) = 𝑥
𝜇(𝑡) d𝑉 (𝑡) ≤ 𝜇(𝑥) 𝑛=0
𝜆𝑛 𝑥
∞ ∑︁
𝛿 𝑛 [𝑉 (𝜆 𝑛+1 𝑥) − 𝑉 (𝜆 𝑛 𝑥)].
𝑛=0
(A.3) Take 𝜀 > 0 so that (1+ 𝜀)𝛿 < 1. Then forÍall sufficiently large 𝑥, 𝑉 (𝜆𝑥)/𝑉 (𝑥) ≤ 1+ 𝜀, 𝑛 𝑛 so that 𝑉 (𝜆 𝑛 𝑥)/𝑉 (𝑥) < (1 + 𝜀) 𝑛 , hence ∞ 𝑛=0 𝛿 𝑉 (𝜆 𝑥) ≤ 𝐶𝑉 (𝑥), which shows that the last expression in (A.3) is 𝑜(𝜇(𝑥)𝑉 (𝑥)), for 𝑉 (𝜆 𝑛+1 𝑥) − 𝑉 (𝜆2 𝑥) = 𝑜((𝑉 (𝜆 𝑛 𝑥)) uniformly in 𝑛. □
252
Appendix
Lemma A.1.5 Let ∫𝑣(𝑡) and 𝐺 (𝑡) be non-negative ∫ ∞ measurable functions on 𝑡 ≥ 0 𝑡 such that 𝑉 (𝑡) := 0 𝑣(𝑠) d𝑠 is s.v. at infinity, 1 𝑣(𝑡)𝐺 (𝑡) d𝑡 < ∞ and 𝐺 is nonincreasing. Then as 𝑡 → ∞ ∫∞ 𝑣(𝑠)𝐺 (𝑠) d𝑠 𝑡 ∫∞ −→ 0. 𝑉 (𝑡)𝐺 (𝑡) + 𝑡 𝑉 (𝑠)𝐺 (𝑠) d𝑠/𝑠 ∫𝑡
Proof Let 𝑉˜ be a normalised version of 𝑉: 𝑉˜ (𝑡) = e 0 𝜀 (𝑠) d𝑠 with 𝜀(𝑡) → 0, so that 𝑣˜ (𝑡) := 𝑉˜ ′ (𝑡) = 𝑜(𝑉 (𝑡)/𝑡). We show that ∫ ∞ ∫ ∞ 𝑣(𝑠)𝐺 (𝑠) d𝑠 = 𝑣˜ (𝑠)𝐺 (𝑠) d𝑠{1 + 𝑜(1)} + 𝑜 (𝑉 (𝑡)𝐺 (𝑡)) , (A.4) 𝑡
𝑡
which implies the asserted result. Put ∫ ∞ 𝛥(𝑡) = 𝑣(𝑠)𝐺 (𝑠) d𝑠,
∫ 𝐸 (𝑡) = −
𝑡
∞
𝑉 (𝑠) d𝐺 (𝑠). 𝑡
Then 𝐸 (𝑡) = 𝑉 (𝑡)𝐺 (𝑡) + 𝛥(𝑡). Since 𝐸 (𝑡) ∼ −
∫∞ 𝑡
𝑉˜ (𝑠) d𝐺 (𝑠), this entails
∫ ∞ 𝛥(𝑡) = −𝑉˜ (𝑡)𝐺 (𝑡){1 + 𝑜(1)} − 𝑉˜ (𝑠) d𝐺 (𝑠) + 𝑜(𝐸 (𝑡)) 𝑡 ∫ ∞ = 𝑣˜ (𝑠)𝐺 (𝑠) d𝑠 + 𝑜 (𝑉 (𝑡)𝐺 (𝑡)) ; +𝑜(𝐸 (𝑡)). 𝑡
Since 𝑜(𝐸 (𝑡)) = 𝑜(𝑉 (𝑡)𝐺 (𝑡)) + 𝑜( 𝛥(𝑡)), we obtain (A.4).
□
A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution ∞ be a r.w. on {0, 1, 2, . . .} with i.i.d. increments. Put Let 𝑇0 = 0 and (𝑇𝑛 ) 𝑛=0
𝑢(𝑥) =
∞ ∑︁
𝑃[𝑇𝑛 = 𝑥] (𝑥 = 1, 2, . . .)
𝑛=0
(renewal mass function) and 𝑈 (𝑥) = 𝑢(0) + · · · + 𝑢(𝑥). Suppose that 𝑇1 is strongly aperiodic so that 𝑢(𝑥) is positive for all sufficiently large 𝑥. We shall consider asymptotic forms of 𝑢 and/or 𝑈 and the overshoot distribution. Most results (some with auxiliary assumptions) can be extended to non-arithmetic cases.
A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution
253
A.2.1 Strong Renewal Theorems We give some results on the asymptotics of 𝑢(𝑥) in two extremal cases, one for r.s. variables and the other for those having s.v. tails, which are dealt with in Sections A.2.1.1 and A.2.1.2, respectively.
A.2.1.1 Case: 𝑻1 is r.s. First, consider the case when 𝑇1 is r.s. Put ∫ 𝑡 𝐿 (𝑡) = 𝑃[𝑇1 > 𝑠] d𝑠 (𝑡 ≥ 0). 0
Erickson [28, §2(ii)] shows that lim 𝑢(𝑥)𝐿(𝑥) = 1 if 𝑡𝑃[𝑇1 > 𝑡] is s.v. at infinity. This restriction on 𝑇1 is relaxed as in the following lemma, which is used to obtain (5.28). (As mentioned before, the result is extended to general r.s. variables in [83].) Lemma A.2.1 If 𝐿 is s.v. at infinity, then 𝑢(𝑥) ∼ 1/𝐿(𝑥) as 𝑥 → ∞. Proof We follow the argument made by Erickson [28]. Let 𝜙(𝜃) = 𝐸 exp{𝑖𝜃𝑇1 }. Unlike [28], we take up the sine series of coefficients 𝑢(𝑥) that represents the imaginary part of 1/(1 − 𝜙(𝜃)). Suppose 𝐸𝑇1 = ∞; otherwise, the result is the well-known renewal theorem. Fourier inversion yields ∫ 𝜋 2 1 𝑢(𝑥) = 𝑆(𝜃) sin 𝑥𝜃 d𝜃, where 𝑆(𝜃) = ℑ , (A.5) 𝜋 0 1 − 𝜙(𝜃) where the integral is absolutely convergent, as is verified shortly. The proof of this representation of 𝑢(𝑥) will be given after we derive the assertion of the lemma by taking it for granted. The assumed slow variation of 𝐿 implies – in fact, is equivalent to – each of ∫ 𝑡 𝑠𝑃[𝑇1 ∈ d𝑠] ∼ 𝐿 (𝑡) and 𝑡𝑃[𝑇1 > 𝑡]/𝐿(𝑡) → 0 (𝑡 → ∞) 0
(cf. [31, Theorem VIII.9.2], [8, Corollary 8.1.7]). Using this we observe that as 𝜃 ↓ 0, ∫ 𝜀/𝜃 𝑡𝑃[𝑇1 ∈ d𝑡] ∼ 𝐿(1/𝜃) for each 𝜀 > 0 and hence 0 ∫ 1 − 𝜙(𝜃) =
∫
1/𝜃
∞
(1 − e𝑖 𝜃𝑡 )𝑃[𝑇1 ∈ d𝑡]
+ 0
1/𝜃
= −𝑖𝜃 𝐿(1/𝜃){1 + 𝑜(1)}. In particular, 𝑆(𝜃) sin 𝑥𝜃 is summable on (0, 𝜋), as mentioned above.
(A.6)
254
Appendix
Decomposing 𝑢(𝑥) =
∫
𝐵/𝑥 2 𝜋 0
𝐽1 =
2 𝜋
+ 𝜋2 ∫ 0
∫
𝐵
𝜋 𝐵/𝑥
= 𝐽1 + 𝐽2 , we deduce from (A.6) that
sin 𝑢 d𝑢 {1 + 𝑜(1)} 𝑢𝐿 (𝑥/𝑢)
(A.7)
with 𝑜(1) → 0 as 𝑥 → ∞ for each 𝐵 > 1. On the other hand ∫ (𝐵+ 𝜋)/𝑥 ∫ 𝜋+ 𝜋/𝑥 ∫ 𝜋 h 𝜋 i 𝜋𝐽2 = 𝑆(𝜃) − 𝑆 𝜃 + sin 𝑥𝜃 d𝜃 + − 𝑆(𝜃) sin 𝑥𝜃 d𝜃 𝑥 𝐵/𝑥 𝐵/𝑥 𝜋 = 𝐽2′ + 𝐽2′′
(say).
With the help of the bound |𝜙(𝜃) − 𝜙(𝜃 ′)| ≤ 2|𝜃 − 𝜃 ′ |𝐿(1/|𝜃 − 𝜃 ′ |) (𝜃 ≠ 𝜃 ′) (Lemma 5 of [28]) the same proof as given in [28, (5.15)] yields the bound 𝐽2′ ≤ 𝐶 ′/𝐵𝐿(𝑥). By the strong aperiodicity of 𝑇1 and (A.6), |𝑆(𝜃)| ≤ 𝐶/𝜃 𝐿 (1/𝜃) (0 < 𝜃 ≤ 𝜋), and it is easy to see that |𝐽2′′ | ≤ 𝐶 ′′ [𝐵/𝑥 + 1/𝐵𝐿(𝑥/𝐵)]. Thus lim 𝑥→∞ 𝐿(𝑥)𝐽2 ≤ 𝐶 ′/𝐵. Since (A.7) implies that 𝐿(𝑥)𝐽1 → 1 as 𝑥 → ∞ and 𝐵 → ∞ in this order, we can conclude 𝐿(𝑥)𝑢(𝑥) → 1 as desired. □ ∫ 𝜋 Proof (of (A.5)) The ‘cosine formula’, namely 𝑢(𝑛) = 𝜋2 0 𝐶 (𝜃) cos 𝑥𝜃 d𝜃, where 𝐶 (𝜃) := ℜ(1 − 𝜙(𝜃)) −1 , is applied in [33] and [28] without proof. In [33] is cited the article [37], which proves the Herglotz representation theorem (named after its author) of the analytic functions in the unit open disc with non-negative real parts. Noting ℜ(1 − 𝜙(𝜃)) −1 ≥ 0 (|𝜃| < 1) one can easily obtain the cosine formula by deducing the summability of 𝐶 (𝜃) from the representation theorem. The same argument does not go through for (A.5), 𝑆(𝜃) not always being summable. An elementary proof of the cosine formula is given in [71, p.98-99] and easily modified to obtain (A.5) as given below. Put ∞ 1 1 ∑︁ 𝑛 𝑛 𝑟 𝜙 (𝜃) = 𝑤𝑟 (𝜃) = , 2𝜋 𝑛=0 2𝜋(1 − 𝑟𝜙(𝜃)) ∫ −1 𝜋 𝑛 −𝑖𝑥 𝜃 d𝜃 and 𝑢(𝑥) = where 1/2 Í∞ < 𝑛𝑟 < 1. Noting 𝑃[𝑇𝑛 = 𝑥] = (2𝜋) − 𝜋 𝜙 (𝜃)e lim𝑟 ↑1 𝑛=0 𝑟 𝑃[𝑇𝑛 = 𝑥] for all 𝑥 ∈ Z as well as 𝑢(−𝑥) = 0 for 𝑥 ≥ 1 one deduces that ∫ 𝜋 𝑢(𝑥) = 𝑢(𝑥) − 𝑢(−𝑥) = lim 𝑤𝑟 (𝜃) (−𝑖2 sin 𝑥𝜃) d𝜃. 𝑟 ↑1
−𝜋
Since ℜ𝑤𝑟 (𝜃) is even and ℑ𝑤𝑟 (𝜃) is odd we see that the limit above equals ∫ 𝜋 ∫ 𝜋 2 𝑆(𝜃) sin 𝑥𝜃 d𝜃. 4 lim ℑ𝑤𝑟 (𝜃) sin 𝑥𝜃 d𝜃 = 𝑟 ↑1 0 𝜋 0 Here the equality is justified by the bound |ℑ𝑤𝑟 (𝜃) ≤ 1 [2𝜋𝑟 |ℑ𝜙(𝜃)|] ≤ 1/𝜃 𝐿(1/𝜃) for 𝜃 > 0 small enough, which follows from (A.6). Thus yielding (A.5). □
A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution
255
A.2.1.2 Case: ℓ(𝒕) is s.v. Next consider the case when the tail ℓ(𝑡) := 𝑃[𝑇1 > 𝑡] is s.v., or what is the same thing, the renewal function 𝑈 (𝑥) is s.v. Nagaev [54] shows that if 𝑥𝑃[𝑇1 = 𝑥] is s.v., then 𝑢(𝑥) ∼ 𝑃[𝑇1 = 𝑥]/[ℓ(𝑥)] 2 . For the proof of Theorem 7.1.6 (cf. (7.97)) we needed the estimate 𝑢(𝑥) = 𝑜 (𝑈 (𝑥)/𝑥). The next lemma, an extension of [54], gives a better bound on 𝑞(𝑥) := 𝑃[𝑇1 = 𝑥], under a condition. Lemma A.2.2 Suppose 𝑃[𝑇1 > 𝑡] is s.v. If 𝐶 := lim lim sup 𝛿↑1
𝑥→∞
1 𝑞(𝑥)
sup 𝑞(𝑦) < ∞,
(A.8)
inf
(A.9)
𝛿 𝑥 2𝜀𝑥. Then using (A.12), (A.8) and (A.11) one sees that for all sufficiently large 𝑥, 3 𝑥𝑞(𝑥) 𝑥𝑢(𝑥) ≤ 𝐶 𝛿 + [ℓ(𝑥)] 2 Since
Í𝑥 𝑦=0
𝑦𝑞(𝑦) = −𝑥ℓ(𝑥) + 𝑁 (𝑥) =
Í 𝑥−1 𝑦=0
Í𝛿𝑥 𝑦=0
𝑦𝑞(𝑦) max 𝑢(𝑦).
𝜀 𝑥 ≤𝑦 ≤𝑥
ℓ(𝑥)
(A.13)
ℓ(𝑦) = 𝑜(𝑥ℓ(𝑥)), on writing
𝑞(𝑥) , [ℓ(𝑥)] 2
𝑀 𝛿 (𝑥) = max 𝑢(𝑦), 𝜀 𝑥 ≤𝑦 ≤𝑥
(A.13) yields that for 𝑥 large enough 𝑢(𝑥) ≤ 𝐶 𝛿 𝑁 (𝑥) + 𝑜(𝑀 𝛿 (𝑥)). We claim that there exist positive constants 𝑟 0 and 𝛼 such that 𝑞(𝑥) > 𝑥 −𝛼 for 𝑥 ≥ 𝑟 0 . For the proof, let 𝑞(𝑡) ¯ denote the continuous function of 𝑡 ≥ 1 that is obtained from 𝑞(𝑥) by linearly interpolating between successive positive integers. By (A.8), one can choose constants 𝑟 0 ≥ 2, 0 < 𝛿 < 1 and 𝑀 > 1 so that max 𝛿𝑟 ≤𝑡 ≤𝑟 𝑞(𝑡) ¯ ≤ 𝑀 𝑞(𝑟) ¯ for 𝑟 ≥ 𝑟 0 . One may suppose min 𝛿𝑟0 ≤𝑠 ≤𝑟0 𝑞(𝑠) ¯ > 0, for if 𝑞(𝑥 𝑛 ) = 0 for some sequence 𝑥 𝑛 ↑ ∞, then 𝑞 vanishes for some point on, contradicting the slow variation of ℓ. Put 𝜆 = [log 𝑀]/log 𝛿−1 . Then for 𝛿𝑟 0 ≤ 𝑠 ≤ 𝑟 0 and 𝑡 = 𝛿−𝑛 𝑠, (𝑛 = 1, 2, . . .), # " 𝑞(𝑡) ¯ ≥ 𝑀 −𝑛 𝑞(𝑠) ¯ = 𝛿𝜆𝑛 𝑞(𝑠) ¯ ≥ (𝛿𝑟 0 ) 𝜆 Hence the claim is verified.
min
1 2 𝑟0 ≤𝑠 ≤𝑟0
𝑞(𝑠) ¯ 𝑡 −𝜆 .
A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution
257
Take 𝜂 = 𝜂 𝛿 > 0 such that 𝜂𝐶 𝛿 < 𝜀
and
log 𝜂−1 > 2𝛼 log 𝜀 −1 ,
and choose 𝑟 ≥ 𝑟 0 so that for 𝑥 > 𝑟, 𝑢(𝑥) ≤ 𝐶 𝛿 𝑁 (𝑥) + 𝜂𝑀 𝛿 (𝑥)
and
sup 𝑁 (𝑦) < 𝐶 𝛿 𝑁 (𝑥).
𝛿 𝑥 ≤𝑦 𝑦] d𝑦 ∼ ℓ(𝑥) 0 (A.14) for an s.v. function ℓ; 𝛾 𝑃[𝑇1 > 𝑥] ∼ ℓ(𝑥)/𝑥 0 ≤ 𝛾 < 1, 𝑈 (𝑥) ∼ 𝑐 𝛾 𝑥 𝛾 /ℓ(𝑥) where 𝑐 𝛾 =
(𝛾𝜋) −1 sin 𝛾𝜋
(0 ≤ 𝛾 ≤ 1)
for an s.v. function ℓ,
(A.15)
for 0 < 𝛾 < 1 and 𝑐 𝛾 = 1 for 𝛾 ∈ {0, 1};
𝑍 𝑅 /𝑅 −→ 𝑃 0 𝑃 [𝑍 𝑅 /𝑅 < 𝜉] −→ 𝐹𝛾(𝑍) (𝜉) 𝑍 𝑅 /𝑅 −→ 𝑃 ∞
𝛾 = 1, 0 < 𝛾 < 1, 𝛾 = 0,
as 𝑅 → ∞,
(A.16)
where 𝐹𝛾(𝑍) (𝜉)
sin 𝛾𝜋 = 𝜋
∫
𝜉
d𝑡 (𝜉 ≥ 0) for 0 < 𝛾 < 1; + 𝑡)
𝑡 𝛾 (1
0
𝑇𝑁𝑅 /𝑅 −→ 𝑃 1 𝑃 𝑇𝑁𝑅 /𝑅 ≤ 𝜉 −→ 𝐹𝛾( 𝑁 ) (𝜉) 𝑇𝑁 /𝑅 −→ 𝑃 0 𝑅
𝛾 = 1, 0 < 𝛾 < 1, 𝛾 = 0,
as 𝑅 → ∞,
(A.17)
where 𝐹𝛾( 𝑁 ) (𝜉) =
sin 𝛾𝜋 𝜋
∫
𝜉
0
d𝑡 (0 ≤ 𝜉 ≤ 1, 0 < 𝛾 < 1). 𝑡 1−𝛾 (1 − 𝑡) 𝛾
Moreover, each of the above relations implies 0 𝑈 (𝑥)𝑃[𝑇1 > 𝑥] −→ 𝑐𝛾
𝛾 = 1, 0 ≤ 𝛾 < 1;
(A.18)
if 𝛾 =∫ 1, then the converse is true; in particular (A.18) (with 𝛾 = 1) implies that 𝑥 𝑈 (𝑥) 0 𝑃[𝑇1 > 𝑡] d𝑡/𝑥 → 1. The equivalence of (A.14) to (A.17) is shown by Dynkin [26] for 0 < 𝛾 < 1 and by Rogozin [63] for 𝛾 ∈ {0, 1}, where the proofs are made fully analytically by using the Laplace transform 𝜑(𝜆) := 𝐸 [e−𝜆𝑇1 ] and are somewhat involved. Note that the equivalence of (A.14) and (A.15) is immediate from the identity b 𝑢 (𝜆) :=
∞ ∑︁ 𝑥=0
𝑢(𝑥)e−𝜆𝑥 =
1 1 − 𝜑(𝜆)
(A.19)
in view of the Tauberian theorem that says (A.14) is equivalent to 1 − 𝜑(𝜆) ∼ Γ(1 − 𝛾)𝜆 𝛾 ℓ(1/𝜆)
(𝜆 ↓ 0),
(A.20)
A.2 Renewal Processes, Strong Renewal Theorems and the Overshoot Distribution
259
Í and similarly for 𝑈, the Laplace transform of which equals 𝜑 𝑛 = (1 − 𝜑) −1 . For 0 < 𝛾 < 1, Feller [31, §XIV.3] provides a rather simple derivation of (A.16) and (A.17) from the conjunction of (A.14) and (A.18) based on the identities 𝑃[𝑍 𝑅 > 𝑥] =
𝑅 ∑︁
𝑢(𝑦)𝑃[𝑇1 > 𝑅 − 𝑦 + 𝑥]
(A.21)
𝑦=0
and
𝑃 𝑇𝑁𝑅 = 𝑥 = 𝑢(𝑥)𝑃[𝑇1 > 𝑅 − 𝑥],
(A.22)
respectively.1 On taking 𝑥 = 0, (A.21) in particular yields 𝑈 (𝑅)𝑃[𝑇1 > 𝑅] ≤ 1.
(A.23)
In the sequel we let 𝛾 ∈ {0, 1} and we give a direct proof of the assertion of the theorem based on (A.21) and (A.22), except for the implication (A.18) ⇒ (A.14) in the case 𝛾 = 1, for which we resort to an analytic method. Therein we shall also show 𝑍 𝑅 /𝑅 −→ 𝑃 0 ⇐⇒ 𝑈 (𝑅)𝑃[𝑇1 > 𝑅] → 0 ⇐⇒ ∃𝑘 > 0, 𝑃[𝑍 𝑅 > 𝑘 𝑅] → 0. (A.24) For simplicity we write 𝜇𝑇 (𝑥) for 𝑃[𝑇1 > 𝑥]. Let 𝛾 = 0. First we show that (A.16) implies (A.14). Using (A.23), one infers that for any 𝜆 ≥ 1, 𝑃[𝑍 𝑅 > 𝜆𝑅] ≤ 𝑈 (𝑅)𝜇𝑇 (𝜆𝑅). By virtue of (A.23), this inequality leads to 𝑈 (𝑅)𝜇𝑇 (𝜆𝑅) → 1 under (A.16), showing that 𝜇𝑇 is s.v. (as well as Similarly verifies the implication (A.17) ⇒ (A.14) by using the identity (A.18)). one Í 𝑃 𝑇𝑁𝑅 /𝑅 < 𝜉 = 𝑦 ≤ 𝜉 𝑅 𝑢(𝑦)𝜇𝑇 (𝑅 − 𝑦), which follows from (A.22). Now it is easy to show the equivalence of (A.16) and (A.17), and that each of them follows from (A.14). Let 𝛾 = 1. Since then (A.14) implies 𝑥𝜇𝑇 (𝑥) = 𝑜(ℓ(𝑥)), (A.18) follows from (A.14), which is equivalent to (A.15), as mentioned at (A.19). We show (A.24). By using (A.21) and the sub-additivity of 𝑈 in turn we see for any constant 𝑘 > 0 𝑃[𝑍 𝑅 > 𝑘 𝑅] ≥ 𝑈 (𝑅)𝜇𝑇 (𝑅 + 𝑘 𝑅) ≥ (𝑘 + 2) −1𝑈 (𝑅 + 𝑘 𝑅)𝜇𝑇 (𝑅 + 𝑘 𝑅). The convergence to zero of the right-most member entails that of 𝑈 (𝑅)𝜇𝑇 (𝑅), as is readily verified; hence lim 𝑃[𝑍 𝑅 > 𝑘 𝑅] = 0 implies lim 𝑈 (𝑅)𝜇𝑇 (𝑅) = 0. Similarly, if lim 𝑈 (𝑅)𝜇𝑇 (𝑅) = 0, 𝑃[𝑍 𝑅 > 𝜀𝑅] ≤ 𝑈 (𝑅)𝜇𝑇 (𝜀𝑅) ≤ (𝜀 −1 + 1)𝑈 (𝜀𝑅)𝜇𝑇 (𝜀𝑅) → 0. Thus (A.24) is verified. In particular, (A.18) is equivalent to (A.16). The implication (A.16) ⇒ (A.14) seems hard to prove by a simple method as above, and the proof by the use of Laplace transform, as in [63], must be natural. The proof of Theorem 1 In [31], the necessity of (A.14) for each of (A.16) and (A.17) is stated without proof. One can prove it by using a double transform, as given in (A.25) (see [8, §8.6.2]).
260
Appendix
A.2.3 for 𝛾 = 1 is complete if we can show that the following are equivalent (𝑎) 𝑍 𝑅 /𝑅 −→ 𝑃 0,
(𝑏) 𝑇𝑁𝑅 /𝑅 −→ 𝑃 1,
(𝑐) 1 − 𝜑(𝜆) ∼ 𝜆ℓ(1/𝜆) (𝜆 ↓ 0),
where ℓ is some s.v. function in (c). (a) follows from 𝑈 (𝑥)𝜇𝑇 (𝑥) → 0 in view of (A.24), hence from (c) by what we have observed above. Thus it suffices to show (a) ⇒ (b) ⇒ (c). proof is standard. Í The following Since 𝑃 𝑇𝑁𝑅 /𝑅 < 𝛿 = 𝑦 ≤ 𝛿𝑅 𝑢(𝑦)𝜇𝑇 (𝑅 − 𝑦) ≤ 𝑈 (𝛿𝑅)𝜇𝑇 ((1 − 𝛿)𝑅) for 0 < 𝛿 < 1, by (A.24) it follows that (a) ⇒ (b). To verify (b) ⇒ (c) we first deduce (1 − e−𝜆 )
∞ ∑︁
𝐸 e−𝑠𝑇𝑁𝑅 e−𝜆𝑟 =
𝑟=0
1 − 𝜑(𝜆) . 1 − 𝜑(𝑠 + 𝜆)
(A.25)
Í∞ −𝜆𝑟 Í𝑟 The sum on the LHS equals 𝑟=0 e d𝑟 𝑦=0 e−𝑠𝑦 𝜇𝑇 (𝑟 − 𝑦)𝑢(𝑦), which, interchanging the order of summation and integration, transforms to ∞ ∑︁ 𝑦=0
e−𝑠𝑦 𝑢(𝑦)
∞ ∑︁
e−𝜆𝑟 𝜇𝑇 (𝑟 − 𝑦) =
𝑟=𝑦
∞ ∑︁
e−(𝑠+𝜆) 𝑦 𝑢(𝑦) 𝜇 c𝑇 (𝜆) = b 𝑢 (𝑠 + 𝜆) 𝜇 c𝑇 (𝜆).
𝑦=0
Í∞ Here 𝜇 c𝑇 (𝜆) denotes 𝑟=0 𝜇𝑇 (𝑟)e−𝜆𝑟 and equals (1 − 𝜑(𝜆))/(1 − e−𝜆 ). Hence (A.25) follows from (A.19) (valid with no extra assumption). Now using (A.25), we deduce (c) from (b) as follows. On the LHS of (A.25), for any constant 𝑀 > 1, the sum over 𝑟 < 𝑀 is lessthan 𝑀 and negligible as 𝜆 → 0. On the other hand, (b) entails that e−𝑠 (𝑟+1) ≤ 𝐸 e−𝑠𝑇𝑁 (𝑟 ) {1 + 𝑜(1)} ≤ e−𝑠 (𝑟−1) as 𝑟 → ∞. Hence it follows that the LHS of (A.25) can be written as 𝜆/(𝑠 +𝜆){1+ 𝑜(1)} as 𝑠 +𝜆 → 0 under 0 < 𝑠 < 𝜆, so that putting 𝑠 = 𝛿𝜆 with 0 < 𝛿 < 1 yields (1 − 𝜑(𝜆))/(1 − 𝜑((1 + 𝛿)𝜆) → 1/(1 + 𝛿). Thus we have (c). This finishes the proof of Theorem A.2.3 for 𝛾 ∈ {0, 1}.
A.3 The First Ladder Epoch and Asymptotics of 𝑼a This section concerns the strict ascending ladder process (𝐻𝑛 , 𝜏𝑛 ), 𝑛 = 0, 1, 2, . . ., associated with the r.w. 𝑆 studied in this treatise. Here 𝜏0 = 𝑍0 = 0, 𝜏1 = 𝜎[1,∞) , 𝐻1 = 𝑆 𝜏1 (= 𝑍) and for 𝑛 > 1 𝜏𝑛+1 = inf{𝑘 > 𝜏𝑛 : 𝑆 𝑘 > 𝐻𝑛 }
and
𝐻𝑛+1 = 𝑆 𝜏 (𝑛+1) .
We shall write 𝑃 for 𝑃0 (the r.w. 𝑆 always starting at zero) in the sequel and adhere to our primary hypothesis of irreducibility and oscillation of arithmetic r.w.’s, although most of them can be extended to non-arithmetic r.w.’s. We shall apply Baxter’s formula ([71, Prop.17.5], [31, XVIII, (3.7)], [16]), given in the form 1 − 𝐸 [𝑠 𝜏1 e−𝜆𝑍 ] = e−
Í∞ 1
𝑛−1 𝑠 𝑛 𝐸 [e−𝜆𝑆𝑛 ;𝑆𝑛 >0]
(𝜆 ≥ 0, 0 < 𝑠 ≤ 1).
(A.26)
A.3 The First Ladder Epoch and Asymptotics of 𝑈a
261
A.3.1 Stability of 𝝉1 and Spitzer’s Condition The following result is due to Rogozin [63]. We write 𝜏 for 𝜏1 . Theorem A.3.1 (i) There exists 𝜌 := lim 𝑃[𝑆 𝑛 > 0] if and only if ∫ 𝑡 𝑃[𝜏 > 𝑠] d𝑠 ∼ ℓ𝜏 (𝑡) if 𝜌 = 1, 𝑃[𝜏 > 𝑡] ∼ 𝑡 −𝜌 ℓ𝜏 (𝑡) if 0 ≤ 𝜌 < 1 and 0
with an s.v. function ℓ𝜏 , in other words, 𝜏 is in the domain of attraction of a stable law of exponent 𝜌 if 0 < 𝜌 < 1; r.s. if 𝜌 = 1 and has a distribution function that is s.v. if 𝜌 = 0. If this is the case and 𝜏ˆ = 𝜎(−∞,0] , then 𝑡𝑃[𝜏 > 𝑡]𝑃[ 𝜏ˆ > 𝑡] −→ 𝜋 −1 sin 𝜌𝜋
as 𝑡 → ∞.
(A.27)
(ii) In order that the law of 𝜏𝑛 𝑛−1/𝜌 converges to a (non-degenerate) stable law as 𝑛 → ∞ for 0 0] − 𝜌) converges. The proof of the first half of Theorem A.3.1(i) is performed by combining Sparre Andersen’s identity ([31, Theorem XII.8.2], [16, 8.5.2]), Spitzer’s arcsine law for occupation times of the positive half line (cf. [69]) and Theorem A.2.3 as follows.2 Put 𝐾𝑛 = min{0 ≤ 𝑘 ≤ 𝑛 : 𝑆 𝑘 = 𝑀𝑛 } (the first time (≤ 𝑛) the maximum 𝑀𝑛 := max{𝑆0 , . . . , 𝑆 𝑛 } is attained). Then the first and second of these theorems yield that the distribution of 𝐾𝑛 /𝑛 converges to 𝐹𝜌( 𝑁 ) (given in (A.17) with obvious interpretation for 𝜌 ∈ {0, 1}) if and only if 𝑃[𝑆 𝑛 > 0] → 𝜌 (see [71, Problems 7, 9 in p233]). Let 𝑁 𝑛( 𝜏) denote the value taken by the renewal process of epochs (𝜏𝑘 ) ∞ 𝑘=0 just before it enters the half line {𝑛 + 1, 𝑛 + 2, 𝑛 + 3, . . .}. After a little reflection, one sees 𝐾𝑛 = 𝑁 𝑛( 𝜏) . By Theorem A.2.3, this identity, the gist of the proof, shows that 𝜏 satisfies the properties asserted in the theorem if and only if the law of 𝑁 𝑛( 𝜏) /𝑛 converges to 𝐹𝜌( 𝑁 ) , hence the equivalences asserted in the first half of (i). As for the second half of (i), we apply (A.26), which, together with its dual, yields 1 − 𝐸 [𝑠 𝜏 ] = e−
Í∞ 𝑛=1
𝑠 𝑛 𝑃 [𝑆𝑛 >0]/𝑛
and 1 − 𝐸 [𝑠 𝜏ˆ ] = e−
Í∞ 𝑛=1
𝑠 𝑛 𝑃 [𝑆𝑛 ≤0]/𝑛
.
(A.28)
SupposeÍ𝜌 = lim 𝑃[𝑆 𝑛 > 0] ∈ [0, 1). Then, on putting 𝜃 𝑛 = 𝜌 − 𝑃[𝑆 𝑛 > 0] and ∞ 𝑛 ℎ(𝑠) = e 𝑛=1 𝑠 𝜃𝑛 /𝑛 , the RHSs of (A.28) are written, respectively, as (1 − 𝑠) 𝜌 ℎ(𝑠)
and
(1 − 𝑠) 1−𝜌 /ℎ(𝑠).
we obtain ℎ(𝑠) ∼ Γ(1− 𝜌)ℓ𝜏 (1/(1− 𝑠)) as 𝑠 ↑ 1 (see If 𝜌 < 1, from the first half Í of (i) 𝑛 𝑃[ 𝜏ˆ > 𝑛] = (1 − 𝑠) −1 (1 − 𝐸 [𝑠 𝜏ˆ ]) = (1 − 𝑠) −𝜌 /ℎ(𝑠), 𝑠 (A.20)) and, on noting ∞ 𝑛=0 2 In [63] Spitzer’s arcsine law is directly derived from Theorem A.2.3 by using Sparre Andersen’s result and the formula (A.28).
262
Appendix
an of the Tauberian theorem leads to (A.27). For 𝜌 = 1, use the identities Í∞application 𝑛 Í𝑛 𝑃[𝜏 > 𝑘] = 1 − 𝐸 [𝑠 𝜏 ] = (1 − 𝑠)ℎ(𝑠). 𝑛=0 𝑠 𝑘=0 If 0 < 𝜌 < 1, for the law of 𝜏𝑛 𝑛−1/𝜌 to converge to a stable law, it is necessary and sufficient that the sum appearing as the exponent in the definition of ℎ(𝑠) approaches a finiteÍnumber as 𝑠 ↑ 1, which condition is equivalent to the convergence of the series 𝜃 𝑛 /𝑛 owing to Littlewood’s Tauberian theorem. One can dispose of the case 𝜌 = 1 similarly. The proof of Theorem A.3.1 is finished.
A.3.2 Asymptotics of 𝑼a Under (AS) Theorem A.3.2 If (AS) holds, then 𝑈a (𝑥) ∼ 𝑥 𝛼𝜌 /ℓ(𝑥) for some s.v. function ℓ. This is Theorem 9 of [63] except in the extreme cases 𝜌 = 0 and 𝛼 = 𝜌 = 1. If 𝛼𝜌 = 1, we know 𝑍 is r.s., so that 𝑈a (𝑥) ∼ 𝑥/ℓ ∗ (𝑥) (see Section 5.2); we also know 𝑈a is s.v. if 𝜌 = 0 (see the table in Section 2.6). Let 0 < 𝛼𝜌 < 1. For the proof of Theorem A.3.2, it suffices to show (𝑍) (𝜉), 𝑃[𝑍 (𝑐 𝑛 )/𝑐 𝑛 > 𝜉] → 1 − 𝐹𝛼𝜌
(A.29)
because of the equivalence of (A.16) and (A.15). Since the law of the scaled process 𝑆 ⌊𝑛𝑡 ⌋ /𝑐 𝑛 , 𝑡 ≥ 0, converges in Skorokhod’s topology to a stable process, 𝑌 , of exponent 𝛼 with 𝑃[𝑌 (1) > 0] = 𝜌, it is not hard to show that 𝑍 (𝑐 𝑛 )/𝑐 𝑛 converges in law to 𝜁 := 𝑌 (𝜎𝑌[1,∞) ) − 1. It is proved in [4, Lemma VII.3] that 𝜁 is subject to the stable law of exponent 𝛼𝜌, which immediately gives (A.29) in view of Theorem A.2.3. Although this approach is elegant, the result of [4] mentioned above is based on a sophisticated theory about the local time of Le´vy processes. A direct proof, as given in [63], is also of interest. We give it below, with some simplifications. In view of the invariance principle mentioned above, (A.29) holds if it does for the stable r.w. whose step distribution is given by 𝐹𝛼,𝜌 (𝑥) = 𝑃𝑌 [𝑌 (1) ≤ 𝑥]. Let 𝑍 ♯ be the first ascending ladder height for this stable r.w. Since the overshoot across a level 𝑅 of an r.w. is the same as the corresponding overshoot of the ladder height process of the r.w., by Theorem A.2.3 (see (A.20)), (A.29) follows if we can verify 1 − 𝐸 𝑌 e−𝜆𝑍 ∼ 𝐶𝜆 𝛼𝜌 ♯
(𝜆 ↓ 0)
(A.30)
with some positive constant 𝐶. Because of (A.26) the Laplace transform on the LHS is represented by using the distributions of 𝑌 (𝑛), 𝑛 = 1, 2, . . . while by the scaling property of the stable law we have 𝑃𝑌 [𝑌 (𝑛) ≤ 𝑥] = 𝑃𝑌 [𝑌 (1) ≤ 𝑥/𝑛1/𝛼 ]. It accordingly follows that log
1 1−
𝐸 𝑌 [e−𝜆𝑍 ♯ ]
=
∞ ∑︁ 𝑛=1
𝑛−1 𝐸 𝑌 [e−𝜆𝑌 (𝑛) ; 𝑌 (𝑛) > 0] = 𝜌
∞ ∑︁ 1
𝑛
𝜑+♯ (𝑛1/𝛼 𝜆),
𝑛=1
(A.31)
A.3 The First Ladder Epoch and Asymptotics of 𝑈a
263
where 𝜑+♯ (𝜆) = 𝐸 𝑌 [e−𝜆𝑌 (1) | 𝑌 (1) > 0]. We write the right-most expression as ∑︁
𝜌
1≤𝑛≤𝜆−𝛼
∞ i ∑︁ 1 1h ♯ +𝜌 𝜑+ ((𝑛𝜆 𝛼 ) 1/𝛼 ) − 1 [0,1] (𝑛𝜆 𝛼 ) . 𝑛 𝑛 𝑛=1
As 𝜆 ↓ 0, the first term may be written as 𝜌[log 𝜆−𝛼 + 𝐶 ∗ ]h + 𝑜(1) (𝐶 ∗ is Euler’si con∫∞ stant) while the second term converges to 𝐶1 := 𝜌 0 𝑡 −1 𝜑+♯ (𝑡 1/𝛼 ) − 1 [0,1] (𝑡) d𝑡 ∈ R. Now returning to (A.31) one finds (A.30) to hold with 𝐶 = e−𝜌𝐶
∗ −𝐶 1
.
Remark A.3.3 Suppose (AS) holds with 0 < 𝜌 < 1. According to [88, Lemma 12] there then exist positive constants 𝐶, 𝐶0 and 𝐶0′ such that ( 𝑃[𝜏 > 𝑛] ∼
𝐶𝑃[𝑍 > 𝑐 𝑛 ] ∫ 𝑐𝑛 𝐶𝑐−1 𝑛 0 𝑃[𝑍 > 𝑥] d𝑥
and 𝑈a (𝑐 𝑛 ) ∼ 𝐶0 /𝑃[𝜏 > 𝑛]
and
if 𝛼𝜌 < 1, if 𝛼𝜌 = 1,
𝑉d (𝑐 𝑛 ) ∼ 𝐶0′ /𝑃[ 𝜏ˆ > 𝑛]
in either case. (Recall 𝜏 = 𝜎[1,∞) , 𝜏ˆ = 𝜎(−∞,0] .) By the last relation ℓ is related to ℓ𝜏 in Theorem A.3.1 by ℓ𝜏 (𝑛) ∼ 𝐶0 ℓ(𝑐 𝑛 ) [𝐿(𝑐 𝑛 )] −𝜌 . In Theorem 5.4.1 (𝜌 = 1 is not excluded), we have observed that under 𝑚 + /𝑚 → 0, Spitzer’s condition holds if and only if 𝑚(𝑥) varies regularly with index 2 − 1/𝜌: 𝑚(𝑥) ∼ 𝑐 𝜌 𝑥 −1/𝜌+2 𝐿(𝑥) for 12 ≤ 𝜌 ≤ 1 (with a certain 𝑐 𝜌 > 0). It is shown in [18], [77] that if this is the case and 𝐸 𝑍 < ∞, then ℓ𝜏 is given by a positive multiple of the de Bruijn ∗ ∗ (𝑥 1/𝜌 𝐿(𝑥)) ∼ [𝐿 (𝑥)] 𝜌 , 𝜌 −1 -conjugate of 𝐿, 𝐿 1/𝜌 say, which is determined by 𝐿 1/𝜌
so that ℓ𝜏 (𝑥 1/𝜌 𝐿 (𝑥)) ∼ const.[𝐿 (𝑥)] −𝜌 , hence ℓ𝜏 is directly related to 𝐿. This fails if 𝐸 𝑍 = ∞. We also point out that if 𝐸 | 𝑍ˆ | < ∞ and 1 < 𝛼 ≤ 2, then ℓ𝜏 (𝑥) ∼ 𝐶0′ [𝐿(𝑐 𝑛 )] 𝜌ˆ , where 𝐶0′ is equal to 𝑣◦ 𝐶0 /[2𝐸 | 𝑍ˆ |] if 𝛼 = 2 and to 𝑣◦ 𝜅 𝛼 𝐶0 /[2(𝛼 − 1) (2 − 𝛼)𝐸 | 𝑍ˆ |] if 1 < 𝛼 < 2, and ℓ𝜏 (𝑥) = 𝑜( [𝐿 (𝑐 𝑛 )] 𝜌ˆ ) otherwise. This follows from the dual of (2.15) with the help of 𝑎(−𝑥) ∼ 2𝜅 −1 𝛼 𝑥/𝑚(𝑥). In the extreme case 𝑝𝑞 = 0 of 1 < 𝛼 < 2 and the case 𝛼 = 2 we can derive explicit expressions of the constants 𝐶0 and 𝐶0′ above. If 1 < 𝛼 ≤ 2 and 𝛼 𝜌ˆ = 1, then by the third and fourth cases of (9.102) it follows that for 𝜀 2 < 𝑥 𝑛 < 𝜀, as 𝑦 𝑛 → 𝜂 > 0, 𝔭1{0} (𝑥 𝑛 , 𝑦 𝑛 ) 𝑥𝑛
=
𝐶0′ 𝑉d (𝑥)𝔮mnd (𝑦 𝑛 ) 𝑥 𝑛𝑉d (𝑐 𝑛 )
{1 + 𝑜 𝜀 (1)} = 𝐶0′ 𝔮mnd (𝑦 𝑛 ){1 + 𝑜 𝜀 (1)},
where 𝑜 𝜀 (1) → 0 as 𝑛 → ∞ and 𝜀 ↓ 0 in this order. One can show that √︁ 𝔭1{0} (𝜉, 𝜂)/𝜉 → 𝛼𝔭1 (0)𝔮mnd (𝜂) (𝜉 ↓ 0) and 𝛼𝔭1 (0) equals 2/[𝜋𝑐 ♯ ] if 𝛼 = 2 −1/𝛼 and Γ(1 − 𝛼)𝑐 ♯ if 0 < 𝛼 < 2. This together with (A.27) and Lemma 6.5.1 leads to the following:
264
Appendix
(i) if 𝛼 = 2, then 𝑃[𝜏 > 𝑛]𝑈a (𝑐 𝑛 ) ∼ 𝑃[ 𝜏ˆ > 𝑛]𝑉d (𝑐 𝑛 ) −→ (ii) if 𝑞 = 0 and 1 < 𝛼 < 2, then
√︁ 2/𝜋𝑐 ♯ ;
1/𝛼−1 𝑃[𝜏 > 𝑛]𝑈a (𝑐 𝑛 ) −→ Γ(1 − 𝛼)𝑐 ♯ [Γ(𝛼)Γ(1/𝛼)] −1 , −1/𝛼 𝑃[ 𝜏ˆ > 𝑛]𝑉d (𝑐 𝑛 ) −→ [Γ(1 − 1/𝛼)] −1 Γ(1 − 𝛼)𝑐 ♯ .
(A.32)
A.4 Positive Relative Stability and Condition (C3) By specialising the central limit theorem applied to the triangular array 𝑋𝑛 , 𝑘 = 𝑋 𝑘 /𝐵𝑛 (𝑘 = 1, . . . , 𝑛) to the degenerate case (cf. 1◦ (with (𝛼, 𝜎) = (1, 0)) in §22.5.1 of [52]), or [31, Theorem of §XVII.7]) we have the following criterion for 𝑆 to be p.r.s. Put 𝜈(𝑥) = 𝐸 [𝑋; |𝑋 | ≤ 𝑥] 𝑥 > 0. Given a positive sequence (𝐵𝑛 ), in order that 𝑆 𝑛 /𝐵𝑛 → 1 in probability,it is necessary and sufficient that for any constant 𝛿 > 0,3 𝜈(𝛿𝐵𝑛 ) ∼ 𝐵𝑛 /𝑛, 𝜇(𝛿𝐵𝑛 ) = 𝑜(1/𝑛), 𝐸 [𝑋 2 ; |𝑋 | ≤ 𝛿𝐵𝑛 ] = 𝑜(𝐵2𝑛 /𝑛).
(A.33)
Note that the first condition above prescribes how one may choose 𝐵𝑛 . Here we show that (A.33) holds with 𝐵𝑛 > 0 for large 𝑛 if and only if (∗)
𝜈(𝑥)/[𝑥𝜇(𝑥)] −→ ∞ (𝑥 → ∞).
[Since 𝜈(𝑥) = 𝑥 [𝜇− (𝑥) − 𝜇+ (𝑥)] + 𝐴(𝑥 − 1) ∼ 𝐴(𝑥) under (∗), this equivalence shows what is mentioned at (2.20) of Section 2.4.] If (A.33) holds with 𝐵𝑛 > 0, then its second relation implies that 𝐵𝑛 → ∞ and by 𝑆 𝑛 /𝐵𝑛 → 𝑃 1 we have 𝐵𝑛+1 /𝐵𝑛 → 1, which shows that 𝜈 is s.v. because of the first two relations of (A.33) (cf. [8, Theorem 1.9.2]). Hence (A.33) implies (∗). For the converse, we have only to verify the third condition of (A.33) under (∗), since (∗) implies 𝜈(𝑥) is s.v., and hence the first two relations of (A.33) hold for some sequence (𝐵𝑛 ). To this end we show that 𝑥𝜈(𝑥)/𝐸 [𝑋 2 ; |𝑋 | ≤ 𝑥] → ∞, which shows that the third relation follows from the first, hence from Í (∗). But the slow variation of 𝜈(𝑥) entails 𝐸 [𝑋 2 ; |𝑋 | ≤ 𝑥] = −(𝑥 + 1) 2 𝜇(𝑥 + 1) + 2 0𝑥 𝑦𝜇(𝑦) = 𝑜(𝑥𝜈(𝑥)), which concludes the required estimate. Theorem A.4.1 The following are equivalent (under our basic framework): (a) (C3) holds, namely 𝑍 is r.s. and 𝑉d is s.v.; (b) lim 𝑃 𝑥 (Λ2𝑥 ) = 1 and 𝑍 is r.s.; (c) the r.w. 𝑆 is p.r.s.; (d) there exists a positive increasing sequence 𝐵𝑛 such that for each 𝑀 ≥ 1 and 𝜀 > 0, 𝑃0 sup0≤𝑡 ≤𝑀 |𝐵−1 𝑛 𝑆 ⌊𝑛𝑡 ⌋ − 𝑡| < 𝜀 → 0 (𝐵 𝑛 is necessarily unbounded); (e) lim 𝑃 𝑥 (Λ2𝑥 ) = lim 𝑃[𝑆 𝑛 > 0] = 1, and 𝑍 (𝑅)/𝑅 → 𝑃 0. 3 The sufficiency part is readily verified by the elementary inequality Eq(VII.7.5) of [31].
A.4 Positive Relative Stability and Condition (C3)
265
Proof (of Theorem A.4.1) That (a) implies (b) follows immediately from Theorem 6.1.1. The implication (d) ⇒ (e) is obvious; (a) follows from (e) since by (6.12), 𝑉d is s.v. if 𝑃 𝑥 (Λ2𝑥 ) → 1, and 𝑍 is r.s. if 𝑍 (𝑅)/𝑅 → 𝑃 0. Thus we have only to show (b) ⇒ (c) and (c) ⇒ (d). First, we show the latter. Proof of (c) ⇒ (d) (adapted from [63]). Recall (c) implies that 𝜇(𝑥) = 𝑜( 𝐴(𝑥)/𝑥), 𝐸 [𝑋; |𝑋 | ≤ 𝑥] ∼ 𝐴(𝑥), 𝐴(𝑥) is s.v. (normalised), and 𝑆 𝑛 /𝐵𝑛 → 𝑃 1 with 𝐵𝑛 defined by 𝐵𝑛 = 𝑛𝐴(𝐵𝑛 ) for large 𝑛. In particular, it follows that for each 𝑡 ≥ 0, 𝑆 ⌊𝑛𝑡 ⌋ /𝐵𝑛 → 𝑡 in probability and it suffices to show the tightness (with respect to the uniform topology of paths) of the law of the continuous process on [0, 𝑀] obtained by the linear interpolation of the points (𝑛−1 𝑘, 𝐵−1 𝑛 𝑆 𝑘 ) ∈ [0, 𝑀] × R, 0 ≤ 𝑘 ≤ 𝑀𝑛. To this end, it suffices to show that for any 𝜀 > 0, one can choose 𝛿 > 0 so that (A.34) 𝑃0 max |𝑆 𝑘 | > 𝜀𝐵𝑛 −→ 0 (𝑛 → ∞) 1≤𝑘 ≤ 𝛿𝑛
(cf. [7, Theorem 2.8.3]). For the proof of (A.34), we introduce the random variables 𝜃 𝑛, 𝛿 (𝑘) = 1(|𝑋 𝑘 | ≤ 𝛿𝐵𝑛 ), 𝑆 𝑘′ =
𝑘 ∑︁
𝑋 𝑗 𝜃 𝑛, 𝛿 ( 𝑗) − 𝐸 [𝑋𝜃 𝑛, 𝛿 ] ,
𝑗=1
where 𝜃 𝑛, 𝛿 stands for 𝜃 𝑛, 𝛿 (1). Then the condition (A.33) is written as (A.35) 𝐸 [𝑋𝜃 𝑛, 𝛿 ] ∼ 𝐵𝑛 /𝑛, 𝐸 [1 − 𝜃 𝑛, 𝛿 ] = 𝑜(1/𝑛), 𝐸 [𝑋 2 𝜃 𝑛, 𝛿 ] = 𝑜(𝐵2𝑛 /𝑛). Í Writing 𝑆 𝑘 = 𝑆 𝑘′ + 𝑘 𝐸 [𝑋𝜃 𝑛, 𝛿 ] + 𝑘𝑗=1 𝑋 𝑗 [1 − 𝜃 𝑛, 𝛿 ( 𝑗)], from the first estimate of (A.35) one infers that if 𝛿 < 𝜀/3, for all 𝑛 large enough 𝜀 ′ ∃ 𝑃 max |𝑆 𝑘 | > 𝜀𝐵𝑛 ≤ 𝑃 1 ≤ 𝑘 ≤ 𝛿𝑛, 𝜃 𝑛, 𝛿 (𝑘) = 0 +𝑃 max |𝑆 𝑘 | > 𝐵𝑛 . 1≤𝑘 ≤ 𝛿𝑛 1≤𝑘 ≤ 𝛿𝑛 2 The second (resp. third) estimate of (A.35) implies that the first (resp. second) probability on the RHS tends to zero. Thus (A.34) has been verified. Proof of (b) ⇒ (c). Here we need to apply the following implications (∗∗)
lim 𝑃 𝑥 (Λ2𝑥 ) = 1 =⇒ lim 𝑃[𝑆 𝑛 > 0] = 1 =⇒ 𝜏1 is r.s.,
the first implication following from (2.22) and the second from Theorem A.3.1. Below we denote by (𝜏𝑛 , 𝐻𝑛 ) 𝑛=0,1,2,... the strict ascending ladder process associated with the r.w. 𝑆 (𝜏0 = 𝐻0 = 0, 𝜏1 = 𝜎[1,∞) , 𝐻1 = 𝑍; see the beginning of Section A.3 for 𝑛 > 1). We shall write 𝜏(𝑛) for 𝜏𝑛 when it appears as a subscript like 𝐻𝑛 = 𝑆 𝜏 (𝑛) . ∫𝑡 Suppose (b) holds. Then 𝜏1 is r.s. by (∗∗). Hence both ℓ(𝑡) := 0 𝑃[𝜏1 > 𝑠] d𝑠 and ∫𝑡 ℓ ∗ (𝑡) = 0 𝑃[𝑍 > 𝑠] d𝑠 are normalised s.v. functions. Let s (𝑡) and w (𝑡) be continuous increasing functions of 𝑡 ≥ 0 such that s (𝑡) = 𝑡ℓ( s (𝑡)) and w (𝑡) = 𝑡ℓ ∗ ( w (𝑡)) for all 𝑡 large enough, and put 𝐵(𝑡) = w ( s−1 (𝑡)). Then it follows that s, w and 𝐵 are all regularly varying at infinity with index one, and that
266
Appendix
𝜏𝑛 /s𝑛 −→ 𝑃 1,
𝐻𝑛 /w𝑛 −→ 𝑃 1 and
𝐻𝑛 /𝐵 𝜏 (𝑛) = [𝐻𝑛 /w𝑛 ] · [𝐵s (𝑛) /𝐵 𝜏 (𝑛) ],
where we write s𝑛 , w𝑛 and 𝐵𝑛 for s (𝑛), w (𝑛) and 𝐵(𝑛). Hence 𝐻𝑛 /𝐵 𝜏 (𝑛) −→ 1 in probability.
(A.36)
We are going to show that for any 𝜀 > 0 there exists a constant 𝛿 ∈ (0, 1) such that, as 𝑛 → ∞, (A.37) 𝑃 |𝑆 𝑘 /𝐵 𝑘 − 1| > 𝜀 for s𝑛 ≤ 𝑘 ≤ s𝑛/ 𝛿 −→ 0, which is stronger than (c). We write 𝜏𝑟 or 𝜏(𝑟) for 𝜏 ⌊𝑟 ⌋ for positive real number 𝑟 and 𝜉 𝑛 ≺ 𝜂 𝑛 when two sequences of random variables (𝜉 𝑛 ) and (𝜂 𝑛 ) satisfy 𝑃[𝜉 𝑛 < 𝜂 𝑛 ] → 1. Given 𝛿 ∈ (0, 1) one observes the following obvious relations 𝑆 2 ≺ 𝑆 𝜏 ( 𝛿𝑛) − 𝜀1 w𝑛 , 𝜏𝛿 2 𝑛 < 𝜏𝛿𝑛 ≺ s𝑛 < s𝑛/ 𝛿 ≺ 𝜏𝑛/ 𝛿 2 and (♯) 𝜏 ( 𝛿 𝑛) 𝑆 𝜏 (𝑛/ 𝛿 2 ) ≺ 𝑆 𝜏 ( 𝛿𝑛) + 𝜀2 w𝑛 , where 𝜀1 = 2−1 (𝛿 − 𝛿2 ), 𝜀2 = 2(𝛿−2 − 𝛿). By the first condition of (b) the probability of the r.w. 𝑆¯ 𝑘 := 𝑆 𝜏 ( 𝛿𝑛)+𝑘 − 𝑆 𝜏 ( 𝛿𝑛) (𝑘 = 0, 1, . . .) exiting the interval [−𝜀1 w𝑛 , 𝜀2 w𝑛 ] on the upper side approaches unity as 𝑛 → ∞. Combined with the relation (♯) above and the trivial fact that 𝑆 𝑘 < 𝑆 𝜏 (𝑛/ 𝛿 2 ) for 𝑘 < 𝜏𝑛/ 𝛿 2 , this yields 𝑃[𝑆 𝜏 ( 𝛿 2 𝑛) < 𝑆 𝑘 < 𝑆 𝜏 (𝑛/ 𝛿 2 ) for 𝜏𝛿𝑛 ≤ 𝑘 < 𝜏𝑛/ 𝛿 2 ] −→ 1. Here one can replace the range of 𝑘 by s𝑛 ≤ 𝑘 ≤ s𝑛/ 𝛿 , for which 𝑆 𝜏 ( 𝛿 2 𝑛) 𝐵𝑘
≥
𝑆 𝜏 ( 𝛿 2 𝑛) w 𝛿2 𝑛
·
w 𝛿2 𝑛
P
−→ 𝛿3
𝐵s (𝑛/ 𝛿)
𝑆 𝜏 (𝑛/ 𝛿 2 ) and
𝐵𝑘
≤
𝑆 𝜏 (𝑛/ 𝛿 2 ) w𝑛/ 𝛿 2
·
w𝑛/ 𝛿 2
𝐵 s𝑛
P
−→
1 . 𝛿2
Consequently, taking 𝛿 so close to 1 that 𝛿−2 − 𝛿3 < 𝜀, one finds (A.37) to be true. □
A.5 Some Elementary Facts A.5.1 An Upper Bound of the Tail of a Trigonometric Integral Let 𝐺 (𝑥) be a right continuous function of bounded variation on 𝑥 ≥ 0 such that 𝐺 (𝑥) → 0 (𝑥 → ∞). Then ∫ ∞ ∫ ∞ 1 cos 𝜃𝑦 ≤ |𝐺 (𝑥)| 𝐺 (𝑦) d𝑦 + |d𝐺 (𝑦)| ; 𝜃 sin 𝜃𝑦 𝑥 𝑥 in particular if 𝐸 |𝑋 | < ∞,
A.5 Some Elementary Facts
∫
𝑥
∞
𝑦𝜇(𝑦)
267
cos 𝜃𝑦 sin 𝜃𝑦
2 4𝑚(𝑥) d𝑦 ≤ [𝑥𝜇(𝑥) + 𝜂(𝑥)] ≤ . 𝜃 𝜃𝑥
(A.38)
We consider the bound of the sine transforms only; the other one is similar. The first inequality follows from the identity ∫ ∫ ∞ 1 1 ∞ cos 𝜃𝑦 d𝐺 (𝑦). 𝐺 (𝑦) sin 𝜃𝑦 d𝑦 = 𝐺 (𝑥) cos 𝜃𝑥 + 𝜃 𝜃 𝑥 𝑥 ∫∞ ∫∞ Applying it with 𝐺 (𝑥) = 𝑥𝜇(𝑥) and observing 𝑥 |d(𝑦𝜇(𝑦)| ≤ 𝜂(𝑥)+ 𝑥 𝑦 d(−𝜇(𝑦)) = 2𝜂(𝑥) + 𝑥𝜇(𝑥) lead to the first inequality of (A.38). The second one follows from 𝑥 2 𝜇(𝑥) ≤ 2𝑐(𝑥).
A.5.2 A Bound of an Integral If ℎ(𝑡), 𝑡 ≥ 0, is continuous and increasing, vanishes at 𝑡 = 0, and satisfies ℎ(𝑡) d𝑡 ≤ 𝐶𝑡𝑑ℎ(𝑡), then for any 𝑠 > 0 and 𝜀 > 0, ∫ 𝑠 𝜀ℎ(𝑡) d𝑡 ≤ 𝐶 arctan(ℎ(𝑠)/𝜀), 2 2 0 𝜀 + ℎ (𝑡) 𝑡 since by the assumption the integral above is not larger than 𝐶
∫
ℎ(𝑠)/𝜀 0
𝑑ℎ/[1 + ℎ2 ].
A.5.3 On the Green Function of a Transient Walk Let 𝐹 be transient so that we have the Green function 𝐺 (𝑥) =
∞ ∑︁
𝑃[𝑆 𝑛 = 𝑥] < ∞.
𝑛=0
For 𝑦 ≥ 0, 𝑥 ∈ Z it holds that 𝐺 (𝑦 − 𝑥) − 𝑔𝛺 (𝑥, 𝑦) =
∞ ∑︁
𝑃 𝑥 [𝑆𝑇 = −𝑤]𝐺 (𝑦 + 𝑤).
𝑤=1
According to the Feller–Orey renewal theorem [31, Section XI.9], lim | 𝑥 |→∞ 𝐺 (𝑥) = 0 (under 𝐸 |𝑋 | = ∞), showing that the RHS above tends to zero as 𝑦 → ∞ uniformly in 𝑥 ∈ Z; in particular 𝑔𝛺 (𝑥, 𝑥) → 𝐺 (0) = 1/𝑃0 [𝜎0 = ∞] as 𝑥 → ∞. It also follows that 𝑃 𝑥 [𝜎0 < ∞] = 𝐺 (−𝑥)/𝐺 (0) → 0 (|𝑥| → ∞).
References
1. L. Alili and R. Doney, Wiener–Hopf factorization revisited and some applications, Stochastics Stochastics Rep. 66, 87–102 (1999). 2. B. Belkin, A limit theorem for conditioned recurrent random walk attracted to a stable law, Ann. Math. Statist., 42, 146–163 (1970). 3. B. Belkin, An invariance principle for conditioned recurrent random walk attracted to a stable law, Zeit. Wharsch. Verw. Gebiete 21, 45–64 (1972). 4. J. Bertoin, Lévy Processes, Cambridge Univ. Press, Cambridge (1996). 5. J. Bertoin and R.A. Doney, On conditioning a random walk to stay nonnegative, Ann. Probab. 22, no. 4, 2152–2167 (1994). 6. J. Bertoin and R.A. Doney, Spitzer’s condition for random walks and Lévy processes, Ann. Inst. Henri Poincaré 33, 167–178 (1997). 7. P. Billingsley, Convergence of Probability Measures, John Wiley and Sons, Inc. NY. (1968). 8. N.H. Bingham, G.M. Goldie and J.L. Teugels, Regular Variation, Cambridge Univ. Press (1989). 9. R. Blumenthal, R. Getoor and D. Ray, On the distribution of first exits for symmetric stable processes, Trans. Amer. Math. Soc. 99, 540–554 (1961). 10. E. Bolthausen, On a functional central limit theorem for random walks conditioned to stay positive. Ann. Probab. 4, 480–485 (1976). 11. A. Bryn-Jones and R.A. Doney, A functional central limit theorem for random walk conditioned to stay non-negative, J. London Math. Soc. (2) 74 244–258 (2006). 12. F. Caravenna, A local limit theorem for random walks conditioned to stay positive, Probab. Theory Related Fields 133, 508–530 (2005). 13. F. Caravenna and L. Chaumont, Invariance principles for random walks conditioned to stay positive, Ann. Inst. Henri Poincaré- Probab et Statist. 44, 170–190 (2008). 14. F. Caravenna and R.A. Doney, Local large deviations and the strong renewal theorem, arXiv:1612.07635v1 [math.PR] (2016). 15. Y.S. Chow, On the moments of ladder height variables, Adv. Appl. Math. 7, 46–54 (1986). 16. K.L. Chung, A Course in Probability Theory, 2nd ed. Academic Press (1970). 17. R.A. Doney, A note on a condition satisfied by certain random walks, J. Appl. Probab. 14, 843–849 (1977). 18. R.A. Doney, On the exact asymptotic behaviour of the distribution of ladder epochs, Stoch. Proc. Appl. 12, 203–214 (1982). 19. R.A. Doney, Conditional limit theorems for asymptotically stable random walks, Z. Wahrsch. Verw. Gebiete 70, 351–360 (1985). 20. R.A. Doney, Spitzer’s condition and ladder variables for random walks, Probab. Theor. Rel. Fields 101, 577–580 (1995). 21. R.A. Doney, One-sided large deviation and renewal theorems in the case of infinite mean, Probab. Theor. Rel. Fields 107, 451–465 (1997). 22. R.A. Doney, Fluctuation Theory for Lévy Processes, Lecture Notes in Math. 1897, Springer, Berlin (2007). 23. R.A. Doney, Local behaviour of first passage probabilities, Probab. Theor. Rel. Fields 152, 559–588 (2012). © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8
269
270
References
24. R.A. Doney and R.A. Maller, The relative stability of the overshoot for Lévy processes, Ann. Probab. 30, 188–212 (2002). 25. R. Durrett, Conditioned limit theorems for some null recurrent Markov processes, Ann. Probab. 6, 798–828 (1978). 26. E.B. Dynkin, Some limit theorems of independent random variables with infinite mathematical expectation. Select. Transl. Math. Statist. Prob. 1, 171–178 (1961). 27. D.J. Emery, On a condition satisfied by certain random walks, Zeit. Wharsch. Verw. Gebiete 31, 125–139 (1975). 28. K.B. Erickson, Strong renewal theorems with infinite mean, Trans. Amer. Math. Soc. 151, 263–291 (1970). 29. K.B. Erickson, The strong law of large numbers when the mean is undefined, Trans. Amer. Math. Soc. 185, 371–381 (1973). 30. S. Ethier and T. Kurtz, Markov Processes: Characterization and Convergence, John Wiley and Sons. Inc. Hoboken, NJ. (1986). 31. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, 2nd ed., John Wiley and Sons, Inc. NY. (1971). 32. J.B.G. Frenk, The behavior of the renewal sequence in case the tail of the time distribution is regularly varying with index −1, Advances in Applied Probability 14, 870–884 (1982). 33. A. Garsia and J. Lamperti, A discrete renewal theorem with infinite mean, Comment. Math. Helv. 37, 221–234 (1962/3). 34. P.E. Greenwood, E. Omey and J.I. Teugels, Harmonic renewal measures, Z. Wahrsch. verw. Geb. 59, 391–409 (1982). 35. P. Griffin and T. McConnell, Gambler’s ruin and the first exit position of random walk from large spheres, Ann. Probab. 22, 1429–1472 (1994). 36. V. Gnedenko and A.N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables, Addison-Wesley, Reading, Mass. (1954) [Russian original 1949]. 37. G. Herglotz, Über Potenzreihen mit positivem reellen Teil im Einheitskreis, Ber. Verh. Sächs. Akad. Wiss. Leipzig. Math. Nat. Kl. 63, 501–511 (1911). 38. D.L. Iglehart, Functional central limit theorems for random walks conditioned to stay positive. Ann. Probab. 2, 608–619 (1974). 39. H. Kesten, On a theorem of Spitzer and Stone and random walks with absorbing barriers, Illinois J. Math. 5, 246–266 (1961). 40. H. Kesten, Random walks with absorbing barriers and Toeplitz forms, Illinois J. Math. 5, 267–290 (1961). 41. H. Kesten, Ratio limit theorems II, Journal d’Analyse Math. 11, 323–379 (1963). 42. H. Kesten, Hitting probabilities of single points for processes with stationary independent increments. Mem. Amer. Math. Soc. No. 93, Providence (1969). 43. H. Kesten, Problem 5716, Amer. Math. Monthly 77, 197 (1970). 44. H. Kesten, The limit points of a normalized random walk, Ann. Math. Statist. 41, 1173–1205 (1970). 45. H. Kesten and R.A. Maller, Ratios of trimmed sums and other statistics, Ann. Probab. 20, 1805–1842 (1992). 46. H. Kesten and R.A. Maller, Infinite limits and infinite limit points of random walks and trimmed sums, Ann. Probab. 22, 1473–1513 (1994). 47. H. Kesten and R.A. Maller, Divergence of a random walk through deterministic and random subsequences, J. Theor. Probab. 10, 395–427 (1997). 48. H. Kesten and R.A. Maller, Stability and other limit laws for exit times of random walks from a strip or a half line, Ann. Inst. Henri Poincaré 35, 685–734 (1999). 49. H. Kesten and F. Spitzer, Ratio limit theorems I, Journal d’Analyse Math. 11, 285–322 (1963). 50. A. Kuznetsov, A.E. Kyprianou, J.C. Pardo and A.R. Watson, The hitting time for a stable process, Electron. J. Probab. 19 no. 30, 1–26 (2014). 51. A.E. Kyprianou, V.M. Rivero and W. 
Satitkanitkul, Conditioned real self-similar Markov processes. arXiv:1510.01781v1 (2015). 52. M. Loéve, Probability Theory, 3rd ed., Van Nostrand, New York (1963). 53. R.A. Maller, Relative stability, characteristic functions and stochastic compactness, J. Austral. Math. Soc. (Series A) 28, 499–509 (1979). 54. S.V. Nagaev, The renewal theorem in the absence of power moments, Theory Probab. Appl. 56 no. 1, 166–175 (2012). 55. D. Ornstein, Random walks I and II. Trans. Amer. Math. Soc. 138, 1–43 and 45–60 (1969). 56. H. Pantí, On Lévy processes conditioned to avoid zero, preprint: arXiv:1304.3191v1 [math.PR] (2013).
References
271
57. G. Peskir, The law of the hitting times to points by a stable Lévy process with no negative jumps, Electronic Commun. Probab. 13, 653–659 (2008). 58. E.J.G. Pitman, On the behaviour of the characteristic function of a probability distribution in the neighbourhood of the origin, J. Amer. Math. Soc. (A) 8, 422–43 (1968). 59. S.C. Port and C.J. Stone, Hitting time and hitting places for non-lattice recurrent random walks, Journ. Math. Mech. 17, no. 1, 35–57 (1967). 60. S.C. Port and C.J. Stone, Infinitely divisible processes and their potential theory I & II, Ann. Inst. Fourier 21, Fasc. 2, 157–275 & Fasc 4. 179–265 (1971). 61. S.C. Port, The exit distribution of an interval for completely asymmetric stable processes, Ann. Math. Statist. 41, 39–43 (1970). 62. B.A. Rogozin, Local behavior of processes with independent increments, Theory Probab. Appl. 13, 482–486 (1968). 63. B.A. Rogozin, On the distribution of the first ladder moment and height and fluctuations of a random walk, Theory Probab. Appl. 16, 575–595 (1971). 64. B.A. Rogozin. The distribution of the first hit for stable and asymptotically stable walks on an interval, Theory Probab. Appl. 17, 332–338 (1972). 65. B.A. Rogozin, Relatively stable walks, Theory Probab. Appl. 21, 375–379 (1976). 66. L.C.G. Rogers and D. Williams, Diffusions, Markov Processes and Martingales, vol. 1, 2nd ed. Cambridge University Press (2000). 67. K. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge Univ. Press, Cambridge (1999). 68. M. Shimura, A class of conditional limit theorems related to ruin problem, Ann. Probab. 11, no. 1, 40–45 (1983). 69. F. Spitzer, A combinatorial lemma and its application to probability theory, Trans. Amer. Math. Soc. 82, 323–339 (1956). 70. F. Spitzer, Hitting probabilities, J. of Math. and Mech. 11, 593–614 (1962). 71. F. Spitzer, Principles of Random Walks, 2nd ed., Springer-Verlag, NY. (1976). 72. F. Spitzer and C.J. Stone, A class of Toeplits forms and their applications to probability theory, Illinois J. Math. 4, 253–277 (1960). 73. C.J. Stone, Weak convergence of stochastic processes on semi-infinite time intervals, Proc. Amer. Math. Soc. 14, 694–696 (1963). 74. C.J. Stone, On the potential operator for one-dimensional recurrent random walks, Trans. Amer. Math. Soc. 136, 413–426 (1969). 75. M.L. Silverstein, Classification of coharmonic and coinvariant functions for a Lévy process, Ann. Probab. 8, 539–575 (1980). 76. K. Uchiyama, One dimensional lattice random walks with absorption at a point / on a half line. J. Math. Soc. Japan 63, 675–713 (2011). 77. K. Uchiyama, A note on summability of ladder heights and the distributions of ladder epochs for random walks, Stoch. Proc. Appl. 121, 1938–1961 (2011). 78. K. Uchiyama, The first hitting time of a single point for random walks, Elect. J. Probab. vol. 16, no. 71, 1960–2000 (2011). 79. K. Uchiyama, One dimensional random walks killed on a finite set, Stoch. Proc. Appl. 127, 2864–2899 (2017). 80. K. Uchiyama, Asymptotic behaviour of a random walk killed on a finite set, Potential Anal. 46(4), 689–703 (2017). 81. K. Uchiyama, On the ladder heights of random walks attracted to stable laws of exponent 1, Electron. Commun. Probab. 23, no. 23, 1–12 (2018). 82. K. Uchiyama, Asymptotically stable random walks of index 1 < 𝛼 < 2 killed on a finite set, Stoch. Proc. Appl. 129, 5151–5199 (2019). 83. K. Uchiyama, A renewal theorem for relatively stable variables, Bull. London Math. Soc. 52, issue 6, 1174–1190, Dec. (2020). 84. K. 
Uchiyama, The potential function and ladder variables of a recurrent random walk on Z with infinite variance, Electron. J. Probab. 25, article no. 153, 1–24 (2020). 85. K. Uchiyama, Estimates of potential functions of random walks on Z with zero mean and infinite variance and their applications, preprint available at: http://arxiv.org/abs/1802.09832 (2020). 86. K. Uchiyama, The two-sided exit problem for a random walk on Z with infinite variance I, preprint available at: http://arxiv.org/abs/1908.00303 (2021). 87. K. Uchiyama, The two-sided exit problem for a random walk on Z with infinite variance II, preprint available at: http://arxiv.org/abs/2102.04102 (2021).
272
References
88. V.A. Vatutin and V. Wachtel, Local probabilities for random walks conditioned to stay positive, Probab. Theory Rel. Fields 143, 177–217 (2009). 89. S. Watanabe, On stable processes with boundary conditions, J. Math. Soc. Japan 14 no. 2, 170–198 (1962). 90. H. Widom, Stable processes and integral equations, Trans. Amer. Math. Soc. 98, 430–449 (1961). 91. A. Zygmund, Trigonometric Series, vol. 2, 2nd ed., Cambridge Univ. Press (1959). 92. V.M. Zolotarev, One-Dimensional Stable Distributions, Translations of mathematical monographs, vol. 65, AMS, Providence Rhode Island (1986).
Notation Index
Constants 𝑐 𝑛 , 13, 128, 178, 179, 207, 213 𝐶Φ , 𝑐 ♯ , 13 𝜅, 125, 171, 182 𝜅 𝛼 , 58, 236 𝜅 ◦𝛼 , 236 𝜅 ♯𝛼 , 212 𝜅 𝛼,𝜌 , 185 𝑝, 𝑞 = 1 − 𝑝, 13 𝜌, 𝜌ˆ = 1 − 𝜌, 13 𝑣◦ , 9 Functions 𝑎(𝑥), 𝑎¯ † (𝑥), 12, 81 ¯ 𝛼± (𝑡), 𝛽± (𝑡), 𝛾(𝑡), 19 𝑎(𝑥), 𝑎 † (𝑥), 8, 11 𝐴(𝑥), 𝐴± (𝑥), 3, 14, 138 𝑏 ± (𝑥), 39, 69 𝐵(𝑠, 𝑡) = Γ(𝑠)Γ(𝑡)/Γ(𝑠 + 𝑡) (Beta function), 108 ˜ 𝑚(𝑥), ˜ 𝑐(𝑥), 23 ℓ♯ (𝑥) (𝛼 ≥ 1), 91 ℓ♯ (𝑥) (𝛼 ≤ 1), 107 ℓˆ♯ (𝑥), 125 ℓ ∗ (𝑥), ℓˆ∗ (𝑥), 9, 107 𝑓𝑚 (𝑡), 𝑓 ◦ (𝑡), ∫25 𝑥 [1/ 𝑓 ] ∗ (𝑥) = 1 [ 𝑓 (𝑡)𝑡] −1 d𝑡, 61 Γ(𝑡) (Gamma function) 𝑔 𝐵 (𝑥, 𝑦), 8, 136, 176, 186 𝐺 (𝑥), 136, 139, 157, 211 𝑔(𝑥, 𝑦), 8
𝐻 𝐵𝑥 (𝑦), 62, 97–99 ℎ 𝜀 (𝑥), 23, 24, 29 𝔥𝜆 (𝜁) = 𝔥𝜆( 𝛼) (𝜁), 176, 177, 181, 194 𝐾𝜌( 𝛼) (𝜉, 𝜂), 186, 194, 195, 202, 203 𝜇± (𝑥), 𝑐 ± (𝑥), 𝜂± (𝑥), 𝑚 ± (𝑥), 19 Φ(𝜃), 13 𝜓(𝑡), 19 𝔮(𝑥), 𝔮𝑡 (𝑥), 212, 214 𝑢 a (𝑥), 9, 86, 138, 172 𝑈a (𝑥), 𝑉d (𝑥), 9, 90, 111, 262, 264 u 𝐵 (𝑥), 208, 209, 214, 222 𝑣d (𝑥), 9, 172 Random Variables/Sets 𝐵(𝑅) = R \ [0, 𝑅], 102, 151, 186 Λ𝑅 , 4, 105, 181 𝑁 (𝑅) = 𝜎𝐵(𝑅) − 1, 148 𝛺 = (−∞, −1], 9, 105, 136, 176 𝜎𝐵 , 4, 8 𝑇 = 𝜎𝛺 , 105 𝑍 (𝑅), 15, 116, 166, 203 ˆ 9 𝑍, 𝑍, Special Conditions (AS), 13, 16, 17, 108, 171, 207 (C1) to (C4), 106, 143, 264 c 143 ( C3), (H), 20 (PRS), (NRS), 135
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8
273
Subject Index
almost decreasing/increasing, 10 asymptotic properties of 𝔭𝑡0 (𝜉, 𝜂) and 𝔮𝑡 (𝜉), 244 asymptotics of hitting probabilities 𝑃 𝑥 [𝜎𝑅 < 𝜎0 ], 76, 77 𝑃 𝑥 [𝜎𝑅 < 𝜎𝐵(𝑅) ], 188 𝑃 𝑥 [𝜎𝑅 < 𝑇], 139, 156, 180–182 𝑃 𝑥 [𝜎𝑅 < 𝑇 |Λ𝑅 ], 181 comparison between 𝜎𝑅 and 𝜎[𝑅,∞) , 94 dominated variation, 10, 149 duality, — in Feller’s sense, 10 escape probability one-sided, 78, 94 two-sided, 95, 96, 99, 227, 230 Feller’s duality, 10 Green’s function 𝑔 𝐵 (𝑥, 𝑦), 8 for a finite set, 208, 209 for a single point, 8 for the complement of an interval, 102, 141, 186, 187 for the negative half-line, 9, 85, 86, 136, 141, 176, 177 scaling limit, 141, 189
Green’s kernel, 𝐺 (𝑥), for a transient walk, 136, 157, 211 harmonic function, 8, 76, 111, 209 ladder epoch, 260 ladder height, 9 distribution, 97, 111, 114, 118, 122–124 strictly ascending, 9 strictly descending, 9 left/right-continuous random walk, 75 normalised s.v. function, 10, 107 n.r.s. (negatively relatively stable), 14, 135 oscillatory random walk, 7 overshoot, 15–17, 79, 257 distribution, 84, 131, 166, 203, 258 partial sums of 𝑔𝛺 , 111 potential function, 8 asymptotics, 51, 57, 62 asymptotics of the increments, 68, 72 for a finite set, 208, 209 irregular behaviour, 45 known results, 11 upper and lower bounds, 20–22
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 K. Uchiyama, Potential Functions of Random Walks in ℤ with Infinite Variance, Lecture Notes in Mathematics 2338, https://doi.org/10.1007/978-3-031-41020-8
275
276
p.r.s. (positively relatively stable), 14, 135, 264 random walk conditioned to avoid a finite set, 211, 235 renewal functions, 9, 252 asymptotics, 88, 91, 107, 123, 258, 262, 264 renewal mass functions, 9, 252 asymptotics, 85, 121, 122, 138, 172, 253, 255 Spitzer’s condition, 80, 88, 261 Spitzer’s formula, 9 strongly aperiodic random walk, 11
Subject Index
s.v. (slowly varying), 10 two-sided exit problem an upper bound of 𝑃 𝑥 (Λ𝑅 ), 111 for r.s. walks, 135 general case, 105 under 𝑚 + /𝑚 → 0, 85 undershoot, 131, 148, 257 distribution, 132, 163–166, 203, 258 uniform convergence of 𝑄 𝑛𝐵 (𝑥, 𝑦) Case 𝐵 is finite and 𝜎 2 = ∞, 212, 213, 230, 233, 242 Case 𝐵 = 𝛺, 247