

V. V. Petrov

Sums of Independent Random Variables Translated from the Russian by

A. A. Brown

Akademie-Verlag • Berlin

1975

Russian original: V. V. Petrov, Summy nezavisimykh sluchainykh velichin. Nauka, Moscow.

The English edition also appeared as Volume 82 of the series "Ergebnisse der Mathematik", Springer-Verlag, Berlin · Heidelberg · New York.

Licensed edition of Akademie-Verlag, 108 Berlin, Leipziger Str. 3-4. Distribution exclusively for the GDR and the socialist countries. © by Springer-Verlag, Berlin · Heidelberg 1975. License number: 202 · 100/415/75. Production: VEB Druckhaus "Maxim Gorki", 74 Altenburg. Order number: 762 026 6 (6238) · LSV 1075. Printed in GDR. EVP 92,-

Preface

The classic "Limit Distributions for sums of Independent Random Variables" by B. V. Gnedenko and A. N. Kolmogorov was published in 1949. Since then the theory of summation of independent variables has developed rapidly. Today a summing-up of the studies in this area, and their results, would require many volumes. The monograph by I. A. Ibragimov and Yu. V. Linnik, "Independent and Stationarily Connected Variables''', which appeared in 1965, contains an exposition of the contemporary state of the theory of the summation of independent identically distributed random variables. The present book borders on that of Ibragimov and Linnik, sharing only a few common areas. Its main focus is on sums of independent but not necessarily identically distributed random variables. I t nevertheless includes a number of the most recent results relating to sums of independent and identically distributed variables. Together with limit theorems, it presents many probabilistic inequalities for sums of an arbitrary number of independent variables. The last two chapters deal with the laws of large numbers and the law of the iterated logarithm. These questions were not treated in Ibragimov and Linnik; Gnedenko and Kolmogorov deals only with theorems on the weak law of large numbers. Thus this book may be taken as complementary to the book by Ibragimov and Linnik. I do not, however, assume that the reader is familiar with the latter, nor with the monograph by Gnedenko and Kolmogorov, which has long since become a bibliographical rarity. I therefore include a number of classical results of the theory of the summation of independent random variables. The greater part of the text is nevertheless given over to material which has not appeared in monographs on probability theory either here or abroad. The book omits from its scope such broad subjects as multidimensional limit theorems, boundary problems for sums of independent random variables, limit theorems for sums of a random number of independent terms and some others. Each chapter is accompanied by a supplement in which results that border on the basic text are stated. The text itself does not depend on


the supplements. The bibliography is not complete; it includes only material cited in the text.

It is assumed that the reader is familiar with the fundamentals of probability theory, to the extent of the material contained in Chapters 1-8 of "Kurs Teorii Veroyatnostei" by B. V. Gnedenko (A Course in Probability Theory).1 A summary of the essential results is given in Chapter I.

I hope that this book will be useful to specialists in probability theory and to students concerned with the theory of summation of independent random variables.

I express my deepest gratitude to Professors A. A. Borovkov, V. A. Egorov, I. A. Ibragimov, L. V. Osipov, and Yu. V. Prohorov, who have read the manuscript and given me many valuable suggestions. I take advantage of the occasion to express my warmest thanks to Professor Yu. V. Linnik and Professor J. Neyman for their consideration and support. I owe to them the inspiration for the writing of this book.

Leningrad, 1972

V. V. Petrov

1 Gnedenko, B. V.: The Theory of Probability, tr. B. D. Seckler, New York: Chelsea 1962.

Notation

The numbering of theorems and formulae begins anew in each chapter. When the chapter number is omitted in the parentheses enclosing a reference, the reference itself is within the chapter where it occurs. The Halmos square ∎ indicates the end of a proof. The abbreviations used are: d.f. for distribution function, c.f. for characteristic function, a.c. for almost certainly, and i.o. for infinitely often. The expression sup_x f(x) means sup_{−∞<x<∞} f(x).

The random variables X_1, X_2, ... are independent if

P(X_{k_1} < x_1, ..., X_{k_n} < x_n) = ∏_{m=1}^{n} P(X_{k_m} < x_m)

for arbitrary integers k_1, ..., k_n and all real x_1, ..., x_n. If the random variables X_1 and X_2 are independent and have the distribution functions F_1(x) and F_2(x), the sum X_1 + X_2 has the distribution function

F(x) = ∫_{−∞}^{∞} F_1(x − y) dF_2(y).

The integral on the right is said to be the convolution or the composition of the distributions F_1 and F_2, and it is denoted by F_1 * F_2. We may also consider the convolution of functions of bounded variation on the real line.
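As a numerical illustration of the convolution formula (a minimal sketch, not part of the original text, assuming NumPy and SciPy are available): for two independent Exp(1) variables the convolution F_1 * F_2 must coincide with the Gamma(2, 1) distribution function.

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    F1 = stats.expon.cdf   # d.f. of X_1
    f2 = stats.expon.pdf   # density of X_2, so dF_2(y) = f2(y) dy

    def convolution_cdf(x):
        # F(x) = integral over the real line of F_1(x - y) dF_2(y);
        # the integrand vanishes for y < 0 because X_2 >= 0
        val, _ = quad(lambda y: F1(x - y) * f2(y), 0.0, np.inf)
        return val

    for x in [0.5, 1.0, 2.0, 4.0]:
        print(x, convolution_cdf(x), stats.gamma.cdf(x, a=2))

The two printed columns agree to quadrature accuracy.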


... Let F(x) be a non-degenerate d.f. with the c.f. f(t), and choose b > 0 so that F(b) − F(−b) > 0 and the function G(x) defined below is non-degenerate. We define the function G(x) by the equality

G(x) = 0 for x ≤ −b,
G(x) = (F(x) − F(−b)) / (F(b) − F(−b)) for −b < x ≤ b,
G(x) = 1 for x > b.

It is clear that G(x) is a non-degenerate d.f. with a finite variance and c.f.

g(t) = (F(b) − F(−b))^{−1} ∫_{|x|≤b} e^{itx} dF(x).

Now the earlier portion of the proof implies that

|g(t)| ≤ 1 − εt²

for |t| ≤ δ and some positive δ and ε. Furthermore,

|f(t)| ≤ |∫_{|x|≤b} e^{itx} dF(x)| + ∫_{|x|>b} dF(x).

Therefore

|f(t)| ≤ (F(b) − F(−b))(1 − εt²) + 1 − (F(b) − F(−b)) = 1 − (F(b) − F(−b)) εt²

for |t| ≤ δ. ∎

Theorem 3. Let x_1 and x_2 (x_1 < x_2) be continuity points of the d.f. F(x) with the c.f. f(t). Then

F(x_2) − F(x_1) = (1/2π) lim_{T→∞} ∫_{−T}^{T} ((e^{−itx_1} − e^{−itx_2}) / (it)) f(t) dt.

The following uniqueness theorem is an easy consequence.

Theorem 4. Two distribution functions having the same characteristic function are identical.

There is an elementary consequence of Theorem 4. A random variable is symmetric if and only if its characteristic function is real. The necessity was proved in § 2. The sufficiency follows from the equalities

f(t) = f̄(t) = f(−t) = E e^{−itX},

which are true because the c.f. f(t) is real. The identity of the c.f. of the random variables X and −X proves the identity of the distribution functions of the corresponding random variables. ∎

Theorem 5. If the c.f. f(t) is absolutely integrable on the real line, then the corresponding d.f. F(x) has an everywhere continuous derivative p(x) = (d/dx) F(x) and, moreover,

(3.2) p(x) = (1/2π) ∫_{−∞}^{∞} e^{−itx} f(t) dt.
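A numerical sketch of (3.2) (illustrative only, assuming NumPy; the truncation of the t-range is an arbitrary choice, justified by the rapid decay of this particular c.f.): inverting the standard normal c.f. f(t) = e^{−t²/2} recovers the normal density.

    import numpy as np

    t = np.linspace(-40.0, 40.0, 200001)   # f decays fast, so this range suffices
    dt = t[1] - t[0]
    f = np.exp(-t**2 / 2)                  # c.f. of the standard normal law

    for x in [0.0, 1.0, 2.0]:
        # p(x) = (1/(2*pi)) * integral of exp(-itx) f(t) dt, by a Riemann sum
        p = (np.exp(-1j * t * x) * f).sum().real * dt / (2 * np.pi)
        exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
        print(x, p, exact)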

... I(x) = ∫_{−∞}^{∞} ∫_{−T}^{T} (e^{−it(x−u)} / (it)) dt dH(u) = ∫_{−∞}^{∞} ∫_{0}^{T} ((sin t(x − u)) / t) dt dR(u).

The integral

∫_{0}^{T} ((sin ht) / t) dt = ∫_{0}^{hT} ((sin y) / y) dy

is bounded for all T. Further, its limit as T → ∞ equals π/2 or −π/2 depending on whether h > 0 or h < 0. Completing the admissible passage to the limit under the integral sign, we obtain I(x) = π(R(x − 0) + R(x + 0)). ∎

§ 4. The convergence of sequences of distributions and characteristic functions

Let F(x), F_1(x), F_2(x), ... be bounded non-decreasing functions. The sequence {F_n(x)} converges weakly to F(x) if F_n(x) → F(x) at every point of continuity of F(x). To indicate that the sequence {F_n(x)} converges weakly to F(x) we will use the notation F_n → F. If F_n → F and F_n(−∞) → F(−∞), F_n(+∞) → F(+∞), we shall say that F_n(x) converges completely to F(x), and we write F_n ⇒ F.

Later we shall need the following variant of a theorem by Helly.

Theorem 8. Let the function g(x) be continuous and bounded on the real line. Let F(x), F_1(x), F_2(x), ... be bounded, non-decreasing functions, and let F_n ⇒ F. Then

∫ g(x) dF_n(x) → ∫ g(x) dF(x).

The following proposition is not hard to prove.

Lemma 2. If the sequence of characteristic functions {f_n(t)} converges to the c.f. f(t) for every t, the convergence is uniform in t in an arbitrary finite interval.


An immediate consequence of Theorem 8 and Lemma 2 is

Theorem 9. Let F(x), F_1(x), F_2(x), ... be distribution functions, and let f(t), f_1(t), f_2(t), ... be the corresponding characteristic functions. If F_n → F, then f_n(t) → f(t) uniformly in t in an arbitrary finite interval.

The following inverse limit theorem for c.f. is important.

Theorem 10. Let {f_n(t)} be a sequence of c.f., {F_n(x)} the corresponding sequence of d.f. If f_n(t) → f(t) for every t and if f(t) is continuous at the point t = 0, there exists a d.f. F(x), such that F_n → F. For this d.f.,

f(t) = ∫_{−∞}^{∞} e^{itx} dF(x).

The following elementary theorem is often useful.

Theorem 11. If the sequence of d.f. {F_n(x)} converges to a continuous d.f. F(x), the convergence is uniform in x (−∞ < x < ∞).

Proof. Let ε be an arbitrary positive number. The continuity of F(x) implies that there exist points ξ_1 < ξ_2 < ⋯ < ξ_m satisfying the conditions

F(ξ_1) < ε/3, F(ξ_{k+1}) − F(ξ_k) < ε/3 (k = 1, ..., m − 1), 1 − F(ξ_m) < ε/3.

Further, there exists a number n_0, such that for n > n_0 we have the inequality

|F_n(ξ_k) − F(ξ_k)| < ε/3 (k = 1, ..., m).

If ξ_k ≤ x < ξ_{k+1} (k = 1, ..., m − 1), then for n > n_0 we find that

F_n(x) − F(x) ≤ F_n(ξ_{k+1}) − F(ξ_k) = F_n(ξ_{k+1}) − F(ξ_{k+1}) + F(ξ_{k+1}) − F(ξ_k) < ε

and

F_n(x) − F(x) ≥ F_n(ξ_k) − F(ξ_{k+1}) = F_n(ξ_k) − F(ξ_k) + F(ξ_k) − F(ξ_{k+1}) > −ε.

If x < ξ_1, then |F_n(x) − F(x)| ≤ F_n(ξ_1) + F(ξ_1) < ε for n > n_0. The case when x ≥ ξ_m is similarly handled. Thus, |F_n(x) − F(x)| < ε for all x and n > n_0. ∎


Lemma 3. Let X and Y be random variables. We write F(x) = P(X < x), G(x) = P(X + Y < x). Then

(4.1) F(x − ε) − P(|Y| ≥ ε) ≤ G(x) ≤ F(x + ε) + P(|Y| ≥ ε)

for every ε > 0 and x.

Proof. The event X < x − ε implies the sum of the events X + Y < x and Y ≥ ε. Therefore P(X < x − ε) ≤ P(X + Y < x) + P(Y ≥ ε). The lefthand inequality in (4.1) follows. To prove the righthand inequality it is sufficient to note that the event X + Y < x implies the sum of the events X < x + ε and Y < −ε. ∎

Let {Y_n; n = 1, 2, ...} be a sequence of random variables. We shall say that this sequence converges in probability to the random variable Y, and we shall write Y_n →ᵖ Y, if P(|Y_n − Y| ≥ ε) → 0 for every fixed ε > 0.

Theorem 12. Let {X_n} and {Y_n} be sequences of random variables defined on a common probability space. If the sequence of d.f. {P(X_n < x)} converges weakly to the d.f. F(x) and if Y_n →ᵖ 0, then the sequence of d.f. {P(X_n + Y_n < x)} converges weakly to F(x).

Proof. Let x be an arbitrary point of continuity of the d.f. F(x). By virtue of Lemma 3, we have

P(X_n < x − ε) − P(|Y_n| ≥ ε) ≤ P(X_n + Y_n < x) ≤ P(X_n < x + ε) + P(|Y_n| ≥ ε)

for every ε > 0. Choosing ε so that x − ε and x + ε are continuity points of F(x), and letting first n → ∞ and then ε → 0, we obtain P(X_n + Y_n < x) → F(x). ∎

Under the conditions of Lemma 3, for every ε > 0 and x and for an arbitrary function H(x) the following inequality holds:

(4.2) |G(x) − H(x)| ≤ sup_x |F(x) − H(x)| + P(|Y| ≥ ε) + max{|H(x + ε) − H(x)|, |H(x − ε) − H(x)|}.

This inequality follows from Lemma 3 and the obvious relationship

|F(x ± ε) − H(x)| ≤ |F(x ± ε) − H(x ± ε)| + |H(x ± ε) − H(x)|.
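A Monte Carlo sketch of Theorem 12 (illustrative only, assuming NumPy; the distributions chosen for X_n and Y_n are arbitrary examples): the d.f. of X_n converges weakly to the standard normal F, Y_n → 0 in probability, and the empirical d.f. of X_n + Y_n approaches F.

    import numpy as np

    rng = np.random.default_rng(0)
    for n in [10, 100, 10000]:
        xn = rng.normal(0.0, 1.0 + 1.0 / n, size=100000)  # d.f. -> N(0,1)
        yn = rng.normal(0.0, 1.0 / n, size=100000)        # -> 0 in probability
        print(n, np.mean(xn + yn < 1.0))   # empirical d.f. at x = 1
    # the limit is F(1) = 0.8413... for the standard normal d.f.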


Theorem 13. Let {a_n} and {b_n} be sequences of constants, in which a_n > 0. Let the sequence of d.f. {F_n(x)} converge weakly to the non-degenerate d.f. F(x). Then the following assertions hold:

(A) If F_n(a_n x + b_n) → G(x), where G(x) is a non-degenerate d.f., then G(x) = F(ax + b), a_n → a and b_n → b. In particular, if F_n(a_n x + b_n) → F(x), then a_n → 1 and b_n → 0.

(B) If a_n → a and b_n → b, then F_n(a_n x + b_n) → F(ax + b).

Proof. We shall first prove assertion (A). Let f_n(t), f(t) and g(t) denote the c.f. of the distributions F_n(x), F(x) and G(x) respectively. Then ...

... If q_n(x) → q(x) for every x, then the distribution of the random variable (X_n − a_n)/b_n converges weakly to a distribution with the density q(x) (Okamoto [298]).

1 [x] denotes the largest integer not exceeding x.


29. Let {F_n(x); n = 1, 2, ...} be a sequence of d.f., identically equal to zero for x ≤ 0, and let {f_n(t)} be the corresponding sequence of c.f. If f_n(t) → f(t) at every point of some interval |t| < a and f(t) is continuous at the point t = 0, there exists a d.f. F(x), such that F_n(x) → F(x) at every point of continuity of F(x). The assertion remains true when the condition F_n(x) = 0 for x ≤ 0 is replaced by the weaker condition F_n(x) ≤ b e^{−c|x|} for x ≤ x_0, where b > 0, c > 0 and x_0 are some constants not depending on n (Zygmund [339]).

30. If the distribution function F(x) is defined uniquely by its moments, and if {F_n(x)} is a sequence of distribution functions for which the moments of arbitrary positive integer order converge to the corresponding moments of F(x), then F_n(x) → F(x) at every point of continuity of F(x) (Fréchet and Shohat [230]; see also [308]).

31. We can find distribution functions F(x) and G(x) such that there exists no pair of real numbers (a_0, b_0), satisfying the conditions a_0 > 0 and

(5.3) sup_x |F(a_0 x + b_0) − G(x)| = inf_{a>0, b} sup_x |F(ax + b) − G(x)|.

Let the d.f. F(x) be continuous, and let G(x) have n > 1 points of discontinuity in which G has steps of magnitude 1/n. Then there exists a unique pair (a_0, b_0), satisfying the conditions (5.3) and a_0 > 0 (Burkholder [181]).

Chapter II. Infinitely Divisible Distributions

§ 1. Definition and elementary properties of infinitely divisible distributions

A distribution function F(x) and the corresponding c.f. f(t) are said to be infinitely divisible if for every positive integer n there exists a c.f. f_n(t) such that

(1.1) f(t) = (f_n(t))^n.

... there exists a > 0, such that f(t) ≠ 0 for |t| ≤ a. In the same interval |t| ≤ a we have f_n(t) ≠ 0. Let ε be an arbitrary positive number. If |t| ≤ a, then

|f_n(t)| = |f(t)|^{1/n} = exp{(1/n) log |f(t)|} > 1 − ε

for all sufficiently large n.
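A standard illustration of the definition (1.1), added here for concreteness: the Poisson c.f. with parameter λ satisfies

f(t) = exp{λ(e^{it} − 1)} = (exp{(λ/n)(e^{it} − 1)})^n = (f_n(t))^n,

where f_n(t) is again a Poisson c.f. (with parameter λ/n); the normal and the degenerate distributions admit analogous factorizations.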


By Lemma 1 of Chapter I we have 1 − |f_n(2t)|² ≤ 4(1 − |f_n(t)|²) for every t. Therefore, for all sufficiently large n and |t| ≤ a we have

1 − |f_n(2t)| ≤ 1 − |f_n(2t)|² ≤ 4(1 − |f_n(t)|²) ...

§ 2. Canonical representation

... for every t, where ψ_n(t) is defined by equation (2.9) and

γ_n = n ∫_{−∞}^{∞} (x / (1 + x²)) dF_n(x),  G_n(x) = n ∫_{−∞}^{x} (y² / (1 + y²)) dF_n(y).

By Lemma 3 it follows from the relation ψ_n(t) → log f(t) and from the continuity of log f(t) at the point t = 0 that there exist a real constant γ and a non-decreasing bounded function G(x), for which γ_n → γ, G_n ⇒ G and log f(t) = (γ, G). ∎

Equality (2.10) is called the Lévy-Khintchine formula. It follows from Lemma 2 and Theorem 4 that if G(−∞) = 0 the representation of an infinitely divisible c.f. f(t) in the form (2.10) is unique. For the Poisson distribution with the parameters (a, b, λ) we find that γ = a + λb / (1 + b²), and G(x) = 0 if x ≤ b, G(x) = λb² / (1 + b²) if x > b. For the degenerate distribution with the growth point a, we have γ = a and G(x) = 0.

Formula (2.10) can be written in another way. We write

(2.11) σ² = G(+0) − G(−0),
L(x) = ∫_{−∞}^{x} ((1 + y²) / y²) dG(y) for x < 0,
L(x) = −∫_{x}^{+∞} ((1 + y²) / y²) dG(y) for x > 0.

The function L(x), defined on the entire real line except at the point zero, is non-decreasing in (−∞, 0) and in (0, +∞) and it satisfies the conditions L(−∞) = 0, L(+∞) = 0. It is continuous at precisely those points of its domain of definition at which G(x) is continuous. For every finite δ > 0 we have ∫′_{−δ}^{δ} x² dL(x) < ∞; here and from now on the symbol ∫′ signifies that the point zero is excluded from the domain of integration.
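A worked example of (2.11), added for concreteness: for the Poisson distribution with the parameters (a, b, λ) considered above, σ² = G(+0) − G(−0) = 0, and since G(x) has a single jump of magnitude λb²/(1 + b²) at x = b,

L(x) = −((1 + b²)/b²) · (λb²/(1 + b²)) = −λ for 0 < x ≤ b, and L(x) = 0 otherwise;

the Lévy function thus assigns mass λ to the single point b.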


Conversely, an arbitrary non-negative constant σ², a real constant γ, and a function L(x) with the properties listed above define, by means of the formula

(2.12) log f(t) = iγt − (σ²t²/2) + ∫_{−∞}^{0} (e^{itx} − 1 − itx/(1 + x²)) dL(x) + ∫_{0}^{+∞} (e^{itx} − 1 − itx/(1 + x²)) dL(x),

the logarithm of some infinitely divisible c.f. Equality (2.12) is called Lévy's formula. The representation of an infinitely divisible c.f. in the form (2.12) is unique. We recall that if a random variable with the characteristic function f(t) ...

... (3) F_2(x) is absolutely continuous; (4) F_3(x) is absolutely continuous; (5) F_3(x) is singular, F_2(x) is continuous but not absolutely continuous, and the convolution F_2(x) * F_3(x) is absolutely continuous. For the absolute continuity of F(x) it is necessary that at least one of these five conditions be satisfied (Tucker [331]).

6. If p(x) is the density of an infinitely divisible distribution with a c.f. f(t), satisfying the condition ∫_{−∞}^{∞} |f(t)|^s dt < ∞ for every s > 0, then the set of zeros of the function p(x) is either empty or is a closed half-line (Sharpe [321]).

1 In other words, the Lévy-Khintchine spectral function G(x) of the distribution F(x) is not continuous at the point x = 0.

Chapter III. Some Inequalities for the Distributions of Sums of Independent Random Variables

§ 1. Concentration functions

The concentration function Q(X; λ) of a random variable X is defined by the equality

Q(X; λ) = sup_x P(x ≤ X ≤ x + λ)

for every λ ≥ 0. It is clear that Q(X; λ) is a non-decreasing function of λ, and that it satisfies the inequalities 0 ≤ Q(X; λ) ≤ 1 for every λ ≥ 0. We shall prove some assertions about the concentration function which will be useful later.

Lemma 1. If X and Y are independent random variables, then

Q(X + Y; λ) ≤ min{Q(X; λ), Q(Y; λ)}

for every λ ≥ 0.

Proof. Let F_U(x) denote the distribution function of the random variable U. Writing Z = X + Y, we obtain

F_Z(x + λ) − F_Z(x) = ∫_{−∞}^{∞} (F_X(x + λ − y) − F_X(x − y)) dF_Y(y)

and

Q(Z; λ) ≤ Q(X; λ) ∫_{−∞}^{∞} dF_Y(y) = Q(X; λ).

Similarly Q(Z; λ) ≤ Q(Y; λ). ∎
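A Monte Carlo sketch of Lemma 1 (illustrative only, assuming NumPy; the grid-based estimate of Q is a crude approximation): for independent X ~ N(0, 1) and Y uniform on (−3, 3), the estimated Q(X + Y; λ) should not exceed min{Q(X; λ), Q(Y; λ)}.

    import numpy as np

    rng = np.random.default_rng(1)

    def conc(sample, lam, grid):
        # crude estimate of Q(X; lam) = sup over x of P(x <= X <= x + lam)
        return max(np.mean((sample >= x) & (sample <= x + lam)) for x in grid)

    x = rng.normal(size=200000)
    y = rng.uniform(-3.0, 3.0, size=200000)
    grid = np.linspace(-4.0, 4.0, 161)
    lam = 0.5
    print(conc(x, lam, grid), conc(y, lam, grid), conc(x + y, lam, grid))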

Lemma 2. For every non-negative a and λ we have the inequality Q(X; aλ) ≤ ([a] + 1) Q(X; λ).

It is not difficult to verify that

H(x) = ∫_{−∞}^{∞} e^{itx} h(t) dt.

We denote by F(x) the distribution function of the random variable X. For every real y and a > 0, we have

∫_{−∞}^{∞} H(a(x − y)) dF(x) = (1/a) ∫_{−a}^{a} e^{−iyu} h(u/a) (∫_{−∞}^{∞} e^{iux} dF(x)) du = (1/a) ∫_{−a}^{a} e^{−iyu} h(u/a) f(u) du,

whence

∫_{−∞}^{∞} H(a(x − y)) dF(x) ≤ (1/a) ∫_{−a}^{a} |f(t)| dt.

From the equality sin x = x − (x³/6) cos ϑx, valid for every real x with |ϑ| ≤ 1, we infer that

(1.3) (sin x)/x ≥ 95/96 for |x| ≤ 1/4.

If 0 < aλ ≤ 1, then

min_{|x|≤λ/2} H(ax) ≥ (95/96)².

Let us denote by V the symmetrized random variable corresponding to X; its c.f. is |f(t)|². Applying the preceding identity to V and using the last estimate, we bound ∫_{−a}^{a} |f(t)|² dt in terms of P(|V| ≤ 1/(2a)). By Lemma 2 we have

Q(V; 1/(2a)) ≤ ((1/(2aλ)) + 1) Q(V; λ)

for every λ > 0. Since Q(V; λ) ≤ Q(X; λ) by Lemma 1, we therefore obtain

(1.8) Q(X; λ) ≥ A (2aλ/(1 + 2aλ)) (1/a) ∫_{−a}^{a} |f(t)|² dt,

with an absolute positive constant A, for every positive λ and a. In the case where λ = 0 or a = 0, the inequality (1.8) is satisfied in an obvious way. ∎

We shall apply the lemma just proved to the derivation of inequalities for concentration functions of infinitely divisible distributions.

Theorem 1. Let the random variable X have the infinitely divisible c.f. f(t), represented in the form (2.12) of Chapter II (i.e., the Lévy representation).


Then there exist absolute positive constants A_1 and A_2, such that

(1.9) A_1 min{1; λ (σ² + ∫′_{|x|<λ} x² dL(x))^{−1/2}} exp{−∫′_{|x|≥λ} dL(x)} ≤ Q(X; λ) ≤ A_2 min{1; λ (σ² + ∫′_{|x|<λ} x² dL(x))^{−1/2}}.

... From this we derive the lower bound in (1.9). ∎


Using Theorem 1 it is easy to see that an infinitely divisible d.f. has a point of discontinuity if and only if the two conditions σ² = 0 and ∫′_{−∞}^{∞} dL(x) < ∞ are satisfied simultaneously. Thus, we have:

Theorem 2. An infinitely divisible distribution function will be continuous if and only if at least one of the conditions σ² > 0 and ∫′_{−∞}^{∞} dL(x) = ∞ is satisfied. The constant σ² and the function L(x) are here the same as those in Lévy's representation for the corresponding characteristic function.

§ 2. Inequalities for the concentration functions of sums of independent random variables

Let the random variable X have the d.f. F(x). For every λ > 0 we write

D(X; λ) = λ^{−2} ∫_{|x|≤λ} x² dF(x) + ∫_{|x|>λ} dF(x).

Let u ≥ λ > 0. Then

D(X; λ) ≥ u^{−2} ∫_{|x|≤λ} x² dF(x) + ∫_{|x|>λ} dF(x) ≥ u^{−2} ∫_{|x|≤u} x² dF(x).

Thus

(2.1) D(X; λ) ≥ u^{−2} ∫_{|x|≤u} x² dF(x) for u ≥ λ.
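As a simple illustration, not in the original text: if P(X = 1) = P(X = −1) = 1/2, then D(X; λ) = 1 for 0 < λ < 1, since only the second integral contributes, and D(X; λ) = λ^{−2} for λ ≥ 1, since only the first does; thus D(X; λ) = min(1, λ^{−2}), and (2.1) is immediate for this X.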

If X is a random variable, then, as earlier, we denote by X̃ the corresponding symmetrized random variable.

Theorem 3. Let X_1, ..., X_n be independent random variables, S_n = ∑_{k=1}^{n} X_k. Let λ_1, ..., λ_n be positive numbers, λ_k ≤ λ (k = 1, ..., n). Then there exists an absolute positive constant A, such that

(2.2) Q(S_n; λ) ≤ Aλ (∑_{k=1}^{n} λ_k² D(X̃_k; λ_k))^{−1/2}.


Proof. Let V_k(x) and v_k(t) denote respectively the d.f. and c.f. of the random variable X_k. We apply Lemma 3 to the sum S_n. Setting a = 1/λ in the inequality (1.1), we obtain

Q(S_n; λ) ≤ Aλ ∫_{|t|≤1/λ} ∏_{k=1}^{n} |v_k(t)| dt.

From the inequality 1 + x ≤ e^x, valid for every real x, it follows that |v_k(t)|² ≤ exp{−(1 − |v_k(t)|²)}. Taking into account the equality

1 − |v_k(t)|² = ∫_{−∞}^{∞} (1 − cos tx) dṼ_k(x),

where Ṽ_k(x) is the d.f. of the symmetrized random variable X̃_k, we obtain

(2.3) Q(S_n; λ) ≤ Aλ ∫_{|t|≤1/λ} exp{−(1/2) ∑_{k=1}^{n} ∫_{−∞}^{∞} (1 − cos tx) dṼ_k(x)} dt.

The function L_k(x) such that L_k(x) = Ṽ_k(x) − 1 for x > 0 and L_k(x) = 0 for x < 0 is a Lévy spectral function. We may apply Lemma 4 of Chapter II to the integral on the righthand side of (2.3), and we obtain

Q(S_n; λ) ≤ Aλ {∑_{k=1}^{n} (∫_{|x|≤λ_k} x² dṼ_k(x) + λ_k² ∫_{|x|>λ_k} dṼ_k(x))}^{−1/2} = Aλ {∑_{k=1}^{n} λ_k² D(X̃_k; λ_k)}^{−1/2}. ∎
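The equality for 1 − |v_k(t)|² used above can be checked directly (a short derivation, not in the original fragment): if X′ is an independent copy of X_k, the c.f. of X̃_k = X_k − X′ is

E e^{it(X_k − X′)} = v_k(t) v_k(−t) = v_k(t) v̄_k(t) = |v_k(t)|².

Since |v_k(t)|² is real, |v_k(t)|² = ∫_{−∞}^{∞} cos tx dṼ_k(x), and hence 1 − |v_k(t)|² = ∫_{−∞}^{∞} (1 − cos tx) dṼ_k(x).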



From Theorem 3 we derive the following result.

Theorem 4. Let X_1, ..., X_n be independent random variables, S_n = ∑_{k=1}^{n} X_k. For every positive numbers λ_1, ..., λ_n, none of which exceeds λ, we have

(2.4) Q(S_n; λ) ≤ Aλ (∑_{k=1}^{n} λ_k² P(|X̃_k| > λ_k))^{−1/2}

and

(2.5) Q(S_n; λ) ≤ Aλ (∑_{k=1}^{n} λ_k² (1 − Q(X_k; λ_k)))^{−1/2}.

To prove the inequality (2.4) it is sufficient to note that

λ_k² D(X̃_k; λ_k) = ∫_{|x|≤λ_k} x² dṼ_k(x) + λ_k² ∫_{|x|>λ_k} dṼ_k(x) ≥ λ_k² P(|X̃_k| > λ_k).


The inequality (2.5) follows from (2.4) and from the inequalities

P(|X̃_k| > λ_k) ≥ 1 − Q(X̃_k; λ_k) ≥ 1 − Q(X_k; λ_k);

the latter inequality follows from Lemma 1. ∎



Theorem 6. Let the numbers a_k and b_k be such that

P(X_k − a_k ≥ λ_k/2) ≥ b_k, P(X_k − a_k ≤ −λ_k/2) ≥ b_k (k = 1, ..., n).

Then

Q(S_n; λ) ≤ Aλ (∑_{k=1}^{n} λ_k² b_k)^{−1/2}

for every positive λ_1, ..., λ_n, none of which exceeds λ.

This theorem follows from (2.4) and the following proposition: if

P(X − a ≥ λ/2) ≥ b and P(X − a ≤ −λ/2) ≥ b

for some λ ≥ 0, then P(|X̃| ≥ λ/2) ≥ b/2.

For the proof of the latter proposition we note that we may take a = 0. If the random variable X has a non-positive median, then the equality X̃ = X − Y, where Y does not depend on X and has a distribution identical with X, implies that

P(|X̃| ≥ λ/2) ≥ P(X − Y ≤ −λ/2) ≥ P(X ≤ −λ/2) P(Y ≥ 0) ≥ b/2.

The case in which mX ≥ 0 is treated in a similar fashion. ∎

We now consider some consequences of Theorem 3. In (2.2) we set λ_k = λ (k = 1, ..., n), and obtain the inequality

(2.6) Q(S_n; λ) ≤ A (∑_{k=1}^{n} D(X̃_k; λ))^{−1/2}.

If the independent random variables X_1, ..., X_n have identical non-degenerate distributions, then D(X̃_1; λ) ≠ 0 and

(2.7) Q(S_n; λ) ≤ A (n D(X̃_1; λ))^{−1/2}

for every λ > 0. In the general case of non-identical distributions we infer from (2.6) and (2.1) that

(2.8) Q(S_n; λ) ≤ A (sup_{u≥λ} u^{−2} ∑_{k=1}^{n} ∫_{|x|≤u} x² dṼ_k(x))^{−1/2}.
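A Monte Carlo sketch of the n^{−1/2} decay in (2.7) (illustrative only, assuming NumPy; Bernoulli summands are an arbitrary choice): the products Q(S_n; λ)·√n should stay roughly constant as n grows.

    import numpy as np

    rng = np.random.default_rng(2)
    lam = 0.5
    for n in [25, 100, 400]:
        # 20000 realizations of S_n for Bernoulli(1/2) summands
        s = rng.integers(0, 2, size=(20000, n)).sum(axis=1)
        grid = np.arange(n + 1)
        q = max(np.mean((s >= x) & (s <= x + lam)) for x in grid)
        print(n, q, q * np.sqrt(n))   # q*sqrt(n) stays roughly constant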

Using the same method employed for the derivation of (2.3), we can obtain from Lemma 3 the inequality

Q(S_n; λ) ≤ Aλ ∫_{−a}^{a} exp{−(1/2) ∑_{k=1}^{n} ∫_{−∞}^{∞} (1 − cos tx) dṼ_k(x)} dt,

if 0 < aλ ≤ 1. Hence and from the obvious inequality

∫_{−∞}^{∞} (1 − cos tx) dṼ_k(x) ≥ (2t²/π²) ∫_{|x|≤π/|t|} x² dṼ_k(x) ...

§ 3. Maximum of sums of independent random variables

... Let X_1, ..., X_n be independent random variables with EX_k = 0 and EX_k² < ∞ (k = 1, ..., n), and let B_n = ∑_{k=1}^{n} EX_k². Then

(3.4) P(max_{1≤k≤n} S_k ≥ x) ≤ 2 P(S_n ≥ x − √(2B_n)).

Proof. If X is an arbitrary random variable that has a mean value, its median satisfies the inequality

EX − √(2DX) ≤ mX ≤ EX + √(2DX),

which follows from the Chebyshev inequality P(|X − EX| ≥ √(2DX)) ≤ 1/2. Here DX is the variance of the random variable X; the value DX = ∞ is admissible. In view of the conditions of the theorem, the inequality |m(S_k − S_n)| ≤ √(2DS_n) holds for all k < n, and therefore (3.1) implies (3.4). ∎
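As a numerical check, not in the original text: if X has the exponential distribution with EX = 1, then DX = 1, and the inequality gives 1 − √2 ≤ mX ≤ 1 + √2; the actual median mX = log 2 ≈ 0.69 indeed lies in this interval.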

Suppose that (k = ..., n), and that 0 < c„ ^ c„_! g ••• Cy. Then n \ 1/ m (3.5) P ( max c ^ x\ sS c £ EX* + £ c'EXf I X* \ k=1 k=m +1 / for every x > 0 and every positive integer m < n. Proof. We write Y=£{cl-ct+1)Sl + clSl 2

k

m

fc=m

I t is easy to show that (3.6)

EY = c l Z E X t + Z 4 E X t .

k=1

k=m+1

We also write Bm = {c m |£m| Si x} and

t> J max cr |iSr| < x, ck

nk

— \mSrSt-l

J

3: zl (k =m 1, ...,n).

It is clear that

(3.7)

P / max ck

^ x\ = ¿" P(Bk).

J k=m

If X is a random variable defined on a probability space (Q, 21, P), and if B € 91, P(B) > 0, then for every x we write 1 B )

=

P(iZ