Stochastic Processes: An Introduction Solutions Manual [Third Edition] 9780367657604



Stochastic Processes: An Introduction
Solutions Manual, Third Edition

Peter W. Jones and Peter Smith
School of Computing and Mathematics, Keele University, UK

Preface

The website includes answers and solutions to all the end-of-chapter problems in the textbook Stochastic Processes: An Introduction, third edition. We hope that they will prove helpful to lecturers in designing courses, and to students as a source of model examples. The original problems as numbered in the text are also included. There are obviously references to results and examples from the textbook, and the manual should be viewed as a supplement to the book. To help identify the sections and chapters, the full contents of Stochastic Processes follow this preface. Every effort has been made to eliminate misprints or errors (or worse), and the authors, who were responsible for the LaTeX code, apologise in advance for any which occur.

Peter W. Jones
Peter Smith

Keele, 2017


Contents of Stochastic Processes

Chapter 1: Some Background in Probability
1.1 Introduction. 1.2 Probability. 1.3 Conditional probability and independence. 1.4 Discrete random variables. 1.5 Continuous random variables. 1.6 Mean and variance. 1.7 Some standard discrete probability distributions. 1.8 Some standard continuous probability distributions. 1.9 Generating functions. 1.10 Conditional expectation. Problems.

Chapter 2: Some Gambling Problems
2.1 Gambler's ruin. 2.2 Probability of ruin. 2.3 Some numerical simulations. 2.4 Expected duration of the game. 2.5 Some variations of gambler's ruin (2.5.1 The infinitely rich opponent; 2.5.2 The generous gambler; 2.5.3 Changing the stakes). Problems.

Chapter 3: Random Walks
3.1 Introduction. 3.2 Unrestricted random walks. 3.3 Probability distribution after n steps. 3.4 First returns of the symmetric random walk. Problems.

Chapter 4: Markov Chains
4.1 States and transitions. 4.2 Transition probabilities. 4.3 General two-state Markov chain. 4.4 Powers of the transition matrix for the m-state chain. 4.5 Gambler's ruin as a Markov chain. 4.6 Classification of states. 4.7 Classification of chains. 4.8 A wildlife Markov chain model. Problems.

Chapter 5: Poisson Processes
5.1 Introduction. 5.2 The Poisson process. 5.3 Partition theorem approach. 5.4 Iterative method. 5.5 The generating function. 5.6 Variance for the Poisson process. 5.7 Arrival times. 5.8 Summary of the Poisson process. Problems.

Chapter 6: Birth and Death Processes
6.1 Introduction. 6.2 The birth process. 6.3 Birth process: generating function equation. 6.4 The death process. 6.5 The combined birth and death process. 6.6 General population processes. Problems.

Chapter 7: Queues
7.1 Introduction. 7.2 The single server queue. 7.3 The stationary process. 7.4 Queues with multiple servers. 7.5 Queues with fixed service times. 7.6 Classification of queues. Problems.

Chapter 8: Reliability and Renewal
8.1 Introduction. 8.2 The reliability function. 8.3 The exponential distribution and reliability. 8.4 Mean time to failure. 8.5 Reliability of series and parallel systems. 8.6 Renewal processes. 8.7 Expected number of renewals. Problems.

Chapter 9: Branching and Other Random Processes
9.1 Introduction. 9.2 Generational growth. 9.3 Mean and variance. 9.4 Probability of extinction. 9.5 Branching processes and martingales. 9.6 Stopping rules. 9.7 A continuous time epidemic. 9.8 A discrete time epidemic model. 9.9 Deterministic epidemic models. 9.10 An iterative scheme for the simple epidemic. Problems.

Chapter 10: Brownian Motion: Wiener Process
10.1 Introduction. 10.2 Brownian motion. 10.3 Wiener process as a limit of a random walk. 10.4 Brownian motion with drift. 10.5 Scaling. 10.6 First visit times. 10.7 Other Brownian motions in one dimension. 10.8 Brownian motion in more than one dimension. Problems.

Chapter 11: Computer Simulations and Projects

Contents of the Solutions Manual

Chapter 1: Some Background in Probability
Chapter 2: Some Gambling Problems
Chapter 3: Random Walks
Chapter 4: Markov Chains
Chapter 5: Poisson Processes
Chapter 6: Birth and Death Processes
Chapter 7: Queues
Chapter 8: Reliability and Renewal
Chapter 9: Branching and Other Random Processes
Chapter 10: Brownian Motion: Wiener Process

Chapter 1

Some background in probability

1.1. The Venn diagram of three events is shown in Figure 1.5 (in the text). Indicate on the diagram the following events: (a) A ∪ B; (b) A ∪ (B ∪ C); (c) A ∩ (B ∪ C); (d) (A ∩ C)^c; (e) (A ∩ B) ∪ C^c.

[Figure 1.1: five copies of the Venn diagram of A, B and C in the sample space S, with the events (a)-(e) shaded.]

1.2. In a random experiment, A, B, C are three events. In set notation write down expressions for the events: (a) only A occurs; (b) all three events A, B, C occur; (c) A and B occur but C does not; (d) at least one of the events A, B, C occurs; (e) exactly one of the events A, B, C occurs; (f) not more than two of the events occur.

(a) A ∩ (B ∪ C)^c; (b) A ∩ (B ∩ C) = A ∩ B ∩ C; (c) (A ∩ B) ∩ C^c; (d) A ∪ B ∪ C;


(e) A ∩ (B ∪ C)^c represents an event in A but in neither B nor C: therefore the answer is (A ∩ (B ∪ C)^c) ∪ (B ∩ (A ∪ C)^c) ∪ (C ∩ (A ∪ B)^c).
(f) (A ∩ B ∩ C)^c, the complement of the event that all three occur.

1.3. For two events A and B, P(A) = 0.4, P(B) = 0.5 and P(A ∩ B) = 0.3. Calculate (a) P(A ∪ B); (b) P(A ∩ B^c); (c) P(A^c ∪ B^c).

(a) From (1.1), P(A ∪ B) = P(A) + P(B) − P(A ∩ B), it follows that P(A ∪ B) = 0.4 + 0.5 − 0.3 = 0.6.
(b) Since A = (A ∩ B^c) ∪ (A ∩ B), and A ∩ B^c and A ∩ B are mutually exclusive, then P(A) = P[(A ∩ B^c) ∪ (A ∩ B)] = P(A ∩ B^c) + P(A ∩ B), so that

P(A ∩ B^c) = P(A) − P(A ∩ B) = 0.4 − 0.3 = 0.1.

(c) Since A^c ∪ B^c = (A ∩ B)^c, then

P(A^c ∪ B^c) = P[(A ∩ B)^c] = 1 − P(A ∩ B) = 1 − 0.3 = 0.7.

1.4. Two distinguishable fair dice a and b are rolled. What are the elements of the sample space? What is the probability that the sum of the face values of the two dice is 9? What is the probability that at least one 5 or at least one 3 appears?

The 36 elements of the sample space are listed in Example 1.1. The event A1, that the sum is 9, is given by

A1 = {(3, 6), (4, 5), (5, 4), (6, 3)}.

Hence P(A1) = 4/36 = 1/9. Let A2 be the event that at least one 5 or at least one 3 appears. Then, by counting the elements in the sample space in Example 1.1, P(A2) = 20/36 = 5/9.
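The counting arguments above can be confirmed by brute-force enumeration of the 36 outcomes; the following sketch (not part of the original manual) checks both answers:

```python
from fractions import Fraction
from itertools import product

# Enumerate the 36 equally likely outcomes of two distinguishable dice
# and count the events of Problem 1.4 directly.
outcomes = list(product(range(1, 7), repeat=2))

p_sum9 = Fraction(sum(1 for a, b in outcomes if a + b == 9), 36)
p_5or3 = Fraction(sum(1 for a, b in outcomes if 5 in (a, b) or 3 in (a, b)), 36)

print(p_sum9)   # 1/9
print(p_5or3)   # 5/9
```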

1.5. Two distinguishable fair dice a and b are rolled. What is the probability that the sum of the faces is not more than 6?

Let the random variable X be the sum of the faces. By counting events in the sample space in Example 1.1, P(X ≤ 6) = 15/36 = 5/12.

1.6. For the probability generating function

G(s) = (2 − s)^(−1/2),

find {p_n} and its mean.

Note that G(1) = 1. Using the binomial theorem (see the Appendix),

G(s) = (1/√2)(1 − s/2)^(−1/2) = (1/√2) Σ_{n=0}^∞ C(−1/2, n) (−s/2)^n,

where C(−1/2, n) denotes the generalised binomial coefficient, with C(−1/2, 0) = 1. Hence

p_0 = 1/√2,  p_n = (1/√2) C(−1/2, n) (−1/2)^n,  (n = 1, 2, . . .).

The mean is

µ = G′(1) = [(1/2)(2 − s)^(−3/2)]_{s=1} = 1/2.

1.7. Find the probability generating function G(s) of the Poisson distribution (see Section 1.7) with parameter α given by

p_n = e^(−α) α^n / n!,  n = 0, 1, 2, . . . .

Determine the mean and variance of {p_n} from the generating function.

Given p_n = e^(−α) α^n / n!, the generating function is given by

G(s) = Σ_{n=0}^∞ p_n s^n = Σ_{n=0}^∞ e^(−α) α^n s^n / n! = e^(−α) Σ_{n=0}^∞ (αs)^n / n! = e^(α(s−1)).

The mean and variance are given by

µ = G′(1) = [α e^(α(s−1))]_{s=1} = α,

σ² = G′′(1) + G′(1) − [G′(1)]² = [α² e^(α(s−1)) + α e^(α(s−1)) − α² e^(2α(s−1))]_{s=1} = α.

1.8. A panel contains n warning lights. The times to failure of the lights are the independent random variables T_1, T_2, . . . , T_n which have exponential distributions with parameters α_1, α_2, . . . , α_n respectively. Let T be the random variable of the time to first failure, that is

T = min{T_1, T_2, . . . , T_n}.

Show that T has an exponential distribution with parameter Σ_{j=1}^n α_j.

The probability that no warning light has failed by time t is

P(T ≥ t) = P(T_1 ≥ t ∩ T_2 ≥ t ∩ · · · ∩ T_n ≥ t)
         = P(T_1 ≥ t) P(T_2 ≥ t) · · · P(T_n ≥ t)
         = e^(−α_1 t) e^(−α_2 t) · · · e^(−α_n t) = e^(−(α_1 + α_2 + · · · + α_n)t),

which is the survivor function of an exponential distribution with parameter α_1 + α_2 + · · · + α_n.
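A quick Monte Carlo sanity check of this result (an illustrative sketch, not part of the original manual; the rates 1, 2 and 3 are arbitrary test values):

```python
import random

# The minimum of independent exponentials with rates 1, 2 and 3 should
# itself be exponential with rate 6, so its sample mean should be near 1/6.
random.seed(1)
rates = [1.0, 2.0, 3.0]
trials = 200_000
mean_T = sum(min(random.expovariate(r) for r in rates) for _ in range(trials)) / trials
print(round(mean_T, 3))  # close to 1/6 ≈ 0.167
```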

1.9. The geometric distribution with parameter p is given by

p(x) = q^(x−1) p,  x = 1, 2, . . . ,

where q = 1 − p (see Section 1.7). Find its probability generating function. Calculate the mean and variance of the geometric distribution from its pgf.

The generating function is given by

G(s) = Σ_{x=1}^∞ q^(x−1) p s^x = (p/q) Σ_{x=1}^∞ (qs)^x = (p/q) · qs/(1 − qs) = ps/(1 − qs),

using the formula for the sum of a geometric series. The mean is given by

µ = G′(1),  where  G′(s) = d/ds [ps/(1 − qs)] = p/(1 − qs) + pqs/(1 − qs)² = p/(1 − qs)²,

so that µ = [p/(1 − qs)²]_{s=1} = 1/p. For the variance,

G′′(s) = 2pq/(1 − qs)³

is required. Hence

σ² = G′′(1) + G′(1) − [G′(1)]² = 2q/p² + 1/p − 1/p² = q/p².
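The pgf results can be checked by summing the pmf directly; a sketch with the arbitrary test value p = 1/4 (so the mean should be 4 and the variance 12):

```python
from fractions import Fraction

# Sum the geometric pmf p_x = q^(x-1) p far enough out that the tail is
# negligible, and compare the mean and variance with 1/p and q/p^2.
p = Fraction(1, 4)
q = 1 - p
terms = [(x, q**(x - 1) * p) for x in range(1, 400)]
mean = sum(x * px for x, px in terms)
var = sum(x * x * px for x, px in terms) - mean**2
print(float(mean), float(var))  # 4.0 and 12.0
```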

1.10. Two distinguishable fair dice a and b are rolled. What are the probabilities that: (a) at least one 4 appears; (b) only one 4 appears; (c) the sum of the face values is 6; (d) the sum of the face values is 5 and one 3 is shown; (e) the sum of the face values is 5 or only one 3 is shown?

From the table in Example 1.1:
(a) If A1 is the event that at least one 4 appears, then P(A1) = 11/36.
(b) If A2 is the event that only one 4 appears, then P(A2) = 10/36 = 5/18.
(c) If A3 is the event that the sum of the faces is 6, then P(A3) = 5/36.
(d) If A4 is the event that the sum of the face values is 5 and one 3 is shown, then P(A4) = 2/36 = 1/18.
(e) If A5 is the event that the sum of the faces is 5 or only one 3 is shown, then P(A5) = 10/36 = 5/18.

1.11. Two distinguishable fair dice a and b are rolled. What is the expected sum of the face values? What is the variance of the sum of the face values?

Let N be the random variable representing the sum x + y, where x and y are face values of the two dice. Then

E(N) = (1/36) Σ_{x=1}^6 Σ_{y=1}^6 (x + y) = (1/36) [6 Σ_{x=1}^6 x + 6 Σ_{y=1}^6 y] = 7,

and

V(N) = E(N²) − [E(N)]² = (1/36) Σ_{x=1}^6 Σ_{y=1}^6 (x + y)² − 7²
     = (1/36) [12 Σ_{x=1}^6 x² + 2 (Σ_{x=1}^6 x)²] − 49
     = 329/6 − 49 = 35/6 = 5.833 . . . .
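The mean and variance can be verified exactly by enumeration (a check added here, not part of the original manual):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 outcomes and compute E(N) and V(N) exactly.
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
mean = Fraction(sum(sums), 36)
var = Fraction(sum(s * s for s in sums), 36) - mean**2
print(mean, var)  # 7 and 35/6
```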

1.12. Three distinguishable fair dice a, b and c are rolled. How many possible outcomes are there for the faces shown? When the dice are rolled, what is the probability that just two dice show the same face values and the third one is different?

The sample space contains 6³ = 216 elements of the form (in the order a, b, c)

S = {(i, j, k)},  (i = 1, . . . , 6; j = 1, . . . , 6; k = 1, . . . , 6).

Let A be the required event. Suppose that a and b have the same face values, which can occur in 6 ways, and that c has a different face value, which can occur in 5 ways. Hence the total number of ways in which a and b are the same but c is different is 6 × 5 = 30. The pairs b and c, and c and a, could also be the same, so that the total number of ways for the possible outcome is 3 × 30 = 90. Therefore the required probability is

P(A) = 90/216 = 5/12.

1.13. In a sample space S, the events B and C are mutually exclusive, but A and B are not. Show that

P(A ∪ (B ∪ C)) = P(A) + P(B) + P(C) − P(A ∩ (B ∪ C)).

From a well-shuffled pack of 52 playing cards a single card is randomly drawn. Find the probability that it is a club or an ace or the king of hearts.

From (1.1) (in the book),

P(A ∪ (B ∪ C)) = P(A) + P(B ∪ C) − P(A ∩ (B ∪ C))

(i)

Since B and C are mutually exclusive,

P(B ∪ C) = P(B) + P(C).   (ii)

From (i) and (ii), it follows that

P(A ∪ (B ∪ C)) = P(A) + P(B) + P(C) − P(A ∩ (B ∪ C)).

Let A be the event that the card is a club, B the event that it is an ace, and C the event that it is the king of hearts. We require P(A ∪ (B ∪ C)). Since B and C are mutually exclusive, we can use the result above. The individual probabilities are

P(A) = 13/52 = 1/4;  P(B) = 4/52 = 1/13;  P(C) = 1/52,

and since A ∩ (B ∪ C) = A ∩ B (A and C are mutually exclusive) is the ace of clubs, P(A ∩ (B ∪ C)) = 1/52. Finally

P(A ∪ (B ∪ C)) = 1/4 + 1/13 + 1/52 − 1/52 = 17/52.

1.14. Show that

f(x) = 0 (x < 0);  f(x) = 1/(2a) (0 ≤ x ≤ a);  f(x) = (1/(2a)) e^(−(x−a)/a) (x > a)

is a possible probability density function. Find the corresponding probability function. Check the density function as follows:

∫_{−∞}^∞ f(x) dx = (1/(2a)) ∫_0^a dx + (1/(2a)) ∫_a^∞ e^(−(x−a)/a) dx = 1/2 − (1/2)[e^(−(x−a)/a)]_a^∞ = 1/2 + 1/2 = 1.

The probability function is given, for 0 ≤ x ≤ a, by

F(x) = ∫_{−∞}^x f(u) du = (1/(2a)) ∫_0^x du = x/(2a),

and, for x > a, by

F(x) = ∫_0^x f(u) du = ∫_0^a (1/(2a)) du + ∫_a^x (1/(2a)) e^(−(u−a)/a) du
     = 1/2 − (1/(2a))[a e^(−(u−a)/a)]_a^x
     = 1 − (1/2) e^(−(x−a)/a).
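As a numerical check of F(x) (illustrative only; a = 2 is an arbitrary test value), a crude midpoint Riemann sum can be compared with the closed forms x/(2a) and 1 − e^(−(x−a)/a)/2 derived above:

```python
import math

a = 2.0

def f(x):
    # Density of Problem 1.14 with the test value a = 2.
    if x < 0:
        return 0.0
    if x <= a:
        return 1 / (2 * a)
    return math.exp(-(x - a) / a) / (2 * a)

def F_numeric(x, n=100_000):
    # Midpoint Riemann sum for F(x) = integral of f from 0 to x.
    dx = x / n
    return sum(f((i + 0.5) * dx) * dx for i in range(n))

print(round(F_numeric(a / 2), 4))   # 0.25, matching x/(2a) at x = a/2
print(round(F_numeric(3 * a), 4))   # 0.9323, matching 1 - e^(-2)/2
```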

1.15. A biased coin is tossed. The probability of a head is p. The coin is tossed until the first head appears. Let the random variable N be the total number of tosses including the first head. Find P(N = n), and its pgf G(s). Find the expected value of the number of tosses.

The probability that the total number of throws is n (including the head) until the first head appears is (assuming independent tosses) the product of n − 1 factors of (1 − p) and one factor of p:

P(N = n) = (1 − p)^(n−1) p,  (n ≥ 1).

The probability generating function is given by

G(s) = Σ_{n=1}^∞ (1 − p)^(n−1) p s^n = (p/(1 − p)) Σ_{n=1}^∞ [(1 − p)s]^n
     = (p/(1 − p)) · s(1 − p)/[1 − s(1 − p)] = ps/[1 − s(1 − p)],

after summing the geometric series. For the mean, we require G′(s), given by

G′(s) = p/[1 − s(1 − p)] + sp(1 − p)/[1 − s(1 − p)]² = p/[1 − s(1 − p)]².

The mean is given by µ = G′ (1) = 1/p. 1.16. The m random variables X1 , X2 , . . . , Xm are independent and identically distributed each with a gamma distribution with parameters n and α. The random variable Y is defined by Y = X1 + X2 + · · · + Xm . Using the moment generating function, find the mean and variance of Y . The probability density function for the gamma distribution with parameters n and α is f (x) =

f(x) = α^n x^(n−1) e^(−αx) / Γ(n).

It was shown in Section 1.9 that the moment generating function for Y is given, in general, by M_Y(s) = [M_X(s)]^m. For any gamma distribution X with parameters α and n, its mgf is

M_X(s) = (α/(α − s))^n.

Hence

M_Y(s) = (α/(α − s))^(nm) = (1 − s/α)^(−nm) = 1 + (nm/α)s + [nm(nm + 1)/(2α²)]s² + · · · .

Hence

E(Y) = nm/α,  V(Y) = E(Y²) − [E(Y)]² = nm(nm + 1)/α² − n²m²/α² = nm/α².
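A Monte Carlo sketch of this result (not part of the original manual; n = 2, α = 3, m = 4 are arbitrary test values, giving E(Y) = 8/3 and V(Y) = 8/9):

```python
import random

# Sum m independent Gamma(n, alpha) variables and compare the sample
# mean and variance with nm/alpha and nm/alpha^2.
random.seed(2)
n, alpha, m = 2, 3.0, 4
trials = 100_000
samples = [sum(random.gammavariate(n, 1 / alpha) for _ in range(m)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((y - mean) ** 2 for y in samples) / trials
print(round(mean, 2), round(var, 2))  # close to 8/3 ≈ 2.67 and 8/9 ≈ 0.89
```

Note that `random.gammavariate` takes a shape and a scale, so the rate α enters as scale 1/α.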

1.17. A probability generating function with parameter 0 < α < 1 is given by

G(s) = [1 − α(1 − s)] / [1 + α(1 − s)].

Find p_n = P(N = n) by expanding the series in powers of s. What is the mean of the probability function {p_n}?

Applying the binomial theorem,

G(s) = [1 − α(1 − s)] / [1 + α(1 − s)] = (1 − α)[1 + (α/(1 − α))s] / {(1 + α)[1 − (α/(1 + α))s]}

     = [(1 − α)/(1 + α)] [1 + (α/(1 − α))s] Σ_{n=0}^∞ [α/(1 + α)]^n s^n

     = [(1 − α)/(1 + α)] Σ_{n=0}^∞ [α/(1 + α)]^n s^n + [α/(1 + α)] Σ_{n=0}^∞ [α/(1 + α)]^n s^(n+1).

The summation of the two series leads to

G(s) = (1 − α)/(1 + α) + [2/(1 + α)] Σ_{n=1}^∞ [α/(1 + α)]^n s^n.

Hence

p_0 = (1 − α)/(1 + α),  p_n = 2α^n/(1 + α)^(n+1),  (n = 1, 2, . . .).

The mean is given by

µ = G′(1) = [d/ds ( (1 − α(1 − s))/(1 + α(1 − s)) )]_{s=1} = [2α/(1 + α(1 − s))²]_{s=1} = 2α.
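The coefficients can be checked numerically (a sketch with the arbitrary test value α = 0.3, so the mean should be 0.6):

```python
# The probabilities p_0 = (1-α)/(1+α) and p_n = 2αⁿ/(1+α)^(n+1) should
# sum to 1 and have mean 2α; the tail beyond n = 200 is negligible.
alpha = 0.3
p = [(1 - alpha) / (1 + alpha)]
p += [2 * alpha**n / (1 + alpha) ** (n + 1) for n in range(1, 200)]
print(round(sum(p), 6))                                 # 1.0
print(round(sum(n * pn for n, pn in enumerate(p)), 6))  # 0.6
```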

1.18. Find the moment generating function of the random variable X which has the uniform distribution

f(x) = 1/(b − a),  a ≤ x ≤ b;  f(x) = 0 for all other values of x.

Deduce E(X^n).

The moment generating function of the uniform distribution is

M_X(s) = ∫_a^b e^(xs)/(b − a) dx = (e^(bs) − e^(as))/((b − a)s) = (1/(b − a)) Σ_{n=1}^∞ [(b^n − a^n)/n!] s^(n−1).

Hence

E(X) = (b + a)/2,  E(X^n) = (b^(n+1) − a^(n+1))/((n + 1)(b − a)),  (n = 2, 3, . . .).
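The moment formula can be confirmed by direct numerical integration (illustrative only; a = 1, b = 4 and n = 3 are arbitrary test values):

```python
# Compare a midpoint Riemann sum for E(X^3) on the uniform distribution
# over [1, 4] with the closed form (b^(n+1) - a^(n+1))/((n+1)(b-a)).
a, b, n = 1.0, 4.0, 3
steps = 100_000
dx = (b - a) / steps
numeric = sum(((a + (i + 0.5) * dx) ** n) * dx / (b - a) for i in range(steps))
formula = (b ** (n + 1) - a ** (n + 1)) / ((n + 1) * (b - a))
print(round(numeric, 4), round(formula, 4))  # both 21.25
```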

1.19. A random variable X has the normal distribution with mean µ and variance σ². Find its moment generating function.

By definition,

M_X(s) = E(e^(Xs)) = (1/(σ√(2π))) ∫_{−∞}^∞ e^(sx) exp[−(x − µ)²/(2σ²)] dx
       = (1/(σ√(2π))) ∫_{−∞}^∞ exp{[2σ²xs − (x − µ)²]/(2σ²)} dx.

Apply the substitution x = µ + σ(v + σs): then

M_X(s) = exp(sµ + σ²s²/2) ∫_{−∞}^∞ (1/√(2π)) e^(−v²/2) dv = exp(sµ + σ²s²/2) × 1 = exp(sµ + σ²s²/2)

(see the Appendix for the integral). Expansion of the exponential function in powers of s gives

M_X(s) = 1 + µs + (µ² + σ²)s²/2 + · · · .

So, for example, E(X²) = µ² + σ².


1.20. Find the probability generating functions of the following distributions, in which 0 < p < 1:
(a) Bernoulli distribution: p_n = p^n (1 − p)^(1−n), (n = 0, 1);
(b) geometric distribution: p_n = p(1 − p)^(n−1), (n = 1, 2, . . .);
(c) negative binomial distribution with parameter r expressed in the form

p_n = C(r + n − 1, r − 1) p^r (1 − p)^n,  (n = 0, 1, 2, . . .),

where r is a positive integer and C(m, k) denotes the binomial coefficient. In each case find also the mean and variance of the distribution using the probability generating function.

(a) For the Bernoulli distribution,

G(s) = p_0 + p_1 s = (1 − p) + ps.

The mean is given by µ = G′(1) = p, and the variance by

σ² = G′′(1) + G′(1) − [G′(1)]² = 0 + p − p² = p(1 − p).

(b) For the geometric distribution (with q = 1 − p),

G(s) = Σ_{n=1}^∞ p q^(n−1) s^n = ps Σ_{n=0}^∞ (qs)^n = ps/(1 − qs),

summing the geometric series. The mean and variance are given by

µ = G′(1) = [p/(1 − qs)²]_{s=1} = 1/p,

σ² = G′′(1) + G′(1) − [G′(1)]² = [2pq/(1 − qs)³]_{s=1} + 1/p − 1/p² = 2q/p² + 1/p − 1/p² = (1 − p)/p².

(c) For the negative binomial distribution (with q = 1 − p),

G(s) = Σ_{n=0}^∞ C(r + n − 1, r − 1) p^r q^n s^n = p^r [1 + r(qs) + (r(r + 1)/2!)(qs)² + · · ·] = p^r/(1 − qs)^r.

The derivatives of G(s) are given by

G′(s) = rqp^r/(1 − qs)^(r+1),  G′′(s) = r(r + 1)q²p^r/(1 − qs)^(r+2).

Hence the mean and variance are given by

µ = G′(1) = rq/p,

σ² = G′′(1) + G′(1) − [G′(1)]² = r(r + 1)q²/p² + rq/p − r²q²/p² = rq/p².
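The negative binomial results can be checked by summing the pmf (a sketch with the arbitrary test values r = 5, p = 0.4, so the mean should be rq/p = 7.5 and the variance rq/p² = 18.75):

```python
from math import comb

# Sum the pmf C(r+n-1, r-1) p^r q^n far enough out that the tail is
# negligible, and compare with the pgf-derived mean and variance.
r, p = 5, 0.4
q = 1 - p
pmf = [comb(r + n - 1, r - 1) * p**r * q**n for n in range(2000)]
mean = sum(n * pn for n, pn in enumerate(pmf))
var = sum(n * n * pn for n, pn in enumerate(pmf)) - mean**2
print(round(mean, 4), round(var, 4))  # 7.5 and 18.75
```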

1.21. A word of five letters is transmitted by code to a receiver. The transmission signal is weak, and there is a 5% probability that any letter is in error independently of the others. What is the probability that the word is received correctly? The same result is transmitted a second time with the same errors in the signal. If the same word is received, what is the probability now that the word is correct?

Let A1, A2, A3, A4, A5 be the events that the letters in the word are correct. Since the events are independent, the probability that the word is correctly transmitted is

P(A1 ∩ A2 ∩ A3 ∩ A4 ∩ A5) = P(A1)P(A2)P(A3)P(A4)P(A5) = 0.95^5 ≈ 0.774.

If a letter is sent a second time, the probability that an error occurs twice is 0.05² = 0.0025. Hence the probability that the letter is correct is 0.9975. For 5 letters the probability that the word is correct is 0.9975^5 ≈ 0.988.

1.22. A binary code of 500 bits is transmitted across a weak link. The probability that any bit has a transmission error is 0.0004 independently of the others. (a) What is the probability that only the first bit fails? (b) What is the probability that the code is transmitted successfully? (c) What is the probability that at least two bits fail?

(a) Probability = 0.0004 × 0.9996^499 = 0.000328 . . . .
(b) Probability = 0.9996^500 = 0.8187 . . . ; the probability that exactly one bit fails is 500 × 0.0004 × 0.9996^499 = 0.1638 . . . .
(c) The probability is

Σ_{j=2}^500 C(500, j) 0.0004^j × 0.9996^(500−j) = 0.01750 . . . .

For any practical purposes the upper limit in the series can be replaced by 7 for sufficient accuracy.

1.23. The source of a beam of light is a perpendicular distance d from a wall of length 2a, with the perpendicular from the source meeting the wall at its midpoint. The source emits a pulse of light randomly in a direction θ, the angle between the direction of the pulse and the perpendicular, chosen uniformly in the range − tan^(−1)(a/d) ≤ θ ≤ tan^(−1)(a/d). Find the probability distribution of x (−a ≤ x ≤ a) where the pulses hit the wall. Show that its density function is given by

f(x) = d / [2(x² + d²) tan^(−1)(a/d)]

(this is the density function of a Cauchy distribution). If a → ∞, what can you say about the mean of this distribution?

Figure 1.2 shows the beam and wall. Let X be the random variable representing any displacement

[Figure 1.2: Source and beam for Problem 1.23; the wall runs from −a to a at perpendicular distance d from the source.]

between −a and x. Then

P(−a ≤ X ≤ x) = P(−a ≤ d tan θ ≤ x) = P(− tan^(−1)(a/d) ≤ θ ≤ tan^(−1)(x/d))
             = [tan^(−1)(x/d) + tan^(−1)(a/d)] / [2 tan^(−1)(a/d)],

by uniformity. The density is given by

f(x) = d/dx { [tan^(−1)(x/d) + tan^(−1)(a/d)] / [2 tan^(−1)(a/d)] } = d / [2(x² + d²) tan^(−1)(a/d)].

The mean is given by

µ = ∫_{−a}^a x d / [2(x² + d²) tan^(−1)(a/d)] dx = 0,

since the integrand is an odd function and the limits are ±a. For the infinite wall the integral defining the mean becomes divergent.

1.24. Suppose that the random variable X can take the integer values 0, 1, 2, . . . . Let p_j and q_j be the probabilities

p_j = P(X = j),  q_j = P(X > j),  (j = 0, 1, 2, . . .).

Show that, if

G(s) = Σ_{j=0}^∞ p_j s^j,  H(s) = Σ_{j=0}^∞ q_j s^j,

then (1 − s)H(s) = 1 − G(s). Show also that E(X) = H(1).

Using the series for H(s),

(1 − s)H(s) = Σ_{j=0}^∞ q_j s^j − Σ_{j=0}^∞ q_j s^(j+1)
           = q_0 + Σ_{j=1}^∞ (q_j − q_(j−1)) s^j
           = q_0 − Σ_{j=1}^∞ P(X = j) s^j
           = 1 − p_0 − Σ_{j=1}^∞ p_j s^j = 1 − G(s),

since q_j − q_(j−1) = −P(X = j) and q_0 = 1 − p_0. Note that generally H(s) is not a probability generating function. The mean of the random variable X is given by

E(X) = Σ_{j=1}^∞ j p_j = G′(1) = H(1),

differentiating the formula above.

1.25. In a lottery players can choose q numbers from the consecutive integers 1, 2, . . . , n (q < n). The player wins if r numbers (3 ≤ r ≤ q) of the player's q numbers agree with the q numbers randomly chosen from the n integers. Show that the probability of r numbers being correct is

C(q, r) C(n − q, q − r) / C(n, q),

where C(m, k) denotes the binomial coefficient. Compute the probabilities if n = 49, q = 6, r = 3, 4, 5, 6 (the UK lottery).

This is a combinatorial problem. There are C(n, q) possible combinations of choosing q numbers from n. The q − r losing numbers must come from the n − q numbers which are not randomly chosen: they can be chosen in C(n − q, q − r) ways. Also there are C(q, r) ways of choosing the r numbers from q. Hence the probability of winning in the lottery is

C(q, r) C(n − q, q − r) / C(n, q).

If n = 49, q = 6, and r = 3, 4, 5, 6, then the probabilities are given in the table:

r   formula                     exact probability   approximation
3   C(6,3) C(43,3) / C(49,6)    8815/499422         0.0177
4   C(6,4) C(43,2) / C(49,6)    645/665896          0.000969
5   C(6,5) C(43,1) / C(49,6)    43/2330636          0.0000184
6   C(6,6) C(43,0) / C(49,6)    1/13983816          0.0000000715
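The table can be reproduced in a few lines (a check added here, not part of the original manual):

```python
from fractions import Fraction
from math import comb

# P(r correct) = C(q,r) C(n-q,q-r) / C(n,q) for the UK lottery values.
n, q = 49, 6
probs = {r: Fraction(comb(q, r) * comb(n - q, q - r), comb(n, q)) for r in (3, 4, 5, 6)}
for r, pr in probs.items():
    print(r, pr, float(pr))
```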

The chance of correctly choosing 6 numbers is 1 in 13,983,816.

1.26. A count of the second edition of this book showed that it contains 181,142 Roman letters (not case sensitive; Greek not included). The table shows the frequency of the letters.

a 13011   b 4687    c 6499    d 5943    e 15273   f 5441    g 3751    h 9868    i 12827
j 561     k 1424    l 6916    m 5487    n 14265   o 11082   p 6891    q 1103    r 10756
s 12327   t 15074   u 5708    v 2742    w 3370    x 2361    y 3212    z 563

The most frequent letter is e, closely followed by t, n and a. However, since this is a mathematical textbook there is considerable distortion compared with a piece of prose, caused by the extensive use of symbols, particularly in equations. What is the probability that a letter is i? What is the probability that a letter is a vowel a, e, i, o, u? A word count shows that the word probability occurs 821 times.

From the table,

P(i) = 12827/181142 = 0.0708 . . . ,
P(vowel) = (13011 + 15273 + 12827 + 11082 + 5708)/181142 = 0.3196 . . . .

1.27. Using the table of probabilities in Example 1.9, calculate the conditional probabilities and the random variable of the conditional expectation E(Y |X).

From Example 1.9 the table of mass functions is

p(x_i, y_j)   y1     y2     y3
x1            0.25   0      0.05
x2            0.05   0.10   0.15
x3            0.05   0.25   0.10

The required conditional probabilities are:

pY(y1|x1) = p(x1, y1) / Σ_{j=1}^3 p(x1, y_j) = 0.25/0.3 = 5/6,
pY(y2|x1) = 0,  pY(y3|x1) = 0.05/0.3 = 1/6,
pY(y1|x2) = 1/6,  pY(y2|x2) = 1/3,  pY(y3|x2) = 1/2,
pY(y1|x3) = 1/8,  pY(y2|x3) = 5/8,  pY(y3|x3) = 1/4.

It follows that the conditional expectations are:

E(Y |X = x1) = Σ_{j=1}^3 y_j p(y_j|x1) = 1 × 5/6 + 2 × 0 + 3 × 1/6 = 4/3,
E(Y |X = x2) = Σ_{j=1}^3 y_j p(y_j|x2) = 1 × 1/6 + 2 × 1/3 + 3 × 1/2 = 7/3,
E(Y |X = x3) = Σ_{j=1}^3 y_j p(y_j|x3) = 1 × 1/8 + 2 × 5/8 + 3 × 1/4 = 17/8.

Hence the random variable U = E(Y |X) takes the values {4/3, 7/3, 17/8}, and

E(U) = E[E(Y |X)] = (4/3) × 0.3 + (7/3) × 0.3 + (17/8) × 0.4 = 39/20.

For comparison,

E(Y) = 1 × 0.35 + 2 × 0.35 + 3 × 0.3 = 39/20,

which agrees with E(U).

1.28. (a) Show that the moment generating function (mgf) of a N(µ, σ²) random variable is exp(µs + σ²s²/2).
(b) Use (a) to identify the distribution of Z = (X − µ)/σ where X is N(µ, σ²).
(c) Use a mgf to identify the distribution of Y = aX + b where a, b are constants and X is N(µ, σ²).
(d) A sequence X_i (i = 1, 2, . . . , n) of n independent normally distributed random variables has means µ_i and variances σ_i². Derive the distribution of Σ_{i=1}^n X_i. If Z_i are n iid standard normal random variables, what is the distribution of Σ_{i=1}^n Z_i?
(e) Consider the sequence of random variables in (d): find the distribution of Σ_{i=1}^n a_i X_i, where the a_i (i = 1, 2, . . . , n) are constants. What is the distribution of the mean of the X_i, namely Σ_{i=1}^n X_i/n?

(a) The mgf of N(µ, σ²) is (if X is the random variable)

M_X(s) = E(e^(sX)) = (1/(σ√(2π))) ∫_{−∞}^∞ exp(sx) exp[−(x − µ)²/(2σ²)] dx.

Using the substitution y = (x − µ)/σ,

M_X(s) = (1/√(2π)) ∫_{−∞}^∞ exp[(yσ + µ)s] exp(−y²/2) dy
       = exp(µs + s²σ²/2) (1/√(2π)) ∫_{−∞}^∞ exp[−(y − sσ)²/2] dy
       = exp(µs + s²σ²/2)

(see the Appendix for the value of the integral).

(b) For Z = (X − µ)/σ, the mgf is

M_Z(s) = E(e^(sZ)) = e^(−µs/σ) E(e^(sX/σ)) = e^(−µs/σ) exp(µs/σ + s²/2) = exp(s²/2),

which is the mgf of a standardised normal distribution N(0, 1).

(c) For the random variable Y = aX + b, the mgf is

M_Y(s) = E(e^((aX+b)s)) = e^(bs) E(e^(asX)) = e^(bs) exp(µas + a²σ²s²/2).

From (a) it can be deduced that Y has the normal distribution N(aµ + b, a²σ²).

(d) As in Section 1.9 the mgf of W = Σ_{i=1}^n X_i is

M_W(s) = E[exp(s Σ_{i=1}^n X_i)] = Π_{i=1}^n M_{X_i}(s) = exp[s Σ_{i=1}^n µ_i + (s²/2) Σ_{i=1}^n σ_i²].

Hence W is normal, N(Σ_{i=1}^n µ_i, Σ_{i=1}^n σ_i²). The sum of the n iid standard normal random variables Z_i is therefore, from above, N(0, n).

(e) Let U = Σ_{i=1}^n a_i X_i. Then the mgf of U is given by

M_U(s) = E[exp(s Σ_{i=1}^n a_i X_i)] = Π_{i=1}^n M_{a_i X_i}(s).

From (c), a_i X_i has the normal distribution N(a_i µ_i, a_i² σ_i²). Hence U has the normal distribution

N(Σ_{i=1}^n a_i µ_i, Σ_{i=1}^n a_i² σ_i²).

The distribution of the mean Σ_{i=1}^n X_i/n is therefore normal, given by

N(Σ_{i=1}^n µ_i/n, Σ_{i=1}^n σ_i²/n²).
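Part (d) can be illustrated by simulation (a sketch added here; the parameters are arbitrary test values):

```python
import random

# With X1 ~ N(1, 2²) and X2 ~ N(-3, 1²), the sum X1 + X2 should be
# N(-2, 5): mean 1 + (-3) = -2 and variance 2² + 1² = 5.
random.seed(3)
trials = 200_000
w = [random.gauss(1, 2) + random.gauss(-3, 1) for _ in range(trials)]
mean = sum(w) / trials
var = sum((x - mean) ** 2 for x in w) / trials
print(round(mean, 2), round(var, 2))  # close to -2.0 and 5.0
```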

1.29. (a) Let Z have a standard normal distribution; show that the mgf of Z² is (1 − 2s)^(−1/2). This is the mgf of a χ² distribution on 1 degree of freedom (χ₁²). Hence Z² has a χ₁² distribution.
(b) Let Z_1, Z_2, . . . , Z_n be a sequence of n iid standard normal random variables. Show that the mgf of

Y = Σ_{i=1}^n Z_i²

is (1 − 2s)^(−n/2), which is the mgf of a χ_n² distribution.
(c) Find the mean and variance of Y in (b).

(a) If U = Z², then the mgf of Z² is given by

M_U(s) = E[exp(sZ²)] = (1/√(2π)) ∫_{−∞}^∞ exp(sz²) exp(−z²/2) dz
       = (1/√(2π)) ∫_{−∞}^∞ exp[−(1 − 2s)z²/2] dz = (1 − 2s)^(−1/2)

(see the Appendix for the value of the integral).

(b) The result

M_Y(s) = (1 − 2s)^(−n/2)

follows as in Problem 1.28(d), since the mgf of a sum of independent random variables is the product of their mgfs.

(c) We require the derivatives of M_Y(s) in (b):

M_Y′(s) = n(1 − 2s)^(−n/2 − 1),  so that  M_Y′(0) = n,
M_Y′′(s) = n(n + 2)(1 − 2s)^(−n/2 − 2),  M_Y′′(0) = n(n + 2).

Therefore

E(Y) = M_Y′(0) = n,  σ² = M_Y′′(0) − [M_Y′(0)]² = 2n.


Chapter 2

Some gambling problems

2.1. In the standard gambler's ruin problem with total stake a and gambler's stake k, and the gambler's probability of winning at each play p, calculate the probability of ruin in the following cases: (a) a = 100, k = 5, p = 0.6; (b) a = 80, k = 70, p = 0.45; (c) a = 50, k = 40, p = 0.5. Also find the expected duration in each case.

For p ≠ 1/2, the probability of ruin u_k and the expected duration of the game d_k are given by

u_k = (s^k − s^a)/(1 − s^a),  d_k = [1/(1 − 2p)] [k − a(1 − s^k)/(1 − s^a)],

where s = (1 − p)/p.

(a) u_k ≈ 0.132, d_k ≈ 409. (b) u_k ≈ 0.866, d_k ≈ 592.
(c) For p = 1/2,

u_k = (a − k)/a,  d_k = k(a − k),

so that u_k = 0.2, d_k = 400.

2.2. In a casino game based on the standard gambler's ruin, the gambler and the dealer each start with 20 tokens and one token is bet on at each play. The game continues until one player has no further tokens. It is decreed that the probability that any gambler is ruined is 0.52 to protect the casino's profit. What should the probability that the gambler wins at each play be?

The probability of ruin is

u = (s^k − s^a)/(1 − s^a),

where k = 20, a = 40, p is the probability that the gambler wins at each play, and s = (1 − p)/p. Let r = s^20. Then u = r/(1 + r), so that r = u/(1 − u) and

s = [u/(1 − u)]^(1/20).

Finally, with u = 0.52,

p = 1/(1 + s) = (1 − u)^(1/20) / [(1 − u)^(1/20) + u^(1/20)] ≈ 0.498999.
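Case 2.1(c) is easy to confirm by simulation (an illustrative sketch, not part of the original manual):

```python
import random

# Simulate the symmetric gambler's ruin with a = 50, k = 40, p = 1/2:
# the ruin probability should be near (a - k)/a = 0.2 and the mean
# duration near k(a - k) = 400.
random.seed(5)
a, k0, p, trials = 50, 40, 0.5, 5_000
ruins, steps = 0, 0
for _ in range(trials):
    k = k0
    while 0 < k < a:
        k += 1 if random.random() < p else -1
        steps += 1
    ruins += (k == 0)
print(round(ruins / trials, 2), round(steps / trials))  # close to 0.2 and 400
```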

2.3. Find general solutions of the following difference equations:
(a) u_{k+1} − 4u_k + 3u_{k−1} = 0;
(b) 7u_{k+2} − 8u_{k+1} + u_k = 0;
(c) u_{k+1} − 3u_k + u_{k−1} + u_{k−2} = 0;
(d) p u_{k+2} − u_k + (1 − p) u_{k−1} = 0, (0 < p < 1).

(a) The characteristic equation is

m² − 4m + 3 = 0,

which has the solutions m_1 = 1 and m_2 = 3. The general solution is

u_k = A m_1^k + B m_2^k = A + 3^k B,

where A and B are any constants.

(b) The characteristic equation is

7m² − 8m + 1 = 0,

which has the solutions m_1 = 1 and m_2 = 1/7. The general solution is

u_k = A + B/7^k.

(c) The characteristic equation is the cubic equation

m³ − 3m² + m + 1 = (m − 1)(m² − 2m − 1) = 0,

which has the solutions m_1 = 1, m_2 = 1 + √2, and m_3 = 1 − √2. The general solution is

u_k = A + B(1 + √2)^k + C(1 − √2)^k.

(d) The characteristic equation is the cubic equation

pm³ − m + (1 − p) = (m − 1)(pm² + pm − (1 − p)) = 0,

which has the solutions m_1 = 1, m_2 = −1/2 + (1/2)√[(4 − 3p)/p] and m_3 = −1/2 − (1/2)√[(4 − 3p)/p]. The general solution is

u_k = A + B m_2^k + C m_3^k.

2.4. Solve the following difference equations subject to the given boundary conditions:
(a) u_{k+1} − 6u_k + 5u_{k−1} = 0, u_0 = 1, u_4 = 0;
(b) u_{k+1} − 2u_k + u_{k−1} = 0, u_0 = 1, u_{20} = 0;
(c) d_{k+1} − 2d_k + d_{k−1} = −2, d_0 = 0, d_{10} = 0;
(d) u_{k+2} − 3u_k + 2u_{k−1} = 0, u_0 = 1, u_{10} = 0, 3u_9 = 2u_8.

(a) The characteristic equation is

m² − 6m + 5 = 0,

which has the solutions m_1 = 1 and m_2 = 5. Therefore the general solution is given by u_k = A + 5^k B. The boundary conditions u_0 = 1, u_4 = 0 imply

A + B = 1,  A + 5⁴B = 0,

which have the solutions A = 625/624 and B = −1/624. The required solution is

u_k = 625/624 − 5^k/624.

(b) The characteristic equation is m² − 2m + 1 = (m − 1)² = 0, which has the repeated solution m = 1. Using the rule for repeated roots,

u_k = A + Bk.

The boundary conditions u_0 = 1 and u_{20} = 0 imply A = 1 and B = −1/20. The required solution is u_k = (20 − k)/20.

(c) This is an inhomogeneous equation. The characteristic equation is m² − 2m + 1 = (m − 1)² = 0, which has the repeated solution m = 1. Hence the complementary function is A + Bk. For a particular solution, we must try d_k = Ck². Then

d_{k+1} − 2d_k + d_{k−1} = C(k + 1)² − 2Ck² + C(k − 1)² = 2C = −2

if C = −1. Hence the general solution is d_k = A + Bk − k². The boundary conditions d_0 = d_{10} = 0 imply A = 0 and B = 10. Therefore the required solution is d_k = k(10 − k).

(d) The characteristic equation is

m³ − 3m + 2 = (m − 1)²(m + 2) = 0,

which has two solutions, m_1 = 1 (repeated) and m_2 = −2. The general solution is given by

u_k = A + Bk + C(−2)^k.

The boundary conditions imply

A + C = 1,  A + 10B + C(−2)^10 = 0,  3A + 27B + 3C(−2)⁹ = 2[A + 8B + C(−2)⁸].

The solutions of these linear equations are

A = 31744/31743,  B = −3072/31743,  C = −1/31743,

so that the required solution is

u_k = [1024(31 − 3k) − (−2)^k] / 31743.
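The solution to (d) can be verified directly against the recurrence and all three boundary conditions (a quick check, not part of the original manual):

```python
from fractions import Fraction

# u_k = [1024(31 - 3k) - (-2)^k]/31743 should satisfy
# u_{k+2} - 3u_k + 2u_{k-1} = 0 with u_0 = 1, u_10 = 0 and 3u_9 = 2u_8.
u = lambda k: Fraction(1024 * (31 - 3 * k) - (-2) ** k, 31743)

assert u(0) == 1 and u(10) == 0 and 3 * u(9) == 2 * u(8)
for k in range(1, 12):
    assert u(k + 2) - 3 * u(k) + 2 * u(k - 1) == 0
print("all conditions satisfied")
```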

2.5. Show that a difference equation of the form

a u_{k+2} + b u_{k+1} − u_k + c u_{k−1} = 0,

where a, b, c ≥ 0 are probabilities with a + b + c = 1, can never have a characteristic equation with complex roots.

The characteristic equation can be expressed in the form

am³ + bm² − m + c = (m − 1)[am² + (a + b)m − (1 − a − b)] = 0,

since a + b + c = 1. One solution is m_1 = 1, and the others satisfy the quadratic equation am² + (a + b)m − (1 − a − b) = 0. The discriminant is given by

(a + b)² + 4a(1 − a − b) = (a − b)² + 4a(1 − a) ≥ 0,

since 0 ≤ a ≤ 1.

2.6. In the standard gambler's ruin problem with equal probabilities p = q = 1/2, find the expected duration of the game given the usual initial stakes of k units for the gambler and a − k units for the opponent.


The expected duration $d_k$ satisfies $d_{k+1} - 2d_k + d_{k-1} = -2$. The complementary function is $A + Bk$, and for a particular solution try $d_k = Ck^2$. Then
\[ d_{k+1} - 2d_k + d_{k-1} + 2 = C(k+1)^2 - 2Ck^2 + C(k-1)^2 + 2 = 2C + 2 = 0 \]
if $C = -1$. Hence $d_k = A + Bk - k^2$. The boundary conditions $d_0 = d_a = 0$ imply $A = 0$ and $B = a$. The required solution is therefore $d_k = k(a-k)$.

2.7. In a gambler's ruin problem the possibility of a draw is included. Let the probabilities that the gambler wins, loses or draws against an opponent be, respectively, $p$, $p$, $1 - 2p$, $(0 < p < \frac12)$. Find the probability that the gambler loses the game given the usual initial stakes of $k$ units for the gambler and $a - k$ units for the opponent. Show that $d_k$, the expected duration of the game, satisfies
\[ p d_{k+1} - 2p d_k + p d_{k-1} = -1. \]
Solve the difference equation and find the expected duration of the game.

The difference equation for the probability of ruin $u_k$ is
\[ u_k = p u_{k+1} + (1 - 2p)u_k + p u_{k-1}, \]

or $u_{k+1} - 2u_k + u_{k-1} = 0$. The general solution is $u_k = A + Bk$. The boundary conditions $u_0 = 1$ and $u_a = 0$ imply $A = 1$ and $B = -1/a$, so that the required probability is $u_k = (a-k)/a$.

The expected duration $d_k$ satisfies $d_{k+1} - 2d_k + d_{k-1} = -1/p$. The complementary function is $A + Bk$. For the particular solution try $d_k = Ck^2$. Then
\[ C(k+1)^2 - 2Ck^2 + C(k-1)^2 = 2C = -1/p, \]
so $C = -1/(2p)$. The boundary conditions $d_0 = d_a = 0$ imply $A = 0$ and $B = a/(2p)$, so that the required solution is $d_k = k(a-k)/(2p)$.

2.8. In the changing stakes game in which a game is replayed with each player having twice as many units, $2k$ and $2(a-k)$ respectively, suppose that the probability of a win for the gambler at each play is $\frac12$. Whilst the probability of ruin is unaffected, by how much is the expected duration of the game extended compared with the original game?

With initial stakes of $k$ and $a-k$, the expected duration is $d_k = k(a-k)$. If the initial stakes are doubled to $2k$ and $2a-2k$, then the expected duration becomes, using the same formula, $d_{2k} = 2k(2a-2k) = 4k(a-k) = 4d_k$.

2.9. A roulette wheel has 37 radial slots of which 18 are red, 18 are black and 1 is green. The gambler bets one unit on either red or black. If the ball falls into a slot of the same colour, then the gambler wins one unit, and if the ball falls into the other colour (red or black), then the casino wins. If the ball lands in the green slot, then the bet remains for the next spin of the wheel, or more spins if necessary, until the ball lands on a red or black. The original bet is either returned or lost depending on whether the outcome matches the


original bet or not (this is the Monte Carlo system). Show that the probability $u_k$ of ruin for a gambler who starts with $k$ chips, with the casino holding $a-k$ chips, satisfies the difference equation
\[ 36u_{k+1} - 73u_k + 37u_{k-1} = 0. \]
Solve the difference equation for $u_k$. If the house starts with €1,000,000 at the roulette wheel and the gambler starts with €10,000, what is the probability that the gambler breaks the bank if €5,000 are bet at each play? In the US system the rules are less generous to the players: if the ball lands on green then the player simply loses. What is the probability now that the player wins given the same initial stakes? (See Luenberger (1979).)

There is the possibility of a draw (see Example 2.1). At each play the probability that the gambler wins is $p = \frac{18}{37}$. The stake is returned with probability
\[ \frac{1}{37}\cdot\frac{18}{37} + \frac{1}{37^2}\cdot\frac{18}{37} + \cdots = \frac{18}{37}\cdot\frac{1}{36} = \frac{1}{74}, \]
or the gambler loses after one or more greens, also with probability $\frac{1}{74}$ by the same argument. Hence $u_k$, the probability that the gambler loses, satisfies
\[ u_k = \frac{18}{37}u_{k+1} + \frac{1}{74}(u_k + u_{k-1}) + \frac{18}{37}u_{k-1}, \]
or
\[ 36u_{k+1} - 73u_k + 37u_{k-1} = 0. \]
The characteristic equation is
\[ 36m^2 - 73m + 37 = (m-1)(36m - 37) = 0, \]
which has the solutions $m_1 = 1$ and $m_2 = 37/36$. With $u_0 = 1$ and $u_a = 0$, the required solution is
\[ u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{37}{36}. \]
The bets are equivalent to $k = 10000/5000 = 2$ and $a = 1010000/5000 = 202$. The probability that the gambler wins is
\[ 1 - u_k = \frac{1 - s^k}{1 - s^a} = \frac{1 - s^2}{1 - s^{202}} \approx 2.23 \times 10^{-4}. \]
In the US system, $u_k$ satisfies
\[ u_k = \frac{18}{37}u_{k+1} + \frac{19}{37}u_{k-1}, \quad\text{or}\quad 18u_{k+1} - 37u_k + 19u_{k-1} = 0. \]
In this case the ratio is $s' = 19/18$. Hence the probability that the gambler wins is
\[ 1 - u_k = \frac{1 - s'^2}{1 - s'^{202}} \approx 2.06 \times 10^{-6}, \]

which is less than the previous value.

2.10. In a single trial the possible scores 1 and 2 can occur each with probability $\frac12$. If $p_n$ is the probability of scoring exactly $n$ points at some stage (the score after several trials being the sum of the individual scores in each trial), show that
\[ p_n = \tfrac12 p_{n-1} + \tfrac12 p_{n-2}. \]
Calculate $p_1$ and $p_2$, and find a formula for $p_n$. How does $p_n$ behave as $n$ becomes large? How do you interpret the result?

Let $A_n$ be the event that the score is $n$ at some stage. Let $B_1$ be the event score 1, and $B_2$ score 2, in the first trial. Then
\[ \mathrm{P}(A_n) = \mathrm{P}(A_n|B_1)\mathrm{P}(B_1) + \mathrm{P}(A_n|B_2)\mathrm{P}(B_2) = \tfrac12\mathrm{P}(A_{n-1}) + \tfrac12\mathrm{P}(A_{n-2}), \]


or $p_n = \tfrac12 p_{n-1} + \tfrac12 p_{n-2}$. Hence $2p_n - p_{n-1} - p_{n-2} = 0$. The characteristic equation is
\[ 2m^2 - m - 1 = (m-1)(2m+1) = 0, \]
which has the solutions $m_1 = 1$ and $m_2 = -\frac12$. Hence $p_n = A + B(-\frac12)^n$. The initial conditions are $p_1 = \frac12$ and $p_2 = \frac12 + \frac12\cdot\frac12 = \frac34$. Hence
\[ A - \tfrac12 B = \tfrac12, \qquad A + \tfrac14 B = \tfrac34, \]
so that $A = \frac23$, $B = \frac13$. Hence
\[ p_n = \tfrac23 + \tfrac13\left(-\tfrac12\right)^n, \qquad (n = 1, 2, \ldots). \]
As $n \to \infty$, $p_n \to \frac23$. This can be interpreted through the mean score per trial, which is $\frac32$: the running total advances by $\frac32$ on average, so in the long run a given large total is hit with probability $1/(3/2) = \frac23$.

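The recurrence and the closed form for $p_n$ can be cross-checked directly. The short sketch below (an editorial aid, using exact rationals) iterates the recurrence and compares it with $p_n = \frac23 + \frac13(-\frac12)^n$.

```python
from fractions import Fraction

half = Fraction(1, 2)
p = {1: half, 2: half + half * half}  # p_1 = 1/2, p_2 = 3/4
for n in range(3, 21):
    p[n] = half * p[n - 1] + half * p[n - 2]

closed = lambda n: Fraction(2, 3) + Fraction(1, 3) * (-half) ** n
assert all(p[n] == closed(n) for n in range(1, 21))
assert abs(float(p[20]) - 2 / 3) < 1e-6  # p_n -> 2/3
print("p_n matches 2/3 + (1/3)(-1/2)^n")
```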
2.11. In a single trial the possible scores 1 and 2 can occur with probabilities $q$ and $1-q$, where $0 < q < 1$. Find the probability of scoring exactly $n$ points at some stage in an indefinite succession of trials. Show that
\[ p_n \to \frac{1}{2-q} \]
as $n \to \infty$.

Let $p_n$ be the probability. Then
\[ p_n = q p_{n-1} + (1-q)p_{n-2}, \quad\text{or}\quad p_n - q p_{n-1} - (1-q)p_{n-2} = 0. \]
The characteristic equation is
\[ m^2 - qm - (1-q) = (m-1)[m + (1-q)] = 0, \]
which has the solutions $m_1 = 1$ and $m_2 = -(1-q)$. Hence $p_n = A + B(q-1)^n$. The initial conditions are $p_1 = q$, $p_2 = 1 - q + q^2$, which imply
\[ q = A + B(q-1), \qquad 1 - q + q^2 = A + B(q-1)^2. \]
The solution of these equations leads to $A = 1/(2-q)$ and $B = (q-1)/(q-2)$, so that
\[ p_n = \frac{1}{2-q}\left[1 - (q-1)^{n+1}\right]. \]

2.12. The probability of success in a single trial is $\frac13$. If $u_n$ is the probability that there are no two consecutive successes in $n$ trials, show that $u_n$ satisfies
\[ u_{n+1} = \tfrac23 u_n + \tfrac29 u_{n-1}. \]
What are the values of $u_1$ and $u_2$? Hence show that
\[ u_n = \frac{1}{6}\left[(3 + 2\sqrt{3})\left(\frac{1+\sqrt{3}}{3}\right)^n + (3 - 2\sqrt{3})\left(\frac{1-\sqrt{3}}{3}\right)^n\right]. \]

Let $A_n$ be the event that there have not been two consecutive successes in the first $n$ trials. Let $B_1$ be the event of success and $B_2$ the event of failure in a single trial. Then
\[ \mathrm{P}(A_n) = \mathrm{P}(A_n|B_1)\mathrm{P}(B_1) + \mathrm{P}(A_n|B_2)\mathrm{P}(B_2). \]
Now $\mathrm{P}(A_n|B_2) = \mathrm{P}(A_{n-1})$: failure will not change the probability. Also
\[ \mathrm{P}(A_n|B_1) = \mathrm{P}(A_{n-1}|B_2)\mathrm{P}(B_2) = \mathrm{P}(A_{n-2})\mathrm{P}(B_2). \]
Since $\mathrm{P}(B_1) = \frac13$ and $\mathrm{P}(B_2) = \frac23$,
\[ u_n = \tfrac29 u_{n-2} + \tfrac23 u_{n-1}, \]
where $u_n = \mathrm{P}(A_n)$. The characteristic equation is
\[ 9m^2 - 6m - 2 = 0, \]
which has the solutions $m_1 = \frac13(1+\sqrt{3})$ and $m_2 = \frac13(1-\sqrt{3})$. Hence
\[ u_n = A\frac{(1+\sqrt{3})^n}{3^n} + B\frac{(1-\sqrt{3})^n}{3^n}. \]
The initial conditions are $u_1 = 1$ and $u_2 = 1 - \frac13\cdot\frac13 = \frac89$. Therefore $A$ and $B$ are defined by
\[ 1 = \frac{A}{3}(1+\sqrt{3}) + \frac{B}{3}(1-\sqrt{3}), \]
\[ \frac{8}{9} = \frac{A}{9}(1+\sqrt{3})^2 + \frac{B}{9}(1-\sqrt{3})^2 = \frac{A}{9}(4+2\sqrt{3}) + \frac{B}{9}(4-2\sqrt{3}). \]
The solutions are $A = \frac16(2\sqrt{3}+3)$ and $B = \frac16(-2\sqrt{3}+3)$. Finally
\[ u_n = \frac{1}{6\cdot 3^n}\left[(2\sqrt{3}+3)(1+\sqrt{3})^n + (-2\sqrt{3}+3)(1-\sqrt{3})^n\right]. \]

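The closed form just derived can be checked against the recurrence numerically; the sketch below (an editorial aid, not part of the original solution) iterates $u_n = \frac23 u_{n-1} + \frac29 u_{n-2}$ from $u_1 = 1$, $u_2 = \frac89$ and compares.

```python
from math import sqrt

r3 = sqrt(3.0)

def u_closed(n):
    # stated closed form for Problem 2.12
    return ((3 + 2 * r3) * ((1 + r3) / 3) ** n
            + (3 - 2 * r3) * ((1 - r3) / 3) ** n) / 6

# recurrence u_n = (2/3)u_{n-1} + (2/9)u_{n-2}, with u_1 = 1, u_2 = 8/9
u1, u2 = 1.0, 8.0 / 9.0
for n in range(3, 15):
    u1, u2 = u2, (2.0 / 3.0) * u2 + (2.0 / 9.0) * u1
    assert abs(u2 - u_closed(n)) < 1e-12
print("closed form agrees with the recurrence")
```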
2.13. A gambler with initial capital $k$ units plays against an opponent with initial capital $a-k$ units. At each play of the game the gambler either wins one unit or loses one unit, each with probability $\frac12$. Whenever the opponent loses the game, the gambler returns one unit so that the game may continue. Show that the expected duration of the game is $k(2a-1-k)$ plays.

The expected duration $d_k$ satisfies
\[ d_{k+1} - 2d_k + d_{k-1} = -2, \qquad (k = 1, 2, \ldots, a-1). \]
The boundary conditions are $d_0 = 0$ and $d_a = d_{a-1}$, indicating the return of one unit when the opponent loses. The general solution for the duration is $d_k = A + Bk - k^2$. The boundary conditions imply
\[ A = 0, \qquad A + Ba - a^2 = A + B(a-1) - (a-1)^2, \]
so that $B = 2a-1$. Hence $d_k = k(2a-1-k)$.

2.14. In the usual gambler's ruin problem, the probability that the gambler is eventually ruined is
\[ u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{q}{p}, \quad (p \ne \tfrac12). \]
In a new game the stakes are halved, whilst the players start with the same initial sums. How does this affect the probability of losing by the gambler? Should the gambler agree to this change of rule if $p < \frac12$? By how many plays is the expected duration of the game extended?


The new probability of ruin $v_k$ (with the stakes halved) is, adapting the formula for $u_k$,
\[ v_k = u_{2k} = \frac{s^{2k} - s^{2a}}{1 - s^{2a}} = \frac{(s^k + s^a)(s^k - s^a)}{(1 - s^a)(1 + s^a)} = u_k\left(\frac{s^k + s^a}{1 + s^a}\right). \]
Given $p < \frac12$, then $s = (1-p)/p > 1$ and $s^k > 1$. It follows that
\[ v_k > u_k\left(\frac{1 + s^a}{1 + s^a}\right) = u_k. \]
With this change the gambler is more likely to lose. From (2.9), the expected duration of the standard game is given by
\[ d_k = \frac{1}{1 - 2p}\left[k - \frac{a(1 - s^k)}{1 - s^a}\right]. \]
With the stakes halved the expected duration $h_k$ is
\[ h_k = d_{2k} = \frac{1}{1 - 2p}\left[2k - \frac{2a(1 - s^{2k})}{1 - s^{2a}}\right]. \]
The expected duration is extended by
\[ h_k - d_k = \frac{1}{1 - 2p}\left[k - \frac{2a(1 - s^{2k})}{1 - s^{2a}} + \frac{a(1 - s^k)}{1 - s^a}\right] = \frac{1}{1 - 2p}\left[k + \frac{a(1 - s^k)(s^a - 1 - 2s^k)}{1 - s^{2a}}\right]. \]

2.15. In a gambler's ruin game, suppose that the gambler can win £2 with probability $\frac13$ or lose £1 with probability $\frac23$. Show that
\[ u_k = \frac{(3k - 1 - 3a)(-2)^a + (-2)^k}{1 - (3a+1)(-2)^a}. \]
Compute $u_k$ if $a = 9$ for $k = 1, 2, \ldots, 8$.

The probability of ruin $u_k$ satisfies
\[ u_k = \tfrac13 u_{k+2} + \tfrac23 u_{k-1}, \quad\text{or}\quad u_{k+2} - 3u_k + 2u_{k-1} = 0. \]
The characteristic equation is
\[ m^3 - 3m + 2 = (m-1)^2(m+2) = 0, \]
which has the roots $m_1 = 1$ (repeated) and $m_2 = -2$. Hence $u_k = A + Bk + C(-2)^k$. The boundary conditions are $u_0 = 1$, $u_a = 0$ and $u_{a-1} = \frac23 u_{a-2}$. The constants $A$, $B$ and $C$ satisfy
\[ A + C = 1, \qquad A + Ba + C(-2)^a = 0, \]
\[ 3[A + B(a-1) + C(-2)^{a-1}] = 2[A + B(a-2) + C(-2)^{a-2}], \quad\text{or}\quad A + B(a+1) - 8C(-2)^{a-2} = 0. \]
The solution of these equations is
\[ A = \frac{-(-2)^a(3a+1)}{1 - (-2)^a(3a+1)}, \qquad B = \frac{3(-2)^a}{1 - (-2)^a(3a+1)}, \qquad C = \frac{1}{1 - (-2)^a(3a+1)}. \]
Finally
\[ u_k = \frac{(3k - 1 - 3a)(-2)^a + (-2)^k}{1 - (3a+1)(-2)^a}. \]
The values of the probabilities $u_k$ for $a = 9$ are shown in the table below.


k      1      2      3      4      5      6      7      8
u_k    0.893  0.786  0.678  0.573  0.462  0.362  0.241  0.161

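The tabulated values can be reproduced directly from the closed form; the sketch below (an editorial check, not part of the original solution) evaluates $u_k$ for $a = 9$ and confirms the boundary value $u_0 = 1$.

```python
def ruin_prob(k, a=9):
    # u_k from Problem 2.15
    d = 1 - (-2) ** a * (3 * a + 1)
    return ((3 * k - 1 - 3 * a) * (-2) ** a + (-2) ** k) / d

assert abs(ruin_prob(0) - 1.0) < 1e-12  # u_0 = 1
row = [round(ruin_prob(k), 3) for k in range(1, 9)]
print(row)  # [0.893, 0.786, 0.678, 0.573, 0.462, 0.362, 0.241, 0.161]
```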
2.16. Find the general solution of the difference equation
\[ u_{k+2} - 3u_k + 2u_{k-1} = 0. \]
A reservoir with total capacity of $a$ volume units of water has, during each day, either a net inflow of two units with probability $\frac13$ or a net outflow of one unit with probability $\frac23$. If the reservoir is full or nearly full any excess inflow is lost in an overflow. Derive a difference equation for this model for $u_k$, the probability that the reservoir will eventually become empty given that it initially contains $k$ units. Explain why the upper boundary conditions can be written $u_a = u_{a-1}$ and $u_a = u_{a-2}$. Show that the reservoir is certain to be empty at some time in the future.

The characteristic equation is $m^3 - 3m + 2 = (m-1)^2(m+2) = 0$. The general solution is (see Problem 2.15)
\[ u_k = A + Bk + C(-2)^k. \]
The boundary conditions for the reservoir are
\[ u_0 = 1, \qquad u_a = \tfrac13 u_a + \tfrac23 u_{a-1}, \qquad u_{a-1} = \tfrac13 u_a + \tfrac23 u_{a-2}. \]
The latter two conditions are equivalent to $u_a = u_{a-1} = u_{a-2}$. Hence
\[ A + C = 1, \qquad A + Ba + C(-2)^a = A + B(a-1) + C(-2)^{a-1} = A + B(a-2) + C(-2)^{a-2}, \]

which have the solutions $A = 1$, $B = C = 0$. The solution is $u_k = 1$, which means that the reservoir is certain to empty at some future date.

2.17. Consider the standard gambler's ruin problem in which the total stake is $a$ and the gambler's stake is $k$, and the gambler's probability of winning at each play is $p$ and of losing is $q = 1-p$. Find $u_k$, the probability of the gambler losing the game, by the following alternative method. List the difference equation (2.2) as
\begin{align*}
u_2 - u_1 &= s(u_1 - u_0) = s(u_1 - 1), \\
u_3 - u_2 &= s(u_2 - u_1) = s^2(u_1 - 1), \\
&\ \,\vdots \\
u_k - u_{k-1} &= s(u_{k-1} - u_{k-2}) = s^{k-1}(u_1 - 1),
\end{align*}
where $s = q/p \ne 1$ and $k = 2, 3, \ldots, a$. The boundary condition $u_0 = 1$ has been used in the first equation. By adding the equations show that
\[ u_k = u_1 + (u_1 - 1)\frac{s - s^k}{1 - s}. \]
Determine $u_1$ from the other boundary condition $u_a = 0$, and hence find $u_k$. Adapt the same method for the special case $p = q = \frac12$.

Addition of the equations gives
\[ u_k - u_1 = (u_1 - 1)(s + s^2 + \cdots + s^{k-1}) = (u_1 - 1)\frac{s - s^k}{1 - s}, \]
summing the geometric series. The condition $u_a = 0$ implies
\[ -u_1 = (u_1 - 1)\frac{s - s^a}{1 - s}. \]
Hence
\[ u_1 = \frac{s - s^a}{1 - s^a}, \quad\text{so that}\quad u_k = \frac{s^k - s^a}{1 - s^a}. \]

2.18. A car park has 100 parking spaces. Cars arrive and leave randomly. Arrivals and departures of cars are equally likely, and it is assumed that simultaneous events have negligible probability. The 'state' of the car park changes whenever a car arrives or departs. Given that at some instant there are $k$ cars in the car park, let $u_k$ be the probability that the car park first becomes full before it becomes empty. What are the boundary conditions for $u_0$ and $u_{100}$? How many car movements can be expected before this occurs?

The probability $u_k$ satisfies the difference equation
\[ u_k = \tfrac12 u_{k+1} + \tfrac12 u_{k-1}, \quad\text{or}\quad u_{k+1} - 2u_k + u_{k-1} = 0. \]
The general solution is $u_k = A + Bk$. The boundary conditions are $u_0 = 0$ and $u_{100} = 1$. Hence $A = 0$ and $B = 1/100$, and $u_k = k/100$. The expected number of car movements until the car park becomes full or empty is $d_k = k(100-k)$.

2.19. In a standard gambler's ruin problem with the usual parameters, the probability that the gambler loses is given by
\[ u_k = \frac{s^k - s^a}{1 - s^a}, \qquad s = \frac{1-p}{p}. \]

1 , 2

given say by p =

uk =

1 2

+ ε where |ε| is small, show, by using binomial expansions, that

h

a−k 4 1 − 2kε − (a − 2k)ε2 + O(ε3 ) a 3

i

as ε → 0. (The order O terminology is defined as follows: we say that a function g(ε) = O(εb ) as ε → 0 if g(ε)/εb is bounded in a neighbourhood which contains ε = 0. See also the Appendix in the book.) Let p =

1 2

+ ε. Then s = (1 − 2ε)/(1 + 2ε), and uk =

(1 − 2ε)k (1 + 2ε)−k − (1 − 2ε)a (1 + 2ε)−a . 1 − (1 − 2ε)a (1 + 2ε)−a

Apply the binomial theorem to each term. The result is uk =

h

i

4 a−k 1 − 2kε − (a − 2k)ε2 + O(ε3 ) . a 3

[Symbolic computation of the series is a useful check.]

2.20. A gambler plays a game against a casino according to the following rules. The gambler and casino each start with 10 chips. From a deck of 53 playing cards which includes a joker, cards are randomly and successively drawn with replacement. If the card is red or the joker the casino wins 1 chip from the gambler, and if the card is black the gambler wins 1 chip from the casino. The game continues until either player has no chips. What is the probability that the gambler wins? What will be the expected duration of the game?

From (2.4) the probability $u_k$ that the gambler loses is
\[ u_k = \frac{s^k - s^a}{1 - s^a}, \]
with $k = 10$, $a = 20$, $p = 26/53$, and $s = 27/26$. Hence
\[ u_{10} = \frac{(27/26)^{10} - (27/26)^{20}}{1 - (27/26)^{20}} \approx 0.593. \]
Therefore the probability that the gambler wins is approximately $0.407$. From (2.10),
\[ d_k = \frac{1}{1 - 2p}\left[k - \frac{a(1 - s^k)}{1 - s^a}\right] \approx 98.84 \]
for the given data.

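The two numerical answers above are easy to recompute. A minimal sketch (editorial, not part of the original solution):

```python
k, a, p = 10, 20, 26 / 53   # gambler wins on black: 26 of 53 cards
s = (1 - p) / p             # = 27/26

u = (s ** k - s ** a) / (1 - s ** a)                     # P(gambler ruined), eqn (2.4)
d = (k - a * (1 - s ** k) / (1 - s ** a)) / (1 - 2 * p)  # expected duration, eqn (2.10)

print(round(1 - u, 3), round(d, 2))  # 0.407 98.84
```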
2.21. In the standard gambler's ruin problem with total stake $a$ and gambler's stake $k$, the probability that the gambler loses is
\[ u_k = \frac{s^k - s^a}{1 - s^a}, \]
where $s = (1-p)/p$. Suppose that $u_k = \frac12$, that is, fair odds. Express $k$ as a function of $a$. Show that
\[ k = \frac{\ln[\frac12(1 + s^a)]}{\ln s}. \]

Given
\[ u_k = \frac{s^k - s^a}{1 - s^a} \quad\text{and}\quad u_k = \tfrac12, \]
then $1 - s^a = 2(s^k - s^a)$, or $s^k = \frac12(1 + s^a)$. Hence
\[ k = \frac{\ln[\frac12(1 + s^a)]}{\ln s}, \]

but generally $k$ will not be an integer.

2.22. In a gambler's ruin game the probability that the gambler wins at each play is $\alpha_k$ and loses is $1 - \alpha_k$, $(0 < \alpha_k < 1,\ 0 \le k \le a-1)$, that is, the probability varies with the current stake. The probability $u_k$ that the gambler eventually loses satisfies
\[ u_k = \alpha_k u_{k+1} + (1 - \alpha_k)u_{k-1}, \qquad u_0 = 1, \quad u_a = 0. \]
Suppose that $u_k$ is a specified function such that $0 < u_k < 1$, $(1 \le k \le a-1)$, $u_0 = 1$ and $u_a = 0$. Express $\alpha_k$ in terms of $u_{k-1}$, $u_k$ and $u_{k+1}$. Find $\alpha_k$ in the following cases: (a) $u_k = (a-k)/a$; (b) $u_k = (a^2 - k^2)/a^2$; (c) $u_k = \frac12[1 + \cos(k\pi/a)]$.

From the difference equation, $u_k - u_{k-1} = \alpha_k(u_{k+1} - u_{k-1})$, so that
\[ \alpha_k = \frac{u_k - u_{k-1}}{u_{k+1} - u_{k-1}}. \]
(a) $u_k = (a-k)/a$. Then
\[ \alpha_k = \frac{(a-k) - (a-k+1)}{(a-k-1) - (a-k+1)} = \frac12, \]
which is to be anticipated from eqn (2.5).
(b) $u_k = (a^2 - k^2)/a^2$. Then
\[ \alpha_k = \frac{(a^2 - k^2) - [a^2 - (k-1)^2]}{[a^2 - (k+1)^2] - [a^2 - (k-1)^2]} = \frac{2k-1}{4k}. \]
(c) $u_k = \frac12[1 + \cos(k\pi/a)]$. Using $\cos C - \cos D = -2\sin\frac12(C+D)\sin\frac12(C-D)$,
\[ u_k - u_{k-1} = -\sin\frac{(2k-1)\pi}{2a}\sin\frac{\pi}{2a}, \qquad u_{k+1} - u_{k-1} = -\sin\frac{k\pi}{a}\sin\frac{\pi}{a}. \]
Hence
\[ \alpha_k = \frac{\sin[(2k-1)\pi/(2a)]\sin[\pi/(2a)]}{\sin(k\pi/a)\sin(\pi/a)} = \frac{\sin[(2k-1)\pi/(2a)]}{2\sin(k\pi/a)\cos[\pi/(2a)]}. \]


2.23. In a gambler's ruin game the probability that the gambler wins at each play is $\alpha_k$ and loses is $1 - \alpha_k$, $(0 < \alpha_k < 1,\ 1 \le k \le a-1)$, that is, the probability varies with the current stake. The probability $u_k$ that the gambler eventually loses satisfies
\[ u_k = \alpha_k u_{k+1} + (1 - \alpha_k)u_{k-1}, \qquad u_0 = 1, \quad u_a = 0. \]
Reformulate the difference equation as
\[ u_{k+1} - u_k = \beta_k(u_k - u_{k-1}), \]
where $\beta_k = (1 - \alpha_k)/\alpha_k$. Hence show that
\[ u_k = u_1 + \gamma_{k-1}(u_1 - 1), \qquad (k = 2, 3, \ldots, a), \]
where $\gamma_k = \beta_1 + \beta_1\beta_2 + \cdots + \beta_1\beta_2\cdots\beta_k$. Using the boundary condition at $k = a$, confirm that
\[ u_k = \frac{\gamma_{a-1} - \gamma_{k-1}}{1 + \gamma_{a-1}}. \]
Check that this formula gives the usual answer if $\alpha_k = p \ne \frac12$, a constant.

The difference equation can be expressed in the equivalent form $u_{k+1} - u_k = \beta_k(u_k - u_{k-1})$, where $\beta_k = (1 - \alpha_k)/\alpha_k$. Now list the equations as follows, noting that $u_0 = 1$:
\begin{align*}
u_2 - u_1 &= \beta_1(u_1 - 1) \\
u_3 - u_2 &= \beta_1\beta_2(u_1 - 1) \\
&\ \,\vdots \\
u_k - u_{k-1} &= \beta_1\beta_2\cdots\beta_{k-1}(u_1 - 1).
\end{align*}
Adding these equations, we obtain
\[ u_k - u_1 = \gamma_{k-1}(u_1 - 1), \quad\text{where}\quad \gamma_{k-1} = \beta_1 + \beta_1\beta_2 + \cdots + \beta_1\beta_2\cdots\beta_{k-1}. \]
The condition $u_a = 0$ implies $-u_1 = \gamma_{a-1}(u_1 - 1)$, so that
\[ u_1 = \frac{\gamma_{a-1}}{1 + \gamma_{a-1}}. \]
Finally
\[ u_k = \frac{\gamma_{a-1} - \gamma_{k-1}}{1 + \gamma_{a-1}}. \]
If $\alpha_k = p \ne \frac12$, then $\beta_k = (1-p)/p = s$, say, and
\[ \gamma_k = s + s^2 + \cdots + s^k = \frac{s - s^{k+1}}{1 - s}. \]
Hence
\[ u_k = \frac{(s - s^a)/(1-s) - (s - s^k)/(1-s)}{1 + (s - s^a)/(1-s)} = \frac{s^k - s^a}{1 - s^a}, \]
as required.

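The final consistency check can also be done numerically. The sketch below (an editorial aid, with an illustrative constant $\alpha_k = p = 0.4$ and $a = 8$) builds the $\gamma_k$ and confirms that the general formula reproduces the standard ruin probabilities.

```python
def ruin_gamma(alpha, a):
    # u_k via the gamma formula of Problem 2.23
    prod, gamma = 1.0, [0.0]                 # gamma[0] = 0 (empty sum)
    for k in range(1, a):
        prod *= (1 - alpha(k)) / alpha(k)    # beta_1 * ... * beta_k
        gamma.append(gamma[-1] + prod)       # gamma_k
    ga = gamma[a - 1]
    return [(ga - gamma[k - 1]) / (1 + ga) for k in range(1, a)]

p, a = 0.4, 8
s = (1 - p) / p
standard = [(s ** k - s ** a) / (1 - s ** a) for k in range(1, a)]
computed = ruin_gamma(lambda k: p, a)
assert all(abs(x - y) < 1e-12 for x, y in zip(standard, computed))
print("gamma formula reproduces the standard ruin probabilities")
```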
2.24. Suppose that a fair $n$-sided die is rolled $n$ independent times. A match is said to occur if side $i$ is observed on the $i$th trial, where $i = 1, 2, \ldots, n$.
(a) Show that the probability of at least one match is
\[ 1 - \left(1 - \frac{1}{n}\right)^n. \]
(b) What is the limit of this probability as $n \to \infty$?
(c) What is the probability that just one match occurs in $n$ trials?
(d) What value does this probability approach as $n \to \infty$?
(e) What is the probability that two or more matches occur in $n$ trials?

(a) The probability of no matches is
\[ \left(\frac{n-1}{n}\right)^n. \]
The probability of at least one match is
\[ 1 - \left(\frac{n-1}{n}\right)^n = 1 - \left(1 - \frac{1}{n}\right)^n. \]
(b) As $n \to \infty$,
\[ \left(1 - \frac{1}{n}\right)^n \to e^{-1}. \]
Hence for large $n$, the probability of at least one match approaches $1 - e^{-1} = (e-1)/e$.
(c) There is just one match with probability
\[ n\cdot\frac{1}{n}\left(\frac{n-1}{n}\right)^{n-1} = \left(1 - \frac{1}{n}\right)^{n-1}. \]
(d) As $n \to \infty$,
\[ \left(1 - \frac{1}{n}\right)^{n-1} \to e^{-1}. \]
(e) The probability of two or more matches is
\[ 1 - \left(1 - \frac{1}{n}\right)^n - \left(1 - \frac{1}{n}\right)^{n-1}. \]

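For a small case the formulas in (a), (c) and (e) can be confirmed by exhaustive enumeration. The sketch below (editorial, for $n = 3$) lists all $3^3$ equally likely outcomes and counts matches exactly.

```python
from itertools import product
from fractions import Fraction

n = 3
outcomes = list(product(range(1, n + 1), repeat=n))
total = Fraction(len(outcomes))  # 27 equally likely outcomes

def matches(o):
    # a match at trial i means side i appears on trial i
    return sum(1 for i, side in enumerate(o, start=1) if side == i)

at_least_one = Fraction(sum(1 for o in outcomes if matches(o) >= 1)) / total
exactly_one  = Fraction(sum(1 for o in outcomes if matches(o) == 1)) / total
two_or_more  = Fraction(sum(1 for o in outcomes if matches(o) >= 2)) / total

assert at_least_one == 1 - Fraction(n - 1, n) ** n                              # (a)
assert exactly_one  == Fraction(n - 1, n) ** (n - 1)                            # (c)
assert two_or_more  == 1 - Fraction(n - 1, n) ** n - Fraction(n - 1, n) ** (n - 1)  # (e)
print("enumeration for n = 3 agrees with the formulas")
```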
2.25. (Kelly's strategy) A gambler plays a repeated favourable game in which the gambler wins with probability $p > \frac12$ and loses with probability $q = 1-p$. The gambler starts with an initial outlay $K_0$ (in some currency). For the first game the player bets a proportion $rK_0$, $(0 < r < 1)$. Hence after this play the stake is $K_0(1+r)$ after a win or $K_0(1-r)$ after losing. Subsequently, the gambler bets the same proportion of the current stake at each play. Hence, after $n$ plays of which $w_n$ are wins the stake will be
\[ K_n(r) = K_0(1+r)^{w_n}(1-r)^{n-w_n}. \]
Construct the function
\[ G_n(r) = \frac{1}{n}\ln\left[\frac{K_n(r)}{K_0}\right]. \]
What is the expected value of $G_n(r)$ for large $n$? For what value of $r$ is this expected value a maximum? This value of $r$ indicates a safe betting level to maximise the gain, although at a slow rate. You might consider why the logarithm is chosen: this is known as a utility function in gambling and economics. It is a matter of choice and is a balance between having a reasonable gain against having a high-risk gain. Calculate $r$ if $p = 0.55$. [At the extremes, $r = 0$ corresponds to no bet whilst $r = 1$ corresponds to betting the whole stake $K_0$ in one go, which could be catastrophic.]

From $K_n(r) = K_0(1+r)^{w_n}(1-r)^{n-w_n}$, it follows that
\[ G_n(r) = \frac{1}{n}\ln\left[(1+r)^{w_n}(1-r)^{n-w_n}\right] = \frac{w_n}{n}\ln(1+r) + \frac{n-w_n}{n}\ln(1-r). \]
$G_n(r)$ is our chosen utility function. For large $n$, the ratio $w_n/n$ is approximately the probability $p$, and $(n-w_n)/n$ the probability $q$. Hence $G_n(r)$ approaches
\[ H(r) = p\ln(1+r) + q\ln(1-r). \]
This function is zero where $r = 0$ and approaches $-\infty$ as $r \to 1-$. Also
\[ H'(r) = \frac{p}{1+r} - \frac{q}{1-r}. \]
Hence $H'(0) = p - q = 2p - 1 > 0$, so that the slope at the origin is positive, which implies that $H(r)$ has a maximum in $0 < r < 1$. This occurs where $H'(r) = 0$, namely at $r_m = 2p - 1$, so that
\[ H(r_m) = p\ln(2p) + (1-p)\ln(2-2p). \]
The function $G_n(r)$ is an example of a utility function which attempts to avoid possible catastrophic losses, and keeps gains modest. As an example let $p = 0.55$. Then $r_m = 2p - 1 = 0.1$. Suppose that $n = 100$ and $w_n = 55$ (reflecting the odds). Then for this value of $r_m$ it can be calculated that $K_{100}(r_m) \approx 1.65K_0$.

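The final numbers are easy to verify. The sketch below (an editorial check, not part of the original solution) confirms that $r_m = 2p - 1$ is a stationary point of $H(r)$ and recomputes the 100-play growth factor.

```python
p = 0.55
q = 1 - p
r_m = 2 * p - 1  # maximiser of H(r) = p ln(1+r) + q ln(1-r)

# stationary point: H'(r_m) = p/(1+r_m) - q/(1-r_m) = 0
assert abs(p / (1 + r_m) - q / (1 - r_m)) < 1e-12

# growth of the stake after n = 100 plays with w_n = 55 wins
ratio = (1 + r_m) ** 55 * (1 - r_m) ** 45
print(round(ratio, 2))  # K_100 / K_0
```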

Chapter 3

Random Walks

3.1. In a random walk the probability that the walk advances by one step is $p$ and retreats by one step is $q = 1-p$. At step $n$ let the position of the walker be the random variable $X_n$. If the walk starts at $x = 0$, enumerate all possible sample paths which lead to the value $X_4 = -2$. Verify that
\[ \mathrm{P}\{X_4 = -2\} = \binom{4}{1}pq^3. \]

If a walk starts at $x = 0$ and finishes at $x = -2$ after 4 steps, then it must advance one step (probability $p$) and retreat three steps (probability $q^3$). The possible walks are:
\[ 0 \to -1 \to -2 \to -3 \to -2, \qquad 0 \to -1 \to -2 \to -1 \to -2, \]
\[ 0 \to 1 \to 0 \to -1 \to -2, \qquad 0 \to -1 \to 0 \to -1 \to -2. \]
By (3.4),
\[ \mathrm{P}\{X_4 = -2\} = \binom{4}{1}pq^3. \]

3.2. A symmetric random walk starts from the origin. Find the probability that the walker is at the origin at step 8. What is the probability, also at step 8, that the walker is at the origin but that it is not the first visit there?

By (3.4), the probability that the walker is at the origin at step 8 is given by
\[ \mathrm{P}(X_8 = 0) = \binom{8}{4}\frac{1}{2^8} = \frac{8!}{4!\,4!}\cdot\frac{1}{2^8} \approx 0.273. \]
The generating function of the first returns $f_n$ is given by (see Section 3.4)
\[ Q(s) = \sum_{n=0}^{\infty} f_n s^n = 1 - (1 - s^2)^{\frac12}. \]
We require $f_8$ in the expansion of $Q(s)$. Thus, using the binomial theorem, the series expansion for $Q(s)$ is
\[ Q(s) = \frac12 s^2 + \frac18 s^4 + \frac{1}{16}s^6 + \frac{5}{128}s^8 + O(s^{10}). \]
Therefore the probability of a first return at step 8 is $5/128 \approx 0.039$. Hence the probability that the walk is at the origin but not on a first return is $0.273 - 0.039 = 0.234$.

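The two probabilities above can be recomputed exactly. A minimal sketch (editorial; it uses the known closed form $f_{2n} = \binom{2n}{n}/[(2n-1)4^n]$ for first returns of the symmetric walk, which reproduces the series coefficients $\frac12, \frac18, \frac1{16}, \frac{5}{128}$):

```python
from math import comb

p_origin_8 = comb(8, 4) / 2 ** 8        # P(X_8 = 0) = 70/256

def first_return(n):
    # f_{2n} = C(2n, n) / ((2n - 1) 4^n) for the symmetric walk
    return comb(2 * n, n) / ((2 * n - 1) * 2 ** (2 * n))

f8 = first_return(4)
assert f8 == 5 / 128                    # coefficient of s^8 in Q(s)
print(round(p_origin_8, 3), round(f8, 3), round(p_origin_8 - f8, 3))
```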
3.3. An asymmetric walk starts at the origin. From eqn (3.5), the probability that the walk reaches $x$ in $n$ steps is given by
\[ v_{n,x} = \binom{n}{\frac12(n+x)}p^{\frac12(n+x)}q^{\frac12(n-x)}, \]
where $n$ and $x$ are both even or both odd. If $n = 4$, show that the mean position is $4(p-q)$, confirming the result in Section 3.2.

The furthest positions that the walk can reach from the origin in 4 steps are $x = -4$ and $x = 4$, and since $n$ is even, the only other positions reachable are $x = -2, 0, 2$. Hence the required mean value is
\begin{align*}
\mu &= -4v_{4,-4} - 2v_{4,-2} + 2v_{4,2} + 4v_{4,4} \\
&= -4\binom{4}{0}q^4 - 2\binom{4}{1}pq^3 + 2\binom{4}{3}p^3q + 4\binom{4}{4}p^4 \\
&= -4q^4 - 8pq^3 + 8p^3q + 4p^4 \\
&= 4(p-q)(p+q)^3 = 4(p-q).
\end{align*}

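The identity $\mu = 4(p-q)$ can be confirmed for a numerical value of $p$; a short sketch (editorial, with an illustrative $p = 0.65$):

```python
from math import comb

def v(n, x, p):
    # v_{n,x} = C(n, (n+x)/2) p^((n+x)/2) q^((n-x)/2); zero if n + x is odd
    q = 1 - p
    if (n + x) % 2:
        return 0.0
    k = (n + x) // 2
    return comb(n, k) * p ** k * q ** (n - k)

p = 0.65
mean = sum(x * v(4, x, p) for x in range(-4, 5))
assert abs(mean - 4 * (2 * p - 1)) < 1e-12   # mean = 4(p - q)
print("mean after 4 steps equals 4(p - q)")
```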
3.4. The pgf for the first return distribution $\{f_n\}$, $(n = 1, 2, \ldots)$, to the origin in a symmetric random walk is given by
\[ F(s) = 1 - (1 - s^2)^{\frac12} \]
(see Section 3.4). (a) Using the binomial theorem, find a formula for $f_n$, the probability that the first return occurs at the $n$th step. (b) What is the variance of $f_n$?

Using the binomial theorem,
\[ F(s) = 1 - (1 - s^2)^{\frac12} = 1 - \sum_{n=0}^{\infty}\binom{\frac12}{n}(-1)^n s^{2n}. \]
(a) From the series, the probability of a first return at step $n$ is
\[ f_n = \begin{cases} (-1)^{\frac12 n + 1}\dbinom{\frac12}{\frac12 n}, & n \text{ even}, \\ 0, & n \text{ odd}. \end{cases} \]
(b) The variance of the first-return distribution is defined by
\[ V = F''(1) + F'(1) - [F'(1)]^2. \]
We can anticipate that the variance will be infinite (as is the case with the mean) since the derivatives of $F(s)$ are unbounded as $s \to 1$.

3.5. A coin is spun $2n$ times and the sequence of heads and tails is recorded. What is the probability that the number of heads equals the number of tails after $2n$ spins?

This problem can be viewed as a symmetric random walk starting at the origin in which a head is represented as a step to the right, say, and a tail a step to the left. We require the probability that the walk returns to the origin after $2n$ steps, which is (see Section 3.3)
\[ v_{2n,0} = \frac{1}{2^{2n}}\binom{2n}{n} = \frac{(2n)!}{2^{2n}n!\,n!}. \]

3.6. For an asymmetric walk with parameters $p$ and $q = 1-p$, the probability that the walk is at the origin after $2n$ steps is
\[ q_{2n} = v_{2n,0} = \binom{2n}{n}p^n q^n, \qquad (n = 1, 2, 3, \ldots), \]
from Eqn (3.5). Note that $\{q_{2n}\}$ is not a probability distribution. Find the mean number of steps of a return to the origin conditional on a return occurring.

The generating function $H(s)$ is defined by
\[ H(s) = \sum_{n=0}^{\infty}\binom{2n}{n}p^n q^n s^{2n} = \sum_{n=0}^{\infty}2^{2n}\binom{-\frac12}{n}(-1)^n p^n q^n s^{2n} = (1 - 4pqs^2)^{-\frac12}, \]
using the binomial identity from Section 3.3 or the Appendix. The mean number of steps to the return is
\[ \mu = H'(1) = \left[\frac{4pqs}{(1 - 4pqs^2)^{\frac32}}\right]_{s=1} = \frac{4pq}{(1 - 4pq)^{\frac32}}. \]

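The closed form $H(s) = (1 - 4pqs^2)^{-1/2}$ can be checked numerically by summing the series directly; the sketch below (editorial, with illustrative $p = 0.3$, $s = 0.8$) truncates the sum once the geometric-like tail is negligible.

```python
from math import comb

p, s = 0.3, 0.8
q = 1 - p

# partial sum of sum_n C(2n, n) (pq)^n s^(2n); terms decay like (4pq s^2)^n
partial = sum(comb(2 * n, n) * (p * q) ** n * s ** (2 * n) for n in range(200))
closed = (1 - 4 * p * q * s * s) ** (-0.5)
assert abs(partial - closed) < 1e-10
print("series matches (1 - 4pqs^2)^(-1/2)")
```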
3.7. Using eqn (3.13) relating the generating functions of the returns and first returns to the origin, namely
\[ H(s) = [H(s) + 1]F(s), \]
which is still valid for the asymmetric walk, show that
\[ Q(s) = 1 - (1 - 4pqs^2)^{\frac12}, \]
where $p \ne q$. Show that a first return to the origin is not certain, unlike the situation in the symmetric walk. Find the mean number of steps to the first return.

From Problem 3.6, $H(s) = (1 - 4pqs^2)^{-\frac12}$, so that, by (3.12),
\[ Q(s) = 1 - \frac{1}{H(s)} = 1 - (1 - 4pqs^2)^{\frac12}. \]
It follows that
\[ Q(1) = \sum_{n=1}^{\infty}f_n = 1 - (1 - 4pq)^{\frac12} = 1 - [(p+q)^2 - 4pq]^{\frac12} = 1 - [(p-q)^2]^{\frac12} = 1 - |p-q| < 1, \]
if $p \ne q$. Hence a first return to the origin is not certain. The mean number of steps is
\[ \mu = Q'(1) = \frac{4pq}{(1 - 4pq)^{\frac12}} = \frac{4pq}{|p-q|}. \]

3.8. A symmetric random walk starts from the origin. Show that the walk does not revisit the origin in the first $2n$ steps with probability
\[ h_n = 1 - f_2 - f_4 - \cdots - f_{2n}, \]
where $f_{2n}$ is the probability that a first return occurs at the $2n$th step. The generating function for the sequence $\{f_{2n}\}$ is
\[ F(s) = 1 - (1 - s^2)^{\frac12} \]
(see Section 3.4). Show that
\[ f_2 = \frac12, \qquad f_{2n} = \frac{(2n-3)!}{n!(n-2)!2^{2n-2}}, \qquad (n = 2, 3, \ldots). \]
Show that $h_n$ satisfies the first-order difference equation
\[ h_{n+1} - h_n = -f_{2n+2} = -\frac{(2n-1)!}{(n+1)!(n-1)!2^{2n}}. \]
Verify that this equation has the general solution
\[ h_n = C + \binom{2n}{n}\frac{1}{2^{2n}}, \]
where $C$ is a constant. By calculating $h_1$, confirm that the probability of no return to the origin in the first $2n$ steps is
\[ \frac{1}{2^{2n}}\binom{2n}{n}. \]

The probability that a first return occurs at step $2j$ is $f_{2j}$: a first return cannot occur after an odd number of steps. Therefore the probability $h_n$ that a first return has not occurred in the first $2n$ steps is given by the difference
\[ h_n = 1 - f_2 - f_4 - \cdots - f_{2n}. \]
The probability $f_{2n}$ that the first return to the origin occurs at step $2n$ is the coefficient of $s^{2n}$ in the expansion
\[ F(s) = 1 - (1 - s^2)^{\frac12} = -\sum_{n=1}^{\infty}\binom{\frac12}{n}(-s^2)^n. \]
It follows that
\[ f_2 = \frac12, \qquad f_{2n} = (-1)^{n+1}\binom{\frac12}{n} = \frac{(2n-3)!}{n!(n-2)!2^{2n-2}}, \qquad (n = 2, 3, \ldots), \]
where $0! = 1$. Taking the difference of successive terms, $h_n$ satisfies the difference equation
\[ h_{n+1} - h_n = -f_{2n+2} = (-1)^{n+1}\binom{\frac12}{n+1} = -\frac{(2n-1)!}{(n+1)!(n-1)!2^{2n}}. \]
The homogeneous part of the difference equation has a constant solution $C$, say. For the particular solution try the choice suggested in the question, namely
\[ h_n = \binom{2n}{n}\frac{1}{2^{2n}}. \]
Then
\begin{align*}
h_{n+1} - h_n &= \binom{2n+2}{n+1}\frac{1}{2^{2n+2}} - \binom{2n}{n}\frac{1}{2^{2n}} \\
&= \binom{2n}{n}\frac{1}{2^{2n}}\left[\frac{(2n+1)(2n+2)}{4(n+1)^2} - 1\right] = -\binom{2n}{n}\frac{1}{2^{2n+1}(n+1)} \\
&= (-1)^{n+1}\binom{\frac12}{n+1} \quad\text{(using the binomial identity before (3.7))},
\end{align*}
as required. Hence
\[ h_n = C + \binom{2n}{n}\frac{1}{2^{2n}}. \]
The initial condition is $h_1 = \frac12$, from which it follows that $C = 0$. Therefore the probability that no return to the origin has occurred in the first $2n$ steps is
\[ h_n = \frac{1}{2^{2n}}\binom{2n}{n}. \]

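The identity $h_n = \binom{2n}{n}/4^n$ can be verified exactly from the first-return probabilities; a short sketch (editorial, exact rationals):

```python
from fractions import Fraction
from math import comb, factorial

def f2n(n):
    # f_2 = 1/2; f_{2n} = (2n-3)! / (n!(n-2)! 2^(2n-2)) for n >= 2
    if n == 1:
        return Fraction(1, 2)
    return Fraction(factorial(2 * n - 3),
                    factorial(n) * factorial(n - 2) * 2 ** (2 * n - 2))

for n in range(1, 12):
    h = 1 - sum(f2n(j) for j in range(1, n + 1))
    assert h == Fraction(comb(2 * n, n), 2 ** (2 * n))
print("h_n = C(2n, n) / 4^n verified for n = 1..11")
```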
3.9. A walk can be represented as a connected graph between coordinates $(n, y)$ where the ordinate $y$ is the position on the walk, and the abscissa $n$ represents the number of steps. A walk of 7 steps which joins $(0,1)$ and $(7,2)$ is shown in Fig. 3.1. Suppose that a walk starts at $(0, -y_1)$ and finishes at $(n, y_2)$, where $y_1 > 0$, $y_2 > 0$ and $n + y_2 - y_1$ is an even number. Suppose also that the walk first visits the origin at $n = n_1$. Reflect that part of the path for which $n \le n_1$ in the $n$-axis (see Fig. 3.1), and use a reflection argument to show that the number of paths from $(0, y_1)$ to $(n, y_2)$ which touch or cross the $n$-axis equals the number of all paths from $(0, -y_1)$ to $(n, y_2)$. This is known as the reflection principle.

[Figure 3.1: Representation of a random walk]

All paths from $(0, -y_1)$ to $(n, y_2)$ must cut the $n$-axis at least once. Let $(n_1, 0)$ be the first such contact with the $n$-axis. Reflect the path for $n \le n_1$ and $y \le 0$ in the $n$-axis. The result is a path from $(0, y_1)$ to $(n, y_2)$ which touches or cuts the $n$-axis at least once, and all such paths are included by this construction.

3.10. A walk starts at $(0,1)$ and returns to $(2n,1)$ after $2n$ steps. Using the reflection principle (see Problem 3.9) show that there are
\[ \frac{(2n)!}{n!(n+1)!} \]
different paths between the two points which do not ever revisit the origin. What is the probability that the walk ends at $(2n,1)$ after $2n$ steps without ever visiting the origin, assuming that the random walk is symmetric? Show that the probability that the first visit to the origin occurs after $2n+1$ steps is
\[ p_n = \frac{1}{2^{2n+1}}\frac{(2n)!}{n!(n+1)!}. \]

Let $M(m, d)$ represent the total number of different paths in the $(n, y)$ plane (as in Fig. 3.1) which are of length $m$ joining positions denoted by $y = y_1$ and $y = y_2$: here $d$ is the absolute difference $d = |y_2 - y_1|$. The total number of paths from $(0,1)$ to $(2n,1)$ is
\[ M(2n, 0) = \binom{2n}{n}. \]
By the reflection principle (Problem 3.9) the number of such paths which cross the $n$-axis (that is, visit the origin) is
\[ M(2n, 2) = \binom{2n}{n-1}. \]
Hence the number of paths from $(0,1)$ to $(2n,1)$ which do not visit the origin is
\[ M(2n,0) - M(2n,2) = \binom{2n}{n} - \binom{2n}{n-1} = \frac{(2n)!}{n!\,n!} - \frac{(2n)!}{(n-1)!(n+1)!} = \frac{(2n)!}{n!(n+1)!}. \]
The total number of paths of length $2n$ is $2^{2n}$. Also, to visit the origin for the first time at step $2n+1$, the walk must be at $y = 1$ at step $2n$ without having visited the origin, from where there is a probability of $\frac12$ that the walk moves to the origin. Hence the probability is
\[ p_n = \frac{1}{2^{2n}}\frac{(2n)!}{n!(n+1)!}\cdot\frac12 = \frac{1}{2^{2n+1}}\frac{(2n)!}{n!(n+1)!}. \]

3.11. A symmetric random walk starts at the origin. Let $f_{n,1}$ be the probability that the first visit to position $x = 1$ occurs at the $n$th step. Obviously, $f_{2n,1} = 0$. The result from Problem 3.10 can be adapted to give
\[ f_{2n+1,1} = \frac{1}{2^{2n+1}}\frac{(2n)!}{n!(n+1)!}, \qquad (n = 0, 1, 2, \ldots). \]
Suppose that its pgf is

∞ X

f2n+1,1 s2n+1 .

n=0

Show that

1

G1 (s) = [1 − (1 − s2 ) 2 ]/s.

[Hint: the identity 1 22n+1





1 (2n)! 2 , = (−1)n n+1 n!(n + 1)!

(n = 0, 1, 2, . . .)

is useful in the derivation of G1 (s).] Show that any walk starting at the origin is certain to visit x > 0 at some future step, but that the mean number of steps in achieving this is infinite. The result

(2n)! 1 22n+1 n!(n + 1)! is simply the result in the last part of Problem 3.10. Its pgf is ∞ ∞ X X 1 (2n)!s2n+1 G1 (s) = f2n+1,1 s2n+1 = 22n+1 n!(n + 1)! f2n+1,1 =

n=0

n=0

The identity before eqn (3.7) (in the book) states that



2n n



= (−1)n

39





− 21 2n 2 . n

Therefore, using this result (2n)! = 22n+1 n!(n + 1)!







2n − 21 1 = (−1)n 2n+1 (n + 1) n 2 n







1 1 2 = (−1)n . 2(n + 1) n+1

Hence G1 (s)

=

∞ X

(−1)n

s=0

= =

1 2

1

s−





1 2

n+1

1 2

2

3

s2n+1

s +

1 1 [1 − (1 − s2 ) 2 ] s

1 2

3

s5 − · · ·

using the binomial theorem. That G1 (1) = 1 implies that the random walk is certain to visit x > 0 at some future step. However, G′ (1) = ∞ which means that expected number of steps to that event is infinite. 3.12. A symmetric random walk starts at the origin. Let fn,x be the probability that the first visit to position x occurs at the n-th step (as usual, fn,x = 0 if n + x is an odd number). Explain why fn,x =

n−1 X

fn−k,x−1 fk,1 ,

k=1

If Gx (s) is its pgf, deduce that

(n ≥ x > 1).

Gx (s) = {G1 (s)}x ,

where G1 (s) is given explicitly in Problem 3.11. What are the probabilities that the walk first visits x = 3 at the steps n = 3, n = 5 and n = 7? Consider k = 1. The first visit to x − 1 has probability fn−1,x−1 in n − 1 steps. Having reached there the walk must first visit x in one further step with probability f1,1 . Hence the probability is fn−1,x−1 f1,1 . If k = 2, the first visit to x − 1 in n − 2 steps occurs with probability fn−2,x−1 : its first visit to x must occur after two steps. Hence the probability is fn−2,x−1 f2,1 . And so on. The sum of these probabilities gives fn,x =

n−1 X

fn−k,x−1 fk,1 ,

k=1

(n ≥ x > 1).

Multiply both sides of this equation by sn and sum over n from n = x: Gx (s) =

∞ n−1 X X

fn−k,x−1 fk,1 sn = Gx−1 (s)G1 (s).

n=x k=1

By repeated application of this difference equation, it follows that Gx (s) = For x = 3,

h n 1 s

1

1 − (1 − s2 ) 2

h1 n

1

1 − (1 − s2 ) 2

oix

.

oi3

. s Expansion of this function as a Taylor series in s gives the coefficients and probabilities: G1 (s) =

G2 (s) =

1 3 1 5 1 7 s + s + s + O(s9 ). 4 8 64

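The series coefficients above can be confirmed numerically. The sketch below (Python; the helper names are ours, not from the text) builds the coefficients of G_1(s) from the closed form f_{2n+1,1} = C(2n, n)/((n + 1) 2^{2n+1}) and cubes the series by polynomial convolution to obtain G_3(s):

```python
from fractions import Fraction
from math import comb

def g1_coeffs(N):
    """Coefficients of G1(s) up to degree N: f_{2n+1,1} sits at the odd powers."""
    c = [Fraction(0)] * (N + 1)
    n = 0
    while 2 * n + 1 <= N:
        c[2 * n + 1] = Fraction(comb(2 * n, n), (n + 1) * 2 ** (2 * n + 1))
        n += 1
    return c

def poly_mul(a, b, N):
    """Product of two coefficient lists, truncated at degree N."""
    out = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    out[i + j] += ai * bj
    return out

N = 9
g1 = g1_coeffs(N)                      # G1(s) = s/2 + s^3/8 + s^5/16 + ...
g3 = poly_mul(poly_mul(g1, g1, N), g1, N)   # G3(s) = G1(s)^3
```

Exact rational arithmetic avoids any floating-point noise in the comparison with the quoted probabilities 1/8, 3/32, 9/128.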
3.13. Problem 3.12 looks at the probability of a first visit to position x ≥ 1 at the n-th step in a symmetric random walk which starts at the origin. Why is the pgf for the first visit to position x, where |x| ≥ 1, given by

G_x(s) = {G_1(s)}^{|x|},

where G_1(s) is defined in Problem 3.11?

First visits to x > 0 and to −x at step n must be equally likely. Hence f_{n,x} = f_{n,−x}. Therefore

G_x(s) = [ {1 − (1 − s^2)^{1/2}}/s ]^{|x|}.

3.14. An asymmetric walk has parameters p and q = 1 − p ≠ p. Let g_{n,1} be the probability that the first visit to x = 1 occurs at the n-th step. As in Problem 3.11, g_{2n,1} = 0. It was effectively shown in Problem 3.10 that the number of paths from the origin which return to the origin after 2n steps is

(2n)!/(n!(n + 1)!).

Explain why

g_{2n+1,1} = (2n)!/(n!(n + 1)!) p^{n+1} q^n.

Suppose that its pgf is

G_1(s) = sum_{n=0}^∞ g_{2n+1,1} s^{2n+1}.

Show that

G_1(s) = [1 − (1 − 4pqs^2)^{1/2}]/(2qs).

(The identity in Problem 3.11 is required again.) What is the probability that the walk ever visits x > 0? How does this result compare with that for the symmetric random walk? What is the pgf for the distribution of first visits of the walk to x = −1 at step 2n + 1?

The number of paths of length 2n which never visit x = 1 is (adapting the answer in Problem 3.10)

(2n)!/(n!(n + 1)!).

Since such a path has n steps to the right, each with probability p, and n steps to the left, each with probability q, the probability of this occurrence is

(2n)!/(n!(n + 1)!) p^n q^n.

The probability that the next step then visits x = 1 is

g_{2n+1,1} = (2n)!/(n!(n + 1)!) p^{n+1} q^n,

which is the previous probability multiplied by p. Using the identity in Problem 3.11,

g_{2n+1,1} = (−1)^n C(1/2, n + 1) (4pq)^n 2p.

Its pgf G_1(s) is given by

G_1(s) = sum_{n=0}^∞ (−1)^n C(1/2, n + 1) (4pq)^n 2p s^{2n+1} = [1 − (1 − 4pqs^2)^{1/2}]/(2qs),

using a binomial expansion. Any walk which enters x > 0 must first visit x = 1, so the probability that the walk's first visit to x = 1 ever occurs is

sum_{n=0}^∞ g_{2n+1,1} = G_1(1) = [1 − {(p + q)^2 − 4pq}^{1/2}]/(2q) = [1 − |p − q|]/(2q).


A symmetry argument in which p and q are interchanged gives the pgf for the distribution of first visits to x = −1, namely

H_1(s) = [1 − (1 − 4pqs^2)^{1/2}]/(2ps).

3.15. It was shown in Section 3.3 that, in a random walk with parameters p and q = 1 − p, the probability that a walk is at position x at step n is given by

v_{n,x} = C(n, (n + x)/2) p^{(n+x)/2} q^{(n−x)/2},   |x| ≤ n,

where (n + x)/2 must be an integer. Verify that v_{n,x} satisfies the difference equation

v_{n+1,x} = p v_{n,x−1} + q v_{n,x+1},

subject to the initial conditions

v_{0,0} = 1,   v_{0,x} = 0, (x ≠ 0).

Note that this difference equation has differences of two arguments. Can you develop a direct argument which justifies the difference equation for the random walk?

Given

v_{n,x} = C(n, (n + x)/2) p^{(n+x)/2} q^{(n−x)/2},   |x| ≤ n,

then

p v_{n,x−1} + q v_{n,x+1}
  = p C(n, (n + x − 1)/2) p^{(n+x−1)/2} q^{(n−x+1)/2} + q C(n, (n + x + 1)/2) p^{(n+x+1)/2} q^{(n−x−1)/2}
  = p^{(n+x+1)/2} q^{(n−x+1)/2} [ C(n, (n + x − 1)/2) + C(n, (n + x + 1)/2) ]
  = p^{(n+x+1)/2} q^{(n−x+1)/2} n! [ (n + x + 1)/2 + (n − x + 1)/2 ] / { [(n + x + 1)/2]! [(n − x + 1)/2]! }
  = p^{(n+x+1)/2} q^{(n−x+1)/2} C(n + 1, (n + 1 + x)/2)
  = v_{n+1,x},

using Pascal's rule in the penultimate step.

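The difference equation can also be checked mechanically over a range of (n, x) with exact rational arithmetic. A minimal sketch (Python; function name is ours):

```python
from fractions import Fraction
from math import comb

def v(n, x, p, q):
    """Occupation probability v_{n,x} = C(n, (n+x)/2) p^{(n+x)/2} q^{(n-x)/2}."""
    if abs(x) > n or (n + x) % 2 != 0:
        return Fraction(0)
    k = (n + x) // 2
    return comb(n, k) * p**k * q**(n - k)

p, q = Fraction(3, 5), Fraction(2, 5)   # an arbitrary asymmetric choice
pairs = [(n, x) for n in range(1, 8) for x in range(-n - 1, n + 2)]
ok = all(v(n + 1, x, p, q) == p * v(n, x - 1, p, q) + q * v(n, x + 1, p, q)
         for n, x in pairs)
```

Out-of-range terms are returned as 0, which is exactly what the boundary cases of the recurrence require.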
3.16. In the usual notation, v_{2n,0} is the probability that, in a symmetric random walk, the walk visits the origin after 2n steps. Using the difference equation from Problem 3.15, v_{2n,0} satisfies

v_{2n,0} = (1/2)v_{2n−1,−1} + (1/2)v_{2n−1,1} = v_{2n−1,1}.

How can the last step be justified? Let

G_1(s) = sum_{n=1}^∞ v_{2n−1,1} s^{2n−1}

be the pgf of the distribution {v_{2n−1,1}}. Show that

G_1(s) = [(1 − s^2)^{−1/2} − 1]/s.

By expanding G_1(s) as a power series in s show that

v_{2n−1,1} = C(2n − 1, n)/2^{2n−1}.

By a repetition of the argument show that

G_2(s) = sum_{n=0}^∞ v_{2n,2} s^{2n} = [(2 − s^2)(1 − s^2)^{−1/2} − 2]/s^2.

For the first part, use a symmetry argument: v_{2n−1,−1} = v_{2n−1,1}. Multiply both sides of the difference equation by s^{2n} and sum from n = 1 to infinity. Then

sum_{n=1}^∞ v_{2n,0} s^{2n} = s sum_{n=1}^∞ v_{2n−1,1} s^{2n−1}.

Therefore, in the notation of the problem,

H(s) − 1 = sG_1(s),

where

H(s) = sum_{n=0}^∞ v_{2n,0} s^{2n} = (1 − s^2)^{−1/2}.

Therefore

G_1(s) = [(1 − s^2)^{−1/2} − 1]/s

as required. From the series for G_1(s) expanded as a binomial series, the general coefficient is

v_{2n−1,1} = (−1)^n C(−1/2, n) = (2n)!/(2^{2n} n! n!) = C(2n, n)/2^{2n} = C(2n − 1, n)/2^{2n−1}.

From the difference equation, v_{2n+1,1} = (1/2)v_{2n,0} + (1/2)v_{2n,2}. Multiplying by s^{2n+1} and summing over n,

G_1(s) = (1/2)s sum_{n=0}^∞ v_{2n,0} s^{2n} + (1/2)s sum_{n=0}^∞ v_{2n,2} s^{2n} = (1/2)sH(s) + (1/2)sG_2(s).

Therefore

G_2(s) = [2G_1(s) − sH(s)]/s = [(2 − s^2)(1 − s^2)^{−1/2} − 2]/s^2.

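The identity v_{2n,0} = v_{2n−1,1} used above, i.e. C(2n, n)/2^{2n} = C(2n − 1, n)/2^{2n−1}, is easy to confirm exactly for a range of n (a small Python sketch; function names are ours):

```python
from fractions import Fraction
from math import comb

def v2n_0(n):
    """v_{2n,0} = C(2n, n)/2^{2n} for the symmetric walk."""
    return Fraction(comb(2 * n, n), 2 ** (2 * n))

def v2nm1_1(n):
    """v_{2n-1,1} = C(2n-1, n)/2^{2n-1}."""
    return Fraction(comb(2 * n - 1, n), 2 ** (2 * n - 1))

same = all(v2n_0(n) == v2nm1_1(n) for n in range(1, 15))
```

The underlying reason is Pascal's rule: C(2n, n) = C(2n − 1, n − 1) + C(2n − 1, n) = 2C(2n − 1, n).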
Figure 3.2: The cyclic random walk of period n for Problem 3.17.

3.17. A random walk takes place on a circle which is marked out with n positions. Thus, as shown in Fig. 3.2, position n is the same as position O. This is known as a cyclic random walk of period n. A symmetric random walk starts at O. What is the probability that the walk is at O after j steps in the cases: (a) j < n; (b) n ≤ j < 2n? Distinguish carefully the cases in which j and n are even or odd.

(a) j < n. The walk cannot circumscribe the circle, so this case is the same as the walk on a line. Let p_j be the probability that the walk is at O at step j. Then by (3.6)

p_j = v_{j,0} = C(j, j/2)/2^j   (j even),   p_j = 0   (j odd).

(b) n ≤ j < 2n. Since position n can now be reached in both the clockwise and counterclockwise directions,

p_j = v_{j,0} + v_{j,n} + v_{j,−n},

where a term is zero unless the corresponding position is reachable in j steps (parity of j and of j ± n). Hence

p_j = [ C(j, j/2) + C(j, (j + n)/2) + C(j, (j − n)/2) ] / 2^j   (j, n both even),
p_j = C(j, j/2)/2^j   (j even, n odd),
p_j = [ C(j, (j + n)/2) + C(j, (j − n)/2) ] / 2^j   (j, n both odd),
p_j = 0   (j odd, n even).

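For small j the case formula can be verified by brute force: enumerate all 2^j step sequences and count those whose sum is 0 modulo n. A sketch (Python; function names are ours):

```python
from fractions import Fraction
from itertools import product
from math import comb

def p_enum(j, n):
    """Exact P(cyclic walk of period n is at O after j steps) by enumeration."""
    hits = sum(1 for steps in product((-1, 1), repeat=j) if sum(steps) % n == 0)
    return Fraction(hits, 2 ** j)

def p_formula(j, n):
    """Sum v_{j,0} + v_{j,n} + v_{j,-n}, valid for n <= j < 2n."""
    total = 0
    for x in (0, n, -n):
        if abs(x) <= j and (j + x) % 2 == 0:
            total += comb(j, (j + x) // 2)
    return Fraction(total, 2 ** j)
```

Python's `%` maps negative sums into 0..n−1, so `sum(steps) % n == 0` catches positions 0 and ±n alike.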
3.18. An unrestricted random walk with parameters p and q starts from the origin, and lasts for 50 paces. Estimate the probability that the walk ends at 12 or more paces from the origin in the cases: (a) p = q = 1/2; (b) p = 0.6, q = 0.4. Consult Section 3.2.

From (3.2),

Z_n = [X_n − n(p − q)]/√(4npq) ≈ N(0, 1),

where X_n is the random variable of the position of the random walk at step n. Since X_50 is discrete we use the continuity correction

P(−11 ≤ X_50 ≤ 11) ≈ P(−11.5 < X_50 < 11.5).

(a) Symmetric random walk: p = q = 1/2. Then

−1.626 = −11.5/√50 < Z_50 = X_50/√50 < 11.5/√50 = 1.626.

Hence

P(−1.626 < Z_50 < 1.626) = Φ(1.626) − Φ(−1.626) = 2Φ(1.626) − 1 = 0.896.

Therefore the probability that the final position is 12 or more paces from the origin is approximately 1 − 0.896 = 0.104.

(b) p = 0.6, q = 0.4. The bounds on Z_50 are given by

−3.103 = (−11.5 − 10)/√48 < Z_50 = (X_50 − 10)/√48 < (11.5 − 10)/√48 = 0.217.

Hence

P(−3.103 < Z_50 < 0.217) = Φ(0.217) − Φ(−3.103) = 0.585.

The probability that the final position is 12 or more paces from the origin is approximately 1 − 0.585 = 0.415.

3.19. In an unrestricted random walk with parameters p and q, for what value of p are the mean and variance of the probability distribution of the position of the walk at stage n the same?

From Section 3.2 the mean and variance of X_n, the random variable of the position of the walk at step n, are given by

E(X_n) = n(p − q),   V(X_n) = 4npq,

where q = 1 − p. The mean and variance are equal if 2p − 1 = 4p(1 − p), that is, if

4p^2 − 2p − 1 = 0.

The required probability is p = (1 + √5)/4.

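A quick numerical confirmation of the root found in 3.19 (Python sketch; the step count n cancels and is chosen arbitrarily):

```python
from math import isclose, sqrt

p = (1 + sqrt(5)) / 4        # positive root of 4p^2 - 2p - 1 = 0, in (0, 1)
q = 1 - p
n = 50                       # any n works: both sides scale linearly in n
mean = n * (p - q)
var = 4 * n * p * q
```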
3.20. Two walkers each perform symmetric random walks with synchronized steps, both starting from the origin at the same time. What is the probability that they are both at the origin at step n?

If A and B are the walkers, then the probability a_n that A is at the origin is given by

a_n = C(n, n/2)/2^n   (n even),   a_n = 0   (n odd).

The probability b_n for B is given by the same formula. They can only both visit the origin if n is even, in which case the probability that they are both there is

a_n b_n = [C(n, n/2)]^2 / 2^{2n}.

3.21. A random walk takes place on a two-dimensional lattice as shown in Fig. 3.3. In the example shown the walk starts at (0, 0) and ends at (2, −1) after 13 steps. In this walk direct diagonal steps are not permitted.

Figure 3.3: A two-dimensional random walk.

We are interested in the probability that the symmetric random walk, which starts at the origin, has returned there after 2n steps. Symmetry in the two-dimensional walk means that there is a probability of 1/4 that, at any position, the walk goes right, left, up, or down at the next step. The total number of different walks of length 2n which start at the origin is 4^{2n}. For the walk considered, the number of right steps (positive x direction) must equal the number of left steps, and the number of steps up (positive y direction) must equal those down. Also the number of right steps must range from 0 to n, and the corresponding steps up from n to 0. Explain why the probability that the walk returns to the origin after 2n steps is

p_{2n} = [(2n)!/4^{2n}] sum_{r=0}^{n} 1/[r!(n − r)!]^2.

Prove the two identities

(2n)!/[r!(n − r)!]^2 = C(2n, n) C(n, r)^2,
C(2n, n) = sum_{r=0}^{n} C(n, r)^2.

[Hint: compare the coefficients of x^n in (1 + x)^{2n} and [(1 + x)^n]^2.] Hence show that

p_{2n} = C(2n, n)^2 / 4^{2n}.

Calculate p_2, p_4, 1/(π p_20) and 1/(π p_40). How would you guess that p_{2n} behaves for large n?

At each intersection there are 4 possible paths. Hence there are 4^{2n} different paths of length 2n which start at the origin. For a walk which returns to the origin there must be, say, r left and r right steps, and n − r up and n − r down steps (r = 0, 1, 2, . . . , n). For fixed r, the number of ways in which r left, r right, (n − r) up and (n − r) down steps can be chosen from 2n is given by the multinomial coefficient

(2n)! / [r! r! (n − r)! (n − r)!].

For all r, the total number of such paths is

sum_{r=0}^{n} (2n)!/[r!(n − r)!]^2.

Therefore the probability that a return to the origin occurs at step 2n is

p_{2n} = [(2n)!/4^{2n}] sum_{r=0}^{n} 1/[r!(n − r)!]^2.

For example, if n = 2, then

p_4 = (4!/4^4) sum_{r=0}^{2} 1/[r!(2 − r)!]^2 = (24/256)(1/4 + 1 + 1/4) = 9/64.

For the first identity,

(2n)!/[r!(n − r)!]^2 = [(2n)!/(n! n!)] [n!/(r!(n − r)!)]^2 = C(2n, n) C(n, r)^2.

For the second identity, C(2n, n) is the coefficient of x^n in the expansion of (1 + x)^{2n} = [(1 + x)^n]^2, and the coefficient of x^n in

[C(n, 0) + C(n, 1)x + C(n, 2)x^2 + · · · + C(n, n)x^n]^2

is

C(n, 0)C(n, n) + C(n, 1)C(n, n − 1) + · · · + C(n, n)C(n, 0) = sum_{r=0}^{n} C(n, r)^2,

since C(n, n − r) = C(n, r). Hence

p_{2n} = (1/4^{2n}) C(2n, n) sum_{r=0}^{n} C(n, r)^2 = C(2n, n)^2 / 4^{2n}.

The computed values are p_2 = 1/4, p_4 = 9/64, p_20 = 0.0310 . . . and p_40 = 0.0157 . . .. Then

1/(π p_20) = 10.25,   1/(π p_40) = 20.25,

approximately, which suggests that possibly p_{2n} ∼ 1/(nπ) as n → ∞. However, this conjecture would require further investigation.

3.22. A random walk takes place on the positions {. . . , −2, −1, 0, 1, 2, . . .}. The walk starts at 0. At step n, the walker has a probability q_n of advancing one position, or a probability 1 − q_n of retreating one step (note that the probability depends on the step, not the position of the walker). Find the expected position of the walker at step n. Show that if q_n = 1/2 + r_n, (−1/2 < r_n < 1/2), and the series sum_{j=1}^∞ r_j is convergent, then the expected position of the walk will remain finite as n → ∞.

If X_n is the random variable representing the position of the walker at step n, then

P(X_{n+1} = j + 1 | X_n = j) = q_n,   P(X_{n+1} = j − 1 | X_n = j) = 1 − q_n.

If W_i is the modified Bernoulli random variable (Section 3.2), then

E(X_n) = E( sum_{i=1}^{n} W_i ) = sum_{i=1}^{n} E(W_i) = sum_{i=1}^{n} [1·q_i + (−1)(1 − q_i)] = 2 sum_{i=1}^{n} q_i − n.

Let q_n = 1/2 + r_n, (−1/2 < r_n < 1/2). Then

E(X_n) = 2 sum_{i=1}^{n} (1/2 + r_i) − n = 2 sum_{i=1}^{n} r_i.

Hence E(X_n) remains finite as n → ∞ if the series on the right is convergent.

3.23. A symmetric random walk starts at position k, where 0 < k < a, on the positions 0, 1, 2, . . . , a. As in the gambler's ruin problem, the walk stops whenever 0 or a is first reached. Show that the expected number of visits to position j, where 0 < j < k, is 2j(a − k)/a before the walk stops.

One approach to this problem is by repeated application of result (2.5) for the gambler's ruin. A walk which starts at k first reaches j before a with probability (by (2.5))

p = [(a − j) − (k − j)]/(a − j) = (a − k)/(a − j).

A walk which starts at j reaches a (and stops) before returning to j (again by (2.5)) with probability

(1/2) × 1/(a − j) = 1/[2(a − j)],

and reaches 0 before returning to j with probability

(1/2) × [j − (j − 1)]/j = 1/(2j).

Hence the probability that the walk from j stops without returning to j is

q = 1/[2(a − j)] + 1/(2j) = a/[2j(a − j)].

Given that the walk is at j, it next visits j again with probability

r = 1 − q = [2j(a − j) − a]/[2j(a − j)].

Therefore the probability that the walk, starting from k, visits j exactly m times before stopping is

h_m = p r^{m−1} q.

The expected number of visits to j is

μ = sum_{m=1}^∞ m h_m = pq sum_{m=1}^∞ m r^{m−1} = pq/(1 − r)^2,

summing the quasi-geometric series. Substituting for p, q and r (and using 1 − r = q),

μ = [a(a − k)/(2j(a − j)^2)] × [4j^2(a − j)^2/a^2] = 2j(a − k)/a.

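The answer 2j(a − k)/a can be cross-checked against the fundamental matrix N = (I − Q)^(−1) of the absorbing chain, whose (k, j) entry is the expected number of visits to j from start k. A self-contained sketch with exact fractions (Python; function name is ours):

```python
from fractions import Fraction

def expected_visits(a, k, j):
    """Expected visits to j for a symmetric walk started at k, absorbed at 0, a.

    Solves (I - Q) x = e_j for column j of N = (I - Q)^(-1), where Q is the
    chain restricted to the transient states 1, ..., a-1; returns x[k-1].
    """
    m = a - 1
    A = [[Fraction(int(r == c)) for c in range(m)] for r in range(m)]
    for r in range(m):               # subtract Q: neighbours reached w.p. 1/2
        if r - 1 >= 0:
            A[r][r - 1] -= Fraction(1, 2)
        if r + 1 < m:
            A[r][r + 1] -= Fraction(1, 2)
    b = [Fraction(int(r == j - 1)) for r in range(m)]
    for col in range(m):             # Gauss-Jordan elimination, exact arithmetic
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        scale = A[col][col]
        A[col] = [t / scale for t in A[col]]
        b[col] /= scale
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [t - f * s for t, s in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b[k - 1]
```

For 0 < j < k the result agrees exactly with 2j(a − k)/a.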

Chapter 4

Markov Chains

4.1. If T = [p_ij], (i, j = 1, 2, 3) and

p_ij = (i + j)/(6 + 3i),

show that T is a row-stochastic matrix. What is the probability that a transition between states E2 and E3 occurs at any step? If the initial probability distribution in a Markov chain is

p^(0) = [ 1/2  1/4  1/4 ],

what are the probabilities that states E1, E2 and E3 are occupied after one step? Explain why the probability that the chain finishes in state E2 is 1/3 irrespective of the number of steps.

Since p_ij = (i + j)/(6 + 3i), then

sum_{j=1}^{3} p_ij = sum_{j=1}^{3} (i + j)/(6 + 3i) = 3i/(6 + 3i) + 6/(6 + 3i) = 1,

for all i. Also 0 < p_ij < 1. Therefore T is a stochastic matrix. In full,

T = [ 2/9   1/3  4/9  ]
    [ 1/4   1/3  5/12 ]
    [ 4/15  1/3  2/5  ].

The probability that a transition from E2 to E3 occurs is p_23 = 5/12. The probabilities that the states E1, E2 and E3 are occupied after one step are given by

p^(1) = p^(0) T = [ 1/2  1/4  1/4 ] T = [ 173/720  1/3  307/720 ].

Each term in the second column of T is 1/3. By row-on-column matrix multiplication, each element in the second column of T^n is 1/3. Hence the second term in p^(0) T^n is 1/3 independently of p^(0).

4.2. If



(2)

(2)

(2)

calculate p22 , p31 and p13 .

T =

1 2 1 3 1 4

1 4 1 3 1 2

1 4 1 3 1 4



,

(2)

pij are the elements of T 2 . The matrix multiplication gives



T2 = 

1 3 13 36 17 48

19 48 13 36 17 48

49

13 48 5 18 7 24



.

Therefore

13 , 36

(2)

p22 =

(2)

p31 =

4.3. For the transition matrix T = (3)



(2)

calculate p12 , p2 and p(3) given that p(0) = formula for T n and obtain limn→∞ T n .



17 , 48 2 3 3 4

1 3 1 4



1 2

1 2

13 . 48

(2)

p13 =

 . Also find the eigenvalues of T , construct a

We require T2 =



1 3 1 4

2

2 3 3 4

=



5 18 13 48



13 18 35 48

3





1 3 1 4

13 18 35 48



=

59 216 157 576

157 216 419 576



=

or,

12λ2 − 13λ + 1 = 0.

T3 =

,

2 3 3 4

=

59 216 157 576

157 216 419 576



.

(3)

157 . Directly from T 3 , p12 = 216 The element can be read off from

p(2) = p(0) T 2 =





1 2

(2)

. namely p2 = 209 288 The vector (3)

p

(0)

=p

3

T =

1 2



1 2



1 2





5 18 13 48



79 209 , 288 288



943 3456



,

2513 3456



The eigenvalues of T are given by



1 3

−λ 1 4

The eigenvalues are λ1 = 1, λ2 =

3 4

1 . 12

= 0, −λ 2 3

Corresponding eigenvectors are given by the transposes

r1 = The matrix C is defined as



1

C= By (4.18),

1



T

r1

,

r2

r2 =





=



− 83

1

− 38 1



1 = 11



1 1

T

.

T n = CDn C −1 ,

where D is the diagonal matrix of eigenvalues. Therefore n

T =



− 38 1

1 1



1

0

0

1 12n



3 11 3 − 11

It follows that lim T

n

n→∞

1 = 11

8 11 3 11



3 3



8 8



3+ 3−

8 12n 3 12n



8 12n 3 12n

8− 8+

.

.

4.4. Sketch transition diagrams for each of the following three-state Markov chains. (a) A =

"

1 3

0 1

1 3

0 0

1 3

1 0

#

;

(b) B =

"

1 2

1 4

0

1

1 2

1 2

50

1 4

0 0

#

;

(c) C =

"

1 2

1 2

0 1

0

0

1 3

1 3

1 3

#

.

(a)

(b)

(c)

1 3

1 2

E1

E1

E1

1 3

1 4

1 4

1 3

1 2

1 2

1

E2

1 2

1

1

E3

1 2

E2

E3

1 3

1 3

E2

E3 1 3

1

Figure 4.1: Transition diagrams for Problem 4.4 The transition diagrams are shown in Figure 4.1. 4.5. Find the eigenvalues of T =

"

a c b

b a c

c b a

#

,

(a > 0, b > 0, c > 0).

Show that the eigenvalues are complex if b 6= c. (If a+b+c = 1, then T is a doubly- stochastic matrix.) Find the eigenvalues and eigenvectors in the following cases: (a) a = 21 , b = 14 , c = 41 ; (b) a = 12 , b = 18 , c = 83 . The eigenvalues are given by

or

a−λ b a−λ c b c

c b a−λ

= 0,

(a + b + c − λ)(λ2 + (b + c − 2a)λ + a2 + b2 + c2 − bc − ca − ab) = 0.

The eigenvalues are

√ 1 (2a − b − c ± i 3|b − c|). 2 (a) a = 21 , b = 14 , c = 41 . The eigenvalues are λ1 = 1, λ2 = λ3 = 14 . The eigenvectors are λ1 = a + b + c,

r1 =

"

1 1 1

#

λ2,3 =

,

r2 =

"

−1 1 0

#

"

−1 0 1

,

r3 =

λ3 =

√ 3 1 −i , 4 8

#

.

(b) a = 41 , b = 18 , c = 83 . The eigenvalues are λ1 = 1,

λ2 =

√ 3 1 +i , 4 8

The eigenvectors are r1 =

"

1 1 1

#

,







− 21 − i √23 r2 =  − 1 + i 3  , 2 2 1

51







− 21 + i √23 r3 =  − 1 − i 3  . 2 2 1

4.6. Find the eigenvalues, eigenvectors, the matrix of eigenvectors C, its inverse C −1 , a formula for T n and limn→∞ T n for each of the following transition matrices; (a)   1 8 1 2

T =

(b)

"

T =

7 8 1 2

1 8 3 8 5 8

1 2 1 4 1 4

;

#

3 8 3 8 1 8

.

(a) The eigenvalues are λ1 = 1, λ2 = − 38 . The corresponding eigenvectors are r1 =



1 1



r2 =



− 47



C −1 =



− 74



1 0

n 

,



− 47 1

.

The matrix C and its inverse are given by C= The matrix T n is given by T

n



r1

n

=

CD C

=



1 11



r2

=



=



−1

1 1

1

1 1

1

4 + 7(− 38 )n 4 − (− 83 )n

,

0 − 38



7 − 7(− 38 )n 7 + (− 83 )n



4 11 4 − 11

1 11

4 11 4 − 11





7 11 4 11

.

7 11 4 11

4

7

4

7





as n → ∞. (b) The eigenvalues are λ1 = − 41 , λ2 = 41 , λ3 = 1, and the corresponding eigenvectors are



r1 = 



− 73

− 73

,

1

The matrix C is given by

C=



r1

r2 =

r2

r3

"



Then Tn

=

=



− 73

CDn C −1 =  − 37

  

1 3

+ 13 21−2n

n 1 − 43 3 −n 1 − 43 3

1

−2 1

1

1

"

1 

1

−2 1 1

#



− 73

−2

1

1

1

1

,

r3 =

=  − 37 (− 41 )n 0 0

1

0 ( 41 )n 0

11 + 35 (−1)n 2−1−2n − 13 21−2n 30 −n 11 + 35 (−1)n 2−1−2n + 4 3 30 −n 11 − 75 (−1)n 2−1−2n + 4 3 30

so that



lim T n = 

t→∞

1 3 1 3 1 3

52

11 30 11 30 11 30

3 10 3 10 3 10



.

"

1 1 1

#



1 

0 0 1 3 10 3 10 3 10

# 0  − 13 1 3

7 − 10

7 10

1 1 3 11 3 30 10 n −1−2n

− 35 (−1) 2

 

 

− 35 (−1)n 2−1−2n  , + 75 (−1)n 2−1−2n

4.7. The weather in a certain region can be characterized as being sunny(S), cloudy(C) or rainy(R) on any particular day. The probability of any type of weather on one day depends only on the state of the weather on the previous day. For example, if it is sunny one day then sun or clouds are equally likely on the next day with no possibility of rain. Explain what other day-to-day possibilities are if the weather is represented by the transition matrix. S C R 1 1 S 0 2 2 T = 1 1 1 C 2 4 4 1 1 R 0 2 2 Find the eigenvalues of T and a formula for T n . In the long run what percentage of the days are sunny, cloudy and rainy? The eigenvalues of T are given by

Let λ1 = − 14 , λ2 =

1 2



1 2

−λ

1 4

1 2

0

1 1 = − (4λ + 1)(2λ − 1)(λ − 1) = 0. 4 8 −λ

1 2

0

−λ 1 2

1 2

and λ3 = 1. The coresponding eigenvectors are





1

r1 =  − 23  , 1

r2 =

"

− 21 0 1

#



1

,

r3 =

"

1 1 1

#

The matrix of eigenvectors C is given by

C=



r1

r2



r3

− 12

=  − 23

1

0

1

1 

1

1

0 ( 12 )n 0

0 0 1

If D is the diagonal matrix of eigenvalues, then by (4.18)

T

n

=

As n → ∞ n

n

CD C



T →

1 − 32 1

−1



1

=  − 32

− 21 0

1

1

"

1 

1

0 0 0

"

1

0

1

1 

1

− 21

1

0 0 0

#

0 0 1



(− 41 )n 0 0

4 15 − 32 2 5

− 52

2 15 2 3 1 5

0 2 5





# 4 15  − 32

= 1 5

2 5

"

2 2 2

− 25 0 2 5

2 2 2



2 15 2 3 1 5

1 1 1

#



.

In the long run 40% of the days are of the days are sunny, 40% are cloudy and 20% are rainy. 4.8. The eigenvalue method of Section 4.4 for finding general powers of stochastic matrices is only guaranteed to work if the eigenvalues are distinct. Several possibilities occur if the stochastic matrix of a Markov chain has a repeated eigenvalue. The following three examples illustrate these possibilities. (a) Let T =

"

1 4

1 4

1 2

1

0

0

1 2

1 4

1 4

#

be the transition matrix of a three-state Markov chain. Show that T has the repeated eigenvalue λ1 = λ2 = − 41 and λ3 = 1, and two distinct eigenvectors r1 =

"

1 −4 1

#

r3 =

53

"

1 1 1

#

.

In this case diagonalization of T is not possible. However it is possible to find a non-singular matrix C such that T = CJC −1 , where J is the Jordan decomposition matrix given by J=

"

λ1 0 0

1 λ1 0

#

0 0 1



C= and r2 satisfies

=

r1

"

− 41 0 0

r2

1 − 41 0



r3

0 0 1

#

,

,

(T − λ1 I3 )r2 = r1 .

Show that we can choose

r2 =

"

−10 24 0

#

.

Find a formula for J n and confirm that, as n → ∞,



Tn → 

12 25 12 25 12 25

1 5 1 5 1 5

8 25 8 25 8 25

(b) A four-state Markov chain has the transition matrix



1 3 4



S=

0 0

0 0

0

1 4

0 0

. 

0 0 

1 4

0



3 4

1

.

Sketch the transition diagram for the chain, and note that the chain has two absorbing states and is therefore not a regular chain. Show that the eigenvalues of S are − 41 , 14 and 1 repeated. Show that there are four distinct eigenvectors. Choose the diagonalizing matrix C as



0  −1 C= 1 0

0 1 1 0



−4 −3 0 1

−5 4  . 1  0

Find its inverse, and show that, as n → ∞,



Sn → 



1

0 0 0 0

4 5 1 5

0

0 0 0 0



0 1 5 4 5

 .

1

Note that since the rows are not the same this chain does not have an invariant distribution: this is caused by the presence of two absorbing states. (c) Show that the transition matrix " 1 # 0 21 2 1 1 1 U= 6 3 2 1 5 0 6 6 has a repeated eigenvalue, but that, in this case, three independent eigenvectors can be associated with U . Find a diagonalizing matrix C, and find a formula for U n using U n = CDn C −1 , where D=

"

1 3

0 1 3

0 0

0

54

0 0 1

#

.

Confirm also that this chain has an invariant distribution. (a) The eigenvalues are given by



1 4

1 2 0 = − (λ − 1)(1 + 4λ) 16 −λ

1 4

−λ 1

1 2

−λ 1 4

1 2

1 4

Hence they are λ1 = − 41 (repeated) and λ2 = 1. This leads to the two eigenvectors r1 =

"

#

1 −4 1

,

r3 =

"

1 1 1

#

.

#

The Jordan decomposition matrix is given by

"

J=

− 41 0 0

Let r2 satisfy [T − λ1 I3 ]r2 = r1

1 − 14 0



or

0 0 1 1 4 1 4 1 4

1 2

 1 1 2



1 2





1

0  r2 =  −4  . 1 2

1

The solution for the linear equations for the components of r2 gives r2 = The matrix C is defined in the usual way as C=



r1



r2

Its computed inverse is

−10

r3





=

24

0

"

1 −4 1

12 − 25

− 51

12 25

1 5

1 C −1 =  − 10

T

−10 24 0 17 25 1 10 8 25

0

If T = CJC −1 , then T n = CJ n C −1 , where



− 41

Jn = 

As n → ∞, Tn



=

0 0



1



12 25 12 25 12 25

1 − 14



1

−10

1

24

1

0 1 5 1 5 1 5



n(− 14 )n−1

0

(− 41 )n

0

0



1 5

1 0   − 10

1

1

12 25

0



0 .

− 51

1  0

0

0

12 − 25

0

0

.

.

0



#



0

1

8 25 8 25 8 25



1 1 1

(− 14 )n

0  =

0

 −4

n

0

.

0

17 25 1 10 8 25

 

.

(b) The transition diagram is shown in Figure 4.2. The eigenvectors are given by

1−λ 3 4 0 0

0 −λ 1 4

0

0 1 4

−λ 0

1 2 = (λ − 1) (4λ − 1)(4λ + 1) = 0. 3 16 4 1−λ 0 0

55

3 4

E1

E2

1 1 4

1 4

E3

1

E4

3 4

Figure 4.2: Transition diagram for Problem 4.8(b). The eigenvalues are λ1 = − 41 , λ2 = 41 , λ3 = 1 (repeated). The eigenvectors for λ1 and λ2 are r1 =



0

−1

1

T

0

Let the eigenvector for the repeated λ3 be r3 = where the constants a, b, c, d satisfy 3 a 4



,

a

r2 =

b

− b + 14 c = 0,

c

d

1 b 4



0

T

,

1

1

T

0

.

− c + 43 d = 0.

We can express the solution in the form c = −3a + 4b, Hence the eigenvector is

d = 12a − 15b.





a b   , r= −3a + 4b  −4a + 5b

which contains two arbitrary constants a and b. In this case of a repeated eigenvalue, two eigenvectors can be defined using different pairs of values for r3 =



−4

−3

0

1

The matrix C and its inverse are



0  −1 C= 1 0

0 1 1 0

−4 −3 0 1

The matrix power of T is given by Tn

= =



CDn C −1  0 1  −1  1 10 0

   

1 4 5 1 5

0

0 1 1 0

0 0

0 0

0 0

0 0

−4 −3 0 1 0 1 5 4 5

1





5 4  , 1  0

T

,

C −1



(− 41 )n 5 4  0 1  0 0 0

   56

r4 =



5



4

3 1  −5 =  0 10 2

0 ( 14 )n 0 0

0 0 1 0

1

0

−5 5 0 0

5 5 0 0



T

0 3 0   −5 0  0 2 1

.



−3 −5  . 1  8

−5 5 0 0

5 5 0 0



−3 −5  1  8

as n → ∞ (c) The eigenvalues are given by



1 2

−λ 1 6 1 6

1 3

Hence the eigenvalues are λ1 = obtain the eigenvector

1 3

0 −λ 0

1 2 = − (λ − 1)(3λ − 1) . 9 −λ 1 2 1 2

5 6

(repeated) and λ3 = 1. Corresponding to the eigenvalue λ1 , we can

r1 =

"

−3b a b

#

"

=b

#

−3 0 1

+a

"

0 1 0

#

,

where a and b are arbitrary. Two distinct eigenvectors can be obtained by putting a = 0, b = 1 and by putting a = 1, b = 0. The three eigenvectors are r1 =

"

−3 0 1

#

,

r2 =

"

0 1 0

#

,

#

"

1 1 1

0 4 0

1 −3 3

r3 =

.

The matrix C and its inverse become C=

"

−3 0 1

0 1 0

#

1 1 1

C −1

,

With D=

"

1 3

0

#

0 0 1

1 3

0 0

"

1 = 4

0

−1 −1 1

#

.

,

then U

n

=



= as n → ∞.

" "  

−3 0 1

0 1 0

−3 0 1 1 4 1 4 1 4

0 1 0 0 0 0

3 4 3 4 3 4

1 1 1

#"

( 31 )n 0 0

( 13 )n

1 1 1

#"

0 0 0

0 0 1

0 0

0 0 0

#

1 4

0 0 1

#

"

−1 −1 1

1 4

" 0 4 0

−1 −1 1 1 −3 3

0 4 0

1 −3 3

#

#



,

4.9. Miscellaneous problems on transition matrices. In each case find the eigenvalues of T , a formula for T n and the limit of T n as n → ∞. The special cases discussed in Problem 4.8 can occur. (a) T = (b) T = (c) T =

"

" "

1 2

7 32

9 32

1 2

1 4

1 4

1

1 4

0

5 12

1 4

1 4

1 2

1

1 3

1 4 3 4 1 4

0

3 16

0 1 4

57

0

0

9 16 1 4 1 2

#

#

;

;

#

;

(d)

"

T = (e)



1 4 5 12 1 2

1 4 1 3 1 4

1 1 2



T =

0 0

#

1 2 1 4 1 4

0 0 0

0 0 1

1 2

1 2

;



0 1



2 . 0  0

(a) The eigenvalues of T are λ1 = − 81 (repeated) and λ3 = 1, with the corresponding eigenvectors r1 =

"

#

1 4

−2 1

,

r2 =

"

#

1 1 1

.

The Jordan decomposition matrix J is required, where

"

− 81 0 0

or



J= and r2 is given by [T − λ1 I3 ]r2 = r1

1 − 18 0 5 8

7 32 1 8 1 4

 1

The remaining eigenvector is r2 =

1 2

"

0 0 1

−3 8 4 3

#

,

9 32





1 4



0  r2 =  −2  3 8

#

1

.

The matrix C and its inverse are given by



Then

1 4

−3

C =  −2



1 4

T n =  −2 1

8 4 3

1

−3



1

1

Matrix multiplcation gives

1

0

(− 81 )n

0

lim T n = 



− 10 27

1 2 9 2 9 2 9

 

13 − 54 1 24 5 27

0   − 16

5 27 5 27 5 27

16 27 16 27 16 27

11 18 1 8 2 9

1 24 5 27

16 27

0

0



n→∞

13 − 54

10 − 27

C −1 =  − 61

(− 18 )n

1 

4 3



1 ,



1

8

1

16 27

11 18 1 8 2 9

 



.

(b) The eigenvalues of T are λ1 = − 41 , λ2 = − 61 , and λ3 = 1, which are all different, so that the calculations are straightforward. The eigenvectors are given by



1





r1 =  −4  , 1

r2 = 

The matrix C and its inverse are



1

C =  −4 1

5 12 − 25

1

1



1 ,

1

5 12 − 25

1



, 

C −1 = 

58



1



r3 =  1  6 5 − 12 7 18 35

− 51 0 1 5

1

−1 12 7 2 7

 

Finally



5 12 − 52

1

lim T n =  −4

n→∞

1



1

1

0

0

1  0

0

0 

1

0

0

1

(c) The eigenvalues are given by



3 16

1 4 3 4 1 4



0

9 16 1 4 1 2

0 1 4

− 51

6 5 − 12 7 18 35

0 1 5



−1 12 7 2 7



1 5 1 5 1 5

18 35 18 35 18 35

=

2 7 2 7 2 7



.

1 2 = − (λ − 1)(32λ + 8λ + 1). 32

Hence the eigenvalues are λ1 = − 18 (1 + i), λ2 = − 81 (1 − i), and λ3 = 1. This stochastic matrix has complex eigenvalues. The corresponding eigenvectors are



3 − 26 +

31 r1 =  − 13 − 1



15 i 26 14 i 13



31 r2 =  − 13 + 1

,

The diagonal matrix of eigenvalues is D=

"

3 − − 26

− 81 (1 + i) 0 0



15 i 26 14 i 13

0 − 18 (1

0

r3 =

 0 0 1

− i)

#

"

1 1 1

#

.

.

After some algebra (easily computed using software)



lim T n = 

n→∞

(d) The eigenvalues are given by λ1 = − 14 , λ2 =



r1 = 

− 11 9 4 9

1



15 82 15 82 15 82

14 41 14 41 14 41 1 , 12





39 82 39 82 39 82

.

and λ3 = 1. The corresponding eigenvectors are 1



r2 =  − 38  1

,

r3 =

"

1 1 1

#

.

0 0 1

#

.

The matrix C and the diagonal matrix D are given by C=

"

− 11 9 4 9

1

1 − 38 1

1 1 1

It follows that

#

,

D=



21 55 21 55 21 55

lim T n = 

n→∞

"

3 11 3 11 3 11

− 41 0 0

0 1 12

0



19 55 19 55 19 55

.

(e) The eigenvalues of T are λ1 = − 12 , λ1 = 21 , λ3 = 1 (repeated). There is a repeated eigenvalue but we can still find four eigenvectors given by







0  −1  , r1 =  0  1







0  1  r2 =  , 0  1

3  2  r3 =  , 0  1









−2  −1  r4 =  1  0

The matrix C and its inverse can be compiled from these eigenvectors:



0  −1 C= 0 1

0 1 0 1

3 2 0 1

−2 −1  , 1  0

C −1 = 



59

1 6 − 12 1 3

0

− 12 1 2

0 0

− 61 − 21 2 3

1

1 2 1 2

 

. 0  0

The diagonal matrix D in this case is given by



− 12  0 D= 0 0

Hence Tn

=

CDn C −1 0 0  −1 1  0 0 1 1

=







=



0

 −1  0

0 1 0 1

1

1

 32  0 1 3

3 2 0 1 3 2 0 1

0 0 0 0

0 1 3

1 2 3

0 1 2

0 0



(− 12 )n −2 0 −1   1  0 0 0



0 −2 −1   0 1  0 0 0



0 0 1 0

0 0  . 0  1

0 ( 12 )n 0 0

0 0 0 0



0 0 1 0

0 0  0  1



0 0  . 0  0



0 0 1 0

1 0 6 0   − 21   1 0 3 1 0

− 21 1 2

1 6 − 12 1 3

0 0

0

− 21

− 61 − 21

1 2

2 3

0 0

− 61 − 21 2 3 1

1 1 2 1 2



1 2 1 2

 

0  0



0  0

4.10. A four-state Markov chain has the transition matrix
\[
T=\begin{pmatrix} \frac12 & \frac12 & 0 & 0\\ 1 & 0 & 0 & 0\\ \frac14 & \frac12 & 0 & \frac14\\ \frac34 & 0 & \frac14 & 0 \end{pmatrix}.
\]
Find f_i, the probability that the chain returns at some step to state E_i, for each state. Determine which states are transient and which are persistent. Which states form a closed subset? Find the eigenvalues of T, and the limiting behaviour of T^n as n → ∞.

The transition diagram for the chain is shown in Figure 4.3. For each state, the probability that a first return occurs is as follows, using the diagram:

State E_1: f_1^{(1)} = 1/2, f_1^{(2)} = 1/2, f_1^{(n)} = 0 (n ≥ 3);

Figure 4.3: Transition diagram for Problem 4.10.

State E_2: f_2^{(1)} = 0, f_2^{(n)} = 1/2^{n-1} (n ≥ 2);
State E_3: f_3^{(1)} = 0, f_3^{(2)} = 1/4^2 = 1/16, f_3^{(n)} = 0 (n ≥ 3);
State E_4: f_4^{(n)} = f_3^{(n)} for all n.

The probability of a return at some step is
\[
f_i=\sum_{n=1}^{\infty} f_i^{(n)}
\]
for each state. Therefore
\[
f_1=\tfrac12+\tfrac12=1,\qquad f_2=\tfrac12+\tfrac1{2^2}+\tfrac1{2^3}+\cdots=1,\qquad f_3=f_4=\tfrac1{16},
\]
summing a geometric series for f_2. Hence E_1 and E_2 are persistent states, but E_3 and E_4 are transient. The persistent states E_1 and E_2 form a closed subset.

The eigenvalues of T are λ_1 = -1/2, λ_2 = -1/4, λ_3 = 1/4 and λ_4 = 1. The corresponding eigenvectors are
\[
r_1=\begin{pmatrix} -\frac13\\ \frac23\\ -1\\ 1 \end{pmatrix},\quad
r_2=\begin{pmatrix} 0\\ 0\\ -1\\ 1 \end{pmatrix},\quad
r_3=\begin{pmatrix} 0\\ 0\\ 1\\ 1 \end{pmatrix},\quad
r_4=\begin{pmatrix} 1\\ 1\\ 1\\ 1 \end{pmatrix}.
\]
Therefore D, C and its inverse are given by
\[
D=\mathrm{diag}(-\tfrac12,-\tfrac14,\tfrac14,1),\qquad
C=\begin{pmatrix} -\frac13 & 0 & 0 & 1\\ \frac23 & 0 & 0 & 1\\ -1 & -1 & 1 & 1\\ 1 & 1 & 1 & 1 \end{pmatrix},\qquad
C^{-1}=\begin{pmatrix} -1 & 1 & 0 & 0\\ 1 & -1 & -\frac12 & \frac12\\ -\frac23 & -\frac13 & \frac12 & \frac12\\ \frac23 & \frac13 & 0 & 0 \end{pmatrix}.
\]
Hence
\[
\lim_{n\to\infty}T^n=C\,\mathrm{diag}(0,0,0,1)\,C^{-1}
=\begin{pmatrix} \frac23 & \frac13 & 0 & 0\\ \frac23 & \frac13 & 0 & 0\\ \frac23 & \frac13 & 0 & 0\\ \frac23 & \frac13 & 0 & 0 \end{pmatrix},
\]
so every row tends to the invariant distribution (2/3, 1/3) on the closed subset {E_1, E_2}.

4.11. A six-state Markov chain has the transition matrix
\[
T=\begin{pmatrix}
\frac14 & \frac12 & 0 & 0 & 0 & \frac14\\
0 & 0 & 0 & 0 & 0 & 1\\
0 & \frac14 & 0 & \frac14 & \frac12 & 0\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & \frac12 & \frac12 & 0\\
0 & 0 & 1 & 0 & 0 & 0
\end{pmatrix}.
\]
Sketch its transition diagram. From the diagram which states do you think are transient and which do you think are persistent? Which states form a closed subset? Determine the invariant distribution in the subset.

Intuitively E_1, E_2, E_3 and E_6 are transient, since paths can always escape through E_3 and not return.

Figure 4.4: Transition diagram for Problem 4.11.

For state E_4, the probabilities of first returns are
\[
f_4^{(1)}=0,\qquad f_4^{(n)}=\frac{1}{2^{n-1}},\quad (n=2,3,4,\ldots).
\]
It follows that the probability that a return to E_4 occurs at some step is
\[
f_4=\sum_{n=1}^{\infty}f_4^{(n)}=\sum_{n=2}^{\infty}\frac{1}{2^{n-1}}=1.
\]
Hence E_4 is persistent. For E_5,
\[
f_5^{(1)}=\tfrac12,\qquad f_5^{(2)}=\tfrac12,\qquad f_5^{(n)}=0,\quad (n\geq3),
\]
so that f_5 = 1/2 + 1/2 = 1. Hence E_5 is also persistent. The states E_4, E_5 form a closed subset since no escape paths occur. The subset has the transition matrix
\[
S=\begin{pmatrix} 0 & 1\\ \frac12 & \frac12 \end{pmatrix}.
\]
In the notation of Section 4.3, α = 1 and β = 1/2. Hence the invariant distribution is
\[
p=\begin{pmatrix} \frac13 & \frac23 \end{pmatrix}.
\]
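The invariant distribution on the closed subset can be confirmed exactly. A minimal check (our addition, not part of the manual):

```python
from fractions import Fraction as F

# Closed-subset transition matrix S of Problem 4.11 (states E4, E5).
S = [[F(0),    F(1)],
     [F(1, 2), F(1, 2)]]

p = [F(1, 3), F(2, 3)]          # claimed invariant distribution

# Left-multiplying S by p should reproduce p exactly.
pS = [p[0] * S[0][j] + p[1] * S[1][j] for j in range(2)]
print(pS == p)                   # True
```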

4.12. Draw the transition diagram for the seven-state Markov chain with transition matrix
\[
T=\begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0\\
\frac12 & 0 & 0 & \frac12 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0\\
\frac12 & 0 & 0 & 0 & 0 & 0 & \frac12\\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
Hence discuss the periodicity of the states of the chain. From the transition diagram calculate p_{11}^{(n)} and p_{44}^{(n)} for n = 2, 3, 4, 5, 6. (In this example you should confirm that p_{11}^{(3)} = 1/2 but that p_{44}^{(3)} = 0: however, p_{44}^{(3n)} ≠ 0 for n = 2, 3, ..., confirming that state E_4 is periodic with period 3.)

Consider state E_1. Returns to E_1 can occur in the sequence E_1E_2E_3E_1, which takes 3 steps, or as E_1E_2E_3E_4E_5E_6E_1, which takes 6 steps, a multiple of 3. Hence returns to E_1 can only occur at steps 3, 6, 9, ...: hence E_1 has period 3. Similarly E_2 and E_3 also have period 3. On the other hand, for E_4 returns are possible only at steps 6, 9, 12, ..., but it still has period 3. The same is true of states E_5 and E_6.

Figure 4.5: Transition diagram for Problem 4.12. E_7 is an absorbing state.

4.13. The transition matrix of a 3-state Markov chain is given by
\[
T=\begin{pmatrix} 0 & \frac34 & \frac14\\ \frac12 & 0 & \frac12\\ \frac34 & \frac14 & 0 \end{pmatrix}.
\]
Show that S = T^2 is the transition matrix of a regular chain. Find its eigenvectors and confirm that S has an invariant distribution given by (14/37, 13/37, 10/37) for even steps in the chain.

The matrix S is given by
\[
S=T^2=\begin{pmatrix} \frac9{16} & \frac1{16} & \frac38\\ \frac38 & \frac12 & \frac18\\ \frac18 & \frac9{16} & \frac5{16} \end{pmatrix},
\]
which is regular since all its elements are non-zero.

Figure 4.6: Transition diagram for Problem 4.13.

The eigenvalues of S are given by λ_1 = 3/16 - (1/4)i, λ_2 = 3/16 + (1/4)i, λ_3 = 1, with corresponding eigenvectors
\[
r_1=\begin{pmatrix} -\frac{16}{25}+\frac{13}{25}i\\ -\frac{2}{25}-\frac{14}{25}i\\ 1 \end{pmatrix},\qquad
r_2=\begin{pmatrix} -\frac{16}{25}-\frac{13}{25}i\\ -\frac{2}{25}+\frac{14}{25}i\\ 1 \end{pmatrix},\qquad
r_3=\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}.
\]
The matrix C is given by the matrix of eigenvectors, namely
\[
C=\begin{pmatrix}
-\frac{16}{25}+\frac{13}{25}i & -\frac{16}{25}-\frac{13}{25}i & 1\\
-\frac{2}{25}-\frac{14}{25}i & -\frac{2}{25}+\frac{14}{25}i & 1\\
1 & 1 & 1
\end{pmatrix}.
\]
Let
\[
D=\begin{pmatrix} \frac3{16}-\frac14 i & 0 & 0\\ 0 & \frac3{16}+\frac14 i & 0\\ 0 & 0 & 1 \end{pmatrix}.
\]
Since |λ_1| = |λ_2| = 5/16 < 1, D^n → diag(0, 0, 1) as n → ∞. Finally (computer algebra simplifies the working)
\[
\lim_{n\to\infty}S^n=C\Big(\lim_{n\to\infty}D^n\Big)C^{-1}
=\begin{pmatrix} \frac{14}{37} & \frac{13}{37} & \frac{10}{37}\\ \frac{14}{37} & \frac{13}{37} & \frac{10}{37}\\ \frac{14}{37} & \frac{13}{37} & \frac{10}{37} \end{pmatrix},
\]
which gives the limiting distribution.

4.14. An insect is placed in the maze of cells shown in Figure 4.11. The state E_j is the state in which the insect is in cell j. A transition occurs when the insect moves from one cell to another. Assuming that exits are equally likely to be chosen where there is a choice, construct the transition matrix T for the Markov chain representing the movements of the insect. Show that all states are periodic with period 2. Show that T^2 has two subchains which are both regular. Find the invariant distributions of both subchains. Interpret the results.

If the insect starts in any cell (state), then it can only return to that cell after an even number of steps. Hence all states are periodic with period 2.

Figure 4.7: Transition diagram for Problem 4.14, and the maze.

The matrix
\[
S=T^2=\begin{pmatrix}
\frac12 & \frac14 & 0 & 0 & \frac14\\
\frac12 & \frac12 & 0 & 0 & 0\\
0 & 0 & \frac34 & \frac14 & 0\\
0 & 0 & \frac14 & \frac34 & 0\\
\frac12 & 0 & 0 & 0 & \frac12
\end{pmatrix}
\]
has two subchains corresponding to E_1, E_2, E_5 and to E_3, E_4: this follows since the zeros in columns 3 and 4, and the zeros in rows 3 and 4, remain for all powers of S. The subchains have the transition matrices
\[
T_1=\begin{pmatrix} \frac12 & \frac14 & \frac14\\ \frac12 & \frac12 & 0\\ \frac12 & 0 & \frac12 \end{pmatrix},\qquad
T_2=\begin{pmatrix} \frac34 & \frac14\\ \frac14 & \frac34 \end{pmatrix}.
\]
The eigenvalues of T_1 are λ_1 = 0, λ_2 = 1/2, λ_3 = 1, with the corresponding eigenvectors
\[
r_1=\begin{pmatrix} -1\\ 1\\ 1 \end{pmatrix},\qquad
r_2=\begin{pmatrix} 0\\ -1\\ 1 \end{pmatrix},\qquad
r_3=\begin{pmatrix} 1\\ 1\\ 1 \end{pmatrix}.
\]
The matrix C_1 and its inverse, and the diagonal matrix D_1, are given by
\[
C_1=\begin{pmatrix} -1 & 0 & 1\\ 1 & -1 & 1\\ 1 & 1 & 1 \end{pmatrix},\qquad
C_1^{-1}=\begin{pmatrix} -\frac12 & \frac14 & \frac14\\ 0 & -\frac12 & \frac12\\ \frac12 & \frac14 & \frac14 \end{pmatrix},\qquad
D_1=\begin{pmatrix} 0 & 0 & 0\\ 0 & \frac12 & 0\\ 0 & 0 & 1 \end{pmatrix}.
\]
Therefore
\[
T_1^n=C_1D_1^nC_1^{-1}\rightarrow C_1\,\mathrm{diag}(0,0,1)\,C_1^{-1}
=\begin{pmatrix} \frac12 & \frac14 & \frac14\\ \frac12 & \frac14 & \frac14\\ \frac12 & \frac14 & \frac14 \end{pmatrix}
\]
as n → ∞. The eigenvalues of T_2 are λ_1 = 1/2, λ_2 = 1, and the corresponding eigenvectors are
\[
r_1=\begin{pmatrix} -1\\ 1 \end{pmatrix},\qquad r_2=\begin{pmatrix} 1\\ 1 \end{pmatrix}.
\]
The matrix C_2 and its inverse, and the diagonal matrix D_2, are given by
\[
C_2=\begin{pmatrix} -1 & 1\\ 1 & 1 \end{pmatrix},\qquad
C_2^{-1}=\begin{pmatrix} -\frac12 & \frac12\\ \frac12 & \frac12 \end{pmatrix},\qquad
D_2=\begin{pmatrix} \frac12 & 0\\ 0 & 1 \end{pmatrix}.
\]
Hence
\[
T_2^n=C_2D_2^nC_2^{-1}\rightarrow\begin{pmatrix} \frac12 & \frac12\\ \frac12 & \frac12 \end{pmatrix}
\]
as n → ∞. Combination of the two limiting matrices leads to
\[
\lim_{n\to\infty}S^n=\begin{pmatrix}
\frac12 & \frac14 & 0 & 0 & \frac14\\
\frac12 & \frac14 & 0 & 0 & \frac14\\
0 & 0 & \frac12 & \frac12 & 0\\
0 & 0 & \frac12 & \frac12 & 0\\
\frac12 & \frac14 & 0 & 0 & \frac14
\end{pmatrix}.
\]
Whichever parity class the insect starts in, over even steps its position settles to the corresponding invariant distribution: (1/2, 1/4, 1/4) over E_1, E_2, E_5, or (1/2, 1/2) over E_3, E_4.

4.15. The transition matrix of a four-state Markov chain is given by
\[
T=\begin{pmatrix}
1-a & a & 0 & 0\\
1-b & 0 & b & 0\\
1-c & 0 & 0 & c\\
1 & 0 & 0 & 0
\end{pmatrix},\qquad (0<a,b,c<1).
\]
Draw a transition diagram, and, from the diagram, calculate f_1^{(n)}, (n = 1, 2, ...), the probability that a first return to state E_1 occurs at the n-th step. Calculate also the mean recurrence time µ_1. What type of state is E_1?

The first return probabilities for state E_1 are
\[
f_1^{(1)}=1-a,\quad f_1^{(2)}=a(1-b),\quad f_1^{(3)}=ab(1-c),\quad f_1^{(4)}=abc,\quad f_1^{(n)}=0,\quad (n\geq5).
\]
Hence
\[
f_1=\sum_{n=1}^{\infty}f_1^{(n)}=1-a+a(1-b)+ab(1-c)+abc=1,
\]
which implies that E_1 is persistent. Also, the mean recurrence time is
\[
\mu_1=\sum_{n=1}^{\infty}nf_1^{(n)}=(1-a)+2a(1-b)+3ab(1-c)+4abc=1+a+ab+abc.
\]
Hence µ_1 is finite, so that E_1 is non-null. It is also aperiodic, so that E_1 is an ergodic state.
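The two closed forms above can be checked numerically for any admissible parameters. A small sketch (our addition; the parameter values are arbitrary choices in (0, 1)):

```python
# First-return probabilities for state E1 in Problem 4.15.
a, b, c = 0.3, 0.6, 0.8          # sample values; any 0 < a, b, c < 1 will do

f = [1 - a, a*(1 - b), a*b*(1 - c), a*b*c]   # f1^(1)..f1^(4); zero afterwards

total = sum(f)
mean_recurrence = sum((n + 1) * fn for n, fn in enumerate(f))

print(total)                                  # ≈ 1: E1 is persistent
print(mean_recurrence, 1 + a + a*b + a*b*c)   # the two expressions agree
```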

Figure 4.8: Transition diagram for Problem 4.15.

4.16. Show that the transition matrix
\[
T=\begin{pmatrix}
1-a & a & 0 & 0\\
1-a & 0 & a & 0\\
1-a & 0 & 0 & a\\
1 & 0 & 0 & 0
\end{pmatrix},
\]
where 0 < a < 1, has two imaginary (conjugate) eigenvalues. If a = 1/2, confirm that T has the invariant distribution p = (8/15, 4/15, 2/15, 1/15).

The eigenvalues of T are given by λ_1 = -a, λ_2 = -ai, λ_3 = ai, λ_4 = 1, of which two are imaginary conjugates. If a = 1/2, then λ_1 = -1/2, λ_2 = -(1/2)i, λ_3 = (1/2)i, λ_4 = 1, with corresponding eigenvectors
\[
r_1=\begin{pmatrix} -\frac12\\ 1\\ -\frac12\\ 1 \end{pmatrix},\quad
r_2=\begin{pmatrix} -\frac12 i\\ -\frac12+\frac12 i\\ \frac12+i\\ 1 \end{pmatrix},\quad
r_3=\begin{pmatrix} \frac12 i\\ -\frac12-\frac12 i\\ \frac12-i\\ 1 \end{pmatrix},\quad
r_4=\begin{pmatrix} 1\\ 1\\ 1\\ 1 \end{pmatrix}.
\]
The matrix C of eigenvectors and its inverse, and the diagonal matrix D, are given by
\[
C=\begin{pmatrix}
-\frac12 & -\frac12 i & \frac12 i & 1\\
1 & -\frac12+\frac12 i & -\frac12-\frac12 i & 1\\
-\frac12 & \frac12+i & \frac12-i & 1\\
1 & 1 & 1 & 1
\end{pmatrix},\qquad
D=\begin{pmatrix} -\frac12 & 0 & 0 & 0\\ 0 & -\frac12 i & 0 & 0\\ 0 & 0 & \frac12 i & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},
\]
\[
C^{-1}=\begin{pmatrix}
-\frac13 & \frac13 & -\frac13 & \frac13\\
-\frac1{10}+\frac3{10}i & -\frac3{10}-\frac1{10}i & \frac1{10}-\frac3{10}i & \frac3{10}+\frac1{10}i\\
-\frac1{10}-\frac3{10}i & -\frac3{10}+\frac1{10}i & \frac1{10}+\frac3{10}i & \frac3{10}-\frac1{10}i\\
\frac8{15} & \frac4{15} & \frac2{15} & \frac1{15}
\end{pmatrix}.
\]
In the limit n → ∞, it follows that all the rows of CD^nC^{-1} tend to
\[
\begin{pmatrix} \frac8{15} & \frac4{15} & \frac2{15} & \frac1{15} \end{pmatrix},
\]
which is the same as the last row of C^{-1}: this is the invariant distribution p.
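The claimed invariant distribution can be confirmed exactly without the eigenvector machinery. A minimal check (our addition):

```python
from fractions import Fraction as F

# T of Problem 4.16 with a = 1/2, and the claimed invariant distribution.
a = F(1, 2)
T = [[1 - a, a,    F(0), F(0)],
     [1 - a, F(0), a,    F(0)],
     [1 - a, F(0), F(0), a],
     [F(1),  F(0), F(0), F(0)]]

p = [F(8, 15), F(4, 15), F(2, 15), F(1, 15)]

pT = [sum(p[i] * T[i][j] for i in range(4)) for j in range(4)]
print(pT == p)    # True: p T = p exactly
```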

4.17. A production line consists of two manufacturing stages. At the end of each manufacturing stage each item in the line is inspected, where there is a probability p that it will be scrapped, q that it will be sent back to that stage for reworking, and (1 - p - q) that it will be passed to the next stage or completed. The production line can be modelled by a Markov chain with four states: E_1, item scrapped; E_2, item completed; E_3, item in first manufacturing stage; E_4, item in second manufacturing stage. We define states E_1 and E_2 to be absorbing states, so that the transition matrix of the chain is
\[
T=\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
p & 0 & q & 1-p-q\\
p & 1-p-q & 0 & q
\end{pmatrix}.
\]
An item starts along the production line. What is the probability that it is completed in two stages? Calculate f_3^{(n)} and f_4^{(n)}. Assuming that 0 < p + q < 1, what kind of states are E_3 and E_4? What is the probability that an item starting along the production line is ultimately completed?

The transition diagram is shown in Figure 4.9. E_1 and E_2 are absorbing states. The initial position of the item can be represented by the vector p^{(0)} = (0, 0, 1, 0).

Figure 4.9: Transition diagram for Problem 4.17.

We require
\[
p^{(2)}=p^{(0)}T^2=\begin{pmatrix} 2p-p^2 & (1-p-q)^2 & q^2 & 2q(1-p-q) \end{pmatrix}.
\]
Therefore the probability that an item is completed in two stages is p_2^{(2)} = (1 - p - q)^2. The first return probabilities are
\[
f_3^{(1)}=q,\quad f_3^{(n)}=0\ (n\geq2);\qquad f_4^{(1)}=q,\quad f_4^{(n)}=0\ (n\geq2).
\]
Hence f_3 = f_4 = q < 1, so E_3 and E_4 are transient states. The probability that an item is completed without reworking is (1 - p - q)^2, with one reworking 2q(1 - p - q)^2, with two reworkings 3q^2(1 - p - q)^2, and with n reworkings (n + 1)q^n(1 - p - q)^2. Hence the probability that an item starting along the production line is ultimately completed is
\[
(1-p-q)^2[1+2q+3q^2+\cdots]=\frac{(1-p-q)^2}{(1-q)^2},
\]

after summing the series Σ_{n≥0}(n + 1)q^n = 1/(1 - q)^2.

4.18. The step-dependent transition matrix of Example 4.9 is
\[
T_n=\begin{pmatrix}
\frac12 & \frac12 & 0\\
0 & 0 & 1\\
\frac1{n+1} & 0 & \frac{n}{n+1}
\end{pmatrix},\qquad (n=1,2,3,\ldots).
\]
Find the mean recurrence time for state E_3, and confirm that E_3 is a persistent, non-null state.

The transition diagram is shown in Figure 4.10. Assuming that a walk starts at E_3, the probabilities of first returns to state E_3 are (using the diagram)
\[
f_3^{(1)}=\frac12,\qquad f_3^{(2)}=0,\qquad f_3^{(3)}=\frac1{1+1}\times\frac12\times1=\frac14,\quad\ldots,\quad
f_3^{(n)}=\frac1{1+1}\times\frac1{2^{n-3}}\times\frac12\times1=\frac1{2^{n-1}},\quad\ldots.
\]

Figure 4.10: Transition diagram for Problem 4.18.

Hence
\[
f_3=\sum_{n=1}^{\infty}f_3^{(n)}=\frac12+\sum_{n=3}^{\infty}\frac1{2^{n-1}}=1,
\]
using the formula for the sum of a geometric series. This means that E_3 is persistent. The mean recurrence time is given by
\[
\mu_3=\sum_{n=1}^{\infty}nf_3^{(n)}=\frac12+\sum_{n=3}^{\infty}\frac{n}{2^{n-1}}=\frac12+2=\frac52,
\]
again summing the series. Hence E_3 is persistent and non-null.
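Both series can be checked by partial summation. A short sketch (our addition), truncating the sums where the tails are negligible:

```python
# Problem 4.18: first-return distribution of E3,
# f3(1) = 1/2, f3(2) = 0, f3(n) = 1/2^(n-1) for n >= 3.
f = {1: 0.5, 2: 0.0}
for n in range(3, 200):
    f[n] = 1 / 2**(n - 1)

total = sum(f.values())
mu = sum(n * fn for n, fn in f.items())
print(total)   # ≈ 1: E3 is persistent
print(mu)      # ≈ 2.5 = 5/2: E3 is non-null
```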

4.19. In Example 4.9, a persistent, null state occurred in a chain with step-dependent transitions: such a state cannot occur in a finite chain with a constant transition matrix. However, chains over an infinite number of states can have persistent, null states. Consider the following chain, which has an infinite number of states E_1, E_2, ... with the transition probabilities
\[
p_{11}=\frac12,\qquad p_{12}=\frac12,\qquad p_{j1}=\frac1{j+1},\qquad p_{j,j+1}=\frac{j}{j+1},\qquad (j\geq2).
\]
Find the mean recurrence time for E_1, and confirm that E_1 is a persistent, null state.

Figure 4.11: Transition diagram for Problem 4.19.

From the transition diagram, the probabilities of first returns to E_1 are given by
\[
f_1^{(1)}=\frac12,\qquad f_1^{(2)}=\frac1{2\cdot3},\qquad f_1^{(3)}=\frac1{3\cdot4},\quad\ldots,\quad f_1^{(n)}=\frac1{n(n+1)},\quad\ldots.
\]
Therefore
\[
f_1=\sum_{n=1}^{\infty}f_1^{(n)}=\sum_{n=1}^{\infty}\frac1{n(n+1)}
=\lim_{N\to\infty}\sum_{n=1}^{N}\Big[\frac1n-\frac1{n+1}\Big]
=\lim_{N\to\infty}\Big[1-\frac1{N+1}\Big]=1,
\]
which implies that E_1 is persistent. However, the mean recurrence time is
\[
\mu_1=\sum_{n=1}^{\infty}nf_1^{(n)}=\sum_{n=1}^{\infty}\frac1{n+1}=\infty,
\]

the series being divergent. According to the definition, E_1 is a null state.

4.20. A random walk takes place on 1, 2, ... subject to the following rules. A jump from position i to position 1 occurs with probability q_i, and from position i to i + 1 with probability 1 - q_i, for i = 1, 2, ..., where 0 < q_i < 1. Sketch the transition diagram for the chain. Explain why, to investigate the persistence of every state, only one state, say state 1, need be considered. Show that the probability that a first return to state 1 occurs at some step is
\[
f_1=\sum_{j=1}^{\infty}\Big[\prod_{k=1}^{j-1}(1-q_k)\Big]q_j.
\]
If q_j = q (j = 1, 2, ...), show that every state is persistent.

The transition diagram is shown in Figure 4.12 with the states labelled E_1, E_2, .... The chain is irreducible since every state can be reached from every other state.

Figure 4.12: Transition diagram for Problem 4.20.

The diagram indicates that every state is aperiodic, since a return to any state can be achieved in any number of steps. Only one state need be considered, since the others have the same properties. Consider state E_1. Then, from the diagram, the probabilities of first returns are
\[
f_1^{(1)}=q_1,\quad f_1^{(2)}=(1-q_1)q_2,\quad f_1^{(3)}=(1-q_1)(1-q_2)q_3,\quad\ldots,\quad
f_1^{(j)}=\prod_{k=1}^{j-1}(1-q_k)\,q_j,\quad\ldots.
\]
Therefore
\[
f_1=\sum_{j=1}^{\infty}f_1^{(j)}=\sum_{j=1}^{\infty}\prod_{k=1}^{j-1}(1-q_k)\,q_j.
\]
If q_j = q, (j = 1, 2, ...), then
\[
f_1=\sum_{j=1}^{\infty}(1-q)^{j-1}q=\frac{q}{1-(1-q)}=\frac qq=1,
\]

using the formula for the sum of a geometric series. Hence E_1, and therefore every state, is persistent.

4.21. A Markov chain maze. Figure 4.13 shows a maze with entrance E_1, further gates E_2, E_3, E_4, E_5, E_6, and target E_7. These gates can be represented by states in a Markov chain. At each of E_1, ..., E_6 there are two possible new paths which are assumed equally likely to be chosen. The target E_7 can be considered to be an absorbing state. Construct a 7 × 7 transition matrix for the maze assuming that the walker does not return to a previous state and does not learn from previous choices: for example, at E_1 the walker can still make the mistake of walking the dead-end again. Find the probabilities that the walker reaches the target in 6, 7, 8, ... steps.

Suppose now that the walker learns from wrong choices. To accommodate this, let E_{11} be the entrance and E_{12} the return to the entrance after a wrong choice (the dead-end); let E_{21} and E_{22} have the same meaning for the second gate, and so on. Hence the probabilities are:
\[
P(E_{11}\to E_{12})=\tfrac12;\quad P(E_{11}\to E_{21})=\tfrac12;\quad P(E_{12}\to E_{21})=1;
\]
\[
P(E_{21}\to E_{22})=\tfrac12;\quad P(E_{21}\to E_{31})=\tfrac12;\quad P(E_{22}\to E_{31})=1;
\]
and similarly for the remaining probabilities. The transition matrix is now 13 × 13, with states E_{11}, E_{12}, E_{21}, E_{22}, ..., E_{61}, E_{62}, E_7. Find the probabilities that the walker reaches the centre in 6, 7, 8 steps. (A computer program is really needed to compute the matrix products.)

Figure 4.13: The maze, with entrance E_1, gates E_2–E_6 and target E_7.

With the maze rules as described, the transition matrix for walks from the entrance to the target is given by (rows and columns ordered E_1, ..., E_7)
\[
T=\begin{pmatrix}
\frac12 & \frac12 & 0 & 0 & 0 & 0 & 0\\
0 & \frac12 & \frac12 & 0 & 0 & 0 & 0\\
0 & 0 & \frac12 & \frac12 & 0 & 0 & 0\\
0 & 0 & 0 & \frac12 & \frac12 & 0 & 0\\
0 & 0 & 0 & 0 & \frac12 & \frac12 & 0\\
0 & 0 & 0 & 0 & 0 & \frac12 & \frac12\\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
The initial position of the walker can be represented by
\[
p_0=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
\]
After n steps the probabilities are given by p_n = p_0 T^n, (n ≥ 6). The probability of randomly choosing the shortest path is 1/2^6 = 1/64. Computation makes this problem easy. The first row of T^6, which is
\[
[\tfrac1{64},\ \tfrac3{32},\ \tfrac{15}{64},\ \tfrac5{16},\ \tfrac{15}{64},\ \tfrac3{32},\ \tfrac1{64}],
\]
lists the probabilities of the maze-walker being at E_1, E_2, ..., E_7 after 6 steps. It follows that
\[
p_7=p_0T^7=\begin{pmatrix} \frac1{128} & \frac7{128} & \frac{21}{128} & \frac{35}{128} & \frac{35}{128} & \frac{21}{128} & \frac1{16} \end{pmatrix}.
\]
Hence the probability that the walker reaches the centre in exactly 7 steps is
\[
\frac1{16}-\frac1{64}=\frac3{64}.
\]
[The answer is not 1/16, since we must exclude repeats at the absorbing state E_7.] A similar argument gives the probability that the walker reaches the centre in exactly 8 steps:
\[
\frac{37}{256}-\frac1{16}=\frac{21}{256}.
\]
In the second case the table is (rows and columns ordered E_{11}, E_{12}, E_{21}, E_{22}, E_{31}, E_{32}, E_{41}, E_{42}, E_{51}, E_{52}, E_{61}, E_{62}, E_7)
\[
S=\begin{pmatrix}
0 & \frac12 & \frac12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \frac12 & \frac12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \frac12 & \frac12 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac12 & \frac12 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac12 & \frac12 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac12 & \frac12\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
\]
Let the walk start from E_{11}, so that the initial vector is
\[
v_0=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
\]
Then after one step
\[
v_1=v_0S=\begin{pmatrix} 0 & \frac12 & \frac12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}:
\]
in other words, the walker has hit the dead-end and returned to the entrance (state E_{12}) with probability 1/2, or has moved on to the second gate (state E_{21}) with the same probability. For the next step
\[
v_2=v_0S^2=\begin{pmatrix} 0 & 0 & \frac12 & \frac14 & \frac14 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix},
\]
and so on. By step 12 the centre of the maze must have been reached, which is confirmed by
\[
v_{12}=v_0S^{12}=\begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.
\]

The probability that the walker reaches the centre in 6 steps is 1/64 (as before), in 7 steps is 3/64 (both as in the previous case), and in 8 steps is 15/256.

4.22. In a finite Markov chain a subset C of states is said to be closed if, for states i and j,
\[
i\in C,\quad \text{transition } i\to j \text{ possible}\ \Rightarrow\ j\in C.
\]
Find the closed subset in a chain with transition matrix
\[
T=\begin{pmatrix}
0 & 0 & \frac12 & \frac12 & 0 & 0\\
0 & 0 & \frac12 & 0 & \frac14 & \frac14\\
0 & 0 & \frac13 & \frac13 & 0 & \frac13\\
0 & 0 & 0 & \frac13 & 0 & \frac23\\
\frac12 & 0 & 0 & \frac14 & 0 & \frac14\\
0 & 0 & 1 & 0 & 0 & 0
\end{pmatrix}.
\]

The closed subset C consists of the three states E_3, E_4, E_6. A transition diagram is helpful; computing T^4 also reveals the closed subset.

4.23. Chess knight moves. A knight moves on a reduced chess board with 4 × 4 = 16 squares. The knight starts at the bottom left-hand corner and moves according to the usual chess rules. Treating moves as a Markov chain, construct a 16 × 16 transition matrix for the knight moving from any square (easier if you design a computer program for the matrix). Show that the knight returns to its starting corner after 2, 4, 6 moves (the number of moves must be even) with probabilities 1/4, 1/6, 7/54 respectively. Find the corresponding first returns. [Just a reminder: if f_{11}^{(n)} is the probability that the first return to corner (1,1) (say) occurs after n moves, and p_{11}^{(m)} is the probability that the knight is at (1,1) after m moves, then
\[
f_{11}^{(1)}=p_{11}^{(1)},\qquad
f_{11}^{(n)}=p_{11}^{(n)}-\sum_{m=1}^{n-1}f_{11}^{(m)}p_{11}^{(n-m)},\qquad (n\geq2).]
\]

The chessboard is shown in Figure 4.14, with the squares labelled E_{11}, E_{12}, ..., E_{44} row by row from the bottom left. The knight starts in E_{11}.

Figure 4.14: The 4 × 4 chessboard with labelled squares E_{11}, ..., E_{44}.

It is helpful to itemize possible return paths to E_{11}. For example, a path with four transitions is
\[
E_{11}\to E_{23}\to E_{44}\to E_{32}\to E_{11}.
\]
Assuming that all transitions from a square are equally likely, the transition E_{11} → E_{23} occurs with probability 1/2, and E_{23} → E_{44} with probability 1/4, since there are 4 possible moves from E_{23}. The probability that a return on this path occurs is
\[
\frac12\times\frac14\times\frac12\times\frac14=\frac1{64}.
\]
There are 4 paths which pass through E_{44}, which together occur with probability 4 × 1/64 = 1/16 (remember we are restricting return paths to 4 transitions with no intermediate return to E_{11}). A further path is
\[
E_{11}\to E_{23}\to E_{42}\to E_{23}\to E_{11},
\]
which occurs with probability
\[
\frac12\times\frac14\times\frac13\times\frac14=\frac1{96},
\]
since there are 3 possible moves from E_{42}. There are a further 3 similar paths with the same probability. Hence the probability of a first return to E_{11} after 4 moves is
\[
f_{11}^{(4)}=4\times\frac1{64}+4\times\frac1{96}=\frac5{48}.
\]
The transition matrix for all the moves of the knight on the 4 × 4 chessboard is given by the matrix T below, with rows and columns ordered E_{11}, E_{12}, E_{13}, E_{14}, E_{21}, ..., E_{44}:
\[
T=\begin{pmatrix}
0&0&0&0&0&0&\frac12&0&0&\frac12&0&0&0&0&0&0\\
0&0&0&0&0&0&0&\frac13&\frac13&0&\frac13&0&0&0&0&0\\
0&0&0&0&\frac13&0&0&0&0&\frac13&0&\frac13&0&0&0&0\\
0&0&0&0&0&\frac12&0&0&0&0&\frac12&0&0&0&0&0\\
0&0&\frac13&0&0&0&0&0&0&0&\frac13&0&0&\frac13&0&0\\
0&0&0&\frac14&0&0&0&0&0&0&0&\frac14&\frac14&0&\frac14&0\\
\frac14&0&0&0&0&0&0&0&\frac14&0&0&0&0&\frac14&0&\frac14\\
0&\frac13&0&0&0&0&0&0&0&\frac13&0&0&0&0&\frac13&0\\
0&\frac13&0&0&0&0&\frac13&0&0&0&0&0&0&0&\frac13&0\\
\frac14&0&\frac14&0&0&0&0&\frac14&0&0&0&0&0&0&0&\frac14\\
0&\frac14&0&\frac14&\frac14&0&0&0&0&0&0&0&\frac14&0&0&0\\
0&0&\frac13&0&0&\frac13&0&0&0&0&0&0&0&\frac13&0&0\\
0&0&0&0&0&\frac12&0&0&0&0&\frac12&0&0&0&0&0\\
0&0&0&0&\frac13&0&\frac13&0&0&0&0&\frac13&0&0&0&0\\
0&0&0&0&0&\frac13&0&\frac13&\frac13&0&0&0&0&0&0&0\\
0&0&0&0&0&0&\frac12&0&0&\frac12&0&0&0&0&0&0
\end{pmatrix}.
\]
Returns to E_{11} can only occur after an even number of moves. We can obtain the return probabilities by computing T^{2n} and noting the entry in the top left corner. Hence, in the usual notation,
\[
p_{11}^{(2)}=\frac14,\qquad p_{11}^{(4)}=\frac16,\qquad p_{11}^{(6)}=\frac7{54},\qquad p_{11}^{(8)}=\frac{107}{972},
\]
and the first returns are given by
\[
f_{11}^{(2)}=p_{11}^{(2)}=\frac14,
\]
\[
f_{11}^{(4)}=p_{11}^{(4)}-f_{11}^{(2)}p_{11}^{(2)}=\frac16-\frac14\times\frac14=\frac5{48},
\]
\[
f_{11}^{(6)}=p_{11}^{(6)}-f_{11}^{(2)}p_{11}^{(4)}-f_{11}^{(4)}p_{11}^{(2)}=\frac{107}{1728}=0.0619\ldots.
\]

Poisson processes 5.1. The number of cars which pass a roadside speed camera within a specified hour is assumed to be a Poisson process with intensity λ = 92: on average 92 cars pass in the hour. It is also found that 1% of cars exceed the designated speed limit. What are the probabilities that (a) at least one car exceeds the speed limit, (b) at least two cars exceed the speed limit in the hour? Let time be measured in hours. With λ = 92 in the Poisson process, the mean number of cars in the hour is 92 × 1 = 92. Of these, on average, 0.92 cars exceed the speed limit. Assume that the cars which exceed the limit form a Poisson process with parameter λ1 = 0.92. Let N (t) be a random variable of the number of cars exceeding the speed limit by time t measured from the beginning of the hour. (a) The probability that at least one car has exceeded the limit within the hour is 1 − P(N (t) < 1) = 1 − e−λ1 = 1 − 0.398 = 0.602. (b) The probability that at least two cars have exceeded the limit within the hour is 1 − P(N (t) < 2) = 1 − e−λ1 − λ1 e−λ1 = 1 − 0.398 − 0.367 = 0.235. 5.2. If the between-event time in a Poisson process has an exponential distribution with parameter λ with density λe−λt , then the probability that the time τ for the next event to occur is at least t is P{τt > t} = e−λt . Show that, if t1 , t2 ≥ 0, then

P{τt > t1 + t2 |τt > t1 } = P{τt > t2 }.

What does this result imply about the Poisson process and its memory of past events? By formula (1.2) on conditional probability P(τ > t1 + t2 |τ > t1 )

= = =

P(τ > t1 + t2 ∩ τ > t1 ) P(τ > t1 + t2 ) = P(τ > t1 ) P(τ > t1 ) e−λ(t1 +t2 ) = e−λt2 e−λt1 P(τ > t2 )

The result shows the no memory property of the Poisson process. 5.3. The number of cars which pass a roadside speed camera are assumed to behave as a Poisson process with intensity λ. It is found that the probability that a car exceeds the designated speed limit is p. (a) Show that the number of cars which break the speed limit also form a Poisson process. (b) If n cars pass the camera in time t, find the probability function for the number of cars which exceed the speed limit.

74

(a) Let N (t) be the random variable representing m the number of speeding cars which have occurred in time t. The probability qn (t) = P(N (t) = m) satisfies qn (t + δt) ≈ qn−1 (t)λpδt + qn (t)(1 − λpδt), where λpδt is the probability that a speeding car appears in the time δt. This is the equation for a Poisson process with intensity λσ. (b) Of the n cars the number of ways in which m speeding cars can be arranged is n! = m!(n − m)!

 

n . m

The probability that any individual event occurs is

 

n m p (1 − p)n−m , m

(m = 0, 1, 2, . . . , n),

which is the binomial distribution.

5.4. The variance of a random variable X_t is given by V(X_t) = E(X_t^2) - E(X_t)^2. In terms of the generating function G(s, t), show that
\[
V(X_t)=\left[\frac{\partial}{\partial s}\Big(s\frac{\partial G(s,t)}{\partial s}\Big)-\Big(\frac{\partial G(s,t)}{\partial s}\Big)^2\right]_{s=1}
\]
(an alternative formula to (5.20)). Obtain the variance for the Poisson process using its generating function G(s, t) = e^{λ(s-1)t} given by eqn (5.17), and check your answer with that given in Problem 5.3.

The random variable X_t is a function of the time t. Let p_n(t) = P(X_t = n). The probability generating function G(s, t) becomes a function of the two variables s and t, defined by
\[
G(s,t)=\sum_{n=0}^{\infty}p_n(t)s^n.
\]
The mean of X_t is given by
\[
E(X_t)=\sum_{n=1}^{\infty}np_n(t)=\left[\frac{\partial G(s,t)}{\partial s}\right]_{s=1}.
\]
The expected value of X_t^2 is given by
\[
E(X_t^2)=\sum_{n=1}^{\infty}n^2p_n(t)=\left[\frac{\partial}{\partial s}\Big(s\frac{\partial G(s,t)}{\partial s}\Big)\right]_{s=1}.
\]
Hence
\[
V(X_t)=\left[\frac{\partial}{\partial s}\Big(s\frac{\partial G(s,t)}{\partial s}\Big)-\Big(\frac{\partial G(s,t)}{\partial s}\Big)^2\right]_{s=1}.
\]
For the Poisson process G(s, t) = e^{λ(s-1)t}. Then
\[
E(X_t)=\left[\frac{\partial}{\partial s}e^{\lambda(s-1)t}\right]_{s=1}=\lambda t,
\]
and
\[
E(X_t^2)=\left[\frac{\partial}{\partial s}\big(s\lambda te^{\lambda(s-1)t}\big)\right]_{s=1}=\lambda t+(\lambda t)^2.
\]

Hence V(X_t) = λt + (λt)^2 - (λt)^2 = λt.

5.5. A telephone answering service receives calls whose frequency varies with time but independently of other calls, perhaps with a daily pattern—more during the day than the night. The rate λ(t) ≥ 0 becomes a function of the time t. The probability that a call arrives in the small time interval (t, t + δt) when n calls have been received at time t satisfies
\[
p_n(t+\delta t)=p_{n-1}(t)(\lambda(t)\delta t+o(\delta t))+p_n(t)(1-\lambda(t)\delta t+o(\delta t)),\qquad (n\geq1),
\]
with
\[
p_0(t+\delta t)=(1-\lambda(t)\delta t+o(\delta t))p_0(t).
\]
It is assumed that the probability of two or more calls arriving in the interval (t, t + δt) is negligible. Find the set of differential-difference equations for p_n(t). Obtain the probability generating function G(s, t) for the process and confirm that it is a stochastic process with intensity ∫_0^t λ(x)dx. Find p_n(t) by expanding G(s, t) in powers of s. What is the mean number of calls received at time t?

From the difference equations it follows that
\[
\frac{p_n(t+\delta t)-p_n(t)}{\delta t}=\lambda(t)p_{n-1}(t)-\lambda(t)p_n(t)+o(1),
\]
\[
\frac{p_0(t+\delta t)-p_0(t)}{\delta t}=-\lambda(t)p_0(t)+o(1).
\]
Let δt → 0. Then
\[
p_n'(t)=\lambda(t)p_{n-1}(t)-\lambda(t)p_n(t),\qquad (i)
\]
\[
p_0'(t)=-\lambda(t)p_0(t).\qquad (ii)
\]
Let
\[
G(s,t)=\sum_{n=0}^{\infty}p_n(t)s^n.
\]
Multiply (i) by s^n, sum from n = 1, and add (ii), so that
\[
\frac{\partial G(s,t)}{\partial t}=\lambda(t)(s-1)G(s,t).\qquad (iii)
\]
The initial value is G(s, 0) = 1. The solution of (iii) (which is essentially an ordinary differential equation in t) subject to the initial condition is
\[
G(s,t)=\exp\Big[(s-1)\int_0^t\lambda(u)du\Big].
\]
Expansion of the generating function gives the series
\[
G(s,t)=\exp\Big[-\int_0^t\lambda(u)du\Big]\exp\Big[s\int_0^t\lambda(u)du\Big]=\sum_{n=0}^{\infty}p_n(t)s^n,
\]
where the probability
\[
p_n(t)=\frac1{n!}\exp\Big[-\int_0^t\lambda(u)du\Big]\Big(\int_0^t\lambda(u)du\Big)^n.
\]
The mean of the process is
\[
\mu(t)=\left[\frac{\partial G(s,t)}{\partial s}\right]_{s=1}=\int_0^t\lambda(u)du.
\]

5.6. For the telephone answering service in Problem 5.5, suppose that the rate is periodic, given by λ(t) = a + b cos(ωt), where a > 0 and |b| < a. Using the probability generating function from Problem 5.5, find the probability that n calls have been received at time t. Find also the mean number of calls received at time t. Sketch graphs of p_0(t), p_1(t) and p_2(t) where a = 0.5, b = 0.2 and ω = 1.

Using the results from Problem 5.5,
\[
G(s,t)=\exp\Big[(s-1)\int_0^t(a+b\cos\omega u)du\Big]=\exp[(s-1)(at+(b/\omega)\sin\omega t)].
\]
Hence
\[
p_n(t)=\frac1{n!}e^{-[at+(b/\omega)\sin\omega t]}[at+(b/\omega)\sin\omega t]^n,
\]
and the mean is µ(t) = at + (b/ω) sin ωt. The first three probabilities are shown in Figure 5.1.

Figure 5.1: Graphs of the probabilities p0 (t), p1 (t) and p2 (t) versus t in Problem 5.6.
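The probabilities plotted in Figure 5.1 are straightforward to evaluate. A short sketch (our addition), using the integrated rate m(t) = at + (b/ω) sin ωt derived above:

```python
import math

# Problem 5.6: p_n(t) for the periodic rate lambda(t) = a + b*cos(w*t).
a, b, w = 0.5, 0.2, 1.0

def pn(n, t):
    m = a * t + (b / w) * math.sin(w * t)     # integrated rate
    return math.exp(-m) * m**n / math.factorial(n)

t = 2.0
probs = [pn(n, t) for n in range(20)]
print(probs[0], probs[1], probs[2])
print(sum(probs))    # ≈ 1: the probabilities sum to one
```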

5.7. A Geiger counter is pre-set so that its initial reading is n_0 at time t = 0. What are the initial conditions on p_n(t), the probability that the reading is n (≥ n_0) at time t, and its generating function G(s, t)? Find p_n(t), and the mean reading of the counter at time t.

The probability generating function for this Poisson process is (see eqn (5.16)) G(s, t) = A(s)e^{λ(s-1)t}. The initial condition is G(s, 0) = s^{n_0}. Hence A(s) = s^{n_0} and
\[
G(s,t)=s^{n_0}e^{\lambda(s-1)t}.
\]
The power series expansion of G(s, t) is
\[
G(s,t)=\sum_{n=n_0}^{\infty}\frac{s^n(\lambda t)^{n-n_0}e^{-\lambda t}}{(n-n_0)!}.
\]
Hence
\[
p_n(t)=0,\quad (n<n_0),\qquad p_n(t)=\frac{(\lambda t)^{n-n_0}e^{-\lambda t}}{(n-n_0)!},\quad (n\geq n_0).
\]
The mean reading at time t is given by
\[
\mu(t)=\left[\frac{\partial G(s,t)}{\partial s}\right]_{s=1}=n_0+\lambda t.
\]

5.8. A Poisson process with random variable N(t) has probabilities
\[
p_n(t)=P[N(t)=n]=\frac{(\lambda t)^ne^{-\lambda t}}{n!}.
\]
If λ = 0.5, calculate the following probabilities associated with the process:
(a) P[N(3) = 6]; (b) P[N(2.6) = 3]; (c) P[N(3.7) = 4 | N(2.1) = 2]; (d) P[N(7) - N(3) = 3].

(a) P(N(3) = 6) = p_6(3) = (0.5 × 3)^6 e^{-0.5×3}/6! = 0.00353.
(b) P(N(2.6) = 3) = p_3(2.6) = (0.5 × 2.6)^3 e^{-0.5×2.6}/3! = 0.100.
(c) P[N(3.7) = 4 | N(2.1) = 2] = P[N(1.6) = 2] = p_2(1.6) = (0.5 × 1.6)^2 e^{-0.5×1.6}/2! = 0.144.
(d) P[N(7) - N(3) = 3] = P[N(4) = 3] = 0.180.

5.9. A telephone banking service receives an average of 1000 calls per hour. On average a customer transaction takes one minute. If the calls arrive as a Poisson process, how many operators should the bank employ to avoid an expected accumulation of incoming calls?

Let time t be measured in minutes. The intensity of the Poisson process is λ = 1000/60 = 50/3 calls per minute. The expected inter-arrival time is
\[
\frac1\lambda=\frac3{50}=0.06\ \text{minutes}.
\]
Each one-minute transaction must therefore be covered by 1/0.06 ≈ 16.7 operators. Hence 17 operators would be required to cover expected incoming calls.

5.10. A Geiger counter automatically switches off when the nth particle has been recorded, where n is fixed. The arrival of recorded particles is assumed to be a Poisson process with parameter λt. What is the expected value of the switch-off time?

The probability distribution function of the switch-off time is
\[
F(t)=1-P(0,1,2,\ldots,n-1\ \text{particles recorded by time } t)
=1-e^{-\lambda t}\Big[1+\lambda t+\cdots+\frac{(\lambda t)^{n-1}}{(n-1)!}\Big]
=1-e^{-\lambda t}\sum_{r=0}^{n-1}\frac{(\lambda t)^r}{r!}.
\]
Its density is, for t > 0,
\[
f(t)=\frac{dF(t)}{dt}
=\lambda e^{-\lambda t}\sum_{r=0}^{n-1}\frac{(\lambda t)^r}{r!}-e^{-\lambda t}\sum_{r=1}^{n-1}\frac{\lambda^rt^{r-1}}{(r-1)!}
=\frac{\lambda e^{-\lambda t}(\lambda t)^{n-1}}{(n-1)!},
\]
which is a gamma density. Its expected value is
\[
\mu=\int_0^{\infty}tf(t)dt=\frac{\lambda^n}{(n-1)!}\int_0^{\infty}t^ne^{-\lambda t}dt
=\frac1{\lambda(n-1)!}\int_0^{\infty}s^ne^{-s}ds=\frac{n!}{\lambda(n-1)!}=\frac n\lambda.
\]

5.11. Particles are emitted from a radioactive source, and N(t), the random variable of the number of particles emitted up to time t from t = 0, is a Poisson process with intensity λ. The probability that any particle hits a certain target is p, independently of any other particle. If M(t) is the random variable of the number of particles that hit the target up to time t, show, using the law of total probability, that M(t) forms a Poisson process with intensity λp.

For any two times t1, t2, (t2 > t1 ≥ 0), using the law of total probability, with t2 − t1 = t,

P[M(t2) − M(t1) = k] = Σ_{n=k}^∞ P[N(t2) − N(t1) = n] C(n, k) p^k (1 − p)^{n−k}
= Σ_{n=k}^∞ e^{−λt} (λt)^n/n! C(n, k) p^k (1 − p)^{n−k}
= e^{−λt} (pλt)^k/k! Σ_{n=k}^∞ [(1 − p)λt]^{n−k}/(n − k)!
= (λpt)^k e^{−λpt}/k!,

which is a Poisson process of intensity λp.
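This thinning result can be illustrated by simulation (an added sketch, not from the text; parameter values are arbitrary): generate arrivals at rate λ on (0, t], keep each with probability p, and compare the mean count with λpt.

```python
import random

def thinned_count(lam, p, t, rng):
    # arrivals of a rate-lam Poisson process on (0, t]; each arrival
    # independently "hits" the target with probability p
    hits = 0
    clock = rng.expovariate(lam)
    while clock <= t:
        if rng.random() < p:
            hits += 1
        clock += rng.expovariate(lam)
    return hits

def mean_thinned(lam, p, t, trials=100_000, seed=2):
    rng = random.Random(seed)
    return sum(thinned_count(lam, p, t, rng) for _ in range(trials)) / trials
```

With λ = 3, p = 0.4 and t = 2 the estimate is close to λpt = 2.4, consistent with M(t) being Poisson of intensity λp.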

Chapter 6

Birth and death processes

6.1. A colony of cells grows from a single cell. The probability that a cell divides in a time interval δt is λδt + o(δt). There are no deaths. Show that the probability generating function for this birth process is

G(s, t) = se^{−λt}/[1 − (1 − e^{−λt})s].

Find the probability that the original cell has not divided at time t, and the mean and variance of the population size at time t (see Problem 5.4 for the variance formula using the probability generating function).

This is a birth process with parameter λ and initial population size 1. Hence the probability generating function is (see eqn (6.12))

G(s, t) = se^{−λt}[1 − (1 − e^{−λt})s]^{−1},

which satisfies the initial condition G(s, 0) = s. It follows that the probability that the population size is 1 at time t is p1(t) = e^{−λt}. The mean population size is

µ(t) = ∂G(s, t)/∂s |_{s=1} = e^{λt}.

Using the generating function, the variance of the population size is

σ² = [∂/∂s (s ∂G(s, t)/∂s) − (∂G(s, t)/∂s)²]_{s=1} = [2e^{2λt} − e^{λt}] − e^{2λt} = e^{2λt} − e^{λt}.
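Both results can be checked by simulating the birth process directly (an added sketch, not part of the printed solution): when the colony has n cells the time to the next division is exponential with rate nλ.

```python
import random

def colony_size(lam, t, rng):
    # simple birth process from one cell: holding time Exp(n*lam) in state n
    n, clock = 1, 0.0
    while True:
        clock += rng.expovariate(n * lam)
        if clock > t:
            return n
        n += 1

def estimates(lam, t, trials=50_000, seed=3):
    rng = random.Random(seed)
    sizes = [colony_size(lam, t, rng) for _ in range(trials)]
    p_undivided = sum(s == 1 for s in sizes) / trials   # compare with e^{-lam*t}
    mean = sum(sizes) / trials                          # compare with e^{lam*t}
    return p_undivided, mean
```

With λ = 1 and t = 1 the two estimates are close to e^{−1} ≈ 0.368 and e ≈ 2.718 respectively.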

6.2. A simple birth process has a constant birth-rate λ. Show that its mean population size µ(t) satisfies the differential equation

dµ(t)/dt = λµ(t).

How can this result be interpreted in terms of a deterministic model for a birth process?

From Section 6.3, the mean population size of the birth process with initial population n0 is given by µ(t) = n0e^{λt}. It can be verified that

dµ/dt − λµ = n0λe^{λt} − n0λe^{λt} = 0.

This differential equation is a simple deterministic model for the population in a birth process, so that the mean population size in the stochastic process satisfies a deterministic equation. [However, this is not always the case for the relation between stochastic and deterministic models.]


6.3. The probability generating function for a simple death process with death-rate µ and initial population size n0 is given by

G(s, t) = (1 − e^{−µt})^{n0} [1 + se^{−µt}/(1 − e^{−µt})]^{n0}

(see Equation (6.17)). Using the binomial theorem find the probability pn(t) for n ≤ n0. If n0 is an even number, find the probability that the population size has halved by time t. A large number of experiments were undertaken with live samples with a variety of initial population sizes drawn from a common source, and the times of the halving of deaths were recorded for each sample. What would be the expected time for the population size to halve?

The binomial expansion of G(s, t) is given by

G(s, t) = (1 − e^{−µt})^{n0} [1 + se^{−µt}/(1 − e^{−µt})]^{n0}
        = (1 − e^{−µt})^{n0} Σ_{n=0}^{n0} C(n0, n) e^{−nµt} s^n/(1 − e^{−µt})^n.

From this series, the coefficient of s^n is

pn(t) = (1 − e^{−µt})^{n0−n} e^{−nµt} C(n0, n),  (n = 0, 1, 2, . . . , n0),

which is the probability that the population size is n at time t.

Let n0 = 2m0, where m0 is an integer, which ensures that n0 is even. We require

p_{m0}(t) = (1 − e^{−µt})^{m0} e^{−m0µt} C(2m0, m0).

The mean population size at time t is given by µ = G_s(1, t) = n0e^{−µt}. This mean is half the initial population if n0e^{−µt} = n0/2, which occurs, on average, at time t = µ^{−1} ln 2.

6.4. A birth process has a probability generating function G(s, t) given by

G(s, t) = s/[e^{λt} + s(1 − e^{λt})].

(a) What is the initial population size?
(b) Find the probability that the population size is n at time t.
(c) Find the mean and variance of the population size at time t.

(a) Since G(s, 0) = s, the initial population size is n0 = 1.
(b) Expand the generating function using the binomial theorem (multiplying numerator and denominator by e^{−λt}):

G(s, t) = se^{−λt}/[1 − s(1 − e^{−λt})] = se^{−λt} Σ_{n=0}^∞ s^n (1 − e^{−λt})^n.

The coefficients give the required probabilities:

p0(t) = 0,  pn(t) = e^{−λt}(1 − e^{−λt})^{n−1},  (n ≥ 1).

(c) Since

∂G(s, t)/∂s = e^{λt}/[e^{λt} + s(1 − e^{λt})]²,   (i)

the mean population size is µ = G_s(1, t) = e^{λt}. From (i),

∂²G(s, t)/∂s² |_{s=1} = −2e^{λt}(1 − e^{λt})/[e^{λt} + s(1 − e^{λt})]³ |_{s=1} = −2e^{λt}(1 − e^{λt}).

Hence the variance is given by

V(t) = [∂²G(s, t)/∂s² + ∂G(s, t)/∂s − (∂G(s, t)/∂s)²]_{s=1} = −2e^{λt}(1 − e^{λt}) + e^{λt} − e^{2λt} = e^{2λt} − e^{λt}.
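The binomial form of pn(t) in Problem 6.3 says that each of the n0 individuals survives to time t independently with probability e^{−µt}. A small numerical sketch (added here; the parameter values are arbitrary) confirms that the probabilities sum to 1 and reproduce the mean n0e^{−µt}:

```python
import math

def death_pmf(n, n0, mu, t):
    # pn(t) = C(n0, n) e^{-n*mu*t} (1 - e^{-mu*t})^{n0-n}:
    # binomial with survival probability exp(-mu*t)
    q = math.exp(-mu * t)
    return math.comb(n0, n) * q ** n * (1 - q) ** (n0 - n)

def total_and_mean(n0, mu, t):
    # normalisation and mean of the death-process distribution at time t
    probs = [death_pmf(n, n0, mu, t) for n in range(n0 + 1)]
    return sum(probs), sum(n * p for n, p in enumerate(probs))
```

For n0 = 6, µ = 0.5, t = 1.2 the total is 1 (to rounding) and the mean equals 6e^{−0.6}.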

6.5. A random process has the probability generating function

G(s, t) = [(2 + st)/(2 + t)]^r,

where r is a positive integer. What is the initial state of the process? Find the probability pn(t) associated with the generating function. What is p_r(t)? Show that the mean associated with G(s, t) is

µ(t) = rt/(2 + t).

Since G(s, 0) = 1 for all s, we have p0(0) = 1, so the process starts from the zero state. Expansion of G(s, t) using the binomial theorem leads to

G(s, t) = [(2 + st)/(2 + t)]^r = [2/(2 + t)]^r (1 + st/2)^r = [2/(2 + t)]^r Σ_{n=0}^{r} C(r, n)(st/2)^n.

Hence the probability that the size is n at time t is

pn(t) = [2/(2 + t)]^r C(r, n)(t/2)^n,  (n = 0, 1, 2, . . . , r).

With n = r,

p_r(t) = [2/(2 + t)]^r (t/2)^r = [t/(2 + t)]^r.

The mean size is given by

µ(t) = ∂G(s, t)/∂s |_{s=1} = rt(2 + st)^{r−1}/(2 + t)^r |_{s=1} = rt/(2 + t).

6.6. In a simple birth and death process with unequal birth and death-rates λ and µ, the probability generating function is given by

G(s, t) = [ {µ(1 − s) − (µ − λs)e^{−(λ−µ)t}} / {λ(1 − s) − (µ − λs)e^{−(λ−µ)t}} ]^{n0},

for an initial population size n0 (see Equation (6.23)).
(a) Find the mean population size at time t.
(b) Find the probability of extinction at time t.
(c) Show that, if λ < µ, then the probability of ultimate extinction is 1. What is the probability if λ > µ?
(d) Find the variance of the population size.

(a) Let G(s, t) = [A(s, t)/B(s, t)]^{n0} with obvious definitions for A(s, t) and B(s, t). Then

∂G(s, t)/∂s = n0 [A(s, t)]^{n0−1}/[B(s, t)]^{n0+1} × [(−µ + λe^{−(λ−µ)t})B(s, t) − (−λ + λe^{−(λ−µ)t})A(s, t)].

If s = 1, then A(1, t) = B(1, t) = −(µ − λ)e^{−(λ−µ)t}. Therefore the mean population size is given by µ(t) = n0e^{(λ−µ)t}.

(b) The probability of extinction at time t is

p0(t) = G(0, t) = [ (µ − µe^{−(λ−µ)t}) / (λ − µe^{−(λ−µ)t}) ]^{n0}.

(c) If λ < µ, then e^{−(λ−µ)t} = e^{(µ−λ)t} → ∞ as t → ∞, so that p0(t) → (µ/µ)^{n0} = 1. If λ > µ, then e^{−(λ−µ)t} → 0 and p0(t) → (µ/λ)^{n0} as t → ∞.

(d) This requires a lengthy differentiation to obtain the second derivative of G(s, t): symbolic computation is very helpful. The variance is given by

V(t) = G_{ss}(1, t) + G_s(1, t) − [G_s(1, t)]² = n0 [(λ + µ)/(λ − µ)] e^{(λ−µ)t}[e^{(λ−µ)t} − 1],  (λ ≠ µ).
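The ultimate-extinction probability (µ/λ)^{n0} for λ > µ can be checked by simulating the embedded jump chain (an added sketch; since extinction does not depend on the holding times, only the jump chain is simulated, and reaching a large cap is treated as escaping extinction):

```python
import random

def goes_extinct(lam, mu, n0, rng, cap=100):
    # embedded jump chain of the linear birth-death process: from state n > 0
    # the next event is a birth with probability lam/(lam + mu);
    # from `cap` the residual extinction probability (mu/lam)^cap is negligible
    n = n0
    while 0 < n < cap:
        n += 1 if rng.random() < lam / (lam + mu) else -1
    return n == 0

def extinction_prob(lam, mu, n0, trials=20_000, seed=5):
    rng = random.Random(seed)
    return sum(goes_extinct(lam, mu, n0, rng) for _ in range(trials)) / trials
```

With λ = 2, µ = 1 and n0 = 2 the estimate is close to (µ/λ)² = 0.25.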

6.7. In a population model, the immigration rate λn = λ, a constant, and the death rate µn = nµ. For an initial population size n0, the probability generating function is (Example 6.3)

G(s, t) = e^{λs/µ} exp[−λ(1 − (1 − s)e^{−µt})/µ][1 − (1 − s)e^{−µt}]^{n0}.

Find the probability that extinction occurs at time t. What is the probability of ultimate extinction?

The probability of extinction is

p0(t) = G(0, t) = (1 − e^{−µt})^{n0} e^{−λ(1−e^{−µt})/µ}.

The probability of ultimate extinction is

lim_{t→∞} p0(t) = e^{−λ/µ}.
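The formula for p0(t) can be compared with a direct Gillespie-style simulation of the immigration–death process (an added sketch; the parameter values are assumptions for illustration):

```python
import math
import random

def empty_at(t_end, lam, mu, n0, rng):
    # immigration at constant rate lam, deaths at rate n*mu;
    # returns True if the population is empty at time t_end
    t, n = 0.0, n0
    while True:
        rate = lam + n * mu
        t += rng.expovariate(rate)
        if t > t_end:
            return n == 0
        if rng.random() < lam / rate:
            n += 1
        else:
            n -= 1

def compare(t_end, lam, mu, n0, trials=40_000, seed=6):
    rng = random.Random(seed)
    est = sum(empty_at(t_end, lam, mu, n0, rng) for _ in range(trials)) / trials
    q = 1 - math.exp(-mu * t_end)
    exact = q ** n0 * math.exp(-lam * q / mu)   # p0(t) for this model
    return est, exact
```

For t = 1, λ = 0.5, µ = 1 and n0 = 2 the empirical frequency agrees with the analytic p0(t) to Monte Carlo accuracy.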

6.8. In a general birth and death process a population is maintained by immigration at a constant rate λ, and the death rate is nµ. Using the differential-difference equations (6.25) directly, obtain the differential equation

dµ(t)/dt + µµ(t) = λ,

for the mean population size µ(t). Solve this equation assuming an initial population n0 and compare the answer with that given in Example 6.3.

In terms of a generating function the mean of a process is given by

µ(t) = Σ_{n=1}^∞ n pn(t).

From (6.26) (in the book) the differential-difference equation for this immigration-death model is

dpn(t)/dt = λp_{n−1}(t) − (λ + nµ)pn(t) + µ(n + 1)p_{n+1}(t),  (n = 1, 2, . . .).

Multiply this equation by n and sum over n:

Σ_{n=1}^∞ n dpn(t)/dt = λ Σ_{n=1}^∞ n p_{n−1}(t) − λ Σ_{n=1}^∞ n pn(t) − µ Σ_{n=1}^∞ n² pn(t) + µ Σ_{n=1}^∞ n(n + 1)p_{n+1}(t)
= λ Σ_{n=0}^∞ (n + 1)pn(t) − λµ(t) − µ Σ_{n=1}^∞ n² pn(t) + µ Σ_{n=2}^∞ n(n − 1)pn(t)
= λµ(t) + λ − λµ(t) − µ Σ_{n=1}^∞ n pn(t)
= λ − µµ(t).

Hence

dµ(t)/dt + µµ(t) = λ.

This is a first-order linear equation with general solution

µ(t) = Ae^{−µt} + λ/µ.

The initial condition implies A = n0 − (λ/µ). Hence

µ(t) = (n0 − λ/µ)e^{−µt} + λ/µ,

which is the same as the result in Example 6.3.

6.9. In a death process the probability of a death in time δt when the population size is n ≠ 0 is a constant µδt, but obviously zero if the population size is zero. Verify that, if the initial population is n0, then pn(t), the probability that the population size is n at time t, is given by

p0(t) = µ^{n0}/(n0 − 1)! ∫_0^t s^{n0−1} e^{−µs} ds,   pn(t) = (µt)^{n0−n}/(n0 − n)! e^{−µt},  (1 ≤ n ≤ n0).

Show that the mean time to extinction is n0/µ.

The probability that a death occurs in time δt is µδt, independently of the population size. Hence

p_{n0}(t + δt) = (1 − µδt)p_{n0}(t),   (i)
pn(t + δt) = µδt p_{n+1}(t) + (1 − µδt)pn(t),  (n = 1, . . . , n0 − 1),   (ii)
p0(t + δt) = µδt p1(t) + p0(t),   (iii)

subject to the initial conditions p_{n0}(0) = 1, pn(0) = 0, (n = 0, 1, 2, . . . , n0 − 1). Rearrange each of the eqns (i)–(iii), divide through by δt, and let δt → 0 to obtain the differential-difference equations

p′_{n0}(t) = −µp_{n0}(t),   (iv)
p′n(t) = −µpn(t) + µp_{n+1}(t),  (n = 1, 2, . . . , n0 − 1),   (v)
p′0(t) = µp1(t).   (vi)

From (iv) and the initial conditions,

p_{n0}(t) = Ae^{−µt} = e^{−µt}.

From (v) with n = n0 − 1,

p′_{n0−1}(t) = −µp_{n0−1}(t) + µp_{n0}(t) = −µp_{n0−1}(t) + µe^{−µt}.

Subject to the initial condition p_{n0−1}(0) = 0, this first-order linear equation has the solution

p_{n0−1}(t) = µte^{−µt}.

Repetition of this process leads to the conjecture that

pn(t) = (µt)^{n0−n}/(n0 − n)! e^{−µt},  (n = 1, 2, . . . , n0),

which can be proved by induction. The final probability satisfies

p′0(t) = µp1(t) = µ^{n0} t^{n0−1}/(n0 − 1)! e^{−µt}.

Direct integration gives

p0(t) = µ^{n0}/(n0 − 1)! ∫_0^t s^{n0−1} e^{−µs} ds.   (vii)

It can be checked, using an integral formula for the factorial, that p0(t) → 1 as t → ∞, which confirms that extinction is certain. The probability distribution of the random variable T of the time to extinction is P[T ≤ t] = p0(t), given by (vii). Its density is

f(t) = dp0(t)/dt = µ^{n0} t^{n0−1}/(n0 − 1)! e^{−µt}.

Hence the expected value of T is

E(T) = ∫_0^∞ t f(t) dt = ∫_0^∞ µ^{n0} t^{n0}/(n0 − 1)! e^{−µt} dt = n0/µ.
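Since the deaths occur at constant rate µ, T is a sum of n0 independent Exp(µ) inter-death times and p0(t) is an Erlang distribution function. As an added numerical sketch, the integral form (vii) can be checked against the closed-form Erlang CDF by quadrature:

```python
import math

def p0_integral(n0, mu, t, steps=20_000):
    # trapezoidal rule for p0(t) = mu^{n0}/(n0-1)! * Int_0^t s^{n0-1} e^{-mu*s} ds
    h = t / steps
    f = lambda s: s ** (n0 - 1) * math.exp(-mu * s)
    area = h * ((f(0.0) + f(t)) / 2 + sum(f(i * h) for i in range(1, steps)))
    return mu ** n0 / math.factorial(n0 - 1) * area

def p0_closed(n0, mu, t):
    # Erlang CDF: 1 - e^{-mu*t} * sum_{k < n0} (mu*t)^k / k!
    return 1 - math.exp(-mu * t) * sum((mu * t) ** k / math.factorial(k) for k in range(n0))
```

The two evaluations agree to quadrature accuracy for any n0, µ, t.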

6.10. In a birth and death process the birth and death rates are given by

λn = nλ + α,   µn = nµ,

where α represents a constant immigration rate. Show that the probability generating function G(s, t) of the process satisfies

∂G(s, t)/∂t = (λs − µ)(s − 1) ∂G(s, t)/∂s + α(s − 1)G(s, t).

Show also that, if G(s, t) = (µ − λs)^{−α/λ} H(s, t), then H(s, t) satisfies

∂H(s, t)/∂t = (λs − µ)(s − 1) ∂H(s, t)/∂s.

Let the initial population size be n0. Solve the partial differential equation for H(s, t) using the method of Section 6.5 and confirm that

G(s, t) = (µ − λ)^{α/λ} [(µ − λs) − µ(1 − s)e^{(λ−µ)t}]^{n0} / [(µ − λs) − λ(1 − s)e^{(λ−µ)t}]^{n0+(α/λ)}.

(Remember the modified initial condition for H(s, t).) Find p0(t), the probability that the population is zero at time t (since immigration takes place even when the population is zero there is no question of extinction in this process). Hence show that

lim_{t→∞} p0(t) = [(µ − λ)/µ]^{α/λ}

if λ < µ. What is the limit if λ > µ? The long term behaviour of the process for λ < µ can be investigated by looking at the limit of the probability generating function as t → ∞. Show that

lim_{t→∞} G(s, t) = [(µ − λ)/(µ − λs)]^{α/λ}.

This is the probability generating function of a stationary distribution, and it indicates that a balance has been achieved between the birth and immigration rates and the death rate. What is the long term mean population size? If you want a further lengthy exercise, investigate the probability generating function in the special case λ = µ.

The differential-difference equations are

p′0(t) = −αp0(t) + µp1(t),   (i)
p′n(t) = [(n − 1)λ + α]p_{n−1}(t) − (nλ + α + nµ)pn(t) + (n + 1)µp_{n+1}(t),  (n = 1, 2, . . .).   (ii)

Multiplying (ii) by s^n, summing over n ≥ 1, and adding (i) leads to

Σ_{n=0}^∞ p′n(t)s^n = Σ_{n=1}^∞ {[(n − 1)λ + α]p_{n−1}(t) − (nλ + α + nµ)pn(t)}s^n + Σ_{n=0}^∞ (n + 1)µp_{n+1}(t)s^n.

Let the probability generating function be G(s, t) = Σ_{n=0}^∞ pn(t)s^n. Then the summations above lead to

∂G(s, t)/∂t = (λs − µ)(s − 1) ∂G(s, t)/∂s + α(s − 1)G(s, t).

Let G(s, t) = (µ − λs)^{−α/λ} H(s, t). Then

∂G(s, t)/∂s = (µ − λs)^{−α/λ} ∂H(s, t)/∂s + α(µ − λs)^{−(α/λ)−1} H(s, t),

and

∂G(s, t)/∂t = (µ − λs)^{−α/λ} ∂H(s, t)/∂t.

This transformation removes the non-derivative term to leave

∂H(s, t)/∂t = (λs − µ)(s − 1) ∂H(s, t)/∂s.

Now apply the change of variable defined by

ds = (λs − µ)(s − 1) dz

as in Section 6.5(a). Integration gives (see eqn (6.21))

s = [λ − µe^{(λ−µ)z}]/[λ − λe^{(λ−µ)z}] = h(z) (say).   (iii)

The initial condition is equivalent to G(s, 0) = s^{n0}, or H(s, 0) = (µ − λs)^{α/λ} s^{n0}. It follows that

H(h(z), 0) = [λ(µ − λ)/(λ − λe^{(λ−µ)z})]^{α/λ} [(λ − µe^{(λ−µ)z})/(λ − λe^{(λ−µ)z})]^{n0} = w(z) (say).

Since H(h(z), t) = w(z + t) for any smooth function w of z + t, it follows that

G(s, t) = (µ − λs)^{−α/λ} H(h(z), t) = (µ − λs)^{−α/λ} w(z + t)
= (µ − λs)^{−α/λ} [λ(µ − λ)/(λ − λe^{(λ−µ)(z+t)})]^{α/λ} [(λ − µe^{(λ−µ)(z+t)})/(λ − λe^{(λ−µ)(z+t)})]^{n0},

where z is defined by s = h(z) given by (iii). Finally

G(s, t) = (µ − λ)^{α/λ} [(µ − λs) − µ(1 − s)e^{(λ−µ)t}]^{n0} / [(µ − λs) − λ(1 − s)e^{(λ−µ)t}]^{n0+(α/λ)},   (iv)

as displayed in the problem. The probability that the population is zero at time t is

p0(t) = G(0, t) = (µ − λ)^{α/λ} (µ − µe^{(λ−µ)t})^{n0} / (µ − λe^{(λ−µ)t})^{n0+(α/λ)}.

If λ < µ, then

p0(t) → [(µ − λ)/µ]^{α/λ}.

If λ > µ, then, extracting the factor e^{(λ−µ)t} from numerator and denominator,

p0(t) = (µ − λ)^{α/λ} e^{−(λ−µ)αt/λ} (µe^{−(λ−µ)t} − µ)^{n0} / (µe^{−(λ−µ)t} − λ)^{n0+(α/λ)} → 0

as t → ∞, since e^{−(λ−µ)t} → 0 and e^{−(λ−µ)αt/λ} → 0: with λ > µ the growing population makes an empty state ever less likely.

The long term behaviour for λ < µ is determined by letting t → ∞ in (iv), resulting in

lim_{t→∞} G(s, t) = [(µ − λ)/(µ − λs)]^{α/λ}.

For the mean, express G(s, t) in the form

G(s, t) = (µ − λ)^{α/λ} A(s, t)/B(s, t),

where

A(s, t) = [(µ − λs) − µ(1 − s)e^{(λ−µ)t}]^{n0},   B(s, t) = [(µ − λs) − λ(1 − s)e^{(λ−µ)t}]^{n0+(α/λ)}.

Then

G_s(s, t) = (µ − λ)^{α/λ} [A_s(s, t)B(s, t) − A(s, t)B_s(s, t)]/[B(s, t)]²,   A_s = ∂A/∂s,   B_s = ∂B/∂s.

For the mean we require s = 1, for which value

A(1, t) = (µ − λ)^{n0},   B(1, t) = (µ − λ)^{n0+(α/λ)},
A_s(1, t) = n0(−λ + µe^{(λ−µ)t})(µ − λ)^{n0−1},
B_s(1, t) = (n0 + (α/λ))(−λ + λe^{(λ−µ)t})(µ − λ)^{n0+(α/λ)−1}.

Hence

µ(t) = G_s(1, t) = (µ − λ)^{α/λ} [A_s(1, t)B(1, t) − A(1, t)B_s(1, t)]/[B(1, t)]²
     = [α + {n0(µ − λ) − α}e^{(λ−µ)t}]/(µ − λ).

If λ < µ, then

µ(t) → α/(µ − λ)

as t → ∞. If λ > µ, then the mean becomes unbounded, as would be expected.
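The stationary behaviour for λ < µ can be illustrated by simulation (an added sketch; parameter values are assumptions): starting from an empty population, after a long run the mean state should approach α/(µ − λ).

```python
import random

def state_at(t_end, lam, mu, alpha, rng):
    # Gillespie simulation with birth rate n*lam + alpha and death rate n*mu;
    # from state 0 the only possible event is an immigration
    t, n = 0.0, 0
    while True:
        rate = n * (lam + mu) + alpha
        t += rng.expovariate(rate)
        if t > t_end:
            return n
        if rng.random() < (n * lam + alpha) / rate:
            n += 1
        else:
            n -= 1

def long_run_mean(lam, mu, alpha, t_end=40.0, trials=10_000, seed=8):
    rng = random.Random(seed)
    return sum(state_at(t_end, lam, mu, alpha, rng) for _ in range(trials)) / trials
```

With λ = 0.5, µ = 1 and α = 1 the long-run mean is close to α/(µ − λ) = 2.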

6.11. In a birth and death process with immigration, the birth and death rates are respectively

λn = nλ + α,   µn = nµ.

Show directly from the differential-difference equations for pn(t) that the mean population size µ(t) satisfies the differential equation

dµ(t)/dt = (λ − µ)µ(t) + α.

Deduce the result

µ(t) → α/(µ − λ)

as t → ∞ if λ < µ. Discuss the design of a deterministic immigration model based on this equation.

The difference equations for the probabilities pn(t) are given by

p′0(t) = −αp0(t) + µp1(t),   (i)
p′n(t) = [(n − 1)λ + α]p_{n−1}(t) − (nλ + α + nµ)pn(t) + (n + 1)µp_{n+1}(t).   (ii)

The mean µ(t) is given by µ(t) = Σ_{n=1}^∞ n pn(t). Multiply (ii) by n and sum from n = 1. Then, re-ordering the sums,

dµ(t)/dt = Σ_{n=1}^∞ n p′n(t)
= λ Σ_{n=2}^∞ n(n − 1)p_{n−1}(t) + α Σ_{n=1}^∞ n p_{n−1}(t) − (λ + µ) Σ_{n=1}^∞ n² pn(t) − α Σ_{n=1}^∞ n pn(t) + µ Σ_{n=1}^∞ n(n + 1)p_{n+1}(t)
= λ Σ_{n=1}^∞ n(n + 1)pn(t) + α Σ_{n=0}^∞ (n + 1)pn(t) − (λ + µ) Σ_{n=1}^∞ n² pn(t) − αµ(t) + µ Σ_{n=2}^∞ n(n − 1)pn(t)
= α + (λ − µ)µ(t).

Solving this first-order linear equation with µ(0) = n0 gives

µ(t) = (n0 − α/(µ − λ))e^{(λ−µ)t} + α/(µ − λ),

so that µ(t) → α/(µ − λ) as t → ∞ if λ < µ.

The mean of the stochastic process satisfies a simple deterministic model for a birth and death process with immigration.

6.12. In a simple birth and death process with equal birth and death rates λ, the initial population size has a Poisson distribution with probabilities

pn(0) = e^{−α} α^n/n!,  (n = 0, 1, 2, . . .),

with intensity α. It could be thought of as a process in which the initial distribution has arisen as the result of some previous process. Find the probability generating function for this process, and confirm that the probability of extinction at time t is exp[−α/(1 + λt)], and that the mean population size is α for all t.

In Section 6.5(b), the probability generating function G(s, t) for the case in which the birth and death rates are equal satisfies

∂G(s, t)/∂t = λ(1 − s)² ∂G(s, t)/∂s.

To solve the partial differential equation, the transformation

z = 1/(λ(1 − s)),  or  s = (λz − 1)/(λz),   (i)

is used. The result is that G(s, t) = w(z + t) for any smooth function w. The initial condition at t = 0 is

w(z) = Σ_{n=0}^∞ pn(0)s^n = Σ_{n=0}^∞ e^{−α}α^n s^n/n! = e^{α(s−1)} = exp[α((λz − 1)/(λz) − 1)] = e^{−α/(λz)},

using the transformation (i). Hence

G(s, t) = w(z + t) = e^{−α/[λ(z+t)]} = exp[−α(1 − s)/(1 + λt(1 − s))].

The probability of extinction at time t is

p0(t) = G(0, t) = exp[−α/(1 + λt)].

The mean is µ(t) = G_s(1, t) = α.
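The extinction probability can be cross-checked by conditioning on the Poisson initial size (an added numerical sketch): each initial individual's line is a critical birth and death process, extinct by time t with probability λt/(1 + λt) (eqn (6.24) with n0 = 1 at s = 0), and averaging over the Poisson distribution should recover exp[−α/(1 + λt)].

```python
import math

def extinction_series(alpha, lam, t, nmax=200):
    # sum_n e^{-alpha} alpha^n / n! * q^n  with  q = lam*t/(1 + lam*t);
    # the Poisson terms are accumulated iteratively to avoid huge factorials
    q = lam * t / (1 + lam * t)
    term, total = math.exp(-alpha), 0.0
    for n in range(nmax):
        total += term
        term *= alpha * q / (n + 1)
    return total

def extinction_closed(alpha, lam, t):
    return math.exp(-alpha / (1 + lam * t))
```

The series sums to e^{−α}e^{αq} = e^{−α(1−q)} = exp[−α/(1 + λt)], so the two functions agree to rounding error.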



6.13. A birth and death process takes place as follows. A single bacterium is allowed to grow, and is assumed to behave as a simple birth process with birth rate λ for a time t1 without any deaths. No further growth then takes place. The colony of bacteria is then allowed to die with the assumption that it is a simple death process with death rate µ for a time t2. Show that the probability of extinction after the total time t1 + t2 is

Σ_{n=1}^∞ e^{−λt1}(1 − e^{−λt1})^{n−1}(1 − e^{−µt2})^n.

Using the formula for the sum of a geometric series, show that this probability can be simplified to

(e^{µt2} − 1)/(e^{λt1} + e^{µt2} − 1).

Suppose that at time t = t1 the population size is n. From Section 6.3, the probability that the population is of size n at time t1 entirely through births is

pn(t1) = e^{−λt1}(1 − e^{−λt1})^{n−1}.

From Section 6.4 on the death process, given that the population is of size n, the probability that the population becomes extinct after a further time t2 is

q0(t2) = (1 − e^{−µt2})^n.

The probability that the population increases to n and then declines to zero is

pn(t1)q0(t2) = e^{−λt1}(1 − e^{−λt1})^{n−1}(1 − e^{−µt2})^n.

Now n can take any value equal to or greater than 1. Hence the probability of extinction through every possible n is

s(t1, t2) = Σ_{n=1}^∞ e^{−λt1}(1 − e^{−λt1})^{n−1}(1 − e^{−µt2})^n.

The probability s(t1, t2) can be expressed as a geometric series in the form

s(t1, t2) = e^{−λt1}/(1 − e^{−λt1}) Σ_{n=1}^∞ [(1 − e^{−λt1})(1 − e^{−µt2})]^n
= [e^{−λt1}/(1 − e^{−λt1})] · (1 − e^{−λt1})(1 − e^{−µt2})/[1 − (1 − e^{−λt1})(1 − e^{−µt2})]
= (e^{µt2} − 1)/(e^{λt1} + e^{µt2} − 1).

6.14. As in the previous problem a single bacterium grows as a simple birth process with rate λ and no deaths for a time τ. The colony numbers then decline as a simple death process with rate µ. Show that the probability generating function for the death process is

[1 − e^{−µt}(1 − s)]e^{−λτ} / {1 − (1 − e^{−λτ})[1 − e^{−µt}(1 − s)]},

where t is measured from the time τ. Show that the mean population size during the death process is e^{λτ−µt}.

During the birth process the generating function is (see eqn (6.12))

G(s, t) = se^{−λt}/[1 − (1 − e^{−λt})s],

assuming an initial population of 1. For the death process suppose that time restarts from t = 0, and that the new probability generating function is H(s, t). At t = 0,

H(s, 0) = G(s, τ) = se^{−λτ}/[1 − (1 − e^{−λτ})s].

For the death process the transformation is s = 1 − e^{−µz}, so that

H(s, 0) = w(z) = (1 − e^{−µz})e^{−λτ}/[1 − (1 − e^{−λτ})(1 − e^{−µz})].

Then, in terms of s,

H(s, t) = w(z + t) = [1 − e^{−µt}(1 − s)]e^{−λτ}/{1 − (1 − e^{−λτ})[1 − e^{−µt}(1 − s)]}.

The mean population size in the death process is

H_s(1, t) = {[1 − (1 − e^{−λτ})]e^{−µt−λτ} + e^{−λτ−µt}(1 − e^{−λτ})}/[1 − (1 − e^{−λτ})]² = e^{λτ−µt}.

6.15. For a simple birth and death process the probability generating function (equation (6.23)) is given by

G(s, t) = [ {µ(1 − s) − (µ − λs)e^{−(λ−µ)t}} / {λ(1 − s) − (µ − λs)e^{−(λ−µ)t}} ]^{n0}

for an initial population of n0. What is the probability that the population is (a) zero, (b) 1 at time t?

Writing E = e^{−(λ−µ)t} for brevity,

G(s, t) = [{(µ − µE) − s(µ − λE)}/{(λ − µE) − s(λ − λE)}]^{n0}
= [(µ − µE)/(λ − µE)]^{n0} [1 − s(µ − λE)/(µ − µE)]^{n0} [1 − s(λ − λE)/(λ − µE)]^{−n0}
= [(µ − µE)/(λ − µE)]^{n0} [1 + n0 s{(λ − λE)/(λ − µE) − (µ − λE)/(µ − µE)} + · · ·].

The probabilities p0(t) and p1(t) are given by the first two coefficients of s in this series:

p0(t) = [(µ − µE)/(λ − µE)]^{n0},
p1(t) = n0 [(µ − µE)/(λ − µE)]^{n0} [(λ − λE)/(λ − µE) − (µ − λE)/(µ − µE)].

6.16. (An alternative method of solution for the probability generating function) The general solution of the first-order partial differential equation

A(x, y, z) ∂z/∂x + B(x, y, z) ∂z/∂y = C(x, y, z)

is f(u, v) = 0, where f is an arbitrary function, and u(x, y, z) = c1 and v(x, y, z) = c2 are two independent solutions of

dx/A(x, y, z) = dy/B(x, y, z) = dz/C(x, y, z).

This is known as Cauchy's method. Apply the method to the partial differential equation for the probability generating function for the simple birth and death process, namely (equation (6.19))

∂G(s, t)/∂t = (λs − µ)(s − 1) ∂G(s, t)/∂s,

by solving

ds/[(λs − µ)(1 − s)] = dt/(−1) = dG/0.

Show that

u(s, t, G) = G = c1  and  v(s, t, G) = e^{(λ−µ)t} (1 − s)/((µ/λ) − s) = c2

are two independent solutions. The general solution can be written in the form

G(s, t) = H(e^{(λ−µ)t} (1 − s)/((µ/λ) − s)).

Here H is a function determined by the initial condition G(s, 0) = s^{n0}. Find H and recover formula (6.22) for the probability generating function.

Note that in the birth and death equation the function C is zero. Comparing the two partial differential equations, we have to solve

ds/[(λs − µ)(1 − s)] = dt/(−1) = dG/0.

The last ratio is simply dG = 0. This equation has a general solution which can be expressed as u(s, t, G) ≡ G = c1. The first equality requires the solution of the differential equation

ds/dt = −(λs − µ)(1 − s).

The integration is given essentially in eqn (6.20) in the text, which in terms of v can be expressed as

v(s, t, G) ≡ e^{(λ−µ)t} (1 − s)/((µ/λ) − s) = c2.

Hence the general solution is

f(u, v) = 0,  or  f(G, e^{(λ−µ)t}(1 − s)/((µ/λ) − s)) = 0.

Alternatively, this can be written in the form

G(s, t) = H(e^{(λ−µ)t}(1 − s)/((µ/λ) − s)),

where the function H is determined by the initial conditions. Assuming that the initial population size is n0, then G(s, 0) = s^{n0}, which means that H is determined by

H((1 − s)/((µ/λ) − s)) = s^{n0}.

Let u = (1 − s)/((µ/λ) − s). Then

H(u) = [(λ − µu)/(λ − λu)]^{n0},

which determines the functional form of H. The result follows by replacing u by

e^{(λ−µ)t}(1 − s)/((µ/λ) − s)

as the argument of H.

6.17. Apply Cauchy's method outlined in Problem 6.16 to the immigration model in Example 6.3. In this application the probability generating function satisfies

∂G(s, t)/∂t = λ(s − 1)G(s, t) + µ(1 − s) ∂G(s, t)/∂s.

Solve the equation assuming an initial population of n0.

Reading off the coefficients as in Problem 6.16 (here C is not zero), we have to solve

ds/[µ(1 − s)] = dt/(−1) = dG/[λ(1 − s)G].

The first and third ratios give dG/G = (λ/µ)ds, so that

u(s, t, G) ≡ Ge^{−λs/µ} = c1,

whilst the first and second give d(1 − s)/dt = µ(1 − s), so that

v(s, t, G) ≡ e^{−µt}(1 − s) = c2.

The general solution can therefore be expressed in the functional form

G(s, t) = e^{λs/µ} H(e^{−µt}(1 − s)).

From the initial condition G(s, 0) = s^{n0},

e^{λs/µ} H(1 − s) = s^{n0},  so that  H(1 − s) = e^{−λs/µ} s^{n0}.

Let u = 1 − s: then

H(u) = e^{−λ(1−u)/µ}(1 − u)^{n0}.

The result follows by replacing u by e^{−µt}(1 − s) in this formula:

G(s, t) = e^{λs/µ} exp[−λ(1 − (1 − s)e^{−µt})/µ][1 − (1 − s)e^{−µt}]^{n0},

which is the probability generating function quoted in Example 6.3 (and in Problem 6.7).

The result follows by replacing u by eµt (1 − s) in this formula. 6.18. In a population sustained by immigration at rate λ with a simple death process with rate µ, the probability pn (t) satisfies dp0 (t) = −λp0 (t) + µp1 (t), dt dpn (t) = λpn−1 (t) − (λ + nµ)pn (t) + (n + 1)µpn+1 (t). dt Investigate the steady-state behaviour of the system by assuming that pn (t) → pn ,

dpn (t)/dt → 0

for all n, as t → ∞. Show that the resulting difference equations for what is known as the corresponding stationary process −λp0 + µp1 = 0, λpn−1 − (λ + nµ)pn + (n + 1)µpn+1 = 0,

(n = 1, 2, . . .)

can be solved iteratively to give p1 =

λ p0 , µ

p2 =

λ2 p0 , 2!µ2

···

pn =

λn p0 , n!µn

P∞

···.

Using the condition p = 1, and assuming that λ < µ, determine p0 . Find the mean steady-state n=0 n population size, and compare the result with that obtained in Example 6.3. From the steady-state difference equations p1 = p2 =

λ p0 , µ

1 λ2 [−λp0 + (λ + 2µ)p1 ] = , 2µ 2!µ2

92

1 λ3 [−λp1 + (λ + 2µ)p2 ] = p0 , 3µ 3!µ3

p3 =

and so on: the result can be confirmed by an induction proof. The requirement ∞  n X λ 1

µ

n=0

n!

p0 = eλ/µ p0 = 1

P∞

n=0

pn = 1 implies

if p0 = e−λ/µ . The mean steady state population is given by µ

=

∞ X

npn =

n=1

n=1

=

p0

 n ∞ X np0 λ

  λ µ

n!

µ

λ = . µ

eλ/µ

=

∞ X n=1

p0 (n − 1)!

 n λ µ
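A quick numerical sketch (added here, not in the original solution) iterates the one-step recurrence pn = (λ/(nµ))p_{n−1} that the iterative solution produces, normalises over a truncated range, and confirms the Poisson form and the mean λ/µ:

```python
import math

def stationary_dist(lam, mu, nmax=80):
    # iterate p_n = (lam/(n*mu)) p_{n-1} from p_0 = 1, then normalise;
    # nmax truncates the rapidly decaying Poisson tail
    p = [1.0]
    for n in range(1, nmax + 1):
        p.append(p[-1] * lam / (n * mu))
    total = sum(p)
    return [x / total for x in p]

def stationary_mean(lam, mu, nmax=80):
    p = stationary_dist(lam, mu, nmax)
    return sum(n * pn for n, pn in enumerate(p))
```

With λ = 2 and µ = 1 the computed p0 is e^{−2} and the mean is λ/µ = 2, matching the analysis above.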

6.19. In a simple birth process the probability that the population is of size n at time t, given that it was n0 at time t = 0, is given by

pn(t) = C(n − 1, n0 − 1) e^{−λn0t}(1 − e^{−λt})^{n−n0},  (n ≥ n0)

(see Section 6.3 and Figure 6.1). Show that the probability achieves its maximum value for given n and n0 when t = (1/λ) ln(n/n0). Find also the maximum value of pn(t) at this time.

Differentiating pn(t), we obtain

dpn(t)/dt = λ C(n − 1, n0 − 1) e^{−λn0t}(1 − e^{−λt})^{n−n0−1}[−n0(1 − e^{−λt}) + (n − n0)e^{−λt}].

The derivative is zero if

e^{−λt} = n0/n,  or  t = (1/λ) ln(n/n0).

Substituting this time back into pn(t), it follows that

max_t pn(t) = C(n − 1, n0 − 1) n0^{n0}(n − n0)^{n−n0}/n^n.

6.20. In a birth and death process with equal birth and death parameters λ, the probability generating function is (see eqn (6.24))

G(s, t) = [ (1 + (λt − 1)(1 − s)) / (1 + λt(1 − s)) ]^{n0}.

Find the mean population size at time t. Show also that the variance of the population size is 2n0λt.

The derivative of G(s, t) with respect to s is given by

G_s(s, t) = n0[λt − (λt − 1)s]^{n0−1}/[(1 + λt) − λts]^{n0+1}.

The mean population size at time t is µ(t) = G_s(1, t) = n0. We require the second derivative, given by

G_{ss}(s, t) = n0[λt − (λt − 1)s]^{n0−2}[n0 − 1 + 2λts + 2λ²t²(1 − s)]/[(1 + λt) − λts]^{n0+2}.

The variance of the population size is

V(t) = G_{ss}(1, t) + G_s(1, t) − [G_s(1, t)]² = [2λn0t + n0² − n0] + n0 − n0² = 2n0λt.

6.21. In a death process the probability of a death in time δt is µ(t)n δt, with a time-dependent parameter µ(t), when the population size is n. The pgf G(s, t) satisfies

∂G/∂t = µ(t)(1 − s) ∂G/∂s,

as in Section 6.4. Show that

G(s, t) = [1 − e^{−τ}(1 − s)]^{n0},  where  τ = ∫_0^t µ(u) du.

Find the mean population size at time t. In a death process it is found that the expected value of the population size at time t is given by

n0/(1 + αt),  (t ≥ 0),

where α is a positive constant. Estimate the corresponding death-rate µ(t).

Let

z = ∫ ds/(1 − s),  so that  s = 1 − e^{−z},  and  τ = ∫_0^t µ(u) du.

The equation for the probability generating function becomes

∂G/∂τ = ∂G/∂z.

The general solution can be expressed as G(s, t) = w(z + τ) for any arbitrary differentiable function w. Initially τ = 0. Hence

G(s, 0) = s^{n0} = w(z) = (1 − e^{−z})^{n0},

so that

G(s, t) = w(z + τ) = [1 − e^{−(z+τ)}]^{n0} = [1 − e^{−τ}(1 − s)]^{n0}.

The mean population size at time t is given by

G_s(1, t) = n0 e^{−τ}[1 − e^{−τ}(1 − s)]^{n0−1}|_{s=1} = n0 e^{−τ} = n0 exp[−∫_0^t µ(u) du].

Given the mean,

n0/(1 + αt) = n0 exp[−∫_0^t µ(u) du],   (i)

it follows that ∫_0^t µ(u) du = ln(1 + αt), so that the death-rate is µ(t) = α/(1 + αt), which can be obtained by differentiating both sides of (i) with respect to t.

6.22. A population process has a probability generating function G(s, t) which satisfies the equation

e^{−t} ∂G/∂t = λ(s − 1)² ∂G/∂s.

If, at time t = 0, the population size is n0, show that

G(s, t) = [ (1 + (1 − s)(λe^t − λ − 1)) / (1 + λ(1 − s)(e^t − 1)) ]^{n0}.

Find the mean population size at time t, and the probability of ultimate extinction.

The generating function satisfies

e^{−t} ∂G/∂t = λ(s − 1)² ∂G/∂s.

Let

τ = ∫_0^t e^u du = e^t − 1,   z = ∫ ds/[λ(s − 1)²] = 1/[λ(1 − s)],

so that s = (λz − 1)/(λz). The transformed partial differential equation has the general solution G(s, t) = w(z + τ). The initial condition is

G(s, 0) = s^{n0} = [1 − 1/(λz)]^{n0} = w(z).

Hence

G(s, t) = [1 − 1/(λ(z + τ))]^{n0} = [1 − (1 − s)/(1 + λ(1 − s)(e^t − 1))]^{n0}
= [ (1 + (1 − s)(λe^t − λ − 1)) / (1 + λ(1 − s)(e^t − 1)) ]^{n0},

as required. The mean population size is µ(t) = G_s(1, t) = n0. The probability of extinction at time t is

p0(t) = G(0, t) = [λ(e^t − 1)/(λ(e^t − 1) + 1)]^{n0} → 1

as t → ∞.

6.23. A population process has a probability generating function given by

G(s, t) = [1 − µe^{−t}(1 − s)]/[1 + µe^{−t}(1 − s)],

where µ is a parameter. Find the mean of the population size at time t, and its limit as t → ∞. Expanding G(s, t) in powers of s, determine the probability that the population size is n at time t.

We require the derivative

G_s(s, t) = {µe^{−t}[1 + µe^{−t}(1 − s)] + µe^{−t}[1 − µe^{−t}(1 − s)]}/[1 + µe^{−t}(1 − s)]² = 2µe^{−t}/[1 + µe^{−t}(1 − s)]².

Then the mean population size is

µ(t) = G_s(1, t) = 2µe^{−t} → 0  as t → ∞.

To find the individual probabilities we require the power series expansion of G(s, t). Using a binomial expansion,

G(s, t) = [(1 − µe^{−t}) + µe^{−t}s]/[(1 + µe^{−t}) − µe^{−t}s]
= [(1 − µe^{−t})/(1 + µe^{−t})] [1 + µe^{−t}s/(1 − µe^{−t})][1 − µe^{−t}s/(1 + µe^{−t})]^{−1}
= [(1 − µe^{−t})/(1 + µe^{−t})] [1 + µe^{−t}s/(1 − µe^{−t})] Σ_{n=0}^∞ [µe^{−t}/(1 + µe^{−t})]^n s^n.

The coefficients of the powers s^n give the following probabilities:

p0(t) = (1 − µe^{−t})/(1 + µe^{−t}),
pn(t) = [(1 − µe^{−t})/(1 + µe^{−t})] [ µ^n e^{−nt}/(1 + µe^{−t})^n + µ^n e^{−nt}/{(1 + µe^{−t})^{n−1}(1 − µe^{−t})} ] = 2µ^n e^{−nt}/(1 + µe^{−t})^{n+1},  (n ≥ 1).

6.24. In a birth and death process with equal rates λ, the probability generating function is given by (see eqn (6.24))

G(s, t) = [(λ(z + t) − 1)/(λ(z + t))]^{n0} = [ (1 + (λt − 1)(1 − s)) / (1 + λt(1 − s)) ]^{n0},

where n0 is the initial population size. Show that p_i(t), the probability that the population size is i at time t, is given by

p_i(t) = Σ_{m=0}^{i} C(n0, m) C(n0 + i − m − 1, i − m) α(t)^m β(t)^{n0+i−m}

if i ≤ n0, and by

p_i(t) = Σ_{m=0}^{n0} C(n0, m) C(n0 + i − m − 1, i − m) α(t)^m β(t)^{n0+i−m}

if i > n0, where

α(t) = (1 − λt)/(λt),   β(t) = λt/(1 + λt).

Expand G(s, t) as a power series in terms of s using the binomial expansion. Since the numerator is λt[1 + ((1 − λt)/(λt))s] and the denominator is (1 + λt)[1 − (λt/(1 + λt))s],

G(s, t) = [λt/(1 + λt)]^{n0} [1 + ((1 − λt)/(λt))s]^{n0} [1 − (λt/(1 + λt))s]^{−n0}
= [λt/(1 + λt)]^{n0} Σ_{k=0}^{n0} C(n0, k) α^k s^k Σ_{j=0}^∞ C(n0 + j − 1, j) β^j s^j,

where α = (1 − λt)/(λt), β = λt/(1 + λt), and the negative binomial series has been used for the last factor. Collecting the coefficient of s^i, the index m of the first (finite) sum runs from 0 to i if i ≤ n0, and from 0 to n0 if i > n0. In both cases

p_i(t) = β^{n0} Σ_m C(n0, m) C(n0 + i − m − 1, i − m) α^m β^{i−m} = Σ_m C(n0, m) C(n0 + i − m − 1, i − m) α^m β^{n0+i−m},

which is the required result.

6.25. We can view the birth and death process by an alternative differencing method. Let pij(t) be the conditional probability

pij(t) = P(N(t) = j|N(0) = i),

where N(t) is the random variable representing the population size at time t. Assume that the process is in the (fixed) state N(t) = j at times t and t + δt, and decide how this can arise from an incremental change δt in the time. If the birth and death rates are λi and µi, explain why

pij(t + δt) = pij(t)(1 − λiδt − µiδt) + λiδt p_{i+1,j}(t) + µiδt p_{i−1,j}(t) + o(δt)

for i = 1, 2, 3, . . ., j = 0, 1, 2, . . .. Take the limit as δt → 0, and confirm that pij(t) satisfies the differential equation

dpij(t)/dt = −(λi + µi)pij(t) + λi p_{i+1,j}(t) + µi p_{i−1,j}(t).

How should p_{0,j}(t) be interpreted?

In this approach the final state in the process remains fixed, that is, the j in pij. We now view pij(t + δt) as pij(δt + t): in other words, we consider what happens in an initial interval δt. There will be a birth with probability λiδt in a time δt, or a death with probability µiδt. Then

pij(t + δt) = pij(t)[1 − λiδt − µiδt] + λiδt p_{i+1,j}(t) + µiδt p_{i−1,j}(t) + o(δt)

for i = 1, 2, 3, . . .; j = 0, 1, 2, 3, . . .. Hence

[pij(t + δt) − pij(t)]/δt = −(λi + µi)pij(t) + λi p_{i+1,j}(t) + µi p_{i−1,j}(t) + o(1).

In the limit δt → 0,

dpij(t)/dt = −(λi + µi)pij(t) + λi p_{i+1,j}(t) + µi p_{i−1,j}(t),

where we require

p_{0,j}(t) = P(N(t) = j|N(0) = 0) = 0 (j > 0);  1 (j = 0).

6.26. Consider a birth and death process in which the rates are λi = λi and µi = µi, and the initial population size is n0 = 1. If p1,j (t) = P(N (t) = j|N (0) = 1), it was shown in Problem 6.25 that p1,j satisfies

dp1,j (t)/dt = −(λ + µ)p1,j (t) + λp2,j (t) + µp0,j (t),   (j = 0, 1, 2, . . .),

where

p0,j (t) = { 0, j > 0;  1, j = 0. }

If

G(i, s, t) = Σ_{j=0}^{∞} pij (t)s^j ,

show that

∂G(1, s, t)/∂t = −(λ + µ)G(1, s, t) + λG(2, s, t) + µ.

Explain why G(2, s, t) = [G(1, s, t)]^2 (see Section 6.5). Hence solve what is effectively an ordinary differential equation for G(1, s, t), and confirm that

G(1, s, t) = [µ(1 − s) − (µ − λs)e^{−(λ−µ)t}]/[λ(1 − s) − (µ − λs)e^{−(λ−µ)t}],

as in eqn (6.23) with n0 = 1.


Given

dp1,j (t)/dt = −(λ + µ)p1,j (t) + λp2,j (t) + µp0,j (t),

multiply the equation by s^j and sum over j = 0, 1, 2, . . .. Since p0,j (t) contributes only at j = 0,

∂G(1, s, t)/∂t = −(λ + µ)G(1, s, t) + λG(2, s, t) + µ.

Also

G(2, s, t) = E[s^{N1(t)}]E[s^{N2(t)}] = E[s^{N1(t)}]^2 = G(1, s, t)^2 ,

since a population started by two individuals behaves as two independent populations each descended from a single individual. Therefore

∂G(1, s, t)/∂t = λG(1, s, t)^2 − (λ + µ)G(1, s, t) + µ.

This is a separable first-order differential equation with general solution

∫ dG(1, s, t)/[(λG(1, s, t) − µ)(G(1, s, t) − 1)] = ∫ dt + A(s) = t + A(s),

where the 'constant' A(s) is a function of s. Assume that |G(1, s, t)| < min(1, µ/λ). Then, by partial fractions,

t + A(s) = [1/(λ − µ)] [∫ dG(1, s, t)/(G(1, s, t) − 1) − ∫ λ dG(1, s, t)/(λG(1, s, t) − µ)]
         = [1/(λ − µ)] ln[(1 − G(1, s, t))/(µ − λG(1, s, t))].

Hence

G(1, s, t) = [µ + B(s)e^{−(λ−µ)t}]/[λ + B(s)e^{−(λ−µ)t}],

where B(s) (more convenient than A(s)) is a function to be determined by the initial conditions. Initially, G(1, s, 0) = s, so that B(s) = (λs − µ)/(1 − s). Substituting back, G(1, s, t) agrees with G(1, s, t) in eqn (6.23) with n0 = 1.

6.27. In a birth and death process with parameters λ and µ, (µ > λ), and initial population size n0 , show that the mean time to extinction of the random variable Tn0 is given by

E(Tn0 ) = n0 µ(µ − λ)^2 ∫_0^∞ t e^{−(µ−λ)t} [µ − µe^{−(µ−λ)t}]^{n0−1}/[µ − λe^{−(µ−λ)t}]^{n0+1} dt.

If n0 = 1, using integration by parts, evaluate the integral over the interval (0, τ ), and then let τ → ∞ to show that

E(T1 ) = −(1/λ) ln[(µ − λ)/µ].

The distribution function for Tn0 is given by

F (t) = p0 (t) = G(0, t) = {[µ − µe^{−(µ−λ)t}]/[µ − λe^{−(µ−λ)t}]}^{n0}

(put s = 0 in (6.23)). Its density is

f (t) = dF (t)/dt = n0 µ(µ − λ)^2 e^{−(µ−λ)t} [µ − µe^{−(µ−λ)t}]^{n0−1}/[µ − λe^{−(µ−λ)t}]^{n0+1},   (t > 0).

The mean time to extinction is

E(Tn0 ) = ∫_0^∞ t f (t) dt = n0 µ(µ − λ)^2 ∫_0^∞ t e^{−(µ−λ)t} [µ − µe^{−(µ−λ)t}]^{n0−1}/[µ − λe^{−(µ−λ)t}]^{n0+1} dt,

as required.

If n0 = 1, then

E(T1 ) = µ(µ − λ)^2 ∫_0^∞ t e^{−(µ−λ)t}/[µ − λe^{−(µ−λ)t}]^2 dt
       = [(µ − λ)^2/µ] ∫_0^∞ t e^{−(µ−λ)t}/[1 − (λ/µ)e^{−(µ−λ)t}]^2 dt.

Integrate by parts over the finite interval (0, τ ), using

d/dt {−µ/[λ(µ − λ)(1 − (λ/µ)e^{−(µ−λ)t})]} = e^{−(µ−λ)t}/[1 − (λ/µ)e^{−(µ−λ)t}]^2 :

∫_0^τ t e^{−(µ−λ)t}/[1 − (λ/µ)e^{−(µ−λ)t}]^2 dt
 = [−µt/(λ(µ − λ)[1 − (λ/µ)e^{−(µ−λ)t}])]_0^τ + [µ/(λ(µ − λ))] ∫_0^τ dt/[1 − (λ/µ)e^{−(µ−λ)t}]
 = −µτ/(λ(µ − λ)[1 − (λ/µ)e^{−(µ−λ)τ}]) + [µ/(λ(µ − λ))] [t + (1/(µ − λ)) ln{1 − (λ/µ)e^{−(µ−λ)t}}]_0^τ
 = [µ/(λ(µ − λ))] {τ − τ/[1 − (λ/µ)e^{−(µ−λ)τ}] + [1/(µ − λ)] ln{1 − (λ/µ)e^{−(µ−λ)τ}} − [1/(µ − λ)] ln{1 − (λ/µ)}}
 → −[µ/(λ(µ − λ)^2)] ln{1 − (λ/µ)},

as τ → ∞, since µ > λ. Finally

E(T1 ) = −(1/λ) ln[(µ − λ)/µ].
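As a numerical sanity check (not part of the original solution), the closed form for E(T1) can be compared with direct integration of t f(t); the rates λ = 1, µ = 2 below are arbitrary sample values.

```python
import math

lam, mu = 1.0, 2.0            # sample rates with mu > lam
b = mu - lam

def f(t):
    # density of the extinction time T_1 (n0 = 1) derived above
    return mu * b**2 * math.exp(-b*t) / (mu - lam*math.exp(-b*t))**2

# trapezoidal estimate of E(T_1) = integral of t f(t) over [0, 60]
T, n = 60.0, 200000
h = T / n
mean = h * (sum(i*h*f(i*h) for i in range(1, n)) + 0.5*T*f(T))

exact = -(1.0/lam) * math.log((mu - lam)/mu)
print(mean, exact)            # both close to ln 2
```

With these values the exact mean is ln 2, and the numerical integral agrees to several decimal places.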

6.28. A death process (see Section 6.4) has a parameter µ and the initial population size is n0 . Its probability generating function is

G(s, t) = [1 − e^{−µt}(1 − s)]^{n0}.

Show that the mean time to extinction is

(n0/µ) Σ_{k=0}^{n0−1} (−1)^k C(n0 − 1, k)/(k + 1)^2 .

Let Tn0 be a random variable representing the time to extinction. The probability distribution of Tn0 is given by

F (t) = p0 (t) = G(0, t) = (1 − e^{−µt})^{n0}.

The mean time to extinction is

E(Tn0 ) = ∫_0^∞ t [dp0 (t)/dt] dt = n0 µ ∫_0^∞ t e^{−µt}(1 − e^{−µt})^{n0−1} dt.

Replace (1 − e^{−µt})^{n0−1} by its binomial expansion, namely

(1 − e^{−µt})^{n0−1} = Σ_{k=0}^{n0−1} (−1)^k C(n0 − 1, k) e^{−kµt},

and integrate the series term-by-term:

E(Tn0 ) = n0 µ Σ_{k=0}^{n0−1} (−1)^k C(n0 − 1, k) ∫_0^∞ t e^{−(k+1)µt} dt = (n0/µ) Σ_{k=0}^{n0−1} (−1)^k C(n0 − 1, k)/(k + 1)^2 ,

since ∫_0^∞ t e^{−(k+1)µt} dt = 1/[(k + 1)µ]^2 .
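A quick numerical check of the alternating-sum formula (not in the original): it should agree with direct integration of t dp0/dt, and with the elementary decomposition of the extinction time into independent exponential stages with rates µn0, µ(n0 − 1), . . . , µ. The values µ = 1.5, n0 = 4 are sample choices.

```python
import math

mu, n0 = 1.5, 4               # sample death rate and initial population size

closed = (n0/mu) * sum((-1)**k * math.comb(n0 - 1, k) / (k + 1)**2
                       for k in range(n0))

def integrand(t):
    return t * n0 * mu * math.exp(-mu*t) * (1 - math.exp(-mu*t))**(n0 - 1)

T, n = 40.0, 200000
h = T / n
numeric = h * sum(integrand(i*h) for i in range(1, n))

stages = (1/mu) * sum(1/j for j in range(1, n0 + 1))   # sum of 1/(mu*j)
print(closed, numeric, stages)
```

All three agree, illustrating the identity Σ (−1)^k C(n0−1, k)/(k+1)^2 = (1/n0) Σ_{j=1}^{n0} 1/j.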

6.29. A colony of cells grows from a single cell without deaths. The probability that a single cell divides into two cells in a time interval δt is λδt + o(δt). As in Problem 6.1, the probability generating function for the process is

G(s, t) = s e^{−λt}/[1 − (1 − e^{−λt})s].

By considering the probability

P(Tn ≤ t) = F (t) = 1 − Σ_{k=1}^{n−1} pk (t),

where Tn is the random variable representing the time that the population is of size n (≥ 2) for the first time, show that

E(Tn ) = (1/λ) Σ_{k=1}^{n−1} 1/k.

The expansion of the generating function is

G(s, t) = Σ_{n=1}^{∞} e^{−λt}(1 − e^{−λt})^{n−1} s^n .

Consider the probability function

F (t) = Σ_{k=n}^{∞} pk (t) = 1 − Σ_{k=1}^{n−1} pk (t) = 1 − Σ_{k=1}^{n−1} e^{−λt}(1 − e^{−λt})^{k−1}.

Its density is, for n ≥ 3 (although it is not required),

f (t) = dF (t)/dt = λe^{−λt} + λe^{−λt} Σ_{k=2}^{n−1} (1 − ke^{−λt})(1 − e^{−λt})^{k−2},   (t > 0),

and for n = 2,

f (t) = λe^{−λt},   (t > 0).

Then

E(Tn ) = lim_{τ→∞} ∫_0^τ t [dF (t)/dt] dt = lim_{τ→∞} {[tF (t)]_0^τ − ∫_0^τ F (t)dt}
 = lim_{τ→∞} {τ − τ Σ_{k=1}^{n−1} e^{−λτ}(1 − e^{−λτ})^{k−1} − ∫_0^τ [1 − Σ_{k=1}^{n−1} e^{−λt}(1 − e^{−λt})^{k−1}] dt}
 = lim_{τ→∞} {−τ Σ_{k=1}^{n−1} e^{−λτ}(1 − e^{−λτ})^{k−1} + (1/λ) Σ_{k=1}^{n−1} (1 − e^{−λτ})^k/k}
 = (1/λ) Σ_{k=1}^{n−1} 1/k,

since ∫_0^τ e^{−λt}(1 − e^{−λt})^{k−1} dt = (1 − e^{−λτ})^k/(λk).

As n → ∞ the series diverges, so that E(Tn ) → ∞ with n.

6.30. In a birth and death process, the population size represented by the random variable N (t) grows as a simple birth process with parameter λ. No deaths occur until time T , when the whole population dies. Suppose that the random variable T has an exponential distribution with parameter µ. The process starts with one individual at time t = 0. What is the probability that the population exists at time t, namely P[N (t) > 0]? What is the conditional probability P[N (t) = n|N (t) > 0] for n = 1, 2, . . .? Hence show that

P[N (t) = n] = e^{−(λ+µ)t}(1 − e^{−λt})^{n−1}.

Construct the probability generating function of this distribution, and find the mean population size at time t.

Since the extinction time T is exponential with parameter µ,

P[N (t) > 0] = e^{−µt}.

Conditional on no deaths, this must be a simple birth process, namely

P[N (t) = n|N (t) > 0] = e^{−λt}(1 − e^{−λt})^{n−1},   (n = 1, 2, . . .).

Hence

P[N (t) = 0] = 1 − P[N (t) > 0] = 1 − e^{−µt},

P[N (t) = n] = P[N (t) = n|N (t) > 0] P[N (t) > 0] = e^{−(λ+µ)t}(1 − e^{−λt})^{n−1},   (n = 1, 2, . . .).

The probability generating function is G(s, t), where

G(s, t) = Σ_{n=0}^{∞} P[N (t) = n] s^n = 1 − e^{−µt} + e^{−(λ+µ)t} Σ_{n=1}^{∞} (1 − e^{−λt})^{n−1} s^n
        = 1 − e^{−µt} + e^{−(λ+µ)t} s/[1 − s(1 − e^{−λt})],

using the formula for the sum of a geometric series. For the mean, we require

∂G(s, t)/∂s = e^{−(λ+µ)t} {1/[1 − s(1 − e^{−λt})] + s(1 − e^{−λt})/[1 − s(1 − e^{−λt})]^2}.

Then the mean is

µ(t) = [∂G(s, t)/∂s]_{s=1} = e^{−(λ+µ)t}[e^{λt} + e^{2λt} − e^{λt}] = e^{(λ−µ)t}.
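The mean can also be confirmed numerically (an added check, not in the original) by summing n P[N(t) = n] directly; λ, µ, t below are sample values.

```python
import math

lam, mu, t = 0.8, 0.5, 2.0    # sample parameters and time
q = 1 - math.exp(-lam*t)

# truncated sum of n P[N(t) = n], with P[N(t) = n] = e^{-(lam+mu)t} q^{n-1}
mean = sum(n * math.exp(-(lam + mu)*t) * q**(n - 1) for n in range(1, 2000))
print(mean, math.exp((lam - mu)*t))
```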

6.31. In a birth and death process, the variable birth and death rates are, for t > 0, respectively given by

λn (t) = λ(t)n > 0, (n = 0, 1, 2, . . .),   µn (t) = µ(t)n > 0, (n = 1, 2, . . .).

If pn (t) is the probability that the population size at time t is n, show that the probability generating function

G(s, t) = Σ_{n=0}^{∞} pn (t)s^n

satisfies

∂G/∂t = (s − 1)[λ(t)s − µ(t)] ∂G/∂s.

Suppose that µ(t) = αλ(t) (α > 0, α ≠ 1), and that the initial population size is n0 . Show that

G(s, t) = [(1 − αq(s, t))/(1 − q(s, t))]^{n0}, where q(s, t) = [(1 − s)/(α − s)] exp[(1 − α) ∫_0^t λ(u)du].

Find the probability of extinction at time t.

Using eqns (6.25), the differential-difference equations are

p′0 (t) = µ(t)p1 (t),
p′n (t) = (n − 1)λ(t)pn−1 (t) − n[λ(t) + µ(t)]pn (t) + (n + 1)µ(t)pn+1 (t),   (n = 1, 2, . . .).

In the usual way multiply the equations by s^n and sum over n:

Σ_{n=0}^{∞} p′n (t)s^n = λ(t) Σ_{n=2}^{∞} (n − 1)pn−1 (t)s^n − [λ(t) + µ(t)] Σ_{n=1}^{∞} npn (t)s^n + µ(t) Σ_{n=0}^{∞} (n + 1)pn+1 (t)s^n .

Let G(s, t) = Σ_{n=0}^{∞} pn (t)s^n . Then the series can be expressed in terms of G(s, t) as

∂G/∂t = λ(t)s^2 ∂G/∂s − [λ(t) + µ(t)]s ∂G/∂s + µ(t) ∂G/∂s = (s − 1)[λ(t)s − µ(t)] ∂G/∂s.

Let µ(t) = αλ(t), (α ≠ 1). Then

∂G/∂t = λ(t)(s − 1)(s − α) ∂G/∂s.

Let dτ = λ(t)dt, so that τ can be defined by

τ = ∫_0^t λ(u)du.

Let ds/dz = (s − 1)(s − α) and define z by

z = ∫ ds/[(s − 1)(s − α)] = [1/(1 − α)] ∫ [1/(s − 1) − 1/(s − α)] ds = [1/(1 − α)] ln[(1 − s)/(α − s)],

where s < min(1, α). Inversion of this equation gives

s = [1 − αe^{(1−α)z}]/[1 − e^{(1−α)z}] = q(z),

say. Let G(s, t) = Q(z, τ ) after the change of variables. Q(z, τ ) satisfies

∂Q/∂τ = ∂Q/∂z.

Since the initial population size is n0 ,

Q(z, 0) = s^{n0} = {[1 − αe^{(1−α)z}]/[1 − e^{(1−α)z}]}^{n0}.

Hence

Q(z, τ ) = {[1 − αe^{(1−α)(z+τ)}]/[1 − e^{(1−α)(z+τ)}]}^{n0}.

Finally, since e^{(1−α)z} = (1 − s)/(α − s),

G(s, t) = [(1 − αq(s, t))/(1 − q(s, t))]^{n0}, where q(s, t) = [(1 − s)/(α − s)] exp[(1 − α) ∫_0^t λ(u)du],

as required. The probability of extinction is

G(0, t) = [(1 − αq(0, t))/(1 − q(0, t))]^{n0}, where q(0, t) = (1/α) exp[(1 − α) ∫_0^t λ(u)du].
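For constant λ(t) = λ (so that ∫λ du = λt and µ = αλ), the extinction probability G(0, t) should reduce to the standard birth-and-death result obtained from eqn (6.23). The following check (not part of the original solution) uses sample values.

```python
import math

lam, alpha, n0, t = 1.0, 1.8, 3, 2.5   # sample values; mu = alpha*lam > lam
mu = alpha * lam

q0 = (1/alpha) * math.exp((1 - alpha)*lam*t)     # q(0,t) with constant rate
G0 = ((1 - alpha*q0) / (1 - q0))**n0             # extinction probability above

b = mu - lam
standard = ((mu - mu*math.exp(-b*t)) / (mu - lam*math.exp(-b*t)))**n0
print(G0, standard)
```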

6.32. A continuous time process has three states E1 , E2 , and E3 . In time δt the probability of a change from E1 to E2 is λδt, from E2 to E3 is also λδt, and from E2 to E1 is µδt. E3 can be viewed as an absorbing state. If pi (t) is the probability that the process is in state Ei (i = 1, 2, 3) at time t, show that

p′1 (t) = −λp1 (t) + µp2 (t),   p′2 (t) = λp1 (t) − (λ + µ)p2 (t),   p′3 (t) = λp2 (t).

Find the probabilities p1 (t), p2 (t), p3 (t), if the process starts in E1 at t = 0. The process survives as long as it is in states E1 or E2 . What is the survival probability, that is P(T > t), of the process?

By the usual birth and death method

p1 (t + δt) = µδt p2 (t) + (1 − λδt)p1 (t) + O((δt)^2 ),
p2 (t + δt) = λδt p1 (t) + (1 − λδt − µδt)p2 (t) + O((δt)^2 ),
p3 (t + δt) = p3 (t) + λδt p2 (t) + O((δt)^2 ).

Let δt → 0, so that the probabilities satisfy

p′1 (t) = µp2 (t) − λp1 (t),   (i)
p′2 (t) = λp1 (t) − (λ + µ)p2 (t),   (ii)
p′3 (t) = λp2 (t).   (iii)

Eliminate p2 (t) between (i) and (ii), so that p1 (t) satisfies

p′′1 (t) + (2λ + µ)p′1 (t) + λ^2 p1 (t) = 0.

This second-order differential equation has the characteristic equation

m^2 + (2λ + µ)m + λ^2 = 0,

which has the solutions m1 , m2 = α ± β, where α = −(1/2)(2λ + µ) and β = (1/2)√(µ(4λ + µ)). Therefore, since p1 (0) = 1,

p1 (t) = Ae^{m1 t} + (1 − A)e^{m2 t}.

From (i),

p2 (t) = (1/µ)[p′1 (t) + λp1 (t)] = (1/µ)[A(m1 + λ)e^{m1 t} + (1 − A)(m2 + λ)e^{m2 t}].

Since p2 (0) = 0, then A = (m2 + λ)/(m2 − m1 ). Finally

p1 (t) = [−(m2 + λ)e^{m1 t} + (m1 + λ)e^{m2 t}]/(m1 − m2 ),

and p3 (t) = 1 − p1 (t) − p2 (t).

The survival probability at time t is p1 (t) + p2 (t).

6.33. In a birth and death process, the birth and death rates are given respectively by λ(t)n and µ(t)n in eqn (6.25). Find the equation for the probability generating function G(s, t). If m(t) is the mean population size at time t (written m(t) here to avoid confusion with the death rate µ(t)), show, by differentiating the equation for G(s, t) with respect to s, that

m′(t) = [λ(t) − µ(t)]m(t)

(assume that (s − 1)∂^2 G(s, t)/∂s^2 = 0 when s = 1). Hence show that

m(t) = n0 exp{∫_0^t [λ(u) − µ(u)]du},

where n0 is the initial population size.

The differential-difference equations for the probability pn (t) are (see eqn (6.26))

p′0 (t) = µ(t)p1 (t),
p′n (t) = λ(t)(n − 1)pn−1 (t) − n[λ(t) + µ(t)]pn (t) + µ(t)(n + 1)pn+1 (t).

Hence the probability generating function G(s, t) satisfies

∂G(s, t)/∂t = [λ(t)s − µ(t)](s − 1) ∂G(s, t)/∂s   (i)

(the method parallels that in Section 6.5). Differentiating (i) with respect to s:

∂^2 G(s, t)/∂s∂t = λ(t)(s − 1) ∂G(s, t)/∂s + [λ(t)s − µ(t)] ∂G(s, t)/∂s + [λ(t)s − µ(t)](s − 1) ∂^2 G(s, t)/∂s^2 .

Put s = 1 and remember that m(t) = Gs (1, t). Then

m′(t) = [λ(t) − µ(t)]m(t).

Hence integration of this differential equation gives

m(t) = n0 exp{∫_0^t [λ(u) − µ(u)]du}.
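The closed form for the mean can be checked against a direct Euler integration of m′(t) = [λ(t) − µ(t)]m(t) (an added numerical check; the time-dependent rates below are arbitrary smooth samples).

```python
import math

lam = lambda u: 1.0 + 0.5*math.sin(u)     # sample birth rate lambda(t)
mu  = lambda u: 0.8 + 0.3*u/(1 + u)       # sample death rate mu(t)
n0, T, n = 5.0, 3.0, 100000
h = T / n

m, integral = n0, 0.0
for i in range(n):
    u = i*h
    m += h*(lam(u) - mu(u))*m             # Euler step for m'(t) = [lam - mu] m
    integral += h*(lam(u) - mu(u))        # Riemann sum for the exponent

closed = n0 * math.exp(integral)
print(m, closed)
```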


Chapter 7

Queues

7.1. In a single-server queue a Poisson process for arrivals of intensity (1/2)λ and for service and departures of intensity λ are assumed. For the corresponding limiting process find (a) pn , the probability that there are n persons in the queue, (b) the expected length of the queue, (c) the probability that there are not more than two persons in the queue, including the person being served in each case.

(a) As in Section 7.3, with ρ = 1/2,

pn = (1 − ρ)ρ^n = 1/2^{n+1}.

(b) If N is the random variable of the number n of persons in the queue (including the person being served), then its expected value is (see Section 7.3(b))

E(N ) = ρ/(1 − ρ) = 1.

(c) The probability that there are not more than two persons in the queue is

p0 + p1 + p2 = 1/2 + 1/4 + 1/8 = 7/8.
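These three answers are easy to confirm numerically (an added check), with ρ = 1/2 as in the problem.

```python
rho = 0.5
p = [(1 - rho)*rho**n for n in range(400)]   # p_n = (1-rho) rho^n, truncated
EN = sum(n*pn for n, pn in enumerate(p))
print(p[0] + p[1] + p[2])    # 0.875 = 7/8
print(EN)                    # close to 1
```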

7.2. Consider a telephone exchange with a very large number of lines available. If n lines are busy, the probability that one of them will become free in small time δt is nµδt. The probability of a new call is λδt (that is, Poisson), with the assumption that the probability of multiple calls is negligible. Show that pn (t), the probability that n lines are busy at time t, satisfies

p′0 (t) = −λp0 (t) + µp1 (t),
p′n (t) = −(λ + nµ)pn (t) + λpn−1 (t) + (n + 1)µpn+1 (t),   (n ≥ 1).

In the limiting process show by induction that

pn = lim_{t→∞} pn (t) = [e^{−λ/µ}/n!] (λ/µ)^n .

Identify the distribution.

If pn = lim_{t→∞} pn (t) (assumed to exist), then the stationary process is defined by the difference equations

−λp0 + µp1 = 0,
(n + 1)µpn+1 − (λ + nµ)pn + λpn−1 = 0.

Assume that

pn = [e^{−λ/µ}/n!] (λ/µ)^n ,   pn−1 = [e^{−λ/µ}/(n − 1)!] (λ/µ)^{n−1}.

Then

pn+1 = [1/(µ(n + 1))][(λ + nµ)pn − λpn−1 ]
     = [1/(µ(n + 1))] e^{−λ/µ} [(λ + nµ)(λ/µ)^n/n! − λ(λ/µ)^{n−1}/(n − 1)!]
     = [e^{−λ/µ}/(µ(n + 1)!)] (λ/µ)^n [λ + nµ − nµ]
     = [e^{−λ/µ}/(n + 1)!] (λ/µ)^{n+1}.

Hence if the formula is true for pn and pn−1 , then it is true for pn+1 . It can be verified directly that p1 and p2 are correct: therefore by induction on the positive integers the formula is proved. The distribution is Poisson with intensity λ/µ.

7.3. For a particular queue, when there are n customers in the system, the probability of an arrival in the small time interval δt is λn δt + o(δt). The service time parameter µn is also a function of n. If pn denotes the probability that there are n customers in the queue in the steady state, show by induction that

pn = p0 λ0 λ1 · · · λn−1/(µ1 µ2 · · · µn ),   (n = 1, 2, . . .),

and find an expression for p0 . If λn = 1/(n + 1) and µn = µ, a constant, find the expected length of the queue.

Let pn (t) be the probability that there are n persons in the queue. Then, by the usual arguments,

p0 (t + δt) = µ1 δt p1 (t) + (1 − λ0 δt)p0 (t),
pn (t + δt) = λn−1 pn−1 (t)δt + µn+1 pn+1 (t)δt + (1 − λn δt − µn δt)pn (t),   (n = 1, 2, . . .).

Divide through by δt, and let δt → 0:

p′0 (t) = µ1 p1 (t) − λ0 p0 (t),
p′n (t) = µn+1 pn+1 (t) − (λn + µn )pn (t) + λn−1 pn−1 (t).

Assume that a limiting stationary process exists, such that pn = lim_{t→∞} pn (t). Then pn satisfies

µ1 p1 − λ0 p0 = 0,
µn+1 pn+1 − (λn + µn )pn + λn−1 pn−1 = 0.

Assume that the given formula is true for pn and pn−1 . Then, using the difference equation above,

pn+1 = [1/µn+1 ][(λn + µn ) λ0 λ1 · · · λn−1/(µ1 µ2 · · · µn ) − λn−1 λ0 λ1 · · · λn−2/(µ1 µ2 · · · µn−1 )] p0
     = λ0 λ1 · · · λn p0/(µ1 µ2 · · · µn+1 ),

showing that the formula is true for pn+1 . It can be verified directly that

p1 = λ0 p0/µ1 ,   p2 = λ0 λ1 p0/(µ1 µ2 ).

Induction proves the result for all n.

The probabilities satisfy Σ_{n=0}^{∞} pn = 1. Therefore

1 = p0 + p0 Σ_{n=1}^{∞} λ0 λ1 · · · λn−1/(µ1 µ2 · · · µn ),

provided that the series converges. If that is the case, then

p0 = 1/[1 + Σ_{n=1}^{∞} λ0 λ1 · · · λn−1/(µ1 µ2 · · · µn )].

If λn = 1/(n + 1) and µn = µ, then

pn = (p0/µ^n )·(1/1)·(1/2) · · · (1/n) = p0/(µ^n n!),

where

p0 = 1/[1 + Σ_{n=1}^{∞} 1/(µ^n n!)] = e^{−1/µ}.

Hence

pn = e^{−1/µ}/(µ^n n!),

which is Poisson with intensity 1/µ. The expected length of the queue is the mean of this distribution:

E(N ) = Σ_{n=1}^{∞} npn = e^{−1/µ} Σ_{n=1}^{∞} 1/(µ^n (n − 1)!) = e^{−1/µ}(1/µ)e^{1/µ} = 1/µ.
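The product formula and the special case can be verified numerically (an added check, with µ = 2 as a sample value): build pn from the recursion, confirm the stationary balance equations, and compare p0 and the mean with e^{−1/µ} and 1/µ.

```python
import math

mu = 2.0
lam = lambda n: 1.0/(n + 1)

w = [1.0]                                  # unnormalised weights w_n
for n in range(1, 80):
    w.append(w[-1]*lam(n - 1)/mu)          # w_n = w_{n-1} * lam_{n-1}/mu_n
p0 = 1.0/sum(w)
p = [p0*x for x in w]

for n in range(1, 79):                     # balance: mu p_n = lam_{n-1} p_{n-1}
    assert abs(mu*p[n] - lam(n - 1)*p[n - 1]) < 1e-12

EN = sum(n*pn for n, pn in enumerate(p))
print(p0, math.exp(-1/mu))
print(EN, 1/mu)
```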

7.4. In a baulked queue (see Example 7.1) not more than m ≥ 2 people are allowed to form a queue. If there are m individuals, where m is fixed, in the queue, then any further arrivals are turned away. If the arrivals form a Poisson process with intensity λ and the service distribution is exponential with parameter µ, show that the expected length of the queue is

[ρ − (m + 1)ρ^{m+1} + mρ^{m+2}]/[(1 − ρ)(1 − ρ^{m+1})],

where ρ = λ/µ. Deduce the expected length if ρ = 1. What is the expected length of the queue if m = 3 and ρ = 1?

From Example 7.1, the probability of a baulked queue having length n is

pn = ρ^n (1 − ρ)/(1 − ρ^{m+1}),   (n = 0, 1, 2, . . . , m),   (ρ ≠ 1).   (i)

The expected length is

E(N ) = Σ_{n=1}^{m} npn = [(1 − ρ)/(1 − ρ^{m+1})] Σ_{n=1}^{m} nρ^n .

Let

S = Σ_{n=1}^{m} nρ^n .

Then

(1 − ρ)S = Σ_{n=1}^{m} ρ^n − mρ^{m+1}.

Further summation of the geometric series gives

S = [ρ − (m + 1)ρ^{m+1} + mρ^{m+2}]/(1 − ρ)^2 ,

so that the expected length of the queue is

E(N ) = [ρ − (m + 1)ρ^{m+1} + mρ^{m+2}]/[(1 − ρ)(1 − ρ^{m+1})],   (ρ ≠ 1).   (ii)

If ρ = 1, then, applying l'Hôpital's rule in calculus to (i),

pn = [ (d/dρ)[ρ^n (1 − ρ)] / (d/dρ)[1 − ρ^{m+1}] ]_{ρ=1} = 1/(m + 1).

In this case the expected length is

E(N ) = Σ_{n=1}^{m} npn = [1/(m + 1)] Σ_{n=1}^{m} n = [1/(m + 1)]·(1/2)m(m + 1) = (1/2)m,

using an elementary formula for the sum of the first m integers. If ρ = 1 and m = 3, then E(N ) = 3/2.

The expected length in (ii) can be re-arranged into

E(N ) = [ρ^{−m−1} − (m + 1)ρ^{−1} + m]/[(ρ^{−1} − 1)(ρ^{−m−1} − 1)] → m,

as ρ → ∞. For the baulked queue there is no restriction on ρ.

7.5. Consider the single-server queue with Poisson arrivals occurring with intensity λ, and exponential service times with parameter µ. In the stationary process, the probability pn that there are n individuals in the queue is given by

pn = (1 − λ/µ)(λ/µ)^n ,   (n = 0, 1, 2, . . .).

Find its probability generating function

G(s) = Σ_{n=0}^{∞} pn s^n .

If λ < µ, use this function to determine the mean and variance of the queue length.

The probability generating function is

G(s) = Σ_{n=0}^{∞} [(λ/µ)^n − (λ/µ)^{n+1}] s^n .

Summation of the two geometric series gives

G(s) = µ/(µ − λs) − (λ/µ)·µ/(µ − λs) = (µ − λ)/(µ − λs).

The first two derivatives of G(s) are

G′(s) = λ(µ − λ)/(µ − λs)^2 ,   G″(s) = 2λ^2 (µ − λ)/(µ − λs)^3 .

Hence the mean and variance are given by

E(N ) = G′(1) = λ/(µ − λ),

V(N ) = G″(1) + G′(1) − [G′(1)]^2 = 2λ^2/(µ − λ)^2 + λ/(µ − λ) − λ^2/(µ − λ)^2 = λµ/(µ − λ)^2 .

Footnote to Problem 7.4: in (i) both the numerator and denominator are zero if ρ = 1; the rule states, under suitable conditions, that if pn = f (ρ)/g(ρ), f (a) = g(a) = 0, and g′(a) ≠ 0, then

lim_{ρ→a} f (ρ)/g(ρ) = f ′(a)/g′(a).
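A direct numerical check of the mean and variance from the distribution itself (added here, with sample rates λ = 2, µ = 3):

```python
lam, mu = 2.0, 3.0
rho = lam/mu
p = [(1 - rho)*rho**n for n in range(3000)]   # geometric queue-length law
EN  = sum(n*q for n, q in enumerate(p))
EN2 = sum(n*n*q for n, q in enumerate(p))
print(EN,  lam/(mu - lam))                    # both close to 2
print(EN2 - EN**2, lam*mu/(mu - lam)**2)      # both close to 6
```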

7.6. A queue is observed to have an average length of 2.8 individuals including the person being served. Assuming the usual exponential distributions for both service times and times between arrivals, what is the traffic density, and the variance of the queue length?

With the usual parameters λ and µ and the random variable N of the length of the queue, the expected length is, with ρ = λ/µ,

E(N ) = ρ/(1 − ρ),

which is 2.8 from the data. Hence the traffic density is ρ = 2.8/3.8 ≈ 0.74. The probability that the queue length is n in the stationary process is (see eqn (7.5)) pn = (1 − ρ)ρ^n . The variance of the queue length is given by

V(N ) = E(N^2 ) − [E(N )]^2 = (1 − ρ) Σ_{n=1}^{∞} n^2 ρ^n − ρ^2/(1 − ρ)^2
      = ρ(1 + ρ)/(1 − ρ)^2 − ρ^2/(1 − ρ)^2
      = ρ/(1 − ρ)^2 .

If ρ = 0.74, then V(N ) ≈ 10.9.

7.7. The differential-difference equations for a queue with parameters λ and µ are (see equation (7.1))

dp0 (t)/dt = µp1 (t) − λp0 (t),
dpn (t)/dt = λpn−1 (t) + µpn+1 (t) − (λ + µ)pn (t),

where pn (t) is the probability that the queue has length n at time t. Let the probability generating function of the distribution {pn (t)} be G(s, t) =

n=0

Show that G(s, t) satisfies the equation s

∂G(s, t) = (s − 1)(λs − µ)G(s, t) + µ(s − 1)p0 (t). ∂t

Unlike the birth and death processes in Chapter 6, this equation contains the unknown probability p0 (t) which complicates its solution. Show that it can be eliminated to leave the following second-order partial differential equation for G(s, t): s(s − 1)

∂G(s, t) ∂G(s, t) ∂ 2 G(s, t) − (s − 1)2 (λs − µ) − − λ(s − 1)2 G(s, t) = 0. ∂t∂s ∂s ∂t

This equation can be solved by Laplace transform methods.

109

Multiply the second equation in the question by sn , sum over all n from n = 1 and add the first equation to the sum resulting in ∞ X

pn′ (t)

∞ X

pn−1 (t)sn + µ

∞ X

=

µp1 (t) − λp0 (t) + λ

=

µp1 (t) − λp0 (t) + λsG(s, t) +

=

1 µ (s − 1)(λs − µ)G(s, t) + (s − 1)p0 (t). s s

n=0

n=1

n=1

pn+1 (t)sn − (λ − µ)

∞ X

pn (t)sn

n=1

µ [G(s, t) − sp1 (t) − p0 (t)] − (λ + µ)[G(s, t) − p0 (s, t)] s

Hence the differential equation for G(s, t) is s

∂G(s, t) = (s − 1)[(λs − µ)G(s, t) + µp0 (t)]. ∂t

Write the differential equation in the form s ∂G(s, t) = (λs − µ)G(s, t) + µp0 (t). s−1 ∂t Differentiate the equation with respect to s to eliminate the term p0 (t), so that ∂ ∂s



s ∂G(s, t) s−1 ∂t



=

∂ [(λs − µ)G(s, t)], ∂s

or

∂G(s, t) s ∂ 2 G(s, t) ∂G(s, t) 1 + = λG(s, t) + (λs − µ) . 2 (s − 1) ∂t s − 1 ∂s∂t ∂s The required result follows. −

7.8. A call centre has r telephones manned at any time, and the traffic density is λ/(rµ) = 0.86. Compute how many telephones should be manned in order that the expected number of callers waiting at any time should not exceed 4. Assume a limiting process with inter-arrival times of calls and service times for all operators both exponential with parameters λ and µ respectively (see Section 7.4).

From (7.11) and (7.12), the expected length of the queue of callers, excluding those being served, is

E(N ) = p0 ρ^{r+1}/[(r − 1)!(r − ρ)^2 ],   (i)

where

p0 = 1/[Σ_{n=0}^{r−1} ρ^n/n! + ρ^r/((r − ρ)(r − 1)!)],   ρ = λ/µ.   (ii)

Figure 7.1: Expected queue length E(N ) versus number r of manned telephones.

Substitute for p0 from (ii) into (i) and compute E(N ) as a function of r with ρ = 0.86r. A graph of E(N ) against r is shown in Figure 7.1 for r = 1, 2, . . . , 10. From the graph the point at r = 6 is (just) below the line E(N ) = 4. The answer is that 6 telephones should be manned.

7.9. Compare the expected lengths of the two queues M (λ)/M (µ)/1 and M (λ)/D(1/µ)/1 with ρ = λ/µ < 1. The queues have parameters such that the mean service time for the former equals the fixed service time in the latter. For which queue would you expect the mean queue length to be the shorter?

From Section 7.3(b), the expected length of the M (λ)/M (µ)/1 queue is, with ρ = λ/µ < 1,

E1 (N ) = ρ/(1 − ρ).

Since λτ = λ/µ = ρ (τ is the fixed service time), the expected length of the M (λ)/D(1/µ)/1 queue is (see end of Section 7.5)

E2 (N ) = ρ(1 − (1/2)ρ)/(1 − ρ).

It follows that

E2 (N ) = [ρ − (1/2)ρ^2 ]/(1 − ρ) ≤ ρ/(1 − ρ) = E1 (N ).

In this case the queue with fixed service time has the shorter expected length.

7.10. A queue is serviced by r servers, with the distribution of the inter-arrival times for the queue being exponential with parameter λ, and each server has a common exponential service time distribution with parameter µ. If N is the random variable for the length of the queue including those being served, show that its expected value is

E(N ) = p0 [Σ_{n=1}^{r−1} ρ^n/(n − 1)! + ρ^r (r^2 + ρ(1 − r))/((r − 1)!(r − ρ)^2 )],

where ρ = λ/µ < r, and

p0 = 1/[Σ_{n=0}^{r−1} ρ^n/n! + ρ^r/((r − ρ)(r − 1)!)]

(see equation (7.11)). If r = 2, show that

E(N ) = 4ρ/(4 − ρ^2 ).

For what interval of values of ρ is the expected length of the queue less than the number of servers?

For the M (λ)/M (µ)/r queue, the probability that there are n persons in the queue is

pn = { ρ^n p0/n!, n < r;  ρ^n p0/(r^{n−r} r!), n ≥ r. }

. . .

. . . t > 0, obtain the reliability function R(t) and the failure rate function r(t) for the component. Obtain the expected life of the component.

For the given density, the distribution function is

F (t) = { 0, 0 ≤ t ≤ t0 ;  (t − t0 )/(t1 − t0 ), t0 < t < t1 ;  1, t ≥ t1 . }

Therefore the reliability function R(t) is

R(t) = 1 − F (t) = { 1, 0 ≤ t ≤ t0 ;  (t1 − t)/(t1 − t0 ), t0 < t < t1 ;  0, t ≥ t1 , }

and the failure rate function r(t) is

r(t) = f (t)/R(t) = { 0, 0 ≤ t ≤ t0 ;  1/(t1 − t), t0 < t < t1 ;  does not exist, t ≥ t1 . }

Failure of the component will not occur for 0 ≤ t ≤ t0 since R(t) = 1 in this interval. The component will not survive beyond t = t1 . If T is a random variable of the lifetime of the component, then the expected lifetime is given by

E(T ) = ∫_0^∞ tf (t)dt = ∫_{t0}^{t1} t dt/(t1 − t0 ) = (t1^2 − t0^2 )/[2(t1 − t0 )] = (1/2)(t0 + t1 ).
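The expected lifetime also follows from E(T) = ∫ R(t) dt (eqn (8.8)); a small numerical check (added, with sample endpoints t0 = 1, t1 = 3):

```python
t0, t1 = 1.0, 3.0                          # sample interval endpoints

def R(t):
    # reliability for the uniform lifetime density on (t0, t1)
    if t <= t0: return 1.0
    if t >= t1: return 0.0
    return (t1 - t)/(t1 - t0)

n = 300000
h = t1/n
ET = h*sum(R(i*h) for i in range(n))       # Riemann sum for the integral of R
print(ET, (t0 + t1)/2)                     # both close to 2
```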

8.2. Find the reliability function R(t) and the failure rate function r(t) for the gamma density

f (t) = λ^2 te^{−λt},   t > 0.

How does r(t) behave for large t? Find the mean and variance of the time to failure.

For the given gamma density,

F (t) = λ^2 ∫_0^t se^{−λs}ds = 1 − e^{−λt}(1 + λt).

Hence the reliability function is

R(t) = 1 − F (t) = (1 + λt)e^{−λt}.

The failure rate function is

r(t) = f (t)/R(t) = λ^2 t/(1 + λt).

For fixed λ and large t,

r(t) = λ[1 + 1/(λt)]^{−1} = λ + O(t^{−1}),

as t → ∞. If T is a random variable of the time to failure, then its expected value is

E(T ) = ∫_0^∞ sf (s)ds = λ^2 ∫_0^∞ s^2 e^{−λs}ds = 2/λ.

Also the variance is

V(T ) = ∫_0^∞ s^2 f (s)ds − (2/λ)^2 = λ^2 ∫_0^∞ s^3 e^{−λs}ds − 4/λ^2 = 6/λ^2 − 4/λ^2 = 2/λ^2 .

6 4 2 4 = 2 − 2 = 2. λ2 λ λ λ

These are the mean and variance of a gamma distribution with parameters λ and 2. 8.3. A failure rate function is given by r(t) =

t , 1 + t2

t ≥ 0,

The rate of failures peaks at t = 1 and then declines towards zero as t → ∞: failure becomes less likely with time (see Figure 8.1 ). Find the reliability function, and the corresponding probability density.

Figure 8.1: Failure rate distribution r(t) with a = 1 and c = 1

In terms of r(t) (see eqn (8.5), the reliability function is given by

 Z

R(t) = exp −

0

t



 Z

r(s)ds = exp −

t 0

sds 1 + s2



= exp[− 12 ln(1 + t2 )] = √

for t ≥ 0. Hence the distribution function F (t) = 1 − R(t) = 1 − √ Finally, the density is given by f (t) = F ′ (t) =

122

1 , (1 + t2 ) t 3

(1 + t2 ) 2

(t ≥ 0).

.

1 , (1 + t2 )
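The three functions are consistent: R = exp(−∫ r) and f = r R. A numerical spot check (added) at the sample point t = 1.7:

```python
import math

def r(s): return s/(1 + s*s)            # given failure rate
def R(t): return 1/math.sqrt(1 + t*t)   # derived reliability
def f(t): return t/(1 + t*t)**1.5       # derived density

t, n = 1.7, 100000
h = t/n
H = h*sum(r((i + 0.5)*h) for i in range(n))   # midpoint rule for the hazard integral
print(math.exp(-H), R(t))    # R(t) = exp(-integral of r)
print(f(t), r(t)*R(t))       # f = r * R
```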

8.4. A piece of office equipment has a piecewise failure rate function given by

r(t) = { 2λ1 t, 0 < t ≤ t0 ;  2(λ1 − λ2 )t0 + 2λ2 t, t > t0 },   λ1 , λ2 > 0.

Find its reliability function.

The reliability function is given by

R(t) = exp[−∫_0^t r(s)ds],

where, for 0 < t ≤ t0 ,

∫_0^t r(s)ds = 2λ1 ∫_0^t s ds = λ1 t^2 ,

and, for t > t0 ,

∫_0^t r(s)ds = ∫_0^{t0} r(s)ds + ∫_{t0}^t r(s)ds
 = λ1 t0^2 + ∫_{t0}^t [2(λ1 − λ2 )t0 + 2λ2 s]ds
 = λ1 t0^2 + 2(λ1 − λ2 )t0 (t − t0 ) + λ2 (t^2 − t0^2 )
 = t0 (λ1 − λ2 )(2t − t0 ) + λ2 t^2 .

Hence the reliability function is

R(t) = { e^{−λ1 t^2}, 0 < t ≤ t0 ;  e^{−[t0 (λ1 −λ2 )(2t−t0 )+λ2 t^2 ]}, t > t0 . }
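Since R(t) = exp(−∫0^t r(s)ds), the two-piece answer can be verified by numerical integration of the hazard (an added check; λ1, λ2, t0 below are sample values):

```python
import math

lam1, lam2, t0 = 0.3, 0.7, 2.0

def r(s):
    return 2*lam1*s if s <= t0 else 2*(lam1 - lam2)*t0 + 2*lam2*s

def R(t):
    if t <= t0:
        return math.exp(-lam1*t*t)
    return math.exp(-(t0*(lam1 - lam2)*(2*t - t0) + lam2*t*t))

t, n = 3.5, 200000
h = t/n
H = h*sum(r((i + 0.5)*h) for i in range(n))   # midpoint rule, point beyond t0
print(math.exp(-H), R(3.5))
```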

8.5. A laser printer is observed to have a failure rate function r(t) = 2λt (t > 0) per hour whilst in use, where λ = 0.00021 (hours)^{−2}: r(t) is a measure of the probability of the printer failing in any hour given that it was operational at the beginning of the hour. What is the probability that the printer is working after 40 hours of use? Find the probability density function for the time to failure. What is the expected time before the printer will need maintenance?

Since r(t) = 2λt, the reliability function is

R(t) = exp[−2λ ∫_0^t s ds] = e^{−λt^2}.

Hence R(40) = e^{−0.00021×40×40} = 0.715: the probability that the printer is working after 40 hours is 0.715. The probability of failure is F (t) = 1 − R(t), so that F (t) = 1 − e^{−λt^2}, with density f (t) = F ′(t) = 2λte^{−λt^2}. By (8.8), the expectation of T , the time to failure, is

E(T ) = ∫_0^∞ R(t)dt = ∫_0^∞ e^{−λt^2}dt = (1/2)√(π/λ) = 61.2 hours

(see the Appendix for the value of the integral).
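The two numerical values quoted can be reproduced directly (an added check):

```python
import math

lam = 0.00021                        # (hours)^{-2}
R40 = math.exp(-lam*40*40)           # survival probability at 40 hours
ET  = 0.5*math.sqrt(math.pi/lam)     # integral of exp(-lam t^2) over [0, inf)
print(round(R40, 3), round(ET, 1))   # 0.715 61.2
```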

8.6. The time to failure is assumed to be gamma with parameters α and n, with

f (t) = α(αt)^{n−1} e^{−αt}/(n − 1)!,   t > 0.

Show that the reliability function is given by

R(t) = e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!.

Find the failure rate function and show that lim_{t→∞} r(t) = α. What is the expected time to failure?

The gamma distribution function is

F (t, α, n) = ∫_0^t α(αs)^{n−1} e^{−αs} ds/(n − 1)! = [α^n/(n − 1)!] ∫_0^t s^{n−1} e^{−αs} ds
 = −[α^{n−1} t^{n−1}/(n − 1)!] e^{−αt} + F (t, α, n − 1)
 = 1 − e^{−αt} [α^{n−1} t^{n−1}/(n − 1)! + α^{n−2} t^{n−2}/(n − 2)! + · · · + αt/1! + 1]
 = 1 − e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!,

after repeated integration by parts. The reliability function is therefore

R(t) = 1 − F (t, α, n) = e^{−αt} Σ_{r=0}^{n−1} α^r t^r/r!.

The failure rate function r(t) is defined by

r(t) = f (t)/R(t) = α^n t^{n−1}/[(n − 1)! Σ_{r=0}^{n−1} α^r t^r/r!].

For the limit, express r(t) in the form

r(t) = α^n/[(n − 1)! Σ_{r=0}^{n−1} α^r t^{r−n+1}/r!] → α^n/[(n − 1)! α^{n−1}/(n − 1)!] = α,

as t → ∞. The expected time to failure is

E(T ) = ∫_0^∞ α^n t^n e^{−αt} dt/(n − 1)! = [α^n/(n − 1)!]·[n!/α^{n+1}] = n/α.
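The identities R′ = −f and r(t) → α can be spot-checked numerically (added; α = 1.3, n = 5 are sample parameter values):

```python
import math

alpha, npar = 1.3, 5                 # sample gamma parameters

def R(t):
    return math.exp(-alpha*t)*sum((alpha*t)**r/math.factorial(r) for r in range(npar))

def f(t):
    return alpha*(alpha*t)**(npar - 1)*math.exp(-alpha*t)/math.factorial(npar - 1)

t, h = 2.0, 1e-6
deriv = (R(t - h) - R(t + h))/(2*h)  # -R'(t) should equal f(t)
print(deriv, f(t))

print(f(500.0)/R(500.0), alpha)      # hazard approaches alpha for large t
```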

8.7. An electrical generator has an exponentially distributed failure time with parameter λf , and the subsequent repair time is exponentially distributed with parameter λr . The generator is started up at time t = 0. What is the mean time for the generator to fail, and the mean time from t = 0 for it to be operational again?

As in Section 8.4, the mean time to failure is 1/λf . The mean repair time is 1/λr , so that the mean time to the restart is

1/λf + 1/λr .

8.8. A hospital takes a grid supply of electricity which has a constant failure rate λ. This supply is backed up by a stand-by generator which has a gamma distributed failure time with parameters (2, µ). Find the reliability function R(t) for the whole electricity supply. Assuming that time is measured in hours, what should the relation between the parameters λ and µ be in order that R(1000) = 0.999?

For the grid supply, the reliability function is Rg (t) = e^{−λt}. For the stand-by supply, the reliability function is (see Problem 8.6)

Rs (t) = e^{−µt} Σ_{r=0}^{1} µ^r t^r/r! = e^{−µt}(1 + µt).

The reliability function for the system is

R(t) = 1 − [1 − Rg (t)][1 − Rs (t)] = 1 − [1 − e^{−λt}][1 − (1 + µt)e^{−µt}]
     = e^{−λt} − (1 + µt)e^{−(λ+µ)t} + (1 + µt)e^{−µt}.

Let T = 1000 hours. Then, solving the equation above for λ at time t = T , we have

λ = −(1/T ) ln{[R(T ) − (1 + µT )e^{−µT}]/[1 − (1 + µT )e^{−µT}]},

where R(T ) = R(1000) = 0.999.

8.9. The components in a renewal process with instant renewal are identical with constant failure rate λ = (1/50) (hours)^{−1}. If the system has one spare component which can take over when the first fails, find the probability that the system is operational for at least 24 hours. How many spares should be carried to ensure that continuous operation for 24 hours occurs with probability 0.98?

Let T1 and T2 be the times to failure of the components, and let S2 = T1 + T2 be the time to failure of the system with one spare. If τ = 24 hours is the operational time to be considered, then, as in Example 8.5,

P(S2 < τ ) = 1 − (1 + λτ )e^{−λτ}.

Hence

P(S2 < 24) = 1 − (1 + 24/50)e^{−24/50} = 1 − 0.916.

The required probability is 0.916.

The second part is the reverse problem: given the probability, we have to compute n. If Sn is the time to failure, then

P(Sn < τ ) = Fn (τ ) = [λ^n/(n − 1)!] ∫_0^τ s^{n−1} e^{−λs} ds = 1 − e^{−λτ} [1 + λτ + (λτ )^2/2! + · · · + (λτ )^{n−1}/(n − 1)!].

The smallest value of n is required which makes 1 − Fn (24) > 0.98. Computation gives

1 − F2 (24) = 0.916,   1 − F3 (24) = 0.987 > 0.98.

Three components are required.

8.10. A device contains two components c1 and c2 with independent failure times T1 and T2 from time t = 0. If the densities of the times to failure are f1 and f2 with probability distribution functions F1 and F2, show that the probability that c1 fails before c2 is given by

P{T1 < T2} = ∫_{y=0}^∞ ∫_{x=0}^y f1(x)f2(y) dx dy = ∫_{y=0}^∞ F1(y)f2(y) dy.

Find the probability P{T1 < T2} in the cases: (a) both failure times are exponentially distributed with parameters λ1 and λ2; (b) both failure times have gamma distributions with parameters (2, λ1) and (2, λ2).

The probability that c1 fails before c2 is

P(T1 < T2) = ∫∫_A f1(x)f2(y) dx dy,

where the region A is shown in Figure 8.2. As a repeated integral the double integral can be expressed as

P(T1 < T2) = ∫₀^∞ ∫₀^y f1(x)f2(y) dx dy = ∫₀^∞ [F1(y) − F1(0)]f2(y) dy = ∫₀^∞ F1(y)f2(y) dy,

since F1(0) = 0.

Figure 8.2: Region A in Problem 8.10.

(a) For exponentially distributed failure times,

f2(y) = λ2 e^{−λ2 y},    F1(y) = 1 − e^{−λ1 y}.

Therefore

P(T1 < T2) = ∫₀^∞ (1 − e^{−λ1 y})λ2 e^{−λ2 y} dy
           = λ2[−(1/λ2)e^{−λ2 y} + (1/(λ1 + λ2))e^{−(λ1+λ2)y}]₀^∞
           = λ2[(1/λ2) − 1/(λ1 + λ2)]
           = λ1/(λ1 + λ2).

(b) For gamma distributions with parameters (2, λ1) and (2, λ2),

f2(y) = λ2² y e^{−λ2 y},    F1(y) = 1 − (1 + λ1 y)e^{−λ1 y}.

Hence

P(T1 < T2) = ∫₀^∞ (1 − e^{−λ1 y} − λ1 y e^{−λ1 y})λ2² y e^{−λ2 y} dy = λ1²(λ1 + 3λ2)/(λ1 + λ2)³.
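Both answers can be corroborated by simulation. The following sketch (illustrative, not from the text) estimates P(T1 < T2) by sampling, representing a gamma (2, λ) variate as the sum of two exponential(λ) variates:

```python
import random

def p_first_failure(draw1, draw2, n=200_000, seed=1):
    # Monte Carlo estimate of P(T1 < T2) for independent failure times
    rng = random.Random(seed)
    return sum(draw1(rng) < draw2(rng) for _ in range(n)) / n

l1, l2 = 1.0, 2.0

# (a) exponential lifetimes: exact value l1 / (l1 + l2) = 1/3
est_a = p_first_failure(lambda r: r.expovariate(l1), lambda r: r.expovariate(l2))

# (b) gamma(2, l) lifetime = sum of two exponential(l) stages
gamma2 = lambda l: (lambda r: r.expovariate(l) + r.expovariate(l))
est_b = p_first_failure(gamma2(l1), gamma2(l2))
exact_b = l1**2 * (l1 + 3 * l2) / (l1 + l2) ** 3

print(est_a, est_b, exact_b)
```

With these parameters the exact values are 1/3 and 7/27, and the estimates agree to about two decimal places.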

8.11. Let T be the failure time of a component. Suppose that the distribution function of T is F(t) = P(T ≤ t), with density

f(t) = α1 e^{−λ1 t} + α2 e^{−λ2 t},    t ≥ 0,    α1, α2 > 0,    λ1, λ2 > 0,

where the parameters satisfy

α1/λ1 + α2/λ2 = 1.

Find the reliability function R(t) and the failure rate function r(t) for this ‘double’ exponential distribution. How does r(t) behave as t → ∞?

The probability distribution function is

F(t) = ∫₀^t f(s) ds = ∫₀^t (α1 e^{−λ1 s} + α2 e^{−λ2 s}) ds
     = [−(α1/λ1)e^{−λ1 s} − (α2/λ2)e^{−λ2 s}]₀^t
     = (α1/λ1) + (α2/λ2) − (α1/λ1)e^{−λ1 t} − (α2/λ2)e^{−λ2 t}
     = 1 − (α1/λ1)e^{−λ1 t} − (α2/λ2)e^{−λ2 t}.

The reliability function is therefore

R(t) = 1 − F(t) = (α1/λ1)e^{−λ1 t} + (α2/λ2)e^{−λ2 t}.

The failure rate function is

r(t) = f(t)/R(t) = [α1 e^{−λ1 t} + α2 e^{−λ2 t}]/[(α1/λ1)e^{−λ1 t} + (α2/λ2)e^{−λ2 t}].

As t → ∞ the more slowly decaying exponential dominates, so that

r(t) → λ1 if λ2 > λ1;    r(t) → λ2 if λ2 < λ1;    r(t) → λ if λ1 = λ2 = λ.
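A quick numerical check of this limit (an illustrative sketch, with parameters chosen to satisfy the constraint α1/λ1 + α2/λ2 = 1):

```python
from math import exp

def r(t, a1, a2, l1, l2):
    # failure rate r(t) = f(t) / R(t) for the 'double' exponential density
    f = a1 * exp(-l1 * t) + a2 * exp(-l2 * t)
    R = (a1 / l1) * exp(-l1 * t) + (a2 / l2) * exp(-l2 * t)
    return f / R

l1, l2 = 1.0, 3.0
a1, a2 = 0.5 * l1, 0.5 * l2   # so that a1/l1 + a2/l2 = 1
# r(0) = 2, while r(40) is close to min(l1, l2) = 1
print(r(0.0, a1, a2, l1, l2), r(40.0, a1, a2, l1, l2))
```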

8.12. The lifetimes of components in a renewal process with instant renewal are identically distributed with constant failure rate λ. Find the probability that at least three components have been replaced by time t.

In the notation of Section 8.7,

P(S3 ≤ t) = F3(t) = ∫₀^t F2(t − y)f(y) dy,

where

F2(t) = 1 − (1 + λt)e^{−λt},    f(t) = λe^{−λt}.

Then

P(S3 ≤ t) = ∫₀^t [1 − (1 + λ(t − y))e^{−λ(t−y)}]λe^{−λy} dy = 1 − (1 + λt + ½λ²t²)e^{−λt}.
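The convolution integral can be verified numerically. The sketch below (illustrative, with arbitrary λ and t) compares a midpoint-rule evaluation with the closed form:

```python
from math import exp

lam, t = 0.5, 3.0

def F2(u):
    # distribution function of the sum of two exponential(lam) lifetimes
    return 1 - (1 + lam * u) * exp(-lam * u)

# midpoint-rule evaluation of the convolution integral
# ∫_0^t F2(t - y) * lam * e^{-lam*y} dy
n = 20_000
h = t / n
integral = sum(F2(t - (k + 0.5) * h) * lam * exp(-lam * (k + 0.5) * h)
               for k in range(n)) * h
closed_form = 1 - (1 + lam * t + 0.5 * lam**2 * t**2) * exp(-lam * t)
print(integral, closed_form)
```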

8.13. The lifetimes of components in a renewal process with instant renewal are identically distributed with a failure time which is uniformly distributed with density

f(t) = 1/k, (0 ≤ t ≤ k);    f(t) = 0, elsewhere.

For Problem 8.15, let Tt0 = T − t0 be the residual lifetime of a component, with lifetime distribution function F, which is still operating at time t0, and let Ft0(t) be its distribution function. Then

Ft0(t) = P(T − t0 ≤ t | T > t0)
       = P(t0 < T ≤ t + t0)/P(T > t0)   (by eqn (1.2))
       = [F(t + t0) − F(t0)]/[1 − F(t0)],

as required. For the mean,

E(Tt0) = ∫₀^∞ [1 − (F(t + t0) − F(t0))/(1 − F(t0))] dt
       = [1/(1 − F(t0))] ∫₀^∞ [1 − F(t + t0)] dt
       = [1/(1 − F(t0))] ∫_{t0}^∞ [1 − F(u)] du

(where u = t + t0).
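For the exponential distribution this mean residual life is constant, whatever t0 (the memoryless property). A numerical sketch of the last formula (illustrative, not from the text):

```python
from math import exp

def mean_residual_life(F, t0, upper=200.0, n=100_000):
    # E(T - t0 | T > t0) = ∫_{t0}^∞ [1 - F(u)] du / [1 - F(t0)], midpoint rule
    h = (upper - t0) / n
    integral = sum(1 - F(t0 + (k + 0.5) * h) for k in range(n)) * h
    return integral / (1 - F(t0))

lam = 0.5
F = lambda t: 1 - exp(-lam * t)
# both values are close to 1/lam = 2
print(mean_residual_life(F, 0.0), mean_residual_life(F, 10.0))
```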

8.16. Suppose that the time to failure, T, of a system is uniformly distributed over (0, t1), given by

U(t) = t/t1 for 0 ≤ t ≤ t1;    U(t) = 1 for t > t1.

Using the result from Problem 8.15, find the conditional probability function assuming that the system is still working at time t = t0.

There are two cases to consider: t0 ≤ t1 and t0 > t1.
• t0 ≤ t1. In the formula in Problem 8.15,

U(t + t0) = (t + t0)/t1 for 0 ≤ t + t0 ≤ t1;    U(t + t0) = 1 for t + t0 > t1.

Hence

Ut0(t) = [(t + t0)/t1 − t0/t1]/[1 − t0/t1] = t/(t1 − t0) for 0 ≤ t ≤ t1 − t0;
Ut0(t) = [1 − t0/t1]/[1 − t0/t1] = 1 for t > t1 − t0.

• t0 > t1. Ut0(t) = P(T − t0 ≤ t | T > t0) = 1.

8.17. In the bridge system represented by Figure 8.1(d) suppose that all components have the same reliability function Rc(t). Show that the reliability function R(t) is given by

R(t) = 2Rc(t)² + 2Rc(t)³ − 5Rc(t)⁴ + 2Rc(t)⁵.

Suppose that the bridge c3 is removed. What is the reliability function Rx(t) for this system? Show that R(t) > Rx(t). What does this inequality imply?

The result follows from (8.16) with R3(t) = Rc(t). With the bridge absent the reliability function is

Rx(t) = 1 − [1 − Rc(t)²][1 − Rc(t)²] = 2Rc(t)² − Rc(t)⁴.

Finally

R(t) = 2Rc(t)² − Rc(t)⁴ + 2Rc(t)³ − 4Rc(t)⁴ + 2Rc(t)⁵
     = Rx(t) + 2Rc(t)³[1 − Rc(t)]²
     ≥ Rx(t).

Perhaps not surprisingly, the bridge improves reliability.

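The polynomial can be confirmed by brute-force enumeration of component states. The sketch below assumes the standard bridge layout, two two-component paths with c3 linking their midpoints; the labelling is illustrative:

```python
from itertools import product

# minimal path sets for the bridge: c1-c2, c4-c5, c1-c3-c5, c4-c3-c2
# components indexed 0..4 as (c1, c2, c3, c4, c5)
PATHS = [(0, 1), (3, 4), (0, 2, 4), (3, 2, 1)]

def reliability(p, paths=PATHS):
    # system reliability by enumerating all 2^5 component states
    total = 0.0
    for state in product((0, 1), repeat=5):
        if any(all(state[i] for i in path) for path in paths):
            prob = 1.0
            for s in state:
                prob *= p if s else (1 - p)
            total += prob
    return total

p = 0.9
poly = 2 * p**2 + 2 * p**3 - 5 * p**4 + 2 * p**5
print(reliability(p), poly)
```

At p = 0.5 the bridge reliability is exactly 0.5, as the polynomial also gives.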

Chapter 9

Branching and other random processes

9.1. In a branching process the probability that any individual has j descendants is given by

p0 = 0,    pj = 1/2^j, (j ≥ 1).

Show that the probability generating function of the first generation is

G(s) = s/(2 − s).

Find the further generating functions G2(s), G3(s) and G4(s). Show by induction that

Gn(s) = s/(2^n − (2^n − 1)s).

Find pn,j, the probability that the population size of the n-th generation is j given that the process starts with one individual. What is the mean population size of the n-th generation?

The generating function is given by

G(s) = Σ_{j=0}^∞ pj s^j = Σ_{j=1}^∞ (s/2)^j = s/2 + (s/2)² + ··· = s/(2 − s),

using the geometric series formula for the sum. For the second generation G2(s) = G(G(s)), so that

G2(s) = [s/(2 − s)]/[2 − s/(2 − s)] = s/[2(2 − s) − s] = s/(4 − 3s).

Repeating this procedure,

G3(s) = G(G(G(s))) = G2(G(s)) = s/[4(2 − s) − 3s] = s/(8 − 7s),
G4(s) = G3(G(s)) = s/[8(2 − s) − 7s] = s/(16 − 15s).

Consider the formula

Gn(s) = s/(2^n − (2^n − 1)s).

Then

Gn+1(s) = Gn(G(s)) = [s/(2 − s)]/{2^n − (2^n − 1)s/(2 − s)} = s/[2^n(2 − s) − (2^n − 1)s] = s/(2^{n+1} − (2^{n+1} − 1)s).

Hence if the formula is correct for Gn(s) then it is true for Gn+1(s). The result has been verified for G2(s) and G3(s) so it is true for all n by induction on the integers. Using the binomial expansion

Gn(s) = (s/2^n)[1 − ((2^n − 1)/2^n)s]^{−1} = Σ_{j=1}^∞ [(2^n − 1)^{j−1}/2^{nj}] s^j.

Hence the probability that the population size of the n-th generation is j is given by the coefficient of s^j in this series, namely

pn,j = (2^n − 1)^{j−1}/2^{nj}, (j ≥ 1).

Since G(s) = s/(2 − s), then G′(s) = 2/(2 − s)², so that the mean of the first generation is µ = G′(1) = 2. Using result (9.7) in the text, the mean size of the n-th generation is µn = G′n(1) = µ^n = 2^n.

9.2. Suppose in a branching process that any individual has a probability given by the modified geometric distribution

pj = (1 − p)p^j, (j = 0, 1, 2, . . .),

of producing j descendants in the next generation, where p (0 < p < 1) is a constant. Find the probability generating function of the second and third generations. What is the mean size of any generation?

The probability generating function is

G(s) = Σ_{j=0}^∞ pj s^j = Σ_{j=0}^∞ (1 − p)p^j s^j = (1 − p)/(1 − ps),

using the formula for the sum of the geometric series. In the second generation

G2(s) = G(G(s)) = (1 − p)/[1 − pG(s)] = (1 − p)/{1 − p[(1 − p)/(1 − ps)]} = (1 − p)(1 − ps)/[(1 − p + p²) − ps],

and for the third generation

G3(s) = G2(G(s)) = (1 − p)[1 − {p(1 − p)/(1 − ps)}]/[(1 − p + p²) − {p(1 − p)/(1 − ps)}]
      = (1 − p)(1 − p + p² − ps)/[(1 − 2p + 2p²) − p(1 − p + p²)s].

The mean size of the first generation is

µ = G′(1) = [(1 − p)p/(1 − ps)²]_{s=1} = p/(1 − p).

From (9.7) in the book, it follows that µ2 = µ², µ3 = µ³, and, in general, that µn = µ^n for the n-th generation.

9.3. A branching process has the probability generating function G(s) = a + bs + (1 − a − b)s² for the descendants of any individual, where a and b satisfy the inequalities 0 < a < 1, b > 0, a + b < 1. Given that the process starts with one individual, discuss the nature of the descendant generations. What is the maximum possible size of the n-th generation? Show that extinction in the population is certain if 2a + b ≥ 1.

Each individual produces 0, 1 or 2 descendants with probabilities a, b, 1 − a − b respectively. If Xn represents the population size in the n-th generation, then the possible values of X1, X2, . . . are

{X1} = {0, 1, 2},    {X2} = {0, 1, 2, 3, 4},    {X3} = {0, 1, 2, . . . , 8},    . . . ,    {Xn} = {0, 1, 2, . . . , 2^n}.

The maximum possible population of the n-th generation is 2^n. The probability of extinction is the smallest solution of G(g) = g, that is,

a + bg + (1 − a − b)g² = g,    or    (g − 1)[(1 − a − b)g − a] = 0.

The equation always has the solution g = 1. The other possible solution is g = a/(1 − a − b). Extinction is certain if a ≥ 1 − a − b, that is, if 2a + b ≥ 1. The region in the (a, b) plane where extinction is certain is shown in Figure 9.1. If a < 1 − a − b, then extinction occurs with probability a/(1 − a − b).
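The extinction probability can be computed by iterating g ← G(g) from g = 0, which converges to the smallest fixed point of G. A sketch (illustrative, not part of the manual) for this three-point offspring distribution:

```python
def extinction_probability(a, b, iterations=2000):
    # iterate g <- G(g) = a + b*g + (1 - a - b)*g^2 from g = 0;
    # the limit is the smallest fixed point of G, i.e. the extinction probability
    g = 0.0
    for _ in range(iterations):
        g = a + b * g + (1 - a - b) * g * g
    return g

for a, b in [(0.2, 0.3), (0.4, 0.3)]:
    print(a, b, extinction_probability(a, b))
```

For (a, b) = (0.2, 0.3) we have a < 1 − a − b and the iteration returns a/(1 − a − b) = 0.4; for (0.4, 0.3) we have 2a + b ≥ 1 and it returns 1.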

Figure 9.1: Extinction probability region in the (a, b) plane for Problem 9.3.

9.4. A branching process starts with one individual. Subsequently any individual has a probability (Poisson with intensity λ)

pj = λ^j e^{−λ}/j!, (j = 0, 1, 2, . . .)

of producing j descendants. Find the probability generating function of this distribution. Obtain the mean and variance of the size of the n-th generation. Show that the probability of ultimate extinction is certain if λ ≤ 1.

The probability generating function is given by

G(s) = Σ_{j=0}^∞ pj s^j = Σ_{j=0}^∞ [λ^j e^{−λ}/j!] s^j = e^{λs−λ}.

As expected for this distribution, the mean and variance of the population of the first generation are

µ = G′(1) = λe^{λs−λ}|_{s=1} = λ,    σ² = G′′(1) + µ − µ² = λ² + λ − λ² = λ.

By Section 9.3, the mean and variance of the population of the n-th generation are, for λ ≠ 1,

µn = µ^n = λ^n,    σ²n = σ²µ^{n−1}(µ^n − 1)/(µ − 1) = λ^n(λ^n − 1)/(λ − 1).

If λ = 1, then µn = 1 and σ²n = n. Finally, the probability of ultimate extinction is the smallest solution of g = G(g) = e^{λ(g−1)}; since G is increasing and convex with G(1) = 1 and slope G′(1) = λ at g = 1, the only solution in [0, 1] is g = 1 when λ ≤ 1 (compare Problem 9.13), so that extinction is then certain.
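A short simulation sketch (illustrative, not from the text) checking the n-th generation mean and variance formulas for a subcritical Poisson branching process:

```python
import random
from math import exp

def poisson(rng, lam):
    # Knuth's method, adequate for small lam
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def generation_size(rng, lam, n):
    # population of the n-th generation, starting from one individual
    x = 1
    for _ in range(n):
        x = sum(poisson(rng, lam) for _ in range(x))
    return x

rng = random.Random(3)
lam, n, trials = 0.8, 4, 20_000
sizes = [generation_size(rng, lam, n) for _ in range(trials)]
mean = sum(sizes) / trials
var = sum((x - mean) ** 2 for x in sizes) / trials
print(mean, lam**n)                                   # both near 0.41
print(var, lam**n * (lam**n - 1) / (lam - 1))
```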

9.5. A branching process starts with one individual. Any individual has a probability

pj = λ^{2j} sech λ/(2j)!, (j = 0, 1, 2, . . .)

of producing j descendants. Find the probability generating function of this distribution. Obtain the mean size of the n-th generation. Show that ultimate extinction is certain if λ is less than the computed value 2.065.

The probability generating function of this distribution is given by

G(s, λ) = sech λ Σ_{j=0}^∞ [λ^{2j}/(2j)!] s^j = sech λ cosh(λ√s), (s ≥ 0).

Its derivative is

Gs(s, λ) = [λ/(2√s)] sech λ sinh(λ√s).

Hence the mean size of the population of the first generation is

µ = Gs(1, λ) = ½λ tanh λ,

which implies that the mean population of the n-th generation is

µn = (λ^n/2^n) tanh^n λ.

Figure 9.2: Graph of g = G(g) for Problem 9.5.

Ultimate extinction occurs with probability g, where g is the smallest solution of

g = G(g, λ) = sech λ cosh(λ√g).

This equation always has the solution g = 1, which is the only solution if λ < 2.065, approximately: this is a numerically computed value. The graph of the equation above is shown in Figure 9.2.

9.6. A branching process starts with two individuals. Either individual and any of their descendants has probability pj, (j = 0, 1, 2, . . .) of producing j descendants independently of any other. Explain why the probabilities of 0, 1, 2, . . . descendants in the first generation are

p0²,    p0p1 + p1p0,    p0p2 + p1p1 + p2p0,    . . . ,    Σ_{i=0}^n pi pn−i,    . . . ,

respectively. Hence show that the probability generating function of the first generation is G(s)², where

G(s) = Σ_{j=0}^∞ pj s^j.

The second generation from each original individual has generating function G2(s) = G(G(s)) (see Section 9.2). Explain why the probability generating function of the second generation is G2(s)², and of the n-th generation is Gn(s)². If the branching process starts with r individuals, what would you think is the formula for the probability generating function of the n-th generation?

For each individual, the probability generating function is

G(s) = Σ_{j=0}^∞ pj s^j,

and each produces descendants with populations 0, 1, 2, . . . with probabilities p0, p1, p2, . . .. The combined probabilities that the population of the first generation is 0, 1, 2, . . . are

p0²,    p0p1 + p1p0,    p0p2 + p1² + p2p0,    . . . .

These expressions are the coefficients of the powers of s in

G(s)² = Σ_{j=0}^∞ pj s^j Σ_{k=0}^∞ pk s^k = Σ_{k=0}^∞ [Σ_{j=0}^k pj pk−j] s^k

(this is known as the Cauchy product of the power series). Hence the probability that the population of the first generation is of size k is Σ_{j=0}^k pj pk−j.

Repeating the argument, each original individual generates descendants whose probabilities are the coefficients of G2(s) = G(G(s)). Hence the probabilities that the second generation has population 0, 1, 2, . . . are the coefficients of G2(s)². This process is repeated for succeeding generations, which have the generating functions Gn(s)². Starting with r individuals, the corresponding generating function is Gn(s)^r.

9.7. A branching process starts with two individuals as in the previous problem. The probabilities are

pj = 1/2^{j+1}, (j = 0, 1, 2, . . .).

Using the results from Example 9.1, find Hn(s), the probability generating function of the n-th generation. Find also (a) the probability that the size of the n-th generation is m ≥ 2; (b) the probability of extinction by the n-th generation; (c) the probability of ultimate extinction.

For either individual the probability generating function is

G(s) = Σ_{j=0}^∞ s^j/2^{j+1} = 1/(2 − s).

Then

G2(s) = G(G(s)) = (2 − s)/(3 − 2s),

and, in general,

Gn(s) = [n − (n − 1)s]/[(n + 1) − ns].

According to Problem 9.6, the generating function for the combined descendants is

Hn(s) = Gn(s)² = {[n − (n − 1)s]/[(n + 1) − ns]}²
      = [n²/(n + 1)²][1 − 2(n − 1)s/n + (n − 1)²s²/n²][1 − ns/(n + 1)]^{−2}
      = [n²/(n + 1)²][1 − 2(n − 1)s/n + (n − 1)²s²/n²] Σ_{r=0}^∞ (r + 1)[n/(n + 1)]^r s^r
      = n²/(n + 1)² + 2ns/(n + 1)³ + Σ_{r=2}^∞ {[(r − 1)n^{r−2} + 2n^r]/(n + 1)^{r+2}} s^r,

after some algebra: series expansion by computer is helpful to confirm the formula.
(a) From the series above, the probability pn,m that the population of the n-th generation is m is the coefficient of s^m in the series, namely

pn,m = [(m − 1)n^{m−2} + 2n^m]/(n + 1)^{m+2}, (m ≥ 2).

(b) From the series above, the probability of extinction by the n-th generation is

pn,0 = n²/(n + 1)².

(c) The probability of ultimate extinction is

lim_{n→∞} pn,0 = lim_{n→∞} n²/(n + 1)² = 1,

which means that it is certain.

9.8. A branching process starts with r individuals, and each individual produces descendants with probability distribution {pj}, (j = 0, 1, 2, . . .), which has the probability generating function G(s). Given that the probability generating function of the n-th generation is [Gn(s)]^r, where Gn(s) = G(G(. . . (G(s)) . . .)), find the mean population size of the n-th generation in terms of µ = G′(1).

Let Q(s) = [Gn(s)]^r. Its derivative is Q′(s) = rG′n(s)[Gn(s)]^{r−1}, where

G′n(s) = d/ds[Gn−1(G(s))] = G′n−1(G(s))G′(s).

Hence, since Gn(1) = 1, the mean population size of the n-th generation is

µn = Q′(1) = r[Gn(1)]^{r−1}G′n−1(1)G′(1) = rµ^{n−1}µ = rµ^n.

9.9. Let Xn be the population size of a branching process starting with one individual. Suppose that all individuals survive, and that

Zn = 1 + X1 + X2 + ··· + Xn

is the random variable representing the accumulated population size.
(a) If Hn(s) is the probability generating function of the total accumulated population, Zn, up to and including the n-th generation, show that

H1(s) = sG(s),    H2(s) = sG(H1(s)) = sG(sG(s)),

(which perhaps gives a clue to the form of Hn(s)).
(b) What is the mean accumulated population size E(Zn) (you do not require Hn(s) for this formula)?
(c) If µ < 1, what is lim_{n→∞} E(Zn), the ultimate expected population?
(d) What is the variance of Zn?

(a) Let pj be the probability that any individual in any generation has j descendants, and let the probability generating function of {pj} be

G(s) = Σ_{j=0}^∞ pj s^j.

The probabilities of the accumulated population sizes are as follows. Since the process starts with one individual,

P(Z0 = 1) = 1,    P(Z1 = 0) = 0,    P(Z1 = n) = pn−1 (n = 1, 2, 3, . . .).

Hence the generating function of P(Z1 = n) is given by H1(s), where

H1(s) = Σ_{r=1}^∞ P(Z1 = r)s^r = Σ_{r=1}^∞ P(X1 = r − 1)s^r = Σ_{r=1}^∞ pr−1 s^r = sG(s).

For the probability of Z2, use the identity

P(Z2 = n) = Σ_{r=1}^∞ P(Z2 = n|Z1 = r)P(Z1 = r).

Then the probability generating function H2(s) has the series

H2(s) = Σ_{n=1}^∞ P(Z2 = n)s^n = Σ_{n=1}^∞ Σ_{r=1}^∞ P(Z2 = n|Z1 = r)P(Z1 = r)s^n
      = Σ_{r=1}^∞ pr−1 E(s^{Z2}|Z1 = r)
      = Σ_{r=1}^∞ pr−1 E(s^{r+Y1+Y2+···+Yr−1})
      = Σ_{r=1}^∞ pr−1 s^r E(s^{Y1})E(s^{Y2}) ··· E(s^{Yr−1})
      = s Σ_{r=1}^∞ pr−1 [sG(s)]^{r−1} = sG[sG(s)],

using a method similar to that of Section 9.2. In this analysis it is assumed that, given Z1 = r (so that the first generation contains r − 1 individuals), the second generation is X2 = Y1 + Y2 + ··· + Yr−1, where the {Yj} are iid descendant numbers.
(b) The mean of the accumulated population is (see eqn (9.7))

E(Zn) = E(1 + X1 + X2 + ··· + Xn) = 1 + µ + µ² + ··· + µ^n = (1 − µ^{n+1})/(1 − µ), (µ ≠ 1),

after summing the geometric series. If µ = 1, then E(Zn) = n + 1.
(c) If µ < 1, then from (b) E(Zn) → 1/(1 − µ).
(d) The variance of Zn is, from Section 9.3(i),

V(Zn) = V(1 + X1 + X2 + ··· + Xn) = V(1) + V(X1) + V(X2) + ··· + V(Xn)
      = 0 + Σ_{r=1}^n σ²µ^{r−1}(µ^r − 1)/(µ − 1)
      = [σ²/(µ − 1)] Σ_{r=1}^n (µ^{2r−1} − µ^{r−1})
      = [σ²/(µ − 1)][µ(µ^{2n} − 1)/(µ² − 1) − (µ^n − 1)/(µ − 1)]
      = σ²(1 − µ^n)(1 − µ^{n+1})/[(1 − µ)(1 − µ²)], (µ ≠ 1).
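A simulation sketch (illustrative, not part of the manual) checking the mean formula E(Zn) = (1 − µ^{n+1})/(1 − µ) for a subcritical example with modified-geometric offspring numbers:

```python
import random
from math import log

def sample_offspring(rng, p):
    # modified geometric: P(j) = (1 - p) * p**j, j = 0, 1, 2, ...
    u = 1.0 - rng.random()          # u in (0, 1]
    return int(log(u) / log(p))

def accumulated_size(rng, p, n):
    # Z_n = 1 + X_1 + ... + X_n, starting from one individual
    x, z = 1, 1
    for _ in range(n):
        x = sum(sample_offspring(rng, p) for _ in range(x))
        z += x
    return z

rng = random.Random(11)
p, n, trials = 0.4, 6, 40_000
mu = p / (1 - p)                    # offspring mean = 2/3
mean = sum(accumulated_size(rng, p, n) for _ in range(trials)) / trials
print(mean, (1 - mu ** (n + 1)) / (1 - mu))
```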

9.10. A branching process starts with one individual and each individual has probability pj of producing j descendants independently of every other individual. Find the mean and variance of {pj} in each of the following cases, and hence find the mean and variance of the population of the n-th generation:

(a) pj = e^{−µ}µ^j/j!, (j = 0, 1, 2, . . .) (Poisson);
(b) pj = (1 − p)^{j−1}p, (j = 1, 2, . . . ; 0 < p < 1) (geometric);
(c) pj = C(r + j − 1, r − 1)p^j(1 − p)^r, (j = 0, 1, 2, . . . ; 0 < p < 1) (negative binomial),

where r is a positive integer, the process having started with one individual.

(a) For the Poisson distribution with intensity µ, pj = e^{−µ}µ^j/j!. Its probability generating function is G(s) = e^{−µ(1−s)}. Therefore

G′(s) = µe^{−µ(1−s)},    G′′(s) = µ²e^{−µ(1−s)},

and the mean and variance of the first generation are

µ = µ,    σ² = G′′(1) + G′(1) − [G′(1)]² = µ² + µ − µ² = µ.

The mean and variance of the n-th generation are (see Section 9.3), for µ ≠ 1,

µn = µ^n,    σ²n = σ²µ^{n−1}(µ^n − 1)/(µ − 1) = µ^n(µ^n − 1)/(µ − 1).

(b) Interpreting the geometric distribution in the modified form pj = qp^j, (j = 0, 1, 2, . . .), where q = 1 − p, as in Problem 9.2, the probability generating function is G(s) = q/(1 − ps). Then

G′(s) = pq/(1 − ps)²,    G′′(s) = 2p²q/(1 − ps)³.

The mean and variance of the first generation are

µ = G′(1) = p/q,    σ² = G′′(1) + G′(1) − [G′(1)]² = p/q².

The mean and variance of the size of the n-th generation are

µn = (p/q)^n,    σ²n = σ²µ^{n−1}(µ^n − 1)/(µ − 1) = [1/(2p − 1)](p/q)^n[(p/q)^n − 1], (p ≠ ½).

(c) The negative binomial distribution is pj = C(r + j − 1, r − 1)p^j q^r, (q = 1 − p). Its probability generating function is

G(s) = [q/(1 − ps)]^r.

The derivatives are

G′(s) = rpq^r/(1 − ps)^{r+1},    G′′(s) = r(r + 1)p²q^r/(1 − ps)^{r+2}.

Hence, the mean and variance of the first generation are

µ = rp/(1 − p),    σ² = rp/(1 − p)².

The mean and variance of the size of the populations of the n-th generation are

µn = [rp/(1 − p)]^n,
σ²n = σ²µ^{n−1}(µ^n − 1)/(µ − 1) = [1/(rp − 1 + p)][rp/(1 − p)]^n{[rp/(1 − p)]^n − 1}, (rp ≠ 1 − p).
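Each mean and variance pair can be checked by numerical differentiation of the generating function, with G′(1) and G′′(1) approximated by central differences (an illustrative sketch; both pgfs converge on a neighbourhood of s = 1 for these parameter values):

```python
from math import exp

def moments(G, h=1e-5):
    # mean = G'(1); variance = G''(1) + G'(1) - G'(1)^2, by central differences
    d1 = (G(1 + h) - G(1 - h)) / (2 * h)
    d2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2
    return d1, d2 + d1 - d1**2

mu = 1.7
mean_a, var_a = moments(lambda s: exp(-mu * (1 - s)))      # Poisson: both = mu

p, q, r = 0.3, 0.7, 4
mean_c, var_c = moments(lambda s: (q / (1 - p * s)) ** r)  # negative binomial
print(mean_a, var_a, mean_c, var_c)                        # rp/q and rp/q^2
```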

9.11. A branching process has a probability generating function

G(s) = [(1 − p)/(1 − ps)]^r, (0 < p < 1),

where r is a positive integer (a negative binomial distribution), the process having started with one individual. Show that extinction is not certain if p > 1/(1 + r).

We need to investigate solutions of g = G(g) (see Section 9.4). This equation always has the solution g = 1, but does it have a solution less than 1? For this distribution the equation for g becomes

g(1 − gp)^r = (1 − p)^r.

Consider where the line y = (1 − p)^r and the curve y = g(1 − gp)^r intersect in terms of g for fixed p and r. The curve has a stationary value where

dy/dg = (1 − gp)^r − rpg(1 − gp)^{r−1} = (1 − gp)^{r−1}[1 − (1 + r)pg] = 0,

which occurs at g = 1/[p(1 + r)], which is a maximum. The line and the curve intersect for a value of g between g = 0 and g = 1 if p > 1/(1 + r), which is the condition that extinction is not certain. Graphs of the line and curve are shown in Figure 9.3 for p = ½ and r = 2.

9.12. Let Gn(s) be the probability generating function of the population size of the n-th generation of a branching process. The probability that the population size is zero at the n-th generation is Gn(0). What is the probability that the population actually becomes extinct at the n-th generation? In Example 9.1, where pj = 1/2^{j+1} (j = 0, 1, 2, . . .), it was shown that

Gn(s) = n/(n + 1) + Σ_{r=1}^∞ [n^{r−1}/(n + 1)^{r+1}] s^r.

Find the probability of extinction, (a) at the n-th generation, (b) at the n-th generation or later. What is the mean number of generations until extinction occurs?

Figure 9.3: Graphs of the line y = (1 − p)^r and the curve y = g(1 − gp)^r with p = ½ and r = 2 for Problem 9.11.

The probability that the population is extinct at the n-th generation is Gn(0), but this includes extinction at previous generations r = 1, 2, . . . , n − 1. The required probability is therefore Gn(0) − Gn−1(0), the probability of extinction at the n-th generation given that individuals have survived at the (n − 1)-th generation.
(a) In this example Gn(0) = n/(n + 1). Hence the probability of extinction at the n-th generation is

Gn(0) − Gn−1(0) = n/(n + 1) − (n − 1)/n = 1/[n(n + 1)].

(b) Since ultimate extinction is certain, the probability that extinction occurs at or after the n-th generation is

1 − Gn−1(0) = 1 − (n − 1)/n = 1/n.

The mean number of generations until extinction occurs is

Σ_{n=1}^∞ n[Gn(0) − Gn−1(0)] = Σ_{n=1}^∞ n/[n(n + 1)] = Σ_{n=1}^∞ 1/(n + 1).

This series diverges so that the mean number of generations is infinite.
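The values Gn(0) = n/(n + 1) can be checked by composing the generating function numerically; exact rational arithmetic keeps the check sharp (an illustrative sketch):

```python
from fractions import Fraction

def G(s):
    # offspring pgf for p_j = 1/2^{j+1} is G(s) = 1/(2 - s)
    return 1 / (2 - s)

# G_n(0) is obtained by iterating G, starting from s = 0
g, values = Fraction(0), []
for n in range(1, 7):
    g = G(g)
    values.append(g)
print(values)   # 1/2, 2/3, 3/4, ..., i.e. n/(n+1)
```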

9.13. An annual plant produces N seeds in a season which is assumed to have a Poisson distribution with parameter λ. Each seed has a probability p of germinating to create a new plant which propagates in the following year. Let M be the number of new plants. Show that pm, the probability that there are m growing plants in the first year, is given by

pm = (pλ)^m e^{−pλ}/m!, (m = 0, 1, 2, . . .),

that is, Poisson with parameter pλ. Show that its probability generating function is G(s) = e^{pλ(s−1)}. Assuming that all the germinated plants survive and that each propagates in the same manner in succeeding years, find the mean number of plants in year k. Show that extinction is certain if pλ ≤ 1.

Given that the plant produces seeds as a Poisson process of intensity λ, then

fn = λ^n e^{−λ}/n!.

Then

pm = Σ_{r=m}^∞ C(r, m)p^m(1 − p)^{r−m}[λ^r e^{−λ}/r!]
   = [(λp)^m e^{−λ}/m!] Σ_{i=0}^∞ [(1 − p)λ]^i/i!
   = [(λp)^m e^{−λ}/m!] e^{(1−p)λ} = (pλ)^m e^{−pλ}/m!.

Its probability generating function is

G(s) = e^{−pλ} Σ_{m=0}^∞ (pλ)^m s^m/m! = e^{−pλ+pλs}.

The mean of the first generation is

µ = G′(1) = pλe^{pλ(s−1)}|_{s=1} = pλ.

The mean of the k-th generation (year k) is therefore µk = µ^k = (pλ)^k. Extinction occurs with probability g, where g is the smaller solution of g = G(g), that is, the smaller solution of

g = e^{−pλ}e^{pλg}.

Consider the line y = g and the exponential curve y = e^{−pλ}e^{pλg}. On the curve, its slope is

dy/dg = pλe^{−pλ}e^{pλg},

and its slope at g = 1 is pλ. Since e^{−pλ}e^{pλg} → 0 as g → −∞, and the curve and its slope decrease as g decreases, the only solution of g = G(g) is g = 1 if pλ ≤ 1. Extinction is certain in this case. If pλ > 1 then there is a solution with 0 < g < 1. Figure 9.4 shows such a solution for λ = 2 and p = 1.

Figure 9.4: Graphs of the line y = g and the curve y = e^{−pλ}e^{pλg} with λ = 2 and p = 1 for Problem 9.13.

9.14. The version of Example 9.1 with a general geometric distribution is the branching process with

pj = (1 − p)p^j, (0 < p < 1; j = 0, 1, 2, . . .).

Show that

G(s) = (1 − p)/(1 − ps).

Using an induction method, prove that

Gn(s) = (1 − p)[p^n − (1 − p)^n − ps{p^{n−1} − (1 − p)^{n−1}}]/[p^{n+1} − (1 − p)^{n+1} − ps{p^n − (1 − p)^n}], (p ≠ ½).

Find the mean and variance of the population size of the n-th generation. What is the probability of extinction by the n-th generation? Show that ultimate extinction is certain if p < ½, but has probability (1 − p)/p if p > ½.

As in Problem 9.2, the generating function for the first generation is

G(s) = (1 − p)/(1 − ps).

Consider

Gn(G(s)) = (1 − p)[p^n − (1 − p)^n − p{(1 − p)/(1 − ps)}{p^{n−1} − (1 − p)^{n−1}}]/[p^{n+1} − (1 − p)^{n+1} − p{(1 − p)/(1 − ps)}{p^n − (1 − p)^n}]
         = (1 − p)[{p^n − (1 − p)^n}(1 − ps) − p(1 − p){p^{n−1} − (1 − p)^{n−1}}]/[{p^{n+1} − (1 − p)^{n+1}}(1 − ps) − p(1 − p){p^n − (1 − p)^n}]
         = (1 − p)[p^{n+1} − (1 − p)^{n+1} − ps{p^n − (1 − p)^n}]/[p^{n+2} − (1 − p)^{n+2} − ps{p^{n+1} − (1 − p)^{n+1}}]
         = Gn+1(s).

Hence if the formula is true for Gn(s), then it is true for Gn+1(s). It can be verified for G2(s), so that by induction on the integers, it is true for all n. Since this is the modified geometric distribution of Problem 9.10(b), the mean and variance of the n-th generation are

µn = [p/(1 − p)]^n,    σ²n = [1/(2p − 1)][p/(1 − p)]^n{[p/(1 − p)]^n − 1}, (p ≠ ½).

The probability of extinction by the n-th generation is

Gn(0) = (1 − p)[p^n − (1 − p)^n]/[p^{n+1} − (1 − p)^{n+1}].

If p > ½, express this in the following form:

Gn(0) = (1 − p)[1 − ((1 − p)/p)^n]/{p[1 − ((1 − p)/p)^{n+1}]} → (1 − p)/p

as n → ∞, which is the probability of ultimate extinction. If p < ½, then

Gn(0) = [(p/(1 − p))^n − 1]/[(p/(1 − p))^{n+1} − 1] → 1,

as n → ∞: extinction is certain.

9.15. A branching process starts with one individual, and the probability of producing j descendants has the distribution {pj}, (j = 0, 1, 2, . . .). The same probability distribution applies independently to all descendants and their descendants. If Xn is the size of the n-th generation, show that E(Xn) ≥ 1 − P(Xn = 0). In Section 9.3 it was shown that E(Xn) = µ^n, where µ = E(X1). Deduce that the probability of extinction eventually is certain if µ < 1.

By definition,

E(Xn) = Σ_{j=1}^∞ jP(Xn = j) ≥ Σ_{j=1}^∞ P(Xn = j) = 1 − P(Xn = 0).

Hence

P(Xn = 0) ≥ 1 − E(Xn) = 1 − µ^n.

Therefore, if µ < 1, then P(Xn = 0) → 1 as n → ∞: ultimate extinction is certain. This conclusion is true irrespective of the distribution.

9.16. In a branching process starting with one individual, the probability that any individual has j descendants is pj = α/2^j, (j = 0, 1, 2, . . . , r), where α is a constant and r is fixed. This means that any individual can have a maximum of r descendants. Find α and the probability generating function G(s) of the first generation. Show that the mean size of the n-th generation is

µn = [(2^{r+1} − 2 − r)/(2^{r+1} − 1)]^n.

What is the probability of ultimate extinction?

Given pj = α/2^j, then for it to be a probability distribution,

Σ_{j=0}^r α/2^j = α[1 + 1/2 + 1/2² + ··· + 1/2^r] = 2α[1 − (½)^{r+1}] = 1.

Therefore the constant α is defined by

α = 1/{2[1 − (½)^{r+1}]}.    (i)

The probability generating function is given by

G(s) = α Σ_{j=0}^r s^j/2^j = α[1 + s/2 + s²/2² + ··· + s^r/2^r] = α[1 − (s/2)^{r+1}]/(1 − ½s),    (ii)

using the formula for the sum of the geometric series: α is given by (i). The derivative of G(s) is

G′(s) = 2^{−r}α[2^{1+r} − 2(1 + r)s^r + rs^{1+r}]/(s − 2)².

Hence the mean value of the first generation is

µ = G′(1) = 2^{−r}α(2^{1+r} − 2 − r) = (2^{r+1} − 2 − r)/(2^{r+1} − 1).

By (9.7), the mean of the n-th generation is

µn = µ^n = [(2^{r+1} − 2 − r)/(2^{r+1} − 1)]^n.

Since

µ = (2^{r+1} − 2 − r)/(2^{r+1} − 1) < 1,

then, by Problem 9.15, ultimate extinction is certain.
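A quick check of α and of the mean formula against direct summation over the finite distribution (illustrative sketch):

```python
def alpha(r):
    # normalising constant for p_j = alpha / 2^j, j = 0, 1, ..., r
    return 1 / (2 * (1 - (1 / 2) ** (r + 1)))

for r in (1, 2, 5, 10):
    a = alpha(r)
    probs = [a / 2 ** j for j in range(r + 1)]
    mean = sum(j * p for j, p in enumerate(probs))
    predicted = (2 ** (r + 1) - 2 - r) / (2 ** (r + 1) - 1)
    print(r, sum(probs), mean, predicted)
```

For every r the probabilities sum to 1 and the direct mean matches (2^{r+1} − 2 − r)/(2^{r+1} − 1), which is always below 1.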

9.17. Extend the tree in Figure 9.4 for the gambling martingale in Section 9.5 to Z4, and confirm that E(Z4|Z0, Z1, Z2, Z3) = Z3. Confirm also that E(Z4) = 1.

Extension of the gambling martingale to Z4 is shown in Figure 9.5.

Figure 9.5: Martingale for Problem 9.17.

The values for the random variable Z4 are

Z4 = {even numbers between −14 and 16 inclusive}.

The mean value of Z4 is given by

E(Z4) = Σ_{m=0}^{15} (1/2⁴)(−2⁴ + 2m + 2) = 1,

or the mean can be calculated from the mean of the final column of numbers in Figure 9.5.

9.18. A gambling game similar to the gambling martingale of Section 9.5 is played according to the following rules: (a) the gambler starts with £1, but has unlimited resources; (b) against the casino, which also has unlimited resources, the gambler plays a series of games in which the probability that the gambler wins is 1/p and loses is (p − 1)/p, where p > 1; (c) at the n-th game, the gambler either wins £(p^n − p^{n−1}) or loses £p^{n−1}. If Zn is the gambler’s asset/debt at the n-th game, draw a tree diagram similar to that of Figure 9.3 as far as Z3. Show that Z3 has the possible outcomes

{−p − p², −p², −p, 0, p³ − p² − p, p³ − p², p³ − p, p³}

and confirm that

E(Z2|Z0, Z1) = Z1,    E(Z3|Z0, Z1, Z2) = Z2,

which indicates that this game is a martingale. Show also that

E(Z1) = E(Z2) = E(Z3) = 1.

Assuming that it is a martingale, show that, if the gambler first wins at the n-th game, then the gambler will have an asset gain or debt of £(p^{n+1} − 2p^n + 1)/(p − 1). Explain why a win for the gambler can only be guaranteed for all n if p ≥ 2.

The tree diagram for this martingale is shown in Figure 9.6.

Figure 9.6: Tree diagram for Problem 9.18.

From the last column in Figure 9.6, it can be seen that the elements of Z3 are given by

{−p − p², −p², −p, 0, p³ − p² − p, p³ − p², p³ − p, p³}.

For the conditional means, E(Z2|Z0, Z1) has outcomes {p, 0}, and E(Z3|Z0, Z1, Z2) has outcomes {p², 0, p² − p, −p}. For the unconditional means,

E(Z1) = p(1/p) + 0[(p − 1)/p] = 1,
E(Z2) = p²(1/p²) + 0(1/p)[(p − 1)/p] + (p² − p)[(p − 1)/p](1/p) − p[(p − 1)/p]² = 1,

etc.



£(pn − 1 − p − p2 − · · · − pn−1 ) = £ pn −



pn − 1 p−1









pn+1 − 2pn + 1 . p−1

To guarantee winning requires

(p^{n+1} − 2p^n + 1)/(p − 1) = p^n − p^{n−1} − p^{n−2} − ··· − p − 1 > 0

for all n. This will certainly be true if p ≥ 2, since then p^{n+1} − 2p^n + 1 = p^n(p − 2) + 1 > 0. A smaller value of p may still give a positive winning, but whether it does will depend on n.

9.19. Let X1, X2, . . . be independent random variables with means µ1, µ2, . . . respectively. Let Zn = X1 + X2 + ··· + Xn, and let Z0 = X0 = 0. Show that the random variable

Yn = Zn − Σ_{i=1}^n µi, (n = 1, 2, . . .)

is a martingale with respect to {Xn}. [Note that E(Zn|X1, X2, . . . , Xn) = Zn.]

The result follows since

E(Yn+1|X1, X2, . . . , Xn) = E(Zn+1 − Σ_{i=1}^{n+1} µi | X1, X2, . . . , Xn)
  = E(Zn + Xn+1 | X1, X2, . . . , Xn) − Σ_{i=1}^{n+1} µi
  = Zn + µn+1 − Σ_{i=1}^{n+1} µi
  = Zn − Σ_{i=1}^n µi = Yn.

Hence the random variable Yn is a martingale.

9.20. Consider an unsymmetric random walk which starts at the origin. The walk advances one position with probability p and retreats one position with probability 1 − p. Let Xn be the random variable giving the position of the walk at step n. Let Zn be given by

Zn = Xn + (1 − 2p)n.

Show that E(Z2|X0, X1) has the possible outcomes {−2p, 2 − 2p}, denoted by the random variable Z1. Generally, show that {Zn} is a martingale with respect to {Xn}.

The conditional mean is

E(Z2|X0, X1) = E(X2 + (1 − 2p)2|X0, X1) = E(X2|X0, X1) + 2(1 − 2p) = X1 + (2p − 1) + 2(1 − 2p) = X1 + (1 − 2p),

which has the possible outcomes

{1 + 1 − 2p, −1 + 1 − 2p} = {2 − 2p, −2p},

denoted by the random variable Z1. By the Markov property of the random walk,

E(Zn+1|X0, X1, . . . , Xn) = E(Zn+1|Xn) = E(Xn+1 + (1 − 2p)(n + 1)|Xn).

Suppose that Xn = k. Then the walk either advances one step with probability p or retreats one step with probability 1 − p. Therefore

E(Xn+1 + (1 − 2p)(n + 1)|Xn) = p(k + 1) + (1 − p)(k − 1) + (1 − 2p)(n + 1) = k + (1 − 2p)n = Zn.

9.21. In the gambling martingale of Section 9.5, the random variable Zn, the gambler’s asset in a game against a casino in which the gambler starts with £1 and doubles the bid at each play, can take the values {−2^n + 2m + 2}, (m = 0, 1, 2, . . . , 2^n − 1). Find the variance of Zn. What is the variance of E(Zn|Z0, Z1, . . . , Zn−1)?

The sum of the possible values of Zn is

Σ_{m=0}^{2^n−1}(−2^n + 2m + 2) = (−2^n + 2)2^n + 2 Σ_{m=1}^{2^n−1} m = 2^n.

Since all the values are equally likely to occur after n steps, then

E(Zn) = (1/2^n) Σ_{m=0}^{2^n−1}(−2^n + 2m + 2) = (1/2^n)2^n = 1.

The variance of Zn is given by

V(Zn) = E(Zn²) − [E(Zn)]² = (1/2^n) Σ_{m=0}^{2^n−1}(−2^n + 2m + 2)² − 1 = ⅓(2^{2n} − 1),

since

Σ_{m=0}^{2^n−1}(−2^n + 2m + 2)² = (2^n/3)(2 + 2^{2n}).

Since

E(Zn|Z0, Z1, . . . , Zn−1) = Zn−1,

then

V[E(Zn|Z0, Z1, . . . , Zn−1)] = V(Zn−1) = ⅓[2^{2(n−1)} − 1]

by the previous result. 9.22. A random walk starts at the origin, and, with probability p1 advances one position and with probability q1 = 1 − p1 retreats one position at every step. After 10 steps the probabilities change to p2 and q2 = 1 − p2 respectively. What is the expected position of the walk after a total of 20 steps?

After 10 steps the walk could be at any position in the list of even positions {−10, −8, −6, . . . , 6, 8, 10}; denote this position by the random variable Xr . Let the random variable Yn be the position of the walk after 20 steps, so that Yn takes values in {−20, −18, −16, . . . , 16, 18, 20}. The expected position after a further 10 steps, given Xr , is

E(Yn |Xr ) = Xr + 10(p2 − q2 ).

The expected position of the walk is therefore

E[E(Yn |Xr )] = E[Xr + 10(p2 − q2 )] = 10(p1 + p2 − q1 − q2 ).

9.23. A symmetric random walk starts at the origin x = 0. The stopping rule that the walk ends when the position x = 1 is first reached is applied, that is, the stopping time T is given by T = min{n : Xn = 1}, where Xn is the position of the walk at step n. What is the expected value of T ? If this walk is interpreted as a gambling problem in which the gambler starts with nothing, with equal odds of winning or losing £1 at each play, what is the flaw in this stopping rule as a strategy for guaranteeing a win for the gambler in every game? [Hint: the generating function for the probability of the first passage is G(s) = [1 − (1 − s^2 )^{1/2} ]/s: see Problem 3.11.]

The probability generating function for the first passage to x = 1 for the walk starting at the origin is

G(s) = (1/s)[1 − (1 − s^2 )^{1/2} ] = s/2 + s^3 /8 + s^5 /16 + O(s^7 ),

which implies, for example, that the probability that the first visit to x = 1 occurs at the 5-th step is 1/16. The mean time to the first visit is

µ = G′(s)|_{s=1} = [1 − (1 − s^2 )^{1/2} ]/[s^2 (1 − s^2 )^{1/2} ]|_{s=1} = ∞.

It seems a good ploy, but it would take, on average, an infinite number of plays to win £1.

9.24. In a finite-state branching process, the descendant probabilities are, for every individual,

pj = 2^{m−j} /(2^{m+1} − 1), (j = 0, 1, 2, . . . , m),

and the process starts with one individual. Find the mean size of the first generation. If Xn is the size of the n-th generation, explain why

Zn = Xn [(2^{m+1} − 1)/(2^{m+1} − m − 2)]^n

defines a martingale over {Xn }.

In this model of a branching process each individual can produce not more than m descendants. It can be checked that

Σ_{j=0}^{m} pj = Σ_{j=0}^{m} 2^{m−j} /(2^{m+1} − 1) = 1,

using the formula for the sum of the geometric series. The probability generating function for the first generation is

G(s) = Σ_{j=0}^{m} pj s^j = [2^m /(2^{m+1} − 1)] Σ_{j=0}^{m} (s/2)^j = (2^{m+1} − s^{m+1} )/[(2^{m+1} − 1)(2 − s)].

Its first derivative is

G′(s) = [2^{m+1} − 2(m + 1)s^m + ms^{m+1} ]/[(2^{m+1} − 1)(2 − s)^2 ].

Therefore the mean of the first generation is

µ = G′(1) = (2^{m+1} − m − 2)/(2^{m+1} − 1).

The random variable Zn is simply Zn = Xn /µ^n , which is a martingale (see Section 9.5).

9.25. A random walk starts at the origin, and at each step the walk advances one position with probability p or retreats one position with probability 1 − p. Show that the random variable

Yn = Xn^2 + 2(1 − 2p)nXn + [(2p − 1)^2 − 1]n + (2p − 1)^2 n^2 ,

where Xn is the random variable of the position of the walk at time n, defines a martingale with respect to {Xn }.

Let α(p, n) = 2(1 − 2p)n and β(p, n) = [(2p − 1)^2 − 1]n + (2p − 1)^2 n^2 in the expression for Yn . Then

E(Yn+1 |Xn ) = p[(Xn + 1)^2 + α(p, n + 1)(Xn + 1) + β(p, n + 1)] + (1 − p)[(Xn − 1)^2 + α(p, n + 1)(Xn − 1) + β(p, n + 1)]
= Xn^2 + Xn [4p − 2 + α(p, n + 1)] + [1 + (2p − 1)α(p, n + 1) + β(p, n + 1)].

The coefficients in the last expression are

4p − 2 + α(p, n + 1) = 4p − 2 + 2(1 − 2p)(n + 1) = 2(1 − 2p)n = α(p, n),

and

1 + (2p − 1)α(p, n + 1) + β(p, n + 1) = 1 + 2(2p − 1)(1 − 2p)(n + 1) + [(2p − 1)^2 − 1](n + 1) + (2p − 1)^2 (n + 1)^2 = [(2p − 1)^2 − 1]n + (2p − 1)^2 n^2 = β(p, n).

Hence

E(Yn+1 |Xn ) = Xn^2 + 2(1 − 2p)nXn + [(2p − 1)^2 − 1]n + (2p − 1)^2 n^2 = Yn ,

so that, by definition, Yn is a martingale.
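Since the verification is pure algebra, it can also be checked numerically for particular values of p, n and Xn (a sketch of our own; the helper names are invented):

```python
def Y(x, n, p):
    # Yn evaluated at position x after n steps
    return x**2 + 2*(1 - 2*p)*n*x + ((2*p - 1)**2 - 1)*n + (2*p - 1)**2 * n**2

def expected_next(x, n, p):
    # E(Y_{n+1} | Xn = x): advance with probability p, retreat with 1 - p
    return p * Y(x + 1, n + 1, p) + (1 - p) * Y(x - 1, n + 1, p)
```

For any choice of p, n and x the difference expected_next(x, n, p) − Y(x, n, p) vanishes, confirming the martingale identity.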

9.26. A simple epidemic has n0 susceptibles and one infective at time t = 0. If pn (t) is the probability that there are n susceptibles at time t, it was shown in Section 9.7 that pn (t) satisfies the differential-difference equations (see eqns (9.15) and (9.16))

dpn (t)/dt = β(n + 1)(n0 − n)pn+1 (t) − βn(n0 + 1 − n)pn (t),

for n = 0, 1, 2, . . . , n0 . Show that the probability generating function

G(s, t) = Σ_{n=0}^{n0} pn (t)s^n

satisfies the partial differential equation

∂G(s, t)/∂t = β(1 − s)[n0 ∂G(s, t)/∂s − s ∂^2 G(s, t)/∂s^2 ].

Nondimensionalize the equation by putting τ = βt. For small τ let

G(s, τ /β) = G0 (s) + G1 (s)τ + G2 (s)τ^2 + · · · .

Show that

nGn (s) = n0 (1 − s) ∂Gn−1 (s)/∂s − s(1 − s) ∂^2 Gn−1 (s)/∂s^2 ,

for n = 1, 2, 3, . . . , n0 . What is G0 (s)? Find the coefficients G1 (s) and G2 (s). Hence show that the mean number of infectives for small τ is given by

n0 − n0 τ − (1/2)n0 (n0 − 2)τ^2 + O(τ^3 ).

In Example 9.9, the number of susceptibles initially is given by n0 = 4. Expand p0 (t), p1 (t) and p2 (t) in powers of τ and confirm that the expansions agree with G1 (s) and G2 (s) above.

Multiply the difference equation by s^n and sum over n from 0 to n0 , giving

Σ_{n=0}^{n0} pn′ (t)s^n = β Σ_{n=0}^{n0 −1} (n + 1)(n0 − n)pn+1 (t)s^n − β Σ_{n=1}^{n0} n(n0 + 1 − n)pn (t)s^n ,

or, with m = n + 1 in the first sum on the right,

Gt (s, t) = βn0 Σ_{m=1}^{n0} mpm (t)s^{m−1} − β Σ_{m=2}^{n0} m(m − 1)pm (t)s^{m−1} − βn0 Σ_{n=1}^{n0} npn (t)s^n + β Σ_{n=1}^{n0} n(n − 1)pn (t)s^n
= βn0 Gs (s, t) − βsGss (s, t) − βn0 sGs (s, t) + βs^2 Gss (s, t)
= βn0 (1 − s)Gs (s, t) + βs(s − 1)Gss (s, t),

as required. Let τ = βt. Then the equation for H(s, τ ) = G(s, τ /β) is

∂H(s, τ )/∂τ = (1 − s)[n0 ∂H(s, τ )/∂s − s ∂^2 H(s, τ )/∂s^2 ].

For small τ , let

H(s, τ ) = G(s, τ /β) = H0 (s) + H1 (s)τ + H2 (s)τ^2 + · · · ,

and substitute this series into the partial differential equation for H(s, τ ), so that

H1 (s) + 2H2 (s)τ + · · · = n0 (1 − s)[H0′ (s) + H1′ (s)τ + · · ·] − s(1 − s)[H0′′ (s) + H1′′ (s)τ + · · ·].

Equating powers of τ , we obtain

nHn (s) = n0 (1 − s)H′n−1 (s) − s(1 − s)H′′n−1 (s), (n = 1, 2, 3, . . .). (i)

For τ = 0,

H0 (s) = G(s, 0) = Σ_{n=0}^{n0} pn (0)s^n .

Since the number of susceptibles is n0 at time t = 0, pn0 (0) = 1 and pn (0) = 0 for n ≠ n0 . Hence H0 (s) = s^{n0} . From (i),

H1 (s) = n0 (1 − s)H0′ (s) − s(1 − s)H0′′ (s) = n0^2 (1 − s)s^{n0 −1} − n0 (n0 − 1)(1 − s)s^{n0 −1} = n0 (1 − s)s^{n0 −1} ,

and

H2 (s) = (1/2)[n0 (1 − s)H1′ (s) − s(1 − s)H1′′ (s)]
= (1/2)n0 (1 − s)n0 [(n0 − 1)s^{n0 −2} − n0 s^{n0 −1} ] − (1/2)s(1 − s)n0 [(n0 − 1)(n0 − 2)s^{n0 −3} − n0 (n0 − 1)s^{n0 −2} ]
= (1/2)n0 s^{n0 −2} (1 − s)(2n0 − 2 − n0 s).

The mean number of infectives is given by

µ = Hs (1, τ ) = H0′ (1) + H1′ (1)τ + H2′ (1)τ^2 + O(τ^3 ) = n0 − n0 τ − (1/2)n0 (n0 − 2)τ^2 + O(τ^3 ),

since H0′ (1) = n0 , H1′ (1) = n0 [(n0 − 1) − n0 ] = −n0 , and H2′ (1) = −(1/2)n0 (2n0 − 2 − n0 ) = −(1/2)n0 (n0 − 2), where τ = βt.
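The series for the mean can be checked independently (a sketch of our own, with β = 1 so that τ = t, and n0 = 4 as in Example 9.9) by a crude Euler integration of the differential-difference equations:

```python
def epidemic_mean(n0, tau, steps=2000):
    """Euler-integrate dp_n/dtau = (n+1)(n0-n)p_{n+1} - n(n0+1-n)p_n
    and return the mean sum(n * p_n) at time tau (with beta = 1)."""
    p = [0.0] * (n0 + 1)
    p[n0] = 1.0  # all n0 susceptibles present at tau = 0
    dt = tau / steps
    for _ in range(steps):
        dp = []
        for n in range(n0 + 1):
            rate = -n * (n0 + 1 - n) * p[n]
            if n < n0:
                rate += (n + 1) * (n0 - n) * p[n + 1]
            dp.append(rate)
        p = [p[n] + dt * dp[n] for n in range(n0 + 1)]
    return sum(n * p[n] for n in range(n0 + 1))
```

For n0 = 4 and τ = 0.01 the integration agrees with the series 4 − 4τ − 4τ² to within the O(τ³) truncation error.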


Chapter 10

Brownian motion: Wiener process

10.1. Let X(t) be a standard Brownian motion. (a) Find P[X(2) > 3]. (b) Find P[X(3) > X(2)].

(a) The standard Brownian motion has the normal distribution with mean 0 and variance t, and density

[1/√(2πt)] exp[−x^2 /(2t)].

Hence the density for X(2) (that is, for t = 2) is

[1/√(4π)] exp[−x^2 /4].

Hence

P[X(2) > 3] = 1 − [1/(2√π)] ∫_{−∞}^{3} e^{−x^2 /4} dx = 1 − [1/√(2π)] ∫_{−∞}^{3/√2} e^{−z^2 /2} dz, (x = z√2),
= 1 − Φ[3/√2].

(b) From the definition of the Wiener process, X(3) − X(2) has a normal distribution N (0, 3 − 2) = N (0, 1). Hence

P[X(3) > X(2)] = P[X(3) − X(2) > 0] = 1 − Φ(0) = 0.5.

10.2. Let X(t) be a Brownian motion with mean 0 and variance σ^2 t starting at the origin. (a) Find the distribution of |X(t)|, the absolute distance of X(t) from the origin. (b) If σ = 1, evaluate P(|X(5)| > 1).

(a) The random variable X(t) is normal N (0, σ^2 t) with density function

fX (x) = [1/(σ√(2πt))] exp[−x^2 /(2σ^2 t)], −∞ < x < ∞.

Let U (t) = |X(t)|. Then

P(U (t) ≤ u) = 0 for u < 0, and P(U (t) ≤ u) = P(|X(t)| ≤ u) = P(−u ≤ X(t) ≤ u) for u ≥ 0.

Differentiating, the density function of U (t) is given by

fU (u) = fX (u) + fX (−u) = [2/(σ√(2πt))] exp[−u^2 /(2σ^2 t)], (u ≥ 0).

(b) With σ = 1,

P(|X(5)| > 1) = 2[1 − Φ(1/√5)] = 2[1 − Φ(0.447)] ≈ 0.655,

from a table of the cumulative normal distribution.

10.3. X(t) = ln[Z(t)] is a Brownian motion with variance σ^2 t. (a) Find the distribution of Z(t). (b) Evaluate P[Z(t) > 2] when σ^2 = 0.5.

(a) Denote the density of X(t) by fX (x).

Then

fX (x) = [1/(σ√(2πt))] exp[−x^2 /(2σ^2 t)].

Thus

P(Z ≤ z) = P(e^X ≤ z) = P(X ≤ ln z).

Let fZ (z) be the density of Z. Differentiating the previous equation with respect to z:

fZ (z) = fX (ln z) d(ln z)/dz = [1/(σz√(2πt))] exp[−(ln z)^2 /(2σ^2 t)], (z > 0).

The distribution function of Z is given by

FZ (z) = [1/(σ√(2πt))] ∫_0^z (1/u) exp[−(ln u)^2 /(2σ^2 t)] du
= [1/(σ√(2πt))] ∫_{−∞}^{ln z} exp[−v^2 /(2σ^2 t)] dv [where v = ln u]
= [1/√(2π)] ∫_{−∞}^{(ln z)/(σ√t)} e^{−w^2 /2} dw [where w = v/(σ√t)]
= Φ[(ln z)/(σ√t)], (z > 0).

Hence ln Z(t) is normal, so that X(t) = ln[Z(t)] could represent a Brownian motion; Z(t) itself has a lognormal distribution.

(b) With σ^2 = 0.5,

P[Z(t) > 2] = 1 − Φ[√2 ln 2/√t].
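The answer to part (b) can be checked by simulation, since Z(t) = e^{X(t)} with X(t) ~ N(0, σ²t) (a sketch of our own; names and parameter values are invented):

```python
import math
import random

def phi(x):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_Z_exceeds(z0, sigma2, t, trials=200000, seed=2):
    # empirical P[Z(t) > z0], where Z(t) = exp(X(t)), X(t) ~ N(0, sigma2 * t)
    rng = random.Random(seed)
    sd = math.sqrt(sigma2 * t)
    hits = sum(1 for _ in range(trials) if math.exp(rng.gauss(0.0, sd)) > z0)
    return hits / trials
```

With σ² = 0.5 and t = 1, prob_Z_exceeds(2.0, 0.5, 1.0) lies close to 1 − Φ(√2 ln 2) ≈ 0.16.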

10.4. Let X(t), (t ≥ 0) be a standard Brownian motion. Let τa be the stopping time for X(t) (that is, the first time that the process reaches state a). Explain why

Y (t) = X(t) for 0 < t < τa , and Y (t) = 2X(τa ) − X(t) for t ≥ τa ,

represents the reflected process. Show that Y (t) is also a standard Brownian motion.

The Brownian motion first hits x = a at time t = τa , that is, X(τa ) = a. For t ≥ τa the graph of Y (t) is the reflection of X(t) in the line x = a: after τa the reflected process never exceeds X(τa ). For any interval (ti , ti+1 ) with ti < ti+1 < τa , the increment Y (ti+1 ) − Y (ti ) = X(ti+1 ) − X(ti ) is normal N (0, ti+1 − ti ). For intervals with τa < ti < ti+1 , taking differences eliminates the constant 2X(τa ), and if X(ti+1 ) − X(ti ) is normal then so is −[X(ti+1 ) − X(ti )], with the same zero mean and variance ti+1 − ti . Therefore Y (t) is a standard Brownian motion.
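A discrete sketch of the reflection argument (our own illustration, not from the text): reflect a symmetric ±1 walk after it first hits an integer level a. The reflected walk should again be a symmetric walk, so its endpoint has mean zero:

```python
import random

def reflected_endpoint_mean(a, n_steps, n_trials, seed=3):
    """Mean endpoint of a symmetric +-1 walk reflected in the level a
    from the first time the walk hits a (if it does)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        x, hit = 0, False
        for _ in range(n_steps):
            x += 1 if rng.random() < 0.5 else -1
            if not hit and x == a:
                hit = True
        total += (2 * a - x) if hit else x
    return total / n_trials
```

For example, reflected_endpoint_mean(10, 400, 5000) returns a value near 0, consistent with the reflected process being a (discrete) symmetric walk.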


10.5. For a standard Brownian motion X(t), (t ≥ 0), show that E[X^2 (t)] = t using the mgf for X(t).

The mgf of the standard Brownian motion X(t) is

E[e^{sX} ] = [1/√(2πt)] ∫_{−∞}^{∞} exp[sx] exp[−x^2 /(2t)] dx = exp[s^2 t/2].

Expanding as a power series, the mgf is

Σ_{n=0}^{∞} (1/n!)(s^2 t/2)^n .

From the coefficient of s^2 , it follows that E[X^2 (t)] = t.
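The same moment can be recovered by differentiating the mgf numerically (a sketch of our own): the second derivative of M(s) = exp(s²t/2) at s = 0 is E[X²(t)] = t:

```python
import math

def mgf(s, t):
    # mgf of standard Brownian motion at time t: E[exp(s X(t))] = exp(s^2 t / 2)
    return math.exp(0.5 * s * s * t)

def second_moment(t, h=1e-4):
    # E[X(t)^2] = M''(0), approximated by a central difference
    return (mgf(h, t) - 2.0 * mgf(0.0, t) + mgf(-h, t)) / (h * h)
```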

10.6. In a standard Brownian motion let ta be the first time that the process reaches a ≥ 0, often known as the hitting time. Let Y (t) be the maximum value of X(s) in 0 ≤ s ≤ t. Both ta and Y (t) are random variables with the property that ta ≤ t if and only if Y (t) ≥ a. Using the reflection principle (Section 10.7), we know that P[Y (t) ≥ a|ta ≤ t] = 1/2. Using this result show that

P(ta ≤ t) = √(2/(πt)) ∫_a^∞ exp[−x^2 /(2t)] dx.

What is the mean hitting time?

Since

P[Y (t) ≥ a] = P[Y (t) ≥ a, ta ≤ t] = P[Y (t) ≥ a|ta ≤ t]P[ta ≤ t],

then, by the reflection principle (Section 10.7 and Figure 10.4) and the result in the question,

P[ta ≤ t] = 2P[Y (t) ≥ a] = √(2/(πt)) ∫_a^∞ exp[−x^2 /(2t)] dx,

from the normal distribution of the standard Brownian motion. Use the substitution z = x/√t. Then

P[ta ≤ t] = √(2/π) ∫_{a/√t}^∞ exp(−z^2 /2) dz.

Finally, differentiation with respect to t gives the probability density

fa (t) = [a/√(2πt^3 )] exp[−a^2 /(2t)], (0 ≤ t < ∞).

The mean hitting time should be given by ∫_0^∞ t fa (t) dt, but this integral diverges, implying that the mean hitting time is infinite.

10.7. X(t) is a standard Brownian motion. Show that Y (t) = X(a^2 t)/a, where a > 0 is a constant, is a standard Brownian motion.

X(t) has the density





fX (x) = [1/√(2πt)] exp[−x^2 /(2t)],

and X(a^2 t) has the density

[1/(a√(2πt))] exp[−x^2 /(2a^2 t)].

Then

P[Y (t) ≤ y] = P[(1/a)X(a^2 t) ≤ y] = P[X(a^2 t) ≤ ay] = [1/(a√(2πt))] ∫_{−∞}^{ay} exp[−x^2 /(2a^2 t)] dx.

Differentiating this expression with respect to y, the density of Y (t) is given by









fY (y) = [1/(a√(2πt))] exp[−a^2 y^2 /(2a^2 t)] × a = [1/√(2πt)] exp[−y^2 /(2t)],

which is the density of a standard Brownian motion.

10.8. X(t) and Y (t) are independent standard Brownian motions. (a) Find the probability densities of U (t) = X^2 (t)/t and V (t) = Y^2 (t)/t. (b) Using mgf's (see Problem 1.29) show that the probability distribution of W^2 (t) = U (t) + V (t) is exponential. (c) Find P[R(t) ≥ r], where R(t) = √[X^2 (t) + Y^2 (t)] is the Euclidean distance of the two-dimensional Brownian motion [X(t), Y (t)] from the origin.

(a) Both X(t) and Y (t) have normal N (0, t) distributions. Let U (t) = X^2 (t)/t. Then

P[U (t) ≤ u] = P[X^2 (t)/t ≤ u] = P[−√(ut) ≤ X(t) ≤ √(ut)].

Differentiating, we obtain the density

fU (u) = d/du [ [1/√(2πt)] ∫_{−√(ut)}^{√(ut)} exp[−x^2 /(2t)] dx ] = [1/√(2πt)] e^{−u/2} √t/√u = [1/√(2πu)] e^{−u/2} ,

which is that of a χ_1^2 distribution. Similarly V (t) = Y^2 (t)/t has the same density fV (v). Notice that these densities are independent of t.

(b) The mgf of U (t) is

MU (s) = E[e^{sU (t)} ] = [1/√(2π)] ∫_0^∞ u^{−1/2} e^{−(1/2−s)u} du = [2/√(2π)] ∫_0^∞ e^{−(1/2−s)z^2} dz = (1 − 2s)^{−1/2} ,

using the substitution u = z^2 and an integral from the Appendix. A similar result holds for MV (s) = E[e^{sV (t)} ]. Since the random variables U and V are independent,

M_{W^2} (s) = MU (s)MV (s) = 1/(1 − 2s),

which is the mgf of the exponential distribution with parameter 1/2.

(c) For R^2 (t) = X^2 (t) + Y^2 (t), we have

P[R^2 (t) ≥ r] = P[W^2 (t) ≥ r/t] = exp[−r/(2t)]

by (b). Finally, for the Euclidean distance R(t),

P[R(t) ≥ r] = P[R^2 (t) ≥ r^2 ] = exp[−r^2 /(2t)],

which gives the probability distribution function of the Euclidean distance.

10.9. X(t) is a standard Brownian motion. Find the moment generating function of



Y (t) = 0 for t = 0, and Y (t) = tX(1/t) for t > 0.

Show that lim_{t→∞} X(t)/t = 0.

The mgf is given by

MY (s) = E[e^{sY (t)} ] = E[e^{stX(1/t)} ] = √(t/(2π)) ∫_{−∞}^{∞} e^{stx} e^{−x^2 t/2} dx = e^{s^2 t/2} .

This is the same mgf as that for X(t); it can be shown that Y (t) is also a standard Brownian motion. Using the previous result,

lim_{t→∞} X(t)/t = lim_{t→∞} Y (1/t) = Y (0) = 0.

[As remarked in Section 10.6, this limit is in the sense almost surely.] The limit does indicate a bound on the possible growth of standard Brownian motion: it grows more slowly than t as t → ∞.

10.10. The probability density function for geometric Brownian motion is given by

f (x, t) = [1/(σx√(2πt))] exp[−(ln x − (µ − σ^2 /2)t)^2 /(2σ^2 t)], (x ≥ 0),

for the random variable

Z(t) = exp[(µ − σ^2 /2)t + σX(t)].

Show that the mean and variance of this lognormal distribution are given by

E[Z(t)] = e^{µt} , V[Z(t)] = e^{2µt} [e^{σ^2 t} − 1].

The mean is given by

E[Z(t)] = [1/(σ√(2πt))] ∫_0^∞ exp[−(ln x − (µ − σ^2 /2)t)^2 /(2σ^2 t)] dx
= [1/(σ√(2πt))] ∫_{−∞}^{∞} exp[−(z − (µ − σ^2 /2)t)^2 /(2σ^2 t)] exp[z] dz
= e^{µt} ,

using the substitution z = ln x and a translated version of an integral in the Appendix. For the variance we require

E[Z^2 (t)] = [1/(σ√(2πt))] ∫_0^∞ x exp[−(ln x − (µ − σ^2 /2)t)^2 /(2σ^2 t)] dx
= [1/(σ√(2πt))] ∫_{−∞}^{∞} exp[−(z − (µ − σ^2 /2)t)^2 /(2σ^2 t)] exp[2z] dz
= e^{(2µ+σ^2 )t} ,

using the same method. Finally,

V[Z(t)] = E[Z^2 (t)] − E[Z(t)]^2 = e^{(2µ+σ^2 )t} − e^{2µt} = e^{2µt} [e^{σ^2 t} − 1].
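These lognormal moments can be checked by simulation (a sketch of our own; the parameter values are arbitrary):

```python
import math
import random

def gbm_moments(mu, sigma, t, trials=200000, seed=4):
    """Sample mean and variance of Z(t) = exp((mu - sigma^2/2)t + sigma*X(t)),
    where X(t) = sqrt(t) * N(0, 1) is standard Brownian motion at time t."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    zs = [math.exp(drift + vol * rng.gauss(0.0, 1.0)) for _ in range(trials)]
    mean = sum(zs) / trials
    var = sum((z - mean) ** 2 for z in zs) / trials
    return mean, var
```

With µ = 0.1, σ = 0.3 and t = 1 the sample mean lies close to e^{0.1} and the sample variance close to e^{0.2}(e^{0.09} − 1).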

10.11. If X(t), (t ≥ 0) is a standard Brownian motion, use the conditional expectation

E[X(t)|X(u), 0 ≤ u < y], (0 ≤ y < t),

to show that X(t) is a martingale (see Section 9.5 for the definition of martingales).

The conditional expectation is a random variable derived from X(t) given any X(u) in the interval 0 ≤ u < y, which we have to show equals X(y). Then

E[X(t)|X(u), 0 ≤ u < y] = E[X(y) + X(t) − X(y)|X(u), 0 ≤ u < y]
= E[X(y)|X(u), 0 ≤ u < y] + E[X(t) − X(y)|X(u), 0 ≤ u < y]
= X(y) + E[X(t) − X(y)|X(u), 0 ≤ u < y],

since the first term is the value X(y) and is not affected by the conditioning. For the other term,

E[X(t) − X(y)|X(u), 0 ≤ u < y] = E[X(t) − X(y)] = 0,

since the increment X(t) − X(y) is independent of the process over [0, y), and Brownian increments have zero mean. Hence the result E[X(t)|X(u), 0 ≤ u < y] = X(y) follows.
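The key fact used above is that the increment X(t) − X(y) is independent of the path up to y and has zero mean. A small simulation (a sketch of our own) estimates the covariance between X(y) and the increment, which should vanish:

```python
import math
import random

def increment_covariance(y, t, trials=100000, seed=5):
    """Sample covariance between X(y) and the increment X(t) - X(y)
    for standard Brownian motion, built from independent normal increments."""
    rng = random.Random(seed)
    sx = sinc = sprod = 0.0
    for _ in range(trials):
        xy = math.sqrt(y) * rng.gauss(0.0, 1.0)       # X(y) ~ N(0, y)
        inc = math.sqrt(t - y) * rng.gauss(0.0, 1.0)  # X(t) - X(y) ~ N(0, t - y)
        sx += xy
        sinc += inc
        sprod += xy * inc
    return sprod / trials - (sx / trials) * (sinc / trials)
```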
