
SOLUTIONS MANUAL FOR APPLIED PROBABILITY AND STOCHASTIC PROCESSES SECOND EDITION

by

Frank Beichelt University of the Witwatersrand Johannesburg, South Africa


Boca Raton London New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2016 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20160511

International Standard Book Number-13: 978-1-4822-5768-7 (Ancillary)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

TABLE OF CONTENTS

CHAPTER 1: RANDOM EVENTS AND THEIR PROBABILITIES 1
CHAPTER 2: ONE-DIMENSIONAL RANDOM VARIABLES 15
CHAPTER 3: MULTIDIMENSIONAL RANDOM VARIABLES 37
CHAPTER 4: FUNCTIONS OF RANDOM VARIABLES 49
CHAPTER 5: INEQUALITIES AND LIMIT THEOREMS 57
CHAPTER 6: BASICS OF STOCHASTIC PROCESSES 65
CHAPTER 7: RANDOM POINT PROCESSES 75
CHAPTER 8: DISCRETE-TIME MARKOV CHAINS 97
CHAPTER 9: CONTINUOUS-TIME MARKOV CHAINS 113
CHAPTER 10: MARTINGALES 143
CHAPTER 11: BROWNIAN MOTION 149
CHAPTER 12: SPECTRAL ANALYSIS OF STATIONARY PROCESSES 161

CHAPTER 1
Random Events and their Probabilities

1.1) A random experiment consists of simultaneously flipping three coins. (1) What is the corresponding sample space? (2) Give the following events in terms of elementary events: A = 'head appears at least two times,' B = 'head appears not more than once,' C = 'no head appears.' (3) Characterize verbally the complementary events of A, B, and C.

Solution
(1) Code the outcome of each coin as 1 ('head') or 0 ('tail'). Then Ω = {(i, j, k); i, j, k = 0, 1}. Ω has 8 elements.
(2) A = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)},
B = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)},
C = {(0, 0, 0)}.
(3) Ā = 'head appears not more than once' (= B). B̄ = 'head appears at least two times' (= A). C̄ = 'at least one head appears.'

1.2) A random experiment consists of flipping a die to the first appearance of a '6.' What is the corresponding sample space?

Solution The (countably infinite) sample space consists of all vectors (z₁, z₂, ..., z_{k−1}, z_k) with the property that z_k = 6 and all z₁, z₂, ..., z_{k−1} are integers between 1 and 5; k = 1, 2, ... .

1.3) Castings are produced weighing either 1, 5, 10, or 20 kg. Let A, B, and C be the events that a casting weighs 1 or 5 kg, exactly 10 kg, and at least 10 kg, respectively. Characterize verbally the events A ∩ B, A ∪ B, A ∩ C̄, and (Ā ∪ B) ∩ C.

Solution
A ∩ B: impossible event.
A ∪ B: a casting weighs 1, 5, or 10 kg.
A ∩ C̄: a casting weighs 1 or 5 kg.
(Ā ∪ B) ∩ C: a casting weighs at least 10 kg.
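The three-coin sample space of exercise 1.1 is small enough to check by exhaustive enumeration; the following sketch (Python, with ad-hoc variable names) verifies the event listings:

```python
from itertools import product
from fractions import Fraction

# Three coins, coded 1 = head, 0 = tail (exercise 1.1).
omega = list(product((0, 1), repeat=3))

A = {w for w in omega if sum(w) >= 2}   # head at least two times
B = {w for w in omega if sum(w) <= 1}   # head not more than once
C = {w for w in omega if sum(w) == 0}   # no head

P = lambda E: Fraction(len(E), len(omega))   # Laplace probability

assert len(omega) == 8
assert A | B == set(omega) and not A & B     # A and B are complementary
assert C <= B                                # C implies B
print(P(A), P(B), P(C))                      # -> 1/2 1/2 1/8
```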

1.4) Three randomly chosen persons are to be tested for the presence of gene g. Three random events are introduced: A = 'none of them has gene g,' B = 'at least one of them has gene g,' C = 'not more than one of them has gene g.' Determine the corresponding sample space and characterize the events A ∩ B, B ∪ C̄, and the complement of B ∩ C̄ by elementary events.

2

SOLUTIONS MANUAL

Solution Let 1 and 0 indicate whether a person has gene g or not, respectively. Then the sample space Ω consists of all 2³ = 8 vectors (z₁, z₂, z₃) with z_i = 1 if person i has gene g, and z_i = 0 otherwise. Ω is the same sample space as in exercise 1.1.
A = {(0, 0, 0)}, B = Ā = Ω\A, C = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)}.
A ∩ B = ∅ (impossible event),
B ∪ C̄ = B (since C̄ ⊂ B),
complement of B ∩ C̄: B̄ ∪ C = A ∪ C = C (de Morgan rule, A ⊂ C).

1.5) Under which conditions are the following relations between events A and B true: (1) A ∩ B = Ω, (2) A ∪ B = Ω, (3) A ∪ B = A ∩ B?

Solution (1) A = B = Ω. (2) A ⊇ B̄ (equivalently, B ⊇ Ā); in particular A = B̄ or B = Ā. (3) A = B.

1.6) Visualize by a Venn diagram that the following relations between random events A, B, and C are true: (1) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), (2) (A ∩ B) ∪ (A ∩ B̄) = A, (3) A ∪ B = B ∪ (A ∩ B̄).

1.7) (1) Verify by a Venn diagram that for three random events A, B, and C the following relation is true: (A\B) ∩ C = (A ∩ C)\(B ∩ C). (2) Verify by the same Venn diagram that the relation (A ∩ B)\C = (A\C) ∩ (B\C) is true as well.

(Figure: Venn diagram of A, B, and C showing the regions A ∩ B ∩ C and (A\B) ∩ C.)

1.8) The random events A and B belong to a σ-algebra E. What events, generated by A and B, must belong to E (see definition 1.2)?

Solution Ω, ∅, A, Ā, B, B̄, A ∪ B, A ∩ B, and, by the de Morgan rules (1.1), the complements Ā ∩ B̄ (of A ∪ B) and Ā ∪ B̄ (of A ∩ B). Other events arise if in these events A and/or B are replaced with Ā and B̄: Ā ∩ B = B\A, A ∩ B̄ = A\B, Ā ∪ B (complement of A\B), A ∪ B̄ (complement of B\A). Any union of two or more of these events does not give rise to an event different from the listed ones.

1 RANDOM EVENTS AND THEIR PROBABILITIES

3

1.9) Two dice D₁ and D₂ are simultaneously thrown. The respective outcomes of D₁ and D₂ are ω₁ and ω₂. Thus, the sample space is Ω = {(ω₁, ω₂); ω₁, ω₂ = 1, 2, ..., 6}. Let events A and B be defined as follows: A = 'ω₁ is even and ω₂ is odd,' B = 'ω₁ and ω₂ satisfy ω₁ + ω₂ = 9.' What is the σ-algebra E generated by the events A and B?

Solution A = {(2, 1), (2, 3), (2, 5), (4, 1), (4, 3), (4, 5), (6, 1), (6, 3), (6, 5)}, B = {(3, 6), (4, 5), (5, 4), (6, 3)}. With these events A and B, the σ-algebra consists of all the events listed under exercise 1.8.

1.10) Let A and B be two disjoint random events, A ⊂ Ω, B ⊂ Ω. Check whether the set of events {A, B, A ∩ B, and A ∩ B} is (1) an exhaustive and (2) a disjoint set of events (Venn diagram).

Solution This set is neither exhaustive nor disjoint.

1.11) A coin is flipped 5 times in a row. What is the probability of the event A that 'head' appears at least 3 times one after the other?

Solution The underlying random experiment is a Laplace experiment whose state space Ω has 2⁵ = 32 elementary events. The favorable outcomes are:
'head' five times in a row: 1 elementary event,
'head' exactly four times in a row: 2 elementary events,
'head' exactly three times in a row: 5 elementary events (HHHTH, HHHTT, THHHT, HTHHH, TTHHH).
Thus, 8 elementary events are favorable for the occurrence of A. Hence, P(A) = 8/32 = 0.25.

1.12) A die is thrown. Let A = {1, 2, 3} and B = {3, 4, 6} be two random events. Determine the probabilities P(A ∪ B), P(A ∩ B), and P(B\A).

Solution P(A) = P(B) = 0.5. P(A ∩ B) = P({3}) = 1/6.
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.5 + 0.5 − 1/6 = 5/6.
P(B\A) = P(B) − P(A ∩ B) = 0.5 − 1/6 = 1/3.

1.13) A die is thrown 3 times. Determine the probability of the event A that the resulting sequence of three integers is strictly increasing.

Solution The state space Ω of this random experiment comprises 6³ = 216 elementary events. The favorable elementary events are the strictly increasing triples, i.e., the (6 choose 3) = 20 outcomes
(1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 2, 6), (1, 3, 4), (1, 3, 5), (1, 3, 6), (1, 4, 5), (1, 4, 6), (1, 5, 6), (2, 3, 4), (2, 3, 5), (2, 3, 6), (2, 4, 5), (2, 4, 6), (2, 5, 6), (3, 4, 5), (3, 4, 6), (3, 5, 6), (4, 5, 6).
Hence, P(A) = 20/216 = 5/54 ≈ 0.0926.
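Hand counts like those in 1.11 and 1.13 are easy to get wrong; a brute-force enumeration over all outcomes double-checks them:

```python
from itertools import product

# Exercise 1.11: five flips (1 = head) containing a run of at least three heads.
def longest_run(seq):
    best = run = 0
    for s in seq:
        run = run + 1 if s == 1 else 0
        best = max(best, run)
    return best

favorable = [seq for seq in product((0, 1), repeat=5) if longest_run(seq) >= 3]
print(len(favorable), 2 ** 5)   # -> 8 32

# Exercise 1.13: strictly increasing results in three throws of a die.
increasing = [t for t in product(range(1, 7), repeat=3) if t[0] < t[1] < t[2]]
print(len(increasing), 6 ** 3)  # -> 20 216
```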


1.14) Two dice are thrown simultaneously. Let (ω₁, ω₂) be an outcome of this random experiment, A = 'ω₁ + ω₂ ≤ 10,' and B = 'ω₁ ⋅ ω₂ ≥ 19.' Determine the probability P(A ∩ B).

Solution Ā = {(5, 6), (6, 5), (6, 6)}, B = {(4, 5), (5, 4), (4, 6), (6, 4), (5, 5), (5, 6), (6, 5), (6, 6)}. P(Ā) = 3/36, P(B) = 8/36. B = (A ∩ B) ∪ (Ā ∩ B) = (A ∩ B) ∪ Ā since Ā ⊂ B. Hence, P(B) = P(A ∩ B) + P(Ā) so that P(A ∩ B) = 5/36.

1.15) What is the probability p₃ to get 3 numbers right with one ticket in the '6 out of 49' number lottery?

Solution Hypergeometric distribution with N = 49, M = n = 6, m = 3:
p₃ = (6 choose 3)(43 choose 3)/(49 choose 6) ≈ 0.01765.

1.16) A sample of 300 students showed the following results with regard to physical fitness and body weight:

weight [kg]            < 60   60–80   > 80
fitness good             48     64     11
fitness satisfactory     22     42     29
fitness bad              19     17     48

One student is randomly chosen. It happens to be Paul. (1) What is the probability that the fitness of Paul is satisfactory? (2) What is the probability that the weight of Paul is greater than 80 kg? (3) What is the probability that the fitness of Paul is bad and that his weight is less than 60 kg?

Solution (1) p = (22 + 42 + 29)/300 = 0.3100. (2) p = (11 + 29 + 48)/300 ≈ 0.2933. (3) p = 19/300 ≈ 0.0633.

1.17) Paul writes four letters and addresses the four accompanying envelopes. After having had a bottle of whisky, he puts the letters randomly into the envelopes. Determine the probabilities p_k that k letters are in the 'correct' envelopes, k = 0, 1, 2, 3.

Solution There are 4! = 24 possibilities (elementary events) to put the letters into the envelopes.
k = 0: there are 9 favorable elementary events: p₀ = 9/24 = 0.3750.
k = 1: there are 8 favorable elementary events: p₁ = 8/24 ≈ 0.3333.
k = 2: there are 6 favorable elementary events: p₂ = 6/24 = 0.2500.
k = 3: p₃ = 0, since three letters in the correct envelopes force the fourth one to be correct as well. (The remaining probability 1 − p₀ − p₁ − p₂ = 1/24 ≈ 0.0417 is that of all four letters being in the correct envelopes.)
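The envelope counts in 1.17 are the fixed-point counts of the 24 permutations of four letters; a quick enumeration confirms them:

```python
from itertools import permutations
from collections import Counter

# Exercise 1.17: distribution of the number of letters in their correct envelopes,
# i.e., of the number of fixed points of a random permutation of {0, 1, 2, 3}.
counts = Counter(sum(p[i] == i for i in range(4)) for p in permutations(range(4)))
print(sorted(counts.items()))   # -> [(0, 9), (1, 8), (2, 6), (4, 1)]
```

Note that the value 3 never occurs: three fixed points force a fourth.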


1.18) A straight stick is broken at two randomly chosen positions. What is the probability that the resulting three parts of the stick allow the construction of a triangle?

Solution Without loss of generality, let the stick have length 1, and let the breaks occur at x and y. The three parts allow the construction of a triangle if and only if each part is shorter than 1/2, i.e., iff min(x, y) < 1/2 < max(x, y) and |x − y| < 1/2. The points (x, y) with this property form two triangles of area 1/8 each in the unit square. By the formula of the geometric probability (1.8), the desired probability is 2 ⋅ 1/8 = 0.25.

1.19) (Encounter problem) Two hikers climb to the top of a mountain from different directions. Their arrival time points are between 0:00 and 1:00 a.m., and they stay on the top for 30 minutes. For each hiker, every time point between 0 and 1 has the same chance to be the arrival time. What is the probability p that the hikers meet on the top?

Solution Let x and y be the arrival time points of the hikers on the top (in hours). They will meet there if and only if |y − x| ≤ 1/2, i.e., if (x, y) belongs to the band around the diagonal of the unit square (hatched part of the Figure 'Exercise 1.19'). Hence, by formula (1.8), the desired probability is p = 1 − (1/2)² = 0.75.

1.20) A fence consists of horizontal and vertical wooden rods with a distance of 10 cm between them (measured from the center of the rods). The rods have a circular sectional view with a diameter of 2 cm. Thus, the arising 'empty' squares have an edge length of 8 cm. Children throw balls with a diameter of 5 cm horizontally at the fence. What is the probability p that a ball passes the fence without touching the rods?

Solution The center of the ball must hit a point in the shaded square of the Figure 'Exercise 1.20'; otherwise the ball would hit a rod. This square is concentric with an 'empty' square and has edge length 8 − 5 = 3 cm. Hence, p = 3²/10² = 9/100 = 0.09.

1.21) Determine the probability p that the quadratic equation x² + 2ax = b − 1 has real solutions if the pair (a, b) is randomly chosen from the quarter circle {(a, b); a, b > 0, a² + b² < 1}.
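The geometric probabilities of 1.18 and 1.19 can be estimated by Monte Carlo simulation; a sketch assuming two independent uniform points on [0, 1], seeded for reproducibility:

```python
import random

rng = random.Random(1)
N = 200_000

# 1.18: the three pieces form a triangle iff every piece is shorter than 1/2.
tri = 0
for _ in range(N):
    x, y = sorted((rng.random(), rng.random()))
    if x < 0.5 and y > 0.5 and y - x < 0.5:   # pieces x, y-x, 1-y all < 1/2
        tri += 1
print(tri / N)   # close to 1/4

# 1.19: the hikers meet iff their arrival times differ by at most 1/2 hour.
meet = sum(abs(rng.random() - rng.random()) <= 0.5 for _ in range(N))
print(meet / N)  # close to 3/4
```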


Solution The solutions of the quadratic equation x² + 2ax = b − 1 are x₁/₂ = −a ± √(a² + b − 1). These solutions are real iff a² + b − 1 ≥ 0 or, equivalently, b ≥ 1 − a². The corresponding set of points (a, b) in the quarter circle lies between the parabola b = 1 − a² and the circle b = √(1 − a²) (Figure 'Exercise 1.21'); its area is
π/4 − ∫₀¹ (1 − a²) da = π/4 − 2/3.
By the formula of the geometric probability (1.8),
p = (π/4 − 2/3)/(π/4) = 1 − 8/(3π) ≈ 0.1512.
(Compare to example 1.9, page 16.)
1.22) Let A and B be disjoint events with P(A) = 0.3 and P(B) = 0.45. Determine the probabilities P(A ∪ B), P(Ā ∩ B̄), P(Ā ∪ B̄), and P(Ā ∩ B).

Solution By the rules provided in section 1.3.3 and taking into account P(A ∩ B) = 0:
P(A ∪ B) = 0.75,
P(Ā ∩ B̄) = 1 − P(A ∪ B) = 0.25,
P(Ā ∪ B̄) = 1 − P(A ∩ B) = 1,
P(Ā ∩ B) = P(B\A) = P(B) − P(A ∩ B) = P(B) = 0.45.

1.23) Let P(A ∩ B̄) = 0.3 and P(B) = 0.6. Determine P(Ā ∩ B̄).

Solution Ā ∩ B̄ is the complement of A ∪ B (de Morgan) and equals B̄\A. Hence,
P(Ā ∩ B̄) = P(B̄) − P(A ∩ B̄) = 0.4 − 0.3 = 0.1.

1.24) Is it possible that for two random events A and B with P(A) = 0.4 and P(B) = 0.2 the relation P(A ∩ B) = 0.3 is true?

Solution No, since A ∩ B ⊆ B so that 0.3 = P(A ∩ B) ≤ P(B). But, by assumption, P(B) = 0.2.

1.25) Check whether for 3 arbitrary random events A, B, and C the following constellations of probabilities can be true: (1) P(A) = 0.6, P(A ∩ B) = 0.2, and P(A ∩ B̄) = 0.5, (2) P(A) = 0.6, P(B) = 0.4, P(A ∩ B) = 0, and P(A ∩ B ∩ C) = 0.1, (3) P(A ∪ B ∪ C) = 0.68 and P(A ∩ B) = P(A ∩ C) = 1.

Solution (1) No, since 0.5 = P(A ∩ B̄) = P(A) − P(A ∩ B) ≠ 0.6 − 0.2 = 0.4.
(2) No, since A ∩ B ∩ C ⊆ A ∩ B so that P(A ∩ B ∩ C) ≤ P(A ∩ B) = 0, contradicting P(A ∩ B ∩ C) = 0.1.
(3) No, since P(A ∩ B) = P(A ∩ C) = 1 implies P(A) = P(B) = P(C) = 1, so that P(A ∪ B ∪ C) = 1 ≠ 0.68.


1.26) Show that for two arbitrary random events A and B the following inequalities are true: P(A ∩ B) ≤ P(A) ≤ P(A ∪ B) ≤ P(A) + P(B).

Solution This chain of inequalities follows from A ∩ B ⊆ A ⊆ A ∪ B and P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

1.27) Let A, B, and C be 3 arbitrary random events. (1) Express the event 'A occurs and both B and C do not occur' in terms of suitable relations between these events and their complements. (2) Prove: the probability p of the event 'exactly one of the events A, B, or C occurs' is
P(A) + P(B) + P(C) − 2P(A ∩ B) − 2P(A ∩ C) − 2P(B ∩ C) + 3P(A ∩ B ∩ C).

Solution (1) A ∩ B̄ ∩ C̄.
(2) p = P(A ∩ B̄ ∩ C̄) + P(B ∩ Ā ∩ C̄) + P(C ∩ Ā ∩ B̄).
Since A ∩ B̄ ∩ C̄ = A \ (A ∩ (B ∪ C)) and, by formula (1.17), P(A ∩ (B ∪ C)) = P((A ∩ B) ∪ (A ∩ C)) = P(A ∩ B) + P(A ∩ C) − P(A ∩ B ∩ C), the first term has the sum representation
P(A ∩ B̄ ∩ C̄) = P(A) − P(A ∩ B) − P(A ∩ C) + P(A ∩ B ∩ C). (a)
Analogously (cyclically replacing A, B, C),
P(B ∩ Ā ∩ C̄) = P(B) − P(A ∩ B) − P(B ∩ C) + P(A ∩ B ∩ C), (b)
P(C ∩ Ā ∩ B̄) = P(C) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C). (c)
Adding up (a), (b), and (c) gives the desired result.
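The formula in 1.27(2) can be sanity-checked on any finite Laplace space; here is a sketch with arbitrarily chosen events and exact rational arithmetic:

```python
from fractions import Fraction

# Small Laplace space Omega = {0, ..., 11}; A, B, C chosen arbitrarily.
Omega = set(range(12))
A, B, C = {0, 1, 2, 3, 4}, {3, 4, 5, 6}, {4, 6, 7, 8, 9}
P = lambda E: Fraction(len(E), len(Omega))

exactly_one = (A - B - C) | (B - A - C) | (C - A - B)
formula = (P(A) + P(B) + P(C)
           - 2 * P(A & B) - 2 * P(A & C) - 2 * P(B & C)
           + 3 * P(A & B & C))
print(P(exactly_one), formula)
assert P(exactly_one) == formula
```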

Section 1.4

1.28) Two dice are simultaneously thrown. The result is (ω₁, ω₂). What is the probability p of the event 'ω₂ = 6' on condition that 'ω₁ + ω₂ = 8'?

Solution The condition reduces the space of the elementary events to Ω_r = {(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)}. Only one elementary event from Ω_r is favorable for the occurrence of 'ω₂ = 6.' Hence, p = 1/5 = 0.2.

1.29) Two dice are simultaneously thrown. By means of formula (1.24) determine the probability p of the event A that the dice show the same number.

Solution Let B_i be the event that die 1 shows 'i.' Then P(B_i) = 1/6 and P(A | B_i) = 1/6. Hence,
p = Σ_{i=1}^{6} P(A | B_i) P(B_i) = Σ_{i=1}^{6} (1/6) ⋅ (1/6) = 1/6.

1.30) A publishing house offers a new book as a standard or a luxury edition and with or without a CD. The publisher analyzes the first 1000 orders:

                    luxury edition
                    yes     no
with CD     yes     324     82
            no       48    546


Let A (B) be the random event that a book, randomly chosen from these 1000, is a luxury one (comes with a CD). (1) Determine the probabilities P(A), P(B), P(A ∪ B), P(A ∩ B), P(A | B), P(B | A), P(A ∪ B | B̄), and P(Ā | B̄). (2) Are the events A and B independent?

Solution (1) P(A) = 0.372, P(B) = 0.406,
P(A ∪ B) = (324 + 48 + 82)/1000 = 0.454, P(A ∩ B) = 0.324,
P(A | B) = 324/406 ≈ 0.7980, P(B | A) = 324/372 ≈ 0.8710,
P(A ∪ B | B̄) = 48/594 ≈ 0.0808, P(Ā | B̄) = 546/(48 + 546) ≈ 0.9192.
(2) No, since P(A) ⋅ P(B) = 0.372 ⋅ 0.406 ≈ 0.1510 ≠ P(A ∩ B) = 0.324.

1.31) A manufacturer equips its newly developed car of type Treekill optionally with or without a tracking device and with or without speed limitation technology. He analyzes the first 1200 orders:

                          speed limitation
                          yes     no
tracking device   yes      74    642
                  no       48    436

Let A (B) be the random event that a car, randomly chosen from these 1200, has speed limitation (comes with a tracking device). (1) Calculate the probabilities P(A), P(B), and P(A ∩ B) from the figures in the table. (2) Based on the probabilities determined under (1) and only by using the rules developed in section 1.3.3, determine the probabilities P(A ∪ B), P(A | B), P(B | A), P(A ∪ B | B̄), and P(Ā | B̄).

Solution (1) P(A) = 122/1200 ≈ 0.1017, P(B) = 716/1200 ≈ 0.5967, P(A ∩ B) = 74/1200 ≈ 0.0617.
(2) P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = (122 + 716 − 74)/1200 ≈ 0.6367,
P(A | B) = P(A ∩ B)/P(B) = 74/716 ≈ 0.1034,
P(B | A) = P(A ∩ B)/P(A) = 74/122 ≈ 0.6066,
P(A ∪ B | B̄) = P((A ∪ B) ∩ B̄)/P(B̄) = (P(A) − P(A ∩ B))/P(B̄) = (122 − 74)/484 ≈ 0.0992,
P(Ā | B̄) = P(Ā ∩ B̄)/P(B̄) = (1 − P(A ∪ B))/P(B̄) = 436/484 ≈ 0.9008.
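Contingency-table computations like those in 1.30 and 1.31 all follow one pattern; a sketch for the 1.31 counts (variable names are ad hoc):

```python
from fractions import Fraction

# Exercise 1.31: A = speed limitation, B = tracking device.
n = 1200
n_A = 74 + 48          # cars with speed limitation
n_B = 74 + 642         # cars with a tracking device
n_AB = 74              # cars with both

P_A, P_B, P_AB = Fraction(n_A, n), Fraction(n_B, n), Fraction(n_AB, n)
P_A_union_B = P_A + P_B - P_AB
P_A_given_B = P_AB / P_B
P_notA_given_notB = (1 - P_A_union_B) / (1 - P_B)
print(round(float(P_A), 4), round(float(P_A_union_B), 4),
      round(float(P_A_given_B), 4), round(float(P_notA_given_notB), 4))
# -> 0.1017 0.6367 0.1034 0.9008
```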

1.32) A bowl contains m white marbles and n red marbles. A marble is taken randomly from the bowl and returned to the bowl together with r marbles of the same color. This procedure continues to infinity. (1) What is the probability that the second marble taken is red? (2) What is the probability that the first marble taken is red on condition that the second marble taken is red as well? (This is a variant of Pólya's urn problem.)

Solution (1) Let A₁ (A₂) be the event that the first (second) drawn marble is red. On condition A₁ there are n + r red marbles among the m + n + r marbles in the bowl, and on condition Ā₁ there are n red marbles. Hence,
P(A₂ | A₁) = (n + r)/(m + n + r) and P(A₂ | Ā₁) = n/(m + n + r).
By the total probability rule (1.24),
P(A₂) = P(A₂ | A₁) P(A₁) + P(A₂ | Ā₁) P(Ā₁)
= [(n + r)/(m + n + r)] ⋅ [n/(m + n)] + [n/(m + n + r)] ⋅ [m/(m + n)]
= n/(m + n) = P(A₁).
Thus, P(A₁) = P(A₂).
(2) By formula (1.22), the desired probability P(A₁ | A₂) is given by the ratio
P(A₁ | A₂) = P(A₁ ∩ A₂)/P(A₂) = P(A₂ | A₁) ⋅ P(A₁)/P(A₂).
The results obtained under (1) are applicable and yield P(A₁ | A₂) = (n + r)/(m + n + r).

1.33) A test procedure for diagnosing faults in circuits indicates no fault with probability 0.99 if the circuit is faultless. It indicates a fault with probability 0.90 if the circuit is faulty. Let the probability of a circuit to be faulty be 0.02. (1) What is the probability that a circuit is faulty if the test procedure indicates a fault? (2) What is the probability that a circuit is faultless if the test procedure indicates that it is faultless?

Solution Let A be the random event that a circuit (selected at random from the population) is faulty, and B be the random event that the test indicates a fault. From the probabilities given,
P(A) = 0.02, P(B | A) = 0.90, P(B̄ | A) = 0.10, P(B | Ā) = 0.01, P(B̄ | Ā) = 0.99.
(1) By the total probability rule (1.24), the probability that the test indicates a fault is
P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) = 0.90 ⋅ 0.02 + 0.01 ⋅ 0.98 = 0.0278.
By Bayes' formula (1.25), the desired probability is
P(A | B) = P(B | A) P(A)/P(B) = 0.90 ⋅ 0.02/0.0278 ≈ 0.6475.
(2) Again by Bayes' formula, the desired probability is
P(Ā | B̄) = P(B̄ | Ā) P(Ā)/P(B̄) = 0.99 ⋅ 0.98/(1 − 0.0278) ≈ 0.9979.
Contrary to the result obtained under (1), this probability is a strong argument in favor of the test.

1.34) Suppose 2% of cotton fabric rolls and 3% of nylon fabric rolls contain flaws. Of the rolls used by a manufacturer, 70% are cotton and 30% are nylon. (1) What is the probability that a randomly selected roll used by the manufacturer contains flaws? (2) Given that a randomly selected roll used by the manufacturer does not contain flaws, what is the probability that it is a nylon fabric roll?

Solution A roll is selected at random from the ones used by the manufacturer. Let A be the random event that this roll contains flaws, and B be the random event that this roll is cotton. Then,
P(B) = 0.7, P(B̄) = 0.3, P(A | B) = 0.02, P(Ā | B) = 0.98, P(A | B̄) = 0.03, P(Ā | B̄) = 0.97.
(1) By the total probability rule,
P(A) = P(A | B) P(B) + P(A | B̄) P(B̄) = 0.02 ⋅ 0.70 + 0.03 ⋅ 0.30 = 0.0230.
(2) By Bayes' formula,
P(B̄ | Ā) = P(Ā | B̄) P(B̄)/P(Ā) = 0.97 ⋅ 0.30/(1 − 0.0230) ≈ 0.2979.
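The result of 1.32 does not depend on m, n, r; a sketch with exact rational arithmetic for one (arbitrarily chosen) parameter set illustrates both parts:

```python
from fractions import Fraction

# Exercise 1.32 with assumed numbers m = 3 white, n = 2 red, r = 4.
m, n, r = 3, 2, 4
P_A1 = Fraction(n, m + n)                       # first marble red
P_A2_given_A1 = Fraction(n + r, m + n + r)      # second red after a red
P_A2_given_not_A1 = Fraction(n, m + n + r)      # second red after a white

# Total probability rule, then Bayes' formula.
P_A2 = P_A2_given_A1 * P_A1 + P_A2_given_not_A1 * (1 - P_A1)
P_A1_given_A2 = P_A2_given_A1 * P_A1 / P_A2
print(P_A1, P_A2)        # both equal n/(m+n) -> 2/5 2/5
print(P_A1_given_A2)     # equals (n+r)/(m+n+r) -> 2/3
```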


1.35) A group of 8 students arrives at an examination. Of these students 1 is very well prepared, 2 are well prepared, 3 are satisfactorily prepared, and 2 are insufficiently prepared. There is a total of 16 questions. A very well prepared student can answer all of them, a well prepared 12, a satisfactorily prepared 8, and an insufficiently prepared 4. Each student has to draw randomly 4 questions. Frank could answer all the 4 questions. What is the probability that Frank (1) was very well prepared, (2) was insufficiently prepared?

Solution Let A₁, A₂, A₃, A₄ be the events that, in this order, a randomly chosen student from the group is very well, well, satisfactorily, and insufficiently prepared. Then, by the figures given,
P(A₁) = 1/8, P(A₂) = 1/4, P(A₃) = 3/8, P(A₄) = 1/4.
Let further B be the event that Frank got all 4 answers right. Then,
P(B | A₁) = 1,
P(B | A₂) = (12/16)(11/15)(10/14)(9/13) ≈ 0.271978,
P(B | A₃) = (8/16)(7/15)(6/14)(5/13) ≈ 0.038462,
P(B | A₄) = (4/16)(3/15)(2/14)(1/13) ≈ 0.000549.
Hence, by the total probability rule (1.24), P(B) ≈ 0.207555. The desired probabilities are obtained from the formula of Bayes (1.25):
(1) P(A₁ | B) = P(B | A₁) ⋅ P(A₁)/P(B) ≈ 1 ⋅ 0.125/0.207555 ≈ 0.60225.
(2) P(A₄ | B) = P(B | A₄) ⋅ P(A₄)/P(B) ≈ 0.000549 ⋅ 0.25/0.207555 ≈ 0.00066.
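The total-probability and Bayes steps of 1.35 can be reproduced with exact fractions (the helper name p_all_right is ad hoc):

```python
from fractions import Fraction

# Exercise 1.35: posterior preparation level given 4 correctly answered questions.
def p_all_right(known, total=16, drawn=4):
    """Probability that all drawn questions are among the known ones."""
    p = Fraction(1)
    for i in range(drawn):
        p *= Fraction(known - i, total - i)
    return p

priors = [Fraction(1, 8), Fraction(1, 4), Fraction(3, 8), Fraction(1, 4)]
likes = [p_all_right(k) for k in (16, 12, 8, 4)]
P_B = sum(p * L for p, L in zip(priors, likes))      # total probability rule
post = [p * L / P_B for p, L in zip(priors, likes)]  # Bayes' formula
print(float(P_B), float(post[0]), float(post[3]))
```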

1.36) Symbols 0 and 1 are transmitted independently from each other in proportion 1 : 4. Random noise may cause transmission failures: If a 0 was sent, then a 1 will arrive at the sink with probability 0.1. If a 1 was sent, then a 0 will arrive at the sink with probability 0.05. (Figure: transition diagram from transmitter to receiver with probabilities 0 → 0: 0.9, 0 → 1: 0.1, 1 → 0: 0.05, 1 → 1: 0.95.) (1) What is the probability that a received symbol is '1'? (2) '1' has been received. What is the probability that '1' had been sent? (3) '0' has been received. What is the probability that '1' had been sent?

Solution Let A be the event that a 1 has arrived at the sink and B be the event that a 1 had been sent. Then Ā (B̄) is the event that a 0 has arrived at the sink (had been sent). The proportion P(B)/P(B̄) = 4 implies P(B) = 0.80. From the probabilities given,
P(A | B) = 0.95, P(Ā | B) = 0.05, P(A | B̄) = 0.10, P(Ā | B̄) = 0.90.
(1) By the total probability rule,
P(A) = P(A | B) P(B) + P(A | B̄) P(B̄) = 0.95 ⋅ 0.80 + 0.10 ⋅ 0.20 = 0.78.
(2) By Bayes' formula (1.25), the desired probability is
P(B | A) = P(A | B) P(B)/P(A) = 0.95 ⋅ 0.80/0.78 ≈ 0.9744.
(3) Again by Bayes' formula,
P(B | Ā) = P(Ā | B) P(B)/P(Ā) = 0.05 ⋅ 0.80/0.22 ≈ 0.1818.

1.37) The companies 1, 2, and 3 have 60, 80, and 100 employees with 45, 40, and 25 women, respectively. In every company, employees have the same chance to be retrenched. It is known that a woman had been retrenched (event B). What is the probability that she had worked in company 1, 2, and 3, respectively?

Solution Let A₁, A₂, A₃ be the events that a randomly selected employee of the 240 total is with company 1, 2, 3, respectively, and B be the event that the chosen employee is a woman. Then,
P(A₁) = 60/240 = 0.25, P(A₂) = 80/240 = 1/3, P(A₃) = 100/240 = 5/12.
Given A_i, the conditional probabilities P(B | A_i) are
P(B | A₁) = 45/60 = 0.75, P(B | A₂) = 40/80 = 0.5, P(B | A₃) = 25/100 = 0.25.
Hence, P(B) = 0.75 ⋅ 0.25 + 0.5 ⋅ (1/3) + 0.25 ⋅ (5/12) = 11/24 ≈ 0.4583.
From Bayes' formula:
P(A₁ | B) = P(B | A₁) P(A₁)/P(B) = 0.1875/0.4583 ≈ 0.4091.
Analogously, P(A₂ | B) ≈ 0.3636 and P(A₃ | B) ≈ 0.2273. (Check: of the 110 women, 45, 40, and 25 work in companies 1, 2, and 3, and 45/110 ≈ 0.4091.)

1.38) John needs to take an examination, which is organized as follows: To each question 5 answers are given. But John knows the correct answer only with probability 0.6. Thus, with probability 0.4 he has to guess the right answer. In this case, John guesses the correct answer with probability 1/5 (that means, he chooses an answer by chance). What is the probability that John knew the answer to a question given that he did answer the question correctly?

Solution
Event A: John knows the answer; P(A) = 0.6.
Event Ā: John does not know the answer; P(Ā) = 0.4.
Event B: John's answer was correct; P(B | A) = 1, P(B | Ā) = 0.2.
P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) = 1 ⋅ 0.6 + 0.2 ⋅ 0.4 = 0.68.
Hence, P(A | B) = P(B | A) P(A)/P(B) = 0.6/0.68 ≈ 0.88235.

1.39) A delivery of 25 parts is subject to a quality control according to the following scheme: A sample of size 5 is drawn. If at least one part is faulty, then the delivery is rejected. If all 5 parts are o.k., then they are returned to the lot, and a sample of size 10 is randomly taken from the again 25 parts. The delivery is rejected if at least 1 part from the 10 is faulty. Determine the probabilities that a delivery is rejected on condition that (1) it contains 2 defective parts, (2) it contains 4 defective parts.


Solution Let B be the event that the delivery is rejected, and let A be the event that the delivery is rejected after the first sample of 5. Then
P(B) = P(B | A) P(A) + P(B | Ā) P(Ā) with P(B | A) = 1.
(1) P(Ā) = (23/25)(22/24)(21/23)(20/22)(19/21) = (20 ⋅ 19)/(25 ⋅ 24) ≈ 0.6333, P(A) ≈ 0.3667.
P(B | Ā) is the probability of rejection of the delivery based on the second sample of size 10, i.e.,
P(B | Ā) = 1 − (23 ⋅ 22 ⋅ ... ⋅ 14)/(25 ⋅ 24 ⋅ ... ⋅ 16) = 1 − (15 ⋅ 14)/(25 ⋅ 24) = 1 − 0.35 = 0.65.
Hence, the probability of rejection of the delivery is
P(B) = 1 ⋅ 0.3667 + 0.65 ⋅ 0.6333 ≈ 0.7783.
(2) P(B | A) = 1, P(Ā) = (21/25)(20/24)(19/23)(18/22)(17/21) ≈ 0.3830, P(A) ≈ 0.6170,
P(B | Ā) = 1 − (21 ⋅ 20 ⋅ ... ⋅ 12)/(25 ⋅ 24 ⋅ ... ⋅ 16) = 1 − (15 ⋅ 14 ⋅ 13 ⋅ 12)/(25 ⋅ 24 ⋅ 23 ⋅ 22) ≈ 0.8921.
P(B) = 1 ⋅ 0.6170 + 0.8921 ⋅ 0.3830 ≈ 0.9587.

1.40) The random events A₁, A₂, ..., Aₙ are assumed to be independent. Show that
P(A₁ ∪ A₂ ∪ ... ∪ Aₙ) = 1 − (1 − P(A₁))(1 − P(A₂)) ... (1 − P(Aₙ)).

Solution Applying the de Morgan rule gives
P(A₁ ∪ A₂ ∪ ... ∪ Aₙ) = 1 − P(Ā₁ ∩ Ā₂ ∩ ... ∩ Āₙ).
Since the A_i are independent, so are the Ā_i, and
P(Ā₁ ∩ Ā₂ ∩ ... ∩ Āₙ) = P(Ā₁) ⋅ P(Ā₂) ⋅ ... ⋅ P(Āₙ),
which implies the desired result.

1.41) n hunters shoot at a target independently of each other, and each of them hits it with probability 0.8. Determine the smallest n with the property that the target is hit with probability 0.99 by at least one hunter.

Solution Let A_i be the event that hunter i hits the target. Then P(A_i) = 0.8, i = 1, 2, ..., n, and the probability p_n that at least one of n hunters hits the target is given by P(A₁ ∪ A₂ ∪ ... ∪ Aₙ). Hence, by exercise 1.40, p_n = 1 − (0.2)ⁿ. The smallest integer n with p_n ≥ 0.99 is n_min = 3.

1.42) Starting a car of type Treekill is successful with probability 0.6. What is the probability p₄ that the driver needs no more than 4 start trials to be able to leave?

Solution Let A_i be the event that the i-th trial is the first successful one. Then, since the A_i are disjoint,
p₄ = P(A₁ ∪ A₂ ∪ A₃ ∪ A₄) = P(A₁) + P(A₂) + P(A₃) + P(A₄)
= 0.6 + 0.4 ⋅ 0.6 + 0.4² ⋅ 0.6 + 0.4³ ⋅ 0.6 = 0.9744.
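The two-stage sampling plan of 1.39 is conveniently evaluated with binomial coefficients; a sketch:

```python
from math import comb

# Exercise 1.39: probability of rejecting a lot of N = 25 with d defectives.
# Stage 1: sample 5; accept the stage only if all are good (sample returned).
# Stage 2: sample 10 from the full lot; reject if any defective appears.
def p_reject(d, N=25):
    pass5 = comb(N - d, 5) / comb(N, 5)      # first sample all good
    pass10 = comb(N - d, 10) / comb(N, 10)   # second sample all good
    return (1 - pass5) + pass5 * (1 - pass10)

print(round(p_reject(2), 4), round(p_reject(4), 4))   # -> 0.7783 0.9587
```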


1.43) Let A and B be two subintervals of [0, 1]. A point x is randomly chosen from [0, 1]. Now A and B can be interpreted as random events, which occur if x ∈ A or x ∈ B, respectively. Under which condition are A and B independent?

Solution Interpreting interval lengths as probabilities, P(A) = |A|, P(B) = |B|, and P(A ∩ B) = |A ∩ B|. Hence A and B are independent iff
|A ∩ B| = |A| ⋅ |B|.
(For example, A = [0, 1/2] and B = [1/4, 3/4] are independent; two disjoint intervals of positive length never are, since then P(A ∩ B) = 0 < P(A) P(B).)

1.44) A tank is shot at by 3 independently acting antitank helicopters 1, 2, and 3 with one antitank missile each. Each missile hits the tank with probability 0.6. If the tank is hit by 1 missile, it is put out of action with probability 0.8. If the tank is hit by at least 2 missiles, it is put out of action with probability 1. What is the probability that the tank is put out of action by this attack?

Solution The underlying sample space regarding the random number of missiles hitting the tank is {(i₁, i₂, i₃); i_k = 0, 1}, where i_k = 0 (1) indicates that helicopter k has missed (hit) the target, k = 1, 2, 3. Let A_n be the event that the tank is hit by n missiles. Hence,
A₀ = {(0, 0, 0)}, A₁ = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, A₂ = {(1, 1, 0), (1, 0, 1), (0, 1, 1)}, A₃ = {(1, 1, 1)}
so that P(A₀) = 0.4³, P(A₁) = 3 ⋅ 0.4² ⋅ 0.6, P(A₂) = 3 ⋅ 0.4 ⋅ 0.6², P(A₃) = 0.6³.
Let B be the event that the tank is put out of action by the attack. By the total probability rule,
P(B) = 0 ⋅ P(A₀) + 0.8 ⋅ P(A₁) + 1 ⋅ P(A₂) + 1 ⋅ P(A₃) = 0.8784.

1.45) An aircraft is targeted by two independently acting ground-to-air missiles 1 and 2. Each missile hits the aircraft with probability 0.6 if these missiles are not being destroyed before. The aircraft will crash with probability 1 if being hit by at least one missile. On the other hand, the aircraft defends itself by firing one air-to-air missile each at the approaching ground-to-air missiles. The air-to-air missiles destroy their respective targets with probability 0.5.
(1) What is the probability p that the aircraft will crash as a result of this attack?
(2) What is the probability that the aircraft will crash if two air-to-air missiles are fired at each of the approaching ground-to-air missiles?

Solution (1) Let B_i be the event that the aircraft will be hit by missile i, and A_i be the event that the air-to-air missile fired at missile i hits this missile; i = 1, 2. Then
P(B_i) = P(B_i | A_i) P(A_i) + P(B_i | Ā_i) P(Ā_i) = 0 ⋅ 0.5 + 0.6 ⋅ 0.5 = 0.3.
Since B₁ and B₂ are independent, the desired probability is
p = P(B₁ ∪ B₂) = P(B₁) + P(B₂) − P(B₁ ∩ B₂) = 0.3 + 0.3 − 0.3 ⋅ 0.3 = 0.51.
(2) In this case, each of the approaching ground-to-air missiles 1 and 2 is destroyed with probability 1 − 0.5² = 0.75. Hence, if A_i now refers to the joint effect of the two air-to-air missiles fired at missile i, then
P(B_i) = P(B_i | A_i) P(A_i) + P(B_i | Ā_i) P(Ā_i) = 0 ⋅ 0.75 + 0.6 ⋅ 0.25 = 0.15.
Hence, p = P(B₁ ∪ B₂) = 0.15 + 0.15 − 0.15 ⋅ 0.15 = 0.2775.
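Since the number of hits in 1.44 is binomial with n = 3 and p = 0.6, the total probability rule reduces to a few lines:

```python
from math import comb

# Exercise 1.44: hits ~ binomial(3, 0.6); the tank is put out of action with
# probability 0.8 given exactly one hit, and 1 given two or more hits.
p_hit = 0.6
p_n = [comb(3, k) * p_hit**k * (1 - p_hit)**(3 - k) for k in range(4)]
p_out = 0 * p_n[0] + 0.8 * p_n[1] + 1.0 * (p_n[2] + p_n[3])
print(round(p_out, 4))   # -> 0.8784
```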


1.46) The liquid flow in a pipe can be interrupted by two independently operating valves V₁ and V₂, which are connected in series (Figure: valves V₁ and V₂ in series). For interrupting the liquid flow it is sufficient if one valve closes properly. The probability that an interruption is achieved when necessary is 0.98 for both valves. On the other hand, liquid flow is only possible if both valves are open. Switching from 'closed' to 'open' is successful with probability 0.99 for each of the valves. (1) Determine the probability to be able to interrupt the liquid flow if necessary. (2) What is the probability to be able to resume liquid flow if both valves are closed?

Solution Let A₁ (A₂) be the event that valve V₁ (V₂) properly opens when closed, and B₁ (B₂) be the event that valve V₁ (V₂) properly closes when open.
(1) The desired probability is
P(B₁ ∪ B₂) = P(B₁) + P(B₂) − P(B₁ ∩ B₂) = 0.98 + 0.98 − 0.98 ⋅ 0.98 = 0.9996.
(2) The desired probability is
P(A₁ ∩ A₂) = P(A₁) ⋅ P(A₂) = 0.99 ⋅ 0.99 = 0.9801.

CHAPTER 2 One-Dimensional Random Variables

Sections 2.1 and 2.2

2.1) An ornithologist measured the weight of 132 eggs of helmeted guinea fowls [gram]:

number i        1    2    3    4    5    6    7    8    9   10
weight x_i     38   41   42   43   44   45   46   47   48   50     (*)
number n_i      4    6    7   10   13   26   33   16   10    7

There are no eggs weighing less than 38 or more than 50. Let X be the weight of a randomly picked egg from this sample.
(1) Determine the probability distribution of X.
(2) Draw the distribution function of X.
(3) Determine the probabilities P(43 ≤ X ≤ 48) and P(X > 45).
(4) Determine E(X), √Var(X), and E(|X − E(X)|).

Solution
(1) The probability distribution of the discrete random variable X is given by
{p_i = P(X = x_i) = n_i/132; i = 1, 2, ..., 10}:

i            1       2       3       4       5       6       7       8       9      10
x_i         38      41      42      43      44      45      46      47      48      50
p_i     0.0303  0.0455  0.0530  0.0758  0.0985  0.1970  0.2500  0.1212  0.0758  0.0530
F(x_i)  0.0303  0.0758  0.1288  0.2046  0.3031  0.5001  0.7500  0.8712  0.9470       1

(2) [Figure: the distribution function F(x), a step function rising from 0 at x = 38 to 1 at x = 50]

(3) P(43 ≤ X ≤ 48) = Σ_{i=4}^{9} p_i = 0.8183,  P(X > 45) = Σ_{i=7}^{10} p_i = 0.5.

(4) E(X) = Σ_{i=1}^{10} p_i x_i = 45.182,
√Var(X) = √( Σ_{i=1}^{10} p_i (x_i − E(X))^2 ) = 2.4056,
E(|X − E(X)|) = Σ_{i=1}^{10} p_i |x_i − E(X)| = 1.7879.
These numerical values have been directly calculated from table (*).

2.2) 114 nails are classified by length:

number i               1     2     3     4     5     6     7
length x_i (mm)  <15  15.0  15.1  15.2  15.3  15.4  15.5  15.6  >15.6
number n_i        0     3    10    25    40    18    16     2     0

Let X denote the length of a nail selected randomly from this set.

(1) Determine the probability distribution of X and draw its histogram.
(2) Determine the probabilities P(X ≤ 5) and P(15.0 < X ≤ 15.5).
(3) Determine E(X), m_3 = E(X − E(X))^3, √Var(X), x_m, γ_C, and γ_P. Interpret the skewness measures.

Solution
(1) The probability distribution of X is given by {p_i = P(X = x_i) = n_i/114; i = 1, 2, ..., 7}.
(2) P(X ≤ 5) = 0,
P(15.0 < X ≤ 15.5) = (1/114)(n_2 + n_3 + n_4 + n_5 + n_6) = 109/114 ≈ 0.9561.
(3) E(X) = (1/114)(3 ⋅ 15 + 10 ⋅ 15.1 + 25 ⋅ 15.2 + 40 ⋅ 15.3 + 18 ⋅ 15.4 + 16 ⋅ 15.5 + 2 ⋅ 15.6) = 15.3018,
m_3 = (1/114)(3 ⋅ (15 − 15.3018)^3 + 10 ⋅ (15.1 − 15.3018)^3 + ... + 2 ⋅ (15.6 − 15.3018)^3) = 0.0000319,
σ^2 = Var(X) = (1/114)(3 ⋅ (15 − 15.3018)^2 + 10 ⋅ (15.1 − 15.3018)^2 + ... + 2 ⋅ (15.6 − 15.3018)^2) = 1.91965/114 = 0.01684,
σ = √Var(X) = 0.12977,
x_m = 15.3, γ_C = m_3/σ^3 = 0.0146, γ_P = (E(X) − x_m)/σ = (15.3018 − 15.3)/0.12977 = 0.0139.
Both skewness measures are positive but close to 0: the distribution is nearly symmetric, with a slight skew to the right.
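The moments in 2.2 (3) can be recomputed directly from the frequency table. An editorial caveat: the third central moment is sensitive to rounding of the mean; with the rounded mean 15.3018 one obtains m_3 ≈ 0.0000319 as above, while carrying full precision gives m_3 ≈ 0.0000342 and hence γ_C ≈ 0.0157. The sketch below is an editorial check, not part of the original solution.

```python
# Empirical moments and skewness measures for the nail-length data of 2.2.
lengths = [15.0, 15.1, 15.2, 15.3, 15.4, 15.5, 15.6]
counts  = [3, 10, 25, 40, 18, 16, 2]
n = sum(counts)                                   # 114 nails

mean  = sum(c * x for x, c in zip(lengths, counts)) / n
var   = sum(c * (x - mean) ** 2 for x, c in zip(lengths, counts)) / n
m3    = sum(c * (x - mean) ** 3 for x, c in zip(lengths, counts)) / n
sigma = var ** 0.5

mode = lengths[counts.index(max(counts))]         # most frequent value: 15.3
gamma_C = m3 / sigma ** 3                         # moment skewness
gamma_P = (mean - mode) / sigma                   # Pearson's mode skewness
```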

2.3) A set of 100 coins from an ongoing production process had been sampled and their diameters measured. The measurement procedure allows for a degree of accuracy of ±0.04 mm. The table shows the measured values and their numbers:

i        1      2      3      4      5      6      7
x_i  24.88  24.92  24.96  25.00  25.04  25.08  25.12
n_i      2      6     20     40     22      8      2

Let X be the diameter of a coin picked randomly from this set. Determine E(X), E(|X − E(X)|), √Var(X), and V(X).

Solution
E(X) = Σ_{i=1}^{7} p_i x_i = (1/100) Σ_{i=1}^{7} n_i x_i = 25.0024,
E(|X − E(X)|) = (1/100) Σ_{i=1}^{7} n_i |x_i − 25.0024| = 0.033664,
Var(X) = (1/100) Σ_{i=1}^{7} n_i (x_i − 25.0024)^2 = 0.00214,
√Var(X) = 0.0463, V(X) = √Var(X)/E(X) = 0.0018.

2.4) 83 specimen copies of soft coal, sampled from the ongoing production in a colliery over a period of 7 days, were analyzed with regard to ash and water content [in %]. Both ash and water content have been partitioned into 6 classes. The table shows the results. Let X be the water content and Y the ash content of a randomly chosen specimen out of the 83. Since the originally measured values are not given, it is assumed that the values which X and Y can take on are the centres of the given classes: 16.5, 17.5, ..., 21.5 for X and 23.5, 24.5, ..., 28.5 for Y.

(1) Determine E(X), Var(X), E(Y), and Var(Y).
(2) Compare the variation of X and Y.

ash \ water  [16,17)  [17,18)  [18,19)  [19,20)  [20,21)  [21,22]   sum
[23,24)          0        0        1        1        2        4       8
[24,25)          0        1        3        4        3        3      14
[25,26)          0        2        8        7        2        1      20
[26,27)          1        4       10        8        1        0      24
[27,28)          0        5        4        4        0        0      13
[28,29)          2        0        1        0        1        0       4
sum              3       12       27       24        9        8      83

Solution
(1) E(X) = (1/83)[3 ⋅ 16.5 + 12 ⋅ 17.5 + 27 ⋅ 18.5 + 24 ⋅ 19.5 + 9 ⋅ 20.5 + 8 ⋅ 21.5] = 19.08,
Var(X) = (1/83)[3 ⋅ (16.5 − 19.08)^2 + 12 ⋅ (17.5 − 19.08)^2 + ... + 8 ⋅ (21.5 − 19.08)^2] = 1.545,
E(Y) = (1/83)[8 ⋅ 23.5 + 14 ⋅ 24.5 + 20 ⋅ 25.5 + 24 ⋅ 26.5 + 13 ⋅ 27.5 + 4 ⋅ 28.5] = 25.886,
Var(Y) = (1/83)[8 ⋅ (23.5 − 25.886)^2 + 14 ⋅ (24.5 − 25.886)^2 + ... + 4 ⋅ (28.5 − 25.886)^2] = 1.594.
(2) V(X) = √Var(X)/E(X) = 0.0651, V(Y) = √Var(Y)/E(Y) = 0.0488. X has a higher variability than Y.

2.5) It costs $50 to find out whether a spare part required for repairing a failed device is faulty or not. Installing a faulty spare part causes damage of $1000. Is it on average more profitable to use a spare part without checking if (1) 1%, (2) 3%, or (3) 10% of all spare parts of that type are faulty?

Solution Let X be the random damage (in $) when not checking.
(1) E(X) = 0.01 × 1000 = 10. (2) E(X) = 0.03 × 1000 = 30. (3) E(X) = 0.10 × 1000 = 100.
Only in case (3), where E(X) = 100 > 50, is checking cost efficient.

2.6) Market analysts predict that a newly developed product in design 1 will bring in a profit of $500 000, whereas in design 2 it will bring in a profit of $200 000 with probability 0.4 and a profit of $800 000 with probability 0.6. Which design should the producer prefer?

Solution With design 2, the mean profit is 200 000 × 0.4 + 800 000 × 0.6 = $560 000 > $500 000. Hence, the producer should opt for design 2.

2.7) Let X be the random number of times one has to throw a die till a 6 occurs for the first time. Determine E(X) and Var(X).


Solution X has the geometric distribution (2.26) with parameter p = 1/6 and state space Z = {1, 2, ...}. Thus, E(X) = 1/p = 6 and Var(X) = (1 − p)/p^2 = 30.

2.8) 2% of the citizens of a country are HIV-positive. Test persons are selected at random from the population and checked for their HIV-status. What is the mean number of persons which have to be checked till for the first time an HIV-positive person is found?

Solution The random number X of persons which have to be checked till an HIV-positive person is found has a geometric distribution with parameter p = 0.02 and state space Z = {1, 2, ...}. Hence, E(X) = 1/p = 50.

2.9) Let X be the difference between the number of heads and the number of tails if a coin is flipped 10 times.
(1) What is the range of X?
(2) Determine the probability distribution of X.

Solution
(1) Z = {−10, −8, ..., −2, 0, 2, ..., 8, 10}.
(2) X = N_H − N_T, where N_H (N_T) is the number of heads (tails) in a series of 10 flips. Hence, N_H + N_T = 10, so that the distribution of X is fully determined by the distribution of N_H: X = 2 N_H − 10 and
P(X = 2k − 10) = (10 choose k)(1/2)^10; k = 0, 1, ..., 10.
In particular,
P(X = 10) = (10 choose 10)(1/2)^10, P(X = 8) = (10 choose 9)(1/2)^10, ..., P(X = 0) = (10 choose 5)(1/2)^10, ..., P(X = −8) = (10 choose 1)(1/2)^10, P(X = −10) = (10 choose 0)(1/2)^10.

2.10) A locksmith stands in front of a locked door. He has 9 keys and knows that only one of them fits, but he has otherwise no a priori knowledge. He tries the keys one after the other. What is the mean value of the random number X of trials till the door opens?

Solution The door will open at the 1st, 2nd, 3rd, ..., 9th trial with respective probabilities
p_1 = 1/9, p_2 = (8/9)(1/8), p_3 = (8/9)(7/8)(1/7), ..., p_9 = (8/9)(7/8)(6/7) ⋅⋅⋅ (2/3)(1/2) ⋅ 1.
Thus, p_i = P(X = i) = 1/9 for all i = 1, 2, ..., 9. Hence, X has a discrete uniform distribution with state space Z = {1, 2, ..., 9} and E(X) = 5.

2.11) A submarine attacks a frigate with 3 torpedoes. The torpedoes hit the frigate independently of each other with probability 0.9. Any successful torpedo hits any of the 4 submerged chambers of the frigate independently of the other successful ones with probability 1/4. The chambers are isolated from each other. In case of one or more hits, a chamber fills up with water. The ship will sink if at least 2 chambers are hit by one or more torpedoes. What is the probability of the event A that the attack sinks the frigate?

Solution Let N be the random number of torpedoes which hit the frigate. Then N has a binomial distribution with parameters p = 0.9 and n = 3:
P(N = k) = (3 choose k)(0.9)^k (0.1)^(3−k); k = 0, 1, 2, 3.


Obviously, P(A | N = 0) = P(A | N = 1) = 0.
If N = 2, the two hitting torpedoes land independently in one of the 4 chambers each, giving 16 equally likely ordered outcomes, of which 12 hit two different chambers. Hence, P(A | N = 2) = 12/16 = 3/4. If N = 3, there are 4^3 = 64 equally likely ordered outcomes, of which only the 4 with all torpedoes in the same chamber fail to hit at least 2 chambers. Hence, P(A | N = 3) = 60/64 = 15/16. By the total probability rule,
P(A) = (3 choose 2)(0.9)^2 (0.1) ⋅ (3/4) + (3 choose 3)(0.9)^3 ⋅ (15/16) = 0.18225 + 0.68344 = 0.8657.
(Note that the unordered hit patterns, e.g. the 10 ways to place 2 indistinguishable hits into 4 chambers, are not equally likely, so they cannot be used for a Laplace count.)

2.12) Three hunters shoot at a flock of three partridges. Every hunter, independently of the others, takes aim at a randomly selected partridge and hits his/her target with probability 1. Thus, a partridge may be hit by up to 3 pellets, whereas a lucky one escapes a hit. Determine the mean of the random number X of hit partridges.

Solution Each hunter independently picks one of the 3 partridges with probability 1/3, so the 3^3 = 27 ordered aiming choices are equally likely. (The unordered hit patterns (i_1, i_2, i_3) with i_1 + i_2 + i_3 = 3 are not equally likely and hence do not form a Laplace experiment.) Of the 27 choices, 3 hit exactly one partridge (all hunters aim at the same bird), 18 hit exactly two, and 6 hit all three. Hence,
P(X = 1) = 3/27, P(X = 2) = 18/27, P(X = 3) = 6/27,
and the mean number of hit partridges is
E(X) = 1 ⋅ (3/27) + 2 ⋅ (18/27) + 3 ⋅ (6/27) = 57/27 = 19/9 ≈ 2.11.
(Alternatively, a given partridge escapes with probability (2/3)^3, so E(X) = 3(1 − (2/3)^3) = 19/9.)
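Because the unordered hit patterns in 2.11 and 2.12 are not equally likely, exhaustive enumeration of the equally likely ordered outcomes is the safest check. The following sketch is an editorial addition confirming P(A | N = 2) = 3/4, P(A | N = 3) = 15/16, and E(X) = 19/9.

```python
from itertools import product
from math import comb

# Exercise 2.11: given n hitting torpedoes, each lands in one of 4 chambers
# uniformly and independently; the frigate sinks if >= 2 distinct chambers are hit.
def p_sink_given_hits(n):
    outcomes = list(product(range(4), repeat=n))
    return sum(len(set(o)) >= 2 for o in outcomes) / len(outcomes)

p_A = sum(comb(3, k) * 0.9**k * 0.1**(3 - k) * p_sink_given_hits(k)
          for k in range(4))                      # ≈ 0.8657

# Exercise 2.12: 3 hunters each aim at one of 3 partridges uniformly;
# average the number of distinct partridges hit over all 27 ordered choices.
mean_hit = sum(len(set(c)) for c in product(range(3), repeat=3)) / 27  # = 19/9
```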

2.13) A lecturer, having otherwise no merits, claims to be equipped with extrasensory powers. His students have some doubt about it and ask him to predict the outcomes of ten flips of a fair coin. The lecturer is five times successful. Do you believe that, based on this test, the claim of the lecturer is justified?

Solution No. Five successes is the most likely outcome when predicting the results purely randomly, i.e., when both head and tail are predicted with probability 1/2.

2.14) Let X have a binomial distribution with parameters p = 0.4 and n = 5.
(1) Determine the probabilities P(X > 6), P(X < 2), P(3 ≤ X < 7), P(X > 3 | X ≤ 2), and P(X ≤ 3 | X ≤ 4).
(2) Draw the histogram of the probability distribution of X.

Solution
(1) p_k = P(X = k) = (5 choose k)(0.4)^k (0.6)^(5−k); k = 0, 1, ..., 5:
p_0 = 0.07776, p_1 = 0.2592, p_2 = 0.3456, p_3 = 0.2304, p_4 = 0.0768, p_5 = 0.01024.
P(X > 6) = 0, P(X < 2) = 0.33696, P(3 ≤ X < 7) = 0.31744, P(X > 3 | X ≤ 2) = 0,
P(X ≤ 3 | X ≤ 4) = P(X ≤ 3)/P(X ≤ 4) = 0.91296/0.98976 = 0.9224.
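The binomial probabilities in 2.14 can be reproduced in a few lines (an editorial check, not part of the original solution):

```python
from math import comb

# Exercise 2.14: pmf of a Bin(5, 0.4) random variable and derived probabilities.
n, p = 5, 0.4
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

P_less_2 = pmf[0] + pmf[1]                # P(X < 2)      = 0.33696
P_3_to_6 = sum(pmf[3:])                   # P(3 <= X < 7) = 0.31744
P_cond   = sum(pmf[:4]) / sum(pmf[:5])    # P(X <= 3 | X <= 4) ≈ 0.9224
```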

(2) [Figure: probability histogram of the binomial distribution with n = 5, p = 0.4; bars p_k at k = 0, 1, ..., 5]

Probability histogram of a binomial distribution

2.15) Let X have a binomial distribution with parameters n = 5 and p. Determine an interval I so that P(X = 2) ≤ P(X = 3) for all p ∈ I.

Solution P(X = 2) = (5 choose 2) p^2 (1 − p)^3 ≤ (5 choose 3) p^3 (1 − p)^2 = P(X = 3). Since (5 choose 2) = (5 choose 3) = 10, this inequality is equivalent to 10(1 − p) ≤ 10p, so that the desired interval is I = [0.5, 1].

2.16) The stop sign at an intersection is on average ignored by 4% of all cars. A car which ignores the stop sign causes an accident with probability 0.01. Assuming independent behavior of the car drivers:
(1) What is the probability p(1) that from 100 cars at least 3 ignore the stop sign?
(2) What is the probability p(2) that at least one of the 100 cars causes an accident due to ignoring the stop sign?

Solution (1) The probability that from 100 cars exactly k ignore the stop sign is
p_k = (100 choose k)(0.04)^k (0.96)^(100−k); k = 0, 1, ..., 100.
The desired probability is
p(1) = 1 − p_0 − p_1 − p_2 = 1 − 0.01687 − 0.07029 − 0.14498 = 0.76786.
(2) The probability that from 100 cars exactly k cause an accident due to ignoring the stop sign is
π_k = (100 choose k)(0.0004)^k (0.9996)^(100−k); k = 0, 1, ..., 100.
The desired probability is p(2) = 1 − π_0 = 0.03922.

2.17) Tessa bought a dozen claimed-to-be fresh-laid farm eggs in a supermarket. There are 2 rotten eggs amongst them. For breakfast she randomly picks two eggs one after the other. What is the probability that her breakfast is spoilt, if already one bad egg will have this effect?

Solution Let B be the event that the first egg taken from the dozen is rotten, and A the event that there is at least one rotten egg amongst the two chosen ones. Then, writing B' for the complement of B,
P(A | B) = 1, P(A | B') = 2/11. Since P(B) = 2/12 and P(B') = 10/12, by the total probability rule,
P(A) = P(A | B) ⋅ P(B) + P(A | B') ⋅ P(B') = 1 ⋅ (2/12) + (2/11) ⋅ (10/12) = 0.31818.
(Otherwise, apply the hypergeometric distribution.)


2.18) A smart baker mixes 20 stale breads from previous days with 100 freshly baked ones and offers this mixture for sale. Tessa randomly chooses 3 breads one after the other from the 120, i.e., she does not feel and smell them. What is the probability p_{≥1} that she has bought at least one stale bread?

Solution p_{≥1} = 1 − (100/120)(99/119)(98/118) = 0.42423.

2.19) Some of the 270 spruces of a small forest stand are infested with rot (a fungus affecting first the core of the stems). Samples are taken from the stems of 30 randomly selected trees.
(1) If 45 trees of the 270 are infested, what is the probability p(1) that there are fewer than 4 infested trees in the sample? Determine this probability both by the binomial approximation and by the Poisson approximation to the hypergeometric distribution.
(2) If the sample contains six infested trees, what is the most likely number of infested trees in the forest stand (see example 2.7)?

Solution (1) Hypergeometric distribution with N = 270, M = 45, and n = 30. The probability p_m that there are m infested trees in the sample is
p_m = (M choose m)(N−M choose n−m) / (N choose n).
Binomial approximation:
p_m ≈ (30 choose m)(45/270)^m (225/270)^(30−m) = (30 choose m)(1/6)^m (5/6)^(30−m).
The desired probability is approximately
p(1) ≈ p_0 + p_1 + p_2 + p_3 = (5/6)^30 + 30 ⋅ (1/6)(5/6)^29 + 435 ⋅ (1/6)^2 (5/6)^28 + 4060 ⋅ (1/6)^3 (5/6)^27
= 0.0042 + 0.0253 + 0.0733 + 0.1368 = 0.2396.
Poisson approximation:
p_m ≈ (λ^m/m!) e^{−λ} with λ = n ⋅ M/N = 5.
Hence, p(1) ≈ p_0 + p_1 + p_2 + p_3 = 0.00674 + 0.0337 + 0.0842 + 0.1404 = 0.2650.
(2) Let M be the unknown number of infested trees in the forest stand. Then the probability that 6 trees in a sample of 30 are infested is
p_6(M) = (M choose 6)(270−M choose 24) / (270 choose 30).
If M maximizes p_6, then p_6(M−1) ≤ p_6(M) and p_6(M+1) ≤ p_6(M) must both hold. The first inequality implies M ≤ 54.2, and the second implies M ≥ 53.2. Hence, M_opt = 54.
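For 2.19, the exact hypergeometric value, both approximations, and the maximum-likelihood estimate M_opt from part (2) can be checked numerically (editorial sketch, not part of the original solution):

```python
from math import comb, exp, factorial

# Exercise 2.19 (1): P(fewer than 4 infested trees in the sample).
N, M, n = 270, 45, 30
lam = n * M / N                                        # = 5

p_exact = sum(comb(M, m) * comb(N - M, n - m) for m in range(4)) / comb(N, n)
p_binom = sum(comb(n, m) * (1/6)**m * (5/6)**(n - m) for m in range(4))
p_poisson = sum(lam**m / factorial(m) * exp(-lam) for m in range(4))

# Exercise 2.19 (2): likelihood of observing 6 infested among 30 sampled,
# maximized over the unknown number M_ of infested trees in the stand.
M_opt = max(range(6, 247), key=lambda M_: comb(M_, 6) * comb(270 - M_, 24))
```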


2.20) Because it happens that one or more airline passengers do not show up for their reserved seats, an airline sells 602 tickets for a flight that holds only 600 passengers. The probability that, for some reason or other, a passenger does not show up is 0.008. What is the probability that every passenger who shows up will have a seat?

Solution The random number X of passengers who show up for the flight has a binomial distribution with parameters n = 602 and p = 0.992. Hence, the probability that 601 or 602 passengers show up is
(602 choose 602)(0.992)^602 + (602 choose 601)(0.992)^601 (0.008) = (0.992)^602 + 602 ⋅ (0.992)^601 (0.008) = 0.0465.
Hence, the desired probability is 1 − 0.0465 = 0.9535.

2.21) Flaws are randomly located along the length of a thin copper wire. The number of flaws follows a Poisson distribution with a mean of 0.15 flaws per cm. What is the probability p_{≥2} of at least 2 flaws in a section of length 10 cm?

Solution The random number X of flaws which occur in a section of length 10 cm has a Poisson distribution with parameter λ = 0.15 ⋅ 10 = 1.5. Hence,
p_{≥2} = P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − e^{−1.5} − 1.5 e^{−1.5} = 0.4422.

2.22) The random number of crackle sounds produced per hour by Karel's old radio has a Poisson distribution with parameter λ = 12. What is the probability that there is no crackle sound during the 4-minute transmission of one of Karel's favorite hits?

Solution The random number X of crackle sounds within a time interval of four minutes has a Poisson distribution with parameter 12 ⋅ (4/60) = 0.8. Hence, the desired probability is P(X = 0) = e^{−0.8} = 0.4493.

2.23) The random number of tickets car driver Odundo receives a year has a Poisson distribution with parameter λ = 2. In the current year, Odundo received his first ticket on the 31st of March. What is the probability p that he will receive another ticket in that year?

Solution The number of tickets Odundo has received in the first quarter of the year has no influence on the random number of tickets X he receives till the end of that year. Hence, X has a Poisson distribution with parameter 2 ⋅ (3/4) = 1.5, so that
p = P(X ≥ 1) = 1 − P(X = 0) = 1 − e^{−1.5} = 0.7769.
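The overbooking probability of 2.20 can be verified directly (an editorial check):

```python
from math import comb

# Exercise 2.20: P(at most 600 of the 602 ticket holders show up),
# where p = 0.992 is the probability that a passenger shows up.
n, p = 602, 0.992
p_overbooked = p**n + comb(n, n - 1) * p**(n - 1) * (1 - p)  # 601 or 602 show up
p_everyone_seated = 1 - p_overbooked                          # ≈ 0.9535
```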

2.24) Let X have a Poisson distribution with parameter λ. For which nonnegative integer n is the probability p_n = P(X = n) maximal?

Solution The optimal n = n_opt is the largest integer n with property
p_n / p_{n−1} = [(λ^n/n!) e^{−λ}] / [(λ^{n−1}/(n−1)!) e^{−λ}] = λ/n ≥ 1.
Hence, n_opt is the largest integer n satisfying n ≤ λ. In other words: n_opt is the largest integer being equal to or less than the mean value of this distribution.


2.25) In 100 kg of a low-grade molten steel tapping there are on average 120 impurities. Castings weighing 1 kg are manufactured from this raw material. What is the probability p_{≥2} that there are at least 2 impurities in a casting if the spatial distribution of the impurities in the raw material is Poisson?

Solution The mean number of impurities in a casting of 1 kg is λ = 1.2. Hence,
p_{≥2} = 1 − e^{−1.2} − 1.2 e^{−1.2} = 0.3374.

2.26) In a piece of fabric of length 100 m there are on average 10 flaws. These flaws are assumed to be Poisson distributed over the length. The 100 m of fabric are cut into pieces of length 4 m. What percentage of the 4 m cuts can be expected to be without flaws?

Solution The probability that a 4 m cut contains no flaw is p_0 = e^{−0.4} = 0.6703, i.e., about 67% of the 4 m cuts can be expected to have no flaws.

2.27) Let X have a binomial distribution with parameters n and p. Compare the following exact probabilities with the corresponding Poisson approximations and give reasons for possible larger deviations:
(1) P(X = 2) for n = 20, p = 0.1, (2) P(X = 2) for n = 20, p = 0.9,
(3) P(X = 4) for n = 10, p = 0.1, (4) P(X = 4) for n = 60, p = 0.1.

Solution
(1) P(X = 2) = (20 choose 2)(0.1)^2 (0.9)^18 = 0.28518, P(X = 2) ≈ (2^2/2!) e^{−2} = 0.27067. The approximate value is satisfactory.
(2) P(X = 2) = (20 choose 2)(0.9)^2 (0.1)^18 ≈ 1.5 × 10^−16, P(X = 2) ≈ (18^2/2!) e^{−18} ≈ 2.5 × 10^−6. The condition np < 10 is not satisfied (np = 18).
(3) P(X = 4) = (10 choose 4)(0.1)^4 (0.9)^6 = 0.01116, P(X = 4) ≈ (1^4/4!) e^{−1} = 0.01533. n is not large enough.
(4) P(X = 4) = (60 choose 4)(0.1)^4 (0.9)^56 = 0.13356, P(X = 4) ≈ (6^4/4!) e^{−6} = 0.13385.

The approximate value is satisfactory.

2.28) A random variable X has range R = {x_1, x_2, ..., x_m} and probability distribution {p_k = P(X = x_k); k = 1, 2, ..., m}, Σ_{k=1}^{m} p_k = 1. A random experiment with outcome X is repeated independently n times. Show: The probability of the event
{x_1 occurs n_1 times, x_2 occurs n_2 times, ..., x_m occurs n_m times}
is given by
n!/(n_1! n_2! ⋅⋅⋅ n_m!) ⋅ p_1^{n_1} p_2^{n_2} ⋅⋅⋅ p_m^{n_m} with Σ_{k=1}^{m} n_k = n.
This probability distribution is called the multinomial distribution. It contains the binomial distribution as the special case m = 2.

Solution The multinomial coefficient
n!/(n_1! n_2! ⋅⋅⋅ n_m!)
is equal to the number of ways to partition a set of n different elements into disjoint subsets comprising n_1, n_2, ..., n_m elements with property Σ_{k=1}^{m} n_k = n. This is easily verified by repeated application of the following representation of the binomial coefficient: (n choose k) = n!/(k!(n − k)!).

2.29) A branch of the PROFIT-Bank has found that on average 68% of its customers visit the branch for routine money matters (type 1-visitors), 14% are there for investment matters (type 2), 9% need a credit (type 3), 8% need foreign exchange service (type 4), and 1% only make a suspicious impression or even carry out a robbery (type 5).
(1) What is the probability p(1) that amongst 10 randomly chosen visitors 5, 3, 1, 1, and 0 are of types 1, 2, 3, 4, and 5, respectively?
(2) What is the probability p(2) that amongst 12 randomly chosen visitors 4, 3, 3, 1, and 1 are of types 1, 2, 3, 4, and 5, respectively?

Solution Application of the multinomial distribution with m = 5, p_1 = 0.68, p_2 = 0.14, p_3 = 0.09, p_4 = 0.08, and p_5 = 0.01.
(1) With n = 10, the desired probability is
p(1) = 10!/(5! 3! 1! 1! 0!) ⋅ 0.68^5 ⋅ 0.14^3 ⋅ 0.09^1 ⋅ 0.08^1 ⋅ 0.01^0 ≈ 0.0145.
(2) With n = 12, the desired probability is
p(2) = 12!/(4! 3! 3! 1! 1!) ⋅ 0.68^4 ⋅ 0.14^3 ⋅ 0.09^3 ⋅ 0.08^1 ⋅ 0.01^1 ≈ 0.00019.
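A note on 2.29: the exponents n_k in the multinomial pmf are essential; the plain product 10!/(5!3!1!1!0!) ⋅ 0.68 ⋅ 0.14 ⋅ 0.09 ⋅ 0.08 ⋅ 0.01 without them evaluates to 0.03455, not to a multinomial probability. The following editorial sketch computes both values of 2.29 from the pmf:

```python
from math import factorial

# Multinomial pmf (exercise 2.28) applied to the bank-visitor data of 2.29.
def multinomial_pmf(counts, probs):
    n = sum(counts)
    coeff = factorial(n)
    for k in counts:
        coeff //= factorial(k)      # exact integer multinomial coefficient
    p = float(coeff)
    for k, q in zip(counts, probs):
        p *= q ** k                 # the exponents n_k are essential here
    return p

probs = [0.68, 0.14, 0.09, 0.08, 0.01]
p1 = multinomial_pmf([5, 3, 1, 1, 0], probs)   # ≈ 0.0145
p2 = multinomial_pmf([4, 3, 3, 1, 1], probs)   # ≈ 0.00019
```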

Section 2.3

2.30) Let F(x) and f(x) be the respective distribution function and probability density of a random variable X. Answer the following questions with yes or no:
(1) F(x) and f(x) can be arbitrary real functions. no
(2) f(x) is a nondecreasing function in (−∞, +∞). no
(3) f(x) cannot have jumps. no
(4) f(x) cannot be negative. yes
(5) F(x) is always a continuous function. no
(6) F(x) can assume values in [−1, 0). no
(7) The area between the abscissa and the graph of F(x) is always equal to 1. no
(8) f(x) must always be smaller than 1. no
(9) The area between the abscissa and the graph of f(x) is always equal to 1. yes
(10) The properties of F(x) and f(x) are all the same to me.


2.31) Check whether by suitable choice of the parameter a the following functions are densities of random variables. If the answer is yes, determine the respective distribution functions, mean values, medians, and modes.
(1) f(x) = a|x|, −3 ≤ x ≤ +3.
(2) f(x) = a x e^{−x^2}, x ≥ 0.
(3) f(x) = a sin x, 0 ≤ x ≤ π.
(4) f(x) = a cos x, 0 ≤ x ≤ π.

Solution
(1) ∫_{−3}^{+3} a|x| dx = 4.5a + 4.5a = 9a. Hence, f(x) = |x|/9, −3 ≤ x ≤ +3, is a density, and X has distribution function
F(x) = 0.5 − x^2/18 for −3 ≤ x < 0, F(x) = 0.5 + x^2/18 for 0 ≤ x ≤ +3.
Since the density is symmetric with symmetry center x = 0, E(X) = x_{0.5} = 0. The modes are x_{m1} = −3 and x_{m2} = +3.
(2) ∫_0^∞ a x e^{−x^2} dx = a/2, so that f(x) is a density for a = 2. Hence,
F(x) = 1 − e^{−x^2}, x ≥ 0.
Thus, X has a Rayleigh distribution with E(X) = √(π/4) ≈ 0.8862, x_{0.5} = √(ln 2) ≈ 0.8326, and x_m = 1/√2 ≈ 0.71.
(3) ∫_0^π a sin x dx = a[−cos x]_0^π = 2a, so that f(x) is a density for a = 0.5. The mean value is
E(X) = 0.5 ∫_0^π x sin x dx = 0.5 [sin x − x cos x]_0^π = 0.5π.
Since the density has symmetry center π/2, x_{0.5} = π/2 and x_m = π/2.
(4) This f(x) is not a density since cos x is negative in (π/2, π).

2.32) (1) Show that f(x) = 1/(2√x), 0 < x ≤ 1, is the probability density of a random variable X.

(2) Determine the corresponding mean value and the ε-percentile x_ε.

Solution
(1) ∫_0^1 f(y) dy = (1/2) ∫_0^1 y^{−1/2} dy = (1/2)[2 y^{1/2}]_0^1 = 1.
Hence, f(x) is a probability density with distribution function
F(x) = ∫_0^x f(y) dy = √x, 0 ≤ x ≤ 1.
(2) E(X) = (1/2) ∫_0^1 x^{1/2} dx = (1/2)[(2/3) x^{3/2}]_0^1 = 1/3, and from F(x_ε) = √(x_ε) = ε it follows that x_ε = ε^2.
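A numerical sanity check of 2.32 (an editorial addition); the midpoint rule keeps clear of the integrable singularity of f at x = 0, which is why the tolerance is loose:

```python
# Exercise 2.32: check numerically that f(x) = 1/(2*sqrt(x)) integrates to 1
# over (0, 1] and that E(X) = 1/3.
N = 100_000
h = 1.0 / N
total = mean = 0.0
for i in range(N):
    x = (i + 0.5) * h              # midpoint of the i-th subinterval
    fx = 0.5 * x ** -0.5
    total += fx * h                # -> integral of f, should be close to 1
    mean += x * fx * h             # -> E(X), should be close to 1/3
```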

2.33) Let X be a continuous random variable. Confirm or deny the following statements:
(1) The probability P(X = E(X)) is always positive. no
(2) There is always Var(X) ≤ 1. no
(3) Var(X) can be negative if X assumes negative values with positive probability. no
(4) E(X) is never negative. no


2.34) The current which flows through a thin copper wire is uniformly distributed in the interval [0, 10 mA]. For safety reasons, the current should not fall below the crucial level of 4 mA. What is the probability p_{≤4} that at a randomly chosen time point the current is below 4 mA?

Solution p_{≤4} = 4/10 = 0.4.

2.35) According to the timetable, a lecture begins at 8:15 a.m. The arrival time of Professor Wisdom in the venue is uniformly distributed between 8:13 and 8:20, whereas the arrival time of student Sluggish is uniformly distributed over the time interval from 8:05 to 8:30. What is the probability that Sluggish arrives after Wisdom in the venue?

Solution [Figure: the rectangle R = {8:05 ≤ x ≤ 8:30, 8:13 ≤ y ≤ 8:20} in the (x, y)-plane, with the region 'late' (x > y) shaded]

Let X be the arrival time of Sluggish and Y the arrival time of Wisdom. Then the random vector (X, Y) has a two-dimensional uniform distribution in the rectangle
R = {8:05 ≤ x ≤ 8:30, 8:13 ≤ y ≤ 8:20},
which covers an area of μ(R) = 25 ⋅ 7 = 175 [min^2]. The subarea of R given by R_late = {(x, y) ∈ R with x > y} consists of a 10 × 7 rectangle and a right triangle with legs of length 7 (see figure):
μ(R_late) = 10 ⋅ 7 + (1/2) ⋅ 7^2 = 94.5 [min^2].
Thus, the desired probability is
p_late = μ(R_late)/μ(R) = 94.5/175 = 0.54.
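The geometric probability of 2.35 can be confirmed by simulation (editorial sketch; the exact value is 94.5/175 = 0.54):

```python
import random

# Exercise 2.35: Monte Carlo estimate of P(Sluggish arrives after Wisdom).
# Times are minutes after 8:00; X ~ U[5, 30] (Sluggish), Y ~ U[13, 20] (Wisdom).
random.seed(1)
N = 200_000
late = sum(random.uniform(5, 30) > random.uniform(13, 20) for _ in range(N))
estimate = late / N          # ≈ 0.54
```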

2.36) A road traffic light is switched on every day at 5:00 a.m. It always begins with red and holds this colour for two minutes. Then it changes to green and holds this colour for 2.5 minutes before it switches to yellow and holds this colour for 30 seconds. This cycle continues till midnight.

[Figure: the red, yellow, and green periods of the 5-minute cycle over the time intervals 9:00 to 9:10 and 8:58 to 9:08]

(1) A car driver arrives at this traffic light at a time point which is uniformly distributed between 9:00 and 9:10 a.m. What is the probability that the driver catches a green light period?
(2) Determine the same probability on condition that the driver's arrival time point has a uniform distribution over the interval [8:58, 9:08].

Solution The figure shows the red, yellow, and green periods in the time intervals [9:00, 9:10] and [8:58, 9:08], respectively. In either case (1) and (2), the traffic light holds colour green for a total of 5 minutes during the driver's arrival interval of length 10 minutes, so the desired probability is 0.5 in both cases.

2.37) A continuous random variable X has the probability density
f(x) = 1/4 for 0 ≤ x ≤ 2, f(x) = 1/2 for 2 < x ≤ 3.

Determine and compare the measures of variability (1) √Var(X) and (2) E(|X − E(X)|).

Solution
E(X) = ∫_0^2 x (1/4) dx + ∫_2^3 x (1/2) dx = 1/2 + 5/4 = 7/4 = 1.75.
(1) Var(X) = ∫_0^2 (x − 1.75)^2 (1/4) dx + ∫_2^3 (x − 1.75)^2 (1/2) dx
= (1/12)[(x − 1.75)^3]_0^2 + (1/6)[(x − 1.75)^3]_2^3 = 0.44792 + 0.32292 = 0.77084,
so that √Var(X) = 0.87797.
(2) E(|X − E(X)|) = E(|X − 1.75|)
= ∫_0^{1.75} (1.75 − x)(1/4) dx + ∫_{1.75}^{2} (x − 1.75)(1/4) dx + ∫_2^3 (x − 1.75)(1/2) dx
= (1/4)(1.75^2/2) + (1/4)(0.25^2/2) + (1/2)((1.25^2 − 0.25^2)/2)
= 0.38281 + 0.00781 + 0.37500 = 0.76562.
Thus, E(|X − E(X)|) = 0.76562 < √Var(X) = 0.87797.

2.38) A continuous random variable X has the probability density f(x) = 2x, 0 ≤ x ≤ 1.

(1) Determine the distribution function F(x) and by means of it E(X).
(2) Determine and compare the measures of variability a) √Var(X) and b) E(|X − E(X)|).

Solution
(1) F(x) = ∫_0^x 2y dy = [y^2]_0^x = x^2, 0 ≤ x ≤ 1.
E(X) = ∫_0^1 (1 − F(x)) dx = ∫_0^1 (1 − x^2) dx = [x − x^3/3]_0^1 = 2/3.
(2) a) Var(X) = E(X − 2/3)^2 = ∫_0^1 (x − 2/3)^2 2x dx = 2 ∫_0^1 (x^3 − (4/3)x^2 + (4/9)x) dx
= 2 [x^4/4 − (4/9)x^3 + (2/9)x^2]_0^1 = 1/18.
Thus, √Var(X) = 0.23570.
b) E(|X − E(X)|) = E(|X − 2/3|) = ∫_0^{2/3} (2/3 − x) 2x dx + ∫_{2/3}^1 (x − 2/3) 2x dx
= 2 [(1/3)x^2 − (1/3)x^3]_0^{2/3} + 2 [(1/3)x^3 − (1/3)x^2]_{2/3}^1 = 8/81 + 8/81 = 16/81.
Hence, E(|X − E(X)|) = 0.19753 < √Var(X) = 0.23570.

2.39) The lifetime X of a bulb has an exponential distribution with mean value E(X) = 8000 [time unit: hours]. Calculate the probabilities P(X ≤ 4000), P(X > 12000), P(7000 ≤ X < 9000), and P(X < 4000).

Solution X has an exponential distribution with parameter λ = 1/8000 = 0.000125 and distribution function
F(x) = 1 − e^{−x/8000}, x ≥ 0.

P(X ≤ 4000) = 1 − e^{−0.5} = 0.3935, P(X > 12000) = e^{−1.5} = 0.2231,
P(7000 ≤ X < 9000) = e^{−7/8} − e^{−9/8} = 0.0922, P(X < 4000) = P(X ≤ 4000) = 0.3935 (X is continuous).

2.40) The lifetimes of 5 identical bulbs are exponentially distributed with parameter λ = 1.25 ⋅ 10^−4 [h^−1], i.e., E(X) = 1/λ = 8000 [h]. All of them are switched on at time t = 0 and fail independently of each other.
(1) What is the probability that at time t = 8000 [h] a) all 5 bulbs and b) at least 3 bulbs have failed?
(2) What is the probability that at least one bulb survives 12 000 hours?

Solution The distribution function of the lifetimes of the bulbs is the same as in the previous example. Hence, the probability that a bulb fails in [0, 8000] is p = 1 − 1/e ≈ 0.63212.
(1) a) All 5 bulbs fail in [0, 8000] with probability (1 − 1/e)^5 = 0.10093.
b) The random number N of bulbs which fail in [0, 8000] has a binomial distribution with parameters n = 5 and p = 1 − 1/e. Hence, the desired probability is
(5 choose 3) p^3 (1 − p)^2 + (5 choose 4) p^4 (1 − p) + p^5 = 0.7364.
(2) All bulbs fail in [0, 12 000] with probability
(1 − e^{−12000/8000})^5 = (1 − e^{−1.5})^5 = 0.28297.
Hence, at least one bulb survives this interval with probability 1 − (1 − e^{−1.5})^5 = 0.71703.
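The three answers of 2.40 can be reproduced directly (an editorial check, not part of the original solution):

```python
from math import exp, comb

# Exercise 2.40: five independent exponential(lam) bulb lifetimes,
# lam = 1.25e-4 per hour.
lam = 1.25e-4
p = 1 - exp(-lam * 8000)          # P(bulb fails by t = 8000) = 1 - 1/e

p_all_fail = p ** 5                                                   # (1a)
p_at_least_3 = sum(comb(5, k) * p**k * (1 - p)**(5 - k)
                   for k in range(3, 6))                              # (1b)
p_one_survives_12000 = 1 - (1 - exp(-lam * 12000)) ** 5               # (2)
```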


2.41) The random length X of employment of staff in a certain company has an exponential distribution with the property that 92% of the staff have left the company within 16 months. What is the mean time an employee stays with this company, and the corresponding standard deviation?

Solution The 0.92-percentile is x_{0.92} = 16 months. Hence, the parameter λ of this distribution satisfies the equation 1 − e^{−16λ} = 0.92. It follows that λ = 0.15786 and
E(X) = √Var(X) = 1/λ = 6.335 [months].

2.42) The times between the arrivals of taxis at a rank are independent and have an exponential distribution with mean value 10 min. An arriving customer does not find an available taxi; the previous one left 3 minutes earlier, and no other customers are waiting. What is the probability p_w that the customer has to wait at least 5 min for the next free taxi?

Solution In view of the memoryless property of the exponential distribution, p_w = e^{−5/10} = 0.60653.

2.43) A small branch of the Profit Bank has the two tellers 1 and 2. The service times at these tellers are independent and exponentially distributed with parameter λ = 0.4 [min^−1]. When Pumeza arrives, each teller is occupied by a customer, so she has to wait. Teller 1 is the first to become free, and the service of Pumeza starts immediately. What is the probability p that the service of Pumeza is finished sooner than the service of the customer at teller 2?

Solution In view of the memoryless property of the exponential distribution, the residual service time of the customer at teller 2 has the same probability distribution as Pumeza's service time, namely an exponential distribution with parameter λ = 0.4 [min^−1]. Hence, the desired probability is 0.5. Analytically, this probability is given by the integral
p = ∫_0^∞ (1 − e^{−λx}) λ e^{−λx} dx,
which is 0.5 for all positive λ.

2.44) Four weeks later Pumeza visits the same branch as in the previous exercise. Now the service times at tellers 1 and 2 are again independent, but exponentially distributed with respective parameters λ_1 = 0.4 [min^−1] and λ_2 = 0.2 [min^−1].
(1) When Pumeza enters the branch, both tellers are occupied and no customer is waiting. What is the mean time Pumeza spends in the branch till the end of her service?
(2) When Pumeza enters the branch, both tellers are occupied, and another customer is waiting for service. What is the mean time Pumeza spends in the branch till the end of her service? (Pumeza does not get preferential service.)

Solution (1) Let T be the total time Pumeza spends in the branch. Then T can be represented as T = W + S, where W is the time Pumeza waits for service, and S is her actual service time. If X_1 and X_2 denote the respective random residual service times at tellers 1 and 2, then W = min(X_1, X_2) with survival function
P(W > x) = e^{−(λ_1+λ_2)x}, x ≥ 0,
so that
E(W) = ∫_0^∞ e^{−(λ_1+λ_2)x} dx = 1/(λ_1 + λ_2).

SOLUTIONS MANUAL

Her mean service time has the structure
E(S) = E(S | X1 ≤ X2) P(X1 ≤ X2) + E(S | X2 ≤ X1) P(X2 ≤ X1).
If teller 1 becomes free first, Pumeza is served there with mean service time 1/λ1; analogously for teller 2. Since
P(X1 ≤ X2) = ∫_0^∞ (1 - e^{-λ1 x}) λ2 e^{-λ2 x} dx = λ1/(λ1 + λ2),  P(X2 ≤ X1) = λ2/(λ1 + λ2),
Pumeza's mean service time is
E(S) = (1/λ1) · λ1/(λ1 + λ2) + (1/λ2) · λ2/(λ1 + λ2) = 2/(λ1 + λ2).
Hence,
E(T) = E(W) + E(S) = 3/(λ1 + λ2) = 3/(0.4 + 0.2) = 5 [min].
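As a cross-check (not part of the original solution), the result E(T) = 3/(λ1 + λ2) = 5 min can be reproduced by simulation. The rates are the exercise's λ1 = 0.4 and λ2 = 0.2; the seed and sample size are arbitrary choices:

```python
import random

def mean_sojourn(lam1: float, lam2: float, n: int = 200_000) -> float:
    """Estimate E(T) for exercise 2.44 (1) by Monte Carlo simulation."""
    random.seed(42)
    total = 0.0
    for _ in range(n):
        x1 = random.expovariate(lam1)   # residual service time at teller 1
        x2 = random.expovariate(lam2)   # residual service time at teller 2
        wait = min(x1, x2)              # Pumeza waits for the first free teller
        # she is served by whichever teller became free first
        service = random.expovariate(lam1 if x1 <= x2 else lam2)
        total += wait + service
    return total / n

print(mean_sojourn(0.4, 0.2))   # close to 3/(0.4 + 0.2) = 5
```

Increasing n tightens the estimate around 5.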

(2) The customer waiting ahead of Pumeza starts service at the first service completion after her arrival; Pumeza starts at the second one. By the memoryless property, with both tellers busy the time to the next service completion is exponential with parameter λ1 + λ2, so her mean waiting time is E(W) = 2/(λ1 + λ2). As under (1), her mean service time, when not knowing which teller becomes available first, is 2/(λ1 + λ2). Hence her mean total sojourn time in the branch is
E(T) = 4/(λ1 + λ2) = 4/0.6 ≈ 6.67 [min].

2.45) An insurance company offers policies for fire insurance. Achmed holds a policy according to which he gets a full refund for that part of the claim which exceeds $3000. He gets nothing for a claim size less than or equal to $3000. The company knows that the average claim size is $5642.
(1) What is the mean refund Achmed gets from the company for a claim if the claim size is exponentially distributed?
(2) What is the mean refund Achmed gets from the company for a claim if the claim size is Rayleigh distributed?
Solution (1) The random claim size C has distribution function
F(x) = P(C ≤ x) = 1 - e^{-λx} = 1 - e^{-x/5642}, x ≥ 0.
Let R be the refund Achmed gets from the insurance company when submitting a claim. Then
E(R) = 0 · P(C ≤ 3000) + E(C - 3000 | C > 3000) · P(C > 3000) = ∫_{3000}^∞ (x - 3000) λe^{-λx} dx
= [-e^{-λx}(x - 3000 + 1/λ)]_{3000}^∞ = (1/λ) e^{-3000λ} = 5642 · e^{-3000/5642} ≈ $3315.
(2) In this case, for a positive parameter θ, the claim size C has distribution function and density
F(x) = 1 - e^{-(x/θ)²},  f(x) = (2x/θ²) e^{-(x/θ)²}, x ≥ 0,
where the parameter θ is determined by E(C) = θ √(π/4), i.e., θ = 5642/√(π/4) = 6366.3.
The corresponding mean refund is
E(R) = ∫_{3000}^∞ (x - 3000) (2x/6366.3²) e^{-(x/6366.3)²} dx ≈ $2850.
Thus, the claim size distribution has considerable influence on the mean refund, even under invariant mean claim size.
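Both mean refunds can be verified with the standard library alone. This sketch uses the tail formula E[(C - a)^+] = ∫_a^∞ (1 - F(x)) dx, which for the Rayleigh case reduces to θ(√π/2)·erfc(a/θ):

```python
import math

mean_claim = 5642.0
deductible = 3000.0

# (1) Exponential claim size: E(R) = (1/lambda) * exp(-lambda * a)
lam = 1.0 / mean_claim
refund_exp = math.exp(-lam * deductible) / lam

# (2) Rayleigh claim size: theta chosen so that E(C) = theta * sqrt(pi)/2 = 5642
theta = mean_claim / math.sqrt(math.pi / 4.0)
# E[(C - a)^+] = integral_a^inf exp(-(x/theta)^2) dx = theta*(sqrt(pi)/2)*erfc(a/theta)
refund_ray = theta * (math.sqrt(math.pi) / 2.0) * math.erfc(deductible / theta)

print(round(refund_exp), round(refund_ray))   # approximately 3315 and 2850
```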

2 ONE-DIMENSIONAL RANDOM VARIABLES


2.46) Pedro runs a fruit shop. On Mondays he opens his shop with a fresh supply of strawberries of s pounds, which is supposed to satisfy the demand for three days. He knows that for this time span the demand X is exponentially distributed with a mean value of 200 pounds. Pedro pays $2 for a pound and sells it for $5. So he will lose $2 for each pound he cannot sell, and he will make a profit of $3 out of each pound he sells. What amount of strawberries should Pedro stock for a period of three days to maximize his mean profit?
Solution There are two possibilities: the random demand X stays below the stock s, or it exceeds s. This leads to the mean net profit of Pedro:
G(s) = ∫_0^s [3x - 2(s - x)] (1/200) e^{-x/200} dx + ∫_s^∞ [3s - 3(x - s)] (1/200) e^{-x/200} dx.

In the second integral, 3s is the profit Pedro makes by selling his entire stock, whereas 3(x - s) is the loss Pedro suffers for not being in the position to meet the full demand. Evaluating the integrals gives
G(s) = 1000 - 2s - 1600 e^{-s/200},
so that G'(s) = 8 e^{-s/200} - 2 = 0 yields the optimum stock s* = 200 ln 4 ≈ 277 pounds, with corresponding maximum mean net profit G(s*) ≈ $45.48.

2.47) The probability density function of the random annual energy consumption X of an enterprise [in 10⁸ kWh] is
f(x) = 30(x - 2)²[1 - 2(x - 2) + (x - 2)²], 2 ≤ x ≤ 3.
(1) Determine the distribution function of X and, by means of this function, the probability that the annual energy consumption exceeds 2.8.
(2) What is the mean annual energy consumption?
Solution (1)
F(x) = 30 ∫_2^x (y - 2)²[1 - 2(y - 2) + (y - 2)²] dy = (x - 2)³[10 - 15(x - 2) + 6(x - 2)²], 2 ≤ x ≤ 3.
Hence,
F(x) = 0 for x < 2,  F(x) = (x - 2)³[10 - 15(x - 2) + 6(x - 2)²] for 2 ≤ x ≤ 3,  F(x) = 1 for x > 3,
and P(X > 2.8) = 1 - F(2.8) ≈ 0.0579.
(2) E(X) = 30 ∫_2^3 x (x - 2)²[1 - 2(x - 2) + (x - 2)²] dx = 2.5.

2.48) The random variable X is normally distributed with mean μ = 5 and standard deviation σ = 4: X = N(μ, σ²) = N(5, 16).

Determine the respective values of x which satisfy
P(X ≤ x) = 0.5, P(X > x) = 0.95, P(x < X < 9) = 0.2, P(3 < X < x) = 0.5, P(-x < X < +x) = 0.99.
Solution Since
P(a ≤ X ≤ b) = Φ((b - μ)/σ) - Φ((a - μ)/σ),
the equations for x are, in this order, equivalent to
Φ((x - 5)/4) = 0.5,  Φ((x - 5)/4) = 0.05,  Φ(1) - Φ((x - 5)/4) = 0.2,
Φ((x - 5)/4) - Φ(-1/2) = 0.5,  Φ((x - 5)/4) - Φ(-(x + 5)/4) = 0.99.
From the table of the standard normal distribution, the x-values satisfying these equations are, in this order,
x = 5, -1.58, 6.45, 8.49, 14.3.

2.49) The response time of an average male car driver is normally distributed with mean value 0.5 and standard deviation 0.06 (in seconds).
(1) What is the probability that his response time is greater than 0.6 seconds?
(2) What is the probability that his response time is between 0.50 and 0.55 seconds?
Solution
(1) P(X > 0.6) = P(N(0, 1) > (0.6 - 0.5)/0.06) = 1 - Φ(5/3) ≈ 0.0475.
(2) P(0.5 ≤ X ≤ 0.55) = Φ((0.55 - 0.5)/0.06) - Φ((0.5 - 0.5)/0.06) = Φ(5/6) - 1/2 ≈ 0.2975.

2.50) The tensile strength of a certain brand of cardboard has a normal distribution with mean 24 psi and variance 9 psi². What is the probability that the tensile strength X of a randomly chosen specimen does not fall below the critical level of 20 psi?
Solution
Since P(X < 20) = Φ((20 - 24)/3) = Φ(-4/3) ≈ 0.0912, the desired probability is P(X ≥ 20) ≈ 0.9088.

2.51) The total monthly sick leave time of employees of a small company has a normal distribution with mean 100 hours and standard deviation 20 hours.
(1) What is the probability that the total monthly sick leave time will be between 50 and 80 hours?
(2) How much time has to be budgeted for sick leave to make sure that the budgeted amount is exceeded with a probability of less than 0.1?
Solution (1) The desired probability is
P(50 ≤ X ≤ 80) = Φ((80 - 100)/20) - Φ((50 - 100)/20) = Φ(-1) - Φ(-2.5) ≈ 0.1524.
(2) The 0.9-percentile x_{0.9} of the distribution, defined by P(X ≤ x_{0.9}) = 0.9 or, equivalently,
Φ((x_{0.9} - 100)/20) = 0.9,
has to be determined. Since the 0.9-percentile of the standard normal distribution is 1.28, i.e. Φ(1.28) = 0.9, the 0.9-percentile of X satisfies (x_{0.9} - 100)/20 = 1.28. Hence, x_{0.9} = 125.6.
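A short check of both parts of exercise 2.51, again assuming statistics.NormalDist is available:

```python
from statistics import NormalDist

sick_leave = NormalDist(mu=100.0, sigma=20.0)

p = sick_leave.cdf(80.0) - sick_leave.cdf(50.0)   # P(50 <= X <= 80)
budget = sick_leave.inv_cdf(0.9)                  # 0.9-percentile of X
print(round(p, 4), round(budget, 1))              # 0.1524 125.6
```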


2.52) The random variable X has a Weibull distribution with mean value 12 and variance 9.
(1) Calculate the parameters β and θ of this distribution.
(2) Determine the conditional probabilities P(X > 10 | X > 8) and P(X ≤ 6 | X > 8).
Solution (1) Mean value and variance of X are
E(X) = θ Γ(1 + 1/β),  Var(X) = θ²{Γ(1 + 2/β) - [Γ(1 + 1/β)]²}.
Using tables of the gamma function or computer support gives the solutions of the equations E(X) = 12 and Var(X) = 9: β = 4.542, θ = 13.1425.
(2)
P(X > 10 | X > 8) = P(X > 10)/P(X > 8) = e^{-(10/θ)^β}/e^{-(8/θ)^β} = e^{-(10/13.1425)^{4.542}}/e^{-(8/13.1425)^{4.542}} = e^{-0.2890}/e^{-0.1049} = e^{-0.1841} ≈ 0.8318.
P(X ≤ 6 | X > 8) = 0, since the events {X ≤ 6} and {X > 8} are disjoint.
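The Weibull parameters and the conditional probability of exercise 2.52 can be checked with math.gamma; the parameter values below are those found in part (1):

```python
import math

beta, theta = 4.542, 13.1425

# check that these parameters reproduce E(X) = 12 and Var(X) = 9
mean = theta * math.gamma(1.0 + 1.0 / beta)
var = theta**2 * (math.gamma(1.0 + 2.0 / beta) - math.gamma(1.0 + 1.0 / beta) ** 2)

def survival(x: float) -> float:
    """Survival function of the Weibull(beta, theta) distribution."""
    return math.exp(-((x / theta) ** beta))

p_cond = survival(10.0) / survival(8.0)   # P(X > 10 | X > 8)
print(round(mean, 3), round(var, 3), round(p_cond, 4))   # ~12, ~9, ~0.8318
```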

2.53) The random measurement error X of a meter has a normal distribution with mean 0 and variance σ², i.e., X = N(0, σ²). It is known that the percentage of measurements which deviate from the 'true' value by more than 0.4 is 80%. Use this piece of information to determine σ.
Solution It is given that
P(|X| > 0.4) = 1 - P(|X| ≤ 0.4) = 1 - P(-0.4 ≤ X ≤ +0.4) = 0.8,
so that P(-0.4 ≤ X ≤ +0.4) = 0.2 or, in terms of the standard normal distribution,
Φ(+0.4/σ) - Φ(-0.4/σ) = 2Φ(+0.4/σ) - 1 = 0.2, i.e. Φ(+0.4/σ) = 0.6.
The 0.6-percentile of the standard normal distribution is x_{0.6} = 0.253. Hence 0.4/σ = 0.253, so that σ = 1.58.

2.54) If sand from gravel pit 1 is used, then molten glass for producing armored glass has a random impurity content X which is N(60, 16)-distributed. But if sand from gravel pit 2 is used, then this content is N(62, 9)-distributed (μ and σ in 0.01%). The admissible degree of impurity should not exceed 0.64%. Sand from which gravel pit should be used?
Solution
Pit 1: P(N(60, 16) ≥ 64) = 1 - Φ((64 - 60)/4) = 1 - Φ(1) = 0.1587.
Pit 2: P(N(62, 9) ≥ 64) = 1 - Φ((64 - 62)/3) = 1 - Φ(2/3) = 0.2525.
Hence, sand from gravel pit 1 should be preferred.

2.55) Let X have a geometric distribution with P(X = i) = (1 - p) p^i; i = 0, 1, ...; 0 < p < 1. By mixing this distribution with regard to a suitable structure distribution density f(p), show that
(i) Σ_{i=0}^∞ 1/((i + 1)(i + 2)) = 1.
Solution Let the parameter p be the value of a random variable which has a uniform distribution over [0, 1]:
f(p) = 1 if 0 ≤ p ≤ 1, and f(p) = 0 otherwise.

Then the mixture of geometric distributions with a structure distribution given by f (p) yields the distribution of the corresponding mixed random variable Y:


P(Y = i) = ∫_0^1 (1 - p) p^i dp = 1/((i + 1)(i + 2)); i = 0, 1, ... .

For {P(Y = i); i = 0, 1, ...} to be a probability distribution, relation (i) must be true.

2.56) A random variable X has distribution function (Fréchet distribution)
F_α(x) = e^{-α/x}; α > 0, x > 0.
What distribution type arises when mixing this distribution with regard to the exponential structure distribution density f(α) = λ e^{-λα}; α > 0, λ > 0?
Solution The mixture of the distribution functions F_α(x) generates a random variable Y with distribution function (Lomax distribution, page 93)
G(x) = P(Y ≤ x) = ∫_0^∞ e^{-α/x} λ e^{-λα} dα = λx/(1 + λx); λ > 0, x ≥ 0.

2.57) The random variable X has distribution function (special Lomax distribution)
F(x) = x/(x + 1); x ≥ 0.
Check whether there is a subinterval of [0, ∞) on which F(x) is DFR or IFR.
Solution The density of X is f(x) = 1/(x + 1)², x ≥ 0, so that the failure rate is
λ(x) = f(x)/(1 - F(x)) = 1/(x + 1).
This failure rate is decreasing with increasing x on [0, ∞), so that F(x) is DFR everywhere. (Note that λ(x) = 1 - F(x).)

2.58) Check the aging behavior of systems whose lifetime distributions have
(1) a Fréchet distribution with distribution function F(x) = e^{-(1/x)²} (sketch its failure rate), and
(2) a power distribution with distribution function F(x) = 1 - (1/x)², x ≥ 1.
Solution (1) Density f(x) and failure rate λ(x) of this distribution are
f(x) = 2x^{-3} e^{-(1/x)²}, x > 0, and λ(x) = 2x^{-3}/(e^{(1/x)²} - 1), x > 0.

[Figure: the failure rate λ(x), increasing to its maximum of about 1.17 near x ≈ 1.07 and decreasing thereafter; x-axis from 0 to 4]

This failure rate has an absolute maximum λ_m at x_m = 1.0695, i.e., λ_m = λ(x_m) = λ(1.0695) ≈ 1.17. Hence, F(x) is IFR in [0, 1.0695] and DFR in [1.0695, ∞); see the Figure.
(2) Density f(x) and failure rate λ(x) of this distribution are
f(x) = 2(1/x)³, x ≥ 1, and λ(x) = f(x)/(1 - F(x)) = 2/x, x ≥ 1.
F(x) is DFR on [1, ∞).
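The location of the failure-rate maximum in part (1) can be confirmed by a simple grid search; the grid bounds are an assumption chosen to bracket the maximum:

```python
import math

def failure_rate(x: float) -> float:
    """Failure rate of the Frechet distribution F(x) = exp(-(1/x)^2)."""
    return 2.0 * x**-3 / (math.exp(x**-2) - 1.0)

# locate the maximum of the failure rate by a fine grid search on [0.5, 3.5]
xs = [0.5 + k * 0.0001 for k in range(30000)]
x_max = max(xs, key=failure_rate)
print(round(x_max, 3), round(failure_rate(x_max), 3))   # maximum near x = 1.07
```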


2.59) Let F(x) be the distribution function of a nonnegative random variable X with finite mean value μ.
(1) Show that the function F_s(x) defined by
F_s(x) = (1/μ) ∫_0^x (1 - F(t)) dt
is the distribution function of a nonnegative random variable X_s.
(2) Prove: If X is exponentially distributed with parameter λ = 1/μ, then so is X_s, and vice versa.
(3) Determine the failure rate λ_s(x) of X_s.

Solution (1) F_s(x) is nondecreasing in x. Moreover, F_s(0) = 0 and, in view of formula (2.52),
F_s(∞) = (1/μ) ∫_0^∞ (1 - F(t)) dt = 1.
(2) Let F(x) = 1 - e^{-λx}, x ≥ 0, with λ = 1/μ. Then F_s(x) becomes
F_s(x) = λ ∫_0^x e^{-λt} dt = λ[-(1/λ) e^{-λt}]_0^x = 1 - e^{-λx}, x ≥ 0.
Now let F_s(x) = 1 - e^{-λx}, x ≥ 0. Then
1 - e^{-λx} = λ ∫_0^x (1 - F(t)) dt.
Differentiation with regard to x on both sides of this equation yields λ e^{-λx} = λ (1 - F(x)), so that F(x) = 1 - e^{-λx}, x ≥ 0.
(3) The density belonging to F_s(x) is f_s(x) = F_s'(x) = (1/μ)(1 - F(x)), and the corresponding survival probability is (again by formula (2.52))
F̄_s(x) = 1 - F_s(x) = (1/μ) ∫_x^∞ (1 - F(t)) dt.
Hence,
λ_s(x) = f_s(x)/F̄_s(x) = (1 - F(x)) / ∫_x^∞ (1 - F(t)) dt.

2.60) Let X be a random variable with range {1, 2, ...} and probability distribution
P(X = i) = (1 - 1/n²) · (1/n²)^{i-1}; i = 1, 2, ... .
Determine the z-transform of X and by means of it E(X), E(X²), and Var(X).
Solution This is a geometric distribution with p = 1 - 1/n² (see formula (2.26)) with generating function (see page 97)
M(z) = Σ_{i=1}^∞ (1 - 1/n²) (1/n²)^{i-1} z^i = (n² - 1) z/(n² - z).
Hence,
M'(z) = n²(n² - 1)/(n² - z)²,  M''(z) = 2n²(n² - 1)/(n² - z)³,
E(X) = M'(1) = n²/(n² - 1),  E(X²) = M''(1) + M'(1) = n²(n² + 1)/(n² - 1)²,
Var(X) = E(X²) - [E(X)]² = n²/(n² - 1)².


2.61) Determine the Laplace transform f̂(s) of the density of the Laplace distribution with parameters λ and μ (page 66):
f(x) = (λ/2) e^{-λ|x-μ|}, -∞ < x < +∞.
By means of f̂(s), determine E(X), E(X²), and Var(X).
Solution The Laplace transform of f(x) is
f̂(s) = ∫_{-∞}^{+∞} e^{-sx} (λ/2) e^{-λ|x-μ|} dx.
By partitioning the integration area at x = μ, f̂(s) becomes
f̂(s) = (λ/2) ∫_{-∞}^{μ} e^{-sx} e^{-λ(μ-x)} dx + (λ/2) ∫_{μ}^{+∞} e^{-sx} e^{-λ(x-μ)} dx
= (λ/2) e^{-λμ} ∫_{-∞}^{μ} e^{-(s-λ)x} dx + (λ/2) e^{+λμ} ∫_{μ}^{+∞} e^{-(s+λ)x} dx
= (λ/2) e^{-λμ} [-(1/(s-λ)) e^{-(s-λ)x}]_{-∞}^{μ} + (λ/2) e^{+λμ} [-(1/(s+λ)) e^{-(s+λ)x}]_{μ}^{∞}
= (λ/2) e^{-λμ} (1/(λ-s)) e^{-(s-λ)μ} + (λ/2) e^{+λμ} (1/(λ+s)) e^{-(s+λ)μ}
= (λ/2) [1/(λ-s) + 1/(λ+s)] e^{-μs}.
Thus,
f̂(s) = (λ²/(λ² - s²)) e^{-μs}.
The first and second derivatives of f̂(s) are
f̂'(s) = λ² [2s/(λ² - s²)² - μ/(λ² - s²)] e^{-μs}
and
f̂''(s) = λ²μ²/(λ² - s²) + 2λ²/(λ² - s²)² + o(s),
where the Landau order function o(s) collects all those terms which carry a factor s (as s tends to 0, these terms disappear). Hence,
f̂'(0) = -μ and f̂''(0) = μ² + 2/λ²,
so that E(X) = -f̂'(0) = μ, E(X²) = f̂''(0) = μ² + 2/λ², and Var(X) = 2/λ².
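A numerical sanity check of the transform results: differentiating f̂(s) = λ² e^{-μs}/(λ² - s²) numerically at s = 0 must return -E(X) and E(X²). The parameter values λ = 2, μ = 3 are illustrative, not from the exercise:

```python
import math

lam, mu = 2.0, 3.0   # illustrative example values

def lt(s: float) -> float:
    """Laplace transform of the Laplace(lam, mu) density."""
    return lam**2 / (lam**2 - s**2) * math.exp(-mu * s)

h = 1e-4
d1 = (lt(h) - lt(-h)) / (2 * h)              # ~ f'(0) = -E(X)
d2 = (lt(h) - 2 * lt(0.0) + lt(-h)) / h**2   # ~ f''(0) = E(X^2)
print(round(-d1, 4), round(d2 - d1**2, 4))   # E(X) = 3, Var(X) = 2/lam^2 = 0.5
```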

CHAPTER 3 Multidimensional Random Variables

3.1) Two dice are thrown. Their random outcomes are X1 and X2. Let X = max(X1, X2), and Y be the number of even components of (X1, X2). X and Y have the ranges R_X = {1, 2, 3, 4, 5, 6} and R_Y = {0, 1, 2}, respectively.
(1) Determine the joint probability distribution of the random vector (X, Y) and the corresponding marginal distributions. Are X and Y independent?
(2) Determine E(X), E(Y), and E(XY).
Solution (1) The table shows the joint distribution {r_ij = P(X = i, Y = j); i = 1, 2, ..., 6; j = 0, 1, 2} of (X, Y) and the marginal distributions {p_i = P(X = i); i = 1, 2, ..., 6} and {q_j = P(Y = j); j = 0, 1, 2}.

  Y\X     1      2      3      4      5      6   |  q_j
   0    1/36    0     3/36    0     5/36    0    |  9/36
   1     0     2/36   2/36   4/36   4/36   6/36  | 18/36
   2     0     1/36    0     3/36    0     5/36  |  9/36
  p_i   1/36   3/36   5/36   7/36   9/36  11/36  | 36/36
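As a cross-check (not in the original manual), the table can be generated by enumerating all 36 equally likely outcomes:

```python
from itertools import product

joint = {}
for x1, x2 in product(range(1, 7), repeat=2):
    # key = (max of the two dice, number of even components)
    key = (max(x1, x2), (x1 % 2 == 0) + (x2 % 2 == 0))
    joint[key] = joint.get(key, 0) + 1   # counts out of 36

ex  = sum(x * n for (x, _), n in joint.items()) / 36
ey  = sum(y * n for (_, y), n in joint.items()) / 36
exy = sum(x * y * n for (x, y), n in joint.items()) / 36
print(round(ex, 3), ey, round(exy, 3))   # 4.472 1.0 4.722
```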

X and Y are not independent, since if, e.g., X = 1, then Y must be 0.
(2) E(X) = 4.472, E(Y) = 1, E(XY) = 4.722.

3.2) Every day a car dealer sells X cars of type 1 and Y cars of type 2. The following table shows the joint distribution {r_ij = P(X = i, Y = j); i, j = 0, 1, 2} of (X, Y).

  X\Y     0      1      2
   0     0.1    0.1     0
   1     0.1    0.3    0.1
   2      0     0.2    0.1

(1) Determine the marginal distributions of (X, Y). (2) Are X and Y independent? (3) Determine the conditional mean values E(X | Y = i), i = 0, 1, 2.
Solution (1) The marginal distribution of X is {p_i = P(X = i) = Σ_{j=0}^2 r_ij; i = 0, 1, 2} = {0.2, 0.5, 0.3}. The marginal distribution of Y is {q_j = P(Y = j) = Σ_{i=0}^2 r_ij; j = 0, 1, 2} = {0.2, 0.6, 0.2}.
(2) Since, for instance, r_00 = P(X = 0, Y = 0) = 0.1 ≠ P(X = 0) · P(Y = 0) = 0.2 · 0.2 = 0.04, X and Y are dependent.
(3)
E(X | Y = 0) = Σ_{i=0}^2 i · r_i0/q_0 = (1 · 0.1)/0.2 = 0.5,
E(X | Y = 1) = Σ_{i=0}^2 i · r_i1/q_1 = (1 · 0.3 + 2 · 0.2)/0.6 = 7/6,
E(X | Y = 2) = Σ_{i=0}^2 i · r_i2/q_2 = (1 · 0.1 + 2 · 0.1)/0.2 = 1.5.
Similarly, E(Y | X = 0) = Σ_{j=0}^2 j · r_0j/p_0 = (1 · 0.1)/0.2 = 1/2.

3.3) Let B be the upper half of the circle x² + y² = 1. The random vector (X, Y) is uniformly distributed over B.
(1) Determine the joint density of (X, Y). (2) Determine the marginal distribution densities. (3) Are X and Y independent? Is theorem 3.1 applicable to answer this question?
Solution (1) The area of the (upper) half of the unit circle is π/2. Hence, the joint density of the random vector (X, Y) is
f(x, y) = 2/π, 0 ≤ y ≤ √(1 - x²), -1 ≤ x ≤ +1.
(2) The marginal densities are
f_X(x) = ∫_0^{√(1-x²)} (2/π) dy = (2/π) √(1 - x²), -1 ≤ x ≤ +1,
f_Y(y) = ∫_{-√(1-y²)}^{+√(1-y²)} (2/π) dx = (4/π) √(1 - y²), 0 ≤ y ≤ 1.

(3) Theorem 3.1 is not applicable (it refers to rectangles). Since f_X(x) · f_Y(y) ≠ f(x, y), X and Y are not independent (which is obvious from the construction of f(x, y)).

3.4) Let the random vector (X, Y) have a uniform distribution over a circle with radius r = 2. Determine the distribution function of the distance of the point (X, Y) from the center of this circle.
Solution For symmetry reasons, we may assume that the vector (X, Y) is uniformly distributed over the quarter circle 0 ≤ x ≤ 2, 0 ≤ y ≤ √(4 - x²). Its area is π. The distance of (X, Y) from the origin (0, 0) is Z = +√(X² + Y²). The area of the set of all points (x, y) of the quarter circle with x² + y² ≤ z² is (π/4)z², 0 ≤ z ≤ 2. Hence
P(√(X² + Y²) ≤ z) = ((π/4)z²)/π = z²/4,
so that F_Z(z) = P(Z ≤ z) = z²/4, 0 ≤ z ≤ 2.
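A Monte Carlo sketch of exercise 3.4 (seed and sample size are arbitrary choices): sample uniformly from the disc of radius 2 by rejection and estimate F_Z(1) = 1/4:

```python
import random

random.seed(7)
n, hits = 200_000, 0
for _ in range(n):
    # rejection sampling: uniform point in the circle of radius 2
    while True:
        x, y = random.uniform(-2, 2), random.uniform(-2, 2)
        if x * x + y * y <= 4.0:
            break
    if x * x + y * y <= 1.0:   # distance Z <= 1
        hits += 1
print(hits / n)   # close to F_Z(1) = 1/4
```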


3.5) Tessa and Vanessa have agreed to meet at a café between 16 and 17 o'clock. The arrival times of Tessa and Vanessa are X and Y, respectively. The random vector (X, Y) is assumed to have a uniform distribution over the square B = {(x, y); 16 ≤ x ≤ 17, 16 ≤ y ≤ 17}. Who comes first will wait for 40 minutes and then leave. What is the probability p that Tessa and Vanessa will miss each other?

[Figure (exercise 3.5): the square [0, 60]² of arrival times in minutes after 16:00, with the band |y - x| ≤ 40 hatched]

Solution The hatched part in the Figure is favorable for the occurrence of the meeting, since it contains all points (x, y) with |y - x| ≤ 40 (see exercise 1.19). The complementary region consists of two triangles, each with legs of length 20, of total area 2 · (20²/2) = 400. Hence, the 'missing probability' is p = 400/3600 = 1/9, and the 'encounter probability' is 1 - p = 8/9.

3.6) Determine the mean length of a chord which is randomly chosen in a circle with radius r. Consider separately the following ways of randomly choosing a chord:
(1) For symmetry reasons, the direction of the chord can be fixed in advance. Draw the diameter of the circle which is perpendicular to this direction. The midpoints of the chords are uniformly distributed over the whole length of the diameter.
(2) For symmetry reasons, one end point of the chord can be fixed at the periphery of the circle. The direction of the chord is uniformly distributed over the interval [0, π].
(3) How do you explain the different results obtained under (1) and (2)?
Solution (1) For symmetry reasons, we can without loss of generality restrict ourselves to the quarter circle x² + y² = r²; 0 ≤ x, y ≤ r, and assume that the direction of the chord coincides with the direction of the x-axis. Hence, for given Y the random length L of a chord is L = 2X = 2√(r² - Y²). Since Y is uniformly distributed over [0, r], the mean value of L is

E(L) = 2 ∫_0^r √(r² - y²) (1/r) dy = (2/r) [(y/2)√(r² - y²) + (r²/2) arcsin(y/r)]_0^r = (2/r) · (r²/2) · (π/2) = (π/2) r.
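Method (1) can be simulated directly; the estimate should settle near πr/2 ≈ 1.5708 for r = 1 (seed and sample size are arbitrary choices):

```python
import math, random

random.seed(3)
r, n = 1.0, 200_000
total = 0.0
for _ in range(n):
    # method (1): distance of the chord's midpoint from the center is uniform
    y = random.uniform(0.0, r)
    total += 2.0 * math.sqrt(r * r - y * y)   # chord length 2*sqrt(r^2 - y^2)
print(total / n, math.pi * r / 2)             # both close to 1.5708
```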


(2) For symmetry reasons, we determine the mean length of L only in the upper half of the circle.

[Figure: circle of radius r centered at (r, 0); a chord of length l runs from the origin at angle φ to the x-axis and meets the circle again at x₀ = 2r cos²φ]

The circle with radius r has equation (x - r)² + y² = r². The straight line (chord) given by y = x tan φ intersects the periphery of the circle at x₀ = 2r cos²φ. Hence, the length l of the chord satisfies x₀ = l cos φ, so that l = 2r cos φ. Since φ is uniformly distributed over [0, π/2] (density 2/π), the corresponding mean length of the chord is
E(L) = 2r ∫_0^{π/2} cos φ · (2/π) dφ = (4/π) r ≈ 1.273 r.
(3) Different approaches to defining 'uniform distribution' have been applied.

3.7) Matching bolts and nuts have the diameters X and Y, respectively. The random vector (X, Y) has a uniform distribution in a circle with radius 1 mm and midpoint (30 mm, 30 mm). Determine the probabilities (1) P(Y > X), and (2) P(Y ≤ X < 29).
Solution (1) P(Y > X) = 0.5, since the distribution is symmetric about the line y = x. (2) P(Y ≤ X < 29) = 0, since X ≥ 29 with probability 1.

3.8) The random vector (X, Y) is defined as follows: X is uniformly distributed in the interval [0, 10]. On condition X = x, the random variable Y is uniformly distributed in the interval [0, x]. Determine
(1) f_{X,Y}(x, y), f_Y(y|x), f_X(x|y), (2) E(Y), E(Y|X = 5), (3) P(5 < Y ≤ 10).
Solution (1) By assumption,
f_X(x) = 1/10 for 0 ≤ x ≤ 10 (and 0 otherwise), f_Y(y|x) = 1/x, 0 ≤ y < x.
Hence,
f_{X,Y}(x, y) = f_Y(y|x) f_X(x) = 1/(10x), 0 ≤ y < x ≤ 10,
f_Y(y) = ∫_y^{10} f_{X,Y}(x, y) dx = (1/10)[ln x]_y^{10} = (1/10)[ln 10 - ln y], 0 < y < 10,
f_X(x|y) = f_{X,Y}(x, y)/f_Y(y) = 1/(x [ln 10 - ln y]), 0 < y < x ≤ 10.
(2) E(Y) = (1/10) ∫_0^{10} y [ln 10 - ln y] dy = 2.5,  E(Y|X = 5) = ∫_0^5 y · (1/5) dy = 2.5.
(3) P(5 < Y ≤ 10) = (1/10) ∫_5^{10} [ln 10 - ln y] dy = 0.5 (1 - ln 2) ≈ 0.1534.
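The marginal density f_Y(y) = (1/10)(ln 10 - ln y) can be integrated numerically to confirm parts (2) and (3). The trapezoidal rule below is a minimal sketch; the lower limit 1e-9 for E(Y) is a pragmatic choice to avoid the logarithmic singularity at 0:

```python
import math

def f_y(y: float) -> float:
    """Marginal density of Y in exercise 3.8."""
    return (math.log(10.0) - math.log(y)) / 10.0

def integrate(g, a: float, b: float, n: int = 20_000) -> float:
    """Composite trapezoidal rule."""
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + k * h) for k in range(1, n)) + g(b) / 2)

p = integrate(f_y, 5.0, 10.0)                      # P(5 < Y <= 10)
ey = integrate(lambda y: y * f_y(y), 1e-9, 10.0)   # E(Y); integrand -> 0 at y = 0
print(round(p, 4), round(ey, 4))                   # 0.1534 and about 2.5
```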


3.9) Let f_{X,Y}(x, y) = c x² y, 0 ≤ x, y ≤ 1, be the joint probability density of the random vector (X, Y).
(1) Determine the constant c and the marginal densities. (2) Are X and Y independent?
Solution (1) c = 6, and
f_X(x) = 6 ∫_0^1 x² y dy = 3x², 0 ≤ x ≤ 1,  f_Y(y) = 6 ∫_0^1 x² y dx = 2y, 0 ≤ y ≤ 1.
(2) Since f_{X,Y}(x, y) = f_X(x) f_Y(y), X and Y are independent.

3.10) The random vector (X, Y) has the joint density f_{X,Y}(x, y) = (1/2) e^{-x}, 0 ≤ x < ∞, 0 ≤ y ≤ 2.
(1) Determine the marginal densities and the mean values E(X) and E(Y).
(2) Determine the conditional densities f_X(x|y) and f_Y(y|x). Are X and Y independent?
Solution (1)
f_X(x) = ∫_0^2 (1/2) e^{-x} dy = e^{-x}, 0 ≤ x < ∞.
This is an exponential distribution with mean value E(X) = 1.
f_Y(y) = ∫_0^∞ (1/2) e^{-x} dx = 1/2, 0 ≤ y ≤ 2.
This is a uniform distribution over [0, 2] with mean value E(Y) = 1.
(2)
f_X(x|y) = (0.5 e^{-x})/0.5 = e^{-x}, 0 ≤ x < ∞, 0 ≤ y ≤ 2;
f_Y(y|x) = (0.5 e^{-x})/e^{-x} = 0.5; 0 ≤ x < ∞, 0 ≤ y ≤ 2.
Thus, X and Y are independent.

3.11) Let f(x, y) = (1/2) sin(x + y), 0 ≤ x, y ≤ π/2, be the joint density of the vector (X, Y).
(1) Determine the marginal densities. (2) Are X and Y independent? (3) Determine the conditional mean value E(Y|X = x). (4) Compare the numerical values E(Y|X = 0) and E(Y|X = π/2) to E(Y). Are the results in line with your answer to (2)?
Solution (1)
f_X(x) = (1/2)(sin x + cos x), 0 ≤ x ≤ π/2,  f_Y(y) = (1/2)(sin y + cos y), 0 ≤ y ≤ π/2.
(2) No, since f_{X,Y}(x, y) ≠ f_X(x) f_Y(y). The conditional density is
f_Y(y|x) = sin(x + y)/(sin x + cos x), 0 ≤ x, y ≤ π/2.
(3) E(Y|X = x) = (1/(sin x + cos x)) ∫_0^{π/2} y sin(x + y) dy = 1 - (2 - π/2) sin x/(sin x + cos x).
(4) E(Y|X = 0) = 1, E(Y|X = π/2) = π/2 - 1 ≈ 0.5708, E(Y) = π/4 ≈ 0.7854. Yes: the conditional mean value depends on x, consistent with dependence.


3.12) The temperatures X and Y, measured daily at the same time at two different locations, have the joint density
f(x, y) = (xy/3) exp[-(1/2)(x² + y²/3)], 0 ≤ x, y < ∞.
Determine the probabilities P(X > Y) and P(X < Y ≤ 3X).
Solution
P(X > Y) = ∫_0^∞ (∫_0^x f(x, y) dy) dx = 0.2726,
P(X < Y ≤ 3X) = ∫_0^∞ (∫_x^{3x} f(x, y) dy) dx = 0.2182.

3.13) A large population of rats had been fed with individually varying mixtures of wholegrain wheat and puffed wheat to see whether the composition of the food has any influence on the lifetimes of the rats. Let Y [months] be the lifetime of a rat and X the corresponding ratio of wholegrain it had in its food. An evaluation of (real life) data justifies the assumption that the random vector (X, Y) has a bivariate normal distribution with parameters μ_X = 0.50, σ_X² = 0.028, μ_Y = 6.0, σ_Y² = 3.61 and correlation coefficient ρ = 0.92. (With these parameters, negative values of X and Y are extremely unlikely.)
(1) Determine the regression function m_Y(x), 0 ≤ x ≤ 1, and the corresponding residual variance.
(2) Determine the probability P(X ≤ 0.6, Y ≥ 8). Use software you are familiar with to numerically calculate this probability; otherwise, only produce the double integral.
Solution (1)
m_Y(x) = μ_Y + ρ(σ_Y/σ_X)(x - μ_X) = 10.4464 x + 0.77679,  Q(α, β) = (σ_Y - ασ_X)² = 0.0231.
(2)
P(X ≤ 0.6, Y ≥ 8) = (1/(2πσ_X σ_Y √(1-ρ²))) ∫_{-∞}^{0.6} ∫_8^∞ exp{-(1/(2(1-ρ²)))[(x-μ_X)²/σ_X² - 2ρ(x-μ_X)(y-μ_Y)/(σ_X σ_Y) + (y-μ_Y)²/σ_Y²]} dy dx = 0.0146.

3.14) In a forest stand, stem diameter X [m], measured 1.3 m above ground, and corresponding tree height Y [m] have a bivariate normal distribution with joint density
f_{X,Y}(x, y) = (1/(0.48π)) exp{-(25/18)[(x-0.3)²/σ_X² - 2ρ(x-0.3)(y-30)/(5σ_X) + (y-30)²/25]}.
Remark With this joint density, negative values of X and Y are extremely unlikely.
Determine the correlation coefficient ρ = ρ(X, Y) and the regression line ỹ = αx + β.
Solution From
1/(2(1 - ρ²)) = 25/18
it follows that 1 - ρ² = 0.36, i.e., ρ² = 0.64 and ρ = 0.8.
From
1/(2π σ_X · 5 · √(1 - 0.64)) = 1/(0.48π)
it follows that σ_X = 0.08. Hence,
α = ρ (σ_Y/σ_X) = 0.8 · (5/0.08) = 50,  β = μ_Y - αμ_X = 30 - 50 · 0.3 = 15,
so that the regression line is
ỹ = 50x + 15.


3.15) The prices per unit X and Y of two related stocks have a bivariate normal distribution with parameters μ_X = 24, σ_X² = 49, μ_Y = 36, σ_Y² = 144, and correlation coefficient ρ = 0.8.
(1) Determine the probabilities P(|Y - X| ≤ 10) and P(|Y - X| > 15). You may make use of software you are familiar with to numerically calculate these probabilities; otherwise, only produce the respective double integrals.
(2) Determine the regression function m_Y(x) and the corresponding residual variance.
Solution (1) The first probability is the double integral
P(|Y - X| ≤ 10) = (1/(100.8π)) ∫_{-∞}^{+∞} ∫_{x-10}^{x+10} exp{-(1/0.72)[(x-24)²/49 - 1.6 (x-24)(y-36)/84 + (y-36)²/144]} dy dx.
Since (X, Y) is bivariate normal, D = Y - X is normally distributed with mean μ_Y - μ_X = 12 and variance σ_X² + σ_Y² - 2ρσ_Xσ_Y = 49 + 144 - 134.4 = 58.6, so that
P(|Y - X| ≤ 10) = Φ((10 - 12)/√58.6) - Φ((-10 - 12)/√58.6) ≈ 0.3949,
P(|Y - X| > 15) = 1 - P(|Y - X| ≤ 15) = 1 - [Φ((15 - 12)/√58.6) - Φ((-15 - 12)/√58.6)] ≈ 0.3478.
(2) ỹ = 1.3714x + 3.0864, Q(α, β) = (σ_Y - ασ_X)² = 5.7610.

3.16) The random vector (X, Y) has the joint distribution function F_{X,Y}(x, y). Show that for a < b and c < d:
P(a ≤ X < b, c ≤ Y < d) = [F_{X,Y}(b, d) - F_{X,Y}(b, c)] - [F_{X,Y}(a, d) - F_{X,Y}(a, c)].
(This is formula (3.7), page 121.) For illustration, see the Figure:

[Figure: rectangle in the (x, y)-plane with corners (a, c), (b, c), (a, d), (b, d)]

Solution For illustration, see the Figure:
P(a ≤ X < b, c ≤ Y < d) = P(X < b, Y < d) - P(X < b, Y < c) - P(X < a, Y < d) + P(X < a, Y < c)
= F_{X,Y}(b, d) - F_{X,Y}(b, c) - F_{X,Y}(a, d) + F_{X,Y}(a, c).

3.17) Let
F(x, y) = 0 for x + y ≤ 0, and F(x, y) = 1 for x + y > 0.
Show that F(x, y) does not fulfill the condition
[F(b, d) - F(b, c)] - [F(a, d) - F(a, c)] ≥ 0
for all a, b, c and d with a < b and c < d. Hence, although this function is continuous on the left in both x and y, nondecreasing in both x and y, and satisfies conditions (1) and (4), it cannot be the joint distribution function of a random vector (X, Y).


Solution Let a = -1, b = 3, c = -1, and d = 3. Then
[F(3, 3) - F(3, -1)] - [F(-1, 3) - F(-1, -1)] = (1 - 1) - (1 - 0) = -1 < 0.

3.18) The random vector (X, Y) has the joint distribution function F_{X,Y}(x, y). Show that
P(X > x, Y > y) = 1 - F_X(x) - F_Y(y) + F_{X,Y}(x, y).
Solution This is a special case of formula (3.7): a = x, b = ∞, c = y, d = ∞.

3.19) The random vector (X, Y) has the joint distribution function (Marshall-Olkin distribution, page 132) with parameters λ1 > 0, λ2 > 0, and λ ≥ 0:
F_{X,Y}(x, y) = 1 - e^{-(λ1+λ)x} - e^{-(λ2+λ)y} + e^{-λ1 x - λ2 y - λ max(x,y)}; x, y ≥ 0.
Show that the correlation coefficient between X and Y is given by
ρ(X, Y) = λ/(λ1 + λ2 + λ).
Solution The marginal distribution functions and the corresponding mean values and standard deviations are
F_X(x) = 1 - e^{-(λ1+λ)x}, F_Y(y) = 1 - e^{-(λ2+λ)y}; x, y ≥ 0,
E(X) = √Var(X) = 1/(λ1 + λ),  E(Y) = √Var(Y) = 1/(λ2 + λ).
The covariance is Cov(X, Y) = E(XY) - E(X)E(Y), so that
ρ(X, Y) = Cov(X, Y) · (λ1 + λ)(λ2 + λ) = E(XY)(λ1 + λ)(λ2 + λ) - 1
with (see page 132 for f_{X,Y}(x, y))
E(XY) = ∫_0^∞ ∫_0^∞ x y f_{X,Y}(x, y) dy dx
= λ2(λ1 + λ) ∫_0^∞ x e^{-(λ1+λ)x} (∫_0^x y e^{-λ2 y} dy) dx + λ1(λ2 + λ) ∫_0^∞ x e^{-λ1 x} (∫_x^∞ y e^{-(λ2+λ)y} dy) dx.
Routine integration verifies the given formula for ρ(X, Y).

3.20) At time t = 0, a parallel system S consisting of two elements e1 and e2 starts operating. The lifetimes X1 and X2 of e1 and e2, respectively, are assumed to be dependent random variables with joint survival function
F̄(x1, x2) = P(X1 > x1, X2 > x2) = 1/(e^{0.1x1} + e^{0.2x2} - 1), x1, x2 ≥ 0.
(1) What are the distribution functions of X1 and X2? (2) What is the probability that the system survives the interval [0, 10]?


Note By definition, a parallel system is fully operating at a time point t if at least one of its elements is still operating at time t, i.e., a parallel system fails at that time point when the last of its elements fails (see also example 4.16, page 176).

Solution (1) For x1, x2 ≥ 0,
F_{X1}(x1) = 1 - F̄(x1, 0) = 1 - e^{-0.1x1},  F_{X2}(x2) = 1 - F̄(0, x2) = 1 - e^{-0.2x2}.
(2) Property (7) at page 121 gives the joint distribution function of (X1, X2):
F_{X1,X2}(x1, x2) = F̄(x1, x2) + F_{X1}(x1) + F_{X2}(x2) - 1.
F_{X1,X2}(10, 10) is the probability that the system fails in [0, 10]. Hence, the desired probability is
1 - F_{X1,X2}(10, 10) = 2 - F̄(10, 10) - F_{X1}(10) - F_{X2}(10) = e^{-1} + e^{-2} - [e^1 + e^2 - 1]^{-1} = 0.3934.

3.21) Prove the conditional variance formula
(i) Var(X) = E[Var(X|Y)] + Var[E(X|Y)].
Solution The representation of Var(X) given by formula (2.62) (page 67) is valid for any probability distribution of X, in particular for the conditional probability distribution of X given Y (see page 128). Hence, with regard to the first term on the right of (i), and taking into account that the conditional mean value E(X|Y) is a random variable,
E{Var(X|Y)} = E{E(X²|Y) - [E(X|Y)]²} = E{E(X²|Y)} - E{[E(X|Y)]²} = E(X²) - E{[E(X|Y)]²}.
Again by making use of formula (2.62), the second term on the right of (i) becomes
Var[E(X|Y)] = E{[E(X|Y)]²} - {E[E(X|Y)]}² = E{[E(X|Y)]²} - [E(X)]².
Summing up gives the desired result:
E[Var(X|Y)] + Var[E(X|Y)] = E(X²) - [E(X)]² = Var(X).

3.22) The random edge length X of a cube has a uniform distribution in the interval [4.8, 5.2]. Determine the correlation coefficient ρ = ρ(X, Y), where Y = X³ is the volume of the cube.
Solution E(X) = 5, Var(X) = E(X²) - [E(X)]² = 30.016/1.2 - 25 ≈ 0.0133.
E(Y) = E(X³) = (1/0.4) ∫_{4.8}^{5.2} x³ dx = (1/1.6)[x⁴]_{4.8}^{5.2} = 125.2,
Var(Y) = E(Y²) - [E(Y)]² = E(X⁶) - 125.2² = 75.08,
E(XY) = E(X⁴) = (1/0.4) ∫_{4.8}^{5.2} x⁴ dx = (1/2)[x⁵]_{4.8}^{5.2} = 627.00032,
so that
ρ(X, Y) = (627.00032 - 5 · 125.2)/(√(30.016/1.2 - 25) · √75.08) = 0.999787.
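Since all moments of a uniform random variable have the closed form E(X^k) = (b^{k+1} - a^{k+1})/((k+1)(b - a)), the correlation in exercise 3.22 can be computed exactly:

```python
def uniform_moment(a: float, b: float, k: int) -> float:
    """E(X^k) for X uniform on [a, b]."""
    return (b ** (k + 1) - a ** (k + 1)) / ((k + 1) * (b - a))

a, b = 4.8, 5.2
ex, ex2 = uniform_moment(a, b, 1), uniform_moment(a, b, 2)
ey, ey2 = uniform_moment(a, b, 3), uniform_moment(a, b, 6)   # Y = X^3
exy = uniform_moment(a, b, 4)                                # E(X * X^3)

cov = exy - ex * ey
rho = cov / ((ex2 - ex**2) ** 0.5 * (ey2 - ey**2) ** 0.5)
print(round(rho, 6))   # 0.999787
```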

3.23) The edge length X of an equilateral triangle is uniformly distributed in the interval [9.9, 10.1]. Determine the correlation coefficient between X and the area Y of the triangle.
Solution
E(X) = 10, Var(X) = 1/300,
Y = (√3/4) X², E(Y) = (√3/4) E(X²) = (√3/4)(60.002/0.6) ≈ 43.3029,
E(Y²) = (3/16) E(X⁴) = 1875.375004, Var(Y) = E(Y²) - [E(Y)]² = 0.25,
E(XY) = (√3/4) E(X³) = (√3/4)(1/0.2) ∫_{9.9}^{10.1} x³ dx = √3 · 250.025,
so that
ρ(X, Y) = (E(XY) - E(X)E(Y))/(√Var(X) · √Var(Y)) ≈ 0.999997.
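The same closed-form moment formula verifies exercise 3.23; c = √3/4 is the constant in Y = (√3/4)X²:

```python
import math

def uniform_moment(a: float, b: float, k: int) -> float:
    """E(X^k) for X uniform on [a, b]."""
    return (b ** (k + 1) - a ** (k + 1)) / ((k + 1) * (b - a))

a, b, c = 9.9, 10.1, math.sqrt(3) / 4          # Y = c * X^2
ex, ex2, ex3, ex4 = (uniform_moment(a, b, k) for k in range(1, 5))

ey, ey2, exy = c * ex2, c * c * ex4, c * ex3
rho = (exy - ex * ey) / math.sqrt((ex2 - ex**2) * (ey2 - ey**2))
print(round(rho, 6))   # very close to 1
```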

3.24) The random vector (X, Y) has the joint density f_{X,Y}(x, y) = 8xy, 0 < y ≤ x < 1. Determine
(1) the correlation coefficient ρ(X, Y),
(2) the regression line ỹ(x) = αx + β of Y with regard to X,
(3) the regression function m_Y(x). Interpret the relationship between ỹ(x) and m_Y(x).
Solution (1) The marginal densities of f_{X,Y}(x, y) are
f_X(x) = 8 ∫_0^x xy dy = 4x³, 0 ≤ x ≤ 1,
f_Y(y) = 8 ∫_y^1 xy dx = 4y(1 - y²), 0 ≤ y ≤ 1.
The corresponding parameters are
E(X) = 4 ∫_0^1 x⁴ dx = 4/5,
E(Y) = 4 ∫_0^1 y²(1 - y²) dy = 8/15,
Var(X) = 4 ∫_0^1 (x - 4/5)² x³ dx = 4 ∫_0^1 (x⁵ - (8/5)x⁴ + (16/25)x³) dx = 2/75,
Var(Y) = 4 ∫_0^1 (y - 8/15)² y(1 - y²) dy = 11/225,
√Var(X) = √(2/75) ≈ 0.1633,  √Var(Y) = √(11/225) ≈ 0.2211,
E(XY) = 8 ∫_0^1 ∫_0^x x² y² dy dx = 8 ∫_0^1 x² (∫_0^x y² dy) dx = (8/3) ∫_0^1 x⁵ dx = 4/9.
Hence,
Cov(X, Y) = 4/9 - (4/5)(8/15) = 4/225 ≈ 0.0178,
and the correlation coefficient is
ρ(X, Y) = Cov(X, Y)/(√Var(X) · √Var(Y)) = (4/225)/(√(2/75) · √(11/225)) ≈ 0.4924.
(2)
α = ρ(X, Y) · √Var(Y)/√Var(X) = Cov(X, Y)/Var(X) = 2/3,
β = E(Y) - αE(X) = 0,
so that ỹ(x) = (2/3)x, 0 ≤ x ≤ 1. Hence, the regression line starts at the origin (0, 0) and has slope 2/3.
(3) The conditional density f_Y(y|x) is
f_Y(y|x) = f_{X,Y}(x, y)/f_X(x) = 8xy/(4x³) = 2y/x², 0 ≤ y ≤ x ≤ 1.
Thus, the regression function is
m_Y(x) = ∫_0^x y f_Y(y|x) dy = (2/x²) ∫_0^x y² dy = (2/3)x, 0 ≤ x ≤ 1.

Hence, regression line and regression function coincide. This is the ideal situation when estimating the regression function by linear regression.

3.25) The random variables U and V are uncorrelated and have mean value 0. Their variances are 4 and 9, respectively. Determine the correlation coefficient ρ(X, Y) between the random variables X = 2U + 3V and Y = U − 2V.

Solution By assumption, E(U) = E(V) = 0, Var(U) = E(U²) = 4, Var(V) = E(V²) = 9, Cov(U, V) = E(UV) = E(U)E(V) = 0. Hence, E(X) = E(Y) = 0 so that Cov(X, Y) = E(XY) with

E(XY) = E[(2U + 3V)(U − 2V)] = E(2U² − 4UV + 3UV − 6V²) = 2 · 4 − 0 + 0 − 6 · 9 = −46.

The variances of X and Y are

Var(X) = Var(2U + 3V) = 4 · 4 + 9 · 9 = 97, Var(Y) = Var(U − 2V) = 4 + 4 · 9 = 40.

Hence,

ρ(X, Y) = Cov(X, Y) / (√Var(X) · √Var(Y)) = −46/√(97 · 40) = −0.7385.
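A simulation sketch of Exercise 3.25: taking U and V as independent centered normals with variances 4 and 9 is one concrete choice satisfying the stated assumptions (any uncorrelated pair with these variances yields the same ρ).

```python
import math
import random

# Monte Carlo estimate of rho(2U+3V, U-2V); analytically -46/sqrt(3880).
random.seed(2)
n = 200_000
us = [random.gauss(0, 2) for _ in range(n)]   # Var(U) = 4
vs = [random.gauss(0, 3) for _ in range(n)]   # Var(V) = 9
xs = [2 * u + 3 * v for u, v in zip(us, vs)]
ys = [u - 2 * v for u, v in zip(us, vs)]
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
var_y = sum((y - my) ** 2 for y in ys) / n
rho = cov / math.sqrt(var_x * var_y)
```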


3.26) The random variable Z is uniformly distributed in the interval [0, 2π]. Check whether the random variables X = sin Z and Y = cos Z are uncorrelated.

Solution

E(X) = (1/2π) ∫_0^{2π} sin z dz = (1/2π) [−cos z]_0^{2π} = 0,

E(Y) = (1/2π) ∫_0^{2π} cos z dz = 0.

Hence,

Cov(X, Y) = E(XY) = (1/2π) ∫_0^{2π} sin z cos z dz = (1/4π) [sin² z]_0^{2π} = 0.

X and Y are uncorrelated.

CHAPTER 4 Functions of Random Variables

4.1) In a game reserve, the random position (X, Y) of a leopard has a uniform distribution in a semicircle with radius r = 10 km (Figure). Determine E(X) and E(Y).

(Illustration to Exercise 4.1: the upper half-disk of radius 10 centered at the origin, with the random position (X, Y) marked.)

Solution The joint density of the random vector (X, Y) is

f_{X,Y}(x, y) = 1/(50π); 0 ≤ y ≤ √(100 − x²), −10 ≤ x ≤ 10.

The corresponding marginal densities are

f_X(x) = ∫_0^{√(100−x²)} 1/(50π) dy = (1/50π) √(100 − x²), −10 ≤ x ≤ 10,

f_Y(y) = ∫_{−√(100−y²)}^{+√(100−y²)} 1/(50π) dx = (1/25π) √(100 − y²), 0 ≤ y ≤ 10.

Hence, the desired mean values are

E(X) = ∫_{−10}^{+10} x (1/50π) √(100 − x²) dx = 0

(antisymmetry of the integrand with regard to x = 0), and

E(Y) = ∫_0^{10} y (1/25π) √(100 − y²) dy = (1/25π) [−(1/3)(100 − y²)^{3/2}]_0^{10} ≈ 4.2441.
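The value E(Y) ≈ 4.2441 (in closed form, 1000/(75π) = 40/(3π)) can be checked by simulation; the rejection sampling below from the bounding rectangle is a sketch with arbitrary seed and sample size.

```python
import math
import random

# Monte Carlo check of Exercise 4.1: uniform point in the upper
# half-disk of radius 10, obtained by rejection sampling.
random.seed(41)
n = 200_000
sum_x = sum_y = 0.0
accepted = 0
while accepted < n:
    x = random.uniform(-10, 10)
    y = random.uniform(0, 10)
    if x * x + y * y <= 100:   # keep only points inside the half-disk
        accepted += 1
        sum_x += x
        sum_y += y
ex_hat = sum_x / n             # analytically 0 by symmetry
ey_hat = sum_y / n             # analytically 40/(3*pi) ~ 4.2441
```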

4.2) From a circle with radius R = 9 and center (0,0) a point is randomly selected. (1) Determine the mean value of the distance of this point to the nearest point at the periphery of the circle. (2) Determine the mean value of the geometric mean of the random variables X and Y, i.e., E(√(XY)).

Solution (1) The distance of the randomly selected point (X, Y) from the center (0,0) of the circle is Z = √(X² + Y²). The distribution function of Z is

F(z) = P(Z ≤ z) = πz²/(π · 81) = z²/81, 0 ≤ z ≤ 9.


Hence, the mean value of Z is

E(Z) = ∫_0^9 (1 − z²/81) dz = (1/81) ∫_0^9 (81 − z²) dz = (1/81) [81z − z³/3]_0^9 = 6.

Let D be that diameter of the circle which passes through point (X, Y). Then the section of D between point (X, Y) and the periphery of the circle is the shortest distance S. Hence, its mean value is

E(S) = 9 − 6 = 3.

(2) For symmetry reasons, the problem is restricted to the quarter circle x² + y² ≤ 81; 0 ≤ x, y ≤ 9. Its area is A = 81π/4. Hence, the joint density of (X, Y) is

f_{X,Y}(x, y) = 4/(81π), 0 ≤ y ≤ √(81 − x²), 0 ≤ x ≤ 9.

Thus,

E(√(XY)) = ∫_0^9 ∫_0^{√(81−x²)} √(xy) · 4/(81π) dy dx = (4/(81π)) ∫_0^9 √x (∫_0^{√(81−x²)} √y dy) dx ≈ 3.236.
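Both answers of Exercise 4.2 can be verified by simulation (a sketch; seed, sample size, and the rejection scheme are arbitrary choices, and |X|, |Y| is used since the quarter-circle restriction is only a symmetry device).

```python
import math
import random

# Monte Carlo check of Exercise 4.2: uniform point in the circle of
# radius 9, generated by rejection from the bounding square.
random.seed(42)
n = 300_000
dist_sum = geo_sum = 0.0
accepted = 0
while accepted < n:
    x = random.uniform(-9, 9)
    y = random.uniform(-9, 9)
    if x * x + y * y <= 81:
        accepted += 1
        dist_sum += 9 - math.sqrt(x * x + y * y)  # distance to the periphery
        geo_sum += math.sqrt(abs(x) * abs(y))     # geometric mean of |X|, |Y|
e_s = dist_sum / n   # analytically 3
e_g = geo_sum / n    # analytically about 3.236
```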

4.3) X and Y are independent, exponentially with parameter λ = 1 distributed random variables. Determine (1) E(X − Y), (2) E(|X − Y|), and (3) the distribution function and density of Z = X − Y.

Solution (1) E(X − Y) = E(X) − E(Y) = 1 − 1 = 0.

(Illustration to Exercise 4.3 (2): the hatched strip |x − y| ≤ z between the lines y = x + z and y = x − z in the first quadrant.)

(2) The hatched part in the figure contains all points (x, y) which satisfy the inequality |x − y| ≤ z. Since the joint density of (X, Y) is f_{X,Y}(x, y) = e^{−(x+y)}, x, y ≥ 0, the 'survival function' of |X − Y| is

P(|X − Y| > z) = ∫_0^∞ ∫_{x+z}^∞ e^{−(x+y)} dy dx + ∫_z^∞ ∫_0^{x−z} e^{−(x+y)} dy dx

= ∫_0^∞ e^{−(z+2x)} dx + ∫_z^∞ [e^{−x} − e^{−(2x−z)}] dx

= (1/2)e^{−z} + (1/2)e^{−z} = e^{−z},

since ∫_z^∞ e^{−x} dx = e^{−z} and ∫_z^∞ e^{−(2x−z)} dx = (1/2)e^{−z}. Thus, |X − Y| is exponentially distributed with parameter 1 (in accordance with the Laplace density of Z = X − Y derived in (3)), and, by formula (2.52),

E(|X − Y|) = ∫_0^∞ P(|X − Y| > z) dz = ∫_0^∞ e^{−z} dz = 1.
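A simulation sketch supporting part (2): for X, Y iid exponential with parameter 1, |X − Y| should again be exponential with parameter 1 (seed and sample size are arbitrary).

```python
import math
import random

# Monte Carlo check: E(|X - Y|) and the tail P(|X - Y| > 1) for
# X, Y iid Exp(1); theory gives 1 and e^{-1}, respectively.
random.seed(43)
n = 200_000
diffs = [abs(random.expovariate(1.0) - random.expovariate(1.0))
         for _ in range(n)]
mean_abs = sum(diffs) / n                   # should be close to 1
tail_1 = sum(d > 1.0 for d in diffs) / n    # should be close to e^{-1}
```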

(3) For z ≥ 0, the distribution function of Z = X − Y is

F_Z(z) = P(X − Y ≤ z) = P(X ≤ z + Y) = ∫_0^∞ F_X(z + y) f_Y(y) dy

= ∫_0^∞ [1 − e^{−(z+y)}] e^{−y} dy = 1 − (1/2)e^{−z}, z ≥ 0.

For z ≤ 0, the distribution function of Z = X − Y is

F_Z(z) = ∫_{−z}^∞ F_X(z + y) f_Y(y) dy = ∫_{−z}^∞ [e^{−y} − e^{−z} e^{−2y}] dy = (1/2)e^{z}, z ≤ 0.

This is a Laplace distribution (page 6) with density f_Z(z) = (1/2) e^{−|z|}, −∞ < z < +∞.

4.4) X and Y are independent random variables with E(X) = E(Y) = 5, Var(X) = Var(Y) = 9, and let U = 2X + 3Y and V = 3X − 2Y. Determine E(U), E(V), Var(U), Var(V), Cov(U, V), and ρ(U, V).

Solution E(U) = 25, E(V) = 5, Var(U) = Var(V) = 4 · 9 + 9 · 9 = 117,

E(UV) = E(6X² − 4XY + 9XY − 6Y²) = 6(9 + 25) + 5 · 25 − 6(9 + 25) = 125,

Cov(U, V) = E(UV) − E(U)E(V) = 125 − 25 · 5 = 0, hence ρ(U, V) = 0.

4.5) X and Y are independent, in the interval [0,1] uniformly distributed random variables. Determine the densities of (1) Z = min(X, Y), and (2) Z = XY.

Solution (1)

F_Z(z) = P(min(X, Y) ≤ z) = 1 − P(min(X, Y) > z) = 1 − (1 − z)² = 2z − z², 0 ≤ z ≤ 1.

Hence, f_Z(z) = F′_Z(z) = 2(1 − z), 0 ≤ z ≤ 1.

(2) By formula (4.21) and the independence of X and Y,

F_Z(z) = P(XY ≤ z) = ∫_z^1 P(X ≤ z/y) dy + 1 · P(Y ≤ z) = ∫_z^1 (z/y) dy + z.

Hence, F_Z(z) = z − z ln z, 0 < z ≤ 1, and f_Z(z) = F′_Z(z) = −ln z, 0 < z ≤ 1.


4.6) X and Y are independent and N(0, 1)-distributed. Determine the density f_Z(z) of Z = X/Y. Which type of probability distributions does f_Z(z) belong to?

Solution

f_X(x) = (1/√(2π)) e^{−x²/2}, f_Y(y) = (1/√(2π)) e^{−y²/2}; −∞ < x, y < ∞.

By formula (4.22), page 173,

f_Z(z) = (1/2π) ∫_{−∞}^{+∞} |x| e^{−x²/2} e^{−(zx)²/2} dx = (1/π) ∫_0^∞ x e^{−x²/2} e^{−(zx)²/2} dx.

Substituting y = x√(1 + z²) gives

f_Z(z) = (1/(π(1 + z²))) ∫_0^∞ y e^{−y²/2} dy = 1/(π(1 + z²)), −∞ < z < +∞.

Thus, Z has a Cauchy distribution (see page 74).

4.7)* X and Y are independent and identically Cauchy distributed with parameters λ = 1 and μ = 0, i.e., they have densities

f_X(x) = (1/π) · 1/(1 + x²), f_Y(y) = (1/π) · 1/(1 + y²), −∞ < x, y < +∞.

Verify that the sum Z = X + Y has a Cauchy distribution as well.

Solution By formula (4.38), page 181,

f_Z(x) = (1/π²) ∫_{−∞}^{+∞} [1/(1 + (x − y)²)] · [1/(1 + y²)] dy.

The integrand is decomposed into partial fractions:

1/[(1 + (x − y)²)(1 + y²)] = (Ay + B)/(1 + (x − y)²) + (Cy + D)/(1 + y²).   (i)

This structure of the decomposition results from the fact that the zeros of the denominators of both factors are each conjugate complex numbers. Multiplying both sides of (i) by [1 + (x − y)²][1 + y²] and comparing the coefficients of the powers of y yields the following system of equations for the unknowns A, B, C, and D:

B + D + Dx² = 1, A + C + Cx² − 2Dx = 0, B + D − 2Cx = 0, A + C = 0.

The solution is

A = −2/((4 + x²)x), B = 3/(4 + x²), C = 2/((4 + x²)x), D = 1/(4 + x²).


Substituting u = y − x in the first integral resulting from (i) gives

∫_{−∞}^{+∞} [(Ay + B)/(1 + (x − y)²) + (Cy + D)/(1 + y²)] dy = ∫_{−∞}^{+∞} [(A(u + x) + B)/(1 + u²) + (Cu + D)/(1 + u²)] du

= ∫_{−∞}^{+∞} (Ax + B + D)/(1 + u²) du = (Ax + B + D) ∫_{−∞}^{+∞} 1/(1 + u²) du,

since A + C = 0. Hence,

f_Z(x) = ((Ax + B + D)/π²) ∫_{−∞}^{+∞} 1/(1 + u²) du.

Since ∫_{−∞}^{+∞} 1/(1 + u²) du = π, the density of Z is

f_Z(x) = (Ax + B + D)/π = (1/π) · 2/(4 + x²), −∞ < x < +∞.

This derivation becomes rigorous by doing the integration within finite integration limits −a and +a and then passing to the limit as a → ∞.
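A simulation sketch for Exercise 4.7: the sum of two independent standard Cauchy variables should be Cauchy with scale parameter 2, i.e. F_Z(x) = 1/2 + arctan(x/2)/π. Since Cauchy samples have no finite moments, the empirical distribution function rather than a sample mean is compared (seed and sample size arbitrary).

```python
import bisect
import math
import random

# Compare the empirical CDF of X + Y (X, Y iid standard Cauchy) with
# the Cauchy(0, 2) distribution function at a few fixed points.
random.seed(47)
n = 200_000

def std_cauchy():
    # inverse-transform sampling of the standard Cauchy distribution
    return math.tan(math.pi * (random.random() - 0.5))

zs = sorted(std_cauchy() + std_cauchy() for _ in range(n))

def ecdf(x):
    return bisect.bisect_right(zs, x) / n

max_err = max(abs(ecdf(x) - (0.5 + math.atan(x / 2) / math.pi))
              for x in (-5, -2, -1, 0, 1, 2, 5))
```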

4.8) The joint density of the random vector (X, Y) is f(x, y) = 6x²y, 0 ≤ x, y ≤ 1. Determine the distribution density of the product Z = XY.

Solution Since X and Y are independent (exercise 3.9), by formula (4.21),

f_Z(z) = ∫_z^1 (1/x)(6x² · (z/x)) dx = 6z ∫_z^1 dx = 6z(1 − z), 0 ≤ z ≤ 1.

4.9) The random vector (X, Y) has the joint density f_{X,Y}(x, y) = 2e^{−(x+y)} for 0 ≤ x ≤ y < ∞. Determine the densities of (1) Z = max(X, Y) and (2) Z = min(X, Y).

Solution (1)

F_Z(z) = P(max(X, Y) ≤ z) = F_{X,Y}(z, z) = ∫_0^z ∫_x^z f_{X,Y}(x, y) dy dx

= 2 ∫_0^z ∫_x^z e^{−(x+y)} dy dx = 2 ∫_0^z e^{−x}(e^{−x} − e^{−z}) dx

= 2 ∫_0^z e^{−2x} dx − 2e^{−z} ∫_0^z e^{−x} dx = 1 − 2e^{−z} + e^{−2z} = (1 − e^{−z})², z ≥ 0,

f_Z(z) = 2e^{−z}(1 − e^{−z}), z ≥ 0.

(2)

F_Z(z) = P(min(X, Y) ≤ z) = 1 − P(min(X, Y) > z) = 1 − P(X > z, Y > z)

= 1 − 2 ∫_z^∞ ∫_z^y e^{−(x+y)} dx dy = 1 − e^{−2z}, z ≥ 0.

One obtains the same results as under (1) and (2) on condition that X and Y are iid exponential with parameter λ = 1.
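Since the joint density 2e^{−(x+y)}, 0 ≤ x ≤ y, is exactly that of the ordered pair of two iid Exp(1) variables, both distribution functions of Exercise 4.9 can be checked by simulation (sketch; seed, sample size, and the evaluation points z = 0.5 and z = 1 are arbitrary).

```python
import math
import random

# Monte Carlo check of Exercise 4.9 via the order statistics of two
# iid Exp(1) variables: P(min > 0.5) = e^{-1}, P(max <= 1) = (1-e^{-1})^2.
random.seed(49)
n = 200_000
hit_min = hit_max = 0
for _ in range(n):
    a = random.expovariate(1.0)
    b = random.expovariate(1.0)
    if min(a, b) > 0.5:
        hit_min += 1
    if max(a, b) <= 1.0:
        hit_max += 1
p_min = hit_min / n
p_max = hit_max / n
```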


4.10) The resistance values X, Y, and Z of 3 resistors connected in series are assumed to be independent, normally distributed random variables with respective mean values 200, 300, and 500 [Ω], and standard deviations 5, 10, and 20 [Ω]. (1) What is the probability that the total resistance exceeds 1020 [Ω]? (2) Determine that interval [1000 − ε, 1000 + ε] to which the total resistance belongs with probability 0.95.

Solution (1) Total resistance: T = X + Y + Z = N(1000, 525).

P(T > 1020) = P(N(0, 1) > (1020 − 1000)/√525) = 1 − P(N(0, 1) ≤ 0.873) ≈ 0.191.

(2) The condition is equivalent to P(1000 − ε ≤ N(1000, 525) ≤ 1000 + ε) = 0.95 or

P(−ε/√525 ≤ N(0, 1) ≤ +ε/√525) = 0.95.

The percentiles x_{0.025} and x_{0.975} of the standard normal distribution are −1.96 and +1.96, respectively. Hence, ε = 1.96 · √525 = 44.91 so that the desired interval is [955.09, 1044.91].

4.11) A supermarket employs 24 shop assistants. 20 of them achieve an average daily turnover of $8000, whereas 4 achieve an average daily turnover of $10 000. The corresponding standard deviations are $2400 and $3000, respectively. The daily turnovers of all shop assistants are independent and have a normal distribution. Let Z be the daily total turnover of all shop assistants. (1) Determine E(Z) and Var(Z). (2) What is the probability that the daily total turnover Z is greater than $190 000?

Solution E(Z) = 20 · 8000 + 4 · 10 000 = 200 000, Var(Z) = 20 · 2400² + 4 · 3000² = 151 200 000.

Since σ_Z = √Var(Z) = 12 296, the desired probability is

P(Z > 190 000) = 1 − P(Z ≤ 190 000) = 1 − Φ((190 000 − 200 000)/12 296) = 1 − Φ(−0.813) = Φ(0.813) = 0.792.

4.12) A helicopter is allowed to carry at most 8 persons given that their total weight does not exceed 620 kg. The weights of the passengers are independent, identically normally distributed random variables with mean value 76 kg and variance 324 kg². (1) What are the probabilities of exceeding the permissible load with 7 and 8 passengers, respectively?


(2) What would the maximum total permissible load have to be to ensure that with probability 0.99 the helicopter will be allowed to fly 8 passengers?

Solution (1) 7 passengers: Z = N(532, 2268). Hence,

P(Z > 620) = 1 − Φ((620 − 532)/√2268) = 1 − Φ(1.848) = 0.0322.

8 passengers: Z = N(608, 2592). Hence,

P(Z > 620) = 1 − Φ((620 − 608)/√2592) = 1 − Φ(0.2357) = 0.406.

(2) Let L_{0.99} be the maximum total permissible load. Then L_{0.99} satisfies

P(Z ≤ L_{0.99}) = Φ((L_{0.99} − 608)/√2592) = 0.99.

Since the 0.99-percentile of the standard normal distribution is x_{0.99} = 2.32,

(L_{0.99} − 608)/√2592 = 2.32 so that L_{0.99} ≈ 726 kg.

4.13) Let X be the height of the woman and Y be the height of the man in married couples in a certain geographical region. By analyzing a sufficiently large sample, a statistician found that the random vector (X, Y) has a joint normal distribution with parameters E(X) = 168 cm, Var(X) = 64 cm², E(Y) = 175 cm, Var(Y) = 100 cm², ρ(X, Y) = 0.86. (1) Determine the probability P(X > Y) that in married couples in this area a wife is taller than her spouse. (2) Determine the same probability on condition that there is no correlation between X and Y, and interpret the result in comparison to (1).

Hint If you do not want to use a statistical software package, make use of the fact that the desired probability has structure P(X > Y) = P(X + (−Y) > 0) and apply formula (4.48), page 185.

Solution (1) The correlation coefficient between X and −Y is ρ(X, −Y) = −0.86. Hence, the variance of Z = X + (−Y) = X − Y is

Var(Z) = 64 + 100 − 2 · 0.86 · 8 · 10 = 26.4.

Thus, Z = N(−7, 26.4) and

P(Z > 0) = P(X − Y > 0) = P(X > Y) = P(N(0, 1) > 7/√26.4) = P(N(0, 1) > 1.362) ≈ 0.087.

(2) In this case, Var(Z) = Var(X − Y) = 164 so that

P(Z > 0) = P(X > Y) = P(N(0, 1) > 7/√164) = P(N(0, 1) > 0.547) = 0.293.

The positive correlation concentrates X − Y around its mean value −7, so the probability of a wife being taller than her spouse is considerably smaller than in the uncorrelated case.
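A simulation sketch for Exercise 4.13 (1); the bivariate normal pair is generated from two independent standard normals by the usual conditional construction (seed and sample size arbitrary).

```python
import math
import random

# Monte Carlo estimate of P(X > Y) for the correlated heights:
# E(X) = 168, sd(X) = 8, E(Y) = 175, sd(Y) = 10, rho = 0.86.
random.seed(413)
rho = 0.86
n = 200_000
taller = 0
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    x = 168 + 8 * z1
    y = 175 + 10 * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
    if x > y:
        taller += 1
p_hat = taller / n   # compare with 1 - Phi(7/sqrt(26.4)) ~ 0.087
```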


4.14) Let A and B be independent random variables, identically distributed with the probability density (Laplace distribution or doubly exponential distribution, page 66)

f(x) = (1/2) e^{−|x|}, −∞ < x < +∞.

Determine the probability density of the sum X = A + B.

Solution The density of X is given by the convolution of f(x) with itself, i.e., by the 2nd convolution power of f(x) (formula 4.63):

f^{2∗}(x) = (f ∗ f)(x) = ∫_{−∞}^{+∞} f(x − y) f(y) dy.

Next let x ≥ 0. Then

f^{2∗}(x) = (1/4) ∫_{−∞}^{+∞} e^{−|x−y|} e^{−|y|} dy

= (1/4) ∫_{−∞}^0 e^{−(x−y)} e^{y} dy + (1/4) ∫_0^x e^{−(x−y)} e^{−y} dy + (1/4) ∫_x^∞ e^{−(y−x)} e^{−y} dy

= (1/8) e^{−x} + (1/4) x e^{−x} + (1/8) e^{−x} = (1/4) e^{−x}(1 + x), x ≥ 0.

Analogously,

f^{2∗}(x) = (1/4) e^{+x}(1 − x), x ≤ 0.

Together:

f^{2∗}(x) = (1/4) e^{−|x|}(1 + |x|), −∞ < x < +∞.
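The closed-form convolution can be verified numerically (a sketch; truncation bounds, step count, and evaluation points are arbitrary choices, and the tail error beyond ±30 is negligible).

```python
import math

# Evaluate (f*f)(x) for the Laplace density f(x) = 0.5*exp(-|x|) by the
# trapezoid rule and compare with the closed form 0.25*exp(-|x|)*(1+|x|).
def f(x):
    return 0.5 * math.exp(-abs(x))

def conv(x, a=-30.0, b=30.0, n=60_000):
    h = (b - a) / n
    s = sum(f(x - (a + k * h)) * f(a + k * h) for k in range(n + 1))
    s -= 0.5 * (f(x - a) * f(a) + f(x - b) * f(b))  # trapezoid end correction
    return h * s

max_err = max(abs(conv(x) - 0.25 * math.exp(-abs(x)) * (1 + abs(x)))
              for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
```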

4.15) Let A and B be independent random variables, identically distributed with the probability density

f(x) = (2/π) · 1/(e^x + e^{−x}) (= (1/π) · 1/cosh x), −∞ < x < +∞.

Determine the probability density of the sum X = A + B.

Solution The density of X is the convolution of f(x) with itself:

f(x) = (4/π²) ∫_{−∞}^{+∞} [1/(e^{x−y} + e^{−(x−y)})] · [1/(e^y + e^{−y})] dy

= (4/π²) ∫_{−∞}^{+∞} 1/(e^x + e^{−x} + e^{x−2y} + e^{−x+2y}) dy.

The substitution z = e^{−2y} and the notation a = e^{2x} yield

f(x) = (2/π²) e^x ∫_0^∞ 1/(az² + (a + 1)z + 1) dz.

This integral can be found in most integral tables. It leads to artanh functions. (a is a constant with regard to the integration variable z.)

CHAPTER 5 Inequalities and Limit Theorems

5.1) On average, 6% of the citizens of a large town suffer from severe hypertension. Let X be the number of people in a sample of n randomly selected citizens from this town who suffer from this disease. (1) By making use of Chebyshev's inequality find the smallest positive integer n_min with property

P(|X/n − 0.06| ≥ 0.01) ≤ 0.05 for all n with n ≥ n_min.

(2) Find a positive integer n_min satisfying this relationship by using theorem 5.6.

Solution (1) X has a binomial distribution with parameters p = 0.06 and n. Hence, Var(X) = n · 0.06 · 0.94 = 0.0564 · n and

σ² = Var(X/n) = 0.0564/n

so that by inequality (5.1)

P(|X/n − 0.06| ≥ 0.01) ≤ 0.0564/(0.01² · n).

For the right-hand side of this inequality to be less than 0.05, n must satisfy

0.0564/(0.01² · 0.05) ≤ n so that n_min = 11 280.

(2) By theorem 5.6,

(X/n − 0.06)/√(0.0564/n) ≈ N(0, 1).

Hence, P(|X/n − 0.06| ≥ 0.01) ≤ 0.05 is equivalent to

P(−0.01 ≤ X/n − 0.06 ≤ +0.01) ≥ 0.95.

From 0.01/√(0.0564/n) ≥ 1.96 it follows that n ≥ 2167.
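The two sample sizes of Exercise 5.1 can be recomputed mechanically; exact rational arithmetic (a sketch using the standard library) avoids floating-point round-off at the integer boundary.

```python
import math
from fractions import Fraction

# Smallest n from Chebyshev's inequality and from the normal approximation.
p = Fraction(6, 100)
var1 = p * (1 - p)                          # 0.0564, variance of one indicator
eps, alpha = Fraction(1, 100), Fraction(5, 100)
n_chebyshev = math.ceil(var1 / (eps ** 2 * alpha))
# 1.96 is the 0.975-percentile of the standard normal distribution
n_normal = math.ceil(Fraction(196, 100) ** 2 * var1 / eps ** 2)
```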

5.2) The measurement error X of a measuring device has mean value E(X) = 0 and variance Var(X) = 0.16. The random outcomes of n independent measurements are X₁, X₂, ..., X_n, i.e., the X_i are independent, identically as X distributed random variables. (1) By Chebyshev's inequality, determine the smallest integer n = n_min with property that the arithmetic mean of n measurements differs from E(X) = 0 by less than 0.1 with a probability of at least 0.99. (2) On the additional assumption that X is continuous with unimodal density and mode x_m = 0, solve (1) by applying the Gauss inequality (5.4). (3) Solve (1) on condition that X = N(0, 0.16).

Solution (1) Let X̄ = (1/n) Σ_{i=1}^n X_i. Then Var(X̄) = 0.16/n so that by (5.1)

P(|X̄| > 0.1) = 1 − P(|X̄| ≤ 0.1) ≤ 0.16/(0.01 · n),

0.99 ≤ 1 − 0.16/(0.01 · n) ≤ P(|X̄| ≤ 0.1).

It follows n_min = 1600.

(2) Proceeding as under (1) yields

0.99 ≤ 1 − (4 · 0.16)/(9 · 0.1² · n)... wait — with the Gauss factor 4/9,

0.99 ≤ 1 − (4/9) · 0.16/(0.01 · n) ≤ P(|X̄| ≤ 0.1).

It follows n_min = 712.

(3) Since √Var(X̄) = 0.4/√n, the inequality P(−0.1 ≤ X̄ ≤ +0.1) ≥ 0.99 becomes

P(−(0.1/0.4)√n ≤ N(0, 1) ≤ +(0.1/0.4)√n) ≥ 0.99.

From (0.1/0.4)√n = x_{0.995} = 2.576 it follows that n_min = 107.
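The three sample sizes of Exercise 5.2 can be recomputed the same way (a sketch with exact rationals; 2.576 is the 0.995-percentile used above).

```python
import math
from fractions import Fraction

var_x = Fraction(16, 100)      # Var(X) = 0.16
eps = Fraction(1, 10)          # allowed deviation 0.1
beta = Fraction(1, 100)        # allowed probability 1 - 0.99
n_cheb = math.ceil(var_x / (eps ** 2 * beta))                    # Chebyshev
n_gauss = math.ceil(Fraction(4, 9) * var_x / (eps ** 2 * beta))  # Gauss
z = Fraction(2576, 1000)       # 0.995-percentile of N(0, 1)
n_clt = math.ceil(z ** 2 * var_x / eps ** 2)                     # normal case
```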

5.3) A manufacturer of TV sets knows from past experience that 4% of his products do not pass the final quality check. (1) What is the probability that in the total monthly production of 2000 sets between 60 and 100 sets do not pass the final quality check? (2) How many sets have at least to be produced a month to make sure that at least 2000 sets pass the final quality check with probability 0.9?

Solution (1) Let X be the random number of faulty TV sets in the monthly production. Then X has a binomial distribution with parameters p = 0.04 and n = 2000. The random variable X has approximately a normal distribution with mean value E(X) = 0.04 · 2000 = 80 and variance σ² = 2000 · 0.04 · 0.96 = 76.8. (The condition (5.26) is satisfied.) Hence,

(X − 80)/√76.8 ≈ N(0, 1)

so that

P(60 ≤ X ≤ 100) = P((60 − 80)/√76.8 ≤ N(0, 1) ≤ (100 − 80)/√76.8) = Φ(2.28) − Φ(−2.28) = 2Φ(2.28) − 1 = 0.977.

(2) Let Y be the random number among the N produced sets which pass the final test. Then Y has a binomial distribution with E(Y) = 0.96 · N and Var(Y) = N · 0.96 · 0.04 = 0.0384 · N. Thus,

P(Y ≥ 2000) = P((Y − 0.96N)/√(0.0384N) ≥ (2000 − 0.96N)/√(0.0384N)) ≈ 1 − Φ((2000 − 0.96N)/√(0.0384N)) ≥ 0.9.

Equivalently,

0.1 ≥ Φ((2000 − 0.96N)/√(0.0384N)).

The 0.1-percentile of the standard normal distribution is x_{0.1} = −1.282. Hence, the smallest N satisfying

(2000 − 0.96N)/√(0.0384N) ≤ −1.282

has to be determined. It is N_min = 2096.

5.4) The daily demand for a certain medication in a country is given by a random variable X with mean value 28 packets per day and with a variance of 64. The daily demands are independent of each other and distributed as X. (1) What amount of packets should be ordered for a year with 365 days so that the total annual demand does not exceed the supply with probability 0.99? (2) Let X_i be the demand at day i = 1, 2, ..., and

X̄_n = (1/n) Σ_{i=1}^n X_i.

Determine the smallest integer n = n_min so that the probability of the occurrence of the event |X̄_n − 28| ≥ 1 does not exceed 0.05.

Solution (1) The annual random demand is

Σ_{i=1}^{365} X_i = N(10 220, 64 · 365).

Hence, the desired supply D must satisfy P(N(10 220, 64 · 365) ≤ D) ≥ 0.99 or

P(N(0, 1) ≤ (D − 10 220)/(8√365)) ≥ 0.99.

The 0.99-percentile of the standard normal distribution is x_{0.99} = 2.327 so that D must satisfy

(D − 10 220)/(8√365) ≥ 2.327.

Hence, the annual supply of this medication should be at least D = 10 576 packets to meet the requirement.

(2) Since Var(X̄_n) = 64/n, the application of Chebyshev's inequality (5.1) yields

P(|X̄_n − 28| ≥ 1) ≤ 64/n ≤ 0.05 so that n_min = 1280.

However, the application of theorem 5.6 is justified and yields a significantly lower number: Since X̄_n = N(28, 64/n),

P(|X̄_n − 28| ≥ 1) = 2[1 − Φ(√n/8)] ≤ 0.05

or 0.975 ≤ Φ(√n/8), or 1.96 ≤ √n/8, or finally (1.96 · 8)² ≤ n. Hence, n_min = 246.
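The numbers in Exercise 5.4 can be recomputed in a few lines (a sketch; 2.327 and 1.96 are the percentiles quoted in the solution).

```python
import math

# Exercise 5.4: annual supply and the two minimal sample sizes.
D = 10_220 + math.ceil(2.327 * 8 * math.sqrt(365))   # annual supply
n_cheb = math.ceil(64 / 0.05)                        # Chebyshev's inequality
n_clt = math.ceil((1.96 * 8) ** 2)                   # normal approximation
```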


5.5) According to the order, the rated nominal capacitance of condensers in a large delivery should be 300 μF. Their actual capacitances are, however, random variables X with parameters E(X) = 300 and Var(X) = 144. (1) By Chebyshev's inequality determine a lower bound for the probability of the event A that X does not differ from the rated nominal capacitance by more than 5%. (2) Under the additional assumption that X is a continuous random variable with unimodal density and mode x_m = 300, solve (1) by means of the Gauss inequality (5.4). (3) Determine the exact probability on condition that X = N(300, 144). (4) A delivery contains 600 condensers. Their capacitances are independent and identically distributed as X. The distribution of X has the same properties as stated under (2). By means of a Gauss inequality give a lower bound for the probability that the arithmetic mean of the capacitances of the condensers in the delivery differs from E(X) = 300 by less than 2.

Solution (1) P(|X − 300| > 15) = 1 − P(|X − 300| ≤ 15) ≤ 144/225. Hence,

P(|X − 300| ≤ 15) ≥ 0.3600.

(2) P(|X − 300| > 15) = 1 − P(|X − 300| ≤ 15) ≤ (4/(9 · 15²)) · 144 = 0.2844. Hence,

P(|X − 300| ≤ 15) ≥ 0.7156.

(3) P(|X − 300| ≤ 15) = 2Φ(15/12) − 1 = 2 · 0.89435 − 1 = 0.7887.

(4) P(|X̄_600 − 300| ≤ 2) ≥ 1 − (4/(9 · 2²)) · (144/600) = 0.9733.

5.6) A digital transmission channel distorts on average 1 out of 10 000 bits during transmission. The bits are transmitted independently of each other. (1) Give the exact formula for the probability of the random event A that amongst 10⁶ sent bits there are at least 80 bits distorted. (2) Determine the probability of A by approximation of the normal distribution to the binomial distribution.

Solution (1) The random number X of distorted bits among the 10⁶ has a binomial distribution with parameters p = 0.0001 and n = 10⁶. Hence,

P(A) = Σ_{i=80}^{10⁶} C(10⁶, i) (0.0001)^i (0.9999)^{10⁶−i}.

(2) E(X) = 100, Var(X) = 10⁶ · 0.0001 · 0.9999 = 99.99, √Var(X) ≈ 10. Hence,

P(A) = P(X ≥ 80) = P((X − 100)/10 ≥ (80 − 100)/10) = 1 − Φ(−2) = Φ(2) = 0.9772.

However, condition (5.26) is not fulfilled so that the value 0.9772 may be significantly different from the true one. 5.7) Solve the problem of example 2.4 (page 51) by making use of the normal approximation to the binomial distribution and compare with the exact result. The problem is: A power station supplies power to 10 bulk consumers. They use power independently of each other and in random time intervals, which, for each customer, accumulate to 20% of the calendar time. What is the probability of the random event B that at a randomly chosen time point at least seven customers use power?


Solution The random number X of consumers using power at a time point has a binomial distribution with parameters p = 0.2 and n = 10. Hence,

E(X) = 2, Var(X) = 1.6, √Var(X) = 1.265.

The probability to be calculated is

P(X ≥ 7) = P(X = 7) + P(X = 8) + P(X = 9) + P(X = 10).

Using the first line of formulas (5.24) with i₁ = 7 and i₂ = 10 gives

P(X ≥ 7) = 1 − Φ((7 − 0.5 − 2)/1.265) = 1 − Φ(3.55) ≤ 0.00001.

The normal approximation to P(X ≥ 7) = 0.000864 (page 52) is unsatisfactory. Note that condition (5.26) is far from being fulfilled so that the normal approximation is not applicable.

5.8) Solve the problem of example 2.6 (page 54) by making use of the normal approximation to the hypergeometric distribution and compare with the exact result. Problem: A customer knows that on average 4% of parts delivered by a manufacturer are defective and has accepted this percentage. To check whether the manufacturer exceeds this limit, the customer takes from each batch of 800 parts randomly a sample of size 80 and accepts the delivery if there are at most 3 defective parts in a sample. What is the probability that the customer accepts a batch which contains 50 defective parts, although it should be on average only 32 defective parts?

Solution Let X be the random number of defective parts in the sample. Since under the assumptions made, namely N = 800, M = 50, n = 80, the probabilities p_i = P(X = i) are

p_i = C(50, i) C(800 − 50, 80 − i) / C(800, 80); i = 0, 1, ..., 50.

The binomial approximation to the hypergeometric distribution is given by formula (2.41), page 57, with p = M/N = 0.0625, n = 80:

p_i ≈ C(80, i) · 0.0625^i · 0.9375^{80−i}; i = 0, 1, ..., 50,

which implies

E(X) = 5, Var(X) = 80 · 0.0625 · 0.9375 = 4.6875, √Var(X) = 2.1651.

The p₀, p₁, p₂, and p₃ are approximated by formulas (5.24), second line:

P(Z_n = i) = C(n, i) p^i (1 − p)^{n−i} ≈ Φ((i + 1/2 − np)/√(np(1 − p))) − Φ((i − 1/2 − np)/√(np(1 − p))), 0 ≤ i ≤ n,

p_i ≈ Φ((i + 1/2 − 5)/√4.6875) − Φ((i − 1/2 − 5)/√4.6875), i = 0, 1, 2, 3:

p₀ ≈ Φ(−2.078) − Φ(−2.540) = 0.0135,
p₁ ≈ Φ(−1.617) − Φ(−2.078) = 0.0342,
p₂ ≈ Φ(−1.155) − Φ(−1.617) = 0.0710,
p₃ ≈ Φ(−0.693) − Φ(−1.155) = 0.1200.

Since p₀ + p₁ + p₂ + p₃ ≈ 0.2387, the normal approximation to the hypergeometric distribution via the binomial distribution is quite close to the exact value 0.2414 (although condition (5.26) is violated).
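The exact hypergeometric acceptance probability of Exercise 5.8 can be evaluated directly with exact integer binomial coefficients (a sketch using the standard library).

```python
import math

# Exact hypergeometric P(X <= 3) with N = 800, M = 50, n = 80.
N, M, n = 800, 50, 80
denom = math.comb(N, n)
p_accept = sum(math.comb(M, i) * math.comb(N - M, n - i)
               for i in range(4)) / denom
```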


5.9) The random number of asbestos particles per 1 mm³ in the dust of an industrial area is Poisson distributed with parameter λ = 8. What is the probability that in 1 cm³ of dust there are (1) at least 10 000 asbestos particles, and (2) between 8 000 and 12 000 asbestos particles (including the bounds)?

Solution For numerical reasons, the normal approximation to the Poisson distribution is applied.

(1) The upper equation of (5.28) is applied with i₂ = ∞, i₁ = 10 000, and λ = 8 000 (the continuity correction can be neglected in view of the large figures):

P_{≥10 000} = 1 − Φ((10 000 − 8000)/√8000) = 1 − Φ(22.36) < 0.00001.

(2) The upper equation of (5.28) is applied with i₂ = 12 000 and i₁ = 8000:

P_{8000 ≤ · ≤ 12 000} = Φ((12 000 − 8000)/√8000) − Φ((8 000 − 8000)/√8000) = 1 − 1/2 = 1/2.

5.10) The number of e-mails which daily arrive at a large company is Poisson distributed with parameter λ = 22 400. What is the probability p that daily between 22 300 and 22 500 e-mails arrive?

Solution The upper equation of (5.28) is applied with i₂ = 22 500 and i₁ = 22 300:

p = Φ((22 500 − 22 400)/√22 400) − Φ((22 300 − 22 400)/√22 400) = Φ(0.668) − Φ(−0.668) = 2Φ(0.668) − 1 = 0.495.

5.11) In 1 kg of a tapping of a cast iron melt there are on average 1.2 impurities. What is the probability that in a 1000 kg tapping there are at least 1240 impurities? The spatial distribution of the impurities in a tapping is assumed to be Poisson.

Solution The upper equation of (5.28) is applied with i₂ = ∞, i₁ = 1240, and λ = 1200:

P_{≥1240} = 1 − Φ((1240 − 1200)/√1200) = 1 − Φ(1.155) = 0.1254.

5.12) After six weeks, 24 seedlings, which had been planted at the same time, reach the random heights X₁, X₂, ..., X₂₄, which are independent, exponentially distributed as X with mean value μ = 32 cm. Based on the Chebyshev inequality (5.1) and the Gaussian inequality (5.3), determine (1) upper bounds for the probability that the arithmetic mean

X̄₂₄ = (1/24) Σ_{i=1}^{24} X_i

differs from μ by more than 6 cm, (2) lower bounds for the probability that the deviation of X̄₂₄ from μ does not exceed 6 cm.

Solution (1)

E(X) = 32, Var(X) = 32² = 1024, E(X̄₂₄) = 32, Var(X̄₂₄) = 1024/24 = 42.67, x_m = 30.6.


Chebyshev inequality:

P(|X̄₂₄ − 32| ≥ 6) ≤ 42.7/36 = 1.19.

This inequality yields no information on the desired probability. (The coefficient of variation of the exponential distribution is large: V(X) = 1.)

Gaussian inequality:

P(|X̄₂₄ − 32| ≥ 6) ≤ (4/9) · (42.7 + (32 − 30.6)²)/((6 − |32 − 30.6|)²) = 0.9380.

(2) Chebyshev inequality: P(|X̄₂₄ − 32| ≤ 6) ≥ −0.19. This inequality yields no information on the desired probability. Gaussian inequality: P(|X̄₂₄ − 32| ≤ 6) ≥ 0.062.

Note The exact probability is P(|X̄₂₄ − 32| ≤ 6) = 0.6444. (X̄₂₄ has an Erlang distribution.)

5.13) Under otherwise the same notation and assumptions as in exercise 5.12, only 6 seedlings had been planted. Let

X̄₆ = (1/6) Σ_{i=1}^6 X_i.

(1) Determine by the one-sided Chebyshev inequality an upper bound for the probability P(X̄₆ > 38). (2) Determine the exact value of this probability. (Hint: Erlang distribution.)

Solution (1) Since E(X̄₆) = 32 and Var(X̄₆) = 32²/6 = 170.7, the one-sided Chebyshev inequality (page 199) becomes

P(X̄₆ > 38) = P(X̄₆ − 32 > 6) ≤ 170.7/(170.7 + 36) = 0.8258.

(2) X̄₆ has the Erlang density

f₆(x) = (3/16)⁶ (x⁵/5!) e^{−(3/16)x}, x ≥ 0.

Hence, the desired probability is

P(X̄₆ > 38) = ∫_{38}^∞ (3/16)⁶ (x⁵/5!) e^{−(3/16)x} dx = 0.2850.

This result can be obtained without integration from the distribution function of the Erlang distribution (2.72), page 75. Thus, the one-sided Chebyshev inequality contains little information on the desired probability. The normal approximation to the desired probability yields

P(X̄₆ > 38) = 1 − Φ((38 − 32)/13.064) = 1 − Φ(0.4593) ≈ 0.323.

The number n = 6 is too small to expect a more accurate result.


5.14) The continuous random variable X is uniformly distributed on [0, 2]. (1) Draw the graph of the function p(ε) = P(|X − 1| ≥ ε) in dependence on ε, 0 ≤ ε ≤ 1. (2) Compare this graph with the upper bound u(ε) for the probability P(|X − 1| ≥ ε) given by the Chebyshev inequality (5.1), 0 ≤ ε ≤ 1. (3) Try to improve the Chebyshev upper bound for P(|X − 1| ≥ ε) by the Markov upper bound (5.8) using a = 3 and a = 4.

Solution (1)

p(ε) = P(|X − 1| ≥ ε) = 1 − P(|X − 1| ≤ ε) = 1 − P(−ε ≤ X − 1 ≤ +ε) = 1 − P(1 − ε ≤ X ≤ 1 + ε) = 1 − 2ε/2.

Hence, p(ε) = 1 − ε, 0 ≤ ε ≤ 1.

(Figure to Exercise 5.14, (1) and (2): the graphs of p(ε) = 1 − ε and of the Chebyshev bound u(ε) over 0 ≤ ε ≤ 1.)

(2) Mean value and variance of X are

E(X) = 1, Var(X) = E(X²) − 1 = (1/2) ∫_0^2 x² dx − 1 = 4/3 − 1 = 1/3.

Hence, the corresponding Chebyshev bound (5.1) is

P(|X − 1| ≥ ε) ≤ u(ε) = 1 for 0 ≤ ε ≤ 1/√3, and u(ε) = 1/(3ε²) for 1/√3 ≤ ε ≤ 1.

Hence, the corresponding Chebyshev bound (5.1) is ⎧ ⎪ 1, 0 ≤ ε ≤ 1/3 , P( X − 1 ≥ ε) ≤ u(ε) = ⎨ 1 ⎪ 2 , 1/3 ≤ ε ≤ 1 . ⎩ 3⋅ε (3) For any a > 1 there is E( X − 1 a ) =

2 1 1 (1 − x) a dx + 12 1 (x − 1) a dx 2 0

∫

∫

=

1 . a+1

1 Thus, the right-hand side of (5.8) is (a+1)ε a . With increasing a the length of the interval in which

the upper bound is equal to 1 increases, but at the same time the bound becomes sharper than for smaller a if ε tends to 1.
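The three bounds of Exercise 5.14 can be compared numerically (a sketch; the evaluation grid is an arbitrary choice).

```python
# Exact probability p(eps) = 1 - eps versus the Chebyshev bound and the
# Markov bounds for a = 3 and a = 4 on a grid of eps values in (0, 1).
def p_exact(eps):
    return 1 - eps

def chebyshev(eps):
    return min(1.0, 1 / (3 * eps ** 2))

def markov(eps, a):
    return min(1.0, 1 / ((a + 1) * eps ** a))

grid = [e / 100 for e in range(1, 100)]
# every bound must dominate the exact probability
bounds_valid = all(p_exact(e) <= chebyshev(e)
                   and p_exact(e) <= markov(e, 3)
                   and p_exact(e) <= markov(e, 4)
                   for e in grid)
# near eps = 1 the Markov bound with a = 4 is the sharpest of the three
sharper_at_1 = markov(0.99, 4) < markov(0.99, 3) < chebyshev(0.99)
```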

CHAPTER 6 Basics of Stochastic Processes

6.1) A stochastic process {X(t), t > 0} has the one-dimensional distribution

F_t(x) = P(X(t) ≤ x) = 1 − e^{−(x/t)²}, x ≥ 0.

Is this process weakly stationary?

Solution Since X(t) has a Weibull distribution with parameter β = 2 (Rayleigh distribution), the trend function of {X(t), t > 0} is time-dependent:

m(t) = t Γ(3/2) = (t/2)√π.

Thus, {X(t), t > 0} cannot be (weakly) stationary.

6.2) The one-dimensional distribution of the stochastic process {X(t), t > 0} is

F_t(x) = P(X(t) ≤ x) = ∫_{−∞}^x (1/(√(2πt) σ)) e^{−(u−μt)²/(2σ²t)} du

with μ > 0, σ > 0, x ∈ (−∞, +∞). Determine its trend function m(t) and, for μ = 2 and σ = 0.5, sketch the functions

y₁(t) = m(t) + √Var(X(t)) and y₂(t) = m(t) − √Var(X(t)), 0 ≤ t ≤ 10.

Solution X(t) has a normal distribution with parameters μt and σ²t:

f_t(x) = (1/(√(2πt) σ)) e^{−(x−μt)²/(2σ²t)}; t > 0, x ∈ (−∞, +∞).

Hence,

m(t) = E(X(t)) = μt, Var(X(t)) = σ²t, t ≥ 0.

(Figure: for μ = 2, σ = 0.5 the graphs of m(t) = 2t and y_{1,2}(t) = 2t ± 0.5√t over 0 ≤ t ≤ 10.)


6.3) Let X(t) = A sin(ωt + Φ), where A and Φ are independent, nonnegative random variables with Φ being uniformly distributed over [0, 2π] and E(A²) < ∞.
(1) Determine trend-, covariance- and correlation function of {X(t), t ∈ (−∞, +∞)}.
(2) Is the stochastic process {X(t), t ∈ (−∞, +∞)} weakly and/or strongly stationary?

Solution (1)
E(X(t)) = E(A) (1/2π) ∫₀^{2π} sin(ωt + φ) dφ = E(A) (1/2π) [−cos(ωt + φ)]₀^{2π}
= E(A) (1/2π) [cos(ωt) − cos(ωt + 2π)] = 0.
Thus, the trend function m(t) = E(X(t)) is identically 0:
m(t) ≡ 0.   (i)
The process {X(t), t ∈ (−∞, +∞)} is a second order process since, for any t,
E(X²(t)) = E(A² sin²(ωt + Φ)) ≤ E(A² ⋅ 1) = E(A²) < ∞.
The covariance function C(s, t) = Cov(X(s), X(t)), s < t, is obtained as follows:
C(s, t) = E(X(s) X(t)) = E(A²) (1/2π) ∫₀^{2π} sin(ωs + φ) sin(ωt + φ) dφ.
Since (sin α)(sin β) = (1/2) [cos(β − α) − cos(β + α)],
∫₀^{2π} sin(ωs + φ) sin(ωt + φ) dφ = (1/2) ∫₀^{2π} cos(ω(t − s)) dφ − (1/2) ∫₀^{2π} cos(ω(s + t) + 2φ) dφ.
The second integral vanishes and the first one equals 2π cos(ω(t − s)). Hence,
C(s, t) = C(τ) = (1/2) E(A²) cos(ωτ), τ = t − s.   (ii)
Since Var(X(t)) = C(0) = (1/2) E(A²), the correlation function of {X(t), t ∈ (−∞, +∞)} is
ρ(τ) = C(τ)/C(0) = cos(ωτ).
(2) Since {X(t), t ∈ (−∞, +∞)} is a second order process with properties (i) and (ii), it is weakly stationary. The one-dimensional distribution of {X(t), t ∈ (−∞, +∞)} obviously depends on t. Hence, this process cannot be strongly stationary.
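The result ρ(τ) = cos(ωτ) can be checked by simulation. The sketch below is not part of the original solution; the exponential distribution for A, the value of ω, the seed, and the sample size are arbitrary choices with E(A²) < ∞:

```python
import numpy as np

# Monte Carlo check of exercise 6.3: X(t) = A sin(wt + Phi),
# Phi uniform on [0, 2pi]; theory gives rho(tau) = cos(w tau).
rng = np.random.default_rng(1)
omega, n = 2.0, 500_000
A = rng.exponential(1.0, n)              # arbitrary nonnegative A with E(A^2) < inf
phi = rng.uniform(0.0, 2 * np.pi, n)

def X(t):
    return A * np.sin(omega * t + phi)

s, t = 0.3, 1.1                          # tau = 0.8
C = np.mean(X(s) * X(t))                 # trend is 0, so C(s,t) = E[X(s)X(t)]
C0 = np.mean(X(s) ** 2)                  # = Var(X(s)) = E(A^2)/2
print(C / C0, np.cos(omega * (t - s)))
```

The empirical ratio C/C₀ should approach cos(ωτ) as the sample size grows.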

6.4) Let X(t) = A(t) sin(ω t + Φ) , where A(t) and Φ are independent, nonnegative random variables for all t, and let Φ be uniformly distributed over [0, 2π]. Verify: If {A(t), t ∈ (−∞, + ∞)} is a weakly stationary process, then {X(t), t ∈ (−∞, + ∞)} is also weakly stationary.

Solution As in the previous exercise, the trend function of {X(t), t ∈ (−∞, +∞)} is seen to be identically equal to 0. The covariance function of {X(t), t ∈ (−∞, +∞)} is (integration as in exercise 6.3)
C(s, t) = E(X(s) X(t)) = E(A(s) A(t)) (1/2π) ∫₀^{2π} sin(ωs + φ) sin(ωt + φ) dφ
= (1/2) E(A(s) A(t)) cos(ωτ), τ = t − s.
If the process {A(t), t ∈ (−∞, +∞)} is weakly stationary, then it is a second order process and the mean value E(A(s) A(t)) is only a function of τ = t − s. Hence, in this case {X(t), t ∈ (−∞, +∞)} is weakly stationary as well.


6.5) Let {a₁, a₂, ..., a_n} be a sequence of finite real numbers, and let {Φ₁, Φ₂, ..., Φ_n} be a sequence of independent random variables, which are uniformly distributed over [0, 2π]. Determine covariance function and correlation function of {X(t), t ∈ (−∞, +∞)} given by
X(t) = Σ_{i=1}^{n} a_i sin(ωt + Φ_i).

Solution From exercise 6.3, the trend function of {X(t), t ∈ (−∞, +∞)} is identically equal to 0. The covariance function of this process is
C(s, t) = E([Σ_{i=1}^{n} a_i sin(ωs + Φ_i)] [Σ_{k=1}^{n} a_k sin(ωt + Φ_k)])
= E(Σ_{i=1}^{n} Σ_{k=1}^{n} a_i a_k sin(ωs + Φ_i) sin(ωt + Φ_k))
= E(Σ_{i=1}^{n} a_i² sin(ωs + Φ_i) sin(ωt + Φ_i)),
since for i ≠ k the cross terms factorize by independence and have mean 0. Integrating as in exercise 6.3 gives covariance and correlation function of {X(t), t ∈ (−∞, +∞)}:
C(τ) = (1/2) cos(ωτ) Σ_{i=1}^{n} a_i², ρ(τ) = C(τ)/C(0) = cos(ωτ) with τ = t − s.

6.6)* A modulated signal (pulse code modulation) {X(t), t ≥ 0} is given by
X(t) = Σ_{n=0}^{∞} A_n h(t − n),
where the A_n are independent and identically distributed random variables which can only take on the values −1 and +1 and have mean value 0. Further, let
h(t) = 1 for 0 ≤ t < 1/2, and h(t) = 0 elsewhere.
1) Sketch a section of a possible sample path of the stochastic process {X(t), t ≥ 0}.
2) Determine the covariance function of this process.
3) Let Y(t) = X(t − Z), where the random variable Z has a uniform distribution over [0, 1]. Is the stochastic process {Y(t), t ≥ 0} weakly stationary?

Solution (1)
(Figure: a sample path x(t) with A₀ = −1, A₁ = +1, A₂ = +1, A₃ = −1; each pulse occupies the interval [n, n + 0.5), n = 0, 1, 2, 3.)


(2) E(A_n) = 0; n = 0, 1, ..., so that the trend function of the process {X(t), t ≥ 0} is identically 0. Since E(A_m A_n) = 0 for m ≠ n,
C(s, t) = E(Σ_{m,n=0}^{∞} A_m A_n h(s − m) h(t − n)) = E(Σ_{n=0}^{∞} A_n² h(s − n) h(t − n)).
In view of E(A_n²) = 1 and
h(s − n) h(t − n) = 1 if n ≤ s, t < n + 1/2, and 0 otherwise,
the covariance function of {X(t), t ≥ 0} is
C(s, t) = 1 if n ≤ s, t < n + 1/2 for some n = 0, 1, ..., and C(s, t) = 0 otherwise.

Thus, {X(t), t ≥ 0} is not weakly stationary.
(3) Let s ≤ t. Then the covariance function of {Y(t), t ≥ 0} is
C_Y(s, t) = E(Σ_{n=0}^{∞} A_n² h(s − Z − n) h(t − Z − n)) = Σ_{n=0}^{∞} E(h(s − Z − n) h(t − Z − n)).
If n ≤ s, t < n + 1/2, then h(s − Z − n) h(t − Z − n) = 1 if and only if 0 ≤ Z ≤ s − n.
If n + 1/2 ≤ s, t ≤ n + 1, then h(s − Z − n) h(t − Z − n) = 1 if and only if t − (n + 1/2) ≤ Z ≤ s − n.
If n ≤ s < n + 1/2, n + 1/2 ≤ t ≤ n + 1, and t − s ≤ 1/2, then h(s − Z − n) h(t − Z − n) = 1 if and only if t − (n + 1/2) ≤ Z ≤ s − n.
In all other cases, h(s − Z − n) h(t − Z − n) = 0. Hence, since Z is uniformly distributed over [0, 1],
C_Y(s, t) = s − n            if n ≤ s, t < n + 1/2,
C_Y(s, t) = 1/2 − (t − s)    if n + 1/2 ≤ s, t ≤ n + 1,
C_Y(s, t) = 1/2 − (t − s)    if n ≤ s < n + 1/2, n + 1/2 ≤ t ≤ n + 1, t − s ≤ 1/2,
C_Y(s, t) = 0                otherwise.
The stochastic process {Y(t), t ≥ 0} is not weakly stationary, since, for n ≤ s, t < n + 1/2, its covariance function depends on s and not only on t − s.

6.7) Let {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} be two independent, weakly stationary stochastic processes the trend functions of which are identically 0 and which have the same covariance function C(τ). Prove that the stochastic process {Z(t), t ∈ (−∞, +∞)} with
Z(t) = X(t) cos ωt − Y(t) sin ωt
is weakly stationary.

Solution The following derivation makes use of
sin α sin β = (1/2) [cos(β − α) − cos(α + β)], cos α cos β = (1/2) [cos(β − α) + cos(α + β)].
{Z(t), t ∈ (−∞, +∞)} has trend function m_Z(t) = E(Z(t)) ≡ 0. Hence, its covariance function is
C_Z(s, t) = E(Z(s) Z(t)) = E([X(s) cos ωs − Y(s) sin ωs] [X(t) cos ωt − Y(t) sin ωt])
= E(X(s) X(t) cos ωs cos ωt) + E(Y(s) Y(t) sin ωs sin ωt)
− E(X(s) Y(t) cos ωs sin ωt) − E(Y(s) X(t) sin ωs cos ωt).
By taking into account E(X(t)) ≡ E(Y(t)) ≡ 0, the independence of X(s) and Y(t), and C(−τ) = C(τ) with τ = t − s,
C_Z(s, t) = E(X(s) X(t)) cos ωs cos ωt + E(Y(s) Y(t)) sin ωs sin ωt
= C(τ) [cos ωs cos ωt + sin ωs sin ωt] = C(τ) cos ω(t − s) = C(τ) cos ωτ.

Thus, the second order process {Z(t), t ∈ (−∞, +∞)} is weakly stationary. 6.8) Let X(t) = sin Φt , where Φ is uniformly distributed over the interval [0, 2π] . Verify: (1) The discrete-time stochastic process {X(t) ; t = 1, 2, ...} is weakly, but not strongly stationary. (2) The continuous-time stochastic process {X(t), t > 0} is neither weakly nor strongly stationary.

Solution
E(X(t)) = (1/2π) ∫₀^{2π} sin φt dφ = (1/2π) [−(1/t) cos φt]₀^{2π} = (1/2πt) [1 − cos(2πt)], t > 0.   (i)
(1) In view of (i) and cos(2πt) = 1 for t = 1, 2, ..., the trend function of the second order stochastic process {X(t); t = 1, 2, ...} is identically equal to 0. Its covariance function is (s, t = 1, 2, ...)
C(s, t) = E([sin Φs] [sin Φt]) = (1/2π) ∫₀^{2π} [sin φs] [sin φt] dφ
= (1/4π) ∫₀^{2π} [cos φ(t − s) − cos φ(s + t)] dφ
= (1/4π) [ (1/(t − s)) sin φ(t − s) − (1/(s + t)) sin φ(s + t) ]₀^{2π}
= (1/4π) [ (1/(t − s)) sin 2π(t − s) − (1/(s + t)) sin 2π(s + t) ], s ≠ t.
Hence,
C(s, t) = 0 for τ = t − s > 0, and C(s, t) = 1/2 for τ = t − s = 0.

Thus, the second order process {X(t) ; t = 1, 2, ...} is weakly stationary. (2) The stochastic process in continuous time {X(t), t > 0} cannot be weakly or strongly stationary since, by (i), its trend function depends on t. 6.9) Let {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} be two independent stochastic processes with respective trend- and covariance functions m X (t), m Y (t) and C X (s, t), C Y (s, t). Further, let

U(t) = X(t) + Y(t) and V(t) = X(t) − Y(t), t ∈ (−∞, +∞). Determine the covariance functions C U (s, t) and C V (s, t) of the stochastic processes (1) {U(t), t ∈ (−∞, +∞)} and (2) {V(t), t ∈ (−∞, +∞)}.


Solution (1) By formula (3.38), page 135,
C_U(s, t) = Cov(U(s), U(t)) = E([X(s) + Y(s)] [X(t) + Y(t)]) − [m_X(s) + m_Y(s)] [m_X(t) + m_Y(t)]
= E(X(s) X(t)) + E(X(s) Y(t)) + E(Y(s) X(t)) + E(Y(s) Y(t))
− m_X(s) m_X(t) − m_X(s) m_Y(t) − m_Y(s) m_X(t) − m_Y(s) m_Y(t)
= C_X(s, t) + C_Y(s, t) + E(X(s) Y(t)) + E(Y(s) X(t)) − m_X(s) m_Y(t) − m_Y(s) m_X(t).
Since the processes {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} are independent,
E(X(s) Y(t)) = m_X(s) m_Y(t)   (i)
and
E(Y(s) X(t)) = m_Y(s) m_X(t).   (ii)
Hence, C_U(s, t) = C_X(s, t) + C_Y(s, t).
(2) By formula (3.38),
C_V(s, t) = Cov(V(s), V(t)) = E([X(s) − Y(s)] [X(t) − Y(t)]) − [m_X(s) − m_Y(s)] [m_X(t) − m_Y(t)]
= E(X(s) X(t)) − E(X(s) Y(t)) − E(Y(s) X(t)) + E(Y(s) Y(t))
− m_X(s) m_X(t) + m_X(s) m_Y(t) + m_Y(s) m_X(t) − m_Y(s) m_Y(t)
= C_X(s, t) + C_Y(s, t) − E(X(s) Y(t)) − E(Y(s) X(t)) + m_X(s) m_Y(t) + m_Y(s) m_X(t).

Now the desired result follows from (i) and (ii): C_V(s, t) = C_X(s, t) + C_Y(s, t).

6.10) The following table shows the annual, inflation-adjusted profits x_k of a bank in the years from 2005 to 2015 [in $10⁶]:

Year k      1 (2005)  2      3      4      5      6      7      8      9      10     11
Profit x_k  0.549     1.062  1.023  1.431  2.100  1.809  2.250  3.150  3.636  3.204  4.173

(1) Determine the smoothed values {y_k} obtained by applying M.A.(3).
(2) Based on the y_k determine the trend function (assumed to be a straight line).
(3) Draw the original time series plot, the smoothed version based on the y_k and the trend function in one and the same figure.

Solution (1) The sequence {y₂, y₃, ..., y₁₀} is determined by
y_k = (1/3) [x_{k−1} + x_k + x_{k+1}]; k = 2, 3, ..., 10,
e.g.,
y₂ = (1/3) [0.549 + 1.062 + 1.023] = 0.878,
y₃ = (1/3) [1.062 + 1.023 + 1.431] = 1.172.
The original table is supplemented by the row of the y_k:

Year        1 (2005)  2      3      4      5      6      7      8      9      10     11
Profit x_k  0.549     1.062  1.023  1.431  2.100  1.809  2.250  3.150  3.636  3.204  4.173
y_k                   0.878  1.172  1.518  1.780  2.053  2.403  3.012  3.300  3.671

(2) For determining the trend (regression line) by formulas (3.46), note that the x_k used there are now the integer-valued time points t_k = k; k = 2, 3, ..., 10. The input parameters for the estimates are
t̄ = (1/9) Σ_{k=2}^{10} t_k = 6, ȳ = (1/9) Σ_{k=2}^{10} y_k = 2.199,
Σ_{k=2}^{10} (t_k − 6)² = 60, Σ_{k=2}^{10} (y_k − ȳ)² = 7.542,
Σ_{k=2}^{10} t_k y_k − 9 ⋅ 6 ⋅ 2.199 = 139.889 − 118.746 = 21.143.
Hence, the slope is 21.143/60 = 0.3524 and the intercept is 2.199 − 0.3524 ⋅ 6 = 0.0846, so that the trend (regression line) of the time series is
y(t) = 0.3524 t + 0.0846, 2 ≤ t ≤ 10.
(3)

(Figure: the original time series (*), the smoothed time series (+), and the linear trend y(t), plotted against k = 1, ..., 11.)
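The smoothing and regression steps of exercise 6.10 can be reproduced numerically. The sketch below is only a check; the last decimals of the fitted line differ slightly from the printed values because the manual works with the rounded y_k:

```python
# Numerical check of exercise 6.10: M.A.(3) smoothing and least-squares trend.
profits = [0.549, 1.062, 1.023, 1.431, 2.100, 1.809,
           2.250, 3.150, 3.636, 3.204, 4.173]            # x_1, ..., x_11

# (1) centered moving average of order 3: y_k = (x_{k-1} + x_k + x_{k+1})/3
y = [sum(profits[k - 1:k + 2]) / 3 for k in range(1, 10)]  # y_2, ..., y_10

# (2) least-squares line through the points (k, y_k), k = 2, ..., 10
t = list(range(2, 11))
tbar, ybar = sum(t) / 9, sum(y) / 9
slope = (sum(tk * yk for tk, yk in zip(t, y)) - 9 * tbar * ybar) \
        / sum((tk - tbar) ** 2 for tk in t)
intercept = ybar - slope * tbar
print(round(y[0], 3), round(slope, 4), round(intercept, 4))
```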

6.11) The following table shows the production figures x_i of cars of a company over a time period of 12 years (in 10³):

Year i  1     2     3     4     5     6      7      8      9      10     11     12
x_i     3.08  3.40  4.00  5.24  7.56  10.68  13.72  18.36  23.20  28.36  34.68  40.44

(1) Draw a time series plot. Is the underlying trend function linear?
(2) Smooth the time series {x_i} by the Epanechnikov kernel with bandwidth [−2, +2].
(3) Smooth the time series by exponential smoothing with parameter λ = 0.6 and predict the output for year 13 by the recursive equation (6.31).

Solution (1) The trend is not linear, but progressively increasing.
(2) In this case, the smoothed time series {y_k} is given by
y_k = w_{−2} x_{k−2} + w_{−1} x_{k−1} + w₀ x_k + w₁ x_{k+1} + w₂ x_{k+2}; k = 3, 4, ..., 10,
with
w_k = (1 − k²/9) ⋅ (9/35) = (9 − k²)/35; k = 0, ±1, ±2.

This gives
y_k = (1/35) [5x_{k−2} + 8x_{k−1} + 9x_k + 8x_{k+1} + 5x_{k+2}]; k = 3, 4, ..., 10.
The numerical values are given in the table:

Year    1     2     3     4     5     6      7      8      9      10     11     12
x_i     3.08  3.40  4.00  5.24  7.56  10.68  13.72  18.36  23.20  28.36  34.68  40.44
Epan.               4.52  6.00  8.11   10.98  14.56  18.74  23.56  28.92
Expon.  3.08  3.27  3.76  4.74  6.63   9.43   12.50  16.50  21.48  26.30  32.15  38.14

(3) Since λ = 0.6, it is justified to apply formula (6.31) right from the beginning,
y_k = λ x_k + (1 − λ) y_{k−1}; k = 2, 3, ..., 12; y₁ = x₁.
For the numerical values, see the table. The predicted value for year 13 is
y₁₃ = 0.6 × 40.44 + 0.4 × 34.14 = 38.14.

6.12) Let
Y_t = 0.8 Y_{t−1} + X_t; t = 0, ±1, ±2, ...,
where {X_t; t = 0, ±1, ±2, ...} is the purely random sequence with parameters E(X_t) = 0 and Var(X_t) = 1. Determine the covariance function and sketch the correlation function of the autoregressive sequence of order 1 {Y_t; t = 0, ±1, ±2, ...}.

Solution The autoregressive sequence {Y_t; t = 0, ±1, ±2, ...} is weakly stationary since it satisfies condition (6.40) with c_i = (0.8)^i (b = 1):
Σ_{i=0}^{∞} c_i² = Σ_{i=0}^{∞} (0.8)^{2i} = 1/(1 − 0.8²) = 25/9 < ∞.
By (6.41), its covariance function is
C(τ) = Σ_{i=0}^{∞} c_i c_{|τ|+i} = (0.8)^{|τ|} Σ_{i=0}^{∞} c_i² = (25/9) (0.8)^{|τ|}; τ = 0, ±1, ±2, ...,
Var(Y_t) = C(0) = 1/(1 − 0.8²) = 25/9 for all t = 0, ±1, ±2, ... .
Hence, the correlation function is
ρ(τ) = ρ(Y_t, Y_{t+τ}) = (0.8)^{|τ|} for all τ = 0, ±1, ±2, ... .

(Figure: Correlation function for Example 6.12, ρ(τ) = 0.8^|τ|, plotted for τ = −6, ..., 6.)
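The covariance result of exercise 6.12 can be checked by simulating the recursion directly. This is only a sketch; seed, sample size and burn-in length are arbitrary choices:

```python
import random

# Simulation check of exercise 6.12: Y_t = 0.8 Y_{t-1} + X_t with white noise
# of variance 1; theory gives Var(Y_t) = 25/9 and rho(1) = 0.8.
random.seed(7)
n, burn = 400_000, 1_000
y, prev = [], 0.0
for _ in range(n + burn):
    prev = 0.8 * prev + random.gauss(0.0, 1.0)
    y.append(prev)
y = y[burn:]                      # discard the start-up transient

mean = sum(y) / n
var = sum((v - mean) ** 2 for v in y) / n
cov1 = sum((y[i] - mean) * (y[i + 1] - mean) for i in range(n - 1)) / (n - 1)
rho1 = cov1 / var
print(var, rho1)                  # compare with 25/9 = 2.778 and 0.8
```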

6.13) Let an autoregressive sequence of order 2 {Y t ; t = 0, ±1, ±2, ...} be given by

Y_t − 1.6 Y_{t−1} + 0.68 Y_{t−2} = 2 X_t; t = 0, ±1, ±2, ...,
where {X_t; t = 0, ±1, ±2, ...} is the same purely random sequence as in the previous exercise.
(1) Is the sequence {Y_t; t = 0, ±1, ±2, ...} weakly stationary?
(2) Determine its covariance and correlation function.


Solution The corresponding equation (6.49) is
y² − 1.6y + 0.68 = 0 or (y − 0.8)² = −0.04.
The solutions are conjugate complex numbers: y₁ = 0.8 + 0.2i and y₂ = 0.8 − 0.2i. To be able to apply the formulas on the top of page 251, let us write these numbers in the form
y₁ = 0.8 + 0.2i = y₀ e^{iω} and y₂ = 0.8 − 0.2i = y₀ e^{−iω}
with real numbers y₀ and ω. Then,
y₀ e^{iω} + y₀ e^{−iω} = 1.6 and y₀ e^{iω} − y₀ e^{−iω} = 0.4i.
In view of Euler's formulas,
sin ω = (e^{iω} − e^{−iω})/(2i) and cos ω = (e^{iω} + e^{−iω})/2,
the parameters ω and y₀ are given by
cos ω = 0.8/y₀, sin ω = 0.2/y₀, and sin²ω + cos²ω = (0.2/y₀)² + (0.8/y₀)² = 1,
respectively. Hence, y₀ = √0.68 so that
ω = arcsin(0.2/√0.68) ≈ arcsin 0.2425 = 0.2450.
Since y₀ = √0.68 < 1, the sequence {Y_t; t = 0, ±1, ±2, ...} is weakly stationary. Now C(τ) is given by the formulas at the top of page 251 together with C(0) given at the bottom of page 250 with b = 2.

6.14) Let an autoregressive sequence of order 2 {Y_t; t = 0, ±1, ±2, ...} be given by
Y_t − 0.8 Y_{t−1} − 0.09 Y_{t−2} = X_t; t = 0, ±1, ±2, ...,
where {X_t; t = 0, ±1, ±2, ...} is the same purely random sequence as in exercise 6.12.
(1) Check whether the sequence {Y_t; t = 0, ±1, ±2, ...} is weakly stationary. If yes, determine its covariance function and its correlation function.
(2) Sketch its correlation function and compare its graph with the one obtained in exercise 6.12.

Solution The corresponding equation (6.49) is
y² − 0.8y − 0.09 = 0 or (y − 0.4)² = 0.09 + 0.16 = 0.25.
The solutions are y₁ = 0.9, y₂ = −0.1. Since |y_i| < 1, i = 1, 2, the sequence {Y_t; t = 0, ±1, ±2, ...} with a₁ = 0.8 and a₂ = 0.09 is weakly stationary. The variance of the Y_t is given by the formula for C(0) at the bottom of page 250 with b = 1:
Var(Y_t) = C(0) = 0.91 / ((1 + 0.09) [(1 − 0.09)² − 0.8²]) = 4.4384.


The covariance function of {Y_t; t = 0, ±1, ±2, ...} is given by formula (6.50):
C(τ) = C(0) ⋅ [(1 − y₂²) y₁^{|τ|+1} − (1 − y₁²) y₂^{|τ|+1}] / [(y₁ − y₂)(1 + y₁ y₂)]; τ = 0, ±1, ±2, ... .
Hence,
C(τ) = C(0) ⋅ [(1 − 0.01) ⋅ 0.9^{|τ|+1} − (1 − 0.81) ⋅ (−0.1)^{|τ|+1}] / [(0.9 + 0.1)(1 − 0.09)]
= C(0) ⋅ [(89.1/91) ⋅ 0.9^{|τ|} + (1.9/91) ⋅ (−0.1)^{|τ|}]; τ = 0, ±1, ±2, ... .
The correlation function is
ρ(τ) = C(τ)/C(0) = (89.1/91) ⋅ 0.9^{|τ|} + (1.9/91) ⋅ (−0.1)^{|τ|}; τ = 0, ±1, ±2, ... .
Obviously, Y_{t−2} has little influence on Y_t.

(Figure: Correlation function for Example 6.14, plotted for τ = −6, ..., 6.)
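As a consistency check of exercise 6.14, the closed-form correlation function must satisfy the Yule-Walker recursion ρ(τ) = 0.8 ρ(τ−1) + 0.09 ρ(τ−2) for τ ≥ 2, because 0.9 and −0.1 are the roots of y² = 0.8y + 0.09. A short sketch:

```python
# Check of exercise 6.14: the closed-form rho(tau) should satisfy
# the Yule-Walker recursion rho(k) = 0.8 rho(k-1) + 0.09 rho(k-2).
def rho(k):
    k = abs(k)
    return (89.1 / 91) * 0.9**k + (1.9 / 91) * (-0.1)**k

assert abs(rho(0) - 1.0) < 1e-12
for k in range(2, 20):
    assert abs(rho(k) - (0.8 * rho(k - 1) + 0.09 * rho(k - 2))) < 1e-12
print("recursion verified")
```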

CHAPTER 7 Point Processes

Sections 7.1 and 7.2

7.1) The occurrence of catastrophic accidents at Sosal & Sons follows a homogeneous Poisson process {N(t), t ≥ 0} with intensity λ = 3 a year.
(1) What is the probability p_{≥2} that at least two catastrophic accidents will occur in the second half of the current year?
(2) Determine the same probability given that two catastrophic accidents have occurred in the first half of the current year.

Solution (1) The desired probability is (t = 0.5)
p_{≥2} = Σ_{n=2}^{∞} ((3 ⋅ 0.5)ⁿ/n!) e^{−3⋅0.5} = 1 − Σ_{n=0}^{1} ((3 ⋅ 0.5)ⁿ/n!) e^{−3⋅0.5}
= 1 − e^{−1.5} − 1.5 e^{−1.5} = 0.4422.
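The Poisson tail probability of part (1) can be evaluated directly (a sketch):

```python
from math import exp

# Check of exercise 7.1 (1): N(0.5) ~ Poisson(1.5), P(N(0.5) >= 2).
mu = 3 * 0.5
p_ge2 = 1 - exp(-mu) - mu * exp(-mu)
print(round(p_ge2, 4))  # 0.4422
```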

(2) In view of the 'memoryless property' of the exponential distribution, this conditional probability also is 0.4422.

7.2) By making use of the independence and homogeneity of the increments of a homogeneous Poisson process {N(t), t ≥ 0} with intensity λ, show that its covariance function is C(s, t) = λ min(s, t).

Solution Let 0 < s < t. Then
C(s, t) = Cov(N(s), N(t)) = Cov(N(s), N(s) + N(t) − N(s))
= Cov(N(s), N(s)) + Cov(N(s), N(t) − N(s)) = Var(N(s)) + 0 = λs,
which proves the assertion.

7.3) The number of cars which pass a certain intersection daily between 12:00 and 14:00 follows a homogeneous Poisson process with intensity λ = 40 per hour. Among these there are 2.2% which disregard the stop sign. The car drivers behave independently with regard to ignoring the stop sign.
(1) What is the probability p_{≥2} that at least two cars disregard the stop sign between 12:30 and 13:30?
(2) A car driver who ignores the stop sign at this intersection causes an accident there with probability 0.05. What is the probability p_{≥1} of one or more accidents at this intersection between 12:30 and 13:30, caused by a driver who ignores the stop sign?

Solution (1) By theorem 7.8, page 286, the number of cars which disregard the stop sign between 12:30 and 13:30 has a Poisson distribution with parameter λp = 40 ⋅ 0.022 = 0.88. Hence,
p_{≥2} = Σ_{n=2}^{∞} ((0.88)ⁿ/n!) e^{−0.88} = 1 − e^{−0.88} − 0.88 e^{−0.88} = 0.2202.


(2) The intensity of the flow of car drivers who cause an accident by ignoring the stop sign is 0.88 ⋅ 0.05 = 0.044. Hence, the desired probability is
p_{≥1} = Σ_{n=1}^{∞} ((0.044)ⁿ/n!) e^{−0.044} = 1 − e^{−0.044} = 0.0430.

7.4) A Geiger counter is struck by radioactive particles according to a homogeneous Poisson process with intensity λ = 1 per 12 seconds. On average, the Geiger counter only records 4 out of 5 particles.
(1) What is the probability p_{≥2} that the Geiger counter records at least 2 particles a minute?
(2) What are mean value and variance of the random time Y between the occurrence of two successively recorded particles?

Solution (1) By theorem 7.8, the number of recorded particles per minute has a Poisson distribution with parameter 5λp = 5 ⋅ 1 ⋅ 0.8 = 4. Hence,
p_{≥2} = Σ_{n=2}^{∞} (4ⁿ/n!) e^{−4} = 1 − e^{−4} − 4 e^{−4} = 0.9084.
(2) Y [min] has an exponential distribution with parameter 5λp = 4 [min⁻¹]. Hence,
E(Y) = 1/4 [min], Var(Y) = 1/16 [min²].

7.5) The location of trees in an even, rectangular forest stand of size 200m × 500m follows a homogeneous Poisson distribution with intensity 1 per 25m². The diameter of the stems of all trees at a height of 130cm above the ground is assumed to be 24cm. From outside, a shot is fired perpendicularly at a 500m side of the forest stand (parallel to the ground at level 130cm). What is the probability that a bullet with diameter 1cm hits no tree?
Note With regard to the question, the location of a tree is fully determined by the coordinates of the center of the cross-section of its stem at level 130cm.

Solution The bullet touches no tree if there is an open strip in the forest, exactly in the direction of the shot, which has a length of 200m and a width of 26cm; i.e., there is no tree in an area of 200 × 0.26 = 52m² (independently of the shape of this area). Hence, the desired probability is
p₀ = e^{−52/25} = 0.1249.
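Since the number of tree centers in the 52 m² strip is Poisson with mean 52/25, the answer of exercise 7.5 can be checked by simulation. The sketch below uses an arbitrary seed and sample size:

```python
import numpy as np

# Simulation check of exercise 7.5: number of trees in the empty strip is
# Poisson(52/25), so P(bullet hits no tree) = exp(-52/25).
rng = np.random.default_rng(3)
trees_in_strip = rng.poisson(52 / 25, 1_000_000)
p_empirical = (trees_in_strip == 0).mean()
print(p_empirical, np.exp(-52 / 25))
```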

7.6) An electronic system is subject to two types of shocks which arrive independently of each other according to homogeneous Poisson processes with respective intensities λ₁ = 0.002 and λ₂ = 0.01 per hour. A shock of type 1 always causes a system failure, whereas a shock of type 2 causes a system failure with probability 0.4. What is the probability of the event A that the system fails within 24 hours due to a shock?

Solution The random time Y to the occurrence of the first type 1-shock has an exponential distribution with parameter λ₁:
P(Y ≤ t) = 1 − e^{−0.002t}, t ≥ 0.
The formula of total probability applied to the complementary event Ā that there is no failure due to a shock within 24 hours gives
P(Ā) = P(Ā | Y ≤ 24) P(Y ≤ 24) + P(Ā | Y > 24) P(Y > 24).
Since P(Ā | Y ≤ 24) = 0 and, by theorem 7.8, the time to the first occurrence of a 'deadly' type 2-shock has an exponential distribution with parameter λ₂p = 0.01 ⋅ 0.4 = 0.004,
P(Ā) = e^{−0.004⋅24} ⋅ e^{−0.002⋅24} = e^{−0.144} = 0.8659.
Hence, the desired probability is P(A) = 1 − 0.8659 = 0.1341.

7.7) A system is subjected to shocks of types 1, 2, and 3, which are generated by independent homogeneous Poisson processes with respective intensities per hour λ₁ = 0.2, λ₂ = 0.3, and λ₃ = 0.4. A type 1-shock causes a system failure with probability 1, a type 2-shock causes a system failure with probability 0.4, and a shock of type 3 causes a system failure with probability 0.2. The shocks occur permanently, whether the system is operating or not.
(1) On condition that three shocks arrive in the interval [0, 10h], determine the probability that the system does not experience a failure in this interval.
(2) What is the (unconditional) probability that the system fails in [0, 10h] due to a shock?

Solution (1) The desired probability is first calculated on condition that there is no type 1-shock in [0, 10h]. The probabilities p₂ and p₃ that a shock is of type 2 or type 3, respectively, can be calculated from their intensities (since homogeneous Poisson processes are stationary):
p₂ = 0.3/(0.3 + 0.4) = 3/7, p₃ = 1 − p₂ = 4/7.

There are four constellations which make sure that there is no system failure in [0, 10h]:
a) 3 type 2-shocks, 0 type 3-shocks. Probability: (3/7)³ = 27/343. Survival probability: (0.6)³.
b) 2 type 2-shocks, 1 type 3-shock. Probability: 3 (3/7)² (4/7) = 108/343. Survival probability: (0.6)² ⋅ 0.8.
c) 1 type 2-shock, 2 type 3-shocks. Probability: 3 (3/7) (4/7)² = 144/343. Survival probability: 0.6 ⋅ (0.8)².
d) 0 type 2-shocks, 3 type 3-shocks. Probability: (4/7)³ = 64/343. Survival probability: (0.8)³.
Survival probability on condition 'no type 1-shock in [0, 10h]':
0.0170 + 0.0907 + 0.1612 + 0.0955 = 0.3644.
Probability of 'no type 1-shock in [0, 10h]': e^{−λ₁⋅10} = 0.1353.
Thus, the desired probability is 0.3644 ⋅ 0.1353 = 0.0493.
(2) Type 1-shocks arrive with intensity 0.2, 'deadly' type 2-shocks arrive with intensity 0.3 ⋅ 0.4 = 0.12, and 'deadly' type 3-shocks arrive with intensity 0.4 ⋅ 0.2 = 0.08 (all per hour). Hence, the probability that no 'deadly' shock arrives in [0, 10h] is
e^{−2} ⋅ e^{−1.2} ⋅ e^{−0.8} = e^{−4} = 0.0183,
so that the probability that the system fails in [0, 10h] due to a shock is 1 − e^{−4} = 0.9817.

7.8) Claims arrive at a branch of an insurance company according to a homogeneous Poisson process with an intensity of λ = 0.4 per working hour. The claim size Z has an exponential distribution so that 80% of the claim sizes are below $100 000, whereas 20% are equal to or larger than $100 000.


(1) What is the probability that the fourth claim does not arrive in the first two working hours of a day?
(2) What is the mean size of a claim?
(3) Determine approximately the probability that the sum of the sizes of 10 consecutive claims exceeds $800 000.

Solution (1) The time T₄ to the arrival of the 4th claim has an Erlang distribution with distribution function
F_{T₄}(t) = 1 − e^{−0.4t} [1 + 0.4t + (0.4t)²/2 + (0.4t)³/6], t ≥ 0.
With t = 2, the desired probability is
1 − F_{T₄}(2) = e^{−0.8} [1 + 0.8 + (0.8)²/2 + (0.8)³/6] = 0.9909.
(2) By assumption, P(Z ≤ z) = 1 − e^{−μz}, z ≥ 0. Therefore,
P(Z ≤ 100 000) = 1 − e^{−μ⋅100 000} = 0.8 or −μ ⋅ 100 000 = ln 0.2 = −1.609438,
so that E(Z) = 1/μ = 62 133.
(3) By the central limit theorem 5.6 (page 208),
C₁₀ = Σ_{i=1}^{10} Z_i ≈ N(621 330, 10 ⋅ 62 133²).
Hence,
P(C₁₀ ≥ 800 000) ≈ P(N(0, 1) ≥ (800 000 − 621 330)/√(10 ⋅ 62 133²)) = 1 − Φ(0.9093) = 0.1814.

7.9) Consider two independent homogeneous Poisson processes 1 and 2 with respective intensities λ₁ and λ₂. Determine the mean value of the random number of events of process 2 (type 2-events) which occur between any two successive events of process 1 (type 1-events).

Solution Let Y be the random length of a time interval between two neighboring type 1-events, and N₂ be the random number of type 2-events occurring in this interval. Then, on condition Y = t,
E(N₂ | Y = t) = Σ_{n=0}^{∞} n ((λ₂t)ⁿ/n!) e^{−λ₂t} = λ₂t.
Hence,
E(N₂) = ∫₀^∞ E(N₂ | Y = t) λ₁ e^{−λ₁t} dt = ∫₀^∞ (λ₂t) λ₁ e^{−λ₁t} dt = λ₂ ∫₀^∞ t λ₁ e^{−λ₁t} dt = λ₂/λ₁.

7.10) Let {N(t), t ≥ 0} be a homogeneous Poisson process with intensity λ. Prove that for an arbitrary, but fixed positive h the stochastic process {X(t), t ≥ 0} defined by X(t) = N(t + h) − N(t) is weakly stationary.
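The three computations of exercise 7.8 can be verified numerically. The sketch below is only a check; its last decimals differ slightly from the manual's rounded intermediate values:

```python
from math import exp, log, erf, sqrt, factorial

# Check of exercise 7.8: Erlang tail, exponential mean, CLT approximation.
lam, t = 0.4, 2.0
# (1) P(T_4 > 2): Erlang(4, 0.4) survival function at t = 2
p1 = exp(-lam * t) * sum((lam * t) ** k / factorial(k) for k in range(4))
# (2) mean claim size from P(Z <= 100000) = 0.8
EZ = 100_000 / -log(0.2)
# (3) normal approximation for the sum of 10 claims
z = (800_000 - 10 * EZ) / (sqrt(10) * EZ)
p3 = 0.5 * (1 - erf(z / sqrt(2)))          # 1 - Phi(z)
print(round(p1, 4), round(EZ), round(p3, 4))
```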


Solution 1) Since X(t) has a Poisson distribution with parameter E(X(t)) = λh, its second moment is E(X²(t)) = (λh + 1)λh < ∞ for all t. Hence, {X(t), t ≥ 0} is a second order process.
2) The trend function of {X(t), t ≥ 0} is constant:
E(X(t)) = E(N(t + h) − N(t)) = λ(t + h) − λt = λh.
3) The covariance function C(s, t) of {X(t), t ≥ 0} can be written as
C(s, t) = Cov(X(s), X(t)) = Cov(N(s + h) − N(s), N(t + h) − N(t))
= Cov(N(s + h), N(t + h)) − Cov(N(s + h), N(t)) − Cov(N(s), N(t + h)) + Cov(N(s), N(t)).   (i)
From exercise 7.2,
Cov(N(s), N(t)) = λs for s ≤ t.   (ii)
a) Let 0 ≤ s < t and s + h ≤ t. Then, from (i) and (ii),
C(s, t) = λ(s + h) − λ(s + h) − λs + λs = 0.
b) Let 0 ≤ s < t and t < s + h. Then, from (i) and (ii), letting τ = t − s,
C(s, t) = λ(s + h) − λt − λs + λs = λ(h − τ).
Or, by making use of the independence of the increments of a Poisson process (consider the points 0 ≤ s ≤ t ≤ s + h ≤ t + h),
C(s, t) = Cov([N(t) − N(s)] + [N(s + h) − N(t)], [N(s + h) − N(t)] + [N(t + h) − N(s + h)])
= 0 + 0 + Var(N(s + h) − N(t)) + 0 = λ(s + h − t) = λ(h − τ).
By combining a) and b),
C(s, t) = C(τ) = 0 for h ≤ |τ|, and C(s, t) = C(τ) = λ(h − |τ|) for h > |τ|.
Hence, the stochastic process {X(t), t ≥ 0} is weakly stationary.

7.11) Let {N(t), t ≥ 0} be a homogeneous Poisson process with intensity λ and {T₁, T₂, ...} be the associated random point process, i.e., T_i is the time point at which the i-th Poisson event occurs. For t → ∞, determine and sketch the covariance function C(τ) of the (stochastic) shot noise process {X(t), t ≥ 0} given by
X(t) = Σ_{i=1}^{N(t)} h(t − T_i) with h(t) = sin t for 0 ≤ t ≤ π, and h(t) = 0 elsewhere.

Solution By formula (7.33), page 272, with 0 ≤ |τ| < π,
C(τ) = λ ∫₀^∞ h(x) h(|τ| + x) dx = λ ∫₀^{π−|τ|} (sin x) sin(|τ| + x) dx
= (λ/2) ∫₀^{π−|τ|} [cos τ − cos(2x + |τ|)] dx
= (λ/2) [(π − |τ|) cos τ − ∫₀^{π−|τ|} cos(2x + |τ|) dx]
= (λ/2) ((π − |τ|) cos τ − (1/2) [sin(2π − |τ|) − sin |τ|])
= (λ/2) [(π − |τ|) cos τ − cos π ⋅ sin(π − |τ|)].
Hence,
C(τ) = (λ/2) [(π − |τ|) cos τ + sin(π − |τ|)] for 0 ≤ |τ| < π, and C(τ) = 0 elsewhere.

If L has a gamma distribution with density
f_L(λ) = (β^α/Γ(α)) λ^{α−1} e^{−βλ}; α > 0, β > 0; λ > 0,
then E(L) = α/β, Var(L) = α/β².   (ii)

Hence, by comparing (i) and (ii),
L₁: 1 = α/β, 0.5 = α/β² → α = β = 2,
L₂: 0.5 = α/β, 0.125 = α/β² → α = 2, β = 4.

The respective probabilities that no type 1-shocks and no type 2-shocks occur in [0, 2] are given by the negative binomial distribution (7.58), page 281, with i = 0, t = 2:

P(N_{L₁}(2) = 0) = (2/(2 + 2))² = 1/4, P(N_{L₂}(2) = 0) = (4/(4 + 2))² = (2/3)².
In view of the assumed independence of the two 'shock-generating processes', the desired probability p is

p = 1 − P(N_{L₁}(2) = 0) ⋅ P(N_{L₂}(2) = 0) = 1 − (1/4) ⋅ (2/3)² = 0.8888.

7.17)* Prove the multinomial criterion (7.55), page 280, on condition that L has density f_L(λ).

Solution To prove is: If {N_L(t), t ≥ 0} is a mixed Poisson process, then, for all n = 1, 2, ..., all vectors (t₁, t₂, ..., t_n) with 0 = t₀ < t₁ < t₂ < ... < t_n, and for any nonnegative integers i₁, i₂, ..., i_n satisfying i₁ + i₂ + ... + i_n = i,
P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n | N_L(t_n) = i) =

(i!/(i₁! i₂! ⋅⋅⋅ i_n!)) (t₁/t_n)^{i₁} ((t₂ − t₁)/t_n)^{i₂} ⋅⋅⋅ ((t_n − t_{n−1})/t_n)^{i_n}.
Note that in view of i₁ + i₂ + ... + i_n = i,
P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n; N_L(t_n) = i) = P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n).
Hence,
P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n; N_L(t_n) = i)
= ∫₀^∞ P(N_λ(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n) f_L(λ) dλ
= ∫₀^∞ Π_{k=1}^{n} P(N_λ(t_{k−1}, t_k) = i_k) f_L(λ) dλ
= ∫₀^∞ Π_{k=1}^{n} ( ([(t_k − t_{k−1}) λ]^{i_k} / i_k!) e^{−(t_k − t_{k−1}) λ} ) f_L(λ) dλ
= (i!/(i₁! i₂! ⋅⋅⋅ i_n!)) Π_{k=1}^{n} ((t_k − t_{k−1})/t_n)^{i_k} ∫₀^∞ ((λ t_n)^i / i!) e^{−λ t_n} f_L(λ) dλ.
Since
P(N_L(t_n) = i) = ∫₀^∞ ((λ t_n)^i / i!) e^{−λ t_n} f_L(λ) dλ,
the desired result is obtained as follows:
P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n | N_L(t_n) = i)
= P(N_L(t_{k−1}, t_k) = i_k; k = 1, 2, ..., n; N_L(t_n) = i) / P(N_L(t_n) = i)
= (i!/(i₁! i₂! ⋅⋅⋅ i_n!)) Π_{k=1}^{n} ((t_k − t_{k−1})/t_n)^{i_k}.

Sections 7.3 and 7.4
Note The exercises 7.20 to 7.31 refer to ordinary renewal processes. The functions f(t) and F(t) denote density and distribution function of the cycle length Y, the parameters μ and μ₂ are mean value and second moment of Y. N(t) is the (random) renewal counting function and H(t) denotes the corresponding renewal function H(t) = E(N(t)).

7.18) An insurance company has a premium income of $106 080 a day. The claim sizes are iid random variables and have an exponential distribution with variance 4 ⋅ 10⁶ [$²]. On average, 2 claims arrive per hour according to a homogeneous Poisson process. The time horizon is assumed to be infinite.
(1) What probability distribution have the interarrival times between two neighboring claims?
(2) Calculate the company's ruin probability if its initial capital is x₀ = $20 000.
(3) What minimal initial capital should the company have to make sure that its ruin probability does not exceed 0.01?

Solution (1) The distribution function of the random interarrival time Y is F(t) = 1 − e^{−λt}, t ≥ 0, with parameter λ = 2 [h⁻¹] and mean value μ = E(Y) = 1/λ = 0.5 [h].
(2) Mean claim size and premium income per hour are ν = E(M) = √(4 ⋅ 10⁶) = $2000 and κ = $106 080/24 = 4420 $/h so that
α = (μκ − ν)/(μκ) = 0.0950.
Under these assumptions, by formula (7.82), page 297, the ruin probability is
p(x₀) = (1 − α) e^{−(α/ν) x₀} = p(20 000) = (1 − 0.0950) e^{−(0.0950/2000) ⋅ 20 000} = 0.3500.
(3) An initial capital x₀ = x_min is to be determined with the property
p(x_min) = (1 − 0.0950) e^{−(0.0950/2000) ⋅ x_min} ≤ 0.01
or, equivalently,
e^{−(0.0950/2000) ⋅ x_min} ≤ 0.0110497.
By taking the logarithm on both sides of this inequality it follows that x_min = $94 850.
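The ruin-probability computation of exercise 7.18 can be checked as follows. This is only a sketch; using the unrounded α, the decimals differ marginally from the printed values:

```python
from math import exp, log

# Check of exercise 7.18: exponential claims, Poisson arrivals.
kappa = 106_080 / 24                       # premium income per hour, 4420 $/h
mu = 0.5                                   # mean interarrival time [h]
nu = 2_000.0                               # mean claim size (= sqrt of variance)
alpha = (mu * kappa - nu) / (mu * kappa)   # safety-loading ratio, approx 0.0950

def ruin(x0):
    # formula (7.82): p(x0) = (1 - alpha) exp(-alpha x0 / nu)
    return (1 - alpha) * exp(-alpha * x0 / nu)

x_min = -nu / alpha * log(0.01 / (1 - alpha))
print(round(ruin(20_000), 4), round(x_min))
```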


7.19) Parmod is setting up an insurance policy for low-class cars (homogeneous portfolio) over an infinite time horizon. Based on previous statistical work, he expects that claims will arrive according to a homogeneous Poisson process with intensity λ = 0.8 [h⁻¹], and that the claim sizes will be iid distributed as an exponentially distributed random variable M with mean value ν = E(M) = $3000. He reckons with a total premium income of $2800 [h⁻¹].
(1) Given that these assumptions are correct, has Parmod a chance to be financially successful with this portfolio over an infinite time period?
(2) What is the minimal initial capital x₀ Parmod has to invest to make sure that the lower bound for the survival probability of this portfolio derived from the Lundberg inequality is 0.96?
(3) For the sake of comparison, determine the exact value of the survival probability of this company for an initial capital of x₀/3.

Solution (1) The relevant parameters are μ = 1/λ = 1.25 [h], ν = $3000, and κ = $2800 [h⁻¹]. Hence, the corresponding safety loading is
σ = μκ − ν = 1.25 ⋅ 2800 − 3000 = $500.
Since σ > 0, Parmod has a chance to experience no ruin in the long run.
(2) From the Lundberg inequality,
q(x₀) = 1 − p(x₀) ≥ 1 − e^{−r x₀} ≥ 0.96.
Since r = α/ν (page 298), this inequality becomes with α = σ/(μκ) = 500/(1.25 ⋅ 2800) = 0.142857
e^{−(0.142857/3000) x₀} ≤ 0.04 or −(0.142857/3000) x₀ ≤ ln 0.04.
Hence, x₀ ≥ $67 600.

7.20) The lifetime L of a system has a Weibull distribution with distribution function
F(t) = P(L ≤ t) = 1 − e^{−t³}, t ≥ 0.
(1) Determine its failure rate λ(t) and its integrated failure rate Λ(t).
(2) The system is maintained according to Policy 1 (page 290, bottom) over an infinite time span. The cost of a minimal repair is c_m = $40, and the cost of a preventive replacement is c_p = $2000. Determine the cost-optimal replacement interval τ* and the corresponding minimal maintenance cost rate K₁(τ*).

Solution (1) By formulas (2.98), page 88,
Λ(t) = t³, λ(t) = 3t², t ≥ 0.
(2) The corresponding maintenance cost rate is (page 291, top)
K₁(τ) = (2000 + 40τ³)/τ.
An optimal τ = τ* satisfies the equation dK₁(τ)/dτ = 0: 2τ³ = c_p/c_m = 50. The solution of this equation and the corresponding minimal maintenance cost rate are
τ* = 2.924, K₁(τ*) = 1025.97 [$/h].

7.21) A system is maintained according to Policy 3 (page 292, top) over an infinite time span. It has the same lifetime distribution and minimal repair cost parameter as in exercise 7.20. As with exercise 7.20, let c_r = $2000.
(1) Determine the optimal integer n = n* and the corresponding maintenance cost rate K₃(n*).
(2) Compare to K₁(τ*) (exercise 7.20) and try to intuitively explain the result.


SOLUTIONS MANUAL

Solution The maintenance cost rate is (page 292, top)
K_3(n) = [2000 + 40 ⋅ (n − 1)] / E(T_n),
where the mean time point E(T_n) of the n th failure (replacement) is given by formula (7.46). Following example 7.9 (page 292), the optimal n = n* is the smallest integer satisfying (β = 3)
3n − [n − 1 + 50] ≥ 0.
It follows that n* = 25, so that the optimal mean replacement cycle length is
E(T_25) = ∫_0^∞ e^{−t³} ( Σ_{i=0}^{24} t^{3i}/i! ) dt = 2.911.
Summarizing:
n* = 25, K_3(n*) = (2000 + 40 ⋅ 24)/2.911 = 1016.82 [$/h].
In view of K_3(n*) < K_1(τ*), policy 3 is in this case more cost efficient than policy 1. However, this may not be true in practical applications since, with regard to policy 3, the times of replacements are not known in advance, which leads to higher replacement costs.

Section 7.3 and 7.4
Note Exercises 7.22 to 7.31 refer to ordinary renewal processes. The functions f(t) and F(t) denote density and distribution function, the parameters μ and μ_2 are mean value and second moment of the cycle length Y. N(t) is the (random) renewal counting function, and H(t) denotes the corresponding renewal function.
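The numbers in exercises 7.20 and 7.21 are easy to check numerically. The sketch below (helper names are ours; Simpson's rule is used for the integral defining E(T_25)) recomputes τ*, K_1(τ*), E(T_25), and K_3(n*):

```python
import math

# Policy 1 (exercise 7.20): K1(tau) = (2000 + 40*tau^3)/tau,
# minimized where 2*tau^3 = c_p/c_m = 50.
tau_star = 25.0 ** (1.0 / 3.0)
K1_star = (2000.0 + 40.0 * tau_star ** 3) / tau_star

# Policy 3 (exercise 7.21): E(T_25) = int_0^inf e^{-t^3} sum_{i=0}^{24} t^{3i}/i! dt.
def survival_T25(t):
    """P(T_25 > t): a Poisson(t^3) variable takes a value <= 24."""
    lam = t ** 3
    term = math.exp(-lam)
    total = term
    for i in range(1, 25):
        term *= lam / i
        total += term
    return total

# composite Simpson's rule on [0, 6]; the integrand is negligible beyond t = 6
n, a, b = 3000, 0.0, 6.0
h = (b - a) / n
ET25 = survival_T25(a) + survival_T25(b)
for k in range(1, n):
    ET25 += (4 if k % 2 else 2) * survival_T25(a + k * h)
ET25 *= h / 3.0

K3_star = (2000.0 + 40.0 * 24) / ET25
```

The run reproduces τ* ≈ 2.924, K_1(τ*) ≈ 1026, E(T_25) ≈ 2.911, and K_3(n*) ≈ 1016.8 < K_1(τ*).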

7.22) A system starts working at time t = 0. Its lifetime has approximately a normal distribution with mean value μ = 120 and standard deviation σ = 24 [hours]. After a failure, the system is replaced by an equivalent new one in negligible time and immediately resumes its work. What is the smallest number of spare systems which must be available in order to be able to maintain the replacement process over an interval of length 10 000 hours (1) with a probability of not less than 0.90, (2) with a probability of not less than 0.99?

Solution (1) If Y_1, Y_2, ... denote the lengths of the successive renewal cycles, then the smallest n has to be determined which satisfies
P(Y_1 + Y_2 + ... + Y_n ≥ 10 000) ≥ 0.9.
Since Y_i = N(120, 24²), this is equivalent to
1 − Φ((10 000 − 120n)/(24√n)) ≥ 0.9 or Φ((10 000 − 120n)/(24√n)) ≤ 0.1.
Since the 0.9-percentile of the standardized normal distribution is 1.28, the latter inequality is equivalent to
1.28 ≤ (120n − 10 000)/(24√n).   (i)
The smallest n satisfying this inequality is n = 86.
(2) Since the 0.99-percentile of the standardized normal distribution is 2.32, condition (i) has to be replaced with
2.32 ≤ (120n − 10 000)/(24√n).
Hence, n = 88.
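The search for the smallest n can be done directly (sketch with our own helper names; Φ is evaluated via the error function):

```python
import math

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def smallest_n(p_target, mu=120.0, sigma=24.0, horizon=10_000.0):
    """Smallest n with P(Y_1 + ... + Y_n >= horizon) >= p_target,
    where the sum is approximately N(n*mu, n*sigma^2)."""
    n = 1
    while True:
        z = (horizon - n * mu) / (sigma * math.sqrt(n))
        if 1.0 - Phi(z) >= p_target:
            return n
        n += 1

n90 = smallest_n(0.90)   # part (1)
n99 = smallest_n(0.99)   # part (2)
```

This confirms n = 86 for probability 0.90 and n = 88 for probability 0.99.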

7 POINT PROCESSES


7.23) (1) Use the Laplace transformation to find the renewal function H(t) of an ordinary renewal process whose cycle lengths have an Erlang distribution with parameters n = 2 and λ.
(2) For λ = 1, sketch the exact graph of the renewal function and the bounds (7.117) in the interval 0 ≤ t ≤ 6. (Make sure that the bounds (7.117) are applicable.)

Solution (1) Distribution function and density of an Erlang distribution with parameters n = 2 and λ are
F(t) = 1 − e^{−λt} − λt e^{−λt},  f(t) = λ² t e^{−λt}, t ≥ 0.
Hence, the integral equation (7.102) for the renewal density h(t) = H′(t) is
h(t) = λ² t e^{−λt} + ∫_0^t h(t − x) λ² x e^{−λx} dx.
Since L{t e^{−λt}} = 1/(s + λ)², the Laplace transform of f(t) is
f̂(s) = λ²/(s + λ)².
Hence, by the second formula of (7.104), the Laplace transform of h(t) is
ĥ(s) = [λ²/(s + λ)²] / [1 − λ²/(s + λ)²] = λ²/(s² + 2λs) = A/s + B/(s + 2λ).
Multiplying the right-hand side of this equation by s(s + 2λ) yields
λ² = A(s + 2λ) + B s.
Comparing the powers of s on both sides of this equation gives A = −B and A = λ/2. Hence,
ĥ(s) = (λ/2)(1/s) − (λ/2) ⋅ 1/(s + 2λ).
Retransformation yields h(t), and integration of h(t) yields the renewal function H(t):
h(t) = λ/2 − (λ/2) e^{−2λt},  H(t) = (1/2)[λt + (1/2) e^{−2λt} − 1/2].
(2) The failure rate of the cycle length is
λ(t) = f(t)/(1 − F(t)) = λ² t e^{−λt}/[(1 + λt) e^{−λt}] = λ² t/(1 + λt), t ≥ 0.
Thus, λ(t) is increasing so that the bounds (7.117) are applicable. With
∫_0^t (1 − F(x)) dx = ∫_0^t [e^{−λx} + λx e^{−λx}] dx = (2/λ)(1 − e^{−λt}) − t e^{−λt}, t ≥ 0,
and λ = 1, the bounds (7.117) are
t/(2 − (2 + t) e^{−t}) − 1 ≤ H(t) ≤ t[1 − (1 + t) e^{−t}]/(2 − (2 + t) e^{−t}), t > 0.
The figure compares these bounds on the renewal function to the exact graph of
H(t) = (1/2)[t + (1/2) e^{−2t} − 1/2].

[Figure (Example 7.23): exact renewal function H(t) and the bounds (7.117) on 0 ≤ t ≤ 6]
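In place of the sketch, the exact H(t) and the two bounds can be compared numerically (a small check with our own helper names, λ = 1):

```python
import math

def H(t):
    """Exact renewal function for Erlang(2, 1) cycle lengths."""
    return 0.5 * (t + 0.5 * math.exp(-2.0 * t) - 0.5)

def int_surv(t):
    """int_0^t (1 - F(x)) dx = 2(1 - e^{-t}) - t e^{-t} for lambda = 1."""
    return 2.0 * (1.0 - math.exp(-t)) - t * math.exp(-t)

# verify lower <= H(t) <= upper on a grid of points in (0, 6]
for t in [0.5, 1.0, 2.0, 4.0, 6.0]:
    lower = t / int_surv(t) - 1.0
    upper = t * (1.0 - (1.0 + t) * math.exp(-t)) / int_surv(t)
    assert lower <= H(t) <= upper
```

At t = 6, for example, the bounds are about 2.03 ≤ H(6) = 2.75 ≤ 2.98.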

7.24) An ordinary renewal process has the renewal function H(t) = t/10. Determine the probability P(N(10) ≥ 2).

Solution If an ordinary renewal process has renewal function H(t) = t/10, then by example 7.12 its cycle lengths have an exponential distribution with parameter λ = 1/10, i.e., its counting process is the homogeneous Poisson process with intensity λ = 1/10. Hence, the desired probability is
P(N(10) ≥ 2) = 1 − Σ_{n=0}^{1} [(10/10)^n/n!] e^{−10/10} = 1 − 2e^{−1} = 0.2642.

7.25) A system is preventively replaced by an identical new one at time points τ, 2τ, ... . If failures happen in between, then the failed system is replaced by an identical new one as well. The latter replacement actions are called emergency replacements. This replacement policy is called block replacement. The costs for preventive and emergency replacements are c_p and c_e, respectively. The lifetime L of a system is assumed to have distribution function
F(t) = (1 − e^{−λt})², t ≥ 0.
(1) Determine the renewal function H(t) of the ordinary renewal process with cycle length distribution function F(t).
(2) Based on the renewal reward theorem (7.148), give a formula for the long-run maintenance cost rate K(τ) under the block replacement policy.
(3) Determine an optimal τ = τ* with regard to K(τ) for λ = 0.1, c_e = 900, c_p = 100.
(4) Under otherwise the same assumptions, determine the cost rate if the system is only replaced after failures, and compare it with the one obtained under (3).

Solution (1) Density and Laplace transform of L are given by
f(t) = 2λ(e^{−λt} − e^{−2λt}),  f̂(s) = 2λ²/((s + λ)(s + 2λ)).


From (7.102), the Laplace transform of the corresponding renewal density is
ĥ(s) = 2λ²/(s(s + 3λ)).
By making use of property (7.120) of the Laplace transformation, the preimage of ĥ(s) is seen to be
h(t) = (2λ/3)(1 − e^{−3λt}), t ≥ 0.
Integration yields the renewal function
H(t) = (2λ/3)[t + (1/(3λ))(e^{−3λt} − 1)].
(2) The mean cost per replacement cycle is c_p + c_e H(τ), and τ is the (constant) length of a replacement cycle. Hence, by (7.148), the long-run maintenance cost rate under block replacement is
K(τ) = (c_p + c_e H(τ))/τ.
The optimal replacement interval τ = τ* satisfies the equation dK(τ)/dτ = 0 or, equivalently,
(1 + 3λτ) e^{−3λτ} = 1 − 4.5 ⋅ c_p/c_e.
Thus, an optimal τ = τ* can only exist if 4.5 c_p < c_e.
(3) With λ = 0.1, c_e = 900, and c_p = 100, the maintenance cost rate specializes to
K(τ) = [100 + 60 (τ + (10/3)(e^{−0.3τ} − 1))]/τ,
and an optimal replacement interval τ = τ* must satisfy
(1 + 0.3τ) e^{−0.3τ} = 1 − 4.5 ⋅ (100/900) = 0.5.
Since 4.5 c_p = 450 < c_e = 900, the necessary condition is fulfilled. Numerical solution of this equation yields
τ* ≈ 5.59, K(τ*) ≈ 48.8.

7.26) Given the existence of the first three moments of the cycle length Y of an ordinary renewal process, verify the formulas (7.112).

Solution Formulas (7.112) are
E(S) = (μ² + σ²)/(2μ) and E(S²) = μ₃/(3μ)
with
F_S(t) = P(S ≤ t) = (1/μ) ∫_0^t (1 − F(x)) dx.
The mean value of S is obtained by applying formula (2.52), Dirichlet's formula (page 101), and partial integration:
E(S) = (1/μ) ∫_0^∞ ∫_t^∞ (1 − F(x)) dx dt = (1/μ) ∫_0^∞ x (1 − F(x)) dx = (1/μ) ⋅ μ₂/2.
The desired result follows from μ₂ = μ² + σ². Moreover, by partial integration,
E(S²) = ∫_0^∞ x² (1/μ)(1 − F(x)) dx = (1/μ)[x³(1 − F(x))/3]_0^∞ + (1/(3μ)) ∫_0^∞ x³ f(x) dx.
In view of lim_{x→∞} x³(1 − F(x)) = 0, this is the desired result.
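Formulas (7.112) can also be sanity-checked for a concrete cycle-length distribution. As an illustration, take Y Erlang with shape 2 and rate 1 (so μ = 2, μ₂ = 6, μ₃ = 24), integrate the density (1/μ)(1 − F(x)) of S numerically, and compare with μ₂/(2μ) = 1.5 and μ₃/(3μ) = 4 (the step size below is our own choice):

```python
import math

mu, mu2, mu3 = 2.0, 6.0, 24.0   # first three moments of Y ~ Erlang(2, 1)

def surv(x):
    """1 - F(x) for the Erlang(2, 1) distribution."""
    return math.exp(-x) * (1.0 + x)

# density of S is (1/mu)*surv(x); integrate x*f_S and x^2*f_S with the
# trapezoidal rule on [0, 60] (the tail beyond is negligible)
h, N = 0.001, 60_000
ES = ES2 = 0.0
for k in range(N + 1):
    x = k * h
    w = 0.5 if k in (0, N) else 1.0
    fs = surv(x) / mu
    ES += w * x * fs * h
    ES2 += w * x * x * fs * h
```

The quadrature reproduces E(S) = 1.5 and E(S²) = 4 to several digits.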


7.27)* (1) Verify that the probability p(t) = P(N(t) is odd) satisfies the integral equation
p(t) = F(t) − ∫_0^t p(t − x) f(x) dx,  f(x) = F′(x).
(2) Determine p(t) if the cycle lengths are exponential with parameter λ.

Solution (1) Obviously, p(t) = 0 if N(t) = 0, i.e., if T_1 > t. Let T_1 ≤ t. Then, given that the first renewal occurs at T_1 = x,
p(t | T_1 = x) = 1 − p(t − x),
since, in order to have an odd number of renewals in (0, t], the number of renewals in (x, t] must be even. (Note that 1 − p(t) is the probability for the occurrence of an even number of renewals in (0, t].) Hence,
p(t) = ∫_0^t [1 − p(t − x)] f(x) dx = F(t) − ∫_0^t p(t − x) f(x) dx.   (i)
(2) If F(t) = 1 − e^{−λt}, t ≥ 0, then (i) becomes
p(t) = 1 − e^{−λt} − ∫_0^t p(t − x) λ e^{−λx} dx.   (ii)
Applying the Laplace transform to (ii) gives
p̂(s) = 1/s − 1/(s + λ) − p̂(s) ⋅ λ/(s + λ).
Solving for p̂(s) yields
p̂(s) = λ/(s(s + 2λ)).
Hence, by making use of formula (2.120), page 100,
p(t) = λ ∫_0^t e^{−2λx} dx = (1/2)(1 − e^{−2λt}), t > 0.
Note that p(t) → 1/2 as t → ∞, as must be the case for the probability of an odd number of events of a homogeneous Poisson process.
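For the homogeneous Poisson process, P(N(t) odd) = e^{−λt} sinh(λt) = (1/2)(1 − e^{−2λt}), which a short simulation confirms (illustrative parameters, seeded for reproducibility):

```python
import math
import random

random.seed(42)
lam, t, trials = 1.0, 1.0, 200_000

odd = 0
for _ in range(trials):
    # count renewals of a rate-lam Poisson process in (0, t]
    n, s = 0, random.expovariate(lam)
    while s <= t:
        n += 1
        s += random.expovariate(lam)
    odd += n % 2

p_hat = odd / trials
p_exact = 0.5 * (1.0 - math.exp(-2.0 * lam * t))   # = 0.4323 for lam = t = 1
```

The simulated frequency agrees with the closed form to within Monte Carlo error.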

7.28)* Verify that H_2(t) = E(N²(t)) satisfies the integral equation
H_2(t) = 2H(t) − F(t) + ∫_0^t H_2(t − x) f(x) dx.

Solution
H_2(t) = Σ_{n=1}^∞ n² P(N(t) = n) = Σ_{n=1}^∞ n² [F_{T_n}(t) − F_{T_{n+1}}(t)]
= Σ_{n=1}^∞ [(n − 1)² + 2n − 1][F_{T_n}(t) − F_{T_{n+1}}(t)]
= Σ_{n=1}^∞ (n − 1)² [F_{T_n}(t) − F_{T_{n+1}}(t)] + 2H(t) − F(t),
where the last step uses Σ_{n≥1} n[F_{T_n}(t) − F_{T_{n+1}}(t)] = Σ_{n≥1} F_{T_n}(t) = H(t) and F_{T_1} = F. Moreover,
∫_0^t H_2(t − x) f(x) dx = ∫_0^t Σ_{n=0}^∞ n² [F_{T_n}(t − x) − F_{T_{n+1}}(t − x)] f(x) dx
= Σ_{n=0}^∞ n² ∫_0^t [F_{T_n}(t − x) − F_{T_{n+1}}(t − x)] f(x) dx
= Σ_{n=0}^∞ n² [F_{T_{n+1}}(t) − F_{T_{n+2}}(t)]
= Σ_{n=1}^∞ (n − 1)² [F_{T_n}(t) − F_{T_{n+1}}(t)].
This proves the assertion.


7.29) The time intervals between the arrivals of successive particles at a counter generate an ordinary renewal process. After having recorded 10 particles, the counter is blocked for τ time units. Particles arriving during a blocked period are not registered. What is the distribution function of the time from the end of a blocked period to the arrival of the first particle after this period if τ → ∞?

Solution At the end of the blocked period, the underlying renewal process has reached its stationary phase if τ is sufficiently large. Hence, by theorem 7.18, the distribution function asked for is
F_S(t) = (1/μ) ∫_0^t (1 − F(x)) dx, t ≥ 0.

7.30) The cycle length distribution of an ordinary renewal process is given by the distribution function F(t) = P(Y ≤ t) = 1 − e^{−t²}, t ≥ 0 (Rayleigh distribution).
(1) What is the statement of theorem 7.13 if g(x) = (x + 1)^{−2}, x ≥ 0?
(2) What is the statement of theorem 7.15?

Solution Since Y has a Rayleigh distribution,
μ = E(Y) = √π/2,  μ₂ = E(Y²) = 1,  σ² = Var(Y) = 1 − π/4.
(1) Since
∫_0^∞ g(x) dx = ∫_0^∞ (x + 1)^{−2} dx = 1,
theorem 7.13 yields
lim_{t→∞} ∫_0^t (t − x + 1)^{−2} dH(x) = (1/μ) ∫_0^∞ (x + 1)^{−2} dx = 2/√π.
(2) lim_{t→∞} (H(t) − t/μ) = (1/2)(σ²/μ² − 1) = 2/π − 1.

7.31) Let A(t) be the forward and B(t) the backward recurrence time of an ordinary renewal process at time t. Determine functional relationships between F(t) and the conditional probabilities
(1) P(A(t) > y − t | B(t) = t − x), 0 ≤ x < t < y,
(2) P(A(t) ≤ y | B(t) = x), 0 ≤ x < t, y > 0.

Solution (1) Given B(t) = t − x, the last renewal before t occurred at time x, so the running cycle length Y exceeds t − x; the event A(t) > y − t then means Y > y − x. Hence,
P(A(t) > y − t | B(t) = t − x) = (1 − F(y − x))/(1 − F(t − x)).
(2) Given B(t) = x, the last renewal occurred at time t − x and Y > x; the event A(t) ≤ y means Y ≤ x + y. Hence,
P(A(t) ≤ y | B(t) = x) = (F(x + y) − F(x))/(1 − F(x)).


7.32) Let (Y, Z) be the typical cycle of an alternating renewal process, where Y and Z have an Erlang distribution with joint parameter λ and parameters n = 2 and n = 1, respectively. For t → ∞, determine the probability that the system is in state 1 at time t and that it stays in this state over the entire interval [t, t + x], x > 0. (Process states as introduced in section 7.3.6.)

Solution To determine is the stationary interval reliability A_x given by theorem 7.19, page 322:
A_x = (1/(E(Y) + E(Z))) ∫_x^∞ (1 − F_Y(u)) du.
By assumption,
F_Y(t) = 1 − e^{−λt} − λt e^{−λt},  F_Z(t) = 1 − e^{−λt}, t ≥ 0.
Hence,
E(Y) = 2/λ, E(Z) = 1/λ,  ∫_x^∞ (1 − F_Y(u)) du = ∫_x^∞ (e^{−λu} + λu e^{−λu}) du = (1/λ)(2 + λx) e^{−λx}.
Thus,
A_x = (1/3)(2 + λx) e^{−λx}.

7.33) The time intervals between successive repairs of a system generate an ordinary renewal process {Y_1, Y_2, ...} with typical cycle length Y. The costs of repairs are mutually independent, independent of {Y_1, Y_2, ...}, and identically distributed as M. Y and M have parameters
μ = E(Y) = 180 [days], √Var(Y) = 30, ν = E(M) = $200, √Var(M) = 40.
Determine approximately the probabilities that
(1) the total repair costs arising in [0, 3600 days] do not exceed $4500,
(2) a total repair cost of $3000 is not exceeded before 2200 days.

Solution (1) Let C(3600) be the total repair cost in [0, 3600] and γ be defined by (7.155):
γ = √((180 ⋅ 40)² + (200 ⋅ 30)²) = 9372.3.
Then, by making use of theorem 7.20 (formula (7.154)), the desired probability is
P(C(3600) ≤ 4500) ≈ Φ((4500 − (200/180) ⋅ 3600)/(180^{−3/2} ⋅ 9372.3 ⋅ √3600)).
Hence,
P(C(3600) ≤ 4500) ≈ Φ(2.15) = 0.9841.
(2) Let L(3000) be the first passage time of the total repair cost with regard to level $3000. By theorem 7.21 (formula (7.159)), the desired probability is approximately given by
P(L(3000) > 2200) = 1 − Φ((2200 − (180/200) ⋅ 3000)/(200^{−3/2} ⋅ 9372.3 ⋅ √3000))
= 1 − Φ(−2.75) = 0.9970.
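Both normal approximations of exercise 7.33 can be recomputed in a few lines (helper names are ours):

```python
import math

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sd_Y = 180.0, 30.0     # cycle length Y [days]
nu, sd_M = 200.0, 40.0     # repair cost M [$]
gamma = math.sqrt((mu * sd_M) ** 2 + (nu * sd_Y) ** 2)   # formula (7.155)

# (1) P(C(3600) <= 4500), with C(t) approx N((nu/mu) t, mu^-3 gamma^2 t)
t = 3600.0
z1 = (4500.0 - nu / mu * t) / (mu ** -1.5 * gamma * math.sqrt(t))
p1 = Phi(z1)

# (2) P(L(3000) > 2200), with L(x) approx N((mu/nu) x, nu^-3 gamma^2 x)
x = 3000.0
z2 = (2200.0 - mu / nu * x) / (nu ** -1.5 * gamma * math.sqrt(x))
p2 = 1.0 - Phi(z2)
```

This reproduces z1 ≈ 2.15, p1 ≈ 0.9841 and z2 ≈ −2.75, p2 ≈ 0.9970.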


7.34) (1) Determine the ruin probability p(x) of an insurance company with an initial capital of x = $20 000 and operating parameters 1/μ = 2 [h⁻¹], ν = $800, and κ = 1700 [$/h].
(2) Under otherwise the same conditions, draw the graphs of the ruin probability for x = $20 000 and for x = 0 in dependence on κ over the interval 1600 ≤ κ ≤ 1800.
(3) With the numerical parameters given under (1), determine the upper bound e^{−rx} for p(x) given by the Lundberg inequality (7.85).
(4) Under otherwise the same conditions, draw the graph of e^{−rx} with x = 20 000 in dependence on κ over the interval 1600 ≤ κ ≤ 1800 and compare with the corresponding graph from (2).
Note For problems (1) to (4), the model assumptions made in example 7.10 apply.

Solution (1) By formulas (7.79) and (7.82), page 297,
α = (μκ − ν)/(μκ),  p(x) = (1 − α) e^{−(α/ν)x}.
Hence, with the numerical parameters given,
α = (0.5 ⋅ 1700 − 800)/(0.5 ⋅ 1700) = 1/17,
p(20 000) = (16/17) e^{−20 000/(17 ⋅ 800)} = 0.2163.
(2) For fixed initial capital x, the ruin probability is now written as a function of κ:
p(κ | x = 20 000) = (1600/κ) e^{−25(1 − 1600/κ)},  p(κ | x = 0) = 1600/κ;  1600 ≤ κ ≤ 1800.

The figure shows the graphs of p(κ | x = 20 000), p(κ | x = 0), and the Lundberg bound over 1600 ≤ κ ≤ 1800.
[Figure: ruin probabilities and Lundberg bound as functions of κ]

(3) By formula (7.84), page 297, the Lundberg coefficient r satisfies the equation
∫_0^∞ e^{ry} (1/ν) e^{−y/ν} dy = ∫_0^∞ (1/ν) e^{−(1/ν − r)y} dy = (1/ν)/((1/ν) − r) = μκ/ν.
The solution is
r = (1/ν)(μκ − ν)/(μκ) = α/ν.
Therefore, the Lundberg inequality (7.85) is
p(x) ≤ e^{−x/13 600}, x ≥ 0.
(4) The Lundberg bound (7.85) as a function of κ on condition x = 20 000 is (see figure):
p(κ | x = 20 000) ≤ e^{−(25 − 40 000/κ)}, 1600 ≤ κ ≤ 1800.
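The exact ruin probability and the Lundberg bound of exercise 7.34 can be recomputed directly:

```python
import math

mu, nu, kappa = 0.5, 800.0, 1700.0   # 1/mu = 2 claims per hour
x = 20_000.0

alpha = (mu * kappa - nu) / (mu * kappa)    # relative safety loading = 1/17
ruin = (1.0 - alpha) * math.exp(-alpha / nu * x)   # exact ruin probability

r = alpha / nu                               # Lundberg coefficient = 1/13600
bound = math.exp(-r * x)                     # Lundberg upper bound
```

As expected, the exact value p(20 000) ≈ 0.2163 lies below the Lundberg bound e^{−20 000/13 600} ≈ 0.2298.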


7.35)* Under otherwise the same assumptions as made in example 7.10, determine the ruin probability p(x) if the random claim size M has density b(y) = λ² y e^{−λy}, λ > 0, y ≥ 0.
Note In this exercise the parameter λ is not the intensity of the homogeneous Poisson claim process. The intensity of the claim process is here denoted as 1/μ, so that μ = E(Y) is the mean value of the random time Y between the occurrence of neighboring claims.

Solution The claim size M has an Erlang distribution with parameters λ and n = 2 (see page 75). Hence, the mean claim size is
ν = 2/λ.   (i)
As usual, the following assumption has to be made:
α = (κμ − ν)/(κμ) > 0.   (ii)
The Laplace transform of b(y) is (see table 2.5, page 105)
b̂(s) = (λ/(s + λ))².
Inserting b̂(s) in formula (7.78), page 296, gives the Laplace transform of the corresponding survival probability:
q̂(s) = (q(0)/s) ⋅ κμ(s + λ)²/(κμ(s + λ)² − s − 2λ) = (q(0)/s)[1 + (s + 2λ)/(κμ(s + λ)² − s − 2λ)].
q̂(s) can be represented as
q̂(s) = q(0)/s + (q(0)/(κμ)) ⋅ (s + 2λ)/(s(s + s_1)(s + s_2)),
where
s_1 = (1/(2κμ))[2λκμ − 1 − √(4λκμ + 1)],  s_2 = (1/(2κμ))[2λκμ − 1 + √(4λκμ + 1)],
or, for the sake of more convenient retransformation, as
q̂(s) = q(0)/s + (q(0)/(κμ)) ⋅ 1/((s + s_1)(s + s_2)) + (2λ q(0)/(κμ)) ⋅ 1/(s(s + s_1)(s + s_2)).
Taking into account (i), (ii), and
s_2 − s_1 = (1/(κμ)) √(4λκμ + 1),  s_1 s_2 = λ²α = (2λ/ν)α,
retransformation yields (use table 2.5, decompose into partial fractions, or use the formulas given at the end of this exercise)
q(x) = q(0){1 + (1/√(4λκμ + 1))(e^{−s_1 x} − e^{−s_2 x}) + ((1 − α)/α)[1 + (κμ/√(4λκμ + 1))(s_1 e^{−s_2 x} − s_2 e^{−s_1 x})]}.
Note that by (i) and (ii), 0 < s_1 < s_2. Hence, the condition q(∞) = 1 yields
1 = q(0)(1 + (1 − α)/α) or q(0) = α.
Thus, the desired survival probability is
q(x) = 1 − (1/√(4λκμ + 1)) [α(e^{−s_2 x} − e^{−s_1 x}) + ν(s_2 e^{−s_1 x} − s_1 e^{−s_2 x})].
Hint
L{e^{−s_1 x} − e^{−s_2 x}} = (s_2 − s_1)/((s + s_1)(s + s_2)),
L{(s_2 − s_1) + s_1 e^{−s_2 x} − s_2 e^{−s_1 x}} = s_1 s_2 (s_2 − s_1)/(s(s + s_1)(s + s_2)).
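The closed-form survival probability can be tested against a simulation of the risk process. The sketch below uses illustrative parameters of our own choosing (λ = 1, μ = 1, κ = 3, hence ν = 2 and α = 1/3); the finite simulation horizon approximates "no ruin ever", which is adequate here because the surplus drifts upward:

```python
import math
import random

random.seed(7)
lam, mu, kappa = 1.0, 1.0, 3.0          # claim-size parameter, E(Y), premium rate
nu = 2.0 / lam                          # mean claim size (Erlang(2, lam))
alpha = (kappa * mu - nu) / (kappa * mu)

root = math.sqrt(4.0 * lam * kappa * mu + 1.0)
s1 = (2.0 * lam * kappa * mu - 1.0 - root) / (2.0 * kappa * mu)
s2 = (2.0 * lam * kappa * mu - 1.0 + root) / (2.0 * kappa * mu)

def q(x):
    """Survival probability from the closed-form solution."""
    return 1.0 - (alpha * (math.exp(-s2 * x) - math.exp(-s1 * x))
                  + nu * (s2 * math.exp(-s1 * x) - s1 * math.exp(-s2 * x))) / root

def q_mc(x, paths=10_000, horizon=200.0):
    """Monte Carlo estimate of the survival probability."""
    survived = 0
    for _ in range(paths):
        t, u = 0.0, x
        while True:
            dt = random.expovariate(1.0 / mu)   # time to next claim
            t += dt
            if t > horizon:
                survived += 1
                break
            # premium income minus an Erlang(2, lam) claim
            u += kappa * dt - (random.expovariate(lam) + random.expovariate(lam))
            if u < 0.0:
                break
    return survived / paths

qmc0 = q_mc(0.0)
qmc2 = q_mc(2.0)
```

With these parameters q(0) = α = 1/3 and q(2) ≈ 0.560, both of which the simulation reproduces within Monte Carlo error.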

7.36) Claims arrive at an insurance company as an ordinary renewal process {Y_1, Y_2, ...}. The corresponding claim sizes M_1, M_2, ... are independent, identically distributed as M, and independent of {Y_1, Y_2, ...}. Let the Y_i be distributed as Y; i.e., Y is the typical interarrival interval and (Y, M) is the typical interarrival cycle. From historical observations it is known that
μ = E(Y) = 1 [h], Var(Y) = 0.25, ν = E(M) = $800, Var(M) = 250 000.
Give approximate answers to the following problems:
(1) What minimum premium per unit time κ_0.99 does the insurance company have to take in so that it will make a profit of at least $10^6 within 20 000 hours with probability α = 0.99?
(2) What is the probability that the total claim reaches level $10^5 within 135 h?
Note Before possibly reaching its goals, the insurance company may have experienced one or more ruins with subsequent 'red number periods'.

Solution (1) The random profit in [0, 20 000] is κ_0.99 ⋅ 20 000 − C(20 000). Hence, κ_0.99 must satisfy
P(κ_0.99 ⋅ 20 000 − C(20 000) ≥ 10^6) = 0.99.
By formula (7.156), page 328, C(t) has the asymptotic distribution
C(t) ≈ N((ν/μ) t, μ^{−3} γ² t) as t → ∞ with γ² = μ² Var(M) + ν² Var(Y).
Hence,
γ = √(250 000 + 800² ⋅ 0.25) = 640.3,  C(20 000) ≈ N(800 ⋅ 20 000, 640.3² ⋅ 20 000).
Now κ_0.99 can be determined as follows:
P(κ_0.99 ⋅ 20 000 − C(20 000) ≥ 10^6) = P(C(20 000) ≤ κ_0.99 ⋅ 20 000 − 10^6)
= Φ((κ_0.99 ⋅ 20 000 − 10^6 − 800 ⋅ 20 000)/(640.3 ⋅ √20 000)) = 0.99.
Since the 0.99-percentile of the standard normal distribution is z_0.99 = 2.32, the last equation is equivalent to
(κ_0.99 ⋅ 20 000 − 10^6 − 800 ⋅ 20 000)/(640.3 ⋅ √20 000) = 2.32 or (κ_0.99 − 850)/(640.3/√20 000) = 2.32.
Thus, κ_0.99 = 860.5 [$/h].


(2) According to theorem 7.21 (formula (7.159), page 330), the first passage time L(x) of the compound stochastic process {C(t), t ≥ 0} with regard to level x has an asymptotic normal distribution given by
L(x) ≈ N((μ/ν) x, ν^{−3} γ² x) as x → ∞.
Hence,
P(L(10^5) ≤ 135) ≈ Φ((135 − (1/800) ⋅ 10^5)/(640.3 ⋅ √(800^{−3} ⋅ 10^5))) = Φ(1.118) = 0.868.

CHAPTER 8 Discrete-Time Markov Chains

8.1) A Markov chain {X_0, X_1, ...} has state space Z = {0, 1, 2} and transition matrix
P =
( 0.5  0    0.5 )
( 0.4  0.2  0.4 )
( 0    0.4  0.6 )

(1) Determine P(X_2 = 2 | X_1 = 0, X_0 = 1) and P(X_2 = 2, X_1 = 0 | X_0 = 1).
(2) Determine P(X_2 = 2, X_1 = 0 | X_0 = 0) and, for n > 1, P(X_{n+1} = 2, X_n = 0 | X_{n−1} = 0).
(3) Assuming the initial distribution P(X_0 = 0) = 0.4, P(X_0 = 1) = P(X_0 = 2) = 0.3, determine P(X_1 = 2) and P(X_1 = 1, X_2 = 2).

Solution (1) P(X_2 = 2 | X_1 = 0, X_0 = 1) = P(X_2 = 2 | X_1 = 0) = p_02 = 0.5,
P(X_2 = 2, X_1 = 0 | X_0 = 1) = p_10 p_02 = 0.4 ⋅ 0.5 = 0.2.
(2) P(X_2 = 2, X_1 = 0 | X_0 = 0) = p_00 p_02 = 0.5 ⋅ 0.5 = 0.25,
P(X_{n+1} = 2, X_n = 0 | X_{n−1} = 0) = p_00 p_02 = 0.25 (homogeneity).
(3) P(X_1 = 2) = p_0^{(0)} p_02 + p_1^{(0)} p_12 + p_2^{(0)} p_22 = 0.4 ⋅ 0.5 + 0.3 ⋅ 0.4 + 0.3 ⋅ 0.6 = 0.5,
P(X_1 = 1, X_2 = 2) = p_0^{(0)} p_01 p_12 + p_1^{(0)} p_11 p_12 + p_2^{(0)} p_21 p_12
= 0.4 ⋅ 0 ⋅ 0.4 + 0.3 ⋅ 0.2 ⋅ 0.4 + 0.3 ⋅ 0.4 ⋅ 0.4 = 0.072.

8.2) A Markov chain {X_0, X_1, ...} has state space Z = {0, 1, 2} and transition matrix

P =
( 0.2  0.3  0.5 )
( 0.8  0.2  0   )
( 0.6  0    0.4 )
(1) Determine the matrix of the 2-step transition probabilities P^{(2)}.
(2) Given the initial distribution P(X_0 = i) = 1/3, i = 0, 1, 2, determine the probabilities P(X_2 = 0) and P(X_0 = 0, X_1 = 1, X_2 = 2).

Solution (1)
P^{(2)} = P ⋅ P = ((p_ij^{(2)})) =
( 0.58  0.12  0.30 )
( 0.32  0.28  0.40 )
( 0.36  0.18  0.46 )
(2) P(X_2 = 0) = p_0^{(0)} p_00^{(2)} + p_1^{(0)} p_10^{(2)} + p_2^{(0)} p_20^{(2)} = (1/3)(0.58 + 0.32 + 0.36) = 0.42,
P(X_0 = 0, X_1 = 1, X_2 = 2) = p_0^{(0)} p_01 p_12 = (1/3) ⋅ 0.3 ⋅ 0 = 0.
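The matrix products in exercises 8.1 and 8.2 are quickly verified with a small helper of our own:

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# transition matrix of exercise 8.2
P = [[0.2, 0.3, 0.5],
     [0.8, 0.2, 0.0],
     [0.6, 0.0, 0.4]]
P2 = matmul(P, P)                           # 2-step transition matrix

p0 = [1 / 3, 1 / 3, 1 / 3]                  # uniform initial distribution
prob_X2_0 = sum(p0[i] * P2[i][0] for i in range(3))
prob_path = p0[0] * P[0][1] * P[1][2]       # P(X0 = 0, X1 = 1, X2 = 2)
```

This reproduces P^{(2)} as given, P(X_2 = 0) = 0.42, and the path probability 0 (since p_12 = 0).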


8.3) A Markov chain {X_0, X_1, ...} has state space Z = {0, 1, 2} and transition matrix
P =
( 0    0.4  0.6 )
( 0.8  0    0.2 )
( 0.5  0.5  0   )
(1) Given the initial distribution P(X_0 = 0) = P(X_0 = 1) = 0.4 and P(X_0 = 2) = 0.2, determine the probability P(X_3 = 2).
(2) Draw the corresponding transition graph.
(3) Determine the stationary distribution.

Solution (1) The matrix of the 2-step transition probabilities is
P^{(2)} = P² =
( 0.62  0.30  0.08 )
( 0.10  0.42  0.48 )
( 0.40  0.20  0.40 )
and the matrix of the 3-step transition probabilities is
P^{(3)} = P ⋅ P^{(2)} =
( 0.280  0.288  0.432 )
( 0.576  0.280  0.144 )
( 0.360  0.360  0.280 )
The desired probability is
P(X_3 = 2) = p_0^{(0)} p_02^{(3)} + p_1^{(0)} p_12^{(3)} + p_2^{(0)} p_22^{(3)}
= 0.4 ⋅ 0.432 + 0.4 ⋅ 0.144 + 0.2 ⋅ 0.280 = 0.2864.

(2) [Transition graph: 0 → 1 (0.4), 0 → 2 (0.6); 1 → 0 (0.8), 1 → 2 (0.2); 2 → 0 (0.5), 2 → 1 (0.5)]

(3) According to (8.9), the stationary distribution {π_0, π_1, π_2} satisfies
π_0 = 0.8 π_1 + 0.5 π_2
π_1 = 0.4 π_0 + 0.5 π_2
1 = π_0 + π_1 + π_2.
The solution is π_0 = 0.3947, π_1 = 0.3070, π_2 = 0.2982.

8.4) Let {Y_0, Y_1, ...} be a sequence of independent, identically distributed binary random variables with P(Y_i = 0) = P(Y_i = 1) = 1/2; i = 0, 1, ... Define a sequence of random variables {X_1, X_2, ...} by X_n = (1/2)(Y_n − Y_{n−1}); n = 1, 2, ...; and check whether {X_1, X_2, ...} has the Markov property.


Solution Consider, for instance, the transition probability
P(X_3 = 1/2 | X_2 = 0) = P((1/2)(Y_3 − Y_2) = 1/2 | (1/2)(Y_2 − Y_1) = 0).
Since the events {X_3 = 1/2} = {Y_2 = 0, Y_3 = 1} and {Y_1 = Y_2 = 1} are disjoint,
P(X_3 = 1/2 | X_2 = 0) = P(X_3 = 1/2 | Y_1 = Y_2 = 0 ∪ Y_1 = Y_2 = 1)
= P({X_3 = 1/2} ∩ {Y_1 = Y_2 = 0 ∪ Y_1 = Y_2 = 1}) / P(Y_1 = Y_2 = 0 ∪ Y_1 = Y_2 = 1)
= P(Y_1 = Y_2 = 0 ∩ Y_3 = 1) / P(Y_1 = Y_2 = 0 ∪ Y_1 = Y_2 = 1).
By assumption, Y_1, Y_2, and Y_3 are independent. Hence,
P(X_3 = 1/2 | X_2 = 0) = (1/8)/(1/2) = 1/4.
Now consider
P(X_3 = 1/2 | X_2 = 0, X_1 = 1/2) = P((1/2)(Y_3 − Y_2) = 1/2 | (1/2)(Y_2 − Y_1) = 0, (1/2)(Y_1 − Y_0) = 1/2).
The conditions (1/2)(Y_2 − Y_1) = 0 and (1/2)(Y_1 − Y_0) = 1/2 imply that Y_2 = 1. Hence, X_3 = 1/2 cannot hold, so that
P(X_3 = 1/2 | X_2 = 0, X_1 = 1/2) = 0.

Thus, the random sequence {X_1, X_2, ...} does not have the Markov property.

8.5) A Markov chain {X_0, X_1, ...} has state space Z = {0, 1, 2, 3} and transition matrix
P =
( 0.1  0.2  0.4  0.3 )
( 0.2  0.3  0.1  0.4 )
( 0.4  0.1  0.3  0.2 )
( 0.3  0.4  0.2  0.1 )

(1) Draw the corresponding transition graph. (2) Determine the stationary distribution of this Markov chain.

Solution (2) By (8.9), the stationary distribution is the solution of the following system of linear algebraic equations:
π_0 = 0.1 π_0 + 0.2 π_1 + 0.4 π_2 + 0.3 π_3
π_1 = 0.2 π_0 + 0.3 π_1 + 0.1 π_2 + 0.4 π_3
π_2 = 0.4 π_0 + 0.1 π_1 + 0.3 π_2 + 0.2 π_3
1 = π_0 + π_1 + π_2 + π_3.
The solution is π_i = 0.25; i = 0, 1, 2, 3.
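The uniform stationary distribution (the matrix above is doubly stochastic) can be confirmed by power iteration (our own sketch):

```python
P = [[0.1, 0.2, 0.4, 0.3],
     [0.2, 0.3, 0.1, 0.4],
     [0.4, 0.1, 0.3, 0.2],
     [0.3, 0.4, 0.2, 0.1]]

# the columns also sum to 1, so P is doubly stochastic
col_sums = [sum(P[i][j] for i in range(4)) for j in range(4)]

# power iteration: pi <- pi * P until the fixed point is reached
pi = [1.0, 0.0, 0.0, 0.0]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
```

Starting from any initial distribution, the iteration converges to π = (0.25, 0.25, 0.25, 0.25).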


8.6) Let {X_0, X_1, ...} be an irreducible Markov chain with state space Z = {1, 2, ..., n}, n < ∞, and with doubly stochastic transition matrix P = ((p_ij)), i.e.,
Σ_{j∈Z} p_ij = 1 for all i ∈ Z and Σ_{i∈Z} p_ij = 1 for all j ∈ Z.
(1) Prove that the stationary distribution of {X_0, X_1, ...} is given by {π_j = 1/n, j = 1, 2, ..., n}.
(2) Can {X_0, X_1, ...} be a transient Markov chain?

Solution (1) If {π_j = 1/n; j = 1, 2, ..., n} is the stationary distribution of {X_0, X_1, ...}, then it must satisfy the system of linear algebraic equations (8.9):
1/n = Σ_{i=1}^n (1/n) p_ij ; j = 1, 2, ..., n.
But this is obviously true, since P is a doubly stochastic matrix:
Σ_{i=1}^n (1/n) p_ij = (1/n) Σ_{i=1}^n p_ij = 1/n.

Moreover, this derivation shows that {π_j = 1/n; j = 1, 2, ..., n} being a stationary distribution implies that P is doubly stochastic. Thus: {π_j = 1/n; j = 1, 2, ..., n} is a stationary distribution if and only if P is a doubly stochastic matrix. (Note that, by theorem 8.9, any irreducible Markov chain with finite state space has exactly one stationary distribution.)
(2) No, since every irreducible Markov chain with finite state space is recurrent (see the corollary after theorem 8.6).

8.7) Prove formulas (8.20), page 346, for the mean time to absorption in a random walk with two absorbing barriers (example 8.3).

Solution Let m(n) be the mean time until the particle reaches one of the absorbing states 0 or z when it starts from n, 1 ≤ n ≤ z − 1. From any position n, the particle jumps, independently of its past, with probability p to n + 1 and with probability q = 1 − p to n − 1. If the particle first jumps from n to n + 1, then its mean time to absorption is 1 + m(n + 1); if the first jump goes from n to n − 1, then it is 1 + m(n − 1). Hence, m(1), m(2), ..., m(z − 1) satisfy the system of equations
m(n) = p[1 + m(n + 1)] + q[1 + m(n − 1)]; n = 1, 2, ..., z − 1   (i)
with the boundary conditions
m(0) = m(z) = 0.   (ii)
By writing m(n) = p m(n) + q m(n) on the left-hand side of (i), an equivalent representation is obtained:
p[m(n + 1) − m(n)] = q[m(n) − m(n − 1)] − 1; n = 1, 2, ..., z − 1.   (iii)
With the notation d(n) = m(n) − m(n − 1); n = 1, 2, ..., z; this becomes
d(n + 1) = (q/p) d(n) − 1/p; n = 1, 2, ..., z − 1.   (iv)
In view of the boundary conditions (ii), d(1) = m(1) and d(z) = −m(z − 1). Successively applying the recursion (iv) yields


d(k) = (q/p)^{k−1} m(1) − (1/p)[1 + q/p + ... + (q/p)^{k−2}]
= (q/p)^{k−1} m(1) − (1/p) ⋅ (1 − (q/p)^{k−1})/(1 − q/p)
= [m(1) + 1/(p − q)](q/p)^{k−1} − 1/(p − q); k = 1, 2, ..., z; p ≠ q.
The mean times to absorption are
m(n) = Σ_{k=1}^n d(k) = [m(1) + 1/(p − q)] ⋅ (1 − (q/p)^n)/(1 − q/p) − n/(p − q); n = 1, 2, ..., z.
The unknown mean absorption time m(1) can be determined from
m(z) = Σ_{k=1}^z d(k) = 0,
which is one of the boundary conditions. It follows that
m(1) = z/(p(1 − (q/p)^z)) − 1/(p − q).
Inserting this into m(n) proves formula (8.20) for p ≠ q:
m(n) = (1/(p − q))[z(1 − (q/p)^n)/(1 − (q/p)^z) − n]; n = 0, 1, ..., z.
The formula m(n) = n(z − n) for p = q = 1/2 follows from d(k) = m(1) − 2(k − 1).
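Formula (8.20) can be cross-checked against a direct solution of the linear system (i)-(ii); the tridiagonal solver below (Thomas algorithm, our own sketch) must agree with the closed form for every starting state:

```python
def mean_absorption_closed(n, p, z):
    """Formula (8.20) for the mean absorption time from state n."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        return n * (z - n)
    r = q / p
    return (z * (1.0 - r ** n) / (1.0 - r ** z) - n) / (p - q)

def mean_absorption_solve(p, z):
    """Solve -q*m(n-1) + m(n) - p*m(n+1) = 1, m(0) = m(z) = 0,
    by forward elimination / back substitution (Thomas algorithm)."""
    q = 1.0 - p
    N = z - 1
    cp = [0.0] * (N + 1)    # modified superdiagonal coefficients
    dp = [0.0] * (N + 1)    # modified right-hand sides
    cp[1], dp[1] = -p, 1.0
    for n in range(2, N + 1):
        denom = 1.0 + q * cp[n - 1]
        cp[n] = -p / denom
        dp[n] = (1.0 + q * dp[n - 1]) / denom
    m = [0.0] * (z + 1)
    for n in range(N, 0, -1):
        m[n] = dp[n] - cp[n] * m[n + 1]
    return m
```

For example, with z = 10 the solver and the closed form agree for p = 0.6, p = 0.5 (where m(n) = n(z − n)), and p = 0.3.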

The formula m(n) = n(z − n) for p = q = 1/2 follows from d(k) = m(1) − 2(k − 1). 8.8) Show that the vector π = (π 1 = α, π 2 = β, π 3 = γ), determined in example 8.6, is a stationary initial distribution with regard to a Markov chain which has the one-step transition matrix (8.22) (page 349). Solution The transition matrix (8.22) is ⎛ ⎜ ⎜ ⎜ ⎝

0 α + β/2 γ + β/2 α/2 + β/4 1/2 β/4 + γ/2 0 α + β/2 γ + β/2

⎞ ⎟ ⎟. ⎟ ⎠

Multiplying the vector (α, β, γ) with, say, the first and third column of this matrix, gives two of the three equations (8.9): α 2 + αβ + β 2 /4 = (α + β/2) 2 , β 2 /4 + βγ/2 + γ 2 + βγ/2 = (β/2 + γ) 2 .

Since the corresponding three equations (8.9) are linearly dependent, the third equation is replaced with α + β + γ = 1. Now the assertion follows from the equations (8.21). 8.9) A source emits symbols 0 and 1 for transmission to a sink. Random noises S 1 , S 2 , ... successively and independently affect the transmission process of a symbol in the following way: if a '0' ('1') is to be transmitted, then S i distorts it to a '1' ('0') with probability p (q); i = 1, 2, ... Let X 0 = 0 or X 0 = 1 denote whether the source has emitted a '0' or a '1' for transmission. Further, let X i = 0 or X i = 1 denote whether the attack of noise S i implies the transmission of a '0' or a '1'; i = 1, 2, ... The random sequence {X 0 , X 1 , ...} is an irreducible Markov chain with state space Z = {0, 1} and transition matrix


P =
( 1 − p   p     )
( q       1 − q )
(1) Verify: On condition 0 < p + q ≤ 1, the m-step transition matrix is given by
P^{(m)} = (1/(p + q)) ( q p ; q p ) + ((1 − p − q)^m/(p + q)) ( p −p ; −q q ).   (i)
(2) Let p = q = 0.1. The transmission of the symbols 0 and 1 is affected by the random noises S_1, S_2, ..., S_5. Determine the probability that a '0' emitted by the source is actually received.

Solution (1) The proof is done by induction: For m = 1 the assertion is fulfilled, since P = P^{(1)}. Now assume that the m-step transition matrix is given by (i). Then, P^{(m+1)} = P ⋅ P^{(m)}. Doing the matrix multiplication yields
P ⋅ P^{(m)} = (1/(p + q)) ( 1−p p ; q 1−q )( q p ; q p ) + ((1 − p − q)^m/(p + q)) ( 1−p p ; q 1−q )( p −p ; −q q )
= (1/(p + q)) ( q p ; q p ) + ((1 − p − q)^m/(p + q)) ( (1−p−q)p  −(1−p−q)p ; −(1−p−q)q  (1−p−q)q )
= (1/(p + q)) ( q p ; q p ) + ((1 − p − q)^{m+1}/(p + q)) ( p −p ; −q q ) = P^{(m+1)}.
This completes the proof of representation (i) of the m-step transition matrix.
(2) Inserting the given values into (i) with m = 5 gives the corresponding 5-step transition matrix:
P^{(5)} = ((p_ij^{(5)})) = 5 ( 0.1 0.1 ; 0.1 0.1 ) + 1.6384 ( 0.1 −0.1 ; −0.1 0.1 ) = ( 0.66384 0.33616 ; 0.33616 0.66384 ).
Thus, the desired probability is p_00^{(5)} = 0.66384.
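The closed form (i) can be compared entry by entry with a direct 5-fold matrix product (our own sketch):

```python
p, q, m = 0.1, 0.1, 5

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1 - p, p], [q, 1 - q]]
Pm = [[1.0, 0.0], [0.0, 1.0]]      # identity
for _ in range(m):
    Pm = matmul2(Pm, P)            # direct m-step matrix

# closed form (i)
c = (1.0 - p - q) ** m
closed = [[(q + c * p) / (p + q), (p - c * p) / (p + q)],
          [(q - c * q) / (p + q), (p + c * q) / (p + q)]]
```

Both computations give p_00^{(5)} = 0.66384.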

(2) Determine the matrix of the transition probabilities of the corresponding discrete-time Markov chain and its stationary state distribution.

8 DISCRETE-TIME MARKOV CHAINS

103

Solution

(1)

0.8

0.2

(S,S)

(S,C)

0.6

0.4

0.6 0.8

(C,C)

0.2

0.4

(C,S)

(2) From the transition graph developed under (1), ⎛ ⎜ P = ⎜⎜ ⎜ ⎜ ⎝

0.8 0.2 0 0 0 0 0.4 0.6 0.6 0.4 0 0 0 0 0.2 0.8

⎞ ⎟ ⎟. ⎟ ⎟ ⎟ ⎠

Hence, the corresponding system of equations (8.9) for the stationary distribution is π 1 = 0.8 π 1 + 0.6 π 3 π 2 = 0.2 π 1 + 0.4 π 3 π 3 = 0.4 π 2 + 0.2 π 4

1 = π1 + π2 + π3 + π4 The solution is π 1 = 3/8, π 2 = π 3 = 1/8, π 4 = 3/8. 8.11) A supplier of toner cartridges of a certain brand checks her stock every Monday at 7:00 a.m. If the stock is less than or equal to s cartridges, she orders an amount of S - s cartridges from a wholesale dealer at 7:30 a.m., 0 ≤ s < S. Otherwise, the supplier orders nothing. In case of anorder, the wholesale dealer delivers the amount wanted till 9:00 a.m., the opening time of the shop. A demand which cannot be met by the supplier for being out of stock is lost. The weekly potential cartridge sales figures B n in week n (including lost demand) are independent random variables, identically distributed as B: p i = P(B = i); i = 0, 1, ...

Let X_n be the number of cartridges the supplier has on stock on the n th Sunday (no business over weekends). (1) Is {X_1, X_2, ...} a Markov chain? (2) If yes, determine its transition probabilities.
Solution (1) Let B_n be the weekly potential cartridge sales figure of the supplier (including lost demand) in the n th week, which ends on the n th Monday before opening hours. Then X_{n+1} is given by
X_{n+1} = max(X_n − B_n, 0) if X_n > s,
X_{n+1} = max(S − B_n, 0) if X_n ≤ s.
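The recursion above can be simulated directly; a hypothetical sketch (the concrete values of s and S, and the Poisson-distributed demand, are illustrative assumptions, not part of the exercise):

```python
import math
import random

s, S = 2, 5   # illustrative reorder point and order-up-to level

def poisson(lam, rng):
    # Knuth's method; a Poisson demand is only an illustrative choice for B
    L, k, prob = math.exp(-lam), 0, 1.0
    while True:
        prob *= rng.random()
        if prob <= L:
            return k
        k += 1

def next_stock(x, b):
    # One week of the stock recursion from the solution
    return max(x - b, 0) if x > s else max(S - b, 0)

rng = random.Random(1)
x, trajectory = S, []
for _ in range(10):
    x = next_stock(x, poisson(2.0, rng))
    trajectory.append(x)
print(trajectory)
```

Since `next_stock` depends only on the current stock x and the current demand b, the simulated sequence is a realization of the Markov chain described in the solution.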

Consider the transition probability P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n−1} = x_{n−1}). In view of the structure of X_{n+1} and with X_n = x_n given, the value x_{n−1} has no influence on X_{n+1} whatsoever, since


the number of cartridges ordered for the (n + 1) th week depends only on x_n. (By assumption, the demands B_1, B_2, ... are independent.) Thus,
P(X_{n+1} = x_{n+1} | X_n = x_n, X_{n−1} = x_{n−1}) = P(X_{n+1} = x_{n+1} | X_n = x_n).

Hence, {X_1, X_2, ...} is a Markov chain. It is, moreover, a homogeneous Markov chain, i.e., its transition probabilities P(X_{n+1} = x_{n+1} | X_n = x_n) are the same for all n = 1, 2, ..., since the weekly demands B_1, B_2, ... are identically distributed as B. Its state space is Z = {0, 1, ..., S}.
(2) The transition probabilities p_ij = P(X_{n+1} = j | X_n = i) are fully determined by s, S, and B.
For i > s,
p_i0 = P(B ≥ i) = Σ_{k=i}^∞ p_k,
p_ij = P(B = i − j) = p_{i−j}; j = 1, 2, ..., i.
For 0 ≤ i ≤ s,
p_i0 = P(B ≥ S) = Σ_{k=S}^∞ p_k,
p_ij = P(B = S − j) = p_{S−j}; j = 1, 2, ..., S.

8.12) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix
P =
( 0.5  0.1  0.4  0    0   )
( 0.8  0.2  0    0    0   )
( 0    1    0    0    0   )
( 0    0    0    0.9  0.1 )
( 0    0    0    1    0   ).

(1) Determine the minimal closed sets. (2) Identify essential and inessential states. (3) What are the recurrent and transient states?
Solution (1) There are two minimal closed sets: {0, 1, 2} and {3, 4}. Starting in either of these sets, the chain obviously cannot leave it; within each set, every state is eventually reached with probability 1. (2) Since every state of this Markov chain belongs to a minimal closed set, inessential states cannot exist. (3) Since all states are essential and Z is finite, all states are recurrent.

8.13) A Markov chain has state space Z = {0, 1, 2, 3} and transition matrix
P =
( 0    0    1    0   )
( 1    0    0    0   )
( 0.4  0.6  0    0   )
( 0.1  0.4  0.2  0.3 ).

Determine the classes of essential and inessential states. Solution {0, 1, 2} is an essential class, since, when starting at a state i ∈ {0, 1, 2} , there is always a path which leads back to state i with positive probability. State 3 is inessential, since, when leaving this state, the probability of arriving at {0, 1, 2} is 0.7. However, from set {0, 1, 2} no return to state 3 is possible. Hence, with probability 1, the Markov chain will eventually leave state 3 and never return.
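State classification of a finite chain can be automated through reachability; a sketch assuming NumPy (the Boolean-closure helper below is hypothetical, not from the text):

```python
import numpy as np

# Transition matrix of exercise 8.13
P = np.array([[0.0, 0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0],
              [0.1, 0.4, 0.2, 0.3]])

n = len(P)
# reach[i, j] = True iff j is accessible from i (reflexive transitive closure)
reach = (P > 0) | np.eye(n, dtype=bool)
for _ in range(n):
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

# A state i is essential iff every state accessible from i leads back to i.
essential = [i for i in range(n) if all(reach[j, i] for j in range(n) if reach[i, j])]
inessential = [i for i in range(n) if i not in essential]
print(essential, inessential)
```

For this matrix the computation confirms that {0, 1, 2} is the essential class and state 3 is inessential.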


8.14) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix
P =
( 0  0.2  0.8  0    0   )
( 0  0    0    0.9  0.1 )
( 0  0    0    0.1  0.9 )
( 1  0    0    0    0   )
( 1  0    0    0    0   ).

(1) Draw the transition graph. (2) Verify that this Markov chain is irreducible with period 3. (3) Determine the stationary distribution. Solution

(1) Transition graph: 0 → 1 (0.2), 0 → 2 (0.8); 1 → 3 (0.9), 1 → 4 (0.1); 2 → 3 (0.1), 2 → 4 (0.9); 3 → 0 (1); 4 → 0 (1).
(2) From the transition graph: every state is accessible from every other state. Hence, this Markov chain is irreducible. Moreover, a return to any state is only possible after a multiple of 3 steps. Hence, this Markov chain has period 3.
(3) The stationary distribution satisfies the system of linear equations π_j = Σ_{i∈Z} π_i p_ij; j ∈ Z, together with the normalizing condition Σ_{i∈Z} π_i = 1 (formulas (8.9)):
π0 = π3 + π4
π1 = 0.2 π0
π2 = 0.8 π0
π3 = 0.9 π1 + 0.1 π2
1 = π0 + π1 + π2 + π3 + π4
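Both the period and the stationary distribution can be confirmed numerically; a sketch assuming NumPy:

```python
import numpy as np
from math import gcd
from functools import reduce

P = np.array([[0.0, 0.2, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.9, 0.1],
              [0.0, 0.0, 0.0, 0.1, 0.9],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0]])

# Period of state 0: gcd of all n with p_00^(n) > 0
returns = [n for n in range(1, 13) if np.linalg.matrix_power(P, n)[0, 0] > 0]
period = reduce(gcd, returns)

# Stationary distribution: replace one balance equation by normalization
A = np.vstack([(P.T - np.eye(5))[:-1], np.ones(5)])
pi = np.linalg.solve(A, np.array([0, 0, 0, 0, 1.0]))
print(period, pi * 150)
```

The computed period is 3, and 150·π returns (50, 10, 40, 13, 37).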

The solution is π0 = 50/150, π1 = 10/150, π2 = 40/150, π3 = 13/150, π4 = 37/150.

8.15) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix
P =
( 0    1    0    0    0   )
( 1    0    0    0    0   )
( 0.2  0.2  0.2  0.4  0   )
( 0.2  0.8  0    0    0   )
( 0.4  0.1  0.1  0    0.4 ).

(1) Find the essential and inessential states. (2) Find the recurrent and transient states.
Solution (1) Essential: {0, 1}; inessential: {2, 3, 4}. (2) Recurrent: {0, 1}; transient: {2, 3, 4}.


8.16) Determine the stationary distribution of the random walk considered in example 8.12 on condition p_i = p, 0 < p < 1.
Solution The state space of this random walk is Z = {0, 1, 2, ...}, and the positive transition probabilities are
p_i0 = p; i = 0, 1, ...,
p_i,i+1 = 1 − p; i = 0, 1, ...
Hence, the corresponding system (8.9) for the stationary distribution is
π0 = p (π0 + π1 + ...),
π_i = (1 − p) π_{i−1}; i = 1, 2, ...
From the first equation and the normalizing condition, π0 = p. The complete solution is obtained recursively:
π_i = p (1 − p)^i; i = 0, 1, ...

This is the geometric distribution (page 50).

8.17) The weekly power consumption of a town depends on the weekly average temperature in that town. The weekly average temperature, observed over 16 years in the month of August, has been partitioned into 4 classes (in °C): 1 = [10, 15), 2 = [15, 20), 3 = [20, 25), 4 = [25, 30].

The weekly average temperature fluctuations between the classes in August follow a homogeneous Markov chain with transition matrix
P =
( 0.1  0.5  0.3  0.1 )
( 0.2  0.3  0.3  0.2 )
( 0.1  0.4  0.4  0.1 )
( 0    0.2  0.5  0.3 ).

When the weekly average temperatures are in class 1, 2, 3, or 4, the respective average power consumption per week is 1.5, 1.3, 1.2, and 1.3 [in MW]. (The increase from class 3 to class 4 is due to air conditioning.) What is the average power consumption in the long run in August?
Solution To be able to apply theorem 8.9, the stationary state probabilities have to be calculated. By (8.9), they satisfy the system of linear algebraic equations
π_j = Σ_{i∈Z} π_i p_ij; j = 1, 2, 3, 4.
These equations are linearly dependent, so in what follows the equation for j = 4 is replaced by the normalizing condition Σ_{i=1}^4 π_i = 1:
−0.9 π1 + 0.2 π2 + 0.1 π3 = 0
0.5 π1 − 0.7 π2 + 0.4 π3 + 0.2 π4 = 0
0.3 π1 + 0.3 π2 − 0.6 π3 + 0.5 π4 = 0
π1 + π2 + π3 + π4 = 1
The solution is π1 = 0.11758, π2 = 0.34378, π3 = 0.37066, π4 = 0.16798. Hence, the average power consumption in the long run in August is
0.11758 ⋅ 1.5 + 0.34378 ⋅ 1.3 + 0.37066 ⋅ 1.2 + 0.16798 ⋅ 1.3 = 1.28645 [MW].
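The stationary probabilities and the long-run average can be reproduced numerically; a sketch assuming NumPy:

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.3, 0.1],
              [0.2, 0.3, 0.3, 0.2],
              [0.1, 0.4, 0.4, 0.1],
              [0.0, 0.2, 0.5, 0.3]])
consumption = np.array([1.5, 1.3, 1.2, 1.3])  # MW per temperature class

# Balance equations for j = 1, 2, 3 plus the normalizing condition
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
pi = np.linalg.solve(A, np.array([0, 0, 0, 1.0]))
longrun = float(pi @ consumption)
print(pi, longrun)
```

This reproduces the long-run average of about 1.286 MW.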


8.18) A household insurer knows that the total annual claim size X of clients in a certain portfolio has a normal distribution with mean value $800 and standard deviation $260. The insurer partitions his clients into classes 1, 2, and 3, depending on the annual amounts they claim and the class they belong to (all costs in $): A client who is in class 1 in the current year will make a transition to class 1, 2, or 3 next year when his/her total claims in the current year are between 0 and 600, between 600 and 1000, or greater than 1000, respectively. A client who is in class 2 in the current year will make a transition to class 1, 2, or 3 next year if his/her total claim sizes are between 0 and 500, between 500 and 900, or more than 900. A client who is in class 3 and claims between 0 and 400, between 400 and 800, or at least 800 in the current year will be in class 1, 2, or 3 next year, respectively. When in class 1, 2, or 3, the clients pay the respective premiums 500, 800, or 1000 a year.
(1) What is the average annual contribution of a client in the long run? (2) Does the insurer make any profit under this policy in the long run?
Solution First the transition probabilities between the three classes have to be calculated:
p11 = Φ((600 − 800)/260) = Φ(−0.769) = 0.221,
p12 = Φ((1000 − 800)/260) − Φ((600 − 800)/260) = Φ(0.769) − Φ(−0.769) = 2Φ(0.769) − 1 = 0.558,
p13 = 1 − Φ((1000 − 800)/260) = 1 − Φ(0.769) = 0.221,
p21 = Φ((500 − 800)/260) = Φ(−1.154) = 0.124,
p22 = Φ((900 − 800)/260) − Φ((500 − 800)/260) = Φ(0.385) − Φ(−1.154) = 0.526,
p23 = 1 − Φ((900 − 800)/260) = 1 − Φ(0.385) = 0.350,
p31 = Φ((400 − 800)/260) = Φ(−1.538) = 0.061,
p32 = Φ((800 − 800)/260) − Φ((400 − 800)/260) = 0.5 − 0.061 = 0.439,
p33 = 1 − Φ((800 − 800)/260) = 0.5.

Thus, the matrix of the one-step transition probabilities is
P =
( 0.221  0.558  0.221 )
( 0.124  0.526  0.350 )
( 0.061  0.439  0.500 ),
so that the stationary state probabilities of the underlying Markov chain with state space Z = {1, 2, 3} satisfy the system of linear algebraic equations
π1 = 0.221 π1 + 0.124 π2 + 0.061 π3
π2 = 0.558 π1 + 0.526 π2 + 0.439 π3
1 = π1 + π2 + π3
The solution is π1 = 0.10975, π2 = 0.49514, π3 = 0.39511. Hence, by theorem 8.9, the average contribution of a client per year is
0.10975 ⋅ 500 + 0.49514 ⋅ 800 + 0.39511 ⋅ 1000 = 846.097 [$].
The average profit of the insurer per client and year is about $46.
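The entries of the transition matrix can be reproduced with the standard normal CDF expressed through the error function; a sketch (the class bounds are taken from the exercise):

```python
from math import erf, sqrt

def Phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu, sigma = 800.0, 260.0
# (lower, upper) claim bounds that keep a client in class 1 resp. move to 2
bounds = {1: (600.0, 1000.0), 2: (500.0, 900.0), 3: (400.0, 800.0)}

P = {}
for i, (lo, hi) in bounds.items():
    P[i, 1] = Phi((lo - mu) / sigma)
    P[i, 2] = Phi((hi - mu) / sigma) - Phi((lo - mu) / sigma)
    P[i, 3] = 1.0 - Phi((hi - mu) / sigma)
print(P)
```

The computed entries agree with the rounded values above to three decimals; the resulting matrix can then be fed into system (8.9).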


8.19) Two gamblers 1 and 2 begin a game with stakes of size $3 and $4, respectively. After each move a gambler either wins or loses $1, or the gambler's stake remains constant. These possibilities are controlled by the transition probabilities
p0 = 0, p1 = 0.5, p2 = 0.4, p3 = 0.3, p4 = 0.3, p5 = 0.4, p6 = 0.5, p7 = 0,
q0 = 0, q1 = 0.5, q2 = 0.4, q3 = 0.3, q4 = 0.3, q5 = 0.4, q6 = 0.5, q7 = 0.
(According to Figure 8.3, p_i = p_{i,i+1} and q_i = p_{i,i−1}.) The game is finished as soon as a gambler has won the entire stake of the other one or, equivalently, as soon as one gambler has lost her/his entire stake. (1) Determine the respective probabilities that gambler 1 or gambler 2 wins. (2) Determine the mean time of the game.
Solution (1) Since p_i = q_i; i = 1, 2, ..., z − 1, all A_n defined by (8.42) are equal to 1. Hence, the absorption probabilities at state 0 given by (8.43) are (including the conditions a(0) = 1 and a(7) = 0)
a(k) = (7 − k)/7; k = 0, 1, 2, ..., 7.
Thus, gamblers 1 and 2 win with the respective probabilities b(3) = 1 − a(3) = 3/7 and b(4) = 1 − a(4) = 4/7.

(2) Formula (8.48) gives m(1):
m(1) = (117.5/3 + Σ_{k=1}^{6} 1/p_k) / 7 = 164.5/21.

The recursive formulas (8.45) yield d(1) = m(1) , d(2) = m(1) − 6/3, d(3) = m(1) − 13.5/3, d(4) = m(1) − 23.5/3, d(5) = m(1) − 33.5/3, d(6) = m(1) − 41/3, d(7) = m(1) − 47/3.
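The absorption probabilities and mean game lengths can also be obtained by solving the first-step equations as a linear system; a sketch assuming NumPy (the equations below are standard first-step analysis, written out directly rather than via formulas (8.42)-(8.48)):

```python
import numpy as np

# Up/down probabilities of the game on states 0..7 (r_k = 1 - p_k - q_k: stay)
p = [0, 0.5, 0.4, 0.3, 0.3, 0.4, 0.5, 0]
q = [0, 0.5, 0.4, 0.3, 0.3, 0.4, 0.5, 0]

# First-step analysis: a(k) = P(absorption at 0 | start in k) with a(0) = 1,
# a(7) = 0, and m(k) = mean game length with m(0) = m(7) = 0.
A = np.zeros((6, 6))
rhs_a = np.zeros(6)
for k in range(1, 7):
    i = k - 1
    A[i, i] = p[k] + q[k]          # = 1 - r_k
    if k > 1:
        A[i, i - 1] = -q[k]
    if k < 6:
        A[i, i + 1] = -p[k]
rhs_a[0] = q[1]                    # boundary term from a(0) = 1
a = np.linalg.solve(A, rhs_a)      # a(1), ..., a(6)
m = np.linalg.solve(A, np.ones(6)) # m(1), ..., m(6)
print(a, m)
```

The solver confirms a(3) = 4/7 (so gambler 1 wins with probability 3/7) and m(3) = m(4) = 17.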

Therefore, by (8.47), m(1) = 7.833, m(2) = 13.667, m(3) = 17.000, m(4) = 17.000, m(5) = 13.667, m(6) = 7.833. The mean duration of the game is m(3) = m(4) = 17 time units. 8.20) Analogously to example 8.17 (page 369), consider a population with a maximal size of z = 5 individuals, which comprises at the beginning of its observation 3 individuals. Its birth and death probabilities with regard to a time unit are p 0 = 0, p 1 = 0.6, p 2 = 0.4, p 3 = 0.2, p 4 = 0.4, p 5 = 0, q 0 = 0, q 1 = 0.4, q 2 = 0.4, q 3 = 0.6, q 4 = 0.5, q 5 = 0.8.

(1) What is the probability of extinction of this population? (2) Determine its mean time to extinction. Solution (1) The probability of extinction is 1 since the states 1, 2, ..., 5 are transient.


(2) The table shows the relevant parameters:

k       0      1       2       3       4       5
p_k     0      0.6     0.4     0.2     0.4     0
q_k     0      0.4     0.4     0.6     0.5     0.8
r_k     1      0       0.2     0.2     0.1     0.2
d(k)    -      10.25   5.167   2.667   3       1.25
m(k)    0      10.25   15.417  18.083  21.083  22.333

In particular, the mean time to extinction of the population starting with 3 individuals is m(3) ≈ 18.08.
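The mean extinction times m(k) can be obtained directly by solving the first-step equations m(k) = 1 + p_k m(k+1) + q_k m(k−1) + r_k m(k), m(0) = 0, as a linear system; a sketch assuming NumPy:

```python
import numpy as np

p = [0, 0.6, 0.4, 0.2, 0.4, 0]    # birth probabilities p_k
q = [0, 0.4, 0.4, 0.6, 0.5, 0.8]  # death probabilities q_k

# (p_k + q_k) m(k) - p_k m(k+1) - q_k m(k-1) = 1 for k = 1..5, m(0) = 0
A = np.zeros((5, 5))
for k in range(1, 6):
    i = k - 1
    A[i, i] = p[k] + q[k]
    if k > 1:
        A[i, i - 1] = -q[k]
    if k < 5:
        A[i, i + 1] = -p[k]
m = np.linalg.solve(A, np.ones(5))
print(m)  # m(1), ..., m(5)
```

Starting from 3 individuals, the solver gives m(3) = 217/12 ≈ 18.08.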

The figures can be verified by formulas (8.50), (8.47), and (8.45).

8.21) Let the transition probabilities of a birth- and death process be given by
p_i = 1 / (1 + [i/(i + 1)]^2) and q_i = 1 − p_i; i = 1, 2, ...; p0 = 1.

Show that the process is transient.
Solution According to theorem 8.10 (p. 369), it has to be shown that the corresponding sum (8.51) is finite. Since q_i/p_i = i^2/(i + 1)^2, the terms (q_1 q_2 ⋯ q_n)/(p_1 p_2 ⋯ p_n) are equal to 1/(n + 1)^2. Hence, the sum (8.51) becomes
Σ_{n=1}^∞ 1/(n + 1)^2 = π^2/6 − 1 < ∞.
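A quick numerical check of this series value:

```python
from math import pi

# Partial sums of the series sum_{n>=1} 1/(n+1)^2
total = sum(1.0 / (n + 1) ** 2 for n in range(1, 100_000))
limit = pi ** 2 / 6 - 1
print(total, limit)
```

The partial sums increase toward π²/6 − 1 ≈ 0.6449, confirming that the sum is finite.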

8.22) Let i and j be two different states with f_ij = f_ji = 1. Show that both i and j are recurrent.
Solution The assertion results from f_ii ≥ f_ij ⋅ f_ji = 1 and f_jj ≥ f_ji ⋅ f_ij = 1.

8.23) The respective transition probabilities of two irreducible Markov chains (1) and (2) with common state space Z = {0, 1, ...} are
(1) p_i,i+1 = 1/(i + 2), p_i0 = (i + 1)/(i + 2);
(2) p_i,i+1 = (i + 1)/(i + 2), p_i0 = 1/(i + 2); i = 0, 1, ...
Check whether these Markov chains are transient, null recurrent, or positive recurrent.
Solution This exercise is a special case of example 8.12 (page 358) with the p_i given by p_i = p_i0; i = 0, 1, ...

(1) In this case, p_i = (i + 1)/(i + 2). The corresponding sum (8.30),
Σ_{i=0}^∞ p_i = Σ_{i=0}^∞ (i + 1)/(i + 2),
is divergent. Hence, this Markov chain is recurrent. Since
μ_00 = Σ_{m=0}^∞ m f_00^(m) < ∞,
it is, moreover, positive recurrent. Note that
f_00^(1) = 1/2; f_00^(m) = (1/2 ⋅ 1/3 ⋯ 1/m) ⋅ m/(m + 1); m = 2, 3, ...

(2) In this case, p_i = 1/(i + 2). The corresponding sum (8.30) is divergent as well:
Σ_{i=0}^∞ p_i = Σ_{i=0}^∞ 1/(i + 2) = ∞.

But this Markov chain is null recurrent. (The mean recurrence time to state 0 is determined by the harmonic progression.)

8.24) Let N_i be the random number of time periods a discrete-time Markov chain stays in state i (sojourn time of the Markov chain in state i). Determine E(N_i) and Var(N_i).
Solution Once the Markov chain has arrived at a state i, the transition into the following state depends only on i. Hence, N_i has a geometric distribution with parameter p = p_ii:
P(N_i = n) = (1 − p_ii) p_ii^{n−1}; n = 1, 2, ...
Thus, mean value and variance of N_i are (see page 50)
E(N_i) = 1/(1 − p_ii), Var(N_i) = p_ii/(1 − p_ii)^2, 0 ≤ p_ii < 1.
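With the convention N_i ≥ 1 used in the pmf above, the mean 1/(1 − p_ii) can be checked by a quick simulation (a hypothetical sketch; `sojourn` is an illustrative helper, not from the text):

```python
import random

def sojourn(p_ii, rng):
    # One sojourn: each period the chain stays in state i with probability
    # p_ii and leaves with probability 1 - p_ii, so N_i >= 1.
    n = 1
    while rng.random() < p_ii:
        n += 1
    return n

rng = random.Random(42)
p_ii = 0.6
samples = [sojourn(p_ii, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
print(mean)
```

For p_ii = 0.6 the empirical mean is close to 1/(1 − 0.6) = 2.5.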

8.25) A Galton-Watson process starts with one individual. The random number of offspring Y of this individual has the z-transform M(z) = (0.6z + 0.4)^3.
(1) What type of probability distribution has Y? (2) Determine the probabilities P(Y = k). (3) What is the corresponding probability of extinction? (4) Let T be the random time to extinction. Determine the probability P(T = 2) by applying formula (8.60). Verify this result by applying the total probability rule to P(T = 2).
Solution (1) Y has a binomial distribution with parameters n = 3 and p = 0.6:
P(Y = k) = (3 choose k) 0.6^k 0.4^{3−k}; k = 0, 1, 2, 3.

(2) P(Y = 0) = 0.064, P(Y = 1) = 0.288, P(Y = 2) = 0.432, P(Y = 3) = 0.216.
(3) The probability of extinction π0 is the smallest positive z which solves
z = (0.6z + 0.4)^3.
There are two positive solutions: z0 = 0.09571 and z1 = 1. (A third, irrelevant solution of this equation is z2 = −3.09571.) Hence, the probability of extinction is π0 = 0.09571.
(4) Since M_2(z) = [0.6 (0.6z + 0.4)^3 + 0.4]^3 and M_1(z) = M(z), formula (8.60) with n = 2 gives
P(T = 2) = M_2(0) − M_1(0) = M(M(0)) − M(0) = [0.6 ⋅ 0.4^3 + 0.4]^3 − 0.4^3 = 0.02026.


Extinction in the second generation can only happen if all Y offspring of the zeroth generation produce no offspring themselves. Given Y = k, this happens with probability 0.064^k. Hence,
P(T = 2) = Σ_{k=1}^3 P(T = 2 | Y = k) P(Y = k) = 0.064 ⋅ 0.288 + 0.064^2 ⋅ 0.432 + 0.064^3 ⋅ 0.216 = 0.02026.
Note that the event 'T = 2' cannot happen if Y = 0. Hence, P(T = 2 | Y = 0) = 0.
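The extinction probability and P(T = 2) can be checked numerically; a sketch (the fixed-point iteration from 0 is the standard way to locate the smallest nonnegative root of z = M(z)):

```python
def M(z):
    # z-transform (pgf) of the binomial offspring distribution of 8.25
    return (0.6 * z + 0.4) ** 3

# Extinction probability: smallest nonnegative fixed point of M
z = 0.0
for _ in range(200):
    z = M(z)
pi0 = z

# P(T = 2) = M(M(0)) - M(0), cf. formula (8.60)
p_t2 = M(M(0.0)) - M(0.0)
print(pi0, p_t2)
```

The iteration converges to π0 ≈ 0.09571, and p_t2 ≈ 0.02026, matching both computations above.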

8.26) A Galton-Watson process starts with one individual. The random number of offspring Y of this individual has the z-transform M(z) = e^{1.5(z−1)}. (1) What is the underlying probability distribution of Y? (2) Determine the corresponding probability of extinction. (3) Let T be the random time to extinction. Determine the probability P(T = 3) by applying formula (8.60).
Solution (1) M(z) is the z-transform of the Poisson distribution with parameter λ = 1.5.
(2) The positive solutions of the equation e^{1.5(z−1)} = z are z0 = 0.417 and z1 = 1. Hence, the probability of extinction is π0 = 0.417.
(3) M_1(z) = M(z) = e^{1.5(z−1)},
M_2(z) = e^{1.5(e^{1.5(z−1)} − 1)},
M_3(z) = e^{1.5(e^{1.5(e^{1.5(z−1)} − 1)} − 1)}.
Hence,
M_1(0) = 0.22313,
M_2(0) = e^{1.5(0.22313 − 1)} = e^{−1.16531} = 0.31183,
M_3(0) = e^{1.5(0.31183 − 1)} = e^{−1.03226} = 0.35622.

The desired probability is P(T = 3) = M_3(0) − M_2(0) = 0.0444.

8.27) (1) Determine the z-transform of the truncated, p0-modified geometric distribution given by formula (8.62), page 375.
(2) Determine the corresponding probability π0 of extinction if
(i) m = 6, p0 = 0.482, p = 0.441.
(3) Compare this π0 with the probability of extinction obtained in example 8.18 without truncation, but under otherwise the same assumptions.

Solution
(1) M(z) = p0 z^0 + (1 − p0)/(1 − (1 − p)^m) ⋅ p z Σ_{i=1}^m [(1 − p) z]^{i−1}
= p0 z^0 + (1 − p0)/(1 − (1 − p)^m) ⋅ p z Σ_{i=0}^{m−1} [(1 − p) z]^i,
so that
M(z) = p0 z^0 + (1 − p0)/(1 − (1 − p)^m) ⋅ p z ⋅ (1 − [(1 − p) z]^m)/(1 − (1 − p) z).
(2) Inserting the figures (i) into this M(z) gives
M(z) = 0.482 + 0.23563 z ⋅ (1 − 0.030512 z^6)/(1 − 0.559 z).
The smallest positive solution of M(z) = z is π0 = 0.9206.
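This root can be located numerically; a sketch (fixed-point iteration from 0, which for a pgf converges to the smallest nonnegative solution of M(z) = z):

```python
def M(z, m=6, p0=0.482, p=0.441):
    # z-transform of the truncated, p0-modified geometric distribution (8.62)
    q = 1.0 - p
    return p0 + (1.0 - p0) / (1.0 - q ** m) * p * z * (1.0 - (q * z) ** m) / (1.0 - q * z)

# Smallest positive solution of M(z) = z
z = 0.0
for _ in range(10_000):
    z = M(z)
print(z)
```

The iteration settles at z ≈ 0.9206, the extinction probability stated above.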

(3) Without truncation, the extinction probability turned out to be π0 = 0.86. The difference is not negligible.

8.28) Assume a Galton-Watson process starts with X0 = n > 1 individuals. Determine the corresponding probability of extinction, given that the same Galton-Watson process, when starting with one individual, has probability of extinction π0.
Solution The offspring of the n individuals generate n independent Galton-Watson processes. Each of them has extinction probability π0. Therefore, the extinction probability of a Galton-Watson process starting with n individuals is π0^n.

8.29) Given X0 = 1, show that the probability of extinction π0 satisfies the equation M(π0) = π0

by applying the total probability rule (conditioning with regard to the number of offspring of the individual in the zeroth generation). Make use of the answer to exercise 8.28.
Solution A population starting with X0 = 1 individual develops according to a Galton-Watson process. Let X1 be the random number of offspring of this individual. Let A be the random event that the population will eventually die out, and let B_n be the event 'X1 = n'. Then
π0 = P(A) = Σ_{n=0}^∞ P(A | B_n) P(B_n).
Since P(A | B_n) = π0^n (by exercise 8.28), and by the definition of M(z),
π0 = Σ_{n=0}^∞ π0^n P(X1 = n) = M(π0).

CHAPTER 9 Continuous-Time Markov Chains

9.1) Let Z = {0, 1} be the state space and
P(t) =
( e^{−t}      1 − e^{−t} )
( 1 − e^{−t}  e^{−t}     )

From P, p 00 (t + τ) = e −(t+τ) , p 00 (t) p 00 (τ) + p 01 (t) p 10 (τ) = e −t ⋅ e −τ + (1 − e −t ) ⋅ (1 − e −τ ) ≠ p 00 (t + τ). Hence, {X(t), t ≥ 0} cannot be a homogeneous Markov chain. 9.2) A system fails after a random lifetime L. Then it waits a random time W for renewal. A renewal takes another random time Z. The random variables L, W, and Z have exponential distribu- tions with parameters λ, ν, and μ , respectively. On completion of a renewal, the system immedi- ately resumes its work. This process continues indefinitely. All life, waiting, and renewal times are assumed to be independent. Let the system be in states 0, 1 and 2 when it is operating, waiting or being renewed. (1) Draw the transition graph of the corresponding Markov chain {X(t), t ≥ 0}. (2) Determine the point and the stationary availability of the system on condition P(X(0) = 0) = 1. Solution

(1) Transition graph: 0 → 1 (λ), 1 → 2 (ν), 2 → 0 (μ).
(2) The first two differential equations of (9.20) for the state probabilities p0(t) and p1(t), together with the normalizing condition, yield
p0′(t) = μ p2(t) − λ p0(t),
p1′(t) = λ p0(t) − ν p1(t),      (i)
p0(t) + p1(t) + p2(t) = 1.
The initial conditions are: p0(0) = 1, p1(0) = p2(0) = 0.      (ii)

SOLUTIONS MANUAL

114

From (i) one obtains an inhomogeneous second-order differential equation with constant coefficients for the probability p0(t):
p0″(t) + a p0′(t) + b p0(t) = μν      (iii)
with a = λ + μ + ν, b = λμ + λν + μν. Next the corresponding homogeneous differential equation has to be solved:
p0,h″(t) + a p0,h′(t) + b p0,h(t) = 0.      (iv)
The corresponding characteristic equation
x^2 + a x + b = 0
has the solutions
x_{1/2} = −a/2 ± (1/2) √(a^2 − 4b).
Case 1: a^2 = 4b, i.e., x1 = x2 = −a/2. Then the general solution of the homogeneous differential equation (iv) is
p0,h(t) = (c1 + c2 t) e^{−(a/2)t},
where c1 and c2 are arbitrary constants. The general solution of the inhomogeneous differential equation (iii) has structure p0(t) = p0,h(t) + p0,s(t), where p0,s(t) is a special solution of (iii). Obviously, the constant function p0,s(t) ≡ μν/b is a solution of (iii). Hence, the desired probability is
p0(t) = (c1 + c2 t) e^{−(a/2)t} + μν/b.
To determine the constants c1 and c2, the initial conditions (ii) will be made use of: From p0(0) = 1,
c1 + μν/b = 1, so that c1 = 1 − μν/b = (λμ + λν)/b.
From (i),
μ p1(t) = μ − p0′(t) − (λ + μ) p0(t).      (v)
Now the initial condition p1(0) = 0 yields 0 = μ + a c1/2 − c2 − (λ + μ). Hence,
c2 = λ (a(μ + ν)/(2b) − 1).
With c1 and c2 known, the point availability of the system A(t) = p0(t) is fully given. The stationary availability is obtained by letting t → ∞ in A(t):
A = lim_{t→∞} A(t) = μν/b = (1/λ) / (1/λ + 1/μ + 1/ν).

Case 2: a^2 > 4b. In this case, x_{1/2} < 0 and the general solution of (iv) is
p0,h(t) = c1 e^{x1 t} + c2 e^{x2 t}.
Hence, the general solution of (iii) is
p0(t) = c1 e^{x1 t} + c2 e^{x2 t} + μν/b.      (vi)


From p0(0) = 1,
c1 + c2 = 1 − μν/b.
From p1(0) = 0 and (v), 0 = μ − c1 x1 − c2 x2 − (λ + μ), so that
(c2 − c1) √(a^2 − 4b) + a (c1 + c2) = 2λ.
Hence,
c2 = λ/√(a^2 − 4b) − ((b − μν)/(2b)) (a/√(a^2 − 4b) − 1).
The stationary availability is, of course, the same as the one obtained in case 1.
Case 3: a^2 < 4b. In this case, the solutions of the characteristic equation are complex: x_{1/2} = α ± iβ

with α = −a/2, β = (1/2) √(4b − a^2), i = √(−1).
Then the general solution of the homogeneous differential equation (iv) is
p0,h(t) = e^{αt} (c1 cos βt + c2 sin βt),
so that the desired general solution of (iii) is
p0(t) = e^{αt} (c1 cos βt + c2 sin βt) + μν/b.
The initial conditions (ii) and equation (v) yield the constants c1 and c2:
c1 = 1 − μν/b,
c2 = (a/√(4b − a^2)) (1 − μν/b) − 2λ/√(4b − a^2).

Note In view of the structure of the transition graph, the probabilities p1(t) and p2(t) can be obtained from p0(t) by cyclic transformation of the intensities λ, μ, and ν:
λ → ν → μ → λ.
This is easily done since the parameters a and b are invariant to this transformation.
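As a cross-check, the forward equations p′(t) = p(t) Q can be integrated numerically with illustrative rates; a sketch assuming NumPy (the Euler step is a crude but sufficient choice here, and the rate values are hypothetical):

```python
import numpy as np

lam, nu, mu = 1.0, 2.0, 3.0   # illustrative rates for L, W, Z

# Generator of the cycle 0 -> 1 -> 2 -> 0
Q = np.array([[-lam, lam, 0.0],
              [0.0, -nu, nu],
              [mu, 0.0, -mu]])

# Integrate p'(t) = p(t) Q with a small Euler step, starting in state 0
p = np.array([1.0, 0.0, 0.0])
h = 1e-3
for _ in range(30_000):            # up to t = 30
    p = p + h * (p @ Q)

b = lam * mu + lam * nu + mu * nu
A_stat = mu * nu / b               # stationary availability mu*nu/b
print(p[0], A_stat)
```

For these rates, p0(t) has converged to μν/b = 6/11 by t = 30, as the closed-form analysis predicts.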

9.3) Consider a 1-out-of-2-system, i.e., the system is operating when at least one of its two subsystems is operating. When a subsystem fails, the other one continues to work. On its failure, the joint renewal of both subsystems begins. On its completion, both subsystems immediately resume their work. The lifetimes of the subsystems are identically exponential with parameter λ . The joint renewal time is exponential with parameter µ. All life and renewal times are independent of each other. Let X(t) be the number of subsystems operating at time t. (1) Draw the transition graph of the corresponding Markov chain {X(t), t ≥ 0}.

(2) Given P(X(0) = 2) = 1, determine the time-dependent state probabilities p i (t) = P(X(t) = i) ; i = 0, 1, 2. (3) Determine the stationary state distribution. Solution

(1) Transition graph: 2 → 1 (2λ), 1 → 0 (λ), 0 → 2 (μ).


(2) The transition graph has the same cyclic structure as the one in the preceding exercise; state 2 here corresponds to state 0 there, state 1 to state 1, and state 0 to state 2. Hence, the results obtained in exercise 9.2 apply if there the transition rate λ is replaced with 2λ and the transition rate ν with λ.
(3) With D = 1/(2λ) + 1/λ + 1/μ, the corresponding parameter transformations yield the stationary state probabilities
p2 = (1/(2λ)) / D,  p1 = (1/λ) / D,  p0 = (1/μ) / D.

9.4) A copy center has 10 copy machines which are in constant use. The times between two successive failures of a machine have an exponential distribution with mean value 100 hours. There are two mechanics who repair failed machines. A defective machine is repaired by only one mechanic. During this time, the second mechanic is busy repairing another failed machine if there is any; otherwise this mechanic is idle. All repair times have an exponential distribution with mean value 4 hours. All random variables involved are independent. Consider the steady state. 1) What is the average percentage of operating machines? 2) What is the average percentage of idle mechanics?
Solution This is the repairman problem considered in example 9.14, page 419. Let X(t) denote the number of failed machines at time t. Then {X(t), t ≥ 0} is a birth- and death process with state space Z = {0, 1, ..., 10}. The stationary state probabilities π_i = P(X = i) of this process are given by formulas (9.65) with ρ = 0.01/0.25 = 0.04 and r = 2:
π0 = 0.673, π1 = 0.269, π2 = 0.049, π3 = 0.008, π4 = 0.001, π_i < 0.0001 for i = 5, 6, ..., 10.
Hence, the mean number of operating machines in the steady state is, with sufficient accuracy,
m_w = 10 ⋅ 0.673 + 9 ⋅ 0.269 + 8 ⋅ 0.049 + 7 ⋅ 0.008 + 6 ⋅ 0.001 = 9.6,
i.e., in the steady state, on average 96% of the copy machines are operating. The mean number of idle mechanics is
m_r = 2 ⋅ 0.673 + 1 ⋅ 0.269 = 1.615.
Hence, the mean percentage of idle mechanics in the steady state is about 81%.

9.5) Consider the two-unit system with standby redundancy discussed in example 9.5 a), page 391, on condition that the lifetimes of the units are exponential with respective parameters λ1 and λ2. The other model assumptions listed in example 9.5 remain valid. Describe the behavior of the system by a Markov chain and draw the transition graph.
Solution The underlying Markov chain {X(t), t ≥ 0} has state space {(0, 0), (0, 1), (1, 0), (1, 1_s), (1_s, 1)}, where (s1, s2) means that unit 1 is in state s1 and unit 2 in state s2 with

0: unit is down (being replaced in states (1,0) and (0,1)),
1: unit is operating,
1_s: unit is available (ready for operating), but in cold standby.
State (0,0) is absorbing. The transition graph is:


(1, 1_s) → (0, 1) (λ1); (0, 1) → (1_s, 1) (μ); (0, 1) → (0, 0) (λ2); (1_s, 1) → (1, 0) (λ2); (1, 0) → (1, 1_s) (μ); (1, 0) → (0, 0) (λ1).

9.6) Consider the two-unit system with parallel redundancy discussed in example 9.6 on condition that the lifetimes of the units are exponential with parameters λ1 and λ2, respectively. The other model assumptions listed in example 9.6 remain valid. Describe the behaviour of the system by a Markov chain and draw the transition graph.
Solution The underlying Markov chain {X(t), t ≥ 0} has state space {(0, 0), (0, 1), (1, 0), (1, 1)}, where (i, j) means that at one and the same time point unit 1 is in state i and unit 2 in state j, with '0 = unit down' and '1 = unit operating'.
a) Survival probability In this case, state (0,0) is absorbing and the transition graph is:
(1, 1) → (0, 1) (λ1); (1, 1) → (1, 0) (λ2); (0, 1) → (1, 1) (μ); (0, 1) → (0, 0) (λ2); (1, 0) → (1, 1) (μ); (1, 0) → (0, 0) (λ1).

b) Long-run availability State (0, 0) has to be replaced with two new states:
(0, 0_1): the transition to state (0, 0) was made via state (0, 1),
(0_1, 0): the transition to state (0, 0) was made via state (1, 0).

The system arrives at state (0, 0_1) at a time point when unit 1 is being replaced; it arrives at state (0_1, 0) at a time point when unit 2 is being replaced.
r = 1 (one mechanic) The transition graph is:
(1, 1) → (0, 1) (λ1); (1, 1) → (1, 0) (λ2); (0, 1) → (1, 1) (μ); (0, 1) → (0, 0_1) (λ2); (1, 0) → (1, 1) (μ); (1, 0) → (0_1, 0) (λ1); (0, 0_1) → (1, 0) (μ); (0_1, 0) → (0, 1) (μ).

r = 2 (two mechanics) The transition graph is (in the two split states both units are under repair, so each has two μ-exits):
(1, 1) → (0, 1) (λ1); (1, 1) → (1, 0) (λ2); (0, 1) → (1, 1) (μ); (0, 1) → (0, 0_1) (λ2); (1, 0) → (1, 1) (μ); (1, 0) → (0_1, 0) (λ1); (0, 0_1) → (1, 0) (μ); (0, 0_1) → (0, 1) (μ); (0_1, 0) → (1, 0) (μ); (0_1, 0) → (0, 1) (μ).

9.7) The system considered in example 9.7, page 398, is generalized as follows: If the system makes a direct transition from state 0 to the blocking state 2, then the subsequent renewal time is exponential with parameter μ 0 . If the system makes a transition from state 1 to state 2, then the subsequent renewal time is exponential with parameter μ 1 .

(1) Describe the behavior of the system by a Markov chain and draw the transition graph. (2) What is the stationary probability that the system is blocked?
Solution (1) The following system states are introduced (state 2 differs from the one used in example 9.7):
0: the system is operating,
1: type 1 failure state,
2: type 2 failure state, reached by a transition from state 1 (system blocked),
3: type 2 failure state, reached by a direct transition from state 0 (system blocked).
The corresponding transition graph is:
0 → 1 (λ1); 0 → 3 (λ2); 1 → 2 (ν); 2 → 0 (μ1); 3 → 0 (μ0).

(2) The system (9.28) for the stationary state probabilities is
(λ1 + λ2) π0 = μ1 π2 + μ0 π3
ν π1 = λ1 π0
μ1 π2 = ν π1
π0 + π1 + π2 + π3 = 1
The solution is
π0 = 1 / (1 + λ1/ν + λ1/μ1 + λ2/μ0),
π1 = (λ1/ν) π0,  π2 = (λ1/μ1) π0,  π3 = (λ2/μ0) π0.
The probability that the system is blocked is
π2 + π3 = (λ1/μ1 + λ2/μ0) / (1 + λ1/ν + λ1/μ1 + λ2/μ0).
(If μ0 = μ1 = μ, then this blocking probability coincides with the one obtained in example 9.7, where it is denoted by π2.)
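The closed-form blocking probability can be checked against a direct solution of the generator equations; a sketch with illustrative, hypothetical rates (assumes NumPy):

```python
import numpy as np

l1, l2, nu, mu1, mu0 = 0.2, 0.1, 1.0, 0.5, 0.4   # illustrative rates

# Generator for states 0 (operating), 1, 2, 3 with transitions
# 0->1 (l1), 0->3 (l2), 1->2 (nu), 2->0 (mu1), 3->0 (mu0)
Q = np.array([[-(l1 + l2), l1, 0.0, l2],
              [0.0, -nu, nu, 0.0],
              [mu1, 0.0, -mu1, 0.0],
              [mu0, 0.0, 0.0, -mu0]])

# Solve pi Q = 0 with the normalizing condition
A = np.vstack([Q.T[:-1], np.ones(4)])
pi = np.linalg.solve(A, np.array([0, 0, 0, 1.0]))

denom = 1 + l1 / nu + l1 / mu1 + l2 / mu0
blocked_formula = (l1 / mu1 + l2 / mu0) / denom
print(pi[2] + pi[3], blocked_formula)
```

Both computations give the same blocking probability, which supports the closed form above.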


9.8) Consider a two-unit system with standby redundancy and one mechanic. All repair times of failed units have an Erlang distribution with parameters n = 2 and μ. Apart from this, the other model assumptions listed in example 9.5 remain valid. (1) Describe the behavior of the system by a Markov chain and draw the transition graph. (2) Determine the stationary availability of the system. (3) Sketch the stationary availability of the system as a function of ρ = λ/μ.
Solution (1) Erlang's phase method with the following system states is introduced:
0: both units are available (one is operating, the other one in cold standby),
1: one unit has failed; its repair is in phase 1,
2: one unit has failed; its repair is in phase 2,
3: both units have failed; the one under repair is in phase 1,
4: both units have failed; the one under repair is in phase 2.
The corresponding Markov chain {X(t), t ≥ 0} with state space {0, 1, 2, 3, 4} has the following transition graph:

0 → 1 (λ); 1 → 2 (μ); 1 → 3 (λ); 2 → 0 (μ); 2 → 4 (λ); 3 → 4 (μ); 4 → 1 (μ).

(2) The system of algebraic equations for the stationary state probabilities π0, π1, ..., π4 is
λ π0 = μ π2
(λ + μ) π1 = λ π0 + μ π4
(λ + μ) π2 = μ π1
μ π3 = λ π1
μ π4 = λ π2 + μ π3
From this and the normalizing condition, letting ρ = λ/μ,
π0 = 1 / (1 + 2ρ + 4ρ^2 + 2ρ^3),
π1 = (ρ + 1)ρ π0, π2 = ρ π0, π3 = (ρ + 1)ρ^2 π0, π4 = (ρ + 2)ρ^2 π0.
(3) The stationary availability of the system is
A = π0 + π1 + π2 = (ρ + 1)^2 / (1 + 2ρ + 4ρ^2 + 2ρ^3).
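The availability formula can be checked against a direct solution of the generator equations; a sketch with illustrative rates (assumes NumPy):

```python
import numpy as np

lam, mu = 0.3, 0.6          # illustrative rates; rho = lam / mu
rho = lam / mu

# Generator for states 0..4 with transitions 0->1 (lam), 1->2 (mu),
# 1->3 (lam), 2->0 (mu), 2->4 (lam), 3->4 (mu), 4->1 (mu)
Q = np.array([[-lam, lam, 0.0, 0.0, 0.0],
              [0.0, -(lam + mu), mu, lam, 0.0],
              [mu, 0.0, -(lam + mu), 0.0, lam],
              [0.0, 0.0, 0.0, -mu, mu],
              [0.0, mu, 0.0, 0.0, -mu]])

A = np.vstack([Q.T[:-1], np.ones(5)])
pi = np.linalg.solve(A, np.array([0, 0, 0, 0, 1.0]))

avail = pi[0] + pi[1] + pi[2]
avail_formula = (rho + 1) ** 2 / (1 + 2 * rho + 4 * rho ** 2 + 2 * rho ** 3)
print(avail, avail_formula)
```

For any admissible ρ the two values coincide, which confirms the closed form for A.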


[Figure for exercise 9.8: plot of the stationary availability A over ρ ∈ [0, 1], decreasing from A = 1 at ρ = 0 to A = 4/9 ≈ 0.44 at ρ = 1.]

9.9) Consider a two-unit parallel system (i.e., the system operates if at least one unit is operating). The lifetimes of the units have an exponential distribution with parameter λ. There is one repairman, who can only attend to one failed unit at a time. Repair times have an Erlang distribution with parameters n = 2 and phase rate 0.5. The system arrives at the failed state as soon as a unit fails during the repair of the other one. All life and repair times are assumed to be independent. (1) By using Erlang's phase method, determine the relevant state space of the system and draw the corresponding transition graph of the underlying Markov chain. (2) Determine the stationary availability of the system.
Solution
(1) State 1: both units are operating.
State 2: one unit is operating; the other one is being repaired, phase 1.
State 3: one unit is operating; the other one is being repaired, phase 2.
State 4: both units have failed; the one under repair is in phase 1.
State 5: both units have failed; the one under repair is in phase 2.

The corresponding transition diagram of the underlying Markov chain is:
1 → 2 (2λ); 2 → 3 (0.5); 2 → 4 (λ); 3 → 1 (0.5); 3 → 5 (λ); 4 → 5 (0.5); 5 → 2 (0.5).

(2) The system of algebraic equations for the stationary state probabilities π1, π2, ..., π5 is

2λπ1 = 0.5π3
(λ + 0.5)π2 = 2λπ1 + 0.5π5
π3 = 0.5π2
0.5π4 = λπ2
0.5π5 = 0.5π4


9 CONTINUOUS-TIME MARKOV CHAINS

Together with the normalizing condition π1 + π2 + ⋯ + π5 = 1, the solution is

π1 = 1/(1 + 12λ + 32λ²), π2 = 8λ/(1 + 12λ + 32λ²), π3 = 4λ/(1 + 12λ + 32λ²),
π4 = π5 = 16λ²/(1 + 12λ + 32λ²).

The system is available if it is in state 1, 2, or 3. Hence, the stationary availability A of the system is

A = π1 + π2 + π3 = (1 + 12λ)/(1 + 12λ + 32λ²).

9.10) When being in states 0, 1, and 2, a pure birth process {X(t), t ≥ 0} with Z = {0, 1, 2, ...} has the respective birth rates λ0 = 2, λ1 = 3, λ2 = 1. Given X(0) = 0, determine the time-dependent state probabilities pi(t) = P(X(t) = i) for i = 0, 1, 2.

Solution The pi(t), i = 0, 1, 2, satisfy the first three differential equations of (9.36), page 405:

p′0(t) = −2 p0(t)
p′1(t) = 2 p0(t) − 3 p1(t)
p′2(t) = 3 p1(t) − p2(t)

The initial condition is equivalent to p0(0) = 1. Hence, the first differential equation yields p0(t) = e^{−2t}, t ≥ 0.

The probabilities p1(t) and p2(t) can be obtained recursively by applying (9.38):

p1(t) = 2(e^{−2t} − e^{−3t}), p2(t) = 3e^{−t}(1 − e^{−t})², t ≥ 0.
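The closed-form probabilities can be checked against a direct numerical integration of the three differential equations; the step size and horizon below are arbitrary choices for this sketch:

```python
import math

# forward-Euler integration of the three birth equations up to T = 2.0
dt = 1e-4
steps = 20000
p0, p1, p2 = 1.0, 0.0, 0.0   # initial condition p0(0) = 1
for _ in range(steps):
    p0, p1, p2 = (p0 + dt*(-2*p0),
                  p1 + dt*(2*p0 - 3*p1),
                  p2 + dt*(3*p1 - p2))

T = steps * dt
assert abs(p0 - math.exp(-2*T)) < 1e-3
assert abs(p1 - 2*(math.exp(-2*T) - math.exp(-3*T))) < 1e-3
assert abs(p2 - 3*math.exp(-T)*(1 - math.exp(-T))**2) < 1e-3
print("closed-form solutions confirmed")
```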

9.11) Consider a linear birth process {X(t), t ≥ 0} with birth rates λj = jλ; j = 0, 1, ..., and state space Z = {0, 1, ...}.
(1) Given X(0) = 1, determine the distribution function of the random time point T3 at which the process enters state 3.
(2) Given X(0) = 1, determine the mean value of the random time point Tn at which the process enters state n, n > 1.

Solution (1) Tn can be represented as a sum of n − 1 independent random variables Xi:

Tn = X1 + X2 + ⋯ + X_{n−1},   (i)

where Xi, the sojourn time of the process in state i, has an exponential distribution with parameter λi. Therefore, by formula (4.66), page 190, the distribution function of T3 is

F_{T3}(t) = ∫₀ᵗ (1 − e^{−2λ(t−x)}) λe^{−λx} dx
         = ∫₀ᵗ λe^{−λx} dx − λe^{−2λt} ∫₀ᵗ e^{λx} dx
         = 1 − e^{−λt} − λe^{−2λt} [(1/λ) e^{λx}]₀ᵗ
         = 1 − e^{−λt} − e^{−2λt}(e^{λt} − 1).

Hence, F_{T3}(t) = (1 − e^{−λt})², t ≥ 0.


(2) In view of (i), since E(Xi) = 1/(λi), the mean value of Tn is

E(Tn) = (1/λ)(1 + 1/2 + ⋯ + 1/(n − 1)).

9.12) The number of physical particles of a particular type in a closed container evolves as follows: There is one particle at time t = 0. It splits into two particles of the same type after an exponential random time Y with parameter λ (its lifetime). These two particles behave in the same way as the original one, i.e., after random times, which are identically distributed as Y, they split into 2 particles each, and so on. All lifetimes of the particles are assumed to be independent. Let X(t) denote the number of particles in the container at time t. Determine the absolute state probabilities pj(t) = P(X(t) = j); j = 1, 2, ...; of the stochastic process {X(t), t ≥ 0}.

Solution {X(t), t ≥ 0} is a linear birth process with parameter λ. Hence, the state probabilities on condition X(0) = 1 are (section 9.6.1)

pj(t) = e^{−λt}(1 − e^{−λt})^{j−1}; j = 1, 2, ...

9.13) A death process with state space Z = {0, 1, 2, ...} has death rates

μ0 = 0, μ1 = 2, and μ2 = μ3 = 1. Given X(0) = 3, determine the absolute state probabilities pj(t) = P(X(t) = j) for j = 3, 2, 1, 0.

Solution According to (9.44), the pj(t) satisfy the differential equations

p′3(t) = −p3(t)
p′2(t) = −p2(t) + p3(t)
p′1(t) = −2p1(t) + p2(t)
p′0(t) = 2p1(t)

From the first differential equation, p3(t) = e^{−t}, t ≥ 0.

Recursively from (9.45) (formulas (9.46) are not applicable),

p2(t) = e^{−t} ∫₀ᵗ e^{x} e^{−x} dx = t e^{−t}, t ≥ 0,

p1(t) = e^{−2t} ∫₀ᵗ e^{2x} x e^{−x} dx = e^{−2t} ∫₀ᵗ x e^{x} dx = e^{−2t} [(x − 1)e^{x}]₀ᵗ,

so that p1(t) = (t − 1)e^{−t} + e^{−2t}, t ≥ 0. Finally, since p′0(t) = 2p1(t),

p0(t) = 2 ∫₀ᵗ [(x − 1)e^{−x} + e^{−2x}] dx = 1 − e^{−2t} − 2t e^{−t}, t ≥ 0.
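Since the four probabilities must form a distribution at every time point, a short check (the sample times below are chosen arbitrarily for the sketch):

```python
import math

# evaluate p3, p2, p1, p0 at a few time points and verify they sum to one
for t in [0.0, 0.5, 1.0, 3.0]:
    p3 = math.exp(-t)
    p2 = t * math.exp(-t)
    p1 = (t - 1)*math.exp(-t) + math.exp(-2*t)
    p0 = 1 - math.exp(-2*t) - 2*t*math.exp(-t)
    assert abs(p0 + p1 + p2 + p3 - 1) < 1e-12
    assert min(p0, p1, p2, p3) > -1e-12   # all nonnegative
print("distribution check passed")
```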

9.14) A linear death process {X(t), t ≥ 0} has death rates μj = jμ; j = 0, 1, ...
(1) Given X(0) = 2, determine the distribution function of the random time T at which the process arrives at state 0 ('lifetime' of the process).
(2) Given X(0) = n, n > 1, determine the mean value of the random time T at which the process enters state 0.

Solution (1) Given pn(0) = 1, T can be represented as a sum of n independent random variables Xi:

T = X1 + X2 + ⋯ + Xn,   (i)

where Xi, the sojourn time of the death process in state i, has an exponential distribution with parameter iμ. Hence, by formula (4.66), the distribution function of T = X1 + X2 is

F_T(t) = ∫₀ᵗ (1 − e^{−2μ(t−x)}) μe^{−μx} dx.

Thus, F_T(t) = (1 − e^{−μt})², t ≥ 0.

(2) In view of (i), since E(Xi) = 1/(μi),

E(T) = (1/μ)(1 + 1/2 + ⋯ + 1/n).

9.15) At time t = 0 there are an infinite number of molecules of type a and 2n molecules of type b in a two-component gas mixture. After an exponential random time with parameter μ any molecule of type b combines, independently of the others, with a molecule of type a to form a molecule ab.
(1) What is the probability that at time t there are still j free molecules of type b in the container?
(2) What is the mean time till only n free molecules of type b are left in the container?

Solution (1) Let X(t) be the number of free molecules of type b in the container. Then {X(t), t ≥ 0} is a linear death process with death rates μj = jμ; j = 0, 1, ..., 2n, which starts at X(0) = 2n. Therefore, by section 9.6.2, the desired probabilities are

pj(t) = C(2n, j) e^{−jμt}(1 − e^{−μt})^{2n−j}; j = 0, 1, ..., 2n.

(2) The time Tn till the process arrives at state n is given by Tn = X1 + X2 + ⋯ + Xn, where Xi has an exponential distribution with parameter (2n − i + 1)μ; i = 1, 2, ..., n. Hence,

E(Tn) = (1/μ)(1/(2n) + 1/(2n − 1) + ⋯ + 1/(n + 1)).

9.16) At time t = 0 a cable consists of 5 identical, intact wires. The cable is subject to a constant load of 100 kp, so that in the beginning each wire bears a load of 20 kp. Given a load of w kp per wire, the time to breakage of a wire (its lifetime) is exponential with mean value 1000/w [weeks]. When one or more wires are broken, the load of 100 kp is uniformly distributed over the remaining intact ones. For any fixed number of wires, their lifetimes are assumed to be independent and identically distributed.
(1) What is the probability that all wires are broken at time t = 50 [weeks]?
(2) What is the mean time until the cable breaks completely?

Solution
(a) When 5 wires are intact, the mean lifetime of a wire is 1000/20 = 50. Hence, the lifetimes of the wires are exponential with parameter m5 = 0.02.
(b) When 4 wires are intact, the mean lifetime of a wire is 1000/25 = 40. Hence, the lifetimes of the wires are exponential with parameter m4 = 0.025.
(c) When 3 wires are intact, the mean lifetime of a wire is 1000/33.3 = 30. Hence, the lifetimes of the wires are exponential with parameter m3 = 0.0333.
(d) When 2 wires are intact, the mean lifetime of a wire is 1000/50 = 20. Hence, the lifetimes of the wires are exponential with parameter m2 = 0.05.
(e) When 1 wire is intact, the mean lifetime of this wire is 1000/100 = 10. Hence, the lifetime of this wire is exponential with parameter m1 = 0.1.

Let X(t) denote the number of intact wires at time t. Then {X(t), t ≥ 0} is a death process with state space Z = {0, 1, 2, 3, 4, 5} and death rates μi = i · mi = 0.1; i = 1, 2, ..., 5.

(1) Let T0 be the time till the process enters state 0. Then p0(t) = P(T0 ≤ t). T0 has the structure T0 = X1 + X2 + ⋯ + X5, where the Xi are independent and identically exponentially distributed with parameter 0.1. Hence, T0 has an Erlang distribution with parameters n = 5 and λ = 0.1 (page 75). Thus,

p0(50) = 1 − e^{−0.1·50} Σ_{i=0}^{4} (0.1·50)^i / i! = 0.56.

(2) E(T0) = n/λ = 5/0.1 = 50.
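The Erlang probability p0(50) and the mean are easy to reproduce numerically:

```python
import math

# Erlang(n = 5, rate = 0.1) distribution function at t = 50:
# P(T0 <= t) = 1 - exp(-rate*t) * sum_{i < n} (rate*t)^i / i!
n, rate, t = 5, 0.1, 50.0
x = rate * t
p = 1 - math.exp(-x) * sum(x**i / math.factorial(i) for i in range(n))
mean = n / rate
print(round(p, 4), mean)
```

Rounded to two decimals this gives the 0.56 stated above, and the mean is 50 weeks.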

9.17)* Let {X(t), t ≥ 0} be a death process with X(0) = n and positive death rates μ1, μ2, ..., μn. Prove: If Y is an exponential random variable with parameter λ and independent of the death process, then

P(X(Y) = 0) = Π_{i=1}^{n} μi/(μi + λ).

Solution The conditional probability P(X(Y) = 0 | Y = t) is simply p0(t):

p0(t) = P(X(Y) = 0 | Y = t).

Hence, the desired probability is

P(X(Y) = 0) = ∫₀^∞ p0(t) λe^{−λt} dt = λ ∫₀^∞ p0(t) e^{−λt} dt.   (i)

Therefore, (1/λ) P(X(Y) = 0) is the Laplace transform of p0(t) at the point s = λ. In view of this relationship, the system of differential equations (9.44) for the state probabilities of a death process is solved by means of the Laplace transformation with parameter s = λ (section 2.5.2, page 99). The system (9.44) is

p′n(t) = −μn pn(t)
p′j(t) = −μj pj(t) + μ_{j+1} p_{j+1}(t); j = 0, 1, ..., n − 1.

Let p̄j(λ) = L{pj(t)} be the Laplace transform of pj(t) with parameter s = λ. Then, by formula (2.120) and in view of the initial condition pn(0) = 1, application of the Laplace transform to this system gives a system of algebraic equations for the p̄j(λ):

λ p̄n(λ) − 1 = −μn p̄n(λ)
λ p̄j(λ) = −μj p̄j(λ) + μ_{j+1} p̄_{j+1}(λ); j = n − 1, n − 2, ..., 1, 0.   (ii)

Solving recursively yields

p̄n(λ) = 1/(μn + λ),
p̄_{n−1}(λ) = μn /[(μ_{n−1} + λ)(μn + λ)],
p̄_{n−2}(λ) = μ_{n−1} μn /[(μ_{n−2} + λ)(μ_{n−1} + λ)(μn + λ)],
...
p̄1(λ) = μ2 μ3 ⋯ μn /[(μ1 + λ)(μ2 + λ) ⋯ (μ_{n−1} + λ)(μn + λ)].

From (ii) with j = 0, since μ0 = 0,

p̄0(λ) = (μ1/λ) p̄1(λ),

so that

p̄0(λ) = (1/λ) · μ1/(μ1 + λ) · μ2/(μ2 + λ) ⋯ μn/(μn + λ).

Hence, by (i),

P(X(Y) = 0) = λ p̄0(λ) = μ1/(μ1 + λ) · μ2/(μ2 + λ) ⋯ μ_{n−1}/(μ_{n−1} + λ) · μn/(μn + λ).
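The product formula can be verified for a small example by integrating the death-process equations numerically and computing the mixture integral (i); the rates below are arbitrary illustrative values, not part of the exercise:

```python
import math

# hypothetical rates for a check with n = 3
mus = [2.0, 1.5, 3.0]            # mu_1, mu_2, mu_3
lam = 0.7

p = [0.0, 0.0, 0.0, 1.0]         # p[j] = P(X(t) = j), start in state 3
dt, steps = 2e-4, 200000         # integrate up to T = 40 (tail is negligible)
integral = 0.0
for k in range(steps):
    t = k * dt
    integral += p[0] * lam * math.exp(-lam * t) * dt
    p = [p[0] + dt*( mus[0]*p[1]),
         p[1] + dt*(-mus[0]*p[1] + mus[1]*p[2]),
         p[2] + dt*(-mus[1]*p[2] + mus[2]*p[3]),
         p[3] + dt*(-mus[2]*p[3])]

product = 1.0
for mu in mus:
    product *= mu / (mu + lam)
assert abs(integral - product) < 5e-3
print(round(integral, 3), round(product, 3))
```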

9.18) Let a birth and death process have state space Z = {0, 1, ..., n} and transition rates

λj = (n − j)λ and μj = jμ; j = 0, 1, ..., n.

Determine its stationary state probabilities.

Solution This is the special case r = n of the situation considered in example 9.14, page 419. Hence, from formulas (9.60) and (9.61), or directly from (9.65), letting ρ = λ/μ,

π0 = [1 + Σ_{j=1}^{n} C(n, j) ρ^j]^{−1} = (1 + ρ)^{−n}, πj = C(n, j) ρ^j π0; j = 1, 2, ..., n.

9.19) Check whether, or under what restrictions, a birth and death process with transition rates

λj = ((j + 1)/(j + 2)) λ and μj = μ; j = 0, 1, ...,

has a stationary state distribution.

Solution The condition λ < μ is sufficient for the existence of a stationary state distribution. To see this, note that the corresponding series (9.62) converges, since the transition rates satisfy the sufficient condition (9.63):

λ_{i−1}/μi = (i/(i + 1)) · (λ/μ) ≤ λ/μ < 1.

The series (9.64) diverges, since

lim_{n→∞} Σ_{j=1}^{n} Π_{i=1}^{j} (μ/λi) = lim_{n→∞} Σ_{j=1}^{n} (μ/λ)^j · (j + 2)/2 ≥ lim_{n→∞} Σ_{j=1}^{n} (μ/λ)^j = ∞.

Hence, by theorem 9.3, a stationary state distribution exists if λ < μ.


9.20) A birth and death process has transition rates

λj = (j + 1)λ and μj = j²μ; j = 0, 1, ...; 0 < λ < μ.

Confirm that this process has a stationary state distribution and determine it.

Solution The transition rates fulfil the sufficient condition (9.63):

λ_{i−1}/μi = iλ/(i²μ) = λ/(iμ) ≤ λ/μ < 1.

(Obviously, for satisfying (9.63) the condition λ < μ can be dropped.) The series (9.64) diverges:

lim_{n→∞} Σ_{j=1}^{n} Π_{i=1}^{j} i²μ/((i + 1)λ) = ∞.

Hence, by theorem 9.3, there exists a stationary state distribution. It is given by formulas (9.60) and (9.61):

π0 = [1 + Σ_{j=1}^{∞} (1/j!)(λ/μ)^j]^{−1} = e^{−λ/μ},
πj = Π_{i=1}^{j} (λ_{i−1}/μi) π0 = Π_{i=1}^{j} (λ/(iμ)) π0 = (1/j!)(λ/μ)^j π0; j = 1, 2, ...,

i.e., the stationary distribution is a Poisson distribution with parameter λ/μ.
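A short check that this Poisson distribution satisfies the detailed balance equations λj πj = μ_{j+1} π_{j+1}; the rates below are arbitrary illustrative values:

```python
import math

# detailed balance: (j+1)*lam * pi_j == (j+1)^2 * mu * pi_{j+1}
lam, mu = 0.8, 1.9                 # hypothetical rates
a = lam / mu
pi = [a**j / math.factorial(j) * math.exp(-a) for j in range(30)]
for j in range(29):
    left = (j + 1) * lam * pi[j]
    right = (j + 1)**2 * mu * pi[j + 1]
    assert abs(left - right) < 1e-12
print(round(sum(pi), 6))
```

The printed sum is 1 up to the (negligible) truncation of the Poisson tail.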

9.21) Consider the following deterministic models for the mean (average) development of the size of populations:
(1) Let m(t) be the mean number of individuals of a population at time t. It is reasonable to assume that the change of the population size, namely dm(t)/dt, is proportional to m(t), t ≥ 0, i.e., for a positive constant h the mean number m(t) satisfies the differential equation

dm(t)/dt = h m(t).   (i)

a) Solve this differential equation assuming m(0) = 1.
b) Is there a birth and death process which has this trend function m(t)?
(2) The mean population size m(t) satisfies the differential equation

dm(t)/dt = λ − μ m(t).   (ii)

a) With a positive integer N, solve this equation under the initial condition m(0) = N.
b) Is there a birth and death process which has this trend function m(t)?

Solution (1) a) By separation of the variables, the differential equation (i) for m(t) becomes

dm(t)/m(t) = h dt.

Integration yields, with a constant c, ln m(t) = h t + c. Hence, with C = e^c,

m(t) = C e^{ht}, t ≥ 0.

In view of the initial condition m(0) = 1, the constant C is equal to 1, so that m(t) = e^{ht}, t ≥ 0.

b) The linear birth process with λ = h has this trend function (page 406).

(2) a) This is a first-order inhomogeneous differential equation with constant coefficients:

dm(t)/dt + μ m(t) = λ.

Its solution has the structure m(t) = m_h(t) + m_s(t), where m_h(t) is the general solution of the corresponding homogeneous differential equation (determined under a)) and m_s(t) is a special solution of the inhomogeneous differential equation (ii). Since m_s(t) = λ/μ is obviously a solution of (ii),

m(t) = C e^{−μt} + λ/μ.

From the initial condition, m(0) = C + λ/μ = N. Hence,

m(t) = (N − λ/μ) e^{−μt} + λ/μ, or m(t) = (λ/μ)(1 − e^{−μt}) + N e^{−μt}.

b) This is the trend function of a birth and death process {X(t), t ≥ 0} with birth rates λi = λ and death rates μi = iμ, i = 0, 1, ..., starting at X(0) = N; N = 0, 1, ... (example 9.12, pages 414 to 416).

9.22) A computer is connected to three terminals (for example, measuring devices). It can simultaneously evaluate data records from only two terminals. When the computer is processing two data records and in the meantime another data record has been produced, then this new data record has to wait in a buffer if the buffer is empty. Otherwise the new data record is lost. (The buffer can store only one data record.) The data records are processed according to the FCFS queueing discipline. The terminals produce data records independently according to homogeneous Poisson processes with intensity λ. The processing times of data records from all terminals are independent (even if the computer is busy with two data records at the same time) and have an exponential distribution with parameter μ. They are assumed to be independent of the input. Let X(t) be the number of data records in computer and buffer at time t.
(1) Verify that {X(t), t ≥ 0} is a birth and death process, determine its transition rates, and draw the transition graph.
(2) Determine the stationary loss probability, i.e., the probability that in the steady state a data record is lost.

Solution (1) Only transitions to neighbouring states are possible, according to the following transition graph:

(Transition graph not reproduced: states 0, 1, 2, 3 in a line, with birth rate 3λ out of each of the states 0, 1, 2, death rate μ from state 1, and death rate 2μ from states 2 and 3.)

{X(t), t ≥ 0} is a birth and death process. The desired loss probability is π_loss = π3. Inserting the transition rates into (9.60) and (9.61) gives

π_loss = 6.75ρ³ / (1 + 3ρ + 4.5ρ² + 6.75ρ³), ρ = λ/μ.
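The closed form follows from the standard birth-death product formula; a sketch with arbitrary illustrative rates confirms it:

```python
# stationary distribution of the 4-state birth-death chain with
# birth rates (3*lam, 3*lam, 3*lam) and death rates (mu, 2*mu, 2*mu)
lam, mu = 0.4, 1.0                 # hypothetical rates
births = [3*lam, 3*lam, 3*lam]
deaths = [mu, 2*mu, 2*mu]

w = [1.0]                          # unnormalized pi_j = product of births/deaths
for b, d in zip(births, deaths):
    w.append(w[-1] * b / d)
pi = [x / sum(w) for x in w]

rho = lam / mu
formula = 6.75*rho**3 / (1 + 3*rho + 4.5*rho**2 + 6.75*rho**3)
assert abs(pi[3] - formula) < 1e-12
print(round(pi[3], 4))
```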

9.23) Under otherwise the same assumptions as in the previous exercise, it is assumed that a data record which has been waiting in the buffer for a random patience time will be deleted as being no longer up to date. The patience times of all data records are assumed to be independent, exponential random variables with parameter ν. They are also independent of all arrival and processing times of the data records. Determine the stationary loss probability.

Solution The states of the corresponding birth and death process {X(t), t ≥ 0} are the same as in exercise 9.22. The transition rates are identical except the death rate μ3, which is now μ3 = 2μ + ν. (Transition graph as in exercise 9.22, with the rate from state 3 to state 2 replaced by 2μ + ν.)

From (9.60) and (9.61), if 'loss' refers only to an occupied buffer,

π_loss = π3 = [13.5ρ³/(2 + ν/μ)] / [1 + 3ρ + 4.5ρ² + 13.5ρ³/(2 + ν/μ)], ρ = λ/μ.

If 'loss' also refers to data records which were deleted because their patience time had expired, then, if B is the service time and Z the patience time of a customer, by the total probability rule,

π_loss = π3 + P(Z < B)(1 − π3) = π3 + (2μ/(2μ + ν))(1 − π3).

9.24) Under otherwise the same assumptions as in exercise 9.22, it is assumed that a data record will be deleted when its total sojourn time in buffer and computer exceeds a random time Z, where Z has an exponential distribution with parameter α. Thus, the interruption of the current service of a data record is possible.
(1) Draw the corresponding transition graph.
(2) Determine the stationary loss probability.

Solution (1) (Transition graph as in exercise 9.22, with death rates μ + α from state 1 and 2μ + α from states 2 and 3.)

(2) If 'loss' refers only to an occupied buffer, then, from (9.60) and (9.61),

π_loss = π3 = 27λ³ / [(μ + α)(2μ + α)² + 3λ(2μ + α)² + 9λ²(2μ + α) + 27λ³].

If 'loss' also refers to data records which were deleted because their sojourn time had expired, then

π_loss = (μ/(μ + α)) π1 + (2μ/(2μ + α)) π2 + (2μ/(2μ + α))(1 − π3) + π3.

9.25) A small filling station in a rural area provides diesel for agricultural machines. It has one diesel pump and waiting capacity for 5 machines. On average, 8 machines per hour arrive for diesel. An arriving machine immediately leaves the station without fuel if the pump and all waiting places are occupied. The mean time a machine occupies the pump is 5 minutes. The station behaves like an M/M/s/m queueing system.
(1) Determine the stationary loss probability.
(2) Determine the stationary probability that an arriving machine waits for diesel.

Solution (1) The formulas given in section 9.7.4.1 apply (see also example 9.17). The arrival rate per hour is λ = 8, and the service rate is μ = 12 per hour. Hence, the traffic intensity is ρ = λ/μ = 2/3, and


the probabilities πi that in the steady state there are i machines at the filling station are

πi = (2/3)^i π0; i = 1, 2, ..., 6, with π0 = 1/[1 + 2/3 + (2/3)² + ⋯ + (2/3)⁶].

The loss probability in the steady state is π_loss = π6 = 0.0311, and the waiting probability is

π_wait = π1 + π2 + ⋯ + π5 = 1 − π0 − π6 = 0.6149.

9.26) Consider a two-server loss system. Customers arrive according to a homogeneous Poisson process with intensity λ. A customer is always served by server 1 when this server is idle, i.e., an arriving customer goes to server 2 only when server 1 is busy. The service times of both servers are independent, identically exponentially distributed random variables with parameter μ. Let X(t) be the number of customers in the system at time t. Determine the stationary state probabilities of the stochastic process {X(t), t ≥ 0}.

Solution Let (i, j) be the state that there are i customers at server 1 and j customers at server 2; i, j = 0, 1. To simplify notation, let 0 = (0,0), 1 = (1,0), 2 = (0,1), 3 = (1,1). The transition graph is:

(Transition graph not reproduced: the four states are connected by the arrival rate λ and the service rate μ of each busy server.)

Hence, the pi satisfy the system of linear equations

λ p0 = μ p1 + μ p2
(λ + μ) p1 = λ p0 + μ p3
(λ + μ) p2 = μ p3

By making use of the normalizing condition, the solution is seen to be (ρ = λ/μ)

p0 = [1 + ρ²/(2(ρ + 1)) + ρ(ρ + 2)/(2(ρ + 1)) + ρ²/2]^{−1} = [1 + ρ + ρ²/2]^{−1},
p1 = ρ(ρ + 2)/(2(ρ + 1)) p0, p2 = ρ²/(2(ρ + 1)) p0, p3 = (ρ²/2) p0.

Hence, the stochastic process {X(t), t ≥ 0} with X(t) denoting the number of customers in the system at time t has the stationary state probabilities

π0 = p0, π1 = p1 + p2 = ρ p0, π2 = p3 = (ρ²/2) p0.

9.27) A 2-server loss system is subject to a homogeneous Poisson input with intensity λ. The situation considered in the previous exercise is generalized as follows: If both servers are idle, a customer goes to server 1 with probability p and to server 2 with probability 1 − p. Otherwise, a customer goes to the idle server (if there is any). The service times of servers 1 and 2 are independent exponential random variables with parameters μ1 and μ2, respectively. All arrival and service times are independent. Describe the system behaviour by a homogeneous Markov chain and draw the transition graph.


Solution Let state (i, j) be defined as in exercise 9.26. The transition graph of the corresponding homogeneous Markov chain (not reproduced here) connects (0,0) to (1,0) at rate λp and to (0,1) at rate λ(1 − p); from (1,0) and (0,1) arrivals lead to (1,1) at rate λ; the service transitions run back along the corresponding edges at rates μ1 and μ2.

9.28) A single-server waiting system is subjected to a homogeneous Poisson input with intensity λ = 30 [h⁻¹]. If there are not more than 3 customers in the system, the service times have an exponential distribution with mean 2 [min]. If there are more than 3 customers in the system, the service times have an exponential distribution with mean 1 [min]. All arrival and service times are independent.
(1) Show that there exists a stationary state distribution and determine it.
(2) Determine the mean length of the waiting queue in the steady state.

Solution (1) As usual, let X(t) be the number of customers in the system at time t. The birth and death process {X(t), t ≥ 0} has birth rates

λi = 1/2 [min⁻¹]; i = 0, 1, ...,

and death rates

μ0 = 0, μ1 = μ2 = μ3 = 1/2 [min⁻¹], μi = 1 [min⁻¹]; i = 4, 5, ...

Since the conditions of theorem 9.3 are fulfilled, there is a stationary state distribution. It is given by (9.60) and (9.61):

π0 = π1 = π2 = π3 = 1/5, πk = (1/5)(1/2)^{k−3}; k = 4, 5, ...

(2) The mean queue length in the steady state is

E(L) = 1·(1/5) + 2·(1/5) + 3·(1/5) + Σ_{k=4}^{∞} k·(1/5)(1/2)^{k−3}
     = 6/5 + (1/10) Σ_{i=0}^{∞} (i + 4)(1/2)^i
     = 6/5 + (1/10)[Σ_{i=0}^{∞} i(1/2)^i + 4 Σ_{i=0}^{∞} (1/2)^i] = 6/5 + 2/10 + 8/10.

Thus, E(L) = 2.2.

9.29) Taxis and customers arrive at a taxi rank in accordance with two independent homogeneous Poisson processes with intensities λ1 = 4 an hour and λ2 = 3 an hour, respectively. Potential customers who find 2 waiting customers do not wait for service, but leave the rank immediately. Groups of customers who will use the same taxi are considered to be one customer. On the other hand, arriving taxis which find two taxis waiting leave the rank as well. What is the average number of customers waiting at the rank?


Solution Let (i, j) denote the state that there are i customers and j taxis at the rank. The corresponding Markov process has state space {(2,0), (1,0), (0,0), (0,1), (0,2)}; a taxi arrival (rate λ1) shifts the state one step towards (0,2), a customer arrival (rate λ2) one step towards (2,0). Thus, the transitions between the states are governed by a birth and death process with constant birth rates and constant death rates. By (9.60) and (9.61) with λ1/λ2 = 4/3, the stationary state probabilities are

π(2,0) = 0.1037,

E(X 4,2 ) = Σ i=1 i π i = 0.68960. The mean number of busy maintenance teams is E(Z 4,2 ) = 1 ⋅ π 1 + 2 (π 2 + π 3 + π 4 ) = 0.66208. Case 2 n = 2, r = 1 : Formulas (9.65) yield the stationary state probabilities of {X(t), t ≥ 0} : π 0 = 0.67568, π 1 = 0.27027, π 2 = 0.05405. Hence, the mean number of failed trucks in the steady state is E(X 2,1 ) = 1 ⋅ π 1 + 2 ⋅ 0.05405 = 0.37837. The mean number of busy maintenance teams is E(Z 2,1 ) = 1 ⋅ (π 1 + π 2 ) = 0.32432.

Comparison of policies 1 and 2: When applying policy 2, the mean number of failed trucks out of 4 is 2E(X_{2,1}) = 0.75676, whereas the mean number of busy maintenance teams is 2E(Z_{2,1}) = 0.64864. On the one hand, using policy 2 leads on average to a larger number of failed trucks than using policy 1. On the other hand, under policy 2 the maintenance teams are less busy than under policy 1. Consequently, with regard to the criteria applied, policy 1 is preferable to policy 2.

9.31) Ferry boats and customers arrive at a ferry station in accordance with two independent homogeneous Poisson processes with intensities λ and μ, respectively. If there are k customers at the ferry station when a boat arrives, then it departs with min(k, n) passengers (n is the capacity of each boat). If k > n, then the remaining k − n customers wait for the next boat. The sojourn times of the boats at the station are assumed to be negligibly small. Model the situation by a homogeneous Markov chain {X(t), t ≥ 0} and draw the transition graph.

Solution The situation is modeled by a homogeneous Markov chain {X(t), t ≥ 0} with state space {0, 1, ...} as follows: If there are i customers at the ferry station, then {X(t), t ≥ 0} is in state i. The number of ferry boats at the station need not be taken into account, since, by assumption, their sojourn time at the station, compared with their interarrival times, is negligibly small, and they depart even without any passenger. This Markov chain has the positive transition intensities

q_{i,i+1} = μ; i = 0, 1, ...,
q_{i,0} = λ; i = 1, 2, ..., n,
q_{kn+i,(k−1)n+i} = λ; i = 1, 2, ..., n; k = 1, 2, ...

{X(t), t ≥ 0} is not a birth and death process. A section of its transition graph is λ

0

λ μ

λ

1

2

μ λ

...

μ

n

μ λ

n+1

μ

n+2

...

μ

2n

...

λ

9.32) The life cycle of an organism is controlled by shocks, e.g., virus attacks or accidents, in the following way: A healthy organism has an exponential lifetime L with parameter λ h . If a shock occurs, the organism falls sick and, when being in this state, its (residual) lifetime S is exponential with parameter λ s , λ s > λ h . However, a sick organism may recover and return to the healthy state. This occurs in an exponential time R with parameter μ. If during a period of sickness another shock occurs, the organism cannot recover and will die a random time D after the occurrence of the second shock. D has an exponential distribution with parameter λ d , λ d > λ s . The random variables L, S, R, and D are assumed to be independent. Shocks arrive according to a homogeneous Poisson process with intensity λ. (1) Describe the evolvement in time of the states the organism may be in by a Markov chain. (2) Determine the mean lifetime of the organism.


Solution (1) Four states are introduced: 0: healthy; 1: sick (after one shock); 2: sick (after two shocks); 3: dead.

The transition graph of the corresponding Markov chain {X(t), t ≥ 0} is: 0 → 1 at rate λ, 1 → 0 at rate μ, 1 → 2 at rate λ, 2 → 3 at rate λd, together with the death transitions 0 → 3 at rate λh and 1 → 3 at rate λs.

(2) The system (9.20), p. 389, for the absolute state probabilities pi(t) = P(X(t) = i); i = 0, 1, 2, 3, is

p′0(t) = μ p1(t) − (λ + λh) p0(t)
p′1(t) = λ p0(t) − (λ + λs + μ) p1(t)
p′2(t) = λ p1(t) − λd p2(t)
p′3(t) = λh p0(t) + λs p1(t) + λd p2(t)

Let p̄i(s) be the Laplace transform of pi(t). Then, applying the Laplace transform to the first 3 differential equations of this system, making use of p0(0) = 1 and formula (2.121), p. 100, gives

s p̄0(s) − 1 = μ p̄1(s) − (λ + λh) p̄0(s)
s p̄1(s) = λ p̄0(s) − (λ + λs + μ) p̄1(s)
s p̄2(s) = λ p̄1(s) − λd p̄2(s)

The solution is

p̄0(s) = (s + λ + λs + μ) / [(s + λ + λs)(s + λ + λh) + μ(s + λh)],
p̄1(s) = λ / [(s + λ + λs + μ)(s + λ + λh) − λμ],
p̄2(s) = λ² / {(s + λd)[(s + λ + λs + μ)(s + λ + λh) − λμ]}.

If L denotes the lifetime of the organism, then F(t) = P(L > t) = p0(t) + p1(t) + p2(t) is the survival probability of an organism. Hence, the Laplace transform of F(t) is

F̄(s) = ∫₀^∞ e^{−st} F(t) dt = p̄0(s) + p̄1(s) + p̄2(s).

By formula (2.52), page 64, the desired mean lifetime is

E(L) = F̄(0) = (2λ + λs + μ + λ²/λd) / [(λ + λs + μ)λh + (λ + λs)λ].

As a special case: if λ = 0, then the mean lifetime is E(L) = 1/λh.

By formula (2.52), page 64, the desired mean lifetime is 2λ + λ s + μ + λ 2 /λ d E(L) = F(0) = . (λ + λ s + μ) λ h + (λ + λ s ) λ As a special case: If λ = 0 , then the mean lifetime is E(L) = 1/λ h . 9.33) Customers arrive at a waiting system of type M/M/1/∞ with intensity λ. As long as there are less than n customers in the system, the server remains idle. As soon as the n th customer arrives, the server resumes its work and stops working only then, when all customers (including the newcomers) have been served. After that the server again waits until the waiting queue has reached length n and so on. Let 1/μ be the mean service time of a customer and X(t) be the number of customers in the system at time t. (1) Draw the transition graph of the Markov chain {X(t), t ≥ 0}. (2) Given that n = 2 , compute the stationary state probabilities. (Make sure they exist.)


Solution (1) Let the Markov chain {X(t), t ≥ 0} be in state i if there are i customers in the system. However, if the number of customers in the system is k, 1 ≤ k < n, and this state was reached from state n, then this number is denoted as k*. With this agreement, the transition graph of {X(t), t ≥ 0} consists of the accumulation path 0 → 1 → ⋯ → n − 1 → n, where each arrow carries rate λ (the server is idle), and the service branch: from each state i ≥ n, an arrival leads to i + 1 at rate λ and a service completion to the state with one customer less (to (n−1)* in the case i = n) at rate μ; from each state k*, a service completion leads to (k−1)* (to 0 for k = 1) at rate μ, and an arrival increases the number of customers by one at rate λ.

(2) By (9.28) and the transition graph for n = 2, the stationary state probabilities satisfy

λπ0 = μπ1*
λπ1 = λπ0
(λ + μ)π1* = μπ2
(λ + μ)π2 = λπ1 + λπ1* + μπ3
(λ + μ)πi = λπ_{i−1} + μπ_{i+1}; i = 3, 4, ...

Letting ρ = λ/μ, the solution is

π1 = π0, π1* = ρπ0, πi = ρ^{i−1}(ρ + 1)π0; i = 2, 3, ...

From the normalizing condition and the geometric series (page 48), π0 = (1 − ρ)/2. Thus, a stationary solution exists if λ < μ, i.e., if ρ < 1. Note that p1 = π1* + π1 = (ρ + 1)π0 is the probability that in the steady state there is exactly one customer in the system. Thus, with pi = πi for i = 0, 2, 3, ..., the probabilities that i customers are in the system are

pi = ρ^{i−1}(ρ + 1)p0; i = 1, 2, ...
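A numerical check of these stationary probabilities for n = 2 (the chain is truncated at an arbitrary level N, and ρ is an arbitrary illustrative value):

```python
rho, mu = 0.6, 1.0
lam = rho * mu
N = 200                                  # truncation level for the check

pi0 = (1 - rho) / 2
pi = {0: pi0, '1*': rho*pi0, 1: pi0}
for i in range(2, N):
    pi[i] = rho**(i-1) * (rho + 1) * pi0

# balance equations
assert abs(lam*pi[0] - mu*pi['1*']) < 1e-12
assert abs(lam*pi[1] - lam*pi[0]) < 1e-12
assert abs((lam+mu)*pi['1*'] - mu*pi[2]) < 1e-12
assert abs((lam+mu)*pi[2] - (lam*pi[1] + lam*pi['1*'] + mu*pi[3])) < 1e-12
for i in range(3, N-1):
    assert abs((lam+mu)*pi[i] - (lam*pi[i-1] + mu*pi[i+1])) < 1e-12
assert abs(sum(pi.values()) - 1) < 1e-6   # up to the truncated tail
print("balance equations satisfied")
```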

9.34) At time t = 0 a computer system consists of n operating computers. As soon as a computer fails, it is separated from the system by an automatic switching device with probability 1 − p. If a failed computer is not separated from the system (this happens with probability p), then the entire system fails. The lifetimes of the computers are independent and have an exponential distribution with parameter λ. Thus, this distribution does not depend on the system state. Provided the switching device has operated properly when required, the system is available as long as there is at least one computer available. Let X(t) be the number of computers which are available at time t. By convention, if due to the switching device the entire system has failed in [0, t), then X(t) = 0. (1) Draw the transition graph of the Markov chain {X(t), t ≥ 0}. (2) Given n = 2, determine the mean lifetime E(X s ) of the system. Solution

(1) (Transition graph: for i = 2, ..., n, state i leads to i − 1 at rate iλ(1 − p) and to 0 at rate iλp; state 1 leads to 0 at rate λ.)

(2) From the transition graph with n = 2 and (9.20) with pi(t) = P(X(t) = i); i = 0, 1, 2:

p′0(t) = λ p1(t) + 2λp p2(t)
p′1(t) = 2λ(1 − p) p2(t) − λ p1(t)
p′2(t) = −2λ p2(t)

In view of the initial conditions p_1(0) = 0, p_2(0) = 1, application of the Laplace transform to the second and third differential equation yields (notation as in exercise 9.32)

s p_1(s) = 2λ(1 − p) p_2(s) − λ p_1(s)
s p_2(s) − 1 = −2λ p_2(s)

The solution is

p_1(s) = 2λ(1 − p) / [(s + λ)(s + 2λ)] ,  p_2(s) = 1/(s + 2λ) .

Analogously to exercise 9.32, the mean lifetime is obtained from E(X_s) = p_1(0) + p_2(0). Hence,

E(X_s) = (1.5 − p)/λ .
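As an added sanity check (not part of the original solution), the closed-form mean lifetime E(X_s) = (1.5 − p)/λ for n = 2 can be reproduced by a short Monte Carlo simulation; the parameter values λ = 0.5 and p = 0.3 are arbitrary:

```python
import random

def system_lifetime(lam, p, rng):
    """Sample the lifetime of the 2-computer system of exercise 9.34.

    Each of the 2 computers fails at rate lam; a failure crashes the whole
    system with probability p, otherwise the failed computer is separated
    and the system runs on until the next failure ends it."""
    t = rng.expovariate(2 * lam)        # first failure: 2 computers racing
    if rng.random() < p:                # switching device fails: system down
        return t
    return t + rng.expovariate(lam)     # one computer left; its failure ends it

rng = random.Random(42)
lam, p = 0.5, 0.3
n = 200_000
estimate = sum(system_lifetime(lam, p, rng) for _ in range(n)) / n
exact = (1.5 - p) / lam                 # = 2.4 for these parameters
print(round(estimate, 2), exact)
```

The simulated mean should agree with (1.5 − p)/λ up to Monte Carlo noise.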

9.35) A waiting-loss system of type M/M/1/2 is subject to two independent Poisson inputs 1 and 2 with respective intensities λ_1 and λ_2 (type 1- and type 2-customers). An arriving type 1-customer who finds the server busy and the waiting places occupied displaces a possible type 2-customer from its waiting place (such a type 2-customer is lost), but ongoing service of a type 2-customer is not interrupted. When a type 1-customer and a type 2-customer are waiting, the type 1-customer will always be served first, regardless of the order of their arrivals. The service times of type 1- and type 2-customers are independent random variables, which have exponential distributions with parameters μ_1 and μ_2, respectively. Describe the behaviour of the system by a homogeneous Markov chain, determine the transition rates, and draw the transition graph.

Solution System states (i, j, k) are defined as follows:
i = 0: the server is idle. i = 1: a type 1-customer is served. i = 2: a type 2-customer is served.
j type 1-customers are waiting: j = 0, 1, 2. k type 2-customers are waiting: k = 0, 1, 2; 0 ≤ j + k ≤ 2.
The corresponding transition graph is

(transition graph: nodes are the states (0,0,0), (1,0,0), (2,0,0), (1,1,0), (1,0,1), (1,2,0), (1,1,1), (1,0,2), (2,1,0), (2,0,1), (2,2,0), (2,1,1), (2,0,2), with arcs labeled by the arrival rates λ_1, λ_2 and the service rates μ_1, μ_2)

SOLUTIONS MANUAL


9.36) A queueing network consists of two servers 1 and 2 in series. Server 1 is subject to a homogeneous Poisson input with an average arrival rate of λ = 1/12 customers per minute. A customer is lost if server 1 is busy. From server 1 a customer goes to server 2 for further service. If server 2 is busy, the customer is lost. A customer, after having been served by server 2, leaves the system. All arrival and service times are independent. The service times of servers 1 and 2 are exponential with respective mean values 1/μ_1 = 6 [minutes] and 1/μ_2 = 12 [minutes]. With regard to the total input at server 1, what percentage of customers is served by both servers?

Solution The system is modeled by a homogeneous Markov chain with states (i, j), where i is the number of customers at server 1 and j is the number of customers at server 2; i, j = 0, 1. The corresponding transition graph is

(transition graph: (0,0) → (1,0) with rate λ; (1,0) → (0,1) with rate μ_1; (0,1) → (1,1) with rate λ; (0,1) → (0,0) with rate μ_2; (1,1) → (1,0) with rate μ_2; (1,1) → (0,1) with rate μ_1)

For convenience, the states will be denoted as follows: 0 = (0, 0), 1 = (1, 0), 2 = (0, 1), 3 = (1, 1). The system of equations for the stationary state probabilities is

λ π_0 = μ_2 π_2
μ_1 π_1 = λ π_0 + μ_2 π_3
(λ + μ_2) π_2 = μ_1 π_1 + μ_1 π_3
(μ_1 + μ_2) π_3 = λ π_2

The solution is

π_0 = [1 + (λ/μ_1)(1 + λ/(μ_1 + μ_2)) + λ/μ_2 + λ²/(μ_2 (μ_1 + μ_2))]^{−1} ,
π_1 = (λ/μ_1)(1 + λ/(μ_1 + μ_2)) π_0 ,
π_2 = (λ/μ_2) π_0 ,
π_3 = [λ²/(μ_2 (μ_1 + μ_2))] π_0 .

By inserting the numerical parameters λ = 1/12 [min⁻¹], μ_1 = 1/6 [min⁻¹], and μ_2 = 1/12 [min⁻¹], the π_i become

π_0 = π_(0,0) = 3/9 ,  π_1 = π_(1,0) = 2/9 ,  π_2 = π_(0,1) = 3/9 ,  π_3 = π_(1,1) = 1/9 .


Let A be the random event that an arriving customer will be served by both servers. This event can only have positive probability if, at the arrival of the customer, the system is in state (0,0) or (0,1). Hence,

P(A) = P(A | system in state (0,0)) π_(0,0) + P(A | system in state (0,1)) π_(0,1) .

Obviously, P(A | system in state (0,0)) = 1.

If the system is in state (0,1), then A will occur if and only if the service time Z_1 of the arriving customer at server 1 is greater than the residual service time Z_2 of the customer at server 2. Therefore,

P(A | system in state (0,1)) = P(Z_1 > Z_2) = ∫_0^∞ e^{−t/6} (1/12) e^{−t/12} dt = (1/12)/(1/6 + 1/12) = 1/3 .

This derivation makes use of the memoryless property of the exponential distribution. Thus,

P(A) = 1 · (3/9) + (1/3) · (3/9) = 4/9 ≈ 0.444 .

Therefore, about 44.4 % of all arriving customers are served by both servers.
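The stationary probabilities and P(A) above can be checked numerically (an added sketch, not part of the original solution) by writing down the generator matrix of the four-state chain and solving π Q = 0:

```python
import numpy as np

# Generator-based check of exercise 9.36.
# States 0=(0,0), 1=(1,0), 2=(0,1), 3=(1,1); all rates in 1/min.
lam, mu1, mu2 = 1/12, 1/6, 1/12
Q = np.zeros((4, 4))
Q[0, 1] = lam      # arrival at idle system
Q[1, 2] = mu1      # server 1 finishes, customer moves to server 2
Q[2, 3] = lam      # arrival while server 2 busy
Q[2, 0] = mu2      # server 2 finishes
Q[3, 1] = mu2      # server 2 finishes while server 1 busy
Q[3, 2] = mu1      # server 1 finishes while server 2 busy (customer lost)
np.fill_diagonal(Q, -Q.sum(axis=1))

# Solve pi Q = 0 together with sum(pi) = 1 (overdetermined but consistent).
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
P_A = 1 * pi[0] + (mu2 / (mu1 + mu2)) * pi[2]
print(np.round(pi * 9))        # pi ≈ (3, 2, 3, 1)/9
print(round(P_A, 4))           # ≈ 4/9
```

The least-squares solve recovers the hand-computed values exactly, since the linear system is consistent.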

9.37) A queueing network consists of three nodes (queueing systems) 1, 2 and 3, each of them of type M/M/1/∞. The external inputs into the nodes have intensities λ_1 = 4, λ_2 = 8, and λ_3 = 12, all [h⁻¹]. The respective mean service times at the nodes are 4, 2 and 1 [min]. After having been served by node 1, a customer goes to node 2 or node 3 with probability 0.4 each, or leaves the system with probability 0.2. From node 2, a customer goes to node 3 with probability 0.9 or leaves the system with probability 0.1. From node 3, a customer goes to node 1 with probability 0.2 or leaves the system with probability 0.8. The external inputs and the service times are independent. (1) Check whether this queueing network is a Jackson network. (2) Determine the stationary state probabilities of the network.

Solution (1) This is a Jackson network, since the assumptions 1 to 4, pages 303 and 304, are fulfilled.

(2) In per-minute units, the external input rates are

λ_1 = 1/15, λ_2 = 2/15, λ_3 = 3/15 [min⁻¹].

By (9.106), the total inputs α_i into the nodes satisfy

α_1 = 1/15 + 0.2 α_3
α_2 = 2/15 + 0.4 α_1
α_3 = 3/15 + 0.4 α_1 + 0.9 α_2 .

The solution is

α_1 = 0.1541, α_2 = 0.1950, α_3 = 0.4371.

Hence, with the service rates μ_1 = 1/4, μ_2 = 1/2, μ_3 = 1 [min⁻¹],

ρ_1 = α_1/μ_1 = 0.6164, ρ_2 = α_2/μ_2 = 0.3900, ρ_3 = α_3/μ_3 = 0.4371.

Now the stationary state probabilities π_x that the system is in state x = (x_1, x_2, x_3) are given by (9.110) with n = 3 and φ_i(0) = 1 − ρ_i:

φ_i(x_i) = (1 − ρ_i) ρ_i^{x_i} ,  x_i = 0, 1, ...;  i = 1, 2, 3.
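The traffic equations above are a small linear system and can be solved mechanically (added check, not part of the original solution); in matrix form they read α = λ + Rᵀα, where R holds the routing probabilities r_ij:

```python
import numpy as np

# Traffic equations of the Jackson network in exercise 9.37 (rates in 1/min).
lam = np.array([1/15, 2/15, 3/15])      # external input rates
mu = np.array([1/4, 1/2, 1.0])          # service rates (mean times 4, 2, 1 min)
R = np.array([[0.0, 0.4, 0.4],          # routing probabilities r_ij
              [0.0, 0.0, 0.9],
              [0.2, 0.0, 0.0]])

# alpha = lam + R^T alpha  <=>  (I - R^T) alpha = lam
alpha = np.linalg.solve(np.eye(3) - R.T, lam)
rho = alpha / mu
print(np.round(alpha, 4))               # alpha ≈ [0.1541, 0.1950, 0.4371]
print(np.round(rho, 4))                 # rho   ≈ [0.6164, 0.3900, 0.4371]
```

All ρ_i are below 1, so the product-form stationary distribution exists.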


9.38) A closed queueing network consists of 3 nodes. Each one has 2 servers. There are 2 customers in the network. After having been served at a node, a customer goes to one of the other nodes with equal probability. All service times are independent and have an exponential distribution with parameter μ. What is the stationary probability of finding both customers at the same node?

Solution The transition matrix controlling the transitions of customers between the nodes 1, 2, and 3 is

P = ⎛ 0    0.5  0.5 ⎞
    ⎜ 0.5  0    0.5 ⎟
    ⎝ 0.5  0.5  0   ⎠

This is a doubly stochastic matrix. Hence, the stationary distribution of the corresponding discrete-time Markov chain is π_1 = π_2 = π_3 = 1/3. Let x = (x_1, x_2, x_3); x_i = 0, 1, 2; be a state of the network, where x_i denotes the number of customers at node i. The state space Z of the network has 6 elements: (2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1). The corresponding stationary state probabilities π_x are given by (9.115). Since each node has 2 servers, a state with both customers at one node carries the service factor 1/(μ · 2μ), whereas a state with the customers at different nodes carries the factor 1/(μ · μ). In particular, the stationary probability of state x = (2, 0, 0) is

π_(2,0,0) = h · (1/2)(1/(3μ))²  with  h = [3 · (1/2)(1/(3μ))² + 3 · (1/(3μ))²]^{−1} .

Hence, π_(2,0,0) = π_(0,2,0) = π_(0,0,2) = 1/9, while each of the three states with the customers at different nodes has probability 2/9. Thus, the desired probability is

π_(2,0,0) + π_(0,2,0) + π_(0,0,2) = 1/3 .

(Check: since each node has 2 servers, the two customers never wait and therefore move independently, each with uniform stationary distribution over the three nodes, so the probability that they occupy the same node is 3 · (1/3)² = 1/3.)

9.39) Depending on demand, a conveyor belt operates at 3 different speed levels 1, 2, and 3. A transition from level i to level j is made with probability p_ij with

p_12 = 0.8 , p_13 = 0.2 , p_21 = p_23 = 0.5 , p_31 = 0.4 , p_32 = 0.6 .

The respective mean times the conveyor belt operates at levels 1, 2, or 3 between transitions are μ_1 = 45, μ_2 = 30, and μ_3 = 12 [hours]. Determine the stationary percentages of time in which the conveyor belt operates at levels 1, 2, and 3 by modeling the situation as a semi-Markov chain.

Solution First the stationary state probabilities π_1, π_2, and π_3 of the embedded discrete-time Markov chain with state space Z = {1, 2, 3} are determined. The transition graph of this Markov chain is

(transition graph on {1, 2, 3} with arcs 1 → 2 (0.8), 1 → 3 (0.2), 2 → 1 (0.5), 2 → 3 (0.5), 3 → 1 (0.4), 3 → 2 (0.6))

Thus, by (8.9), page 342, the π_i satisfy the system of equations

π_1 = 0.5 π_2 + 0.4 π_3
π_2 = 0.8 π_1 + 0.6 π_3
1 = π_1 + π_2 + π_3

The solution is π_1 = 0.3153, π_2 = 0.4144, π_3 = 0.2703. Now the desired percentages p_1, p_2, and p_3 are obtained from formula (9.120), page 460:

p_1 = (0.3153 · 45)/(0.3153 · 45 + 0.4144 · 30 + 0.2703 · 12) [100 %] = 47.51 %
p_2 = (0.4144 · 30)/(0.3153 · 45 + 0.4144 · 30 + 0.2703 · 12) [100 %] = 41.63 %
p_3 = (0.2703 · 12)/(0.3153 · 45 + 0.4144 · 30 + 0.2703 · 12) [100 %] = 10.86 %
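The two steps of this semi-Markov computation, solving π = πP and then weighting by the mean sojourn times as in (9.120), can be reproduced in a few lines (added check, not part of the original solution):

```python
import numpy as np

# Semi-Markov model of exercise 9.39.
P = np.array([[0.0, 0.8, 0.2],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])
mu = np.array([45.0, 30.0, 12.0])       # mean sojourn times [h]

# Stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)

p = pi * mu / (pi @ mu)                 # time-stationary probabilities, (9.120)
print(np.round(pi, 4))                  # ≈ [0.3153, 0.4144, 0.2703]
print(np.round(100 * p, 2))             # ≈ [47.51, 41.63, 10.86] percent
```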

9.40) The mean lifetime of a system is 620 hours. There are two failure types: Repairing the system after a type 1-failure requires 20 hours on average and after a type 2-failure 40 hours on average. 20% of all failures are type 2-failures. There is no dependence between the system lifetime and the subsequent failure type. After each repair the system is 'as good as new'. The repaired system immediately resumes its work. This process is continued to infinity. All life- and repair times are independent. (1) Describe the situation by a semi-Markov chain with 3 states and draw the transition graph of the underlying embedded discrete-time Markov chain. (2) Determine the stationary state probabilities of the system. Solution The following system states are introduced: 0 working, 1 repair after type 2-failure,

2 repair after type 1-failure.

The transition graph of the embedded discrete-time Markov chain {X 0 , X 1 , ...} is 0.8

0.2

1

0 1

2 1

Hence, by (8.9), the stationary state distribution of {X_0, X_1, ...} satisfies

π_0 = π_1 + π_2 ,  π_1 = 0.2 π_0 ,  π_2 = 0.8 π_0 .

The solution is π_0 = 0.5, π_1 = 0.1, π_2 = 0.4. Therefore, by (9.120), the stationary state probabilities p_0, p_1, and p_2 of the system are

p_0 = (0.5 · 620)/(0.5 · 620 + 0.1 · 40 + 0.4 · 20) = 310/322 = 0.9627 ,
p_1 = (0.1 · 40)/(0.5 · 620 + 0.1 · 40 + 0.4 · 20) = 4/322 = 0.0124 ,
p_2 = (0.4 · 20)/(0.5 · 620 + 0.1 · 40 + 0.4 · 20) = 8/322 = 0.0248 .


9.41) Under otherwise the same model assumptions as in example 9.25, page 464, determine the stationary probabilities of the states 0, 1, and 2 introduced there on condition that the service time B is a constant μ; i.e., determine the stationary state probabilities of the loss system M/D/1/0 with unreliable server. Solution The transition graph of the embedded discrete-time Markov chain is

(transition graph: 0 → 1 (p_01), 0 → 2 (p_02), 1 → 0 (p_10), 1 → 2 (p_12), 2 → 0 (1))

The transition probabilities are

p_01 = P(L < L_0) = λ/(λ + λ_0) ,  p_02 = 1 − p_01 = λ_0/(λ + λ_0) ,
p_10 = P(L_1 > μ) = e^{−λ_1 μ} ,  p_12 = 1 − p_10 = 1 − e^{−λ_1 μ} .

The stationary state probabilities of the embedded discrete-time Markov chain satisfy

π_0 = p_10 π_1 + π_2 ,  π_1 = p_01 π_0 ,  π_0 + π_1 + π_2 = 1 .

The solution is

π_0 = (λ + λ_0)/[2λ_0 + λ(3 − e^{−λ_1 μ})] ,
π_1 = λ/[2λ_0 + λ(3 − e^{−λ_1 μ})] ,
π_2 = [λ_0 + λ(1 − e^{−λ_1 μ})]/[2λ_0 + λ(3 − e^{−λ_1 μ})] .

The mean sojourn times of the system in the states are:

μ_0 = E(min(L_0, Y)) = 1/(λ + λ_0) ,
μ_1 = E(min(L_1, μ)) = E(L_1 | L_1 ≤ μ) P(L_1 ≤ μ) + μ e^{−λ_1 μ} = ∫_0^μ t λ_1 e^{−λ_1 t} dt + μ e^{−λ_1 μ} = (1 − e^{−λ_1 μ})/λ_1 ,
μ_2 = μ .

Note: Example 9.26 assumes that the 'service time' (repair time) of the failed server has the same distribution as the service time of a customer. Hence, in the context of this exercise, μ_2 = μ. Now formulas (9.120) yield the stationary state probabilities of the system.
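For concreteness, the whole chain of formulas can be evaluated numerically (an added illustration; the parameter values λ = 1, λ_0 = 0.2, λ_1 = 0.5, μ = 2 are arbitrary):

```python
import math

# Stationary quantities of the M/D/1/0 loss system with unreliable server
# (exercise 9.41) for sample parameter values.
lam, lam0, lam1, mu = 1.0, 0.2, 0.5, 2.0   # mu is the constant service time

q = math.exp(-lam1 * mu)                   # p10 = P(L1 > mu)
den = 2 * lam0 + lam * (3 - q)
pi = [(lam + lam0) / den,                  # pi_0
      lam / den,                           # pi_1
      (lam0 + lam * (1 - q)) / den]        # pi_2

# Mean sojourn times in states 0, 1, 2:
m = [1 / (lam + lam0), (1 - q) / lam1, mu]

total = sum(pi_i * m_i for pi_i, m_i in zip(pi, m))
p = [pi_i * m_i / total for pi_i, m_i in zip(pi, m)]   # formula (9.120)
print(round(sum(pi), 6), round(sum(p), 6))             # both normalize to 1
```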


9.42) A system has two different failure types: type 1 and type 2. After a type i-failure the system is said to be in failure state i; i = 1, 2. The time L_i to a type i-failure has an exponential distribution with parameter λ_i; i = 1, 2. Thus, if at time t = 0 a new system starts working, the time to its first failure is Y_0 = min(L_1, L_2). The random variables L_1 and L_2 are assumed to be independent. After a type 1-failure, the system is switched from failure state 1 into failure state 2. The respective mean sojourn times of the system in states 1 and 2 are μ_1 and μ_2. When in state 2, the system is being renewed. Thus, μ_1 is the mean switching time and μ_2 the mean renewal time. A renewed system immediately starts working, i.e., the system makes a transition from state 2 to state 0 with probability 1. This process continues to infinity. (For motivation, see example 9.7.) (1) Describe the system behaviour by a semi-Markov chain and draw the transition graph of the embedded discrete-time Markov chain. (2) Determine the stationary availability A_0 of the system.

Solution (1) The following states of the system are introduced:
0 system operating, 1 system in failure state 1, 2 system in failure state 2.

The corresponding embedded discrete-time Markov chain has the transition graph

(0 → 1 (p_01), 0 → 2 (p_02), 1 → 2 (1), 2 → 0 (1))

The transition probability p_01 is

p_01 = P(L_1 < L_2) = ∫_0^∞ e^{−λ_2 t} λ_1 e^{−λ_1 t} dt = λ_1 ∫_0^∞ e^{−(λ_1 + λ_2) t} dt ,

so that

p_01 = λ_1/(λ_1 + λ_2) ,  p_02 = 1 − p_01 = λ_2/(λ_1 + λ_2) .

Therefore, the stationary state distribution of the embedded discrete-time Markov chain satisfies

π_0 = π_2 ,  π_1 = p_01 π_0 ,  π_2 = π_1 + p_02 π_0 .

The solution is

π_0 = (λ_1 + λ_2)/(3λ_1 + 2λ_2) ,  π_1 = λ_1/(3λ_1 + 2λ_2) ,  π_2 = (λ_1 + λ_2)/(3λ_1 + 2λ_2) .

By (2.52), the mean time the system stays in state 0 is

μ_0 = ∫_0^∞ e^{−(λ_1 + λ_2) t} dt = 1/(λ_1 + λ_2) .

Thus, by (9.120), the stationary probability that the system is in state 0 (the stationary availability) is

A_0 = 1/[1 + λ_1 μ_1 + (λ_1 + λ_2) μ_2] .

The probabilities that the system is in the states 1 and 2, respectively, are

A_1 = λ_1 μ_1/[1 + λ_1 μ_1 + (λ_1 + λ_2) μ_2] ,  A_2 = (λ_1 + λ_2) μ_2/[1 + λ_1 μ_1 + (λ_1 + λ_2) μ_2] .

CHAPTER 10 Martingales

10.1) Let Y_0, Y_1, ... be a sequence of independent random variables, which are identically distributed as N(0, 1). Is the discrete-time stochastic process {X_0, X_1, ...} with

(1) X_n = Σ_{i=0}^n Y_i² ;  n = 0, 1, ... ,
(2) X_n = Σ_{i=0}^n Y_i³ ;  n = 0, 1, ... ,
(3) X_n = Σ_{i=0}^n |Y_i| ;  n = 0, 1, ... ,

a martingale?

Solution (1) Since E(Y_i) = 0, the mean value of Y_i² is the variance of Y_i: E(Y_i²) = Var(Y_i) = 1 > 0. Therefore, as shown in example 10.1, {X_0, X_1, ...} cannot be a martingale.
(2) E(Y_i³) = 0 so that {X_0, X_1, ...} is a martingale.
(3) E(|Y_i|) = E(|N(0, 1)|) = √(2/π) > 0 (see page 79) so that {X_0, X_1, ...} cannot be a martingale.

10.2) Let Y_0, Y_1, ... be a sequence of independent random variables with finite mean values. Show that the discrete-time stochastic process {X_0, X_1, ...} generated by

X_n = Σ_{i=0}^n (Y_i − E(Y_i))

is a martingale.

Solution Since E(Y_i − E(Y_i)) = 0, X_n is a sum of independent random variables with mean values 0. Therefore, by example 10.1, {X_0, X_1, ...} is a martingale.

10.3) Let a discrete-time stochastic process {X_0, X_1, ...} be defined by

X_n = Y_0 · Y_1 · ... · Y_n ,

where the random variables Y_i are independent and have a uniform distribution over the interval [0, T]. Under which condition is {X_0, X_1, ...} (1) a martingale, (2) a submartingale, (3) a supermartingale?

Solution The mean value of Y_i is E(Y_i) = T/2. Hence, E(Y_i) = 1 if and only if T = 2. Thus, by example 10.2, {X_0, X_1, ...} is (1) a martingale if T = 2, (2) a submartingale if T ≥ 2, and (3) a supermartingale if T ≤ 2.


10.4) Determine the mean value of the loss immediately before the win when applying the doubling strategy, i.e., determine E(X_{N−1}) (see example 10.6 for notation and motivation).

Solution On condition that the win happens at the n th game, Jean's total loss up to the (n−1) th game is 2^{n−1} − 1. The probability that Jean wins the n th game is (1/2)^{n−1} · (1/2) = (1/2)^n. Hence, the mean loss of Jean up to the game immediately before the win is

E(Loss) = lim_{n→∞} Σ_{i=1}^n (1/2)^i (2^{i−1} − 1) = lim_{n→∞} Σ_{i=1}^n [1/2 − (1/2)^i]
 = lim_{n→∞} [n/2 − 1 + (1/2)^n] = ∞.

This result supports the claim that on average no money can be made with the doubling strategy.

10.5) Why is theorem 10.2 not applicable to the sequence of 'winnings' which arises by applying the doubling strategy (example 10.6)?

Solution None of the conditions of theorem 10.2 is fulfilled.

10.6) Jean is not happy with the winnings he can make when applying the 'doubling strategy'. Hence, under otherwise the same assumptions and notation as in example 10.6 (p = 1/2), he triples his bet size after every lost game, starting again with €1. After the first win Jean stops gambling. What are his winnings when he loses 5 games in a row and wins the 6th one?

Solution

game        1     2     3     4     5     6
result      loss  loss  loss  loss  loss  win
bet         1     3     9     27    81    243
'winnings'  −1    −4    −13   −40   −121  +122
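The table above follows a simple rule: the winnings equal the final (winning) bet minus the sum of all lost bets. A small helper makes this explicit for any bet-multiplication factor (added illustration; the function name is ours):

```python
def winnings_after_first_win(lost_games, factor=3, start=1):
    """Net winnings when the first `lost_games` games are lost and the
    next one is won, multiplying the bet by `factor` after each loss."""
    bets = [start * factor ** i for i in range(lost_games + 1)]
    return bets[-1] - sum(bets[:-1])      # winning bet minus all losses

print(winnings_after_first_win(5))            # -> 122 (tripling, as in the table)
print(winnings_after_first_win(5, factor=2))  # -> 1 (doubling strategy)
```

With doubling the net win is always €1, whereas tripling leaves a net win that grows with the number of preceding losses.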

10.7) Starting at value 0, the profit of an investor increases per week by one unit with probability p, p > 1/2, or decreases per week by one unit with probability 1 − p. The weekly increments of the investor's profit are assumed to be independent. Let N be the random number of weeks until the investor's profit reaches for the first time a given positive integer n. By means of Wald's equation, determine E(N).

Solution In the i th week, the increment of the investor's profit is

X_i = +1 with probability p,  X_i = −1 with probability 1 − p.

By definition of N, X_1 + X_2 + ... + X_N = n. Taking the mean value on both sides of this equality (Wald's equation (10.22)) yields E(N) · E(X_1) = E(N) · (2p − 1) = n. Thus, for p > 1/2,

E(N) = n/(2p − 1) .
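Wald's identity E(N) = n/(2p − 1) is easy to confirm by simulation (added check; the values n = 10, p = 0.7 are arbitrary):

```python
import random

def weeks_to_reach(n, p, rng):
    """Number of +/-1 steps (P(+1) = p) until the running sum first hits n."""
    profit, weeks = 0, 0
    while profit < n:
        profit += 1 if rng.random() < p else -1
        weeks += 1
    return weeks

rng = random.Random(1)
n, p, runs = 10, 0.7, 20_000
est = sum(weeks_to_reach(n, p, rng) for _ in range(runs)) / runs
print(round(est, 1), n / (2 * p - 1))   # estimate vs exact value 25.0
```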

10 MARTINGALES


10.8) Starting at value 0, the fortune of an investor increases per week by $200 with probability 3/8, remains constant with probability 3/8, and decreases by $200 with probability 2/8. The weekly increments of the investor's fortune are assumed to be independent. The investor stops the 'game' as soon as he has made a total fortune of $2000 or a loss of $1000, whichever occurs first. By using suitable martingales and applying the optional stopping theorem, determine (1) the probability p_2000 that the investor finishes the 'game' with a profit of $2000, (2) the probability p_−1000 that the investor finishes the 'game' with a loss of $1000, (3) the mean duration E(N) of the 'game.'

Solution The increment of the investor's fortune in the i th week is

Z_i = 200 with probability 3/8,  Z_i = 0 with probability 3/8,  Z_i = −200 with probability 2/8,

and has mean value E(Z_i) = 25. The martingale {X_1, X_2, ...} is defined by

X_n = Σ_{i=1}^n (Z_i − E(Z_i)) = Y_n − 25n  with  Y_n = Z_1 + Z_2 + ... + Z_n .

A finite stopping time for {X_1, X_2, ...} is

N = min{n : Y_n = 2000 or Y_n = −1000}.   (i)

By the martingale stopping theorem 10.2, E(X_1) = 0 = E(Y_N) − 25 E(N) or, equivalently, since p_2000 = P(Y_N = 2000) and p_−1000 = P(Y_N = −1000),

2000 · p_2000 − 1000 · p_−1000 = 25 E(N).   (ii)

A second equation for the unknowns p_2000, p_−1000, and E(N) is

p_2000 + p_−1000 = 1.   (iii)

A third equation is obtained by the exponential martingale analogously to example 10.11: Let U_i = e^{θ Z_i}. The mean value of U_i is

E(U_i) = e^{200θ} · 3/8 + 1 · 3/8 + e^{−200θ} · 2/8 .

Hence, E(U_i) = 1 if and only if θ = θ_1 with θ_1 = (1/200) ln(2/3). Now, let

X_n = Π_{i=1}^n U_i = e^{θ_1 Y_n} .

If the martingale {X_1, X_2, ...} is stopped at time N defined by (i), then the martingale stopping theorem 10.2 yields

E(U_1) = 1 = e^{2000 θ_1} p_2000 + e^{−1000 θ_1} p_−1000 .

From this equation and (iii),

p_2000 = 0.8703 ,  p_−1000 = 0.1297 .

Now, from (ii), E(N) = 64.4.
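The numbers quoted above follow mechanically from the two martingale equations; solving them in code reproduces all three answers (added check, not part of the original solution):

```python
# Numerical reproduction of the answers of exercise 10.8.
# e^{200 theta_1} = u is the root != 1 of 3u^2 - 5u + 2 = 0, i.e. u = 2/3.
u = 2 / 3
a = u ** 10          # e^{2000 theta_1}
b = (1 / u) ** 5     # e^{-1000 theta_1}

# Exponential martingale: a*p + b*(1 - p) = 1, with p = p_2000.
p2000 = (b - 1) / (b - a)
p_loss = 1 - p2000

# Linear martingale: 2000*p2000 - 1000*p_loss = 25*E(N).
EN = (2000 * p2000 - 1000 * p_loss) / 25
print(round(p2000, 4), round(p_loss, 4), round(EN, 1))  # -> 0.8703 0.1297 64.4
```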


10.9) Let X_0 be uniformly distributed over [0, T], X_1 be uniformly distributed over [0, X_0], and, generally, X_{i+1} be uniformly distributed over [0, X_i], i = 0, 1, ...
(1) Prove that the sequence {X_0, X_1, ...} is a supermartingale.
(2) Show that

E(X_k) = T/2^{k+1} ;  k = 0, 1, ...   (i)

Solution (1) For any sequence x_0, x_1, ..., x_n with T ≥ x_0 ≥ x_1 ≥ ... ≥ x_n ≥ 0,

E(X_{n+1} − X_n | X_n = x_n, ..., X_1 = x_1, X_0 = x_0) = E(X_{n+1} − X_n | X_n = x_n)
 = E(X_{n+1} | X_n = x_n) − E(X_n | X_n = x_n) = x_n/2 − x_n = −x_n/2 ≤ 0 .

Hence, by (10.6), {X_0, X_1, ...} is a supermartingale.

(2) For k = 0 formula (i) is true, since X_0 has a uniform distribution over [0, T]. To prove (i) by induction, assume that (i) is true for any k with 1 ≤ k. Then, given X_0 = x_0, the induction assumption implies that

E(X_k | X_0 = x_0) = x_0/2^k .

(X_0 = x_0 has assumed the role of T.) Since X_0 is uniformly distributed over [0, T],

E(X_k) = (1/T) ∫_0^T (x_0/2^k) dx_0 = (1/T) · T²/(2 · 2^k) = T/2^{k+1} .
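The halving of the mean at each step is easy to see empirically (added check; T = 8 and k = 2 are arbitrary, giving E(X_2) = 8/2³ = 1):

```python
import random

def sample_chain(T, k, rng):
    """Sample X_k of exercise 10.9: repeated uniforms on shrinking intervals."""
    x = rng.uniform(0, T)
    for _ in range(k):
        x = rng.uniform(0, x)
    return x

rng = random.Random(7)
T, k, runs = 8.0, 2, 100_000
est = sum(sample_chain(T, k, rng) for _ in range(runs)) / runs
print(round(est, 2), T / 2 ** (k + 1))   # estimate vs exact value 1.0
```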

10.10) Let {X_1, X_2, ...} be a homogeneous discrete-time Markov chain with the finite state space Z = {0, 1, ..., n} and transition probabilities

p_ij = P(X_{k+1} = j | X_k = i) = C(n, j) (i/n)^j ((n − i)/n)^{n−j} ;  i, j ∈ Z .

Show that {X_1, X_2, ...} is a martingale. (In genetics, this martingale is known as the Wright-Fisher model without mutation.)

Solution Since {X_1, X_2, ...} is a homogeneous Markov chain,

E(X_{k+1} − X_k | X_k = i_k, ..., X_1 = i_1, X_0 = i_0) = E(X_{k+1} − X_k | X_k = i_k).

Given i ∈ Z, the transition probabilities p_ij are generated by a binomial distribution with parameters n and p = i/n; j = 0, 1, ..., n. If X_k = i, then X_{k+1} has this binomial distribution so that its mean value is E(X_{k+1}) = n · i/n = i. Therefore,

E(X_{k+1} − X_k | X_k = i_k) = E(X_{k+1}) − i_k = i_k − i_k = 0.

Hence, {X_1, X_2, ...} is a martingale.

Remark The function h(i) = i/n, i ∈ Z, is concordant to this martingale (definition 10.3), since

Σ_{j=0}^n (j/n) C(n, j) (i/n)^j ((n − i)/n)^{n−j} = (1/n) · n · (i/n) = i/n = h(i).
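The martingale property means that the mean of the chain never moves, even though individual paths drift toward the absorbing states 0 and n. A quick simulation illustrates this (added; n = 20, X_0 = 6 are arbitrary):

```python
import random

def step(i, n, rng):
    """One Wright-Fisher step: X_{k+1} | X_k = i is binomial(n, i/n)."""
    return sum(1 for _ in range(n) if rng.random() < i / n)

rng = random.Random(3)
n, i0, steps, runs = 20, 6, 10, 10_000
total = 0
for _ in range(runs):
    x = i0
    for _ in range(steps):
        x = step(x, n, rng)
    total += x
print(round(total / runs, 1))   # mean after 10 steps stays close to i0 = 6
```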


10.11) Let L be a stopping time for a stochastic process {X(t), t ∈ T} in discrete or continuous time and z a nonnegative constant. Verify that L ∧ z = min(L, z) is a stopping time for {X(t), t ∈ T}.

Solution If L ≤ z, then L ∧ z = L, and L is a stopping time for {X(t), t ∈ T} by assumption. If z < L, then L ∧ z = z. Since a constant does not depend on any of the random variables X(t), z is obviously a stopping time for {X(t), t ∈ T}.

10.12) Let {N(t), t ≥ 0} be a nonhomogeneous Poisson process with intensity function λ(t) and trend function

Λ(t) = ∫_0^t λ(x) dx.

Check whether the stochastic process {X(t), t ≥ 0} with X(t) = N(t) − Λ(t) is a martingale.

Solution This is a special case of the following exercise 10.13.

10.13) Show that every stochastic process {X(t), t ∈ T} satisfying E(|X(t)|) < ∞, t ∈ T, which has a constant trend function and independent increments, is a martingale.

Solution For s < t, assuming E(X(t)) ≡ m, and using the notation (10.29),

E(X(t) | X(y), y ≤ s) = E(X(s) + X(t) − X(s) | X(y), y ≤ s)
 = X(s) + E(X(t) − X(s) | X(y), y ≤ s) = X(s) + E(X(t) − X(s)) = X(s) + m − m = X(s).

10.14)* The ruin problem described in section 7.2.7, page 292, is modified in the following way: The risk reserve process {R(t), t ≥ 0} is only observed at the end of each year. The total capital of the insurance company at the end of year n is

R(n) = x + κn − Σ_{i=0}^n M_i ;  n = 0, 1, 2, ...,

where x is the initial capital, κ is the constant premium income per year, and M_i is the total claim size the insurance company has to cover in year i, M_0 = 0. The M_1, M_2, ... are assumed to be independent and identically distributed as M = N(μ, σ²) with κ > μ > 3σ². Let p(x) be the ruin probability of the company:

p(x) = P(there is an n = 1, 2, ... so that R(n) < 0).

Show that

p(x) ≤ e^{−2(κ−μ) x/σ²} ,  x ≥ 0.


Solution According to formula (2.128), page 102, the Laplace transform of M is

E(e^{−sM}) = e^{−μs + σ²s²/2} .   (i)

By definition of R(n),

e^{−sR(n)} = e^{−sx − sκn + s Σ_{i=0}^n M_i} = e^{−sx} Π_{i=1}^n e^{−s(κ − M_i)} .

Let X_n = e^{−sR(n)}. Then, by (i), for n = 1, 2, ...,

E(X_n) = e^{−sx} Π_{i=1}^n E(e^{−s(κ − M_i)}) = e^{−sx} (e^{−(κ−μ)s + σ²s²/2})^n .   (ii)

Choose s = s_0 such that −(κ − μ)s + σ²s²/2 = 0, i.e.

s_0 = 2(κ − μ)/σ² .

With s = s_0, the factors E(e^{−s(κ − M_i)}) in (ii) have mean value 1. Hence, by example 10.2, the random sequence {X_0, X_1, ...} is a martingale with the constant (time-independent) trend function

m = E(X_0) = e^{−2(κ−μ) x/σ²} .

A stopping time for the martingale {X_0, X_1, ...} is L = min{n : R(n) < 0}. By the previous exercise 10.11, L ∧ z with 0 < z < ∞ is a bounded stopping time for this martingale. Hence, from theorem 10.2,

e^{−2(κ−μ) x/σ²} = E(e^{−s_0 R(L∧z)})
 = E(e^{−s_0 R(L∧z)} | L < z) P(L < z) + E(e^{−s_0 R(L∧z)} | L ≥ z) P(L ≥ z) ≥ P(L < z).   (iii)

This inequality is true since R(L ∧ z) < 0 for L < z, so that E(e^{−s_0 R(L∧z)} | L < z) ≥ 1, while the second term is nonnegative. Since (iii) holds for all z = 0, 1, 2, ... and lim_{z→∞} P(L < z) = p(x), the assertion is proved.
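The exponential bound can be illustrated by simulating the yearly reserve process over a finite horizon and comparing the observed ruin frequency with e^{−2(κ−μ)x/σ²} (added sketch; the parameter values are arbitrary, and a finite horizon only underestimates p(x), which is harmless for checking an upper bound):

```python
import math
import random

# Monte Carlo illustration of the ruin bound of exercise 10.14.
rng = random.Random(5)
x, kappa, mu, sigma = 0.5, 3.0, 2.0, 1.0
bound = math.exp(-2 * (kappa - mu) * x / sigma ** 2)

runs, horizon, ruined = 20_000, 100, 0
for _ in range(runs):
    r = x
    for _ in range(horizon):
        r += kappa - rng.gauss(mu, sigma)  # premium minus yearly claim
        if r < 0:
            ruined += 1
            break
p_hat = ruined / runs
print(round(p_hat, 4), round(bound, 4))    # p_hat stays below the bound
```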

CHAPTER 11 Brownian Motion

Note In all exercises, {B(t), t ≥ 0} is the Brownian motion with parameter Var(B(1)) = σ², and {S(t), t ≥ 0} is the standard Brownian motion (σ = 1).

11.1) Verify that the probability density f_t(x) of B(t),

f_t(x) = [1/(√(2πt) σ)] e^{−x²/(2σ²t)} ,  t > 0,

satisfies the thermal conduction equation

∂f_t(x)/∂t = c ∂²f_t(x)/∂x² .   (i)

Solution

∂f_t(x)/∂t = −[1/(2σ√(2πt³))] e^{−x²/(2σ²t)} + [1/(√(2πt) σ)] · [x²/(2σ²t²)] e^{−x²/(2σ²t)}
 = [1/(2σ√(2πt³))] e^{−x²/(2σ²t)} [x²/(σ²t) − 1] ,

∂f_t(x)/∂x = −[x/(σ³√(2πt³))] e^{−x²/(2σ²t)} ,

∂²f_t(x)/∂x² = −[1/(σ³√(2πt³))] e^{−x²/(2σ²t)} + [x/(σ³√(2πt³))] · [x/(σ²t)] e^{−x²/(2σ²t)}
 = [1/(σ³√(2πt³))] e^{−x²/(2σ²t)} [x²/(σ²t) − 1] .

Thus, f_t(x) satisfies (i) with c = σ²/2.

11.2) Determine the conditional probability density of B(t) given B(s) = y, 0 ≤ s < t.

Solution The condition B(s) = y defines a shifted Brownian motion {B_y(t), t > s} starting at time t = s with the initial value B_y(s) = y. Hence, it can be written in the form B_y(t) = y + B(t − s), t ≥ s. Thus, B_y(t), t > s, has trend function m_y(t) ≡ y and density

f_t(x | B(s) = y) = [1/(√(2π(t − s)) σ)] e^{−(x−y)²/(2σ²(t−s))} ,  t > s.


11.3)* Prove that the stochastic process {B̄(t), 0 ≤ t ≤ 1} given by B̄(t) = B(t) − t B(1) is the Brownian bridge.

Solution The stochastic process {B̄(t), 0 ≤ t ≤ 1}, as an additive superposition of two Gaussian processes, is a Gaussian process itself. Hence, to prove that {B̄(t), 0 ≤ t ≤ 1} is the Brownian bridge, the following three characteristic properties of the Brownian bridge (example 11.1, page 503) have to be verified:

1) B̄(0) = 0,  2) E(B̄(t)) ≡ 0,  3) Cov(B̄(s), B̄(t)) = σ² s(1 − t), 0 ≤ s < t.

The process {B̄(t), 0 ≤ t ≤ 1} obviously has properties 1) and 2). To verify property 3), let 0 ≤ s < t:

Cov(B̄(s), B̄(t)) = E([B(s) − s B(1)][B(t) − t B(1)])
 = E(B(s)B(t)) − t E(B(s)B(1)) − s E(B(t)B(1)) + st E((B(1))²)
 = σ²s − σ²st − σ²st + σ²st = σ² s(1 − t) .

11.4) Let {B̄(t), 0 ≤ t ≤ 1} be the Brownian bridge with σ = 1. Prove that the stochastic process {S(t), t ≥ 0} defined by

S(t) = (t + 1) B̄(t/(t + 1))

is the standard Brownian motion.

Solution Since the Brownian bridge is a Gaussian process, the process {S(t), t ≥ 0} is a Gaussian process as well. Hence, it suffices to show that {S(t), t ≥ 0} has properties

1) S(0) = 0,  2) E(S(t)) ≡ 0,  3) Cov(S(s), S(t)) = s, 0 ≤ s < t.

Property 1) follows immediately from B̄(0) = 0, and property 2) follows from E(B̄(t)) ≡ 0. To verify property 3), the covariance function of {S(t), t ≥ 0} needs to be determined. In view of property 2), making use of the covariance function of the Brownian bridge, for 0 ≤ s < t,

Cov(S(s), S(t)) = (s + 1)(t + 1) E(B̄(s/(s + 1)) B̄(t/(t + 1)))
 = (s + 1)(t + 1) · [s/(s + 1)][1 − t/(t + 1)]
 = s [t + 1 − t] = s.

Note: If 0 ≤ s < t, then s/(s + 1) < t/(t + 1).

11 BROWNIAN MOTION


11.5) Determine the probability density of B(s) + B(t).

Solution As a sum of two normally distributed random variables, B(s) + B(t) also has a normal distribution. Its parameters are E(B(s) + B(t)) = 0 + 0 = 0 and, by formula (4.46), page 184, assuming 0 ≤ s < t,

Var(B(s) + B(t)) = Var(B(s)) + 2 Cov(B(s), B(t)) + Var(B(t)) = σ²s + 2σ²s + σ²t = σ²(t + 3s).

Hence, the probability density of B(s) + B(t) is

f_{B(s)+B(t)}(x) = [1/(√(2π(t + 3s)) σ)] e^{−x²/(2σ²(t + 3s))} ,  0 ≤ s < t,  −∞ < x < ∞.
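The variance σ²(t + 3s) is easy to confirm by simulating B(s) and the independent increment B(t) − B(s) (added check; σ = 2, s = 1, t = 4 are arbitrary, giving the exact value 4 · 7 = 28):

```python
import numpy as np

# Simulation check of Var(B(s) + B(t)) = sigma^2 (t + 3s), exercise 11.5.
rng = np.random.default_rng(0)
sigma, s, t, runs = 2.0, 1.0, 4.0, 200_000
Bs = sigma * np.sqrt(s) * rng.standard_normal(runs)
Bt = Bs + sigma * np.sqrt(t - s) * rng.standard_normal(runs)  # independent increment
var_hat = np.var(Bs + Bt)
print(round(var_hat, 1), sigma ** 2 * (t + 3 * s))            # exact value: 28.0
```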

11.6) Let n be any positive integer. Determine mean value and variance of

X(n) = B(1) + B(2) + ... + B(n).

Solution Since E(B(t)) ≡ 0, E(X(n)) = 0. The covariances are Cov(B(i), B(j)) = σ² i for i < j; i, j = 1, 2, ..., n. Now formula (4.52) yields

Var(X(n)) = Σ_{i=1}^n Var(B(i)) + 2 Σ_{i<j} Cov(B(i), B(j)) = σ² [n(n + 1)/2 + 2 Σ_{i=1}^{n−1} i(n − i)] = σ² n(n + 1)(2n + 1)/6 .

[...]

... = 2Φ(x/(σ√t)) − 1, x > 0, where as usual Φ(⋅) is the distribution function of the standard normally distributed random variable N(0, 1). By formula (2.84), the mean value of |B(t)| is

E(|B(t)|) = √(2t/π) σ.

11.11)* Starting from x = 0, a particle makes independent jumps of length Δx = σ√Δt to the right or to the left every Δt time units with the respective probabilities

p = (1/2)(1 + (μ/σ)√Δt)  and  1 − p,

where, for given σ > 0, the assumption Δt ≤ (σ/μ)² is made.

Show that as Δt → 0 the position of the particle at time t is governed by a Brownian motion with drift with parameters μ and σ.


Solution Let for i = 1, 2, ...

X_i = +1 if the i th jump goes to the right,  X_i = −1 if the i th jump goes to the left.

If X(t) denotes the position of the particle at time t, then, by assumption, X(0) = 0 and

X(t) = (X_1 + X_2 + ... + X_{[t/Δt]}) Δx ,

where [t/Δt] denotes the greatest integer less than or equal to t/Δt. The X_1, X_2, ... are independent, and each X_i equals +1 with probability p and −1 with probability 1 − p. Hence,

E(X_i) = 2p − 1 = (μ/σ)√Δt ,  Var(X_i) = 4p(1 − p) = 1 − (μ²/σ²)Δt ;  i = 1, 2, ...

Mean value and variance of X(t) are

E(X(t)) = [t/Δt] (2p − 1) Δx = [t/Δt] · (μ/σ)√Δt · Δx ,
Var(X(t)) = [t/Δt] · 4p(1 − p) · (Δx)² = [t/Δt] · (1 − (μ²/σ²)Δt) · (Δx)² .

Thus, since by assumption Δx = σ√Δt,

E(X(t)) = μ [t/Δt] Δt ,  Var(X(t)) = (σ² − μ²Δt) [t/Δt] Δt .

Letting Δt → 0 yields

E(X(t)) = μt ,  Var(X(t)) = σ²t .   (i)

Since the underlying asymmetric random walk has homogeneous and independent increments, the resulting limiting stochastic process {X(t), t ≥ 0} must have the same properties. Moreover, X(t), as the limit of a sum of independent, identically distributed random variables fulfilling the assumptions of theorem 5.6, has a normal distribution. Its parameters have been shown to be given by (i). Hence, the limiting process {X(t), t ≥ 0} is the Brownian motion with drift with parameters μ and σ.

11.12) Let {D(t), t ≥ 0} be a Brownian motion with drift with parameters μ and σ. Determine

E(∫_0^t (D(s))² ds) .

Solution Since D(t) = μt + B(t), by changing the order of integration,

E(∫_0^t (D(s))² ds) = E(∫_0^t (μs + B(s))² ds) = E(∫_0^t [μ²s² + 2μs B(s) + (B(s))²] ds)
 = ∫_0^t E[μ²s² + 2μs B(s) + (B(s))²] ds = ∫_0^t [μ²s² + σ²s] ds
 = (1/6) t² (2μ²t + 3σ²) .
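The closed form (1/6)t²(2μ²t + 3σ²) can be checked against a crude Euler discretization of the path integral (added sketch; μ = 0.5, σ = 1, t = 2 are arbitrary, giving the exact value 8/3):

```python
import numpy as np

# Monte Carlo check of E(int_0^t D(s)^2 ds), exercise 11.12.
rng = np.random.default_rng(1)
mu, sigma, t, n, runs = 0.5, 1.0, 2.0, 1000, 2000
dt = t / n

# Euler paths of D(s) = mu*s + B(s): cumulative sums of i.i.d. increments.
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((runs, n))
D = np.cumsum(increments, axis=1)

est = (D ** 2).sum(axis=1).mean() * dt       # Riemann sum of the integral
exact = t ** 2 * (2 * mu ** 2 * t + 3 * sigma ** 2) / 6
print(round(est, 2), round(exact, 4))        # exact = 8/3 ≈ 2.6667
```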


11.13) Show that for c > 0 and d > 0

P(B(t) ≤ ct + d for all t ≥ 0) = 1 − e^{−2cd/σ²} .   (i)

Solution The left-hand side of equation (i) can be rewritten as

P(max_{t ≥ 0} {−ct + B(t)} ≤ d) .

Thus, it has to be verified that the distribution function of the maximum of the Brownian motion with drift with negative drift parameter μ = −c is given by the right-hand side of (i). Apart from the notation, equation (i) is equivalent to the second line of (11.44).

11.14) At time t = 0 a speculator acquires an American call option with infinite expiration time and strike price x_s. The price X(t) of the underlying risky security at time t is given by X(t) = x_0 e^{B(t)}. The speculator makes up his mind to exercise this option at that time point when the price of the risky security hits a level x with x > x_s ≥ x_0 for the first time. (1) What is the speculator's mean discounted payoff G_α(x) under a constant discount rate α? (2) What is the speculator's payoff G(x) without discounting? In both cases, the cost of acquiring the option is not included in the speculator's payoff.

Solution (1) X(t) hits level x if and only if B(t) hits x_h = ln(x/x_0). Therefore, the corresponding hitting time L(x_h) has distribution function (11.20) with x = x_h, and the random discounted payoff of the speculator at time L(x_h) is (x − x_s) e^{−αL(x_h)}. From (11.43) with μ = 0 and x = x_h, the desired mean discounted payoff of the speculator is

G_α(x) = E((x − x_s) e^{−αL(x_h)}) = (x − x_s)(x_0/x)^γ  with  γ = √(2α)/σ .

(2) Because X(t) will sooner or later hit any level x with x > x_0 with probability 1, the speculator's payoff without discounting is G(x) = x − x_s. ({X(t), t ≥ 0} has an increasing trend function.)

(2) Because X(t) will sooner or later hit any level x with x > x 0 with probability 1, the speculator's payoff without discounting is G(x) = x − x s . ({X(t), t ≥ 0} has an increasing trend function.) 11.15) The price of a unit of a share at time point t is X(t) = 10 e D(t) , t ≥ 0, where {D(t), t ≥ 0} is a Brownian motion process with drift parameter μ = 0.01 and volatility σ = 0.1. At time t = 0 a speculator acquires an option, which gives him the right to buy a unit of the share at strike price x s = 10.5 at any time point in the future, independently of the then current market value. It is assumed that this option has no expiry date. Although the drift parameter is negative, the investor hopes to profit from random fluctuations of the share price. He makes up his mind to exercise the option at that time point, when the mean value of the difference between the actual share price x and the strike price x s is maximal. (1) What is the initial price of a unit of the share? (2) Is the share price on average increasing or decreasing? (3) Determine the corresponding share price which maximizes the mean profit of the speculator.


Solution (1) x_0 = 10.
(2) According to formula (11.49), the mean value of X(t) is

E(X(t)) = 10 e^{(μ+σ²/2)t} = 10 e^{(−0.01+0.005)t} = 10 e^{−0.005t}.

Thus, the share price is on average decreasing.
(3) The condition λ = 2|μ|/σ² = 2 · 0.01/0.01 = 2 > 1 is fulfilled. Hence, formula (11.58) can be applied and yields the optimal level of the share price at which the investor should exercise:

x* = [λ/(λ − 1)] x_s = 2 x_s = 21.

Formula (11.59) gives the corresponding maximal mean profit of the investor:

G(x*) = (x* − x_s)(x_0/x*)^λ = 10.5 · (10/21)² = 2.381.
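The arithmetic of part (3) can be reproduced in a few lines; this is only a numeric check of the values quoted above, using the exercise's parameters:

```python
# Parameters from exercise 11.15
mu_abs = 0.01       # |mu|, the drift parameter is mu = -0.01
sigma2 = 0.01       # sigma^2 = 0.1^2
x0, xs = 10.0, 10.5

lam = 2 * mu_abs / sigma2          # lambda = 2|mu|/sigma^2 = 2
x_star = lam / (lam - 1) * xs      # optimal exercise level, = 2 * xs
profit = (x_star - xs) * (x0 / x_star) ** lam

print(lam, x_star, round(profit, 3))
```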

11.16) The value (in $) of a share per unit develops, apart from the constant factor 10, according to a geometric Brownian motion {X(t), t ≥ 0} given by X(t) = 10 e^{B(t)}, 0 ≤ t ≤ 120, where {B(t), t ≥ 0} is the Brownian motion process with volatility σ = 0.1. At time t = 0 a speculator pays $17 for becoming owner of a unit of the share after 120 [days], irrespective of the then current market value of the share.
(1) What will be the mean undiscounted profit of the speculator at time point t = 120?
(2) What is the probability that the investor will lose money when exercising at this time point?
In both cases, take into account the amount of $17 which the speculator had to pay in advance.

Solution (1) By formula (11.34), E(e^{B(t)}) = e^{tσ²/2}. Hence,

E(10 e^{B(120)}) = 10 · e^{120·0.005} = 10 · e^{0.6} = 18.22,

so that the mean undiscounted profit of the investor is $1.22.
(2)

P(10 e^{B(120)} < 17) = P(e^{B(120)} < 1.7) = P(B(120) < ln 1.7)
= P(N(0, 1) < ln 1.7 / (0.1 √120)) = Φ(0.4844) = 0.686.
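Both numbers can be checked directly; the sketch below evaluates the mean price and the loss probability with the standard normal CDF expressed through the error function:

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, t = 0.1, 120.0
mean_price = 10.0 * math.exp(t * sigma**2 / 2)        # E(10 e^{B(120)})
loss_prob = Phi(math.log(1.7) / (sigma * math.sqrt(t)))

print(round(mean_price, 2), round(loss_prob, 3))
```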

11.17) The value of a share per unit develops according to a geometric Brownian motion with drift given by X(t) = 10 e^{0.2t + 0.1 S(t)}, t ≥ 0, where {S(t), t ≥ 0} is the standardized Brownian motion. An investor owns a European call option with running time τ = 1 [year] and strike price x_s = $12 on a unit of this share.
(1) Given the discount rate α = 0.04, determine the mean discounted profit of the holder.
(2) For what value of the drift parameter μ do you get the fair price of the option?
(3) Determine this fair price.


Solution (1) The formulas derived in example 11.7 apply:

c = [ln(x_s/x_0) − μτ]/(σ√τ) = [ln(12/10) − 0.2]/0.1 = −0.1768,

G_α(τ; μ, σ) = x_0 e^{(μ−α+σ²/2)τ} Φ(σ√τ − c) − x_s e^{−ατ} Φ(−c),

so that

G_{0.04}(1; 0.2, 0.1) = 10 e^{(0.2−0.04+0.005)} Φ(0.1 + 0.1768) − 12 e^{−0.04} Φ(0.1768)
= 11.794 · 0.609 − 11.529 · 0.569 = 0.6225.

(2) The condition is α − μ = σ²/2, which gives μ = 0.04 − 0.005 = 0.035.
(3) The corresponding constant c is

c = [ln(x_s/x_0) − μτ]/(σ√τ) = [ln(12/10) − 0.035]/0.1 = 1.473.

The fair price is

G_fair = G_{0.04}(1; 0.035, 0.1) = 10 Φ(−1.373) − 12 e^{−0.04} Φ(−1.473)
= 10 · 0.0847 − 12 · e^{−0.04} · 0.0702 = 0.038.

11.18) The random price X(t) of a risky security per unit at time t is

X(t) = 5 e^{−0.01t + B(t) + 0.2|B(t)|}, t ≥ 0,

where {B(t), t ≥ 0} is the Brownian motion with volatility σ = 0.04. At time t = 0 a speculator acquires the right to buy the share at price $5.1 at any time point in the future, independently of the then current market value; i.e., the speculator owns an American call option with strike price x_s = $5.1 on the share. The speculator decides to exercise the option at the time point when the mean difference between the actual share price x and the strike price is maximal.
(1) Is the stochastic process {X(t), t ≥ 0} a geometric Brownian motion with drift?
(2) Is the share price on average increasing or decreasing?
(3) Determine the optimal actual share price x = x*.
(4) What is the probability that the investor will exercise the option?

Solution (1) No.
(2) Decreasing.
(3) The stochastic price process {X(t), t ≥ 0} can hit a level x > 5.1 only at a time point t with B(t) > 0, since otherwise B(t) + 0.2|B(t)| ≤ 0. But in that case {X(t), t ≥ 0} can never hit a level greater than 5. Therefore, this process hits (if at all) a level x > 5.1 for the first time at the same time point as the geometric Brownian motion with drift X̃(t) = 5 e^{D(t)} given by D(t) = −0.01t + (1 + 0.2)B(t) with parameters

μ̃ = −0.01 and σ̃² = (1.2σ)² = (1.2 · 0.04)² = 0.002304.


The value of λ belonging to the process X̃(t) = 5 e^{D(t)} is

λ = 0.02/0.002304 = 8.68.

By formula (11.58), the optimal level is

x* = [8.68/(8.68 − 1)] · 5.1 = 5.76.

(4) This is the probability p(x*) that the share price will ever reach level x*. By formula (11.44):

p(x*) = e^{−λ ln(x*/x_0)} = (5/5.76)^{8.68} = 0.2928.
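These values can be reproduced numerically; following the manual, the sketch below rounds λ and x* to the quoted precision before computing the hitting probability (full-precision intermediates give a slightly different last digit):

```python
mu_abs = 0.01             # |drift| of D(t) = -0.01 t + 1.2 B(t)
sigma_eff = 1.2 * 0.04    # effective volatility 1.2 * sigma
x0, xs = 5.0, 5.1

lam = round(2 * mu_abs / sigma_eff**2, 2)   # 0.02 / 0.002304, rounded to 8.68
x_star = round(lam / (lam - 1) * xs, 2)     # optimal exercise level
p_hit = (x0 / x_star) ** lam                # probability the level is ever hit

print(lam, x_star, round(p_hit, 4))
```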

11.19) At time t = 0 a speculator acquires a European call option with strike price x_s and finite expiration time τ. Thus, the option can only be exercised at time τ at price x_s, independently of its market value at time τ. The random price X(t) of the underlying risky security develops according to X(t) = x_0 + D(t), where {D(t), t ≥ 0} is the Brownian motion with positive drift parameter μ and volatility σ. If X(τ) > x_s, the speculator will exercise the option. Otherwise, the speculator will not. Assume that x_0 + μt > 3σ√t, 0 ≤ t ≤ τ.
(1) What will be the mean undiscounted payoff of the speculator (cost of acquiring the option not included)?
(2) Under otherwise the same assumptions, what is the investor's mean undiscounted profit if X(t) = x_0 + B(t) and x_0 = x_s?

Solution (1) The speculator's undiscounted random payoff is max{X(τ) − x_s, 0}. Since X(τ) = N(x_0 + μτ, σ²τ), the speculator's mean payoff G = E(max{X(τ) − x_s, 0}) is

G = [1/(√(2π) σ√τ)] ∫_{x_s}^∞ (x − x_s) e^{−(x − x_0 − μτ)²/(2σ²τ)} dx.

By substituting u = x − x_s,

G = [1/(√(2π) σ√τ)] ∫_0^∞ u e^{−(u + x_s − x_0 − μτ)²/(2σ²τ)} du.

By substituting y = (u + x_s − x_0 − μτ)/(σ√τ),

G = (1/√(2π)) ∫_{(x_s − x_0 − μτ)/(σ√τ)}^∞ [σ√τ y − x_s + x_0 + μτ] e^{−y²/2} dy
= [σ√τ/√(2π)] ∫_{(x_s − x_0 − μτ)/(σ√τ)}^∞ y e^{−y²/2} dy − [(x_s − x_0 − μτ)/√(2π)] ∫_{(x_s − x_0 − μτ)/(σ√τ)}^∞ e^{−y²/2} dy.

Now let

a = (x_s − x_0 − μτ)/(σ√τ).

Then G becomes

G = σ √(τ/(2π)) e^{−a²/2} − a σ√τ [1 − Φ(a)].

(2) In this special case, a = 0. Therefore, G = σ √(τ/(2π)).

11.20) Show that for any constant α

E(e^{α U(t)}) = e^{α² t³/6},

where U(t) is the integrated standard Brownian motion:

U(t) = ∫_0^t S(x) dx, t ≥ 0.

Solution According to section 11.5.6, U(t) is normally distributed with E(U(t)) = 0 and Var(U(t)) = t³/3, i.e., U(t) = N(0, t³/3). By formula (2.128), page 102, the Laplace transform of Y = N(μ, σ²) is

E(e^{−sY}) = e^{−μs + s²σ²/2}.

Letting in this formula μ = 0 and replacing σ² with t³/3 and s with −α yields the desired result.

11.21) For any fixed positive τ, let the stochastic process {V(t), t ≥ 0} be given by

V(t) = ∫_t^{t+τ} S(x) dx.

Is the process {V(t), t ≥ 0} weakly stationary?

Solution Let τ > 0 and 0 ≤ s < s + τ < t. Since E(V(t)) ≡ 0, the covariance C(s, t) = Cov(V(s), V(t)) is

C(s, t) = E(∫_s^{s+τ} S(x) dx · ∫_t^{t+τ} S(y) dy) = ∫_s^{s+τ} ∫_t^{t+τ} E(S(x)S(y)) dy dx
= ∫_s^{s+τ} ∫_t^{t+τ} x dy dx = τ ∫_s^{s+τ} x dx
= (τ/2) [(s + τ)² − s²] = (τ²/2) (2s + τ), 0 ≤ s < s + τ < t,

where E(S(x)S(y)) = min(x, y) = x, since x ≤ s + τ < t ≤ y.
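The computation above can be confirmed numerically (a sketch with sample values s, t and τ = 1, chosen so that the two intervals are disjoint and min(x, y) = x throughout):

```python
def cov_V(s, t, tau, n=300):
    """Midpoint-rule evaluation of
    C(s, t) = int_s^{s+tau} int_t^{t+tau} min(x, y) dy dx,
    using E(S(x)S(y)) = min(x, y) for standard Brownian motion."""
    h = tau / n
    total = 0.0
    for i in range(n):
        x = s + (i + 0.5) * h
        for j in range(n):
            y = t + (j + 0.5) * h
            total += min(x, y)
    return total * h * h

tau = 1.0
c1 = cov_V(0.5, 3.0, tau)    # s = 0.5, lag t - s = 2.5
c2 = cov_V(1.0, 3.5, tau)    # s = 1.0, same lag t - s = 2.5
closed = lambda s: tau**2 / 2 * (2 * s + tau)
print(c1, closed(0.5), c2, closed(1.0))
```

The two covariances differ although the lag t − s is the same, which is exactly the failure of weak stationarity.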

i.e., C(s, t) has not property C(s, t) = C(t − s) so that the process is not weakly stationary. 11.22) Let {X(t), t ≥ 0} be the cumulative repair cost process of a system with X(t) = 0.01 e D(t) , where {D(t), t ≥ 0} is a Brownian motion with drift. Its parameters are μ = 0.02 and σ 2 = 0.04. The cost of a system replacement by an equivalent new one is c = 4000. (1) The system is replaced according to policy 1 (page 522). Determine the optimal repair cost limit x ∗ and the corresponding maintenance cost rate K 1 (x ∗ ). (2) The system is replaced according to policy 2 (page 522). Determine its economic lifetime τ ∗ based on the average repair cost development E(X(t)) = 0.01 E(e D(t) ) and the corresponding maintenance cost rate K 2 (τ ∗ ).


(3) Analogously to example 11.8, apply replacement policy 2 to the cumulative repair cost process X(t) = 0.01 e^{M(t)} with

M(t) = max_{0≤y≤t} D(y).

Determine the corresponding economic lifetime τ* of the system and the maintenance cost rate K_2(τ*|M). Compare to the minimal maintenance cost rates determined under (1) and (2). For part (3) of this exercise you need computer assistance.

Solution (1) The equation X(t) = 0.01 e^{D(t)} = x is equivalent to D(t) = ln(100x). Hence, the mean first passage time till X(t) hits level x is by formula (11.41)

E(L_X(x)) = ln(100x)/0.02 = 50 ln(100x).

The corresponding maintenance cost rate under policy 1 is

K_1(x) = (4000 + x)/(50 ln(100x)).

K_1(x) assumes its absolute minimum at x* = 415: K_1(x*) = 8.304.

(2) K_2(τ) = [4000 + 0.01 E(e^{D(τ)})]/τ = [4000 + 0.01 e^{(0.02+0.0008)τ}]/τ = [4000 + 0.01 e^{0.0208τ}]/τ.

K_2(τ) assumes its absolute minimum at τ* = 511: K_2(τ*) = 8.636.

(3) Analogously to example 11.8, the maintenance cost rate K_2(τ|M) is seen to be

K_2(τ|M) = (1/τ) [4000 + 0.01 ∫_0^∞ x e^x ∫_0^τ (1/(0.04 √(2π) y^{1.5})) e^{−(x − 0.02y)²/(0.0032y)} dy dx].

The optimal values are τ* = 510, K_2(τ*|M) = 8.667.
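Parts (1) and (2) can be checked with a simple integer scan of the two cost-rate functions; this sketch uses the exponent μ + σ²/2 = 0.0208 implied by the numbers in the solution (both functions are very flat near their minima, so the scan is only meant to reproduce the quoted values approximately):

```python
import math

def K1(x):
    # policy 1: cost rate with repair cost limit x
    return (4000 + x) / (50 * math.log(100 * x))

def K2(tau):
    # policy 2: cost rate based on E(X(tau)) = 0.01 * exp(0.0208 * tau)
    return (4000 + 0.01 * math.exp(0.0208 * tau)) / tau

x_star = min(range(100, 1000), key=K1)
tau_star = min(range(100, 1000), key=K2)
print(x_star, round(K1(x_star), 3))      # near 415 and 8.304
print(tau_star, round(K2(tau_star), 3))  # near 511 and 8.636
```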

CHAPTER 12 Spectral Analysis of Stationary Processes

12.1) Define the stochastic process {X(t), t ∈ R} by X(t) = A cos(ωt + Φ), where A and Φ are independent random variables with E(A) = 0, and Φ is uniformly distributed over the interval [0, 2π]. Check whether the covariance function of the weakly stationary process {X(t), t ∈ R} can be obtained from the limit relation (12.5). The covariance function of a slightly more general process has been determined in example 6.6 at page 235.

Solution Only point estimates can be obtained from (12.5).

12.2) A weakly stationary, continuous-time process has covariance function

C(τ) = σ² e^{−α|τ|} (cos βτ − (α/β) sin β|τ|).

Prove that its spectral density is given by

s(ω) = 2σ²αω² / (π [(ω² + α² + β²)² − 4β²ω²]).

Solution Apply formula (12.35) (the lower integration bound in (12.35) must be 0):

s(ω) = (σ²/π) ∫_0^∞ e^{−αt} cos ωt cos βt dt − (σ²α/(πβ)) ∫_0^∞ e^{−αt} sin βt cos ωt dt
= (σ²/(2π)) ∫_0^∞ e^{−αt} [cos(ω − β)t + cos(ω + β)t] dt
− (σ²α/(2πβ)) ∫_0^∞ e^{−αt} [sin(β − ω)t + sin(β + ω)t] dt.

Now use the result of example 2.6 for evaluating the first integral, and make use of

∫ e^{cx} sin bx dx = [e^{cx}/(c² + b²)] (c sin bx − b cos bx)

for evaluating the second integral.

12.3) A weakly stationary, continuous-time process has covariance function

C(τ) = σ² e^{−α|τ|} (cos βτ + (α/β) sin β|τ|).

Prove that its spectral density is given by

s(ω) = 2σ²α(α² + β²) / (π [(ω² + α² − β²)² + 4α²β²]).

Solution Solve analogously to the previous exercise.
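Both claimed densities can be verified numerically against the one-sided cosine transform s(ω) = (1/π) ∫_0^∞ C(τ) cos ωτ dτ used in these solutions. The sketch below picks arbitrary sample values α = 0.8, β = 1.3, σ² = 1 and compares a trapezoidal quadrature with the closed forms at a few frequencies:

```python
import math

A, B, SIG2 = 0.8, 1.3, 1.0   # sample values for alpha, beta, sigma^2

def C22(t):  # covariance function of exercise 12.2 (t >= 0)
    return SIG2 * math.exp(-A * t) * (math.cos(B * t) - A / B * math.sin(B * t))

def C23(t):  # covariance function of exercise 12.3 (t >= 0)
    return SIG2 * math.exp(-A * t) * (math.cos(B * t) + A / B * math.sin(B * t))

def s_num(C, w, upper=50.0, n=120000):
    # trapezoidal rule for (1/pi) * integral_0^upper C(t) cos(w t) dt
    h = upper / n
    total = 0.5 * (C(0.0) + C(upper) * math.cos(w * upper))
    for k in range(1, n):
        t = k * h
        total += C(t) * math.cos(w * t)
    return total * h / math.pi

def s22(w):  # claimed density, exercise 12.2
    return 2 * SIG2 * A * w**2 / (math.pi * ((w**2 + A**2 + B**2) ** 2 - 4 * B**2 * w**2))

def s23(w):  # claimed density, exercise 12.3
    return (2 * SIG2 * A * (A**2 + B**2)
            / (math.pi * ((w**2 + A**2 - B**2) ** 2 + 4 * A**2 * B**2)))

err22 = max(abs(s_num(C22, w) - s22(w)) for w in (0.5, 1.0, 2.0))
err23 = max(abs(s_num(C23, w) - s23(w)) for w in (0.5, 1.0, 2.0))
print(err22, err23)
```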


12.4) A weakly stationary continuous-time process has covariance function

C(τ) = a^{−bτ²} for a > 1, b > 0.

Prove that its spectral density is given by

s(ω) = e^{−ω²/(4b ln a)} / (2 √(πb ln a)).

Solution Apply formula (12.35) (the lower integration bound in (12.35) must be 0):

s(ω) = (1/π) ∫_0^∞ (cos ωt) a^{−bt²} dt.

The Fourier transform F(y) of f(x) = e^{−cx²}, c > 0, can be found in most tables compiled for the Fourier transformation:

F(y) = √(2/π) ∫_0^∞ cos(xy) f(x) dx = (1/√2) c^{−1/2} e^{−y²/(4c)}.

To make use of it with regard to applying formula (12.35), write C(τ) in the form C(τ) = e^{−(ln a) b τ²}, i.e., apply the transform with c = b ln a.

12.5) Define a weakly stationary stochastic process {V(t), t ≥ 0} by V(t) = S(t + 1) − S(t), where {S(t), t ≥ 0} is the standard Brownian motion process. Prove that its spectral density is proportional to

(1 − cos ω)/ω².

Solution The covariance function C(τ) of {V(t), t ≥ 0} is a special case of the one determined in exercise 11.7 (note that τ has another meaning in exercise 11.7 than here):

C(τ) =

1 − |τ| for |τ| ≤ 1, and 0 for |τ| > 1.

From this and formula (12.35),

s(ω) = (1/π) ∫_0^1 (cos ωt)(1 − t) dt
= (1/π) [∫_0^1 cos ωt dt − ∫_0^1 t cos ωt dt]
= (1/π) (1 − cos ω)/ω².

12.6) A weakly stationary, continuous-time stochastic process has spectral density

s(ω) = Σ_{k=1}^n α_k/(ω² + β_k²), α_k > 0, β_k > 0.

Prove that its covariance function is given by

C(τ) = π Σ_{k=1}^n (α_k/β_k) e^{−β_k|τ|}.


Solution The Fourier transform of f(x) = 1/(β² + x²) is

F(y) = √(2/π) ∫_0^∞ cos(xy)/(β² + x²) dx = √(π/2) (1/β) e^{−βy},

i.e.,

∫_0^∞ cos(xy)/(β² + x²) dx = (π/(2β)) e^{−βy}.

Now apply formula (12.34).

12.7) A weakly stationary, continuous-time stochastic process has spectral density

s(ω) = a² for ω₀ ≤ |ω| ≤ 2ω₀, and s(ω) = 0 otherwise (ω₀ > 0).

Prove that its covariance function is given by

C(τ) = (2a²/τ) sin(ω₀τ) (2 cos ω₀τ − 1).

Solution Apply formula

C(τ) = ∫_{−∞}^{+∞} cos ωτ s(ω) dω

and integrate over the area in which s(ω) is positive:

C(τ) = a² ∫_{−2ω₀}^{−ω₀} cos ωτ dω + a² ∫_{ω₀}^{2ω₀} cos ωτ dω
= a² [(1/τ) sin ωτ]_{−2ω₀}^{−ω₀} + a² [(1/τ) sin ωτ]_{ω₀}^{2ω₀}
= (a²/τ) [sin(−ω₀τ) − sin(−2ω₀τ) + sin(2ω₀τ) − sin(ω₀τ)]
= (a²/τ) [−sin(ω₀τ) + sin(2ω₀τ) + sin(2ω₀τ) − sin(ω₀τ)]
= (a²/τ) [2 sin(2ω₀τ) − 2 sin(ω₀τ)]
= (a²/τ) [4 sin ω₀τ cos ω₀τ − 2 sin(ω₀τ)]
= (2a²/τ) [2 sin ω₀τ cos ω₀τ − sin(ω₀τ)]
= (2a²/τ) sin ω₀τ [2 cos ω₀τ − 1].
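Since s(ω) is even, the integral reduces to twice the contribution of the band [ω₀, 2ω₀]; a quick numeric check of the resulting closed form (with arbitrary sample values a = 0.7, ω₀ = 1.5):

```python
import math

def C_numeric(tau, a=0.7, w0=1.5, n=20000):
    # 2 * a^2 * integral_{w0}^{2 w0} cos(w tau) dw, trapezoidal rule
    # (the factor 2 accounts for the symmetric band [-2 w0, -w0])
    h = w0 / n
    total = 0.5 * (math.cos(w0 * tau) + math.cos(2 * w0 * tau))
    for k in range(1, n):
        total += math.cos((w0 + k * h) * tau)
    return 2 * a**2 * total * h

def C_closed(tau, a=0.7, w0=1.5):
    return 2 * a**2 / tau * math.sin(w0 * tau) * (2 * math.cos(w0 * tau) - 1)

err = max(abs(C_numeric(t) - C_closed(t)) for t in (0.3, 1.0, 2.7))
print(err)
```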