
Stochastic Processes in Science, Engineering and Finance Solutions Manual

TABLE OF CONTENTS

CHAPTER 1: PROBABILITY THEORY
CHAPTER 2: BASICS OF STOCHASTIC PROCESSES
CHAPTER 3: POINT PROCESSES
CHAPTER 4: DISCRETE-TIME MARKOV CHAINS
CHAPTER 5: CONTINUOUS-TIME MARKOV CHAINS
CHAPTER 6: MARTINGALES
CHAPTER 7: BROWNIAN MOTION

CHAPTER 1

Probability Theory

1.1) Castings are produced weighing either 1, 5, 10 or 20 kg. Let A, B, and C be the events that a casting weighs 1 or 5 kg, exactly 10 kg, and at least 10 kg, respectively. Characterize verbally the events A ∩ B, A ∪ B, A ∩ C̄, and (Ā ∪ B̄) ∩ C.
Solution
A ∩ B          This event can never occur (impossible event ∅).
A ∪ B          A casting weighs not more than 10 kg.
A ∩ C̄          A casting weighs 1 or 5 kg (since A = C̄).
(Ā ∪ B̄) ∩ C    A casting weighs at least 10 kg (since Ā ∪ B̄ is the certain event M).

1.2) Three persons have been tested for the occurrence of gene g. Based on this random experiment, three events are introduced as follows:
A = 'no person has gene g'
B = 'exactly 1 person has gene g'
C = 'at least 2 persons have gene g'
(1) Characterize verbally the random events A ∩ B, B ∪ C, and (A ∪ B) ∩ C̄.
(2) By introducing a suitable sample space, determine the sets of elementary events which characterize the random events occurring under (1).
Solution
(1)
A ∩ B          This event can never occur (impossible event).
B ∪ C          'At least one person has gene g.'
(A ∪ B) ∩ C̄    'No or one person has gene g.' (Note that A ∪ B = C̄.)
(2) Let 1 and 0 indicate that a person has gene g or not, respectively. Then the sample space M consists of the 2³ = 8 vectors (z₁, z₂, z₃) with
z_i = 1 if person i has gene g, and z_i = 0 otherwise;  i = 1, 2, 3.
Hence,
A = {(0, 0, 0)},  B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)},  A ∩ B = ∅,
C = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)},
B ∪ C = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)},
(A ∪ B) ∩ C̄ = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)}.

1.3) Let P(A) = 0.3, P(B) = 0.5, and P(A ∩ B) = 0.2. Determine the probabilities P(A ∪ B), P(Ā ∩ B), and P(Ā ∪ B̄).
Solution
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.3 + 0.5 − 0.2 = 0.6.
P(Ā ∩ B) = P(B \ A) = P(B \ (A ∩ B)) = P(B) − P(A ∩ B) = 0.5 − 0.2 = 0.3.
P(Ā ∪ B̄) = 1 − P(A ∩ B) = 0.8 (by the rule of de Morgan (1.2)).


1.4) 200 plates are checked for surface quality (acceptable, unacceptable) and for satisfying given tolerance limits of the diameter (yes, no). The results are summarized in the following matrix:

                        surface quality
                    acceptable   unacceptable
 diameter   yes        170            15
            no           8             7

A plate is selected at random from these 200. Let A be the event that its diameter is within the tolerance limits, and let B be the event that its surface quality is acceptable.
(1) Determine the probabilities P(A), P(B), and P(A ∩ B) from the matrix. Using the rules developed in section 1.1, determine P(A ∪ B) and P(Ā ∪ B̄).
(2) Are A and B independent?
Solution
(1) P(A) = (170 + 15)/200 = 0.925,  P(B) = (170 + 8)/200 = 0.89,  P(A ∩ B) = 170/200 = 0.85.
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.925 + 0.89 − 0.85 = 0.965.
P(Ā ∪ B̄) = 1 − P(A ∩ B) = 1 − 0.85 = 0.15.
(2) No, since 0.85 = P(A ∩ B) ≠ P(A) P(B) = 0.82325.

1.5) A company optionally equips its newly developed PC Ibson with 2 or 3 hard disk drives and with or without extra software and analyzes the first 1000 ordered computers:

                          hard disk drives
                         three        two
 extra software  yes      520          90
                 no        70         320

A PC is selected at random from the first 1000 orders. Let A be the event that this PC has three hard disk drives and B be the event that this PC has extra software.
(1) Determine the probabilities P(A), P(B), and P(A ∩ B) from the matrix.
(2) By using the rules developed in section 1.1, determine the probabilities P(A ∪ B), P(A|B), P(B|A), P(A ∪ B | B̄), and P(Ā | B̄).
Solution
(1) P(A) = (520 + 70)/1000 = 0.59,  P(B) = (520 + 90)/1000 = 0.61,  P(A ∩ B) = 520/1000 = 0.52.
(2) P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.59 + 0.61 − 0.52 = 0.68.
P(A|B) = P(A ∩ B)/P(B) = 0.52/0.61 = 0.852,   P(B|A) = P(A ∩ B)/P(A) = 0.52/0.59 = 0.881.
P(A ∪ B | B̄) = P((A ∪ B) ∩ B̄)/P(B̄) = P(A ∩ B̄)/P(B̄) = P(A \ B)/P(B̄) = [P(A) − P(A ∩ B)]/P(B̄) = (0.59 − 0.52)/0.39 = 0.179.
P(Ā | B̄) = P(Ā ∩ B̄)/P(B̄) = [1 − P(A ∪ B)]/P(B̄) = (1 − 0.68)/0.39 = 0.8205.


1.6) 1000 bits are independently transmitted from a source to a sink. The probability of a faulty transmission of a bit is 0.0005. What is the probability that the transmission of at least two bits is not successful?
Solution The random number X of faulty transmissions amongst 1000 transmissions has a binomial distribution with parameters n = 1000 and p = 0.0005. Hence, application of the Poisson approximation to the binomial distribution with parameter λ = n p = 0.5 is justified:
P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − e^−0.5 − 0.5 e^−0.5 = 0.0902.

1.7) To construct a circuit a student needs, among others, 12 chips of a certain type. The student knows that 4% of these chips are defective. How many chips have to be provided so that, with a probability of not less than 0.9, the student has a sufficient number of nondefective chips in order to be able to construct the circuit?
Solution If the student orders 12 chips of this type, then, on average, 0.96 ⋅ 12 = 11.52 of them are nondefective. But the probability that all 12 chips are nondefective is only (0.96)^12 = 0.61 < 0.9. If the student orders 13 chips, then the probability that at least 12 of them are nondefective is (binomial distribution with parameters n = 13 and p = 0.96)
C(13, 13)(0.96)^13 + C(13, 12)(0.96)^12 (0.04) = 0.9068.
Hence, n = 13 chips fulfil the requirement.

1.8) It costs $50 to find out whether a spare part required for repairing a failed device is faulty or not. Installing a faulty spare part causes a damage of $1000. Is it on average more profitable to use a spare part without checking if (1) 1%, (2) 3%, or (3) 10% of all spare parts of that type are faulty?
Solution Let X be the random damage (in $) when not checking.
(1) E(X) = 0.01 ⋅ 1000 = 10.  (2) E(X) = 0.03 ⋅ 1000 = 30.  (3) E(X) = 0.1 ⋅ 1000 = 100.
Only in case (3) is E(X) = 100 > 50, so only in this case is checking cost-efficient.

1.9) A test for diagnosing faults in circuits indicates no fault with probability 0.99 if the circuit is faultless. It indicates a fault with probability 0.90 if the circuit is faulty. Let the probability that a circuit is faulty be 0.02.
(1) What is the probability that a circuit is faulty if the test indicates a fault?
(2) What is the probability that a circuit is faultless if the test indicates that it is faultless?
Solution Let A be the random event that a circuit (selected at random from the population) is faulty and B be the random event that the test indicates a fault. From the probabilities given,
P(A) = 0.02,  P(B|A) = 0.90,  P(B̄|A) = 0.10,  P(B|Ā) = 0.01,  P(B̄|Ā) = 0.99.


(1) Using (1.7), the probability that the test indicates a fault is
P(B) = P(B|A) P(A) + P(B|Ā) P(Ā) = 0.90 ⋅ 0.02 + 0.01 ⋅ 0.98 = 0.0278.
By Bayes' formula (1.8), the desired probability is
P(A|B) = P(B|A) P(A)/P(B) = (0.90 ⋅ 0.02)/0.0278 = 0.6475.
(2) By Bayes' formula (1.8), the desired probability is
P(Ā|B̄) = P(B̄|Ā) P(Ā)/P(B̄) = (0.99 ⋅ 0.98)/(1 − 0.0278) = 0.9979.
Contrary to the result obtained under (1), this probability is a strong argument in favour of the test.

1.10) Suppose 2% of cotton fabric rolls and 3% of nylon fabric rolls contain flaws. Of the rolls used by a manufacturer, 70% are cotton and 30% are nylon.
(1) What is the probability that a randomly selected roll used by the manufacturer contains flaws?
(2) Given that a randomly selected roll used by the manufacturer does not contain flaws, what is the probability that it is nylon?
Solution A roll is selected at random from the ones used by the manufacturer. Let A be the random event that this roll contains flaws, and B be the random event that this roll is cotton. From the figures given,
P(B) = 0.7,  P(B̄) = 0.3,  P(A|B) = 0.02,  P(Ā|B) = 0.98,  P(A|B̄) = 0.03,  P(Ā|B̄) = 0.97.
(1) By the total probability rule (1.7), the desired probability is
P(A) = P(A|B) P(B) + P(A|B̄) P(B̄) = 0.02 ⋅ 0.70 + 0.03 ⋅ 0.30 = 0.0230.
(2) By Bayes' formula (1.8), the desired probability is
P(B̄|Ā) = P(Ā|B̄) P(B̄)/P(Ā) = (0.97 ⋅ 0.30)/(1 − 0.0230) = 0.2979.
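The Bayes computations in exercises 1.9 and 1.10 can be checked numerically. The following Python sketch is not part of the original solutions; the helper function and variable names are chosen purely for illustration.

```python
# Numerical check of the Bayes computations in exercises 1.9 and 1.10.
def bayes_posterior(prior, p_pos_given_true, p_pos_given_false):
    """Return P(A|B) and P(B) for P(A) = prior, P(B|A), P(B|not A)."""
    p_b = p_pos_given_true * prior + p_pos_given_false * (1 - prior)
    return p_pos_given_true * prior / p_b, p_b

# Exercise 1.9: A = 'circuit faulty', B = 'test indicates a fault'
post, p_b = bayes_posterior(0.02, 0.90, 0.01)
print(post)                          # approx 0.6475
print(0.99 * 0.98 / (1 - p_b))       # P(faultless | negative test), approx 0.9979

# Exercise 1.10: B = 'roll is cotton', A = 'roll contains flaws'
p_flaw = 0.02 * 0.70 + 0.03 * 0.30
print(p_flaw)                        # 0.0230
print(0.97 * 0.30 / (1 - p_flaw))    # P(nylon | no flaws), approx 0.2979
```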

1.11) Transmission of information between computers s and t (see figure) is possible if there is at least one closed path between s and t. The figure indicates the possible interruption of an edge (connection between two nodes of the transmission graph) by a switch. In practice, such an interruption may be caused by a cable break or if the transmission capacity of a channel is exceeded. All 5 switches operate independently. Each one is closed with probability p and open with probability 1 − p. Only switch 3 allows for transmitting information in both directions.
(1) What is the probability w_s,t(p) of the random event A that s can send information to t?
(2) Draw the graph of w_s,t(p) as a function of p.

[Figure: transmission graph from s to t; switches 1 and 4 lie on the upper path, switches 2 and 5 on the lower path, and switch 3 connects the two intermediate nodes.]

Solution
(1) Let B_i be the random event that switch i is closed. The total probability rule (1.7) is applied with the disjoint and exhaustive set of events {B₃, B̄₃}:


w_s,t(p) = P(A|B₃) P(B₃) + P(A|B̄₃) P(B̄₃).
On condition that switch 3 is closed, event A occurs if and only if event (B₁ ∪ B₂) ∩ (B₄ ∪ B₅) occurs. On condition that switch 3 is open, event A occurs if and only if event (B₁ ∩ B₄) ∪ (B₂ ∩ B₅) occurs. Hence, by formula (1.4), taking into account the independence of the B_i:
P(A|B₃) = (2p − p²)(2p − p²),   P(A|B̄₃) = 2p² − p⁴.
Since P(B₃) = p and P(B̄₃) = 1 − p,
w_s,t(p) = 2p²(p³ + p + 1) − 5p⁴.

(2) [Figure: graph of w_s,t(p) for 0 ≤ p ≤ 1; the curve increases from w_s,t(0) = 0 to w_s,t(1) = 1.]
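The closed-form result for w_s,t(p) can be verified by brute force. The sketch below assumes the bridge topology implied by the case analysis above (switches 1 and 4 on one path, 2 and 5 on the other, switch 3 joining the intermediate nodes) and sums the probabilities of all 2⁵ switch configurations.

```python
from itertools import product

def connected(state):
    """s -> t reachable for one on/off configuration of switches 1..5.
    Topology assumed from the solution's case analysis: s-1-a, s-2-b,
    a-4-t, b-5-t, and switch 3 joining the intermediate nodes a and b."""
    s1, s2, s3, s4, s5 = state
    if s3:  # nodes a and b are merged
        return (s1 or s2) and (s4 or s5)
    return (s1 and s4) or (s2 and s5)

def w_st(p):
    """Exact P(s can reach t) by summing over all 32 switch configurations."""
    total = 0.0
    for state in product([0, 1], repeat=5):
        prob = 1.0
        for closed in state:
            prob *= p if closed else (1 - p)
        if connected(state):
            total += prob
    return total

for p in (0.2, 0.5, 0.8):
    closed_form = 2 * p**2 * (p**3 + p + 1) - 5 * p**4
    print(p, round(w_st(p), 6), round(closed_form, 6))   # both columns agree
```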

1.12) From a source, symbols 0 and 1 are emitted independently of each other in proportion 1 : 4. Random noise may cause transmission failures: If a 0 was sent, then a 1 will arrive at the sink with probability 0.1. If a 1 was sent, then a 0 will arrive at the sink with probability 0.05.
(1) A 1 has arrived. What is the probability that a 1 had been sent?
(2) A 0 has arrived. What is the probability that a 1 had been sent?

[Figure: transmission diagram; a sent 0 arrives as 0 with probability 0.90 and as 1 with probability 0.10, a sent 1 arrives as 1 with probability 0.95 and as 0 with probability 0.05.]

Solution Let A be the event that a 1 has arrived at the sink and B be the event that a 1 had been sent. Then Ā (B̄) is the event that a 0 has arrived at the sink (had been sent). The proportion P(B)/P(B̄) = 4 implies P(B) = 0.80. From the probabilities given,
P(A|B) = 0.95,  P(Ā|B) = 0.05,  P(A|B̄) = 0.10,  P(Ā|B̄) = 0.90,
P(A) = P(A|B) P(B) + P(A|B̄) P(B̄) = 0.95 ⋅ 0.80 + 0.10 ⋅ 0.20 = 0.7800.
(1) By Bayes' formula (1.8), the probability wanted is
P(B|A) = P(A|B) P(B)/P(A) = (0.95 ⋅ 0.80)/0.78 = 0.9744.


(2) By Bayes' formula (1.8), the probability wanted is
P(B|Ā) = P(Ā|B) P(B)/P(Ā) = (0.05 ⋅ 0.80)/0.22 = 0.1818.

1.13) A biologist measured the weight of 132 eggs of a certain bird species [gram]:

 i                    1    2    3    4    5    6    7    8    9   10
 weight x_i          38   41   42   43   44   45   46   47   48   50
 number of eggs n_i   4    6    7   10   13   26   33   16   10    7

There are no eggs weighing less than 38 or more than 50. Let X denote the weight of an egg selected randomly from this population.
(1) Determine the probability distribution of the random variable X.
(2) Determine the probabilities P(43 ≤ X ≤ 48) and P(X > 45).
(3) Draw the distribution function of X.
Solution
(1) The probability distribution of X is given by the set {p_i = P(X = x_i) = n_i/132; i = 1, 2, ..., 10}:

 i      1       2       3       4       5       6       7       8       9      10
 p_i  0.0303  0.0455  0.0530  0.0758  0.0985  0.1970  0.2500  0.1212  0.0758  0.0530

(2) P(43 ≤ X ≤ 48) = Σ_{i=4}^{9} p_i = 0.8183,   P(X > 45) = Σ_{i=7}^{10} p_i = 0.5.
(3)

 i                     1       2       3       4       5       6       7       8       9      10
 F(x_i) = P(X ≤ x_i)  0.0303  0.0758  0.1288  0.2046  0.3031  0.5001  0.7500  0.8712  0.9470   1

[Figure: step plot of the distribution function F(x) for 38 ≤ x ≤ 50.]

1.14) 120 nails are classified by length:

 i                         1      2      3      4      5      6
 length x_i (in mm)  <15.0  15.0   15.1   15.2   15.3   15.4   15.5  >15.6
 number of nails n_i     0     8     26     42     24     15      5      0

Let X denote the length of a nail selected randomly from this population.
(1) Determine the probabilities p_i = P(X = x_i); i = 1, 2, ..., 6.
(2) Draw the distribution function of X.
Solution
(1) The desired probabilities are p_i = P(X = x_i) = n_i/120; i = 1, 2, ..., 6:

 i      1       2       3       4       5       6
 p_i  0.0667  0.2166  0.3500  0.2000  0.1250  0.0417

(2)

 i                      1       2       3       4       5       6
 F(x_i) = P(X ≤ x_i)  0.0667  0.2833  0.6333  0.8333  0.9583    1

[Figure: step plot of the distribution function F(x) for 15.0 ≤ x ≤ 15.5.]

1.15) Let X be given by exercise 1.13. Determine E(X) and Var(X).
Solution
E(X) = Σ_{i=1}^{10} p_i x_i = 45.18,   Var(X) = Σ_{i=1}^{10} p_i (x_i − 45.18)² = 5.3421.

1.16) Let X be given by exercise 1.14. Determine E(X) and Var(X).
Solution
E(X) = Σ_{i=1}^{6} p_i x_i = 15.2225,   Var(X) = Σ_{i=1}^{6} p_i (x_i − 15.2225)² = 0.01508.
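A short Python check of the mean values and variances of exercises 1.15 and 1.16, computed from the data of exercises 1.13 and 1.14 (the helper function is illustrative, not part of the text):

```python
# Mean and variance of a discrete distribution given by values and frequencies.
def mean_var(values, counts):
    n = sum(counts)
    probs = [c / n for c in counts]
    mean = sum(p * x for p, x in zip(probs, values))
    var = sum(p * (x - mean) ** 2 for p, x in zip(probs, values))
    return mean, var

eggs = ([38, 41, 42, 43, 44, 45, 46, 47, 48, 50],
        [4, 6, 7, 10, 13, 26, 33, 16, 10, 7])          # exercise 1.13
nails = ([15.0, 15.1, 15.2, 15.3, 15.4, 15.5],
         [8, 26, 42, 24, 15, 5])                        # exercise 1.14

print(mean_var(*eggs))    # approx (45.18, 5.34)
print(mean_var(*nails))   # approx (15.2225, 0.0151)
```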

1.17) Because it happens that not all airline passengers show up for their reserved seats, an airline sells 602 tickets for a flight that holds only 600 passengers. The probability that, for some reason or other, a passenger does not show up is 0.008. The passengers behave independently. What is the probability that every passenger who shows up will have a seat?
Solution The random number X of passengers who show up for a flight has a binomial distribution with parameters n = 602 and p = 0.992. Hence, the probability that 601 or 602 passengers show up is
C(602, 602)(0.992)^602 + C(602, 601)(0.992)^601 (0.008) = (0.992)^602 + 602 (0.992)^601 (0.008) = 0.04651.
Hence, the desired probability is 1 − 0.04651 = 0.95349.

1.18) Water samples are taken from a river once a week. Let X denote the number of samples taken over a period of 20 weeks which are polluted. It is known that on average 10% of the samples are polluted. Assuming independence of the outcomes of the sample analyses, what is the probability that X exceeds its mean by more than one standard deviation?
Solution X has a binomial distribution with parameters n = 20 and p = 0.1. Hence, mean value and standard deviation of X are
E(X) = n p = 2,   σ = √Var(X) = √(np(1 − p)) = √1.8 = 1.342.
Thus, the desired probability has structure
P(X > 2 + 1.342) = P(X > 3.342) = P(X ≥ 4).


Hence,
P(X ≥ 4) = 1 − Σ_{i=0}^{3} P(X = i) = 1 − Σ_{i=0}^{3} C(20, i)(0.1)^i (0.9)^(20−i) = 0.1329.
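The binomial tail probabilities of exercises 1.17 and 1.18 can be reproduced exactly with a few lines of Python (a sketch using only the standard library):

```python
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exercise 1.17: 1 - P(601 or 602 of 602 ticket holders show up), p_show = 0.992
p_overbooked = binom_pmf(602, 0.992, 602) + binom_pmf(602, 0.992, 601)
print(1 - p_overbooked)                                   # approx 0.9535

# Exercise 1.18: X ~ Bin(20, 0.1), P(X >= 4)
print(1 - sum(binom_pmf(20, 0.1, i) for i in range(4)))   # approx 0.1329
```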

1.19) From the 300 chickens of a farm, 100 have attracted bird flu. If four chickens are randomly selected from the population of 300, what is the probability that all of them have bird flu?
Solution The random number X of chickens in the sample of size 4 having bird flu has a hypergeometric distribution with parameters N = 300, M = 100, and n = 4. Hence,
P(X = 4) = C(100, 4) C(200, 0) / C(300, 4) = 0.01185.

1.20) Some of the 140 trees in a park are infested with a fungus. A sample of 10 randomly selected trees is taken.
(1) If 25 trees from the 140 are infested, what is the probability that the sample contains at least one infested tree?
(2) If 5 trees from the 140 are infested, what is the probability that the sample contains at least two infested trees?
Solution Let X be the random number of infested trees in the sample.
(1) X has a hypergeometric distribution with parameters N = 140, M = 25, and n = 10. The probability wanted is
P(X ≥ 1) = 1 − P(X = 0) = 1 − C(25, 0) C(115, 10) / C(140, 10) = 0.8701.
(2) X has a hypergeometric distribution with parameters N = 140, M = 5, and n = 10. The probability wanted is
P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − C(5, 0) C(135, 10) / C(140, 10) − C(5, 1) C(135, 9) / C(140, 10) = 0.0411.

1.21) Flaws occur at random along the length of a thin copper wire. Suppose that the number of flaws follows a Poisson distribution with a mean value of 0.15 flaws per centimetre. What is the probability of more than 2 flaws in a section of length 10 centimetres?
Solution The random number X of flaws occurring in a section of length 10 centimetres has a Poisson distribution with parameter (intensity) λ = 0.15 ⋅ 10 = 1.5. Hence, the desired probability is
P(X > 2) = 1 − P(X = 0) − P(X = 1) − P(X = 2) = 1 − e^−1.5 − 1.5 e^−1.5 − (1.5²/2) e^−1.5 = 0.191.
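The hypergeometric and Poisson probabilities of exercises 1.19 to 1.21 can likewise be confirmed numerically (illustrative sketch):

```python
from math import comb, exp, factorial

def hyper_pmf(N, M, n, k):
    """P(k successes) for a hypergeometric distribution with parameters N, M, n."""
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

print(hyper_pmf(300, 100, 4, 4))                                   # 1.19: approx 0.0118
print(1 - hyper_pmf(140, 25, 10, 0))                               # 1.20 (1): approx 0.8701
print(1 - hyper_pmf(140, 5, 10, 0) - hyper_pmf(140, 5, 10, 1))     # 1.20 (2): approx 0.0411

# Exercise 1.21: X ~ Poisson(1.5), P(X > 2)
lam = 1.5
print(1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(3)))   # approx 0.1912
```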

1.22) The number of dust particles which occur on the reflector surface of a telescope has a Poisson distribution with intensity 0.1 per centimetre squared. What is the probability of not more than 2 particles on an area of 10 squared centimetres?


Solution The random number X of particles on an area of 10 squared centimetres has a Poisson distribution with parameter λ = 0.1 ⋅ 10 = 1. Hence, the desired probability is
P(X ≤ 2) = e^−1 + e^−1 + (1/2) e^−1 = 0.920.

1.23) The random number of crackle sounds produced per hour by an old radio has a Poisson distribution with parameter λ = 12. What is the probability that there is no crackle sound during the 4 minutes transmission of a listener's favourite hit?
Solution The random number X of crackle sounds within a four minutes time interval has a Poisson distribution with parameter λ = 12/15 = 0.8. Hence, the desired probability is
P(X = 0) = e^−0.8 = 0.4493.

1.24) Show that the following functions f(x) are probability density functions for some value of c and determine c:
(1) f(x) = c x², 0 ≤ x ≤ 4.  (2) f(x) = c (1 + 2x), 0 ≤ x ≤ 2.  (3) f(x) = c e^−x, 0 ≤ x < ∞.
These functions are assumed to be identically 0 outside their respective ranges.
Solution The functions specified under (1) to (3) are nonnegative. Hence it suffices to show that there exists a positive constant c so that ∫ f(x) dx = 1.
(1) ∫₀⁴ x² dx = [x³/3]₀⁴ = 64/3. Hence, c = 3/64.
(2) ∫₀² (1 + 2x) dx = [x + x²]₀² = 6. Hence, c = 1/6.
(3) ∫₀^∞ e^−x dx = [−e^−x]₀^∞ = 1. Hence, c = 1.

1.25) Consider a nonnegative random variable X with probability density function
f(x) = x e^(−x²/2),  x ≥ 0.
Determine x such that P(X ≤ x) = 0.5, P(X < x) = 0.5, and P(X > x) = 0.95.
Solution The distribution function of X is
F(x) = ∫₀ˣ y e^(−y²/2) dy = 1 − e^(−x²/2),  x ≥ 0   (Rayleigh distribution).
From the equation
P(X ≤ x) = P(X < x) = 1 − e^(−x²/2) = 0.5,  i.e.  e^(−x²/2) = 0.5,
the solution is the median (0.5-percentile) of X: m = x_0.5 = √(2 ln 2) ≈ 1.1774.
Solving the equation P(X > x) = e^(−x²/2) = 0.95 yields the 0.05-percentile of X: x_0.05 ≈ 0.3203.
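A quick numerical check of these percentiles, based on the distribution function F(x) = 1 − e^(−x²/2) derived above (sketch):

```python
from math import exp, log, sqrt

# F(x) = 1 - exp(-x**2 / 2) for the density f(x) = x * exp(-x**2 / 2), x >= 0
def quantile(q):
    """Solve 1 - exp(-x**2/2) = q for x."""
    return sqrt(-2 * log(1 - q))

print(quantile(0.5))                       # median, approx 1.1774
print(quantile(0.05))                      # 0.05-percentile, approx 0.3203
print(exp(-quantile(0.05) ** 2 / 2))       # check: P(X > x_0.05) = 0.95
```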


1.26) A road traffic light is switched on every day at 5:00 a.m. It always begins with 'red' and holds this colour for 2 minutes. Then it changes to 'green' and holds this colour for 4 minutes. This cycle continues till midnight. A car driver arrives at this traffic light at a time point which is uniformly distributed between 9:00 and 9:10 a.m.
(1) What is the probability that the driver has to wait in front of the traffic light?
(2) Determine the same probability on condition that the driver's arrival time point has a uniform distribution over the interval [8:58, 9:08].
Solution The following figures show the 'red' and 'green' periods in the time intervals [9:00, 9:10] and [8:58, 9:08], respectively:
[Figure: in both intervals the light is red during [9:00, 9:02] and [9:06, 9:08] and green otherwise.]

In both cases, the traffic light holds colour 'red' for 4 minutes during the car driver's arrival time interval of length 10 minutes. Hence, the desired probability is p_wait = 4/10 = 0.4.

1.27) According to the timetable, a lecture begins at 8:15. The arrival time of professor Durrick in the venue is uniformly distributed between 8:13 and 8:20, whereas the arrival time of the student Sluggish is uniformly distributed over the time interval from 8:05 to 8:30. What is the probability p_late that Sluggish arrives after Durrick in the venue?

[Figure: the rectangle R = [8:05, 8:30] × [8:13, 8:20] in the (x, y)-plane with the subregion {x > y} shaded.]

Solution Let X be the arrival time of Sluggish and Y the arrival time of Durrick. Then the random vector (X, Y) has a two-dimensional uniform distribution over the rectangle R = {8:05 ≤ x ≤ 8:30, 8:13 ≤ y ≤ 8:20}. This rectangle covers an area of μ(R) = 25 ⋅ 7 = 175 [min²].
The subregion of R on which Sluggish arrives after Durrick is R_late = {(x, y) ∈ R : x > y}. It has area
μ(R_late) = 10 ⋅ 7 + (1/2) 7² = 94.5 [min²].
Thus, the desired probability is p_late = μ(R_late)/μ(R) = 0.54.
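The geometric probability p_late can be confirmed by Monte Carlo simulation; the sketch below draws both arrival times uniformly (times measured in minutes after 8:00; the trial count and seed are arbitrary choices):

```python
import random

def estimate_p_late(trials=200_000, seed=1):
    rng = random.Random(seed)
    late = 0
    for _ in range(trials):
        x = rng.uniform(5, 30)    # Sluggish, uniform on [8:05, 8:30]
        y = rng.uniform(13, 20)   # Durrick, uniform on [8:13, 8:20]
        if x > y:
            late += 1
    return late / trials

print(estimate_p_late())   # approx 0.54
```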


1.28) Determine E(X) and Var(X) of the three random variables X with probability density functions specified in exercise 1.24.
Solution
(1) E(X) = (3/64) ∫₀⁴ x³ dx = 3,   Var(X) = (3/64) ∫₀⁴ x² ⋅ x² dx − 9 = 0.6.
(2) E(X) = (1/6) ∫₀² x (1 + 2x) dx = 11/9,   Var(X) = (1/6) ∫₀² x² (1 + 2x) dx − (11/9)² = 23/81.
(3) E(X) = ∫₀^∞ x e^−x dx = 1,   Var(X) = ∫₀^∞ x² e^−x dx − 1 = 1.

1.29) The lifetimes of bulbs of a particular type have an exponential distribution with parameter λ [h⁻¹]. Five bulbs of this type are switched on at time t = 0. Their lifetimes are independent.
(1) What is the probability that at time t = 1/λ  a) all 5 bulbs,  b) at least 3 bulbs have failed?
(2) What is the probability that at least one bulb survives 5/λ hours?
Solution
(1) A bulb fails in the interval [0, 1/λ] with probability p = 1 − e^(−λ⋅(1/λ)) = 1 − 1/e and survives this interval with probability 1 − p = 1/e.
(a) All 5 bulbs fail in [0, 1/λ] with probability (1 − 1/e)⁵ ≈ 0.1009.
(b) The random number of bulbs which fail in [0, 1/λ] has a binomial distribution with parameters n = 5 and p = 1 − 1/e. Hence, the desired probability is
C(5, 3) p³ (1 − p)² + C(5, 4) p⁴ (1 − p) + C(5, 5) p⁵ ≈ 0.7364.
(2) All bulbs fail in [0, 5/λ] with probability (1 − e^−5)⁵ = 0.9668. Hence, the desired probability is 1 − 0.9668 = 0.0332.

1.30) The probability density of the annual energy consumption X of an enterprise [in 10⁸ kWh] is
f(x) = 30 (x − 2)² [1 − 2(x − 2) + (x − 2)²],  2 ≤ x ≤ 3.

(1) Determine the distribution function of X.
(2) What is the probability that the annual energy consumption exceeds 2.8?
(3) What is the mean annual energy consumption?
Solution
(1) F(x) = 30 ∫₂ˣ (y − 2)² [1 − 2(y − 2) + (y − 2)²] dy = (x − 2)³ [10 − 15(x − 2) + 6(x − 2)²], 2 ≤ x ≤ 3.
Hence,
F(x) = 0 for x < 2,  F(x) = (x − 2)³ [10 − 15(x − 2) + 6(x − 2)²] for 2 ≤ x ≤ 3,  F(x) = 1 for x > 3.
(2) P(X > 2.8) = 1 − F(2.8) ≈ 0.0579.
(3) E(X) = 30 ∫₂³ x (x − 2)² [1 − 2(x − 2) + (x − 2)²] dx = 2.5. Note that f(x) is symmetric with symmetry centre x = 2.5.
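The distribution function, tail probability and mean value of exercise 1.30 can be confirmed by numerical integration (sketch using a simple midpoint rule):

```python
# Numerical check of exercise 1.30: f(x) = 30*(x-2)**2 * (1 - 2*(x-2) + (x-2)**2), 2 <= x <= 3.
def f(x):
    u = x - 2
    return 30 * u**2 * (1 - 2 * u + u**2)

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

print(integrate(f, 2, 3))                      # should be 1 (f is a density)
print(integrate(f, 2.8, 3))                    # P(X > 2.8), approx 0.0579
print(integrate(lambda x: x * f(x), 2, 3))     # E(X) = 2.5
```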


1.31) Assume X is normally distributed with mean 5 and standard deviation 4, i.e. X = N(5, 16). Determine the respective values of x which satisfy
P(X > x) = 0.5,  P(X > x) = 0.95,  P(x < X < 9) = 0.2,  P(3 < X < x) = 0.95,  P(−x < X < +x) = 0.99.
Solution Note that for X = N(μ, σ²),
P(a ≤ X ≤ b) = Φ((b − μ)/σ) − Φ((a − μ)/σ).
The conditions are equivalent to
P(N(0, 1) > (x−5)/4) = 0.5,  P(N(0, 1) > (x−5)/4) = 0.95,  P((x−5)/4 < N(0, 1) < (9−5)/4) = 0.2,
P((3−5)/4 < N(0, 1) < (x−5)/4) = 0.95,  P(−(x+5)/4 < N(0, 1) < (x−5)/4) = 0.99,
or
Φ((x−5)/4) = 0.5,  Φ((x−5)/4) = 0.05,  Φ(1) − Φ((x−5)/4) = 0.2,
Φ((x−5)/4) − Φ(−1/2) = 0.95,  Φ((x−5)/4) − Φ(−(x+5)/4) = 0.99.
From the table of the standard normal distribution, the x-values satisfying these equations are 5, −1.56, 6.45, no solution exists, and 14.3.

1.32) The response time X of an average male car driver is normally distributed with mean 0.5 and standard deviation 0.06 (in seconds).
(1) What is the probability that the response time is greater than 0.6 seconds?
(2) What is the probability that the response time is between 0.5 and 0.55 seconds?
Solution
(1) The desired probability is P(X > 0.6) = P(N(0, 1) > (0.6 − 0.5)/0.06) = 1 − Φ(5/3) ≈ 0.04746.

(2) The desired probability is
P(0.5 ≤ X ≤ 0.55) = Φ((0.55 − 0.5)/0.06) − Φ((0.5 − 0.5)/0.06) = Φ(5/6) − 1/2 = 0.2975.

1.33) The tensile strength of a certain brand of polythene sheet can be modeled by a normal distribution with mean 36 psi and standard deviation 4 psi.
(1) Determine the probability that the tensile strength of a sample sheet is at least 28 psi.
(2) If the specifications require the tensile strength to exceed 30 psi, what proportion of the production has to be scrapped?
Solution
(1) P(X ≥ 28) = 1 − Φ((28 − 36)/4) = 1 − Φ(−2) ≈ 0.9772.
(2) P(X ≤ 30) = Φ((30 − 36)/4) = Φ(−1.5) ≈ 0.0668.
Thus, 6.68% of the production are rejects.
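The normal-distribution probabilities in exercises 1.32 and 1.33 can be recomputed from the error function instead of a table (sketch; small deviations from the tabulated values above are due to rounding in the table):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Exercise 1.32: X ~ N(0.5, 0.06**2)
print(1 - Phi((0.6 - 0.5) / 0.06))          # approx 0.0478
print(Phi((0.55 - 0.5) / 0.06) - Phi(0.0))  # approx 0.2977

# Exercise 1.33: X ~ N(36, 4**2)
print(1 - Phi((28 - 36) / 4))               # approx 0.9772
print(Phi((30 - 36) / 4))                   # approx 0.0668
```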


1.34) The total monthly sick-leave time X of employees of a small company has a normal distribution with mean 100 hours and standard deviation 20 hours.
(1) What is the probability that the total monthly sick-leave time is between 50 and 80 hours?
(2) How much time has to be budgeted for sick leave to make sure that the budgeted amount is exceeded with a probability not greater than 0.1?
Solution
(1) The desired probability is
P(50 ≤ X ≤ 80) = Φ((80 − 100)/20) − Φ((50 − 100)/20) = Φ(−1) − Φ(−2.5) ≈ 0.1524.
(2) The 0.9-percentile x_0.9 of X has to be determined: P(X ≤ x_0.9) = 0.9. Equivalently,
Φ((x_0.9 − 100)/20) = 0.9.
Since the 0.9-percentile of the standard normal distribution is 1.28, i.e. Φ(1.28) = 0.9, the 0.9-percentile of X satisfies
(x_0.9 − 100)/20 = 1.28.
Hence, x_0.9 = 125.6.

1.35) Let X = X_θ have a geometric distribution with
p_i = P(X_θ = i) = (1 − θ) θ^i;  i = 0, 1, ...;  0 ≤ θ ≤ 1.

By mixing the X_θ with regard to a suitable structure distribution show that
Σ_{i=0}^{∞} 1/[(i + 1)(i + 2)] = 1.
Solution The parameter θ of the geometric distribution is assumed to be a value of a random variable Θ which has a uniform distribution over [0, 1], i.e. Θ has density f_Θ(θ) = 1, 0 ≤ θ ≤ 1.
The mixture of the X_θ with regard to this density leads to a random variable Y the probability distribution of which is given by
P(Y = i) = ∫₀¹ (1 − θ) θ^i dθ;  i = 0, 1, ...
Integration yields
P(Y = i) = 1/[(i + 1)(i + 2)];  i = 0, 1, ...
Since these form a probability distribution, the sum of the P(Y = i); i = 0, 1, ..., must be equal to 1.

1.36) A random variable X = X_α has distribution function

F_α(x) = e^(−α/x), α > 0, x > 0 (Fréchet distribution). What distribution type arises when mixing the F_α(x) with regard to the structure distribution density f(α) = λ e^(−λα), λ > 0, α > 0?
Solution The mixture of the distribution functions F_α(x) generates a random variable Y with distribution function
G(x) = P(Y ≤ x) = ∫₀^∞ e^(−α/x) λ e^(−λα) dα = λx/(1 + λx);  λ > 0, x ≥ 0.

This is a Pareto distribution.

Sections 1.4 and 1.5

1.37) The times between the arrivals of taxis at a rank are independent and identically exponentially distributed with parameter λ = 4 [h⁻¹]. Assume that an arriving customer does not find an available taxi, the previous one left 3 minutes ago, and no other customers are waiting. What is the probability that the customer has to wait at least 5 minutes for the next free taxi?
Solution In view of the 'memoryless property' of the exponential distribution (example 1.14), the time to the arrival of the next taxi has an exponential distribution with parameter λ = 4 [h⁻¹], i.e. this time does not depend on the time at which the previous taxi left. Thus, if X is the waiting time of the customer, the desired probability is
P(X > 5 min) = e^(−4 ⋅ 5/60) ≈ 0.7165.

1.38) The random variable X has distribution function F(x) = λx/(1 + λx), λ > 0, x ≥ 0. Check whether there is a subinterval of [0, ∞) on which F(x) is DFR.
Solution

The density of X is f(x) = λ/(1 + λx)², x ≥ 0. Hence, the failure rate of X is
λ(x) = f(x)/(1 − F(x)) = λ/(1 + λx),  x ≥ 0.
Since λ(x) is a decreasing function of x, F(x) is DFR on [0, ∞).

1.39)* Consider lifetimes X and Y with respective probability densities

f(x) = 1/4 for 0 ≤ x ≤ 4, and f(x) = 0 otherwise;
g(x) = 1/10 for 0 ≤ x ≤ 2,  g(x) = 5/10 for 2 ≤ x ≤ 3,  g(x) = 3/10 for 3 ≤ x ≤ 4,  and g(x) = 0 otherwise.

With the notation introduced in section 1.4 (page 35), let X₂ and Y₂ be the corresponding residual lifetimes given that X > 2 and Y > 2, respectively.
(1) Show that X ≤_st Y.
(2) Check whether X₂ ≤_st Y₂ and interpret the result.
Solution
(1) X has a uniform distribution over [0, 4] with distribution function
F(x) = 0 for x ≤ 0,  F(x) = x/4 for 0 ≤ x ≤ 4,  F(x) = 1 for 4 ≤ x,
whereas Y has the piecewise linear distribution function


G(x) = 0 for x ≤ 0,  G(x) = x/10 for 0 ≤ x ≤ 2,  G(x) = (5/10)(x − 2) + 2/10 for 2 ≤ x ≤ 3,  G(x) = (3/10)(x − 3) + 7/10 for 3 ≤ x ≤ 4,  G(x) = 1 for 4 ≤ x.
Hence, F(x) ≥ G(x) for all x ≥ 0, which is equivalent to X ≤_st Y.
(2) According to formula (1.35) and with the notation introduced there (t = 2), the residual lifetimes X₂ and Y₂ have distribution functions
F₂(x) = 0 for x ≤ 0,  F₂(x) = x/2 for 0 ≤ x ≤ 2,  F₂(x) = 1 for 2 ≤ x;
G₂(x) = 0 for x ≤ 0,  G₂(x) = (5/8) x for 0 ≤ x ≤ 1,  G₂(x) = (3/8)(x − 1) + 5/8 for 1 ≤ x ≤ 2,  G₂(x) = 1 for 2 ≤ x.
Hence, F₂(x) ≤ G₂(x) for all x ≥ 0, which is equivalent to X₂ ≥_st Y₂.

Interpretation: The usual stochastic order is not necessarily preserved under aging.

1.40)* Let the random variables A and B have uniform distributions over [0, a] and [0, b], a < b, respectively.
(1) Show that A ≤_st B and A ≤_hr B.
(2) Let the random variable X be defined by P(X = 0) = P(X = a) = 1/2. Show that if X is independent of A and B, then A + X ≤/_hr B + X.
(3) Let A_X and B_X be the random variables arising by mixing A and B, respectively, with regard to the distribution of X as structure distribution. Show that A_X ≤_hr B_X.

Solution
(1) A and B have the respective distribution functions
F_A(x) = 0 for x ≤ 0,  F_A(x) = x/a for 0 ≤ x ≤ a,  F_A(x) = 1 for a ≤ x;
F_B(x) = 0 for x ≤ 0,  F_B(x) = x/b for 0 ≤ x ≤ b,  F_B(x) = 1 for b ≤ x.

Since by assumption a < b, F_A(x) ≥ F_B(x) for all x ≥ 0. Hence, A ≤_st B.
Moreover, the ratio of the survival probabilities,
F̄_B(x)/F̄_A(x) = 1 for x ≤ 0,  = a(b − x)/[b(a − x)] for 0 ≤ x < a,  = ∞ for a ≤ x < b,
is nondecreasing in x.

Let X have the Laplace (doubly exponential) probability density
f(x) = (λ/2) e^(−λ|x−μ|),  λ > 0, −∞ < μ < +∞, −∞ < x < +∞.
Determine the Laplace transform of f(x) and, by means of it, E(X), E(X²), and Var(X).
Solution

f̂(s) = ∫_{−∞}^{+∞} e^(−sx) (λ/2) e^(−λ|x−μ|) dx
     = (λ/2) ∫_{−∞}^{μ} e^(−sx) e^(−λ(μ−x)) dx + (λ/2) ∫_{μ}^{+∞} e^(−sx) e^(−λ(x−μ)) dx
     = (λ/2) e^(−λμ) ∫_{−∞}^{μ} e^(−(s−λ)x) dx + (λ/2) e^(+λμ) ∫_{μ}^{+∞} e^(−(s+λ)x) dx
     = (λ/2) e^(−λμ) (1/(λ−s)) e^(−(s−λ)μ) + (λ/2) e^(+λμ) (1/(λ+s)) e^(−(s+λ)μ)
     = (λ/2) [1/(λ−s) + 1/(λ+s)] e^(−sμ).
Thus,
f̂(s) = λ²/(λ² − s²) e^(−μs).
The first derivative of f̂(s) is
f̂′(s) = λ²/(s² − λ²) [μ − 2s/(s² − λ²)] e^(−μs).
From this,
f̂″(s) = λ²μ²/(λ² − s²) + 2λ²/(λ² − s²)² + o(s),
where the function o(s) has the property lim_{s→0} o(s) = 0. (In this special case, o(s) represents all the terms with factor s.) Hence,
f̂′(0) = −μ and f̂″(0) = μ² + 2/λ²,
so that, by (1.28) and (1.19), E(X) = μ, E(X²) = μ² + 2/λ², and Var(X) = 2/λ².

1.53) 6% of the citizens in a large town suffer from severe hypertension. Let B_n be the number of people in a sample of n randomly selected citizens from this town which suffer from this disease (Bernoulli trial).

(1) By making use of the Chebychev inequality, find a positive integer n₀ with the property
P(|B_n/n − 0.06| ≥ 0.01) ≤ 0.05 for all n with n ≥ n₀.     (i)
(2) Find a positive integer n₀ satisfying relationship (i) by making use of the central limit theorem.

Solution
(1)

E(B n ) = 0.06 ⋅ n, Var(B n ) = 0.06 ⋅ 0.94 ⋅ n = 0.0564 ⋅ n .

Hence,
E(B_n/n) = 0.06,   Var(B_n/n) = 0.0564/n.
Application of (1.129) with ε = 0.01 and X = B_n/n yields
P(|B_n/n − 0.06| ≥ 0.01) ≤ 0.0564/[(0.01)² n] ≤ 0.05.
Thus, the desired n = n₀ is the smallest integer satisfying
0.0564/[0.05 ⋅ (0.01)²] ≤ n.
It follows n₀ = 11 280.
(2) B_n/n ≈ N(0.06, 0.0564/n). Hence,
P(|B_n/n − 0.06| ≥ 0.01) = 1 − P(|B_n/n − 0.06| ≤ 0.01) = 1 − P(0.05 ≤ B_n/n ≤ 0.07)
  = 1 − [Φ((0.07 − 0.06)/√(0.0564/n)) − Φ((0.05 − 0.06)/√(0.0564/n))]
  = 1 − [2Φ((0.07 − 0.06)/√(0.0564/n)) − 1].
Hence, n = n₀ is the smallest integer satisfying
1 − [2Φ((0.07 − 0.06)/√(0.0564/n)) − 1] ≤ 0.05  or, equivalently,  Φ((0.07 − 0.06)/√(0.0564/n)) ≥ 0.975.
Since z_0.05 = 1.96, n = n₀ is the smallest integer satisfying
(0.07 − 0.06)/√(0.0564/n) ≥ 1.96.
It follows n₀ = 2167.

Thus, knowledge of the probability distribution of B n /n considerably reduces the sample size compared to applying Chebychev's inequality.
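Both sample sizes can be recomputed directly; the following sketch mirrors the Chebychev bound and the central limit theorem condition used above (the search range in the last line is an arbitrary upper limit):

```python
from math import ceil, erf, sqrt

def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

p, eps, alpha = 0.06, 0.01, 0.05
var1 = p * (1 - p)                       # variance of one Bernoulli trial, 0.0564

# (1) Chebychev: P(|B_n/n - p| >= eps) <= var1/(n*eps**2) <= alpha
n_cheb = ceil(var1 / (alpha * eps**2))
print(n_cheb)                            # 11280

# (2) Central limit theorem: need 2*Phi(eps*sqrt(n/var1)) - 1 >= 1 - alpha
n_clt = next(n for n in range(1, 20000)
             if 2 * Phi(eps * sqrt(n / var1)) - 1 >= 1 - alpha)
print(n_clt)                             # approx 2167
```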

CHAPTER 2

Basics of Stochastic Processes

2.1) A stochastic process {X(t), t > 0} has the one-dimensional distribution
F_t(x) = P(X(t) ≤ x) = 1 − e^(−(x/t)²),  x ≥ 0.
Is this process weakly stationary?
Solution Since X(t) has a Weibull distribution with parameter β = 2 (Rayleigh distribution), the trend function of {X(t), t > 0} is time-dependent: m(t) = t Γ(3/2) = (√π/2) t.

Thus, {X(t), t > 0} cannot be (weakly) stationary.

2.2) The one-dimensional distribution of the stochastic process {X(t), t > 0} is
F_t(x) = P(X(t) ≤ x) = (1/(√(2πt) σ)) ∫_{−∞}^{x} e^(−(u − μt)²/(2σ²t)) du
with μ > 0, σ > 0, x ∈ (−∞, +∞). Determine its trend function m(t) and, for μ = 2 and σ = 0.5, sketch the functions
y₁(t) = m(t) + √(Var(X(t)))  and  y₂(t) = m(t) − √(Var(X(t))),  0 ≤ t ≤ 10.
Solution X(t) has a normal distribution with parameters μt and σ²t:
f_t(x) = (1/(√(2πt) σ)) e^(−(x − μt)²/(2σ²t));  t > 0, x ∈ (−∞, +∞).

Hence,
m(t) = E(X(t)) = μt,   Var(X(t)) = σ²t,   t ≥ 0.
[Figure: m(t) = 2t together with y₁(t) = 2t + 0.5√t and y₂(t) = 2t − 0.5√t for 0 ≤ t ≤ 10.]


2.3) Let X(t) = A sin(ω t + Φ) , where A and Φ are independent, nonnegative random variables with Φ being uniformly distributed over [0, 2π] and E(A 2 ) < ∞. (1) Determine trend-, covariance- and correlation function of {X(t), t ∈ (−∞, + ∞)}. (2) Is the stochastic process {X(t), t ∈ (−∞, + ∞)} weakly and/or strongly stationary?

Solution
(1)
E(X(t)) = E(A) (1/2π) ∫₀^{2π} sin(ωt + ϕ) dϕ = E(A) (1/2π) [−cos(ωt + ϕ)]₀^{2π} = E(A) (1/2π) [cos(ωt) − cos(ωt + 2π)].
Thus, the trend function m(t) = E(X(t)) is identically 0:
m(t) ≡ 0.     (i)
The process {X(t), t ∈ (−∞, +∞)} is a second order process since, for any t, E(X²(t)) = E(A² sin²(ωt + Φ)) ≤ E(A² ⋅ 1) = E(A²) < ∞. The covariance function C(s, t) = Cov(X(s), X(t)), s < t, is obtained as follows:
C(s, t) = E(X(s) X(t)) = E(A²) (1/2π) ∫₀^{2π} sin(ωs + ϕ) sin(ωt + ϕ) dϕ.
Since (sin α)(sin β) = (1/2)[cos(β − α) − cos(α + β)],
∫₀^{2π} sin(ωs + ϕ) sin(ωt + ϕ) dϕ = (1/2) ∫₀^{2π} cos(ω(t − s)) dϕ − (1/2) ∫₀^{2π} cos(ω(s + t) + 2ϕ) dϕ.
The second integral is 0 and the first integral is 2π cos(ω(t − s)). Hence,
C(s, t) = C(τ) = (1/2) E(A²) cos(ωτ),  τ = t − s.     (ii)
Since Var(X(t)) = C(0) = (1/2) E(A²), the correlation function of {X(t), t ∈ (−∞, +∞)} is
ρ(τ) = C(τ)/C(0) = cos(ωτ).

(2) Since {X(t), t ∈ (−∞, + ∞)} is a second order process with properties (i) and (ii), it is weakly stationary. The one-dimensional distribution of {X(t), t ∈ (−∞, + ∞)} obviously depends on t. Hence, this process cannot be strongly stationary. 2.4) Let X(t) = A(t) sin(ω t + Φ) , where A(t) and Φ are independent, nonnegative random variables for all t and let Φ be uniformly distributed over [0, 2π]. Verify: If {A(t), t ∈ (−∞, + ∞)} is a weakly stationary process, then {X(t), t ∈ (−∞, + ∞)} is also weakly stationary.

Solution As in the previous example, the trend function of {X(t), t ∈ (−∞, +∞)} is seen to be identically equal to 0. Provided it exists, the covariance function of {X(t), t ∈ (−∞, +∞)} is (integration as in exercise 2.3)
C(s, t) = E(X(s) X(t)) = E(A(s) A(t)) (1/2π) ∫₀^{2π} sin(ωs + ϕ) sin(ωt + ϕ) dϕ = (1/2) E(A(s) A(t)) cos(ωτ),  τ = t − s.
If the process {A(t), t ∈ (−∞, +∞)} is weakly stationary, then it is a second order process and the mean value E(A(s) A(t)) is only a function of τ = t − s. Hence, in this case {X(t), t ∈ (−∞, +∞)} is weakly stationary as well.
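The covariance formula of exercise 2.3 can be illustrated by simulation. The sketch below assumes, purely as an example, an amplitude A uniformly distributed on [0, 2] and compares the sample covariance of X(s) and X(t) with (1/2) E(A²) cos(ω(t − s)); the particular values of s, t, ω, the trial count, and the seed are arbitrary choices.

```python
import math
import random

def estimated_cov(s, t, w, trials=100_000, seed=7):
    """Sample covariance of X(s) and X(t) for X(t) = A*sin(w*t + Phi)."""
    rng = random.Random(seed)
    xs, xt = [], []
    for _ in range(trials):
        a = rng.uniform(0.0, 2.0)            # assumed amplitude distribution
        phi = rng.uniform(0.0, 2 * math.pi)  # Phi ~ U[0, 2*pi], independent of A
        xs.append(a * math.sin(w * s + phi))
        xt.append(a * math.sin(w * t + phi))
    ms, mt = sum(xs) / trials, sum(xt) / trials
    return sum((u - ms) * (v - mt) for u, v in zip(xs, xt)) / trials

s, t, w = 0.3, 1.1, 2.0
ea2 = 4.0 / 3.0                               # E(A^2) for A ~ U[0, 2]
print(estimated_cov(s, t, w))                 # simulated covariance
print(0.5 * ea2 * math.cos(w * (t - s)))      # theoretical value
```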


2.5) Let {a 1 , a 2 , ..., a n } be a sequence of finite real numbers and {Φ 1 , Φ 2 , ..., Φ n } a sequence of independent random variables which are uniformly distributed over [0, 2π].

Determine the covariance and correlation function of {X(t), t ∈ (−∞, +∞)} given by
X(t) = Σ_{i=1}^{n} a_i sin(ωt + Φ_i).
Solution From exercise 2.3, the trend function of {X(t), t ∈ (−∞, +∞)} is identically equal to 0. The covariance function of this process is
C(s, t) = E([Σ_{i=1}^{n} a_i sin(ωs + Φ_i)][Σ_{k=1}^{n} a_k sin(ωt + Φ_k)])
        = E(Σ_{i=1}^{n} Σ_{k=1}^{n} a_i a_k sin(ωs + Φ_i) sin(ωt + Φ_k))
        = E(Σ_{i=1}^{n} a_i² sin(ωs + Φ_i) sin(ωt + Φ_i)).
Integrating as in exercise 2.3 gives the covariance and correlation function of {X(t), t ∈ (−∞, +∞)}:
C(τ) = (1/2) cos(ωτ) Σ_{i=1}^{n} a_i²,   ρ(τ) = C(τ)/C(0) = cos(ωτ),  with τ = t − s.

2.6)* A modulated signal (pulse code modulation) {X(t), t ≥ 0} is given by

X(t) = Σ_{n=0}^{∞} A_n h(t − n),
where the A_n are independent and identically distributed random variables which can only take on the values −1 and +1 and have mean value 0. Further, let
h(t) = 1 for 0 ≤ t < 1/2, and h(t) = 0 elsewhere.

1) Sketch a section of a possible sample path of the stochastic process {X(t), t ≥ 0}. 2) Determine the covariance function of this process. 3) Let Y(t) = X(t − Z) where the random variable Z has a uniform distribution over [0, 1] . Is the stochastic process {Y(t), t ≥ 0} weakly stationary ? Solution (1)

[Figure: sample path of X(t) for 0 ≤ t ≤ 3.5 with A₀ = −1, A₁ = +1, A₂ = +1, A₃ = −1; the path equals A_n on [n, n + 1/2) and 0 on [n + 1/2, n + 1).]


(2) Since E(A_n) = 0; n = 0, 1, ..., the trend function of the process {X(t), t ≥ 0} is identically 0. Since E(A_m A_n) = 0 for m ≠ n,
C(s, t) = E(Σ_{m,n=0}^{∞} A_m A_n h(s − m) h(t − n)) = E(Σ_{n=0}^{∞} A_n² h(s − n) h(t − n)).
In view of E(A_n²) = 1 and
h(s − n) h(t − n) = 1 if n ≤ s, t ≤ n + 1/2, and 0 otherwise,
the covariance function of {X(t), t ≥ 0} is
C(s, t) = 1 if n ≤ s, t ≤ n + 1/2 for some n = 0, 1, ..., and C(s, t) = 0 otherwise.

Thus, {X(t), t ≥ 0} is not weakly stationary. (3) Let s ≤ t. Then the covariance function of {Y(t), t ≥ 0} is ∞ ∞ C Y (s, t) = E ⎛⎝ Σ n=0 A 2n h(s − Z − n) h(t − Z − n) ⎞⎠ = Σ n=0 E( h(s − Z − n) h(t − Z − n)).

If n ≤ s, t < n + 1/2, then h(s − Z − n) h(t − Z − n) = 1 if and only if 0 ≤ Z ≤ s − n.
If n + 1/2 ≤ s, t ≤ n + 1, then h(s − Z − n) h(t − Z − n) = 1 if and only if t − (n + 1/2) ≤ Z ≤ s − n.
If n ≤ s < n + 1/2, n + 1/2 ≤ t ≤ n + 1, and t − s ≤ 1/2, then h(s − Z − n) h(t − Z − n) = 1 if and only if t − (n + 1/2) ≤ Z ≤ s − n.
In all other cases, h(s − Z − n) h(t − Z − n) = 0. Hence,
C(s, t) = s − n           if n ≤ s, t < n + 1/2,
C(s, t) = 1/2 − (t − s)   if n + 1/2 ≤ s, t ≤ n + 1,
C(s, t) = 1/2 − (t − s)   if n ≤ s < n + 1/2, n + 1/2 ≤ t ≤ n + 1, t − s ≤ 1/2,
C(s, t) = 0               otherwise.

The stochastic process {Y(t), t ≥ 0} is not weakly stationary, since, for n ≤ s, t < n + 1/2, its covariance function only depends on s. 2.7) Let {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} be two independent, weakly stationary stochastic processes the trend functions of which are identically 0 and which have the same covariance function C(τ). Prove that the stochastic process {Z(t), t ∈ (−∞, +∞)} with Z(t) = X(t) cos ωt − Y(t) sin ωt is weakly stationary.

Solution The following derivation makes use of
sin α sin β = (1/2)[cos(β − α) − cos(α + β)],   cos α cos β = (1/2)[cos(β − α) + cos(α + β)].

{Z(t), t ∈ (−∞, +∞)} has trend function m_Z(t) = E(Z(t)) ≡ 0. Hence, its covariance function is

C Z (s, t) = E(Z(s) Z(t)) = E([X(s) cos ωs − Y(s) sin ωs] [X(t) cos ωt − Y(t) sin ωt]) = E([X(s) X(t) cos ωs cos ωt]) + E([Y(s) Y(t) sin ωs sin ωt]) −E([X(s) Y(t) cos ωs sin ωt]) − E([Y(s) X(t) sin ωs cos ωt]).

By taking into account E(X(t)) ≡ E(Y(t)) ≡ 0, the independence of X(s) and Y(t), and C(−τ) = C(τ) with τ = t − s,
C_Z(s, t) = E(X(s) X(t) cos ωs cos ωt) + E(Y(s) Y(t) sin ωs sin ωt) = C(τ) [cos ωs cos ωt + sin ωs sin ωt] = C(τ) cos ω(t − s) = C(τ) cos ωτ.

Thus, the second order process {Z(t), t ∈ (−∞, +∞)} is weakly stationary. 2.8) Let X(t) = sin Φt , where Φ is uniformly distributed over the interval [0, 2π] . Verify: (1) The discrete-time stochastic process {X(t) ; t = 1, 2, ...} is weakly, but not strongly stationary. (2) The continuous-time stochastic process {X(t), t > 0} is neither weakly nor strongly stationary.

Solution
E(X(t)) = (1/2π) ∫₀^{2π} sin(ϕt) dϕ = (1/2π) [−(1/t) cos(ϕt)]₀^{2π} = (1/(2πt)) [1 − cos(2πt)],  t > 0.     (i)

(1) In view of (i) and cos(2πt) = 1 for t = 1, 2, ..., the trend function of the second order stochastic process {X(t); t = 1, 2, ...} is identically equal to 0. Its covariance function is (s, t = 1, 2, ...)
C(s, t) = E([sin Φs][sin Φt]) = (1/2π) ∫₀^{2π} [sin ϕs][sin ϕt] dϕ
        = (1/4π) ∫₀^{2π} [cos ϕ(t − s) − cos ϕ(s + t)] dϕ
        = (1/4π) [(1/(t − s)) sin ϕ(t − s) − (1/(s + t)) sin ϕ(s + t)]₀^{2π}
        = (1/4π) [(1/(t − s)) sin 2π(t − s) − (1/(s + t)) sin 2π(s + t)],   s ≠ t.

Hence,
C(s, t) = 0 for τ = t − s > 0,  and  C(s, t) = 1/2 for τ = t − s = 0.

Thus, the second order process {X(t) ; t = 1, 2, ...} is weakly stationary. (2) The stochastic process in continuous time {X(t), t > 0} cannot be weakly or strongly stationary since, by (i), its trend function depends on t. 2.9) Let {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} be two independent stochastic processes with respective trend- and covariance functions m X (t), m Y (t) and C X (s, t), C Y (s, t). Further, let

U(t) = X(t) + Y(t) and V(t) = X(t) − Y(t), t ∈ (−∞, +∞). Determine the covariance functions C U (s, t) and C V (s, t) of the stochastic processes (1) {U(t), t ∈ (−∞, +∞)} and (2) {V(t), t ∈ (−∞, +∞)}.

Solution
(1) By formula (1.64), C_U(s, t) = Cov(U(s), U(t))

= E([X(s) + Y(s)] [X(t) + Y(t)]) − ⎡⎣ m X (s) + m Y (s)⎤⎦ ⎡⎣ m X (t) + m Y (t)⎤⎦ = E(X(s) X(t)) + E(X(s) Y(t)) + E(Y(s) X(t)) + E(Y(s) Y(t)) −m X (s) m X (t) − m X (s) m Y (t) − m Y (s) m X (t) − m Y (s) m Y (t) = C X (s, t) + C Y (s, t) + E(X(s) Y(t)) + E(Y(s) X(t) − m X (s) m Y (t) − m Y (s) m X (t).

Since the processes {X(t), t ∈ (−∞, +∞)} and {Y(t), t ∈ (−∞, +∞)} are independent, E(X(s) Y(t)) = m X (s) m Y (t) and E(Y(s) X(t)) = m Y (s) m X (t). This proves the assertion. (2) By formula (1.64), C V (s, t) = Cov(V(s), V(t)) = E([X(s) − Y(s)] [X(t) − Y(t)]) − ⎡⎣ m X (s) − m Y (s)⎤⎦ ⎡⎣ m X (t) − m Y (t)⎤⎦ = E(X(s) X(t)) − E(X(s) Y(t)) − E(Y(s) X(t)) + E(Y(s) Y(t)) −m X (s) m X (t) + m X (s) m Y (t) + m Y (s) m X (t) − m Y (s) m Y (t) = C X (s, t) + C Y (s, t) − E(X(s) Y(t)) − E(Y(s) X(t) + m X (s) m Y (t) + m Y (s) m X (t).

Now the desired result follows from (i).

(i)

CHAPTER 3

Point Processes Sections 3.1 and 3.2 3.1) The number of catastrophic accidents at Sosal & Sons can be described by a homogeneous Poisson process {N(t), t ≥ 0} with intensity λ = 3 a year. (1) What is the probability p ≥2 that at least two catastrophic accidents will occur in the second half of the current year? (2) Determine the same probability given that two catastrophic accidents have occurred in the first half of the current year. Solution (1) The desired probability is (t = 0.5) p ≥2 =

p_{≥2} = Σ_{n=2}^{∞} ((3⋅0.5)^n/n!) e^(−3⋅0.5) = 1 − Σ_{n=0}^{1} ((3⋅0.5)^n/n!) e^(−3⋅0.5) = 1 − e^−1.5 − 1.5 e^−1.5 = 0.4422.

(2) In view of the 'memoryless property' of the exponential distribution, this conditional probability also is 0.4422. 3.2) By making use of the independence and homogeneity of the increments of a homogeneous Poisson process {N(t), t ≥ 0} with intensity λ , show that its covariance function is C(s, t) = λ min(s, t). Solution Let 0 < s < t. Then C(s, t) = Cov(N(s), N(t)) = Cov(N(s), N(s) + N(t) − N(s)) = Cov(N(s), N(s)) + Cov(N(s), N(t) − N(s)) = Var(N(s)) = λ s , which proves the assertion. 3.3) The number of cars which pass a certain intersection daily between 12:00 and 14:00 follows a homogeneous Poisson process with intensity λ = 40 per hour. Among these there are 0.8% which disregard the STOP-sign. What is the probability p ≥1 that at least one car disregards the STOP-sign between 12:00 and 13:00? Solution By theorem 3.8, the number of cars which disregard the STOP-sign between 12:00 and 13:00 has a Poisson distribution with parameter λ p = 40 ⋅ 0.008 = 0.32. Hence, the desired probability is p ≥1 =

p_{≥1} = Σ_{n=1}^{∞} ((0.32)^n/n!) e^−0.32 = 1 − e^−0.32 = 0.2739.


3.4) A Geiger counter is struck by radioactive particles according to a homogeneous Poisson process with intensity λ = 1 per 12 seconds. On average, the Geiger counter only records 4 out of 5 particles.
(1) What is the probability p_{≥2} that the Geiger counter records at least 2 particles a minute?
(2) What are mean value and variance of the random time Y between the occurrence of two successively recorded particles?
Solution
(1) By theorem 3.8, the number of recorded particles per minute has a Poisson distribution with parameter 5λp = 5 ⋅ 1 ⋅ 0.8 = 4. Hence,
p_{≥2} = Σ_{n=2}^{∞} (4^n/n!) e^−4 = 1 − e^−4 − 4 e^−4 = 0.9084.
(2) Y [min] has an exponential distribution with parameter 5λp = 4 [min⁻¹]. Hence,
E(Y) = 1/4 [min],   Var(Y) = 1/16 [min²].
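A short numerical check of exercise 3.4 (the rate of 4 recorded particles per minute follows from thinning the 5-per-minute arrival stream with recording probability 0.8):

```python
from math import exp, factorial

lam = 5 * 0.8          # 1 particle per 12 s = 5 per minute, each recorded w.p. 0.8
p_ge2 = 1 - sum(exp(-lam) * lam**n / factorial(n) for n in range(2))
print(p_ge2)           # approx 0.9084

# (2) time between recorded particles is Exp(4) per minute
print(1 / lam, 1 / lam**2)   # E(Y) = 0.25 min, Var(Y) = 0.0625 min^2
```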

3.5) An electronic system is subject to two types of shocks which arrive independently of each other according to homogeneous Poisson processes with respective intensities λ₁ = 0.002 and λ₂ = 0.01 per hour. A shock of type 1 always causes a system failure, whereas a shock of type 2 causes a system failure with probability 0.4. What is the probability of the event A that the system fails within 24 hours due to a shock?
Solution The random time Y to the occurrence of the first type 1 shock has an exponential distribution with parameter λ₁:

P(Y ≤ t) = 1 − e^(−0.002 t),  t ≥ 0.
The total probability rule applied to the complementary event Ā that there is no failure due to a shock within 24 hours gives
P(Ā) = P(Ā | Y ≤ 24) P(Y ≤ 24) + P(Ā | Y > 24) P(Y > 24).
Since P(Ā | Y ≤ 24) = 0 and, by theorem 3.8, the time to the first occurrence of a 'deadly' type 2 shock has an exponential distribution with parameter λ₂ p = 0.01 ⋅ 0.4 = 0.004,
P(Ā) = e^(−0.004⋅24) ⋅ e^(−0.002⋅24) = e^(−0.144) = 0.8659.
Hence, the desired probability is P(A) = 0.1341.



Σ

n=0

n

(λ 2 t) n −λ t e 2 = λ 2 t. n!

3 POINT PROCESSES

33

Hence,





E(N 2 ) = ∫ 0 E(N 2 Y = t) λ 1 e −λ 1 t dt = ∫ 0 (λ 2 t ) λ 1 e −λ 1 t dt ∞

= λ 2 ∫ 0 t λ 1 e −λ 1 t dt = λ 2 /λ 1 . 3.7) Let {N(t), t ≥ 0} be a homogeneous Poisson process with intensity λ. Prove that for an arbitrary, but fixed positive h the stochastic process {X(t), t ≥ 0} defined by X(t) = N(t + h) − N(t) is weakly stationary. Solution 1} Since X(t) has a Poisson distribution with parameter E(X(t)) = λ h , its second moment is E(X 2 (t)) = (λh + 1)λ h < ∞ for all t. Hence, {X(t), t ≥ 0} is a second order process. 2) The trend function of {X(t), t ≥ 0} is constant: E(X(t)) = E(N(t + h) − N(t)) = λ(t + h) − λt = λh. 3) The covariance function C(s, t) of {X(t), t ≥ 0} can be written as C(s, t) = Cov(X(s), X(t)) = Cov(N(s + h) − N(s), N(t + h) − N(t)) = Cov(N(s + h), N(t + h)) − Cov(N(s + h), N(t)) − Cov(N(s), N(t + h)) + Cov(N(s), N(t)).

(i)

From exercise 3.2, Cov(N(s), N(t)) = λs for s ≤ t.     (ii)
a) Let 0 ≤ s < t and s + h ≤ t. Then, from (i) and (ii), C(s, t) = λ(s + h) − λ(s + h) − λs + λs = 0.
b) Let 0 ≤ s < t and t < s + h. Then, from (i) and (ii), letting τ = t − s, C(s, t) = λ(s + h) − λt − λs + λs = λ(h − τ).
Or, by making use of the independence of the increments of a Poisson process (timeline 0 < s < t < s + h < t + h),

C(s, t) = Cov([N(t) − N(s)] + [N(s + h) − N(t)], [N(s + h) − N(t)] + [N(t + h) − N(s + h)])
        = 0 + 0 + Var(N(s + h) − N(t)) + 0 = λ(s + h − t) = λ(h − τ).
By combining a) and b),
C(s, t) = C(τ) = 0 for h ≤ τ, and C(τ) = λ(h − τ) for h > τ.
Hence, the stochastic process {X(t), t ≥ 0} is weakly stationary.


3.8) Let {N(t), t ≥ 0} be a homogeneous Poisson process with intensity λ and {T₁, T₂, ...} be the associated random point process, i.e. T_i is the time point at which the i th Poisson event occurs. For t → ∞, determine and sketch the covariance function C(τ) of the (stochastic) shot noise process {X(t), t ≥ 0} given by
X(t) = Σ_{i=1}^{N(t)} h(t − T_i)  with  h(t) = sin t for 0 ≤ t ≤ π, and h(t) = 0 elsewhere.
Solution By formula (3.33) with 0 ≤ τ < π,
C(τ) = λ ∫₀^{π−τ} h(x) h(τ + x) dx = λ ∫₀^{π−τ} (sin x) sin(τ + x) dx
     = (λ/2) ∫₀^{π−τ} [cos τ − cos(2x + τ)] dx
     = (λ/2) [(π − τ) cos τ − (1/2)(sin(2π − τ) − sin τ)]
     = (λ/2) [(π − τ) cos τ + sin(π − τ)].
Hence,
C(τ) = (λ/2) [(π − τ) cos τ + sin(π − τ)] for 0 ≤ τ < π, and C(τ) = 0 elsewhere.
[Figure: graph of C(τ)/λ for 0 ≤ τ ≤ π.]

At time t, the dealer is in a position to sell all cars of this type to a customer.

What will be the mean total price E( K) the car dealer achieves? Solution The random total price the car dealer achieves is N(t)

K = Σ i=1 C i e −α (t−T i ) .

In what follows, the derivation of E(K) is done via the Laplace transform of K : f K (s) = E ⎛⎝ e −sK ⎞⎠ .

By theorem 3.5, on condition N(t) = n, the random vector (T 1 , T 2 , ..., T n ) has the same probability distribution as an ordered random sample of size n taken from a random variable T which has a uniform distribution over the interval [0, t]. Hence, the conditional mean value of e −sK on condition ”N(t) = n” is E(e −sK N(t) = n) = E exp ⎡⎣ −s

=E

n

Π i=1 exp ⎡⎣ −s C i e −α (t−T i ) ⎤⎦

n

Σ i=1 C i e −α (t−T i ) ⎤⎦

n = Π i=1 E exp ⎡⎣ −s C i e −α (t−T i ) ⎤⎦

n = ⎡ E exp ⎛⎝ −s C e −α (t−T ) ⎞⎠ ⎤ . ⎣ ⎦

Let f C (z) = E(e −z C ) be the Laplace transform of C. Then, E exp ⎛⎝ −s C e −α (t−T ) T = y = E exp ⎛⎝ −s C e −α (t−y ) ⎞⎠ = f C ⎛⎝ s e −α(t−y) ⎞⎠ .

Hence, by substituting x = t − y , t t E exp ⎛⎝ −s C e −α (t−T ) ⎞⎠ = 1 ∫ 0 f C ⎛⎝ s e −α(t−y) ⎞⎠ dy = 1 ∫ 0 f C (s e −α x ) d x t t

so that

n t E(e −sK N(t) = n) = ⎡⎣ 1t ∫ 0 f C (s e −α x ) d x ⎤⎦ ;

n = 0, 1, ...

Applying the formula of total probability yields E(e −sK ) =



n

⎡ 1 t f (s e −α x ) d x ⎤ n (λt) e −λt ∫0 C ⎦ n! n=0 ⎣ t

Σ

= e −λt

∞ 1 ⎛ λ t f (s e −α x ) d x ⎞ n . ⎝ ∫0 C ⎠ n ! n=0

Σ

Hence, t E(e −sK ) = exp −λ ⎡ ∫ 0 ⎛⎝ 1 − f C (s e −α x ) ⎞⎠ dx ⎤ . ⎣ ⎦

By (1.28) with n = 1,

E(K) = − d/ds E(e^(−sK)) |_{s=0} = −f̂_C′(0) (λ/α)(1 − e^(−αt)).
Using again (1.28),
E(K) = E(C) (λ/α)(1 − e^(−αt)).

3.11) Statistical evaluation of a large sample justifies modelling the number of cars which daily arrive at a particular filling station for petrol between 0:00 and 4:00 a.m. by a nonhomogeneous Poisson process {N(t), t ≥ 0} with intensity function

λ(t) = 8 − 4t + 3t² [h⁻¹],  0 ≤ t ≤ 4.
(1) How many cars arrive on average between 0:00 and 4:00 a.m.?
(2) What is the probability that at least 40 cars arrive between 2:00 and 4:00?
Solution
(1) The mean number of cars arriving between 0:00 and 4:00 a.m. is
Λ(4) = ∫₀⁴ (8 − 4t + 3t²) dt = [8t − 2t² + t³]₀⁴ = 64.
(2) Let Y be the random number of cars arriving between 2:00 and 4:00. Its mean number is
E(Y) = Λ(2, 4) = Λ(4) − Λ(2) = 64 − ∫₀² (8 − 4t + 3t²) dt = 64 − 16 = 48.
By making use of the normal approximation to the Poisson distribution (section 1.9.3), the desired probability is
P(Y ≥ 40) = 1 − Φ((40 − 1/2 − 48)/√48) = 1 − Φ(−1.227) = 0.89.

3.12)* Let {N(t), t ≥ 0} be a nonhomogeneous Poisson process with intensity function λ(t), trend function Λ(t) and arrival time point T_i of the i th Poisson event. Show that, given N(t) = n, the random vector (T₁, T₂, ..., T_n) has the same probability distribution as n ordered, independent, and identically distributed random variables with distribution function
F(x) = Λ(x)/Λ(t) for 0 ≤ x < t, and F(x) = 1 for t ≤ x.
Hint Compare to theorem 3.5.
Solution To derive the conditional joint density f(t₁, t₂, ..., t_n | N(t) = n)

of (T 1 , T 2 , ..., T n ) , consider for 0 ≤ t 1 < t 2 < . . . < t n ≤ t the conditional probabilities P(t i ≤ T i < t i + h i , i = 1, 2, ..., n ; t < T n+1 ) P(t i ≤ T i < t i + h i ; i = 1, 2, ..., n N(t) = n) = , P(N(t) = n) where the h i are so small that the intervals t i ≤ T i < t i + h i ; i = 1, 2, ..., n are disjoint. Then, by definition of a joint density and making use of the unconditional joint probability density (3.49) of (T 1 , T 2 , ..., T n ),

P(t i ≤ T i < t i + h i , i = 1, 2, ..., n ; t < T n+1 ) h 1 h 2 . . .h n ⋅ P(N(t) = n) max h i →0

f (t 1 , t 2 , ..., t n N(t) = n) =

lim

i=1,2,...,n

∞ t +h n t n−1 +h n−1 . . . t 1 +h 1 ∫ t n−1 ∫ t 1 λ(x 1 )λ(x 2 ). . .λ(x n ) f T 1 (x n+1 ) dx 1 dx 2 . . .dx n+1 = lim . (Λ(t)) n −Λ(t) max h i →0 h 1 h 2 . . .h n ⋅ e i=1,2,...,n n!

∫ t ∫ t nn

By formulas (1.40), f T (x) = λ(x) e −Λ(x) , 1



1 − F T (t) = ∫ t f T (x) dx = e −Λ(t) . 1 1

Hence, f (t 1 , t 2 , ..., t n N(t) = n) =

Λ(t 1 + h 1 ) − Λ(t 1 ) n! . . . lim Λ(t n + h n ) − Λ(t n ) lim n h1 hn (Λ(t)) h 1 →0 h n →0

so that λ(t 1 ) λ(t 2 ) ⎧ . . . λ(t n ) , ⎪ n! Λ(t) Λ(t) Λ(t) f (t 1 , t 2 , ..., t n N(t) = n) = ⎨ ⎪ 0, ⎩

0 ≤ t1 < t2 < . . . < tn . otherwise

But this is the joint density of n ordered, independent and identically distributed random variables with densities f (x) and distribution functions F(x) : ⎧ λ(x) , 0 ≤ x ≤ t ⎪ f (x) = ⎨ Λ(t) , ⎪ 0, otherwise ⎩

⎧⎪ Λ(x) , 0 ≤ x ≤ t F (x) = ⎨ Λ(t) . ⎪ 0, otherwise ⎩

3.13) Determine the optimal renewal interval τ* and the corresponding maintenance cost rate K(τ*) for policy 1 (section 3.2.6.2) given that the system lifetime has a Weibull distribution with form parameter β and scale parameter θ; β > 1, θ > 0.
Solution The distribution function of the system lifetime, failure rate and integrated failure rate are
F(t) = 1 − e^(−(t/θ)^β),   λ(t) = (β/θ)(t/θ)^(β−1),   Λ(t) = (t/θ)^β,   t ≥ 0.
Hence, the (long-run) maintenance cost rate is
K(τ) = [c_p + c_m (τ/θ)^β] / τ.
The optimal replacement interval τ = τ* satisfies the equation dK(τ)/dτ = 0:
(β − 1)(τ/θ)^β = c_p/c_m.
Hence,
τ* = θ (c_p/((β − 1) c_m))^(1/β),   K(τ*) = c_m λ(τ*).
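The formulas for τ* and K(τ*) can be checked numerically. In the sketch below the parameter values θ, β, c_p, c_m are illustrative assumptions, not taken from the text; the grid search merely confirms that τ* minimizes K.

```python
# Exercise 3.13 check: K(tau) = (c_p + c_m*(tau/theta)**beta)/tau, Weibull lifetime.
theta, beta, c_p, c_m = 100.0, 2.5, 1.0, 5.0    # illustrative values only

def K(tau):
    return (c_p + c_m * (tau / theta) ** beta) / tau

tau_star = theta * (c_p / ((beta - 1) * c_m)) ** (1 / beta)
lam_star = (beta / theta) * (tau_star / theta) ** (beta - 1)   # Weibull failure rate at tau*

print(tau_star, K(tau_star), c_m * lam_star)    # K(tau*) should equal c_m * lambda(tau*)

# crude numerical check that tau* minimizes K
grid = [tau_star * (0.5 + 0.01 * i) for i in range(101)]
print(min(grid, key=K))                          # close to tau_star
```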

3.14) Clients arrive at an insurance company according to a mixed Poisson process the structure parameter L of which has a uniform distribution over the interval [0, 1].

(1) Determine the state probabilities of this process at time t. (2) Determine trend and variance function of this process.


(3) For what values of α and β are trend and variance function of a Polya arrival process identical to the ones obtained under (2) ? Solution

(1)

1

n i⎤ ⎡ (λt) n −λt e dλ = 1 ⎢ 1 − e −t Σ t ⎥ ; n = 0, 1, ... t ⎣ n! i=0 i ! ⎦ 0

P(N L (t) = n) = ∫

For verifying the result without the help of a table of standard integrals, use the relationship between density and distribution function of the Erlang distribution with parameter n + 1 . (2) Trend- and variance function follow from (3.54): Since E(L) = 1/2 and Var(L) = 1/12 , Var(N L (t)) = 1 t ⎡ 1 + 1 t ⎤ . 2 ⎣ 6 ⎦ (3) In case of a Polya arrival process with parameters α and β, trend- and variance function are m(t) = E(N L (t)) = α t, Var(N L (t)) = α t + α t 2 . β β β2 Thus, the trend functions are identical if β = 2α. For the variance functions to coincide as well, the additional condition α = 3 is required. Therefore, both trend- and covariance functions under (2) and (3) coincide if α = 3 and β = 6. m(t) = E(N L (t)) = 1 t , 2

3.15)* Prove the multinomial criterion (3.55) on condition that L has density f L (λ).) Solution To prove is: If {N L (t), t ≥ 0} is a mixed Poisson process, then, for all n = 1, 2, ... , all vectors (t 1 , t 2 , ..., t n ) with 0 = t 0 < t 1 < t 2 < . . . < t n and for any nonnegative integers i 1 , i 2 , ..., i n satisfying i 1 + i 2 + . . . + i n = i, P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n N L (t n ) = i)

=

⎛ t 1 ⎞ i 1 ⎛ t 2 − t 1 ⎞ i 2 . . . ⎛ t n − t n−1 ⎞ i n i! . ⎝ ⎠ tn i 1 ! i 2 !. . . i n ! ⎝ t n ⎠ ⎝ t n ⎠

Note that in view of i 1 + i 2 + . . . + i n = i , P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n ; N L (t n ) = i) = P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n) .

Hence, P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n ; N L (t n ) = i) ∞

= ∫ 0 P(N λ (t k−1 , t k ) = i k ; k = 1, 2, ..., n ) f L (λ) dλ ∞

= ∫0

n

Π k=1 P(N λ (t k−1 , t k ) = i k ) f L (λ) dλ

⎞ n ⎛ ⎡⎣ (t k − t k−1 ) λ ⎤⎦ i k ⎜⎜ e −(t k −t k−1 ) λ ⎟⎟ f L (λ) dλ ⎜ ⎟ ik! ⎠ 0 k=1 ⎝ i k ∞ (λt ) i n − t t i! = Π ⎛ k t nk−1 ⎞⎠ ∫ i!n e −λ t n f L (λ) dλ. i 1 !i 2 !. . .i n ! k=1 ⎝ 0

=



∫ Π

Since P(N L (t n ) = i) =

the desired result is obtained as follows:



(λt n ) i −λ t n e f L (λ) dλ , i! 0



40

SOLUTIONS MANUAL P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n N L (t n ) = i ⎞⎠

=

=

P(N L (t k−1 , t k ) = i k ; k = 1, 2, ..., n ; N L (t n ) = i) P(N L (t n ) = i)

i! i 1 !i 2 !. . .i n !

=

t k −t k−1 ⎞ i k ∞ (λt n ) i −λ t n f (λ) dλ L t n ⎠ ∫ i! e 0 ∞ (λt n ) i ∫ i! e −λ t n f L (λ) dλ 0

Π k=1 ⎛⎝ n

i! i 1 !i 2 !. . .i n !

Π k=1 ⎛⎝ n

t k −t k−1 ⎞ i k tn ⎠ .

3.16) A system is maintained according to policy 7 (section 3.2.6.4). The repair cost of a system failing at time t has a uniform distribution over the interval [a, a + bt] with a ≥ 0 and b > 0. Under the same assumptions as in section 3.2.6.4 (in particular assumptions (3.96) and (3.99)), show that every linearly increasing repair cost limit (i) c(t) = c + dt with a < c and 0 < d < b

leads to a higher maintenance cost rate than K 7 (c∗) given by (3.105). Solution Combining (3.96) with (i) gives ⎧ 0, 0≤t 1.

(The special assumption that C is uniformly distributed over [0, c r ] is not yet needed.) After some algebra, K(c) is seen to have structure −1 ∼ K(c) = θ ⎛⎝ Γ ⎡⎢ 1 + β1 ⎤⎥ ⎞⎠ K(c) ⎣ ⎦

with 1

−1 c ∼ K(c) = ⎡⎣ R(c) ⎤⎦ β ⎡⎣ ∫ 0 R(x)dx + (c r − c) R(c) ⎤⎦ . ∼ Hence, the problem of minimizing K(c) is equivalent to minimizing K(c). An optimal repair cost ∼ limit c = c ∗ satisfies equation dK(c) /dc = 0 or 1 c 1 1 ∫ R(x)dx + β−1 c = β−1 c r . R(c) 0

(i)

The left-hand side of this equation is 0 for c = 0. For c → ∞ it strictly increases to infinity. Thus, in view of the assumption β > 1 , there exists a unique solution c = c ∗ with 0 < c ∗ ≤ c r . Now, let ⎧ 0, ⎪ R(x) = ⎨ x /c r , ⎪ ⎩ 1,

x 0.

The figure compares these bounds on the renewal function to the exact graph of H(t) : H(t) = 1 ⎡⎢ t + 1 e −2 t − 1 ⎤⎥ . 2⎣ 2 2⎦ 3

H(t)

2.5 2 1.5 1 0.5

t 0

1

2

3

4

5

6

3.20) The probability density function of the cycle length Y of an ordinary renewal process is the mixture of two exponential distributions: f (t) = pλ 1 e −λ 1 t + (1 − p)λ 2 e −λ 2 t , 0 ≤ p ≤ 1, t ≥ 0 .

By means of the Laplace transformation, determine the associate renewal function. Solution The Laplace transform of f (t) is f (s) =

p λ1 (1 − p) λ 2 + . s + λ1 s + λ2

44

SOLUTIONS MANUAL

Hence, by the second formula of (3.125), the Laplace transform of the corresponding renewal density is p λ 1 (1−p )λ 2 + s+λ s+λ 1 2 h(s) = . p λ 1 (1−p λ 2 1 − s+λ − s+λ 1 2

From this, h(s) =

⎡⎣ pλ 1 + (1 − p)λ 2 ⎤⎦ s + λ 1 λ 2 ⎡ pλ + (1 − p)λ 2 ⎤⎦ s + λ 1 λ 2 = ⎣ 1 (s + λ 1 )(s + λ 2 ) − ⎡⎣ pλ 1 + (1 − p)λ 2 ⎤⎦ s − λ 1 λ 2 s 2 + (1 − p)λ 1 s + pλ 2 s

=

pλ 1 + (1 − p)λ 2 λ1λ2 + . s + (1 − p)λ 1 + pλ 2 s [s + (1 − p)λ 1 + pλ 2 ]

Retransformation yields, taking into account formula (1.29), t

h(t) = [pλ 1 + (1 − p)λ 2 ]e −[(1−p)λ 1 +pλ 2 ] t + λ 1 λ 2 ∫ 0 e −[(1−p)λ 1 +pλ 2 ] x dx =

λ1λ2 λ1λ2 ⎡ ⎤ −[(1−p)λ 1 +pλ 2 ] t + ⎢ pλ + (1 − p)λ 2 − . ⎥e (1 − p)λ 1 + pλ 2 ⎣ 1 (1 − p)λ 1 + pλ 2 ⎦

After some algebra, h(t) =

λ1λ2 (λ 1 − λ 2 ) 2 + p(1 − p) e −[(1−p)λ 1 +pλ 2 ] t . (1 − p)λ 1 + pλ 2 (1 − p)λ 1 + λ 2 p

The renewal function is obtained by integration: H(t) =

λ1 − λ2 λ1λ2 ⎛ ⎞2⎛ −[(1−p)λ 1 +pλ 2 ] t ⎞ , t + p (1 − p) ⎜ ⎟ 1−e ⎠ (1 − p)λ 1 + pλ 2 ⎝ (1 − p)λ 1 + λ 2 p ⎠ ⎝

t ≥ 0.

Note that mean value and variance of the cycle length are p 1 − p (1 − p)λ 1 + pλ 2 μ= + = , λ1 λ2 λ1λ2

2 2 p 1 − p (1 − p)λ 1 + pλ 2 2 σ = + = . λ 21 λ 22 λ 21 λ 22

Using this, the renewal function can be written as ⎛ 2 ⎞ H(t) = μt + ⎜ σ − 1 ⎟ ⎛⎝ 1 − e −[(1−p)λ 1 +pλ 2 ] t ⎞⎠ , ⎝ μ2 ⎠

t ≥ 0.

3.21)* (1) Verify that the probability p(t) = P(N(t) is odd) satisfies the integral equation t

p(t) = F(t) − ∫ 0 p(t − x) f (x) dx,

f (x) = F (x) .

(2) Determine p(t) if the cycle lengths are exponential with parameter λ. Solution (1) Obviously, p(t) ≡ 0 if N(t) = 0, i.e. if T 1 > t. Let T 1 ≤ t. Then, given that the first renewal occurs at T 1 = x, p(t T 1 = x ⎞⎠ = 1 − p(t − x) ,

3 POINT PROCESSES

45

since, in order to have an odd number of renewals in (0, t], the number of renewals in (x, t] must be even. (Note that 1 − p(t) is the probability for the occurrence of an even number of renewals in (0, t].) Hence, t

t

p(t) = ∫ 0 [1 − p(t − x)] f (x) dx = F(t) − ∫ 0 p(t − x) f (x) dx .

(i)

(2) If F(t) = 1 − e −λt , t ≥ 0, then (i) becomes t

p(t) = 1 − e −λt − ∫ 0 p(t − x) λe −λx dx .

(ii)

Applying the Laplace transform to (ii) gives p(s) = 1s − 1 − p(s) ⋅ λ . s+λ s+λ Solving for p(s) yields p(s) =

λ . s (s + λ)

Hence, by making use of (1.29), t t p(t) = λ ∫ 0 e −λx dx = λ ⎡⎣ − λ1 e −λx ⎤⎦ = 1 − e −λt , t > 0. 0

3.22) An ordinary renewal process has the renewal function H(t) = t /10 . Determine the probability P(N(10) ≥ 2).

Solution If an ordinary renewal process has renewal function H(t) = t /10 , then, by example 3.12, its cycle length has an exponential distribution with parameter λ = 1/10, i.e. its counting process is the homogeneous Poisson process with intensity λ = 1/10. Hence, the desired probability is P(N(10) ≥ 2) = = 1 − e −1

1

Σ

∞ (10/10) n e −10/10 n! n=2

Σ

1 = 1 − 2 e −1 = 0.2642.

n=0 n!

3.23)* Verify that H 2 (t) = E(N 2 (t)) satisfies the integral equation t

H 2 (t) = 2H(t) − F(t) + ∫ 0 H 2 (t − x) f (x) dx . Solution





H 2 (t) = Σ n=1 n 2 P(N(t) = n) = Σ n=1 n 2 [F T n (t) − F T

n+1

(t)]

∞ = Σ n=1 [(n − 1) 2 + 2n − 1] [F T n (t) − F T (t)] n+1 ∞ = Σ n=1 [(n − 1) 2 [F T n (t) − F T (t)] + 2 H(t) − F(t). n+1

Moreover, t

t

∫ 0 H 2 (t − x) f (x) dx = ∫ 0

Σ n∞=0 n 2 [F T n (t − x) − F T n+1 (t − x)]



= Σ n =0 n 2 ∫ t0 [F T n (t − x) − F T (t − x)] f (x) dx n+1 ∞

= Σ n =0 n 2 [F T

n+1

(t − x) − F T

n+2

(t − x)]



= Σ n =1 (n − 1) 2 [F T n (t − x) − F T This proves the assertion.

n+1

(t − x)].

f (x) dx

46

SOLUTIONS MANUAL

3.24) Given the existence of the first 3 moments of the cycle length Y, prove equations (3.132).

Solution Equations (3.132) are μ μ2 + σ2 1 t F(x) dx . , E(S 2 ) = 3 with F S (t) = P(S ≤ t) = μ ∫0 2μ 3μ The mean value of S is obtained by applying (1.17), Dirichlet's formula, and partial integration, E(S) =

∞ ∞ ∞ x E(S) = μ1 ∫ 0 ∫ t F(x) dx dt = μ1 ∫ 0 ∫ 0 F(x) dt dx

1 ∞ x F(x) dx = 1 ∞ x F(x) dx = 1 μ . =μ ∫0 μ ∫0 2μ 2 The desired result follows from μ 2 = μ 2 + σ 2 . The second moment of S is obtained by partial integration: ∞ 1 F(x) dx = 1 ∞ x 2 F(x) dx E(S 2 ) = ∫ 0 x 2 μ μ ∫0 ∞ 1 ∞ x 2 F(x) dx = 1 ⎡ x 3 F(x) ⎤ + 1 ∞ x 3 f (x) dx. =μ ⎥ ∫0 ∫ μ ⎢⎣ 3 ⎦0 3μ 0

In view of lim x 3 F(x) = 0, this is the desired result. x→∞

3.25) The cycle length Y of an ordinary renewal process is a discrete random variable with probability distribution P(Y = k) = p k ; k = 0, 1, 2, ...

(1) Show that the corresponding renewal function H(n) ; n = 0, 1, ... satisfies H(n) = q n + H(0) p n + H(1) p n−1 + . . . + H(n) p 0 with q n = P(Y ≤ n) = p 0 + p 1 + . . . + p n ; n = 0, 1, ... (2) Consider the special cycle length distribution P(Y = 0) = p , P(Y = 1) = 1 − p and determine the corresponding renewal function. (This special renewal process is sometimes referred to as the negative binomial process.) Solution (1) Given Y = k, k ≤ n, the mean number of renewals in [0, n] is 1 + H(n − k) . Hence, by the total probability rule, n

n

H(n) = Σ k=0 [1 + H(n − k)] p k = q n + Σ k=0 H(n − k) p k . (2) From (i), H(0) = p + H(0) p , H(n) = 1 + p H(n) + (1 − p) H(n − 1) ; From the first equation, H(0) =

p . 1−p

Recursively, starting with n = 1, H(n) =

n+p , 1−p

n = 0, 1, ...

n = 1, 2, ...

(i)

3 POINT PROCESSES

47

3.26) Consider an ordinary renewal process with cycle length distribution function 2

F(t) = P(Y ≤ t) = 1 − e −t , t ≥ 0. (1) What is the statement of theorem 3.12 if g(x) = (x + 1) −2 , x ≥ 0 ? (2) What is the statement of theorem 3.14 (formula (3.145)? Solution Since Y has a Rayleigh distribution, π μ = E(Y) = , 2 (1)

μ 2 = E(Y 2 ) = 1,

σ 2 = Var(Y) = 1 − π . 4

∞ ∞ ∫ 0 g(x) dx = ∫ 0 (x + 1) −2 dx = 1.

Hence, t 1 ∞ (x + 1) −2 dx = 2 . lim ∫ (t − x + 1) −2 dH(x) = μ ∫0 π t→∞ 0

⎛ 2 ⎞ 2 lim (H(t) − t /μ) = 1 ⎜ σ − 1 ⎟ = π − 1. 2 2⎝μ t→∞ ⎠

(2)

3.27) The time intervals between the arrivals of successive particles at a counter generate an ordinary renewal process. After having recorded 10 particles, the counter is blocked for τ time units. Particles arriving during a blocked period are not registered. What is the distribution function of the time from the end of a blocked period to the arrival of the first particle after this period if τ → ∞?

Solution At the end of the blocked period, the underlying renewal process has reached its stationary phase if τ is sufficiently large. Hence, by theorem 3.17, the distribution function asked for is 1 t F(x) dx, t ≥ 0. F S (t) = μ ∫0 3.28) Let A(t) be the forward and B(t) the backward recurrence times of an ordinary renewal process at time t. Determine functional relationships between F(t) and the conditional probabilities

(1) P(A(t) > y − t B(t) = t − x) , 0 ≤ x < t < y

(2) P(A(t) ≤ y B(t) = x) , 0 ≤ x < t, y > 0.

Solution (1)

B(t)

0

x

A(t)

y

t

P(A(t) > y − t B(t) = t − x) =

(2)

A(t)

B(t)

0

t-x

F(t − x + y) . F(t − x)

t

P(A(t) ≤ y B(t) = x) =

t+y

F(x + y) − F(x) . F(x)

48

SOLUTIONS MANUAL

3.29)* Prove formula (3.145) by means of theorem 3.13. (Hint Let Z(t) = H(t) − t /μ . )

Solution Formula (3.145) (theorem 3.14 for ordinary renewal processes) is ⎛ 2 ⎞ ⎛ μ ⎞ lim ⎛⎝ H(t) − μt ⎞⎠ = 1 ⎜ σ − 1 ⎟ = ⎜ 2 − 1 ⎟ . 2 ⎝ μ2 t→∞ ⎠ ⎝ 2μ 2 ⎠

Let Z(t) = H(t) − t /μ . Substituting H(t) = Z(t) + t /μ into the renewal equation (3.118) yields an equation of renewal type in Z(t) : t

Z(t) = a(t) + ∫ 0 Z(t − u) f (x) dx , where 1 t (t − x) f (x) dx + F(t) − t , a(t) = μ ∫0 μ or, equivalently, 1 ∞ (x − t) f (x) dx − F(t). a(t) = μ ∫t a(t) is integrable since ∞

∫0

1 ∞ ∞ (x − t) f (x) dxdt − ∞ F(x) dx a(x) dx = μ ∫0 ∫t ∫0

x ∞ ∞ = μ1 ∫ 0 f (x) ∫ 0 (x − t)dt dx − μ = μ1 ∫ 0 f (x) 1 x 2 dx − μ 2 ∞ = μ1 ∫ 0 f (x) 1 x 2 dx − μ 2 μ = 2 − μ. 2μ

Hence, by theorem 3.13, 1 ∞ a(x) dx = ⎛ μ 2 − 1 ⎞ . lim ⎛⎝ H(t) − μt ⎞⎠ = μ ⎜ 2 ⎟ ∫0 t→∞ ⎝ 2μ ⎠ 3.30) Let (Y, Z) be the typical cycle of an alternating renewal process, where Y and Z have an Erlang distribution with joint parameter λ and parameters n = 2 and n = 1, respectively.

For t → ∞, determine the probability that the system is in state 1 at time t and that it stays in this state over the entire interval [t, t + x], x > 0. (Process states as introduced in section 3.3.6.) Solution To determine is the stationary interval reliability A x given by theorem 3.18: Ax =

∞ 1 F (u) du. E(Y) + E(Z) ∫ x Y

By assumption, F Y (t) = 1 − e −λt − λ t e −λt , F Z (t) = 1 − e −λt ,

t ≥ 0.

Hence, E(Y) = 2 , E(Z) = 1 , λ λ



∫x

∞ F Y (u) du = ∫ x (e −λu + λ u e −λu ) du = 1 e −λx (2 + λx). λ

Thus, A x = 1 (2 + λ x) e −λx . 3

3 POINT PROCESSES

49

3.31) The time intervals between successive repairs of a system generate an ordinary renewal process {Y 1 , Y 2 , ...} with typical cycle length Y. The costs of repairs are mutually independent, independent of {Y 1 , Y 2 , ...} and identically distributed as M. Y and M have parameters

μ = E(Y) = 180 [days], σ = Var(Y) = 30, ν = E(M) = $ 200,

Var(M) = 40.

Determine approximately the probabilities that (1) the total repair cost arising in [0, 3600 days] do not exceed $ 4500, (2) a total repair cost of $ 3000 is not exceeded before 2200 days. Solution (1) Let C(3600) be the total repair cost in [0, 3600] and γ be defined by (3.174): γ = (180 ⋅ 40) 2 + (200 ⋅ 30) 2 = 9372.3. Then, by making use of theorem 3.19 (formula 3.175), the desired probability is ⎛ 4500 − 200 ⋅ 3600 180 P(C(3600) ≤ 4500) ≈ Φ ⎜⎜ ⎜ 180 −3/2 ⋅ 9372.3 3600 ⎝

⎞ ⎟. ⎟ ⎟ ⎠

Hence, P(C(3600) ≤ 4500) ≈ Φ(2.15) = 0.9841. (2) Let L(3000) be the first passage time of the total repair cost with regard to level $3000. By theorem 3.20 (formula 3.179), the desired probability is approximately given by ⎛ 2200 − 180 ⋅ 3000 200 ⎜ P(L(3000) > 2200) = 1 − Φ ⎜ ⎜ 200 −3/2 ⋅ 9372.3 ⋅ 3000 ⎝

⎞ ⎟ ⎟ ⎟ ⎠

= 1 − Φ(−2.75) = 0.9970. 3.32) A system is subjected to an age renewal policy with renewal interval τ as described in example 3.21. Determine the stationary availability of the system by modeling its operation by an alternating renewal process.

Solution With the notation introduced in example 3.21, let the typical renewal cycle of an alternating renewal process be denoted as (Y, Z), where Y is the random operating time of the system in a cycle and Z is the replacement time in a cycle. Then, with the notation used in example 3.21, the random operating time is Y = min(τ, T) and the random renewal time is characterized as follows: ⎧ d r for T < τ Z=⎨ ⎩ d p for T ≥ τ

Hence,

or, equivalently, τ

E(Y) = ∫ 0 F(t) dt,

⎧ d r with probability F(τ) Z=⎨ . ⎩ d p with probability F(τ)

E(Z) = d r F(τ) + d p F(τ) .

By theorem 3.18 (formula (3.163)), the stationary system availability under age replacement is τ

A(τ) = τ

∫ 0 F(t) dt

∫ 0 F(t) dt + d r F(τ) + d p F(τ)

.

50

SOLUTIONS MANUAL

3.33) A system is subjected to an age renewal policy with renewal interval τ . Contrary to example 3.21, it is assumed that renewals occur in negligible time and that preventive and emergency renewals give rise to the respective constant costs c p and c e with 0 < c p < c e .

Further, let F(t) be the distribution function of the system lifetime T and λ(t) be the corresponding, increasing failure rate. (1) Determine the maintenance cost rate (total maintenance cost per unit time) K(τ) for an unbounded running time of the system. (2) Give a necessary and sufficient condition for the existence of an optimal renewal interval τ ∗ . (3) Determine τ ∗ if T has a uniform distribution over the interval [0, z]. Note 'Total maintenance cost' include replacement and repair costs. Solution (1) Let M be the random total maintenance cost in a replacement cycle (time interval between neighbouring replacements) and Y be the random length of a cycle. Then, by formula (3.170), the long-run maintenance cost rate has structure E(M) K(τ) = E(Y) with ⎧ c e with probability F(τ) M=⎨ Y = min(τ, T). , ⎩ c p with probability F(τ) Hence, K(τ) =

c e F(τ) + c p F(τ) τ

∫ 0 F(x) dt

.

(2) The condition dK(τ)/dτ = 0 yields a necessary condition for an optimal renewal interval: τ

λ(τ) ∫ 0 F(x) dx − F(τ) = c /(1 − c) with c = c p /c e .

(i)

By assumption, λ(t) is increasing. Hence, the left-hand side of this equation is increasing in τ. This can be shown as follows: For s < t, λ(s) F(x) − f (x) ≤ λ(t) F(x) − f (x). The left-hand side (right hand side) is nonnnegative for 0 ≤ x ≤ s (0 ≤ x ≤ t). Hence, s

λ(s) ∫ 0 F(x) dx − F(s) s

= ∫ 0 [λ(s)F(x) dx − f (x)]dx s

≤ ∫ 0 [λ(t)F(x) dx − f (x)]dx t

< ∫ 0 [λ(t)F(x) dx − f (x)]dx t

= λ(t) ∫ 0 F(x) dx − F(t) . Therefore, the left-hand side of (i) is increasing in τ and tends to λ(∞) μ − 1 if λ(∞) < ∞ , and to ∞ if λ(∞) = ∞. Thus, given an increasing failure rate, a finite optimal renewal interval exists if lim λ(t) = ∞ or μ λ(∞) > 1/(1 − c) if λ(∞) < ∞.

t→∞

3 POINT PROCESSES

51

(2) Let F(t) = t /z ; 0 ≤ t ≤ z. Then, f (t) = 1/z ,

λ(t) = 1/(t − z) ; 0 < t < z .

The corresponding maintenance cost rate is K(τ) = 2 c e

c z + (1 − c) τ . 2 (z − τ) τ

The solution of the corresponding equation (i) is τ∗ =

z ⎡ c (2 − c) − c ⎤ ; 0 < c < 1 ⎦ 1−c ⎣

3.34) A system is preventively renewed at fixed time points τ, 2τ, ... Failures between these time points are removed by emergency renewals. (This replacement policy is called block replacement.) (1) With the notation and assumptions of the previous problem, determine the maintenance cost rate K(τ). (2) On condition that the system lifetime has distribution function F(t) = (1 − e −λ t ) 2 , t ≥ 0, give a necessary condition for a renewal interval τ = τ ∗ which is optimal with respect to K(τ). Hint Make use of the renewal function obtained in example 3.13. Solution (1) In this case, the replacement cycle length is a constant: Y = τ. The total random maintenance cost within a cycle is C = c p + c e N(τ), where N(t) is the renewal function belonging to an ordinary renewal process with cycle length distribution function F(t). From (3.170), the long-run maintenance cost rate under this policy is K(τ) =

c p + c e H(τ) . τ

(2) The renewal function H(t) which belongs to F(t) is known from example 3.13: H(t) = 2 ⎡⎢ t + 1 ⎛⎝ e −3t − 1 ⎞⎠ ⎤⎥ . ⎦ 3⎣ 3 With this H(t), the necessary condition dK(τ)/dτ = 0 for an optimal renewal interval becomes (1 + 3λτ) e −3λτ = 1 − 9 c. 2 A unique solution τ = τ ∗ exists if 0 < c < 2/9. 3.35) Under the model assumptions of example 3.22, (1) determine the ruin probability p(x) of an insurance company with an initial capital of x = $ 20 000 and operating parameters 1/μ = 2 [h −1 ], ν = $ 800 , and κ = 1700 [$/h],

(2) with the same numerical parameters, determine the upper bound e −r x of the Lundberg inequality (3.206), (3) under otherwise the same conditions, draw the the graphs of the ruin probability p(x) for x = 20 000 and x = 0 (no initial capital) in dependence on κ over the interval 1600 ≤ κ ≤ 1800.

52

SOLUTIONS MANUAL

Solution (1) By formulas (3.193) and (3.195), μκ − ν α = μκ ,

α

p(x) = (1 − α) e − ν x .

Hence, with the numerical parameters given, α = 0.5⋅1700−800 = 1 , 0.5⋅1700

17

1

− ⋅20 000 p(20 000) = 16 e 17⋅800 = 0.2163. 17

(2) By formula (3.20), the Lundberg coefficient r satisfies the equation 1 ∞ r y − ν1 y ∞ e e dy = ∫ 0 e −( ν −r) y dy = 1 = μ κ. 1 ν −r

∫0 Hence,

⎛ μκ − ν ⎞ α r= 1 ν ⎝ μκ ⎠ = ν and the Lundberg inequality (3.206) is p(x) ≤ e



x 17⋅800

,

x ≥ 0.

(3) Since x is fixed, the ruin probability is now written as a function of κ : ⎛

− ⎝ 1− p(κ x = 20, 000) = 1600 κ e

1600 ⎞ κ ⎠ 20,000

p(κ x = 0) = 1600 κ ,

,

1600 ≤ κ ≤ 1800,

1600 ≤ κ ≤ 1800.

1 p(κ x = 0)

0.8 0.6 0.4 p(κ x = 20, 000)

0.2

0 ... 1600

1640

1680

1720

1760

1800

κ

3.36) Under otherwise the same assumptions as made in exercise 3.35 and with the same numerical parameters, (1) determine the ruin probability if claims arrive according to an ordinary renewal process the typical cycle length Y of which has an Erlang distribution with parameters n = 2 and λ = 4, (2) determine the ruin probability if claims arrive according to the corresponding stationary renewal process.

53

3 POINT PROCESSES Solution (1) The densities of Y and M are b(y) = 1ν e −y /ν , y ≥ 0.

f Y (t) = λ 2 t e −λ t , t ≥ 0,

Their Laplace transforms are 2 f Y (s) = ⎛⎝ λ ⎞⎠ , s+λ

b ( s) =

1 . νs + 1

Hence, by formula (3.208), the corresponding Lundberg coefficient satisfies condition ⎛ λ ⎞2⋅ 1 =1 ⎝ κr + λ ⎠ 1 − rν

or (1 − r ν) (κr + λ) = λ 2 . This quadratic equation in r can be written in the form ⎛ r + 2λν − κ ⎞ 2 = ⎝ 2 νκ ⎠

1 ⎛ 4λνκ + κ 2 ⎞ . ⎝ ⎠ 4 ν2κ2

Its solution is (4λν + κ) κ . r = − 2λν − κ + 1 2 νκ 2 νκ Inserting the numerical values gives 1 r = − 8 ⋅ 800 − 1700 + (16 ⋅ 800 + 1700) 1700 . 1600 ⋅ 1700 1600 ⋅ 1700 Hence, r = 0.000097381.

Now, by formula (3.209), the ruin probability is p(20, 000) = (1 − rν)e −20,000 r = 0.1315.

(2) By the last formula of section 3.4.3, p S (20, 000) =

800 e −20,000 r = 0.1342 . 1700 ⋅ 0.5

3.37) Under otherwise the same assumptions as made in example 3.22, determine the survival probability if the claim size M has density b(y) = a 2 y e −a y , a > 0, y ≥ 0. Solution M has an Erlang distribution with parameters n = 2 and a. Hence, the mean claim size is ν = E(M) = 2a .

(i)

As usual, the following assumption has to be made: α=

κμ − ν κμ > 0.

The Laplace transform of b(y) is b ( s) =

a2 . (s + a ) 2

(ii)

54

SOLUTIONS MANUAL

By (3.193), the Laplace transform of the corresponding survival probability is q(s) =

q(0) = s ⋅

κμ κμ s − 1 +

(s + a) 2 κμ κμ (s + a) 2 − s − 2a

a2 (s+a) 2

q(0) = s

q(0)

⎡ s + 2a ⎢⎢ 1 + ⎢ κμ (s + a) 2 − s − 2a ⎣

⎤ ⎥⎥ . ⎥ ⎦

Hence, q(0) ⎡ ⎤ s + 2a q(s) = κμ s ⎢ κμ + ⎥, ( s + s )( s + s ) ⎣ 1 2 ⎦

where s 1 = 1 ⎡⎣ 2aκμ − 1 − 4aκμ + 1 ⎤⎦ , 2κμ s 2 = 1 ⎡⎣ 2aκμ − 1 + 4aκμ + 1 ⎤⎦ . 2κμ

For the sake of easy retransformation, q(s) is written in the form q(0) q(0) 2a q(0) 1 1 q(s) = s + κμ ⋅ + κμ ⋅ . (s + s 1 )(s + s 2 ) s (s + s 1 )(s + s 2 )

Taking into account (i), (ii) and s 2 − s 1 = 1 4aκμ + 1 , kμ s 1 s 2 = a 2 α = 2νa α ,

retransformation yields (use a table or do decomposition into partial fractions) ⎧⎪ q(x) = q(0) ⎨ 1 + ⎪⎩

⎡ ⎤⎫ α ⎥ ⎪. 2 1 (e −s 1 x − e −s 2 x ) + ⎢⎢ (s 1 e −s 2 x − s 2 e −s 1 x ) + 1 − ⎥⎬ α ⎥⎪ ⎢ α a 4aκμ + 1 4aκμ + 1 ⎣ ⎦⎭

Note that, by (i) and (ii), 0 < s 1 < s 2 . Hence, condition q(∞) = 1 yields α⎞ 1 = q(0) ⎛⎝ 1 + 1 − α ⎠

or q(0) = α .

Thus, q ( x) = 1 −

1 ⎡⎣ α (e −s 2 x − e −s 1 x ) + ν(s 2 e −s 1 x − s 1 e −s 2 x ) ⎤⎦ 4aκμ + 1

Hint L (e −s 1 x − e −s 2 x ) =

s2 − s1 , (s + s 1 ) (s + s 2 )

L ⎛⎝ s 1 e −s 2 x − s 2 e −s 1 x ⎞⎠ =

s 1 s 2 (s 2 − s 1 ) . s (s + s 1 ) (s + s 2 )

55

3 POINT PROCESSES 3.38) Claims arrive at an insurance company according to an ordinary renewal process {Y 1 , Y 2 , ....}.

The corresponding claim sizes M 1 , M 2 , ... are independent and identically distributed as M and independent of {Y 1 , Y 2 , ....}. Let the Y i be distributed as Y; i.e. Y is the typical interarrival interval. Then (Y, M) is the typical interarrival cycle. From historical observations it is known that μ = E(Y) = 1 [h], Var(Y) = 0.25, ν = E(M) = $ 800, Var(M) = 250, 000. Find approximate answers to the following problems: (1) What minimum premium per hour κ 0.99 has the insurance company to take in so that it will make a profit of at least $ 10 6 in the interval [0, 20 000] [hours] with probability 0.99? (2) What is the probability that the total claim hits level $ 6 ⋅ 10 6 in the interval [0, 7 000] [hours]? (Before possibly reaching its goals, the insurance company may have experienced one or more ruins with subsequent 'red number periods'.) Solution The random profit in [0, 20 000] is

κ 0.99 ⋅ 20 000 − C(20 000). Hence, κ 0.99 must satisfy P(κ 0.99 ⋅ 20 000 − C(20 000) ≥ 10 6 ) = 0.99.

By formula (3.211), since γ = 250, 000 + 800 2 ⋅ 0.25 = 640.3. κ 0.99 is determined as follows: P(κ 0.99 ⋅ 20 000 − C(20 000) ≥ 10 6 )

= P(κ 0.99 ⋅ 20 000 − 10 6 ≥ C(20 000)) ⎛κ ⋅ 20 000 − 10 6 − 800 ⋅ 20 000 ⎞ = Φ ⎜ 0.99 ⎟ = 0.99. 640.3 ⋅ 20 000 ⎝ ⎠

Since the 0.99-percentile of the standard normal distribution is z 0.99 = 2.32 , this last equation is equivalent to κ 0.99 ⋅ 20 000 − 10 6 − 800 ⋅ 20 000 640.3 ⋅ 20 000 or κ 0.99 − 850 640.3 20 000

Hence, κ 0.99 = 860.5 [$/h].

= 2.32.

= 2.32

56

SOLUTIONS MANUAL

(2) By theorem 3.20 (formula 3.179), the first passage time L(x) of the compound stochastic process {C(t), t ≥ 0} with regard to level x has an asymptotic normal distribution as μ L(x) ≈ N ⎛⎝ v x, v −3 γ 2 x ⎞⎠ as x → ∞.

Hence, ⎛ 7000 − 1 ⋅ 6 ⋅ 10 6 800 ⎞ ⎛ 6 P ⎝ L(6 ⋅ 10 ) ≤ 7 000 ⎠ = Φ ⎜⎜ ⎜ ⎝ 640.3 ⋅ 800 −3 ⋅ 6 ⋅ 10 6

⎞ ⎟ = Φ(−7.2) ≈ 0. ⎟ ⎟ ⎠

CHAPTER 4

Discrete-Time Markov Chains 4.1) A Markov chain {X 0 , X 1 , ...} has state space Z = {0, 1, 2} and transition matrix ⎛ 0.5 0 0.5 ⎞ ⎟ ⎜ P = ⎜ 0.4 0.2 0.4 ⎟ . ⎟ ⎜ ⎝ 0 0.4 0.6 ⎠ (1) Determine P ⎛⎝ X 2 = 2 X 1 = 0, X 0 = 1) and P ⎛⎝ X 2 = 2, X 1 = 0 X 0 = 1) (2) Determine P(X 2 = 2, X 1 = 0 X 0 = 0) and, for n > 1, P(X n+1 = 2, X n = 0 X n−1 = 0) . (3) Assuming the initial distribution P(X 0 = 0) = 0.4; P(X 0 = 1) = P(X 0 = 2) = 0.3, determine P(X 1 = 2) and P(X 1 = 1, X 2 = 2). Solution (1) P ⎛⎝ X 2 = 2 X 1 = 0, X 0 = 1) = P ⎛⎝ X 2 = 2 X 1 = 0) = 0. P(X 2 = 2, X 1 = 0 X 0 = 1) = p 10 p 02 = 0.4 ⋅ 0.5 = 0.2 (2) P ⎛⎝ X 2 = 2, X 1 = 0 X 0 = 0) = p 00 p 02 = 0.5 ⋅ 0.5 = 0.25 P ⎛⎝ X n+1 = 2, X n = 0 X n−1 = 0) = p 00 p 02 = 0.5 ⋅ 0.5 = 0.25 (homogeneity). (0)

(0)

(0)

(3) P(X 1 = 2) = p 0 p 02 + p 1 p 12 + p 2 p 22 = 0.4 ⋅ 0.5 + 0.3 ⋅ 0.4 + 0.3 ⋅ 0.6 = 0.5 (0)

(0)

(0)

P(X 1 = 1, X 2 = 2) = p 0 p 01 p 12 + p 1 p 11 p 12 + p 2 p 21 p 12 = 0.4 ⋅ 0 ⋅ 0.4 + 0.3 ⋅ 0.2 ⋅ 0.4 + 0.3 ⋅ 0.4 ⋅ 0.4 = 0.072 4.2) A Markov chain {X 0 , X 1 , ...} has state space Z = {0, 1, 2} and transition matrix ⎛ 0.2 0.3 0.5 ⎜ P = ⎜ 0.8 0.2 0 ⎜ ⎝ 0.6 0 0.4

⎞ ⎟ ⎟. ⎟ ⎠

(1) Determine the matrix of the 2-step transition probabilities P (2) . (2) Given the initial distribution P(X 0 = i) = 1/3 ; i = 0, 1, 2 ; determine the probabilities P(X 2 = 0) and P(X 0 = 0, X 1 = 1, X 2 = 2). Solution ⎛ 0.58 0.12 0.3 ⎞ (2) ⎜ ⎟ P (2) = P ⋅ P = ⎛⎝ ⎛⎝ p i j ⎞⎠ ⎞⎠ = ⎜ 0.32 0.28 0.4 ⎟ (1) ⎜ ⎟ ⎝ 0.36 0.18 0.46 ⎠ (0) (2) (0) (2) (0) (2) (2) P(X 2 = 0) = p 0 p 00 + p 1 p 10 + p 2 p 20 = 1 (0.58 + 0.32 + 0.36) = 0.42 3 (0) 1 P(X 0 = 0, X 1 = 1, X 2 = 2) = p 0 p 01 p 12 = ⋅ 0.3 ⋅ 0 = 0 3

58

SOLUTIONS MANUAL

4.3) A Markov chain {X 0 , X 1 , ...} has state space Z = {0, 1, 2} and transition matrix ⎛ 0 0.4 0.6 ⎞ ⎜ ⎟ P = ⎜ 0.8 0 0.2 ⎟ . ⎜ ⎟ ⎝ 0.5 0.5 0 ⎠ (1) Given the initial distribution P(X 0 = 0) = P(X 0 = 1) = 0.4 and P(X 0 = 2) = 0.2, determine P(X 3 = 2) . (2) Draw the corresponding transition graph. (3) Determine the stationary distribution. Solution (1) The matrix of the 2-step transition probabilities is ⎛ 0.62 0.3 0.08 ⎜ P (2) = P 2 = ⎜ 0.1 0.42 0.48 ⎜ ⎝ 0.4 0.2 0.4

⎞ ⎟ ⎟. ⎟ ⎠

Hence, the matrix of the 3-step transition probabilities is ⎛ 0.280 0.288 0.432 ⎞

⎜ ⎟ P (3) = P ⋅ P (2) = ⎜ 0.576 0.280 0.144 ⎟ . ⎜ ⎟ ⎝ 0.360 0.360 0.280 ⎠

The desired probability is (0) (3)

(0) (3)

(0) (3)

P(X 3 = 2) = p 0 p 02 + p 1 p 12 + p 2 p 22 = 0.4 ⋅ 0.432 + 0.4 ⋅ 0.144 + 0.2 ⋅ 0.280 = 0.2864.

(2)

0

0.6 0.5

2

0.4 0.8

0.2 0.5

1

(3) According to (4.9), the stationary distribution {π 0 , π 1 , π 2 } satisfies π 0 = 0.8 π 1 + 0.5 π 2 π 1 = 0.4 π 0 + 0.5 π 2 1 = π0 + π1 + π2 The solution is π 0 = 0.3070, π 1 = 0.2982, π 2 = 0.3948. 4.4) Let {Y 0 , Y 1 , ...} be a sequence of independent, identically distributed binary random variables with P(Y i = 0) = P(Y i = 1) = 1/2; i = 0, 1, ... Define a sequence of random variables {X 1 , X 2 , ...} by X n = 1 (Y n − Y n−1 ) ; n = 1, 2, ... ; and 2 check whether {X 1 , X 2 , ...} has the Markov property.

59

4 DISCRETE-TIME MARKOV CHAINS Solution Consider, for instance, the transition probability P(X 3 = 1 X 2 = 0 ⎞⎠ = P ⎛⎝ 1 (Y 3 − Y 2 ) = 1 1 (Y 2 − Y 1 ) = 0 ⎞⎠ 2 2 2 2

Then, since the random events "X 3 = 1 " and "Y 1 = Y 2 = 1 " are disjoint, 2 P(X 3 = 1 X 2 = 0 ⎞⎠ = P ⎛⎝ X 3 = 1 Y 1 = Y 2 = 0 ∪Y 1 = Y 2 = 1 2 2 P({X 3 = 1 } ∩ {Y 1 = Y 2 = 0 ∪ Y 1 = Y 2 = 1}) 2 P(Y 1 = Y 2 = 0 ∪ Y 1 = Y 2 = 1)

=

=

P(Y 1 = Y 2 = 0 ∩ Y 3 = 1) . P(Y 1 = Y 2 = 0 ∪ Y 1 = Y 2 = 1)

By assumption, Y 0 , Y 1 , and Y 3 are independent. Hence, P(X 3 = 1 X 2 = 0 ⎞⎠ = 1/8 = 1 2 1/2 4 Now. consider P(X 3 = 1 X 2 = 0, X 1 = 1 ⎞⎠ = P ⎛⎝ 1 (Y 3 − Y 2 ) = 1 1 (Y 2 − Y 1 ) = 0, 1 (Y 1 − Y 0 ) = 1 ⎞⎠ . 2 2 2 2 2 2 2

From 1 (Y 2 − Y 1 ) = 0 and 1 (Y 1 − Y 0 ) = 1 it follows that Y 2 = 1. Hence, X 3 = 1 cannot hold so that

2

2

2

2

P(X 3 = 1 X 2 = 0, X 1 = 1 ⎞⎠ = 0. 2 2

Thus, the random sequence {X 1 , X 2 , ...} does not have the Markov property. 4.5) A Markov chain {X 0 , X 1 , ...} has state space Z = {0, 1, 2, 3} and transition matrix ⎛ ⎜ P = ⎜⎜ ⎜ ⎜ ⎝

0.1 0.2 0.4 0.3

0.2 0.3 0.1 0.4

0.4 0.1 0.3 0.2

0.3 0.4 0.2 0.1

⎞ ⎟ ⎟. ⎟ ⎟ ⎟ ⎠

(1) Draw the corresponding transition graph. (2) Determine the stationary distribution of this Markov chain. Solution (2) By (4.9), the stationary distribution is solution of the following system of linear algebraic equations: π 0 = 0.1 π 0 + 0.2 π 1 + 0.4 π 2 + 0.3 π 3 π 1 = 0.2 π 0 + 0.3 π 1 + 0.1 π 2 + 0.4 π 3 π 2 = 0.4 π 0 + 0.1 π 1 + 0.3 π 2 + 0.2 π 3

1 = π0 + π1 + π2 + π3 The solution is π i = 0.25; i = 0, 1, 2, 3.

SOLUTIONS MANUAL

60

4.6) Let {X 0 , X 1 , ...} be an irreducible Markov chain with state space Z = {1, 2, ..., n}, n < ∞, and with the doubly stochastic transition matrix P = ((p ij )), i.e.

Σ

j∈Z

p i j = 1 for all i ∈ Z and

Σ

i∈Z

p i j = 1 for all j ∈ Z.

(1) Prove that the stationary distribution of {X 0 , X 1 , ...} is given by {π j = 1/n , j = 1, 2, ..., n} . (2) Can {X 0 , X 1 , ...} be a transient Markov chain? Solution (1) If {π j = 1/n ; j = 1, 2, ..., n } is the stationary distribution of {X 0 , X 1 , ...} , then it must satisfy the system of linear algebraic equations (4.9):

1 = n 1 p ; i = 1, 2, ..., n. n Σ i=1 n i j But this is obviously true, since P is a doubly stochastic matrix: n

n

Σ i=1 1n p i j = 1n Σ i=1

pi j = 1 n.

Moreover, this derivation shows that {π j = 1/n ; j = 1, 2, ..., n } being a stationary distribution implies that P is doubly stochastic. Thus: {π j = 1/n ; j = 1, 2, ..., n } is a stationary distribution if and only if P is a doubly stochastic matrix. (Note that, by theorem 4.9, any irreducible Markov chain with finite state space has exactly one stationary distribution.) (2) No, since every irreducible Markov chain with finite state space is recurrent. (See the statement after theorem 4.6.) 4.7) A source emits symbols 0 and 1 for transmission to a sink. Random noises S 1 , S 2 , ... successively and independently affect the transmission process of a symbol in the following way: if a '0' ('1') is to be transmitted, then S i distorts it to a '1' ('0') with probability p (q); i = 1, 2, ... Let X 0 = 0 or X 0 = 1 denote whether the source has emitted a '0' or a '1' for transmission. Further, let X i = 0 or X i = 1 denote whether the attack of noise S i implies the transmission of a '0' or a '1'; i = 1, 2, ... The random sequence {X 0 , X 1 , ...} is an irreducible Markov chain with state space Z = {0, 1} and transition matrix ⎛ 1−p p ⎞ P=⎜ ⎟. ⎝ q 1−q ⎠

(1) Verify: On condition 0 < p + q ≤ 1, the m-step transition matrix is given by P (m) =

m 1 ⎛ q p ⎞ + (1 − p − q) ⎛ p −p ⎞ . ⎜ ⎟ ⎜ ⎟ p+q ⎝ q p ⎠ p+q ⎝ −q q ⎠

(i)

(2) Let p = q = 0.1. The transmission of the symbols 0 and 1 is affected by the random noises S 1 , S 2 , ..., S 5 . Determine the probability that a '0' emitted by the source is actually received. Solution

(1) The proof is done by induction: For m = 1 the assertion is fulfiled since P = P (1) . Now assume that the m-step transition matrix given by (i) is correct. Then, P (m+1) = P ⋅ P (m) .

Doing the matrix multiplication yields

61

4 DISCRETE-TIME MARKOV CHAINS P ⋅ P (m) = =

m 1 ⎛ 1 − p p ⎞ ⎛ q p ⎞ + (1 − p − q) ⎛ 1 − p p ⎞ ⎛ p −p ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎜ ⎟ p+q ⎝ q 1−q ⎠⎝ q p ⎠ p+q ⎝ q 1 − q ⎠ ⎝ −q q ⎠

m 1 ⎛ 1 − p p ⎞ ⎛ q p ⎞ + (1 − p − q) ⎛ 1 − p p ⎞ ⎛ p −p ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟⎜ ⎟ p+q ⎝ q 1−q ⎠⎝ q p ⎠ p+q ⎝ q 1 − q ⎠ ⎝ −q q ⎠

=

m 1 ⎛ q p ⎞ + (1 − p − q) ⎛ (1 − p − q) p −(1 − p − q) p ⎞ ⎜ ⎟ ⎜ ⎟ p+q ⎝ q p ⎠ p+q ⎝ −(1 − p − q) q (1 − p − q) q ⎠

=

m+1 ⎛ p −p ⎞ 1 ⎛ q p ⎞ + (1 − p − q) ⎟ ⎟ = P (m+1) . ⎜ ⎜ p+q ⎝ q p ⎠ p+q ⎝ −q q ⎠

This completes the proof of the representation (i) of the m-step transition matrix. (2) Inserting the figures given into (i) with m = 5 gives the corresponding 5-step transition matrix: ⎛ 0.1 0.1 ⎞ ⎛ 0.1 −0.1 ⎞ ⎛ 0.66384 0.33616 ⎞ (5) P (5) = ⎛⎝ ⎛⎝ p i j ⎞⎠ ⎞⎠ = 5 ⎜ ⎟ + 1.6384 ⎜ ⎟ =⎜ ⎟. ⎝ 0.1 0.1 ⎠ ⎝ −0.1 0.1 ⎠ ⎝ 0.33616 0.66384 ⎠ (5)

Thus, the desired probability is p 00 = 0.66384. 4.8) Weather is classified as (predominantly) sunny (S) and (predominantly) cloudy (C), where C includes rain. For the town of Musi, a fairly reliable prediction of tomorrow's weather can only be made on the basis of today's and yesterday's weather. Let (C,S) indicate that the weather yesterday was cloudy and today's weather is sunny and so on. Based on historical observations it is known that, given the constellation (S,S) today, the weather tomorrow will be sunny with probability 0.8 and cloudy with probability 0.2; given (S,C) today, the weather tomorrow will be sunny with probability 0.4 and cloudy with probability 0.6; given (C,S) today, the weather tomorrow will be sunny with probability 0.6 and cloudy with probability 0.4; given (C,C) today, the weather tomorrow will be cloudy with probability 0.8 and sunny with probability 0.2. (1) Illustrate graphically the transitions between the states 1 = (S,S), 2 = (S,C), 3 = (C,S), and 4 = (C,C). (2) Determine the matrix of the transition probabilities of the corresponding discrete-time Markov chain and its stationary state distribution. Solution

(1)

0.8

0.2

(S,S)

(S,C)

0.6

0.4

0.6 0.8

(C,C)

0.4

(C,S)

0.2

(2) From the transition graph developed under (1), ⎛ ⎜ P = ⎜⎜ ⎜ ⎜ ⎝

0.8 0 0.6 0

0.2 0 0.4 0

0 0.4 0 0.2

0 0.6 0 0.8

⎞ ⎟ ⎟. ⎟ ⎟ ⎟ ⎠

SOLUTIONS MANUAL

62

Hence, the corresponding system of equations (4.9) for the stationary distribution is π 1 = 0.8 π 1 + 0.6 π 3 π 2 = 0.2 π 1 + 0.4 π 3 π 3 = 0.4 π 2 + 0.2 π 4

1 = π1 + π2 + π3 + π4 The solution is π 1 = 3/8, π 2 = π 3 = 1/8, π 4 = 3/8. 4.9)* An area (e.g. the sufarce of a CD ) is partitioned into n segments S 1 , S 2 , ..., S n , and a collection of n objects O 1 , O 2 , ..., O n (e.g. pieces of information) are stored at these segments so that each segment contains exactly one object. At time points t = 1, 2, ... , one of the objects is needed. Since its location is assumed to be unknown, it has to be searched for. This is done in the following way: The segments are checked in increasing order of their indices. When the desired object O is found at segment S k , then O will be moved to segment S 1 and the objects originally located at S 1 , S 2 , ..., S k−1 will be moved in this order to S 2 , S 3 , ..., S k . (This allocation policy is expected to lead to small search times.) Let p i be the probability that at a time point t object O i is needed; i = 1, 2, ..., n. It is assumed that these probabilities do not depend on t.

(1) Describe the successive location of object O 1 by a homogeneous, discrete-time Markov chain {X 0 , X 1 , ...} on condition that p 1 = α and p 2 = p 3 = . . . = p n = β = 1 − α , 0 < α < 1. n−1 Determine the matrix of transition probabilities. (2) What is the corresponding stationary distribution of the location of O 1 ? Solution (1) Let X t = i denote the random event that at time t object O 1 is at segment S i ; i = 1, 2, ..., n ; t = 0, 1, ... Then {X 0 , X 1 , ...} is a homogeneous, discrete-time Markov chain. Its positive transition probabilities p i j = P(X t+1 = j X t = i)

are given by p i1 = α ,

i = 1, 2, ..., n

p ii = (i − 1) β , p i,i+1 = (n − i) β ,

i = 2, 3, ..., n i = 1, 2, ..., n − 1

Hence, the matrix of the transition probabilities is ⎛ ⎜ ⎜ ⎜ P=⎜ ⎜ ⎜ ⎜ ⎝

... α (n − 1) β 0 0 0 ... α β (n − 2) β 0 0 . . . 0 α 0 2β (n − 3) β .. .. .. .. .. .. . . . . . . ... α 0 0 (n − 2) β β ... 0 0 0 (n − 1) β α

Note that α + (n − 1) β = 1.

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

63

4 DISCRETE-TIME MARKOV CHAINS (2) The system of equations (4.9) for the stationary state probabilities is π1 = α π1 + α π2 + . . . + α πn π 2 = (n − 1) β π 1 + β π 2 π 3 = (n − 2) β π 2 + 2 β π 3 .. . π n = β π n−1 + (n − 1) β π n

The first equation and the normalizing condition π 1 + π 2 + . . . + π n = 1 yield π 1 = α . Now, from the second equation, αβ π2 = (n − 1) . 1−β By proceeding in this way, the stationary distribution is seen to be π1 = α π k = (n − 1) (n − 2). . .(n − k + 1)

α β k−1 ; [1 − β] [1 − 2β] ⋅ . . . ⋅ [1 − (k − 1) β]

k = 2, 3, . . ., n.

4.10) A 'classic' inventory problem is the following one: A retailer of toner cartridges of a certain brand checks his stock every Monday at 7:00 a.m. If the stock is less than or equal to s cartridges, he orders an amount of S - s cartridges from a wholesale dealer at 7:30 a.m., 0 ≤ s < S. Otherwise, the retailer orders nothing. In case of an order, the dealer delivers the amount wanted till 9:00 a.m., the opening time of the shop. A demand which cannot be met by the retailer for being out of stock is lost. The weekly potential cartridge sales figures of the retailer (including lost demand) are independent random variables, identically distributed as B: p i = P(B = i); i = 0, 1, ...

Let X n be the number of cartridges the retailer has on stock on the n th Monday before the arrival of a possible delivery from the wholesale dealer. (1) Show that {X 1 , X 2 , ...} is a homogeneous Markov chain. (2) Determine its transition probabilities. Solution (1) Let S n be the weekly potential cartridge sales figures of the retailer (including lost demand) in the n th week which ends at the n th Monday before opening hours. Then X n+1 is given by ⎧ max (X n − B n , 0) if X n > s X n+1 = ⎨ . if X n ≤ s ⎩ max (S − B n , 0)

Consider the transition probability P(X n+1 = x n+1 X n = x n , X n−1 = x n−1 ⎞⎠ In view of the structure of X n+1 and with X n = x n given, the value x n−1 has no influence on X n+1 whatsoever, since the number of cartridges ordered for the (n + 1) th week depends only on x n . (By assumption, the demands B 1 , B 2 , ... are independent. Thus, P(X n+1 = x n+1 X n = x n , X n−1 = x n−1 ⎞⎠ = P(X n+1 = x n+1 X n = x n )

Hence, {X 1 , X 2 , ...} is a Markov chain. It is, moreover, a homogeneous Markov chain, i.e. its transition probabilities P(X n+1 = x n+1 X n = x n ) are the same for all n = 1, 2, ... , since the weekly demands B 1 , B 2 , ... are identically distributed as B. Its state space is Z = {0, 1, ..., S}.

SOLUTIONS MANUAL

64

(2) The transition probabilities p i j = P(X n+1 = j X n = i) are fully determined by s, S and B. For i > s,



p i 0 = P(B ≥ i) = Σ k=i p k p i j = P(B = i − j) = p i−j ;

For 0 ≤ i ≤ s,

j = 1, 2, ..., i . ∞

p i 0 = P(B ≥ S) = Σ k=S p k p i j = P(B = S − j) = p S−j ;

j = 1, 2, ..., S .

4.11) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix ⎛ ⎜ ⎜ P=⎜ ⎜ ⎜ ⎝

0.5 0.8 0 0 0

0.1 0.2 1 0 0

0.4 0 0 0 0

0 0 0 0.9 1

0 0 0 0.1 0

⎞ ⎟ ⎟ ⎟. ⎟ ⎟ ⎠

(1) Determine the minimal closed sets. (2) Check, whether inessential states exist. Solution (1) There are two minimal closed sets: {0, 1, 2} and {3, 4}. When you start in any of these sets, you obviously cannot leave it. But within these sets, you will sooner or later reach every state with probability 1. (2) Since every state of this Markov chain belongs to a minimal closed set, there cannot exist an inessential state. 4.12) A Markov chain has state space Z = {0, 1, 2, 3} and transition matrix ⎛ ⎜ P = ⎜⎜ ⎜ ⎜ ⎝

0 1 0.4 0.1

0 0 0.6 0.4

1 0 0 0.2

0 0 0 0.3

⎞ ⎟ ⎟. ⎟ ⎟ ⎟ ⎠

Determine the classes of essential and inessential states. Solution {0, 1, 2} is an essential class, since, when starting at a state i ∈ {0, 1, 2} , there is always a path which leads back to state i with positive probability. State 3 is inessential, since, when leaving this state, the probability of arriving at {0, 1, 2} is 0.7. However, from set {0, 1, 2} no return to state 3 is possible. Hence, with probability 1, the Markov chain will sooner or later leave state 3 and never return. 4.13) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix ⎛ ⎜ ⎜ P=⎜ ⎜ ⎜ ⎝

0 0 0 1 1

0.2 0 0 0 0

0.8 0 0 0 0

0 0.9 0.1 0 0

0 0.1 0.9 0 0

⎞ ⎟ ⎟ ⎟. ⎟ ⎟ ⎠

65

4 DISCRETE-TIME MARKOV CHAINS (1) Draw the transition graph. (2) Verify that this Markov chain is irreducible with period 3. (3) Determine the stationary distribution. Solution

(1)

0.2

0

1

1

0.8

0.1

4

2

0.9

3

(2) From the transition graph: Every state is accessible from any other state and from every state any other state can be reached. Hence, this Markov chain is irreducible. Moreover: Return to any state is only possible after multiples of 3. Hence, this Markov chain has period 3. (3) By (4.9), the stationary distribution satisfies π0 = π3 + π4 π 1 = 0.2 π 0 π 2 = 0.8 π 0 π 3 = 0.9 π 1 + 0.1 π 2

1 = π0 + π1 + π2 + π3 + π4 The solution is π 0 = 50/150, π 1 = 10/150, π 2 = 40/150, π 3 = 13/150, π 4 = 37/150. 4.14) A Markov chain has state space Z = {0, 1, 2, 3, 4} and transition matrix ⎛ ⎜ ⎜ P=⎜ ⎜ ⎜ ⎝

0 1 0.2 0.2 0.4

(1) Find the essential and inessential states. (2) Find the recurrent and transient states. Solution (1) essential: {0, 1}, inessential: {2, 3, 4}.

(2) recurrent: {0, 1}, transient: {2, 3, 4}.

1 0 0.2 0.8 0.1

0 0 0.2 0 0.1

0 0 0.4 0 0

0 0 0 0 0.4

⎞ ⎟ ⎟ ⎟. ⎟ ⎟ ⎠

SOLUTIONS MANUAL

66

4.15) Determine the stationary distribution of the random walk considered in example 4.14 on condition p i = p, 0 < p < 1. Solution The state space of this random walk is Z = {0, 1, 2, ...}, and the positive transition probabilities are pi 0 = p ;

i = 0, 1, ...

p i, i+1 = 1 − p ;

i = 0, 1, ...

Hence, the corresponding system (4.9) for the stationary distribution is π 0 = p (π 0 + π 1 + . . .) π i = (1 − p) π i−1 ;

i = 1, 2, ...

From the first equation and the normalizing condition, π 0 = p. The complete solution is obtained recursively: π i = p (1 − p) i ;

i = 0, 1, ...

4.16) Let the transition probabilities of a birth- and death process be given by 1 pi = and q i = 1 − p i ; i = 1, 2, ... ; p 0 = 1 . 1 + [i /(i + 1)] 2

Show that the process is transient. Solution The corresponding sum in formula (4.34) is ∞

2 1 = π − 1. 6 j=1 ( j + 1) 2

Σ

Since this sum is finite, by theorem 4.11, the birth- and death process is transient. 4.17) Let i and j be two different states with f i j = f j i = 1. Show that both i and j are recurrent. Solution The assertion results from f ii ≥ f i j ⋅ f j i = 1

and f j j ≥ f j i ⋅ f i j = 1. 4.18) The respective transition probabilities of two irreducible Markov chains (1) and (2) with common state space Z = {0, 1, ...} are

(1) p i i+1 = 1 , i+2

pi 0 = i + 1 i+2

(2) p i i+1 = i + 1 , i+2

p i 0 = 1 ; i = 0, 1, ... i+2

Check whether these Markov chains are transient, null recurrent or positive recurrent.

67

4 DISCRETE-TIME MARKOV CHAINS Solution This exercise is a special case of example 4.14 with the p i given by p i = p i 0 ; i = 0, 1, ...

(1) In this case, p i = (i + 1)/(i + 2). The corresponding sum (4.20) ∞

Σ

i=0

∞ i+1

Σ

pi =

i=0 i + 2

is infinite. Hence, this Markov chain is recurrent. Since μ 00 =



Σ

m=0

(m)

m f 00 < ∞ ,

it is, moreover, positive recurrent. Note that (1) f 00 = 1 ; 2

(m) 1 ⋅ m ; m = 2, 3, ... f 00 = 1 ⋅ 1 ⋅ . . . ⋅ m 2 3 m+1

(2) In this case, p i = 1/(i + 2). The corresponding sum (4.20) is infinite as well: ∞

Σ

i=0

pi =



Σ

1 = ∞.

i=0 i + 2

But this Markov chain is null recurrent. (The mean recurrence time to state 0 is determined by the harmonic progression.) 4.19) Let N i be the random number of time periods a discrete-time Markov chain stays in state i (sojourn time of the Markov chain in state i). Determine E(N i ) and Var(N i ). Solution Once the Markov chain has arrived at a state i, the transition into the following state depends only on i. Hence, N i has a geometric distribution with parameter p = p ii : P(N i = n) = (1 − p ii ) p n−1 ii ;

n = 1, 2, ...

Thus, mean value and variance of N i are (see section 1.2.2.2) p ii p ii E(N i ) = , Var(N i ) = , 1 − p ii (1 − p ) 2 ii

0 ≤ p ii < 1.

4.20)* A haulier operates a fleet of trucks. His contract with an insurance company covers his entire fleet and has the following structure ('bonus malus system'): The haulier has to pay his premium at the beginning of each year. There are 3 premium levels: λ 1 , λ 2 and λ 3 with λ 3 < λ 2 < λ 1 . If no claim had been made in the previous year and the premium level was λ i , then the premium level in the current year is λ i+1 or λ 3 if λ i = λ 3 . If a claim had been made in the previous year, the premium level in the current year is λ 1 . The haulier will claim only then if the total damage a year exceeds an amount of c i given the premium level λ i in that year; i = 1, 2, 3. In case of a claim, the insurance company will cover the full amount. The total damages a year are independent random variables, identically distributed as M. Given a vector of claim limits (c 1 , c 2 , c 3 ) , determine the haulier's long-run mean loss cost a year.

(Loss cost = premium plus total damage not refunded by the insurance company.) Hint Introduce the Markov chain {X 1 , X 2 , ...}, where X n = i if the premium level at the beginning of year n is λ i , and make use of theorem 4.10.

68

SOLUTIONS MANUAL

Solution The Markov chain {X 1 , X 2 , ...} has state space Z = {1, 2, 3}. Let B(x) = P(M ≤ x)

be the distribution function of the total damage M a year. Then the transition probabilities of this Markov chain are p 12 = B(c 1 ), p 23 = B(c 2 ), p 33 = B(c 3 ); p i1 = 1 − B(c i ) ; i = 1, 2, 3. If {π 1 , π 2 , π 3 } denotes the stationary distribution of {X 1 , X 2 , ...}, then, by (4.9), it satisfies π 1 = [1 − B(c 1 )] π 1 + [1 − B(c 2 )] π 2 + [1 − B(c 3 )] π 3 π 2 = B(c 1 )π 1 π 3 = B(c 2 )π 2 + B(c 3 )π 3

The solution is π1 =

1 − B(c 3 ) [1 + B(c 1 )] [1 − B(c 3 )] + B(c 2 ) B(c 3 )

π2 =

B(c 1 )[1 − B(c 3 )] [1 + B(c 1 )] [1 − B(c 3 )] + B(c 2 ) B(c 3 )

π3 =

B(c 1 ) B(c 2 ) [1 + B(c 1 )] [1 − B(c 3 )] + B(c 2 ) B(c 3 )

The mean loss cost of the haulier in a year with premium level λ i is g(i) = λ i + E(M M ≤ c i )B(c i ) + 0 ⋅ [1 − B(c i )],

where E(M M ≤ c i ) =

Hence,

ci



0

c

x

i b(x) dx = λ i + 1 ∫ x b(x) dx. B(c i ) 0 B(c i )

ci

g(i) = λ i + ∫ 0 x b(x) dx.

By theorem 4.10, the mean long-run loss cost a year is g(1) π 1 + g(2) π 2 + g(3) π 3 . Note Bonus-Malus-insurance software packages contain numerical algorithms which allow the determination of cost-optimal limits c i for this and more complicated insurance models.

CHAPTER 5

Continuous-Time Markov Chains 5.1) Let Z = {0, 1} be the state space and ⎛ e −t 1 − e −t ⎞ P(t) = ⎜ ⎟ ⎝ 1 − e −t e −t ⎠ the transition matrix of a continuous-time stochastic process {X(t), t ≥ 0}. Check whether {X(t), t ≥ 0} is a homogeneous Markov chain. Solution If {X(t), t ≥ 0} is a homogeneous Markov chain, then it must satisfy the Chapman-Kolmogorovequations (5.6): p ij (t + τ) = Σ p ik (t) p kj (τ) ; t ≥ 0, τ ≥ 0. k∈Z

From P, p 00 (t + τ) = e −(t+τ) , p 00 (t) p 00 (τ) + p 01 (t) p 10 (τ) = e −t ⋅ e −τ + (1 − e −t ) ⋅ (1 − e −τ ) ≠ p 00 (t + τ). Hence, {X(t), t ≥ 0} cannot be a homogeneous Markov chain. 5.2) A system fails after a random lifetime L. Then it waits a random time W for renewal. A renewal takes another random time Z. The random variables L, W and Z have exponential distributions with parameters λ, ν, and μ , respectively. On completion of a renewal, the system immediately resumes its work. This process continues indefinitely. All life, waiting, and renewal times are assumed to be independent. Let the system be in states 0, 1 and 2 when it is operating, waiting or being renewed. (1) Draw the transition graph of the corresponding Markov chain {X(t), t ≥ 0}. (2) Determine the point and the stationary availability of the system on condition P(X(0) = 0) = 1. Solution

(1)

λ 0

1

ν

2

μ (2) The corresponding system of linear differential equations (5.20) for the state probabilities p j (t) = P(X(t) = j) and the normalizing condition yield p 0 (t) = μ p 2 (t) − λ p 0 (t) p 1 (t) = λ p 0 (t) − ν p 1 (t)

(i)

p 0 (t) + p 1 (t) + p 2 (t) = 1 Initial condition: p 0 (0) = 1, p 1 (0) = p 2 (0) = 0.

(ii)

70

SOLUTIONS MANUAL

From (i) one obtains an inhomogeneous second-order differential equation with constant coefficients for the probability p 0 = p 0 (t) : p 0 + a p /0 + b p 0 = μ v

(iii)

with a = λ + μ + ν, b = λμ + λν + μν. Next the corresponding homogeneous differential equation has to be solved: p 0,h + a p /0,h + b p 0,h = 0.

(iv)

The corresponding characteristic equation x2 + a x + b = 0

has the solutions x 1/2 = − a ± 1 a 2 − 4b . 2 2 Case 1 a 2 = 4b, i.e. x 1 = x 2 = −a /2. Then the general solution of the homogeneous differential equation (iv) is p 0,h (t) = (c 1 + c 2 t) e

−a t 2

,

where c 1 and c 2 are arbitrary constants. The general solution of the inhomogeneous differential equation (iii) has structure p 0 (t) = p 0,h (t) + p 0,s (t), where p 0,s (t) is a special solution of (iii). Obviously, the constant function p 0,s (t) ≡ μ ν/b is a solution of (iii). Hence, the desired probability is −a t

μν . b To determine the constants c 1 and c 2 , the initial conditions (ii) will be made use of. From p 0 (0) = 1, μν p 0 (0) = c 1 + = 1. b Hence, μν λμ + λν c1 = 1 − = . b b From (i), p 0 (t) = (c 1 + c 2 t) e

2

+

μ p 1 (t) = μ − p 0 (t) − (λ + μ) p 0 (t). Now the initial condition p 1 (0) = 0 yields 0 = μ+

(v)

a c1 − c 2 − (λ + μ) . 2

Hence, a (μ + ν) c 2 = λ ⎛⎝ − 1 ⎞⎠ . 2b

With c 1 and c 2 known, the point availability of the system A(t) = p 0 (t) is fully given. The stationary availability is obtained by letting t → ∞ in A(t) : 1

A = lim A(t) = t→∞

μν λ = . 1 1 1 b + + λ μ ν

(vi)

71

5 CONTINUOUS-TIME MARKOV CHAINS Case 2 a 2 > 4b. In this case, x 1/2 < 0 and the general solution of (iv) is p 0,h (t) = c 1 e x 1 t + c 2 e x 2 t .

Hence, the general solution of (iii) is p 0 (t) = c 1 e x 1 t + c 2 e x 2 t +

From p 0 (0) = 1, c1 + c2 = 1 −

μν . b

μν . b

From p 1 (0) = 0 and (iv), 0 = μ − c 1 x 1 − c 2 x 2 − (λ + μ) so that (c 2 − c 1 ) a 2 − 4b + a (c 1 + c 2 ) = 2 λ . Hence, c2 =

⎛ ⎞ b − μν a − ⎜⎜ − 1 ⎟⎟ . ⎜ 2 ⎟ 2b ⎝ a − 4b ⎠ a 2 − 4b

λ

The stationary availability is, of course, the same as the one obtained in case 1. Case 3 a 2 < 4b. In this case, the solutions of the characteristic equation are complex: x 1/2 = α ± i β with α = − a , β = 1 2 2

4b − a 2 , i = −1 .

Then the general solution of the homogeneous differential equation (iv) is p 0,h (t) = e α t (c 1 cos βt + c 2 sin βt)

so that the desired general solution of (iii) is μν p 0 (t) = e α t ⎛⎝ c 1 cos βt + c 2 sin βt ⎞⎠ + b The initial conditions (ii) and equation (v) yield the constants c 1 and c 2 : μν a 2λ ⎛ 1 − μν ⎞ − c1 = 1 − , c2 = . ⎝ ⎠ b b 4b − a 2 4b − a 2 Note In view of the structure of the transition graph, the probabilities p 1 (t) and p 2 (t) can be obtained from p 0 (t) by cyclic transformation of the intensities λ, μ, and ν :

λ → v → μ → λ. This is easily done since the parameters a and b are invariant to this transformation. 5.3) Consider a 1-out-of-2-system, i.e. the system is operating when at least one of its two subsystems is operating. When a subsystem fails, the other one continues to work. On its failure, the joint renewal of both subsystems begins. On its completion, both subsystems immediately resume their work. The lifetimes of the subsystems are identically exponential with parameter λ . The joint renewal time is exponential with parameter µ. All life and renewal times are independent of each other. Let X(t) be the number of subsystems operating at time t. (1) Draw the transition graph of the corresponding Markov chain {X(t), t ≥ 0}.

(2) Given P(X(0) = 2) = 1, determine the time-dependent state probabilities p i (t) = P(X(t) = i) ; i = 0, 1, 2. (3) Determine the stationary state distribution.

72

SOLUTIONS MANUAL

(1)

2λ 1

0

λ

2

μ (2) The transition graph has the same structure as the one in the previous exercise 2.2. Hence, the results obtained in exercise 5.2 apply if there the transition rate λ is replaced with 2λ and the transition rate ν with λ. (3) From (vi) in exercise 5.2, by doing the corresponding parameter transformations indicated at the end of exercise 5.2 and under (2), the stationary state probabilities are 1 2λ p0 = , 1 + λ1 + μ1 2λ

1 λ p 1 == , 1 + λ1 + μ1 2λ

1 μ p2 = . 1 + λ1 + μ1 2λ

5.4) A launderette has 10 washing machines which are in constant use. The time between two successive failures of a washing machine has an exponential distribution with mean value 100 hours. There are two mechanics who repair failed machines. A defective machine is repaired by only one mechanic. During this time, the second mechanic is busy repairing another failed machine if there is any or this mechanic is idle. All repair times have an exponential distribution with mean value 4 hours. All random variables involved are independent. Consider the steady state. 1) What is the average percentage of operating machines? 2) What is the average percentage of idle mechanics? Solution This is the repairman problem considered in example 5.14. Let X(t) denote the number of failed machines at time t. Then {X(t), t ≥ 0} is a birth- and death process with state space {0, 1, ..., 10}. The stationary state probabilities π i = P(X = i)

of this process are given by formulas (5.65) with ρ = 0.01/0.25 = 0.04 and r = 2 : π 0 = 0.673, π 1 = 0.269, π 2 = 0.049, π 3 = 0.008, π 4 = 0.001, π i < 0.0001 for i = 5, 6, ..., 10. Hence, the mean number of operating machines in the steady state is with sufficient accuracy m w = 10 ⋅ 0.673 + 9 ⋅ 0.269 + 8 ⋅ 0.049 + 7 ⋅ 0.008 + 6 ⋅ 0.001 = 9.6,

i.e., in the steady state, on average 96% of the washing machines are operating. The mean number of idle mechanics is m r = 2 ⋅ 0.673 + 1 ⋅ 0.269 + 0 = 1.615.

Hence, the mean percentage of idle mechanics in the steady state is about 81%. 5.5) Consider the two-unit system with standby redundancy discussed in example 5.5 a) on condition that the lifetimes of the units are exponential with respective parameters λ 1 and λ 2 . The other model assumptions listed in example 5.5 remain valid. Describe the behaviour of the system by a Markov chain and draw the transition graph.

73

5 CONTINUOUS-TIME MARKOV CHAINS

Solution The underlying Markov chain {X(t), t ≥ 0} has state space {(0, 0), (0, 1), (1, 0), (1, 1 s ), (1 s , 1)}, where (s 1 , s 2 ) means that unit 1 is in state s 1 and unit 2 in state s 2 with

0 unit is down (being replaced in states (1,0) and (0,1)) 1 unit is operating 1 s unit is available (ready for operating), but in cold standby State (0,0) is absorbing. The transition graph is (0, 1)

λ1

μ (1, 1 s )

λ2

(1 s , 1) μ

(0, 0) λ1

λ2 (1, 0)

5.6) Consider the two-unit system with parallel redundancy discussed in example 5.6 on condition that the lifetimes of the units are exponential with parameters λ 1 and λ 2 , respectively. The other model assumptions listed in example 5.6 remain valid. Describe the behaviour of the system by a Markov chain and draw the transition graph. Solution The underlying Markov chain {X(t), t ≥ 0} has state space {(0, 0), (0, 1), (1, 0), (1, 1)} , where (i, j)

means that at one and the same time point unit 1 is in state i and unit 2 in state j with ’0 = unit down’ and ’1 = unit operating’ a) Survival probability In this case, state (0,0) is absorbing and the transition graph is λ1

(1,1)

(0,1)

μ

λ2

(0,0)

μ λ2

(1,0)

λ1

b) Long-run availability State (0, 0) has to be replaced with two new states: (0, 0 1 ) :

the transition to state (0, 0) was made via state (0, 1)

(0 1 , 0) :

the transition to state (0, 0) was made via state (1, 0)

The system arrives at state (0, 0 1 ) at a time point when unit 1 is being replaced. The system arrives at state (0 1 , 0) at a time point when unit 2 is being replaced.

74

SOLUTIONS MANUAL

r = 1 (one mechanic) The transition graph is λ1

λ2

(0,1)

(1,1)

(0, 0 1 )

μ

μ μ

μ

(1,0)

λ2

(0 1 , 0)

λ1

r = 2 (two mechanics) The transition graph is λ1

μ

(0,1)

(0, 0 1 )

μ

μ

(1,1)

λ2

μ

μ μ

(1,0)

λ2

(0 1 , 0) λ1

5.7) The system considered in example 5.7 is generalized as follows: If the system makes a direct transition from state 0 to the blocking state 2, then the subsequent renewal time is exponential with parameter μ 0 . If the system makes a transition from state 1 to state 2, then the subsequent renewal time is exponential with parameter μ 1 .

(1) Describe the behaviour of the system by a Markov chain and draw the transition graph. (2) What is the stationary probability that the system is blocked? Solution (1) The following system states are introduced (state 2 differs from the one used in example 5.7): 0 The system is operating. 1 Type 1-failure state 2 Type 2-failure state if caused by a transition from state 1 (system blocked) 3 Type 2-failure state if caused by a transition from state 2 (system blocked)

The corresponding transition graph is λ2

3

μ0

λ1

0

ν

1 μ1

(2) The system (5.28) for the stationary state probabilities is (λ 1 + λ 2 ) π 0 = μ 1 π 2 + μ 0 π 3 ν π1 = λ1π0 μ1π2 = ν π1 π0 + π1 + π2 + π3 = 1 The solution is

2

5 CONTINUOUS-TIME MARKOV CHAINS π0 =

75 1

λ λ λ 1 + ν1 + μ 1 + μ 2 1 0

λ λ λ π 1 = ν1 π 0 , π 2 = μ 1 π 0 , π 3 = μ 2 π 0 1 0 The probability that the system is blocked is λ1 λ2 μ1 + μ0 π2 + π3 = . λ λ λ 1 + ν1 + μ 1 + μ 2 1 0

If μ 1 = μ 2 = μ, then this blocking probability coincides with the one obtained in example 5.7. (There it is denoted as π 2 .) 5.8) Consider a two-unit system with standby redundancy and one mechanic. All repair times of failed units have an Erlang distribution with parameters n = 2 and μ. Apart from this, the other model assumptions listed in example 5.5 remain valid. (1) Describe the behaviour of the system by a Markov chain and draw the transition graph. (2) Determine the stationary state probabilities of the system. (3) Sketch the stationary availability of the system as a function of ρ = λ/μ. Solution (1) Erlang's phase method with the following system states is introduced:

0 1 2 3 4

Both units are available (one is operating, the other one in cold standby) One unit has failed. Its repair is in phase 1. One unit has failed. Its repair is in phase 2. Both units have failed. The one under repair is in phase 1. Both units have failed. The one under repair is in phase 2.

The stochastic process {X(t), t ≥ 0} with state space {0, 1, 2, 3, 4} is a Markov process with the following transition matrix:

3

λ

0

λ

1

μ

μ μ

μ

4 λ

2

(2) The system of algebraic equations for the stationary state probabilities π 1 , π 2 , π 3 , and π 4 is λ π0 = μ π2 (λ + μ) π 1 = λ π 0 + μ π 4 (λ + μ) π 2 = μ π 1 μ π3 = λ π1 μ π4 = λ π2 + μ π3

76

SOLUTIONS MANUAL

From this and the normalizing condition, letting ρ = λ/μ , π0 =

1 , 1 + 2ρ + 4ρ 2 + 2ρ 3

π 1 = (ρ + 1)ρ π 0 , π 2 = ρ π 0 , π 3 = (ρ + 1)ρ 2 π 0 , π 4 = (ρ + 2)ρ 2 π 0 . (3) The stationary availability of the system is A = π0 + π1 + π2 =

Α

(ρ + 1) 2 1 + 2ρ + 4ρ 2 + 2ρ 3

.

1 0.8 0.6 0.4 0.2 0

0.2

0.4

0.6

0.8

ρ

1

5.9) When being in states 0, 1, and 2, a (pure) birth process {X(t), t ≥ 0} with state space Z = {0, 1, 2, ...} has birth rates λ 0 = 2, λ 1 = 3 , λ 2 = 1. Given X(0) = 0, determine the timedependent state probabilities p i (t) = P(X(t) = i) for i = 0, 1, 2. Solution The p i (t), i = 0, 1, 2 , satisfy the first three differential equations of (5.36): p /0 (t) = −2 p 0 (t) p /1 (t) = 2 p 0 (t) − 3 p 1 (t) p /2 (t) = 3 p 1 (t) − p 2 (t)

The initial condition is equivalent to p 0 (0) = 1. Hence, the first differential equation yields p 0 (t) = e −2t , t ≥ 0.

The probabilities p 1 (t) and p 2 (t) can be recursively obtained by applying (5.38): p 1 (t) = 2 (e −2t − e −3t ),

p 2 (t) = 3 e −t (1 − e −t ) 2 ,

t ≥ 0.

5.10) Consider a linear birth process {X(t), t ≥ 0} with birth rates λ j = j λ ; j = 0, 1, ..., and state space Z = {0, 1, ...}. (1) Given X(0) = 1, determine the distribution function of the random time point T 3 at which the process enters state 3. (2) Given X(0) = 1, determine the mean value of the random time point T n at which the process enters state n, n > 1. Solution (1) T n can be represented as a sum of n − 1 independent random variables X i : T n = X 1 + X 2 + . . . + X n−1 ,

(i)

77

5 CONTINUOUS-TIME MARKOV CHAINS

where X i , the sojourn time of the process in state i , has an exponential distribution with parameter λi. Therefore, by formula (1.109), the distribution function of T 3 is t

F T (t) = ∫ 0 (1 − e −2λ(t−x) ) λ e −λx dx 3

t

= ∫ 0 λ e −λx dx − λe −2λt ∫ 0 e λx dx = 1 − e −λt − λ e −2λt ⎡ 1 e λ x ⎤ ⎣λ ⎦0 t

t

= 1 − e −λt − e −2λt (e λ t − 1). Hence, F T (t) = (1 − e −λ t ) 2 , t ≥ 0. 3

(2) In view of (i), since E( X i ) = 1 , λi

the mean value of T n is E(T n ) = 1 ⎛⎝ 1 + 1 + . . . + 1 ⎞⎠ . λ 2 n−1 5.11) The number of physical particles of a particular type in a closed container evolves as follows: There is one particle at time t = 0. It splits into two particles of the same type after an exponential random time Y with parameter λ (its lifetime). These two particles behave in the same way as the original one, i.e. after random times, which are identically distributed as Y, they split into 2 particles each, and so on. All lifetimes of the particles are assumed to be independent. Let X(t) denote the number of particles in the container at time t. Determine the absolute state probabilities p j (t) = P(X(t) = j) ; j = 1, 2, ...; of the stochastic process {X(t), t ≥ 0}. Solution {X(t), t ≥ 0} is a linear birth process with parameter λ . Hence , the state probabilities on condition X(0) = 1 are (section 5.6.1) p j (t) = e −λt (1 − e −λt ) j−1 ; j = 1, 2, ...

5.12) A death process with state space Z = {0, 1, 2, ...} has death rates

μ 0 = 0, μ 1 = 2, and μ 2 = μ 3 = 1. Given X(0) = 3 , determine p j (t) = P(X(t) = j) for j = 3, 2, 1, 0. Solution According to (5.44), the p j (t) satisfy the differential equations p /3 (t) = −p 3 (t) p /2 (t) = −p 2 (t) + p 3 (t) p /1 (t) = −2 p 1 (t) + p 2 (t) p /0 (t) = 2 p 1 (t)

From the first differential equation, p 3 (t) = e −t , t ≥ 0.

Recursively from (5.45) (formulas (5.46) are not applicable) t

p 2 (t) = e −t ∫ 0 e x e −x dx = t e −t , t ≥ 0.

78

SOLUTIONS MANUAL t

t

p 1 (t) = e −2t ∫ 0 e 2x x e −x dx = e −2t ∫ 0 x e x dx = e −2t [(x − 1) e x ] t0 . p 1 (t) = (t − 1) e −t + e −2t , t ≥ 0. t

p 0 (t) = 2 ∫ 0 [(x − 1) e −x + e −2x ] dx. p 0 (t) = 1 − e −2t − 2t e −t . 5.13) A linear death process {X(t), t ≥ 0} has death rates

μ j = j μ ; j = 0, 1, ... (1) Given X(0) = 2, determine the distribution function of the random time T 0 at which the process arrives at state 0 ('lifetime' of the process). (2) Given X(0) = n, n > 1, determine the mean value of the random time T 0 at which the process enters state 0. Solution (1) Given p n (0) = 1 , T 0 can be represented as a sum of n independent random variables X i : T0 = X1 + X2 + . . . + Xn,

(i)

where X i , the sojourn time of the death process in state i, has an exponential distribution with parameter iμ. Hence, by formula (1.109), the distribution function of T 0 = X 1 + X 2 is t

F T (t) = ∫ 0 (1 − e −2μ(t−x) ) μ e −μ x dx . 0

Thus, F T (t) = (1 − e −μ t ) 2 , t ≥ 0. 0

(2) In view of (i), since E(X i ) = 1/(μi) , 1 ⎛1 + 1 + . . . + 1 ⎞ . E( T 0 ) = μ n⎠ ⎝ 2 5.14) At time t = 0 there are an infinite number of molecules of type a and 2n molecules of type b in a two-component gas mixture. After an exponential random time with parameter µ any molecule of type b combines, independently of the others, with a molecule of type a to form a molecule ab. (1) What is the probability that at time t there are still j free molecules of type b in the container? (2) What is the mean time till there are left only n free molecules of type b in the container? Solution (1) Let X(t) be the number of free molecules of type b in the container. Then {X(t), t ≥ 0} is a linear death process with death rates μ j = jμ ; j = 0, 1, ..., 2n which starts at X(0) = 2n. Therefore, by section 5.6.2, the desired probabilities are ⎛ ⎞ p j (t) = 2n e −jμt (1 − e −μt ) 2n−j ; j = 0, 1, ... ⎝ j ⎠

(2) The time T n till the process arrives at state n is given by Tn = T0 = X1 + X2 + . . . + Xn,

79

5 CONTINUOUS-TIME MARKOV CHAINS where X i has an exponential distribution with parameter (2n − i + 1) ; i = 1, 2, ..., n. Hence, ⎞ 1⎛ 1 + 1 E(T n ) = μ + ... + 1 . ⎝ 2n 2(n − 1) n+1⎠

5.15) At time t = 0 a cable consists of 5 identical, intact wires. The cable is subject to a constant load of 100 kp such that in the beginning each wire bears a load of 20 kp. Given a load of w kp per wire, the time to breakage of a wire (its lifetime) is exponential with mean value 1000 [weeks]. w When one or more wires are broken, the load of 100 kp is uniformly distributed over the remaining intact ones. For any fixed number of wires, their lifetimes are assumed to be independent and identically distributed. (1) What is the probability that all wires are broken at time t = 50 [weeks] ? (2) What is the mean time until the cable breaks completely? Solution (a) When 5 wires are intact, then the mean lifetime of a wire is 1000/20 = 50. Hence, the lifetimes of the wires are exponential with parameter m 5 = 0.02.

(b) When 4 wires are intact, then the mean lifetime of a wire is 1000/25 = 40. Hence, the lifetimes of the wires are exponential with parameter m 4 = 0.025. (c) When 3 wires are intact, then the mean lifetime of a wire is 1000/33.3 = 30. Hence, the lifetimes of the wires are exponential with parameter m 3 = 0.0333. (d) When 2 wires are intact, then the mean lifetime of a wire is 1000/50 = 20. Hence, the lifetimes of the wires are exponential with parameter m 2 = 0.05. (e) When 1 wire is intact, then the mean lifetime of this wire is 1000/100 = 10. Hence, the lifetimes of this wire is exponential with parameter m 1 = 0.1. Let X(t) denote the number of intact wires at time t. Then {X(t), t ≥ 0} is a death process with state space Z = {0, 1, 2, 3, 4, 5} and death rates μ i = i ⋅ m i = 0.1; i = 1, 2, ..., 5. (1) Let T 0 be the time till the process enters state 0. Then, p 0 (t) = P(T 0 < t). T 0 has structure T 0 = X 1 + X 2 + . . . + X 5 , where the X i are independent and identically exponentially distributed with parameter 0.1. Hence, T 0 has an Erlang distribution with parameters n = 5 and λ = 0.1 (section 1.2.3.2). Thus, 4 (0.1 ⋅ 50) i

p 0 (50) = 1 − e −0.1⋅50 Σ

i=0

(2)

i!

= 0.56 .

E(T 0 ) = n = 5 = 50 . λ 0.1

5.16)* Let {X(t), t ≥ 0} be a death process with X(0) = n and positive death rates μ 1 , μ 2 , ... , μ n . Prove: If Y is an exponential random variable with parameter λ and independent of the death process, then n μi P(X(Y) = 0) = Π . μ i=1 i + λ

80

SOLUTIONS MANUAL

Solution The conditional probability P(X(Y) = 0 Y = t)) is simply p 0 (t) : p 0 (t) = P(X(Y) = 0 Y = t).

Hence, the desired probability is ∞



P(X(Y) = 0) = ∫ 0 p 0 (t) λe −λt dt = λ ∫ 0 p 0 (t) e −λt dt.

(i)

Therefore, 1 P(X(Y) = 0) is the Laplace transform of p 0 (t). In view of this relationship, the system λ of the differential equations (5.44) for the state probabilities of a death process is solved by means of the Laplace transformation with parameter s = λ (section 1.3.2). The system (5.44) is p n (t) = −μ n p n (t) p j (t) = −μ j p j (t) + μ j+1 p j+1 (t) ; j = 0, 1, ..., n − 1.

Let p j (λ) = L p j (t) be the Laplace transform of p j (t) with parameter s = λ. Then, by formula (1.30) and in view of the initial condition p n (0) = 1, application of the Laplace transform to this system gives a system of algebraic equations for the p j (λ) : λ p n (λ) − 1 = −μ n p n (λ) λ p j (λ) = −μ j p j (λ) + μ j+1 p j+1 (λ) ; j = n − 1, n − 2, ..., 1, 0. Solving recursively yields p n (λ) = p n−1 (λ) =

1 μn + λ

μn ⋅ 1 μ n−1 + λ μ n + λ

μ n−1 μn ⋅ ⋅ 1 μ n−2 + λ μ n−1 + λ μ n + λ .. . μ2 μ3 μn p 1 (λ) = ⋅ ⋅ ... ⋅ ⋅ 1 μ1 + λ μ2 + λ μ n−1 + λ μ n + λ p n−2 (λ) =

From (ii) with j = 0, since μ 0 = 0, Hence,

μ p 0 (λ) = 1 p 1 (λ) λ

μ μ2 μ3 μn p 0 (λ) = 1 ⋅ ⋅ ⋅ ... ⋅ ⋅ 1 . λ μ1 + λ μ2 + λ μ n−1 + λ μ n + λ

Thus, by (i), P(X(Y) = 0) = λ p 0 (λ) =

μ1 μ2 μ n−1 μn ⋅ ⋅ ... ⋅ ⋅ . μ1 + λ μ2 + λ μ n−1 + λ μ n + λ

5.17) Let a birth- and death process have state space Z = {0, 1, ..., n} and transition rates

λ j = (n − j) λ and μ j = j μ ; j = 0, 1, ..., n. Determine its stationary state probabilities.

(ii)

5 CONTINUOUS-TIME MARKOV CHAINS

81

Solution This is the special case r = n of the situation considered in example 5.14. Hence, from formulas (5.60) and (5.61) or directly from (5.65), letting ρ = λ/μ,

π j = ⎛⎝ n ⎞⎠ ρ j π 0 ; j π0 =

j = 1, 2, ..., n ;

1

. n ⎛n⎞ j 1 + Σ ⎝ ⎠ ρ π0 j=1 j

5.18) Check whether or under what restrictions a birth- and death process with transition rates j+1 λj = λ and μ j = μ ; j = 0, 1, ... , j+2 has a stationary state distribution. Solution The condition λ < μ is sufficient for the existence of a stationary state distribution. To see this note that the corresponding series (5.62) converges since the transition rates satisfy the sufficient condition (5.63): λ i−1 i λ λ μ i = i + 1 ⋅ μ ≤ μ < 1. The series (5.64) diverges since

lim

n

j μ i

Σ Π

n→∞ j=1 i=1 λ i

n μ j 1 ⎛μ⎞ j ≥ lim Σ ⎛⎝ ⎞⎠ = ∞ . ⎝ ⎠ j+1 n→∞ j=1 λ 2 3 . . . n→∞ j=1 λ ⋅ ⋅ ⋅

= lim

n

Σ

3 4

j+2

Hence, by theorem 5.3, a stationary state distribution exists if λ < μ. 5.19) A birth- and death process has transition rates

λ j = ( j + 1)λ and μ j = j 2 μ ; j = 0, 1, ...; 0 < λ < μ . Confirm that this process has a stationary state distribution and determine it. Solution The transition rates fulfil the sufficient condition (5.63): λ i−1 iλ λ λ μi = 2 = i μ ≤ μ < 1 . i μ

(Obviously, for satisfying (5.63) the condition λ < μ can be dropped.) The series (5.64) diverges: lim

n

j μ i

Σ Π

n→∞ j=1 i=1 λ i

= lim

n

j

i2μ

Σ Π (i + 1) λ = ∞ .

n→∞ j=1 i=1

Hence, by theorem (5.3), there exists a stationary state distribution. It is given by formulas (5.60) and (5.61): πj =

j λ j i−1 λ π = 1 ⎛ λ ⎞ j ; j = 1, 2, ... ; π = Π 0 0 j! ⎝ μ ⎠ μi i i=1 i=1 μ

Π

∞ ⎡ λ ⎞ j ⎤⎥ . π 0 = ⎢⎢ 1 + Σ 1 ⎛⎝ μ ⎠ ⎥⎥ ⎢⎣ j=1 j! ⎦

82

SOLUTIONS MANUAL

5.20) A computer is connected to three terminals (for example, measuring devices). It can simultaneously evaluate data records from only two terminals. When the computer is processing two data records and in the meantime another data record has been produced, then this new data record has to wait in a buffer when the buffer is empty. Otherwise the new data record is lost. (The buffer can store only one data record.) The data records are processed according to the FCFS-queueing discipline. The terminals produce data records independently according to a homogeneous Poisson processes with intensity λ. The processing times of data records from all terminals are independent (even if the computer is busy with two data records at the same time) and have an exponential distribution with parameter µ. They are assumed to be independent on the input. Let X(t) be the number of data records in computer and buffer at time t. (1) Verify that {X(t), t ≥ 0} is a birth- and death process, determine its transition rates and draw the transition graph. (2) Determine the stationary loss probability, i.e. the probability that in the steady state a data record is lost. Solution (1) Only transitions to neighbouring states are possible according to the following transition graph: 3λ 3λ 3λ

0

1

3

2

μ





Therefore, {X(t), t ≥ 0} is a birth- and death process. The desired loss probability is π loss = π 3 . Inserting the transition rates into (5.60) and (5.61) gives π loss =

6.75ρ 3 1 + 3ρ + 4.5ρ 2 + 6.75ρ 3

,

ρ = λ/μ.

5.21) Under otherwise the same assumptions as in exercise 5.20, it is assumed that a data record which has been waiting in the buffer a random patience time, will be deleted as being no longer up to date. The patience times of all data records are assumed to be independent, exponential random variables with parameter ν . They are also independent of all arrival and processing times of the data records. Determine the stationary loss probability. Solution The states of the corresponding birth- and death process {X(t), t ≥ 0} are the same as in exercise 5.20. The transition rates are identical except the death rate μ 3 , which is μ 3 = 2μ + ν.



3λ 1

0 μ

3λ 3

2 2μ

2μ + ν

From (5.60) and (5.61), if 'loss' refers only to a occupied buffer, π loss = π 3 =

13.5ρ 3 /(2 + ν/μ) 1 + 3ρ + 4.5ρ 2 + 13.5ρ 3 /(2 + ν/μ)

,

ρ = λ/μ .

If 'loss' also refers to data records which were deleted because their patience time had expired, then, if B is the service time and Z the patience time of a customer, by the total probability rule, ⎛ 2μ ⎞ π loss = π 3 + P(Z < B) (1 − π 3 ) = π 3 + (1 − π 3 ). ⎝ 2μ + ν ⎠

5 CONTINUOUS-TIME MARKOV CHAINS

83

5.22) Under otherwise the same assumptions as in exercise 5.21, it is assumed that a data record will be deleted when its total sojourn time in the buffer and computer exceeds a random time Z, where Z has an exponential distribution with parameter α. Thus, interruption of an ongoing service of a data record is possible. Determine the stationary loss probability. Solution The corresponding transition graph is







1

0 μ+α

3

2 2μ + α

2μ + α

If 'loss' refers only to a occupied buffer, then, rom (5.60) and (5.61), π loss = π 3 =

27 λ 2

(μ + α) (2μ + α) 2 + 3λ (2μ + α) 2 + 9λ 2 (2μ + α) + 27λ 2

.

If 'loss' also refers to data records which were deleted because their sojourn time had expired, then μ 2μ 2μ π loss = π + π + (1 − π 3 ) + π 3 . μ + α 1 2μ + α 2 2μ + α 5.23) A small filling station in a rural area provides diesel for agricultural machines. It has one diesel pump and waiting capacity for 5 machines. On average, 8 machines per hour arrive for diesel. An arriving machine immediately leaves the station without fuel if pump and all waiting places are occupied. The mean time a machine occupies the pump is 5 minutes. It is assumed that the station behaves like an M/M/s/m-queueing system. (1) Determine the stationary loss probability. (2) Determine the stationary probability that an arriving machine waits for diesel. Solution (1) The formulas given in section 5.7.4.1 apply. (See also example 5.17.) The arrival rate per hour is λ = 8 and the service rate is μ = 12 per hour. Hence, the traffic intensity is π = λ/μ = 2/3, and the probabilities π i that in the steady state there are i machines at the filling station are i π i = ⎛⎝ 2 ⎞⎠ π 0 ; i = 1, 2, ..., 6 with π 0 = 3

1 2 6 1 + 2 + ⎛⎝ 2 ⎞⎠ + . . . + ⎛⎝ 2 ⎞⎠ 3 3 3

.

The loss probability in the steady state is π loss = π 6 = 0.0311 and the waiting probability is π = π 1 + π 2 + . . . + π 5 = 1 − π 0 − π 6 = 0.6149. wait

5.24) Consider a two-server loss system. Customers arrive according to a homogeneous Poisson process with intensity λ. A customer is always served by server 1 when this server is idle, i.e. an arriving customer goes only then to server 2, when server 1 is busy. The service times of both servers are independent, identically exponential with parameter μ distributed random variables. Let X(t) be the number of customers in the system at time t. Determine the stationary state probabilities of the stochastic process {X(t), t ≥ 0}. Solution Let (i, j) be the state that there are i customers at server 1 and j customers at server 2; i, j = 0, 1. To simplify notation, let 0 = (0, 0), 1 = ((1, 0), 2 = (0, 1), 3 = (1, 1) . The corresponding transition graph is

SOLUTIONS MANUAL

84

(0,1) λ

μ

μ

λ

(0,0)

(1,0)

μ

λ

(1,1)

μ

Hence, the p i satisfy the system of linear equations λ p0 = μ p1 + μ p2 (λ + μ) p 1 = μ p 3 (λ + μ) p 2 = λ p 0 + μ p 3

By making use of the normalizing condition, the solution is seen to be −1 ⎡ ρ (ρ + 2) ρ2 ρ (ρ + 2) ρ 2 ⎤ ρ2 ρ2 p0 = ⎢1 + + + p0 , p2 = p0 , p3 = p . ⎥ , p1 = 2 (ρ + 1) 2 (ρ + 1) 2 ⎦ 2 (ρ + 1) 2 (ρ + 1) 2 0 ⎣

Hence, the stochastic process {X(t), t ≥ 0} with X(t) denoting the number of customers in the system at time t, has the stationary state probabilities π0 = p0 , π1 = p1 + p2 = ρ p0 , π3 = p3 . 5.25) A 2-server loss system is subject to a homogeneous Poisson input with intensity λ. The situation considered in the previous exercise is generalized as follows: If both servers are idle, a customer goes to server 1 with probability p and to server 2 with probability 1 − p . Otherwise, a customer goes to the idle server (if there is any). The service times of the servers 1 and 2 are independent exponential random variables with parameters μ 1 and μ 2 , respectively. All arrival and service times are independent. Describe the system behaviour by a homogeneous Markov chain and draw the transition graph. Solution Let state (i, j) be defined as in exercise 5.24. Then the transition graph of the corresponding homogeneous Markov chain is (0,1)

(1 − p)λ

λ

μ1

μ2 λp

(0,0)

μ1

(1,0)

λ μ2

(1,1)

5.26) A single-server waiting system is subjected to a homogeneous Poisson input with intensity λ = 30 [h −1 ]. If there are not more than 3 customers in the system, the service times have an exponential distribution with mean 2 [min]. If there are more than 3 customers in the system, the service times have an exponential distribution with mean 1 [min]. All arrival and service times are independent. (1) Show that there exists a stationary state distribution and determine it. (2) Determine the mean length of the waiting queue in the steady state.

85

5 CONTINUOUS-TIME MARKOV CHAINS

Solution (1) As usual, let X(t) be the number of customers in the system at time t. The birth- and death process {X(t), t ≥ 0} has birth rates λ i = 1 [min −1 ] ; i = 0, 1, ... 2

and death rates μ 0 = 0, μ 1 = μ 2 = μ 3 = 1 [min −1 ], μ i = 1 [min −1 ] ; i = 4, 5, ... 2 Since the conditions of theorem 5.3 are fulfilled, there is a stationary state distribution. It is given by (5.60) and (5.61): π0 = π1 = π2 = π3 = 1 , 5 k−3 π k = 1 ⎛⎝ 1 ⎞⎠ ; k = 4, 5, ... 5 2

(2) The mean queue length in the steady state is ∞ k−3 E(L) = 1 ⋅ 1 + 2 ⋅ 1 + 3 ⋅ 1 + Σ k ⋅ 1 ⎛⎝ 1 ⎞⎠ 5 5 5 k=4 5 2 ∞ k−4 = 6 + 1 ⋅ Σ (k − 4 + 4) ⎛⎝ 1 ⎞⎠ 5 10 k=4 2 ∞ ∞ i i = 6 + 1 ⋅ Σ i ⎛⎝ 1 ⎞⎠ + 4 ⋅ Σ ⎛⎝ 1 ⎞⎠ = 6 + 2 + 8 . 5 10 10 5 10 i=0 2 10 k=0 2

Thus, E(L) = 2.2. 5.27) Taxis and customers arrive at a taxi rank in accordance with two independent homogeneous Poisson processes with intensities λ 1 = 4 an hour and λ 2 = 3 an hour, respectively. Potential customers, who find 2 waiting customers, do not wait for service, but leave the rank immediately. (Groups of customers, who will use the same taxi, are considered to be one customer.) On the other hand, arriving taxis, who find two taxis waiting, leave the rank as well. What is the average number of customers waiting at the rank? Solution Let (i, j) denote the state that there are i customers and j taxis at the rank. The transition graph of the corresponding Markov process with state space {(2, 0), (1, 0), (0, 0), (0, 1), (0, 2)} is λ1

(2,0)

λ1

(1,0) λ2

λ1

(0,0) λ2

λ1

(0,2)

(0,1) λ2

λ2

Thus, the transitions between the states are governed by a birth- and death process with constant birth rates and constant death rates. By (5.60) and (5.61) with λ 1 /λ 2 = 4/3, the stationary state probabilities are π (2,0) = 0.1037 2 3 4 π (1,0) = 4 π (2,0) , π (0,0) = ⎛⎝ 4 ⎞⎠ , π (0,1) = ⎛⎝ 4 ⎞⎠ π (2,0) , π (0,2) = ⎛⎝ 4 ⎞⎠ π (2,0) 3 3 3 3

Thus, in the steady state, the mean number of customers waiting at the rank is E(L) = 2 ⋅ 0.1037 + 1 ⋅ 4 ⋅ 0.1037 = 0.3457. 3

SOLUTIONS MANUAL

86

5.28) A transport company has 4 trucks of the same type. There are two maintenance teams for repairing the trucks after a failure. Each team can repair only one truck at a time and each failed truck is handled by only one team. The times between failures of a truck (lifetimes) are exponential with parameter λ. The repair times are exponential with parameter μ. All life and repair times are assumed to be independent. Let ρ = λ/μ = 0.2. What is the most efficient way of organizing the work: 1) to make both maintenance teams responsible for the maintenance of all 4 trucks so that any team which is free can repair any failed truck, or 2) to assign 2 definite trucks to each team? Solution This is the repairman problem considered in example 5.14 with r maintenance teams and n machines to be maintained. Let X(t) be the number of failed trucks at time t. Next the stationary state probabilities of the birth- and death process {X(t), t ≥ 0} are determined. Case 1 n = 4, r = 2 Formulas (5.65) yield π 0 = 0.47783, π 1 = 0.38226, π 2 = 0.11468, π 3 = 0.02294, π 4 = 0.00229 .

Hence, the mean number of failed trucks in the steady state is 4

E(X 4,2 ) = Σ i=1 i π i = 0.68960.

The mean number of busy maintenance teams is E(Z 4,2 ) = 1 ⋅ π 1 + 2 (π 2 + π 3 + π 4 ) = 0.66208. Case 2 n = 2, r = 1 Formulas (5.65) yield the stationary state probabilities of {X(t), t ≥ 0} : π 0 = 0.67568, π 1 = 0.27027, π 2 = 0.05405. Hence, the mean number of failed trucks in the steady state is E(X 2,1 ) = 1 ⋅ π 1 + 2 ⋅ 0.05405 = 0.37837.

The mean number of busy maintenance teams is E(Z 2,1 ) = 1 ⋅ (π 1 + π 2 ) = 0.32432. Comparison of policies 1 and 2 When applying policy 2, the mean number of failed trucks out of four is 2 E(X 2,1 ) = 0.74864,

whereas the mean number of busy maintenance teams is 2 E(Z 2,1 ) = 0.64864. Hence, one the one hand, using policy 2 leads on average to a larger number of failed trucks than using policy 1. On the other hand, under policy 2 the maintenance teams are less busy than under policy 1. Consequently, with regard to the criteria applied, policy 1 is preferable to policy 2. 5.29) Ferry boats and customers arrive at a ferry station in accordance with two independent homogeneous Poisson processes with intensities λ and μ , respectively. If there are k customers at the ferry station when a boat arrives, then it departs with min (k, n) passengers (n is the capacity of each boat). If k > n , then the remaining k − n customers wait for the next boat. The sojourn times of the boats at the station are assumed to be negligibly small. Model the situation by a homogeneous Markov chain {X(t), t ≥ 0} and draw the transition graph. Solution The situation is modeled by a homogeneous Markov chain {X(t), t ≥ 0} with state space {0, 1, ...} as follows: If there are i customers at the ferry station, then {X(t), t ≥ 0} is in state i. The number of ferry boats at the station need not be taken into account, since, by assumption, their sojourn

5 CONTINUOUS-TIME MARKOV CHAINS

87

time at the station, compared with their interarrival times, is negligibly small, and they depart even without any passenger. This Markov chain has the positive transition intensities q i, i+1 = μ, i = 0, 1, ... qi 0 = λ , q kn+i, (k−1)n+i = λ ;

i = 1, 2, ..., n i = 1, 2, ..., n ;

k = 1, 2, ...

Obviously, {X(t), t ≥ 0} is not a birth- and death process. A section of its transition graph is λ

0

λ μ

λ

1

...

2

μ

μ

n

μ

λ

n+1

μ

n+2

λ

...

μ

2n

...

λ

5.30) The life cycle of an organism is controlled by shocks (e.g. virus attacks, accidents) in the following way: A healthy organism has an exponential lifetime L with parameter λ h . If a shock occurs, the organism falls sick and, when being in this state, its (residual) lifetime S is exponential with parameter λ s , λ s > λ h . However, a sick organism may recover and return to the healthy state. This occurs in an exponential time R with parameter μ. If during a period of sickness another shock occurs, the organism cannot recover and will die a random time D after the occurrence of the second shock. D is assumed to be exponential with parameter λ d , λ d > λ s . The random variables L, S, R, and D are assumed to be independent. Shocks arrive according to a homogeneous Poisson process with intensity λ. (1) Describe the evolvement in time of the states the organism may be in by a Markov chain. (2) Determine the mean lifetime of the organism. Solution (1) Four states are introduced: 0 healthy, 1 sick (after one shock), 2 sick (after two shocks), 3 dead.

The transition graph of the corresponding Markov chain {X(t), t ≥ 0} is

λ 1

0 μ

λh

λs

λ

2

λd

3

(2) The system (5.20) for the absolute state probabilities p i (t) = P(X(t) = i); i = 1, 2, 3, 4 is p 0 (t) = μ p 1 (t) − (λ + λ h ) p 0 (t) p 1 (t) = λ p 0 (t) − (λ + λ s + μ) p 1 (t) p 2 (t) = λ p 1 (t) − λ d p 2 (t) p 3 (t) = λ h p 0 (t) + λ s p 1 (t) + λ d p 2 (t)

SOLUTIONS MANUAL

88

Let p i (s) be the Laplace transform of p i (t). Then, by applying the Laplace transform to the first 3 differential equations of this system, making use of p 0 (0) = 1 and formula (1.30), gives s p 0 (s) − 1 = μ p 1 (s) − (λ + λ h ) p 0 (s) s p 1 (s) = λ p 0 (s) − (λ + λ s + μ) p 1 (s) s p 2 (s) = λ p 1 (s) − λ d p 2 (s)

The solution is s + λ + λs + μ (s + λ + λ s ) (s + λ + λ h ) + μ (s + λ h ) λ p 1 ( s) = (s + λ + λ s + μ) (s + λ + λ h ) − λμ

p 0 ( s) =

p 2 (s) =

λ2 (s + λ d ) [(s + λ + λ s + μ) (s + λ + λ h ) − λμ]

If X denotes the lifetime of the organism, then F(t) = P(X > t) = p 0 (t) + p 1 (t) + p 2 (t) is the survival probability of an organism. Hence, the Laplace transform of F(t) is ∞ F(s) = ∫ 0 e −s t F(t) dt = p 0 (s) + p 1 (s) + p 2 (s) .

By formula (1.17), the desired mean lifetime is E(X) = F(0) =

2λ + λ s + μ + λ 2 /λ d . (λ + λ s + μ) λ h + (λ + λ s ) λ

As a special case: If λ = 0 , then the mean lifetime is E(X) = 1/λ h . 5.31) Customers arrive at a waiting system of type M/M/1/∞ with intensity λ. As long as there are less than n customers in the system, the server remains idle. As soon as the n th customer arrives, the server resumes its work and stops working only then, when all customers (including the newcomers) have been served. After that the server again waits until the waiting queue has reached length n and so on. Let 1/μ be the mean service time of a customer and X(t) be the number of customers in the system at time t. (1) Draw the transition graph of the Markov chain {X(t), t ≥ 0}. (2) Given that n = 2 , compute the stationary state probabilities. (Make sure that they exist.) Solution (1) Let the Markov chain {X(t), t ≥ 0} be in state i if there are i customers in the system. However, if the number of customers in the system is k, 1 ≤ k < n, and this state was reached from state n, then this number is denoted as k ∗ . With this agreement, the transition graph of {X(t), t ≥ 0} is λ

1*

μ

μ

0

λ

2*

λ

... λ

μ ...

1 1

λ

...

λ

n-1

μ

(n-1)*

λ

λ

μ

n

μ λ

n+1

μ ... λ

...

89

5 CONTINUOUS-TIME MARKOV CHAINS (2) By (5.28) and the transition graph for n = 2 , the stationary state probabilities satisfy λ π0 = μ π1∗ λ π1 = μ π0 (λ + μ) π 1 ∗ = μ π 2 (λ + μ) π 2 = λ π 1 + λ π 1 ∗ + μ π 3 (λ + μ) π i = λ π i−1 + μ π i+1 ; i = 3, 4, ... Letting ρ = λ/μ , the solution is π1 = π0 π1∗ = ρ π0 π i = ρ i−1 (ρ + 1) π 0 ;

i = 2, 3, ...

From the normalizing condition and the geometric series, 1−ρ π0 = . 2 Thus, a stationary solution exists if λ < μ , i.e. if ρ < 1. Note that p 1 = π 1 ∗ + π 1 = (ρ + 1) π 0 is the probability that in the steady state there is exactly one customer in the system. Thus, with p i = π i for i = 0, 2, 3, ... the probabilities that i customers are in the system are p i = ρ i−1 (ρ + 1) p 0 ;

i = 1, 2, ...

5.32) At time t = 0 a computer system consists of n operating computers. As soon as a computer fails, it is separated from the system by an automatic switching device with probability 1 − p. If a failed computer is not separated from the system (this happens with probability p), then the entire system fails. The lifetimes of the computers are independent and have an exponential distribution with parameter λ. Thus, this distribution does not depend on the system state. Provided the switching device has operated properly when required, the system is available as long as there is at least one computer available. Let X(t) be the number of computers which are available at time t. By convention, if due to the switching device the entire system has failed in [0, t), then X(t) = 0. (1) Draw the transition graph of the Markov chain {X(t), t ≥ 0}. (2) Given n = 2, determine the mean lifetime E(X s ) of the system.

(1) n

nλ(1 − p)

n-1

(n − 1)λ(1 − p)

...

3λ(1 − p)

2

(n − 1)λ p

2λ(1 − p) 2λp

1

λ

0

nλp

(2) From the transition graph with n = 2 and (5.20) with p i (t) = P(X(t) = i); i = 0, 1, 2 , p /0 (t) = λ p 1 (t) + 2λ p p 2 (t) p /1 (t) = 2λ(1 − p) p 2 (t) − λ p 1 (t) p /2 (t) = −2λ p 2 (t)

In view of the initial condition p 1 (0) = 0, p 2 (0) = 1, application of the Laplace transform to the second and third differential equation yields (notation as in exercise 5.30) s p 1 (s) = 2λ(1 − p) p 2 (s) − λ p 1 (s) s p 2 (s) − 1 = −2λ p 2 (s)

SOLUTIONS MANUAL

90 The solution is 2λ (1 − p) , p 2 ( s) = 1 . (s + λ) (s + 2λ) s + 2λ Analogously to exercise 5.30, the mean lifetime is obtained from p 1 ( s) =

E(X s ) = p 1 (0) + p 2 (0) .

Hence, E(X s ) = 1 (1.5 − p) . λ 5.33) A waiting-loss system of type M/M/1/2 is subject to two independent Poisson inputs 1 and 2 with respective intensities λ 1 and λ 2 (type 1- and type 2- customers). An arriving type 1-customer who finds the server busy and the waiting places occupied displaces a possible type 2-customer from its waiting place (such a type 2-customer is lost), but ongoing service of a type 2-customer is not interrupted. When a type 1-customer and a type 2-customer are waiting, then the type 1-custom- er will always be served first, regardless of the order of their arrivals. The service times of type 1- and type 2- customers are independent random variables, which have exponential distributions with parameters μ 1 and μ 2 , respectively. Describe the behaviour of the system by a homogeneous Markov chain, determine the transition rates, and draw the transition graph. Solution System states (i ; j, k) are defined as follows: i = 0 : the server is idle i = 1 : a type 1-customer is being served i = 2 : a type 2-customer is being served j type 1-customers are waiting; j = 1, 2 k type 2-customers are waiting; k = 1, 2 ; 0 ≤ j + k ≤ 2

The corresponding transition graph is 1,2,0

λ1

λ1 λ1

0,0,0

μ2

1,0,0 μ2

μ2 μ1 λ2

μ1

1,1,0 λ2

λ1

1,1,1

μ1

λ2

λ1

1,0,2

μ2

μ1

2,0,0

μ2

1,0,1

λ2

μ1

λ1

μ2

λ1

2,1,0 μ1

λ2

λ2

2,0,1 μ 2

2,2,0 λ1

2,1,1 λ2

λ1

2,0,2

91

5 CONTINUOUS-TIME MARKOV CHAINS

5.34) A queueing network consists of two servers 1 and 2 in series. Server 1 is subject to a homogeneous Poisson input with intensity per hour λ = 5. A customer is lost if server 1 is busy. From server 1 a customer goes to server 2 for further service. If server 2 is busy, the customer is lost. The service times of servers 1 and 2 are exponential with respective mean values

1/μ 1 = 6 [minutes] and 1/μ 2 = 12 [minutes]. All arrival and service times are independent. What percentage of customers (with respect to the total input at server 1) is served by both servers? Solution Next the system is modeled by a homogeneous Markov chain with states (i, j) , where i is the number of customers at server 1 and j is the number of customers at server 2; i, j = 0, 1. The corresponding transition graph is

1 λ

(1,0)

μ2

μ1

0 (0,0)

(1,1) 3

λ

μ2

2

(0,1)

μ1

For convenience, the states will be denoted as follows: 0 = (0, 0), 1 = (1, 0), 2 = (0, 1), 3 = (1, 1) . The system of equations for the stationary state probabilities is λ π0 = μ2π2 μ1π1 = λ π0 + μ2π3 (λ + μ 2 ) π 1 = μ 1 π 1 + μ 1 π 3 (μ 1 + μ 2 ) π 3 = λ π 2 The solution is π0 =

1 2 2 1 + μλ ⎛ 1 + μ (μλ +μ ) ⎞ + μλ + μ (μλ +μ ) 1⎝ 2 2 1 2 ⎠ 2 1 2 2 π 1 = μλ ⎛ 1 + μ (μλ +μ ) ⎞ 1⎝ 2 1 2 ⎠

π 2 = μλ

2

2

π 3 = μ (μλ +μ ) 2 1 2 By inserting the numerical parameters λ = 1/12, μ 1 = 1/6, and μ 2 = 1/12 (all with regard to minutes), the π i become π 0 = π (0,0) = 3 , π 1 = π (1,0) = 2 , π 2 = π (0,1) = 3 , π 3 = π (1,1) = 1 . 9 9 9 9

(i)

SOLUTIONS MANUAL

92

Let A be the random event that an arriving customer will be served by both servers. The probability of this event can only then be positive when at arrival of a customer the system is in states (0,0) or (0,1). Hence, P(A) = P(A system in state (0,0)) π (0,0) + P(A system in state (0,1)) π (0,1) . Obviously, P(A system in state (0,0)) = 1. If the system is in state (0,1), then A will occur if and only if the service time Z 1 of server 1 is greater than the service time Z 2 of server 2. Therefore, 1

1

− t ∞ − t P(A system in state (0,1)) = P(Z 1 > Z 2 ) = ∫ 0 e 6 1 e 12 dt = 1 . 12 3 (This derivation makes use of the memoryless property of the exponential distribution.) Thus,

P(A) = 1 ⋅ 3 + 1 ⋅ 3 = 4 = 0.44 . 9 3 9 9 Therefore, 44.4 % of all arriving customers are successful.

5.35) A queueing network consists of three nodes (queueing systems) 1, 2 and 3, each of type M/M/1/∞. The external inputs into the nodes have intensities λ 1 = 4, λ 2 = 8, and λ 3 = 12 [customers an hour]. The respective mean service times at the nodes are 4, 2 and 1 [min]. After having been served by node 1, a customer goes to nodes 2 or 3 with equal probabilities 0.4 or leaves the system with probability 0.2. From node 2, a customer goes to node 3 with probability 0.9 or leaves the system with probability 0.1. From node 3, a customer goes to node 1 with probability 0.2 or leaves the system with probability 0.8. The external inputs and the service times are independent. (1) Check whether this queueing network is a Jackson network. (2) Determine the stationary state probabilities of the network. Solution (1) This is a Jackson network, since the assumptions 1 to 4, pages 303 and 304, are fulfiled. (2) Note that the external inputs λ i are

λ 1 = 1/15, λ 2 = 2/15, λ 3 = 3/15 [min]. The scheme of this network is 2/15

2

0.1 0.9

0.4 1/15 0.2

1

0.4 0.2

3/15

3

By (5.106), the total inputs into the nodes α i satisfy α 1 = 1 + 0.2 α 3

15 α 2 = 2 + 0.4 α 1 15 3 α3 = + 0.4 α 1 + 0.9 α 2 . 15

The solution is α 1 = 0.1541, α 2 = 0.1950, α 3 = 0.4371.

0.8

93

5 CONTINUOUS-TIME MARKOV CHAINS Hence, ρ 1 = α 1 /μ 1 = 0.6164, ρ 2 = α 2 /μ 2 = 0.3900, ρ 3 = α 3 /μ 3 = 0.4371.

Now the stationary state probabilities π x that the system is in state x = (x 1 , x 2 , x 3 ) are given by (5.110) with n = 3 and ϕ i (0) = 1 − ρ i : x

ϕ i (x i ) = (1 − ρ i ) ρ i i , x i = 0, 1, ...; i = 1, 2, 3. 5.36) A closed queueing network consists of 3 nodes. Each one has 2 servers. There are 2 customers in the network. After having been served at a node, a customer goes to one of the others with equal probability. All service times are independent and have an exponential distribution with parameter µ. What is the stationary probability to find both customers at the same node? Solution The transition matrix controling the transitions of customers between the nodes 1, 2, and 3 is ⎛ 0 0.5 0.5 ⎞ ⎜ ⎟ P = ⎜ 0.5 0 0.5 ⎟ . ⎜ ⎟ ⎝ 0.5 0.5 0 ⎠

This is a doubly stochastic matrix. Hence, the stationary distribution of the corresponding discretetime Markov chain is π 1 = π 2 = π 3 = 1/3. Let x = (x 1 , x 2 , x 3 ); x i = 0, 1, 2; be a state of the network, where x i denotes the number of custom- ers being served at node i. The state space Z of the network has 6 elements: (2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1). The corresponding stationary state probabilities π x are given by (5.115). In particular, the probability that both customers are at node 1, i.e. the stationary probability of state x = (2, 0, 0) is 2 2 ⎤ −1 ⎡ π x = h ⎛⎝ 1 ⎞⎠ with h = ⎢ 6 ⎛⎝ 1 ⎞⎠ ⎥ . 3μ ⎣ 3μ ⎦ Hence, π (2,0,0) = 1/6. Obviously, all 6 states have the same probability. Thus, the desired probability is π (2,0,0) + π (0,2,0) + π (0,0,2) = 1/2.

5.37) Depending on demand, a conveyor belt operates at 3 different speed levels 1, 2, and 3. A transition from level i to level j is made with probability p i j with p 12 = 0.8 , p 13 = 0.2 , p 21 = p 23 = 0.5 , p 31 = 0.4 , p 32 = 0.6 .

The respective mean times the conveyor belt operates at levels 1, 2, or 3 between transitions are μ 1 = 45 , μ 2 = 30 , and μ 3 = 12 [hours]. Determine the stationary percentages of time in which the conveyor belt operates at levels 1, 2, and 3 by modeling the situation as a semi-Markov chain. Solution Next the stationary state probabilities π 1 , π 2 , and π 3 of the embedded discrete-time Markov chain with state space Z = {1, 2, 3) are determined. The transition graph of this Markov chain is

2

0.8 0.5

1

0.2 0.4

0.5 0.6

3

SOLUTIONS MANUAL

94 Thus, by (4.9), the π 1 satisfy the system of equations π 1 = 0.5 π 2 + 0.4 π 3 π 2 = 0.8 π 1 + 0.6 π 3 π1 + π2 + π3 = 1 The solution is π 1 = 0.3153, π 2 = 0.4144, 0.2703.

Now the desired percentages p 1 , p 2 , and p 3 are obtained from formula (5.120): p1 =

0.3153 ⋅ 45 [100 %] = 47.51 % 0.3153 ⋅ 45 + 0.4144 ⋅ 30 + 0.2703 ⋅ 12

p2 =

0.4144 ⋅ 30 [100%] = 41.63 % 0.3153 ⋅ 45 + 0.4144 ⋅ 30 + 0.2703 ⋅ 12

p3 =

0.2703 ⋅ 12 [100%] = 10.86 % 0.3153 ⋅ 45 + 0.4144 ⋅ 30 + 0.2703 ⋅ 12

5.38) The mean lifetime of a system is 620 hours. There are two failure types: Repairing the system after a type 1-failure requires 20 hours on average and after a type 2-failure 40 hours on average. 20% of all failures are type 2-failures. There is no dependence between the system lifetime and the subsequent failure type. Upon each repair the system is 'as good as new'. The repaired system immediately resumes its work. This process is continued indefinitely. All life- and repair times are independent. (1) Describe the situation by a semi-Markov chain with 3 states and draw the transition graph of the underlying discrete-time Markov chain. (2) Determine the stationary state probabilities of the system. Solution The following system states are introduced: 0 working, 1 repair after type 2-failure,

2 repair after type 1-failure.

The transition graph of the underlying discrete-time Markov chain {X 0 , X 1 , ...} is 0.8

0.2

1

0 1

2 1

Hence, by (4.9), the stationary state distributions of {X 0 , X 1 , ...} satisfy the following system of linear algebraic equations: π 0 = π 1 + π 2 , π 1 = 0.2 π 0 , π 2 = 0.8 π 0 The solution is π 0 = 0.5, π 1 = 0.1, π 3 = 0.4 . Therefore, by (5.120), the stationary state probabilities p 0 , p 1 , and p 2 of the system are p0 =

0.5 ⋅ 620 = 310 = 0.9627 0.5 ⋅ 620 + 0.1 ⋅ 40 + 0.4 ⋅ 20 322

p1 =

0.1 ⋅ 40 = 4 = 0.0142 0.5 ⋅ 620 + 0.1 ⋅ 40 + 0.4 ⋅ 20 322

p2 =

0.4 ⋅ 20 = 8 = 0.0248 0.5 ⋅ 620 + 0.1 ⋅ 40 + 0.4 ⋅ 20 322

95

5 CONTINUOUS-TIME MARKOV CHAINS

5.39) A system has two different failure types: type 1 and type 2. After a type i-failure the system is said to be in failure state i ; i = 1, 2. The time L i to a type i-failure has an exponential distribution with parameter λ i ; i = 1, 2. Thus, if at time t = 0 a new system starts working, the time to its first failure is Y 0 = min (L 1 , L 2 ). The random variables L 1 and L 2 are assumed to be independent. After a type 1-failure, the system is switched from failure state 1 into failure state 2. The respective mean sojourn times of the system in states 1 and 2 are μ 1 and μ 2 . When in state 2, the system is being renewed. Thus, μ 1 is the mean switching time and μ 2 the mean renewal time. A renewed system immediately starts working, i.e. the system makes a transition from state 2 to state 0 with probability 1. This process continues to infinity. (For motivation, see example 5.7). (1) Describe the system behaviour by a semi-Markov chain and draw the transition graph of the embedded discrete-time Markov chain. (2) Determine the stationary probabilities of the system in the states 0, 1, and 2. Solution (1) The following states of the system are introduced: 0 system operating 1 system in failure state 1 2 system in failure state 2

The corresponding embedded discrete-time Markov chain has the transition graph 0

p01

p

02

1

1

2

1

The transition probabilities p 01 and p 02 are ∞ ∞ p 01 = P(L 1 < L 2 ) = ∫ 0 e −λ 2 t λ 1 e −λ 1 t dt = λ 1 ∫ 0 e −(λ 1 +λ 2 ) t dt.

Hence, p 01 =

λ1 , λ1 + λ2

p 02 = 1 − p 01 =

λ2 . λ1 + λ2

Therefore, the stationary state distribution of the embedded discrete-time Markov chain satisfies π 0 = π 2 , π 1 = p 01 π 0 , π 2 = π 1 + p 02 π 0 . The solution is π0 =

λ1 + λ2 , 3λ 1 + 2λ 2

π1 =

λ1 , 3λ 1 + 2λ 2

λ + λ2 π2 = π0 = 1 . 3λ 1 + 2λ 2

By (1.17), the mean time the system stays in state 0 is ∞ 1 μ 0 = ∫ 0 e −(λ 1 +λ 2 ) t dt = . λ1 + λ2 Thus, by (5.120), the stationary state probabilities A 0 , A 1 , and A 2 of the system are

A0 = A1 =

1 , 1 + λ 1 μ 1 + (λ 1 + λ 2 ) μ 2

λ1μ1 , 1 + λ 1 μ 1 + (λ 1 + λ 2 ) μ 2

A2 =

(stationary availability)

(λ 1 + λ 2 ) μ 2 . 1 + λ 1 μ 1 + (λ 1 + λ 2 ) μ 2

SOLUTIONS MANUAL

96

5.40) Under otherwise the same model assumptions as in example 5.26, determine the stationary probabilities of the states 0, 1, and 2 introduced there on condition that the service time B is a constant μ; i.e. determine the stationary state probabilities of the loss system M/D/1/0 with unreliable server. Solution The transition graph of the embedded discrete-time Markov chain is 0

p01

p02 1

p10

1

2

p

12

The transition probabilities are λ , λ + λ0

p 01 = P(L < L 0 ) =

p 10 = P(L 1 > μ) = e −λ 1 μ ,

p 02 = 1 − p 01 =

λ0 , λ + λ0

p 12 = 1 − p 10 = 1 − e −λ 1 μ .

The stationary state probabilities of the system satisfy π 0 = p 10 π 1 + π 2 π 1 = p 01 π 0 π0 + π1 + π2 = 1 The solution is π0 = π1 = π0 =

λ + λ0

2λ 0 + λ(3 − e −λ 1 μ )

,

λ , 2λ 0 + λ(3 − e −λ 1 μ ) λ 0 + λ (1 − e −λ 1 μ )

2λ 0 + λ(3 − e −λ 1 μ )

.

The mean sojourn times of the system in the states are: 1 , λ + λ0 μ 1 = E(min(L 1 , μ)) = E(L 1 L 1 ≤ μ ⎞⎠ P(L 1 ≤ μ) + μ e −λ 1 μ μ 0 = E(min(L 0 , Y )) = μ

= ∫ 0 t λ 1 e −λ 1 t dt + +μ e −λ 1 μ −λ 1 μ = 1−e . λ1

Note: Example 5.26 assumes that the 'service time' (repair time) of the failed server has the same distribution as the service time of a customer. Hence, in the context of this exercise, μ3 = μ . Now formulas (5.120) yield the stationary state probabilities of the system.

CHAPTER 6

Martingales 6.1) Let Y 0 , Y 1 , ... be a sequence of independent random variables, which are identically distributed as N(0, 1). Is the discrete-time stochastic process {X 0 , X 1 , ...} with n

X n = Σ i=0 Y i2 ; n = 0, 1, ... a martingale? Solution Since E(Y i ) = 0, the mean value of Y 2i is the variance of Y i : E(Y 2i ) = Var(Y i ) = E(Y 2i ) − [E(Y i )] 2 = 1 > 0. Therefore, as shown in example 6.1, {X 0 , X 1 , ...} cannot be a martingale. 6.2) Let Y 0 , Y 1 , ... be a sequence of independent random variables with finite mean values. Show that the discrete-time stochastic process {X 0 , X 1 , ...} generated by n

X n = Σ i=0 (Y i − E(Y i )) is a martingale. Solution Since E(Y i − E(Y i )) = 0, X n is a sum of independent random variables with mean values 0. Therefore, by example 6.1, {X 0 , X 1 , ...} is a martingale. 6.3) Let a discrete-time stochastic process {X 0 , X 1 , ...} be defined by Xn = Y0 ⋅ Y1 ⋅ . . . ⋅ Yn , where the random variables Y i are independent and have a uniform distribution over the interval [0, T]. Under which condition is {X 0 , X 1 , ...} (1) a martingale, (2) a submartingale, (3) a supermartingale? Solution The mean value of Y i is E(Y i ) = T/2 . Hence, E(Y i ) = 1 if and only if T = 2. Thus, by example 6.2, {X 0 , X 1 , ...} is a martingale if T = 2 , a supermartingale if T ≤ 2, and a submartingale if T ≥ 2. 6.4) Let {X 0 , X 1 , ...} be the discrete Black-Scholes model defined by Xn = Y0 ⋅ Y1 ⋅ . . . ⋅ Yn . where Y 0 is an arbitrary positive random variable with finite mean, and Yi = eZi with independent Z i = N(μ, σ 2 ); i = 1, 2, ... Under which condition is {X 0 , X 1 , ...} a martingale?

98

SOLUTIONS MANUAL

Solution According to section 1.2.3.2, Y i has a logarithmic normal distribution with mean value 2 E(Y i ) = e μ+σ /2 .

By example 6.2, {X 0 , X 1 , ...} is a martingale if and only if E(Y i ) = 1. This condition is fulfiled if and only if μ = −σ 2 /2 . 6.5) Starting at value 0, the profit of an investor increases per week by one unit with probability p, p > 1/2, or decreases per week by one unit with probability 1 − p. The weekly increments of the investor's profit are assumed to be independent. Let N be the random number of weeks until the investor's profit reaches for the first time a given positive integer n. By means of Wald's equation, determine E(N ). In the i th week, the increment of the investor's profit is Xi =

1 −1

with probability p with probability 1 − p

By definition of N, X1 + X2 + . . . + XN = n . By taking the mean value on both sides of this equality, making use of Wald's equation (1.27), E(N) ⋅ E(X 1 ) = E(N) ⋅ (2p − 1) = n . Thus, E(N) =

2p − 1 n .

6.6) Let Z 1 , Z 2 , ..., Z n be a sequence of independent, identically as Z distributed random variables with 1 with probability p Z= , 0 < p < 1, −1 with probability 1 − p Y n = Z 1 + Z 2 + . . . + Z n and X n = h(Y n ); n = 1, 2, ...; where, for any real y, h(y) = [(1 − p) /p] y . Prove that {X 1 , X 2 , ...} is a martingale with regard to {Y 1 , Y 2 , ...}. Solution According to definition 6.2, the mean value (6.12) has to be considered (the y i are integers): E(X n+1 − X n Y n = y n , Y n−1 = y n−1 , ..., Y 1 = y 1 ⎞⎠ ⎛ 1−p Y n+1 ⎛ 1−p ⎞ Y n = E ⎜ ⎛⎝ p ⎞⎠ −⎝ p ⎠ Y n = y n , Y n−1 = y n−1 , ..., Y 1 = y 1 ) ⎝ ⎛ 1−p y n +Z n+1 ⎛ 1−p ⎞ y n ⎞ ⎛ 1−p ⎞ y n ⎛ ⎛ 1−p ⎞ Z n+1 ⎞ = E ⎜ ⎛⎝ p ⎞⎠ − ⎝ p ⎠ ⎟ = ⎝ p ⎠ E⎜ ⎝ p ⎠ − 1⎟ ⎝ ⎠ ⎝ ⎠ ⎤ 1−p y n ⎡ 1−p +1 1−p −1 = ⎛⎝ p ⎞⎠ ⎢ ⎛⎝ p ⎞⎠ p + ⎛⎝ p ⎞⎠ (1 − p) − 1 ⎥ = 0. ⎣ ⎦ Hence, {X 1 , X 2 , ...} is a martingale with regard to {Y 1 , Y 2 , ...} .

6 MARTINGALES

99

6.7) Starting at value 0, the fortune of an investor increases per week by $ 200 with probability 3/8, remains constant with probability 3/8 and decreases by $ 200 with probability 2/8. The weekly increments of the investor's fortune are assumed to be independent. The investor stops the 'game' as soon as he has made a total fortune of $ 2000 or a loss of $ 1000, whichever occurs first. By using suitable martingales and applying the optional stopping theorem, determine (1) the probability p 2000 that the investor finishes the 'game' with a profit of $ 2000, (2) the probability p −1000 that the investor finishes the 'game' with a loss of $ 1000, (3) the mean duration E(N ) of the 'game'. Solution The increment of the investor's fortune in the i th week is ⎧ 200 with probability 3/8 ⎪ Z i = ⎨ 0 with probability 3/8 ⎪ ⎩ −200 with probability 2/8 and has mean value E(Z i ) = 25. The martingale {X 1 , X 2 , ...} is defined by n

X n = Σ i=1 (Z i − E(Z i )) = Y n − 25n with Yn = Z1 + Z2 + . . . + Zn . A finite stopping time for {X 1 , X 2 , ...} is N = min{n, Y n = 2000 or Y n = −1000}.

(i)

By the martingale stopping theorem 6.2, E(X 1 ) = 0 = E(Y N ) − 25 E(N) or, equivalently, since p 2000 = P(Y N = 2000) and p −1000 = P(Y N = −1000) : 2000 ⋅ p 2000 − 1000 ⋅ p −1000 = 25E(N).

(ii)

A second equation for the unknowns p 2000 , p −1000 , and E(N) is p 2000 + p −1000 = 1 .

(iii)

A third equation is obtained by means of the exponential martingale analogously to example 6.11: Let Ui = eθ Zi . The mean value of U i is E(U i ) = e 200⋅θ ⋅ 3 + 1 ⋅ 3 + e −200⋅θ ⋅ 2 . 8 8 8 Hence, E(U i ) = 1 if and only if θ = θ 1 with θ 1 = 1 ln 2 . 200 3 Now, let Xn =

n

Π Ui = eθ1 Yn .

i=1

If the martingale {X 1 , X 2 , ...} is stopped at time N defined by (i), then the martingale stopping theorem 6.2 yields

100

SOLUTIONS MANUAL E(U 1 ) = 1 = e 2000⋅θ 1 p 2000 + e −1000⋅θ 1 p −1000 .

From this equation and (iii), p 2000 = 0.8703,

p −1000 = 0.1297.

Now, from (ii), E(N) = 64.4 . 6.8) Let X 0 be uniformly distributed over [0, T], X 1 be uniformly distributed over [0, X 0 ], and, generally, X i+1 be uniformly distributed over [0, X i ], i = 0, 1, ... (1) Prove that the sequence {X 0 , X 1 , ...} is a supermartingale. (2) Show that T E(X k ) = k+1 ; k = 0, 1, ...

(i)

2

Solution (1) For any sequence x 0 , x 1 , ..., x n with T ≥ x 0 ≥ x 1 ≥ ... ≥ x n ≥ 0, E(X n+1 − X n X n = x n , ..., X 1 = x 1 , X 0 = x 0 ⎞⎠ = E(X n+1 − X n X n = x n ) = E(X n+1 X n = x n ) − E(X n X n = x n ) x = n − xn 2 x = − n ≤ 0. 2 Hence, by (6.6), {X 0 , X 1 , ...} is a supermartingale. (2) For k = 0 formula (i) is true, since X 0 has a uniform distribution over [0, T]. To prove (i) by induction, assume that (i) is true for any k with 1 ≤ k. Then, given X 0 = x 0 , the induction assumption implies that x E ⎛⎝ X k X 0 = x 0 ) = 0k 2 ( X 0 = x 0 has assumed the role of T.) Since X 0 is a uniformly distributed over [0, T], E(X k ) = 1

T

Tx 0

1 ⎡ T2 ⎤

T

∫ 2 k dx 0 = T ⎢⎣ 2⋅2 k ⎥⎦ = 2 k+1 .

0

6.9) Let {X 1 , X 2 , ...} be a homogeneous discrete-time Markov chain with the finite state space Z = {0, 1, ..., n} and transition probabilities j n−j ⎛ ⎞ p i j = P(X k+1 = j X k = i) = n ⎛⎝ ni ⎞⎠ ⎛⎝ n n− i ⎞⎠ ; i, j ∈ Z . ⎝ j⎠ Show that {X 1 , X 2 , ...} is a martingale. (In genetics, this martingale is known as the Wright-Fisher model without mutation.)

Solution Since {X 1 , X 2 , ...} is a homogeneous Markov chain, E ⎛⎝ X k+1 − X k X k = i k , ..., X 1 = i 1 , X 0 = i 0 ) = E ⎛⎝ X k+1 − X k X k = i k . Given i ∈ Z , the transition probabilities p i j are generated by a binomial distribution with parameters n and p = i/n ; j = 0, 1, ..., n. If X k = i , then X k+1 has this binomial distribution so that its mean value is

MARTINGALES

101 E(X k+1 ) = n ⋅ ni = i .

Therefore,

E(X k+1 − X k X k = i k ⎞⎠ = E(X k+1 − i k ) = E(X k+1 ) − i k = i k − i k = 0

Hence, {X 1 , X 2 , ...} is a martingale. Remark The function h(i) = ni , i ∈ Z, is concordant to this martingale (definition 6.3), since h(i) =

n ⎛ n ⎞ i j n − i n−j ⎛ ⎞ ⎛ ⎞

Σ

⎝n⎠ ⎝ n ⎠ j=0 ⎝ j ⎠

j i ⋅n =1 n ⋅ E(Y(i)) = n .

6.10) Prove that every stochastic process {X(t), t ∈ T} with a constant trend function and independent increments, which satisfies E( X(t) ) < ∞, t ∈ T, is a martingale. Solution For s < t, assuming E(X(t)) ≡ m and using the notation (6.26), E(X(t) X(y), y ≤ s) = E(X(s) + X(t) − X(s) X(y), y ≤ s) = X(s) + E(X(t) − X(s) X(y), y ≤ s) = X(s) + E(X(t) − X(s)) = X(s) + m − m = X(s) . 6.11) Let L be a stopping time for a stochastic process {X(t), t ∈ T} in discrete or continuous time and z a nonnegative constant. Verify that L ∧ z = min(L, z) is a stopping time for {X(t), t ∈ T}. Solution If L < z , then L ∧ z = L. But L is a stopping time for {X(t), t ∈ T} by assumption. If z < L , then L ∧ z = z. Since a constant does not depend on any of the random variables X(t) , z is obviously a stopping time for {X(t), t ∈ T}. 6.12)* The ruin problem described in section 3.4.1 is modified in the following way: The risk reserve process {R(t), t ≥ 0} is only observed at the end of each year. The total capital of the insurance company at the end of year n is n

R(n) = x + κ n − Σ i=0 M i ; n = 0, 1, 2, ... , where x is the initial capital, κ is the constant premium income per year, and M i is the total claim size the insurance company has to cover in year i, M 0 = 0. The M 1 , M 2 , ... are assumed to be independent and identically distributed as M = N(μ, σ 2 ) with κ > μ > 3σ 2 . Let p(x) be the ruin probability of the company: p(x) = P(there is an n = 1, 2, ... so that R(n) < 0). Show that 2

p(x) ≤ e −2 (κ−μ) x/σ , x ≥ 0.

102

SOLUTIONS MANUAL

Solution From example 1.12, the Laplace transform of M is E ⎛⎝ e −s M ⎞⎠ = e

−μs+ 1 σ 2 s 2 2

.

(i)

By definition of R(n), n

e −sR(n) = e −s x−s κ n+s Σ i=0 M i = e −s x Π i=1 e −s (κ−M i ) . n

Let X n = e −s R(n) . Then, by (i), for n = 1, 2, ... , E(X n ) = e −sx

n

Π E ⎛⎝ e −s (κ−M i ) ⎞⎠

i=1

⎛ −(κ−μ) s+ 1 σ 2 s 2 ⎞ n 2 = e −sx ⎜ e ⎟ . ⎝ ⎠

(ii)

Choose s = s 0 such that −(κ − μ)s + 1 σ 2 s 2 = 0, i.e. s 0 = 2

2(κ−μ) σ2

.

Letting s = s 0 , the factors E ⎛⎝ e −s (κ−M i ) ⎞⎠ in (ii) have mean value 1. Hence, by example 6.2, {X 0 , X 1 , ...} is a martingale with the constant (time-independent) trend function 2

m = E(X 0 ) = e −2(κ−μ) x /σ . A stopping time for the martingale {X 0 , X 1 , ...} is L = min(n, R(n) < 0). By the previous exercise 6.11, L ∧ z with 0 < z < ∞ is a bounded stopping time for this martingale. Hence, from theorem 6.2, 2 e −2(κ−μ) x /σ = E ⎛⎝ e −s 0 R(L∧z) ⎞⎠

= E(e −s 0 R(L∧z) L < z) P(L < z) + E(e −s 0 R(L∧z) L > z) P(L > z) ≥ P(L < z). This inequality is true since R(L ∧ z) < 0 for L < z so that E(e −s 0 R(L∧z) L < z) ≥ 1. Since (iii) holds for all z = 0, 1, 2, ... and lim P(L < z) = p(x), the assertion is proved. z→∞

(iii)

CHAPTER 7

Brownian Motion Note In all exercises, {B(t), t ≥ 0} is the Brownian motion with parameter Var(B(1)) = σ 2 , and {S(t), t ≥ 0} is the standard Brownian motion (σ = 1). 7.1) Verify that the probability density f t (x) of B(t), 1 e −x 2 /(2 σ 2 t) , 2πt σ satisfies the thermal conduction equation f t (x) =

t > 0,

∂ f t (x) ∂ 2 f t (x) =c . ∂t ∂ x2

(i)

Solution ∂ f t (x) =− ∂t

2 2 ⋅ e −x /(2 σ t) +

1 2σ 2π t 3

2 2 2 1 ⋅ x ⋅ e −x /(2 σ t) 2 2 2πt σ 2 σ t

2 2 ⎡ 2 ⎤ e −x /(2 σ t) ⎢ x − 1 ⎥ 2 ⎣σ t ⎦ 2σ 2πt 3

1

=

2 2 ∂ f t (x) x =− e −x /(2 σ t) . ∂x σ 3 2π t 3

∂ 2 f t (x) ∂ x2

=−

1 σ 3 2π t 3 =

2 2 e −x /(2 σ t) +

2 2 ⋅ x ⋅ e −x /(2 σ t) 2 σ 3 2π t 3 σ t

x

2 2 ⎡ 2 ⎤ e −x /(2 σ t) ⎢ x − 1 ⎥ . ⎣ σ2t ⎦ σ 3 2π t 3

1

Thus, f t (x) satisfies (i) with c = σ 2 /2. 7.2) Determine the conditional probability density of B(t) given B(s) = y, 0 ≤ s < t. Solution The condition B(s) = y defines a shifted Brownian motion {B y (t), t > s} starting at time t = s with the initial value B y (s) = y. Hence, it can be written in the form B y (t) = y + B(t − s), t ≥ s. Thus, B y (t), t > s, has trend function m y (t) ≡ y and density f t (x B(s) = y) =

1 e 2π(t − s) σ



(x−y) 2 2 σ 2 ( t−s)

,

t > s.

104

SOLUTIONS MANUAL

7.3)* Prove that the stochastic process {B(t), 0 ≤ t ≤ 1} given by B(t) = B(t) − t B(1) is the Brownian bridge. Solution The stochastic process {B(t), 0 ≤ t ≤ 1} as an additve superposition of two Gaussian processes is a Gaussian process itself. Hence, to prove that {B(t), 0 ≤ t ≤ 1} is the Brownian bridge, the following characteristic three properties of the Brownian bridge (example 7.1) have to be verified: 1) B(0) = 0, 2) E(B(t)) ≡ 0, 3) Cov(B(s), B(t)) = σ 2 s (1 − t) , 0 ≤ s < t. The process {B(t), 0 ≤ t ≤ 1} has obviously properties 1) and 2). To verify property 3), the covariance Cov(B(s), B(t)) needs to be determined: In view of property 2), assuming 0 ≤ s < t , Cov(B(s), B(t)) = E([B(s) − s B(1)] [B(t) − t B(1)]) = E(B(s) B(t)) − t E(B(s) B(1)) − s E(B(t) B(1)) + st E((B(1)) 2 ) = σ2s − σ2s t − σ2s t + σ2s t = σ 2 s (1 − t) . 7.4) Let {B(t), 0 ≤ t ≤ 1} be the Brownian bridge with σ = 1. Prove that the stochastic process {S(t), t ≥ 0} defined by S(t) = (t + 1) B ⎛⎝ t ⎞⎠ t+1 is the standard Brownian motion. Solution Since the Brownian bridge is a Gaussian process, the process {S(t), t ≥ 0} is a Gaussian process as well. Hence, it suffices to show that {S(t), t ≥ 0} has properties 1) S(0) = 0, 2) E(S(t)) ≡ 0, 3) Cov(S(s), S(t)) = s, 0 ≤ s < t . Property 1) follows immediately from B(0) = 0, and property 2) follows from E(B(t)) ≡ 0. To verify property 3), the covariance function of {S(t), t ≥ 0} needs to be determined: In view of property 2), making use of the covariance function of the Brownian bridge, for 0 ≤ s < t, Cov(S(s), S(t)) = (s + 1)(t + 1) E ⎛⎝ ⎡ B ⎛⎝ s ⎞⎠ B ⎛⎝ t ⎞⎠ ⎤ ⎞⎠ t+1 ⎦ ⎣ s+1 = (s + 1)(t + 1) s ⎡ 1 − t ⎤ s+1 ⎣ t+1 ⎦ = s [t + 1 − t] = s. Note: If 0 ≤ s < t, then s /(s + 1) < t /(t + 1) .


7.5) Determine the probability density of B(s) + B(t).

Solution As a sum of normally distributed random variables, B(s) + B(t) also has a normal distribution. Its parameters are E(B(s) + B(t)) = 0 + 0 = 0 and, by (1.124), assuming 0 ≤ s < t,

Var(B(s) + B(t)) = Var(B(s)) + 2 Cov(B(s), B(t)) + Var(B(t)) = σ²s + 2σ²s + σ²t = σ²(t + 3s).

Hence, the probability density of B(s) + B(t) is

f_{B(s)+B(t)}(x) = (1/(√(2π(t + 3s)) σ)) e^{−x²/(2σ²(t + 3s))},   0 ≤ s < t, −∞ < x < ∞.
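The variance σ²(t + 3s) is easy to confirm by simulation. The following sketch is not part of the original solution; the values σ = 1.5, s = 0.4, t = 1.1 are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(2)
sigma, s, t, n = 1.5, 0.4, 1.1, 1_000_000            # assumed values
Bs = rng.normal(0.0, sigma * np.sqrt(s), n)                  # B(s)
Bt = Bs + rng.normal(0.0, sigma * np.sqrt(t - s), n)         # B(t) = B(s) + independent increment
print(np.var(Bs + Bt), sigma**2 * (t + 3 * s))               # both close to 5.175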

7.6) Let n be any positive integer. Determine mean value and variance of X(n) = B(1) + B(2) + ⋯ + B(n).

Hint Make use of formula (1.100).

Solution Since E(B(t)) ≡ 0, E(X(n)) = 0. For applying formula (1.100), the covariances Cov(B(i), B(j)), i < j, are needed:

Cov(B(i), B(j)) = σ²i,   i, j = 1, 2, ..., n, i < j.

Now, (1.100) yields

Var(X(n)) = Σ_{i=1}^{n} Var(B(i)) + 2 Σ_{i,j=1, i<j}^{n} Cov(B(i), B(j))
= σ² Σ_{i=1}^{n} i + 2σ² Σ_{i,j=1, i<j}^{n} i.

Since Σ_{i,j=1, i<j}^{n} i = Σ_{i=1}^{n−1} i(n − i) = (n − 1)n(n + 1)/6, this gives

Var(X(n)) = σ² n(n + 1)/2 + σ² (n − 1)n(n + 1)/3 = σ² n(n + 1)(2n + 1)/6.
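The closed form σ²n(n + 1)(2n + 1)/6 can be checked by simulation. The sketch below is not part of the original solution; σ = 0.8 and n = 6 are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(3)
sigma, n, n_paths = 0.8, 6, 500_000                  # assumed values
# B(1), ..., B(n) as cumulative sums of i.i.d. N(0, sigma^2) increments
B = np.cumsum(rng.normal(0.0, sigma, size=(n_paths, n)), axis=1)
X = B.sum(axis=1)                                    # X(n) = B(1) + ... + B(n)
print(X.mean(), X.var())                             # close to 0 and to 58.24
print(sigma**2 * n * (n + 1) * (2 * n + 1) / 6)      # 58.24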

7.12) Show that for any c > 0 and d > 0,

P(B(t) ≤ ct + d for all t ≥ 0) = 1 − e^{−2cd/σ²}.      (i)

Solution The left-hand side of equation (i) can be rewritten as

P( max_{t ≥ 0} {−ct + B(t)} ≤ d ).

Thus, it has to be verified that the distribution function of the maximum of the Brownian motion with drift with negative drift parameter μ = −c is given by the right-hand side of (i). Apart from the notation, equation (i) is equivalent to the second line of (7.44).

7.13) (1) What is the mean value of the first passage time of the reflected Brownian motion {|B(t)|, t ≥ 0} with regard to a positive level x?
(2) Determine the distribution function of |B(t)|.

Solution (1) |B(t)| = x is true if and only if either B(t) = x or B(t) = −x. Thus, the mean value of the following random variable has to be determined:

L(−x, +x) = min{t : B(t) = x or B(t) = −x}.


But this has already been done in example 7.4. Letting x = a = b, formula (7.28) yields

E(L(−x, +x)) = x²/σ².

(2) Let F_{|B(t)|}(x) be the distribution function of |B(t)|. Then, since B(t) = N(0, σ²t),

F_{|B(t)|}(x) = P(|B(t)| ≤ x) = P(−x ≤ B(t) ≤ +x)
= Φ(x/(σ√t)) − Φ(−x/(σ√t)) = 2Φ(x/(σ√t)) − 1,   x ≥ 0, t > 0.
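Both answers of exercise 7.13 can be checked by simulation. The sketch below is not part of the original solution; σ = 1, x = 1, t = 2, the time step and the simulation horizon are arbitrary assumptions, and the discrete time grid biases the estimated passage time slightly upward.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
sigma, x, t = 1.0, 1.0, 2.0                          # assumed values
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal distribution function

# distribution function of |B(t)|
B_t = rng.normal(0.0, sigma * sqrt(t), 1_000_000)
print(np.mean(np.abs(B_t) <= x), 2 * Phi(x / (sigma * sqrt(t))) - 1)

# mean first passage time of |B| with regard to level x: E(L) = x^2 / sigma^2
dt, n_paths, n_steps = 1e-3, 10_000, 20_000          # horizon n_steps*dt = 20 time units
B = np.zeros(n_paths)
hit_time = np.full(n_paths, np.nan)
for k in range(1, n_steps + 1):
    B += rng.normal(0.0, sigma * sqrt(dt), n_paths)
    newly_hit = np.isnan(hit_time) & (np.abs(B) >= x)
    hit_time[newly_hit] = k * dt
print(np.nanmean(hit_time), x**2 / sigma**2)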

7.14) At time t = 0 a speculator acquires an American call option with infinite expiration time and strike price x_s. The price X(t) of the underlying risky security at time t is given by X(t) = x₀ e^{B(t)}. The speculator makes up his mind to exercise this option at that time point when the price of the risky security hits a level x with x > x_s ≥ x₀ for the first time.
(1) What is the speculator's mean discounted payoff G_α(x) under a constant discount rate α?
(2) What is the speculator's payoff G(x) without discounting?
In both cases, the cost of acquiring the option is not included in the speculator's payoff.

Solution (1) X(t) hits level x if and only if B(t) hits x_h = ln(x/x₀). Therefore, the corresponding hitting time L(x_h) has distribution function (7.19) with x = x_h, and the random discounted payoff of the speculator at time L(x_h) is

(x − x_s) e^{−α L(x_h)}.

From (7.43) with μ = 0 and x = x_h, the desired mean discounted payoff of the speculator is

G_α(x) = E((x − x_s) e^{−α L(x_h)}) = (x − x_s)(x₀/x)^γ   with γ = √(2α)/σ.

(2) The speculator's payoff without discounting is G(x) = x − x_s. This is because X(t) will sooner or later hit any level x, x > x₀, with probability 1.

7.15) The price X(t) of a risky security at time t is

X(t) = x₀ e^{μt + B(t) + a|B(t)|},   t ≥ 0, 0 < a ≤ 1,

with a negative drift parameter μ. At time t = 0 a speculator acquires an American call option with strike price x_s on this risky security. The option has no finite expiration date. The speculator makes up his mind to exercise this option at that time point when the price of the risky security hits a level x with x > x_s ≥ x₀ for the first time. Otherwise, i.e. if the price of the risky security never reaches level x, the speculator will never exercise. Determine the level x = x* at which the speculator should schedule to exercise this option to achieve
(1) maximal mean payoff without discounting, and
(2) maximal mean discounted payoff (constant discount rate α).


Solution The stochastic price process {X(t), t ≥ 0} can hit a level x > x_s only at a time point t with B(t) > 0. Therefore, this process hits level x for the first time at the same time point as the geometric Brownian motion with drift {X̃(t), t ≥ 0} given by

X̃(t) = x₀ e^{D(t)}   with D(t) = μt + (1 + a)B(t),

where {D(t), t ≥ 0} is a Brownian motion with drift with parameters μ and (1 + a)²σ². Hence, the questions can immediately be answered by making use of the results derived in example 7.7.

(1) x* = (λ/(λ − 1)) x_s with λ = 2|μ|/((1 + a)²σ²). This result is valid for λ > 1.

(2) x* = (γ/(γ − 1)) x_s with γ = (√(2(1 + a)²σ²α + μ²) − μ)/((1 + a)²σ²). This result is valid for γ > 1.
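Both optimal levels arise from maximizing a function of the form g(x) = (x − x_s)(x₀/x)^κ over x > x_s (κ = λ in the undiscounted case, κ = γ in the discounted case). The following grid search is not part of the original solution; the values x₀ = 1, x_s = 2, κ = 3 are arbitrary assumptions. It confirms the maximizer x* = κ x_s/(κ − 1).

import numpy as np

x0, xs, kappa = 1.0, 2.0, 3.0                        # assumed illustrative values
x = np.linspace(xs + 1e-6, 20.0, 2_000_000)
g = (x - xs) * (x0 / x)**kappa                       # mean payoff when scheduling exercise at level x
print(x[np.argmax(g)], kappa / (kappa - 1) * xs)     # both approximately 3.0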

7.16) The value of a share at time t is X(t) = x₀ + D(t), where x₀ > 0 and {D(t), t ≥ 0} is a Brownian motion with positive drift parameter μ and variance parameter σ². At time point t = 0 a speculator acquires an American call option on this share with finite expiry date τ. Assume that x₀ + μt > 3σ√t, 0 ≤ t ≤ τ.
(1) Why does this assumption make sense?
(2) When should the speculator exercise to make maximal mean undiscounted profit?

Solution (1) The assumption makes sure that it is rather unlikely that the process {X(t), t ≥ 0} ever becomes negative in [0, τ].
(2) The speculator should exercise at time t = τ, since the trend function of {X(t), t ≥ 0} is increasing.

7.17) At time t = 0, a speculator acquires a European call option with strike price x_s and finite expiration time τ. Thus, the option can only be exercised at time τ at price x_s, independently of its market value at time τ. The price X(t) of the underlying risky security at time t is X(t) = x₀ + D(t), where {D(t), t ≥ 0} is the Brownian motion with positive drift parameter μ and volatility σ². If X(τ) > x_s, the speculator will exercise the option. Otherwise, he will not. As in exercise 7.16, assume that x₀ + μt > 3σ√t, 0 ≤ t ≤ τ.
(1) What will be the mean undiscounted payoff of the speculator (cost of acquiring the option not included)?
(2) Under otherwise the same assumptions, what is the speculator's mean undiscounted payoff if X(t) = x₀ + B(t) and x₀ = x_s?

Solution (1) The speculator's random undiscounted payoff is max{X(τ) − x_s, 0}. Since X(t) = N(x₀ + μt, σ²t), his mean payoff G = E(max{X(τ) − x_s, 0}) is


G = (1/(√(2πτ) σ)) ∫_{x_s}^{∞} (x − x_s) e^{−(x − x₀ − μτ)²/(2σ²τ)} dx.

By substituting u = x − x_s,

G = (1/(√(2πτ) σ)) ∫_{0}^{∞} u e^{−(u + x_s − x₀ − μτ)²/(2σ²τ)} du.

By substituting y = (u + x_s − x₀ − μτ)/(σ√τ),

G = (1/√(2π)) ∫_{a}^{∞} [σ√τ y − x_s + x₀ + μτ] e^{−y²/2} dy
= (σ√τ/√(2π)) ∫_{a}^{∞} y e^{−y²/2} dy − (x_s − x₀ − μτ) (1/√(2π)) ∫_{a}^{∞} e^{−y²/2} dy
= (σ√τ/√(2π)) e^{−a²/2} − a σ√τ [1 − Φ(a)],

where a = (x_s − x₀ − μτ)/(σ√τ). In terms of c = −a = (x₀ + μτ − x_s)/(σ√τ),

G = σ√(τ/(2π)) e^{−c²/2} + c σ√τ Φ(c).

(2) In this special case, c = 0. Therefore,

G = σ√(τ/(2π)).
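Since X(τ) has the normal distribution N(x₀ + μτ, σ²τ), the closed form for G is easy to confirm by Monte Carlo simulation. The sketch below is not part of the original solution; the values x₀ = 10, x_s = 11, μ = 1.5, σ = 2, τ = 4 are arbitrary assumptions (they satisfy x₀ + μt > 3σ√t on [0, τ]).

import numpy as np
from math import erf, exp, pi, sqrt

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal distribution function

rng = np.random.default_rng(5)
x0, xs, mu, sigma, tau = 10.0, 11.0, 1.5, 2.0, 4.0   # assumed values
X_tau = x0 + mu * tau + sigma * sqrt(tau) * rng.standard_normal(2_000_000)
mc = np.maximum(X_tau - xs, 0.0).mean()              # Monte Carlo estimate of E(max{X(tau) - x_s, 0})

c = (x0 + mu * tau - xs) / (sigma * sqrt(tau))
closed_form = sigma * sqrt(tau / (2 * pi)) * exp(-c**2 / 2) + c * sigma * sqrt(tau) * Phi(c)
print(mc, closed_form)                               # both approximately 5.20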

7.18) Let X(t) be the cumulative repair cost of a system arising in the interval (0, t] (excluding replacement costs) and R(t) = X(t)/t the corresponding cumulative repair cost rate. Assume

R(t) = r₀ B²(t),   r₀ > 0.

The system is replaced by an equivalent new one as soon as R(t) reaches level r.
(1) Given a constant replacement cost c, determine a level r = r* which is optimal with respect to the long-run total maintenance cost per unit time K(r). (Make sure that an optimal level r* exists.)
(2) Compare K(r*) to the minimal long-run total maintenance cost per unit time K(τ*) which arises by applying the corresponding economic lifetime τ*.

Solution (1) The cumulative repair cost rate R(t) reaches level r at a time point t satisfying B²(t) = r/r₀. This relationship is valid if and only if B(t) = −r/r₀ or B(t) = +r/r₀.


Hence, the mean value of the first passage time of the process {R(t), t ≥ 0} with regard to level r is given by (7.28) with a = b = r/r₀. Therefore, by formula (7.65), the corresponding maintenance cost rate is

K(r) = r + c (σ r₀/r)².

The optimal level r* and the corresponding minimal value of the maintenance cost rate are

r* = [2cσ²r₀²]^{1/3},   K(r*) = (3/2) (2cσ²r₀²)^{1/3}.

(2) The trend function of the total repair cost process {X(t), t ≥ 0} is m(t) = r₀σ²t², t ≥ 0. Hence, when applying a constant replacement interval τ, the total maintenance cost rate is

K(τ) = (c + r₀σ²τ²)/τ.

Thus, the economic lifetime and the corresponding minimal total maintenance cost rate are

τ* = √(c/(r₀σ²)),   K(τ*) = 2σ√(r₀ c).

There holds K(r*) < K(τ*) if 0.712 r₀ < σ²c.
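The two minimizations and the comparison just stated can be checked numerically. The sketch below is not part of the original solution; the values c = 2, σ = 1, r₀ = 1.2 are arbitrary assumptions (they satisfy 0.712 r₀ < σ²c).

import numpy as np

c, sigma, r0 = 2.0, 1.0, 1.2                          # assumed cost and model parameters

r = np.linspace(0.01, 20.0, 2_000_000)
K_r = r + c * (sigma * r0 / r)**2                     # cost rate under the repair-cost-rate limit r
tau = np.linspace(0.01, 20.0, 2_000_000)
K_tau = (c + r0 * sigma**2 * tau**2) / tau            # cost rate under a constant replacement interval

print(r[np.argmin(K_r)],    (2 * c * sigma**2 * r0**2)**(1/3))           # r*
print(K_r.min(),            1.5 * (2 * c * sigma**2 * r0**2)**(1/3))     # K(r*)
print(tau[np.argmin(K_tau)], np.sqrt(c / (r0 * sigma**2)))               # tau*
print(K_tau.min(),           2 * sigma * np.sqrt(r0 * c))                # K(tau*)
print(K_r.min() < K_tau.min(), 0.712 * r0 < sigma**2 * c)                # both True here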

7.19)* Let {S(t), t ≥ 0} be the standard Brownian motion and

X(t) = ∫₀ᵗ S(s) ds.

(1) Determine the covariance between S(t) and X(t).
(2) Verify that

E(X(t) | S(t) = x) = (t/2) x   and   Var(X(t) | S(t) = x) = t³/12.

Hint Make use of the fact that the random vector (S(t), X(t)) has a two-dimensional normal distribution.

Solution (1) By changing the order of integration and since E(S(t)) = E(X(t)) ≡ 0,

Cov(S(t), X(t)) = E(S(t) X(t)) = E( S(t) ∫₀ᵗ S(x) dx )
= E( ∫₀ᵗ S(t) S(x) dx ) = ∫₀ᵗ E(S(t) S(x)) dx
= ∫₀ᵗ min(t, x) dx = ∫₀ᵗ x dx.

Hence,

Cov(S(t), X(t)) = t²/2.

(2) By (1.66), the density of a 2-dimensional normal distribution is

f(x, y) = (1/(2π σ_x σ_y √(1 − ρ²))) exp{ −(1/(2(1 − ρ²))) [ (x − μ_x)²/σ_x² − 2ρ (x − μ_x)(y − μ_y)/(σ_x σ_y) + (y − μ_y)²/σ_y² ] }.

Since in our case

X = S(t) and Y = X(t) = ∫₀ᵗ S(x) dx,

the parameters of the joint distribution of (S(t), X(t)) are

μ_x = μ_y = 0,   σ_x² = t,   σ_y² = t³/3,   ρ = (t²/2)/(√t √(t³/3)) = (1/2)√3.

Hence, the joint density of (S(t), X(t)) is

f(x, y) = (√3/(π t²)) exp{ −2 ( x²/t − 3xy/t² + 3y²/t³ ) };   t > 0, −∞ < x, y < +∞.

By formulas (1.59) and (7.9), the conditional density of X(t) given S(t) = x is

f(y | x) = f(x, y)/f_{S(t)}(x) = (√3/(π t²)) exp{ −2 ( x²/t − 3xy/t² + 3y²/t³ ) } / ( (1/√(2πt)) e^{−x²/(2t)} )
= (√12/(√(2π) t^{3/2})) exp{ −( 3x²/(2t) − 6xy/t² + 6y²/t³ ) }
= (√12/(√(2π) t^{3/2})) exp{ −(6/t³) ( (1/4)t²x² − t xy + y² ) }
= (1/√(2π t³/12)) exp{ −(y − (1/2)xt)²/(2 (t³/12)) },   t > 0, −∞ < y < +∞.

This is a normal distribution with the parameters

E(X(t) | S(t) = x) = (t/2) x   and   Var(X(t) | S(t) = x) = t³/12.

This completes the proof of (2).
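Both parts can be confirmed by simulating paths of the standard Brownian motion and approximating the integral X(t) by a Riemann sum. The sketch below is not part of the original solution; t = 2, the grid resolution and the sample size are arbitrary assumptions, and the conditional mean and variance are read off from the linear regression of X(t) on S(t), which is exact for jointly Gaussian variables.

import numpy as np

rng = np.random.default_rng(6)
t, n_steps, n_paths = 2.0, 500, 40_000               # assumed simulation parameters
dt = t / n_steps

S_path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
S_t = S_path[:, -1]                                  # S(t)
X_t = S_path.sum(axis=1) * dt                        # X(t) as a Riemann sum

print(np.mean(S_t * X_t), t**2 / 2)                  # part (1): covariance t^2/2 = 2
slope = np.mean(S_t * X_t) / np.mean(S_t**2)         # regression coefficient of X(t) on S(t)
resid = X_t - slope * S_t
print(slope, t / 2)                                  # part (2): E(X(t) | S(t) = x) = (t/2) x
print(resid.var(), t**3 / 12)                        # part (2): Var(X(t) | S(t) = x) = t^3/12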


7.20) Show that for any constant α,

E(e^{α X(t)}) = e^{α² t³/6},

where X(t) is defined as in exercise 7.19.

Solution By example 1.12, the Laplace transform of Y = N(μ, σ²) is

E(e^{−sY}) = e^{−μs + (1/2) s²σ²}.

Since, by the previous exercise 7.19, E(X(t)) = 0 and Var(X(t)) = t³/3,

E(e^{−s X(t)}) = e^{(1/2) s² t³/3} = e^{s² t³/6}.

Replacing s with any real-valued −α yields the desired result.
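This moment formula can also be checked by simulation, again approximating X(t) by a Riemann sum over a simulated path of the standard Brownian motion. The sketch below is not part of the original solution; α = 0.6 and t = 1.5 are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(7)
alpha, t = 0.6, 1.5                                  # assumed values
n_steps, n_paths = 500, 50_000
dt = t / n_steps

S = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
X = S.sum(axis=1) * dt                               # X(t) as a Riemann sum
print(np.mean(np.exp(alpha * X)), np.exp(alpha**2 * t**3 / 6))   # both approximately 1.22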