English · 267 pages · 2016
Solutions Manual for Modeling and Analysis of Stochastic Systems Third Edition
Please send all corrections to the author at the email address below.
V. G. Kulkarni, Department of Operations Research, University of North Carolina, Chapel Hill, NC 27599-3180. email: [email protected], home page: http://www.unc.edu/~kulkarni
CHAPTER 1
Introduction
CHAPTER 2
DTMCs: Transient Behavior
Modeling Exercises

2.1. The state space of {Xn, n ≥ 0} is S = {0, 1, 2, 3, ...}. Suppose Xn = i. Then the age of the lightbulb in place at time n is i. If this lightbulb does not fail at time n + 1, then Xn+1 = i + 1. If it fails at time n + 1, then a new lightbulb of age 0 is put in at time n + 1, making Xn+1 = 0. Let Z be the lifetime of a lightbulb, with pmf P(Z = j) = p_j. We have

P(Xn+1 = 0 | Xn = i, Xn-1, ..., X0) = P(lightbulb of age i fails at age i + 1)
                                    = P(Z = i + 1 | Z > i)
                                    = p_{i+1} / Σ_{j=i+1}^∞ p_j.

Similarly,

P(Xn+1 = i + 1 | Xn = i, Xn-1, ..., X0) = P(Z > i + 1 | Z > i)
                                        = Σ_{j=i+2}^∞ p_j / Σ_{j=i+1}^∞ p_j.

It follows that {Xn, n ≥ 0} is a success-runs DTMC with

p_i = Σ_{j=i+2}^∞ p_j / Σ_{j=i+1}^∞ p_j,   and   q_i = p_{i+1} / Σ_{j=i+1}^∞ p_j,

for i ∈ S.

2.2. The state space of {Yn, n ≥ 0} is S = {1, 2, 3, ...}. Suppose Yn = i > 1; then the remaining life decreases by one at time n + 1, so Yn+1 = i - 1. If Yn = 1, a new lightbulb is put in place at time n + 1, so Yn+1 is the lifetime of the new lightbulb. Let Z be the lifetime of a lightbulb. We have

P(Yn+1 = i - 1 | Yn = i, Yn-1, ..., Y0) = 1,   i ≥ 2,
and

P(Yn+1 = k | Yn = 1, Yn-1, ..., Y0) = P(Z = k) = p_k,   k ≥ 1.

2.3. Initially the urn has w + b balls. At each stage the number of balls in the urn increases by k - 1. Hence after n stages the urn has w + b + n(k - 1) balls, Xn of which are black and the rest white. Hence the probability of drawing a black ball on the (n + 1)st draw is

Xn / (w + b + n(k - 1)).

If the (n + 1)st draw is black, Xn+1 = Xn + k - 1, and if it is white, Xn+1 = Xn. Hence

P(Xn+1 = i | Xn = i) = 1 - i / (w + b + n(k - 1)),

and

P(Xn+1 = i + k - 1 | Xn = i) = i / (w + b + n(k - 1)).

Thus {Xn, n ≥ 0} is a DTMC, but it is not time homogeneous.

2.4. {Xn, n ≥ 0} is a DTMC with state space {0 = dead, 1 = alive} because the movements of the cat and the mouse are independent of the past while the mouse is alive. Once the mouse is dead, it stays dead. If the mouse is still alive at time n, it dies at time n + 1 if the cat and the mouse choose the same node to visit at time n + 1. There are N - 2 ways for this to happen, and in total there are (N - 1)^2 possible ways for the cat and the mouse to choose their new nodes. Hence

P(Xn+1 = 0 | Xn = 1) = (N - 2)/(N - 1)^2.

Hence the transition probability matrix is given by

        [ 1                    0                     ]
    P = [ (N-2)/(N-1)^2        1 - (N-2)/(N-1)^2     ].
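The success-runs probabilities derived in Exercise 2.1 can be checked numerically. Here is a Python sketch using a hypothetical lifetime pmf (the values p_k below are assumptions for illustration, not taken from the problem):

```python
# Sketch for Modeling Exercise 2.1: age-process transition probabilities
# for an assumed lifetime pmf p_k = P(Z = k), k = 1..4.
p = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # assumed lifetime distribution

def tail(i):
    """P(Z > i) = sum of p_j for j >= i + 1."""
    return sum(v for k, v in p.items() if k >= i + 1)

def q_fail(i):
    """q_i = P(bulb of age i fails next period) = p_{i+1} / P(Z > i)."""
    return p.get(i + 1, 0.0) / tail(i)

def p_survive(i):
    """p_i = P(bulb of age i survives) = P(Z > i+1) / P(Z > i)."""
    return tail(i + 1) / tail(i)
```

For each age i the two probabilities sum to one, and a bulb at the maximum possible age fails with certainty.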
2.5. Let Xn = 1 if the weather is sunny on day n, and 2 if it is rainy on day n. Let Yn = (Xn-1, Xn) be the vector of the weather on days n - 1 and n, for n ≥ 1. Now suppose Yn = (1, 1), i.e., the weather was sunny on days n - 1 and n. Then it will be sunny on day n + 1 with probability .8, and the new weather vector will be Yn+1 = (1, 1). On the other hand it will rain on day n + 1 with probability .2, and the weather vector will be Yn+1 = (1, 2). These probabilities do not depend on the weather up to time n - 2, i.e., they are independent of Y1, Y2, ..., Yn-2. Similar analysis in the other states of Yn shows that {Yn, n ≥ 1} is a DTMC on state space {(1,1), (1,2), (2,1), (2,2)} with the following transition probability matrix:

            (1,1)  (1,2)  (2,1)  (2,2)
    (1,1) [  .80    .20     0      0   ]
    (1,2) [   0      0     .50    .50  ]
    (2,1) [  .75    .25     0      0   ]
    (2,2) [   0      0     .40    .60  ].
2.6. The state space is S = {0, 1, ..., K}. Let

α_i = (K choose i) p^i (1 - p)^{K-i},   0 ≤ i ≤ K.

Thus, when a functioning system fails, i components fail simultaneously with probability α_i, i ≥ 1. Then {Xn, n ≥ 0} is a DTMC with transition probabilities:

p_{0,i} = α_i,  0 ≤ i ≤ K,      p_{i,i-1} = 1,  1 ≤ i ≤ K.

2.7. Suppose Xn = i. Then Xn+1 = i + 1 if the first coin shows heads while the second shows tails, which happens with probability p1(1 - p2), independent of the past. Similarly, Xn+1 = i - 1 if the first coin shows tails and the second coin shows heads, which happens with probability p2(1 - p1), independent of the past. If both coins show heads, or both show tails, Xn+1 = i. Hence {Xn, n ≥ 0} is a space-homogeneous random walk on S = {..., -2, -1, 0, 1, 2, ...} (see Example 2.5) with

p_i = p1(1 - p2),   q_i = p2(1 - p1),   r_i = 1 - p_i - q_i.
2.8. We define Xn, the state of the weather system on the nth day, as the signed length of the current sunny or rainy spell. The state is k (k = 1, 2, 3, ...) if the weather is sunny and this is the kth day of the current sunny spell. The state is -k (k = 1, 2, 3, ...) if the weather is rainy and this is the kth day of the current rainy spell. Thus the state space is S = {±1, ±2, ±3, ...}. Now suppose Xn = k (k = 1, 2, 3, ...). If the sunny spell continues for one more day, then Xn+1 = k + 1; or else a rainy spell starts, and Xn+1 = -1. Similarly, suppose Xn = -k. If the rainy spell continues for one more day, then Xn+1 = -(k + 1); or else a sunny spell starts, and Xn+1 = 1. The Markov property follows from the fact that the lengths of the sunny and rainy spells are independent. Hence, for k = 1, 2, 3, ...,

P(Xn+1 = k + 1 | Xn = k) = p_k,
P(Xn+1 = -1 | Xn = k) = 1 - p_k,
P(Xn+1 = -(k + 1) | Xn = -k) = q_k,
P(Xn+1 = 1 | Xn = -k) = 1 - q_k.
2.9. Yn is the outcome of the nth toss of a six-sided fair die, Sn = Y1 + ... + Yn, and Xn = Sn (mod 7). Hence we see that Xn+1 = Xn + Yn+1 (mod 7).
Since the Yn's are iid, the above equation implies that {Xn, n ≥ 0} is a DTMC with state space S = {0, 1, 2, 3, 4, 5, 6}. Now, for i, j ∈ S, we have

P(Xn+1 = j | Xn = i) = P(Xn + Yn+1 (mod 7) = j | Xn = i)
                     = P(i + Yn+1 (mod 7) = j)
                     = 0 if i = j,   1/6 if i ≠ j,

since Yn+1 is uniform over {1, ..., 6} and i + Yn+1 (mod 7) never equals i. Thus the transition probability matrix is given by

    P = [  0   1/6  1/6  1/6  1/6  1/6  1/6 ]
        [ 1/6   0   1/6  1/6  1/6  1/6  1/6 ]
        [ 1/6  1/6   0   1/6  1/6  1/6  1/6 ]
        [ 1/6  1/6  1/6   0   1/6  1/6  1/6 ]
        [ 1/6  1/6  1/6  1/6   0   1/6  1/6 ]
        [ 1/6  1/6  1/6  1/6  1/6   0   1/6 ]
        [ 1/6  1/6  1/6  1/6  1/6  1/6   0  ].
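As a quick numerical check of Exercise 2.9, the matrix above is doubly stochastic with zero diagonal, so powers of it converge rapidly to the uniform distribution on the seven states. A Python sketch:

```python
import numpy as np

# Sketch for Modeling Exercise 2.9: P(i,j) = 1/6 for j != i and 0 for j = i,
# since i + Y (mod 7) never equals i when Y is uniform on {1,...,6}.
P = np.array([[0.0 if i == j else 1/6 for j in range(7)] for i in range(7)])
```

The nonunit eigenvalues all equal -1/6, so P^n approaches the matrix with every entry 1/7 geometrically fast.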
2.10. The state space of {Xn, n ≥ 0} is S = {0, 1, ..., r - 1}. We have Xn+1 = Xn + Yn+1 (mod r), which shows that {Xn, n ≥ 0} is a DTMC. We have

P(Xn+1 = j | Xn = i) = P(Yn+1 = (j - i) (mod r)) = Σ_{m=0}^∞ α_{j-i+mr}.

Here we assume that α_k = 0 for k ≤ 0.

2.11. Let Bn (Gn) be the bar the boy (girl) is in on the nth night. Then {(Bn, Gn), n ≥ 0} is a DTMC on S = {(1,1), (1,2), (2,1), (2,2)} with the following transition probability matrix:

            (1,1)       (1,2)          (2,1)          (2,2)
    (1,1) [   1            0              0              0     ]
    (1,2) [ a(1-d)         ad        (1-a)(1-d)       (1-a)d   ]
    (2,1) [ (1-b)c     (1-b)(1-c)        bc           b(1-c)   ]
    (2,2) [   0            0              0              1     ].

The story ends in bar k if the bivariate DTMC gets absorbed in state (k, k), for k = 1, 2.

2.12. Let Q be the transition probability matrix of {Yn, n ≥ 0}. Suppose Zm = f(i), i.e., the DTMC Y is in state i when the student fills gas for the mth time. Then the student next fills gas after 11 - i days, and the DTMC Y will be in state j at that time with probability [Q^{11-i}]_{ij}. This shows that {Zm, m ≥ 0} is a DTMC with state space {f(0), f(1), ..., f(10)} and transition probabilities

P(Zm+1 = f(j) | Zm = f(i)) = [Q^{11-i}]_{ij}.
2.13. Following the analysis in Example 2.1b, we see that {Xn, n ≥ 0} is a DTMC on state space S = {1, 2, 3, ..., k} with the following transition probabilities:

P(Xn+1 = i | Xn = i) = p_i,  1 ≤ i ≤ k,
P(Xn+1 = i + 1 | Xn = i) = 1 - p_i,  1 ≤ i ≤ k - 1,
P(Xn+1 = 1 | Xn = k) = 1 - p_k.
2.14. Let the state space be {0, 1, 2, 12}, where the state is 0 if both components are working, 1 if component 1 alone is down, 2 if component 2 alone is down, and 12 if components 1 and 2 are both down. Let Xn be the state on day n. {Xn, n ≥ 0} is a DTMC on {0, 1, 2, 12} with transition probability matrix

           0       1       2      12
    0  [  α_0     α_1     α_2    α_12 ]
    1  [  r_1    1-r_1     0      0   ]
    2  [  r_2      0     1-r_2    0   ]
    12 [   0       0      r_1   1-r_1 ].

Here we have assumed that if both components fail, we repair component 1 first, and then component 2.
2.15. Let Xn be the pair that played the nth game. Then X0 = (1, 2). Suppose Xn = (1, 2), so the nth game is played between players 1 and 2. With probability b12 player 1 wins the game, and the next game is played between players 1 and 3, making Xn+1 = (1, 3). On the other hand, player 2 wins the game with probability b21, and the next game is played between players 2 and 3, making Xn+1 = (2, 3). Since the probabilities of winning are independent of the past, it is clear that {Xn, n ≥ 0} is a DTMC on state space {(1, 2), (2, 3), (1, 3)}. Using the same arguments as above, we see that the transition probabilities are given by

             (1,2)  (2,3)  (1,3)
    (1,2) [    0     b21    b12 ]
    (2,3) [   b23     0     b32 ]
    (1,3) [   b13    b31     0  ].
2.16. Let Xn be the number of beers at home when Mr. Al Anon goes to the store. Then {(Xn, Yn), n ≥ 0} is a DTMC on state space S = {(0, L), (1, L), (2, L), (3, L), (4, L), (0, H), (1, H), (2, H), (3, H), (4, H)}
with the following transition probability matrix (rows and columns ordered (0,L), ..., (4,L), (0,H), ..., (4,H)):

    (0,L) [  0    0    0    0    α     0   0   0   0   1-α ]
    (1,L) [  0    0    0    0    α     0   0   0   0   1-α ]
    (2,L) [  0    0    0    0    α     0   0   0   0   1-α ]
    (3,L) [  0    0    0    0    α     0   0   0   0   1-α ]
    (4,L) [  0    0    0    0    α     0   0   0   0   1-α ]
    (0,H) [ 1-β   0    0    0    0     β   0   0   0    0  ]
    (1,H) [ 1-β   0    0    0    0     β   0   0   0    0  ]
    (2,H) [  0   1-β   0    0    0     0   β   0   0    0  ]
    (3,H) [  0    0   1-β   0    0     0   0   β   0    0  ]
    (4,H) [  0    0    0   1-β   0     0   0   0   β    0  ].
2.17. We see that Xn+1 = max{Xn, Yn+1}. Since the Yn's are iid, {Xn, n ≥ 0} is a DTMC. The state space is S = {0, 1, ..., M}. Now, for 0 ≤ i < j ≤ M,

p_{i,j} = P(max{Xn, Yn+1} = j | Xn = i) = P(Yn+1 = j) = α_j.

Also,

p_{i,i} = P(max{Xn, Yn+1} = i | Xn = i) = P(Yn+1 ≤ i) = Σ_{k=0}^{i} α_k.
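The transition structure of this running-maximum chain is easy to verify numerically. A Python sketch with an assumed pmf α on {0, ..., M} (the numbers are hypothetical):

```python
import numpy as np

# Sketch for Modeling Exercise 2.17 with an assumed pmf alpha on {0,...,M}.
M = 4
alpha = np.array([0.1, 0.2, 0.3, 0.25, 0.15])   # assumed P(Y = j)

P = np.zeros((M + 1, M + 1))
for i in range(M + 1):
    P[i, i] = alpha[: i + 1].sum()        # p_ii = P(Y <= i)
    for j in range(i + 1, M + 1):
        P[i, j] = alpha[j]                # p_ij = alpha_j for j > i
```

The matrix is upper triangular, reflecting the fact that a running maximum can never decrease.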
2.18. Let Yn = u if the machine is up at time n and Yn = d if it is down at time n. If Yn = u, let Xn be the remaining up time at time n; if Yn = d, let Xn be the remaining down time at time n. Then {(Xn, Yn), n ≥ 0} is a DTMC with state space S = {(i, j) : i ≥ 1, j = u, d} and transition probabilities

p_{(i,j),(i-1,j)} = 1,  i ≥ 2, j = u, d,
p_{(1,u),(i,d)} = d_i,   p_{(1,d),(i,u)} = u_i,  i ≥ 1.

2.19. Let Xn be the number of messages in the inbox at 8:00am on day n. Ms. Friendly answers Zn = Bin(Xn, p) emails on day n. Hence Xn - Zn = Bin(Xn, 1 - p) emails are left for the next day. Yn is the number of messages that arrive during the 24 hours of day n. Hence at the beginning of the next day there are Xn+1 = Yn + Bin(Xn, 1 - p) messages in her mailbox. Since {Yn, n ≥ 0} is iid, {Xn, n ≥ 0} is a DTMC.

2.20. Let Xn be the number of bytes in the buffer in slot n, after the input during the slot and the removal (playing) of any bytes. We assume that the input during the slot occurs before the removal. Thus

Xn+1 = max{min{Xn + An+1, B} - 1, 0}.
Thus if Xn = 0 and there is no input, Xn+1 = 0. Similarly, if Xn = B, then Xn+1 = B - 1. The process {Xn, n ≥ 0} is a random walk on {0, ..., B - 1} with the following transition probabilities:

p_{0,0} = α_0 + α_1,   p_{0,1} = α_2,
p_{i,i-1} = α_0,   p_{i,i} = α_1,   p_{i,i+1} = α_2,   0 < i < B - 1,
p_{B-1,B-1} = α_1 + α_2,   p_{B-1,B-2} = α_0.

2.21. Let Xn be the number of passengers on the bus when it leaves the nth stop, and let Dn+1 be the number of passengers that alight at the (n + 1)st stop. Since each person on board gets off with probability p in an independent fashion, Dn+1 is a Bin(Xn, p) random variable, and Xn - Dn+1 is a Bin(Xn, 1 - p) random variable. Yn+1 is the number of people that get on the bus at the (n + 1)st stop. Hence

Xn+1 = min{Xn - Dn+1 + Yn+1, B}.

Since {Yn, n ≥ 0} is a sequence of iid random variables, it follows from this recursive relationship that {Xn, n ≥ 0} is a DTMC. The state space is {0, 1, ..., B}. For 0 ≤ i ≤ B and 0 ≤ j < B, we have

p_{i,j} = P(Xn+1 = j | Xn = i)
        = P(Xn - Dn+1 + Yn+1 = j | Xn = i)
        = Σ_{k=0}^{i} P(Yn+1 = j - i + k) P(Dn+1 = k)
        = Σ_{k=0}^{i} (i choose k) p^k (1 - p)^{i-k} α_{k+j-i},

where we use the convention that α_m = 0 if m < 0. Finally,

p_{i,B} = 1 - Σ_{j=0}^{B-1} p_{i,j}.
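The bus-chain transition probabilities just derived can be checked numerically. A Python sketch with a hypothetical capacity B, alighting probability p, and arrival pmf α (all assumed values):

```python
import numpy as np
from math import comb

# Sketch for Modeling Exercise 2.21:
#   p_ij = sum_k C(i,k) p^k (1-p)^(i-k) alpha_{k+j-i}  for j < B,
#   p_iB = 1 - sum of the rest.
B, p = 5, 0.4
alpha = {0: 0.3, 1: 0.4, 2: 0.3}        # assumed arrival pmf P(Y = y)

def trans(i, j):
    if j < B:
        return sum(comb(i, k) * p**k * (1 - p)**(i - k)
                   * alpha.get(k + j - i, 0.0) for k in range(i + 1))
    return 1 - sum(trans(i, jj) for jj in range(B))

P = np.array([[trans(i, j) for j in range(B + 1)] for i in range(B + 1)])
```

Rows sum to one by construction, and all entries are nonnegative.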
2.22. The state space is {-1, 0, 1, 2, ..., k - 1}. The system is in state -1 at time n if it is in economy mode after the nth item is produced (and possibly inspected). It is in state i (0 ≤ i ≤ k - 1) if it is in 100% inspection mode and i consecutive non-defective items have been found so far. The transition probabilities are

p_{-1,0} = p/r,   p_{-1,-1} = 1 - p/r,
p_{i,i+1} = 1 - p,   p_{i,0} = p,   0 ≤ i ≤ k - 2,
p_{k-1,-1} = 1 - p,   p_{k-1,0} = p.
2.23. Xn is the amount on hand at the beginning of the nth day, and Dn is the demand during the nth day. Hence the amount on hand at the end of the nth day is Xn - Dn. If this is s or more, no order is placed, and hence the amount on hand at the beginning of the (n + 1)st day is Xn - Dn. On the other hand, if Xn - Dn < s, then the inventory is brought up to S at the beginning of the next day, making Xn+1 = S. Thus

Xn+1 = Xn - Dn   if Xn - Dn ≥ s,
Xn+1 = S         if Xn - Dn < s.

Since {Dn, n ≥ 0} are iid, {Xn, n ≥ 0} is a DTMC on state space {s, s + 1, ..., S - 1, S}. We compute the transition probabilities next. For s ≤ j ≤ i ≤ S with j ≠ S, we have

P(Xn+1 = j | Xn = i) = P(Xn - Dn = j | Xn = i) = P(Dn = i - j) = α_{i-j},

and for s ≤ i < S, j = S, we have

P(Xn+1 = S | Xn = i) = P(Xn - Dn < s | Xn = i) = P(Dn > i - s) = Σ_{k=i-s+1}^∞ α_k.

Finally,

P(Xn+1 = S | Xn = S) = P(Xn - Dn < s, or Xn - Dn = S | Xn = S)
                     = P(Dn > S - s) + P(Dn = 0) = Σ_{k=S-s+1}^∞ α_k + α_0.

Writing b_j = P(Dn > j) = Σ_{k=j+1}^∞ α_k, the transition probability matrix (rows and columns indexed s, s + 1, ..., S) is

        [ α_0        0          0         ...   0     b_0             ]
        [ α_1       α_0         0         ...   0     b_1             ]
    P = [ α_2       α_1        α_0        ...   0     b_2             ]
        [  :         :          :          .    :      :              ]
        [ α_{S-s-1} α_{S-s-2}  α_{S-s-3}  ...  α_0    b_{S-s-1}       ]
        [ α_{S-s}   α_{S-s-1}  α_{S-s-2}  ...  α_1    α_0 + b_{S-s}   ].

2.24. The state space of {(Xn, Yn), n ≥ 0} is S = {(i, j) : i ≥ 0, j = 1, 2}. Let

β_k^i = Σ_{j=k}^∞ α_j^i,   k ≥ 1, i = 1, 2.

The transition probabilities are given by (see the solution to Modeling Exercise 2.1)

p_{(i,1),(i+1,1)} = β_{i+2}^1 / β_{i+1}^1,  i ≥ 0,
p_{(i,2),(i+1,2)} = β_{i+2}^2 / β_{i+1}^2,  i ≥ 0,
p_{(i,1),(0,j)} = v_j α_{i+1}^1 / β_{i+1}^1,  i ≥ 0,
p_{(i,2),(0,j)} = v_j α_{i+1}^2 / β_{i+1}^2,  i ≥ 0.
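The (s, S) inventory transition matrix of Exercise 2.23 can be built and sanity-checked in a few lines. A Python sketch with hypothetical values of s, S, and the demand pmf (all assumptions):

```python
import numpy as np

# Sketch for Modeling Exercise 2.23: (s,S) inventory chain on {s,...,S}
# with an assumed demand pmf alpha.
s, S = 2, 6
alpha = {0: 0.3, 1: 0.4, 2: 0.2, 3: 0.1}        # assumed P(D = d)

def b(j):
    """b_j = P(D > j)."""
    return sum(v for d, v in alpha.items() if d > j)

states = list(range(s, S + 1))
P = np.zeros((len(states), len(states)))
for a, i in enumerate(states):
    for c, j in enumerate(states):
        if j == S:
            # refill to S, plus zero demand when already at S
            P[a, c] = b(i - s) + (alpha.get(0, 0.0) if i == S else 0.0)
        elif j <= i:
            P[a, c] = alpha.get(i - j, 0.0)      # demand of exactly i - j
```

Each row sums to one, matching the matrix displayed above.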
2.25. Xn is the number of bugs in the program just before running it for the nth time. Suppose Xn = k. Then no bug is discovered on the nth run with probability 1 - β_k, and hence Xn+1 = k. A bug is discovered on the nth run with probability β_k, in which case Yn additional bugs are introduced (with P(Yn = i) = α_i, i = 0, 1, 2) and Xn+1 = k - 1 + Yn. Hence, given Xn = k,

Xn+1 = k - 1   with probability β_k α_0 = q_k,
Xn+1 = k       with probability β_k α_1 + 1 - β_k = r_k,
Xn+1 = k + 1   with probability β_k α_2 = p_k.

Thus {Xn, n ≥ 0} is a DTMC with state space {0, 1, 2, ...} and transition probability matrix

        [ 1    0    0    0    0   ... ]
        [ q_1  r_1  p_1  0    0   ... ]
    P = [ 0    q_2  r_2  p_2  0   ... ]
        [ 0    0    q_3  r_3  p_3 ... ]
        [ :    :    :    :    :    .  ].
2.26. Xn = number of active rumor mongers at time n; Yn = number of individuals who have not heard the rumor up to and including time n; Zn = number of individuals who have heard the rumor up to and including time n but have stopped spreading it. The rumor-spreading process is modeled as a three-dimensional process {(Xn, Yn, Zn), n ≥ 0}. We shall show that it is a DTMC. Since the total number of individuals in the colony is N, we must have

Xn + Yn + Zn = N,   n ≥ 0.

Now let An be the number of individuals who hear the rumor for the first time at time n + 1. An individual who has not heard the rumor by time n does not hear it by time n + 1 if each of the Xn rumor mongers at time n fails to contact him at time n + 1. The probability of that is ((N - 2)/(N - 1))^{Xn}. Hence

An ~ Bin(Yn, 1 - ((N - 2)/(N - 1))^{Xn}).
Similarly, let Bn be the number of active rumor mongers at time n that become inactive at time n + 1. An active rumor monger becomes inactive if he contacts a person who has already heard the rumor. The probability of that is (Xn + Yn - 1)/(N - 1). Hence

Bn ~ Bin(Xn, (Xn + Yn - 1)/(N - 1)).

Now, from the definitions of the various random variables involved,

Xn+1 = Xn - Bn + An,   Yn+1 = Yn - An,   Zn+1 = Zn + Bn.

Thus {(Xn, Yn, Zn), n ≥ 0} is a DTMC.

2.27. {Xn, n ≥ 0} is a DTMC with state space S = {rr, dr, dd}, since the gene type of the (n + 1)st generation depends only on that of the parents in the nth generation. We are given that X0 = rr. Hence the parents of the first generation are rr and dd, so X1 is dr with probability 1. If Xn is dr, then the parents of the (n + 1)st generation are dr and dd, and hence the (n + 1)st generation is dr or dd with probability .5 each. Once the nth generation is dd, it stays dd from then on. Hence the transition probability matrix is given by

         rr   dr   dd
    rr [ 0    1    0  ]
    dr [ 0   .5   .5  ]
    dd [ 0    0    1  ].
2.28. Using the analysis in 2.27, we see that {Xn, n ≥ 0} is a DTMC with state space S = {rr, dr, dd} with the following transition probability matrix:

         rr    dr    dd
    rr [ .50   .50    0  ]
    dr [ .25   .50   .25 ]
    dd [  0    .50   .50 ].
2.29. Let Xn be the number of recipients in the nth generation. There are 20 recipients to begin with, so X0 = 20. Let Y_{i,n} be the number of letters sent out by the ith recipient in the nth generation. The {Y_{i,n} : n ≥ 0, i = 1, 2, ..., Xn} are iid random variables with common pmf

P(Y_{i,n} = 0) = 1 - α,   P(Y_{i,n} = 20) = α.

The number of recipients in the (n + 1)st generation is given by

Xn+1 = Σ_{i=1}^{Xn} Y_{i,n}.

Thus {Xn, n ≥ 0} is a branching process, following the terminology of Section 2.2.
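The long-run behavior of this branching process can be sketched numerically. The offspring pgf is f(s) = 1 - α + α s^20, and the extinction probability starting from one individual is the smallest fixed point of f in [0, 1]; starting from 20 independent lines, it is that root raised to the 20th power. A Python sketch (the value of α is an assumption):

```python
# Sketch for Modeling Exercise 2.29: extinction probability of the
# chain-letter branching process via fixed-point iteration on the pgf.
alpha = 0.1                      # assumed probability a recipient continues

def f(s):
    return 1 - alpha + alpha * s**20

q = 0.0
for _ in range(10000):           # iterating f from 0 converges to the
    q = f(q)                     # smallest root of f(s) = s in [0, 1]

extinction_from_20 = q ** 20
```

With α = 0.1 the mean number of offspring is 2, so the extinction probability is strictly less than 1.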
Note that we cannot start with X0 = 1, since we would then need to take Y_{1,0} = 20 with probability 1, which is a different distribution from that of the other Y_{i,n}'s. This would invalidate the assumptions of a branching process.

2.30. Let Xn be the number of backlogged packets at the beginning of the nth slot. Furthermore, let In be the collision indicator defined as follows: In = id if there are no transmissions in the (n - 1)st slot (idle slot), In = s if there is exactly one transmission in the (n - 1)st slot (successful slot), and In = e if there are two or more transmissions in the (n - 1)st slot (error or collision in the slot). We model the state of the system at the beginning of the nth slot by (Xn, In). Now suppose Xn = i, In = s. Then the backlogged packets retry with probability r. Hence we get

P(Xn+1 = i - 1, In+1 = s | Xn = i, In = s) = (1 - p)^{N-i} · i r (1 - r)^{i-1},
P(Xn+1 = i, In+1 = s | Xn = i, In = s) = (N - i) p (1 - p)^{N-i-1} (1 - r)^i,
P(Xn+1 = i, In+1 = id | Xn = i, In = s) = (1 - p)^{N-i} (1 - r)^i,
P(Xn+1 = i, In+1 = e | Xn = i, In = s) = (1 - p)^{N-i} (1 - (1 - r)^i - i r (1 - r)^{i-1}),
P(Xn+1 = i + 1, In+1 = e | Xn = i, In = s) = (N - i) p (1 - p)^{N-i-1} (1 - (1 - r)^i),
P(Xn+1 = i + j, In+1 = e | Xn = i, In = s) = (N-i choose j) p^j (1 - p)^{N-i-j},  2 ≤ j ≤ N - i.

Next suppose Xn = i, In = id. Then the backlogged packets retry with probability r' > r, and the above equations become:

P(Xn+1 = i - 1, In+1 = s | Xn = i, In = id) = (1 - p)^{N-i} · i r' (1 - r')^{i-1},
P(Xn+1 = i, In+1 = s | Xn = i, In = id) = (N - i) p (1 - p)^{N-i-1} (1 - r')^i,
P(Xn+1 = i, In+1 = id | Xn = i, In = id) = (1 - p)^{N-i} (1 - r')^i,
P(Xn+1 = i, In+1 = e | Xn = i, In = id) = (1 - p)^{N-i} (1 - (1 - r')^i - i r' (1 - r')^{i-1}),
P(Xn+1 = i + 1, In+1 = e | Xn = i, In = id) = (N - i) p (1 - p)^{N-i-1} (1 - (1 - r')^i),
P(Xn+1 = i + j, In+1 = e | Xn = i, In = id) = (N-i choose j) p^j (1 - p)^{N-i-j},  2 ≤ j ≤ N - i.

Finally, suppose Xn = i, In = e. Then the backlogged packets retry with probability r'' < r, and the above equations become:

P(Xn+1 = i - 1, In+1 = s | Xn = i, In = e) = (1 - p)^{N-i} · i r'' (1 - r'')^{i-1},
P(Xn+1 = i, In+1 = s | Xn = i, In = e) = (N - i) p (1 - p)^{N-i-1} (1 - r'')^i,
P(Xn+1 = i, In+1 = id | Xn = i, In = e) = (1 - p)^{N-i} (1 - r'')^i,
P(Xn+1 = i, In+1 = e | Xn = i, In = e) = (1 - p)^{N-i} (1 - (1 - r'')^i - i r'' (1 - r'')^{i-1}),
P(Xn+1 = i + 1, In+1 = e | Xn = i, In = e) = (N - i) p (1 - p)^{N-i-1} (1 - (1 - r'')^i),
P(Xn+1 = i + j, In+1 = e | Xn = i, In = e) = (N-i choose j) p^j (1 - p)^{N-i-j},  2 ≤ j ≤ N - i.

This shows that {(Xn, In), n ≥ 0} is a DTMC with transition probabilities given
above.

2.31. Let Xn be the number of packets ready for transmission at time n, and let Yn be the number of packets that arrive during (n, n + 1]. If Xn = 0, no packets are transmitted during the nth slot and Xn+1 = Yn. If Xn > 0, exactly one packet is transmitted during the nth slot, and hence Xn+1 = Xn - 1 + Yn. Since {Yn, n ≥ 0} are iid, we see that {Xn, n ≥ 0} is identical to the DTMC given in Example 2.16.

2.32. Let Y_{i,n}, i = 1, 2, be the number of non-defective items in the inventory of the ith machine at time n, after all production and any assembly at time n is done. Since the assembly is instantaneous, Y_{1,n} and Y_{2,n} cannot both be positive. Now define Xn = B2 + Y_{1,n} - Y_{2,n}. The state space of {Xn, n ≥ 0} is S = {0, 1, 2, ..., B1 + B2}. Now,

Xn = k > B2 ⇒ Y_{1,n} = k - B2, Y_{2,n} = 0,
Xn = k < B2 ⇒ Y_{1,n} = 0, Y_{2,n} = B2 - k,
Xn = k = B2 ⇒ Y_{1,n} = 0, Y_{2,n} = 0.

Thus Xn contains complete information about Y_{1,n} and Y_{2,n}. {Xn, n ≥ 0} is a random walk on S as in Example 2.5 with

p_{n,n+1} = p_n = α_1 if n = 0;   α_1(1 - α_2) if 0 < n < B1 + B2,
p_{n,n-1} = q_n = α_2(1 - α_1) if 0 < n < B1 + B2;   α_2 if n = B1 + B2,
p_{n,n} = r_n = 1 - α_1 if n = 0;   α_1 α_2 + (1 - α_1)(1 - α_2) if 0 < n < B1 + B2;   1 - α_2 if n = B1 + B2.

2.33. Let Xn be the age of the light bulb in place at time n. Using the solution to Modeling Exercise 2.1, we see that {Xn, n ≥ 0} is a success-runs DTMC on {0, 1, ..., K - 1} with

q_i = p_{i+1}/b_{i+1},   p_i = 1 - q_i,   0 ≤ i ≤ K - 2,   q_{K-1} = 1,

where b_i = P(Z ≥ i) = Σ_{j=i}^∞ p_j.

2.34. The same three models of reader behavior in Section 2.3.7 work if we treat a citation from paper i to paper j as a link from webpage i to webpage j, and the action of visiting a page as the same as actually looking up a paper.
Computational Exercises

2.1. Let Xn be the number of white balls in urn A after n experiments. {Xn, n ≥ 0} is a DTMC on {0, 1, ..., 10} with the tridiagonal transition probability matrix given by

p_{i,i-1} = (i/10)^2,   p_{i,i} = 2i(10 - i)/100,   p_{i,i+1} = ((10 - i)/10)^2,   0 ≤ i ≤ 10.

For example, the first three rows of P are

    [ 0     1.00  0     0     ... ]
    [ 0.01  0.18  0.81  0     ... ]
    [ 0     0.04  0.32  0.64  ... ],

and the remaining rows follow the same pattern, ending with [..., 0, 1.00, 0].
Using the equation given in Example 2.21 we get the following table:

    n    X0 = 8   X0 = 5   X0 = 3
    0    8.0000   5.0000   3.0000
    1    7.4000   5.0000   3.4000
    2    6.9200   5.0000   3.7200
    3    6.5360   5.0000   3.9760
    4    6.2288   5.0000   4.1808
    5    5.9830   5.0000   4.3446
    6    5.7864   5.0000   4.4757
    7    5.6291   5.0000   4.5806
    8    5.5033   5.0000   4.6645
    9    5.4027   5.0000   4.7316
    10   5.3221   5.0000   4.7853
    11   5.2577   5.0000   4.8282
    12   5.2062   5.0000   4.8626
    13   5.1649   5.0000   4.8900
    14   5.1319   5.0000   4.9120
    15   5.1056   5.0000   4.9296
    16   5.0844   5.0000   4.9437
    17   5.0676   5.0000   4.9550
    18   5.0540   5.0000   4.9640
    19   5.0432   5.0000   4.9712
    20   5.0346   5.0000   4.9769
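The entries of this table can be reproduced with a few lines of Python (a sketch; the manual's own computations use Matlab):

```python
import numpy as np

# Check of Computational Exercise 2.1: chain on {0,...,10} with
# p(i,i-1) = (i/10)^2, p(i,i+1) = ((10-i)/10)^2, p(i,i) = 2i(10-i)/100.
N = 10
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i > 0:
        P[i, i - 1] = (i / N) ** 2
    if i < N:
        P[i, i + 1] = ((N - i) / N) ** 2
    P[i, i] = 2 * i * (N - i) / N**2

def expected(n, x0):
    """E(X_n | X_0 = x0) via the n-step transition matrix."""
    return float(np.linalg.matrix_power(P, n)[x0] @ np.arange(N + 1))
```

For instance, `expected(1, 8)` reproduces the table value 7.4000, and starting from 5 the mean stays at 5 for all n.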
2.2. Let P be the transition probability matrix and a the initial distribution given in the problem.
1. Let a(2) be the pmf of X2. It is given by Equation 2.31. Substituting for a and P we get

a(2) = [0.2050 0.0800 0.1300 0.3250 0.2600].

2. P(X2 = 2, X4 = 5) = P(X4 = 5 | X2 = 2) P(X2 = 2)
                     = P(X2 = 5 | X0 = 2) · (.0800)
                     = [P^2]_{2,5} · (.0800)
                     = (.0400) · (.0800) = .0032.

3. P(X7 = 3 | X3 = 4) = P(X4 = 3 | X0 = 4) = [P^4]_{4,3} = .0318.

4. P(X1 ∈ {1,2,3}, X2 ∈ {4,5}) = Σ_{i=1}^{5} P(X1 ∈ {1,2,3}, X2 ∈ {4,5} | X0 = i) P(X0 = i)
                               = Σ_{i=1}^{5} a_i Σ_{j=1}^{3} Σ_{k=4}^{5} P(X1 = j, X2 = k | X0 = i)
                               = Σ_{i=1}^{5} Σ_{j=1}^{3} Σ_{k=4}^{5} a_i p_{i,j} p_{j,k}
                               = .4450.
2.3. The easiest way is to prove this by induction. Assume a + b ≠ 2. Using the formula given in Computational Exercise 3, we see that

P^0 = 1/(2-a-b) [ 1-b  1-a ] + 1/(2-a-b) [ 1-a  a-1 ]  =  [ 1  0 ]
                [ 1-b  1-a ]             [ b-1  1-b ]      [ 0  1 ],

and

P^1 = 1/(2-a-b) [ 1-b  1-a ] + (a+b-1)/(2-a-b) [ 1-a  a-1 ]  =  [  a   1-a ]
                [ 1-b  1-a ]                   [ b-1  1-b ]      [ 1-b   b  ].

Thus the formula is valid for n = 0 and n = 1. Now suppose it is valid for n = k ≥ 1. Then

P^{k+1} = P^k · P
        = ( 1/(2-a-b) [ 1-b  1-a ] + (a+b-1)^k/(2-a-b) [ 1-a  a-1 ] ) · [  a   1-a ]
                      [ 1-b  1-a ]                     [ b-1  1-b ]     [ 1-b   b  ]
        = 1/(2-a-b) [ 1-b  1-a ] + (a+b-1)^{k+1}/(2-a-b) [ 1-a  a-1 ]
                    [ 1-b  1-a ]                         [ b-1  1-b ],

where the last equation follows after some algebra. Hence the formula is valid for
n = k + 1. Thus the result is established by induction. If a + b = 2, we must have a = b = 1, and hence

P^n = P = [ 1  0 ]
          [ 0  1 ].

The formula reduces to this after an application of L'Hopital's rule to compute the limit.

2.4. Let Xn be as defined in Example 2.1b. Then {Xn, n ≥ 0} is a DTMC with transition matrix

    P = [ p1    1-p1 ]
        [ 1-p2   p2  ].

Using the result of Computational Exercise 3 above (with a = p1, b = p2), we get

P^n = 1/(2-p1-p2) [ 1-p2  1-p1 ] + (p1+p2-1)^n/(2-p1-p2) [ 1-p1  p1-1 ]
                  [ 1-p2  1-p1 ]                         [ p2-1  1-p2 ].

Using the fact that the first patient is given a drug at random, we have P(X1 = 1) = P(X1 = 2) = .5. Hence, for n ≥ 1,

P(Xn = 1) = P(Xn = 1 | X1 = 1) · .5 + P(Xn = 1 | X1 = 2) · .5
          = (1/2) ([P^{n-1}]_{1,1} + [P^{n-1}]_{2,1})
          = (1/2) (1 - (p1 - p2)((p1 + p2 - 1)^{n-1} - 1)/(2 - p1 - p2)).

Now let Yr = 1 if the rth patient gets drug 1, and 0 otherwise. Then

Zn = Σ_{r=1}^{n} Yr

is the number of patients among the first n who receive drug 1. Hence

E(Zn) = E(Σ_{r=1}^{n} Yr) = Σ_{r=1}^{n} P(Yr = 1) = Σ_{r=1}^{n} P(Xr = 1)
      = n(1 - p2)/(2 - p1 - p2) + (p1 - p2)((p1 + p2 - 1)^n - 1)/(2(2 - p1 - p2)^2).
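The two-state spectral formula and the resulting expressions for P(Xn = 1) and E(Zn) are easy to verify numerically. A Python sketch (the values of p1 and p2 are assumptions for illustration):

```python
import numpy as np

# Check of the two-state formula used in Computational Exercises 2.3 and 2.4.
p1, p2 = 0.7, 0.6                       # assumed cure probabilities
P = np.array([[p1, 1 - p1], [1 - p2, p2]])
c = p1 + p2 - 1
A = np.array([[1 - p2, 1 - p1], [1 - p2, 1 - p1]]) / (2 - p1 - p2)
B = np.array([[1 - p1, p1 - 1], [p2 - 1, 1 - p2]]) / (2 - p1 - p2)

def Pn(n):
    """Closed form for P^n: A + c^n * B."""
    return A + c**n * B
```

Comparing `Pn(n)` against `np.linalg.matrix_power(P, n)` confirms the closed form, and summing 0.5·([P^{r-1}]_{1,1} + [P^{r-1}]_{2,1}) over r reproduces the E(Zn) formula.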
2.5. Let Xn be the brand chosen by a typical customer in the nth week. Then {Xn, n ≥ 0} is a DTMC with the transition probability matrix P given in Example 2.6. We are given the initial distribution a = [.3 .3 .4]. The distribution of X3 is given by

a(3) = aP^3 = [0.1317 0.3187 0.5496].

Thus a typical customer buys brand B in week 3 with probability .3187. Since all k customers behave independently of each other, the number of customers that buy brand B in week 3 is a Bin(k, .3187) random variable.

2.6. Since the machines are identical and independent, the total expected revenue over {0, 1, ..., n} is given by r [M(n)]_{1,1}, where M(n) is given in Example 2.24.

2.7. Let α =
(1 + u)/(1 - d), and write Xn = (1 - d)^n α^{Zn}. Using the results about the generating function of a binomial random variable, we get

E(Xn) = (1 - d)^n E(α^{Zn}) = (1 - d)^n (pα + 1 - p)^n,

and

E(Xn^2) = (1 - d)^{2n} E(α^{2Zn}) = (1 - d)^{2n} (pα^2 + 1 - p)^n.

This gives the mean and variance of Xn.

2.8. The initial distribution is a = [1 0 0 0].

(i) a(2) = aP^2 = [0.42 0.14 0.11 0.33]. Hence P(X2 = 4) = .33.

(ii) Since P(X0 = 1) = 1, we have

P(X1 = 2, X2 = 4, X3 = 1) = Σ_{i=1}^{4} P(X1 = 2, X2 = 4, X3 = 1 | X0 = i) P(X0 = i)
                          = P(X1 = 2, X2 = 4, X3 = 1 | X0 = 1)
                          = p_{1,2} p_{2,4} p_{4,1} = .015.
(iii) Using time homogeneity, we get

P(X7 = 4 | X5 = 2) = P(X2 = 4 | X0 = 2) = [P^2]_{2,4} = .25.

(iv) Let b = [1 2 3 4]'. Then

E(X3) = a · P^3 · b = 2.455.
2.9. From the definition of Xn and Yn we see that

Xn+1 = 20        if Xn - Yn < 10,
Xn+1 = Xn - Yn   if Xn - Yn ≥ 10.

Since {Yn, n ≥ 0} are iid random variables, it follows that {Xn, n ≥ 0} is a DTMC on state space {10, 11, 12, ..., 20}. The transition probability matrix (rows and columns indexed 10, 11, ..., 20) is given by

    P = [ .1   0    0    0    0    0    0    0    0    0   .9 ]
        [ .2  .1    0    0    0    0    0    0    0    0   .7 ]
        [ .3  .2   .1    0    0    0    0    0    0    0   .4 ]
        [ .4  .3   .2   .1    0    0    0    0    0    0    0 ]
        [  0  .4   .3   .2   .1    0    0    0    0    0    0 ]
        [  0   0   .4   .3   .2   .1    0    0    0    0    0 ]
        [  0   0    0   .4   .3   .2   .1    0    0    0    0 ]
        [  0   0    0    0   .4   .3   .2   .1    0    0    0 ]
        [  0   0    0    0    0   .4   .3   .2   .1    0    0 ]
        [  0   0    0    0    0    0   .4   .3   .2   .1    0 ]
        [  0   0    0    0    0    0    0   .4   .3   .2   .1 ].

The initial distribution is a = [0 0 0 0 0 0 0 0 0 0 1]. Let b = [10 11 12 13 14 15 16 17 18 19 20]'. Then we have

E(Xn) = a P^n b,   n ≥ 0.
Using this we get

    n    E(Xn)
    0    20.0000
    1    18.0000
    2    16.0000
    3    14.0000
    4    13.1520
    5    14.9942
    6    16.5868
    7    16.5694
    8    15.4925
    9    14.5312
    10   14.5887
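These values can be reproduced directly from the matrix above; the implied demand pmf is P(Yn = k) = (k + 1)/10 for k = 0, 1, 2, 3. A Python sketch:

```python
import numpy as np

# Check of Computational Exercise 2.9: states 10..20, refill to 20 when the
# level would drop below 10; demand pmf P(Y = k) = (k+1)/10, k = 0..3.
states = list(range(10, 21))
y_pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

P = np.zeros((11, 11))
for a, i in enumerate(states):
    for y, prob in y_pmf.items():
        j = i - y if i - y >= 10 else 20
        P[a, states.index(j)] += prob

def expected(n):
    """E(X_n) starting from X_0 = 20."""
    dist = np.zeros(11); dist[-1] = 1.0
    dist = dist @ np.linalg.matrix_power(P, n)
    return float(dist @ np.array(states))
```

For example, `expected(4)` reproduces the table value 13.1520.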
2.10. From Example 2.12, {Xn, n ≥ 0} is a random walk on {0, 1, 2, 3, ...} with parameters

r_0 = 1 - p = .2,   p_0 = .8,
q_i = q(1 - p) = .14,   p_i = p(1 - q) = .24,   r_i = .62,   i ≥ 1.

We are given X0 = 0. Hence P(X1 = 0) = .2 and P(X1 = 1) = .8, and

P(X2 = 0) = P(X2 = 0 | X1 = 0) P(X1 = 0) + P(X2 = 0 | X1 = 1) P(X1 = 1)
          = .2 · .2 + .14 · .8 = .152.
2.11. The simple random walk of Example 2.19 has state space {0, ±1, ±2, ...} and the following transition probabilities:

p_{i,i+1} = p,   p_{i,i-1} = q = 1 - p.

We want to compute p^{(n)}_{i,j} = P(Xn = j | X0 = i). Let R be the number of right steps and L the number of left steps taken by the random walk during the first n steps. Then R + L = n and R - L = j - i. Thus

R = (n + j - i)/2,   L = (n + i - j)/2.

This is possible if and only if n + j - i is even. There are (n choose R) ways of taking R steps
to the right and L steps to the left in the first n steps. Hence, if n + j - i is even, we have

p^{(n)}_{i,j} = (n choose R) p^R q^L,

and otherwise it is zero.

2.12. Let {Xn, n ≥ 0} be the DTMC of Modeling Exercise 2.5, and let Yn = (Xn-1, Xn). {Yn, n ≥ 1} is a DTMC with transition probability matrix given below:

            (1,1)  (1,2)  (2,1)  (2,2)
    (1,1) [  .80    .20     0      0   ]
    (1,2) [   0      0     .50    .50  ]
    (2,1) [  .75    .25     0      0   ]
    (2,2) [   0      0     .40    .60  ].

Suppose the rainy spell starts on day 1, i.e., Y1 = (1, 2). Let R be the length of the rainy spell. Then

P(R = 1) = P(Y2 = (2, 1) | Y1 = (1, 2)) = .5.

For k ≥ 2 we have

P(R = k) = P(Yi = (2, 2), i = 2, 3, ..., k, Yk+1 = (2, 1) | Y1 = (1, 2))
         = P(Y2 = (2, 2) | Y1 = (1, 2)) · Π_{i=2}^{k-1} P(Yi+1 = (2, 2) | Yi = (2, 2)) · P(Yk+1 = (2, 1) | Yk = (2, 2))
         = (.5)(.6)^{k-2}(.4).

By a similar analysis, the distribution of the length S of a sunny spell is given by P(S = 1) = .25 and, for k ≥ 2, P(S = k) = (.75)(.8)^{k-2}(.2).
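Both spell-length distributions can be checked numerically; each should sum to one, and the geometric tail makes the mean easy to compute. A Python sketch:

```python
# Check of Computational Exercise 2.12: pmfs of rainy and sunny spell lengths.
def p_rainy(k):
    return 0.5 if k == 1 else 0.5 * 0.6 ** (k - 2) * 0.4

def p_sunny(k):
    return 0.25 if k == 1 else 0.75 * 0.8 ** (k - 2) * 0.2

total_r = sum(p_rainy(k) for k in range(1, 2000))
mean_r = sum(k * p_rainy(k) for k in range(1, 2000))
```

The mean rainy spell length works out to 2.25 days.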
2.13. The Matlab program to compute the quantities is given below.
**************************************************************
C = 20;    % capacity of the bus
l = 10;    % passengers arriving at a stop are Poisson(l)
p = 0.4;   % prob that a rider gets off at a stop
N = 20;    % number of stops
% pp(i+1) = P(a Poisson(l) rv = i)
pp = zeros(1,C+1); pp(1) = exp(-l);
for i = 1:C
    pp(i+1) = pp(i)*l/i;
end;
% P = transition probability matrix of the DTMC {X_n, n >= 0}
P = zeros(C+1);
for i = 0:C
    % pb(j+1) = P(j of the i riders stay on) = P(a Bin(i,1-p) rv = j)
    pb = zeros(1,i+1); pb(1) = p^i;
    for j = 1:i
        pb(j+1) = pb(j)*((1-p)/p)*(i-j+1)/j;
    end;
    for j = 0:C-1
        P(i+1,j+1) = 0;
        for k = 0:min(i,j)
            P(i+1,j+1) = P(i+1,j+1) + pb(k+1)*pp(j-k+1);
        end;
    end;
end;
b = sum(P');
P(:,C+1) = ones(C+1,1) - b';
% ex(n) = E(X_n)
nv = []; ex = []; b = [0:C]'; a = [1 zeros(1,C)];
for n = 0:N
    nv = [nv n];
    ex = [ex a*P^n*b];
end;
[nv' ex']
*************************************************************
The final output is

    n    E(Xn)
    0     0
    1     9.9972
    2    15.6322
    3    17.9843
    4    18.7442
    5    18.9664
    6    19.0294
    7    19.0471
    8    19.0520
    9    19.0534
    10   19.0538
    11   19.0539
    12   19.0539
    13   19.0539
    14   19.0539
    15   19.0539
    16   19.0539
    17   19.0539
    18   19.0539
    19   19.0539
    20   19.0539
2.14. Follows from direct verification that P x_k = λ_k x_k and y_k P = λ_k y_k for 1 ≤ k ≤ m.

2.15. The statement holds for n = 1. Now suppose it holds for a given n ≥ 1. Then

p^{(n+1)}_{0,0} = Σ_{i=0}^∞ p^{(n)}_{0,i} p_{i,0} = q Σ_{i=0}^∞ p^{(n)}_{0,i} = q,

since p_{i,0} = q for all i. The result is true by induction.

2.16. The Matlab program is listed below:
********************************************
N = 3;    % number of points on the circle
p = .4;   % probability of a clockwise jump
% P = the transition probability matrix
P = zeros(N,N);
P(1,N) = 1-p; P(1,2) = p;
for i = 2:N-1
    P(i,i+1) = p;
    P(i,i-1) = 1-p;
end;
P(N,1) = p; P(N,N-1) = 1-p;
[V, D] = eig(P);
IV = inv(V);
for i = 1:N
    i             % the i-th eigenvalue is printed next
    D(i,i)        % the matrix B_i is printed next
    V(:,i)*IV(i,:)
end;
*******************************************
The output of the above program is

λ1 = 1,   λ2 = -.5 + .1732i,   λ3 = -.5 - .1732i.

The corresponding matrices are

    B1 = [ 0.3333  0.3333  0.3333 ]
         [ 0.3333  0.3333  0.3333 ]
         [ 0.3333  0.3333  0.3333 ],

    B2 = [  0.3333            -0.1667 + 0.2887i  -0.1667 - 0.2887i ]
         [ -0.1667 - 0.2887i   0.3333            -0.1667 + 0.2887i ]
         [ -0.1667 + 0.2887i  -0.1667 - 0.2887i   0.3333           ],

    B3 = [  0.3333            -0.1667 - 0.2887i  -0.1667 + 0.2887i ]
         [ -0.1667 + 0.2887i   0.3333            -0.1667 - 0.2887i ]
         [ -0.1667 - 0.2887i  -0.1667 + 0.2887i   0.3333           ].

Then

P^n = B1 + (-.5 + .1732i)^n B2 + (-.5 - .1732i)^n B3.
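The same spectral decomposition can be reproduced in Python (a sketch mirroring the Matlab program above, for N = 3 and p = .4):

```python
import numpy as np

# Check of Computational Exercise 2.16: P^n = B1 + lam2^n B2 + lam3^n B3,
# where B_k = v_k w_k with v_k, w_k the right/left eigenvectors of P.
p, N = 0.4, 3
P = np.array([[0, p, 1 - p],
              [1 - p, 0, p],
              [p, 1 - p, 0]])
lam, V = np.linalg.eig(P)
IV = np.linalg.inv(V)
B = [np.outer(V[:, k], IV[k, :]) for k in range(N)]

def Pn(n):
    """Spectral representation of the n-step transition matrix."""
    return sum(lam[k] ** n * B[k] for k in range(N)).real
```

Note that `np.linalg.eig` may order the eigenvalues differently from Matlab, but the reassembled `Pn(n)` agrees with the matrix power for every n.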
2.18. We show by induction that
$$p^{(n)}_{ij} = qp^j, \quad j = 0, 1, 2, ..., n-1, \qquad p^{(n)}_{i,i+n} = p^n.$$
All other $p^{(n)}_{i,j}$ are zero. This is clearly true at $n = 1$. We show that if it holds for $n$, it holds for $n+1$. For $j = 0, 1, ..., n-1$,
$$p^{(n+1)}_{i,j} = q\,p^{(n)}_{0,j} + p\,p^{(n)}_{i+1,j} = q \cdot qp^j + p \cdot qp^j = qp^j.$$
For $j = n$, we have
$$p^{(n+1)}_{i,n} = q\,p^{(n)}_{0,n} + p\,p^{(n)}_{i+1,n} = q p^n + 0 = q p^n.$$
Finally,
$$p^{(n+1)}_{i,i+n+1} = q\,p^{(n)}_{0,i+n+1} + p\,p^{(n)}_{i+1,i+1+n} = 0 + p \cdot p^n = p^{n+1}.$$
Thus the result holds for $n+1$. Hence it holds for all $n$ by induction.

2.19. We are given
$$P = \begin{bmatrix} 0.3 & 0.4 & 0.3 \\ 0.4 & 0.5 & 0.1 \\ 0.6 & 0.2 & 0.2 \end{bmatrix}.$$
Hence
$$I - zP = \begin{bmatrix} 1-0.3z & -0.4z & -0.3z \\ -0.4z & 1-0.5z & -0.1z \\ -0.6z & -0.2z & 1-0.2z \end{bmatrix}.$$
Following Example 2.15, we get
$$\sum_{n=0}^{\infty} p^{(n)}_{11} z^n = \left[(I - zP)^{-1}\right]_{11} = \frac{\det(A)}{\det(I - zP)},$$
where
$$A = \begin{bmatrix} 1-0.5z & -0.1z \\ -0.2z & 1-0.2z \end{bmatrix}.$$
Expanding, we get
$$\sum_{n=0}^{\infty} p^{(n)}_{11} z^n = \frac{1 - 0.7z + .08z^2}{1 - z - .05z^2 + .05z^3} = \frac{.4}{1-z} + \frac{0.5236}{1 + 0.2236z} + \frac{0.0764}{1 - .2236z} = \sum_{n=0}^{\infty} \left(.4 + 0.5236(-0.2236)^n + 0.0764(0.2236)^n\right) z^n.$$
Hence, we get
$$p^{(n)}_{11} = .4 + 0.5236(-0.2236)^n + 0.0764(0.2236)^n, \quad n \ge 0.$$
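This closed form can be checked against direct matrix powers. A Python sketch (numpy assumed; the constants are rounded to four digits, hence the loose tolerance):

```python
import numpy as np

P = np.array([[0.3, 0.4, 0.3],
              [0.4, 0.5, 0.1],
              [0.6, 0.2, 0.2]])

def p11(n):
    # closed form obtained from the partial-fraction expansion above
    return 0.4 + 0.5236 * (-0.2236)**n + 0.0764 * (0.2236)**n

for n in range(12):
    exact = np.linalg.matrix_power(P, n)[0, 0]   # state 1 is index 0
    assert abs(exact - p11(n)) < 5e-4
print("p11^(n) formula agrees with P^n")
```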
2.20. From the structure of the DTMC,
$$p^{(n)}_{ij} = 0 \quad \text{if } j > i.$$
Also,
$$p^{(n)}_{ii} = \left(\frac{1}{i+1}\right)^n, \quad i \ge 0.$$
Hence,
$$\phi_{ii}(z) = \sum_{n=0}^{\infty} p^{(n)}_{ii} z^n = \sum_{n=0}^{\infty} \left(\frac{z}{i+1}\right)^n = 1/(1 - z/(i+1)).$$
Next,
$$\phi_{i,i-1}(z) = \sum_{n=0}^{\infty} p^{(n)}_{i,i-1} z^n = \sum_{n=0}^{\infty} \left\{\frac{1}{i+1} p^{(n-1)}_{i,i-1} + \frac{1}{i+1} p^{(n-1)}_{i-1,i-1}\right\} z^n = \frac{z}{i+1}\phi_{i,i-1}(z) + \frac{z}{i+1}\phi_{i-1,i-1}(z).$$
Solving the above we get
$$\phi_{i,i-1}(z) = \frac{z}{i+1}\left[\left(1 - \frac{z}{i+1}\right)\left(1 - \frac{z}{i}\right)\right]^{-1}.$$
In general we have
$$\phi_{i,j}(z) = \frac{z}{i+1} \sum_{k=j}^{i} \phi_{k,j}(z).$$
The result follows from this by induction.

2.21. The $\{X_n, n \ge 0\}$ as defined in Modeling Exercise 2.28 is a 3-state DTMC with transition probability matrix given by
$$P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & .5 & .5 \\ 0 & 0 & 1 \end{bmatrix}.$$
Using Matlab, we get $P = XDX^{-1}$, where
$$X = \begin{bmatrix} 1.0000 & 0.8944 & 1 \\ 0 & 0.4472 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 1 \end{bmatrix},$$
and
$$X^{-1} = \begin{bmatrix} 1 & -2 & 1 \\ 0 & 2.2361 & -2.2361 \\ 0 & 0 & 1 \end{bmatrix}.$$
Hence, for $n \ge 1$,
$$P^n = XD^nX^{-1} = \begin{bmatrix} 0 & 2^{1-n} & 1 - 2^{1-n} \\ 0 & 2^{-n} & 1 - 2^{-n} \\ 0 & 0 & 1 \end{bmatrix}.$$
2.22. The $\{X_n, n \ge 0\}$ as defined in Modeling Exercise 2.27 is a 3-state DTMC with transition probability matrix given by
$$P = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.25 & .5 & .25 \\ 0 & 0.5 & 0.5 \end{bmatrix}.$$
Using Matlab, we get $P = XDX^{-1}$, where
$$X = \begin{bmatrix} 1 & 1 & -1 \\ -1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0.5 \end{bmatrix},$$
and
$$X^{-1} = \begin{bmatrix} 0.25 & -0.50 & 0.25 \\ 0.25 & 0.50 & 0.25 \\ -0.50 & 0.00 & 0.50 \end{bmatrix}.$$
Hence, for $n \ge 1$,
$$P^n = XD^nX^{-1} = \begin{bmatrix} .25 + 2^{-n-1} & .50 & .25 - 2^{-n-1} \\ 0.25 & 0.50 & 0.25 \\ .25 - 2^{-n-1} & 0.50 & .25 + 2^{-n-1} \end{bmatrix}.$$
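The closed form for $P^n$ can again be verified directly (a Python sketch; numpy assumed):

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def Pn(n):
    # closed form: corner entries .25 +/- 2^{-n-1}; middle row is constant
    c = 2.0**(-n - 1)
    return np.array([[.25 + c, .5, .25 - c],
                     [.25, .5, .25],
                     [.25 - c, .5, .25 + c]])

for n in range(1, 10):
    assert np.allclose(np.linalg.matrix_power(P, n), Pn(n))
print("closed form for P^n verified")
```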
2.23. Let $\{X_n, n \ge 0\}$ be a branching process with $X_0 = i$. Suppose the individuals in the zeroth generation are indexed $1, 2, ..., i$. Let $X^k_n$ be the number of individuals in the $n$th generation that are direct descendants of the $k$th individual in generation zero. Then $X^k_0 = 1$, $1 \le k \le i$, and
$$X_n = \sum_{k=1}^{i} X^k_n, \quad n \ge 0.$$
Since the offsprings do not interact with each other, it is clear that $\{X^k_n, n \ge 0\}$,
$1 \le k \le i$, are $i$ independent and stochastically identical branching processes, each beginning with a single individual. Hence,
$$E(X_n) = E\left(\sum_{k=1}^{i} X^k_n\right) = \sum_{k=1}^{i} E(X^k_n) = i\mu^n,$$
from Equation 2.37. Similarly,
$$Var(X_n) = Var\left(\sum_{k=1}^{i} X^k_n\right) = \sum_{k=1}^{i} Var(X^k_n) = \begin{cases} in\sigma^2 & \text{if } \mu = 1, \\ i\mu^{n-1}\sigma^2\,\dfrac{\mu^n - 1}{\mu - 1} & \text{if } \mu \ne 1. \end{cases}$$

2.24. The transition probability matrix is
$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ q & 0 & p & 0 \\ 0 & q & 0 & p \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Finding the eigenvalues and eigenvectors, we get, with $\theta = \sqrt{pq}$,
$$D = \begin{bmatrix} \theta & 0 & 0 & 0 \\ 0 & -\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
and
$$X = \begin{bmatrix} 0 & 0 & 1-pq & -p^2 \\ \theta & -\theta & q & 0 \\ q & q & q^2 & pq \\ 0 & 0 & 0 & q \end{bmatrix}.$$
Then we get $P^n = XD^nX^{-1}$.

2.25. The Wright-Fisher model satisfies $X_{n+1} \sim \mathrm{Bin}(N, X_n/N)$. Hence
$$E(X_{n+1}) = N E(X_n/N) = E(X_n).$$
Hence $E(X_n) = E(X_0) = i$ for all $n \ge 0$. We next have
$$E(X_{n+1}^2 \mid X_n) = N\frac{X_n}{N}\left(1 - \frac{X_n}{N}\right) + X_n^2.$$
Taking expectations again, we get
$$E(X_{n+1}^2) = E(X_n) + E(X_n^2)\left(1 - \frac{1}{N}\right).$$
Using $E(X_n) = i$, $a = 1 - \frac{1}{N}$, and solving recursively, we get
$$E(X_n^2) = Ni(1 - a^n) + i^2 a^n, \quad n \ge 0.$$
Hence
$$Var(X_n) = E(X_n^2) - (E(X_n))^2 = (1 - a^n)(Ni - i^2).$$

2.26. We have $X_{n+1} = X_n + 1$ with probability $p(X_n) = X_n(N - X_n)/N^2$, $X_{n+1} = X_n - 1$ with probability $p(X_n)$, and $X_{n+1} = X_n$ with the remaining probability. Hence
$$E(X_{n+1}) = E(X_n) = E(X_0) = i.$$
Also,
$$E(X_{n+1}^2) = E\left((X_n^2 + 2X_n + 1)p(X_n) + X_n^2(1 - 2p(X_n)) + (X_n^2 - 2X_n + 1)p(X_n)\right).$$
Using $E(X_n) = i$ and simplifying the above, we get
$$E(X_{n+1}^2) = E(X_n^2)\left(1 - \frac{2}{N^2}\right) + \frac{2i}{N}.$$
Using $E(X_0^2) = i^2$, the above equation can be solved recursively to get
$$E(X_n^2) = a^n i^2 + b\,\frac{1 - a^n}{1 - a}, \quad n \ge 0,$$
where $a = 1 - 2/N^2$ and $b = 2i/N$.
Conceptual Exercises

2.1. We have
$$P(X_{n+2} = k, X_{n+1} = j \mid X_n = i, X_{n-1}, \cdots, X_0)$$
$$= P(X_{n+2} = k \mid X_{n+1} = j, X_n = i, X_{n-1}, \cdots, X_0) \cdot P(X_{n+1} = j \mid X_n = i, X_{n-1}, \cdots, X_0)$$
$$= P(X_{n+2} = k \mid X_{n+1} = j)\,P(X_{n+1} = j \mid X_n = i) = p_{jk}\,p_{ij} = P(X_2 = k, X_1 = j \mid X_0 = i).$$
The result follows by summing over $j \in A$ and $k \in B$.

2.2. (a). Let $\{X_n, n \ge 0\}$ and $\{Y_n, n \ge 0\}$ be two independent DTMCs on state space $\{0,1\}$ with transition probability matrices
$$P1 = \begin{bmatrix} 0.8 & 0.2 \\ .5 & .5 \end{bmatrix}, \qquad P2 = \begin{bmatrix} 0.3 & 0.7 \\ .4 & .6 \end{bmatrix}.$$
Both DTMCs start with initial distribution $[.5\ .5]$. Let $Z_n = X_n + Y_n$. Now
$$P(Z_2 = 2 \mid Z_1 = 1, Z_0 = 0) = \frac{P(Z_2 = 2, Z_1 = 1, Z_0 = 0)}{P(Z_1 = 1, Z_0 = 0)}.$$
We have
$$P(Z_2 = 2, Z_1 = 1, Z_0 = 0) = P(X_2 = Y_2 = 1, X_1 + Y_1 = 1, X_0 = Y_0 = 0)$$
$$= P(X_2 = Y_2 = 1, X_1 = 1, Y_1 = 0, X_0 = Y_0 = 0) + P(X_2 = Y_2 = 1, X_1 = 0, Y_1 = 1, X_0 = Y_0 = 0)$$
$$= P(X_2 = 1, X_1 = 1, X_0 = 0)P(Y_2 = 1, Y_1 = 0, Y_0 = 0) + P(X_2 = 1, X_1 = 0, X_0 = 0)P(Y_2 = 1, Y_1 = 1, Y_0 = 0)$$
$$= (.5)(.1)(.5)(.21) + (.5)(.16)(.5)(.42) = .0221.$$
Similarly, $P(Z_1 = 1, Z_0 = 0) = .1550$. Hence
$$P(Z_2 = 2 \mid Z_1 = 1, Z_0 = 0) = .0221/.155 = .1423.$$
However, $P(Z_2 = 2, Z_1 = 1) = .0690$ and $P(Z_1 = 1) = .5450$. Hence
$$P(Z_2 = 2 \mid Z_1 = 1) = .0690/.5450 = .1266.$$
Thus $\{Z_n, n \ge 0\}$ is not a DTMC.

(b). Let $\{X_n, n \ge 0\}$ and $\{Y_n, n \ge 0\}$ be two independent DTMCs with state
spaces $S^1$ and $S^2$ and transition probability matrices $P^1$ and $P^2$, respectively. Let $Z_n = (X_n, Y_n)$. The state space of $\{Z_n, n \ge 0\}$ is $S^1 \times S^2$. Furthermore,
$$P(Z_{n+1} = (j,l) \mid Z_n = (i,k), Z_{n-1}, ..., Z_0)$$
$$= P(X_{n+1} = j, Y_{n+1} = l \mid X_n = i, Y_n = k, X_{n-1}, Y_{n-1}, ..., X_0, Y_0)$$
$$= P(X_{n+1} = j \mid X_n = i, X_{n-1}, ..., X_0) \cdot P(Y_{n+1} = l \mid Y_n = k, Y_{n-1}, ..., Y_0)$$
$$= P(X_{n+1} = j \mid X_n = i) \cdot P(Y_{n+1} = l \mid Y_n = k) = P^1_{i,j} \cdot P^2_{k,l} = P(Z_{n+1} = (j,l) \mid Z_n = (i,k)).$$
Thus $\{Z_n, n \ge 0\}$ is a DTMC.

2.3. (a). False. Let $\{X_n, n \ge 0\}$ be a DTMC with state space $\{1, 2, 3\}$ and transition probability matrix
$$P = \begin{bmatrix} 0.8 & 0.2 & 0 \\ 0 & .5 & .5 \\ .75 & .25 & 0 \end{bmatrix}.$$
Let the initial distribution be $a = [.2\ 0\ .8]$. Now
$$P(X_2 = 1 \mid X_1 \in \{1,2\}, X_0 = 1) = \frac{P(X_2 = 1, X_1 \in \{1,2\}, X_0 = 1)}{P(X_1 \in \{1,2\}, X_0 = 1)} = \frac{.1280}{.2} = .64.$$
However,
$$P(X_2 = 1 \mid X_1 \in \{1,2\}) = \frac{P(X_2 = 1, X_1 \in \{1,2\})}{P(X_1 \in \{1,2\})} = \frac{.4800}{1} = .4800.$$
(b). True. We have
$$P(X_n = j_0 \mid X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)$$
$$= \frac{P(X_n = j_0, X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)}{P(X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)}$$
$$= \frac{P(X_{n+2} = j_2, ..., X_{n+k} = j_k \mid X_n = j_0, X_{n+1} = j_1)\,P(X_n = j_0, X_{n+1} = j_1)}{P(X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)}$$
$$= \frac{P(X_{n+2} = j_2, ..., X_{n+k} = j_k \mid X_{n+1} = j_1)\,P(X_n = j_0, X_{n+1} = j_1)}{P(X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)}$$
$$= \frac{P(X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)\,P(X_n = j_0, X_{n+1} = j_1)/P(X_{n+1} = j_1)}{P(X_{n+1} = j_1, X_{n+2} = j_2, ..., X_{n+k} = j_k)}$$
$$= P(X_n = j_0, X_{n+1} = j_1)/P(X_{n+1} = j_1)$$
$$= P(X_n = j_0 \mid X_{n+1} = j_1).$$
(c). False. Time shifting is allowed only in conditional probabilities, not in joint probabilities. Consider the special case of $k = 0$. Then the equation reduces to $P(X_n = j_0) = P(X_0 = j_0)$. This is clearly not valid in general. For the DTMC in part (a), for example, $P(X_0 = 1) = .2$, but $P(X_1 = 1) = .76$.

2.4. (a). False. (True only if $k = 0$.) The given distribution and the transition probability matrix completely describe $\{X_n, n \ge k\}$; they do not determine the distribution of $X_{k-1}$, for example.

(b). False. (True only if $f$ is a one-to-one function, in which case $f(X_n)$ is a relabeled version of $X_n$.) As a counterexample, consider the DTMC in part (a) of Conceptual Exercise 2.3. Let $f(1) = f(2) = 1$, $f(3) = 2$. Then $Y_n = 1$ if $X_n \in \{1,2\}$, and $Y_n = 2$ if $X_n = 3$. The numerical calculations there show that $\{Y_n, n \ge 0\}$ is not a DTMC.

2.5. $\{(X_n, Y_n, Z_n), n \ge 0\}$ is a DTMC. Let $f(i, k, 0) = i$,
f (i, k, 1) = k.
Then $W_n = f(X_n, Y_n, Z_n)$. Thus $\{W_n, n \ge 0\}$ will be a DTMC if and only if the distribution of $(X_{n+1}, Y_{n+1}, Z_{n+1})$ given $(X_n = i, Y_n = k, Z_n = 0)$ depends only on $i$, and that given $(X_n = i, Y_n = k, Z_n = 1)$ depends only on $k$. This won't be the case in general. Hence $\{W_n, n \ge 0\}$ is not a DTMC.

2.6. Let
$$\alpha_k = P(Y_n = k), \quad k \in \{0, 1, 2, ...\}.$$
Let $X_n$ be the value of the $n$th record. We have
$$P(X_{n+1} = j \mid X_n = i, X_{n-1}, ..., X_0) = \frac{\alpha_j}{1 - f_i}, \quad j > i,$$
where $f_i = P(Y_n \le i)$. Hence $\{X_n, n \ge 0\}$ is a DTMC.

2.7. Let $N_i = \min\{n \ge 0 : X_n \ne i\}$.
Then, for $r \ge 1$,
$$P(N_i = r \mid X_0 = i) = P(X_1 = i, ..., X_{r-1} = i, X_r \ne i \mid X_0 = i) = (p_{i,i})^{r-1}(1 - p_{i,i}).$$
Thus the sojourn time in state $i$ is $G(1 - p_{i,i})$, i.e., geometric with parameter $1 - p_{i,i}$.

2.8. By its definition, the $r$th visit of the DTMC $\{X_n, n \ge 0\}$ to the set $A$ takes place at time $N_r$. (The zeroth visit is at time 0.) The actual state visited at this $r$th visit is $Y_r$. Thus the state space of $\{Y_r, r \ge 0\}$ is $A$. Then
$$P(Y_{r+1} = j \mid Y_r = i, Y_{r-1}, ..., Y_0) = P(X_{N_{r+1}} = j \mid X_{N_r} = i, X_{N_{r-1}}, ..., X_{N_0}) = P(X_{N_{r+1}} = j \mid X_{N_r} = i).$$
Hence $\{Y_r, r \ge 0\}$ is a DTMC.

2.9. Let $a_i = P(X_0 = i)$. Then
$$P(X_1 = j) = \sum_{i \in S} P(X_1 = j \mid X_0 = i)\,a_i = p \sum_{i \in S} a_i = p.$$
Suppose $P(X_k = j) = p$ for some $k \ge 1$. Then
$$P(X_{k+1} = j) = \sum_{i \in S} P(X_{k+1} = j \mid X_k = i)\,P(X_k = i) = p \sum_{i \in S} P(X_k = i) = p.$$
Thus the result follows by induction.

2.10. Define
$$Y_n = \begin{cases} (X_n, 1) & \text{if } n \text{ is odd,} \\ (X_n, 0) & \text{if } n \text{ is even.} \end{cases}$$
Then
$$P(Y_{n+1} = (j,1) \mid Y_n = (i,0), Y_{n-1}, ..., Y_0) = P(X_{n+1} = j \mid X_n = i,\ n \text{ even}) = a_{i,j},$$
and
$$P(Y_{n+1} = (j,0) \mid Y_n = (i,1), Y_{n-1}, ..., Y_0) = P(X_{n+1} = j \mid X_n = i,\ n \text{ odd}) = b_{i,j}.$$
Thus $\{Y_n, n \ge 0\}$ is a DTMC with state space $S \times \{0,1\}$ and transition probability matrix
$$P = \begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}.$$

2.11. The solution of this problem is from Ross (Stochastic Processes, Wiley, 1983), Chapter 4, Section 1. Suppose $X_0 = 0$. We shall first show that
$$P(X_n = i \mid |X_n| = i, |X_{n-1}|, ..., |X_0|) = \frac{p^i}{p^i + q^i}.$$
To prove this let $T = \max\{k : 0 \le k \le n, X_k = 0\}$.
Then, since $X_T = 0$, we have
$$P(X_n = i \mid |X_n| = i, |X_{n-1}|, ..., |X_0|) = P(X_n = i \mid |X_n| = i, |X_{n-1}|, ..., |X_{T+1}|, X_T = 0).$$
From the definition of $T$, it follows that the event
$$E = \{|X_n| = i, |X_{n-1}| = i_{n-1}, ..., |X_{T+1}| = i_{T+1}, X_T = 0\}$$
is the union of two disjoint events
$$E_+ = \{X_n = i, X_{n-1} = i_{n-1}, ..., X_{T+1} = i_{T+1}, X_T = 0\},$$
and
$$E_- = \{X_n = -i, X_{n-1} = -i_{n-1}, ..., X_{T+1} = -i_{T+1}, X_T = 0\}.$$
We have
$$P(E_+) = p^{(n-T+i)/2} q^{(n-T-i)/2}, \qquad P(E_-) = p^{(n-T-i)/2} q^{(n-T+i)/2}.$$
Hence
$$P(X_n = i \mid E) = \frac{P(E_+)}{P(E_+) + P(E_-)} = \frac{p^i}{p^i + q^i}.$$
Thus
$$P(|X_{n+1}| = i+1 \mid |X_n| = i, |X_{n-1}|, ..., |X_0|) = P(X_{n+1} = i+1 \mid X_n = i)\frac{p^i}{p^i + q^i} + P(X_{n+1} = -(i+1) \mid X_n = -i)\frac{q^i}{p^i + q^i} = \frac{p^{i+1} + q^{i+1}}{p^i + q^i}.$$
Thus $\{|X_n|, n \ge 0\}$ is a random walk on $\{0, 1, 2, ...\}$ with $p_{0,1} = 1$, and, for $i \ge 1$,
$$p_{i,i+1} = \frac{p^{i+1} + q^{i+1}}{p^i + q^i} = 1 - p_{i,i-1}.$$
2.12. A given partition $\{A_r\}$ of $S$ is called lumpable if, for all $A_r$ and $A_s$ in the partition,
$$\sum_{j \in A_s} p_{i,j} = \alpha_{r,s}, \quad \text{for all } i \in A_r.$$
Now define $A_i = \{j \in S : f(j) = i\}$. $\{Y_n = f(X_n), n \ge 0\}$ is a DTMC if the partition $\{A_r\}$ is lumpable. To prove sufficiency, suppose $\{A_r\}$ is lumpable. Then
$$P(Y_{n+1} = s \mid Y_n = r, Y_{n-1}, ..., Y_0) = P(X_{n+1} \in A_s \mid X_n \in A_r, Y_{n-1}, ..., Y_0) = \alpha_{r,s}.$$
Necessity follows in a similar fashion.
CHAPTER 3
Discrete-Time Markov Chains: First Passage Times
Computational Exercises

3.1. Let
$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.1 & 0.3 & 0.6 & 0 \\ 0 & 0.4 & 0.2 & 0.4 \\ 0.7 & 0 & 0.1 & 0.2 \end{bmatrix}.$$
(a) From Theorem 3.1, we get $u_i(n) = P(T \le n \mid X_0 = i) = [P^n]_{i,0}$. Hence
$$P(T \ge 3 \mid X_0 = 1) = 1 - u_1(2) = 1 - [P^2]_{1,0} = 1 - .13 = .87.$$
(b) From Theorem 3.3 we get
$$m_1 = 1 + .3m_1 + .6m_2,$$
$$m_2 = 1 + .4m_1 + .2m_2 + .4m_3,$$
$$m_3 = 1 + .1m_2 + .2m_3.$$
Solving, we get $[m_1\ m_2\ m_3] = [5.7895\ 5.0877\ 1.8860]$. Hence $E(T \mid X_0 = 1) = m_1 = 5.7895$.
(c) Using Theorem 3.5, we get
$$m_1(2) = 2(m_1 - 1) + .3m_1(2) + .6m_2(2),$$
$$m_2(2) = 2(m_2 - 1) + .4m_1(2) + .2m_2(2) + .4m_3(2),$$
$$m_3(2) = 2(m_3 - 1) + .1m_2(2) + .2m_3(2).$$
Solving we get: $[m_1(2)\ m_2(2)\ m_3(2)] = [44.2844\ 35.7002\ 6.6774]$. Hence,
$$var(T \mid X_0 = 1) = m_1(2) + m_1 - m_1^2 = 16.5559.$$
3.2. Let
$$P = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.2 & 0 & 0.8 & 0 \\ 0 & 0.8 & 0 & 0.2 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$
(a) From Theorem 3.1, we get $u_i(n) = P(T \le n \mid X_0 = i) = [P^n]_{i,0}$. Hence
$$P(T \ge 3 \mid X_0 = 1) = 1 - u_1(2) = 1 - [P^2]_{1,0} = 1 - .2 = .8.$$
(b) From Theorem 3.3 we get
$$m_1 = 1 + .8m_2,$$
$$m_2 = 1 + .8m_1 + .2m_3,$$
$$m_3 = 1 + m_2.$$
Solving, we get $[m_1\ m_2\ m_3] = [11\ 12.5\ 13.5]$. Hence $E(T \mid X_0 = 1) = m_1 = 11$.
(c) Using Theorem 3.5, we get
$$m_1(2) = 2(m_1 - 1) + .8m_2(2),$$
$$m_2(2) = 2(m_2 - 1) + .8m_1(2) + .2m_3(2),$$
$$m_3(2) = 2(m_3 - 1) + m_2(2).$$
Solving we get: $[m_1(2)\ m_2(2)\ m_3(2)] = [240\ 275\ 300]$. Hence,
$$var(T \mid X_0 = 1) = m_1(2) + m_1 - m_1^2 = 130.$$
3.3. Let $X_n$ be the position (i.e., the cell number) of the player after the $n$th toss ($X_0 = 1$). The player cannot be on squares 3, 10, 11, 15, and clearly the next position depends only on the current position and the outcome of the next toss. Since the tosses are iid random variables, $\{X_n, n \ge 0\}$ is a DTMC on state space $S = \{1, 2, 4, 5, 6, 7, 8, 9, 12, 13, 14, 16\}$. The transition matrix (states in that order) is
$$P = \frac{1}{3}\begin{bmatrix}
0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3
\end{bmatrix}.$$
Theorem 3.3 yields:
$$m_1 = 1 + \frac{1}{3}(m_2 + m_4 + m_5),$$
$$m_2 = 1 + \frac{1}{3}(m_4 + 2m_5),$$
$$\vdots$$
$$m_{14} = 1 + \frac{1}{3}(m_8 + m_{14}).$$
The expected number of tosses needed to finish the game is given by $m_1 = 14.5771$.
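First-step systems like the ones in Exercises 3.1–3.3 are easy to double-check numerically. A Python sketch for the two smaller linear systems of Exercise 3.2 (numpy assumed):

```python
import numpy as np

# Exercise 3.2(b): m = 1 + Q m, where Q is P restricted to the transient states 1,2,3.
Q = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.2],
              [0.0, 1.0, 0.0]])
I = np.eye(3)
m = np.linalg.solve(I - Q, np.ones(3))
assert np.allclose(m, [11.0, 12.5, 13.5])

# Exercise 3.2(c): m(2) = 2(m - 1) + Q m(2); then var(T|X0=1) = m1(2) + m1 - m1^2.
m2 = np.linalg.solve(I - Q, 2 * (m - 1))
assert np.allclose(m2, [240.0, 275.0, 300.0])
assert abs(m2[0] + m[0] - m[0]**2 - 130.0) < 1e-9
print("Exercise 3.2 answers verified")
```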
3.4. Let $T_i$ be the number of tosses needed by player $i$ ($i = 1, 2$) to complete the game. Then $T_1$ and $T_2$ are iid random variables whose cdf can be computed using the method given in the solution to Computational Exercise 3.1(a). We get:
$$P(T_i \le n) = [P^n]_{1,16}, \quad n \ge 0.$$
Now let $T$ be the time when the game ends (i.e., when one of the two players wins). Thus $T = \min(T_1, T_2)$. Hence $P(T > n) = P(T_1 > n)P(T_2 > n)$, and
$$E(T) = \sum_{n=0}^{\infty} P(T > n).$$
Computing these quantities, we get $E(T) = 9.2869$.

3.5. Let $c(i)$ be the expected number of times the ladder from 3 to 5 is used in the next step if the player is in cell $i \in S$. We get
$$c = \frac{1}{3}[1\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0].$$
Let $l(i)$ be the expected number of times the ladder 3-5 is used by the player during a game starting from cell $i \in S$, and let $l = [l(i)]$, $i \in S$. Then $l(i)$ is the expected total cost incurred until absorption in state 16 starting from state $i$ and using the cost vector $c$. Using the results of Conceptual Exercise 6 (with $\alpha = 1$), or using first step analysis directly, we get (with $\tilde P$ being the submatrix of $P$ obtained by deleting the last row and column)
$$l = [I - \tilde P]^{-1} c.$$
Solving numerically, we get the desired answer as $l(1) = 2.6241$.

3.6. The matlab program is given below. The first 5 statements are input data: nump, K, ladders, chutes and step, as defined in the program. The outputs are cdist1, cdistk, e1 and ek as defined in the program.

%Chutes and Ladders.
nump=2; %number of players.
K=4;    % the game board is K by K.
ladders = [3 5
10 13]; %the i th ladder goes from cell ladders(i,1) to ladders(i,2).
chutes = [11 2
15 8]; %the i th chute goes from cell chutes(i,2) to chutes(i,1).
step = [1 1 1]/3; %step(i) is the probability that a single step is of size i.
P=zeros(K*K,K*K);
[ns ms]=size(step);
[nl ml]=size(ladders);
[nc mc]=size(chutes);
for m=1:ms
  P=P+diag(step(m)*ones(K*K-m,1),m);
end;
for m=1:nl
  P(:,ladders(m,2))=P(:,ladders(m,2))+P(:,ladders(m,1));
  P(:,ladders(m,1))=zeros(K*K,1);
  P(ladders(m,1),:)=zeros(1,K*K);
  P(ladders(m,1),ladders(m,2))=1;
end;
for m=1:nc
  P(:,chutes(m,2))=P(:,chutes(m,2))+P(:,chutes(m,1));
  P(:,chutes(m,1))=zeros(K*K,1);
  P(chutes(m,1),:)=zeros(1,K*K);
  P(chutes(m,1),chutes(m,2))=1;
end;
P=P+diag(ones(1,K*K)-sum(P'),0);
a=[1:K*K];
aa=[ladders(:,1)' chutes(:,1)'];
a(aa)=[];
P=P(a,a);
[np,mp]=size(P);
cdist1=[1]; M=eye(np);
%cdist1(j) = P(game played by one player lasts > j-1)
for n=1:1000
  M=M*P;
  pr=M(1,np);
  if 1-pr > .000001
    cdist1 = [cdist1 1-pr];
  else
    n=1001;
  end
end
cdist1
cdistk=cdist1.^nump
%cdistk(j) = P(game played by k players lasts > j-1)
e1=sum(cdist1) %e1 = expected length of the game played by one player.
ek=sum(cdistk) %ek = expected length of the game played by k players.
3.7. Let mi be the expected time to reach the food starting from cell i. First step analysis yields:
$$m_1 = 1 + .5m_2 + .5m_4,$$
$$m_2 = 1 + (1/3)(m_1 + m_3 + m_5),$$
$$m_3 = 1 + .5m_2 + .5m_6,$$
$$m_4 = 1 + (1/3)(m_1 + m_5 + m_7),$$
$$m_5 = 1 + (1/4)(m_2 + m_4 + m_6 + m_8),$$
$$m_6 = 1 + (1/3)(m_3 + m_5 + m_9),$$
$$m_7 = 1 + .5m_4 + .5m_8,$$
$$m_8 = 1 + (1/3)(m_5 + m_7 + m_9),$$
$$m_9 = 0.$$
Solving, we get the desired answer to be $m_1 = 18$.

3.8. Note that the cat and the rat are interchangeable for this problem. Hence, define the states as follows:
0 = cat and mouse in the same cell,
1 = a corner cell and an adjacent side cell are occupied,
2 = a corner cell and an adjacent corner cell are occupied,
3 = a corner cell and the opposite corner cell are occupied,
4 = a corner cell and the center cell are occupied,
5 = a corner cell and an opposite side cell are occupied,
6 = a side cell and an adjacent side cell are occupied,
7 = a side cell and the opposite side cell are occupied,
8 = a side cell and the center cell are occupied.
Let mi be the expected time when the rat and the cat meet starting from state i. Following a first step analysis we get m0
$$m_0 = 0,$$
$$m_1 = 1 + (1/2)m_1 + (1/6)m_5 + (1/3)m_8,$$
$$m_2 = 1 + (1/4)m_0 + (1/2)m_6 + (1/4)m_7,$$
$$m_3 = 1 + (1/2)m_6 + (1/2)m_7,$$
$$m_4 = 1 + (1/4)m_0 + (1/2)m_6 + (1/4)m_7,$$
$$m_5 = 1 + (1/6)m_1 + (1/2)m_5 + (1/3)m_8,$$
$$m_6 = 1 + (2/9)m_0 + (2/9)m_2 + (1/9)m_3 + (4/9)m_4,$$
$$m_7 = 1 + (1/9)m_0 + (2/9)m_2 + (2/9)m_3 + (4/9)m_4,$$
$$m_8 = 1 + (1/3)m_1 + (1/3)m_6 + (1/3)m_8.$$
Solving, we get the desired expected value to be $m_3 = 6.3243$.
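A numerical check of this nine-equation system (a Python sketch; numpy assumed):

```python
import numpy as np

# Solve m_i = 1 + sum_j a_ij m_j for i = 1..8, with m_0 = 0 substituted in.
A = np.eye(8)
rhs = np.ones(8)
deps = {  # m_i = 1 + sum of coeff * m_j (indices 1..8; m_0 and m_9-type terms drop out)
    1: {1: 1/2, 5: 1/6, 8: 1/3},
    2: {6: 1/2, 7: 1/4},
    3: {6: 1/2, 7: 1/2},
    4: {6: 1/2, 7: 1/4},
    5: {1: 1/6, 5: 1/2, 8: 1/3},
    6: {2: 2/9, 3: 1/9, 4: 4/9},
    7: {2: 2/9, 3: 2/9, 4: 4/9},
    8: {1: 1/3, 6: 1/3, 8: 1/3},
}
for i, row in deps.items():
    for j, c in row.items():
        A[i-1, j-1] -= c
m = np.linalg.solve(A, rhs)
assert abs(m[2] - 6.3243) < 1e-3   # m_3, the answer reported above (exactly 234/37)
print("m_3 =", round(m[2], 4))
```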
3.9. Let $m_i = E(T \mid X_0 = i)$. Using the transition probabilities given we get
$$m_i = 1 + \frac{1}{i+2}(m_1 + m_2 + \cdots + m_{i+1}), \quad i \ge 1.$$
Rearranging, we get
$$m_2 = 2m_1 - 3, \qquad m_{i+1} = (i+2)m_i - (i+1)m_{i-1} - 1, \quad i \ge 2.$$
Now, let $u_i = m_i - m_{i-1}$, $i \ge 2$. Then the above equations can be rearranged as
$$u_2 = m_1 - 3, \qquad u_{i+1} = (i+1)u_i - 1, \quad i \ge 2.$$
Solving recursively, we get
$$u_i = \frac{1}{2}\,i!\,m_1 - i!\sum_{j=1}^{i}\frac{1}{j!}, \quad i \ge 2.$$
Finally, using $m_i - m_1 = u_2 + u_3 + \cdots + u_i$, we get
$$m_i = \frac{1}{2}\left(\sum_{k=0}^{i} k!\right)m_1 - \sum_{k=2}^{i} k!\sum_{j=1}^{k}\frac{1}{j!}, \quad i \ge 2.$$
Since $(m_i)$ is supposed to be the smallest non-negative solution, we choose
$$m_1 = \lim_{i \to \infty} \frac{\sum_{k=2}^{i} k! \sum_{j=1}^{k} \frac{1}{j!}}{\frac{1}{2}\sum_{k=0}^{i} k!}.$$
This can be evaluated to be $m_1 = 2(e - 1) = 3.4366$.
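The limit converges very quickly (the factorial weights concentrate on the last term), so it can be checked numerically. A Python sketch:

```python
import math

def m1_approx(i):
    # ratio whose limit gives m_1 in Exercise 3.9
    num = sum(math.factorial(k) * sum(1/math.factorial(j) for j in range(1, k+1))
              for k in range(2, i+1))
    den = 0.5 * sum(math.factorial(k) for k in range(0, i+1))
    return num / den

assert abs(m1_approx(25) - 2*(math.e - 1)) < 1e-9
print(round(m1_approx(25), 4))   # 3.4366
```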
N −1 X j=1
1 . pj
Due to symmetry, we must have m1 = mN −1 . Also, we can write 1 1 1 =N + . pj j N −j
DTMCS: FIRST PASSAGE TIMES
43
Thus we can obtain m1 = N
N −1 X j=1
1 . j
Now, adding (1) up to i, we get xi+1 = mi+1 − mi = m1 −
i X 1 , 1 ≤ i ≤ N − 1. p j=1 j
Summing the above equations we get mi+1 = (i + 1)m1 −
i X k i X X 1 i−k+1 = (i + 1)m1 − , 1 ≤ i ≤ N − 1. p pj j j=1
k=1
k=1
3.11. Let X0 = 0, and for n ≥ 1 define Xn = i if the nth toss results in run of i heads in a row, and Xn = −i if it results in a run of i tails in a row. Make states r and −m absorbing. Then {Xn , n ≥ 0} is a DTMC on statespace {−m, −m + 1, ..., −1, 0, 1, ..., r − 1, r}, with the nonzero transition probabilities as given below: p0,1 = p, p0,−1 = q = 1 − p, pi,i+1 = p, pi,−1 = q, 1 ≤ i ≤ r − 1, pi,i−1 = q, pi,1 = p, −m + 1 ≤ i ≤ −1, pr,r = 1, p−m,−m = 1. Define T to be the first passage time into state r, and let ui = P (T < ∞X0 = i). Then the desired answer is given by u0 . Equations 4.28 yield: u0 = pu1 + qu−1 , ui = pui+1 + qu−1 , 1 ≤ i ≤ r − 1,
(1)
ui = qui−1 + pu1 , −m + 1 ≤ i ≤ −1,
(2)
u−m = 0, ur = 1. Solving equation (1) recursively we get u1 = pi−1 ui + (1 − pi−1 )u−1 , 1 ≤ i ≤ r. Using ur = 1 we get u1 = pr−1 + (1 − pr−1 )u−1 .
(3)
Similarly, solving equation (2) recursively and using u−m = 0, we get u−1 = (1 − q m−1 )u1 .
(4)
Solving equations (3) and (4) simultaneously, we get u1 =
pr−1 1 − (1 −
pr−1 )(1
−
q m−1 )
, u−1 =
pr−1 (1 − q m−1 ) . 1 − (1 − pr−1 )(1 − q m−1 )
44
DTMCS: FIRST PASSAGE TIMES
Finally, the desired answer is u0 = pu1 + qu−1 =
pr−1 (1 − q m ) . 1 − (1 − pr−1 )(1 − q m−1 )
3.12. Let ui be the probability that eventually all genes become recessive. We have u0 = 1, uN = 0, and ui = pi ui+1 + qi ui−1 + (1 − pi − qi )ui , 1 ≤ i ≤ N − 1. This yields (pi + qi )ui = pi ui+1 + qi ui−1 , 1 ≤ i ≤ N − 1. Using xi = ui − ui−1 and pi = qi , we get xi − xi+1 = 0, 1 ≤ i ≤ N − 1. − − − −(1) Thus xi = x1 = u1 − 1, 1 ≤ i ≤ N. Summing, we get x1 + · · · xN = −1 = N u1 − N. Thus u1 = (N − 1)/N. Backsubstitution yields N −i , 0 ≤ i ≤ N. N Let vi be the probability that eventually all genes become dominant. Then we have ui =
vi = 1 − ui =
i , 0 ≤ i ≤ N. N
3.13. Define Y0 = 7 and Yn = Xn (mod7), n ≥ 1. Then {Yn , n ≥ 0} is a DTMC on statespace {0, 1, ..., 7} with transition probability matrix 0 .25 .25 .25 .25 0 0 0 0 0 .25 .25 .25 .25 0 0 0 0 0 .25 .25 .25 .25 0 .25 0 0 0 .25 .25 .25 0 . P = .25 .25 0 0 0 .25 .25 0 .25 .25 .25 0 0 0 .25 0 .25 .25 .25 .25 0 0 0 0 0 .25 .25 .25 .25 0 0 0 Let mi be the expected time to reach 0 starting from state i in this DTMC. Then the required answer is given by m7 = 7.. Solving Equations 4.72 we get m7 = 7. 3.14. Let Un ≤ n be the number of upticks up to time n, and Dn = n − Un be the number of downticks up to time n. The stock value at time n is Xn = 1.2Un .9Dn .
DTMCS: FIRST PASSAGE TIMES
45
Thus the stock sold once Un ≥ kn = d(log(2)−n log(1−d))/log((1+u)/(1−d))e. Thus the expected time of sale is the expected time when the kn th uptick is observed at time n for the first time. Let m(i, n) be the this expected time assuming that we have observed i upticks up to time n. We have m(i, n) = 1 + pm(i + 1, n + 1) + qm(i, n + 1), 0 ≤ i < kn , m(i, n) = 0, for i ≥ kn . The answer is given by m(0, 0). 3.15. Consider a DTMC {Xn , n ≥ 0} on state space {0, 1 = H, 2 = HH, 3 = HHT, 4 = HHT T } with the following transition probability matrix: P =
q q 0 0 0
p 0 0 0 0 p 0 0 0 p q 0 . p 0 0 q 0 0 0 1
Let mi be the expected time to reach state 4 from state i. The desired answer is given by m0 . Theorem 3.3 yields m0
=
1 + qm0 + pm1
m1
=
1 + qm0 + pm2
m2
=
1 + pm2 + qm3
m3
=
1 + pm1 .
Solving, we get m0 = 1/p2 q 2 . 3.16. Follow the notation of the solution to Computational Exercise 3.14. Now we sell the stock as soon as Thus the stock sold once Un ≥ kn = d(log(2) − n log(1 − d))/log((1+u)/(1−d))e or Un ≤ jn = d(log(.7)−n log(1−d))/log((1+u)/(1− d))e. The equations now become m(i, n) = 1 + pm(i + 1, n + 1) + qm(i, n + 1), jn < i < kn , m(i, n) = 0, for i ≥ kn ori ≤ jn . The answer is given by m(0, 0). 3.17. Theorem 3.3 yields: m0 = mN = 0; mi = 1 + pmi+1 + qmi−1 , 1 ≤ i ≤ N − 1. Using p + q = 1, we can rewrite the above equation as q(mi − m1−1 ) = 1 + p(mi+1 − mi ), 1 ≤ i ≤ N − 1.
46
DTMCS: FIRST PASSAGE TIMES
Let wi = mi − mi−1 , 1 ≤ i ≤ N − 1. Then we have wi+1 = (q/p)wi − (1/p), 1 ≤ i ≤ N − 1. Recursive substitution yields wi = (q/p)i−1 w1 − (1/p)
1 − (q/p)i−1 , 1 ≤ i ≤ N. 1 − (q/p)
Now w1 + w2 + ... + wN = mN − m0 = 0. Hence, summing the above equation for 1 = 1 to N , we get 0=
N X
i−1
(q/p)
w1 − (1/p)
i=1
N X 1 − (q/p)i−1 i=1
1 − (q/p)
.
Hence, N 1 − q/p 1 . − · q − p q − p 1 − (q/p)N Then routine algebra yields the desired result by using w1 = m1 − m0 = m1 =
mi =
i X
wj .
j=1
3.18. Construct a DTMC with state space {A, C, G, T, AC, ACT } with the following transition probability matrix: 0.180 0.000 0.426 0.120 0.274 0.000 0.170 0.368 0.274 0.188 0.000 0.000 0.161 0.339 0.375 0.135 0.000 0.000 P = 0.079 0.355 0.384 0.182 0.000 0.000 . 0.170 0.368 0.274 0.000 0.000 0.188 0.000 0.000 0.000 0.000 0.000 1.000 Let T be the first passage time into the state ACT . We are asked to compute mA = E(T X0 = A). Using Theorem 3.3 we get mA = 211.9182. 3.19. Equation 3.28 yields: φi (z) = pzφi+1 (z) + qzφi−1 (z), i ≥ 1, with φ0 (z) = 1. For a constant z, this is a difference equation with constant coefficients. Hence the solution is of the type φi (z) = φ(z)i , where φ(z) is some constant (a function of z). Substituting in the above equation and canceling the common factor φ(z)i−1 we get φ(z) = pzφ(z)2 + qz. There are two solutions to the above equation: p 1 ± 1 − 4pqz 2 φ(z) = . 2pz
DTMCS: FIRST PASSAGE TIMES
47
We choose the solution with the  sign to keep φ(z) ≤ 1. The probabilistic interpretation: Let Ti,i−1 be the time to go from state i to 0. Since the random walk (starting in state i) must visit all the intermediate states (i, i − 1, ..., 1) before visiting state 0, we have Ti,0 = Ti,i−1 + Ti−1,i−2 + ... + T1,0 . However, due to Markov property and space homogeneity, we see that the random variables {Ti,i−1 , i ≥ 1} are iid, with common generating function, say, φ(z). Hence, E(z T X0 = i) = E(z Ti,0 ) = E(z Ti,i−1 +Ti−1,i−2 +...+T1,0 ) = φ(z)i . The exact form of φ(z) is evaluated above. 3.20. Construct a DTMC with state space {A, C, G, T, CA, CAG} with the following transition probability matrix: 0.180 0.274 0.426 0.120 0.000 0.000 0.000 0.368 0.274 0.188 0.170 0.000 0.161 0.339 0.375 0.135 0.000 0.000 . P = 0.079 0.355 0.384 0.182 0.000 0.000 0.180 0.274 0.000 0.120 0.000 0.426 0.000 0.000 0.000 0.000 0.000 1.000 Suppose the first three bases are CAG. Let T be the first passage time into the state CAG. Let mG = E(T X0 = G). The desired answer is then mG − 3. Using Theorem 3.3 we get mG = 47.0314. Hence the desired answer is 44.0314. 3.21. Following the same probabilistic argument as in the previous problem (it holds because of the special structure of the DTMC) we get E(z T X0 = i) = φ(z)i , i ≥ 0. Then, Theorem 3.4 yields φ(z) = zp1,0 φ(z)0 +
∞ X j=1
p1,j φ(z)j =
∞ X
aj φ(j)j .
j=0
3.22. Construct a DTMC with state space {A, C, G, T, AC, ACT, GC, GCT } with the following transition probability matrix: 0.180 0.000 0.426 0.120 0.274 0.000 0.000 0.000 0.170 0.368 0.274 0.188 0.000 0.000 0.000 0.000 0.161 0.000 0.375 0.135 0.000 0.000 0.339 0.000 0.079 0.355 0.384 0.182 0.000 0.000 0.000 0.000 . P = 0.170 0.368 0.274 0.000 0.000 0.188 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.170 0.368 0.274 0.000 0.000 0.000 0.000 0.188 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000
48
DTMCS: FIRST PASSAGE TIMES
Let T be the first passage time to state ACT . We are asked to compute uA = P(T < ∞X0 = A). Using Theorem 3.2 we get uA = .6082. 3.23. Let bi be the expected bets that the gambler A wins starting from state i. A direct first step analysis yields: w0 = wN = 0; wi = p + pwi+1 + qwi−1 , 1 ≤ i ≤ N − 1. Now let mi = wi /p. Then the above equations can be rewritten as m0 = mN = 0; mi = 1 + pmi+1 + qmi−1 , 1 ≤ i ≤ N − 1. But these are the equations satisfied by the mi , the expected time until the game ends starting from state i, and whose solution is given in Computational problem 3.17 above. Hence we get wi = pE(T X0 = i). 3.24. We are asked to compute ui , the probability of absorption into state 0 from state i is a simple random walk with parameters r0 = 1, pi = p(1 − q), qi = q(1 − p) i ≥ 1. Using the results of Example 3.9 we get ( 1 if q(1 − p) ≥ p(1 − q) i ui = q(1−p) if q(1 − p) < p(1 − q). p(1−q) 3.25. Let Xn = (k, i) k = 1, 2, 0 ≤ i ≤ r if the nth trial uses treatment k, and as a result we have a run i consecutive successes on treatment k (i = 0 if the nth trial produced a failure). Play the winner rule implies that {Xn , n ≥ 0} is a DTMC with the following transition probabilities: p(1,0),(2,0) = q2 = 1 − p2 , p(2,0),(1,0) = q1 = 1 − p1 , p(1,0),(2,1) = p2 , p(2,0),(1,1) = p1 , p(k,i),(k,i+1) = pk , p(k,i),(k,0) = qk , k = 1, 2, 0 ≤ i ≤ r − 1, p(k,r),(k,r) = 1. The states (k, r) are absorbing, since the experiment terminates when the DTMC reaches these states. Let T = min{n ≥ 0 : Xn = (1, r)}, and u(k,i) = P (T < ∞X0 = (k, i)). The probability that the drug 1 (the better drug) gets selected is given by .5(u(1,0) + u(2,0) ). The relevant equations are u(1,r) = 1, u(2,r) = 0. u(k,i) = pk u(k,i+1) + qk u(k,0) , k = 1, 2, 1 ≤ i ≤ r − 1, u(1,0) = p2 u(2,1) + q2 u(2,0) , u(2,0) = p1 u(1,1) + q1 u(1,0) .
DTMCS: FIRST PASSAGE TIMES
49
Solving recursively for u(k,1) in terms of u(k,i) we get i−1 u(k,1) = (1 − pi−1 1 )uk,0 + p1 u(k,i) , k = 1, 2, 1 ≤ i ≤ r.
Using the boundary condition for u(1,r) = 1, u(2,r) = 0, we get r−1 r−1 u(1,1) = (1 − pr−1 1 )u1,0 + p1 , u(2,1) = (1 − p2 )u2,0 .
Substituting in the equations for u(k,0 and solving we get u(1,0) =
pr1 (1 − pr2 )pr1 , u = . (2,0) 1 − (1 − pr1 )(1 − pr2 ) 1 − (1 − pr1 )(1 − pr2 )
Hence the probability of correct selection is given by .5(u(1,0) + u(2,0) ) = .5
(2 − pr2 )pr1 . 1 − (1 − pr1 )(1 − pr2 )
3.26. Let {Xn , n ≥ 0} be as defined in the previous problem. Let m(k,i) = E(T X0 = (k, i)) where T = min{n ≥ 0 : Xn = (1, r) or (2, r)}, . The desired answer is .5(m(1,0) + m(2,0) ). The equations for m(k,i) are m(1,r) = 0, m(2,r) = 0. m(k,i) = 1 + pk m(k,i+1) + qk m(k,0) , k = 1, 2, 1 ≤ i ≤ r − 1, m(1,0) = 1 + p2 m(2,1) + q2 m(2,0) , m(2,0) = p1 m(1,1) + q1 m(1,0) . 3.27. This is a branching process with offspring distribution p0 = .2andp20 = .8. Thus m = .8 ∗ 20 = 16. Hence the extinction probability is the unique solution in (0,1) to the equation (from Equation 3.19): u = .2 + .8u20 . Using the recursion u0 = 0, un = .2 + .8u20 n−1 , n ≥ 1, we get (up to 4 decimals) u = lim un = .2000 n→∞
3.28. Using the same argument as in example 3.10, we see that φi (z) = E(z N X0 = i) = φ(z)i , where φ(z) = E(z N X0 = 1). Using the first step analysis φ(z) = E(z N X0 = 1) =
∞ X i=0
E(z N X0 = 1, X1 = i) = z
∞ X i=0
E(z N X0 = i) = z
∞ X i=0
φ(z)i = zψ(φ(z)).
50
DTMCS: FIRST PASSAGE TIMES
Taking derivatives, we get φ0 (z) = ψ(φ(z)) + zψ 0 (φ(z))φ0 (z). Now set z = 1. Using E(N ) = φ0(1) , m = ψ 0(1) , φ(1) = 1, ψ(1) = 1, we get E(N ) = 1 + mE(N ). This yields E(N ) = 1/(1 − m) as desired. 3.29. See solution to Modeling Exercise 2.17. The first step analysis yields: M X 1 i+1 mi + mj , 0 ≤ i ≤ M − 1 mi = 1 + M +1 M +1 j=i+1
with mM = 0. This can be rearranged to yield mi =
M −1 X M +1 1 + mj , 0 ≤ i ≤ M − 1. M −i M − i j=i+1
One can show that the solution is mi = M + 1 for all 0 ≤ i ≤ M − 1.. Hence the desired answer is m0 = M + 1. 3.30. {Xn , n ≥ 0} is a simple random walk on S = {0, ±1, · · · , ±k} with absorbing barriers at ±k and pi,i+1 = p = p1 (1 − p2 ), pi,i−1 = q = p2 (1 − p1 ), −k < i < k. Let ui be the probability that this random walk gets absorbed in the state k, starting from state i ∈ S. The desired answer is given by u0 . We have already solved a similar problem in Example 3.7. Using those results, we get ( 1−(q/p)i+k if q 6= p 1−(q/p)2k ui = (3.1) i+k if q = p 2k Thus the desired answer is u0 = pk /(pk + q k ). 3.31. Use the notation of Computational Exercise 3.30. Let mi be the expected time to reach state k or −k starting from state i. The desired answer is given by m0 . Using the results of Computational Exercise 3.17 we get mi =
i+k 2k 1 − (q/p)i+k − · . q − p q − p 1 − (q/p)2k
Hence the desired answer is given by m0 =
k 2k pk k q k − pk − · k = · k . k q−p q−p p +q q − p p + qk
3.32. Use the notation from the solution to Modeling Exercise 2.32. Let mi be the
DTMCS: FIRST PASSAGE TIMES
51
expected time until Xn reaches 0 or N = B1 + B2 . Using the results of Computational Exercise cmp4:16 we get mi =
i N 1 − (q/p)i , − · q − p q − p 1 − (q/p)N
where p = α1 (1 − α2 ) and q = α2 (1 − α1 ). The desired answer is given by mB2 . 3.33. Use the notation from the solution to Modeling Exercise 2.12. Let mij be the expected time until the boy and the girl meet if currently they are in bars i and j respectively. First step analysis yields: m12 = 1 + adm12 + (1 − a)(1 − d)m21 , m21 = 1 + bcm21 + (1 − b)(1 − c)m12 . If a = d and b = c, the solution is symmetric and is given by m12 = m21 = 1/(a + b − 2ab). 3.34. When αi = for i ≥ 4, Equation 3.19 reduces to u = α0 + α1 u + α1 u2 + α3 u3 . P3
Since 0 αj = 1, we see that u = 1 is a solution. Hence we can write the above equation as (u − 1)(α3 u2 + (α2 + α3 )u − α0 ) = 0. The quadratic can then be solved to get two solutions, exactly one of which will be in (0,1). 3.35. Let Yn be half the distance (in clockwise direction) between the cat and mouse. Thus Y0 = 1. Now, in each step the distance increases by 1 with probability pq (the cat moves counterclockwise and the mouse moves clockwise), or decrease with probability qp (the cat moves clockwise and the mouse moves counterclockwise), or remains the same with probability p2 + q 2 (both move clockwise or both move counterclockwise). The cat gets a nice meal when Yn = 0 or Yn = N . Thus {Yn , n ≥ 0} is a simple random walk with state space {0, 1, · · · , N }. The states 0 and N are absorbing. The other parameters are pi = qi = pq, ri = p2 + q 2 , 1 ≤ i ≤ N − 1. Let T = min{n ≥ 0 : Yn ∈ {0, N }} and m(i) = E(T Y0 = i). Then the answer is given by m(1). Note that the result of Computational Exercise 3.17 for the case p = q = 1/2 reduces to m(i) = i(N − i). In our case we get m(i) = i(N − i)/(1 − p2 − q 2 ).
Hence the answer is given by
m(1) = (N − 1)/(1 − p² − q²) = (N − 1)/(2pq).
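The formula m(1) = (N − 1)/(2pq) can be checked numerically by solving the first-step equations (I − Q)m = 1 directly; a minimal sketch with illustrative values of p, q, and N (any p + q = 1 and N work):

```python
import numpy as np

# Check m(i) = i(N-i)/(2pq) for the cat-and-mouse random walk of 3.35.
# States 1..N-1 are transient; 0 and N are absorbing.
p, q = 0.3, 0.7      # illustrative values with p + q = 1
N = 10

# Restriction Q of the transition matrix to the transient states 1..N-1:
# up/down with probability pq each, stay with probability p^2 + q^2.
n = N - 1
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = p**2 + q**2
    if i + 1 < n:
        Q[i, i + 1] = p * q
    if i - 1 >= 0:
        Q[i, i - 1] = p * q

# First-step analysis: m = 1 + Q m, i.e. (I - Q) m = 1.
m = np.linalg.solve(np.eye(n) - Q, np.ones(n))
print(m[0], (N - 1) / (2 * p * q))   # the two numbers agree
```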
3.36. 1. Let a_i be the probability that a student who sends out emails sends i of them. Then
µ = Σ_{i=0}^∞ i a_i.
Let α_i be the probability that a person in generation n ≥ 1 sends out i emails. Then (with d = 1 − a − b − c)
α_0 = (a + b) + (c + d)a_0, α_i = (c + d)a_i, i ≥ 1.
Thus the expected number of emails sent out by a typical person in generation n ≥ 1 is
Σ_{i=0}^∞ i α_i = (c + d)µ.
There are five individuals in the first generation. Let u be the probability that the branching process started by a single individual in this generation becomes extinct. It solves Equation 3.18 in the text:
u = ψ(u) = Σ_{i=0}^∞ α_i u^i.
We know that this equation has a solution u ∈ (0, 1) if (c + d)µ > 1, and the only solution is u = 1 if (c + d)µ ≤ 1. Now, the chain mail becomes extinct if all five of the branching processes become extinct. Hence the probability of the chain mail going extinct is u^5.

2. Assume (c + d)µ < 1. Let N_i be the number of emails sent out by recipient i in generation 1 and all his descendants. The {N_i, 1 ≤ i ≤ 5} are iid, with mean (say) τ. Then the total number of emails sent out is given by 5 + 5τ, where the first 5 accounts for the emails sent by the initiator of the chain mail. Consider N_1. Conditioning on the number of emails sent by the first person in the first generation, we get
τ = (c + d)µ + (c + d)µ τ,
which gives
τ = (c + d)µ/(1 − (c + d)µ) < ∞.
Thus the expected total number of students who get the email is
5 + 5(c + d)µ/(1 − (c + d)µ) = 5/(1 − (c + d)µ).
Each of these students signs the petition with probability b + d. Also, the original student who started the chain signs the petition (we can assume). Hence the expected total number of signatures on the petition is given by
1 + 5(b + d)/(1 − (c + d)µ).
If (c + d)µ = 1, the above quantity is ∞.
Conceptual Exercises

3.1. Using v_i(A) = 0 for i ∈ A we get, for i ∉ A,
v_i(A) = P(T(A) = ∞ | X_0 = i)
= Σ_{j=0}^∞ P(T(A) = ∞ | X_1 = j, X_0 = i) P(X_1 = j | X_0 = i)
= Σ_{j=0}^∞ p_{ij} P(T(A) = ∞ | X_1 = j, X_0 = i)
= Σ_{j∈A} p_{ij} P(T(A) = ∞ | X_1 = j, X_0 = i) + Σ_{j∉A} p_{ij} P(T(A) = ∞ | X_1 = j, X_0 = i)
= Σ_{j∉A} p_{ij} P(T(A) = ∞ | X_1 = j)
= Σ_{j∉A} p_{ij} P(T(A) = ∞ | X_0 = j)
= Σ_{j∉A} p_{ij} v_j(A),
where the first sum vanishes since T(A) ≤ 1 < ∞ when X_1 ∈ A.
The proof of “largest” solution follows as in the proof of Theorem 3.2.

3.2. Using m_i(A) = 0 for i ∈ A and the first-step analysis as in Theorem 3.3 yields
m_i(A) = 1 + Σ_{j∉A} p_{ij} m_j(A), i ∉ A.
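For a finite chain, this system is a plain linear solve. The sketch below uses a small hypothetical transition matrix P (not from the text) to illustrate:

```python
import numpy as np

# Illustration of 3.2: solve m_i(A) = 1 + sum_{j not in A} p_ij m_j(A)
# as the linear system (I - Q) m = 1, where Q restricts P to states outside A.
# P below is a made-up 3-state transition matrix for demonstration only.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
A = [2]              # target set
rest = [0, 1]        # states outside A

Q = P[np.ix_(rest, rest)]
m = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
print(m)             # m_0(A), m_1(A)
```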
3.3. If i > 0, T = T̃. Hence we have ṽ_i = v_i, i > 0. For i = 0 we use the first-step analysis to get
ṽ_0 = Σ_{j=1}^∞ p_{0j} v_j.

3.4. If i > 0, T = T̃. Hence we have m̃_i = m_i, i > 0. For i = 0 we use the first-step analysis to get
m̃_0 = 1 + Σ_{j=1}^∞ p_{0j} m_j.
3.5. Using the proof of Theorem 3.1,
u_{i,n} = p_{i,0} + Σ_{j=1}^∞ p_{i,j} u_{j,n−1}, i > 0, n ≥ 1.
We have, for i ≥ 1,
m_{i,n} = Σ_{k=0}^n k P(T = k | X_0 = i)
= Σ_{k=1}^n k Σ_{j=0}^∞ P(T = k | X_0 = i, X_1 = j) P(X_1 = j | X_0 = i)
= p_{i,0} + Σ_{k=2}^n k Σ_{j=1}^∞ p_{i,j} P(T = k | X_0 = i, X_1 = j)
= p_{i,0} + Σ_{k=2}^n k Σ_{j=1}^∞ p_{i,j} P(T = k − 1 | X_0 = j)
= p_{i,0} + Σ_{k=2}^n (1 + (k − 1)) Σ_{j=1}^∞ p_{i,j} P(T = k − 1 | X_0 = j)
= p_{i,0} + Σ_{j=1}^∞ p_{i,j} P(T ≤ n − 1 | X_0 = j) + Σ_{j=1}^∞ p_{i,j} Σ_{k=1}^{n−1} k P(T = k | X_0 = j)
= P(T ≤ n | X_0 = i) + Σ_{j=1}^∞ p_{i,j} m_{j,n−1}
= u_{i,n} + Σ_{j=1}^∞ p_{i,j} m_{j,n−1}.
3.6. 1. µ_k(k) = 0, and
µ_i(k) = 1 + Σ_{j≠k} p_{ij} µ_j(k), i ≠ k.
2. µ_0(0, N) = µ_0(N), µ_N(0, N) = µ_N(0), and
µ_i(0, N) = 1 + p_{i,0} µ_0(N) + p_{i,N} µ_N(0) + Σ_{j=1}^{N−1} p_{i,j} µ_j(0, N), 1 ≤ i ≤ N − 1.
3.7. Let Y_n be the subset of A visited by the X process up to time n. Then {(X_n, Y_n), n ≥ 0} is a DTMC. Assume that the bivariate process gets absorbed as soon as it visits a state with X_n = 0 or with Y_n = A. Thus the transition probabilities are:
P(X_{n+1} = j, Y_{n+1} = C | X_n = i, Y_n = B) = p_{i,j} if (j ∈ A − B and C = B ∪ {j}) or (j ∉ A − B and C = B),
and
P(X_{n+1} = i, Y_{n+1} = B | X_n = i, Y_n = B) = 1 if i = 0 or B = A.
For B ⊂ A and i ≥ 1, let u(i, B) be the probability that the process gets absorbed in a state with Y_n = A. A first-step analysis yields
u(i, B) = Σ_{j∈A−B} p_{i,j} u(j, B ∪ {j}) + Σ_{j≠0, j∉A−B} p_{i,j} u(j, B).
3.8. Let T be the first passage time to state 0, and let N_j be the number of visits to state j over {0, 1, ..., T}. (Assume j ≠ 0.) Then w_{i,j} = E(N_j | X_0 = i). We observe that, for i ≥ 1,
E(N_j | X_0 = i, X_1 = k) = 1 if i = j and k = 0,
= 0 if i ≠ j and k = 0,
= 1 + w_{k,j} if i = j and k ≥ 1,
= w_{k,j} if i ≠ j and k ≥ 1.
This can be written as
E(N_j | X_0 = i, X_1 = k) = δ_{i,j} if k = 0,
= δ_{i,j} + w_{k,j} if k ≥ 1.
Now, the first-step analysis yields
w_{i,j} = E(N_j | X_0 = i)
= Σ_{k=0}^∞ E(N_j | X_0 = i, X_1 = k) P(X_1 = k | X_0 = i)
= δ_{i,j} p_{i,0} + Σ_{k=1}^∞ (δ_{i,j} + w_{k,j}) p_{i,k}
= δ_{i,j} + Σ_{k=1}^∞ p_{i,k} w_{k,j}.
3.10. Define a DTMC {Y_n, n ≥ 0} on state space S = {i : i ≥ −k} with the following transition probabilities:
P(Y_{n+1} = j | Y_n = i) = p_{i,j} if i ≥ 0, i ≠ i_0, and j ≥ 0,
= p_{i,i_0} if i ≥ 0, i ≠ i_0, and j = −1,
= p_{i_r,i_{r+1}} if i = −r and j = −(r + 1), 1 ≤ r ≤ k − 1,
= p_{i_r,j} if i = −r and j ≠ i_{r+1}, 1 ≤ r ≤ k − 1,
= 1 if i = j = −k.
Then T = min{n ≥ 0 : Y_n = −k}. Let m_i = E(T | Y_0 = i). Then
m_i = 1 + p_{i,i_0} m_{−1} + Σ_{j≥0, j≠i_0} p_{i,j} m_j, if i ≥ 0 and i ≠ i_0,
m_{−r} = 1 + p_{i_r,i_{r+1}} m_{−(r+1)} + Σ_{j≥0, j≠i_{r+1}} p_{i_r,j} m_j, 1 ≤ r ≤ k − 1,
m_{−k} = 0. (3.2)
We use the convention m_{i_0} = m_{−1}.

3.9. Let w_i = P(X_{T−1} = j | X_0 = i). Define w_0 = 0. We have
P(X_{T−1} = j | X_0 = i, X_1 = k) = w_k if i ≠ j or k ≠ 0,
= 1 if i = j and k = 0.
Using first-step analysis, we get (for i ≥ 1)
w_i = P(X_{T−1} = j | X_0 = i)
= Σ_{k=0}^∞ P(X_{T−1} = j | X_0 = i, X_1 = k) P(X_1 = k | X_0 = i)
= δ_{i,j} p_{i,0} + Σ_{k=1}^∞ p_{i,k} w_k.
3.11. Construct a new DTMC {Y_n, n ≥ 0} on state space {0, 1, 2, ...} with transition probabilities q_{i,j} as follows: q_{0,0} = 1, q_{i,j} = p_{i,j} for i ≥ 1. Let Z_n = max_{0≤k≤n} Y_k. Then {(Y_n, Z_n), n ≥ 0} is a DTMC, and M = lim_{n→∞} Z_n = Z. Now let
u_{i,k}(j) = P(M = j | Y_n = i, Z_n = k).
(Note that the probability is independent of n.) The desired answer is then given by
P(M = j | X_0 = i) = P(Z = j | Y_0 = i, Z_0 = i) = u_{i,i}(j).
Using first-step analysis we get, for j ≥ k ≥ i,
u_{i,k}(j) = P(Z = j | Y_n = i, Z_n = k)
= Σ_{r=0}^∞ P(Z = j | Y_n = i, Y_{n+1} = r, Z_n = k) P(Y_{n+1} = r | Y_n = i)
= p_{i,0} δ_{k,j} + Σ_{r=1}^{k} p_{i,r} u_{r,k}(j) + Σ_{r=k+1}^{j} p_{i,r} u_{r,r}(j).

3.12. For i = 1, Equation 3.21 reduces to
v_2 = ((1 − α_1)/α_0) v_1.
Since α_0 + α_1 ≤ 1, this implies v_2 ≥ v_1. For i = 2 we get
v_3 = ((1 − α_1)/α_0) v_2 − (α_2/α_0) v_1 ≥ ((1 − α_1 − α_2)/α_0) v_2.
Since α_0 + α_1 + α_2 ≤ 1, this implies v_3 ≥ v_2. Using induction one can show that
v_{i+1} ≥ ((1 − Σ_{k=1}^{i} α_k)/α_0) v_i ≥ v_i.
CHAPTER 4
Discrete-Time Markov Chains: Limiting Behavior
Computational Exercises

4.1.
lim_{n→∞} P^n = lim_{n→∞} M^{(n)}/(n + 1) =
[.132 .319 .549]
[.132 .319 .549]
[.132 .319 .549].

4.2. (i). α + β = 0 ⇒ α = β = 0. lim_{n→∞} P^n does not exist.
lim_{n→∞} M^{(n)}/(n + 1) =
[1/2 1/2]
[1/2 1/2].
(ii). α + β = 2 ⇒ α = β = 1.
lim_{n→∞} P^n = lim_{n→∞} M^{(n)}/(n + 1) =
[1 0]
[0 1].
The rows are not identical.

4.3. All rows of P^n and M^{(n)}/(n + 1) converge to [1/(N + 1), 1/(N + 1), ..., 1/(N + 1)].
4.4. Let T be the first passage time to state 0, and let v_i = P(T = ∞ | X_0 = i). Then, using the first-step analysis, we get v_0 = 0 and
v_i = r_i v_i + p_i v_{i+1} + q_i v_{i−1}, i ≥ 1.
These can be rewritten as
v_i = (p_i/(1 − r_i)) v_{i+1} + (q_i/(1 − r_i)) v_{i−1}, i ≥ 1.
These are the same equations as in Example 3.10. Hence the results of Example 4.16 about transience and recurrence hold. Next, let m_i = E(T | X_0 = i). Using the first-step analysis, we get m_0 = 0 and
m_i = 1 + r_i m_i + p_i m_{i+1} + q_i m_{i−1}, i ≥ 1.
These can be rewritten as
m_i = 1/(1 − r_i) + (p_i/(1 − r_i)) m_{i+1} + (q_i/(1 − r_i)) m_{i−1}, i ≥ 1.
However, this produces the same answer for m_i as given in Example 3.16. Hence the conditions for positive and null recurrence derived in Example 4.16 continue to hold.

4.5. This is a special case of Computational Exercise 4.4 with p_0 = 1, r_0 = 0, and p_i = p, q_i = q, r_i = 1 − p − q, i ≥ 1. Thus, from the results of Example 4.16, we get
α_i = (q/p)^i, i ≥ 0, ρ_i = (1/q)(p/q)^{i−1}, i ≥ 1.
Hence the DTMC is (i) positive recurrent if p < q, (ii) null recurrent if p = q, and (iii) transient if p > q.

4.6. Follows by manipulating π = πP and using π_0 = 1 − µ.

4.7. Results of Example 4.24 continue to hold.

4.8. Results of Example 4.24 continue to hold.

4.9. Communicating class: {A, B, C}. All states are aperiodic and positive recurrent.

4.10. Using direct numerical calculations with MATLAB we get:
(a). Diagonalizing numerically, P^n = V D^n V^{−1}, where D = diag(−0.7071, 0.0000, 0.7071, 1.0000) holds the eigenvalues of P and V holds the corresponding eigenvectors (entries ±0.2706, ±0.5000, ±0.6533). Since the first three eigenvalues have modulus less than one,
lim_{n→∞} P^n =
[.25 .25 .25 .25]
[.25 .25 .25 .25]
[.25 .25 .25 .25]
[.25 .25 .25 .25].
(b). Similarly, P^n = V D^n V^{−1} with D = diag(1, 0.5000, −1, −0.5000), V holding eigenvectors with entries ±0.3162, ±0.5000, ±0.6325, and V^{−1} with entries ±0.3333, ±0.5270, ±0.6667. Because of the eigenvalue −1, P^n is a periodic function of n. Hence it has two convergent subsequences:
lim_{n→∞} P^{2n} =
[1/3 0 2/3 0]
[0 2/3 0 1/3]
[1/3 0 2/3 0]
[0 2/3 0 1/3],
lim_{n→∞} P^{2n+1} =
[0 2/3 0 1/3]
[1/3 0 2/3 0]
[0 2/3 0 1/3]
[1/3 0 2/3 0].

4.11. M^{(n)}/(n + 1) has the same limits.

4.12. (i) The DTMC is irreducible, positive recurrent, and aperiodic.
(ii) The DTMC is reducible with C_1 = {0} and C_2 = {N} as closed communicating classes that are aperiodic, and T = {1, 2, ..., N − 1} is a communicating class that is not closed.
(iii) The DTMC is reducible with C_1 = {0} and C_2 = {N} as closed communicating classes that are aperiodic, and T = {1, 2, ..., N − 1} is a communicating class that is not closed.

4.13. (a).
1. Communicating Class 1: {1}. The class is not closed, hence it is transient. It is aperiodic since p_{1,1} > 0.
2. Communicating Class 2: {2}. The class is not closed, hence it is transient. It is aperiodic since p_{2,2} > 0.
3. Communicating Class 3: {3}. The class is closed and finite, hence it is positive recurrent. It is aperiodic since p_{3,3} > 0.
(b).
1. Communicating Class 1: {1, 2}. The class is closed and finite, hence it is positive recurrent. It is aperiodic since p_{1,1} > 0.
2. Communicating Class 2: {3}. The class is not closed, hence it is transient. It is aperiodic since p_{3,3} > 0.
3. Communicating Class 3: {4}. The class is closed and finite, hence it is positive recurrent. It is aperiodic since p_{4,4} > 0. Indeed state 4 is absorbing.

4.14. Assume that 0 < α_i < 1 for all i ≥ 1, 0 < β_i < 1 for all i ≤ −1, and 0 < p < 1. Hence the DTMC is irreducible. The structure of the DTMC is such that it is a success runs DTMC on the positive integers when it is positive, and a success runs DTMC on the negative integers when it is negative. Every time it hits zero, it chooses the positive and the negative half with probability p and 1 − p respectively. Thus, using the results of Example 4.15, we can show that the DTMC is recurrent if
Σ_{i=1}^∞ (1 − α_i) = ∞ and Σ_{i=1}^∞ (1 − β_{−i}) = ∞.
It is positive recurrent if
Σ_{n=1}^∞ Π_{i=1}^{n−2} α_i < ∞ and Σ_{n=1}^∞ Π_{i=1}^{n−2} β_{−i} < ∞.
4.15. Use the recurrence criterion of Example 4.15. Note that all three cases are irreducible success runs DTMCs.
(a). q_i = 1 − p_i = i/(i + 1). Hence
Σ_{i=0}^∞ q_i = ∞,
so this success runs DTMC is recurrent. Using Equation 3.39 we get
Σ_{n=1}^∞ Π_{i=0}^{n−2} p_i = Σ_{n=0}^∞ 1/n! = e < ∞.
Hence the DTMC is positive recurrent.
(b). q_i = 1/(i + 1), i ≥ 1. Hence
Σ_{i≥1} q_i = Σ_{i=2}^∞ 1/i = ∞,
so this success runs DTMC is recurrent. Using Equation 3.39 we get
Σ_{n=1}^∞ Π_{i=0}^{n−2} p_i = 1 + Σ_{n=1}^∞ 1/n = ∞.
Hence the DTMC is null recurrent.
(c). Using q_i for odd i, we get
Σ_{i=0}^∞ q_i ≥ Σ_{i=2}^∞ 1/i = ∞,
so this success runs DTMC is recurrent. Using Equation 3.39 and collecting adjacent terms, we get
Σ_{n=1}^∞ Π_{i=0}^{n−2} p_i ≤ 2 Σ_{n=1}^∞ 1/(2^n n!) < ∞.
Hence the DTMC is positive recurrent.
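The series manipulation in part (a) — that Π_{i=0}^{n−2} 1/(i+1) = 1/(n−1)!, so the positive-recurrence sum equals e — can be checked numerically; a minimal sketch:

```python
import math

# 4.15(a): with p_i = 1/(i+1), verify sum_{n>=1} prod_{i=0}^{n-2} p_i = e.
total = 0.0
for n in range(1, 60):
    prod = 1.0
    for i in range(n - 1):      # i = 0, ..., n-2 (empty product = 1 for n=1)
        prod *= 1.0 / (i + 1)
    total += prod
print(total)                    # approaches e = 2.71828...
```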
Hence the DTMC is positive recurrent. 4.17. Use the direct method of Example 4.15. Let X0 = 0 and T = min{n > 0 : Xn = 0}. Then P(T = n + 1) = pn , n ≥ 0. Hence the DTMC is recurrent if P(T < ∞) =
∞ X
pn = 1,
n=0
and transient if P(T < ∞) =
∞ X
pn < 1,
n=0
It is positive recurrent if it is recurrent and E(T ) = 1 +
∞ X
npn < ∞,
n=0
and null recurrent if it is recurrent and E(T ) = 1 +
∞ X n=0
4.16.
npn = ∞.
(i). From the special case discussed in Example 4.24 we see that this random walk is transient if p > q, null recurrent if p = q, and positive recurrent if p < q.
(ii). From Example 4.16, we get
ρ_0 = 1; ρ_n = (n + 1)/n! = 1/n! + 1/(n − 1)!.
Hence
Σ_{n=0}^∞ ρ_n = 2e < ∞.
Hence this is a positive recurrent random walk.
(iii). From Example 4.16, we get
α_0 = 1; α_n = 1/n!.
Hence
Σ_{n=0}^∞ α_n = e < ∞.
Hence this is a transient random walk.

4.18. The DTMC {X_n, n ≥ 0} of Example 2.17 has the following transition probabilities: p_{i,j} = a_{i−j+1}, 0 < j ≤ i + 1. To use Pakes' Lemma, we first compute
d(i) = E(X_{n+1} − X_n | X_n = i)
≤ Σ_{j=1}^{i+1} (j − i) a_{i−j+1}
= Σ_{j=1}^{i+1} a_{i−j+1} − Σ_{j=1}^{i+1} (i − j + 1) a_{i−j+1}
= Σ_{j=0}^{i} a_j − Σ_{j=0}^{i} j a_j
≤ 1 − Σ_{j=0}^{i} j a_j,
where the first inequality holds since any remaining transition to state 0 only lowers the drift. Thus
Σ_{j=0}^∞ j a_j > 1 ⇒ lim sup_{i→∞} d(i) < 0.
Hence, from Pakes' Lemma, we see that the DTMC is positive recurrent if
Σ_{j=0}^∞ j a_j > 1.
4.19. The discrete time queue has the following parameters:
r_0 = 1 − p, p_0 = p, q_i = q(1 − p), p_i = p(1 − q), r_i = 1 − p_i − q_i, i ≥ 1.
Substituting in Equation 3.171, we get
ρ_0 = 1, ρ_n = (1/(1 − q)) (p(1 − q)/(q(1 − p)))^n, n ≥ 1.
Then
Σ_{n=0}^∞ ρ_n = 1 + (1/(1 − q)) Σ_{n=1}^∞ (p(1 − q)/(q(1 − p)))^n.
The queue is stable (i.e., positive recurrent) if and only if this sum is finite, which is the case if and only if
p(1 − q)/(q(1 − p)) < 1,
which is equivalent to p < q.

4.20. Run cmp27 to compute the P matrix. Run dtmcod(P) to get the following limiting distribution:
π = [0.0834 0.0868 0.0864 0.0793 0.0925 0.0854 0.0682 0.1142 0.0724 0.0421 0.1893].

4.21. (a). The DTMC is irreducible, positive recurrent and aperiodic. Hence the limiting distribution [π_1 π_2 π_3 π_4] satisfies
π_1 = .5π_1 + .5π_2,
π_2 = .5π_1 + .5π_3,
π_3 = .5π_2 + .5π_4,
π_4 = .5π_3 + .5π_4.
The normalizing equation is π_1 + π_2 + π_3 + π_4 = 1. Using MATLAB we get the solution as
[π_1 π_2 π_3 π_4] = [.25 .25 .25 .25].
The long run fraction of the time the DTMC spends in state i is also given by π_i.
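The same solve can be done in a few lines; a sketch that reads the transition matrix P off the balance equations above and replaces one equation by the normalization:

```python
import numpy as np

# 4.21(a): solve pi = pi P, sum(pi) = 1, for the 4-state chain.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

# Drop one (redundant) balance equation, append the normalization row.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)        # uniform, since P is doubly stochastic
```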
(b). The DTMC is irreducible, positive recurrent, but periodic with period 2. Hence the limiting distribution does not exist. The stationary distribution [π_1 π_2 π_3 π_4] satisfies
π_1 = .5π_2,
π_2 = π_1 + .5π_3,
π_3 = .5π_2 + π_4,
π_4 = .5π_3.
The normalizing equation is π_1 + π_2 + π_3 + π_4 = 1. Using MATLAB we get the solution as
[π_1 π_2 π_3 π_4] = [1/6 1/3 1/3 1/6].
The long run fraction of the time the DTMC spends in state i is given by π_i.
4.22. The DTMC is irreducible and aperiodic. We shall directly solve for π. The steady state equations are
π_0 = Σ_{j=0}^∞ π_j/(j + 2),
π_i = Σ_{j=i−1}^∞ π_j/(j + 2), i ≥ 1.
From this we get
π_0 = π_1, π_i = π_{i−1}/(i + 1) + π_{i+1}.
Solving recursively we get
π_i = π_0/i!, i ≥ 0.
Using the normalization equation we get
π_0 = [Σ_{i=0}^∞ 1/i!]^{−1} = e^{−1}.
Hence the DTMC is positive recurrent with the limiting distribution
π_i = e^{−1}/i!, i ≥ 0.
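The claimed Poisson-type distribution can be verified directly against the steady state equations, truncating the infinite sums numerically; a minimal sketch:

```python
import math

# Check that pi_i = e^{-1}/i! satisfies pi_i = sum_{j >= i-1} pi_j/(j+2)
# (and pi_0 = sum_{j >= 0} pi_j/(j+2)); tails truncated at j = 60.
def pi(i):
    return math.exp(-1) / math.factorial(i)

errs = [abs(pi(0) - sum(pi(j) / (j + 2) for j in range(0, 60)))]
for i in range(1, 8):
    errs.append(abs(pi(i) - sum(pi(j) / (j + 2) for j in range(i - 1, 60))))
print(max(errs))    # numerically zero
```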
4.23. Run “cmp211” to compute the P matrix. Run “dtmcod(P)” to get the following limiting distribution: π = [0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0002 0.0004 0.0011 0.0025 0.0051 0.0096 0.0169 0.0272 0.0405 0.0560 0.0721 0.0863 0.6822].
4.24. The balance equations are
π_j = π_0 β_j + Σ_{i=1}^{j+1} π_i α_{j−i+1}, j ≥ 0.
Following the same steps as in Example 4.25, and using the notation given here, we get
φ(z) = Σ_{j=0}^∞ π_j z^j = π_0 B(z) + (A(z)/z)(φ(z) − π_0).
Solving for φ(z) we get
φ(z) = π_0 (A(z) − zB(z))/(A(z) − z).
To compute the unknown π_0, we use the normalization equation φ(1) = 1. Using L'Hopital's rule once we get
π_0 = (1 − µ)/(1 − µ + ν).
We must have µ < 1 for this to be positive; hence the condition of stability is
µ = Σ_{k=0}^∞ k α_k < 1.
4.25. From the solution to Modeling Exercise 2.13, we see that {X_n, n ≥ 0} is a DTMC on state space {1, 2, 3, ..., k} with the following transition probabilities:
p_{i,i} = p_i, 1 ≤ i ≤ k, p_{i,i+1} = 1 − p_i, 1 ≤ i ≤ k − 1, p_{k,1} = 1 − p_k.
This is an irreducible, aperiodic, positive recurrent DTMC. The balance equations are
π_1 = p_1 π_1 + (1 − p_k)π_k,
π_i = p_i π_i + (1 − p_{i−1})π_{i−1}, 2 ≤ i ≤ k.
Solving recursively we get
π_i = ((1 − p_k)/(1 − p_i)) π_k, 1 ≤ i ≤ k.
Using the normalization equation we get
π_k = m_k / Σ_{j=1}^k m_j,
where m_i = 1/(1 − p_i). Hence the limiting distribution is
π_i = m_i / Σ_{j=1}^k m_j, 1 ≤ i ≤ k.
Thus the long run fraction of the patients that receive treatment i is given by π_i.

4.26. Let σ = (σ_1, σ_2, ..., σ_N) be a permutation of (1, 2, ..., N). Let σ(i, j) be the permutation of σ obtained by interchanging the ith and jth components of σ. Thus σ(i, i) = σ. We have
P(X_{n+1} = σ(i, j) | X_n = σ) = 2/N², if i ≠ j,
P(X_{n+1} = σ | X_n = σ) = 1/N.
The DTMC is irreducible since it is possible to generate one permutation from another by a finite number of pairwise interchanges. The DTMC is also aperiodic. Now
Σ_{σ'} P(X_{n+1} = σ | X_n = σ') = P(X_{n+1} = σ | X_n = σ) + Σ_{i≠j} P(X_{n+1} = σ | X_n = σ(i, j))
= 1/N + (N(N − 1)/2) · (2/N²)
= 1.
Thus the DTMC is doubly stochastic. From the result of Conceptual Exercise 4.22 in this chapter we see that the limiting distribution of such a DTMC is uniform over the state space. Hence
lim_{n→∞} P(X_n = σ) = 1/N!.

4.27. (a). Equation 4.45 becomes
ρ = Σ_{i=0}^∞ a_i ρ^i = (1 − c) Σ_{i=0}^∞ (ρc)^i = (1 − c)/(1 − ρc).
Note that we know that ρ < 1 and .5 < c < 1, and hence the geometric series converges. Solving the above equation we get
ρ²c − ρ − c + 1 = 0,
which has two solutions
ρ = 1, ρ = (1 − c)/c.
The second solution is less than 1, and hence is the correct one. Substituting in Equation 4.46 we get the limiting distribution as
π_j = ((2c − 1)/c)((1 − c)/c)^j, j ≥ 0.
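Both claims — that ρ = (1 − c)/c solves the fixed-point equation and that the resulting π_j sum to 1 — can be checked numerically; a sketch with an illustrative c ∈ (.5, 1):

```python
# 4.27(a): with a_i = (1-c) c^i, check rho = (1-c)/c solves
# rho = sum_i a_i rho^i, and that pi_j = ((2c-1)/c)((1-c)/c)^j sums to 1.
c = 0.7                       # illustrative value in (.5, 1)
rho = (1 - c) / c
rhs = sum((1 - c) * c**i * rho**i for i in range(200))
pi_sum = sum((2 * c - 1) / c * ((1 - c) / c) ** j for j in range(200))
print(rho, rhs, pi_sum)
```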
(b). Equation 4.45 becomes
ρ = Σ_{i=0}^∞ a_i ρ^i = (1/(m + 1)) (1 − ρ^{m+1})/(1 − ρ).
Solving numerically (run "cmp317b") we get the following table of ρ values for different m:

m    ρ
3    0.4142
4    0.2757
5    0.2113
6    0.1727
7    0.1464
8    0.1273
9    0.1127
10   0.1011
11   0.0918
12   0.0840
13   0.0774
14   0.0718
15   0.0670
16   0.0628
17   0.0590
18   0.0557
19   0.0528
20   0.0501
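The table entries can be reproduced by fixed-point iteration on the equation above (which converges here since the map has slope less than one near the root); a minimal sketch:

```python
# 4.27(b): iterate rho <- (1/(m+1)) (1 - rho^{m+1}) / (1 - rho).
# For m = 3 the root is sqrt(2) - 1 = 0.4142...
def solve_rho(m, iters=2000):
    rho = 0.5
    for _ in range(iters):
        rho = (1.0 / (m + 1)) * (1 - rho ** (m + 1)) / (1 - rho)
    return rho

print(round(solve_rho(3), 4), round(solve_rho(4), 4), round(solve_rho(20), 4))
```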
The limiting distribution for a given m is given by Equation 4.46 by using the appropriate value of ρ from the above table.

4.28. Let Y_n = X_n (mod 7). Then {Y_n, n ≥ 0} is a DTMC on state space {0, 1, 2, 3, 4, 5, 6} with transition probabilities computed as follows:
P(Y_{n+1} = j | Y_n = i) = P(X_{n+1} = j − i) if j > i,
P(Y_{n+1} = j | Y_n = i) = P(X_{n+1} = 7 + j − i) if j ≤ i.
The transition probability matrix is given below:
P =
[0 1/6 1/6 1/6 1/6 1/6 1/6]
[1/6 0 1/6 1/6 1/6 1/6 1/6]
[1/6 1/6 0 1/6 1/6 1/6 1/6]
[1/6 1/6 1/6 0 1/6 1/6 1/6]
[1/6 1/6 1/6 1/6 0 1/6 1/6]
[1/6 1/6 1/6 1/6 1/6 0 1/6]
[1/6 1/6 1/6 1/6 1/6 1/6 0].
This is the doubly stochastic matrix of an irreducible aperiodic DTMC. Hence, from Conceptual Exercise 4.22, we get
π_i = 1/7, 0 ≤ i ≤ 6.
4.29. Let Z_n = X_n (mod r). Then {Z_n, n ≥ 0} has state space {0, 1, ..., r − 1}. We have
P(Z_{n+1} = j | Z_n = i) = P(X_{n+1} = j − i (mod r)) = Σ_{m=0}^∞ α_{rm+j−i} if j > i,
P(Z_{n+1} = j | Z_n = i) = P(X_{n+1} = r + j − i (mod r)) = Σ_{m=1}^∞ α_{rm+j−i} if j ≤ i.
Thus {Z_n, n ≥ 0} is an irreducible DTMC. Now,
Σ_{i=0}^{r−1} p_{i,j} = Σ_{i} α_i = 1.
Hence the DTMC is doubly stochastic. Assuming it is aperiodic, we get, from Conceptual Exercise 4.22,
π_i = 1/r, 0 ≤ i ≤ r − 1.

4.30. The discrete-time queue of Example 2.12 is a random walk on {0, 1, 2, ...} with
p_{00} = r_0 = 1 − p, p_{01} = p_0 = p,
p_{i,i−1} = q_i = q(1 − p), p_{i,i} = r_i = pq + (1 − p)(1 − q), p_{i,i+1} = p_i = p(1 − q).
Hence we can use the results of Example 3.23 to compute its limiting distribution. We have
ρ_0 = 1, ρ_n = (p_0 p_1 ... p_{n−1})/(q_1 q_2 ... q_n) = (1/(1 − q)) (p(1 − q)/(q(1 − p)))^n, n ≥ 1.
Now,
Σ_{n=0}^∞ ρ_n = 1 + Σ_{n=1}^∞ (1/(1 − q)) (p(1 − q)/(q(1 − p)))^n = q/(q − p) if p < q.
Hence the queue is positive recurrent if p < q. In this case, using Equations (4.41) and (4.42), the limiting distribution is given by
π_0 = (q − p)/q,
π_n = ((q − p)/(q(1 − q))) (p(1 − q)/(q(1 − p)))^n, n ≥ 1.
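These closed forms can be checked against a (large) truncated transition matrix; a sketch with illustrative p < q, using power iteration for the stationary vector:

```python
import numpy as np

# 4.30: compare the closed-form limiting distribution of the discrete-time
# queue with the stationary vector of a truncated chain.
p, q = 0.3, 0.5      # illustrative values with p < q
M = 200              # truncation level (tail mass is negligible here)
P = np.zeros((M + 1, M + 1))
P[0, 0], P[0, 1] = 1 - p, p
for i in range(1, M + 1):
    P[i, i - 1] = q * (1 - p)
    P[i, i] = p * q + (1 - p) * (1 - q)
    if i < M:
        P[i, i + 1] = p * (1 - q)
    else:
        P[i, i] += p * (1 - q)    # reflect at the truncation boundary

pi = np.ones(M + 1) / (M + 1)
for _ in range(5000):             # power iteration
    pi = pi @ P

theta = p * (1 - q) / (q * (1 - p))
pi0 = (q - p) / q
pi1 = (q - p) / (q * (1 - q)) * theta
print(pi[0], pi0, pi[1], pi1)
```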
4.31. The urn model of Example 2.13 is a random walk on {0, 1, 2, ..., N} with
p_{00} = r_0 = 0, p_{01} = p_0 = 1,
p_{i,i−1} = q_i = (i/N)², p_{i,i} = r_i = 2(i/N)((N − i)/N), p_{i,i+1} = p_i = ((N − i)/N)², 1 ≤ i ≤ N − 1,
p_{N,N−1} = q_N = 1, p_{N,N} = r_N = 0.
Hence we can use the results of Example 3.23 to compute its limiting distribution. We have
ρ_n = (p_0 p_1 ... p_{n−1})/(q_1 q_2 ... q_n) = C(N, n)², 0 ≤ n ≤ N.
This is a finite state irreducible DTMC. Hence it is positive recurrent. It is also aperiodic. Using Equations (3.173) and (3.175), the limiting distribution is given by
π_n = C(N, n)² / Σ_{j=0}^N C(N, j)², 0 ≤ n ≤ N.

4.32. (i) From the transition probabilities given in the solution to Modeling Exercise 2.18, we see that this is an irreducible, aperiodic DTMC.
(ii) Let π_{i,j} = lim_{n→∞} P((X_n, Y_n) = (i, j)), (i, j) ∈ S.
The balance equations yield
π_{i,u} = π_{i+1,u} + u_i π_{1,d}, i ≥ 1,
π_{i,d} = π_{i+1,d} + d_i π_{1,u}, i ≥ 1.
Solving recursively, we get
π_{i,u} = π_{1,u} − (Σ_{r=1}^{i−1} u_r) π_{1,d}, i ≥ 1,
π_{i,d} = π_{1,d} − (Σ_{r=1}^{i−1} d_r) π_{1,u}, i ≥ 1.
We also get π_{1,u} = π_{1,d}. Hence
π_{i,u} = (Σ_{r=i}^∞ u_r) π_{1,u}, i ≥ 1,
π_{i,d} = (Σ_{r=i}^∞ d_r) π_{1,u}, i ≥ 1.
Now,
1 = Σ_{i=1}^∞ π_{i,u} + Σ_{i=1}^∞ π_{i,d} = (Σ_{i=1}^∞ Σ_{r=i}^∞ u_r + Σ_{i=1}^∞ Σ_{r=i}^∞ d_r) π_{1,u} = (u + d) π_{1,u}.
Since u < ∞, d < ∞, we get
π_{1,u} = π_{1,d} = 1/(u + d).
Hence
P(Machine is up) = Σ_{i=1}^∞ π_{i,u} = u/(u + d).
4.33. From Modeling Exercise 2.1, we see that {X_n, n ≥ 0} is a success-runs DTMC with
p_i = (Σ_{j=i+2}^∞ α_j)/(Σ_{j=i+1}^∞ α_j), and q_i = α_{i+1}/(Σ_{j=i+1}^∞ α_j),
for i = 0, 1, 2, .... Hence we can use the results of Example 3.22. Using L to represent a generic lifetime of a new item, we have
ρ_0 = 1 = P(L > 0), ρ_n = p_0 p_1 ... p_{n−1} = Σ_{j=n+1}^∞ α_j = P(L > n), n ≥ 1.
Now,
Σ_{n=0}^∞ ρ_n = Σ_{n=0}^∞ P(L > n) = τ < ∞,
where τ = E(L), assumed finite. Hence, from Equation (3.158), the DTMC is positive recurrent. Using Equation (3.159) we get
π_n = P(L > n)/τ, n ≥ 0.

4.34. Let µ_n = E(X_n). Then, given X_0 = i, we get
µ_{n+1} = (1 − p)µ_n + µ, µ_0 = i.
Solving recursively yields
µ_n = i(1 − p)^n + (1 − (1 − p)^n)(µ/p).
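The closed form can be checked against the recursion step by step; a sketch with illustrative p, µ, and i:

```python
# 4.34: verify mu_n = i (1-p)^n + (1 - (1-p)^n)(mu/p) against
# the recursion mu_{n+1} = (1-p) mu_n + mu (illustrative numbers).
p, mu, i = 0.25, 2.0, 7.0
m = i
for n in range(1, 51):
    m = (1 - p) * m + mu
    closed = i * (1 - p) ** n + (1 - (1 - p) ** n) * (mu / p)
    assert abs(m - closed) < 1e-9
print(m)     # converges toward the fixed point mu/p = 8
```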
4.35. Let φ_n(z) = E(z^{X_n}) and ψ(z) = E(z^{Y_n}). Then
φ_{n+1}(z) = E(z^{X_{n+1}}) = E(z^{Y_n + Bin(X_n, 1−p)}) = E(z^{Y_n}) E(z^{Bin(X_n, 1−p)}) = ψ(z) E(((1 − p)z + p)^{X_n}) = ψ(z) φ_n((1 − p)z + p).
Letting n → ∞ we get
φ(z) = ψ(z) φ((1 − p)z + p).
By repeated substitution we get
φ(z) = Π_{n=0}^∞ ψ((1 − p)^n z + 1 − (1 − p)^n).
4.36. (i) The DTMC is irreducible and aperiodic if 0 < p < 1.
(ii) Let π_i = lim_{n→∞} P(X_n = i). The balance equations are:
π_i = (1 − p)π_{i−1}, 1 ≤ i ≤ k − 1,
π_{−1} = (1 − p/r)π_{−1} + (1 − p)π_{k−1}.
Solving recursively, we get
π_i = (1 − p)^i π_0, 0 ≤ i ≤ k − 1,
π_{−1} = (r/p)(1 − p)^k π_0.
Using the normalizing equation we get
π_0 = p/(1 + (r − 1)(1 − p)^k).
Now the expected number of items inspected is 1 in state i (0 ≤ i ≤ k − 1) and 1/r in state −1. Hence the long run fraction of items inspected is given by
α = π_{−1}(1/r) + Σ_{i=0}^{k−1} π_i = 1/(1 + (r − 1)(1 − p)^k).
(iii) None of the inspected items are defective when shipped. An uninspected item is defective with probability p. Hence the long run fraction of defective items in the shipped items is given by
p(1 − α) = p(r − 1)(1 − p)^k/(1 + (r − 1)(1 − p)^k).
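The formulas for π_0 and α can be checked by solving π = πP for the (k+1)-state chain directly; a sketch with illustrative p, r, k (state index k stands for state −1):

```python
import numpy as np

# 4.36: verify pi_0 and alpha by solving the stationary equations.
p, r, k = 0.1, 10, 5
n = k + 1                       # states 0..k-1, plus index k for state -1
P = np.zeros((n, n))
for i in range(k - 1):
    P[i, i + 1] = 1 - p         # good item: one step closer to sampling
    P[i, 0] = p                 # defective item: start counting over
P[k - 1, k] = 1 - p             # enter the sampling phase (state -1)
P[k - 1, 0] = p
P[k, k] = 1 - p / r             # remain in the sampling phase
P[k, 0] = p / r                 # sampled item defective: restart

A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
pi = np.linalg.solve(A, np.r_[np.zeros(n - 1), 1.0])

alpha = pi[k] / r + pi[:k].sum()
denom = 1 + (r - 1) * (1 - p) ** k
print(pi[0], p / denom, alpha, 1 / denom)
```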
4.37. Let X_n be as in the solution of Modeling Exercise 2.20. We have seen there that {X_n, n ≥ 0} is a random walk on {0, 1, 2, ..., B − 1}. Let π_j be its limiting distribution. The audio quality is impaired if X_n = 0 and no input occurs, or X_n = B − 1 and two bytes arrive. Hence the long run fraction of the time the audio quality is impaired is given by π_0 α_0 + π_{B−1} α_2.

4.38. Let X_n be as in the solution to Modeling Exercise 2.32. Using the results from Example 4.24, we have
ρ_0 = 1,
ρ_n = (α_1/(α_2(1 − α_1))) · (α_1(1 − α_2)/(α_2(1 − α_1)))^{n−1}, if 1 ≤ n ≤ B_1 + B_2 − 1,
ρ_n = (α_1/α_2) · (α_1(1 − α_2)/(α_2(1 − α_1)))^{B_1+B_2−1}, if n = B_1 + B_2.
Then the limiting distribution of X_n is given by
π_j = lim_{n→∞} P(X_n = j) = ρ_j / Σ_{i=0}^{B_1+B_2} ρ_i, 0 ≤ j ≤ B_1 + B_2.
(i) Both machines are working at time n if 0 < X_n < B_1 + B_2. Hence the long run fraction of the time when both machines are working is given by 1 − π_0 − π_{B_1+B_2}.
(ii) Let s_j be the expected number of assemblies shipped at time n given X_n = j. We have
s_j = α_1, 0 ≤ j < B_2,
s_j = α_1 α_2, j = B_2,
s_j = α_2, B_2 < j ≤ B_1 + B_2.
Then the expected number of assemblies shipped per period is given by Σ_{j=0}^{B_1+B_2} s_j π_j.
(iii) Fraction of the time machine 1 is off = π_{B_1+B_2}. Fraction of the time machine 2 is off = π_0.

4.39. See the solution to Modeling Exercise 2.24. Using the transition probabilities given there, we get (writing p_{i,j} for the limiting probability of state (i, j))
p_{i,1} = (β^1_{i+1}/β^1_i) p_{i−1,1}, i ≥ 1,
p_{i,2} = (β^2_{i+1}/β^2_i) p_{i−1,2}, i ≥ 1.
This implies
p_{i,j} = β^j_{i+1} p_{0,j}, i ≥ 0, j = 1, 2.
We also have
p_{0,j} = v_j Σ_{i=0}^∞ α^1_{i+1} p_{i,1} + v_j Σ_{i=0}^∞ α^2_{i+1} p_{i,2}.
This can be simplified to get
p_{0,j} = v_j p_{0,1} + v_j p_{0,2}.
Using the normalizing equation we get
1 = Σ_{i=0}^∞ (p_{i,1} + p_{i,2}) = Σ_{i=0}^∞ β^1_{i+1} p_{0,1} + Σ_{i=0}^∞ β^2_{i+1} p_{0,2} = τ_1 p_{0,1} + τ_2 p_{0,2}.
Hence we get
p_{0,j} = v_j/(τ_1 v_1 + τ_2 v_2), j = 1, 2,
p_{i,j} = v_j β^j_{i+1}/(τ_1 v_1 + τ_2 v_2), i ≥ 0, j = 1, 2.
4.40. See the solution to Modeling Exercise 2.27. States 1 and 2 are transient, and state 3 is absorbing. Hence the limiting distribution is [0, 0, 1].

4.41. See the solution to Modeling Exercise 2.28. The DTMC is irreducible and aperiodic. The limiting distribution satisfies:
π_1 = .5π_1 + .25π_2,
π_2 = .5(π_1 + π_2 + π_3),
π_1 + π_2 + π_3 = 1.
The solution is given by π_1 = .25, π_2 = .5, π_3 = .25.

4.42. Use the notation from the solution to Modeling Exercise 2.6. Let
β_i = Σ_{j=i}^K α_j, 0 ≤ i ≤ K.
Note that
Σ_{j=0}^K β_j = 1 + Kp.
One can verify by direct substitution that the limiting distribution is given by
π_j = β_j/(1 + Kp), 0 ≤ j ≤ K.

4.43. (a).
[4/11 7/11 0 0]
[4/11 7/11 0 0]
[4/11 7/11 0 0]
[4/11 7/11 0 0].
(b).
[90/172 27/172 55/172 0 0 0]
[90/172 27/172 55/172 0 0 0]
[90/172 27/172 55/172 0 0 0]
[0 0 0 1 0 0]
[1170/3956 351/3956 715/3956 10/23 0 0]
[540/3956 162/3956 330/3956 17/23 0 0].

4.44. Here {X_n, n ≥ 0} is a random walk on {0, 1, ..., N} with
p_{0,0} = p_{N,N} = 1; p_{i,i+1} = p, p_{i,i−1} = q, 1 ≤ i ≤ N − 1.
It has three classes: C_1 = {0}, C_2 = {N} and T = {1, 2, ..., N − 1}. C_1 and C_2 are positive recurrent, and T is transient. Hence, using the results of Theorem 3.21, we get
lim_{n→∞} p^{(n)}_{i,j} = 1 if i = j = 0 or i = j = N,
= 0 if 1 ≤ i, j ≤ N − 1,
= α_i(1) if 1 ≤ i ≤ N − 1, j = 0,
= α_i(2) if 1 ≤ i ≤ N − 1, j = N.
Here u_i = α_i(1) is the probability that the DTMC gets absorbed in state 0 starting from state i. From Theorem 3.20, we see that the u_i satisfy
u_i = p u_{i+1} + q u_{i−1}, 1 ≤ i ≤ N − 1,
with boundary conditions u_0 = 1, u_N = 0. The solution is given by Equation 3.1. We also have α_i(2) = 1 − u_i, 1 ≤ i ≤ N − 1. This completes the solution.

4.45. Let X_n be the age of the component in place at time n. Using the solution to Modeling Exercise 2.33, we see that its limiting distribution is given by
π_i = β_{i+1} π_0, 0 ≤ i ≤ K − 1,
with
π_0 = (Σ_{i=1}^K β_i)^{−1} = 1/E(min(Z_n, K)).
Now, a cost of C_1 is incurred for every transition from i to 0 for 0 ≤ i ≤ K − 2, and C_2 is incurred for every transition from K − 1 to 0. Hence the cost vector is
c(i) = C_1 q_i, 0 ≤ i ≤ K − 2, c(K − 1) = C_2.
Hence, using Theorem 3.23, we get
g = C_1 Σ_{i=0}^{K−2} π_i q_i + C_2 π_{K−1} = (C_1 Σ_{i=1}^{K−1} p_i + C_2 β_K)/E(min(Z_n, K)).
4.46. Let D_n be the number of non-defective items produced on day n. Then {D_n, n ≥ 0} is a sequence of iid random variables with common pmf
a_0 = (1 − p)², a_1 = 2p(1 − p), a_2 = p².
Since the demand is one per day, we get X_{n+1} = (X_n + D_n − 1)^+. Thus {X_n, n ≥ 0} is a DTMC. Indeed, it is a random walk on {0, 1, 2, ...} with
r_0 = 1 − p², p_0 = p², q_i = (1 − p)², r_i = 2p(1 − p), p_i = p², i ≥ 1.
Using the results of Example 3.23, we get ρ_i = (p/(1 − p))^{2i}, i ≥ 0. Using the results of Example 4.24, we see that the DTMC is positive recurrent iff Σ ρ_n < ∞. This is the case iff p/(1 − p) < 1, i.e., p < .5. Using the results of Example 4.24, we see that the limiting distribution is given by
π_i = (p/(1 − p))^{2i} (1 − (p/(1 − p))²), i ≥ 0.

4.47. Suppose X_n = 0. Then a demand is lost with probability (1 − p)². Assume the holding cost is levied based upon the beginning inventory. Hence, the cost
DTMCS: LIMITING BEHAVIOR
77
function is given by
c(0) = d(1 − p)², c(i) = c·i, i ≥ 1.
Using Theorem 4.26 and the results of Computational Exercise 4.46, we get the long run cost rate as (using r = (p/(1 − p))²)
g = Σ_{i=0}^∞ π_i c(i) = d(1 − p)²(1 − r) + Σ_{i=1}^∞ c i r^i (1 − r) = d(1 − 2p) + cr/(1 − r).
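The simplification g = d(1 − 2p) + cr/(1 − r) can be checked against the direct sum; a sketch with illustrative p, c, d:

```python
# 4.47: compare the closed form for g with the direct sum over states.
p, c, d = 0.3, 1.5, 4.0                     # illustrative values, p < .5
r = (p / (1 - p)) ** 2
total = d * (1 - p) ** 2 * (1 - r)          # lost-demand cost in state 0
total += sum(c * i * r**i * (1 - r) for i in range(1, 400))
g = d * (1 - 2 * p) + c * r / (1 - r)
print(total, g)
```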
4.48. Let X_n be the inventory at the beginning of period n, and let Y_n be the state of the machine (1 if up, 0 if down) at the beginning of period n. Note that 0 ≤ X_n ≤ k ⇒ Y_n = 1. Thus the state space of the bivariate process {(X_n, Y_n), n ≥ 0} is {i = (i, 1) : 0 ≤ i ≤ K − 1} ∪ {(i, 0) : k < i ≤ K}. It is a DTMC since the production in each time period is iid. The transition probabilities are given by
p_{i,i+1} = p², 0 ≤ i < K − 1,
p_{i,i−1} = q², 1 ≤ i ≤ K − 1,
p_{i,i} = 2pq, 0 ≤ i ≤ K − 1,
p_{K−1,(K,0)} = p²,
p_{(i,0),(i−1,0)} = 1, k + 1 < i ≤ K,
p_{(k+1,0),k} = 1.
The balance equations, using judicious cuts, are
p² π_i = q² π_{i+1}, 0 ≤ i ≤ k − 1, (1)
p² π_i = q² π_{i+1} + π_{(K,0)}, k ≤ i ≤ K − 2, (2)
p² π_{K−1} = π_{(K,0)}, (3)
π_{(i,0)} = π_{(K,0)}, k + 1 ≤ i ≤ K. (4)
Solving equation (1) recursively yields
π_i = π_0 (p²/q²)^i, 0 ≤ i ≤ k. (5)
Solving equation (2) recursively and simplifying yields
π_i = π_k (p²/q²)^{i−k} + π_{(K,0)} (1 − (p²/q²)^{i−k})/(p² − q²), k ≤ i ≤ K − 1. (6)
Substituting for π_k from equation (5) in equation (6) yields
π_i = π_0 (p²/q²)^i + π_{(K,0)} (1 − (p²/q²)^{i−k})/(p² − q²), k ≤ i ≤ K − 1. (7)
Using equations (4) and (7) in the normalizing equation
Σ_{i=0}^{K−1} π_i + Σ_{i=k+1}^K π_{(i,0)} = 1
yields
π_0 Σ_{i=0}^{K−1} (p²/q²)^i + π_{(K,0)} Σ_{i=k+1}^{K−1} (1 − (p²/q²)^{i−k})/(p² − q²) + (K − k)π_{(K,0)} = 1. (8)
Substitute equation (3) in equation (7) (with i = K − 1) to get
π_{(K,0)}/p² = π_{K−1} = π_0 (p²/q²)^{K−1} + π_{(K,0)} (1 − (p²/q²)^{K−1−k})/(p² − q²). (9)
Solve equations (8) and (9) simultaneously to obtain π_0 and π_{(K,0)}.
(a) Steady state probability that the machine is off = Σ_{i=k+1}^K π_{(i,0)} = (K − k)π_{(K,0)}.
(b) Probability of i items in the inventory = π_i if 0 ≤ i ≤ k, π_i + π_{(i,0)} if k + 1 ≤ i ≤ K − 1, and π_{(K,0)} if i = K.

4.49. The machine is turned off whenever the system moves from state K − 1 to (K, 0) (the transition probability is p²), and is turned on whenever it moves from state (k + 1, 0) to k (the transition probability is 1). Thus the expected cost vector is
c(K − 1) = Ap², c(k + 1, 0) = B,
all other costs being zero. Using Theorem 3.23, the long run cost rate is seen to be
g = Ap² π_{K−1} + B π_{(k+1,0)},
where the limiting distribution is as given in the solution to Computational Exercise 4.48.

4.50. (i) The DTMC is irreducible and aperiodic.
(ii) Let P be as in the solution of Modeling Exercise 2.14. Solving π = πP, we get
π_0 = r_1 r_2 / (r_1 r_2 + r_2 α_1 + r_1 (α_2 + r_1 α_{12}) + r_2 α_{12}),
π_1 = α_1 π_0 / r_1, π_2 = (α_2 + r_1 α_{12}) π_0 / r_2, π_{12} = α_{12} π_0 / r_1.
(iii) R π_0 − c_1 π_1 − c_2 π_2 − (c_1 + c_2) π_{12}.

4.51. Let {(X_n, Y_n), n ≥ 0} be the DTMC developed in the solution to Modeling Exercise 2.16. The DTMC is irreducible, positive recurrent and aperiodic. The limiting distribution is
π = [.0021 .0085 .0427 .2133 .4 .0005 .0021 .0107 .0533 .2667].
Let c(i, j) be the beer expense if the state is X_n = i, Y_n = j. The cost vector is
c = [15 12 9 6 3 4 0 0 0 0]′.
Hence the long run beer expense per day is π · c = 3.0005 dollars per day.
4.52. Let Xn be the number of bytes in this buffer in slot n, after the input during the slot and the removal (playing) of any bytes. We assume that the input during the slot occurs before the removal. Thus if Xn = 0 and there is no input, there will be no bytes to play. In this case we set Xn+1 = −1. Similarly, if Xn = K and at least one byte streams in, there will be a loss. In this case we set Xn+1 = K + 1. Let T = min{n ≥ 0 : Xn = −1 or K + 1}. Thus the song plays flawlessly if T > B. The process {Xn , n ≥ 0} is a DTMC on {−1, 0, ..., K, K + 1} with the following transition probabilities: pi,i+j−1 = αj , 0 ≤ i < K − 1, j = 0, 1, 2. Also, pK−1,K−1 = α1 ; pK−1,K−2 = α0 ; pK−1,K+1 = α2 ; pK,K−1 = α0 , pK,K+1 = α1 +α2 . We set p0,0 = pK+1,K+1 = 1 since we don’t care what happens to the DTMC once it reaches the state 0 or K + 1. Now let ui (n) = P (T ≤ nX0 = i). Clearly u−1 (n) = uK+1 (n) = 1 for all n ≥ 0, ui (0) = 0 for all 0 ≤ i ≤ K. We have the following recursive equations for ui (n) for n ≥ 1 and 0 ≤ i ≤ K: ui (n) = α0 ui−1 (n − 1) + α1 ui (n − 1) + α2 ui+1 (n − 1). If the initial buffer content is k, the song of B bytes plays flawlessly with probability 1 − uk (B). We need to pick a 1 ≤ k ≤ K where this probability is maximum. Using the given parameters in a Matlab program we get the optimum k to be 12 and the maximum probability to be .9818. 4.53 (i). {Xn , n ≥ 0} is a simple random walk on {0, 1, · · · , B} with pi = α2 , 1 ≤ i < B, qi = α0 , 1 ≤ i ≤ B. Let T = min{n ≥ o : Xn ∈ {0, B}}, and vi (n) = P (T > nX0 = i). Then v0 (n) = vB (n) = 0, n ≥ 0 vi (0) = 1, 1 ≤ i < B, and vi (n) = α0 vi+1 (n − 1) + α2 vi−1 (n − 1), 1 ≤ i ≤ B − 1, n ≥ 1. The desired probability is given by vb (K). (ii). Numerical computations yield the optimum b = 12 and the maximum probability = .9795. The matlab program is given below: a0=.2; a1=.5; a2=.3;B=100; K=512; P= a0*diag(ones(1,B),1) + a1*diag(ones(1,B+1)) + a2*diag(ones(1,B),1); P=P(2:B,2:B); v=ones(B1,1); for k=1:K
v = P*v;
end
[p, b] = max(v)

4.54 (i). The state space is {4, 4.05, 4.10, · · · , 4.50}. The steady state distribution is

π = [0.2487 0.1799 0.1331 0.0999 0.0761 0.0592 0.0473 0.0394 0.0351 0.0351 0.0463].

(ii). The long run cost per visit to the gas station is

∑_{i=0}^{10} (11 − i) f(i) πi = 32.89.

The average time between visits to the pump is

∑_{i=0}^{10} (11 − i) πi = 8.0356.

Hence the cost per day is 32.89/8.0356 = 4.0931. If the student fills up every time, he buys 11 gallons every eleven days. The limiting distribution of the price is given by π in part (i). Hence the cost per day is

∑_{i=0}^{10} f(i) πi = 4.1487.

Thus price dependent purchasing seems to save about 5 cents per day!

4.55. Let Y be the page she lands on at time T. Then we have

ai,j = P(Y = j | X0 = i)
     = ∑_{k=0}^{∞} P(XT = j | X0 = i, T = k) P(T = k | X0 = i)
     = ∑_{k=0}^{∞} [P^k]i,j d^k (1 − d).

Hence

A = [ai,j] = (1 − d) ∑_{k=0}^{∞} (dP)^k = (1 − d)(I − dP)^{−1}.

Let u = [1/N, 1/N, · · · , 1/N] be the initial distribution. Then we get π̂ = uA = (1 − d)u(I − dP)^{−1}. It follows that π̂ satisfies the following equation:

π̂ = (1 − d)u + π̂ dP = π̂ [(1 − d)U + dP],

where U is an N by N matrix with all rows equal to u. This is the same equation satisfied by π̃ of Example 4.28. Since the solution (normalized to 1) is unique, we
must have π̂ = π̃.

4.56. Let τi,j be the expected number of visits to page j up to time T starting from page i. Then we have

τi,j = (1 − d)δi,j + d(δi,j + ∑_{k=1}^{N} pi,k τk,j),

that is, τi,j = δi,j + d ∑_{k=1}^{N} pi,k τk,j. Hence we get τ = [τi,j] = (I − dP)^{−1}. We have, with u = [1/N, 1/N, · · · , 1/N],

π̄ = uτ = u(I − dP)^{−1}.

From the solution to Computational Exercise 4.55, we see that π̃ = π̂ = (1 − d)u(I − dP)^{−1}. Hence π̄ = π̃/(1 − d). Thus π̄ is proportional to π̃, and the proportionality constant is 1/(1 − d) = E(T). This makes intuitive sense.

4.57. Consider the citation model of Modeling Exercise 2.34. Suppose there are 5 papers indexed 1, 2, 3, 4 and 5. Paper 2 cites paper 1, papers 3 and 4 cite paper 2, and paper 5 cites papers 2 and 3. Using Model 2 (see Section 2.3.7) compute the appropriate ranking of the five papers. Do the same analysis using Model 3 (see Section 2.3.7), with damping factor d = .6.

For Model 2, the transition probability matrix is

P =
[ .2  .2  .2  .2  .2
  1   0   0   0   0
  0   1   0   0   0
  0   1   0   0   0
  0  .5  .5   0   0 ].

The corresponding stationary distribution is

π = [0.4000 0.3200 0.1200 0.0800 0.0800].

These are the page ranks, and the correct ranking is 12345. For Model 3, the relevant matrix is

P =
[ 0.20 0.20 0.20 0.20 0.20
  0.68 0.08 0.08 0.08 0.08
  0.08 0.68 0.08 0.08 0.08
  0.08 0.68 0.08 0.08 0.08
  0.08 0.38 0.38 0.08 0.08 ].
The corresponding stationary distribution is π = [0.3037 0.3121 0.1514 0.1164 0.1164].
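As a numerical cross-check, the Model 3 stationary distribution can be recovered by power iteration. The sketch below is a plain-Python illustration (the manual's own programs are in Matlab); the matrix is copied from the solution above.

```python
# Power iteration for the stationary distribution of the Model 3 matrix.
P = [
    [0.20, 0.20, 0.20, 0.20, 0.20],
    [0.68, 0.08, 0.08, 0.08, 0.08],
    [0.08, 0.68, 0.08, 0.08, 0.08],
    [0.08, 0.68, 0.08, 0.08, 0.08],
    [0.08, 0.38, 0.38, 0.08, 0.08],
]

def stationary(P, iters=500):
    n = len(P)
    pi = [1.0 / n] * n          # start from the uniform distribution
    for _ in range(iters):      # repeatedly apply pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
print([round(x, 4) for x in pi])   # ≈ [0.3037, 0.3121, 0.1514, 0.1164, 0.1164]
```

Sorting the pages by descending πj reproduces the ranking 2, 1, 3, 4, 5 stated in the text.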
Thus the ranking under this model is 21345.

4.58. (a). We have

Xn+1(k + 1) = Bin(Xn(k), βk), 0 ≤ k < K, n ≥ 0,

and

Xn+1(0) = ∑_{k=0}^{K} Bin(Xn(k), αk) + Bn+1, n ≥ 0.

Since {Bn , n ≥ 0} are iid, {Xn , n ≥ 0} is a DTMC. The state space is {0, 1, 2, · · ·}^{K+1}. It is irreducible since Xn(0) can take any value in {0, 1, 2, · · ·}, and if Xn(0) = m, then all states with components bounded above by m can be reached in the next K steps. This implies irreducibility. The DTMC is aperiodic because it is possible to go from state 0 to 0 with positive probability.

(b). Taking expectations on both sides of the recursive equations above, we get

xn+1 = xn M + b e0,

where e0 = [1 0 0 · · · 0]. Let x = lim_{n→∞} xn. Then we have x(k + 1) = βk x(k), 0 ≤ k < K. Thus

x(k) = β0 β1 · · · βk−1 x(0) = ρk x(0), 1 ≤ k ≤ K.

Finally, using ρ0 = 1, we get

x(0) = b + ∑_{k=0}^{K} αk ρk x(0),

that is,

x(0) = b / (1 − ∑_{k=0}^{K} αk ρk).

(c). We have

sn = ∑_{k=0}^{K} αk xn(k) + b.

Thus

s = lim_{n→∞} sn = x(0) = b / (1 − ∑_{k=0}^{K} αk ρk).

(d). We have

d(x) = E(ν(Xn+1) − ν(Xn) | Xn = x) = E(∑_{k=0}^{K} Xn+1(k) − ∑_{k=0}^{K} x(k) | Xn = x) = b + xMe − xe,

where e = (1 1 · · · 1)′. Thus

d(x) = b − ∑_{k=0}^{K} (1 − αk − βk) x(k).

Now define

H = {x : ∑_{k=0}^{K} (1 − αk − βk) x(k) ≤ b; x(k) = 0 if αk + βk = 1, 0 ≤ k ≤ K}.
Thus H is a finite set. Clearly, this ν and H satisfy the conditions of Foster's criterion. Hence the DTMC is positive recurrent.

4.59. 1. Let Xn be the number of items in the warehouse at the beginning of day n, after the demand for that day has been satisfied. Then

Xn+1 = min{d, max{Xn + 1 − Dn+1, 0}}.

Thus {Xn , n ≥ 0} is a DTMC with state space {0, 1, 2, · · · , d} and transition probability matrix

P =
[ β0  α0    0     0    · · ·  0
  β1  α1    α0    0    · · ·  0
  β2  α2    α1    α0   · · ·  0
  ..  ..    ..    ..    ..    ..
  βd  αd−1  αd−2  αd−3 · · ·  α1 + α0 ].     (4.1)

Here the β's are such that the row sums are one.

2. Assume α0 > 0 and α0 + α1 < 1. Then the DTMC is irreducible and aperiodic. The limiting distribution exists since the state space is finite.

3. In this case {Xn , n ≥ 0} is a random walk of Example 2.10 with

pi = α0, 0 ≤ i < d, qi = α2, 1 ≤ i ≤ d, and ri = 1 − pi − qi.

Hence the limiting distribution is given by

πi = ρ^i (1 − ρ)/(1 − ρ^{d+1}), 0 ≤ i ≤ d,     (4.2)

where ρ = α0/α2.

4. If there are i items in the warehouse, there is one item of age j for 1 ≤ j ≤ i. Hence the average age is given by

L = ∑_{i=0}^{d} i πi.
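The truncated-geometric limiting distribution in part 3 can be checked numerically against power iteration of the bounded random walk. The sketch below is illustrative Python; the demand probabilities (α0, α2) = (0.2, 0.3) and capacity d = 5 are assumptions, not values from the exercise.

```python
# Check pi_i = rho^i (1 - rho)/(1 - rho^(d+1)), rho = alpha0/alpha2,
# for the bounded birth-death walk with up-probability alpha0 and
# down-probability alpha2 (illustrative parameters).
a0, a2, d = 0.2, 0.3, 5
rho = a0 / a2

P = [[0.0] * (d + 1) for _ in range(d + 1)]
for i in range(d + 1):
    if i < d:
        P[i][i + 1] = a0          # demand 0: stock goes up
    if i > 0:
        P[i][i - 1] = a2          # demand 2: stock goes down
    P[i][i] = 1.0 - sum(P[i])     # holding probability

pi = [1.0 / (d + 1)] * (d + 1)
for _ in range(5000):             # power iteration
    pi = [sum(pi[i] * P[i][j] for i in range(d + 1)) for j in range(d + 1)]

formula = [rho**i * (1 - rho) / (1 - rho**(d + 1)) for i in range(d + 1)]
print([round(v, 6) for v in pi])
```

The iterated distribution agrees with the closed form, confirming the detailed-balance computation.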
5. When there are i items in the warehouse and a demand of size 1 occurs, it is satisfied by the item of age i, i ≥ 1. If a demand of size 2 occurs, it is satisfied by the item of age i if i = 1, and by the items of ages i and i − 1 if i ≥ 2. Hence the desired answer is

α1 ∑_{i=1}^{d} i πi + α2 (π1 + ∑_{i=2}^{d} πi (i + i − 1)/2).
6. The fraction of items discarded due to old age = πd.

4.60. 1. This is the same DTMC as in the batch production inventory model of Example 2.16 with α0 = 1 − p, αN = p. It is irreducible and aperiodic.

2. From Example 4.19, the DTMC is positive recurrent if and only if µ = Np < 1. Hence we must have 2 ≤ N < 1/p.

3. The variance of the order quantity received is σ² = N²p(1 − p). Hence the expected number of items in the warehouse in steady state is given by

L = (1/2)(µ + σ²/(1 − µ)) = (1/2)(Np + N²p(1 − p)/(1 − Np)).

4. From Example 4.25 we get π0 = 1 − Np. The expected profit from sales per period is (1 − π0)(s − c) = Np(s − c). The expected holding cost per period is hL. Hence the long run net profit, as a function of N, is given by

g(N) = Np(s − c) − (h/2)(Np + N²p(1 − p)/(1 − Np)).

5. g(N) is a concave function of N. It is maximized at

N = max(2, (1/p)(1 − √(hq/(2p) / (s − c + h(q − p)/(2p))))).

One needs to check the two integers near the above RHS and choose the one that maximizes the profit.

4.61.
1. The assumption of geometric distributions has the following consequences. An arrival occurs at any given time n with probability a, regardless of the history up to time n. Thus an arrival to queue i occurs at any time n with probability a pi. A departure occurs from a nonempty queue i at time n with probability di, regardless of history. Hence {Xni , n ≥ 0} is a discrete time queue as in Example 2.12. The state space is {0, 1, 2, · · ·}. The transition probabilities are

pm,m+1 = a pi (1 − di), m ≥ 1,
pm,m−1 = (1 − a pi) di, m ≥ 1,
p0,1 = a pi, p0,0 = 1 − a pi,
pm,m = 1 − pm,m+1 − pm,m−1, m ≥ 1.

The DTMC is irreducible and aperiodic, since all these parameters are positive. It is positive recurrent when (see Example 4.24) a pi < di.

2. The two queues are not independent of each other, since an arrival to one queue implies no arrival to the other queue.

3. From Example 4.24 we see that the limiting distribution of queue i is given by πm = ρi^m (1 − ρi), m ≥ 0, where ρi = a pi/di. The expected number of customers in the queue in steady state is given by

Li = ∑_{m=0}^{∞} m πm = ρi/(1 − ρi) = a pi/(di − a pi).

Then the sum of the expected numbers of customers in the two queues in steady state is given by

L1 + L2 = d1/(d1 − a p1) + d2/(d2 − a p2) − 2.

This is minimized at

p1 = (d1 √d2 − d2 √d1 + a √d1) / (a(√d1 + √d2)).

The expression for p2 is symmetric.

4.62. 1. We assume that Xn is counted after the departures at time n, but before the arrivals. Let An be the number of arrivals at time n. Then we have X0 = 0 and

Xn+1 = Bin(Xn + An, 1 − p), n ≥ 0.
(4.3)
Since {An , n ≥ 0} are iid, {Xn , n ≥ 0} is a DTMC. It is irreducible, as long as the An's are not identically zero and p > 0. It is aperiodic since pi,i > 0 if P(An = i) > 0 and p > 0. It is positive recurrent, since from Pakes' lemma, the drift in state i is d(i) = −ip + τ, which goes to −∞ as i → ∞ as long as p > 0.
2. Let µn = E(Xn). Taking expectations on both sides of Equation 4.3 we get µ0 = 0 and

µn+1 = (µn + τ)(1 − p), n ≥ 0.     (4.4)

Solving recursively, we get

µn = (τ(1 − p)/p)(1 − (1 − p)^n), n ≥ 0.     (4.5)
3. There are two different revenue models. In the first model every customer pays f per period at the end of each period (like the rent on an apartment). The fee collected at time n is f(Xn−1 + An−1). Hence the total discounted revenue over the infinite time horizon is (using X0 = 0)

φ(f) = ∑_{n=1}^{∞} α^n f E(Xn−1 + An−1)
     = fα ∑_{n=0}^{∞} α^n (µn + τ)
     = fτα / ((1 − α)(1 − α(1 − p))).

In the second model, each customer pays the total accumulated fee when he leaves (like in a parking lot). Let T be the time spent by a customer in the system. Then the revenue produced by this customer when he leaves the system, discounted back to when he arrived, is

f E(T α^T) = fαp / (1 − α(1 − p))².

Since An arrivals occur at time n, the total expected discounted revenue from all the customers arriving at time n (discounted back to time n) is

fταp / (1 − α(1 − p))².

Hence the total discounted revenue over the infinite time horizon is

∑_{n=0}^{∞} α^n fταp / (1 − α(1 − p))² = fταp / ((1 − α)(1 − α(1 − p))²).

4. We use the first model for revenue. The second model produces the same f*. We have

φ(f) = fαA e^{−θf} / ((1 − α)(1 − α(1 − p))).

This is maximized at f* = 1/θ, and the maximum revenue is given by

φ(f*) = Aα e^{−1} / (θ(1 − α)(1 − α(1 − p))).
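The closed form for φ(f) in the first revenue model (part 3) can be checked against its defining series. The sketch below is illustrative Python; the parameter values α = 0.9, p = 0.3, τ = 2, f = 1 are assumptions chosen only for the check.

```python
# Numerical check of phi(f) = f*tau*alpha / ((1-alpha)*(1-alpha*(1-p)))
# against the series sum_{n>=1} alpha^n * f * (mu_{n-1} + tau),
# with mu_n from Equation 4.5 (illustrative parameter values).
alpha, p, tau, f = 0.9, 0.3, 2.0, 1.0

def mu(n):
    # mean number in system at time n, Equation 4.5
    return tau * (1 - p) / p * (1 - (1 - p) ** n)

series = sum(alpha ** n * f * (mu(n - 1) + tau) for n in range(1, 3000))
closed = f * tau * alpha / ((1 - alpha) * (1 - alpha * (1 - p)))
print(series, closed)   # the two values agree to high precision
```

The truncation at n = 3000 is harmless since α^n decays geometrically.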
5. Suppose X0 = 0, which is P(0). We shall show that Xn is a P(µn ) random variable, where µn is given by Equation 4.5. If Xn is P(µn ), then Xn +An is P(µn +τ ).
Then Bin(Xn + An, 1 − p) is P((µn + τ)(1 − p)). Hence µn satisfies Equation 4.4, and hence µn is given by Equation 4.5. Thus, as n → ∞, Xn → P(τ(1 − p)/p) in distribution.

4.63. 1. {Xn , n ≥ 0} satisfies

Xn+1 = Bin(Xn, 1 − p) + K, n ≥ 0.

Thus, given {X0 , X1 , · · · , Xn}, Xn+1 depends only on Xn. Hence it is a DTMC, and the state space is S = {0, 1, 2, · · ·}.

2. Taking expected values on both sides of the recursive equation we get mn+1 = (1 − p)mn + K. This can be solved by back substitution to get

mn = (K/p)(1 − (1 − p)^n), n ≥ 0.
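The recursion Xn+1 = Bin(Xn, 1 − p) + K and the limit mn → K/p can be checked by simulation. The sketch below is illustrative Python with assumed values K = 4, p = 0.5 (so K/p = 8).

```python
# Monte Carlo check that E(X_n) approaches K/p for the chain
# X_{n+1} = Bin(X_n, 1-p) + K (illustrative parameters K=4, p=0.5).
import random

random.seed(1)
K, p, n_steps, n_paths = 4, 0.5, 60, 5000

def binom(n, q):
    # number of successes in n Bernoulli(q) trials
    return sum(random.random() < q for _ in range(n))

total = 0
for _ in range(n_paths):
    x = 0
    for _ in range(n_steps):
        x = binom(x, 1 - p) + K
    total += x
print(total / n_paths)   # should be close to K/p = 8
```

With 5000 paths the sample mean is within a few hundredths of the limit 8.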
3. There are mn houses available at the beginning of the nth period. Of these, p mn houses are sold, each one producing a net revenue of r − c at the end of the period; (1 − p)mn houses are left unsold, each one costing h in holding costs at the beginning of the period. Hence the net revenue from the nth period is (α(r − c)p − h(1 − p))mn. Hence the expected total discounted net revenue over the infinite horizon is given by

∑_{n=0}^{∞} α^n (α(r − c)p − h(1 − p)) mn
  = (K/p)(α(r − c)p − h(1 − p)) ∑_{n=0}^{∞} α^n (1 − (1 − p)^n)
  = (K/p)(α(r − c)p − h(1 − p))(1/(1 − α) − 1/(1 − α(1 − p)))
  = K(α²(r − c)p − hα(1 − p)) / ((1 − α)(1 − α(1 − p))).

4. We have

lim_{n→∞} mn = K/p.
Hence the expected revenue per period in steady state is

(K/p)((r − c)p − h(1 − p)).

5. The expected net revenue per period in steady state is given by

K((r − c + h) − hc/(2c − r)).
This is maximized at r* = max(2c − √(hc), c). Thus the optimal markup is given by r* − c = max(c − √(hc), 0). Thus unless h < c, this is not a profitable business.

4.64. 1. K must be at least two to produce a nontrivial situation, so assume K ≥ 2. From the description of the system we see that Xn+1 = Xn + 1 if 0 ≤ Xn < K − 1, and for Xn ≥ K − 1, Xn+1 = Xn + 1 with probability q = 1 − p and Xn+1 = Xn + 1 − K with probability p. Since the distribution of Xn+1 is given entirely in terms of Xn, it is a DTMC.

2. The state space is S = {0, 1, 2, · · ·}. The transition probabilities are

pi,i+1 = 1, if 0 ≤ i ≤ K − 2,
pi,i+1 = q, pi,i+1−K = p, if i ≥ K − 1.
3. It is irreducible, and periodic with period K (since p ∈ (0, 1)).

4. The balance equations are

(A): π0 = pπK−1,  πj = πj−1 + pπj+K−1, 1 ≤ j ≤ K − 1,

(B): πj = qπj−1 + pπj+K−1, j ≥ K.

The equations (B) form a system of difference equations with constant coefficients, and can be solved by the methods of the upper Hessenberg matrix example. We try πj = α^j, j ≥ K − 1. Substituting in (B) we get

α^j = qα^{j−1} + pα^{j+K−1}, j ≥ K.

Thus if we can find an α ∈ (0, 1) satisfying

(C): α = q + pα^K,

we have a valid solution. From the theory for such equations developed in class, such a solution is guaranteed and unique if Kp > 1. Assume that this condition holds. Thus we have

(D): πj = cα^j, j ≥ K − 1,

for some constant c. Solving the equations in (A) recursively we get

(E): πj = π0 + cpα^K (1 − α^j)/(1 − α), 0 ≤ j ≤ K − 1.

Using π0 = cpα^{K−1} in (E) and combining with (D) and simplifying using (C), we get

(F): πj = cpα^{K−1} (1 − α^{j+1})/(1 − α), 0 ≤ j ≤ K − 1;  πj = cα^j, j ≥ K.

Now use the normalizing equation to compute c. Using (C) to simplify, we get

c = (1 − α)/(Kpα^{K−1}).

Substituting in (F) we get

(G): πj = (1/K)(1 − α^{j+1}), 0 ≤ j ≤ K − 1;  πj = ((1 − α)/(Kp)) α^{j+1−K}, j ≥ K.

5. The expected value in steady state is given by

∑_{j=0}^{∞} j πj = ∑_{j=0}^{K−2} j πj + (α + (K − 1)(1 − α))/(Kp).
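The stationary solution (G) can be verified directly: solve (C) for α numerically, build πj from (G), and check the balance equations (A) and (B) together with normalization. The sketch below is illustrative Python with assumed values K = 3, p = 0.5 (so Kp = 1.5 > 1).

```python
# Solve (C) alpha = q + p*alpha^K by bisection, build pi_j from (G),
# and verify balance and normalization (illustrative K = 3, p = 0.5).
K, p = 3, 0.5
q = 1 - p

# g(a) = q + p*a^K - a is positive at 0 and negative just below 1
# when Kp > 1, so bisection finds the root in (0, 1).
lo, hi = 0.0, 1.0 - 1e-12
for _ in range(200):
    mid = (lo + hi) / 2
    if q + p * mid ** K - mid > 0:
        lo = mid
    else:
        hi = mid
alpha = (lo + hi) / 2

M = 400   # truncation point; alpha^M is negligible
pi = [(1 - alpha ** (j + 1)) / K if j < K
      else (1 - alpha) / (K * p) * alpha ** (j + 1 - K)
      for j in range(M)]

assert abs(sum(pi) - 1) < 1e-9                                  # normalization
assert abs(pi[0] - p * pi[K - 1]) < 1e-9                        # (A), j = 0
for j in range(1, K):
    assert abs(pi[j] - (pi[j - 1] + p * pi[j + K - 1])) < 1e-9  # (A)
for j in range(K, M - K):
    assert abs(pi[j] - (q * pi[j - 1] + p * pi[j + K - 1])) < 1e-9  # (B)
print(alpha)
```

For K = 3, p = 0.5, the root of (C) happens to be (√5 − 1)/2 ≈ 0.618.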
Conceptual Exercises

4.1. (i) R is reflexive, symmetric and transitive. (ii) R is not reflexive (since x could be a female), not symmetric (since x could be a male and y a female), but it is transitive. (iii) R is reflexive, symmetric, but need not be transitive (since x and y may have common classes, and y and z may have common classes, but that does not imply that x and z have common classes). (iv) R is not reflexive, it is symmetric, but not transitive.

4.2. Suppose C1 ∩ C2 ≠ ∅. Then there is at least one element (say x) that belongs to both C1 and C2. Since C1 and C2 are communicating classes, x communicates with all elements in C1 as well as those in C2. Hence, by transitivity of communication, all elements of C1 communicate with all elements of C2. Hence all elements of C1 must be in C2 and vice versa. Hence C1 = C2.

4.3. This is the same as finding the shortest path from node i to node j in the transition diagram. If the length is N − 1 or less, j is accessible from i.

4.4. Skip.

4.5. Let A = {n ≥ 1 : P(T̃i = n | X0 = i) > 0}, and B = {n ≥ 1 : pii^(n) > 0}. Now, n, m ∈ A ⇒ in + jm ∈ B for all i, j ≥ 1, and k ∈ B ⇒ there are n, m ∈ A and i, j ≥ 1 such that k = in + jm. The result follows from this.
4.6. Suppose state i is recurrent, and X0 = i. Then with probability one the DTMC returns to state i at least one more time. Then the Markov property and time homogeneity imply that the DTMC returns infinitely often. This gives P(Vi = ∞ | X0 = i) = 1. Now suppose state i is transient, with ũi < 1 being the probability that the DTMC returns to state i starting from state i. Then Vi = k if the DTMC returns k − 1 more times, and then never returns. Hence, the Markov property and time homogeneity imply that P(Vi = k | X0 = i) = ũi^{k−1}(1 − ũi), k ≥ 1.

4.7. Consider two states i ≠ j in the state space S = {1, 2, ..., N} of the DTMC {Xn , n ≥ 0}. Since the DTMC is irreducible, there is a sequence of states i = i1 , i2 , ..., ik−1 , ik = j, such that P(Xr+1 = ir+1 | Xr = ir) > 0 for r = 1, ..., k − 1. Now, if k ≤ N we are done. So suppose k > N. Then at least one state in the sequence must be repeated, i.e., im = in for some 1 ≤ m < n ≤ k. Now consider the sequence i1 , ..., im−1 , im , in+1 , ..., ik. The length of this sequence is at most
k − 1, and the probability of following this sequence is positive if X0 = i1. Hence if k > N, it is also possible to reach j from i in k − 1 steps or less. Now we can repeat the argument as long as the current sequence has more than N states in it. Finally, we will reach a sequence with at most N states, thus proving the statement.

4.8. Suppose the period of state i is d. Then we must have pi,i^(n) = 0 for n = 1, 2, ..., d − 1. Thus, if d > N, we must have pi,i^(N) = 0. But this implies that it is not possible to return to state i from state i in N steps or less. This contradicts the assertion proved in Conceptual Exercise 4.7 above.

4.9. A state in a finite state DTMC must belong to some communicating class. If this class is closed, the state is positive recurrent from Theorem 4.9. If it is not closed, the state is transient from Theorem 4.10. Hence the state cannot be null recurrent.

4.10. It is easy to construct DTMCs with period 1 or 2 (a simple random walk with reflecting barriers yields period 2; making any pii > 0 makes the period 1). Now let 3 ≤ d ≤ N. Consider a DTMC with the following transition probabilities:

pi,i+1 = 1, 1 ≤ i ≤ d − 2,
pd−1,j = 1/(N − d + 1), j = d, d + 1, · · · , N,
pj,1 = 1, d ≤ j ≤ N.

This DTMC has period d.

4.11. Suppose the DTMC has only one communicating class. It must be closed, since there are no states outside it. Then, from Theorem 4.9, all states are positive recurrent, and we are done. Now suppose there are two communicating classes. If one of them is closed, we are done. So suppose both are open. Then it must be possible to go from class 1 to class 2, but not back, since class 1 is open. Similarly, it must be possible to go from class 2 to class 1, but not back, since class 2 is open. But this is impossible: if it is possible to go from class 1 to 2 and from 2 to 1, they must together form a single closed communicating class, and we are done. The same argument can be repeated for more than two classes.
4.12. Since k → i, there is an m > 0 such that pki^(m) > 0. Now,

pkk^(n+m) = ∑_r pkr^(m) prk^(n) ≥ pki^(m) pik^(n).

Thus

lim_{r→∞} (1/(r + 1)) ∑_{n=0}^{r} pkk^(n) = lim_{r→∞} (1/(r + 1)) ∑_{n=0}^{r} pkk^(n+m)
  ≥ pki^(m) lim_{r→∞} (1/(r + 1)) ∑_{n=0}^{r} pik^(n) > 0.

This yields Equation 4.19.

4.13. Let ν(i) = i. Then d(i) = E(Xn+1 − Xn | Xn = i). Since Xn ≥ 0, we have d(i) ≥ −i. Also, lim sup d(i) < 0 iff, for a given ε > 0, there is a k such that d(i) < −ε for all i > k. Let H = {0, 1, ..., k}. Thus, if the hypothesis of Pakes' lemma is satisfied, so are Equations 4.16 and 4.17 (with the H and ν defined above). Hence Pakes' lemma follows from Foster's criterion.

4.14. Follows directly from the Borel-Cantelli lemma.

4.15. We are given that πj = ∑_{i∈S} πi pi,j and P(X0 = j) = πj for all j ∈ S. Now, suppose P(Xn = i) = πi for all i ∈ S for some n ≥ 0. Then, conditioning on Xn, we get

P(Xn+1 = j) = ∑_{i∈S} P(Xn+1 = j | Xn = i) P(Xn = i) = ∑_{i∈S} πi pi,j = πj.
Hence the result follows by induction.

4.16. Consider the balance equations in Example 4.16. The first equation can be written as p0 π0 = q1 π1. Adding the first i balance equations we get pi πi = qi+1 πi+1, i ≥ 1. This yields

πi+1 = (pi/qi+1) πi, i ≥ 0.

Solving recursively we get Equation 4.40.
4.17. The expected cost incurred at time n, given that Xn = i, is

c(i) = ∑_{j∈S} pi,j c(i, j).
Hence the result follows. 4.18. Follows from the same argument as in the solution to the above problem.
4.19. For N = 0, we get φ(0, i) = c(i), as expected. For N ≥ 1, we get

φ(N, i) = E(∑_{n=0}^{N} α^n c(Xn) | X0 = i)
  = ∑_{j∈S} E(∑_{n=0}^{N} α^n c(Xn) | X1 = j, X0 = i) P(X1 = j | X0 = i)
  = ∑_{j∈S} E(c(X0) + ∑_{n=1}^{N} α^n c(Xn) | X1 = j, X0 = i) P(X1 = j | X0 = i)
  = c(i) + α ∑_{j∈S} pi,j E(∑_{n=1}^{N} α^{n−1} c(Xn) | X1 = j)
  = c(i) + α ∑_{j∈S} pi,j E(∑_{n=0}^{N−1} α^n c(Xn) | X0 = j)
  = c(i) + α ∑_{j∈S} pi,j φ(N − 1, j).
4.20. For N = 0, we get g(0, i) = c(i), as expected. For N ≥ 1, we get

g(N, i) = (1/(N + 1)) E(∑_{n=0}^{N} c(Xn) | X0 = i)
  = (1/(N + 1)) ∑_{j∈S} E(∑_{n=0}^{N} c(Xn) | X1 = j, X0 = i) P(X1 = j | X0 = i)
  = (1/(N + 1)) ∑_{j∈S} E(c(X0) + ∑_{n=1}^{N} c(Xn) | X1 = j, X0 = i) P(X1 = j | X0 = i)
  = (1/(N + 1)) [c(i) + ∑_{j∈S} pi,j E(∑_{n=1}^{N} c(Xn) | X1 = j)]
  = (1/(N + 1)) [c(i) + ∑_{j∈S} pi,j E(∑_{n=0}^{N−1} c(Xn) | X0 = j)]
  = (1/(N + 1)) [c(i) + N ∑_{j∈S} pi,j g(N − 1, j)].
4.21. Global balance equations are obtained by summing the local equation over all j. 4.22. Since {Xn , n ≥ 0} is a finite state irreducible DTMC, it is positive recurrent (Theorem 4.9). Hence it has a unique limiting distribution (Theorem 4.19). Thus it
suffices to check that the solution πj = 1/N for all j = 1, 2, ..., N satisfies the balance equations 4.29 and 4.30. Clearly, 4.30 is satisfied. Using the definition of doubly stochastic matrices, we get

∑_{i=1}^{N} πi pi,j = (1/N) ∑_{i=1}^{N} pi,j = 1/N = πj.

Thus Equation 4.29 is satisfied. Hence πj = 1/N for all j = 1, 2, ..., N is the limiting distribution.

4.23. Let i and j be two states in a tree DTMC such that pi,j > 0. Then we must have pj,i > 0. Also, using a cut in the transition diagram with i on one side and j on the other side, we see that the stationary probabilities satisfy πi pi,j = πj pj,i. Hence the DTMC is reversible.

4.24. See the notation in Subsection 2.6.1. Consider the three cases: (i) λ = 1 is an eigenvalue of multiplicity one, and it is the only eigenvalue with |λ| = 1. In this case all other eigenvalues lie strictly inside the unit circle. Let x be the right eigenvector, and y the left eigenvector, corresponding to eigenvalue 1. We can take x = e, since P is stochastic. Substituting in Equation 2.34, and letting n → ∞, we get P^n → xy. Thus in the limit, all rows of P^n converge to y. (ii) λ = 1 has multiplicity k, and these are the only eigenvalues with |λ| = 1. This case corresponds to k irreducible aperiodic classes. In this case there are k independent right eigenvectors x1 , ..., xk , and k independent left eigenvectors y1 , · · · , yk , and

P^n → ∑_{r=1}^{k} xr yr.
The rows of the limit are not identical. (iii) There are eigenvalues with |λ| = 1 other than λ = 1. This case corresponds to one or more periodic classes. In this case P^n displays an oscillatory behavior, and thus does not have a limit.

4.25. Since j is recurrent, we have

∑_{n=0}^{∞} pjj^(n) = ∞.
Since i → j, there is an m > 0 such that pij^(m) > 0. Now,

pij^(n+m) = ∑_r pir^(m) prj^(n) ≥ pij^(m) pjj^(n).

Thus

∑_{n=0}^{∞} pij^(n) ≥ ∑_{n=0}^{∞} pij^(n+m) ≥ pij^(m) ∑_{n=0}^{∞} pjj^(n) = ∞,

as desired.
4.26. Since i → j, there is an m ≥ 0 such that pi,j^(m) > 0 (Definition 3.1). Since j is recurrent, ∑_n pj,j^(n) = ∞ (Theorem 3.3). Now,

∑_{n=0}^{∞} pi,j^(n) ≥ ∑_{n=m}^{∞} pi,j^(n)
  = ∑_{n=m}^{∞} ∑_{k∈S} pi,k^(m) pk,j^(n−m)
  ≥ ∑_{n=m}^{∞} pi,j^(m) pj,j^(n−m)
  = pi,j^(m) ∑_{n=0}^{∞} pj,j^(n) = ∞.
4.27. We prove part (i); part (ii) follows similarly. Since i → j, there is an m ≥ 0 such that pi,j^(m) > 0 (Definition 3.1). Since j is recurrent, p*j,j > 0 (Theorem 3.4). Now, following the solution to the above problem, we get

p*i,j = lim_{N→∞} (1/(N + 1)) ∑_{n=0}^{N} pi,j^(n)
  = lim_{N→∞} (1/(N + 1)) ∑_{n=m}^{N} pi,j^(n)
  ≥ lim_{N→∞} (1/(N + 1)) ∑_{n=m}^{N} pi,j^(m) pj,j^(n−m)
  = pi,j^(m) lim_{N→∞} (1/(N + 1)) ∑_{n=m}^{N} pj,j^(n−m)
  = pi,j^(m) p*j,j > 0.
4.28. Let {Xn , n ≥ 0} be a DTMC on S = {1, 2} with p1,1 = p2,2 = .5. The DTMC is irreducible and aperiodic with π1 = π2 = .5. Suppose P(X0 = 1) = 1. Then P(X1 = 1) = P(X1 = 2) = .5. Thus X1 has the limiting distribution, but X0 does not.

4.29. Fix i and j. Suppose the DTMC earns one dollar every time it undergoes a transition from state i to j, and 0 otherwise. Then from Conceptual Exercise 4.18, the long run rate of revenue is given by g = πi pi,j. However, the long run rate of revenue is the same as the long run fraction of the transitions that take the DTMC from i to j.

4.30. For Q = (1/2)(P + P^T) to be a transition matrix, P must be doubly stochastic. Since P is given to be irreducible and positive recurrent, it must be a finite N × N matrix, with stationary distribution πi = 1/N for all i. Then Q is also doubly stochastic, irreducible and symmetric with the same stationary distribution as that of P. Since it is symmetric, we get πi qi,j = qi,j/N = qj,i/N = πj qj,i. Thus the DTMC with transition matrix Q is reversible from Theorem 4.27.

4.31. We have, for i > 0,

g(i) = E(R | X0 = i)
  = ∑_{j=0}^{∞} E(R | X0 = i, X1 = j) P(X1 = j | X0 = i)
  = ∑_{j=0}^{∞} pij (ri + g(j))
  = ri + ∑_{j=1}^{∞} pij g(j).
Here we have used the fact that g(0) = 0.

4.32. (a). We have P(Z1 = j | Z0 = i) ≥ αP(X1 = j | X0 = i). Hence P(X1 = j | X0 = i) > 0 ⇒ P(Z1 = j | Z0 = i) > 0. Thus if {Xn , n ≥ 0} is aperiodic, so is {Zn , n ≥ 0}.

(b). The converse is not true. A simple counterexample is the two state periodic DTMC {Xn , n ≥ 0}. In that case P(Z1 = 1 | Z0 = 1) ≥ (1 − α)α > 0. Thus {Zn , n ≥ 0} is aperiodic, but {Xn , n ≥ 0} is periodic.

(c). The transition probability matrix Q of {Zn , n ≥ 0} is related to the transition probability matrix P of {Xn , n ≥ 0} by

Q = ∑_{k=1}^{∞} α(1 − α)^{k−1} P^k.
Hence if π satisfies π = πP we have

πQ = ∑_{k=1}^{∞} α(1 − α)^{k−1} πP^k = π ∑_{k=1}^{∞} α(1 − α)^{k−1} = π.

Hence {Zn , n ≥ 0} has the same steady state distribution as {Xn , n ≥ 0}.

4.33. g(0) = 0,

g(s) = ps,t + ∑_{j=1}^{∞} ps,j g(j),

g(i) = ∑_{j=1}^{∞} pi,j g(j), i ≠ s.
CHAPTER 5
Poisson Processes
Computational Exercises

5.1. This can be done in a brute-force way; we show a way using properties of exponentials. Let L be the length of the shortest path. If X3 ≤ X1, L = min(X3, X1); else L = min(X1, X3) + min(X3 − X1, X2). Using the strong memoryless property, we see that, given X3 ≤ X1, min(X1, X3) ∼ Exp(λ1 + λ3), and given X3 > X1, X3 − X1 ∼ Exp(λ3) and min(X3 − X1, X2) ∼ Exp(λ2 + λ3). Also using the results of Section 5.1.7, we get P(L ≤ x)
(5.1)
= P(L ≤ x | X3 ≤ X1) P(X3 ≤ X1) + P(L ≤ x | X3 > X1) P(X3 > X1)
= P(min(X3, X1) ≤ x | X3 ≤ X1) P(X3 ≤ X1) + P(min(X1, X3) + min(X3 − X1, X2) ≤ x | X3 > X1) P(X3 > X1)
= P(Exp(λ1 + λ3) ≤ x) P(X3 ≤ X1) + P(Exp(λ1 + λ3) + Exp(λ2 + λ3) ≤ x) P(X3 > X1)
= (λ3/(λ1 + λ3))(1 − e^{−(λ1+λ3)x})
  + (λ1/(λ1 + λ3)) { ((λ2 + λ3)/(λ2 − λ1))(1 − e^{−(λ1+λ3)x}) + ((λ1 + λ3)/(λ1 − λ2))(1 − e^{−(λ2+λ3)x}) }
= (λ2/(λ2 − λ1))(1 − e^{−(λ1+λ3)x}) + (λ1/(λ1 − λ2))(1 − e^{−(λ2+λ3)x}).

5.2. a − b − c is the shortest path if X3 > X1 and X3 − X1 ≥ X2. Hence P(a − b − c is the shortest path)
= P(X3 > X1) P(X3 − X1 > X2 | X3 > X1)
= (λ1/(λ1 + λ3)) · (λ2/(λ2 + λ3)).
Here we have used the fact that, given X3 > X1, X3 − X1 ∼ Exp(λ3).

5.3. Let L be the length of the longest path. Then

P(L ≤ x) = P(max(X1 + X2, X3) ≤ x)
         = P(X1 + X2 ≤ x, X3 ≤ x)
         = P(X1 + X2 ≤ x) P(X3 ≤ x)
         = (1 − (λ2/(λ2 − λ1)) e^{−λ1 x} − (λ1/(λ1 − λ2)) e^{−λ2 x})(1 − e^{−λ3 x}).
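The longest-path CDF in 5.3 can be checked by simulation. The sketch below is illustrative Python; the rates λ1 = 1, λ2 = 2, λ3 = 3 and the point x = 1 are assumptions chosen only for the check.

```python
# Monte Carlo check of P(L <= x) for L = max(X1 + X2, X3) with
# independent exponentials (illustrative rates and x).
import random, math

random.seed(5)
l1, l2, l3, x = 1.0, 2.0, 3.0, 1.0
trials = 200_000

target = (1 - l2/(l2 - l1)*math.exp(-l1*x)
            - l1/(l1 - l2)*math.exp(-l2*x)) * (1 - math.exp(-l3*x))
hits = 0
for _ in range(trials):
    L = max(random.expovariate(l1) + random.expovariate(l2),
            random.expovariate(l3))
    hits += (L <= x)
print(hits / trials, target)   # the two numbers agree to about 0.01
```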
5.4. For this network, P(a−b−c is the longest path) = 1 − P(a−b−c is the shortest path). Hence, from the solution to Computational Exercise 5.2, we get the desired probability as

1 − (λ1/(λ1 + λ3)) · (λ2/(λ2 + λ3)).

5.5. P(max(X1, . . . , Xn) < x)
= P(X1 < x, . . . , Xn < x) = P(X < x)^n = (1 − e^{−λx})^n.
5.6. Use the results of Appendix C.5. Let X1, X2, X3 be iid Exp(λ), and Y1, Y2, Y3 be their order statistics. Then the ship's lifetime is Y2. Hence the desired result is given by

P(Y2 > t) = e^{−3λt} + 3(1 − e^{−λt}) e^{−2λt}.

5.7. Let Y be the time when the machine is discovered to be down and X be the lifetime of the machine. Then

E{Y} = E{Y | X < T} P{X < T} + E{Y | X > T} P{X > T}
     = T(1 − e^{−λT}) + (T + E{Y}) e^{−λT}
     = T + E{Y} e^{−λT}.

Hence,

E{Y} = T/(1 − e^{−λT}).

The expected duration of time that the machine is down before it is discovered to be down is given by

E{Y − X} = E{Y} − E{X} = T/(1 − e^{−λT}) − 1/λ.
5.8. We want k individuals out of n to be alive at time t. Each individual is alive at time t with probability e^{−λt}, and there are (n choose k) ways of choosing them; hence the desired probability is

(n choose k) (e^{−λt})^k (1 − e^{−λt})^{n−k}.

Following an argument as in Example 5.2, the expected time is obtained as

∑_{i=n−k+1}^{n} 1/(iλ).
5.9. Let T be the lifetime of the machine, and X and Y be the numbers of spares of parts A and B available. Let mi,j = E(T | X = i, Y = j), i = 0, 1, 2; j = 0, 1. Consider the case of X = 2, Y = 1. Let LA and LB be the lifetimes of components A and B respectively. We are given that LA ∼ Exp(λ) and LB ∼ Exp(µ). The next failure occurs at time min(LA, LB), whose expected value is 1/(λ + µ). If LA < LB, which happens with probability λ/(λ + µ), component A has failed, and a spare A component is put in place. The expected lifetime of the system from then on is given by m1,1, since the lifetimes of the components in use are exponential by the strong memoryless property. A similar analysis holds if LA ≥ LB. Using this analysis, we get

m2,1 = 1/(λ + µ) + (λ/(λ + µ)) m1,1 + (µ/(λ + µ)) m2,0.

Similar analysis yields the following set of equations:

m2,0 = 1/(λ + µ) + (λ/(λ + µ)) m1,0,
m1,1 = 1/(λ + µ) + (λ/(λ + µ)) m0,1 + (µ/(λ + µ)) m1,0,
m1,0 = 1/(λ + µ) + (λ/(λ + µ)) m0,0,
m0,1 = 1/(λ + µ) + (µ/(λ + µ)) m0,0,
m0,0 = 1/(λ + µ).
Solving by back substitution we get the desired answer to be (with p = µ/(λ + µ) and q = 1 − p)

m2,1 = (1/(λ + µ))[1 + p + q + 2pq + q² + 3q²p].

This can be further simplified to

m2,1 = (1/(λ + µ))(2 + q + pq(1 + 3q)).
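The back-substitution above can be reproduced mechanically. The sketch below is illustrative Python with assumed rates λ = 1, µ = 2; it solves the mi,j system from the bottom up and compares against the bracketed closed form.

```python
# Back-substitution check of the m_{i,j} system from 5.9
# (illustrative rates lam = 1, mu = 2).
lam, mu = 1.0, 2.0
c = 1.0 / (lam + mu)
A = lam / (lam + mu)   # probability the next failure is of type A
B = mu / (lam + mu)    # probability the next failure is of type B

m00 = c
m10 = c + A * m00
m01 = c + B * m00
m11 = c + A * m01 + B * m10
m20 = c + A * m10
m21 = c + A * m11 + B * m20

p, q = mu / (lam + mu), lam / (lam + mu)
closed = c * (1 + p + q + 2*p*q + q**2 + 3*q**2*p)
print(m21, closed)   # both evaluate to the same number
```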
5.10. Let Si be the time of the ith failure. In the beginning, all strands are functioning, and each carries a load of L/n. Hence the potential lifetime of each strand is an Exp(λL/n) random variable. Thus S1, being the minimum of n iid Exp(λL/n) random variables, is itself an Exp(λL) random variable. Now, consider the situation at time S1: there are n − 1 strands functioning, each sharing a load of L/(n − 1). Thus the remaining times to failure of the functioning strands are now iid Exp(λL/(n − 1)) random variables. Thus S2 − S1, being the minimum of n − 1 iid Exp(λL/(n − 1)) random variables, is an Exp(λL) random variable. Proceeding this way, we see that the n inter-failure times are iid Exp(λL) random variables. Hence T, the time until all strands break, is the sum of n iid Exp(λL) random variables. Hence it is an Erl(λL, n) random variable with density

fT(t) = λL e^{−λLt} (λLt)^{n−1}/(n − 1)!, t ≥ 0.
5.11. Let Xi be the time required to complete the ith job. When job 1 is processed
first, the total cost is C12 = C1 X1 + C2 (X1 + X2); and when job 2 is processed first, the total cost is C21 = C2 X2 + C1 (X1 + X2). Thus, the expected total cost when job 1 is processed first is (C1 + C2)/λ1 + C2/λ2, and when job 2 is processed first it is (C1 + C2)/λ2 + C1/λ1. The difference is C2/λ1 − C1/λ2. Thus, process job 1 first if C1 λ1 > C2 λ2, and otherwise process job 2 first.

5.12. First observe that P(no arrivals in a service time) = µ/(λ + µ). Let A be the desired event. If the second customer arrives after the first finishes service, A occurs if there are no arrivals in the second service time. If the second customer arrives before the first finishes service, A occurs if there are no arrivals in the remaining service time (which is an Exp(µ) random variable) of the first customer, and also during the second service time. By conditioning on whether the second customer arrives before or after the first customer completes service, we find that the probability that the third customer arrives after the second customer has been served is

(µ/(λ + µ))² + (λ/(λ + µ))(µ/(λ + µ))².

5.13. Let Xi be the length of the ith stick, Xi ∼ Exp(λ). Let (Y1 , Y2 , ..., Yn) be the order statistics of (X1 , X2 , ..., Xn). Then from Appendix D2, we get the joint density of (Y1 , Y2 , ..., Yn) as

f(y1 , y2 , ..., yn) = n! λ^n ∏_{i=1}^{n} e^{−λyi}, 0 ≤ y1 ≤ y2 ≤ ... ≤ yn.

Then P(Yn = Xi) = 1/n, for 1 ≤ i ≤ n. Let A be the event that a polygon cannot be formed from the n sticks, i.e., that the longest stick is longer than the sum of the others. Then P(A)
= P(Yn > ∑_{j=1}^{n−1} Yj)
= n! ∫_{y1=0}^{∞} λe^{−λy1} ∫_{y2=y1}^{∞} λe^{−λy2} · · · ∫_{yn−1=yn−2}^{∞} λe^{−λyn−1} ∫_{yn=y1+···+yn−1}^{∞} λe^{−λyn} dyn dyn−1 · · · dy1
= n! ∫_{y1=0}^{∞} λe^{−2λy1} ∫_{y2=y1}^{∞} λe^{−2λy2} · · · ∫_{yn−1=yn−2}^{∞} λe^{−2λyn−1} dyn−1 · · · dy1
= (n!/2) ∫_{y1=0}^{∞} λe^{−2λy1} ∫_{y2=y1}^{∞} λe^{−2λy2} · · · ∫_{yn−2=yn−3}^{∞} λe^{−4λyn−2} dyn−2 · · · dy1
= (n!/(2 · 4)) ∫_{y1=0}^{∞} λe^{−2λy1} ∫_{y2=y1}^{∞} λe^{−2λy2} · · · ∫_{yn−3=yn−4}^{∞} λe^{−6λyn−3} dyn−3 · · · dy1
.
.
.
= n!/(2 · 4 · 6 · · · 2(n − 1)) = n (1/2)^{n−1}.

(The innermost integration over yn produces the factor e^{−λ(y1+···+yn−1)}, which doubles each remaining exponent; each subsequent integration then contributes a factor 1/2, 1/4, and so on.)
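The result P(A) = n(1/2)^{n−1} can be checked by simulation. The sketch below is illustrative Python with the assumed choice n = 4, for which the formula gives 4/8 = 0.5.

```python
# Monte Carlo check of P(longest stick > sum of the others) = n / 2^(n-1)
# for n iid exponential stick lengths (illustrative n = 4).
import random

random.seed(3)
n, trials = 4, 200_000
count = 0
for _ in range(trials):
    sticks = [random.expovariate(1.0) for _ in range(n)]
    longest = max(sticks)
    if longest > sum(sticks) - longest:
        count += 1
print(count / trials)   # should be close to 4/8 = 0.5
```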
5.14. Let E be the event that Ax² + Bx + C = 0 has only real roots. Then

P(E) = P(B² − 4AC ≥ 0)
= ∫_{a=0}^{∞} ∫_{c=0}^{∞} ∫_{b=2√(ac)}^{∞} λ³ e^{−λ(a+b+c)} db dc da
= ∫_{a=0}^{∞} ∫_{c=0}^{∞} λ² e^{−λ(a+c+2√(ac))} dc da
= ∫_{a=0}^{∞} ∫_{c=0}^{∞} λ² e^{−λ(√a+√c)²} dc da
= ∫_{u=0}^{∞} ∫_{v=0}^{∞} 4λ² uv e^{−λ(u+v)²} du dv = 1/3,

with the substitution u = √a, v = √c,
where the last equality follows from a standard table of integrals.

5.15. Since U ∼ U(0, 1),

P(−ln(U) > u) = P(U ≤ e^{−u}) = e^{−u}, u ≥ 0.

Hence −ln(U) ∼ Exp(1).

5.16. Let Si be the service time of the ith customer, customer 1 being in service. Due to the memoryless property, {Si , i ≥ 1} are iid Exp(µ). The first-come first-served service discipline implies that W = S1 + S2 + · · · + SN, with W = 0 if N = 0. Following the same calculation as in Section 5.1.8, we get

E(e^{−sW}) = ∑_{n=0}^{∞} (1 − ρ)ρ^n (µ/(s + µ))^n = (1 − ρ) + ρ (1 − ρ)µ/(s + (1 − ρ)µ).
The last expression can be thought of as 1−ρ times the LST of a degenerate rv taking value 0 plus ρ times the LST of an Exp((1 − ρ)µ) random variable. Hence, inverting it yields: P (W = 0) = 1 − ρ,
P (W > t) = ρe−(1−ρ)µt ,
t ≥ 0.
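The inverted LST can be checked by simulation: draw N with P(N = n) = (1 − ρ)ρⁿ, set W = S1 + · · · + SN with Si ∼ Exp(µ), and compare the empirical tail with ρe^{−(1−ρ)µt}. A sketch (function name and parameter values are ours):

```python
import math
import random

def simulate_W(rho, mu, trials=200_000, seed=7):
    """Sample W = S1 + ... + SN with P(N = n) = (1 - rho) * rho**n and Si ~ Exp(mu)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        n = 0
        while rng.random() < rho:   # geometric number of customers found in system
            n += 1
        samples.append(sum(rng.expovariate(mu) for _ in range(n)))
    return samples

rho, mu, t = 0.5, 1.0, 1.0
ws = simulate_W(rho, mu)
p_zero = sum(1 for w in ws if w == 0.0) / len(ws)   # predicted: 1 - rho
p_tail = sum(1 for w in ws if w > t) / len(ws)      # predicted: rho * exp(-(1-rho)*mu*t)
```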
5.17. Let Li be the lifetime of the ith component. Then {Li, 1 ≤ i ≤ k} are iid Exp(λ) rvs. The system lifetime is then given by L = L1 + L2 + · · · + Lk ∼ Erl(k, λ). Hence we need to choose the smallest k such that
P(L > T) = e^{−λT} ∑_{i=0}^{k−1} (λT)^i/i! ≥ α.
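The smallest such k can be found by direct search over the Erlang survival function. A sketch (function names are ours; the sample values in the comment assume λ = 1, T = 1, α = 0.9):

```python
import math

def survival(k, lam, T):
    """P(Erl(k, lam) > T) = exp(-lam*T) * sum_{i<k} (lam*T)^i / i!."""
    return math.exp(-lam * T) * sum((lam * T) ** i / math.factorial(i) for i in range(k))

def smallest_k(lam, T, alpha):
    """Smallest number of components meeting the reliability target alpha."""
    k = 1
    while survival(k, lam, T) < alpha:
        k += 1
    return k

# For lam = 1, T = 1, alpha = 0.9: survival(2) = 2e^{-1} ≈ 0.736 is too small,
# while survival(3) = 2.5e^{-1} ≈ 0.920 suffices, so the answer is k = 3.
```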
5.18. Let Xi be the lifetime of the ith component, and L = min(X1, X2) be the lifetime of the system; here X1 ∼ Exp(λ) and X2 ∼ Erl(n, µ). Then
P(L > t) = P(min(X1, X2) > t) = P(X1 > t, X2 > t) = P(X1 > t)P(X2 > t) = e^{−(λ+µ)t} ∑_{r=0}^{n−1} (µt)^r/r!.
Hence, using the result from Appendix B7, we get
E(L) = ∫₀^∞ P(L > t) dt = ∑_{r=0}^{n−1} (µ/(λ + µ))^r ∫₀^∞ e^{−(λ+µ)t} ((λ + µ)t)^r/r! dt = (1/(λ + µ)) ∑_{r=0}^{n−1} (µ/(λ + µ))^r = (1/λ)[1 − (µ/(λ + µ))^n].
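The mean-lifetime formula can be verified by simulating min(X1, X2) directly. A sketch (function name, seed, and the test values λ = µ = 1, n = 2 are ours; for those values the formula gives E(L) = 1 − (1/2)² = 0.75):

```python
import random

def mean_min_lifetime(lam, mu, n, trials=200_000, seed=11):
    """Monte Carlo E[min(X1, X2)] with X1 ~ Exp(lam) and X2 ~ Erl(n, mu)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x1 = rng.expovariate(lam)
        x2 = sum(rng.expovariate(mu) for _ in range(n))  # Erlang = sum of exponentials
        total += min(x1, x2)
    return total / trials

lam, mu, n = 1.0, 1.0, 2
est = mean_min_lifetime(lam, mu, n)
exact = (1.0 / lam) * (1.0 - (mu / (lam + mu)) ** n)
```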
5.19. Let Z = X1/X2. Then
P(Z > z) = P(X1 > zX2) = ∫₀^∞ P(X1 > zX2 | X2 = x) λ2 e^{−λ2 x} dx = ∫₀^∞ e^{−λ1 zx} λ2 e^{−λ2 x} dx = λ2/(λ2 + λ1 z).
Also
E(Z) = ∫₀^∞ P(Z > z) dz = ∫₀^∞ λ2/(λ2 + λ1 z) dz = ∞.
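The tail formula P(Z > z) = λ2/(λ2 + λ1 z) is easy to check by sampling the ratio of two independent exponentials. A sketch (names and parameters are ours; with λ1 = λ2 = 1 and z = 1 the tail probability is 1/2):

```python
import random

def tail_ratio(z, lam1, lam2, trials=200_000, seed=3):
    """Monte Carlo estimate of P(X1/X2 > z) for independent exponentials."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.expovariate(lam1) / rng.expovariate(lam2) > z
    )
    return hits / trials

est = tail_ratio(1.0, 1.0, 1.0)   # predicted: lam2/(lam2 + lam1*z) = 0.5
```

The heavy tail (decaying only like 1/z) is exactly why E(Z) diverges.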
5.20.
P(N(t) = k | N(t + s) = k + m)
= P(N(t) = k, N(t + s) = k + m)/P(N(t + s) = k + m)
= P(N(t) = k, N(t + s) − N(t) = m)/P(N(t + s) = k + m)
= P(N(t) = k) P(N(t + s) − N(t) = m)/P(N(t + s) = k + m)
= [e^{−λt} (λt)^k/k!] [e^{−λs} (λs)^m/m!] / [e^{−λ(s+t)} (λ(s + t))^{k+m}/(k + m)!]
= C(k + m, m) (t/(s + t))^k (s/(s + t))^m.
5.21. Use generating functions.
5.22. Let Si be the remaining service time of the customer being served by server i. Then {S1, ..., Ss} are iid Exp(µ) rvs. Let Ti be the ith inter-departure time, and Ai be the ith inter-arrival time. Then T1 = min(S1, ..., Ss) ∼ Exp(sµ), and the Ai are iid Exp(λ).
(a) P{an arrival before a service completion} = P(A1 < T1) = λ/(λ + sµ).
(b) P{at least j arrivals before a service completion} = P(A1 + A2 + ... + Aj ≤ T1) = (λ/(λ + sµ))^j. Hence P{exactly j arrivals before a service completion} = (λ/(λ + sµ))^j · sµ/(λ + sµ).
(c) P{at least 2 service completions before an arrival} = (sµ/(λ + sµ)) · ((s − 1)µ/(λ + (s − 1)µ)).
5.23.
E(S_{N(t)}) = ∑_{n=0}^∞ E(Sn | N(t) = n) P{N(t) = n} = ∑_{n=0}^∞ (nt/(n + 1)) e^{−λt} (λt)^n/n! = ∑_{n=0}^∞ (1 − 1/(n + 1)) t e^{−λt} (λt)^n/n! = t − (1 − e^{−λt})/λ.
5.24. From Theorem 5.8, N1 and N2 are independent Poisson processes. Hence T1 and T2 are independent, with joint distribution
P(T1 ≤ t1, T2 ≤ t2) = (1 − e^{−λp1 t1})(1 − e^{−λp2 t2}).
5.25. P{N(t) is odd} = ∑_{i=0}^∞ P{N(t) = 2i + 1} = e^{−λt} (1/2)[∑_{i=0}^∞ (λt)^i/i! − ∑_{i=0}^∞ (−λt)^i/i!] = (1 − e^{−2λt})/2.
5.26. Let λ = ∑_{i=1}^k λi pi. The desired probability is (by Equation 5.30)
P{T > t, S = i} = P{T > t | S = i} P{S = i} = (λi pi/λ) e^{−λt}.
5.27. The counter is not dead at time t if there are no events during the interval (t − τ, t], the probability of which is e^{−λτ}.
5.28. Let N(t) be the number of arrivals over (0, t]. Then N is a PP(8).
(a) E(N(8)) = 8 · 8 = 64, Var(N(8)) = 8 · 8 = 64.
(b) P(N(1) > 4) = 1 − e^{−8}(1 + 8 + 8²/2 + 8³/6 + 8⁴/24) = .9004.
(c) P(N(.25) = 0) = e^{−8·.25} = .1353.
(d) Using the property of covariance from Appendix D3, and Equation 5.64, we get Cov(N(11) − N(9), N(12) − N(10)) = Cov(N(11), N(12)) − Cov(N(9), N(12)) − Cov(N(11), N(10)) + Cov(N(9), N(10)) = 8(11 − 9 − 10 + 9) = 8. Hence the correlation coefficient is Cov(N(11) − N(9), N(12) − N(10))/√(Var(N(11) − N(9)) Var(N(12) − N(10))) = 8/√(16 · 16) = .5.
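The numbers in 5.28 are quick to reproduce with stdlib math (variable names are ours):

```python
import math

rate = 8.0

# (a) Mean and variance of N(8) for a Poisson process are both rate * t.
mean_n8 = rate * 8
var_n8 = rate * 8

# (b) P(N(1) > 4) = 1 - sum_{i<=4} e^{-8} 8^i / i!.
p_b = 1.0 - sum(math.exp(-rate) * rate ** i / math.factorial(i) for i in range(5))

# (c) P(N(0.25) = 0) = e^{-2}.
p_c = math.exp(-rate * 0.25)

# (d) Cov of the overlapping increments is rate * (overlap length) = 8;
#     each increment has variance rate * 2 = 16.
corr = 8.0 / math.sqrt(16.0 * 16.0)
```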
5.29. Let T be the time the pedestrian has to wait and X1 the time until the first car crosses the pedestrian crossing; the pedestrian needs a gap of x/u to cross. Then
T = 0 if X1 > x/u, and T = X1 + T′ if X1 ≤ x/u,
where T′ is an independent copy of T. Hence E(T) = E(X1 + T′ | X1 ≤ x/u)(1 − e^{−λx/u}), which gives e^{−λx/u} E(T) = E(X1 | X1 ≤ x/u)(1 − e^{−λx/u}). Hence
E(T) = e^{λx/u} ∫₀^{x/u} λt e^{−λt} dt = (e^{λx/u} − 1)/λ − x/u.
5.30. (i) No. {R(t), t ≥ 0} is not a PP: the time until the first failure is U1, while the time between the first and second failures is D1 + U2, which has a different distribution.
(ii) Yes. The time between the nth and (n + 1)st repair is Un + Dn = (1 + c)Un ∼ Exp(λ/(1 + c)), and these times are independent. Hence {R(t), t ≥ 0} is a PP(λ/(1 + c)).
5.31. (This is a special case of Example 5.17 with G(t) = 1 − e^{−µt}.) If a customer arrives at time s ≤ t, he or she will be in the system at t with probability p(s) = e^{−µ(t−s)}. Suppose that an event at time s in a PP(λ) is registered with probability p(s). Then N1(t) = # of registered events up to t = # of customers in the system at time t. Hence E(N1(t)) = λ ∫₀ᵗ p(s) ds = (λ/µ)(1 − e^{−µt}).
5.32. Let Li be the lifetime of patient i, and Sn be the time when the nth kidney becomes available. Thus the first patient receives a kidney if L1 > S1. This probability is λ/(λ + µ1). The second patient receives a kidney if L1 < S1 and L2 > S1, or L1 > S1 and L2 > S2. We have
A = P(L1 < S1, L2 > S1) = ∫₀^∞ P(L1 < t, L2 > t) λe^{−λt} dt = λ/(λ + µ2) − λ/(λ + µ1 + µ2),
B = P(L1 > S1, L2 > S2) = P(L1 > S1, L2 > S1) P(L2 − S1 > S2 − S1 | L2 > S1) = (λ/(λ + µ1 + µ2)) · (λ/(λ + µ2)).
The desired probability is given by
A + B = λ/(λ + µ2) − (λ/(λ + µ1 + µ2)) · (µ2/(λ + µ2)).
5.33. Let N(t) be the number of meteors seen up to time t. Then N(t) ∼ PP(1).
(a) P{N(1) = 0} = e^{−1} = .3679.
(b) P{N(1) = 2} = (1)² e^{−1}/2 = .1839. P{N(1) > 2} = 1 − e^{−1}(1 + 1 + 1/2) = 1 − .3679 − .3679 − .1839 = .0803.
5.34. The probability that a shock permanently damages a machine is
p = ∫₀^∞ p(x) dG(x).
The process of permanently damaging shocks is a PP(λp). The machine fails when the first permanently damaging shock occurs, which happens after an Exp(λp) amount of time.
5.35. The number of women arriving at the store forms a PP(6), and the process is independent of the number of men arriving at the store by Theorem 5.8. Hence the answer is 6.
5.36. We have, for k = 0, 1, 2, ...,
Λ(t) = ∫₀ᵗ λ(u) du = c(t − k) if 2k ≤ t < 2k + 1, and c(k + 1) if 2k + 1 ≤ t < 2k + 2.
From setting n = 0 in Theorem 5.21, we get P(X1 > t) = exp{−Λ(t)}.
5.37. With Λ(t) as defined in the previous exercise, we get
P(N(t) = k) = exp{−Λ(t)} Λ(t)^k/k!, k ≥ 0.
5.38. From Theorem 5.22 we see that, given N(t) = n, X1 is distributed as the minimum of n iid random variables with common cdf Λ(u)/Λ(t), 0 ≤ u ≤ t, where Λ(·) is as given in the solution to Computational Exercise 5.36. Hence
E(X1 | N(t) = n) = ∫₀ᵗ P(X1 > u) du = ∫₀ᵗ (1 − Λ(u)/Λ(t))^n du.
Carrying out the integrals we get E(X1 | N(t) = n) = min(t, 1)/(n + 1), 0 ≤ t ≤ 2.
5.39. Let {N(t), t ≥ 0} be an NPP with rate function λ(·). Suppose an event occurring in this process at time t is registered with probability p(t), independently of other events. Let R(t) be the number of registered events over (0, t]. Clearly, R(0) = 0. Furthermore, P(R(t + h) = k + 1 | R(t) = k) = P(N(t + h) − N(t) = 1, and the event is registered) = λ(t)p(t)h + o(h). Similarly,
P(R(t + h) = k | R(t) = k) = 1 − λ(t)p(t)h + o(h),
P(R(t + h) = k + j | R(t) = k) = o(h), j ≥ 2.
The R process inherits the independent-increments property from the N process. Hence, from Definition 5.5, the R process is an NPP(λ(t)p(t)). Now following the argument in Example 5.17, we see that R(t), the number of customers in the library at time t, is a P(∫₀ᵗ λ(u)(1 − G(t − u)) du) random variable.
5.40. Let the number of items produced by time t be N^I(t) (a PP(λ)) and the number of trucks that arrive at the depot by time t be N^T(t) (a PP(µ)). Then Z(t) = N^I(S_{N^T(t)}) is neither a PP nor an NPP. It is also not a CPP, since the batch sizes are not independent of the arrival times.
E{Z(t)} = E{N^I(S_{N^T(t)})} = E{E{N^I(S_{N^T(t)}) | S_{N^T(t)}}} = E{λ S_{N^T(t)}} = λ E{S_{N^T(t)}},
where E{S_{N^T(t)}} = t − (1/µ)(1 − e^{−µt}) from Computational Exercise 5.23.
5.41. Let N1(t) ∼ PP(λd). Then Z1(t) = amount deposited by the customer up to t is a CPP with E(Z1(t)) = λd τd t and Var(Z1(t)) = λd(τd² + σd²)t.
Let N2(t) ∼ PP(λw) and Z2(t) = amount withdrawn by the customer's spouse up to time t. N(t) = N1(t) + N2(t) = number of transactions up to t is a PP(λ), where λ = λd + λw.
Z(t) = Z1(t) + Z2(t) = ∑_{n=1}^{N(t)} Zn, where {Zn, n ≥ 1} are iid with mean τ = (λd/(λd + λw)) τd + (λw/(λd + λw)) τw and second moment s² = (λd/(λd + λw))(τd² + σd²) + (λw/(λd + λw))(τw² + σw²). Hence Z(t) is a CPP. Thus E(Z(t)) = λτt and Var(Z(t)) = λs²t.
5.42. We have
A(s) = E(e^{−sZn}) = ∑_{n=1}^∞ e^{−sn} (1 − α)^{n−1} α = αe^{−s}/(1 − e^{−s}(1 − α)).
The LST of Z(t) is then given by Equation 5.138 with the above A(s). Using the fact that a sum of iid geometric random variables is a negative binomial random variable, we get
P(Z(t) = k) = ∑_{n=0}^k P(Z(t) = k | N(t) = n) P(N(t) = n) = ∑_{n=0}^k P(NB(n, α) = k) P(N(t) = n)
= δ_{k,0} e^{−λt} + ∑_{n=1}^k C(k − 1, n − 1) α^n (1 − α)^{k−n} e^{−λt} (λt)^n/n!.
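The pmf of this compound Poisson process with geometric batches can be checked by simulation. A sketch (names, seed, and the values λ = t = 1, α = 0.5 are ours):

```python
import math
import random

def pmf_formula(k, lam, t, alpha):
    """P(Z(t) = k) for a CPP with Geometric(alpha) batches (formula above)."""
    if k == 0:
        return math.exp(-lam * t)
    return sum(
        math.comb(k - 1, n - 1) * alpha ** n * (1 - alpha) ** (k - n)
        * math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
        for n in range(1, k + 1)
    )

def sample_Z(lam, t, alpha, rng):
    """One sample of Z(t): a Poisson(lam*t) number of iid Geometric(alpha) batches."""
    z, s = 0, rng.expovariate(lam)
    while s <= t:                      # count Poisson events via exponential gaps
        n = 1
        while rng.random() > alpha:    # geometric batch size on {1, 2, ...}
            n += 1
        z += n
        s += rng.expovariate(lam)
    return z

rng = random.Random(19)
lam, t, alpha, trials = 1.0, 1.0, 0.5, 200_000
counts = [0] * 10
for _ in range(trials):
    z = sample_Z(lam, t, alpha, rng)
    if z < 10:
        counts[z] += 1
est = [c / trials for c in counts]
```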
5.43. 1. We have: N(t) jumps by one over (t, t + h] if the machine that hasn't failed until time t fails in (t, t + h]. This happens with probability r(t)h + o(h). Hence P(N(t + h) − N(t) = 1) = r(t)h + o(h). Similarly P(N(t + h) − N(t) = 0) = 1 − r(t)h + o(h), etc. Also, {N(t), t ≥ 0} has independent increments, due to the definition of minimal repair.
2. Let R(t) = ∫₀ᵗ r(u) du. Then N(t) ∼ P(R(t)). Hence
E(C(t)) = c E(N(t)(N(t) + 1)/2) = (c/2)(R(t) + R(t)² + R(t)) = c(R(t) + R(t)²/2).
3. We have R(t) = t². Hence E(N(t)) = t² and E(C(t)) = 3ct²/2.
5.44. 1. The probability that a customer joins the checkout queue is U ∼ U(0, 1). Thus the expected rate at which customers join the checkout queue is λ/2.
2. A customer arrives at time V ∼ U(0, t), stays in the shopping area for U ∼ U(0, 1), and then joins the checkout queue at time V + U with probability U. Hence we have qt = E(U P(V + U < t)). Consider two cases:
1. t < 1. Then qt = (1/t) ∫₀ᵗ ∫₀^{t−v} u du dv = t²/6.
2. t ≥ 1. Then qt = (1/t) [∫_{v=t−1}^{t} ∫₀^{t−v} u du dv + ∫₀^{t−1} ∫₀^{1} u du dv] = 1/2 − 1/(3t).
3. A(t) is a Poisson r.v. with parameter λqt.
4. {A(t), t ≥ 0} is not a Poisson process. It is a nonhomogeneous Poisson process with rate function λ(t) = λqt.
5.45. Let Ln = N(n + 1) − N(n) be the total number of sales in year n. Since {N(t), t ≥ 0} is an NPP, it follows that Ln is a P(θn) random variable, where
θn = ∫_n^{n+1} λ(t) dt = 200 + 200e^{−n}(1 − e^{−1}).
Hence the mean of the sales in year n is $350(1.08)^n θn, and the variance is $350²(1.08)^{2n} θn.
Conceptual Exercises
5.1. From Equation 5.14, r(x) = f(x)/(1 − F(x)). Integrating both sides, we get
∫₀ˣ r(u) du = ∫₀ˣ f(u)/(1 − F(u)) du = −ln(1 − F(x)).
Hence
P(X > x) = 1 − F(x) = exp(−∫₀ˣ r(u) du).
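The hazard-rate identity can be verified numerically for any hazard with a known survival function. A sketch using a Weibull-type hazard r(u) = 2u (our choice of example), for which P(X > x) = exp(−x²):

```python
import math

def survival_from_hazard(r, x, steps=100_000):
    """Numerically evaluate exp(-integral_0^x r(u) du) via the trapezoidal rule."""
    h = x / steps
    integral = sum(0.5 * (r(i * h) + r((i + 1) * h)) * h for i in range(steps))
    return math.exp(-integral)

approx = survival_from_hazard(lambda u: 2.0 * u, 1.5)
exact = math.exp(-1.5 ** 2)
```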
5.2. Let Xi, i = 1, 2, ..., n be iid Exp(λ) and Yi, i = 1, 2, ..., m be iid Exp(µ) random variables, and define X = X1 + X2 + · · · + Xn, Y = Y1 + Y2 + · · · + Ym. Then, from Section 5.1.6, X ∼ Erl(n, λ) and Y ∼ Erl(m, µ). Then
F(n, m) = P(X1 + X2 + · · · + Xn < Y1 + Y2 + · · · + Ym)
= P(X1 + · · · + Xn < Y1 + · · · + Ym | Xn < Ym) P(Xn < Ym) + P(X1 + · · · + Xn < Y1 + · · · + Ym | Xn ≥ Ym) P(Xn ≥ Ym)
= (λ/(λ + µ)) P(X1 + · · · + Xn−1 < Y1 + · · · + Ym − Xn | Xn < Ym) + (µ/(λ + µ)) P(X1 + · · · + Xn − Ym < Y1 + · · · + Ym−1 | Xn ≥ Ym)
= (λ/(λ + µ)) P(X1 + · · · + Xn−1 < Y1 + · · · + Ym) + (µ/(λ + µ)) P(X1 + · · · + Xn < Y1 + · · · + Ym−1),
since the distribution of Ym − Xn given Xn < Ym is the same as that of Ym, and that of Xn − Ym given Xn ≥ Ym is the same as that of Xn. Hence
F(n, m) = (λ/(λ + µ)) F(n − 1, m) + (µ/(λ + µ)) F(n, m − 1).
It can be shown by induction that F(n, m) given in the problem is the solution to the above equations.
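The recursion for F(n, m) can be evaluated directly with the boundary conditions F(0, m) = 1 and F(n, 0) = 0. A sketch (function names are ours; the asserted values use λ = µ, for which F(1, 1) = 1/2 and F(2, 1) = P(X1 + X2 < Y1) = (1/2)² = 1/4):

```python
from functools import lru_cache

def make_F(lam, mu):
    """F(n, m) = P(Erl(n, lam) < Erl(m, mu)) via the recursion above."""
    @lru_cache(maxsize=None)
    def F(n, m):
        if n == 0:
            return 1.0      # an empty sum of X's (= 0) is always smaller
        if m == 0:
            return 0.0      # an empty sum of Y's (= 0) is never larger
        p = lam / (lam + mu)
        return p * F(n - 1, m) + (1 - p) * F(n, m - 1)
    return F

F = make_F(1.0, 1.0)
```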
5.3.
P(Xi < Xj, j ≠ i) = ∫₀^∞ λi e^{−λi x} P(Xj > x, j ≠ i) dx = ∫₀^∞ λi e^{−λi x} ∏_{j≠i} e^{−λj x} dx = λi ∫₀^∞ e^{−λx} dx = λi/λ,
where λ = λ1 + · · · + λn.
5.4. Let f(·) be the density of T. Conditioning on T we get
P(Xi > si + T, 1 ≤ i ≤ n | Xi > T, 1 ≤ i ≤ n)
= ∫₀^∞ P(Xi > si + T, 1 ≤ i ≤ n | Xi > T, 1 ≤ i ≤ n, T = t) f(t) dt
= ∫₀^∞ P(Xi > si + t, 1 ≤ i ≤ n | Xi > t, 1 ≤ i ≤ n) f(t) dt
= ∫₀^∞ ∏_{i=1}^n e^{−λi si} f(t) dt (memoryless property of exponentials)
= ∏_{i=1}^n e^{−λi si}.
5.5. Taking the Laplace transform of Equation 5.12, we get LT(p0′(t)) = −λ LT(p0(t)). Using the properties of the LT, we get s p0*(s) − p0(0) = −λ p0*(s). Using the initial condition p0(0) = 1, we get (λ + s) p0*(s) = 1. Similarly, taking the LT of Equation 5.11 (with k > 0) and using the initial condition pk(0) = 0, we get s pk*(s) = −λ pk*(s) + λ p*_{k−1}(s). This yields (λ + s) pk*(s) = λ p*_{k−1}(s). Solving recursively, we get
pk*(s) = λ^k/(λ + s)^{k+1}, k ≥ 0.
Using Table F3 on page 585, we can invert the above to get pk(t) = e^{−λt} (λt)^k/k!.
5.6. (a) We have
P(N(t) = k) = p P(N1(t) = k) + (1 − p) P(N2(t) = k) = p e^{−λ1 t} (λ1 t)^k/k! + (1 − p) e^{−λ2 t} (λ2 t)^k/k!.
This is not a Poisson pmf unless p = 0 or p = 1 or λ1 = λ2. Hence the answer is: No, unless p = 0 or p = 1 or λ1 = λ2.
(b) Yes, since
P(N(t + s) − N(s) = k) = p P(N1(t + s) − N1(s) = k) + (1 − p) P(N2(t + s) − N2(s) = k) = p e^{−λ1 t} (λ1 t)^k/k! + (1 − p) e^{−λ2 t} (λ2 t)^k/k!,
which is a function of t only.
(c) No. Let 0 < t1 < t2 < t3 < t4. The value of N(t2) − N(t1) can be used to compute the probability that N = N1, which will affect the pmf of N(t4) − N(t3). In the extreme case, suppose λ1 = 0 and λ2 > 0. Then N(t2) − N(t1) > 0 ⇒ N = N2. This will make N(t4) − N(t3) a P(λ2(t4 − t3)) rv.
5.7. From Definition 5.7 we see that N(0) = 0, N(t + s) ∼ P(Λ(t + s)), and N(s) ∼ P(Λ(s)). Hence
E(z^{N(t+s)}) = exp(−(1 − z)Λ(t + s)), E(z^{N(s)}) = exp(−(1 − z)Λ(s)).
Now, independence of increments implies that N(t + s) − N(s) is independent of N(s). Hence E(z^{N(t+s)}) = E(z^{N(t+s)−N(s)+N(s)}) = E(z^{N(t+s)−N(s)}) E(z^{N(s)}). This yields E(z^{N(t+s)−N(s)}) = exp(−(1 − z)(Λ(t + s) − Λ(s))). This implies that N(t + s) − N(s) ∼ P(Λ(t + s) − Λ(s)).
5.8. Property (i) follows from Definition 5.7. We shall show property (ii). By definition N(0) = 0 and N(t + h) − N(t) ∼ P(Λ(t + h) − Λ(t)). Thus
P(N(t + h) − N(t) = 0) = exp(−∫_t^{t+h} λ(u) du) = exp(−λ(t)h + o(h)) = 1 − λ(t)h + o(h).
P(N(t + h) − N(t) = 1) = (∫_t^{t+h} λ(u) du) exp(−∫_t^{t+h} λ(u) du) = (λ(t)h + o(h)) exp(−λ(t)h + o(h)) = λ(t)h + o(h).
5.9. (1) Let S_j^i be the time of occurrence of the ith event in the jth process. From the properties of Poisson processes, S_j^i ∼ Erl(i, λj). Hence, using the notation F(n, m) of Conceptual Exercise 5.2, we see that
P(A1 ≥ k) = P(S_1^k ≤ S_2^1) = F(k, 1) = (λ1/(λ1 + λ2))^k.
Thus Aj is an MG(λj/(λ1 + λ2)) rv.
(2) No, since P(A1 ≥ 2, A2 ≥ 2) = 0 ≠ (λ1/(λ1 + λ2))² · (λ2/(λ1 + λ2))².
5.10. Using independence of increments we get
P(N(t1) = k1, N(t2) = k2, · · · , N(tn) = kn)
= P(N(t1) = k1, N(t2) − N(t1) = k2 − k1, · · · , N(tn) − N(tn−1) = kn − kn−1)
= P(N(t1) = k1) P(N(t2) − N(t1) = k2 − k1) · · · P(N(tn) − N(tn−1) = kn − kn−1)
= e^{−Λ(t1)} (Λ(t1))^{k1}/k1! · e^{−(Λ(t2)−Λ(t1))} (Λ(t2) − Λ(t1))^{k2−k1}/(k2 − k1)! · · · e^{−(Λ(tn)−Λ(tn−1))} (Λ(tn) − Λ(tn−1))^{kn−kn−1}/(kn − kn−1)!
= e^{−Λ(tn)} (Λ(t1))^{k1}/k1! · (Λ(t2) − Λ(t1))^{k2−k1}/(k2 − k1)! · · · (Λ(tn) − Λ(tn−1))^{kn−kn−1}/(kn − kn−1)!.
5.11. From the properties of the NPP we get
P(Si ∈ (ti, ti + dti), 1 ≤ i ≤ n, Sn+1 > t) = ∏_{i=1}^n λ(ti) dti exp{−(Λ(ti) − Λ(ti−1))} · exp{−(Λ(t) − Λ(tn))}.
Using this in the proof of Theorem 5.14 we get
P(Si ∈ (ti, ti + dti), 1 ≤ i ≤ n | N(t) = n) = exp{−Λ(t)} ∏_{i=1}^n λ(ti) dti / [exp{−Λ(t)} Λ(t)^n/n!].
Hence the joint pdf of (S1, S2, ..., Sn) given N(t) = n is given by
f(t1, t2, ..., tn) = (n!/Λ(t)^n) λ(t1) λ(t2) ... λ(tn), 0 ≤ t1 ≤ t2 ≤ ... ≤ tn ≤ t.
But this is the joint pdf of the order statistics of n iid random variables with common pdf λ(u)/Λ(t), 0 ≤ u ≤ t. This completes the proof.
5.12. (a) Note that, given T, Bi is a P(λi T) rv. Hence ψi(z)
= E(z^{Bi}) = E(E(z^{Bi} | T)) = E(e^{−λi T(1−z)}) = φT(λi(1 − z)).
(b) Following the same steps as in (a), we get ψ12(z1, z2) = φT(λ1(1 − z1) + λ2(1 − z2)).
(c) No, since ψ12(z1, z2) is not a product form in general.
(d) Yes, since in this case ψ12(z1, z2) = e^{−λ1 τ(1−z1)} · e^{−λ2 τ(1−z2)} = ψ1(z1) ψ2(z2).
5.13. (i) No. (ii) No. (iii) No.
5.14. First note that
P(Xn+1 > t | Sn = s) = P(no events in (s, t + s)) = exp{−(Λ(t + s) − Λ(s))}.
Also,
P(Sn ≤ s) = P(N(s) ≥ n) = ∑_{k=n}^∞ exp(−Λ(s)) (Λ(s))^k/k!.
Differentiating the above gives the pdf of Sn as
fn(s) = λ(s) exp(−Λ(s)) (Λ(s))^{n−1}/(n − 1)!.
Finally, using
P(Xn+1 > t) = ∫₀^∞ P(Xn+1 > t | Sn = s) fn(s) ds
yields Theorem 5.21.
5.15. Since {Ni(t), t ≥ 0} (1 ≤ i ≤ r) are independent NPPs and have independent increments, we see that Ni(t) ∼ P(Λi(t)) and {N(t), t ≥ 0} has independent increments. Thus to show that it is an NPP(λ(·)) with
λ(t) = λ1(t) + λ2(t) + · · · + λr(t), t ≥ 0,
it suffices to show that N(t) ∼ P(Λ(t)), where
Λ(t) = ∫₀ᵗ λ(u) du = ∑_{i=1}^r Λi(t).
This follows since the Ni(t) are independent Poisson random variables.
5.16. Let {N(t), t ≥ 0} be a PP(λ), and {N1(t), t ≥ 0} be obtained from it by nonhomogeneous Bernoulli splitting with splitting function p(·). Hence N1(0) = 0. Also, N1(t + s) − N1(t) depends only on {p(u), t ≤ u ≤ t + s} and N(t + s) − N(t). Hence {N1(t), t ≥ 0} inherits the independent-increments property of {N(t), t ≥ 0}. Next, P(N1(t + h) − N1(t) = k + 1 | N1(t) = k) = P(N(t + h) − N(t) = 1, and the event is registered) = p(t)λh + o(h). Similarly, the other equations in Equation 5.119 can be verified. Thus, from Definition 5.5, {N1(t), t ≥ 0} is an NPP(λp(·)).
5.17. Let {N(t), t ≥ 0} be an NPP(λ(·)). Let Sk be the kth event time in the NPP. Define Xk = 1 if the kth event is registered, and 0 otherwise. Then, given Sk = s, Xk ∼ B(p(s)). Now P(N1(t) = k)
= ∑_{n=0}^∞ P(N1(t) = k | N(t) = n) P(N(t) = n)
= ∑_{n=k}^∞ P(∑_{i=1}^{N(t)} Xi = k | N(t) = n) P(N(t) = n)
= ∑_{n=k}^∞ P(∑_{i=1}^n B(p(Si)) = k | N(t) = n) P(N(t) = n)
= ∑_{n=k}^∞ P(∑_{i=1}^n B(p(Ui)) = k | N(t) = n) P(N(t) = n),
where {Ui, i ≥ 1} are iid random variables with common density λ(u)/Λ(t) for 0 ≤ u ≤ t. Now let Yi ∼ B(p(Ui)). Then
P(Yi = 1) = (1/Λ(t)) ∫₀ᵗ p(u) λ(u) du.
Thus {Yi, i ≥ 1} are iid B(α) random variables with α = (1/Λ(t)) ∫₀ᵗ p(u) λ(u) du. Hence
P(N1(t) = k) = ∑_{n=k}^∞ P(Bin(n, α) = k | N(t) = n) P(N(t) = n).
The rest of the calculations follow as in Theorem 5.18.
5.18. Using independence of increments we get
P(Z(t1) = k1, Z(t2) = k2, · · · , Z(tn) = kn)
= P(Z(t1) = k1, Z(t2) − Z(t1) = k2 − k1, · · · , Z(tn) − Z(tn−1) = kn − kn−1)
= P(Z(t1) = k1) P(Z(t2) − Z(t1) = k2 − k1) · · · P(Z(tn) − Z(tn−1) = kn − kn−1)
= p_{k1}(t1) p_{k2−k1}(t2 − t1) · · · p_{kn−kn−1}(tn − tn−1).
5.19. From Theorem 5.24 we get E(e^{−sZ(t)}) = φ(s) = e^{−λt(1−A(s))}, where A(s) = E(e^{−sZn}). We have A(0) = 1, E(Zn) = −A′(0), E(Zn²) = A″(0). Taking derivatives, we get
φ′(s) = λt A′(s) e^{−λt(1−A(s))}, φ″(s) = λt A″(s) e^{−λt(1−A(s))} + (λt A′(s))² e^{−λt(1−A(s))}.
Hence
E(Z(t)) = −φ′(0) = −λt A′(0) e^{−λt(1−A(0))} = λt E(Zn),
E(Z(t)²) = φ″(0) = λt A″(0) + (λt A′(0))² = λt E(Zn²) + (E(Z(t)))².
Theorem 5.25 follows from this.
5.20. Let the common batch mean be E(Z1), second moment E(Z1²), and LST
A(s). From Equation 5.127 we know that N(t) ∼ P(Λ(t)). Then, following the proof of Theorem 5.13, we get E(e^{−sZ(t)}) = exp(−Λ(t)(1 − A(s))). Similarly, following the calculations in Theorem 5.14, we get E(Z(t)) = Λ(t) E(Z1); Var(Z(t)) = Λ(t) E(Z1²).
5.21. It is not an NPP since it does not have the independent-increments property.
5.22. We get (Ũ1, Ũ2, · · · , Ũn) = (t1, t2, · · · , tn) if and only if (U1, U2, · · · , Un) equals some permutation of (t1, t2, · · · , tn). Since all n! permutations are equally likely, and the probability of getting any one permutation is 1/t^n, we get Equation 5.14. To obtain Equation 5.15, observe that Ũk = u if one of the Ui's takes the value u (this happens with probability 1/t), k − 1 of the Ui's take values less than u (this happens with probability (u/t)^{k−1}), and the remaining n − k Ui's take values greater than u (this happens with probability (1 − u/t)^{n−k}). There are n!/((k − 1)! 1! (n − k)!) = k C(n, k) ways of picking the k − 1, 1, and n − k Ui's. Equation 5.15 follows from this. Finally, Equation 5.16 follows by direct integration.
5.23. Note that X1 > t if and only if there are no points in a circle of radius t centered at a, i.e., in a region of area πt². Since the number of points in this region is a P(λπt²) rv, we see that the desired probability is exp{−λπt²}.
5.24. We have
E(Ūk²) = E(Sk²) =
k(k + 1)t²/((n + 1)(n + 2)).
Hence
Var(Sk) = E(Ūk²) − E(Ūk)² = k(n + 1 − k)t²/((n + 1)²(n + 2)).
Next we compute Cov(Si, Sj) = E(Si Sj) − E(Si) E(Sj), for 1 ≤ i < j ≤ n. Note that, given Sj = u, S1, ..., Sj−1 are the order statistics of j − 1 iid U(0, u) random variables. Hence
E(Si Sj) = E(E(Si Sj | Sj)) = E((i/j) Sj²) = (i/j) · j(j + 1)t²/((n + 1)(n + 2)) = i(j + 1)t²/((n + 1)(n + 2)).
Thus, for 1 ≤ i < j ≤ n,
Cov(Si, Sj) = (i(j + 1)/((n + 1)(n + 2)) − ij/(n + 1)²) t².
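The covariance formula can be checked against simulated order statistics of uniforms. A sketch (names, seed, and the case n = 3, i = 1, j = 2, t = 1 are ours; the formula then gives 3/20 − 2/16 = 0.025):

```python
import random

def order_stat_cov(n, i, j, trials=300_000, seed=23):
    """Monte Carlo Cov(S_i, S_j) for order statistics of n iid U(0, 1) rvs (1-indexed)."""
    rng = random.Random(seed)
    sx = sy = sxy = 0.0
    for _ in range(trials):
        u = sorted(rng.random() for _ in range(n))
        si, sj = u[i - 1], u[j - 1]
        sx += si
        sy += sj
        sxy += si * sj
    return sxy / trials - (sx / trials) * (sy / trials)

n, i, j = 3, 1, 2
est = order_stat_cov(n, i, j)
exact = i * (j + 1) / ((n + 1) * (n + 2)) - i * j / (n + 1) ** 2   # t = 1
```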
5.25. We have P(Sn+1 > s + t | Sn = s, Sn−1, · · · , S0) = P(N(s + t) − N(s) = 0 | Sn = s) = e^{−(Λ(s+t)−Λ(s))}. Hence {Sn, n ≥ 0} is a DTMC.
CHAPTER 6
ContinuousTime Markov Chains
Modeling Exercises
6.1. The state space of {X(t), t ≥ 0} is {0, 1, ..., k}. Note that when X(t) = i, i machines are working and hence can fail at rate iµ, and (k − i) machines are under repair at a total rate of (k − i)λ. Following the development in Example 6.5, we see that {X(t), t ≥ 0} is a CTMC with the following transition rates: qi,i+1 = (k − i)λ, 0 ≤ i ≤ k − 1, qi,i−1 = iµ, 1 ≤ i ≤ k, qi,i = −iµ − (k − i)λ, 0 ≤ i ≤ k. All other elements of the rate matrix are zero.
6.2. Same as above, except that in state i, min(r, k − i) machines are under repair. Hence we have qi,i+1 = min(k − i, r)λ, 0 ≤ i ≤ k − 1, qi,i−1 = iµ, 1 ≤ i ≤ k, qi,i = −iµ − min(k − i, r)λ, 0 ≤ i ≤ k. All other elements of the rate matrix are zero.
6.3. Let the state of the system be the queue of machines in the repair shop. Thus, state 0 implies that the queue is empty, i.e., both machines are working; state i (i = 1, 2) indicates that machine i is in the workshop (under repair) and the other machine is up; state ij, i, j = 1, 2, i ≠ j, indicates that machine i is under repair, while machine j is waiting in the workshop. Thus the state space is {0, 1, 2, 12, 21}. Doing the usual triggering event analysis we get the generator matrix (rows and columns in the order 0, 1, 2, 12, 21) as

Q =
[ −(µ1 + µ2)   µ1           µ2           0     0   ]
[ λ1           −(λ1 + µ2)   0            µ2    0   ]
[ λ2           0            −(λ2 + µ1)   0     µ1  ]
[ 0            0            λ1           −λ1   0   ]
[ 0            λ2           0            0     −λ2 ]
6.4. Following the definition of state in Modeling Exercise 6.3 above, we get the state space S = {0, 1, 2, 3, 12, 13, 21, 23, 31, 32, 123, 132, 213, 231, 312, 321}. Thus state 312 indicates that machine 3 is under repair, machine 1 is next in line for repair, followed by machine 2. The triggering event analysis yields the following transition rates (we only list the positive ones; the indices i, j, k are distinct integers taking values 1, 2, 3):
q0,i = µi, qi,0 = λi, qi,ij = µj, qij,j = λi, qij,ijk = µk, qijk,jk = λi.
6.5. The state space of {X(t), t ≥ 0} is {0, 1, ..., k}. In state i, i wires share a load of M kilograms. Thus each breaks at rate µM/i. Hence the next break occurs at rate µM. This yields the following rates: q0,0 = 0, qi,i−1 = µM, 1 ≤ i ≤ k.
6.6. Since the arrival process is Poisson, service times are exponential, and the number of active servers depends only on the number of customers in the system, {X(t), t ≥ 0} is a birth and death process with parameters λi = λ, i ≥ 0, and
µi = µ for 1 ≤ i ≤ 5, 2µ for 6 ≤ i ≤ 8, 3µ for 9 ≤ i ≤ 12, 4µ for 13 ≤ i ≤ 15, 5µ for i ≥ 16.
6.7. The state space of the system is {0, 1A, 1B, 2, 3, 4, ...}. The state 1A (1B) indicates that there is one customer in the system and he is being served by server A (B). Otherwise, the state i indicates that there are i customers in the system. The triggering event analysis shows that {X(t), t ≥ 0} is a CTMC with the following transition rates (we show only the positive rates):
q0,1A = λα, q0,1B = λ(1 − α), q1A,0 = µ1, q1A,2 = λ, q1B,0 = µ2, q1B,2 = λ, q2,1A = µ2, q2,1B = µ1, q2,3 = λ, qi,i+1 = λ, qi,i−1 = µ1 + µ2, i ≥ 3.
6.8. The state space of {(X1(t), X2(t)), t ≥ 0} is {(i, j), i ≥ 0, j ≥ 0}. The positive transition rates are
q(i,i),(i+1,i) = q(i,i),(i,i+1) = λ/2, i ≥ 0, q(0,j),(0,j−1) = µ, j ≥ 1, q(i,0),(i−1,0) = µ, i ≥ 1,
q(i,j),(i+1,j) = λ, 0 ≤ i < j, q(i,j),(i,j+1) = λ, 0 ≤ j < i, q(i,j),(i−1,j) = q(i,j),(i,j−1) = µ, i ≥ 1, j ≥ 1.
6.9. The state space of the system is {0, 1, ..., n}. The triggering event analysis shows that it is a CTMC with positive transition rates given below: q0,i = λi, qi,0 = µi, 1 ≤ i ≤ n.
6.10. Let X(t) be the number of customers in the system at time t. Then {X(t), t ≥ 0} is a CTMC on state space {0, 1, 2, ...} with positive rates given below: qi,i−1 = iµ, qi,i+k = λ(1 − p)p^{k−1}, i ≥ 0, k ≥ 1.
6.11. The arrival process is Poisson, service times are exponential, and the admission policy depends only on the number of customers in the system. This makes {X(t), t ≥ 0} a birth and death process with birth parameters λi = λ1 + λ2 if 0 ≤ i < s, and λi = λ1 if i ≥ s, and death parameters µi = min(i, s)µ, i ≥ 0.
6.12. The arrival processes of the customers and the buses are independent Poisson processes, and the number of customers removed by a bus depends only on the number of customers in the depot. This makes {X(t), t ≥ 0} a CTMC with state space {0, 1, 2, ...}. The positive entries of the generator matrix of {X(t), t ≥ 0} are given below: qi,i+1 = λ, i ≥ 0, qi,0 = µ, 1 ≤ i ≤ k, qi,i−k = µ, i ≥ k.
6.13. The arrival process is Poisson, and the service times depend upon the class of the customer. Hence we need to know the number of customers in the system, as well as the class of the customer in service, to describe the state of the system. This makes {(X(t), Y(t)), t ≥ 0} a CTMC on state space {(i, j) : i ≥ 0, j = 1, 2}. Note that a departure occurs from state (i, j) with rate µj. The next customer to enter service is of class k with probability αk (we use α1 = α, α2 = 1 − α). Hence the next state is (i − 1, k) with probability αk. Thus the transition rate from state (i, j) to state (i − 1, k) is µj αk (assuming i ≥ 2).
Similar analysis yields the following transition rates: q(0,0),(1,j) = λαj , q(1,j),(0,0) = µj , j = 1, 2, q(i,j),(i+1,j) = λ, i ≥ 1, j = 1, 2, q(i,j),(i−1,k) = µj αk , i ≥ 2, j, k = 1, 2. 6.14. Suppose X(t) = i. Then the next arrival occurs with rate λ. The next service completion occurs with rate µ. Each of the i − 1 customers in the queue waiting for service leaves due to impatience with rate θ. Hence {X(t), t ≥ 0} is birth and death process with birth parameters λi = λ, i ≥ 0 and death parameters
µ0 = 0, µi = µ + (i − 1)θ, i ≥ 1.
6.15. {X(t), t ≥ 0} is a CTMC on state space {0, 1, 2, ...} with positive elements of the generator matrix as given below: qi,i+1 = .4iλ, qi,i+2 = .3iλ, qi,i−1 = .3iµ, i ≥ 0.
6.16. Let X(t) be the number of customers in the system at time t, and Y(t) = 1 if the server is busy at time t, and 0 otherwise. The operating policy implies that the server cannot be idle if there are N or more customers in the system. Thus X(t) ≥ N ⇒ Y(t) = 1. Hence the state space is {1, 2, ...} ∪ {(0, 0), (1, 0), ..., (N − 1, 0)}. Here state i indicates that X(t) = i and Y(t) = 1, and state (i, 0) indicates that X(t) = i and Y(t) = 0. The triggering event analysis yields the following rates:
q(i,0),(i+1,0) = λ, 0 ≤ i ≤ N − 2, q(N−1,0),N = λ, q1,(0,0) = µ, qi,i+1 = λ, i ≥ 1, qi,i−1 = µ, i ≥ 2.
6.17. Let X(t) be the state of the system at time t. State 0 indicates that the system has crashed at time t. State i, 1 ≤ i ≤ 5, indicates that the system is functioning with i CPUs working and 5 − i CPUs down. The iid exponential lifetimes of the CPUs and the instantaneous recovery mechanism imply that {X(t), t ≥ 0} is a CTMC with the generator matrix (rows and columns in the order 0, 1, ..., 5) given below:

Q =
[ 0           0     0     0     0     0   ]
[ µ           −µ    0     0     0     0   ]
[ 2µ(1 − c)   2µc   −2µ   0     0     0   ]
[ 3µ(1 − c)   0     3µc   −3µ   0     0   ]
[ 4µ(1 − c)   0     0     4µc   −4µ   0   ]
[ 5µ(1 − c)   0     0     0     5µc   −5µ ]

6.18. The state space remains the same. The Q matrix changes to

Q =
[ 0           0          0           0           0           0   ]
[ µ           −(µ + λ)   λ           0           0           0   ]
[ 2µ(1 − c)   2µc        −(2µ + λ)   λ           0           0   ]
[ 3µ(1 − c)   0          3µc         −(3µ + λ)   λ           0   ]
[ 4µ(1 − c)   0          0           4µc         −(4µ + λ)   λ   ]
[ 5µ(1 − c)   0          0           0           5µc         −5µ ]
6.19. Let Xi(t) be the number of customers of type i in the system at time t (i = 1, 2). Then {(X1(t), X2(t)), t ≥ 0} is a CTMC on S = {(i, j) : i ≥ 0, 0 ≤ j ≤ s} with transition rates
q(i,j),(i+1,j) = λ1, (i, j) ∈ S, q(i,j),(i,j+1) = λ2, (i, j) ∈ S, j < s, q(i,j),(i−1,j) = min(i, s − j)µ1, (i, j) ∈ S, q(i,j),(i,j−1) = jµ2, (i, j) ∈ S.
6.20. The state space is {(i, j) : i ≥ 0, j = 0, 1}. Consider state (i, 0). If a new message arrives, it immediately starts transmitting, thus taking the system to state (i, 1). If one of the i backlogged messages starts transmitting, the system moves to state (i − 1, 1). Now consider state (i, 1). If a new message arrives, there is a collision, and the new state is (i + 2, 0). If one of the i backlogged messages starts transmitting, there is a collision and the system moves to (i + 1, 0). If the transmission terminates without collision, the system moves to state (i, 0). This yields the following transition rates:
q(i,0),(i,1) = λ, i ≥ 0, q(i,0),(i−1,1) = iθ, i ≥ 1, q(i,1),(i+2,0) = λ, q(i,1),(i+1,0) = iθ, i ≥ 0, q(i,1),(i,0) = µ, i ≥ 0.
6.21. Let Y(t) be the number of packets in the buffer and Z(t) be the number of tokens in the token pool at time t. Define X(t) = M − Z(t) + Y(t). Since 0 ≤ Z(t) ≤ M and Y(t) ≥ 0, we get 0 ≤ X(t) < ∞. Also, 0 ≤ X(t) < M ⇒ Y(t) = 0, Z(t) = M − X(t), and X(t) ≥ M ⇒ Y(t) = X(t) − M, Z(t) = 0. Thus X(t) has complete information about (Y(t), Z(t)). Now, if a token arrives, X(t) decreases by one (unless it is at 0, in which case the token is lost, and X(t) remains unchanged). If a packet arrives, X(t) increases by 1. Thus {X(t), t ≥ 0} is a birth and death process with birth rates λi = λ, i ≥ 0, and death rates µ0 = 0, µi = µ, i ≥ 1.
6.22. Note that at most one order can be outstanding at any time. The state space of {X(t), t ≥ 0} is {0, 1, ..., K + R}. Consider state i, R < i ≤ K + R. In this state no orders are outstanding, and a new demand takes the system to state i − 1. Next consider state i, 0 ≤ i ≤ R. In this state one order is outstanding. If the order is delivered, the system state moves to i + K > R, and if a new demand occurs the state moves to i − 1. If i = 0, the demand is lost. {X(t), t ≥ 0} is a CTMC with transition rates given below: qi,i+K = λ, 0 ≤ i ≤ R, qi,i−1 = µ, 1 ≤ i ≤ K + R.
6.23.
If the macromolecule is a string of i A's at time t, we say X(t) = i. If it is a string of i A's and a T at one end, we say X(t) = iT. If it is a string of i A's and two T's at the two ends, we say X(t) = TiT. Assume that X(0) = 1. Then {X(t), t ≥ 0} is a CTMC with the following transition rates:
q2,1 = µ, qi,i−1 = 2µ, i ≥ 3, qi,i+1 = 2λ, qi,iT = 2θ, i ≥ 2, q1T,1 = µ, qiT,(i−1)T = µ, i ≥ 2, qiT,TiT = θ, i ≥ 1.
Note that the states 1 and TiT, i ≥ 1, are absorbing.
6.24. {Xi(t), t ≥ 0} is a CTMC for i = 1. It is not a CTMC for i = 2, · · · , K. {Zk(t), t ≥ 0} is a birth and death process on {0, 1, · · · , k} with birth parameters λi = λ for 0 ≤ i ≤ k − 1, and death parameters µi = iµ for 1 ≤ i ≤ k.
6.25. {Xk(t), t ≥ 0} is a birth and death process on {0, 1, · · ·} with birth parameters λi = λpk for i ≥ 0, and death parameters µi = µk for i ≥ 1. The two processes are independent since the arrival stream gets split into two independent Poisson streams due to the Bernoulli splitting mechanism.
6.26. S = {c, 0, 1, 2, ..., K}, where c means the system is under recovery from a catastrophe, and i (0 ≤ i ≤ K) means that the server is up and there are i customers in the system. The positive transition rates are:
qc,0 = α, q0,c = θ, q0,1 = λ, qi,i−1 = µ, qi,i+1 = λ, qi,c = θ, 1 ≤ i ≤ K − 1, qK,K−1 = µ, qK,c = θ.
6.27. State space = {0, 1, · · · , K}. Transition rates: q0,k = λk, qk,0 = µk, 1 ≤ k ≤ K.
6.28. Let X(t) be 1 if the machine is working, 2 if it is down and waiting for the repairperson, and 3 if it is under repair, at time t. Then {X(t), t ≥ 0} is a CTMC with transition rates q1,2 = µ, q2,3 = λ, q3,1 = θ.
6.29. State space = {0, 1, 2}. Transition rates: q0,1 = 2λ, q1,0 = µ, q1,2 = 2λ, q2,0 = 2µ.
6.30. Let X(t) be the number of items in the warehouse at time t, and Y(t) be 1 if the machine is up and 0 if it is down at time t. Then {(X(t), Y(t)), t ≥ 0} is a CTMC. Note that when X(t) ≤ k, we must have Y(t) = 1; and when X(t) = K, we must have Y(t) = 0. Hence the state space is {(i, 1), 0 ≤ i ≤ K − 1} ∪ {(i, 0), k + 1 ≤ i ≤ K}. The nonzero transition rates are
q(i,1),(i+1,1) = λ, 0 ≤ i ≤ K − 2, q(i,1),(i−1,1) = µ, 1 ≤ i ≤ K − 1, q(K−1,1),(K,0) = λ, q(i,0),(i−1,0) = µ, k + 2 ≤ i ≤ K, q(k+1,0),(k,1) = µ.
6.31. λi = λ1 + λ2 for 0 ≤ i ≤ K − 1, λi = λ1 for i ≥ K, and µi = µ for i ≥ 1.

6.32. Let X(t) be the number of customers in the system at time t, and let Y(t) be the number of tasks under processing for the customer in service; when X(t) = 0 we define Y(t) = 0. Then {(X(t), Y(t)), t ≥ 0} is a CTMC. The nonzero transition rates are

q(0,0),(1,2) = q(i,j),(i+1,j) = λ, i > 0, j = 1, 2,
q(i,2),(i,1) = q(i,1),(i−1,2) = 2µ, i > 0.

6.33. We describe a space as E if it is empty, B if it is occupied by a car in service, and W if it is occupied by a car that is waiting to begin service or has finished service. The state space is

S = {1 = EEE, 2 = BEE, 3 = BBE, 4 = EBE, 5 = BBW, 6 = BWE, 7 = EBW, 8 = BWW}.

Thus the state is EBW if space 1 is empty, space 2 has a car that is pumping gas, and space 3 is occupied by a car that is waiting for service. The triggering event analysis yields the following rate matrix (rows and columns in the order 1 through 8):

Q =
[ −λ    λ        0         0        0     0        0    0
  µ     −(λ+µ)   λ         0        0     0        0    0
  0     0        −(λ+2µ)   µ        λ     µ        0    0
  µ     0        0         −(λ+µ)   0     0        λ    0
  0     0        0         0        −2µ   0        µ    µ
  µ     0        0         0        0     −(λ+µ)   0    λ
  0     µ        0         0        0     0        −µ   0
  0     µ        0         0        0     0        0    −µ ]
6.34. Let X(t) be the number of machines in use at time t, and Y(t) be the status of the standby machine at time t (0 if there is none, U if it is up, and D if it is down with an undetected failure). The state of the system is (X(t), Y(t)). The state space is {1 = (2, U), 2 = (2, 0), 3 = (2, D), 4 = (1, 0), 5 = (0, 0)}. The triggering event analysis yields the following rate matrix:

Q =
[ −(2λ+θ)   2λ        θ     0        0
  µ         −(2λ+µ)   0     2λ       0
  0         0         −2λ   2λ       0
  0         µ         0     −(λ+µ)   λ
  0         0         0     µ        −µ ]

6.35. {I(t), t ≥ 0} is a CTMC on state space {0, 1, 2, ..., N} with rates qi,i+1 = βi(N − i), 0 ≤ i ≤ N. The initial state is 1.

6.36. {(S(t), C(t)), t ≥ 0} is a CTMC with rates

q(i,j),(i−1,j) = βij, 0 ≤ i ≤ N − K, 0 ≤ j ≤ K,
q(i,j),(i,j−1) = γj, 0 ≤ i ≤ N − K, 0 ≤ j ≤ K.
The initial state is (N − K, K).

6.37. {(S(t), I(t)), t ≥ 0} is a CTMC with rates

q(i,j),(i−1,j+1) = βij, 0 ≤ i ≤ N, 0 ≤ j ≤ N − i,
q(i,j),(i,j−1) = (γ + ν)j, 0 ≤ i ≤ N, 0 ≤ j ≤ N − i.

The initial state is (N − 1, 1).

6.38. Let Xi(t) be the number of orders at price pi (i = 1, 2). We use the convention that if Xi(t) > 0 there are Xi(t) sell orders on the book at time t, and if Xi(t) < 0 there are −Xi(t) buy orders on the book. This definition is unambiguous because there cannot be a positive number of buy orders and a positive number of sell orders at the same price. The state of the order book at time t is given by (X1(t), X2(t)). The state space is

S = {(i, j) : 1 ≤ i ≤ K, 0 ≤ j ≤ K} ∪ {(i, j) : −K ≤ i ≤ 0, −K ≤ j ≤ K}.

(Note that we cannot have buy orders at p2 and sell orders at p1 at the same time.) The transition rates are:

For −(K − 1) ≤ i, j ≤ 0: q(i,j),(i−1,j) = λb1, q(i,j),(i+1,j) = λs1, q(i,j),(i,j−1) = λb2, q(i,j),(i,j+1) = λs2.
For i = −K, 0 ≤ j ≤ K − 1: q(i,j),(i+1,j) = λs1, q(i,j),(i,j−1) = λb2, q(i,j),(i,j+1) = λs2.
For j = K, −(K − 1) ≤ i ≤ 0: q(i,j),(i−1,j) = λb1, q(i,j),(i+1,j) = λs1, q(i,j),(i,j−1) = λb2.
For i = −K, j = K: q(−K,K),(−K+1,K) = λs1, q(−K,K),(−K,K−1) = λb2.
For 1 ≤ i ≤ K − 1, 0 ≤ j ≤ K − 1: q(i,j),(i−1,j) = λb1 + λb2, q(i,j),(i+1,j) = λs1, q(i,j),(i,j+1) = λs2.
For i = K, 0 ≤ j ≤ K − 1: q(i,j),(i−1,j) = λb1 + λb2, q(i,j),(i,j+1) = λs2.
For 1 ≤ i ≤ K − 1, j = K: q(i,j),(i−1,j) = λb1 + λb2, q(i,j),(i+1,j) = λs1.
For i = K, j = K: q(K,K),(K−1,K) = λs1 + λs2.
For −(K − 1) ≤ i ≤ 0, −(K − 1) ≤ j ≤ −1: q(i,j),(i−1,j) = λb1, q(i,j),(i,j−1) = λb2, q(i,j),(i,j+1) = λs1 + λs2.
For i = −K, −(K − 1) ≤ j ≤ −1: q(i,j),(i,j−1) = λb2, q(i,j),(i,j+1) = λs1 + λs2.
For −(K − 1) ≤ i ≤ 0, j = −K: q(i,j),(i−1,j) = λb1, q(i,j),(i,j+1) = λs1 + λs2.
For i = −K, j = −K: q(−K,−K),(−K,−K+1) = λs1 + λs2.

6.39. {S(t), t ≥ 0} is a CTMC on state space {0, 1, 2, 3, ...} with rates qi,i+1 = λ, i ≥ 0, and qi,i−1 = iθ + µ, i ≥ 1.

6.40. {Xj(t), t ≥ 0} is a CTMC with state space {0, 1, 2, ...} and transition rates

qi,i+1 = λj, i ≥ 0,
qi,i−1 = Σ_{k=1}^{J} µk βk,j, i ≥ 1.
6.41. {(X1(t), X2(t)), t ≥ 0} is a CTMC with state space S = {(i, j) : 0 ≤ i ≤ B, 0 ≤ j ≤ min(B − b, B − i)} and transition rates

q(i,j),(i+1,j) = λ1, q(i,j),(i−1,j) = µ1, q(i,j),(i,j+1) = λ2, q(i,j),(i,j−1) = µ2, (i, j) ∈ S,

with the convention that any transition that takes the system out of S has rate zero.

6.42. {(X(t), Y(t)), t ≥ 0} is a CTMC with state space {(i, 0) : i ≥ 0} ∪ {(i, 1) : i ≥ 1} and transition rates:

q(i,0),(i+1,0) = λ, i ≥ 0,
q(i,0),(i−1,1) = µ0, i ≥ 1,
q(i,1),(i+1,1) = λ, i ≥ 1,
q(i,1),(i−1,1) = µ1, i ≥ 2,
q(1,1),(0,0) = µ1.
Computational Exercises

6.1. Define Z(t) = 0 if X(t) is even, and Z(t) = 1 if X(t) is odd. Then {Z(t), t ≥ 0} is a CTMC on state space {0, 1} with generator matrix

Q = [ −α   α
       β   −β ].

Using the results of Equation 6.21, we get

P(X(t) odd | X(0) = 0) = P(Z(t) = 1 | Z(0) = 0) = p0,1(t) = (α/(α + β))(1 − e^{−(α+β)t}).
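As a quick numerical sanity check (a sketch, not part of the original solution), the closed form for p0,1(t) can be compared against exp(Qt) computed directly; the rates α = 1.3, β = 0.7 and the time t = 0.9 below are arbitrary illustrative choices, not values from the exercise.

```python
import numpy as np

# Illustrative rates and time (assumptions, not from the exercise).
alpha, beta, t = 1.3, 0.7, 0.9
Q = np.array([[-alpha, alpha], [beta, -beta]])

# P(t) = exp(Qt), computed as a truncated Taylor series of the matrix exponential.
P = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 60):
    P = P + term
    term = term @ (Q * t) / n

p01_series = P[0, 1]
p01_formula = alpha / (alpha + beta) * (1.0 - np.exp(-(alpha + beta) * t))
print(abs(p01_series - p01_formula) < 1e-10)  # → True
```

The row sums of P also come out to 1, as they must for a transition matrix.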
6.2. From Example 6.12, we see that {X(t), t ≥ 0} is a birth and death process with birth parameters λi = λ, i ≥ 0, and death parameters µi = iµ, i ≥ 1. The forward equations are (using the convention pi,−1(t) = 0)

p'_{i,j}(t) = −(λ + jµ) p_{i,j}(t) + (j + 1)µ p_{i,j+1}(t) + λ p_{i,j−1}(t), j ≥ 0.

Multiplying the above equation by j, summing over j from 0 to ∞, and using

m_i(t) = Σ_{j=0}^∞ j p_{i,j}(t),  m'_i(t) = Σ_{j=0}^∞ j p'_{i,j}(t),

we get

m'_i(t) = − Σ_{j=0}^∞ j(λ + jµ) p_{i,j}(t) + Σ_{j=0}^∞ j(j + 1)µ p_{i,j+1}(t) + Σ_{j=0}^∞ jλ p_{i,j−1}(t)
        = −λ m_i(t) − µ Σ_{j=0}^∞ j² p_{i,j}(t) + µ Σ_{j=0}^∞ (j + 1 − 1)(j + 1) p_{i,j+1}(t) + λ Σ_{j=0}^∞ (j − 1 + 1) p_{i,j−1}(t)
        = −λ m_i(t) − µ Σ_{j=0}^∞ j² p_{i,j}(t) + µ Σ_{j=0}^∞ j² p_{i,j}(t) − µ Σ_{j=0}^∞ j p_{i,j}(t) + λ Σ_{j=0}^∞ (j − 1) p_{i,j−1}(t) + λ Σ_{j=0}^∞ p_{i,j−1}(t)
        = −λ m_i(t) − µ m_i(t) + λ m_i(t) + λ.

Thus m_i(t) satisfies the following differential equation:

m'_i(t) + µ m_i(t) = λ,  m_i(0) = i.

The solution is given by

m_i(t) = i e^{−µt} + (λ/µ)(1 − e^{−µt}).
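The mean formula above can be checked against a direct numerical integration of the truncated forward equations; this is only an illustrative sketch, with arbitrary assumed parameters λ = 2, µ = 1, initial state i = 3.

```python
import numpy as np

# Arbitrary illustrative parameters (assumptions, not from the exercise).
lam, mu, i0, t_end = 2.0, 1.0, 3, 1.5
N = 80  # truncation level for the state space {0, ..., N}

# Generator of the birth-death chain with lambda_i = lam and mu_i = i*mu.
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = i * mu
    Q[i, i] = -Q[i].sum()

# Integrate p'(t) = p(t) Q from p(0) = e_{i0} with a small Euler step.
p = np.zeros(N + 1)
p[i0] = 1.0
h = 1e-4
for _ in range(int(t_end / h)):
    p = p + h * (p @ Q)

mean_numeric = p @ np.arange(N + 1)
mean_formula = i0 * np.exp(-mu * t_end) + (lam / mu) * (1 - np.exp(-mu * t_end))
print(abs(mean_numeric - mean_formula) < 1e-3)  # → True
```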
6.3. Let X(t) be the number of operating machines at time t. Let Xi (t) be the state of the ith machine at time t (0 if down and 1 if up). The state of the system at
time t is given by Y(t) = (X1(t), X2(t)). {Y(t), t ≥ 0} is a CTMC on state space {(0,0), (1,0), (0,1), (1,1)} with generator matrix

Q =
[ −(λ1+λ2)   λ1          λ2          0
  µ1         −(µ1+λ2)    0           λ2
  µ2         0           −(λ1+µ2)    λ1
  0          µ2          µ1          −(µ1+µ2) ]

1. We know from Equation 6.21 that

P(Xi(t) = 0 | Xi(0) = 1) = (µi/(λi + µi))(1 − e^{−(λi+µi)t}),
P(Xi(t) = 1 | Xi(0) = 1) = λi/(λi + µi) + (µi/(λi + µi)) e^{−(λi+µi)t}.

Since the two machines do not interfere with each other (each has its own repairperson), we get

P(X(t) = 0 | X(0) = 2) = P(X1(t) = 0 | X1(0) = 1) P(X2(t) = 0 | X2(0) = 1),
P(X(t) = 1 | X(0) = 2) = P(X1(t) = 1 | X1(0) = 1) P(X2(t) = 0 | X2(0) = 1) + P(X1(t) = 0 | X1(0) = 1) P(X2(t) = 1 | X2(0) = 1),
P(X(t) = 2 | X(0) = 2) = P(X1(t) = 1 | X1(0) = 1) P(X2(t) = 1 | X2(0) = 1).

2. By brute force calculation (or using symbolic computation software) we see that the Q matrix has four eigenvalues

ν1 = 0, ν2 = −(λ1 + µ1), ν3 = −(λ2 + µ2), ν4 = −(λ1 + µ1 + λ2 + µ2).

The corresponding A matrix is given by (see Section 6.4.2)

A =
[ 1   λ1    λ2    λ1λ2
  1   −µ1   λ2    −µ1λ2
  1   λ1    −µ2   −λ1µ2
  1   −µ1   −µ2   µ1µ2 ]

and

A^{−1} = (1/((λ1+µ1)(λ2+µ2))) ·
[ µ1µ2   λ1µ2   λ2µ1   λ1λ2
  µ2     −µ2    λ2     −λ2
  µ1     λ1     −µ1    −λ1
  1      −1     −1     1 ]

Substituting in Equation 6.27 we compute P(t). We get P(X(t) = 0 | X(0) = 2) = p(1,1),(0,0)(t), P(X(t) = 1 | X(0) = 2) = p(1,1),(1,0)(t) + p(1,1),(0,1)(t), and P(X(t) = 2 | X(0) = 2) = p(1,1),(1,1)(t). This matches the result in part 1.

3. From Theorem 6.8, we get P*(s) = (sI − Q)^{−1}. Using Cramer's rule for matrix inverses, we get

p*_{(1,1),(1,1)}(s) = [(s + λ1 + λ2)(s + µ1 + λ2)(s + µ2 + λ1) − λ1µ1(s + µ2 + λ1) − λ2µ2(s + µ1 + λ2)] / [s(s + λ1 + µ1)(s + λ2 + µ2)(s + λ1 + µ1 + λ2 + µ2)].
This can be inverted by partial fractions to obtain p(1,1),(1,1)(t) = P(X(t) = 2 | X(0) = 2).

6.4. This is a special case of Example 6.25. Following the derivation there, we get

p*_{i,j}(s) = (1/(jλ)) Π_{k=i}^{j} kλ/(kλ + s), 1 ≤ i ≤ j.

Inverting this yields p_{i,j}(t) = (1 − e^{−λt})^{j−i} e^{−iλt}.

6.5. This can be treated as a special case of Example 6.26. Or we can use elementary probability as follows. Let L1, L2, ..., Li be iid Exp(µ) random variables representing the lifetimes of the i items. For 0 ≤ j ≤ i,

P(X(t) = j | X(0) = i) = P(j items last beyond t, i − j items last at most t)
= C(i, j) P(Lk > t, 1 ≤ k ≤ j; Lk ≤ t, j + 1 ≤ k ≤ i)
= C(i, j) e^{−jµt}(1 − e^{−µt})^{i−j}.
6.6. Using the transition rates from Modeling Exercise 6.15, the forward equations (6.18) become

p'_{i,j}(t) = .3(j+1)µ p_{i,j+1}(t) − jµ p_{i,j}(t) + .4(j−1)µ p_{i,j−1}(t) + .3(j−2)µ p_{i,j−2}(t), j ≥ 0,

where we interpret p_{i,j}(t) = 0 for j < 0. Following the steps in the solution to Computational Exercise 6.2, we get

m'(t) = .7µ m(t),  m(0) = i.

The solution is given by m(t) = i e^{.7µt}. Thus the expected size of the amoeba colony explodes exponentially with time.

6.7. The forward equations for pj(t) = p_{0,j}(t) of the pure birth process are

p'_j(t) = λ_{j−1} p_{j−1}(t) − λ_j p_j(t), j ≥ 0.

(We assume p_{−1}(t) = 0.) Multiplying both sides by j and summing over all j, we get

m'(t) = Σ_{j=0}^∞ j p'_j(t)
      = Σ_{j=0}^∞ j λ_{j−1} p_{j−1}(t) − Σ_{j=0}^∞ j λ_j p_j(t)
      = Σ_{j=0}^∞ (j + 1) λ_j p_j(t) − Σ_{j=0}^∞ j λ_j p_j(t)
      = Σ_{j=0}^∞ λ_j p_j(t)
      = α P(X(t) is even | X(0) = 0) + β P(X(t) is odd | X(0) = 0)
      = 2αβ/(α + β) + (α(α − β)/(α + β)) e^{−(α+β)t},

where the last equality follows by substituting the expression for P(X(t) is even | X(0) = 0) from the solution to Computational Exercise 6.1. Integrating, we get

m(t) = (2αβ/(α + β)) t + (α(α − β)/(α + β)²)(1 − e^{−(α+β)t}).
6.8. The special customer faces the two-state CTMC of Example 6.4. Let θ be the probability that this CTMC is in state 2 at time T, given that it is in state 2 at time 0. From Equation 6.21 we get

θ = (µ/(λ + µ)) e^{−(λ+µ)T} + λ/(λ + µ).

Let V be the number of unsuccessful visits the special customer makes until he enters the system. Using the Markov property, we see that V is a geometric random variable with

P(V = k) = θ^k (1 − θ), k ≥ 0.

Each visit (including the successful one) is preceded by a wait of T. Hence the expected time until he enters is given by

T(E(V) + 1) = T/(1 − θ) = T(λ + µ)/(µ(1 − e^{−(λ+µ)T})).
6.9. Let X(t) be the state of the machine (0 = down, 1 = up) at time t (in hours). We assume that t = 0 at 8:00am on day 0. Let a_{i,j} = P(X(8) = j | X(0) = i). For 24k ≤ t ≤ 24k + 8 (k a nonnegative integer), X(t) is a CTMC with rates q0,1 = λ and q1,0 = µ. Hence

A = [ a0,0  a0,1
      a1,0  a1,1 ]
  = [ µ/(λ+µ) + (λ/(λ+µ)) e^{−8(λ+µ)}    λ/(λ+µ) − (λ/(λ+µ)) e^{−8(λ+µ)}
      µ/(λ+µ) − (µ/(λ+µ)) e^{−8(λ+µ)}    λ/(λ+µ) + (µ/(λ+µ)) e^{−8(λ+µ)} ].

Next, let b_{i,j} = P(X(24) = j | X(8) = i). For 24k + 8 ≤ t ≤ 24(k + 1), X(t) is a CTMC with rates q0,1 = 0 and q1,0 = µ. Hence

B = [ b0,0  b0,1
      b1,0  b1,1 ]
  = [ 1            0
      1 − e^{−16µ}  e^{−16µ} ].

Now let Xn = X(24n) be the state of the machine when the repairperson reports for work on day n. Then {Xn, n ≥ 0} is a DTMC on {0, 1} with transition matrix P = AB, whose off-diagonal entries are

P0,1 = (λ/(λ+µ))(e^{−16µ} − e^{−(8λ+24µ)}),
P1,0 = 1 − (λ/(λ+µ)) e^{−16µ} − (µ/(λ+µ)) e^{−(8λ+24µ)}.

Thus

lim_{n→∞} P(Xn = 1) = P0,1/(P0,1 + P1,0) = λ(e^{−16µ} − e^{−(8λ+24µ)}) / ((λ + µ)(1 − e^{−(8λ+24µ)})).
6.10. {X(t), t ≥ 0} is a birth and death process, hence we use Example 6.35. We get

ρ0 = 1,  ρi = (k(k − 1)···(k − i + 1)/i!)(λ/µ)^i, 1 ≤ i ≤ k,

which can be rewritten as

ρi = C(k, i)(λ/µ)^i, 0 ≤ i ≤ k.

Hence, from Example 6.35, we get

p0 = (Σ_{i=0}^k C(k, i)(λ/µ)^i)^{−1} = (1 + λ/µ)^{−k} = (µ/(λ + µ))^k.

Thus the CTMC is positive recurrent. The limiting distribution is given by

pi = ρi p0 = C(k, i)(λ/µ)^i (µ/(λ + µ))^k = C(k, i)(λ/(λ + µ))^i (µ/(λ + µ))^{k−i}.

Thus in steady state X(t) is distributed as a Bin(k, λ/(λ + µ)) random variable.
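The binomial limiting distribution can be verified against the detailed balance equations of the birth and death chain; the chain here has birth rates (k − i)λ and death rates iµ, and the values of k, λ, µ below are arbitrary illustrative assumptions.

```python
from math import comb

# Arbitrary illustrative parameters (assumed, not from the exercise).
k, lam, mu = 6, 1.7, 2.3

# Claimed limiting distribution: Binomial(k, lam/(lam+mu)).
q = lam / (lam + mu)
p = [comb(k, i) * q**i * (1 - q) ** (k - i) for i in range(k + 1)]

# Detailed balance: (k-i)*lam * p_i == (i+1)*mu * p_{i+1} for all i.
ok = all(
    abs((k - i) * lam * p[i] - (i + 1) * mu * p[i + 1]) < 1e-12
    for i in range(k)
)
print(ok)  # → True
```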
6.11. By using the birth and death parameters from the solution to Modeling Exercise 6.6 in Equation 6.75, we get

ρi = (λ/µ)^i                       for 0 ≤ i ≤ 5,
ρi = (1/2^{i−5})(λ/µ)^i            for 6 ≤ i ≤ 8,
ρi = (1/(8·3^{i−8}))(λ/µ)^i        for 9 ≤ i ≤ 12,
ρi = (1/(648·4^{i−12}))(λ/µ)^i     for 13 ≤ i ≤ 15,
ρi = (1/(41472·5^{i−15}))(λ/µ)^i   for i ≥ 16.

From Equation 6.75,

p0 = 1 / (Σ_{i=0}^{14} ρi + (1/41472)(λ/µ)^{15}/(1 − λ/(5µ)))

iff λ < 5µ. Thus the condition of stability is λ < 5µ. If the system is stable, its limiting distribution is given by pi = ρi p0, i ≥ 0.

6.12. The system is stable if λ < µ = µ1 + µ2. Using the transition rates developed
in the solution to Modeling Exercise 6.7, we get the following balance equations (using λ1 = λα, λ2 = λ(1 − α)):

λ p0 = µ1 p1a + µ2 p1b,
(λ + µ1) p1a = λ1 p0 + µ2 p2,
(λ + µ2) p1b = λ2 p0 + µ1 p2,
(λ + µ) p2 = λ p1a + λ p1b + µ p3,
(λ + µ) pi = λ p_{i−1} + µ p_{i+1}, i ≥ 3.

The last set of equations is identical to the birth and death equations, and hence yields the solution pi = p2 ρ^{i−2}, i ≥ 2, where ρ = λ/µ. Next we solve the first three equations to obtain p0, p1a and p1b in terms of p2. We get (using β = 1 − α):

p0 = ((2λ + µ)/λ²) · (µ1µ2/(λ + αµ2 + βµ1)) · p2,
p1a = (µ2/λ) · ((λ + αµ)/(λ + αµ2 + βµ1)) · p2,
p1b = (µ1/λ) · ((λ + βµ)/(λ + αµ2 + βµ1)) · p2.

Finally we compute p2 by using the normalizing equation:

p0 + p1a + p1b + Σ_{i=2}^∞ pi = p0 + p1a + p1b + p2/(1 − ρ) = 1.
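The closed forms for p0, p1a and p1b in terms of p2 can be spot-checked by solving a truncated version of the chain numerically; the parameter values below are arbitrary illustrative assumptions satisfying λ < µ1 + µ2.

```python
import numpy as np

# Assumed illustrative parameters (stability needs lam < mu1 + mu2).
lam, mu1, mu2, alpha = 1.0, 1.5, 2.0, 0.6
beta, mu = 1 - alpha, mu1 + mu2

N = 400  # truncate the queue at N customers
n = N + 3  # states: 0 -> empty, 1 -> 1a, 2 -> 1b, 3 -> two customers, 4 -> three, ...
Q = np.zeros((n, n))
Q[0, 1], Q[0, 2] = lam * alpha, lam * beta  # arrival routed to server 1 or server 2
Q[1, 0], Q[1, 3] = mu1, lam
Q[2, 0], Q[2, 3] = mu2, lam
Q[3, 1], Q[3, 2] = mu2, mu1                 # from level 2 down to 1a or 1b
for s in range(3, n - 1):
    Q[s, s + 1] = lam
for s in range(4, n):
    Q[s, s - 1] = mu
for s in range(n):
    Q[s, s] = -Q[s].sum()

# Stationary distribution: solve pQ = 0 with sum(p) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

p0, p1a, p1b, p2 = p[0], p[1], p[2], p[3]
d = lam + alpha * mu2 + beta * mu1
err0 = abs(p0 - (2 * lam + mu) / lam**2 * mu1 * mu2 / d * p2)
err1 = abs(p1a - mu2 / lam * (lam + alpha * mu) / d * p2)
err2 = abs(p1b - mu1 / lam * (lam + beta * mu) / d * p2)
print(max(err0, err1, err2) < 1e-8)  # → True
```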
6.13. This is a finite state CTMC, hence the limiting distribution exists. Using the transition rates developed in the solution to Modeling Exercise 6.9, we get the balance equations

µi pi = λi p0, i = 1, 2, ..., n.

The solution is

pi = (λi/µi) p0, i = 1, 2, ..., n.

Finally, the normalizing equation yields

1 = p0 + Σ_{i=1}^n pi = p0 (1 + Σ_{i=1}^n λi/µi).

Hence, the final solution is

p0 = 1/(1 + Σ_{i=1}^n λi/µi),
pi = (λi/µi)/(1 + Σ_{i=1}^n λi/µi), i = 1, 2, ..., n.
6.14. This is a finite state CTMC, hence the limiting distribution exists. Using the transition rates developed in the solution to Modeling Exercise 6.4, we get the following balance equations:

(Σ_{m=1}^3 µm) p0 = Σ_{m=1}^3 λm pm,
(λi + Σ_{m=1, m≠i}^3 µm) pi = µi p0 + Σ_{m=1, m≠i}^3 λm p_{mi},
(λi + µk) p_{ij} = µj p_i + λk p_{kij},
λi p_{ijk} = µk p_{ij}.

Here i, j, k represent three distinct integers taking values in {1, 2, 3}. Using the last equation we can eliminate the 6 unknowns p_{ijk}. This yields a system of 10 equations in 10 unknowns. A closed form solution is messy.

6.15. This system is always stable. Let ak = (1 − p)p^{k−1}, k ≥ 1. Using the transition rates given in the solution to Modeling Exercise 6.10, we get the following balance equations:

(λ + jµ) pj = λ Σ_{i=0}^{j−1} a_{j−i} pi + (j + 1)µ p_{j+1}, j ≥ 0.
We have

G(z) = Σ_{j=0}^∞ z^j pj,  G'(z) = Σ_{j=0}^∞ j z^{j−1} pj,  A(z) = Σ_{j=1}^∞ z^j aj = (1 − p)z/(1 − pz).

Multiply the jth balance equation by z^j and sum over all j = 0, 1, 2, .... We get

Σ_{j=0}^∞ z^j (λ + jµ) pj = λ Σ_{j=0}^∞ z^j Σ_{i=0}^{j−1} a_{j−i} pi + µ Σ_{j=0}^∞ z^j (j + 1) p_{j+1}.

Simplifying, we get

λ G(z) + µz G'(z) = λ Σ_{i=0}^∞ z^i pi Σ_{j=i+1}^∞ z^{j−i} a_{j−i} + µ G'(z),

which yields

λ G(z) + µ(z − 1) G'(z) = λ G(z) A(z).

This can be simplified to get

G'(z) = (λ/(µ(1 − pz))) G(z).

Integrating, and using the fact that G(1) = 1, we get

G(z) = ((1 − p)/(1 − pz))^{λ/(µp)}.
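The generating function above is that of a negative binomial distribution, so its power-series coefficients can be generated recursively and checked against the balance equations; the values of λ, µ, p below are arbitrary illustrative assumptions.

```python
# Assumed illustrative parameters (not from the exercise).
lam, mu, p = 1.2, 1.0, 0.4
s = lam / (mu * p)

# Coefficients of G(z) = ((1-p)/(1-p z))^s: pi_0 = (1-p)^s and
# pi_{j+1} = pi_j * p * (s+j)/(j+1)  (negative binomial recursion).
M = 60
pi = [(1 - p) ** s]
for j in range(M):
    pi.append(pi[j] * p * (s + j) / (j + 1))

a = lambda m: (1 - p) * p ** (m - 1)  # geometric batch-size distribution, m >= 1

# Check (lam + j mu) pi_j = lam * sum_{i<j} a_{j-i} pi_i + (j+1) mu pi_{j+1}.
ok = all(
    abs((lam + j * mu) * pi[j]
        - lam * sum(a(j - i) * pi[i] for i in range(j))
        - (j + 1) * mu * pi[j + 1]) < 1e-12
    for j in range(M)
)
print(ok)  # → True
```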
6.16. Using the birth and death parameters from the solution to Modeling Exercise 6.11 in Equation 6.170, we get

ρi = (λ1 + λ2)^i/(i! µ^i), 0 ≤ i ≤ s,
ρi = (λ1 + λ2)^s λ1^{i−s}/(s! s^{i−s} µ^i), i ≥ s.

Now if ρ = λ1/(sµ) < 1,

S = Σ_{i=0}^∞ ρi = Σ_{i=0}^{s−1} ρi + ((λ1 + λ2)^s/(s! µ^s)) · 1/(1 − ρ);

otherwise the sum is infinite. Thus the system is stable if ρ < 1, i.e., λ1 < sµ. If it is stable, the limiting distribution is given by

p0 = 1/S,  pi = ρi/S,  i ≥ 1.
6.17. Using the transition rates given in the solution to Modeling Exercise 6.13, we get the following balance equations:

λ p0,0 = µ1 p1,1 + µ2 p1,2,
(λ + µj) p1,j = λ αj p0,0 + µ1 αj p2,1 + µ2 αj p2,2, j = 1, 2,
(λ + µj) pi,j = λ p_{i−1,j} + µ1 αj p_{i+1,1} + µ2 αj p_{i+1,2}, i ≥ 2, j = 1, 2.

Let φj(z) = Σ_{i=1}^∞ z^i p_{i,j}. Multiply the second equation by z, the third by z^i, and add over all i to get

(λ + µj) Σ_{i=1}^∞ z^i p_{i,j} = zλ αj p0,0 + λ Σ_{i=2}^∞ z^i p_{i−1,j} + µ1 αj Σ_{i=1}^∞ z^i p_{i+1,1} + µ2 αj Σ_{i=1}^∞ z^i p_{i+1,2}.

Simplifying, we get

(λ + µj) φj(z) = zλ αj p0,0 + λz φj(z) + µ1 αj (φ1(z) − p1,1 z)/z + µ2 αj (φ2(z) − p1,2 z)/z.

Collecting terms,

(λ(1 − z) + µj) φj(z) = zλ αj p0,0 + (αj/z)(µ1 φ1(z) + µ2 φ2(z)) − αj (µ1 p1,1 + µ2 p1,2).

Using the first balance equation, we get

(λ(1 − z) + µj) φj(z) = λ αj (z − 1) p0,0 + (αj/z)(µ1 φ1(z) + µ2 φ2(z)), j = 1, 2.

These are two equations for φ1(z) and φ2(z). Solving simultaneously, and simplifying, we get (with j ≠ i)

φi(z) = λ αi (λ(1 − z) + µj) p0,0 / (µ1µ2/z − λµ1(1 − α1/z) − λµ2(1 − α2/z) − λ²(1 − z)).

Using φ1(1) + φ2(1) + p0,0 = 1 we get

p0,0 = 1 − λ(α1/µ1 + α2/µ2).

The condition of stability is p0,0 > 0.

6.18. Using the birth and death rates given in the solution to Modeling Exercise 6.14, we get

ρ0 = 1,  ρi = Π_{j=1}^i λ/(µ + (j − 1)θ), i ≥ 1.
This birth and death process is always stable, and the limiting distribution is given by

pi = ρi / Σ_{j=0}^∞ ρj, i ≥ 0.

6.19. Using the birth and death structure of {Zk(t), t ≥ 0}, we get, with ρ = λ/µ,

lim_{t→∞} P(Zk(t) = j) = (ρ^j/j!) / Σ_{i=0}^k ρ^i/i!, 0 ≤ j ≤ k.

Thus

lim_{t→∞} E(Zk(t)) = ρ (Σ_{i=0}^{k−1} ρ^i/i!) / (Σ_{i=0}^k ρ^i/i!).

Then

lim_{t→∞} P(Xk(t) = 1) = lim_{t→∞} E(Xk(t)) = lim_{t→∞} [E(Zk(t)) − E(Z_{k−1}(t))]
= ρ (Σ_{i=0}^{k−1} ρ^i/i!)/(Σ_{i=0}^k ρ^i/i!) − ρ (Σ_{i=0}^{k−2} ρ^i/i!)/(Σ_{i=0}^{k−1} ρ^i/i!).

The long run fraction of customers lost is given by

lim_{t→∞} P(ZK(t) = K) = (ρ^K/K!) / Σ_{i=0}^K ρ^i/i!.

6.20. See the solution to Modeling Exercise 6.4. Queue k is stable if ρk = λ pk/µk < 1, k = 1, 2. The mean number of customers in the system is

ρ1/(1 − ρ1) + ρ2/(1 − ρ2).

Since the two queues are independent, the variance of the sum of the two queue lengths is the sum of the two variances. Hence we get the variance as

ρ1/(1 − ρ1)² + ρ2/(1 − ρ2)².
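The per-machine busy probabilities in the solution to 6.19 should telescope back to the mean number of busy machines in the M/M/K/K system; this is a small illustrative check with assumed values of ρ and K.

```python
from math import factorial

# Assumed illustrative parameters.
rho, K = 2.5, 8

S = lambda n: sum(rho**i / factorial(i) for i in range(n + 1))  # S(-1) = 0

# lim P(X_k = 1) = rho * (S(k-1)/S(k) - S(k-2)/S(k-1)) for machine k.
busy = [rho * (S(k - 1) / S(k) - S(k - 2) / S(k - 1)) for k in range(1, K + 1)]

# Summing over machines must give lim E(Z_K) = rho * S(K-1)/S(K).
total = sum(busy)
print(abs(total - rho * S(K - 1) / S(K)) < 1e-12)  # → True
```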
6.21. The state space can be written as S = {R, 0, 1, 2, ...}, where R is the recovery state and a number j means the system is in operation and there are j customers in the system. The positive transition rates are:

qR,0 = α, qi,i+1 = λ (i ≥ 0), qi,i−1 = µ (i > 0), qi,R = θ (i ≥ 0).

The balance equations are

θ(1 − pR) = α pR,
α pR + µ p1 = (λ + θ) p0,
λ p_{j−1} + µ p_{j+1} = (µ + λ + θ) pj, j ≥ 1.

Solving the balance equations we get

pR = θ/(θ + α),
pj = (α/(θ + α))(1 − b) b^j, j ≥ 0,

where

b = (1 + λ/µ + θ/µ − sqrt((1 + λ/µ + θ/µ)² − 4λ/µ))/2.

By definition, b is always between 0 and 1; thus the system is always stable for α > 0. The expected number of customers in steady state is

L = Σ_{j=1}^∞ j pj = αb/((θ + α)(1 − b)).

Jobs complete at rate µ in states j ≥ 1. Hence the long run rate at which jobs complete is

µ Σ_{j=1}^∞ pj = µ(1 − pR − p0).

The rate at which jobs arrive is λ. Hence the long run fraction of jobs that are completed successfully is (µ/λ)(1 − pR − p0).

6.22. By using the rates in the solution to Modeling Exercise 6.22, we get the following balance equations:

λ p_{R+K} = θ pR,
λ p_{R+K−i} = θ p_{R−i} + λ p_{R+K+1−i}, i = 1, 2, ..., R,
(λ + θ) pi = λ p_{i+1}, i = 1, 2, ..., R,
θ p0 = λ p1.

From the third set of equations, we get

pi = A (a constant), i = R + 1, ..., K + R.

Using this in the fourth set of equations, and solving recursively, we get

pi = (λ/(λ + θ))^{R−i+1} A, i = 1, 2, ..., R.

The last balance equation yields

p0 = (λ/θ)(λ/(λ + θ))^R A.

Using these in the first two sets of the balance equations, and solving recursively, we get

pi = (1 − (λ/(λ + θ))^{R+K+1−i}) A, i = R + 1, R + 2, ..., K + R.

Summing the above solutions, and using the normalizing equation, we get

A = 1/(K + (λ/θ)(λ/(λ + θ))^R).

This gives the limiting distribution of the {X(t), t ≥ 0} process. Demands are lost in state 0. Hence the desired answer is

p0 = (λ/θ)(λ/(λ + θ))^R / (K + (λ/θ)(λ/(λ + θ))^R).
6.23. The balance equations are p0 λk = µk pk, 1 ≤ k ≤ K. The solution satisfying the normalizing equation is

p0 = (1 + Σ_{k=1}^K λk/µk)^{−1},  pk = (λk/µk) p0, 1 ≤ k ≤ K.

6.24. See the solution to Modeling Exercise 6.28. The balance equations are µ p1 = θ p3 = λ p2. Using the normalizing equation we get

p1 = λθ/(λµ + µθ + θλ),  p2 = θµ/(λµ + µθ + θλ),  p3 = λµ/(λµ + µθ + θλ).
6.25. This is a birth and death process with birth parameters λi = λ1 + λ2 for 0 ≤ i ≤ K − 1 and λi = λ1 for i ≥ K, and death parameters µi = µ for i ≥ 1. Let α1 = λ1/µ and α2 = λ2/µ. Using the notation of Example 6.35, we get

ρi = (α1 + α2)^i, 0 ≤ i ≤ K,  ρi = (α1 + α2)^K α1^{i−K}, i ≥ K.

Then if α1 < 1,

Σ_{i=0}^∞ ρi = (1 − (α1 + α2)^K)/(1 − α1 − α2) + (α1 + α2)^K/(1 − α1).
Hence the system is stable if λ1 < µ. The limiting distribution is given by

pj = ρj / Σ_{i=0}^∞ ρi, j ≥ 0.
6.26. See Equation 7.19.

6.27. Let X(t) be the number of tasks in the system at time t. Suppose X(t) = i > 0. An arrival occurs at rate λ and changes the number of tasks to i + 2. A task completes at rate 2µ, which reduces the number of tasks to i − 1. This shows that {X(t), t ≥ 0} is a CTMC on {0, 1, 2, ...} with transition rates

qi,i+2 = λ, i ≥ 0,  qi,i−1 = 2µ, i ≥ 1.

The balance equations become

λ p0 = 2µ p1,
(λ + 2µ) p1 = 2µ p2,
(λ + 2µ) pi = λ p_{i−2} + 2µ p_{i+1}, i ≥ 2.

We compute the generating function φ(z) = Σ_{i=0}^∞ pi z^i. Multiply the equation for pi by z^i and add to get

λ p0 + (λ + 2µ) Σ_{i=1}^∞ pi z^i = 2µ Σ_{i=0}^∞ z^i p_{i+1} + λ Σ_{i=2}^∞ z^i p_{i−2}.

Manipulating the above equation we get

(λ + 2µ) φ(z) = (2µ/z) φ(z) + λz² φ(z) + 2µ(1 − 1/z) p0.

This yields

φ(z) = 2µ(z − 1) p0/(λz(1 − z²) + 2µ(z − 1)) = 2µ p0/(2µ − λz(1 + z)).

We compute p0 by using φ(1) = 1. We get

p0 = 1 − λ/µ = 1 − ρ.

This shows that the condition of stability is λ < µ. Next we compute LT, the expected number of tasks in the system:

LT = Σ_{i=0}^∞ i pi = φ'(1) = 3ρ(1 − ρ)/(2(1 − ρ)²) = 3ρ/(2(1 − ρ)).
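The closed forms p0 = 1 − ρ and LT = 3ρ/(2(1 − ρ)) can be checked by solving a truncated version of this chain numerically; the rates below are arbitrary illustrative values with λ < µ.

```python
import numpy as np

# Assumed illustrative parameters with rho = lam/mu < 1.
lam, mu = 1.0, 1.6
rho = lam / mu

N = 600  # truncation level
Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i + 2 <= N:
        Q[i, i + 2] = lam      # an arriving customer brings two tasks
    if i >= 1:
        Q[i, i - 1] = 2 * mu   # tasks complete at rate 2*mu
    Q[i, i] = -Q[i].sum()

# Stationary distribution: solve pQ = 0 with sum(p) = 1.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

LT = p @ np.arange(N + 1)
print(abs(p[0] - (1 - rho)) < 1e-6)                # → True
print(abs(LT - 3 * rho / (2 * (1 - rho))) < 1e-5)  # → True
```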
To compute LC, the expected number of customers in the system, let πi be the limiting probability that there are i customers in the system. Note that π0 = p0 = 1 − ρ. Each waiting customer has two outstanding tasks, while the customer in service has either one task or two tasks left, with equal probability in steady state. Thus, if there are 0 customers in the system there are zero tasks, and if there are i > 0 customers in the system the expected number of tasks is 2(i − 1) + 1.5 = 2i − .5. Hence we can compute LT in an alternate way as follows:

LT = Σ_{i=1}^∞ (2i − .5) πi = 2LC − .5(1 − π0) = 2LC − .5ρ.

Hence, we get LC = LT/2 + ρ/4.

6.28. See the solution to Modeling Exercise 6.33. The balance equations are pQ = 0, Σ_{i=1}^8 pi = 1. These can be solved to obtain p. The long run fraction of the customers who enter is

1 − p5 − p7 − p8 = 2(2ρ + 1)(2 + ρ)(1 + ρ)/(3ρ⁴ + 11ρ³ + 14ρ² + 14ρ + 2).

6.29. See the solution to Modeling Exercise 6.34. The balance equations are:

(2λ + θ) p1 = µ p2,
(2λ + µ) p2 = 2λ p1 + µ p4,
2λ p3 = θ p1,
(λ + µ) p4 = 2λ p2 + 2λ p3 + µ p5,
µ p5 = λ p4,
Σ_{i=1}^5 pi = 1.

The repairperson is idle in states 1 and 2. Hence the desired probability is given by

p1 + p2 = 2λµ²(µ + θ + 2λ)/(8λ⁴ + 4λ³θ + 8λ³µ + 6λ²µθ + 4λµ²θ + 2λµ³ + µ³θ + 4λ²µ²).
6.30. Let {X(t), t ≥ 0} be the CTMC given in the solution to Modeling Exercise 6.20. Let a_{i,TjT} be the probability that the CTMC ever visits state TjT starting from state i. The limiting size of the molecule is j + 2 if the CTMC gets absorbed in state TjT starting from state 1. Hence π_{j+2} = a_{1,TjT} for j ≥ 1. The first step analysis yields the following equations for the a_{i,TjT}'s:

a_{1,TjT} = (2λ/(2λ + 2θ)) a_{2,TjT} + (2θ/(2λ + 2θ)) a_{1T,TjT},
a_{2,TjT} = (2λ/(2λ + 2θ + µ)) a_{3,TjT} + (µ/(2λ + 2θ + µ)) a_{1,TjT} + (2θ/(2λ + 2θ + µ)) a_{2T,TjT},
a_{i,TjT} = (2λ/(2λ + 2θ + 2µ)) a_{i+1,TjT} + (2µ/(2λ + 2θ + 2µ)) a_{i−1,TjT} + (2θ/(2λ + 2θ + 2µ)) a_{iT,TjT}, i ≥ 3,
a_{1T,TjT} = (µ/(µ + θ)) a_{1,TjT} + (θ/(µ + θ)) δ_{1,j},
a_{iT,TjT} = (µ/(µ + θ)) a_{(i−1)T,TjT} + (θ/(µ + θ)) δ_{i,j}, i ≥ 2.

These equations can be solved to obtain a_{1,TjT}. We explain the general technique below. The last two equations can be solved recursively to obtain

a_{iT,TjT} = βi = (µ/(µ + θ))^i a_{1,TjT} + (µ/(µ + θ))^{i−j} (θ/(µ + θ))  if i ≥ j,
a_{iT,TjT} = βi = (µ/(µ + θ))^i a_{1,TjT}  if i < j.

Substituting in the equation for a_{i,TjT} we get the following nonhomogeneous difference equation with constant coefficients:

a_{i,TjT} = (2λ/(2λ + 2θ + 2µ)) a_{i+1,TjT} + (2µ/(2λ + 2θ + 2µ)) a_{i−1,TjT} + (2θ/(2λ + 2θ + 2µ)) βi, i ≥ 3.

Try a_{i,TjT} = r^i, i ≥ 2, as the homogeneous solution. We get

r^i = (2λ/(2λ + 2θ + 2µ)) r^{i+1} + (2µ/(2λ + 2θ + 2µ)) r^{i−1}, i ≥ 3.

Hence the valid values of r are (1/(2λ))[(λ + µ + θ) ± sqrt((λ + µ + θ)² − 4λµ)]. We discard the + sign since the solution has to be bounded. Then we try a_{i,TjT} = A(µ/(µ + θ))^i as a particular solution. We find that one value of A, call it A1, works for i ≥ j, and another, call it A2, works for i < j. Thus the general solution is

a_{i,TjT} = K r^i + A1 (µ/(µ + θ))^i  if i ≥ j,
a_{i,TjT} = K r^i + A2 (µ/(µ + θ))^i  if 2 ≤ i < j.

Now we are left with two equations (one for a_{1,TjT} and one for a_{2,TjT}) in two unknowns: a_{1,TjT} and K. Solving these two we obtain a_{1,TjT} = π_{j+2}.
Now we are left with two equations (one for a1,T jT and one for a2,T jT ) with two unknowns: a1,T jT and K. Solving these two we obtain a1,T jT = πj+2 . 6.31. If Q is irreducible, it has a unique limiting distribution p = [p1 , ..., pN ]. Since Q is doubly stochastic, we can verify that pi = 1/N, 1 ≤ i ≤ N satisfies the balance equations: N N X X qij = 0. pi qij = (1/N ) i=1
i=1
6.32. Now let {X(t), t ≥ 0} be a P P (λ), and define Y (t) = X(t)mod(21). Then {Y (t), t ≥ 0} is a CTMC on {0, 1, ..., 20} with transition rates qi,i+1 = λ, 0 ≤ i ≤ 19, q20,0 = λ. Thus the Q matrix is doubly stochastic. Hence the limiting distribution of Y is uniform over the state space. Now X(t) is divisible by 3 or 7 if and only if Y (t) ∈ {0, 3, 6, 7, 9, 12, 14, 15, 18}. Hence the desired probability is 9/21 = 3/7. 6.33. The Q matrix is irreducible and doubly stochastic. Hence, from the result in Computational Exercise 31, the limiting distribution is uniform, i.e., lim P (X(t) = j) =
t→∞
1 , 0 ≤ j ≤ N. N +1
6.34. See solution to Modeling Exercise 6.22 and Computational Exercise 6.22.
We see that an order is placed every time the CTMC visits state R. Hence the expected time between two consecutive orders is µRR. From the proof of Theorem 6.25 we see that µRR = 1/(pR qR). Using pR from the solution to Computational Exercise 6.22 and qR = λ + θ we get the desired answer.

6.35. Consider a CTMC {X(t), t ≥ 0} on state space S = {0, 1, 2, 12}, where the state is the list of working machines. The generator matrix is given by

Q =
[ −2λ   λ         λ         0
  µ     −(2λ+µ)   0         2λ
  µ     0         −(2λ+µ)   2λ
  0     µ         µ         −2µ ]

Let T = min{t ≥ 0 : X(t) = 1 or 12}, and mi = E(T | X(0) = i), i ∈ S. We have

m0 = 1/(2λ) + (1/2) m2,
m2 = 1/(2λ + µ) + (µ/(2λ + µ)) m0.

Solving this, we get the desired answer as

m2 = (2λ + µ)/(λ(4λ + µ)).
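The 2-by-2 first-step system for 6.35 can be solved exactly with rational arithmetic to confirm the closed form; the rate values below are arbitrary illustrative assumptions.

```python
from fractions import Fraction

# Arbitrary illustrative rates, kept as exact rationals (assumptions, not given data).
lam, mu = Fraction(3, 2), Fraction(2, 3)

# First-step equations: m0 = 1/(2 lam) + (1/2) m2,
#                       m2 = 1/(2 lam + mu) + (mu/(2 lam + mu)) m0.
# Substitute m0 into the second equation and solve for m2 exactly.
m2 = (Fraction(1, 1) / (2 * lam + mu) + mu / ((2 * lam + mu) * 2 * lam)) / (
    1 - mu / (2 * (2 * lam + mu))
)
print(m2 == (2 * lam + mu) / (lam * (4 * lam + mu)))  # → True
```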
6.36. Let X(t) be the number of working CPUs at time t. {X(t), t ≥ 0} is a CTMC on {0, 1, 2, 3, 4, 5} with rates

qi,i−1 = iµc, qi,0 = iµ(1 − c), i = 2, 3, 4, 5,  q1,0 = µ.

Let T = min{t ≥ 0 : X(t) = 0}, and mi = E(T | X(0) = i). The desired result is m5. First step analysis yields

m0 = 0, m1 = 1/µ, m2 = 1/(2µ) + c m1, m3 = 1/(3µ) + c m2, m4 = 1/(4µ) + c m3, m5 = 1/(5µ) + c m4.

Solving recursively, we get

m5 = (1/µ)[1/5 + c/4 + c²/3 + c³/2 + c⁴].
6.37. X(t) = number of working machines at time t. {X(t), t ≥ 0} is a birth and death process on {0, 1, ..., k} with birth parameters λi = (k − i)λ, 0 ≤ i ≤ k,
and death parameters µi = iµ, 0 ≤ i ≤ k. Let T = min{t ≥ 0 : X(t) = 0}, and mi = E(T | X(0) = i). We want m1. This is a special case of Example 6.28. We have

1/(λj ρj) = (λ1 ··· λ_{j−1})/(µ1 ··· µj) = (k!/((k − j)! j!)) (λ/µ)^j (1/(kλ)), 1 ≤ j ≤ k.

Hence, from Equation 6.220,

m1 = Σ_{j=1}^k (k!/((k − j)! j!)) (λ/µ)^j (1/(kλ)) = (1/(kλ))[(1 + λ/µ)^k − 1],

where we have used the binomial theorem to compute the sum.
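The binomial closed form for m1 can be double-checked by solving the first-step equations of the absorbing chain as a linear system; the values of k, λ, µ below are arbitrary illustrative assumptions.

```python
import numpy as np

# Assumed illustrative parameters.
k, lam, mu = 5, 0.8, 1.3

# First-step equations for m_i = E[time to hit 0 | X(0) = i] in the birth-death
# chain with birth rates (k-i)*lam and death rates i*mu: (L m)_i = -1, with m_0 = 0.
L = np.zeros((k, k))  # unknowns m_1, ..., m_k
for i in range(1, k + 1):
    birth, death = (k - i) * lam, i * mu
    L[i - 1, i - 1] = -(birth + death)
    if i + 1 <= k:
        L[i - 1, i] = birth
    if i - 1 >= 1:
        L[i - 1, i - 2] = death
m = np.linalg.solve(L, -np.ones(k))

m1_formula = ((1 + lam / mu) ** k - 1) / (k * lam)
print(abs(m[0] - m1_formula) < 1e-10)  # → True
```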
6.38. Follow the above solution, but use the rates given in the solution to Modeling Exercise 6.18. We have

m0 = 0,
m1 = 1/(λ + µ) + (λ/(λ + µ)) m2,
m2 = 1/(λ + 2µ) + (2cµ/(λ + 2µ)) m1 + (λ/(λ + 2µ)) m3,
m3 = 1/(λ + 3µ) + (3cµ/(λ + 3µ)) m2 + (λ/(λ + 3µ)) m4,
m4 = 1/(λ + 4µ) + (4cµ/(λ + 4µ)) m3 + (λ/(λ + 4µ)) m5,
m5 = 1/(5µ) + c m4.

The desired result is given by m5. The solution can be obtained by symbolic manipulation programs, but is messy.

6.39. Let X(t) be the number of customers in a finite capacity queue at time t, with X(0) = 1. Let T = min{t ≥ 0 : X(t) = 0}, and mi = E(T | X(0) = i). The expected time until an arrival to an empty system is thus m1 + 1/λ. Here m1 can be computed using the results of Example 6.28. We have

1/(λj ρj) = (1/µ)(λ/µ)^{j−1}, 1 ≤ j ≤ K.

Hence, from Example 6.28,

m1 = Σ_{j=1}^K (1/µ)(λ/µ)^{j−1} = (1/µ)(1 − (λ/µ)^K)/(1 − λ/µ),

and

m1 + 1/λ = (1/λ)(1 − (λ/µ)^{K+1})/(1 − λ/µ).
6.40. Use the state space from the solution to Computational Exercise 6.21. Define T = min{t ≥ 0 : X(t) ∈ {0, R}}. We need to compute mi = E(T | X(0) = i). The first step analysis yields

mi = τ + p m_{i+1} + q m_{i−1}, i ≥ 1,

where τ = 1/(λ + µ + θ), p = λ/(λ + µ + θ), and q = µ/(λ + µ + θ). The general solution to this difference equation is given by

mi = a r^i + c, i ≥ 0,

where r = (1 − sqrt(1 − 4pq))/(2p) and c = τ/(1 − p − q) = 1/θ. Using the initial condition m0 = 0, we get a = −c. Hence the desired solution is given by

mi = (1 − r^i)/θ.
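The closed form mi = (1 − r^i)/θ can be checked directly against the first-step recursion; the rates below are arbitrary illustrative assumptions.

```python
from math import sqrt

# Assumed illustrative parameters.
lam, mu, theta = 1.1, 0.9, 0.3

tau = 1 / (lam + mu + theta)
p, q = lam * tau, mu * tau
r = (1 - sqrt(1 - 4 * p * q)) / (2 * p)

m = lambda i: (1 - r**i) / theta

# The closed form must satisfy m_i = tau + p*m_{i+1} + q*m_{i-1}, with m_0 = 0.
ok = all(abs(m(i) - (tau + p * m(i + 1) + q * m(i - 1))) < 1e-12 for i in range(1, 30))
print(ok and m(0) == 0.0)  # → True
```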
6.41. Consider the birth and death process {ZK(t), t ≥ 0} described in the solution to Modeling Exercise 6.3. Let T be the first passage time to state 0 in this CTMC. Then the answer is given by m1 = E(T | ZK(0) = 1). By using the results of Example 6.28 and simplifying, we get

m1 = (1/λ) Σ_{j=1}^K (λ/µ)^j/j!.
6.42. From the solution to Computational Exercise 6.20 we see that the expected number of customers in the ith queue is given by

Li = λi/(µi − λi).

Hence the long run cost per unit time is h1 L1 + h2 L2, and we need to solve the following optimization problem:

Minimize h1 L1 + h2 L2
subject to: λ1 + λ2 = λ, λ1, λ2 ≥ 0.

This can be solved using a Lagrange multiplier, or by eliminating λ2 and solving an optimization problem in one variable. We get the optimal solution as

λi = µi − (sqrt(hi µi)/(sqrt(h1 µ1) + sqrt(h2 µ2)))(µ1 + µ2 − λ).

6.43. Use the CTMC developed in the solution to Modeling Exercise 6.7. Its limiting distribution is given by

p1 = λθ/(λθ + µθ + λµ),  p2 = µθ/(λθ + µθ + λµ),  p3 = λµ/(λθ + µθ + λµ).
The net income per hour is given by 30p1 − 20p2 − 100λ. We are given µ = 1/80 and θ = 1/3. Thus

p1 = 80λ/(83λ + 1),  p2 = 1/(83λ + 1),  p3 = 3λ/(83λ + 1).

Thus the net income per hour is

(−8300λ² + 2300λ − 20)/(83λ + 1).

This is maximized at approximately λ = .065. Hence the repairperson should visit on the average every 1/.065 = 15.38 hours.

6.44. Use the solution to Computational Exercise 6.22. An item is sold at rate λ in every state except 0, and the holding cost is ih per unit time in state i. Hence the long run net profit per unit time is given by

λ(1 − p0) − h Σ_{i=0}^{K+R} i pi.

6.45. Using the result of Example 6.38 with c = 0, we get R, the total expected discounted revenue from a single machine (operating at time 0) over the infinite time horizon, as

R = r(α + λ)/(α(α + λ + µ)).

Hence the total discounted revenue from k machines is kR.

6.46. Let X(t) be the number of working machines at time t. Then {X(t), t ≥ 0} is a CTMC on {k, k + 1, ..., K − 1, K} with rates

qr,r−1 = rµ, k + 1 ≤ r ≤ K,  qk,K = kµ.

Using the balance equations we see that the limiting distribution is given by

pr = (1/r)/(Σ_{j=k}^K 1/j), k ≤ r ≤ K.

The long run net revenue rate is given by

g(k) = R Σ_{r=k}^K r pr − (Cv + (K − k + 1)Cm) pk kµ.

Numerical evaluation gives

g = [76.82, 103.68, 122.47, 136.91, 147.82, 154.89, 156.59, 148.76, 118.42, 0].
Thus the revenue is maximized at k = 7. That is, it is optimal to do the batch replacement as soon as the number of working machines falls below 7.

6.47. This is a special case of Conceptual Exercise 6.4, with X(t) as in the solution to Modeling Exercise 6.17. Hence we get

k0 = 0,  ki = r/µ + c k_{i−1}, 1 ≤ i ≤ 5.

Solving recursively, we get the desired answer as

k5 = (r/µ)[1 + c + c² + c³ + c⁴] = (r/µ)(1 − c⁵)/(1 − c).
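The geometric-sum closed form for k5 follows from iterating the recursion five times, which is easy to confirm numerically; r, µ and c below are arbitrary illustrative assumptions.

```python
# Assumed illustrative parameters.
r, mu, c = 10.0, 2.0, 0.6

# Iterate k_i = r/mu + c * k_{i-1} five times starting from k_0 = 0.
k = 0.0
for _ in range(5):
    k = r / mu + c * k

closed_form = (r / mu) * (1 - c**5) / (1 - c)
print(abs(k - closed_form) < 1e-12)  # → True
```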
6.48. Use the notation from the solution of Modeling Exercise 6.30. Let

pi,j = lim_{t→∞} P(X(t) = i, Y(t) = j), (i, j) ∈ S.

From the balance equations, we get

λ pr,1 = µ p_{r+1,1}, 0 ≤ r ≤ k − 1.

Hence, using ρ = λ/µ, we get

pr,1 = pk,1/ρ^{k−r}, 0 ≤ r ≤ k.

We also get

pr,0 = c = ρ p_{K−1,1}, k + 1 ≤ r ≤ K.

Finally the balance equation

λ pr,1 = µ p_{r+1,1} + µc, k ≤ r ≤ K − 1,

yields

pr,1 = ρ^{r−k} pk,1 − ((1 − ρ^{r−k})/(1 − ρ)) c, k ≤ r ≤ K − 1.

Finally, using the normalizing equation we can compute pk,1. Using these probabilities, we can write the long run net income per unit time as

µ(1 − p0,1) − h Σ_{r=0}^{K−1} r pr,1 − h Σ_{r=k+1}^K r pr,0.
6.49. See the solution to Computational Exercise 6.8. The expected total cost g(T ) is given by (d + cT )(λ + µ) g(T ) = (d + cT )E(V ) = . µ(1 − e−(λ+µ)T ) Substituting the numerical values we get 2+T . 1 − e−3T This is minimized at approximately T = .74 hours. g(T ) = 15
6.50. Let Xl(t) be the number of large cars that are rented out at time t, and Xm(t) the number of midsize cars rented out at time t. Then {Xl(t), t ≥ 0} is an M/M/k/k queue with arrival rate λl and service rate µl, and {Xm(t), t ≥ 0} is an M/M/(K−k)/(K−k) queue with arrival rate λm and service rate µm. Let ρl = λl/µl and ρm = λm/µm. Then the long run expected numbers of cars rented out are given by
Ll = (Σ_{r=0}^{k} r ρl^r / r!) / (Σ_{r=0}^{k} ρl^r / r!),
and
Lm = (Σ_{r=0}^{K−k} r ρm^r / r!) / (Σ_{r=0}^{K−k} ρm^r / r!).
Hence the long run rate of revenue is g(k) = Ll rl + Lm rm. Using Matlab we can compute the above function for 0 ≤ k ≤ K to get
g = [299.53, 327.25, 348.21, 361.66, 366.85, 363.26, 350.91, 330.64, 303.99, 272.80, 238.73].
We see that g(k) is maximized at k = 4, with g(4) = 366.85 dollars per day. Thus it is optimal to have a fleet of four large and six midsize cars.
6.51. Suppose the server is idle when a customer arrives. The customer enters service with probability P(Vi > p) = 1 − p. Let X(t) be the number of customers in the system at time t. Then {X(t), t ≥ 0} is a two-state CTMC as in Example 6.4 with parameters λ(1 − p) and µ. Hence, in steady state, the system is empty with probability µ/(µ + λ(1 − p)). Thus the expected revenue per unit time is
g(p) = µλ(1 − p)p / (µ + λ(1 − p)).
This is maximized by (with ρ = λ/µ)
p = √(1 + ρ) / (1 + √(1 + ρ)).
With this p the fraction of the arriving customers who join the system is given by µ/(µ + λ(1 − p)), which reduces to 1/√(1 + ρ).
6.52. Assume that 0 ≤ p ≤ 1, otherwise no one will enter, or everyone will enter. Then an arriving customer enters with probability q = 1 − p. The resulting system is an M/M/1 system with arrival rate λq and service rate µ. The long run expected number in the system is λq/(µ − λq). Hence the long run expected revenue per unit time is g(q) = λq(1 − q) − hλq/(µ − λq). Thus we need to solve the constrained optimization problem
Maximize g(q) subject to: 0 ≤ q ≤ min(1, µ/λ).
We have
g′(q) = λ(1 − 2q) − λµh/(µ − λq)^2.
Thus g′(q) = 0 is a cubic in q, so we need to solve this problem numerically.
6.53. See the solution to Computational Exercise 6.25. The answer is (λ1c1 + λ2c2) Σ_{j=0}^{K−1} pj. This reduces to
(λ1c1 + λ2c2) · [(1 − (α1 + α2)^K)/(1 − α1 − α2)] / [(1 − (α1 + α2)^K)/(1 − α1 − α2) + (α1 + α2)^K/(1 − α2)].
6.54. It is clear that {X(t), t ≥ 0} is a CTMC on state space S = Π_{k=1}^{K} Sk. If X(t) = i and Xr(t) changes state from ir to r′, then X(t) changes state to j = i − ir er + r′ er, where er is a row vector of all zeros and one in the rth component. Hence the transition rates of the vector valued process are given by
q_{i, i−ir er+r′ er} = [Qr]_{ir, r′}, i ∈ S, 1 ≤ r ≤ K, r′ ∈ Sr.
Let
pi(k) = lim_{t→∞} P(Xk(t) = i), i ∈ Sk.
Then, for i = (i1, i2, · · · , iK) ∈ S, we have
p(i) = lim_{t→∞} P(X(t) = i) = Π_{k=1}^{K} p_{ik}.
Now, let i ∈ S and j = i − ir er + r′ er. Then we have
p(i) q_{i,j} = (Π_{k=1}^{K} p_{ik}) [Qr]_{ir, r′}
= (Π_{k=1, k≠r}^{K} p_{ik}) p_{ir} [Qr]_{ir, r′}
= (Π_{k=1, k≠r}^{K} p_{ik}) p_{r′} [Qr]_{r′, ir}   (since Xr is reversible)
= (Π_{k=1}^{K} p_{jk}) [Qr]_{r′, ir}   (since j = i − ir er + r′ er)
= p(j) q_{j,i}.
This proves that {X(t), t ≥ 0} is reversible.
6.55. Assume that the service times are iid exp(µ) for all the customers. The state
space is S = {φ, 1, 2, 3, 123, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}. From Example 6.47,
p(φ) = [1 + 3(λ/µ) + 3(λ/µ)^2 + (λ/µ)^3 + θ/µ]^{−1},
and the other limiting probabilities are given by
p({i}) = p(φ)(λ/µ), i = 1, 2, 3,
p({i, j}) = p(φ)(λ/µ)^2, i, j = 1, 2, 3, j ≠ i,
p({1, 2, 3}) = p(φ)(λ/µ)^3,
p({123}) = p(φ)(θ/µ).
6.56. We say that the warehouse is in state (i1, i2, · · · , in) if there are n items in the warehouse and the size of the rth item is M_{ir} (1 ≤ r ≤ n). The empty warehouse is said to be in state φ. Let X(t) be the state of the warehouse at time t. The process {X(t), t ≥ 0} is a CTMC with statespace
S = {φ} ∪ {(i1, i2, · · · , in) : n ≥ 1, ir > 0, 1 ≤ r ≤ n, Σ_{r=1}^{n} M_{ir} ≤ B},
and transition rates given below (we write q(i → j) instead of qij for ease of reading):
q(φ → (i)) = λi, 1 ≤ i ≤ k,
q((i1, i2, · · · , in) → (i1, i2, · · · , in, in+1)) = λ_{in+1}, (i1, i2, · · · , in+1) ∈ S,
q((i1, i2, · · · , in) → (i1, · · · , ir−1, ir+1, · · · , in)) = µ_{ir}, (i1, i2, · · · , in) ∈ S,
q((i) → φ) = µi, 1 ≤ i ≤ k.
Thus the statespace is finite and the CTMC is irreducible and positive recurrent. Let
p(φ) = lim_{t→∞} P(X(t) = φ),
p(i1, i2, · · · , in) = lim_{t→∞} P(X(t) = (i1, i2, · · · , in)), (i1, i2, · · · , in) ∈ S.
Now suppose, for (i1, i2, · · · , in) ∈ S,
p(i1, i2, · · · , in) = (λ_{i1} λ_{i2} · · · λ_{in})/(µ_{i1} µ_{i2} · · · µ_{in}) p(φ).   (6.2)
It is straightforward to verify that this satisfies the local balance equations:
λi p(φ) = µi p(i), 1 ≤ i ≤ k,
λ_{in+1} p(i1, i2, · · · , in) = µ_{in+1} p(i1, i2, · · · , in, in+1),
for (i1, i2, · · · , in, in+1) ∈ S. The above equations imply that {X(t), t ≥ 0} is a
reversible CTMC with the limiting distribution given above. Using the normalizing equation we get
p(φ) = [1 + Σ_{(i1,i2,···,in)∈S} (λ_{i1} λ_{i2} · · · λ_{in})/(µ_{i1} µ_{i2} · · · µ_{in})]^{−1}.
For i = (i1, i2, · · · , in) ∈ S define
M(i) = Σ_{r=1}^{n} M_{ir}.
The probability that the warehouse has to reject an incoming item due to lack of space is given by
Σ_{i∈S} p(i) Σ_{j: 1≤j≤k, Mj > B−M(i)} λj.
6.57. Let the network be denoted by G = (N, E), where N is the set of nodes and E the set of undirected arcs. The network is connected, so that there is a path from any node to any other node in the network. Let N(i) be the set of neighbors of node i, i.e., N(i) = {j ∈ N : (i, j) ∈ E}, and let di = |N(i)| be the degree of node i. Then the transition rates of the CTMC {X(t), t ≥ 0} can be written as qi,j = qi/di, j ∈ N(i), i ∈ N. Since the network is connected, this is an irreducible CTMC. It is a reversible CTMC if we can find a limiting distribution that satisfies pi qi,j = pj qj,i, i.e., pi qi/di = pj qj/dj. This immediately suggests pi = c di/qi, i ∈ N, as a possible solution. The constant c is chosen so that the pi's add up to one. Thus we have
pi = (di/qi) / Σ_{j∈N} (dj/qj).
Thus the CTMC is reversible with the limiting distribution given above.
6.58. The retrial queue is not reversible, since q(1,0),(1,1) > 0 but q(1,1),(1,0) = 0.
6.59. Let Yi(t) be the position of the ith ball at time t. Then {Yi(t), t ≥ 0} is given to be a reversible CTMC. Let Q be its rate matrix, and π = [π1, π2, ..., πN] be its steady-state distribution. Then we are given that πi qij = πj qji. {X(t), t ≥ 0} is a CTMC on statespace S = {i = [i1, i2, ..., iN] : Σ_{j=1}^{N} ij = k} with transition rates (here em is an N-vector with 1 in the mth place and zeros everywhere else)
q_{i, i−em+en} = im qm,n, i ∈ S.
Then the limiting distribution of X(t) is a Multinomial with parameters k and π, that
is,
π(i) = (k!/(i1! i2! ... iN!)) π1^{i1} π2^{i2} ... πN^{iN}, i ∈ S.
It can be directly verified that
πm qmn = πn qnm ⇒ π(i) q_{i, i−em+en} = π(i − em + en) q_{i−em+en, i},
thus showing that {X(t), t ≥ 0} is a reversible CTMC.
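The detailed-balance identity can be verified numerically on a small example (N = 3 positions, k = 2 balls, with an arbitrarily chosen reversible base chain; all numerical values below are placeholders):

```python
from itertools import product
from math import factorial, isclose

piv = [0.5, 0.3, 0.2]   # stationary distribution of the base chain
q = [[piv[n] if m != n else 0.0 for n in range(3)] for m in range(3)]
# this base chain is reversible: piv[m]*q[m][n] == piv[n]*q[n][m]

k = 2
states = [i for i in product(range(k + 1), repeat=3) if sum(i) == k]

def pi(i):
    # multinomial limiting distribution of the ball process
    p = factorial(k)
    for ij, pj in zip(i, piv):
        p *= pj ** ij / factorial(ij)
    return p

for i in states:
    for m in range(3):
        for n in range(3):
            if m == n or i[m] == 0:
                continue
            j = list(i); j[m] -= 1; j[n] += 1; j = tuple(j)
            # pi(i) q_{i, i-em+en} = pi(j) q_{j, j-en+em}
            assert isclose(pi(i) * i[m] * q[m][n], pi(j) * j[n] * q[n][m])
print("detailed balance holds on all", len(states), "states")
```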
6.60. From the solution to Conceptual Problem 6.35 we see that {I(t), t ≥ 0} is a CTMC on state space {0, 1, 2, · · · , N} with rates qi,i+1 = βi(N − i), 0 ≤ i ≤ N. The initial state is 1. This is a pure birth process. Suppose N ≥ 2. The expected time to reach state N from state 1 is given by
Σ_{i=1}^{N−1} E(time spent in state i) = Σ_{i=1}^{N−1} 1/(βi(N − i));
this can be simplified (using 1/(i(N − i)) = (1/N)(1/i + 1/(N − i))) to get the desired answer as
(2/(βN)) Σ_{i=1}^{N−1} (1/i).
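The simplification rests on the partial-fraction identity above; a quick numerical check, with placeholder values of β and N:

```python
# Verify sum_{i=1}^{N-1} 1/(beta*i*(N-i)) == (2/(beta*N)) * sum_{i=1}^{N-1} 1/i.
beta, N = 0.3, 12   # placeholder values

lhs = sum(1.0 / (beta * i * (N - i)) for i in range(1, N))
rhs = (2.0 / (beta * N)) * sum(1.0 / i for i in range(1, N))
print(lhs, rhs)
```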
6.61. We are asked to compute the probability that the epidemic follows the sample path: (N − K, K) → (N − K, K − 1) → (N − K, K − 2) → · · · → (N − K, 1) → (N − K, 0). Based on the rates given in the solution to Conceptual Exercise 6.36 we see that this probability is given by
Π_{j=1}^{K} jγ/(jγ + j(N − K)β).
6.62. We are asked to compute the probability that the epidemic follows the sample path: (N − 1, 1) → (N − 2, 1) → (N − 3, 1) → · · · → (m, 1) → (m, 0). Based on the rates given in the solution to Conceptual Exercise 6.36 we see that this probability is given by
(γ/(γ + mβ)) Π_{j=m+1}^{N−1} jβ/(γ + jβ).
6.63. From first step analysis we get (cancelling j from the numerator and denominator in the fractions)
pi,j = ((γ + ν)/(γ + ν + βi)) pi,j−1 + (βi/((γ + ν) + βi)) pi−1,j+1   (6.3)
for k ≤ i ≤ N − 1, k ≤ i + j ≤ N, j ≥ 1, with boundary conditions pk,0 = 1, and pi,j = 0 for i < k.
As a special case we have pk,j = α pk,j−1, j ≥ 1, where α = (γ + ν)/(γ + ν + βk). Solving recursively, we get
pk,j = α^j, j ≥ 0.   (6.4)
Now we can solve Equations 6.3 for pi,j, for increasing values of i, starting with i = k + 1, and using the boundary condition in Equation 6.4.
6.64. Let X(t) be the state of the patient at time t. Model {X(t), t ≥ 0} as a pure birth process with rates λi, 1 ≤ i ≤ 4, and λ5 = 0. The expected time to reach state 5 from state i is
τi = Σ_{j=i}^{4} 1/λj, 1 ≤ i ≤ 4.
Using the numerical values of τi, we get λ4 = .5, λ3 = .1923, λ2 = .2272, λ1 = 5, per year. The patient starts in state 1. The probability that the patient is dead by time t is given by p1,5(t) = P(X(t) = 5 | X(0) = 1). This can be computed using the results of Example 6.25. Using the values of the λ's given above, we get
p*1,5(s) = λ1λ2λ3λ4 / (s(s + λ1)(s + λ2)(s + λ3)(s + λ4)).
Inverting this we get
p1,5(t) = 1 − 11.0017e^{−.1923t} + 10.5798e^{−.2272t} − .5783e^{−.5t} + .00021156e^{−5t}.
6.65. Using the notation of the solution to Computational Exercise 6.64 we see that the probability that the patient is in state 2 at time t is given by p1,2(t). Its Laplace transform is given by
p*1,2(s) = λ1 / ((s + λ1)(s + λ2)).
Inverting this we get
p1,2(t) = 1.0476e^{−.2272t} − 1.0476e^{−5t}.
This is maximized at t = ln(λ1/λ2)/(λ1 − λ2) ≈ .65 years, or about 236 days. So the optimal testing time is T ≈ 236 days.
6.66. Use the birth and death rates given in Equations 6.13 and 6.14. Define
ρ0 = 1,
ρi = (λ0 λ1 · · · λi−1)/(µ1 µ2 · · · µi), i ≥ 1,
ρi = (µi+1 µi+2 · · · µ0)/(λi λi+1 · · · λ−1), i ≤ −1.
Then the limiting distribution is given by pi = ρi p0, −∞ < i < ∞, where
p0 = 1 / Σ_{i=−∞}^{∞} ρi.
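As a numerical check of the partial-fraction inversion in the solution to 6.64 above, the residues of p*1,5(s) at its poles can be computed directly; the rounded rates are those obtained there:

```python
from math import exp

lam = [5.0, 0.2272, 0.1923, 0.5]   # lambda_1 .. lambda_4 (rounded values)
num = lam[0] * lam[1] * lam[2] * lam[3]

coef = {}
for j, lj in enumerate(lam):
    d = -lj                     # the factor s of the denominator, at s = -lj
    for m, lm in enumerate(lam):
        if m != j:
            d *= lm - lj
    coef[lj] = num / d          # residue at the pole s = -lj

def p15(t):
    # the residue at s = 0 is 1, the limiting value of p_{1,5}(t)
    return 1.0 + sum(c * exp(-l * t) for l, c in coef.items())

print({l: round(c, 4) for l, c in coef.items()})
```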
6.67. Let {X(t), t ≥ 0} be the birth and death process defined in Subsection 6.3.4. In state i > 0, buy orders are executed at rate λbi, since all incoming buy orders are executed. In state i < 0, a buy order is executed at rate λsi, since every incoming sell order removes a buy order. Hence the total rate at which buy orders are executed is given by
A = Σ_{i=1}^{∞} λbi pi + Σ_{i=−∞}^{−1} λsi pi,
where the pi's are as given in the solution to Computational Exercise 6.66. The rate at which buy orders arrive is given by
B = Σ_{i=−∞}^{∞} λbi pi.
Hence the long run fraction of buy orders that get executed is given by A/B.
6.68. Using the notation of the solution to Modeling Exercise ??, we see that {(X1(t), X2(t)), t ≥ 0} is a CTMC on state space S = {(0, 0), (0, 1), (0, −1), (1, 0), (−1, 0)} with transition rates
q(0,0),(1,0) = λs1, q(0,0),(−1,0) = λb1, q(0,0),(0,1) = λs2, q(0,0),(0,−1) = λb2,
q(1,0),(0,0) = λb1 + λb2, q(−1,0),(0,0) = λs1, q(0,1),(0,0) = λb2, q(0,−1),(0,0) = λs1 + λs2.
The balance equations yield:
p(1,0) = (λs1/(λb1 + λb2)) p(0,0), p(−1,0) = (λb1/λs1) p(0,0),
p(0,1) = (λs2/λb2) p(0,0), p(0,−1) = (λb2/(λs1 + λs2)) p(0,0).
The quantity p(0,0) can be obtained from the normalizing equation
p(0,0) + p(1,0) + p(−1,0) + p(0,1) + p(0,−1) = 1.
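The closed-form solution of the balance equations can be cross-checked by computing the limiting distribution of this five-state CTMC numerically (via uniformization); the rate values below are placeholders:

```python
ls1, lb1, ls2, lb2 = 1.0, 2.0, 1.5, 0.8   # lambda s1, b1, s2, b2 (placeholders)

Q = {
    (0, 0): {(1, 0): ls1, (-1, 0): lb1, (0, 1): ls2, (0, -1): lb2},
    (1, 0): {(0, 0): lb1 + lb2},
    (-1, 0): {(0, 0): ls1},
    (0, 1): {(0, 0): lb2},
    (0, -1): {(0, 0): ls1 + ls2},
}
states = list(Q)

# limiting distribution via uniformization and power iteration
u = 10.0   # uniformization constant, larger than every total exit rate
p = {s: 1.0 / len(states) for s in states}
for _ in range(20000):
    nxt = {s: 0.0 for s in states}
    for s in states:
        out = sum(Q[s].values())
        nxt[s] += p[s] * (1 - out / u)
        for t, r in Q[s].items():
            nxt[t] += p[s] * r / u
    p = nxt

p00 = p[(0, 0)]
print(round(p[(1, 0)] / p00, 6), round(ls1 / (lb1 + lb2), 6))
```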
6.69. From the solution to Modeling Exercise 6.39 we see that {S(t), t ≥ 0} is a birth and death process on statespace {0, 1, 2, 3, · · ·} with rates qi,i+1 = λ, i ≥ 0; qi,i−1 = iθ + µ, i ≥ 1. Let
ρ0 = 1, ρi = λ^i/(µ(µ + θ) · · · (µ + (i − 1)θ)), i ≥ 1.
Then the process is stable if θ > 0 and its limiting distribution is given by
pj = ρj / Σ_{i=0}^{∞} ρi.
6.70. Suppose an incoming sell order sees n other sell orders ahead of it. Let S(t) be −1 if this sell order has been executed by time t, and a if it has left without being executed by time t. If it has neither been executed nor left by time t, let S(t) denote the number of sell orders in front of it. We have S(0) = n. {S(t), t ≥ 0} is a CTMC on statespace {−1, 0, 1, 2, 3, · · · , n, a} with rates
qi,a = θ, 0 ≤ i ≤ n; qi,i−1 = iθ + µ, 0 ≤ i ≤ n.
Let ui be the probability of eventually visiting state −1 (i.e., of the order being executed) starting from state i, 0 ≤ i ≤ n. The desired answer is given by un. We have
u0 = µ/(µ + θ), ui = ((µ + iθ)/(µ + (i + 1)θ)) ui−1, 1 ≤ i ≤ n.
Solving recursively, we get
un = Π_{i=0}^{n} (µ + iθ)/(µ + (i + 1)θ) = µ/(µ + (n + 1)θ).
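The telescoping of the product to µ/(µ + (n + 1)θ) is easy to confirm numerically (placeholder parameter values):

```python
# Check that the product form of u_n telescopes to mu / (mu + (n+1)*theta).
mu, theta, n = 2.0, 0.4, 7   # placeholder parameters

u = mu / (mu + theta)   # u_0
for i in range(1, n + 1):
    # recursion u_i = (mu + i*theta)/(mu + (i+1)*theta) * u_{i-1}
    u *= (mu + i * theta) / (mu + (i + 1) * theta)

print(u, mu / (mu + (n + 1) * theta))
```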
6.71. Let Xj(t) be the number of patients of type j at time t. We have seen in the solution of Modeling Exercise 6.40 that {Xj(t), t ≥ 0} is a CTMC with statespace {0, 1, 2, · · ·} and transition rates
qi,i+1 = λj, i ≥ 0, qi,i−1 = Σ_{k=1}^{I} µk βk,j = θj, i ≥ 1.
Suppose λj < θj. Then {Xj(t), t ≥ 0} is stable, with
p0,j = lim_{t→∞} P(Xj(t) = 0) = 1 − λj/θj.
Thus the fraction of kidneys that lead to a successful transplant is given by
Σ_{j=1}^{J} (λj/θj) Σ_{k=1}^{I} µk βk,j αk,j.
6.72.
1. Note that when X(t) = i, servers 1 through i are busy, and the total service rate is θi = Σ_{j=1}^{i} µj. Hence {X(t), t ≥ 0} is a birth and death process on {0, 1, 2, · · ·} with birth rate λ in each state, and death rates θi in states 1 ≤ i ≤ s and θs in states i ≥ s.
2. The condition of positive recurrence is θs > λ. Let
ρ0 = 1, ρi = λ^i/(θ1 θ2 · · · θi), 1 ≤ i ≤ s, ρi = ρs (λ/θs)^{i−s}, i ≥ s.
Then
pi = lim_{t→∞} P(X(t) = i) = ρi / Σ_{j=0}^{∞} ρj, i ≥ 0.
3. Server i is busy if and only if there are at least i customers in the system. Hence the probability that server i is busy is given by Σ_{j=i}^{∞} pj.
6.73.
1. {X(t), t ≥ 0} is a CTMC with state space {0, 1, 2, · · ·}. Suppose X(t) = i, and suppose the batch sizes arriving after time t are Y1, Y2, · · ·, arriving at times t + S1, t + S2, · · ·. Then the CTMC jumps out of state i at time t + SN if Y1 ≤ i, Y2 ≤ i, · · · , YN−1 ≤ i, YN > i. Thus P(N = n) = (1 − α^i)^{n−1} α^i. Hence it stays in state i for an exp(λα^i) amount of time and then jumps to state i + k > i with probability α^{k−1}(1 − α). Hence
qi,i+k = λα^{i+k−1}(1 − α), k ≥ 1, i ≥ 0.
2. All states are transient.
3. pi,i(t) is the probability that all the batches arriving up to time t are of size i or less. Hence
pi,i(t) = Σ_{k=0}^{∞} e^{−λt} ((λt)^k/k!) (1 − α^i)^k = e^{−λtα^i}.
pi,j(t) (j > i) is the probability that one batch of size j arrived up to time t, and all other batches were of size j or less. Hence
pi,j(t) = α^{j−1}(1 − α) Σ_{k=1}^{∞} e^{−λt} ((λt)^k/k!) (1 − α^j)^{k−1}
= α^{j−1}(1 − α)(e^{−λtα^j} − e^{−λt})/(1 − α^j),
since Σ_{k≥1} e^{−λt}((λt)^k/k!) x^{k−1} = (e^{−λt(1−x)} − e^{−λt})/x with x = 1 − α^j. (Note that this expression tends to 0 as t → ∞, consistent with all states being transient.)
6.74. Consider a tandem line of two stations. There is an infinite supply of jobs in front of station 1 and hence station 1 is always busy. The jobs that are processed at station 1 are stored in a buffer with a finite capacity b. Station 2 draws jobs from this buffer for processing. When the jobs are processed at station 2, they leave the system. Station 1 is blocked when there are b jobs in the buffer and station 1 completes the processing of another job. (Note that station 1 can start processing a job even if its output buffer is full.) The blockage is removed as soon as a job leaves station 2. We
have two servers. Server i takes iid exp(µi) (i = 1, 2) amounts of time to complete jobs, with µ1 < µ2. Suppose server i is assigned to station i.
1. Let X(t) be the number of jobs in the buffer plus the one in service (if any) at station two. We also set X(t) = b + 2 if the buffer is full and server one has completed processing (i.e., station 1 is blocked). {X(t), t ≥ 0} is a CTMC on state space {0, 1, · · · , b + 1, b + 2} with rates qi,i+1 = µ1, 0 ≤ i ≤ b + 1, and qi,i−1 = µ2, 1 ≤ i ≤ b + 2.
2. Let ρ = µ1/µ2. Then pj = lim_{t→∞} P(X(t) = j) = ρ^j p0, 0 ≤ j ≤ b + 2,
where
p0 = (1 − ρ)/(1 − ρ^{b+3}).
The throughput is given by µ2(1 − p0).
3. The desired answer is
1 + Σ_{j=0}^{b} j pj + (b + 1)(p_{b+1} + p_{b+2}).
4. Consider the system with µ1 and µ2 interchanged. Let qj be the limiting probability that this system is in state j. Then we see that qj = ρ^{−j} q0, 0 ≤ j ≤ b + 2, where
q0 = (1 − ρ^{−1})/(1 − ρ^{−(b+3)}).
The throughput in this system is given by µ1(1 − q0). Direct calculations show that µ1(1 − q0) = µ2(1 − p0). Thus the throughput remains the same, no matter what the server assignment is.
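The throughput identity µ1(1 − q0) = µ2(1 − p0) can be checked numerically (the values of µ1, µ2 and b below are placeholders):

```python
mu1, mu2, b = 1.0, 1.6, 4   # placeholder values, mu1 < mu2
rho = mu1 / mu2

p0 = (1 - rho) / (1 - rho ** (b + 3))            # original server assignment
q0 = (1 - rho ** -1) / (1 - rho ** (-(b + 3)))   # servers interchanged

print(mu2 * (1 - p0), mu1 * (1 - q0))
```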
6.75.
1. {X(t), t ≥ 0} is a CTMC on {0, 1, · · · , K} with rates qi,i+1 = λ, 0 ≤ i ≤ K − 1, and qK,0 = µ.
2. The balance equations are λpi = λpi−1, 1 ≤ i ≤ K − 1, and λp0 = µpK. Using the normalizing equation we get (with ρ = λ/µ)
pi = 1/(K + ρ), 0 ≤ i ≤ K − 1, pK = ρ/(K + ρ).
The server is busy with probability pK.
3. The desired answer is given by
Lq = Σ_{i=1}^{K−1} i pi = K(K − 1)/(2(K + ρ)).
4. The expected long-run average number of customers in the system is given by L = Lq + KpK.
6.76.
1. Let X(t) be the number of customers in the system at time t. {X(t), t ≥ 0} is a CTMC on {0, 1, · · · , K − 1} with rates qi,i+1 = λ, 0 ≤ i ≤ K − 2, qK−1,0 = λ, qi,i−1 = µ, 1 ≤ i ≤ K − 1. The balance equations are
λpi = µpi+1 + λpK−1, 0 ≤ i ≤ K − 2, Σ_{i=0}^{K−1} pi = 1.
Let ρ = λ/µ and c = ρpK−1. Then the solution is given by
pi = ρ^i p0 − ((1 − ρ^i)/(1 − ρ)) c, 0 ≤ i ≤ K − 1.
Setting i = K − 1 and using c = ρpK−1 we get
pK−1 = (ρ^{K−1}(1 − ρ)/(1 − ρ^K)) p0.
Hence
pi = [ρ^i − ρ^K (1 − ρ^i)/(1 − ρ^K)] p0, 0 ≤ i ≤ K − 1.
Finally, p0 can be computed using the normalizing equation.
2. c(K) = h Σ_{i=0}^{K−1} i pi + λ pK−1 s K.
3. This is the same as the expected first passage time from 0 to K in a birth and death process on {0, 1, · · · , K} with the following rates: qi,i+1 = λ, 0 ≤ i ≤ K − 1, qi,i−1 = µ, 1 ≤ i ≤ K. The solution is (using α = µ/λ)
m0 = K/(λ(1 − α)) + (α^K − 1)/(λ(1 − α)^2).
6.77.
1. Let Xi(t) be the number of type i customers (i = A, B) in the system at time t. The state of the system is X(t) = (XA(t), XB(t)). The state space is S = {(0, 0), (1, 0), (2, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 1)}.
2. The balance equations are
(λA + λB) p(0,0) = µA p(1,0) + µB p(0,1),
(λA + λB + µA) p(1,0) = µA p(2,0) + µB p(1,1) + λA p(0,0),
(λB + µA) p(2,0) = µB p(2,1) + λA p(1,0),
(λA + λB + µB) p(0,1) = µB p(0,2) + µA p(1,1) + λB p(0,0),
(λA + µB) p(0,2) = µA p(1,2) + λB p(0,1),
(λA + λB + µA + µB) p(1,1) = µA p(2,1) + µB p(1,2) + λA p(0,1) + λB p(1,0),
(µA + µB) p(1,2) = λA p(0,2) + λB p(1,1),
(µA + µB) p(2,1) = λB p(2,0) + λA p(1,1).
The normalization equation is
Σ_{(i,j)∈S} p(i,j) = 1.
3. The long run utilization of server A is 1 − p(0,0) − p(0,1) − p(0,2).
6.78. Let Mi,j(t) be the expected time spent by the machine in state j over (0, t] starting in state i. These quantities are given in Equation 6.24. The desired answer is given by α0 M1,0(t) + α1 M1,1(t).
6.79.
1. Model this system as a continuous-time Markov chain. Let X(t) be the number of passengers at the bus depot at time t. {X(t), t ≥ 0} is a CTMC with rates
qi,i+1 = λ, i ≥ 0, and qi,0 = µ, i ≥ k.
2. The balance equations are λpi = λpi−1, 1 ≤ i ≤ k − 1, and (λ + µ)pi = λpi−1, i ≥ k. Let ρ = λ/(λ + µ). Assume µ > 0 so that ρ < 1. The solution is pi = p0, 1 ≤ i ≤ k − 1, and pi = ρ^{i−k+1} p0, i ≥ k. The normalizing equation yields
p0 = (k + 1/(1 − ρ))^{−1}.
The stability condition is µ > 0.
3. Consider a passenger who arrives at an empty bus depot. What is the expected time until this passenger leaves on a bus? The passenger has to wait for k − 1 additional arrivals and then for the officer to turn up. Hence the expected time is (k − 1)/λ + 1/µ.
6.80. The system is stable if λ/µ1 < 1. Assume stability. The balance equations are
λ p(0, 0) = µ0 p(1, 0) + µ1 p(1, 1),
(λ + µ0) p(i, 0) = λ p(i − 1, 0), i ≥ 1,
(λ + µ1) p(1, 1) = µ0 p(2, 0) + µ1 p(2, 1),
(λ + µ1) p(i, 1) = λ p(i − 1, 1) + µ0 p(i + 1, 0) + µ1 p(i + 1, 1), i ≥ 2.
Let ρi = λ/µi, i = 0, 1, and let α = ρ1, β = ρ0/(1 + ρ0). Then the solution is
p(i, 0) = β^i p(0, 0), i ≥ 0,
p(i, 1) = αβ ((α^i − β^i)/(α − β)) p(0, 0), i ≥ 1.
p(0, 0) can be computed using the normalizing equation. Here is an alternate method: p(0, 0) is the probability that an incoming customer sees an idle server (by PASTA). Hence the expected service time of an arbitrary customer is τ = p(0, 0)/µ0 + (1 − p(0, 0))/µ1. Now p(0, 0), the probability that the server is idle in a single server system, is 1 − λτ. Hence, we must have
p(0, 0) = 1 − λ(p(0, 0)/µ0 + (1 − p(0, 0))/µ1).
Solving, we get p(0, 0) = (1 − ρ1)/(1 + ρ0 − ρ1), where ρi = λ/µi, i = 0, 1.
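As a sanity check, the closed-form probabilities together with this value of p(0, 0) should sum to one; a truncated numerical verification (placeholder rates):

```python
lam, mu0, mu1 = 1.0, 3.0, 2.0   # placeholder rates with lam/mu1 < 1
rho0, rho1 = lam / mu0, lam / mu1
alpha, beta = rho1, rho0 / (1 + rho0)

p00 = (1 - rho1) / (1 + rho0 - rho1)
total = sum(beta ** i * p00 for i in range(200))                 # states (i, 0)
total += sum(alpha * beta * (alpha ** i - beta ** i) / (alpha - beta) * p00
             for i in range(1, 200))                             # states (i, 1)
print(round(total, 8))
```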
Conceptual Exercises
6.1. Let Z be the time spent in state j during (0, T), and Mij = E(Z | X(0) = i). Clearly MNj = 0. We have
E(Z | X(0) = i, X(S1) = k) =
E(S1 | X(0) = i), if i = j and k = N,
E(S1 | X(0) = i) + E(Z | X(0) = k), if i = j and k < N,
0, if i ≠ j and k = N,
E(Z | X(0) = k), if i ≠ j and k < N.
Hence, for 0 ≤ i ≤ N − 1, we get
Mij = E(Z | X(0) = i) = Σ_{k=0}^{N} E(Z | X(0) = i, X(S1) = k) P(X(S1) = k | X(0) = i)
= δij/qi + Σ_{k=0, k≠i}^{N−1} (qi,k/qi) Mkj.
Now let B = [qij ]i,j≥1 . The above equations can be written as BM = −I. 6.2. From Theorem 6.15 we have sφ(s) = w + M φ(s). Taking derivatives repeatedly, we get φ(k) (s), the kth derivative of φ(s), as kφ(k) (s) + sφ(k−1) (s) = M φ(k) (s). Using dk φi (s)s=0 , dsk in the above equation we get the desired result. mi (k) = (−1)k
6.3. Let mk = E(T | X(0) = k). Let S1 be the time of the first transition and X1 = X(S1+). We have
E(T | X(0) = k, X(S1) = n) =
E(S1 | X(0) = k), if k = i and n = j,
E(S1 | X(0) = k) + E(T | X(0) = n), otherwise.
Hence, for k ≠ i we get
mk = E(T | X(0) = k) = Σ_{n≠k} E(T | X(0) = k, X(S1) = n) P(X(S1) = n | X(0) = k)
= 1/qk + Σ_{n≠k} (qk,n/qk) mn,
while for k = i we get
mi = 1/qi + Σ_{n≠i,j} (qi,n/qi) mn.
Rearranging, we get
Σ_{n∈S} qkn mn = −1, k ≠ i,
Σ_{n∈S\{j}} qin mn = −1.
6.4. Let ρi = E(Z | X(0) = i). If i = N, then T = 0, and hence ρN = 0. For i < N we have
E(Z | X(0) = i, X(S1) = j) =
E(S1 | X(0) = i) r(i), if j = N,
E(S1 | X(0) = i) r(i) + E(Z | X(0) = j), if j < N.
Hence, for 0 ≤ i ≤ N − 1, we get
ρi = E(Z | X(0) = i) = Σ_{j=0}^{N} E(Z | X(0) = i, X(S1) = j) P(X(S1) = j | X(0) = i)
= r(i)/qi + Σ_{j=0, j≠i}^{N−1} (qi,j/qi) ρj.
6.5. Let p(t) be the pmf of X(t). Since p(0) = p is the limiting distribution, it satisfies pQ = 0, and hence pQ^n = 0 for all n ≥ 1. We have
p(t) = p(0)P(t) = p exp(Qt) = p Σ_{n=0}^{∞} Q^n t^n/n! = p.
6.6. Suppose X(0) = i. The expected discounted reward earned during the first sojourn time T is
E(∫_0^T e^{−αt} ri dt + e^{−αT} r_{i,X(T)} | X(0) = i)
= E((1 − e^{−αT}) ri/α + e^{−αT} r_{i,X(T)} | X(0) = i)
= ri/(qi + α) + (qi/(qi + α)) Σ_{j≠i} (qij/qi) rij
= r(i)/(qi + α),
where
r(i) = ri + Σ_{j≠i} qij rij.
The result follows from this.
6.7. Suppose X(0) = i. The expected total reward earned during the first sojourn time is
ri/qi + Σ_{j≠i} (qij/qi) rij = r(i)/qi,
where
r(i) = ri + Σ_{j≠i} qij rij.
The result follows using this in place of c(i) in Theorem 6.31.
6.8. We have
gi(t) = E(total reward over (0, t] | X(0) = i)
= E(∫_0^t c_{X(u)} du | X(0) = i)
= ∫_0^t E(c_{X(u)} | X(0) = i) du
= ∫_0^t Σ_j cj pij(u) du
= Σ_j Mij(t) cj.
Hence the result follows.
6.9. Let {Y(t), t ≥ 0} be a CTMC on {0, 1, ..., N} with transition rates q′_{i,j} = q_{i,j}/r(i). Let T′ be the first time the Y process hits state N, and m′_i = E(T′ | Y(0) = i). From the proof of Theorem 6.19 we get
m′_i = 1/q′_i + Σ_{j=0, j≠i}^{N−1} (q′_{i,j}/q′_i) m′_j = r(i)/qi + Σ_{j=0, j≠i}^{N−1} (q_{i,j}/qi) m′_j.
But these are the same equations as for ρi of the previous exercise. Hence, from the uniqueness of the solutions, m′_i = ρi for all i. In fact T′ and Z have the same distribution, since the sequences of states visited by the X and Y processes are statistically identical, and the sojourn time of the Y process in state i has the same distribution as the reward earned by the X process during one visit to state i, both being exp(qi/r(i)) random variables.
6.10. This is a special case of Conceptual Exercise 6.7. Fix i and j ≠ i, and define rk = 0 for all k, and rmn = 1 if m = i and n = j, and zero otherwise. Then Nij(t) is the total reward earned up to t. We get r(i) = ri + Σ_{m≠n} rmn qmn = qij. Hence
the long run reward per unit time is given by Σ_{k∈S} r(k) pk = pi qij.
6.11. Let M(t) be the number of remaining instructions at time t. Suppose X(0) = i and S1 is the first sojourn time. If S1 ri > x, the job completes at time T = x/ri; else the system moves to a new state j with probability qij/qi, and the job has already received S1 amount of processing and has x − S1 ri instructions still to go for completion. Hence
E(e^{−sT} | X(0) = i, S1 = y, X(y) = j) =
e^{−sx/ri}, if y > x/ri,
E(e^{−s(y+T)} | X(0) = j, M(0) = x − y ri), if y ≤ x/ri,
=
e^{−sx/ri}, if y > x/ri,
e^{−sy} φj(s, x − y ri), if y ≤ x/ri.
Unconditioning with respect to y and j, we get
φi(s, x) = Σ_{j≠i} ∫_0^∞ E(e^{−sT} | X(0) = i, S1 = y, X(y) = j) qij e^{−qi y} dy
= Σ_{j≠i} ∫_0^{x/ri} e^{−sy} φj(s, x − y ri) qij e^{−qi y} dy + Σ_{j≠i} ∫_{x/ri}^{∞} e^{−sx/ri} qij e^{−qi y} dy
= Σ_{j≠i} ∫_0^{x/ri} e^{−sy} φj(s, x − y ri) qij e^{−qi y} dy + e^{−(s+qi)x/ri}.
Now multiply both sides by e^{−wx} and integrate with respect to x from zero to infinity. We have
φ*_i(s, w) = ∫_0^∞ e^{−wx} φi(s, x) dx
= Σ_{j≠i} ∫_0^∞ e^{−wx} ∫_0^{x/ri} e^{−sy} φj(s, x − y ri) qij e^{−qi y} dy dx + ∫_0^∞ e^{−wx} e^{−(s+qi)x/ri} dx
= Σ_{j≠i} ∫_0^∞ e^{−sy} e^{−wy ri} e^{−qi y} ∫_{x=ri y}^{∞} e^{−w(x − y ri)} φj(s, x − y ri) qij dx dy + ri/(s + qi + w ri)
= Σ_{j≠i} qij φ*_j(s, w) ∫_0^∞ e^{−y(s + qi + w ri)} dy + ri/(s + qi + w ri)
= Σ_{j≠i} (qij/(s + qi + w ri)) φ*_j(s, w) + ri/(s + qi + w ri).
Multiplying the equation by s + qi + w ri we get
(s + qi + w ri) φ*_i(s, w) = ri + Σ_{j≠i} qij φ*_j(s, w).
Using qi = −qii, and bringing the sum to the left hand side, we get
Σ_j φ*_j(s, w)(s δij − qij + w ri δij) = ri.
This can be written in matrix form as [sI + wR − Q] φ*(s, w) = r, where R, φ*, I, Q are as defined in the problem.
6.12. Following the same argument as above, we get
E(e^{−sT} | X(0) = i, S1 = y, X(y) = j) =
e^{−sx/ri}, if y > x/ri,
e^{−sy} φj(s, x), if y ≤ x/ri.
Unconditioning with respect to y and j, we get
φi(s, x) = Σ_{j≠i} ∫_0^{x/ri} e^{−sy} φj(s, x) qij e^{−qi y} dy + Σ_{j≠i} ∫_{x/ri}^{∞} e^{−sx/ri} qij e^{−qi y} dy
= Σ_{j≠i} (qij/(s + qi)) (1 − e^{−(s+qi)x/ri}) φj(s, x) + e^{−(s+qi)x/ri}.
6.13. Let G, α and M be as in Theorem 6.32. Let {X(t), t ≥ 0} be a CTMC on statespace {0, 1, 2, ..., k1 + k2} with rate matrix
Q = [0 0; b M],
where b = −Me is chosen so that the row sums of Q are zero. (Note that the first k1 elements of b are zero.) Let T be the first passage time into state 0. Assume that the distribution of X(0) is α. Then T is a phase type random variable with parameters (α, M). From the construction of M and α it follows that {X(t), t ≥ 0} starts in the set {1, 2, ..., k1}, spends a random amount of time, say T1, in that set, and then visits the set {k1 + 1, ..., k1 + k2}. It then spends a random amount of time, say T2, in that set of states, and then gets absorbed in state 0. It is clear that T1 is a phase type random variable with parameters (α1, M1). Furthermore, P(X(T1) = j + k1 | X(0) = i) = α2,j for all 1 ≤ i ≤ k1 and 1 ≤ j ≤ k2. Thus X(T1) − k1 is independent of {X(t), 0 ≤ t ≤ T1} and its distribution is given by α2. Thus T2 is a phase type random variable with parameters (α2, M2). However, by construction T = T1 + T2. Hence T1 + T2 is a phase type random variable.
6.14. Let Ti (i = 1, 2) be two independent phase type random variables with parameters (αi, Mi). Let α = [βα1, (1 − β)α2] and M be as in Theorem 6.32. Let T be a phase type random variable with parameters (α, M). Then, using Equation 6.41, we get
P(T > t) = α exp{Mt} e
= [βα1, (1 − β)α2] exp{[M1 0; 0 M2] t} e
= [βα1, (1 − β)α2] [exp{M1 t} 0; 0 exp{M2 t}] e
= βα1 exp{M1 t} e + (1 − β)α2 exp{M2 t} e = βP(T1 > t) + (1 − β)P(T2 > t).
Thus T is a mixture of T1 and T2. This proves the statement.
6.15. Let Ti be the first passage time into state 0 in a CTMC {Xi(t), t ≥ 0} with state space Si = {0, 1, ..., ki}. Assume that X1 and X2 are independent processes, so that T1 and T2 are independent random variables. Then {(X1(t), X2(t)), t ≥ 0} is a CTMC with statespace S = S1 × S2. Let T be the first passage time of the bivariate process into the set of states {(i, j) ∈ S : i = 0 or j = 0}. Then T is a phase type random variable, and T = min(T1, T2). This proves the statement.
6.16. Follows by summing the local balance equations over all i.
6.17. Follows along the same lines as Example 4.32.
6.18. Let Vj(t) be the time spent in state j over (0, t]. We have
Mi,j(t + h) = E(Vj(t + h) | X(0) = i)
= Σ_{k∈S} E(Vj(t + h) | X(h) = k, X(0) = i) P(X(h) = k | X(0) = i)
= E(Vj(t + h) | X(h) = i, X(0) = i) P(X(h) = i | X(0) = i) + Σ_{k≠i} E(Vj(t + h) | X(h) = k, X(0) = i) P(X(h) = k | X(0) = i)
= h δi,j + E(Vj(t) | X(0) = i)(1 + qi,i h) + Σ_{k≠i} E(Vj(t) | X(0) = k) qi,k h + o(h)
= h δi,j + Mi,j(t)(1 + qi,i h) + Σ_{k≠i} Mk,j(t) qi,k h + o(h).
Rearranging, we get
Mi,j(t + h) − Mi,j(t) = h δi,j + [QM(t)]_{i,j} h + o(h).
Dividing by h and letting h → 0, we get
(d/dt) Mi,j(t) = δi,j + [QM(t)]_{i,j},
which yields
(d/dt) M(t) = I + QM(t).
CHAPTER 7
Queueing Models
Modeling Exercises 7.1. Let X(t) be the number of customers waiting at the taxi stand at time t. This number goes up by 1 whenever a customer arrives, and goes down by one whenever there is at least one customer and a taxi arrives. Thus {X(t), t ≥ 0} is a birth and death process with birth rate λ in all states i ≥ 0, and death rates µ in all states i ≥ 1. Hence it is an M M 1 queue with arrival rate λ and service rate µ. 7.2. Let X(t) be the number of items in the warehouse at time t. {X(t), t ≥ 0} is a birth and death process with birth rates λi = λ,
i ≥ 0,
µi = µ,
i ≥ 1.
and death rates 7.3. Let X(t) be the number of customers in the bank at time t. {X(t), t ≥ 0} is a birth and death process with birth rates i ≥ 0,
λi = λ, and death rates
µ, 2µ, µi = 3µ,
for 1 ≤ i ≤ 3 for 4 ≤ i ≤ 9 for i ≥ 10.
7.4. Let X(t) be the number of customers in the system at time t. Service completion takes place at rate µ, but the customer departs with probability α. Hence the system moves from state i to i − 1 with rate αµ. Thus {X(t), t ≥ 0} is a birth and death process with birth rates λi = λ,
i ≥ 0,
and death rates µi = αµ,
i ≥ 1.
7.5. Let X(t) be the number of customers in the grocery store at time t. {X(t), t ≥ 167
168
QUEUEING MODELS
0} is a birth and death process with birth rates i ≥ 0,
λi = λ, and death rates µi =
µ1 , µ2 ,
for 1 ≤ i ≤ 3 for i ≥ 4.
7.6. Suppose a customer starts service at time 0 and finishes service at time S. This means the server is up at time 0 and S. Let W ∼ exp(µ) be amount of time it takes to service the customer if there are no failures, and U ∼ exp(θ) be the first uptime of the server. If U > W then S = min(W, U ). If U < W then the server fails at time U before the service is completed, then the server stays down for R ∼ exp(α) amount of time. From then on it again takes S amount of time to finish the service, due to memoryless property of the exponential distribution. Hence S ∼ U + R + S. Using this analysis, we get ˜ G(s) = E(e−sS ) =
θ+µ θ θ+µ α µ ˜ · + · · · G(s). θ+µ s+θ+µ θ+µ s+θ+µ s+α
This yields: µ(s + α) . (s + θ + µ)(s + α) − θα Since the server is up whenever a new customer enters service for the first time, it is clear that the service times are iid with LST φ(s). The arrival process is PP(λ). Hence this is an M G1 queue. ˜ G(s) =
7.7. The arrival process is a superposition of k independent Poisson processes, hence it is a P P (λ) with λ = λ1 + ... + λk . Since each customer is of type i with probability αi = λi /λ, the service times S are iid hyper exponential random variables with parameters n = k, αi and µi for i = 1, 2, ..., k. That is P(S ≤ x) =
k X
αi (1 − e−µi x ).
i=1
Hence this is an M/G/1 queue. 7.8. This is not a standard M/G/1 queue, since a customer arriving at an empty system may see the server idle, and hence will require a different amount of time to complete service as compared to customers who arrive at a nonempty system, since they definitely find the server up when it is their turn to start service for the first time. Thus this is a special case of the variation of the M/G/1 queue in Modeling Exercise 7.13. So let X(t) be the number of customers in the system at time t. Suppose a service has just completed at time zero, and X(0) = 0. Thus the server is up at time zero. Suppose the next arrival occurs at time T ∼ exp(λ). The probability that the server is up at this arrival instant, given T = t, is α θ + e−(α+θ)t . α+θ α+θ
QUEUEING MODELS
169
Thus the probability that a customer arriving at an empty system sees the server up is given by

  u = ∫_0^∞ λe^{−λt} [α/(α+θ) + (θ/(α+θ)) e^{−(α+θ)t}] dt = α/(α+θ) + θλ/[(α+θ)(α+θ+λ)].

Let G be as in the solution to Modeling Exercise 7.6. Let H be the time spent in service by a customer arriving at an empty system. Conditioning upon whether the server is up or down at the time of arrival, we get

  H̃(s) = u G̃(s) + (1 − u) [α/(s+α)] G̃(s).

Use this H and G in the solution to Modeling Exercise 7.13 to complete the solution.

7.9. This is a G/M/1 queue where the interarrival times are deterministic (equal to 1), and the service times are iid exp(µ).

7.10. Let X(t) be the set of busy servers at time t. {X(t), t ≥ 0} is a CTMC whose state space is the set of all subsets of M = {1, 2, ..., s}. For A ⊂ M, let m(A) be the smallest integer in M that is not in A. The usual triggering-event analysis yields the following rates:

  q(A, A − {i}) = µi,  A ⊆ M, i ∈ A,
  q(A, A ∪ {m(A)}) = λ,  A ⊂ M, A ≠ M.
7.11. Let Xn be the number of customers in the system after the nth departure. When Xn ≥ 1, the system behaves like a standard M/G/1 queue. If Xn = 0,

  P{Xn+1 = j | Xn = 0, Xn−1, ..., X0} = P{Xn+1 = j | Xn = 0}
  = Σ_{k=1}^{j+1} P{k arrivals during the vacation period, j+1−k arrivals during the next service time}
  = Σ_{k=1}^{j+1} vk · a_{j+1−k},

where

  vk = [ψ^{(k)}(0)/k!] / [1 − ψ(0)],  and  ai = ∫_0^∞ e^{−λt} (λt)^i/i! dG(t).

Thus {Xn, n ≥ 0} is a DTMC.

7.12. Under the random routing scheme, the Bernoulli splitting of Poisson processes implies that {Xi(t), t ≥ 0} is the queue length process in an M/M/1 queue with iid exp(λ/2) interarrival times and iid exp(µ) service times. The two queues are independent. Under the alternate routing scheme, {Xi(t), t ≥ 0} is the queue length process in a G/M/1 queue with interarrival times iid with common distribution Erl(2, λ) and iid exp(µ) service times. The two queues are dependent.

7.13. Let An be the number of customers that arrive during the nth service time. Then Equation 7.38 continues to hold. Furthermore, P(An+1 = i | Xn = k) = αi as in Equation 7.37, if k > 0. However, if k = 0, we get

  P(An+1 = i | Xn = 0) = ∫_0^∞ e^{−λt} (λt)^i/i! dH(t) = βi.
Hence {Xn , n ≥ 0} is a DTMC with transition probabilities as given in the Computational Exercise 4.24. 7.14. The {Xn , n ≥ 0} process is a special case of Modeling Exercise 7.23. Each arriving packet takes one unit of time to process. However, a packet arriving at an empty system has to wait until the next integer time, plus the unit of transmission time. Thus τG = 1, and ˜ G(s) = e−s . Suppose Xn = 0 and A ∼ exp(λ) is the time of next arrival. Then the service time of the next arrival is 1 − f (A) + 1, where f (A) is the fractional part of A. From Computational Exercise 5 of Chapter 5, we get E(f (A)) = 1/(1 − e−λ ) − 1/λ. Hence τH = 2 − 1/(1 − e−λ ) + 1/λ. We also have E(e−sf (A) ) =
1 − e−(λ+s) λ . · λ+s 1 − e−λ
Hence ˜ H(s) = E(e−s(2−f (A)) ) = e−2s
λ 1 − e−(λ+s) . · λ+s 1 − e−λ
7.15. The interarrival times to the overflow queue are iid, with common LST φ(s) satisfying

  φ(s) = [(λ+µ)/(s+λ+µ)] · [λ/(λ+µ) + (µ/(λ+µ)) · (λ/(s+λ)) · φ(s)].

Solving for φ(s) we get

  φ(s) = λ(λ+s) / [(λ+s)(λ+µ+s) − λµ].

Successive interarrival times to the overflow queue are iid, hence it is a G/M/∞ queue.

7.16. The interdeparture times from the first station are iid, each being a sum of two independent random variables: an exp(λ) plus an exp(µ1). Hence the interarrival times to the second queue are iid with LST

  G̃(s) = [λ/(s+λ)] · [µ1/(s+µ1)].

The service times at the second queue are iid exp(µ2). Hence {X2(t), t ≥ 0} is the queue length process of a G/M/1 queue.

7.17. {Xi(t), t ≥ 0} (i = 1, 2) is not the queue length process of an M/M/1 queue, since the service times depend on what is happening at queue j ≠ i. {X1(t) + X2(t), t ≥ 0} is the queue length process of an M/M/1 queue, since it increases by one with rate λ and decreases by one with rate µ = µ1 + µ2.

7.18. This is a Jackson network with n − 1 nodes indexed 2, 3, ..., n, with the following routing probabilities:

  r_{i,i+1} = αi, 2 ≤ i ≤ n − 1,
and the following exit probabilities: ri = 1 − αi , 2 ≤ i ≤ n − 1, rn = 1. New customers arrive at station 2 according to a PP(µα1 ). Service times are iid exp(µi ) at node i, (2 ≤ i ≤ n).
Computational Exercises

7.1. From Section 7.3.1, we get the following generating function of X, the steady-state number in the M/M/1 system:

  φ(z) = Σ_{j=0}^∞ pj z^j = (1−ρ) Σ_{j=0}^∞ (ρz)^j = (1−ρ)/(1−ρz).

Hence,

  L = E(X) = φ′(z)|_{z=1} = [(1−ρ)ρ/(1−ρz)²]|_{z=1} = ρ/(1−ρ),
  L(2) = E(X(X−1)) = φ″(z)|_{z=1} = [2(1−ρ)ρ²/(1−ρz)³]|_{z=1} = 2(ρ/(1−ρ))².

Hence,

  σ² = Var(X) = L(2) + L − L² = 2(ρ/(1−ρ))² + ρ/(1−ρ) − (ρ/(1−ρ))² = ρ/(1−ρ)².
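These moments are easy to check numerically from the geometric form pj = (1−ρ)ρ^j; a minimal sketch (ρ = 0.6 is an arbitrary test value):

```python
# Check L = rho/(1-rho) and Var(X) = rho/(1-rho)^2 for the M/M/1
# stationary distribution p_j = (1-rho)*rho**j; rho = 0.6 is arbitrary.
rho = 0.6
N = 2000  # truncation point; the geometric tail beyond N is negligible
p = [(1 - rho) * rho**j for j in range(N)]

L = sum(j * pj for j, pj in enumerate(p))
EX2 = sum(j * j * pj for j, pj in enumerate(p))
var = EX2 - L * L

assert abs(L - rho / (1 - rho)) < 1e-9
assert abs(var - rho / (1 - rho) ** 2) < 1e-9
```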
7.2. {X^q(t), t ≥ 0} is not a CTMC, since the sojourn time in state 0 is not exponentially distributed. When there are k > 0 customers in the system, k − 1 are in the queue; otherwise there are none in the queue. Hence

  L^q = 0·p0 + Σ_{k=1}^∞ (k−1)pk = Σ_{k=0}^∞ k pk − Σ_{k=1}^∞ pk = L − ρ = ρ²/(1−ρ).
7.3. Follow the analysis of Wn in Subsection 7.3.1. We have Xn* = 0 ⇔ Wn^q = 0. Hence, letting n → ∞, P(W^q = 0) = 1 − ρ. For j ≥ 1, we have

  P(Wn^q ≤ x | Xn* = j) = 1 − Σ_{r=0}^{j−1} e^{−µx} (µx)^r/r!.

Substituting, we get

  F^q(x) = Σ_{j=0}^∞ (1−ρ)ρ^j [1 − Σ_{r=0}^{j−1} e^{−µx} (µx)^r/r!],

which, after some algebra, reduces to

  F^q(x) = 1 − ρ e^{−(µ−λ)x},  x ≥ 0.

The expected value can be calculated as

  W^q = (1/µ) · ρ/(1−ρ).

This satisfies L^q = λW^q.

7.4. Let mi = E(T | X(0) = i).
First-step analysis yields

  m1 = 1/(λ+µ) + [λ/(λ+µ)] m2 + [µ/(λ+µ)] · 0.

Since the time to go from state i to i − 1 has a distribution independent of i, we can use the Markov property to get mi = i m1. Substituting in the above equation we get

  m1 = 1/(λ+µ) + [2λ/(λ+µ)] m1,

which gives m1 = 1/(µ−λ). Hence

  mi = i/(µ−λ).

7.5. Let ni = E(N | X(0) = i). First-step analysis yields

  n1 = µ/(λ+µ) + [λ/(λ+µ)] n2 + [µ/(λ+µ)] · 0.

Since the number of services needed to go from state i to i − 1 has a distribution independent of i, we can use the Markov property to get ni = i n1. Substituting in the above equation we get

  n1 = µ/(λ+µ) + [2λ/(λ+µ)] n1,

which gives n1 = µ/(µ−λ). Hence

  ni = iµ/(µ−λ) = i/(1−ρ).
7.6. Let λi = λpi. Then the queue in front of server i is an M/M/1 queue with arrival rate λi and service rate µi. Assume that µ1 + µ2 > λ. Then the total expected number of customers in the system is given by

  L = λ1/(µ1−λ1) + λ2/(µ2−λ2).

We are asked to minimize this subject to λ1 + λ2 = λ, λ1 < µ1, λ2 < µ2. This can be done either by substituting for λ2 in terms of λ1, or by using Lagrange multipliers. The final solution is

  λi = µi − [√µi/(√µ1 + √µ2)](µ1 + µ2 − λ).
7.7. An arriving customer sees i customers ahead of him with probability pi, due to PASTA. Hence the expected cost of a joining customer is Σ ci pi. The customer pays f to the system. Thus the net revenue from each customer in steady state is f − Σ ci pi. Since customers arrive at rate λ per unit time, the net revenue rate is λ(f − Σ ci pi). The result follows from the pi's given in Equation 7.17.

7.8. (a). We must have λαk < µk. Hence the feasible region is

  0 ≤ αk < µk/λ, 1 ≤ k ≤ K,  α1 + ... + αK = 1.

(b). The long-run cost per unit time for the entire system is given by

  c(α) = Σ_{k=1}^K hk λαk/(µk − λαk).

(c). We need to minimize c(α) subject to αk < µk/λ for 1 ≤ k ≤ K, and α1 + ... + αK = 1. We ignore the inequality constraints, and use Lagrange multipliers to solve the constrained optimization. The first-order conditions are

  ∂c(α)/∂αk = λ hk µk/(µk − λαk)² = a (a constant), 1 ≤ k ≤ K.

This yields

  αk = µk/λ − √(hk µk/(λa)).

Thus the constant a is chosen to satisfy

  Σ_{k=1}^K [µk/λ − √(hk µk/(λa))] = 1.

The solution is given by

  a = λ (Σ_{k=1}^K √(hk µk))² / (Σ_{k=1}^K µk − λ)².

It can be seen that with this a the resulting αk automatically satisfy αk < µk/λ.

7.9. From PASTA we see that in steady state, an arriving customer sees j customers in the system with probability pj, as given in Equation 7.19. An arriving customer enters if j < K. Hence the long-run fraction of customers who cannot enter the system is pK.

7.10. Let L be as in Equation 7.20. The expected waiting time of an arriving customer is L/λ.
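The stationarity conditions in part (c) can be verified numerically; a minimal sketch (the λ, µk, hk values are arbitrary test choices):

```python
import math

# Check the Lagrangian solution for routing probabilities alpha_k that
# minimize sum_k h_k * lam*alpha_k/(mu_k - lam*alpha_k); values arbitrary.
lam = 2.0
mu = [3.0, 3.0, 3.0]
h = [1.0, 2.0, 1.5]

S = sum(math.sqrt(hk * muk) for hk, muk in zip(h, mu))
a = lam * S**2 / (sum(mu) - lam) ** 2
alpha = [muk / lam - math.sqrt(hk * muk / (lam * a)) for hk, muk in zip(h, mu)]

# The alphas sum to one, and every marginal cost equals the multiplier a
assert abs(sum(alpha) - 1) < 1e-9
marg = [lam * hk * muk / (muk - lam * ak) ** 2 for hk, muk, ak in zip(h, mu, alpha)]
assert all(abs(m - a) < 1e-9 for m in marg)
assert all(0 <= ak < muk / lam for ak, muk in zip(alpha, mu))
```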
7.11. Let L be as in Equation 7.20. The rate at which customers enter the system is λa = λ(1 − pK). Hence Little's law gives W = L/λa.

7.12. Let X(t) be the number of customers in an M/M/1/K queue at time t, and suppose X(0) = i. Let T be the first time the queue becomes full or empty, and let mi = E(T | X(0) = i). Clearly m0 = mK = 0, and

  mi = 1/(λ+µ) + [λ/(λ+µ)] m_{i+1} + [µ/(λ+µ)] m_{i−1}, 1 ≤ i ≤ K − 1.

Using the result of Computational Exercise 3.17, we get

  mi = i/(µ−λ) − [K/(µ−λ)] · [1 − (µ/λ)^i]/[1 − (µ/λ)^K].
7.13. By PASTA, an arriving customer sees the system full with probability pK. Hence the probability that an arriving customer enters the system is 1 − pK. Thus the expected entrance fee paid by an arriving customer is a(1 − pK). Since customers arrive at rate λ, the rate at which the system collects entrance fees is λa(1 − pK). Using the results of Example 6.44, we see that the expected waiting cost per unit time is cL, where L is the expected number of customers in the system in steady state. Hence, the long-run net income rate to the system is given by λa(1 − pK) − cL.

7.14. 1. Let X(t) be the number of items in the warehouse at time t. We see that {X(t), t ≥ 0} is the queue length process in an M/M/1/K queue with arrival rate 10 per hour and service rate 8 per hour.

2. The limiting probabilities are given by Equation 7.19. Using ρ = λ/µ = 1.25 we get

  pj = 1.25^j · (.25)/(1.25^{K+1} − 1), 0 ≤ j ≤ K.

The production cost is $5 per item in states 0 through K − 1, the revenue rate is $10 per item in states 1 through K, and the holding cost rate is L per hour, where L is the expected number in the system in steady state, given by Equation 7.20. Hence the long-run net income rate per hour is

  c(K) = 10µ(1 − p0) − 5λ(1 − pK) − L.

3. Numerical calculations yield

  c = [21.7 28.36 31.3 32.7 33.32 33.49 33.37 33.05],

where c = [c(1), c(2), ..., c(8)]. Thus the optimum K is 6.
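The numbers in part 3 of 7.14 can be reproduced directly; a minimal sketch of the computation:

```python
# Reproduce c(K) for the warehouse model: M/M/1/K with lam=10, mu=8.
lam, mu, rho = 10.0, 8.0, 1.25

def c(K):
    # p_j = rho^j (rho-1)/(rho^(K+1)-1), 0 <= j <= K (Equation 7.19, rho != 1)
    p = [(rho - 1) * rho**j / (rho ** (K + 1) - 1) for j in range(K + 1)]
    L = sum(j * pj for j, pj in enumerate(p))
    return 10 * mu * (1 - p[0]) - 5 * lam * (1 - p[K]) - L

vals = [c(K) for K in range(1, 9)]
best_K = max(range(1, 9), key=c)

assert abs(vals[0] - 21.67) < 0.01   # c(1), matches the listed 21.7
assert best_K == 6                   # the optimum K
```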
7.15. By PASTA,

  π̂j = pj, j ≥ 0.

Then

  P(an arriving customer enters) = Σ_{j=0}^∞ P(an arriving customer enters | X̂n = j) P(X̂n = j)
  = Σ_{j=0}^∞ αj π̂j
  = Σ_{j=0}^∞ αj pj.
7.16. For an M/M/1 queue with balking we get

  ρ0 = 1,  ρn = λ^n Π_{i=0}^{n−1} αi / µ^n, n ≥ 1,

and

  pn = ρn / Σ_{j=0}^∞ ρj.

We want pn = e^{−λ/µ} λ^n/(n! µ^n). Hence we see that the following choice will work:

  αi = 1/(i+1), i ≥ 0.
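With this choice the stationary distribution is indeed Poisson(λ/µ), which is easy to confirm numerically; a minimal sketch (λ and µ are arbitrary):

```python
import math

# With balking probabilities alpha_i = 1/(i+1), rho_n = (lam/mu)^n / n!,
# so p_n is Poisson(lam/mu). lam, mu are arbitrary test values.
lam, mu = 3.0, 2.0
N = 200  # truncation; terms decay factorially

rho = [1.0]
for n in range(1, N):
    alpha = 1.0 / n  # alpha_{n-1} = 1/((n-1)+1)
    rho.append(rho[-1] * lam * alpha / mu)

total = sum(rho)
p = [r / total for r in rho]

for n in range(20):
    poisson = math.exp(-lam / mu) * (lam / mu) ** n / math.factorial(n)
    assert abs(p[n] - poisson) < 1e-12
```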
7.17. When there are i customers in the system, min(i, s) servers are busy. Hence the expected number of busy servers is given by

  Σ_{i=0}^s i pi + s Σ_{i=s+1}^∞ pi
  = p0 [ Σ_{i=0}^s i (λ/µ)^i/i! + s (s^s/s!) Σ_{i=s+1}^∞ (λ/sµ)^i ]
  = (λ/µ) p0 [ Σ_{i=0}^{s−1} (λ/µ)^i/i! + (s^s/s!)(λ/sµ)^s/(1 − λ/sµ) ]
  = λ/µ,

since

  p0 = [ Σ_{i=0}^{s−1} (λ/µ)^i/i! + (s^s/s!)(λ/sµ)^s/(1 − λ/sµ) ]^{−1}.
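The identity (expected number of busy servers = λ/µ) can be checked numerically for an M/M/s queue; a minimal sketch (the λ, µ, s values are arbitrary):

```python
# Expected number of busy servers in a stable M/M/s queue equals lam/mu.
lam, mu, s = 4.0, 1.5, 5  # arbitrary values with lam < s*mu
N = 400  # truncation of the geometric tail

r = lam / mu
fact = [1.0]
for i in range(1, N):
    fact.append(fact[-1] * i)

# rho_i = r^i/i! for i <= s, then geometric with ratio lam/(s*mu)
rho = [r**i / fact[i] for i in range(s + 1)]
for i in range(s + 1, N):
    rho.append(rho[-1] * lam / (s * mu))

total = sum(rho)
p = [x / total for x in rho]
busy = sum(min(i, s) * p[i] for i in range(N))

assert abs(busy - lam / mu) < 1e-9
```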
7.18. W = L/λ = (1/µ)[1 + ps/(s(1−ρ)²)].
7.19. When there are k > s customers in the system, k − s are in the queue; otherwise there are none in the queue. Hence

  L^q = Σ_{k=s}^∞ (k−s) pk = Σ_{k=s}^∞ (k−s) ps ρ^{k−s} = ps ρ/(1−ρ)².
Next, using PASTA, an arriving customer who finds k ≥ s in the system waits (k − s + 1)/(sµ) on average, so

  W^q = Σ_{k=s}^∞ [(k−s+1)/(sµ)] pk = (1/(sµ)) [L^q + ps/(1−ρ)] = ps/(sµ(1−ρ)²).

Thus L^q = λW^q is satisfied.

7.20. Let pj be as in Subsection 7.3.3. Now, if an arriving customer finds j customers ahead of him, the queueing time is zero if j < s; otherwise it is the sum of j − s + 1 iid exp(sµ) random variables. Hence we get

  P(Wn^q > x | Xn* = j) = Σ_{i=0}^{j−s} e^{−sµx} (sµx)^i/i!, j ≥ s.

Hence,

  lim_{n→∞} P(Wn^q > x) = lim_{n→∞} Σ_{j=0}^∞ P(Wn^q > x | Xn* = j) P(Xn* = j)
  = Σ_{j=s}^∞ ps ρ^{j−s} Σ_{i=0}^{j−s} e^{−sµx} (sµx)^i/i!
  = ps Σ_{i=0}^∞ e^{−sµx} [(sµx)^i/i!] Σ_{j=i+s}^∞ ρ^{j−s}
  = ps Σ_{i=0}^∞ e^{−sµx} [(sµx)^i/i!] ρ^i/(1−ρ)
  = [ps/(1−ρ)] e^{−(sµ−λ)x}.

Also, P(W^q = 0) = 1 − ps/(1−ρ). The waiting time W is the sum of W^q and an independent exp(µ) random variable.
7.21. Let system I be the M/M/s queue with arrival rate λ and service rate µ. Let ρ = λ/(sµ). Then we can show that

  L1q = Σ_{j=s+1}^∞ (j−s) pj = s^s ρ^{s+1} p0 / [s!(1−ρ)²],

where

  p0 = [ Σ_{j=0}^{s−1} s^j ρ^j/j! + s^s ρ^s/(s!(1−ρ)) ]^{−1}.

Let system II be an M/M/1 queue with arrival rate λ and service rate sµ. Let ρ be as above. Then we can show that

  L2q = ρ²/(1−ρ).

Then, for s ≥ 2, we have

  L2q/L1q = [ Σ_{j=0}^{s−1} s^j ρ^j/j! + s^s ρ^s/(s!(1−ρ)) ] / [ s^s ρ^{s−1}/(s!(1−ρ)) ]
  = [ Σ_{j=0}^{s−2} s^j ρ^j/j! + s^{s−1} ρ^{s−1}/(s−1)! ] / [ s^s ρ^{s−1}/(s!(1−ρ)) ] + ρ
  ≥ (1−ρ) + ρ = 1.

Here the last inequality follows by ignoring the sum in the numerator. Hence L2q ≥ L1q. However, the inequality is reversed if we compare the expected numbers in the system, namely L1 = L1q + sρ and L2 = L2q + ρ.

7.22. Using the notation of Subsection 7.3.5, we see that {X(t), t ≥ 0} is a birth and death process on {0, 1, 2} with birth parameters λ0 = 2λ, λ1 = λ, and death parameters µ1 = µ2 = µ. The limiting probabilities are given by, with ρ = λ/µ,

  p0 = 1/(1 + 2ρ + 2ρ²),  p1 = 2ρ/(1 + 2ρ + 2ρ²),  p2 = 2ρ²/(1 + 2ρ + 2ρ²).

The profit rates c(i) in state i are given by c(0) = −Cµ, c(1) = r − Cµ, c(2) = 2r. Hence the long-run profit rate is given by

  Σ_{i=0}^2 pi c(i) = [−Cµ + 2(r − Cµ)ρ + 4rρ²] / (1 + 2ρ + 2ρ²).
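The inequalities derived in 7.21 can be spot-checked numerically; a minimal sketch (the λ, µ, s values are arbitrary):

```python
import math

# 7.21 check: with equal total capacity, the M/M/1 queue with rate s*mu has
# the larger queue length L2q but the smaller number in system L2.
lam, mu, s = 4.0, 1.5, 5  # arbitrary stable example
rho = lam / (s * mu)

p0 = 1.0 / (sum((s * rho) ** j / math.factorial(j) for j in range(s))
            + (s * rho) ** s / (math.factorial(s) * (1 - rho)))
L1q = s**s * rho ** (s + 1) * p0 / (math.factorial(s) * (1 - rho) ** 2)
L2q = rho**2 / (1 - rho)

assert L2q >= L1q                   # queue lengths: single fast server worse
assert L1q + s * rho >= L2q + rho   # numbers in system: reversed
```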
7.23. See solution to Modeling Exercise 7.2. From Subsection 7.3.1, the system is stable if ρ < 1. When it is stable, its limiting distribution is given by pj = (1 − ρ)ρj ,
j ≥ 0.
Incoming demand is lost if the warehouse is empty, which happens with probability p0 = 1 − ρ. 7.24. Let X(t) be the number of customers in the system at time t. Then {X(t), t ≥ 0} is a birth and death process on state space {0, 1, ..., s} with birth rate λi = λ,
i = 0, 1, ..., s − 1, λs = 0, and death rates µi = iµ, i = 0, 1, ..., s. From Example 6.35, we get

  ρn = (λ/µ)^n/n!, n = 0, 1, ..., s.

Hence,

  pj = ρj / Σ_{n=0}^s ρn = [(λ/µ)^j/j!] / [Σ_{n=0}^s (λ/µ)^n/n!], j = 0, 1, ..., s.
This can be seen as the Poisson distribution truncated at s.

7.25. An arriving customer enters the system if it is not full. Due to PASTA, an arriving customer finds the system full with probability ps = B(s, ρ). Hence, in steady state, an arriving customer enters the system with probability 1 − B(s, ρ). Since the customers arrive at rate λ, the rate at which they enter is given by λ(1 − B(s, ρ)). We obtain the recurrence below:

  B(s, ρ) = (ρ^s/s!) / Σ_{j=0}^s (ρ^j/j!)
  = (ρ/s)(ρ^{s−1}/(s−1)!) / [ Σ_{j=0}^{s−1} (ρ^j/j!) + (ρ/s)(ρ^{s−1}/(s−1)!) ]
  = [(ρ/s) B(s−1, ρ)] / [1 + (ρ/s) B(s−1, ρ)]
  = ρ B(s−1, ρ) / [s + ρ B(s−1, ρ)].
7.26. See the solution to Modeling Exercise 7.3. Let ρ = λ/µ. Using the results of Example 6.35, we get

  ρi = ρ^i, for 0 ≤ i ≤ 3,
  ρi = ρ^i/2^{i−3}, for 4 ≤ i ≤ 9,
  ρi = ρ^i/(64 · 3^{i−9}), for i ≥ 10.

Now, if ρ/3 < 1, we have

  c = Σ_{i=0}^∞ ρi = (1 − ρ³)/(1−ρ) + ρ³(1 − ρ⁶/64)/(1 − ρ/2) + ρ⁹/[64(1 − ρ/3)];

else the sum diverges. Hence the condition of stability is ρ/3 < 1, or λ < 3µ. Assuming stability, we get the limiting distribution as

  pj = ρj/c, j ≥ 0.

All three tellers are active with probability

  Σ_{j=10}^∞ pj = ρ^{10}/[192 c (1 − ρ/3)].
7.27. From the solution to Modeling Exercise 7.4, we see that this is an M/M/1 queue with traffic intensity ρ = λ/(αµ). Hence, from Section 7.3.1, it is stable if ρ < 1. When it is stable, its limiting distribution is given by pj = (1 − ρ)ρ^j, j ≥ 0.

7.28. See solution to Modeling Exercise 7.5. Using Example 6.35, we get

  ρi = (λ/µ1)^i, for 0 ≤ i ≤ 3,
  ρi = (λ/µ1)³(λ/µ2)^{i−3}, for i ≥ 4.

Now, if λ < µ2, we have

  Σ_{i=0}^∞ ρi = [1 − (λ/µ1)³]/(1 − λ/µ1) + (λ/µ1)³/(1 − λ/µ2);

else the sum diverges. Hence the condition of stability is λ < µ2. Assuming the system is stable, the limiting distribution is

  pj = ρj / Σ_{i=0}^∞ ρi, j ≥ 0.
7.29. Note that the server is idle if the system is empty, and busy if there are N or more customers in it. Otherwise, the server may be idle or busy. Hence the state space is S = {(i, 0) : 0 ≤ i ≤ N − 1} ∪ {(i, 1) : i ≥ 1}. The positive transition rates are

  q_{(i,0),(i+1,0)} = λ, 0 ≤ i ≤ N − 2,
  q_{(N−1,0),(N,1)} = λ,
  q_{(i,1),(i+1,1)} = λ, i ≥ 1,
  q_{(i,1),(i−1,1)} = µ, i ≥ 2,
  q_{(1,1),(0,0)} = µ.

The balance equations are

  λ p_{0,0} = µ p_{1,1},
  λ p_{i,0} = λ p_{i−1,0}, 1 ≤ i ≤ N − 1,
  (λ + µ) p_{1,1} = µ p_{2,1},
  (λ + µ) p_{i,1} = λ p_{i−1,1} + µ p_{i+1,1}, i ≥ 2, i ≠ N,
  (λ + µ) p_{N,1} = λ p_{N−1,1} + µ p_{N+1,1} + λ p_{N−1,0}.

From the second set of equations, we get p_{i,0} = p_{0,0}, 1 ≤ i ≤ N − 1. Let ρ = λ/µ. Using the fourth set of equations for i > N, we get

  p_{i,1} = ρ^{i−N} p_{N,1}, i ≥ N.

Using the same equations for 1 ≤ i < N, we get

  p_{i,1} = ρ Σ_{k=0}^{i−1} ρ^k p_{0,0} = ρ (1 − ρ^i)/(1 − ρ) p_{0,0}, 1 ≤ i ≤ N.

Using the normalization equation, and assuming ρ < 1, we get

  p_{0,0} = (1 − ρ)/N.
This yields the solution given in the book. 7.30. The cost rates are c(i, 0) = ic, 0 ≤ i < N − 1,
c(N−1, 0) = (N−1)c + λf,  c(i, 1) = ic, i ≥ 1.

Hence the long-run cost rate C(N) is given by

  C(N) = Σ_{i=1}^{N−1} ci (p_{i,0} + p_{i,1}) + Σ_{n=0}^∞ c(N+n) p_{N+n,1} + λf p_{N−1,0}.

This can be simplified to get

  C(N) = c(N−1)/2 + cρ/(1−ρ) + λf(1−ρ)/N.

This is minimized at

  N* = √(2λf(1−ρ)/c).
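The continuous minimizer N* can be compared with the exact integer minimizer of C(N); a minimal sketch (the parameter values are arbitrary):

```python
import math

# C(N) = c(N-1)/2 + c*rho/(1-rho) + lam*f*(1-rho)/N is minimized near
# N* = sqrt(2*lam*f*(1-rho)/c). Arbitrary stable example values below.
lam, mu, f, c = 1.0, 2.0, 20.0, 1.0
rho = lam / mu

def C(N):
    return c * (N - 1) / 2 + c * rho / (1 - rho) + lam * f * (1 - rho) / N

N_star = math.sqrt(2 * lam * f * (1 - rho) / c)
best = min(range(1, 50), key=C)

# The integer optimum is one of the neighbors of the continuous optimum
assert best in (math.floor(N_star), math.ceil(N_star))
```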
7.31. See solution to Computational Exercise 6.25. Use PASTA. Customers of type 1 always enter. Hence the probability that an arriving or entering customer of type 1 sees j customers when he enters is pj for all j ≥ 0. This is also the probability that an arriving customer of type 2 sees j people in the system. However, a type 2 customer enters only if there are fewer than K customers in the system. Hence the probability that an entering type 2 customer sees j people is pj / Σ_{k=0}^{K−1} pk, 0 ≤ j ≤ K − 1.

7.32. See solution to Modeling Exercise 7.10. Specifically, for s = 3 the equations become:

  λ p(φ) = µ1 p(1) + µ2 p(2) + µ3 p(3),
  (λ + µ1) p(1) = µ2 p(12) + µ3 p(13) + λ p(φ),
  (λ + µ2) p(2) = µ1 p(12) + µ3 p(23),
  (λ + µ3) p(3) = µ1 p(13) + µ2 p(23),
  (λ + µ1 + µ2) p(12) = µ3 p(123) + λ p(2) + λ p(1),
  (λ + µ1 + µ3) p(13) = µ2 p(123) + λ p(3),
  (λ + µ2 + µ3) p(23) = µ1 p(123),
  (µ1 + µ2 + µ3) p(123) = λ p(23).
7.33. Equations 7.29 become:

  a1 = λ + p aN,  a_{i+1} = ai, 1 ≤ i ≤ N − 1.

The solution is:

  ai = λ/(1 − p), i = 1, ..., N.

The stability condition is λ/(1 − p) < min(µ1, ..., µN).

(i) Expected number of customers in the network in steady state = Σ_{i=1}^N λ/[(1−p)µi − λ].

(ii) Fraction of time the network is completely empty in steady state = Π_{i=1}^N [1 − λ/((1−p)µi)].
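Both expressions can be cross-checked against the definitions ρi = ai/µi; a minimal sketch (the λ, p, µi values are arbitrary):

```python
# Tandem network with feedback probability p from the last node to the first:
# every node sees the same effective arrival rate a = lam/(1-p).
lam, p = 2.0, 0.25
mu = [4.0, 5.0, 3.5]  # arbitrary rates satisfying stability

a = lam / (1 - p)
assert a < min(mu)  # stability condition

rho = [a / m for m in mu]
L = sum(r / (1 - r) for r in rho)   # expected number in network
empty = 1.0
for r in rho:
    empty *= 1 - r                  # fraction of time network is empty

# Same quantities via the closed-form expressions in (i) and (ii)
L2 = sum(lam / ((1 - p) * m - lam) for m in mu)
empty2 = 1.0
for m in mu:
    empty2 *= 1 - lam / ((1 - p) * m)

assert abs(L - L2) < 1e-12 and abs(empty - empty2) < 1e-12
```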
7.34. Equations 7.29 become

  a1 = λ + Σ_{i=1}^N pi ai,  a_{i+1} = (1 − pi) ai, 1 ≤ i ≤ N − 1.

The solution is:

  ai = λ Π_{j=1}^{i−1} (1 − pj) / [1 − Σ_{k=1}^N pk Π_{j=1}^{k−1} (1 − pj)] = λ / Π_{j=i}^N (1 − pj), i = 1, ..., N.

The stability condition is ai < µi, i = 1, ..., N. Let ρi = ai/µi; then

(i) expected number of customers in the network in steady state = Σ_{i=1}^N ρi/(1 − ρi);

(ii) fraction of time the network is completely empty in steady state = Π_{i=1}^N (1 − ρi).
7.35. Equations 7.29 become

  a1 = λ + a_{N+1},  a_{i+1} = (1 − pi) ai, 1 ≤ i ≤ N − 1,  a_{N+1} = Σ_{i=1}^N pi ai.

By comparing with the equations in Computational Exercise 7.34, we see that the solution ai, 1 ≤ i ≤ N, is the same as given there. Furthermore, a_{N+1} is given by the last equation above. The stability condition is ai < µi, i = 1, ..., N + 1. Let ρi = ai/µi; then

(i) expected number of customers in the network in steady state = Σ_{i=1}^{N+1} ρi/(1 − ρi);

(ii) fraction of time the network is completely empty in steady state = Π_{i=1}^{N+1} (1 − ρi).
7.36. We model this as a Jackson network with the following parameters: N = 35, λi = 60,000/(12 · 35) = 142.857 per hour. This assumes that each arrival is equally likely to join any ride at first. We assume that after completing a ride the visitor joins one of the rides (equally chosen among 35, since the visitor may go on the same ride multiple times) with probability .8, and leaves the fair with probability .2. This implies that he rides an additional 4 rides on average, making the total number of rides taken by a visitor equal to five, as desired. Thus we have the following routing matrix:

  r_{ij} = .8/35, 1 ≤ i, j ≤ 35,  and  ri = .2, 1 ≤ i ≤ 35.

We model each ride as a single-server queue with service rate 30 per minute, or 30·60 = 1800 per hour. Thus, we use

  si = 1,  µi = 1800 per hour, 1 ≤ i ≤ 35.

The traffic equations yield

  aj = λj + Σ_{i=1}^{35} r_{ij} ai.

The symmetry implies that the effective arrival rate at each ride is the same, i.e., ai = a for all 1 ≤ i ≤ 35. Hence we get a = 142.857 + .8a, which gives a = 714.286. Since ai = a < µi for all i, this network is stable. The average queue length at a typical ride is ai/(µi − ai) ≈ .66. Thus this is a very lightly congested fair!

7.37. This is a special case of the model described in Subsection 7.4.2. The network has two nodes in series. The other parameters are

  µ1(n) = µ if 1 ≤ n ≤ 5, and 2µ if n ≥ 6,
  µ2(n) = µ if 1 ≤ n ≤ 2, 2µ if 3 ≤ n ≤ 10, and 3µ if n ≥ 11,
  λ(n) = λ/(n+1),  p_{1,2} = 1, all other p_{i,j} = 0,  u1 = 1, u2 = 0.

Using Equation 7.122, we get a1 = a2 = 1. Hence

  φ1(n) = (1/µ)^n if 1 ≤ n ≤ 5, and (1/µ)^5 (1/2µ)^{n−5} if n ≥ 6,
  φ2(n) = (1/µ)^n if 1 ≤ n ≤ 2, (1/µ)² (1/2µ)^{n−2} if 3 ≤ n ≤ 10, and (1/µ)² (1/2µ)^8 (1/3µ)^{n−10} if n ≥ 11.

From Theorem 7.9, we get
  p(n1, n2) = c φ1(n1) φ2(n2) λ^n/n!,

where n = n1 + n2, and c is a normalization constant. This queue is always stable, since φi(n) ≤ (1/µ)^n implies

  Σ_{n1=0}^∞ Σ_{n2=0}^∞ φ1(n1) φ2(n2) λ^{n1+n2}/(n1+n2)! = Σ_{n=0}^∞ (λ^n/n!) Σ_{n1+n2=n} φ1(n1) φ2(n2) ≤ Σ_{n=0}^∞ (λ/µ)^n (n+1)/n! < ∞.
7.38. A call in progress is handed over with probability

  α = [θ/(θ−µ)] · [e^{−µT} − e^{−θT}]/[1 − e^{−θT}] = .7087.

With probability r1 = 1 − α = .2913, the call completes while in stretch 1, and hence is not handed over. Similar calculations show that the calls enter node 4 according to a PP(λ*(1−p)p) and node 5 with rate λ*(1−p)²p.

1. The service times there have the same distribution G. The routing probabilities are r_{2,5} = α, r2 = 1 − α.

2. Calls from node 3 get handed over to node 5 with probability β = e^{−µT} = .5488, and they leave the network with probability 1 − β = .4512. Thus we have a five-node network with routing matrix

  R =
  [ 0 0 α 0 0 ]
  [ 0 0 0 0 α ]
  [ 0 0 0 0 β ]
  [ 0 0 0 0 0 ]
  [ 0 0 0 0 0 ].

The other parameters of the network are

  r = [1−α, 1−α, 1−β, 1, 1]′,  λ = λ*[p, (1−p)p, 0, (1−p)p, 0].

From this we get

  a = λ*[p, (1−p)p, αp, α(1−p)p, αβp + α(1−p)p] = [41.9283 12.6286 29.7141 8.9497 25.2572].

3. Service times at nodes 1, 2, and 4 are iid with common cdf G; the service times at nodes 3 and 5 are iid min(exp(µ), T), whose mean is m0 = .0752 hrs, or 4.5119 minutes. Since all nodes are infinite-server queues, the expected number of calls handled by the ith station in steady state is ai E(Si), where Si is the service time at station i. Numerical calculations yield:

  L = [2.0357 0.9496 1.4427 0.6730 1.2263].
4. The expected number of calls handed over from tower 1 to tower 2 per unit time is λ*pβ = 23.0108, and from tower 2 to 3 it is λ*(1−p)pα + λ*αpβ = 25.2572.

7.39. 1. Infinite waiting space, independence of arrival streams and service times, independent Bernoulli routing.

2. a1 = λ1 + aN pN, ai = λi + a_{i−1} p_{i−1}, 2 ≤ i ≤ N. Solving recursively, we get

  aN = Σ_{i=1}^N λi Π_{j=i}^{N−1} pj / [1 − Π_{j=1}^N pj],

where an empty product is 1. The other ai's are obtained by permuting the indices (1, 2, ..., N) with (i+1, i+2, ..., N, 1, 2, ..., i).

3. Condition of stability: ρi = ai/µi < 1, 1 ≤ i ≤ N.

4. Σ_{i=1}^N ρi/(1 − ρi).

7.40. Let w_{i,j} be the expected number of visits to state j over the infinite time horizon starting from state i. Using first-step analysis, we see that W = [w_{i,j}] satisfies the matrix equation

  W = I + RW.

Now, if I − R is invertible, this has a unique finite solution W = (I − R)^{−1}. Thus the expected amount of time spent in the system by any customer is finite. Hence the probability that a customer stays in the system forever must be zero.

7.41.
(a) Without loss of generality let i = 1. Then

  lim_{t→∞} P{X1(t) = j} = ρ1^j G(K) Σ_{x2+...+xN = K−j} Π_{i=2}^N ρi^{xi}.

Hence

  lim_{t→∞} P{X1(t) ≥ j} = G(K) Σ_{l=j}^K ρ1^l Σ_{x2+...+xN = K−l} Π_{i=2}^N ρi^{xi}
  = ρ1^j G(K) Σ_{t=0}^{K−j} ρ1^t Σ_{x2+...+xN = K−j−t} Π_{i=2}^N ρi^{xi}
  = ρ1^j G(K) Σ_{x1+...+xN = K−j} Π_{i=1}^N ρi^{xi}
  = ρ1^j G(K)/G(K−j).

(b) Li = Σ_{j=1}^K lim_{t→∞} P{Xi(t) ≥ j} = Σ_{j=1}^K ρi^j G(K)/G(K−j).

7.42. In this case µi(j) = µi for j ≥ 1. Hence Equation 7.34 reduces to

  φi(n) = (πi/µi)^n = ρi^n.

Hence the limiting distribution is given by

  p(x) = Π_{i=1}^N ρi^{xi} / H(K),

where

  H(K) = Σ_{x∈A(K)} Π_{i=1}^N ρi^{xi},  A(K) = {x = (x1, x2, ..., xN) : xi ≥ 0, Σ_{i=1}^N xi = K}.

Now,

  1/(1 − ρi z) = Σ_{k=0}^∞ ρi^k z^k.

Hence

  Π_{i=1}^N 1/(1 − ρi z) = Σ_{K=0}^∞ [Σ_{x∈A(K)} Π_{i=1}^N ρi^{xi}] z^K = Σ_{K=0}^∞ H(K) z^K = H̃(z).

Now,

  Bj(z) = Π_{i=1}^j 1/(1 − ρi z) = B_{j−1}(z)/(1 − ρj z).

Hence Bj(z) = ρj z Bj(z) + B_{j−1}(z). Then using the series expansion of Bj(z) and equating the coefficients of z^n, we get

  bj(n) = ρj bj(n−1) + b_{j−1}(n).

The boundary conditions are

  b1(n) = ρ1^n,  bj(0) = 1.
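This convolution recursion (Buzen's algorithm for the normalizing constant of a closed product-form network) is easy to implement and check against brute-force enumeration; a minimal sketch (the ρi values are illustrative):

```python
from itertools import product

# b_j(n) = rho_j * b_j(n-1) + b_{j-1}(n), with b_1(n) = rho_1^n, b_j(0) = 1,
# gives the normalizing constant H(K) = b_N(K).
rho = [0.5, 0.3, 0.2]  # illustrative utilizations
K = 4
N = len(rho)

b = [rho[0] ** n for n in range(K + 1)]  # b_1(n)
for j in range(1, N):
    new = [1.0]  # b_j(0) = 1
    for n in range(1, K + 1):
        new.append(rho[j] * new[-1] + b[n])  # b_j(n)
    b = new
H = b[K]

# Brute force: sum of prod rho_i^{x_i} over x >= 0 with x_1+x_2+x_3 = K
brute = sum(
    rho[0] ** x[0] * rho[1] ** x[1] * rho[2] ** x[2]
    for x in product(range(K + 1), repeat=3) if sum(x) == K
)
assert abs(H - brute) < 1e-12
```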
Thus we can recursively compute H(K) = bN(K).

7.43. We can model this problem as a closed tandem queueing network with 4 nodes, with µ2 = µ4 = 2 and state-dependent rates µi(xi) = xi/120 for i = 1, 3, where xi is the number of customers at node i and x1 + x2 + x3 + x4 = 150. The expected number of customers at node 2 (or 4), the desired answer, is given by

  [ Σ_{x1=0}^{150} Σ_{x2=0}^{150−x1} Σ_{x3=0}^{150−x1−x2} x2 · 0.125^{x2+x4} 30^{x1+x3}/(x1! x3!) ] / [ Σ_{x1=0}^{150} Σ_{x2=0}^{150−x1} Σ_{x3=0}^{150−x1−x2} 0.125^{x2+x4} 30^{x1+x3}/(x1! x3!) ],
where x4 = 150 − x1 − x2 − x3. Using a computer to evaluate the above sum yields the answer .4486.

7.44. From Equation 7.34, we see that µi(n)φi(n) = πi φi(n−1) for n ≥ 1. Using the fact that µi(0) = 0, we get

  TH(i) = Σ_{n=0}^K µi(n) lim_{t→∞} P(Xi(t) = n)
  = Σ_{n=1}^K µi(n) Σ_{x∈A(K): xi=n} G(K) Π_k φk(xk)
  = Σ_{n=1}^K πi Σ_{x∈A(K−1): xi=n−1} G(K) Π_k φk(xk)
  = πi G(K) Σ_{x∈A(K−1)} Π_k φk(xk)
  = πi G(K)/G(K−1).

Here we have used the definition of G(K−1) from Theorem 7.10 to derive the last equality.

7.45. See solution to Modeling Exercise 7.7. We have
  E(S) = (1/λ) Σ_{i=1}^k λi/µi,  and  E(S²) = (2/λ) Σ_{i=1}^k λi/µi².

The condition of stability is

  ρ = λE(S) = Σ_{i=1}^k λi/µi < 1.

For a stable M/G/1 queue, the expected number in the system is given by Equation 7.153. Substituting, we get

  L = ρ + [λ/(1−ρ)] Σ_{i=1}^k λi/µi².

7.46. See solution to Modeling Exercise 7.6, where we computed the LST G̃(s) of the service time. To compute the stability condition, we need to compute τ = E(S).
Following the same conditioning analysis, we get

  τ = [µ/(θ+µ)] · 1/(θ+µ) + [θ/(θ+µ)] · [1/(θ+µ) + 1/α + τ].

Hence

  τ = (1/µ)(1 + θ/α).

Thus the queue is stable if ρ = λτ = (λ/µ)(1 + θ/α) < 1. When it is stable, the generating function of the limiting distribution of the number in the system is given by Equation 7.39.
7.47. From Theorems 6.14 and 6.16, we get the mean and second moment of the service time as

  E(S) = −αM^{−1}e = .5926,  E(S²) = α(M^{−1})²e = .2963.

Using λ = 1, we get ρ = λE(S) = .5926. Hence, substituting in Theorem 7.13, we get

  L = ρ + λ²E(S²)/(2(1−ρ)) = .9562.

7.48. Use Theorem 7.13.

(a) exp(µ): τ = 1/µ, σ² = 1/µ², ρ = λ/µ. Hence L = ρ + ρ²/(1−ρ).

(b) U[0, 2/µ]: τ = 1/µ, σ² = 1/(3µ²), ρ = λ/µ. Hence L = ρ + (2/3)·ρ²/(1−ρ).

(c) Constant 1/µ: τ = 1/µ, σ² = 0, ρ = λ/µ. Hence L = ρ + (1/2)·ρ²/(1−ρ).

(d) Erlang(k, kµ): τ = 1/µ, σ² = 1/(kµ²), ρ = λ/µ. Hence L = ρ + [(k+1)/(2k)]·ρ²/(1−ρ).
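All four cases are instances of the Pollaczek-Khinchine formula L = ρ + λ²(σ² + τ²)/(2(1−ρ)); a minimal numeric check (the λ, µ values are arbitrary):

```python
# Pollaczek-Khinchine mean: L = rho + lam^2*(sigma2 + tau^2)/(2*(1-rho)).
lam, mu = 2.0, 3.0
rho = lam / mu

def pk(sigma2):
    tau = 1 / mu
    return rho + lam**2 * (sigma2 + tau**2) / (2 * (1 - rho))

base = rho**2 / (1 - rho)
assert abs(pk(1 / mu**2) - (rho + base)) < 1e-12                    # (a) exp
assert abs(pk(1 / (3 * mu**2)) - (rho + 2 / 3 * base)) < 1e-12      # (b) uniform
assert abs(pk(0.0) - (rho + base / 2)) < 1e-12                      # (c) constant
k = 4
assert abs(pk(1 / (k * mu**2)) - (rho + (k + 1) / (2 * k) * base)) < 1e-12  # (d) Erlang
```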
Thus the exponential distribution produces the largest, while the deterministic produces the smallest, L.

7.49. See solution to Modeling Exercise 7.8. So let X(t) be the number of customers in the system at time t, and Y(t) be the state of the server at time t (1 if up, 0 if down). Then {(X(t), Y(t)), t ≥ 0} is a CTMC. Let

  pn = lim_{t→∞} P(X(t) = n, Y(t) = 1), n ≥ 0,
  qn = lim_{t→∞} P(X(t) = n, Y(t) = 0), n ≥ 0.

The balance equations are

  (λ + θ) p0 = µ p1 + α q0,
  (λ + µ + θ) pn = λ p_{n−1} + µ p_{n+1} + α qn, n ≥ 1,
  (λ + α) q0 = θ p0,
  (λ + α) qn = λ q_{n−1} + θ pn, n ≥ 1.

Writing the first equation as (λ + µ + θ) p0 = µ p0 + µ p1 + α q0, multiplying the nth equation by z^n, summing over n ≥ 0, and setting

  P̃(z) = Σ_{n=0}^∞ pn z^n,  Q̃(z) = Σ_{n=0}^∞ qn z^n,

we get

  (λ + µ + θ) P̃(z) = µ(1 − 1/z) p0 + α Q̃(z) + λz P̃(z) + (µ/z) P̃(z),
  (λ + α) Q̃(z) = θ P̃(z) + λz Q̃(z),

which can be rewritten as

  [λ(1−z) + µ(1 − 1/z) + θ] P̃(z) − α Q̃(z) = µ(1 − 1/z) p0,
  [λ(1−z) + α] Q̃(z) = θ P̃(z).

From the last equation we get P̃(z) in terms of Q̃(z). Substituting in the previous equation and simplifying, we get

  Q̃(z) = µ(1 − 1/z) p0 / { [λ(1−z) + µ(1 − 1/z) + θ][λ(1−z) + α]/θ − α },
  P̃(z) = [α/θ + (λ/θ)(1−z)] Q̃(z).

Using P̃(1) + Q̃(1) = 1 we get

  p0 = α/(α+θ) − λ/µ.
Thus the queue is stable if p0 > 0, which yields the result given in the book.

7.50. See solution to Modeling Exercise 7.11. Let bj = Σ_{k=1}^{j+1} vk · a_{j+1−k}. Then

  πj = π0 bj + Σ_{i=1}^{j+1} πi a_{j+1−i},

so that

  φ(z) = Σ_{j=0}^∞ z^j πj = π0 [(ψ(z) − ψ(0))/(1 − ψ(0))] G̃(λ−λz)/z + [φ(z) − π0] G̃(λ−λz)/z.

Solving for φ(z),

  φ(z) = π0 (ψ(z) − 1) G̃(λ−λz) / [(1 − ψ(0))(z − G̃(λ−λz))].

Then Σ πi = 1 yields

  π0 = (1 − ρ)(1 − ψ(0))/m.

Thus, if ρ = λτ < 1, the DTMC is positive recurrent, and

  φ(z) = [(1−ρ)/m] · G̃(λ−λz)(ψ(z) − 1)/[z − G̃(λ−λz)].

7.51. This follows from Theorem 7.2 and PASTA. Using φ(z) in the solution to Computational Exercise 7.50, we get lim_{t→∞} E[X(t)] = lim_{z→1} φ′(z). Using L'Hôpital's rule and algebra yields

  L = ρ + (1/2)[ρ²/(1−ρ)](1 + σ²/τ²) + m^{(2)}/(2m).

7.52. This is a special case of Modeling Exercise 7.11, where the server returns from vacation when there are N customers waiting. Thus

  ψ(z) = z^N,  m = N.

The result is obtained by substituting the above in the results of Computational Exercises 7.50 and 7.51.

7.53. Let A be the interarrival time. The first system consists of two M/M/1 queues with E(A) = 2/λ. The second system consists of two G/M/1 queues with E(A) = 2/λ, where G is gamma(2, λ). Both systems are stable if λ < 2µ. For a G/M/1 system we have

  L = ρ/(1−α),

where α = G̃(µ(1−α)). By Example 7.15 we have α1 = λ/(2µ) for the first system. For the second system we have

  α2 = λ²/[µ(1−α2) + λ]².

Solving for α2 and discarding the solution α2 = 1, we get

  α2 = [2λ + µ − √(µ² + 4λµ)]/(2µ).

Since α2 < α1 (which implies L2 < L1), the second system is better.

7.54. This is a G/M/1 queue where the interarrival times are deterministic (equal to 1). The system is stable if µ > 1. From Theorem 7.16 the limiting distribution is given by p0 = 1 − ρ and pj = ρ(1−α)α^{j−1} for j ≥ 1, where ρ = 1/µ and α solves α = e^{−µ(1−α)}. The mean number in the system is given by 1/(µ(1−α)).

7.55. See Computational Exercise 4.24. We have Σ k αk = λτG. Hence from the result of Computational Exercise 4.24, we see that the DTMC
is positive recurrent if λτG < 1. We have Σ k βk = λτH. Hence from the result of Computational Exercise 4.24, we see that the generating function is as given in the problem.

7.56. Follows by evaluating the first derivative of φ(z), obtained in the solution of Computational Exercise 7.55, at z = 1. This involves applying L'Hôpital's rule twice.

7.57. The {Xn, n ≥ 0} process is a special case of Modeling Exercise 7.23. Each arriving packet takes one unit of time to process. However, a packet arriving at an empty system has to wait until the next integer time, plus the unit of transmission time. Thus τG = 1, and G̃(s) = e^{−s}. Suppose Xn = 0 and A ∼ exp(λ) is the time of the next arrival. Then the service time of the next arrival is 1 − f(A) + 1, where f(A) is the fractional part of A. From Computational Exercise 5 of Chapter 5, we get E(f(A)) = 1/(1 − e^{−λ}) − 1/λ. Hence τH = 2 − 1/(1 − e^{−λ}) + 1/λ. We also have

  E(e^{−s f(A)}) = [λ/(λ+s)] · [1 − e^{−(λ+s)}]/[1 − e^{−λ}].

Hence

  H̃(s) = E(e^{−s(2−f(A))}) = e^{−2s} · [λ/(λ+s)] · [1 − e^{−(λ+s)}]/[1 − e^{−λ}].

The result now follows from the formula in Computational Exercise 7.55.
∞ X
ai z i = e−λ(1−z) .
i=0
The queue is stable if ρ < 1 and the generating function of the limiting distribution is given by Equation 7.150.

7.59. Xn is the value of X(t) just after its nth downward jump. Hence the limiting distribution of Xn is the same as that of X(t) just before an arrival. Since the arrival process is Poisson, PASTA implies that this is also the limiting distribution of X(t). The same argument does not hold for X̄n , since it is not an observation after a downward jump.

7.60. 1. Let {Si , i ≥ 1} be a sequence of iid random variables with mean τ and variance σ^2. Independent of this, let N be a geometric random variable with parameter p (mean 1/p, variance q/p^2, where q = 1 − p). Then the service time of a single customer is

S = Σ_{i=1}^N Si .

Hence

E(S) = E(Si )E(N) = τ/p,
Var(S) = E(N)Var(Si ) + (E(Si ))^2 Var(N) = σ^2/p + τ^2 q/p^2.

2. Let X(t) be the number of customers in the system at time t. {X(t), t ≥ 0} is the queue length process of a standard M/G/1 queue with iid service times with mean τ/p and arrival rate λ. Hence the condition of stability is ρ = λτ/p < 1.
3. Since X(t) jumps up and down by 1 at a time, we see that in steady state the number as seen by a departure from the system is the same as the number as seen by an arrival, which (due to PASTA) is the same as the number in steady state at an arbitrary time point. Hence the required answer is

L = ρ + (1/2) · λ^2 E(S^2)/(1 − ρ),
where E(S^2) = Var(S) + (τ/p)^2.
4. Let Xn be the number of customers in the system after the nth service completion (which may or may not be a departure). Let An be the number of arrivals during the nth service time. Then, if Xn > 0, Xn+1 = Xn + An+1 with probability q and Xn+1 = Xn + An+1 − 1 with probability p. If Xn = 0, Xn+1 = An+1 + 1 with probability q, and Xn+1 = An+1 with probability p. Hence,

E(Xn+1 ) = E(Xn ) + E(An+1 ) − p(1 − P(Xn = 0)) + qP(Xn = 0).

Letting n → ∞ and assuming limits exist, we get P(Xn = 0) → p − λτ = p(1 − ρ). Similarly

E(X^2_{n+1}) = E((Xn + An+1 )^2 ; Xn > 0)q + E((Xn + An+1 − 1)^2 ; Xn > 0)p + E((An+1 + 1)^2 ; Xn = 0)q + E((An+1 )^2 ; Xn = 0)p.

Simplifying and letting n → ∞ we get

lim_{n→∞} E(Xn ) = λτ + (2pq + λ^2(σ^2 + τ^2))/(2(p − λτ)).
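The moment formulas in part 1 can be sanity-checked by simulation. The following is a minimal sketch (not from the text), assuming exponential phase times so that τ and σ² are both determined by the phase rate; the values p = 0.4 and rate 1 (so τ = σ² = 1) are illustrative choices.

```python
import random

def simulate_service(p, rate, n=100_000, seed=42):
    """Monte Carlo estimate of E(S) and Var(S) for S = S_1 + ... + S_N,
    where N ~ geometric(p) on {1, 2, ...} and phases are iid exp(rate)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        s = rng.expovariate(rate)   # the first phase is always served
        while rng.random() > p:     # with prob q = 1 - p, serve another phase
            s += rng.expovariate(rate)
        samples.append(s)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

tau, sigma2, p = 1.0, 1.0, 0.4      # exp(1) phases: tau = sigma^2 = 1
mean, var = simulate_service(p, 1.0)
q = 1 - p
assert abs(mean - tau / p) < 0.06                         # E(S) = tau/p
assert abs(var - (sigma2 / p + tau**2 * q / p**2)) < 0.5  # Var(S) formula
```

With these values E(S) = 2.5 and Var(S) = 6.25, which the simulated moments reproduce within Monte Carlo error.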
7.61. The interarrival times are iid hyperexponential with mean λ^{−1} = r/λ1 + (1 − r)/λ2 . The condition of stability is λ < µ. Let α be a solution to

α = r λ1/(λ1 + µ(1 − α)) + (1 − r) λ2/(λ2 + µ(1 − α)).

This is a cubic in α. We know that α = 1 is a solution. Factoring out (1 − α), we get the following quadratic:

µ^2 α^2 − αµ(µ + λ1 + λ2 ) + µ(rλ1 + (1 − r)λ2 ) + λ1 λ2 = 0.

This has two roots. The one in (0, 1) is given by

α = [µ(µ + λ1 + λ2 ) − √(µ^2(µ + λ1 + λ2 )^2 − 4µ^2(µ(rλ1 + (1 − r)λ2 ) + λ1 λ2 ))]/(2µ^2).

From Theorem 7.17, we get the limiting distribution of {X(t), t ≥ 0} as p0 = 1 − λ/µ, pj = (λ/µ)(1 − α)α^{j−1}, j ≥ 1.

7.62. See any queueing theory book for more details on the G/M/c queue. Here we give the final result. {Xn∗ , n ≥ 0} is a DTMC with the transition probability matrix

     [ β0  δ0  0   0   0   ··· ]
     [ β1  δ1  α0  0   0   ··· ]
P =  [ β2  δ2  α1  α0  0   ··· ]
     [ β3  δ3  α2  α1  α0  ··· ]
     [ ⋮   ⋮   ⋮   ⋮   ⋮       ]

where

αi = ∫_0^∞ e^{−2µt} (2µt)^i/i! dG(t), i ≥ 0,

δi = 2^i ∫_0^∞ e^{−µt} (1 − e^{−µt} Σ_{r=0}^{i−1} (µt)^r/r!) dG(t), i ≥ 0,

βi = 1 − δi − Σ_{j=0}^{i−1} αj , i ≥ 0.
The limiting distribution πj∗ satisfies

πj∗ = Σ_{i=j−1}^∞ αi−j+1 πi∗ , j ≥ 2.

Hence the solution is given by

πj∗ = Cα^j , j ≥ 1,

where α is the unique solution in (0, 1) to

α = G̃(2µ − 2µα),

and C is a constant. To find C and π0∗ we use the normalizing equation

1 = π0∗ + Σ_{j=1}^∞ πj∗ = π0∗ + Cα/(1 − α),

and

π1∗ = Σ_{i=0}^∞ δi πi∗ .

The right hand side can be simplified as

π1∗ = Cα = δ0 π0∗ + (2αC/(2α − 1))(α − G̃(µ)).
These two equations can be solved to obtain C and π0∗ .

7.63. 1. Using the φ(s) from the solution of Modeling Exercise 7.27, we get the mean interarrival time as

a = (1 + ρ)/(λρ),

with ρ = λ/µ. Hence the stability condition (from Theorem 7.16) is 1/(aθ) < 1, or λρ/(1 + ρ) < θ.
2. Solving α = φ(θ(1 − α)) we get

α = (1/2)((2λ + µ + θ) − √((2λ + µ + θ)^2 − 4(θ + λ))).

From Theorem 7.17 we get

p0 = 1 − 1/(aθ),   (7.1)
pj = (1/(aθ)) α^{j−1}(1 − α), j ≥ 1.   (7.2)
7.64. The derivation in Section 7.7 continues to remain valid for the modified retrial queue up to and including Equation 7.55. Since each external customer that sees a busy server joins the orbit with probability 1 − c, the expression for E(z^{An}) is modified to

E(z^{An}) = G̃((1 − c)λ(1 − z)).

Using this modification through the rest of the analysis, we get the stability condition as ρ(1 − c) < 1 and

φ(z) = (1 − ρ(1 − c)) · (1 − z)G̃((1 − c)λ(1 − z))/(G̃((1 − c)λ(1 − z)) − z) · exp(−(λ/θ) ∫_z^1 [1 − G̃((1 − c)λ(1 − u))]/[G̃((1 − c)λ(1 − u)) − u] du).

The expression for the mean of Xn in the limit is obtained from Equation 7.59 by replacing all ρ's by ρ(1 − c)'s.

7.65. With exp(µ) service times we get G̃(s) = µ/(s + µ). Hence we get

[1 − G̃(λ(1 − u))]/[G̃(λ(1 − u)) − u] = ρ/(1 − ρu).
Hence, substituting in Equation 7.52 and simplifying, we get

φ(z) = ((1 − ρ)/(1 − ρz)) exp(−(λ/θ) ∫_z^1 ρ/(1 − ρu) du).

Evaluating the integral, we get

φ(z) = ((1 − ρ)/(1 − ρz))^{λ/θ + 1}.

By expanding the above as a power series in z (using the binomial theorem) we get

φ(z) = Σ_{n=0}^∞ pn z^n ,

where

p0 = p00 , pn = p0n + p1,n−1 , n ≥ 1,
where p0n and p1n are as derived in 6.32. This shows that the results are consistent.

7.66. 1. From the mechanics of the (Q, Q − 1) policy we see that X(t) increases by one whenever a demand occurs, and decreases by one whenever an outstanding order is received. Thus the arrivals in the X process (upward jumps) occur according to a Poisson process with rate µ, while each customer (outstanding order) stays in the system (service) for a random amount of time with mean τ (the mean lead time). The service times are iid. Thus the X process is an M/G/∞ queue.
2. It is known that the steady state distribution of the number of customers in an M/G/∞ queue is Poisson with mean µτ. The long run fraction of the time the warehouse is empty is given by the long run probability that there are at least Q outstanding orders, which is given by

lim_{t→∞} P(X(t) ≥ Q) = 1 − Σ_{i=0}^{Q−1} e^{−µτ} (µτ)^i/i!.
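The stockout probability above is just a Poisson tail, so it is easy to evaluate numerically. A small sketch, with assumed illustrative values µ = 2 demands per day, τ = 3 days lead time, and Q = 10:

```python
import math

def stockout_prob(mu, tau, Q):
    """Long-run fraction of time the warehouse is empty under the
    (Q, Q-1) policy: P(X >= Q) with X ~ Poisson(mu*tau)."""
    m = mu * tau
    return 1.0 - sum(math.exp(-m) * m**i / math.factorial(i) for i in range(Q))

p = stockout_prob(2.0, 3.0, 10)          # Poisson(6) tail beyond 9
assert 0.05 < p < 0.12                   # roughly 8% of the time
assert stockout_prob(2.0, 3.0, 20) < p   # a larger base stock lowers the risk
```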
7.67. α = .02, β(α) = 2.92, µ = 10 per hour, r = λ/µ = 1000. Hence the recommended staffing level is s = 1000 + 2.92√1000 = 1093 (rounded up). The expected queueing time is Wq = α/(µβ(α)√r) = (.02/10)/(2.92√1000) hours, which equals .0013 minutes.

7.68. From the proof of Theorem ?? we get

lim_{r→∞} √r e^{−r} r^s/s! = φ(β),

and

lim_{r→∞} Σ_{j=0}^s e^{−r} r^j/j! = Φ(β).

Using these limits in the Erlang-B formula as given in Computational Exercise 7.24 we get

lim_{r→∞} √r B(s, r) = φ(β)/Φ(β) = φ(−β)/(1 − Φ(−β)) = h(−β).

We know that h(x) is an increasing function of x, increasing from 0 at x = −∞ through √(2/π) at x = 0, and growing to +∞ as x → ∞. Thus if r < 2/(πα^2), then √r α < √(2/π), and hence the solution to √r α = h(−β) occurs at β < 0. Otherwise it occurs at β > 0.

7.69. When λ = 10,000, we need to find a β satisfying h(−β) = .02√1000 = .6325. Numerically we get β = −.27. Thus the recommended staffing level is s = r − .27√1000 = 992 (rounded up). When λ = 100,000, we need to find a β satisfying h(−β) = .02√10000 = 2. Numerically we get β = 1.58. Thus the recommended staffing level is s = r + 1.58√10000 = 10,158 (rounded up).

7.70. The density of Se is given by (1 − G(u))/τ. Hence we get

m(t) = τ E(λ(t − Se )) = τ ∫_0^∞ λ(t − u)(1 − G(u))/τ du = ∫_{−∞}^t λ(u)(1 − G(t − u)) du
as desired.

7.71. We have m(t) = τ E(λ(t − Se )) = τ λ E(1 + α sin(β(t − Se ))). The desired result follows from the identity sin(a − b) = sin(a)cos(b) − cos(a)sin(b).

7.72. No solution given. Consider Computational Exercise 7.71 with λ = α = β = 1. Plot λ(t) and m(t) over 0 ≤ t ≤ 5. What does it tell you about when the queue length achieves its maximum relative to where the arrival rate function achieves its maximum?

7.73. Let A be an interarrival time. Then P(A = kd) = (1 − θ)^{k−1} θ, k ≥ 1. We have E(A) = d/θ. Since the interarrival times are iid with this distribution and the service times are iid exp(µ), {X(t), t ≥ 0} is the queue length process of a G/M/1 queue. The stability condition is θ/d < µ. We have

G̃(s) = E(e^{−sA}) = e^{−sd} θ/(1 − e^{−sd}(1 − θ)).

Assume stability. Let α ∈ (0, 1) be the solution to α = G̃(µ(1 − α)). The limiting queue length distribution is p0 = 1 − ρ, pj = ρ α^{j−1}(1 − α), j ≥ 1.
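The fixed-point equation α = G̃(µ(1 − α)) has no closed form here, but since G̃ maps (0, 1) into (0, 1), iterating it converges quickly in the stable case. A minimal numerical sketch (the values θ = 0.5, d = 1, µ = 1 are assumed illustrations; the arrival rate θ/d = 0.5 < µ, so the queue is stable):

```python
import math

def gm1_alpha(theta, d, mu, tol=1e-12):
    """Solve alpha = Gtilde(mu*(1 - alpha)) by fixed-point iteration,
    for the G/M/1 queue with P(A = k d) = (1 - theta)^(k-1) theta."""
    def lst(s):
        e = math.exp(-s * d)
        return e * theta / (1.0 - e * (1.0 - theta))
    alpha = 0.5                      # any starting point in (0, 1)
    while True:
        new = lst(mu * (1.0 - alpha))
        if abs(new - alpha) < tol:
            return new
        alpha = new

a = gm1_alpha(0.5, 1.0, 1.0)
assert 0.0 < a < 1.0                 # the root needed for p_j above
```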
7.74. 1. Each customer joins with probability P(Vi > p) = 1 − p. Hence {N(t), t ≥ 0} is a PP(λ(1 − p)).
2. {X(t), t ≥ 0} is the queue-length process in an M/G/1 queue with PP(λ(1 − p)) arrivals, and iid service times with mean τ and second moment s2 .
3. The expected number in the queue in steady state is given by

L = ρ(1 − p) + (1/2) · λ^2(1 − p)^2 s2/(1 − ρ(1 − p)).

4. The net profit is given by

G(p) = λ(1 − p)p − hL = λ(1 − p)p − hρ(1 − p) − (h/2) · λ^2(1 − p)^2 s2/(1 − ρ(1 − p)), 0 ≤ p ≤ 1.

The first term is concave, the second term is linear, and the third term is also concave as a function of p (due to the minus sign). Hence G(p) is concave. Using the values given above, we have

G(p) = 10(1 − p)p − .9(1 − p) − 50(1 − p)^2/(1 − .9(1 − p)).
This is maximized at p∗ = .93. The optimal net profit is G(p∗ ) = .326.

7.75. 1. In steady state, the ith node (i ≥ 2) behaves like an M/M/1 queue with arrival rate ai = µ1 Π_{k=1}^{i−1} αk and service rate µi . Let ρi = ai/µi , 2 ≤ i ≤ n. Then the condition of stability is ρi < 1 for 2 ≤ i ≤ n. The expected number in the ith station is given by

Li = ρi/(1 − ρi ), i ≥ 2.

2. Let a1 = µ1 and αn = 0. The long run net profit is given by

G(α1 , · · · , αn ) = Σ_{i=1}^n di ai (1 − αi ) − Σ_{i=2}^n hi ρi/(1 − ρi ).
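Returning to solution 7.74 above: since G(p) is concave on [0, 1], a simple grid search recovers the reported optimum. A sketch using the numeric values given there:

```python
def net_profit(p):
    """G(p) of solution 7.74 with the numeric values used there."""
    return 10 * (1 - p) * p - 0.9 * (1 - p) \
        - 50 * (1 - p) ** 2 / (1 - 0.9 * (1 - p))

# concavity means a fine grid search suffices to locate the maximizer
best_p = max((i / 1000 for i in range(1001)), key=net_profit)
assert abs(best_p - 0.93) < 0.005            # p* = .93 as reported
assert abs(net_profit(best_p) - 0.326) < 0.002  # G(p*) = .326 as reported
```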
7.76. 1. The arrival process is a renewal process since an arrival can only occur when the arrival process is on. Thus the time until the next arrival is independent of the past.
2. τ, the mean interarrival time, satisfies

τ = 1/(β + λ) + (β/(β + λ))(1/α + τ).

This yields

τ = (α + β)/(λα).

Its LST φ(s) satisfies

φ(s) = ((β + λ)/(s + β + λ)) · (λ/(β + λ) + (β/(β + λ)) · (α/(s + α)) φ(s)).

The solution is given by

φ(s) = λ(s + α)/(s^2 + s(α + β + λ) + αλ).

3. The G/M/1 queue is stable if ρ = λα/(µ(α + β)) < 1. Let a ∈ (0, 1) be the solution to
a = φ(µ(1 − a)). Then the limiting distribution is given by π0 = 1 − ρ, πj = ρ a^{j−1}(1 − a), j ≥ 1.

7.77. 1. {X1 (t), t ≥ 0} is the queue length process of an M/M/1 queueing system with arrival rate λ1 and service rate µ. This is because the class one customers are unaffected by the presence of class two customers due to the preemptive priority.
2. {X(t), t ≥ 0} is the queue length process of an M/M/1 queueing system with arrival rate λ1 + λ2 and service rate µ, since the service rates are the same for both classes, and the queue length process is independent of the service discipline.
3. λ1 + λ2 < µ.
4. Let ρi = λi/µ, i = 1, 2, and ρ = ρ1 + ρ2 . Then

L1 = ρ1/(1 − ρ1 ), L1 + L2 = ρ/(1 − ρ).

Hence

L2 = ρ2/((1 − ρ1 )(1 − ρ)).

7.78. We know that {X2 (t), t ≥ 0} is the queue length process of a G/M/1 queue. The LST of the interarrival times is

G̃(s) = (λ/(s + λ)) · (µ1/(s + µ1 )),

and the mean is 1/λ + 1/µ1 = (λ + µ1 )/(λµ1 ). Hence the stability condition is ρ = λµ1/(µ2 (λ + µ1 )) < 1. Let α ∈ (0, 1) be the solution to α = G̃(µ2 (1 − α)). We get

α = [(λ + µ1 + µ2 ) − √((λ + µ1 + µ2 )^2 − 4λµ1 )]/(2µ2 ).

The limiting distribution of X(t) is p0 = 1 − ρ, pj = ρ α^{j−1}(1 − α), j ≥ 1.
The expected number of customers in steady state is L = ρ/(1 − α).

7.79. 1. We see that over each cycle of length 10, there are 6 arrivals, 4 entries, and 4 departures. Out of these six arrivals, the first sees an empty system, the second, third, and fifth see one person, and the fourth and sixth see two persons ahead of them. This happens in every cycle. Hence we see that π̂0 = 1/6, π̂1 = 3/6, π̂2 = 2/6, π̂j = 0 for j ≥ 3, and α = 4/6, α0 = 1, α1 = 2/3, α2 = 1/2. Similarly, we get π0∗ = 1/4, π1∗ = 2/4, π2∗ = 1/4, πj∗ = 0 for j ≥ 3. It is easy to see that Theorem 7.1 holds.
2. We have π0 = 1/4, π1 = 2/4, π2 = 1/4, πj = 0 for j ≥ 3. Thus Theorem 7.2 holds.
3. p0 = 1/10, p1 = 4/10, p2 = 4/10, p3 = 1/10, pj = 0 for j ≥ 4. Thus pj ≠ π̂j . This is because the arrival process is not Poisson.

7.80. Consider the sample path of Computational Exercise 7.79.
1. L = (1/10) ∫_0^{10} X(u) du = 1.5.
2. λ = 6/10. Arrival 1 enters at time 1 and leaves at time 5, hence W1 = 4. Arrival 2 arrives at time 2 and leaves at time 2, hence W2 = 0, etc. Thus we get W1 = 4, W2 = 0, W3 = 5, W4 = 0, W5 = 3, W6 = 3. Hence

W = Σ_{i=1}^6 Wi/6 = 15/6 = 2.5.

Thus L = λW holds.
3. λe = 4/10, We = (W1 + W3 + W5 + W6 )/4 = 15/4. Thus Le = λe We holds.

7.81. 1. L, λ and λe remain unchanged as in the solution to Computational Exercise 7.80.
2. Under the LCFS discipline arrival 1 enters at time 1 and leaves at time 10, hence W1 = 9. We have W2 = W4 = 0 since these arrivals do not enter. The third arrival enters at time 3 and leaves at time 5, so W3 = 2. Similarly, W5 = 3 and W6 = 1. Hence

W = Σ_{i=1}^6 Wi/6 = 15/6 = 2.5.

3. We = (W1 + W3 + W5 + W6 )/4 = 15/4. Thus Le = λe We holds.

7.82. The expected number of customers in node i in steady state is (using the M/M/1 formula)

Li = λ/(µi − λ), 1 ≤ i ≤ N.

The long run holding cost per unit time in steady state is c(µ1 , µ2 , · · · , µN ) =
Σ_{i=1}^N hi Li = Σ_{i=1}^N hi λ/(µi − λ).

We solve the optimization problem: minimize c(µ1 , µ2 , · · · , µN ) subject to µ1 + µ2 + · · · + µN = µ. This can be solved by the Lagrange multiplier method to get

µi∗ = λ + (µ − Nλ) √hi / Σ_{j=1}^N √hj .
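The allocation formula is easy to check numerically: the capacity constraint must hold, and at the optimum the Lagrange first-order condition makes hi/(µi − λ)² equal across stations. A sketch with assumed illustrative data (λ = 0.5, total capacity µ = 6, holding costs 1, 4, 9):

```python
import math

def optimal_rates(lam, mu_total, h):
    """Optimal allocation of solution 7.82:
    mu_i = lam + (mu_total - N*lam) * sqrt(h_i) / sum_j sqrt(h_j)."""
    n = len(h)
    roots = [math.sqrt(x) for x in h]
    tot = sum(roots)
    return [lam + (mu_total - n * lam) * r / tot for r in roots]

h = [1.0, 4.0, 9.0]                  # assumed holding costs
mus = optimal_rates(0.5, 6.0, h)
assert abs(sum(mus) - 6.0) < 1e-9    # the capacity constraint is met
# first-order condition: h_i/(mu_i - lam)^2 is equal across stations
ratios = [h[i] / (mus[i] - 0.5) ** 2 for i in range(3)]
assert max(ratios) - min(ratios) < 1e-9
```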
CHAPTER 8
Renewal Processes
Computational Exercises

8.1. Consider the M/M/1/K queueing system at Sn . The remaining service time of the customer in service ∼ exp(µ) and the time until the next arrival ∼ exp(λ). The evolution of the system is independent of Sn and the past, and each customer who sees the system full faces a stochastically identical system. Hence the interarrival times of customers who see the system full are iid and {Sn , n ≥ 0} is a renewal sequence. N(t) = number of customers that are rejected up to t. The corresponding process in the M/G/1/K system is not a renewal process since the remaining service time of the customer in service depends on Sn . If we assume that a customer arrives and sees the system full at time 0, then the corresponding process is a renewal process in the G/M/1/K case, since the service times are exponentially distributed.

8.2. Since the downward jumps are always occurring at rate µ, it is clear that {Sn , n ≥ 0} is a renewal sequence generated by iid exp(µ) random variables. The corresponding renewal process counts the number of downward jumps in (0, t].

8.3. Recurrent, since the interevent times are iid exp(µ), a nondefective distribution.

8.4. A busy cycle is said to begin when a customer enters an empty system. Let Sn be the start time of the nth busy cycle. Since the service times and the interarrival times are iid, and independent of each other, the successive busy cycles are iid. Hence {Sn , n ≥ 0} is a renewal sequence.

8.5. G̃(s) = ∫_0^∞ e^{−st} λ^2 t e^{−λt} dt = (λ/(s + λ))^2.

G̃k (s) = [G̃(s)]^k = (λ/(s + λ))^{2k} → LST of Erlang(2k, λ).

Gk (t) = 1 − Σ_{r=0}^{2k−1} e^{−λt} (λt)^r/r!.

P{N(t) = k} = Gk (t) − Gk+1 (t) = e^{−λt} (λt)^{2k}/(2k)! + e^{−λt} (λt)^{2k+1}/(2k + 1)!.
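These probabilities are just the Poisson(λt) masses at 2k and 2k + 1, so they must sum to one over k, and Σ k·P{N(t) = k} must reproduce the renewal function M(t) = (λ/2)t − (1/4)(1 − e^{−2λt}) obtained by setting µ = λ in solution 8.17 below. A quick numeric check (λ = 1, t = 2 are assumed values):

```python
import math

def prob_N(lam, t, k):
    """P{N(t)=k} for Erlang(2, lam) interevent times (solution 8.5):
    the Poisson(lam*t) masses at 2k and 2k+1."""
    m = lam * t
    return math.exp(-m) * (m**(2 * k) / math.factorial(2 * k)
                           + m**(2 * k + 1) / math.factorial(2 * k + 1))

lam, t = 1.0, 2.0
total = sum(prob_N(lam, t, k) for k in range(50))
mean = sum(k * prob_N(lam, t, k) for k in range(50))
# renewal function from solution 8.17 with mu = lam
M = (lam / 2) * t - (1 - math.exp(-2 * lam * t)) / 4
assert abs(total - 1.0) < 1e-9
assert abs(mean - M) < 1e-9
```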
8.6. Let An be the number of events at time n. From the distribution of Xn , it is clear that {An , n ≥ 1} is a sequence of iid G(α) random variables while A0 is a MG(α) random variable. Thus we must have N(n) ≥ n. Now,

P(N(0) = k) = P(A0 = k) = (1 − α)^k α, k ≥ 0.

For k ≥ n ≥ 1, we use the fact that 1 + A0 is a G(α) random variable to obtain

P(N(n) = k) = P(1 + A0 + A1 + · · · + An = k + 1) = (k choose n)(1 − α)^{k−n} α^{n+1}.

Thus N(n) is a Negative Binomial random variable with parameters (n + 1, α).

8.7. Let An be the number of events at time n. From the distribution of Xn , it is clear that {An , n ≥ 1} is a sequence of iid Ber(α) random variables while A0 is 0. Thus we must have N(n) ≤ n. Now, for 0 ≤ k ≤ n, we obtain

P(N(n) = k) = P(A1 + · · · + An = k) = (n choose k)(1 − α)^k α^{n−k}.
Thus N(n) is a Binomial random variable with parameters (n, 1 − α).

8.8. Let α1 = .8, α2 = .2. From Equation 8.7 we get p0 (0) = 1, p0 (1) = .2, p0 (n) = 0, n ≥ 2. From Equation 8.8 we get

pk (0) = 0, k ≥ 1,
pk (1) = .8pk−1 (0), k ≥ 1,
pk (n) = .8pk−1 (n − 1) + .2pk−1 (n − 2), k ≥ 1, n ≥ 2.
8.9. Let α0 = .2, α1 = .3, α2 = .5. From Equation 8.7 we get p0 (0) = .8, p0 (1) = .5, p0 (n) = 0, n ≥ 2. From Equation 8.8 we get

pk (0) = .2pk−1 (0), k ≥ 1,
pk (1) = .2pk−1 (1) + .3pk−1 (0), k ≥ 1,
pk (n) = .2pk−1 (n) + .3pk−1 (n − 1) + .5pk−1 (n − 2), k ≥ 1, n ≥ 2.
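The recursions of 8.8 and 8.9 are both instances of pk(n) = Σj αj pk−1(n − j) with p0(n) = P(X1 > n), so they are easy to evaluate for any interevent pmf. A general sketch (the table sizes are arbitrary choices):

```python
def renewal_counts(alpha, nmax, kmax):
    """p_k(n) = P(N(n) = k) for a discrete renewal process with
    interevent pmf alpha = {j: P(X = j)}, via the recursions of
    solutions 8.8-8.9: p_0(n) = P(X > n), p_k(n) = sum_j alpha_j p_{k-1}(n-j)."""
    p = [[0.0] * (nmax + 1) for _ in range(kmax + 1)]
    for n in range(nmax + 1):
        p[0][n] = 1.0 - sum(pj for j, pj in alpha.items() if j <= n)
    for k in range(1, kmax + 1):
        for n in range(nmax + 1):
            p[k][n] = sum(pj * p[k - 1][n - j]
                          for j, pj in alpha.items() if j <= n)
    return p

p = renewal_counts({1: 0.8, 2: 0.2}, nmax=10, kmax=12)  # the pmf of 8.8
# the masses P(N(n) = k) sum to one in k for every n
assert all(abs(sum(p[k][n] for k in range(13)) - 1.0) < 1e-9 for n in range(11))
# mean count at n = 2 agrees with M(2) = 1.64 from solution 8.16 below
assert abs(sum(k * p[k][2] for k in range(13)) - 1.64) < 1e-12
```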
8.10. Let Fi (t) = 1 − e^{−λi t}, i = 1, 2. We are given G(t) = rF1 (t) + (1 − r)F2 (t). Also define

Fi,i (t) = Fi ∗ Fi (t) = 1 − e^{−λi t} − λi t e^{−λi t}, i = 1, 2,

F1,2 (t) = F1 ∗ F2 (t) = 1 − (λ2 e^{−λ1 t} − λ1 e^{−λ2 t})/(λ2 − λ1 ).

We have

G̃(s) = r λ1/(s + λ1 ) + (1 − r) λ2/(s + λ2 ).

Then, from Equation 8.21, p̃k (s) = G̃(s)^k [1 − G̃(s)]. Hence

p0 (t) = 1 − G(t) = r e^{−λ1 t} + (1 − r) e^{−λ2 t},
p1 (t) = G(t) − G ∗ G(t) = G(t) − r^2 F1,1 (t) − 2r(1 − r)F1,2 (t) − (1 − r)^2 F2,2 (t).

8.11. Xn = machine lifetime ∼ U[2, 5].
(a) The inter-replacement time is Yn = min(Xn , 3), with E(Yn ) = 17/6. Long run rate of replacements = 6/17.
(b) The time between two consecutive planned replacements is T = 3 if X1 ≥ 3, and T = X1 + T' if X1 < 3, where T' is an independent copy of T. E(T) = E(min(X1 , 3)) + (1/3)E(T) ⇒ (2/3)E(T) = 17/6 ⇒ E(T) = 17/4. Hence the long run rate of planned replacements = 4/17.
(c) The time between two consecutive failures is T = X1 if X1 < 3, and T = 3 + T' if X1 ≥ 3. E(T) = E(min(X1 , 3)) + (2/3)E(T) ⇒ (1/3)E(T) = 17/6 ⇒ E(T) = 17/2. Hence the long run rate of failures = 2/17.

8.12. From part (a) of Computational Exercise 8.11 above, τ = E(Yn ) = 17/6. We also have

E(Yn^2) = ∫_2^3 x^2/3 dx + 3^2 · (2/3) = 73/9.

Hence σ^2 = 73/9 − (17/6)^2 = 1/12. From Theorem 8.7, we see that asymptotically N(t) ∼ N(.353t, .00366t).

8.13. Let T = min{n ≥ 1 : Xn = 0}. Then {N(n), n ≥ 0} is a renewal process generated by a sequence of positive integer valued random variables distributed as T. To compute the asymptotic distribution of N(n) by using Theorem 8.7, we need τ = E(T | X0 = 0) and σ^2 = Var(T | X0 = 0). Let mi = E(T | X0 = i). A first step analysis yields

m0 = 1 + (1 − α)m1 , m1 = 1 + βm1 .

Solving, we get

τ = m0 = 1 + (1 − α)/(1 − β).
Similarly, let vi = E(T^2 | X0 = i). Then

v0 = 1 · α + (1 + 2m1 + v1 )(1 − α), v1 = 1 · (1 − β) + (1 + 2m1 + v1 )β.

Solving, we get

v0 = 2 + α − β + 2/(1 − β).

Thus σ^2 = v0 − m0^2. Then asymptotically, N(n) ∼ N(n/τ, σ^2 n/τ^3).
8.14. Let the inter-renewal time be X. Then X = X1 + X2 where X1 ∼ exp(λ) and X2 ∼ exp(µ). Hence,

E(X) = τ = 1/µ + 1/λ,
Var(X) = σ^2 = 1/µ^2 + 1/λ^2.

By Theorem 8.7, N(t) is asymptotically N(t/τ, σ^2 t/τ^3).
8.15. G̃(s) = r λ1/(s + λ1 ) + (1 − r) λ2/(s + λ2 ). Substituting in Equation 8.16 and simplifying we get

M̃(s) = (λ1 λ2 + rλ1 s + (1 − r)λ2 s)/(s(s + (1 − r)λ1 + rλ2 )).

Using partial fraction expansion, we get

M̃(s) = (λ1 λ2/((1 − r)λ1 + rλ2 )) · (1/s) + (r(1 − r)(λ1 − λ2 )^2/((1 − r)λ1 + rλ2 )) · (1/(s + (1 − r)λ1 + rλ2 )).

Inverting the LST we get

M(t) = (λ1 λ2/((1 − r)λ1 + rλ2 )) t + (r(1 − r)(λ1 − λ2 )^2/((1 − r)λ1 + rλ2 )^2)(1 − e^{−((1−r)λ1 +rλ2 )t}).
8.16. Let α1 = .8, α2 = .2. Then β0 = 0, β1 = .8, βn = 1, n ≥ 2. From Equation 8.17, we get M(0) = 0, M(1) = .8 and

M(n) = 1 + .8M(n − 1) + .2M(n − 2).

The last equation can be solved numerically in a recursive manner to obtain M(n). Or, it can be solved as a nonhomogeneous difference equation with constant coefficients to obtain

M(n) = n/1.2 + (4/144)((−.2)^n − 1), n ≥ 0.

8.17. G̃(s) = (λ/(s + λ)) · (µ/(s + µ)).

M̃(s) = G̃(s)/(1 − G̃(s)) = λµ/(s(s + λ + µ)) = (λµ/(λ + µ)) · (1/s) − (λµ/(λ + µ)) · (1/(s + λ + µ)).

M(t) = (λµ/(λ + µ)) t − (λµ/(λ + µ)^2)(1 − e^{−(λ+µ)t}).
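The recursion and the closed form in solution 8.16 can be cross-checked numerically; a minimal sketch:

```python
def M_recursive(n):
    """M(n) via the recursion of solution 8.16:
    M(0)=0, M(1)=.8, M(n) = 1 + .8*M(n-1) + .2*M(n-2)."""
    vals = [0.0, 0.8]
    for k in range(2, n + 1):
        vals.append(1.0 + 0.8 * vals[k - 1] + 0.2 * vals[k - 2])
    return vals[n]

def M_closed(n):
    """Closed form: M(n) = n/1.2 + (4/144)((-0.2)^n - 1)."""
    return n / 1.2 + (4.0 / 144.0) * ((-0.2) ** n - 1.0)

assert abs(M_recursive(2) - 1.64) < 1e-12      # 1 + .8(.8) + .2(0)
assert all(abs(M_recursive(n) - M_closed(n)) < 1e-9 for n in range(30))
```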
8.18. Let H(t) = E(S_{N(t)+k}). Then

E(S_{N(t)+k} | S1 = x) = x + E(S_{k−1}) = x + (k − 1)τ if x > t,
E(S_{N(t)+k} | S1 = x) = x + E(S_{N(t−x)+k}) = x + H(t − x) if x ≤ t.

Hence

H(t) = ∫_0^∞ x dG(x) + ∫_t^∞ (k − 1)τ dG(x) + ∫_0^t H(t − x) dG(x)
     = τ + (k − 1)τ(1 − G(t)) + H ∗ G(t).

Taking LSTs,

H̃(s) = τ + (k − 1)τ(1 − G̃(s)) + H̃(s)G̃(s),

which yields

H̃(s) = τ(k + G̃(s)/(1 − G̃(s))) = τ(k + M̃(s)).

Hence H(t) = τ(k + M(t)).

8.19. Each renewal in the N process is counted with probability p in the N∗ process. Hence, p is the expected number of renewals in the N∗ process for each renewal in the N process. Hence M∗(t) = pM(t).

8.20. Let {N(t), t ≥ 0} be generated by the iid sequence {Xn , n ≥ 0} with common LST G̃(s). Then {N1 (t), t ≥ 0} is a renewal process generated by the iid sequence {Yn , n ≥ 1}, where Yn = X_{2n−1} + X_{2n}. Hence the common LST of Y is G̃^2(s). Hence

M̃1 (s) = G̃^2(s)/(1 − G̃^2(s)).

Next, {N2 (t), t ≥ 0} is a delayed renewal process generated by the sequence {Yn , n ≥ 1}, where Y1 = X1 and Yn = X_{2n−2} + X_{2n−1}, n ≥ 2. Hence the LST of Y1 is G̃(s) and the common LST of Yn , n ≥ 2, is G̃^2(s). Hence

M̃2 (s) = G̃(s)/(1 − G̃^2(s)).
208
RENEWAL PROCESSES
If t > x, H(t − u) 1 P{A(t) ≤ x  X1 = u} = 0
0≤u