Exercises and Solutions in Statistical Theory (Solutions, Instructor Solution Manual) [1 ed.] 9781466572928, 9781466572898, 1466572892


English | 188 pages | 2013
SOLUTIONS MANUAL FOR Exercises and Solutions in Statistical Theory

by Lawrence L. Kupper, Brian H. Neelon and Sean M. O’Brien


Boca Raton   London   New York

CRC Press is an imprint of the Taylor & Francis Group, an informa business

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2013 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20130124

International Standard Book Number-13: 978-1-4665-7292-8 (Paperback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Contents

2 Basic Probability Theory
  2.1 Solutions to Even-Numbered Exercises

3 Univariate Distribution Theory
  3.1 Solutions to Even-Numbered Exercises

4 Multivariate Distribution Theory
  4.1 Solutions to Even-Numbered Exercises

5 Estimation Theory
  5.1 Solutions to Even-Numbered Exercises

6 Hypothesis Testing Theory
  6.1 Solutions to Even-Numbered Exercises

Chapter 2

Basic Probability Theory

2.1 Solutions to Even-Numbered Exercises

Solution 2.2.

(a) By De Morgan's law, Ā ∩ B̄ is the complement of A ∪ B, so

pr(Ā ∩ B̄) = 1 − pr(A ∪ B)
          = 1 − pr(A) − pr(B) + pr(A ∩ B)
          = 1 − pr(A) − pr(B) + pr(A)pr(B)
          = [1 − pr(A)] − pr(B)[1 − pr(A)]
          = [1 − pr(A)][1 − pr(B)] = pr(Ā)pr(B̄),

which completes the proof.

(b) pr(A)pr(B̄) = pr(A)[1 − pr(B)] = pr(A) − pr(A)pr(B) = pr(A) − pr(A ∩ B). Hence, since pr(A) = pr(A ∩ B) + pr(A ∩ B̄),

pr(A)pr(B̄) = pr(A) − pr(A ∩ B) = pr(A ∩ B̄).

The second result follows in a completely analogous manner.

Solution 2.4.

(a) pr(lot is purchased) = C^5_0 C^{95}_{10} / C^{100}_{10} = 0.5838.

(b) Let k denote the smallest number of defective kidney dialysis machines that can be in the lot of 100 machines so that the probability is no more than 0.20 that the hospital will purchase the entire lot of 100 machines. Then, we need to find the smallest positive integer k such that

C^k_0 C^{100−k}_{10} / C^{100}_{10} = C^{100−k}_{10} / C^{100}_{10} ≤ 0.20.

By computer, or by trial and error, we obtain k = 15.

Solution 2.6.

(a) pr(car door breaks during the 1,000-th trial) = pr[(car door does not break during any of the first 999 trials) ∩ (car door breaks during the 1,000-th trial)] = (0.9995)^{999}(0.0005) = 0.0003.

(b) pr(car door breaks before the 1,001-th trial starts) = 1 − pr(car door does not break during the first 1,000 trials) = 1 − (0.9995)^{1,000} = 1 − 0.6065 = 0.3935.

(c) The assumption that the probability of the car door breaking does not change from trial to trial is probably an unrealistic one. As the number of trials increases, the probability of breakage would be expected to slowly increase, negating the assumption of mutually independent trials.

Solution 2.8. First,

pr(A) = pr(A|C)pr(C) + pr(A|C̄)pr(C̄) = (0.90)(0.01) + (0.06)(0.99) = 0.0684.

And,

pr(B) = pr(B|C)pr(C) + pr(B|C̄)pr(C̄) = (0.95)(0.01) + (0.08)(0.99) = 0.0887.

Also,

pr(A ∩ B) = pr(A ∩ B|C)pr(C) + pr(A ∩ B|C̄)pr(C̄)
          = pr(A|C)pr(B|C)pr(C) + pr(A|C̄)pr(B|C̄)pr(C̄)
          = (0.90)(0.95)(0.01) + (0.06)(0.08)(0.99) = 0.0133.

Finally,

pr(A ∩ B) = 0.0133 ≠ pr(A)pr(B) = (0.0684)(0.0887) = 0.0061,

so that events A and B are unconditionally dependent. This simple numerical example illustrates the general principle that conditional independence between two events does not imply unconditional independence between these same two events.

Solution 2.10. Now,

pr(A) = 1 − pr(no heads among the n tosses) − pr(no tails among the n tosses)
      = 1 − (1/2)^n − (1/2)^n = 1 − (1/2)^{n−1}.

And,

pr(B) = pr(no tails among the n tosses) + pr(exactly one tail among the n tosses)
      = (1/2)^n + n(1/2)^n = (n + 1)(1/2)^n.

Also,

pr(A ∩ B) = pr(exactly one tail among the n tosses) = n(1/2)^n.

Finally, for A and B to be independent events, we require pr(A ∩ B) = pr(A)pr(B), or

n(1/2)^n = [1 − (1/2)^{n−1}](n + 1)(1/2)^n,

or equivalently,

n/(n + 1) = 1 − (1/2)^{n−1},

giving n = 3.
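Because the sample space here is tiny, the n = 3 answer can be confirmed by brute force. The sketch below (a minimal check, with the event definitions read off from the solution: A = "at least one head and at least one tail", B = "at most one tail") enumerates all 2^n equally likely toss sequences; since every probability is a dyadic fraction, float comparison is exact:

```python
from itertools import product

def independent(n):
    """Exhaustively test whether A and B are independent for n fair tosses."""
    seqs = list(product("HT", repeat=n))
    N = len(seqs)                                   # 2^n equally likely outcomes
    in_A = [s for s in seqs if "H" in s and "T" in s]   # at least one H and one T
    in_B = [s for s in seqs if s.count("T") <= 1]       # at most one tail
    in_AB = [s for s in in_A if s.count("T") <= 1]
    return len(in_AB) / N == (len(in_A) / N) * (len(in_B) / N)
```

Running `independent(n)` for n = 2, 3, 4, ... confirms that n = 3 is the only small case where the two events are exactly independent.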

Solution 2.12.

(a) pr(all four are the same race) = pr(all four are C) + pr(all four are H) + pr(all four are A) + pr(all four are N) = (0.45)^4 + (0.25)^4 + (0.20)^4 + (0.10)^4 = 0.0466.

(b) pr[exactly 2 (and only 2) are the same race] = pr(2 C's and any two other races) + pr(2 H's and any two other races) + pr(2 A's and any two other races) + pr(2 N's and any two other races) = 6(0.45)^2(0.55)^2 + 6(0.25)^2(0.75)^2 + 6(0.20)^2(0.80)^2 + 6(0.10)^2(0.90)^2 = 0.7806.

(c) pr(at least 2 are not Caucasian) = Σ_{j=2}^{4} C^4_j (0.55)^j (0.45)^{4−j} = 0.7585.

(d) Let E1 ≡ "exactly 2 of 4 are C", and let E2 ≡ "all 4 are each either C or H". So,

pr(E1|E2) = pr(E1 ∩ E2)/pr(E2) = pr[2 C's and 2 H's]/pr(E2) = 6(0.45)^2(0.25)^2/(0.45 + 0.25)^4 = 0.3161.

Solution 2.14.

(a) pr(C|H ∩ D) = 100/150 = 2/3; or,

pr(C|H ∩ D) = pr(C ∩ H ∩ D)/pr(H ∩ D) = (100/300)/(150/300) = 2/3.

(b)

pr(C ∪ D|H̄) = pr(C|H̄) + pr(D|H̄) − pr(C ∩ D|H̄) = 50/100 + 60/100 − 40/100 = 0.70;

or,

pr(C ∪ D|H̄) = 1 − pr(C̄ ∩ D̄|H̄) = 1 − 30/100 = 0.70.

(c) pr(H|C̄) = 90/(90 + 50) = 9/14; or,

pr(H|C̄) = pr(H ∩ C̄)/pr(C̄) = (90/300)/(140/300) = 9/14.

(d) By De Morgan's law, the complement of C ∩ H is C̄ ∪ H̄, so

pr(C̄ ∪ H̄|D) = 1 − pr(C ∩ H|D) = 1 − pr(C ∩ H ∩ D)/pr(D) = 1 − (100/300)/(210/300) = 11/21.

(e)

pr(C ∪ D ∪ H) = pr(C) + pr(D) + pr(H) − pr(C ∩ D) − pr(C ∩ H) − pr(D ∩ H) + pr(C ∩ D ∩ H)
             = 160/300 + 210/300 + 200/300 − 140/300 − 110/300 − 150/300 + 100/300 = 0.90;

or,

pr(C ∪ D ∪ H) = 1 − pr(C̄ ∩ D̄ ∩ H̄) = 1 − 30/300 = 0.90.

(f)

pr[C ∪ (H ∩ D)] = pr(C) + pr(H ∩ D) − pr(C ∩ H ∩ D) = 160/300 + 150/300 − 100/300 = 0.70.
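The answers in (a)–(f) can all be cross-checked from a single 2×2×2 table of counts. The cell counts below are reconstructed from the marginal totals used in the solution (|C| = 160, |D| = 210, |H| = 200, |C∩D| = 140, |C∩H| = 110, |D∩H| = 150, |C∩D∩H| = 100, grand total 300); exact rational arithmetic avoids any rounding:

```python
from fractions import Fraction as F

# Cell counts; keys are (C, D, H) indicator triples.
counts = {(1, 1, 1): 100, (1, 1, 0): 40, (1, 0, 1): 10, (1, 0, 0): 10,
          (0, 1, 1): 50, (0, 1, 0): 20, (0, 0, 1): 40, (0, 0, 0): 30}
TOTAL = sum(counts.values())  # 300

def pr(pred):
    """Probability of the event defined by pred(c, d, h)."""
    return F(sum(v for k, v in counts.items() if pred(*k)), TOTAL)

p_a = pr(lambda c, d, h: c and d and h) / pr(lambda c, d, h: d and h)      # 2/3
p_b = pr(lambda c, d, h: (c or d) and not h) / pr(lambda c, d, h: not h)   # 7/10
p_c = pr(lambda c, d, h: h and not c) / pr(lambda c, d, h: not c)          # 9/14
p_d = pr(lambda c, d, h: not (c and h) and d) / pr(lambda c, d, h: d)      # 11/21
p_e = pr(lambda c, d, h: c or d or h)                                      # 9/10
p_f = pr(lambda c, d, h: c or (h and d))                                   # 7/10
```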

Solution 2.16.

(a) π^n + (1 − π)^n.

(b) π^r (1 − π)^{n−r}, 0 ≤ r ≤ n.

(c) Σ_{j=0}^{r} C^n_j π^j (1 − π)^{n−j}, 0 ≤ r ≤ n.

(d) (π^s)[C^{n−s}_{r−s} π^{r−s} (1 − π)^{n−r}] = C^{n−s}_{r−s} π^r (1 − π)^{n−r}, 0 ≤ s ≤ r ≤ n.

(e) (π^s)[Σ_{j=r−s}^{n−s} C^{n−s}_j π^j (1 − π)^{(n−s)−j}] = Σ_{j=r−s}^{n−s} C^{n−s}_j π^{j+s} (1 − π)^{(n−s)−j}, 0 ≤ s ≤ r ≤ n.

Solution 2.18.

(a) First,

pr(A ∩ B|C) = pr(A|C)pr(B|C)
⇔ pr(A ∩ B ∩ C)/pr(C) = pr(A|C)[pr(B ∩ C)/pr(C)]
⇔ pr(A ∩ B ∩ C)/pr(B ∩ C) = pr(A|C)
⇔ pr(A|B ∩ C) = pr(A|C).

And,

pr(A|B ∩ C) = pr(A|C)
⇔ pr(A ∩ B ∩ C)/pr(B ∩ C) = pr(A ∩ C)/pr(C)
⇔ pr(A ∩ B ∩ C)/pr(A ∩ C) = pr(B ∩ C)/pr(C)
⇔ pr(B|A ∩ C) = pr(B|C),

which completes the proof that the three equalities are equivalent.

(b) For i = 1, 2, ..., 6, let Ei be the event that "the number i is rolled"; clearly, pr(Ei) = 1/6 and the events E1, E2, ..., E6 are pairwise mutually exclusive. Then,

pr(A|B ∩ C) = pr(A ∩ B ∩ C)/pr(B ∩ C) = pr(E6)/pr(E5 ∪ E6) = (1/6)/(2/6) = 1/2,

and

pr(A|C) = pr(A ∩ C)/pr(C) = pr(E6)/pr(E5 ∪ E6) = (1/6)/(2/6) = 1/2,

so that events A and B are conditionally independent given that event C has occurred. However,

pr(A|B ∩ C̄) = pr(A ∩ B ∩ C̄)/pr(B ∩ C̄) = pr(E4)/pr(E4) = 1,

and

pr(A|C̄) = pr(A ∩ C̄)/pr(C̄) = pr(E2 ∪ E4)/pr(E1 ∪ E2 ∪ E3 ∪ E4) = (2/6)/(4/6) = 1/2,

so that events A and B are conditionally dependent given that event C has not occurred.

Solution 2.20. Let A be the event that Joe gets at least one hit during each of the 13 games in which he had 3 official at bats, let B be the event that Joe gets at least one hit during each of the 31 games in which he had 4 official at bats, and let C be the event that Joe gets at least one hit during each of the 12 games in which he had 5 official at bats.

Now, under the stated assumptions, the probability that Joe does not get a hit in a game where he has 3 official at bats is equal to (1 − 0.408)^3 = (0.592)^3 = 0.2075, so that pr(A) = (1 − 0.2075)^{13} = 0.0486. Using this same strategy to compute pr(B) and pr(C), we have

π = pr(A ∩ B ∩ C) = pr(A)pr(B)pr(C)
  = (0.0486)[1 − (0.592)^4]^{31} [1 − (0.592)^5]^{12}
  = (0.0486)(0.0172)(0.4042) = 0.0003.

This approximate calculation provides strong evidence for why a hitting streak of 56 games has occurred only once during the entire history of major league baseball.

Solution 2.22. Let H be the event that heads is observed on the coin that is randomly selected, and let A be the event that the other side of this coin is also heads. Further, let C1 be the event that the coin selected has heads on both sides, and let C2 be the event that the coin selected has heads on one side and tails on the other.

So, we wish to compute the numerical value of pr(A|H) = pr(A ∩ H)/pr(H). Now,

pr(A ∩ H) = pr(A ∩ H|C1)pr(C1) + pr(A ∩ H|C2)pr(C2) = (1)(1/2) + (0)(1/2) = 1/2.

And,

pr(H) = pr(H|C1)pr(C1) + pr(H|C2)pr(C2) = (1)(1/2) + (1/2)(1/2) = 3/4.

Finally,

pr(A|H) = pr(A ∩ H)/pr(H) = (1/2)/(3/4) = 2/3.
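The 2/3 answer also falls out of a direct enumeration over (coin, face shown) pairs, which makes the "two head faces on the double-headed coin" intuition explicit. A minimal sketch (labels are illustrative):

```python
from fractions import Fraction as F

# Sample space: (coin, face shown). The two-headed coin "HH" can show either
# of its two head faces; the fair coin "HT" shows H or T. The four
# (coin, face) outcomes are equally likely.
outcomes = [("HH", "H1"), ("HH", "H2"), ("HT", "H"), ("HT", "T")]

shows_heads = [(c, f) for (c, f) in outcomes if f.startswith("H")]
other_side_heads = [(c, f) for (c, f) in shows_heads if c == "HH"]

# pr(other side is heads | heads is showing)
p = F(len(other_side_heads), len(shows_heads))
```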

Solution 2.24.

(a) For i = 1, 2, 3, let Ai be the event that this randomly chosen adult resident plays course #i. Then, if C is the event that this randomly chosen adult plays none of these three courses, we have

pr(C) = pr(Ā1 ∩ Ā2 ∩ Ā3) = 1 − pr(A1 ∪ A2 ∪ A3)
      = 1 − pr(A1) − pr(A2) − pr(A3) + pr(A1 ∩ A2) + pr(A1 ∩ A3) + pr(A2 ∩ A3) − pr(A1 ∩ A2 ∩ A3)
      = 1 − 0.18 − 0.15 − 0.12 + 0.09 + 0.06 + 0.05 − 0.02
      = 0.73.

(b) Now, let B be the event that this randomly chosen adult resident plays exactly one of these three courses, and let D be the event that this randomly chosen resident plays at least two of these three courses. Then, pr(B) = 1 − pr(C) − pr(D) = 1 − 0.73 − pr(D). Now,

pr(D) = pr[(A1 ∩ A2) ∪ (A1 ∩ A3) ∪ (A2 ∩ A3)]
      = pr(A1 ∩ A2) + pr(A1 ∩ A3) + pr(A2 ∩ A3) − 3pr(A1 ∩ A2 ∩ A3) + pr(A1 ∩ A2 ∩ A3)
      = 0.09 + 0.06 + 0.05 − 2(0.02) = 0.16.

Finally, pr(B) = 1 − 0.73 − 0.16 = 0.11.

(c) Let E be the event that this randomly chosen adult resident plays only #1 and #2. Then,

pr(E|C̄) = pr(E ∩ C̄)/pr(C̄) = pr(C̄|E)pr(E)/pr(C̄) = (1)pr(A1 ∩ A2 ∩ Ā3)/(1 − 0.73).

Now, since pr(A1 ∩ A2) = pr(A1 ∩ A2 ∩ A3) + pr(A1 ∩ A2 ∩ Ā3), it follows that pr(A1 ∩ A2 ∩ Ā3) = 0.09 − 0.02 = 0.07.

Finally, pr(E|C̄) = (1)(0.07)/0.27 = 0.26.

Solution 2.26. The probability that no two of these k dice show the same number (i.e., that all k numbers showing are different) is equal to

α_k = (6/6)(5/6)(4/6) ··· [(6 − k + 1)/6], 2 ≤ k ≤ 6.

And, the probability that no two of these k dice show the same number and that one of these k dice shows the number 6 is equal to

β_k = k(1/6)(5/6)(4/6) ··· [(6 − k + 1)/6], 2 ≤ k ≤ 6.

Thus,

θ_k = β_k/α_k = k/6, 2 ≤ k ≤ 6.
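The clean θ_k = k/6 answer invites an exhaustive check. The sketch below (function name illustrative) enumerates all 6^k equally likely ordered outcomes and conditions on "all faces distinct":

```python
from fractions import Fraction as F
from itertools import product

def theta(k):
    """pr(a 6 appears | all k dice show different numbers), by brute force."""
    distinct = [r for r in product(range(1, 7), repeat=k) if len(set(r)) == k]
    with_six = [r for r in distinct if 6 in r]
    return F(len(with_six), len(distinct))
```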

Solution 2.28.

(a) Since each of the k balls can end up in any one of the n urns, the total number of possible configurations of k balls and n urns is n^k, and each of these possible configurations has probability n^{−k} of occurring. And, among these n^k equally likely configurations, there are n(n − 1) ··· (n − k + 1) = n!/(n − k)! configurations for which no urn contains more than one ball. Hence,

θ(n, k) = n(n − 1) ··· (n − k + 1)/n^k = n!/[(n − k)!n^k], 1 ≤ k ≤ n.

(b) If we think of the 12 months as 12 urns and the 5 people as 5 balls, then

γ = 1 − θ(12, 5) = 1 − (12)!/[(12 − 5)!(12)^5] = 1 − 0.382 = 0.618.
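This birthday-style computation is a one-liner with `math.perm`, which computes exactly the n!/(n − k)! factor above. A minimal sketch:

```python
from math import perm

def gamma(n, k):
    """pr(at least two of k balls share an urn) when each of k balls falls
    independently and uniformly into one of n urns; perm(n, k) = n!/(n-k)!."""
    return 1 - perm(n, k) / n ** k
```

Note that `math.perm(n, k)` returns 0 when k > n, so `gamma` correctly returns 1 when a shared urn is forced by the pigeonhole principle.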

Solution 2.30*. For 1 ≤ i < j, the event A(i, j) of interest can be written as A(i, j) = ∩_{k=1}^{4} A_k, where A1 is the event that the first (i − 1) tosses do not produce either the number 1 or the number 2, where A2 is the event that the i-th toss produces the number 1, where A3 is the event that the next [(j − i) − 1] tosses do not produce the number 2, and event A4 is the event that the j-th toss produces the number 2. Since the events A1, A2, A3, and A4 are mutually independent, it follows that

pr[A(i, j)] = ∏_{k=1}^{4} pr(A_k) = (4/6)^{i−1} (1/6) (5/6)^{j−i−1} (1/6) = (1/20)(4/5)^i (5/6)^j.

By symmetry, it follows that

pr[A(j, i)] = (1/20)(4/5)^j (5/6)^i.

Thus, the probability of either of the two scenarios i < j and j < i can be written succinctly as

(1/20)(4/5)^{min{i,j}} (5/6)^{max{i,j}}.

Solution 2.32*. For i = 1, 2, ..., 6, let Ei be the event that the number i appears on exactly two of the three dice when the experiment is conducted. Then,

pr(A) = pr(∪_{i=1}^{6} Ei) = Σ_{i=1}^{6} pr(Ei) = 6[C^3_2 (1/6)^2 (5/6)] = 5/12 = 0.4167.

Let Cn be the event that event A occurs at least twice during n repetitions of the experiment. Then,

pr(Cn) = 1 − pr(C̄n)
       = 1 − pr(event A occurs at most once during n repetitions of the experiment)
       = 1 − [(0.5833)^n + n(0.5833)^{n−1}(0.4167)].

By trial and error, the smallest value of n, say n*, such that

pr(Cn) = 1 − [(0.5833)^n + n(0.5833)^{n−1}(0.4167)] ≥ 0.90

is equal to n* = 8.
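Both steps of this solution are easy to reproduce by machine: pr(A) = 5/12 by enumerating the 6^3 ordered outcomes, and n* = 8 by the same trial-and-error search the solution describes. A minimal sketch:

```python
from fractions import Fraction as F
from itertools import product

# pr(A): some number appears on exactly two of the three dice,
# verified by exhaustive enumeration of the 6^3 equally likely outcomes.
rolls = list(product(range(1, 7), repeat=3))
pA = F(sum(1 for r in rolls if max(r.count(v) for v in r) == 2), len(rolls))

# Smallest n with pr(A occurs at least twice in n repetitions) >= 0.90,
# i.e. the first n where pr(at most one occurrence) drops to 0.10 or below.
p = float(pA)
n = 2
while (1 - p) ** n + n * p * (1 - p) ** (n - 1) > 0.10:
    n += 1
```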

Solution 2.34*. First, note that θ2 = α given that the first repetition results in outcome A, and that θ2 = (1 − β) given that the first repetition results in outcome B. Now,

θ3 = αθ2 + (1 − β)(1 − θ2) = k0 + k1θ2,

where k0 = (1 − β) and k1 = (α + β − 1). Next,

θ4 = αθ3 + (1 − β)(1 − θ3) = k0 + k1θ3 = k0 + k1(k0 + k1θ2) = k0(1 + k1) + k1^2 θ2.

Using a similar strategy, we have

θ5 = αθ4 + (1 − β)(1 − θ4) = k0 + k1θ4 = k0 + k1[k0(1 + k1) + k1^2 θ2] = k0(1 + k1 + k1^2) + k1^3 θ2.

So, in general, for n = 3, 4, ..., we have

θn = k0 Σ_{j=0}^{n−3} k1^j + k1^{n−2} θ2 = k0(1 − k1^{n−2})/(1 − k1) + k1^{n−2} θ2,

where k0 = (1 − β), k1 = (α + β − 1), and where θ2 equals α if the first repetition of the experiment results in outcome A and equals (1 − β) if the first repetition of the experiment results in outcome B.

Finally, since 0 < k1 < 1, we have

lim_{n→∞} θn = k0/(1 − k1) = (1 − β)/(2 − α − β) = (1 − β)/[(1 − α) + (1 − β)].

Note that this limiting value is the same regardless of the outcome on the first repetition of the experiment.

Solution 2.36*.

(a) In words, the event that Player A is ruined with x dollars remaining is the union of two mutually exclusive events, namely, the event that Player A wins the next game and then is ruined with (x + 1) dollars remaining and the event that Player A loses the next game and then is ruined with (x − 1) dollars remaining. This leads to the desired difference equation

θx = πθ_{x+1} + (1 − π)θ_{x−1}, x = 1, 2, ..., (a + b − 1);

clearly, θ0 = 1 since Player A has no money, and θ_{a+b} = 0 since Player A has all of Player B's money.

(b) Using the difference equation given in part (a), with the proposed solution θx = α + β[(1 − π)/π]^x, we have

π{α + β[(1 − π)/π]^{x+1}} + (1 − π){α + β[(1 − π)/π]^{x−1}}
= α + β[(1 − π)^{x+1}/π^x + (1 − π)^x/π^{x−1}]
= α + β[(1 − π)/π]^x [(1 − π) + π]
= α + β[(1 − π)/π]^x = θx,

so the proposed solution satisfies the difference equation.

(c) Now,

θ0 = 1 = α + β[(1 − π)/π]^0 = α + β,

so that α = 1 − β. And,

θ_{a+b} = 0 = α + β[(1 − π)/π]^{a+b} = (1 − β) + β[(1 − π)/π]^{a+b},

so that

β = [1 − ((1 − π)/π)^{a+b}]^{−1}

and

α = 1 − [1 − ((1 − π)/π)^{a+b}]^{−1}.

Using these expressions for α and β, we obtain

θx = {1 − [1 − ((1 − π)/π)^{a+b}]^{−1}} + [1 − ((1 − π)/π)^{a+b}]^{−1} [(1 − π)/π]^x
   = [((1 − π)/π)^x − ((1 − π)/π)^{a+b}] / [1 − ((1 − π)/π)^{a+b}].

(d) Based on the expression for θx derived in part (c), it follows that the probability that Player A is ruined when Player A begins the competition with a dollars is

θa = [((1 − π)/π)^a − ((1 − π)/π)^{a+b}] / [1 − ((1 − π)/π)^{a+b}].

And, by symmetry,

pr(Player B is ruined) = [(π/(1 − π))^b − (π/(1 − π))^{a+b}] / [1 − (π/(1 − π))^{a+b}]
                       = [1 − ((1 − π)/π)^a] / [1 − ((1 − π)/π)^{a+b}]
                       = (1 − θa).

Since pr(Player A is ruined) + pr(Player B is ruined) = 1, it is certain that either Player A or Player B will eventually lose all of his or her money. When π = 1/2, one can use L'Hôpital's Rule to show that θa = b/(a + b) and (1 − θa) = a/(a + b).

(e) If π ≤ 0.50, so that the house has no worse than an even chance of winning each game, then lim_{b→∞} θa = 1, so that Player A will eventually lose all of his or her money if Player A continues to play. If π > 0.50, then lim_{b→∞} θa = [(1 − π)/π]^a. As a word of caution, π is always less than 0.50 for any casino game.

Solution 2.38*.

(a) Let A_xy be the event that a person matches winning pair (x, y). Then,

π_x0 = pr(A_x0) = [C^5_x C^{51}_{5−x} / C^{56}_5](45/46), x = 3, 4, 5;

and,

π_x1 = pr(A_x1) = [C^5_x C^{51}_{5−x} / C^{56}_5](1/46), x = 0, 1, 2, 3, 4, 5.

Then, it follows directly that π30 = 0.0033, π40 = 0.0001, π50 = 2.5610×10^{−7}, π01 = 0.0134, π11 = 0.0071, π21 = 0.0012, π31 = 0.0001, π41 = 1.4512×10^{−6}, and π51 = 5.6911×10^{−9}.

(b) Let θ be the overall probability of winning if a person plays this Mega Millions lottery game one time. Then,

θ = 1 − pr(A00) − pr(A10) − pr(A20) = 1 − 0.9749 = 0.0251.

(c) We want to choose the smallest positive integer value of n, say n*, that satisfies the inequality

1 − (1 − 0.0251)^n = 1 − (0.9749)^n ≥ 0.90,

or equivalently that n·ln(0.9749) ≤ ln(0.10). It then follows easily that n* = 91.
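The "follows directly" computations can be reproduced with exact binomial coefficients. The game structure below (match x of 5 winning numbers drawn from 56, plus one bonus number from 46) is read off from the formulas in the solution:

```python
from math import ceil, comb, log

def p_match(x, mega):
    """pr(match exactly x of the 5 winning numbers out of 56, and match
    (mega=True) or miss (mega=False) the single bonus number out of 46)."""
    white = comb(5, x) * comb(51, 5 - x) / comb(56, 5)
    return white * (1 if mega else 45) / 46

# Overall win probability: everything except matching 0, 1, or 2 of the
# five winning numbers with no bonus-number match.
theta = 1 - sum(p_match(x, False) for x in (0, 1, 2))

# Smallest n with pr(at least one win in n independent plays) >= 0.90.
n_star = ceil(log(0.10) / log(1 - theta))
```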

Solution 2.40*.

(a) There are two possible equally likely outcomes (namely, "evens" and "odds") for each game, and the total number of games played is equal to C^k_2 = k(k − 1)/2. So, there are 2^{k(k−1)/2} total possible outcomes for all C^k_2 games that are played. Of these 2^{k(k−1)/2} outcomes, there are k! outcomes that produce the outcome of interest. So, since all outcomes are equally likely to occur, the desired probability is equal to

θ_k = k!/2^{k(k−1)/2}.

Note that θ2 = 1, as expected, and that θ6 = 6!/2^{15} = 0.0220.

(b) Appealing to Stirling's approximation to k! for large k, we have

θ_k ≈ √(2πk)(k/e)^k / 2^{k(k−1)/2} ≈ √(2πk)(k/e)^k / 2^{k^2/2} ≈ √(2πk)[k/(e·2^{k/2})]^k,

which converges to the value 0 as k → ∞.

Solution 2.42*.

(a) Let A be the event that the gambler has a dollars to bet, let B be the event that the gambler accumulates b dollars, and let W be the event that the gambler wins the next play of the game. Then, we have

θa = pr(B|A) = pr(B ∩ W|A) + pr(B ∩ W̄|A)
   = pr(W|A)pr(B|W ∩ A) + pr(W̄|A)pr(B|W̄ ∩ A)
   = pr(W)pr(B|W ∩ A) + pr(W̄)pr(B|W̄ ∩ A)
   = πθ_{a+1} + (1 − π)θ_{a−1}, a = 1, 2, ..., (b − 1).

(b) Using direct substitution and simple algebra, it is straightforward to show that the stated solutions satisfy the difference equations given in part (a).

(c) For Scenario I, we have

θ100 = 100/10,000 = 0.01.

For Scenario II, we have

θ100 = [(0.52/0.48)^{100} − 1] / [(0.52/0.48)^{200} − 1] = 0.0003.

This result is clearly counterintuitive, since it is over 33 (≈ 0.01/0.0003) times more likely that the gambler will accumulate b dollars under Scenario I than under Scenario II.
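The two scenario answers come from one closed form. A minimal sketch of that formula (with the π = 1/2 limiting case a/b handled separately, as in Solution 2.36):

```python
def success_prob(p, a, b):
    """pr(the gambler reaches b dollars before 0), starting from a dollars,
    winning each unit play independently with probability p (the classical
    gambler's-ruin solution used in Solutions 2.36 and 2.42)."""
    if p == 0.5:
        return a / b
    r = (1 - p) / p
    return (r ** a - 1) / (r ** b - 1)
```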

Chapter 3

Univariate Distribution Theory

3.1 Solutions to Even-Numbered Exercises

Solution 3.2.

(a) For x < 0, FX(x) = 0; for 0 ≤ x < 1,

FX(x) = ∫_0^x (4/5)t^3 dt = x^4/5;

for 1 ≤ x < +∞,

FX(x) = 1/5 + ∫_1^x (4/5)e^{1−t} dt = 1 − (4/5)e^{1−x}.

In summary,

FX(x) = 0,                x < 0;
      = x^4/5,            0 ≤ x < 1;
      = 1 − (4/5)e^{1−x}, 1 ≤ x < +∞.

(b)

E(X) = ∫_0^1 x(4/5)x^3 dx + ∫_1^∞ x(4/5)e^{1−x} dx
     = (4/5)[x^5/5]_0^1 + (4e/5)∫_1^∞ xe^{−x} dx.

To evaluate the second integral above, one straightforward approach is to employ integration by parts with u = x, du = dx, dv = e^{−x}dx, and v = −e^{−x}. By doing so, we obtain E(X) = 4/25 + 8/5 = 44/25 = 1.760.

(c)

pr(1/2 < X < 2 | X ≥ 1/3) = pr[(1/2 < X < 2) ∩ (X ≥ 1/3)]/pr(X ≥ 1/3) = pr(1/2 < X < 2)/pr(X ≥ 1/3)
= [FX(2) − FX(1/2)] / [1 − FX(1/3)]
= {[1 − (4/5)e^{1−2}] − (1/2)^4/5} / {1 − (1/3)^4/5}
≈ 0.6949.
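Parts (b) and (c) can be double-checked numerically from the CDF in part (a). A minimal sketch, using crude midpoint quadrature for E(X) (the truncation point 60 is an assumption that the exponential tail is negligible there):

```python
from math import exp

def F(x):
    """CDF from Solution 3.2(a)."""
    if x < 0:
        return 0.0
    if x < 1:
        return x ** 4 / 5
    return 1 - 0.8 * exp(1 - x)

cond = (F(2) - F(0.5)) / (1 - F(1 / 3))    # part (c): about 0.6949

def f(x):
    """Density: (4/5)x^3 on [0,1), (4/5)e^(1-x) on [1, infinity)."""
    return 0.8 * x ** 3 if x < 1 else 0.8 * exp(1 - x)

dx = 1e-4
EX = sum(x * f(x) for x in ((i + 0.5) * dx for i in range(600_000))) * dx
```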

Solution 3.4. Clearly,

pr(X = 1) = 5/100, pr(X = 2) = (95/100)(5/99), pr(X = 3) = (95/100)(94/99)(5/98), etc.

So, in general, we have pX(1) = 5/100 and

pX(x) = [5/(100 − x + 1)] ∏_{j=1}^{x−1} [(95 − j + 1)/(100 − j + 1)], x = 2, 3, ..., 96.

Solution 3.6. Let D denote the number of defective machines in the lot, and let the random variable X denote the number of defective machines found among the two machines that are tested. Then, the condition "pr(the two machines are either both defective or both non-defective) = pr(one machine is defective and the other machine is non-defective)" implies that

pr[(X = 2) ∪ (X = 0)] = pr(X = 1)
⇒ pr(X = 2) + pr(X = 0) = pr(X = 1)
⇒ [C^D_2 C^{25−D}_0 + C^D_0 C^{25−D}_2] / C^{25}_2 = C^D_1 C^{25−D}_1 / C^{25}_2
⇒ D(D − 1) + (25 − D)(24 − D) = 2D(25 − D)
⇒ D^2 − 25D + 150 = 0 ⇒ (D − 10)(D − 15) = 0
⇒ D = 10 or D = 15.
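Since D can only take the values 0 through 25, the quadratic can also be bypassed entirely by scanning all lot compositions with exact binomial coefficients (the common denominator C^25_2 cancels):

```python
from math import comb

# Lot sizes D (defective out of 25) satisfying
# pr(X = 0) + pr(X = 2) = pr(X = 1) when 2 machines are sampled.
solutions = [D for D in range(26)
             if comb(D, 2) + comb(25 - D, 2) == comb(D, 1) * comb(25 - D, 1)]
```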

Solution 3.8.

(a) When n = 1, the possible values of X are −1 and +1, with respective probabilities (1 − θ) and θ. Note that these are the probabilities obtained when Y ∼ Binomial(n = 1, θ). Since Y = (1 + X)/2,

pX(x|n = 1) = C^1_{(1+x)/2} θ^{(1+x)/2} (1 − θ)^{(1−x)/2}, x = −1, +1.

When n = 3, the possible values of X are −3, −1, +1, and +3, with respective probabilities (1 − θ)^3, 3θ(1 − θ)^2, 3θ^2(1 − θ), and θ^3. So,

pX(x|n = 3) = C^3_{(3+x)/2} θ^{(3+x)/2} (1 − θ)^{(3−x)/2}, x = −3, −1, +1, +3.

Note that Y = (3 + X)/2, where Y ∼ Binomial(n = 3, θ).

(b) Based on the results in part (a), for n any odd positive integer,

pX(x) = C^n_{(n+x)/2} θ^{(n+x)/2} (1 − θ)^{(n−x)/2}, x = −n, −(n − 2), ..., −1, +1, ..., (n − 2), n.

Also, since Y = (n + X)/2, where Y ∼ Binomial(n, θ), it follows that X = (2Y − n), so that E(X) = 2E(Y) − n = 2(nθ) − n = n(2θ − 1) and V(X) = 4V(Y) = 4nθ(1 − θ).

Solution 3.10.

(a) FP(p) = pr(P ≤ p) = ∫_0^p (6t − 6t^2) dt = p^2(3 − 2p), 0 < p < 1. So,

pr(0.60 < P < 0.80) = FP(0.80) − FP(0.60) = (0.80)^2[3 − 2(0.80)] − (0.60)^2[3 − 2(0.60)] = 0.2480.

(b)

pr(0.70 < P < 0.80 | 0.60 < P < 0.80) = pr[(0.70 < P < 0.80) ∩ (0.60 < P < 0.80)] / pr(0.60 < P < 0.80)
= pr(0.70 < P < 0.80)/0.2480 = [FP(0.80) − FP(0.70)]/0.2480 = (0.8960 − 0.7840)/0.2480 = 0.4516.

(c) For k ≥ 0, E(P^k) = ∫_0^1 (p^k)(6p − 6p^2) dp = 6/[(k + 2)(k + 3)]. So, E(P) = 0.50 and E(P^2) = 0.30, giving V(P) = 0.30 − (0.50)^2 = 0.05.

Solution 3.12.

(a)

pr(D) = pr(D ∩ N) + pr(D ∩ M) + pr(D ∩ S) + pr(D ∩ L)
      = pr(D|N)pr(N) + pr(D|M)pr(M) + pr(D|S)pr(S) + pr(D|L)pr(L)
      = 0.01(0.50) + 0.02(0.25) + 0.05(0.20) + 0.003(0.05)
      = 0.0202.

(b)

pr(L̄ ∩ N̄|D̄) = pr(M ∪ S|D̄) = pr[(M ∪ S) ∩ D̄]/pr(D̄)
            = pr[(M ∩ D̄) ∪ (S ∩ D̄)]/[1 − pr(D)]
            = [pr(M ∩ D̄) + pr(S ∩ D̄)]/[1 − pr(D)]
            = [pr(D̄|M)pr(M) + pr(D̄|S)pr(S)]/[1 − pr(D)]
            = [(1 − 0.02)(0.25) + (1 − 0.05)(0.20)]/(1 − 0.0202)
            = (0.2450 + 0.1900)/0.9798 = 0.4440.

(c) Now, since

pr[D̄ ∩ (L̄ ∩ N̄)] = pr[D̄ ∩ (M ∪ S)] = 0.2450 + 0.1900 = 0.4350,

and since the assumptions of the binomial distribution are met, we have

π = Σ_{x=2}^{10} C^{10}_x (0.4350)^x (0.5650)^{10−x}
  = 1 − Σ_{x=0}^{1} C^{10}_x (0.4350)^x (0.5650)^{10−x}
  = 1 − (0.5650)^{10} − 10(0.4350)(0.5650)^9
  = 0.9712.

(d) Now, let A be the event that "there is at least one category M teenager and no category S teenagers among the first (x − 1) teenagers selected", let B be the event that "a category S teenager is the x-th teenager selected", let C be the event that "there is at least one category S teenager and no category M teenagers among the first (x − 1) teenagers selected", and let E be the event that "a category M teenager is the x-th teenager selected". Then, for x = 2, 3, ...,

pX(x) = pr(X = x) = pr(A ∩ B) + pr(C ∩ E) = pr(A)pr(B) + pr(C)pr(E)
= (0.20) Σ_{j=1}^{x−1} C^{x−1}_j (0.25)^j (0.55)^{(x−1)−j} + (0.25) Σ_{j=1}^{x−1} C^{x−1}_j (0.20)^j (0.55)^{(x−1)−j}
= 0.20[(0.80)^{x−1} − (0.55)^{x−1}] + 0.25[(0.75)^{x−1} − (0.55)^{x−1}]
= 0.20(0.80)^{x−1} + 0.25(0.75)^{x−1} − 0.45(0.55)^{x−1}, x = 2, 3, ....

Note that pX(x) is a valid discrete probability distribution since

Σ_{x=2}^{∞} pX(x) = Σ_{x=2}^{∞} [0.20(0.80)^{x−1} + 0.25(0.75)^{x−1} − 0.45(0.55)^{x−1}]
= 0.20[0.80/(1 − 0.80)] + 0.25[0.75/(1 − 0.75)] − 0.45[0.55/(1 − 0.55)]
= 0.80 + 0.75 − 0.55 = 1,

and since each term in this summation is non-negative.

Solution 3.14. As a mathematical aid, for a twice-differentiable function g(X): if d^2 g(X)/dX^2 > 0, then g(X) is a convex function of X; and, if d^2 g(X)/dX^2 < 0, then g(X) is a concave function of X. So, appealing to Jensen's Inequality, we have the following results:

(a) Since X^2 is a convex function of X, it follows that E(X^2) ≥ [E(X)]^2; note that this result also follows directly from the fact that V(X) = E(X^2) − [E(X)]^2 ≥ 0.

(b) Since e^X is a convex function of X, it follows that E(e^X) ≥ e^{E(X)}.

(c) Since ln(X) is a concave function of X, it follows that E[ln(X)] ≤ ln[E(X)].

(d) Since 1/X is a convex function of X for X > 0 and is a concave function of X for X < 0, it follows that E(1/X) ≥ 1/E(X) for X > 0 and that E(1/X) ≤ 1/E(X) for X < 0.

Solution 3.16.

(a) Now,

h(x, t + Δt) = pr[x rare events in the time interval (0, t + Δt)]
= pr{[x rare events in (0, t)] ∩ [no rare events in (t, t + Δt)]}
  + pr{[(x − 1) rare events in (0, t)] ∩ [exactly one rare event in (t, t + Δt)]}
= pr[x rare events in (0, t)]pr[no rare events in (t, t + Δt)]
  + pr[(x − 1) rare events in (0, t)]pr[exactly one rare event in (t, t + Δt)]
= h(x, t)[1 − θΔt] + h(x − 1, t)θΔt.

Thus,

[h(x, t + Δt) − h(x, t)]/Δt = θ[h(x − 1, t) − h(x, t)];

so, using the definition of a derivative, we have

d[h(x, t)]/dt = lim_{Δt→0} [h(x, t + Δt) − h(x, t)]/Δt = θ[h(x − 1, t) − h(x, t)].

(b) With

h(x, t) = (θt)^x e^{−θt}/x!,

we have

d[h(x, t)]/dt = θx(θt)^{x−1} e^{−θt}/x! − θ(θt)^x e^{−θt}/x!
             = θ[(θt)^{x−1} e^{−θt}/(x − 1)! − (θt)^x e^{−θt}/x!]
             = θ[h(x − 1, t) − h(x, t)].

Solution 3.18. We have

μ(r) = E[X(X − 1)(X − 2) ··· (X − r + 1)]
     = Σ_{x=0}^{n} [x(x − 1)(x − 2) ··· (x − r + 1)] C^n_x π^x (1 − π)^{n−x}
     = Σ_{x=r}^{n} [x!/(x − r)!][n!/(x!(n − x)!)] π^x (1 − π)^{n−x}
     = n! Σ_{x=r}^{n} π^x (1 − π)^{n−x} / [(x − r)!(n − x)!]
     = n! Σ_{y=0}^{n−r} π^{y+r} (1 − π)^{n−r−y} / [y!(n − r − y)!]
     = [n!π^r/(n − r)!] Σ_{y=0}^{n−r} C^{n−r}_y π^y (1 − π)^{n−r−y}
     = [n!π^r/(n − r)!][π + (1 − π)]^{n−r}
     = n!π^r/(n − r)!, r = 1, 2, ..., n,

with μ(r) = 0 for r > n. Now,

μ(1) = E(X) = n!π/(n − 1)! = nπ.

And,

μ(2) = E[X(X − 1)] = n!π^2/(n − 2)! = n(n − 1)π^2,

so that V(X) = E[X(X − 1)] + E(X) − [E(X)]^2 = n(n − 1)π^2 + nπ − (nπ)^2 = nπ(1 − π).

Using the notation lim* to denote the limit as n → +∞ and π → 0 subject to the restriction λ = nπ, we have

lim* μ(r) = lim* [n!π^r/(n − r)!] = lim* [n(n − 1)(n − 2) ··· (n − r + 1)π^r]
          = lim* [n(n − 1)(n − 2) ··· (n − r + 1)(λ/n)^r]
          = lim* [(1)(1 − 1/n)(1 − 2/n) ··· (1 − (r − 1)/n)] λ^r
          = λ^r.

This answer is as expected, since lim* [C^n_x π^x (1 − π)^{n−x}] = λ^x e^{−λ}/x!, and μ(r) = λ^r when X ∼ POI(λ).
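The closed form μ(r) = n!π^r/(n − r)! is easy to spot-check against a direct summation over the binomial pmf. A minimal sketch:

```python
from math import comb

def falling(x, r):
    """Falling factorial x(x-1)...(x-r+1)."""
    out = 1
    for k in range(r):
        out *= x - k
    return out

def mu(n, p, r):
    """r-th factorial moment of Binomial(n, p), by direct summation."""
    return sum(falling(x, r) * comb(n, x) * p ** x * (1 - p) ** (n - x)
               for x in range(n + 1))
```

For example, mu(10, 0.3, 2) should match n!π^2/(n − 2)! = (10·9)(0.3)^2 = 8.1, and mu(n, p, r) vanishes whenever r > n.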

Solution 3.20. First, note that

E{[X − E(X)][X^2 − E(X^2)]} = E(X^3) − E(X)E(X^2) = E(X^3) − μ(σ^2 + μ^2),

so that we need to evaluate E(X^3).

Since X ∼ N(μ, σ^2), we know that the moment generating function for X is equal to MX(t) = e^{μt + σ^2 t^2/2}. Then, expanding MX(t) in an infinite series gives

MX(t) = 1 + (μt + σ^2 t^2/2)/1! + (μt + σ^2 t^2/2)^2/2! + (μt + σ^2 t^2/2)^3/3! + ···
      = 1 + μt + μ^2 t^2/2 + σ^2 t^2/2 + μσ^2 t^3/2 + μ^3 t^3/6 + ···
      = 1 + (μ)t/1! + (σ^2 + μ^2)t^2/2! + (3μσ^2 + μ^3)t^3/3! + ···,

so that E(X^3) = (3μσ^2 + μ^3). Equivalently,

E(X^3) = [d^3 MX(t)/dt^3]_{t=0} = (3μσ^2 + μ^3).

Finally,

E{[X − E(X)][X^2 − E(X^2)]} = (3μσ^2 + μ^3) − μ(σ^2 + μ^2) = 2μσ^2.


Solution 3.22. First, let Ax be the event that the number x is obtained on the first roll of the pair of dice. Then, we have

pr(N = 1) = pr(A2) + pr(A3) + pr(A7) + pr(A11) + pr(A12)
          = (1 − π)/30 + 2(1 − π)/30 + π + 2(1 − π)/30 + (1 − π)/30
          = (1 + 4π)/5.

Noting that the two numbers 4 and 10 each have the same probability of occurring (as do the two numbers 5 and 9, and the two numbers 6 and 8), we have, for n ≥ 2,

pr(N = n) = 2pr(A4)pr(N = n|A4) + 2pr(A5)pr(N = n|A5) + 2pr(A6)pr(N = n|A6)
= 2[3(1 − π)/30]{1 − [3(1 − π)/30 + π]}^{n−2}[3(1 − π)/30 + π]
+ 2[4(1 − π)/30]{1 − [4(1 − π)/30 + π]}^{n−2}[4(1 − π)/30 + π]
+ 2[5(1 − π)/30]{1 − [5(1 − π)/30 + π]}^{n−2}[5(1 − π)/30 + π]
= [(1 − π)(1 + 9π)/50][9(1 − π)/10]^{n−2}
+ [4(1 − π)(2 + 13π)/225][13(1 − π)/15]^{n−2}
+ [(1 − π)(1 + 5π)/18][5(1 − π)/6]^{n−2}.

Now, for 0 < θ < 1, note that

Σ_{n=2}^{∞} nθ^{n−2} = θ^{−1} Σ_{n=2}^{∞} nθ^{n−1} = θ^{−1} Σ_{n=2}^{∞} d(θ^n)/dθ = θ^{−1} (d/dθ)[Σ_{n=2}^{∞} θ^n] = θ^{−1} (d/dθ)[θ^2/(1 − θ)] = (2 − θ)/(1 − θ)^2.

So, using this general result, it follows directly that

E(N) = Σ_{n=1}^{∞} n pr(N = n)
= (1)(1 + 4π)/5
+ [(1 − π)(1 + 9π)/50]{[2 − 9(1 − π)/10]/[1 − 9(1 − π)/10]^2}
+ [4(1 − π)(2 + 13π)/225]{[2 − 13(1 − π)/15]/[1 − 13(1 − π)/15]^2}
+ [(1 − π)(1 + 5π)/18]{[2 − 5(1 − π)/6]/[1 − 5(1 − π)/6]^2}.

When π = 1/6, it follows that E(N) = 3.376.

Solution 3.24*.

(a) We have

M_U(t) = E(e^{tU}) = E{e^{t(Y − nπ)/√(nπ(1−π))}}
= e^{−tnπ/√(nπ(1−π))} E{e^{tY/√(nπ(1−π))}}
= e^{−tnπ/√(nπ(1−π))} [(1 − π) + πe^{t/√(nπ(1−π))}]^n
= [(1 − π)e^{−tπ/√(nπ(1−π))} + πe^{t(1−π)/√(nπ(1−π))}]^n
= {(1 − π) Σ_{j=0}^{∞} [−tπ/√(nπ(1−π))]^j/j! + π Σ_{j=0}^{∞} [t(1 − π)/√(nπ(1−π))]^j/j!}^n
= [1 + t^2/(2n) + (1 − π)R_{1n} + πR_{2n}]^n,

where

R_{1n} = Σ_{j=3}^{∞} [−tπ/√(nπ(1−π))]^j/j! and R_{2n} = Σ_{j=3}^{∞} [t(1 − π)/√(nπ(1−π))]^j/j!.

So,

ln[M_U(t)] = n ln[1 + t^2/(2n) + (1 − π)R_{1n} + πR_{2n}].

Now, with u(n) = t^2/2 + n(1 − π)R_{1n} + nπR_{2n}, we have

ln[M_U(t)] = u(n){ln[1 + u(n)/n]/[u(n)/n]},

where

u(n) = t^2/2 + (1 − π) Σ_{j=3}^{∞} [−t√(π/(1 − π))]^j n^{1−j/2}/j! + π Σ_{j=3}^{∞} [t√((1 − π)/π)]^j n^{1−j/2}/j!.

Now, since lim_{n→∞} u(n) = t^2/2, we have lim_{n→∞} u(n)/n = 0. So, with v(n) = u(n)/n, the quantity lim_{n→∞} ln[1 + v(n)]/v(n) is of the indeterminate form 0/0. So, employing L'Hôpital's Rule, we obtain

lim_{n→∞} ln[1 + v(n)]/v(n) = lim_{n→∞} {[1 + v(n)]^{−1}[dv(n)/dn]}/[dv(n)/dn] = 1.

Thus, lim_{n→∞} ln[M_U(t)] = t^2/2, which completes the proof.

(b) Since nπ = 150 and nπ(1 − π) = 105, we have

pr(148 ≤ Y ≤ 159) = pr[(148 − 150)/√105 ≤ (Y − 150)/√105 ≤ (159 − 150)/√105]
                  ≈ pr(−0.195 ≤ Z ≤ 0.878) = 0.39,

since Z = (Y − 150)/√105 is approximately N(0, 1) for large n.

Solution 3.26*.

(a) Now,

E(X) = Σ_{x=0}^{∞} x pX(x) = Σ_{x=0}^{∞} x pr(X = x)
     = Σ_{x=1}^{∞} x[pr(X > x − 1) − pr(X > x)]
     = Σ_{x=1}^{∞} x pr(X > x − 1) − Σ_{x=1}^{∞} x pr(X > x)
     = Σ_{u=0}^{∞} (u + 1) pr(X > u) − Σ_{x=0}^{∞} x pr(X > x)
     = Σ_{u=0}^{∞} u pr(X > u) + Σ_{u=0}^{∞} pr(X > u) − Σ_{x=0}^{∞} x pr(X > x)
     = Σ_{u=0}^{∞} pr(X > u) = Σ_{u=0}^{∞} [1 − FX(u)].

(b) We have

E(X) = =

∞ X

u=0 " ∞ X

u=0

= = =

[1 − FX (u)] = ∞ X

(1 − π) ∞ X

u=0

∞ X

u=0 ∞ X

u=0

π u+1 =

"

∞ X

pr(X > u)

u=0

(1 − π)π

x=u+1

(1 − π)

∞ X x

π

x=u+1

π u+1 (1 − π) π . (1 − π)

#

x

#
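The tail-sum identity E(X) = ∑_{u≥0}[1 − F_X(u)] can be checked numerically for the geometric pmf used in part (b); the value π = 0.3 below is an illustrative choice, not from the text, and the sums are truncated at a point where the tail is negligible:

```python
pi = 0.3                                    # illustrative value, 0 < pi < 1
pmf = lambda x: (1 - pi) * pi ** x          # pr(X = x), x = 0, 1, 2, ...
N = 200                                     # truncation point; remaining tail is ~pi^N

mean_direct = sum(x * pmf(x) for x in range(N))
mean_tail = sum(sum(pmf(x) for x in range(u + 1, N)) for u in range(N))
print(mean_direct, mean_tail, pi / (1 - pi))  # all ≈ 0.42857
```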

Solution 3.28∗. Now,
\[
\nu_1 = \mathrm{E}\left(|X-\mathrm{E}(X)|\right) = \sum_{x=0}^{\infty}|x-\lambda|\,\frac{\lambda^{x}e^{-\lambda}}{x!}
= \sum_{x=0}^{[\lambda]}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!} + \sum_{x=[\lambda]+1}^{\infty}(x-\lambda)\frac{\lambda^{x}e^{-\lambda}}{x!}
\]
\[
= \sum_{x=0}^{[\lambda]}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!} - \sum_{x=[\lambda]+1}^{\infty}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!}
= 2\sum_{x=0}^{[\lambda]}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!},
\]
since
\[
0 = \sum_{x=0}^{\infty}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!}
= \sum_{x=0}^{[\lambda]}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!} + \sum_{x=[\lambda]+1}^{\infty}(\lambda-x)\frac{\lambda^{x}e^{-\lambda}}{x!}.
\]
Thus, we have
\[
\nu_1 = 2e^{-\lambda}\sum_{x=0}^{[\lambda]}\left[\frac{\lambda^{x+1}}{x!} - \frac{x\lambda^{x}}{x!}\right]
= 2e^{-\lambda}\left[\sum_{x=0}^{[\lambda]}\frac{\lambda^{x+1}}{x!} - \sum_{x=1}^{[\lambda]}\frac{\lambda^{x}}{(x-1)!}\right]
\]
\[
= 2e^{-\lambda}\left[\sum_{x=0}^{[\lambda]-1}\frac{\lambda^{x+1}}{x!} + \frac{\lambda^{[\lambda]+1}}{[\lambda]!} - \sum_{u=0}^{[\lambda]-1}\frac{\lambda^{u+1}}{u!}\right]
= \frac{\lambda^{[\lambda]}e^{-\lambda}}{[\lambda]!}(2\lambda) = (2\lambda)\,\mathrm{pr}\left(X=[\lambda]\right).
\]

Solution 3.30∗. First,
\[
\mathrm{E}(|Y-c|) = \int_{-\infty}^{\infty}|y-c|\,\frac{1}{\sqrt{2\pi}\sigma}e^{-(y-\mu)^{2}/2\sigma^{2}}dy
= \int_{-\infty}^{c}(c-y)\frac{1}{\sqrt{2\pi}\sigma}e^{-(y-\mu)^{2}/2\sigma^{2}}dy + \int_{c}^{\infty}(y-c)\frac{1}{\sqrt{2\pi}\sigma}e^{-(y-\mu)^{2}/2\sigma^{2}}dy.
\]
Now, let z = (y − µ)/σ, so that y = (µ + σz), dy = σdz, and β = (c − µ)/σ. Then,
\[
\mathrm{E}(|Y-c|) = c\Phi(\beta) - \mu\Phi(\beta) - \sigma\int_{-\infty}^{\beta}z\frac{1}{\sqrt{2\pi}}e^{-z^{2}/2}dz
+ \mu[1-\Phi(\beta)] + \sigma\int_{\beta}^{\infty}z\frac{1}{\sqrt{2\pi}}e^{-z^{2}/2}dz - c[1-\Phi(\beta)]
\]
\[
= 2(c-\mu)\Phi(\beta) - (c-\mu) + \frac{\sigma}{\sqrt{2\pi}}\left[-e^{-z^{2}/2}\right]_{\beta}^{\infty} - \frac{\sigma}{\sqrt{2\pi}}\left[-e^{-z^{2}/2}\right]_{-\infty}^{\beta}
= 2\sigma\beta\Phi(\beta) - \sigma\beta + \sigma\phi(\beta) + \sigma\phi(\beta)
= 2\sigma\left[\phi(\beta)+\beta\Phi(\beta)\right] - \sigma\beta.
\]
Finally, the equation
\[
\frac{d\mathrm{E}(|Y-c|)}{d\beta} = 2\sigma\left[\frac{1}{\sqrt{2\pi}}e^{-\beta^{2}/2}(-\beta) + \Phi(\beta) + \beta\frac{1}{\sqrt{2\pi}}e^{-\beta^{2}/2}\right] - \sigma = 2\sigma\Phi(\beta) - \sigma = 0
\]
gives Φ(β) = 1/2, or β = (c − µ)/σ = 0, or c = µ. Since d²E(|Y − µ|)/dβ² > 0 when β = 0, E(|Y − c|) is minimized when c = µ.
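The closed form E(|Y − c|) = 2σ[φ(β) + βΦ(β)] − σβ derived above can be exercised numerically to confirm the minimum at c = µ; the values µ = 1 and σ = 2 below are illustrative choices, not from the text:

```python
from math import erf, exp, pi, sqrt

def mean_abs_dev(c, mu=1.0, sigma=2.0):
    """E|Y - c| = 2*sigma*(phi(beta) + beta*Phi(beta)) - sigma*beta,
    with beta = (c - mu)/sigma, for Y ~ N(mu, sigma^2)."""
    beta = (c - mu) / sigma
    phi = exp(-beta ** 2 / 2) / sqrt(2 * pi)
    Phi = 0.5 * (1 + erf(beta / sqrt(2)))
    return 2 * sigma * (phi + beta * Phi) - sigma * beta

vals = {c: mean_abs_dev(c) for c in (0.0, 0.5, 1.0, 1.5, 2.0)}
print(min(vals, key=vals.get))  # 1.0, i.e., c = mu
```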

Solution 3.32∗.

(a) Let L be the event that a randomly chosen insured driver belongs to Class L, and let the events M and H be defined analogously. Now, for any one-year period, pr(≥ 2 accidents) = 1 − pr(< 2 accidents). So,
\[
\mathrm{pr}(<2\text{ accidents}) = \mathrm{pr}(<2\text{ accidents}|L)\mathrm{pr}(L) + \mathrm{pr}(<2\text{ accidents}|M)\mathrm{pr}(M) + \mathrm{pr}(<2\text{ accidents}|H)\mathrm{pr}(H)
\]
\[
= \left[e^{-0.02} + (0.02)e^{-0.02}\right](0.30) + \left[e^{-0.10} + (0.10)e^{-0.10}\right](0.50) + \left[e^{-0.20} + (0.20)e^{-0.20}\right](0.20) = 0.9940,
\]
so that pr(≥ 2 accidents) = 1 − 0.9940 = 0.0060.

(b) Let A be the event that there are no automobile accidents during a particular 12-month period for two randomly chosen insured drivers. Then, the quantity of interest is
\[
\mathrm{pr}(L\cap M|A) = \frac{\mathrm{pr}(L\cap M)\,\mathrm{pr}(A|L\cap M)}{\mathrm{pr}(A)}.
\]
Now, pr(L ∩ M) = 2(0.30)(0.50) = 0.30, and pr(A|L ∩ M) = (e^{−0.02})(e^{−0.10}) = 0.8869. And, for any randomly chosen insured driver,
\[
\mathrm{pr}(0\text{ accidents}) = e^{-0.02}(0.30) + e^{-0.10}(0.50) + e^{-0.20}(0.20) = 0.9102,
\]
so that pr(A) = (0.9102)² = 0.8285. Finally, pr(L ∩ M|A) = (0.30)(0.8869)/(0.8285) = 0.3211.

(c) We have
\[
F_{W_1}(w_1) = \mathrm{pr}(W_1\le w_1) = 1 - \mathrm{pr}(W_1>w_1) = 1 - \mathrm{pr}\left[\text{no accidents in the time interval }(0,w_1)\right] = 1 - e^{-0.20w_1},
\]
where 0.20w₁ is the mean of a Poisson distribution modelling the number of automobile accidents for a randomly chosen insured driver over a time period of w₁ years. Thus,
\[
f_{W_1}(w_1) = \frac{dF_{W_1}(w_1)}{dw_1} = 0.20e^{-0.20w_1},\ 0<w_1<\infty,
\]
so that W₁ has a negative exponential distribution with E(W₁) = 1/0.20 = 5.00 years. We find w₁* as that value such that
\[
\int_{w_1^*}^{\infty}f_{W_1}(w_1)dw_1 = 1 - F_{W_1}(w_1^*) = e^{-0.20w_1^*} \le \frac{1}{2},
\]
which gives w₁* = ln(2)/0.20 = 3.47 years.

Solution 3.34∗. With y = ln x, so that dy = x⁻¹dx, we have
\[
\int_0^{M}f_X(x)dx = k\int_0^{M}\frac{1}{\sqrt{2\pi}\sigma x}e^{-(\ln x-\mu)^{2}/2\sigma^{2}}dx
= k\int_{-\infty}^{\ln M}\frac{1}{\sqrt{2\pi}\sigma}e^{-(y-\mu)^{2}/2\sigma^{2}}dy
= k\,F_Z\!\left(\frac{\ln M-\mu}{\sigma}\right),\quad Z=\frac{Y-\mu}{\sigma}\sim \mathrm{N}(0,1),
\]

so that k = [F_Z((ln M − µ)/σ)]⁻¹. So,
\[
\mathrm{E}(X) = k\int_0^{M}x\,\frac{1}{\sqrt{2\pi}\sigma x}e^{-(\ln x-\mu)^{2}/2\sigma^{2}}dx
= k\int_{-\infty}^{\ln M}e^{y}\frac{1}{\sqrt{2\pi}\sigma}e^{-(y-\mu)^{2}/2\sigma^{2}}dy
= k\int_{-\infty}^{\ln M}\frac{1}{\sqrt{2\pi}\sigma}e^{-[-2\sigma^{2}y+(y-\mu)^{2}]/2\sigma^{2}}dy
\]
\[
= k\int_{-\infty}^{\ln M}\frac{1}{\sqrt{2\pi}\sigma}e^{-\{[y-(\mu+\sigma^{2})]^{2}-2\mu\sigma^{2}-\sigma^{4}\}/2\sigma^{2}}dy
= ke^{\mu+\frac{\sigma^{2}}{2}}\,F_Z\!\left(\frac{\ln M-\mu-\sigma^{2}}{\sigma}\right)
= \frac{F_Z\!\left(\frac{\ln M-\mu-\sigma^{2}}{\sigma}\right)}{F_Z\!\left(\frac{\ln M-\mu}{\sigma}\right)}\,e^{\mu+\frac{\sigma^{2}}{2}}.
\]
When M = 25, µ = 1.90, and σ² = 0.34, it follows that
\[
\mathrm{E}(X) = \frac{F_Z\!\left(\frac{\ln 25-1.90-0.34}{\sqrt{0.34}}\right)}{F_Z\!\left(\frac{\ln 25-1.90}{\sqrt{0.34}}\right)}\,e^{1.90+\frac{0.34}{2}}
= \frac{F_Z(1.6787)}{F_Z(2.2618)}\,(7.9248) = \frac{(0.953)}{(0.988)}\,(7.9248) = 7.65.
\]
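The closed form for E(X) can be cross-checked by direct numerical integration of x·f_X(x) over (0, M); the midpoint rule below is a minimal sketch (note e^{1.90+0.17} = e^{2.07} ≈ 7.9248):

```python
from math import erf, exp, log, pi, sqrt

mu, sig2, M = 1.90, 0.34, 25.0
sig = sqrt(sig2)
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))

# Closed form: E(X) = F_Z((lnM - mu - sig^2)/sig) / F_Z((lnM - mu)/sig) * exp(mu + sig^2/2)
closed = Phi((log(M) - mu - sig2) / sig) / Phi((log(M) - mu) / sig) * exp(mu + sig2 / 2)

# Direct numerical integration of x * f_X(x) over (0, M), with
# f_X(x) = k * (2*pi*sig^2)^(-1/2) * x^(-1) * exp(-(ln x - mu)^2 / (2*sig^2)).
k = 1.0 / Phi((log(M) - mu) / sig)
f = lambda x: k * exp(-(log(x) - mu) ** 2 / (2 * sig2)) / (sqrt(2 * pi) * sig * x)
n, h = 200000, M / 200000
numeric = sum(x * f(x) * h for x in (h * (i + 0.5) for i in range(n)))  # midpoint rule

print(round(closed, 2), round(numeric, 2))  # both ≈ 7.65
```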

Solution 3.36∗.

(a) For 0 < x ≤ β, we have
\[
F_X(x) = \mathrm{pr}(X\le x) = \int_0^{x}\alpha\left(\frac{t}{\beta}\right)^{\alpha-1}dt = \beta\left(\frac{x}{\beta}\right)^{\alpha};
\]
and, for β ≤ x < 1, we have
\[
F_X(x) = \beta + \int_{\beta}^{x}\alpha\left(\frac{1-t}{1-\beta}\right)^{\alpha-1}dt
= \beta + \left[-(1-\beta)\left(\frac{1-t}{1-\beta}\right)^{\alpha}\right]_{\beta}^{x}
= 1 - (1-\beta)\left(\frac{1-x}{1-\beta}\right)^{\alpha}.
\]

(b) From part (a), since F_X(β) = β, it follows that ξ = 1/2 when β = 1/2. Now, for 1/2 ≤ β < 1, the median ξ satisfies the equation
\[
\int_0^{\xi}\alpha\left(\frac{x}{\beta}\right)^{\alpha-1}dx = \beta\left(\frac{\xi}{\beta}\right)^{\alpha} = \frac{1}{2},
\]
so that
\[
\xi = \frac{\beta^{\left(\frac{\alpha-1}{\alpha}\right)}}{2^{1/\alpha}}.
\]
For 0 < β ≤ 1/2, the median ξ satisfies the equation
\[
\beta + \int_{\beta}^{\xi}\alpha\left(\frac{1-x}{1-\beta}\right)^{\alpha-1}dx = \frac{1}{2},
\]
which gives
\[
\xi = 1 - \frac{(1-\beta)^{\left(\frac{\alpha-1}{\alpha}\right)}}{2^{1/\alpha}}.
\]

(c) For r a non-negative integer, we have
\[
\mathrm{E}(X^{r}) = \int_0^{\beta}x^{r}\alpha\left(\frac{x}{\beta}\right)^{\alpha-1}dx + \int_{\beta}^{1}x^{r}\alpha\left(\frac{1-x}{1-\beta}\right)^{\alpha-1}dx
= \frac{\alpha}{\beta^{\alpha-1}}\int_0^{\beta}x^{\alpha+r-1}dx + \alpha\int_0^{1}\left[1-u(1-\beta)\right]^{r}u^{\alpha-1}(1-\beta)\,du
\]
\[
= \frac{\alpha\beta^{r+1}}{(\alpha+r)} + \alpha(1-\beta)\int_0^{1}\left\{\sum_{j=0}^{r}C_j^{r}(1)^{j}\left[-u(1-\beta)\right]^{r-j}\right\}u^{\alpha-1}du
= \frac{\alpha\beta^{r+1}}{(\alpha+r)} + \alpha(1-\beta)\sum_{j=0}^{r}C_j^{r}(\beta-1)^{r-j}\int_0^{1}u^{\alpha+r-j-1}du
\]
\[
= \frac{\alpha\beta^{r+1}}{(\alpha+r)} + \alpha(1-\beta)\sum_{j=0}^{r}C_j^{r}\frac{(\beta-1)^{r-j}}{(\alpha+r-j)}.
\]
Now, for r = 1, we obtain
\[
\mathrm{E}(X) = \frac{\alpha\beta^{2}}{(\alpha+1)} + \alpha(1-\beta)\left[\frac{(\beta-1)}{(\alpha+1)}+\frac{1}{\alpha}\right] = \frac{(\alpha-1)\beta+1}{(\alpha+1)}.
\]
And, since
\[
\mathrm{E}(X^{2}) = \frac{\alpha\beta^{3}}{(\alpha+2)} + \alpha(1-\beta)\left[\frac{(\beta-1)^{2}}{(\alpha+2)}+\frac{2(\beta-1)}{(\alpha+1)}+\frac{1}{\alpha}\right],
\]
it follows with some algebra that
\[
\mathrm{V}(X) = \mathrm{E}(X^{2})-[\mathrm{E}(X)]^{2} = \frac{\alpha - 2(\alpha-1)\beta(1-\beta)}{(\alpha+1)^{2}(\alpha+2)}.
\]
Now, when α = 1, we obtain E(X) = 1/2 and V(X) = 1/12. These answers are correct since f_X(x) = 1, 0 < x < 1, when α = 1.
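The closed forms for E(X) and V(X) can be checked against direct numerical integration of the two-piece density; α = 2 and β = 0.3 below are illustrative values, not from the text:

```python
alpha, beta = 2.0, 0.3  # illustrative parameter values

def f(x):
    # Two-piece density: alpha*(x/beta)^(alpha-1) on (0, beta],
    #                    alpha*((1-x)/(1-beta))^(alpha-1) on [beta, 1)
    return alpha * (x / beta) ** (alpha - 1) if x <= beta \
        else alpha * ((1 - x) / (1 - beta)) ** (alpha - 1)

n, h = 200000, 1.0 / 200000
m1 = sum((x := h * (i + 0.5)) * f(x) * h for i in range(n))       # numeric E(X)
m2 = sum((x := h * (i + 0.5)) ** 2 * f(x) * h for i in range(n))  # numeric E(X^2)

EX = ((alpha - 1) * beta + 1) / (alpha + 1)
VX = (alpha - 2 * (alpha - 1) * beta * (1 - beta)) / ((alpha + 1) ** 2 * (alpha + 2))
print(round(m1, 4), round(EX, 4))            # ≈ 0.4333 0.4333
print(round(m2 - m1 ** 2, 4), round(VX, 4))  # ≈ 0.0439 0.0439
```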

Chapter 4

Multivariate Distribution Theory

4.1 Solutions to Even-Numbered Exercises

Solution 4.2. First, V(Y) = β₁²V(X) + β₂²V(X²) + 2β₁β₂cov(X, X²) = β₁²V(X) + β₂²V(X²), since cov(X, X²) = E(X³) − E(X)E(X²) = 0 − (0)E(X²) = 0 because X has a distribution symmetric about zero. And,
\[
\mathrm{cov}(X,Y) = \mathrm{cov}(X,\beta_0+\beta_1X+\beta_2X^{2}) = \beta_1\mathrm{V}(X)+\beta_2\mathrm{cov}(X,X^{2}) = \beta_1\mathrm{V}(X).
\]
Thus, we have
\[
\mathrm{corr}(X,Y) = \frac{\mathrm{cov}(X,Y)}{\sqrt{\mathrm{V}(X)\mathrm{V}(Y)}}
= \frac{\beta_1\mathrm{V}(X)}{\sqrt{\mathrm{V}(X)\left[\beta_1^{2}\mathrm{V}(X)+\beta_2^{2}\mathrm{V}(X^{2})\right]}}
= \beta_1\sqrt{\frac{\mathrm{V}(X)}{\beta_1^{2}\mathrm{V}(X)+\beta_2^{2}\mathrm{V}(X^{2})}}.
\]
When β₁ = 0, then corr(X, Y) = 0, a value which reflects the fact that Y = β₀ + β₂X² is a purely non-linear (i.e., purely quadratic) function of X with no linear component. When β₂ = 0, then corr(X, Y) = ±1, values which reflect the fact that Y = β₀ + β₁X is a perfect linear (i.e., straight-line) function of X.

Solution 4.4.
\[
\mathrm{V}(XY) = \mathrm{E}[(XY)^{2}] - [\mathrm{E}(XY)]^{2} = \mathrm{E}(X^{2})\mathrm{E}(Y^{2}) - [\mathrm{E}(X)\mathrm{E}(Y)]^{2}
= \{\mathrm{V}(X)+[\mathrm{E}(X)]^{2}\}\{\mathrm{V}(Y)+[\mathrm{E}(Y)]^{2}\} - [\mathrm{E}(X)]^{2}[\mathrm{E}(Y)]^{2}
\]
\[
= \mathrm{V}(X)\mathrm{V}(Y) + \mathrm{V}(X)[\mathrm{E}(Y)]^{2} + \mathrm{V}(Y)[\mathrm{E}(X)]^{2}.
\]
Clearly, V(XY) > V(X)V(Y).

Solution 4.6. Using the method of transformations, we have:
\[
P = \frac{X}{Y},\ R = Y \;\Rightarrow\; X = PR,\ Y = R
\;\Rightarrow\; J = \begin{vmatrix}\frac{\partial X}{\partial P} & \frac{\partial X}{\partial R}\\[2pt] \frac{\partial Y}{\partial P} & \frac{\partial Y}{\partial R}\end{vmatrix}
= \begin{vmatrix} R & P\\ 0 & 1\end{vmatrix} = R.
\]
Clearly, 0 < R < +∞ and 0 < P < 1. So,
\[
f_{P,R}(p,r;\theta) = 2\theta^{-2}e^{-\frac{1}{\theta}(pr+r)}\cdot r = 2\theta^{-2}re^{-r/\left(\frac{\theta}{1+p}\right)},\ 0<r<+\infty,\ 0<p<1.
\]
So,
\[
f_P(p) = \int_0^{\infty}2\theta^{-2}re^{-r/\left(\frac{\theta}{1+p}\right)}dr = 2\theta^{-2}\left(\frac{\theta}{1+p}\right)^{2}\Gamma(2) = 2(1+p)^{-2},\ 0<p<1.
\]

Another approach involves finding F_P(p), and then using the relationship f_P(p) = dF_P(p)/dp. In particular,
\[
F_P(p) = \mathrm{pr}(P\le p) = \mathrm{pr}\left(\frac{X}{Y}\le p\right) = \mathrm{pr}(X\le pY) = \mathrm{pr}\left(Y\ge\frac{X}{p}\right)
= \int_0^{\infty}\!\!\int_{x/p}^{\infty}2\theta^{-2}e^{-(x+y)/\theta}dydx = \frac{2p}{(1+p)},\ 0<p<1,
\]
which gives f_P(p) = 2(1 + p)⁻², 0 < p < 1.

Solution 4.8. Let T₁ be the random variable denoting the total number of accidents occurring during the first n/2 years, and let T₂ be the random variable denoting the total number of accidents occurring during the last n/2 years. Then, T₁ ∼ Poisson(∑_{i=1}^{n/2}λᵢ), T₂ ∼ Poisson(∑_{i=n/2+1}^{n}λᵢ), and T₁ and T₂ are independent random variables. So,
\[
\mathrm{pr}(T_1>T_2) = \mathrm{pr}\left\{\bigcup_{t_1=1}^{\infty}\bigcup_{t_2=0}^{t_1-1}\left[(T_1=t_1)\cap(T_2=t_2)\right]\right\}
= \sum_{t_1=1}^{\infty}\sum_{t_2=0}^{t_1-1}\left[\mathrm{pr}(T_1=t_1)\,\mathrm{pr}(T_2=t_2)\right]
\]
\[
= \sum_{t_1=1}^{\infty}\sum_{t_2=0}^{t_1-1}\left[\frac{\left(\sum_{i=1}^{n/2}\lambda_i\right)^{t_1}e^{-\sum_{i=1}^{n/2}\lambda_i}}{t_1!}\right]
\left[\frac{\left(\sum_{i=\frac{n}{2}+1}^{n}\lambda_i\right)^{t_2}e^{-\sum_{i=\frac{n}{2}+1}^{n}\lambda_i}}{t_2!}\right].
\]

Solution 4.10.

(a) If U = (X − Y) and V = (X + Y), then X = (U + V)/2 and Y = (V − U)/2. So,
\[
J = \begin{vmatrix}\frac{\partial X}{\partial U} & \frac{\partial X}{\partial V}\\[2pt] \frac{\partial Y}{\partial U} & \frac{\partial Y}{\partial V}\end{vmatrix}
= \begin{vmatrix} 1/2 & 1/2\\ -1/2 & 1/2\end{vmatrix} = \frac{1}{2} = |J|.
\]
Since
\[
f_{X,Y}(x,y;\alpha,\beta) = (\alpha\beta)^{-1}e^{-\left(\frac{x}{\alpha}+\frac{y}{\beta}\right)},\ x>0,\ y>0,
\]
\[
f_{U,V}(u,v;\alpha,\beta) = (2\alpha\beta)^{-1}e^{-\left[\frac{(u+v)}{2\alpha}+\frac{(v-u)}{2\beta}\right]},\ 0<v<+\infty,\ -v<u<+v.
\]
(Note that Y = 0 ⇒ U = V, and X = 0 ⇒ U = −V.)

(b) So, for −∞ < u < 0,
\[
f_U(u;\alpha,\beta) = \frac{1}{2\alpha\beta}\int_{-u}^{\infty}e^{-\left[\frac{(u+v)}{2\alpha}+\frac{(v-u)}{2\beta}\right]}dv = (\alpha+\beta)^{-1}e^{u/\beta}.
\]
Similarly, for 0 < u < +∞, f_U(u; α, β) = (α + β)⁻¹e^{−u/α}. So,
\[
f_U(u;\alpha,\beta) = \begin{cases}(\alpha+\beta)^{-1}e^{u/\beta}, & -\infty<u<0;\\ (\alpha+\beta)^{-1}e^{-u/\alpha}, & 0<u<+\infty.\end{cases}
\]
Also,
\[
\mathrm{E}(U) = \mathrm{E}(X)-\mathrm{E}(Y) = (\alpha-\beta),\qquad \mathrm{V}(U) = \mathrm{V}(X)+\mathrm{V}(Y) = (\alpha^{2}+\beta^{2}).
\]
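The piecewise density for U can be sanity-checked numerically for total mass and mean; α = 2 and β = 0.5 below are illustrative choices, not from the text:

```python
from math import exp

alpha, beta = 2.0, 0.5  # illustrative mean parameters for X and Y

def f_U(u):
    # Density of U = X - Y for independent exponentials with means alpha, beta:
    # (alpha+beta)^(-1)*e^(u/beta) for u < 0, (alpha+beta)^(-1)*e^(-u/alpha) for u > 0
    return exp(u / beta) / (alpha + beta) if u < 0 else exp(-u / alpha) / (alpha + beta)

h = 0.001
grid = [(-50 + h * (i + 0.5)) for i in range(int(100 / h))]
total = sum(f_U(u) * h for u in grid)
mean = sum(u * f_U(u) * h for u in grid)
print(round(total, 3), round(mean, 3))  # ≈ 1.0 and alpha - beta = 1.5
```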

Solution 4.12.

(a) The joint distribution for X₁ and X₂, and the induced values of (x̄, s²), are:

  (x₁, x₂)   p_{X₁,X₂}(x₁, x₂)   (x̄, s²)
  (0, 0)     (1 − θ)²            (0, 0)
  (1, 0)     θ(1 − θ)            (1/2, 1/2)
  (0, 1)     (1 − θ)θ            (1/2, 1/2)
  (1, 1)     θ²                  (1, 0)

So, the joint distribution of X̄ and S² is:

  (x̄, s²)      p_{X̄,S²}(x̄, s²)
  (0, 0)        (1 − θ)²
  (1/2, 1/2)    2θ(1 − θ)
  (1, 0)        θ²

Since
\[
\mathrm{pr}\left(\bar X=\frac{1}{2}\,\middle|\,S^{2}=\frac{1}{2}\right) = 1 \ne \mathrm{pr}\left(\bar X=\frac{1}{2}\right) = 2\theta(1-\theta),
\]
X̄ and S² are dependent random variables.

(b) As expected,
\[
\mathrm{E}(\bar X) = \left(\frac{1}{2}\right)2\theta(1-\theta) + (1)\theta^{2} = \theta = \mathrm{E}(X),\ \text{and}\
\mathrm{V}(\bar X) = \left(\frac{1}{2}\right)^{2}2\theta(1-\theta) + (1)^{2}\theta^{2} - \theta^{2} = \frac{\theta(1-\theta)}{2} = \frac{\mathrm{V}(X)}{2}.
\]
Also,
\[
\mathrm{E}(S^{2}) = \left(\frac{1}{2}\right)2\theta(1-\theta) = \theta(1-\theta) = \mathrm{V}(X),\ \text{as expected; and}
\]
\[
\mathrm{V}(S^{2}) = \left(\frac{1}{2}\right)^{2}2\theta(1-\theta) - \left[\theta(1-\theta)\right]^{2} = \frac{\theta(1-\theta)(1-2\theta+2\theta^{2})}{2}.
\]
And,
\[
\mathrm{cov}(\bar X,S^{2}) = \mathrm{E}(\bar X S^{2}) - \mathrm{E}(\bar X)\mathrm{E}(S^{2})
= \left(\frac{1}{2}\right)\left(\frac{1}{2}\right)2\theta(1-\theta) - (\theta)\left[\theta(1-\theta)\right] = \frac{\theta(1-\theta)(1-2\theta)}{2}.
\]

So,
\[
\mathrm{E}(L) = 2\mathrm{E}(\bar X) - 3\mathrm{E}(S^{2}) = \theta(3\theta-1),\ \text{and}
\]
\[
\mathrm{V}(L) = 4\mathrm{V}(\bar X) + 9\mathrm{V}(S^{2}) + 2(2)(-3)\mathrm{cov}(\bar X,S^{2})
= 4\left[\frac{\theta(1-\theta)}{2}\right] + 9\left[\frac{\theta(1-\theta)(1-2\theta+2\theta^{2})}{2}\right] - 12\left[\frac{\theta(1-\theta)(1-2\theta)}{2}\right],
\]
which can be further simplified as desired.

Solution 4.14. First, for i = 1, 2, . . . , n, V(Zᵢ) = 1; and corr(Zᵢ, Zⱼ) = corr(Xᵢ, Xⱼ) for every i ≠ j. Then,

\[
\mathrm{V}(L) = \sum_{i=1}^{n}\mathrm{V}(Z_i) + 2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\mathrm{cov}(Z_i,Z_j)
= n + 2\left[\frac{n(n-1)}{2}\right]\rho = n + n(n-1)\rho \ge 0,\ \text{or}\ \rho\ge\frac{-1}{(n-1)}.
\]
Finally, since ρ ≤ 1, we have −1/(n − 1) ≤ ρ ≤ 1.

Solution 4.16. Now,
\[
\mathrm{cov}(L_1,L_2) = \mathrm{E}\left\{[L_1-\mathrm{E}(L_1)][L_2-\mathrm{E}(L_2)]\right\}
= \mathrm{E}\left\{\left[\sum_{i=1}^{k}a_iY_i-\sum_{i=1}^{k}a_i\mu_i\right]\left[\sum_{i=1}^{k}b_iY_i-\sum_{i=1}^{k}b_i\mu_i\right]\right\}
\]
\[
= \mathrm{E}\left\{\left[\sum_{i=1}^{k}a_i(Y_i-\mu_i)\right]\left[\sum_{i=1}^{k}b_i(Y_i-\mu_i)\right]\right\}
= \mathrm{E}\left\{\sum_{i=1}^{k}a_ib_i(Y_i-\mu_i)^{2} + \sum_{\text{all }i\ne j}a_ib_j(Y_i-\mu_i)(Y_j-\mu_j)\right\}
\]
\[
= \sum_{i=1}^{k}a_ib_i\mathrm{E}[(Y_i-\mu_i)^{2}] + \sum_{\text{all }i\ne j}a_ib_j\mathrm{E}[(Y_i-\mu_i)(Y_j-\mu_j)]
= \sigma^{2}\sum_{i=1}^{k}a_ib_i,
\]
since E[(Yᵢ − µᵢ)(Yⱼ − µⱼ)] = cov(Yᵢ, Yⱼ) = 0 for all i ≠ j. Since
\[
\mathrm{corr}(L_1,L_2) = \frac{\mathrm{cov}(L_1,L_2)}{\sqrt{\mathrm{V}(L_1)\mathrm{V}(L_2)}} = \frac{\sigma^{2}\sum_{i=1}^{k}a_ib_i}{\sqrt{\mathrm{V}(L_1)\mathrm{V}(L_2)}},
\]
it follows that corr(L₁, L₂) = 0 if ∑_{i=1}^{k}aᵢbᵢ = 0.

Solution 4.18.



(a)
\[
\mathrm{E}(X^{r}) = \int_0^{\infty}x^{r}\,\frac{x^{\beta-1}e^{-x/\alpha}}{\Gamma(\beta)\alpha^{\beta}}dx
= \frac{\Gamma(\beta+r)\alpha^{\beta+r}}{\Gamma(\beta)\alpha^{\beta}}\int_0^{\infty}\frac{x^{(\beta+r)-1}e^{-x/\alpha}}{\Gamma(\beta+r)\alpha^{\beta+r}}dx
= \alpha^{r}\frac{\Gamma(\beta+r)}{\Gamma(\beta)},\ (\beta+r)>0.
\]

(b)
\[
\mathrm{E}(T) = \mathrm{E}(X+Y) = \mathrm{E}(X) + \mathrm{E}_x[\mathrm{E}_y(Y|X=x)]
= \alpha\beta + \gamma\mathrm{E}_x\left(\frac{1}{x}\right) = \alpha\beta + \frac{\gamma}{\alpha(\beta-1)}.
\]
And, V(T) = V(X + Y) = V(X) + V(Y) + 2cov(X, Y) = α²β + V(Y) + 2cov(X, Y). Now,
\[
\mathrm{V}(Y) = \mathrm{V}_x[\mathrm{E}_y(Y|X=x)] + \mathrm{E}_x[\mathrm{V}_y(Y|X=x)]
= \mathrm{V}_x\left(\frac{\gamma}{x}\right) + \mathrm{E}_x\left(\frac{\gamma^{2}}{x^{2}}\right)
= \gamma^{2}\left\{2\mathrm{E}_x\left(\frac{1}{x^{2}}\right) - \left[\mathrm{E}_x\left(\frac{1}{x}\right)\right]^{2}\right\}
\]
\[
= \gamma^{2}\left[\frac{2}{\alpha^{2}(\beta-1)(\beta-2)} - \frac{1}{\alpha^{2}(\beta-1)^{2}}\right]
= \frac{\beta\gamma^{2}}{\alpha^{2}(\beta-1)^{2}(\beta-2)},\ \beta>2.
\]
Also,
\[
\mathrm{E}(XY) = \mathrm{E}_x\{\mathrm{E}_y(XY|X=x)\} = \mathrm{E}_x\{x\mathrm{E}_y(Y|X=x)\} = \mathrm{E}_x\left[x\cdot\frac{\gamma}{x}\right] = \gamma,
\]
so that
\[
\mathrm{cov}(X,Y) = \mathrm{E}(XY) - \mathrm{E}(X)\mathrm{E}(Y) = \gamma - \alpha\beta\left[\frac{\gamma}{\alpha(\beta-1)}\right] = \frac{-\gamma}{(\beta-1)}.
\]
So,
\[
\mathrm{V}(T) = \alpha^{2}\beta + \frac{\beta\gamma^{2}}{\alpha^{2}(\beta-1)^{2}(\beta-2)} - \frac{2\gamma}{(\beta-1)},\ \beta>2.
\]

(c)
\[
f_Y(y) = \int_0^{\infty}f_Y(y|X=x)f_X(x)dx
= \int_0^{\infty}\frac{e^{-y/(\gamma/x)}}{(\gamma/x)}\cdot\frac{x^{\beta-1}e^{-x/\alpha}}{\Gamma(\beta)\alpha^{\beta}}dx
= \frac{\alpha\beta\gamma^{\beta}}{(\gamma+\alpha y)^{\beta+1}},\ y>0.
\]

=

Z

1

0

= θ

Z

xr θ(1 − x)θ−1 dx 1

0

= θ

x(r+1)−1 (1 − x)θ−1 dx

Γ(r + 1)Γ(θ + 1) Γ(r + 1)Γ(θ) = , r ≥ 0. Γ(θ + r + 1) Γ(θ + r + 1)

So, when θ = 2 and r = 1, E(X) =

(1)(2) 1 Γ(2)Γ(3) = = . Γ(4) 6 3

And, when θ = 2 and r = 2, E(X 2 ) =

(2)(2) 1 Γ(3)Γ(3) = = , Γ(5) 24 6

so that 1 V(X) = − 6

 2 1 1 = . 3 18

So, for θ = 2, X − E(X) X − 1/3 q = p ∼ ˙ N (0, 1) 1/18n V(X)

for large n by the Central Limit Theorem. Thus,       1 1 ≤ 0.10 θ = 2 pr X − ≤ 0.10 θ = 2 = pr −0.10 ≤ X − 3 3 ) ( 0.10 −0.10 ≤Z≤ p = ˙ pr p θ = 2 , 1/18n 1/18n

42

MULTIVARIATE DISTRIBUTION THEORY

where X − 1/3 Z= p ∼ ˙ N (0, 1) 1/18n

for large n. For this probability to be at least 0.95, we require n∗ to satisfy p

0.10 1/18n∗

≥ 1.96,

√ or 18n∗ ≥ 1.96/0.10, or n∗ ≥ 21.34, so that n∗ = 22. Since fX (x; θ) is skewed to the right, the question would be whether a random sample from fX (x; θ) of size 22 would be sufficient to satisfy the Central Limit Theorem approximation. Solution 4.22. Clearly, the domain of X(n) is {−1, 0, 1}. Now,  n n Y   θ n pr(Xi = −1) = . pr X(n) = −1 = pr [∩i=1 (Xi = −1)] = 2 i=1

And,

  pr X(n) = 1

= pr [∪ni=1 (Xi = 1)] = 1 − pr [∩ni=1 (Xi ≤ 0)] n Y = 1− [1 − pr(Xi = 1)] i=1

n  θ . = 1− 1− 2 Since

  pr X(n) = 0 = =

=

    1 − pr X(n) = 1 − pr X(n) = −1   n   n θ θ 1− 1− 1− − 2 2  n  n θ θ 1− − , 2 2

we have

 θ n    2 θ n n pX(n) (x(n) ) = pr X(n) = x(n) = 1 − 2 − 2θ  n 1 − 1 − θ2

if x(n) = −1, if x(n) = 0, if x(n) = 1.

Solution 4.24. The joint distribution of X1 and X2 is pX1 ,X2 (x1 , x2 ) = = =

[pX (x1 )] [pX (x2 )]  x1   π (1 − π)1−x1 π x2 (1 − π)1−x2

π x1 +x2 (1 − π)2−(x1 +x2 ) , x1 = 0, 1 and x2 = 0, 1.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

43

So, when the pair (X1 , X2 ) takes the value (x1 , x2 ) = (0, 0), then the pair (U, V ) takes the value (u, v) = (0, 0) with probability (1−π)2 ; when (x1 , x2 ) = (1, 0), then (u, v) = (1, 1) with probability π(1 − π); when (x1 , x2 ) = (0, 1), then (u, v) = (1, 1) with probability (1 − π)π; and, when (x1 , x2 ) = (1, 1), then (u, v) = (2, 0) with probability π 2 . Thus, E(U V ) = (1)(1) [π(1 − π) + (1 − π)π] = 2π(1 − π),

E(U ) = (1)[2π(1 − π)] + (2)(π 2 ) = 2π, and E(V ) = (1)[2π(1 − π)] = 2π(1 − π), so that cov(U, V )

= E(U V ) − E(U )E(V ) = 2π(1 − π) − (2π)[2π(1 − π)]

= 2π(1 − π)(1 − 2π).

When π = 1/2, then cov(U, V ) = 0. But, when π = 1/2, then pr(U = 0|V = 0) = =

pr[(U = 0) ∩ (V = 0)] pr(V = 0) 1/4 1 1 = 6= pr(U = 0) = , 1/2 2 4

so that U and V are not independent random variables. This simple illustration demonstrates that two random variables can be dependent even when they are uncorrelated. However, two random variables that are independent have zero correlation. Solution 4.26. (a) Let TG be the random variable denoting the total number of insurance claims made by Category G members, and let TA and TP be defined analogously. Also, TG ∼ Poisson[20, 000(1.00)], TA ∼ Poisson[50, 000(2.00)], and TP ∼ Poisson[30, 000(4.00)]. Then, since T = (TG + TA + TP ) is the sum of three mutually independent Poisson random variables, it follows that T has a Poisson distribution with E(T ) = V(T ) = 20, 000(1.00) + 50, 000(2.00) + 30, 000(4.00) = 240, 000. (b) As in part (a), TA is the random variable denoting the total number of insurance claims made by Category A members, and TP is the random variable denoting the total number of insurance claims made by Category P members. Then, as we know from part (a), TA ∼ Poisson[(50, 000)(2.00)] and TP ∼ Poisson[(30, 000)(4.00)], and TA and TP are independent random variables.

44

MULTIVARIATE DISTRIBUTION THEORY Then, pr(TA > TP )

= =

∞ X ∞ X

i=0 j=i+1 ∞ X

pr[(TA = j) ∩ (TP = i)]

pr(TP = i)

i=0

∞ X

pr(TA = j)

j=i+1

" # ∞ " # 5 5 ∞ X (1.20x105 )i e(−1.20x10 ) X (105 )j e−10 = . i! j! i=0 j=i+1 Solution 4.28. (a) Since E(Xi2 ) = V(Xi ) + [E(Xi )]2 = (1 + µ2 ), i = 1, 2, . . . , n, it follows that E(S) =

n X

E(Xi2 ) = n(1 + µ2 ).

i=1

Now, V(S) =

n X i=1

V(Xi2 ) =

n X i=1

{E(Xi4 ) − [E(Xi2 )]2 }.

Appealing to moment generating function theory, it follows directly that t2

E(Xi4 )

d4 MXi (t) d4 (eµt+ 2 ) = = = (µ4 + 6µ2 + 3). dt4 dt4 |t=0 |t=0

So, V(S) = n[(µ4 + 6µ2 + 3) − (1 + µ2 )2 ] = 2n(1 + 2µ2 ). (b) When µ = 0, Xi ∼ N(0, 1), so that Xi2 ∼ χ21 . Since X1 , X2 , . . . , Xn constitute Pna set of n mutually independent random variables, it follows that S = i=1 Xi2 ∼ χ2n . (c) Since Y ∼ χ2b , it follows directly that

E(aY ) = aE(Y ) = ab and V(aY ) = a2 V(Y ) = a2 (2b) = 2a2 b. So, we need to choose a and b so that ab = n(1 + µ2 ) and 2a2 b = 2n(1 + 2µ2 ). Solving these two equations simultaneously gives a=

(1 + 2µ2 ) n(1 + µ2 )2 and b = . (1 + µ2 ) (1 + 2µ2 )

SOLUTIONS TO EVEN-NUMBERED EXERCISES

45

Solution 4.30. (a) For i = 1, 2, . . . , n, let the random variable Xi take the value 1 if the i-th athlete produces a P positive drug test, and let Xi take the value 0 n otherwise. Then, X = i=1 Xi . Thus, since E(Xi ) = πi and V(Xi ) = πi (1−πi ), and since the {Xi } are mutually independent, it follows directly that n n X X πi (1 − πi ). πi and V(X) = E(X) = i=1

i=1

(b) Consider the quantity Q

=

V(X) + λ [E(X) − k] = n X

=

i=1

=

k−

πi − n X

n X

πi2

i=1

πi2 + λ

i=1



"

" n X i=1

n X i=1

n X i=1

πi (1 − πi ) + λ

πi − k #

#

" n X i=1

πi − k

#

πi − k ,

where λ is a Lagrange multiplier. So, ∂Q = −2πi + λ = 0 gives λ = 2πi , i = 1, 2, . . . , n. ∂πi Then, using this result, we have, for i = 1, 2, . . . , n, n X

λ = nλ = n(2πi ) = 2

i=1

n X i=1

πi = 2k, or πi =

k . n

By checking second derivatives, it follows directly that taking πi = k/n, i = 1, 2, . . . , n will maximize V (X) subject to the constraint E(X) = k, 0 < k < n. Solution 4.32. (a) Consider the transformation U = X/Y and V = Y , so that the inverse functions are X = U V and Y = V ; and, note that this is a 1-1 transformation from the (X, Y ) plane to the (U, V ) plane. Also, the Jacobian for this transformation has the value J = V . Now, since the joint distribution of X and Y is fX,Y (x, y) = fX (x)fY (y) =

1 −(x2 +y2 )/2 e , −∞ < x < ∞, −∞ < y < ∞, 2π

it follows that fU,V (u, v) =

|v| −[(uv)2 +v2 ]/2 e , −∞ < u < ∞, −∞ < v < ∞. 2π

46

MULTIVARIATE DISTRIBUTION THEORY So, Z ∞ 2 2 1 |v|e−v (1+u )/2 dv 2π −∞ Z 1 ∞ −v2 (1+u2 )/2 ve dv π 0 i∞ 2 2 1h −(1 + u2 )−1 e−v (1+u )/2 π 0 1 , −∞ < u < ∞. π(1 + u2 )

fU (u) = = = =

Note that fU (u) has the exact structure of a t-distribution with one degree of freedom. (b) We have Z

u

1 dw 2 −∞ π(1 + w ) u 1  −1 tan (w) −∞ π  1  −1 tan (u) − tan−1 (−∞) π 1 h −1 πi tan (u) + π 2 1 tan−1 (u) + , −∞ < u < ∞. 2 π

FU (u) = = = = = And,

√ pr(− 3 < U < 1) = = = =

√ FU (1) − FU (− 3)

√ tan−1 (1) − tan−1 (− 3) π (π/4) − (−π/3) π 7 = 0.5833. 12

Solution 4.34. (a) Note that 0 < u < 2. So, for 0 < u < 1, we have Z u Z y2 +u (1)dy1 dy2 FU (u) = 0

=

Z

2y2

u

0 2

=

u . 2

Z

0

y1 /2

(1)dy2 dy1 +

Z

2u

u

Z

y1 /2

y1 −u

(1)dy2 dy1

SOLUTIONS TO EVEN-NUMBERED EXERCISES

47

And, for 1 < u < 2, we have 1 + 2

FU (u) = =

Z

(2u −

u

1

Z

y1 −1

(1)dy2 dy1 +

0

Z

2

u

u2 − 1). 2

Z

y1 −1

(1)dy2 dy1

y1 −u

Finally, fU (u) = dFU (u)/du = u for 0 < u < 1, and fU (u) = (2 − u) for 1 < u < 2. (b) Since (Y2 − Y1 )2 = (Y1 − Y2 )2 = U 2 , we have       1 1 1 pr U 2 > = pr U > = 1 − pr U < 4 2 2   2 (1/2) 7 1 = 1 − FU =1− = . 2 2 8 P200 Solution 4.36. First, note that W = i=1 Wi , where Wi is the weight of the i-th of these 200 steel ball bearings. And, since   4 3 Wi = 7.85Vi = 7.85 πR = (32.882)Ri3, 3 i where Vi = 34 πRi3 is the volume of the i-th of these 200 steel ball bearings, it follows that E(W ) =

200 X

E(Wi ) =

i=1

200 X

(32.882)E(Ri3 ) = (6, 576.40)E(R3 ).

i=1

Now, since Z = (R − µ)/σ ∼ N(0, 1), we have " 3 # R−µ 3 E(Z ) = 0 = E σ = = = =

 σ −3 E R3 − 3µR2 + 3µ2 R − µ3   σ −3 E(R3 ) − 3µE(R2 ) + 3µ2 E(R) − µ3   σ −3 E(R3 ) − 3µ(σ 2 + µ2 ) + 3µ3 − µ3   σ −3 E(R3 ) − 3µσ 2 − µ3 ,

so that E(R3 ) = µ3 + 3µσ 2 = (3.0)3 + 3(3.0)(0.02) = 27.18. Finally, E(W ) = (6, 576.40)(27.18) = 178, 746.55 grams. Solution 4.38.

48

MULTIVARIATE DISTRIBUTION THEORY

(a) Now, pX1 (1) = =

pr(X1 = 1) = pX1 ,X2 (1, 0) + pX1 ,X2 (1, 1) π10 + π11 = (1 − ρ)θ(1 − θ) + θ2 + ρθ(1 − θ) = θ,

so that pX1 (x1 ) = θx1 (1 − θ)1−x1 , x1 = 0, 1. Similarly, pX2 (1) = π01 + π11 = θ, so that pX2 (x2 ) = θx2 (1 − θ)1−x2 , x2 = 0, 1. Thus, both X1 and X2 follow the same Bernoulli distribution. (b) Clearly, E(X1 ) = E(X2 ) = θ, and V(X1 ) = V(X2 ) = θ(1 − θ). Since E(X1 X2 ) = =

(1)(1)pX1 ,X2 (1, 1) = π11 θ2 + ρθ(1 − θ),

it follows that cov(X1 , X2 ) = =

= E(X1 X2 ) − E(X1 )E(X2 )

θ2 + ρθ(1 − θ) − (θ)(θ) = ρθ(1 − θ).

Thus, corr(X1 , X2 )

= =

(c) Now,

cov(X1 , X2 ) p V(X1 )V(X2 ) ρθ(1 − θ) p = ρ. [θ(1 − θ)] [θ(1 − θ)]

pX1 ,X2 (x1 , 1) π x1 π 1−x1 = 11 01 pX2 (1) θ  2  x1 θ + ρθ(1 − θ) [(1 − ρ)θ(1 − θ)]1−x1 = θ = [θ + ρ(1 − θ)]x1 [1 − θ − ρ(1 − θ)]1−x1 , x1 = 0, 1.

pX1 (x1 |X2 = 1) =

So, the conditional distribution of X1 given X2 = 1 is BIN [n = 1, π = θ + ρ(1 − θ)]. Thus, E(X1 |X2 = 1) = θ+ρ(1−θ) and V(X1 |X2 = 1) = [θ + ρ(1 − θ)] [1 − θ − ρ(1 − θ)] . Solution 4.40.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

49

(a) For i = 1, 2, . . . , n, Yi = (Xi +1) ∼ GEOM(1−π). And, with S = we have

MS (t) = = =

Pn

i=1

Yi ,

 Pn   E etS = E et i=1 Yi  n  n Y  Y (1 − π)et E etYi = (1 − πet ) i=1 i=1 n  (1 − π)et , (1 − πet )

so that S ∼ NEGBIN(n, 1 − π). Thus, when n = 3 and π = 0.40, we have

¯ ≤1 pr X



= pr (X1 + X2 + X3 ≤ 3) = pr [(Y1 − 1) + (Y2 − 1) + (Y3 − 1) ≤ 3]

= pr [(Y1 + Y2 + Y3 ) ≤ 6] = pr (S ≤ 6) =

6 X

Cs−1 (0.60)3 (0.40)s−3 = 0.821. 2

s=3

(b) First, the joint distribution of X1 , X2 , . . . , Xn is

pX1 ,X2 ,...,Xn (x1 , x2 , . . . , xn ) =

n Y

i=1

= xi = 0, 1, . . . , ∞ for i = 1, 2, . . . , n, and 0 < π < 1.

(1 − π)π xi

(1 − π)n π

Pn

i=1

xi

,

50

MULTIVARIATE DISTRIBUTION THEORY Thus, we have θ

= pr (X1 ≤ X2 ≤ · · · ≤ Xn−1 ≤ Xn ) ∞ ∞ ∞ ∞ ∞ X X X X X ··· =

xn−2 =xn−3 xn−1 =xn−2 xn =xn−1

x1 =0 x2 =x1 x3 =x2

= (1 − π)n = (1 − π)n = (1 − π)n = (1 − π)n = (1 − π)n .. . n

= (1 − π) = =

∞ ∞ X X

∞ X

x1 =0 x2 =x1 x3 =x2 ∞ X

∞ X

∞ X

x1 =0 x2 =x1 x3 =x2 ∞ ∞ X X

∞ X

x1 =0 x2 =x1 x3 =x2 ∞ ∞ X X

∞ X

x1 =0 x2 =x1 x3 =x2 ∞ X

∞ X

∞ X

x1 =0 x2 =x1 x3 =x2

∞ X

∞ X

··· ··· ··· ··· ···

(1 − π)n π

∞ X

∞ X

π

xn−2 =xn−3 xn−1 =xn−2 ∞ X

∞ X

π

xn−2 =xn−3 xn−1 =xn−2 ∞ X

xn−2 =xn−3 ∞ X

xn−2 =xn−3 ∞ X

xn−2 =xn−3

Pn−1

xi

Pn−1

xi

i=1

i=1

Pn−2

xi

(1 − π)−1 π

Pn−2

xi

i=1

i=1

xi

∞ X

π xn

xn =xn−1

(1 − π)−1 π

i=1

Pn



π xn−1 (1 − π) ∞ X

π2

xn−1 =xn−2



(π 2 )xn−2 (1 − π 2 )

(1 − π)−1 (1 − π 2 )−1 π

Pn−3 i=1

x1

(1 − π)−1 (1 − π 2 )−1 (1 − π 3 )−1 · · · (1 − π n−1 )−1 (π n )

(1 − π)n (1 − π)(1 − π 2 ) · · · (1 − π n−1 )(1 − π n ) (1 − π)n Qn . i i=1 (1 − π )

Solution 4.42. (a) Now,

fX (x) =

Z

θ−x

0

fY (y) =

Z

θ−1 (θ − x)−1 dy = θ−1 , 0 < x < θ.

θ−y

θ−1 (θ − x)−1 dx = θ−1 [−ln(θ − x)]θ−y 0   θ θ−1 [ln(θ) − ln(y)] = θ−1 ln , 0 < y < θ. y 0

= (b) Now, fY (y|X = x) =



xi

x1 =0

And,



θ−1 (θ − x)−1 fX,Y (x, y) = = (θ − x)−1 , 0 < y < (θ − x). fX (x) θ−1

xn−1

π3

xn−2

SOLUTIONS TO EVEN-NUMBERED EXERCISES

51

Thus, the conditional density of Y given X = x is a uniform density over the interval (0, θ − x). Thus, it follows directly that E(Y |X = x) =

(θ − x) (θ − x)2 and V(Y |X = x) = . 2 12

So, since (θ − x) θ 1 = α + βx, where α = and β = − , 2 2 2 q q V(X) 1 it follows that ρX,Y = β V(X) V(Y ) = − 2 V(Y ) . Now, since X has a uniform density over the interval (0, θ), it follows that E(X) = θ/2 and V(X) = θ2 /12. And, E(Y |X = x) =

   (θ − x)2 (θ − x) +E V(Y ) = V[E(Y |X = x)] + E[V(Y |X = x)] = V 2 12 2  1 1 = − V(X) + E(X 2 − 2θX + θ2 ) 2 12 # "    2 7θ2 θ θ2 θ 1 θ2 2 = − 2θ +θ = + + . 48 12 12 2 2 144 

Finally, ρX,Y

1 =− 2

s

θ2 /12 =− 7θ2 /144

r

3 = −0.6547. 7

Alternatively, 

 θ − (θ/2) (θ − x) θ E(Y ) = E[E(Y |X = x)] = E = = , 2 2 4 and E(XY ) = =

E[E(XY |X = x)] = E[xE(Y |X = x)] = E [x(θ − x)/2] "    2 # θ θ2 θ2 θ 1 θ − = − . 2 2 12 2 12

Thus, cov(X, Y ) = E(XY ) − E(X)E(Y ) = ρX,Y

θ2 12



θ 2



θ 4



2

θ = − 24 , so that

cov(X, Y ) −θ2 /24 = p =p =− 2 V(X)V(Y ) (θ /12)(7θ2 /144)

r

3 . 7

52

MULTIVARIATE DISTRIBUTION THEORY

(c) Now, pr(X > Y ) =

Z

0

=

θ/2 Z θ−y

θ−1

y

Z

θ/2

= = = =

Z

θ/2 0

[−ln(θ − x)]yθ−y dy

[ln(θ − y) − ln(y)]dy

0

=

θ−1 (θ − x)−1 dxdy = θ−1

θ/2

θ−1 {(θ − y)[1 − ln(θ − y)] − y[ln(y) − 1]}0          θ θ θ θ θ−1 1 − ln − ln − 1 − θ[1 − ln(θ)] + 0 2 2 2 2     θ − θ[1 − ln(θ)] θ−1 θ − θln 2 θ−1 [θ − θln(θ) + θln(2) − θ + θln(θ)] ln(2) = 0.6931.

¯ = θ/2 and E(Y¯ ) = θ/4, it follows that (d) Since E(X) ¯ − Y¯ ) = E(X

θ θ θ − = , so that k1 = 4. 2 4 4

And, since fX (x) = θ−1 , so that FX (x) = x/θ, 0 < x < θ, we have fU (u) = n

 u n−1 θ

θ−1 = nθ−n un−1 , 0 < u < θ.

So, for r ≥ 0, we have r

E(U ) =

Z

θ r

u nθ 0

= nθ−n

So, since E(U ) = (e) Now,



V(θˆ1 ) =

n n+1





−n n−1

u

un+r (n + r)

=



0

=

(16)V[n−1

n X

Z

θ

u(n+r)−1 du 0

nθr . (n + r)

θ, it follows that k2 =

i=1

=

du = nθ

−n

(Xi − Yi )] =

n+1 n

 .

16 V(Xi − Yi ) n

16 [V(Xi ) + V(Yi ) − 2cov(Xi , Yi )] n   2  16 θ2 31θ2 7θ2 θ = + −2 − . n 12 144 24 9n

SOLUTIONS TO EVEN-NUMBERED EXERCISES

53

And, V(θˆ2 )

 2 n+1 V(U ) = {E(U 2 ) − [E(U )]2 } n  2 "   2 # n+1 n n 2 θ2 = θ − n n+2 n+1 =



=

θ2 . n(n + 2)

n+1 n

2

Since V(θˆ2 ) < V(θˆ1 ), θˆ2 is the preferred estimator of these two unbiased estimators of θ. Solution 4.44. In general, !#) !# " n # (" n " n n n n X X X X X X b i Xi b i Xi − E ai X i ai X i − E = E b i Xi ai X i , cov i=1

i=1

=

=

=

E

E

("

i=1

 n X 

n X i=1

=

i=1

n X

σ2

i=1

i=1

i=1

#) #" n X bi (Xi − µ) ai (Xi − µ) i=1

ai bi (Xi − µ)2 +

X

all i6=j

  ai bj (Xi − µ)(Xj − µ) 

X   ai bi E (Xi − µ)2 + ai bj E [(Xi − µ)(Xj − µ)]

n X

all i6=j

ai b i ,

i=1

since cov(Xi , Xj ) = E [(Xi − µ)(Xj − µ)] = 0 ∀ i 6= j. In our case, ai = n−1 ∀ i and bi = ci , i = 1, 2, . . . , n. So, ¯ µ cov(X, ˆ) = σ 2

n X i=1

i=1

n  σ2 X σ2 ¯ n−1 (ci ) = ci = = V(X). n i=1 n

Finally,

s ¯ ¯ ¯ µ V( X) V(X) cov( X, ˆ ) ¯ µ , = p = corr(X, ˆ) = p ¯ ¯ V(ˆ µ) V(X)V(ˆ µ) V(X)V(ˆ µ) Pn Pn ¯ = σ 2 /n and V(ˆ where V(X) µ) = i=1 c2i σ 2 = σ 2 i=1 c2i . Thus, ¯ µ corr(X, ˆ) = s

n

1

n P

i=1

,

c2i

54

MULTIVARIATE DISTRIBUTION THEORY

an expression which does not depend on σ 2 . When n = 5 and ci = i2 , i = 1, 2, . . . , 5, then 1 1 ¯ µ = 0.0603. corr(X, ˆ) = q P = p 5 5(55) 5 i=1 i2 Solution 4.46. First, note that n

−lnG =

n

1X 1X (−lnXi ) = Ui = W, n i=1 n i=1

so that G = e−W . Then, since Xi = e−Ui , in which case the Jacobian is that fUi (ui ) = e−ui , 0 < ui < ∞, so that Ui ∼ NEGEXP(α = 1). Hence, since E(eW )

n  −1 Pn  Y t n Ui i=1 e n Ui = E e =E i=1

=

= −e−Ui , it follows

!

n Y

n  t  Y (1 − n−1 t)−1 = (1 − n−1 t)−n , E e n Ui =

i=1

Ui ∼ GAMMA(α = n−1 , β = n), so that

i=1

we have W = n

dXi dUi

Pn −1

i=1

fW (w) =

wn−1 e−nw , 0 < w < ∞. Γ(n)n−n

Now, with G = e−W or W = −lnG, so that the Jacobian is follows that nn n−1 fG (g) = g (−lng)n−1 , 0 < g < 1. Γ(n)

dW dG

= −G−1 , it

Note that the distribution of G can also be found by using an inductive argument. Solution 4.48. Let X denote the number of votes for the pro-life candidate Pr among the subset of r = (n − s) voting residents. Note that X = i=1 Xi , where pXi (xi ; π) = (1 − π)1−xi π xi , xi = 0, 1, so that the Central Limit Theorem is applicable here assuming that the {Xi }ri=1 are mutually independent random variables.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

55

Now, since X ∼BIN(r, π), we have θ

= = = ˙ ≈

where

pr {majority of the n voting residents vote for the pro-life candidate}    n n pr X + s > = pr X > − s 2 2 ) ( n − s − rπ X − rπ 2 pr p >p rπ(1 − π) rπ(1 − π) ( ) n 2 − s − rπ pr Z > p , rπ(1 − π) X − rπ ∼ ˙ N (0, 1) Z= p rπ(1 − π)

for large r by the Central Limit Theorem.

If pr(Z ≤ zθ ) = θ when Z ∼N(0,1), then we require n 2 − s − rπ p < −zθ rπ(1 − π)

p n − rπ + zθ rπ(1 − π) 2 p s+r =⇒ s > − rπ + zθ rπ(1 − π) 2 p r s =⇒ s − > − rπ + zθ rπ(1 − π) 2 2  p 1 − π + 2zθ rπ(1 − π), =⇒ s > 2r 2 =⇒ s >

and so s is the smallest integer satisfying this inequality given values √ for r, π, and θ. If θ = 0.841 and π = 0.50, then z0.841 = 1.00, and so s > r in this special case. Solution 4.50. ¯ = (a) We know that S = nX λ = 0.20, we have 

Pn

i=1

Xi ∼ Poisson(nλ). So, for n = 4 and

 S ≥ 0.40 = pr(S ≥ 1.60) = 1 − pr(S ≤ 1) 4 = 1 − pr(S = 0) − pr(S = 1)

¯ ≥ 0.40) = pr pr(X

= 1 − e−0.80 − (0.80)e−0.80

= 0.1912.

56

MULTIVARIATE DISTRIBUTION THEORY

¯ = λ and V(X) ¯ = λ/n. So, (b) We know that E(X) «# " „ ¯ X−λ  t √ λ/n E etZ = E e h √ √ √ i ¯ E et nX/ λ · e−t nλ h t i √ √ S e−t nλ E e nλ

=

=

√ √t nλ (nλ)(e nλ −1) e » – P∞ (t/√nλ)j √ (nλ) −1 j=0 j! −t nλ e e h P∞ √ t t2 −t nλ (nλ) 1+ √nλ + 2nλ + j=3 e“ e P∞ tj 1−j/2 1−j/2 ” t2 n j=3 j! λ 2 +

e−t

= = = =

e

tj j!

i λ−j/2 n−j/2 −1

.

So, 2

lim E(etZ ) = et

n→∞

/2

,

which is the moment generating function of a standard normal random variable. Note that this result follows as a special case of the Central Limit Theorem. Solution 4.52. (a) pr(X < Y ) =

Z



0

= = = = =

Z

y

α1 β2 e−β2 y−(α1 +β1 −β2 )x dxdy

0



y −e−(α1 +β2 −β2 )x dy (α1 + β1 − β2 ) 0 0 Z ∞ h i α1 β2 e−β2 y 1 − e−(α1 +β1 −β2 )y dy (α1 + β1 − β2 ) 0 ( ∞  −(α1 +β1 )y ∞ ) −e−β2 y −e α1 β2 − (α1 + β1 − β2 ) β2 (α1 + β1 ) 0 0   α1 β2 1 1 − (α1 + β1 − β2 ) β2 (α1 + β1 ) α1 . (α1 + β1 )

α1 β2

Z



e−β2 y

(b) fX,Y (x, y|X < Y ) =

α1 β2 exp[−β2 y − (α1 + β1 − β2 )x] pr(X < Y )

= β2 (α1 + β1 )e−β2 y−(α1 +β1 −β2 )x , 0 < x < y < ∞.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

57

(c) fX (x|X < Y ) =

Z



fX,Y (x, y|X < Y )dy

x

Z



e−β2 y dy  −β2 y ∞ −(α1 +β1 −β2 )x −e = β2 (α1 + β1 )e β2 x  −β2 x  e = β2 (α1 + β1 )e−(α1 +β1 −β2 )x β2

= β2 (α1 + β1 )e

−(α1 +β1 −β2 )x

x

= (α1 + β1 )e−(α1 +β1 )x , 0 < x < ∞.

(d) fY [y|(X = x) ∩ (X < Y )] =

fX,Y (x, y|X < Y ) fX (x|X < Y )

=

β2 (α1 + β1 )e−β2 y−(α1 +β1 −β2 )x (α1 + β1 )e−(α1 +β1 )x

=

β2 e−β2 (y−x) , 0 < x < y < ∞.

E(X|X < Y ) = (α1 + β1 )−1 and

V(X|X < Y ) = (α1 + β1 )−2 .

(e) From part (c), we have

And, from part (d), we have E[Y |(X = x) ∩ (X < Y )] = V[Y |(X = x) ∩ (X < Y )] =

 1 + x , and β2 1 . β22



So, using these results in conjunction with conditional expectation theory, we have:   1 E(Y |X < Y ) = Ex {E[Y |(X = x) ∩ (X < Y )]} = Ex +x β2 1 + E(X|X < Y ) = β2 1 1 + = β2 (α1 + β1 ) (α1 + β1 + β2 ) . = β2 (α1 + β1 )

58

MULTIVARIATE DISTRIBUTION THEORY And, V(Y |X < Y )

= Ex {V[Y |(X = x) ∩ (X < Y )]} + Vx {E[Y |(X = x) ∩ (X < Y )]}     = Ex β2−2 + Vx β2−1 + x

= β2−2 + V(X|X < Y ) 1 (α1 + β1 )2 + β22 1 + = . = 2 2 β2 (α1 + β1 ) β22 (α1 + β1 )2 Solution 4.54. (a) E etS



= =

i h E et(X1 +X2 ) E e

tX1



E e

tX2





θet = 1 − (1 − θ)et

2

,

which is the moment generating function of a negative binomial random variable with k = 2. Thus, 2 s−2 pS (s) = Cs−1 = (s − 1)θ2 (1 − θ)s−2 , s = 2, 3, . . . , ∞. 2−1 θ (1 − θ)

(b) By Bayes' Theorem,
$$p_{X_1}(x_1|S=s)=\frac{p_S(s|X_1=x_1)p_{X_1}(x_1)}{p_S(s)}.$$
Now,
$$p_S(s|X_1=x_1)=\mathrm{pr}(S=s|X_1=x_1)=\mathrm{pr}(x_1+X_2=s|X_1=x_1)=\mathrm{pr}(X_2=s-x_1)=\theta(1-\theta)^{(s-x_1)-1}.$$
So,
$$p_{X_1}(x_1|S=s)=\frac{\left[\theta(1-\theta)^{(s-x_1)-1}\right]\left[\theta(1-\theta)^{x_1-1}\right]}{(s-1)\theta^{2}(1-\theta)^{s-2}}=\frac{1}{(s-1)},\ x_1=1,2,\ldots,(s-1).$$

(c) Now, $\mathrm{cov}(X_1,S)=\mathrm{cov}(X_1,X_1+X_2)=\mathrm{cov}(X_1,X_1)+\mathrm{cov}(X_1,X_2)=V(X_1)+0=(1-\theta)/\theta^{2}$. And, $V(S)=V(X_1)+V(X_2)=2(1-\theta)/\theta^{2}$. So,
$$\mathrm{corr}(X_1,S)=\frac{(1-\theta)/\theta^{2}}{\sqrt{\frac{(1-\theta)}{\theta^{2}}\cdot\frac{2(1-\theta)}{\theta^{2}}}}=\frac{1}{\sqrt{2}}=0.7071.$$
As an alternative approach, note that
$$E(X_1|S=s)=\sum_{x_1=1}^{s-1}x_1p_{X_1}(x_1|S=s)=\sum_{x_1=1}^{s-1}x_1\left[\frac{1}{(s-1)}\right]=\frac{1}{(s-1)}\cdot\frac{(s-1)s}{2}=\frac{s}{2}=\alpha+\beta s,$$
where $\alpha=0$ and $\beta=1/2$. So,
$$\mathrm{corr}(X_1,S)=\beta\sqrt{\frac{V(S)}{V(X_1)}}=\frac{1}{2}\sqrt{\frac{2(1-\theta)/\theta^{2}}{(1-\theta)/\theta^{2}}}=\frac{1}{\sqrt{2}}=0.7071.$$
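As a numerical sanity check (not part of the original solution), the following short Python sketch brute-forces the moments of $X_1$ and $S=X_1+X_2$ from the truncated joint pmf of two independent geometrics and confirms that the correlation is $1/\sqrt{2}$ for any $\theta$; the value $\theta=0.4$ and the truncation point $N$ are arbitrary choices.

```python
import math

# For X1, X2 i.i.d. GEOM(theta) on {1, 2, ...}, corr(X1, X1 + X2) = 1/sqrt(2).
theta, N = 0.4, 400   # N leaves only negligible tail mass


def pmf(x):  # geometric pmf on {1, 2, ...}
    return theta * (1.0 - theta) ** (x - 1)


# brute-force moments from the (truncated) joint pmf p(x1) * p(x2)
EX1 = EX1sq = ES = ESsq = EX1S = 0.0
for x1 in range(1, N + 1):
    for x2 in range(1, N + 1):
        p = pmf(x1) * pmf(x2)
        s = x1 + x2
        EX1 += x1 * p
        EX1sq += x1 * x1 * p
        ES += s * p
        ESsq += s * s * p
        EX1S += x1 * s * p

corr = (EX1S - EX1 * ES) / math.sqrt((EX1sq - EX1 ** 2) * (ESsq - ES ** 2))
print(round(corr, 4))  # → 0.7071
```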

Solution 4.56.

(a) Since $-\infty<x<\infty$ and $-\infty<y<\infty$, we need to show that $h_{X,Y}(x,y)=h_X(x)h_Y(y)$ is equivalent to the condition $[f_1(x)-f_2(x)][g_1(y)-g_2(y)]=0$. Now,
$$h_X(x)=\int_{-\infty}^{\infty}h_{X,Y}(x,y)\,dy=\pi f_1(x)+(1-\pi)f_2(x);$$
and,
$$h_Y(y)=\int_{-\infty}^{\infty}h_{X,Y}(x,y)\,dx=\pi g_1(y)+(1-\pi)g_2(y).$$
So,
$$h_X(x)h_Y(y)=\pi^{2}f_1(x)g_1(y)+\pi(1-\pi)f_1(x)g_2(y)+\pi(1-\pi)f_2(x)g_1(y)+(1-\pi)^{2}f_2(x)g_2(y).$$
Thus,
$$
\begin{aligned}
h_{X,Y}(x,y)-h_X(x)h_Y(y)=0
&\Leftrightarrow \pi(1-\pi)f_1(x)g_1(y)-\pi(1-\pi)f_1(x)g_2(y)-\pi(1-\pi)f_2(x)g_1(y)+\pi(1-\pi)f_2(x)g_2(y)=0\\
&\Leftrightarrow [f_1(x)-f_2(x)][g_1(y)-g_2(y)]=0.
\end{aligned}
$$

(b) Since $h_X(x)=\pi f_1(x)+(1-\pi)f_2(x)$,
$$E(X)=\int_{-\infty}^{\infty}xh_X(x)\,dx=\pi\alpha_1+(1-\pi)\alpha_2;$$
similarly, $E(Y)=\pi\beta_1+(1-\pi)\beta_2$. And,
$$E(XY)=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}xy\,h_{X,Y}(x,y)\,dx\,dy
=\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}xy\left[\pi f_1(x)g_1(y)+(1-\pi)f_2(x)g_2(y)\right]dx\,dy
=\pi\alpha_1\beta_1+(1-\pi)\alpha_2\beta_2.$$
So,
$$
\begin{aligned}
\mathrm{cov}(X,Y) &= E(XY)-E(X)E(Y)\\
&= \pi\alpha_1\beta_1+(1-\pi)\alpha_2\beta_2-[\pi\alpha_1+(1-\pi)\alpha_2][\pi\beta_1+(1-\pi)\beta_2]\\
&= \pi(1-\pi)(\alpha_1-\alpha_2)(\beta_1-\beta_2).
\end{aligned}
$$

(c) The set of sufficient conditions is: (i) $f_1(x)\neq f_2(x)$; (ii) $g_1(y)\neq g_2(y)$; (iii) at least one of the equalities $\alpha_1=\alpha_2$ or $\beta_1=\beta_2$ holds. Note that conditions (i) and (ii) imply that $X$ and $Y$ are dependent, while condition (iii) implies that $\mathrm{cov}(X,Y)=0$.

Solution 4.58.

(a) First, the method of transformations will be used to find the marginal distribution of $U$. Let $U=\sqrt{Y/X}$ and $V=\sqrt{XY}$, so that the inverse functions are $X=V/U$ and $Y=UV$. So, $0<U<+\infty$ and $0<V<+\infty$. Also,
$$J=\begin{vmatrix}\frac{\partial X}{\partial U}&\frac{\partial X}{\partial V}\\[4pt]\frac{\partial Y}{\partial U}&\frac{\partial Y}{\partial V}\end{vmatrix}
=\begin{vmatrix}-V/U^{2}&1/U\\ V&U\end{vmatrix}=-\frac{2V}{U},$$
so that $|J|=2V/U$. Thus,
$$f_{U,V}(u,v)=\left[\theta e^{-\theta(v/u)}\right]\left[\theta^{-1}e^{-\theta^{-1}(uv)}\right]\left(\frac{2v}{u}\right)
=2vu^{-1}e^{-\left(\frac{\theta}{u}+\frac{u}{\theta}\right)v},\ 0<u<\infty,\ 0<v<\infty.$$
Finally,
$$f_U(u)=\int_0^\infty f_{U,V}(u,v)\,dv=2u^{-1}\int_0^\infty ve^{-\left(\frac{\theta}{u}+\frac{u}{\theta}\right)v}\,dv
=2u^{-1}\left(\frac{\theta}{u}+\frac{u}{\theta}\right)^{-2}=\frac{2u\theta^{2}}{(\theta^{2}+u^{2})^{2}},\ 0<u<\infty.$$


Another method for finding the distribution of $U$ is to find $F_U(u)$ and then use the relationship $f_U(u)=\frac{dF_U(u)}{du}$. In particular,
$$
\begin{aligned}
F_U(u) &= \mathrm{pr}(U\leq u)=\mathrm{pr}\left[\left(\frac{Y}{X}\right)^{1/2}\leq u\right]
=\mathrm{pr}(\sqrt{Y}\leq u\sqrt{X})=\mathrm{pr}(Y\leq u^{2}X)\\
&= \int_0^\infty\!\!\int_0^{u^{2}x}e^{-(\theta x+\theta^{-1}y)}\,dy\,dx
= 1-\frac{\theta^{2}}{(\theta^{2}+u^{2})},\ 0<u<+\infty,
\end{aligned}
$$
which leads to the same expression for $f_U(u)$ as obtained by the method of transformations.

which leads to the same expression for fU (u) as obtained by the method of transformations. (b) Note that fX,Y (x, y) = θe−θx

  −1 −θ−1 y  = fX (x)fY (y), 0 < x < ∞, 0 < y < ∞. θ e

In other words, X and Y are independent random variables with X ∼ GAMMA(αp= θ−1 , β = 1) and Y ∼ GAMMA(α = θ, β = 1). Since U = Y /X, √ !     Y = E Y 1/2 E X −1/2 E(U ) = E √ X # #" "    Γ 1 − 12 Γ 1 + 12 1/2 −1 −1/2 θ θ = Γ(1) Γ(1)     πθ 3 1 = Γ Γ θ= . 2 2 2 So, ¯ ) = n−1 E(U

n X

E(Ui ) = n

i=1

−1

 n  X πθ i=1

2

=

πθ . 2
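As a numerical cross-check (an addition, not part of the original solution), the density $f_U(u)=2u\theta^{2}/(\theta^{2}+u^{2})^{2}$ found in part (a) can be integrated with the substitution $u=\theta\tan\phi$, which maps $(0,\infty)$ to $(0,\pi/2)$ and tames the heavy tail; the total mass should be 1 and the mean should be $\pi\theta/2$. The value $\theta=1.7$ is an arbitrary test choice.

```python
import math

theta = 1.7


def f_U(u):  # density from part (a)
    return 2.0 * u * theta ** 2 / (theta ** 2 + u ** 2) ** 2


def simpson(g, a, b, n=20000):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0


# integrate in phi, with u = theta*tan(phi) and du = theta*sec^2(phi) dphi
def mass(phi):
    u = theta * math.tan(phi)
    return f_U(u) * theta / math.cos(phi) ** 2


def mean(phi):
    u = theta * math.tan(phi)
    return u * f_U(u) * theta / math.cos(phi) ** 2


b = math.pi / 2 - 1e-9  # stop just short of pi/2 to avoid tan blow-up
print(round(simpson(mass, 0.0, b), 6))   # → 1.0  (total probability)
print(round(simpson(mean, 0.0, b), 4), round(math.pi * theta / 2, 4))
```

The two numbers printed on the last line agree, confirming $E(U)=\pi\theta/2$ numerically.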

Solution 4.60.

(a) Note that $X$ can take the values 0 and 1, with
$$
\begin{aligned}
\mathrm{pr}(X=1) &= \mathrm{pr}[(W=1)\cap(U_1=1)]+\mathrm{pr}[(W=0)\cap(U_2=1)]\\
&= \mathrm{pr}(W=1)\mathrm{pr}(U_1=1)+\mathrm{pr}(W=0)\mathrm{pr}(U_2=1)=\theta\pi+(1-\theta)\pi=\pi,
\end{aligned}
$$
so that $X$ has a Bernoulli distribution with $E(X)=\pi$ and $V(X)=\pi(1-\pi)$. Analogously, it follows that $Y$ also has a Bernoulli distribution with $E(Y)=\pi$ and $V(Y)=\pi(1-\pi)$.

Now,
$$\mathrm{cov}(X,Y)=E(XY)-E(X)E(Y)=E(XY)-\pi^{2}.$$
And, since
$$
\begin{aligned}
E(XY) &= E\{[WU_1+(1-W)U_2][WU_1+(1-W)U_3]\}\\
&= E\left[W^{2}U_1^{2}+W(1-W)U_1U_3+(1-W)WU_2U_1+(1-W)^{2}U_2U_3\right]\\
&= E(W^{2})E(U_1^{2})+E[W(1-W)]E(U_1)E(U_3)+E[(1-W)W]E(U_2)E(U_1)+E[(1-W)^{2}]E(U_2)E(U_3)\\
&= \theta\pi+0+0+(1-\theta)\pi^{2}=\pi^{2}+\theta\pi(1-\pi),
\end{aligned}
$$
we have $\mathrm{cov}(X,Y)=\pi^{2}+\theta\pi(1-\pi)-\pi^{2}=\theta\pi(1-\pi)$. Thus,
$$\mathrm{corr}(X,Y)=\frac{\mathrm{cov}(X,Y)}{\sqrt{V(X)V(Y)}}=\frac{\theta\pi(1-\pi)}{\sqrt{[\pi(1-\pi)][\pi(1-\pi)]}}=\theta.$$

(b) We need to find $p_Y(y|X=x)=\mathrm{pr}(Y=y|X=x)$, $y=0,1$, the conditional distribution of $Y$ given $X=x$. So, we first need to find $p_{X,Y}(x,y)=\mathrm{pr}[(X=x)\cap(Y=y)]$, the joint distribution of $X$ and $Y$. Now,
$$
\begin{aligned}
p_{X,Y}(1,1) &= \mathrm{pr}[(W=1)\cap(U_1=1)]+\mathrm{pr}[(W=0)\cap(U_2=1)\cap(U_3=1)]=\theta\pi+(1-\theta)\pi^{2};\\
p_{X,Y}(1,0) &= \mathrm{pr}[(W=0)\cap(U_2=1)\cap(U_3=0)]=(1-\theta)\pi(1-\pi);\\
p_{X,Y}(0,1) &= \mathrm{pr}[(W=0)\cap(U_2=0)\cap(U_3=1)]=(1-\theta)\pi(1-\pi);\ \text{and,}\\
p_{X,Y}(0,0) &= \mathrm{pr}[(W=1)\cap(U_1=0)]+\mathrm{pr}[(W=0)\cap(U_2=0)\cap(U_3=0)]=\theta(1-\pi)+(1-\theta)(1-\pi)^{2}.
\end{aligned}
$$
Thus, with $p_X(x)=\mathrm{pr}(X=x)$ being the marginal distribution of $X$, $x=0,1$, we have
$$p_Y(1|X=0)=\frac{p_{X,Y}(0,1)}{p_X(0)}=\frac{(1-\theta)\pi(1-\pi)}{(1-\pi)}=(1-\theta)\pi=E(Y|X=0).$$
And,
$$p_Y(1|X=1)=\frac{p_{X,Y}(1,1)}{p_X(1)}=\frac{\theta\pi+(1-\theta)\pi^{2}}{\pi}=\theta+(1-\theta)\pi=E(Y|X=1).$$
In general, then, for $x=0,1$, we have $E(Y|X=x)=(1-\theta)\pi+\theta x=\alpha+\beta x$, where $\alpha=(1-\theta)\pi$ and $\beta=\theta$.
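Since $(W,U_1,U_2,U_3)$ take only 16 joint values, the results above can be verified exactly (an added check, not part of the original solution) by enumerating the joint distribution with rational arithmetic; $\theta=3/10$ and $\pi=1/4$ are arbitrary rational test values.

```python
from fractions import Fraction
from itertools import product

# Exact check: with X = W*U1 + (1-W)*U2 and Y = W*U1 + (1-W)*U3,
# corr(X, Y) should equal theta.
theta, pi = Fraction(3, 10), Fraction(1, 4)


def bern(p, v):  # pr(V = v) for V ~ Bernoulli(p)
    return p if v == 1 else 1 - p


EX = EY = EXY = EX2 = EY2 = Fraction(0)
for w, u1, u2, u3 in product((0, 1), repeat=4):
    p = bern(theta, w) * bern(pi, u1) * bern(pi, u2) * bern(pi, u3)
    x = w * u1 + (1 - w) * u2
    y = w * u1 + (1 - w) * u3
    EX += x * p
    EY += y * p
    EXY += x * y * p
    EX2 += x * x * p
    EY2 += y * y * p

cov = EXY - EX * EY
assert EX == pi and EY == pi                  # both margins are Bernoulli(pi)
assert cov == theta * pi * (1 - pi)           # cov(X, Y) = theta*pi*(1-pi)
assert cov ** 2 == theta ** 2 * (EX2 - EX ** 2) * (EY2 - EY ** 2)  # corr = theta
print("verified: corr(X, Y) =", theta)
```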


Solution 4.62.

(a) Using the general formula (given in the front material for this chapter) for the joint density function $f_{X_{(r)},X_{(s)}}(x_{(r)},x_{(s)})$, $1\leq r<s\leq n$, of two order statistics, we obtain (for $r=1$ and $s=n$)
$$f_{X_{(1)},X_{(n)}}(x_{(1)},x_{(n)})=n(n-1)\left[\left(\frac{x_{(n)}}{\theta}-k\right)-\left(\frac{x_{(1)}}{\theta}-k\right)\right]^{n-2}\left(\frac{1}{\theta}\right)\left(\frac{1}{\theta}\right)
=n(n-1)\theta^{-n}\left(x_{(n)}-x_{(1)}\right)^{n-2},\ k\theta<x_{(1)}<x_{(n)}<(k+1)\theta.$$
Now, consider the one-to-one transformation defined by the equations $U=X_{(n)}-X_{(1)}$ and $V=X_{(n)}$. Since the inverse functions are $X_{(1)}=V-U$ and $X_{(n)}=V$, the absolute value of the Jacobian is equal to 1, and we obtain
$$f_{U,V}(u,v)=n(n-1)\theta^{-n}u^{n-2},\ 0<u<(v-k\theta)\ \text{and}\ k\theta<v<(k+1)\theta.$$
Finally,
$$f_U(u)=n(n-1)\theta^{-n}\int_{u+k\theta}^{(k+1)\theta}u^{n-2}\,dv=n(n-1)\theta^{-n}u^{n-2}(\theta-u),\ 0<u<\theta.$$

(b) We have
$$E(U)=\int_0^\theta(u)n(n-1)\theta^{-n}u^{n-2}(\theta-u)\,du=\left(\frac{n-1}{n+1}\right)\theta,$$
so that $g(U)=\left(\frac{n+1}{n-1}\right)U$ satisfies $E[g(U)]=\theta$. Note that it is also possible to find $E(U)$ indirectly using the equality $E(U)=E\left[X_{(n)}\right]-E\left[X_{(1)}\right]$. Such an approach would require the use of the marginal distributions of $X_{(1)}$ and $X_{(n)}$.

Solution 4.64.

(a) First,
$$
\begin{aligned}
p_X(0)=\mathrm{pr}(X=0) &= \sum_{n=1}^{\infty}\mathrm{pr}[(X=0)\cap(N=n)]=\sum_{n=1}^{\infty}\mathrm{pr}(X=0|N=n)\mathrm{pr}(N=n)\\
&= \sum_{n=1}^{\infty}(1-\pi)^{n}\frac{(1-e^{-\lambda})^{-1}\lambda^{n}e^{-\lambda}}{n!}
= \frac{1}{(e^{\lambda}-1)}\sum_{n=1}^{\infty}\frac{[\lambda(1-\pi)]^{n}}{n!}\\
&= \frac{1}{(e^{\lambda}-1)}\left[\sum_{n=0}^{\infty}\frac{[\lambda(1-\pi)]^{n}}{n!}-1\right]
= \frac{e^{\lambda(1-\pi)}-1}{(e^{\lambda}-1)}.
\end{aligned}
$$
And, for $x=1,2,\ldots,\infty$ subject to the restriction that $n\geq x$, we have
$$
\begin{aligned}
p_X(x) &= \sum_{n=x}^{\infty}\mathrm{pr}(X=x|N=n)\mathrm{pr}(N=n)
= \sum_{n=x}^{\infty}C^{n}_{x}\pi^{x}(1-\pi)^{n-x}\frac{(1-e^{-\lambda})^{-1}\lambda^{n}e^{-\lambda}}{n!}\\
&= \frac{\pi^{x}e^{-\lambda}}{(1-e^{-\lambda})}\sum_{n=x}^{\infty}\left[\frac{n!}{x!(n-x)!}\right]\frac{\lambda^{n}(1-\pi)^{n-x}}{n!}
= \frac{(\pi\lambda)^{x}}{x!(e^{\lambda}-1)}\sum_{n=x}^{\infty}\frac{[\lambda(1-\pi)]^{n-x}}{(n-x)!}\\
&= \frac{(\pi\lambda)^{x}}{x!(e^{\lambda}-1)}\sum_{y=0}^{\infty}\frac{[\lambda(1-\pi)]^{y}}{y!}
= \frac{(\pi\lambda)^{x}e^{\lambda(1-\pi)}}{x!(e^{\lambda}-1)}.
\end{aligned}
$$

Clearly, $p_X(x)\geq 0$, $x=0,1,\ldots,\infty$, and
$$
\begin{aligned}
\sum_{x=0}^{\infty}p_X(x) &= \frac{e^{\lambda(1-\pi)}-1}{(e^{\lambda}-1)}+\sum_{x=1}^{\infty}\frac{(\pi\lambda)^{x}e^{\lambda(1-\pi)}}{x!(e^{\lambda}-1)}
= \frac{e^{\lambda(1-\pi)}-1}{(e^{\lambda}-1)}+\frac{e^{\lambda(1-\pi)}}{(e^{\lambda}-1)}\sum_{x=1}^{\infty}\frac{(\pi\lambda)^{x}}{x!}\\
&= \frac{e^{\lambda(1-\pi)}-1+e^{\lambda(1-\pi)}\left(e^{\pi\lambda}-1\right)}{(e^{\lambda}-1)}
= \frac{(e^{\lambda}-1)}{(e^{\lambda}-1)}=1,
\end{aligned}
$$
so that $p_X(x)$ is a valid discrete probability distribution.

so that pX (x) is a valid discrete probability distribution. (b) Since (1 − e−λ )−1

E(N ) =

(1 − e−λ )−1

=

∞ X

(n)

n=1 ∞ X

(n)

n=0

λn e−λ n! λn e−λ n!

λeλ λ = λ , −λ (1 − e ) (e − 1)

= we have E(X) = =

En [E(X|N = n)] = En (nπ) πλeλ πE(N ) = λ . (e − 1)

Or, more directly but less easily, we can compute $E(X)$ as
$$E(X)=\sum_{x=0}^{\infty}(x)p_X(x)=\sum_{x=1}^{\infty}(x)\frac{(\pi\lambda)^{x}e^{\lambda(1-\pi)}}{x!(e^{\lambda}-1)}
=\frac{e^{\lambda}}{(e^{\lambda}-1)}\sum_{x=0}^{\infty}(x)\frac{(\pi\lambda)^{x}e^{-\pi\lambda}}{x!}
=\frac{\pi\lambda e^{\lambda}}{(e^{\lambda}-1)}.$$

Solution 4.66.

(a)
$$\pi=\mathrm{pr}(X>35)=\mathrm{pr}(\mathrm{ln}X>\mathrm{ln}35)=\mathrm{pr}\left(\frac{\mathrm{ln}X-3.22}{\sqrt{0.03}}>\frac{3.56-3.22}{\sqrt{0.03}}\right)=\mathrm{pr}(Z>1.96)=0.025\ \text{since}\ Z\sim\mathrm{N}(0,1).$$



(b) Since the assumptions underlying the binomial distribution are satisfied with $n=3$ and $\pi=0.025$, the probability that at least two of the three readings will exceed 35 $\mu$g/m$^{3}$ is equal to
$$C^{3}_{2}(0.025)^{2}(0.975)^{1}+C^{3}_{3}(0.025)^{3}(0.975)^{0}=0.00184.$$

(c) First, $\mathrm{pr}(Z=0)=0$ since $Y_1$ is always at least as large as itself. So, the possible values for $Z$ are $1,2,\ldots,n$. Now, in general,
$$p_Z(z)=\mathrm{pr}(Z=z)=\mathrm{pr}(Z=z|Y_1=1)\mathrm{pr}(Y_1=1)+\mathrm{pr}(Z=z|Y_1=0)\mathrm{pr}(Y_1=0).$$
So, for $z=1,2,\ldots,(n-1)$,
$$p_Z(z)=C^{n-1}_{z-1}\pi^{z-1}(1-\pi)^{n-z}(\pi)+(0)(1-\pi)=C^{n-1}_{z-1}\pi^{z}(1-\pi)^{n-z},$$
and
$$p_Z(n)=\pi^{n-1}(\pi)+(1)(1-\pi)=\pi^{n}+(1-\pi),$$
where $\pi=0.025$. Finally,
$$
\begin{aligned}
E(Z) &= \sum_{z=1}^{n}zp_Z(z)=\sum_{z=1}^{n-1}zC^{n-1}_{z-1}\pi^{z}(1-\pi)^{n-z}+n[\pi^{n}+(1-\pi)]\\
&= \sum_{u=0}^{n-2}(u+1)C^{n-1}_{u}\pi^{u+1}(1-\pi)^{n-(u+1)}+n[\pi^{n}+(1-\pi)]\\
&= \sum_{u=0}^{n-2}uC^{n-1}_{u}\pi^{u+1}(1-\pi)^{(n-1)-u}+\sum_{u=0}^{n-2}C^{n-1}_{u}\pi^{u+1}(1-\pi)^{(n-1)-u}+n[\pi^{n}+(1-\pi)].
\end{aligned}
$$
Now,
$$\sum_{u=0}^{n-2}uC^{n-1}_{u}\pi^{u+1}(1-\pi)^{(n-1)-u}=\pi\sum_{u=0}^{n-1}uC^{n-1}_{u}\pi^{u}(1-\pi)^{(n-1)-u}-\pi(n-1)\pi^{n-1}
=\pi[(n-1)\pi]-\pi(n-1)\pi^{n-1};$$
and,
$$\sum_{u=0}^{n-2}C^{n-1}_{u}\pi^{u+1}(1-\pi)^{(n-1)-u}=\pi\left[\sum_{u=0}^{n-1}C^{n-1}_{u}\pi^{u}(1-\pi)^{(n-1)-u}-\pi^{n-1}\right]=\pi(1-\pi^{n-1}).$$
Finally,
$$E(Z)=\pi[(n-1)\pi]-\pi(n-1)\pi^{n-1}+\pi(1-\pi^{n-1})+n[\pi^{n}+(1-\pi)]=n-(n-1)\pi(1-\pi).$$
When $\pi=0.025$, $E(Z)=0.0244+0.9756n$.

Solution 4.68.

(a) First, under the stated assumptions, it follows that
$$p_X(x|Y=y)=C^{y}_{x}\pi^{x}(1-\pi)^{y-x},\ x=0,1,\ldots,y\ \text{and}\ 0<\pi<1.$$
So,

$$
\begin{aligned}
p_X(x) &= \sum_{y=x}^{\infty}p_X(x|Y=y)p_Y(y)=\sum_{y=x}^{\infty}\left[C^{y}_{x}\pi^{x}(1-\pi)^{y-x}\right]\frac{\lambda^{y}e^{-\lambda}}{y!}\\
&= \sum_{y=x}^{\infty}\left[\frac{y!}{x!(y-x)!}\right]\pi^{x}(1-\pi)^{y-x}\frac{\lambda^{y}e^{-\lambda}}{y!}
= \sum_{u=0}^{\infty}\frac{\pi^{x}(1-\pi)^{u}\lambda^{u+x}e^{-\lambda}}{x!u!}\\
&= \frac{(\pi\lambda)^{x}e^{-\lambda}}{x!}\sum_{u=0}^{\infty}\frac{[\lambda(1-\pi)]^{u}}{u!}
= \frac{(\pi\lambda)^{x}e^{-\lambda}e^{\lambda(1-\pi)}}{x!}
= \frac{(\pi\lambda)^{x}e^{-\pi\lambda}}{x!},\ x=0,1,\ldots,\infty,
\end{aligned}
$$
so that $X\sim\mathrm{POI}(\pi\lambda)$. Now,
$$p_Y(y|X=x)=\frac{p_{X,Y}(x,y)}{p_X(x)}=\frac{p_X(x|Y=y)p_Y(y)}{p_X(x)}
=\frac{\left[C^{y}_{x}\pi^{x}(1-\pi)^{y-x}\right]\left[\frac{\lambda^{y}e^{-\lambda}}{y!}\right]}{\left[\frac{(\pi\lambda)^{x}e^{-\pi\lambda}}{x!}\right]}
=\frac{[\lambda(1-\pi)]^{y-x}e^{-\lambda(1-\pi)}}{(y-x)!},\ y=x,x+1,\ldots,\infty.$$

Thus,
$$
\begin{aligned}
E(Y|X=x) &= \sum_{y=x}^{\infty}(y)\frac{[\lambda(1-\pi)]^{y-x}e^{-\lambda(1-\pi)}}{(y-x)!}
= \sum_{u=0}^{\infty}(u+x)\frac{[\lambda(1-\pi)]^{u}e^{-\lambda(1-\pi)}}{u!}\\
&= \sum_{u=0}^{\infty}(u)\frac{[\lambda(1-\pi)]^{u}e^{-\lambda(1-\pi)}}{u!}+x\sum_{u=0}^{\infty}\frac{[\lambda(1-\pi)]^{u}e^{-\lambda(1-\pi)}}{u!}
= \lambda(1-\pi)+x.
\end{aligned}
$$

(b) Since $E(X|Y=y)=\pi y=\alpha+\beta y$, where the intercept $\alpha=0$ and the slope $\beta=\pi$, we can appeal to the relationship between correlation and slope in simple linear regression to conclude that
$$\rho_{X,Y}=\beta\sqrt{\frac{V(Y)}{V(X)}}=\pi\sqrt{\frac{\lambda}{\pi\lambda}}=\sqrt{\pi}.$$
Also, since $E(Y|X=x)=[\lambda(1-\pi)+x]$ is a linear relationship with slope equal to 1, it follows that
$$\rho_{X,Y}=(1)\sqrt{\frac{V(X)}{V(Y)}}=\sqrt{\frac{\pi\lambda}{\lambda}}=\sqrt{\pi}.$$
Or, more directly, since
$$E(XY)=E_y[E(XY|Y=y)]=E_y[yE(X|Y=y)]=E_y[y(\pi y)]=\pi E(Y^{2})=\pi(\lambda+\lambda^{2}),$$

we have
$$\rho_{X,Y}=\frac{E(XY)-E(X)E(Y)}{\sqrt{V(X)V(Y)}}
=\frac{\pi(\lambda+\lambda^{2})-(\pi\lambda)(\lambda)}{\sqrt{(\pi\lambda)(\lambda)}}
=\frac{\pi\lambda}{\lambda\sqrt{\pi}}=\sqrt{\pi}.$$

Solution 4.70.

(a) First,

$$
\begin{aligned}
p_R(0)=\mathrm{pr}(R=0) &= \sum_{u=1}^{\infty}\mathrm{pr}(X_1=u)\mathrm{pr}(X_2=u)
= \sum_{u=1}^{\infty}\left[\pi(1-\pi)^{u-1}\right]\left[\pi(1-\pi)^{u-1}\right]\\
&= \pi^{2}\sum_{u=1}^{\infty}\left[(1-\pi)^{2}\right]^{u-1}
= \frac{\pi^{2}}{1-(1-\pi)^{2}}=\frac{\pi}{(2-\pi)}.
\end{aligned}
$$

And, for $r=1,2,\ldots,\infty$, we have
$$
\begin{aligned}
p_R(r)=\mathrm{pr}(R=r) &= \sum_{u=1}^{\infty}\left[\mathrm{pr}(X_1=u)\mathrm{pr}(X_2=u+r)+\mathrm{pr}(X_1=u+r)\mathrm{pr}(X_2=u)\right]\\
&= 2\sum_{u=1}^{\infty}\left[\pi(1-\pi)^{u-1}\right]\left[\pi(1-\pi)^{u+r-1}\right]
= 2\pi^{2}(1-\pi)^{r}\sum_{u=1}^{\infty}\left[(1-\pi)^{2}\right]^{u-1}
= \frac{2\pi(1-\pi)^{r}}{(2-\pi)}.
\end{aligned}
$$

Note that, as required,
$$\sum_{r=0}^{\infty}p_R(r)=\frac{\pi}{(2-\pi)}+\sum_{r=1}^{\infty}\frac{2\pi(1-\pi)^{r}}{(2-\pi)}=\frac{\pi}{(2-\pi)}+\frac{2(1-\pi)}{(2-\pi)}=1.$$

(b) First,
$$
\begin{aligned}
P_R(s)=\sum_{r=0}^{\infty}s^{r}p_R(r) &= \frac{\pi}{(2-\pi)}+\sum_{r=1}^{\infty}s^{r}\frac{2\pi(1-\pi)^{r}}{(2-\pi)}
= \frac{\pi}{(2-\pi)}+\frac{2\pi}{(2-\pi)}\sum_{r=1}^{\infty}[s(1-\pi)]^{r}\\
&= \frac{\pi}{(2-\pi)}+\frac{2\pi(1-\pi)}{(2-\pi)}\left[\frac{s}{1-s(1-\pi)}\right],\ |s(1-\pi)|<1.
\end{aligned}
$$

Now,
$$E(R)=\left[\frac{dP_R(s)}{ds}\right]_{s=1}=\left\{\frac{2\pi(1-\pi)}{(2-\pi)}[1-s(1-\pi)]^{-2}\right\}_{s=1}=\frac{2(1-\pi)}{\pi(2-\pi)}.$$
And,
$$E[R(R-1)]=\left[\frac{d^{2}P_R(s)}{ds^{2}}\right]_{s=1}=\left\{\frac{4\pi(1-\pi)^{2}}{(2-\pi)}[1-s(1-\pi)]^{-3}\right\}_{s=1}=\frac{4(1-\pi)^{2}}{\pi^{2}(2-\pi)}.$$
Thus,
$$V(R)=E[R(R-1)]+E(R)-[E(R)]^{2}
=\frac{4(1-\pi)^{2}}{\pi^{2}(2-\pi)}+\frac{2(1-\pi)}{\pi(2-\pi)}-\frac{4(1-\pi)^{2}}{\pi^{2}(2-\pi)^{2}}
=\frac{2(1-\pi)\left[1+(1-\pi)^{2}\right]}{\pi^{2}(2-\pi)^{2}}.$$

Solution 4.72.

(a) For $1\leq u<v<+\infty$, we have
$$p_{U,V}(u,v)=\mathrm{pr}[(X_1=u)\cap(X_2=v)]+\mathrm{pr}[(X_1=v)\cap(X_2=u)]
=2\left[\pi(1-\pi)^{u-1}\right]\left[\pi(1-\pi)^{v-1}\right]=2\pi^{2}(1-\pi)^{u+v-2}.$$
And,
$$p_{U,V}(u,u)=\mathrm{pr}[(X_1=u)\cap(X_2=u)]=\left[\pi(1-\pi)^{u-1}\right]\left[\pi(1-\pi)^{u-1}\right]=\pi^{2}(1-\pi)^{2(u-1)}.$$
To show that $p_{U,V}(u,v)$ is a valid bivariate discrete probability distribution, first note that
$$\sum_{u=1}^{\infty}\sum_{v=u}^{\infty}p_{U,V}(u,v)=\sum_{u=1}^{\infty}p_{U,V}(u,u)+\sum_{u=1}^{\infty}\sum_{v=u+1}^{\infty}p_{U,V}(u,v).$$
Now,
$$\sum_{u=1}^{\infty}p_{U,V}(u,u)=\sum_{u=1}^{\infty}\pi^{2}(1-\pi)^{2(u-1)}=\pi^{2}\sum_{u=1}^{\infty}\left[(1-\pi)^{2}\right]^{u-1}=\frac{\pi^{2}}{1-(1-\pi)^{2}}=\frac{\pi}{(2-\pi)}.$$
And,
$$
\begin{aligned}
\sum_{u=1}^{\infty}\sum_{v=u+1}^{\infty}p_{U,V}(u,v) &= \sum_{u=1}^{\infty}\sum_{v=u+1}^{\infty}2\pi^{2}(1-\pi)^{u+v-2}
= 2\pi^{2}\sum_{u=1}^{\infty}(1-\pi)^{u-2}\sum_{v=u+1}^{\infty}(1-\pi)^{v}\\
&= 2\pi^{2}\sum_{u=1}^{\infty}(1-\pi)^{u-2}\left[\frac{(1-\pi)^{u+1}}{1-(1-\pi)}\right]
= \frac{2\pi}{(1-\pi)}\sum_{u=1}^{\infty}\left[(1-\pi)^{2}\right]^{u}\\
&= \frac{2\pi}{(1-\pi)}\left[\frac{(1-\pi)^{2}}{1-(1-\pi)^{2}}\right]
= \frac{2(1-\pi)}{(2-\pi)}.
\end{aligned}
$$
Since
$$\frac{\pi}{(2-\pi)}+\frac{2(1-\pi)}{(2-\pi)}=1,$$

and since $0\leq p_{U,V}(u,v)\leq 1$ for all permissible pairs $(u,v)$, it follows that $p_{U,V}(u,v)$ is a valid bivariate discrete probability distribution.

(b) Now, for $u=1,2,\ldots,\infty$,
$$
\begin{aligned}
p_U(u) &= p_{U,V}(u,u)+\sum_{v=u+1}^{\infty}2\pi^{2}(1-\pi)^{u+v-2}
= \pi^{2}(1-\pi)^{2(u-1)}+2\pi^{2}(1-\pi)^{u-2}\sum_{v=u+1}^{\infty}(1-\pi)^{v}\\
&= \pi^{2}(1-\pi)^{2(u-1)}+2\pi^{2}(1-\pi)^{u-2}\left[\frac{(1-\pi)^{u+1}}{1-(1-\pi)}\right]
= \left[(1-\pi)^{2}\right]^{u-1}\left[\pi^{2}+2\pi(1-\pi)\right]\\
&= \left[(1-\pi)^{2}\right]^{u-1}\left[1-(1-\pi)^{2}\right].
\end{aligned}
$$
Thus, $U\sim\mathrm{GEOM}\left[1-(1-\pi)^{2}\right]$.

Also,
$$p_V(1)=p_{U,V}(1,1)=\pi^{2}.$$
And, for $v=2,3,\ldots,\infty$,
$$
\begin{aligned}
p_V(v) &= p_{U,V}(v,v)+\sum_{u=1}^{v-1}2\pi^{2}(1-\pi)^{u+v-2}
= \pi^{2}(1-\pi)^{2(v-1)}+2\pi^{2}(1-\pi)^{v-1}\sum_{u=1}^{v-1}(1-\pi)^{u-1}\\
&= \pi^{2}(1-\pi)^{2(v-1)}+2\pi^{2}(1-\pi)^{v-1}\left[\frac{1-(1-\pi)^{v-1}}{1-(1-\pi)}\right]\\
&= \pi^{2}(1-\pi)^{2(v-1)}+2\pi(1-\pi)^{v-1}-2\pi(1-\pi)^{2(v-1)}\\
&= 2\pi(1-\pi)^{v-1}-\pi(2-\pi)(1-\pi)^{2(v-1)}.
\end{aligned}
$$

Finally, since $0\leq p_V(v)\leq 1$ for $v=1,2,\ldots,\infty$, and since
$$
\begin{aligned}
\sum_{v=1}^{\infty}p_V(v) &= \pi^{2}+\sum_{v=2}^{\infty}2\pi(1-\pi)^{v-1}-\sum_{v=2}^{\infty}\pi(2-\pi)(1-\pi)^{2(v-1)}\\
&= \pi^{2}+2\pi\left[\frac{1-\pi}{1-(1-\pi)}\right]-\pi(2-\pi)\left[\frac{(1-\pi)^{2}}{1-(1-\pi)^{2}}\right]\\
&= \pi^{2}+2(1-\pi)-(1-\pi)^{2}=1,
\end{aligned}
$$

it follows that $p_V(v)$ is a valid univariate discrete probability distribution.

Solution 4.74.

First, note that $Y_n=\sum_{i=1}^{n}X_i$, and that $P_{X_i}(s)=E\left(s^{X_i}\right)=s\pi+s^{-1}(1-\pi)$. So, we have
$$
\begin{aligned}
P_{Y_n}(s)=E\left(s^{Y_n}\right)=E\left(s^{\sum_{i=1}^{n}X_i}\right)
&= E\left(\prod_{i=1}^{n}s^{X_i}\right)=\prod_{i=1}^{n}E\left(s^{X_i}\right)
= \left[s\pi+s^{-1}(1-\pi)\right]^{n}\\
&= s^{-n}\left[s^{2}\pi+(1-\pi)\right]^{n}
= s^{-n}\sum_{i=0}^{n}C^{n}_{i}(s^{2}\pi)^{i}(1-\pi)^{n-i}\\
&= \sum_{i=0}^{n}C^{n}_{i}\pi^{i}(1-\pi)^{n-i}s^{2i-n}
= \sum_{j=-n}^{n}C^{n}_{\frac{n+j}{2}}\pi^{\frac{n+j}{2}}(1-\pi)^{\frac{n-j}{2}}s^{j}.
\end{aligned}
$$
So, since the coefficient of $s^{j}$ is $\mathrm{pr}(Y_n=j)$, we have
$$\mathrm{pr}(Y_n=j)=C^{n}_{\frac{n+j}{2}}\pi^{\frac{n+j}{2}}(1-\pi)^{\frac{n-j}{2}}.$$
When $n$ is an odd positive integer, $\mathrm{pr}(Y_n=j)=0$ when $j$ is an even positive or negative integer (with 0 being considered to be an even integer). And, when $n$ is an even positive integer, $\mathrm{pr}(Y_n=j)=0$ when $j$ is an odd positive or negative integer. So, when $n=3$, we have
$$p_{Y_3}(j)=C^{3}_{\frac{3+j}{2}}\pi^{\frac{3+j}{2}}(1-\pi)^{\frac{3-j}{2}},\ j=-3,-1,+1,+3.$$
And, when $n=4$, we have
$$p_{Y_4}(j)=C^{4}_{\frac{4+j}{2}}\pi^{\frac{4+j}{2}}(1-\pi)^{\frac{4-j}{2}},\ j=-4,-2,0,+2,+4.$$
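The pmf obtained from the generating-function argument can be verified exactly (an added check, not part of the original solution) by enumerating all $2^{n}$ step sequences with rational arithmetic; $n=3,4$ and $\pi=2/5$ are arbitrary test choices.

```python
from fractions import Fraction
from itertools import product
from math import comb

# For Yn = sum of n i.i.d. steps taking +1 w.p. pi and -1 w.p. 1-pi,
# check pr(Yn = j) = C(n, (n+j)/2) * pi^((n+j)/2) * (1-pi)^((n-j)/2).
pi = Fraction(2, 5)

for n in (3, 4):
    # brute force over all 2^n step sequences
    brute = {}
    for steps in product((1, -1), repeat=n):
        pr = Fraction(1)
        for s in steps:
            pr *= pi if s == 1 else 1 - pi
        j = sum(steps)
        brute[j] = brute.get(j, Fraction(0)) + pr
    # closed form, nonzero only when n + j is even
    for j in range(-n, n + 1):
        if (n + j) % 2 == 0:
            k = (n + j) // 2
            assert brute.get(j, Fraction(0)) == comb(n, k) * pi ** k * (1 - pi) ** (n - k)
        else:
            assert j not in brute
print("pmf of Yn verified for n = 3, 4")
```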

Solution 4.76.

First, note that
$$U_r=\sum_{i=1}^{n}(X_i-\bar X)^{r}=\sum_{i=1}^{n}\left[X_i-n^{-1}\sum_{j=1}^{n}X_j\right]^{r}
=n^{-r}\sum_{i=1}^{n}\left[(n-1)X_i-\sum_{\text{all }j\neq i}X_j\right]^{r}
=n^{-r}\sum_{i=1}^{n}Y_i^{r},$$
where $Y_i=\left[(n-1)X_i-\sum_{\text{all }j\neq i}X_j\right]\sim\mathrm{N}\left[0,n(n-1)\sigma^{2}\right]$.

So, it follows directly that $E(U_r)=0$ when $r$ is an odd positive integer.

Now, with $Z_i=Y_i/\sqrt{n(n-1)\sigma^{2}}\sim\mathrm{N}(0,1)$, we have
$$M_{Z_i}(t)=E\left(e^{tZ_i}\right)=e^{t^{2}/2}=\sum_{j=0}^{\infty}\frac{(t^{2}/2)^{j}}{j!}
=\sum_{j=0}^{\infty}\left[\frac{(2j)!}{j!2^{j}}\right]\frac{t^{2j}}{(2j)!},$$
so that
$$E\left(Z_i^{2j}\right)=\frac{(2j)!}{j!2^{j}},\ j=1,2,\ldots,\infty.$$
Thus, for $j=1,2,\ldots,\infty$,
$$E(U_{2j})=n^{-2j}\sum_{i=1}^{n}E\left(Y_i^{2j}\right)
=n^{-2j}(n)\left[n(n-1)\sigma^{2}\right]^{j}\frac{(2j)!}{j!2^{j}}
=\frac{n^{1-j}(n-1)^{j}(2j)!}{j!2^{j}}\,\sigma^{2j},$$
so that
$$k_j=\frac{j!2^{j}}{n^{1-j}(n-1)^{j}(2j)!}.$$
When $j=1$, $k_1=(n-1)^{-1}$; and, when $j=2$, $k_2=n/\left[3(n-1)^{2}\right]$.

Solution 4.78.

(a) First, it follows easily that $E(X_i-Y_i)=(\mu_x-\mu_y)$ and $V(X_i-Y_i)=\sigma_x^{2}+\sigma_y^{2}-2\rho\sigma_x\sigma_y$;

When j = 1, k1 = (n − 1)−1 ; and, when j = 2, k2 = n/3(n − 1)2 . Solution 4.78. (a) First, it follows easily that E(Xi − Yi ) = (µx − µy ) and V(Xi − Yi ) = σx2 + σy2 − 2ρσx σy ;

P and, since (T1n − T2n ) = ni=1 (Xi − Yi ) is a sum of n mutually independent and identically distributed random variables, we have E (T1n − T2n ) = n(µx − µy ) and V (T1n − T2n ) = n(σx2 + σy2 − 2ρσx σy ).

SOLUTIONS TO EVEN-NUMBERED EXERCISES

75

So, θn

= =

pr (T1n > T2n ) = pr pr

" n X i=1

=

= ˙



" n X i=1

(Xi − Yi ) > 0

#

Xi >

n X i=1

Yi

#

 Pn (Xi − Yi ) − n(µx − µy ) −n(µx − µy )  q pr  i=1 > q 2 2 n(σx + σy − 2ρσx σy ) n(σx2 + σy2 − 2ρσx σy )   √ n(µ − µ ) − x y , pr Z > q σx2 + σy2 − 2ρσx σy )

where Z ∼N(0, ˙ 1) for large n by the Central Limit Theorem. For n = 100, µx = 10, σx2 = 4, µy = 9.8, σy2 = 3, and ρ = 0.10, we obtain θ100 =pr(Z ˙ > −0.7964)=0.783. ˙ (b) Using similar reasoning as in part (a), we wish to find the smallest value of n, say n∗ , such that   Pn (X − Y ) − n(µ − µ ) 5 − n(µ − µ ) i i x y x y  ≥ 0.90. q pr  i=1 >q 2 2 2 2 n(σx + σy − 2ρσx σy ) n(σx + σy − 2ρσx σy ) So, for µx = 10, σx2 = 4, µy = 9.8, σy2 = 3, and ρ = 0.10, we require 5 − n(10 − 9.8) p ≤ −1.282, n(6.3072)

or equivalently, which gives n∗ = 324.

 √ √ 0.20 n n − 16.10 ≥ 5,
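A direct numerical scan of this inequality (an added cross-check, using the same rounded cutoff $-1.282$ and the exact variance of each difference) confirms the smallest qualifying sample size.

```python
import math

# Smallest n with (5 - 0.2n)/sqrt(var_diff * n) <= -1.282, where
# var_diff = sigma_x^2 + sigma_y^2 - 2*rho*sigma_x*sigma_y.
var_diff = 4 + 3 - 2 * 0.10 * 2.0 * math.sqrt(3.0)   # = 6.3072...

n = 1
while (5 - 0.2 * n) / math.sqrt(var_diff * n) > -1.282:
    n += 1
print(n)  # → 308
```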

Solution 4.80.

(a) Clearly, $\mathrm{pr}(Y_1=y_j)=N^{-1}$, $j=1,2,\ldots,N$. Now, for $i=2,3,\ldots,n$, let $A_{i-1}$ be the event that none of the first $(i-1)$ distinct values of $y$ selected equal $y_j$. Then,
$$
\begin{aligned}
\mathrm{pr}(Y_i=y_j) &= \mathrm{pr}[(Y_i=y_j)\cap A_{i-1}]=\mathrm{pr}(Y_i=y_j|A_{i-1})\mathrm{pr}(A_{i-1})\\
&= \left(\frac{1}{N-i+1}\right)\left(\frac{N-1}{N}\right)\left(\frac{N-2}{N-1}\right)\cdots\left(\frac{N-i+1}{N-i+2}\right)
= \frac{1}{N},\ j=1,2,\ldots,N.
\end{aligned}
$$
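For a small population this uniformity, and the negative covariance derived in part (b) below, can be verified exactly (an added check, not part of the original solution) by enumerating all equally likely ordered samples drawn without replacement; the population values are an arbitrary example.

```python
from fractions import Fraction
from itertools import permutations

y = [1, 2, 4, 8, 11]                      # N = 5 population values (arbitrary)
N, n = len(y), 3
mu = Fraction(sum(y), N)
sigma2 = sum((Fraction(v) - mu) ** 2 for v in y) / N

samples = list(permutations(y, n))        # all equally likely ordered WOR samples
M = len(samples)

# part (a): the second draw Y2 is marginally uniform over the population
for v in y:
    assert sum(1 for s in samples if s[1] == v) * N == M

# part (b): cov(Y1, Y2) = E(Y1*Y2) - mu^2 = -sigma^2/(N-1)
E12 = sum(Fraction(s[0] * s[1]) for s in samples) / M
assert E12 - mu ** 2 == -sigma2 / (N - 1)
print("WOR draws are marginally uniform; cov(Yi, Yi') = -sigma^2/(N-1)")
```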

(b) For $i=1,2,\ldots,n$,
$$E(Y_i)=\sum_{j=1}^{N}(y_j)\mathrm{pr}(Y_i=y_j)=\sum_{j=1}^{N}(y_j)\left(\frac{1}{N}\right)=N^{-1}\sum_{j=1}^{N}y_j=\mu.$$
So, for $i\neq i'$, $\mathrm{cov}(Y_i,Y_{i'})=E(Y_iY_{i'})-\mu^{2}$. Now,
$$
\begin{aligned}
E(Y_iY_{i'}) &= \sum_{j=1}^{N}\sum_{\text{all }k\neq j}y_jy_k\,\mathrm{pr}[(Y_i=y_j)\cap(Y_{i'}=y_k)]
= \sum_{j=1}^{N}(y_j)\mathrm{pr}(Y_i=y_j)\sum_{\text{all }k\neq j}(y_k)\mathrm{pr}(Y_{i'}=y_k|Y_i=y_j)\\
&= \sum_{j=1}^{N}(y_j)\left(\frac{1}{N}\right)\sum_{\text{all }k\neq j}(y_k)\left(\frac{1}{N-1}\right)
= \frac{1}{N(N-1)}\sum_{j=1}^{N}y_j\left[\sum_{k=1}^{N}y_k-y_j\right]\\
&= \frac{1}{N(N-1)}\left[N^{2}\mu^{2}-\sum_{j=1}^{N}y_j^{2}\right]
= \frac{1}{N(N-1)}\left[N^{2}\mu^{2}-(N\sigma^{2}+N\mu^{2})\right]
= \mu^{2}-\frac{\sigma^{2}}{(N-1)},
\end{aligned}
$$
so that
$$\mathrm{cov}(Y_i,Y_{i'})=\left[\mu^{2}-\frac{\sigma^{2}}{(N-1)}\right]-\mu^{2}=-\frac{\sigma^{2}}{(N-1)}.$$

Since we have shown that any two observations randomly selected WOR from this finite-sized population are negatively correlated, we do not have a random sample. However, as the population becomes infinitely large (i.e., $N\to\infty$), we approach the characteristics of a random sample (i.e., a set of $n$ mutually independent and identically distributed random variables).

(c) Clearly,
$$E(\bar Y)=n^{-1}\sum_{i=1}^{n}E(Y_i)=n^{-1}\sum_{i=1}^{n}\mu=\mu,$$
so that $\bar Y$ is an unbiased estimator of $\mu$.

Now,
$$
\begin{aligned}
V(\bar Y) &= V\left[n^{-1}\sum_{i=1}^{n}Y_i\right]
= n^{-2}\left[\sum_{i=1}^{n}V(Y_i)+2\sum_{i=1}^{n-1}\sum_{i'=i+1}^{n}\mathrm{cov}(Y_i,Y_{i'})\right]\\
&= n^{-2}\left\{n\sigma^{2}+2\left[\frac{n(n-1)}{2}\right]\left[\frac{-\sigma^{2}}{(N-1)}\right]\right\}
= \frac{\sigma^{2}}{n}\left(\frac{N-n}{N-1}\right).
\end{aligned}
$$

So, when sampling WOR from a finite-sized population, the variance of $\bar Y$ is less than $\sigma^{2}/n$, the variance of the sample mean based on a random sample. The factor $(N-n)/(N-1)$ is called the finite population correction factor. Since
$$\left(\frac{N-n}{N-1}\right)\approx\left(1-\frac{n}{N}\right)\ \text{for large}\ N,$$
this correction factor cannot be ignored when $n/N$, the size of the sample relative to the size of the population, is not negligible (say, $\geq 0.05$ in value).

Solution 4.82.

For $n=2$, since $\bar X_2=(X_1+X_2)/2$, we have

$$U_2=\sigma^{-2}\sum_{i=1}^{2}(X_i-\bar X_2)^{2}
=\sigma^{-2}\left[\left(\frac{X_1-X_2}{2}\right)^{2}+\left(\frac{X_2-X_1}{2}\right)^{2}\right]
=\left(\frac{X_1-X_2}{\sigma\sqrt{2}}\right)^{2}\sim\chi^{2}_{1}\ \text{since}\ \frac{X_1-X_2}{\sigma\sqrt{2}}\sim\mathrm{N}(0,1).$$
Now, for any $n\geq 4$, assume that $U_{n-1}=\sigma^{-2}\sum_{i=1}^{n-1}(X_i-\bar X_{n-1})^{2}\sim\chi^{2}_{n-2}$. Then, with $\bar X_n=n^{-1}\left[(n-1)\bar X_{n-1}+X_n\right]$, we have
$$
\begin{aligned}
U_n=\sigma^{-2}\sum_{i=1}^{n}(X_i-\bar X_n)^{2}
&= \sigma^{-2}\sum_{i=1}^{n}\left[X_i-n^{-1}\left((n-1)\bar X_{n-1}+X_n\right)\right]^{2}\\
&= \sigma^{-2}\sum_{i=1}^{n}\left[(X_i-\bar X_{n-1})+n^{-1}(\bar X_{n-1}-X_n)\right]^{2}\\
&= \sigma^{-2}\sum_{i=1}^{n-1}\left[(X_i-\bar X_{n-1})^{2}+2n^{-1}(X_i-\bar X_{n-1})(\bar X_{n-1}-X_n)+n^{-2}(\bar X_{n-1}-X_n)^{2}\right]\\
&\qquad+\sigma^{-2}\left[(X_n-\bar X_{n-1})+n^{-1}(\bar X_{n-1}-X_n)\right]^{2}\\
&= U_{n-1}+0+\sigma^{-2}\frac{(n-1)}{n^{2}}(\bar X_{n-1}-X_n)^{2}+\sigma^{-2}\left(\frac{n-1}{n}\right)^{2}(X_n-\bar X_{n-1})^{2}\\
&= U_{n-1}+\sigma^{-2}\left(\frac{n-1}{n}\right)(\bar X_{n-1}-X_n)^{2}
= U_{n-1}+V\sim\chi^{2}_{n-1},
\end{aligned}
$$
since $U_{n-1}$, $\bar X_{n-1}$, and $X_n$ are mutually independent random variables, $U_{n-1}\sim\chi^{2}_{n-2}$ by the induction assumption, and
$$V=\left[\frac{\bar X_{n-1}-X_n}{\sigma\sqrt{\frac{n}{(n-1)}}}\right]^{2}\sim\chi^{2}_{1}
\quad\text{because}\quad
\frac{\bar X_{n-1}-X_n}{\sigma\sqrt{\frac{n}{(n-1)}}}\sim\mathrm{N}(0,1).$$

Solution 4.84.

In general,
$$f_{Y_{(1)}}(y_{(1)})=n\left[1-F_Y(y_{(1)})\right]^{n-1}f_Y(y_{(1)}),$$
where $F_Y(y)$ is the cumulative distribution function (CDF) of the random variable $Y$. So, since $F_Y(y)=1-e^{-y/\theta}$, $y>0$, it follows directly that
$$f_{Y_{(1)}}(y_{(1)})=\frac{n}{\theta}e^{-ny_{(1)}/\theta},\ 0<y_{(1)}<+\infty.$$
So, $Y_{(1)}\sim\mathrm{gamma}\left(\alpha=\frac{\theta}{n},\beta=1\right)$, with $E(Y_{(1)})=\frac{\theta}{n}$ and $V(Y_{(1)})=\frac{\theta^{2}}{n^{2}}$. So,
$$Z_{(1)}=\frac{Y_{(1)}-\frac{\theta}{n}}{\sqrt{\theta^{2}/n^{2}}}=\frac{n}{\theta}Y_{(1)}-1,\ -1<Z_{(1)}<+\infty.$$
So, since $Y_{(1)}$ has moment generating function $E\left(e^{tY_{(1)}}\right)=(1-\alpha t)^{-\beta}=\left(1-\frac{\theta}{n}t\right)^{-1}$, it follows that
$$M_{Z_{(1)}}(t)=E\left(e^{tZ_{(1)}}\right)=E\left[e^{t\left(\frac{n}{\theta}Y_{(1)}-1\right)}\right]
=e^{-t}E\left[e^{\left(\frac{tn}{\theta}\right)Y_{(1)}}\right]
=e^{-t}\left[1-\frac{\theta}{n}\left(\frac{tn}{\theta}\right)\right]^{-1}
=\frac{e^{-t}}{(1-t)},\ t<1.$$
So,
$$\lim_{n\to\infty}M_{Z_{(1)}}(t)=M_{Z_{(1)}}(t)=\frac{e^{-t}}{(1-t)},\ t<1,$$
which is the moment generating function for a random variable (in particular, $Z_{(1)}$) with density function
$$f_{Z_{(1)}}(z_{(1)})=e^{-(z_{(1)}+1)},\ -1<z_{(1)}<+\infty.$$
This is the expected result, since $Z_{(1)}=\frac{n}{\theta}Y_{(1)}-1=U-1$, where $f_U(u)=e^{-u}$, $u>0$.

Solution 4.86.

For $u=1,2,\ldots,N$, we have
$$F_U(u)=\mathrm{pr}(U\leq u)=\mathrm{pr}\left[\cap_{i=1}^{n}(X_i\leq u)\right]=\prod_{i=1}^{n}\mathrm{pr}(X_i\leq u)=\left(\frac{u}{N}\right)^{n}.$$
So, for $u=1,2,\ldots,N$,
$$p_U(u)=\mathrm{pr}(U=u)=F_U(u)-F_U(u-1)=\left(\frac{u}{N}\right)^{n}-\left(\frac{u-1}{N}\right)^{n}.$$
Thus,
$$
\begin{aligned}
E(U) &= \sum_{u=1}^{N}up_U(u)=\sum_{u=1}^{N}u\left(\frac{u}{N}\right)^{n}-\sum_{u=1}^{N}u\left(\frac{u-1}{N}\right)^{n}
= N^{-n}\left[\sum_{u=1}^{N}u^{n+1}-\sum_{u=1}^{N}u(u-1)^{n}\right]\\
&= N^{-n}\left[\sum_{u=1}^{N}u^{n+1}-\sum_{v=1}^{N-1}(v+1)v^{n}\right]
= N^{-n}\left[N^{n+1}+\sum_{v=1}^{N-1}v^{n+1}-\sum_{v=1}^{N-1}v^{n+1}-\sum_{v=1}^{N-1}v^{n}\right]\\
&= N-\sum_{v=1}^{N-1}\left(\frac{v}{N}\right)^{n}.
\end{aligned}
$$

#

v n . N

As a simple check, when n = 1, we correctly obtain E(U ) = E(X1 ) = (N + 1)/2. And, when n = 2, we obtain E(U ) = (N + 1)(4N − 1)/6N , which has (as desired) the value 1 when N = 1. Solution 4.88∗ . First, with πx = (eλ − 1)−1 λx /x!, the set of random ∞ variables {Yx }x=1 can be considered to follow a multinomial distribution with a countably infinite number of cells, with sample size n, and with probabilities 0 {πx }∞ x=1 . Hence, it follows that Yx ∼ Binomial(n, πx ), and that, for x 6= x , cov(Yx , Yx0 ) = −nπx πx0 .

So,
$$
\begin{aligned}
E(U) &= n^{-1}\left[\sum_{j=1}^{\infty}E(Y_{2j-1})-\sum_{j=1}^{\infty}E(Y_{2j})\right]
= n^{-1}\left[\sum_{j=1}^{\infty}(n\pi_{2j-1})-\sum_{j=1}^{\infty}(n\pi_{2j})\right]\\
&= (e^{\lambda}-1)^{-1}\sum_{j=1}^{\infty}\left[\frac{\lambda^{2j-1}}{(2j-1)!}-\frac{\lambda^{2j}}{(2j)!}\right]
= (e^{\lambda}-1)^{-1}\left[\lambda-\frac{\lambda^{2}}{2!}+\frac{\lambda^{3}}{3!}-\frac{\lambda^{4}}{4!}+\ldots\right]\\
&= (e^{\lambda}-1)^{-1}\left[1-\sum_{r=0}^{\infty}\frac{(-\lambda)^{r}}{r!}\right]
= (e^{\lambda}-1)^{-1}\left(1-e^{-\lambda}\right)=e^{-\lambda}.
\end{aligned}
$$
Also,
$$V(U)=n^{-2}\left[V(S_1)+V(S_2)-2\,\mathrm{cov}(S_1,S_2)\right].$$
Now,
$$V(S_1)=V\left(\sum_{j=1}^{\infty}Y_{2j-1}\right)=\sum_{j=1}^{\infty}V(Y_{2j-1})+2\sum_{\text{all }j<k}\mathrm{cov}(Y_{2j-1},Y_{2k-1})
=\sum_{j=1}^{\infty}\left[n\pi_{2j-1}(1-\pi_{2j-1})\right]-2n\sum_{\text{all }j<k}\pi_{2j-1}\pi_{2k-1}.$$

$$\mathrm{pr}(U\leq k)=1-\mathrm{pr}[(X-Y)>k]-\mathrm{pr}[(Y-X)>k]=1-2\,\mathrm{pr}[(X-Y)>k].$$
Now,
$$
\begin{aligned}
\mathrm{pr}[(X-Y)>k]=\mathrm{pr}(X>Y+k) &= \sum_{y=1}^{m-(k+1)}\sum_{x=y+k+1}^{m}\mathrm{pr}(X=x)\mathrm{pr}(Y=y)\\
&= m^{-2}\sum_{y=1}^{m-(k+1)}(m-y-k)=m^{-2}\sum_{y=1}^{m-(k+1)}[(m-k)-y]\\
&= m^{-2}\left[(m-k)(m-k-1)-\frac{(m-k)(m-k-1)}{2}\right]
= \frac{(m-k)(m-k-1)}{2m^{2}}.
\end{aligned}
$$
So,
$$\mathrm{pr}(U\leq k)=1-\frac{(m-k)(m-k-1)}{m^{2}},\ 0\leq k\leq(m-1).$$
As expected, $\mathrm{pr}[U\leq(m-1)]=1$ and $\mathrm{pr}(U\leq 0)=\mathrm{pr}(U=0)=1/m$.

(b) Now, $\mathrm{pr}(U=0)=1/m$. And, for $u=1,2,\ldots,(m-1)$,
$$p_U(u)=\mathrm{pr}(U=u)=\mathrm{pr}(U>u-1)-\mathrm{pr}(U>u)
=\frac{(m-u+1)(m-u)}{m^{2}}-\frac{(m-u)(m-u-1)}{m^{2}}
=\frac{2(m-u)}{m^{2}}.$$
So,
$$
\begin{aligned}
E(U) &= (0)\left(\frac{1}{m}\right)+\sum_{u=1}^{m-1}(u)\frac{2(m-u)}{m^{2}}
= \frac{2}{m^{2}}\sum_{u=1}^{m-1}(mu-u^{2})\\
&= \frac{2}{m^{2}}\left[m\frac{(m-1)m}{2}-\frac{(m-1)(m)(2m-1)}{6}\right]
= \frac{(m^{2}-1)}{3m}.
\end{aligned}
$$

(c) For $m=100$, we wish to choose $k^{*}$ to be the largest integer value of $k$ such that
$$\mathrm{pr}(U\leq k)=1-\frac{(100-k)(99-k)}{(100)^{2}}\leq 0.05.$$
By trial-and-error, it follows that $k^{*}=2$, in which case $\mathrm{pr}(U\leq 2)=0.0494$.

(d) If $Y$ is the number of pairs of numbers that do not differ by more than $k^{*}=2$, then $Y\sim\mathrm{BIN}(n=10,\pi=0.0494)$. So, under the assumption of no ESP, we have
$$\mathrm{pr}(Y\geq 3)=1-\mathrm{pr}(Y\leq 2)=1-\sum_{y=0}^{2}C^{10}_{y}(0.0494)^{y}(0.9506)^{10-y}=0.0111.$$
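The numbers in parts (c) and (d) can be reproduced directly (an added check, not part of the original solution) from the CDF formula of part (a) and the binomial tail.

```python
from math import comb

# part (c): with m = 100, pr(U <= k) = 1 - (m-k)(m-k-1)/m^2
m = 100


def cdf_U(k):
    return 1 - (m - k) * (m - k - 1) / m ** 2


assert round(cdf_U(2), 4) == 0.0494 and cdf_U(3) > 0.05   # so k* = 2

# part (d): upper tail of BIN(10, 0.0494)
pi = cdf_U(2)
tail = 1 - sum(comb(10, y) * pi ** y * (1 - pi) ** (10 - y) for y in range(3))
print(round(tail, 4))  # → 0.0111
```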

So, assuming no ESP, we have observed a fairly rare event. Thus, for this particular pair of monozygotic twins, there is some statistical evidence for the presence of ESP. Perhaps further study about this pair of monozygotic twins is warranted, hopefully using other more sophisticated ESP detection experiments.

Solution 4.108∗.

For $i=1,2,\ldots,n$, let the dichotomous random variable $W_i$ equal 1 if the $i$-th white ball is in Urn 1 after $k$ one-minute time periods have elapsed. Then,
$$N_k=\sum_{i=1}^{n}W_i\quad\text{and}\quad E(N_k)=\sum_{i=1}^{n}E(W_i).$$

Now, after $k$ one-minute time periods have elapsed, the $i$-th white ball will be in Urn 1 if it has been selected an even number of times. So,
$$
\begin{aligned}
E(W_i)=\mathrm{pr}(W_i=1) &= \sum_{j\ \mathrm{even}}C^{k}_{j}\left(\frac{1}{n}\right)^{j}\left(1-\frac{1}{n}\right)^{k-j}\\
&= \frac{1}{2}\left\{\sum_{j=0}^{k}C^{k}_{j}\left(\frac{1}{n}\right)^{j}\left(1-\frac{1}{n}\right)^{k-j}
+\sum_{j=0}^{k}C^{k}_{j}\left(-\frac{1}{n}\right)^{j}\left(1-\frac{1}{n}\right)^{k-j}\right\}\\
&= \frac{1}{2}\left\{\left[\frac{1}{n}+\left(1-\frac{1}{n}\right)\right]^{k}+\left[-\frac{1}{n}+\left(1-\frac{1}{n}\right)\right]^{k}\right\}
= \frac{1}{2}\left[1+\left(1-\frac{2}{n}\right)^{k}\right].
\end{aligned}
$$
Thus,
$$E(N_k)=\frac{n}{2}\left[1+\left(1-\frac{2}{n}\right)^{k}\right].$$

Clearly,
$$\lim_{k\to\infty}E(N_k)=\frac{n}{2},$$
so that, as anticipated, each urn will contain $n/2$ white balls and $n/2$ green balls when $k$ is infinitely large (i.e., when equilibrium is attained).

Solution 4.110∗.

(a) Based on the stated first-order autoregressive model, we have
$$V\left(\frac{Y_{j'}-\mu}{\sigma}\right)=\rho^{2(t_{j'}-t_j)}V\left(\frac{Y_j-\mu}{\sigma}\right)+V(\epsilon_{j'}),
\ \text{i.e.,}\ 1=\rho^{2(t_{j'}-t_j)}+V(\epsilon_{j'}),$$
so that $V(\epsilon_{j'})=1-\rho^{2(t_{j'}-t_j)}$. And,
$$
\begin{aligned}
\mathrm{cov}(Y_j,Y_{j'}) &= \sigma^{2}\mathrm{cov}\left(\frac{Y_j-\mu}{\sigma},\frac{Y_{j'}-\mu}{\sigma}\right)
= \sigma^{2}\mathrm{cov}\left[\frac{Y_j-\mu}{\sigma},\rho^{(t_{j'}-t_j)}\left(\frac{Y_j-\mu}{\sigma}\right)+\epsilon_{j'}\right]\\
&= \sigma^{2}\rho^{(t_{j'}-t_j)}\mathrm{cov}\left(\frac{Y_j-\mu}{\sigma},\frac{Y_j-\mu}{\sigma}\right)
= \sigma^{2}\rho^{(t_{j'}-t_j)}V\left(\frac{Y_j-\mu}{\sigma}\right)
= \sigma^{2}\rho^{(t_{j'}-t_j)}.
\end{aligned}
$$
So,
$$\mathrm{corr}(Y_j,Y_{j'})=\frac{\mathrm{cov}(Y_j,Y_{j'})}{\sqrt{V(Y_j)V(Y_{j'})}}
=\frac{\sigma^{2}\rho^{(t_{j'}-t_j)}}{\sqrt{(\sigma^{2})(\sigma^{2})}}=\rho^{(t_{j'}-t_j)}.$$

(b) We have
$$
\begin{aligned}
V(\bar Y) &= n^{-2}\left[\sum_{j=1}^{n}V(Y_j)+2\sum_{j=1}^{n-1}\sum_{j'=j+1}^{n}\mathrm{cov}(Y_j,Y_{j'})\right]
= n^{-2}\left[n\sigma^{2}+2\sum_{j=1}^{n-1}\sum_{j'=j+1}^{n}\sigma^{2}\rho^{\left[\frac{(j'-1)}{(n-1)}-\frac{(j-1)}{(n-1)}\right]}\right]\\
&= \frac{\sigma^{2}}{n^{2}}\left[n+2\sum_{j=1}^{n-1}\sum_{j'=j+1}^{n}\rho^{(j'-j)/(n-1)}\right]
= \frac{\sigma^{2}}{n^{2}}\left[n+2\sum_{i=1}^{n-1}(n-i)\theta_n^{i}\right],
\end{aligned}
$$
where $\theta_n=\rho^{1/(n-1)}$. Now,
$$
\begin{aligned}
\sum_{i=1}^{n-1}(n-i)\theta_n^{i} &= n\sum_{i=1}^{n-1}\theta_n^{i}-\sum_{i=1}^{n-1}i\theta_n^{i}
= n\sum_{i=1}^{n-1}\theta_n^{i}-\theta_n\sum_{i=1}^{n-1}\frac{d(\theta_n^{i})}{d\theta_n}\\
&= n\theta_n\left(\frac{1-\theta_n^{n-1}}{1-\theta_n}\right)-\theta_n\frac{d}{d\theta_n}\left[\sum_{i=1}^{n-1}\theta_n^{i}\right]
= n\left(\frac{\theta_n-\theta_n^{n}}{1-\theta_n}\right)-\theta_n\frac{d}{d\theta_n}\left(\frac{\theta_n-\theta_n^{n}}{1-\theta_n}\right)\\
&= n\left(\frac{\theta_n-\theta_n^{n}}{1-\theta_n}\right)-\theta_n\left[\frac{\left(1-n\theta_n^{n-1}\right)(1-\theta_n)+\left(\theta_n-\theta_n^{n}\right)}{(1-\theta_n)^{2}}\right]\\
&= \frac{\theta_n\left[n(1-\theta_n)-(1-\theta_n^{n})\right]}{(1-\theta_n)^{2}},
\end{aligned}
$$
so that
$$V(\bar Y)=\frac{\sigma^{2}}{n^{2}}\left\{n+\frac{2\theta_n\left[n(1-\theta_n)-(1-\theta_n^{n})\right]}{(1-\theta_n)^{2}}\right\},$$
where $\theta_n=\rho^{1/(n-1)}$.

Chapter 5

Estimation Theory

5.1

Solutions to Even-Numbered Exercises

Solution 5.2.

(a) Let $I_E(x)$ be the indicator function for the set $E$, so that $I_E(x)$ equals 1 if $x\in E$ and $I_E(x)$ equals 0 otherwise. Then, letting $A=(\theta,\infty)$ and $B=(0,\infty)$, we have
$$f_{X_1,X_2,\ldots,X_n}(x_1,x_2,\ldots,x_n;\theta)=\prod_{i=1}^{n}\left\{e^{-(x_i-\theta)}I_A(x_i)\right\}
=\left[e^{n\theta}I_A(x_{(1)})\right]\left[\prod_{i=1}^{n}e^{-x_i}I_B(x_i)\right]
=g(u;\theta)\cdot h(x_1,x_2,\ldots,x_n),$$
where $u=x_{(1)}=\min\{x_1,x_2,\ldots,x_n\}$. Now, $h(x_1,x_2,\ldots,x_n)$ is functionally independent of $\theta$; and, its range of variation does not depend on $\theta$ since, given $X_{(1)}=x_{(1)}$, we know that $x_i\geq x_{(1)}>\theta$, $i=1,2,\ldots,n$. So, by the Factorization Theorem, $X_{(1)}$ is a sufficient statistic for $\theta$.

(b)
$$F_X(x;\theta)=\int_\theta^{x}e^{-(t-\theta)}\,dt=e^{\theta}\left[-e^{-t}\right]_\theta^{x}=e^{\theta}\left(e^{-\theta}-e^{-x}\right)=1-e^{-(x-\theta)},\ x>\theta.$$
So,
$$f_{X_{(1)}}(x_{(1)};\theta)=n\left[1-\left(1-e^{-(x_{(1)}-\theta)}\right)\right]^{n-1}e^{-(x_{(1)}-\theta)}=ne^{-n(x_{(1)}-\theta)},\ x_{(1)}>\theta.$$

Let $U=X_{(1)}-\theta$ $\Rightarrow$ $dU=dX_{(1)}$ $\Rightarrow$ $f_U(u)=ne^{-nu}$, $u>0$ $\Rightarrow$ $U\sim\mathrm{gamma}(\alpha=n^{-1},\beta=1)$ $\Rightarrow$ $E(U)=n^{-1}$, $V(U)=n^{-2}$. Since $X_{(1)}=U+\theta$,
$$E\left[X_{(1)}\right]=\theta+\frac{1}{n}\quad\text{and}\quad V\left[X_{(1)}\right]=V(U)=\frac{1}{n^{2}}.$$
Since $\lim_{n\to\infty}E\left[X_{(1)}\right]=\theta$ and $\lim_{n\to\infty}V\left[X_{(1)}\right]=0$, $X_{(1)}$ is consistent for $\theta$.

Solution 5.4.

(a)
$$f_{Y_1,Y_2,\ldots,Y_n}(y_1,y_2,\ldots,y_n;\theta)=\prod_{i=1}^{n}\left\{\theta^{k}[\Gamma(k)]^{-1}y_i^{(k-1)}e^{-\theta y_i}\right\}
=\theta^{nk}[\Gamma(k)]^{-n}\left(\prod_{i=1}^{n}y_i\right)^{k-1}e^{-\theta\sum_{i=1}^{n}y_i}
=g(u;\theta)\cdot h(y_1,y_2,\ldots,y_n),$$
where
$$g(u;\theta)=\theta^{nk}e^{-\theta\sum_{i=1}^{n}y_i}\quad\text{and}\quad h(y_1,y_2,\ldots,y_n)=[\Gamma(k)]^{-n}\left(\prod_{i=1}^{n}y_i\right)^{k-1}.$$
Given $U=\bar Y=\bar y$, the function $h(y_1,y_2,\ldots,y_n)$ does not in any way depend on $\theta$, so that $\bar Y$ is a sufficient statistic for $\theta$.

(b) Since $Y_i\sim\mathrm{gamma}(\alpha=\theta^{-1},\beta=k)\ \forall\,i$, and the $\{Y_i\}$ are mutually independent, we know that
$$S=\sum_{i=1}^{n}Y_i\sim\mathrm{GAMMA}(\alpha=\theta^{-1},\beta=kn)$$
and that
$$E(S^{r})=\frac{\Gamma(kn+r)}{\Gamma(kn)}\theta^{-r},\ (kn+r)>0.$$

SOLUTIONS TO EVEN-NUMBERED EXERCISES

101

So, h −1 i E Y = =

E

n

= nE(S −1 ) = n

Γ(kn − 1) θ Γ(kn)

S n θ, (kn − 1) > 0. (kn − 1)

So, the function of Y we seek is   kn − 1 −1 Y , θˆ = n ˆ = θ ∀n. To determine if θˆ is consistent for θ, we will find V(θ). ˆ since E(θ) Now, 2  ˆ = E(θˆ2 ) − θ2 = kn − 1 E(Y −2 ) − θ2 V(θ) n 2    kn − 1 2 Γ(kn − 2) n θ2 − θ2 = n Γ(kn)   (kn − 1)2 2 −1 = θ (kn − 1)(kn − 2)   (kn − 1) −1 = θ2 (kn − 2) θ2 , kn > 2. = (kn − 2)

Since

ˆ = lim lim V(θ)

n→∞

n→∞



 θ2 = 0, (kn − 2)

ˆ = θ ∀ n, it follows that θˆ is a consistent estimator of θ. and since E(θ) (c) From the Central Limit Theorem, we know that, for n large,  r √ Y − kθ Y − E(Y ) n q =p ˙ 1). θY − nk ∼N(0, = k k/nθ2 V(Y ) So, if Z ∼ N(0, 1), 0.95 = . = = =

pr(−1.96 < Z < 1.96) r   √ n pr −1.96 < θY − nk < 1.96 k r   √ √ n pr nk − 1.96 < θ Y < nk + 1.96 k ) (r r k √ k √ −1 −1 . pr ( nk − 1.96)Y ( nk + 1.96)Y 0. Γ(β)  −1   n Γ(n − 1) 1 = θ. = nE(U −1 ) = n Γ(n) θ n−1 =

˜ = θ and θˆ = −n/ (Pn lnXi ) is a function Thus, θ˜ = [(n − 1)/n)]θˆ since E(θ) i=1 of a sufficient statistic (namely, U or eU ) for θ. Since Γ(n − 2) 2 θ2 θ = , it follows that Γ(n) (n − 1)(n − 2)   θ2 θ2 n2 θ 2 2 2 2 ˆ ˆ ˆ V(θ) = E(θ ) − [E(θ)] = n − . = (n − 1)(n − 2) (n − 1)2 (n − 1)2 (n − 2)

E(U −2 ) =

SOLUTIONS TO EVEN-NUMBERED EXERCISES: ESTIMATION THEORY

Thus,
$$\mathrm{V}(\tilde{\theta}) = \frac{(n-1)^2\theta^2}{n^2(n-2)} > \frac{\theta^2}{n} = \text{C-R BOUND},$$
so $\tilde{\theta}$ does not achieve the C-R BOUND for finite $n$. However, the ratio $\text{C-R BOUND}/\mathrm{V}(\tilde{\theta}) = n(n-2)/(n-1)^2$ has a limiting value of 1 as $n \to \infty$. Clearly, $\tilde{\theta} \doteq \hat{\theta}$ for large $n$, and $\hat{\theta}$ is asymptotically fully efficient.

Solution 5.18. With $\mathbf{y} = (y_1, y_2, \ldots, y_n)$, the likelihood function is
$$L(\mathbf{y};\alpha) = \prod_{i=1}^{n}\left(\alpha e^{-\alpha y_i}\right) = \alpha^n e^{-n\alpha\bar{y}}.$$
So, $\ln L = n\ln\alpha - n\alpha\bar{y}$. Thus,
$$\frac{\partial \ln L}{\partial\alpha} = \frac{n}{\alpha} - n\bar{y} \quad\text{and}\quad \frac{\partial^2 \ln L}{\partial\alpha^2} = -\frac{n}{\alpha^2}.$$
The equation $\partial\ln L/\partial\alpha = 0$ gives $\hat{\alpha} = 1/\bar{Y}$ as the maximum likelihood estimator of $\alpha$, and $\hat{\mathrm{V}}(\hat{\alpha}) \doteq \hat{\alpha}^2/n$. Since $(\hat{\alpha}-\alpha)/\sqrt{\hat{\mathrm{V}}(\hat{\alpha})} \,\dot\sim\, \mathrm{N}(0,1)$ for large $n$ (here the confidence coefficient $1-\alpha$ is distinct from the exponential parameter $\alpha$),
$$(1-\alpha) \approx \mathrm{pr}\left[\hat{\alpha} - Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat{\alpha})} < \alpha < \hat{\alpha} + Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat{\alpha})}\right] = \mathrm{pr}\left[\frac{2}{\left(\hat{\alpha}+Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat{\alpha})}\right)^2} < \theta^2 < \frac{2}{\left(\hat{\alpha}-Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat{\alpha})}\right)^2}\right],$$
where $\theta^2 = 2/\alpha^2$ and where $\hat{\alpha} > Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat{\alpha})}$, for large $n$. For the given data, the computed 95% confidence interval for $\theta^2 = 2/\alpha^2$ is
$$\left(\frac{2}{\left[0.25 + 1.96\left(\frac{0.25}{10}\right)\right]^2},\ \frac{2}{\left[0.25 - 1.96\left(\frac{0.25}{10}\right)\right]^2}\right) = (22.371, 49.504).$$

Solution 5.20. We know that $\bar{X}_i \sim \mathrm{N}(\mu_i, 1/n_i)$, $i = 1, 2, 3$, and that $\bar{X}_1$, $\bar{X}_2$, and $\bar{X}_3$ are mutually independent random variables. Clearly, if $\hat{\theta} = (2\bar{X}_1 - 3\bar{X}_2 + \bar{X}_3)$, then $\mathrm{E}(\hat{\theta}) = \theta$, $\mathrm{V}(\hat{\theta}) = \left(\frac{4}{n_1} + \frac{9}{n_2} + \frac{1}{n_3}\right)$, and $\hat{\theta}$ is a linear combination of mutually independent normal variates. Hence, if $\mathrm{pr}(Z < Z_{1-\frac{\alpha}{2}}) = (1-\frac{\alpha}{2})$ when $Z\sim\mathrm{N}(0,1)$, then
$$(1-\alpha) = \mathrm{pr}\left[-Z_{1-\frac{\alpha}{2}} < \frac{\hat{\theta}-\theta}{\sqrt{\mathrm{V}(\hat{\theta})}} < Z_{1-\frac{\alpha}{2}}\right] = \mathrm{pr}\left[\hat{\theta} - Z_{1-\frac{\alpha}{2}}\sqrt{\mathrm{V}(\hat{\theta})} < \theta < \hat{\theta} + Z_{1-\frac{\alpha}{2}}\sqrt{\mathrm{V}(\hat{\theta})}\right].$$
So, the exact $100(1-\alpha)\%$ confidence interval for $\theta$ is
$$\hat{\theta} \pm Z_{1-\frac{\alpha}{2}}\sqrt{\mathrm{V}(\hat{\theta})} = (2\bar{X}_1 - 3\bar{X}_2 + \bar{X}_3) \pm Z_{1-\frac{\alpha}{2}}\sqrt{\frac{4}{n_1}+\frac{9}{n_2}+\frac{1}{n_3}}.$$
For the given data, the computed 95% confidence interval for $\theta$ is $(11.53, 14.47)$.

Solution 5.22.

(a) $Q = \sum_{i=1}^{n}(Y_i-\theta x_i)^2$. So, $\frac{dQ}{d\theta} = -2\sum_{i=1}^{n}x_i(Y_i-\theta x_i)$. Thus, the equation $\frac{dQ}{d\theta} = 0$ gives
$$\sum_{i=1}^{n}x_iY_i - \theta\sum_{i=1}^{n}x_i^2 = 0, \quad\text{or}\quad \hat{\theta} = \sum_{i=1}^{n}x_iY_i\bigg/\sum_{i=1}^{n}x_i^2.$$
Since $\frac{d^2Q}{d\theta^2} = 2\sum_{i=1}^{n}x_i^2 > 0$, $\hat{\theta}$ minimizes $Q$.

(b) Note that $\hat{\theta}$ can be written in the form
$$\hat{\theta} = \sum_{i=1}^{n}x_iY_i\bigg/\sum_{i=1}^{n}x_i^2 = \sum_{i=1}^{n}c_iY_i,$$
where $c_i = x_i/\sum_{j=1}^{n}x_j^2$, $i = 1, 2, \ldots, n$. So, given that $X_i = x_i$, $i = 1, 2, \ldots, n$, so that the $c_i$'s are fixed constants, it then follows that $\hat{\theta}$ is a linear combination of mutually independent normal random variables and therefore is itself normally distributed. Also,
$$\mathrm{E}(\hat{\theta}\mid x_i\text{'s fixed}) = \sum_{i=1}^{n}c_i(\theta x_i) = \theta\left(\frac{\sum_{i=1}^{n}x_i^2}{\sum_{i=1}^{n}x_i^2}\right) = \theta,$$
and
$$\mathrm{V}(\hat{\theta}\mid x_i\text{'s fixed}) = \sum_{i=1}^{n}c_i^2(\theta^2x_i^2) = \theta^2\sum_{i=1}^{n}x_i^4\bigg/\left(\sum_{i=1}^{n}x_i^2\right)^2.$$
So, the distribution of $\hat{\theta}$, given that the $x_i$'s are fixed, is
$$\mathrm{N}\left\{\theta,\ \frac{\theta^2\sum_{i=1}^{n}x_i^4}{\left(\sum_{i=1}^{n}x_i^2\right)^2}\right\}.$$
Also, if $Z_{1-\frac{\alpha}{2}}$ is such that $\mathrm{pr}(Z > Z_{1-\frac{\alpha}{2}}) = \alpha/2$, $0 < \alpha \le 0.20$, when $Z\sim\mathrm{N}(0,1)$, then, letting $k = \sum_{i=1}^{n}x_i^4/\left(\sum_{i=1}^{n}x_i^2\right)^2$,
$$(1-\alpha) = \mathrm{pr}\left(-Z_{1-\frac{\alpha}{2}} < \frac{\hat{\theta}-\theta}{\theta\sqrt{k}} < Z_{1-\frac{\alpha}{2}}\right) = \mathrm{pr}\left(\frac{\hat{\theta}}{1+Z_{1-\frac{\alpha}{2}}\sqrt{k}} < \theta < \frac{\hat{\theta}}{1-Z_{1-\frac{\alpha}{2}}\sqrt{k}}\right),$$
provided that $\left(1 - Z_{1-\frac{\alpha}{2}}\sqrt{k}\right) > 0$.
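The interval inversion in Solution 5.22(b) is easy to check numerically. The sketch below uses hypothetical x-values and a hypothetical point estimate (neither comes from the exercise) simply to show that the inversion produces a positive interval containing $\hat\theta$ whenever $1 - Z_{1-\alpha/2}\sqrt{k} > 0$:

```python
import math

# Hypothetical illustration of the Solution 5.22(b) interval
# (theta_hat/(1 + z*sqrt(k)), theta_hat/(1 - z*sqrt(k))),
# where k = sum(x_i^4) / (sum(x_i^2))^2.  The x-values and theta_hat
# below are made up; they are not data from the exercise.
xs = [float(i) for i in range(1, 21)]
k = sum(x**4 for x in xs) / sum(x**2 for x in xs) ** 2
z = 1.96
theta_hat = 2.5
assert 1 - z * math.sqrt(k) > 0  # condition needed for a finite upper limit
lower = theta_hat / (1 + z * math.sqrt(k))
upper = theta_hat / (1 - z * math.sqrt(k))
print(0 < lower < theta_hat < upper)  # True
```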

Solution 5.24. Since $(X_i-\mu_0)/\sigma \sim \mathrm{N}(0,1)$ and since $X_1, X_2, \ldots, X_n$ are mutually independent random variables, it follows that
$$U = \frac{\sum_{i=1}^{n}(X_i-\mu_0)^2}{\sigma^2} = \frac{n\hat\sigma^2}{\sigma^2} \sim \chi^2_n, \quad\text{where } \hat\sigma^2 = \sum_{i=1}^{n}(x_i-\mu_0)^2/n.$$
So,
$$(1-\alpha) = \mathrm{pr}\left[\chi^2_{n,\frac{\alpha}{2}} < U < \chi^2_{n,1-\frac{\alpha}{2}}\right] = \mathrm{pr}\left[\chi^2_{n,\frac{\alpha}{2}} < \frac{n\hat\sigma^2}{\sigma^2} < \chi^2_{n,1-\frac{\alpha}{2}}\right] = \mathrm{pr}\left[\sqrt{\frac{n\hat\sigma^2}{\chi^2_{n,1-\frac{\alpha}{2}}}} < \sigma < \sqrt{\frac{n\hat\sigma^2}{\chi^2_{n,\frac{\alpha}{2}}}}\right].$$

Solution 5.26.

(a) With $S = \sum_{i=1}^{n}X_i$, so that $f_S(s;\theta) = \theta^n s^{n-1}e^{-\theta s}/\Gamma(n)$, $s > 0$,
$$\mathrm{E}(S^{-1}) = \int_0^\infty s^{-1}\frac{\theta^n s^{n-1}e^{-\theta s}}{\Gamma(n)}\,ds = \frac{\Gamma(n-1)}{\Gamma(n)}\theta = \frac{\theta}{(n-1)}.$$
Hence,
$$\hat\theta = \frac{(n-1)}{S} = \left(\frac{n-1}{n}\right)\bar{X}^{-1}$$
is the MVUE of $\theta$, $n \ge 3$.

(b) From part (a), $\ln L = n\ln\theta - \theta s$, so that
$$\frac{d\ln L}{d\theta} = \frac{n}{\theta} - s \quad\text{and}\quad \frac{d^2\ln L}{d\theta^2} = \frac{-n}{\theta^2}.$$
Hence, the CRLB is $\theta^2/n$. Now,
$$\mathrm{E}(S^{-2}) = \frac{\Gamma(n-2)}{\Gamma(n)}\theta^2 = \frac{\theta^2}{(n-1)(n-2)},$$
so that, for $n \ge 3$,
$$\mathrm{V}(S^{-1}) = \mathrm{E}(S^{-2}) - \left[\mathrm{E}(S^{-1})\right]^2 = \frac{\theta^2}{(n-1)(n-2)} - \frac{\theta^2}{(n-1)^2}.$$
Hence,
$$\mathrm{V}(\hat\theta) = \mathrm{V}\left[(n-1)S^{-1}\right] = (n-1)^2\left[\frac{\theta^2}{(n-1)(n-2)} - \frac{\theta^2}{(n-1)^2}\right] = \frac{\theta^2}{(n-2)}, \quad n \ge 3.$$
So,
$$\mathrm{EFF}(\hat\theta,\theta) = \frac{\text{CRLB}}{\mathrm{V}(\hat\theta)} = \frac{\theta^2/n}{\theta^2/(n-2)} = \frac{(n-2)}{n}, \quad n \ge 3.$$
And, $\lim_{n\to\infty}\mathrm{EFF}(\hat\theta,\theta) = \lim_{n\to\infty}\left(\frac{n-2}{n}\right) = 1$, so that $\hat\theta$ is an asymptotically fully efficient estimator of $\theta$.
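As a quick simulation sketch (not part of the original solution), the identity $\mathrm{E}(S^{-1}) = \theta/(n-1)$ behind the MVUE $\hat\theta = (n-1)/S$ can be checked by seeded Monte Carlo; the parameter values, replication count, and seed below are arbitrary choices:

```python
import random

# Seeded Monte Carlo check that E(1/S) = theta/(n-1) when S is the sum of
# n iid exponential(rate = theta) variables, so (n-1)/S is unbiased for theta.
random.seed(12345)
theta, n, reps = 2.0, 5, 200_000
total = 0.0
for _ in range(reps):
    s = sum(random.expovariate(theta) for _ in range(n))
    total += 1.0 / s
print(abs(total / reps - theta / (n - 1)) < 0.01)  # True (target value 0.5)
```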

Solution 5.28. Since
$$\mathrm{E}\left[(\bar X)^2\right] = \mathrm{V}(\bar X) + \left[\mathrm{E}(\bar X)\right]^2 = \frac{\sigma^2}{n} + \mu^2,$$
it follows that, with $\hat\theta_1 = (\bar X)^2$ and $\theta = \mu^2$,
$$\mathrm{MSE}(\hat\theta_1,\theta) = \mathrm{V}(\hat\theta_1) + \left[\mathrm{E}(\hat\theta_1)-\theta\right]^2 = \mathrm{V}(\hat\theta_1) + \left(\frac{\sigma^2}{n}+\mu^2-\mu^2\right)^2 = \mathrm{V}(\hat\theta_1) + \frac{\sigma^4}{n^2}.$$
And, since
$$\mathrm{E}\left[(\bar X)^2 - \frac{S^2}{n}\right] = \left(\frac{\sigma^2}{n}+\mu^2\right) - \frac{\sigma^2}{n} = \mu^2 = \theta,$$
we let $\hat\theta_2 = (\bar X)^2 - \frac{S^2}{n}$. Now, since $\bar X$ and $S^2$ are independent random variables, we have
$$\mathrm{MSE}(\hat\theta_2,\theta) = \mathrm{V}(\hat\theta_2) = \mathrm{V}\left[(\bar X)^2\right] + \frac{\mathrm{V}(S^2)}{n^2} = \mathrm{V}(\hat\theta_1) + \frac{2\sigma^4}{n^2(n-1)}.$$
So,
$$\mathrm{MSE}(\hat\theta_1,\theta) - \mathrm{MSE}(\hat\theta_2,\theta) = \frac{\sigma^4}{n^2} - \frac{2\sigma^4}{n^2(n-1)} = \frac{\sigma^4}{n^2}\left(1-\frac{2}{n-1}\right).$$
Thus, for $n = 2$, $\hat\theta_1$ has a smaller MSE than $\hat\theta_2$; for $n = 3$, the MSEs of $\hat\theta_1$ and $\hat\theta_2$ are equal; and, for $n > 3$, $\hat\theta_2$ has a smaller MSE than $\hat\theta_1$.

Solution 5.30. First, since $\ln\mathrm{E}(X) = (\mu+\sigma^2/2)$, we can first compute an appropriate ML-based large-sample confidence interval for the parametric function $(\mu+\sigma^2/2)$, and then easily convert it to one for $\mathrm{E}(X)$. The maximum likelihood estimator (MLE) of $(\mu+\sigma^2/2)$ is $(\hat\mu+\hat\sigma^2/2)$, where $\hat\mu$ and $\hat\sigma^2$ are, respectively, the MLEs of $\mu$ and $\sigma^2$. Now, with $y_i = \ln x_i$ for $i = 1, 2, \ldots, n$, and with $\mathbf{y} = (y_1,\ldots,y_n)$, the likelihood $L(\mathbf{y};\mu,\sigma^2)\equiv L$ takes the form
$$L = \prod_{i=1}^n(2\pi\sigma^2)^{-1/2}e^{-(y_i-\mu)^2/2\sigma^2} = (2\pi\sigma^2)^{-n/2}e^{-\sum_{i=1}^n(y_i-\mu)^2/2\sigma^2},$$
so that
$$\ln L = -\frac n2\ln 2\pi - \frac n2\ln\sigma^2 - \frac{\sum_{i=1}^n(y_i-\mu)^2}{2\sigma^2}.$$
Thus, solving simultaneously the two equations
$$\frac{\partial\ln L}{\partial\mu} = \frac{1}{\sigma^2}\sum_{i=1}^n(y_i-\mu) = 0 \quad\text{and}\quad \frac{\partial\ln L}{\partial(\sigma^2)} = -\frac{n}{2\sigma^2} + \frac{\sum_{i=1}^n(y_i-\mu)^2}{2\sigma^4} = 0$$
gives
$$\hat\mu = n^{-1}\sum_{i=1}^n Y_i = \bar Y \quad\text{and}\quad \hat\sigma^2 = n^{-1}\sum_{i=1}^n(Y_i-\bar Y)^2 = \left(\frac{n-1}{n}\right)S^2,$$
so that $\hat\mu + \hat\sigma^2/2 = \bar Y + \left(\frac{n-1}{2n}\right)S^2$. Now,
$$-\mathrm{E}\left(\frac{\partial^2\ln L}{\partial\mu^2}\right) = \frac{n}{\sigma^2}, \quad -\mathrm{E}\left[\frac{\partial^2\ln L}{\partial\mu\,\partial(\sigma^2)}\right] = 0, \quad\text{and}\quad -\mathrm{E}\left[\frac{\partial^2\ln L}{\partial(\sigma^2)^2}\right] = \frac{n}{2\sigma^4}.$$
So, the large-sample variance of $(\hat\mu+\hat\sigma^2/2)$ is
$$\mathrm{V}(\hat\mu) + \frac14\mathrm{V}(\hat\sigma^2) = \frac{\sigma^2}{n} + \frac14\cdot\frac{2\sigma^4}{n} = \frac{\sigma^2(2+\sigma^2)}{2n}.$$
Thus, from ML theory, since
$$\frac{(\hat\mu+\frac{\hat\sigma^2}{2}) - \ln\mathrm{E}(X)}{\sqrt{\hat\sigma^2(2+\hat\sigma^2)/2n}}\,\dot\sim\,\mathrm{N}(0,1) \text{ for large } n,$$
it follows that the ML-based large-sample 95% confidence interval for $\ln\mathrm{E}(X)$ is $(\hat\mu+\hat\sigma^2/2)\pm1.96\sqrt{\hat\sigma^2(2+\hat\sigma^2)/2n}$, so that the ML-based large-sample 95% confidence interval for $\mathrm{E}(X)$ is
$$e^{(\hat\mu+\hat\sigma^2/2)\pm1.96\sqrt{\hat\sigma^2(2+\hat\sigma^2)/2n}}.$$
For the given data, $n = 30$, $\hat\mu = \bar y = 3.00$, and $\hat\sigma^2 = \left(\frac{n-1}{n}\right)s^2 = \left(\frac{29}{30}\right)(2.50) = 2.4167$, so that we have
$$e^{(3.00+2.4167/2)\pm1.96\sqrt{2.4167(2+2.4167)/2(30)}} = e^{4.2084\pm0.8267},$$
giving $(29.421, 153.715)$ as the computed ML-based large-sample 95% confidence interval for $\mathrm{E}(X) = e^{\mu+\sigma^2/2}$.

Solution 5.32.

(a) We know that $X_i\sim\mathrm{BIN}(n,\pi_i)$, $i = 1, 2, \ldots, k$, and that $\mathrm{cov}(X_i,X_j) = -n\pi_i\pi_j$ for all $i\ne j$. So,
$$\mathrm{E}(\hat\theta_{ij}) = \frac{\mathrm{E}(X_i)}{n} - \frac{\mathrm{E}(X_j)}{n} = \frac{n\pi_i}{n} - \frac{n\pi_j}{n} = (\pi_i-\pi_j) = \theta_{ij}.$$

And,
$$\mathrm{V}(\hat\theta_{ij}) = n^{-2}\left[\mathrm{V}(X_i)+\mathrm{V}(X_j)-2\,\mathrm{cov}(X_i,X_j)\right] = n^{-2}\left[n\pi_i(1-\pi_i)+n\pi_j(1-\pi_j)-2(-n\pi_i\pi_j)\right] = n^{-1}\left[\pi_i+\pi_j-(\pi_i-\pi_j)^2\right].$$

(b) Since $\hat\theta_{12} = (\hat\pi_1-\hat\pi_2)$ is the maximum likelihood estimator of the parameter $\theta_{12} = (\pi_1-\pi_2)$, it follows that
$$\frac{\hat\theta_{12}-\theta_{12}}{\sqrt{\hat{\mathrm{V}}(\hat\theta_{12})}} = \frac{(\hat\pi_1-\hat\pi_2)-(\pi_1-\pi_2)}{\sqrt{n^{-1}\left[\hat\pi_1+\hat\pi_2-(\hat\pi_1-\hat\pi_2)^2\right]}}\,\dot\sim\,\mathrm{N}(0,1)$$
for large $n$, giving
$$(\hat\pi_1-\hat\pi_2)\pm1.96\sqrt{n^{-1}\left[\hat\pi_1+\hat\pi_2-(\hat\pi_1-\hat\pi_2)^2\right]}$$
as an appropriate large-sample 95% confidence interval for $\theta_{12}$. For the available data, $\hat\pi_1 = 0.50$ and $\hat\pi_2 = 0.20$, giving $0.30\pm0.153$, or $(0.147, 0.453)$, as the computed 95% confidence interval for $\theta_{12}$.

Solution 5.34.

(a) First, let $U = \ln(\sigma) = \ln(\sigma^2)/2$. Hence, $dU = (2\sigma^2)^{-1}d\sigma^2$, so that the Jacobian is $J = 1/(2\sigma^2)$. Thus, $\pi(U)\propto1 \Rightarrow \pi(\sigma^2)\propto1/(2\sigma^2)\propto1/\sigma^2$. So, assuming prior independence between $\mu$ and $\sigma^2$,
$$\pi(\mu\mid\mathbf{Y}=\mathbf{y}) \propto \int_0^\infty f_{\mathbf{Y}}(\mathbf{y}\mid\mu,\sigma^2)\pi(\mu)\pi(\sigma^2)\,d\sigma^2 \propto \int_0^\infty(\sigma^2)^{-\frac n2-1}e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n(y_i-\mu)^2}\,d\sigma^2.$$
Now,
$$\sum_{i=1}^n(y_i-\mu)^2 = \sum_{i=1}^n y_i^2 - n\bar y^2 + n(\mu-\bar y)^2 = (n-1)s^2 + n(\mu-\bar y)^2,$$
where $\bar y = \frac1n\sum_{i=1}^n y_i$ and $s^2 = \frac{1}{n-1}\sum_{i=1}^n(y_i-\bar y)^2$. So, letting $A = [(n-1)s^2+n(\mu-\bar y)^2]/2$, and recognizing an inverse-gamma kernel in $\sigma^2$ under the integral, we have
$$\pi(\mu\mid\mathbf{Y}=\mathbf{y}) \propto \int_0^\infty(\sigma^2)^{-\frac n2-1}e^{-A/\sigma^2}\,d\sigma^2 \propto A^{-n/2} = \left[(n-1)s^2+n(\mu-\bar y)^2\right]^{-n/2} \propto \left[1+\frac{1}{n-1}\left(\frac{\mu-\bar y}{s/\sqrt n}\right)^2\right]^{-\frac{(n-1)+1}{2}},$$
which is proportional to (i.e., is the kernel of) a t-distribution with $(n-1)$ degrees of freedom. It then follows directly that the posterior distribution of $\psi = \frac{(\mu-\bar y)}{s/\sqrt n}$, given $\mathbf{Y}=\mathbf{y}$, is $t_{n-1}$, where $t_{n-1}$ denotes a (central) t-distribution with $(n-1)$ degrees of freedom. This posterior distribution is identical to the sampling distribution of the random variable $T_{n-1} = \frac{(\bar Y-\mu)}{S/\sqrt n}$ in the frequentist setting. Generally speaking, under diffuse priors, Bayesian and frequentist methods yield similar results.

(b) Now, under prior independence,
$$\pi(\sigma^2\mid\mathbf{Y}=\mathbf{y}) \propto \int_{-\infty}^{\infty}f_{\mathbf{Y}}(\mathbf{y}\mid\mu,\sigma^2)\pi(\mu)\pi(\sigma^2)\,d\mu \propto (\sigma^2)^{-\frac n2-1}e^{-\frac{(n-1)s^2}{2\sigma^2}}\int_{-\infty}^{\infty}e^{-\frac{n}{2\sigma^2}(\mu-\bar y)^2}\,d\mu \propto (\sigma^2)^{-\frac{(n-1)}{2}-1}e^{-\frac{(n-1)s^2}{2\sigma^2}},$$
which is the kernel of an inverse-gamma (IG) distribution with shape parameter $(n-1)/2$ and scale parameter $(n-1)s^2/2$. Thus, the posterior distribution of $\sigma^2$ given $\mathbf{Y}=\mathbf{y}$ is IG.

Solution 5.36.

(a) Since
$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n;\theta) = \prod_{i=1}^n\left(\frac{1}{\sqrt{2\pi}\sigma}e^{-(x_i-\ln\theta)^2/2\sigma^2}\right) = \left\{e^{-\frac{1}{2\sigma^2}\left[-2\ln\theta\left(\sum_{i=1}^n x_i\right)+n(\ln\theta)^2\right]}\right\}\left\{(2\pi\sigma^2)^{-n/2}e^{-\sum_{i=1}^n x_i^2/2\sigma^2}\right\},$$
it follows from the Factorization Theorem that $\sum_{i=1}^n X_i$ is a sufficient statistic for $\theta$. Since the normal distribution with unknown mean $\mu = \ln\theta$ and known variance $\sigma^2$ is a member of the exponential family of distributions, it follows directly that $\sum_{i=1}^n X_i$ is a complete sufficient statistic for $\theta$. Since $\bar X = \frac1n\sum_{i=1}^n X_i$ is a complete sufficient statistic for $\theta$ and is also an unbiased estimator of $\mu$, a reasonable first attempt at finding the MVUE of $\theta = e^\mu$ is to consider the random variable $e^{\bar X}$. Since $\bar X\sim\mathrm{N}(\mu,\sigma^2/n)$, it follows directly from moment generating function theory that $\mathrm{E}(e^{t\bar X}) = e^{(t\mu+t^2\sigma^2/2n)}$. So, $\mathrm{E}(e^{\bar X}) = e^{(\mu+\sigma^2/2n)}$. Thus, the MVUE of $\theta$ is
$$\hat\theta = e^{\left(\bar X-\frac{\sigma^2}{2n}\right)},$$
since $\mathrm{E}(\hat\theta) = \theta$ and since $\bar X$ is a complete sufficient statistic for $\theta$.

(b) Now,
$$\mathrm{V}(\hat\theta) = \mathrm{V}\left[e^{\left(\bar X-\frac{\sigma^2}{2n}\right)}\right] = e^{-\frac{\sigma^2}{n}}\mathrm{V}(e^{\bar X}) = e^{-\frac{\sigma^2}{n}}\left\{\mathrm{E}(e^{2\bar X})-\left[\mathrm{E}(e^{\bar X})\right]^2\right\} = e^{-\frac{\sigma^2}{n}}\left[e^{\left(2\mu+\frac{2\sigma^2}{n}\right)}-\left(e^{\mu+\frac{\sigma^2}{2n}}\right)^2\right] = e^{2\mu}\left(e^{\sigma^2/n}-1\right) = \theta^2\left(e^{\sigma^2/n}-1\right).$$
To find the Cramer-Rao lower bound for the variance of any unbiased estimator of $\theta$, since
$$\ln f_X(x;\theta) = -\frac12\ln(2\pi\sigma^2)-\frac{1}{2\sigma^2}(x-\ln\theta)^2,$$
we have
$$\frac{\partial\ln f_X(x;\theta)}{\partial\theta} = \frac{(x-\ln\theta)}{\theta\sigma^2}, \quad\text{so that}\quad \mathrm{E}\left\{\left[\frac{\partial\ln f_X(x;\theta)}{\partial\theta}\right]^2\right\} = \mathrm{E}\left[\frac{(X-\ln\theta)^2}{\theta^2\sigma^4}\right] = \frac{\sigma^2}{\theta^2\sigma^4} = \frac{1}{\theta^2\sigma^2}.$$
Hence, the CRLB is equal to
$$\left\{n\,\mathrm{E}\left[\left(\frac{\partial\ln f_X(x;\theta)}{\partial\theta}\right)^2\right]\right\}^{-1} = \frac{\theta^2\sigma^2}{n}.$$
Finally,
$$\mathrm{RE} = \frac{\text{CRLB}}{\mathrm{V}(\hat\theta)} = \frac{\theta^2\sigma^2/n}{\theta^2\left(e^{\sigma^2/n}-1\right)} = \frac{\sigma^2/n}{\left(e^{\sigma^2/n}-1\right)} = \frac{\sigma^2/n}{\sum_{j=1}^{\infty}\frac{(\sigma^2/n)^j}{j!}} = \left[1+\sum_{j=2}^{\infty}\frac{(\sigma^2/n)^{j-1}}{j!}\right]^{-1}.$$
Thus, $\mathrm{RE} < 1$ for all finite values of $n$, so that $\hat\theta$ is not fully efficient for any finite value of $n$. However, $\lim_{n\to\infty}\mathrm{RE} = 1$; hence, the ARE of $\hat\theta$ equals 1, so that $\hat\theta$ is asymptotically a fully efficient unbiased estimator of the parameter $\theta = e^\mu$.
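The behavior of the relative efficiency in Solution 5.36(b) can be illustrated numerically; the value $\sigma^2 = 1$ below is an arbitrary choice for the sketch:

```python
import math

# Relative efficiency from Solution 5.36: RE = (sigma^2/n)/(e^{sigma^2/n} - 1).
# RE < 1 for every finite n but increases toward 1 as n grows.
sigma2 = 1.0

def re(n):
    x = sigma2 / n
    return x / math.expm1(x)  # expm1(x) = e^x - 1, computed accurately

print(re(5) < re(50) < re(5000) < 1.0)  # True: monotone approach to 1
print(abs(re(5000) - 1.0) < 2e-4)       # True
```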

Solution 5.38.

(a)
$$\mathrm{E}(W) = \mu_w = \mathrm{E}(XY) = \mathrm{E}(X)\mathrm{E}(Y) = \mu_x\mu_y.$$
$$\mathrm{V}(W) = \sigma_w^2 = \mathrm{E}(W^2)-\left[\mathrm{E}(W)\right]^2 = \mathrm{E}(X^2)\mathrm{E}(Y^2)-\mu_x^2\mu_y^2 = \left(\sigma_x^2+\mu_x^2\right)\left(\sigma_y^2+\mu_y^2\right)-\mu_x^2\mu_y^2 = \sigma_x^2\sigma_y^2+\sigma_x^2\mu_y^2+\mu_x^2\sigma_y^2.$$
So,
$$(\mathrm{CV}_w)^2 = \frac{\sigma_w^2}{\mu_w^2} = \frac{\sigma_x^2\sigma_y^2+\sigma_x^2\mu_y^2+\mu_x^2\sigma_y^2}{\mu_x^2\mu_y^2} = (\mathrm{CV}_x)^2(\mathrm{CV}_y)^2+(\mathrm{CV}_x)^2+(\mathrm{CV}_y)^2,$$
$$\Rightarrow \mathrm{CV}_w = \left[(\mathrm{CV}_x)^2(\mathrm{CV}_y)^2+(\mathrm{CV}_x)^2+(\mathrm{CV}_y)^2\right]^{1/2}.$$

(b) $\mathrm{E}(X) = \mathrm{V}(X) = \mathrm{E}(Y) = \mathrm{V}(Y) = \beta \Rightarrow \mathrm{CV}_x = \mathrm{CV}_y = \beta^{1/2}/\beta = \beta^{-1/2}$. So,
$$\mathrm{CV}_w = \left[\left(\beta^{-1/2}\right)^2\left(\beta^{-1/2}\right)^2+\left(\beta^{-1/2}\right)^2+\left(\beta^{-1/2}\right)^2\right]^{1/2} = \left(\beta^{-2}+2\beta^{-1}\right)^{1/2} = \left(\frac{1}{\beta^2}+\frac{2}{\beta}\right)^{1/2}.$$
Since the point estimate of $\beta$ is $\hat\beta = 2$, the point estimate of $\mathrm{CV}_w$ is
$$\widehat{\mathrm{CV}}_w = (1/4+2/2)^{1/2} = (5/4)^{1/2} = 1.118.$$
To construct an ML-based 95% confidence interval for $\mathrm{CV}_w$, we need an estimate of $\mathrm{V}(\widehat{\mathrm{CV}}_w)$. Now, $\widehat{\mathrm{CV}}_w = g(\hat\beta) = \left(\frac{1}{\hat\beta^2}+\frac{2}{\hat\beta}\right)^{1/2}$, and
$$g'(\hat\beta) = \frac12\left(\frac{1}{\hat\beta^2}+\frac{2}{\hat\beta}\right)^{-1/2}\left(-\frac{2}{\hat\beta^3}-\frac{2}{\hat\beta^2}\right) = \frac{-(1+\hat\beta)}{\sqrt{\hat\beta^4+2\hat\beta^5}}.$$
So, using the delta method, we have
$$\hat{\mathrm{V}}(\widehat{\mathrm{CV}}_w) \doteq \left[g'(\hat\beta)\right]^2\hat{\mathrm{V}}(\hat\beta) = \frac{(1+\hat\beta)^2}{(\hat\beta^4+2\hat\beta^5)}(0.04) = \frac{(1+2)^2}{(2^4+2^6)}(0.04) = 0.0045.$$
So, the ML-based 95% confidence interval for $\mathrm{CV}_w$ is
$$\widehat{\mathrm{CV}}_w \pm 1.96\sqrt{\hat{\mathrm{V}}(\widehat{\mathrm{CV}}_w)} = 1.118\pm1.96\sqrt{0.0045} = 1.118\pm0.132 \Rightarrow (0.986, 1.250).$$

=

=

pr(S = s) = pr(X1 + X2 = s) = s−1 X

s−1 X

l=1 s−1 X

pr[(X1 = l) ∩ (X2 = s − l)]

(1 − θ)s−l (1 − θ)l · (−lnθ)−1 l (s − l) l=1 l=1   s−1 s−1 X X 1 1 1 1 (−lnθ)−2 (1 − θ)s = (−lnθ)−2 (1 − θ)s + l(s − l) s l (s − l) l=1 l=1  s−1  (1 − θ)s X 1 1 + s(lnθ)2 l (s − l) pr(X1 = l)pr(X2 = s − l) =

(−lnθ)−1

l=1

=

(b) Let U = Then,



s−1 s X

2(1 − θ) s(lnθ)2

l=1

l−1 ,

s = 2, 3, . . . , ∞.

1, X1 = 1; . 0, X1 > 1

E(U ) = pr(X1 = 1) = (−lnθ)−1 (1 − θ)1 /1 =

(θ − 1) . lnθ

Using the Rao-Blackwell Theorem, π ˆ

pr[(X1 = 1) ∩ (S = s)] pr(S = s) pr[(X1 = 1) ∩ (X2 = s − 1)] pr(X1 = 1)pr(X2 = s − 1) = pr(S = s) pr(S = s) !−1 (−lnθ)−1 (1−θ)1 (−lnθ)−1 (1−θ)s−1 s−1 X s 1 s−1 −1 , s = 2, 3, . . . , ∞. l = s−1 2(s − 1) 2(1−θ)s P −1 l=1 l s(lnθ)2

= E(U |S = s) = pr(X1 = 1|S = s) = = =

l=1

SOLUTIONS TO EVEN-NUMBERED EXERCISES

121

When X1 = 2 and X2 = 3, so that S = (X1 + X2 ) = 5, then π ˆ= !−1 5−1 X 5 −1 l E(U |S = s) = 2(5 − 1) l=1

=

5 −1 (1/1 + 1/2 + 1/3 + 1/4) = 0.30. 8
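The final Rao-Blackwellized value can be confirmed with exact rational arithmetic:

```python
from fractions import Fraction

# Exact check of the Rao-Blackwellized estimate in Solution 5.40(b):
# pi_hat = [s/(2(s-1))] / (1 + 1/2 + ... + 1/(s-1)) at s = 5 equals 0.30.
s = 5
harmonic = sum(Fraction(1, l) for l in range(1, s))  # 1 + 1/2 + 1/3 + 1/4
pi_hat = Fraction(s, 2 * (s - 1)) / harmonic
print(pi_hat == Fraction(3, 10))  # True
```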

Solution 5.42.

(a) Note that
$$\theta = \mathrm{pr}(Y>1\mid Y>0) = \frac{\mathrm{pr}(Y>1)}{\mathrm{pr}(Y>0)} = \frac{1-\mathrm{pr}(Y=0)-\mathrm{pr}(Y=1)}{1-\mathrm{pr}(Y=0)} = \frac{1-\pi-\pi(1-\pi)}{(1-\pi)} = (1-\pi),$$
so that we need to find the MVUE for $(1-\pi)$, or equivalently, for $\pi$ itself. Clearly, if $\hat\pi$ is the MVUE of $\pi$, then $(1-\hat\pi)$ is the MVUE of $(1-\pi) = \theta$. Since $p_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n) = \pi^n(1-\pi)^{\sum_{i=1}^n y_i}$, clearly $U_n = \sum_{i=1}^n Y_i$ is a sufficient statistic for $\pi$. And, from exponential family theory, $U_n$ is a complete sufficient statistic for $\pi$. Also,
$$\mathrm{E}(U_n) = n\mathrm{E}(Y) = n\left(\frac1\pi-1\right) = \frac{n(1-\pi)}{\pi},$$
so that it is appropriate to use the Rao-Blackwell Theorem. Let $U^* = 1$ if $Y_1 = 0$ and $U^* = 0$ if $Y_1 > 0$; then, $\mathrm{E}(U^*) = \pi$. So,
$$\mathrm{E}(U^*\mid U_n=u_n) = \mathrm{pr}(Y_1=0\mid U_n=u_n) = \frac{\mathrm{pr}(Y_1=0)\,\mathrm{pr}\left(\sum_{i=2}^n Y_i = u_n\right)}{\mathrm{pr}\left(\sum_{i=1}^n Y_i = u_n\right)}.$$
So, since $U_n = \sum_{i=1}^n Y_i \sim \mathrm{NEGBIN}(k=n,\pi)$, we have
$$\hat\pi = \mathrm{E}(U^*\mid U_n=u_n) = \frac{\pi\left[C^{u_n+n-2}_{n-2}\pi^{n-1}(1-\pi)^{u_n}\right]}{C^{u_n+n-1}_{n-1}\pi^n(1-\pi)^{u_n}} = \frac{C^{u_n+n-2}_{n-2}}{C^{u_n+n-1}_{n-1}} = \frac{(u_n+n-2)!/(n-2)!u_n!}{(u_n+n-1)!/(n-1)!u_n!} = \frac{(n-1)}{(u_n+n-1)}.$$
Hence, the MVUE $\hat\theta$ for $\theta = (1-\pi)$ is
$$\hat\theta = 1 - \frac{(n-1)}{(U_n+n-1)} = \frac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n Y_i + n - 1} = \frac{\bar Y}{\left(\bar Y + 1 - \frac1n\right)}.$$

(b) Since $n$ is large, it follows from the Central Limit Theorem that
$$\frac{\bar Y - \frac{(1-\pi)}{\pi}}{\sqrt{\frac{(1-\pi)}{n\pi^2}}}\,\dot\sim\,\mathrm{N}(0,1).$$
So,
$$0.95 \,\dot\approx\, \mathrm{pr}\left[-1.96 < \frac{\bar Y-\frac{1-\pi}{\pi}}{\sqrt{\frac{(1-\pi)}{n\pi^2}}} < 1.96\right] = \mathrm{pr}\left[-1.96\sqrt{\frac{(1-\pi)}{n}} < \pi\bar Y-(1-\pi) < 1.96\sqrt{\frac{(1-\pi)}{n}}\right] = \mathrm{pr}\left[\pi\bar Y-1.96\sqrt{\frac{(1-\pi)}{n}} < \theta < \pi\bar Y+1.96\sqrt{\frac{(1-\pi)}{n}}\right].$$
Since $\bar Y$ is a consistent estimator of $(1-\pi)/\pi$, $1/(1+\bar Y)$ is a consistent estimator of $\pi$. So, an appropriate large-sample 95% confidence interval for $\theta$ using these data is
$$\left(\frac{1}{1+\bar y}\right)\bar y \pm 1.96\sqrt{\frac{1-\frac{1}{1+\bar y}}{n}} = \left(\frac{1}{1+0.30}\right)(0.30)\pm1.96\sqrt{\frac{1-\frac{1}{1.30}}{100}} = 0.2308\pm0.0942,$$
or $(0.1366, 0.3250)$.

Solution 5.44.

(a) We know that $\bar Y_i\sim\mathrm{N}(\mu_i,\sigma^2/n)$ and $(n-1)S_i^2/\sigma^2\sim\chi^2_{n-1}$, and that $\bar Y_i$ and $S_i^2$ are independent random variables. Hence, the set $\{\bar Y_i, S_i^2\}_{i=1}^k$ constitutes a set of $2k$ mutually independent random variables. Also, $\hat\theta = \sum_{i=1}^k c_i\bar Y_i$ is a linear combination of mutually independent normally distributed random variables, so $\hat\theta$ itself is normally distributed

with $\mathrm{E}(\hat\theta) = \theta$ and $\mathrm{V}(\hat\theta) = (\sigma^2/n)\sum_{i=1}^k c_i^2$. Also,
$$U = \sum_{i=1}^k\frac{(n-1)S_i^2}{\sigma^2} = \frac{(n-1)\sum_{i=1}^k S_i^2}{\sigma^2} \sim \chi^2_{(N-k)},$$
where $(N-k) = \sum_{i=1}^k(n-1) = k(n-1)$. So,
$$T_{N-k} = \frac{Z}{\sqrt{U/(N-k)}} = \frac{\left(\sum_{i=1}^k c_i\bar Y_i-\theta\right)\bigg/\sqrt{\frac{\sigma^2}{n}\sum_{i=1}^k c_i^2}}{\sqrt{\frac{(n-1)\sum_{i=1}^k S_i^2}{\sigma^2(N-k)}}} = \frac{\sum_{i=1}^k c_i\bar Y_i-\theta}{\sqrt{\frac{(n-1)\sum_{i=1}^k S_i^2}{n(N-k)}\sum_{i=1}^k c_i^2}} \sim t_{N-k},$$
since $Z\sim\mathrm{N}(0,1)$, $U\sim\chi^2_{N-k}$, and $Z$ and $U$ are mutually independent. Noting that $\frac{(n-1)}{n(N-k)} = \frac{1}{nk}$, the $100(1-\alpha)\%$ confidence interval for $\theta$ is
$$\sum_{i=1}^k c_i\bar Y_i \pm t_{1-\frac{\alpha}{2},(N-k)}\left\{\left(\frac{\sum_{i=1}^k S_i^2}{k}\right)\left(\frac1n\right)\sum_{i=1}^k c_i^2\right\}^{1/2}.$$
This interval is exact, and $(N-k)$ is a maximum given the stated assumptions and the available data.

(b) Note that $\{S_1^2, S_2^2, \ldots, S^2_{k/2}\}$ is a set of mutually independent and identically distributed random variables, with $\mathrm{E}(S_i^2) = \sigma_1^2$ and $\mathrm{V}(S_i^2) = \frac{2\sigma_1^4}{(n-1)}$, $i = 1, 2, \ldots, \frac k2$; also, $\{S^2_{\frac k2+1},\ldots,S_k^2\}$ is a set of mutually independent and identically distributed random variables, with $\mathrm{E}(S_i^2) = \sigma_2^2$ and $\mathrm{V}(S_i^2) = \frac{2\sigma_2^4}{(n-1)}$, $i = \frac k2+1,\ldots,k$. Let
$$\hat\sigma_1^2 = \frac{1}{(k/2)}\sum_{i=1}^{k/2}S_i^2 \quad\text{and}\quad \hat\sigma_2^2 = \frac{1}{(k/2)}\sum_{i=\frac k2+1}^{k}S_i^2.$$
Then,
$$\mathrm{E}(\hat\sigma_1^2) = \sigma_1^2, \quad \mathrm{V}(\hat\sigma_1^2) = \left(\frac2k\right)^2\left(\frac k2\right)\frac{2\sigma_1^4}{(n-1)} = \frac{4\sigma_1^4}{k(n-1)}; \quad\text{and}\quad \mathrm{E}(\hat\sigma_2^2) = \sigma_2^2, \quad \mathrm{V}(\hat\sigma_2^2) = \frac{4\sigma_2^4}{k(n-1)}.$$
Then,
$$\frac{(\hat\sigma_1^2-\hat\sigma_2^2)-(\sigma_1^2-\sigma_2^2)}{\sqrt{\frac{4\hat\sigma_1^4}{k(n-1)}+\frac{4\hat\sigma_2^4}{k(n-1)}}}$$
converges in distribution to a $\mathrm{N}(0,1)$ random variable as $k\to\infty$ by Slutsky's Theorem, since (for $i = 1, 2$) $\hat\sigma_i^2\stackrel{\mathrm{P}}{\longrightarrow}\sigma_i^2$ as $k\to\infty$, so the ratio of estimated to true standard deviations converges in probability to 1, and
$$\frac{(\hat\sigma_i^2-\sigma_i^2)}{\sqrt{\frac{4\sigma_i^4}{k(n-1)}}}\stackrel{\mathrm{D}}{\longrightarrow}\mathrm{N}(0,1) \text{ as } k\to\infty \text{ by the Central Limit Theorem}.$$
Hence, an appropriate approximate (for large $k$) $100(1-\alpha)\%$ confidence interval for $\gamma = (\sigma_1^2-\sigma_2^2)$ is
$$(\hat\sigma_1^2-\hat\sigma_2^2)\pm Z_{1-\frac{\alpha}{2}}\sqrt{\frac{4}{k(n-1)}\left(\hat\sigma_1^4+\hat\sigma_2^4\right)}.$$
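The small constant simplification used in part (a) can be verified exactly for several $(n, k)$ pairs:

```python
from fractions import Fraction

# Check of the simplification used in Solution 5.44(a): with
# N - k = k(n-1) pooled degrees of freedom, (n-1)/(n(N-k)) reduces to 1/(nk).
for n in (2, 5, 10):
    for k in (2, 3, 6):
        N_minus_k = k * (n - 1)
        assert Fraction(n - 1, n * N_minus_k) == Fraction(1, n * k)
print("ok")
```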

Solution 5.46. Note that
$$\theta = \mathrm{pr}(Y>1\mid Y>0) = \frac{\mathrm{pr}(Y>1)}{\mathrm{pr}(Y>0)} = \frac{1-\mathrm{pr}(Y=0)-\mathrm{pr}(Y=1)}{1-\mathrm{pr}(Y=0)} = \frac{1-\pi-\pi(1-\pi)}{(1-\pi)} = (1-\pi),$$
so that we need to find the MVUE of $(1-\pi)$, or equivalently, of $\pi$. Since $p_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n;\pi) = \pi^n(1-\pi)^{\sum_{i=1}^n y_i}$, $U = \sum_{i=1}^n Y_i$ is a sufficient statistic for $\pi$. And, from exponential family theory, $U$ is a complete sufficient statistic for $\pi$. Since $\mathrm{E}(U) = \frac{n(1-\pi)}{\pi}$, we will use the Rao-Blackwell Theorem. Let $U^* = 1$ if $Y_1 = 0$ and $U^* = 0$ if $Y_1 > 0$. Then, $\mathrm{E}(U^*) = \pi$. So, since $\sum_{i=1}^n Y_i$ and $\sum_{i=2}^n Y_i$ both have negative binomial distributions, it follows that
$$\mathrm{E}(U^*\mid U=u) = \mathrm{pr}(Y_1=0\mid U=u) = \frac{\mathrm{pr}(Y_1=0)\,\mathrm{pr}\left(\sum_{i=2}^n Y_i=u\right)}{\mathrm{pr}\left(\sum_{i=1}^n Y_i=u\right)} = \frac{\pi\left[C^{u+n-2}_{n-2}\pi^{n-1}(1-\pi)^u\right]}{C^{u+n-1}_{n-1}\pi^n(1-\pi)^u} = \frac{(u+n-2)!/(n-2)!u!}{(u+n-1)!/(n-1)!u!} = \frac{(n-1)}{(u+n-1)}.$$
So, the MVUE of $(1-\pi)$ is
$$\hat\theta = 1-\frac{(n-1)}{\left(\sum_{i=1}^n Y_i+n-1\right)} = \frac{\sum_{i=1}^n Y_i}{\left(\sum_{i=1}^n Y_i+n-1\right)}.$$
Finally, to show directly that $\hat\theta$ is an unbiased estimator of $\theta = (1-\pi)$, we have
$$\mathrm{E}(\hat\theta) = \mathrm{E}\left[\frac{U}{(U+n-1)}\right] = \sum_{u=0}^{\infty}\frac{u}{(u+n-1)}C^{u+n-1}_{n-1}\pi^n(1-\pi)^u = \sum_{u=1}^{\infty}\frac{(u+n-2)!}{(n-1)!(u-1)!}\pi^n(1-\pi)^u$$
$$= \sum_{v=0}^{\infty}\frac{(v+n-1)!}{(n-1)!v!}\pi^n(1-\pi)^{1+v} = (1-\pi)\sum_{v=0}^{\infty}C^{v+n-1}_{n-1}\pi^n(1-\pi)^v = (1-\pi) = \theta.$$

Solution 5.48. Since we only have available the $n$ random variables $S_1, S_2, \ldots, S_n$, we need to find the distribution of $S$. Consider a 1-to-1 transformation from $(X,Y)$ to $(S,T)$, where $S = (X+Y)$ and $T = Y$. For this transformation, $X = (S-T)$, $Y = T$, $|J| = 1$, so that $f_{S,T}(s,t;\theta) = 3\theta^{-3}s$,

$0 < t < s < \theta$. So,
$$f_S(s;\theta) = \int_0^s 3\theta^{-3}s\,dt = 3\theta^{-3}s^2, \quad 0 < s < \theta.$$
Since $n$ is large, it is reasonable to invoke the Central Limit Theorem and claim that
$$\frac{\bar S-\mathrm{E}(S)}{\sqrt{\mathrm{V}(\bar S)}}\,\dot\sim\,\mathrm{N}(0,1)$$
for large $n$, where $\bar S = \frac1n\sum_{i=1}^n S_i$. Since $f_S(s;\theta) = 3\theta^{-3}s^2$, $0 < s < \theta$, we have
$$\mathrm{E}(S^r) = \int_0^\theta s^r\left(3\theta^{-3}s^2\right)ds = 3\theta^{-3}\left[\frac{s^{r+3}}{(r+3)}\right]_0^\theta = \left(\frac{3}{r+3}\right)\theta^r, \quad r \ge 0.$$
So,
$$\mathrm{E}(S) = \frac{3\theta}{4}, \quad \mathrm{E}(S^2) = \frac{3\theta^2}{5}, \quad\text{and}\quad \mathrm{V}(S) = \frac{3\theta^2}{5}-\left(\frac{3\theta}{4}\right)^2 = \frac{3\theta^2}{80},$$
so that
$$\mathrm{E}(\bar S) = \frac{3\theta}{4} \quad\text{and}\quad \mathrm{V}(\bar S) = \frac{3\theta^2}{80n}.$$
Hence,
$$(1-\alpha) \doteq \mathrm{pr}\left[-Z_{1-\frac{\alpha}{2}} < \frac{\bar S-\frac{3\theta}{4}}{\sqrt{3\theta^2/80n}} < Z_{1-\frac{\alpha}{2}}\right] = \mathrm{pr}\left[\frac{\bar S}{\frac34+Z_{1-\frac{\alpha}{2}}\sqrt{3/80n}} < \theta < \frac{\bar S}{\frac34-Z_{1-\frac{\alpha}{2}}\sqrt{3/80n}}\right],$$
provided $\left(\frac34-Z_{1-\frac{\alpha}{2}}\sqrt{3/80n}\right) > 0$ for large $n$.
As another approach, note that $\hat\theta = \frac43\bar S$ is a consistent estimator of $\theta$ since $\mathrm{E}(\hat\theta) = \theta$ and
$$\lim_{n\to\infty}\mathrm{V}(\hat\theta) = \left(\frac43\right)^2\lim_{n\to\infty}\mathrm{V}(\bar S) = \left(\frac43\right)^2\lim_{n\to\infty}\frac{3\theta^2}{80n} = 0.$$
Thus, $\sqrt{\hat{\mathrm{V}}(\bar S)} = \sqrt{3\hat\theta^2/(80n)}$ is consistent for $\sqrt{\mathrm{V}(\bar S)} = \sqrt{3\theta^2/(80n)}$. Therefore, by Slutsky's Theorem,
$$\frac{\bar S-\mathrm{E}(\bar S)}{\sqrt{\hat{\mathrm{V}}(\bar S)}} = \frac{\bar S-\mathrm{E}(\bar S)}{\sqrt{\mathrm{V}(\bar S)}}\sqrt{\frac{\mathrm{V}(\bar S)}{\hat{\mathrm{V}}(\bar S)}}\,\dot\sim\,\mathrm{N}(0,1)$$
for large $n$. So,
$$(1-\alpha) \doteq \mathrm{pr}\left[-Z_{1-\frac{\alpha}{2}} < \frac{\bar S-\frac{3\theta}{4}}{\sqrt{3\hat\theta^2/(80n)}} < Z_{1-\frac{\alpha}{2}}\right] = \mathrm{pr}[L < \theta < U],$$
where (with $\hat\theta = \frac43\bar S$)
$$L = \frac43\left(\bar S-Z_{1-\frac{\alpha}{2}}\sqrt{3\hat\theta^2/80n}\right) \quad\text{and}\quad U = \frac43\left(\bar S+Z_{1-\frac{\alpha}{2}}\sqrt{3\hat\theta^2/80n}\right),$$
which is simply $\hat\theta\pm Z_{1-\frac{\alpha}{2}}\sqrt{\hat{\mathrm{V}}(\hat\theta)}$.
As a numerical illustration, suppose $\alpha = 0.05$, $n = 100$, and $\bar s = 6$, so that $\hat\theta = \frac43\bar s = 8$. Then, the computed 95% interval for $\theta$ using the first-derived interval is $(7.615, 8.427)$; for the second-derived interval, the computed 95% confidence interval for $\theta$ is $(7.595, 8.405)$.

Solution 5.50.

(a) Let
$$U^* = \begin{cases}1, & X_1 = 1;\\ 0, & X_1 \ne 1.\end{cases}$$

1, X1 = 1; 0, X1 = 6 1.

128

ESTIMATION THEORY P Then, E(U ) = pr(X1 = 1) = θ = kπ(1 − π) . Since S = ni=1 Xi is a complete sufficient statistic for θ, θˆ = E(U ∗ |S = s) is the MVUE of θ by the Rao-Blackwell Theorem. Now, ∗

k−1

pr{(X1 = 1) ∩ (S = s)} E(U ∗ |S = s) = pr(X1 = 1|S = s) = pr(S = s) Pn Pn pr{(X1 = 1) ∩ ( i=1 Xi = s)} pr{(X1 = 1) ∩ ( i=2 Xi = s − 1)} = = pr(S = s) pr(S = s) Pn pr(X1 = 1)pr( i=2 Xi = s − 1) . = pr(S = s) Note that this probability is 0 if s = 0, so that E(U ∗ |S = s) = 0 if s = 0. Also, E(U ∗ |S = s) = 0 if [s > k(n − 1) + 1] because, then, X1 must be at least 2 in value. P Since S ∼ BIN(kn, π) and ni=2 Xi ∼ BIN[k(n − 1), π], then, for 1 ≤ s ≤ [k(n − 1) + 1], we have h i k(n−1) [kπ(1 − π)k−1 ] Cs−1 π s−1 (1 − π)k(n−1)−(s−1) i h E(U ∗ |U = u) = s kn−s Ckn s π (1 − π) k(n−1)

=

kCs−1

Ckn s

.

So, the MVUE of θ is   0 k(n−1) θˆ = kCs−1 /Ckn s  0

if s = 0; if 1 ≤ s ≤ k(n − 1) + 1; if k(n − 1) + 2 ≤ s ≤ kn.

Also,

k(n−1)+1

X

ˆ = E(θ)

s=1

=

"

Ckn s

θ

X j=0

#

s kn−s Ckn s π (1 − π)

k(n−1)+1

kπ(1 − π)k−1 k(n−1)

=

k(n−1)

kCs−1

X s=1

k(n−1) s−1

Cs−1

π

(1 − π)k(n−1)−(s−1)

k(n−1) j

Cj

π (1 − π)k(n−1)−j = θ(1) = θ.

(b) The use of the binomial distribution is problematic here because teenagers in the same family share genetic, lifestyle, and demographic characteristics, so that the assumption of “independent” 0–1 trials is probably not justified. Also, π itself is expected to vary across families.
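The unbiasedness argument in part (a), in which the $C^{kn}_s$ factor cancels, can be confirmed numerically for small $k$ and $n$ and an arbitrary $\pi$:

```python
from math import comb

# Numerical confirmation of Solution 5.50(a): summing theta_hat(s) * pr(S = s)
# over s recovers theta = k*pi*(1-pi)^(k-1); the C(kn, s) factor cancels.
k, n, pi = 3, 4, 0.3
expectation = sum(k * comb(k * (n - 1), s - 1) * pi**s * (1 - pi) ** (k * n - s)
                  for s in range(1, k * (n - 1) + 2))
theta = k * pi * (1 - pi) ** (k - 1)
print(abs(expectation - theta) < 1e-12)  # True
```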

Solution 5.52. If $U = (Y-\mu)/C$, then $Y = \mu+CU$ and $dY = C\,dU$. So, $f_U(u) = e^{-u}$, $u > 0$. So, $\mathrm{E}(Y) = \mu+C$ and $\mathrm{V}(Y) = C^2$. Hence, $\mathrm{E}(\hat\mu_1) = \mathrm{E}(\bar Y)-C = (\mu+C)-C = \mu$, so that $\hat\mu_1$ is an unbiased estimator of $\mu$. Also, $\mathrm{V}(\hat\mu_1) = \mathrm{V}(\bar Y) = C^2/n$. Now,
$$F_Y(y) = \int_\mu^y C^{-1}e^{-(t-\mu)/C}\,dt = 1-e^{-(y-\mu)/C}, \quad 0 < \mu < y < +\infty.$$
So,
$$f_{Y_{(1)}}(y_{(1)}) = n\left[1-F_Y(y_{(1)})\right]^{n-1}f_Y(y_{(1)}) = n\left[e^{-(y_{(1)}-\mu)/C}\right]^{n-1}C^{-1}e^{-(y_{(1)}-\mu)/C} = \frac nCe^{-n[y_{(1)}-\mu]/C}, \quad 0 < \mu < y_{(1)} < +\infty.$$
If $V = n\left[Y_{(1)}-\mu\right]/C$, then $Y_{(1)} = \mu+(C/n)V$ and $dY_{(1)} = (C/n)\,dV$. So, $f_V(v) = e^{-v}$, $v > 0$; hence $\mathrm{E}[Y_{(1)}] = \mu+C/n$ and $\mathrm{V}[Y_{(1)}] = C^2/n^2$. So, $\mathrm{E}(\hat\mu_2) = \mathrm{E}[Y_{(1)}]-C/n = \mu+C/n-C/n = \mu$, so that $\hat\mu_2$ is also an unbiased estimator of $\mu$. Since $\mathrm{V}(\hat\mu_2) = \mathrm{V}[Y_{(1)}] = C^2/n^2 < \mathrm{V}(\hat\mu_1) = C^2/n$ for $n > 1$, we would prefer $\hat\mu_2$. Also, $Y_{(1)}$ is a complete sufficient statistic for $\mu$, so that $\hat\mu_2$ is the minimum variance unbiased estimator (MVUE) of $\mu$.

Solution 5.54.

(a) Since $\sum_{i=1}^n(x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar x)^2+n(\bar x-\mu)^2$, we have

$$\prod_{i=1}^n f_X(x_i;\mu,\sigma^2) = \left[(2\pi\sigma^2)^{-n/2}e^{-\frac{n}{2\sigma^2}(\bar x-\ln\theta)^2}\right]\cdot e^{-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i-\bar x)^2} = g(\bar x;\theta)\cdot h(x_1,x_2,\ldots,x_n),$$
where $h(x_1,\ldots,x_n) = \exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n(x_i-\bar x)^2\right]$ in no way depends on $\theta$ given $\bar X = \bar x$. So, $\bar X$ is a sufficient statistic for the parameter $\theta = e^\mu$.

(b) Since $\bar X\sim\mathrm{N}(\mu,\sigma^2/n)$, we know that
$$\mathrm{E}(e^{t\bar X}) = M_{\bar X}(t) = e^{\mu t+\frac{\sigma^2t^2}{2n}}.$$
Hence, $\hat\theta = \exp\left(\bar X-\frac{\sigma^2}{2n}\right)$ is an unbiased estimator of $e^\mu = \theta$ when $\sigma^2$ is known, and $\hat\theta$ is an explicit function of $\bar X$. Also,
$$\mathrm{V}(\hat\theta) = \mathrm{V}\left(e^{\bar X}e^{-\frac{\sigma^2}{2n}}\right) = e^{-\frac{\sigma^2}{n}}\mathrm{V}(e^{\bar X}) = e^{-\frac{\sigma^2}{n}}\left\{\mathrm{E}(e^{2\bar X})-\left[\mathrm{E}(e^{\bar X})\right]^2\right\} = e^{-\frac{\sigma^2}{n}}\left[e^{2\mu+\frac{2\sigma^2}{n}}-\left(e^{\mu+\frac{\sigma^2}{2n}}\right)^2\right] = e^{2\mu}\left(e^{\sigma^2/n}-1\right).$$
Since $\hat\theta$ is unbiased for $\theta$ and $\lim_{n\to\infty}\mathrm{V}(\hat\theta) = 0$, $\hat\theta$ is a consistent estimator of $\theta = e^\mu$.

Solution 5.56.

(a) Note that

ˆ = 0, θˆ is a consistent estimator Since θˆ is unbiased for θ and limn→∞ V(θ) of θ = eµ . Solution 5.56. (a) Note that pr(Y = k + 1) =

∞ X

pr(X > k) =

x=k+1

= =

(1 − θ) k

θ .

∞ X

x=k+1

θx−1 (1 − θ)

θx−1 = (1 − θ)



θk (1 − θ)



Thus, pY (y) = θy−1 (1 − θ), y = 1, 2, . . . , k, and pY (k + 1) = pr(Y = k + 1) = θk . Now, E(Y ) =

k X

y=1

=

= = = =

yθy−1 (1 − θ) + (k + 1)θk

k X d y (θ ) + (k + 1)θk dθ y=1 " k # d X y (1 − θ) θ + (k + 1)θk dθ y=1   d θ(1 − θk ) + (k + 1)θk (1 − θ) dθ (1 − θ) ( )  1 − (k + 1)θk (1 − θ) + θ(1 − θk ) (1 − θ) + (k + 1)θk (1 − θ)2  1 − θk+1 . (1 − θ)

(1 − θ)

SOLUTIONS TO EVEN-NUMBERED EXERCISES

131

(b) Now, suppose that t of the n Yi values take the value (k + 1). Then, the likelihood function L can be written in the form L = = = = Pn−t

since Thus,

i=1

Cnt θk

so that

θyi −1 (1 − θ)

i=1 P n kt [ n−t Ct θ θ i=1 yi −(n−t)] (1 − θ)(n−t) Pn−t Cnt θ[ i=1 yi +(k+1)t−n] (1 − θ)(n−t) Pn Cnt θ( i=1 yi −n) (1 − θ)(n−t) ,

yi + (k + 1)t = lnL ∝

Y t n−t

n X i=1

i=1

yi .

!

yi − n lnθ + (n − t)ln(1 − θ),

( ∂lnL = ∂θ

gives

Pn

Pn

i=1

yi − n) (n − t) − =0 θ (1 − θ)

Pn Yi − n ˆ P θ = ni=1 Y i=1 i − T

as the MLE of θ. First, note that T ∼ BIN(n, θk ). Then, since Pn − ( i=1 yi − n) (n − t) ∂ 2 lnL = − , 2 2 ∂θ θ (1 − θ)2 we have −E



∂ 2 lnL ∂θ2



=

= =

nE(Yi ) − n n − E(T ) + θ2 (1 − θ)2 h i k+1 n 1−θ − n n − nθk 1−θ + θ2 (1 − θ)2  k n 1−θ . θ(1 − θ)2

So, an appropriate large-sample 95% confidence interval for θ is s ˆ − θ) ˆ2 θ(1 θˆ ± 1.96 . n(1 − θˆk ) For the given numerical information, we have (120 − 30) θˆ = = 0.90, (120 − 20)

132

ESTIMATION THEORY so that the computed 95% confidence interval for θ is s (0.90)(1 − 0.90)2 = 0.90 ± 0.042, 0.90 ± 1.96 (30) [1 − (0.90)4 ] or (0.858, 0.942).
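The MLE value from the worked data is immediate to verify:

```python
# Check of the MLE in Solution 5.56(b) from the given data:
# theta_hat = (sum of the y's - n) / (sum of the y's - t),
# with sum(y) = 120, n = 30, and t = 20 taken from the worked example.
sum_y, n, t = 120, 30, 20
theta_hat = (sum_y - n) / (sum_y - t)
print(theta_hat)  # 0.9
```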

Solution 5.58. Since
$$\hat\theta = \ln\left(\frac{\hat\pi_1}{1-\hat\pi_1}\right)-\ln\left(\frac{\hat\pi_2}{1-\hat\pi_2}\right),$$
we need to use the delta method to compute a large-sample approximation to
$$\mathrm{V}(\hat\theta) = \mathrm{V}\left[\ln\left(\frac{\hat\pi_1}{1-\hat\pi_1}\right)\right]+\mathrm{V}\left[\ln\left(\frac{\hat\pi_2}{1-\hat\pi_2}\right)\right].$$
Now, by the delta method and for large $n_1$, we have
$$\mathrm{V}\left[\ln\left(\frac{\hat\pi_1}{1-\hat\pi_1}\right)\right] \approx \left[\frac{d\ln\left(\frac{\pi_1}{1-\pi_1}\right)}{d\pi_1}\right]^2\mathrm{V}(\hat\pi_1) = \left[\frac{1}{\pi_1(1-\pi_1)}\right]^2\frac{\pi_1(1-\pi_1)}{n_1} = \frac{1}{n_1\pi_1(1-\pi_1)};$$
analogously, for large $n_2$, we have
$$\mathrm{V}\left[\ln\left(\frac{\hat\pi_2}{1-\hat\pi_2}\right)\right] \approx \frac{1}{n_2\pi_2(1-\pi_2)}.$$
So, for large $n_1$ and $n_2$,
$$\mathrm{V}(\hat\theta) \approx \frac{1}{n_1\pi_1(1-\pi_1)}+\frac{1}{(N-n_1)\pi_2(1-\pi_2)} = h(n_1,\pi_1,\pi_2), \text{ say}.$$
Now, the equation
$$\frac{\partial h(n_1,\pi_1,\pi_2)}{\partial n_1} = \frac{-1}{n_1^2\pi_1(1-\pi_1)}+\frac{1}{(N-n_1)^2\pi_2(1-\pi_2)} = 0$$
gives
$$\frac{(N-n_1)^2}{n_1^2} = \frac{\pi_1(1-\pi_1)}{\pi_2(1-\pi_2)} = e^\theta,$$
or $(N-n_1) = n_1e^{\theta/2}$, so that
$$n_1 = N\left(\frac{1}{1+e^{\theta/2}}\right) \quad\text{and}\quad n_2 = N\left(\frac{e^{\theta/2}}{1+e^{\theta/2}}\right).$$
When $N = 100$ and $\theta = 2$, then $1/(1+e) = 0.269$, so that $n_1 = 27$ and $n_2 = 73$.

Solution 5.60.

(a) For notational simplicity in all that follows, we will write $\mathrm{E}(Y_i)$ instead of $\mathrm{E}(Y_i\mid X=x_i)$, $\mathrm{E}(\hat\beta)$ instead of $\mathrm{E}(\hat\beta\mid\{x_i\}\text{ fixed})$, etc., keeping in mind that all results are conditional on $x_1, x_2, \ldots, x_n$ being fixed, known constants. Since $\hat\beta$ is a linear combination of independent normally distributed random variables, $\hat\beta$ is itself normally distributed. Now,

$$\mathrm{E}(\hat\beta) = \frac{\sum_{i=1}^n(x_i-\bar x)\mathrm{E}(Y_i)}{\sum_{i=1}^n(x_i-\bar x)^2} = \frac{\sum_{i=1}^n(x_i-\bar x)(\alpha+\beta x_i)}{\sum_{i=1}^n(x_i-\bar x)^2} = \frac{\alpha\sum_{i=1}^n(x_i-\bar x)+\beta\sum_{i=1}^n x_i(x_i-\bar x)}{\sum_{i=1}^n(x_i-\bar x)^2} = \beta\cdot\frac{\sum_{i=1}^n(x_i-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2} = \beta.$$
Also,
$$\mathrm{V}(\hat\beta) = \frac{\sum_{i=1}^n(x_i-\bar x)^2\sigma^2}{\left[\sum_{i=1}^n(x_i-\bar x)^2\right]^2} = \frac{\sigma^2}{\sum_{i=1}^n(x_i-\bar x)^2}.$$
So,
$$\hat\beta \sim \mathrm{N}\left[\beta,\ \frac{\sigma^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right].$$

(b) Since $\hat Y_0 = \bar Y+\hat\beta(x_0-\bar x)$ is a linear combination of two independent normally distributed random variables, $\hat Y_0$ itself is normally distributed. Now,
$$\mathrm{E}(\hat Y_0) = \mathrm{E}(\bar Y)+(x_0-\bar x)\mathrm{E}(\hat\beta) = \frac1n\sum_{i=1}^n(\alpha+\beta x_i)+\beta(x_0-\bar x) = \alpha+\beta x_0 = \mathrm{E}(Y\mid X=x_0).$$
And,
$$\mathrm{V}(\hat Y_0) = \mathrm{V}(\bar Y)+(x_0-\bar x)^2\mathrm{V}(\hat\beta) = \frac{\sigma^2}{n}+(x_0-\bar x)^2\frac{\sigma^2}{\sum_{i=1}^n(x_i-\bar x)^2} = \sigma^2\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right].$$
So,
$$\hat Y_0 \sim \mathrm{N}\left\{\alpha+\beta x_0,\ \sigma^2\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right]\right\}.$$

(c) From part (b),
$$Z = \frac{\hat Y_0-\mathrm{E}(Y\mid X=x_0)}{\sqrt{\sigma^2\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right]}} \sim \mathrm{N}(0,1), \quad\text{and}\quad \frac{\mathrm{SSE}}{\sigma^2}\sim\chi^2_{(n-2)}.$$
Since $\hat Y_0$ and SSE are independent random variables, the ratio
$$\frac{Z}{\sqrt{\frac{\mathrm{SSE}}{\sigma^2(n-2)}}} = \frac{\hat Y_0-\mathrm{E}(Y\mid X=x_0)}{\sqrt{\frac{\mathrm{SSE}}{(n-2)}\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right]}} \sim t_{n-2}.$$
So,
$$(1-\alpha) = \mathrm{pr}\left[-t_{n-2,1-\frac{\alpha}{2}} < \frac{\hat Y_0-\mathrm{E}(Y\mid X=x_0)}{\sqrt{\frac{\mathrm{SSE}}{(n-2)}\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right]}} < t_{n-2,1-\frac{\alpha}{2}}\right]$$
leads to the exact confidence interval for $\mathrm{E}(Y\mid X=x_0)$ of the form
$$\hat Y_0\pm t_{n-2,1-\frac{\alpha}{2}}\sqrt{\frac{\mathrm{SSE}}{(n-2)}\left[\frac1n+\frac{(x_0-\bar x)^2}{\sum_{i=1}^n(x_i-\bar x)^2}\right]}.$$
For the given data, the exact 90% confidence interval is $(-3.866, -2.134)$.
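The shape of the variance factor in Solution 5.60(b) can be illustrated numerically; the x-values below are made up for the sketch (they are not the exercise's data):

```python
# Illustration of the variance factor 1/n + (x0 - xbar)^2 / sum((xi - xbar)^2)
# from Solution 5.60(b): it is smallest at x0 = xbar and grows as x0 moves
# away from xbar, so the confidence interval widens away from the mean of x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
n = len(xs)
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

def factor(x0):
    return 1 / n + (x0 - xbar) ** 2 / sxx

print(factor(xbar) == 1 / n)      # True: minimum at x0 = xbar
print(factor(3.5) < factor(5.0))  # True: wider farther from xbar
```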

Solution 5.62. First, note that $(X_1+X_2+X_3)\sim\mathrm{N}(0,3)$, $(X_1-X_3)\sim\mathrm{N}(0,2)$, $(X_1-X_2+X_3)\sim\mathrm{N}(0,3)$, and $X_4\sim\mathrm{N}(0,1)$. Also, these four normally distributed linear functions are mutually uncorrelated, and hence they are mutually independent. Then, we can rewrite $U$ in the form
$$U = \frac{(X_1+X_2+X_3)/\sqrt3}{\left\{\left[\left(\frac{X_1-X_3}{\sqrt2}\right)^2+\left(\frac{X_1-X_2+X_3}{\sqrt3}\right)^2+X_4^2\right]\bigg/3\right\}^{1/2}} = \frac{Z}{\sqrt{W/3}},$$
where $Z\sim\mathrm{N}(0,1)$, $W\sim\chi^2_3$, and where $Z$ and $W$ are independent random variables. Therefore, $U\sim t_3$; i.e., $U$ has a t-distribution with 3 degrees of freedom. So,
$$\mathrm{pr}(|U|>k_1) = 0.05 = 1-\mathrm{pr}(|U|\le k_1) = 1-\mathrm{pr}(-k_1\le U\le k_1),$$
which gives $k_1 = 3.182$.
Next, note that $X_i^2\sim\chi^2_1$ for $i = 1, 2, 3, 4$. So, it follows directly that
$$\frac{(X_1^2+X_2^2)/2}{(X_3^2+X_4^2)/2} = F\sim f_{2,2}.$$
So, since $V = F/(1+F)$, $0 < V < 1$, we have, for $0 < k_2 < 1$,
$$\mathrm{pr}(V>k_2) = 0.05 = \mathrm{pr}\left[\frac{F}{(1+F)}>k_2\right] = \mathrm{pr}\left[F>k_2(1+F)\right] = \mathrm{pr}\left[F>\frac{k_2}{(1-k_2)}\right].$$
Thus, $k_2/(1-k_2) = 19.0$ using $f_{2,2}$-distribution tables, and so $k_2 = 19/20 = 0.95$.

Solution 5.64. The likelihood function $L$ is

$$L = \prod_{i=1}^n(1-\pi)\pi^{x_i} = (1-\pi)^n\pi^u, \quad\text{where } u = \sum_{i=1}^n x_i;$$
so, from exponential family theory, $U = \sum_{i=1}^n X_i$ is a complete sufficient statistic for $\pi$.
Now, let $U^* = 1$ if $X_n = 0$, and let $U^* = 0$ if $X_n > 0$; then, $\mathrm{E}(U^*) = (1-\pi)$. So,
$$(1-\hat\pi) = \mathrm{E}(U^*\mid U=u) = \mathrm{pr}(U^*=1\mid U=u) = \frac{\mathrm{pr}\left[(X_n=0)\cap\left(\sum_{i=1}^{n-1}X_i=u\right)\right]}{\mathrm{pr}\left(\sum_{i=1}^n X_i=u\right)}.$$
Now, for $i = 1, 2, \ldots, n$, let $Y_i = (X_i+1)$, so that $Y_i\sim\mathrm{GEOM}(1-\pi)$. Then, we know that
$$V_n = \sum_{i=1}^n Y_i = (U+n)\sim\mathrm{NEGBIN}(n,1-\pi) \quad\text{and}\quad V_{n-1} = \sum_{i=1}^{n-1}Y_i = \left(\sum_{i=1}^{n-1}X_i+n-1\right)\sim\mathrm{NEGBIN}(n-1,1-\pi).$$
Thus,
$$(1-\hat\pi) = \frac{\mathrm{pr}(X_n=0)\,\mathrm{pr}(V_{n-1}=u+n-1)}{\mathrm{pr}(V_n=u+n)} = \frac{(1-\pi)\left[C^{u+n-2}_{n-2}(1-\pi)^{n-1}\pi^u\right]}{C^{u+n-1}_{n-1}(1-\pi)^n\pi^u} = \frac{(n-1)}{(u+n-1)}.$$
Finally, the MVUE of $\pi$ is
$$\hat\pi = 1-(1-\hat\pi) = 1-\frac{(n-1)}{(U+n-1)} = \frac{U}{(U+n-1)}, \quad\text{where } U = \sum_{i=1}^n X_i.$$
Now, with $v_n = (u+n)$ and with $w_n = (v_n-1)$, we have
$$\mathrm{E}(1-\hat\pi) = \sum_{v_n=n}^{\infty}\frac{(n-1)}{(v_n-1)}C^{v_n-1}_{n-1}(1-\pi)^n\pi^{v_n-n} = \sum_{v_n=n}^{\infty}\frac{(v_n-2)!}{(n-2)!(v_n-n)!}(1-\pi)^n\pi^{v_n-n} = \sum_{v_n=n}^{\infty}C^{v_n-2}_{n-2}(1-\pi)^n\pi^{v_n-n}$$
$$= (1-\pi)\sum_{w_n=n-1}^{\infty}C^{w_n-1}_{(n-1)-1}(1-\pi)^{n-1}\pi^{w_n-(n-1)} = (1-\pi),$$
so that $\hat\pi$ is, as expected, an unbiased estimator of $\pi$.

Solution 5.66.

(a) The equation

π = pr[(Xi = 1) ∩ (Xi−1 = 0)] + pr[(Xi = 1) ∩ (Xi−1 = 1)]

pr(Xi = 1|Xi−1 = 0)pr(Xi−1 = 0) + pr(Xi = 1|Xi−1 = 1)pr(Xi−1 = 1) pr(Xi = 1|Xi−1 = 0)(1 − π) + θπ

gives pr(Xi = 1|Xi−1 = 0) = and hence

π(1 − θ) (1 − π)

π(1 − θ) (1 − 2π + πθ) = , (1 − π) (1 − π) o n ≤ θ ≤ 1 needed so that all probawith the requirement max 0, (2π−1) π bilities are between 0 and 1. So, pr(Xi = 0|Xi−1 = 0) = 1 −

L =

pX1 (x1 )

=

pX1 (x1 )

n Y

i=2 n Y

i=2

= ×

  i−1 (Xj = xj ) pXi xi | ∩j=1 pXi (xi |Xi−1 = xi−1 )

 (1−xi−1 )xi n  Y π(1 − θ) θxi−1 xi (1 − θ)xi−1 (1−xi ) (1 − π) i=2  (1−xi−1 )(1−xi )  (1 − 2π + πθ) , (1 − π)

π x1 (1 − π)1−x1

138

ESTIMATION THEORY which, after straightforward algebraic manipulations, simplifies to the expression given in part (a) above.

(b) Using the given values of the maximum likelihood estimates of π and θ, ˆ π ) = 0.0317, V( ˆ θ) ˆ = 0.0006, and it follows by direct substitution that V(ˆ ˆ = 0.0010. So, cov(ˆ ˆ π , θ) ˆ θˆ − π V( ˆ)

ˆ + V(ˆ ˆπ ˆ θ) ˆ π ) − 2cov( = V( ˆ θ, ˆ) = 0.0317 + 0.0006 − 2(0.0010) = 0.0303.

So, the computed large-sample 95% confidence interval for the parameter (θ − π) is q √ ˆ θˆ − π (θˆ − π ˆ ) ± 1.96 V( ˆ ) = (0.05 − 0.03) ± 0.0303 = 0.02 ± 0.3412, or (−0.3212, 0.3612).

Since the value 0 is included in this computed confidence interval, these data provide no statistical evidence that θ > π. Solution 5.68∗ . (a) Now, Sn ∼ gamma(α = θ, β = n) and Sn−1 ∼ gamma(α = θ, β = n − 1). Therefore, since Sn−1 and Xn are independent, fSn−1 ,Xn (sn−1 , xn ) =

fSn−1 (sn−1 )fXn (xn ) = 1

=

n−2 −sn−1 /θ sn−1 e · θ−1 e−xn /θ Γ(n − 1) · θn−1

n−2 − θ (sn−1 +xn ) sn−1 e , sn−1 > 0, xn > 0. Γ(n − 1) · θn

√ 2 Let Yn = X√ n and Sn = Sn−1 + Xn , which implies that √ Sn−1 = Sn − Yn and Xn = Yn . (Note also that Sn−1 = 0 ⇒ Sn = Yn and Xn = 0 ⇒ Yn = 0.) The Jacobian of the transformation is ∂Sn−1 ∂Sn−1 1 −1/2 − Yn 1 ∂Sn 2 ∂Yn 1 = J = = − Yn−1/2 , 2 ∂Xn ∂Xn − 21 Yn−1/2 0 ∂Yn ∂Sn −1/2

and hence |J| = 12 Yn So, fSn ,Yn (sn , yn )

= =

.

  1 √ (sn − yn )n−2 e− θ (sn ) 1 −1/2 y Γ(n − 1) · θn 2 n √ n−2 −sn /θ (sn − yn ) e , 0 < yn < s2n < +∞. √ 2 yn Γ(n − 1) · θn

(b) Since \(S_n \sim \text{gamma}(\alpha = \theta, \beta = n)\), then

\[
f_{Y_n}(y_n|S_n = s_n) = \frac{f_{S_n,Y_n}(s_n, y_n)}{f_{S_n}(s_n)}
= \frac{(s_n - \sqrt{y_n})^{n-2}e^{-s_n/\theta}\big/\left[2\sqrt{y_n}\,\Gamma(n-1)\,\theta^{n}\right]}{s_n^{\,n-1}e^{-s_n/\theta}\big/\left[\Gamma(n)\,\theta^{n}\right]}
= \frac{(n-1)(s_n - \sqrt{y_n})^{n-2}}{2\sqrt{y_n}\,s_n^{\,n-1}}, \quad 0 < y_n < s_n^2.
\]

(c) Now, \(E(Y_n) = E(X_n^2) = V(X_n) + [E(X_n)]^2 = \theta^2 + (\theta)^2 = 2\theta^2\), so that \(Y_n/2\) is an unbiased estimator of \(\theta^2\). To find the MVUE of \(\theta^2\), given that \(S_n\) is a complete sufficient statistic for \(\theta^2\), we need to find

\[
E\!\left(\frac{Y_n}{2}\,\middle|\,S_n = s_n\right) = \frac{1}{2}E(Y_n|S_n = s_n)
\]

according to the Rao-Blackwell Theorem. From part (b),

\[
E(Y_n|S_n = s_n) = \int_0^{s_n^2} y_n\,\frac{(n-1)(s_n - \sqrt{y_n})^{n-2}}{2\sqrt{y_n}\,s_n^{\,n-1}}\,dy_n
= \frac{(n-1)}{2s_n^{\,n-1}}\int_0^{s_n^2} y_n^{1/2}(s_n - \sqrt{y_n})^{n-2}\,dy_n.
\]

Letting \(u = s_n - \sqrt{y_n}\), which implies that \(y_n^{1/2} = (s_n - u)\), \(y_n = (s_n - u)^2\), and \(dy_n = -2(s_n - u)\,du\), we have

\[
E(Y_n|S_n = s_n) = \frac{2(n-1)}{2s_n^{\,n-1}}\int_0^{s_n}(s_n - u)^2u^{n-2}\,du
= \frac{(n-1)}{s_n^{\,n-1}}\int_0^{s_n}(s_n^2 - 2s_nu + u^2)u^{n-2}\,du
\]
\[
= \frac{(n-1)}{s_n^{\,n-1}}\left[s_n^2\frac{u^{n-1}}{n-1} - 2s_n\frac{u^{n}}{n} + \frac{u^{n+1}}{n+1}\right]_0^{s_n}
= (n-1)s_n^2\left[\frac{1}{(n-1)} - \frac{2}{n} + \frac{1}{(n+1)}\right]
= \frac{2s_n^2}{n(n+1)}.
\]

So, \(\tfrac{1}{2}E(Y_n|S_n = s_n) = s_n^2/n(n+1)\), and \(S_n^2/n(n+1)\) is the MVUE of \(\theta^2\).

Alternatively, since \(E(S_n) = n\theta\), it is reasonable to conjecture that some function of \(S_n^2\) might be the MVUE of \(\theta^2\). Now, since

\[
E(S_n^2) = V(S_n) + [E(S_n)]^2 = n\theta^2 + (n\theta)^2 = n(n+1)\theta^2,
\]

it follows easily that \(S_n^2/n(n+1)\) is an unbiased estimator (and hence is the MVUE) of \(\theta^2\).
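A quick Monte Carlo sketch of this unbiasedness result (the values of θ, n, and the replication count are chosen for illustration, not taken from the text):

```python
import random

# Check by simulation that S_n^2 / (n(n+1)) is (essentially) unbiased for
# theta^2 when the X_i are exponential with mean theta.
random.seed(12345)
theta, n, reps = 2.0, 5, 200_000

est_sum = 0.0
for _ in range(reps):
    s_n = sum(random.expovariate(1.0 / theta) for _ in range(n))  # S_n
    est_sum += s_n ** 2 / (n * (n + 1))

print(est_sum / reps)  # close to theta^2 = 4.0
```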

Solution 5.70∗.

(a) In general, for a random sample \(X_1, X_2, \ldots, X_m\) of size m from \(f_X(x)\), \(-\infty < x < \infty\), the distribution of the r-th order statistic is

\[
f_{X_{(r)}}(x_{(r)}) = \frac{m!}{(r-1)!(m-r)!}\,[F_X(x_{(r)})]^{r-1}[1 - F_X(x_{(r)})]^{m-r}f_X(x_{(r)}), \quad -\infty < x_{(r)} < \infty,
\]

where \(r = 1, 2, \ldots, m\) and where \(F_X(x) = \int_{-\infty}^{x}f_X(t)\,dt\). In our situation, \(m = (2n+1)\) and \(r = (n+1)\), so that the distribution of \(X_{(n+1)}\) is

\[
f_{X_{(n+1)}}(x_{(n+1)}) = \frac{(2n+1)!}{(n!)^2}\left\{F_X(x_{(n+1)})[1 - F_X(x_{(n+1)})]\right\}^nf_X(x_{(n+1)}), \quad -\infty < x_{(n+1)} < \infty.
\]

So,

\[
E[(X_{(n+1)} - \theta)^2] = \frac{(2n+1)!}{(n!)^2}\int_{-\infty}^{\infty}(x_{(n+1)} - \theta)^2\left\{F_X(x_{(n+1)})[1 - F_X(x_{(n+1)})]\right\}^nf_X(x_{(n+1)})\,dx_{(n+1)}.
\]

Now, since \(f_X(x) \le f_X(\theta)\) for all x, \(-\infty < x < \infty\), we have

\[
\left|F_X(x) - \frac{1}{2}\right| = \left|\int_{-\infty}^{x}f_X(t)\,dt - \int_{-\infty}^{\theta}f_X(t)\,dt\right|
= \left|\int_{\theta}^{x}f_X(t)\,dt\right| \le f_X(\theta)\left|\int_{\theta}^{x}dt\right| = f_X(\theta)\,|x - \theta|,
\]

which gives

\[
(x - \theta)^2 \ge \frac{\left[F_X(x) - \frac{1}{2}\right]^2}{[f_X(\theta)]^2}, \quad -\infty < x < \infty.
\]

Thus, we have

\[
E[(X_{(n+1)} - \theta)^2] \ge \frac{(2n+1)!}{(n!)^2}\frac{1}{[f_X(\theta)]^2}\int_{-\infty}^{\infty}\left[F_X(x_{(n+1)}) - \frac{1}{2}\right]^2\left\{F_X(x_{(n+1)})[1 - F_X(x_{(n+1)})]\right\}^nf_X(x_{(n+1)})\,dx_{(n+1)}.
\]

Now, let \(u = F_X(x_{(n+1)})\), so that \(du = f_X(x_{(n+1)})\,dx_{(n+1)}\). Then, taking advantage of properties of the beta distribution, we have

\[
\int_{-\infty}^{\infty}\left[F_X(x_{(n+1)}) - \frac{1}{2}\right]^2\left\{F_X(x_{(n+1)})[1 - F_X(x_{(n+1)})]\right\}^nf_X(x_{(n+1)})\,dx_{(n+1)}
= \int_0^1\left(u - \frac{1}{2}\right)^2u^n(1-u)^n\,du
\]
\[
= \int_0^1\left(u^2 - u + \frac{1}{4}\right)u^n(1-u)^n\,du
= \int_0^1u^{n+2}(1-u)^n\,du - \int_0^1u^{n+1}(1-u)^n\,du + \frac{1}{4}\int_0^1u^{n}(1-u)^n\,du
\]
\[
= \frac{(n+2)!\,n!}{(2n+3)!} - \frac{(n+1)!\,n!}{(2n+2)!} + \frac{1}{4}\frac{n!\,n!}{(2n+1)!}.
\]

Finally, we obtain

\[
E[(X_{(n+1)} - \theta)^2] \ge [f_X(\theta)]^{-2}\frac{(2n+1)!}{(n!)^2}\left[\frac{(n+2)!\,n!}{(2n+3)!} - \frac{(n+1)!\,n!}{(2n+2)!} + \frac{1}{4}\frac{n!\,n!}{(2n+1)!}\right]
\]
\[
= [f_X(\theta)]^{-2}\left[\frac{(n+1)(n+2)}{(2n+2)(2n+3)} - \frac{(n+1)}{(2n+2)} + \frac{1}{4}\right]
= \frac{1}{4(2n+3)[f_X(\theta)]^2},
\]

[0.399σ

−1 −2

]



 1 = 0.0365σ 2. 4[2(20) + 3]

Thus, if X ∼ N(µ, σ 2 ), the lower bound for the mean-squared error of the sample median as an estimator of the population median increases directly as σ 2 increases, which is as expected. Solution 5.72∗ . (a)
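The bound in part (b) can be evaluated directly (a sketch; σ is set to 1 so the answer is in units of \(\sigma^2\)):

```python
import math

# Lower bound 1 / (4(2n+3) f(theta)^2) for the MSE of the sample median of a
# N(mu, sigma^2) sample of size 2n+1 = 41, with sigma = 1.
n = 20
f_theta = 1.0 / math.sqrt(2.0 * math.pi)  # normal density at its median
bound = 1.0 / (4.0 * (2 * n + 3) * f_theta ** 2)

print(round(bound, 4))  # 0.0365
```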

ˆ = E(λ) = =

∞ X

r=2 ∞ X r=2 ∞ X r=1

=

rE 



Nr n

λ

 r −1 λ

r (e − 1)

r pY (r; λ) −

E(Y ) −

(eλ

r!



λ (eλ − 1)

λ . − 1)

142

ESTIMATION THEORY Since E(Y ) =

∞ X

y=1

= =

y(eλ − 1)−1

λy y!

∞ X eλ λy e−λ y (eλ − 1) y=0 y!

λeλ eλ (λ) = λ , − 1) (e − 1)

(eλ

we have ˆ = E(λ)

λeλ λ λ − = λ (eλ − 1) = λ. − 1) (eλ − 1) (e − 1)

(eλ

(b) First, with y = (y1 , y2 , . . . , yn ), we have L(y; λ) =

n  Y i=1

yi −1 λ

λ

(e − 1)

yi !



Pn

−n λ

λ

= (e − 1)

lnL(y; λ) = −nln(eλ − 1) + (ny)lnλ −

n X

Qn

i=1

i=1

yi

yi !

, and

lnyi !.

i=1

So −neλ ny ∂lnL(y; λ) = λ + , ∂λ (e − 1) λ and −

∂ 2 lnL(y; λ) ∂λ2



 ny eλ (eλ − 1) − e2λ + 2 (eλ − 1)2 λ

=

n

=

−neλ ny + 2. (eλ − 1)2 λ

Thus,  2  ∂ lnL(y; λ) E − ∂λ2

= = =

n −neλ + 2 E(Y ) λ 2 (e − 1) λ   λ −ne n λeλ + (eλ − 1)2 λ2 (eλ − 1)  λ  neλ (e − 1) − λ λ 2 λ(e − 1)

= neλ (eλ − λ − 1)/λ(eλ − 1)2 .

SOLUTIONS TO EVEN-NUMBERED EXERCISES

143

So, ˆ = Eff(λ)

n h 2 io−1 −E ∂ lnL(y;λ) 2 ∂λ

ˆ V(λ) [ne (e − λ − 1)/λ(eλ − 1)2 ]−1 n−1 λ[1 + λ(eλ − 1)−1 ] λ

= = =

λ

(eλ − 1)2 eλ (eλ − λ − 1)[1 + λ(eλ − 1)−1 ] (eλ − 1)3 . λ λ e (e − λ − 1)(eλ + λ − 1)

When λ = 2, ˆ Eff(λ)

(e2 − 1)3 e2 (e2 − 2 − 1)(e2 + 2 − 1) (7.389 − 1)3 (7.389)(7.389 − 3)(7.389 + 1) 260.7946 = 0.9586. 272.0580

= = =

Solution 5.74∗. distribution L=

The appropriate likelihood function L is the multinomial

n! [α(1 − α)]x1 (αβ)x2 (α2 )x3 (1 − α − αβ)x4 , x1 !x2 !x3 !x4 !

so that lnL ∝ x1 ln[α(1 − α)] + x2 ln(αβ) + 2x3 ln(α) + x4 ln(1 − α − αβ). So, simultaneous solution of the two equations x1 (1 − 2α) x2 2x3 x4 (1 + β) ∂lnL = + + − =0 ∂α α(1 − α) α α (1 − α − αβ) and

gives

x2 x4 α ∂lnL = − =0 ∂β β (1 − α − β) α ˆ=

and βˆ =



(x1 + 2x3 ) (n + x1 + x3 )

x2 x2 + x4



1−α ˆ α ˆ



.

144

ESTIMATION THEORY

To find the large-sample variance of α ˆ using the delta method, first note that   1−π ˆ3 , V(ˆ α) = V(1 − α ˆ) = V 1+π ˆ1 + π ˆ3 where π ˆ1 = X1 /n and π ˆ3 = X3 /n. Now, with τ (π1 , π3 ) = (1 − π3 )/(1 + π1 + π3 ), it follows that −(1 − π3 ) ∂τ (π1 , π3 ) = , ∂π1 (1 + π1 + π3 )2 and

∂τ (π1 , π3 ) −(2 + π1 ) = . ∂π3 (1 + π1 + π3 )2

So, using the delta method, it follows that the approximate large-sample variance V(ˆ α) of α ˆ is equal to 0

[∆(π1 , π3 )] I −1 (π1 , π3 ) [∆(π1 , π3 )] , where ∆(π1 , π3 ) =

−1 [(1 − π3 ), (2 + π1 )] (1 + π1 + π3 )2

and where I

−1

(π1 , π3 ) = n

−1



π1 (1 − π1 ) −π1 π3 −π1 π3 π3 (1 − π3 )

 .

Hence, completion of the needed matrix multiplication gives (1 − π3 ) [π1 (1 − π1 ) + π3 (4 − π1 )] n(1 + π1 + π3 )4 as the approximate large-sample variance of α ˆ based on the delta method. Based on the observed data, the estimate of α is equal to α ˆ=

[25 + 2(10)] 1 = = 0.333. (100 + 25 + 10) 3

And, since π ˆ1 = x1 /n = 25/100 = 0.25 and π ˆ3 = x3 /n = 10/100 = 0.10, the estimated approximate large-sample variance of α ˆ based on the delta method is equal to (1 − 0.10) [0.25(1 − 0.25) + 0.10(4 − 0.25)] = 0.0015. 100(1 + 0.25 + 0.10)4

SOLUTIONS TO EVEN-NUMBERED EXERCISES

145

Finally, the computed 95% confidence interval for α is equal to √ 0.333 ± 1.96 0.0015, or (0.257, 0.409). Solution 5.76∗ . (a) The parameter of interest is π = pr(Y ≤ t) = FY (t) =

Z

0

t

θe−θt dy = (1 − e−θt ), 0 < t < ∞.

Now, the likelihood function L is n  P ln(1−π) Pn −ln(1 − π) n −θ n yi i=1 e[ t ] i=1 yi . = L=θ e t Pn So, by exponential family theory, U = i=1 Yi is a complete sufficient statistic for π. Also, U ∼ GAMMA(α = θ−1 , β = n). Appealing to the Rao-Blackwell Theorem, π ˆ = E(U ∗ |U = u), where U ∗ is any unbiased estimator of π. To proceed, let U ∗ take the value 1 if Y1 ≤ t and take the value 0 if Y1 > t; then, E(U ∗ ) = (1)pr(Y1 ≤ t) = π. So, π ˆ

= =

E(U ∗ |U = u) = pr(Y1 ≤ t|U = u)  FY1 (t|U = u) if 0 < t < u < ∞, 1 if 0 < u ≤ t < ∞.

To find an explicit expression for FY1 (t|U = u), we first need to find an explicit expression for fY1 (y1 |U = u), the conditional distribution of Y1 given U = u. So, fY1 (y1 |U = u) =

fY (y1 )fU (u|Y1 = y1 ) fY1 ,U (y1 , u) = 1 , 0 < y1 < u < ∞. fU (u) fU (u)

Now, fU (u|YP 1 = y1 ) is the density function of the random variable (y1 +V ), where V = ni=2 Yi ∼ GAMMA(α = θ−1 , β = n − 1). Hence, fU (u|Y1 = y1 ) =

(u − y1 )n−2 e−θ(u−y1 ) , 0 < y1 < u < ∞. Γ(n − 1)θ−(n−1)

Thus, fY1 (y1 |U = u) = =



θe−θy1

 h (u−y1 )n−2 e−θ(u−y1 ) i h

Γ(n−1)θ −(n−1)

un−1 e−θu Γ(n)θ −n

i

(n − 1)(u − y1 )n−2 , 0 < y1 < u < ∞. un−1

146

ESTIMATION THEORY Finally, Z

= Thus, with U = π ˆ

t

t (n − 1)(u − y1 )n−2 1  dy1 = n−1 −(u − y1 )n−1 0 n−1 u u 0 n−1  t , 0 < t < u < ∞. 1− 1− u

FY1 (t|U = u) =

Pn

Yi , the MVUE π ˆ of π is  n−1 1 − 1 − Ut if 0 < t < U < ∞, 1 if 0 < U ≤ t < ∞.

i=1



=

(b) E(ˆ π) =

Z



π ˆ fU (u)du

0

=

Z

t

(1)fU (u)du +

0

= = = = =

Z



t

1− 1−

Z



t

Z



t

Z



u−t u

n−1

(u − t)n−1

n−1 #  t fU (u)du 1− 1− u

"

un−1 e−θu du Γ(n)θ−n

e−θu du Γ(n)θ−n



e−θ(w+t) dw wn−1 Γ(n)θ−n 0 Z ∞ n−1 −θw w e dw 1 − e−θt Γ(n)θ−n 0 1−

1 − e−θt = π.

¯ = n−1 U , we have (c) With U  n−1  ¯ n−1 t (−t/U) π ˆ =1− 1− =1− 1+ , U n so that

¯

limn→∞ π ˆ = 1 − e−t/U , ¯ is the MLE of θ. which is the MLE of π since 1/U Thus, for large n, the MVUE and MLE of π are essentially the same. Solution 5.78∗ . (a) First, E(Y ) =

 1 1 1 y2 y3 α (y) (1 + αy)dy = +α = . 2 2 2 3 3 −1 −1

Z

1

SOLUTIONS TO EVEN-NUMBERED EXERCISES 147  Pn Pn  2 α 2 Then, with Qu = i=1 [Yi − E(Yi )] = i=1 Yi − 3 , it follows that n α ∂Qu 2 X Yi − = 0, =− ∂α 3 i=1 3

Pn which gives α ˆ = 3n−1 i=1 Yi = 3Y¯ as the ULS estimator of α. Clearly, E(ˆ α) = 3E(Y¯ ) = 3(α/3) = α, so that α ˆ is an unbiased estimator of α. And, since  1 Z 1 1 1 y3 y4 1 (y 2 ) (1 + αy)dy = +α E(Y 2 ) = = , 2 2 3 4 −1 3 −1 we have V(Y ) = (1/3) − (α/3)2 = (1/3) − (α2 /9). Thus,     V(Y ) (1/3) − (α2 /9) (3 − α2 ) ¯ ¯ . V(ˆ α) = V(3Y ) = 9V(Y ) = 9 =9 = n n n One obvious undesirable property of α ˆ = 3Y¯ is that −3 < α ˆ < +3, so that it is possible to obtain a value of α ˆ that is outside the permissible set of values of the parameter α (namely, −1 < α < +1). (b) Now, since lnfY (y) = −ln2 + ln(1 + αy), we have ∂lnfY (y) y ∂ 2 lnfY (y) −y 2 = and = . ∂α (1 + αy) ∂α2 (1 + αy)2 So, we have  Z 1   2 Z 1 1 y2 1 y2 ∂ lnfY (y) (1 + αy)dy = dy = −E 2 2 ∂α2 2 −1 (1 + αy) −1 (1 + αy) which, using the definite integral expression given earlier, equals   α−3 1 1 2 2 (1 + α) − 2(1 + α) + ln(1 + α) − (1 − α) + 2(1 − α) − ln(1 − α) , 2 2 2 or equivalently

    1+α α−3 ln − 2α , 2 1−α

where α 6= 0. When α = 0, we have  2  ∂ lnfY (y) −E = ∂α2 =

 y2 dy 2 −1  1 1 y3 1 = . 2 3 −1 3

Z

1



148

ESTIMATION THEORY So, when α 6= 0, we have CRLB = =

  2 −1 ∂ lnfY (y) −E n ∂α2    −1 3 1+α 2α ; ln − 2α n 1−α −1

and, when α = 0, the CRLB= n−1 (1/3)−1 = 3/n. Finally, when α 6= 0, EFF(ˆ α, α)

CRLB V(ˆ α)

=

   −1 2α3 1+α ln − 2α ; (3 − α2 ) 1−α

=

and, when α = 0, EFF(ˆ α, α) = 1. More generally, EFF(ˆ α, α) decreases monotonically to the value of 0 as α → 1. By symmetry, similar conclusions hold for −1 ≤ α ≤ 0. Solution 5.80∗ . First, for i = 1, 2, it follows that the density function of the random variable Xi is equal to m dFXi (xi ) = mθi−m xim−1 e−(xi /θi ) , 0 < xi < ∞, 0 < θi < ∞, m ≥ 1, dxi

so that Z



m

(xi )mθi−m xim−1 e−(xi /θi ) dxi 0   Z ∞ 1 −m 1/m −u/θim u e = θi du = Γ 1 + θi . m 0

E(Xi )

=

Now, the likelihood function L for these data is L

=

2 Y n Y

m−1 −(xij /θi ) mθi−m xij e

i=1 j=1

= m2n

2 Y

i=1

where si = So,

Pn

j=1

xm ij .





 −mn  θi

n Y

j=1

m

m−1

xij 

−m

e−θi



si 

,

lnL ∝ −mn (lnθ1 + lnθ2 ) − θ1−m s1 − θ2−m s2 .

SOLUTIONS TO EVEN-NUMBERED EXERCISES Now, for i = 1, 2,

149

∂lnL −mn = + mθi−m−1 si = 0 ∂θi θi

gives

 s 1/m i θˆi = n as the maximum likelihood estimate of θi . Also, for i = 1, 2,

mn m(m + 1)si ∂ 2 lnL . = 2 − ∂θi2 θi θim+2

And, since E

m Xij



= =

Z



 −m m−1 −(xij /θi )m xm xij e dxij ij mθi 0 Z ∞ m θi−m ue−u/θi du = θim , 0

it follows that −E



∂ 2 lnL ∂θi2



= =

−mn m(m + 1) (nθim ) + θi2 θim+2 m2 n . θi2

Thus, since ∂ 2 lnL/∂θ1 ∂θ2 = 0, it follows that      (θ2 + θ2 )  V θˆ1 − θˆ2 = V θˆ1 + V θˆ2 = 1 2 2 . m n  1 ˆ So, since Γ 1 + m θi is the MLE of E(Xi ), an appropriate ML-based largesample 95% confidence for γ is v   u u    2 2  t θˆ1 + θˆ2  1   θˆ1 − θˆ2 ± 1.96 . Γ 1+ m  m2 n  p p ˆ1 = ˆ2 = For the available data, θ 350/50 = 2.646, θ 200/50 = 2, and √ Γ(3/2) = π/2; so, the computed 95% confidence interval for γ is s # √ " π (7 + 4) (2.646 − 2) ± 1.96 , 2 (2)2 (50)

150

ESTIMATION THEORY

or (0.165, 0.980). Since the lower limit of this computed confidence interval is greater than zero in value, these data provide statistical evidence that drug 1 provides a higher mean survival time than drug 2. Solution 5.82∗ . ¯i ) = µi , it is clear that X ¯ is an unbiased estimator of (a) Clearly, since E(X µ. So, it is reasonable to prefer that option which produces the smaller 2 ¯ = k −2 Pk V(X ¯ i ) = k −2 Pk σi . value of V(X) i=1 i=1 ni Using the sample sizes for Option 1, we have k X

¯ V(X|Option 1) = k −2

σi2

i=1

"

σi

N

Pk

i=1

σi

!#−1

1 = N k2

k X

σi

i=1

!2

.

And, using the sample sizes for Option 2, we have ¯ V(X|Option 2) = k

−2

k X

σi2

i=1

So, with σ ¯ = k −1

Pk

i=1

"

σi2

N

Pk

i=1

σi , we have

¯ ¯ V(X|Option 2) − V(X|Option 1) = =

σi2

!#−1

=



k 1 X 2 σ −  N k i=1 i

k 1 X 2 σ . N k i=1 i

P k

i=1 σi

k

k 1 X (σi − σ ¯ )2 ≥ 0, N k i=1

2   

with a strict inequality holding if the {σi }ki=1 are not all equal to the same value. Thus, Option 1 is to be preferred to Option 2. (b) To make this decision, consider finding choices for n1 , n2 , . . . , nk that Pk ¯ minimize V( to the constraint i=1 ni = N . In particular,  X) subject  Pk−1 with nk = N − i=1 ni , consider minimizing the function ¯ = V(k 2 X)

k−1 X i=1

σk2 σi2 + Pk−1  . ni N − i=1 ni

So, for i = 1, 2, . . . , (k − 1), we have

¯ σ2 σk2 ∂[k 2 V(X)] = − i2 +  2 = 0, Pk ∂ni ni N − i=1 ni

SOLUTIONS TO EVEN-NUMBERED EXERCISES or equivalently, −

151

σi2 σ2 + k2 = 0; 2 ni nk

the above expression gives k k X σi nk X ni = , or ni = N = σi , nk σk σk i=1 i=1

so that nk = N

σk

Pk

i=1

Since

¯ ∂ 2 [k2 V(X)] n2i

σi

!

, and hence ni = N

σi

Pk

i=1

> 0 for all i and since

σi

!

, i = 1, 2, . . . , k.

¯ ∂ 2 [k2 V(X)] ∂ni ∂nj

= 0 for all i 6= j, it is ¯ so that clear that the Option 1 choices for n1 , n2 , . . . , nk minimize V(X), it is not possible to do better than Option 1. Note that the method of Lagrange multipliers can also be used to find the solution to this constrained minimization problem. Solution 5.84∗ . Since Y¯i (1 − Y¯i ) ≤ 14 , i = 1, 2, we have s r (1/4) (1/4) 1 1 W ≤ 2(1.96) + = 1.96 + = w. n1 n2 n1 n2 Now, the equation (1.96)2 gives n2 =



1 1 + n1 n2



= w2

 w 2 n1 . , where k = (kn1 − 1) 1.96

Thus, we wish to find that value n∗1 of n1 which minimizes C

= =

c2 n 1 (c1 n1 + c2 n2 ) = c1 n1 + (kn1 − 1)   c2 . n 1 c1 + (kn1 − 1)

So, ∂C ∂n1

 c2 c1 + (kn1 − 1)   − n1 c2 k(kn1 − 1)−2 = 0,

=



152

ESTIMATION THEORY

which gives the quadratic equation (kc1 )n21 − (2kc1 )n1 + (c1 − c2 ) = 0. The positive root of this quadratic equation provides a minimum, giving   r  r   c2 c2 3.84 1 ∗ 1+ = , n1 = 1+ 2 k c1 w c1 so that n∗2

=

Finally, we have



3.84 w2

 r  c1 1+ . c2

 q  1 + cc21 n∗1 =  q . n∗2 1 + cc12

As expected, both n∗1 and n∗2 increase as w decreases. If c1 = c2 , then n∗1 = n∗2 ; if c1 > c2 , then n∗2 > n∗1 ; and, if c2 > c1 , then n∗1 > n∗2 . Solution 5.86∗ . (a) For u = n, (n + 1), . . . , N , we have FU (u) = pr(U ≤ u) = pr [∩ni=1 (Xi ≤ u)] n Y   i−1 (Xj ≤ u) = pr(X1 ≤ u) pr Xi ≤ u| ∩j=1 i=2

= =

 u  u − 1  u − 2  N Cun . CN n

N −1

N −2

···



u−n+1 N −n+1



Now, pU (n) = pr(U = n) = 1/CN n ; and, for u = (n + 1), (n + 2), . . . , N , we have pU (u) = = =

pr(U = u) = FU (u) − FU (u − 1) Cu−1 Cun − nN N Cn Cn

u−1 Cn−1

CN n

.

So, we can compactly write the probability distribution of U as pU (u) =

u−1 Cn−1

CN n

, u = n, (n + 1), . . . , N.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

153

Then, we have E(U )

N X Cu−1 u n−1 CN n u=n   N 1 X (u − 1)! = u (n − 1)!(u − n)! CN n u=n

=

=

N 1 X u! (n − 1)!(u − n)! CN n u=n

N n X u Cn CN n u=n n  N +1  Cn+1 = CN  n n (N + 1), = n+1     ˆ ) = E n+1 U − 1 = n+1 E(U ) − 1 = N. so that E(N n n

=

(b) Now,

E[U (U + 1)] =

N X

u(u + 1)

u=n

=

u−1 Cn−1

CN n

N 1 X (u + 1)! N (n − 1)!(u − n)! Cn u=n

N n(n + 1) X u+1 Cn+1 CN n u=n n(n + 1)  N +2  = Cn+2 CN n   n = (N + 1)(N + 2). n+2

=

So, with some algebra, it can be shown that V(U ) = = =

E[U (U + 1)] − E(U ) − [E(U )]2       2 n n n (N + 1)(N + 2) − (N + 1) − (N + 1) n+2 n+1 n+1 n(N + 1)(N − n) . (n + 1)2 (n + 2)

Thus, ˆ) = V(N



n+1 n

2

V(U ) =

(N + 1)(N − n) . n(n + 2)

154

ESTIMATION THEORY ∗

Solution 5.88 . (a) It follows easily that E(Xi1 ) = E(Xi2 ) = α and V(Xi1 ) = V(Xi2 ) = α(1 − α). And, since E(Xi1 Xi2 ) = pr[(Xi1 = 1)∩(Xi2 = 1)] = pr(Xi1 = 1)pr(Xi2 = 1|Xi1 = 1) = αβ, it follows that corr(Xi1 , Xi2 ) = = =

E(Xi1 Xi2 ) − E(Xi1 )E(Xi2 ) p V(Xi1 )V(Xi2 ) αβ − (α)(α) p [α(1 − α)][α(1 − α)] (β − α) . (1 − α)

So, corr(Xi1 , Xi2 ) is negative if β < α, is positive if β > α, and equals 0 if α = β. (b) First, pr[(Xi1 = 0) ∩ (Xi2 = 0)] = − = =

1 − pr[(Xi1 = 1) ∩ (Xi2 = 0)] − pr[(Xi1 = 0) ∩ (Xi2 = 1)] pr[(Xi1 = 1) ∩ (Xi2 = 1)] 1 − α(1 − β) − α(1 − β) − αβ (1 − 2α + αβ).

Thus, expressed in terms of y0 , y1 and y2 , the multinomial likelihood function L takes the form L=

n! (1 − 2α + αβ)y0 [2α(1 − β)]y1 (αβ)y2 , y0 !y1 !y2 !

so that lnL ∝ y0 ln(1 − 2α + αβ) + (y1 + y2 )lnα + y1 ln(1 − β) + y2 lnβ. So, solving simultaneously the two equations (β − 2)y0 (y1 + y2 ) ∂lnL = + =0 ∂α (1 − 2α + αβ) α and

αy0 y1 y2 ∂L = − + =0 ∂β (1 − 2α + αβ) (1 − β) β

ˆ produces the stated expressions for α ˆ and β.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

155

(c) First, it follows directly that E(Y0 ) = n(1 − 2α + αβ), that E(Y1 ) = 2nα(1 − β), and that E(Y2 ) = nαβ. Thus, since (β − 2)2 y0 (y1 + y2 ) ∂ 2 lnL = − − , 2 2 ∂α (1 − 2α + αβ) α2 it follows with some algebra that   2 n(2 − β) ∂ lnL . = −E ∂α2 α(1 − 2α + αβ) And, since

∂ 2 lnL y0 = , ∂α∂β (1 − 2α + αβ)2

it follows that −E Also, since



∂ 2 lnL ∂α∂β



=−

n . (1 − 2α + αβ)

α2 y0 y1 y2 ∂ 2 lnL = − − − 2, ∂β 2 (1 − 2α + αβ)2 (1 − β)2 β it follows with some algebra that  2  ∂ lnL nα(1 − 2α + β) −E . = 2 ∂β β(1 − β)(1 − 2α + αβ) So, with the expected information matrix I equal to " # n(2−β) n − (1−2α+αβ) α(1−2α+αβ) I= , nα(1−2α+β) n − (1−2α+αβ) β(1−β)(1−2α+αβ) it follows directly that the large-sample variance-covariance matrix for α ˆ and βˆ has the structure   α(1 − 2α + β) β(1 − β) . I −1 = (2n)−1 β(1 − β) β(1 − β)(2 − β)/α Thus, for large n, we have V(ˆ α) ≈

α(1 − 2α + β) ˆ ≈ β(1 − β)(2 − β) , , V(β) 2n 2nα

and

β(1 − β) . 2n For n = 100, y0 = 60, y1 = 15, and y2 = 25, it follows from part (b) that α ˆ = 0.325 and βˆ = 0.625. ˆ = cov(ˆ α, β)

156

ESTIMATION THEORY And, the estimated variance of (βˆ − α ˆ ) is equal to ˆ βˆ − α V( ˆ)

ˆ + Vˆ (ˆ ˆ = Vˆ (β) α) − 2cov(ˆ α, β) ˆ ˆ − β) ˆ ˆ − β)(2 ˆ − β) ˆ α ˆ (1 − 2α ˆ + β) 2β(1 β(1 + − = 2nˆ α 2n 2n = 0.0043.

Thus, the computed large-sample 95% confidence interval for the parameter θ = (βˆ − α ˆ ) is q ˆ = (βˆ − α ˆ ) ± 1.96 Vˆ (βˆ − α) =

√ (0.625 − 0.325) ± 1.96 0.0043

0.300 ± 0.129, or (0.171, 0.429).

Since the lower limit of this confidence interval is greater than zero, there is statistical evidence suggesting that β > α. In other words, infants with an ear infection in one ear have a higher probability than α of also having an ear infection in the other ear. An estimate of corr(Xi1 , Xi2 ) is (0.625 − 0.325) (βˆ − α) ˆ = = 0.444, (1 − α) ˆ (1 − 0.325) which is a reasonably high correlation. Solution 5.90∗ . First, note that 

Pn+k



    k n µ+ µ − (n + k) n+k n+k       n k ¯n − µ + ¯∗ − µ , = X X k n+k n+k

 ¯ n+k − µ = X

¯ ∗ = k −1 where X k

Pn+k

¯n + nX

i=n+1

i=n+1 Xi



Xi .

Now, we know that

  ¯∗ − µ ¯n − µ X X k √ √ = Z2 ∼ N(0, 1), = Z1 ∼ N(0, 1), that σ/ n σ/ k and that Z1 and Z2 are independent random variables.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

157

So, we have  ¯ n+k − µ 2 X  ¯n − µ 2 X

 

=

n n+k



    2 ¯n − µ + k ¯∗ − µ X X k n+k   ¯n − µ X

2   ¯∗ Xk − µ ¯n − µ X " √ !  #2    n σ/ k k Z1 √ + n+k n+k σ/ n Z2 " √ ! #2  nk n + T1 . n+k n+k



= =

=

n n+k



+



k n+k

Thus,it follows that pr

"

#  ¯ n+k − µ 2 X =  0|(µ1 − µ2 ) = 10, σ = 20 n " # ¯1 − X ¯ 2 ) − 10 (X 10 p pr > 1.96 − p 2(20)2 /n 2(20)2 /n  √  n pr Z > 1.96 − √ , Z ∼ N(0, 1). 2 2

So, for POWER ≥ 0.90, we require

\[
1.96 - \frac{\sqrt{n}}{2\sqrt{2}} \le -1.282, \quad\text{or}\quad n \ge 8(1.96+1.282)^2 = 84.08,
\]

so that \(n^* = 85\).

Solution 6.6.

(a)

\[
L(y_1, y_2; \theta = 1) = f_{Y_1}(y_1; 1)f_{Y_2}(y_2; 1) = 1, \quad 0 < y_1 < 1,\ 0 < y_2 < 1,
\]
\[
L(y_1, y_2; \theta = 2) = f_{Y_1}(y_1; 2)f_{Y_2}(y_2; 2) = 4y_1y_2, \quad 0 < y_1 < 1,\ 0 < y_2 < 1.
\]


So, the likelihood ratio (LR) is LR =

L (y1 , y2 ; θ = 1) L (y1 , y2 ; θ = 2)

=

4y1 y2





y1 y2







1 ≤ k0 4y1 y2 k0−1 1 −1 k =k 4 0 {(y1 , y2 ) : y1 y2 ≥ k} .

R =

This is a UMP rejection region for all θ > 1 since R will be of the same general structure (namely, y1 y2 ≥ k) for any specific θ > 1. In particular, for any θ > 1, LR =

1 θ2 (y1 y2 )θ−1

≤ k0 ⇒ y1 y2 ≥



1 k0 θ2

1  (θ−1)

.

(b) θ−1

fY1 ,Y2 (y1 , y2 ; θ) = θ2 (y1 y2 )

,

0 < y1 < 1, 0 < y2 < 1.

Let U = Y1 Y2 , V = Y1 ; so, Y1 = V and Y2 = U/V . The Jacobian of the transformation is given by ∂Y ∂Y1 1 0 1 ∂V ∂U −1 = J = ⇒ |J| = V −1 . −1 −U = −V ∂Y ∂Y 2 2 V 2 ∂U

V

∂V

Therefore,

fU,V (u, v; θ) So, fU (u; θ)

= θ2 uθ−1 v −1 , 0 < u < v < 1. Z 1 = θ2 uθ−1 v −1 dv u

1

= θ2 uθ−1 [lnv]u = θ2 uθ−1 (ln1 − lnu)  = θ2 uθ−1 ln u−1 , 0 < u < 1.

(c) For θ = 1, choose kα such that Z 1 Z fU (u; 1)du = kα



(d) For θ = 2, POWER =

1

Z

 ln u−1 du = α.

1



fU (u; 2)du =

Z

1



 4uln u−1 du.

HYPOTHESIS TESTING THEORY

Solution 6.8.

Since the expected number of rabies deaths in City i is \((1-\theta_i)/\theta_i\), i = 1, 2, one can equivalently conduct a generalized likelihood ratio test of \(H_0: \theta_1 = \theta_2\) versus \(H_1: \theta_1 \ne \theta_2\). Let \(L_{\Omega}\) and \(L_{\omega}\) denote the likelihood functions under \(H_1\) and \(H_0\), respectively. Then,

n Y

=

j=1

{[θ1 (1 − θ1 )y1j ][θ2 (1 − θ2 )y2j ]}

= (θ1 θ2 )n (1 − θ1 )ny1 (1 − θ2 )ny2 , and lnLΩ

n(lnθ1 + lnθ2 ) + ny 1 ln(1 − θ1 ) + ny 2 ln(1 − θ2 ).

=

So, ∂lnLΩ ∂θ1

n ny1 1 − = 0 ⇒ θˆ1 = . θ1 (1 − θ1 ) (1 + y1 )

=

By symmetry, θˆ2

1 , (1 + y2 )

=

so that LˆΩ

−n

= (1 + y 1 )

−n

(1 + y 2 )



y1 (1 + y 1 )

ny1 

y2 (1 + y 2 )

ny2

Now, Lω

= θ2n (1 − θ)n(y1 +y2 )

and lnLω

= 2nlnθ + n(y1 + y 2 )ln(1 − θ).

So, ∂lnLω ∂θ

2n n(y 1 + y 2 ) 2 − = 0 ⇒ θˆ = , θ (1 − θ) (2 + y1 + y 2 )

=

so that Lˆω

=



2 (2 + y1 + y 2 )

2n 

(y 1 + y 2 ) (2 + y 1 + y2 )

So, since ˆ λ

=

Lˆω , LˆΩ

n(y1 +y2 )

.

.

SOLUTIONS TO EVEN-NUMBERED EXERCISES

163

we have ˆ = −2lnλ

=

−2lnLˆω + 2lnLˆΩ

−4n[ln(2) − ln(2 + y 1 + y2 )] − 2n(y 1 + y 2 )[ln(y 1 + y2 ) − ln(2 + y1 + y 2 )] − 2nln(1 + y 1 ) − 2nln(1 + y2 ) + 2ny1 [ln(y 1 ) − ln(1 + y 1 )] + 2ny2 [ln(y 2 ) − ln(1 + y 2 )].

ˆ is: For n = 50, y 1 = 5.20, and y 2 = 4.80, the numerical value of −2lnλ ˆ −2lnλ

= −200[ln(2) − ln(12)] − 100(10)[ln(10) − ln(12)] − 100ln(6.20) −100ln(5.80) + 100(5.20)[ln(5.20) − ln(6.20)]

+100(4.80)[ln(4.80) − ln(5.80)] = 358.35 + 182.32 − 182.45 − 175.79 − 91.46 − 90.84 . = 0.13.

Since \(-2\ln\hat{\lambda}\,\dot{\sim}\,\chi^2_1\) for large n under \(H_0: \theta_1 = \theta_2\), there is clearly no evidence in favor of rejecting \(H_0\). In other words, based on the available data, there is no statistical evidence that the two cities differ with regard to the average number of rabies deaths per year.

Solution 6.10.

Let \(\theta_1\) denote a specific value of θ that is greater than \(\frac{1}{3}\), and let \(\mathbf{x} = (x_1, x_2, \ldots, x_n)\). Since \(L(\mathbf{x};\theta) = (1-\theta)^n\theta^{s-n}\), where \(s = \sum_{i=1}^{n}x_i\), we have

kα is the same for every value θ1 of θ greater than 31 , this is a UMP region and associated UMP test. Now, by the Central Limit Theorem, n

S − 1−θ S − E(S) p = p ∼N ˙ (0, 1) V(S) nθ/(1 − θ)2

for large n. So, we would reject H0 : θ = 31 in favor of H1 : θ >   50 S − 1− ( 13 ) (S − 75) (S − 75) r =q  = 6.1237 > 1.645 50 9 50( 31 ) 3 4 (1− 13 )2

1 3

when

164

HYPOTHESIS TESTING THEORY

for a size α = 0.05 test. Thus,   (S − 75) 1 POWER = pr > 1.645 θ = 6.1237 2   1 = pr S > (75 + 10.0735) θ = 2    S − E(S|θ = 1 ) 1  85.0735 − E(S|θ = ) 2 2 q q > = pr   1 1 V(S|θ = 2 ) V(S|θ = 2 )      50   85.0735 − (1− 1 )  2 r , ≈ pr Z >   50( 12 )     2 (1− 12 ) where Z ∼N ˙ (0, 1) for large n. Hence,   85.0735 − 100 √ POWER ≈ pr Z > 100 = pr(Z > −1.4927) . = 0.932.

Solution 6.12. First, since lnE(X) = (µ + σ 2 /2), we can equivalently test H0 : lnE(X) ≤ ln30 versus H1 : lnE(X) > ln30. To start, note that the maximum likelihood estimator (MLE) of (µ + σ 2 /2) is (ˆ µ+σ ˆ 2 /2), where µ ˆ 2 2 and σ ˆ are, respectively, the MLE’s of µ and σ . Now, with yi = lnxi for i = 1, 2, . . . , n, and with y = (y1 , y2 , . . . , yn ), the likelihood L(y; µ, σ 2 ) ≡ L takes the form L=

n Y

(2πσ 2 )−1/2 e−(yi −µ)

2

/2σ2

= (2πσ 2 )−n/2 e−

i=1

Pn

i=1 (yi −µ)

2

/2σ2

so that

,

Pn (yi − µ)2 2 n lnL = − ln2π − lnσ 2 − i=1 2 . n 2 2σ Thus, solving simultaneously the two equations Pn n 2 ∂lnL 2 X n ∂lnL i=1 (yi − µ) (yi − µ) = 0 and = 2 = − + =0 2 2 4 ∂µ 2σ i=1 ∂(σ ) 2σ 2σ gives µ ˆ=n

−1

n X i=1

Yi = Y¯ and σ ˆ 2 = n−1

n X i=1

(Yi − Y¯ )2 =



n−1 n



S2,

SOLUTIONS TO EVEN-NUMBERED EXERCISES so that µ ˆ+σ ˆ 2 /2 = Y¯ + Now,



n−1 2n

− ∂ 2 lnL n ∂ 2 lnL = − , = ∂µ2 σ 2 ∂µ∂(σ 2 )

and

n ∂ 2 lnL = − ∂(σ 2 )2 2σ 4

so that −E and



∂ 2 lnL ∂µ2



−E

Pn



S 2.

Pn

i=1 (yi σ6

i=1 (yi σ4

− µ)2

n = 2 , −E σ



∂ 2 lnL ∂µ∂(σ 2 )



=

n . 2σ 4

∂ 2 lnL ∂(σ 2 )



165

− µ)

,

,



= 0,

So, the large-sample variance of (ˆ µ+σ ˆ 2 /2) is V(ˆ µ) +

V(ˆ σ2 ) σ2 2σ 4 σ 2 (2 + σ 2 ) = + = . 4 n 4n 2n

Thus, from ML-theory, since 2

(ˆ µ + σˆ2 ) − lnE(X) p ∼N(0, ˙ 1) for large n, σ ˆ 2 (2 + σ ˆ 2 )/2n

it follows that we would reject H0 : lnE(X) ≤ 3.4012 in favor of H1 : lnE(X) > 3.4012 at the α ≈ 0.025 level when 2

(ˆ µ + σˆ ) − 3.4012 p 2 > 1.96. σ ˆ 2 (2 + σ ˆ 2 )/2n For the ˆ = y¯ = 3.00, and σ ˆ2 =  given data, n = 30, µ 29 = 30 (2.50) = 2.4167, so that

n−1 n



s2

2

(ˆ µ + σˆ ) − 3.4012 (3.00 + 2.4167 2 ) − 3.4012 p 2 = p = 1.912. 2 2 σ ˆ (2 + σ ˆ )/2n 2.4167(2 + 2.4167)/2(30)

So, these data do not provide sufficient evidence to reject H0 in favor of H1 at the α ≈ 0.025 level of significance. However, since the computed test statistic is very close to 1.96 in value, it is reasonable to suggest that more data be gathered.

166

HYPOTHESIS TESTING THEORY

Solution 6.14. An appropriate likelihood function L for these data is the multinomial distribution, namely,  n  n  n  n 2+θ 1 1−θ 2 1−θ 3 θ 4 n! , L = Q4 4 4 4 4 i=1 ni ! where 0 ≤ ni ≤ n for i = 1, 2, 3, 4, and where So, since

P4

i=1

ni = n.

lnL ∝ n1 ln(2 + θ) + (n2 + n3 )ln(1 − θ) + n4 lnθ, we have

∂lnL n1 (n2 + n3 ) n4 = − + . ∂θ (2 + θ) (1 − θ) θ

Hence, n1 (n2 + n3 ) n4 ∂ 2 lnL =− − − 2, ∂θ2 (2 + θ)2 (1 − θ)2 θ so that −E



∂ 2 lnL ∂θ2



=

= =

E(n2 + n3 ) E(n4 ) E(n1 ) + + 2 (2 + θ) (1 − θ)2 θ2 i h   + (1−θ) n (1−θ) n 2+θ n 4θ 4 4 4 + + (2 + θ)2 (1 − θ)2 θ2 n(1 + 2θ) . 2θ(1 − θ)(2 + θ)

ˆ of θ, ˆ the MLE of θ, is Thus, for large n, the variance V(θ) ˆ ≈ V(θ)

2θ(1 − θ)(2 + θ) . n(1 + 2θ)

So, under H0 : θ = 0.40, ˆ = 0.40) = V(θ|θ

2(0.40)(1 − 0.40)(2 + 0.40) 0.64 = . n[1 + 2(0.40)] n

Hence, a score-type test statistic for testing the null hypothesis H0 : θ = 0.40 versus the alternative hypothesis H1 : θ > 0.40 has the form (θˆ − 0.40) (θˆ − 0.40) √ , Sˆ = q = (0.80/ n) ˆ = 0.40) V(θ|θ

SOLUTIONS TO EVEN-NUMBERED EXERCISES

167

and one would reject the null hypothesis in favor of the alternative hypothesis at the α = 0.05 level when Sˆ > 1.645. So, POWER = = =

= ≈

" # h i ˆ − 0.40) ( θ √ > 1.645|θ = 0.50 pr Sˆ > 1.645|θ = 0.50 = pr (0.80/ n)   1.316 pr θˆ > √ + 0.40|θ = 0.50 n   √ ˆ − 0.50 n) − 0.10 θ (1.316/  pr  q > q ˆ ˆ V(θ|θ = 0.50) V(θ|θ = 0.50) # " √ (1.316/ n) − 0.10 θˆ − 0.50 √ > √ pr 0.791/ n 0.791/ n  √  pr Z > 1.664 − 0.126 n ,

where Z ∼ ˙ N(0,1) for large n.

Finally, for POWER≥ 0.80, we require the smallest value n∗ of n that satisfies the inequality √ 1.664 − 0.126 n ≤ −0.842, which gives n∗ = 396.
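The sample-size requirement can be checked by scanning n directly (a sketch):

```python
import math

# Smallest n satisfying 1.664 - 0.126*sqrt(n) <= -0.842 (POWER >= 0.80 condition).
n = 1
while 1.664 - 0.126 * math.sqrt(n) > -0.842:
    n += 1

print(n)  # 396
```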

Solution 6.16. (a) With y = (y1 , y2 , . . . , yn ), the likelihood and log-likelihood functions are: !−(θ+1) n Y , and yi L(y; θ) = θn cnθ i=1

lnL(y; θ) = nlnθ + nθlnc − (θ + 1) Solving

gives the MLE

Since

n X

lnyi .

i=1

n X n ∂lnL(y; θ) = + nlnc − lnyi = 0 ∂θ θ i=1

n θˆ = Pn i=1 ln

Yi c

=

∂ 2 lnL(y; θ) ∂θ2

ln

n Qn

i=1

=

Yi c

−n , θ2

 .

168

HYPOTHESIS TESTING THEORY we have ˆ= V(θ) ˙

θ2 n

for large n. Since θˆ − θ q ∼N(0, ˙ 1) θˆ2 /n

for large n from maximum likelihood theory, we know that θˆ − 2 √ ∼ ˙ N(0, 1) 2/ n for large n under H0 : θ = 2. So, for α=0.05, ˙ our decision rule is to reject ˆ θ−2 H0 : θ = 2 when 2/√n > 1.645. So, (

) θˆ − 2 √ > 1.645 | θ = 3 2/ n   2(1.645) ˆ θ = 3 = pr θ > 2 + √ n    θˆ − 3  √ − 3 2 + 2(1.645) n √ > √ = pr  3/ n  3/ n   √  √  2(1.645) n n = pr Z > = pr Z > 1.097 − , − 3 3 3

. Power = pr

where Z ∼ ˙ N(0, 1) for large n √ when θ = 3. So, for Power ≥ 0.80, we need √ ∗ ∗ to pick n such that 1.097 − 3n ≤ −0.842, or n∗ ≥ 3(1.097 + 0.842), or n∗ ≥ (5.817)2 = 33.838. So, n∗ = 34. (b) First, L(y; θ = 2) = 2n c2n L(y; θ = 3) = 3n c3n

n Y

yi

i=1 n Y

i=1

yi

!−3

!−4

, and

.

So, using the Neyman-Pearson Theorem, Qn −3 n Y 2n c2n ( i=1 yi ) L(y; θ = 2) ≤k⇒ ≤ k ⇒ yi ≤ k 0 , say, Q −4 L(y; θ = 3) 3n c3n ( ni=1 yi ) i=1

SOLUTIONS TO EVEN-NUMBERED EXERCISES n k. So, the MP rejection region R is of the form where k 0 = 3c 2 R=

(

(y1 , y2 , . . . , yn ) :

n Y

i=1

yi ≤ kα

)

169

,

where

pr

"

n Y

i=1

#

yi ≤ kα |H0 : θ = 2 = α.

Since the same form of rejection region would be obtained for every specific value of θ > 2, R is also a UMP region for testing H0 : θ = 2 versus HA : θ > 2. (c) If n = 1, c = 100, and α = 0.10, we need to specify k0.10 so that pr [Y1 < k0.10 |θ = 2] = 0.10. So we need to specify k0.10 so that Z

k0.10

100

2(100)2 y1−3 dy1

= = =

k0.10  (100)2 −y1−2 100   1 1 2 − 2 (100) (100)2 k0.10 0.10,

so that k0.10 = 105.41. So, Power = = = = =

Solution 6.18.

pr[Y1 < 105.41|θ = 3] Z 105.41 3(100)3 y1−4 dy1 = (100)3 [−y1−3 ]105.41 100 100   1 1 (100)3 − (100)3 (105.41)3 3  100 1− 105.41 1 − 0.8538 = 0.1462.

170

HYPOTHESIS TESTING THEORY

(a) First, for r a non-negative integer,

r

E (X ) =

Z

= 2

1 0

Z

xr [2(1 − θ)x + 2θ(1 − x)] dx

0

1



r

x (x + θ − 2θx) dx = 2 



θ 1 − 2θ xr+2 + r+2 r+1     1 − 2θ θ = 2 + r+2 r+1 2 [1 + r(1 − θ)] = . (r + 1)(r + 2)

= 2

Z



0

1



 (1 − 2θ)xr+1 + θxr dx

xr+1

1 0

 So, it follows directly that E(X) = (2 − θ)/3, that E X 2 = (3 − 2θ)/6, and that  1 + 2θ(1 − θ) 2 . V(X) = E X 2 − [E(X)] = 18 Thus, ¯ = E(X)

(2 − θ) ¯ = 1 + 2θ(1 − θ) and V(X) 3 18n

¯ = 0.80) = 0.0733/n. Thus, When E(X) = 0.40, then θ = 0.80; so, V(X|θ under H0 : θ = 0.40, the standardized random variable ¯ − 0.40 X p ∼N(0, ˙ 1) for large n 0.0733/n by the Central Limit Theorem. And so, for α = 0.05, we would reject H0 : E(X) = 0.40 in favor of H1 : E(X) > 0.40 when the observed value of this standardized random variable exceeds 1.645 in value. When n = 50 and x ¯ = 0.45, p then this standardized random variable takes the value (0.45 − 0.40)/ 0.0733/50 = 1.3059, so that these data do not provide sufficient statistical evidence in favor of the proposition that a typical U.S. worker spends, on average, more than 40% of an 8-hour work day using the internet for non-work-related purposes. ¯ = 0.74) = (b) First, when E(X) = 0.42, then θ = 0.74, so that V(X|θ

SOLUTIONS TO EVEN-NUMBERED EXERCISES

171

0.0769/n. So, POWER = = =

= ≈

# ¯ − 0.40 X > 1.96 E(X) ≥ 0.42 pr p 0.0733/n " # ¯ − 0.40 X ≥ pr p > 1.96 E(X) = 0.42 0.0733/n # " r 0.0733 ¯ > 1.96 + 0.40 E(X) = 0.42 pr X n q   + 0.40 − 0.42 1.96 0.0733 ¯ − 0.42 X n  p > pr  p 0.0769/n 0.0769/n  √  pr Z > 1.9136 − 0.0721 n , "

where Z ∼N(0, ˙ 1) for large n. √ So, for POWER ≥ 0.90, we require (1.9136 − 0.0721 n) ≤ −1.282, or n ≥ 1, 964.4198, giving n∗ = 1, 965. The main reasons why the required sample size is so large are that 0.42 is not much different from the null value of 0.40 and that a high power (namely, at least 0.90) is desired. Solution 6.20∗ . (a) With x = (x1 , x2 , . . . , xn ), the likelihood function is L(x; θ) = (2θ)−n e For any θ1 > θ0 , −

$$
\frac{L(\mathbf{x};\theta_0)}{L(\mathbf{x};\theta_1)}
= \frac{(2\theta_0)^{-n}\, e^{-\sum_{i=1}^n |x_i|/\theta_0}}{(2\theta_1)^{-n}\, e^{-\sum_{i=1}^n |x_i|/\theta_1}}
= \left(\frac{\theta_1}{\theta_0}\right)^{\!n} e^{-\sum_{i=1}^n |x_i|/\theta_0 + \sum_{i=1}^n |x_i|/\theta_1}.
$$

Using the Neyman-Pearson Theorem,
$$
\begin{aligned}
\frac{L(\mathbf{x};\theta_0)}{L(\mathbf{x};\theta_1)} \leq k
\;&\Rightarrow\; n\ln\!\left(\frac{\theta_1}{\theta_0}\right) + \left(\frac{1}{\theta_1} - \frac{1}{\theta_0}\right)\sum_{i=1}^n |x_i| \leq \ln k \\
&\Rightarrow\; \left(\frac{1}{\theta_1} - \frac{1}{\theta_0}\right)\sum_{i=1}^n |x_i| \leq \ln k - n\ln\!\left(\frac{\theta_1}{\theta_0}\right) \\
&\Rightarrow\; \sum_{i=1}^n |x_i| \geq \frac{\ln k - n\ln(\theta_1/\theta_0)}{\left(\dfrac{1}{\theta_1} - \dfrac{1}{\theta_0}\right)} = k_0 = c_\alpha,
\end{aligned}
$$
since $\left(\frac{1}{\theta_1} - \frac{1}{\theta_0}\right) < 0$ for $\theta_1 > \theta_0$. For a size $\alpha$ test, choose $c_\alpha$ so that
$$
\mathrm{pr}\!\left(\sum_{i=1}^n |X_i| > c_\alpha \,\middle|\, H_0\!: \theta = \theta_0\right) = \alpha.
$$
Since $R = \{S : S > c_\alpha\}$, where $S = \sum_{i=1}^n |X_i|$, is always of the same form for all $\theta_1 > \theta_0$, the associated test based on $R$ is a UMP test.

HYPOTHESIS TESTING THEORY

(b) Since
$$
\{-\infty < X_i < +\infty\} = \{-\infty < X_i < 0\} \cup \{0 < X_i < +\infty\},
$$
it follows that, with $Y_i = |X_i|$,
$$
X_i = \begin{cases} -Y_i, & -\infty < X_i < 0; \\ +Y_i, & 0 < X_i < +\infty. \end{cases}
$$
(Note that this is not a 1-1 transformation.) Let $h_1^{-1}(y_i) = -y_i$ and $h_2^{-1}(y_i) = +y_i$. Then, $|J_j| = 1$, $j = 1, 2$. So,
$$
f_{Y_i}(y_i;\theta) = \sum_{j=1}^2 |J_j|\, f_{X_i}\!\left[h_j^{-1}(y_i)\right]
= (2\theta)^{-1} e^{-y_i/\theta} + (2\theta)^{-1} e^{-y_i/\theta}
= \theta^{-1} e^{-y_i/\theta}, \quad y_i > 0.
$$
So, $Y_i \sim \text{gamma}(\alpha = \theta, \beta = 1)$, i.e., $Y_i$ follows an exponential distribution with mean $\theta$.

(c)

$$
\alpha = \mathrm{pr}\!\left(\sum_{i=1}^n |X_i| > c_\alpha \,\middle|\, H_0\!: \theta = \theta_0\right)
= \mathrm{pr}\!\left(\sum_{i=1}^n Y_i > c_\alpha \,\middle|\, H_0\!: \theta = \theta_0\right).
$$

Now, $Y_i \sim \text{gamma}(\alpha = \theta_0, \beta = 1)$ under $H_0$; and, since the $\{Y_i\}$ are mutually independent, $\sum_{i=1}^n Y_i \sim \text{gamma}(\alpha = \theta_0, \beta = n)$ under $H_0$. So,
$$
\frac{\sum_{i=1}^n Y_i - n\theta_0}{\sqrt{n\theta_0^2}} \;\dot\sim\; \mathrm{N}(0,1)
$$
for large $n$ by the Central Limit Theorem. So,
$$
\alpha = \mathrm{pr}\!\left(\frac{\sum_{i=1}^n Y_i - n\theta_0}{\sqrt{n\theta_0^2}} > \frac{c_\alpha - n\theta_0}{\sqrt{n\theta_0^2}} \,\middle|\, H_0\!: \theta = \theta_0\right)
\doteq \mathrm{pr}\!\left(Z > \frac{c_\alpha - n\theta_0}{\sqrt{n\theta_0^2}}\right),
$$
where $Z \,\dot\sim\, \mathrm{N}(0,1)$ under $H_0$.
$$
\Rightarrow\; \frac{c_\alpha - n\theta_0}{\sqrt{n\theta_0^2}} \doteq Z_{1-\alpha}
\;\Rightarrow\; c_\alpha \doteq n\theta_0 + Z_{1-\alpha}\sqrt{n\theta_0^2}
= \theta_0\!\left(n + \sqrt{n}\,Z_{1-\alpha}\right).
$$
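Since $\sum_{i=1}^n Y_i$ is exactly gamma-distributed, the quality of this normal approximation can be checked directly, and the power figure obtained in part (d) below can be recomputed exactly. This is a sketch assuming SciPy, using the settings quoted in part (d) ($n = 100$, $\theta_0 = 1$, $Z_{1-\alpha} = 1.96$, and alternative $\theta = 1.2$).

```python
from math import sqrt
from scipy.stats import gamma

n, theta0, z = 100, 1.0, 1.96

# Normal-approximation critical value: c_alpha ~ theta0 * (n + sqrt(n) * Z)
c_approx = theta0 * (n + sqrt(n) * z)      # 119.6, as used in part (d)

# Exact critical value from sum(Y_i) ~ gamma(shape = n, scale = theta0) under H0
c_exact = gamma.ppf(0.975, a=n, scale=theta0)

# Exact power at theta = 1.2; compare with the 0.513 approximation in part (d)
power_exact = gamma.sf(c_approx, a=n, scale=1.2)
print(round(c_approx, 1))  # → 119.6
```

The approximate and exact critical values differ by well under 2% at $n = 100$, and the exact power is close to the normal-approximation value 0.513 derived in part (d).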

(d)
$$
\begin{aligned}
\text{POWER} &\doteq \mathrm{pr}\!\left(\frac{\sum_{i=1}^n Y_i - n\theta_0}{\sqrt{n\theta_0^2}} > Z_{1-\alpha} \,\middle|\, \theta = 1.2\right) \\
&= \mathrm{pr}\!\left(\frac{\sum_{i=1}^n Y_i - (100)(1)}{\sqrt{(100)(1)^2}} > Z_{0.975} \,\middle|\, \theta = 1.2\right) \\
&= \mathrm{pr}\!\left(\frac{\sum_{i=1}^n Y_i - 100}{10} > 1.96 \,\middle|\, \theta = 1.2\right)
= \mathrm{pr}\!\left(\sum_{i=1}^n Y_i > 119.6 \,\middle|\, \theta = 1.2\right).
\end{aligned}
$$
When $\theta = 1.2$,
$$
\frac{\sum_{i=1}^n Y_i - (100)(1.2)}{\sqrt{(100)(1.2)^2}}
$$
is approximately N(0,1) for large $n$. So,
$$
\text{POWER} \doteq \mathrm{pr}\!\left(\frac{\sum_{i=1}^n Y_i - 120}{12} > \frac{119.6 - 120}{12} \,\middle|\, \theta = 1.2\right)
\doteq \mathrm{pr}(Z > -0.0333),
$$
where $Z \,\dot\sim\, \mathrm{N}(0,1)$. Hence, POWER $\doteq 0.513$.

Solution 6.22∗.


(a) The unrestricted likelihood function is
$$
L_\Omega = \prod_{i=1}^n \left\{\frac{1}{\sqrt{2\pi\theta_1}}\, e^{-x_i^2/2\theta_1} \cdot \frac{1}{\sqrt{2\pi\theta_2^{-1}}}\, e^{-\theta_2 y_i^2/2}\right\}
= (2\pi)^{-n}\, \theta_1^{-n/2} \exp\!\left\{-\sum_{i=1}^n x_i^2/2\theta_1\right\} \theta_2^{n/2} \exp\!\left\{-\theta_2 \sum_{i=1}^n y_i^2/2\right\}.
$$
So,
$$
\ln L_\Omega = -n\ln(2\pi) - \frac{n}{2}\ln\theta_1 - \frac{\sum_{i=1}^n x_i^2}{2\theta_1} + \frac{n}{2}\ln\theta_2 - \frac{\theta_2 \sum_{i=1}^n y_i^2}{2}.
$$
Thus,
$$
\frac{\partial \ln L_\Omega}{\partial \theta_1} = -\frac{n}{2\theta_1} + \frac{\sum_{i=1}^n x_i^2}{2\theta_1^2} = 0
\quad\text{gives}\quad \hat\theta_1 = \sum_{i=1}^n x_i^2 \Big/ n.
$$
And,
$$
\frac{\partial \ln L_\Omega}{\partial \theta_2} = \frac{n}{2\theta_2} - \frac{\sum_{i=1}^n y_i^2}{2} = 0
\quad\text{gives}\quad \hat\theta_2 = n \Big/ \sum_{i=1}^n y_i^2.
$$
So,
$$
\hat{L}_\Omega = (2\pi)^{-n} \left(\frac{\hat\theta_2}{\hat\theta_1}\right)^{\!n/2} e^{-n}.
$$
Under $H_0\!: \theta_1 = \theta_2\ (= \theta, \text{say})$, the restricted likelihood function is
$$
L_\omega = (2\pi)^{-n} \exp\!\left\{-\sum_{i=1}^n x_i^2/2\theta\right\} \exp\!\left\{-\theta \sum_{i=1}^n y_i^2/2\right\},
$$
so that
$$
\ln L_\omega = -n\ln(2\pi) - \frac{\sum_{i=1}^n x_i^2}{2\theta} - \frac{\theta \sum_{i=1}^n y_i^2}{2}.
$$
So,
$$
\frac{\partial \ln L_\omega}{\partial \theta} = \frac{\sum_{i=1}^n x_i^2}{2\theta^2} - \frac{\sum_{i=1}^n y_i^2}{2} = 0
\quad\text{gives}\quad \hat\theta^2 = \frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n y_i^2} = \hat\theta_1 \hat\theta_2,
$$
or $\hat\theta = \sqrt{\hat\theta_1 \hat\theta_2}$. So,
$$
\hat{L}_\omega = (2\pi)^{-n} \exp\!\left\{-\frac{n\hat\theta_1}{2\hat\theta}\right\} \exp\!\left\{-\frac{\hat\theta\, n}{2\hat\theta_2}\right\}
= (2\pi)^{-n} \exp\!\left\{-\frac{n\hat\theta_1}{2\sqrt{\hat\theta_1\hat\theta_2}} - \frac{n\sqrt{\hat\theta_1\hat\theta_2}}{2\hat\theta_2}\right\}
= (2\pi)^{-n} \exp\!\left(-n\sqrt{\hat\theta_1/\hat\theta_2}\right).
$$
So,
$$
\hat\lambda = \frac{\hat{L}_\omega}{\hat{L}_\Omega}
= \frac{(2\pi)^{-n} \exp\!\left(-n\sqrt{\hat\theta_1/\hat\theta_2}\right)}{(2\pi)^{-n} \left(\hat\theta_2/\hat\theta_1\right)^{n/2} e^{-n}}
= \left(\frac{\hat\theta_1}{\hat\theta_2}\right)^{\!n/2} \exp\!\left(n - n\sqrt{\hat\theta_1/\hat\theta_2}\right).
$$
Finally,
$$
-2\ln\hat\lambda
= 2\left[n\sqrt{\hat\theta_1/\hat\theta_2} - n + \frac{n}{2}\ln\!\left(\frac{\hat\theta_2}{\hat\theta_1}\right)\right]
= 2n\left[\sqrt{\hat\theta_1/\hat\theta_2} + 0.50\ln\!\left(\frac{\hat\theta_2}{\hat\theta_1}\right) - 1\right].
$$
For the available data, $n = 30$, $\hat\theta_1 = 60/30 = 2.00$, and $\hat\theta_2 = 30/18 = 1.67$, so that
$$
-2\ln\hat\lambda = 2(30)\left[\sqrt{\frac{2.00}{1.67}} + 0.50\ln\!\left(\frac{1.67}{2.00}\right) - 1\right] = 0.252.
$$
Since $-2\ln\hat\lambda \,\dot\sim\, \chi^2_1$ under $H_0$ for large $n$ and we reject for large values of $-2\ln\hat\lambda$, there is no evidence in these data to reject $H_0$.
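The statistic just computed, and the exact confidence interval obtained in part (b) below, can be reproduced numerically. This is a sketch assuming SciPy; the sums $\sum x_i^2 = 60$ and $\sum y_i^2 = 18$ are taken from the data quoted in this solution.

```python
from math import log, sqrt
from scipy.stats import f

n = 30
theta1_hat = 60 / 30            # sum of x_i^2 divided by n
theta2_hat = 30 / 18            # n divided by sum of y_i^2

# Likelihood ratio statistic: 2n[sqrt(t1/t2) + 0.5*ln(t2/t1) - 1]
stat = 2 * n * (sqrt(theta1_hat / theta2_hat)
                + 0.5 * log(theta2_hat / theta1_hat) - 1)
print(round(stat, 3))           # about 0.257 (the text's 0.252 uses the rounded 1.67)

# Exact 95% CI for theta from part (b): (60/18)^(1/2) times F^{-1/2} and F^{1/2}
ratio = sqrt(60 / 18)
F = f.ppf(0.975, n, n)          # about 2.07, as quoted in part (b)
lo, hi = ratio / sqrt(F), ratio * sqrt(F)
print(round(lo, 3), round(hi, 3))   # close to the text's (1.269, 2.627)
```

The small difference from the printed 0.252 comes entirely from rounding $\hat\theta_2$ to 1.67 in the text.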


(b) From part (a), it is reasonable to conclude that $\theta_1 = \theta_2\ (= \theta, \text{say})$. Now, since $(X_i/\sqrt{\theta}) \sim \mathrm{N}(0,1)$ and $(\sqrt{\theta}\,Y_i) \sim \mathrm{N}(0,1)$, and since $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$ constitute a set of $2n$ mutually independent random variables, it follows that
$$
\frac{\sum_{i=1}^n \theta Y_i^2 / n}{\sum_{i=1}^n X_i^2 / n\theta}
= \theta^2\, \frac{\sum_{i=1}^n Y_i^2}{\sum_{i=1}^n X_i^2} \sim f_{n,n}.
$$
So,
$$
\begin{aligned}
(1-\alpha) &= \mathrm{pr}\!\left[f_{n,n,\alpha/2} < \theta^2\, \frac{\sum_{i=1}^n Y_i^2}{\sum_{i=1}^n X_i^2} < f_{n,n,1-\alpha/2}\right] \\
&= \mathrm{pr}\!\left[\left(\frac{\sum_{i=1}^n X_i^2}{\sum_{i=1}^n Y_i^2}\right)^{\!1/2} f_{n,n,1-\alpha/2}^{-1/2} < \theta < \left(\frac{\sum_{i=1}^n X_i^2}{\sum_{i=1}^n Y_i^2}\right)^{\!1/2} f_{n,n,1-\alpha/2}^{1/2}\right].
\end{aligned}
$$
Thus, an exact $100(1-\alpha)\%$ confidence interval for $\theta$ is
$$
\left[\left(\frac{\sum_{i=1}^n X_i^2}{\sum_{i=1}^n Y_i^2}\right)^{\!1/2} f_{n,n,1-\alpha/2}^{-1/2},\;
\left(\frac{\sum_{i=1}^n X_i^2}{\sum_{i=1}^n Y_i^2}\right)^{\!1/2} f_{n,n,1-\alpha/2}^{1/2}\right].
$$
For the data given in part (a),
$$
\left(\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n y_i^2}\right)^{\!1/2} = \left(\frac{60}{18}\right)^{\!1/2} = 1.8257,
$$
and $f_{30,30,0.975} = 2.07$. So, the 95% exact confidence interval for $\theta$ is
$$
\left(1.8257/\sqrt{2.07},\; 1.8257\sqrt{2.07}\right) = (1.269,\, 2.627).
$$

Solution 6.24∗.

(a) The likelihood function $L$ is
$$
L = \prod_{i=1}^n p_{X_{i1},X_{i2}}(x_{i1}, x_{i2}) = \pi_{11}^{n_{11}}\, \pi_{10}^{n_{10}}\, \pi_{01}^{n_{01}}\, \pi_{00}^{n_{00}},
$$
where $n_{11} = \sum_{i=1}^n x_{i1}x_{i2}$, $n_{10} = \sum_{i=1}^n x_{i1}(1 - x_{i2})$, $n_{01} = \sum_{i=1}^n (1 - x_{i1})x_{i2}$, and $n_{00} = \sum_{i=1}^n (1 - x_{i1})(1 - x_{i2})$. Also, note that the random variables $N_{11}$, $N_{10}$, $N_{01}$, and $N_{00}$ (with corresponding realizations $n_{11}$, $n_{10}$, $n_{01}$, and $n_{00}$) follow a multinomial distribution, so that
$$
\hat\pi_{11} = \hat\theta^2 + \hat\rho\hat\theta(1-\hat\theta) = \frac{n_{11}}{n}, \quad
\hat\pi_{10} = (1-\hat\rho)\hat\theta(1-\hat\theta) = \frac{n_{10}}{n},
$$
$$
\hat\pi_{01} = (1-\hat\rho)\hat\theta(1-\hat\theta) = \frac{n_{01}}{n}, \quad\text{and}\quad
\hat\pi_{00} = (1-\hat\theta)^2 + \hat\rho\hat\theta(1-\hat\theta) = \frac{n_{00}}{n}.
$$
So, it follows directly that
$$
(\hat\pi_{11} - \hat\pi_{00}) = \hat\theta^2 - (1-\hat\theta)^2 = (2\hat\theta - 1),
$$
so that
$$
\hat\theta = \frac{(\hat\pi_{11} - \hat\pi_{00}) + 1}{2}
= \frac{(n_{11} - n_{00}) + (n_{11} + n_{10} + n_{01} + n_{00})}{2n}
= \frac{(2n_{11} + n_{10} + n_{01})}{2n}.
$$
And, since $(\hat\pi_{10} + \hat\pi_{01}) = 2(1-\hat\rho)\hat\theta(1-\hat\theta)$, it follows with some algebra that
$$
\hat\rho = 1 - \frac{(\hat\pi_{10} + \hat\pi_{01})}{2\hat\theta(1-\hat\theta)}
= \frac{4n_{11}n_{00} - (n_{10} + n_{01})^2}{(2n_{11} + n_{10} + n_{01})(2n_{00} + n_{10} + n_{01})}.
$$
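The two forms of $\hat\rho$ above can be confirmed to agree for any cell counts, and the score statistic quoted later in this solution can be reproduced. This is a sketch assuming SciPy; the four cell counts below are hypothetical (the raw counts are not reproduced in this solution), while $n = 40$ and $\hat\rho = 0.216$ are the values quoted here.

```python
from scipy.stats import chi2

# Hypothetical cell counts, only to exercise the algebraic identity
n11, n10, n01, n00 = 3, 4, 5, 28
n = n11 + n10 + n01 + n00

# MLE of theta: (2*n11 + n10 + n01) / (2n)
theta_hat = (2 * n11 + n10 + n01) / (2 * n)

# Two equivalent forms of rho_hat derived above
rho_direct = 1 - ((n10 + n01) / n) / (2 * theta_hat * (1 - theta_hat))
rho_closed = (4 * n11 * n00 - (n10 + n01) ** 2) / (
    (2 * n11 + n10 + n01) * (2 * n00 + n10 + n01))
assert abs(rho_direct - rho_closed) < 1e-12

# Score test from the values quoted in this solution: n = 40, rho_hat = 0.216
S = 40 * 0.216 ** 2
print(round(S, 3), round(chi2.ppf(0.95, 1), 3))  # → 1.866 3.841
```

Since $\hat S = 1.866 < 3.841$, the code reaches the same conclusion as the solution: $H_0\!:\rho = 0$ is not rejected.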

For the available data, $\hat\theta = 0.150$ and $\hat\rho = 0.216$.

(b) Now,
$$
\ln L = n_{11}\ln[\theta^2 + \rho\theta(1-\theta)] + (n_{10} + n_{01})\ln[(1-\rho)\theta(1-\theta)] + n_{00}\ln[(1-\theta)^2 + \rho\theta(1-\theta)].
$$
So,
$$
\frac{\partial \ln L}{\partial \rho} = \frac{(1-\theta)n_{11}}{\theta + \rho(1-\theta)} - \frac{(n_{10} + n_{01})}{(1-\rho)} + \frac{\theta n_{00}}{(1-\theta) + \rho\theta},
$$
and
$$
\frac{\partial^2 \ln L}{\partial \rho^2} = -\frac{(1-\theta)^2 n_{11}}{[\theta + \rho(1-\theta)]^2} - \frac{(n_{10} + n_{01})}{(1-\rho)^2} - \frac{\theta^2 n_{00}}{[(1-\theta) + \rho\theta]^2},
$$
$$
\frac{\partial^2 \ln L}{\partial \rho\,\partial \theta} = \frac{n_{00}}{[(1-\theta) + \rho\theta]^2} - \frac{n_{11}}{[\theta + \rho(1-\theta)]^2}.
$$
Since
$$
\mathrm{E}\!\left(\frac{\partial^2 \ln L}{\partial \rho\,\partial \theta}\right)
= \frac{n[(1-\theta)^2 + \rho\theta(1-\theta)]}{[(1-\theta) + \rho\theta]^2} - \frac{n[\theta^2 + \rho\theta(1-\theta)]}{[\theta + \rho(1-\theta)]^2},
$$
it follows that
$$
\mathrm{E}\!\left(\frac{\partial^2 \ln L}{\partial \rho\,\partial \theta}\right)\bigg|_{\rho=0} = 0,
$$
so that the $(2\times 2)$ expected information matrix under $H_0\!: \rho = 0$ is a diagonal matrix. Thus, it is straightforward to verify that
$$
\mathrm{V}_0(\hat\rho) = \left\{-\mathrm{E}\!\left[\frac{\partial^2 \ln L}{\partial \rho^2}\right]\bigg|_{\rho=0}\right\}^{-1} = n^{-1}.
$$
So, $\hat{S} = n\hat\rho^2$; and, for large $n$, $\hat{S} \,\dot\sim\, \chi^2_1$ under $H_0\!: \rho = 0$. For the available data, the numerical value of $\hat{S}$ is $(40)(0.216)^2 = 1.866$. Since $\chi^2_{0.95,1} = 3.841$, there is not sufficient evidence to reject $H_0\!: \rho = 0$ in favor of $H_1\!: \rho \neq 0$.

Solution 6.26∗.

First, for $i = 1, 2$, it follows that the density function of the random variable $X_i$ is equal to
$$
\frac{dF_{X_i}(x_i)}{dx_i} = m\theta_i^{-m} x_i^{m-1} e^{-(x_i/\theta_i)^m}, \quad 0 < x_i < \infty,\ 0 < \theta_i < \infty,\ m \geq 1,
$$

so that
$$
\mathrm{E}(X_i) = \int_0^\infty x\, m\theta_i^{-m} x^{m-1} e^{-(x/\theta_i)^m}\, dx
= \theta_i^{-m} \int_0^\infty u^{1/m} e^{-u/\theta_i^m}\, du
= \theta_i\, \Gamma\!\left(1 + \frac{1}{m}\right).
$$

Then, with $Y_i = X_i^m$, so that $dY_i = mX_i^{m-1}\,dX_i$, it follows that the density function of the random variable $Y_i$ is equal to
$$
f_{Y_i}(y_i) = \theta_i^{-m} e^{-y_i/\theta_i^m}, \quad 0 < y_i < \infty,\ 0 < \theta_i < \infty,\ m \geq 1.
$$
Thus, $Y_i \sim \text{NEGEXP}(\alpha = \theta_i^m)$, so that $\mathrm{E}(Y_i) = \mathrm{E}(X_i^m) = \theta_i^m$. So, based on the $2n$ data values $y_{ij} = x_{ij}^m$, $i = 1, 2$, $j = 1, 2, \ldots, n$, an appropriate likelihood ratio test would involve testing $H_0\!: \theta_1 = \theta_2$ versus $H_1\!: \theta_1 \neq \theta_2$.

Now, with $s_i = \sum_{j=1}^n y_{ij}$, the unconditional likelihood $L_\Omega$ is equal to
$$
L_\Omega = \prod_{i=1}^2 \prod_{j=1}^n \theta_i^{-m} e^{-y_{ij}/\theta_i^m}
= \prod_{i=1}^2 \theta_i^{-mn} e^{-\theta_i^{-m} s_i},
$$
so that
$$
\ln L_\Omega = \sum_{i=1}^2 \left(-mn\ln\theta_i - \theta_i^{-m} s_i\right).
$$
Then, for $i = 1, 2$,
$$
\frac{\partial \ln L_\Omega}{\partial \theta_i} = \frac{-mn}{\theta_i} + m\theta_i^{-m-1} s_i = 0
$$
gives $\hat\theta_i^m = s_i/n$, so that
$$
\hat{L}_\Omega = \left(\hat\theta_1 \hat\theta_2\right)^{-mn} e^{-2n}.
$$
Now, under $H_0\!: \theta_1 = \theta_2 = \theta$ (say), the likelihood $L_\omega$ is equal to
$$
L_\omega = \theta^{-2mn} e^{-\theta^{-m} \sum_{i=1}^2 s_i},
$$
so that
$$
\ln L_\omega = -2mn\ln\theta - \theta^{-m}(s_1 + s_2).
$$
Thus,
$$
\frac{\partial \ln L_\omega}{\partial \theta} = \frac{-2mn}{\theta} + m\theta^{-m-1}(s_1 + s_2) = 0
$$
gives $\hat\theta^m = (s_1 + s_2)/2n = \left(\hat\theta_1^m + \hat\theta_2^m\right)/2$, so that $\hat{L}_\omega = \hat\theta^{-2mn} e^{-2n}$.

So, the likelihood ratio test statistic $\hat\lambda$ can be written in the form
$$
\hat\lambda = \frac{\hat{L}_\omega}{\hat{L}_\Omega}
= \left(\frac{\hat\theta_1 \hat\theta_2}{\hat\theta^2}\right)^{\!mn}
= \left[\frac{\hat\theta_1^m \hat\theta_2^m}{\left(\hat\theta^m\right)^2}\right]^n
= \left[\frac{s_1 s_2/n^2}{(s_1 + s_2)^2/(2n)^2}\right]^n
= \left[4\left(\frac{s_1}{s_1 + s_2}\right)\left(\frac{s_2}{s_1 + s_2}\right)\right]^n.
$$
For the available data,
$$
\hat\lambda = \left[4\left(\frac{210}{210 + 300}\right)\left(\frac{300}{210 + 300}\right)\right]^{30} = 0.3871.
$$
Under $H_0\!: \theta_1 = \theta_2$ and for large $n$, $-2\ln\hat\lambda \,\dot\sim\, \chi^2_1$. For the available data, $-2\ln\hat\lambda = -2\ln(0.3871) = 1.90$, giving a P-value of about 0.17. So, these data do not provide strong statistical evidence that these drugs are different with regard to prolonging the lives of patients with metastatic colon cancer.
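The final figures can be confirmed numerically. This is a sketch assuming SciPy for the chi-square tail probability; $s_1 = 210$, $s_2 = 300$, and $n = 30$ are taken from this solution.

```python
from math import log
from scipy.stats import chi2

s1, s2, n = 210, 300, 30

# Likelihood ratio statistic: lambda_hat = [4 s1 s2 / (s1 + s2)^2]^n
lam = (4 * s1 * s2 / (s1 + s2) ** 2) ** n
stat = -2 * log(lam)
p_value = chi2.sf(stat, 1)
print(round(lam, 4), round(stat, 2), round(p_value, 2))  # → 0.3871 1.9 0.17
```

The computed P-value of roughly 0.17 confirms that the likelihood ratio test offers no strong evidence against $H_0\!:\theta_1 = \theta_2$.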
