Mathematical Statistics: Exercises and Solutions

Jun Shao

Jun Shao
Department of Statistics
University of Wisconsin
Madison, WI 53706
USA
[email protected]

Library of Congress Control Number: 2005923578 ISBN-10: 0-387-24970-2 ISBN-13: 978-0387-24970-4

Printed on acid-free paper.

© 2005 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America. 9 8 7 6 5 4 3 2 1 springeronline.com


To My Parents

Preface

Since the publication of my book Mathematical Statistics (Shao, 2003), I have been asked many times for a solution manual to the exercises in my book. Without doubt, exercises form an important part of a textbook on mathematical statistics, not only in training students for their research ability in mathematical statistics but also in presenting many additional results as complementary material to the main text. Written solutions to these exercises are important for students who initially do not have the skills to solve these exercises completely, and are very helpful for instructors of a mathematical statistics course (whether or not my book Mathematical Statistics is used as the textbook) in providing answers to students as well as in finding additional examples for the main text. Motivated by this and encouraged by some of my colleagues and Springer-Verlag editor John Kimmel, I have completed this book, Mathematical Statistics: Exercises and Solutions.

This book consists of solutions to 400 exercises, over 95% of which are in my book Mathematical Statistics. Many of them are standard exercises that also appear in other textbooks listed in the references. It is only a partial solution manual to Mathematical Statistics (which contains over 900 exercises). The types of exercise in Mathematical Statistics not selected for the current book are (1) exercises that are routine (each exercise selected in this book has a certain degree of difficulty), (2) exercises similar to one or several exercises selected in the current book, and (3) exercises for advanced material that is often not included in a mathematical statistics course for first-year Ph.D. students in statistics (e.g., Edgeworth expansions and second-order accuracy of confidence sets, empirical likelihoods, statistical functionals, generalized linear models, nonparametric tests, and theory for the bootstrap and jackknife).
On the other hand, this is a stand-alone book: for most readers, the exercises and solutions are comprehensible independently of their source. To help readers not using this book together with Mathematical Statistics, lists of notation, terminology, and some probability distributions are given in the front of the book.

All notational conventions are the same as or very similar to those in Mathematical Statistics, and so is the mathematical level of this book. Readers are assumed to have a good knowledge of advanced calculus. A course in real analysis or measure theory is highly recommended. If this book is used with a statistics textbook that does not include probability theory, then knowledge of measure-theoretic probability theory is required.

The exercises are grouped into seven chapters with titles matching those in Mathematical Statistics. A few errors in the exercises from Mathematical Statistics were detected during the preparation of their solutions, and the corrected versions are given in this book. Although exercises are numbered independently of their source, the corresponding number in Mathematical Statistics accompanies each exercise number for the convenience of instructors and readers who also use Mathematical Statistics as the main text. For example, Exercise 8 (#2.19) means that Exercise 8 in the current book is also Exercise 19 in Chapter 2 of Mathematical Statistics.

A note to students/readers who have a need for exercises accompanied by solutions is that they should not be completely driven by the solutions. Students/readers are encouraged to try each exercise first without reading its solution. If an exercise is solved with the help of a solution, they are encouraged to provide solutions to similar exercises as well as to think about whether there is an alternative solution to the one given in this book. A few exercises in this book are accompanied by two solutions and/or notes of brief discussions.

I would like to thank my teaching assistants, Dr. Hansheng Wang, Dr. Bin Cheng, and Mr. Fang Fang, who provided valuable help in preparing some solutions. Any errors are my own responsibility, and corrections of them can be found on my web page http://www.stat.wisc.edu/~shao.

Madison, Wisconsin
April 2005

Jun Shao

Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Some Distributions . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1. Probability Theory . . . . . . . . . . . . . . . . . . 1
Chapter 2. Fundamentals of Statistics . . . . . . . . . . . . . 51
Chapter 3. Unbiased Estimation . . . . . . . . . . . . . . . . . 95
Chapter 4. Estimation in Parametric Models . . . . . . . . . . 141
Chapter 5. Estimation in Nonparametric Models . . . . . . . . 209
Chapter 6. Hypothesis Tests . . . . . . . . . . . . . . . . . . 251
Chapter 7. Confidence Sets . . . . . . . . . . . . . . . . . . 309
References . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353

Notation

R: The real line.
Rk: The k-dimensional Euclidean space.
c = (c1, ..., ck): A vector (element) in Rk with jth component cj ∈ R; c is considered as a k × 1 matrix (column vector) when matrix algebra is involved.
cτ: The transpose of a vector c ∈ Rk, considered as a 1 × k matrix (row vector) when matrix algebra is involved.
‖c‖: The Euclidean norm of a vector c ∈ Rk, ‖c‖² = cτ c.
|c|: The absolute value of c ∈ R.
Aτ: The transpose of a matrix A.
Det(A) or |A|: The determinant of a matrix A.
tr(A): The trace of a matrix A.
‖A‖: The norm of a matrix A defined by ‖A‖² = tr(Aτ A).
A−1: The inverse of a matrix A.
A−: The generalized inverse of a matrix A.
A1/2: The square root of a nonnegative definite matrix A defined by A1/2 A1/2 = A.
A−1/2: The inverse of A1/2.
R(A): The linear space generated by the rows of a matrix A.
Ik: The k × k identity matrix.
Jk: The k-dimensional vector of 1's.
∅: The empty set.
(a, b): The open interval from a to b.
[a, b]: The closed interval from a to b.
(a, b]: The interval from a to b including b but not a.
[a, b): The interval from a to b including a but not b.
{a, b, c}: The set consisting of the elements a, b, and c.
A1 × · · · × Ak: The Cartesian product of sets A1, ..., Ak, A1 × · · · × Ak = {(a1, ..., ak) : a1 ∈ A1, ..., ak ∈ Ak}.

σ(C): The smallest σ-field that contains C.
σ(X): The smallest σ-field with respect to which X is measurable.
ν1 × · · · × νk: The product measure of ν1, ..., νk on σ(F1 × · · · × Fk), where νi is a measure on Fi, i = 1, ..., k.
B: The Borel σ-field on R.
Bk: The Borel σ-field on Rk.
Ac: The complement of a set A.
A ∪ B: The union of sets A and B.
∪Ai: The union of sets A1, A2, ....
A ∩ B: The intersection of sets A and B.
∩Ai: The intersection of sets A1, A2, ....
IA: The indicator function of a set A.
P(A): The probability of a set A.
∫ f dν: The integral of a Borel function f with respect to a measure ν.
∫A f dν: The integral of f on the set A.
∫ f(x)dF(x): The integral of f with respect to the probability measure corresponding to the cumulative distribution function F.
λ ≪ ν: The measure λ is dominated by the measure ν, i.e., ν(A) = 0 always implies λ(A) = 0.
dλ/dν: The Radon-Nikodym derivative of λ with respect to ν.
P: A collection of populations (distributions).
a.e.: Almost everywhere.
a.s.: Almost surely.
a.s. P: A statement holds except on the event A with P(A) = 0 for all P ∈ P.
δx: The point mass at x ∈ Rk or the distribution degenerated at x ∈ Rk.
{an}: A sequence of elements a1, a2, ....
an → a or limn an = a: {an} converges to a as n increases to ∞.
lim supn an: The largest limit point of {an}, lim supn an = infn supk≥n ak.
lim infn an: The smallest limit point of {an}, lim infn an = supn infk≥n ak.
→p: Convergence in probability.
→d: Convergence in distribution.
g′: The derivative of a function g on R.
g′′: The second-order derivative of a function g on R.
g(k): The kth-order derivative of a function g on R.
g(x+): The right limit of a function g at x ∈ R.
g(x−): The left limit of a function g at x ∈ R.
g+(x): The positive part of a function g, g+(x) = max{g(x), 0}.

g−(x): The negative part of a function g, g−(x) = max{−g(x), 0}.
∂g/∂x: The partial derivative of a function g on Rk.
∂2g/∂x∂xτ: The second-order partial derivative of a function g on Rk.
exp{x}: The exponential function ex.
log x or log(x): The inverse of ex, log(ex) = x.
Γ(t): The gamma function defined by Γ(t) = ∫0∞ xt−1 e−x dx, t > 0.
F−1(p): The pth quantile of a cumulative distribution function F on R, F−1(p) = inf{x : F(x) ≥ p}.
E(X) or EX: The expectation of a random variable (vector or matrix) X.
Var(X): The variance of a random variable X or the covariance matrix of a random vector X.
Cov(X, Y): The covariance between random variables X and Y.
E(X|A): The conditional expectation of X given a σ-field A.
E(X|Y): The conditional expectation of X given Y.
P(A|A): The conditional probability of A given a σ-field A.
P(A|Y): The conditional probability of A given Y.
X(i): The ith order statistic of X1, ..., Xn.
X̄ or X̄·: The sample mean of X1, ..., Xn, X̄ = n−1 ∑i Xi.
X̄·j: The average of Xij's over the index i, X̄·j = n−1 ∑i Xij.
S2: The sample variance of X1, ..., Xn, S2 = (n − 1)−1 ∑i (Xi − X̄)2.
Fn: The empirical distribution of X1, ..., Xn, Fn(t) = n−1 ∑i δXi(t).
ℓ(θ): The likelihood function.
H0: The null hypothesis in a testing problem.
H1: The alternative hypothesis in a testing problem.
L(P, a) or L(θ, a): The loss function in a decision problem.
RT(P) or RT(θ): The risk function of a decision rule T.
rT: The Bayes risk of a decision rule T.
N(µ, σ2): The one-dimensional normal distribution with mean µ and variance σ2.
Nk(µ, Σ): The k-dimensional normal distribution with mean vector µ and covariance matrix Σ.
Φ(x): The cumulative distribution function of N(0, 1).
zα: The (1 − α)th quantile of N(0, 1).
χ2r: The chi-square distribution with degrees of freedom r.
χ2r,α: The (1 − α)th quantile of the chi-square distribution χ2r.
χ2r (δ): The noncentral chi-square distribution with degrees of freedom r and noncentrality parameter δ.

tr: The t-distribution with degrees of freedom r.
tr,α: The (1 − α)th quantile of the t-distribution tr.
tr(δ): The noncentral t-distribution with degrees of freedom r and noncentrality parameter δ.
Fa,b: The F-distribution with degrees of freedom a and b.
Fa,b,α: The (1 − α)th quantile of the F-distribution Fa,b.
Fa,b(δ): The noncentral F-distribution with degrees of freedom a and b and noncentrality parameter δ.
: The end of a solution.
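The sample statistics listed above (the sample mean X̄, the sample variance S2, and the empirical distribution Fn) can be checked with a short numerical sketch. The following Python snippet is illustrative only; it is not part of the original text, and the data values are hypothetical:

```python
# Illustration of the sample mean, sample variance, and empirical
# distribution function from the notation list (hypothetical data).

def sample_mean(xs):
    # Xbar = n^{-1} * sum of X_i
    return sum(xs) / len(xs)

def sample_variance(xs):
    # S^2 = (n - 1)^{-1} * sum of (X_i - Xbar)^2
    n = len(xs)
    xbar = sample_mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / (n - 1)

def empirical_cdf(xs, t):
    # F_n(t) = n^{-1} * #{i : X_i <= t}
    return sum(1 for x in xs if x <= t) / len(xs)

xs = [2.0, 4.0, 4.0, 6.0]
print(sample_mean(xs))         # 4.0
print(sample_variance(xs))     # 8/3
print(empirical_cdf(xs, 4.0))  # 0.75
```

Note that the sample variance uses the divisor n − 1, matching the definition of S2 above, not the divisor n used in the variance of the empirical distribution.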

Terminology

σ-field: A collection F of subsets of a set Ω is a σ-field on Ω if (i) the empty set ∅ ∈ F; (ii) if A ∈ F, then the complement Ac ∈ F; and (iii) if Ai ∈ F, i = 1, 2, ..., then their union ∪Ai ∈ F.

σ-finite measure: A measure ν on a σ-field F on Ω is σ-finite if there are A1, A2, ... in F such that ∪Ai = Ω and ν(Ai) < ∞ for all i.

Action or decision: Let X be a sample from a population P. An action or decision is a conclusion we make about P based on the observed X.

Action space: The set of all possible actions.

Admissibility: A decision rule T is admissible under the loss function L(P, ·), where P is the unknown population, if there is no other decision rule T1 that is better than T in the sense that E[L(P, T1)] ≤ E[L(P, T)] for all P and E[L(P, T1)] < E[L(P, T)] for some P.

Ancillary statistic: A statistic is ancillary if and only if its distribution does not depend on any unknown quantity.

Asymptotic bias: Let Tn be an estimator of θ for every n satisfying an(Tn − θ) →d Y with E|Y| < ∞, where {an} is a sequence of positive numbers satisfying limn an = ∞ or limn an = a > 0. An asymptotic bias of Tn is defined to be EY/an.

Asymptotic level α test: Let X be a sample of size n from P and T(X) be a test for H0: P ∈ P0 versus H1: P ∈ P1. If limn E[T(X)] ≤ α for any P ∈ P0, then T(X) has asymptotic level α.

Asymptotic mean squared error and variance: Let Tn be an estimator of θ for every n satisfying an(Tn − θ) →d Y with 0 < EY2 < ∞, where {an} is a sequence of positive numbers satisfying limn an = ∞. The asymptotic mean squared error of Tn is defined to be EY2/a2n and the asymptotic variance of Tn is defined to be Var(Y)/a2n.

Asymptotic relative efficiency: Let Tn and T′n be estimators of θ. The asymptotic relative efficiency of Tn with respect to T′n is defined to be the asymptotic mean squared error of T′n divided by the asymptotic mean squared error of Tn.

Asymptotically correct confidence set: Let X be a sample of size n from P and C(X) be a confidence set for θ. If limn P(θ ∈ C(X)) = 1 − α, then C(X) is 1 − α asymptotically correct.

Bayes action: Let X be a sample from a population indexed by θ ∈ Θ ⊂ Rk. A Bayes action in a decision problem with action space A and loss function L(θ, a) is the action that minimizes the posterior expected loss E[L(θ, a)] over a ∈ A, where E is the expectation with respect to the posterior distribution of θ given X.

Bayes risk: Let X be a sample from a population indexed by θ ∈ Θ ⊂ Rk. The Bayes risk of a decision rule T is the expected risk of T with respect to a prior distribution on Θ.

Bayes rule or Bayes estimator: A Bayes rule has the smallest Bayes risk over all decision rules. A Bayes estimator is a Bayes rule in an estimation problem.

Borel σ-field Bk: The smallest σ-field containing all open subsets of Rk.

Borel function: A function f from Ω to Rk is Borel with respect to a σ-field F on Ω if and only if f−1(B) ∈ F for any B ∈ Bk.

Characteristic function: The characteristic function of a distribution F on Rk is ∫ exp{√−1 tτ x} dF(x), t ∈ Rk.

Complete (or bounded complete) statistic: Let X be a sample from a population P. A statistic T(X) is complete (or bounded complete) for P if and only if, for any Borel (or bounded Borel) f, E[f(T)] = 0 for all P implies f = 0 except on a set A with P(X ∈ A) = 0 for all P.

Conditional expectation E(X|A): Let X be an integrable random variable on a probability space (Ω, F, P) and A be a σ-field contained in F. The conditional expectation of X given A, denoted by E(X|A), is defined to be the a.s.-unique random variable satisfying (a) E(X|A) is Borel with respect to A and (b) ∫A E(X|A) dP = ∫A X dP for any A ∈ A.

Conditional expectation E(X|Y): The conditional expectation of X given Y, denoted by E(X|Y), is defined as E(X|Y) = E(X|σ(Y)).
Confidence coefficient and confidence set: Let X be a sample from a population P and θ ∈ Rk be an unknown parameter that is a function of P. A confidence set C(X) for θ is a Borel set on Rk depending on X. The confidence coefficient of a confidence set C(X) is infP P(θ ∈ C(X)). A confidence set is said to be a 1 − α confidence set for θ if its confidence coefficient is 1 − α.

Confidence interval: A confidence interval is a confidence set that is an interval.

Consistent estimator: Let X be a sample of size n from P. An estimator T(X) of θ is consistent if and only if T(X) →p θ for any P as n → ∞. T(X) is strongly consistent if and only if limn T(X) = θ a.s. for any P. T(X) is consistent in mean squared error if and only if limn E[T(X) − θ]2 = 0 for any P.

Consistent test: Let X be a sample of size n from P. A test T(X) for testing H0: P ∈ P0 versus H1: P ∈ P1 is consistent if and only if limn E[T(X)] = 1 for any P ∈ P1.

Decision rule (nonrandomized): Let X be a sample from a population P. A (nonrandomized) decision rule is a measurable function from the range of X to the action space.

Discrete probability density: A probability density with respect to the counting measure on the set of nonnegative integers.

Distribution and cumulative distribution function: The probability measure corresponding to a random vector is called its distribution (or law). The cumulative distribution function of a distribution or probability measure P on Bk is F(x1, ..., xk) = P((−∞, x1] × · · · × (−∞, xk]), xi ∈ R.

Empirical Bayes rule: An empirical Bayes rule is a Bayes rule with the parameters in the prior estimated using data.

Empirical distribution: The empirical distribution based on a random sample (X1, ..., Xn) is the distribution putting mass n−1 at each Xi, i = 1, ..., n.

Estimability: A parameter θ is estimable if and only if there exists an unbiased estimator of θ.

Estimator: Let X be a sample from a population P and θ ∈ Rk be a function of P. An estimator of θ is a measurable function of X.

Exponential family: A family of probability densities {fθ : θ ∈ Θ} (with respect to a common σ-finite measure ν), Θ ⊂ Rk, is an exponential family if and only if fθ(x) = exp{[η(θ)]τ T(x) − ξ(θ)}h(x), where T is a random p-vector with a fixed positive integer p, η is a function from Θ to Rp, h is a nonnegative Borel function, and ξ(θ) = log ∫ exp{[η(θ)]τ T(x)}h(x) dν.
Generalized Bayes rule: A generalized Bayes rule is a Bayes rule when the prior distribution is improper.

Improper or proper prior: A prior is improper if it is a measure but not a probability measure. A prior is proper if it is a probability measure.

Independence: Let (Ω, F, P) be a probability space. Events in C ⊂ F are independent if and only if for any positive integer n and distinct events A1, ..., An in C, P(A1 ∩ A2 ∩ · · · ∩ An) = P(A1)P(A2) · · · P(An). Collections Ci ⊂ F, i ∈ I (an index set that can be uncountable), are independent if and only if the events in any collection of the form {Ai ∈ Ci : i ∈ I} are independent. Random elements Xi, i ∈ I, are independent if and only if σ(Xi), i ∈ I, are independent.

Integration or integral: Let ν be a measure on a σ-field F on a set Ω. The integral of a nonnegative simple function (i.e., a function of the form ϕ(ω) = ∑i ai IAi(ω), where ω ∈ Ω, k is a positive integer, A1, ..., Ak are in F, and a1, ..., ak are nonnegative numbers) is defined as ∫ϕ dν = ∑i ai ν(Ai). The integral of a nonnegative Borel function f is defined as ∫f dν = supϕ∈Sf ∫ϕ dν, where Sf is the collection of all nonnegative simple functions that are bounded by f. For a Borel function f, its integral exists if and only if at least one of ∫max{f, 0} dν and ∫max{−f, 0} dν is finite, in which case ∫f dν = ∫max{f, 0} dν − ∫max{−f, 0} dν. f is integrable if and only if both ∫max{f, 0} dν and ∫max{−f, 0} dν are finite. When ν is a probability measure corresponding to the cumulative distribution function F on Rk, we write ∫f dν = ∫f(x) dF(x). For any event A, ∫A f dν is defined as ∫IA f dν.

Invariant decision rule: Let X be a sample from P ∈ P and G be a group of one-to-one transformations of X (g1, g2 ∈ G implies g1 ◦ g2 ∈ G, and g ∈ G implies g−1 ∈ G). P is invariant under G if and only if ḡ(PX) = Pg(X) is a one-to-one transformation from P onto P for each g ∈ G. A decision problem is invariant if and only if P is invariant under G and the loss L(P, a) is invariant in the sense that, for every g ∈ G and every a ∈ A (the collection of all possible actions), there exists a unique ḡ(a) ∈ A such that L(PX, a) = L(Pg(X), ḡ(a)). A decision rule T(x) in an invariant decision problem is invariant if and only if, for every g ∈ G and every x in the range of X, T(g(x)) = ḡ(T(x)).

Invariant estimator: An invariant estimator is an invariant decision rule in an estimation problem.
LR (likelihood ratio) test: Let ℓ(θ) be the likelihood function based on a sample X whose distribution is Pθ, θ ∈ Θ ⊂ Rp for some positive integer p. For testing H0: θ ∈ Θ0 ⊂ Θ versus H1: θ ∉ Θ0, an LR test is any test that rejects H0 if and only if λ(X) < c, where c ∈ [0, 1] and λ(X) = supθ∈Θ0 ℓ(θ)/supθ∈Θ ℓ(θ) is the likelihood ratio.

LSE: The least squares estimator.

Level α test: A test is of level α if its size is at most α.

Level 1 − α confidence set or interval: A confidence set or interval is said to be of level 1 − α if its confidence coefficient is at least 1 − α.

Likelihood function and likelihood equation: Let X be a sample from a population P indexed by an unknown parameter vector θ ∈ Rk. The joint probability density of X treated as a function of θ is called the likelihood function and denoted by ℓ(θ). The likelihood equation is ∂ log ℓ(θ)/∂θ = 0.

Location family: A family of Lebesgue densities on R, {fµ : µ ∈ R}, is a location family with location parameter µ if and only if fµ(x) = f(x − µ), where f is a known Lebesgue density.

Location invariant estimator: Let (X1, ..., Xn) be a random sample from a population in a location family. An estimator T(X1, ..., Xn) of the location parameter is location invariant if and only if T(X1 + c, ..., Xn + c) = T(X1, ..., Xn) + c for any Xi's and c ∈ R.

Location-scale family: A family of Lebesgue densities on R, {fµ,σ : µ ∈ R, σ > 0}, is a location-scale family with location parameter µ and scale parameter σ if and only if fµ,σ(x) = (1/σ)f((x − µ)/σ), where f is a known Lebesgue density.

Location-scale invariant estimator: Let (X1, ..., Xn) be a random sample from a population in a location-scale family with location parameter µ and scale parameter σ. An estimator T(X1, ..., Xn) of the location parameter µ is location-scale invariant if and only if T(rX1 + c, ..., rXn + c) = rT(X1, ..., Xn) + c for any Xi's, c ∈ R, and r > 0. An estimator S(X1, ..., Xn) of σh with a fixed h ≠ 0 is location-scale invariant if and only if S(rX1 + c, ..., rXn + c) = rh S(X1, ..., Xn) for any Xi's, c ∈ R, and r > 0.

Loss function: Let X be a sample from a population P ∈ P and A be the set of all possible actions we may take after we observe X. A loss function L(P, a) is a nonnegative Borel function on P × A such that if a is our action and P is the true population, our loss is L(P, a).

MRIE (minimum risk invariant estimator): The MRIE of an unknown parameter θ is the estimator that has the minimum risk within the class of invariant estimators.

MLE (maximum likelihood estimator): Let X be a sample from a population P indexed by an unknown parameter vector θ ∈ Θ ⊂ Rk and ℓ(θ) be the likelihood function. A θ̂ ∈ Θ satisfying ℓ(θ̂) = maxθ∈Θ ℓ(θ) is called an MLE of θ (Θ may be replaced by its closure in the above definition).
Measure: A set function ν defined on a σ-field F on Ω is a measure if (i) 0 ≤ ν(A) ≤ ∞ for any A ∈ F; (ii) ν(∅) = 0; and (iii) ν(∪∞i=1 Ai) = ∑∞i=1 ν(Ai) for disjoint Ai ∈ F, i = 1, 2, ....

Measurable function: A function f from a set Ω to a set Λ (with a given σ-field G) is measurable with respect to a σ-field F on Ω if f−1(B) ∈ F for any B ∈ G.

Minimax rule: Let X be a sample from a population P and RT(P) be the risk of a decision rule T. A minimax rule is the rule that minimizes supP RT(P) over all possible T.

Moment generating function: The moment generating function of a distribution F on Rk is ∫ exp{tτ x} dF(x), t ∈ Rk, if it is finite.

Monotone likelihood ratio: The family of densities {fθ : θ ∈ Θ} with Θ ⊂ R is said to have monotone likelihood ratio in Y(x) if, for any θ1 < θ2, θi ∈ Θ, fθ2(x)/fθ1(x) is a nondecreasing function of Y(x) for values x at which at least one of fθ1(x) and fθ2(x) is positive.

Optimal rule: An optimal rule (within a class of rules) is the rule that has the smallest risk over all possible populations.

Pivotal quantity: A known Borel function R of (X, θ) is called a pivotal quantity if and only if the distribution of R(X, θ) does not depend on any unknown quantity.

Population: The distribution (or probability measure) of an observation from a random experiment is called the population.

Power of a test: The power of a test T is the expected value of T with respect to the true population.

Prior and posterior distribution: Let X be a sample from a population indexed by θ ∈ Θ ⊂ Rk. A distribution defined on Θ that does not depend on X is called a prior. When the population of X is considered as the conditional distribution of X given θ and the prior is considered as the distribution of θ, the conditional distribution of θ given X is called the posterior distribution of θ.

Probability and probability space: A measure P defined on a σ-field F on a set Ω is called a probability if and only if P(Ω) = 1. The triple (Ω, F, P) is called a probability space.

Probability density: Let (Ω, F, P) be a probability space and ν be a σ-finite measure on F. If P ≪ ν, then the Radon-Nikodym derivative of P with respect to ν is the probability density with respect to ν (and is called a Lebesgue density if ν is the Lebesgue measure on Rk).

Random sample: A sample X = (X1, ..., Xn), where each Xj is a random d-vector with a fixed positive integer d, is called a random sample of size n from a population or distribution P if X1, ..., Xn are independent and identically distributed as P.

Randomized decision rule: Let X be a sample with range X, A be the action space, and FA be a σ-field on A.
A randomized decision rule is a function δ(x, C) on X × FA such that, for every C ∈ FA, δ(·, C) is a Borel function on X and, for every x ∈ X, δ(x, ·) is a probability measure on FA. A nonrandomized decision rule T can be viewed as a degenerate randomized decision rule δ with δ(x, {a}) = I{a}(T(x)) for any a ∈ A and x ∈ X.

Risk: The risk of a decision rule is the expectation (with respect to the true population) of the loss of the decision rule.

Sample: The observation from a population treated as a random element is called a sample.

Scale family: A family of Lebesgue densities on R, {fσ : σ > 0}, is a scale family with scale parameter σ if and only if fσ(x) = (1/σ)f(x/σ), where f is a known Lebesgue density.

Scale invariant estimator: Let (X1, ..., Xn) be a random sample from a population in a scale family with scale parameter σ. An estimator S(X1, ..., Xn) of σh with a fixed h ≠ 0 is scale invariant if and only if S(rX1, ..., rXn) = rh S(X1, ..., Xn) for any Xi's and r > 0.

Simultaneous confidence intervals: Let θt ∈ R, t ∈ T. Confidence intervals Ct(X), t ∈ T, are 1 − α simultaneous confidence intervals for θt, t ∈ T, if P(θt ∈ Ct(X), t ∈ T) = 1 − α.

Statistic: Let X be a sample from a population P. A known Borel function of X is called a statistic.

Sufficiency and minimal sufficiency: Let X be a sample from a population P. A statistic T(X) is sufficient for P if and only if the conditional distribution of X given T does not depend on P. A sufficient statistic T is minimal sufficient if and only if, for any other statistic S sufficient for P, there is a measurable function ψ such that T = ψ(S) except on a set A with P(X ∈ A) = 0 for all P.

Test and its size: Let X be a sample from a population P ∈ P and Pi, i = 0, 1, be subsets of P satisfying P0 ∪ P1 = P and P0 ∩ P1 = ∅. A randomized test for hypotheses H0: P ∈ P0 versus H1: P ∈ P1 is a Borel function T(X) ∈ [0, 1] such that after X is observed, we reject H0 (conclude P ∈ P1) with probability T(X). If T(X) ∈ {0, 1}, then T is nonrandomized. The size of a test T is supP∈P0 E[T(X)], where E is the expectation with respect to P.

UMA (uniformly most accurate) confidence set: Let θ ∈ Θ be an unknown parameter and Θ′ be a subset of Θ that does not contain the true value of θ. A confidence set C(X) for θ with confidence coefficient 1 − α is Θ′-UMA if and only if, for any other confidence set C1(X) with significance level 1 − α, P(θ′ ∈ C(X)) ≤ P(θ′ ∈ C1(X)) for all θ′ ∈ Θ′.
UMAU (uniformly most accurate unbiased) confidence set: Let θ ∈ Θ be an unknown parameter and Θ′ be a subset of Θ that does not contain the true value of θ. A confidence set C(X) for θ with confidence coefficient 1 − α is Θ′-UMAU if and only if C(X) is unbiased and, for any other unbiased confidence set C1(X) with significance level 1 − α, P(θ′ ∈ C(X)) ≤ P(θ′ ∈ C1(X)) for all θ′ ∈ Θ′.

UMP (uniformly most powerful) test: A test T of size α is UMP for testing H0: P ∈ P0 versus H1: P ∈ P1 if and only if, at each P ∈ P1, the power of T is no smaller than the power of any other level α test.

UMPU (uniformly most powerful unbiased) test: An unbiased test T of size α is UMPU for testing H0: P ∈ P0 versus H1: P ∈ P1 if and only if, at each P ∈ P1, the power of T is no smaller than the power of any other level α unbiased test.

UMVUE (uniformly minimum variance unbiased estimator): An estimator is a UMVUE if it has the minimum variance within the class of unbiased estimators.

Unbiased confidence set: A level 1 − α confidence set C(X) is said to be unbiased if and only if P(θ′ ∈ C(X)) ≤ 1 − α for any P and all θ′ ≠ θ.

Unbiased estimator: Let X be a sample from a population P and θ ∈ Rk be a function of P. If an estimator T(X) of θ satisfies E[T(X)] = θ for any P, where E is the expectation with respect to P, then T(X) is an unbiased estimator of θ.

Unbiased test: A test for hypotheses H0: P ∈ P0 versus H1: P ∈ P1 is unbiased if its size is no larger than its power at any P ∈ P1.
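As a concrete illustration of the monotone likelihood ratio property defined above, the following Python sketch (illustrative only, not part of the original text) verifies numerically that the binomial family with a fixed size n has monotone likelihood ratio in Y(x) = x; the values n = 10, p1 = 0.3, p2 = 0.6 are arbitrary choices for the check:

```python
from math import comb

def binom_pmf(n, p, x):
    # Binomial(n, p) density with respect to the counting measure.
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# For theta1 < theta2, the ratio f_{theta2}(x) / f_{theta1}(x)
# should be nondecreasing in x wherever the densities are positive.
n, p1, p2 = 10, 0.3, 0.6
ratios = [binom_pmf(n, p2, x) / binom_pmf(n, p1, x) for x in range(n + 1)]
assert all(a <= b for a, b in zip(ratios, ratios[1:]))
print("Binomial(n, p) has monotone likelihood ratio in x")
```

The check succeeds because the ratio is proportional to [p2(1 − p1)/(p1(1 − p2))]^x, which is increasing in x whenever p1 < p2.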

Some Distributions 1. Discrete uniform distribution on the set {a1 , ..., am }: The probability density (with respect to the counting measure) of this distribution is  −1 x = ai , i = 1, ..., m m f (x) = 0 otherwise, where ai ∈ R, i = 1, ..., m, and m is a positive integer. The expecm tation of this distribution is a ¯ = j=1 aj /m and the variance of this m distribution is j=1 (aj − a ¯)2 /m. The moment generating function of m aj t this distribution is j=1 e /m, t ∈ R. 2. The binomial distribution with size n and probability p: The probability density (with respect to the counting measure) of this distribution is   n x n−x x = 0, 1, ..., n x p (1 − p) f (x) = 0 otherwise, where n is a positive integer and p ∈ [0, 1]. The expectation and variance of this distributions are np and np(1 − p), respectively. The moment generating function of this distribution is (pet + 1 − p)n , t ∈ R. 3. The Poisson distribution with mean θ: The probability density (with respect to the counting measure) of this distribution is  x −θ θ e x = 0, 1, 2, ... x! f (x) 0 otherwise, where θ > 0 is the expectation of this distribution. The variance of this distribution is θ. The moment generating function of this t distribution is eθ(e −1) , t ∈ R. 4. The geometric with mean p−1 : The probability density (with respect to the counting measure) of this distribution is  x = 1, 2, ... (1 − p)x−1 p f (x) = 0 otherwise, xxiii


where p ∈ (0, 1]. The expectation and variance of this distribution are p^{−1} and (1 − p)/p², respectively. The moment generating function of this distribution is pe^t/[1 − (1 − p)e^t], t < −log(1 − p).

5. Hypergeometric distribution: The probability density (with respect to the counting measure) of this distribution is

f(x) = C(n, x)C(m, r − x)/C(N, r) if x = 0, 1, ..., min{r, n} and r − x ≤ m, and f(x) = 0 otherwise,

where r, n, and m are positive integers, and N = n + m. The expectation and variance of this distribution are equal to rn/N and rnm(N − r)/[N²(N − 1)], respectively.

6. Negative binomial distribution with size r and probability p: The probability density (with respect to the counting measure) of this distribution is

f(x) = C(x − 1, r − 1) p^r (1 − p)^{x−r} if x = r, r + 1, ..., and f(x) = 0 otherwise,

where p ∈ (0, 1] and r is a positive integer. The expectation and variance of this distribution are r/p and r(1 − p)/p², respectively. The moment generating function of this distribution is equal to p^r e^{rt}/[1 − (1 − p)e^t]^r, t < −log(1 − p).

7. Log-distribution with probability p: The probability density (with respect to the counting measure) of this distribution is

f(x) = −(log p)^{−1} x^{−1} (1 − p)^x if x = 1, 2, ..., and f(x) = 0 otherwise,

where p ∈ (0, 1). The expectation and variance of this distribution are −(1 − p)/(p log p) and −(1 − p)[1 + (1 − p)/log p]/(p² log p), respectively. The moment generating function of this distribution is equal to log[1 − (1 − p)e^t]/log p, t < −log(1 − p).

8. Uniform distribution on the interval (a, b): The Lebesgue density of this distribution is

f(x) = (b − a)^{−1} I_{(a,b)}(x),

where a and b are real numbers with a < b. The expectation and variance of this distribution are (a + b)/2 and (b − a)²/12, respectively. The moment generating function of this distribution is equal to (e^{bt} − e^{at})/[(b − a)t], t ∈ R.


9. Normal distribution N(µ, σ²): The Lebesgue density of this distribution is

f(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)},

where µ ∈ R and σ² > 0. The expectation and variance of N(µ, σ²) are µ and σ², respectively. The moment generating function of this distribution is e^{µt + σ²t²/2}, t ∈ R.

10. Exponential distribution on the interval (a, ∞) with scale parameter θ: The Lebesgue density of this distribution is

f(x) = θ^{−1} e^{−(x−a)/θ} I_{(a,∞)}(x),

where a ∈ R and θ > 0. The expectation and variance of this distribution are θ + a and θ², respectively. The moment generating function of this distribution is e^{at}(1 − θt)^{−1}, t < θ^{−1}.

11. Gamma distribution with shape parameter α and scale parameter γ: The Lebesgue density of this distribution is

f(x) = [1/(Γ(α)γ^α)] x^{α−1} e^{−x/γ} I_{(0,∞)}(x),

where α > 0 and γ > 0. The expectation and variance of this distribution are αγ and αγ², respectively. The moment generating function of this distribution is (1 − γt)^{−α}, t < γ^{−1}.

12. Beta distribution with parameter (α, β): The Lebesgue density of this distribution is

f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^{α−1}(1 − x)^{β−1} I_{(0,1)}(x),

where α > 0 and β > 0. The expectation and variance of this distribution are α/(α + β) and αβ/[(α + β + 1)(α + β)²], respectively.

13. Cauchy distribution with location parameter µ and scale parameter σ: The Lebesgue density of this distribution is

f(x) = σ/{π[σ² + (x − µ)²]},

where µ ∈ R and σ > 0. The expectation and variance of this distribution do not exist. The characteristic function of this distribution is e^{√(−1)µt − σ|t|}, t ∈ R.


14. Log-normal distribution with parameter (µ, σ²): The Lebesgue density of this distribution is

f(x) = [1/(√(2π)σx)] e^{−(log x − µ)²/(2σ²)} I_{(0,∞)}(x),

where µ ∈ R and σ² > 0. The expectation and variance of this distribution are e^{µ+σ²/2} and e^{2µ+σ²}(e^{σ²} − 1), respectively.

15. Weibull distribution with shape parameter α and scale parameter θ: The Lebesgue density of this distribution is

f(x) = (α/θ) x^{α−1} e^{−x^α/θ} I_{(0,∞)}(x),

where α > 0 and θ > 0. The expectation and variance of this distribution are θ^{1/α}Γ(α^{−1} + 1) and θ^{2/α}{Γ(2α^{−1} + 1) − [Γ(α^{−1} + 1)]²}, respectively.

16. Double exponential distribution with location parameter µ and scale parameter θ: The Lebesgue density of this distribution is

f(x) = [1/(2θ)] e^{−|x−µ|/θ},

where µ ∈ R and θ > 0. The expectation and variance of this distribution are µ and 2θ², respectively. The moment generating function of this distribution is e^{µt}/(1 − θ²t²), |t| < θ^{−1}.

17. Pareto distribution: The Lebesgue density of this distribution is

f(x) = θa^θ x^{−(θ+1)} I_{(a,∞)}(x),

where a > 0 and θ > 0. The expectation of this distribution is θa/(θ − 1) when θ > 1 and does not exist when θ ≤ 1. The variance of this distribution is θa²/[(θ − 1)²(θ − 2)] when θ > 2 and does not exist when θ ≤ 2.

18. Logistic distribution with location parameter µ and scale parameter σ: The Lebesgue density of this distribution is

f(x) = e^{−(x−µ)/σ}/{σ[1 + e^{−(x−µ)/σ}]²},

where µ ∈ R and σ > 0. The expectation and variance of this distribution are µ and σ²π²/3, respectively. The moment generating function of this distribution is e^{µt}Γ(1 + σt)Γ(1 − σt), |t| < σ^{−1}.


19. Chi-square distribution χ²_k: The Lebesgue density of this distribution is

f(x) = [1/(Γ(k/2)2^{k/2})] x^{k/2−1} e^{−x/2} I_{(0,∞)}(x),

where k is a positive integer. The expectation and variance of this distribution are k and 2k, respectively. The moment generating function of this distribution is (1 − 2t)^{−k/2}, t < 1/2.

20. Noncentral chi-square distribution χ²_k(δ): This distribution is defined as the distribution of X1² + ··· + Xk², where X1, ..., Xk are independent, Xi is distributed as N(µi, 1), k is a positive integer, and δ = µ1² + ··· + µk² ≥ 0. δ is called the noncentrality parameter. The Lebesgue density of this distribution is

f(x) = e^{−δ/2} ∑_{j=0}^∞ [(δ/2)^j/j!] f_{2j+k}(x),

where f_k(x) is the Lebesgue density of the chi-square distribution χ²_k. The expectation and variance of this distribution are k + δ and 2k + 4δ, respectively. The characteristic function of this distribution is (1 − 2√(−1)t)^{−k/2} e^{√(−1)δt/(1 − 2√(−1)t)}.

21. t-distribution t_n: The Lebesgue density of this distribution is

f(x) = [Γ((n+1)/2)/(√(nπ)Γ(n/2))] (1 + x²/n)^{−(n+1)/2},

where n is a positive integer. The expectation of t_n is 0 when n > 1 and does not exist when n = 1. The variance of t_n is n/(n − 2) when n > 2 and does not exist when n ≤ 2.

22. Noncentral t-distribution t_n(δ): This distribution is defined as the distribution of X/√(Y/n), where X is distributed as N(δ, 1), Y is distributed as χ²_n, X and Y are independent, n is a positive integer, and δ ∈ R is called the noncentrality parameter. The Lebesgue density of this distribution is

f(x) = [1/(2^{(n+1)/2}Γ(n/2)√(πn))] ∫_0^∞ y^{(n−1)/2} e^{−[(x√(y/n) − δ)² + y]/2} dy.

The expectation of t_n(δ) is δΓ((n−1)/2)√(n/2)/Γ(n/2) when n > 1 and does not exist when n = 1. The variance of t_n(δ) is [n(1 + δ²)/(n − 2)] − [Γ((n−1)/2)/Γ(n/2)]²δ²n/2 when n > 2 and does not exist when n ≤ 2.


23. F-distribution F_{n,m}: The Lebesgue density of this distribution is

f(x) = [n^{n/2}m^{m/2}Γ((n+m)/2) x^{n/2−1}]/[Γ(n/2)Γ(m/2)(m + nx)^{(n+m)/2}] I_{(0,∞)}(x),

where n and m are positive integers. The expectation of F_{n,m} is m/(m − 2) when m > 2 and does not exist when m ≤ 2. The variance of F_{n,m} is 2m²(n + m − 2)/[n(m − 2)²(m − 4)] when m > 4 and does not exist when m ≤ 4.

24. Noncentral F-distribution F_{n,m}(δ): This distribution is defined as the distribution of (X/n)/(Y/m), where X is distributed as χ²_n(δ), Y is distributed as χ²_m, X and Y are independent, n and m are positive integers, and δ ≥ 0 is called the noncentrality parameter. The Lebesgue density of this distribution is

f(x) = e^{−δ/2} ∑_{j=0}^∞ [(δ/2)^j n/(j!(2j + n))] f_{2j+n,m}(nx/(2j + n)),

where f_{k1,k2}(x) is the Lebesgue density of F_{k1,k2}. The expectation of F_{n,m}(δ) is m(n + δ)/[n(m − 2)] when m > 2 and does not exist when m ≤ 2. The variance of F_{n,m}(δ) is 2m²[(n + δ)² + (m − 2)(n + 2δ)]/[n²(m − 2)²(m − 4)] when m > 4 and does not exist when m ≤ 4.

25. Multinomial distribution with size n and probability vector (p1, ..., pk): The probability density (with respect to the counting measure on R^k) is

f(x1, ..., xk) = [n!/(x1!···xk!)] p1^{x1}···pk^{xk} I_B(x1, ..., xk),

where B = {(x1, ..., xk) : xi's are nonnegative integers, ∑_{i=1}^k xi = n}, n is a positive integer, pi ∈ [0, 1], i = 1, ..., k, and ∑_{i=1}^k pi = 1. The mean-vector (expectation) of this distribution is (np1, ..., npk). The variance-covariance matrix of this distribution is the k × k matrix whose ith diagonal element is npi(1 − pi) and (i, j)th off-diagonal element is −npipj.

26. Multivariate normal distribution N_k(µ, Σ): The Lebesgue density of this distribution is

f(x) = {1/[(2π)^{k/2}(Det(Σ))^{1/2}]} e^{−(x−µ)^τΣ^{−1}(x−µ)/2}, x ∈ R^k,

where µ ∈ R^k and Σ is a positive definite k × k matrix. The mean-vector (expectation) of this distribution is µ. The variance-covariance matrix of this distribution is Σ. The moment generating function of N_k(µ, Σ) is e^{t^τµ + t^τΣt/2}, t ∈ R^k.
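Several of the moments tabulated above can be spot-checked numerically. The sketch below is illustrative, not part of the text: it assumes SciPy is available, and the distribution names and parameterizations (`gamma(a, scale)`, `beta(a, b)`, `ncx2(df, nc)`) follow scipy.stats conventions, which happen to match the shape/scale conventions used in this list.

```python
# Numeric spot-check of a few tabulated means/variances using scipy.stats.
# Parameterizations follow scipy.stats conventions (an assumption of this
# sketch): scipy's gamma uses shape a = alpha and scale = gamma.
from scipy import stats

# Item 11, Gamma(alpha, gamma): mean alpha*gamma, variance alpha*gamma^2.
alpha, gam = 3.0, 2.0
g = stats.gamma(a=alpha, scale=gam)
assert abs(g.mean() - alpha * gam) < 1e-9
assert abs(g.var() - alpha * gam**2) < 1e-9

# Item 12, Beta(a, b): mean a/(a+b), variance ab/[(a+b+1)(a+b)^2].
a, b = 2.0, 5.0
be = stats.beta(a, b)
assert abs(be.mean() - a / (a + b)) < 1e-9
assert abs(be.var() - a * b / ((a + b + 1) * (a + b) ** 2)) < 1e-9

# Item 20, noncentral chi-square X^2_k(delta): mean k+delta, variance 2k+4*delta.
k, delta = 4, 1.5
nc = stats.ncx2(df=k, nc=delta)
assert abs(nc.mean() - (k + delta)) < 1e-8
assert abs(nc.var() - (2 * k + 4 * delta)) < 1e-8

print("all moment checks passed")
```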

Chapter 1

Probability Theory

Exercise 1. Let Ω be a set, F be a σ-field on Ω, and C ∈ F. Show that F_C = {C ∩ A : A ∈ F} is a σ-field on C.
Solution. This exercise, similar to many other problems, can be solved by directly verifying the three properties in the definition of a σ-field. (i) The empty subset of C is C ∩ ∅. Since F is a σ-field, ∅ ∈ F. Then, C ∩ ∅ ∈ F_C. (ii) If B ∈ F_C, then B = C ∩ A for some A ∈ F. Since F is a σ-field, A^c ∈ F. Then the complement of B in C is C ∩ A^c ∈ F_C. (iii) If B_i ∈ F_C, i = 1, 2, ..., then B_i = C ∩ A_i for some A_i ∈ F, i = 1, 2, .... Since F is a σ-field, ∪A_i ∈ F. Therefore, ∪B_i = ∪(C ∩ A_i) = C ∩ (∪A_i) ∈ F_C.
Exercise 2 (#1.12)†. Let ν and λ be two measures on a σ-field F on Ω such that ν(A) = λ(A) for any A ∈ C, where C ⊂ F is a collection having the property that if A and B are in C, then so is A ∩ B. Assume that there are A_i ∈ C, i = 1, 2, ..., such that ∪A_i = Ω and ν(A_i) < ∞ for all i. Show that ν(A) = λ(A) for any A ∈ σ(C), where σ(C) is the smallest σ-field containing C.
Note. Solving this problem requires knowing properties of measures (Shao, 2003, §1.1.1). The technique used in solving this exercise is called the “good sets principle”: all sets in C have property A and we want to show that all sets in σ(C) also have property A. Let G be the collection of all sets having property A (the good sets). Then, all we need to show is that G is a σ-field.
Solution. Define G = {A ∈ F : ν(A) = λ(A)}. Since C ⊂ G, σ(C) ⊂ G if G is a σ-field. Hence, the result follows if we can show that G is a σ-field. (i) Since both ν and λ are measures, 0 = ν(∅) = λ(∅) and, thus, the empty set ∅ ∈ G.

† The number in parentheses is the exercise number in Mathematical Statistics (Shao, 2003). The first digit is the chapter number.


(ii) For any B ∈ F, by the inclusion and exclusion formula,

ν(∪_{i=1}^n A_i ∩ B) = ∑_{1≤i≤n} ν(A_i ∩ B) − ∑_{1≤i<j≤n} ν(A_i ∩ A_j ∩ B) + ···

Suppose that a > 0 and b > 0. Let Q be the set of all rational numbers on R. For any c ∈ R,

{af + bg > c} = ∪_{t∈Q} ({f > (c − t)/a} ∩ {g > t/b}).

Since f and g are Borel, {af + bg > c} ∈ F. By Exercise 3, af + bg is Borel. Similar results can be obtained for the case of a > 0 and b < 0, a < 0 and b > 0, or a < 0 and b < 0. From the above result, f + g and f − g are Borel if f and g are Borel. Note that for any c > 0,

{(f + g)² > c} = {f + g > √c} ∪ {f + g < −√c}.

Hence, (f + g)² is Borel. Similarly, (f − g)² is Borel. Then fg = [(f + g)² − (f − g)²]/4 is Borel. Since any constant function is Borel, this shows that af is Borel if f is Borel and a is a constant. Thus, af + bg is Borel even when one of a and b is 0. Assume g ≠ 0. For any c,

{1/g > c} = {0 < g < 1/c} if c > 0; {g > 0} if c = 0; {g > 0} ∪ {1/c < g < 0} if c < 0.

Hence 1/g is Borel if g is Borel and g ≠ 0. Then f/g is Borel if both f and g are Borel and g ≠ 0.
Exercise 5 (#1.14). Let f_i, i = 1, 2, ..., be Borel functions on Ω with respect to a σ-field F. Show that sup_n f_n, inf_n f_n, lim sup_n f_n, and lim inf_n f_n are Borel with respect to F. Also, show that the set

A = {ω ∈ Ω : lim_n f_n(ω) exists}

is in F and the function

h(ω) = lim_n f_n(ω) for ω ∈ A, and h(ω) = f_1(ω) for ω ∉ A,


is Borel with respect to F.
Solution. For any c ∈ R, {sup_n f_n > c} = ∪_n {f_n > c}. By Exercise 3, sup_n f_n is Borel. By Exercise 4, inf_n f_n = −sup_n(−f_n) is Borel. Then lim sup_n f_n = inf_n sup_{k≥n} f_k is Borel and lim inf_n f_n = −lim sup_n(−f_n) is Borel. Consequently, A = {lim sup_n f_n − lim inf_n f_n = 0} ∈ F. The function h is equal to I_A lim sup_n f_n + I_{A^c} f_1, where I_A is the indicator function of the set A. Since A ∈ F, I_A is Borel. Thus, h is Borel.
Exercise 6. Let f be a Borel function on R². Define a function g from R to R as g(x) = f(x, y0), where y0 is a fixed point in R. Show that g is Borel. Is it true that f is Borel from R² to R if f(x, y) with any fixed y or fixed x is Borel from R to R?
Solution. For a fixed y0, define G = {C ⊂ R² : {x : (x, y0) ∈ C} ∈ B}. Then, (i) ∅ ∈ G; (ii) if C ∈ G, {x : (x, y0) ∈ C^c} = {x : (x, y0) ∈ C}^c ∈ B, i.e., C^c ∈ G; (iii) if C_i ∈ G, i = 1, 2, ..., then {x : (x, y0) ∈ ∪C_i} = ∪{x : (x, y0) ∈ C_i} ∈ B, i.e., ∪C_i ∈ G. Thus, G is a σ-field. Since any open rectangle (a, b) × (c, d) ∈ G, G is a σ-field containing all open rectangles and, thus, G contains B², the Borel σ-field on R². Let B ∈ B. Since f is Borel, A = f^{−1}(B) ∈ B². Then A ∈ G and, thus, g^{−1}(B) = {x : f(x, y0) ∈ B} = {x : (x, y0) ∈ A} ∈ B. This proves that g is Borel.
If f(x, y) with any fixed y or fixed x is Borel from R to R, f is not necessarily a Borel function from R² to R. The following is a counterexample. Let A be a non-Borel subset of R and

f(x, y) = 1 if x = y ∈ A, and f(x, y) = 0 otherwise.

Then for any fixed y0, f(x, y0) = 0 if y0 ∉ A and f(x, y0) = I_{{y0}}(x) (the indicator function of the set {y0}) if y0 ∈ A. Hence f(x, y0) is Borel. Similarly, f(x0, y) is Borel for any fixed x0. We now show that f(x, y) is not Borel. Suppose that it is Borel. Then B = {(x, y) : f(x, y) = 1} ∈ B². Define G = {C ⊂ R² : {x : (x, x) ∈ C} ∈ B}. Using the same argument as in the proof of the first part, we can show that G is a σ-field containing B². Hence {x : (x, x) ∈ B} ∈ B. However, by definition {x : (x, x) ∈ B} = A ∉ B. This contradiction proves that f(x, y) is not Borel.
Exercise 7 (#1.21). Let Ω = {ω_i : i = 1, 2, ...} be a countable set, F be all subsets of Ω, and ν be the counting measure on Ω (i.e., ν(A) = the number of elements in A for any A ⊂ Ω). For any Borel function f, the


integral of f w.r.t. ν (if it exists) is

∫ f dν = ∑_{i=1}^∞ f(ω_i).

Note. The definition of integration and properties of integration can be found in Shao (2003, §1.2). This type of exercise is much easier to solve if we first consider nonnegative functions (or simple nonnegative functions) and then general functions by using f+ and f−. See also the next exercise for another example.
Solution. First, consider nonnegative f. Then f = ∑_{i=1}^∞ a_i I_{{ω_i}}, where a_i = f(ω_i) ≥ 0. Since f_n = ∑_{i=1}^n a_i I_{{ω_i}} is a nonnegative simple function (a function is simple if it is a linear combination of finitely many indicator functions of sets in F) and f_n ≤ f, by definition

∫ f_n dν = ∑_{i=1}^n a_i ≤ ∫ f dν.

Letting n → ∞ we obtain that

∫ f dν ≥ ∑_{i=1}^∞ a_i.

Let s = ∑_{i=1}^k b_i I_{{ω_i}} be a nonnegative simple function satisfying s ≤ f. Then 0 ≤ b_i ≤ a_i and

∫ s dν = ∑_{i=1}^k b_i ≤ ∑_{i=1}^∞ a_i.

Hence

∫ f dν = sup{∫ s dν : s is simple, 0 ≤ s ≤ f} ≤ ∑_{i=1}^∞ a_i

and, thus,

∫ f dν = ∑_{i=1}^∞ a_i

for nonnegative f. For general f, let f+ = max{f, 0} and f− = max{−f, 0}. Then

∫ f+ dν = ∑_{i=1}^∞ f+(ω_i) and ∫ f− dν = ∑_{i=1}^∞ f−(ω_i).

Then the result follows from

∫ f dν = ∫ f+ dν − ∫ f− dν

if at least one of ∫ f+ dν and ∫ f− dν is finite.
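For intuition, the counting-measure identity ∫f dν = ∑_i f(ω_i) and the decomposition through f+ and f− can be illustrated on a finite Ω, where every sum is finite. The set and function below are illustrative choices, not from the text.

```python
# Counting-measure integral on a finite Omega: the integral of f is the sum
# of f over the points, and it splits as integral(f+) - integral(f-).
omega = [1, 2, 3, 4, 5]            # a finite stand-in for the countable set
f = {1: 2.0, 2: -1.0, 3: 0.5, 4: -0.25, 5: 3.0}

integral = sum(f[w] for w in omega)             # integral of f dnu
f_plus = sum(max(f[w], 0.0) for w in omega)     # integral of f+ dnu
f_minus = sum(max(-f[w], 0.0) for w in omega)   # integral of f- dnu

assert integral == f_plus - f_minus
print(integral)  # 4.25
```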

Exercise 8 (#1.22). Let ν be a measure on a σ-field F on Ω and f and g be Borel functions with respect to F. Show that
(i) if ∫f dν exists and a ∈ R, then ∫(af)dν exists and is equal to a∫f dν;
(ii) if both ∫f dν and ∫g dν exist and ∫f dν + ∫g dν is well defined, then ∫(f + g)dν exists and is equal to ∫f dν + ∫g dν.
Note. For integrals in calculus, properties such as ∫(af)dν = a∫f dν and ∫(f + g)dν = ∫f dν + ∫g dν are obvious. However, their proofs are complicated for integrals defined on general measure spaces. As shown in this exercise, the proof often has to be broken into several steps: simple functions, nonnegative functions, and then general functions.
Solution. (i) If a = 0, then ∫(af)dν = ∫0 dν = 0 = a∫f dν. Suppose that a > 0 and f ≥ 0. By definition, there exists a sequence of nonnegative simple functions s_n such that s_n ≤ f and lim_n ∫s_n dν = ∫f dν. Then as_n ≤ af and lim_n ∫as_n dν = a lim_n ∫s_n dν = a∫f dν. This shows that ∫(af)dν ≥ a∫f dν. Let b = a^{−1} and consider the function h = b^{−1}f. From what we have shown, ∫f dν = ∫(bh)dν ≥ b∫h dν = a^{−1}∫(af)dν. Hence ∫(af)dν = a∫f dν. For a > 0 and general f, the result follows by considering af = af+ − af−. For a < 0, the result follows by considering af = |a|f− − |a|f+.
(ii) Consider the case where f ≥ 0 and g ≥ 0. If both f and g are simple functions, the result is obvious. Let s_n, t_n, and r_n be simple functions such that 0 ≤ s_n ≤ f, lim_n ∫s_n dν = ∫f dν, 0 ≤ t_n ≤ g, lim_n ∫t_n dν = ∫g dν, 0 ≤ r_n ≤ f + g, and lim_n ∫r_n dν = ∫(f + g)dν. Then s_n + t_n is simple, 0 ≤ s_n + t_n ≤ f + g, and

∫f dν + ∫g dν = lim_n ∫s_n dν + lim_n ∫t_n dν = lim_n ∫(s_n + t_n)dν,

which implies

∫f dν + ∫g dν ≤ ∫(f + g)dν.

If any of ∫f dν and ∫g dν is infinite, then so is ∫(f + g)dν. Hence, we only need to consider the case where both f and g are integrable. Suppose that g is simple. Then r_n − g is simple and

lim_n ∫r_n dν − ∫g dν = lim_n ∫(r_n − g)dν ≤ ∫f dν,

since r_n − g ≤ f. Hence

∫(f + g)dν = lim_n ∫r_n dν ≤ ∫f dν + ∫g dν

and, thus, the result follows if g is simple. For a general g, by the proved result,

lim_n ∫r_n dν − ∫g dν = lim_n ∫(r_n − g)dν.

Hence ∫(f + g)dν = lim_n ∫r_n dν ≤ ∫f dν + ∫g dν and the result follows.
Consider general f and g. Note that

(f + g)+ − (f + g)− = f + g = f+ − f− + g+ − g−,

which leads to

(f + g)+ + f− + g− = (f + g)− + f+ + g+.

From the proved result for nonnegative functions,

∫[(f + g)+ + f− + g−]dν = ∫(f + g)+ dν + ∫f− dν + ∫g− dν = ∫[(f + g)− + f+ + g+]dν = ∫(f + g)− dν + ∫f+ dν + ∫g+ dν.

If both f and g are integrable, then

∫(f + g)+ dν − ∫(f + g)− dν = ∫f+ dν − ∫f− dν + ∫g+ dν − ∫g− dν,

i.e., ∫(f + g)dν = ∫f dν + ∫g dν.
Suppose now that ∫f− dν = ∞. Then ∫f+ dν < ∞ since ∫f dν exists. Since ∫f dν + ∫g dν is well defined, we must have ∫g+ dν < ∞. Since (f + g)+ ≤ f+ + g+, ∫(f + g)+ dν < ∞. Thus, ∫(f + g)− dν = ∞ and ∫(f + g)dν = −∞. On the other hand, we also have ∫f dν + ∫g dν = −∞. Similarly, we can prove the case where ∫f+ dν = ∞ and ∫f− dν < ∞.
Exercise 9 (#1.30). Let F be a cumulative distribution function on the real line R and a ∈ R. Show that

∫[F(x + a) − F(x)]dx = a.


Solution. For a ≥ 0,

∫[F(x + a) − F(x)]dx = ∫∫ I_{(x,x+a]}(y) dF(y) dx.

Since I_{(x,x+a]}(y) ≥ 0, by Fubini's theorem, the above integral is equal to

∫∫ I_{(y−a,y]}(x) dx dF(y) = ∫ a dF(y) = a.

The proof for the case of a < 0 is similar.
Exercise 10 (#1.31). Let F and G be two cumulative distribution functions on the real line. Show that if F and G have no common points of discontinuity in the interval [a, b], then

∫_{(a,b]} G(x)dF(x) = F(b)G(b) − F(a)G(a) − ∫_{(a,b]} F(x)dG(x).

Solution. Let P_F and P_G be the probability measures corresponding to F and G, respectively, and let P = P_F × P_G be the product measure (see Shao, 2003, §1.1.1). Consider the following three Borel sets in R²: A = {(x, y) : x ≤ y, a < y ≤ b}, B = {(x, y) : y ≤ x, a < x ≤ b}, and C = {(x, y) : a < x ≤ b, x = y}. Since F and G have no common points of discontinuity, P(C) = 0. Then,

F(b)G(b) − F(a)G(a) = P((−∞, b]×(−∞, b]) − P((−∞, a]×(−∞, a])
= P(A) + P(B) − P(C)
= P(A) + P(B)
= ∫_A dP + ∫_B dP
= ∫_{(a,b]} ∫_{(−∞,y]} dP_F dP_G + ∫_{(a,b]} ∫_{(−∞,x]} dP_G dP_F
= ∫_{(a,b]} F(y) dP_G + ∫_{(a,b]} G(x) dP_F
= ∫_{(a,b]} F(y) dG(y) + ∫_{(a,b]} G(x) dF(x)
= ∫_{(a,b]} F(x) dG(x) + ∫_{(a,b]} G(x) dF(x),

where the fifth equality follows from Fubini’s theorem. Exercise 11. Let Y be a random variable and m be a median of Y , i.e., P (Y ≤ m) ≥ 1/2 and P (Y ≥ m) ≥ 1/2. Show that, for any real numbers


a and b such that m ≤ a ≤ b or m ≥ a ≥ b, E|Y − a| ≤ E|Y − b|.
Solution. We can assume E|Y| < ∞; otherwise ∞ = E|Y − a| ≤ E|Y − b| = ∞. Assume m ≤ a ≤ b. Then

E|Y − b| − E|Y − a| = E[(b − Y)I_{Y≤b}] + E[(Y − b)I_{Y>b}] − E[(a − Y)I_{Y≤a}] − E[(Y − a)I_{Y>a}]
= 2E[(b − Y)I_{a<Y≤b}] + (a − b)[E(I_{Y>a}) − E(I_{Y≤a})]
≥ (a − b)[1 − 2P(Y ≤ a)]
≥ 0,

since P(Y ≤ a) ≥ P(Y ≤ m) ≥ 1/2. If m ≥ a ≥ b, then −m ≤ −a ≤ −b and −m is a median of −Y. From the proved result, E|(−Y) − (−b)| ≥ E|(−Y) − (−a)|, i.e., E|Y − a| ≤ E|Y − b|.
Exercise 12. Let X and Y be independent random variables satisfying E|X + Y|^a < ∞ for some a > 0. Show that E|X|^a < ∞.
Solution. Let c ∈ R be such that P(Y > c) > 0 and P(Y ≤ c) > 0. Note that

E|X + Y|^a ≥ E(|X + Y|^a I_{Y>c, X+c>0}) + E(|X + Y|^a I_{Y≤c, X+c≤0})
≥ E(|X + c|^a I_{Y>c, X+c>0}) + E(|X + c|^a I_{Y≤c, X+c≤0})
= P(Y > c)E(|X + c|^a I_{X+c>0}) + P(Y ≤ c)E(|X + c|^a I_{X+c≤0}),

where the last equality follows from the independence of X and Y. Since E|X + Y|^a < ∞, both E(|X + c|^a I_{X+c>0}) and E(|X + c|^a I_{X+c≤0}) are finite and

E|X + c|^a = E(|X + c|^a I_{X+c>0}) + E(|X + c|^a I_{X+c≤0}) < ∞.

Then, E|X|^a ≤ 2^a(E|X + c|^a + |c|^a) < ∞.
Exercise 13 (#1.34). Let ν be a σ-finite measure on a σ-field F on Ω, λ be another measure with λ ≪ ν, and f be a nonnegative Borel function on Ω. Show that

∫f dλ = ∫f (dλ/dν) dν,

where dλ/dν is the Radon-Nikodym derivative.
Note. Two measures λ and ν satisfy λ ≪ ν if ν(A) = 0 always implies


λ(A) = 0, which ensures the existence of the Radon-Nikodym derivative dλ/dν when ν is σ-finite (see Shao, 2003, §1.1.2).
Solution. By the definition of the Radon-Nikodym derivative and the linearity of integration, the result follows if f is a simple function. For a general nonnegative f, there is a sequence {s_n} of nonnegative simple functions such that s_n ≤ s_{n+1}, n = 1, 2, ..., and lim_n s_n = f. Then 0 ≤ s_n(dλ/dν) ≤ s_{n+1}(dλ/dν) and lim_n s_n(dλ/dν) = f(dλ/dν). By the monotone convergence theorem (e.g., Theorem 1.1 in Shao, 2003),

∫f dλ = lim_n ∫s_n dλ = lim_n ∫s_n (dλ/dν) dν = ∫f (dλ/dν) dν.

Exercise 14 (#1.34). Let F_i be a σ-field on Ω_i, ν_i be a σ-finite measure on F_i, and λ_i be a measure on F_i with λ_i ≪ ν_i, i = 1, 2. Show that λ1 × λ2 ≪ ν1 × ν2 and

d(λ1 × λ2)/d(ν1 × ν2) = (dλ1/dν1)(dλ2/dν2)  a.e. ν1 × ν2,

where ν1 × ν2 (or λ1 × λ2) denotes the product measure of ν1 and ν2 (or λ1 and λ2).
Solution. Suppose that A ∈ σ(F1 × F2) and ν1 × ν2(A) = 0. By Fubini's theorem,

0 = ν1 × ν2(A) = ∫I_A d(ν1 × ν2) = ∫(∫I_A dν1) dν2.

Since I_A ≥ 0, this implies that there is a B ∈ F2 such that ν2(B^c) = 0 and, on the set B, ∫I_A dν1 = 0. Since λ1 ≪ ν1, on the set B,

∫I_A dλ1 = ∫I_A (dλ1/dν1) dν1 = 0.

Since λ2 ≪ ν2, λ2(B^c) = 0. Then

λ1 × λ2(A) = ∫I_A d(λ1 × λ2) = ∫_B (∫I_A dλ1) dλ2 = 0.

Hence λ1 × λ2 ≪ ν1 × ν2.
For the second assertion, it suffices to show that for any A ∈ σ(F1 × F2), λ(A) = ν(A), where

λ(A) = ∫_A [d(λ1 × λ2)/d(ν1 × ν2)] d(ν1 × ν2) and ν(A) = ∫_A (dλ1/dν1)(dλ2/dν2) d(ν1 × ν2).


Let C = F1 × F2. Then C satisfies the conditions specified in Exercise 2. For A = A1 × A2 ∈ F1 × F2,

λ(A) = ∫_{A1×A2} [d(λ1 × λ2)/d(ν1 × ν2)] d(ν1 × ν2) = ∫_{A1×A2} d(λ1 × λ2) = λ1(A1)λ2(A2)

and, by Fubini's theorem,

ν(A) = ∫_{A1×A2} (dλ1/dν1)(dλ2/dν2) d(ν1 × ν2) = ∫_{A1} (dλ1/dν1) dν1 ∫_{A2} (dλ2/dν2) dν2 = λ1(A1)λ2(A2).

Hence λ(A) = ν(A) for any A ∈ C and the second assertion of this exercise follows from the result in Exercise 2.
Exercise 15. Let P and Q be two probability measures on a σ-field F. Assume that f = dP/dν and g = dQ/dν exist for a measure ν on F. Show that

∫|f − g|dν = 2 sup{|P(C) − Q(C)| : C ∈ F}.

Solution. Let A = {f ≥ g} and B = {f < g}. Then A ∈ F, B ∈ F, and

∫|f − g|dν = ∫_A (f − g)dν + ∫_B (g − f)dν
= P(A) − Q(A) + Q(B) − P(B)
≤ |P(A) − Q(A)| + |P(B) − Q(B)|
≤ 2 sup{|P(C) − Q(C)| : C ∈ F}.

For any C ∈ F,

P(C) − Q(C) = ∫_C (f − g)dν = ∫_{C∩A} (f − g)dν + ∫_{C∩B} (f − g)dν ≤ ∫_A (f − g)dν.

Since

∫_C (f − g)dν + ∫_{C^c} (f − g)dν = ∫(f − g)dν = 1 − 1 = 0,


we have

P(C) − Q(C) = ∫_{C^c} (g − f)dν = ∫_{C^c∩A} (g − f)dν + ∫_{C^c∩B} (g − f)dν ≤ ∫_B (g − f)dν.

Hence

2[P(C) − Q(C)] ≤ ∫_A (f − g)dν + ∫_B (g − f)dν = ∫|f − g|dν.

Similarly, 2[Q(C) − P(C)] ≤ ∫|f − g|dν. Thus, 2|P(C) − Q(C)| ≤ ∫|f − g|dν and, consequently, ∫|f − g|dν ≥ 2 sup{|P(C) − Q(C)| : C ∈ F}.
Exercise 16 (#1.36). Let F_i be a cumulative distribution function on the real line having a Lebesgue density f_i, i = 1, 2. Assume that there is a real number c such that F1(c) < F2(c). Define

F(x) = F1(x), −∞ < x < c, and F(x) = F2(x), c ≤ x < ∞.

Show that the probability measure P corresponding to F satisfies P ≪ m + δ_c, where m is the Lebesgue measure and δ_c is the point mass at c, and find the probability density of F with respect to m + δ_c.
Solution. For any A ∈ B,

P(A) = ∫_{(−∞,c)∩A} f1(x)dm + a ∫_{{c}∩A} dδ_c + ∫_{(c,∞)∩A} f2(x)dm,

where a = F2(c) − F1(c). Note that ∫_{(−∞,c)∩A} dδ_c = 0, ∫_{(c,∞)∩A} dδ_c = 0, and ∫_{{c}∩A} dm = 0. Hence,

P(A) = ∫_{(−∞,c)∩A} f1(x)d(m + δ_c) + a ∫_{{c}∩A} d(m + δ_c) + ∫_{(c,∞)∩A} f2(x)d(m + δ_c)
= ∫_A [I_{(−∞,c)}(x)f1(x) + aI_{{c}}(x) + I_{(c,∞)}(x)f2(x)] d(m + δ_c).

This shows that P ≪ m + δ_c and

dP/d(m + δ_c) = I_{(−∞,c)}(x)f1(x) + aI_{{c}}(x) + I_{(c,∞)}(x)f2(x).
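As a numerical sanity check on a density of this mixed type, its integral with respect to m + δ_c should be 1: the Lebesgue part contributes ∫_{(−∞,c)} f1 dm + ∫_{(c,∞)} f2 dm and the point mass at c contributes a = F2(c) − F1(c). The concrete cdfs below (two normal cdfs) and c = 0 are illustrative choices satisfying F1(c) < F2(c), not from the text.

```python
# Check that the density w.r.t. m + delta_c integrates to 1: Lebesgue part
# plus the jump a = F2(c) - F1(c) at c.  F1 = N(1,1) cdf and F2 = N(0,1) cdf
# are illustrative choices with F1(0) < F2(0).
import numpy as np
from scipy import stats
from scipy.integrate import quad

c = 0.0
F1, F2 = stats.norm(loc=1.0), stats.norm(loc=0.0)
a = F2.cdf(c) - F1.cdf(c)            # point mass of F at c
assert a > 0

left, _ = quad(F1.pdf, -np.inf, c)   # integral of f1 over (-inf, c) w.r.t. m
right, _ = quad(F2.pdf, c, np.inf)   # integral of f2 over (c, inf) w.r.t. m
total = left + a + right             # delta_c contributes the jump a
assert abs(total - 1.0) < 1e-8
print(total)
```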


Exercise 17 (#1.46). Let X1 and X2 be independent random variables having the standard normal distribution. Obtain the joint Lebesgue density of (Y1, Y2), where Y1 = √(X1² + X2²) and Y2 = X1/X2. Are Y1 and Y2 independent?
Note. For this type of problem, we may apply the following result. Let X be a random k-vector with a Lebesgue density f_X and let Y = g(X), where g is a Borel function from (R^k, B^k) to (R^k, B^k). Let A1, ..., Am be disjoint sets in B^k such that R^k − (A1 ∪ ··· ∪ Am) has Lebesgue measure 0 and g on A_j is one-to-one with a nonvanishing Jacobian, i.e., the determinant Det(∂g(x)/∂x) ≠ 0 on A_j, j = 1, ..., m. Then Y has the following Lebesgue density:

f_Y(x) = ∑_{j=1}^m |Det(∂h_j(x)/∂x)| f_X(h_j(x)),

where h_j is the inverse function of g on A_j, j = 1, ..., m.
Solution. Let A1 = {(x1, x2): x1 > 0, x2 > 0}, A2 = {(x1, x2): x1 > 0, x2 < 0}, A3 = {(x1, x2): x1 < 0, x2 > 0}, and A4 = {(x1, x2): x1 < 0, x2 < 0}. Then the Lebesgue measure of R² − (A1 ∪ A2 ∪ A3 ∪ A4) is 0. On each A_i, the function (y1, y2) = (√(x1² + x2²), x1/x2) is one-to-one with

|Det(∂(x1, x2)/∂(y1, y2))| = y1/(1 + y2²).

Since the joint Lebesgue density of (X1, X2) is (2π)^{−1}e^{−(x1² + x2²)/2} and x1² + x2² = y1², and for each (y1, y2) exactly two of the four inverse branches contribute, the joint Lebesgue density of (Y1, Y2) is

y1 e^{−y1²/2} I_{(0,∞)}(y1) · 1/[π(1 + y2²)].

Since the joint Lebesgue density of (Y1, Y2) is a product of two functions, one a function of y1 only and the other a function of y2 only, Y1 and Y2 are independent.
Exercise 18 (#1.45). Let X_i, i = 1, 2, 3, be independent random variables having the same Lebesgue density f(x) = e^{−x}I_{(0,∞)}(x). Obtain the joint Lebesgue density of (Y1, Y2, Y3), where Y1 = X1 + X2 + X3, Y2 = X1/(X1 + X2), and Y3 = (X1 + X2)/(X1 + X2 + X3). Are the Y_i's independent?
Solution. Let x1 = y1y2y3, x2 = y1y3 − y1y2y3, and x3 = y1 − y1y3. Then

Det(∂(x1, x2, x3)/∂(y1, y2, y3)) = y1²y3.


Using the same argument as that in the previous exercise, we obtain the joint Lebesgue density of (Y1, Y2, Y3) as

e^{−y1} y1² I_{(0,∞)}(y1) I_{(0,1)}(y2) y3 I_{(0,1)}(y3).

Because this function is a product of three functions, e^{−y1}y1²I_{(0,∞)}(y1), I_{(0,1)}(y2), and y3I_{(0,1)}(y3), Y1, Y2, and Y3 are independent.
Exercise 19 (#1.47). Let X and Y be independent random variables with cumulative distribution functions F_X and F_Y, respectively. Show that
(i) the cumulative distribution function of X + Y is F_{X+Y}(t) = ∫F_Y(t − x)dF_X(x);
(ii) F_{X+Y} is continuous if one of F_X and F_Y is continuous;
(iii) X + Y has a Lebesgue density if one of X and Y has a Lebesgue density.
Solution. (i) Note that

F_{X+Y}(t) = ∫∫_{x+y≤t} dF_X(x)dF_Y(y) = ∫(∫_{y≤t−x} dF_Y(y)) dF_X(x) = ∫F_Y(t − x)dF_X(x),

where the second equality follows from Fubini's theorem.
(ii) Without loss of generality, we assume that F_Y is continuous. Since F_Y is bounded, by the dominated convergence theorem (e.g., Theorem 1.1 in Shao, 2003),

lim_{Δt→0} F_{X+Y}(t + Δt) = lim_{Δt→0} ∫F_Y(t + Δt − x)dF_X(x) = ∫lim_{Δt→0} F_Y(t + Δt − x)dF_X(x) = ∫F_Y(t − x)dF_X(x) = F_{X+Y}(t).

(iii) Without loss of generality, we assume that Y has a Lebesgue density f_Y. Then

F_{X+Y}(t) = ∫F_Y(t − x)dF_X(x) = ∫(∫_{−∞}^{t−x} f_Y(s)ds) dF_X(x) = ∫(∫_{−∞}^{t} f_Y(y − x)dy) dF_X(x) = ∫_{−∞}^{t} (∫f_Y(y − x)dF_X(x)) dy,

where the last equality follows from Fubini's theorem. Hence, X + Y has the Lebesgue density f_{X+Y}(t) = ∫f_Y(t − x)dF_X(x).
Exercise 20 (#1.94). Show that a random variable X is independent of itself if and only if X is constant a.s. Can X and f(X) be independent, where f is a Borel function?
Solution. Suppose that X = c a.s. for a constant c ∈ R. For any A ∈ B and B ∈ B, P(X ∈ A, X ∈ B) = I_A(c)I_B(c) = P(X ∈ A)P(X ∈ B). Hence X and X are independent. Suppose now that X is independent of itself. Then, for any t ∈ R, P(X ≤ t) = P(X ≤ t, X ≤ t) = [P(X ≤ t)]². This means that P(X ≤ t) can only be 0 or 1. Since lim_{t→∞} P(X ≤ t) = 1 and lim_{t→−∞} P(X ≤ t) = 0, there must be a c ∈ R such that P(X ≤ c) = 1 and P(X < c) = 0. This shows that X = c a.s. If X and f(X) are independent, then so are f(X) and f(X). From the previous result, this occurs if and only if f(X) is constant a.s.
Exercise 21 (#1.38). Let (X, Y, Z) be a random 3-vector with the following Lebesgue density:

f(x, y, z) = (1 − sin x sin y sin z)/(8π³) if 0 ≤ x, y, z ≤ 2π, and f(x, y, z) = 0 otherwise.

Show that X, Y, Z are pairwise independent, but not independent.
Solution. The Lebesgue density for (X, Y) is

∫_0^{2π} f(x, y, z)dz = ∫_0^{2π} (1 − sin x sin y sin z)/(8π³) dz = 1/(4π²), 0 ≤ x, y ≤ 2π.

The Lebesgue density for X or Y is

∫_0^{2π} ∫_0^{2π} f(x, y, z)dydz = ∫_0^{2π} 1/(4π²) dy = 1/(2π), 0 ≤ x ≤ 2π.

Hence X and Y are independent. Similarly, X and Z are independent and Y and Z are independent. Note that

P(X ≤ π) = P(Y ≤ π) = P(Z ≤ π) = ∫_0^π 1/(2π) dx = 1/2.


Hence P(X ≤ π)P(Y ≤ π)P(Z ≤ π) = 1/8. On the other hand,

P(X ≤ π, Y ≤ π, Z ≤ π) = ∫_0^π ∫_0^π ∫_0^π (1 − sin x sin y sin z)/(8π³) dxdydz = 1/8 − (1/(8π³))(∫_0^π sin x dx)³ = 1/8 − 1/π³.

Hence X, Y, and Z are not independent.
Exercise 22 (#1.51, #1.53). Let X be a random n-vector having the multivariate normal distribution N_n(µ, I_n).
(i) Apply Cochran's theorem to show that if A² = A, then X^τAX has the noncentral chi-square distribution χ²_r(δ), where A is an n × n symmetric matrix, r = the rank of A, and δ = µ^τAµ.
(ii) Let A_i be an n × n symmetric matrix satisfying A_i² = A_i, i = 1, 2. Show that a necessary and sufficient condition for X^τA1X and X^τA2X to be independent is A1A2 = 0.
Note. If X1, ..., Xk are independent and X_i has the normal distribution N(µ_i, σ²), i = 1, ..., k, then the distribution of (X1² + ··· + Xk²)/σ² is called the noncentral chi-square distribution χ²_k(δ), where δ = (µ1² + ··· + µk²)/σ². When δ = 0, χ²_k is called the central chi-square distribution.
Solution. (i) Since A² = A, i.e., A is a projection matrix,

(I_n − A)² = I_n − A − A + A² = I_n − A.

Hence, I_n − A is a projection matrix with rank tr(I_n − A) = tr(I_n) − tr(A) = n − r. The result then follows by applying Cochran's theorem (e.g., Theorem 1.5 in Shao, 2003) to

X^τX = X^τAX + X^τ(I_n − A)X.

(ii) Suppose that A1A2 = 0. Then A2A1 = (A1A2)^τ = 0 and

(I_n − A1 − A2)² = I_n − A1 − A2 − A1 + A1² + A2A1 − A2 + A1A2 + A2² = I_n − A1 − A2,

i.e., I_n − A1 − A2 is a projection matrix with rank = tr(I_n − A1 − A2) = n − r1 − r2, where r_i = tr(A_i) is the rank of A_i, i = 1, 2. By Cochran's theorem and

X^τX = X^τA1X + X^τA2X + X^τ(I_n − A1 − A2)X,

X^τA1X and X^τA2X are independent.


Assume that X^τA₁X and X^τA₂X are independent. Since X^τA_iX has the noncentral chi-square distribution χ²_{r_i}(δ_i), where r_i is the rank of A_i and δ_i = µ^τA_iµ, X^τ(A₁ + A₂)X has the noncentral chi-square distribution χ²_{r₁+r₂}(δ₁ + δ₂). Consequently, A₁ + A₂ is a projection matrix, i.e., (A₁ + A₂)² = A₁ + A₂, which implies A₁A₂ + A₂A₁ = 0. Since A₁² = A₁, we obtain that
0 = A₁(A₁A₂ + A₂A₁) = A₁A₂ + A₁A₂A₁
and
0 = A₁(A₁A₂ + A₂A₁)A₁ = 2A₁A₂A₁,
which imply A₁A₂ = 0.

Exercise 23 (#1.55). Let X be a random variable having a cumulative distribution function F. Show that if EX exists, then
EX = ∫₀^∞ [1 − F(x)] dx − ∫_{−∞}^0 F(x) dx.
Solution. By Fubini’s theorem,
∫₀^∞ [1 − F(x)] dx = ∫₀^∞ ∫_{(x,∞)} dF(y) dx = ∫_{(0,∞)} ∫_{(0,y)} dx dF(y) = ∫₀^∞ y dF(y).
Similarly,
∫_{−∞}^0 F(x) dx = ∫_{−∞}^0 ∫_{(−∞,x]} dF(y) dx = −∫_{−∞}^0 y dF(y).
If EX exists, then at least one of ∫₀^∞ y dF(y) and ∫_{−∞}^0 y dF(y) is finite and
EX = ∫_{−∞}^∞ y dF(y) = ∫₀^∞ [1 − F(x)] dx − ∫_{−∞}^0 F(x) dx.
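As a numerical sanity check (an addition, not part of the original solution), the identity of Exercise 23 can be verified for a concrete distribution. The sketch below uses X = Z − 1/2 with Z having the exponential distribution with mean 1, so EX = 1/2; the truncation limits and step counts are arbitrary choices.

```python
import math

def expected_value_from_cdf(F, lo=-50.0, hi=50.0, steps=100_000):
    # EX = int_0^hi [1 - F(x)] dx - int_lo^0 F(x) dx, by Riemann sums;
    # lo and hi truncate the integrals where the tails are negligible.
    h_pos = hi / steps
    pos = sum((1.0 - F(i * h_pos)) * h_pos for i in range(steps))
    h_neg = -lo / steps
    neg = sum(F(lo + i * h_neg) * h_neg for i in range(steps))
    return pos - neg

# X = Z - 1/2 with Z ~ Exp(1): F(x) = 1 - exp(-(x + 1/2)) for x >= -1/2, EX = 1/2
F = lambda x: 1.0 - math.exp(-(x + 0.5)) if x >= -0.5 else 0.0
approx_mean = expected_value_from_cdf(F)
```

The two one-sided integrals here evaluate to e^{−1/2} and e^{−1/2} − 1/2, whose difference is 1/2.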

Exercise 24 (#1.58(c)). Let X and Y be random variables having the bivariate normal distribution with EX = EY = 0, Var(X) = Var(Y) = 1, and Cov(X, Y) = ρ. Show that E(max{X, Y}) = √((1 − ρ)/π).
Solution. Note that
|X − Y| = max{X, Y} − min{X, Y} = max{X, Y} + max{−X, −Y}.
Since the joint distribution of (X, Y) is symmetric about 0, the distributions of max{X, Y} and max{−X, −Y} are the same. Hence, E|X − Y| = 2E(max{X, Y}). From the property of the normal distribution, X − Y is normally distributed with mean 0 and variance
Var(X − Y) = Var(X) + Var(Y) − 2Cov(X, Y) = 2 − 2ρ.
Then,
E(max{X, Y}) = 2⁻¹E|X − Y| = 2⁻¹√(2/π)·√(2 − 2ρ) = √((1 − ρ)/π).

Exercise 25 (#1.60). Let X be a random variable with EX² < ∞ and let Y = |X|. Suppose that X has a Lebesgue density symmetric about 0. Show that X and Y are uncorrelated, but they are not independent.
Solution. Let f be the Lebesgue density of X. Then f(x) = f(−x). Since X and XY = X|X| are odd functions of X, EX = 0 and E(X|X|) = 0. Hence,
Cov(X, Y) = E(XY) − EXEY = E(X|X|) − EXE|X| = 0.
Let t be a positive constant such that p = P(0 < X < t) > 0. Then
P(0 < X < t, Y < t) = P(0 < X < t, −t < X < t) = P(0 < X < t) = p
and
P(0 < X < t)P(Y < t) = P(0 < X < t)P(−t < X < t) = 2P(0 < X < t)P(0 < X < t) = 2p²,
i.e., P(0 < X < t, Y < t) ≠ P(0 < X < t)P(Y < t). Hence X and Y are not independent.

Exercise 26 (#1.61). Let (X, Y) be a random 2-vector with the Lebesgue density
f(x, y) = π⁻¹ if x² + y² ≤ 1 and f(x, y) = 0 if x² + y² > 1.
Show that X and Y are uncorrelated, but they are not independent.
Solution. Since X and Y are uniformly distributed on the Borel set {(x, y) : x² + y² ≤ 1}, EX = EY = 0 and E(XY) = 0. Hence Cov(X, Y) = 0. A direct calculation shows that
P(0 < X < 1/√2, 0 < Y < 1/√2) = 1/(2π)
and
P(0 < X < 1/√2) = P(0 < Y < 1/√2) = 1/4 + 1/(2π).

Hence,
P(0 < X < 1/√2, 0 < Y < 1/√2) ≠ P(0 < X < 1/√2)P(0 < Y < 1/√2)
and X and Y are not independent.

Exercise 27 (#1.48, #1.70). Let Y be a random variable having the noncentral chi-square distribution χ²_k(δ), where k is a positive integer. Show that
(i) the Lebesgue density of Y is
g_{δ,k}(t) = e^{−δ/2} Σ_{j=0}^∞ [(δ/2)^j/j!] f_{2j+k}(t),

where f_j(t) = [Γ(j/2)2^{j/2}]⁻¹ t^{j/2−1} e^{−t/2} I_{(0,∞)}(t) is the Lebesgue density of the central chi-square distribution χ²_j, j = 1, 2, ...;
(ii) the characteristic function of Y is (1 − 2√−1 t)^{−k/2} e^{√−1 δt/(1−2√−1 t)};
(iii) E(Y) = k + δ and Var(Y) = 2k + 4δ.
Solution A. (i) Consider first k = 1. By the definition of the noncentral chi-square distribution (e.g., Shao, 2003, p. 26), the distribution of Y is the same as that of X², where X has the normal distribution with mean √δ and variance 1. Since
P(Y ≤ t) = P(X ≤ √t) − P(X ≤ −√t)
for t > 0, the Lebesgue density of Y is
f_Y(t) = (1/(2√t))[f_X(√t) + f_X(−√t)]I_{(0,∞)}(t),
where f_X is the Lebesgue density of X. Using the fact that X has a normal distribution, we obtain that, for t > 0,
f_Y(t) = [1/(2√(2πt))][e^{−(√t−√δ)²/2} + e^{−(−√t−√δ)²/2}]
= [e^{−δ/2}e^{−t/2}/(2√(2πt))](e^{√(δt)} + e^{−√(δt)})
= [e^{−δ/2}e^{−t/2}/(2√(2πt))][Σ_{j=0}^∞ (√(δt))^j/j! + Σ_{j=0}^∞ (−√(δt))^j/j!]
= [e^{−δ/2}e^{−t/2}/√(2πt)] Σ_{j=0}^∞ (δt)^j/(2j)!.
On the other hand, for k = 1 and t > 0,
g_{δ,1}(t) = e^{−δ/2} Σ_{j=0}^∞ [(δ/2)^j/j!][Γ(j + 1/2)2^{j+1/2}]⁻¹ t^{j−1/2} e^{−t/2}
= [e^{−δ/2}e^{−t/2}/√(2t)] Σ_{j=0}^∞ (δt)^j/[j!Γ(j + 1/2)2^{2j}].
Since j!2^{2j}Γ(j + 1/2) = √π(2j)!, f_Y(t) = g_{δ,1}(t) holds.
We then use induction. By definition, Y = X₁ + X₂, where X₁ has the noncentral chi-square distribution χ²_{k−1}(δ), X₂ has the central chi-square distribution χ²₁, and X₁ and X₂ are independent. By the induction assumption, the Lebesgue density of X₁ is g_{δ,k−1}. Note that the Lebesgue density of X₂ is f₁. Using the convolution formula (e.g., Example 1.15 in Shao, 2003), the Lebesgue density of Y is
f_Y(t) = ∫ g_{δ,k−1}(u)f₁(t − u)du = e^{−δ/2} Σ_{j=0}^∞ [(δ/2)^j/j!] ∫ f_{2j+k−1}(u)f₁(t − u)du
for t > 0. By the convolution formula again, ∫ f_{2j+k−1}(u)f₁(t − u)du is the Lebesgue density of Z + X₂, where Z has density f_{2j+k−1} and is independent of X₂. By definition, Z + X₂ has the central chi-square distribution χ²_{2j+k}, i.e.,
∫ f_{2j+k−1}(u)f₁(t − u)du = f_{2j+k}(t).
Hence, f_Y = g_{δ,k}.
(ii) Note that the moment generating function of the central chi-square distribution χ²_k is, for t < 1/2,
∫ e^{tu}f_k(u)du = [Γ(k/2)2^{k/2}]⁻¹ ∫₀^∞ u^{k/2−1} e^{−(1−2t)u/2} du
= [Γ(k/2)2^{k/2}(1 − 2t)^{k/2}]⁻¹ ∫₀^∞ s^{k/2−1} e^{−s/2} ds
= (1 − 2t)^{−k/2},
where the second equality follows from the change of variable s = (1 − 2t)u in the integration. By the result in (i), the moment generating function of Y is
∫ e^{tx}g_{δ,k}(x)dx = e^{−δ/2} Σ_{j=0}^∞ [(δ/2)^j/j!] ∫ e^{tx}f_{2j+k}(x)dx
= e^{−δ/2} Σ_{j=0}^∞ (δ/2)^j/[j!(1 − 2t)^{j+k/2}]
= (1 − 2t)^{−k/2} e^{−δ/2} Σ_{j=0}^∞ {δ/[2(1 − 2t)]}^j/j!
= e^{−δ/2+δ/[2(1−2t)]}/(1 − 2t)^{k/2}
= e^{δt/(1−2t)}/(1 − 2t)^{k/2}.
Substituting t by √−1 t in the moment generating function of Y, we obtain the characteristic function of Y as (1 − 2√−1 t)^{−k/2} e^{√−1 δt/(1−2√−1 t)}.
(iii) Let ψ(t) be the moment generating function of Y. By the result in (ii),
ψ′(t) = ψ(t)[2δt/(1 − 2t)² + δ/(1 − 2t) + k/(1 − 2t)]
and
ψ″(t) = ψ′(t)[2δt/(1 − 2t)² + δ/(1 − 2t) + k/(1 − 2t)] + ψ(t)[4δ/(1 − 2t)² + 8δt/(1 − 2t)³ + 2k/(1 − 2t)²].
Hence, EY = ψ′(0) = δ + k, EY² = ψ″(0) = (δ + k)² + 4δ + 2k, and Var(Y) = EY² − (EY)² = 4δ + 2k.
Solution B. (i) We first derive result (ii). Let X be a random variable having the standard normal distribution and µ be a real number. The moment generating function of (X + µ)² is
ψ_µ(t) = (2π)^{−1/2} ∫ e^{−x²/2} e^{t(x+µ)²} dx = e^{µ²t/(1−2t)} (2π)^{−1/2} ∫ e^{−(1−2t)[x−2µt/(1−2t)]²/2} dx = e^{µ²t/(1−2t)}/√(1 − 2t)
for t < 1/2.

By definition, Y has the same distribution as X₁² + · · · + X²_{k−1} + (X_k + √δ)², where the X_i’s are independent and have the standard normal distribution. From the obtained result, the moment generating function of Y is
E e^{t[X₁²+···+X²_{k−1}+(X_k+√δ)²]} = [ψ₀(t)]^{k−1} ψ_{√δ}(t) = e^{δt/(1−2t)}/(1 − 2t)^{k/2}.

(ii) We now use the result in (ii) to prove the result in (i). From part (ii) of Solution A, the moment generating function of g_{δ,k} is e^{δt/(1−2t)}(1 − 2t)^{−k/2}, which is the same as the moment generating function of Y derived in part (i) of this solution. By the uniqueness theorem (e.g., Theorem 1.6 in Shao, 2003), we conclude that g_{δ,k} is the Lebesgue density of Y.
(iii) Let the X_i’s be as defined in (i). Then
EY = EX₁² + · · · + EX²_{k−1} + E(X_k + √δ)² = k − 1 + EX_k² + δ + E(2√δ X_k) = k + δ
and
Var(Y) = Var(X₁²) + · · · + Var(X²_{k−1}) + Var((X_k + √δ)²)
= 2(k − 1) + Var(X_k² + 2√δ X_k)
= 2(k − 1) + Var(X_k²) + Var(2√δ X_k) + 2Cov(X_k², 2√δ X_k)
= 2k + 4δ,
since Var(X_i²) = 2 and Cov(X_k², X_k) = EX_k³ − EX_k²EX_k = 0.

Exercise 28 (#1.57). Let U₁ and U₂ be independent random variables having the χ²_{n₁}(δ) and χ²_{n₂} distributions, respectively, and let F = (U₁/n₁)/(U₂/n₂). Show that
(i) E(F) = n₂(n₁ + δ)/[n₁(n₂ − 2)] when n₂ > 2;
(ii) Var(F) = 2n₂²[(n₁ + δ)² + (n₂ − 2)(n₁ + 2δ)]/[n₁²(n₂ − 2)²(n₂ − 4)] when n₂ > 4.
Note. The distribution of F is called the noncentral F-distribution and denoted by F_{n₁,n₂}(δ).
Solution. From the previous exercise, EU₁ = n₁ + δ and EU₁² = Var(U₁) + (EU₁)² = 2n₁ + 4δ + (n₁ + δ)². Also,
EU₂⁻¹ = [Γ(n₂/2)2^{n₂/2}]⁻¹ ∫₀^∞ x^{n₂/2−2} e^{−x/2} dx = Γ(n₂/2 − 1)2^{n₂/2−1}/[Γ(n₂/2)2^{n₂/2}] = 1/(n₂ − 2)
for n₂ > 2 and
EU₂⁻² = [Γ(n₂/2)2^{n₂/2}]⁻¹ ∫₀^∞ x^{n₂/2−3} e^{−x/2} dx = Γ(n₂/2 − 2)2^{n₂/2−2}/[Γ(n₂/2)2^{n₂/2}] = 1/[(n₂ − 2)(n₂ − 4)]
for n₂ > 4. Then,
E(F) = E[(U₁/n₁)/(U₂/n₂)] = (n₂/n₁)EU₁EU₂⁻¹ = n₂(n₁ + δ)/[n₁(n₂ − 2)]
when n₂ > 2 and
Var(F) = E[(U₁²/n₁²)/(U₂²/n₂²)] − [E(F)]²
= (n₂²/n₁²)EU₁²EU₂⁻² − [n₂(n₁ + δ)/(n₁(n₂ − 2))]²
= (n₂²/n₁²){[2n₁ + 4δ + (n₁ + δ)²]/[(n₂ − 2)(n₂ − 4)] − (n₁ + δ)²/(n₂ − 2)²}
= 2n₂²[(n₁ + δ)² + (n₂ − 2)(n₁ + 2δ)]/[n₁²(n₂ − 2)²(n₂ − 4)]

when n₂ > 4.

Exercise 29 (#1.74). Let φ_n be the characteristic function of a probability measure P_n, n = 1, 2, .... Let {a_n} be a sequence of nonnegative numbers with Σ_{n=1}^∞ a_n = 1. Show that Σ_{n=1}^∞ a_nφ_n is a characteristic function and find its corresponding probability measure.
Solution A. For any event A, define
P(A) = Σ_{n=1}^∞ a_nP_n(A).
Then P is a probability measure and P_n ≪ P for any n. Denote the Radon-Nikodym derivative of P_n with respect to P by f_n, n = 1, 2, .... By Fubini’s theorem, for any event A,
∫_A Σ_{n=1}^∞ a_nf_n dP = Σ_{n=1}^∞ a_n ∫_A f_n dP = Σ_{n=1}^∞ a_nP_n(A) = P(A).
Hence, Σ_{n=1}^∞ a_nf_n = 1 a.s. P. Then,
Σ_{n=1}^∞ a_nφ_n(t) = Σ_{n=1}^∞ a_n ∫ e^{√−1 tx} dP_n(x)
= Σ_{n=1}^∞ a_n ∫ e^{√−1 tx} f_n(x)dP
= ∫ e^{√−1 tx} Σ_{n=1}^∞ a_nf_n(x)dP
= ∫ e^{√−1 tx} dP.

Hence, Σ_{n=1}^∞ a_nφ_n is the characteristic function of P.
Solution B. Let X be a discrete random variable satisfying P(X = n) = a_n and Y be a random variable such that given X = n, the conditional distribution of Y is P_n, n = 1, 2, .... The characteristic function of Y is
E(e^{√−1 tY}) = E[E(e^{√−1 tY}|X)]
= Σ_{n=1}^∞ a_nE(e^{√−1 tY}|X = n)
= Σ_{n=1}^∞ a_n ∫ e^{√−1 ty} dP_n(y)
= Σ_{n=1}^∞ a_nφ_n(t).

This shows that Σ_{n=1}^∞ a_nφ_n is the characteristic function of the marginal distribution of Y.

Exercise 30 (#1.79). Find an example of two random variables X and Y such that X and Y are not independent but their characteristic functions φ_X and φ_Y satisfy φ_X(t)φ_Y(t) = φ_{X+Y}(t) for all t ∈ R.
Solution. Let X = Y be a random variable having the Cauchy distribution with φ_X(t) = φ_Y(t) = e^{−|t|}. Then X and Y are not independent (see Exercise 20). The characteristic function of X + Y = 2X is
φ_{X+Y}(t) = E(e^{√−1 t(2X)}) = φ_X(2t) = e^{−|2t|} = e^{−|t|}e^{−|t|} = φ_X(t)φ_Y(t).
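A quick Monte Carlo illustration of this example (an addition, not from the original text): with X standard Cauchy and Y = X, the estimate of E cos(2tX) at t = 1 should match e^{−2|t|} = φ_X(1)φ_Y(1). The sample size, seed, and tolerance are arbitrary choices.

```python
import math, random

random.seed(2024)
N = 400_000
# standard Cauchy via the inverse CDF; Y = X, so X and Y are fully dependent
total = 0.0
for _ in range(N):
    x = math.tan(math.pi * (random.random() - 0.5))
    total += math.cos(2.0 * x)          # real part of E exp(i*2X), i.e. phi_{X+Y}(1)
phi_sum_est = total / N
phi_product = math.exp(-1.0) ** 2       # phi_X(1) * phi_Y(1) = e^{-1} * e^{-1}
```

Note that cos is bounded, so the Monte Carlo average is well behaved even though the Cauchy distribution has no mean.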

Exercise 31 (#1.75). Let X be a random variable whose characteristic function φ satisfies ∫|φ(t)|dt < ∞. Show that (2π)⁻¹ ∫ e^{−√−1 xt} φ(t)dt is the Lebesgue density of X.
Solution. Define g(t, x) = (e^{−√−1 ta} − e^{−√−1 tx})/(√−1 t) for a fixed real number a. For any x, |g(t, x)| ≤ |x − a|. Under the condition ∫|φ(t)|dt < ∞,
F(x) − F(a) = (2π)⁻¹ ∫_{−∞}^∞ φ(t)g(t, x)dt
(e.g., Theorem 1.6 in Shao, 2003), where F is the cumulative distribution function of X. Since
|∂g(t, x)/∂x| = |e^{−√−1 tx}| = 1,
by the dominated convergence theorem (Theorem 1.1 and Example 1.8 in Shao, 2003),
F′(x) = (d/dx)[(2π)⁻¹ ∫_{−∞}^∞ φ(t)g(t, x)dt]
= (2π)⁻¹ ∫_{−∞}^∞ φ(t)[∂g(t, x)/∂x]dt
= (2π)⁻¹ ∫_{−∞}^∞ φ(t)e^{−√−1 tx}dt.

Exercise 32 (#1.73(g)). Let φ be a characteristic function and G be a cumulative distribution function on the real line. Show that ∫φ(ut)dG(u) is a characteristic function on the real line.
Solution. Let F be the cumulative distribution function corresponding to φ and let X and U be independent random variables having distributions F and G, respectively. The characteristic function of UX is
E e^{√−1 tUX} = ∫∫ e^{√−1 tux} dF(x)dG(u) = ∫ φ(ut)dG(u).
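The inversion formula of Exercise 31 can be checked numerically in a case where everything is explicit. The sketch below (an addition; the grid parameters are arbitrary truncation choices) uses φ(t) = e^{−t²/2}, for which the recovered density should be the standard normal density.

```python
import math

def phi(t):
    # characteristic function of N(0, 1)
    return math.exp(-t * t / 2.0)

def density_from_cf(x, T=12.0, steps=24_000):
    # (2*pi)^{-1} int_{-T}^{T} e^{-i t x} phi(t) dt; phi is real and even,
    # so the imaginary part cancels and only the cosine term remains
    h = 2.0 * T / steps
    s = sum(math.cos((-T + k * h) * x) * phi(-T + k * h) for k in range(steps + 1))
    return s * h / (2.0 * math.pi)

f_at_0 = density_from_cf(0.0)   # should be close to 1/sqrt(2*pi)
f_at_1 = density_from_cf(1.0)   # should be close to exp(-1/2)/sqrt(2*pi)
```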

Exercise 33. Let X and Y be independent random variables. Show that if X and X − Y are independent, then X must be degenerate.
Solution. We denote the characteristic function of any random variable Z by φ_Z. Since X and Y are independent, so are −X and Y. Hence,
φ_{Y−X}(t) = φ_Y(t)φ_{−X}(t) = φ_Y(t)φ_X(−t), t ∈ R.
If X and X − Y are independent, then X and Y − X are independent. Then
φ_Y(t) = φ_{X+(Y−X)}(t) = φ_X(t)φ_{Y−X}(t) = φ_X(t)φ_X(−t)φ_Y(t), t ∈ R.
Since φ_Y(0) = 1 and φ_Y is continuous, φ_Y(t) ≠ 0 on a neighborhood of 0. Hence φ_X(t)φ_X(−t) = |φ_X(t)|² = 1 on this neighborhood of 0. Thus, X is degenerate.

Exercise 34 (#1.98). Let P_Y be a discrete distribution on {0, 1, 2, ...} and, given Y = y, let the conditional distribution of X be the binomial distribution with size y and probability p. Show that
(i) if Y has the Poisson distribution with mean θ, then the marginal distribution of X is the Poisson distribution with mean pθ;
(ii) if Y + r has the negative binomial distribution with size r and probability π, then the marginal distribution of X + r is the negative binomial distribution with size r and probability π/[1 − (1 − p)(1 − π)].
Solution. (i) The moment generating function of X is
E(e^{tX}) = E[E(e^{tX}|Y)] = E[(pe^t + 1 − p)^Y] = e^{θp(e^t−1)},

which is the moment generating function of the Poisson distribution with mean pθ.
(ii) The moment generating function of X + r is
E(e^{t(X+r)}) = e^{tr}E[E(e^{tX}|Y)]
= e^{tr}E[(pe^t + 1 − p)^Y]
= e^{tr}(pe^t + 1 − p)^{−r}E[(pe^t + 1 − p)^{Y+r}]
= e^{tr}(pe^t + 1 − p)^{−r} · π^r(pe^t + 1 − p)^r/[1 − (1 − π)(pe^t + 1 − p)]^r
= π^re^{rt}/[1 − (1 − π)(pe^t + 1 − p)]^r.
Then the result follows from the fact that
[1 − (1 − π)(pe^t + 1 − p)]/[1 − (1 − p)(1 − π)] = 1 − {1 − π/[1 − (1 − p)(1 − π)]}e^t.

Exercise 35 (#1.85). Let X and Y be integrable random variables on the probability space (Ω, F, P) and A be a sub-σ-field of F. Show that
(i) if X ≤ Y a.s., then E(X|A) ≤ E(Y|A) a.s.;
(ii) if a and b are constants, then E(aX + bY|A) = aE(X|A) + bE(Y|A) a.s.
Solution. (i) Suppose that X ≤ Y a.s. By the definition of the conditional expectation and the property of integration,
∫_A E(X|A)dP = ∫_A XdP ≤ ∫_A YdP = ∫_A E(Y|A)dP,
where A = {E(X|A) > E(Y|A)} ∈ A. Hence P(A) = 0, i.e., E(X|A) ≤ E(Y|A) a.s.
(ii) Note that aE(X|A) + bE(Y|A) is measurable from (Ω, A) to (R, B). For any A ∈ A, by the linearity of integration,
∫_A (aX + bY)dP = a∫_A XdP + b∫_A YdP = a∫_A E(X|A)dP + b∫_A E(Y|A)dP = ∫_A [aE(X|A) + bE(Y|A)]dP.

By the a.s.-uniqueness of the conditional expectation, E(aX + bY|A) = aE(X|A) + bE(Y|A) a.s.

Exercise 36 (#1.85). Let X be an integrable random variable on the probability space (Ω, F, P) and A and A₀ be σ-fields satisfying A₀ ⊂ A ⊂ F. Show that E[E(X|A)|A₀] = E(X|A₀) = E[E(X|A₀)|A] a.s.
Solution. Note that E(X|A₀) is measurable from (Ω, A₀) to (R, B) and A₀ ⊂ A. Hence E(X|A₀) is measurable from (Ω, A) to (R, B) and, thus, E(X|A₀) = E[E(X|A₀)|A] a.s. Since E[E(X|A)|A₀] is measurable from (Ω, A₀) to (R, B) and, for any A ∈ A₀ ⊂ A,
∫_A E[E(X|A)|A₀]dP = ∫_A E(X|A)dP = ∫_A XdP,
we conclude that E[E(X|A)|A₀] = E(X|A₀) a.s.

Exercise 37 (#1.85). Let X be an integrable random variable on the probability space (Ω, F, P), A be a sub-σ-field of F, and Y be another random variable satisfying σ(Y) ⊂ A and E|XY| < ∞. Show that E(XY|A) = YE(X|A) a.s.
Solution. Since σ(Y) ⊂ A, YE(X|A) is measurable from (Ω, A) to (R, B). The result follows if we can show that for any A ∈ A,
∫_A YE(X|A)dP = ∫_A XYdP.

(1) If Y = aI_B, where a ∈ R and B ∈ A, then A ∩ B ∈ A and
∫_A XYdP = a∫_{A∩B} XdP = a∫_{A∩B} E(X|A)dP = ∫_A YE(X|A)dP.
(2) If Y = Σ_{i=1}^k a_iI_{B_i}, where B_i ∈ A, then
∫_A XYdP = Σ_{i=1}^k a_i∫_A XI_{B_i}dP = Σ_{i=1}^k a_i∫_A I_{B_i}E(X|A)dP = ∫_A YE(X|A)dP.
(3) Suppose that X ≥ 0 and Y ≥ 0. There exists a sequence of increasing simple functions Y_n such that σ(Y_n) ⊂ A, Y_n ≤ Y, and lim_n Y_n = Y. Then lim_n XY_n = XY and lim_n Y_nE(X|A) = YE(X|A). By the monotone convergence theorem and the result in (2),
∫_A XYdP = lim_n ∫_A XY_ndP = lim_n ∫_A Y_nE(X|A)dP = ∫_A YE(X|A)dP.
(4) For general X and Y, consider X₊, X₋, Y₊, and Y₋. Since σ(Y) ⊂ A, so are σ(Y₊) and σ(Y₋). Then, by the result in (3),
∫_A XYdP = ∫_A X₊Y₊dP − ∫_A X₊Y₋dP − ∫_A X₋Y₊dP + ∫_A X₋Y₋dP
= ∫_A Y₊E(X₊|A)dP − ∫_A Y₋E(X₊|A)dP − ∫_A Y₊E(X₋|A)dP + ∫_A Y₋E(X₋|A)dP
= ∫_A YE(X₊|A)dP − ∫_A YE(X₋|A)dP
= ∫_A YE(X|A)dP,

where the last equality follows from the result in Exercise 35.

Exercise 38 (#1.85). Let X₁, X₂, ... and X be integrable random variables on the probability space (Ω, F, P). Assume that 0 ≤ X₁ ≤ X₂ ≤ · · · ≤ X and lim_n X_n = X a.s. Show that for any σ-field A ⊂ F,
E(X|A) = lim_n E(X_n|A) a.s.
Solution. Since each E(X_n|A) is measurable from (Ω, A) to (R, B), so is the limit lim_n E(X_n|A). We need to show that
∫_A lim_n E(X_n|A)dP = ∫_A XdP
for any A ∈ A. By Exercise 35, 0 ≤ E(X₁|A) ≤ E(X₂|A) ≤ · · · ≤ E(X|A) a.s. By the monotone convergence theorem (e.g., Theorem 1.1 in Shao, 2003), for any A ∈ A,
∫_A lim_n E(X_n|A)dP = lim_n ∫_A E(X_n|A)dP = lim_n ∫_A X_ndP = ∫_A lim_n X_ndP = ∫_A XdP.

Exercise 39 (#1.85). Let X₁, X₂, ... be integrable random variables on the probability space (Ω, F, P). Show that for any σ-field A ⊂ F,
(i) E(lim inf_n X_n|A) ≤ lim inf_n E(X_n|A) a.s. if X_n ≥ 0 for any n;
(ii) lim_n E(X_n|A) = E(X|A) a.s. if lim_n X_n = X a.s. and |X_n| ≤ Y for any n and an integrable random variable Y.
Solution. (i) For any m ≥ n, by Exercise 35, E(inf_{m≥n} X_m|A) ≤ E(X_m|A) a.s. Hence, E(inf_{m≥n} X_m|A) ≤ inf_{m≥n} E(X_m|A) a.s. Let Y_n = inf_{m≥n} X_m. Then 0 ≤ Y₁ ≤ Y₂ ≤ · · · ≤ lim_n Y_n and the Y_n’s are integrable. Hence,
E(lim inf_n X_n|A) = E(lim_n Y_n|A)
= lim_n E(Y_n|A)
= lim_n E(inf_{m≥n} X_m|A)
≤ lim_n inf_{m≥n} E(X_m|A)
= lim inf_n E(X_n|A) a.s.,
where the second equality follows from the result in the previous exercise and the first and the last equalities follow from the fact that lim inf_n f_n = lim_n inf_{m≥n} f_m for any sequence of functions {f_n}.
(ii) Note that Y + X_n ≥ 0 for any n. Applying the result in (i) to Y + X_n,
E(Y + X|A) = E(lim inf_n (Y + X_n)|A) ≤ lim inf_n E(Y + X_n|A) a.s.
Since Y is integrable, so is X and, consequently, E(Y + X|A) = E(Y|A) + E(X|A) a.s. and lim inf_n E(Y + X_n|A) = E(Y|A) + lim inf_n E(X_n|A) a.s. Hence,
E(X|A) ≤ lim inf_n E(X_n|A) a.s.
Applying the same argument to Y − X_n, we obtain that
E(−X|A) ≤ lim inf_n E(−X_n|A) a.s.
Since lim inf_n E(−X_n|A) = −lim sup_n E(X_n|A), we obtain that
lim sup_n E(X_n|A) ≤ E(X|A) a.s.
Combining the results, we obtain that
lim sup_n E(X_n|A) = lim inf_n E(X_n|A) = lim_n E(X_n|A) = E(X|A) a.s.
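On a finite probability space, conditional expectations given a partition are just atom averages, so the convergence results of Exercises 38 and 39 can be observed directly. The sketch below is an added illustration with an arbitrary choice of space, partition, and the increasing sequence X_n = (1 − 1/n)X ↑ X.

```python
from fractions import Fraction

# Omega = {0,...,5} with uniform probabilities; A is generated by the
# partition {0,1,2} | {3,4,5}; on an atom B, E(X|A) is the average of X over B.
ATOMS = [(0, 1, 2), (3, 4, 5)]

def cond_exp(X):
    return {B: sum(Fraction(X(w)) for w in B) / len(B) for B in ATOMS}

X = lambda w: w
limit = cond_exp(X)  # E(X|A): 1 on the first atom, 4 on the second
gaps = []
for n in (10, 100, 1000):
    Xn = lambda w, n=n: Fraction(n - 1, n) * w   # 0 <= X_1 <= X_2 <= ... <= X
    en = cond_exp(Xn)
    gaps.append(max(abs(limit[B] - en[B]) for B in ATOMS))
```

Here E(X_n|A) = (1 − 1/n)E(X|A) on each atom, so the gaps shrink like 1/n, in line with E(X_n|A) → E(X|A) a.s.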

Exercise 40 (#1.86). Let X and Y be integrable random variables on the probability space (Ω, F, P) and A ⊂ F be a σ-field. Show that E[YE(X|A)] = E[XE(Y|A)], assuming that both integrals exist.
Solution. (1) The problem is much easier if we assume that Y is bounded. When Y is bounded, both YE(X|A) and XE(Y|A) are integrable. Using the result in Exercise 37 and the fact that E[E(X|A)] = EX, we obtain that
E[YE(X|A)] = E{E[YE(X|A)|A]} = E[E(X|A)E(Y|A)] = E{E[XE(Y|A)|A]} = E[XE(Y|A)].
(2) Assume that Y ≥ 0. Let Z be another nonnegative integrable random variable. We now show that if σ(Z) ⊂ A, then E(YZ) = E[ZE(Y|A)]. (Note that this is a special case of the result in Exercise 37 if E(YZ) < ∞.) Let Y_n = min{Y, n}, n = 1, 2, .... Then 0 ≤ Y₁ ≤ Y₂ ≤ · · · ≤ Y and lim_n Y_n = Y. By the results in Exercises 35 and 39, 0 ≤ E(Y₁|A) ≤ E(Y₂|A) ≤ · · · a.s. and lim_n E(Y_n|A) = E(Y|A) a.s. Since Y_n is bounded, Y_nZ is integrable. By the result in Exercise 37,
E[ZE(Y_n|A)] = E(Y_nZ), n = 1, 2, ....
By the monotone convergence theorem,
E(YZ) = lim_n E(Y_nZ) = lim_n E[ZE(Y_n|A)] = E[ZE(Y|A)].
Consequently, if X ≥ 0, then the result follows by taking Z = E(X|A).
(3) We now consider general X and Y. Let f₊ and f₋ denote the positive and negative parts of a function f. Note that
E{[XE(Y|A)]₊} = E{X₊[E(Y|A)]₊} + E{X₋[E(Y|A)]₋}
and
E{[XE(Y|A)]₋} = E{X₊[E(Y|A)]₋} + E{X₋[E(Y|A)]₊}.
Since E[XE(Y|A)] exists, without loss of generality we assume that
E{[XE(Y|A)]₊} = E{X₊[E(Y|A)]₊} + E{X₋[E(Y|A)]₋} < ∞.
Then, both
E[X₊E(Y|A)] = E{X₊[E(Y|A)]₊} − E{X₊[E(Y|A)]₋}
and
E[X₋E(Y|A)] = E{X₋[E(Y|A)]₊} − E{X₋[E(Y|A)]₋}
are well defined and their difference is also well defined. Applying the result established in (2), we obtain that
E[X₊E(Y|A)] = E{E(X₊|A)[E(Y|A)]₊} − E{E(X₊|A)[E(Y|A)]₋} = E[E(X₊|A)E(Y|A)],
where the last equality follows from the result in Exercise 8. Similarly,
E[X₋E(Y|A)] = E{E(X₋|A)[E(Y|A)]₊} − E{E(X₋|A)[E(Y|A)]₋} = E[E(X₋|A)E(Y|A)].
By Exercise 8 again,
E[XE(Y|A)] = E[X₊E(Y|A)] − E[X₋E(Y|A)] = E[E(X₊|A)E(Y|A)] − E[E(X₋|A)E(Y|A)] = E[E(X|A)E(Y|A)].
Switching X and Y, we also conclude that E[YE(X|A)] = E[E(X|A)E(Y|A)]. Hence, E[XE(Y|A)] = E[YE(X|A)].

Exercise 41 (#1.87). Let X, X₁, X₂, ... be a sequence of integrable random variables on the probability space (Ω, F, P) and A ⊂ F be a σ-field. Suppose that lim_n E(X_nY) = E(XY) for every integrable (or bounded) random variable Y. Show that lim_n E[E(X_n|A)Y] = E[E(X|A)Y] for every integrable (or bounded) random variable Y.
Solution. Assume that Y is integrable. Then E(Y|A) is integrable. By the condition, E[X_nE(Y|A)] → E[XE(Y|A)]. By the result of the previous exercise, E[X_nE(Y|A)] = E[E(X_n|A)Y] and E[XE(Y|A)] = E[E(X|A)Y]. Hence, E[E(X_n|A)Y] → E[E(X|A)Y] for every integrable Y. The same result holds if “integrable” is changed to “bounded”.

Exercise 42 (#1.88). Let X be a nonnegative integrable random variable on the probability space (Ω, F, P) and A ⊂ F be a σ-field. Show that
E(X|A) = ∫₀^∞ P(X > t|A)dt.
Note. For any B ∈ F, P(B|A) is defined to be E(I_B|A).
Solution. From the theory of conditional distributions (e.g., Theorem 1.7 in Shao, 2003), there exists P̃(B, ω) defined on F × Ω such that (i) for any ω ∈ Ω, P̃(·, ω) is a probability measure on (Ω, F) and (ii) for any B ∈ F, P̃(B, ω) = P(B|A) a.s. From Exercise 23,
∫ X dP̃(·, ω) = ∫₀^∞ P̃({X > t}, ω)dt = ∫₀^∞ P(X > t|A)dt a.s.
Hence, the result follows if
E(X|A)(ω) = ∫ X dP̃(·, ω) a.s.
This is certainly true if X = I_B for a B ∈ F. By the linearity of integration and conditional expectation, this equality also holds when X is a nonnegative simple function. For general nonnegative X, there exists a sequence of simple functions X₁, X₂, ..., such that 0 ≤ X₁ ≤ X₂ ≤ · · · ≤ X and lim_n X_n = X a.s. From Exercise 38,
E(X|A) = lim_n E(X_n|A) = lim_n ∫ X_n dP̃(·, ω) = ∫ X dP̃(·, ω) a.s.
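For integer-valued nonnegative X, the identity of Exercise 42 becomes E(X|A) = Σ_{k≥0} P(X > k|A), since the integrand is constant on each interval [k, k+1). This can be checked exactly on a finite space; the example below is an added illustration with arbitrary choices of space, partition, and X.

```python
from fractions import Fraction

# Omega = {0,...,7}, uniform; A generated by the partition {0..3} | {4..7}.
ATOMS = [tuple(range(4)), tuple(range(4, 8))]
X = {0: 0, 1: 2, 2: 2, 3: 4, 4: 1, 5: 1, 6: 3, 7: 5}   # nonnegative integers

def cond_exp(B):
    # E(X|A) on the atom B: the average of X over B (uniform weights)
    return Fraction(sum(X[w] for w in B), len(B))

def tail_sum(B):
    # sum_{k>=0} P(X > k | A) on the atom B; equals int_0^inf P(X > t|A) dt
    m = max(X.values())
    return sum(Fraction(sum(1 for w in B if X[w] > k), len(B)) for k in range(m))

means = [cond_exp(B) for B in ATOMS]
tails = [tail_sum(B) for B in ATOMS]
```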

Exercise 43 (#1.97). Let X and Y be independent integrable random variables on a probability space and f be a nonnegative convex function. Show that E[f(X + Y)] ≥ E[f(X + EY)].
Note. We need to apply the following Jensen’s inequality for conditional expectations. Let f be a convex function and X be an integrable random variable satisfying E|f(X)| < ∞. Then f(E(X|A)) ≤ E(f(X)|A) a.s. (e.g., Theorem 9.1.4 in Chung, 1974).
Solution. If E[f(X + Y)] = ∞, then the inequality holds. Hence, we may assume that f(X + Y) is integrable. Using Jensen’s inequality and some properties of conditional expectations, we obtain that
E[f(X + Y)] = E{E[f(X + Y)|X]}
≥ E{f(E(X + Y|X))}
= E{f(X + E(Y|X))}
= E[f(X + EY)],

where the last equality follows from E(Y|X) = EY since X and Y are independent.

Exercise 44 (#1.83). Let X be an integrable random variable with a Lebesgue density f and let Y = g(X), where g is a function with positive derivative on (0, ∞) and g(x) = g(−x). Find an expression for E(X|Y) and verify that it is indeed the conditional expectation.
Solution. Let h be the inverse function of g on (0, ∞) and
ψ(y) = h(y)[f(h(y)) − f(−h(y))]/[f(h(y)) + f(−h(y))].
We now show that E(X|Y) = ψ(Y) a.s. It is clear that ψ(y) is a Borel function. Also, the σ-field generated by Y is generated by the sets of the form A_a = {g(0) ≤ Y ≤ a}, a > g(0). Hence, it suffices to show that for any a > g(0),
∫_{A_a} X dP = ∫_{A_a} ψ(Y) dP.
Note that
∫_{A_a} X dP = ∫_{g(0)≤g(x)≤a} x f(x) dx
= ∫_{−h(a)}^{h(a)} x f(x) dx
= ∫_{−h(a)}^{0} x f(x) dx + ∫₀^{h(a)} x f(x) dx
= −∫₀^{h(a)} x f(−x) dx + ∫₀^{h(a)} x f(x) dx
= ∫₀^{h(a)} x[f(x) − f(−x)] dx
= ∫_{g(0)}^{a} h(y)[f(h(y)) − f(−h(y))]h′(y) dy,
where the last equality follows from the change of variable x = h(y). On the other hand, h′(y)[f(h(y)) + f(−h(y))]I_{(g(0),∞)}(y) is the Lebesgue density of Y (see the note in Exercise 17). Hence,
∫_{A_a} ψ(Y) dP = ∫_{g(0)}^{a} ψ(y)h′(y)[f(h(y)) + f(−h(y))] dy = ∫_{g(0)}^{a} h(y)[f(h(y)) − f(−h(y))]h′(y) dy

by the definition of ψ(y).

Exercise 45 (#1.91). Let X, Y, and Z be random variables on a probability space. Suppose that E|X| < ∞ and Y = h(Z) with a Borel h. Show that
(i) E(XZ|Y) = E(X)E(Z|Y) a.s. if X and Z are independent and E|Z| < ∞;
(ii) if E[f(X)|Z] = f(Y) for all bounded continuous functions f on R, then X = Y a.s.;
(iii) if E[f(X)|Z] ≥ f(Y) for all bounded, continuous, nondecreasing functions f on R, then X ≥ Y a.s.
Solution. (i) It suffices to show
∫_{Y⁻¹(B)} XZ dP = E(X) ∫_{Y⁻¹(B)} Z dP
for any Borel set B. Since Y = h(Z), Y⁻¹(B) = Z⁻¹(h⁻¹(B)). Then
∫_{Y⁻¹(B)} XZ dP = ∫ XZI_{h⁻¹(B)}(Z) dP = E(X) ∫ ZI_{h⁻¹(B)}(Z) dP,
since X and Z are independent. On the other hand,
∫_{Y⁻¹(B)} Z dP = ∫_{Z⁻¹(h⁻¹(B))} Z dP = ∫ ZI_{h⁻¹(B)}(Z) dP.
(ii) Let f(t) = e^t/(1 + e^t). Then both f and f² are bounded and continuous. Note that
E[f(X) − f(Y)]² = E E{[f(X) − f(Y)]²|Z}
= E{E[f²(X)|Z] + E[f²(Y)|Z] − 2E[f(X)f(Y)|Z]}
= E{E[f²(X)|Z] + f²(Y) − 2f(Y)E[f(X)|Z]}
= E{f²(Y) + f²(Y) − 2f(Y)f(Y)}
= 0,
where the third equality follows from the result in Exercise 37 and the fourth equality follows from the condition. Hence f(X) = f(Y) a.s. Since f is strictly increasing, X = Y a.s.
(iii) For any real number c, there exists a sequence of bounded, continuous, and nondecreasing functions {f_n} such that lim_n f_n(t) = I_{(c,∞)}(t) for any real number t. Then,
P(X > c, Y > c) = E{E(I_{{X>c}}I_{{Y>c}}|Z)}
= E{I_{{Y>c}}E(I_{{X>c}}|Z)}
= E{I_{{Y>c}}E[lim_n f_n(X)|Z]}
= E{I_{{Y>c}} lim_n E[f_n(X)|Z]}
≥ E{I_{{Y>c}} lim_n f_n(Y)}
= E{I_{{Y>c}}I_{{Y>c}}}
= P(Y > c),
where the fourth and fifth equalities follow from Exercise 39 (since f_n is bounded) and the inequality follows from the condition. This implies that
P(X ≤ c, Y > c) = P(Y > c) − P(X > c, Y > c) = 0.
For any integer k and positive integer n, let a_{k,i} = k + i/n, i = 0, 1, ..., n. Then
P(X < Y) = lim_n Σ_{k=−∞}^∞ Σ_{i=0}^{n−1} P(X ≤ a_{k,i}, a_{k,i} < Y ≤ a_{k,i+1}) = 0.

Hence, X ≥ Y a.s.

Exercise 46 (#1.115). Let X₁, X₂, ... be a sequence of identically distributed random variables with E|X₁| < ∞ and let Y_n = n⁻¹ max_{1≤i≤n} |X_i|. Show that lim_n E(Y_n) = 0 and lim_n Y_n = 0 a.s.
Solution. (i) Let g_n(t) = n⁻¹P(max_{1≤i≤n} |X_i| > t). Then lim_n g_n(t) = 0 for any t and
0 ≤ g_n(t) ≤ n⁻¹ Σ_{i=1}^n P(|X_i| > t) = P(|X₁| > t).
Since E|X₁| < ∞, ∫₀^∞ P(|X₁| > t)dt < ∞ (Exercise 23). By the dominated convergence theorem,
lim_n E(Y_n) = lim_n ∫₀^∞ g_n(t)dt = ∫₀^∞ lim_n g_n(t)dt = 0.
(ii) Since E|X₁| < ∞,
Σ_{n=1}^∞ P(|X_n|/n > ε) = Σ_{n=1}^∞ P(|X₁| > nε) < ∞,

which implies that lim_n |X_n|/n = 0 a.s. (see, e.g., Theorem 1.8(v) in Shao, 2003). Let Ω₀ = {ω : lim_n |X_n(ω)|/n = 0}. Then P(Ω₀) = 1. Let ω ∈ Ω₀. For any ε > 0, there exists an N_{ε,ω} such that |X_n(ω)| < nε whenever n > N_{ε,ω}. Also, there exists an M_{ε,ω} > N_{ε,ω} such that max_{1≤i≤N_{ε,ω}} |X_i(ω)| ≤ nε whenever n > M_{ε,ω}. Then, whenever n > M_{ε,ω},
max_{1≤i≤n} |X_i(ω)|/n ≤ max_{1≤i≤N_{ε,ω}} |X_i(ω)|/n + max_{N_{ε,ω}<i≤n} |X_i(ω)|/n ≤ 2ε.
Hence lim_n Y_n(ω) = 0 and, since ω ∈ Ω₀ is arbitrary, lim_n Y_n = 0 a.s.

P(|X_n − X| > ε) ≤ P({ω : k/2^m ≤ ω ≤ (k + 1)/2^m}) = 1/2^m → 0
as n → ∞ for any ε > 0. Thus X_n →p X. However, for any fixed ω ∈ [0, 1] and m, there exists k with 1 ≤ k ≤ 2^m such that (k − 1)/2^m ≤ ω ≤ k/2^m. Let n_m = 2^m − 2 + k. Then X_{n_m}(ω) = 1. Since m is arbitrarily selected, we can find an infinite sequence {n_m} such that X_{n_m}(ω) = 1. This implies that X_n(ω) does not converge to X(ω) = 0. Since ω is arbitrary, X_n does not converge to X a.s.
(ii) Let X = 0 and
X_n(ω) = e^n if 0 ≤ ω ≤ 1/n and X_n(ω) = 0 if 1/n < ω ≤ 1.
For any ε ∈ (0, 1),
P(|X_n − X| > ε) = P(|X_n| ≠ 0) = 1/n → 0
as n → ∞, i.e., X_n →p X. On the other hand, for any p > 0, E|X_n − X|^p = E|X_n|^p = e^{np}/n → ∞.
(iii) Define
X(ω) = 1 if 0 ≤ ω ≤ 1/2 and X(ω) = 0 if 1/2 < ω ≤ 1,
and
X_n(ω) = 0 if 0 ≤ ω ≤ 1/2 and X_n(ω) = 1 if 1/2 < ω ≤ 1.
For any t,
P(X ≤ t) = P(X_n ≤ t) = 1 if t ≥ 1, = 1/2 if 0 ≤ t < 1, and = 0 if t < 0.
Hence X_n →d X, but P(|X_n − X| > ε) = 1 for any ε ∈ (0, 1).
(iv) Let g(t) = 1 − I_{{0}}(t), X = 0, and X_n = 1/n. Then X_n →p X, but g(X_n) = 1 and g(X) = 0.
(v) Define X_n(ω) = √n ... ε > 0 such that m + ε and m − ε are continuity points of the distribution of X, m − ε < m_n < m + ε for sufficiently large n, and
1/2 ≤ P(X_n ≤ m_n) ≤ P(X_n ≤ m + ε)


and
1/2 ≤ P(X_n ≥ m_n) ≤ P(X_n ≥ m − ε).
Letting n → ∞, we obtain that 1/2 ≤ P(X ≤ m + ε) and 1/2 ≤ P(X ≥ m − ε). Letting ε → 0, we obtain that 1/2 ≤ P(X ≤ m) and 1/2 ≤ P(X ≥ m). Hence m is a median of X.

Exercise 49 (#1.126). Show that if X_n →d X and X = c a.s. for a real number c, then X_n →p X.
Solution. Note that the cumulative distribution function of X has only one discontinuity point, c. For any ε > 0,
P(|X_n − X| > ε) = P(|X_n − c| > ε) ≤ P(X_n > c + ε) + P(X_n ≤ c − ε) → P(X > c + ε) + P(X ≤ c − ε) = 0
as n → ∞. Thus, X_n →p X.

Exercise 50 (#1.117(b), #1.118). Let X₁, X₂, ... be random variables. Show that {|X_n|} is uniformly integrable if one of the following conditions holds:
(i) sup_n E|X_n|^{1+δ} < ∞ for a δ > 0;
(ii) P(|X_n| ≥ c) ≤ P(|X| ≥ c) for all n and c > 0, where X is an integrable random variable.
Note. A sequence of random variables {X_n} is uniformly integrable if lim_{t→∞} sup_n E(|X_n|I_{{|X_n|>t}}) = 0.
Solution. (i) Denote p = 1 + δ and q = 1 + δ⁻¹. Then
E(|X_n|I_{{|X_n|>t}}) ≤ (E|X_n|^p)^{1/p}[E(I_{{|X_n|>t}})^q]^{1/q}
= (E|X_n|^p)^{1/p}[P(|X_n| > t)]^{1/q}
≤ (E|X_n|^p)^{1/p}(E|X_n|^p)^{1/q}t^{−p/q}
= E|X_n|^{1+δ}t^{−δ},
where the first inequality follows from Hölder’s inequality (e.g., Shao, 2003, p. 29) and the second inequality follows from P(|X_n| > t) ≤ t^{−p}E|X_n|^p. Hence
lim_{t→∞} sup_n E(|X_n|I_{{|X_n|>t}}) ≤ sup_n E|X_n|^{1+δ} lim_{t→∞} t^{−δ} = 0.

(ii) By Exercise 23,
sup_n E(|X_n|I_{{|X_n|>t}}) = sup_n ∫₀^∞ P(|X_n|I_{{|X_n|>t}} > s)ds
= sup_n ∫₀^∞ P(|X_n| > s, |X_n| > t)ds
= sup_n [tP(|X_n| > t) + ∫_t^∞ P(|X_n| > s)ds]
≤ tP(|X| > t) + ∫_t^∞ P(|X| > s)ds
→ 0
as t → ∞ when E|X| < ∞.
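Part (i) can be made concrete with a simple sequence (an added sketch with arbitrary choices): let X_n take the value n with probability 1/n² and 0 otherwise, so E|X_n|² = 1 for all n (δ = 1), and the truncated means are dominated by E|X_n|²/t = 1/t.

```python
def truncated_mean(n, t):
    # E(|X_n| I{|X_n| > t}) for X_n with P(X_n = n) = 1/n^2, P(X_n = 0) = 1 - 1/n^2
    return (1.0 / n) if n > t else 0.0

# sup over n of the truncated mean, for increasing truncation levels t;
# each supremum should be at most 1/t, matching the bound E|X_n|^2 * t^{-1}
sups = [max(truncated_mean(n, t) for n in range(1, 10_001)) for t in (1, 10, 100, 1000)]
```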

Exercise 51. Let {X_n} and {Y_n} be sequences of random variables such that X_n diverges to ∞ in probability and Y_n is bounded in probability. Show that X_n + Y_n diverges to ∞ in probability.
Solution. By the definition of boundedness in probability, for any ε > 0 there is a C_ε > 0 such that sup_n P(|Y_n| > C_ε) < ε/2. By the definition of divergence to ∞ in probability, for any M > 0 and ε > 0 there is an n_ε > 0 such that P(|X_n| ≤ M + C_ε) < ε/2 whenever n > n_ε. Then, for n > n_ε,
P(|X_n + Y_n| ≤ M) ≤ P(|X_n| ≤ M + |Y_n|)
= P(|X_n| ≤ M + |Y_n|, |Y_n| ≤ C_ε) + P(|X_n| ≤ M + |Y_n|, |Y_n| > C_ε)
≤ P(|X_n| ≤ M + C_ε) + P(|Y_n| > C_ε)
≤ ε/2 + ε/2 = ε.
This means that X_n + Y_n diverges to ∞ in probability.

Exercise 52. Let X, X₁, X₂, ... be random variables. Show that if lim_n X_n = X a.s., then sup_{m≥n} |X_m| is bounded in probability.
Solution. Since sup_{m≥n} |X_m| ≤ sup_{m≥1} |X_m| for any n, it suffices to show that for any ε > 0, there is a C > 0 such that P(sup_{n≥1} |X_n| > C) ≤ ε. Note that lim_n X_n = X a.s. implies that, for any ε > 0 and any fixed c₁ > 0, there exists a sufficiently large N such that P(∪_{n=N}^∞ {|X_n − X| > c₁}) < ε/3 (e.g., Lemma 1.4 in Shao, 2003). For this fixed N, there exist constants c₂ > 0 and c₃ > 0 such that
Σ_{n=1}^N P(|X_n| > c₂) < ε/3 and P(|X| > c₃) < ε/3.
Let C = max{c₁ + c₃, c₂}. Then
P(sup_{n≥1} |X_n| > C) = P(∪_{n=1}^∞ {|X_n| > C})
≤ Σ_{n=1}^N P(|X_n| > C) + P(∪_{n=N}^∞ {|X_n| > C})
≤ ε/3 + P(|X| > c₃) + P(∪_{n=N}^∞ {|X_n| > C, |X| ≤ c₃})
≤ ε/3 + ε/3 + P(∪_{n=N}^∞ {|X_n − X| > c₁})
≤ ε/3 + ε/3 + ε/3 = ε.

Exercise 53 (#1.128). Let {Xn} and {Yn} be two sequences of random variables such that Xn is bounded in probability and, for any real number t and ε > 0, limn [P(Xn ≤ t, Yn ≥ t + ε) + P(Xn ≥ t + ε, Yn ≤ t)] = 0. Show that Xn − Yn →p 0.
Solution. For any ε > 0, there exists an M > 0 such that P(|Xn| ≥ M) ≤ ε for any n, since Xn is bounded in probability. For this fixed M, there exists an N such that 2M/N < ε/2. Let ti = −M + 2Mi/N, i = 0, 1, ..., N. Then,

P(|Xn − Yn| ≥ ε) ≤ P(|Xn| ≥ M) + P(|Xn| < M, |Xn − Yn| ≥ ε)
  ≤ ε + Σ_{i=1}^N P(t_{i−1} ≤ Xn ≤ t_i, |Xn − Yn| ≥ ε)
  ≤ ε + Σ_{i=1}^N P(Yn ≤ t_{i−1} − ε/2, t_{i−1} ≤ Xn) + Σ_{i=1}^N P(Yn ≥ t_i + ε/2, Xn ≤ t_i).

This, together with the given condition, implies that lim sup_n P(|Xn − Yn| ≥ ε) ≤ ε.

Since ε is arbitrary, we conclude that Xn − Yn →p 0.

Chapter 1. Probability Theory

Exercise 54 (#1.133). Let Fn, n = 0, 1, 2, ..., be cumulative distribution functions such that Fn → F0 at every continuity point of F0. Let U be a random variable having the uniform distribution on the interval [0, 1] and let Gn(U) = sup{x : Fn(x) ≤ U}, n = 0, 1, 2, .... Show that Gn(U) →p G0(U).
Solution. For any n and real number t, Gn(U) ≤ t if and only if Fn(t) ≥ U a.s. Similarly, Gn(U) ≥ t if and only if Fn(t) ≤ U a.s. Hence, for any n, t, and ε > 0,

P(Gn(U) ≤ t, G0(U) ≥ t + ε) = P(Fn(t) ≥ U, F0(t + ε) ≤ U) = max{0, Fn(t) − F0(t + ε)}

and

P(Gn(U) ≥ t + ε, G0(U) ≤ t) = P(Fn(t + ε) ≤ U, F0(t) ≥ U) = max{0, F0(t) − Fn(t + ε)}.

If both t and t + ε are continuity points of F0, then limn [Fn(t) − F0(t + ε)] = F0(t) − F0(t + ε) ≤ 0 and limn [F0(t) − Fn(t + ε)] = F0(t) − F0(t + ε) ≤ 0. Hence,

limn [P(Gn(U) ≤ t, G0(U) ≥ t + ε) + P(Gn(U) ≥ t + ε, G0(U) ≤ t)] = 0

when both t and t + ε are continuity points of F0. Since the set of discontinuity points of F0 is countable, Gn(U) − G0(U) →p 0 follows from the result in the previous exercise, since G0(U) is obviously bounded in probability.

Exercise 55. Let {Xn} be a sequence of independent and identically distributed random variables. Show that there does not exist a sequence of real numbers {cn} such that limn Σ_{i=1}^n (Xi − ci) exists a.s., unless the distribution of X1 is degenerate.
Solution. Suppose that limn Σ_{i=1}^n (Xi − ci) exists a.s. Let φ and g be the characteristic functions of X1 and limn Σ_{i=1}^n (Xi − ci), respectively. For any n, the characteristic function of Σ_{i=1}^n (Xi − ci) is

Π_{i=1}^n φ(t)e^{−√−1 t c_i} = [φ(t)]^n e^{−√−1 t(c_1+···+c_n)},

which converges to g(t) for any t. Then

limn |[φ(t)]^n e^{−√−1 t(c_1+···+c_n)}| = limn |φ(t)|^n = |g(t)|.

Since |g(t)| is continuous and g(0) = 1, |g(t)| ≠ 0 on a neighborhood of 0. Hence, |φ(t)| = 1 on this neighborhood and, thus, X1 is degenerate.

Exercise 56. Let P, P1, P2, ... be probability measures such that limn Pn(O) = P(O) for any open set O with P(∂O) = 0, where ∂A denotes the boundary of the set A. Show that limn Pn(A) = P(A) for any Borel set A with P(∂A) = 0.


Solution. Let A be a Borel set with P(∂A) = 0. Let A0 be the interior of A and A1 be the closure of A. Then A0 ⊂ A ⊂ A1 = A0 ∪ ∂A. Since ∂A0 ⊂ ∂A, P(∂A0) = 0 and, by assumption, limn Pn(A0) = P(A0). Since ∂A1^c ⊂ ∂A and A1^c is an open set, limn Pn(A1^c) = P(A1^c), which implies limn Pn(A1) = P(A1). Then,

P(A) ≤ P(A0) + P(∂A) = P(A0) = limn Pn(A0) ≤ lim inf_n Pn(A)

and

P(A) ≥ P(A0) = P(A0) + P(∂A) = P(A1) = limn Pn(A1) ≥ lim sup_n Pn(A).

Hence lim inf_n Pn(A) = lim sup_n Pn(A) = limn Pn(A) = P(A).

Exercise 57. Let X, X1, X2, ... be random variables such that, for any continuous cumulative distribution function F, limn E[F(Xn)] = E[F(X)]. Show that Xn →d X.
Solution. Let y be a continuity point of the cumulative distribution function of X. Define

Fm(x) = 0 for x ≤ y − m⁻¹, Fm(x) = mx + 1 − my for y − m⁻¹ < x ≤ y, Fm(x) = 1 for x > y,

and

Hm(x) = 0 for x ≤ y, Hm(x) = mx − my for y < x ≤ y + m⁻¹, Hm(x) = 1 for x > y + m⁻¹.

Then Fm, Hm, m = 1, 2, ..., are continuous cumulative distribution functions and limm Fm(x) = limm Hm(x) = I_(−∞,y](x), since y is a continuity point of the cumulative distribution function of X. By the dominated convergence theorem,

limm E[Fm(X)] = limm E[Hm(X)] = E[I_(−∞,y](X)] = P(X ≤ y).

Since Fm(x) decreases as m increases, E[Fm(Xn)] ≥ E[I_(−∞,y](Xn)] = P(Xn ≤ y). By the assumption, limn E[Fm(Xn)] = E[Fm(X)] for any m. Hence,

E[Fm(X)] ≥ lim sup_n P(Xn ≤ y)

for any m. Letting m → ∞, we obtain that P(X ≤ y) ≥ lim sup_n P(Xn ≤ y).


Similarly, E[Hm(Xn)] ≤ E[I_(−∞,y](Xn)] = P(Xn ≤ y) and

lim inf_n P(Xn ≤ y) ≤ lim_m lim_n E[Hm(Xn)] = lim_m E[Hm(X)] = P(X ≤ y).

Hence limn P(Xn ≤ y) = P(X ≤ y).

Exercise 58 (#1.137). Let {Xn} and {Yn} be two sequences of random variables. Suppose that Xn →d X and that, for almost every given sequence {Xn}, the conditional distribution of Yn given Xn converges to the distribution of Y at every continuity point of the distribution of Y, where X and Y are independent random variables. Show that Xn + Yn →d X + Y.
Solution. From the assumed conditions and the continuity theorem (e.g., Theorem 1.9 in Shao, 2003), for any real number t, limn E(e^{√−1 t Xn}) = E(e^{√−1 t X}) and limn E(e^{√−1 t Yn} | Xn) = E(e^{√−1 t Y}) a.s. By the dominated convergence theorem,

limn E{e^{√−1 t Xn}[E(e^{√−1 t Yn} | Xn) − E(e^{√−1 t Y})]} = 0.

Then

limn E[e^{√−1 t (Xn+Yn)}] = limn E[e^{√−1 t Xn} E(e^{√−1 t Yn} | Xn)]
  = limn E{e^{√−1 t Xn}[E(e^{√−1 t Yn} | Xn) − E(e^{√−1 t Y})]} + limn E(e^{√−1 t Y}) E(e^{√−1 t Xn})
  = E(e^{√−1 t Y}) E(e^{√−1 t X})
  = E[e^{√−1 t (X+Y)}].

By the continuity theorem again, Xn + Yn →d X + Y.

Exercise 59 (#1.140). Let Xn be a random variable distributed as N(µn, σn²), n = 1, 2, ..., and X be a random variable distributed as N(µ, σ²). Show that Xn →d X if and only if limn µn = µ and limn σn² = σ².
Solution. The characteristic function of X is φX(t) = e^{√−1 µt − σ²t²/2} and the characteristic function of Xn is φXn(t) = e^{√−1 µn t − σn²t²/2}. If limn µn = µ and limn σn² = σ², then limn φXn(t) = φX(t) for any t and, by the continuity theorem, Xn →d X.
Assume now Xn →d X. By the continuity theorem, limn φXn(t) = φX(t) for any t. Then limn |φXn(t)| = |φX(t)| for any t. Since |φXn(t)| = e^{−σn²t²/2} and |φX(t)| = e^{−σ²t²/2}, limn σn² = σ². Note that

limn e^{√−1 µn t} = limn φXn(t)/|φXn(t)| = φX(t)/|φX(t)| = e^{√−1 µt}.

Hence limn µn = µ.

Exercise 60 (#1.146). Let U1, U2, ... be independent random variables having the uniform distribution on [0, 1] and Yn = (Π_{i=1}^n Ui)^{−1/n}. Show that √n(Yn − e) →d N(0, e²).
Solution. Let Xi = − log Ui. Then X1, X2, ... are independent and identically distributed random variables with EX1 = 1 and Var(X1) = 1. By the central limit theorem, √n(X̄n − 1) →d N(0, 1), where X̄n = n⁻¹ Σ_{i=1}^n Xi. Note that Yn = e^{X̄n}. Applying the δ-method with g(t) = e^t to X̄n (e.g., Theorem 1.12 in Shao, 2003), we obtain that √n(Yn − e) →d N(0, e²), since g′(1) = e.

Exercise 61 (#1.161). Suppose that Xn is a random variable having the binomial distribution with size n and probability θ ∈ (0, 1), n = 1, 2, .... Define Yn = log(Xn/n) when Xn ≥ 1 and Yn = 1 when Xn = 0. Show that limn Yn = log θ a.s. and √n(Yn − log θ) →d N(0, (1−θ)/θ).
Solution. Let Z1, Z2, ... be independent and identically distributed random variables with P(Z1 = 1) = θ and P(Z1 = 0) = 1 − θ. Then the distribution of Xn is the same as that of Σ_{j=1}^n Zj. For any ε > 0,

P(|Xn/n − θ| ≥ ε) ≤ ε⁻⁴ E|Xn/n − θ|⁴ = [θ⁴(1−θ) + (1−θ)⁴θ]/(ε⁴n³) + 3θ²(1−θ)²(n−1)/(ε⁴n³).

Hence,

Σ_{n=1}^∞ P(|Xn/n − θ| ≥ ε) < ∞

and, by Theorem 1.8(v) in Shao (2003), limn Xn/n = θ a.s. Define Wn = I_{Xn≥1} Xn/n. Then Yn = log(Wn + e I_{Xn=0}). Note that

Σ_{n=1}^∞ P(√n I_{Xn=0} > ε) = Σ_{n=1}^∞ P(Xn = 0) = Σ_{n=1}^∞ (1 − θ)^n < ∞.

Hence, limn √n I_{Xn=0} = 0 a.s., which implies that limn I_{Xn=0} = 0 a.s. and limn I_{Xn≥1} = 1 a.s. By the continuity of the log function on (0, ∞), limn Yn = log θ a.s.
Since Xn has the same distribution as Σ_{j=1}^n Zj, by the central limit theorem, √n(Xn/n − θ) →d N(0, θ(1−θ)). Since we have shown that limn √n I_{Xn=0} = 0 a.s. and limn Xn/n = θ a.s., limn I_{Xn=0} Xn/√n = 0 a.s. By Slutsky's theorem (e.g., Theorem 1.11 in Shao, 2003),

√n(Wn − θ) = √n(Xn/n − θ) − I_{Xn=0} Xn/√n →d N(0, θ(1−θ)).

Then, by the δ-method with g(t) = log t and g′(t) = t⁻¹ (e.g., Theorem 1.12 in Shao, 2003), √n(log Wn − log θ) →d N(0, (1−θ)/θ). Since √n(Yn − log θ) = √n(log Wn − log θ) + √n I_{Xn=0}, by Slutsky's theorem again, we obtain that √n(Yn − log θ) →d N(0, (1−θ)/θ).

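The limiting distribution in Exercise 61 can be probed by simulation. The following sketch (illustrative only; the values of n, θ, and the replication count are arbitrary choices, not from the book) compares the empirical variance of √n(Yn − log θ) with (1−θ)/θ:

```python
import math
import random
import statistics

def simulate(n, theta, reps=4000, seed=1):
    # Draw X_n ~ Binomial(n, theta) as a sum of Bernoulli(theta) variables;
    # Y_n = log(X_n/n) if X_n >= 1 and Y_n = 1 if X_n = 0, as in Exercise 61.
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        x = sum(1 for _ in range(n) if rng.random() < theta)
        y = math.log(x / n) if x >= 1 else 1.0
        vals.append(math.sqrt(n) * (y - math.log(theta)))
    return vals

theta = 0.4
vals = simulate(400, theta)
# The sample variance should be close to (1 - theta)/theta = 1.5.
print(statistics.mean(vals), statistics.variance(vals))
```

With n = 400 the empirical mean is near 0 and the empirical variance near 1.5, in line with the δ-method limit N(0, (1−θ)/θ).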
Exercise 62 (#1.149). Let X1, ..., Xn be independent and identically distributed random variables such that, for x = 3, 4, ..., P(X1 = ±x) = (2cx² log x)⁻¹, where c = Σ_{x=3}^∞ x⁻²/log x. Show that E|X1| = ∞ but n⁻¹ Σ_{i=1}^n Xi →p 0.
Solution. Note that

E|X1| = c⁻¹ Σ_{x=3}^∞ 1/(x log x) ≥ c⁻¹ ∫_3^∞ 1/(x log x) dx = ∞.

For any positive integer n, E[X1 I_(−n,n)(X1)] = 0. For sufficiently large x,

x[1 − F(x) + F(−x)] < c⁻¹ x Σ_{k=x}^∞ 1/(k² log k)
  ≤ c⁻¹ x ∫_{x−1}^∞ 1/(t² log t) dt
  ≤ [c⁻¹ x/log(x−1)] ∫_{x−1}^∞ 1/t² dt
  = [c⁻¹/log(x−1)] · x/(x−1)
  → 0

as x → ∞. By the weak law of large numbers (e.g., Theorem 1.13(i) in Shao, 2003), n⁻¹ Σ_{i=1}^n Xi →p 0.

Exercise 63 (#1.151). Let X1, X2, ... be independent random variables. Assume that limn Σ_{i=1}^n P(|Xi| > n) = 0 and limn n⁻² Σ_{i=1}^n E(Xi² I_{|Xi|≤n}) = 0. Show that (Σ_{i=1}^n Xi − bn)/n →p 0, where bn = Σ_{i=1}^n E(Xi I_{|Xi|≤n}).
Solution. For any n, let Yni = Xi I_{|Xi|≤n}, i = 1, ..., n. Define Tn = Σ_{i=1}^n Xi and Zn = Σ_{i=1}^n Yni. Then

P(Tn ≠ Zn) ≤ Σ_{i=1}^n P(Yni ≠ Xi) = Σ_{i=1}^n P(|Xi| > n) → 0


as n → ∞. For any ε > 0,

P(|Zn − EZn|/n ≥ ε) ≤ Var(Zn)/(ε²n²) = (1/(ε²n²)) Σ_{i=1}^n Var(Yni) ≤ (1/(ε²n²)) Σ_{i=1}^n E(Yni²) = (1/(ε²n²)) Σ_{i=1}^n E(Xi² I_{|Xi|≤n}) → 0

as n → ∞, where the first equality follows from the fact that Yn1, ..., Ynn are independent since X1, ..., Xn are independent. Thus,

P(|Tn − EZn|/n ≥ ε) ≤ P(|Tn − EZn|/n ≥ ε, Tn = Zn) + P(Tn ≠ Zn) ≤ P(|Zn − EZn|/n ≥ ε) + P(Tn ≠ Zn) → 0

under the established results. Hence the result follows from the fact that bn = EZn.

Exercise 64 (#1.154). Let X1, ..., Xn be independent and identically distributed random variables with Var(X1) < ∞. Show that

(2/(n(n+1))) Σ_{j=1}^n jXj →p EX1.

Note. A simple way to solve a problem of showing Yn →p a is to establish limn EYn = a and limn Var(Yn) = 0.
Solution. Note that

E[(2/(n(n+1))) Σ_{j=1}^n jXj] = (2/(n(n+1))) Σ_{j=1}^n jEXj = EX1.

Let σ² = Var(X1). Then

Var((2/(n(n+1))) Σ_{j=1}^n jXj) = (4σ²/(n²(n+1)²)) Σ_{j=1}^n j² = 2σ²(2n+1)/(3n(n+1)) → 0


as n → ∞.

Exercise 65 (#1.165). Let X1, X2, ... be independent random variables. Suppose that Σ_{j=1}^n (Xj − EXj)/σn →d N(0, 1), where σn² = Var(Σ_{j=1}^n Xj). Show that n⁻¹ Σ_{j=1}^n (Xj − EXj) →p 0 if and only if limn σn/n = 0.
Solution. If limn σn/n = 0, then by Slutsky's theorem (e.g., Theorem 1.11 in Shao, 2003),

(1/n) Σ_{j=1}^n (Xj − EXj) = (σn/n)(1/σn) Σ_{j=1}^n (Xj − EXj) →d 0.

Assume now σn/n does not converge to 0 but n⁻¹ Σ_{j=1}^n (Xj − EXj) →p 0. Without loss of generality, assume that limn σn/n = c ∈ (0, ∞]. By Slutsky's theorem,

(1/σn) Σ_{j=1}^n (Xj − EXj) = (n/σn)(1/n) Σ_{j=1}^n (Xj − EXj) →p 0.

This contradicts the fact that Σ_{j=1}^n (Xj − EXj)/σn →d N(0, 1). Hence, n⁻¹ Σ_{j=1}^n (Xj − EXj) does not converge to 0 in probability.

Exercise 66 (#1.152, #1.166). Let Tn = Σ_{i=1}^n Xi, where the Xn's are independent random variables satisfying P(Xn = ±n^θ) = 0.5 and θ > 0 is a constant. Show that
(i) Tn/√Var(Tn) →d N(0, 1);
(ii) when θ < 0.5, limn Tn/n = 0 a.s.;
(iii) when θ ≥ 0.5, Tn/n does not converge to 0 in probability.
Solution. (i) Note that ETn = 0 and Var(Xn) = n^{2θ} for any n. Hence σn² = Var(Tn) = Σ_{j=1}^n j^{2θ}. Since

Σ_{j=1}^{n−1} j^{2θ} ≤ Σ_{j=1}^{n−1} ∫_j^{j+1} x^{2θ} dx = ∫_1^n x^{2θ} dx = (n^{2θ+1} − 1)/(2θ+1)

and

Σ_{j=2}^n j^{2θ} ≥ Σ_{j=1}^{n−1} ∫_j^{j+1} x^{2θ} dx = (n^{2θ+1} − 1)/(2θ+1),

we conclude that

limn σn²/n^{2θ+1} = limn n^{−(2θ+1)} Σ_{j=1}^n j^{2θ} = 1/(2θ+1).

Then limn n^θ/σn = 0 and, for any ε > 0, n^θ < εσn for sufficiently large n. Since |Xn| ≤ n^θ, when n is sufficiently large, I_{|Xj|>εσn} = 0, j = 1, ..., n,


and, hence, Σ_{j=1}^n E(Xj² I_{|Xj|>εσn}) = 0. Thus, Lindeberg's condition holds and, by the Lindeberg central limit theorem (e.g., Theorem 1.15 in Shao, 2003), Tn/√Var(Tn) →d N(0, 1).
(ii) When θ < 0.5,

Σ_{n=1}^∞ EXn²/n² = Σ_{n=1}^∞ n^{2θ}/n² < ∞.

By the Kolmogorov strong law of large numbers (e.g., Theorem 1.14 in Shao, 2003), limn Tn/n = 0 a.s.
(iii) From the result in (i) and the result in the previous exercise, Tn/n →p 0 if and only if limn σn/n = 0. In part (i), we have shown that limn σn²/n^{2θ+1} equals a positive constant. Hence, the result follows since limn σn/n ≠ 0 when θ ≥ 0.5.

Exercise 67 (#1.162). Let X1, X2, ... be independent random variables such that Xj has the uniform distribution on [−j, j], j = 1, 2, .... Show that Lindeberg's condition is satisfied.
Solution. Note that EXj = 0 and Var(Xj) = (2j)⁻¹ ∫_{−j}^j x² dx = j²/3 for all j. Hence

σn² = Var(Σ_{j=1}^n Xj) = (1/3) Σ_{j=1}^n j² = n(n+1)(2n+1)/18.

For any ε > 0, n < εσn for sufficiently large n, since limn n/σn = 0. Since |Xj| ≤ j ≤ n, when n is sufficiently large,

Σ_{j=1}^n E(Xj² I_{|Xj|>εσn}) = 0.

Thus, Lindeberg’s condition holds. Exercise 68 (#1.163). Let X1 , X2 , ... be independent random variables such that for j = 1, 2,..., P (Xj = ±j a ) = 6−1 j −2(a−1) and P (Xj = 0) = 1−3−1 j −2(a−1) , where a > 1 is a constant. Show that Lindeberg’s condition is satisfied if and only if a < 1.5. Solution. Note that EXj = 0 and σn2 =

n j=1

Var(Xj ) =

n j=1

2j 2a 1 2 n(n + 1)(2n + 1) = j = . 2(a−1) 3 j=1 18 6j n

Assume first a < 1.5. For any  > 0, na < σn for sufficiently large n, since limn na /σn = 0. Note that |Xn | ≤ na for all n. Therefore, when n is sufficiently large, I{|Xj |> σn } = 0, j = 1, ..., n, and, hence, n j=1

E(Xj2 I{|Xj |> σn } ) = 0.


Thus, Lindeberg’s condition holds. Assume now a ≥ 1.5. For  ∈ (0, 13 ), let kn be the integer part of (σn )1/a . Then ⎞ ⎛ kn n n 2 2 1 j j ⎠ 1 E(Xj2 I{|Xj |> σn } ) = 2 ⎝ − σn2 j=1 σn j=1 3 3 j=1 = 1−

kn (kn + 1)(2kn + 1) , 18σn2

which converges, as n → ∞, to 1 if a > 1.5 (since limn kn /n = 0) and to 1 − 2 /9 if a = 1.5 (since limn kn3 /σn2 = 2 ). Hence, Lindeberg’s condition does not hold. Exercise 69 (#1.155). Let {Xn } be a sequence of random variables and ¯ n = n Xi /n. Show that let X i=1 ¯ n = 0 a.s.; (i) if limn Xn = 0 a.s., then limn X r ¯ n |r = 0, where (ii) if supn E|Xn | < ∞ and limn E|Xn |r = 0, then limn E|X r ≥ 1 is a constant; (iii) the result in part (ii) may not be true for r ∈ (0, 1); ¯ n →p 0. (iv) Xn →p 0 may not imply X Solution. (i) The result in this part is actually a well known result in mathematical analysis. It suffices to show that nif {xn } is a sequence of real numbers satisfying limn xn = 0, then n−1 i=1 xi = 0. Assume that limn xn = 0. Then M = supn |xn | < ∞ and, for any  > 0, there is an N such that |xn | ≤  for all n > N . Then, for n > max{N, N M/}, N   n  n 1  1  xi  ≤ |xi | + |xi | n n i=1 i=1 i=N +1 N  n 1 M+  ≤ n i=1 i=N +1

NM (n − N ) = + n n <  +  = 2. r ¯ n |r ≤ (ii) When By Jensen’sinequality, E|X n r ≥ 1,r|x| is a convex function. n −1 r −1 r n i=1 E|Xi | . When limn E|Xn | = 0, limn n i=1 E|Xi | = 0 (the ¯ n |r = 0. result in part (i)). Hence, limn E|X (iii)-(iv) Consider the Xi ’s in the previous exercise, i.e., X1 , X2 , ... are independent, P (Xj = ±j a ) = 6−1 j −2(a−1) and P (Xj = 0) = 1 − 3−1 j −2(a−1) , where a is a constant satisfying 1 < a < 1.5. Let r be a constant such that 0 < r < 2(a − 1)/a. Then 0 < r < 1. Note that

lim E|Xn |r = lim 3−1 nar−2(a−1) = 0 n



and, hence, Xn →p 0. From the result in the previous exercise, Σ_{i=1}^n Xi/σn →d N(0, 1) with σn² = n(n+1)(2n+1)/18. Since limn σn/n = ∞, X̄n does not converge to 0 in probability. This shows that Xn →p 0 may not imply X̄n →p 0. Furthermore, E|X̄n|^r does not converge to 0, because if it did, then X̄n →p 0. This shows that the result in part (ii) may not be true for r ∈ (0, 1).

Exercise 70 (#1.164). Let X1, X2, ... be independent random variables satisfying P(Xj = ±j^a) = P(Xj = 0) = 1/3, where a > 0, j = 1, 2, .... Show that Liapounov's condition holds, i.e.,

limn σn^{−(2+δ)} Σ_{j=1}^n E|Xj − EXj|^{2+δ} = 0

for some δ > 0, where σn² = Var(Σ_{j=1}^n Xj).
Solution. Note that EXj = 0 and

σn² = Σ_{j=1}^n Var(Xj) = (2/3) Σ_{j=1}^n j^{2a}.

For any δ > 0,

Σ_{j=1}^n E|Xj − EXj|^{2+δ} = (2/3) Σ_{j=1}^n j^{(2+δ)a}.

From the proof of Exercise 66,

limn n^{−(t+1)} Σ_{j=1}^n j^t = 1/(t+1)

for any t > 0. Thus,

limn σn^{−(2+δ)} Σ_{j=1}^n E|Xj − EXj|^{2+δ}
  = limn [(2/3) Σ_{j=1}^n j^{(2+δ)a}]/[(2/3) Σ_{j=1}^n j^{2a}]^{1+δ/2}
  = limn (3/2)^{δ/2} [(2a+1)^{1+δ/2}/((2+δ)a+1)] n^{(2+δ)a+1}/n^{(2a+1)(1+δ/2)}
  = limn (3/2)^{δ/2} [(2a+1)^{1+δ/2}/((2+δ)a+1)] n^{−δ/2}
  = 0.


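Because Liapounov's condition holds, Σ_{j=1}^n Xj/σn →d N(0, 1) for the variables of Exercise 70. The following simulation sketch (not from the book; n, a, and the replication count are arbitrary illustrative choices) checks that the standardized sum has mean near 0 and variance near 1:

```python
import math
import random
import statistics

def standardized_sums(n, a, reps=3000, seed=2):
    # X_j takes the values j**a, -j**a, 0 with probability 1/3 each,
    # so Var(X_j) = (2/3)*j**(2a) and sigma_n**2 = (2/3)*sum_j j**(2a).
    sigma = math.sqrt(sum(2.0 / 3.0 * j ** (2 * a) for j in range(1, n + 1)))
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        s = 0.0
        for j in range(1, n + 1):
            u = rng.random()
            if u < 1.0 / 3.0:
                s += j ** a
            elif u < 2.0 / 3.0:
                s -= j ** a
        out.append(s / sigma)
    return out

draws = standardized_sums(200, 1.0)
# Mean ~ 0 and variance ~ 1, consistent with an N(0, 1) limit.
print(statistics.mean(draws), statistics.variance(draws))
```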
Chapter 2

Fundamentals of Statistics

Exercise 1 (#2.9). Consider the family of double exponential distributions: P = {2⁻¹e^{−|t−µ|} : −∞ < µ < ∞}. Show that P is not an exponential family.
Solution. Assume that P is an exponential family. Then there exist p-dimensional Borel functions T(X) and η(µ) (p ≥ 1) and one-dimensional Borel functions h(X) and ξ(µ) such that

2⁻¹ exp{−|t − µ|} = exp{[η(µ)]^τ T(t) − ξ(µ)} h(t)

for any t and µ. Let X = (X1, ..., Xn) be a random sample from P ∈ P (i.e., X1, ..., Xn are independent and identically distributed with P ∈ P), where n > p, Tn(X) = Σ_{i=1}^n T(Xi), and hn(X) = Π_{i=1}^n h(Xi). Then the joint Lebesgue density of X is

2^{−n} exp{−Σ_{i=1}^n |xi − µ|} = exp{[η(µ)]^τ Tn(x) − nξ(µ)} hn(x)

for any x = (x1, ..., xn) and µ, which implies that

Σ_{i=1}^n |xi| − Σ_{i=1}^n |xi − µ| = [η̃(µ)]^τ Tn(x) − nξ̃(µ)

for any x and µ, where η̃(µ) = η(µ) − η(0) and ξ̃(µ) = ξ(µ) − ξ(0). Define ψµ(x) = Σ_{i=1}^n |xi| − Σ_{i=1}^n |xi − µ|. We conclude that if x = (x1, ..., xn) and y = (y1, ..., yn) are such that Tn(x) = Tn(y), then ψµ(x) = ψµ(y) for all µ, which implies that the vector of the ordered xi's is the same as the vector of the ordered yi's. On the other hand, we may choose real numbers µ1, ..., µp such that η̃(µi), i = 1, ..., p, are linearly independent vectors. Since

ψµi(x) = [η̃(µi)]^τ Tn(x) − nξ̃(µi),  i = 1, ..., p,


for any x, Tn(x) is then a function of the p functions ψµi(x), i = 1, ..., p. Since n > p, it can be shown that there exist x and y in R^n such that ψµi(x) = ψµi(y), i = 1, ..., p (which implies Tn(x) = Tn(y)), but the vector of ordered xi's is not the same as the vector of ordered yi's. This contradicts the previous conclusion. Hence, P is not an exponential family.

Exercise 2 (#2.13). A discrete random variable X with P(X = x) = γ(x)θ^x/c(θ), x = 0, 1, 2, ..., where γ(x) ≥ 0, θ > 0, and c(θ) = Σ_{x=0}^∞ γ(x)θ^x, is called a random variable with a power series distribution. Show that
(i) {γ(x)θ^x/c(θ) : θ > 0} is an exponential family;
(ii) if X1, ..., Xn are independent and identically distributed with a power series distribution γ(x)θ^x/c(θ), then Σ_{i=1}^n Xi has the power series distribution γn(x)θ^x/[c(θ)]^n, where γn(x) is the coefficient of θ^x in the power series expansion of [c(θ)]^n.
Solution. (i) Note that γ(x)θ^x/c(θ) = exp{x log θ − log(c(θ))}γ(x). Thus, {γ(x)θ^x/c(θ) : θ > 0} is an exponential family.
(ii) From part (i), the natural parameter is η = log θ and ζ(η) = log(c(e^η)). From the properties of exponential families (e.g., Theorem 2.1 in Shao, 2003), the moment generating function of X is ψX(t) = e^{ζ(η+t)}/e^{ζ(η)} = c(θe^t)/c(θ). The moment generating function of Σ_{i=1}^n Xi is [c(θe^t)]^n/[c(θ)]^n, which is the moment generating function of the power series distribution γn(x)θ^x/[c(θ)]^n.

Exercise 3 (#2.17). Let X be a random variable having the gamma distribution with shape parameter α and scale parameter γ, where α is known and γ is unknown. Let Y = σ log X. Show that
(i) if σ > 0 is unknown, then the distribution of Y is in a location-scale family;
(ii) if σ > 0 is known, then the distribution of Y is in an exponential family.
Solution. (i) The Lebesgue density of X is

[Γ(α)γ^α]⁻¹ x^{α−1} e^{−x/γ} I_(0,∞)(x).

Applying the result in the note of Exercise 17 in Chapter 1, the Lebesgue density of Y = σ log X is

[Γ(α)σ]⁻¹ e^{α(y−σ log γ)/σ} exp{−e^{(y−σ log γ)/σ}}.

It belongs to a location-scale family with location parameter η = σ log γ and scale parameter σ.


(ii) When σ is known, we rewrite the density of Y as

[σΓ(α)]⁻¹ exp{αy/σ} exp{−e^{y/σ}/γ − α log γ}.

Therefore, the distribution of Y is from an exponential family.

Exercise 4. Let (X1, ..., Xn) be a random sample from N(0, 1). Show that Xi²/Σ_{j=1}^n Xj² and Σ_{j=1}^n Xj² are independent, i = 1, ..., n.
Solution. Note that X1², ..., Xn² are independent and have the chi-square distribution χ²₁. Hence their joint Lebesgue density is

c e^{−(y1+···+yn)/2}/√(y1 ··· yn),  yj > 0,

where c is a constant. Let U = Σ_{j=1}^n Xj² and Vi = Xi²/U, i = 1, ..., n. Then Xi² = UVi and Σ_{j=1}^n Vj = 1. With vn = 1 − v1 − ··· − v_{n−1}, the Lebesgue density for (U, V1, ..., V_{n−1}) is

c e^{−u/2} u^{n−1}/√(u^n v1 ··· vn) = c u^{n/2−1} e^{−u/2}/√(v1 ··· vn),  u > 0, vj > 0.

Hence U and (V1, ..., V_{n−1}) are independent. Since Vn = 1 − (V1 + ··· + V_{n−1}), we conclude that U and Vi are independent for each i. An alternative solution can be obtained by using Basu's theorem (e.g., Theorem 2.4 in Shao, 2003).

Exercise 5. Let X = (X1, ..., Xn) be a random n-vector having the multivariate normal distribution Nn(µJ, D), where J is the n-vector of 1's, D is the n × n matrix with diagonal entries σ² and off-diagonal entries ρσ², and |ρ| < 1. Let X̄ = n⁻¹ Σ_{i=1}^n Xi and W = Σ_{i=1}^n (Xi − X̄)². Show that X̄ and W are independent, X̄ has the normal distribution N(µ, [1+(n−1)ρ]σ²/n), and W/[(1−ρ)σ²] has the chi-square distribution χ²_{n−1}.
Solution. Define the n × n orthogonal (Helmert-type) matrix A with rows

(1/√n, 1/√n, 1/√n, ..., 1/√n),
(1/√(2·1), −1/√(2·1), 0, ..., 0),
(1/√(3·2), 1/√(3·2), −2/√(3·2), ..., 0),
...,
(1/√(n(n−1)), 1/√(n(n−1)), 1/√(n(n−1)), ..., −(n−1)/√(n(n−1))).


Then AA^τ = I (the identity matrix) and

ADA^τ = σ² diag(1 + (n−1)ρ, 1−ρ, ..., 1−ρ).

Let Y = AX. Then Y is normally distributed with E(Y) = AE(X) = (√n µ, 0, ..., 0) and Var(Y) = ADA^τ, i.e., the components of Y are independent. Let Yi be the ith component of Y. Then Y1 = √n X̄ and Σ_{i=1}^n Yi² = Y^τ Y = X^τ A^τ A X = X^τ X = Σ_{i=1}^n Xi². Hence X̄ = Y1/√n and

W = Σ_{i=1}^n (Xi − X̄)² = Σ_{i=1}^n Xi² − nX̄² = Σ_{i=1}^n Yi² − Y1² = Σ_{i=2}^n Yi².

Since the Yi's are independent, X̄ and W are independent.
Since Y1 has distribution N(√n µ, [1 + (n−1)ρ]σ²), X̄ = Y1/√n has distribution N(µ, [1+(n−1)ρ]σ²/n). Since Y2, ..., Yn are independent and identically distributed as N(0, (1−ρ)σ²), W/[(1−ρ)σ²] = Σ_{i=2}^n Yi²/[(1−ρ)σ²] has the χ²_{n−1} distribution.

Exercise 6. Let (X1, ..., Xn) be a random sample from the uniform distribution on the interval [0, 1] and let R = X(n) − X(1), where X(i) is the ith order statistic. Derive the Lebesgue density of R and show that the limiting distribution of 2n(1 − R) is the chi-square distribution χ²₄.
Solution. The joint Lebesgue density of X(1) and X(n) is

f(x, y) = n(n−1)(y − x)^{n−2} for 0 < x < y < 1 and f(x, y) = 0 otherwise

(see, e.g., Example 2.9 in Shao, 2003). Then the joint Lebesgue density of R and X(n) is

g(x, y) = n(n−1)x^{n−2} for 0 < x < y < 1 and g(x, y) = 0 otherwise,

and the Lebesgue density of R is

∫ g(x, y) dy = ∫_x^1 n(n−1)x^{n−2} dy = n(n−1)x^{n−2}(1 − x)

for 0 < x < 1. Consequently, the Lebesgue density of 2n(1 − R) is

hn(x) = [(n−1)x/(4n)][1 − x/(2n)]^{n−2} for 0 < x < 2n and hn(x) = 0 otherwise.

Since limn [1 − x/(2n)]^{n−2} = e^{−x/2}, limn hn(x) = 4⁻¹ x e^{−x/2} I_(0,∞)(x), which is the Lebesgue density of the χ²₄ distribution. By Scheffé's theorem (e.g.,


Proposition 1.18 in Shao, 2003), the limiting distribution of 2n(1 − R) is the χ²₄ distribution.

Exercise 7. Let (X1, ..., Xn) be a random sample from the exponential distribution with Lebesgue density θ⁻¹e^{(a−x)/θ} I_(a,∞)(x), where a ∈ R and θ > 0 are parameters. Let X(1) ≤ ··· ≤ X(n) be the order statistics, X(0) = 0, and Zi = X(i) − X(i−1), i = 1, ..., n. Show that
(i) Z1, ..., Zn are independent and 2(n−i+1)Zi/θ has the χ²₂ distribution;
(ii) 2[Σ_{i=1}^r X(i) + (n−r)X(r) − na]/θ has the χ²_{2r} distribution, r = 1, ..., n;
(iii) X(1) and Y are independent and (X(1) − a)/Y has the Lebesgue density n(1 + nt/(n−1))^{−n} I_(0,∞)(t), where Y = (n−1)⁻¹ Σ_{i=1}^n (Xi − X(1)).
Solution. If we can prove the result for the case of a = 0 and θ = 1, then the result for the general case follows by considering the transformation (Xi − a)/θ, i = 1, ..., n. Hence, we assume that a = 0 and θ = 1.
(i) The joint Lebesgue density of X(1), ..., X(n) is

f(x1, ..., xn) = n! e^{−x1−···−xn} for 0 < x1 < ··· < xn and f(x1, ..., xn) = 0 otherwise.

Then the joint Lebesgue density of Zi, i = 1, ..., n, is

g(x1, ..., xn) = n! e^{−nx1−···−(n−i+1)xi−···−xn} for xi > 0, i = 1, ..., n, and g(x1, ..., xn) = 0 otherwise.

Hence Z1, ..., Zn are independent and, for each i, the Lebesgue density of Zi is (n−i+1)e^{−(n−i+1)xi} I_(0,∞)(xi). Then the density of 2(n−i+1)Zi is 2⁻¹e^{−xi/2} I_(0,∞)(xi), which is the density of the χ²₂ distribution.
(ii) For r = 1, ..., n,

Σ_{i=1}^r X(i) + (n−r)X(r) = Σ_{i=1}^r (n−i+1)Zi.

From (i), Z1, ..., Zn are independent and 2(n−i+1)Zi has the χ²₂ distribution. Hence 2[Σ_{i=1}^r X(i) + (n−r)X(r)] has the χ²_{2r} distribution for any r.
(iii) Note that

Y = (n−1)⁻¹ Σ_{i=2}^n (X(i) − X(1)) = (n−1)⁻¹ Σ_{i=2}^n (n−i+1)Zi.

From the result in (i), Y and X(1) are independent and 2(n−1)Y has the χ²_{2(n−1)} distribution. Hence the Lebesgue density of Y is fY(y) = [(n−1)^n/(n−1)!] y^{n−2} e^{−(n−1)y} I_(0,∞)(y). Note that the Lebesgue density of X(1) is fX(1)(x) = n e^{−nx} I_(0,∞)(x). Hence, for t > 0, the density of the ratio X(1)/Y is (e.g., Example 1.15 in Shao, 2003)

f(t) = ∫ |x| fY(x) fX(1)(tx) dx
  = [n(n−1)^n/(n−1)!] ∫_0^∞ x^{n−1} e^{−(n+nt−1)x} dx
  = n(1 + nt/(n−1))^{−n} [(n+nt−1)^n/(n−1)!] ∫_0^∞ x^{n−1} e^{−(n+nt−1)x} dx
  = n(1 + nt/(n−1))^{−n}.


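The distributional result in Exercise 7(i) is easy to probe numerically: with a = 0, each normalized spacing 2(n−i+1)Zi/θ is χ²₂ and therefore has mean 2. A sketch under these assumptions (the sample size, θ, and replication count are arbitrary illustrative choices, not from the book):

```python
import random
import statistics

def spacing_means(n, theta, reps=3000, seed=3):
    # For each replicate, draw n exponential variables (a = 0, scale theta),
    # sort them, and record 2*(n - i + 1)*Z_i/theta for each spacing
    # Z_i = X_(i) - X_(i-1), with X_(0) = 0.
    rng = random.Random(seed)
    sums = [0.0] * n
    for _ in range(reps):
        xs = sorted(rng.expovariate(1.0 / theta) for _ in range(n))
        prev = 0.0
        for i, x in enumerate(xs, start=1):
            sums[i - 1] += 2 * (n - i + 1) * (x - prev) / theta
            prev = x
    return [s / reps for s in sums]

# Each entry should be close to 2, the mean of the chi-square
# distribution with 2 degrees of freedom.
print(spacing_means(5, 1.5))
```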
Exercise 8 (#2.19). Let (X1 , ..., Xn ) be a random sample from the gamma distribution with shape parameter α and scale parameter γx and let (Y1 , ..., Yn ) be a random sample from the gamma distribution with shape parameter α and scale parameter γy . Assume that Xi ’s and Yi ’s are inde¯ Y¯ , where X ¯ and Y¯ are pendent. Derive the distribution of the statistic X/ the sample means based on Xi ’s and Yi ’s, respectively. ¯ has the Solution. From the property of the gamma distribution, nX gamma distribution with shape parameter nα and scale parameter γx and nY¯ has the gamma distribution with shape parameter nα and scale param¯ and Y¯ are independent, the Lebesgue density of the ratio eter γy . Since X ¯ ¯ X/Y is, for t > 0,  ∞ 1 f (t) = (tx)nα−1 e−tx/γx xnα e−x/γy dx [Γ(nα)]2 (γx γy )nα 0

−2nα t 1 Γ(2nα)tnα−1 + . = [Γ(nα)]2 (γx γy )nα γx γy Exercise 9 (#2.22). Let (Yi , Zi ), i = 1, ..., n, be independent and identically distributed random 2-vectors. The sample correlation coefficient is defined to be T =

n 1 ¯ (Yi − Y¯ )(Zi − Z), (n − 1)SY SZ i=1

n n n where Y¯ = n−1 i=1 Yi , Z¯ = n−1 i=1 Zi , SY2 = (n−1)−1 i=1 (Yi −Y¯ )2 , and n ¯ 2. SZ2 = (n−1)−1 i=1 (Zi − Z) 4 (i) Assume that E|Yi | < ∞ and E|Zi |4 < ∞. Show that √

n(T − ρ) →d N (0, c2 ),

Chapter 2. Fundamentals of Statistics

57

where ρ is the correlation coefficient between Y1 and Z1 and c is a constant. Identify c in terms of moments of (Y1 , Z1 ). (ii) Assume that Yi and Zi are independently distributed as N (µ1 , σ12 ) and N (µ2 , σ22 ), respectively. Show that T has the Lebesgue density   Γ n−1 2 √  n−2  (1 − t2 )(n−4)/2 I(−1,1) (t). πΓ 2 (iii) Under the conditions of part (ii), show that the result in (i) is√ the same as that obtained by applying Scheff´e’s theorem to the density of nT . Solution. (i) Consider first the and Var(Y1 )  special case of EY  1 = EZ1 = 0 −1 ¯ = n n Wi . = Var(Z1 ) = 1. Let Wi = Yi , Zi , Yi2 , Zi2 , Yi Zi and W i=1 Since W1 , ..., Wn are independent and identically distributed and Var(W1 ) is finite under the assumption of E|Y1 |4 < ∞ and E|Z1 |4 < ∞, by the √ ¯ central limit theorem, n(W − θ) →d N5 (0, Σ), where θ = (0, 0, 1, 1, ρ) and ⎞ ⎛ 3 2 2 1

⎜ ρ ⎜ 3 E(Y Σ=⎜ 1 ) ⎜ ⎝ E(Y1 Z12 )

E(Y12 Z1 )

ρ 1 E(Y12 Z1 ) E(Z13 ) E(Y1 Z12 )

E(Y1 ) E(Y12 Z1 ) E(Y14 ) − 1 E(Y12 Z12 ) − 1 E(Y13 Z1 ) − ρ

Define h(x1 , x2 , x3 , x4 , x5 ) = 

E(Y1 Z1 ) E(Z13 ) E(Y12 Z12 ) − 1 E(Z14 ) − 1 E(Y1 Z13 ) − ρ

E(Y1 Z1 ) E(Y1 Z12 ) E(Y13 Z1 ) − ρ E(Y1 Z13 ) − ρ E(Y12 Z12 ) − ρ2

x5 − x1 x2 (x3 − x21 )(x4 − x22 )

⎟ ⎟ ⎟. ⎟ ⎠

.

¯ Then T = h(W (e.g., Theorem 1.12 in √ ) and¯ ρ = h(θ). By the δ-method Shao, 2003), n[h(W ) − h(θ)] →d N (0, c2 ), where c2 = ξ τ Σξ and ξ =  ∂h(w)  ∂w w=θ = (0, 0, −ρ/2, −ρ/2, 1). Hence c2 = ρ2 [E(Y14 ) + E(Z14 ) + 2E(Y12 Z12 )]/4 − ρ[E(Y13 Z1 ) + E(Y1 Z13 )] + E(Y12 Z12 ). The result for the general  case can be obtained by  considering the transformation (Yi − EYi )/ Var(Yi ) and (Zi − EZi )/ Var(Zi ). The value of c2 is then given  by the previous expression  with Y1 and Z1 replaced by (Y1 − EY1 )/ Var(Y1 ) and (Z1 − EZ1 )/ Var(Z1 ), respectively. (ii) We only need to consider the case of µ1 = µ2 = 0 and σ12 = σ22 = 1. Let Y = (Y1 , ..., Yn ), Z = √ (Z1 , ..., Zn ), and AZ be the n-vector whose ith ¯ component is (Zi − Z)/( n − 1SZ ). Note that (n − 1)SY2 − (AτZ Y )2 = Y τ BZ Y with BZ = In − n−1 JJ τ − AZ AτZ , where In is the identity matrix of order n and J is the n-vector of 1’s. Since AτZ AZ = 1 and J τ AZ = 0, BZ AZ = 0,

58

Chapter 2. Fundamentals of Statistics

BZ2 = BZ and tr(BZ ) = n − 2. Consequently, when Z is considered to τ be a fixed vector, Y τ BZ Y and AτZ Y are independent, √ AZ Y τis distributed √ τ 2 as N (0, 1), Y BZ Y has the χn−2 distribution,√and n − 2AZ Y / Y τ BZ Y has the t-distribution tn−2 . Since T = AτZ Y /( n − 1SY ), P (T ≤ t) = E[P (T ≤ t|Z)] &  = = =

=

 '  AτZ Y ≤ tZ E P  τ τ 2 Y BZ Y + (AZ Y )     t AτZ Y Z ≤√ E P √ τ 2 Y BZ Y 1−t  √   t n−2 E P tn−2 ≤ √ 1 − t2 √ −(n−1)/2  t√ n−2 n−1 Γ( 2 ) x2 1−t2  1 + dx, n−2 (n − 2)πΓ( n−2 2 ) 0

where tn−2 denotes a random variable having the t-distribution tn−2 and the third equality follows from the fact that √a2a+b2 ≤ t if and only if √a ≤ √ t for real numbers a and b and t ∈ (0, 1). Thus, T has Lebesgue 1−t2 b2 density Γ( n−1 d 2 ) P (T ≤ t) = √ (1 − t2 )(n−4)/2 I(−1,1) (t). dt πΓ( n−2 ) 2 (iii) Under √ the conditions of part (ii), ρ = 0 and, from √ the result in (i), c = 1 and nT →d N (0, 1). From the result in (ii), nT has Lebesgue density

(n−4)/2 Γ( n−1 2 t2 1 2 ) 1− I(−√n,√n) (t) → √ e−t /2 √ n nπΓ( n−2 ) 2π 2 √ by Stirling’s formula. By Scheff´e’s Theorem, nT →d N (0, 1). Exercise 10 (#2.23). Let X1 , ..., Xn be independent and identically√ distributed randomvariables with EX14 0 for all x ∈ R and, for any θ ∈ Θ, fθ (x) is continuous in x. Let X1 and X2 be independent and identically distributed as fθ . Show that if X1 + X2 is sufficient for θ, then P is an exponential family indexed by θ. Solution. The joint density of X1 and X2 is fθ (x1 )fθ (x2 ). By the factorization theorem, there exist functions gθ (t) and h(x1 , x2 ) such that fθ (x1 )fθ (x2 ) = gθ (x1 + x2 )h(x1 , x2 ).

60

Chapter 2. Fundamentals of Statistics

Then log fθ (x1 ) + log fθ (x2 ) = g(x1 + x2 , θ) + h1 (x1 , x2 ), where g(t, θ) = log gθ (t) and h1 (x1 , x2 ) = log h(x1 , x2 ). Let θ0 ∈ Θ and r(x, θ) = log fθ (x) − log fθ0 (x) and q(x, θ) = g(x, θ) − g(x, θ0 ). Then q(x1 + x2 , θ) = log fθ (x1 ) + log fθ (x2 ) + h1 (x1 , x2 ) − log fθ0 (x1 ) − log fθ0 (x2 ) − h1 (x1 , x2 ) = r(x1 , θ) + r(x2 , θ). Consequently, r(x1 + x2 , θ) + r(0, θ) = q(x1 + x2 , θ) = r(x1 , θ) + r(x2 , θ) for any x1 , x2 , and θ. Let s(x, θ) = r(x, θ) − r(0, θ). Then s(x1 , θ) + s(x2 , θ) = s(x1 + x2 , θ) for any x1 , x2 , and θ. Hence, s(n, θ) = ns(1, θ) n = 0, ±1, ±2, .... n For any rational number m (n and m are integers and m = 0), n  1  m 1    n n s m , θ = ns m , θ = m ns m , θ = m s m m , θ = m s (1, θ) .

Hence $s(x,\theta) = x\,s(1,\theta)$ for any rational $x$. From the continuity of $f_\theta$, we conclude that $s(x,\theta) = x\,s(1,\theta)$ for any $x\in\mathcal{R}$, i.e., $r(x,\theta) = s(1,\theta)x + r(0,\theta)$ for any $x\in\mathcal{R}$. Then, for any $x$ and $\theta$,
$$f_\theta(x) = \exp\{r(x,\theta) + \log f_{\theta_0}(x)\} = \exp\{s(1,\theta)x + r(0,\theta) + \log f_{\theta_0}(x)\} = \exp\{\eta(\theta)x - \xi(\theta)\}h(x),$$
where $\eta(\theta) = s(1,\theta)$, $\xi(\theta) = -r(0,\theta)$, and $h(x) = f_{\theta_0}(x)$. This shows that $\mathcal{P}$ is an exponential family indexed by $\theta$.

Exercise 13 (#2.30). Let $X$ and $Y$ be two random variables such that $Y$ has the binomial distribution with size $N$ and probability $\pi$ and, given $Y = y$, $X$ has the binomial distribution with size $y$ and probability $p$.
(i) Suppose that $p\in(0,1)$ and $\pi\in(0,1)$ are unknown and $N$ is known. Show that $(X,Y)$ is minimal sufficient for $(p,\pi)$.
(ii) Suppose that $\pi$ and $N$ are known and $p\in(0,1)$ is unknown. Show whether $X$ is sufficient for $p$ and whether $Y$ is sufficient for $p$.
Solution. (i) Let $A = \{(x,y): x = 0,1,...,y,\; y = 0,1,...,N\}$. The joint probability density of $(X,Y)$ with respect to the counting measure is
$$\binom{N}{y}\pi^y(1-\pi)^{N-y}\binom{y}{x}p^x(1-p)^{y-x}I_A = \binom{N}{y}\binom{y}{x}\exp\bigg\{x\log\frac{p}{1-p} + y\log\frac{\pi(1-p)}{1-\pi} + N\log(1-\pi)\bigg\}I_A.$$
Hence, $(X,Y)$ has a distribution from an exponential family of full rank ($0 < p < 1$ and $0 < \pi < 1$). This implies that $(X,Y)$ is minimal sufficient for $(p,\pi)$.
(ii) The joint probability density of $(X,Y)$ can be written as
$$\binom{N}{y}\binom{y}{x}\exp\bigg\{x\log\frac{p}{1-p} + y\log(1-p)\bigg\}\pi^y(1-\pi)^{N-y}I_A.$$
This is from an exponential family not of full rank. Let $p_0 = \frac12$, $p_1 = \frac13$, $p_2 = \frac23$, and $\eta(p) = (\log\frac{p}{1-p}, \log(1-p))$. Then, two vectors in $\mathcal{R}^2$, $\eta(p_1)-\eta(p_0) = (-\log 2,\, 2\log 2 - \log 3)$ and $\eta(p_2)-\eta(p_0) = (\log 2,\, \log 2 - \log 3)$, are linearly independent. By the properties of exponential families (e.g., Example 2.14 in Shao, 2003), $(X,Y)$ is minimal sufficient for $p$. Thus, neither $X$ alone nor $Y$ alone is sufficient for $p$.

Exercise 14 (#2.34). Let $X_1,...,X_n$ be independent and identically distributed random variables having the Lebesgue density
$$\exp\bigg\{-\bigg(\frac{x-\mu}{\sigma}\bigg)^4 - \xi(\theta)\bigg\},$$
where $\theta = (\mu,\sigma)\in\Theta = \mathcal{R}\times(0,\infty)$. Show that $\mathcal{P} = \{P_\theta: \theta\in\Theta\}$ is an exponential family, where $P_\theta$ is the joint distribution of $X_1,...,X_n$, and that the statistic $T = \big(\sum_{i=1}^n X_i, \sum_{i=1}^n X_i^2, \sum_{i=1}^n X_i^3, \sum_{i=1}^n X_i^4\big)$ is minimal sufficient for $\theta\in\Theta$.
Solution. Let $T(x) = \big(\sum_{i=1}^n x_i, \sum_{i=1}^n x_i^2, \sum_{i=1}^n x_i^3, \sum_{i=1}^n x_i^4\big)$ for any $x = (x_1,...,x_n)$ and let $\eta(\theta) = \sigma^{-4}(4\mu^3, -6\mu^2, 4\mu, -1)$. The joint density of $(X_1,...,X_n)$ is
$$f_\theta(x) = \exp\big\{[\eta(\theta)]^\tau T(x) - n\mu^4/\sigma^4 - n\xi(\theta)\big\},$$
which belongs to an exponential family. For any two sample points $x = (x_1,...,x_n)$ and $y = (y_1,...,y_n)$,
$$\frac{f_\theta(x)}{f_\theta(y)} = \exp\bigg\{-\frac{1}{\sigma^4}\bigg[\bigg(\sum_{i=1}^n x_i^4 - \sum_{i=1}^n y_i^4\bigg) - 4\mu\bigg(\sum_{i=1}^n x_i^3 - \sum_{i=1}^n y_i^3\bigg) + 6\mu^2\bigg(\sum_{i=1}^n x_i^2 - \sum_{i=1}^n y_i^2\bigg) - 4\mu^3\bigg(\sum_{i=1}^n x_i - \sum_{i=1}^n y_i\bigg)\bigg]\bigg\},$$

which is free of the parameter $(\mu,\sigma)$ if and only if $T(x) = T(y)$. By Theorem 2.3(iii) in Shao (2003), $T(X)$ is minimal sufficient for $\theta$.

Exercise 15 (#2.35). Let $(X_1,...,X_n)$ be a random sample of random variables having the Lebesgue density $f_\theta(x) = (2\theta)^{-1}\big[I_{(0,\theta)}(x) + I_{(2\theta,3\theta)}(x)\big]$. Find a minimal sufficient statistic for $\theta\in(0,\infty)$.
Solution. We use the idea of Theorem 2.3(i)-(ii) in Shao (2003). Let $\Theta_r = \{\theta_1,\theta_2,...\}$ be the set of positive rational numbers, $\mathcal{P}_0 = \{g_\theta: \theta\in\Theta_r\}$, and $\mathcal{P} = \{g_\theta: \theta > 0\}$, where $g_\theta(x) = \prod_{i=1}^n f_\theta(x_i)$ for $x = (x_1,...,x_n)$. Then $\mathcal{P}_0\subset\mathcal{P}$ and a.s. $\mathcal{P}_0$ implies a.s. $\mathcal{P}$ (i.e., if an event $A$ satisfies $P(A) = 0$ for all $P\in\mathcal{P}_0$, then $P(A) = 0$ for all $P\in\mathcal{P}$). Let $\{c_i\}$ be a sequence of positive numbers satisfying $\sum_{i=1}^\infty c_i = 1$ and $g_\infty(x) = \sum_{i=1}^\infty c_i g_{\theta_i}(x)$. Define $T = (T_1,T_2,...)$ with $T_i(x) = g_{\theta_i}(x)/g_\infty(x)$. By Theorem 2.3(ii) in Shao (2003), $T$ is minimal sufficient for $\theta\in\Theta_r$ (or $P\in\mathcal{P}_0$). For any $\theta > 0$, there is a sequence $\{\theta_{i_k}\}\subset\{\theta_i\}$ such that $\lim_k\theta_{i_k} = \theta$. Then
$$g_\theta(x) = \lim_k g_{\theta_{i_k}}(x) = \lim_k T_{i_k}(x)\,g_\infty(x)$$
holds for all $x\in C$ with $P(C) = 1$ for all $P\in\mathcal{P}$. By the factorization theorem, $T$ is sufficient for $\theta > 0$ (or $P\in\mathcal{P}$). By Theorem 2.3(i) in Shao (2003), $T$ is minimal sufficient for $\theta > 0$.

Exercise 16 (#2.36). Let $(X_1,...,X_n)$ be a random sample of random variables having the Cauchy distribution with location parameter $\mu$ and scale parameter $\sigma$, where $\mu\in\mathcal{R}$ and $\sigma > 0$ are unknown parameters. Show that the vector of order statistics is minimal sufficient for $(\mu,\sigma)$.
Solution. The joint Lebesgue density of $(X_1,...,X_n)$ is
$$f_{\mu,\sigma}(x) = \frac{\sigma^n}{\pi^n}\prod_{i=1}^n\frac{1}{\sigma^2 + (x_i-\mu)^2}, \qquad x = (x_1,...,x_n).$$
For any $x = (x_1,...,x_n)$ and $y = (y_1,...,y_n)$, suppose that
$$\frac{f_{\mu,\sigma}(x)}{f_{\mu,\sigma}(y)} = \psi(x,y)$$
holds for any $\mu$ and $\sigma$, where $\psi$ does not depend on $(\mu,\sigma)$. Let $\sigma = 1$. Then we must have
$$\prod_{i=1}^n\big[1 + (y_i-\mu)^2\big] = \psi(x,y)\prod_{i=1}^n\big[1 + (x_i-\mu)^2\big]$$
for all $\mu$. Both sides of the above identity can be viewed as polynomials of degree $2n$ in $\mu$. Comparison of the coefficients of the highest-order terms gives $\psi(x,y) = 1$. Thus,
$$\prod_{i=1}^n\big[1 + (y_i-\mu)^2\big] = \prod_{i=1}^n\big[1 + (x_i-\mu)^2\big]$$

for all $\mu$. As a polynomial in $\mu$, the left-hand side of the above identity has $2n$ complex roots $y_i\pm\sqrt{-1}$, $i = 1,...,n$, while the right-hand side has $2n$ complex roots $x_i\pm\sqrt{-1}$, $i = 1,...,n$. By the unique factorization theorem for entire functions in complex analysis, the two sets of roots must agree. This means that the ordered values of the $x_i$'s are the same as the ordered values of the $y_i$'s. By Theorem 2.3(iii) in Shao (2003), the vector of order statistics of $X_1,...,X_n$ is minimal sufficient for $(\mu,\sigma)$.

Exercise 17 (#2.40). Let $(X_1,...,X_n)$, $n\ge 2$, be a random sample from a distribution having Lebesgue density $f_{\theta,j}$, where $\theta > 0$, $j = 1,2$, $f_{\theta,1}$ is the density of $N(0,\theta^2)$, and $f_{\theta,2}(x) = (2\theta)^{-1}e^{-|x|/\theta}$. Show that $T = (T_1,T_2)$ is minimal sufficient for $(\theta,j)$, where $T_1 = \sum_{i=1}^n X_i^2$ and $T_2 = \sum_{i=1}^n|X_i|$.
Solution A. Let $P$ be the joint distribution of $X_1,...,X_n$. By the factorization theorem, $T$ is sufficient for $(\theta,j)$. Let $\mathcal{P} = \{P: \theta>0, j=1,2\}$, $\mathcal{P}_1 = \{P: \theta>0, j=1\}$, and $\mathcal{P}_2 = \{P: \theta>0, j=2\}$. Let $S$ be a statistic sufficient for $P\in\mathcal{P}$. Then $S$ is sufficient for $P\in\mathcal{P}_j$, $j = 1,2$. Note that $\mathcal{P}_1$ is an exponential family with $T_1$ as a minimal sufficient statistic. Hence, there exists a Borel function $\psi_1$ such that $T_1 = \psi_1(S)$ a.s. $\mathcal{P}_1$. Since all densities in $\mathcal{P}$ are dominated by those in $\mathcal{P}_1$, we conclude that $T_1 = \psi_1(S)$ a.s. $\mathcal{P}$. Similarly, $\mathcal{P}_2$ is an exponential family with $T_2$ as a minimal sufficient statistic and, thus, there exists a Borel function $\psi_2$ such that $T_2 = \psi_2(S)$ a.s. $\mathcal{P}$. This proves that $T = (\psi_1(S),\psi_2(S))$ a.s. $\mathcal{P}$. Hence $T$ is minimal sufficient for $(\theta,j)$.
Solution B. Let $P$ be the joint distribution of $X_1,...,X_n$. The Lebesgue density of $P$ can be written as
$$\bigg[\frac{I_{\{1\}}(j)}{(2\pi\theta^2)^{n/2}} + \frac{I_{\{2\}}(j)}{(2\theta)^n}\bigg]\exp\bigg\{-\frac{I_{\{1\}}(j)}{2\theta^2}\,T_1 - \frac{I_{\{2\}}(j)}{\theta}\,T_2\bigg\}.$$
Hence $\mathcal{P} = \{P: \theta>0, j=1,2\}$ is an exponential family. Let
$$\eta(\theta,j) = \bigg(-\frac{I_{\{1\}}(j)}{2\theta^2},\, -\frac{I_{\{2\}}(j)}{\theta}\bigg).$$
Note that $\eta(1,1) = (-\frac12, 0)$, $\eta(2^{-1/2},1) = (-1,0)$, and $\eta(1,2) = (0,-1)$. Then $\eta(2^{-1/2},1)-\eta(1,1) = (-\frac12, 0)$ and $\eta(1,2)-\eta(1,1) = (\frac12, -1)$ are two linearly independent vectors in $\mathcal{R}^2$. Hence $T = (T_1,T_2)$ is minimal sufficient for $(\theta,j)$ (e.g., Example 2.14 in Shao, 2003).


Exercise 18 (#2.41). Let $(X_1,...,X_n)$, $n\ge 2$, be a random sample from a distribution with discrete probability density $f_{\theta,j}$, where $\theta\in(0,1)$, $j = 1,2$, $f_{\theta,1}$ is the Poisson distribution with mean $\theta$, and $f_{\theta,2}$ is the binomial distribution with size $1$ and probability $\theta$.
(i) Show that $T = \sum_{i=1}^n X_i$ is not sufficient for $(\theta,j)$.
(ii) Find a two-dimensional minimal sufficient statistic for $(\theta,j)$.
Solution. (i) To show that $T$ is not sufficient for $(\theta,j)$, it suffices to show that, for some $x\le t$, $P(X_n = x|T = t)$ for $j = 1$ is different from $P(X_n = x|T = t)$ for $j = 2$. When $j = 1$,
$$P(X_n = x|T = t) = \binom{t}{x}\frac{(n-1)^{t-x}}{n^t} > 0,$$
whereas when $j = 2$, $P(X_n = x|T = t) = 0$ as long as $x > 1$.
(ii) Let $g_{\theta,j}$ be the joint probability density of $X_1,...,X_n$. Let $\mathcal{P}_0 = \{g_{\frac14,1}, g_{\frac12,1}, g_{\frac12,2}\}$. Then, a.s. $\mathcal{P}_0$ implies a.s. $\mathcal{P}$. By Theorem 2.3(ii) in Shao (2003), the two-dimensional statistic
$$S = \bigg(\frac{g_{\frac14,1}}{g_{\frac12,1}},\, \frac{g_{\frac12,2}}{g_{\frac12,1}}\bigg) = \big(e^{n/4}2^{-T},\; e^{n/2}W2^{T-n}\big)$$
is minimal sufficient for the family $\mathcal{P}_0$, where
$$W = \begin{cases}1 & X_i = 0 \text{ or } 1,\; i = 1,...,n,\\ 0 & \text{otherwise.}\end{cases}$$
Since there is a one-to-one transformation between $S$ and $(T,W)$, we conclude that $(T,W)$ is minimal sufficient for the family $\mathcal{P}_0$. For any $x = (x_1,...,x_n)$, the joint density of $X_1,...,X_n$ is
$$e^{-n\theta I_{\{1\}}(j)}(1-\theta)^{nI_{\{2\}}(j)}W^{I_{\{2\}}(j)}\exp\bigg\{T\bigg[I_{\{1\}}(j)\log\theta + I_{\{2\}}(j)\log\frac{\theta}{1-\theta}\bigg]\bigg\}\prod_{i=1}^n\frac{1}{x_i!}.$$
Hence, by the factorization theorem, $(T,W)$ is sufficient for $(\theta,j)$. By Theorem 2.3(i) in Shao (2003), $(T,W)$ is minimal sufficient for $(\theta,j)$.

Exercise 19 (#2.44). Let $(X_1,...,X_n)$ be a random sample from a distribution on $\mathcal{R}$ having the Lebesgue density $\theta^{-1}e^{-(x-\theta)/\theta}I_{(\theta,\infty)}(x)$, where $\theta > 0$ is an unknown parameter.
(i) Find a statistic that is minimal sufficient for $\theta$.
(ii) Show whether the minimal sufficient statistic in (i) is complete.
Solution. (i) Let $T(x) = \sum_{i=1}^n x_i$ and $W(x) = \min_{1\le i\le n}x_i$, where $x = (x_1,...,x_n)$. The joint density of $X = (X_1,...,X_n)$ is
$$f_\theta(x) = \frac{e^n}{\theta^n}e^{-T(x)/\theta}I_{(\theta,\infty)}(W(x)).$$


For $x = (x_1,...,x_n)$ and $y = (y_1,...,y_n)$,
$$\frac{f_\theta(x)}{f_\theta(y)} = e^{[T(y)-T(x)]/\theta}\,\frac{I_{(\theta,\infty)}(W(x))}{I_{(\theta,\infty)}(W(y))}$$
is free of $\theta$ if and only if $T(x) = T(y)$ and $W(x) = W(y)$. Hence, the two-dimensional statistic $(T(X),W(X))$ is minimal sufficient for $\theta$.
(ii) A direct calculation shows that, for any $\theta$, $E[T(X)] = 2n\theta$ and $E[W(X)] = (1+n^{-1})\theta$. Hence $E[(2n)^{-1}T(X) - (1+n^{-1})^{-1}W(X)] = 0$ for any $\theta$ and $(2n)^{-1}T(X) - (1+n^{-1})^{-1}W(X)$ is not a constant. Thus, $(T,W)$ is not complete.

Exercise 20 (#2.48). Let $T$ be a complete (or boundedly complete) and sufficient statistic. Suppose that there is a minimal sufficient statistic $S$. Show that $T$ is minimal sufficient and $S$ is complete (or boundedly complete).
Solution. We prove the case when $T$ is complete. The case in which $T$ is boundedly complete is similar. Since $S$ is minimal sufficient and $T$ is sufficient, there exists a Borel function $h$ such that $S = h(T)$ a.s. Since $h$ cannot be a constant function and $T$ is complete, we conclude that $S$ is complete. Consider $T - E(T|S) = T - E[T|h(T)]$, which is a Borel function of $T$ and hence can be denoted as $g(T)$. Note that $E[g(T)] = 0$. By the completeness of $T$, $g(T) = 0$ a.s., that is, $T = E(T|S)$ a.s. This means that $T$ is also a function of $S$ and, therefore, $T$ is minimal sufficient.

Exercise 21 (#2.53). Let $X$ be a discrete random variable with probability density
$$f_\theta(x) = \begin{cases}\theta & x = 0\\ (1-\theta)^2\theta^{x-1} & x = 1,2,...\\ 0 & \text{otherwise,}\end{cases}$$
where $\theta\in(0,1)$. Show that $X$ is boundedly complete, but not complete.
Solution. Consider any Borel function $h(x)$ such that
$$E[h(X)] = h(0)\theta + \sum_{x=1}^\infty h(x)(1-\theta)^2\theta^{x-1} = 0$$
for any $\theta\in(0,1)$. Rewriting the left-hand side of the above equation in the ascending order of the powers of $\theta$, we obtain that
$$h(1) + \sum_{x=1}^\infty\big[h(x-1) - 2h(x) + h(x+1)\big]\theta^x = 0$$
for any $\theta\in(0,1)$. Comparing the coefficients of both sides, we obtain that $h(1) = 0$ and $h(x-1) - h(x) = h(x) - h(x+1)$. Therefore, $h(x) = (1-x)h(0)$


for $x = 1,2,...$. This function is bounded if and only if $h(0) = 0$. If $h(x)$ is assumed to be bounded, then $h(0) = 0$ and, hence, $h(x)\equiv 0$. This means that $X$ is boundedly complete. For $h(x) = 1-x$, $E[h(X)] = 0$ for any $\theta$ but $h(X)$ is not $0$ a.s. Therefore, $X$ is not complete.

Exercise 22. Let $X$ be a discrete random variable with
$$P_\theta(X = x) = \frac{\binom{\theta}{x}\binom{N-\theta}{n-x}}{\binom{N}{n}}, \qquad x = 0,1,...,\min\{\theta,n\},\; n-x\le N-\theta,$$
where $n$ and $N$ are positive integers, $N\ge n$, and $\theta = 0,1,...,N$. Show that $X$ is complete.
Solution. Let $g(x)$ be a function of $x\in\{0,1,...,n\}$. Assume $E_\theta[g(X)] = 0$ for any $\theta$, where $E_\theta$ is the expectation with respect to $P_\theta$. When $\theta = 0$, $P_0(X = 0) = 1$ and $E_0[g(X)] = g(0)$. Thus, $g(0) = 0$. When $\theta = 1$, $P_1(X\ge 2) = 0$ and
$$E_1[g(X)] = g(0)P_1(X = 0) + g(1)P_1(X = 1) = g(1)\frac{\binom{N-1}{n-1}}{\binom{N}{n}}.$$
Since $E_1[g(X)] = 0$, we obtain that $g(1) = 0$. Similarly, we can show that $g(2) = \cdots = g(n) = 0$. Hence $X$ is complete.

Exercise 23. Let $X$ be a random variable having the uniform distribution on the interval $(\theta,\theta+1)$, $\theta\in\mathcal{R}$. Show that $X$ is not complete.
Solution. Consider $g(X) = \cos(2\pi X)$. Then $g(X)$ is not $0$ a.s., but
$$E[g(X)] = \int_\theta^{\theta+1}\cos(2\pi x)dx = \frac{\sin(2\pi(\theta+1)) - \sin(2\pi\theta)}{2\pi} = 0$$
for any $\theta$. Hence $X$ is not complete.

Exercise 24 (#2.57). Let $(X_1,...,X_n)$ be a random sample from the $N(\theta,\theta^2)$ distribution, where $\theta > 0$ is a parameter. Find a minimal sufficient statistic for $\theta$ and show whether it is complete.
Solution. The joint Lebesgue density of $X_1,...,X_n$ is
$$\frac{1}{(2\pi\theta^2)^{n/2}}\exp\bigg\{-\frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 + \frac{1}{\theta}\sum_{i=1}^n x_i - \frac{n}{2}\bigg\}.$$
Let
$$\eta(\theta) = \bigg(-\frac{1}{2\theta^2},\, \frac{1}{\theta}\bigg).$$
Then $\eta(\frac12)-\eta(1) = (-\frac32, 1)$ and $\eta(\frac{1}{\sqrt2})-\eta(1) = (-\frac12, \sqrt2-1)$ are linearly independent vectors in $\mathcal{R}^2$. Hence $T = \big(\sum_{i=1}^n X_i^2, \sum_{i=1}^n X_i\big)$ is minimal


sufficient for $\theta$. Note that
$$E\bigg[\sum_{i=1}^n X_i^2\bigg] = nEX_1^2 = 2n\theta^2$$
and
$$E\bigg[\bigg(\sum_{i=1}^n X_i\bigg)^2\bigg] = n\theta^2 + (n\theta)^2 = (n+n^2)\theta^2.$$
Let $h(t_1,t_2) = \frac{1}{2n}t_1 - \frac{1}{n(n+1)}t_2^2$. Then $h(T)$ is not $0$ a.s., but $E[h(T)] = 0$ for any $\theta$. Hence $T$ is not complete.
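The two moment identities above give $E[h(T)] = \theta^2 - \theta^2 = 0$ for every $\theta$. As a supplementary numerical sanity check (not part of the book's solution; the function name, sample size, and $\theta$ values are illustrative), a Monte Carlo sketch:

```python
import random
import statistics

def h_mean(theta, n, reps=100_000, seed=0):
    # Estimate E[h(T)] with h(t1, t2) = t1/(2n) - t2^2/(n(n+1)),
    # where T = (sum X_i^2, sum X_i) and the X_i are i.i.d. N(theta, theta^2).
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        xs = [rng.gauss(theta, theta) for _ in range(n)]
        t1 = sum(x * x for x in xs)
        t2 = sum(xs)
        vals.append(t1 / (2 * n) - t2 * t2 / (n * (n + 1)))
    return statistics.fmean(vals)

# Should be close to 0 for any theta > 0.
print(h_mean(theta=2.0, n=5))
```

An estimate near $0$ across several values of $\theta$, together with the fact that $h(T)$ is non-constant, illustrates the failure of completeness.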

Exercise 25 (#2.56). Suppose that $(X_1,Y_1),...,(X_n,Y_n)$ are independent and identically distributed random 2-vectors and $X_i$ and $Y_i$ are independently distributed as $N(\mu,\sigma_X^2)$ and $N(\mu,\sigma_Y^2)$, respectively, with $\theta = (\mu,\sigma_X^2,\sigma_Y^2)\in\mathcal{R}\times(0,\infty)\times(0,\infty)$. Let $\bar X$ and $S_X^2$ be the sample mean and variance for the $X_i$'s and $\bar Y$ and $S_Y^2$ be the sample mean and variance for the $Y_i$'s. Show that $T = (\bar X, \bar Y, S_X^2, S_Y^2)$ is minimal sufficient for $\theta$ but $T$ is not boundedly complete.
Solution. Let
$$\eta = \bigg(-\frac{1}{2\sigma_X^2},\, \frac{\mu}{\sigma_X^2},\, -\frac{1}{2\sigma_Y^2},\, \frac{\mu}{\sigma_Y^2}\bigg)$$
and
$$S = \bigg(\sum_{i=1}^n X_i^2,\, \sum_{i=1}^n X_i,\, \sum_{i=1}^n Y_i^2,\, \sum_{i=1}^n Y_i\bigg).$$
Then the joint Lebesgue density of $(X_1,Y_1),...,(X_n,Y_n)$ is
$$\frac{1}{(2\pi)^n}\exp\bigg\{\eta^\tau S - \frac{n\mu^2}{2\sigma_X^2} - \frac{n\mu^2}{2\sigma_Y^2} - n\log(\sigma_X\sigma_Y)\bigg\}.$$
Since the parameter space $\{\eta: \mu\in\mathcal{R}, \sigma_X^2 > 0, \sigma_Y^2 > 0\}$ is a three-dimensional curved hyper-surface in $\mathcal{R}^4$, we conclude that $S$ is minimal sufficient. Note that there is a one-to-one correspondence between $T$ and $S$. Hence $T$ is also minimal sufficient.
To show that $T$ is not boundedly complete, consider $h(T) = I_{\{\bar X > \bar Y\}} - \frac12$. Then $|h(T)|\le 0.5$ and, since $\bar X - \bar Y$ has the $N(0,(\sigma_X^2+\sigma_Y^2)/n)$ distribution, $E[h(T)] = 0$ for any $\eta$; but $h(T)$ is not $0$ a.s. Hence $T$ is not boundedly complete.
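The non-completeness argument rests on $P(\bar X > \bar Y) = 1/2$ for all parameter values, which a simulation can confirm. A minimal sketch with assumed illustrative parameter values (not from the text):

```python
import random

def h_mean(mu, sx, sy, n, reps=100_000, seed=0):
    # Estimate E[I{Xbar > Ybar} - 1/2] when X_i ~ N(mu, sx^2) and
    # Y_i ~ N(mu, sy^2).  Xbar - Ybar ~ N(0, (sx^2 + sy^2)/n) is symmetric
    # about 0, so the expectation is 0 for every (mu, sx, sy).
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sx) for _ in range(n)) / n
        ybar = sum(rng.gauss(mu, sy) for _ in range(n)) / n
        hits += xbar > ybar
    return hits / reps - 0.5
```

The estimate stays near $0$ however unequal $\sigma_X$ and $\sigma_Y$ are, since only the common mean matters for the symmetry of $\bar X - \bar Y$.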

Exercise 26 (#2.58). Suppose that $(X_1,Y_1),...,(X_n,Y_n)$ are independent and identically distributed random 2-vectors having the normal distribution with $EX_1 = EY_1 = 0$, $\mathrm{Var}(X_1) = \mathrm{Var}(Y_1) = 1$, and $\mathrm{Cov}(X_1,Y_1) = \theta\in(-1,1)$.
(i) Find a minimal sufficient statistic for $\theta$.
(ii) Show whether the minimal sufficient statistic in (i) is complete or not.
(iii) Prove that $T_1 = \sum_{i=1}^n X_i^2$ and $T_2 = \sum_{i=1}^n Y_i^2$ are both ancillary but $(T_1,T_2)$ is not ancillary.
Solution. (i) The joint Lebesgue density of $(X_1,Y_1),...,(X_n,Y_n)$ is
$$\bigg(\frac{1}{2\pi\sqrt{1-\theta^2}}\bigg)^n\exp\bigg\{-\frac{1}{2(1-\theta^2)}\sum_{i=1}^n(x_i^2+y_i^2) + \frac{\theta}{1-\theta^2}\sum_{i=1}^n x_iy_i\bigg\}.$$
Let
$$\eta = \bigg(-\frac{1}{2(1-\theta^2)},\, \frac{\theta}{1-\theta^2}\bigg).$$
The parameter space $\{\eta: -1 < \theta < 1\}$ is a curve in $\mathcal{R}^2$. Therefore, $\big(\sum_{i=1}^n(X_i^2+Y_i^2), \sum_{i=1}^n X_iY_i\big)$ is minimal sufficient.
(ii) Note that $E\big[\sum_{i=1}^n(X_i^2+Y_i^2)\big] - 2n = 0$, but $\sum_{i=1}^n(X_i^2+Y_i^2) - 2n$ is not $0$ a.s. Therefore, the minimal sufficient statistic is not complete.
(iii) Both $T_1$ and $T_2$ have the chi-square distribution $\chi_n^2$, which does not depend on $\theta$. Hence both $T_1$ and $T_2$ are ancillary. Note that
$$E(T_1T_2) = E\bigg[\sum_{i=1}^n X_i^2\bigg(\sum_{j=1}^n Y_j^2\bigg)\bigg] = E\bigg[\sum_{i=1}^n X_i^2Y_i^2\bigg] + E\bigg[\sum_{i\ne j}X_i^2Y_j^2\bigg] = nE(X_1^2Y_1^2) + n(n-1)E(X_1^2)E(Y_1^2) = n(1+2\theta^2) + n(n-1),$$
which depends on $\theta$. Therefore the distribution of $(T_1,T_2)$ depends on $\theta$ and $(T_1,T_2)$ is not ancillary.

Exercise 27 (#2.59). Let $(X_1,...,X_n)$, $n > 2$, be a random sample from the exponential distribution on $(a,\infty)$ with scale parameter $\theta$. Show that
(i) $\sum_{i=1}^n(X_i - X_{(1)})$ and $X_{(1)}$ are independent for any $(a,\theta)$, where $X_{(j)}$ is the $j$th order statistic;
(ii) $Z_i = (X_{(n)} - X_{(i)})/(X_{(n)} - X_{(n-1)})$, $i = 1,...,n-2$, are independent of $\big(X_{(1)}, \sum_{i=1}^n(X_i - X_{(1)})\big)$.
Solution. (i) Let $\theta$ be arbitrarily fixed. Since the joint density of $X_1,...,X_n$ is
$$\theta^{-n}e^{na/\theta}\exp\bigg\{-\frac{1}{\theta}\sum_{i=1}^n x_i\bigg\}I_{(a,\infty)}(x_{(1)}),$$
where only $a$ is considered as an unknown parameter, we conclude that $X_{(1)}$ is sufficient for $a$. Note that $\frac{n}{\theta}e^{-n(x-a)/\theta}I_{(a,\infty)}(x)$ is the Lebesgue density


for $X_{(1)}$. For any Borel function $g$,
$$E[g(X_{(1)})] = \frac{n}{\theta}\int_a^\infty g(x)e^{-n(x-a)/\theta}dx = 0$$
for any $a$ is equivalent to
$$\int_a^\infty g(x)e^{-nx/\theta}dx = 0$$
for any $a$, which implies $g(x) = 0$ a.e. with respect to Lebesgue measure. Hence, for any fixed $\theta$, $X_{(1)}$ is sufficient and complete for $a$. The Lebesgue density of $X_i - a$ is $\theta^{-1}e^{-x/\theta}I_{(0,\infty)}(x)$, which does not depend on $a$. Therefore, for any fixed $\theta$, $\sum_{i=1}^n(X_i - X_{(1)}) = \sum_{i=1}^n[(X_i-a) - (X_{(1)}-a)]$ is ancillary. By Basu's theorem (e.g., Theorem 2.4 in Shao, 2003), $\sum_{i=1}^n(X_i - X_{(1)})$ and $X_{(1)}$ are independent for any fixed $\theta$. Since $\theta$ is arbitrary, we conclude that $\sum_{i=1}^n(X_i - X_{(1)})$ and $X_{(1)}$ are independent for any $(a,\theta)$.
(ii) From Example 5.14 in Lehmann (1983, p. 47), $\big(X_{(1)}, \sum_{i=1}^n(X_i - X_{(1)})\big)$ is sufficient and complete for $(a,\theta)$. Note that $(X_i - a)/\theta$ has Lebesgue density $e^{-x}I_{(0,\infty)}(x)$, which does not depend on $(a,\theta)$. Since
$$Z_i = \frac{X_{(n)} - X_{(i)}}{X_{(n)} - X_{(n-1)}} = \frac{\frac{X_{(n)}-a}{\theta} - \frac{X_{(i)}-a}{\theta}}{\frac{X_{(n)}-a}{\theta} - \frac{X_{(n-1)}-a}{\theta}},$$

the statistic $(Z_1,...,Z_{n-2})$ is ancillary. By Basu's Theorem, $(Z_1,...,Z_{n-2})$ is independent of $\big(X_{(1)}, \sum_{i=1}^n(X_i - X_{(1)})\big)$.

Exercise 28 (#2.61). Let $(X_1,...,X_n)$, $n > 2$, be a random sample of random variables having the uniform distribution on the interval $[a,b]$, where $-\infty < a < b < \infty$. Show that $Z_i = (X_{(i)} - X_{(1)})/(X_{(n)} - X_{(1)})$, $i = 2,...,n-1$, are independent of $(X_{(1)},X_{(n)})$ for any $a$ and $b$, where $X_{(j)}$ is the $j$th order statistic.
Solution. Note that $(X_i - a)/(b-a)$ has the uniform distribution on the interval $[0,1]$, which does not depend on $(a,b)$. Since
$$Z_i = \frac{X_{(i)} - X_{(1)}}{X_{(n)} - X_{(1)}} = \frac{\frac{X_{(i)}-a}{b-a} - \frac{X_{(1)}-a}{b-a}}{\frac{X_{(n)}-a}{b-a} - \frac{X_{(1)}-a}{b-a}},$$
the statistic $(Z_2,...,Z_{n-1})$ is ancillary. By Basu's Theorem, the result follows if $(X_{(1)},X_{(n)})$ is sufficient and complete for $(a,b)$. The joint Lebesgue density of $X_1,...,X_n$ is $(b-a)^{-n}I_{\{a\le x_{(1)}\le x_{(n)}\le b\}}$ ...

... $\theta > 0$. Find the UMVUE of $\theta$ when $n = 1,2$.
Solution. Assume $n = 1$. Then $X$ is complete and sufficient for $\theta$ and the UMVUE of $\theta$ is $h(X)$ with $E[h(X)] = \theta$ for any $\theta$. Since
$$E[h(X)] = \frac{1}{e^\theta - 1}\sum_{x=1}^\infty h(x)\frac{\theta^x}{x!},$$

we must have
$$\sum_{x=1}^\infty\frac{h(x)\theta^x}{x!} = \theta(e^\theta - 1) = \sum_{x=1}^\infty\frac{\theta^{x+1}}{x!} = \sum_{x=2}^\infty\frac{\theta^x}{(x-1)!}$$
for any $\theta$. Comparing the coefficients of $\theta^x$ leads to $h(1) = 0$ and $h(x) = x$ for $x = 2,3,...$.
Assume $n = 2$. Then $T = X_1 + X_2$ is complete and sufficient for $\theta$. The UMVUE of $\theta$ is $h(T)$ with $E[h(T)] = \theta$ for any $\theta$. Then
$$\theta(e^\theta - 1)^2 = \sum_{i=1}^\infty\sum_{j=1}^\infty\frac{h(i+j)\theta^{i+j}}{i!j!} = \sum_{t=2}^\infty h(t)\theta^t\sum_{i=1}^{t-1}\frac{1}{i!(t-i)!}.$$
On the other hand,
$$\theta(e^\theta - 1)^2 = \theta\bigg(\sum_{i=1}^\infty\frac{\theta^i}{i!}\bigg)^2 = \sum_{i=1}^\infty\sum_{j=1}^\infty\frac{\theta^{i+j+1}}{i!j!} = \sum_{t=3}^\infty\theta^t\sum_{i=1}^{t-2}\frac{1}{i!(t-1-i)!}.$$
Comparing the coefficients of $\theta^t$ leads to $h(2) = 0$ and
$$h(t) = \sum_{i=1}^{t-2}\frac{1}{i!(t-1-i)!}\bigg/\sum_{i=1}^{t-1}\frac{1}{i!(t-i)!}$$
for $t = 3,4,...$.

Exercise 10 (#3.14). Let $X_1,...,X_n$ be a random sample from the log-distribution with
$$P(X_1 = x) = -(1-p)^x/(x\log p), \qquad x = 1,2,...,$$
$p\in(0,1)$. Let $k$ be a fixed positive integer.
(i) For $n = 1,2,3$, find the UMVUE of $p^k$.
(ii) For $n = 1,2,3$, find the UMVUE of $P(X_1 = k)$.
Solution. (i) Let $\theta = 1-p$. Then $p^k = \sum_{r=0}^k\binom{k}{r}(-1)^r\theta^r$. Hence, it suffices to obtain the UMVUE of $\theta^r$. Note that the distribution of $X_1$ is from a

Chapter 3. Unbiased Estimation


power series distribution with $\gamma(x) = x^{-1}$ and $c(\theta) = -\log(1-\theta)$ (see Example 3.5 in Shao, 2003). The statistic $T = \sum_{i=1}^n X_i$ is complete and sufficient for $\theta$. By the result in Example 3.5 of Shao (2003), the UMVUE of $\theta^r$ is
$$\frac{\gamma_n(T-r)}{\gamma_n(T)}I_{\{r,r+1,...\}}(T),$$
where $\gamma_n(t)$ is the coefficient of $\theta^t$ in $\big(\sum_{y=1}^\infty\frac{\theta^y}{y}\big)^n$, i.e., $\gamma_n(t) = 0$ for $t < n$ and
$$\gamma_n(t) = \sum_{y_1+\cdots+y_n = t-n,\, y_i\ge 0}\frac{1}{(y_1+1)\cdots(y_n+1)}$$
for $t = n, n+1,...$. When $n = 1,2,3$, $\gamma_n(t)$ has a simpler form. In fact, $\gamma_1(t) = t^{-1}$, $t = 1,2,...$; $\gamma_2(t) = 0$ for $t < 2$ and
$$\gamma_2(t) = \sum_{l=0}^{t-2}\frac{1}{(l+1)(t-l-1)}, \qquad t = 2,3,...;$$
$\gamma_3(t) = 0$ for $t < 3$ and
$$\gamma_3(t) = \sum_{l_1\ge 0,\, l_2\ge 0,\, l_1+l_2\le t-3}\frac{1}{(l_1+1)(l_2+1)(t-l_1-l_2-2)}, \qquad t = 3,4,....$$
(ii) By Example 3.5 in Shao (2003), the UMVUE of $P(X_1 = k)$ is
$$\frac{\gamma_{n-1}(T-k)}{k\gamma_n(T)}I_{\{k,k+1,...\}}(T),$$
where $\gamma_n(t)$ is given in the solution of part (i).

Exercise 11 (#3.19). Let $Y_1,...,Y_n$ be a random sample from the uniform distribution on the interval $(0,\theta)$ with an unknown $\theta\in(1,\infty)$.
(i) Suppose that we only observe
$$X_i = \begin{cases}Y_i & \text{if } Y_i\ge 1\\ 1 & \text{if } Y_i < 1,\end{cases} \qquad i = 1,...,n.$$
Derive a UMVUE of $\theta$.
(ii) Suppose that we only observe
$$X_i = \begin{cases}Y_i & \text{if } Y_i\le 1\\ 1 & \text{if } Y_i > 1,\end{cases} \qquad i = 1,...,n.$$


Derive a UMVUE of the probability $P(Y_1 > 1)$.
Solution. (i) Let $m$ be the Lebesgue measure and $\delta$ be the point mass at $\{1\}$. The joint probability density of $X_1,...,X_n$ with respect to $\delta+m$ is (see, e.g., Exercise 16 in Chapter 1) $\theta^{-n}I_{(0,\theta)}(X_{(n)})$, where $X_{(n)} = \max_{1\le i\le n}X_i$. Hence $X_{(n)}$ is complete and sufficient for $\theta$ and the UMVUE of $\theta$ is $h(X_{(n)})$ satisfying $E[h(X_{(n)})] = \theta$ for all $\theta > 1$. The probability density of $X_{(n)}$ with respect to $\delta+m$ is $\theta^{-n}I_{\{1\}}(x) + n\theta^{-n}x^{n-1}I_{(1,\theta)}(x)$. Hence
$$E[h(X_{(n)})] = \frac{h(1)}{\theta^n} + \frac{n}{\theta^n}\int_1^\theta h(x)x^{n-1}dx.$$
Then
$$\theta^{n+1} = h(1) + n\int_1^\theta h(x)x^{n-1}dx$$
for all $\theta > 1$. Letting $\theta\to 1$ we obtain that $h(1) = 1$. Differentiating both sides of the previous expression with respect to $\theta$ we obtain that
$$(n+1)\theta^n = nh(\theta)\theta^{n-1}, \qquad \theta > 1.$$
Hence $h(x) = (n+1)x/n$ when $x > 1$.
(ii) The joint probability density of $X_1,...,X_n$ with respect to $\delta+m$ is $\theta^{-r}(1-\theta^{-1})^{n-r}$, where $r$ is the observed value of $R$, the number of $X_i$'s that are less than $1$. Hence, $R$ is complete and sufficient for $\theta$. Note that $R$ has the binomial distribution with size $n$ and probability $\theta^{-1}$ and $P(Y_1 > 1) = 1 - \theta^{-1}$. Hence, the UMVUE of $P(Y_1 > 1)$ is $1 - R/n$.

Exercise 12 (#3.22). Let $(X_1,...,X_n)$ be a random sample from $P\in\mathcal{P}$, where $\mathcal{P}$ contains all symmetric distributions with finite means and with Lebesgue densities on $\mathcal{R}$, and let $\mu = EX_1$.
(i) When $n = 1$, show that $X_1$ is the UMVUE of $\mu$.
(ii) When $n > 1$, show that there is no UMVUE of $\mu$.
Solution. (i) Consider the sub-family $\mathcal{P}_1 = \{N(\mu,1): \mu\in\mathcal{R}\}$. Then $X_1$ is complete for $P\in\mathcal{P}_1$. Hence, $E[h(X_1)] = 0$ for any $P\in\mathcal{P}$ implies that $E[h(X_1)] = 0$ for any $P\in\mathcal{P}_1$ and, thus, $h = 0$ a.e. Lebesgue measure. This shows that $0$ is the unique unbiased estimator of $0$ when the family $\mathcal{P}$ is considered. Since $EX_1 = \mu$, $X_1$ is the unique unbiased estimator of $\mu$ and, hence, it is the UMVUE of $\mu$.
(ii) Suppose that $T$ is a UMVUE of $\mu$. Let $\mathcal{P}_1 = \{N(\mu,1): \mu\in\mathcal{R}\}$. Since the sample mean $\bar X$ is the UMVUE when $\mathcal{P}_1$ is considered, by using the same argument as in the solution of Exercise 4(iv), we can show that $T = \bar X$ a.s. $P$ for any $P\in\mathcal{P}_1$. Since the Lebesgue measure is dominated by any $P\in\mathcal{P}_1$, we conclude that $T = \bar X$ a.e. Lebesgue measure. Let $\mathcal{P}_2$ be the family given in Exercise 5. Then $(X_{(1)}+X_{(n)})/2$ is the UMVUE when $\mathcal{P}_2$ is considered, where $X_{(j)}$ is the $j$th order statistic. Then $\bar X = (X_{(1)}+X_{(n)})/2$ a.s. $P$ for any $P\in\mathcal{P}_2$, which is impossible. Hence, there is no UMVUE of $\mu$.


Exercise 13 (#3.24). Suppose that $T$ is a UMVUE of an unknown parameter $\theta$. Show that $T^k$ is a UMVUE of $E(T^k)$, where $k$ is any positive integer for which $E(T^{2k}) < \infty$.
Solution. Let $U$ be an unbiased estimator of $0$. Since $T$ is a UMVUE of $\theta$, $E(TU) = 0$ for any $P$, which means that $TU$ is an unbiased estimator of $0$. Then $E(T^2U) = E[T(TU)] = 0$ if $ET^4 < \infty$. By Theorem 3.2 in Shao (2003), $T^2$ is a UMVUE of $ET^2$. Similarly, we can show that $T^3$ is a UMVUE of $ET^3$, ..., and $T^k$ is a UMVUE of $ET^k$.

Exercise 14 (#3.27). Let $X$ be a random variable having the Lebesgue density $[(1-\theta) + \theta/(2\sqrt x)]I_{(0,1)}(x)$, where $\theta\in[0,1]$. Show that there is no UMVUE of $\theta$ based on an observation $X$.
Solution. Consider estimators of the form $h(X) = a(X^{-1/2}+b)I_{(c,1)}(X)$ for some real numbers $a$ and $b$, and $c\in(0,1)$. Note that
$$\int_0^1 h(x)dx = a\int_c^1 x^{-1/2}dx + ab\int_c^1 dx = 2a(1-\sqrt c) + ab(1-c).$$
If $b = -2/(1+\sqrt c)$, then $\int_0^1 h(x)dx = 0$ for any $a$ and $c$. Also,
$$\int_0^1\frac{h(x)}{2\sqrt x}dx = \frac{a}{2}\int_c^1 x^{-1}dx + \frac{ab}{2}\int_c^1 x^{-1/2}dx = -\frac{a}{2}\log c + ab(1-\sqrt c).$$
If $a = [b(1-\sqrt c) - 2^{-1}\log c]^{-1}$, then $\int_0^1\frac{h(x)}{2\sqrt x}dx = 1$ for any $b$ and $c$. Let $g_c = h$ with $b = -2/(1+\sqrt c)$ and $a = [b(1-\sqrt c) - 2^{-1}\log c]^{-1}$, $c\in(0,1)$. Then
$$E[g_c(X)] = (1-\theta)\int_0^1 g_c(x)dx + \theta\int_0^1\frac{g_c(x)}{2\sqrt x}dx = \theta$$
for any $\theta$, i.e., $g_c(X)$ is unbiased for $\theta$ for any $c\in(0,1)$. The variance of $g_c(X)$ when $\theta = 0$ is
$$E[g_c(X)]^2 = a^2\int_c^1(x^{-1} + b^2 + 2bx^{-1/2})dx = a^2\big[-\log c + b^2(1-c) + 4b(1-\sqrt c)\big] = \frac{-\log c + b^2(1-c) + 4b(1-\sqrt c)}{[b(1-\sqrt c) - 2^{-1}\log c]^2},$$
where $b = -2/(1+\sqrt c)$. Letting $c\to 0$, we obtain that $b\to -2$ and, thus, $E[g_c(X)]^2\to 0$. This means that no estimator attains minimum variance within the class of estimators $g_c(X)$. Hence, there is no UMVUE of $\theta$.

Exercise 15 (#3.28). Let $X$ be a random variable with $P(X = -1) = 2p(1-p)$ and $P(X = k) = p^k(1-p)^{3-k}$, $k = 0,1,2,3$, where $p\in(0,1)$.
(i) Determine whether there is a UMVUE of $p$.


(ii) Determine whether there is a UMVUE of $p(1-p)$.
Solution. (i) Suppose that $f(X)$ is an unbiased estimator of $p$. Then
$$p = 2f(-1)p(1-p) + f(0)(1-p)^3 + f(1)p(1-p)^2 + f(2)p^2(1-p) + f(3)p^3$$
for any $p$. Letting $p\to 0$, we obtain that $f(0) = 0$. Letting $p\to 1$, we obtain that $f(3) = 1$. Then
$$1 = 2f(-1)(1-p) + f(1)(1-p)^2 + f(2)p(1-p) + p^2 = 2f(-1) + f(1) + [f(2) - 2f(-1) - 2f(1)]p + [f(1) - f(2) + 1]p^2.$$
Thus, $2f(-1)+f(1) = 1$, $f(2)-2f(-1)-2f(1) = 0$, and $f(1)-f(2)+1 = 0$. These three equations are not independent; in fact, the second equation is a consequence of the first and the last equations. Let $f(2) = c$. Then $f(1) = c-1$ and $f(-1) = 1-c/2$. Let $g_c(2) = c$, $g_c(1) = c-1$, $g_c(-1) = 1-c/2$, $g_c(0) = 0$, and $g_c(3) = 1$. Then the class of unbiased estimators of $p$ is $\{g_c(X): c\in\mathcal{R}\}$. The variance of $g_c(X)$ is
$$E[g_c(X)]^2 - p^2 = 2(1-c/2)^2p(1-p) + (c-1)^2p(1-p)^2 + c^2p^2(1-p) + p^3 - p^2.$$
Denote the right-hand side of the above equation by $h(c)$. Then
$$h'(c) = -(2-c)p(1-p) + 2(c-1)p(1-p)^2 + 2cp^2(1-p).$$
Setting $h'(c) = 0$ we obtain that
$$0 = c - 2 + 2(c-1)(1-p) + 2cp = 3c - 4 + 2p.$$
Hence, the function $h(c)$ reaches its minimum at $c = (4-2p)/3$, which depends on $p$. Therefore, there is no UMVUE of $p$.
(ii) Suppose that $f(X)$ is an unbiased estimator of $p(1-p)$. Then
$$p(1-p) = 2f(-1)p(1-p) + f(0)(1-p)^3 + f(1)p(1-p)^2 + f(2)p^2(1-p) + f(3)p^3$$
for any $p$. Letting $p\to 0$ we obtain that $f(0) = 0$. Letting $p\to 1$ we obtain that $f(3) = 0$. Then $1 = 2f(-1) + f(1)(1-p) + f(2)p$ for any $p$, which implies that $f(2) = f(1)$ and $2f(-1)+f(1) = 1$. Let $f(-1) = c$. Then $f(1) = f(2) = 1-2c$. Let $g_c(-1) = c$, $g_c(0) = g_c(3) = 0$, and $g_c(1) = g_c(2) = 1-2c$. Then the class of unbiased estimators of $p(1-p)$ is $\{g_c(X): c\in\mathcal{R}\}$. The variance of $g_c(X)$ is
$$E[g_c(X)]^2 - p^2(1-p)^2 = 2c^2p(1-p) + (1-2c)^2p(1-p)^2 + (1-2c)^2p^2(1-p) - p^2(1-p)^2 = [2c^2 + (1-2c)^2]p(1-p) - p^2(1-p)^2,$$


which reaches its minimum at $c = 1/3$ for any $p$. Thus, the UMVUE of $p(1-p)$ is $g_{1/3}(X)$.

Exercise 16 (#3.29(a)). Let $(X_1,...,X_n)$ be a random sample from the exponential distribution with density $\theta^{-1}e^{-(x-a)/\theta}I_{(a,\infty)}(x)$, where $a\le 0$ and $\theta$ is known. Obtain a UMVUE of $a$.
Note. The minimum order statistic, $X_{(1)}$, is sufficient for $a$ but not complete because $a\le 0$.
Solution. Let $U(X_{(1)})$ be an unbiased estimator of $0$. Then $E[U(X_{(1)})] = 0$ implies
$$\int_a^0 U(x)e^{-nx/\theta}dx + \int_0^\infty U(x)e^{-nx/\theta}dx = 0$$
for all $a\le 0$. Hence, $U(x) = 0$ a.e. for $x\le 0$ and $\int_0^\infty U(x)e^{-nx/\theta}dx = 0$. Consider $h(X_{(1)}) = (bX_{(1)}+c)I_{(-\infty,0]}(X_{(1)})$ with constants $b$ and $c$. Then $E[h(X_{(1)})U(X_{(1)})] = 0$ for any $a$. By Theorem 3.2 in Shao (2003), $h(X_{(1)})$ is a UMVUE of its expectation
$$E[h(X_{(1)})] = \frac{ne^{na/\theta}}{\theta}\int_a^0(bx+c)e^{-nx/\theta}dx = c\big(1-e^{na/\theta}\big) + ab + \frac{b\theta}{n}\big(1-e^{na/\theta}\big),$$
which equals $a$ when $b = 1$ and $c = -\theta/n$. Therefore, the UMVUE of $a$ is $h(X_{(1)}) = (X_{(1)} - \theta/n)I_{(-\infty,0]}(X_{(1)})$.

Exercise 17 (#3.29(b)). Let $(X_1,...,X_n)$ be a random sample from the distribution on $\mathcal{R}$ with Lebesgue density $\theta a^\theta x^{-(\theta+1)}I_{(a,\infty)}(x)$, where $a\in(0,1]$ and $\theta$ is known. Obtain a UMVUE of $a$.
Solution. The minimum order statistic $X_{(1)}$ is sufficient for $a$ and has Lebesgue density $n\theta a^{n\theta}x^{-(n\theta+1)}I_{(a,\infty)}(x)$. Let $U(X_{(1)})$ be an unbiased estimator of $0$. Then $E[U(X_{(1)})] = 0$ implies
$$\int_a^1 U(x)x^{-(n\theta+1)}dx + \int_1^\infty U(x)x^{-(n\theta+1)}dx = 0$$
for all $a\in(0,1]$. Hence, $U(x) = 0$ a.e. for $x\in(0,1]$ and $\int_1^\infty U(x)x^{-(n\theta+1)}dx = 0$. Let $h(X_{(1)}) = cI_{(1,\infty)}(X_{(1)}) + bX_{(1)}I_{(0,1]}(X_{(1)})$ with some constants $b$ and $c$. Then
$$E[h(X_{(1)})U(X_{(1)})] = cn\theta a^{n\theta}\int_1^\infty U(x)x^{-(n\theta+1)}dx = 0.$$


By Theorem 3.2 in Shao (2003), $h(X_{(1)})$ is a UMVUE of its expectation
$$E[h(X_{(1)})] = bn\theta a^{n\theta}\int_a^1 x^{-n\theta}dx + cn\theta a^{n\theta}\int_1^\infty x^{-(n\theta+1)}dx = \bigg(c - \frac{bn\theta}{n\theta-1}\bigg)a^{n\theta} + \frac{abn\theta}{n\theta-1},$$
which equals $a$ when $b = 1 - \frac{1}{n\theta}$ and $c = 1$. Hence, the UMVUE of $a$ is
$$h(X_{(1)}) = I_{(1,\infty)}(X_{(1)}) + \bigg(1 - \frac{1}{n\theta}\bigg)X_{(1)}I_{(0,1]}(X_{(1)}).$$
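Unbiasedness of this UMVUE can be spot-checked by simulation. The sketch below (function name and parameter choices are illustrative assumptions, not from the text) draws from the Pareto-type density by inverse-CDF sampling and averages $h(X_{(1)})$:

```python
import random

def umvue_mean(a, theta, n, reps=100_000, seed=0):
    # Average h(X_(1)) = I(X_(1) > 1) + (1 - 1/(n*theta)) X_(1) I(X_(1) <= 1)
    # over samples of size n from the density theta * a^theta * x^{-(theta+1)}
    # on (a, infinity); the average should be close to a for a in (0, 1].
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        # inverse-CDF draw: X = a * U^{-1/theta} with U uniform on (0, 1]
        x1 = min(a * (1.0 - rng.random()) ** (-1.0 / theta) for _ in range(n))
        total += 1.0 if x1 > 1 else (1 - 1 / (n * theta)) * x1
    return total / reps
```

For any $a\in(0,1]$ and $n\theta > 1$, the average settles near $a$, in line with the derivation above.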

Exercise 18 (#3.30). Let $(X_1,...,X_n)$ be a random sample from the population in a family $\mathcal{P}$ as described in Exercise 18 of Chapter 2. Find a UMVUE of $\theta$.
Solution. Note that $\mathcal{P} = \mathcal{P}_1\cup\mathcal{P}_2$, where $\mathcal{P}_1$ is the family of Poisson distributions with mean parameter $\theta\in(0,1)$ and $\mathcal{P}_2$ is the family of binomial distributions with size $1$ and probability $\theta$. The sample mean $\bar X$ is the UMVUE of $\theta$ when either $\mathcal{P}_1$ or $\mathcal{P}_2$ is considered as the family of distributions. Hence $\bar X$ is the UMVUE of $\theta$ when $\mathcal{P}$ is considered as the family of distributions.

Exercise 19 (#3.33). Find a function of $\theta$ for which the amount of information is independent of $\theta$, when $P_\theta$ is (i) the Poisson distribution with unknown mean $\theta > 0$; (ii) the binomial distribution with known size $r$ and unknown probability $\theta\in(0,1)$; (iii) the gamma distribution with known shape parameter $\alpha$ and unknown scale parameter $\theta > 0$.
Solution. (i) The Fisher information about $\theta$ is $I(\theta) = \frac{1}{\theta}$. Let $\eta = \eta(\theta)$. If the Fisher information about $\eta$,
$$\tilde I(\eta) = \bigg(\frac{d\theta}{d\eta}\bigg)^2 I(\theta) = \bigg(\frac{d\theta}{d\eta}\bigg)^2\frac{1}{\theta} = c,$$
does not depend on $\theta$, then $\frac{d\eta}{d\theta} = 1/\sqrt{c\theta}$. Hence, $\eta(\theta) = 2\sqrt\theta/\sqrt c$.
(ii) The Fisher information about $\theta$ is $I(\theta) = \frac{r}{\theta(1-\theta)}$. Let $\eta = \eta(\theta)$. If the Fisher information about $\eta$,
$$\tilde I(\eta) = \bigg(\frac{d\theta}{d\eta}\bigg)^2 I(\theta) = \bigg(\frac{d\theta}{d\eta}\bigg)^2\frac{r}{\theta(1-\theta)} = c,$$
does not depend on $\theta$, then $\frac{d\eta}{d\theta} = \sqrt r/\sqrt{c\theta(1-\theta)}$. Choose $c = 4r$. Then $\eta(\theta) = \arcsin(\sqrt\theta)$.
(iii) The Fisher information about $\theta$ is $I(\theta) = \frac{\alpha}{\theta^2}$. Let $\eta = \eta(\theta)$. If the Fisher information about $\eta$ is
$$\tilde I(\eta) = \bigg(\frac{d\theta}{d\eta}\bigg)^2\frac{\alpha}{\theta^2} = \alpha,$$
then $\frac{d\eta}{d\theta} = \theta^{-1}$ and, hence, $\eta(\theta) = \log\theta$.
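These are the classical variance-stabilizing transformations. For case (i) with $c = 1$, $\eta(\bar X) = 2\sqrt{\bar X}$ has approximate variance $1/n$ for every $\theta$, which the following simulation sketch illustrates (the chosen $n$ and $\theta$ values are illustrative; Python's standard library has no Poisson generator, so Knuth's method is hand-rolled):

```python
import math
import random

def poisson(rng, lam):
    # Knuth's multiplicative method; adequate for moderate lam.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def var_2sqrt_mean(theta, n, reps=20_000, seed=0):
    # Sample variance of 2*sqrt(Xbar) over Poisson(theta) samples of size n;
    # by the delta method this is roughly 1/n regardless of theta.
    rng = random.Random(seed)
    vals = [2 * math.sqrt(sum(poisson(rng, theta) for _ in range(n)) / n)
            for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1)
```

Multiplying the returned variance by $n$ gives roughly $1$ for widely different values of $\theta$, while $\mathrm{Var}(\bar X) = \theta/n$ itself changes with $\theta$.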

Exercise 20 (#3.34). Let $(X_1,...,X_n)$ be a random sample from a distribution on $\mathcal{R}$ with the Lebesgue density $\frac{1}{\sigma}f\big(\frac{x-\mu}{\sigma}\big)$, where $f(x) > 0$ is a known Lebesgue density and $f'(x)$ exists for all $x\in\mathcal{R}$, $\mu\in\mathcal{R}$, and $\sigma > 0$. Let $\theta = (\mu,\sigma)$. Show that the Fisher information about $\theta$ contained in $X_1,...,X_n$ is
$$I(\theta) = \frac{n}{\sigma^2}\begin{pmatrix}\displaystyle\int\frac{[f'(x)]^2}{f(x)}dx & \displaystyle\int\frac{f'(x)[xf'(x)+f(x)]}{f(x)}dx\\[8pt] \displaystyle\int\frac{f'(x)[xf'(x)+f(x)]}{f(x)}dx & \displaystyle\int\frac{[xf'(x)+f(x)]^2}{f(x)}dx\end{pmatrix},$$
assuming that all integrals are finite.
Solution. Let $g(\mu,\sigma,x) = \log\big[\frac{1}{\sigma}f\big(\frac{x-\mu}{\sigma}\big)\big]$. Then
$$\frac{\partial}{\partial\mu}g(\mu,\sigma,x) = -\frac{f'\big(\frac{x-\mu}{\sigma}\big)}{\sigma f\big(\frac{x-\mu}{\sigma}\big)}$$
and
$$\frac{\partial}{\partial\sigma}g(\mu,\sigma,x) = -\frac{1}{\sigma} - \frac{(x-\mu)f'\big(\frac{x-\mu}{\sigma}\big)}{\sigma^2 f\big(\frac{x-\mu}{\sigma}\big)}.$$
Then
$$E\bigg[\frac{\partial}{\partial\mu}g(\mu,\sigma,X_1)\bigg]^2 = \frac{1}{\sigma^2}\int\bigg[\frac{f'\big(\frac{x-\mu}{\sigma}\big)}{f\big(\frac{x-\mu}{\sigma}\big)}\bigg]^2 f\bigg(\frac{x-\mu}{\sigma}\bigg)\frac{dx}{\sigma} = \frac{1}{\sigma^2}\int\frac{[f'(x)]^2}{f(x)}dx,$$
$$E\bigg[\frac{\partial}{\partial\sigma}g(\mu,\sigma,X_1)\bigg]^2 = \frac{1}{\sigma^2}\int\bigg[\frac{x-\mu}{\sigma}\cdot\frac{f'\big(\frac{x-\mu}{\sigma}\big)}{f\big(\frac{x-\mu}{\sigma}\big)} + 1\bigg]^2 f\bigg(\frac{x-\mu}{\sigma}\bigg)\frac{dx}{\sigma} = \frac{1}{\sigma^2}\int\bigg[x\frac{f'(x)}{f(x)} + 1\bigg]^2 f(x)dx = \frac{1}{\sigma^2}\int\frac{[xf'(x)+f(x)]^2}{f(x)}dx,$$
and
$$E\bigg[\frac{\partial}{\partial\mu}g(\mu,\sigma,X_1)\frac{\partial}{\partial\sigma}g(\mu,\sigma,X_1)\bigg] = \frac{1}{\sigma^2}\int\frac{f'\big(\frac{x-\mu}{\sigma}\big)}{f\big(\frac{x-\mu}{\sigma}\big)}\bigg[\frac{x-\mu}{\sigma}\cdot\frac{f'\big(\frac{x-\mu}{\sigma}\big)}{f\big(\frac{x-\mu}{\sigma}\big)} + 1\bigg]f\bigg(\frac{x-\mu}{\sigma}\bigg)\frac{dx}{\sigma} = \frac{1}{\sigma^2}\int\frac{f'(x)[xf'(x)+f(x)]}{f(x)}dx.$$

The result follows since
$$I(\theta) = nE\bigg\{\frac{\partial}{\partial\theta}\log\bigg[\frac{1}{\sigma}f\bigg(\frac{X_1-\mu}{\sigma}\bigg)\bigg]\bigg[\frac{\partial}{\partial\theta}\log\frac{1}{\sigma}f\bigg(\frac{X_1-\mu}{\sigma}\bigg)\bigg]^\tau\bigg\}.$$

Exercise 21 (#3.36). Let $X$ be a sample having a probability density $f_\theta(x)$ with respect to $\nu$, where $\theta$ is a $k$-vector of unknown parameters. Let $T(X)$ be a statistic having a probability density $g_\theta(t)$ with respect to $\lambda$. Suppose that $\frac{\partial}{\partial\theta}f_\theta(x)$ and $\frac{\partial}{\partial\theta}g_\theta(t)$ exist for any $x$ and $t$ and that, on any set $\{\|\theta\|\le c\}$, there are functions $u_c(x)$ and $v_c(t)$ such that $\big|\frac{\partial}{\partial\theta}f_\theta(x)\big|\le u_c(x)$, $\big|\frac{\partial}{\partial\theta}g_\theta(t)\big|\le v_c(t)$, $\int u_c(x)d\nu < \infty$, and $\int v_c(t)d\lambda < \infty$. Show that
(i) $I_X(\theta) - I_T(\theta)$ is nonnegative definite, where $I_X(\theta)$ is the Fisher information about $\theta$ contained in $X$ and $I_T(\theta)$ is the Fisher information about $\theta$ contained in $T$;
(ii) $I_X(\theta) = I_T(\theta)$ if $T$ is sufficient for $\theta$.
Solution. (i) For any event $T^{-1}(B)$,
$$\int_{T^{-1}(B)}\frac{\partial}{\partial\theta}\log f_\theta(X)dP = \int_{T^{-1}(B)}\frac{\partial}{\partial\theta}f_\theta(x)d\nu = \frac{\partial}{\partial\theta}\int_{T^{-1}(B)}f_\theta(x)d\nu = \frac{\partial}{\partial\theta}P\big(T^{-1}(B)\big) = \frac{\partial}{\partial\theta}\int_B g_\theta(t)d\lambda = \int_B\frac{\partial}{\partial\theta}g_\theta(t)d\lambda = \int_B\bigg[\frac{\partial}{\partial\theta}\log g_\theta(t)\bigg]g_\theta(t)d\lambda = \int_{T^{-1}(B)}\frac{\partial}{\partial\theta}\log g_\theta(T)dP,$$
where the exchange of differentiation and integration is justified by the dominated convergence theorem under the given conditions. This shows

Chapter 3. Unbiased Estimation



that E

115

   ∂ ∂ log fθ (X)T = log gθ (T ) a.s. ∂θ ∂θ 

 τ ∂ ∂ log fθ (X) log gθ (T ) E  ∂θ   ∂θ  τ   ∂ ∂ =E E log fθ (X)T log gθ (T ) ∂θ ∂θ   τ ∂ ∂ =E log gθ (T ) log gθ (T ) ∂θ ∂θ = IT (θ).

Then

Then the nonnegative definite matrix  τ  ∂ ∂ ∂ ∂ log fθ (X) − log gθ (T ) log fθ (X) − log gθ (T ) E ∂θ ∂θ ∂θ ∂θ is equal to IX (θ) + IT (θ) − 2IT (θ) = IX (θ) − IT (θ). Hence IX (θ) − IT (θ) is nonnegative definite. (ii) If T is sufficient, then by the factorization theorem, fθ (x) = g˜θ (t)h(x). ∂ ∂ log fθ (x) = ∂θ log g˜θ (t), the result in part (i) of the solution implies Since ∂θ that ∂ ∂ log g˜θ (T ) = log gθ (T ) a.s. ∂θ ∂θ Therefore, IX (θ) = IT (θ). Exercise 22 (#3.37). Let (X1 , ..., Xn ) be a random sample from the uniform distribution θ) with θ > 0.  (0,  on the interval d d (i) Show that dθ fθ (x)dx, where fθ is the density of xfθ (x)dx = x dθ X(n) , the largest order statistic. (ii) Show that the Fisher information inequality does not hold for the UMVUE of θ. Solution. (i) Note that fθ (x) = nθ−n xn−1 I(0,θ) (x). Then 

d n2 x fθ (x)dx = − n+1 dθ θ



θ

0

xn dx = −

n2 . n+1

On the other hand, d dθ



d xfθ (x)dx = dθ



n θn





θ n

x dx 0

=

d dθ

nθ n+1

 =

n . n+1

(ii) The UMVUE of θ is (n + 1)X(n) /n with variance θ2 /[n(n + 2)]. On the other hand, the Fisher information is I(θ) = nθ−2 . Hence [I(θ)]−1 = θ2 /n > θ2 /[n(n + 2)].

116

Chapter 3. Unbiased Estimation

Exercise 23 (#3.39). Let X be an observation with Lebesgue density (2θ)−1 e−|x|/θ with unknown θ > 0. Find the UMVUE’s of the parameters θ, θr (r > 1), and (1 + θ)−1 and, in each case, determine whether the variance of the UMVUE attains the Cram´er-Rao lower bound. Solution. For θ, Cram´er-Rao lower bound is θ2 and |X| is the UMVUE of θ with Var(|X|) = θ2 , which attains the lower bound. For θr , Cram´er-Rao lower bound is r2 θ2r . Since E[|X|r /Γ(r + 1)] = θr , |X|r /Γ(r + 1) is the UMVUE of θr with  

 Γ(2r + 1) |X|r = θ2r Var − 1 > r2 θ2r Γ(r + 1) Γ(r + 1)Γ(r + 1) when r > 1.   For (1+θ)−1 . Cram´er-Rao lower bound is θ2 /(1+θ)4 . Since E e−|X| = (1 + θ)−1 , e−|X| is the UMVUE of (1 + θ)−1 with   1 1 θ2 − > . Var e−|X| = 1 + 2θ (1 + θ)2 (1 + θ)4 Exercise 24 (#3.42). Let (X1 , ..., Xn ) be a random sample from N (µ, σ 2 ) with an unknown µ ∈ R and a known σ 2 > 0. Find the UMVUE of etµ with a fixed t = 0 and show that the variance of the UMVUE is larger than the Cram´er-Rao lower bound but the ratio of the variance of the UMVUE over the Cram´er-Rao lower bound converges to 1 as n → ∞. ¯ is complete and sufficient for µ. Since Solution. The sample mean X   2 2 ¯ E etX = eµt+σ t /(2n) , 2 2

¯

the UMVUE of etµ is T (X) = e−σ t /(2n)+tX . The Fisher information I(µ) = n/σ 2 . Then the Cram´er-Rao lower  2 d tµ bound is dµ e /I(µ) = σ 2 t2 e2tµ /n. On the other hand,  22  σ 2 t2 e2tµ ¯ Ee2tX − e2tµ = eσ t /n − 1 e2tµ > , n the Cram´er-Rao lower bound. The ratio of the variance of the UMVUE over 2 2 the Cram´er-Rao lower bound is (eσ t /n − 1)/(σ 2 t2 /n), which converges to 1 as n → ∞, since limx→0 (ex − 1)/x = 1. Var(T ) = e−σ

2 2

t /n

Exercise 25 (#3.46, #3.47). Let X1 , X2 , ... be independent and identically distributed random variables, m be a positive integer, and h(x1 ,..., xm ) be a function on Rm such that E[h(X1 , ..., Xm )]2 < ∞ and h is symmetric in its m arguments. A U-statistic with kernel h (of order m) is defined as

−1 n Un = h(Xi1 , ..., Xim ), m 1≤i1 0 is unknown. Let the prior of θ be the log-normal distribution with parameter (µ0 , σ02 ), where µ0 ∈ R and σ0 > 0 are known constants. (i) Find the posterior density of log θ. (ii) Find the rth posterior moment of θ. (iii) Find a value that maximizes the posterior density of θ. Solution. (i) Let X(n) be the largest order statistic. The product of the density of X and the prior density is proportional to   (log θ − µ0 )2 1 I(X(n) ,∞) (θ). exp − θn+1 2σ02 Then the posterior density of ϑ = log θ given X is   (ϑ − µ0 + nσ02 )2 1 √ I(log X(n) ,∞) (ϑ), exp − 2σ02 2πσ0 CX where

CX = Φ

µ0 − nσ02 − log X(n) σ0



and Φ is the cumulative distribution function of the standard normal distribution.   (ii) Note that E(θr |X) = E er log θ |X and log θ given X has a truncated normal distribution as specified in part (i) of the solution. Therefore, 2

−1 r[2µ0 −(2n−r)σ0 ]/2 e Φ E(θr |X) = CX

µ0 − (n − r)σ02 − log X(n) σ0

 .

(iii) From part (i) of the solution, the posterior density of θ given X is   1 (log θ − µ0 + nσ02 )2 √ I(X(n) ,∞) (θ). exp − 2σ02 2πσ0 CX θ Without the indicator function I(X(n) ,∞)(θ), the above function has a unique 2

maximum at eµ0 −(n+1)σ0 . Therefore, the posterior of θ given X is maxi2 mized at max{eµ0 −(n+1)σ0 , X(n) }. ¯ be the sample mean of a random sample of Exercise 4 (#4.6). Let X 2 size n from N (θ, σ ) with a known σ > 0 and an unknown θ ∈ R. Let π(θ)

144

Chapter 4. Estimation in Parametric Models

be a prior density with respect to a σ-finite measure ν on R. ¯ = x, is of the form (i) Show that the posterior mean of θ, given X σ 2 d log(p(x)) , n dx ¯ unconditional on θ. where p(x) is the marginal density of X, ¯ = x) as a function of the (ii) Express the posterior variance of θ (given X first two derivatives of log p(x). (iii) Find explicit expressions for p(x) and δ(x) in (i) when the prior is N (µ0 , σ02 ) with probability 1 −  and a point mass at µ1 with probability , where µ0 , µ1 , and σ02 are known constants. ¯ has distribution N (θ, σ 2 /n). The product of Solution. (i) Note that X ¯ the density of X and π(θ) is δ(x) = x +

√ n −n(x−θ)2 /(2σ2 ) √ π(θ). e 2πσ Hence,

 p(x) =

and p (x) =

n σ2



√ n −n(x−θ)2 /(2σ2 ) √ π(θ)dν e 2πσ

√ 2 2 n √ (θ − x)e−n(x−θ) /(2σ ) π(θ)dν. 2πσ

Then, the posterior mean is  √ 2 2 1 n √ δ(x) = θe−n(x−θ) /(2σ ) π(θ)dν p(x) 2πσ  √ 2 2 1 n √ = x+ (θ − x)e−n(x−θ) /(2σ ) π(θ)dν p(x) 2πσ σ 2 p (x) = x+ n p(x) σ 2 d log(p(x)) = x+ . n dx (ii) From the result in part (i) of the solution, n2 p (x) = 4 σ 



√ 2 2 n n √ (θ − x)2 e−n(x−θ) /(2σ ) π(θ)dν − 2 p(x). σ 2πσ

Hence, ¯ = x] = E[(θ − x)2 |X

σ 4 p (x) σ 2 + n2 p(x) n

Chapter 4. Estimation in Parametric Models

145

and, therefore, ( ) ¯ = x] − E(θ − x|X ¯ = x) 2 ¯ = x) = E[(θ − x)2 |X Var(θ|X 2  2  σ 4 p (x) σ 2 σ p (x) = 2 + − n p(x) n n p(x) 4 2 2 σ d log p(x) σ = 2 + . n dx2 n ¯ is (iii) If the prior is N (µ0 , σ02 ), then the joint distribution of θ and X ¯ normal and, hence, the marginal distribution of X is normal. The mean ¯ conditional on θ is θ. Hence the marginal mean of X ¯ is µ0 . The of X 2 ¯ ¯ variance of X conditional on θ is σ /n. Hence the marginal variance of X 2 2 2 2 is σ0 + σ /n. Thus, p(x) is the density of N (µ0 , σ0 + σ /n) if the prior is N (µ0 , σ02 ). If the prior is a point mass at µ1 , then √ n −n(x−µ1 )2 /(2σ2 ) p(x) = √ , e 2πσ which is the density of N (µ1 , σ 2 /n). Therefore, p(x) is the density of the mixture distribution (1 − )N (µ0 , σ02 + σ 2 /n) + N (µ1 , σ 2 /n), i.e.,     x − µ1 x − µ0 + φ  p(x) = (1 − )φ  2 , σ 2 /n σ0 + σ 2 /n √ / 2π. Then     (µ x − µ x − µ − x) − x) (1 − )(µ 0 0 1 1 + φ  2 φ  p (x) = σ02 + σ 2 /n σ 2 /n σ 2 /n σ0 + σ 2 /n 2

where φ(x) = e−x

/2

and δ(x) can be obtained using the formula in (i). Exercise 5 (#4.8). Let X = (X1 , ..., Xn ) be a random sample from P with discrete probability density fθ,j , where θ ∈ (0, 1), j = 1, 2, fθ,1 is the Poisson distribution with mean θ, and fθ,2 is the binomial distribution with size 1 and probability θ. Consider the estimation of θ under the squared error loss. Suppose that the prior of θ is the uniform distribution on (0, 1), the prior of j is P (j = 1) = P (j = 2) = 12 , and the joint prior of (θ, j) is the product probability of the two marginal priors. Show that the Bayes action is H(x)B(t + 1) + G(t + 1) , δ(x) = H(x)B(t) + G(t) where x = (x1 , ..., xn ) is the vector of observations, t = x1 + · · · + xn , 1 1 B(t) = 0 θt (1 − θ)n−t dθ, G(t) = 0 θt e−nθ dθ, and H(x) is a function of x

146

Chapter 4. Estimation in Parametric Models

with range {0, 1}. Note. Under the squared error loss, the Bayes action in an estimation problem is the posterior mean. Solution. The marginal density is   D(x) 1 t C(x) 1 −nθ t e θ dθ + θ (1 − θ)n−t dθ m(x) = 2 2 0 0 C(x)G(t) + D(x)B(t) = , 2 where C(x) = (x1 ! · · · xn !)−1 and D(x) = 1 if all components of x are 0 or 1 and is 0 otherwise. Then the Bayes action is 1 1 C(x) 0 e−nθ θt+1 dθ + D(x) 0 θt+1 (1 − θ)n−t dθ δ(x) = 2m(x) H(x)B(t + 1) + G(t + 1) = , H(x)B(t) + G(t) where H(x) = D(x)/C(x) takes value 0 or 1. Exercise 6 (#4.10). Let X be a sample from Pθ , θ ∈ Θ ⊂ R. Consider the estimation of θ under the loss L(|θ − a|), where L is an increasing function on [0, ∞). Let π(θ|x) be the posterior density (with respect to Lebesgue measure) of θ given X = x. Suppose that π(θ|x) is symmetric about δ(x) ∈ Θ and that π(θ|x) is nondecreasing for θ ≤ δ(x) and nonincreasing for θ ≥ δ(x). Show that δ(x) is a Bayes action, assuming that all integrals involved are finite. Solution. Without loss of generality, assume that δ(x) = 0. Then π(θ|x) is symmetric about 0. Hence, the posterior expected loss for any action a is  ρ(a) = L(|θ − a|)π(θ|x)dθ  = L(| − θ − a|)π(θ|x)dθ = ρ(−a). For any a ≥ 0 and θ, define H(θ, a) = [L(|θ + a|) − L(|θ − a|)][π(θ + a|x) − π(θ − a|x)]. If θ + a ≥ 0 and θ − a ≥ 0, then L(|θ + a|) ≥ L(|θ − a|) and π(θ + a|x) ≤ π(θ − a|x); if θ + a ≤ 0 and θ − a ≤ 0, then L(|θ + a|) ≤ L(|θ − a|) and π(θ + a|x) ≥ π(θ − a|x); if θ − a ≤ 0 ≤ θ + a, then π(θ + a|x) ≤ π(θ − a|x) and L(|θ + a|) ≥ L(|θ − a|) when θ ≥ 0 and π(θ + a|x) ≥ π(θ − a|x) and

Chapter 4. Estimation in Parametric Models

147

L(|θ + a|) ≤ L(|θ − a|) when θ ≤ 0. This shows that H(θ, a) ≤ 0 for any θ and a ≥ 0. Then, for any a ≥ 0,  0 ≥ H(θ, a)dθ   = L(|θ + a|)π(θ + a|x)dθ + L(|θ − a|)π(θ − a|x)dθ   − L(|θ + a|)π(θ − a|x)dθ − L(|θ − a|)π(θ + a|x)dθ   = 2 L(|θ|)π(θ|x)dθ − L(|θ + 2a|)π(θ|x)dθ  − L(|θ − 2a|)π(θ|x)dθ = 2ρ(0) − ρ(2a) − ρ(−2a) = 2ρ(0) − 2ρ(2a). This means that ρ(0) ≤ ρ(2a) = ρ(−2a) for any a ≥ 0, which proves that 0 is a Bayes action. Exercise 7 (#4.11). Let X be a sample of size 1 from the geometric distribution with mean p−1 , where p ∈ (0, 1]. Consider the estimation of p with the loss function L(p, a) = (p − a)2 /p. (i) Show that δ is a Bayes action with a prior Π if and only if δ(x) =  1 − (1 − p)x dΠ(p)/ (1 − p)x−1 dΠ(p), x = 1, 2, .... (ii) Let δ0 be a rule such that δ0 (1) = 1/2 and δ0 (x) = 0 for all x > 1. Show that δ0 is a limit of Bayes actions. (iii) Let δ0 be a rule such that δ0 (x) = 0 for all x > 1 and δ0 (1) is arbitrary. Show that δ0 is a generalized Bayes action. Note. In estimating g(θ) under the family of densities {fθ : θ ∈ Θ} and the loss function w(θ)[g(θ) − a]2 , where Θ ⊂ R, w(θ) ≥ 0 and Θ w(θ)[g(θ)]2 dΠ < ∞, the Bayes action is  w(θ)g(θ)fθ (x)dΠ δ(x) = Θ . w(θ)fθ (x)dΠ Θ Solution. (i) The discrete probability density of X is (1 − p)x−1 p for x = 1, 2, .... Hence, for estimating p with loss (p − a)2 /p, the Bayes action when X = x is  1 −1 1 p p(1 − p)x−1 pdΠ (1 − p)x dΠ 0 . δ(x) =  1 = 1 −  10 p−1 (1 − p)x−1 pdΠ (1 − p)x−1 dΠ 0 0 (ii) Consider the prior with Lebesgue density

Γ(2α) α−1 (1 − p)α−1 I(0,1) (p). [Γ(α)]2 p

148

Chapter 4. Estimation in Parametric Models

The Bayes action is 1 δ(x) = 1 − 01 = 1−

(1 − p)x+α−1 pα−1 dp

(1 − p)x+α−2 pα−1 dp 0 Γ(x+2α−1) Γ(x+α−1)Γ(α) Γ(x+2α) Γ(x+α)Γ(α)

x+α−1 x + 2α − 1 α . = x + 2α − 1 = 1−

Since lim δ(x) =

α→0

1 I{1} (x) = δ0 (x), 2

δ0 (x) is a limit of Bayes actions. (iii) Consider the improper prior density posterior risk for action a is  0

1

dΠ dp

= [p2 (1 − p)]−1 . Then the

(p − a)2 (1 − p)x−2 p−2 dp.

When x = 1, the above integral diverges to infinity and, therefore, any a is a Bayes action. When x > 1, the above integral converges if and only if a = 0. Hence δ0 is a Bayes action. Exercise 8 (#4.13). Let X be a sample from Pθ having probability density fθ (x) = h(x) exp{θτ x−ζ(θ)} with respect to ν on Rp , where θ ∈ Rp . Let the prior be the Lebesgue measure on Rp . Show that the generalized Bayes action under the loss L(θ, a) = E(X) − a2 is δ(x) = x when X = x with fθ (x)dθ < ∞.  Solution. Let m(x) = fθ (x)dθ and µ(θ) = E(X). Similar to the case of univariate θ, the generalized Bayes action under loss µ(θ) − a2 is δ(x) =  µ(θ)fθ (x)dθ/m(x). Let Ac = (−∞, c1 ]×· · ·×(−∞, cp ] for c = (c1 , ..., cp ) ∈   θ (x) dθ, c ∈ Rp . Since m(x) = fc (x)dc < ∞, Rp . Note that fc (x) = Ac ∂f∂θ  θ (x) dθ = 0. Since limci →∞,i=1,...,p fc (x) = 0. Hence, ∂f∂θ   ∂fθ (x) ∂ζ(θ) fθ (x) = [x − µ(θ)] fθ (x), = x− ∂θ ∂θ   we obtain that x fθ (x)dθ = µ(θ)fθ (x)dθ. This proves that δ(x) = x. Exercise 9 (#4.14). Let (X1 , ..., Xn ) be a random sample of random 2 variables with the Lebesgue density 2/πe−(x−θ) /2 I(θ,∞) (x), where θ ∈ R

Chapter 4. Estimation in Parametric Models

149

is unknown. Find the generalized Bayes action for estimating θ under the squared error loss, when the (improper) prior of θ is the Lebesgue measure on R. ¯ be the sample mean and X(1) be the smallest order Solution. Let X statistic. Then the product of the density of X1 , ..., Xn and the prior density is " 

n/2 n 2 1 2 exp − (Xi − θ) I(θ,∞) (X(1) ) π 2 i=1 and, thus, the generalized Bayes action is  X(1)  X(1) −n(X−θ) 2 2 ¯ ¯ /2 /2 ¯ −n(X−θ) θe dθ (θ − X)e dθ 0 0 ¯ = X + . δ = X  X(1) −n(X−θ) 2 /2 2 /2 ¯ ¯ (1) −n(X−θ) e dθ e dθ 0 0 Let Φ be the cumulative distribution function of the standard normal distribution. Then √ √  X(1) ¯ − Φ(−√nX)] ¯ 2π[Φ( n(X(1) − X)) 2 ¯ −n(X−θ) /2 √ e dθ = n 0 and  X(1) 0

2 ¯ /2 ¯ −n(X−θ) (θ − X)e dθ =



√ ¯ √ ¯ 2π[Φ (− nX) − Φ ( n(X(1) − X))] . n

Hence, the generalized Bayes action is √ ¯ √ ¯ Φ (− nX) − Φ ( n(X(1) − X)) ¯ √ √ δ=X+√ ¯ − Φ(− nX)] ¯ . n[Φ( n(X(1) − X)) Exercise 10 (#4.15). Let (X1 , ..., Xn ) be a random sample from N (µ, σ 2 ) and π(µ, σ 2 ) = σ −2 I(0,∞) (σ 2 ) be an improper prior for (µ, σ 2 ) with respect to the Lebesgue measure on R2 . (i) Show that the posterior density of (µ, σ 2 ) given x = (x1 , ..., xn ) is π(µ, σ 2 |x) = π1 (µ|σ 2 , x)π2 (σ 2 |x), where π1 (µ|σ 2 , x) is the density of the normal distribution N (¯ x, σ 2 /n), x ¯ is the sample mean of xi ’s, π2 (σ 2 |x) is −1 the density of ω , and ω has the gamma distribution with shape paramen ¯)2 /2]−1 . ter (n − 1)/2 and scale parameter [ i=1 (xi − x  x , where (ii) Show that the marginal posterior density of µ given x is f µ−¯ τ  n τ 2 = i=1 (xi − x ¯)2 /[n(n − 1)] and f is the density of the t-distribution tn−1 . (iii) Obtain the generalized Bayes action for estimating µ/σ under the squared error loss. Solution. (i) The posterior density π(µ, σ 2 |x) is proportional to "    n (µ − x ¯)2 1 −(n+2) 2 I(0,∞) (σ 2 ), σ exp − 2 (xi − x ¯) exp − 2σ i=1 2σ 2 /n

150

Chapter 4. Estimation in Parametric Models

which is proportional to π1 (µ|σ 2 , x)π2 (σ 2 |x). (ii) The marginal posterior density of µ is 



π(µ|x) = 0



π(µ, σ 2 |x)dσ 2 

n n(¯ x − µ)2 1 ∝ σ exp − − (xi − x ¯)2 2 2 2σ 2σ 0 i=1 &

2 '−n/2 µ−x ¯ 1 ∝ 1+ . n−1 τ ∞

−(n+2)

" dσ 2

x Hence, π(µ|x) is f ( µ−¯ τ ) with f being the density of the t-distribution tn−1 . (iii) The generalized Bayes action is  µ π1 (µ|σ 2 , x)π2 (σ 2 |x)dµdσ 2 δ= σ  =x ¯ σ −1 π2 (σ 2 |x)dσ 2

=

Γ(n/2)¯ x n . Γ((n − 1)/2) ¯)2 /2 i=1 (xi − x

Exercise 11 (#4.19). In (ii)-(iv) of Exercise 1, assume that the parameters in priors are unknown. Using the method of moments, find empirical Bayes actions under the squared error loss. n ¯ (the sample mean) and µ Solution. Define µ ˆ1 = X ˆ2 = n−1 i=1 Xi2 . (i) In Exercise 1(ii), EX1 = E[E(X1 |p)] = E(kp) =

kα α+β

and EX12 = E[E(X12 |p)] = E[kp(1 − p) + k 2 p2 ] =

(k 2 − k)α(α + 1) kα + . α+β (α + β)(α + β + 1)

Setting µ ˆ1 = EX1 and µ ˆ2 = EX12 , we obtain that α ˆ=

µ ˆ2 − µ ˆ1 − µ ˆ1 (k − 1) µ ˆ1 (k − 1) + k(1 − µ ˆ2 /ˆ µ1 )

and kα ˆ βˆ = −α ˆ. µ ˆ1

Chapter 4. Estimation in Parametric Models

151

ˆ ¯ + α)/(kn Then the empirical Bayes action is (nX ˆ +α ˆ + β). (ii) In Exercise 1(iii), EX1 = E[E(X1 |θ)] = E(θ/2) =

ab 2(b − 1)

and EX12 = E[E(X12 |θ)] = E[(θ/2)2 + θ2 /12] = E(θ2 /3) =

a2 b . 3(b − 2)

ˆ2 = EX12 , we obtain that Setting µ ˆ1 = EX1 and µ ˆb = 1 +

2

3ˆ µ2 /(3ˆ µ2 − 4ˆ µ21 )

and a ˆ = 2ˆ µ1 (ˆb − 1)/ˆb. Therefore, the empirical Bayes action is (n + ˆb) max{X(n) , a ˆ}/(n + ˆb − 1), where X(n) is the largest order statistic. (iii) In Exercise 1(iv), EX1 = E[E(X1 |θ)] = E(θ) =

1 γ(α − 1)

and EX12 = E[E(X12 |θ)] = E(2θ2 ) =

γ 2 (α

2 . − 1)(α − 2)

Setting µ ˆ1 = EX1 and µ ˆ2 = EX12 , we obtain that α ˆ=

2ˆ µ2 − 2ˆ µ21 µ ˆ2 − 2ˆ µ21

γˆ =

1 . (ˆ α − 1)ˆ µ1

and

¯ + 1)/[ˆ The empirical Bayes action is (ˆ γ nX γ (n + α ˆ − 1)]. Exercise 12 (#4.20). Let X = (X1 , ..., Xn ) be a random sample from N (µ, σ 2 ) with an unknown µ ∈ R and a known σ 2 > 0. Consider the prior Πµ|ξ = N (µ0 , σ02 ), ξ = (µ0 , σ02 ), and the second-stage improper joint prior for ξ be the product of N (a, v 2 ) and the Lebesgue measure on (0, ∞), where a and v are known. Under the squared error loss, obtain a formula for the generalized Bayes action in terms of a one-dimensional integral.

152

Chapter 4. Estimation in Parametric Models

Solution. Let x ¯ be the observed sample mean. From Exercise 2, the Bayes action when ξ is known is δ(x, ξ) =

σ2 nσ02 µ + x ¯. 0 + σ2 nσ02 + σ 2

nσ02

By formula (4.8) in Shao (2003), the Bayes action is  δ(x, ξ)f (ξ|x)dξ, where f (ξ|x) is the conditional density of ξ given X = x. The joint density of (X, µ, ξ) is  "

n n 1 1 1 (µ − µ0 )2 (µ0 − µ)2 2 √ exp − 2 . (xi − µ) − − 2πσ0 v 2σ i=1 2σ02 2v 2 2πσ Integrating out µ in the joint density of (X, µ, ξ) and using the identity 3     2  ∞ 2π at2 − 2bt + c b c dt = exp − exp − 2 a 2a 2 −∞ for any a > 0 and real b and c, we obtain the joint density of (X, ξ) as ⎧ 2 ⎫  µ0 ⎪ ⎪ n¯ x ⎬ ⎨ + 2 2 2 2 σ y (µ0 − a) µ0 d σ0   2 , exp − 2 − − + 2 ⎪ 2v 2 2σ0 ⎭ ⎩ 2σ σ0 σn2 + σ12 2 σn2 + σ12 ⎪ 0

0

where y is the observed value of This implies that E(µ0 |σ02 , x) =

n i=1

Xi2 and d = (2π)−(n+1)/2 σ −n v −1 .

¯ aσ02 (nσ02 + σ 2 ) − nσ02 v 2 x . σ02 (nσ02 + σ 2 ) + nσ02 v 2

Integrating out µ0 in the joint density of (X, ξ) and using the previous identity again yields the joint density of (X, σ02 ) as 7 8 1 −y/(2σ 2 ) 8 de σ04 91 + 1 − f (x, σ02 ) = 2 n 1 2 σ02 σ0 n2 + 12 v σ2 + σ2 σ

⎧ ⎨ a × exp − ⎩ v2

n¯ x σ02 σ 2 n σ2

+

1 σ02

0

σ0

2 - &  2

1 1 + 2− 2 v σ0

1 σ04 n σ2

+

' 1 σ02



n2 x ¯2 2σ 4 n σ2

+

1 σ02

⎫ ⎬ ⎭

Chapter 4. Estimation in Parametric Models

Then the generalized Bayes action is  ∞ * σ2 E(µ0 |σ02 ,x)  δ(x, ξ)f (ξ|x)dξ =

0

nσ02 +σ 2

∞ 0

153

+

+

nσ02 x ¯ nσ02 +σ 2

f (σ02 , x)dσ02

f (σ02 , x)dσ02

.

Exercise 13 (#4.21). Let X = (X1 , ..., Xn ) be a random sample from the uniform distribution on (0, θ), where θ > 0 is unknown. Let π(θ) = bab θ−(b+1) I(a,∞) (θ) be a prior density with respect to the Lebesgue measure, where b > 1 is known but a > 0 is an unknown hyperparameter. Consider the estimation of θ under the squared error loss. (i) Show that the empirical Bayes method using the method of moments b+n produces the empirical Bayes action δ(ˆ a), where δ(a) = b+n−1 max{a, X(n) },  n 2(b−1) a ˆ = bn i=1 Xi , and X(n) is the largest order statistic. (ii) Let h(a) = a−1 I(0,∞) (a) be an improper Lebesgue prior density for a. Obtain explicitly the generalized Bayes action. Solution. (i) Note that EX1 = E[E(X1 |θ)] = E(θ/2) = ab/[2(b − 1)]. n Then a ˆ = 2(b−1) i=1 Xi is the moment estimator of a. From Exercise 2, bn the empirical Bayes action is δ(ˆ a). (ii) The joint density for (X, θ, a) is bab−1 θ−(n+b+1) I(X(n) ,∞) (θ)I(0,θ) (a). Hence, the joint density for (X, θ) is  θ bab−1 θ−n−(b+1) I(X(n) ,∞) (θ)da = θ−(n+1) I(X(n) ,∞) (θ) 0

and the generalized Bayes action is  ∞ −n θ dθ nX(n) X  ∞ (n) = . −(n+1) n−1 θ dθ X(n) Exercise 14 (#4.25). Let (X1 , ..., Xn ) be a random sample from the exponential distribution on (0, ∞) with scale parameter 1. Suppose that we observe T = X1 + · · · + Xθ , where θ is an unknown positive integer. Consider the estimation of θ under the loss function L(θ, a) = (θ − a)2 /θ and the geometric distribution with mean p−1 as the prior for θ, where p ∈ (0, 1) is known. (i) Show that the posterior expected loss is E[L(θ, a)|T = t] = 1 + ξ − 2a + (1 − e−ξ )a2 /ξ, where ξ = (1 − p)t. (ii) Find the Bayes estimator of θ and show that its posterior expected loss

154

Chapter 4. Estimation in Parametric Models

∞ is 1 − ξ m=1 e−mξ . (iii) Find the marginal distribution of (1 − p)T , unconditional on θ. (iv) Obtain an explicit expression for the Bayes risk of the Bayes estimator in part (ii). Solution. (i) For a given θ, T has the gamma distribution with shape parameter θ and scale parameter 1. Hence, the joint probability density of (T, θ) is f (t, θ) = and

1 tθ−1 e−t p(1 − p)θ−1 , (θ − 1)!

∞ θ=1

f (t, θ) = pe−t

t > 0, θ = 1, 2, ...

∞ [(1 − p)t]θ−1

(θ − 1)!

θ=1

= pe−pt .

Then, E[L(θ, a)|T = t] = p−1 ept

∞ (θ − a)2

θ

θ=1

= e−ξ

∞ ξ θ−1 θ=1

+ a2 e−ξ

θ!

f (t, θ)

θ2 − 2ae−ξ

∞ ξ θ−1 θ=1

θ!

θ

∞ ξ θ−1 θ=1

θ!

= 1 + ξ − 2a + (1 − e−ξ )a2 /ξ. (ii) Since E[L(θ, a)|T = t] is a quadratic function of a, the Bayes estimator is δ(T ) = (1 − p)T /(1 − e−(1−p)T ). The posterior expected loss when T = t is ∞ ξe−ξ E[L(θ, δ(t))|T = t] = 1 − = 1 − ξ e−mξ . 1 − e−ξ m=1 (iii) part (i) of the solution, the marginal density of T is ∞ As shown in −pt f (t, θ) = pe , which is the density of the exponential distribution θ=1 on (0, ∞) with scale parameter p−1 . (iv) The Bayes risk of δ(T ) is  " ∞ −m(1−p)T E{E[L(θ, δ(T ))|T ]} = 1 − E (1 − p)T e = 1 − (1 − p)p = 1 − (1 − p)p

m=1 ∞

∞  m=1 ∞

te−m(1−p)t e−pt dt

0

1 , [m(1 − p) + p]2 m=1

Chapter 4. Estimation in Parametric Models

155

where the first equality follows from the result in (ii) and the second equality follows from the result in (iii). Exercise 15 (#4.27). Let (X1 , ..., Xn ) be a random sample of binary random variables with P (X1 = 1) = p ∈ (0, 1). ¯ is an admissible estimator of p under the (i) Show that the sample mean X 2 loss function (a − p) /[p(1 − p)]. ¯ is an admissible estimator of p under the squared error (ii) Show that X loss. Note. A unique Bayes estimator under a given proper prior is admissible. ¯ Consider the uniform distribution on the Solution. (i) Let T = nX. interval (0, 1) as the prior for p. Then the Bayes estimator under the loss function (a − p)2 /[p(1 − p)] is 1 T p (1 − p)n−T −1 dp T ¯ = = X.  10 T −1 n−T −1 n p (1 − p) dp 0

¯ is an admissible estimator under Since the Bayes estimator is unique, X the given loss function. (ii) From the result in (i), there does not exist an estimator U such that ¯ − p)2 E(U − p)2 E(X ≤ p(1 − p) p(1 − p) for any p ∈ (0, 1) and with strict inequality holds for some p. Since p ∈ (0, 1), this implies that there does not exist an estimator U such that ¯ − p)2 E(U − p)2 ≤ E(X ¯ is for any p ∈ (0, 1) and with strict inequality holds for some p. Hence X an admissible estimator of p under the squared error loss. Exercise 16 (#4.28). Let X = (X1 , ..., Xn ) be a random sample from ¯ is an admissible estimator N (µ, 1), µ ∈ R. Show that the sample mean X of µ under the loss function L(µ, a) = |µ − a|. Solution. Consider a sequence of priors, N (0, j), j = 1, 2, .... From ¯ where Exercise 1, the posterior mean under the jth prior is δj = aj X, aj = nj/(nj + 1). From Exercise 6, δj is a Bayes estimator of µ. Let rT be the Bayes risk for an estimator T . Then, for any j, rX¯ ≥ rδj and   ¯ − a−1 µ|µ)] ≥ aj r ¯ , ¯ − µ|µ)] = aj E[E(|X rδj = E[E(|aj X j X

where the last inequality follows from Exercise 11 in Chapter 1 and the fact ¯ Hence, that given µ, µ is a median of X. 0 ≥ rδj − rX¯ ≥ (aj − 1)rX¯ = −(nj + 1)−1 rX¯ ,

156

Chapter 4. Estimation in Parametric Models

which implies that rδj − rX¯ converges to 0 at rate j −1 as j → ∞. On the other hand, √ for any√finite interval (a, b), the prior probability of µ ∈ (a, b) is Φ(b/ j) − Φ(a/ j), which converges to 0 at rate j −1/2 , where Φ is the cumulative distribution of N (0, 1). Thus, by Blyth’s theorem (e.g., ¯ is admissible. Theorem 4.3 in Shao, 2003), X Exercise 17. Let X be an observation from the negative binomial distribution with a known size r and an unknown probability p ∈ (0, 1). Show that (X + 1)/(r + 1) is an admissible estimator of p−1 under the squared error loss. Solution. It suffices to show that δ0 (X) = (X + 1)/(r + 1) is admissible under the loss function p2 (a − p−1 )2 . The posterior distribution of p given X is the beta distribution with parameter (r + α, X − r + β). Under the loss function p2 (a − p−1 )2 , the Bayes estimator is δ(X) =

X +α+β+1 E(p|X) = , E(p2 |X) r+α+1

which has risk Rδ (p) =

r(1 − p) [(α + β + 1)p − (α + 1)]2 + (r + α + 1)2 (r + α + 1)2

and Bayes risk rδ =

rβ β2 αβ(α + β + 1) + + . 2 2 (α + β)(r + α + 1) (α + β) (r + α + 1)2 (α + β)2 (r + α + 1)2

Also, δ0 (X) has risk Rδ0 (p) =

r(1 − p) + (1 − p)2 (r + 1)2

and Bayes risk r δ0 =

rβ β2 αβ + + . 2 (α + β)(r + 1) (α + β)2 (r + 1)2 (α + β + 1)(α + β)2 (r + 1)2

Note that

where

α+β (rδ0 − rδ ) = A + B + C, αβ   1 2r r 1 → A= − α (r + 1)2 (r + α + 1)2 (r + 1)3

if α → 0 and β → 0, B=

  1 2 β 1 → − α(α + β) (r + 1)2 (r + α + 1)2 (r + 1)3

Chapter 4. Estimation in Parametric Models

157

if α → 0, β → 0 and α/β → 0, and   1 1 α+β+1 C = − α + β (r + 1)2 (α + β + 1) (r + α + 1)2 [α − (α + β)(r + 1)][r + α + 1 + (α + β + 1)(r + 1)] = (α + β)(α + β + 1)(r + 1)2 (r + α + 1)2 2 →− (r + 1)2 if α → 0, β → 0 and α/β → 0. Therefore, α+β (rδ0 − rδ ) → 0 αβ if α → 0, β → 0 and α/β → 0. For any 0 < a < b < 1, the prior probability of p ∈ (a, b) is  Γ(α + β) b α−1 πα,β = p (1 − p)β−1 dp. Γ(α)Γ(β) a Note that aΓ(a) = Γ(a + 1) → 1 as a → 0. Hence,  b α+β πα,β → p−1 (1 − p)−1 dp αβ a as α → 0 and β → 0. From this and the proved result, (rδ0 − rδ )/πα,β → 0 if α → 0, β → 0 and α/β → 0. By Blyth’s theorem, δ0 (X) is admissible. Exercise 18 (#4.30). Let (X1 , ..., Xn ) be a random sample of binary random variables with P (X1 = 1) = p ∈ (0, 1). (i) Obtain the Bayes estimator of p(1 − p) when the prior is the beta distribution with known parameter (α, β), under the squared error loss. (ii) Compare the Bayes estimator in (i) with the UMVUE of p(1 − p). (iii) Discuss the bias, consistency, and admissibility of the Bayes estimator in (i). (iv) Let [p(1 − p)]−1 I(0,1) (p) be an improper Lebesgue prior density for p. Show that the posterior of p given Xi ’s is a probability density provided ¯ ∈ (0, 1). that the sample mean X (v) Under the squared error loss, find the generalized Bayes estimator of p(1 − p) under the improper n prior in (iv). Solution. (i) Let T = i=1 Xi . Since the posterior density given T = t is proportional to pt+α−1 (1 − p)n−t+β−1 I(0,1) (p), the Bayes estimator of p(1 − p) is  1 T +α p (1 − p)n−T +β dp (T + α + 1)(n − T + β) = . δ(T ) =  1 0 T +α−1 n−T +β−1 (n + α + β + 2)(n + α + β + 1) p (1 − p) dp 0

158

Chapter 4. Estimation in Parametric Models

(ii) By considering functions of the form aT 2 + bT of the complete and sufficient statistic T , we obtain the UMVUE of p(1 − p) as U (T ) =

T (n − T ) . n(n − 1)

(iii) From part (ii) of the solution, E[T (n − T )] = n(n − 1)p(1 − p). Then the bias of δ(T ) is   n(n − 1) − 1 p(1 − p) (n + α + β + 2)(n + α + β + 1) +

(α + 1)(n + β) + pn(β − α − 1) , (n + α + β + 2)(n + α + β + 1)

which is of the order O(n−1 ). Since limn (T /n) = p a.s. by the strong law of large numbers, limn δ(T ) = p(1 − p) a.s. Hence the Bayes estimator δ(T ) is consistent. Since δ(T ) is a unique Bayes estimator, it is admissible. (iv) The posterior density when T = t is proportional to pt−1 (1 − p)n−t−1 I(0,1) (p), which is proper if and only if 0 < t < n. (v) The generalized Bayes estimator is 1 T p (1 − p)n−T dp T (n − T ) = . 1 0 T −1 n−T −1 n(n + 1) p (1 − p) dp 0 Exercise 19 (#4.35(a)). Let X = (X1 , ..., Xn ) be a random sample from the uniform distribution on (θ, θ + 1), θ ∈ R. Consider the estimation of θ under the squared error loss. Let π(θ) be a continuous and positive Lebesgue density on R. Derive the Bayes estimator under the prior π and show that it is a consistent estimator of θ. Solution. Let X(j) be the jth order statistic. The joint density of X is I(θ,θ+1) (X(1) )I(θ,θ+1) (X(n) ) = I(X(n) −1,X(1) ) (θ). Hence, the Bayes estimator is  X(1)

X(n) −1

δ(X) =  X (1)

θπ(θ)dθ .

π(θ)dθ X(n) −1

Note that limn X(1) = θ a.s. and limn X(n) = θ + 1 a.s. Hence, almost surely, the interval (X(n) − 1, X(1) ) shrinks to a single point θ as n → ∞. Since π is continuous, this implies that limn δ(X) = θ a.s.

Chapter 4. Estimation in Parametric Models

159

Exercise 20 (#4.36). Consider the linear model with observed vector X having distribution Nn (Zβ, σ 2 In ), where Z is an n × p known matrix, p < n, β ∈ Rp , and σ 2 > 0. (i) Assume that σ 2 is known. Derive the posterior distribution of β when the prior distribution for β is Np (β0 , σ 2 V ), where β0 ∈ Rp is known and V is a known positive definite matrix, and find the Bayes estimator of lτ β under the squared error loss, where l ∈ Rp is known. (ii) Show that the Bayes estimator in (i) is admissible and consistent as n → ∞, assuming that the minimum eigenvalue of Z τ Z → ∞. (iii) Repeat (i) and (ii) when σ 2 is unknown and σ −2 has the gamma distribution with shape parameter α and scale parameter γ, where α and γ are known. (iv) In part (iii), obtain Bayes estimators of σ 2 and lτ β/σ under the squared error loss and show that they are consistent under the condition in (ii). Solution. (i) The product of the joint density of X and the prior is proportional to     (β − β0 )τ V −1 (β − β0 ) X − Zβ2 σ −n exp − exp − . 2σ 2 2σ 2 Since

"      X − Zβ2 SSR (βˆ − β)τ Z τ Z(βˆ − β) exp − , = exp − 2 exp − 2σ 2 2σ 2σ 2

ˆ 2 , the product of the joint where βˆ is the LSE of β and SSR = X − Z β density of X and the prior is proportional to  " ˆ β τ (Z τ Z + V −1 )β − 2β τ (V −1 β0 + Z τ Z β) exp − , 2σ 2 which is proportional to   (β − β ∗ )τ (Z τ Z + V −1 )(β − β ∗ ) exp − , 2σ 2 where

ˆ β ∗ = (Z τ Z + V −1 )−1 (V −1 β0 + Z τ Z β).

This shows that the posterior of β is Np (β ∗ , σ 2 (Z τ Z + V −1 )−1 ). The Bayes estimator of lτ β under the squared error loss is then lτ β ∗ . (ii) Since the Bayes estimator lτ β ∗ is unique, it is admissible. If the minimum eigenvalue of Z τ Z → ∞ as n → ∞, then βˆ →p β, lim lτ (Z τ Z + V −1 )−1 V −1 β0 = 0, n

160

Chapter 4. Estimation in Parametric Models

and

lim(Z τ Z + V −1 )−1 Z τ Zl = l. n



Hence, l β →p β. (iii) Let ω = σ −2 . Then the product of the joint density of X and the prior is proportional to     ω(β − β0 )τ V −1 (β − β0 ) ωX − Zβ2 ω α−1 e−ω/γ ω n/2 exp − exp − , 2 2 τ

which is proportional to (under the argument in part (i) of the solution),     (β − β ∗ )τ (Z τ Z + V −1 )(β − β ∗ ) ωSSR n/2+α−1 −ω/γ ω exp − . e exp − 2 2σ 2 Hence, the posterior of (β, ω) is p(β|ω)p(ω) with p(β|ω) being the density of N (β ∗ , ω −1 (Z τ Z + V −1 )−1 ) and p(ω) being the density of the gamma distribution with shape parameter n/2 + α and scale parameter (γ −1 + SSR/2)−1 . The Bayes estimator of lτ β is still lτ β ∗ and the proof of its admissibility and consistency is the same as that in part (ii) of the solution. (iv) From the result in part (iii) of the solution, the Bayes estimator of σ 2 = ω −1 is  ∞ γ −1 + SSR/2 ω −1 p(ω)dω = . n/2 + α − 1 0 It is consistent since SSR/n →p σ 2 . Using E(β/σ) = E[E(β/σ|σ)] = E[σ −1 E(β|σ)] and the fact that β ∗ does not depend on σ 2 , we obtain the Bayes estimator of lτ β/σ as  ∞ Γ(n/2 + α + 1/2) τ ∗  l β . ω 1/2 p(ω)dω = lτ β ∗ Γ(n/2 + α) γ −1 + SSR/2 0 From the fact that

Γ(n + α + 1/2) lim √ = 1, n nΓ(n + α)

the consistency of the Bayes estimator follows from the consistency of lτ β ∗ and SSR/n. Exercise 21 (#4.47). Let (X1 , ..., a random sample of random Xn ) be −(x−θ)2 /2 I(θ,∞) (x), where θ ∈ R variables with the Lebesgue density 2/πe is unknown. Find the MRIE (minimum risk invariant estimator) of θ under the squared error loss. Note. See Sections 2.3.2 and 4.2 in Shao (2003) for definition and discussion of invariant estimators.


Solution. Let f(x_1, ..., x_n) be the joint density of (X_1, ..., X_n). When θ = 0,

f(X_1 − t, ..., X_n − t) = (2/π)^{n/2} exp{ −(1/2) Σ_{i=1}^n (X_i − t)^2 } I_{(t,∞)}(X_{(1)})
= (2/π)^{n/2} e^{−(n−1)S^2/2} e^{−n(X̄−t)^2/2} I_{(t,∞)}(X_{(1)}),

where X̄ is the sample mean, S^2 is the sample variance, and X_{(1)} is the smallest order statistic. Under the squared error loss, the MRIE of θ is Pitman’s estimator (e.g., Theorem 4.6 in Shao, 2003)

∫ t f(X_1 − t, ..., X_n − t) dt / ∫ f(X_1 − t, ..., X_n − t) dt = ∫_{−∞}^{X_{(1)}} t e^{−n(X̄−t)^2/2} dt / ∫_{−∞}^{X_{(1)}} e^{−n(X̄−t)^2/2} dt,

which is the same as the estimator δ given in Exercise 9.
Exercise 22 (#4.48). Let (X_1, ..., X_n) be a random sample from the exponential distribution on (µ, ∞) with a known scale parameter θ, where µ ∈ R is unknown. Let X_{(1)} be the smallest order statistic. Show that
(i) X_{(1)} − θ log 2/n is an MRIE of µ under the absolute error loss L(µ − a) = |µ − a|;
(ii) X_{(1)} − t is an MRIE under the loss function L(µ − a) = I_{(t,∞)}(|µ − a|).
Solution. Let D = (X_1 − X_n, ..., X_{n−1} − X_n). Then the distribution of D does not depend on µ. Since X_{(1)} is complete and sufficient for µ, by Basu’s theorem, X_{(1)} and D are independent. Since X_{(1)} is location invariant, by Theorem 4.5(iii) in Shao (2003), X_{(1)} − u^* is an MRIE of µ, where u^* minimizes E_0[L(X_{(1)} − u)] over u and E_0 is the expectation taken under µ = 0. Since E_0[L(X_{(1)} − u)] = E_0|X_{(1)} − u|, u^* is the median of the distribution of X_{(1)} when µ = 0 (Exercise 11 in Chapter 1). Since X_{(1)} has Lebesgue density nθ^{-1} e^{−nx/θ} I_{(0,∞)}(x) when µ = 0,

(n/θ) ∫_0^{u^*} e^{−nx/θ} dx = 1 − e^{−nu^*/θ} = 1/2

and, hence, u^* = θ log 2/n.
(ii) Following the same argument in part (i) of the solution, we conclude that an MRIE of µ is X_{(1)} − u^*, where u^* minimizes E_0[L(X_{(1)} − u)] = P_0(|X_{(1)} − u| > t) over u and E_0 and P_0 are the expectation and probability under µ = 0. When u ≤ 0, P_0(|X_{(1)} − u| > t) ≥ P_0(X_{(1)} > t). Hence, we only need to consider u > 0. A direct calculation shows that

E_0[L(X_{(1)} − u)] = P_0(X_{(1)} > u + t) + P_0(X_{(1)} < u − t) = 1 − e^{−n max{u−t, 0}/θ} + e^{−n(u+t)/θ},


which is minimized at u^* = t.
Exercise 23 (#4.52). Let X = (X_1, ..., X_n) be a random sample from a population in a location family with unknown location parameter µ ∈ R and let T be a location invariant estimator of µ. Show that T is an MRIE under the squared error loss if and only if T is unbiased and E[T(X)U(X)] = 0 for any U(X) satisfying U(X_1 + c, ..., X_n + c) = U(X) for any c, Var(U) < ∞, and E[U(X)] = 0 for any µ.
Solution. Suppose that T is an MRIE of µ. Then T is unbiased. For any U(X) satisfying U(X_1 + c, ..., X_n + c) = U(X) for any c and E[U(X)] = 0 for any µ, T + tU is location invariant and unbiased for any t ∈ R. Since T is an MRIE,

Var(T) ≤ Var(T + tU) = Var(T) + 2t Cov(T, U) + t^2 Var(U),

which is the same as 0 ≤ 2t E(TU) + t^2 Var(U) for all t. This is impossible unless E(TU) = 0.
Suppose now that T is unbiased and E[T(X)U(X)] = 0 for any U(X) satisfying U(X_1 + c, ..., X_n + c) = U(X) for any c, Var(U) < ∞, and E[U(X)] = 0 for any µ. Let T_0 be Pitman’s estimator (MRIE). Then U = T − T_0 satisfies U(X_1 + c, ..., X_n + c) = U(X) for any c, Var(U) < ∞, and E[U(X)] = 0 for any µ. Hence E[T(T − T_0)] = 0. Since T_0 is an MRIE, from the previous proof we know that E[T_0(T − T_0)] = 0. Then E(T − T_0)^2 = E[T(T − T_0)] − E[T_0(T − T_0)] = 0. Thus, T = T_0 a.s. and T is an MRIE.
Exercise 24 (#4.56). Let (X_1, ..., X_n) be a random sample from the uniform distribution on (0, σ) and consider the estimation of σ > 0. Show that the MRIE of σ is 2^{1/(n+1)} X_{(n)} when the loss is L(σ, a) = |1 − a/σ|, where X_{(n)} is the largest order statistic.
Solution. By Basu’s theorem, the scale invariant estimator X_{(n)} is independent of Z = (Z_1, ..., Z_n), where Z_i = X_i/X_n, i = 1, ..., n − 1, and Z_n = X_n/|X_n|. By Theorem 4.8 in Shao (2003), the MRIE is X_{(n)}/u^*, where u^* minimizes E_1|1 − X_{(n)}/u| over u > 0 and E_1 is the expectation under σ = 1. If u ≥ 1, then |1 − X_{(n)}/u| = 1 − X_{(n)}/u ≥ 1 − X_{(n)} = |1 − X_{(n)}|. Hence, we only need to consider 0 < u < 1. Since X_{(n)} has Lebesgue density n x^{n−1} I_{(0,1)}(x) when σ = 1,

E_1|X_{(n)}/u − 1| = (n/u) ∫_0^u (u − x)x^{n−1} dx + (n/u) ∫_u^1 (x − u)x^{n−1} dx
= (2/(n+1)) u^n + n/[(n+1)u] − 1,


which is minimized at u^* = 2^{−1/(n+1)}. Thus, the MRIE is 2^{1/(n+1)} X_{(n)}.
Exercise 25 (#4.59). Let (X_1, ..., X_n) be a random sample from the Pareto distribution with Lebesgue density ασ^α x^{−(α+1)} I_{(σ,∞)}(x), where σ > 0 is an unknown parameter and α > 2 is known. Find the MRIE of σ under the loss function L(σ, a) = (1 − a/σ)^2.
Solution. By Basu’s theorem, the scale invariant estimator X_{(1)} is independent of Z = (Z_1, ..., Z_n), where X_{(1)} is the smallest order statistic, Z_i = X_i/X_n, i = 1, ..., n − 1, and Z_n = X_n/|X_n|. By Theorem 4.8 in Shao (2003), the MRIE is X_{(1)}/u^*, where u^* minimizes E_1(1 − X_{(1)}/u)^2 over u > 0 and E_1 is the expectation under σ = 1. Since X_{(1)} has Lebesgue density nα x^{−(nα+1)} I_{(1,∞)}(x) when σ = 1,

E_1(1 − X_{(1)}/u)^2 = [ E_1(X_{(1)}^2) − 2u E_1(X_{(1)}) + u^2 ]/u^2 = nα/[(nα − 2)u^2] − 2nα/[(nα − 1)u] + 1,

which is minimized at u^* = (nα − 1)/(nα − 2). Hence, the MRIE is equal to (nα − 2)X_{(1)}/(nα − 1).
Exercise 26 (#4.62). Let (X_1, ..., X_n) be a random sample from the exponential distribution on (µ, ∞) with scale parameter σ, where µ ∈ R and σ > 0 are unknown.
(i) Find the MRIE of σ under the loss L(σ, a) = |1 − a/σ|^p with p = 1 or 2.
(ii) Under the loss function L(µ, σ, a) = (a − µ)^2/σ^2, find the MRIE of µ.
(iii) Compute the bias of the MRIE of µ in (ii).
Solution. Let X_{(1)} = min_{1≤i≤n} X_i and T = Σ_{i=1}^n (X_i − X_{(1)}). Then (X_{(1)}, T) is complete and sufficient for (µ, σ); X_{(1)} and T are independent; T is location-scale invariant and T/σ has the gamma distribution with shape parameter n − 1 and scale parameter 1; and X_{(1)} is location-scale invariant and has Lebesgue density nσ^{-1} e^{−n(x−µ)/σ} I_{(µ,∞)}(x).
(i) Let W = (W_1, ..., W_{n−1}), where W_i = (X_i − X_n)/(X_{n−1} − X_n), i = 1, ..., n − 2, and W_{n−1} = (X_{n−1} − X_n)/|X_{n−1} − X_n|. By Basu’s theorem, T is independent of W. Hence, according to formula (4.28) in Shao (2003), the MRIE of σ is T/u^*, where u^* minimizes E_1|1 − T/u|^p over u > 0 and E_1 is the expectation taken under σ = 1. When p = 1,

E_1|1 − T/u| = (1/u) ∫_0^u (u − t)f_n(t) dt + (1/u) ∫_u^∞ (t − u)f_n(t) dt
= ∫_0^u f_n(t) dt − (1/u) ∫_0^u t f_n(t) dt + (1/u) ∫_u^∞ t f_n(t) dt − ∫_u^∞ f_n(t) dt,


where f_n(t) denotes the Lebesgue density of the gamma distribution with shape parameter n − 1 and scale parameter 1. The derivative of the above function with respect to u is

ψ(u) = (1/u^2) [ ∫_0^u t f_n(t) dt − ∫_u^∞ t f_n(t) dt ].

The solution to ψ(u) = 0 is u^* satisfying

∫_0^{u^*} t f_n(t) dt = ∫_{u^*}^∞ t f_n(t) dt.

Since t f_n(t) is proportional to the Lebesgue density of the gamma distribution with shape parameter n and scale parameter 1, u^* is the median of the gamma distribution with shape parameter n and scale parameter 1. When p = 2,

E_1(1 − T/u)^2 = [ E_1(T^2) − 2u E_1(T) + u^2 ]/u^2 = n(n − 1)/u^2 − 2(n − 1)/u + 1,

which is minimized at u^* = n.
(ii) By Basu’s theorem, (X_{(1)}, T) is independent of W defined in part (i) of the solution. By Theorem 4.9 in Shao (2003), an MRIE of µ is X_{(1)} − u^*T, where u^* minimizes E_{0,1}(X_{(1)} − uT)^2 over u and E_{0,1} is the expectation taken under µ = 0 and σ = 1. Note that

E_{0,1}(X_{(1)} − uT)^2 = E_{0,1}(X_{(1)}^2) − 2u E_{0,1}(X_{(1)}) E_{0,1}(T) + u^2 E_{0,1}(T^2)
= 2/n^2 − 2(n − 1)u/n + n(n − 1)u^2,

which is minimized at u^* = n^{-2}. Hence the MRIE of µ is X_{(1)} − n^{-2}T.
(iii) Note that

E(X_{(1)} − n^{-2}T) = σ/n + µ − (n − 1)σ/n^2 = µ + σ/n^2.

Hence, the bias of the MRIE in (ii) is σ/n^2.
Exercise 27 (#4.67). Let (X_1, ..., X_n) be a random sample of binary random variables with P(X_1 = 1) = p ∈ (0, 1). Let T be a randomized estimator of p that equals the sample mean X̄ with probability n/(n + 1) and equals 1/2 with probability 1/(n + 1). Under the squared error loss, show that T has a constant risk that is smaller than the maximum risk of X̄.
Solution. The risk of T is

[n/(n + 1)] E(p − X̄)^2 + [1/(n + 1)] (p − 1/2)^2 = 1/[4(n + 1)].


The maximum risk of X̄ is

max_{0<p<1} E(p − X̄)^2 = max_{0<p<1} p(1 − p)/n = 1/(4n),

which is larger than 1/[4(n + 1)]. Letting θ → 0, we obtain that b = 0. Then θ ≥ (a − 1)^2 θ + a^2 for all θ > 0. Letting θ → 0 again, we conclude that a = 0. This shows that 0 is admissible within the class.
Exercise 34 (#4.78). Let (X_1, ..., X_n) be a random sample from the uniform distribution on the interval (µ − 1/2, µ + 1/2) with an unknown µ ∈ R. Under the squared error loss, show that (X_{(1)} + X_{(n)})/2 is the unique minimax estimator of µ, where X_{(j)} is the jth order statistic.
Solution. Let f(x_1, ..., x_n) be the joint density of X_1, ..., X_n. Then f(x_1 − µ, ..., x_n − µ) = 1 if µ − 1/2 ≤ x_{(1)} ≤ x_{(n)} ≤ µ + 1/2 and = 0 otherwise. The Pitman estimator of µ is

∫_{−∞}^∞ t f(X_1 − t, ..., X_n − t) dt / ∫_{−∞}^∞ f(X_1 − t, ..., X_n − t) dt = ∫_{X_{(n)}−1/2}^{X_{(1)}+1/2} t dt / ∫_{X_{(n)}−1/2}^{X_{(1)}+1/2} dt = (X_{(1)} + X_{(n)})/2.

Hence, (X(1) + X(n) )/2 is admissible. Since (X(1) + X(n) )/2 has constant risk, it is the unique minimax estimator (otherwise (X(1) + X(n) )/2 can not be admissible).
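The superiority of the midrange over the sample mean in this uniform model can be probed numerically. The following is a minimal Monte Carlo sketch (my own illustration, not part of the book's solution; the sample size, seed, and replication count are arbitrary choices):

```python
# Compare the MSE of the midrange (X_(1)+X_(n))/2 with that of the sample
# mean for a uniform(mu - 1/2, mu + 1/2) sample; the midrange should win.
import random

def mse(estimator, mu=3.0, n=20, reps=20000, seed=0):
    rng = random.Random(seed)           # same seed => paired comparison
    total = 0.0
    for _ in range(reps):
        x = [mu - 0.5 + rng.random() for _ in range(n)]
        total += (estimator(x) - mu) ** 2
    return total / reps

mse_midrange = mse(lambda x: (min(x) + max(x)) / 2)
mse_mean = mse(lambda x: sum(x) / len(x))
print(mse_midrange, mse_mean)  # midrange MSE is far smaller
```

The theoretical values are Var(midrange) = 1/[2(n+1)(n+2)] versus Var(mean) = 1/(12n), which the simulation should reproduce closely.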


Exercise 35 (#4.80). Let (X_1, ..., X_n) be a random sample from the exponential distribution on (0, ∞) with unknown mean θ > 0 and let X̄ be the sample mean. Show that (nX̄ + b)/(n + 1) is an admissible estimator of θ under the squared error loss for any b ≥ 0 and that nX̄/(n + 1) is a minimax estimator of θ under the loss function L(θ, a) = (a − θ)^2/θ^2.
Solution. The joint Lebesgue density of X_1, ..., X_n is

θ^{-n} e^{−nX̄/θ} I_{(0,∞)}(X_{(1)}),

where X_{(1)} is the smallest order statistic. Let T(X) = X̄, ϑ = −θ^{-1}, and c(ϑ) = (−ϑ)^n. Then the joint density is of the form c(ϑ)e^{nϑT} with respect to a σ-finite measure and the range of ϑ is (−∞, 0). For any ϑ_0 ∈ (−∞, 0),

∫_{−∞}^{ϑ_0} e^{−bϑ/n} (−ϑ)^{-1} dϑ = ∞ and ∫_{ϑ_0}^0 e^{−bϑ/n} (−ϑ)^{-1} dϑ = ∞.

By Karlin’s theorem (e.g., Theorem 4.14 in Shao, 2003), we conclude that (nX̄ + b)/(n + 1) is admissible under the squared error loss. This implies that (nX̄ + b)/(n + 1) is also admissible under the loss function L(θ, a) = (a − θ)^2/θ^2. Since the risk of nX̄/(n + 1) is

(1/θ^2) E[ nX̄/(n + 1) − θ ]^2 = 1/(n + 1),

nX̄/(n + 1) is an admissible estimator with constant risk. Hence, it is minimax.
Exercise 36 (#4.82). Let X be a single observation. Consider the estimation of E(X) under the squared error loss.
(i) Find all possible values of α and β such that αX + β is admissible when X has the Poisson distribution with unknown mean θ > 0.
(ii) When X has the negative binomial distribution with a known size r and an unknown probability p ∈ (0, 1), show that αX + β is admissible when α ≤ r/(r + 1) and β > r(1 − α).
Solution. (i) An application of the results in Exercises 35-36 of Chapter 2 shows that αX + β is an inadmissible estimator of EX when (a) α > 1 or α < 0 or (b) α = 1 and β ≠ 0. If α = 0, then, by Exercise 36 of Chapter 2, αX + β is inadmissible when β ≤ 0; by Exercise 34 of Chapter 2, αX + β is admissible when β > 0.
The discrete probability density of X is θ^x e^{−θ}/x! = e^{−e^ϑ} e^{ϑx}/x!, where ϑ = log θ ∈ (−∞, ∞). Consider α ∈ (0, 1]. Let α = (1 + λ)^{-1} and β = γλ/(1 + λ). Since

∫_{−∞}^0 [ e^{−γλϑ}/e^{−λe^ϑ} ] dϑ = ∞

if and only if λγ ≥ 0, and

∫_0^∞ [ e^{−γλϑ}/e^{−λe^ϑ} ] dϑ = ∞

if and only if λ ≥ 0, we conclude that αX + β is admissible when 0 < α < 1 and β ≥ 0; and αX + β is admissible when α = 1 and β = 0. By Exercise 36 of Chapter 2, αX + β is inadmissible if β < 0. The conclusion is that αX + β is admissible if and only if (α, β) is in the following set:

{α = 0, β > 0} ∪ {α = 1, β = 0} ∪ {0 < α < 1, β ≥ 0}.

(ii) The discrete probability density of X is C(x−1, r−1) p^r (1 − p)^{−r} e^{x log(1−p)}, where C(x−1, r−1) is the binomial coefficient. Let θ = log(1 − p) ∈ (−∞, 0), α = (1 + λ)^{-1}, and β = γλ/(1 + λ). For a fixed c ∈ (−∞, 0), note that

∫_c^0 e^{−λγθ} [ e^θ/(1 − e^θ) ]^{λr} dθ = ∞

if and only if λr ≥ 1, i.e., α ≤ r/(r + 1); and

∫_{−∞}^c e^{−λγθ} [ e^θ/(1 − e^θ) ]^{λr} dθ = ∞

if and only if γ > r, i.e., β > rλ/(1 + λ) = r(1 − α). The result follows from Karlin’s theorem.
Exercise 37 (#4.83). Let X be an observation from the distribution with Lebesgue density (1/2)c(θ)e^{θx−|x|}, |θ| < 1.
(i) Show that c(θ) = 1 − θ^2.
(ii) Show that if 0 ≤ α ≤ 1/2, then αX + β is admissible for estimating E(X) under the squared error loss.
Solution. (i) Note that

1/c(θ) = (1/2) ∫_{−∞}^∞ e^{θx−|x|} dx
= (1/2) [ ∫_{−∞}^0 e^{(θ+1)x} dx + ∫_0^∞ e^{(θ−1)x} dx ]
= (1/2) [ 1/(1 + θ) + 1/(1 − θ) ]
= 1/(1 − θ^2).
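The normalizing constant just computed can be checked numerically. The sketch below (my own illustration, not the book's; the integration range and grid size are arbitrary choices) integrates the density by the trapezoidal rule:

```python
# Verify that (1/2)(1 - theta^2) e^{theta*x - |x|} integrates to 1 for |theta| < 1.
import math

def total_mass(theta, grid=120000, lo=-60.0, hi=60.0):
    h = (hi - lo) / grid
    c = 1.0 - theta * theta
    s = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        w = 0.5 if i in (0, grid) else 1.0   # trapezoidal end-point weights
        s += w * 0.5 * c * math.exp(theta * x - abs(x))
    return s * h

for theta in (-0.5, 0.0, 0.7):
    print(round(total_mass(theta), 6))  # each should be very close to 1.0
```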


(ii) Consider first α > 0. Let α = (1 + λ)^{-1} and β = γλ/(1 + λ). Note that

∫_{−1}^0 [ e^{−γλθ}/(1 − θ^2)^λ ] dθ = ∞ and ∫_0^1 [ e^{−γλθ}/(1 − θ^2)^λ ] dθ = ∞

if and only if λ ≥ 1, i.e., α ≤ 1/2. Hence, αX + β is an admissible estimator of E(X) when 0 < α ≤ 1/2. Consider α = 0. Since

E(X) = [(1 − θ^2)/2] [ ∫_{−∞}^0 x e^{(θ+1)x} dx + ∫_0^∞ x e^{(θ−1)x} dx ]
= [(1 − θ^2)/2] [ 1/(1 − θ)^2 − 1/(1 + θ)^2 ]
= 2θ/(1 − θ^2),

which takes any value in (−∞, ∞), the constant estimator β is an admissible estimator of E(X) (Exercise 34 in Chapter 2).
Exercise 38 (#4.84). Let X be an observation with the discrete probability density f_θ(x) = [x!(1 − e^{−θ})]^{-1} θ^x e^{−θ} I_{{1,2,...}}(x), where θ > 0 is unknown. Consider the estimation of θ/(1 − e^{−θ}) under the squared error loss.
(i) Show that the estimator X is admissible.
(ii) Show that X is not minimax unless sup_θ R_T(θ) = ∞ for the risk R_T(θ) of any estimator T = T(X).
(iii) Find a loss function under which X is minimax and admissible.
Solution. (i) Let ϑ = log θ. Then the range of ϑ is (−∞, ∞). The probability density of X is proportional to

θ^x e^{−θ}/(1 − e^{−θ}) = e^{−e^ϑ} e^{ϑx}/(1 − e^{−e^ϑ}).

Hence, by Corollary 4.3 in Shao (2003), X is admissible under the squared error loss for E(X) = θ/(1 − e^{−θ}).
(ii) The risk of X is

Var(X) = E(X^2) − [E(X)]^2 = (θ + θ^2)/(1 − e^{−θ}) − θ^2/(1 − e^{−θ})^2 = [θ − e^{−θ}(θ + θ^2)]/(1 − e^{−θ})^2,

which diverges to ∞ as θ → ∞. Hence, X is not minimax unless supθ RT (θ) = ∞ for any estimator T = T (X). (iii) Consider the loss [E(X)−a]2 /Var(X). Since X is admissible under the


loss [E(X) − a]^2, it is also admissible under the loss [E(X) − a]^2/Var(X). Since the risk of X under the loss [E(X) − a]^2/Var(X) is 1, it is minimax.
Exercise 39 (#4.91). Suppose that X is distributed as N_p(θ, I_p), where θ ∈ R^p. Consider the estimation of θ under the loss (a − θ)^τ Q(a − θ) with a known positive definite p × p matrix Q. Show that the risk of the estimator

δ^Q_{c,r} = X − [ r(p − 2)/‖Q^{-1/2}(X − c)‖^2 ] Q^{-1}(X − c)

is equal to tr(Q) − (2r − r^2)(p − 2)^2 E(‖Q^{-1/2}(X − c)‖^{-2}).
Solution. Without loss of generality, we assume that c = 0. Define δ_r = δ^Q_{0,r}, Y = Q^{1/2}X, µ = Q^{1/2}θ,

h(µ) = R_{δ_r}(θ) = E‖ Y − [ r(p − 2)/‖Q^{-1}Y‖^2 ] Q^{-1}Y − µ ‖^2,

and

g(µ) = tr(Q) − (2r − r^2)(p − 2)^2 E(‖Q^{-1/2}X‖^{-2}) = tr(Q) − (2r − r^2)(p − 2)^2 E(‖Q^{-1}Y‖^{-2}).

Let λ^* > 0 be the largest eigenvalue of Q^{-1}. Consider the following family of priors for µ: {N_p(0, (α + λ^*)Q^2 − Q) : α > 0}. Then the marginal distribution of Y is N_p(0, (α + λ^*)Q^2) and

E[g(µ)] = tr(Q) − (2r − r^2)(p − 2)^2 E(‖Q^{-1}Y‖^{-2}) = tr(Q) − (2r − r^2)(p − 2)/(α + λ^*).

Note that the posterior distribution of µ given Y is

N_p( [I_p − Q^{-1}/(α + λ^*)] Y, Q − I_p/(α + λ^*) ).

Hence,

E[h(µ)] = E‖ Y − [ r(p − 2)/‖Q^{-1}Y‖^2 ] Q^{-1}Y − E(µ|Y) + E(µ|Y) − µ ‖^2
= E‖ Y − [ r(p − 2)/‖Q^{-1}Y‖^2 ] Q^{-1}Y − E(µ|Y) ‖^2 + E‖E(µ|Y) − µ‖^2
= p/(α + λ^*) − (2r − r^2)(p − 2)/(α + λ^*) + tr(Q) − p/(α + λ^*)
= tr(Q) − (2r − r^2)(p − 2)/(α + λ^*).


This shows that E[h(µ)] = E[g(µ)]. Using the same argument as that in the proof of Theorem 4.15 in Shao (2003), we conclude that h(µ) = g(µ) for any µ.
Exercise 40 (#4.92). Suppose that X is distributed as N_p(θ, σ^2 D), where θ ∈ R^p is unknown, σ^2 > 0 is unknown, and D is a known p × p positive definite matrix. Consider the estimation of θ under the loss ‖a − θ‖^2. Show that the risk of the estimator

δ̃_{c,r} = X − [ r(p − 2)σ^2/‖D^{-1}(X − c)‖^2 ] D^{-1}(X − c)

is equal to

σ^2 { tr(D) − (2r − r^2)(p − 2)^2 σ^2 E(‖D^{-1}(X − c)‖^{-2}) }.

Solution. Define Z = σ^{-1}D^{-1/2}(X − c) and ψ = σ^{-1}D^{-1/2}(θ − c). Then Z is distributed as N_p(ψ, I_p) and

δ̃_{c,r} − c = σD^{1/2}Z − [ r(p − 2)σ/‖D^{-1/2}Z‖^2 ] D^{-1/2}Z = σD^{1/2} [ Z − ( r(p − 2)/‖D^{-1/2}Z‖^2 ) D^{-1}Z ] = σD^{1/2} δ^D_{0,r},

where δ^D_{0,r} is defined in the previous exercise with Q = D. Then the risk of δ̃_{c,r} is

R_{δ̃_{c,r}}(θ) = E[ (δ̃_{c,r} − θ)^τ (δ̃_{c,r} − θ) ]
= σ^2 E[ (δ^D_{0,r} − ψ)^τ D (δ^D_{0,r} − ψ) ]
= σ^2 { tr(D) − (2r − r^2)(p − 2)^2 E(‖D^{-1/2}Z‖^{-2}) }
= σ^2 { tr(D) − (2r − r^2)(p − 2)^2 σ^2 E(‖D^{-1}(X − c)‖^{-2}) },

where the third equality follows from the result of the previous exercise.
Exercise 41 (#4.96). Let X = (X_1, ..., X_n) be a random sample of random variables with probability density f_θ. Find an MLE (maximum likelihood estimator) of θ in each of the following cases.
(i) f_θ(x) = θ^{-1} I_{{1,...,θ}}(x), θ an integer between 1 and θ_0.
(ii) f_θ(x) = e^{−(x−θ)} I_{(θ,∞)}(x), θ > 0.
(iii) f_θ(x) = θ(1 − x)^{θ−1} I_{(0,1)}(x), θ > 1.
(iv) f_θ(x) = [θ/(1 − θ)] x^{(2θ−1)/(1−θ)} I_{(0,1)}(x), θ ∈ (1/2, 1).

where the third equality follows from the result of the previous exercise. Exercise 41 (#4.96). Let X = (X1 , ..., Xn ) be a random sample of random variables with probability density fθ . Find an MLE (maximum likelihood estimator) of θ in each of the following cases. (i) fθ (x) = θ−1 I{1,...,θ} (x), θ is an integer between 1 and θ0 . (ii) fθ (x) = e−(x−θ) I(θ,∞) (x), θ > 0. (iii) fθ (x) = θ(1 − x)θ−1 I(0,1) (x), θ > 1. θ x(2θ−1)/(1−θ) I(0,1) (x), θ ∈ ( 12 , 1). (iv) fθ (x) = 1−θ


(v) f_θ(x) = 2^{-1} e^{−|x−θ|}, θ ∈ R.
(vi) f_θ(x) = θx^{-2} I_{(θ,∞)}(x), θ > 0.
(vii) f_θ(x) = θ^x (1 − θ)^{1−x} I_{{0,1}}(x), θ ∈ [1/2, 3/4].
(viii) f_θ(x) is the density of N(θ, θ^2), θ ∈ R, θ ≠ 0.
(ix) f_θ(x) = σ^{-1} e^{−(x−µ)/σ} I_{(µ,∞)}(x), θ = (µ, σ) ∈ R × (0, ∞).
(x) f_θ(x) = (√(2π) σx)^{-1} e^{−(log x − µ)^2/(2σ^2)} I_{(0,∞)}(x), θ = (µ, σ^2) ∈ R × (0, ∞).
(xi) f_θ(x) = I_{(0,1)}(x) if θ = 0 and f_θ(x) = (2√x)^{-1} I_{(0,1)}(x) if θ = 1.
(xii) f_θ(x) = β^{-α} α x^{α−1} I_{(0,β)}(x), θ = (α, β) ∈ (0, ∞) × (0, ∞).
(xiii) f_θ(x) = C(θ, x) p^x (1 − p)^{θ−x} I_{{0,1,...,θ}}(x), θ = 1, 2, ..., where p ∈ (0, 1) is known and C(θ, x) is the binomial coefficient.
(xiv) f_θ(x) = (1/2)(1 − θ^2) e^{θx−|x|}, θ ∈ (−1, 1).
Solution. (i) Let X_{(n)} be the largest order statistic. The likelihood function is ℓ(θ) = θ^{-n} I_{{X_{(n)},...,θ_0}}(θ), which is 0 when θ < X_{(n)} and decreasing on {X_{(n)}, ..., θ_0}. Hence, the MLE of θ is X_{(n)}.
(ii) Let X_{(1)} be the smallest order statistic. The likelihood function is ℓ(θ) = exp{−Σ_{i=1}^n (X_i − θ)} I_{(0,X_{(1)})}(θ), which is 0 when θ > X_{(1)} and increasing on (0, X_{(1)}). Hence, the MLE of θ is X_{(1)}.
(iii) Note that ℓ(θ) = θ^n Π_{i=1}^n (1 − X_i)^{θ−1} I_{(0,1)}(X_i) and, when θ > 1,

∂ log ℓ(θ)/∂θ = n/θ + Σ_{i=1}^n log(1 − X_i)

and

∂^2 log ℓ(θ)/∂θ^2 = −n/θ^2 < 0.

The equation ∂ log ℓ(θ)/∂θ = 0 has a unique solution θ̂ = −n/Σ_{i=1}^n log(1 − X_i). If θ̂ > 1, then it maximizes ℓ(θ). If θ̂ ≤ 1, then ℓ(θ) is decreasing on the interval (1, ∞). Hence the MLE of θ is max{1, θ̂}.
(iv) Note that

∂ log ℓ(θ)/∂θ = n/[θ(1 − θ)] + [1/(1 − θ)^2] Σ_{i=1}^n log X_i

and ∂ log ℓ(θ)/∂θ = 0 has a unique solution θ̂ = (1 − n^{-1} Σ_{i=1}^n log X_i)^{-1}. Also, ∂ log ℓ(θ)/∂θ < 0 when θ > θ̂ and ∂ log ℓ(θ)/∂θ > 0 when θ < θ̂. Hence, the MLE of θ is max{θ̂, 1/2}.
(v) Note that ℓ(θ) = 2^{-n} exp{−Σ_{i=1}^n |X_i − θ|}. Let F_n be the distribution putting mass n^{-1} at each X_i. Then, by Exercise 11 in Chapter 1, any median of F_n is an MLE of θ.
(vi) Since ℓ(θ) = θ^n Π_{i=1}^n X_i^{-2} I_{(0,X_{(1)})}(θ), the same argument as in part (ii) of the solution yields the MLE X_{(1)}.
(vii) Let X̄ be the sample mean. Since ∂ log ℓ(θ)/∂θ = 0 has the unique solution X̄, ∂ log ℓ(θ)/∂θ < 0 when θ > X̄, and ∂ log ℓ(θ)/∂θ > 0 when θ < X̄, the same argument as in part (iv) of the solution yields the MLE

θ̂ = 1/2 if X̄ ∈ [0, 1/2); θ̂ = X̄ if X̄ ∈ [1/2, 3/4]; θ̂ = 3/4 if X̄ ∈ (3/4, 1].

(viii) Note that

∂ log ℓ(θ)/∂θ = −(1/θ^3) [ nθ^2 + θ Σ_{i=1}^n X_i − Σ_{i=1}^n X_i^2 ].

The equation ∂ log ℓ(θ)/∂θ = 0 has two solutions

θ_± = [ −Σ_{i=1}^n X_i ± √( (Σ_{i=1}^n X_i)^2 + 4n Σ_{i=1}^n X_i^2 ) ] / (2n).

Note that lim_{θ→0} log ℓ(θ) = −∞. By checking the sign change at the neighborhoods of θ_±, we conclude that both θ_− and θ_+ are local maximum points. Therefore, the MLE of θ is θ_− if ℓ(θ_−) ≥ ℓ(θ_+) and θ_+ if ℓ(θ_−) < ℓ(θ_+).
(ix) The likelihood function

ℓ(θ) = σ^{-n} exp{ −(1/σ) Σ_{i=1}^n (X_i − µ) } I_{(−∞,X_{(1)})}(µ)

is 0 when µ > X_{(1)} and increasing in µ on (−∞, X_{(1)}). Hence, the MLE of µ is X_{(1)}. Substituting µ = X_{(1)} into ℓ(θ) and maximizing the resulting likelihood function yields the MLE of σ as n^{-1} Σ_{i=1}^n (X_i − X_{(1)}).
(x) Let Y_i = log X_i, i = 1, ..., n. Then

ℓ(θ) = (√(2π) σ)^{-n} exp{ −(1/(2σ^2)) Σ_{i=1}^n (Y_i − µ)^2 − Σ_{i=1}^n Y_i }.

Solving ∂ log ℓ(θ)/∂θ = 0, we obtain the MLE of µ as Ȳ = n^{-1} Σ_{i=1}^n Y_i and the MLE of σ^2 as n^{-1} Σ_{i=1}^n (Y_i − Ȳ)^2.
(xi) Since ℓ(0) = 1 and ℓ(1) = (2^n √(Π_{i=1}^n X_i))^{-1}, the MLE is equal to 1 if 2^n √(Π_{i=1}^n X_i) < 1 and is equal to 0 if 2^n √(Π_{i=1}^n X_i) ≥ 1.
(xii) The likelihood function is ℓ(θ) = α^n β^{−nα} Π_{i=1}^n X_i^{α−1} I_{[X_{(n)},∞)}(β),


which is 0 when β < X_{(n)} and decreasing in β otherwise. Hence the MLE of β is X_{(n)}. Substituting β = X_{(n)} into the likelihood function, we obtain the MLE of α as n[ Σ_{i=1}^n log(X_{(n)}/X_i) ]^{-1}.
(xiii) Let X_{(n)} be the largest of the X_i’s and T = Σ_{i=1}^n X_i. Then

ℓ(θ) = [ Π_{i=1}^n C(θ, X_i) ] p^T (1 − p)^{nθ−T} I_{{X_{(n)}, X_{(n)}+1, ...}}(θ).

For θ = X_{(n)}, X_{(n)} + 1, ...,

ℓ(θ + 1)/ℓ(θ) = (1 − p)^n Π_{i=1}^n (θ + 1)/(θ + 1 − X_i).

Since (θ + 1)/(θ + 1 − X_i) is decreasing in θ, the function ℓ(θ + 1)/ℓ(θ) is decreasing in θ. Also, lim_{θ→∞} ℓ(θ + 1)/ℓ(θ) = (1 − p)^n < 1. Therefore, the MLE of θ is the smallest θ ≥ X_{(n)} satisfying ℓ(θ + 1)/ℓ(θ) < 1.
(xiv) Let X̄ be the sample mean. Then

∂ log ℓ(θ)/∂θ = nX̄ − 2nθ/(1 − θ^2).

The equation ∂ log ℓ(θ)/∂θ = 0 has two solutions θ_± = ( ±√(1 + X̄^2) − 1 )/X̄. Since θ_− is not in the parameter space (−1, 1), we conclude that the MLE of θ is ( √(1 + X̄^2) − 1 )/X̄.
Exercise 42. Let (X_1, ..., X_n) be a random sample from the uniform distribution on the interval (θ, θ + |θ|). Find the MLE of θ when (i) θ ∈ (0, ∞); (ii) θ ∈ (−∞, 0); (iii) θ ∈ R, θ ≠ 0.
Solution. (i) When θ ∈ (0, ∞), the distribution of X_1 is uniform on (θ, 2θ). The likelihood function is ℓ(θ) = θ^{-n} I_{(X_{(n)}/2, X_{(1)})}(θ). Since θ^{-n} is decreasing, the MLE of θ is X_{(n)}/2.
(ii) When θ ∈ (−∞, 0), the distribution of X_1 is uniform on (θ, 0). Hence ℓ(θ) = |θ|^{-n} I_{(−∞, X_{(1)})}(θ). Since |θ|^{-n} is increasing in θ on (−∞, 0), the MLE of θ is X_{(1)}.
(iii) Consider θ ≠ 0. If θ > 0, then almost surely all X_i’s are positive. If θ < 0, then almost surely all X_i’s are negative. Combining the results in (i)-(ii), we conclude that the MLE of θ is X_{(n)}/2 if X_1 > 0 and is X_{(1)} if X_1 < 0.
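The two-root comparison in part (viii) of Exercise 41 is easy to carry out numerically. The following sketch (my own illustration, not from the book; the data values are arbitrary) evaluates the N(θ, θ^2) log-likelihood at both stationary points and keeps the better one:

```python
# MLE for N(theta, theta^2): compare the likelihood at the two stationary
# points theta_- and theta_+ given by the quadratic n*t^2 + t*S1 - S2 = 0.
import math

def mle_normal_theta_theta2(x):
    n = len(x)
    s1 = sum(x)
    s2 = sum(v * v for v in x)
    root = math.sqrt(s1 * s1 + 4 * n * s2)
    candidates = [(-s1 + root) / (2 * n), (-s1 - root) / (2 * n)]
    def loglik(t):
        return -n * math.log(abs(t)) - sum((v - t) ** 2 for v in x) / (2 * t * t)
    return max(candidates, key=loglik)

x = [1.2, 0.8, 1.5, 0.9, 1.1]
theta_hat = mle_normal_theta_theta2(x)
print(theta_hat)  # the positive root wins for predominantly positive data
```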


Exercise 43 (#4.98). Suppose that n observations are taken from N(µ, 1) with an unknown µ. Instead of recording all the observations, one records only whether the observation is less than 0. Find an MLE of µ.
Solution. Let Y_i = 1 if the ith observation is less than 0 and Y_i = 0 otherwise. Then Y_1, ..., Y_n are the actual observations. Let p = P(Y_i = 1) = Φ(−µ), where Φ is the cumulative distribution function of N(0, 1), let ℓ(p) be the likelihood function in p, and let T = Σ_{i=1}^n Y_i. Then

∂ log ℓ(p)/∂p = T/p − (n − T)/(1 − p).

The likelihood equation has a unique solution T/n. Hence the MLE of p is T/n. Then, the MLE of µ is −Φ^{-1}(T/n).
Exercise 44 (#4.100). Let (Y_1, Z_1), ..., (Y_n, Z_n) be independent and identically distributed random 2-vectors such that Y_1 and Z_1 are independently distributed as the exponential distributions on (0, ∞) with scale parameters λ > 0 and µ > 0, respectively.
(i) Find the MLE of (λ, µ).
(ii) Suppose that we only observe X_i = min{Y_i, Z_i} and ∆_i = 1 if X_i = Y_i and ∆_i = 0 if X_i = Z_i. Find the MLE of (λ, µ).
Solution. (i) Let ℓ(λ, µ) be the likelihood function, Ȳ = n^{-1} Σ_{i=1}^n Y_i, and Z̄ = n^{-1} Σ_{i=1}^n Z_i. Since the Y_i’s and Z_i’s are independent,

∂ log ℓ(λ, µ)/∂λ = −n/λ + nȲ/λ^2

and

∂ log ℓ(λ, µ)/∂µ = −n/µ + nZ̄/µ^2.

Hence, the MLE of (λ, µ) is (Ȳ, Z̄).
(ii) The probability density of (X_i, ∆_i) is λ^{−∆_i} µ^{∆_i − 1} e^{−(λ^{-1}+µ^{-1})x_i}. Let T = Σ_{i=1}^n X_i and D = Σ_{i=1}^n ∆_i. Then

ℓ(λ, µ) = λ^{-D} µ^{D−n} e^{−(λ^{-1}+µ^{-1})T}.

If 0 < D < n, then

∂ log ℓ(λ, µ)/∂λ = −D/λ + T/λ^2

and

∂ log ℓ(λ, µ)/∂µ = (D − n)/µ + T/µ^2.

The likelihood equations have a unique solution λ̂ = T/D and µ̂ = T/(n − D). The MLE of (λ, µ) is (λ̂, µ̂). If D = 0, ℓ(λ, µ) = µ^{-n} e^{−(λ^{-1}+µ^{-1})T}, which is increasing in λ. Hence, there does not exist an MLE of λ. Similarly, when D = n, there does not exist an MLE of µ.
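The censored-data MLEs of part (ii) can be checked by simulation. The following is a minimal sketch (my own illustration, not the book's; sample size and seed are arbitrary choices):

```python
# With X_i = min(Y_i, Z_i) and Delta_i = 1{X_i = Y_i}, the MLEs
# lambda_hat = T/D and mu_hat = T/(n - D) should approach the true scales.
import random

def censored_exp_mle(lam, mu, n, seed=1):
    rng = random.Random(seed)
    T, D = 0.0, 0
    for _ in range(n):
        y = rng.expovariate(1.0 / lam)   # exponential with mean lam
        z = rng.expovariate(1.0 / mu)    # exponential with mean mu
        T += min(y, z)
        D += 1 if y <= z else 0
    return T / D, T / (n - D)

lam_hat, mu_hat = censored_exp_mle(2.0, 3.0, 200000)
print(lam_hat, mu_hat)  # should be close to (2.0, 3.0)
```

The consistency is visible directly: E min(Y, Z) = λµ/(λ+µ) and P(∆ = 1) = µ/(λ+µ), so T/D →_p λ and T/(n−D) →_p µ.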


Exercise 45 (#4.101). Let (X_1, ..., X_n) be a random sample from the gamma distribution with shape parameter α > 0 and scale parameter γ > 0. Show that almost surely the likelihood equation has a unique solution that is the MLE of θ = (α, γ). Obtain the Newton-Raphson iteration equation and the Fisher-scoring iteration equation.
Solution. Let X̄ be the sample mean and Y = n^{-1} Σ_{i=1}^n log X_i. The log-likelihood function is

log ℓ(θ) = −nα log γ − n log Γ(α) + (α − 1)nY − γ^{-1}nX̄.

Then, the likelihood equations are

−log γ − Γ'(α)/Γ(α) + Y = 0 and −α/γ + X̄/γ^2 = 0.

The second equation yields γ = X̄/α. Substituting γ = X̄/α into the first equation we obtain that h(α) = 0, where

h(α) = log α − Γ'(α)/Γ(α) + Y − log X̄.

From calculus,



Γ'(α)/Γ(α) = −C + Σ_{k=0}^∞ [ 1/(k+1) − 1/(k+α) ]

and

(d/dα)[ Γ'(α)/Γ(α) ] = Σ_{k=0}^∞ 1/(k+α)^2,

where C is the Euler constant defined as

C = lim_{m→∞} [ Σ_{k=0}^{m−1} 1/(k+1) − log m ].

Then

h'(α) = 1/α − Σ_{k=0}^∞ 1/(k+α)^2
< 1/α − Σ_{k=0}^∞ [ 1/(k+α) − 1/(k+1+α) ]
= 1/α − [ Γ'(α+1)/Γ(α+1) − Γ'(α)/Γ(α) ]
= 1/α − (d/dα) log[ Γ(α+1)/Γ(α) ]
= 1/α − (d/dα) log α
= 0.


Hence, h(α) is decreasing. Also, it follows from the last two equalities of the previous expression that, for m = 2, 3, ...,

Γ'(m)/Γ(m) = 1/(m−1) + 1/(m−2) + ··· + 1 + Γ'(1)/Γ(1) = Σ_{k=0}^{m−2} 1/(k+1) − C.

Therefore,

lim_{m→∞} [ log m − Γ'(m)/Γ(m) ] = lim_{m→∞} [ log m − Σ_{k=0}^{m−2} 1/(k+1) + C ] = 0.

Hence, lim_{α→∞} h(α) = Y − log X̄, which is negative by Jensen’s inequality when the X_i’s are not all the same. Since

lim_{α→0} [ log α − Γ'(α)/Γ(α) ] = lim_{α→0} [ log α + C − Σ_{k=0}^∞ ( 1/(k+1) − 1/(k+α) ) ]
= lim_{α→0} [ log α + 1/α + C − 1 + Σ_{k=1}^∞ (1−α)/[(k+1)(k+α)] ]
= lim_{α→0} [ log α + 1/α + C − 1 + Σ_{k=1}^∞ 1/[(k+1)k] ]
= ∞,

we have lim_{α→0} h(α) = ∞. Since h is continuous and decreasing, h(α) = 0 has a unique solution. Thus, the likelihood equations have a unique solution, which is the MLE of θ. Let

s(θ) = ∂ log ℓ(θ)/∂θ = n( −log γ − Γ'(α)/Γ(α) + Y, −α/γ + X̄/γ^2 ),

R(θ) = ∂^2 log ℓ(θ)/∂θ∂θ^τ = n [ −(d/dα)(Γ'(α)/Γ(α))   −1/γ ; −1/γ   α/γ^2 − 2X̄/γ^3 ],

and

F(θ) = E[R(θ)] = n [ −(d/dα)(Γ'(α)/Γ(α))   −1/γ ; −1/γ   −α/γ^2 ].

Then the Newton-Raphson iteration equation is θˆ(k+1) = θˆ(k) − [R(θˆ(k) )]−1 s(θˆ(k) ),

k = 0, 1, 2, ...

and the Fisher-scoring iteration equation is θˆ(k+1) = θˆ(k) − [F (θˆ(k) )]−1 s(θˆ(k) ),

k = 0, 1, 2, ....
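The Fisher-scoring iteration above can be sketched in code. The following is a runnable illustration (not the book's code): the digamma and trigamma functions are approximated by the standard recurrence-plus-asymptotic-series device, and the starting value 0.5/(log X̄ − Ȳ) is a common rough initializer of my own choosing:

```python
# Fisher scoring for the gamma likelihood equations in (alpha, gamma).
import math, random

def digamma(x):
    r = 0.0
    while x < 6.0:                       # psi(x) = psi(x+1) - 1/x
        r -= 1.0 / x
        x += 1.0
    return r + math.log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4)

def trigamma(x):
    r = 0.0
    while x < 6.0:                       # psi'(x) = psi'(x+1) + 1/x^2
        r += 1.0 / x**2
        x += 1.0
    return r + 1/x + 1/(2*x**2) + 1/(6*x**3) - 1/(30*x**5)

def gamma_mle(xs, iters=50):
    xbar = sum(xs) / len(xs)
    ybar = sum(math.log(v) for v in xs) / len(xs)
    a = 0.5 / (math.log(xbar) - ybar)    # rough starting shape
    g = xbar / a
    for _ in range(iters):
        s0 = ybar - math.log(g) - digamma(a)        # score / n
        s1 = xbar / g**2 - a / g
        f00, f01, f11 = -trigamma(a), -1.0/g, -a/g**2   # F(theta) / n
        det = f00 * f11 - f01 * f01
        da = (f11 * s0 - f01 * s1) / det
        dg = (f00 * s1 - f01 * s0) / det
        a, g = max(a - da, a / 2), max(g - dg, g / 2)   # keep iterates positive
    return a, g

random.seed(3)
data = [random.gammavariate(2.5, 1.5) for _ in range(5000)]
a_hat, g_hat = gamma_mle(data)
print(a_hat, g_hat)  # near the true (2.5, 1.5), up to sampling error
```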


Exercise 46 (#4.102). Let (X_1, ..., X_n) be a random sample from a population with discrete probability density [x!(1 − e^{−θ})]^{-1} θ^x e^{−θ} I_{{1,2,...}}(x), where θ > 0 is unknown. Show that the likelihood equation has a unique root when the sample mean X̄ > 1. Show whether this root is an MLE of θ.
Solution. Let ℓ(θ) be the likelihood function and

h(θ) = ∂ log ℓ(θ)/∂θ = n [ X̄/θ − 1 − 1/(e^θ − 1) ].

Obviously, lim_{θ→∞} h(θ) = −n. Since lim_{θ→0} θ/(e^θ − 1) = 1,

lim_{θ→0} h(θ) = n lim_{θ→0} (1/θ) [ X̄ − θ/(e^θ − 1) ] − n = ∞

when X̄ > 1. Note that h is continuous. Hence, when X̄ > 1, h(θ) = 0 has at least one solution. Note that

h'(θ) = n [ −X̄/θ^2 + e^θ/(e^θ − 1)^2 ] < 0,

since e^θ/(e^θ − 1)^2 < 1/θ^2 for θ > 0 and X̄ ≥ 1. Hence, h(θ) = 0 has a unique solution and log ℓ(θ) is concave. Therefore, the unique solution is the MLE of θ.
Exercise 47 (#4.104). Let (X_1, Y_1), ..., (X_n, Y_n) be independent and identically distributed as the bivariate normal distribution with E(X_1) = E(Y_1) = 0, Var(X_1) = Var(Y_1) = 1, and an unknown correlation coefficient ρ ∈ (−1, 1). Show that the likelihood equation is a cubic in ρ and the probability that it has a unique root tends to 1 as n → ∞.
Solution. Let T = Σ_{i=1}^n (X_i^2 + Y_i^2) and R = Σ_{i=1}^n X_i Y_i. The likelihood function is

ℓ(ρ) = (2π √(1 − ρ^2))^{-n} exp{ ρR/(1 − ρ^2) − T/[2(1 − ρ^2)] }.

∂ log (ρ) 1 + ρ2 ρ nρ + R− T = 2 2 2 ∂ρ 1−ρ (1 − ρ ) (1 − ρ2 )2

and the likelihood equation is h(ρ) = 0, where h(ρ) = ρ(1 − ρ2 ) − n−1 T ρ + n−1 R(1 + ρ2 ) is a cubic in ρ. Since h is continuous, 1 (T − 2R) = − (Xi − Yi )2 < 0 n i=1 n

lim h(ρ) = −n

ρ→1

−1

Chapter 4. Estimation in Parametric Models

and

183

1 (Xi + Yi )2 > 0, n i=1 n

lim h(ρ) = n−1 (T + 2R) =

ρ→−1

the likelihood equation has at least one solution. Note that h (ρ) = 1 − 3ρ2 − n−1 T + 2n−1 Rρ. As n → ∞, n−1 T →p Var(X1 ) + Var(Y1 ) = 2 and n−1 R →p E(X1 Y1 ) = ρ. Hence, h (ρ) →p 1 − 3ρ2 − 2 + 2ρ2 = −1 − ρ2 < 0. Therefore, the probability that h(ρ) = 0 has a unique solution tends to 1. Exercise 48 (#4.105). Let (X1 , ..., Xn ) be a random sample from the α Weibull distribution with Lebesgue density αθ−1 xα−1 e−x /θ I(0,∞) (x), where α > 0 and θ > 0 are unknown. n Show that the likelihood n equa−1 −1 α tions are equivalent to h(α) = n log X and θ = n i i=1 i=1 Xi , n n α −1 α −1 where h(α) = ( i=1 Xi ) , and that the likelihood i=1 Xi log Xi − α equations have a unique solution. Solution. The log-likelihood function is log (α, θ) = n log α − n log θ + (α − 1)

n i=1

1 α X . θ i=1 i n

log Xi −

Hence, the likelihood equations are ∂ log (α, θ) 1 α n log Xi − X log Xi = 0 = + ∂α α i=1 θ i=1 i n

n

and

n ∂ log (α, θ) 1 α =− + 2 X = 0, ∂θ θ θ i=1 i n n which are equivalent to h(α) = n−1 i=1 log Xi and θ = n−1 i=1 Xiα . Note that n n n α 2 α α 2 1  i=1 Xi (log Xi )  i=1 Xi − ( i=1 Xi log Xi ) h (α) = + 2 >0 n α α ( i=1 Xi )2 n

by the Cauchy-Schwarz inequality. Thus, h(α) is increasing. Since h is continuous, limα→0 h(α) = −∞, and n  Xi α n log Xi i=1 X(n) 1   = log X > log Xi , lim h(α) = lim (n) α n α→∞ α→∞ Xi n i=1

X(n)

i=1

184

Chapter 4. Estimation in Parametric Models

where X(n) is the largest order statistic and the inequality holds as long as Xi ’s are not identical, we conclude that the likelihood equations have a unique solution. Exercise 49 (#4.106). Consider the one-way random effects model Xij = µ + Ai + eij ,

j = 1, ..., n, i = 1, ..., m,

where µ ∈ R, Ai ’s are independent and identically distributed as N (0, σa2 ), eij ’s are independent and identically distributed as N (0, σ 2 ), σa2 and σ 2 are unknown, and Ai ’s and eij ’s are independent. (i) Find an MLE of (σa2 , σ 2 ) when µ = 0. (ii) Find an MLE of (µ, σa2 , σ 2 ). Solution. (i) From the solution of Exercise 33 in Chapter 3, the likelihood function is "  m SE SA nm 2 2 2 ¯ ·· − µ) , (µ, σa , σ ) = ξ exp − 2 − (X − 2σ 2(σ 2 + nσa2 ) 2(σ 2 + nσa2 ) i=1 m ¯ m n ¯ 2 ¯ 2 ¯ where SA = n i=1 (X i· − X·· ) , SE = i=1 j=1 (Xij − Xi· ) , X·· =    m n n 2 ¯ i· = n−1 (nm)−1 i=1 j=1 Xij , X j=1 Xij , and ξ is a function of σa and 2 σ . In the case of µ = 0, the likelihood function becomes "  SE S˜A 2 2 (σa , σ ) = ξ exp − 2 − , 2σ 2(σ 2 + nσa2 ) m ¯ 2 ˜ where S˜A = n i=1 X i· . The statistic (SA , SE ) is complete and sufficient 2 2 2 2 for (σa , σ ). Hence, (σa , σ ) is proportional to the joint distribution of S˜A and SE . Since Xij ’s are normal, SE /σ 2 has the chi-square distribution χ2m(n−1) and S˜A /(σ 2 + nσa2 ) has the chi-square distribution χ2m . Since the distribution of SE does not depend on σ 2 and S˜A is complete and sufficient for σa2 when σ 2 is known, by Basu’s theorem, S˜A and SE are independent. Therefore, the likelihood equations are nS˜A nm ∂ log (σa2 , σ 2 ) = − 2 =0 ∂σa2 σ 2 + nσa2 σ + nσa2 and ∂ log (σa2 , σ 2 ) nS˜A nm SE m(n − 1) = 2 − 2 + 4 − = 0. 2 ∂σ σ + nσa2 σ + nσa2 σ σ2 A unique solution is σ ˆ2 =

SE m(n − 1)

and σ ˆa2 =

SE S˜A − . nm nm(n − 1)

Chapter 4. Estimation in Parametric Models

185

If σ ˆa2 > 0, then the MLE of (σa2 , σ 2 ) is (ˆ σa2 , σ ˆ 2 ). If σ ˆa2 ≤ 0, however, the 2 2 maximum of (σa , σ ) is achieved at the boundary of the parameter space when σa2 = 0. Note that ∂ log (0, σ 2 ) nS˜A nm SE m(n − 1) = 2 − 2 + 4 − =0 2 ∂σ σ σ σ σ2 has a unique solution σ ˜ 2 = (nS˜A + SE )/[nm + m(n − 1)]. Thus, the MLE 2 2 2 ˜ ) when σ ˆa2 ≤ 0. of (σa , σ ) is (0, σ ¯ ·· maximizes (µ, σ 2 , σ 2 ) for any σ 2 and σ 2 . (ii) It is easy to see that X a a ¯ Hence, X·· is the MLE of µ. To consider the MLE of (σa2 , σ 2 ), it suffices to consider   nm SA ¯ ·· , σ 2 σ 2 ) = ξ exp − SE − − . (X a 2σ 2 2(σ 2 + nσa2 ) 2(σ 2 + nσa2 ) Note that this is the same as (σa2 , σ 2 ) in the solution of part (i) with S˜A replaced by SA and SA /(σ 2 + nσa2 ) has the chi-square distribution χ2m−1 . Using the same argument in part (i) of the solution, we conclude that the MLE of (σa2 , σ 2 ) is (ˆ σa2 , σ ˆ 2 ) if σ ˆa2 > 0, where σ ˆ2 =

SE/[m(n − 1)] and σ̂a² = SA/[n(m − 1)] − SE/[nm(n − 1)],

and is (0, σ ˜ 2 ) if σ ˆa2 ≤ 0, where σ ˜ 2 = (nSA + SE )/[n(m − 1) + m(n − 1)]. Exercise 50 (#4.107). Let (X1 , ..., Xn ) be a random sample of random variables with Lebesgue density θf (θx), where f is a Lebesgue density on (0, ∞) or symmetric about 0, and θ > 0 is an unknown parameter. Show that the likelihood equation has a unique root if xf  (x)/f (x) is continuous and decreasing for x > 0. Verify that this condition is satisfied if f (x) = π −1 (1 + x2 )−1 . Solution. Let (θ) be the likelihood function and  n  θXi f  (θXi ) . 1+ h(θ) = f (θXi ) i=1 Then

∂ log ℓ(θ)/∂θ = (1/θ) Σᵢ₌₁ⁿ [1 + θXᵢ f′(θXᵢ)/f(θXᵢ)] = 0

is the same as h(θ) = 0. From the condition, h(θ) is decreasing in θ when θ > 0. Hence, the likelihood equation has at most one solution. Define g(t) = 1 + tf  (t)/f (t). Suppose that g(t) ≥ 0 for all t ∈ (0, ∞). Then tf (t) is nondecreasing since its derivative is f (t)g(t) ≥ 0. Let t0 ∈ (0, ∞). Then tf (t) ≥ t0 f (t0 ) for t ∈ (t0 , ∞) and  ∞  ∞ t0 f (t0 ) dt = ∞, f (t)dt ≥ 1≥ t t0 t0


which is impossible. Suppose that g(t) ≤ 0 for all t ∈ (0, ∞). Then tf (t) is nonincreasing and tf (t) ≥ t0 f (t0 ) for t ∈ (0, t0 ). Then  t0  t0 t0 f (t0 ) 1≥ dt = ∞, f (t)dt ≥ t 0 0 which is impossible. Combining these results and the fact that g(t) is nonincreasing, we conclude that   n  n  θXi f  (θXi ) θXi f  (θXi ) lim > 0 > lim 1+ 1+ θ→0 θ→∞ f (θXi ) f (θXi ) i=1 i=1 and, therefore, h(θ) = 0 has a unique solution. For f (x) = π −1 (1 + x2 )−1 , xf  (x) 2x2 , = f (x) 1 + x2 which is clearly continuous and decreasing for x > 0. Exercise 51 (#4.108). Let (X1 , ..., Xn ) be a random sample having Lebesgue density fθ (x) = θf1 (x) + (1 − θ)f2 (x), where fj ’s are two different known Lebesgue densities and θ ∈ (0, 1) is unknown. (i) Provide a necessary and sufficient condition for the likelihood equation to have a unique solution and show that if there is a solution, it is the MLE of θ. (ii) Derive the MLE of θ when the likelihood equation has no solution. Solution. (i) Let (θ) be the likelihood function. Note that ∂ log (θ) f1 (Xi ) − f2 (Xi ) = , ∂θ f (Xi ) + θ[f1 (Xi ) − f2 (Xi )] i=1 2 n

s(θ) = ∂ log ℓ(θ)/∂θ = Σᵢ₌₁ⁿ [f₁(Xᵢ) − f₂(Xᵢ)]/{f₂(Xᵢ) + θ[f₁(Xᵢ) − f₂(Xᵢ)]},

which has derivative

s′(θ) = −Σᵢ₌₁ⁿ [f₁(Xᵢ) − f₂(Xᵢ)]²/{f₂(Xᵢ) + θ[f₁(Xᵢ) − f₂(Xᵢ)]}² < 0.

Therefore, s(θ) = 0 has at most one solution. The necessary and sufficient condition that s(θ) = 0 has a solution (which is unique if it exists) is that limθ→0 s(θ) > 0 and limθ→1 s(θ) < 0, which is equivalent to

Σᵢ₌₁ⁿ f₁(Xᵢ)/f₂(Xᵢ) > n and Σᵢ₌₁ⁿ f₂(Xᵢ)/f₁(Xᵢ) > n.

The solution, if it exists, is the MLE since s′(θ) < 0.
(ii) If Σᵢ₌₁ⁿ f₂(Xᵢ)/f₁(Xᵢ) ≤ n, then s(θ) ≥ 0 for all θ ∈ (0, 1), so ℓ(θ) is nondecreasing and, thus, the MLE of θ is 1. Similarly, if Σᵢ₌₁ⁿ f₁(Xᵢ)/f₂(Xᵢ) ≤ n, then the MLE of θ is 0.
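The boundary/interior case analysis above is easy to check numerically. Below is a minimal sketch (mine, not from the book): the component densities N(0, 1) and N(2, 1), the sample size, and all helper names are arbitrary choices. The interior root of s(θ) = 0 is found by bisection, using that s is strictly decreasing.

```python
import math
import random

def mixture_mle(x, f1, f2, tol=1e-10):
    """MLE of the mixing weight theta in theta*f1 + (1 - theta)*f2."""
    n = len(x)
    a = [f1(v) for v in x]
    b = [f2(v) for v in x]
    # Boundary cases from the solution: the likelihood is monotone on (0, 1).
    if sum(bi / ai for ai, bi in zip(a, b)) <= n:
        return 1.0
    if sum(ai / bi for ai, bi in zip(a, b)) <= n:
        return 0.0
    # Otherwise the score s(theta) is strictly decreasing with a unique root.
    def s(t):
        return sum((ai - bi) / (bi + t * (ai - bi)) for ai, bi in zip(a, b))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if s(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

f1 = lambda v: math.exp(-0.5 * v * v) / math.sqrt(2 * math.pi)        # N(0, 1)
f2 = lambda v: math.exp(-0.5 * (v - 2) ** 2) / math.sqrt(2 * math.pi)  # N(2, 1)
random.seed(0)
# Sample from a 0.3/0.7 mixture of N(0, 1) and N(2, 1).
x = [random.gauss(0, 1) if random.random() < 0.3 else random.gauss(2, 1)
     for _ in range(5000)]
theta_hat = mixture_mle(x, f1, f2)
```

With a large sample from the mixture, the estimate lands strictly inside (0, 1) near the true weight 0.3.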


Exercise 52 (#4.111). Let Xij , j = 1, ..., r > 1, i = 1, ..., n, be independently distributed as N (µi , σ 2 ). Find the MLE of θ = (µ1 , ..., µn , σ 2 ). Show that the MLE of σ 2 is not a consistent estimator (as n → ∞). Solution. Let (θ) be the likelihood function. Note that log (θ) = −

(nr/2) log(2πσ²) − (1/(2σ²)) Σᵢ₌₁ⁿ Σⱼ₌₁ʳ (Xᵢⱼ − µᵢ)²,

∂ log ℓ(θ)/∂µᵢ = (1/σ²) Σⱼ₌₁ʳ (Xᵢⱼ − µᵢ),

and

∂ log ℓ(θ)/∂σ² = −nr/(2σ²) + (1/(2σ⁴)) Σᵢ₌₁ⁿ Σⱼ₌₁ʳ (Xᵢⱼ − µᵢ)².

Hence, the MLE of µᵢ is X̄ᵢ· = r⁻¹ Σⱼ₌₁ʳ Xᵢⱼ, i = 1, ..., n, and the MLE of σ² is

σ̂² = (nr)⁻¹ Σᵢ₌₁ⁿ Σⱼ₌₁ʳ (Xᵢⱼ − X̄ᵢ·)².

Since E[Σⱼ₌₁ʳ (Xᵢⱼ − X̄ᵢ·)²] = (r − 1)σ², by the law of large numbers, σ̂² →p (r − 1)σ²/r

as n → ∞. Hence, σ ˆ 2 is inconsistent. Exercise 53 (#4.112). Let (X1 , ..., Xn ) be a random sample from the uniform distribution on (0, θ), where θ > 0 is unknown. Let θˆ be the MLE of θ and T be the UMVUE. (i) Obtain the ratio of the mean squared error of T over the mean squared error of θˆ and show that the MLE is inadmissible when n ≥ 2. (ii) Let Za,θ be a random variable having the exponential distribution on ˆ →d Z0,θ and n(θ − T ) →d (a, ∞) with scale parameter θ. Prove n(θ − θ) Z−θ,θ . Obtain the asymptotic relative efficiency of θˆ with respect to T . Solution. (i) Let X(n) be the largest order statistic. Then θˆ = X(n) and ˆ T (X) = n+1 n X(n) . The mean squared error of θ is E(X(n) − θ)2 =

2θ2 (n + 1)(n + 2)


and the mean squared error of T is E(T − θ)² = θ²/[n(n + 2)].

The ratio is (n + 1)/(2n). When n ≥ 2, this ratio is less than 1 and, therefore, the MLE θˆ is inadmissible. (ii) From     ˆ ≤ x = P X(n) ≥ θ − x P n(θ − θ) n  θ = θ−n ntn−1 dt 

= 1 − (1 − x/(nθ))ⁿ

→ 1 − e−x/θ ˆ →d Z0,θ . From as n → ∞, we conclude that n(θ − θ) ˆ − θˆ n(θ − T ) = n(θ − θ) and Slutsky’s theorem, we conclude that n(θ − T ) →d Z0,θ − θ, which has the same distribution as Z−θ,θ . The asymptotic relative efficiency of θˆ with 2 2 respect to T is E(Z−θ,θ )/E(Z0,θ ) = θ2 /(θ2 + θ2 ) = 12 . Exercise 54 (#4.113). Let (X1 , ..., Xn ) be a random sample from the exponential distribution on (a, ∞) with scale parameter θ, where a ∈ R and θ > 0 are unknown. Obtain the asymptotic relative efficiency of the MLE of a (or θ) with respect to the UMVUE of a (or θ). Solution. Let X(1) be the smallest order statistic. From Exercise 6 in Chapter 3, the UMVUE of a and θ are, respectively, 1 (Xi − X(1) ) n(n − 1) i=1 n

ã = X(1) − [n(n − 1)]⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X(1)) and θ̃ = (n − 1)⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X(1)).

From Exercise 41(ix), the MLE of (a, θ) is (â, θ̂), where

â = X(1) and θ̂ = n⁻¹ Σᵢ₌₁ⁿ (Xᵢ − X(1)).


From part (iii) of the solution of Exercise 7 in Chapter 2, 2(n − 1)θ̃/θ has the chi-square distribution χ²₂₍ₙ₋₁₎. Hence,

√(n − 1) (θ̃/θ − 1) →d N(0, 1),

i.e.,

√n (θ̃ − θ) →d N(0, θ²).

˜ ˆ ˜ Since θˆ = n−1 n θ, θ has the same asymptotic distribution as θ and the ˆ ˜ asymptotic relative efficiency of θ with respect to θ is 1. Note that n(ˆ a − a) = n(X(1) − a) has the same distribution as Z, where Z is a random variable having the exponential distribution on (0, ∞) with scale parameter θ. Then 1 (Xi − X(1) ) →d Z − θ, n − 1 i=1 n

n(˜ a − a) = n(X(1) − a) −

n 1 since n−1 i=1 (Xi − X(1) ) →p θ. Therefore, the asymptotic relative efficiency of a ˆ with respect to a ˜ is E(Z − θ)2 /E(Z 2 ) = 12 . Exercise 55 (#4.115). Let (X1 , ..., Xn ), n ≥ 2, be a random sample from a distribution having Lebesgue density fθ,j , where θ > 0, j = 1, 2, fθ,1 is the density of N (0, θ2 ), and fθ,2 (x) = (2θ)−1 e−|x|/θ . (i) Obtain an MLE of (θ, j). (ii) Show whether the MLE of j in part (i) is consistent. (iii) Show that the MLE of θ is consistent and derive its nondegenerated asymptotic distribution.  n n 2 Solution. (i) Let T1 = i=1 Xi and T2 = i=1 |Xi |. The likelihood function is  2 (2π)−n/2 θ−n e−T1 /(2θ ) j = 1 (θ, j) = 2−n θ−n e−T2 /θ j = 2.  Note that θˆ1 = T1 /n maximizes (θ, 1) and θˆ2 = T2 /n maximizes (θ, 2). Define  1 (θˆ1 , 1) ≥ (θˆ2 , 2) ˆj = 2 (θˆ1 , 1) < (θˆ2 , 2), which is the same as ˆj =

ĵ = 1 if θ̂₁/θ̂₂ ≤ √(2e/π) and ĵ = 2 if θ̂₁/θ̂₂ > √(2e/π),




and define θ̂ = θ̂₁ if ĵ = 1 and θ̂ = θ̂₂ if ĵ = 2.

Then ˆ ˆj) ≥ (θ, j) (θ, ˆ ˆj) is an MLE of (θ, j). for any θ and j. This shows that (θ, ˆ (ii) The consistency of j means that lim P (ˆj = j) = 1, n

which is equivalent to

limₙ P(θ̂₁/θ̂₂ ≤ √(2e/π)) = 1 if j = 1 and = 0 if j = 2.

It suffices to consider the limit of θ̂₁/θ̂₂. When j = 1, θ̂₁ →p θ and θ̂₂ →p √(2/π) θ (since E|X₁| = √(2/π) θ). Then θ̂₁/θ̂₂ →p √(π/2) < √(2e/π) (since π² < 4e). When j = 2, θ̂₁ →p √2 θ and θ̂₂ →p θ. Then θ̂₁/θ̂₂ →p √2 > √(2e/π) (since e < π). Therefore, ĵ is consistent.
(iii) When j = 1, by the result in part (ii) of the solution,

limₙ P(θ̂ = θ̂₁) = 1.

Hence, the asymptotic distribution of θˆ is the same as that of θˆ1 under the normal distribution assumption. By the central limit theorem and the δ√ method, n(θˆ1 − θ) →d N (0, θ2 /2). Similarly, when j = 2, the asymptotic distribution of θˆ is the same as that of θˆ2 . By the central limit theorem, √ ˆ n(θ2 − θ) →d N (0, θ2 ). Exercise 56 (#4.115). Let (X1 , ..., Xn ), n ≥ 2, be a random sample from a distribution with discrete probability density fθ,j , where θ ∈ (0, 1), j = 1, 2, fθ,1 is the Poisson distribution with mean θ, and fθ,2 is the binomial distribution with size 1 and probability θ. (i) Obtain an MLE of (θ, j). (ii) Show whether the MLE of j in part (i) is consistent. (iii) Show that the MLE of θ is consistent and derive its nondegenerated asymptotic distribution. ¯ be the sample mean, g(X) = Solution. (i) Let X = (X1 , ..., Xn ), X !n ( i=1 Xi !)−1 , and h(X) = 1 if all Xi ’s are not larger than 1 and h(X) = 0 otherwise. The likelihood function is  ¯ j=1 e−nθ θnX g(X) (θ, j) = ¯ ¯ nX n−nX θ (1 − θ) h(X) j = 2.


¯ = T /n maximizes both (θ, 1) and (θ, 2). Define Note that X  ¯ 1) > (X, ¯ 2) 1 (X, ˆj = ¯ 1) ≤ (X, ¯ 2). 2 (X, Then ¯ ˆj) ≥ (θ, j) (X, ¯ ˆj) is an MLE of (θ, j). We now simplify for any θ and j and, hence, (X, the formula for ˆj. If at least one Xi is larger than 1, then h(X) = 0, ¯ 1) > (X, ¯ 2), and ˆj = 1. If all Xi ’s are not larger than 1, then (X, h(X) = g(X) = 1 and ¯ 2) (X, ¯ n−nX¯ enX¯ ≥ 1 = (1 − X) ¯ (X, 1) because of the inequality (1 − t)1−t ≥ e−t for any t ∈ [0, 1). This shows that  1 h(X) = 0 ˆj = 2 h(X) = 1. (ii) If j = 2, h(X) = 1 always holds. Therefore, ˆj is consistent if we can show that if j = 1, limn P (h(X) = 1) = 0. Since P (X1 = 0) = e−θ and P (X1 = 1) = e−θ θ, n  n P (h(X) = 1) = (e−θ θ)k (e−θ )n−k k k=0 n  n k −nθ θ e = k k=0 n  n k θ (1 − θ)n(1−θ) . ≤ k k=0

For any fixed θ ∈ (0, 1) and any  > 0, there exists K such that (1−θ)k−θ <  whenever k ≥ K. Then, when n > K, n  K  n k n k n(1−θ) P (h(X) = 1) ≤ θ (1 − θ) θ (1 − θ)n(1−θ) + k k k=K k=0 K  n k θ (1 − θ)n(1−θ) . ≤ + k k=0

For any fixed K, lim n

K  n k=0

k

θk (1 − θ)n(1−θ) = 0.


Hence, lim supn P (h(X) = 1) ≤  and √the¯result follows since  is arbitrary. (iii) By the central limit theorem, n(X − θ) →d N (0, Var(X1 )), since ¯ = θ in any case. When j = 1, Var(X1 ) = θ and when j = 2, E(X) Var(X1 ) = θ(1 − θ). Exercise 57 (#4.122). Let θˆn be an estimator of θ ∈ R satisfying √ ˆ n(θn − θ) →d N (0, √v(θ)) as the sample size n → ∞. Construct an estimator θ˜n such that n(θ˜n − θ) →d N (0, w(θ)) with w(θ) = v(θ) for θ = θ0 and w(θ0 ) = t2 v(θ0 ), where t ∈ R and θ0 is a point in the parameter space. Note. This is a generalized version of Hodges’ superefficiency example (e.g., Example 4.38 in Shao, 2003). Solution. Consider  if |θˆn − θ0 | ≥ n−1/4 θˆn θ˜n = ˆ tθn + (1 − t)θ0 if |θˆn − θ0 | < n−1/4 . We now show that θ˜n has the desired property. If θ = θ0 , then θˆn − θ0 →p θ − θ0 = 0 and, therefore, limn P (|θˆn − θ0 | < n−1/4 ) = 0. On the event {|θˆn − θ| ≥ n−1/4 }, θ˜n = θˆn . Hence the asymptotic distribution of √ √ ˜ n(θn − θ) is the same as that of n(θˆn − θ) when θ = θ0 . Consider now θ = θ0 . Then √ lim P (|θˆn − θ0 | < n−1/4 ) = lim P ( n|θˆn − θ0 | < n1/4 ) n

n

= lim[Φ(n1/4 ) − Φ(−n1/4 )] n

= 1, where Φ is the cumulative distribution function of N (0, 1). On the event √ √ {|θˆn − θ| < n−1/4 }, n(θ˜n − θ0 ) = nt(θˆn − θ0 ) →d N (0, t2 v(θ0 )). Exercise 58 (#4.123). Let (X1 , ..., Xn ) be a random sample from a distribution with probability density fθ with respect to a σ-finite measure ν on (R, B), where θ ∈ Θ and Θ is an open set in R. Suppose that for every x in the range of X1 , fθ (x) is twice continuously differentiable in θ and satisfies   ∂ ∂ ψθ (x)dν = ψθ (x)dν ∂θ ∂θ for ψθ (x) = fθ (x) and = ∂fθ (x)/∂θ; the Fisher information 

2 ∂ I1 (θ) = E log fθ (X1 ) ∂θ is finite and positive; and for any given θ ∈ Θ, there exists a positive number cθ and a positive function hθ such that E[hθ (X1 )] < ∞ and


 ∂ 2 log fγ (x)   ≤ hθ (x) for all x in the range of X1 . Show that sup|γ−θ| 0. Show that one of the solutions of the likelihood equation is the unique MLE of θ. Obtain the asymptotic distribution of the MLE ofθ. n Solution. Let Yi = log Xi , T = n−1 i=1 Yi2 , and (θ) be the likelihood function. Then log (θ) = −

(n/2) log θ − (1/(2θ)) Σᵢ₌₁ⁿ (Yᵢ − θ)²,

∂ log ℓ(θ)/∂θ = (n/2) (T/θ² − 1/θ − 1),

and

∂² log ℓ(θ)/∂θ² = n/(2θ²) − nT/θ³.

The likelihood equation ∂ log ℓ(θ)/∂θ = 0 has two solutions

θ = (±√(1 + 4T) − 1)/2.

(θ) = −n( 2θ12 + θ1 ) < 0. Hence, both At each solution, T = θ + θ2 and ∂ log ∂θ 2 solutions are maximum points √ of the likelihood function. Since θ > 0, the only positive solution, θˆ = ( 1 + 4T − 1)/2, is the unique MLE of θ. Since EY12 = θ + θ2 , the Fisher information is 



2 n (2θ + 1)n ∂ log (θ) nT In (θ) = −E =E − 3 = . ∂θ2 2θ2 θ 2θ2

Thus,



n(θˆ − θ) →d N (0, 2θ2 /(2θ + 1)).
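A quick numerical sanity check of the closed-form root (the code and all names are mine, not the book's): simulate Yᵢ = log Xᵢ from N(θ, θ), form T = n⁻¹ Σ Yᵢ², and verify that θ̂ = (√(1 + 4T) − 1)/2 solves the likelihood equation T/θ² − 1/θ − 1 = 0 and is close to the true θ.

```python
import math
import random

def mle_theta(y):
    """Unique positive root of theta**2 + theta - T = 0, with T = mean of y_i**2."""
    t = sum(v * v for v in y) / len(y)
    return (math.sqrt(1 + 4 * t) - 1) / 2

random.seed(1)
theta = 2.0
# Y_i = log X_i ~ N(theta, theta): mean theta, variance theta.
y = [random.gauss(theta, math.sqrt(theta)) for _ in range(20000)]
th = mle_theta(y)
# th should zero the likelihood equation T/th**2 - 1/th - 1 = 0 (up to rounding).
t = sum(v * v for v in y) / len(y)
resid = t / th**2 - 1 / th - 1
```

The residual is zero up to floating-point rounding, and θ̂ is within a few asymptotic standard errors (≈ √(2θ²/((2θ + 1)n))) of θ.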

Exercise 61 (#4.131). Let (X1 , ..., Xn ) be a random sample from the distribution P (X1 = 0) = 6θ2 − 4θ + 1, P (X1 = 1) = θ − 2θ2 , and P (X1 = 2) = 3θ −4θ2 , where θ ∈ (0, 12 ) is unknown. Obtain the asymptotic distribution of an RLE (root of likelihood equation) of θ. Solution. Let Y be the number of Xi ’s that are 0 and Z be the number of Xi ’s that are 1. Then, the likelihood function is (θ) = (6θ2 − 4θ + 1)Y (θ − 2θ2 )Z (3θ − 4θ2 )n−Y −Z , (12θ − 4)Y (1 − 4θ)Z ∂ log (θ) (3 − 8θ)(n − Y − Z) = 2 + + , ∂θ 6θ − 4θ + 1 θ − 2θ2 3θ − 4θ2 and ∂ 2 log (θ) (72θ2 − 48θ + 4)Y (8θ2 − 4θ + 1)Z = − − ∂θ2 (6θ2 − 4θ + 1)2 (θ − 2θ2 )2 2 (32θ − 24θ + 9)(n − Y − Z) − . (3θ − 4θ2 )2 By the theorem for RLE (e.g., Theorem 4.17 in Shao, 2003), there exists √ an RLE θˆ such that n(θˆ − θ) →d N (0, I1−1 (θ)), where   2 1 ∂ log (θ) I1 (θ) = − E n ∂θ2 72θ2 − 48θ + 4 8θ2 − 4θ + 1 32θ2 − 24θ + 9 = + + . 6θ2 − 4θ + 1 θ − 2θ2 3θ − 4θ2
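Since the three cell probabilities sum to 1 for every θ (so Σᵢ pᵢ″(θ) = 0), the displayed I₁(θ) must agree with Σᵢ [pᵢ′(θ)]²/pᵢ(θ). The sketch below (helper names are mine) compares the two expressions on a grid as a sanity check.

```python
def probs(theta):
    # Cell probabilities for X = 0, 1, 2; they sum to 1 for every theta in (0, 1/2).
    return (6*theta**2 - 4*theta + 1, theta - 2*theta**2, 3*theta - 4*theta**2)

def fisher_info(theta):
    # I_1(theta) as displayed in the solution above.
    p0, p1, p2 = probs(theta)
    return ((72*theta**2 - 48*theta + 4) / p0
            + (8*theta**2 - 4*theta + 1) / p1
            + (32*theta**2 - 24*theta + 9) / p2)

def fisher_info_direct(theta):
    # I_1(theta) = sum_i p_i'(theta)^2 / p_i(theta); derivatives taken by hand.
    p = probs(theta)
    dp = (12*theta - 4, 1 - 4*theta, 3 - 8*theta)
    return sum(d * d / q for d, q in zip(dp, p))

grid = [0.05, 0.1, 0.25, 0.4, 0.45]
max_gap = max(abs(fisher_info(t) - fisher_info_direct(t)) for t in grid)
min_info = min(fisher_info(t) for t in grid)
```

The two forms agree to rounding error, and I₁(θ) stays strictly positive on (0, 1/2).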


Exercise 62 (#4.132). Let (X1 , ..., Xn ) be a random sample from N (µ, 1), where µ ∈ R is unknown. Let θ = P (X1 ≤ c), where c is a known constant. Find the asymptotic relative efficienciesof the MLE of θ n with respect to the UMVUE of θ and the estimator n−1 i=1 I(−∞,c] (Xi ). ¯ Let Φ be the cumuSolution. The MLE of µ is the sample mean X. lative distribution function of N (0, 1). Then θ = Φ(c − µ) and the MLE ¯ From the central limit theorem and the δ-method, of θ is θˆ = Φ(c − X). √ ˆ n(θ − θ) →d N (0, [Φ (c − µ)]2√). From Exercise 3(ii) in Chapter 3, the ¯ 1 − n−1 ). Note that UMVUE of θ is θ˜ = Φ((c − X)/   ¯  √ √ c−X ˆ = ¯ − Φ(c − X) n(θ˜ − θ) n Φ √ 1 − n−1

 √  1 ¯ √ = nΦ (ξn )(c − X) −1 1 − n−1 ¯ Φ (ξn )(c − X) √ = √ √ −1 n 1 − n (1 + 1 − n−1 ) →p 0, √ ¯ and (c − X)/ ¯ where ξn is a point between c − X 1 − n−1 . Hence, the ˆ ˜ asymptotic efficiency of θ with respect to θ is 1. For√the estimator relative n T = n−1 i=1 I(−∞,c] (Xi ), by the central limit theorem, n(T − θ) →d N (0, θ(1 − θ)). Hence, the asymptotic relative efficiency of θˆ with respect to T is θ(1 − θ)/[Φ (c − µ)]2 . Exercise 63 (#4.135). Let (X1 , ..., Xn ) be a random sample from a population having the Lebesgue density  x>0 (θ1 + θ2 )−1 e−x/θ1 fθ1 ,θ2 (x) = (θ1 + θ2 )−1 ex/θ2 x ≤ 0, where θ1 > 0 and θ2 > 0 are unknown. (i) Find the MLE of θ = (θ1 , θ2 ). (ii) Obtain a nondegenerated asymptotic distribution of the MLE of θ. (iii) Obtain the asymptotic relative efficiencies of the MLE’s with respect to the moment estimators. n n Solution. (i) Let T1 = i=1 Xi I(0,∞) (Xi ) and T2 = − i=1 Xi I(−∞,0] (Xi ). Then, the log-likelihood function is T1 T2 log (θ) = −n log(θ1 + θ2 ) − − , θ1 θ2 

T1 n T2 ∂ log (θ) n + 2,− + 2 , = − ∂θ θ1 + θ2 θ1 θ1 + θ2 θ2 and   2T1 n n ∂ 2 log (θ) (θ1 +θ2 )2 − θ13 (θ1 +θ2 )2 . = 2T2 n n ∂θ∂θτ (θ1 +θ2 )2 (θ1 +θ2 )2 − θ 3 2


Since the likelihood equation has a unique solution, the MLE of θ is   θˆ = (θˆ1 , θˆ2 ) = n−1 ( T1 T2 + T1 , T1 T2 + T2 ). (ii) By the asymptotic theorem for MLE (e.g., Theorem 4.17 in Shao, 2003), √ ˆ n(θ − θ) →d N2 (0, [I1 (θ)]−1 ), where     2 2 −1 1 + 2θ 1 1 ∂ log (θ) θ 1 I1 (θ) = − E = 1 −1 1 + 2θ n ∂θ∂θτ (θ1 + θ2 )2 θ2 and −1

[I₁(θ)]⁻¹ = (θ₁ + θ₂)²/[(1 + 2θ₂/θ₁)(1 + 2θ₁/θ₂) − 1] × [ 1 + 2θ₁/θ₂ , 1 ; 1 , 1 + 2θ₂/θ₁ ]

(rows separated by semicolons).

(iii) Let µj = EX1j , j = 1, 2. From Exercise 52 in Chapter 3, 2 θ1 + θ2 = 2µ2 − 3µ21 and

 2µ2 − 3µ21 − µ1 θ2 = . θ1 2µ2 − 3µ21 + µ1

Let τj (µ1 , µ2 ) be the jth diagonal element of the matrix ΛΣΛτ in the solution of Exercise 52 in Chapter 3. Then, the asymptotic relative efficiency of θˆ1 with respect to the moment estimator of θ1 given in the solution of Exercise 52 in Chapter 3 is     √ √ 2µ2 −3µ21 −µ1 2µ2 −3µ21 +µ1 √ 1 + 2 − 1 τ1 (µ1 , µ2 ) 1 + 2 √ 2µ2 −3µ21 +µ1 2µ2 −3µ21 −µ1 

√ 2µ2 −3µ21 +µ1 (2µ2 − 3µ1 ) 1 + 2 √ 2 2µ2 −3µ1 −µ1

and the asymptotic relative efficiency of θˆ2 with respect to the moment estimator of θ2 is     √ √ 2µ2 −3µ21 −µ1 2µ2 −3µ21 +µ1 √ √ τ2 (µ1 , µ2 ) 1 + 2 1+2 −1 2µ2 −3µ21 +µ1 2µ2 −3µ21 −µ1 

. √ 2µ2 −3µ21 −µ1 (2µ2 − 3µ1 ) 1 + 2 √ 2 2µ2 −3µ1 +µ1
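The closed-form MLE in part (i) can be verified numerically. In the sketch below (all names and the chosen (θ₁, θ₂) are mine), data are drawn from the two-sided exponential density of the exercise and the closed form θ̂ = n⁻¹(√(T₁T₂) + T₁, √(T₁T₂) + T₂) is checked against the likelihood equations −n/(θ₁ + θ₂) + T₁/θ₁² = 0 and −n/(θ₁ + θ₂) + T₂/θ₂² = 0.

```python
import math
import random

def mle(x):
    """Closed-form MLE (theta1_hat, theta2_hat) from the solution above."""
    n = len(x)
    t1 = sum(v for v in x if v > 0)
    t2 = -sum(v for v in x if v <= 0)
    r = math.sqrt(t1 * t2)
    return (r + t1) / n, (r + t2) / n

random.seed(2)
th1, th2 = 1.5, 0.5
x = []
for _ in range(10000):
    # P(X > 0) = th1/(th1 + th2); each side is exponential with its own scale.
    if random.random() < th1 / (th1 + th2):
        x.append(random.expovariate(1 / th1))
    else:
        x.append(-random.expovariate(1 / th2))
e1, e2 = mle(x)
n = len(x)
t1 = sum(v for v in x if v > 0)
t2 = -sum(v for v in x if v <= 0)
# Residuals of the two likelihood equations at the closed-form estimate.
g1 = -n / (e1 + e2) + t1 / e1**2
g2 = -n / (e1 + e2) + t2 / e2**2
```

Both residuals vanish up to rounding, and (θ̂₁, θ̂₂) lands near the true (1.5, 0.5).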

Exercise 64 (#4.136). In Exercise 47,   show that the RLE ρˆ of ρ satisfies √ n(ˆ ρ − ρ) →d N 0, (1 − ρ2 )2 /(1 + ρ2 ) . Solution. Let (ρ) be the likelihood function. From Exercise 47, ∂ log (ρ) = n(1 − ρ2 )2 h(ρ), ∂ρ


whereh(ρ) = ρ(1 − ρ2 ) − n−1 T ρ + n−1 R(1 + ρ2 ), T = n R = i=1 Xi Yi . Then,

n

2 2 i=1 (Xi + Yi ),

and

nh (ρ) 4nρh(ρ) ∂ 2 log (ρ) = + . 2 ∂ρ (1 − ρ2 )2 (1 − ρ2 )3 Since E[h(ρ)] = 0 and E[h (ρ)] = −(1 + ρ2 ),   2 1 + ρ2 1 ∂ log (ρ) = . I1 (ρ) = − E 2 n ∂ρ (1 − ρ2 )2   √ ρ − ρ) →d N 0, [I1 (ρ)]−1 . The By the asymptotic theorem for RLE, n(ˆ result follows. Exercise 65 (#4.137). In Exercise 50, obtain a nondegenerated asymptotic distribution of the RLE θˆ of θ when f (x) = π −1 (1 + x2 )−1 . Solution. For f (x) = π −1 (1 + x2 )−1 and fθ (x) = θf (θx), 1 − θ2 x2 ∂fθ (x) = ∂θ π(1 + θ2 x2 )2

and

∂ 2 fθ (x) 2θx2 (θ2 x2 − 3) = , 2 ∂θ π(1 + θ2 x2 )3

which are continuous functions of θ. For any θ0 ∈ (0, ∞),      ∂fθ (x)   1 − θ2 x2      sup sup  ∂θ  =  2 2 2 θ∈[θ0 /2,2θ0 ] θ∈[θ0 /2,2θ0 ] π(1 + θ x ) ≤

1 2 2 θ∈[θ0 /2,2θ0 ] π(1 + θ x )

=

1 , π[1 + (θ0 /2)2 x2 ]

sup

which is integrable with respect to the Lebesgue measure on (−∞, ∞). Also,   2    ∂ fθ (x)   2θx2 (θ2 x2 − 3)      sup sup  ∂2θ  =  π(1 + θ2 x2 )3  θ∈[θ0 /2,2θ0 ] θ∈[θ0 /2,2θ0 ] ≤

2C 2 2 θ∈[θ0 /2,2θ0 ] πθ(1 + θ x )

=

2C , π(θ0 /2)[1 + (θ0 /2)2 x2 ]

sup

which is integrable, where C does not depend on θ. Therefore, by the dominated convergence theorem,  ∞  ∞ ∂ ∂ ψθ (x)dx = ψθ (x)dx ∂θ −∞ ∂θ −∞


θ (x) holds for both ψθ (x) = fθ (x) and ∂f∂θ . Note that  2     ∂ log fθ (x)  1 2x2 (1 − θ2 x2 )     sup sup  =  2 + (1 + θ2 x2 )2  ∂θ2 θ∈[θ0 /2,2θ0 ] θ∈[θ0 /2,2θ0 ] θ   2 1   + 2x  ≤ sup  θ2 2 2 1+θ x  θ∈[θ0 /2,2θ0 ]

=

4 2x2 + θ02 1 + (θ0 /2)2 x2

and, when X1 has density θ0 f (θ0 x),   4 2X12 < ∞. E 2+ θ0 1 + (θ0 /2)2 X12 Therefore, the regularity conditions in Theorem 4.17 of Shao (2003) are satisfied and √ n(θˆ − θ) →d N (0, [I1 (θ)]−1 ). It remains to compute I1 (θ). Note that   2 ∂ log fθ (X1 ) I1 (θ) = −E ∂θ2  ∞ 2 1 x (1 − θ2 x2 ) 2θ = 2+ dx θ π −∞ (1 + θ2 x2 )3  ∞ 2 x (1 − x2 ) 2 1 dx = 2+ 2 θ θ π −∞ (1 + x2 )3  ∞ 3(1 + x2 ) − 2 − (1 + x2 )2 2 1 dx = 2+ 2 θ θ π −∞ (1 + x2 )3   ∞ 1 2 1 dx = 2+ 2 3 θ θ π (1 + x2 )2 −∞   ∞  ∞ 1 1 dx − dx − 2 2 3 2 −∞ (1 + x ) −∞ 1 + x

 2 3 3 1 = 2+ 2 − −1 θ θ 2 4 1 = 2, 2θ where the identity 



−∞

is used.

1 dx = (1 + x2 )k



πΓ(k − 12 ) Γ(k)


Exercise 66 (#4.138). Let (X1 , ..., Xn ) be a random sample from the logistic distribution on R with Lebesgue density fθ (x) = σ −1 e−(x−µ)/σ /[1 + e−(x−µ)/σ ]2 , where µ ∈ R and σ > 0 are unknown. Obtain a nondegenerated asymptotic distribution of the RLE θˆ of θ = (µ, σ). Solution. Using the same argument as that in the solution for the previous exercise, we can show that the conditions in Theorem 4.17 of Shao (2003) are satisfied. Then √ n(θˆ − θ) →d N (0, [I1 (θ)]−1 ). To obtain I1 (θ), we calculate ∂ log fθ (x) 1 2 =− + , −(x−µ)/σ ∂µ σ σ[1 + e ] ∂ 2 log fθ (x) 2e−(x−µ)/σ =− 2 , 2 ∂µ σ [1 + e−(x−µ)/σ ]2 and   ∞ 2 ∂ 2 log fθ (X1 ) e2(x−µ)/σ = −E dx ∂µ2 σ 2 −∞ σ[1 + e(x−µ)/σ ]4  ∞ e2y 2 dy = 2 σ −∞ (1 + ey )4 

 1 1 1 (1 − t)2 4 2 + dt t = 2 σ 0 t2 1−t t  1 2 = 2 (1 − t)tdt σ 0 1 , = 3σ 2 

where the second equality follows from the transformation y = −(x − µ)/σ and the third equality follows from the transformation t = (1 + ey )−1 (y = log(1 − t) − log t). Note that ∂ 2 log fθ (x) 2(x − µ)e−(x−µ)/σ e−(x−µ)/σ − 1 − 2 = 2 −(x−µ)/σ ∂µ∂σ σ [1 + e ] σ [1 + e−(x−µ)/σ ]2 is odd symmetric about µ and fθ (x) is symmetric about µ. Hence,  2  ∂ log fθ (X1 ) E = 0. ∂µ∂σ


Differentiating log fθ (x) with respect to σ gives ∂ log fθ (x) 1 x−µ 2(x − µ) =− − + 2 ∂σ σ σ σ [1 + e−(x−µ)/σ ] and ∂ 2 log fθ (x) 2(x − µ)2 e−(x−µ)/σ 1 2(x − µ) 4(x − µ) − 4 = 2+ − 3 . 2 3 −(x−µ)/σ ∂σ σ σ σ [1 + e ] σ [1 + e−(x−µ)/σ ]2 From E(X1 − µ) = 0 and the transformation y = −(x − µ)/σ, we obtain that   2  ∞  ∞ 1 ye2y y 2 e2y 4 2 ∂ log fθ (x) = − + dy + dy. −E 2 2 2 y 3 2 ∂σ σ σ −∞ (1 + e ) σ −∞ (1 + ey )4 From integration by parts, −∞  ∞   ye2y yey 1 ∞ ey + yey  dy = + dy y 3 2(1 + ey )2 ∞ 2 −∞ (1 + ey )2 −∞ (1 + e )   ey yey 1 ∞ 1 ∞ = dy + dy y 2 2 −∞ (1 + e ) 2 −∞ (1 + ey )2 1 = 2 and  ∞  ∞ y 2 e2y y 2 e2y dy = 2 dy y 4 (1 + ey )4 −∞ (1 + e ) 0 0  2 ∞ (2y + y 2 )ey 2y 2 ey  + dy = 3(1 + ey )3 ∞ 3 0 (1 + ey )3 0  (2y + y 2 )  2 ∞ 1+y = + dy 3(1 + ey )2 ∞ 3 0 (1 + ey )2 0  2(1 + y)e−y  2 ∞ ye−y = − dy 3(1 + ey ) ∞ 3 0 1 + ey  1 2 ∞ ye−2y = − dy. 3 3 0 1 + e−y ∞ Using the series (1 + e−y )−1 = j=0 (−e−y )j , we obtain that  ∞  ∞ ∞ ye−2y −2y dy = ye (−1)j e−jy dy 1 + e−y 0 0 j=0  ∞ ∞ (−1)j ye−(j+2)y dy =

=

j=0 ∞ j=0

0

(−1)j

1 . (j + 2)2


Noting that (−1)j = 1 when j is even and (−1)j = −1 when j is odd, we obtain that ∞

(−1)j

j=0

∞ ∞ 1 1 1 = − (j + 2)2 (2k)2 (2k + 1)2 k=1 k=1  ∞ ∞  1 1 1 1 = − + − (2k)2 (2k + 1)2 (2k)2 (2k)2 k=1 ∞

=2

=

1 2

k=1 ∞ k=1

k=1 ∞

1 1 − (2k)2 j=2 j 2 ∞

1 1 − +1 2 k j2 j=1 ∞

1 1 = 1− 2 k2 k=1

π2 . = 1− 12 Combining these results, we obtain that  2

−E[∂² log fθ(X₁)/∂σ²] = 1/(3σ²) + π²/(9σ²). Therefore,

I₁(θ) = (1/σ²) diag(1/3, 1/3 + π²/9).
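The two diagonal entries of the logistic Fisher information can be confirmed by numerical integration at σ = 1. The quadrature below is a rough midpoint rule and all names are mine, not the book's; the (σ, σ) entry is obtained by a numerical second derivative of log fθ in σ averaged against the density.

```python
import math

def quad(f, a, b, n=80000):
    # Plain midpoint rule; crude but adequate for these smooth, fast-decaying integrands.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def logf(x, mu, s):
    # Log of the logistic density s^{-1} e^{-z} / (1 + e^{-z})^2, z = (x - mu)/s.
    z = (x - mu) / s
    return -math.log(s) - z - 2.0 * math.log1p(math.exp(-z))

# (mu, mu) entry at sigma = 1: E[-d^2/dmu^2 log f] = int 2 e^{2y}/(1 + e^y)^4 dy = 1/3.
i_mu = quad(lambda y: 2.0 * math.exp(2 * y) / (1 + math.exp(y)) ** 4, -40.0, 40.0)

# (sigma, sigma) entry at (mu, sigma) = (0, 1): should equal 1/3 + pi^2/9.
h = 1e-4
def neg_d2(x):
    # Central second difference of log f in sigma at sigma = 1.
    return -(logf(x, 0.0, 1.0 + h) - 2.0 * logf(x, 0.0, 1.0) + logf(x, 0.0, 1.0 - h)) / h**2

i_sigma = quad(lambda x: math.exp(logf(x, 0.0, 1.0)) * neg_d2(x), -40.0, 40.0)
```

Both numbers match the analytic entries 1/3 ≈ 0.3333 and 1/3 + π²/9 ≈ 1.4299 to quadrature accuracy.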

Exercise 67 (#4.140). Let (X1 , ..., Xn ) be a random sample of binary random variables with P (X1 = 1) = p, where p ∈ (0, 1) is unknown. Let θˆ be the MLE of θ = p(1 − p). (i) Show that θˆ is asymptotically normal when p = 12 . (ii) When p = 12 , derive a nondegenerated asymptotic distribution of θˆ with an appropriate normalization. ¯ is the MLE of p, the MLE of Solution. (i) Since the sample mean X √ ¯ ¯ ¯ θ = p(1 − p) is X(1 − X). From the central limit theorem, n(X − p) →d N (0, θ). Using the δ-method with g(x) = x(1 − x) and g  (x) = 1 − 2x, √ we obtain that n(θˆ − θ) →d N (0, (1 − 2p)2 θ). Note that this asymptotic distribution is degenerate when p = 12 . 1 √ ¯ (ii) When p = 2 , n(X − 12 ) →d N (0, 41 ). Hence,

4n (1/4 − θ̂) = 4n (X̄ − 1/2)² = [2√n (X̄ − 1/2)]² →d χ²₁.
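A small simulation (mine; the sample size and replication count are arbitrary) illustrates the degenerate case: at p = 1/2, the normalized quantity 4n(1/4 − θ̂) = 4n(X̄ − 1/2)² is nonnegative and should behave like a χ²₁ variable, whose mean is 1.

```python
import random

random.seed(3)
n, reps = 400, 2000
vals = []
for _ in range(reps):
    s = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) count
    xbar = s / n
    theta_hat = xbar * (1 - xbar)                     # MLE of p(1 - p)
    vals.append(4 * n * (0.25 - theta_hat))           # = 4n (xbar - 1/2)^2
mean_val = sum(vals) / reps
```

All simulated values are nonnegative and their average is close to E(χ²₁) = 1.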


Exercise 68 (#4.141). Let (X1 , Y1 ), ..., (Xn , Yn ) be independent and identically distributed random 2-vectors satisfying 0 ≤ X1 ≤ 1, 0 ≤ Y1 ≤ 1, and P (X1 > x, Y1 > y) = (1 − x)(1 − y)(1 − max{x, y})θ for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, where θ ≥ 0 is unknown. (i) Obtain the likelihood function and the likelihood equation. (ii) Obtain the asymptotic distribution of the MLE of θ. Solution. (i) Let X = X1 and Y = Y1 . Note that F (x, y) is differentiable when x = y but not differentiable when x = y. Hence, when x = y, (X, Y ) has Lebesgue density  (θ + 1)(1 − x)θ x > y fθ (x, y) = (θ + 1)(1 − y)θ x < y and P (X > t, Y > t, X = Y ) = 2P (X > t, Y > t, X > Y )  1 x = 2(θ + 1) (1 − x)θ dydx t



= 2(θ + 1) ∫ₜ¹ (x − t)(1 − x)^θ dx

= 2(1 − t)^(θ+2)/(θ + 2).

Also, P (X > t, Y > t) = (1 − t)θ+2 . Hence, P (X > t, X = Y ) = P (X > t, Y > t, X = Y ) = P (X > t, Y > t) − P (X > t, Y > t, X = Y ) θ(1 − t)θ+2 = . θ+2 This means that on the line x = y, (X, Y ) has Lebesgue density θ(1 − t)θ+1 . Let ν be the sum of the Lebesgue measure on R2 and the Lebesgue measure on {(x, y) ∈ R2 : x = y}. Then the probability density of (X, Y ) with respect to ν is ⎧ θ ⎨ (θ + 1)(1 − x) x > y θ fθ (x, y) = (θ + 1)(1 − y) x < y ⎩ θ(1 − x)θ+1 x = y. Let T be the number of (Xi , Yi )’s with Xi = Yi and Zi = max{Xi , Yi }. Then the likelihood function is n

ℓ(θ) = (θ + 1)^(n−T) θ^T [∏ᵢ₌₁ⁿ (1 − Zᵢ)^θ] [∏ over i with Xᵢ = Yᵢ of (1 − Zᵢ)]


and the likelihood equation is n−T T ∂ log (θ) = + + log(1 − Zi ) = 0, ∂θ θ+1 θ i=1 n

which has a unique solution in the parameter space,

θ̂ = [(n − W) + √((n − W)² + 4WT)]/(2W),

where W = −Σᵢ₌₁ⁿ log(1 − Zᵢ).
(ii) Since

∂² log ℓ(θ)/∂θ² = −(n − T)/(θ + 1)² − T/θ² < 0,

θ̂ is the MLE of θ. Since E(T) = nθ/(θ + 2), we obtain that

I₁(θ) = −(1/n) E[∂² log ℓ(θ)/∂θ²] = (θ² + 4θ + 1)/[θ(θ + 2)(θ + 1)²].

Hence,



√n (θ̂ − θ) →d N(0, θ(θ + 2)(θ + 1)²/(θ² + 4θ + 1)).
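The root can be double-checked numerically: clearing denominators in the likelihood equation gives the quadratic Wθ² + (W − n)θ − T = 0, whose unique positive root is the MLE. The (n, T, W) triples below are arbitrary test values of my own, not data from the exercise.

```python
import math

def theta_hat(n, t, w):
    """Positive root of W*theta**2 + (W - n)*theta - T = 0."""
    return ((n - w) + math.sqrt((n - w) ** 2 + 4 * w * t)) / (2 * w)

def score(theta, n, t, w):
    # Likelihood equation: (n - T)/(theta + 1) + T/theta - W = 0.
    return (n - t) / (theta + 1) + t / theta - w

# A few arbitrary (hypothetical) values of n, T, W with 0 <= T <= n and W > 0.
cases = [(50, 10, 30.0), (200, 25, 80.0), (10, 3, 7.5)]
resids = [score(theta_hat(n, t, w), n, t, w) for n, t, w in cases]
```

Each residual is zero up to floating-point rounding, confirming that the closed form solves the likelihood equation.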

Exercise 69. Consider the one-way random effects model Xij = µ + Ai + eij ,

j = 1, ..., n, i = 1, ..., m,

where µ ∈ R, Ai ’s are independent and identically distributed as N (0, σa2 ), eij ’s are independent and identically distributed as N (0, σ 2 ), σa2 and σ 2 are unknown, and Ai ’s and eij ’s are independent. Obtain nondegenerate asymptotic distributions of the MLE’s of µ, σa2 , and σ 2 . ¯ ·· , which is always Solution. From Exercise 49(ii), the MLE of µ is X normally distributed with mean µ and variance m−1 (σa2 + n−1 σ 2 ). From Exercise 49(ii), the MLE of σ 2 is σ ˆ 2 = SE /[m(n − 1)] and the 2 2 MLE of σa is σ ˆa = SA /[n(m − 1)] − SE /[nm(n − 1)], provided that σ ˆa2 > 0. 2 We now show that as long as nm → ∞, P (ˆ σa ≤ 0) → 0, which implies that, for the asymptotic distributions of the MLE’s, we may assume that σ ˆa2 > 0. 2 2 Since SE /σ has the chi-square distribution χm(n−1) , SE /[m(n − 1)] →p σ 2 as nm → ∞ (either n → ∞ or m → ∞). Since SA /(σ 2 + nσa2 ) has the chi-square distribution χ2m−1 , the distribution of SA /[n(m − 1)] is the same as that of (σa2 + n−1 σ 2 )Wm−1 /(m − 1), where Wm−1 is a random variable having the chi-square distribution χ2m−1 . We need to consider three different cases. Case 1: m → ∞ and n → ∞. In this case, SE /[nm(n − 1)] →p 0 and


(σa2 + n−1 σ 2 )Wm−1 /(m − 1) →p σa2 > 0. Hence, σ ˆa2 →p σa2 > 0, which 2 implies P (ˆ σa ≤ 0) → 0. Case 2: m → ∞ but n is fixed. In this case, SE /[nm(n − 1)] →p n−1 σ 2 and ˆa2 →p σa2 > 0. (σa2 + n−1 σ 2 )Wm−1 /(m − 1) →p (σa2 + n−1 σ 2 ). We still have σ Case 3: n → ∞ but m is fixed. In this case, SE /[nm(n − 1)] →p 0 and (σa2 + n−1 σ 2 )Wm−1 /(m − 1) →d σa2 Wm−1 /(m − 1). Hence, by Slutsky’s theorem, σ ˆa2 →d σa2 Wm−1 /(m − 1), which is a nonnegative random variable. Hence, P (ˆ σa2 ≤ 0) → 0. Therefore, the asymptotic distributions of MLE’s are the same as those of σ ˆa2 and σ ˆ 2 . Since SE /σ 2 has the chi-square distribution χ2m(n−1) , √

  2 nm σ ˆ − σ 2 →d N (0, 2σ 4 )

as nm → ∞ (either n → ∞ or m → ∞). For σ ˆa2 , we need to consider the three cases previously discussed. Case 1: m → ∞ and n → ∞. In this case,   √ SE σ2 →p 0 m − nm(n − 1) n and



√m (Wₘ₋₁/(m − 1) − 1) →d N(0, 2).

Since SA /[n(m − 1)] and (σa2 + n−1 σ 2 )Wm−1 /(m − 1) have the same distribution,   

√m (σ̂a² − σa²) = √m (SA/[n(m − 1)] − σa² − σ²/n) + √m (σ²/n − SE/[nm(n − 1)]) has the same asymptotic distribution as that of

√m (σa² + σ²/n) (Wₘ₋₁/(m − 1) − 1).

Thus,

√m (σ̂a² − σa²) →d N(0, 2σa⁴).

Case 2: m → ∞ but n is fixed. In this case,   √ SE σ2 →d N (0, 2σ 4 n−3 ). m − nm(n − 1) n From the argument in the previous case and the fact that SA and SE are independent, we obtain that   √ m(ˆ σa2 − σa2 ) →d N 0, 2(σa2 + n−1 σ 2 )2 + 2σ 4 n−3 .


Case 3: n → ∞ but m is fixed. In this case, SE /[nm(n − 1)] − σ 2 /n →p 0 and 

  Wm−1 Wm−1 σ2 2 2 σa + − 1 →d σ a −1 . n m−1 m−1 Therefore, σ ˆa2



σa2

→d σa2

 Wm−1 −1 . m−1
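A simulation sketch for the variance-component MLE's (mine; the parameter values are arbitrary): it computes SE and SA from balanced normal one-way random-effects data and checks that the estimators σ̂² = SE/[m(n − 1)] and σ̂a² = SA/[n(m − 1)] − SE/[nm(n − 1)] from Exercise 49(ii) are close to the truth when m is large.

```python
import random

random.seed(4)
m, n = 300, 5                      # m groups, n observations per group (arbitrary)
mu, s2a, s2 = 1.0, 2.0, 0.5        # true mu, sigma_a^2, sigma^2 (arbitrary)
se = 0.0
xbars = []
for _ in range(m):
    a = random.gauss(0.0, s2a ** 0.5)
    xs = [mu + a + random.gauss(0.0, s2 ** 0.5) for _ in range(n)]
    xb = sum(xs) / n
    xbars.append(xb)
    se += sum((x - xb) ** 2 for x in xs)            # contributes to S_E
grand = sum(xbars) / m                              # overall mean X-bar..
sa = n * sum((xb - grand) ** 2 for xb in xbars)     # S_A
sigma2_hat = se / (m * (n - 1))
sigma2a_hat = sa / (n * (m - 1)) - se / (n * m * (n - 1))
```

Both estimates land near (σ², σa²) = (0.5, 2.0), consistent with the asymptotics derived above.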

Exercise 70 (#4.151). Let (X1 , ..., Xn ) be a random sample from the logistic distribution on R with Lebesgue density fθ (x) = σ −1 e−(x−µ)/σ /[1 + e−(x−µ)/σ ]2 , where µ ∈ R and σ > 0 are unknown. Using Newton-Raphson and Fisherscoring methods, find (i) one-step MLE’s of µ when σ is known; (ii) one-step MLE’s of σ when µ is known; (iii) √ one-step MLE’s of (µ, σ); (iv) n-consistent initial estimators in (i)-(iii). Solution. (i) Let (µ) be the likelihood function when σ is known. From Exercise 66, sσ (µ) =

sσ (µ) = and

n n 2 e−(Xi −µ)/σ ∂ log (µ) = − , ∂µ σ σ i=1 1 + e−(Xi −µ)/σ n e−(Xi −µ)/σ ∂ 2 log (µ) 2 = − , 2 2 ∂µ σ i=1 [1 + e−(Xi −µ)/σ ]2

 n ∂ 2 log (µ) = . −E 2 ∂µ 3σ 2 

Hence, the one-step MLE of µ is µ ˆ(1) = µ ˆ(0) − [sσ (ˆ µ(0) )]−1 sσ (ˆ µ(0) ) by the Newton-Raphson method, where µ ˆ(0) is an initial estimator of µ, and is ˆ(0) + 3σ 2 n−1 sσ (ˆ µ(0) ) µ ˆ(1) = µ by the Fisher-scoring method. (ii) Let (σ) be the likelihood function when µ is known. From Exercise 66, n Xi − µ 2(Xi − µ) ∂ log (σ) =− − + , sµ (σ) = 2 [1 + e−(Xi −µ)/σ ] ∂σ σ i=1 σ σ i=1 n

n


∂ 2 log (σ) ∂σ 2 n n 2(Xi − µ) 4(Xi − µ) n = 2+ − 3 3 [1 + e−(Xi −µ)/σ ] σ σ σ i=1 i=1

sµ (σ) =



n 2(Xi − µ)2 e−(Xi −µ)/σ

σ 4 [1 + e−(Xi −µ)/σ ]2

i=1

,

 

∂ 2 log (σ) 1 1 π2 . −E = 2 + ∂σ 2 σ 3 9 

and

Hence, the one-step MLE of σ is σ ˆ (1) = σ ˆ (0) − [sµ (ˆ σ (0) )]−1 sµ (ˆ σ (0) ) by the Newton-Raphson method, where σ ˆ (0) is an initial estimator of σ, and is (ˆ σ (0) )2 ˆ (0) + 2 σ (0) ) σ ˆ (1) = σ sµ (ˆ π /9 + 1/3 by the Fisher-scoring method. (iii) Let (µ, σ) be the likelihood function when both µ and σ are unknown. From parts (i)-(ii) of the solution, s(µ, σ) = and

∂ log (µ, σ) = ∂(µ, σ)

∂ 2 log (θ) = s (µ, σ) = ∂(µ, σ)∂(µ, σ)τ 



sσ (µ) sµ (σ)



sσ (µ) sµ,σ sµ,σ sµ (σ)

 ,

where 2(Xi − µ)e−(Xi −µ)/σ ∂ 2 log (µ, σ) e−(Xi −µ)/σ − 1 = − . ∂µ∂σ σ 2 [1 + e−(Xi −µ)/σ ] i=1 σ 2 [1 + e−(Xi −µ)/σ ]2 i=1 n

sµ,σ =

n

Hence, by the Newton-Raphson method, the one-step MLE of (µ, σ) is

µ ˆ(1) σ ˆ (1)



=

µ ˆ(0) σ ˆ (0)



* +−1 − s (ˆ µ(0) , σ ˆ (0) ) s(ˆ µ(0) , σ ˆ (0) ).

From Exercise 66,

−E[s′(µ, σ)] = (n/σ²) diag(1/3, 1/3 + π²/9).


Hence, by the Fisher-scoring method, the one-step MLE of (µ, σ) is  

(1)  (0)  (0) 3sσˆ (0) (ˆ µ ) [ˆ σ (0) ]2 µ ˆ µ ˆ   2 = + . sµˆ(0) (ˆ σ (0) )/ 13 + π9 σ ˆ (1) σ ˆ (0) n (iv) Note that logistic distribution has mean µ and variance σ 2 π 2 /3. Thus, ¯ (the sample mean) and in (i)-(iii), we may take µ ˆ(0) = X 7 √ 8 n 81 3 9 ¯ 2, σ ˆ (0) = (Xi − X) π n i=1 which are



√n-consistent.
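Part (i) with known σ can be sketched as follows (my own code, not the book's): one Fisher-scoring step from the √n-consistent sample mean, using the Fisher information I(µ) = n/(3σ²) from Exercise 66.

```python
import math
import random

def score_mu(x, mu, sigma):
    # s_sigma(mu) = sum of (1/sigma)(1 - e^{-z_i})/(1 + e^{-z_i}), z_i = (x_i - mu)/sigma.
    s = 0.0
    for xi in x:
        z = (xi - mu) / sigma
        e = math.exp(-z)
        s += (1 - e) / (1 + e) / sigma
    return s

def one_step_fisher(x, mu0, sigma):
    # Fisher scoring: mu1 = mu0 + (3 sigma^2 / n) * s_sigma(mu0), since I(mu) = n/(3 sigma^2).
    n = len(x)
    return mu0 + 3 * sigma ** 2 / n * score_mu(x, mu0, sigma)

random.seed(5)
mu, sigma = 2.0, 1.5
# Logistic(mu, sigma) samples via the inverse cdf: mu + sigma * log(u/(1 - u)).
x = [mu + sigma * math.log(u / (1 - u)) for u in (random.random() for _ in range(5000))]
mu0 = sum(x) / len(x)              # sqrt(n)-consistent initial estimator (sample mean)
mu1 = one_step_fisher(x, mu0, sigma)
```

The one-step estimator lands within a few asymptotic standard errors (≈ √(3σ²/n)) of the true µ.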

Chapter 5

Estimation in Nonparametric Models Exercise 1 (#5.3). Let p ≥ 1 and Fp be the set of cumulative distribution functions on R having finite pth moment. Mallows’ distance between F and G in Fp is defined to be Mp (F, G) = inf(EX − Y p )1/p , where the infimum is taken over all pairs of random variables X and Y having marginal distributions F and G, respectively. Show that Mp is a distance on Fp . Solution. Let U be a random variable having the uniform distribution on the interval (0, 1) and F −1 (t) = inf{x : F (x) ≥ t} for any cumulative distribution function F . We first show that Mp (F, G) = [E|F −1 (U ) − G−1 (U )|p ]1/p . Since F −1 (U ) is distributed as F and G−1 (U ) is distributed as G, we have Mp (F, G) ≤ [E|F −1 (U ) − G−1 (U )|p ]1/p . Let X and Y be any random variables whose marginal distributions are F and G, respectively. From Jensen’s inequality for conditional expectations, E|X − Y |p = E[E(|X − Y |p |X)] ≥ E|X − E(Y |X)|p . Since X and F −1 (U ) have the same distribution, we conclude that Mp (F, G) ≥ [E|F −1 (U ) − E(Y |F −1 (U ))|p ]1/p . 209


Then the result follows if we can show that E(Y|F^{-1}(U)) = G^{-1}(U). Clearly, G^{-1}(U) is a Borel function of U. Since Y and G^{-1}(U) have the same distribution, ∫_B Y dP = ∫_B G^{-1}(U) dP for any B ∈ σ(F^{-1}(U)). Hence, E(Y|F^{-1}(U)) = G^{-1}(U) a.s.
It is clear that Mp(F, G) ≥ 0 and Mp(F, G) = Mp(G, F). If Mp(F, G) = 0, then, by the established result, E|F^{-1}(U) − G^{-1}(U)|^p = 0. Thus, F^{-1}(t) = G^{-1}(t) a.e. with respect to Lebesgue measure. Hence, F = G. Finally, for F, G, and H in Fp,
    Mp(F, G) = (E|F^{-1}(U) − G^{-1}(U)|^p)^{1/p}
             ≤ (E|F^{-1}(U) − H^{-1}(U)|^p)^{1/p} + (E|H^{-1}(U) − G^{-1}(U)|^p)^{1/p}
             = Mp(F, H) + Mp(H, G),
where the inequality follows from Minkowski's inequality. This proves that Mp is a distance.

Exercise 2 (#5.5). Let F1 be the collection of cumulative distribution functions on R with finite means and M1 be as defined in Exercise 1. Show that
(i) M1(F, G) = ∫_0^1 |F^{-1}(t) − G^{-1}(t)| dt;
(ii) M1(F, G) = ∫_{−∞}^{∞} |F(x) − G(x)| dx.
Solution. (i) Let U be a random variable having the uniform distribution on the interval (0, 1). From the solution of the previous exercise, M1(F, G) = E|F^{-1}(U) − G^{-1}(U)|. The result follows from
    E|F^{-1}(U) − G^{-1}(U)| = ∫_0^1 |F^{-1}(t) − G^{-1}(t)| dt.
(ii) From part (i), it suffices to show that
    ∫_{−∞}^{∞} |F(x) − G(x)| dx = ∫_0^1 |F^{-1}(t) − G^{-1}(t)| dt.
Note that ∫_{−∞}^{∞} |F(x) − G(x)| dx is equal to the area on R² bounded by the two curves F(x) and G(x), and ∫_0^1 |F^{-1}(t) − G^{-1}(t)| dt is equal to the area on R² bounded by the two curves F^{-1}(t) and G^{-1}(t). Hence, they are the same and the result follows.
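The two representations of M1 in Exercise 2 can be checked numerically. A minimal sketch (not from the text), assuming two empirical distributions based on samples of equal size, in which case the quantile formula in (i) reduces to the mean absolute difference of the sorted samples:

```python
# M1 between two empirical CDFs, computed two ways per Exercise 2.

def m1_quantile(xs, ys):
    """(i): integral of |F^{-1}(t) - G^{-1}(t)| over t in (0,1)."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def m1_area(xs, ys):
    """(ii): integral of |F(x) - G(x)| dx; |F - G| is constant between jumps."""
    pts = sorted(set(xs) | set(ys))
    F = lambda t: sum(x <= t for x in xs) / len(xs)
    G = lambda t: sum(y <= t for y in ys) / len(ys)
    return sum(abs(F(a) - G(a)) * (b - a) for a, b in zip(pts, pts[1:]))

xs = [0.0, 1.0, 3.0]
ys = [1.0, 2.0, 5.0]
```

Both routines give 4/3 for this pair of samples, illustrating the equality of (i) and (ii).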

Exercise 3. Let Mp be the Mallows' distance defined in Exercise 1 and {G, G1, G2, ...} ⊂ Fp. Show that limn Mp(Gn, G) = 0 if and only if limn ∫|x|^p dGn(x) = ∫|x|^p dG(x) and limn Gn(x) = G(x) for any x at which G is continuous.
Solution. Let U be a random variable having the uniform distribution on (0, 1). By the solution of Exercise 1,
    Mp(Gn, G) = [E|Gn^{-1}(U) − G^{-1}(U)|^p]^{1/p}.
Assume that limn Mp(Gn, G) = 0. Then limn E|Gn^{-1}(U) − G^{-1}(U)|^p = 0, which implies that limn E|Gn^{-1}(U)|^p = E|G^{-1}(U)|^p and Gn^{-1}(U) →d G^{-1}(U). Since Gn^{-1}(U) has distribution Gn and G^{-1}(U) has distribution G, we conclude that limn ∫|x|^p dGn(x) = ∫|x|^p dG(x) and limn Gn(x) = G(x) for any x at which G is continuous.
Assume now that limn ∫|x|^p dGn(x) = ∫|x|^p dG(x) and limn Gn(x) = G(x) for any x at which G is continuous. Using the same argument as in the solution of Exercise 54 in Chapter 1, we can show that Gn^{-1}(U) →p G^{-1}(U). Since limn ∫|x|^p dGn(x) = ∫|x|^p dG(x), by Theorem 1.8(viii) in Shao (2003), the sequence {|Gn^{-1}(U)|^p} is uniformly integrable and, hence, limn E|Gn^{-1}(U) − G^{-1}(U)|^p = 0, which means limn Mp(Gn, G) = 0.

Exercise 4 (#5.6). Find an example of cumulative distribution functions G, G1, G2, ... on R such that (i) limn ϱ∞(Gn, G) = 0 but Mp(Gn, G) does not converge to 0, where Mp is the distance defined in Exercise 1 and ϱ∞ is the sup-norm distance defined as ϱ∞(F, G) = supx |F(x) − G(x)| for any cumulative distribution functions F and G; (ii) limn Mp(Gn, G) = 0 but ϱ∞(Gn, G) does not converge to 0.
Solution. Let U be a random variable having the uniform distribution on the interval (0, 1).
(i) Let G be the cumulative distribution function of U and Gn be the cumulative distribution function of
    Un = U   if U ≥ n^{-1},    Un = n²   if U < n^{-1}.
Then limn P(|Un − U| > ϵ) = limn n^{-1} = 0 for any ϵ > 0 and, hence, Un →d U. Since the distribution of U is continuous, by Pólya's theorem (e.g., Proposition 1.16 in Shao, 2003), limn ϱ∞(Gn, G) = 0. But E|Un| ≥ n² P(U < n^{-1}) = n and E|U| = 1/2, so E|Un| does not converge to E|U|. By Exercise 3, Mp(Gn, G) does not converge to 0.
(ii) Let Gn be the cumulative distribution function of U/n and G(x) = I_{[0,∞)}(x) (the degenerate distribution at 0). Then limn E|U/n|^p = 0 for any p and U/n →d 0. Thus, by Exercise 3, limn Mp(Gn, G) = 0. But Gn(0) = P(U ≤ 0) = 0 for all n and G(0) = 1, i.e., Gn(0) does not converge to G(0). Hence ϱ∞(Gn, G) does not converge to 0.

Exercise 5 (#5.8). Let X be a random variable having cumulative distribution function F. Show that
(i) E|X|² < ∞ implies ∫{F(t)[1 − F(t)]}^{p/2} dt < ∞ for p > 1;
(ii) E|X|^{2+δ} < ∞ with some δ > 0 implies ∫{F(t)[1 − F(t)]}^{1/2} dt < ∞.
Solution. (i) If E|X|² < ∞, then, by Exercise 23 in Chapter 1,
    E|X|² = ∫_0^∞ P(|X|² > t) dt = 2 ∫_0^∞ s P(|X| > s) ds,
which implies that lim_{s→∞} s²[1 − F(s)] = 0 and lim_{s→−∞} s²F(s) = 0. Then, lim_{s→∞} s^p[1 − F(s)]^{p/2} = 0 and lim_{s→−∞} |s|^p[F(s)]^{p/2} = 0. Since p > 1, we conclude that ∫_{−∞}^0 [F(s)]^{p/2} ds < ∞ and ∫_0^∞ [1 − F(s)]^{p/2} ds < ∞, which implies that
    ∫{F(t)[1 − F(t)]}^{p/2} dt ≤ ∫_{−∞}^0 [F(t)]^{p/2} dt + ∫_0^∞ [1 − F(t)]^{p/2} dt < ∞.
(ii) Similarly, when E|X|^{2+δ} < ∞,
    E|X|^{2+δ} = ∫_0^∞ P(|X|^{2+δ} > t) dt = (2 + δ) ∫_0^∞ P(|X| > s) s^{1+δ} ds,
which implies lim_{s→∞} s^{2+δ}[1 − F(s)] = 0 and lim_{s→−∞} |s|^{2+δ}F(s) = 0. Then, lim_{s→∞} s^{1+δ/2}[1 − F(s)]^{1/2} = 0 and lim_{s→−∞} |s|^{1+δ/2}[F(s)]^{1/2} = 0. Since δ > 0, this implies that
    ∫{F(t)[1 − F(t)]}^{1/2} dt ≤ ∫_{−∞}^0 [F(t)]^{1/2} dt + ∫_0^∞ [1 − F(t)]^{1/2} dt < ∞.

Exercise 6 (#5.10). Show that pi = c/n, i = 1, ..., n, λ = −(c/n)^{n−1} is a maximum of the function
    H(p1, ..., pn, λ) = ∏_{i=1}^n pi + λ (∑_{i=1}^n pi − c)
over pi > 0, i = 1, ..., n, ∑_{i=1}^n pi = c.
Note. This exercise shows that the empirical distribution function (which puts mass n^{-1} to each of n observations) is a maximum likelihood estimator.
Solution. It suffices to show that
    ∏_{i=1}^n pi ≤ (c/n)^n
for any pi > 0, i = 1, ..., n, with ∑_{i=1}^n pi = c. Let X be a random variable taking value pi with probability n^{-1}, i = 1, ..., n. From Jensen's inequality,
    (1/n) ∑_{i=1}^n log pi = E(log X) ≤ log E(X) = log((1/n) ∑_{i=1}^n pi) = log(c/n),
which establishes the result.
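A quick numerical sanity check of Exercise 6 (not from the text): with the sum of the weights fixed at c, the product is maximized when every weight equals c/n, which is why the empirical distribution (mass 1/n per observation) maximizes the nonparametric likelihood.

```python
import random

def product(ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

n, c = 5, 1.0
best = product([c / n] * n)          # (c/n)^n, the claimed maximum

random.seed(0)
for _ in range(1000):
    # random positive weights rescaled so they sum to c
    w = [random.random() + 1e-9 for _ in range(n)]
    s = sum(w)
    ps = [c * x / s for x in w]
    assert product(ps) <= best + 1e-15
```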


Exercise 7 (#5.11). Consider the problem of maximizing ∏_{i=1}^n pi over
    pi > 0, i = 1, ..., n,   ∑_{i=1}^n pi = 1,   and   ∑_{i=1}^n pi ui = 0,
where the ui's are s-vectors. Show that the solution is
    p̂i = 1/(n(1 + λᵀui)),  i = 1, ..., n,
with λ ∈ R^s satisfying
    ∑_{i=1}^n ui/(1 + λᵀui) = 0.
Solution. Consider the Lagrange multiplier method with
    H(p1, ..., pn, τ, λ) = ∑_{i=1}^n log pi + τ (∑_{i=1}^n pi − 1) − nλᵀ ∑_{i=1}^n pi ui.
Taking the derivatives of H and setting them to 0, we obtain that
    1/pi + τ − nλᵀui = 0, i = 1, ..., n,   ∑_{i=1}^n pi = 1,   and   ∑_{i=1}^n pi ui = 0.
The solution to these equations is τ = −n and
    p̂i = 1/(n(1 + λᵀui)),  i = 1, ..., n.
Substituting p̂i into ∑_{i=1}^n pi ui = 0, we conclude that λ is the solution of
    ∑_{i=1}^n ui/(1 + λᵀui) = 0.
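An illustrative sketch of Exercise 7 in the scalar case (not from the text; it assumes the data contain both positive and negative values of u): solve the equation ∑ ui/(1 + λui) = 0 by bisection, then verify that the resulting weights p̂i = 1/(n(1 + λui)) sum to 1 and satisfy the moment constraint.

```python
def el_weights(u, tol=1e-12):
    """Empirical-likelihood-style weights for scalar u_i with mixed signs."""
    n = len(u)
    g = lambda lam: sum(ui / (1.0 + lam * ui) for ui in u)
    # feasibility (1 + lam*u_i > 0 for all i) restricts lam to an open interval
    lo = max(-1.0 / ui for ui in u if ui > 0) + 1e-9
    hi = min(-1.0 / ui for ui in u if ui < 0) - 1e-9
    while hi - lo > tol:             # g is strictly decreasing on (lo, hi)
        mid = (lo + hi) / 2.0
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    return [1.0 / (n * (1.0 + lam * ui)) for ui in u]

p = el_weights([-2.0, -1.0, 1.0, 3.0])
assert abs(sum(p) - 1.0) < 1e-9
assert abs(sum(pi * ui for pi, ui in zip(p, [-2.0, -1.0, 1.0, 3.0]))) < 1e-9
```

That the weights sum to 1 automatically at the root follows from ∑ 1/(n(1+λui)) = 1 − (λ/n)∑ ui/(1+λui).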

Exercise 8 (#5.15). Let δ1, ..., δn be n observations from a binary random variable and
    ℓ(p1, ..., pn+1) = ∏_{i=1}^n pi^{δi} (∑_{j=i+1}^{n+1} pj)^{1−δi}.
(i) Show that maximizing ℓ(p1, ..., pn+1) subject to pi ≥ 0, i = 1, ..., n+1, ∑_{i=1}^{n+1} pi = 1 is equivalent to maximizing
    ∏_{i=1}^n qi^{δi} (1 − qi)^{n−i+1−δi},
where qi = pi / ∑_{j=i}^{n+1} pj, i = 1, ..., n.
(ii) Show that
    p̂i = [δi/(n − i + 1)] ∏_{j=1}^{i−1} [1 − δj/(n − j + 1)],  i = 1, ..., n,   p̂_{n+1} = 1 − ∑_{i=1}^n p̂i
maximizes ℓ(p1, ..., pn+1) subject to pi ≥ 0, i = 1, ..., n+1, ∑_{i=1}^{n+1} pi = 1.
(iii) For any x1 ≤ x2 ≤ · · · ≤ xn, show that
    F̂(t) = ∑_{i=1}^{n+1} p̂i I_{(0,t]}(xi) = 1 − ∏_{xi≤t} [1 − δi/(n − i + 1)].
(iv) When δi = 1 for all i, show that p̂i = n^{-1}, i = 1, ..., n, and p̂_{n+1} = 0.
Note. This exercise shows that the well-known Kaplan-Meier product-limit estimator is a maximum likelihood estimator.
Solution. (i) Since
    1 − qi = 1 − pi/∑_{j=i}^{n+1} pj = ∑_{j=i+1}^{n+1} pj / ∑_{j=i}^{n+1} pj,
we have
    ∏_{i=1}^n qi^{δi}(1 − qi)^{1−δi} = ∏_{i=1}^n pi^{δi} (∑_{j=i+1}^{n+1} pj)^{1−δi} (∑_{j=i}^{n+1} pj)^{−1}.
From
    ∏_{i=1}^n (1 − qi)^{n−i} = [(∑_{j=2}^{n+1} pj)^{n−1} (∑_{j=3}^{n+1} pj)^{n−2} · · · (∑_{j=n}^{n+1} pj)] / [(∑_{j=1}^{n+1} pj)^{n−1} (∑_{j=2}^{n+1} pj)^{n−2} · · · (∑_{j=n−1}^{n+1} pj)] = ∏_{i=1}^n ∑_{j=i}^{n+1} pj,
we obtain that
    ∏_{i=1}^n qi^{δi}(1 − qi)^{n−i+1−δi} = ∏_{i=1}^n pi^{δi} (∑_{j=i+1}^{n+1} pj)^{1−δi}.
The result follows since q1, ..., qn are n free variables.
(ii) From part (i),
    log ℓ(p1, ..., pn+1) = ∑_{i=1}^n [δi log qi + (n − i + 1 − δi) log(1 − qi)].
Then the equations
    ∂ log ℓ/∂qi = δi/qi − (n − i + 1 − δi)/(1 − qi) = 0,  i = 1, ..., n,
have the solution
    q̂i = δi/(n − i + 1),  i = 1, ..., n,
which maximizes ℓ(p1, ..., pn+1) since
    ∂² log ℓ/∂qi² = −δi/qi² − (n − i + 1 − δi)/(1 − qi)² < 0.
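A sketch (not from the text) that computes the weights p̂i of part (ii) and checks the product-limit identity of part (iii): the mass placed at or before x_(k) equals one minus the Kaplan-Meier survival product.

```python
def km_weights(delta):
    """Weights p̂_i from Exercise 8(ii); delta is the list of δ_1, ..., δ_n."""
    n = len(delta)
    p, surv = [], 1.0
    for i, d in enumerate(delta):            # i = 0, ..., n-1 (0-based)
        p.append(surv * d / (n - i))         # (δ_i/(n-i+1)) Π_{j<i}(1 - δ_j/(n-j+1))
        surv *= 1.0 - d / (n - i)
    return p                                  # p̂_{n+1} = 1 - sum(p)

delta = [1, 0, 1, 1, 0, 1]
p = km_weights(delta)
n = len(delta)
for k in range(1, n + 1):                     # part (iii) at t = x_(k)
    prod = 1.0
    for i in range(k):
        prod *= 1.0 - delta[i] / (n - i)
    assert abs(sum(p[:k]) - (1.0 - prod)) < 1e-12

# part (iv): with no censoring, every observation gets mass 1/n
assert all(abs(x - 0.25) < 1e-12 for x in km_weights([1, 1, 1, 1]))
```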

…nλn, which converges to 0 under the given conditions.
(iii) Note that
    E[f̂(t)] − f(t) = λn^{-1} E(Y1n) − f(t)
                   = ∫_{−∞}^{∞} w(y)[f(t − λn y) − f(t)] dy
                   = −λn ∫_{−∞}^{∞} y w(y) f'(ξ_{t,y,n}) dy,
where |ξ_{t,y,n} − t| ≤ λn|y|. Under the condition that f' is bounded and continuous at t and ∫_{−∞}^{∞} |y| w(y) dy < ∞,
    limn √(nλn) {E[f̂(t)] − f(t)} = limn √(nλn) O(λn) = 0.
Hence the result follows from the result in part (ii).
(iv) Since f is bounded and continuous,
    limn ∫_a^b E[f̂(t)] dt = limn ∫_a^b ∫_{−∞}^{∞} w(y) f(t − λn y) dy dt = ∫_a^b ∫_{−∞}^{∞} w(y) f(t) dy dt = ∫_a^b f(t) dt.
For t ≠ s,
    E[f̂(t)f̂(s)] = (1/(n²λn²)) E[∑_{i=1}^n ∑_{j=1}^n w((t − Xi)/λn) w((s − Xj)/λn)]
                = (1/(nλn²)) E[w((t − X1)/λn) w((s − X1)/λn)] + ((n−1)/(nλn²)) E[w((t − X1)/λn)] E[w((s − X1)/λn)]
                = (1/(nλn)) ∫_{−∞}^{∞} w((t − s + λn y)/λn) w(y) f(s − λn y) dy
                  + ((n−1)/n) ∫_{−∞}^{∞} w(y) f(t − λn y) dy ∫_{−∞}^{∞} w(y) f(s − λn y) dy,
which converges to f(t)f(s) under the given conditions. Hence,
    limn Var(∫_a^b f̂(t) dt) = 0
and, therefore, ∫_a^b f̂(t) dt →p ∫_a^b f(t) dt.
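A sketch of the kernel density estimator discussed above (the notation is assumed from context: bandwidth lam and kernel density w integrating to 1, with f̂(t) = (1/(n·lam)) Σ w((t − X_i)/lam)); one easy check is that f̂ itself integrates to 1.

```python
def kde(t, data, lam, w):
    """Kernel density estimate at t, assuming w is a density."""
    n = len(data)
    return sum(w((t - x) / lam) for x in data) / (n * lam)

def w_tri(u):
    """Triangular kernel on [-1, 1]; integrates to 1."""
    return max(0.0, 1.0 - abs(u))

data = [0.0, 0.5, 1.2, 2.0]
lam = 0.4

# Riemann (midpoint) check that the estimate integrates to approximately 1
lo, hi, m = -1.0, 3.0, 4000
h = (hi - lo) / m
total = sum(kde(lo + (k + 0.5) * h, data, lam, w_tri) for k in range(m)) * h
assert abs(total - 1.0) < 1e-3
```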

Exercise 11 (#5.20). Let ℓ(θ, ξ) be a likelihood function. Show that a maximum profile likelihood estimator θ̂ of θ is an MLE if ξ(θ), the maximizer of ℓ(θ, ξ) over ξ for fixed θ, does not depend on θ.
Note. A maximum profile likelihood estimator θ̂ maximizes the profile likelihood function ℓP(θ) = ℓ(θ, ξ(θ)), where ℓ(θ, ξ(θ)) = sup_ξ ℓ(θ, ξ) for each fixed θ.
Solution. Suppose that ξ̂ satisfies ℓ(θ, ξ̂) = sup_ξ ℓ(θ, ξ) for any θ. Then the profile likelihood function is ℓP(θ) = ℓ(θ, ξ̂). If θ̂ satisfies ℓP(θ̂) = sup_θ ℓP(θ), then ℓ(θ̂, ξ̂) = ℓP(θ̂) ≥ ℓP(θ) = ℓ(θ, ξ̂) ≥ ℓ(θ, ξ) for any θ and ξ. Hence, (θ̂, ξ̂) is an MLE of (θ, ξ).

Exercise 12 (#5.21). Let (X1, ..., Xn) be a random sample from N(µ, σ²). Derive the profile likelihood function for µ or σ². Discuss in each case whether the maximum profile likelihood estimator is the same as the MLE.
Solution. The likelihood function is
    ℓ(µ, σ²) = (2π)^{−n/2} (σ²)^{−n/2} exp{−(1/(2σ²)) ∑_{i=1}^n (Xi − µ)²}.
For fixed σ², ℓ(µ, σ²) ≤ ℓ(X̄, σ²), since ∑_{i=1}^n (Xi − µ)² ≥ ∑_{i=1}^n (Xi − X̄)², where X̄ is the sample mean. Hence the maximizer X̄ does not depend on σ² and the profile likelihood function is
    ℓ(X̄, σ²) = (2π)^{−n/2} (σ²)^{−n/2} exp{−(1/(2σ²)) ∑_{i=1}^n (Xi − X̄)²}.
By the result in the previous exercise, the profile MLE of σ² is the same as the MLE of σ². This can also be shown directly by verifying that ℓ(X̄, σ²) is maximized at σ̂² = n^{-1} ∑_{i=1}^n (Xi − X̄)².
For fixed µ, ℓ(µ, σ²) is maximized at σ²(µ) = n^{-1} ∑_{i=1}^n (Xi − µ)². Then the profile likelihood function is
    ℓ(µ, σ²(µ)) = (2π)^{−n/2} e^{−n/2} [n / ∑_{i=1}^n (Xi − µ)²]^{n/2}.
Since ∑_{i=1}^n (Xi − µ)² ≥ ∑_{i=1}^n (Xi − X̄)², ℓ(µ, σ²(µ)) is maximized at X̄, which is the same as the MLE of µ (although σ²(µ) depends on µ).
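A minimal sketch of Exercise 12 (not from the text): for a small normal sample, the profile log-likelihood of µ, with σ² profiled out as σ²(µ) = mean((Xi − µ)²), is maximized at the sample mean.

```python
import math

def profile_loglik_mu(mu, xs):
    """log of the profile likelihood of mu for a normal sample."""
    n = len(xs)
    s2 = sum((x - mu) ** 2 for x in xs) / n          # sigma^2(mu)
    return -0.5 * n * (math.log(2.0 * math.pi * s2) + 1.0)

xs = [1.0, 2.0, 4.0, 5.0]
xbar = sum(xs) / len(xs)                             # 3.0

grid = [i / 100.0 for i in range(0, 601)]            # mu in [0, 6], step 0.01
best = max(grid, key=lambda m: profile_loglik_mu(m, xs))
assert abs(best - xbar) < 1e-9
```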


Exercise 13 (#5.23). Let (X1, ..., Xn) be a random sample from a distribution F and let π(x) = P(δi = 1|Xi = x), where δi = 1 if Xi is observed and δi = 0 if Xi is missing. Assume that 0 < π = ∫π(x)dF(x) < 1.
(i) Let F1(x) = P(Xi ≤ x|δi = 1). Show that F and F1 are the same if and only if π(x) ≡ π.
(ii) Let F̂ be the empirical distribution putting mass r^{-1} to each observed Xi, where r is the number of observed Xi's. Show that F̂(x) is unbiased and consistent for F1(x), x ∈ R.
(iii) When π(x) ≡ π, show that F̂(x) in part (ii) is unbiased and consistent for F(x), x ∈ R. When π(x) is not constant, show that F̂(x) is biased and inconsistent for F(x) for some x ∈ R.
Solution. (i) If π(x) ≡ π, then Xi and δi are independent. Hence, F1(x) = P(Xi ≤ x|δi = 1) = P(Xi ≤ x) = F(x) for any x. If F1(x) = F(x) for any x, then P(Xi ≤ x, δi = 1) = P(Xi ≤ x)P(δi = 1) for any x and, hence, Xi and δi are independent. Thus, π(x) ≡ π.
(ii) Note that
    F̂(x) = ∑_{i=1}^n δi I_{(−∞,x]}(Xi) / ∑_{i=1}^n δi.
Since E[δi I_{(−∞,x]}(Xi)|δi] = δi F1(x), we obtain that
    E[F̂(x)] = E{E[F̂(x)|δ1, ..., δn]} = E[∑_{i=1}^n E(δi I_{(−∞,x]}(Xi)|δi) / ∑_{i=1}^n δi] = E[∑_{i=1}^n δi F1(x) / ∑_{i=1}^n δi] = F1(x),
i.e., F̂(x) is unbiased. From the law of large numbers,
    (1/n) ∑_{i=1}^n δi I_{(−∞,x]}(Xi) →p E[δ1 I_{(−∞,x]}(X1)] = E[δ1 F1(x)] = πF1(x)
and
    (1/n) ∑_{i=1}^n δi →p E(δ1) = π.
Hence, F̂(x) →p F1(x).
(iii) When π(x) ≡ π, F(x) = F1(x) (part (i)). Hence, F̂(x) is unbiased and consistent for F(x) (part (ii)). When π(x) is not constant, F1(x) ≠ F(x) for some x (part (i)). Since F̂(x) is unbiased and consistent for F1(x) (part (ii)), it is biased and inconsistent for F(x) for x at which F(x) ≠ F1(x).

Exercise 14 (#5.25). Let F be a collection of distributions on R^d. A functional T defined on F is Gâteaux differentiable at G ∈ F if and only if there is a linear functional LG on D = {c(G1 − G2) : c ∈ R, Gj ∈ F, j = 1, 2} (i.e., LG(c1∆1 + c2∆2) = c1LG(∆1) + c2LG(∆2) for any ∆j ∈ D and cj ∈ R) such that ∆ ∈ D and G + t∆ ∈ F imply
    lim_{t→0} [(T(G + t∆) − T(G))/t − LG(∆)] = 0.
Assume that the functional LF is continuous in the sense that ‖∆j − ∆‖∞ → 0 implies LF(∆j) → LF(∆), where ∆ ∈ D, ∆j ∈ D, and ‖D‖∞ = sup_x |D(x)| for any D ∈ D is the sup-norm. Show that φF(x) = LF(δx − F) is a bounded function of x, where δx is the degenerate distribution at x.
Solution. Suppose that φF is unbounded. Then, there exists a sequence {xn} of numbers such that limn |φF(xn)| = ∞. Let tn = |φF(xn)|^{−1/2} and Hn = tn(δxn − F). Then Hn ∈ D and, by the linearity of LF,
    |LF(Hn)| = tn|LF(δxn − F)| = tn|φF(xn)| = |φF(xn)|^{1/2} → ∞
as n → ∞. On the other hand, ‖Hn‖∞ ≤ tn → 0 implies LF(Hn) → LF(0) = 0 if LF is continuous. This contradiction shows that φF is bounded.

Exercise 15 (#5.26). Suppose that a functional T is Gâteaux differentiable at F with a bounded and continuous influence function φF(x) = LF(δx − F), where δx is the degenerate distribution at x. Show that LF is continuous in the sense described in the previous exercise.
Solution. From the linearity of LF,
    ∫φF(x)dG = ∫LF(δx − F)dG = LF(∫δx dG − F) = LF(G − F)
for any distribution G. Hence,
    LF(D) = ∫φF(x)dD,  D ∈ D.
If ‖Dj − D‖∞ → 0, then, since φF is bounded and continuous, ∫φF(x)dDj → ∫φF(x)dD. Hence, LF(Dj) → LF(D).
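A small exact computation (a hypothetical two-point example, not from the text) illustrating the bias in Exercise 13(iii): when the observation probability π(x) depends on x, the conditional distribution F1(x) = P(X ≤ x | δ = 1) that the complete-case empirical CDF targets differs from F(x).

```python
support = [0.0, 1.0]
f = {0.0: 0.5, 1.0: 0.5}        # F puts mass 1/2 at 0 and at 1
pi = {0.0: 0.2, 1.0: 0.8}       # P(observed | X = x); deliberately non-constant

p_obs = sum(f[x] * pi[x] for x in support)                  # overall pi = 0.5
F1 = lambda t: sum(f[x] * pi[x] for x in support if x <= t) / p_obs
F = lambda t: sum(f[x] for x in support if x <= t)

assert abs(p_obs - 0.5) < 1e-12
assert abs(F1(0.0) - 0.2) < 1e-12    # complete-case CDF targets 0.2 at x = 0
assert abs(F(0.0) - 0.5) < 1e-12     # ...but the true F(0) is 0.5
```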

Exercise 16 (#5.29). Let F be the collection of all distributions on R and z be a fixed real number. Define
    T(G) = ∫G(z − y)dG(y),  G ∈ F.
Obtain the influence function φF for T and show that φF is continuous if and only if F is continuous.
Solution. For G ∈ F and ∆ ∈ D = {c(G1 − G2) : c ∈ R, Gj ∈ F, j = 1, 2},
    T(G + t∆) − T(G) = ∫(G + t∆)(z − y)d(G + t∆)(y) − ∫G(z − y)dG(y)
                     = 2t ∫∆(z − y)dG(y) + t² ∫∆(z − y)d∆(y).
Hence,
    lim_{t→0} [T(G + t∆) − T(G)]/t = 2 ∫∆(z − y)dG(y)
and the influence function is
    φF(x) = 2 ∫(δx − F)(z − y)dF(y) = 2 [F(z − x) − ∫F(z − y)dF(y)],
where δx is the degenerate distribution at x. Hence φF is continuous if and only if F is continuous.

Exercise 17 (#5.34). An L-functional is defined as
    T(G) = ∫xJ(G(x))dG(x),  G ∈ F0,
where F0 contains all distributions on R for which T is well defined and J(t) is a Borel function on [0, 1].
(i) Show that the influence function is
    φF(x) = −∫_{−∞}^{∞} (δx − F)(y)J(F(y))dy,
where δx is the degenerate distribution at x.
(ii) Show that ∫φF(x)dF(x) = 0 and, if J is bounded and F has a finite second moment, then ∫[φF(x)]²dF(x) < ∞.
Solution. (i) For F and G in F0,
    T(G) − T(F) = ∫xJ(G(x))dG(x) − ∫xJ(F(x))dF(x)
                = ∫_0^1 [G^{-1}(t) − F^{-1}(t)]J(t)dt
                = ∫_0^1 ∫_{F^{-1}(t)}^{G^{-1}(t)} dx J(t)dt
                = ∫_{−∞}^{∞} ∫_{G(x)}^{F(x)} J(t)dt dx
                = ∫_{−∞}^{∞} [F(x) − G(x)]J(F(x))dx − ∫_{−∞}^{∞} UG(x)[G(x) − F(x)]J(F(x))dx,
where
    UG(x) = ∫_{F(x)}^{G(x)} J(t)dt / {[G(x) − F(x)]J(F(x))} − 1   if G(x) ≠ F(x) and J(F(x)) ≠ 0,
    UG(x) = 0   otherwise,

and the fourth equality follows from Fubini's theorem and the fact that the region in R² between the curves F(x) and G(x) is the same as the region in R² between the curves G^{-1}(t) and F^{-1}(t). Then, for any ∆ ∈ D = {c(G1 − G2) : c ∈ R, Gj ∈ F, j = 1, 2},
    lim_{t→0} [T(F + t∆) − T(F)]/t = −∫_{−∞}^{∞} ∆(x)J(F(x))dx,
since lim_{t→0} U_{F+t∆}(x) = 0 and, by the dominated convergence theorem,
    lim_{t→0} ∫_{−∞}^{∞} U_{F+t∆}(x)∆(x)J(F(x))dx = 0.
Letting ∆ = δx − F, we obtain the influence function as claimed.
(ii) By Fubini's theorem,
    ∫φF(x)dF(x) = −∫_{−∞}^{∞} [∫(δx − F)(y)dF(x)] J(F(y))dy = 0,
since ∫δx(y)dF(x) = F(y). Suppose now that |J| < C for a constant C. Then
    |φF(x)| ≤ C ∫_{−∞}^{∞} |δx(y) − F(y)|dy
            = C [∫_{−∞}^{x} F(y)dy + ∫_{x}^{∞} [1 − F(y)]dy]
            ≤ C [|x| + ∫_{−∞}^{0} F(y)dy + ∫_{0}^{∞} [1 − F(y)]dy]
            = C(|x| + E|X|),
where X is the random variable having distribution F. Thus,
    [φF(x)]² ≤ C²(|x| + E|X|)²
and ∫[φF(x)]²dF(x) < ∞ when EX² < ∞.

Exercise 18 (#5.37). Obtain explicit forms of the influence functions for L-functionals (Exercise 17) in the following cases and discuss which of them are bounded and continuous.
(i) J ≡ 1.
(ii) J(t) = 4t − 2.
(iii) J(t) = (β − α)^{-1} I_{(α,β)}(t) for some constants α < β.
Solution. (i) When J ≡ 1, T(G) = ∫xdG(x) is the mean functional (F0 is the collection of all distributions with finite means). From the previous

exercise, the influence function is
    φF(x) = −∫_{−∞}^{∞} [δx(y) − F(y)]dy
          = ∫_{−∞}^{x} F(y)dy − ∫_{x}^{∞} [1 − F(y)]dy
          = ∫_{0}^{x} dy + ∫_{−∞}^{0} F(y)dy − ∫_{0}^{∞} [1 − F(y)]dy
          = x − ∫_{−∞}^{∞} y dF(y).
This influence function is continuous, but not bounded.
(ii) When J(t) = 4t − 2,
    φF(x) = 2 ∫_{−∞}^{x} F(y)[2F(y) − 1]dy − 2 ∫_{x}^{∞} [1 − F(y)][2F(y) − 1]dy.
Clearly, φF is continuous. Since
    lim_{x→∞} ∫_{−∞}^{x} F(y)[2F(y) − 1]dy = ∫_{−∞}^{∞} F(y)[2F(y) − 1]dy = ∞
and
    lim_{x→∞} ∫_{x}^{∞} [1 − F(y)][2F(y) − 1]dy = 0,
we conclude that lim_{x→∞} φF(x) = ∞. Similarly, lim_{x→−∞} φF(x) = −∞. Hence, φF is not bounded.
(iii) When J(t) = (β − α)^{-1} I_{(α,β)}(t),
    φF(x) = −(1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} [δx(y) − F(y)]dy,
which is continuous. φF is also bounded, since
    |φF(x)| ≤ (1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} |δx(y) − F(y)|dy ≤ [F^{-1}(β) − F^{-1}(α)]/(β − α).

Exercise 19. In part (iii) of the previous exercise, show that if F is continuous at F^{-1}(α) and F^{-1}(β), then
    φF(x) = [F^{-1}(α)(1 − α) − F^{-1}(β)(1 − β)]/(β − α) − T(F)   if x < F^{-1}(α),
    φF(x) = [x − F^{-1}(α)α − F^{-1}(β)(1 − β)]/(β − α) − T(F)     if F^{-1}(α) ≤ x ≤ F^{-1}(β),
    φF(x) = [F^{-1}(β)β − F^{-1}(α)α]/(β − α) − T(F)               if x > F^{-1}(β).


Solution. When x < F^{-1}(α),
    φF(x) = −(1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} [1 − F(y)]dy
          = −(1/(β − α)) y[1 − F(y)] |_{F^{-1}(α)}^{F^{-1}(β)} − (1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} y dF(y)
          = [F^{-1}(α)(1 − α) − F^{-1}(β)(1 − β)]/(β − α) − T(F),
since F(F^{-1}(α)) = α and F(F^{-1}(β)) = β. Similarly, when x > F^{-1}(β),
    φF(x) = (1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} F(y)dy
          = (1/(β − α)) yF(y) |_{F^{-1}(α)}^{F^{-1}(β)} − (1/(β − α)) ∫_{F^{-1}(α)}^{F^{-1}(β)} y dF(y)
          = [F^{-1}(β)β − F^{-1}(α)α]/(β − α) − T(F).
Finally, consider the case of F^{-1}(α) ≤ x ≤ F^{-1}(β). Then
    φF(x) = (1/(β − α)) ∫_{F^{-1}(α)}^{x} F(y)dy − (1/(β − α)) ∫_{x}^{F^{-1}(β)} [1 − F(y)]dy
          = (1/(β − α)) yF(y) |_{F^{-1}(α)}^{x} − (1/(β − α)) ∫_{F^{-1}(α)}^{x} y dF(y)
            − (1/(β − α)) y[1 − F(y)] |_{x}^{F^{-1}(β)} − (1/(β − α)) ∫_{x}^{F^{-1}(β)} y dF(y)
          = [x − F^{-1}(α)α − F^{-1}(β)(1 − β)]/(β − α) − T(F).
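A numeric spot-check of the piecewise formula in Exercise 19 (not from the text; it assumes the uniform(0,1) distribution, for which F(x) = x, F^{-1}(t) = t, and T(F) = (α + β)/2):

```python
a, b = 0.25, 0.75                      # alpha and beta; F^{-1}(alpha) = alpha, etc.
T = (a + b) / 2.0                      # T(F) for uniform(0,1)

def phi_direct(x, m=50000):
    """phi_F(x) = -(b-a)^{-1} * integral of [delta_x(y) - F(y)] over (a, b)."""
    h = (b - a) / m
    s = sum((1.0 if a + (k + 0.5) * h >= x else 0.0) - (a + (k + 0.5) * h)
            for k in range(m))
    return -s * h / (b - a)

def phi_formula(x):
    if x < a:
        return (a * (1 - a) - b * (1 - b)) / (b - a) - T
    if x > b:
        return (b * b - a * a) / (b - a) - T
    return (x - a * a - b * (1 - b)) / (b - a) - T

for x in [0.1, 0.3, 0.5, 0.6, 0.9]:
    assert abs(phi_direct(x) - phi_formula(x)) < 1e-4
```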

Exercise 20 (#5.67, #5.69, #5.74). Let T be an L-functional defined in Exercise 17.
(i) Show that T(F) = θ if F is symmetric about θ, J is symmetric about 1/2, and ∫_0^1 J(t)dt = 1.
(ii) Assume that
    σF² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} J(F(x))J(F(y))[F(min{x, y}) − F(x)F(y)] dx dy
is finite. Show that σF² = ∫[φF(x)]²dF(x), where φF is the influence function of T.
(iii) Show that if J ≡ 1, then σF² in (ii) is equal to the variance of F.
Solution. (i) If F is symmetric about θ, then F(x) = F0(x − θ), where F0 is a cumulative distribution function that is symmetric about 0, i.e., F0(x) = 1 − F0(−x). Also, J(t) = J(1 − t). Then
    ∫xJ(F0(x))dF0(x) = ∫xJ(1 − F0(−x))dF0(x) = ∫xJ(F0(−x))dF0(x) = −∫yJ(F0(y))dF0(y),
i.e., ∫xJ(F0(x))dF0(x) = 0. Hence,
    T(F) = ∫xJ(F(x))dF(x)
         = θ∫J(F(x))dF(x) + ∫(x − θ)J(F0(x − θ))dF0(x − θ)
         = θ∫_0^1 J(t)dt + ∫yJ(F0(y))dF0(y)
         = θ.
(ii) From Exercise 17, φF(x) = −∫_{−∞}^{∞} [δx(y) − F(y)]J(F(y))dy. Then
    [φF(t)]² = [∫_{−∞}^{∞} (δt − F)(y)J(F(y))dy]²
             = ∫_{−∞}^{∞} (δt − F)(y)J(F(y))dy ∫_{−∞}^{∞} (δt − F)(x)J(F(x))dx
             = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (δt − F)(x)(δt − F)(y)J(F(x))J(F(y)) dx dy.
Then the result follows from Fubini's theorem and the fact that
    ∫(δt − F)(x)(δt − F)(y)dF(t) = F(min{x, y}) − F(x)F(y).
(iii) When J ≡ 1, by part (i) of the solution to the previous exercise, φF(x) = x − ∫ydF(y). Hence, ∫[φF(x)]²dF(x) is the variance of F when J ≡ 1. The result follows from part (ii).

Exercise 21 (#5.65, #5.72, #5.73). Let T be an L-functional given in Exercise 18(iii) with β = 1 − α and α ∈ (0, 1/2), and let Fn be the empirical distribution based on a random sample from a distribution F.
(i) Let X(1) ≤ · · · ≤ X(n) be the order statistics. Show that
    X̄α = T(Fn) = (1/((1 − 2α)n)) ∑_{j=mα+1}^{n−mα} X(j),


which is called the α-trimmed sample mean.
(ii) Assume that F is continuous at F^{-1}(α) and F^{-1}(1 − α) and is symmetric about θ. Show that
    σα² = (2/(1 − 2α)²) {∫_0^{F0^{-1}(1−α)} x² dF0(x) + α[F0^{-1}(1 − α)]²},
where F0(x − θ) = F(x), is equal to the σF² in part (ii) of the previous exercise with J(t) = (1 − 2α)^{-1} I_{(α,1−α)}(t).
(iii) Show that if F0'(0) exists and is positive, then lim_{α→1/2} σα² = 1/[2F0'(0)]².
(iv) Show that if σ² = ∫x² dF0(x) < ∞, then lim_{α→0} σα² = σ².
Solution. (i) Note that
    T(Fn) = ∫xJ(Fn(x))dFn(x) = (1/n) ∑_{i=1}^n J(i/n) X(i),
since Fn(X(i)) = i/n, i = 1, ..., n. The result follows from the fact that J(i/n) is not 0 if and only if mα ≤ i ≤ n − mα.
(ii) Note that J is symmetric about 1/2. If F is symmetric about θ, then T(F) = θ (Exercise 20) and F^{-1}(α) + F^{-1}(1 − α) = 2θ. From Exercise 19 with β = 1 − α, we conclude that
    φF(x) = F0^{-1}(α)/(1 − 2α)        for x < F^{-1}(α),
    φF(x) = (x − θ)/(1 − 2α)           for F^{-1}(α) ≤ x ≤ F^{-1}(1 − α),
    φF(x) = F0^{-1}(1 − α)/(1 − 2α)    for x > F^{-1}(1 − α),
where F0^{-1}(α) = F^{-1}(α) − θ and F0^{-1}(1 − α) = F^{-1}(1 − α) − θ. Because F0^{-1}(α) = −F0^{-1}(1 − α), we obtain that
    ∫[φF(x)]²dF(x) = α[F0^{-1}(α)]²/(1 − 2α)² + α[F0^{-1}(1 − α)]²/(1 − 2α)² + ∫_{F^{-1}(α)}^{F^{-1}(1−α)} (x − θ)²/(1 − 2α)² dF(x)
                   = 2α[F0^{-1}(1 − α)]²/(1 − 2α)² + ∫_{F0^{-1}(α)}^{F0^{-1}(1−α)} x²/(1 − 2α)² dF0(x)
                   = σα².
By Exercise 20, we conclude that σα² = σF².
(iii) Note that F0^{-1}(1/2) = 0. Since F0'(0) exists and F0'(0) > 0,
    F0^{-1}(α) = (α − 1/2)/F0'(0) + Rα,
where lim_{α→1/2} Rα/(α − 1/2) = 0. Then
    [F0^{-1}(α)]² = (α − 1/2)²/[F0'(0)]² + Uα,
where lim_{α→1/2} Uα/(α − 1/2)² = 0. Hence,
    lim_{α→1/2} 2α[F0^{-1}(1 − α)]²/(1 − 2α)² = lim_{α→1/2} 2α[F0^{-1}(α)]²/[4(α − 1/2)²] = 1/[2F0'(0)]².
Note that
    ∫_0^{F0^{-1}(1−α)} x² dF0(x) = ∫_{1/2}^{1−α} [F0^{-1}(t)]² dt
and, by l'Hôpital's rule,
    lim_{α→1/2} ∫_{1/2}^{1−α} [F0^{-1}(t)]² dt / (1 − 2α)² = lim_{α→1/2} [F0^{-1}(1 − α)]² / [4(1 − 2α)] = 0.
Hence, lim_{α→1/2} σα² = 1/[2F0'(0)]².
(iv) Note that lim_{α→0} F0^{-1}(1 − α) = ∞. Since ∫x² dF0(x) < ∞, we have lim_{x→∞} x²[1 − F0(x)] = 0. Hence, lim_{α→0} α[F0^{-1}(1 − α)]² = 0. Then,
    lim_{α→0} σα² = lim_{α→0} 2 {∫_0^{F0^{-1}(1−α)} x² dF0(x) + α[F0^{-1}(1 − α)]²}
                  = 2 ∫_0^∞ x² dF0(x)
                  = σ²,
since F0 is symmetric about 0.

Exercise 22 (#5.75). Calculate σF² defined in Exercise 20(ii) with J(t) = 4t − 2 and F being the double exponential distribution with location parameter µ ∈ R and scale parameter 1.
Solution. Note that F is symmetric about µ. Using the result in part (ii) of the solution to Exercise 18 and changing the variable to z = y − µ, we obtain that
    φF(x) = 2 ∫_{−∞}^{x−µ} F0(y)[2F0(y) − 1]dy − 2 ∫_{x−µ}^{∞} [1 − F0(y)][2F0(y) − 1]dy,
where F0 is the double exponential distribution with location parameter 0 and scale parameter 1. Let X be a random variable having distribution F. Then X − µ has distribution F0 and, therefore, the distribution of φF(X) does not depend on µ and we may solve the problem by assuming that µ = 0. Note that
    φF'(x) = F(x)[4F(x) − 2] + [1 − F(x)][4F(x) − 2] = 4F(x) − 2.
Hence,
    φF(x) = ∫_0^x [4F(y) − 2]dy + c,
where c is a constant. For the double exponential distribution with location parameter 0 and scale parameter 1,
    F(y) = e^{y}/2 for y < 0   and   F(y) = 1 − e^{−y}/2 for y ≥ 0.

…where θp = F^{-1}(p). Let X(j) be the jth order statistic. Show that
    √n (X(kn) − θp) →d N(0, p(1 − p)/[F'(θp)]²).
Solution. Let pn = kn/n = p + o(n^{-1/2}) and Fn be the empirical distribution. Then X(kn) = Fn^{-1}(pn) for any n. Let t ∈ R, σ = √(p(1 − p))/F'(θp), and cnt = √n (pnt − pn)/√(pnt(1 − pnt)). Define pnt = F(θp + tσn^{-1/2}) and Znt = [Bn(pnt) − npnt]/√(npnt(1 − pnt)), where Bn(q) denotes a random variable having the binomial distribution with size n and probability q. Then
    P(√n (X(kn) − θp) ≤ tσ) = P(Fn^{-1}(pn) ≤ θp + tσn^{-1/2})
                            = P(pn ≤ Fn(θp + tσn^{-1/2}))
                            = P(Znt ≥ −cnt)
                            = Φ(cnt) + o(1)
by the central limit theorem and Pólya's theorem, where Φ is the cumulative distribution function of N(0, 1). The result follows if we can show that


limn cnt = t. By Taylor's expansion,
    pnt = F(θp + tσn^{-1/2}) = p + tσn^{-1/2}F'(θp) + o(n^{-1/2}).
Then, as n → ∞, pnt → p and √n (pnt − pn) = tσF'(θp) + o(1), since pn − p = o(n^{-1/2}). Hence,
    cnt = √n (pnt − pn)/√(pnt(1 − pnt)) → tσF'(θp)/√(p(1 − p)) = t.

Exercise 28 (#5.112). Let G, G1, G2, ..., be cumulative distribution functions on R. Suppose that limn supx |Gn(x) − G(x)| = 0, G is continuous, and G^{-1} is continuous at p ∈ (0, 1).
(i) Show that limn Gn^{-1}(p) = G^{-1}(p).
(ii) Show that the result in (i) holds for any p ∈ (0, 1) if G'(x) exists and is positive for any x ∈ R.
Solution. (i) Let ϵ > 0. Since limn supx |Gn(x) − G(x)| = 0 and G is continuous, limn Gn(G^{-1}(p − ϵ)) = G(G^{-1}(p − ϵ)) = p − ϵ < p. Hence, for sufficiently large n, Gn(G^{-1}(p − ϵ)) ≤ p, i.e., G^{-1}(p − ϵ) ≤ Gn^{-1}(p). Thus,
    G^{-1}(p − ϵ) ≤ lim inf_n Gn^{-1}(p).
Similarly,
    G^{-1}(p + ϵ) ≥ lim sup_n Gn^{-1}(p).
Letting ϵ → 0, by the continuity of G^{-1} at p, we conclude that limn Gn^{-1}(p) = G^{-1}(p).
(ii) If G'(x) exists and is positive for any x ∈ R, then G^{-1} is continuous on (0, 1). The result follows from the result in (i).

Exercise 29 (#5.47). Calculate the asymptotic relative efficiency of the Hodges-Lehmann estimator with respect to the sample mean based on a random sample from F when
(i) F is the cumulative distribution function of N(µ, σ²);
(ii) F is the cumulative distribution function of the logistic distribution with location parameter µ and scale parameter σ;
(iii) F is the cumulative distribution function of the double exponential distribution with location parameter µ and scale parameter σ;
(iv) F(x) = F0(x − µ), where F0(x) is the cumulative distribution function of the t-distribution tν with ν ≥ 3.
Solution. In any case, as estimators of µ, the sample mean is asymptotically normal with asymptotic mean squared error Var(X)/n, where X denotes a random variable with distribution F, and the Hodges-Lehmann estimator is asymptotically normal with asymptotic mean squared error


(12nγ²)^{-1}, where γ = ∫[F'(x)]²dx (e.g., Example 5.8 in Shao, 2003). Hence, the asymptotic relative efficiency to be calculated is 12γ²Var(X).
(i) In this case, Var(X) = σ² and
    γ = (1/(2πσ²)) ∫_{−∞}^{∞} e^{−(x−µ)²/σ²} dx = (1/(2πσ)) ∫_{−∞}^{∞} e^{−x²} dx = 1/(2√π σ).
Hence, the asymptotic relative efficiency is 12γ²Var(X) = 3/π.
(ii) Note that Var(X) = σ²π²/3 and
    γ = (1/σ²) ∫_{−∞}^{∞} e^{−2(x−µ)/σ}/[1 + e^{−(x−µ)/σ}]⁴ dx = (1/σ) ∫_{−∞}^{∞} e^{2x}/(1 + e^{x})⁴ dx = 1/(6σ)
(Exercise 66 in Chapter 4). Hence, the asymptotic relative efficiency is 12γ²Var(X) = π²/9.
(iii) In this case, Var(X) = 2σ² and
    γ = (1/(4σ²)) ∫_{−∞}^{∞} e^{−2|x−µ|/σ} dx = (1/(4σ)) ∫_{−∞}^{∞} e^{−2|x|} dx = 1/(4σ).
Hence, the asymptotic relative efficiency is 12γ²Var(X) = 3/2.
(iv) Note that Var(X) = ν/(ν − 2) and
    γ = ∫_{−∞}^{∞} {Γ((ν+1)/2)/[√(νπ) Γ(ν/2)]}² (1 + x²/ν)^{−(ν+1)} dx = [Γ((ν+1)/2)]² Γ((2ν+1)/2) / {√(νπ) [Γ(ν/2)]² Γ(ν + 1)}.
Hence, the asymptotic relative efficiency is
    12γ²Var(X) = 12 [Γ((ν+1)/2)]⁴ [Γ((2ν+1)/2)]² / {(ν − 2)π [Γ(ν/2)]⁴ [Γ(ν + 1)]²}.

Exercise 30 (#5.61, #5.62, #5.63). Consider a random sample from a distribution F on R. In each of the following cases, obtain the asymptotic relative efficiency of the sample median with respect to the sample mean.
(i) F is the cumulative distribution function of the uniform distribution on the interval (θ − 1/2, θ + 1/2), θ ∈ R.
(ii) F(x) = F0(x − θ) and F0 is the cumulative distribution function with Lebesgue density (1 + x²)^{-1} I_{(−c,c)}(x) / ∫_{−c}^{c} (1 + t²)^{-1} dt.
(iii) F(x) = (1 − ε)Φ((x − µ)/σ) + εD((x − µ)/σ), where ε ∈ (0, 1) is a known constant, Φ is the cumulative distribution function of the standard normal distribution, D is the cumulative distribution function of the double exponential distribution with location parameter 0 and scale parameter 1, and µ ∈ R and σ > 0 are unknown parameters.
Solution. In each case, the asymptotic relative efficiency of the sample


median with respect to the sample mean is 4[F'(θ)]²Var(X1).
(i) Let θ be the mean of F. In this case, Var(X1) = 1/12 and F'(θ) = 1. Hence, the asymptotic relative efficiency of the sample median with respect to the sample mean is 1/3.
(ii) The Lebesgue density of F0 is
    f(x) = I_{(−c,c)}(x)/[2 arctan(c)(1 + x²)].
Hence, F'(θ) = [2 arctan(c)]^{-1}. Note that
    Var(X1) = ∫_{−c}^{c} x² dx/[2 arctan(c)(1 + x²)]
            = ∫_{−c}^{c} dx/[2 arctan(c)] − ∫_{−c}^{c} dx/[2 arctan(c)(1 + x²)]
            = c/arctan(c) − 1.
Therefore, the asymptotic relative efficiency of the sample median with respect to the sample mean is [c − arctan(c)]/[arctan(c)]³.
(iii) Note that
    Var(X1) = (1 − ε)σ² + 2εσ² = (1 + ε)σ²
and
    F'(µ) = (1 − ε)/(√(2π)σ) + ε/(2σ).
Hence, the asymptotic relative efficiency of the sample median with respect to the sample mean is 4[(1 − ε)/√(2π) + ε/2]²(1 + ε).

Exercise 31 (#5.64). Let (X1, ..., Xn) be a random sample from a distribution on R with Lebesgue density 2^{-1}(1 − θ²)e^{θx−|x|}, where θ ∈ (−1, 1) is unknown.
(i) Show that the median of the distribution of X1 is given by m(θ) = (1 − θ)^{-1} log(1 + θ) when θ ≥ 0 and m(θ) = −m(−θ) when θ < 0.
(ii) Show that the mean of the distribution of X1 is µ(θ) = 2θ/(1 − θ²).
(iii) Show that the inverse functions of m(θ) and µ(θ) exist. Obtain the asymptotic relative efficiency of m^{-1}(m̂) with respect to µ^{-1}(X̄), where m̂ is the sample median and X̄ is the sample mean.
(iv) Is µ^{-1}(X̄) asymptotically efficient in estimating θ?
Solution. (i) The cumulative distribution function of X1 is
    Fθ(x) = ((1 − θ)/2) e^{(1+θ)x} for x ≤ 0,   Fθ(x) = 1 − ((1 + θ)/2) e^{−(1−θ)x} for x > 0.
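A numeric check of Exercise 30(ii) (not from the text): for the truncated Cauchy density on (−c, c), compare Var(X1) computed by numerical integration with the closed form c/arctan(c) − 1, and the resulting ARE with the closed form above; c = 1 is used here for illustration.

```python
import math

c = 1.0
Z = 2.0 * math.atan(c)                      # normalizing constant
f = lambda x: 1.0 / (Z * (1.0 + x * x))     # density on (-c, c)

m = 100000
h = 2.0 * c / m
var = sum((-c + (k + 0.5) * h) ** 2 * f(-c + (k + 0.5) * h)
          for k in range(m)) * h            # midpoint rule for Var(X1)

assert abs(var - (c / math.atan(c) - 1.0)) < 1e-8
are = 4.0 * f(0.0) ** 2 * var               # 4[F'(theta)]^2 Var(X1)
assert abs(are - (c - math.atan(c)) / math.atan(c) ** 3) < 1e-7
```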

Chapter 5. Estimation in Nonparametric Models

If θ > 0, Fθ (0) =

1−θ 2

239

< 12 . Hence, the median is the solution to 1 1 + θ −(1−θ)x , =1− e 2 2

i.e., (1 − θ)−1 log(1 + θ) = m(θ). If θ < 0, then the median is the solution to 1 1 − θ (1+θ)x , = e 2 2 i.e., −(1 + θ)−1 log(1 − θ) = −m(−θ). If θ = 0, the median is clearly 0 = m(0). (ii) The mean of X1 is  0    ∞ 1 − θ2 ∞ θx−|x| 1 − θ2 (1+θ)x −(1−θ)x xe dx = xe dx + xe dx 2 2 −∞ −∞ 0   1 1 1 − θ2 − + = 2 2 (1 + θ) (1 − θ)2 2θ . = 1 − θ2 (iii) Since µ (θ) =

2 4θ2 + > 0, 2 1−θ (1 − θ2 )2

µ(θ) is increasing in θ and, thus, the inverse function µ−1 exists. For θ ≥ 0, m(θ) is the product of log(1+θ) and (1− θ)−1 , both of which are increasing in θ. Hence, m(θ) is increasing in θ for θ ∈ [0, 1). Since m(θ) = −m(−θ) for θ < 0, the median function m(θ) is increasing in θ. Hence, the inverse function m−1 exists. When θ ≥ 0, the density of X1 evaluated at the median m(θ) is equal to 1 − θ2 θm(θ)−|m(θ)| 1 − θ2 −(1−θ)m(θ) 1−θ e e . = = 2 2 2 When θ < 0, the density of X1 evaluated at the median −m(−θ) is equal to 1 − θ2 −θm(−θ)−|−m(−θ)| 1 − θ2 −(1+θ)m(−θ) 1+θ = = e e . 2 2 2 In any case, the density of X1 evaluated at the median is (1−|θ|)/2. By the asymptotic theory for sample median (e.g., Theorem 5.10 in Shao, 2003), 

√ 1 . n[m ˆ − m(θ)] →d N 0, (1 − |θ|)2 When θ ≥ 0, m (θ) =

1 log(1 + θ) + . 1 − θ2 (1 − θ)2

240

Chapter 5. Estimation in Nonparametric Models

Hence, for θ ∈ (−1, 1),

    m′(θ) = 1/(1 − θ²) + log(1 + |θ|)/(1 − |θ|)²
          = [1 − |θ| + (1 + |θ|) log(1 + |θ|)] / [(1 + |θ|)(1 − |θ|)²].

By the δ-method,

    √n [m⁻¹(m̂) − θ] →_d N(0, (1 − θ²)² / [1 − |θ| + (1 + |θ|) log(1 + |θ|)]²).

For µ⁻¹(X̄), it is shown in the next part of the solution that

    √n [µ⁻¹(X̄) − θ] →_d N(0, (1 − θ²)² / [2(1 + θ²)]).

Hence, the asymptotic relative efficiency of m⁻¹(m̂) with respect to µ⁻¹(X̄) is

    [1 − |θ| + (1 + |θ|) log(1 + |θ|)]² / [2(1 + θ²)].

(iv) The likelihood function is

    ℓ(θ) = 2⁻ⁿ (1 − θ²)ⁿ exp( nθX̄ − Σ_{i=1}^n |Xᵢ| ).

Then,

    ∂ log ℓ(θ)/∂θ = n/(1 + θ) − n/(1 − θ) + nX̄

and

    ∂² log ℓ(θ)/∂θ² = −n/(1 + θ)² − n/(1 − θ)² < 0.

Hence, the solution to the likelihood equation n/(1 + θ) − n/(1 − θ) + nX̄ = 0 is the MLE of θ. Since n/(1 + θ) − n/(1 − θ) = −nµ(θ), we conclude that µ⁻¹(X̄) is the MLE of θ. Since the distribution of X₁ is from an exponential family, µ⁻¹(X̄) is asymptotically efficient and √n [µ⁻¹(X̄) − θ] →_d N(0, [I₁(θ)]⁻¹), where

    I₁(θ) = 1/(1 + θ)² + 1/(1 − θ)² = 2(1 + θ²)/(1 − θ²)².
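As a quick numeric sanity check (a sketch added here, not part of the original solution), the asymptotic relative efficiency of the median-based estimator m⁻¹(m̂) with respect to the MLE µ⁻¹(X̄) can be evaluated on a grid. Since the MLE is asymptotically efficient, the ratio should never exceed 1, and at θ = 0 it equals exactly 1/2:

```python
import math

def are_median_vs_mle(theta):
    """ARE of m^{-1}(m-hat) w.r.t. mu^{-1}(X-bar): the ratio of the MLE's
    asymptotic variance to the median-based estimator's; the common
    (1 - theta^2)^2 factors cancel, leaving
    [1 - |t| + (1 + |t|) log(1 + |t|)]^2 / [2 (1 + t^2)]."""
    a = abs(theta)
    num = (1 - a + (1 + a) * math.log(1 + a)) ** 2
    return num / (2 * (1 + theta ** 2))

grid = [i / 100 - 0.99 for i in range(199)]  # theta in (-1, 1)
values = [are_median_vs_mle(t) for t in grid]
```

The efficiency hovers near 1/2 across the whole parameter range, so the median-based estimator loses roughly half the information relative to the MLE.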

Exercise 32 (#5.70, #5.71). Obtain the asymptotic relative efficiency of the trimmed sample mean X̄_α (Exercise 21) with respect to
(i) the sample mean, based on a random sample of size n from the double exponential distribution with location parameter θ ∈ R and scale parameter 1;
(ii) the sample median, based on a random sample of size n from the Cauchy distribution with location parameter θ ∈ R and scale parameter 1.
Solution. (i) Let F₀ be the double exponential distribution with location parameter 0 and scale parameter 1. The variance of F₀ is 2. Hence, the asymptotic relative efficiency of the trimmed sample mean X̄_α with respect to the sample mean is 2/σ_α², where σ_α² is given in Exercise 21(ii). Note that F₀⁻¹(1 − α) = −log(2α). Hence,

    σ_α² = [1/(1 − 2α)²] ∫_0^{−log(2α)} x² e^{−x} dx + 2α[log(2α)]²/(1 − 2α)²
         = 2{2α[log(2α) − 1] + 1}/(1 − 2α)².

Thus, the asymptotic relative efficiency is (1 − 2α)²/{2α[log(2α) − 1] + 1}.
(ii) Let F₀ be the Cauchy distribution with location parameter 0 and scale parameter 1. The density of F₀ at 0 is 1/π, so the asymptotic variance of the sample median is π²/4, and F₀⁻¹(1 − α) = tan(π/2 − πα). Hence, the asymptotic relative efficiency of the trimmed sample mean X̄_α with respect to the sample median is π²/(4σ_α²), where

    σ_α² = [2/(1 − 2α)²] ∫_0^{tan(π/2−πα)} x²/[π(1 + x²)] dx + 2α[tan(π/2 − πα)]²/(1 − 2α)²
         = [2/(1 − 2α)²] { tan(π/2 − πα)/π − ∫_0^{tan(π/2−πα)} dx/[π(1 + x²)] } + 2α[tan(π/2 − πα)]²/(1 − 2α)²
         = [2/(1 − 2α)²] { tan(π/2 − πα)/π − (1 − 2α)/2 } + 2α[tan(π/2 − πα)]²/(1 − 2α)²
         = 2 tan(π/2 − πα)/[π(1 − 2α)²] − 1/(1 − 2α) + 2α[tan(π/2 − πα)]²/(1 − 2α)².
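The closed-form efficiency in part (i) can be checked by simulation (a sketch added here, not part of the original solution; the trimming convention, dropping ⌊nα⌋ observations from each tail, is an assumption that matches the asymptotics only approximately at finite n):

```python
import math, random

def trimmed_mean(xs, alpha):
    """alpha-trimmed mean: drop the floor(n*alpha) smallest and largest."""
    xs = sorted(xs)
    k = int(len(xs) * alpha)
    xs = xs[k:len(xs) - k]
    return sum(xs) / len(xs)

def are_formula(alpha):
    """ARE of the trimmed mean w.r.t. the mean under double exponential:
    2/sigma_alpha^2 with sigma_alpha^2 = 2{2a[log(2a)-1]+1}/(1-2a)^2."""
    s2 = 2 * (2 * alpha * (math.log(2 * alpha) - 1) + 1) / (1 - 2 * alpha) ** 2
    return 2 / s2

random.seed(0)
alpha, n, reps = 0.125, 200, 3000
tm, sm = [], []
for _ in range(reps):
    xs = []
    for _ in range(n):
        # Laplace(0,1) via the inverse cdf (u clipped away from 0 and 1)
        u = min(max(random.random(), 1e-12), 1 - 1e-12)
        xs.append(math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u)))
    tm.append(trimmed_mean(xs, alpha))
    sm.append(sum(xs) / n)

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

mc_are = var(sm) / var(tm)  # Monte Carlo efficiency of trimmed mean vs mean
```

For α = 0.125 the formula gives about 1.39, so moderate trimming beats the mean for double exponential data, and the Monte Carlo ratio should land near that value.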

Exercise 33 (#5.85). Let (X₁, ..., Xₙ) be a random sample from a distribution F on R that is symmetric about θ ∈ R. Huber's estimator of θ is defined as a solution of Σ_{i=1}^n ψ(Xᵢ, t) = 0, where

    ψ(x, t) = C      if t − x > C,
            = t − x  if |x − t| ≤ C,
            = −C     if t − x < −C,

and C > 0 is a constant. Assume that F is continuous at θ − C and θ + C.
(i) Show that the function

    Ψ(γ) = ∫_{γ−C}^{γ+C} (γ − x) dF(x) + C F(γ − C) − C[1 − F(γ + C)]

is differentiable at θ and Ψ(θ) = 0.
(ii) Show that the asymptotic relative efficiency of Huber's estimator with respect to the sample mean is Var(X₁)/σ_F², where

    σ_F² = { ∫_{θ−C}^{θ+C} (θ − x)² dF(x) + C² F(θ − C) + C²[1 − F(θ + C)] } / [F(θ + C) − F(θ − C)]².

Solution. (i) Since F is symmetric about θ, F(θ − C) = 1 − F(θ + C), dF(θ − y) = dF(θ + y), and

    ∫_{θ−C}^{θ+C} (θ − x) dF(x) = −∫_{−C}^{C} y dF(θ + y) = ∫_{−C}^{C} y dF(θ − y),

where the first equality follows by considering x = θ + y and the second equality follows by considering x = θ − y. Since dF(θ − y) = dF(θ + y), the two integrals on the right are negatives of each other. Hence ∫_{θ−C}^{θ+C} (θ − x) dF(x) = 0 and, thus, Ψ(θ) = 0. From integration by parts,

    ∫_{γ−C}^{γ+C} (γ − x) dF(x) = −C[F(γ + C) + F(γ − C)] + ∫_{γ−C}^{γ+C} F(x) dx.

Hence,

    Ψ(γ) = ∫_{γ−C}^{γ+C} F(x) dx − C,

which is differentiable at θ and Ψ′(θ) = F(θ + C) − F(θ − C).
(ii) The function

    ∫ [ψ(x, γ)]² dF(x) = ∫_{γ−C}^{γ+C} (γ − x)² dF(x) + C² F(γ − C) + C²[1 − F(γ + C)]

is continuous at θ. Hence, by the result in (i) and Theorem 5.13(i) in Shao (2003), Huber's estimator θ̂ satisfies √n(θ̂ − θ) →_d N(0, σ_F²). This proves the result.

Exercise 34 (#5.86). For Huber's estimator θ̂ in the previous exercise, obtain a formula e(F) for the asymptotic relative efficiency of θ̂ with respect to the sample mean, when

    F(x) = (1 − ε)Φ((x − θ)/σ) + εΦ((x − θ)/(τσ)),

where Φ is the cumulative distribution function of N(0, 1), σ > 0, τ > 0, and 0 < ε < 1. Show that lim_{τ→∞} e(F) = ∞. Find the value of e(F) when ε = 0, σ = 1, and C = 1.5.


Solution. The variance of F is (1 − ε)σ² + ετ²σ². Let σ_F² be given in the previous exercise. Then

    e(F) = [(1 − ε)σ² + ετ²σ²] / σ_F²,

where

    σ_F² = { ∫_{−C}^{C} y² d[(1 − ε)Φ(y/σ) + εΦ(y/(τσ))] + 2C²[(1 − ε)Φ(−C/σ) + εΦ(−C/(τσ))] }
           / { 2[(1 − ε)Φ(C/σ) + εΦ(C/(τσ))] − 1 }².

Since σ_F² is a bounded function of τ, lim_{τ→∞} e(F) = ∞. When ε = 0, σ = 1, and C = 1.5,

    e(F) = [2Φ(C) − 1]² / { ∫_{−C}^{C} y² (1/√(2π)) e^{−y²/2} dy + 2C²Φ(−C) }
         = [2Φ(C) − 1]² / { −√(2/π) C e^{−C²/2} + Φ(C) − Φ(−C) + 2C²Φ(−C) }
         = (0.8664)² / (−0.3886 + 0.8664 + 0.3006)
         = 0.9643.

Exercise 35 (#5.99). Consider the L-functional T defined in Exercise 17. Let Fₙ be the empirical distribution based on a random sample of size n from a distribution F, σ_F² = ∫ [φ_F(x)]² dF(x), and σ_{Fₙ}² = ∫ [φ_{Fₙ}(x)]² dFₙ(x), where φ_G denotes the influence function of T at distribution G. Show that limₙ σ_{Fₙ}² = σ_F² a.s. under one of the following two conditions:
(a) J is bounded, J(t) = 0 when t ∈ [0, α] ∪ [β, 1] for some constants α < β, and the set D = {x : J is discontinuous at F(x)} has Lebesgue measure 0;
(b) J is continuous on [0, 1] and ∫ x² dF(x) < ∞.
Solution. (i) Assume condition (a). Let C = sup_x |J(x)|. Note that limₙ sup_y |Fₙ(y) − F(y)| = 0 a.s. Hence, there are constants a < b such that

    φ_F(x) = −∫_a^b (δ_x − F)(y) J(F(y)) dy

and

    φ_{Fₙ}(x) = −∫_a^b (δ_x − Fₙ)(y) J(Fₙ(y)) dy  a.s.

The condition that D has Lebesgue measure 0 ensures that

    limₙ ∫_a^b |J(F(y)) − J(Fₙ(y))| dy = 0  a.s.

Hence,

    |φ_{Fₙ}(x) − φ_F(x)| = | ∫_a^b (Fₙ − F)(y) J(Fₙ(y)) dy + ∫_a^b (δ_x − F)(y)[J(F(y)) − J(Fₙ(y))] dy |
                        ≤ C(b − a) sup_y |Fₙ(y) − F(y)| + ∫_a^b |J(F(y)) − J(Fₙ(y))| dy
                        → 0  a.s.

Since sup_x |φ_{Fₙ}(x)| ≤ C(b − a), by the dominated convergence theorem,

    limₙ ∫ [φ_{Fₙ}(x)]² dF(x) = ∫ [φ_F(x)]² dF(x)  a.s.

By the extended dominated convergence theorem (e.g., Proposition 18 in Royden, 1968, p. 232),

    limₙ ∫ [φ_{Fₙ}(x)]² d(Fₙ − F)(x) = 0  a.s.

This proves the result.
(ii) Assume condition (b). Let C = sup_x |J(x)|. From the previous proof, we still have

    φ_{Fₙ}(x) − φ_F(x) = ∫_{−∞}^{∞} (Fₙ − F)(y) J(Fₙ(y)) dy + ∫_{−∞}^{∞} (δ_x − F)(y)[J(F(y)) − J(Fₙ(y))] dy.

The first integral in the previous expression is bounded in absolute value by C ∫_{−∞}^{∞} |Fₙ − F|(y) dy, which converges to 0 a.s. by Theorem 5.2(i) in Shao (2003). The second integral in the previous expression is bounded in absolute value by

    sup_y |J(Fₙ(y)) − J(F(y))| { ∫_{−∞}^{x} F(y) dy + ∫_{x}^{∞} [1 − F(y)] dy },

which converges to 0 a.s. by the continuity of J and ∫ x² dF(x) < ∞. Hence, limₙ φ_{Fₙ}(x) = φ_F(x) a.s. for any x. The rest of the proof is the same as that in part (i) of the solution, since [φ_{Fₙ}(x)]² ≤ C² [|x| + ∫ |x| dF(x)]² (see the solution of Exercise 17) and ∫ x² dF(x) < ∞.

Exercise 36 (#5.100). Let (X₁, ..., Xₙ) be a random sample from a distribution F on R and let Uₙ be a U-statistic (see Exercise 25 in Chapter


3) with kernel h(x₁, ..., x_m) satisfying E[h(X₁, ..., X_m)]² < ∞, where m < n. Assume that ζ₁ = Var(h₁(X₁)) > 0, where h₁(x) = E[h(x, X₂, ..., X_m)]. Derive a consistent variance estimator for Uₙ.
Solution. From Exercise 25 in Chapter 3, it suffices to derive a consistent estimator of ζ₁. Since

    ζ₁ = E[h₁(X₁)]² − {E[h₁(X₁)]}² = E[h₁(X₁)]² − {E(Uₙ)}²

and Uₙ is a consistent estimator of E(Uₙ), it suffices to derive a consistent estimator of ρ = E[h₁(X₁)]². Note that

    ρ = ∫ [ ∫···∫ h(x, y₁, ..., y_{m−1}) dF(y₁)···dF(y_{m−1}) ]² dF(x)
      = ∫···∫ h(x, y₁, ..., y_{m−1}) h(x, y_m, ..., y_{2m−2}) dF(y₁)···dF(y_{2m−2}) dF(x).

Hence, a consistent estimator of ρ is the U-statistic with kernel h(x, y₁, ..., y_{m−1}) h(x, y_m, ..., y_{2m−2}).

Exercise 37 (#5.101). For Huber's estimator defined in Exercise 33, derive a consistent estimator of its asymptotic variance σ_F².
Solution. Let θ̂ be Huber's estimator of θ, Fₙ be the empirical distribution based on X₁, ..., Xₙ, and

    σ_{Fₙ}² = { ∫_{θ̂−C}^{θ̂+C} (θ̂ − x)² dFₙ(x) + C² Fₙ(θ̂ − C) + C²[1 − Fₙ(θ̂ + C)] } / [Fₙ(θ̂ + C) − Fₙ(θ̂ − C)]².

Using integration by parts, we obtain that

    σ_F² = { 2 ∫_{θ−C}^{θ+C} (θ − x) F(x) dx + C² } / [F(θ + C) − F(θ − C)]²

and

    σ_{Fₙ}² = { 2 ∫_{θ̂−C}^{θ̂+C} (θ̂ − x) Fₙ(x) dx + C² } / [Fₙ(θ̂ + C) − Fₙ(θ̂ − C)]².

To show that σ_{Fₙ}² is a consistent estimator of σ_F², it suffices to show that

    Fₙ(θ̂ + C) →_p F(θ + C)

and

    ∫_{θ̂−C}^{θ̂+C} (θ̂ − x) Fₙ(x) dx →_p ∫_{θ−C}^{θ+C} (θ − x) F(x) dx.

The first required result follows from

    |Fₙ(θ̂ + C) − F(θ + C)| ≤ |F(θ̂ + C) − F(θ + C)| + sup_x |Fₙ(x) − F(x)|,

the fact that limₙ sup_x |Fₙ(x) − F(x)| = 0 a.s., the consistency of θ̂, and the assumption that F is continuous at θ + C. Let

    g(γ) = ∫_{γ−C}^{γ+C} (γ − x) F(x) dx.

Then g(γ) is continuous at θ and, thus, g(θ̂) →_p g(θ). Note that

    ∫_{θ̂−C}^{θ̂+C} (θ̂ − x) Fₙ(x) dx = ∫_{θ̂−C}^{θ̂+C} (θ̂ − x)[Fₙ(x) − F(x)] dx + g(θ̂)

and

    | ∫_{θ̂−C}^{θ̂+C} (θ̂ − x)[Fₙ(x) − F(x)] dx | ≤ 2C² sup_x |Fₙ(x) − F(x)|.

Hence, the second required result follows.

Exercise 38 (#5.104). Let X₁, ..., Xₙ be random variables. For any estimator θ̂, its jackknife variance estimator is defined as

    v_J = [(n − 1)/n] Σ_{i=1}^n ( θ̂₋ᵢ − (1/n) Σ_{j=1}^n θ̂₋ⱼ )²,

where θ̂₋ᵢ is the same as θ̂ but is based on the n − 1 observations X₁, ..., X_{i−1}, X_{i+1}, ..., Xₙ, i = 1, ..., n. Let X̄ be the sample mean and θ̂ = X̄². Show that

    v_J = 4X̄²ĉ₂/(n − 1) − 4X̄ĉ₃/(n − 1)² + (ĉ₄ − ĉ₂²)/(n − 1)³,

where ĉ_k = n⁻¹ Σ_{i=1}^n (Xᵢ − X̄)^k, k = 2, 3, 4.
Solution. Let X̄₋ᵢ be the sample mean based on X₁, ..., X_{i−1}, X_{i+1}, ..., Xₙ, i = 1, ..., n. Then θ̂₋ᵢ = X̄₋ᵢ²,

    X̄₋ᵢ = (nX̄ − Xᵢ)/(n − 1),
    X̄₋ᵢ − X̄ = (X̄ − Xᵢ)/(n − 1),
    X̄₋ᵢ + X̄ = (X̄ − Xᵢ)/(n − 1) + 2X̄,

and

    (1/n) Σ_{i=1}^n X̄₋ᵢ² = (1/n) Σ_{i=1}^n [ X̄ + (X̄ − Xᵢ)/(n − 1) ]² = X̄² + ĉ₂/(n − 1)².


Therefore,

    v_J = [(n − 1)/n] Σ_{i=1}^n ( X̄₋ᵢ² − (1/n) Σ_{j=1}^n X̄₋ⱼ² )²
        = [(n − 1)/n] Σ_{i=1}^n ( X̄₋ᵢ² − X̄² + X̄² − (1/n) Σ_{j=1}^n X̄₋ⱼ² )²
        = [(n − 1)/n] Σ_{i=1}^n ( X̄₋ᵢ² − X̄² )² − (n − 1)( X̄² − (1/n) Σ_{j=1}^n X̄₋ⱼ² )²
        = [(n − 1)/n] Σ_{i=1}^n ( X̄₋ᵢ − X̄ )² ( X̄₋ᵢ + X̄ )² − ĉ₂²/(n − 1)³
        = [(n − 1)/n] Σ_{i=1}^n [ (X̄ − Xᵢ)/(n − 1) ]² [ (X̄ − Xᵢ)/(n − 1) + 2X̄ ]² − ĉ₂²/(n − 1)³
        = [1/(n(n − 1)³)] Σ_{i=1}^n (X̄ − Xᵢ)⁴ + [4X̄/(n(n − 1)²)] Σ_{i=1}^n (X̄ − Xᵢ)³
          + [4X̄²/(n(n − 1))] Σ_{i=1}^n (X̄ − Xᵢ)² − ĉ₂²/(n − 1)³
        = ĉ₄/(n − 1)³ − 4X̄ĉ₃/(n − 1)² + 4X̄²ĉ₂/(n − 1) − ĉ₂²/(n − 1)³
        = 4X̄²ĉ₂/(n − 1) − 4X̄ĉ₃/(n − 1)² + (ĉ₄ − ĉ₂²)/(n − 1)³.
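The jackknife identity above is exact algebra, so it can be verified to machine precision on arbitrary data (a verification sketch added here, not part of the original solution):

```python
import random

def jackknife_var_xbar2(xs):
    """Jackknife variance estimator of theta-hat = (sample mean)^2,
    computed directly from its definition."""
    n = len(xs)
    thetas = []
    for i in range(n):
        loo = xs[:i] + xs[i + 1:]          # leave-one-out sample
        m = sum(loo) / (n - 1)
        thetas.append(m * m)
    tbar = sum(thetas) / n
    return (n - 1) / n * sum((t - tbar) ** 2 for t in thetas)

def jackknife_var_formula(xs):
    """Closed form: 4 xbar^2 c2/(n-1) - 4 xbar c3/(n-1)^2 + (c4 - c2^2)/(n-1)^3."""
    n = len(xs)
    xbar = sum(xs) / n
    c = {k: sum((x - xbar) ** k for x in xs) / n for k in (2, 3, 4)}
    return (4 * xbar ** 2 * c[2] / (n - 1)
            - 4 * xbar * c[3] / (n - 1) ** 2
            + (c[4] - c[2] ** 2) / (n - 1) ** 3)

random.seed(1)
data = [random.gauss(1.0, 2.0) for _ in range(25)]
```

Both routines agree up to floating-point rounding for any data set, not just in expectation.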

Exercise 39 (#5.111). Let X₁, ..., Xₙ be random variables and X₁*, ..., Xₙ* be a random sample (i.e., a simple random sample with replacement) from X₁, ..., Xₙ. For any estimator θ̂, its bootstrap variance estimator is v_B = Var⁎(θ̂*), where θ̂* is the same as θ̂ but is based on X₁*, ..., Xₙ*, and Var⁎ is the variance with respect to the distribution of X₁*, ..., Xₙ*, given X₁, ..., Xₙ. Let X̄ be the sample mean and θ̂ = X̄². Show that

    v_B = 4X̄²ĉ₂/n + 4X̄ĉ₃/n² + [ĉ₄ + (2n − 3)ĉ₂²]/n³,

where ĉ_k = n⁻¹ Σ_{i=1}^n (Xᵢ − X̄)^k, k = 2, 3, 4.
Solution. Let E⁎ be the expectation with respect to the distribution of X₁*, ..., Xₙ*, given X₁, ..., Xₙ. Note that

    E⁎(X̄*) = E⁎(X₁*) = X̄,
    Var⁎(X̄*) = Var⁎(X₁*)/n = ĉ₂/n,

and

    E⁎[(Xᵢ* − X̄)(Xⱼ* − X̄)(X_k* − X̄)(X_l* − X̄)] = ĉ₄   if i = j = k = l,
                                                  = ĉ₂²  if the indices form two distinct pairs
                                                         (i = j ≠ k = l, i = k ≠ j = l, or i = l ≠ j = k),
                                                  = 0    otherwise.

Thus,

    E⁎(X̄* − X̄)⁴ = (1/n⁴) Σ_{1≤i,j,k,l≤n} E⁎[(Xᵢ* − X̄)(Xⱼ* − X̄)(X_k* − X̄)(X_l* − X̄)]
                = (1/n⁴) Σ_{1≤i≤n} E⁎(Xᵢ* − X̄)⁴ + (3/n⁴) Σ_{1≤i,j≤n, i≠j} E⁎[(Xᵢ* − X̄)²(Xⱼ* − X̄)²]
                = ĉ₄/n³ + 3(n − 1)ĉ₂²/n³

and, hence,

    Var⁎[(X̄* − X̄)²] = E⁎(X̄* − X̄)⁴ − [E⁎(X̄* − X̄)²]²
                    = ĉ₄/n³ + 3(n − 1)ĉ₂²/n³ − [Var⁎(X̄*)]²
                    = [ĉ₄ + (2n − 3)ĉ₂²]/n³.

Also,

    E⁎[(Xᵢ* − X̄)(Xⱼ* − X̄)(X_k* − X̄)] = ĉ₃ if i = j = k, and 0 otherwise

and, thus,

    E⁎(X̄* − X̄)³ = (1/n³) Σ_{1≤i,j,k≤n} E⁎[(Xᵢ* − X̄)(Xⱼ* − X̄)(X_k* − X̄)] = ĉ₃/n².

Let Cov⁎ be the covariance with respect to the distribution of X₁*, ..., Xₙ*, given X₁, ..., Xₙ. Then

    Cov⁎( (X̄* − X̄)², X̄* − X̄ ) = E⁎(X̄* − X̄)³ = ĉ₃/n².

Combining all the results, we obtain that

    Var⁎(θ̂*) = Var⁎(X̄*²)
             = Var⁎(X̄*² − X̄²)
             = Var⁎[ (X̄* − X̄)(X̄* − X̄ + 2X̄) ]
             = Var⁎[ (X̄* − X̄)² + 2X̄(X̄* − X̄) ]
             = Var⁎[(X̄* − X̄)²] + 4X̄² Var⁎(X̄* − X̄) + 4X̄ Cov⁎( (X̄* − X̄)², X̄* − X̄ )
             = 4X̄²ĉ₂/n + 4X̄ĉ₃/n² + [ĉ₄ + (2n − 3)ĉ₂²]/n³.
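The exact bootstrap variance of X̄² can be checked by brute force for a tiny sample: with n = 3 there are only 3³ = 27 equally likely resamples. The sketch below (an independent numeric check, not part of the original solution) enumerates them and compares the result with the closed form 4X̄²ĉ₂/n + 4X̄ĉ₃/n² + [ĉ₄ + (2n − 3)ĉ₂²]/n³:

```python
from itertools import product

def boot_var_exact(xs):
    """Exact bootstrap variance of (resample mean)^2 by enumerating all
    n^n equally likely with-replacement resamples (tiny n only)."""
    n = len(xs)
    thetas = [(sum(r) / n) ** 2 for r in product(xs, repeat=n)]
    m = sum(thetas) / len(thetas)
    return sum((t - m) ** 2 for t in thetas) / len(thetas)

def boot_var_formula(xs):
    """Closed form for v_B = Var_*(xbar*^2)."""
    n = len(xs)
    xbar = sum(xs) / n
    c = {k: sum((x - xbar) ** k for x in xs) / n for k in (2, 3, 4)}
    return (4 * xbar ** 2 * c[2] / n
            + 4 * xbar * c[3] / n ** 2
            + (c[4] + (2 * n - 3) * c[2] ** 2) / n ** 3)

data = [0.3, 1.7, 2.2]  # arbitrary small sample for the enumeration
```

Enumeration sidesteps Monte Carlo error entirely, so the two values must agree to rounding.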

Exercise 40 (#5.113). Let X₁, ..., Xₙ be a random sample from a distribution on R^k with a finite Var(X₁). Let X₁*, ..., Xₙ* be a random sample from X₁, ..., Xₙ. Show that for almost all given sequences X₁, X₂, ..., √n(X̄* − X̄) →_d N_k(0, Var(X₁)), where X̄ is the sample mean based on X₁, ..., Xₙ and X̄* is the sample mean based on X₁*, ..., Xₙ*.
Solution. Since we can take linear combinations of components of X̄* − X̄, it is enough to establish the result for the case k = 1, i.e., X₁, ..., Xₙ are random variables.
Let Yᵢ = Xᵢ* − X̄, i = 1, ..., n. Given X₁, ..., Xₙ, Y₁, ..., Yₙ are independent and identically distributed random variables with mean 0 and variance ĉ₂ = n⁻¹ Σ_{i=1}^n (Xᵢ − X̄)². Note that

    X̄* − X̄ = (1/n) Σ_{i=1}^n Yᵢ

and

    Var⁎(X̄* − X̄) = ĉ₂/n,

where Var⁎ is the variance with respect to the distribution of X₁*, ..., Xₙ*, given X₁, ..., Xₙ. To apply Lindeberg's central limit theorem, we need to check whether

    [1/(nĉ₂)] Σ_{i=1}^n E⁎( Yᵢ² I_{|Yᵢ| > ε√(nĉ₂)} )

converges to 0 as n → ∞, where ε > 0 is fixed and E⁎ is the expectation with respect to P⁎, the conditional distribution of X₁*, ..., Xₙ* given X₁, ..., Xₙ. Since the Yᵢ's are identically distributed,

    (1/n) Σ_{i=1}^n E⁎( Yᵢ² I_{|Yᵢ| > ε√(nĉ₂)} ) = E⁎( Y₁² I_{|Y₁| > ε√(nĉ₂)} ).


Note that

    E⁎( Y₁² I_{|Y₁| > ε√(nĉ₂)} ) ≤ [ max_{1≤i≤n} (Xᵢ − X̄)² ] P⁎( |Y₁| > ε√(nĉ₂) )
                                ≤ [ max_{1≤i≤n} (Xᵢ − X̄)² ] E⁎|Y₁|² / (ε²nĉ₂)
                                = max_{1≤i≤n} (Xᵢ − X̄)² / (ε²n),

which converges to 0 a.s. (Exercise 46 in Chapter 1). Thus, by Lindeberg's central limit theorem, for almost all given sequences X₁, X₂, ...,

    [1/√(nĉ₂)] Σ_{i=1}^n Yᵢ →_d N(0, 1).

The result follows since limₙ ĉ₂ = Var(X₁) a.s.
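The bootstrap central limit theorem above can be illustrated by simulation (a sketch added here, not part of the original solution): since Var⁎(X̄* − X̄) = ĉ₂/n exactly, the Monte Carlo spread of √n(X̄* − X̄) should match √ĉ₂ computed from the one observed sample.

```python
import math, random

random.seed(2)
n, B = 400, 2000
xs = [random.expovariate(1.0) for _ in range(n)]   # one observed sample
xbar = sum(xs) / n
c2 = sum((x - xbar) ** 2 for x in xs) / n

# Bootstrap distribution of z = sqrt(n) * (resample mean - xbar)
zs = []
for _ in range(B):
    rs = [random.choice(xs) for _ in range(n)]
    zs.append(math.sqrt(n) * (sum(rs) / n - xbar))

# E_*(z) = 0, so the raw second moment estimates Var_*(z) = c2 exactly
boot_sd = math.sqrt(sum(z * z for z in zs) / B)
```

Only Monte Carlo noise separates `boot_sd` from √ĉ₂; the skewness of the exponential population does not matter, because the comparison is conditional on the observed sample.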

Chapter 6

Hypothesis Tests

Exercise 1 (#6.2). Let X be a sample from a population P and consider testing hypotheses H₀: P = P₀ versus H₁: P = P₁, where P_j is a known population with probability density f_j with respect to a σ-finite measure ν, j = 0, 1. Let β(P) be the power function of a UMP (uniformly most powerful) test of size α ∈ (0, 1). Show that α < β(P₁) unless P₀ = P₁.
Solution. Suppose that α = β(P₁). Then the test T₀ ≡ α is also a UMP test by definition. By the uniqueness of the UMP test (e.g., Theorem 6.1(ii) in Shao, 2003), we must have f₁(x) = cf₀(x) a.e. ν, which implies c = 1. Therefore, f₁(x) = f₀(x) a.e. ν, i.e., P₀ = P₁.

Exercise 2 (#6.3). Let X be a sample from a population P and consider testing hypotheses H₀: P = P₀ versus H₁: P = P₁, where P_j is a known population with probability density f_j with respect to a σ-finite measure ν, j = 0, 1. For any α > 0, define

    T_α(X) = 1     if f₁(X) > c(α)f₀(X),
           = γ(α)  if f₁(X) = c(α)f₀(X),
           = 0     if f₁(X) < c(α)f₀(X),

where 0 ≤ γ(α) ≤ 1, c(α) ≥ 0, E₀[T_α(X)] = α, and E_j denotes the expectation with respect to P_j. Show that
(i) if α₁ < α₂, then c(α₁) ≥ c(α₂);
(ii) if α₁ < α₂, then the type II error probability of T_{α₁} is larger than that of T_{α₂}, i.e., E₁[1 − T_{α₁}(X)] > E₁[1 − T_{α₂}(X)].
Solution. (i) Assume α₁ < α₂. Suppose that c(α₁) < c(α₂). Then f₁(x) ≥ c(α₂)f₀(x) implies that f₁(x) > c(α₁)f₀(x) unless f₁(x) = f₀(x) = 0. Thus, T_{α₁}(x) ≥ T_{α₂}(x) a.e. ν, which implies that E₀[T_{α₁}(X)] ≥ E₀[T_{α₂}(X)]. Then α₁ ≥ α₂. This contradiction proves that c(α₁) ≥ c(α₂).
(ii) Assume α₁ < α₂. Since T_{α₁} is of level α₂ and T_{α₂} is UMP, E₁[T_{α₁}(X)] ≤ E₁[T_{α₂}(X)]. The result follows if we can show that the equality can not


hold. If E₁[T_{α₁}(X)] = E₁[T_{α₂}(X)], then T_{α₁} is also UMP. By the uniqueness of the UMP test and the fact that c(α₁) ≥ c(α₂) (part (i)),

    P_j( c(α₂)f₀(X) ≤ f₁(X) ≤ c(α₁)f₀(X) ) = 0,  j = 0, 1.

This implies that E₀[T_{α₂}(X)] = 0 < α₂. Thus, E₁[T_{α₁}(X)] < E₁[T_{α₂}(X)], i.e., E₁[1 − T_{α₁}(X)] > E₁[1 − T_{α₂}(X)].

Exercise 3 (#6.4). Let X be a sample from a population P and P₀ and P₁ be two known populations. Suppose that T⁎ is a UMP test of size α ∈ (0, 1) for testing H₀: P = P₀ versus H₁: P = P₁ and that β < 1, where β is the power of T⁎ when H₁ is true. Show that 1 − T⁎ is a UMP test of size 1 − β for testing H₀: P = P₁ versus H₁: P = P₀.
Solution. Let f_j be a probability density for P_j, j = 0, 1. By the uniqueness of the UMP test,

    T⁎(X) = 1 if f₁(X) > cf₀(X), and 0 if f₁(X) < cf₀(X).

Since α ∈ (0, 1) and β < 1, c must be a positive constant. Note that

    1 − T⁎(X) = 1 if f₀(X) > c⁻¹f₁(X), and 0 if f₀(X) < c⁻¹f₁(X).

For testing H₀: P = P₁ versus H₁: P = P₀, clearly 1 − T⁎ has size 1 − β. The fact that it is UMP follows from the Neyman-Pearson Lemma.

Exercise 4 (#6.6). Let (X₁, ..., Xₙ) be a random sample from a population on R with Lebesgue density f_θ. Let θ₀ and θ₁ be two constants. Find a UMP test of size α for testing H₀: θ = θ₀ versus H₁: θ = θ₁ in the following cases:
(i) f_θ(x) = e^{−(x−θ)} I_{(θ,∞)}(x), θ₀ < θ₁;
(ii) f_θ(x) = θx⁻² I_{(θ,∞)}(x), θ₀ ≠ θ₁.
Solution. (i) Let X₍₁₎ be the smallest order statistic. Since

    f_{θ₁}(X)/f_{θ₀}(X) = e^{n(θ₁−θ₀)} if X₍₁₎ > θ₁, and 0 if θ₀ < X₍₁₎ ≤ θ₁,

the UMP test is either

    T₁ = 1 if X₍₁₎ > θ₁, γ if θ₀ < X₍₁₎ ≤ θ₁,

or

    T₂ = γ if X₍₁₎ > θ₁, 0 if θ₀ < X₍₁₎ ≤ θ₁.


When θ = θ₀, P(X₍₁₎ > θ₁) = e^{n(θ₀−θ₁)}. If e^{n(θ₀−θ₁)} ≤ α, then T₁ is the UMP test since, under θ = θ₀,

    E(T₁) = P(X₍₁₎ > θ₁) + γP(θ₀ < X₍₁₎ ≤ θ₁)
          = e^{n(θ₀−θ₁)} + γ(1 − e^{n(θ₀−θ₁)})
          = α

with γ = (α − e^{n(θ₀−θ₁)})/(1 − e^{n(θ₀−θ₁)}). If e^{n(θ₀−θ₁)} > α, then T₂ is the UMP test since, under θ = θ₀,

    E(T₂) = γP(X₍₁₎ > θ₁) = γe^{n(θ₀−θ₁)} = α

with γ = α/e^{n(θ₀−θ₁)}.
(ii) Suppose θ₁ > θ₀. Then

    f_{θ₁}(X)/f_{θ₀}(X) = θ₁ⁿ/θ₀ⁿ if X₍₁₎ > θ₁, and 0 if θ₀ < X₍₁₎ ≤ θ₁.

The UMP test is either

    T₁ = 1 if X₍₁₎ > θ₁, γ if θ₀ < X₍₁₎ ≤ θ₁,

or

    T₂ = γ if X₍₁₎ > θ₁, 0 if θ₀ < X₍₁₎ ≤ θ₁.

When θ = θ₀, P(X₍₁₎ > θ₁) = θ₀ⁿ/θ₁ⁿ. If θ₀ⁿ/θ₁ⁿ ≤ α, then T₁ is the UMP test since, under θ = θ₀,

    E(T₁) = θ₀ⁿ/θ₁ⁿ + γ(1 − θ₀ⁿ/θ₁ⁿ) = α

with γ = (α − θ₀ⁿ/θ₁ⁿ)/(1 − θ₀ⁿ/θ₁ⁿ). If θ₀ⁿ/θ₁ⁿ > α, then T₂ is the UMP test since, under θ = θ₀,

    E(T₂) = γ θ₀ⁿ/θ₁ⁿ = α

with γ = αθ₁ⁿ/θ₀ⁿ. Suppose now that θ₁ < θ₀. Then

    f_{θ₁}(X)/f_{θ₀}(X) = θ₁ⁿ/θ₀ⁿ if X₍₁₎ > θ₀, and ∞ if θ₁ < X₍₁₎ ≤ θ₀.

The UMP test is either

    T₁ = 0 if X₍₁₎ > θ₀, γ if θ₁ < X₍₁₎ ≤ θ₀,

or

    T₂ = γ if X₍₁₎ > θ₀, 1 if θ₁ < X₍₁₎ ≤ θ₀.

When θ = θ₀, E(T₁) = 0 and E(T₂) = γ. Hence, the UMP test is T₂ with γ = α.

Exercise 5 (#6.7). Let f₁, ..., f_{m+1} be Borel functions on R^p that are integrable with respect to a σ-finite measure ν. For given constants t₁, ..., t_m, let T be the class of Borel functions φ (from R^p to [0, 1]) satisfying

    ∫ φf_i dν ≤ t_i,  i = 1, ..., m,

and T₀ be the set of φ's in T satisfying

    ∫ φf_i dν = t_i,  i = 1, ..., m.

Show that if there are constants c₁, ..., c_m such that

    φ⁎(x) = 1 if f_{m+1}(x) > c₁f₁(x) + ··· + c_m f_m(x), and 0 if f_{m+1}(x) < c₁f₁(x) + ··· + c_m f_m(x)

is a member of T₀, then φ⁎ maximizes ∫ φf_{m+1} dν over φ ∈ T₀. Show that if c_i ≥ 0 for all i, then φ⁎ maximizes ∫ φf_{m+1} dν over φ ∈ T.
Solution. Suppose that φ⁎ ∈ T₀. By the definition of φ⁎, for any other φ ∈ T₀,

    (φ⁎ − φ)(f_{m+1} − c₁f₁ − ··· − c_m f_m) ≥ 0.

Therefore

    ∫ (φ⁎ − φ)(f_{m+1} − c₁f₁ − ··· − c_m f_m) dν ≥ 0,

i.e.,

    ∫ (φ⁎ − φ)f_{m+1} dν ≥ Σ_{i=1}^m c_i ∫ (φ⁎ − φ)f_i dν = 0.

Hence φ⁎ maximizes ∫ φf_{m+1} dν over φ ∈ T₀. If c_i ≥ 0 for all i, then for φ ∈ T we still have (φ⁎ − φ)(f_{m+1} − c₁f₁ − ··· − c_m f_m) ≥ 0 and, thus,

    ∫ (φ⁎ − φ)f_{m+1} dν ≥ Σ_{i=1}^m c_i ∫ (φ⁎ − φ)f_i dν ≥ 0,

because c_i ∫ (φ⁎ − φ)f_i dν ≥ 0 for each i. Therefore φ⁎ maximizes ∫ φf_{m+1} dν over φ ∈ T.


Exercise 6 (#6.9). Let f₀ and f₁ be Lebesgue integrable functions on R and

    φ⁎(x) = 1 if f₀(x) < 0, or if f₀(x) = 0 and f₁(x) ≥ 0; 0 otherwise.

Show that φ⁎ maximizes ∫ φ(x)f₁(x) dx over all Borel functions φ on R satisfying 0 ≤ φ(x) ≤ 1 and ∫ φ(x)f₀(x) dx = ∫ φ⁎(x)f₀(x) dx.
Solution. From the definition of φ⁎,

    ∫ φ⁎(x)f₀(x) dx = ∫_{{f₀(x)<0}} f₀(x) dx.

Derive a UMP test of size α for testing H₀: θ ≤ θ₀ versus H₁: θ > θ₀ when
(i) f_θ(x) = θ⁻¹e^{−x/θ} I_{(0,∞)}(x);
(ii) f_θ(x) = θx^{θ−1} I_{(0,1)}(x);
(iii) f_θ(x) is the density of N(1, θ);
(iv) f_θ(x) = θ^{−c}cx^{c−1}e^{−(x/θ)^c} I_{(0,∞)}(x), where c > 0 is known.
Solution. (i) The family of densities has monotone likelihood ratio in T(X) = Σ_{i=1}^n Xᵢ, which has the gamma distribution with shape parameter n and scale parameter θ. Under H₀, 2T/θ₀ has the chi-square distribution χ²_{2n}. Hence, the UMP test is

    T⁎(X) = 1 if T(X) > θ₀χ²_{2n,α}/2, and 0 if T(X) ≤ θ₀χ²_{2n,α}/2,

where χ²_{r,α} is the (1 − α)th quantile of the chi-square distribution χ²_r.
(ii) The family of densities has monotone likelihood ratio in T(X) = Σ_{i=1}^n log Xᵢ, and −T(X) has the gamma distribution with shape parameter n and scale parameter θ⁻¹. Under H₀, 2θ₀[−T(X)] has the chi-square distribution χ²_{2n}, and large values of θ correspond to small values of −T(X); the UMP test rejects H₀ when −T(X) ≤ χ²_{2n,1−α}/(2θ₀).
(iii) The family of densities has monotone likelihood ratio in T(X) = Σ_{i=1}^n (Xᵢ − 1)², and T(X)/θ has the chi-square distribution χ²_n. Therefore, the UMP test is

    T⁎(X) = 1 if T(X) > θ₀χ²_{n,α}, and 0 if T(X) ≤ θ₀χ²_{n,α}.

(iv) The family of densities has monotone likelihood ratio in T(X) = Σ_{i=1}^n Xᵢ^c, which has the gamma distribution with shape parameter n and scale parameter θ^c. Therefore, the UMP test is the same as T⁎ in part (i) of the solution but with θ₀ replaced by θ₀^c.

Exercise 11 (#6.15). Suppose that the distribution of X is in a family {f_θ : θ ∈ Θ} with monotone likelihood ratio in Y(X), where Y(X) has a continuous distribution. Consider the hypotheses H₀: θ ≤ θ₀ versus H₁: θ > θ₀, where θ₀ ∈ Θ is known. Show that the p-value of the UMP test is given by P_{θ₀}(Y ≥ y), where y is the observed value of Y and P_θ is the probability corresponding to f_θ.
Solution. The UMP test of size α is

    T_α = 1 if Y ≥ c_α, and 0 if Y < c_α,

where c_α satisfies P_{θ₀}(Y ≥ c_α) = α. When y is the observed value of Y, the rejection region of the UMP test is {y ≥ c_α}. By the definition of p-value, it is equal to

    α̂ = inf{α : 0 < α < 1, T_α = 1}
      = inf{α : 0 < α < 1, y ≥ c_α}
      = inf_{y ≥ c_α} P_{θ₀}(Y ≥ c_α)
      ≥ P_{θ₀}(Y ≥ y),

where the inequality follows from P_{θ₀}(Y ≥ y) ≤ P_{θ₀}(Y ≥ c_α) for any α such that y ≥ c_α. Let F_θ be the cumulative distribution function of P_θ. Since F_{θ₀} is continuous, c_α = F_{θ₀}⁻¹(1 − α). Let α⁎ = P_{θ₀}(Y ≥ y) = 1 − F_{θ₀}(y). Since F_{θ₀} is continuous,

    c_{α⁎} = F_{θ₀}⁻¹(1 − α⁎) = F_{θ₀}⁻¹(F_{θ₀}(y)) ≤ y.

This means that α⁎ ∈ {α : 0 < α < 1, y ≥ c_α} and, thus, the p-value α̂ ≤ α⁎. Therefore, the p-value is equal to α⁎ = P_{θ₀}(Y ≥ y).
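To make the p-value result concrete (an illustration added here, not part of the original text): for the exponential scale family of case (i) above with T(X) = Σ Xᵢ, the UMP p-value is P_{θ₀}(T ≥ t), the Gamma(n, θ₀) survival function, which has a closed form for integer n. Rejecting when the p-value is at most α is then equivalent to a threshold test on t:

```python
import math

def pvalue(t, n, theta0):
    """UMP p-value P_{theta0}(T >= t) for T = sum of n iid exponentials
    with scale theta0: Gamma(n, theta0) survival function, exact for
    integer shape via the Poisson-Gamma relation."""
    z = t / theta0
    return math.exp(-z) * sum(z ** k / math.factorial(k) for k in range(n))

# The p-value decreases in the observed t, so {p <= alpha} is an upper
# interval {t >= c_alpha}, exactly as the threshold form of the UMP test.
n, theta0, alpha = 5, 2.0, 0.05
ts = [0.5 * i for i in range(1, 100)]
ps = [pvalue(t, n, theta0) for t in ts]
```

The parameter values here are arbitrary; any integer n and positive θ₀ exhibit the same monotone structure.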



Exercise 12 (#6.17). Let F and G be two known cumulative distribution functions on R and X be a single observation from the cumulative distribution function θF(x) + (1 − θ)G(x), where θ ∈ [0, 1] is unknown.
(i) Find a UMP test of size α for testing H₀: θ ≤ θ₀ versus H₁: θ > θ₀, where θ₀ ∈ [0, 1] is known.
(ii) Show that the test T⁎(X) ≡ α is a UMP test of size α for testing H₀: θ ≤ θ₁ or θ ≥ θ₂ versus H₁: θ₁ < θ < θ₂, where θ_j ∈ [0, 1] is known, j = 1, 2, and θ₁ < θ₂.
Solution. (i) Let f(x) and g(x) be the Radon-Nikodym derivatives of F(x) and G(x) with respect to the measure ν induced by F(x) + G(x), respectively. The probability density of X is θf(x) + (1 − θ)g(x). For 0 ≤ θ₁ < θ₂ ≤ 1,

    [θ₂f(x) + (1 − θ₂)g(x)] / [θ₁f(x) + (1 − θ₁)g(x)] = [θ₂ f(x)/g(x) + (1 − θ₂)] / [θ₁ f(x)/g(x) + (1 − θ₁)]

is nondecreasing in Y(x) = f(x)/g(x). Hence, the family of densities of X has monotone likelihood ratio in Y(X) = f(X)/g(X) and a UMP test is given as

    T = 1 if Y(X) > c, γ if Y(X) = c, and 0 if Y(X) < c,

where c and γ are uniquely determined by E[T(X)] = α when θ = θ₀.
(ii) For any test T, its power is

    β_T(θ) = ∫ T(x)[θf(x) + (1 − θ)g(x)] dν
           = θ ∫ T(x)[f(x) − g(x)] dν + ∫ T(x)g(x) dν,

which is a linear function of θ on [0, 1]. If T has level α, then β_T(θ) ≤ α for any θ ∈ [0, 1]. Since the power of T⁎ is equal to the constant α, we conclude that T⁎ is a UMP test of size α.

Exercise 13 (#6.18). Let (X₁, ..., Xₙ) be a random sample from the uniform distribution on (θ, θ + 1), θ ∈ R. Suppose that n ≥ 2.
(i) Show that a UMP test of size α ∈ (0, 1) for testing H₀: θ ≤ 0 versus H₁: θ > 0 is of the form

    T⁎(X₍₁₎, X₍ₙ₎) = 0 if X₍₁₎ < 1 − α^{1/n} and X₍ₙ₎ < 1, and 1 otherwise,

where X₍ⱼ₎ is the jth order statistic.
(ii) Does the family of all densities of (X₍₁₎, X₍ₙ₎) have monotone likelihood



ratio? Solution A. (i) The Lebesgue density of (X(1) , X(n) ) is fθ (x, y) = n(n − 1)(y − x)n−2 I(θ,y) (x)I(x,θ+1) (y).  A direct calculation of βT∗ (θ) = T∗ (x, y)fθ (x, y)dxdy, the power function of T∗ , leads to ⎧ 0 θ < −α1/n ⎪ ⎪ ⎨ 1/n n (θ + α ) −α1/n ≤ θ ≤ 0 βT∗ (θ) = ⎪ 1 + α − (1 − θ)n 0 < θ ≤ 1 − α1/n ⎪ ⎩ 1 θ > 1 − α1/n . For any θ1 ∈ (0, 1 − α1/n ], by the Neyman-Pearson Lemma, the UMP test T of size α for testing H0 : θ = 0 versus H1 : θ = θ1 is ⎧ X(n) > 1 ⎨ 1 T = α/(1 − θ1 )n θ1 < X(1) < X(n) < 1 ⎩ 0 otherwise. The power of T at θ1 is computed as βT (θ1 ) = 1 − (1 − θ1 )n + α, which agrees with the power of T∗ at θ1 . When θ > 1 − α1/n , T∗ has power 1. Therefore T∗ is a UMP test of size α for testing H0 : θ ≤ 0 versus H1 : θ > 0. (ii) The answer is no. Suppose that the family of densities of (X(1) , X(n) ) has monotone likelihood ratio. By the theory of UMP test (e.g., Theorem 6.2 in Shao, 2003), there exists a UMP test T0 of size α ∈ (0, 12 ) for testing H0 : θ ≤ 0 versus H1 : θ > 0 and T0 has the property that, for θ1 ∈ (0, 1−α1/n ), T0 is UMP of size α0 = 1+α−(1−θ1 )n for testing H0 : θ ≤ θ1 versus H1 : θ > θ1 . Using the transformation Xi − θ1 and the result in (i), the test  1/n 0 X(1) < 1 + θ1 − α0 , X(n) < 1 + θ1 Tθ1 (X(1) , X(n) ) = 1 otherwise is a UMP test of size α0 for testing H0 : θ ≤ θ1 versus H1 : θ > θ1 . At θ = θ2 ∈ (θ1 , 1−α1/n ], it follows from part (i) of the solution that the power of T0 is 1+α−(1−θ2 )n and the power of Tθ1 is 1+α0 −[1−(θ2 −θ1 )]n . Since both T0 and Tθ1 are UMP tests, 1 + α − (1 − θ2 )n = 1 + α0 − [1 − (θ2 − θ1 )]n . Because α0 = 1 + α − (1 − θ1 )n , this means that 1 = (1 − θ1 )n − (1 − θ2 )n + [1 − (θ2 − θ1 )]n



holds for all 0 < θ1 < θ2 ≤ 1−α1/n , which is impossible. This contradiction proves that the family of all densities of (X(1) , X(n) ) does not have monotone likelihood ratio. Solution B. This is an alternative solution to part (i) provided by Mr. Jialiang Li in 2002 as a student at the University of Wisconsin-Madison. Let βT (θ) be the power function of a test T . Since βT∗ (θ) = 1 when θ > 1−α1/n , it suffices to show that βT∗ (θ) ≥ βT (θ) for θ ∈ (0, 1 − α1/n ) and any other test T . Define A = {0 < X(1) ≤ X(n) < 1}, B = {θ < X(1) ≤ X(n) < 1}, and C = {1 < X(1) ≤ X(n) < θ + 1}. Then βT∗ (θ) − βT (θ) = E[(T∗ − T )IB ] + E[(T∗ − T )IC ] = E[(T∗ − T )IB ] + E[(1 − T )IC ] ≥ E(T∗ IB ) − E(T IB ) = E(T∗ IA ) − E(T IB ) ≥ E(T∗ IA ) − E(T IA ) = βT∗ (0) − βT (0) = α − βT (0), where the second equality follows from T∗ = 1 when (X(1) , X(n) ) ∈ C, the third equality follows from T∗ = 0 when (X(1) , X(n) ) ∈ A but (X(1) , X(n) ) ∈ B (since 0 < X(1) ≤ θ ≤ 1 − α1/n ), and the second inequality follows from IA ≥ IB . Therefore, if T has level α, then βT∗ (θ) ≥ βT (θ) for all θ > 0. Exercise 14 (#6.19). Let X = (X1 , ..., Xn ) be a random sample from the discrete uniform distribution on points 1, ..., θ, where θ = 1, 2, .... (i) Consider H0 : θ ≤ θ0 versus H1 : θ > θ0 , where θ0 > 0 is known. Show that  1 X(n) > θ0 T∗ (X) = α X(n) ≤ θ0 is a UMP test of size α. (ii) Consider H0 : θ = θ0 versus H1 : θ = θ0 . Show that  1 X(n) > θ0 or X(n) ≤ θ0 α1/n T∗ (X) = 0 otherwise is a UMP test of size α. (iii) Show that the results in (i) and (ii) still hold if the discrete uniform distribution is replaced by the uniform distribution on the interval (0, θ), θ > 0. Solution A. In (i)-(ii), without loss of generality we may assume that θ0 is an integer. (i) Let Pθ be the probability distribution of the largest order statistic X(n) and Eθ be the expectation with respect to Pθ . The family {Pθ : θ = 1, 2, ...}



is dominated by the counting measure and has monotone likelihood ratio in X(n) . Therefore, a UMP test of size α is ⎧ ⎨ 1 T1 (X) = γ ⎩ 0

X(n) > c X(n) = c X(n) < c,

where c is an integer and γ ∈ [0, 1] satisfying

n c cn − (c − 1)n Eθ0 (T1 ) = 1 − +γ = α. θ0 θ0n For any θ > θ0 , the power of T1 is Eθ (T1 ) = Pθ (X(n) > c) + γPθ (X(n) = c) cn − (c − 1)n cn + γ θn θn θ0n = 1 − (1 − α) n . θ = 1−

On the other hand, for θ ≥ θ0 , the power of T∗ is Eθ (T∗ ) = Pθ (X(n) > θ0 ) + αPθ (X(n) ≤ θ0 ) = 1 −

θ0n θn + α 0n . n θ θ

Hence, T∗ has the same power as T1 . Since sup Eθ (T∗ ) = sup αPθ (X(n) ≤ θ0 ) = αPθ0 (X(n) ≤ θ0 ) = α,

θ≤θ0

θ≤θ0

T∗ is a UMP test of size α. (ii) Consider H0 : θ = θ0 versus H1 : θ > θ0 . The test T1 in (i) is UMP. For θ > θ0 , Eθ (T∗ ) = Pθ (X(n) > θ0 ) + Pθ (X(n) ≤ θ0 α1/n ) = 1 −

θ0n αθ0n + , θn θn

which is the same as the power of T1 . Now, consider hypotheses H0 : θ = θ0 versus H1 : θ < θ0 . The UMP test is ⎧ ⎨ 1 X(n) < d T2 (X) = η X(n) = d ⎩ 0 X(n) > d with Eθ0 (T2 ) =

(d − 1)n cn − (c − 1)n +η = α. n θ0 θ0n

Chapter 6. Hypothesis Tests

265

For θ ≤ θ0 , Eθ (T∗ ) = Pθ (X(n) > θ0 ) + Pθ (X(n) ≤ θ0 α1/n ) = Pθ (X(n) ≤ θ0 α1/n )   αθ0n = min 1, n . θ On the other hand, the power of T2 when θ ≥ θ0 α1/n is Eθ (T2 ) = Pθ (X(n) < d) + ηPθ (X(n) = d) dn − (d − 1)n (d − 1)n + η θn θn θ0n = α n. θ =

Thus, we conclude that T∗ has size α and its power is the same as the power of T1 when θ > θ0 and is no smaller than the power of T2 when θ < θ0 . Thus, T∗ is UMP. (iii) The results for the uniform distribution on (0, θ) can be established similarly. Instead of providing details, we consider an alternative solution for (i)-(iii). Solution B. (i) Let T be a test of level α. For θ > θ0 , Eθ (T∗ ) − Eθ (T ) = Eθ [(T∗ − T )I{X(n) >θ0 } ] + Eθ [(T∗ − T )I{X(n) ≤θ0 } ] = Eθ [(1 − T )I{X(n) >θ0 } ] + Eθ [(α − T )I{X(n) ≤θ0 } ] ≥ Eθ [(α − T )I{X(n) ≤θ0 } ] = Eθ0 [(α − T )I{X(n) ≤θ0 } ](θ0 /θ)n = Eθ0 (α − T )(θ0 /θ)n ≥ 0, where the second equality follows from the definition of T∗ and the third equality follows from a scale transformation. Hence, T∗ is UMP. It remains to show that the size of T∗ is α, which has been shown in part (i) of Solution A. (ii) Let T be a test of level α. For θ > θ0 , Eθ (T∗ ) − Eθ (T ) = Eθ [(1 − T )I{X(n) >θ0 } ] + Eθ [(T∗ − T )I{X(n) ≤θ0 } ] ≥ Eθ [(T∗ − T )I{X(n) ≤θ0 } ] = Pθ (X(n) ≤ θ0 α1/n ) − Eθ (T I{X(n) ≤θ0 } ) = α(θ0 /θ)n − Eθ0 (T I{X(n) ≤θ0 } )(θ0 /θ)n ≥ 0.

266

Chapter 6. Hypothesis Tests

Similarly, for θ < θ0 , Eθ (T∗ ) − Eθ (T ) ≥ Pθ (X(n) ≤ θ0 α1/n ) − Eθ (T ), which is equal to 1 − Eθ (T ) ≥ 0, if θ ≤ θ0 α1/n , and is equal to α(θ0 /θ)n − Eθ (T ) = α(θ0 /θ)n − Eθ0 (T )(θ0 /θ)n ≥ 0 if θ0 > θ > θ0 α1/n . Hence, T∗ is UMP. It remains to show that the size of T∗ is α, which has been shown in part (ii) of Solution A. (iii) Note that the results for the power hold for both discrete uniform distribution and uniform distribution on (0, θ). Hence, it remains to show that T∗ has size α. For T∗ in (i), sup Eθ (T∗ ) = sup αPθ (X(n) ≤ θ0 ) = αPθ0 (X(n) ≤ θ0 ) = α.

θ≤θ0

For T∗ in (ii),

θ≤θ0

Eθ0 (T∗ ) = Pθ0 (X(n) ≤ θ0 α1/n ) = α.

Exercise 15 (#6.20). Let (X1 , ..., Xn ) be a random sample from the exponential distribution on the interval (a, ∞) with scale parameter θ, where a ∈ R and θ > 0. (i) Derive a UMP test of size α for testing H0 : a = a0 versus H1 : a = a0 , when θ is known. (ii) For testing H0 : a = a0 versus H1 : a = a1 < a0 , show that any UMP test T∗ of size α has power βT∗ (a1 ) = 1 − (1 − α)e−n(a0 −a1 )/θ . (iii) For testing H0 : a = a0 versus H1 : a = a1 < a0 , show that the power of any size α test that rejects H0 when Y≤ c1 or Y ≥ c2 is the same as n that in part (ii), where Y = (X(1) − a0 )/ i=1 (Xi − X(1) ) and X(1) is the smallest order statistic and 0 ≤ c1 < c2 are constants. (iv) Derive a UMP test of size α for testing H0 : a = a0 versus H1 : a = a0 . (v) Derive a UMP test of size α for testing H0 : θ = θ0 , a = a0 versus H1 : θ < θ0 , a < a0 . Solution. (i) Let Yi = e−Xi /θ , i = 1, ..., n. Then (Y1 , ..., Yn ) is a random sample from the uniform distribution on (0, e−a/θ ). Note that the hypotheses H0 : a = a0 versus H1 : a = a0 are the same as H0 : e−a/θ = e−a0 /θ versus H1 : e−a/θ = e−a0 /θ . Also, the largest order statistic of Y1 , ..., Yn is equal to e−X(1) /θ . Hence, it follows from the previous exercise that a UMP test of size α is  1 X(1) < a0 or X(1) ≥ a0 − nθ log α T = 0 otherwise. (ii) A direct calculation shows that, at a1 < a0 , the power of the UMP test in part (i) of the solution is 1 − (1 − α)e−n(a0 −a1 )/θ . Hence, for each fixed

Chapter 6. Hypothesis Tests


θ, the power of T∗ at a1 cannot be larger than 1 − (1−α)e^{−n(a0−a1)/θ}. On the other hand, in part (iii) it is shown that there are tests for testing H0: a = a0 versus H1: a = a1 < a0 that have power 1 − (1−α)e^{−n(a0−a1)/θ} at a1 < a0. Therefore, the power of T∗ at a1 cannot be smaller than 1 − (1−α)e^{−n(a0−a1)/θ}.
(iii) Let

T = 1 if Y ≤ c1 or Y ≥ c2, and T = 0 otherwise,

be a test of size α for testing H0: a = a0 versus H1: a = a1 < a0. Let Z = Σ_{i=1}^n (Xi − X(1)). By Exercise 27 in Chapter 2, Z and X(1) are independent. Then, the power of T at a1 is

E(T) = 1 − P(c1 < Y < c2)
 = 1 − P(a0 + c1Z < X(1) < a0 + c2Z)
 = 1 − E[ ∫_{a0+c1Z}^{a0+c2Z} (n/θ) e^{−n(x−a1)/θ} dx ]
 = 1 − E[ e^{−n(a0−a1+c1Z)/θ} − e^{−n(a0−a1+c2Z)/θ} ]
 = 1 − e^{−n(a0−a1)/θ} E[ e^{−nc1Z/θ} − e^{−nc2Z/θ} ].

Since 2Z/θ has the chi-square distribution χ²_{2(n−1)} (Exercise 7 in Chapter 2), b = E[e^{−nc1Z/θ} − e^{−nc2Z/θ}] does not depend on θ. Since T has size α, E(T) at a = a0, which is 1 − b, is equal to α. Thus, b = 1 − α and E(T) = 1 − (1−α)e^{−n(a0−a1)/θ}.
(iv) Consider the test T in (iii) with c1 = 0 and c2 = c > 0. From the result in (iii), T has size α and is UMP for testing H0: a = a0 versus H1: a < a0. Hence, it remains to show that T is UMP for testing H0: a = a0 versus H1: a > a0. Let a1 > a0 be fixed and θ be fixed. From the Neyman-Pearson lemma, a UMP test for H0: a = a0 versus H1: a = a1 has the rejection region

{ e^{na1/θ} I_{(a1,∞)}(X(1)) > c0 e^{na0/θ} I_{(a0,∞)}(X(1)) }

for some constant c0. Since a1 > a0, this rejection region is the same as {Y > c} for some constant c. Since the region {Y > c} does not depend on (a1, θ), T is UMP for testing H0: a = a0 versus H1: a > a0.
(v) For fixed θ1 < θ0 and a1 < a0, by the Neyman-Pearson lemma, the UMP test of size α for H0: a = a0, θ = θ0 versus H1: a = a1, θ = θ1 has the rejection region

R = { θ0^n e^{−Σ_{i=1}^n (Xi−a1)/θ1} I_{(a1,∞)}(X(1)) / [θ1^n e^{−Σ_{i=1}^n (Xi−a0)/θ0} I_{(a0,∞)}(X(1))] > c0 }

for some c0. The ratio in the previous expression is equal to ∞ when a1 < X(1) ≤ a0 and is equal to

(θ0^n/θ1^n) e^{na1/θ1} e^{−na0/θ0} e^{(θ0^{−1} − θ1^{−1}) Σ_{i=1}^n Xi}

when X(1) > a0. Since θ0^{−1} − θ1^{−1} < 0,

R = {X(1) ≤ a0} ∪ {Σ_{i=1}^n Xi < c}

for some constant c satisfying P(R) = α when a = a0 and θ = θ0. Hence, c depends on a0 and θ0. Since this test does not depend on (a1, θ1), it is UMP for testing H0: θ = θ0, a = a0 versus H1: θ < θ0, a < a0.

Exercise 16 (#6.22). In Exercise 11(i) in Chapter 3, derive a UMP test of size α ∈ (0, 1) for testing H0: θ ≤ θ0 versus H1: θ > θ0, where θ0 > 1 is known.
Solution. From Exercise 11(i) in Chapter 3, the probability density (with respect to the sum of Lebesgue measure and point mass at 1) of the sufficient and complete statistic X(n), the largest order statistic, is

fθ(x) = θ^{−n} I_{{1}}(x) + nθ^{−n} x^{n−1} I_{(1,θ)}(x).

The family {fθ: θ > 1} has monotone likelihood ratio in X(n). Hence, a UMP test of size α is

T = 1 if X(n) > c; γ if X(n) = c; 0 if X(n) < c,

where c and γ are determined by the size of T. When θ = θ0 and 1 < c ≤ θ0,

E(T) = P(X(n) > c) = (n/θ0^n) ∫_c^{θ0} x^{n−1} dx = 1 − c^n/θ0^n.

If θ0 > (1−α)^{−1/n}, then T has size α with c = θ0(1−α)^{1/n} and γ = 0. If θ0 ≤ (1−α)^{−1/n}, then, with c = 1, the size of T is

P(X(n) > 1) + γP(X(n) = 1) = 1 − 1/θ0^n + γ/θ0^n.

Hence, T has size α with c = 1 and γ = 1 − (1−α)θ0^n.

Exercise 17 (#6.25). Let (X1,...,Xn) be a random sample from N(θ, 1). Show that T = I_{(−c,c)}(X̄) is a UMP test of size α ∈ (0, 1/2) for testing H0: |θ| ≥ θ1 versus H1: |θ| < θ1, where X̄ is the sample mean and θ1 > 0 is a constant. Provide a formula for determining c.
Solution. From Theorem 6.3 in Shao (2003), the UMP test is of the form T = I_{(c1,c2)}(X̄), where c1 and c2 satisfy

Φ(√n(c2 + θ1)) − Φ(√n(c1 + θ1)) = α,
Φ(√n(c2 − θ1)) − Φ(√n(c1 − θ1)) = α,

and Φ is the cumulative distribution function of N(0, 1). Let Yi = −Xi, i = 1,...,n. Then (Y1,...,Yn) is a random sample from N(−θ, 1). Since the hypotheses are not changed with θ replaced by −θ, the UMP test for testing the same hypotheses but based on the Yi's is T1 = I_{(c1,c2)}(−X̄) with the same c1 and c2. By the uniqueness of the UMP test, T = T1 and, thus, c1 = −c and c2 = c > 0. The constraints on the ci's reduce to

Φ(√n(θ1 + c)) − Φ(√n(θ1 − c)) = α.
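The determining equation for c has no closed form, but its left side is strictly increasing in c (its derivative is a sum of two normal densities), so bisection finds c. A sketch in plain Python using the error function for Φ (the values n = 25, θ1 = 0.5, α = 0.05 are illustrative assumptions):

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def solve_c(n, theta1, alpha, lo=0.0, hi=50.0, iters=100):
    # Bisect g(c) = Phi(sqrt(n)(theta1+c)) - Phi(sqrt(n)(theta1-c)) - alpha;
    # g is increasing in c with g(0) = -alpha < 0, so a root exists in (lo, hi).
    for _ in range(iters):
        mid = (lo + hi) / 2
        g = Phi(math.sqrt(n) * (theta1 + mid)) - Phi(math.sqrt(n) * (theta1 - mid)) - alpha
        if g < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = solve_c(n=25, theta1=0.5, alpha=0.05)
print(c)
```

The returned c satisfies the displayed constraint to high accuracy.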

Exercise 18 (#6.29). Consider Exercise 12 with H0: θ ∈ [θ1, θ2] versus H1: θ ∉ [θ1, θ2], where 0 < θ1 ≤ θ2 < 1 are constants.
(i) Show that a UMP test does not exist.
(ii) Obtain a UMPU (uniformly most powerful unbiased) test of size α.
Solution. (i) Let βT(θ) be the power function of a test T. For any test T of level α such that βT(θ) is not constant, either βT(0) or βT(1) is strictly less than α. Without loss of generality, assume that βT(0) < α. This means that at θ = 0, which is one of the parameter values under H1, the power of T is smaller than that of T∗ ≡ α. Hence, no T with a nonconstant power function can be UMP. From Exercise 12, the UMP test of size α for testing H0: θ ≤ θ1 versus H1: θ > θ1 clearly has power larger than α at θ = 1. Hence, T∗ ≡ α is not UMP. Therefore, a UMP test does not exist.
(ii) If a test T of level α has a nonconstant power function, then either βT(0) or βT(1) is strictly less than α and, hence, T is not unbiased. Therefore, only tests with constant power functions can be unbiased. This implies that T∗ ≡ α is a UMPU test of size α.

Exercise 19. Let X be a random variable with probability density fθ. Assume that {fθ: θ ∈ Θ} has monotone likelihood ratio in X, where Θ ⊂ R. Suppose that for each θ0 ∈ Θ, a UMPU test of size α for testing H0: θ = θ0 has the acceptance region {c1(θ0) ≤ X ≤ c2(θ0)} and is strictly unbiased (i.e., its power is larger than α when θ ≠ θ0). Show that the functions c1(θ) and c2(θ) are increasing in θ.
Solution. Let θ0 < θ1 be two values in Θ and let T0 and T1 be the UMPU tests with acceptance regions {c1(θ0) ≤ X ≤ c2(θ0)} and {c1(θ1) ≤ X ≤ c2(θ1)},


respectively. Let ψ(X) = T1(X) − T0(X) and let Eθ denote expectation with respect to fθ. It follows from the strict unbiasedness of the tests that

Eθ0 ψ(X) = Eθ0(T1) − α > 0 > α − Eθ1(T0) = Eθ1 ψ(X).

If [c1(θ0), c2(θ0)] ⊂ [c1(θ1), c2(θ1)], then ψ(X) ≤ 0 and Eθ0 ψ(X) ≤ 0, which is impossible. If [c1(θ1), c2(θ1)] ⊂ [c1(θ0), c2(θ0)], then ψ(X) ≥ 0 and Eθ1 ψ(X) ≥ 0, which is impossible. Hence, neither of the two intervals contains the other. If c1(θ1) ≤ c1(θ0) ≤ c2(θ1) ≤ c2(θ0), then there is an x0 ∈ [c1(θ0), c2(θ1)] such that ψ(X) ≤ 0 if X < x0 and ψ(X) ≥ 0 if X ≥ x0, i.e., the function ψ has a single change of sign. Since the family has monotone likelihood ratio in X, it follows from Lemma 6.4(i) in Shao (2003) that there is a θ∗ such that Eθ ψ(X) ≤ 0 for θ < θ∗ and Eθ ψ(X) ≥ 0 for θ > θ∗. But this contradicts the fact that Eθ0 ψ(X) > 0 > Eθ1 ψ(X) and θ0 < θ1. Therefore, we must have c1(θ1) > c1(θ0) and c2(θ1) > c2(θ0), i.e., both c1(θ) and c2(θ) are increasing in θ.

Exercise 20 (#6.34). Let X be a random variable from the geometric distribution with mean p^{−1}. Find a UMPU test of size α for H0: p = p0 versus H1: p ≠ p0, where p0 ∈ (0, 1) is known.
Solution. The probability density of X with respect to the counting measure is

f(x) = exp{ x log(1−p) + log[p/(1−p)] } I_{{1,2,...}}(x),

which is in an exponential family. Applying Theorem 6.4 in Shao (2003), we conclude that the UMPU test of size α is

T∗ = 1 if X < c1 or X > c2; γi if X = ci, i = 1, 2; 0 otherwise,

where the ci's are positive integers and the ci's and γi's are uniquely determined by

α/p0 = Σ_{k=1}^{c1−1} (1−p0)^{k−1} + Σ_{k=c2+1}^∞ (1−p0)^{k−1} + Σ_{i=1,2} γi (1−p0)^{ci−1}

and

α/p0² = Σ_{k=1}^{c1−1} k(1−p0)^{k−1} + Σ_{k=c2+1}^∞ k(1−p0)^{k−1} + Σ_{i=1,2} γi ci (1−p0)^{ci−1}.

Exercise 21 (#6.36). Let X = (X1 , ..., Xn ) be a random sample from N (µ, σ 2 ) with unknown µ and σ 2 .


(i) Show that the power of the one-sample t-test depends on a noncentral t-distribution.
(ii) Show that the power of the one-sample t-test is an increasing function of (µ − µ0)/σ for testing H0: µ ≤ µ0 versus H1: µ > µ0 and of |µ − µ0|/σ for testing H0: µ = µ0 versus H1: µ ≠ µ0, where µ0 is a known constant.
Note. For testing H0: µ ≤ µ0 versus H1: µ > µ0, the one-sample t-test of size α rejects H0 if and only if t(X) > t_{n−1,α}, where t(X) = √n(X̄ − µ0)/S, X̄ is the sample mean, S² is the sample variance, and t_{r,α} is the (1−α)th quantile of the t-distribution t_r. For testing H0: µ = µ0 versus H1: µ ≠ µ0, the one-sample t-test of size α rejects H0 if and only if |t(X)| > t_{n−1,α/2}.
Solution. (i) Let Z = √n(X̄ − µ0)/σ, U = S/σ, and δ = √n(µ − µ0)/σ. Then Z is distributed as N(δ, 1), (n−1)U² has the chi-square distribution χ²_{n−1}, and Z and U are independent. By definition, t(X) = Z/U has the noncentral t-distribution t_{n−1}(δ) with noncentrality parameter δ.
(ii) For testing H0: µ ≤ µ0 versus H1: µ > µ0, the power of the one-sample t-test is

P(t(X) > t_{n−1,α}) = P(Z > t_{n−1,α} U) = E[Φ(δ − t_{n−1,α} U)],

where Φ is the cumulative distribution function of N(0, 1). Since Φ is an increasing function, the power is an increasing function of δ. For testing H0: µ = µ0 versus H1: µ ≠ µ0, the power of the one-sample t-test is

P(|t(X)| > t_{n−1,α/2}) = P(Z > t_{n−1,α/2} U) + P(Z < −t_{n−1,α/2} U)
 = E[Φ(δ − t_{n−1,α/2} U) + Φ(−δ − t_{n−1,α/2} U)]
 = E[Φ(|δ| − t_{n−1,α/2} U) + Φ(−|δ| − t_{n−1,α/2} U)].

To show that the power is an increasing function of |δ|, it suffices to show that Φ(x−a) + Φ(−x−a) is increasing in x > 0 for any fixed a > 0. The result follows from

(d/dx)[Φ(x−a) + Φ(−x−a)] = [e^{−(x−a)²/2} − e^{−(x+a)²/2}] / √(2π)
 = e^{−(x²+a²)/2} (e^{ax} − e^{−ax}) / √(2π)
 > 0.

Exercise 22. Let X = (X1,...,Xn) be a random sample from N(µ, σ²) with unknown µ and σ², and let t(X) = √n X̄/S, where X̄ is the sample mean and S² is the sample variance. For testing H0: µ/σ ≤ θ0 versus H1: µ/σ > θ0, find a test of size α that is UMP among all tests based on t(X), where θ0 is a known constant.
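Before the solution, a small simulation illustrates the fact it establishes: the distribution of t(X) is stochastically increasing in µ/σ, so a test rejecting for large t(X), calibrated at µ/σ = θ0, has power above α when µ/σ > θ0. This is only a plain-Python sketch; n = 8, θ0 = 0.5, θ1 = 1.0, and α = 0.1 are arbitrary illustrative values.

```python
import math
import random
import statistics

def t_stats(theta, n=8, reps=20_000, seed=1):
    # Draw t(X) = sqrt(n) * Xbar / S for samples from N(theta, 1), so mu/sigma = theta.
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        x = [rng.gauss(theta, 1.0) for _ in range(n)]
        out.append(math.sqrt(n) * statistics.fmean(x) / statistics.stdev(x))
    return out

alpha, theta0, theta1 = 0.1, 0.5, 1.0
t0 = sorted(t_stats(theta0))
c = t0[int((1 - alpha) * len(t0))]   # empirical (1 - alpha)th quantile at theta0
t1 = t_stats(theta1)
power = sum(t > c for t in t1) / len(t1)
print(power)
```

The estimated rejection probability at θ1 > θ0 exceeds α, as the monotone likelihood ratio property predicts.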


Solution. From the previous exercise, we know that t(X) has the noncentral t-distribution t_{n−1}(δ), where δ = √n µ/σ is the noncentrality parameter. The Lebesgue density of t(X) is (e.g., Shao, 2003, p. 26)

fδ(t) = ∫_0^∞ gδ(t, y) dy,

where

gδ(t, y) = y^{(n−2)/2} e^{−([t√(y/(n−1)) − δ]² + y)/2} / [2^{n/2} Γ((n−1)/2) √(π(n−1))].

We now show that the family {fδ(t): δ ∈ R} has monotone likelihood ratio in t. For δ1 < δ2, it suffices to show that

(d/dt)[fδ2(t)/fδ1(t)] = [f′δ2(t)fδ1(t) − fδ2(t)f′δ1(t)] / [fδ1(t)]² ≥ 0, t ∈ R.

Since

f′δ(t) = ∫_0^∞ [δ − t√(y/(n−1))] gδ(t, y) dy = δ fδ(t) − t f̃δ(t),

where

f̃δ(t) = ∫_0^∞ √(y/(n−1)) gδ(t, y) dy,

we obtain that

f′δ2(t)fδ1(t) − fδ2(t)f′δ1(t) = fδ1(t)fδ2(t) [ δ2 − δ1 + t( f̃δ1(t)/fδ1(t) − f̃δ2(t)/fδ2(t) ) ].

For any fixed t ∈ R, the family of densities {gδ(t, y)/fδ(t): δ ∈ R} (as densities in y) is an exponential family having monotone likelihood ratio in t√y. Hence, by Lemma 6.3 in Shao (2003), the ratio f̃δ(t)/fδ(t) is nonincreasing in δ when t > 0 and is nondecreasing in δ when t < 0. Hence, for both t > 0 and t < 0,

t( f̃δ1(t)/fδ1(t) − f̃δ2(t)/fδ2(t) ) ≥ 0

and, therefore, f′δ2(t)fδ1(t) − fδ2(t)f′δ1(t) ≥ 0. Consequently, for testing H0: µ/σ ≤ θ0 versus H1: µ/σ > θ0, a test of size α that is UMP among all tests based on t(X) rejects H0 when t(X) > c, where c is the (1−α)th quantile of the noncentral t-distribution t_{n−1}(√n θ0).

Exercise 23 (#6.37). Let (X1,...,Xn) be a random sample from the gamma distribution with unknown shape parameter θ and unknown scale


parameter γ. Let θ0 > 0 and γ0 > 0 be known constants.
(i) For testing H0: θ ≤ θ0 versus H1: θ > θ0 and H0: θ = θ0 versus H1: θ ≠ θ0, show that there exist UMPU tests whose rejection regions are based on V = ∏_{i=1}^n (Xi/X̄), where X̄ is the sample mean.
(ii) For testing H0: γ ≤ γ0 versus H1: γ > γ0, show that a UMPU test rejects H0 when Σ_{i=1}^n Xi > C(∏_{i=1}^n Xi) for some function C.
Solution. (i) Let Y = log(∏_{i=1}^n Xi) and U = nX̄. The joint density of (X1,...,Xn) can be written as [Γ(θ)γ^θ]^{−n} e^{θY − U/γ − Y}, which belongs to an exponential family. From Theorem 6.4 in Shao (2003), UMPU tests are functions of Y and U. By Basu's theorem, V1 = Y − n log(U/n) = Σ_{i=1}^n log(Xi/X̄) satisfies the conditions in Lemma 6.7 of Shao (2003). Hence, the rejection regions of the UMPU tests can be determined by using V1. Since V = e^{V1}, the rejection regions of the UMPU tests can also be determined by using V.
(ii) Let U = log(∏_{i=1}^n Xi) and Y = nX̄. The joint density of (X1,...,Xn) can be written as [Γ(θ)γ^θ]^{−n} e^{−γ^{−1}Y + θU − U}. From Theorem 6.4 in Shao (2003), for testing H0: γ ≤ γ0 versus H1: γ > γ0, the UMPU test is

T∗ = 1 if Y > C1(U), and T∗ = 0 if Y ≤ C1(U),

where C1 is a function such that E(T∗|U) = α when γ = γ0. The result follows by letting C(x) = C1(log x).

Exercise 24 (#6.39). Let X1 and X2 be independent observations from the binomial distributions with sizes n1 and n2 and probabilities p1 and p2, respectively, where the ni's are known and the pi's are unknown.
(i) Let Y = X2 and U = X1 + X2. Show that

P(Y = y|U = u) = Ku(θ) C(n1, u−y) C(n2, y) e^{θy} I_A(y), u = 0, 1, ..., n1+n2,

where C(a, b) denotes the binomial coefficient "a choose b", A = {y: y = 0, 1, ..., min{u, n2}, u − y ≤ n1}, θ = log[p2(1−p1)/(p1(1−p2))], and

Ku(θ) = [ Σ_{y∈A} C(n1, u−y) C(n2, y) e^{θy} ]^{−1}.

(ii) Find a UMPU test of size α for testing H0: p1 ≥ p2 versus H1: p1 < p2.
(iii) Repeat (ii) for H0: p1 = p2 versus H1: p1 ≠ p2.
Solution. (i) When u = 0, 1, ..., n1+n2 and y ∈ A,

P(Y = y, U = u) = C(n1, u−y) p1^{u−y} (1−p1)^{n1−u+y} C(n2, y) p2^y (1−p2)^{n2−y}


and


P(U = u) = Σ_{y∈A} C(n1, u−y) p1^{u−y} (1−p1)^{n1−u+y} C(n2, y) p2^y (1−p2)^{n2−y}.

Then, when y ∈ A,

P(Y = y|U = u) = P(Y = y, U = u)/P(U = u) = C(n1, u−y) C(n2, y) e^{θy} Ku(θ).

(ii) Since θ = log[p2(1−p1)/(p1(1−p2))], the testing problem is equivalent to testing H0: θ ≤ 0 versus H1: θ > 0. By Theorem 6.4 in Shao (2003), the UMPU test is

T∗(Y, U) = 1 if Y > C(U); γ(U) if Y = C(U); 0 if Y < C(U),

where C and γ are functions of U such that E(T∗|U) = α when θ = 0 (p1 = p2), which can be determined using the conditional distribution of Y given U. When θ = 0, this conditional distribution is, by the result in (i),

P(Y = y|U = u) = C(n1, u−y) C(n2, y) [C(n1+n2, u)]^{−1} I_A(y), u = 0, 1, ..., n1+n2.

(iii) The testing problem is equivalent to testing H0: θ = 0 versus H1: θ ≠ 0. Thus, the UMPU test is

T∗ = 1 if Y < C1(U) or Y > C2(U); γi(U) if Y = Ci(U), i = 1, 2; 0 if C1(U) < Y < C2(U),

where the Ci's and γi's are functions such that E(T∗|U) = α and E(T∗Y|U) = αE(Y|U) when θ = 0, which can be determined using the conditional distribution of Y given U in part (ii) of the solution.

Exercise 25 (#6.40). Let X1 and X2 be independently distributed as the negative binomial distributions with sizes n1 and n2 and probabilities p1 and p2, respectively, where the ni's are known and the pi's are unknown.
(i) Show that there exists a UMPU test of size α for testing H0: p1 ≤ p2 versus H1: p1 > p2.
(ii) Let Y = X1 and U = X1 + X2. Determine the conditional distribution of Y given U when n1 = n2 = 1.
Solution. (i) The joint probability density of X1 and X2 is

C(x1−1, n1−1) C(x2−1, n2−1) [p1^{n1} p2^{n2} / ((1−p1)^{n1} (1−p2)^{n2})] e^{θY + U log(1−p2)},


where θ = log[(1−p1)/(1−p2)], Y = X1, and U = X1 + X2. The testing problem is equivalent to testing H0: θ ≥ 0 versus H1: θ < 0. By Theorem 6.4 in Shao (2003), the UMPU test is

T∗(Y, U) = 1 if Y < C(U); γ(U) if Y = C(U); 0 if Y > C(U),

where C(U) and γ(U) satisfy E(T∗|U) = α when θ = 0.
(ii) When n1 = n2 = 1,

P(U = u) = Σ_{k=1}^{u−1} P(X1 = k, X2 = u−k)
 = Σ_{k=1}^{u−1} [p1 p2 (1−p2)^{u−1} / (1−p1)] [(1−p1)/(1−p2)]^k
 = [p1 p2 (1−p2)^{u−1} / (1−p1)] Σ_{k=1}^{u−1} e^{θk}

for u = 2, 3, ..., and

P(Y = y, U = u) = (1−p1)^{y−1} p1 (1−p2)^{u−y−1} p2

for y = 1, ..., u−1, u = 2, 3, .... Hence

P(Y = y|U = u) = (1−p1)^{y−1} p1 (1−p2)^{u−y−1} p2 / P(U = u) = e^{θy} / Σ_{k=1}^{u−1} e^{θk}

for y = 1, ..., u−1, u = 2, 3, .... When θ = 0, this conditional distribution is the discrete uniform distribution on {1, ..., u−1}.

Exercise 26 (#6.44). Let Xj, j = 1, 2, 3, be independent from the Poisson distributions with means λj, j = 1, 2, 3, respectively. Show that there exists a UMPU test of size α for testing H0: λ1λ2 ≤ λ3² versus H1: λ1λ2 > λ3².
Solution. The joint probability density of (X1, X2, X3) is

[e^{−(λ1+λ2+λ3)} / (X1!X2!X3!)] e^{X1 log λ1 + X2 log λ2 + X3 log λ3},

which is the same as

[e^{−(λ1+λ2+λ3)} / (X1!X2!X3!)] e^{θY + U1 log λ2 + U2 log λ3},

where θ = log λ1 + log λ2 − 2 log λ3, Y = X1, U1 = X2 − X1, and U2 = X3 + 2X1. By Theorem 6.4 in Shao (2003), there exists a UMPU test of size


α for testing H0: θ ≤ 0 versus H1: θ > 0, which is equivalent to testing H0: λ1λ2 ≤ λ3² versus H1: λ1λ2 > λ3².

Exercise 27 (#6.49). Let (Xi1,...,Xini), i = 1, 2, be two independent random samples from N(µi, σ²), respectively, where ni ≥ 2 and the µi's and σ are unknown. Show that a UMPU test of size α for H0: µ1 = µ2 versus H1: µ1 ≠ µ2 rejects H0 when |t(X)| > t_{n1+n2−2,α/2}, where

t(X) = √[(n1^{−1} + n2^{−1})^{−1}] (X̄2 − X̄1) / √{[(n1−1)S1² + (n2−1)S2²] / (n1+n2−2)},

X̄i and Si² are the sample mean and variance based on Xi1,...,Xini, i = 1, 2, and t_{n1+n2−2,α} is the (1−α)th quantile of the t-distribution t_{n1+n2−2}. Derive the power function of this test.
Solution. Let Y = X̄2 − X̄1, U1 = n1X̄1 + n2X̄2, U2 = Σ_{i=1}^2 Σ_{j=1}^{ni} Xij², θ = (µ2 − µ1)/[(n1^{−1} + n2^{−1})σ²], ϕ1 = (n1µ1 + n2µ2)/[(n1+n2)σ²], and ϕ2 = −(2σ²)^{−1}. Then the joint density of Xi1,...,Xini, i = 1, 2, is proportional to

(√(2π)σ)^{−(n1+n2)} e^{θY + ϕ1U1 + ϕ2U2}.

The statistic V = Y/√(U2 − U1²/(n1+n2)) satisfies the conditions in Lemma 6.7(ii) in Shao (2003). Hence, the UMPU test has the rejection region V < c1 or V > c2. Under H0, V is symmetrically distributed around 0, i.e., V and −V have the same distribution. Thus, a UMPU test rejects H0 when −V < c1 or −V > c2, which is the same as rejecting H0 when V < −c2 or V > −c1. By the uniqueness of the UMPU test, we conclude that c1 = −c2, i.e., the UMPU test rejects when |V| > c. Since

U2 − U1²/(n1+n2) = (n1−1)S1² + (n2−1)S2² + n1n2Y²/(n1+n2),

we obtain that

1/V² = [n1n2(n1+n2−2)/(n1+n2)] (1/[t(X)]²) + n1n2/(n1+n2).

Hence, |V| is an increasing function of |t(X)|. Also, t(X) has the t-distribution t_{n1+n2−2} under H0. Thus, the UMPU test rejects H0 when |t(X)| > t_{n1+n2−2,α/2}. Under H1, t(X) is distributed as the noncentral t-distribution t_{n1+n2−2}(δ) with noncentrality parameter

δ = (µ2 − µ1) / (σ √(n1^{−1} + n2^{−1})).

Thus the power function of the UMPU test is

1 − Gδ(t_{n1+n2−2,α/2}) + Gδ(−t_{n1+n2−2,α/2}),


where Gδ denotes the cumulative distribution function of the noncentral t-distribution t_{n1+n2−2}(δ).

Exercise 28 (#6.50). Let (Xi1,...,Xin), i = 1, 2, be two independent random samples from N(µi, σi²), respectively, where n > 1 and the µi's and σi's are unknown. Show that a UMPU test of size α for testing H0: σ2² = ∆0σ1² versus H1: σ2² ≠ ∆0σ1² rejects H0 when

max{ S2²/(∆0S1²), ∆0S1²/S2² } > (1−c)/c,

where ∆0 > 0 is a constant, ∫_0^c f_{(n−1)/2,(n−1)/2}(v) dv = α/2, and f_{a,b} is the Lebesgue density of the beta distribution with parameter (a, b).
Solution. From Shao (2003, p. 413), the UMPU test rejects H0 if V < c1 or V > c2, where

V = (S2²/∆0) / (S1² + S2²/∆0)

and Si² is the sample variance based on Xi1,...,Xin. Under H0 (σ2² = ∆0σ1²), V has the beta distribution with parameter ((n−1)/2, (n−1)/2), which is symmetric about 1/2, i.e., V has the same distribution as 1 − V. Thus, a UMPU test rejects H0 when 1−V < c1 or 1−V > c2, which is the same as rejecting H0 when V < 1−c2 or V > 1−c1. By the uniqueness of the UMPU test, we conclude that c1 + c2 = 1. Let c1 = c. Then the UMPU test rejects H0 when V < c or V > 1−c, and c satisfies ∫_0^c f_{(n−1)/2,(n−1)/2}(v) dv = α/2. Let F = S2²/(∆0S1²). Then V = F/(1+F), V < c if and only if F^{−1} > (1−c)/c, and V > 1−c if and only if F > (1−c)/c. Hence, the UMPU test rejects when max{F, F^{−1}} > (1−c)/c, which is the desired result.

Exercise 29 (#6.51). Suppose that Xi = β0 + β1ti + εi, i = 1,...,n, where the ti's are fixed constants that are not all the same, the εi's are independent and identically distributed as N(0, σ²), and β0, β1, and σ² are unknown parameters. Derive a UMPU test of size α for testing
(i) H0: β0 ≤ θ0 versus H1: β0 > θ0;
(ii) H0: β0 = θ0 versus H1: β0 ≠ θ0;
(iii) H0: β1 ≤ θ0 versus H1: β1 > θ0;
(iv) H0: β1 = θ0 versus H1: β1 ≠ θ0.
Solution. Note that (X1,...,Xn) follows a simple linear regression model. Let D = n Σ_{i=1}^n ti² − (Σ_{i=1}^n ti)²,

β̂0 = (1/D) [ Σ_{i=1}^n ti² Σ_{i=1}^n Xi − Σ_{i=1}^n ti Σ_{i=1}^n tiXi ]

and

β̂1 = (1/D) [ n Σ_{i=1}^n tiXi − Σ_{i=1}^n ti Σ_{i=1}^n Xi ]


be the least squares estimators, and let

σ̂² = [1/(n−2)] Σ_{i=1}^n (Xi − β̂0 − β̂1ti)².

From the theory of linear models, β̂0 has the normal distribution with mean β0 and variance σ² Σ_{i=1}^n ti²/D, β̂1 has the normal distribution with mean β1 and variance σ² n/D, (n−2)σ̂²/σ² has the chi-square distribution χ²_{n−2}, and (β̂0, β̂1) and σ̂² are independent. Thus, the following results follow from the example in Shao (2003, p. 416).
(i) The UMPU test of size α for testing H0: β0 ≤ θ0 versus H1: β0 > θ0 rejects H0 when t0 > t_{n−2,α}, where

t0 = √D (β̂0 − θ0) / (σ̂ √(Σ_{i=1}^n ti²))

and t_{r,α} is the (1−α)th quantile of the t-distribution t_r.
(ii) The UMPU test of size α for testing H0: β0 = θ0 versus H1: β0 ≠ θ0 rejects H0 when |t0| > t_{n−2,α/2}.
(iii) The UMPU test of size α for testing H0: β1 ≤ θ0 versus H1: β1 > θ0 rejects H0 when t1 > t_{n−2,α}, where

t1 = √D (β̂1 − θ0) / (√n σ̂).

(iv) The UMPU test of size α for testing H0: β1 = θ0 versus H1: β1 ≠ θ0 rejects H0 when |t1| > t_{n−2,α/2}.

Exercise 30 (#6.53). Let X be a sample from Nn(Zβ, σ²In), where β ∈ R^p and σ² > 0 are unknown and Z is an n×p known matrix of rank r ≤ p < n. For testing H0: σ² ≤ σ0² versus H1: σ² > σ0² and H0: σ² = σ0² versus H1: σ² ≠ σ0², show that UMPU tests of size α are functions of SSR = ‖X − Zβ̂‖², where β̂ is the least squares estimator of β, and their rejection regions can be determined using chi-square distributions.
Solution. Since H = Z(Z^τ Z)⁻ Z^τ is a projection matrix of rank r, there exists an n×n orthogonal matrix Γ such that

Γ = (Γ1 Γ2) and HΓ = (Γ1 0),

where Γ1 is n×r and Γ2 is n×(n−r). Let Yj = Γj^τ X, j = 1, 2. Consider the transformation (Y1, Y2) = Γ^τ X. Since Γ^τ Γ = In, (Y1, Y2) has distribution Nn(Γ^τ Zβ, σ²In). Note that

E(Y2) = E(Γ2^τ X) = Γ2^τ Zβ = Γ2^τ HZβ = 0.


Let η = Γ1^τ Zβ = E(Y1). Then the density of (Y1, Y2) is

(2πσ²)^{−n/2} exp{ −(‖Y1‖² + ‖Y2‖²)/(2σ²) + η^τ Y1/σ² − ‖η‖²/(2σ²) }.

From Theorem 6.4 in Shao (2003), the UMPU tests are based on Y = ‖Y1‖² + ‖Y2‖² and U = Y1. Let V = ‖Y2‖². Then V = Y − ‖U‖² satisfies the conditions in Lemma 6.7 of Shao (2003). Hence, the UMPU test for H0: σ² ≤ σ0² versus H1: σ² > σ0² rejects when V > c, and the UMPU test for H0: σ² = σ0² versus H1: σ² ≠ σ0² rejects when V < c1 or V > c2. Since ‖Y1 − η‖² + ‖Y2‖² = ‖X − Zβ‖²,

min_η (‖Y1 − η‖² + ‖Y2‖²) = min_β ‖X − Zβ‖²

and, therefore,

V = ‖Y2‖² = ‖X − Zβ̂‖² = SSR.

Finally, by Theorem 3.8 in Shao (2003), SSR/σ² has the chi-square distribution χ²_{n−r}.

Exercise 31 (#6.54). Let (X1,...,Xn) be a random sample from a bivariate normal distribution with unknown means µ1 and µ2, variances σ1² and σ2², and correlation coefficient ρ. Let Xij be the jth component of Xi, let X̄j and Sj² be the sample mean and variance based on X1j,...,Xnj, j = 1, 2, and let V = √(n−2) R/√(1−R²), where

R = [1/(S1S2(n−1))] Σ_{i=1}^n (Xi1 − X̄1)(Xi2 − X̄2)

is the sample correlation coefficient. Show that the UMPU test of size α for H0: ρ ≤ 0 versus H1: ρ > 0 rejects H0 when V > t_{n−2,α} and the UMPU test of size α for H0: ρ = 0 versus H1: ρ ≠ 0 rejects H0 when |V| > t_{n−2,α/2}, where t_{n−2,α} is the (1−α)th quantile of the t-distribution t_{n−2}.
Solution. The Lebesgue density of (X1,...,Xn) can be written as

C(µ1, µ2, σ1, σ2, ρ) exp{θY + ϕ^τ U},

where C(·) is a function of (µ1, µ2, σ1, σ2, ρ),

Y = Σ_{i=1}^n Xi1Xi2,   θ = ρ/[σ1σ2(1−ρ²)],

U = ( Σ_{i=1}^n Xi1², Σ_{i=1}^n Xi2², Σ_{i=1}^n Xi1, Σ_{i=1}^n Xi2 ),

and

ϕ = ( −1/[2σ1²(1−ρ²)], −1/[2σ2²(1−ρ²)], µ1/[σ1²(1−ρ²)] − θµ2, µ2/[σ2²(1−ρ²)] − θµ1 ).

By Basu's theorem, R is independent of U when ρ = 0. Also,

R = (Y − U3U4/n) / √[(U1 − U3²/n)(U2 − U4²/n)],

which is linear in Y, where Uj is the jth component of U. Hence, we may apply Theorem 6.4 and Lemma 6.7 in Shao (2003). It remains to show that V has the t-distribution t_{n−2} when ρ = 0, which is a consequence of the result in the note of Exercise 17 in Chapter 1 and the result in Exercise 22(ii) in Chapter 2.

Exercise 32 (#6.55). Let (X1,...,Xn) be a random sample from a bivariate normal distribution with unknown means µ1 and µ2, variances σ1² and σ2², and correlation coefficient ρ. Let Xij be the jth component of Xi, let X̄j and Sj² be the sample mean and variance based on X1j,...,Xnj, j = 1, 2, and let S12 = (n−1)^{−1} Σ_{i=1}^n (Xi1 − X̄1)(Xi2 − X̄2).
(i) Let ∆0 > 0 be a known constant. Show that a UMPU test for testing H0: σ2/σ1 = ∆0 versus H1: σ2/σ1 ≠ ∆0 rejects H0 when

R = |∆0²S1² − S2²| / √[(∆0²S1² + S2²)² − 4∆0²S12²] > c.

(ii) Find the Lebesgue density of R in (i) when σ2/σ1 = ∆0.
(iii) Assume that σ1 = σ2 = σ. Show that a UMPU test for H0: µ1 = µ2 versus H1: µ1 ≠ µ2 rejects H0 when

V = |X̄2 − X̄1| / √[(n−1)(S1² + S2² − 2S12)] > c.

(iv) Find the Lebesgue density of V in (iii) when µ1 = µ2.
Solution. (i) Let

Yi1 = ∆0Xi1 + Xi2 and Yi2 = ∆0Xi1 − Xi2, i = 1,...,n.

Then Cov(Yi1, Yi2) = ∆0²σ1² − σ2². If we let ρY be the correlation between Yi1 and Yi2, then testing H0: σ2/σ1 = ∆0 versus H1: σ2/σ1 ≠ ∆0 is equivalent to testing H0: ρY = 0 versus H1: ρY ≠ 0. By the result in the previous exercise, the UMPU test rejects when |VY| > c, where VY is the sample correlation coefficient based on the Y-sample. The result follows from the fact that |VY| = R.


(ii) From Exercise 22(ii) in Chapter 2, the Lebesgue density of R in (i) when σ2/σ1 = ∆0 is

[2Γ((n−1)/2) / (√π Γ((n−2)/2))] (1 − r²)^{(n−4)/2} I_{(0,1)}(r).

(iii) Let

Yi1 = Xi1 + Xi2 and Yi2 = Xi1 − Xi2, i = 1,...,n.

Then (Yi1 , Yi2 ) has the bivariate normal distribution

N2( (µ1+µ2, µ1−µ2)^τ, diag(2(1+ρ)σ², 2(1−ρ)σ²) ).

Since Yi1 and Yi2 are independent, the UMPU test of size α for testing H0: µ1 = µ2 versus H1: µ1 ≠ µ2 rejects H0 when |t(Y)| > t_{n−1,α/2}, where t(Y) = √n Ȳ2/S_{Y2}, Ȳ2 and S²_{Y2} are the sample mean and variance based on Y12,...,Yn2, and t_{n−1,α} is the (1−α)th quantile of the t-distribution t_{n−1}. A direct calculation shows that

|t(Y)| = √n |X̄1 − X̄2| / √(S1² + S2² − 2S12) = √(n(n−1)) V.

(iv) Since t(Y) has the t-distribution t_{n−1} under H0, the Lebesgue density of V when µ1 = µ2 is

[√n Γ(n/2) / (√π Γ((n−1)/2))] (1 + nv²)^{−n/2}.

Exercise 33 (#6.57). Let (X1,...,Xn) be a random sample from the exponential distribution on the interval (a, ∞) with scale parameter θ, where a and θ are unknown. Let V = 2 Σ_{i=1}^n (Xi − X(1)), where X(1) is the smallest order statistic.
(i) For testing H0: θ = 1 versus H1: θ ≠ 1, show that a UMPU test of size α rejects H0 when V < c1 or V > c2, where the ci's are determined by

∫_{c1}^{c2} f_{2n−2}(v) dv = ∫_{c1}^{c2} f_{2n}(v) dv = 1 − α

and f_m(v) is the Lebesgue density of the chi-square distribution χ²_m.
(ii) For testing H0: a = 0 versus H1: a ≠ 0, show that a UMP test of size α rejects H0 when X(1) < 0 or 2nX(1)/V > c, where c is determined by

(n−1) ∫_0^c (1+v)^{−n} dv = 1 − α.


Solution. Since (X(1), V) is sufficient and complete for (a, θ), we may consider tests that are functions of (X(1), V).
(i) When θ = 1, X(1) is complete and sufficient for a, V is independent of X(1), and V/2 has the gamma distribution with shape parameter n−1 and scale parameter θ. By Lemma 6.5 and Lemma 6.6 in Shao (2003), the UMPU test is the UMPU test in the problem where V/2 is an observation from the gamma distribution with shape parameter n−1 and scale parameter θ, which has monotone likelihood ratio in V. Hence, the UMPU test of size α rejects H0 when V < c1 or V > c2, where V has the chi-square distribution χ²_{2(n−1)} when θ = 1 and, hence, the ci's are determined by

∫_{c1}^{c2} f_{2n−2}(v) dv = 1 − α

and

∫_{c1}^{c2} v f_{2n−2}(v) dv = (1−α) ∫_0^∞ v f_{2n−2}(v) dv = (1−α)(2n−2).

Since (2n−2)^{−1} v f_{2n−2}(v) = f_{2n}(v), the ci's are determined by

∫_{c1}^{c2} f_{2n−2}(v) dv = ∫_{c1}^{c2} f_{2n}(v) dv = 1 − α.

(ii) From Exercise 15, for testing H0: a = 0 versus H1: a ≠ 0, a UMP test of size α rejects H0 when X(1) < 0 or 2nX(1)/V > c. It remains to determine c. When a = 0, 2nX(1)/θ has the chi-square distribution χ²_2, V/θ has the chi-square distribution χ²_{2n−2}, and they are independent. Hence, 2n(n−1)X(1)/V has the F-distribution F_{2,2(n−1)}, so 2nX(1)/V has Lebesgue density f(y) = (n−1)(1+y)^{−n}. Therefore,

(n−1) ∫_0^c (1+y)^{−n} dy = 1 − α.
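This integral gives c in closed form: (n−1)∫_0^c (1+y)^{−n} dy = 1 − (1+c)^{−(n−1)}, so c = α^{−1/(n−1)} − 1. A simulation sketch under H0 in plain Python (n = 6, α = 0.05, θ = 2 are arbitrary illustrative choices):

```python
import random

def reject_rate(n=6, alpha=0.05, theta=2.0, reps=50_000, seed=4):
    # Under H0: a = 0, W = 2n*X_(1)/V should satisfy P(W > c) = alpha,
    # with c = alpha**(-1/(n-1)) - 1 from the integral above.
    rng = random.Random(seed)
    c = alpha ** (-1 / (n - 1)) - 1
    hits = 0
    for _ in range(reps):
        x = sorted(rng.expovariate(1 / theta) for _ in range(n))  # Exp(scale theta)
        v = 2 * sum(xi - x[0] for xi in x)
        if 2 * n * x[0] / v > c:
            hits += 1
    return hits / reps

print(reject_rate())
```

The empirical rejection rate matches α, and it does so for any θ, confirming that the null distribution of 2nX(1)/V is free of the scale parameter.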

Exercise 34 (#6.58). Let (X1,...,Xn) be a random sample from the uniform distribution on the interval (θ, ϑ), where −∞ < θ < ϑ < ∞.
(i) Show that the conditional distribution of the smallest order statistic X(1) given the largest order statistic X(n) = x is the distribution of the minimum of a random sample of size n−1 from the uniform distribution on the interval (θ, x).
(ii) Find a UMPU test of size α for testing H0: θ ≤ 0 versus H1: θ > 0.
Solution. (i) The joint Lebesgue density of (X(1), X(n)) is

f(x, y) = [n(n−1)(x − y)^{n−2} / (ϑ − θ)^n] I_{(θ,x)}(y) I_{(y,ϑ)}(x)


and the Lebesgue density of X(n) is

g(x) = [n(x − θ)^{n−1} / (ϑ − θ)^n] I_{(θ,ϑ)}(x).

Hence, the conditional density of X(1) given X(n) = x is

f(x, y)/g(x) = [(n−1)(x − y)^{n−2} / (x − θ)^{n−1}] I_{(θ,x)}(y),

which is the Lebesgue density of the smallest order statistic based on a random sample of size n−1 from the uniform distribution on the interval (θ, x).
(ii) Note that (X(1), X(n)) is complete and sufficient for (θ, ϑ) and, when θ = 0, X(n) is complete for ϑ. Thus, by Lemmas 6.5 and 6.6 in Shao (2003), the UMPU test is the same as the UMPU test in the problem where X(1) is the smallest order statistic of a random sample of size n−1 from the uniform distribution on the interval (θ, x). Let Y = x − X(1). Then Y is the largest order statistic of a random sample of size n−1 from the uniform distribution on the interval (0, η), where η = x − θ. Thus, by the result in Exercise 14(i), a UMPU test of size α is

T = α if Y < x, and T = 0 if Y > x,

conditional on X(n) = x. Since x − X(1) = Y,

T = α if X(1) > 0, and T = 0 if X(1) < 0.
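The conditional-distribution result in (i) rests on the joint density f(x, y); integrating it over θ ≤ y′ > y, x′ ≤ x gives P(X(1) > y, X(n) ≤ x) = [(x − y)/(ϑ − θ)]^n for θ ≤ y ≤ x ≤ ϑ, which is easy to check by simulation (plain Python; the parameter values θ = 0, ϑ = 2, y = 0.5, x = 1.5, n = 4 are arbitrary):

```python
import random

def joint_prob(y, x, theta, vartheta, n, reps=200_000, seed=6):
    # Empirical P(X_(1) > y, X_(n) <= x) for n uniforms on (theta, vartheta);
    # the densities above integrate to ((x - y)/(vartheta - theta))**n.
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.uniform(theta, vartheta) for _ in range(n)]
        if min(xs) > y and max(xs) <= x:
            hits += 1
    return hits / reps

exact = ((1.5 - 0.5) / (2.0 - 0.0)) ** 4   # = 0.0625
est = joint_prob(0.5, 1.5, 0.0, 2.0, 4)
print(est, exact)
```

The event is just "all n observations fall in (y, x]", which is where the closed form comes from.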

Exercise 35 (#6.82). Let X be a random variable having probability density fθ(x) = exp{η(θ)Y(x) − ξ(θ)}h(x) with respect to a σ-finite measure ν, where η is an increasing and differentiable function of θ ∈ Θ ⊂ R.
(i) Show that log ℓ(θ̂) − log ℓ(θ0) is increasing (or decreasing) in Y when θ̂ > θ0 (or θ̂ < θ0), where ℓ(θ) = fθ(x), θ̂ is an MLE of θ, and θ0 ∈ Θ.
(ii) For testing H0: θ1 ≤ θ ≤ θ2 versus H1: θ < θ1 or θ > θ2, or for testing H0: θ = θ0 versus H1: θ ≠ θ0, show that there is a likelihood ratio (LR) test whose rejection region is equivalent to Y(X) < c1 or Y(X) > c2 for some constants c1 and c2.
Solution. (i) From the property of exponential families, θ̂ is a solution of the likelihood equation

∂ log ℓ(θ)/∂θ = η′(θ)Y(X) − ξ′(θ) = 0


and ψ(θ) = ξ′(θ)/η′(θ) has a positive derivative ψ′(θ). Since η′(θ̂)Y − ξ′(θ̂) = 0, θ̂ is an increasing function of Y and dθ̂/dY > 0. Consequently, for any θ0 ∈ Θ,

(d/dY)[log ℓ(θ̂) − log ℓ(θ0)] = (d/dY)[η(θ̂)Y − ξ(θ̂) − η(θ0)Y + ξ(θ0)]
 = (dθ̂/dY) η′(θ̂)Y + η(θ̂) − (dθ̂/dY) ξ′(θ̂) − η(θ0)
 = (dθ̂/dY)[η′(θ̂)Y − ξ′(θ̂)] + η(θ̂) − η(θ0)
 = η(θ̂) − η(θ0),

which is positive (or negative) if θ̂ > θ0 (or θ̂ < θ0).
(ii) Since ℓ(θ) is increasing when θ ≤ θ̂ and decreasing when θ > θ̂,

λ(X) = sup_{θ1≤θ≤θ2} ℓ(θ) / sup_{θ∈Θ} ℓ(θ)
 = ℓ(θ1)/ℓ(θ̂) if θ̂ < θ1; 1 if θ1 ≤ θ̂ ≤ θ2; ℓ(θ2)/ℓ(θ̂) if θ̂ > θ2,

for θ1 ≤ θ2. Hence, λ(X) < c if and only if θ̂ < d1 or θ̂ > d2 for some constants d1 and d2. From the result in (i), this means that λ(X) < c if and only if Y < c1 or Y > c2 for some constants c1 and c2.
Exercise 36 (#6.83). In Exercises 55 and 56 of Chapter 4, consider H0: j = 1 versus H1: j = 2.
(i) Derive the likelihood ratio λ(X).
(ii) Obtain an LR test of size α in Exercise 55 of Chapter 4.
Solution. Following the notation in Exercise 55 of Chapter 4, we obtain that

    λ(X) = 1 if ĵ = 1, and λ(X) = ℓ(θ̂1, j = 1)/ℓ(θ̂2, j = 2) if ĵ = 2;

that is,

    λ(X) = 1 if √(2e/π) (T2/n)/√(T1/n) ≥ 1, and
    λ(X) = [√(2e/π) (T2/n)/√(T1/n)]^n otherwise,

where T1 = Σ_{i=1}^n Xi² and T2 = Σ_{i=1}^n |Xi|. Similarly, for Exercise 56 of Chapter 4,

    λ(X) = 1 if h(X) = 0, and λ(X) = e^{−nX̄}/(1 − X̄)^{n(1−X̄)} if h(X) = 1,

where X̄ is the sample mean, h(X) = 1 if all Xi's are not larger than 1 and h(X) = 0 otherwise.
(ii) Let c ∈ [0, 1]. Then the LR test of size α rejects H0 when

    √(2e/π) (T2/n)/√(T1/n) < c^{1/n},

where c^{1/n} is the αth quantile of the distribution of √(2e/π) (T2/n)/√(T1/n) when

X1, ..., Xn are independent and identically distributed as N(0, 1).
Exercise 37 (#6.84). In Exercise 12, derive the likelihood ratio λ(X) when (a) H0: θ ≤ θ0; (b) H0: θ1 ≤ θ ≤ θ2; and (c) H0: θ ≤ θ1 or θ ≥ θ2.
Solution. Let f and g be the probability densities of F and G, respectively, with respect to the measure corresponding to F + G. Then the likelihood function is ℓ(θ) = θ[f(X) − g(X)] + g(X) and

    sup_{0≤θ≤1} ℓ(θ) = f(X) if f(X) ≥ g(X), and = g(X) if f(X) < g(X).

For θ0 ∈ [0, 1],

    sup_{0≤θ≤θ0} ℓ(θ) = θ0[f(X) − g(X)] + g(X) if f(X) ≥ g(X), and = g(X) if f(X) < g(X).

Hence, for H0: θ ≤ θ0,

    λ(X) = (θ0[f(X) − g(X)] + g(X))/f(X) if f(X) ≥ g(X), and λ(X) = 1 if f(X) < g(X).

For 0 ≤ θ1 ≤ θ2 ≤ 1,

    sup_{θ1≤θ≤θ2} ℓ(θ) = θ2[f(X) − g(X)] + g(X) if f(X) ≥ g(X), and
                       = θ1[f(X) − g(X)] + g(X) if f(X) < g(X),

and, thus, for H0: θ1 ≤ θ ≤ θ2,

    λ(X) = (θ2[f(X) − g(X)] + g(X))/f(X) if f(X) ≥ g(X), and
    λ(X) = (θ1[f(X) − g(X)] + g(X))/g(X) if f(X) < g(X).

Finally,

    sup_{0≤θ≤θ1, θ2≤θ≤1} ℓ(θ) = sup_{0≤θ≤1} ℓ(θ).

Hence, for H0: θ ≤ θ1 or θ ≥ θ2, λ(X) = 1.
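The piecewise likelihood ratio in case (a) can be evaluated directly. The following is a minimal illustrative sketch (the function names and the choice of f and g as N(1, 1) and N(0, 1) densities are assumptions for the example, not from the text):

```python
import math

def lr_theta_le_theta0(fx, gx, theta0):
    """Likelihood ratio for H0: theta <= theta0 in the family
    ell(theta) = theta*f(X) + (1 - theta)*g(X), theta in [0, 1].
    Per the solution: lambda = (theta0*(f - g) + g)/f when f(X) >= g(X),
    and lambda = 1 when f(X) < g(X)."""
    if fx >= gx:
        return (theta0 * (fx - gx) + gx) / fx
    return 1.0

def norm_pdf(x, mu=0.0):
    # N(mu, 1) density, used only to produce example values of f and g
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
```

For example, at an observation x = 2 with f the N(1, 1) density and g the N(0, 1) density, f(x) > g(x) and the ratio lies strictly between 0 and 1 when θ0 < 1.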


Exercise 38 (#6.85). Let (X1, ..., Xn) be a random sample from the discrete uniform distribution on {1, ..., θ}, where θ is an integer ≥ 2. Find a level α LR test for
(i) H0: θ ≤ θ0 versus H1: θ > θ0, where θ0 is a known integer ≥ 2;
(ii) H0: θ = θ0 versus H1: θ ≠ θ0.
Solution. (i) The likelihood function is ℓ(θ) = θ^{−n} I_{{X(n), X(n)+1, ...}}(θ), where X(n) is the largest order statistic. Then

    sup_{θ=2,3,...} ℓ(θ) = X(n)^{−n},

    sup_{θ=2,...,θ0} ℓ(θ) = X(n)^{−n} if X(n) ≤ θ0, and = 0 if X(n) > θ0,

and

    λ(X) = 1 if X(n) ≤ θ0, and λ(X) = 0 if X(n) > θ0.

Hence, a level α test rejects H0 when X(n) > θ0, which has size 0.
(ii) From part (i) of the solution, we obtain that

    λ(X) = (X(n)/θ0)^n if X(n) ≤ θ0, and λ(X) = 0 if X(n) > θ0.

Then, λ(X) < c is equivalent to X(n) > θ0 or X(n) < θ0 c^{1/n}. Let c = α. When θ = θ0, the type I error rate is

    P(X(n) < θ0 α^{1/n}) = [P(X1 < θ0 α^{1/n})]^n = (⌊θ0 α^{1/n}⌋/θ0)^n ≤ (θ0 α^{1/n}/θ0)^n = α,

where ⌊θ0 α^{1/n}⌋ is the integer part of θ0 α^{1/n}. Hence, an LR test of level α rejects H0 if X(n) > θ0 or X(n) < θ0 α^{1/n}.
Exercise 39 (#6.87). Let X = (X1, ..., Xn) be a random sample from the exponential distribution on the interval (a, ∞) with scale parameter θ.
(i) Suppose that θ is known. Find an LR test of size α for testing H0: a ≤ a0 versus H1: a > a0, where a0 is a known constant.
(ii) Suppose that θ is known. Find an LR test of size α for testing H0: a =


a0 versus H1: a ≠ a0.
(iii) Repeat part (i) for the case where θ is also unknown.
(iv) When both θ and a are unknown, find an LR test of size α for testing H0: θ = θ0 versus H1: θ ≠ θ0.
(v) When a > 0 and θ > 0 are unknown, find an LR test of size α for testing H0: a = θ versus H1: a ≠ θ.
Solution. (i) The likelihood function is

    ℓ(a, θ) = θ^{−n} e^{na/θ} e^{−nX̄/θ} I_{(a,∞)}(X(1)),

where X̄ is the sample mean and X(1) is the smallest order statistic. When θ is known, the MLE of a is X(1). When a ≤ a0, the MLE of a is min{a0, X(1)}. Hence, the likelihood ratio is

    λ(X) = 1 if X(1) < a0, and λ(X) = e^{−n(X(1)−a0)/θ} if X(1) ≥ a0.

Then λ(X) < c is equivalent to X(1) > d for some d ≥ a0. To determine d, note that

    sup_{a≤a0} P(X(1) > d) = sup_{a≤a0} (n e^{na/θ}/θ) ∫_d^∞ e^{−nx/θ} dx
                           = (n e^{na0/θ}/θ) ∫_d^∞ e^{−nx/θ} dx
                           = e^{n(a0−d)/θ}.

Setting this probability to α yields d = a0 − n^{−1} θ log α.
(ii) Note that ℓ(a0, θ) = 0 when X(1) < a0. Hence the likelihood ratio is

    λ(X) = 0 if X(1) < a0, and λ(X) = e^{−n(X(1)−a0)/θ} if X(1) ≥ a0.

Therefore, λ(X) < c is equivalent to X(1) ≤ a0 or X(1) > d for some d ≥ a0. From part (i) of the solution, d = a0 − n^{−1} θ log α leads to an LR test of size α.
(iii) The MLE of (a, θ) is (X(1), X̄ − X(1)). When a ≤ a0, the MLE of a is min{a0, X(1)} and the MLE of θ is

    θ̂0 = X̄ − X(1) if X(1) < a0, and θ̂0 = X̄ − a0 if X(1) ≥ a0.

Therefore, the likelihood ratio is

    λ(X) = [T(X)]^n if X(1) ≥ a0, and λ(X) = 1 if X(1) < a0,

where T(X) = (X̄ − X(1))/(X̄ − a0). Hence the LR test rejects H0 if and only if T(X) < c^{1/n}. From the solution to Exercise 33, Y = (X̄ − X(1))/(X̄ − a) has the beta distribution with parameter (n − 1, 1). Then,

    sup_{a≤a0} P(T(X) < c^{1/n}) = sup_{a≤a0} P((X̄ − X(1))/(X̄ − a + a − a0) < c^{1/n})
                                 = P(Y < c^{1/n})
                                 = (n − 1) ∫_0^{c^{1/n}} x^{n−2} dx
                                 = c^{(n−1)/n}.

Setting this probability to α yields c = α^{n/(n−1)}.
(iv) Under H0, the MLE is (X(1), θ0). Let Y = θ0^{−1} n(X̄ − X(1)). Then the likelihood ratio is

    λ(X) = e^n n^{−n} Y^n e^{−Y}.

Thus, λ(X) < c is equivalent to Y < c1 or Y > c2. Under H0, 2Y has the chi-square distribution χ²_{2n−2}. Hence, an LR test of size α rejects H0 when 2Y < χ²_{2(n−1),1−α/2} or 2Y > χ²_{2(n−1),α/2}, where χ²_{r,α} is the (1 − α)th quantile of the chi-square distribution χ²_r.
(v) Under H0, the MLE is (X̄, X̄). Let Y = X(1)/(X̄ − X(1)). Then the likelihood ratio is

    λ(X) = e^n X̄^{−n} (X̄ − X(1))^n = e^n (1 + Y)^{−n}.

Then λ(X) < c is equivalent to Y > b for some b. Under H0, the distribution of Y is the same as that of the ratio Y1/Y2, where Y1 has the exponential distribution on the interval (n, ∞) with scale parameter 1, Y2 has the gamma distribution with shape parameter n − 1 and scale parameter 1, and Y1 and Y2 are independent. Hence, b satisfies

    α = ∫_0^∞ ∫_{max{n, b y2}}^∞ [e^n y2^{n−2} e^{−y2} e^{−y1} / Γ(n − 1)] dy1 dy2
      = [e^n / Γ(n − 1)] ∫_0^∞ y2^{n−2} e^{−y2} e^{−max{n, b y2}} dy2.

Exercise 40. Let (X1, ..., Xn) and (Y1, ..., Yn) be independent random samples from N(µ1, 1) and N(µ2, 1), respectively, where −∞ < µ2 ≤ µ1 < ∞.
(i) Derive the likelihood ratio λ and an LR test of size α for H0: µ1 = µ2 versus H1: µ1 > µ2.
(ii) Derive the distribution of −2 log λ and the power of the LR test in (i).


(iii) Verify that the LR test in (i) is a UMPU test.
Solution. (i) The likelihood function is

    ℓ(µ1, µ2) = (2π)^{−n} exp{−(1/2) Σ_{i=1}^n (Xi − µ1)² − (1/2) Σ_{i=1}^n (Yi − µ2)²}.

Let X̄ be the sample mean based on the Xi's and Ȳ be the sample mean based on the Yi's. When µ1 = µ2, ℓ(µ1, µ2) is maximized at µ̄ = (X̄ + Ȳ)/2. The MLE of (µ1, µ2) is equal to (X̄, Ȳ) when X̄ ≥ Ȳ. Consider the case X̄ < Ȳ. For any fixed µ1 ≤ Ȳ, ℓ(µ1, µ2) increases in µ2 and, hence,

    sup_{µ1≤Ȳ, µ2≤µ1} ℓ(µ1, µ2) = sup_{µ1≤Ȳ} ℓ(µ1, µ1).

Also,

    sup_{µ1>Ȳ, µ2≤µ1} ℓ(µ1, µ2) = sup_{µ1>Ȳ} ℓ(µ1, Ȳ) ≤ sup_{µ1≤Ȳ} ℓ(µ1, µ1),

since X̄ < Ȳ. Hence,

    sup_{µ2≤µ1} ℓ(µ1, µ2) = sup_{µ1≤Ȳ} ℓ(µ1, µ1) = ℓ(µ̄, µ̄),

since µ̄ < Ȳ when X̄ < Ȳ. This shows that the MLE is (µ̄, µ̄) when X̄ < Ȳ. Therefore, the likelihood ratio λ = 1 when X̄ < Ȳ and

    λ = exp{−(1/2) Σ_{i=1}^n (Xi − µ̄)² − (1/2) Σ_{i=1}^n (Yi − µ̄)²}
        / exp{−(1/2) Σ_{i=1}^n (Xi − X̄)² − (1/2) Σ_{i=1}^n (Yi − Ȳ)²}
      = exp{−(n/4)(X̄ − Ȳ)²}

when X̄ ≥ Ȳ. Hence,

    −2 log λ = (n/2)(X̄ − Ȳ)² if X̄ ≥ Ȳ, and −2 log λ = 0 if X̄ < Ȳ.

Note that λ < c for some c ∈ (0, 1) is equivalent to √(n/2)(X̄ − Ȳ) > d for some d > 0. Under H0, √(n/2)(X̄ − Ȳ) has the standard normal distribution. Hence, setting d = Φ^{−1}(1 − α) yields an LR test of size α, where Φ is the cumulative distribution function of the standard normal distribution.
(ii) Note that −2 log λ ≥ 0. Let δ = √(n/2)(µ1 − µ2). Then Z = √(n/2)(X̄ − Ȳ) has distribution N(δ, 1). For t > 0,

    P(−2 log λ ≤ t) = P(−2 log λ ≤ t, Z > 0) + P(−2 log λ ≤ t, Z ≤ 0)
                    = P(Z ≤ √t, Z > 0) + P(Z ≤ 0)
                    = P(0 < Z ≤ √t) + P(Z ≤ 0)
                    = P(Z ≤ √t)
                    = Φ(√t − δ).
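The size and power formulas for this LR test can be checked by simulation. A minimal sketch, assuming the standard normal 0.95-quantile Φ^{−1}(0.95) ≈ 1.6449 (the function names here are illustrative, not from the text):

```python
import math
import random

Z_095 = 1.6448536269514722  # Phi^{-1}(0.95), hard-coded for alpha = 0.05

def lr_test_reject(xs, ys):
    """Reject H0: mu1 = mu2 (vs mu1 > mu2) when
    Z = sqrt(n/2) * (xbar - ybar) > Phi^{-1}(1 - alpha), alpha = 0.05."""
    n = len(xs)
    z = math.sqrt(n / 2) * (sum(xs) / n - sum(ys) / n)
    return z > Z_095

def theoretical_power(delta):
    """1 - Phi(Phi^{-1}(1 - alpha) - delta) with delta = sqrt(n/2)(mu1 - mu2)."""
    phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
    return 1 - phi(Z_095 - delta)
```

With µ1 − µ2 = 0 the rejection probability is the size α = 0.05, and a Monte Carlo rejection rate under µ1 − µ2 = 0.5, n = 25 should track 1 − Φ(Φ^{−1}(0.95) − δ).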


The power of the LR test in (i) is then

    1 − Φ(d − δ) = 1 − Φ(Φ^{−1}(1 − α) − δ).

(iii) The likelihood can be written as

    C_{X,Y} exp{θX̄ + nµ2 U − n(µ1² + µ2²)/2},

where θ = n(µ1 − µ2), U = X̄ + Ȳ, and C_{X,Y} is a quantity that does not depend on any parameters. When θ = 0 (µ1 = µ2), Z is independent of U. Also, Z = √(n/2)(2X̄ − U). Hence, by Theorem 6.4 and Lemma 6.7 in Shao (2003), the UMPU test for H0: θ = 0 versus H1: θ > 0 rejects H0 when Z > c for some c, which is the same as the LR test in (i).
Exercise 41 (#6.89). Let Xi1, ..., Xini, i = 1, 2, be two independent random samples from the uniform distributions on (0, θi), i = 1, 2, respectively, where θ1 > 0 and θ2 > 0 are unknown.
(i) Find an LR test of size α for testing H0: θ1 = θ2 versus H1: θ1 ≠ θ2.
(ii) Derive the limiting distribution of −2 log λ when n1/n2 → κ ∈ (0, ∞), where λ is the likelihood ratio in part (i).
Solution. (i) Let Yi = max_{j=1,...,ni} Xij, i = 1, 2, and Y = max{Y1, Y2}. The likelihood function is

    ℓ(θ1, θ2) = θ1^{−n1} θ2^{−n2} I_{(0,θ1)}(Y1) I_{(0,θ2)}(Y2)

and the MLE of θi is Yi, i = 1, 2. When θ1 = θ2, ℓ(θ1, θ1) = θ1^{−n1−n2} I_{(0,θ1)}(Y) and the MLE of θ1 is Y. Hence, the likelihood ratio is

    λ = Y1^{n1} Y2^{n2} / Y^{n1+n2}.

Assume that θ1 = θ2. For any t ∈ (0, 1),

    P(λ < t) = P(λ < t, Y1 ≥ Y2) + P(λ < t, Y1 < Y2)
             = P(Y2 < t^{1/n2} Y1, Y1 ≥ Y2) + P(Y1 < t^{1/n1} Y2, Y1 < Y2)
             = P(Y2 < t^{1/n2} Y1) + P(Y1 < t^{1/n1} Y2)
             = n1 n2 ∫_0^1 ∫_0^{t^{1/n2} y1} y2^{n2−1} y1^{n1−1} dy2 dy1
               + n1 n2 ∫_0^1 ∫_0^{t^{1/n1} y2} y1^{n1−1} y2^{n2−1} dy1 dy2

             = n1 t ∫_0^1 y1^{n1+n2−1} dy1 + n2 t ∫_0^1 y2^{n1+n2−1} dy2
             = (n1 + n2) t ∫_0^1 y^{n1+n2−1} dy

             = t.

Hence, the LR test of size α rejects H0 when λ < α.
(ii) Under H0, by part (i) of the solution, λ has the uniform distribution on (0, 1), which does not depend on (n1, n2). The distribution of −2 log λ is then the exponential distribution on the interval (0, ∞) with scale parameter 2, which is also the chi-square distribution χ²₂. Consider now the limiting distribution of −2 log λ when n1/n2 → κ ∈ (0, ∞). Assume that θ1 < θ2. Then

    P(Y1 > Y2) = P(Y2 − Y1 − (θ2 − θ1) < −(θ2 − θ1)) → 0

since Y2 − Y1 →p θ2 − θ1. Thus, for the limiting distribution of −2 log λ, we may assume that Y1 ≤ Y2 and, consequently, −2 log λ = 2n1(log Y2 − log Y1). Note that ni(θi − Yi) →d θi Zi, i = 1, 2, where Zi has the exponential distribution on the interval (0, ∞) with scale parameter 1. By the δ-method,

    ni(log θi − log Yi) →d Zi,  i = 1, 2.

Because Y1 and Y2 are independent,

    −2 log λ + 2n1 log(θ1/θ2) = 2[n1(log θ1 − log Y1) − (n1/n2) n2(log θ2 − log Y2)]
                              →d 2(Z1 − κZ2),

where Z1 and Z2 are independent. The limiting distribution of −2 log λ for the case of θ1 > θ2 can be similarly obtained.
Exercise 42 (#6.90). Let (Xi1, ..., Xini), i = 1, 2, be two independent random samples from N(µi, σi²), i = 1, 2, respectively, where the µi's and σi²'s are unknown. For testing H0: σ2²/σ1² = ∆0 versus H1: σ2²/σ1² ≠ ∆0 with a known ∆0 > 0, derive an LR test of size α and compare it with the UMPU test.
Solution. The MLE of (µ1, µ2, σ1², σ2²) is (X̄1, X̄2, σ̂1², σ̂2²), where X̄i is the sample mean based on Xi1, ..., Xini and

    σ̂i² = (1/ni) Σ_{j=1}^{ni} (Xij − X̄i)²,


i = 1, 2. Under H0, the MLE of (µ1, µ2, σ1²) is (X̄1, X̄2, σ̃1²), where

    σ̃1² = [∆0 Σ_{j=1}^{n1} (X1j − X̄1)² + Σ_{j=1}^{n2} (X2j − X̄2)²] / [∆0(n1 + n2)].

Then the likelihood ratio is proportional to

    [1/(1 + F)]^{n1/2} [F/(1 + F)]^{n2/2},

where F = S2²/(∆0 S1²) and Si² is the sample variance based on Xi1, ..., Xini, i = 1, 2. Under H0, F has the F-distribution F_{n2−1,n1−1}. Since the likelihood ratio is a unimodal function in F, an LR test is equivalent to the one that rejects the null hypothesis when F < c1 or F > c2 for some positive constants c1 < c2 chosen so that P(F < c1) + P(F > c2) = α under H0. Note that the UMPU test is one of these tests with the additional requirement of being unbiased, i.e., the ci's must satisfy P(B < c1) + P(B > c2) = α, where B has the beta distribution with parameter ((n2 + 1)/2, (n1 − 1)/2) (e.g., Shao, 2003, p. 414).
Exercise 43 (#6.91). Let (Xi1, Xi2), i = 1, ..., n, be a random sample from the bivariate normal distribution with unknown mean and covariance matrix. For testing H0: ρ = 0 versus H1: ρ ≠ 0, where ρ is the correlation coefficient, show that the test rejecting H0 when |R| > c is an LR test, where

    R = Σ_{i=1}^n (Xi1 − X̄1)(Xi2 − X̄2) / √[Σ_{i=1}^n (Xi1 − X̄1)² Σ_{i=1}^n (Xi2 − X̄2)²]

is the sample correlation coefficient and X̄j is the sample mean based on X1j, ..., Xnj. Discuss the form of the limiting distribution of −2 log λ, where λ is the likelihood ratio.
Solution. From the normal distribution theory, the MLEs of the means are X̄1 and X̄2 and the MLEs of the variances are n^{−1} Σ_{i=1}^n (Xi1 − X̄1)² and n^{−1} Σ_{i=1}^n (Xi2 − X̄2)², regardless of whether H0 holds or not. Under H0, ρ = 0; without that restriction, the MLE of ρ is the sample correlation coefficient R. Using these results, the likelihood ratio is λ = (1 − R²)^{n/2}. Hence, an LR test rejects H0 when |R| > c for some c > 0. The distribution of R under H0 is given in Exercise 9(ii) in Chapter 2. Hence, the Lebesgue density of −2n^{−1} log λ = −log(1 − R²) is

    [Γ((n−1)/2) / (√π Γ((n−2)/2))] (1 − e^{−x})^{−1/2} e^{−(n−2)x/2} I_{(0,∞)}(x).

When ρ ≠ 0, it follows from the result in Exercise 9(i) of Chapter 2 that √n(R − ρ) →d N(0, (1 − ρ²)²/(1 + ρ²)). By the δ-method,

    √n [−2n^{−1} log λ + log(1 − ρ²)] →d N(0, 4ρ²/(1 + ρ²)).
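The statistics R and λ = (1 − R²)^{n/2} are straightforward to compute. A small illustrative sketch (function names are mine, not the book's):

```python
import math

def sample_corr(pairs):
    """Sample correlation coefficient R for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

def lr_statistic(pairs):
    """lambda = (1 - R^2)^{n/2}; the LR test rejects H0: rho = 0 for
    small lambda, i.e., for large |R|."""
    r = sample_corr(pairs)
    return (1 - r * r) ** (len(pairs) / 2)
```

Uncorrelated data give λ near 1 (no evidence against H0), while perfectly correlated data drive λ to 0.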


Exercise 44. Consider the problem in Exercise 63 of Chapter 4. Find an LR test of size α for testing H0: θ1 = θ2 versus H1: θ1 ≠ θ2. Discuss the limiting distribution of −2 log λ, where λ is the likelihood ratio.
Solution. From the solution to Exercise 63 of Chapter 4, the MLE of (θ1, θ2) is equal to n^{−1}(√(T1T2) + T1, √(T1T2) + T2), where T1 = Σ_{i=1}^n Xi I_{(0,∞)}(Xi) and T2 = −Σ_{i=1}^n Xi I_{(−∞,0]}(Xi). When θ1 = θ2, the distribution of Xi is the double exponential distribution with mean 0 and scale parameter θ1 = θ2. Hence, the MLE of θ1 = θ2 is (T1 + T2)/n. Since the likelihood function is

    ℓ(θ1, θ2) = (θ1 + θ2)^{−n} exp{−T1/θ1 − T2/θ2},

the likelihood ratio is

    λ = (√T1 + √T2)^{2n} / [2^n (T1 + T2)^n].

Under H0, the distribution of λ does not depend on any unknown parameter. Hence, an LR test of size α rejects H0 when λ < c, where c satisfies P(λ < c) = α under H0. Note that

    E(Ti) = nθi²/(θ1 + θ2),  Var(Ti) = n[2θi³(θ1 + θ2) − θi⁴]/(θ1 + θ2)²,  i = 1, 2,

and

    Cov(T1, T2) = −E(T1)E(T2)/n = −nθ1²θ2²/(θ1 + θ2)².

Hence, by the central limit theorem,

    √n [(T1/n, T2/n) − (θ1²/(θ1+θ2), θ2²/(θ1+θ2))] →d N2(0, Σ),

where

    Σ = ( θ1³(θ1+2θ2)/(θ1+θ2)²    −θ1²θ2²/(θ1+θ2)²
          −θ1²θ2²/(θ1+θ2)²        θ2³(θ2+2θ1)/(θ1+θ2)² ).

Let g(x, y) = 2 log(√x + √y) − log(x + y) − log 2. Then n^{−1} log λ = g(T1/n, T2/n). The derivatives

    ∂g(x, y)/∂x = 1/(x + √(xy)) − 1/(x + y)  and  ∂g(x, y)/∂y = 1/(y + √(xy)) − 1/(x + y)

at x = E(T1)/n and y = E(T2)/n are equal to θ2(θ2 − θ1)/[θ1(θ1² + θ2²)] and θ1(θ1 − θ2)/[θ2(θ1² + θ2²)], respectively. Hence, by the δ-method,

    √n { n^{−1} log λ − log[(θ1 + θ2)²/(2(θ1² + θ2²))] } →d N(0, τ²),

where

    τ² = [θ1θ2²(θ1 + 2θ2) + θ2θ1²(θ2 + 2θ1) + 2θ1²θ2²](θ1 − θ2)² / [(θ1 + θ2)²(θ1² + θ2²)²].
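The likelihood ratio λ = (√T1 + √T2)^{2n}/[2^n (T1 + T2)^n] satisfies λ ≤ 1, with equality exactly when T1 = T2, since (√T1 + √T2)² = T1 + T2 + 2√(T1T2) ≤ 2(T1 + T2) by the arithmetic–geometric mean inequality. A quick numerical sketch (the function name is illustrative):

```python
import math

def lr_double_exponential(t1, t2, n):
    """lambda = (sqrt(T1) + sqrt(T2))^{2n} / (2^n (T1 + T2)^n),
    the likelihood ratio for H0: theta1 = theta2 above."""
    num = (math.sqrt(t1) + math.sqrt(t2)) ** (2 * n)
    return num / (2 ** n * (t1 + t2) ** n)
```

For instance, with T1 = T2 the ratio is exactly 1, and it decreases as T1 and T2 become more unbalanced.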


Exercise 45 (#6.93). Let X1 and X2 be independent observations from the binomial distributions with sizes n1 and n2 and probabilities p1 and p2, respectively, where the ni's are known and the pi's are unknown.
(i) Find an LR test of level α for testing H0: p1 = p2 versus H1: p1 ≠ p2.
(ii) Find an LR test of level α for testing H0: p1 ≥ p2 versus H1: p1 < p2. Is this test a UMPU test?
Solution. (i) The likelihood function is

    ℓ(p1, p2) = C_X p1^{X1} (1 − p1)^{n1−X1} p2^{X2} (1 − p2)^{n2−X2},

where C_X is a quantity not depending on (p1, p2). The MLE of (p1, p2) is (X1/n1, X2/n2). Under H0, the MLE of p1 = p2 is U/n, where U = X1 + X2 and n = n1 + n2. Then, the likelihood ratio is

    λ1 = (U/n)^U (1 − U/n)^{n−U}
         / [(X1/n1)^{X1} (1 − X1/n1)^{n1−X1} (X2/n2)^{X2} (1 − X2/n2)^{n2−X2}].

An LR test rejects H0 when λ1 < c, which is equivalent to ψ(X1, X2) > g(U) for some function g, where

    ψ(X1, X2) = (X1/n1)^{X1} (1 − X1/n1)^{n1−X1} (X2/n2)^{X2} (1 − X2/n2)^{n2−X2}

is the denominator of λ1. To determine g(U), we note that, under H0, the conditional distribution of ψ(X1, X2) given U does not depend on any unknown parameter (which follows from the sufficiency of U under H0). Hence, if we choose g(U) such that P(ψ(X1, X2) > g(U) | U) ≤ α, then P(ψ(X1, X2) > g(U)) ≤ α.
(ii) Using the same argument as in the solution of Exercise 40, we can show that the MLE of (p1, p2) under H0 (p1 ≥ p2) is equal to (X1/n1, X2/n2) if X1/n1 ≥ X2/n2 and is equal to (U/n, U/n) if X1/n1 < X2/n2. Hence, the likelihood ratio is

    λ = λ1 if X1/n1 < X2/n2, and λ = 1 if X1/n1 ≥ X2/n2,

where λ1 is given in part (i) of the solution. Hence, an LR test rejects H0 when λ1 < c and X1/n1 < X2/n2, which is equivalent to the test that rejects H0 when λ1^{−1} > c^{−1} and U/(1 + n1/n2) < X2. Note that λ1^{−1} = h(X2, U)/q(U), where

    h(X2, U) = ((U−X2)/n1)^{U−X2} (1 − (U−X2)/n1)^{n1−U+X2}
               × (X2/n2)^{X2} (1 − X2/n2)^{n2−X2}


and

    q(U) = (U/n)^U (1 − U/n)^{n−U}.

Since X2(n1 − U + X2)/[(n2 − X2)(U − X2)] > 1 when U/(1 + n1/n2) < X2, the derivative

    ∂ log h(X2, U)/∂X2 = log{X2(n1 − U + X2) / [(n2 − X2)(U − X2)]} > 0,

i.e., h(X2, U) is increasing when U/(1 + n1/n2) < X2. Thus, the LR test is equivalent to the test that rejects H0 when X2 > c(U) for some function c(U). The difference between this test and the UMPU test derived in Exercise 24 is that the UMPU test is of size α and possibly randomized, whereas the LR test is of level α and nonrandomized.
Exercise 46 (#6.95). Let X1 and X2 be independently distributed as the exponential distributions on the interval (0, ∞) with unknown scale parameters θi, i = 1, 2, respectively. Define θ = θ1/θ2. Find an LR test of size α for testing (i) H0: θ = 1 versus H1: θ ≠ 1; (ii) H0: θ ≤ 1 versus H1: θ > 1.
Solution. (i) Since the MLE of (θ1, θ2) is (X1, X2) and, under H0, the MLE of θ1 = θ2 is (X1 + X2)/2, the likelihood ratio is

    λ = X1X2 / [(X1 + X2)/2]² = 4F/(1 + F)²,

where F = X2/X1. Note that λ < c is equivalent to F < c1 or F > c2 for some 0 < c1 < c2. Under H0, F has the F-distribution F2,2. Hence, an LR test of size α rejects H0 when F < c1 or F > c2 with the ci's determined by P(F < c1) + P(F > c2) = α under H0.
(ii) Using the same argument as in Exercises 40 and 45, we obtain the likelihood ratio

    λ = 1 if X1 < X2, and λ = 4F/(1 + F)² if X1 ≥ X2.

Note that λ < c if and only if 4F/(1 + F)² < c and X1 ≥ X2, which is equivalent to F < b for some b. Let F2,2 be a random variable having the F-distribution F2,2. Then

    sup_{θ1≤θ2} P(F < b) = sup_{θ1≤θ2} P(F2,2 < bθ1/θ2) = P(F2,2 < b),

and setting P(F2,2 < b) = α yields an LR test of size α.
Exercise 47. Let (X1, ..., Xn) be a random sample from N(µ, γµ²) with unknown γ > 0 and µ ∈ R.
(i) Find an LR test for testing H0: γ = 1 versus H1: γ ≠ 1.
(ii) In the testing problem in (i), find the forms of Wald's test and Rao's score test.
(iii) Repeat (i) and (ii) when σ² = γµ with unknown γ > 0 and µ > 0.
Solution. (i) The likelihood function is

    ℓ(µ, γ) = (√(2πγ)|µ|)^{−n} exp{−(1/(2γµ²)) Σ_{i=1}^n (Xi − µ)²}.

The MLE of (µ, γ) is (µ̂, γ̂) = (X̄, σ̂²/X̄²), where X̄ is the sample mean and σ̂² = n^{−1} Σ_{i=1}^n (Xi − X̄)². Under H0, by Exercise 41(viii) in Chapter 4, the MLE of µ is

    µ̂0 = µ+ if ℓ(µ+, 1) > ℓ(µ−, 1), and µ̂0 = µ− if ℓ(µ+, 1) ≤ ℓ(µ−, 1),

where

    µ± = [−X̄ ± √(5X̄² + 4σ̂²)] / 2.

The likelihood ratio is

    λ = ℓ(µ̂0, 1)/ℓ(µ̂, γ̂) = (e^{n/2} σ̂^n / |µ̂0|^n) exp{−[nσ̂² + n(µ̂0 − X̄)²]/(2µ̂0²)},

which is a function of X̄²/σ̂². Under H0, the distribution of X̄²/σ̂² does not depend on any unknown parameter. Hence, an LR test can be constructed with rejection region λ < c.
(ii) Let

    s(µ, γ) = ∂ log ℓ(µ, γ)/∂(µ, γ)
            = ( −n/µ + n(X̄ − µ)/(γµ²) + Σ_{i=1}^n (Xi − µ)²/(γµ³),
                −n/(2γ) + Σ_{i=1}^n (Xi − µ)²/(2µ²γ²) ).

The Fisher information about (µ, γ) is

    In(µ, γ) = E{s(µ, γ)[s(µ, γ)]^τ} = n ( 1/(µ²γ) + 2/µ²   1/(µγ)
                                           1/(µγ)           1/(2γ²) ).

Then, Rao's score test statistic is Rn = [s(µ̂0, 1)]^τ [In(µ̂0, 1)]^{−1} s(µ̂0, 1), where µ̂0 is given in part (i) of the solution.


Let R(µ, γ) = γ − 1. Then ∂R/∂µ = 0 and ∂R/∂γ = 1. Wald's test statistic Wn is equal to [R(µ̂, γ̂)]² divided by the last element of the inverse of In(µ̂, γ̂), where µ̂ and γ̂ are given in part (i) of the solution. Hence,

    Wn = n(γ̂ − 1)² / (2γ̂² + 4γ̂³).

(iii) Let T = n^{−1} Σ_{i=1}^n Xi². The likelihood function is

    ℓ(µ, γ) = (√(2πγµ))^{−n} exp{−nT/(2γµ) + nX̄/γ − nµ/(2γ)}.

When γ = 1, it is shown in Exercise 60 of Chapter 4 that the MLE of µ is µ̂0 = (√(1 + 4T) − 1)/2. The MLE of (µ, γ) is equal to (µ̂, γ̂) = (X̄, σ̂²/X̄) when X̄ > 0. If X̄ ≤ 0, however, the likelihood is unbounded in γ. Hence, the likelihood ratio is

    λ = e^{n/2} σ̂^n µ̂0^{−n/2} exp{−nT/(2µ̂0) + nX̄ − nµ̂0/2} if X̄ > 0, and λ = 0 if X̄ ≤ 0.

To construct an LR test, we note that, under H0, T is sufficient for µ. Hence, under H0, we may find a c(T) such that P(λ < c(T) | T) = α for every T. The test rejecting H0 when λ < c(T) has size α, since P(λ < c(T)) = α under H0. Note that

    s(µ, γ) = ∂ log ℓ(µ, γ)/∂(µ, γ)
            = ( −n/(2µ) + nT/(2γµ²) − n/(2γ),
                −n/(2γ) + nT/(2γ²µ) − nX̄/γ² + nµ/(2γ²) ),

    ∂² log ℓ(µ, γ)/∂µ² = n/(2µ²) − nT/(γµ³),

    ∂² log ℓ(µ, γ)/∂µ∂γ = −nT/(2γ²µ²) + n/(2γ²),

and

    ∂² log ℓ(µ, γ)/∂γ² = n/(2γ²) − nT/(γ³µ) + 2nX̄/γ³ − nµ/γ³.

Hence, the Fisher information about (µ, γ) is

    In(µ, γ) = n ( 1/(2µ²) + 1/(γµ)   1/(2γµ)
                   1/(2γµ)            1/(2γ²) )

and Rao's score test statistic is Rn = [s(µ̂0, 1)]^τ [In(µ̂0, 1)]^{−1} s(µ̂0, 1).


Similar to that in part (ii), Wald's test statistic Wn is equal to (γ̂ − 1)² divided by the last element of the inverse of In(µ̂, γ̂), i.e.,

    Wn = n(γ̂ − 1)² / (2γ̂² + γ̂³/µ̂).

Note that Wn is not defined when X̄ ≤ 0. But lim_n P(X̄ ≤ 0) = 0 since µ > 0.
Exercise 48 (#6.100). Suppose that X = (X1, ..., Xk) has the multinomial distribution with a known size n and an unknown probability vector (p1, ..., pk). Consider the problem of testing H0: (p1, ..., pk) = (p01, ..., p0k) versus H1: (p1, ..., pk) ≠ (p01, ..., p0k), where (p01, ..., p0k) is a known probability vector. Find the forms of Wald's test and Rao's score test.
Solution. The MLE of θ = (p1, ..., pk−1) is θ̂ = (X1/n, ..., Xk−1/n). The Fisher information about θ is

    In(θ) = n[D(θ)]^{−1} + (n/pk) J_{k−1} J_{k−1}^τ,

where D(θ) denotes the (k−1)×(k−1) diagonal matrix whose k−1 diagonal elements are the components of the vector θ and J_{k−1} is the (k−1)-vector of 1's. Let θ0 = (p01, ..., p0(k−1)). Then H0: θ = θ0 and Wald's test statistic is

    Wn = (θ̂ − θ0)^τ In(θ̂)(θ̂ − θ0)
       = n(θ̂ − θ0)^τ [D(θ̂)]^{−1}(θ̂ − θ0) + (n²/Xk)[J_{k−1}^τ(θ̂ − θ0)]²
       = Σ_{j=1}^k (Xj − np0j)²/Xj,

using the fact that J_{k−1}^τ(θ̂ − θ0) = p0k − Xk/n. Let ℓ(θ) be the likelihood function. Then

    s(θ) = ∂ log ℓ(θ)/∂θ = (X1/p1 − Xk/pk, ..., Xk−1/pk−1 − Xk/pk).

Note that [In(θ)]^{−1} = n^{−1}D(θ) − n^{−1}θθ^τ and

    θ^τ s(θ) = Σ_{j=1}^{k−1} (Xj − pj Xk/pk) = n − Xk − (1 − pk)Xk/pk = (npk − Xk)/pk.


Also,

    [s(θ)]^τ D(θ) s(θ) = Σ_{j=1}^{k−1} pj (Xj/pj − Xk/pk)²
        = Σ_{j=1}^{k−1} pj [(Xj/pj − n) + (n − Xk/pk)]²
        = Σ_{j=1}^{k−1} pj (Xj/pj − n)² + (1 − pk)(n − Xk/pk)²
          + 2 Σ_{j=1}^{k−1} pj (Xj/pj − n)(n − Xk/pk)
        = Σ_{j=1}^{k−1} (Xj − npj)²/pj + (1 − pk)(Xk − npk)²/pk²
          + 2[(n − Xk) − n(1 − pk)](n − Xk/pk)
        = Σ_{j=1}^{k−1} (Xj − npj)²/pj + [θ^τ s(θ)]² − (Xk − npk)²/pk + 2(Xk − npk)²/pk
        = Σ_{j=1}^k (Xj − npj)²/pj + [θ^τ s(θ)]².

Hence, Rao's score test statistic is

    Rn = [s(θ0)]^τ [In(θ0)]^{−1} s(θ0)
       = n^{−1}[s(θ0)]^τ D(θ0) s(θ0) − n^{−1}[θ0^τ s(θ0)]²
       = Σ_{j=1}^k (Xj − np0j)²/(np0j).
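Wald's statistic Σ(Xj − np0j)²/Xj and Rao's statistic Σ(Xj − np0j)²/(np0j) (the latter being Pearson's chi-square) differ only in the denominators: observed versus expected counts. An illustrative sketch (function names are mine):

```python
def wald_multinomial(counts, p0):
    """Wn = sum_j (X_j - n*p0_j)^2 / X_j; requires all X_j > 0."""
    n = sum(counts)
    return sum((x - n * p) ** 2 / x for x, p in zip(counts, p0))

def rao_multinomial(counts, p0):
    """Rn = sum_j (X_j - n*p0_j)^2 / (n*p0_j): Pearson's chi-square."""
    n = sum(counts)
    return sum((x - n * p) ** 2 / (n * p) for x, p in zip(counts, p0))
```

Both statistics converge in distribution to χ²_{k−1} under H0, and both vanish when the observed proportions equal (p01, ..., p0k).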

Exercise 49 (#6.101). Let A and B be two different events in a probability space related to a random experiment. Suppose that n independent trials of the experiment are carried out and the frequencies of the occurrence of the events are given in the following 2 × 2 contingency table:

           A     Ac
    B     X11   X12
    Bc    X21   X22

Consider testing H0: P(A) = P(B) versus H1: P(A) ≠ P(B).
(i) Derive the likelihood ratio λ and the limiting distribution of −2 log λ


under H0.
(ii) Find the forms of Wald's test and Rao's score test.
Solution. Let pij = E(Xij/n), i = 1, 2, j = 1, 2, and θ = (p11, p12, p21) be the parameter vector (p22 = 1 − p11 − p12 − p21). The likelihood function is proportional to

    ℓ(θ) = p11^{X11} p12^{X12} p21^{X21} (1 − p11 − p12 − p21)^{n−X11−X12−X21}.

Note that H0 is equivalent to H0: p21 = p12.
(i) The MLE of θ is θ̂ = n^{−1}(X11, X12, X21). Under H0, the MLE of p11 is still X11/n, but the MLE of p12 = p21 is (X12 + X21)/(2n). Then

    λ = [(X12 + X21)/2]^{X12+X21} / (X12^{X12} X21^{X21}).

Note that there are two unknown parameters under H0. By Theorem 6.5 in Shao (2003), under H0, −2 log λ →d χ²₁.
(ii) Let R(θ) = p12 − p21. Then C(θ) = ∂R/∂θ = (0, 1, −1). The Fisher information matrix about θ is

    In(θ) = n diag(p11^{−1}, p12^{−1}, p21^{−1}) + (n/p22) J3 J3^τ,

where J3 is the 3-vector of 1's, with

    [In(θ)]^{−1} = n^{−1} diag(p11, p12, p21) − n^{−1} θθ^τ.

Therefore, Wald's test statistic is

    Wn = [R(θ̂)]^τ {[C(θ̂)]^τ [In(θ̂)]^{−1} C(θ̂)}^{−1} R(θ̂)
       = n(X12 − X21)² / [n(X12 + X21) − (X12 − X21)²].

Note that

    s(θ) = ∂ log ℓ(θ)/∂θ = ( X11/p11 − X22/(1 − p11 − p12 − p21),
                             X12/p12 − X22/(1 − p11 − p12 − p21),
                             X21/p21 − X22/(1 − p11 − p12 − p21) )

and, hence, s(θ) evaluated at θ̃ = n^{−1}(X11, (X12 + X21)/2, (X12 + X21)/2) is

    s(θ̃) = ( 0, n(X12 − X21)/(X12 + X21), n(X21 − X12)/(X12 + X21) ).


Therefore, Rao's score test statistic is

    Rn = [s(θ̃)]^τ [In(θ̃)]^{−1} s(θ̃) = (X12 − X21)²/(X12 + X21).
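The score statistic Rn here is McNemar's statistic; both Rn and Wn depend only on the off-diagonal counts X12 and X21. An illustrative sketch (function names are mine, not from the text):

```python
def rao_score_2x2(x12, x21):
    """Rn = (X12 - X21)^2 / (X12 + X21), McNemar's statistic for
    H0: P(A) = P(B) in a 2x2 table of paired events."""
    return (x12 - x21) ** 2 / (x12 + x21)

def wald_2x2(n, x12, x21):
    """Wn = n (X12 - X21)^2 / (n (X12 + X21) - (X12 - X21)^2)."""
    d2 = (x12 - x21) ** 2
    return n * d2 / (n * (x12 + x21) - d2)
```

Both statistics are asymptotically χ²₁ under H0, and Wn ≥ Rn since its denominator is smaller.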

Exercise 50 (#6.102). Consider the r × c contingency table

          1     2    ···    c
    1    X11   X12   ···   X1c
    2    X21   X22   ···   X2c
    .     .     .     .     .
    r    Xr1   Xr2   ···   Xrc

with unknown pij = E(Xij)/n, where n is a known positive integer.
(i) Let A1, ..., Ac be disjoint events with A1 ∪ ··· ∪ Ac = Ω (the sample space of a random experiment), and let B1, ..., Br be disjoint events with B1 ∪ ··· ∪ Br = Ω. Suppose that Xij is the frequency of the occurrence of Aj ∩ Bi in n independent trials of the experiment. Derive the χ² goodness-of-fit test for testing independence of {A1, ..., Ac} and {B1, ..., Br}, i.e.,

    H0: pij = pi· p·j for all i, j   versus   H1: pij ≠ pi· p·j for some i, j,

where pi· = P(Bi) and p·j = P(Aj), i = 1, ..., r, j = 1, ..., c.
(ii) Let (X1j, ..., Xrj), j = 1, ..., c, be c independent random vectors having the multinomial distributions with sizes nj and unknown probability vectors (p1j, ..., prj), j = 1, ..., c, respectively. Consider the problem of testing whether the c multinomial distributions are the same, i.e.,

    H0: pij = pi1 for all i, j   versus   H1: pij ≠ pi1 for some i, j.

Show that the χ² goodness-of-fit test is the same as that in (i).
Solution. (i) Using the Lagrange multiplier method, we can obtain the MLE of the pij's by maximizing

    Σ_{i=1}^r Σ_{j=1}^c Xij log pij − λ (Σ_{i=1}^r Σ_{j=1}^c pij − 1),

where λ is the Lagrange multiplier. Thus, the MLE of pij is Xij/n. Under H0, the MLE's of the pi·'s and p·j's can be obtained by maximizing

    Σ_{i=1}^r Σ_{j=1}^c Xij (log pi· + log p·j) − λ1 (Σ_{i=1}^r pi· − 1) − λ2 (Σ_{j=1}^c p·j − 1),


where λ1 and λ2 are the Lagrange multipliers. Thus, the MLE of pi· is X̄i· = Σ_{j=1}^c Xij/n and the MLE of p·j is X̄·j = Σ_{i=1}^r Xij/n. Hence, the χ² statistic is

    χ² = Σ_{i=1}^r Σ_{j=1}^c (Xij − nX̄i·X̄·j)² / (nX̄i·X̄·j).

The number of free parameters is rc − 1. Under H0, the number of free parameters is r − 1 + c − 1 = r + c − 2. The difference of the two is rc − r − c + 1 = (r − 1)(c − 1). By Theorem 6.9 in Shao (2003), under H0, χ² →d χ²_{(r−1)(c−1)}. Therefore, the χ² goodness-of-fit test rejects H0 when χ² > χ²_{(r−1)(c−1),α}, where χ²_{(r−1)(c−1),α} is the (1 − α)th quantile of the chi-square distribution χ²_{(r−1)(c−1)}.
(ii) Since (X1j, ..., Xrj) has the multinomial distribution with size nj and probability vector (p1j, ..., prj), the MLE of pij is Xij/nj. Let Yi = Σ_{j=1}^c Xij. Under H0, (Y1, ..., Yr) has the multinomial distribution with size n and probability vector (p11, ..., pr1). Hence, the MLE of pi1 under H0 is X̄i· = Yi/n. Note that nj = nX̄·j, j = 1, ..., c. Hence, under H0, the expected (i, j)th frequency estimated by the MLE under H0 is nX̄i·X̄·j. Thus, the χ² statistic is the same as that in part (i) of the solution. The number of free parameters in this case is c(r − 1). Under H0, the number of free parameters is r − 1. The difference of the two is c(r − 1) − (r − 1) = (r − 1)(c − 1). Hence, χ² →d χ²_{(r−1)(c−1)} under H0 and the χ² goodness-of-fit test is the same as that in (i).
Exercise 51 (#6.103). In Exercise 50(i), derive Wald's test and Rao's score test statistics.
Solution. For a set {aij, i = 1, ..., r, j = 1, ..., c, (i, j) ≠ (r, c)} of rc − 1 numbers, we denote by vec(aij) the (rc − 1)-vector whose components are the aij's and by D(aij) the diagonal matrix whose diagonal elements are the components of vec(aij). Let θ = vec(pij), J be the (rc − 1)-vector of 1's, and ℓ(θ) be the likelihood function. Then,

    s(θ) = ∂ log ℓ(θ)/∂θ = vec( Xij/pij − Xrc/(1 − J^τ θ) )

and

    ∂² log ℓ(θ)/∂θ∂θ^τ = −D(Xij/pij²) − [Xrc/(1 − J^τ θ)²] JJ^τ.

Since E(Xij) = npij, the Fisher information about θ is

    In(θ) = nD(pij^{−1}) + n prc^{−1} JJ^τ

with

    [In(θ)]^{−1} = n^{−1} D(pij) − n^{−1} θθ^τ.


Let p̃ij be the MLE of pij under H0 and θ̃ = vec(p̃ij). Then Rao's score test statistic is

    Rn = n^{−1} [s(θ̃)]^τ [D(p̃ij) − θ̃θ̃^τ] s(θ̃).

Note that

    θ̃^τ s(θ̃) = Σ_{(i,j)≠(r,c)} ( Xij − Xrc p̃ij/(1 − J^τ θ̃) )
              = n − Xrc − Xrc J^τ θ̃/p̃rc
              = n − Xrc/p̃rc,

where p̃rc = 1 − J^τ θ̃ = X̄r·X̄·c. Also,

    [s(θ̃)]^τ D(p̃ij) s(θ̃) = Σ_{(i,j)≠(r,c)} p̃ij (Xij/p̃ij − Xrc/p̃rc)²
        = Σ_{(i,j)≠(r,c)} p̃ij (Xij/p̃ij − n)² + (1 − p̃rc)(n − Xrc/p̃rc)²
          + 2 Σ_{(i,j)≠(r,c)} p̃ij (Xij/p̃ij − n)(n − Xrc/p̃rc)
        = Σ_{(i,j)≠(r,c)} p̃ij (Xij/p̃ij − n)² + (1 + p̃rc)(n − Xrc/p̃rc)².

Hence,

    Rn = n^{−1} [ Σ_{(i,j)≠(r,c)} p̃ij (Xij/p̃ij − n)² + p̃rc (n − Xrc/p̃rc)² ]
       = n^{−1} Σ_{i=1}^r Σ_{j=1}^c p̃ij (Xij/p̃ij − n)²
       = Σ_{i=1}^r Σ_{j=1}^c (Xij − nX̄i·X̄·j)² / (nX̄i·X̄·j),

using the fact that p̃ij = X̄i·X̄·j (part (i) of the solution to Exercise 50). Hence, Rn is the same as χ² in part (i) of the solution to Exercise 50.
Let η(θ) be the (r − 1)(c − 1)-vector obtained by deleting the components prj and pic, j = 1, ..., c − 1, i = 1, ..., r − 1, from the vector θ and let ζ(θ) be η(θ) with pij replaced by pi·p·j, i = 1, ..., r − 1, j = 1, ..., c − 1. Let θ̂ = vec(Xij/n), the MLE of θ, R(θ) = η(θ) − ζ(θ), and C(θ) = ∂η/∂θ − ∂ζ/∂θ. Then, Wald's test statistic is

    Wn = [R(θ̂)]^τ {[C(θ̂)]^τ [In(θ̂)]^{−1} C(θ̂)}^{−1} R(θ̂).


Exercise 52 (#6.105). Let $(X_1,...,X_n)$ be a random sample of binary random variables with $\theta = P(X_1 = 1)$.
(i) Let $\Pi(\theta)$ be the beta distribution with parameter $(a,b)$. When $\Pi$ is used as the prior for $\theta$, find the Bayes factor and the Bayes test for $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$.
(ii) Let $\pi_0 I_{[\theta_0,\infty)}(\theta) + (1-\pi_0)\Pi(\theta)$ be the prior cumulative distribution, where $\Pi$ is the same as that in (i) and $\pi_0 \in (0,1)$ is a constant. Find the Bayes factor and the Bayes test for $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$.
Solution. (i) Under prior $\Pi$, the posterior of $\theta$ is the beta distribution with parameter $(a+T, b+n-T)$, where $T = \sum_{i=1}^n X_i$. Then, the posterior probability of the set $(0,\theta_0]$ (the null hypothesis $H_0$) is
$$p(T) = \frac{\Gamma(a+b+n)}{\Gamma(a+T)\Gamma(b+n-T)}\int_0^{\theta_0} u^{a+T-1}(1-u)^{b+n-T-1}\,du.$$
Hence, the Bayes test rejects $H_0$ if and only if $p(T) < \frac12$ and the Bayes factor is
$$\frac{\text{posterior odds ratio}}{\text{prior odds ratio}} = \frac{p(T)[1-\pi(0)]}{[1-p(T)]\pi(0)},$$
where
$$\pi(0) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^{\theta_0}u^{a-1}(1-u)^{b-1}\,du$$
is the prior probability of the set $(0,\theta_0]$.
(ii) Let
$$m_1(T) = \int_{\theta\ne\theta_0}\theta^T(1-\theta)^{n-T}\,d\Pi(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^1\theta^{a+T-1}(1-\theta)^{b+n-T-1}\,d\theta = \frac{\Gamma(a+b)\Gamma(a+T)\Gamma(b+n-T)}{\Gamma(a+b+n)\Gamma(a)\Gamma(b)}.$$
Since the likelihood function is $\ell(\theta) = \binom{n}{T}\theta^T(1-\theta)^{n-T}$, the posterior probability of the set $\{\theta_0\}$ (the null hypothesis $H_0$) is
$$p(T) = \frac{\int_{\{\theta_0\}}\ell(\theta)\,d[\pi_0 I_{[\theta_0,\infty)}(\theta) + (1-\pi_0)\Pi(\theta)]}{\int\ell(\theta)\,d[\pi_0 I_{[\theta_0,\infty)}(\theta) + (1-\pi_0)\Pi(\theta)]} = \frac{\pi_0\theta_0^T(1-\theta_0)^{n-T}}{\pi_0\theta_0^T(1-\theta_0)^{n-T} + (1-\pi_0)m_1(T)}.$$
Hence, the Bayes test rejects $H_0$ if and only if $p(T) < \frac12$ and the Bayes factor is $\frac{p(T)(1-\pi_0)}{[1-p(T)]\pi_0}$.
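As a numerical sketch (not part of the original solution), the posterior probability $p(T)$ of the point null in part (ii) can be evaluated with log-gamma functions; the parameter values used below are arbitrary illustrations:

```python
import math

def point_null_posterior(T, n, theta0, a, b, pi0):
    """Posterior probability of H0: theta = theta0 under the mixed prior
    pi0 * (point mass at theta0) + (1 - pi0) * Beta(a, b)."""
    # m1(T) = Gamma(a+b)Gamma(a+T)Gamma(b+n-T) / (Gamma(a+b+n)Gamma(a)Gamma(b))
    log_m1 = (math.lgamma(a + b) + math.lgamma(a + T) + math.lgamma(b + n - T)
              - math.lgamma(a + b + n) - math.lgamma(a) - math.lgamma(b))
    log_null = T * math.log(theta0) + (n - T) * math.log(1 - theta0)
    num = pi0 * math.exp(log_null)
    return num / (num + (1 - pi0) * math.exp(log_m1))
```

With $\theta_0 = 1/2$ and a symmetric prior ($a = b$), the formula gives $p(T) = p(n-T)$, a convenient sanity check.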


Exercise 53 (#6.114). Let $(X_1,...,X_n)$ be a random sample from a continuous distribution $F$ on $\mathcal{R}$, $F_n$ be the empirical distribution, $D_n^+ = \sup_{x\in\mathcal{R}}[F_n(x)-F(x)]$, and $D_n^- = \sup_{x\in\mathcal{R}}[F(x)-F_n(x)]$. Show that $D_n^-(F)$ and $D_n^+(F)$ have the same distribution and, for $t \in (0,1)$,
$$P\big(D_n^+(F) \le t\big) = n!\int_{\max\{0,\frac{n}{n}-t\}}^{1}\int_{\max\{0,\frac{n-1}{n}-t\}}^{u_n}\cdots\int_{\max\{0,\frac1n-t\}}^{u_2}du_1\cdots du_n,$$
i.e., the $i$th integral runs from $\max\{0,\frac{n-i+1}{n}-t\}$ to $u_{n-i+2}$, with $u_{n+1} = 1$.

Proof. Let $X_{(i)}$ be the $i$th order statistic, $i = 1,...,n$, $X_{(0)} = -\infty$, and $X_{(n+1)} = \infty$. Note that
$$D_n^+(F) = \max_{0\le i\le n}\ \sup_{X_{(i)}\le x < X_{(i+1)}}\left[\frac{i}{n} - F(x)\right].$$

Chapter 7. Confidence Sets

The lower bound $\underline p$ is the $\alpha_2$th quantile of the beta distribution with parameter $(T, n-T+1)$ if $T > 0$ and is equal to $0$ if $T = 0$. For $\bar p$, it is the solution to $1 - \alpha_1 = 1 - F_{T,p}(T+1-)$. Hence, $\bar p$ is the $(1-\alpha_1)$th quantile of the beta distribution with parameter $(T+1, n-T)$ if $T < n$ and is equal to $1$ if $T = n$. Let $F_{a,b}$ be a random variable having the F-distribution $F_{a,b}$. Then $\frac{(a/b)F_{a,b}}{1+(a/b)F_{a,b}}$ has the beta distribution with parameter $(a/2, b/2)$. Hence
$$\bar p = \frac{\frac{T+1}{n-T}F_{2(T+1),2(n-T),\alpha_1}}{1 + \frac{T+1}{n-T}F_{2(T+1),2(n-T),\alpha_1}},$$
when $F_{a,0,\alpha}$ is defined to be $\infty$. Similarly,
$$\underline p = \frac{\frac{T}{n-T+1}F_{2T,2(n-T+1),1-\alpha_2}}{1 + \frac{T}{n-T+1}F_{2T,2(n-T+1),1-\alpha_2}}.$$
Note that $F_{a,b}^{-1}$ has the F-distribution $F_{b,a}$. Hence,
$$\underline p = \frac{1}{1 + \frac{n-T+1}{T}F_{2(n-T+1),2T,\alpha_2}}.$$
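These F-quantile expressions are equivalent to beta quantiles (the exact Clopper-Pearson bounds for a binomial proportion). A minimal sketch, assuming `scipy` is available (not part of the original solution):

```python
from scipy.stats import beta

def clopper_pearson(T, n, alpha1, alpha2):
    """Exact confidence bounds for a binomial probability based on T
    successes in n trials: the lower bound is the alpha2-quantile of
    Beta(T, n-T+1), the upper bound the (1-alpha1)-quantile of
    Beta(T+1, n-T), with the conventions at T = 0 and T = n."""
    lower = 0.0 if T == 0 else beta.ppf(alpha2, T, n - T + 1)
    upper = 1.0 if T == n else beta.ppf(1 - alpha1, T + 1, n - T)
    return lower, upper
```

For example, `clopper_pearson(7, 20, 0.025, 0.025)` gives an exact 95% interval containing the point estimate $7/20$.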

Exercise 11 (#7.17). Let $X$ be a sample of size 1 from the negative binomial distribution with a known size $r$ and an unknown probability $p \in (0,1)$. Using the cumulative distribution function of $T = X - r$, show that a level $1-\alpha$ confidence interval for $p$ is
$$\left[\frac{1}{1 + \frac{T+1}{r}F_{2(T+1),2r,\alpha_2}},\ \frac{\frac{r}{T}F_{2r,2T,\alpha_1}}{1 + \frac{r}{T}F_{2r,2T,\alpha_1}}\right],$$
where $\alpha_1 + \alpha_2 = \alpha$ and $F_{a,b,\alpha}$ is the same as that in the previous exercise.
Solution. Since the negative binomial family has monotone likelihood ratio in $-T$, the cumulative distribution function of $T$, $F_{T,p}(t)$, is increasing in $p$ for fixed $t$. By Theorem 7.1 in Shao (2003), a level $1-\alpha$ confidence interval for $p$ is $[\underline p, \bar p]$, where $\underline p$ is the solution to $F_{T,p}(T) = \alpha_2$ and $\bar p$ is the solution to $F_{T,p}(T-) = 1-\alpha_1$. Let $B_{m,p}$ denote a binomial random variable with size $m$ and probability $p$ and $\beta_{a,b}$ denote a beta random variable with parameter $(a,b)$. Then,
$$F_{T,p}(t) = P(B_{t+r,p} > r-1) = P(B_{t+r,p} \ge r) = P(\beta_{r,t+1} \le p).$$
Hence, $\underline p$ is the $\alpha_2$th quantile of $\beta_{r,T+1}$. Since $F_{T,p}(T-) = F_{T,p}(T-1)$, $\bar p$ is the $(1-\alpha_1)$th quantile of $\beta_{r,T}$ if $T > 0$ and $1$ if $T = 0$. Using the same argument as that in the solution of the previous exercise, we conclude that
$$[\underline p, \bar p] = \left[\frac{1}{1 + \frac{T+1}{r}F_{2(T+1),2r,\alpha_2}},\ \frac{\frac{r}{T}F_{2r,2T,\alpha_1}}{1 + \frac{r}{T}F_{2r,2T,\alpha_1}}\right].$$
Exercise 12 (#7.18). Let $T$ be a statistic having the noncentral chi-square distribution $\chi_r^2(\theta)$, where the noncentrality parameter $\theta \ge 0$ is unknown and $r$ is a known positive integer. Show that the cumulative distribution function of $T$, $F_\theta(t)$, is nonincreasing in $\theta$ for each fixed $t > 0$ and use this result to construct a confidence interval for $\theta$ with confidence coefficient $1-\alpha$.
Solution A. From Exercise 27 of Chapter 1,
$$F_\theta(t) = e^{-\theta/2}\sum_{j=0}^\infty\frac{(\theta/2)^j}{j!}G_{2j+r}(t),$$
where $G_{2j+r}(t)$ is the cumulative distribution function of the central chi-square distribution $\chi_{2j+r}^2$, $j = 0, 1, 2, ...$. From the definition of the chi-square distribution $\chi_{2j+r}^2$, it is the distribution of the sum of $2j+r$ independent and identically distributed $\chi_1^2$ random variables. Hence, $G_{2j+r}(t)$ is decreasing in $j$ for any fixed $t > 0$. Let $Y$ be a random variable having the Poisson distribution with mean $\theta/2$ and $h(j) = G_{2j+r}(t)$, $t > 0$, $j = 0,1,2,...$. Then $F_\theta(t) = E_\theta[h(Y)]$, where $E_\theta$ is the expectation with respect to the distribution of $Y$. Since the family of distributions of $Y$ has monotone likelihood ratio in $Y$ and $h(Y)$ is decreasing in $Y$, by Lemma 6.3 in Shao (2003), $E_\theta[h(Y)]$ is nonincreasing in $\theta$. By Theorem 7.1 in Shao (2003), a $1-\alpha$ confidence interval for $\theta$ is $[\underline\theta, \bar\theta]$ with
$$\bar\theta = \sup\{\theta: F_\theta(T) \ge \alpha_1\}\qquad\text{and}\qquad\underline\theta = \inf\{\theta: F_\theta(T) \le 1-\alpha_2\},$$
where $\alpha_1 + \alpha_2 = \alpha$.
Solution B. By definition,
$$F_\theta(t) = P(X + Y \le t) = \int_0^\infty P(X \le t-y)f(y)\,dy,$$
where $X$ has the noncentral chi-square distribution $\chi_1^2(\theta)$, $Y$ has the central chi-square distribution $\chi_{r-1}^2$, $f(y)$ is the Lebesgue density of $Y$, and $X$ and $Y$ are independent ($Y = 0$ if $r = 1$). From Exercise 9(ii) in Chapter 6, the family of densities of noncentral chi-square distributions $\chi_1^2(\theta)$ has monotone likelihood ratio in $X$ and, hence, $P(X \le t-y)$ is nonincreasing in $\theta$ for any $t$ and $y$. Hence, $F_\theta(t)$ is nonincreasing in $\theta$ for any $t > 0$. The rest of the solution is the same as that in Solution A.
Exercise 13 (#7.19). Repeat the previous exercise when $\chi_r^2(\theta)$ is replaced by the noncentral F-distribution $F_{r_1,r_2}(\theta)$ with unknown $\theta \ge 0$ and known positive integers $r_1$ and $r_2$.
Solution. It suffices to show that the cumulative distribution function of $F_{r_1,r_2}(\theta)$, $F_\theta(t)$, is nonincreasing in $\theta$ for any $t > 0$, since the rest of the solution is the same as that in Solution A of the previous exercise. By definition, $F_\theta(t)$ is the cumulative distribution function of $(U_1/r_1)/(U_2/r_2)$, where $U_1$ has the noncentral chi-square distribution $\chi_{r_1}^2(\theta)$, $U_2$ has the central chi-square distribution $\chi_{r_2}^2$, and $U_1$ and $U_2$ are independent. Let $g(y)$ be the Lebesgue density of $r_1U_2/r_2$. Then
$$F_\theta(t) = P\big(U_1 \le t(r_1U_2/r_2)\big) = \int_0^\infty P(U_1 \le ty)g(y)\,dy.$$
From the previous exercise, $P(U_1 \le ty)$ is nonincreasing in $\theta$ for any $t$ and $y$. Hence, $F_\theta(t)$ is nonincreasing in $\theta$ for any $t$.
Exercise 14 (#7.20). Let $X_{ij}$, $j = 1,...,n_i$, $i = 1,...,m$, be independent random variables having distribution $N(\mu_i, \sigma^2)$, $i = 1,...,m$. Let


$\bar\mu = n^{-1}\sum_{i=1}^m n_i\mu_i$ and $\theta = \sigma^{-2}\sum_{i=1}^m n_i(\mu_i - \bar\mu)^2$. Construct an upper confidence bound for $\theta$ that has confidence coefficient $1-\alpha$ and is a function of $T = (n-m)(m-1)^{-1}\mathrm{SST}/\mathrm{SSR}$, where
$$\mathrm{SSR} = \sum_{i=1}^m\sum_{j=1}^{n_i}(X_{ij} - \bar X_{i\cdot})^2,\qquad \mathrm{SST} = \sum_{i=1}^m\sum_{j=1}^{n_i}(X_{ij} - \bar X)^2,$$
$\bar X_{i\cdot}$ is the sample mean based on $X_{i1},...,X_{in_i}$, and $\bar X$ is the sample mean based on all $X_{ij}$'s.
Solution. Note that $\sum_{j=1}^{n_i}(X_{ij} - \bar X_{i\cdot})^2/\sigma^2$ has the chi-square distribution $\chi_{n_i-1}^2$, $i = 1,...,m$. By the independence of $X_{ij}$'s, $\mathrm{SSR}/\sigma^2$ has the chi-square distribution $\chi_{n-m}^2$, where $n = \sum_{i=1}^m n_i$. Let $Y = (\bar X_{1\cdot},...,\bar X_{m\cdot})$ and $A$ be the $m\times m$ diagonal matrix whose $i$th diagonal element is $\sqrt{n_i}/\sigma$. Then $AY$ has distribution $N_m(\zeta, I_m)$, where $I_m$ is the identity matrix of order $m$ and $\zeta = (\mu_1\sqrt{n_1}/\sigma,...,\mu_m\sqrt{n_m}/\sigma)$. Note that
$$\mathrm{SSA}/\sigma^2 = \sum_{i=1}^m n_i(\bar X_{i\cdot} - \bar X)^2/\sigma^2 = Y^\tau A(I_m - n^{-1}K_mK_m^\tau)AY,$$
where $K_m = (\sqrt{n_1},...,\sqrt{n_m})$. Since $K_m^\tau K_m = n$, $(I_m - n^{-1}K_mK_m^\tau)^2 = I_m - n^{-1}K_mK_m^\tau$ and, by Exercise 22(i) in Chapter 1, $\mathrm{SSA}/\sigma^2$ has the noncentral chi-square distribution $\chi_{m-1}^2(\delta)$ with
$$\delta = [E(AY)]^\tau(I_m - n^{-1}K_mK_m^\tau)E(AY) = \theta.$$
Also, by Basu's theorem, SSA and SSR are independent. Since $\mathrm{SSA} = \mathrm{SST} - \mathrm{SSR}$, we conclude that
$$T - \frac{n-m}{m-1} = \frac{\mathrm{SSA}/(m-1)}{\mathrm{SSR}/(n-m)}$$
has the noncentral F-distribution $F_{m-1,n-m}(\theta)$. From the previous exercise, the cumulative distribution function of $T$, $F_\theta(t)$, is nonincreasing in $\theta$ for any $t$. Hence, by Theorem 7.1 in Shao (2003), an upper confidence bound for $\theta$ that has confidence coefficient $1-\alpha$ is $\bar\theta = \sup\{\theta: F_\theta(T) \ge \alpha\}$.
Exercise 15 (#7.24). Let $X_i$, $i = 1,2$, be independent random variables distributed as the binomial distributions with sizes $n_i$ and probabilities $p_i$, $i = 1,2$, respectively, where $n_i$'s are known and $p_i$'s are unknown. Show how to invert the acceptance regions of UMPU tests to obtain a level $1-\alpha$ confidence interval for the odds ratio $\frac{p_2(1-p_1)}{p_1(1-p_2)}$.


Solution. Let $\theta = \frac{p_2(1-p_1)}{p_1(1-p_2)}$. From the solution to Exercise 24 in Chapter 6, a UMPU test for $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$ has acceptance region
$$A(\theta_0) = \{(Y,U): c_1(U,\theta_0) \le Y \le c_2(U,\theta_0)\},$$
where $Y = X_2$, $U = X_1 + X_2$, and $c_i(U,\theta)$ are some functions. From Exercise 19 in Chapter 6, for each fixed $U$, $c_i(U,\theta)$ is nondecreasing in $\theta$. Hence, for every $\theta$,
$$\{\theta: c_1(U,\theta) \le Y \le c_2(U,\theta)\} = [c_{2,U}^{-1}(Y),\ c_{1,U}^{-1}(Y)],$$
where
$$c_{i,U}^{-1}(Y) = \inf\{x: c_i(U,x) \ge Y\}.$$
From Theorem 7.2 in Shao (2003), $[c_{2,U}^{-1}(Y), c_{1,U}^{-1}(Y)]$ is a level $1-\alpha$ confidence interval for $\theta$.
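The inversion step only needs $\inf\{x: c_i(U,x) \ge Y\}$ for a nondecreasing function $c_i(U,\cdot)$. A generic bisection sketch (illustrative only, demonstrated on a toy monotone function rather than the actual $c_i$; not part of the original solution):

```python
def monotone_inverse(c, y, lo, hi, tol=1e-10):
    """Approximate inf{x in [lo, hi] : c(x) >= y} for a
    nondecreasing function c, by bisection."""
    if c(lo) >= y:
        return lo
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if c(mid) >= y:
            hi = mid
        else:
            lo = mid
    return hi
```

For the toy function $c(x) = x^2$ on $[0,10]$, the generalized inverse at $y = 9$ is $3$.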

Exercise 16 (#7.25). Let $X = (X_1,...,X_n)$ be a random sample from $N(\mu,\sigma^2)$.
(i) Suppose that $\sigma^2 = \gamma\mu^2$ with unknown $\gamma > 0$ and $\mu \in \mathcal{R}$, $\mu \ne 0$. Obtain a confidence set for $\gamma$ with confidence coefficient $1-\alpha$ by inverting the acceptance regions of LR tests for $H_0: \gamma = \gamma_0$ versus $H_1: \gamma \ne \gamma_0$.
(ii) Repeat (i) when $\sigma^2 = \gamma\mu$ with unknown $\gamma > 0$ and $\mu > 0$.
Solution. (i) The likelihood function is given in part (i) of the solution to Exercise 47 in Chapter 6. The MLE of $(\mu,\gamma)$ is $(\hat\mu,\hat\gamma) = (\bar X, \hat\sigma^2/\bar X^2)$, where $\bar X$ is the sample mean and $\hat\sigma^2 = n^{-1}\sum_{i=1}^n(X_i - \bar X)^2$. When $\gamma = \gamma_0$, using the same argument in the solution of Exercise 41(viii) in Chapter 4, we obtain the MLE of $\mu$ as
$$\hat\mu(\gamma_0) = \begin{cases}\mu_+(\gamma_0) & \ell(\mu_+(\gamma_0),\gamma_0) > \ell(\mu_-(\gamma_0),\gamma_0)\\ \mu_-(\gamma_0) & \ell(\mu_+(\gamma_0),\gamma_0) \le \ell(\mu_-(\gamma_0),\gamma_0),\end{cases}$$
where $\mu_\pm(\gamma_0)$ are the two roots of the stationarity equation $\gamma_0\mu^2 + \bar X\mu - (\hat\sigma^2 + \bar X^2) = 0$, i.e.,
$$\mu_\pm(\gamma_0) = \frac{-\bar X \pm \sqrt{\bar X^2 + 4\gamma_0(\hat\sigma^2 + \bar X^2)}}{2\gamma_0}.$$
The likelihood ratio is
$$\lambda(\gamma_0) = \frac{\hat\sigma^n e^{n/2}}{\gamma_0^{n/2}|\hat\mu(\gamma_0)|^n}\exp\left\{-\frac{n\hat\sigma^2 + n[\hat\mu(\gamma_0) - \bar X]^2}{2[\hat\mu(\gamma_0)]^2\gamma_0}\right\}.$$
The confidence set obtained by inverting the acceptance regions of LR tests is $\{\gamma: \lambda(\gamma) \ge c(\gamma)\}$, where $c(\gamma)$ satisfies $P(\lambda(\gamma) < c(\gamma)) = \alpha$.
(ii) The likelihood function is given in part (ii) of the solution to Exercise 47 in Chapter 6. The MLE of $(\mu,\gamma)$ is $(\bar X, \hat\sigma^2/\bar X)$ when $\bar X > 0$. If $\bar X \le 0$, however, the likelihood is unbounded in $\gamma$. When $\gamma = \gamma_0$, using the same


argument as that in the solution to Exercise 60 of Chapter 4, we obtain the MLE of $\mu$ as $\hat\mu(\gamma_0) = (\sqrt{\gamma_0^2 + 4T} - \gamma_0)/2$, where $T = n^{-1}\sum_{i=1}^n X_i^2$. The likelihood ratio is
$$\lambda(\gamma_0) = \begin{cases}\left[\dfrac{e\hat\sigma^2}{\gamma_0\hat\mu(\gamma_0)}\right]^{n/2}\exp\left\{-\dfrac{nT}{2\gamma_0\hat\mu(\gamma_0)} + \dfrac{n\bar X}{\gamma_0} - \dfrac{n\hat\mu(\gamma_0)}{2\gamma_0}\right\} & \bar X > 0\\[2mm] 0 & \bar X \le 0.\end{cases}$$
The confidence set obtained by inverting the acceptance regions of LR tests is $\{\gamma: \lambda(\gamma) \ge c(T,\gamma)\}$, where $c(T,\gamma)$ satisfies $P(\lambda(\gamma) < c(T,\gamma)|T) = \alpha$.
Exercise 17 (#7.26). Let $X = (X_1,...,X_n)$ be a random sample from $N(\mu,\sigma^2)$ with unknown $\mu$ and $\sigma^2$. Discuss how to construct a confidence interval for $\theta = \mu/\sigma$ with confidence coefficient $1-\alpha$ by
(i) inverting the acceptance regions of the tests given in Exercise 22 of Chapter 6;
(ii) applying Theorem 7.1 in Shao (2003).
Solution. (i) From Exercise 22 in Chapter 6, the acceptance region of a test of size $\alpha$ for $H_0: \theta \le \theta_0$ versus $H_1: \theta > \theta_0$ is $\{X: t(X) \le c_\alpha(\theta_0)\}$, where $t(X) = \sqrt{n}\bar X/S$, $\bar X$ is the sample mean, $S^2$ is the sample variance, and $c_\alpha(\theta)$ is the $(1-\alpha)$th quantile of the noncentral t-distribution $t_{n-1}(\sqrt{n}\theta)$. From the solution to Exercise 22 in Chapter 6, the family of densities of $t_{n-1}(\sqrt{n}\theta)$ has monotone likelihood ratio in $t(X)$. By Lemma 6.3 in Shao (2003), $c_\alpha(\theta)$ is increasing in $\theta$ and, therefore, $\{\theta: c_\alpha(\theta) \ge t(X)\} = [\underline\theta(X),\infty)$ for some $\underline\theta(X)$. By Theorem 7.2 in Shao (2003), $[\underline\theta(X),\infty)$ is a confidence interval for $\theta = \mu/\sigma$ with confidence coefficient $1-\alpha$. If it is desired to obtain a bounded confidence interval for $\theta$, then we may consider $C(X) = \{\theta: c_{\alpha/2}(\theta) \ge t(X) \ge d_{\alpha/2}(\theta)\}$, where $d_\alpha(\theta)$ is the $\alpha$th quantile of $t_{n-1}(\sqrt{n}\theta)$. By considering the problem of testing $H_0: \theta \ge \theta_0$ versus $H_1: \theta < \theta_0$, we conclude that $\{\theta: t(X) \ge d_{\alpha/2}(\theta)\}$ is a confidence interval for $\theta$ with confidence coefficient $1-\alpha/2$. Hence, $C(X)$ is a confidence interval for $\theta$ with confidence coefficient $1-\alpha$.
(ii) The cumulative distribution function of $t(X)$ is
$$F_\theta(t) = \int_0^\infty\Phi(ty - \sqrt{n}\theta)f(y)\,dy,$$
where $\Phi$ is the cumulative distribution function of $N(0,1)$, $f(y)$ is the Lebesgue density of $\sqrt{W/(n-1)}$, and $W$ has the chi-square distribution $\chi_{n-1}^2$. Hence, for any fixed $t$, $F_\theta(t)$ is continuous and decreasing in $\theta$, $\lim_{\theta\to\infty}F_\theta(t) = 0$, and $\lim_{\theta\to-\infty}F_\theta(t) = 1$. By Theorem 7.1 in Shao (2003), $[\underline\theta, \bar\theta]$ is a confidence interval for $\theta$ with confidence coefficient $1-\alpha$, where $\underline\theta$ is the unique solution to $F_\theta(t(X)) = 1-\alpha/2$ and $\bar\theta$ is the unique solution to $F_\theta(t(X)) = \alpha/2$.
Exercise 18 (#7.27). Let $(X_1,...,X_n)$ be a random sample from the


uniform distribution on $(\theta - \frac12, \theta + \frac12)$, where $\theta \in \mathcal{R}$. Construct a confidence interval for $\theta$ with confidence coefficient $1-\alpha$.
Solution. Note that $X_i + \frac12 - \theta$ has the uniform distribution on $(0,1)$. Let $X_{(j)}$ be the $j$th order statistic. Then
$$P\Big(X_{(1)} + \tfrac12 - \theta \le c\Big) = 1 - (1-c)^n.$$
Hence
$$P\Big(X_{(1)} - \tfrac12 + \alpha_1^{1/n} \le \theta\Big) = 1-\alpha_1.$$
Similarly,
$$P\Big(X_{(n)} + \tfrac12 - \alpha_2^{1/n} \ge \theta\Big) = 1-\alpha_2.$$
Let $\alpha = \alpha_1 + \alpha_2$. A confidence interval for $\theta$ with confidence coefficient $1-\alpha$ is then $[X_{(1)} - \frac12 + \alpha_1^{1/n},\ X_{(n)} + \frac12 - \alpha_2^{1/n}]$.
Exercise 19 (#7.29). Let $(X_1,...,X_n)$ be a random sample from $N(\mu,\sigma^2)$ with unknown $\theta = (\mu,\sigma^2)$. Consider the prior Lebesgue density $\pi(\theta) = \pi_1(\mu|\sigma^2)\pi_2(\sigma^2)$, where $\pi_1(\mu|\sigma^2)$ is the density of $N(\mu_0, \sigma_0^2\sigma^2)$,

$$\pi_2(\sigma^2) = \frac{1}{\Gamma(a)b^a}\left(\frac{1}{\sigma^2}\right)^{a+1}e^{-1/(b\sigma^2)}I_{(0,\infty)}(\sigma^2),$$
and $\mu_0$, $\sigma_0^2$, $a$, and $b$ are known.
(i) Find the posterior of $\mu$ and construct a level $1-\alpha$ HPD credible set for $\mu$.
(ii) Show that the credible set in (i) converges to the confidence interval $[\bar X - t_{n-1,\alpha/2}\frac{S}{\sqrt n},\ \bar X + t_{n-1,\alpha/2}\frac{S}{\sqrt n}]$ as $\sigma_0^2$, $a$, and $b$ converge to some limits, where $\bar X$ is the sample mean, $S^2$ is the sample variance, and $t_{n-1,\alpha}$ is the $(1-\alpha)$th quantile of the t-distribution $t_{n-1}$.
Solution. (i) This is a special case of the problem in Exercise 20(iii) of Chapter 4. Let $\omega = \sigma^{-2}$. Then the posterior density of $(\mu,\omega)$ is $p(\mu|\omega)p(\omega)$, where $p(\mu|\omega)$ is the density of $N(\mu_*, \omega^{-1}c_*^{-1})$, $\mu_* = (\sigma_0^{-2}\mu_0 + n\bar X)/(n+\sigma_0^{-2})$, $c_* = n + \sigma_0^{-2}$, and $p(\omega)$ is the density of the gamma distribution with shape parameter $a + n/2$ and scale parameter $\gamma = [b^{-1} + (n-1)S^2/2]^{-1}$. The posterior density for $\mu$ is then
$$f(\mu) = \int_0^\infty p(\mu|\omega)p(\omega)\,d\omega = \frac{\sqrt{c_*}\,\gamma^{-(a+n/2)}}{\sqrt{2\pi}\,\Gamma(a+\frac n2)}\int_0^\infty \omega^{a+(n-1)/2}e^{-[\gamma^{-1}+c_*(\mu-\mu_*)^2/2]\omega}\,d\omega = \frac{\Gamma(a+\frac{n+1}{2})\sqrt{c_*}\,(2\gamma^{-1})^{a+n/2}}{\sqrt{\pi}\,\Gamma(a+\frac n2)\,[2\gamma^{-1} + c_*(\mu-\mu_*)^2]^{a+(n+1)/2}}.$$


Since this density is symmetric about $\mu_*$, a level $1-\alpha$ HPD credible set for $\mu$ is $[\mu_* - t_*, \mu_* + t_*]$, where $t_* > 0$ satisfies
$$\int_{\mu_*-t_*}^{\mu_*+t_*}f(\mu)\,d\mu = 1-\alpha.$$
(ii) Let $\sigma_0^2 \to \infty$, $a \to -1/2$, and $b \to \infty$. Then, $\mu_* \to \bar X$, $c_* \to n$, $2\gamma^{-1} \to (n-1)S^2$, and
$$f(\mu) \to \frac{\Gamma(\frac n2)\sqrt{n}\,[(n-1)S^2]^{(n-1)/2}}{\sqrt{\pi}\,\Gamma(\frac{n-1}{2})\,[(n-1)S^2 + n(\mu-\bar X)^2]^{n/2}},$$
which is the density of $\bar X + (S/\sqrt n)T$ with $T$ being a random variable having the t-distribution $t_{n-1}$. Hence, $t_* \to t_{n-1,\alpha/2}S/\sqrt n$ and the result follows.
Exercise 20 (#7.30). Let $(X_1,...,X_n)$ be a random sample from a distribution on $\mathcal{R}$ with Lebesgue density $\frac1\sigma f\big(\frac{x-\mu}{\sigma}\big)$, where $f$ is a known Lebesgue density and $\mu \in \mathcal{R}$ and $\sigma > 0$ are unknown. Let $X_0$ be a future observation that is independent of $X_i$'s and has the same distribution as $X_i$. Find a pivotal quantity $R(X, X_0)$ and construct a level $1-\alpha$ prediction set for $X_0$.
Solution. Let $\bar X$ and $S^2$ be the sample mean and sample variance. Consider $T = (X_0 - \bar X)/S$. Since

$$T = \frac{X_0 - \bar X}{S} = \frac{\dfrac{X_0-\mu}{\sigma} - \dfrac1n\displaystyle\sum_{i=1}^n\frac{X_i-\mu}{\sigma}}{\left\{\dfrac1{n-1}\displaystyle\sum_{i=1}^n\left[\frac{X_i-\mu}{\sigma} - \frac1n\sum_{j=1}^n\frac{X_j-\mu}{\sigma}\right]^2\right\}^{1/2}}$$
and the density of $(X_i-\mu)/\sigma$ is $f$, $T$ is a pivotal quantity. A $1-\alpha$ prediction set for $X_0$ is $\{X_0: |X_0 - \bar X| \le cS\}$, where $c$ is chosen such that $P(|T| \le c) = 1-\alpha$.
Exercise 21 (#7.31). Let $(X_1,...,X_n)$ be a random sample from a continuous cumulative distribution function $F$ on $\mathcal{R}$ and $X_0$ be a future observation that is independent of $X_i$'s and is distributed as $F$. Suppose that $F$ is increasing in a neighborhood of $F^{-1}(\alpha/2)$ and a neighborhood of $F^{-1}(1-\alpha/2)$. Let $F_n$ be the empirical distribution. Show that the prediction interval $C(X) = [F_n^{-1}(\alpha/2), F_n^{-1}(1-\alpha/2)]$ for $X_0$ satisfies $\lim_n P(X_0 \in C(X)) = 1-\alpha$, where $P$ is the joint distribution of $(X_0, X_1,...,X_n)$.
Solution. Since $F$ is increasing in a neighborhood of $F^{-1}(\alpha/2)$, $F^{-1}(t)$ is continuous at $\alpha/2$. By the result in Exercise 28 of Chapter 5, $\lim_n F_n^{-1}(\alpha/2) = F^{-1}(\alpha/2)$ a.s. Then $X_0 - F_n^{-1}(\alpha/2) \to_d X_0 - F^{-1}(\alpha/2)$ and, thus,
$$\lim_n P\big(F_n^{-1}(\alpha/2) > X_0\big) = P\big(F^{-1}(\alpha/2) > X_0\big) = \alpha/2,$$


since $F$ is continuous. Similarly,
$$\lim_n P\big(X_0 \le F_n^{-1}(1-\alpha/2)\big) = P\big(X_0 \le F^{-1}(1-\alpha/2)\big) = 1-\alpha/2.$$
Hence,
$$\lim_n P\big(X_0 \in C(X)\big) = \lim_n P\big(F_n^{-1}(\alpha/2) \le X_0 \le F_n^{-1}(1-\alpha/2)\big) = 1-\alpha.$$
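A seeded Monte Carlo sketch of this asymptotic coverage, using standard normal data (illustrative only; not part of the original solution):

```python
import math, random

def empirical_quantile(sorted_xs, p):
    """F_n^{-1}(p) = inf{x : F_n(x) >= p} for a sorted sample."""
    n = len(sorted_xs)
    k = max(1, math.ceil(n * p))  # smallest k with k/n >= p
    return sorted_xs[k - 1]

rng = random.Random(20050387)
n, alpha = 2000, 0.05
sample = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
lo = empirical_quantile(sample, alpha / 2)
hi = empirical_quantile(sample, 1 - alpha / 2)
# estimate P(X0 in C(X)) with fresh draws from the same F
reps = 4000
hits = sum(lo <= rng.gauss(0.0, 1.0) <= hi for _ in range(reps))
coverage = hits / reps  # close to 1 - alpha when n is large
```

With $n = 2000$ the estimated coverage is close to $0.95$, consistent with the limit above.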

Exercise 22 (#7.33). Let $X = (X_1,...,X_n)$ ($n > 1$) be a random sample from the exponential distribution on the interval $(\theta,\infty)$ with scale parameter $\theta$, where $\theta > 0$ is unknown.
(i) Show that both $\bar X/\theta$ and $X_{(1)}/\theta$ are pivotal quantities, where $\bar X$ is the sample mean and $X_{(1)}$ is the smallest order statistic.
(ii) Obtain confidence intervals (with confidence coefficient $1-\alpha$) for $\theta$ based on the two pivotal quantities in (i).
(iii) Discuss which confidence interval in (ii) is better in terms of the length.
Solution. (i) Note that $X_i/\theta - 1$ has the exponential distribution on the interval $(0,\infty)$ with scale parameter 1. Hence, $\bar X/\theta - 1$ has the gamma distribution with shape parameter $n$ and scale parameter $n^{-1}$ and $X_{(1)}/\theta - 1$ has the exponential distribution on $(0,\infty)$ with scale parameter $n^{-1}$. Therefore, both $\bar X/\theta$ and $X_{(1)}/\theta$ are pivotal quantities.
(ii) Let $c_{n,\alpha}$ be the $\alpha$th quantile of the gamma distribution with shape parameter $n$ and scale parameter $n^{-1}$. Then
$$P\Big(c_{n,\alpha/2} \le \frac{\bar X}{\theta} - 1 \le c_{n,1-\alpha/2}\Big) = 1-\alpha,$$
which leads to the $1-\alpha$ confidence interval
$$C_1(X) = \left[\frac{\bar X}{1 + c_{n,1-\alpha/2}},\ \frac{\bar X}{1 + c_{n,\alpha/2}}\right].$$
On the other hand,
$$P\Big(\frac1n\log\frac{1}{1-\alpha/2} \le \frac{X_{(1)}}{\theta} - 1 \le \frac1n\log\frac{1}{\alpha/2}\Big) = 1-\alpha,$$
which leads to the $1-\alpha$ confidence interval
$$C_2(X) = \left[\frac{X_{(1)}}{1 - n^{-1}\log(\alpha/2)},\ \frac{X_{(1)}}{1 - n^{-1}\log(1-\alpha/2)}\right].$$
(iii) The length of $C_1(X)$ is
$$\frac{\bar X(c_{n,1-\alpha/2} - c_{n,\alpha/2})}{(1 + c_{n,\alpha/2})(1 + c_{n,1-\alpha/2})}$$


and the length of $C_2(X)$ is
$$\frac{X_{(1)}\log(2/\alpha - 1)n^{-1}}{[1 - n^{-1}\log(\alpha/2)][1 - n^{-1}\log(1-\alpha/2)]}.$$
From the central limit theorem, $\lim_n\sqrt{n}\,(c_{n,1-\alpha/2} - c_{n,\alpha/2}) > 0$, assuming $\alpha < \frac12$. Hence, for sufficiently large $n$,
$$\frac{\log(2/\alpha - 1)n^{-1}}{[1 - n^{-1}\log(\alpha/2)][1 - n^{-1}\log(1-\alpha/2)]} < \frac{c_{n,1-\alpha/2} - c_{n,\alpha/2}}{(1 + c_{n,\alpha/2})(1 + c_{n,1-\alpha/2})}.$$
Also, $\bar X > X_{(1)}$. Hence, for sufficiently large $n$, the length of $C_2(X)$ is shorter than the length of $C_1(X)$.
Exercise 23 (#7.34). Let $\theta > 0$ be an unknown parameter and $T > 0$ be a statistic. Suppose that $T/\theta$ is a pivotal quantity having Lebesgue density $f$ and that $x^2f(x)$ is unimodal at $x_0$ in the sense that $x^2f(x)$ is nondecreasing for $x \le x_0$ and nonincreasing for $x \ge x_0$. Consider the following class of confidence intervals for $\theta$:

$$\mathcal{C} = \left\{[b^{-1}T,\ a^{-1}T]: a > 0,\ b > 0,\ \int_a^b f(x)\,dx = 1-\alpha\right\}.$$
Show that if $[b_*^{-1}T, a_*^{-1}T] \in \mathcal{C}$, $a_*^2f(a_*) = b_*^2f(b_*) > 0$, and $a_* \le x_0 \le b_*$, then the interval $[b_*^{-1}T, a_*^{-1}T]$ has the shortest length within $\mathcal{C}$.
Solution. We need to minimize $\frac1a - \frac1b$ under the constraint $\int_a^b f(x)\,dx = 1-\alpha$. Let $t = \frac1x$; then
$$\int_a^b f(x)\,dx = \int_{1/b}^{1/a}f\Big(\frac1t\Big)\frac{1}{t^2}\,dt = 1-\alpha.$$
Since $x^2f(x)$ is unimodal at $x_0$, $f(\frac1t)\frac{1}{t^2}$ is unimodal at $t = \frac1{x_0}$. The result follows by applying Theorem 7.3(i) in Shao (2003) to the function $f(\frac1t)\frac{1}{t^2}$.
Exercise 24 (#7.35). Let $t_{n-1,\alpha}$ be the $(1-\alpha)$th quantile of the t-distribution $t_{n-1}$ and $z_\alpha$ be the $(1-\alpha)$th quantile of $N(0,1)$, where $0 < \alpha < \frac12$ and $n = 2,3,...$. Show that
$$\frac{\sqrt2\,\Gamma(\frac n2)}{\sqrt{n-1}\,\Gamma(\frac{n-1}{2})}\,t_{n-1,\alpha} \ge z_\alpha,\qquad n = 2,3,....$$

Solution. Let $X = (X_1,...,X_n)$ be a random sample from $N(\mu,\sigma^2)$. If $\sigma^2$ is known, then a $1-2\alpha$ confidence interval obtained by inverting the UMPU tests is $C_1(X) = [\bar X - z_\alpha\sigma/\sqrt n,\ \bar X + z_\alpha\sigma/\sqrt n]$, where $\bar X$ is the sample mean.


If $\sigma^2$ is unknown, then a $1-2\alpha$ confidence interval obtained by inverting the UMPU tests is $C_2(X) = [\bar X - t_{n-1,\alpha}S/\sqrt n,\ \bar X + t_{n-1,\alpha}S/\sqrt n]$, where $S^2$ is the sample variance. For any fixed $\sigma$, both $C_1(X)$ and $C_2(X)$ are unbiased confidence intervals. By Theorem 7.5 in Shao (2003), $C_1(X)$ is the UMAU (uniformly most accurate unbiased) confidence interval. By Pratt's theorem (e.g., Theorem 7.6 in Shao, 2003), the expected length of $C_1(X)$ is no larger than the expected length of $C_2(X)$. The length of $C_2(X)$ is $2t_{n-1,\alpha}S/\sqrt n$. Since $(n-1)S^2/\sigma^2$ has the chi-square distribution $\chi_{n-1}^2$,
$$E(S) = \frac{\sqrt2\,\Gamma(\frac n2)}{\sqrt{n-1}\,\Gamma(\frac{n-1}{2})}\,\sigma,\qquad n = 2,3,...,$$
which implies that the expected length of $C_2(X)$ is
$$\frac{2\sigma}{\sqrt n}\cdot\frac{\sqrt2\,\Gamma(\frac n2)}{\sqrt{n-1}\,\Gamma(\frac{n-1}{2})}\,t_{n-1,\alpha} \ge \frac{2\sigma}{\sqrt n}z_\alpha,$$
the length of $C_1(X)$. This proves the result.
Exercise 25 (#7.36(a),(c)). Let $(X_1,...,X_n)$ be a random sample from $N(\mu,\sigma^2)$, $\mu \in \mathcal{R}$ and $\sigma^2 > 0$.
(i) Suppose that $\mu$ is known. Let $a_n$ and $b_n$ be constants satisfying $a_n^2f_n(a_n) = b_n^2f_n(b_n) > 0$ and $\int_{a_n}^{b_n}f_n(x)\,dx = 1-\alpha$, where $f_n$ is the Lebesgue density of the chi-square distribution $\chi_n^2$. Show that the interval $[b_n^{-1}T, a_n^{-1}T]$ has the shortest length within the class of intervals of the form $[b^{-1}T, a^{-1}T]$, $\int_a^b f_n(x)\,dx = 1-\alpha$, where $T = \sum_{i=1}^n(X_i-\mu)^2$.
(ii) When $\mu$ is unknown, show that $[b_{n-1}^{-1}(n-1)S^2, a_{n-1}^{-1}(n-1)S^2]$ has the shortest length within the class of $1-\alpha$ confidence intervals of the form $[b^{-1}(n-1)S^2, a^{-1}(n-1)S^2]$, where $S^2$ is the sample variance.
(iii) Find the shortest-length confidence interval for $\sigma$ within the class of intervals of the form $[b^{-1/2}\sqrt{n-1}\,S, a^{-1/2}\sqrt{n-1}\,S]$, where $0 < a < b < \infty$ and $\int_a^b f_{n-1}(x)\,dx = 1-\alpha$.
Solution. (i) Note that $T/\sigma^2$ has the chi-square distribution $\chi_n^2$ with Lebesgue density $f_n$ and $x^2f_n(x)$ is unimodal. The result follows from Exercise 23.
(ii) Since $(n-1)S^2$ has the chi-square distribution $\chi_{n-1}^2$ when divided by $\sigma^2$, the result follows from part (i) of the solution with $n$ replaced by $n-1$.
(iii) Let $t = 1/\sqrt x$. Then
$$\int_a^b f_{n-1}(x)\,dx = \int_{1/\sqrt b}^{1/\sqrt a}f_{n-1}\Big(\frac{1}{t^2}\Big)\frac{2}{t^3}\,dt.$$
Minimizing $\frac{1}{\sqrt a} - \frac{1}{\sqrt b}$ under the constraint $\int_a^b f_{n-1}(x)\,dx = 1-\alpha$ is the same as minimizing $\frac{1}{\sqrt a} - \frac{1}{\sqrt b}$ under the constraint $\int_{1/\sqrt b}^{1/\sqrt a}f_{n-1}(\frac{1}{t^2})\frac{2}{t^3}\,dt = 1-\alpha$.


It is easy to check that $f_{n-1}(\frac{1}{t^2})\frac{2}{t^3}$ is unimodal. By Exercise 23, there exist $\frac{1}{\sqrt{b_*}} < \frac{1}{\sqrt{a_*}}$ such that $a_*^{3/2}f_{n-1}(a_*) = b_*^{3/2}f_{n-1}(b_*)$, and the confidence interval $[b_*^{-1/2}\sqrt{n-1}\,S,\ a_*^{-1/2}\sqrt{n-1}\,S]$ has the shortest length within the class of confidence intervals of the form $[b^{-1/2}\sqrt{n-1}\,S, a^{-1/2}\sqrt{n-1}\,S]$, where $0 < a < b < \infty$ and $\int_a^b f_{n-1}(x)\,dx = 1-\alpha$.
Exercise 26 (#7.38). Let $f$ be a Lebesgue density that is nonzero in $[x_-, x_+]$ and is 0 outside $[x_-, x_+]$, $-\infty \le x_- < x_+ \le \infty$.
(i) Suppose that $f$ is decreasing. Show that, among all intervals $[a,b]$ satisfying $\int_a^b f(x)\,dx = 1-\alpha$, the shortest interval is obtained by choosing $a = x_-$ and $b_*$ so that $\int_{x_-}^{b_*}f(x)\,dx = 1-\alpha$.
(ii) Obtain a result similar to that in (i) when $f$ is increasing.
(iii) Show that the interval $[X_{(n)}, \alpha^{-1/n}X_{(n)}]$ has the shortest length among all intervals $[b^{-1}X_{(n)}, a^{-1}X_{(n)}]$, where $X_{(n)}$ is the largest order statistic based on a random sample of size $n$ from the uniform distribution on $(0,\theta)$.
Solution. (i) Since $f$ is decreasing, we must have $x_- > -\infty$. Without loss of generality, we consider $a$ and $b$ such that $x_- \le a < b \le x_+$. If $b \le b_*$ and $a > x_-$, then
$$\int_{x_-}^a f(x)\,dx = \int_{x_-}^{b_*}f(x)\,dx - \int_a^{b_*}f(x)\,dx < 1-\alpha - \int_a^b f(x)\,dx = 0,$$
which is impossible, where the inequality follows from the fact that $f$ is decreasing and positive on its support. If $b > b_*$ but $b - a < b_* - x_-$, then $b - b_* < a - x_-$ and
$$\int_a^b f(x)\,dx = \int_a^{b_*}f(x)\,dx + \int_{b_*}^b f(x)\,dx \le \int_a^{b_*}f(x)\,dx + f(b_*)(b-b_*) < \int_a^{b_*}f(x)\,dx + \int_{x_-}^a f(x)\,dx = 1-\alpha,$$
which is again impossible, since $f \ge f(b_*)$ on $[x_-, a]$. Hence any interval in the class has length at least
(ii) When $f$ is increasing, we must have $x_+ < \infty$. The shortest interval is obtained by choosing $b = x_+$ and $a_*$ so that $\int_{a_*}^{x_+}f(x)\,dx = 1-\alpha$. The proof is similar to that in part (i) of the solution.
(iii) Let $X_{(n)}$ be the largest order statistic based on a random sample of size $n$ from the uniform distribution on $(0,\theta)$. The Lebesgue density of $X_{(n)}/\theta$ is $nx^{n-1}I_{(0,1)}(x)$. We need to minimize $a^{-1} - b^{-1}$ under the constraint
$$1-\alpha = \int_a^b nx^{n-1}\,dx = \int_{b^{-1}}^{a^{-1}}\frac{n}{y^{n+1}}\,dy,\qquad 0 \le a < b \le 1.$$

Note that $n/y^{n+1}$ is decreasing in $[1,\infty)$. By (i), the solution is $b^{-1} = 1$ and $a^{-1}$ satisfying $\int_1^{a^{-1}}ny^{-(n+1)}\,dy = 1-\alpha$, which yields $a = \alpha^{1/n}$. The corresponding confidence interval is $[X_{(n)},\ \alpha^{-1/n}X_{(n)}]$.
Exercise 27 (#7.39). Let $(X_1,...,X_n)$ be a random sample from the exponential distribution on $(a,\infty)$ with scale parameter 1, where $a \in \mathcal{R}$ is unknown. Find a confidence interval for $a$ having the shortest length within the class of confidence intervals $[X_{(1)}+c, X_{(1)}+d]$ with confidence coefficient $1-\alpha$, where $X_{(1)}$ is the smallest order statistic.
Solution. The Lebesgue density of $X_{(1)} - a$ is $ne^{-nx}I_{[0,\infty)}(x)$, which is decreasing on $[0,\infty)$. Note that
$$P(X_{(1)}+c \le a \le X_{(1)}+d) = P(-d \le X_{(1)}-a \le -c) = \int_{-d}^{-c}ne^{-nx}\,dx.$$
Hence, $-c \ge -d \ge 0$. To minimize $d-c$, the length of the $1-\alpha$ confidence interval $[X_{(1)}+c, X_{(1)}+d]$, it follows from Exercise 26(i) that $-d = 0$ and $-c$ satisfies
$$\int_0^{-c}ne^{-nx}\,dx = 1-\alpha,$$
which yields $c = n^{-1}\log\alpha$. The shortest length confidence interval is then $[X_{(1)} + n^{-1}\log\alpha,\ X_{(1)}]$.
Exercise 28 (#7.42). Let $(X_1,...,X_n)$ be a random sample from a distribution with Lebesgue density $\theta x^{\theta-1}I_{(0,1)}(x)$, where $\theta > 0$ is unknown.
(i) Construct a confidence interval for $\theta$ with confidence coefficient $1-\alpha$, using a sufficient statistic.
(ii) Discuss whether the confidence interval obtained in (i) has the shortest length within a class of confidence intervals.
(iii) Discuss whether the confidence interval obtained in (i) is UMAU.
Solution. (i) The complete and sufficient statistic is $T = -\sum_{i=1}^n\log X_i$. Note that $\theta T$ has the gamma distribution with shape parameter $n$ and scale

which yields c = n−1 log α. The shortest length confidence interval is then [X(1) + n−1 log α, X(1) ]. Exercise 28 (#7.42). Let (X1 , ..., Xn ) be a random sample from a distribution with Lebesgue density θxθ−1 I(0,1) (x), where θ > 0 is unknown. (i) Construct a confidence interval for θ with confidence coefficient 1 − α, using a sufficient statistic. (ii) Discuss whether the confidence interval obtained in (i) has the shortest length within a class of confidence intervals. (iii) Discuss whether the confidence interval obtained in (i) is  UMAU. n Solution. (i) The complete and sufficient statistic is T = − i=1 log Xi . Note that θT has the gamma distribution with shape parameter n and scale

Chapter 7. Confidence Sets

327

parameter 1. Let f (x) be the Lebesgue density of θT . Then a confidence interval of coefficient 1 − α can be taken from the following class: " 

  b 1 1 −1 −1 dx = 1 − α . C = [(bT ) , (aT ) ] : f 2 x a x (ii) Note that f ( x1 ) is unimodal. By Exercise 23, [(b∗ T )−1 , (a∗ T )−1 ] with f (a∗ ) = f (b∗ ) has the shortest length within C. (iii) Consider testing hypotheses H0 : θ = θ0 versus H1 : θ = θ0 . The acceptance region of a UMPU test is A(θ0 ) = {X : c1 ≤ θ0 T ≤ c2 }, where c1 and c2 are determined by  c2  c2 f (x)dx = 1 − α and xf (x)dx = n(1 − α). c1

c1

Thus a UMAU confidence interval is [c1 /T, c2 /T ], which is a member of C but in general different from the one in part (ii). Exercise 29 (#7.45). Let X be a single observation from N (θ − 1, 1) if θ < 0, N (0, 1) if θ = 0, and N (θ + 1, 1) if θ > 0. (i) Show that the distribution of X is in a family with monotone likelihood ratio in X. (ii) Construct a Θ -UMA (uniformly most accurate) lower confidence bound for θ with confidence coefficient 1 − α, where Θ = (−∞, θ). Solution. (i) Let µ(θ) be the mean of X. Then ⎧ ⎨ θ−1 µ(θ) = 0 ⎩ θ+1

θ 0,

which is an increasing function of θ. Let fθ (x) be the Lebesgue density of X. For any θ2 > θ1 ,   fθ2 (x) [µ(θ2 )]2 − [µ(θ1 )]2 = exp [µ(θ2 ) − µ(θ1 )]x − fθ1 (x) 2 is increasing in x. Therefore, the family {fθ : θ ∈ R} has monotone likelihood ratio in X. (ii) Consider testing H0 : θ = θ0 versus H1 : θ > θ0 . The UMP test has acceptance region {X : X ≤ c(θ0 )}, where Pθ (X ≥ c(θ)) = α and Pθ denotes the distribution of X. Since Pθ is N (µ(θ), 1), c(θ) = zα + µ(θ), where zα is the (1 − α)th quantile of N (0, 1). Inverting these acceptance regions, we obtain a confidence set C(X) = {θ : X < zα + µ(θ)}.

328

Chapter 7. Confidence Sets

When X − zα > 1, if θ ≤ 0, then µ(θ) > X − zα cannot occur; if θ > 0, µ(θ) > X −zα if and only if θ > X −zα −1; hence, C(X) = (X −zα −1, ∞). Similarly, when X − zα < −1, C(X) = (X − zα + 1, ∞). When −1 ≤ X − zα < 0, µ(θ) > X − zα if and only if θ ≥ 0 and, hence, C(X) = [0, ∞). When 0 ≤ X − zα ≤ 1, µ(θ) > X − zα if and only if θ > 0 and, hence, C(X) = (0, ∞). Hence, a (−∞, θ)-UMA confidence lower bound for θ is ⎧ ⎨ X − zα − 1 θ= 0 ⎩ X − zα + 1

X > zα + 1 z α − 1 ≤ X ≤ zα + 1 X < zα − 1.

Exercise 30 (#7.46). Let X be a vector of n observations having distribution Nn (Zβ, σ 2 In ), where Z is a known n × p matrix of rank r ≤ p < n, β is an unknown p-vector, and σ 2 > 0 is unknown. Let θ = Lβ, where L is an s × p matrix of rank s and all rows of L are in R(Z), W (X, θ) =

2 ˆ ˆ 2 ]/s [X − Z β(θ) − X − Z β , ˆ 2 /(n − r) X − Z β

ˆ where βˆ is the LSE of β and, for each fixed θ, β(θ) is a solution of 2 ˆ = min X − Zβ2 . X − Z β(θ) β:Lβ=θ

Show that C(X) = {θ : W (X, θ) ≤ cα } is an unbiased 1 − α confidence set for θ, where cα is the (1 − α)th quantile of the F-distribution Fs,n−r . Solution. From the discussion in §6.3.2 of Shao (2003), W (X, η) has the noncentral F-distribution Fs,n−r (δ), where δ = η − θ2 /σ 2 for any η = Lγ, γ ∈ Rp . Hence, when θ is the true parameter value, W (X, θ) has the central F-distribution Fs,n−r and, therefore,     P θ ∈ C(X) = P W (X, θ) ≤ cα = 1 − α, i.e., C(X) has confidence coefficient 1 − α. If θ is not the true parameter value,       P θ ∈ C(X) = P W (X, θ ) ≤ cα ≤ P W (X, θ) ≤ cα = 1 − α, where the inequality follows from the fact that the noncentral F-distribution cumulative distribution function is decreasing in its noncentrality parameter (Exercise 13). Hence, C(X) is unbiased. Exercise 31 (#7.48). Let X = (X1 , ..., Xn ) be a random sample from the exponential distribution on (a, ∞) with scale parameter θ, where a ∈ R and θ > 0 are unknown. Find a UMA confidence interval for a with confidence


coefficient $1-\alpha$.
Solution. From Exercises 15 and 33 of Chapter 6, for testing $H_0: a = a_0$ versus $H_1: a \ne a_0$, a UMP test of size $\alpha$ rejects $H_0$ when $X_{(1)} \le a_0$ or $2n(X_{(1)}-a_0)/V > c$, where $X_{(1)}$ is the smallest order statistic, $V = 2\sum_{i=1}^n(X_i - X_{(1)})$, and $c$ satisfies $(n-1)\int_0^c(1+v)^{-n}\,dv = 1-\alpha$. The acceptance region of this test is
$$A(a_0) = \left\{X: 0 \le \frac{2n(X_{(1)}-a_0)}{V} \le c\right\}.$$
Then,
$$C(X) = \{a: X \in A(a)\} = \left\{a: 0 \le \frac{2n(X_{(1)}-a)}{V} \le c\right\} = \left[X_{(1)} - \frac{cV}{2n},\ X_{(1)}\right]$$
is a UMA confidence interval for $a$ with confidence coefficient $1-\alpha$.
Exercise 32. Let $X = (X_1,...,X_n)$ be a random sample from the uniform distribution on $(\theta, \theta+1)$, where $\theta \in \mathcal{R}$ is unknown. Obtain a UMA lower confidence bound for $\theta$ with confidence coefficient $1-\alpha$.
Solution. When $n \ge 2$, it follows from Exercise 13(i) in Chapter 6 (with the fact that $X_i - \theta_0$ has the uniform distribution on $(\theta-\theta_0, \theta-\theta_0+1)$) that a UMP test of size $\alpha$ for testing $H_0: \theta = \theta_0$ versus $H_1: \theta \ne \theta_0$ has acceptance region
$$A(\theta_0) = \{X: X_{(1)} - \theta_0 < 1-\alpha^{1/n}\ \text{and}\ X_{(n)} - \theta_0 < 1\},$$
where $X_{(j)}$ is the $j$th order statistic. When $n = 1$, by Exercise 8(iv) in Chapter 6, the family of densities of $X$ has monotone likelihood ratio in $X$ and, hence, the UMP test of size $\alpha$ rejects $H_0$ when $X > c$ and $c$ satisfies $P(X \ge c) = \alpha$ when $\theta = \theta_0$, i.e., $c = 1-\alpha+\theta_0$. Hence, the acceptance region is $\{X: X < 1-\alpha+\theta_0\}$, which is still equal to $A(\theta_0)$ since $X_{(1)} = X_{(n)} = X$ when $n = 1$. Therefore, a $(-\infty,\theta)$-UMA confidence set for $\theta$ with confidence coefficient $1-\alpha$ is
$$C(X) = \{\theta: X \in A(\theta)\} = \big\{\theta: \max\{X_{(1)} - (1-\alpha^{1/n}),\ X_{(n)} - 1\} < \theta\big\} = [\underline\theta, \infty),$$
where

$$\underline\theta = \max\{X_{(1)} - (1-\alpha^{1/n}),\ X_{(n)} - 1\}$$


is a (−∞, θ)-UMA lower confidence bound for θ with confidence coefficient 1 − α.
Exercise 33 (#7.51). Let (X1, ..., Xn) be a random sample from the Poisson distribution with an unknown mean θ > 0. Find a randomized UMA upper confidence bound for θ with confidence coefficient 1 − α.
Solution. Let Y = ∑_{i=1}^n Xi and W = Y + U, where U is a random variable that is independent of Y and has the uniform distribution on (0, 1). Note that Y has the Poisson distribution with mean nθ. Then, for w > 0,

P(W ≤ w) = ∑_{j=0}^∞ P(W ≤ w, Y = j)
         = ∑_{j=0}^∞ P(Y = j)P(U ≤ w − j)
         = ∑_{j=0}^∞ [e^{−nθ}(nθ)^j/j!](w − j) I_{(j,j+1]}(w)

and

f_θ(w) = (d/dw) P(W ≤ w)
       = ∑_{j=0}^∞ [e^{−nθ}(nθ)^j/j!] I_{(j,j+1]}(w)
       = [e^{−nθ}(nθ)^{[w]}/[w]!] I_{(0,∞)}(w),

where [w] is the integer part of w. For θ1 < θ2,

f_{θ2}(w)/f_{θ1}(w) = e^{n(θ1−θ2)} (θ2/θ1)^{[w]}

is increasing in [w] and, hence, increasing in w, i.e., the family {f_θ : θ > 0} has monotone likelihood ratio in W. Thus, for testing H0 : θ = θ0 versus H1 : θ < θ0, the UMP test has acceptance region {W : W ≥ c(θ0)}, where ∫_0^{c(θ0)} f_{θ0}(w)dw = α. Let c(θ) be the function defined by ∫_0^{c(θ)} f_θ(w)dw = α. For θ1 < θ2, if c(θ1) > c(θ2), then

α = ∫_0^{c(θ1)} f_{θ1}(w)dw ≥ ∫_0^{c(θ1)} f_{θ2}(w)dw > ∫_0^{c(θ2)} f_{θ2}(w)dw = α,

where the first inequality follows from Lemma 6.3 in Shao (2003). Thus, we must have c(θ1) ≤ c(θ2), i.e., c(θ) is nondecreasing in θ. Let c^{−1}(t) = inf{θ : c(θ) ≥ t}. Then W ≥ c(θ) if and only if c^{−1}(W) ≥ θ. Hence, a Θ-UMA upper confidence bound with confidence coefficient 1 − α is c^{−1}(W), where Θ = (θ, ∞).
Exercise 34 (#7.52). Let X be a nonnegative integer-valued random variable from a population P ∈ P. Suppose that P contains discrete probability densities indexed by a real-valued θ and P has monotone likelihood ratio in X. Let U be a random variable that has the uniform distribution on (0, 1) and is independent of X. Show that a UMA lower confidence bound for θ with confidence coefficient 1 − α is the solution of the equation U F_θ(X) + (1 − U)F_θ(X − 1) = 1 − α (assuming that a solution exists), where F_θ(x) is the cumulative distribution function of X.
Solution. Let W = X + U. Using the same argument as in the solution of the previous exercise, we conclude that W has Lebesgue density

f_θ(w) = F_θ([w]) − F_θ([w] − 1),

where [w] is the integer part of w. Note that the probability density function of F_θ with respect to the counting measure is F_θ(x) − F_θ(x − 1), x = 0, 1, 2, .... Since P has monotone likelihood ratio in X,

[F_{θ2}(x) − F_{θ2}(x − 1)]/[F_{θ1}(x) − F_{θ1}(x − 1)]

is nondecreasing in x for any θ1 < θ2 and, hence, the family of densities of W has monotone likelihood ratio in W. For testing H0 : θ = θ0 versus H1 : θ > θ0, a UMP test of size α rejects H0 when W > c(θ0), where ∫_{c(θ0)}^∞ f_{θ0}(w)dw = α. Let c(θ) be the function defined by ∫_{c(θ)}^∞ f_θ(w)dw = α and

A(θ) = {W : W ≤ c(θ)} = {W : ∫_W^∞ f_θ(w)dw ≥ α}.

Since ∫_W^∞ f_θ(w)dw is nondecreasing in θ (Lemma 6.3 in Shao, 2003),

C(W) = {θ : W ∈ A(θ)} = {θ : ∫_W^∞ f_θ(w)dw ≥ α} = [θ̲, ∞),


where θ̲ is a solution to ∫_W^∞ f_θ(w)dw = α (assuming that a solution exists), i.e., a solution to ∫_0^{X+U} f_θ(w)dw = 1 − α. The result follows from

∫_0^{X+U} f_θ(w)dw = ∑_{j=0}^{X−1} ∫_j^{j+1} f_θ(w)dw + ∫_X^{X+U} f_θ(w)dw
                  = ∑_{j=0}^{X−1} [F_θ(j) − F_θ(j − 1)] + U[F_θ(X) − F_θ(X − 1)]
                  = F_θ(X − 1) + U[F_θ(X) − F_θ(X − 1)]
                  = U F_θ(X) + (1 − U)F_θ(X − 1).

Exercise 35 (#7.60(a)). Let X1 and X2 be independent random variables from the exponential distributions on (0, ∞) with scale parameters θ1 and θ2, respectively. Show that [αY/(2 − α), (2 − α)Y/α] is a UMAU confidence interval for θ2/θ1 with confidence coefficient 1 − α, where Y = X2/X1.
Solution. First, we need to find a UMPU test of size α for testing H0 : θ2 = λθ1 versus H1 : θ2 ≠ λθ1, where λ > 0 is a known constant. The joint density of X1 and X2 is

(1/(θ1θ2)) exp{−X1/θ1 − X2/θ2},

which can be written as

(1/(θ1θ2)) exp{−X1(1/θ1 − λ/θ2) − (λX1 + X2)/θ2}.

Hence, by Theorem 6.4 in Shao (2003), a UMPU test of size α rejects H0 when X1 < c1(U) or X1 > c2(U), where U = λX1 + X2. Note that X1/X2 is independent of U under H0. Hence, by Lemma 6.7 of Shao (2003), the UMPU test is equivalent to the test that rejects H0 when X1/X2 < d1 or X1/X2 > d2, which is equivalent to the test that rejects H0 when W < b1 or W > b2, where W = (Y/λ)/(1 + Y/λ) and b1 and b2 satisfy P(b1 < W < b2) = 1 − α (for size α) and E[W I_{(b1,b2)}(W)] = (1 − α)E(W) (for unbiasedness) under H0. When θ2 = λθ1, W has the same distribution as (Z1/Z2)/(1 + Z1/Z2), where Z1 and Z2 are independent and identically distributed random variables having the exponential distribution on (0, ∞) with scale parameter 1. Hence, the distribution of W under H0 is uniform on (0, 1). Then the requirements on b1 and b2 become b2 − b1 = 1 − α and b2² − b1² = 1 − α, which yield b1 = α/2 and b2 = 1 − α/2 (assuming that 0 < α < 1/2). Hence, the acceptance region of the UMPU test is

A(λ) = {W : α/2 ≤ W ≤ 1 − α/2} = {Y : α/(2 − α) ≤ Y/λ ≤ (2 − α)/α}.


Inverting A(λ) leads to


{λ : Y ∈ A(λ)} = [αY/(2 − α), (2 − α)Y/α],

which is a UMAU confidence interval for λ = θ2/θ1 with confidence coefficient 1 − α.
Exercise 36 (#7.63). Let (X_{i1}, ..., X_{in_i}), i = 1, 2, be two independent random samples from N(µi, σi²), i = 1, 2, respectively, where all parameters are unknown. Let θ = µ1 − µ2, X̄i and Si² be the sample mean and sample variance of the ith sample, i = 1, 2.
(i) Show that

R(X, θ) = (X̄1 − X̄2 − θ)/√(n1^{−1}S1² + n2^{−1}S2²)

is asymptotically pivotal, assuming that n1/n2 → c ∈ (0, ∞). Construct a 1 − α asymptotically correct confidence interval for θ using R(X, θ).
(ii) Show that

t(X, θ) = (X̄1 − X̄2 − θ)/√((n1^{−1} + n2^{−1})[(n1 − 1)S1² + (n2 − 1)S2²]/(n1 + n2 − 2))

is asymptotically pivotal if either n1/n2 → 1 or σ1 = σ2 holds.
Solution. (i) Note that

R(X, θ) = [(X̄1 − X̄2 − θ)/√(n1^{−1}σ1² + n2^{−1}σ2²)] · √((σ1² + (n1/n2)σ2²)/(S1² + (n1/n2)S2²)) →d N(0, 1),

because X̄1 − X̄2 is distributed as N(θ, n1^{−1}σ1² + n2^{−1}σ2²) and

√((σ1² + (n1/n2)σ2²)/(S1² + (n1/n2)S2²)) →p √((σ1² + cσ2²)/(σ1² + cσ2²)) = 1

by the fact that Si² →p σi², i = 1, 2. Therefore, R(X, θ) is asymptotically pivotal. A 1 − α asymptotically correct confidence interval for θ is

[X̄1 − X̄2 − z_{α/2}√(n1^{−1}S1² + n2^{−1}S2²), X̄1 − X̄2 + z_{α/2}√(n1^{−1}S1² + n2^{−1}S2²)],

where zα is the (1 − α)th quantile of N(0, 1).
(ii) If σ1² = σ2², then t(X, θ) has the t-distribution t_{n1+n2−2}. Consider now the case where σ1 ≠ σ2 but n1/n2 → 1. Note that

t(X, θ) = [(X̄1 − X̄2 − θ)/√(n1^{−1}σ1² + n2^{−1}σ2²)] g(X),


where


g(X) = √((n1 + n2 − 2)/(n1 + n2)) · √((σ1² + (n1/n2)σ2²)/([(n1 − 1)/n2]S1² + [(n2 − 1)/n1]S2²)) →p 1

when n1/n2 → 1. Then, t(X, θ) →d N(0, 1) and, therefore, is asymptotically pivotal.
Exercise 37 (#7.64). Let (X1, ..., Xn) be a random sample of binary random variables with unknown p = P(Xi = 1).
(i) The confidence set for p obtained by inverting acceptance regions of Rao’s score tests is

C3(X) = {p : n(p̂ − p)² ≤ p(1 − p)χ²_{1,α}},

where p̂ = n^{−1}∑_{i=1}^n Xi and χ²_{1,α} is the (1 − α)th quantile of χ²_1. Show that C3(X) = [p−, p+] with

p± = [2np̂ + χ²_{1,α} ± √(χ²_{1,α}[4np̂(1 − p̂) + χ²_{1,α}])]/[2(n + χ²_{1,α})].

(ii) Compare the length of C3(X) with

C2(X) = [p̂ − z_{1−α/2}√(p̂(1 − p̂)/n), p̂ + z_{1−α/2}√(p̂(1 − p̂)/n)],

the confidence set for p obtained by inverting acceptance regions of Wald’s tests.
Solution. (i) Let g(p) = (n + χ²_{1,α})p² − (2np̂ + χ²_{1,α})p + np̂². Then C3(X) = {p : g(p) ≤ 0}. Since g(p) is a quadratic function of p with g″(p) > 0, C3(X) is an interval whose limits are the two real roots of g. The result follows from the fact that p± are the two real solutions to g(p) = 0.
(ii) The length of the interval C3(X) is

l3(X) = √(χ²_{1,α}[4np̂(1 − p̂) + χ²_{1,α}])/(n + χ²_{1,α})

and the length of the interval C2(X) is

l2(X) = √(4χ²_{1,α} p̂(1 − p̂)/n).

Since

[l2(X)]² − [l3(X)]² = (χ²_{1,α})²[(8n + 4χ²_{1,α})p̂(1 − p̂) − n]/[n(n + χ²_{1,α})²],

we conclude that l2(X) ≥ l3(X) if and only if

p̂(1 − p̂) ≥ n/[4(2n + χ²_{1,α})].
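The comparison above is easy to check numerically. The following sketch is not part of the original solution: it hard-codes χ²_{1,0.05} = z²_{0.975} ≈ 3.841459 instead of calling a quantile routine, and the values p̂ = 0.1, n = 30 are chosen only for illustration.

```python
import math

CHI2_1 = 3.841459  # 0.95 quantile of chi-square(1); equals z_{0.975}^2

def score_interval(p_hat, n, chi2=CHI2_1):
    # roots of g(p) = (n + chi2) p^2 - (2 n p_hat + chi2) p + n p_hat^2
    disc = math.sqrt(chi2 * (4 * n * p_hat * (1 - p_hat) + chi2))
    denom = 2 * (n + chi2)
    return ((2 * n * p_hat + chi2 - disc) / denom,
            (2 * n * p_hat + chi2 + disc) / denom)

def wald_interval(p_hat, n, z=1.959964):
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

p_hat, n = 0.1, 30
lo3, hi3 = score_interval(p_hat, n)   # C3(X)
lo2, hi2 = wald_interval(p_hat, n)    # C2(X)
l3, l2 = hi3 - lo3, hi2 - lo2
# part (ii): l2 >= l3 iff p_hat (1 - p_hat) >= n / (4 (2n + chi2))
criterion = p_hat * (1 - p_hat) >= n / (4 * (2 * n + CHI2_1))
```

For p̂ = 0.1 and n = 30 the criterion fails, so the score interval C3(X) is the longer one; unlike the Wald interval, however, it always stays inside [0, 1] because g(0) = np̂² ≥ 0 and g(1) = n(1 − p̂)² ≥ 0.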

Exercise 38 (#7.67). Let X = (X1, ..., Xn) be a random sample from the Poisson distribution with unknown mean θ > 0 and X̄ be the sample mean.
(i) Show that R(X, θ) = (X̄ − θ)/√(θ/n) is asymptotically pivotal. Construct a 1 − α asymptotically correct confidence interval for θ, using R(X, θ).
(ii) Show that R1(X, θ) = (X̄ − θ)/√(X̄/n) is asymptotically pivotal. Derive a 1 − α asymptotically correct confidence interval for θ, using R1(X, θ).
(iii) Obtain 1 − α asymptotically correct confidence intervals for θ by inverting acceptance regions of LR tests, Wald’s tests, and Rao’s score tests.
Solution. (i) Since E(X1) = Var(X1) = θ, the central limit theorem implies that √n(X̄ − θ) →d N(0, θ). Thus, R(X, θ) = √n(X̄ − θ)/√θ →d N(0, 1) and is asymptotically pivotal. Let zα be the (1 − α)th quantile of N(0, 1). A 1 − α asymptotically correct confidence set for θ is

C(X) = {θ : [R(X, θ)]² ≤ z²_{α/2}} = {θ : nθ² − (2nX̄ + z²_{α/2})θ + nX̄² ≤ 0}.

Since the quadratic form nθ² − (2nX̄ + z²_{α/2})θ + nX̄² has two real roots

θ± = [2nX̄ + z²_{α/2} ± √(4nX̄z²_{α/2} + z⁴_{α/2})]/(2n),

we conclude that C(X) = [θ−, θ+] is an interval.
(ii) By the law of large numbers, X̄ →p θ. By the result in part (i) of the solution and Slutsky’s theorem, R1(X, θ) = √n(X̄ − θ)/√X̄ →d N(0, 1) and is asymptotically pivotal. A 1 − α asymptotically correct confidence set for θ is

C1(X) = {θ : [R1(X, θ)]² ≤ z²_{α/2}} = {θ : n(X̄ − θ)² ≤ X̄z²_{α/2}},

which is the interval [X̄ − z_{α/2}√(X̄/n), X̄ + z_{α/2}√(X̄/n)].
(iii) The likelihood function is

ℓ(θ) = e^{−nθ} θ^{nX̄} ∏_{i=1}^n (1/Xi!).

Then

∂ log ℓ(θ)/∂θ = −n + nX̄/θ

and

∂² log ℓ(θ)/∂θ² = −nX̄/θ².


Since E(X̄) = θ, the Fisher information is In(θ) = n/θ. The MLE of θ is X̄. For testing H0 : θ = θ0 versus H1 : θ ≠ θ0, Wald’s test statistic is

(X̄ − θ0)² In(X̄) = (X̄ − θ0)²(n/X̄) = [R1(X, θ0)]².

Hence, the confidence interval for θ obtained by inverting acceptance regions of Wald’s tests is C1(X) given in part (ii) of the solution. Since θ0 is the MLE of θ under H0, Rao’s score test statistic is

(−n + nX̄/θ0)² [In(θ0)]^{−1} = [R(X, θ0)]².

Hence, the confidence interval for θ obtained by inverting acceptance regions of Rao’s score tests is C(X) given in part (i) of the solution. The likelihood ratio for testing H0 : θ = θ0 versus H1 : θ ≠ θ0 is

λ = e^{n(X̄−θ0)} (θ0/X̄)^{nX̄}.

Note that λ ≥ c for some c is equivalent to c1 ≤ X̄/θ0 ≤ c2 for some c1 and c2. Hence, the confidence interval for θ obtained by inverting acceptance regions of LR tests is [c1X̄, c2X̄], where c1 and c2 are constants such that lim_n P(c1X̄ ≤ θ ≤ c2X̄) = 1 − α.
Exercise 39 (#7.70). Let X = (X1, ..., Xn) be a random sample from N(µ, ϕ) with unknown θ = (µ, ϕ). Obtain 1 − α asymptotically correct confidence sets for µ by inverting acceptance regions of LR tests, Wald’s tests, and Rao’s score tests. Are these sets always intervals?
Solution. The log-likelihood function is

log ℓ(θ) = −(1/(2ϕ)) ∑_{i=1}^n (Xi − µ)² − (n/2) log ϕ − (n/2) log(2π).

Note that

sn(θ) = ∂ log ℓ(θ)/∂θ = (n(X̄ − µ)/ϕ, (1/(2ϕ²)) ∑_{i=1}^n (Xi − µ)² − n/(2ϕ))

and the Fisher information is

In(θ) = n diag(1/ϕ, 1/(2ϕ²)).

The MLE of θ is θ̂ = (X̄, ϕ̂), where X̄ is the sample mean and ϕ̂ = n^{−1} ∑_{i=1}^n (Xi − X̄)².


Consider testing H0 : µ = µ0 versus H1 : µ ≠ µ0. For Wald’s test, R(θ) = µ − µ0 with C = ∂R/∂θ = (1, 0). Hence, Wald’s test statistic is

[R(θ̂)]² {C^τ[In(θ̂)]^{−1}C}^{−1} = n(X̄ − µ0)²/ϕ̂.

Let zα be the (1 − α)th quantile of N(0, 1). The 1 − α asymptotically correct confidence set obtained by inverting the acceptance regions of Wald’s tests is

{µ : n(X̄ − µ)²/ϕ̂ ≤ z²_{α/2}},

which is the interval

[X̄ − z_{α/2}√(ϕ̂/n), X̄ + z_{α/2}√(ϕ̂/n)].

Under H0, the MLE of ϕ is n^{−1} ∑_{i=1}^n (Xi − µ0)² = ϕ̂ + (X̄ − µ0)². Then the likelihood ratio is

λ = [ϕ̂/(ϕ̂ + (X̄ − µ0)²)]^{n/2}.

The asymptotic LR test rejects H0 when λ < e^{−z²_{α/2}/2}, i.e.,

(X̄ − µ0)² > (e^{z²_{α/2}/n} − 1)ϕ̂.

Hence, the 1 − α asymptotically correct confidence set obtained by inverting the acceptance regions of asymptotic LR tests is

{µ : (X̄ − µ)² ≤ (e^{z²_{α/2}/n} − 1)ϕ̂},

which is the interval

[X̄ − √((e^{z²_{α/2}/n} − 1)ϕ̂), X̄ + √((e^{z²_{α/2}/n} − 1)ϕ̂)].

Let θ̃ = (µ0, ϕ̂ + (X̄ − µ0)²) be the MLE of θ under H0. Then Rao’s score test statistic is

Rn² = [sn(θ̃)]^τ [In(θ̃)]^{−1} sn(θ̃).

Note that

sn(θ̃) = (n(X̄ − µ0)/(ϕ̂ + (X̄ − µ0)²), 0).

Hence,

Rn² = n(X̄ − µ0)²/(ϕ̂ + (X̄ − µ0)²)


and the 1 − α asymptotically correct confidence set obtained by inverting the acceptance regions of Rao’s score tests is

{µ : (n − z²_{α/2})(X̄ − µ)² ≤ z²_{α/2} ϕ̂},

which is the interval

[X̄ − z_{α/2}√(ϕ̂/(n − z²_{α/2})), X̄ + z_{α/2}√(ϕ̂/(n − z²_{α/2}))].
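The three confidence sets for µ derived above are all centered at X̄ and differ only in their half-widths, so they are easy to compare numerically. In the following sketch, the values X̄ = 2, ϕ̂ = 4, n = 50 and the hard-coded z_{0.025} ≈ 1.959964 are illustrative assumptions, not from the text.

```python
import math

z = 1.959964  # 0.975 quantile of N(0, 1)

def mean_sets(xbar, phi_hat, n):
    # half-widths of the Wald, LR, and Rao intervals for mu
    wald_half = z * math.sqrt(phi_hat / n)
    lr_half = math.sqrt((math.exp(z * z / n) - 1.0) * phi_hat)
    rao_half = z * math.sqrt(phi_hat / (n - z * z))  # requires n > z^2
    return tuple((xbar - h, xbar + h) for h in (wald_half, lr_half, rao_half))

wald, lr, rao = mean_sets(xbar=2.0, phi_hat=4.0, n=50)
```

Since e^{z²/n} − 1 ≥ z²/n and n − z² < n, the Wald interval is the shortest of the three whenever n > z²_{α/2}, although all three half-widths agree to first order as n → ∞.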

Exercise 40. Let (X1, ..., Xn) be a random sample from a distribution with mean µ, variance σ², and finite 4th moment. Derive a 1 − α asymptotically correct confidence interval for θ = µ/σ.
Solution. Let X̄ be the sample mean and σ̂² = n^{−1} ∑_{i=1}^n (Xi − X̄)². It follows from Example 2.8 in Shao (2003) that

√n ((X̄, σ̂²) − (µ, σ²)) →d N2(0, [[σ², γ], [γ, κ]]),

where γ = E(X1 − µ)³ and κ = E(X1 − µ)⁴ − σ⁴. Let g(x, y) = x/√y. Then ∂g/∂x = 1/√y and ∂g/∂y = −x/(2y^{3/2}). By the δ-method,

√n (X̄/σ̂ − µ/σ) →d N(0, 1 + µ²κ/(4σ⁶) − µγ/σ⁴).

Let

γ̂ = n^{−1} ∑_{i=1}^n (Xi − X̄)³

and

κ̂ = n^{−1} ∑_{i=1}^n (Xi − X̄)⁴ − σ̂⁴.

By the law of large numbers, γ̂ →p γ and κ̂ →p κ. Let W = 1 + X̄²κ̂/(4σ̂⁶) − X̄γ̂/σ̂⁴. By Slutsky’s theorem,

√n (X̄/σ̂ − θ)/√W →d N(0, 1)

and, hence, a 1 − α asymptotically correct confidence interval for θ is

[X̄/σ̂ − z_{α/2}√(W/n), X̄/σ̂ + z_{α/2}√(W/n)],

where zα is the (1 − α)th quantile of N(0, 1).
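A direct implementation of this interval is a few lines of code. This sketch is not from the text; the simulated N(3, 1) data, the sample size, and the seed are arbitrary choices made only to exercise the formula.

```python
import math
import random

def mu_over_sigma_ci(xs, z=1.959964):
    """1 - alpha confidence interval for theta = mu/sigma via the delta method above."""
    n = len(xs)
    xbar = sum(xs) / n
    m2 = sum((x - xbar) ** 2 for x in xs) / n      # sigma_hat^2
    m3 = sum((x - xbar) ** 3 for x in xs) / n      # gamma_hat
    m4 = sum((x - xbar) ** 4 for x in xs) / n
    kappa_hat = m4 - m2 ** 2
    # W = 1 + xbar^2 kappa_hat / (4 sigma_hat^6) - xbar gamma_hat / sigma_hat^4
    w = 1 + xbar ** 2 * kappa_hat / (4 * m2 ** 3) - xbar * m3 / m2 ** 2
    theta_hat = xbar / math.sqrt(m2)
    half = z * math.sqrt(w / n)
    return theta_hat - half, theta_hat + half

random.seed(0)
xs = [random.gauss(3.0, 1.0) for _ in range(2000)]
lo, hi = mu_over_sigma_ci(xs)  # interval for theta = mu/sigma = 3
```

For normal data γ = 0 and κ = 2σ⁴, so W estimates 1 + µ²/(2σ²); with µ/σ = 3 the interval should sit near 3 with half-width about 1.96√(5.5/n).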


Exercise 41. Consider the linear model X = Zβ + ε, where Z is an n × p matrix of full rank, β ∈ Rᵖ, ε = (ε1, ..., εn) with independent and identically distributed εi’s, E(εi) = 0, and Var(εi) = σ². Let Zi be the ith row of Z. Assume that lim_n max_{1≤i≤n} Zi^τ(Z^τZ)^{−1}Zi = 0. Find an asymptotically pivotal quantity and construct a 1 − α asymptotically correct confidence set for β.
Solution. Let β̂ be the LSE of β. Note that Var(β̂) = σ²(Z^τZ)^{−1}. By Theorem 3.12 in Shao (2003),

σ^{−1}(Z^τZ)^{1/2}(β̂ − β) →d Np(0, Ip).

Let σ̂² = ‖X − Zβ̂‖²/n. Since

Xi − Zi^τβ̂ = Xi − Zi^τβ − Zi^τ(β̂ − β) = εi − Zi^τ(β̂ − β),

we obtain that

σ̂² = n^{−1} ∑_{i=1}^n (Xi − Zi^τβ̂)²
    = n^{−1} ∑_{i=1}^n εi² + n^{−1} ∑_{i=1}^n [Zi^τ(β̂ − β)]² − (2/n) ∑_{i=1}^n εi Zi^τ(β̂ − β).

By the law of large numbers, n^{−1} ∑_{i=1}^n εi² →p σ². By the Cauchy-Schwarz inequality,

n^{−1} ∑_{i=1}^n [Zi^τ(β̂ − β)]² ≤ n^{−1} ∑_{i=1}^n Zi^τ(Z^τZ)^{−1}Zi ‖(Z^τZ)^{1/2}(β̂ − β)‖²,

which is bounded by ‖(Z^τZ)^{1/2}(β̂ − β)‖² max_{1≤i≤n} Zi^τ(Z^τZ)^{−1}Zi →p 0. By the Cauchy-Schwarz inequality again,

[n^{−1} ∑_{i=1}^n εi Zi^τ(β̂ − β)]² ≤ [n^{−1} ∑_{i=1}^n εi²][n^{−1} ∑_{i=1}^n [Zi^τ(β̂ − β)]²] →p 0.

Hence, σ̂² →p σ² and σ̂^{−1}(Z^τZ)^{1/2}(β̂ − β) is asymptotically pivotal. A 1 − α asymptotically correct confidence set for β is then

{β : (β̂ − β)^τ(Z^τZ)(β̂ − β) ≤ σ̂² χ²_{p,α}},

where χ2p,α is the (1 − α)th quantile of the chi-square distribution χ2p . Exercise 42 (#7.81). Let (Xi1 , ..., Xini ), i = 1, 2, be two independent random samples from N (µi , σi2 ), i = 1, 2, respectively, where all parameters


are unknown.
(i) Find 1 − α asymptotically correct confidence sets for (µ1, µ2) by inverting acceptance regions of LR tests, Wald’s tests, and Rao’s score tests.
(ii) Repeat (i) for the parameter (µ1, µ2, σ1², σ2²).
(iii) Repeat (i) under the assumption that σ1² = σ2² = σ².
(iv) Repeat (iii) for the parameter (µ1, µ2, σ²).
Solution. (i) The likelihood function is proportional to

(1/(σ1^{n1} σ2^{n2})) exp{−∑_{i=1}^2 (1/(2σi²)) ∑_{j=1}^{ni} (Xij − µi)²}.

Hence, the score function is

(n1(X̄1 − µ1)/σ1², n1σ1²(µ1)/(2σ1⁴) − n1/(2σ1²), n2(X̄2 − µ2)/σ2², n2σ2²(µ2)/(2σ2⁴) − n2/(2σ2²)),

where X̄i is the sample mean of the ith sample and

σi²(t) = ni^{−1} ∑_{j=1}^{ni} (Xij − t)², i = 1, 2,

and the Fisher information is the 4 × 4 diagonal matrix whose diagonal elements are n1/σ1², n1/(2σ1⁴), n2/σ2², and n2/(2σ2⁴). The MLE of (µ1, σ1², µ2, σ2²) is (X̄1, σ̂1², X̄2, σ̂2²), where σ̂i² = σi²(X̄i). When µi is known, the MLE of σi² is σi²(µi), i = 1, 2. Thus, the 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of LR tests is

{(µ1, µ2) : [σ1²(µ1)]^{n1/2} [σ2²(µ2)]^{n2/2} ≤ σ̂1^{n1} σ̂2^{n2} e^{χ²_{2,α}/2}},

where χ²_{r,α} is the (1 − α)th quantile of the chi-square distribution χ²_r. The 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of Wald’s tests is

{(µ1, µ2) : n1(X̄1 − µ1)²/σ̂1² + n2(X̄2 − µ2)²/σ̂2² ≤ χ²_{2,α}}

and the 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of Rao’s score tests is

{(µ1, µ2) : n1(X̄1 − µ1)²/σ1²(µ1) + n2(X̄2 − µ2)²/σ2²(µ2) ≤ χ²_{2,α}}.

(ii) The 1 − α asymptotically correct confidence set for θ = (µ1, µ2, σ1², σ2²) by inverting acceptance regions of LR tests is

{θ : σ1^{n1} σ2^{n2} e^{n1σ1²(µ1)/(2σ1²) + n2σ2²(µ2)/(2σ2²)} ≤ σ̂1^{n1} σ̂2^{n2} e^{(n1+n2+χ²_{4,α})/2}}.


The 1 − α asymptotically correct confidence set for θ by inverting acceptance regions of Wald’s tests is

{θ : ∑_{i=1}^2 [ni(X̄i − µi)²/σ̂i² + ni(σ̂i² − σi²)²/(2σ̂i⁴)] ≤ χ²_{4,α}}.

The 1 − α asymptotically correct confidence set for θ by inverting acceptance regions of Rao’s score tests is

{θ : ∑_{i=1}^2 [ni(X̄i − µi)²/σi² + (2σi⁴/ni)(niσi²(µi)/(2σi⁴) − ni/(2σi²))²] ≤ χ²_{4,α}}.

(iii) The MLE of (µ1, µ2, σ²) is (X̄1, X̄2, σ̂²), where

σ̂² = (n1 + n2)^{−1} ∑_{i=1}^2 ∑_{j=1}^{ni} (Xij − X̄i)².

When µ1 and µ2 are known, the MLE of σ² is

σ²(µ1, µ2) = (n1 + n2)^{−1} ∑_{i=1}^2 ∑_{j=1}^{ni} (Xij − µi)².

The 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of LR tests is

{(µ1, µ2) : σ²(µ1, µ2) ≤ e^{χ²_{2,α}/(n1+n2)} σ̂²}.

The score function in this case is

(n1(X̄1 − µ1)/σ², n2(X̄2 − µ2)/σ², (n1 + n2)σ²(µ1, µ2)/(2σ⁴) − (n1 + n2)/(2σ²)).

The Fisher information is the 3 × 3 diagonal matrix whose diagonal elements are n1/σ², n2/σ², and (n1 + n2)/(2σ⁴). Hence, the 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of Wald’s tests is

{(µ1, µ2) : n1(X̄1 − µ1)² + n2(X̄2 − µ2)² ≤ σ̂² χ²_{2,α}}.

The 1 − α asymptotically correct confidence set for (µ1, µ2) by inverting acceptance regions of Rao’s score tests is

{(µ1, µ2) : n1(X̄1 − µ1)² + n2(X̄2 − µ2)² ≤ σ²(µ1, µ2) χ²_{2,α}}.


(iv) The 1 − α asymptotically correct confidence set for θ = (µ1, µ2, σ²) by inverting acceptance regions of LR tests is

{θ : σ² e^{σ²(µ1,µ2)/σ²} ≤ σ̂² e^{1 + χ²_{3,α}/(n1+n2)}}.

The 1 − α asymptotically correct confidence set for θ by inverting acceptance regions of Wald’s tests is

{θ : ∑_{i=1}^2 ni(X̄i − µi)²/σ̂² + (n1 + n2)(σ̂² − σ²)²/(2σ̂⁴) ≤ χ²_{3,α}}.

The 1 − α asymptotically correct confidence set for θ by inverting acceptance regions of Rao’s score tests is

{θ : ∑_{i=1}^2 ni(X̄i − µi)²/σ² + [(n1 + n2)/2][σ²(µ1, µ2)/σ² − 1]² ≤ χ²_{3,α}}.
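The Wald set for (µ1, µ2) in part (i) is simple enough to evaluate pointwise. The following check is not part of the original solution; it hard-codes χ²_{2,0.05} ≈ 5.991465, and the simulated samples and seed are illustrative assumptions.

```python
import random

CHI2_2 = 5.991465  # 0.95 quantile of chi-square(2)

def wald_stat(sample1, sample2, mu1, mu2):
    """n1 (xbar1 - mu1)^2 / sigma1_hat^2 + n2 (xbar2 - mu2)^2 / sigma2_hat^2."""
    total = 0.0
    for xs, mu in ((sample1, mu1), (sample2, mu2)):
        n = len(xs)
        xbar = sum(xs) / n
        s2 = sum((x - xbar) ** 2 for x in xs) / n  # MLE of the variance
        total += n * (xbar - mu) ** 2 / s2
    return total

random.seed(1)
s1 = [random.gauss(0.0, 1.0) for _ in range(100)]
s2 = [random.gauss(5.0, 2.0) for _ in range(150)]
xbar1, xbar2 = sum(s1) / len(s1), sum(s2) / len(s2)
# the MLE (xbar1, xbar2) is always in the set (statistic = 0);
# a point shifted by 10 in the first coordinate is far outside it
inside = wald_stat(s1, s2, xbar1, xbar2) <= CHI2_2
outside = wald_stat(s1, s2, xbar1 + 10.0, xbar2) > CHI2_2
```

The set is an ellipse centered at (X̄1, X̄2) with axes shrinking at rate ni^{−1/2}, which is why the shifted point fails the inequality by a wide margin.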

Exercise 43 (#7.83). Let X be a vector of n observations having distribution Nn(Zβ, σ²In), where Z is a known n × p matrix of rank r ≤ p < n, β is an unknown p-vector, and σ² > 0 is unknown. Find 1 − α asymptotically correct confidence sets for θ = Lβ by inverting acceptance regions of LR tests, Wald’s tests, and Rao’s score tests, where L is an s × p matrix of rank s and all rows of L are in R(Z).
Solution. Since the rows of L are in R(Z), L = AZ for an s × n matrix A with rank s. Since Z is of rank r, there is an n × r matrix Z∗ of rank r such that Zβ = Z∗Qβ, where Q is r × p. Then, Lβ = AZβ = AZ∗β∗ with β∗ = Qβ ∈ Rʳ. Hence, without loss of generality, in the following we assume that r = p. The likelihood function is

ℓ(β, σ²) = (2πσ²)^{−n/2} exp{−‖X − Zβ‖²/(2σ²)}.

Then,

∂ log ℓ(β, σ²)/∂β = Z^τ(X − Zβ)/σ²,
∂ log ℓ(β, σ²)/∂σ² = ‖X − Zβ‖²/(2σ⁴) − n/(2σ²),

and the Fisher information matrix is

In(β, σ²) = diag(Z^τZ/σ², n/(2σ⁴)).


The MLE of β is the LSE β̂ = (Z^τZ)^{−1}Z^τX and the MLE of σ² is σ̂² = ‖X − Zβ̂‖²/n. Hence, the 1 − α asymptotically correct confidence set obtained by inverting acceptance regions of Wald’s tests is

{θ : (Lβ̂ − θ)^τ[L(Z^τZ)^{−1}L^τ]^{−1}(Lβ̂ − θ) ≤ σ̂² χ²_{s,α}}

and the 1 − α asymptotically correct confidence set obtained by inverting acceptance regions of Rao’s score tests is

{θ : n[X − Zβ̂(θ)]^τ Z(Z^τZ)^{−1}Z^τ [X − Zβ̂(θ)] ≤ ‖X − Zβ̂(θ)‖² χ²_{s,α}},

where β̂(θ) is defined by

‖X − Zβ̂(θ)‖² = min_{β : Lβ = θ} ‖X − Zβ‖².

Following the discussion in Example 6.20 of Shao (2003), the likelihood ratio for testing H0 : Lβ = θ versus H1 : Lβ ≠ θ is

λ(θ) = [sW(X, θ)/(n − r) + 1]^{−n/2},

where W(X, θ) is given in Exercise 30. Hence, the 1 − α asymptotically correct confidence set obtained by inverting acceptance regions of LR tests is

{θ : sW(X, θ) ≤ (n − r)(e^{χ²_{s,α}/n} − 1)}.

Exercise 44 (#7.85, #7.86). Let (X1, ..., Xn) be a random sample from a continuous cumulative distribution function F on R that is twice differentiable at θ = F^{−1}(p), 0 < p < 1, with F′(θ) > 0.
(i) Let {kn} be a sequence of integers satisfying kn/n = p + cn^{−1/2} + o(n^{−1/2}) with a constant c. Show that

√n(X_(kn) − θ̂) = c/F′(θ) + o(1) a.s.,

where X_(j) is the jth order statistic and θ̂ is the sample pth quantile.
(ii) Show that √n(X_(kn) − θ)F′(θ) →d N(c, p(1 − p)).
(iii) Let {k1n} and {k2n} be two sequences of integers satisfying 1 ≤ k1n < k2n ≤ n,

k1n/n = p − z_{α/2}√(p(1 − p)/n) + o(n^{−1/2})

and

k2n/n = p + z_{α/2}√(p(1 − p)/n) + o(n^{−1/2}),

where zα is the (1 − α)th quantile of N(0, 1). Let C(X) = [X_(k1n), X_(k2n)]. Show that lim_n P(θ ∈ C(X)) = 1 − α, using the result in part (ii).


(iv) Construct a consistent estimator of the asymptotic variance of the sample median, using the interval C(X).
Solution. (i) By the Bahadur representation (e.g., Theorem 7.8 in Shao, 2003),

X_(kn) = θ + [(kn/n) − Fn(θ)]/F′(θ) + o(1/√n) a.s.,

where Fn is the empirical distribution. For the pth sample quantile, the Bahadur representation is

θ̂ = θ + [p − Fn(θ)]/F′(θ) + o(1/√n) a.s.

The result follows by taking the difference of the two previous equations.
(ii) Note that

√n(X_(kn) − θ)F′(θ) = √n(X_(kn) − θ̂)F′(θ) + √n(θ̂ − θ)F′(θ).

By (i),

lim_n √n(X_(kn) − θ̂)F′(θ) = c a.s.

By Theorem 5.10 in Shao (2003),

√n(θ̂ − θ)F′(θ) →d N(0, p(1 − p)).

Then, by Slutsky’s theorem,

√n(X_(kn) − θ)F′(θ) →d N(c, p(1 − p)).

(iii) By (ii),

√n(X_(k1n) − θ)F′(θ) →d N(−z_{α/2}√(p(1 − p)), p(1 − p))

and

√n(X_(k2n) − θ)F′(θ) →d N(z_{α/2}√(p(1 − p)), p(1 − p)).

Let Φ be the cumulative distribution function of N(0, 1). Then

P(X_(k1n) > θ) = P(√n(X_(k1n) − θ)F′(θ) > 0) → 1 − Φ(z_{α/2}) = α/2

and

P(X_(k2n) < θ) = P(√n(X_(k2n) − θ)F′(θ) < 0) → Φ(−z_{α/2}) = α/2.

The result follows from

P(θ ∈ C(X)) = 1 − P(X_(k1n) > θ) − P(X_(k2n) < θ).


(iv) Let p = 1/2. If F′ exists and is positive at θ, the median of F, then the asymptotic variance of the sample median is {4n[F′(θ)]²}^{−1}. The length of the interval C(X) is X_(k2n) − X_(k1n). By the result in (i), this length is equal to

z_{α/2}/(√n F′(θ)) + o(1/√n) a.s.,

i.e.,

[X_(k2n) − X_(k1n)]²/(4z²_{α/2}) = 1/(4n[F′(θ)]²) + o(1/n) a.s.

Therefore, [X_(k2n) − X_(k1n)]²/(4z²_{α/2}) is a consistent estimator of the asymptotic variance of the sample median.
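The estimator in (iv) needs no estimate of F′, only two order statistics. The sketch below is not from the text: the N(0, 1) data, the seed, and the rounding of k_{1n}, k_{2n} to integers are illustrative assumptions.

```python
import math
import random

def median_avar_estimate(xs, z=1.959964):
    """[X_(k2n) - X_(k1n)]^2 / (4 z^2), estimating {4 n [F'(theta)]^2}^{-1}."""
    n = len(xs)
    ys = sorted(xs)
    half = z * math.sqrt(0.25 / n)          # z_{alpha/2} sqrt(p (1 - p) / n) with p = 1/2
    k1 = max(1, int(math.floor(n * (0.5 - half))))
    k2 = min(n, int(math.ceil(n * (0.5 + half))))
    length = ys[k2 - 1] - ys[k1 - 1]
    return length ** 2 / (4 * z ** 2)

random.seed(2)
n = 4000
est = median_avar_estimate([random.gauss(0.0, 1.0) for _ in range(n)])
# for N(0, 1), F'(theta) = 1/sqrt(2 pi), so the true asymptotic variance is pi/(2n)
true_avar = math.pi / (2 * n)
```

With n = 4000 the estimate should land within a modest factor of π/(2n); the relative error of this order-statistic estimator shrinks only at the n^{−1/4} rate, consistent with part (i).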

Exercise 45 (#7.102). Let C_{t,α}(X) be a confidence interval for θt with confidence coefficient 1 − α, t = 1, ..., k. Suppose that C_{1,α}(X), ..., C_{k,α}(X) are independent for any α. Show how to construct simultaneous confidence intervals for θt, t = 1, ..., k, with confidence coefficient 1 − α.
Solution. Let ak = 1 − (1 − α)^{1/k}. By the independence of C_{t,ak}(X), t = 1, ..., k,

P(θt ∈ C_{t,ak}(X), t = 1, ..., k) = ∏_{t=1}^k P(θt ∈ C_{t,ak}(X)) = ∏_{t=1}^k (1 − α)^{1/k}

= 1 − α.

Hence, C_{t,ak}(X), t = 1, ..., k, are simultaneous confidence intervals for θt, t = 1, ..., k, with confidence coefficient 1 − α.
Exercise 46 (#7.105). Let x ∈ Rᵏ and A be a k × k positive definite matrix. Show that

x^τA^{−1}x = max_{y∈Rᵏ, y≠0} (y^τx)²/(y^τAy).

Solution. If x = 0, then the equality holds. Assume that x ≠ 0. By the Cauchy-Schwarz inequality,

(y^τx)² = (y^τA^{1/2}A^{−1/2}x)² ≤ (y^τAy)(x^τA^{−1}x).

Hence,

x^τA^{−1}x ≥ max_{y∈Rᵏ, y≠0} (y^τx)²/(y^τAy).

Let y∗ = A^{−1}x. Then

(y∗^τx)²/(y∗^τAy∗) = (x^τA^{−1}x)²/(x^τA^{−1}AA^{−1}x) = x^τA^{−1}x.


Hence,

x^τA^{−1}x ≤ max_{y∈Rᵏ, y≠0} (y^τx)²/(y^τAy)

and, thus, the equality holds.
Exercise 47 (#7.106). Let x ∈ Rᵏ and A be a k × k positive definite matrix.
(i) Suppose that y^τA^{−1}x = 0, where y ∈ Rᵏ. Show that

x^τA^{−1}x = max_{c∈Rᵏ, c≠0, c^τy=0} (c^τx)²/(c^τAc).

(ii) Let X be a vector of n observations having distribution Nn(Zβ, σ²In), where Z is a known n × p matrix of rank p < n, β is an unknown p-vector, and σ² > 0 is unknown. Using the result in (i), construct simultaneous confidence intervals (with confidence coefficient 1 − α) for c^τβ, c ∈ Rᵖ, c ≠ 0, c^τy = 0, where y ∈ Rᵖ satisfies Z^τZy = 0.
Solution. (i) If x = 0, then the equality holds. Assume that x ≠ 0. Let D be the k × (k − 1) matrix which spans the linear subspace Y = {c : c ∈ Rᵏ, c^τy = 0}. For any c ∈ Y (c ≠ 0), c = Dt for some t ∈ R^{k−1}, t ≠ 0. Since y^τA^{−1}x = 0, A^{−1}x ∈ Y and, hence, A^{−1}x = Dl or x = ADl for some l ∈ R^{k−1}, l ≠ 0. Then

max_{c∈Rᵏ, c≠0, c^τy=0} (c^τx)²/(c^τAc) = max_{c∈Y, c≠0} (c^τx)²/(c^τAc)
    = max_{t∈R^{k−1}, t≠0} (t^τD^τADl)²/(t^τD^τADt)
    = (D^τADl)^τ(D^τAD)^{−1}(D^τADl)
    = l^τ(D^τAD)l
    = x^τA^{−1}x,

where the third equality follows from the previous exercise.
(ii) Let β̂ be the LSE of β and σ̂² = ‖X − Zβ̂‖²/(n − p). Note that (β̂ − β)^τ(Z^τZ)(β̂ − β)/(pσ̂²) has the F-distribution F_{p,n−p}. Since Z^τZy = 0, by the result in (i),

max_{c∈Rᵖ, c≠0, c^τy=0} (c^τ(β̂ − β))²/(σ̂² c^τ(Z^τZ)^{−1}c) = (β̂ − β)^τ(Z^τZ)(β̂ − β)/σ̂².

Thus, the 1 − α simultaneous confidence intervals for c^τβ, c ∈ Rᵖ, c ≠ 0, c^τy = 0, are

Ic = [c^τβ̂ − σ̂√(pF_{p,n−p,α} c^τ(Z^τZ)^{−1}c), c^τβ̂ + σ̂√(pF_{p,n−p,α} c^τ(Z^τZ)^{−1}c)]


for c ∈ Rᵖ, c ≠ 0, c^τy = 0, where F_{p,n−p,α} is the (1 − α)th quantile of the F-distribution F_{p,n−p}. This is because

P(c^τβ ∈ Ic, c ∈ Rᵖ, c ≠ 0, c^τy = 0)
  = P(max_{c∈Rᵖ, c≠0, c^τy=0} (c^τ(β̂ − β))²/(pσ̂² c^τ(Z^τZ)^{−1}c) ≤ F_{p,n−p,α})
  = P((β̂ − β)^τ(Z^τZ)(β̂ − β)/(pσ̂²) ≤ F_{p,n−p,α})
  = 1 − α.

Exercise 48 (#7.111). Let (X1, ..., Xn) be independently distributed as N(β0 + β1zi, σ²), i = 1, ..., n, where β0, β1, and σ² are unknown and zi’s are known constants satisfying Sz = ∑_{i=1}^n (zi − z̄)² > 0, z̄ = n^{−1}∑_{i=1}^n zi. Show that

Iz = [β̂0 + β̂1z − σ̂√(2F_{2,n−2,α}D(z)), β̂0 + β̂1z + σ̂√(2F_{2,n−2,α}D(z))], z ∈ R,

are simultaneous confidence intervals for β0 + β1z, z ∈ R, with confidence coefficient 1 − α, where (β̂0, β̂1) is the LSE of (β0, β1), D(z) = (z − z̄)²/Sz + n^{−1}, and σ̂² = (n − 2)^{−1}∑_{i=1}^n (Xi − β̂0 − β̂1zi)².
Solution. Let β = (β0, β1) and β̂ = (β̂0, β̂1). Scheffé’s 1 − α simultaneous confidence intervals for t^τβ, t ∈ R² are

[t^τβ̂ − σ̂√(2F_{2,n−2,α} t^τAt), t^τβ̂ + σ̂√(2F_{2,n−2,α} t^τAt)], t ∈ R²

(e.g., Theorem 7.10 in Shao, 2003), where

A = [[n, nz̄], [nz̄, ∑_{i=1}^n zi²]]^{−1} = (1/(nSz)) [[∑_{i=1}^n zi², −nz̄], [−nz̄, n]].

From the solution to Exercise 46, (t^τβ̂ − t^τβ)²/(t^τAt) is maximized at t∗ = A^{−1}(β̂ − β). Note that t∗/c still maximizes (t^τβ̂ − t^τβ)²/(t^τAt) as long as c ≠ 0, where c is the first component of t∗. Since the first component of A^{−1}(β̂ − β) is n(β̂0 − β0) + n(β̂1 − β1)z̄ and

P(n(β̂0 − β0) + n(β̂1 − β1)z̄ ≠ 0) = 1,

the quantity (t^τβ̂ − t^τβ)²/(t^τAt) is maximized at

A^{−1}(β̂0 − β0, β̂1 − β1)/[n(β̂0 − β0) + n(β̂1 − β1)z̄],

whose first component is 1. Note that t^τAt = D(z) when t = (1, z). Therefore,

max_{z∈R} (β̂0 + β̂1z − β0 − β1z)²/D(z) = max_{t∈R², t≠0} (t^τβ̂ − t^τβ)²/(t^τAt).

Consequently,

P(β0 + β1z ∈ Iz, z ∈ R) = P(max_{z∈R} (β̂0 + β̂1z − β0 − β1z)²/(2σ̂²D(z)) ≤ F_{2,n−2,α})
  = P(max_{t∈R², t≠0} (t^τβ̂ − t^τβ)²/(2σ̂² t^τAt) ≤ F_{2,n−2,α})
  = 1 − α,

where the last equality follows from

max_{t∈R², t≠0} (t^τβ̂ − t^τβ)²/(2σ̂² t^τAt) = (β̂ − β)^τA^{−1}(β̂ − β)/(2σ̂²)

by Exercise 46 and the fact that the right hand side of the previous equation has the F-distribution F_{2,n−2}.
Exercise 49 (#7.117). Let X0j (j = 1, ..., n0) and Xij (i = 1, ..., m, j = 1, ..., n0) represent independent measurements on a standard and m competing new treatments. Suppose that Xij is distributed as N(µi, σ²) with unknown µi and σ² > 0, j = 1, ..., n0, i = 0, 1, ..., m. For i = 0, 1, ..., m, let X̄i· be the sample mean based on Xij, j = 1, ..., n0. Define σ̂² = [(m + 1)(n0 − 1)]^{−1} ∑_{i=0}^m ∑_{j=1}^{n0} (Xij − X̄i·)².
(i) Show that the distribution of

Rst = max_{i=1,...,m} |(X̄i· − µi) − (X̄0· − µ0)|/σ̂

does not depend on any unknown parameter.
(ii) Show that Dunnett’s intervals

[∑_{i=0}^m ci X̄i· − qα σ̂ ∑_{i=1}^m |ci|, ∑_{i=0}^m ci X̄i· + qα σ̂ ∑_{i=1}^m |ci|]

for all c0, c1, ..., cm satisfying ∑_{i=0}^m ci = 0 are simultaneous confidence intervals for ∑_{i=0}^m ci µi with confidence coefficient 1 − α, where qα is the (1 − α)th quantile of Rst.
Solution. (i) The distributions of σ̂/σ and (X̄i· − µi)/σ, i = 0, 1, ..., m, do not depend on any unknown parameter. The result follows from the fact that these random variables are independent so that their joint distribution does not depend on any unknown parameter and Rst is a function of these

349

random variables. ¯ i· − µi )/ˆ (ii) Let Yi = (X σ . Then Rst = maxi=1,...,m |Yi − Y0 |. Note that m  m P ci µi are in Dunnett’s intervals for all ci ’s with ci = 0 i=0  m   m m   ¯   =P  ci (Xi· − µi ) ≤ qα σ ˆ |ci |, all ci ’s with ci = 0 i=0 i=1 i=0   m  m m   ci Yi  ≤ qα |ci |, all ci ’s with ci = 0 . = P  i=0

i=0

i=1

i=0

Hence, the result follows if we can show that max |Yi − Y0 | ≤ qα

i=1,...,m

is equivalent to  m  m m     ≤ qα c Y |c | for all c , c , ..., c satisfying ci = 0. i i i 0 1 m   i=0

i=1

i=0

  m m   i=0 ci Yi ≤ qα i=1 |ci | for all c0 , c1 , ..., cm satisfying mSuppose that c = 0. For any fixed i, let c = 1, ci = −1, and cj = 0, j = i. Then 0 i=0 i these ci ’s satisfy   m m   n  ci = 0, |ci | = 1, and  ci Yi  = |Yi − Y0 |. i=0

i=1

i=0

Hence, |Yi − Y0 | ≤ qα for i = 1, ..., m. Thus, maxi=1,...,m |Yi − Y0 | ≤ qα . Assume m now that maxi=1,...,m |Yi − Y0 | ≤ qα . For all c0 , c1 , ..., cm satisfying i=0 ci = 0,       m  m      = c Y c Y + c Y i i i i 0 0   i=0

i=1

 m  m    = ci Yi − ci Y0  i=1

i=1

 m     = ci (Yi − Y0 ) i=1



m

|ci ||Yi − Y0 |

i=1

≤ qα

m i=1

|ci |.

350

Chapter 7. Confidence Sets

Exercise 50 (#7.118). Let (X1 , ..., Xn ) be a random sample from the uniform distribution on (0, θ), where θ > 0 is unknown. Construct simultaneous confidence intervals for Fθ (t), t > 0, with confidence coefficient 1 − α, where Fθ (t) is the cumulative distribution function of X1 . Solution. The cumulative distribution function of X1 is ⎧ t≤0 ⎨ 0 t Fθ (t) = 0 0 = P X(n) ≤ θ ≤ cn X(n) = 1 − α, i.e.,

) ( Fcn X(n) (t), FX(n) (t) ,

t > 0,

are simultaneous confidence intervals for Fθ (t), t > 0, with confidence coefficient 1 − α.

References In addition to the main text, Mathematical Statistic (Shao, 2003), some other textbooks are listed here for further readings. Barndorff-Nielsen, O. E. (1978). Information and Exponential Families in Statistical Theory. Wiley, New York. Barndorff-Nielsen, O. E. and Cox, D. R. (1994). Inference and Asymptotics. Chapman & Hall, London. Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis, second edition. Springer-Verlag, New York. Bickel, P. J. and Doksum, K. A. (2002). Mathematical Statistics, second edition. Holden Day, San Francisco. Casella, G. and Berger, R. L. (1990). Statistical Inference. Wadsworth, Belmont, CA. Chung, K. L. (1974). A Course in Probability Theory, second edition. Academic Press, New York. Ferguson, T. S. (1967). Mathematical Statistics. Academic Press, New York. Lehmann, E. L. (1986). Testing Statistical Hypotheses, second edition. Springer-Verlag, New York. Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation, second edition. Springer-Verlag, New York. Rao, C. R. (1973). Linear Statistical Inference and Its Applications, second edition. Wiley, New York. Rohatgi, V. K. (1976). An Introduction to Probability Theory and Mathematical Statistics. Wiley, New York. 351

Royden, H. L. (1968). Real Analysis, second edition. Macmillan, New York.
Searle, S. R. (1971). Linear Models. Wiley, New York.
Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. Wiley, New York.
Shao, J. (2003). Mathematical Statistics, second edition. Springer-Verlag, New York.

Index

0-1 loss, 73, 83
χ² goodness-of-fit test, 301-302
σ-field, xv, 1

A
Absolute error loss, 80, 155, 161
Admissibility, xv, 73-75, 155-160, 169-174, 187-188
Ancillary statistic, xv, 68-70
Approximate unbiasedness, 80
Asymptotic bias, xv, 88
Asymptotic correctness of confidence sets, xvi, 333, 335-343
Asymptotic distribution, see limiting distribution
Asymptotic efficiency, 193-194, 238, 240
Asymptotic level, xv, 93-94, 306-308
Asymptotic mean squared error, xv, 88, 91, 134
Asymptotic pivotal quantity, 333-335, 339
Asymptotic relative efficiency, xv, 89-91, 132, 135-136, 140, 187-189, 196-197, 236-238, 240-242

B
Bayes action, xvi, 145-150, 152
Bayes estimator, xvi, 153-154, 156-160, 168-170
Bayes factor, 304
Bayes risk, xvi, 81, 154, 156
Bayes rule, xvi, 80-84
Bayes test, 304
Best linear unbiased estimator (BLUE), 127-129, 132
Bootstrap, 247-249
Borel function, xvi, 2-4
Bounded completeness, xvi, 65-67
Bounded in probability, 39-40

C
Characteristic function, xvi, 19, 21, 23-25
Cochran's theorem, 16
Completeness, xvi, 64-70
Conditional distribution, 32
Conditional expectation, xvi, 26-34
Confidence interval, xvi, 85
Confidence region or set, xvi, 85-86
Conjugate prior, 141-142
Consistency of estimator, xvii, 86-88, 158-160, 189-191, 222, 344-345
Consistency of test, xvii, 92-93, 306-308
Contingency table, 299-301
Convergence almost surely, 35-36, 49
Convergence in distribution, 36-37, 42-45, 47-48
Convergence in moments, 35-37, 49
Convergence in probability, 36-37, 40-41, 46-47, 49
Correlation, 18
Correlation coefficient, 56-58, 93-94, 279-280, 292
Cramér-Rao lower bound, 116
Credible set, 320-321

D
Density estimation, 216-221
Dunnett's interval, 348-349

E
Empirical Bayes, xvii, 150-151, 153
Empirical distribution, xvii, 212, 216, 222, 228, 232-234, 243, 245, 305-306, 321, 343
Estimability, xvii, 120-123, 132-133
Expectation, 17
Expected length, 85, 324
Exponential family, xvii, 51-53, 59-60

F
Fieller's interval, 309
Fisher information, 112-115, 124
Fisher-scoring, 180-181, 206-208

G
Gâteaux differentiability, 222-223, 232
Generalized Bayes, xvii, 147-153, 157-158, 169
Goodness of fit test, see χ² goodness-of-fit test
Good sets principle, 1

H
Highest posterior density credible set, 320-321
Hodges-Lehmann estimator, 236
Huber's estimator, 241-242, 245

I
Independence, xvii, 14-16, 18, 53, 68-69
Influence function, 223-227, 230-232, 243
Integration, xviii, 5-8
Inverting acceptance regions, 317-319, 327, 334-338, 340-343

J
Jackknife, 246-247

K
Kaplan-Meier estimator, 214

L
L-functional, 224-225, 227-228, 243
Least squares estimator (LSE), 119-120, 122, 125-132
Lebesgue density, 13-15, 19-20, 24
Length of a confidence interval, 322-324, 334
Liapounov's condition, 50
Likelihood equation, xviii, 176-187
Likelihood ratio (LR) test, 283-297, 299-300, 335-337, 340-343
Limiting distribution, 54, 56-59, 136-138, 189-190, 193-198, 200, 202-205, 235, 290-293, 299
Lindeberg's condition, 48-49, 249-250
Location-scale family, xviii, 52

M
Mallows' distance, 209-211
Maximum likelihood estimator (MLE), 175-182, 184-191, 193-197, 202-204, 212, 214, 221
Maximum profile likelihood estimator, 221
Mean, see expectation
Measure, xix, 1-2
Median, 8, 37-38, 80, 238-239
Minimal sufficiency, 60-69
Minimax estimator, xix, 165-171, 174
Minimax rule, xix, 81-84
Minimum risk invariant estimator (MRIE), 160-164
Missing data, 222
Moment estimator, 134-138, 150-151, 153, 196-197
Moment generating function, xix, 20-22, 26
Monotone likelihood ratio, xx, 256-264, 268-269, 272, 282, 313-316, 327, 329-331

N
Newton-Raphson method, 180-181, 206-207
Noncentral chi-square distribution, 16, 19-20, 258, 315
Noncentral F-distribution, 22, 315-316, 328
Noncentral t-distribution, 271-272, 319

O
One-sample t-test, 271, 306-307
One-step MLE, 206-208
Optimal rule, xx, 75-78

P
p-value, 84-85
Pairwise independence, 15
Pivotal quantity, xx, 309-313, 321-322
Posterior distribution, xx, 141-143, 156-158
Posterior expected loss, 153-154
Posterior mean, 142, 144, 146
Posterior variance, 142, 144
Power function, xx, 276
Power series distribution, 52
Prediction, 321
Probability density, 12
Product measure, 10-11
Profile likelihood, 221

Q
Quantile, 231-236

R
Radon-Nikodym derivative, 9-10
Randomized confidence set, 330
Randomized estimator, xx, 71
Rao's score test, 296-298, 300-303, 334-338, 340-343
Risk, xx, 71-74, 81-82, 174-175
Root of the likelihood equation (RLE), 195, 197-198, 200

S
Sample median, 237-238, 241, 344-345
Sample quantile, 343
Scheffé's intervals, 347
Shortest-length confidence interval, 323-327
Simultaneous confidence intervals, xxi, 345-350
Size, xxi, 92
Sufficiency, xxi, 59-61, 114-115
Sup-norm distance, 211
Superefficiency, 192

T
Test, 73, 84
Testing independence, 301
Trimmed sample mean, 229, 240
Type II error probability, 251

U
U-statistic, 116-119, 140, 244-245
Unbiased confidence set, xxii, 324, 328
Unbiased estimator, xxii, 77-80, 222
Uniform integrability, 38-39
Uniformly minimum variance unbiased estimator (UMVUE), xxii, 95-112, 115-116, 123-124, 132-133, 157-158, 187-188, 196
Uniformly most accurate (UMA) confidence set, xxi, 327-331
Uniformly most accurate unbiased (UMAU) confidence set, xxi, 324, 326-327, 332-333
Uniformly most powerful (UMP) test, xxi, 251-254, 259-269, 281-282
Uniformly most powerful unbiased (UMPU) test, xxi, 269-270, 273-283, 289-292, 294-295, 332

V
Variance estimator, 243-249, 344-345

W
Wald's test, 296-298, 300, 302-303, 334-337, 340-343
Weak law of large numbers, 45-46