Asymptotic Statistical Inference: A Basic Course Using R [1st ed. 2021] 9811590028, 9789811590023

The book presents the fundamental concepts of asymptotic statistical inference theory, elaborating on some basic large sample optimality properties of estimators and test procedures.


English, 547 pages, 2021


Table of contents:
Preface
Contents
About the Authors
List of Figures
List of Tables
1 Introduction
1.1 Introduction
1.2 Basics of Parametric Inference
1.3 Basics of Asymptotic Inference
1.4 Introduction to R Software and Language
2 Consistency of an Estimator
2.1 Introduction
2.2 Consistency: Real Parameter Setup
2.3 Strong Consistency
2.4 Uniform Weak and Strong Consistency
2.5 Consistency: Vector Parameter Setup
2.6 Performance of a Consistent Estimator
2.7 Verification of Consistency Using R
2.8 Conceptual Exercises
2.9 Computational Exercises
3 Consistent and Asymptotically Normal Estimators
3.1 Introduction
3.2 CAN Estimator: Real Parameter Setup
3.3 CAN Estimator: Vector Parameter Setup
3.4 Verification of CAN Property Using R
3.5 Conceptual Exercises
3.6 Computational Exercises
4 CAN Estimators in Exponential and Cramér Families
4.1 Introduction
4.2 Exponential Family
4.3 Cramér Family
4.4 Iterative Procedures
4.5 Maximum Likelihood Estimation Using R
4.6 Conceptual Exercises
4.7 Computational Exercises
5 Large Sample Test Procedures
5.1 Introduction
5.2 Likelihood Ratio Test Procedure
5.3 Large Sample Tests Using R
5.4 Conceptual Exercises
5.5 Computational Exercises
6 Goodness of Fit Test and Tests for Contingency Tables
6.1 Introduction
6.2 Multinomial Distribution and Associated Tests
6.3 Goodness of Fit Test
6.4 Score Test and Wald's Test
6.5 Tests for Contingency Tables
6.6 Consistency of a Test Procedure
6.7 Large Sample Tests Using R
6.8 Conceptual Exercises
6.9 Computational Exercises
7 Solutions to Conceptual Exercises
7.1 Chapter 2
7.2 Chapter 3
7.3 Chapter 4
7.4 Chapter 5
7.5 Chapter 6
7.6 Multiple Choice Questions
7.6.1 Chapter 2: Consistency of an Estimator
7.6.2 Chapter 3: Consistent and Asymptotically Normal Estimators
7.6.3 Chapter 4: CAN Estimators in Exponential and Cramér Families
7.6.4 Chapter 5: Large Sample Test Procedures
7.6.5 Chapter 6: Goodness of Fit Test and Tests for Contingency Tables
Appendix
Index

Shailaja Deshmukh Madhuri Kulkarni

Asymptotic Statistical Inference A Basic Course Using R


Shailaja Deshmukh Department of Statistics Savitribai Phule Pune University Pune, Maharashtra, India

Madhuri Kulkarni Department of Statistics Savitribai Phule Pune University Pune, Maharashtra, India

ISBN 978-981-15-9002-3    ISBN 978-981-15-9003-0 (eBook)
https://doi.org/10.1007/978-981-15-9003-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Dedicated to Our Respected Teachers and Beloved Students Who Enriched Our Learning

Preface

Statistics as a scientific discipline deals with various methods of collecting data, a variety of tools for summarizing and analyzing data to extract information from them, and logical techniques for meaningful interpretation of the analysis, so that information is converted into knowledge. Numerous methods of analysis and their optimality properties are discussed in the statistical inference literature. These differ depending on the size of the data. If the data size is relatively small, an optimal solution may not always exist. However, in many cases the scenario changes for the better as the sample size increases, and the existence of an optimal solution can be ensured. Since statistics is concerned with the accumulation of data, it is of prime interest to judge a variety of optimality properties of inference procedures as we get an increasing amount of data. These optimality properties are investigated in asymptotic statistical inference theory. In the present book, we study in detail some basic large sample optimality properties of estimators and some test procedures. A rigorous mathematical approach to the theoretical concepts is supported throughout by simulation studies, with R software as the tool, so that the notions can be absorbed easily.

The book begins with a brief introduction to the basic framework of statistical inference. An overview of the concepts from parametric statistical inference for finite sample size and of the various modes of convergence of a sequence of random variables from probability theory is also provided. These notions form the foundation of the asymptotic statistical inference developed in subsequent chapters.

Chapters 2 and 3 form the core of the book. The basic concept of consistency of an estimator for a real parameter and a vector parameter is discussed in detail in Chap. 2. In Chap. 3 we present in depth the convergence in distribution of a suitably normalized estimator. In particular, the focus is on consistent and asymptotically normal (CAN) estimators. The large sample optimality properties of an estimator are defined in terms of its limiting behaviour as the sample size increases. Hence, the convergence of a sequence of random variables becomes the principal probability tool in the asymptotic investigation of an estimator. In Chap. 4, we discuss two families of distributions for which optimal estimators do exist for the parameters of interest. It is shown that for probability models belonging to an exponential family or a Cramér family, the maximum likelihood estimators of the parameters are CAN.


Chapters 5 and 6 study various test procedures and their properties when the sample size is large. In Chap. 5, we introduce the likelihood ratio test procedure and prove results related to the asymptotic null distribution of the likelihood ratio test statistic. Chapter 6 addresses applications of the likelihood ratio test procedure when the underlying probability model is a multinomial distribution. In particular, we study tests for goodness of fit, tests for validity of the model and a variety of tests for contingency tables. In Chap. 6, we also study a score test and Wald's test and examine their relationship with the likelihood ratio test and Karl Pearson's chi-square test. We have discovered an important result regarding the score test statistic and Karl Pearson's chi-square test statistic: while testing a hypothesis about a parameter of a multinomial distribution, these two statistics are identical.

Numerous illustrations and examples of varying difficulty, from routine to challenging, are incorporated throughout each chapter to clarify the concepts. These illustrations and several remarks reveal the depth of the theory covered. For better assimilation of the notions contained in the book, various exercises are included at the end of each chapter. Solutions to almost all the conceptual exercises are given in Chap. 7, to motivate students towards solving these exercises and to enable digestion of the underlying concepts.

Over the years, we have noted that the concepts from asymptotic inference are crucial in modern statistics, but are difficult for students to grasp due to their abstract nature. To overcome this difficulty, we have augmented the theory with R software as a tool for simulation and computation, which is a novel and unique feature of our book. Nowadays R is a leading computational tool for statistics and data analysis. It can handle a variety of tasks, such as data manipulation, statistical modeling and advanced statistical methods. It has numerous packages in the CRAN repository, which are constantly growing in number. R also has excellent graphical facilities. Besides these advantages, R is free and platform-independent and hence can be used on any operating system. Keeping up with the recent trend of using R software for statistical computations and data analysis, we too have used it extensively in this book for illustrating the concepts, verifying the properties of estimators and carrying out various test procedures.

Chapter 1 covers a brief introduction to R software. The last section of each of Chapters 2 to 6 presents R code for the verification of the concepts and procedures discussed in the respective chapter. The major benefit of these codes is that they help the reader understand complex notions with ease. The R codes also reveal hidden aspects of different procedures and cater to the educational need for visual demonstration of the concepts and procedures. The asymptotic theory gives reasonable answers in many scenarios, and these are found to be approximately valid. It may be theoretically very hard to ascertain whether the approximation errors involved are insignificant or not, but one can have recourse to simulation studies to judge the accuracy of certain approximations empirically. This is demonstrated in the book using R code. The code is deliberately kept simple, so that readers can understand the underlying theory with minimal effort. At the end of each


chapter, computational exercises based on R software are included to provide hands-on experience to students.

The book has evolved out of the instructional material prepared while teaching a course on "Asymptotic Inference" for several years at Savitribai Phule Pune University, formerly known as the University of Pune. To some extent, the topics coincide with what we used to cover in the course. No doubt, there are many excellent books on asymptotic inference. However, these books do not elaborate on the computational aspect. While teaching the course, we realized that students need a simpler, user-friendly book. Students often come across terms such as "routine computations", "trivial" and "obvious", when in fact the underlying steps are not so obvious or trivial for them. Hence, we decided to compile the teaching material in the form of a customized book to fulfill the need of the students while trying to fill this gap. While competing texts are often quite concise, we have tried to develop the subject thoroughly with the help of a variety of carefully worked out examples. The main motive is to provide a fairly thorough treatment of basic techniques, theoretically and computationally using R, so that the book is suitable for self-study. The style of the book is purposely kept conversational, so that the reader may feel the presence of a teacher. Hopefully, a better understanding can provide more insight and propel students towards a better appreciation of the beauty of the subject. We will be deeply rewarded if the present book helps students to enhance their understanding and to enjoy the subject.

The mathematical prerequisites for this book are basic convergence concepts for a sequence of real numbers and familiarity with the properties of a variety of discrete and continuous distributions. It is assumed that the reader has background knowledge of parametric inference for finite sample size. This includes the concepts of sufficiency, the information function, standard methods of estimation and finite sample optimality properties of estimators obtained using these methods. Some background in measure-theoretic probability theory would also be beneficial, since it forms the mathematical foundation of asymptotic inference. In particular, an awareness of concepts such as the various modes of convergence, laws of large numbers and the Lindeberg-Lévy central limit theorem would be useful. In addition, a basic knowledge of R software is desirable. We have added three sections in Chap. 1, devoted to a brief introduction to the basic concepts needed, for ready reference. For the interested reader, a list of reference books is given for an in-depth study of these concepts.

The intended target audience of the present book is mainly postgraduate students in quantitative programs, such as Statistics, Biostatistics or Econometrics, and other disciplines where inference for large sample size is needed for data analysis. The book will be useful to data scientists and researchers in many areas in which the data size is large and various analytical methods are to be deployed, such as categorical data analysis, regression analysis and survival analysis. It will also provide sufficient background for studying inference in stochastic processes. The book is designed primarily to serve as a text book for a one semester introductory course in asymptotic statistical inference in any postgraduate statistics program.


We wish to express our special thanks to all our teachers; in particular, we are grateful to Prof. B. K. Kale and Prof. M. S. Prasad, who laid the strong foundation of statistical inference and influenced our understanding, appreciation and taste for the subject. We sincerely thank Prof. M. S. Prasad, Prof. M. B. Rajarshi, Dr. Vidyagouri Prayag, Dr. Akanksha Kashikar and Namitha Pais for reading the entire manuscript very carefully and for their constructive inputs. Incorporation of their suggestions, and also of the comments and criticism of a number of reviewers, has definitely improved the presentation and the rigour of the book. We thank the Head, Department of Statistics, Savitribai Phule Pune University, for providing the necessary facilities. We wish to express our deep gratitude to the R core development team and the authors of contributed packages, who have invested a lot of time and effort in creating R as it is today. With the help of such a wonderful computational tool, it is possible to showcase the beauty of the theory of asymptotic statistical inference. We take this opportunity to acknowledge Nupoor Singh, editor of the Statistics section of Springer Nature, and her team, for providing help from time to time and for the subsequent processing of the text to its present form. We are deeply grateful to our family members for their constant support and encouragement. Last but not least, we owe profound thanks to all the students whom we have taught during the last several years and who have been the driving force behind this immense task. Their reactions and doubts in class, and our urge to make the theory crystal clear to them, compelled us to pursue this activity and to prepare a variety of illustrations and exercises. All mistakes and ambiguities in the book are exclusively our responsibility. We would love to know of any mistakes that a reader comes across in the book. Feedback in the form of suggestions and comments from colleagues and readers is most welcome.

Pune, India
August 15, 2020

Shailaja Deshmukh Madhuri Kulkarni

Contents

1 Introduction ..... 1
1.1 Introduction ..... 1
1.2 Basics of Parametric Inference ..... 5
1.3 Basics of Asymptotic Inference ..... 14
1.4 Introduction to R Software and Language ..... 20
References ..... 27

2 Consistency of an Estimator ..... 29
2.1 Introduction ..... 29
2.2 Consistency: Real Parameter Setup ..... 30
2.3 Strong Consistency ..... 55
2.4 Uniform Weak and Strong Consistency ..... 57
2.5 Consistency: Vector Parameter Setup ..... 60
2.6 Performance of a Consistent Estimator ..... 69
2.7 Verification of Consistency Using R ..... 73
2.8 Conceptual Exercises ..... 88
2.9 Computational Exercises ..... 93
References ..... 93

3 Consistent and Asymptotically Normal Estimators ..... 95
3.1 Introduction ..... 95
3.2 CAN Estimator: Real Parameter Setup ..... 96
3.3 CAN Estimator: Vector Parameter Setup ..... 120
3.4 Verification of CAN Property Using R ..... 143
3.5 Conceptual Exercises ..... 161
3.6 Computational Exercises ..... 165
References ..... 166

4 CAN Estimators in Exponential and Cramér Families ..... 167
4.1 Introduction ..... 167
4.2 Exponential Family ..... 168
4.3 Cramér Family ..... 198
4.4 Iterative Procedures ..... 232
4.5 Maximum Likelihood Estimation Using R ..... 234
4.6 Conceptual Exercises ..... 261
4.7 Computational Exercises ..... 264
References ..... 266

5 Large Sample Test Procedures ..... 267
5.1 Introduction ..... 267
5.2 Likelihood Ratio Test Procedure ..... 274
5.3 Large Sample Tests Using R ..... 292
5.4 Conceptual Exercises ..... 304
5.5 Computational Exercises ..... 306
References ..... 306

6 Goodness of Fit Test and Tests for Contingency Tables ..... 307
6.1 Introduction ..... 308
6.2 Multinomial Distribution and Associated Tests ..... 310
6.3 Goodness of Fit Test ..... 330
6.4 Score Test and Wald's Test ..... 342
6.5 Tests for Contingency Tables ..... 361
6.6 Consistency of a Test Procedure ..... 371
6.7 Large Sample Tests Using R ..... 373
6.8 Conceptual Exercises ..... 397
6.9 Computational Exercises ..... 398
References ..... 401

7 Solutions to Conceptual Exercises ..... 403
7.1 Chapter 2 ..... 403
7.2 Chapter 3 ..... 435
7.3 Chapter 4 ..... 478
7.4 Chapter 5 ..... 491
7.5 Chapter 6 ..... 497
7.6 Multiple Choice Questions ..... 503
7.6.1 Chapter 2: Consistency of an Estimator ..... 503
7.6.2 Chapter 3: Consistent and Asymptotically Normal Estimators ..... 508
7.6.3 Chapter 4: CAN Estimators in Exponential and Cramér Families ..... 512
7.6.4 Chapter 5: Large Sample Test Procedures ..... 517
7.6.5 Chapter 6: Goodness of Fit Test and Tests for Contingency Tables ..... 519

Index ..... 527

About the Authors

Shailaja Deshmukh is a visiting faculty member at the Department of Statistics, Savitribai Phule Pune University (formerly known as the University of Pune). She retired as a Professor of Statistics from Savitribai Phule Pune University. She has taught around twenty-five different theoretical and applied courses. Her areas of interest are inference in stochastic processes, applied probability, actuarial statistics and analysis of microarray data. She has a number of research publications in various peer-reviewed journals, such as Biometrika, Journal of Multivariate Analysis, Journal of the Royal Statistical Society, Australian and New Zealand Journal of Statistics, Environmetrics, Journal of Statistical Planning and Inference and Journal of Translational Medicine. She has published four books, the most recent of which is 'Multiple Decrement Models in Insurance: An Introduction Using R', published by Springer. She has served as an executive editor and as a chief editor of the Journal of the Indian Statistical Association, and she is an elected member of the International Statistical Institute.

Madhuri Kulkarni has been working as an Assistant Professor at the Department of Statistics, Savitribai Phule Pune University, since 2003. She has taught a variety of courses in the span of 17 years. The list includes programming languages like C and C++, core statistical courses like probability distributions, statistical inference and regression analysis, and applied statistical courses like actuarial statistics, Bayesian inference and reliability theory. She has been using R for teaching practical and applied courses for more than a decade. She is a recipient of the prestigious U. S. Nair Young Statistician Award. She has completed research projects for the Armament Research and Development Establishment (ARDE), Pune, and received a core research grant for a project on software reliability from DST-SERB, India, in 2018. She writes regularly in English, Hindi and Marathi on her blog and shares the e-content developed by her.


List of Figures

Fig. 1.1 Probability mass function of binomial B(5, p) distribution ..... 4
Fig. 1.2 Histogram and box plot ..... 25
Fig. 2.1 Random samples from uniform U(0, 1) distribution ..... 76
Fig. 2.2 Uniform U(0, θ) distribution: consistency of X(n) and 2X̄n ..... 80
Fig. 3.1 Cauchy C(θ, 1) distribution: histograms of normalized sample median ..... 145
Fig. 3.2 Cauchy C(θ, 1) distribution: approximate normality of normalized sample median ..... 146
Fig. 3.3 Weibull distribution: approximate normality of normalized sample median ..... 148
Fig. 3.4 Exponential Exp(θ, 1) distribution: MLE is not CAN ..... 150
Fig. 3.5 Exponential Exp(θ, 1) distribution: asymptotic distribution of MLE ..... 151
Fig. 3.6 Exponential Exp(μ, σ) distribution: CAN estimator based on moments ..... 154
Fig. 4.1 Truncated Poisson distribution: MLE ..... 236
Fig. 4.2 Truncated Poisson distribution: approximate normality of normalized MLE ..... 238
Fig. 4.3 Normal N(θ, θ²) distribution: log-likelihood ..... 239
Fig. 4.4 Normal N(θ, θ²) distribution: approximate normality of MLE ..... 241
Fig. 4.5 Scatter plots: MLE in joint and marginal models ..... 261
Fig. 4.6 Density plots: MLE in joint and marginal models ..... 262
Fig. 5.1 Cauchy C(θ, 1) distribution: power function ..... 293
Fig. 5.2 Bivariate normal N2(0, 0, 1, 1, ρ) distribution: power function ..... 298
Fig. 5.3 Asymptotic null distribution of likelihood ratio test statistic ..... 303

List of Tables

Table 2.1 Random samples from uniform U(0, 1) distribution ..... 75
Table 2.2 Estimate of coverage probability when θ = 0 ..... 83
Table 2.3 Estimate of coverage probability when θ = 1 ..... 83
Table 2.4 Estimates of coverage probabilities: G(α, λ) distribution ..... 84
Table 2.5 Estimates of coverage probabilities: C(θ, λ) distribution ..... 86
Table 2.6 Performance of Tk(Xn) ..... 87
Table 3.1 Comparison of two methods of constructing an asymptotic confidence interval ..... 156
Table 3.2 N2(0, 0, 1, 1, ρ) distribution: p-values of Shapiro-Wilk test ..... 158
Table 3.3 Coefficient of skewness and kurtosis of normalized Rn and normalized Z ..... 158
Table 3.4 Sample central moments of normalized Rn and normalized Z ..... 159
Table 3.5 Confidence intervals for ρ based on Rn and Fisher's Z transformation ..... 161
Table 4.1 N2(0, 0, 1, 1, ρ) distribution: values of ρ̂n and Rn ..... 244
Table 5.1 N2(0, 0, 1, 1, ρ) distribution: approximate variances of ρ̂n and Rn ..... 298
Table 6.1 Carver's data: two varieties of maize ..... 320
Table 6.2 Uniform U(0, 4) distribution: grouped frequency distribution ..... 330
Table 6.3 Truncated binomial distribution: frequency distribution ..... 335
Table 6.4 Truncated binomial distribution: observed and expected frequencies ..... 336
Table 6.5 IQ scores: grouped frequency distribution ..... 337
Table 6.6 IQ scores: observed and expected frequencies ..... 342
Table 6.7 Limit laws of quadratic forms ..... 344
Table 6.8 Frequencies of phenotypes ..... 376
Table 6.9 Test for goodness of fit: summary of test procedures ..... 378
Table 6.10 Number of cars passing during a unit interval ..... 379
Table 6.11 Test for proportion: summary of test procedures ..... 382
Table 6.12 Test for equality of proportions: summary of test procedures ..... 387
Table 6.13 Cross-classification by gender and political party identification ..... 390
Table 6.14 Cross-classification of aspirin use and myocardial infarction ..... 391
Table 6.15 Count data in a three-way contingency table ..... 392
Table 6.16 Three-way contingency table: formulae for expected frequencies and degrees of freedom ..... 393
Table 6.17 Three-way contingency table: observed and expected frequencies ..... 396
Table 6.18 Three-way contingency table: values of test statistic and p-values ..... 396
Table 6.19 Three-way contingency table: analysis by Poisson regression ..... 397
Table 6.20 Number of organisms with specific genotype ..... 399
Table 6.21 Number of organisms ..... 399
Table 6.22 Heights of eight-year-old girls ..... 400
Table 6.23 Cross-classification by race and party identification ..... 400
Table 6.24 Classification according to source of news ..... 400
Table 6.25 Data on feedback of viewers of TV serial ..... 400
Table 6.26 Levels of three variables: hair color, eye color and sex ..... 401
Table 7.1 Answer key ..... 525

1 Introduction

Contents
1.1 Introduction ..... 1
1.2 Basics of Parametric Inference ..... 5
1.3 Basics of Asymptotic Inference ..... 14
1.4 Introduction to R Software and Language ..... 20

1.1 Introduction

Statistics is concerned with the collection of data, their analysis and interpretation. As a first step, the data are analyzed without any extraneous assumptions. The principal aim of such an analysis is the organization and summarization of the data to bring out their main features and clarify their underlying structure. This first step is known as exploratory data analysis, in which graphs such as histograms, box plots and scatter plots are drawn and some interesting characteristics based on the given sample, such as the sample mean, sample variance, coefficient of variation, and correlation or regression in the case of multivariate data, are obtained. These sample characteristics are then used to estimate the corresponding population characteristics. This second step of the analysis is known as confirmatory data analysis. Statistical inference plays a significant role in confirmatory data analysis, as it involves the inference procedures of point estimation, interval estimation and testing of hypotheses. In these inference procedures it is essential to know the distribution of the sample characteristics. Probability theory and distribution theory play the role of a bridge between the exploratory and the confirmatory data analysis. In many cases, it is difficult to find the distribution for a finite sample size, and one then seeks to find it for a large sample size. To elaborate on all these issues we begin with the basic framework of parametric statistical inference.


The parametric approach to statistical modeling assumes a family of probability distributions. More specifically, suppose X is a random variable or a random vector under study, defined on a probability space (Ω, A, Pθ), with probability law f(x, θ), θ ∈ Θ. The probability law f(x, θ) is determined by the probability measure Pθ. The probability law is the probability mass function if X is a discrete random variable and the probability density function if X is a continuous random variable. The parameter θ may be a real parameter or a vector parameter. The set Θ is known as the parameter space. As θ varies over Θ, we get a family of probability distributions. An important condition on the probability measure Pθ in statistical inference is the indexing of Pθ by a parameter θ ∈ Θ. The probability measure Pθ is said to be indexed by a parameter θ, or labeled by a parameter θ, if Pθ1(·) = Pθ2(·) implies θ1 = θ2. A parameter θ is then known as an indexing parameter. In terms of the probability law f(x, θ), it is stated as follows. Suppose the support Sf of f(x, θ) is defined as Sf = {x | f(x, θ) > 0}. Then a parameter θ is known as an indexing parameter if f(x, θ1) = f(x, θ2), ∀ x ∈ Sf, implies that θ1 = θ2. A collection {Pθ, θ ∈ Θ} is known as a family of probability measures indexed by θ, or {f(x, θ), θ ∈ Θ} is known as a family of probability distributions indexed by θ. Such an indexing condition on the probability measure is known as an identifiability condition, as it uniquely identifies the member of the family {Pθ, θ ∈ Θ}. For example, if X follows a binomial B(n, p) distribution, then p is the indexing parameter; if X follows a normal N(μ, σ²) distribution, then μ and σ² are both indexing parameters. Suppose X and Y denote the life lengths of the two components of a system in which the system fails if either of the components stops working. Thus, the life of the system is the same as the life of the component which fails first. Hence, the observable random variable is Z = min{X, Y}. Suppose X and Y are independent random variables, each having an exponential distribution with failure rate θ and λ, respectively. Then Z has an exponential distribution with failure rate θ + λ. In this situation, (θ, λ) cannot be an indexing parameter of Z, as infinitely many pairs (θ, λ) give rise to the same value of θ + λ. Thus, the indexing parameter is θ + λ and not (θ, λ). The problem of identifiability is basic to all statistical methods and data analysis, occurring in such diverse areas as reliability theory, survival analysis and econometrics, where stochastic modeling is widely used. For more details, one may refer to the book by Rao [1]. An important assumption in parametric inference is that the form of the probability law f(x, θ) is known and the only unknown quantities are the indexing parameters. The main aim of statistical inference is to have the best guess about θ, or about some parametric function g(θ), such as the mean or variance of a distribution, on the basis of a sample X ≡ {X1, X2, ..., Xn} of n observations from the distribution of X. It is to be noted that the given data X are generated corresponding to some value θ = θ0, say, which is labeled the true parameter. However, θ0 is unknown and we wish to guess its value on the basis of the observations X generated under θ0. A true parameter θ0 is any member of Θ and hence θ0 is usually referred to simply as θ. To have the best guess of θ we find a suitable statistic, that is, a Borel measurable function Tn(X) of the observations X.
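As a quick empirical check of this non-identifiability, the following R sketch (with arbitrarily chosen rates and sample size) simulates Z = min{X, Y} under two different pairs (θ, λ) having the same sum and compares both samples with the exponential distribution with rate θ + λ.

```r
## Non-identifiability of (theta, lambda) in Z = min(X, Y):
## pairs with the same theta + lambda give the same distribution of Z,
## namely exponential with rate theta + lambda.
set.seed(1)
n <- 10000
z1 <- pmin(rexp(n, rate = 1), rexp(n, rate = 3))  # theta = 1, lambda = 3
z2 <- pmin(rexp(n, rate = 2), rexp(n, rate = 2))  # theta = 2, lambda = 2

c(mean(z1), mean(z2), 1 / 4)            # both sample means close to 1/(theta + lambda)
ks.test(z1, "pexp", rate = 4)$p.value   # consistent with Exp(rate = 4)
ks.test(z2, "pexp", rate = 4)$p.value
```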
For example, suppose X is a random sample from a normal N (θ, 1) distribution, θ ∈ R. Then the sample mean X n or the sample median are functions of sample observations and can be used to have a good guess about θ, as θ is a population


mean as well as the population median. Suppose Θ = [0, ∞); we do come across such a restricted parameter space setup, particularly while developing the likelihood ratio test procedure to test H0 : θ = 0 against the alternative H1 : θ > 0. In such a case, it is desirable to have a statistic whose value lies in the interval [0, ∞) as a preliminary guess for θ. Observe that

Pθ[X̄n < 0] = Pθ[√n(X̄n − θ) < −√n θ] = Φ(−√n θ), which is > 0 if θ > 0 and equals 1/2 if θ = 0.

Here Φ(·) denotes the distribution function of the standard normal distribution. Thus, for any θ ∈ [0, ∞), Pθ[X̄n < 0] > 0. As a consequence, for given data, if X̄n < 0, one cannot use X̄n as a possible value of θ, which is non-negative. Further, if Θ = {0, 1}, then Pθ[X̄n = 0] = 0 and Pθ[X̄n = 1] = 0. In such cases using the sample mean X̄n to guess θ does not seem to be reasonable; instead, a suitable function of the observations with range space {0, 1} should be used. In view of such limitations, not every statistic can be used as an estimator of θ; it is essential that the range space of Tn is the same as the parameter space. We thus define an estimator and an estimate as follows. Suppose 𝒳 is the set of all possible values of X; it is known as the sample space.
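The probability Pθ[X̄n < 0] = Φ(−√n θ) computed above can be evaluated directly in R with pnorm(); the values of n and θ below are arbitrary illustrative choices.

```r
## P_theta(Xbar_n < 0) = pnorm(-sqrt(n) * theta) when Xbar_n ~ N(theta, 1/n)
prob_neg_mean <- function(n, theta) pnorm(-sqrt(n) * theta)

prob_neg_mean(n = 25, theta = 0)     # 0.5 when theta = 0
prob_neg_mean(n = 25, theta = 0.2)   # positive even though theta > 0
prob_neg_mean(n = 100, theta = 0.2)  # smaller for larger n, but still > 0
```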

Definition 1.1.1 Estimator and Estimate: Suppose X is a sample from the probability distribution of X, indexed by a parameter θ ∈ Θ. A Borel measurable function Tn(X) of X from 𝒳 to Θ is known as an estimator of θ. For a given realization x of X, the value Tn(x) is known as an estimate of θ.

This approach of defining an estimator is followed by many, including Rohatgi and Saleh [2] and Shao [3]. In Example 2.2.3, we define a suitable statistic to estimate θ when the parameter space Θ for a normal N(θ, 1) distribution is either [0, ∞) or {0, 1}. While defining consistency of an estimator Tn for a parameter θ in the next chapter, we study the limiting behavior of the probability that the distance between Tn and θ is small. This fact also indicates that the range space of Tn should be the same as that of θ, which may or may not be true for an arbitrary statistic Tn. It is to be noted that an estimator is a random variable or a random vector, and an estimate is a specific value of the random variable or random vector. An estimator Tn(X) forms the basis of inference for the parameter θ. In all the inference procedures the important basic assumption is that it is possible to suggest an estimator of θ by having observations on X; that is, we assume that observations on X provide information on θ. Such an assumption is valid since the probability law f(x, θ) changes as the value of the indexing parameter θ changes. To clarify this important point, in Fig. 1.1 we have plotted the probability mass function of the binomial B(5, p) distribution for p = 0.1, 0.2, ..., 0.9. From Fig. 1.1 we note that as the value of the parameter changes, the nature of the probability mass function changes. For some values of p, some values of X are more likely. On the other hand, if the observed value of X is 4, say, then from Fig. 1.1 we guess that it is more likely that p ∈ {0.7, 0.8, 0.9}. Such a feature is also observed when we have more data. It is to be noted that the joint distribution of X = {X1, X2, ..., Xn}, where each Xi is distributed as


X, also changes as the underlying indexing parameter changes. Thus, the sample {X1, X2, ..., Xn} does provide information on θ. Since Tn(X) is a Borel measurable function of the random variables X, it is again a random variable and its probability distribution is determined by that of X. It then follows that the probability distribution of Tn(X) is also indexed by θ, and it also provides information on θ. For example, if X is a random sample from a normal N(θ, 1) distribution, then the sample mean X̄n again has a normal N(θ, 1/n) distribution. A random sample X from the distribution of X indicates that {X1, X2, ..., Xn} are independent and identically distributed random variables, each having the same probability law f(x, θ) as that of X. The joint distribution of {X1, X2, ..., Xn} is then given by f(x1, x2, ..., xn, θ) = ∏_{i=1}^{n} f(xi, θ). For the given data x = {x1, x2, ..., xn}, f(x1, x2, ..., xn, θ) is a function of θ, and Fisher defines it as a likelihood function. We denote it by L(θ|x). Thus, L(θ|x) provides information on θ corresponding to the given observed data. If X is a discrete random variable, then L(θ|x) represents the probability of generating the data x = {x1, x2, ..., xn} when the true parameter is θ. It varies as θ varies over Θ and hence
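The dependence of the B(5, p) probability mass function, and hence of the likelihood, on p (the feature displayed in Fig. 1.1) can be examined numerically with dbinom(). This is only an illustrative sketch; the observed value x = 4 is chosen to match the discussion above.

```r
## pmf of B(5, p) for several values of p: which x values are likely changes with p
p.values <- seq(0.1, 0.9, by = 0.1)
pmf <- sapply(p.values, function(p) dbinom(0:5, size = 5, prob = p))
rownames(pmf) <- 0:5
colnames(pmf) <- paste0("p=", p.values)
round(pmf, 3)

## likelihood of p for a single observation x = 4 from B(5, p):
## L(p | x = 4) = dbinom(4, 5, p), largest for p near 0.8
round(dbinom(4, size = 5, prob = p.values), 3)
```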


provides information on θ corresponding to the given observed data. Most of the inference procedures are based on the likelihood function. The most popular is the method of maximum likelihood estimation for finding an estimator of θ. This procedure was proposed by Fisher in 1925. He proposed to estimate θ by that value of θ for which L(θ|x) is maximum corresponding to the given data x, and labeled it the maximum likelihood estimate. Another heavily used inference procedure based on the likelihood function is the likelihood ratio test procedure for testing hypotheses. In the present book, we discuss both these procedures and their properties when the size of the random sample is large. Another frequently used method of estimation is the method of moments. It was proposed by Karl Pearson in the nineteenth century. In this method, the estimator is obtained by solving the system of equations obtained by equating sample moments to the corresponding population moments, and it is labeled the moment estimator. There are various other methods to obtain estimators of the parameter of interest on the basis of given data, such as the method of least squares, methods based on sample quantiles and methods based on estimating functions. In this book, we will not discuss these methods of estimation or the properties of these estimators for finite n, as our focus is on the discussion of large sample optimality properties of estimators and test procedures. For details of these methods and properties of the estimators for finite n, for interval estimation and for testing of hypotheses for finite n, one may refer to the following books: Casella and Berger [4], Kale and Muralidharan [5], Lehmann and Casella [6], Lehmann and Romano [7], Rohatgi and Saleh [2]. However, for ready reference, in the following section we list various results from parametric statistical inference for finite sample size, as these form a foundation of the asymptotic statistical inference theory.
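As a preview of the computations developed in later chapters, maximum likelihood and moment estimates can be obtained numerically in R. The sketch below is illustrative only: it assumes an exponential model with an arbitrarily chosen true rate and uses optimize() for the one-dimensional maximization of the log-likelihood.

```r
## ML and moment estimation for an exponential(rate = theta) sample
set.seed(2)
x <- rexp(100, rate = 2)                     # data generated under theta = 2

loglik <- function(theta, x) sum(dexp(x, rate = theta, log = TRUE))

## maximum likelihood estimate by numerical maximization of the log-likelihood
mle <- optimize(loglik, interval = c(0.01, 20), x = x, maximum = TRUE)$maximum

## moment estimate: equate the sample mean to the population mean 1/theta
mme <- 1 / mean(x)

c(mle = mle, mme = mme)                      # both close to the true value 2
```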

1.2 Basics of Parametric Inference

Point estimation is one of the most important branches of statistical inference. To study the optimality properties of an estimator, one needs to know its distribution. The probability distribution of an estimator is known as its sampling distribution. As discussed in Sect. 1.1, if X is a random sample from a normal N(θ, σ²) distribution, θ ∈ R, σ² > 0, then the sample mean X̄n is a Borel measurable function from 𝒳 to Θ and hence is an estimator of θ. It has a normal N(θ, σ²/n) distribution, and this is the sampling distribution of X̄n. The sampling distribution of an estimator is useful for investigating its properties. The first natural property of an estimator is unbiasedness, as defined below.

Definition 1.2.1 Unbiased Estimator: An estimator Tn(X) is an unbiased estimator of g(θ) if Eθ(Tn(X)) = g(θ), ∀ θ ∈ Θ.

The concept of unbiasedness requires that the sampling distribution of the estimator is centered at g(θ). If X is a random sample from the normal N(θ, σ²) distribution, θ ∈ R, σ² > 0, then the sample mean X̄n is an unbiased estimator of θ. Here


g(θ) = θ. If we have two unbiased estimators of the same parametric function g(θ), then using the sampling distributions of the estimators we can find their variances and choose the estimator with the smaller variance. If the two estimators are not unbiased, they can be compared using the mean squared error (MSE).
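A small simulation makes such comparisons concrete; in the sketch below (sample size, parameter value and number of replications are arbitrary choices) the sample mean and the sample median, both reasonable estimators of θ in a N(θ, 1) model, are compared through their estimated MSEs.

```r
## Compare the sample mean and sample median as estimators of theta in N(theta, 1)
set.seed(3)
theta <- 1; n <- 50; nsim <- 5000

est <- replicate(nsim, {
  x <- rnorm(n, mean = theta, sd = 1)
  c(mean = mean(x), median = median(x))
})

rowMeans(est)                                   # both are nearly unbiased
apply(est, 1, function(t) mean((t - theta)^2))  # estimated MSEs: the mean has the smaller one
```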

Definition 1.2.2 Mean Squared Error of an Estimator: The mean squared error of Tn(X) as an estimator of g(θ) is defined as MSE(Tn(X)) = Eθ(Tn(X) − g(θ))².

An estimator with smaller MSE is always preferred. If Tn(X) is an unbiased estimator of g(θ), then the MSE of Tn(X) is the same as the variance of Tn(X). Within a class of unbiased estimators of a parametric function g(θ), we seek an estimator which has the smallest variance. Under certain regularity conditions, there exists a lower bound for the variance of an unbiased estimator of g(θ), known as the Cramér-Rao lower bound for the variance of an unbiased estimator. To study this important and fairly general result we need the concept of the information function, introduced by Fisher. In Sect. 1.1, we discussed the concept of information and noted that the observations on X contain information about the parameter. We now quantify this concept of information about θ in a sample or in any statistic, under the following two assumptions:

1. The support Sf of f(x, θ) is free from θ.
2. The identity ∫_{Sf} f(x, θ) dx = 1 can be differentiated with respect to θ at least twice under the integral sign. As a consequence,

Eθ[(∂/∂θ) log f(X, θ)] = 0  and  Eθ[((∂/∂θ) log f(X, θ))²] = Eθ[−(∂²/∂θ²) log f(X, θ)].

Definition 1.2.3 Information Function: The information function I(θ), which quantifies the information about θ contained in a single observation on a random variable X, is defined as

I(θ) = Eθ[((1/f(X, θ)) (∂/∂θ) f(X, θ))²] = Eθ[((∂/∂θ) log f(X, θ))²] = Eθ[−(∂²/∂θ²) log f(X, θ)].

I(θ) is usually referred to as the Fisher information function. The function (1/f(x, θ)) (∂/∂θ) f(x, θ) = (∂/∂θ) log f(x, θ) is interpreted as a relative rate of change in f(x, θ) as θ varies; thus, it is similar to a velocity, while (∂²/∂θ²) log f(x, θ) is similar to an acceleration. The function (∂/∂θ) log f(X, θ), viewed as a function of X for fixed θ, is known as the score function. Thus, for each fixed θ, it is a random variable. From the above expressions it is clear that its expectation is 0 and its variance is I(θ).
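The two facts above, that the score has mean 0 and variance I(θ), can be checked by simulation. The sketch below uses a Poisson(θ) model, for which the score of a single observation is X/θ − 1 and I(θ) = 1/θ; the value of θ is an arbitrary choice.

```r
## Score of one Poisson(theta) observation: (d/dtheta) log f(X, theta) = X/theta - 1
set.seed(4)
theta <- 3
x <- rpois(1e5, lambda = theta)
score <- x / theta - 1

mean(score)   # approximately 0
var(score)    # approximately I(theta) = 1/theta = 0.333
```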


We use these results in subsequent chapters. Observe that I(θ) ≥ 0, and it is 0 if and only if Eθ[((∂/∂θ) log f(X, θ))²] = 0, which is equivalent to the statement that (∂/∂θ) log f(X, θ) = 0 with probability 1, that is, f(X, θ) does not depend on θ, or the distribution of X does not change as θ changes. The following theorem states a result about a lower bound for the variance of an unbiased estimator.

Theorem 1.2.1 Cramér-Rao inequality: Suppose X is a random sample from the distribution of X with {f(x, θ), θ ∈ Θ} as a family of probability distributions of X. Suppose U is the class of all unbiased estimators Tn of g(θ) such that E(Tn²) < ∞ for all θ ∈ Θ. Suppose the following conditions are satisfied.

1. The support Sf of f(x, θ) is free from θ.
2. The identity ∫_{Sf} f(x, θ) dx = 1 can be differentiated with respect to θ at least twice under the integral sign.
3. An estimator Tn ∈ U is such that

g'(θ) = (d/dθ) Eθ(Tn) = (d/dθ) ∫ t h(t, θ) dt = ∫ t [(∂/∂θ) log h(t, θ)] h(t, θ) dt,

where h(t, θ) is the probability law of Tn. Then

Var(Tn) ≥ (g'(θ))² / I(θ).

A function (g'(θ))²/I(θ) is known as the Cramér-Rao lower bound for the variance of an unbiased estimator of g(θ). One would like to have an unbiased estimator which attains the lower bound specified in the above theorem. Such an estimator is known as a minimum variance bound unbiased estimator (MVBUE). Such estimators exist for some models, but in general it is difficult to find such an estimator. The next step is to find an unbiased estimator whose variance is smaller than that of any other unbiased estimator. This leads to the concept of a uniformly minimum variance unbiased estimator (UMVUE). It is defined below.

Definition 1.2.4 Suppose U is the class of all unbiased estimators Tn of θ such that E(Tn²) < ∞ for all θ ∈ Θ. An estimator Tn* ∈ U is called a UMVUE of θ if

E(Tn* − θ)² ≤ E(Tn − θ)²  ∀ θ ∈ Θ and ∀ Tn ∈ U.
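For a Poisson(θ) sample of size n, the information in the sample is n/θ, so the Cramér-Rao bound for unbiased estimators of θ is θ/n; the sample mean attains it and is in fact the UMVUE of θ. The following simulation sketch (arbitrary θ, n and number of replications) illustrates this numerically.

```r
## Variance of the sample mean versus the Cramer-Rao bound theta/n for Poisson(theta)
set.seed(5)
theta <- 4; n <- 30; nsim <- 10000

xbar <- replicate(nsim, mean(rpois(n, lambda = theta)))
c(simulated.var = var(xbar), cr.bound = theta / n)   # both close to 0.133
```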


If the family of distributions of an estimator satisfies certain properties, then the estimator is a UMVUE of its expectation. These properties involve concepts of sufficiency and completeness of a statistic. We define these below.

Definition 1.2.5 Sufficient Statistic: Suppose X = {X1, X2, ..., Xn} is a random sample from the distribution of X with probability law f(x, θ). A statistic Un = Un(X) is a sufficient statistic for the family {f(x, θ), θ ∈ Θ} if and only if the conditional distribution of X, given Un, does not depend on θ.

If the conditional distribution of X given Un does not depend on θ, then the conditional distribution of any statistic Vn = Vn(X) given Un also does not depend on θ. It means that a sufficient statistic Un(X) extracts all the information that the sample has about θ. If Un is a sufficient statistic, then it can be shown that I(θ) corresponding to Un is the same as I(θ) corresponding to the random sample X. It implies that there is no loss of information if the inference procedures are based on the sufficient statistic. Thus, one of the desirable properties of an estimator Tn(X) is that it should be a function of a sufficient statistic. The well-known Neyman-Fisher factorization theorem gives a criterion for determining a sufficient statistic. We state it below.

Theorem 1.2.2 Neyman-Fisher factorization theorem: Suppose the joint probability law f(x1, x2, ..., xn, θ) is factorized as

f (x1 , x2 , . . . , xn , θ) = h(x1 , x2 , . . . , xn )g(Tn (x), θ), where h is a non-negative function of {x1 , x2 , . . . , xn } only and does not depend on θ and g is a non-negative function of θ and {x1 , x2 , . . . , xn } through Tn (x). Then Tn (X ) is a sufficient statistic. In the statement of the above theorem, statistic Tn (X ) and parameter θ may be vector valued. The joint probability law f (x1 , x2 , . . . , xn , θ) viewed as a function of θ given data {x1 , x2 , . . . , xn } is nothing but a likelihood function L(θ|x). Thus, according to the Neyman-Fisher factorization theorem, if L(θ|x) is factorized as h(x)g(Tn (x), θ), then Tn (x) is a sufficient statistic for the family of distributions or simply Tn (x) is a sufficient statistic for θ. The concept of sufficiency is used frequently with another concept, called completeness of the family of distributions. We define it below and also define what is meant by a complete statistic .

Definition 1.2.6 Complete Statistic: A family {f(x, θ), θ ∈ Θ} of probability distributions of X is said to be complete if, for any function h,

Eθ(h(X)) = 0 ⇒ Pθ[h(X) = 0] = 1  ∀ θ ∈ Θ.


A statistic Un(X) based on a random sample X from the distribution of X is said to be complete if the family of distributions of Un(X) is complete. Using the Rao-Blackwell and Lehmann-Scheffé theorems it can be shown that an unbiased estimator of g(θ), which is a function of a complete sufficient statistic, is always the UMVUE of g(θ). We state these two theorems below.

Theorem 1.2.3 Rao-Blackwell theorem: Suppose X is a random sample from the distribution of X with {f(x, θ), θ ∈ Θ} as a family of probability distributions. Suppose U is a class of unbiased estimators Tn of θ such that E(Tn²) < ∞ for all θ ∈ Θ. Suppose Un = Un(X) is a sufficient statistic for the family. Then the conditional expectation Eθ(Tn|Un) is independent of θ and is an unbiased estimator of θ. Further,

Eθ(Eθ(Tn|Un) − θ)² ≤ Eθ(Tn − θ)²  ∀ θ ∈ Θ.

Thus, according to the Rao-Blackwell theorem, Eθ(Tn|Un) is an unbiased estimator with variance smaller than that of any other unbiased estimator of θ, ∀ θ ∈ Θ, that is, it is a UMVUE of θ. The Lehmann-Scheffé theorem, stated below, conveys that under the additional requirement of completeness of the sufficient statistic, Eθ(Tn|Un) is the unique UMVUE of its expectation.

Theorem 1.2.4 Lehmann-Scheffé theorem: Suppose Un = Un(X) is a complete sufficient statistic and there exists an unbiased estimator Tn of θ. Then there exists a unique UMVUE of θ and it is given by Eθ(Tn|Un).
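The improvement promised by these theorems can be seen in a standard example: estimating g(θ) = exp(−θ) = Pθ(X = 0) from a Poisson(θ) sample. The crude unbiased estimator 1(X1 = 0), conditioned on the complete sufficient statistic ΣXi, yields the estimator ((n − 1)/n)^ΣXi, which is unbiased with much smaller variance. The sketch below uses arbitrarily chosen θ, n and number of replications.

```r
## Rao-Blackwellization: estimating g(theta) = exp(-theta) = P(X = 0), Poisson model
set.seed(6)
theta <- 2; n <- 20; nsim <- 10000

est <- replicate(nsim, {
  x <- rpois(n, lambda = theta)
  c(naive = as.numeric(x[1] == 0),     # unbiased but crude
    rb    = ((n - 1) / n)^sum(x))      # E(naive | sum(x)): unbiased, function of the sufficient statistic
})

rowMeans(est)          # both approximately exp(-2) = 0.135
apply(est, 1, var)     # the Rao-Blackwellized estimator has far smaller variance
```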

The Rao-Blackwell theorem and the Lehmann-Scheffé theorem together convey that an unbiased estimator which is a function of a complete sufficient statistic is the unique UMVUE of its expectation. In Chap. 2, we prove that it is also a consistent estimator of its expectation. Testing of hypotheses is another fundamental branch of inference. It is different from estimation in some aspects, such as the accuracy measures and the appropriate asymptotic theory. We now present a brief introduction to the formal model for statistical hypothesis testing that was proposed by Neyman and Pearson in the late 1920s. We are confronted with a hypothesis testing problem when we want to guess which of two possible statements about a population is correct on the basis of observed data. A hypothesis is nothing but a statement about the population. When we are interested in studying a particular characteristic X of the population, with the assumption that the form of the probability law f(x, θ) of X is known and the only unknown quantities are the indexing parameters, a hypothesis reduces to a statement about the population parameter. On the basis of the observed data, one is interested in testing the validity of an assertion about the unknown parameter θ. For example, one may be interested in verifying whether the proportion p of defectives in a lot of items is at most 5%. In such a situation the set of possible values of p is divided into two sets,


one is (0, 0.05] and the other is (0.05, 1). One statement is p ≤ 0.05 and the other statement is p > 0.05. It is necessary to distinguish between the two hypotheses under consideration. In each case, we declare one of the two hypotheses to be the null hypothesis, denoted by H0, and the other to be the alternative hypothesis, denoted by H1. Roughly speaking, the logic for determining which hypothesis is H0 and which is H1 is as follows. The null hypothesis H0 should be the hypothesis to which one defaults if the evidence given by the data is doubtful or insufficient, and H1 should be the hypothesis for which one requires compelling evidence in order to embrace it. Hence, a null hypothesis is always interpreted as the hypothesis of "no difference". In general, if Θ denotes the parameter space, then the null hypothesis corresponds to H0 : θ ∈ Θ0 and the alternative hypothesis corresponds to H1 : θ ∈ Θ1, where Θ0 ∩ Θ1 = ∅ and Θ0 ∪ Θ1 = Θ. If Θ0 contains only one point, we say that H0 is a simple hypothesis; otherwise it is known as a composite hypothesis. Similarly, if Θ1 is a singleton set, then H1 is a simple hypothesis; otherwise it is a composite hypothesis. Thus, if a hypothesis is simple, the probability distribution of X is specified completely under that hypothesis. We now elaborate on the procedure of testing of hypotheses. Corresponding to given data x = (x1, x2, ..., xn), we find a decision rule that will lead to a decision to accept or to reject the null hypothesis. Such a decision rule partitions the sample space 𝒳 into two disjoint sets C and C′ such that if x ∈ C, we reject H0, and if x ∈ C′, we do not reject H0. The set C is known as a critical region or a rejection region. The set C′ is known as an acceptance region. There are two types of errors that can be made if one uses such a procedure. One may reject H0 when in fact it is true. This is called a type I error. Alternatively, one may accept H0 when it is false. This error is called a type II error. Thus Pθ(C), for θ ∈ Θ0, is the probability of type I error, while Pθ(C′), for θ ∈ Θ1, is the probability of type II error. Ideally, one would like to find a critical region for which both these probabilities are 0. However, this is not possible. If a critical region is such that the probability of type I error is 0, then the probability of type II error will be 1. As a next step, we would like to devise procedures that minimize the probabilities of committing errors. Unfortunately, there is an inevitable tradeoff between type I and type II errors, so we cannot minimize the probabilities of both types of errors simultaneously. The distinguishing feature of hypothesis testing is the manner in which it addresses the tradeoff between type I and type II errors. The Neyman-Pearson formulation of hypothesis testing offers the null hypothesis a privileged status: H0 will be maintained unless there is compelling evidence against it. Such a status is equivalent to declaring a type I error to be more serious than a type II error. Hence, in the Neyman-Pearson formulation, an upper bound is imposed on the maximum probability of type I error that will be tolerated. This bound is known as the level of significance, conventionally denoted by α. The level of significance is specified prior to examining the data. We consider test procedures for which the probability of type I error is not greater than α. Such tests are called level α tests.
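Type I and type II error probabilities are easy to compute for a simple normal-mean problem. The sketch below is hypothetical: it tests H0 : θ = 0 against H1 : θ = 1 using a N(θ, 1) sample of size n, rejecting H0 when X̄n exceeds a cutoff chosen so that the level is α = 0.05.

```r
## H0: theta = 0 vs H1: theta = 1, N(theta, 1) sample of size n,
## rejecting H0 when the sample mean exceeds the cutoff c
n <- 10
alpha <- 0.05
cutoff <- qnorm(1 - alpha) / sqrt(n)          # chosen so that P_0(Xbar > cutoff) = alpha

type1 <- 1 - pnorm(sqrt(n) * (cutoff - 0))    # probability of a type I error = 0.05
type2 <- pnorm(sqrt(n) * (cutoff - 1))        # probability of a type II error under theta = 1
c(type1 = type1, type2 = type2)
```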
In 1925, Fisher suggested two values for α as α = 0.05 and α = 0.01 in his extremely influential book “Statistical Methods for Research Workers”. These suggestions were intended as practical guidelines, but in view of the convenience of standardization in providing a common frame of


reference, these values gradually became the conventional levels to use. The interested reader may refer to Lehmann and Romano [7] (p. 57) about the choice of the significance level α. In hypothesis testing, decisions typically are described in language that acknowledges the privileged status of the null hypothesis and emphasizes that the decision criterion is based on the probability of committing a type I error. In describing the action of choosing H0, many statisticians prefer the phrase "fail to reject the null hypothesis" to the phrase "accept the null hypothesis", because choosing H0 does not imply an affirmation that H0 is correct; it only means that the evidence against H0 is not sufficiently compelling to warrant its rejection at significance level α. To introduce some more concepts precisely, we proceed to define a test function as follows. A Borel measurable function φ : X → [0, 1] is known as a test function. A test function defined as

φ(X) = 1, if X ∈ C; γ(X), if X ∈ B(C); 0, if X ∈ C′,

is known as a randomized test function, where B(C) denotes the boundary set of C. If γ(X) = 0, it is known as a non-randomized test function. Thus, φ(X) = 1 implies that H0 is rejected when the observed data are in the critical region. The function βφ(θ) = E_θ φ(X) = Pθ[X ∈ C] + E_θ γ(X) is known as the power function of φ(X). A test is a level α test if

βφ(θ) ≤ α ∀ θ ∈ Θ0 ⇔ sup_{θ∈Θ0} βφ(θ) ≤ α.

The quantity sup_{θ∈Θ0} βφ(θ) is known as the size of the test. Within the class of all level α tests, we seek to find a test for which the probability of type II error is minimum. We thus get the most powerful (MP) test and the uniformly most powerful (UMP) test. These are defined below.

 Definition 1.2.7

Most Powerful Test: Suppose Uα is the class of all level α tests to test H0 : θ ∈ Θ0 against the alternative H1 : θ = θ1 ∈ Θ1. A test φ0 ∈ Uα is said to be the most powerful test against the alternative H1 if βφ0(θ1) ≥ βφ(θ1) ∀ φ ∈ Uα. It is to be noted that in the above definition, we have a fixed value θ1 of the parameter in Θ1. Thus, this definition is sufficient if Θ1 is a singleton set. In general, Θ1 consists of more than one point. If a given test is an MP test for every point in Θ1, then we get a UMP test. The precise definition is given below.


 Definition 1.2.8

Uniformly Most Powerful Test: Suppose Uα is the class of all level α tests to test H0 : θ ∈ Θ0 against the alternative H1 : θ ∈ Θ1. A test φ0 ∈ Uα is said to be a uniformly most powerful test against the alternative H1 if βφ0(θ) ≥ βφ(θ) ∀ φ ∈ Uα uniformly in θ ∈ Θ1.

In most of the cases, UMP tests do not exist. They do exist for one-sided null and alternative hypotheses if the underlying distribution belongs to an exponential family. For details, one may refer to Lehmann and Romano [7]. We state below the Neyman-Pearson lemma, which is a fundamental lemma giving a general method for finding the MP test of a simple null hypothesis against a simple alternative. Theorem 1.2.5 Neyman-Pearson Lemma: Suppose X is a random variable with probability law f(x, θ), where θ ∈ Θ = {θ0, θ1}. Suppose we want to test H0 : θ = θ0 against the alternative H1 : θ = θ1.

1. Any test of the form

φ(x) = 1, if f(x, θ1) > k f(x, θ0); γ(x), if f(x, θ1) = k f(x, θ0); 0, if f(x, θ1) < k f(x, θ0),

for some k ≥ 0 and 0 ≤ γ(x) ≤ 1, is the most powerful test of its size. If k = ∞, then the test

φ(x) = 1, if f(x, θ0) = 0; 0, if f(x, θ0) > 0,

is the most powerful test of its size.
2. Given α ∈ (0, 1), there exists a test φ of one of the two forms given in (1) with γ(x) = γ (a constant) for which E_θ0(φ(X)) = α.
It can be shown that the MP test as given by the Neyman-Pearson lemma is unique. This lemma is also useful to find a UMP test, if it exists. Suppose the null hypothesis H0 : θ = θ0 is simple and the alternative is H1 : θ ∈ Θ1. We derive an MP test for testing H0 : θ = θ0 against the alternative H1 : θ = θ1, where θ1 ∈ Θ1. If the critical region does not depend on the particular θ1 ∈ Θ1, then this is a UMP test. The likelihood ratio test procedure discussed in Chap. 5 is an extension of the idea behind the Neyman-Pearson lemma. In most of the examples the critical region C can be expressed as [Tn < c] or [Tn > c] or [|Tn| > c], where Tn is a function of the observed data X and fixed values of parameters under the hypotheses under study. Tn is known as a test statistic. If


the family {f(x, θ), θ ∈ Θ} admits a sufficient statistic, then it is desirable to have a test statistic which is a function of the sufficient statistic. In the critical region [Tn < c], c is known as the cut-off point, and it is determined so that the size of the test is α, that is, sup_{θ∈Θ0} Pθ[Tn < c] = α. Thus, to determine the cut-off point we need to know the distribution of the test statistic under the null setup, which is usually referred to as the null distribution of the test statistic. It may not always be possible to find the null distribution for a finite sample size n. However, in most of the cases it is possible to obtain a large sample null distribution. We elaborate on this in Chaps. 5 and 6. Testing at a fixed level α as described above is one of the two standard approaches in any testing procedure. The other approach is based on the concept of a p-value. Reporting "reject H0" or "do not reject H0" is not very informative. Instead, it is better to know, for every α, whether the test rejects the null hypothesis at that level. Generally, if the test rejects H0 at level α, it will also reject at any level α′ > α. Hence, there is a smallest α at which the test rejects, and this number is known as the p-value. It is also known as the significance probability or observed level of significance. It is defined below.

 Definition 1.2.9

p-value: Suppose for every α ∈ (0, 1) we have a size α test with rejection region Cα and T(X) is the corresponding test statistic. Then the p-value is defined as

p-value = inf{α | T(X) ∈ Cα}. Thus, the p-value is the smallest level at which we can reject H0. We illustrate the evaluation of the p-value when the critical region is of the type [T(X) > c]. Suppose T(x) is the observed value of the test statistic corresponding to the given data. Then the p-value is PH0[T(X) > T(x)]. Thus, it is the probability of observing under H0 a sample outcome at least as extreme as the one observed. Informally, the p-value is a measure of the evidence against H0. The smaller the p-value, the more extreme the outcome and the stronger the evidence against H0. If the p-value is large, H0 is not rejected. However, a large p-value is not strong evidence in favor of H0, because a large p-value can occur for two reasons: (i) H0 is true or (ii) H0 is false but the test has low power. The approach based on the p-value is a good practice, as in this approach we determine not only whether the hypothesis is accepted or rejected at the given significance level, but also the smallest significance level at which the hypothesis would be rejected for the given observation. The p-value gives an idea of how strongly the data contradict the hypothesis. It also enables the experimenter or the researcher to reach a verdict based on the significance level of his or her own choice. For example, a p-value equal to 0.07 is roughly interpreted as follows: if H0 is true, then in about 7 out of 100 repetitions of the experiment we would observe data at least as extreme as the given data and hence commit the error of rejecting H0. The experimenter or the researcher can decide whether this much error is acceptable and can take the decision accordingly. Similarly, a p-value of 0.03 may be large or small depending on the experiment under study. In most software, p-values are reported and the decision about the acceptance or rejection of the null hypothesis is left to the experimenter.
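As a small illustration (this snippet is ours and is not part of the original text), the p-value for a critical region of the type [T(X) > c] can be computed in R once the null distribution of the test statistic is known. Here we use hypothetical data for a large sample test of a proportion, where the test statistic is approximately N(0, 1) under H0.

x=58; n=100; p0=0.5 ### hypothetical data: x successes in n trials, testing H0: p = 0.5 against H1: p > 0.5
phat=x/n ### sample proportion
Tn=(phat-p0)/sqrt(p0*(1-p0)/n) ### large sample test statistic, approximately N(0,1) under H0
pval=1-pnorm(Tn); pval ### p-value = P_H0[T(X) > observed value], about 0.055 here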


In all these results and procedures, the sample size n is assumed to be a fixed finite number. It has been noticed that for a very few inference problems there exists an exact, optimal solution for finite n. In some cases for finite n, optimality theory may not exist or may not give satisfactory results due to intractability of a problem. Asymptotic optimality theory resolves these issues in many cases. There is a lot of literature related to asymptotic inference theory. For example, we mention a few books here such as Casella and Berger [4], DasGupta [8], Ferguson [9], Kale and Muralidharan [5], Lehmann and Casella [6], Lehmann [10], Lehmann and Romano [7], Rao [11], Rohatgi and Saleh [2], Shao [3], Silvey [12] and van der Vaart [13]. In the present book in Chaps. 2 to 4, we discuss large sample optimality properties of the estimators. Chapters 5 and 6 are devoted to the discussion on the test procedures when sample size is large. In the next section we discuss some basic concepts from probability theory which form a foundation of the asymptotic statistical inference and list some results which are frequently used in the proofs of the theorems and in the solutions of the problems in the present book. For details, one may refer to Athreya and Lahiri [14], Bhat [15], Gut [16] and Loeve [17].

1.3 Basics of Asymptotic Inference

Large sample optimality properties of an estimator are defined in terms of its limiting behavior as the sample size increases and hence are based on the various modes of convergence of a sequence of random variables. Thus, the principal probability tool in asymptotic investigation is the convergence of a sequence of random variables. As the sample size increases, we study the limiting behavior of a sequence {Tn, n ≥ 1} of estimators of θ and examine how close it is to θ, in some sense to be defined appropriately. Suppose {Tn, n ≥ 1} is a sequence of estimators of θ, that is, for every n ≥ 1, Tn is a measurable function of the sample observations with range space the parameter space Θ, which we assume to be the real line to begin with. As a consequence, for each realization of the sample, Tn is a real number for each n ≥ 1 and hence the sequence {Tn, n ≥ 1} is equivalent to a sequence {an, n ≥ 1} of real numbers. Thus, all techniques of convergence of a sequence of real numbers can be used to study the convergence of a sequence of random variables. However, a sequence {Tn, n ≥ 1} of random variables is equivalent to a collection of sequences of real numbers. This collection is finite, countable or uncountable depending on whether the underlying sample space Ω is finite, countable or uncountable. Thus, to discuss convergence of a sequence of random variables, one has to deal with convergence of a collection of sequences of real numbers. In the various modes of convergence, a sequence of random variables is reduced to a collection of sequences of real numbers in some suitable way. The different ways lead to different types of convergence, such as point-wise convergence, almost sure convergence, convergence in probability, convergence in law or convergence in distribution and convergence in r-th mean. We define these modes of convergence below for a sequence of random


variables and use these to study the optimality properties of the estimator for a large sample size. Suppose a sequence {Xn, n ≥ 1} of random variables and X are defined on the same probability space (Ω, A, P).

 Definition 1.3.1

Almost Sure Convergence: Suppose N ∈ A is such that P(N) = 0. Then {Xn, n ≥ 1} is said to converge almost surely to a random variable X, denoted by Xn →a.s. X, if Xn(ω) → X(ω) ∀ ω ∈ N^c. The set N is known as a P-null set.

 Definition 1.3.2

Convergence in Probability: A sequence {Xn, n ≥ 1} is said to converge in probability to a random variable X, denoted by Xn →P X, if ∀ ε > 0, P{ω : |Xn(ω) − X(ω)| < ε} = P[|Xn − X| < ε] → 1 as n → ∞.

 Definition 1.3.3

Convergence in Law: A sequence {Xn, n ≥ 1} is said to converge in law to a random variable X, denoted by Xn →L X, if Fn(x) = P[Xn ≤ x] → P[X ≤ x] = F(x), ∀ x ∈ C_F(x) as n → ∞,

where C_F(x) is the set of points of continuity of the distribution function F of X. In this mode of convergence, it is not necessary that the sequence of random variables {Xn, n ≥ 1} and X are defined on the same probability space (Ω, A, P).

 Definition 1.3.4

Convergence in r-th Mean: A sequence {Xn, n ≥ 1} is said to converge in r-th mean to a random variable X, denoted by Xn →r X, if for any r ≥ 1, E(|Xn − X|^r) → 0 as n → ∞,

provided E(|Xn − X|^r) is defined. If r = 2, then the convergence is referred to as convergence in quadratic mean and is denoted by Xn →q.m. X. In the asymptotic inference setup, Xn = Tn, where Tn is an estimator whose properties are to be investigated, and X = θ, θ being the parameter under study. Thus, in all these modes of convergence we judge the closeness or proximity of Tn to θ for large values of n. In general, the limit of a sequence of random variables is a random variable. But when we are interested in the limiting behavior of a sequence {Tn, n ≥ 1} of estimators of θ, the limit random variable is degenerate at θ, that is, it is a constant. In asymptotic inference, all the modes of convergence such as almost sure convergence, convergence in probability, convergence in law and convergence in r-th mean are heavily used. Many desirable properties of the estimators are defined in terms of




these modes of convergence. For example, if Tn →Pθ θ, ∀ θ ∈ Θ, then Tn is said to be weakly consistent for θ. If Tn →a.s. θ, ∀ θ ∈ Θ, then Tn is said to be strongly consistent for θ. There are a number of implications among the various modes of convergence. We state these results below. We also list some results, lemmas and theorems from probability theory, which help to verify the consistency of an estimator and to find the asymptotic distribution of the estimator with suitable normalization.

 Result 1.3.1 Almost sure convergence implies convergence in probability, but in general the converse is not true. It is true if the sequence {X n , n ≥ 1} is a monotone sequence of random variables (Gut [16], p. 213).

 Result 1.3.2 Convergence in r -th mean implies convergence in probability, but the converse is not true.

 Result 1.3.3 Convergence in probability implies convergence in law. Convergence in probability and convergence in law are equivalent if the limit random variable is degenerate.

 Result 1.3.4 A limit random variable in convergence in probability and in almost sure convergence is almost surely unique, that is, if Xn →P X and Xn →P Y, then X and Y are equivalent random variables, that is, X = Y a.s. Similarly, if Xn →a.s. X and Xn →a.s. Y, then X = Y a.s.

 Result 1.3.5 If Xn →L X and Xn →L Y, then X and Y are identically distributed random variables, that is, FX(x) = FY(x) ∀ x ∈ R.

 Result 1.3.6 Almost sure convergence and convergence in probability are closed under all arithmetic operations as stated below. Suppose {Xn, n ≥ 1}, X, {Yn, n ≥ 1} and Y are defined on the same probability space (Ω, A, P). If Xn →a.s. X and Yn →a.s. Y, then
(i) Xn ± Yn →a.s. X ± Y,
(ii) Xn Yn →a.s. XY,
(iii) Xn/Yn →a.s. X/Y, provided Xn/Yn and X/Y are defined.
The same is true for convergence in probability.

 Result 1.3.7 If Xn − Yn →P 0 and if Xn →P X, then Yn →P X.


 Result 1.3.8 If Xn − Yn →P 0 and if Xn →L X, then Yn →L X.

 Definition 1.3.5

A sequence of random variables {Xn, n ≥ 1} is said to be bounded in probability if for any ε > 0, there exists a constant K and an integer n0 such that P[|Xn| ≤ K] ≥ 1 − ε ∀ n ≥ n0.

It can be shown that a real random variable is always bounded in probability.

 Result 1.3.9 If Xn →P X, where X is a real random variable, then the sequence {Xn, n ≥ 1} is bounded in probability.

 Result 1.3.10 If Xn →L X, where X is a real random variable, then the sequence {Xn, n ≥ 1} is bounded in probability.

 Result 1.3.11 If {Xn, n ≥ 1} is bounded in probability and if Yn →P 0, then Xn Yn →P 0.

 Result 1.3.12 Slutsky's Theorem: Suppose {Xn, n ≥ 1} and {Yn, n ≥ 1} are sequences of random variables defined on the same probability space (Ω, A, P). If Xn →L X and Yn →L C (or Yn →P C), where C is a constant, then
(i) Xn Yn →P 0, if C = 0,
(ii) Xn ± Yn →L X ± C,
(iii) Xn Yn →L CX,
(iv) Xn/Yn →L X/C, provided Xn/Yn and X/C are defined.

 Result 1.3.13 Suppose g is a continuous function. Then
(i) Xn →a.s. X ⇒ g(Xn) →a.s. g(X),
(ii) Xn →P X ⇒ g(Xn) →P g(X),
(iii) Xn →L X ⇒ g(Xn) →L g(X).
The third result is known as the continuous mapping theorem.

 Result 1.3.14 Borel-Cantelli Lemma: Suppose {An, n ≥ 1} is a sequence of events defined on (Ω, A, P). If Σ_{n=1}^{∞} P(An) < ∞, then P(lim sup An) = 0.


 Result 1.3.15 Khintchine's Weak Law of Large Numbers (WLLN): Suppose {Xn, n ≥ 1} is a sequence of independent and identically distributed random variables with finite mean μ. Then Sn/n = X̄n →P μ.

 Result 1.3.16 Kolmogorov's Strong Law of Large Numbers (SLLN): Suppose {Xn, n ≥ 1} is a sequence of independent and identically distributed random variables with finite mean μ. Then Sn/n = X̄n →a.s. μ.

 Result 1.3.17

Lindeberg-Levy CLT: Suppose {Xn, n ≥ 1} is a sequence of independent and identically distributed random variables with mean μ and positive, finite variance σ². Then

Yn = (Σ_{i=1}^{n} Xi − nμ)/(√n σ) = (Sn − nμ)/(√n σ) = √n(X̄n − μ)/σ →L Z ∼ N(0, 1).
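The following small simulation (ours, not from the original text) illustrates the WLLN and the Lindeberg-Levy CLT stated above for exponential observations with mean θ = 2 (the parameter value and sample sizes are chosen only for illustration): the sample mean settles near θ as n grows, and the standardized sample mean behaves like a N(0, 1) variable.

set.seed(1)
th=2 ### true mean of the exponential distribution, its standard deviation is also th
for(n in c(10,100,1000,10000))
{
x=rexp(n,rate=1/th) ### random sample of size n with mean th
cat("n =",n," sample mean =",round(mean(x),3),"\n") ### WLLN: sample mean approaches th
}
nsim=5000; n=100
z=replicate(nsim,sqrt(n)*(mean(rexp(n,rate=1/th))-th)/th) ### standardized sample means
round(c(mean(z),var(z)),3) ### close to 0 and 1
hist(z,freq=FALSE) ### histogram resembles the standard normal density
lines(seq(-4,4,.1),dnorm(seq(-4,4,.1)))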

From the above results we note that almost sure convergence implies convergence in probability, which further implies convergence in law. Thus, if Tn →Pθ θ, then Tn →L θ, but then the limiting distribution is degenerate and hence is not informative. We study the convergence in distribution of a suitably normalized Tn to get a limiting non-degenerate distribution. Such a limiting non-degenerate distribution is useful to find a large sample interval estimator for θ and for testing hypotheses about θ. The Lindeberg-Levy CLT, Slutsky's theorem, Khintchine's WLLN and Kolmogorov's SLLN are heavily used to find the asymptotic non-degenerate distribution of an estimator and the asymptotic null distribution of a test statistic. Some of the most useful tools in probability theory and inference are moment inequalities. We state below some of these, which are needed in the proofs of some theorems in Chaps. 2 to 6.

 Inequality 1.3.1

If E(|X |m ) < ∞, then E(|X |r ) < ∞, for 0 < r ≤ m . Thus, if a moment of a certain order is finite, then all the moments of lower order are also finite.

 Inequality 1.3.2

Schwarz Inequality: E(|XY|) ≤ √(E(|X|²) E(|Y|²)).

 Inequality 1.3.3

Jensen’s Inequality: If f (·) is a convex function and if E(X ) is finite, then f (E(X )) ≤ E( f (X )). If f (·) is a concave function, then f (E(X )) ≥ E( f (X )).


 Inequality 1.3.4 Basic Inequality: Suppose X is an arbitrary random variable and g(·) is a non-negative Borel function on R. If g(·) is even and non-decreasing on [0, ∞), then ∀ a > 0,

(E(g(X)) − g(a))/M ≤ P[|X| ≥ a] ≤ E(g(X))/g(a),

where M denotes the almost sure supremum of g(X ).

 Inequality 1.3.5

Chebyshev’s Inequality: P(|X | ≥ a) ≤ E(X 2 )/a 2 .

 Inequality 1.3.6

Markov Inequality: P(|X| ≥ a) ≤ E(|X|^r)/a^r, r > 0. We now state the inverse function theorem from calculus. It is heavily used in the proofs of theorems in Chaps. 3 and 4, to examine whether the inverse of some parametric function exists and has some desirable properties. Theorem 1.3.1 Inverse Function Theorem: Suppose D denotes the class of totally differentiable functions, that is, the class of functions whose components have continuous partial derivatives. Suppose f = (f1, f2, . . . , fn) ∈ D is defined on an open set S in R^n and suppose T = f(S) is the range space. If the Jacobian J_f(a) ≠ 0 for some a ∈ S, then there are two open sets M ⊂ S and N ⊂ T and a uniquely determined function g such that (i) a ∈ M and f(a) ∈ N, (ii) N = f(M), (iii) f is one to one on M, (iv) g is defined on N, g(N) = M and g(f(x)) = x ∀ x ∈ M, and (v) g ∈ D on N.
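As a simple illustration (this example is ours, not from the original text), take f(x1, x2) = (x1, x1² + x2) defined on S = R². Each component has continuous partial derivatives, the Jacobian is J_f(x) = 1 ≠ 0 for every x ∈ S, and the uniquely determined inverse g(y1, y2) = (y1, y2 − y1²) satisfies g(f(x)) = x and is again totally differentiable, so all the conclusions of the theorem hold with M = S and N = f(S) = R².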

Inverse function theorem basically states that if the jacobian is non-zero then the unique inverse exists. In addition, if the given function is totally differentiable then the inverse function is also totally differentiable. All the modes of convergence and related results and theorems listed above are heavily used in asymptotic statistical inference to establish the asymptotic optimality properties of the estimators. A major result in asymptotic inference theory is that for some smooth probability models, maximum likelihood estimators are asymptotically optimal, in the sense that these are consistent and asymptotically normal with suitable normalization. Moreover, the variance of a maximum likelihood estimator asymptotically attains the Cramér-Rao lower bound. Thus, asymptotic theory justifies the use of the method maximum likelihood estimation in certain situations and hence it is the most frequently used method. Wald’s test procedure, a likelihood ratio test procedure and a score test procedure are the three major approaches of constructing tests of significance for parameters in statistical models. The asymptotic null distributions of these test statistics are heavily based on the results related to maximum likelihood estimation. We discuss all these results in detail in the following chapters.


An important feature of asymptotic inference is that it is non-parametric, in the sense that whatever may be the distribution of X , if its mean E(X ) = g(θ), say, is finite, then a sample mean X n based on a random sample from the distribution of X , converges almost surely and in probability to g(θ). In addition if the variance is positive and finite, then by the central limit theorem one can obtain the large sample distribution of a suitably normalized estimator of θ. All such limit theorems provide distribution-free approximations for statistical quantities such as significance levels, critical values, power, confidence coefficients, and so on. However, the accuracy of these approximations is not distribution-free, it very much depends both on the sample size, on the underlying distribution as well as on the values of parameters. These are some limitations of the asymptotic inference theory. Although asymptotic inference is both practically useful and of theoretical importance, it basically gives approximations. It is difficult to assess these approximations theoretically but can be judged by simulation. Thus, one of the ways to judge the approximation errors is to supplement the theoretical results by a simulation work. This is a crucial aspect of asymptotic inference theory. The novelty of this book is use of R software (see R [18]) to illustrate such an important feature. The last section of every chapter is devoted to the application of R software to evaluate the performance of estimators and test procedures by simulation, to obtain solutions of the likelihood equations, to carry out the likelihood ratio test procedures, goodness of fit test procedures and tests for contingency tables. Moreover, it is also helpful to clarify the concepts of consistency and asymptotic distributions of the estimators. Some readers may be familiar with R software as it has been introduced in the curriculum of many under-graduate and post-graduate statistics programs. In the following section we give a brief introduction to R, which will be useful to beginners. We have also tried to make the codes given in the last sections of Chaps. 2 to 6 to be self explanatory.

1.4 Introduction to R Software and Language

In the statistical analysis phase, one needs good statistical software to carry out a variety of computations and to draw different types of graphs. There are a number of software packages available for such computation, such as Excel, Minitab, Matlab and SAS. In the last two decades, the R software has been strongly advocated, and a large proportion of the world's leading statisticians use it for statistical analysis. It is a high-level language and an environment for data analysis and graphics, created by Ross Ihaka and Robert Gentleman in 1996. It is both a software and a programming language, considered as a dialect of the S language developed by AT & T Bell Laboratories. The current R software is the result of a collaborative effort with contributions from all over the world. It has become very popular in academics and also in the corporate world for a variety of reasons, such as its good computing performance, excellent built-in help system, flexibility in the graphical environment, its vast coverage, availability of new, cutting edge applications in many fields and scripting and interfacing facilities. The most important advantage is that in spite of being the finest integrated software, it is freely


available software from the site called CRAN (Comprehensive R Archive Network) with address http://cran.r-project.org/. From this site one needs to "Download and Install R" by running the appropriate pre-compiled binary distribution. When R is installed properly, you will see the R icon on your desktop/laptop. To start R, one has to click on the R icon. The data analysis in R proceeds as an interactive dialogue with the interpreter. As soon as we type a command at the prompt (>) and press the enter key, the interpreter responds by executing the command. The session is ended by typing q(). The latest version of R is 4.0.2, released on June 22, 2020. With this non-statistical part of the introduction to R, we now proceed to discuss how it is used for statistical analysis. The discussion is not exhaustive; we restrict it to the functions or commands which are repeatedly used in this book. Like any other programming language, R contains data structures. Vectors are the basic data structures in R. The standard arithmetic functions and operators apply to vectors on an element-wise basis, with the usual hierarchy. Below we state some functions which we need to write R codes for the concepts discussed in this book. The most useful R command for entering small data sets is the c ("combine") function. This function combines or concatenates terms together. In the following code, we specify some such basic functions with their output. One can use any variable names, but care is to be taken as R is case sensitive.

x=c(10, 23, 35, 49, 52, 67) ### c function to construct a vector with given elements
x ### displays x, print(x) also displays the object x
length(x) ## specifies a number of elements in x
y=1:5; y ### constructs a vector with consecutive elements and prints it, two commands can be given on the same line with separator ";"
u=seq(10,25,5); u ## sequence function to create a vector with first element 10, last element 25 and with increment 5
v=c(rep(1,3),rep(2,2),rep(3,5)); v ## rep function to create a vector where 1 is repeated thrice, 2 twice and 3 five times
m=matrix(c(10, 23, 35, 49, 52, 67),nrow=2,ncol=3); m ### matrix with 2 rows and 3 columns, with first two elements forming first column and so on
t(m) ### transpose of matrix m
m1=matrix(c(10, 23, 35, 49, 52, 67),nrow=2,ncol=3,byrow=T); m1 ### with additional argument byrow=T, we get matrix with 2 rows and 3 columns, with first three elements forming first row and next three forming second row

#### Output
> x=c(10, 23, 35, 49, 52, 67)
> x
[1] 10 23 35 49 52 67
> length(x)
[1] 6
> y=1:5; y


[1] 1 2 3 4 5
> u=seq(10,25,5); u
[1] 10 15 20 25
> v=c(rep(1,3),rep(2,2),rep(3,5)); v
[1] 1 1 1 2 2 3 3 3 3 3
> m=matrix(c(10, 23, 35, 49, 52, 67),nrow=2,ncol=3); m
     [,1] [,2] [,3]
[1,]   10   35   52
[2,]   23   49   67
> t(m)
     [,1] [,2]
[1,]   10   23
[2,]   35   49
[3,]   52   67
> m1=matrix(c(10, 23, 35, 49, 52, 67),nrow=2,ncol=3,byrow=T); m1
     [,1] [,2] [,3]
[1,]   10   23   35
[2,]   49   52   67

For many probability distributions, to find the values of the probability law or the distribution function at specified values, or to draw random samples from these distributions, R has an excellent facility, specified in the following four types of functions. The d function returns the probability law of the distribution, whereas the p function gives the distribution function of the distribution. The q function gives the quantiles, and the r function returns random samples from a distribution. Each family has a name and some parameters. The function name is found by combining either d, p, q or r with the name for the family. The parameter names vary from family to family but are consistent within a family. These functions are illustrated for the uniform U(2, 4) distribution in the following.

dunif(c(2.5,3.3,3.9),2,4) ### probability density function at 2.5,3.3,3.9
punif(c(2.5,3.3,3.9),2,4) ### distribution function at 2.5,3.3,3.9
qunif(c(.25,.5,.75),2,4) ### first, second and third quartiles
r=runif(5,2,4) ### random sample of size 5, stored in object r
round(r,2) ### values in r rounded to second decimal point

### Output
> dunif(c(2.5,3.3,3.9),2,4)
[1] 0.5 0.5 0.5
> punif(c(2.5,3.3,3.9),2,4)
[1] 0.25 0.65 0.95
> qunif(c(.25,.5,.75),2,4)


[1] 2.5 3.0 3.5
> r=runif(5,2,4)
> round(r,2)
[1] 2.62 3.56 2.80 2.45 2.30

We use the functions rbinom and rnorm to draw random samples from binomial and normal distributions. Thus, we need to change the family name and add appropriate parameters. We can get the names for all probability distributions by following the path on the R console as help → manuals (in pdf) → An Introduction to R → Probability distributions. The function round(r,2) prints the values of r, rounded to the second decimal point. It is to be noted that rounding is mainly for printing purposes; the original unrounded values of r are stored in the object r. There are some useful built-in functions. We illustrate commonly used functions with a data set stored in the variable x.

x=rnorm(25,3,2) ### random sample of size n=25 from normal distribution with mean 3 and standard deviation 2
mean(x); median(x); max(x); min(x); sum(x); cumsum(x) ### cumulative sum
var(x) ### divisor is (n-1) and not n
quantile(x,c(.25,.5,.75)) ### three quartiles
summary(x) ### gives minimum, maximum, three quartiles and mean
shapiro.test(x) ### Shapiro-Wilk test for normality, gives value of statistic and p-value

### Partial output
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 -1.387   2.250   3.426   3.356   4.360   7.978
> shapiro.test(x)
W = 0.97969, p-value = 0.8789

It is to be noted that we have drawn a random sample from a normal distribution, and from the p-value of the Shapiro-Wilk test for normality, normality is accepted, as expected. We can carry out a number of test procedures on similar lines; the manual from the help menu lists some of these. Apart from the many built-in functions, one can write a suitable function as required in a specific situation. An illustration is given below after discussing commands to draw various types of plots.
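For instance (this snippet is ours and is not part of the original text), a one-sample t test about the mean can be carried out on the same simulated data in the same way:

t.test(x, mu=3) ### tests H0: mean = 3 against the two-sided alternative; reports statistic, p-value and confidence interval
t.test(x, mu=3, alternative="greater") ### one-sided alternative H1: mean > 3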


with given intercept and slope, points() to add points at appropriate places, etc. These functions take extra arguments that control the graphic. The graphs are produced with respect to graphical parameters, which are defined by default and can be modified with the function "par". If we type ?par on the R console, we get the description of a number of arguments for graphical functions, as documented in R. We explain one among these which is frequently used in the book. It is par(mfrow=c(2,2)) or par(mfcol=c(2,2)). This command divides the graphical window invisibly into 2 rows and 2 columns to accommodate 4 graphs. A function legend() is usually added in plots to specify a list of symbols or colors used in the graphs. These features will be clear from the variety of graphs drawn in the last section of the subsequent chapters. We now illustrate how to draw a histogram and a boxplot, and how to impose additional curves on these plots using the lines function. We use the rnorm(n,th,si) function to generate a random sample of size n from a normal distribution with mean θ and standard deviation σ. It is to be noted that the third argument of the function rnorm(n,th,si) is the value of the standard deviation and not the variance. We draw a histogram of n = 120 generated observations with θ = 1 and σ = 2, using the hist function, with relative frequency on the y-axis. It is achieved by the argument freq=FALSE of the hist function. On this plot, using the lines function, we impose the curve of the probability density function of the N(1, 2²) distribution. We use the dnorm(r,th,si) function to obtain the values of the ordinates at r running from θ − 3σ to θ + 3σ, as the probability that an observation lies in this interval is 0.9973. We use the function seq to generate the sequence of r values. The function boxplot draws the boxplot of the 120 generated observations. We adopt the same steps for a random sample generated from the chi-square distribution with 2 degrees of freedom to draw these plots. The normal distribution is a symmetric distribution, while the chi-square distribution is an asymmetric distribution. The four plots drawn in one window using the par(mfrow=c(2,2)) function in Fig. 1.2 display these features.

n=120 ### sample size
th = 1; si=2 ### mean 1 and standard deviation 2
x=rnorm(n,th,si) ### sample of size n from N(1,2^2) distribution
r = seq(th-3*si,th+3*si,.2) ### sequence of points on x axis at which to find ordinates
y = dnorm(r,th,si) ### ordinates of N(1,2^2) distribution at r
df=2 ### parameter in terms of degrees of freedom for chi-square distribution
u=rchisq(n,df) ### sample of size n from chi-square distribution with 2 df
v = seq(0,12,.3) ### sequence of points on x axis at which to find ordinates
w = dchisq(v,df) ### ordinates of chi-square distribution with 2 df
par(mfrow= c(2,2)) ### divides the graphical window in four panels
hist(x,freq=FALSE,main="Histogram",xlab = "X", col="light blue")
lines(r,y,"o",pch=20,col="dark blue")
boxplot(x,main="Box Plot")
hist(u,freq=FALSE,main="Histogram",xlab = "X", col="light blue")
lines(v,w,"o",pch=20,col="dark blue")
boxplot(u,main="Box Plot")


[Fig. 1.2 contains four panels: a histogram of the simulated N(1, 2²) sample with the superimposed normal density curve, the corresponding box plot, a histogram of the simulated chi-square (2 df) sample with the superimposed chi-square density curve, and the corresponding box plot. The histograms have X on the horizontal axis and Density on the vertical axis.]

Fig. 1.2 Histogram and box plot

From the graph, we note that the curve of probability density of normal distribution is a close approximation to the histogram with relative frequencies. Box plot indicates the symmetry around the median 1 and range of the simulated values of X is approximately (−4, 7). The curve of probability density is a close approximation to the histogram with relative frequencies for the chi-square distribution also. Further, the asymmetry of the chi-square distribution is reflected in the box plot. If we run the same code, we may not get exactly the same graphs, as the sample generated will be different. If we want to have the same sample to be generated each time we run the code, we have to fix the seed. It can be done by the function set.seed(2), 2 is an arbitrary number and it can be replaced by any other number. We explain an important role of set.seed function in Sect. 2.7. So far we discussed how to use built-in functions of R. In the following code we illustrate how to write our own functions, with the help of the R code used to draw Fig. 1.1. It includes a function written to find probability mass function of binomial B(5, p) distribution for various values of p, plot function, points function and par(mfrow=c(3,3)) function.


g=function(p)
{
x=0:5
g=dbinom(x,5,p)
return(g)
}
par(mfrow=c(3,3))
p=seq(.1,.9,.1)
pname=paste("p =",p,sep=" ")
for(i in 1:length(p))
{
x=0:5
plot(x,g(p[i]),type="h", xlab="x", ylab = "Probability",main= pname[i], ylim=c(0,0.7),col="blue",lwd=2)
points(x,g(p[i]),pch=16,col="dark blue")
}

Observe the use of the seq function to create a vector of p-values from 0.1 to 0.9 with an increment of 0.1. The command pname in the above code is used to assign a title giving the value of p for each of the nine graphs. Within a loop, the plot function and points function are used to obtain a panel of nine graphs of the probability mass function of the binomial distribution. Note the arguments in the plot function: type="h" produces vertical lines with height proportional to the probability at that point, xlab="x" and ylab="Probability" assign labels on the x-axis and y-axis, main= assigns a title to the graph, ylim=c(0,0.7) specifies the lower and upper limit on the y-axis, col="blue" specifies the color of the vertical lines and lwd=2 determines the width of the line. In the points function, pch=16 decides the point characteristic, that is, the type of points; there are 25 such types, specified by the numbers 1-25. There are a number of excellent books on introduction to statistics using R, such as Crawley [19], Dalgaard [20], Purohit et al. [21] and Verzani [22]. There is a tremendous amount of information about R on the web at http://cran.r-project.org/ with a variety of R manuals. Following are some links useful for beginners to learn R software.

1. https://www.datacamp.com/courses/free-introduction-to-r
2. http://www.listendata.com/p/r-programming-tutorials.html
3. http://www.r-tutor.com/r-introduction
4. https://www.r-bloggers.com/list-of-free-online-r-tutorials/
5. https://www.tutorialspoint.com/r/
6. https://www.codeschool.com/courses/try-r

As with any software or programming language, the best way to learn R is to use it for understanding the concepts and solving problems. We hope that this brief introduction will be useful to the reader in becoming comfortable with R and in following the code written in subsequent chapters.


In the next chapter we discuss the concept of consistency of an estimator in a real and a vector parameter setup, along with some methods to generate consistent estimators.

References
1. Rao, B. L. S. P. (1992). Identifiability in stochastic models: Characterization of probability distributions. Cambridge: Academic Press.
2. Rohatgi, V. K., & Saleh, A. K. Md. E. (2001). Introduction to probability and statistics. New York: Wiley.
3. Shao, J. (2003). Mathematical statistics (2nd ed.). New York: Springer.
4. Casella, G., & Berger, R. L. (2002). Statistical inference (2nd ed.). USA: Duxbury.
5. Kale, B. K., & Muralidharan, K. (2016). Parametric inference: An introduction. Delhi: Narosa.
6. Lehmann, E. L., & Casella, G. (1998). Theory of point estimation (2nd ed.). New York: Springer.
7. Lehmann, E. L., & Romano, J. P. (2005). Testing of statistical hypothesis (3rd ed.). New York: Springer.
8. DasGupta, A. (2008). Asymptotic theory of statistics and probability. New York: Springer.
9. Ferguson, T. S. (1996). A course in large sample theory. London: Chapman and Hall.
10. Lehmann, E. L. (1999). Elements of large sample theory. New York: Springer.
11. Rao, C. R. (1978). Linear statistical inference and its applications. New York: Wiley.
12. Silvey, S. D. (1975). Statistical inference. London: Chapman and Hall.
13. van der Vaart, A. (1998). Asymptotic statistics. Cambridge: Cambridge University Press.
14. Athreya, K. B., & Lahiri, S. N. (2006). Measure theory and probability theory. New York: Springer.
15. Bhat, B. R. (1999). Modern probability theory (3rd ed.). New Delhi: New Age International.
16. Gut, A. (2005). Probability: A graduate course. New York: Springer.
17. Loeve, M. (1978). Probability theory I (4th ed.). New York: Springer.
18. R Core Team. (2019). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
19. Crawley, M. J. (2007). The R book. London: Wiley.
20. Dalgaard, P. (2008). Introductory statistics with R (2nd ed.). New York: Springer.
21. Purohit, S. G., Gore, S. D., & Deshmukh, S. R. (2008). Statistics using R (2nd ed.). New Delhi: Narosa Publishing House.
22. Verzani, J. (2005). Using R for introductory statistics. New York: Chapman and Hall/CRC Press.

2 Consistency of an Estimator

Contents
2.1 Introduction
2.2 Consistency: Real Parameter Setup
2.3 Strong Consistency
2.4 Uniform Weak and Strong Consistency
2.5 Consistency: Vector Parameter Setup
2.6 Performance of a Consistent Estimator
2.7 Verification of Consistency Using R
2.8 Conceptual Exercises
2.9 Computational Exercises

Learning Objectives
After going through this chapter, the readers should be able
– to comprehend the concept of consistency of an estimator for a real and vector valued parameter
– to compare performance of consistent estimators based on different criteria such as mean squared error and coverage probability
– to verify consistency of an estimator using R

2.1 Introduction

As discussed in Chap. 1, in asymptotic inference theory, we study the limiting behavior of a sequence {Tn, n ≥ 1} of estimators of θ and examine how close it is to θ using various modes of convergence. The most frequently investigated large sample property of an estimator is weak consistency. Weak consistency of an estimator is defined in terms of convergence in probability. We examine how close the estimator is to the true parameter value in terms of probability of proximity. Weak consistency


is always referred to as consistency in the literature. In the next section, we define it for a real parameter and illustrate it by a variety of examples. We study some properties of consistent estimators, the most important being the invariance of consistency under continuous transformation. Strong consistency and uniform consistency of an estimator are discussed briefly in Sects. 2.3 and 2.4. In Sect. 2.5, we define consistency when the distribution of a random variable or a random vector is indexed by a vector parameter. It is defined in two ways, as marginal consistency and joint consistency, and the two approaches are shown to be equivalent. This result is heavily used in applications. Thus, to obtain a consistent estimator for a vector parameter, one can proceed marginally and use all the tools discussed in Sect. 2.2. From the examples in Sects. 2.2 and 2.5, we note that, for a given parameter, one can have an uncountable family of consistent estimators and hence one has to deal with the problem of selecting the best from the family. It is discussed in Sect. 2.6. Within a family of consistent estimators of θ, the performance of a consistent estimator is judged by the rate of convergence of the true coverage probability to 1, and of the MSE to 0 for a consistent estimator whose MSE exists; the faster the rate, the better the estimator. Section 2.7 is devoted to the verification of the consistency of an estimator by simulation. It is illustrated through some examples and R software.

2.2 Consistency: Real Parameter Setup

Suppose X is a random variable or a random vector defined on a probability space (Ω, A, Pθ), where the probability measure Pθ is indexed by a parameter θ ∈ Θ ⊂ R. Suppose X ≡ {X1, X2, . . . , Xn} is a random sample from the distribution of X and Tn ≡ Tn(X) is an estimator of θ. Weak consistency of Tn is defined below.

 Definition 2.2.1

Weakly Consistent Estimator: A sequence {Tn, n ≥ 1} of estimators of θ is said to be weakly consistent for θ if for each θ ∈ Θ, Tn →Pθ θ, that is, given ε > 0 and δ in (0, 1), there exists n0(ε, δ, θ) such that

Pθ[|Tn − θ| > ε] < δ ⇔ Pθ[|Tn − θ| < ε] ≥ 1 − δ, ∀ n ≥ n0(ε, δ, θ).

Hence onwards, a weakly consistent estimator will be simply referred to as a consistent estimator and, instead of saying that a sequence of estimators is consistent, we will say that an estimator Tn is consistent for θ. Pθ[|Tn − θ| < ε] is known as a coverage probability, as it is the probability of the event that the random interval (Tn − ε, Tn + ε) covers the true but unknown parameter θ. If it converges to 1 as n → ∞, ∀ θ ∈ Θ and ∀ ε > 0, then Tn is a consistent estimator of θ. In other words, Tn is consistent for θ if, with very high chance, for large n, Tn and θ are close to each other. It is to be noted that in the definition of consistency, the probability of the event [|Tn − θ| > ε] is obtained under the Pθ probability measure, that is, the two θ's involved in Pθ[|Tn − θ| < ε] must be the same. We elaborate on this issue after Example 2.2.2.
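The convergence of the coverage probability to 1 can also be examined empirically. The following sketch (ours, not part of the original text) estimates Pθ[|Tn − θ| < ε] by simulation for Tn = X̄n based on a N(θ, 1) sample, with θ, ε and the sample sizes chosen only for illustration; the estimated coverage increases towards 1 as n grows.

set.seed(10)
th=2; eps=0.1; nsim=2000 ### true parameter, epsilon and number of simulations
for(n in c(50,200,800,3200))
{
cover=replicate(nsim,abs(mean(rnorm(n,th,1))-th)<eps) ### indicator of the coverage event
cat("n =",n," estimated coverage probability =",mean(cover),"\n")
}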


An important feature to be noted is that there is a subtle difference between convergence in probability in probability theory and consistency in the inference setup, although consistency of an estimator is essentially convergence in probability. When one says that Xn →P X, one deals with one probability measure specified by P. In the definition of consistency, Tn is a consistent estimator of θ if Tn →Pθ θ, ∀ θ ∈ Θ as n → ∞. Thus, the definition of consistency deals with the entire family of probability measures indexed by θ. For each value of θ, the probability structure associated with the sequence {Tn, n ≥ 1} is different. The definition of consistency requires that for each possible value of θ, the probability structure is such that the sequence converges in probability to that value of θ. To emphasize such an important requirement, we use Pθ instead of P when we express consistency of Tn as an estimator of θ. Hence, we write Tn →Pθ θ and not Tn →P θ as n → ∞. We now present a variety of examples to elaborate on the concept of consistency of an estimator based on a random sample X = {X1, X2, . . . , Xn} from the distribution under study. In this and all the subsequent chapters, we denote the likelihood of θ given data X by Ln(θ|X) instead of Ln(θ|X = x), where x denotes an observed realization of X.

 Example 2.2.1

Suppose X = {X1, X2, . . . , Xn} is a random sample of size n from a uniform U(0, θ) distribution, θ ∈ Θ = (0, ∞). If X ∼ U(0, θ), then its probability density function fX(x, θ) is given by

fX(x, θ) = 1/θ, if 0 ≤ x ≤ θ, and fX(x, θ) = 0, otherwise.

Hence, the likelihood function Ln(θ|X) is given by Ln(θ|X) = 1/θ^n, if Xi ≤ θ, ∀ i = 1, 2, . . . , n ⇔ X(n) ≤ θ, and Ln(θ|X) = 0, if X(n) > θ. Observe that the likelihood function is not a continuous function of θ and hence is not a differentiable function of θ. Thus, the routine calculus theory is not applicable. However, it is strictly decreasing over the interval [X(n), ∞). Hence, it attains its maximum at the smallest possible value of θ. The smallest possible value of θ given the data is X(n). Hence, the maximum likelihood estimator θ̂n of θ is given by θ̂n = X(n). From the likelihood, it is clear that X(n) is a sufficient statistic for the family of uniform U(0, θ) distributions for θ > 0. We establish the consistency of θ̂n by showing that its coverage probability converges to 1. To find the coverage probability, we need the distribution function of X(n). If X ∼ U(0, θ) distribution, then it is easy to verify that the distribution function FX(n)(x, θ) of X(n) is given by

FX(n)(x, θ) = 0, if x < 0; (x/θ)^n, if 0 ≤ x < θ; and 1, if x ≥ θ.

For given ε > 0, the coverage probability is given by

Pθ[|X(n) − θ| < ε] = Pθ[θ − ε < X(n) < θ + ε] = Pθ[θ − ε < X(n) < θ] = 1, if ε ≥ θ,

as Pθ[0 < X(n) < θ] = 1 ∀ θ ∈ Θ. For ε < θ, we have

Pθ[|X(n) − θ| < ε] = Pθ[θ − ε < X(n) < θ] = 1 − FX(n)(θ − ε, θ) = 1 − ((θ − ε)/θ)^n = 1 − (1 − ε/θ)^n → 1 as n → ∞, since 0 < 1 − ε/θ < 1.

Hence, X(n) is a consistent estimator of θ. It is of interest to find the minimum sample size n0 for which the coverage probability is at least 1 − δ, where ε > 0 and δ ∈ (0, 1) are specified constants. Thus, for ε < θ,

Pθ[|X(n) − θ| < ε] ≥ 1 − δ ⇒ 1 − ((θ − ε)/θ)^n ≥ 1 − δ ⇒ ((θ − ε)/θ)^n ≤ δ ⇒ n ≥ log δ / log((θ − ε)/θ).
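For instance, with the illustrative values θ = 1, ε = 0.05 and δ = 0.05 (values chosen by us, not from the original text), this bound is easy to evaluate in R using the displayed formula:

theta=1; eps=0.05; delta=0.05 ### illustrative values of the parameter, epsilon and delta
n0=floor(log(delta)/log((theta-eps)/theta))+1; n0 ### minimum sample size, 59 here
1-((theta-eps)/theta)^n0 ### coverage probability at n0, just exceeds 1 - delta = 0.95
1-((theta-eps)/theta)^(n0-1) ### at n0 - 1 the coverage probability falls below 0.95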

Hence, the minimum sample size is n0 = ⌊log δ / log((θ − ε)/θ)⌋ + 1. If ε ≥ θ, the coverage probability is 1 and hence n0 = 1. We now examine whether X(1) is consistent for θ. Instead of finding the coverage probability Pθ[|X(1) − θ| < ε],

we show that X(1) →Pθ 0, ∀ θ ∈ Θ, and appeal to Result 1.3.4, which states that the limit random variable in convergence in probability is almost surely unique, to arrive at the conclusion that X(1) is not consistent for θ. Observe that the distribution function FX(1)(x, θ) of X(1) is given by

FX(1)(x, θ) = 1 − [1 − FX(x, θ)]^n = 0, if x < 0; 1 − (1 − x/θ)^n, if 0 ≤ x < θ; and 1, if x ≥ θ.

For given ε > 0,

Pθ[|X(1) − 0| < ε] = Pθ[−ε < X(1) < ε] = Pθ[0 < X(1) < ε] = 1, if ε ≥ θ.

For ε < θ, we have

Pθ[|X(1) − 0| < ε] = Pθ[0 < X(1) < ε] = FX(1)(ε, θ) − FX(1)(0, θ) = 1 − (1 − ε/θ)^n → 1, as 1 − ε/θ < 1.
2.2 Consistency: Real Parameter Setup

33



Thus, X(1) →Pθ 0, ∀ θ ∈ Θ. From Result 1.3.4, X(1) cannot converge in probability to θ. Thus, X(1) cannot be a consistent estimator of θ. However, X(n) + cX(1) →Pθ θ, as convergence in probability is closed under all arithmetic operations, and hence X(n) + cX(1) is also a consistent estimator of θ, where c is any real number such that the range space of X(n) + cX(1) is the same as the parameter space (0, ∞). If X ∼ U(0, θ), then E(X) = θ/2 < ∞; hence the moment estimator θ̃n of θ is given by θ̃n = 2X̄n. By Khintchine's WLLN, X̄n →Pθ E(X) = θ/2, ∀ θ ∈ Θ, and hence θ̃n = 2X̄n →Pθ θ, ∀ θ ∈ Θ, which proves that θ̃n is a consistent estimator of θ. 

In the following example, we examine the consistency of an estimator using different approaches.

 Example 2.2.2

Suppose {X1, X2, . . . , Xn} is a random sample of size n from a normal N(θ, 1) distribution, θ ∈ R. In this example, we illustrate various approaches to examine the consistency of the sample mean X̄n as an estimator of the population mean θ. We use the results that X̄n ∼ N(θ, 1/n) and

Z = √n(X̄n − θ) ∼ N(0, 1).

(i) The first approach is verification of consistency by the definition. For given ε > 0,

Pθ[|X̄n − θ| < ε] = Pθ[√n |X̄n − θ| < √n ε] = Φ(√n ε) − Φ(−√n ε) → 1, as n → ∞, ∀ θ ∈ R.

Thus, the coverage probability converges to 1 as n → ∞, ∀ θ ∈ Θ and ∀ ε > 0, hence the sample mean X̄n is a consistent estimator of θ.
(ii) Since X̄n ∼ N(θ, 1/n) distribution, E(X̄n − θ)² = Var(X̄n) = 1/n → 0, as n → ∞, ∀ θ ∈ R. Thus, X̄n converges in quadratic mean to θ and hence converges in probability to θ.
(iii) Suppose Fn(x), x ∈ R, denotes the distribution function of X̄n − θ. Then Fn(x) = Pθ[X̄n − θ ≤ x] = Pθ[√n(X̄n − θ) ≤ √n x] = Φ(√n x), x ∈ R. The limiting behavior of Fn(x) as n → ∞ is as follows:

Fn(x) → 0, if x < 0; 1/2, if x = 0; and 1, if x > 0.


Suppose F is the distribution function of a random variable which is degenerate at 0; it is given by

F(x) = 0, if x < 0, and F(x) = 1, if x ≥ 0.

It is to be noted that Fn(x) → F(x), ∀ x ∈ C_F(x) = R − {0}, where C_F(x) is the set of points of continuity of F. It implies that (X̄n − θ) →L 0, where the limit law is degenerate, and hence (X̄n − θ) →Pθ 0, for all θ ∈ R, which proves that X̄n is consistent for θ.
(iv) Observe that {X1, X2, . . . , Xn} are independent and identically distributed random variables with finite mean θ, hence by Khintchine's WLLN, X̄n →Pθ θ, for all θ ∈ R.
As stated in Result 1.3.4, the limit random variable in convergence in probability is almost surely unique; thus, if X̄n →Pθ θ, then X̄n cannot converge in probability to any other parametric function g(θ). Hence, X̄n cannot be consistent for any other parametric function g(θ). 
It is to be noted that in the above example, the first approach uses the definition based on coverage probability, and the second uses the result that convergence in r-th mean implies convergence in probability. In the third approach, we use the result that if the limit random variable is degenerate, then convergence in law implies convergence in probability. The last approach uses the well known Khintchine's WLLN.

 Remark 2.2.1

In the first approach of the above example, we have shown that the coverage probability Pθ[|X̄n − θ| < ε] converges to 1 as n → ∞. Here, θ is the true, but unknown, parameter value. Suppose we label Pθ[|X̄n − θ| < ε] as a probability of true coverage for each fixed ε. The probability Pθ[|X̄n − θ1| < ε] is then labeled as a probability of false coverage, as θ is the true parameter value and we compute the probability that X̄n is in an ε-neighborhood of θ1. Now consider

Pθ[|X̄n − θ1| < ε] = Pθ[√n(−ε + θ1 − θ) < √n(X̄n − θ) < √n(ε + θ1 − θ)] = Φ(√n(ε + θ1 − θ)) − Φ(√n(−ε + θ1 − θ)).

Suppose θ1 < θ; then for ε such that ε + θ1 − θ < 0, both the terms in the above expression converge to 0, while if θ1 > θ, for ε > 0, the first term converges to 1 and for ε such that −ε + θ1 − θ > 0, the second term also goes to 1, and hence the probability of false coverage Pθ[|X̄n − θ1| < ε] converges to 0. Thus, only for θ1 = θ does the probability converge to 1 for all ε > 0. In the definition of consistency of an estimator, it is expected that the probability of true coverage converges to 1.
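These true and false coverage probabilities can be evaluated exactly for the normal example. The following lines (ours, not from the original text; the parameter values are illustrative) compute Pθ[|X̄n − θ1| < ε] for the true value θ and for a false value θ1 as n increases; the true coverage rises to 1 while the false coverage drops to 0.

th=0; th1=0.3; eps=0.1 ### true value, a false value and epsilon (illustrative choices)
for(n in c(25,100,400,1600))
{
true.cov=pnorm(sqrt(n)*eps)-pnorm(-sqrt(n)*eps) ### P_theta[|Xbar_n - theta| < eps]
false.cov=pnorm(sqrt(n)*(eps+th1-th))-pnorm(sqrt(n)*(-eps+th1-th)) ### P_theta[|Xbar_n - theta1| < eps]
cat("n =",n," true coverage =",round(true.cov,4)," false coverage =",round(false.cov,4),"\n")
}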


If {X1, X2, . . . , Xn} is a random sample of size n from either a Bernoulli B(1, θ) or a Poisson Poi(θ) or a uniform U(0, 2θ) or an exponential distribution with mean θ, then the sample mean X̄n can be shown to be consistent for θ using the four approaches discussed in Example 2.2.2 and the central limit theorem for independent and identically distributed random variables with positive finite variance. The next example is again related to the consistency of θ when we have a random sample of size n from a normal N(θ, 1) distribution. However, the parameter space is not the entire real line, but a subset of the real line, which may not be open. We come across such a parameter space while deriving likelihood ratio test procedures for testing H0 : θ = θ0 against H1 : θ > θ0, or for testing H0 : θ ∈ [a, b] against H1 : θ ∉ [a, b]. This example illustrates the basic concepts of consistency very well.

 Example 2.2.3

Suppose X ≡ {X1, X2, . . . , Xn} is a random sample of size n from a normal N(θ, 1) distribution, θ ∈ Θ, when Θ is either (i) Θ = [0, ∞) or (ii) Θ = [a, b] or (iii) Θ = {a, b}, a < b ∈ R, or (iv) Θ = I, the set of all integers. We find the maximum likelihood estimator of θ in each case and examine whether it is consistent for θ. Corresponding to a random sample X from the normal N(θ, 1) distribution, the likelihood function of θ is given by

Ln(θ|X) = Π_{i=1}^{n} (2π)^{−1/2} exp{−(Xi − θ)²/2} = (2π)^{−n/2} exp{−(1/2) Σ_{i=1}^{n} (Xi − θ)²}.

The log likelihood function Q(θ) = log Ln(θ|X) and its first and second derivatives are given by

Q(θ) = c − (1/2) Σ_{i=1}^{n} (Xi − θ)², Q′(θ) = Σ_{i=1}^{n} (Xi − θ) = n(X̄n − θ) and Q″(θ) = −n,

where c is a constant free from θ. Thus, the solution of the likelihood equation Q′(θ) = 0 is given by θ = X̄n. The second derivative is negative for all θ.
(i) We first find the maximum likelihood estimator θ̂n of θ when Θ = [0, ∞). If X̄n ≥ 0, then it is an estimator and the likelihood is maximum at X̄n. As discussed in Sect. 1.1,

Pθ[X̄n < 0] = Pθ[√n(X̄n − θ) < −√n θ] = Φ(−√n θ) > 0, if θ > 0, and = 1/2, if θ = 0.

Thus, it is possible that X̄n < 0 and hence X̄n cannot be an estimator of θ. Observe that X̄n < 0 ≤ θ ⇒ X̄n − θ < 0 ⇒ Q′(θ) = n(X̄n − θ) < 0


which further implies that $Q(\theta)$ is a decreasing function of $\theta$. Thus, $Q(\theta)$ attains its maximum at the smallest possible value of $\theta$, which is 0. Thus, the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is given by
$$\hat{\theta}_n = \begin{cases} \bar{X}_n, & \text{if } \bar{X}_n \geq 0 \\ 0, & \text{if } \bar{X}_n < 0. \end{cases}$$
To verify the consistency, we proceed as follows. By the WLLN, $\bar{X}_n \xrightarrow{P_\theta} \theta$ for all $\theta \geq 0$. Now, for $\epsilon > 0$,
$$P_\theta[|\hat{\theta}_n - \bar{X}_n| < \epsilon] \geq P_\theta[\hat{\theta}_n = \bar{X}_n] = P_\theta[\bar{X}_n \geq 0] = 1 - \Phi(-\sqrt{n}\theta) \to 1 \ \text{ if } \theta > 0.$$
As a consequence, if $\theta > 0$, then $\hat{\theta}_n - \bar{X}_n \xrightarrow{P_\theta} 0$ and $\bar{X}_n \xrightarrow{P_\theta} \theta$ implies that $\hat{\theta}_n \xrightarrow{P_\theta} \theta$ when $\theta > 0$. Suppose $\theta = 0$. Then, using the fact that $\hat{\theta}_n \geq 0$, for $\epsilon > 0$, as $n \to \infty$,
$$P_0[|\hat{\theta}_n| > \epsilon] = P_0[\hat{\theta}_n > \epsilon] = P_0[\bar{X}_n > \epsilon] = 1 - \Phi(\sqrt{n}\epsilon) \to 0 \Rightarrow \hat{\theta}_n \xrightarrow{P_0} 0.$$
Thus, it is proved that $\hat{\theta}_n \xrightarrow{P_\theta} \theta$ for all $\theta \geq 0$ and hence $\hat{\theta}_n$ is a consistent estimator of $\theta$.
(ii) In this case, the parameter space $\Theta$ is $[a, b] \subset \mathbb{R}$. As in case (i), only if $\bar{X}_n \in [a, b]$ can it be labeled as an estimator. Note that in this case the likelihood attains its maximum at $\bar{X}_n$. However, for any $\theta \in [a, b]$, it is possible that $\bar{X}_n < a$ or $\bar{X}_n > b$, as shown below:
$$P_\theta[\bar{X}_n < a] = P_\theta[\sqrt{n}(\bar{X}_n - \theta) < \sqrt{n}(a - \theta)] = \begin{cases} \Phi(\sqrt{n}(a - \theta)) > 0, & \text{if } a < \theta \leq b \\ 1/2, & \text{if } \theta = a. \end{cases}$$
Along similar lines,
$$P_\theta[\bar{X}_n > b] = P_\theta[\sqrt{n}(\bar{X}_n - \theta) > \sqrt{n}(b - \theta)] = \begin{cases} 1 - \Phi(\sqrt{n}(b - \theta)) > 0, & \text{if } a \leq \theta < b \\ 1/2, & \text{if } \theta = b. \end{cases}$$
Thus, if $\bar{X}_n \notin [a, b]$, then according to the definition of an estimator, it cannot be an estimator of $\theta$. Suppose $\bar{X}_n < a \leq \theta$; then $\bar{X}_n - \theta < 0$, so $Q'(\theta) = n(\bar{X}_n - \theta) < 0$, which further implies that $Q(\theta)$ is a decreasing function of $\theta$ and thus attains its maximum at the smallest possible value of $\theta$, which is $a$. Similarly, $\bar{X}_n > b \geq \theta \Rightarrow \bar{X}_n - \theta > 0 \Rightarrow Q'(\theta) = n(\bar{X}_n - \theta) > 0,$


implying that $Q(\theta)$ is an increasing function of $\theta$ and thus attains its maximum at the largest possible value of $\theta$, which is $b$. Thus, the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is given by
$$\hat{\theta}_n = \begin{cases} a, & \text{if } \bar{X}_n < a \\ \bar{X}_n, & \text{if } \bar{X}_n \in [a, b] \\ b, & \text{if } \bar{X}_n > b. \end{cases}$$
To verify the consistency, we proceed on similar lines as in (i). By the WLLN, $\bar{X}_n \xrightarrow{P_\theta} \theta$ for all $\theta \in [a, b]$. Now for $\epsilon > 0$ and $\theta \in (a, b)$,
$$P_\theta[|\hat{\theta}_n - \bar{X}_n| < \epsilon] \geq P_\theta[\hat{\theta}_n = \bar{X}_n] = P_\theta[a \leq \bar{X}_n \leq b] = \Phi(\sqrt{n}(b - \theta)) - \Phi(\sqrt{n}(a - \theta)) \to 1.$$
Hence, $\hat{\theta}_n \xrightarrow{P_\theta} \theta$ for all $\theta \in (a, b)$. Now, to examine convergence in probability at the boundary points $a$ and $b$, consider for $\theta = a$,
$$P_a[|\hat{\theta}_n - a| > \epsilon] = P_a[\hat{\theta}_n - a > \epsilon] = \begin{cases} P_a[\hat{\theta}_n > a + \epsilon] = 0, & \text{if } \epsilon > b - a \\ P_a[\bar{X}_n > a + \epsilon] = 1 - \Phi(\sqrt{n}\epsilon) \to 0, & \text{if } \epsilon \leq b - a. \end{cases}$$
Thus, $\hat{\theta}_n \xrightarrow{P_a} a$. Further, for the boundary point $b$,
$$P_b[|\hat{\theta}_n - b| > \epsilon] = P_b[b - \hat{\theta}_n > \epsilon] = \begin{cases} P_b[\hat{\theta}_n < b - \epsilon] = 0, & \text{if } \epsilon > b - a \\ P_b[\bar{X}_n < b - \epsilon] = \Phi(-\sqrt{n}\epsilon) \to 0, & \text{if } \epsilon \leq b - a. \end{cases}$$
Hence, $\hat{\theta}_n \xrightarrow{P_b} b$. Thus, we have shown that $\hat{\theta}_n \xrightarrow{P_\theta} \theta$ for all $\theta \in [a, b]$ and hence $\hat{\theta}_n$ is a consistent estimator of $\theta$.
(iii) Suppose the parameter space is $\Theta = \{a, b\}$. Thus, it consists of only two points, where $a, b$ are any fixed real numbers and we assume that $b > a$. It is to be noted that in this case the likelihood is not even a continuous function of $\theta$, and hence to find the maximum likelihood estimator of $\theta$ we compare $L_n(a|X)$ with $L_n(b|X)$. Observe that, for $b > a$,
$$\frac{L_n(b|X)}{L_n(a|X)} = \exp\left\{-\frac{1}{2}\sum_{i=1}^{n}(X_i - b)^2 + \frac{1}{2}\sum_{i=1}^{n}(X_i - a)^2\right\} = \exp\left\{-\frac{1}{2}\left(-2(b - a)\sum_{i=1}^{n}X_i + n(b^2 - a^2)\right)\right\} = \exp\left\{n(b - a)\left(\bar{X}_n - \frac{a + b}{2}\right)\right\} \qquad (2.1)$$
$$\Rightarrow \ L_n(b|X) > L_n(a|X) \ \text{ if } \ \bar{X}_n > (a + b)/2$$


and $L_n(b|X) \leq L_n(a|X)$ if $\bar{X}_n \leq (a + b)/2$. Hence, the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is given by
$$\hat{\theta}_n = \begin{cases} b, & \text{if } \bar{X}_n > (a + b)/2 \\ a, & \text{if } \bar{X}_n \leq (a + b)/2. \end{cases}$$
To verify consistency of $\hat{\theta}_n$, we have to check whether $\hat{\theta}_n \xrightarrow{P_a} a$ and $\hat{\theta}_n \xrightarrow{P_b} b$. Observe that, for all $\epsilon > 0$,
$$P_a[|\hat{\theta}_n - a| < \epsilon] \geq P_a[\hat{\theta}_n = a] = P_a[\bar{X}_n \leq (a + b)/2] = \Phi(\sqrt{n}(b - a)/2) \to 1$$
as $n \to \infty$, since $(b - a) > 0$. On similar lines, for all $\epsilon > 0$,
$$P_b[|\hat{\theta}_n - b| < \epsilon] \geq P_b[\hat{\theta}_n = b] = P_b[\bar{X}_n > (a + b)/2] = 1 - \Phi(\sqrt{n}(a - b)/2) \to 1$$
as $n \to \infty$, since $(a - b) < 0$. Thus, $\hat{\theta}_n \xrightarrow{P_a} a$ and $\hat{\theta}_n \xrightarrow{P_b} b$, implying that $\hat{\theta}_n$ is a consistent estimator of $\theta$. In particular, if $a = 0$ and $b = 1$, the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is given by
$$\hat{\theta}_n = \begin{cases} 1, & \text{if } \bar{X}_n > 1/2 \\ 0, & \text{if } \bar{X}_n \leq 1/2. \end{cases}$$
It is to be noted that for $0 < \epsilon \leq 1$,
$$P_0[|\hat{\theta}_n - 0| < \epsilon] = P_1[|\hat{\theta}_n - 1| < \epsilon] = \Phi(\sqrt{n}/2).$$
For $n = 1$, this probability is 0.69 and for $n = 36$ it is almost 1. Thus, the coverage probability is close to 1 even for a small sample size. In Sect. 2.7, while verifying consistency by simulation, we discuss this feature in more detail; a small simulation sketch is also given after this example.
(iv) In this case, the parameter space is the set of integers. As in the above cases, the log likelihood is given by $Q(\theta) = c - \frac{1}{2}\sum_{i=1}^{n}(X_i - \theta)^2$, which is maximum with respect to the variations in $\theta$ if $\theta = \bar{X}_n$. However, $P_\theta[\bar{X}_n = k] = 0$ for any integer $k$, hence $\bar{X}_n$ cannot be an estimator of $\theta$. To find the maximum likelihood estimator of $\theta$, we compare the values of $L_n(\theta|X)$ at $\theta - 1$, $\theta$ and $\theta + 1$. Proceeding on similar lines as in Eq. (2.1), we get
$$\frac{L_n(\theta|X)}{L_n(\theta - 1|X)} = \exp\left\{n\left(\bar{X}_n - (\theta - 1/2)\right)\right\} \geq 1 \quad \text{if } \bar{X}_n \geq (\theta - 1/2).$$
Similarly, it can be shown that $L_n(\theta + 1|X) \geq L_n(\theta|X)$ if $\bar{X}_n \geq (\theta + 1/2)$. As a consequence, we conclude that the likelihood at $\theta$ is larger than or equal to that at $\theta - 1$ and $\theta + 1$ if $\bar{X}_n \in [\theta - 1/2, \theta + 1/2)$. Hence, the maximum likelihood estimator $\hat{\theta}_n$ of $\theta$ is given by
$$\hat{\theta}_n = k \quad \text{if} \quad \bar{X}_n \in [k - 1/2, k + 1/2), \ \text{ where } \ k \in I.$$


For example, if for a given random sample $\bar{X}_n = 5.6$, then $6 - 1/2 < 5.6 < 6 + 1/2$ and hence $\hat{\theta}_n = 6$; if $\bar{X}_n = 5.5$, then $6 - 1/2 \leq 5.5 < 6 + 1/2$ and hence $\hat{\theta}_n = 6$; if $\bar{X}_n = 3.3$, then $3 - 1/2 < 3.3 < 3 + 1/2$ and hence $\hat{\theta}_n = 3$; and if $\bar{X}_n = -7.8$, then $-8 - 1/2 < -7.8 < -8 + 1/2$ and hence $\hat{\theta}_n = -8$. Thus, $\hat{\theta}_n$ is the nearest integer to $\bar{X}_n$. It can also be expressed as $\hat{\theta}_n = [\bar{X}_n + 1/2]$, where $[x]$ denotes the integer part of $x$. To examine the consistency of $\hat{\theta}_n$, for any $\epsilon > 0$ and for $\theta \in I$,
$$P_\theta[|\hat{\theta}_n - \theta| < \epsilon] \geq P_\theta[\hat{\theta}_n = \theta] = P_\theta[\theta - 1/2 \leq \bar{X}_n < \theta + 1/2] = \Phi(\sqrt{n}/2) - \Phi(-\sqrt{n}/2) \to 1,$$
as $n \to \infty$, for all $\theta \in I$. Hence, $\hat{\theta}_n$ is a consistent estimator of $\theta$. In all the above cases, the equation to find a moment estimator is given by $\bar{X}_n = \theta$, and if $\bar{X}_n$ belongs to the parameter space, then we can call it a moment estimator. Whenever it exists, it is consistent for $\theta$. 
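As a quick illustration of the coverage probabilities discussed in case (iii), the following minimal R sketch (not from the text; the sample sizes and the choice of true value $\theta = 0$ over the two-point parameter space $\{0, 1\}$ are arbitrary illustrative choices) compares the simulated coverage probability with the exact value $\Phi(\sqrt{n}/2)$.

# Simulated versus exact coverage probability for the MLE over {0, 1}, true theta = 0
set.seed(123)
cover <- function(n, nsim = 10000) {
  xbar <- replicate(nsim, mean(rnorm(n, mean = 0, sd = 1)))
  theta_hat <- ifelse(xbar > 1/2, 1, 0)   # MLE over the parameter space {0, 1}
  mean(theta_hat == 0)                    # simulated coverage probability
}
for (n in c(1, 4, 16, 36))
  cat("n =", n, " simulated:", cover(n), " exact:", pnorm(sqrt(n)/2), "\n")

For $n = 1$ both values are close to 0.69 and for $n = 36$ they are almost 1, in agreement with the discussion above.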

 Example 2.2.4

Suppose $X$ follows a Poisson $Poi(\theta)$ distribution, $\theta \in \Theta = (0, \infty)$, and $X = \{X_1, X_2, \ldots, X_n\}$ is a random sample of size $n$ from it. The probability mass function of $X$ is given by $P[X = x] = e^{-\theta}\theta^x/x!$, $x = 0, 1, \ldots$. The log likelihood function of $\theta$ corresponding to $X$ and its first and second derivatives are given by
$$\log L_n(\theta|X) = c + (\log\theta)\sum_{i=1}^{n}X_i - n\theta, \quad \frac{\partial}{\partial\theta}\log L_n(\theta|X) = \frac{\sum_{i=1}^{n}X_i}{\theta} - n \quad \text{and} \quad \frac{\partial^2}{\partial\theta^2}\log L_n(\theta|X) = -\frac{\sum_{i=1}^{n}X_i}{\theta^2},$$
where $c$ is a constant free from $\theta$. The solution of the likelihood equation is $\theta = \bar{X}_n$ and, at this solution, the second derivative is negative, provided $\bar{X}_n > 0$. It is to be noted that if the parameter space is $\Theta = (0, \infty)$, then $\bar{X}_n$ is an estimator provided $\bar{X}_n > 0$. However, it is possible that $\bar{X}_n = 0 \Leftrightarrow X_i = 0$ for all $i = 1, 2, \ldots, n$, the probability of which is $\exp(-n\theta) > 0$. In this case, the likelihood of $\theta$ is given by $\exp(-n\theta)$. It is a decreasing function of $\theta$ and attains its supremum at $\theta = 0$. However, 0 is not included in the parameter space. Hence, the maximum likelihood estimator of $\theta$ does not exist. To examine whether the moment estimator is consistent for $\theta$, observe that the mean of the $Poi(\theta)$ distribution is $\theta$. The equation to find a moment estimator is given by $\bar{X}_n = \theta$. If $\bar{X}_n > 0$, then the moment estimator is $\tilde{\theta}_n = \bar{X}_n$ and it is consistent for $\theta$.   Remark 2.2.2

If in the above example the parameter space is $[0, \infty)$, then the maximum likelihood estimator is $\bar{X}_n$ and it is consistent for $\theta$. However, if we define $0^0 = 1$,


then for $\theta = 0$,
$$P[X = x] = \frac{e^{-\theta}\theta^x}{x!} = \begin{cases} 1, & \text{if } x = 0 \\ 0, & \text{if } x = 1, 2, \ldots. \end{cases}$$
Thus, at $\theta = 0$, $X$ is degenerate at 0.  Example 2.2.5

Suppose $X \sim Poi(\theta)$, $\theta \in \Theta = (0, \infty)$. Suppose an estimator $T_n$ based on a random sample of size $n$ from the distribution of $X$ is defined as follows:
$$T_n = \begin{cases} \bar{X}_n, & \text{if } \bar{X}_n > 0 \\ 0.05, & \text{if } \bar{X}_n = 0. \end{cases}$$
To examine whether it is consistent for $\theta$, observe that for all $\epsilon > 0$ and all $\theta > 0$,
$$P[|T_n - \bar{X}_n| < \epsilon] \geq P[T_n = \bar{X}_n] = P[\bar{X}_n > 0] = 1 - \exp(-n\theta) \to 1, \ \text{ as } n \to \infty.$$
Thus, $(T_n - \bar{X}_n) \xrightarrow{P_\theta} 0$ for all $\theta \in \Theta$, but by the WLLN, $\bar{X}_n \xrightarrow{P_\theta} \theta$ and hence $T_n \xrightarrow{P_\theta} \theta$ for all $\theta \in \Theta$, which proves that $T_n$ is a consistent estimator of $\theta$.
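A small simulation sketch along the lines of Sect. 2.7 (not from the text; the values of $\theta$, $\epsilon$, the sample sizes and the number of replications are arbitrary choices) can be used to see the consistency of $T_n$ numerically: the proportion of samples with $|T_n - \theta| < \epsilon$ approaches 1 as $n$ grows.

# Empirical check of consistency of Tn from Example 2.2.5
set.seed(123)
theta <- 2; eps <- 0.1; nsim <- 5000
for (n in c(20, 100, 500, 2000)) {
  prop <- mean(replicate(nsim, {
    xbar <- mean(rpois(n, lambda = theta))
    Tn <- if (xbar > 0) xbar else 0.05     # the estimator Tn of Example 2.2.5
    abs(Tn - theta) < eps
  }))
  cat("n =", n, " estimated P[|Tn - theta| < eps] =", prop, "\n")
}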



 Remark 2.2.3

In the above example, it is to be noted that the consistency of $T_n$ follows from the fact that $T_n = \bar{X}_n$ on a set whose probability converges to 1 as $n \to \infty$ and $\bar{X}_n \xrightarrow{P_\theta} \theta$. It is to be noted that the value of $T_n$ on the set $[\bar{X}_n = 0]$ does not matter; it can be any arbitrarily small positive number. Thus, in general, if an estimator $T_n$ is defined as
$$T_n = \begin{cases} U_n, & \text{with probability } p_n \\ c, & \text{with probability } 1 - p_n, \end{cases}$$
where $U_n \xrightarrow{P_\theta} \theta$ for all $\theta \in \Theta$, $p_n \to 1$ as $n \to \infty$ and $c$ is any arbitrary number such that the range space of $T_n$ is the parameter space, then $T_n$ is consistent for $\theta$. We now proceed to establish an important property of a consistent estimator, well known as the invariance property of consistency under continuous transformation. It is in contrast with the unbiasedness property, which is invariant only under linear transformations and not in general under other transformations. In Chap. 1, we have noted that $X_n \xrightarrow{P} X \Rightarrow g(X_n) \xrightarrow{P} g(X)$, where $g$ is a continuous function. The next theorem is the same result in the inference setup.


Theorem 2.2.1 Suppose Tn is a consistent estimator of θ and g :  → R is a continuous function. Then g(Tn ) is a consistent estimator of g(θ ).

Proof It is to be noted that g being a continuous function, is a Borel function and hence g(Tn ) is a random variable. Continuity of g implies that, for any x and y in domain of g, given  > 0, ∃ δ > 0 such that if |x − y| < δ, then |g(x) − g(y)| < . Thus, given  > 0, ∃ δ > 0 such that |Tn − θ | < δ implies |g(Tn ) − g(θ )| < . As a consequence, [|g(Tn )(ω) − g(θ )| < ] ⊃ [|Tn (ω) − θ | < δ] ⇒ P[|g(Tn ) − g(θ )| < ] ≥ P[|Tn − θ | < δ] → 1, ∀ δ > 0 ⇒ P[|g(Tn ) − g(θ )| < ] → 1, ∀  > 0 and it is true ∀ θ ∈ . Hence, g(Tn ) is a consistent estimator of g(θ ).



This theorem is one of the most frequently used theorems to obtain a consistent estimator for a parametric function of interest, as is evident from the following examples.  Example 2.2.6

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a Poisson Poi(θ ) distribution, θ ∈  = (0, ∞). It is shown in Example 2.2.5 that Tn is a consistent estimator for θ . Further, P[X 1 = 0] = exp(−θ ) is a continuous function of θ . Hence, by Theorem 2.2.1, exp(−Tn ) is a consistent estimator for exp(−θ ). It to be noted that Tn is a biased estimator of θ and exp(−Tn ), although consistent, is also a biased estimator for exp(−θ ). To find a consistent and unbiased estimator for P[X 1 = 0] = exp(−θ ), we define random variables Yi , i = 1, 2, . . . , n as follows:  1, if Xi = 0 Yi = 0, otherwise . Thus, Yi = I[X i =0] is a Borel function of X i , i = 1, 2, . . . , n. Hence, {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables implies that {Y1 , Y2 , . . . , Yn } are also independent and identically distributed random variables, with E(Yi ) = P[X i = 0], i = 1, 2, . . . , n. Hence, the sample mean Y n is an unbiased estimator of E(Y1 ) = P[X 1 = 0] and by the WLLN  it is also consistent for P[X 1 = 0].  Example 2.2.7

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (θ, 1) distribution, θ ∈  = R, then by WLLN, X n is a consistent estimator of θ . Further, P[X 1 ≤ a] = P[(X 1 − θ ) ≤ (a − θ )] = (a − θ ), where (·) is a distribution function of the standard normal distribution. It is a continuous function.


Hence, by Theorem 2.2.1, (a − X n ) is a consistent estimator for (a − θ ). To find a consistent and unbiased estimator for P[X 1 ≤ a], we adopt the same procedure as in Example 2.2.6. We define random variables Yi , i = 1, 2, . . . , n as follows:  1, if Xi ≤ a Yi = 0, otherwise . Thus, {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables, with E(Yi ) = P[X i ≤ a] = (a − θ ), i = 1, 2, . . . , n. Hence, the sample mean Y n is an unbiased estimator of (a − θ ) and by the WLLN it is also consistent for (a − θ ).  The following example illustrates the application of Theorem 2.2.1 to generate a consistent estimator for the indexing parameter using a moment estimator.  Example 2.2.8

Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample of size $n$ from a distribution with probability density function $f(x, \theta) = \theta x^{\theta - 1}$, $0 < x < 1$, $\theta \in \Theta = (0, \infty)$. We find a consistent estimator for $\theta$ based on the sample mean. We have $E(X) = \theta/(\theta + 1) < \infty$ for all $\theta \in (0, \infty)$. Thus, by the WLLN, $\bar{X}_n \xrightarrow{P_\theta} \theta/(\theta + 1)$ for all $\theta > 0$. Suppose $\theta/(\theta + 1) = \phi$, say; then $\phi/(1 - \phi) = \theta$. Hence, if a function $g$ is defined as $g(\phi) = \phi/(1 - \phi)$, $0 < \phi < 1$, then $g(\phi) = \theta$. It is clear that $g$ is a continuous function. Hence, by Theorem 2.2.1, $g(\bar{X}_n) = \bar{X}_n/(1 - \bar{X}_n)$ is consistent for $g(\phi) = \theta$. One more approach to show that $\bar{X}_n/(1 - \bar{X}_n)$ is consistent for $\theta$ is based on the fact that convergence in probability is closed under the arithmetic operations. Thus,
$$\bar{X}_n \xrightarrow{P_\theta} \frac{\theta}{\theta + 1} \Rightarrow \frac{1}{\bar{X}_n} \xrightarrow{P_\theta} \frac{\theta + 1}{\theta} = \frac{1}{\theta} + 1 \Rightarrow \frac{1}{\bar{X}_n} - 1 \xrightarrow{P_\theta} \frac{1}{\theta} \Rightarrow \frac{\bar{X}_n}{1 - \bar{X}_n} \xrightarrow{P_\theta} \theta.$$
We now find a consistent estimator for $\theta$ based on a sufficient statistic. The likelihood of $\theta$ given the random sample $X = \{X_1, X_2, \ldots, X_n\}$ is
$$L_n(\theta|X) = \prod_{i=1}^{n} \theta X_i^{\theta - 1} \quad \Leftrightarrow \quad \log L_n(\theta|X) = n\log\theta - \theta\sum_{i=1}^{n}(-\log X_i) - \sum_{i=1}^{n}\log X_i.$$
By the Neyman-Fisher factorization theorem, $\sum_{i=1}^{n}(-\log X_i)$ is a sufficient statistic. Suppose a random variable $Y$ is defined as $Y = -\log X$; then the probability density function $f_Y(y, \theta)$ of $Y$ is given by $f_Y(y, \theta) = \theta e^{-\theta y}$, $y > 0$. Thus, the distribution of $Y$ is exponential with mean $1/\theta$. Hence, the moment estimator


$\tilde{\theta}_n$ of $\theta$ based on the sufficient statistic is given by the equation $\bar{Y}_n = S_n/n = E(Y) = 1/\theta$, that is, $\tilde{\theta}_n = n/S_n = 1/\bar{Y}_n$. By the WLLN, $S_n/n \xrightarrow{P_\theta} E(Y) = 1/\theta \Rightarrow \tilde{\theta}_n \xrightarrow{P_\theta} \theta$ for all $\theta \in \Theta$.

Thus, θ˜n is a consistent estimator of θ . It can be easily verified that θ˜n is the maximum likelihood estimator of θ .  Using the WLLN, one can show that an empirical distribution function is a consistent estimator of the distribution function for a fixed real number, from which the sample is drawn. Further, by Theorem 2.2.1, we can obtain the consistent estimator of the indexing parameter based on an empirical distribution function. The following example elaborates on it. We first define an empirical distribution function, also known as a sample distribution function.

 Definition 2.2.2

Empirical Distribution Function: Suppose $X = \{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution of $X$ with distribution function $F(x)$, $x \in \mathbb{R}$. For each fixed $x \in \mathbb{R}$, the empirical distribution function $F_n(x)$ corresponding to the given random sample $X$ is defined as
$$F_n(x) = \frac{\text{number of } X_i \leq x}{n} = \frac{1}{n}\sum_{i=1}^{n}Y_i, \quad \text{where for } i = 1, 2, \ldots, n, \quad Y_i = \begin{cases} 1, & \text{if } X_i \leq x \\ 0, & \text{if } X_i > x. \end{cases}$$
It is clear from the definition that $F_n(x)$ is non-decreasing, right continuous with $F_n(-\infty) = 0$ and $F_n(\infty) = 1$. Thus, it satisfies all the properties of a distribution function. Moreover, it is a step function with discontinuities at $n$ points. However, an important point to be noted is that it is not a deterministic function. From the definition, we note that the empirical distribution function is a Borel measurable function of $\{X_1, X_2, \ldots, X_n\}$, and hence it is a random variable for each fixed $x$. Observe that for each fixed $x$, $Y_i \sim B(1, F(x))$. Further, $Y_i$ is a Borel function of $X_i$, $i = 1, 2, \ldots, n$, and hence $\{Y_1, Y_2, \ldots, Y_n\}$ are also independent and identically distributed random variables. It then follows that $nF_n(x) \sim B(n, F(x))$, with mean $nF(x)$ and variance $nF(x)(1 - F(x))$. By the WLLN,
$$F_n(x) = \frac{1}{n}\sum_{i=1}^{n}Y_i \xrightarrow{P} E(Y_i) = F(x), \quad \text{for each fixed } x \in \mathbb{R}.$$
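In R, the empirical distribution function is available through the function ecdf(). The following minimal sketch (not from the text; the $N(0, 1)$ model and the point $x_0 = 1$ are arbitrary illustrative choices) shows $F_n(x_0)$ settling near $F(x_0)$ as $n$ grows.

# Empirical distribution function at a fixed point versus the true distribution function
set.seed(123)
x0 <- 1
for (n in c(50, 500, 5000, 50000)) {
  Fn <- ecdf(rnorm(n))                  # empirical distribution function of the sample
  cat("n =", n, " Fn(x0) =", Fn(x0), " F(x0) =", pnorm(x0), "\n")
}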


Using the result that an empirical distribution function is a consistent estimator of the distribution function, for each fixed x ∈ R, we can obtain a consistent estimator of the indexing parameter, when the distribution function F(x, θ ) is indexed by a parameter θ . We illustrate it in the following example.  Example 2.2.9

Suppose $F_n(x)$ is an empirical distribution function corresponding to a random sample $X = \{X_1, X_2, \ldots, X_n\}$ from the distribution of $X$, with distribution function $F(x, \theta)$, $x \in \mathbb{R}$. We have noted that by the WLLN,
$$F_n(x) = \frac{1}{n}\sum_{i=1}^{n}Y_i \xrightarrow{P_\theta} E(Y_i) = F(x, \theta), \quad \forall\ \theta \in \Theta.$$
If $F^{-1}$ exists and is continuous, then we can find a consistent estimator of $\theta$ based on $F_n(x)$. We illustrate the procedure for two distributions. Suppose $X$ follows an exponential distribution with scale parameter $\theta$. Then its distribution function $F(x, \theta)$ is given by
$$F(x, \theta) = \begin{cases} 0, & \text{if } x < 0 \\ 1 - \exp\{-\theta x\}, & \text{if } x \geq 0. \end{cases}$$
For $x < 0$, $F_n(x) = 0$ and $F(x) = 0$. For fixed $x > 0$, $1 - \exp\{-\theta x\} = y \Rightarrow \theta = -\frac{1}{x}\log(1 - y)$. Thus,
$$F_n(x) \xrightarrow{P_\theta} F(x, \theta) \Rightarrow -\frac{1}{x}\log(1 - F_n(x)) \xrightarrow{P_\theta} -\frac{1}{x}\log(1 - F(x, \theta)) = \theta, \quad \forall\ \theta \in \Theta.$$
Thus, for any fixed $x > 0$, $-\frac{1}{x}\log(1 - F_n(x))$ is a consistent estimator of $\theta$; in fact, we have an uncountable family of consistent estimators of $\theta$. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a Weibull distribution with probability density function $f(x, \theta) = \theta x^{\theta - 1}\exp\{-x^{\theta}\}$, $x > 0$, $\theta > 0$. Then its distribution function $F(x, \theta)$ for $x > 0$ is given by $F(x, \theta) = 1 - \exp\{-x^{\theta}\}$. Hence, for fixed $x > 0$,
$$F_n(x) \xrightarrow{P_\theta} F(x, \theta), \ \forall\ \theta \in \Theta \Rightarrow -\log(1 - F_n(x)) \xrightarrow{P_\theta} x^{\theta}, \ \forall\ \theta \in \Theta \Rightarrow \frac{\log(-\log(1 - F_n(x)))}{\log x} \xrightarrow{P_\theta} \theta, \ \forall\ \theta \in \Theta.$$
Thus, a consistent estimator of $\theta$ can be obtained from the empirical distribution function for each fixed $x > 0$. 
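The exponential case above can be tried out with a short R sketch (not from the text; the values of $\theta$, $x_0$ and the sample sizes are arbitrary, and $x_0$ is kept moderate so that $F_n(x_0) < 1$ with high probability).

# Consistent estimator of theta based on the empirical distribution function at x0
set.seed(123)
theta <- 1.5; x0 <- 0.5
for (n in c(100, 1000, 10000)) {
  Fn <- ecdf(rexp(n, rate = theta))     # F(x, theta) = 1 - exp(-theta * x)
  cat("n =", n, " estimate =", -log(1 - Fn(x0)) / x0, "\n")
}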


Theorem 2.2.1 states that if $g$ is a continuous function, then consistency of $T_n$ for $\theta$ implies that $g(T_n)$ is a consistent estimator of $g(\theta)$. However, if $g$ is not continuous, $g(T_n)$ can still be a consistent estimator of $g(\theta)$, as shown in the following example. We need the following theorem in the solution.

Theorem 2.2.2 If $W_n \xrightarrow{P} C < 0$, then $P[W_n \leq 0] \to 1$ as $n \to \infty$.

Proof It is known that $W_n \xrightarrow{P} C \Rightarrow W_n \xrightarrow{L} C$, and hence $F_{W_n}(x)$ converges to the distribution function of $W \equiv C$ at all real numbers except $C$, it being a point of discontinuity. Thus,
$$F_{W_n}(x) = P[W_n \leq x] \to \begin{cases} 0, & \text{if } x < C \\ 1, & \text{if } x > C. \end{cases}$$
Hence, $P[W_n \leq 0] \to 1$ as $C < 0$.



Theorem 2.2.2 is useful in proving some results in Cramér-Huzurbazar theory in Chap. 4.  Example 2.2.10

Suppose Tn is a consistent estimator of θ ∈ R. It is assumed that Pθ [Tn = 1] = 0 ∀ θ . Suppose a function g : R → R is defined as  g(x) =

−1, if x < 1 1, if x ≥ 1.

It is clear that g is not a continuous function, 1 being a point of discontinuity, and hence we cannot use Theorem 2.2.1 to claim consistency of g(Tn ) for g(θ ). We use the definition of consistency to verify whether g(Tn ) is consistent for g(θ ). Suppose θ ≥ 1, then g(θ ) = 1. Now, Pθ [|g(Tn ) − g(θ )| < ] = Pθ [|g(Tn ) − 1| < ] = Pθ [1 −  < g(Tn ) < 1 + ] = 1, if  > 2 , as possible values of g(Tn ) are −1 and 1. For 0 <  ≤ 2, Pθ [1 −  < g(Tn ) < 1 + ] = Pθ [g(Tn ) = 1] = Pθ [Tn ≥ 1] . Suppose now θ < 1, then g(θ ) = −1. Further, Pθ [|g(Tn ) − g(θ )| < ] = Pθ [|g(Tn ) − (−1)| < ] = Pθ [−1 −  < g(Tn ) < −1 + ] = 1, if  > 2 .


For 0 <  ≤ 2, Pθ [−1 −  < g(Tn ) < −1 + ] = Pθ [g(Tn ) = −1] = Pθ [Tn < 1] . Pθ

It is given that Tn is consistent for θ , that is Tn → θ ∀ θ ∈ R. To examine the limiting behavior of Pθ [Tn < 1] and of Pθ [Tn ≥ 1], we use Theorem 2.2.2. Suppose θ < 1, then Pθ



Tn → θ ⇒ Tn − 1 → θ − 1 < 0 ⇒ P[Tn −1 ≤ 0] → 1 ⇒ P[Tn ≤ 1] → 1 as n → ∞ . Now with the assumption that P[Tn = 1] = 0, [Tn ≤ 1] = [Tn < 1] → 1. Thus, if θ < 1 then Pθ [|g(Tn ) − g(θ )| < ] → 1. Suppose θ ≥ 1, then Pθ



Tn → θ ⇒ 1 − Tn → 1 − θ ≤ 0 ⇒ P[1 − Tn ≤ 0] → 1 ⇒ P[Tn ≥ 1] → 1 as n → ∞ . Thus, if θ ≥ 1, then Pθ [|g(Tn ) − g(θ )| < ] → 1. Hence, we claim that Pθ

g(Tn ) → g(θ ) for all θ ∈ R. Hence, g(Tn ) is a consistent estimator of g(θ ), even if g is not a continuous function. Observe that for  = 2 and θ = 8, P[|g(Tn ) − θ | < ] = P[6 < g(Tn ) < 10] = 0 and hence g(Tn ) is not consistent for θ . It cannot be consistent for θ in view of the fact that the limit random variable in convergence in probability is almost surely unique.   Remark 2.2.4

It is to be noted that in the above example the assumption that P[Tn = 1] = 0, that is, the probability assigned to a discontinuity point is 0, plays a crucial role. The following theorem states the most useful result for verifying the consistency of an estimator provided its MSE exists. Theorem 2.2.3 An estimator Tn is a consistent estimator of θ if M S E θ (Tn ) → 0 as n → ∞.

Proof By Chebyshev’s inequality, for any  > 0 and for any θ ∈ , Pθ [|Tn − θ | > ] ≤ E(Tn − θ )2 / 2 = MSEθ (Tn )/ 2 → 0 and hence Tn is consistent for θ .




 Remark 2.2.5

By the definition of MSE of Tn as an estimator of θ , we have M S E θ (Tn ) = E(Tn − θ )2 = E(Tn − E(Tn ) + E(Tn ) − θ )2 = E(Tn − E(Tn ))2 + (E(Tn ) − θ )2 = V arθ (Tn ) + (bθ (Tn ))2 → 0 if bθ (Tn ) → 0 and V arθ (Tn ) → 0 , where bθ (Tn ) = (E(Tn ) − θ ) is a bias of Tn as an estimator of θ . Thus, Theorem 2.2.3 can be restated as follows. If bθ (Tn ) = (E(Tn ) − θ ) → 0 and V arθ (Tn ) → 0, then Tn is consistent for θ . Such a consistent estimator is referred to as a MSE consistent estimator of θ .

 Definition 2.2.3

MSE Consistent Estimator: Suppose Tn is an estimator of θ such that MSE of Tn exists. If MSE of Tn converges to 0 as n → ∞, then Tn is called as a MSE consistent estimator of θ . Theorem 2.2.3 is nothing but the well-known result from probability theory which states that convergence in r -th mean implies convergence in probability. However, if M S E θ (Tn ) does not converge to zero then we cannot conclude that Tn is not consistent for θ . It is in view of the result that convergence in probability does not imply convergence in quadratic mean. The following example illustrates that the converse of Theorem 2.2.3 is not true.  Example 2.2.11

Suppose $\{X_n, n \geq 1\}$ is a sequence of random variables defined as $X_n = \mu + \epsilon_n$, where $\{\epsilon_n, n \geq 1\}$ is a sequence of independent random variables such that
$$\epsilon_n = \begin{cases} 0, & \text{with probability } 1 - 1/n \\ n, & \text{with probability } 1/n. \end{cases}$$
Suppose $\mu \in \mathbb{R}$. With the given distribution of $\epsilon_n$, we have $E(\epsilon_n) = 1$ and $Var(\epsilon_n) = n - 1$. Observe that for $0 < \epsilon < n$, $P[|\epsilon_n| < \epsilon] = P[\epsilon_n = 0] = 1 - 1/n \to 1$ as $n \to \infty$, and for $\epsilon > n$, $P[|\epsilon_n| < \epsilon] = 1$, so that $\epsilon_n \xrightarrow{P} 0$ and hence $X_n \xrightarrow{P} \mu$. Thus, $X_n$ as an estimator of $\mu$ is consistent for $\mu$. But
$$MSE_\mu(X_n) = E(X_n - \mu)^2 = E(X_n^2) - 2\mu E(X_n) + \mu^2 = E((\mu + \epsilon_n)^2) - 2\mu(\mu + 1) + \mu^2 = E(\epsilon_n^2) = n \to \infty \ \text{ as } n \to \infty.$$

2

Consistency of an Estimator

Thus, X n as an estimator of μ is consistent for μ but its MSE does not converge to 0.   Example 2.2.12

Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample of size $n$ from a uniform $U(0, \theta)$ distribution, $\theta \in \Theta = (0, \infty)$. Then
$$X \sim U(0, \theta) \Rightarrow E(X_{(n)}) = \frac{n\theta}{n+1} \ \text{ and } \ E(X_{(n)}^2) = \frac{n\theta^2}{n+2} \Rightarrow MSE_\theta(X_{(n)}) = E(X_{(n)} - \theta)^2 = \frac{n\theta^2}{n+2} - 2\theta\frac{n\theta}{n+1} + \theta^2 = \frac{2\theta^2}{(n+1)(n+2)} \to 0,$$
as $n \to \infty$. Hence, $X_{(n)}$ is a MSE consistent estimator of $\theta$. Similarly,
$$E(X_{(n-1)}) = \frac{(n-1)\theta}{n+1} \ \text{ and } \ E(X_{(n-1)}^2) = \frac{n(n-1)\theta^2}{(n+1)(n+2)} \Rightarrow MSE_\theta(X_{(n-1)}) = \frac{n(n-1)\theta^2}{(n+1)(n+2)} - 2\theta\frac{(n-1)\theta}{n+1} + \theta^2 = \frac{6\theta^2}{(n+1)(n+2)} \to 0,$$
as $n \to \infty$. Hence, $X_{(n-1)}$ is also MSE consistent for $\theta$.
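A brief simulation sketch (not from the text; $\theta = 3$, the sample sizes and the number of replications are arbitrary) compares the simulated MSE of $X_{(n)}$ with the exact expression $2\theta^2/((n+1)(n+2))$; both decrease to 0.

# Simulated and exact MSE of the sample maximum for U(0, theta)
set.seed(123)
theta <- 3; nsim <- 20000
for (n in c(10, 50, 200, 1000)) {
  xmax <- replicate(nsim, max(runif(n, 0, theta)))
  cat("n =", n, " simulated MSE =", mean((xmax - theta)^2),
      " exact =", 2 * theta^2 / ((n + 1) * (n + 2)), "\n")
}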



Using Theorem 2.2.3, we now prove that the UMVUE, as defined in Sect. 1.2 is always a consistent estimator of its expectation. Theorem 2.2.4 Suppose an estimator Tn based on a random sample {X 1 , X 2 , . . . , X n } is an UMVUE of its expectation g(θ ). Then Tn is a MSE consistent estimator of g(θ ).

Proof It is given that Tn is an UMVUE of its expectation g(θ ). Hence, if Un is any other unbiased estimator of g(θ ) based on {X 1 , X 2 , . . . , X n }, we have 2 σn2 = V ar (Tn ) ≤ V ar (Un ), ∀ n ≥ 1 & ∀ θ ∈  ⇒ σn+1 ≤ V ar (Un+1 ) 2 where σn+1 = V ar (Tn+1 ) and Tn+1 is UMVUE of g(θ ) based on a random sample {X 1 , X 2 , . . . , X n+1 }. In particular, suppose Un+1 = Tn , where Tn is viewed as a function of {X 1 , X 2 , . . . , X n+1 }. Hence, we have 2 σn+1 ≤ σn2 , ∀ n ≥ 1 ⇒ {σn2 , n ≥ 1} is a non-increasing sequence.


Further, it is bounded below by 0 and hence is convergent. Consequently, every subsequence of {σn2 , n ≥ 1} is convergent with the same limit. To find the limit, we find an appropriate subsequence of {σn2 , n ≥ 1} and show that it converges to 0. Suppose n = mk and Tmk is an UMVUE of g(θ ) based on a random sample {X 1 , X 2 , . . . , X mk }. Suppose an estimator Umk is defined as Umk =

1 {T (X 1 , X 2 , . . . , X m ) + T (X m+1 , . . . , X 2m ) k + · · · + T (X (m−1)k+1 , X (m−1)k+2 , . . . , X mk )} .

It is clear that E(Umk ) = g(θ ) and thus Umk is an unbiased estimator of g(θ ). Further, V ar (Umk ) = σm2 /k. Now Tmk is an UMVUE of g(θ ) and hence 2 ≤ σm2 /k → 0 as k → ∞ V ar (Tmk ) ≤ V ar (Umk ) ⇒ σmk



n → ∞.

2 , k ≥ 1} of the sequence {σ 2 , n ≥ 1} converges to 0 and Thus, a subsequence {σmk n hence the sequence {σn2 , n ≥ 1} also converges to 0. Thus, Tn is an unbiased estimator of g(θ ) with variance converging to 0 and hence it is a MSE consistent estimator of g(θ ). 

Following two examples illustrate Theorem 2.2.4.  Example 2.2.13

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Bernoulli B(1, p) distrin X i is a complete sufficient statistic. Further, bution, 0 < p < 1. Then Sn = i=1 sample mean X n is an unbiased estimator of p and it is a function of a complete sufficient statistic. Hence, it is the UMVUE of p and hence a consistent estimator of p. Consistency also follows from WLLN. Observe that E(nSn ) = n 2 p & E(Sn2 ) = np − np 2 + n 2 p 2 ⇒ E(nSn − Sn2 ) = n(n − 1) p(1 − p). Thus, (nSn − Sn2 )/n(n − 1) is an unbiased estimator of p(1 − p) and it is a function of a complete sufficient statistic. Hence, it is the UMVUE of p(1 − p) and hence a consistent estimator of p(1 − p). Consistency of (nSn − Sn2 )/n(n − 1) can also be established as follows: nSn − Sn2 n Sn n Sn2 Pp → p − p 2 = p(1 − p). = − n(n − 1) n−1 n n − 1 n2   Example 2.2.14

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normal N (θ, 1) distribution θ ∈ R. Then X n is a complete sufficient statistic and its distribution is normal N (θ, 1/n). Hence, for any t ∈ R,

$$E(e^{t\bar{X}_n}) = e^{t\theta + t^2/2n} \ \Rightarrow \ E\left(e^{t\bar{X}_n - t^2/2n}\right) = e^{t\theta}.$$
Thus, $T_n = e^{t\bar{X}_n - t^2/2n}$ is the UMVUE of $e^{t\theta}$ and hence a consistent estimator of $e^{t\theta}$. Consistency of $T_n$ also follows by noting that $\bar{X}_n \xrightarrow{P_\theta} \theta$ implies $T_n = e^{t\bar{X}_n - t^2/2n} \xrightarrow{P_\theta} e^{t\theta}$ for all $\theta \in \Theta$.



The following theorem proves that sample raw moments are consistent for the corresponding population raw moments.

Theorem 2.2.5 Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution of $X$, with indexing parameter $\theta \in \Theta$. (i) Suppose $g$ is a Borel function such that $E_\theta(g(X)) = h(\theta) < \infty$. Then $\sum_{i=1}^{n}g(X_i)/n \xrightarrow{P_\theta} h(\theta)$, for all $\theta \in \Theta$. (ii) Suppose the population raw moment $\mu_r'(\theta) = E_\theta(X^r)$ of order $r$ is finite, $r \geq 1$. Then the sample raw moment $m_r' = \sum_{i=1}^{n}X_i^r/n$ of order $r$ is a consistent estimator of $\mu_r'(\theta)$.

Proof (i) Since $\{X_1, X_2, \ldots, X_n\}$ is a random sample and $g$ is a Borel function, $\{g(X_1), g(X_2), \ldots, g(X_n)\}$ are also independent and identically distributed random variables with finite mean $h(\theta)$. Hence, by Khintchine's WLLN, $\sum_{i=1}^{n}g(X_i)/n \xrightarrow{P_\theta} h(\theta)$, for all $\theta \in \Theta$. (ii) In particular, suppose $g(X) = X^r$, $r \geq 1$; then $\mu_r'(\theta) = E(X^r)$ is the $r$-th population raw moment, which is assumed to be finite. Then, by (i), the sample raw moment of order $r$ given by $m_r' = \sum_{i=1}^{n}X_i^r/n$ is a consistent estimator of the $r$-th population raw moment $\mu_r'(\theta)$. 

As a consequence of Theorem 2.2.5 and the fact that convergence in probability is closed under the arithmetic operations, we get that
$$m_2 = m_2' - (m_1')^2 \xrightarrow{P_\theta} \mu_2' - (\mu_1')^2 = \mu_2 \quad \text{and} \quad m_3 = m_3' - 3m_2'm_1' + 2(m_1')^3 \xrightarrow{P_\theta} \mu_3' - 3\mu_2'\mu_1' + 2(\mu_1')^3 = \mu_3.$$
In general, the sample central moment of order $r$ given by $m_r = \sum_{i=1}^{n}(X_i - \bar{X}_n)^r/n$ is consistent for the $r$-th population central moment $\mu_r(\theta)$, provided it is finite. This result is proved in Sect. 2.5 using a different approach. Further, by taking $g(X) = e^{tX}$, we get that the sample moment generating function
$$M_n(t) = \sum_{i=1}^{n}e^{tX_i}/n \xrightarrow{P_\theta} E(e^{tX}) = M_X(t),$$
for a fixed $t$ for which the moment generating function $M_X(t)$ of $X$ exists. On similar lines, we can show that the sample probability generating function
$$P_n(t) = \sum_{i=1}^{n}t^{X_i}/n \xrightarrow{P_\theta} E(t^X) = P_X(t),$$


for a fixed $t \in (0, 1)$, where $P_X(t)$ is the probability generating function of a positive integer valued random variable $X$. In Theorem 2.2.5, it is proved that the sample raw moments are consistent for the corresponding population raw moments, provided these exist. There are certain distributions for which moments do not exist, for example, the Cauchy distribution. In such situations, one can find a consistent estimator for the parameter of interest based on sample quantiles. We now discuss this approach. Suppose $X$ is an absolutely continuous random variable with distribution function $F(x, \theta)$ and probability density function $f(x, \theta)$, where $\theta$ is an indexing parameter. Suppose $a_p(\theta)$ is such that
$$P[X \leq a_p(\theta)] \geq p \quad \text{and} \quad P[X \geq a_p(\theta)] \geq 1 - p.$$

Then a p (θ ) is known as the p-th population quantile or fractile. There may be multiple values of a p (θ ) unless the distribution function F(x, θ ) is strictly monotone. We assume that the distribution function F(x, θ ) is strictly monotone, hence p-th population quantile a p (θ ) is a unique solution of F(a p (θ ), θ ) = p, 0 < p < 1. We assume that the solution of the equation F(a p (θ ), θ ) = p exists. For example, suppose X follows an exponential distribution with scale parameter θ . Its distribution function F(x, θ ) is given by, F(x, θ ) = 1 − exp(−θ x) for x > 0. Hence, the solution of the equation F(a p (θ ), θ ) = p is given by, a p (θ ) = − log(1 − p)/θ, 0 < p < 1 and it is a p-th population quantile of the distribution of X . To define the corresponding sample quantile, suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X with distribution function F(x, θ ). Suppose {X (1) , X (2) , . . . , X (n) } is the corresponding order statistics. Then X (rn ) is defined as a p-th sample quantile, where rn = [np] + 1, 0 < p < 1. In the following theorem, we prove that the p-th sample quantile is consistent for the p-th population quantile. We can then use the invariance property of consistency to get a consistent estimator for the desired parametric function. Theorem 2.2.6 Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X which is an absolutely continuous random variable with distribution function F(x, θ ) where θ is an indexing parameter. Suppose F(x, θ ) is strictly increasing, and p-th population quantile a p (θ ) is a unique solution of F(a p (θ ), θ ) = p, 0 < p < 1. Then the p-th sample quantile X (rn ) , where rn = [np] + 1, 0 < p < 1, is a consistent estimator of the p-th population quantile a p (θ ).

Proof It is given that the distribution of X is absolutely continuous with distribution function F(x, θ ), hence by the probability integral transformation it follows that U = F(X , θ ) ∼ U (0, 1) distribution. As a consequence, {F(X (1) ), F(X (2) ), . . . , F(X (n) )} can be treated as an order statistics corresponding to a random sample of size n from U (0, 1). Thus, the distribution of U(rn ) = F(X (rn ) ) is same as that of the rn -th-order statistics from uniform U (0, 1) distribution. The probability density function of U(rn ) is given by,

$$g_{r_n}(u) = \frac{n!}{(r_n - 1)!(n - r_n)!}\,u^{r_n - 1}(1 - u)^{n - r_n}, \quad 0 < u < 1.$$

It then follows from the definition of the beta function that
$$E(U_{(r_n)}) = \frac{r_n}{n+1} \ \text{ and } \ E(U_{(r_n)}^2) = \frac{r_n(r_n + 1)}{(n+1)(n+2)} \Rightarrow E(U_{(r_n)} - p)^2 = E(U_{(r_n)}^2) - 2pE(U_{(r_n)}) + p^2 = \frac{r_n(r_n + 1)}{(n+1)(n+2)} - 2p\frac{r_n}{n+1} + p^2.$$
To find the limit of $E(U_{(r_n)} - p)^2$, observe that
$$r_n = [np] + 1 \Rightarrow np < r_n \leq np + 1 \Rightarrow \frac{np}{n+1} < \frac{r_n}{n+1} \leq \frac{np + 1}{n+1} \Rightarrow p \leq \lim_{n\to\infty}\frac{r_n}{n+1} \leq p \Rightarrow \lim_{n\to\infty}\frac{r_n}{n+1} = p$$
and
$$\lim_{n\to\infty}\frac{r_n + 1}{n+2} = \lim_{n\to\infty}\left(\frac{r_n}{n+1}\,\frac{n+1}{n+2} + \frac{1}{n+2}\right) = p.$$
Hence,
$$\lim_{n\to\infty}E(U_{(r_n)} - p)^2 = \lim_{n\to\infty}\left(\frac{r_n(r_n + 1)}{(n+1)(n+2)} - 2p\frac{r_n}{n+1} + p^2\right) = 0$$
$$\Rightarrow U_{(r_n)} \xrightarrow{q.m.} p \Rightarrow U_{(r_n)} \xrightarrow{P} p \Rightarrow F(X_{(r_n)}) \xrightarrow{P} p \Rightarrow F^{-1}(F(X_{(r_n)})) \xrightarrow{P_\theta} F^{-1}(p) \Rightarrow X_{(r_n)} \xrightarrow{P_\theta} a_p(\theta), \ \forall\ \theta \in \Theta.$$
Thus, the $p$-th sample quantile $X_{(r_n)}$ is a consistent estimator of the $p$-th population quantile $a_p(\theta)$.  Example 2.2.15

For normal $N(\theta, 1)$ and Cauchy $C(\theta, 1)$ distributions, $\theta$ is a population median. Hence, in both cases, by Theorem 2.2.6, the sample median $X_{([n/2]+1)}$ is consistent for $\theta$. For a uniform $U(0, \theta)$ distribution, $\theta/2$ is a population median. Hence, $2X_{([n/2]+1)}$ is consistent for $\theta$. More generally, for a normal $N(\theta, 1)$ distribution, the $p$-th quantile is $a_p(\theta) = \theta + \Phi^{-1}(p)$, hence $X_{([np]+1)} - \Phi^{-1}(p)$


is consistent for $\theta$, $0 < p < 1$. For a uniform $U(0, \theta)$ distribution, the $p$-th quantile is given by $a_p(\theta) = p\theta$, hence $X_{([np]+1)}/p$ is consistent for $\theta$, $0 < p < 1$. For a Cauchy $C(\theta, 1)$ distribution, the $p$-th quantile is given by $a_p(\theta) = \theta + \tan(\pi(p - 1/2))$, hence $X_{([np]+1)} - \tan(\pi(p - 1/2))$ is consistent for $\theta$, $0 < p < 1$. Thus, we have an uncountable family of consistent estimators for $\theta$ as $p$ varies over $(0, 1)$ for the normal $N(\theta, 1)$, uniform $U(0, \theta)$ and Cauchy $C(\theta, 1)$ distributions. We have to choose $p$ appropriately to get a better estimator. 
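For the Cauchy $C(\theta, 1)$ distribution, the sample mean is not consistent for $\theta$, while Theorem 2.2.6 shows that the sample median is. The following minimal R sketch (not from the text; $\theta = 2$ and the sample sizes are arbitrary) contrasts the two estimators.

# Sample mean versus sample median for the Cauchy C(theta, 1) distribution
set.seed(123)
theta <- 2
for (n in c(100, 1000, 10000)) {
  x <- theta + rcauchy(n)                         # a C(theta, 1) sample
  med <- sort(x)[floor(n / 2) + 1]                # the ([n/2] + 1)-th order statistic
  cat("n =", n, " mean =", round(mean(x), 3), " median =", round(med, 3), "\n")
}

The mean keeps fluctuating, whereas the median settles near $\theta$, in agreement with the theory.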

The following example illustrates how to obtain consistent estimators based on sample moments and sample quantiles.  Example 2.2.16

Suppose $X$ follows an exponential distribution with location parameter (also known as a threshold parameter) $\theta$ and scale parameter 1. The probability density function of $X$ is given by $f_X(x, \theta) = \exp\{-(x - \theta)\}$, $x \geq \theta$, and its distribution function $F_X(x, \theta)$ is given by
$$F_X(x, \theta) = \begin{cases} 0, & \text{if } x < \theta \\ 1 - \exp\{-(x - \theta)\}, & \text{if } x \geq \theta. \end{cases}$$
Corresponding to a random sample $\{X_1, X_2, \ldots, X_n\}$ of size $n$ from this distribution, the likelihood of $\theta$ is given by
$$L_n(\theta|X) = \exp\left\{-\sum_{i=1}^{n}(X_i - \theta)\right\} \quad \text{if} \quad X_i > \theta, \ \forall\ i = 1, 2, \ldots, n \ \Leftrightarrow \ X_{(1)} > \theta.$$

It is to be noted that the likelihood function is not a continuous function of θ . However, it is strictly increasing over the interval (−∞, X (1) ) and hence attains supremum at the largest possible value of θ given the data, which is X (1) . Hence, the maximum likelihood estimator of θ is given by, θˆn = X (1) . We examine the consistency of θˆn by showing that its coverage probability converges to 1. From the distribution function of X , the distribution function FX (1) (x, θ ) of X (1) is


given by
$$F_{X_{(1)}}(x, \theta) = \begin{cases} 0, & \text{if } x < \theta \\ 1 - \exp\{-n(x - \theta)\}, & \text{if } x \geq \theta. \end{cases}$$
Since $X_{(1)} \geq \theta$ with probability 1, for $\epsilon > 0$ the coverage probability is
$$P_\theta[|X_{(1)} - \theta| < \epsilon] = P_\theta[X_{(1)} < \theta + \epsilon] = 1 - \exp\{-n\epsilon\} \to 1$$
as $n \to \infty$, for all $\epsilon > 0$ and all $\theta \in \Theta$. Hence, $X_{(1)}$ is a consistent estimator of $\theta$. It is to be noted that the coverage probability does not depend on $\theta$. Further, the distribution of $X_{(1)}$ is exponential with scale parameter $n$ and location parameter $\theta$, hence $E(X_{(1)}) = \theta + 1/n$ and $Var(X_{(1)}) = 1/n^2$. Thus, the bias of $X_{(1)}$ as an estimator of $\theta$ and the variance of $X_{(1)}$ both converge to 0, and hence $X_{(1)}$ is MSE consistent for $\theta$. It is known that the first sample quartile is consistent for the first population quartile, which is a solution of $F(x, \theta) = 1 - \exp\{-(x - \theta)\} = 1/4 \Rightarrow a_{1/4}(\theta) = \theta + \log(4/3)$. Thus, $X_{([n/4]+1)}$ is consistent for $\theta + \log(4/3)$ and hence $X_{([n/4]+1)} - \log(4/3)$ is consistent for $\theta$.  The following example illustrates a different approach to verify the consistency of an estimator.  Example 2.2.17

Suppose X is a random variable with mean μ and known variance σ 2 , 0 < σ 2 < ∞. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . A parametric function g is defined as g(μ) = 0 if μ = 0 and g(0) = 1. Suppose for −1/2 < δ < 0, an estimator Tn is defined as  Tn =

1, if 0, if

|X n | < n δ |X n | ≥ n δ .

We examine whether Tn is a consistent estimator of g(μ). When μ = 0, by the √ L central limit theorem, n X n → Z 1 ∼ N (0, σ 2 ), since 0 < σ 2 < ∞. Hence, for μ = 0 and  > 0, P0 [|Tn − g(μ)| < ] = P0 [|Tn − 1| < ] ≥ P0 [|Tn = 1] = P0 [|X n | < n δ ]     =  n 1/2+δ /σ −  −n 1/2+δ /σ → 1 as n → ∞ .


Suppose μ = 0 and  > 1. Then Pμ [|Tn − g(μ)| < ] = Pμ [|Tn − 0| < ] = 1. Now suppose μ = 0 and 0 <  ≤ 1. By the WLLN, P

P

P

X n → μ ⇒ 1/|X n | → 1/|μ| ⇒ n δ /|X n | → 0

⇒ Pμ n δ /|X n | <  → 1 ∀  > 0. Hence, for 0 <  ≤ 1 Pμ [|Tn − g(μ)| < ] = Pμ [|Tn − 0| < ] = Pμ [Tn = 0]

= Pμ [|X n | ≥ n δ ] = Pμ n δ /|X n | < 1 → 1 . As a consequence, Tn is a consistent estimator of g(μ).



In the following two sections, we briefly discuss strong consistency and uniform consistency.

2.3

Strong Consistency

As a pre-requisite to the concept of the strongly consistent estimator, we have given the definition of almost sure convergence of a sequence of random variables in Sect. 1.3. Using it, we define a strongly consistent estimator as follows:

 Definition 2.3.1

Strongly Consistent Estimator: A sequence of estimators {Tn , n ≥ 1} is said to be strongly consistent for θ if as n → ∞, a.s.

Tn → θ, ∀ θ ∈ , that is Tn (ω) → θ, ∀ ω ∈ N c , with Pθ (N ) = 0, ∀ θ ∈ ,

where set N is a p -null set. Equivalently, a sequence of estimators {Tn , n ≥ 1} is said to be strongly consistent for θ if Pθ [ lim Tn = θ ] = 1, ∀ θ ∈ . n→∞

Kolmogorov’s SLLN stated in Chap. 1, is useful to examine the strong consistency of an estimator which is in the form of an average. Suppose {X 1 , X 2 , . . . , X n } is random sample from the distribution of X , with indexing parameter θ ∈ . Suppose g is a Borel function, such that E θ (g(X )) = h(θ ) < ∞. Then by Kolmogorov’s n a.s. SLLN, Tn = i=1 g(X i )/n → h(θ ), ∀ θ ∈ . If h −1 exists and is continuous, −1 then h (Tn ) is a strongly consistent estimator of θ . Here, we use the result stated a.s. a.s. in Sect. 1.3 that, if f is a continuous function, then X n → X ⇒ f (X n ) → f (X ). In particular if {X 1 , X 2 , . . . , X n } is a random sample from a normal N (0, θ ) distribution or a Bernoulli B(1, θ ) distribution or a Poisson Poi(θ ) distribution, a.s. then by Kolmogorov’s SLLN, X n → θ ∀ θ ∈ . Thus, X n is a strongly consistent


estimator of $\theta$. Further, if $g(x) = x^r$, $r \geq 1$, then the sample raw moment $m_r' = \sum_{i=1}^{n}X_i^r/n$ of order $r \geq 1$ is a strongly consistent estimator of the population raw moment $\mu_r'(\theta) = E_\theta(X^r)$ of order $r$, provided it is finite. In the following theorem, we state two sufficient conditions for the almost sure convergence of $\{X_n, n \geq 1\}$ to $X$, which follow from the Borel-Cantelli lemma.

Theorem 2.3.1 Suppose a sequence $\{X_n, n \geq 1\}$ of random variables and $X$ are defined on the same probability space.

1. If for all $\epsilon > 0$, $\sum_{n\geq 1}P[|X_n - X| > \epsilon] < \infty$, then $X_n \xrightarrow{a.s.} X$.
2. If for some $r > 0$, $\sum_{n\geq 1}E(|X_n - X|^r) < \infty$, then $X_n \xrightarrow{a.s.} X$.

Following examples illustrate how these sufficient conditions are useful to examine strong consistency of an estimator.  Example 2.3.1

Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample of size $n$ from a uniform $U(0, \theta)$ distribution. Hence, the distribution function $F_{X_{(n)}}(x, \theta)$ of $X_{(n)}$ is given by
$$F_{X_{(n)}}(x, \theta) = \begin{cases} 0, & \text{if } x < 0 \\ (x/\theta)^n, & \text{if } 0 \leq x < \theta \\ 1, & \text{if } x \geq \theta. \end{cases}$$
For $\epsilon \geq \theta$,
$$P_\theta[|X_{(n)} - \theta| > \epsilon] = 1 - P_\theta[|X_{(n)} - \theta| < \epsilon] = 0 \ \Rightarrow \ \sum_{n\geq 1}P_\theta[|X_{(n)} - \theta| > \epsilon] < \infty, \ \forall\ \theta.$$
For $\epsilon < \theta$, as derived in Example 2.2.1, $P_\theta[|X_{(n)} - \theta| > \epsilon] = (1 - \epsilon/\theta)^n$, so that
$$\sum_{n\geq 1}P_\theta[|X_{(n)} - \theta| > \epsilon] = \sum_{n\geq 1}(1 - \epsilon/\theta)^n < \infty \quad \text{as } \epsilon/\theta < 1.$$

Thus, by the sufficient condition stated in Theorem 2.3.1, it follows that X (n) is a strongly consistent estimator of θ .   Example 2.3.2

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (θ, 1) distribution. For N (θ, 1) distribution, θ is the population mean, hence by Kolmogorov’s SLLN, it immediately follows that the sample mean X n is strongly


consistent for $\theta$. In what follows, we use an alternative method to establish the same. By the Markov inequality, for $\epsilon > 0$,
$$\sum_{n\geq 1}P[|\bar{X}_n - \theta| > \epsilon] \leq \sum_{n\geq 1}E|\bar{X}_n - \theta|^r/\epsilon^r.$$
If $r = 2$, $E|\bar{X}_n - \theta|^2 = Var(\bar{X}_n) = 1/n$, but the series $\sum_{n\geq 1}1/n$ is divergent and we cannot draw any conclusion from sufficient condition (ii). However, in the same condition, it is required only that the series be convergent for some $r > 0$. Suppose $r = 4$. To find $E|\bar{X}_n - \theta|^4$, it is to be noted that
$$X \sim N(\theta, 1) \Rightarrow \sqrt{n}(\bar{X}_n - \theta) \sim N(0, 1) \Rightarrow Y_n = (\sqrt{n}(\bar{X}_n - \theta))^2 \sim \chi^2_1 \Rightarrow E(Y_n^2) = n^2E(\bar{X}_n - \theta)^4 = Var(Y_n) + (E(Y_n))^2 = 2 + 1 = 3$$
$$\Rightarrow \ \sum_{n\geq 1}E(\bar{X}_n - \theta)^4 = \sum_{n\geq 1}3/n^2 < \infty.$$
Thus, by the sufficient condition (ii), $\bar{X}_n$ is a strongly consistent estimator of $\theta$. 
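The almost sure convergence in this example can be visualized path by path. The following small R sketch (not from the text; $\theta = 1$, three paths and the chosen checkpoints are arbitrary) follows the running mean along individual simulated $N(\theta, 1)$ sequences; each path stays close to $\theta$ for large $n$.

# Running means along individual sample paths from N(theta, 1)
set.seed(123)
theta <- 1; n <- 10000
for (path in 1:3) {
  xbar <- cumsum(rnorm(n, mean = theta)) / (1:n)   # running means of one path
  cat("path", path, ": running mean at n = 100, 1000, 10000 :",
      round(xbar[c(100, 1000, 10000)], 4), "\n")
}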

2.4

Uniform Weak and Strong Consistency

As discussed in Sect. 2.2, an estimator Tn is said to be consistent for θ if Tn converges in probability to θ for all θ ∈ . If the convergence is uniform in θ , then we get uniform consistency. A precise definition is given below.

 Definition 2.4.1



Uniformly Consistent Estimator: Suppose Tn → θ, ∀ θ ∈ , that is, for given  > 0 and δ ∈ (0, 1), for each θ, ∃ n 0 (, δ, θ ) such that ∀ n ≥ n 0 (, δ, θ ), Pθ [|Tn − θ | < ] ≥ 1 − δ, ∀ θ ∈  ⇔ Pθ [|Tn − θ | > ] ≤ δ, ∀ θ ∈ . Tn is said to be uniformly consistent if n 0 (, δ, θ ) does not depend on θ . If sup n 0 (, δ, θ ) is finite then the convergence is uniform in θ . Following examples illustrate uniform consistency. Using the WLLN, sample averages are consistent for corresponding population averages, but the WLLN does not provide any information about the rate of convergence and it is also not useful to find out n 0 (, δ, θ ). Chebyshev’s inequality comes out to be handy to deal with the rate of convergence and determination of n 0 (, δ, θ ).


 Example 2.4.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (θ, 1) distribution. Then X n ∼ N (θ, 1/n). Hence, by Chebyshev’s inequality, the bound on coverage probability is given by, Pθ [|X n − θ | < ] ≥ 1 − E(X n − θ )2 / 2 = 1 − 1/n 2 → 1 as n → ∞ ∀ θ ∈ . Hence, X n is consistent for θ . We select n 0 (, δ, θ ) such that

1 − 1/n 2 ≥ 1 − δ ⇒ n ≥ 1/ 2 δ ⇒ n 0 (, δ, θ ) = 1/ 2 δ + 1, thus, n 0 (, δ, θ ) does not depend on θ and hence X n is a uniformly consistent estimator of θ .   Example 2.4.2

Suppose X follows an exponential distribution with scale parameter 1 and location parameter θ , then its probability density function is f X (x, θ ) = exp{−(x − θ )}, x ≥ θ . It is shown in Example 2.2.16 that corresponding to a random sample of size n from this distribution, the distribution of X (1) is again exponential with scale parameter n and location parameter θ . Hence, E(X (1) ) = θ + 1/n and V ar (X (1) ) = 1/n 2 , from which MSE of X (1) as an estimator of θ is, 2 ) − 2θ E(X (1) ) + θ 2 M S E θ (X (1) ) = E((X (1) − θ )2 ) = E(X (1)

= 2/n 2 → 0 as n → ∞. Thus, X (1) is a consistent estimator of θ . To examine uniform consistency, we use Chebyshev’s inequality as in the previous example. By Chebyshev’s inequality, Pθ [|X (1) − θ | < ] ≥ 1 − E(X (1) − θ )2 / 2 = 1 − 2/n 2  2 , ∀ θ ∈ . We select n 0 (, δ, θ ) such that 1 − 2/n 2  2 ≥ 1 − δ ⇒ n 2 ≥ 2/ 2 δ ⇒ n 0 (, δ, θ ) =



 2/ 2 δ + 1,

thus, n 0 (, δ, θ ) does not depend on θ and hence X (1) is a uniformly consistent estimator θ .   Example 2.4.3

Suppose X follows an exponential distribution with scale parameter 1/θ and location parameter 0, then its probability density function is f X (x, θ ) = (1/θ ) exp{−x/θ }, x ≥ 0, θ > 0. Then by the WLLN, the sample


mean X n based on a random sample of size n from the distribution of X is consistent for θ . To examine its uniform consistency, we use Chebyshev’s inequality as in the previous example. Thus, Pθ [|X n − θ | < ] ≥ 1 − E(X n − θ )2 / 2 = 1 − θ 2 /n 2 , ∀ θ > 0. If n 0 (, δ, θ ) is such that 1 − θ 2 /n 2 ≥ 1 − δ, then n 0 (, δ, θ ) depends on θ and hence X n is not uniformly consistent for θ . We get the same result if X follows a Poisson distribution with mean θ .   Example 2.4.4

In Example 2.2.9, it is shown that the empirical distribution function Fn (x), corresponding to a random sample {X 1 , X 2 , . . . , X n } drawn from the distribution with distribution function F(x), is consistent for F(x) for fixed x ∈ R. Further defining the random variables Yi as in Example 2.2.9, it follows that n Fn (x) has 1 n (x)) ≤ 4n . binomial B(n, F(x)) distribution. Hence, V ar (Fn (x)) = Fn (x)(1−F n Thus, by Chebyshev’s inequality, it follows that the convergence in probability is uniform in x.  Now, we introduce the concept of a uniformly strongly consistent estimator. It is defined on similar lines as the uniformly consistent estimator.

 Definition 2.4.2 Uniformly Strongly Consistent Estimator: Suppose Tn is a strongly consistent estimator of θ , that is, given  > 0, ∃ n 0 (, ω, θ ) such that ∀ n ≥ n 0 (, ω, θ ), |Tn (ω) − θ | <  , ∀ ω ∈ N c , with Pθ (N ) = 0, ∀ θ ∈ . If n 0 (, ω, θ ) does not depend on θ , then Tn is said to be a uniformly strongly consistent estimator of θ . In Example 2.4.4, it is shown that the sample distribution function is uniformly weakly consistent for the population distribution function. One can appeal to the SLLN to claim the strong consistency of the sample distribution function for the population distribution function for each fixed x ∈ R. It can further be proved that the strong convergence is also uniform in x, which is a well-known Glivenko-Cantelli theorem. The statement is given below. For proof, refer to Gut [1] (p. 306). Theorem 2.4.1 Glivenko-Cantelli Theorem: Suppose Fn (x) is an empirical distribution function, corresponding to a random sample {X 1 , X 2 , . . . , X n } drawn from the distribution with distribution function F(x). Then a.s.

sup |Fn (x) − F(x)| → 0 as n → ∞. x∈R

The theorem states that the sample distribution function is uniformly strongly consistent for the distribution function from which we have drawn the random sample. It is an important theorem as it forms a basis of non-parametric inference.
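The Glivenko-Cantelli theorem can also be illustrated numerically. The following R sketch (not from the text; the $N(0, 1)$ model, the grid and the sample sizes are arbitrary, and the supremum is only approximated on the grid) shows the distance $\sup_x|F_n(x) - F(x)|$ decreasing with $n$.

# Approximate Kolmogorov distance between the empirical and the true distribution function
set.seed(123)
grid <- seq(-5, 5, by = 0.01)
for (n in c(100, 1000, 10000, 100000)) {
  Fn <- ecdf(rnorm(n))
  cat("n =", n, " sup|Fn - F| (approx.) =", max(abs(Fn(grid) - pnorm(grid))), "\n")
}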


In the next section, we extend the concept of consistency and all the related results when the distribution of a random variable or a random vector is indexed by a vector parameter.

2.5

Consistency: Vector Parameter Setup

Suppose X is a random variable or a random vector defined on a probability space (, A, Pθ ), where the probability measure Pθ is indexed by a vector parameter θ ∈  ⊂ Rk . Suppose θ = (θ1 , θ2 , . . . , θk ) . Given a random sample X = {X 1 , X 2 , . . . , X n } of size n from the distribution of X , suppose T n ≡ T n (X ) = (T1n , T2n , . . . , Tkn ) is an estimator of θ, that is T n is a random vector with range space as the parameter space  ⊂ Rk and Tin is an estimator of θi for i = 1, 2, . . . , k. Consistency of T n as an estimator of θ is defined in two ways as joint consistency and marginal consistency. These are defined below.

 Definition 2.5.1

Jointly Weakly Consistent Estimator: An estimator $T_n$ of $\theta$ is said to be jointly weakly consistent for $\theta$ if for all $\theta \in \Theta$ and for all $\epsilon > 0$,
$$\lim_{n\to\infty}P_\theta\left[T_n \in N_\epsilon(\theta)\right] = 1,$$
where $N_\epsilon(\theta)$ is an $\epsilon$ neighbourhood of $\theta$ with respect to the Euclidean or squared Euclidean norm, or, in particular, for all $\theta \in \Theta$ and for all $\epsilon > 0$,
$$\lim_{n\to\infty}P_\theta\left[\max_{i=1,2,\ldots,k}|T_{in} - \theta_i| < \epsilon\right] = 1.$$

 Definition 2.5.2

Marginally Weakly Consistent Estimator: An estimator T n of θ is said to be marginally weakly consistent for θ if Tin is consistent for θi , ∀ i = 1, 2, . . . , k . As in the real parameter setup, a weakly consistent estimator will be referred to as simply a consistent estimator. In the following theorem, we establish the equivalence of two definitions. Such an equivalence plays an important role in examining the consistency of a vector estimator, as one can simply proceed marginally and use all the tools discussed in Sect. 2.2. Theorem 2.5.1 Suppose T n = (T1n , T2n , . . . , Tkn ) is an estimator of θ = (θ1 , θ2 , . . . , θk ) . Then T n is jointly consistent for θ if and only if T n is marginally consistent for θ .

Proof Part(i) - In this part, we prove that joint consistency implies marginal consistency. Suppose T n is jointly consistent for θ, then ∀  > 0,




$\lim_{n\to\infty}P_\theta\left[\max_{i=1,2,\ldots,k}|T_{in} - \theta_i| < \epsilon\right] = 1$. Suppose the events $E$ and $E_i$, $i = 1, 2, \ldots, k$, are defined as follows:
$$E = \left[\max_{i=1,2,\ldots,k}|T_{in} - \theta_i| < \epsilon\right] \ \text{ and } \ E_i = [|T_{in} - \theta_i| < \epsilon] \ \Rightarrow \ E = \bigcap_{i=1}^{k}E_i \ \Leftrightarrow \ E^c = \bigcup_{i=1}^{k}E_i^c.$$
Now, to prove marginal consistency, observe that
$$E = \bigcap_{i=1}^{k}E_i \Rightarrow E_i \supset E, \ \forall\ i = 1, 2, \ldots, k \Rightarrow P_\theta(E_i) \geq P_\theta(E), \ \forall\ i \Rightarrow P_\theta(E_i) \to 1, \ \forall\ i = 1, 2, \ldots, k, \ \text{ as } P_\theta(E) \to 1$$
$$\Rightarrow \ T_{in} \xrightarrow{P_\theta} \theta_i, \ \forall\ i = 1, 2, \ldots, k \ \Rightarrow \ T_n \text{ is marginally consistent for } \theta.$$
Thus, joint consistency implies marginal consistency of $T_n$.
Part (ii) - In this part, we prove that marginal consistency implies joint consistency. Suppose $T_n$ is marginally consistent for $\theta$; then $T_{in}$ is consistent for $\theta_i$ for all $i = 1, 2, \ldots, k$. Thus, as $n \to \infty$,
$$P_\theta[|T_{in} - \theta_i| < \epsilon] = P_\theta(E_i) \to 1 \Rightarrow P_\theta(E_i^c) \to 0, \ \forall\ i = 1, 2, \ldots, k \Rightarrow P_\theta(E^c) = P_\theta\left(\bigcup_{i=1}^{k}E_i^c\right) \leq \sum_{i=1}^{k}P_\theta(E_i^c) \to 0$$
$$\Rightarrow \ T_n \text{ is jointly consistent for } \theta.$$



Once the equivalence between joint and marginal consistency is established, it is of interest to find out which of the results from real parameter setup can be extended to vector parameter setup. In the next theorem, we prove that the invariance property of consistency under continuous transformation remains valid in vector setup as well. Theorem 2.5.2 Suppose T n is a consistent estimator of θ .

(i) (ii)

Suppose g : Rk → R is a continuous function. Then g(T n ) is consistent for g(θ ). Suppose g : Rk → Rl , l ≤ k is a continuous function. Then g(T n ) is consistent for g(θ).


Proof (i) Continuity of g : Rk → R implies that given  > 0, ∃ δ > 0 such that when T n ∈ Nδ (θ ), |g(T n ) − g(θ )| < . Thus, ∀  > 0, ∀ θ ∈ , [|g(T n ) − g(θ )| < ] ⊃ [T n ∈ Nδ (θ )] ⇒ Pθ [|g(T n ) − g(θ )| < ] ≥ Pθ [T n ∈ Nδ (θ )] → 1 ∀ δ > 0 ⇒ Pθ [|g(T n ) − g(θ )| < ] → 1 ∀  > 0 n → ∞ Pθ

⇒ g(T n ) → g(θ ), ∀ θ ∈  ⇒ g(T n ) is consistent for g(θ ). (ii) A function g : Rk → Rl can be expressed as g(x) = (g1 (x), g2 (x), . . . , gl (x)) , where x = (x1 , x2 , . . . , xk ) and gi (x), ∀ i = 1, 2, . . . , l is a function from Rk to R. It is given that g is a continuous function, hence gi (x), ∀ i = 1, 2, . . . , l is a continuous function from Rk to R. Hence Pθ

gi (T n ) → gi (θ ), ∀ θ ∈  & ∀ i = 1, 2, . . . , l by (i) Pθ

⇒ (g1 (T n ), g2 (T n ), . . . , gl (T n )) → (g1 (θ ), g2 (θ ), . . . , gl (θ )) by Theorem 2.5.1 Pθ

⇒ g(T n ) → g(θ ) , ∀ θ ∈  ⇒ g(T n ) is consistent for g(θ ).

 In Theorem 2.2.5, it has been proved that the sample raw moments are consistent for the corresponding population raw moments. Using the invariance property of consistency under continuous transformations in vector parameter setup, in the next theorem, we prove that sample central moments are consistent for the corresponding population central moments. Theorem 2.5.3 Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X , with indexing parameter θ ∈ . Then sample central moment n (X i − X n )r of order, r ≥ 1, is a consistent estimator of population m r = n1 i=1 central moment μr (θ ) = E θ (X − E(X ))r of order r , provided it is finite.

Proof By the binomial theorem, we have
$$\mu_r(\theta) = E_\theta(X - E(X))^r = \sum_{j=0}^{r}\binom{r}{j}E(X^j)(-1)^{r-j}(E(X))^{r-j} = \sum_{j=0}^{r}\binom{r}{j}\mu_j'(-1)^{r-j}(\mu_1')^{r-j},$$
and it can be presented as a continuous function $g = g(\mu_1', \mu_2', \ldots, \mu_r')$ from $\mathbb{R}^r$ to $\mathbb{R}$. On similar lines, $m_r = \sum_{i=1}^{n}(X_i - \bar{X}_n)^r/n$ can be expressed as
$$m_r = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)^r = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=0}^{r}\binom{r}{j}X_i^j(-1)^{r-j}(\bar{X}_n)^{r-j} = \sum_{j=0}^{r}\binom{r}{j}m_j'(-1)^{r-j}(m_1')^{r-j} = g(m_1', m_2', \ldots, m_r').$$
In Theorem 2.2.5, it has been proved that $m_i'$ is consistent for $\mu_i'$, $i = 1, 2, \ldots, r$, provided $\mu_r'$ is finite. Since marginal consistency is equivalent to joint consistency, we get that $(m_1', m_2', \ldots, m_r')$ is consistent for $(\mu_1', \mu_2', \ldots, \mu_r')$ and hence, by Theorem 2.5.2,
$$m_r = g(m_1', m_2', \ldots, m_r') \xrightarrow{P_\theta} g(\mu_1', \mu_2', \ldots, \mu_r') = \mu_r(\theta).$$
Thus, the sample central moment $m_r$ is a consistent estimator of the population central moment $\mu_r(\theta)$. In the next theorem, we extend Theorem 2.5.3 to product moments.

Theorem 2.5.4 Suppose $Z = (X, Y)'$ has a bivariate probability distribution with $E(X^2) < \infty$ and $E(Y^2) < \infty$. Then the sample correlation coefficient between $X$ and $Y$ based on a random sample of size $n$ from the distribution of $Z$ is a consistent estimator of the population correlation coefficient between $X$ and $Y$.

Proof The population correlation coefficient $\rho$ between $X$ and $Y$ is given by $\rho = Cov(X, Y)/\sigma_X\sigma_Y$ and the sample correlation coefficient $R_n$ based on a random sample of size $n$ is defined as
$$R_n = \frac{S_{XY}^2}{S_XS_Y}, \quad \text{where } S_{XY}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)(Y_i - \bar{Y}_n), \quad S_X^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2 \ \text{ and } \ S_Y^2 = \frac{1}{n}\sum_{i=1}^{n}(Y_i - \bar{Y}_n)^2.$$

By Theorem 2.5.3, S X and SY are consistent for σ X and σY respectively. The population covariance between X and Y is Cov(X , Y ) = E(X Y ) − E(X )E(Y ). By Khintchine’s WLLN, X n and Y n are consistent for E(X ) and E(Y ) respectively. To find a consistent estimator for E(X Y ) we define U = X Y , being Borel function it is a random variable. A random sample of size n from the distribution of Z , gives a random sample of size n from the distribution of U and again by Khintchine’s WLLN,


$\bar{U}_n = \sum_{i=1}^{n}U_i/n = \sum_{i=1}^{n}X_iY_i/n$ is consistent for $E(U) = E(XY)$. Hence, a consistent estimator for the covariance between $X$ and $Y$ is given by
$$\frac{1}{n}\sum_{i=1}^{n}X_iY_i - \bar{X}_n\bar{Y}_n = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X}_n)(Y_i - \bar{Y}_n) = S_{XY}^2.$$

Convergence in probability is closed under all arithmetic operations. Hence, Rn is a consistent estimator of ρ.   Remark 2.5.1

From Theorem 2.2.5 and Theorem 2.5.3, it is clear that the sample mean is consistent for population mean, sample variance is consistent for population variance. It is in contrast with the result that sample variance is not unbiased for population variance. Further, convergence in probability is closed under all arithmetic operations. Hence, from these two theorems along with Theorem 2.5.4, we get that sample regression coefficients, sample multiple correlation coefficient and sample partial correlation coefficients are consistent for corresponding population coefficients. Following examples illustrate the results established in the above theorems.  Example 2.5.1

Suppose Tin , i = 1, 2, . . . , l are consistent estimators for θ . The convex combination Tn = li=1 αi Tin can be expressed as Tn = g(T1n , T2n , . . . , Tln ) where   g(x1 , x2 , . . . , xl ) = li=1 αi xi , with li=1 αi = 1, is a continuous function from Rl → R. Hence, consistency of Tn follows from Theorem 2.5.2. Thus, a convex combination of consistent estimators of θ is again a consistent estimator of θ .   Example 2.5.2

Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution of a random variable $X$, which is absolutely continuous with support $[\theta_1, \theta_2]$, where $\theta_1 < \theta_2 \in \mathbb{R}$, and distribution function $F$. As discussed in Theorem 2.2.6, if the distribution of $X$ is absolutely continuous with distribution function $F(\cdot)$, then by the probability integral transformation $U = F(X) \sim U(0, 1)$. As a consequence, $\{F(X_{(1)}), F(X_{(n)})\}$ can be treated as $\{U_{(1)}, U_{(n)}\}$, the minimum and the maximum order statistics corresponding to a random sample of size $n$ from the $U(0, 1)$ distribution. If $U \sim U(0, 1)$, then the distribution functions $F_{U_{(1)}}(x)$ of $U_{(1)}$ and $F_{U_{(n)}}(x)$ of $U_{(n)}$ are given by
$$F_{U_{(1)}}(x) = \begin{cases} 0, & \text{if } x < 0 \\ 1 - (1 - x)^n, & \text{if } 0 \leq x < 1 \\ 1, & \text{if } x \geq 1 \end{cases} \qquad \text{and} \qquad F_{U_{(n)}}(x) = \begin{cases} 0, & \text{if } x < 0 \\ x^n, & \text{if } 0 \leq x < 1 \\ 1, & \text{if } x \geq 1. \end{cases}$$
For $\epsilon > 0$, $P[|U_{(1)} - 0| < \epsilon] = P[U_{(1)} < \epsilon]$, which equals 1 for $\epsilon > 1$ and equals $1 - (1 - \epsilon)^n \to 1$ for $\epsilon \leq 1$, so that $U_{(1)} \xrightarrow{P} 0$. Similarly, $P[|U_{(n)} - 1| < \epsilon] = 1$ for $\epsilon > 1$, as $P[0 < U_{(n)} < 1] = 1$. For $\epsilon \leq 1$, we have
$$P[|U_{(n)} - 1| < \epsilon] = P[1 - \epsilon < U_{(n)} < 1] = F_{U_{(n)}}(1) - F_{U_{(n)}}(1 - \epsilon) = 1 - (1 - \epsilon)^n \to 1.$$
Hence, we conclude that $U_{(n)} \xrightarrow{P} 1$. Thus,
$$\text{if } U \sim U(0, 1), \ \text{ then } \ U_{(1)} \xrightarrow{P} 0 \ \text{ and } \ U_{(n)} \xrightarrow{P} 1.$$
Suppose $F^{-1}(\cdot)$ exists. It is continuous, as $F$ is continuous; hence, by the invariance property of consistency under continuous transformation, it follows that
$$U_{(1)} = F(X_{(1)}) \xrightarrow{P} 0 \ \text{ and } \ U_{(n)} = F(X_{(n)}) \xrightarrow{P} 1 \Rightarrow F^{-1}(F(X_{(1)})) = X_{(1)} \xrightarrow{P} F^{-1}(0) = \theta_1 \ \text{ and } \ F^{-1}(F(X_{(n)})) = X_{(n)} \xrightarrow{P} F^{-1}(1) = \theta_2,$$
for all $\theta_1$ and $\theta_2$. Thus, $(X_{(1)}, X_{(n)})'$ is consistent for $(\theta_1, \theta_2)'$.
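A minimal R sketch (not from the text; $\theta_1 = 2$, $\theta_2 = 5$ and the sample sizes are arbitrary) illustrates the joint behaviour of $(X_{(1)}, X_{(n)})$ in the uniform case, which satisfies the assumptions of this example.

# Joint behaviour of the sample minimum and maximum for U(theta1, theta2)
set.seed(123)
theta1 <- 2; theta2 <- 5
for (n in c(50, 500, 5000)) {
  x <- runif(n, theta1, theta2)
  cat("n =", n, " (min, max) = (", round(min(x), 4), ",", round(max(x), 4), ")\n")
}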



 Example 2.5.3

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (μ, σ 2 ) distribution. For normal N (μ, σ 2 ) distribution, μ1 = μ and μ2 = σ 2 . Hence, by the WLLN, X n is consistent for μ and by Theorem 2.5.3, m 2 is consistent  for σ 2 .  Example 2.5.4

Suppose X follows a gamma distribution with scale parameter α and shape parameter λ. Then its probability density function, mean and variance are given by

f(x, α, λ) = (α^λ / Γ(λ)) e^{−αx} x^{λ−1},  x > 0, α > 0, λ > 0,  μ′_1 = λ/α and μ_2 = λ/α².

Suppose {X_1, X_2, . . . , X_n} is a random sample from the distribution of X. By the WLLN, m′_1 →Pθ μ′_1 = λ/α and, by Theorem 2.5.3, m_2 →Pθ μ_2 = λ/α². Convergence in probability is closed under all arithmetic operations, hence m′_1/m_2 →Pθ α and m′_1²/m_2 →Pθ λ. By Theorem 2.5.1, joint consistency is equivalent to marginal consistency, hence the moment estimator T_n = (m′_1/m_2, m′_1²/m_2)′ of θ = (α, λ)′ is a consistent estimator of θ. ◻
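A brief simulation sketch of these moment estimators; the values α = 2 and λ = 3 are assumptions made only for this illustration, and note that rgamma() is parametrized by shape (= λ) and rate (= α).

al = 2; lam = 3; set.seed(1)
for(n in c(100, 1000, 10000))
{
  x = rgamma(n, shape = lam, rate = al)
  m1 = mean(x); m2 = mean((x - m1)^2)   ## sample raw first and central second moments
  print(c(n, m1/m2, m1^2/m2))           ## should approach (alpha, lambda) = (2, 3)
}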

Remark 2.5.2

Consistency of T_n in Example 2.5.4 can also be shown by finding an appropriate transformation g : R² → R² and using the result established in Theorem 2.5.2. We adopt such an approach in the next example.

Example 2.5.5

Suppose {X_1, X_2, . . . , X_n} is a random sample from a lognormal LN(μ, σ²) distribution. The first and the second population raw moments are μ′_1 = exp(μ + σ²/2) and μ′_2 = exp(2μ + 2σ²). We examine whether a moment estimator of θ = (μ, σ²)′ is a consistent estimator. By the WLLN,

m′_1 →Pθ μ′_1 and m′_2 →Pθ μ′_2  ⇒  (m′_1, m′_2)′ →Pθ (μ′_1, μ′_2)′

by Theorem 2.5.1. Now we find a continuous function g : R² → R² such that g(m′_1, m′_2) →Pθ g(μ′_1, μ′_2) = (μ, σ²)′. Suppose g is defined as g(x_1, x_2) = (g_1(x_1, x_2), g_2(x_1, x_2))′, where

g_1(x_1, x_2) = 2 log x_1 − (log x_2)/2  and  g_2(x_1, x_2) = log x_2 − 2 log x_1.

It is easy to see that g is a continuous function and

g_1(μ′_1, μ′_2) = 2 log μ′_1 − (log μ′_2)/2 = μ  and  g_2(μ′_1, μ′_2) = log μ′_2 − 2 log μ′_1 = σ².

Hence, by the invariance property of consistency under continuous transformation,

T_n = g(m′_1, m′_2) = (2 log m′_1 − (log m′_2)/2, log m′_2 − 2 log m′_1)′ →Pθ θ = (μ, σ²)′.

Observe that T_n is a moment estimator of θ = (μ, σ²)′ and is a consistent estimator of θ. ◻


 Remark 2.5.3

There is one more approach to solve Example 2.5.5. It is known that if X ~ LN(μ, σ²), then Y = log X ~ N(μ, σ²). A random sample {X_1, X_2, . . . , X_n} from the distribution of X is equivalent to a random sample {Y_1, Y_2, . . . , Y_n} from the distribution of Y. In Example 2.5.3, we have shown that for the N(μ, σ²) distribution the sample mean and the sample variance are consistent for μ and σ² respectively. Hence, (Ȳ_n, S²_Y)′ is consistent for θ = (μ, σ²)′, where

Ȳ_n = (1/n) Σ_{i=1}^n Y_i = (1/n) Σ_{i=1}^n log X_i  and  S²_Y = (1/n) Σ_{i=1}^n (Y_i − Ȳ_n)².

 Example 2.5.6

Suppose Z = (X, Y)′ has a bivariate normal N_2(μ_1, μ_2, σ²_1, σ²_2, ρ) distribution and {Z_1, Z_2, . . . , Z_n} is a random sample of size n from the distribution of Z. Since Z = (X, Y)′ has a bivariate normal distribution, X ~ N(μ_1, σ²_1) and Y ~ N(μ_2, σ²_2). A random sample {Z_1, Z_2, . . . , Z_n} gives a random sample {X_1, X_2, . . . , X_n} from the distribution of X and a random sample {Y_1, Y_2, . . . , Y_n} from the distribution of Y. Hence, by Example 2.5.3, (X̄_n, S²_X)′ is consistent for (μ_1, σ²_1)′ and (Ȳ_n, S²_Y)′ is consistent for (μ_2, σ²_2)′, where S²_X = Σ_{i=1}^n (X_i − X̄_n)²/n and S²_Y = Σ_{i=1}^n (Y_i − Ȳ_n)²/n. To find a consistent estimator for ρ = Cov(X, Y)/σ_1 σ_2, we note from Theorem 2.5.4 that the sample correlation coefficient R_n is a consistent estimator of ρ, where

R_n = S_XY/(S_X S_Y) = [(1/n) Σ_{i=1}^n (X_i − X̄_n)(Y_i − Ȳ_n)] × [(1/n) Σ_{i=1}^n (X_i − X̄_n)² (1/n) Σ_{i=1}^n (Y_i − Ȳ_n)²]^{−1/2}.

Thus, T_n = (X̄_n, Ȳ_n, S²_X, S²_Y, R_n)′ is consistent for θ = (μ_1, μ_2, σ²_1, σ²_2, ρ)′. It is to be noted that T_n is a moment as well as a maximum likelihood estimator of θ. ◻

The following example illustrates that a maximum likelihood estimator need not be consistent. It was the first example of an inconsistent maximum likelihood estimator, given by Neyman and Scott in 1948, and hence it is known as the Neyman-Scott example.


 Example 2.5.7

Suppose X_ij = μ_i + ε_ij, where {ε_ij, i = 1, 2, . . . , n, j = 1, 2} are independent and identically distributed random variables such that ε_ij ~ N(0, σ²). It is a balanced one-way ANOVA design with two observations in each of the n groups. Thus, the random variables X_ij ~ N(μ_i, σ²). The likelihood of μ_i, i = 1, 2, . . . , n, and σ² given the data X = {X_ij, i = 1, 2, . . . , n, j = 1, 2} is given by

L_n(μ_1, μ_2, . . . , μ_n, σ² | X) = (2πσ²)^{−n} exp{ −(1/2σ²) Σ_{i=1}^n Σ_{j=1}^2 (X_ij − μ_i)² }.

The maximum likelihood estimators of the parameters are given by

μ̂_i = (1/2) Σ_{j=1}^2 X_ij = X̄_i, i = 1, 2, . . . , n,  and  σ̂²_n = (1/2n) Σ_{i=1}^n T_i, where T_i = Σ_{j=1}^2 (X_ij − X̄_i)².

It is to be noted that for each i = 1, 2, . . . , n, μ̂_i is an average of two observations and it does not depend on n at all. Observe that T_i/σ² ~ χ²_1, ∀ i = 1, 2, . . . , n. Hence, by the WLLN,

(1/n) Σ_{i=1}^n T_i/σ² →P 1  ⇒  σ̂²_n = (1/2n) Σ_{i=1}^n T_i →P σ²/2.

Thus, σ̂²_n is the maximum likelihood estimator of σ², but it is not consistent for σ². ◻

Remark 2.5.4

It is to be noted that in the model of the Neyman-Scott example, there are two observations in each of the n groups, and as n increases the number of groups increases. We face the same problem even if each group has a finite number k of observations. Thus, the inconsistency of σ̂²_n is not due to the method of maximum likelihood estimation, but due to the model, in which the number of observations in each group remains the same while the number of groups, and hence the number of parameters, increases. The problem of inconsistency arises because the number of observations and the number of parameters grow at the same rate. On the other hand, if we fix the number of groups and let the number of observations in each group increase, then the scenario changes. Thus, if {X_ij, i = 1, 2, j = 1, 2, . . . , n} are independent random variables such that X_ij ~ N(μ_i, σ²), then the maximum likelihood estimators of μ_i, i = 1, 2, and σ² are consistent for μ_i and σ² respectively (see the solution of Exercise 2.8.28).
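A small simulation makes the inconsistency visible: with σ² = 4 and two observations per group, σ̂²_n settles near σ²/2 = 2 rather than 4 as the number of groups grows. The group means generated below are arbitrary assumptions made only for this illustration.

sig2 = 4; set.seed(1)
for(n in c(100, 1000, 10000))
{
  mu = rnorm(n, 10, 5)                       ## n group means (arbitrary choice)
  x1 = rnorm(n, mu, sqrt(sig2)); x2 = rnorm(n, mu, sqrt(sig2))
  xbar = (x1 + x2)/2
  sig2.hat = sum((x1 - xbar)^2 + (x2 - xbar)^2)/(2*n)
  print(c(n, sig2.hat))                      ## approaches sigma^2/2 = 2, not 4
}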


The limit random variable in convergence in probability is almost surely unique; hence a given estimator cannot be consistent for two different parametric functions. However, as the various examples discussed above show, for a given parametric function g(θ) there are a number of consistent estimators. Hence, we must have some criterion to choose the best estimator from the family of consistent estimators of g(θ). In the following section, we discuss two such criteria, one based on the coverage probability and the other based on the mean squared error.

2.6 Performance of a Consistent Estimator

Within a family of consistent estimators of θ, the performance of a consistent estimator is judged by the rate of convergence of the true coverage probability to 1 and of the MSE to 0, provided the MSE of the estimator exists; the faster the rate, the better the estimator. We discuss this concept below.

Criterion based on true coverage probability: Suppose T_1n and T_2n are two consistent estimators of θ. Then for ε > 0 the true coverage probabilities are given by

p_1(ε, θ, n) = Pθ[|T_1n − θ| < ε]  and  p_2(ε, θ, n) = Pθ[|T_2n − θ| < ε].

Since both T_1n and T_2n are consistent estimators of θ, both the true coverage probabilities converge to 1 as n → ∞, and hence the criterion for the preference between the two is based on the rate of convergence to 1. Thus, if p_1(ε, θ, n) → 1 faster than p_2(ε, θ, n) → 1, then T_1n is preferred to T_2n. In general, it is difficult to find the coverage probabilities, and hence the second criterion, based on the mean squared error, is more useful.

Criterion based on mean squared error: Suppose T_1n and T_2n are two consistent estimators of θ such that the mean squared errors of both exist. Then the mean squared errors of both converge to 0. The criterion for the preference between the two is again based on the rate of convergence to 0. Thus, if MSE_θ(T_1n) → 0 faster than that of T_2n, then T_1n is preferred to T_2n. In the class of unbiased estimators, the estimator with the smallest variance is the best estimator; analogously, within the class of MSE consistent estimators, the estimator whose mean squared error converges to 0 fastest is preferable. The following examples illustrate how to judge the performance of a consistent estimator on the basis of the coverage probability and the MSE.

Example 2.6.1

Suppose {X_1, X_2, . . . , X_n} is a random sample of size n from a uniform U(0, θ) distribution. We examine whether T_1n = 2X̄_n or T_2n = X_(n) is a better consistent estimator for θ. If X ~ U(0, θ), then E(X) = θ/2 and Var(X) = θ²/12. Hence,

MSE_θ(T_1n) = E_θ(2X̄_n − θ)² = 4 Var(X̄_n) = (4/n)(θ²/12) = θ²/3n → 0 as n → ∞ at the rate of 1/n.

To find the MSE of T_2n = X_(n), we have E_θ(X_(n)) = nθ/(n + 1) and E_θ(X_(n))² = nθ²/(n + 2). Hence,

MSE_θ(T_2n) = E_θ(X_(n) − θ)² = 2θ²/((n + 1)(n + 2)) → 0 as n → ∞ at the rate of 1/n².

Thus, T_2n = X_(n) is a better consistent estimator of θ than T_1n = 2X̄_n, as its MSE converges to 0 faster than that of T_1n = 2X̄_n. ◻

Example 2.6.2

Suppose {X_1, X_2, . . . , X_n} is a random sample of size n from an exponential distribution with location parameter θ and scale parameter 1. In Example 2.2.14, we have obtained the maximum likelihood estimator of θ; it is X_(1), which is shown to be consistent. Similarly, the consistent estimator based on the sample mean is X̄_n − 1. Further,

MSE_θ(X_(1)) = 2/n²  and  MSE_θ(X̄_n − 1) = 1/n.

Thus, MSE_θ(X_(1)) converges to 0 faster than MSE_θ(X̄_n − 1). Hence, X_(1) is a better consistent estimator of θ than the consistent estimator based on the sample mean. In fact, ∀ n > 2,

2/n² − 1/n = (2 − n)/n² < 0  ⇒  ∀ n > 2, MSE_θ(X_(1)) < MSE_θ(X̄_n − 1). ◻
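A quick empirical check of these two MSEs; the value θ = 1 and the sample sizes and number of replications below are arbitrary choices made only for this illustration.

th = 1; nsim = 10000; set.seed(1)
for(n in c(20, 100, 500))
{
  t1 = t2 = numeric(nsim)
  for(m in 1:nsim)
  {
    x = th + rexp(n)                ## location theta, scale 1
    t1[m] = min(x); t2[m] = mean(x) - 1
  }
  print(c(n, mean((t1 - th)^2), mean((t2 - th)^2)))   ## approx 2/n^2 and 1/n
}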

 Remark 2.6.1

In both the above examples, the maximum likelihood estimator, which is a function of a sufficient statistic, is a better consistent estimator than the moment estimator.

Example 2.6.3

Suppose {X_1, X_2, . . . , X_n} is a random sample of size n from a normal N(θ, 1) distribution, θ ∈ Θ = {0, 1}. In Example 2.2.3 we have obtained the maximum likelihood estimator θ̂_n of θ. It is given by

θ̂_n = 1 if X̄_n > 1/2, and θ̂_n = 0 if X̄_n ≤ 1/2.

It is shown in the same example that for both θ = 0 and θ = 1, the coverage probability is 1 when ε > 1. For 0 < ε ≤ 1, to find the rate of convergence to 1, we use the result that for x > 0 and sufficiently large, Φ(−x) ≈ (1/x)φ(−x), where Φ(·) and φ(·) denote the distribution function and the probability density function of the standard normal distribution respectively. Thus, for 0 < ε ≤ 1,

P_0[|θ̂_n − 0| < ε] = P_0[θ̂_n = 0] = P_0[X̄_n ≤ 1/2] = Φ(√n/2) = 1 − Φ(−√n/2)
  ≈ 1 − (2/√n) φ(−√n/2) = 1 − (2/√n)(1/√(2π)) exp(−n/8) → 1,

exponentially fast as n → ∞. On similar lines,

P_1[|θ̂_n − 1| < ε] = P_1[θ̂_n = 1] = P_1[X̄_n > 1/2] = 1 − Φ(−√n/2)
  ≈ 1 − (2/√n)(1/√(2π)) exp(−n/8) → 1,

exponentially fast as n → ∞. Thus, for both θ = 0 and θ = 1, the coverage probability p(ε, θ, n) is the same, and thus the rate of convergence of the coverage probability to 1 is the same. To find the rate of convergence of the MSE to 0, we find the MSE from the probability mass function of θ̂_n. Thus,

P_θ[θ̂_n = 0] = P_θ[X̄_n < 1/2] = Φ(√n/2) if θ = 0, and Φ(−√n/2) if θ = 1;
P_θ[θ̂_n = 1] = P_θ[X̄_n ≥ 1/2] = 1 − Φ(√n/2) if θ = 0, and 1 − Φ(−√n/2) if θ = 1.

Hence, the MSE of θ̂_n as an estimator of θ is given by

MSE_θ(θ̂_n) = E_θ(θ̂_n − θ)² = (1 − θ)² P_θ[θ̂_n = 1] + (0 − θ)² P_θ[θ̂_n = 0]
  = (1 − θ)² P_θ[X̄_n ≥ 1/2] + θ² P_θ[X̄_n < 1/2]
  = 1 − Φ(√n/2) if θ = 0, and Φ(−√n/2) = 1 − Φ(√n/2) if θ = 1.

Thus, for both θ = 0 and θ = 1,

MSE_θ(θ̂_n) = 1 − Φ(√n/2) = Φ(−√n/2) ≈ (2/√n)(1/√(2π)) exp(−n/8) → 0 exponentially fast,

as n → ∞. Thus, the rate of convergence of the coverage probability to 1 and of the MSE to 0 is the same for θ = 0 and θ = 1. Further observe that p(ε, θ, n) = 1 − MSE_θ(θ̂_n).

Now we consider a family {T_k(X̄_n), 0 < k < 1} of estimators for θ, where the estimator T_k(X̄_n) is defined as follows. For 0 < k < 1,

T_k(X̄_n) = 0 if X̄_n < k, and T_k(X̄_n) = 1 if X̄_n ≥ k.

We verify whether T_k(X̄_n) is consistent for θ. Observe that

P_0[|T_k(X̄_n) − 0| < ε] = 1 if ε > 1, and P_0[T_k(X̄_n) = 0] if 0 < ε ≤ 1,
with P_0[T_k(X̄_n) = 0] = P_0[X̄_n < k] = Φ(√n k) → 1 as n → ∞, as k > 0.

On similar lines,

P_1[|T_k(X̄_n) − 1| < ε] = P_1[1 − ε < T_k(X̄_n) < 1 + ε] = 1 if ε > 1, and P_1[T_k(X̄_n) = 1] if 0 < ε ≤ 1,
with P_1[T_k(X̄_n) = 1] = P_1[X̄_n ≥ k] = 1 − Φ(√n(k − 1)) → 1 as n → ∞, as k < 1.

Thus, if 0 < k < 1 then T_k(X̄_n) is consistent for θ. We obtain the MSE of T_k(X̄_n) on similar lines as those for θ̂_n. Thus,

P_θ[T_k(X̄_n) = 0] = P_θ[X̄_n < k] = Φ(√n k) if θ = 0, and Φ(√n(k − 1)) if θ = 1,
P_θ[T_k(X̄_n) = 1] = P_θ[X̄_n ≥ k] = 1 − Φ(√n k) if θ = 0, and 1 − Φ(√n(k − 1)) if θ = 1.

Hence, the MSE of T_k(X̄_n) as an estimator of θ is given by

MSE_θ(T_k(X̄_n)) = E_θ(T_k(X̄_n) − θ)² = (1 − θ)² P_θ[T_k(X̄_n) = 1] + (0 − θ)² P_θ[T_k(X̄_n) = 0]
  = 1 − Φ(√n k) if θ = 0, and Φ(√n(k − 1)) if θ = 1.

Observe that MSE_θ(T_k(X̄_n)) → 0 as n → ∞, thus T_k(X̄_n) is MSE consistent for θ. For the estimator T_k(X̄_n) also observe that the coverage probability p_k(ε, θ, n) = 1 − MSE_θ(T_k(X̄_n)). Thus, we have an uncountable family of consistent estimators of θ and it is of interest to choose k ∈ (0, 1) to have a better estimator. Note that, for k = 1/2, the expressions for the coverage probability are the same for θ = 0 and 1. Similarly, the expressions for the MSE are the same for θ = 0 and 1. However, for 0 < k < 1/2 and for 1/2 < k < 1, their behavior is in the opposite direction, as shown below.

For 0 < k < 1/2: p_k(ε, 0, n) = Φ(√n k) < Φ(√n/2) and p_k(ε, 1, n) = 1 − Φ(√n(k − 1)) > 1 − Φ(−√n/2) = Φ(√n/2)
  ⇒ p_k(ε, 0, n) < p_k(ε, 1, n) ⇒ MSE_0(T_k(X̄_n)) > MSE_1(T_k(X̄_n)).
For 1/2 < k < 1: p_k(ε, 0, n) = Φ(√n k) > Φ(√n/2) and p_k(ε, 1, n) = 1 − Φ(√n(k − 1)) < 1 − Φ(−√n/2) = Φ(√n/2)
  ⇒ p_k(ε, 0, n) > p_k(ε, 1, n) ⇒ MSE_0(T_k(X̄_n)) < MSE_1(T_k(X̄_n)).

Thus, if we have to choose k ∈ (0, 1) so that the coverage probability and the MSE have the same nature for both parameter values, the only choice is k = 1/2. It is to be noted that the choice of k is not dictated by the rate of convergence of the coverage probability or of the MSE. In the next section, using R code we show that T_{1/2}(X̄_n) = θ̂_n performs better than any other estimator, as is expected since it is the maximum likelihood estimator of θ. ◻

In the next section, we discuss how to verify the consistency of an estimator by simulation, using R software.

2.7 Verification of Consistency Using R

Suppose T_n = T_n(X_1, . . . , X_n) is a consistent estimator of θ, based on a random sample from the distribution of a random variable or a random vector X. According to the definition, an estimator T_n is consistent for θ if for all θ ∈ Θ and for all ε > 0, the coverage probability p_n(ε, θ) = Pθ[|T_n − θ| < ε] → 1 as n → ∞. The consistency of T_n is verified by simulating m random samples from the distribution of X and finding the estimate r_n of the coverage probability, defined as r_n = (1/m) Σ_{i=1}^m I[|T_ni − θ| < ε], where T_ni denotes the value of the estimator computed from the i-th simulated sample and ε > 0.


Here ε > 0 and δ ∈ (0, 1) are specified constants. For T_n = X_(n) the minimum sample size is given by

n_0 = [log δ / log((θ − ε)/θ)] + 1.

Using the CLT, √n(2X̄_n − θ) →L Z_1 ~ N(0, θ²/3). Thus,

Pθ[|2X̄_n − θ| < ε] ≥ 1 − δ  ⇔  2Φ(ε√(3n)/θ) − 1 ≥ 1 − δ
  ⇒  n_0 = [(θ²/3ε²)(Φ^{-1}(1 − δ/2))²] + 1.

For 2X̄_n, from Chebyshev's inequality we get n_0 = [θ²/3δε²] + 1. In the following code using R software, we compute the values of n_0 for θ = 2, ε = 0.02, 0.03, 0.04 and δ = 0.02 when T_n = X_(n) and T_n = 2X̄_n. We use the function floor(x), which gives the integer part of x.

th=2; ep=c(0.02,0.03,0.04); del=0.02
n0=floor(log(del)/log((2-ep)/2)) + 1; n0    ## Minimum sample size for X_(n)
n1=floor(th^2*(qnorm(1-del/2))^2/(3*ep^2)) + 1; n1   ## Minimum sample size for
                                            ## 2*sample mean by normal approximation
n2=floor(th^2/(3*del*ep^2)) + 1; n2         ## Minimum sample size for 2*sample mean
                                            ## by Chebyshev's inequality

From the output, we note that corresponding to ε = 0.02, 0.03, 0.04 and δ = 0.02, the minimum sample sizes for X_(n) are 390, 259, 194; for 2X̄_n using the normal approximation, the minimum sample sizes are 18040, 8018, 4510; and using Chebyshev's inequality, these are 166667, 74075, 41667. It is to be noted that for ε = 0.03, the minimum sample size for X_(n) is 259 while for 2X̄_n it is 8018, almost 30 times more. The following code gives one more approach to find the minimum sample size for 2X̄_n corresponding to the given precision in terms of ε and δ.

th = 2; ep = 0.04; del = 0.02; n = 100
nsim = 1000; ind = 0; t2 = c(); pr = 0
while(ind==0)
{
  for(i in 1:nsim)
  {
    set.seed(i)
    x = runif(n,0,th)
    t2[i] = 2*mean(x)
  }
  pr = length(which(abs(t2-th) < ep))/nsim
  if(pr > 1-del) ind = 1
  n = n + 50
}
n0 = n - 50; n0

The minimum sample size for 2X̄_n corresponding to ε = 0.04 and δ = 0.02 turns out to be 4650.

Remark 2.7.1

It is to be noted that such an approach to deciding the minimum sample size, as in the above code, is quite general and is not based on any formula. It can be used for any distribution and any estimator. In this setup, instead of using any formula from theory, we rely purely on computation: we keep computing the desired probability for increasing values of the sample size n and stop when the threshold 1 − δ is crossed; the corresponding value of n is the required minimum sample size. One more feature of such a purely computational approach is that the minimum sample size may change if we change the seed while generating the samples. More precisely, if we change the seed, the minimum sample size may change by the increment in the sample size n, which is 50 in the above code. For example, if in the above code we set the seed as 2i, then the minimum sample size corresponding to the same ε and δ is 4600. In the theoretical formulae based on the normal approximation or on Chebyshev's inequality, the minimum sample size does not depend on the generated samples at all. Using the normal approximation, the minimum sample size for 2X̄_n corresponding to ε = 0.04 and δ = 0.02 is 4510. Thus, the formula and the purely computational approach give approximately similar results. In the above code, if we replace the estimator 2X̄_n by X_(n), the minimum sample size corresponding to ε = 0.04 and δ = 0.02 is 200; with the formula it is 194, so again the two approaches give comparable results. In view of the major difference in the minimum sample sizes for X_(n) and 2X̄_n, in the following code we take different initial sample sizes to verify the consistency of X_(n) and 2X̄_n.


### Consistency of X(n)
th = 2; eps = c(0.02,0.03,0.04); tmax = c()
init = 100; incr = 100; nmax = 700; nsim = 1000
N = seq(init,nmax,incr); p1 = matrix(nrow=3,ncol=length(N))
for(i in 1:3)
{
  for(j in 1:length(N))
  {
    n = N[j]
    for(m in 1:nsim)
    {
      set.seed(m)
      x = runif(n,0,th)
      tmax[m] = max(x)   ## Estimator for m-th sample
    }
    p1[i,j] = length(which(abs(tmax-th) < eps[i]))/nsim
  }
}

σ > 0, μ ∈ R. Suppose {X_1, X_2, . . . , X_n} is a random sample from the distribution of X. (i) Verify whether X̄_n is consistent for μ or σ. (ii) Find a consistent estimator for θ = (μ, σ)′ based on the sample median and the sample mean.

2.8.26

Suppose {(X_1, Y_1)′, (X_2, Y_2)′, . . . , (X_n, Y_n)′} is a random sample from a bivariate Cauchy C_2(θ_1, θ_2, λ) distribution, with probability density function given by (Kotz et al. [2])

f(x, y, θ_1, θ_2, λ) = (λ/2π) {λ² + (x − θ_1)² + (y − θ_2)²}^{−3/2},  (x, y) ∈ R², θ_1, θ_2 ∈ R, λ > 0.

Using marginal sample quartiles, obtain two distinct consistent estimators of (θ_1, θ_2, λ)′. Hence, obtain a family of consistent estimators of (θ_1, θ_2, λ)′.

2.8.27

Suppose {X_1, X_2, . . . , X_n} is a random sample from an exponential distribution with probability density function f(x, θ) given by f(x, θ) = (1/α) exp{−(x − θ)/α}, x ≥ θ, θ ∈ R, α > 0. Show that (X_(1), Σ_{i=2}^n (X_(i) − X_(1))/(n − 1))′ is consistent for (θ, α)′. Obtain a consistent estimator of (θ, α)′ based on sample moments.

2.8.28

Suppose {X_ij, i = 1, 2, j = 1, 2, . . . , n} are independent random variables such that X_ij ~ N(μ_i, σ²). Find the maximum likelihood estimators of μ_i, i = 1, 2, and σ². Examine whether these are consistent.


2.8.29

Suppose {X_1, X_2, . . . , X_n} is a random sample from a normal N(θ, σ²) distribution. Suppose S²_n = Σ_{i=1}^n (X_i − X̄_n)². (i) Examine whether T_1n = S²_n/n and T_2n = S²_n/(n − 1) are consistent for σ². (ii) Show that MSE_θ(T_1n) < MSE_θ(T_2n), ∀ n ≥ 2. (iii) Show that T_3n = S²_n/(n + k) is consistent for σ². Determine k such that MSE_θ(T_3n) is minimum.

2.8.30

An electronic device is such that the probability of its instantaneous failure is θ , that is, if X denotes the life length random variable of the device, then P[X = 0] = θ . Given that X > 0, the conditional distribution of life length is exponential with mean α. In a random sample of size n, it is observed that r items failed instantaneously and remaining n − r items had life times {X i1 , X i2 , . . . , X in−r }. On the basis of these data, find consistent estimator of (θ, α) .

2.8.31

A linear regression model is Y = a + bX +  where E() = 0 & V ar () = σ 2 . Suppose {(X i , Yi ) , i = 1, 2, . . . , n} is a random sample from the distribution of (X , Y ) . Examine whether the least square estimators of a and b are consistent for a and b respectively.

2.8.32

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (μ, σ 2 ) distribution. Find a consistent estimator of P[X 1 < a] where a is any real number.

2.8.33

Suppose {Z_1, Z_2, . . . , Z_n} is a random sample of size n from a multivariate normal N_p(μ, Σ) distribution. Find a consistent estimator of θ = (μ, Σ). Also find a consistent estimator of l′μ, where l is a vector in R^p.

2.8.34

On the basis of a random sample of size n from a multinomial distribution in k cells with cell probabilities (p_1, p_2, . . . , p_k), with Σ_{i=1}^k p_i = 1, find a consistent estimator for p = (p_1, p_2, . . . , p_{k−1})′.

2.8.35

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a uniform U (θ1 , θ2 ) distribution, −∞ < θ1 < x < θ2 < ∞. Examine whether (X (1) , X (n) ) is consistent for (θ1 , θ2 ) . Obtain consistent estimator for (θ1 + θ2 )/2 and (θ2 − θ1 )2 /12 based on (X (1) , X (n) ) and also based on sample moments. Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a Laplace distribution with probability density function given by,

2.8.36

f (x, θ, λ) = (1/2λ) exp{−|x − θ |/λ}, x ∈ R, θ ∈ R, λ > 0 . Using stepwise maximization procedure find the maximum likelihood estimator of θ and λ and examine if those are consistent for θ and λ respectively.

2.9 Computational Exercises

Verify the results established in the following examples and exercises by simulation using R software. 2.9.1

Exercise 2.8.7(i) (Hint: Use code similar to Example 2.7.1).

2.9.2

Exercise 2.8.25 (Hint: Use code similar to Examples 2.7.1 and 2.7.4).

2.9.3

Exercise 2.8.27 (Hint: Use code similar to Example 2.7.4).

2.9.4

Exercise 2.8.29((i)&(ii)) (Hint: Use code similar to Example 2.7.1).

2.9.5

Example 2.2.6

2.9.6

Example 2.2.8

2.9.7

Example 2.2.15, for p = 0.25, 0.50, 0.75

2.9.8

Example 2.5.3

2.9.9

Example 2.5.4

2.9.10

Example 2.6.1

2.9.11

Example 2.6.2

References

1. Gut, A. (2005). Probability: A graduate course. New York: Springer.
2. Kotz, S., Balakrishnan, N., & Johnson, N. L. (2000). Continuous multivariate distributions: Models and applications (Vol. 1, 2nd ed.). New York: Wiley.

3 Consistent and Asymptotically Normal Estimators

Contents
3.1 Introduction
3.2 CAN Estimator: Real Parameter Setup
3.3 CAN Estimator: Vector Parameter Setup
3.4 Verification of CAN Property Using R
3.5 Conceptual Exercises
3.6 Computational Exercises

Learning Objectives
After going through this chapter, the readers should be able
– to understand the concept of a consistent and asymptotically normal (CAN) estimator of a real and vector valued parameter
– to generate a CAN estimator using different methods such as the method of moments, methods based on the sample quantiles and the delta method
– to judge the asymptotic normality of estimators using R

3.1 Introduction

In Chap. 2 we discussed a basic large sample property of an estimator, namely consistency. The present chapter is devoted to the study of an additional property of a consistent estimator, which involves its asymptotic distribution under a suitable normalization. Suppose T_n is a consistent estimator of θ. In view of the fact that convergence


in probability implies convergence in law, T_n →L θ, ∀ θ. Thus, the asymptotic distribution of T_n is degenerate at θ. Such a degenerate distribution is not helpful to find the rate of convergence or to find an interval estimator of θ. Hence, our aim is to find a blowing factor a_n such that the asymptotic distribution of a_n(T_n − θ) is non-degenerate.

Suppose a_n(T_n − θ) →L U, where U is a real random variable and a_n → ∞ as n → ∞. Then a_n(T_n − θ) is bounded in probability, by Result 1.3.10, which states that if X_n →L X then the sequence {X_n, n ≥ 1} is bounded in probability. Hence,

T_n − θ = (1/a_n) a_n(T_n − θ) →Pθ 0, ∀ θ.

Hence, T_n is a consistent estimator of θ. Thus, if a_n(T_n − θ) converges in law, then T_n is consistent for θ. In particular, if a_n = √n, then the estimator T_n is said to be a √n-consistent estimator of θ. It is particularly of interest to find a sequence {a_n, n ≥ 1} of real numbers tending to ∞ as n → ∞, so that the asymptotic distribution of a_n(T_n − θ) is normal. Estimators for which the large sample distribution of a_n(T_n − θ) is normal are known as consistent and asymptotically normal (CAN) estimators. These play a key role in large sample inference theory. In Sects. 3.2 and 3.3, we investigate various properties of CAN estimators for a real and a vector parameter respectively. Section 3.4 is devoted to verification of the CAN property by simulation using R.

3.2 CAN Estimator: Real Parameter Setup

We begin with a precise definition of a CAN estimator for a real parameter setup.

Definition 3.2.1 (Consistent and Asymptotically Normal Estimator) Suppose T_n is an estimator of θ and suppose there exists a sequence {a_n, n ≥ 1} of real numbers tending to ∞ as n → ∞, such that a_n(T_n − θ)/√v(θ) →L Z as n → ∞, where Z ~ N(0, 1) and 0 < v(θ) < ∞. Then the estimator T_n is said to be a consistent and asymptotically normal (CAN) estimator with approximate variance v(θ)/a²_n.

As mentioned in the last chapter, the two criteria for comparing consistent estimators reduce to a single criterion, comparison of approximate variances, if we restrict the class of all consistent estimators to the subclass of CAN estimators. If T_n is a CAN estimator of θ, it is asymptotically unbiased and hence the MSE of T_n as an estimator of θ is the approximate variance v(θ)/a²_n. Thus, comparison based on MSE reduces to comparison based on the approximate variance. Suppose T_1n and T_2n are two CAN estimators of θ such that

a_n(T_1n − θ) →L U_1 ~ N(0, v_1(θ)) and a_n(T_2n − θ) →L U_2 ~ N(0, v_2(θ)).

If v_1(θ) < v_2(θ) ∀ θ, then T_1n is preferred to T_2n for large n.


Using the asymptotic normal distribution of T_n, we can obtain the rate of convergence of the coverage probability, as shown below. Suppose for ε > 0, p(ε, n) denotes the coverage probability of a CAN estimator T_n of θ. Then

p(ε, n) = Pθ[|T_n − θ| < ε] = Pθ[−a_n ε/√v(θ) < a_n(T_n − θ)/√v(θ) < a_n ε/√v(θ)] ≈ 2Φ(a_n ε/√v(θ)) − 1 → 1

as n → ∞, since a_n → ∞. Suppose T_1n and T_2n are two CAN estimators of θ with approximate variances v_1(θ)/a²_n and v_2(θ)/a²_n respectively. Then for any ε > 0, the coverage probabilities are given by

p_1(ε, n) = 2Φ(a_n ε/√v_1(θ)) − 1 and p_2(ε, n) = 2Φ(a_n ε/√v_2(θ)) − 1.

Suppose v_1(θ) < v_2(θ) ∀ θ. Then for any ε > 0 and for large n,

v_1(θ) < v_2(θ) ⇒ a_n ε/√v_1(θ) > a_n ε/√v_2(θ) ⇒ p_1(ε, n) > p_2(ε, n)

and T_1n is preferred to T_2n for large n. If the sequence of norming constants is {a_n, n ≥ 1} for T_1n and {b_n, n ≥ 1} for T_2n, then the comparison is based on the approximate variances v_1(θ)/a²_n and v_2(θ)/b²_n. If v_1(θ)/a²_n < v_2(θ)/b²_n ∀ θ, then T_1n is preferred to T_2n for large n.

From the large sample expression p(ε, n) = 2Φ(a_n ε/√v(θ)) − 1, we can find the minimum sample size n_0(ε, δ, θ) required for a consistent estimator T_n to achieve the degree of accuracy specified by (ε, δ), ε > 0, 0 < δ < 1. Thus, in particular with a²_n = n,

p(ε, n) = 2Φ(√n ε/√v(θ)) − 1 > 1 − δ  ⇒  n_0(ε, δ, θ) = [(v(θ)/ε²)(Φ^{-1}(1 − δ/2))²] + 1.

If T_1n and T_2n are two CAN estimators of θ with approximate variances v_1(θ)/n and v_2(θ)/n respectively, then corresponding to the degree of accuracy specified by (ε, δ), the minimum sample sizes are given by

n_01(ε, δ, θ) = [(v_1(θ)/ε²)(Φ^{-1}(1 − δ/2))²] + 1 and n_02(ε, δ, θ) = [(v_2(θ)/ε²)(Φ^{-1}(1 − δ/2))²] + 1.

Hence, if v_1(θ) < v_2(θ) ∀ θ, then n_01(ε, δ, θ) < n_02(ε, δ, θ) ∀ θ and T_1n is preferred to T_2n for large n. The following example illustrates how to verify that a given estimator is a CAN estimator.

Example 3.2.1

(i) Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a N (θ, 1), θ ∈ R. Then the maximum likelihood estimator θˆn of θ is θˆn = X n . In Example 2.2.2, it is for θ . Further, X n has normal N (θ, 1/n) distribution, shown that X n is consistent √ that is, for each n, n(X n − θ ) has N (0, 1) distribution, hence its asymptotic distribution is also standard normal. Thus, X n is CAN for θ with approximate variance 1/n. It may be noted that for N (θ, 1) distribution, the approximate variance of X n is I −1 (θ )/n, where I (θ ) is the information function. (ii) If θ ∈ {0, 1}, it is shown in Example 2.2.3 that the maximum likelihood estimator θˆn of θ is given by θˆn =



1, if X n > 0, if X n ≤

1 2 1 2

and it is consistent for θ . To examine if it is CAN, we find the limit law of √ n(θˆn − θ ) for θ = 0 and θ = 1. For θ = 0, √ √ P0 [ n(θˆn − 0) ≤ x] = P0 [θˆn ≤ x/ n] = 0 if x < 0 as θˆn = 0 or 1. √ √ For x = 0, P0 [ n(θˆn − 0) ≤ 0] = P0 [θˆn = 0] = P0 [X n ≤ 1/2] = ( n/2). For x > 0, √ P0 [θˆn ≤ x/ n] =



√ √ x/ n < 1 P0 [θˆn = 0] = ( n/2), if 0 < √ 1, if x/ n ≥ 1 .

Thus, √ P0 [ n(θˆn − 0) ≤ x] =

⎧ ⎪ ⎪ ⎨

if x 0 ⇔ 1 + x/ n > 1, then P1 [θˆn ≤ 1 + x/ n] = 1. Thus, ⎧ √ 0, if √ x b

√ and is consistent for θ√. To investigate√the asymptotic√ distribution of n(θˆn − θ ), it is to be noted that n(θˆn − θ ) − n(X n − θ ) = n(θˆn − X n ) and √ ∀  > 0, Pθ [ n|θˆn − X n | < ] ≥ Pθ [θˆn = X n ] → 1 as n → ∞, ∀ θ ∈ (a, b) √ as shown of n(θˆn − θ ) √ √ in Example 2.2.3. Thus, the asymptotic distribution and of n(X n − θ ) is the same ∀ θ ∈ (a, b). But n(X n − θ ) has standard normal distribution for all θ ∈ [a, b]. Hence, for all θ ∈ (a, b) the asymptotic √ ˆn − θ ) is standard normal. To find the asymptotic distribution n( θ distribution of √ √ ˆ of n(θˆn − θ ) at θ = a, we study √ the limit of Pa [ n(θn − a) ≤ x] for all x ∈ R. Since θˆn ≥ a, for x < 0, Pa [ n(θˆn − a) ≤ x] = 0. Suppose x = 0. Then √ √ Pa [ n(θˆn − a) ≤ 0] = Pa [ n(θˆn − a) = 0] = Pa [θˆn = a] = Pa [X n < a] √ = Pa [ n(X n − a) < 0] = (0) = 1/2. ˆ ˆ From the expression √ of θn , we have θn ≤ b Suppose 0 < x < n(b − a). Then



√ √ n(θˆn − a) ≤ n(b − a).

√ √ √ Pa [ n(θˆn − a) ≤ x] = Pa [ n(θˆn − a) ≤ 0] + Pa [0 < n(θˆn − a) ≤ x] √ = 1/2 + Pa [0 < n(X n − a) ≤ x] = 1/2 + (x) − 1/2 = (x). If x ≥

√ √ n(b − a), then Pa [ n(θˆn − a) ≤ x] = 1. Thus, ⎧ ⎪ ⎪ ⎨

0, 1/2 = (x), Pa [ n(θˆn − a) ≤ x] = (x), ⎪ ⎪ ⎩ 1, √

if if if if

x 0. Thus, ⎧ √ ⎨ 0, if √ x < n(a − b) √ (x), if n(a − b) ≤ x < 0 Pb [ n(θˆn − b) ≤ x] = ⎩ 1, if x ≥0. Consequently, as n → ∞, √ Pb [ n(θˆn − b) ≤ x] →



(x), if x < 0 1, if x ≥ 0 .

It √ is to be noted that as in the case of θ = a, the asymptotic distribution of n(θˆn − θ ) is not normal at θ = b, and 0 is a point of discontinuity. Further, the asymptotic distribution is a mixture of discrete and continuous distributions


as shown below. Suppose a random variable U3 is defined as U3 = −|U | where U ∼ N (0, 1). Since U3 ≤ 0, P[U3 ≤ x] = 1 for all x ≥ 0. Suppose x < 0, then P[U3 ≤ x] = P[|U | ≥ −x] = 1 − P[x ≤ U ≤ −x] = 1 − (−x) + (x) = 2(x). Thus, the distribution function of U3 is given by

FU3 (x) =

2(x), if x < 0 1, if x ≥ 0 .

√ It then follows that Pb [ n(θˆn − b) ≤ x] → 0.5FU1 (x) + 0.5FU3 (x). Thus, for √ θ = a and θ = a, the asymptotic distribution of n(θˆn − θ ) is not normal. As in (ii), we can show that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of an (θˆn − θ ) is normal for θ = a, b. Hence, we conclude that θˆn is not a CAN estimator of θ when θ ∈ [a, b].   Remark 3.2.1

In Example 3.2.1, it is to be noted that when  = {0, 1}, the maximum likelihood estimator of θ is consistent but not CAN. Similarly, when  = [a, b], the maximum likelihood estimator of θ is consistent but not CAN as the asymptotic distribution is not normal at the boundary points a and b. In both the cases, we do not get asymptotic normality at the parametric points which are not the interior points. In the next chapter, we discuss Cramér Huzurbazar theory in which one of the regularity conditions is that the parameter θ is an interior point of the parameter space. Then for large n, its maximum likelihood estimator exists and is CAN with approximate variance 1/n I (θ ). The most frequently used method to generate a CAN estimator for θ is based on the WLLN and the CLT, provided the underlying assumptions are satisfied. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X with indexing parameter θ . Further, suppose E(X ) = h(θ ) and V ar (X ) = v(θ ). It is assumed that v(θ ) is positive and finite which implies that E(X ) = h(θ ) < ∞. By the WLLN,



X̄_n →Pθ h(θ), and by the CLT, √n(X̄_n − h(θ)) →L U ~ N(0, v(θ)) ∀ θ ∈ Θ.

Hence, X̄_n is CAN for h(θ) with approximate variance v(θ)/n. From X̄_n one can find a CAN estimator for θ provided the function h satisfies certain assumptions. In the following theorem, we discuss how the CAN property remains invariant under


differentiable transformations. This result is usually referred to in the literature as the delta method. The proof is based on Result 1.3.10, which states that if U_n →L U, where U is a real random variable, then U_n is bounded in probability.

Theorem 3.2.1 (Delta method) Suppose T_n is a CAN estimator of θ with approximate variance v(θ)/a²_n. Suppose g is a differentiable function such that g′(θ) ≠ 0 and g′(θ) is continuous. Then g(T_n) is a CAN estimator of g(θ) with approximate variance (g′(θ))² v(θ)/a²_n.

Proof It is given that T_n is a CAN estimator of θ; thus,

T_n →Pθ θ and a_n(T_n − θ) →L U ~ N(0, v(θ)), ∀ θ ∈ Θ.

It is given that g is differentiable, hence g is continuous, and by the invariance of consistency under continuous transformations, g(T_n) is consistent for g(θ). Since g is a differentiable function, by the Taylor series expansion,

g(T_n) = g(θ) + (T_n − θ) g′(θ) + R_n, where |R_n| ≤ M |T_n − θ|^{1+δ}, δ > 0.

Thus,

a_n(g(T_n) − g(θ)) = a_n(T_n − θ) g′(θ) + a_n R_n →L U_1 ~ N(0, (g′(θ))² v(θ)), provided a_n R_n →Pθ 0.   (3.2.1)

Now, |a_n R_n| ≤ M a_n |T_n − θ| |T_n − θ|^δ. Since T_n is consistent for θ, |T_n − θ|^δ →Pθ 0. Further, a_n(T_n − θ) →L U ⇒ a_n|T_n − θ| →L |U|, by the continuous mapping theorem. Thus, a_n|T_n − θ| is bounded in probability. Hence, by Slutsky's theorem, a_n|T_n − θ| |T_n − θ|^δ →Pθ 0, implying that a_n R_n →Pθ 0. Thus, from (3.2.1) we get that a_n(g(T_n) − g(θ)) →L U_1 ~ N(0, (g′(θ))² v(θ)), and hence g(T_n) is a CAN estimator of g(θ) with approximate variance (g′(θ))² v(θ)/a²_n. ◻

Remark 3.2.2

By the continuous mapping theorem, if X_n →L X then g(X_n) →L g(X) if g is a continuous function. In particular, if a_n(T_n − θ) →L U ~ N(0, v(θ)), then g(a_n(T_n − θ)) →L g(U), if g is a continuous function. The distribution of g(U) will again be normal if g is a linear function, as normality is preserved under linear transformations, but not in general. Thus, if g(x) = exp(x), or g(x) = a_0 + a_1 x + a_2 x², a_2 ≠ 0, or g(x) = 1/x, x ≠ 0, then the distribution of g(X) is not


normal. In view of such situations, the result of Theorem 3.2.1 seems surprising, the explanation of which is found in its proof. If instead of continuity we impose the additional condition that g is differentiable, which implies continuity, then for large n in a small neighborhood of θ , g(Tn ) is approximately a linear function of Tn and a linear function of a random variable having normal distribution again has a normal distribution. Further, Tn is CAN for θ and hence the remainder term converges to 0 in probability. Thus, in delta method, an (g(Tn ) − g(θ )) is approximated by a linear function of an (Tn − θ ) and hence the normality is preserved. The following examples illustrate the application of delta method to generate CAN estimators.  Example 3.2.2

Suppose X follows a beta distribution with parameters (θ, 1), having probability density function f(x, θ) = θ x^{θ−1}, 0 < x < 1, θ > 0. Hence,

E(X) = θ/(θ + 1), E(X²) = θ/(θ + 2) and Var(X) = θ/((θ + 2)(θ + 1)²) = v(θ), say.

For θ > 0, v(θ) is positive and finite. Suppose {X_1, X_2, . . . , X_n} is a random sample from the distribution of X. Then by the WLLN and by the CLT,

X̄_n →Pθ θ/(θ + 1) and √n(X̄_n − θ/(θ + 1)) →L Z_1 ~ N(0, v(θ)), ∀ θ > 0.

Thus, X̄_n is CAN for θ/(θ + 1) = φ, say, with approximate variance v(θ)/n = v_1(φ)/n, say. To get a CAN estimator for θ, we use the delta method and define a function g such that g(φ) = θ. Suppose g(φ) = φ/(1 − φ), 0 < φ < 1; then g(φ) = θ. Further, g′(φ) = 1/(1 − φ)² ≠ 0. Hence, by the delta method, g(X̄_n) = X̄_n/(1 − X̄_n) is CAN for g(φ) = θ with approximate variance v_1(φ)(g′(φ))²/n = θ(1 + θ)²/n(θ + 2).

We find some more CAN estimators of θ as follows. In Example 2.2.8 we have shown that S_n = −Σ_{i=1}^n log X_i is a sufficient statistic. Further, it is shown that if a random variable Y is defined as Y = −log X, then the distribution of Y is exponential and that of S_n is gamma G(θ, n) with scale parameter θ and shape parameter n. Thus, the moment estimator θ̃_n of θ based on the sufficient statistic is given by the equation Ȳ_n = S_n/n = E(Y) = 1/θ, and hence θ̃_n = n/S_n = 1/Ȳ_n. By the WLLN, S_n/n →Pθ E(Y) = 1/θ, hence θ̃_n is consistent for θ. Further, Var(Y) = 1/θ² < ∞. Hence, by the CLT,

√n(Ȳ_n − 1/θ) →L Z_2 ~ N(0, 1/θ²) ∀ θ > 0.

Thus, Ȳ_n is CAN for 1/θ = φ with approximate variance 1/nθ². Suppose a function g is defined as g(φ) = 1/φ = θ; then g′(φ) = −1/φ² ≠ 0. Hence, by the delta method, g(Ȳ_n) = 1/Ȳ_n is CAN for g(φ) = 1/φ = θ, with approximate variance (1/nθ²)θ⁴ = θ²/n. Thus, X̄_n/(1 − X̄_n) and 1/Ȳ_n are both CAN for θ with the same norming factor. We compare their approximate variances to examine which is better. It is to be noted that

θ(1 + θ)²/(θ + 2) − θ² = θ/(θ + 2) > 0 ∀ θ > 0,

and hence 1/Ȳ_n is better than X̄_n/(1 − X̄_n). We now find the maximum likelihood estimator of θ. The likelihood of θ given the random sample X = {X_1, X_2, . . . , X_n} is

L_n(θ | X) = Π_{i=1}^n θ X_i^{θ−1}  ⇒  log L_n(θ | X) = n log θ + θ Σ_{i=1}^n log X_i − Σ_{i=1}^n log X_i.

From the log likelihood we have

∂/∂θ log L_n(θ | X) = n/θ + Σ_{i=1}^n log X_i = 0  ⇒  θ = 1/Ȳ_n.

Further, ∂²/∂θ² log L_n(θ | X) = −n/θ² < 0 ∀ θ > 0, and hence also at the solution of the likelihood equation. Hence, the maximum likelihood estimator θ̂_n of θ is given by θ̂_n = 1/Ȳ_n, which is CAN with approximate variance θ²/n. For a random variable X with probability density function f(x, θ) = θ x^{θ−1}, the information function is I(θ) = 1/θ², and thus the approximate variance θ²/n = 1/n I(θ). Further, the maximum likelihood estimator θ̂_n is the same as the moment estimator of θ based on the sufficient statistic and is better than the estimator based on the sample mean. ◻
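A brief simulation comparing the two CAN estimators above; the values θ = 2, n = 200 and the number of replications are arbitrary choices made only for this illustration.

th = 2; n = 200; nsim = 5000; set.seed(1)
t1 = t2 = numeric(nsim)
for(m in 1:nsim)
{
  x = rbeta(n, th, 1)
  t1[m] = mean(x)/(1 - mean(x))     ## estimator based on the sample mean
  t2[m] = -1/mean(log(x))           ## 1/Ybar, the MLE based on Y = -log X
}
print(n*var(t1))    ## approx theta*(1+theta)^2/(theta+2) = 4.5
print(n*var(t2))    ## approx theta^2 = 4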

 Example 3.2.3

Suppose {X_1, X_2, . . . , X_n} is a random sample from a Poisson distribution with mean θ > 0. An estimator T_n is defined as

T_n = X̄_n if X̄_n > 0, and T_n = 0.01 if X̄_n = 0.

Consistency of T_n follows using the same arguments as in Example 2.2.5. For a Poisson distribution with mean θ > 0, E(X) = Var(X) = θ < ∞. Hence, by the WLLN and by the CLT, X̄_n is CAN for θ with approximate variance θ/n. Observe that for ε > 0,

P[|√n(T_n − θ) − √n(X̄_n − θ)| < ε] = P[√n|T_n − X̄_n| < ε] ≥ P[T_n = X̄_n] = P[X̄_n > 0] = 1 − exp(−nθ) → 1, ∀ θ > 0
⇒ √n(T_n − θ) − √n(X̄_n − θ) →Pθ 0, ∀ θ > 0
⇒ if √n(X̄_n − θ) →L U then √n(T_n − θ) →L U, by Result 1.3.8.
Now √n(X̄_n − θ) →L Z_1 ~ N(0, θ) by the CLT
⇒ √n(T_n − θ) →L Z_1 ~ N(0, θ) ∀ θ > 0,

which proves that T_n is CAN for θ with approximate variance θ/n. Suppose g(θ) = e^{−θ}; it is a differentiable function with g′(θ) = −e^{−θ} ≠ 0. Hence, by the delta method, e^{−T_n} is CAN for e^{−θ} = P[X_1 = 0] with approximate variance θ e^{−2θ}/n. ◻
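A quick numerical check of the delta-method variance for e^{−X̄_n}; the values θ = 2 and n = 100 are arbitrary choices made only for this illustration.

th = 2; n = 100; nsim = 10000; set.seed(1)
g = replicate(nsim, exp(-mean(rpois(n, th))))
print(n*var(g))          ## empirical n * variance of exp(-Xbar)
print(th*exp(-2*th))     ## delta-method value theta*exp(-2*theta), about 0.0366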

Example 3.2.4

Suppose {X_1, X_2, . . . , X_n} is a random sample from a normal N(θ, 1) distribution, where θ ∈ R. In Example 3.2.1, we have proved that X̄_n is CAN for θ with approximate variance 1/n. Suppose g : R → R is a function defined as g(x) = x². It is a differentiable function with g′(x) = 2x, and g′(x) ≠ 0 ∀ x ≠ 0. Hence, from Theorem 3.2.1, it follows that √n(X̄²_n − θ²) →L Z_1 ~ N(0, 4θ²) for all θ ∈ R − {0}. At θ = 0, we cannot apply Theorem 3.2.1. When θ = 0, X̄_n ~ N(0, 1/n), that is, √n X̄_n ~ N(0, 1) and hence n X̄²_n ~ χ²_1. As a consequence, n X̄²_n is bounded in probability. Thus, √n(X̄²_n − 0) = (1/√n) n X̄²_n →P0 0 and hence √n(X̄²_n − 0) →L U ≡ 0. As in Example 3.2.1, it can be shown that there exists no sequence {a_n, n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of a_n X̄²_n is normal. Thus, X̄²_n is not CAN for θ² if the parameter space is R. It is CAN if the parameter space is taken as R − {0}. It is to be noted that with norming factor n, n X̄²_n ~ χ²_1. ◻

Example 3.2.4 conveys that if g′(θ) = 0, then g(X̄_n) is not CAN if the parameter space is R. With the norming factor n, the asymptotic distribution of g(X̄_n) − g(θ) is chi-square. In the following theorem, we prove that such a result is true in general for any distribution.


Theorem 3.2.2 Suppose T_n is a CAN estimator of θ with approximate variance v(θ)/a²_n. Suppose g is a differentiable function such that g′(θ) = 0 and g″(θ) ≠ 0. Then

(2/(g″(θ) v(θ))) a²_n (g(T_n) − g(θ)) →L U ~ χ²_1 and a_n(g(T_n) − g(θ)) →L 0.

Proof It is given that T_n is a CAN estimator of θ; thus,

T_n →Pθ θ and a_n(T_n − θ) →L W ~ N(0, v(θ)), ∀ θ ∈ Θ.

Since g is differentiable, by the Taylor series expansion,

g(T_n) = g(θ) + (T_n − θ) g′(θ) + (1/2)(T_n − θ)² g″(θ) + R_n, where |R_n| ≤ M |T_n − θ|^{2+δ}, δ > 0.

Since g′(θ) = 0, we have

a²_n(g(T_n) − g(θ)) = (1/2) (a²_n(T_n − θ)²/v(θ)) v(θ) g″(θ) + a²_n R_n.   (3.2.2)

Since T_n is consistent for θ, |T_n − θ|^δ →Pθ 0. Now

a_n(T_n − θ) →L W ~ N(0, v(θ))
⇒ a²_n(T_n − θ)²/v(θ) →L U ~ χ²_1
⇒ a²_n(T_n − θ)² is bounded in probability
⇒ |a²_n R_n| ≤ M a²_n(T_n − θ)² |T_n − θ|^δ →Pθ 0
⇒ a²_n R_n →Pθ 0
⇒ (2/(g″(θ) v(θ))) a²_n(g(T_n) − g(θ)) →L U ~ χ²_1, from (3.2.2)
⇒ a_n(g(T_n) − g(θ)) = (1/a_n)(g″(θ) v(θ)/2) [(2/(g″(θ) v(θ))) a²_n(g(T_n) − g(θ))] →Pθ 0
⇒ a_n(g(T_n) − g(θ)) →L 0,

the second last step following from Slutsky's theorem. ◻




 Example 3.2.5

Suppose {X_1, X_2, . . . , X_n} is a random sample from the distribution of a random variable X with mean 0, variance σ² and finite fourth central moment μ_4. Since E(X) = 0, Var(X) = E(X²). In Theorem 2.5.3, it is proved that sample central moments are consistent for the corresponding population central moments, hence m_2 = S²_n →Pσ² μ_2 = σ². Now,

√n(S²_n − σ²) = √n((1/n) Σ_{i=1}^n X²_i − X̄²_n − σ²) = (1/√n)(Σ_{i=1}^n X²_i − nσ²) − (√n X̄_n) X̄_n.

Observe that by the CLT,

√n X̄_n →L Z_1 ~ N(0, σ²) ⇒ √n X̄_n is bounded in probability.

By the WLLN, X̄_n →Pσ² 0 ⇒ (√n X̄_n) X̄_n →Pσ² 0. Now, {X_1, X_2, . . . , X_n} being independent and identically distributed random variables implies that {X²_1, X²_2, . . . , X²_n} are also independent and identically distributed random variables, with mean σ² and variance μ_4 − μ²_2. Hence, by the CLT,

(1/√n)(Σ_{i=1}^n X²_i − nσ²) →L Z_2 ~ N(0, μ_4 − μ²_2)
⇒ √n(S²_n − σ²) = (1/√n)(Σ_{i=1}^n X²_i − nσ²) − (√n X̄_n) X̄_n →L Z_2 ~ N(0, μ_4 − μ²_2),

by Slutsky's theorem. The unbiased estimator of σ² is given by

U²_n = Σ_{i=1}^n (X_i − X̄_n)²/(n − 1) = (n/(n − 1)) S²_n = a_n S²_n, where a_n = n/(n − 1) → 1,

and the consistency of U²_n follows from the consistency of S²_n. To examine whether it is CAN, consider

√n(U²_n − σ²) − √n(S²_n − σ²) = √n(a_n S²_n − σ²) − √n(S²_n − σ²) = √n S²_n (a_n − 1) = (√n/(n − 1)) S²_n →Pσ² 0,

as S²_n →Pσ² σ² and hence is bounded in probability. But √n(S²_n − σ²) →L Z_2 ~ N(0, μ_4 − μ²_2), and hence √n(U²_n − σ²) →L Z_2 ~ N(0, μ_4 − μ²_2). These results remain valid even if E(X) ≠ 0; observe that

S²_n = (1/n) Σ_{i=1}^n (X_i − X̄_n)² = (1/n) Σ_{i=1}^n ((X_i − E(X)) − (X̄_n − E(X)))² = (1/n) Σ_{i=1}^n (Y_i − Ȳ_n)²,

where Y_i = X_i − E(X), i = 1, 2, . . . , n, and E(Y_i) = 0, i = 1, 2, . . . , n. We then proceed exactly on similar lines as above and show that even if E(X) ≠ 0, the sample variance S²_n is CAN for σ² and the unbiased estimator of σ² is also CAN for σ². ◻

where Yi = X i − E(X ), i = 1, 2, . . . , n and E(Yi ) = 0, i = 1, 2, . . . , n. We then proceed exactly on similar lines as in (i) and (ii) and show that even if E(X ) = 0, sample variance Sn2 is CAN for σ 2 and the unbiased estimator of σ 2 is also CAN for σ 2 .  In Sect. 2.2, we have discussed a method based on sample quantiles to generate consistent estimators for the parameter of interest. Thus, for a Cauchy C(θ, 1) distribution, with location parameter θ , the sample median is consistent for the population median which is θ . The population first quartile is θ − 1 while the population third quartile is θ + 1. Thus, X ([n/4]+1) + 1 and X ([3n/4]+1) − 1 are also consistent for θ . It is of interest to see whether these are CAN for θ and which among these is the best estimator. Below we state a theorem which is useful to find CAN estimators based on the sample quantiles. For proof, we refer to Serfling [1] and DasGupta [2]. Theorem 3.2.3 Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X which is absolutely continuous with distribution function F(x, θ ) and probability density function f (x, θ ), where θ is an indexing parameter. Suppose F −1 exists and f (a p (θ ), θ ) = 0. Suppose rn = [np] + 1, 0 < p < 1. Then (i) the p-th sample quantile X (rn ) is consistent for the p-th population quantile √ L a p (θ ) and (ii) n(X (rn ) − a p (θ )) → Z 1 ∼ N (0, v(θ )), where v(θ ) = p(1 − p)/( f (a p (θ ), θ ))2 .  Example 3.2.6

Suppose {X_1, X_2, . . . , X_n} is a random sample from each of the following distributions: (i) N(θ, 1), (ii) C(θ, 1) and (iii) U(θ − 1, θ + 1), θ ∈ R. For a normal N(θ, 1) distribution, the p-th population quantile is given by a_p(θ) = θ + Φ^{-1}(p), hence X_([np]+1) − Φ^{-1}(p) is CAN for θ, with approximate variance (1/n) 2π p(1 − p) exp((Φ^{-1}(p))²). For a uniform U(θ − 1, θ + 1) distribution, the p-th population quantile is given by a_p(θ) = 2p + (θ − 1), hence X_([np]+1) − 2p + 1 is CAN for θ, with approximate variance 4p(1 − p)/n. For a Cauchy C(θ, 1) distribution, the p-th population quantile is given by a_p(θ) = θ + tan(π(p − 1/2)), hence X_([np]+1) − tan(π(p − 1/2)) is CAN for θ, with approximate variance (p(1 − p)/n) π²(1 + tan²(π(p − 1/2)))². Thus, we have an uncountable family of CAN estimators of θ for each distribution.

For all the three distributions, θ is the population median and hence the sample median X_([n/2]+1) is CAN for θ with approximate variance v(θ)/n, where v(θ) = 1/4(f(a_{1/2}(θ), θ))². For N(θ, 1), v(θ) = π/2; for C(θ, 1), v(θ) = π²/4; and for U(θ − 1, θ + 1), v(θ) = 1. For both the normal N(θ, 1) and uniform U(θ − 1, θ + 1) distributions, θ is the population mean, and hence the sample mean X̄_n is CAN for θ with approximate variance 1/n for N(θ, 1) and 1/3n for U(θ − 1, θ + 1). In both cases, the sample mean is better than the sample median, as its approximate variance is smaller. For a C(θ, 1) distribution, the sample median is CAN for θ with approximate variance π²/4n. The population first quartile is θ − 1 while the population third quartile is θ + 1. Thus, X_([n/4]+1) + 1 and X_([3n/4]+1) − 1 are also CAN for θ, with the same approximate variance 3π²/4n. Thus, among the CAN estimators of θ based on the sample median, the sample first quartile and the sample third quartile, the one based on the sample median is the best. In fact, within the family of CAN estimators based on sample quantiles, for distributions symmetric around θ, the CAN estimator based on the sample median is the best, its approximate variance being the smallest, which follows from the fact that θ is the mode of the distribution. ◻
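A small simulation comparing n times the empirical variance of the sample mean and the sample median; the values θ = 0, n = 500 and the number of replications are arbitrary choices made only for this illustration.

n = 500; nsim = 5000; set.seed(1)
mn = md = mnc = mdc = numeric(nsim)
for(m in 1:nsim)
{
  x = rnorm(n); y = rcauchy(n)          ## N(0,1) and C(0,1) samples
  mn[m] = mean(x); md[m] = median(x)
  mnc[m] = mean(y); mdc[m] = median(y)
}
print(c(n*var(mn), n*var(md)))    ## approx 1 and pi/2 for N(theta,1)
print(c(n*var(mnc), n*var(mdc)))  ## mean is erratic, median approx pi^2/4 for C(theta,1)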

Suppose a random variable X has an exponential distribution with location parameter θ and scale parameter 1. Its probability density function is given by f X (x, θ ) = exp{−(x − θ )}, x ≥ θ, θ ∈ R. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . In Example 2.2.16, we have obtained the maximum likelihood estimator of θ . It is X (1) and we have also verified that it is consistent for θ . In the same example, we derived the distribution function FX (1) (x, θ ) of X (1) and it is given by

FX (1) (x, θ ) =

0, if x < θ 1 − exp{−n(x − θ )}, if x ≥ θ.

It thus follows that for each n, X (1) has an exponential distribution with location parameter θ and scale parameter n, which further implies that for each n, Yn = n(X (1) − θ ) has the exponential distribution with location parameter 0 and scale parameter 1. Hence, its asymptotic distribution is the same. Thus, with norming factor n, the asymptotic distribution of X (1) is not normal, but we cannot conclude that X (1) is not CAN. As in Example 3.2.1, it can be shown that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of an (X (1) − θ ) is normal. Hence, we conclude that X (1) is not CAN for θ . We now find CAN estimator of θ based on a sample mean.

3.2 CAN Estimator: Real Parameter Setup

111

If a random variable X has an exponential distribution with location parameter θ and scale parameter 1, then E(X ) = θ + 1 and V ar (X ) = 1. Hence, by the Pθ

WLLN, X n → θ + 1, ∀ θ , which implies that X n − 1 is consistent for θ . Further by the CLT, √

n(X n − (θ + 1)) =



L

n((X n − 1) − θ ) → Z ∼ N (0, 1).

Hence, X n − 1 is also CAN for θ with approximate variance 1/n. We now find a CAN estimator of θ based on the sample median. From the distribution function of X , we find the median of X to be a1/2 (θ ) = θ + loge 2. By Theorem 3.3.3, the sample median X (rn ) , where rn = [n/2] + 1, is CAN for θ + loge 2 with approximate variance 1/4n(exp(− loge 2))2 = 1/n. Hence, X (rn ) − loge 2 is CAN for θ with approximate variance 1/n. Thus, both X n − 1 and the sample median are CAN with the same approximate variance.   Example 3.2.8

Suppose X is a random variable with probability density function f(x, θ) = 2θ²/x³, x ≥ θ, θ > 0. Suppose X = {X_1, X_2, . . . , X_n} is a random sample from the distribution of X. Corresponding to a random sample X, the likelihood of θ is given by

L_n(θ | X) = Π_{i=1}^n 2θ²/X_i³ = 2^n θ^{2n} Π_{i=1}^n X_i^{−3},  X_i ≥ θ ∀ i ⇔ X_(1) ≥ θ.

Thus, the likelihood is an increasing function of θ on (−∞, X_(1)] and attains its maximum at the maximum possible value of θ given the data X. The maximum possible value of θ given the data is X_(1), and hence the maximum likelihood estimator θ̂_n of θ is given by X_(1). To verify the consistency of X_(1) as an estimator of θ, we find the coverage probability using the distribution function of X_(1). The distribution function F_X(x) of X is given by

F_X(x) = 0 if x < θ, and 1 − θ²/x² if x ≥ θ.

Hence, the distribution function of X_(1) is given by

F_{X_(1)}(x) = 1 − [1 − F_X(x)]^n = 0 if x < θ, and 1 − θ^{2n}/x^{2n} if x ≥ θ.

For ε > 0, the coverage probability is given by

Pθ[|X_(1) − θ| < ε] = Pθ[θ − ε < X_(1) < θ + ε] = Pθ[θ ≤ X_(1) < θ + ε], as X_(1) ≥ θ,
  = F_{X_(1)}(θ + ε) − F_{X_(1)}(θ) = 1 − θ^{2n}/(θ + ε)^{2n} → 1 ∀ ε > 0 and ∀ θ as n → ∞.

Hence, X_(1) is consistent for θ. To derive the asymptotic distribution of X_(1), with suitable norming, as in Example 3.2.7, we define Y_n = n(X_(1) − θ) and derive its distribution function G_{Y_n}(y) for y ∈ R. Since X_(1) ≥ θ, Y_n ≥ 0; hence for y < 0, G_{Y_n}(y) = 0. Suppose y ≥ 0; then

G_{Y_n}(y) = Pθ[n(X_(1) − θ) ≤ y] = Pθ[X_(1) ≤ θ + y/n] = F_{X_(1)}(θ + y/n) = 1 − (θ/(θ + y/n))^{2n} → 1 − exp(−2y/θ).

Thus, the asymptotic distribution of Y_n = n(X_(1) − θ) is exponential with location parameter 0 and scale parameter 2/θ. Thus, with norming factor n, the asymptotic distribution of X_(1) is not normal. Proceeding on similar lines as in Example 3.2.1, we claim that there exists no sequence {a_n, n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of a_n(X_(1) − θ) is normal; hence X_(1) is not CAN for θ.

We find a CAN estimator for θ based on the p-th sample quantile. From the distribution function of X, the p-th population quantile is a_p(θ) = θ/√(1 − p). Hence, the p-th sample quantile X_([np]+1) is CAN for a_p(θ) with approximate variance θ²p/4n(1 − p)². Thus, the family of CAN estimators of θ based on the p-th sample quantile is given by T_n = √(1 − p) X_([np]+1), with approximate variance θ²p/4n(1 − p), 0 < p < 1. It is to be noted that E(X) = 2θ < ∞. Hence, by the WLLN, X̄_n →Pθ E(X) = 2θ, ∀ θ. Hence, the moment estimator X̄_n/2 is consistent for θ. Now E(X²) = 2θ² ∫_θ^∞ x²/x³ dx is not a convergent integral, as the integral ∫_a^∞ p_m(x)/q_n(x) dx is convergent if n − m ≥ 2, where p_m(x) and q_n(x) are polynomials of degree m and n respectively. Hence, for this distribution the second raw moment, and hence the variance, does not exist and we cannot appeal to the CLT to claim normality. ◻

The main motive to find an asymptotic non-degenerate distribution of a suitably normalized estimator of a parameter θ is (i) to find an asymptotic null distribution of a test statistic for testing certain hypotheses about θ, and (ii) to find an interval estimator of θ, as an interval estimator is more informative than a point estimator. In Chaps. 5 and 6, we study how to derive the asymptotic null distribution of a


test statistic on the basis of an asymptotic non-degenerate distribution of a suitably normalized estimator. We now discuss how the asymptotic distribution is useful to find an asymptotic confidence interval for a parameter of interest. Suppose Tn is a CAN estimator for θ . Then the asymptotic normal distribution is useful to find large sample confidence interval for θ . Following examples illustrate the procedure.  Example 3.2.9

Suppose {X1, X2, . . . , Xn} is a random sample from an exponential distribution with mean θ. Then the variance of X1 is θ². By the WLLN and the CLT we immediately get that X̄n is CAN for θ with approximate variance θ²/n. Hence,

Qn = (√n/θ)(X̄n − θ) = √n(X̄n/θ − 1) → Z ∼ N(0, 1) in distribution,

and for large n, Qn can be treated as a pivotal quantity to construct a confidence interval for θ. Thus, given the confidence coefficient (1 − α), we find a(1−α/2) such that P[−a(1−α/2) < Qn < a(1−α/2)] = 1 − α, where a(1−α/2) is the (1 − α/2)-th quantile of the standard normal distribution. Now,

−a(1−α/2) < Qn < a(1−α/2) ⇔ X̄n/(1 + a(1−α/2)/√n) < θ < X̄n/(1 − a(1−α/2)/√n).

Thus, the asymptotic confidence interval for θ with confidence coefficient (1 − α) is given by

( X̄n/(1 + a(1−α/2)/√n), X̄n/(1 − a(1−α/2)/√n) ).

 Example 3.2.10

Suppose {X1, X2, . . . , Xn} is a random sample from a normal N(θ, θ²) distribution, θ ∈ R − {0}. Then

Qn = (√n/θ)(X̄n − θ) = √n(X̄n/θ − 1) ∼ N(0, 1) for each n.

Hence, Qn is a pivotal quantity to construct a confidence interval for θ. Thus, proceeding as in Example 3.2.9, for each n, the confidence interval for θ with confidence coefficient (1 − α) is given by

( X̄n/(1 + a(1−α/2)/√n), X̄n/(1 − a(1−α/2)/√n) ).
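As a quick illustration, the interval of Example 3.2.9 is easy to compute in R; the sketch below uses simulated exponential data, and the values of n, theta and alpha are chosen only for illustration.

set.seed(1)
n = 200; theta = 2; alpha = 0.05
x = rexp(n, rate = 1/theta)           # simulated sample with mean theta
xbar = mean(x); a = qnorm(1-alpha/2)  # (1 - alpha/2)-th standard normal quantile
# asymptotic confidence interval of Example 3.2.9
c(xbar/(1 + a/sqrt(n)), xbar/(1 - a/sqrt(n)))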


Example 3.2.9 and Example 3.2.10 convey that once we have a CAN estimator for θ, it is very easy to find an asymptotic confidence interval for θ. However, such an easy procedure may not always work. For example, suppose {X1, X2, . . . , Xn} is a random sample from a Poisson Poi(θ) distribution. To find an asymptotic confidence interval for θ, note that by the WLLN and the CLT,

√n(X̄n − θ) → Z1 ∼ N(0, θ) in distribution ⇒ Qn = (√n/√θ)(X̄n − θ) → Z ∼ N(0, 1) in distribution,

and Qn can be treated as a pivotal quantity to construct a confidence interval for θ. Thus, given the confidence coefficient (1 − α), we find a(1−α/2) such that P[−a(1−α/2) < Qn < a(1−α/2)] is 1 − α. Now,

−a(1−α/2) < Qn < a(1−α/2) ⇔ X̄n − a(1−α/2)√(θ/n) < θ < X̄n + a(1−α/2)√(θ/n).

Thus, θ is involved in both the lower and upper bounds of the interval and hence this procedure does not give us the desired confidence interval. Such a problem is very common and arises because θ is involved in the approximate variance v(θ)/an² of the CAN estimator Tn of θ. There are two approaches to resolve this problem: one is a studentization procedure and the other is a variance stabilization technique. In a studentization procedure, the variance function v(θ) is replaced by its consistent estimator, whereas in a variance stabilization technique we find a transformation g so that the variance of g(Tn) is free from θ. Using g(Tn) we first form a confidence interval for g(θ) and hence find a confidence interval for θ. We discuss these procedures in detail below.

Studentization procedure: Suppose Tn is CAN for θ with approximate variance v(θ)/an². Then Qn = (an/√v(θ))(Tn − θ) → Z ∼ N(0, 1) in distribution. However, such a pivotal quantity may not be useful to get the asymptotic confidence interval, as θ is involved in the variance function. Suppose v(θ) is a continuous function of θ. Then by the invariance property of consistency under continuous transformation, v(Tn) is consistent for v(θ). Using Slutsky’s theorem,

Q̃n = (an/√v(Tn))(Tn − θ) = (√v(θ)/√v(Tn)) (an/√v(θ))(Tn − θ) → Z ∼ N(0, 1) in distribution.

Thus, for large n, Q̃n is also a pivotal quantity and hence, given the confidence coefficient (1 − α), we find a(1−α/2) such that P[−a(1−α/2) < Q̃n < a(1−α/2)] = 1 − α. Now,

−a(1−α/2) < Q̃n < a(1−α/2) ⇔ Tn − a(1−α/2)√v(Tn)/an < θ < Tn + a(1−α/2)√v(Tn)/an.


Thus, the asymptotic confidence interval for θ with confidence coefficient (1 − α) is given by

( Tn − a(1−α/2)√v(Tn)/an, Tn + a(1−α/2)√v(Tn)/an ),

which can also be expressed as

( Tn − s.e.(Tn) a(1−α/2), Tn + s.e.(Tn) a(1−α/2) ),

where s.e.(Tn) is the standard error of Tn for large n. We get a symmetric confidence interval as the limiting distribution of the pivotal quantity is a symmetric distribution. This procedure is known as a studentization procedure in view of the fact that, to get the pivotal quantity Q̃n from Qn, we replace the variance function by its consistent estimator. We adopt a similar procedure to get Student’s t distribution. Suppose {X1, X2, . . . , Xn} are independent and identically distributed random variables, each having normal N(0, σ²) distribution. Then Un = √n X̄n/σ has the standard normal distribution. If σ is replaced by Sn, where Sn² is an unbiased estimator of σ², then the distribution of Un, apart from some constants, is the t distribution with (n − 1) degrees of freedom.

Variance stabilization technique: The technique is heavily based on the invariance property of CAN estimators under differentiable transformation as proved in Theorem 3.2.1. The delta method states that if Tn is CAN for θ with approximate variance v(θ)/an² and g is a differentiable function with g′(θ) ≠ 0, then g(Tn) is CAN for g(θ) with approximate variance (g′(θ))² v(θ)/an². The variance function of g(Tn) thus depends on θ. It is noted in the examples of constructing confidence intervals for the parameters of the exponential and Poisson distributions, and in the studentization procedure, that it is better to have a variance function free from θ, so that the associated pivotal quantity is useful to construct an asymptotic confidence interval. In a variance stabilization technique, as the nomenclature indicates, we try to find a function g so that the variance function of g(Tn) does not depend on θ. More specifically, we try to find g such that

g′(θ) √v(θ) = c  ⇔  g(θ) = ∫ c/√v(θ) dθ + k,

where c is any positive real number and k is a constant of integration. With such a choice of g, the approximate variance of g(Tn) will be c²/an². Using the pivotal quantity based on the large sample distribution of g(Tn), we obtain the confidence interval for g(θ). By the inverse function theorem, the condition g′(θ) ≠ 0 imposed on g assures that a unique inverse of g exists, and hence from the confidence interval for g(θ) we can get the confidence interval for θ, assuming g is a one-to-one function of θ. Of course, this technique works only for those functions g for which the indefinite integral ∫ c/√v(θ) dθ can be found explicitly.
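When the indefinite integral is not available in closed form, g can still be obtained numerically. The following R sketch is only illustrative: it takes v(θ) = θ (the Poisson case treated in Example 3.2.11 below) with c = 1 and recovers g by numerical integration; the additive constant k is immaterial.

v = function(theta) theta              # variance function, here v(theta) = theta
g = function(theta) integrate(function(u) 1/sqrt(v(u)), 0, theta)$value
thgrid = c(0.5, 1, 2, 5)
# numerical g agrees with the closed form 2*sqrt(theta) up to the constant k
cbind(numerical = sapply(thgrid, g), closed.form = 2*sqrt(thgrid))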


Following examples illustrate the procedure.  Example 3.2.11

Suppose {X1, X2, . . . , Xn} is a random sample from a Poisson distribution with mean θ. Then Var(X) = θ < ∞ and hence, by the WLLN and the CLT, X̄n is CAN for θ with asymptotic variance θ/n. Thus,

Qn = (√n/√θ)(X̄n − θ) → Z ∼ N(0, 1) in distribution

and Qn is a pivotal quantity. However, as discussed above, Qn is not useful to construct a confidence interval for θ. Hence, adopting the studentization procedure, we define a pivotal quantity Q̃n as Q̃n = (√n/√X̄n)(X̄n − θ). By Slutsky’s theorem, Q̃n has the standard normal distribution for large n. Hence, given the confidence coefficient (1 − α), we find a(1−α/2) such that P[−a(1−α/2) < Q̃n < a(1−α/2)] = 1 − α. Now,

−a(1−α/2) < Q̃n < a(1−α/2) ⇔ X̄n − a(1−α/2)√(X̄n/n) < θ < X̄n + a(1−α/2)√(X̄n/n).

Thus, the asymptotic confidence interval for θ with confidence coefficient (1 − α) is given by

( X̄n − a(1−α/2)√(X̄n/n), X̄n + a(1−α/2)√(X̄n/n) ),

which can be expressed as ( X̄n − s.e.(X̄n) a(1−α/2), X̄n + s.e.(X̄n) a(1−α/2) ), where √(X̄n/n) is the standard error of X̄n.

In the variance stabilization technique, we find a differentiable function g such that g′(θ) ≠ 0 and Var(g(X̄n)) is free from θ, that is, (g′(θ))² θ = c² ⇔ g(θ) = ∫ c dθ/√θ. Hence, g(θ) = 2c√θ. Thus, we have a pivotal quantity Q̃n as

Q̃n = (√n/c)(2c√X̄n − 2c√θ) = 2√n(√X̄n − √θ) ∼ N(0, 1) for large n.

Hence, corresponding to the confidence coefficient (1 − α), the asymptotic confidence interval for √θ is given by ( √X̄n − a(1−α/2)/(2√n), √X̄n + a(1−α/2)/(2√n) ). Consequently, the asymptotic confidence interval for θ is given by

( (√X̄n − a(1−α/2)/(2√n))², (√X̄n + a(1−α/2)/(2√n))² ).
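Both intervals of this example are easily computed; a minimal R sketch with simulated Poisson data is given below, where the values of θ, n and α are illustrative.

set.seed(123)
n = 150; theta = 4; alpha = 0.05
x = rpois(n, lambda = theta); xbar = mean(x); a = qnorm(1-alpha/2)
# studentized interval: variance theta/n estimated by xbar/n
c(xbar - a*sqrt(xbar/n), xbar + a*sqrt(xbar/n))
# variance stabilized interval: interval for sqrt(theta), then squared
(c(sqrt(xbar) - a/(2*sqrt(n)), sqrt(xbar) + a/(2*sqrt(n))))^2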


 Example 3.2.12

Suppose {X1, X2, . . . , Xn} is a random sample from a normal N(θ, θ²) distribution, θ > 0. Then Tn = √n(X̄n − θ) ∼ N(0, θ²) for each n. We now find a differentiable function g such that g′(θ) ≠ 0 and Var(g(X̄n)) is free from θ, that is, (g′(θ))² θ² = c², so that g(θ) = ∫ c dθ/θ = c log θ. Hence, with c = 1,

Qn = √n(log X̄n − log θ) → Z ∼ N(0, 1) in distribution ⇒ Qn is a pivotal quantity.

Thus, we find a(1−α/2) such that P[−a(1−α/2) < Qn < a(1−α/2)] = 1 − α, the given confidence coefficient. Now,

−a(1−α/2) < Qn < a(1−α/2) ⇔ log X̄n − a(1−α/2)/√n < log θ < log X̄n + a(1−α/2)/√n.

Hence, the asymptotic confidence interval for θ with confidence coefficient (1 − α) is given by

( exp{log X̄n − a(1−α/2)/√n}, exp{log X̄n + a(1−α/2)/√n} ).

 Remark 3.2.3

It is to be noted that X n ∼ N (θ, θ 2 /n) distribution for each n, but log X n ∼ N (log θ, 1/n) distribution for large n.  Example 3.2.13

Suppose {X1, X2, . . . , Xn} is a random sample of size n from an exponential distribution with mean θ. We obtain a 100(1 − α)% asymptotic confidence interval for the survival function e^(−t/θ), where t is a fixed positive real number, by two methods. In the first method, we use a CAN estimator of e^(−t/θ) and the studentization procedure. In the second method, we use the fact that e^(−t/θ) is a monotone function of θ and, using the confidence interval for θ based on X̄n, we obtain a 100(1 − α)% asymptotic confidence interval for e^(−t/θ). If X follows an exponential distribution with mean θ, then Var(X) = θ². Corresponding to a given random sample, by the WLLN and the CLT, X̄n is CAN for θ with approximate variance θ²/n. (i) Suppose g(x) = e^(−t/x); then g is a differentiable function and g′(x) = (t/x²) e^(−t/x) ≠ 0 for all x > 0. Hence, by the delta method, g(X̄n) = e^(−t/X̄n) is CAN for g(θ) = e^(−t/θ) = P[X > t] with approximate variance t² e^(−2t/θ)/(nθ²).


The consistent estimator of the approximate variance is t² e^(−2t/X̄n)/(n X̄n²). Hence, by Slutsky’s theorem,

Qn = (√n X̄n/(t e^(−t/X̄n))) (e^(−t/X̄n) − e^(−t/θ)) → Z ∼ N(0, 1) in distribution.

Thus, for large n, Qn is a pivotal quantity and is useful to find an asymptotic confidence interval for the survival function e^(−t/θ). Given a confidence coefficient (1 − α), we find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Qn < a1−α/2] = 1 − α. Inverting the inequality −a1−α/2 < Qn < a1−α/2, we get

e^(−t/X̄n) − a1−α/2 t e^(−t/X̄n)/(X̄n√n) < e^(−t/θ) < e^(−t/X̄n) + a1−α/2 t e^(−t/X̄n)/(X̄n√n).

Hence, using the studentization technique, the 100(1 − α)% large sample confidence interval for e^(−t/θ) is given by

( e^(−t/X̄n)(1 − a1−α/2 t/(X̄n√n)), e^(−t/X̄n)(1 + a1−α/2 t/(X̄n√n)) ).

(ii) Since X̄n is CAN for θ with approximate variance θ²/n,

Qn = (√n/θ)(X̄n − θ) = √n(X̄n/θ − 1) → Z ∼ N(0, 1) in distribution.

Thus, for large n, Qn is a pivotal quantity. Given a confidence coefficient (1 − α), we can find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Qn < a1−α/2] = 1 − α. Inverting the inequality −a1−α/2 < Qn < a1−α/2, we get

X̄n/(1 + a1−α/2/√n) < θ < X̄n/(1 − a1−α/2/√n).

Now g′(θ) = (t/θ²) e^(−t/θ) > 0, and hence g(θ) = e^(−t/θ) is a monotone increasing function of θ. Thus, a < θ < b ⇒ e^(−t/a) < e^(−t/θ) < e^(−t/b). Hence, the 100(1 − α)% large sample confidence interval for e^(−t/θ) is given by

( exp{−t(1 + a1−α/2/√n)/X̄n}, exp{−t(1 − a1−α/2/√n)/X̄n} ).

It is to be noted that we cannot compare the two confidence intervals by comparing the approximate variance of the CAN estimators of e−t/θ , since in the second

3.2 CAN Estimator: Real Parameter Setup

119

method we use the fact that e−t/θ is a monotone function of θ to construct the confidence interval. We compare the lower and upper limits and arrive at the conclusion that the two methods essentially lead to the same large sample confidence intervals. The lower limit L n of the 100(1 − α)% asymptotic confidence interval for e−t/θ , by using CAN estimator of e−t/θ and by estimating its variance can be expressed as follows:   t −t/X n 1 − a1−α/2 L 1n = e √ Xn n    t t t2 t3 1 − a1−α/2 = 1− + − − ··· √ 2 3 Xn Xn n 2!X n 3!X n = 1−

t Xn

+ a1−α/2

t2

+

2 2!X n t2

− a1−α/2

t √ Xn n

t3 − a + ··· . 1−α/2 2√ 3√ Xn n 2X n n

Similarly, the lower limit L n of the 100(1 − α)% asymptotic confidence interval for e−t/θ , using the fact that e−t/θ is a monotone function of θ , can be expressed as follows: ⎫ ⎧  √ ⎬ ⎨ −t 1 + a1−α/2 n L 2n = exp ⎭ ⎩ Xn   2  a 2 1 + a1−α/2 √ √ t −t 1 + 1−α/2 n n + − ··· = 1− 2 Xn 2X n = 1−

t Xn

+

t2 2

2!X n

+ a1−α/2

2 t 2 a1−α/2 ta1−α/2 t2 − − ··· . + √ 2√ 2 Xn n Xn n 2n X n

If we compare L 1n with L 2n , then we note that the first four terms are identical √in both the expressions. In rest of the terms of L 1n , the denominator involves n, while in rest of the terms of L 2n , the denominator involves n, n 2 , n 3 , . . .. Similar scenario is observed in upper limits. It is to be noted that these comparisons are for limits which are obtained for large n. Hence, in general, there would not be any difference between the two. We verify it by simulation in Sect. 3.4.  In the next section, we extend the results of a CAN estimator for a real parameter setup to a CAN estimator for a vector parameter setup.

120

3.3

3

Consistent and Asymptotically Normal Estimators

CAN Estimator: Vector Parameter Setup

Suppose X is a random variable or a random vector defined on a probability space ( , A, Pθ ), where the probability measure Pθ is indexed by a vector parameter θ ∈  ⊂ Rk . Suppose θ = (θ1 , θ2 , . . . , θk ) . Given a random sample {X 1 , X 2 , . . . , X n } of size n from the distribution of X , suppose T n = (T1n , T2n , . . . , Tkn ) is an estimator of θ , that is, T n is a random vector with range space as the parameter space  ⊂ Rk and Tin is an estimator of θi for i = 1, 2, . . . , k. Consistency of T n as an estimator of θ is defined in Sect. 2.5 using the two approaches, marginal and joint consistency, and it is proved that the two approaches are equivalent. However, such an equivalence is not valid for a CAN estimator in a vector setup, and it is essential to treat real and vector parameter setups separately. We define below a CAN estimator for a vector parameter.

 Definition 3.3.1 Consistent and Asymptotically Normal Estimator for a Vector Parameter: Suppose T n is an estimator of θ and suppose there exists a sequence {an , n ≥ 1} of L

real numbers tending to ∞ as n → ∞, such that an (T n − θ ) → U ∼ Nk (0, (θ )) distribution as n → ∞, where (θ ) is a positive definite matrix. Then T n is a CAN estimator of θ with approximate dispersion matrix (θ )/an2 . Pθ

L

As for a real parameter, if an (T n − θ ) → U , then T n → θ for all θ and T n is L

consistent for θ . It is to be noted that if an (T n − θ) → U ∼ Nk (0, (θ )) distribution, then each component of an (T n − θ) has asymptotically normal distribution. However, it is known from the theory of multivariate normal distribution that though each component of random vector X has normal distribution, the distribution of X need not be multivariate normal. As a consequence, though each of the components of T n = (T1n , T2n , . . . , Tkn ) is CAN for the corresponding component of θ = (θ1 , θ2 , . . . , θk ) , the vector estimator T n may not be a CAN estimator of θ. Thus, we need to deal with multivariate setup to obtain a CAN estimator for a vector parameter. If an estimator is in the form of an average, then the standard tool to generate a CAN estimator in multiparameter setup is the multivariate CLT and the extension of delta method. For the consistent estimators based on sample quantiles, the asymptotic normality can be established using the asymptotic joint distribution of the order statistics. In the following theorems, we state all these results and illustrate using examples. Theorem 3.3.1 Multivariate CLT: Suppose X is a k-dimensional random vector with mean vector E(X ) = μ and dispersion matrix , which is a positive definite matrix. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . Sup√ L pose X n denotes the sample mean vector, then n(X n − μ) → U ∼ Nk (0, ) distribution as n → ∞.


In Theorem 2.2.5, it has been proved that the sample raw moments are consistent for the corresponding population raw moments. Further, the joint consistency and the marginal consistency are equivalent. Hence, T n = (m 1 , m 2 , . . . , m k ) is consistent for μ = (μ1 , μ2 , . . . , μk ) , provided μk < ∞. In the following theorem, using the multivariate CLT, we prove that T n is also a CAN estimator of μ. Theorem 3.3.2 Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X whose raw moments up to order 2k are finite. Then a random vector T n = (m 1 , m 2 , . . . , m k ) of first k sample raw moments is a CAN estimator of a vector μ = (μ1 , μ2 , . . . , μk ) of corresponding population raw moments with approximate dispersion matrix /n, where  = [σi j ] and σi j = Cov(X i , X j ), i, j = 1, 2, . . . , k.

Proof Consistency of T n for the parameter μ follows from Theorem 2.2.5 and the equivalence of joint and marginal consistency. To show that it is CAN, suppose a random vector Z is defined as Z = (X , X 2 , . . . , X k ). Then E(Z ) = μ and the dispersion matrix  of Z is given by  = [σi j ], where σi j = Cov(X i , X j ), i, j = 1, 2, . . . , k. To examine kwhetheri  is positive definite, suppose a random variable ai X . Without loss of generality, we assume that X is Y is defined as Y = i=1 a non-degenerate random variable which implies that Y is also a non-degenerate random variable and hence V ar (Y ) > 0. Observe that for any non-zero vector a = (a1 , a2 , . . . , ak ) of real numbers  0 < V ar (Y ) = V ar

k  i=1

 ai X

i

=

k  k 

ai a j Cov(X i , X j ) = a  a,

i=1 j=1

which proves that  is a positive definite matrix. Thus Z is a random vector with mean vector μ and the positive definite dispersion matrix . Now a random sample {X 1 , X 2 , . . . , X n } from the distribution of X gives a corresponding n random Z i /n sample {Z 1 , Z 2 , . . . , Z n } from the distribution of Z . Suppose Z n = i=1 denotes the sample mean vector. Then by the multivariate CLT, √ L n(Z n − μ) → U ∼ Nk (0, ) distribution as n → ∞. But Z n = Tn and hence we have proved that T n = (m 1 , m 2 , . . . , m k ) , a random vector of first k sample raw moments is a CAN estimator of a vector μ = (μ1 , μ2 , . . . , μk ) of corresponding population raw moments with approximate dispersion matrix /n.  Multivariate CLT is useful to verify whether the estimator T n is CAN, provided it is in the form of an average. If an estimator is based on sample quantiles, then the following theorem, which states the asymptotic joint distribution of sample quantiles, is useful to verify whether the given estimator is a CAN estimator in a vector parameter setup. We state the theorem for k = 2. It can be easily generalized for higher dimensions.


Theorem 3.3.3 Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X which is absolutely continuous random variable with probability density function f (x, θ ), where θ is an indexing parameter. Suppose {X (1) , X (2) , . . . , X (n) } is a corresponding order statistic. Suppose Y1n = X ([np1 ]+1) and Y2n = X ([np2 ]+1) are the p1 -th and p2 -th sample quantiles respectively and a p1 (θ) and a p2 (θ) are the p1 -th and p2 -th population quantiles respectively, 0 < p1 < p2 < 1. Suppose 0 < f (a p1 (θ ), θ) < ∞ and 0 < f (a p2 (θ ), θ ) < ∞. Then, as n → ∞,

√ L n((Y1n −a p1 (θ )),(Y2n − a p2 (θ ))) → U ∼N2 (0, (θ )), ∀ θ for which (θ ) is positive definite, where (θ ) = [σi j ] with p1 (1 − p1 ) p2 (1 − p2 ) , σ22 = & 2 ( f (a p1 (θ), θ)) ( f (a p2 (θ ), θ ))2 p1 (1 − p2 ) = σ21 = . f (a p1 (θ ), θ) f (a p2 (θ ), θ)

σ11 = σ12

Theorems 3.3.2 and 3.3.3 together with the invariance property of CAN estimators under differentiable transformation is useful to find CAN estimators for parameters of interest. Invariance property of CAN estimators under differentiable transformation, also known as the delta method, is stated in the following theorem. The proof is given for part(i). Theorem 3.3.4 Delta method: Suppose T n = (T1n , T2n , . . . , Tkn ) is a CAN estimator of θ = (θ1 , θ2 , . . . , θk ) with approximate dispersion matrix (θ )/an2 , where (θ ) = [σi j (θ )] is a positive definite matrix. (i) Suppose g : Rk → R is a totally differentiable function. Then L

an (g(T n ) − g(θ )) → U ∼ N (0, v(θ)), as n → ∞, ∀ θ for which v(θ ) > 0, where v(θ ) =  (θ ) and  is a gradient vector of order k × 1 of g, with i-th ∂g component given by ∂θ . i k (ii) Suppose g : R → Rl , l ≤ k is such that g(x) = (g1 (x), g2 (x), . . . , gl (x)) and x ∈ Rk . Suppose all partial derivatives of the type ∂ gi /∂ x j , i = 1, 2, . . . , l and j = 1, 2, . . . , k exist and are continuous, that is, g1 , g2 , . . . , gl are totally differentiable functions. Suppose M is a matrix of order l × k with (i, j)-th ele∂ gi , i = 1, 2, . . . , l and j = 1, 2, . . . , k. Then as n → ∞, ment ∂θ j √ L n(g(T n ) − g(θ )) → U ∼ Nl (0, M(θ )M  ) for all θ for which M(θ )M  is a positive definite matrix.


Proof (i) It is given that T n is a CAN estimator for θ . Thus, Pθ

L

T n → θ and an (T n − θ) → U ∼ Nk (0, (θ )) ∀ θ ∈ . g : Rk → R is a totally differentiable function which implies that g is continuous and hence by invariance of consistency under continuous transformation, g(T n ) is consistent for g(θ ). Now g is a totally differentiable function, hence by the Taylor series expansion, g(T n ) = g(θ ) +

k 

(Tin − θi )

i=1

1  ∂2g (Tin − θi )(T jn − θ j ) |θ ∗ 2 ∂ Tin ∂ Tin n k

where Rn =

∂g |θ + R n , ∂ Tin

k

i=1 j=1





and θn∗ = αT n + (1 − α)θ , 0 < α < 1. Since T n → θ, we have θn∗ → θ . Hence, ∂2g ∗ ∂ Tin ∂ Tin |θ n



g → ∂θ∂i ∂θ = di j , say. Further, an (Tin − θi ) → U1 ∼ N (0, σii (θ )) j hence an (Tin − θi ) is bounded in probability for all i = 1, 2, . . . k and 2

L



(T jn − θ j ) → 0 for all j = 1, 2, . . . k. Thus, Pθ 1  ∂2g {an (Tin − θi )}(T jn − θ j ) |θ ∗ n → 0 . 2 ∂ Tin ∂ T jn k

an R n =

k

i=1 j=1

As a consequence, L

an (g(T n ) − g(θ )) =  {an (T n − θ)} + an Rn → U2 ∼ N (0, v(θ )) where v(θ) =  (θ ) . Hence g(T n ) is a CAN estimator for g(θ ) with approximate variance v(θ)/an2 .



In the following examples, we illustrate how delta method is useful to obtain a CAN estimator.  Example 3.3.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample from X following an exponential distribution with mean θ1 and {Y1 , Y2 , . . . , Yn } is a random sample from Y following an exponential distribution with mean θ2 . Suppose X and Y are independent. By the WLLN, X n is consistent for θ1 and Y n is consistent for θ2 . Thus, T n = (X n , Y n ) is consistent for θ = (θ1 , θ2 ) . To examine whether it is CAN for (θ1 , θ2 ) , define Z = (X , Y ) . Then E(Z ) = θ = (θ1 , θ2 ) and dispersion matrix  of Z is  = diag(θ12 , θ22 ) as X and Y are independent random variables. A random sample {X 1 , X 2 , . . . , X n } from X and a random sample {Y1 , Y2 , . . . , Yn }

124

3

Consistent and Asymptotically Normal Estimators

from Y are equivalent to a random sample {Z 1 , Z 2 , . . . , Z n } from Z . Hence, √ L by the multivariate CLT, we have n(Z n − θ) → U ∼ N2 (0, ), ∀ θ, that is, T n = (X n , Y n ) is CAN for θ = (θ1 , θ2 ) with approximate dispersion matrix /n. As in Example 3.2.2, the approximate dispersion matrix /n is related to the information matrix. For this probability model I (θ) =diag(1/θ12 , 1/θ22 ). Thus, I (θ ) =  −1 . Suppose we want to find a CAN estimator for P[X < Y ]. First we obtain its expression in terms of θ = (θ1 , θ2 ) and then use the delta method. Since X and Y are independent, P[X < Y ] = E(I[X 0. By the Pθ



WLLN λ˜ n = X n → λ and Y n → λ p. Convergence in probability is closed under P

θ all arithmetic operations, hence p˜ n = Y n /X n → p. Hence, θ˜ n = (λ˜ n , p˜ n ) is a  consistent estimator of θ = (λ, p) . To examine whether it is CAN, a random vector Z is defined as Z = (X , Y ) . Then E(Z ) = (λ, λ p) and the dispersion matrix  of Z is given by

 =

 V ar (X ) Cov(X , Y ) , Cov(X , Y ) V ar (Y )

where V ar (X ) = λ, V ar (Y ) = λ p and Cov(X , Y ) = E(X Y ) − E(X )E(Y ). Now E(X Y ) = E(E(X Y )|X ) = E(X E(Y |X )) = E(X X p) = p(V ar (X ) + (E(X ))2 ) = p(λ + λ2 ), hence Cov(X , Y ) = p(λ + λ2 ) − λ2 p = λ p. Thus, the dispersion matrix  of Z is   λ λp  = λp λp .  A random sample from the distribution of (X , nY ) is the same as a random sample {Z 1 , Z 2 , . . . , Z n } from Z . Suppose Z n = i=1 Z i /n denotes the sample mean √ L vector. Then by the multivariate CLT, n(Z n − E(Z )) → U ∼ N2 (0, ) distribution as n → ∞. But Z n = (X n , Y n ) and hence T n = (X n , Y n ) is CAN for (λ, λ p) = φ, say, with approximate dispersion matrix /n. We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (λ, p) . Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = x1 and g2 (x1 , x2 ) = x2 /x1 . Then

∂ ∂ g1 (x1 , x2 ) = 1, g1 (x1 , x2 ) = 0, ∂ x1 ∂ x2 ∂ x2 ∂ 1 g2 (x1 , x2 ) = − 2 & g2 (x1 , x2 ) = . ∂ x1 ∂ x2 x1 x1 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at (λ, p) is given by   1 0 M = − p/λ 1/λ .

130

3

Consistent and Asymptotically Normal Estimators

Hence by Theorem 3.3.4, g(T n ) = (g1 (X n , Y n ), g2 (X n , Y n ))   Yn = Xn, = (λ˜ n , p˜ n ) is CAN for g(φ) = θ = (λ, p) , Xn with the approximate dispersion matrix M M  /n, which is diag[λ, p(1 − p)/λ]. Now we check whether M M  = I −1 (θ ). From the likelihood we have n 

Xi ∂ i=1 log L n (θ|X ) = −n + , ∂λ λ ∂2 nXn ∂2 log L (θ|X ) = − , log L n (θ|X ) = 0 n ∂λ2 λ2 ∂ p∂λ n 

∂ and log L n (θ|X ) = ∂p

Yi −

i=1

n 

n 

Xi

i=1

1− p n n   Yi − Xi

∂2 log L n (θ|X ) = ∂ p2

i=1

i=1

(1 − p)2

+

i=1



i=1

Yi

p n  Yi p2

,

.

Thus, I (θ ) is given by  n I (θ) =

 n/λ 0 0 nλ/ p(1 − p) .

Thus, the approximate dispersion matrix M M  /n = I −1 (θ)/n, as in Example 3.3.2. To obtain the maximum likelihood estimator, the system of likelihood equations is given by n 

−n +

n 

Xi

i=1

λ

= 0, and

Yi −

i=1

n 

n 

Xi

i=1

+

1− p

Yi

i=1

p

=0

and its solution is given as λ = X n and p = Y n /X n . The matrix D of second order partial derivatives is given by ⎛ ⎜ D=⎝

− nλX2n 0



0 n  i=1

Yi −

n 

i=1 (1− p)2

Xi

n 



Yi

i=1 p2

⎟ ⎠.

3.3 CAN Estimator: Vector Parameter Setup

131

At the solution of the system of likelihood equations it is given by ⎛ Dsol = ⎝



n Xn

0



0 −

3

nXn (X n −Y n )

⎠.

Its first principal minor is negative and the second is positive, as (X n − Y n ) > 0, hence Dsol is an almost surely negative definite matrix and hence at the solution, the likelihood attains its maximum. Thus, the maximum likelihood estimator θˆ n = (λˆ n , pˆ n ) of θ is given by   θˆ n = X n , Y n /X n = (λ˜ n , p˜ n ) . Thus, the maximum likelihood estimator θˆ n = (λˆ n , pˆ n ) of θ is the same as the moment estimator of θ based on the sufficient statistics and hence is CAN with the approximate dispersion matrix M M  /n, which is a diagonal matrix, implying that for large n, X n and Y n /X n are independent.   Example 3.3.5

Suppose X follows an exponential distribution with location parameter μ and scale parameter 1/σ with probability density function f (x, μ, σ ) = (1/σ ) exp {−(x − μ)/σ } , x ≥ μ, σ > 0, μ ∈ R . Then E(X ) = μ + σ & V ar (X ) = σ 2 ⇒ E(X 2 ) = σ 2 + (μ + σ )2 . Suppose {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X . The moment estimator θ˜ n of θ = (μ, σ ) is given by θ˜ n = (μ˜ n , σ˜ n ) = (m 1 −



    √    2  2 m2, m2) = m1 − m2 − m1 , m2 − m1 .





By the WLLN, m 1 = X n → μ + σ and by Theorem 2.5.3, m 2 → σ 2 . ConverPθ √ gence in probability is closed under all arithmetic operations, hence m 2 → σ Pθ √ and m 1 − m 2 → μ. Thus, the moment estimator θ˜ n is consistent for θ. To examine whether it is CAN, using the same the procedure as in Example 3.3.2 and Example 3.3.4, we get that T n = (m 1 , m 2 ) is CAN for

132

3

Consistent and Asymptotically Normal Estimators

(μ + σ, σ 2 + (μ + σ )2 ) = φ with approximate dispersion matrix as /n where  is given by   V ar (X ) Cov(X , X 2 ) . = Cov(X , X 2 ) V ar (X 2 ) To find the elements of matrix , we need to find the third and fourth raw moments of X . If we define Y = X − μ, then Y has an exponential distribution with location parameter 0 and mean σ . Hence, the r -th raw moment of Y is given by E(Y r ) = r !σ r , r ≥ 1. From the raw moments of Y , we find the raw moments of X . Thus, E(Y 3 ) = 6σ 3 ⇒ ⇒ & E(Y 4 ) = 24σ 4 ⇒ ⇒

E(X 3 ) = 6σ 3 + μ3 + 6μσ 2 + 3σ μ2 Cov(X , X 2 ) = 4σ 3 + 2μσ 2 E(X 4 ) = 24σ 4 + μ4 + 24σ 3 μ + 12σ 2 μ2 + 4σ μ3 V ar (X 2 ) = 4σ 2 (μ2 + 4μσ + 5σ 2 ).

Thus, the dispersion matrix  is given by  =

 4σ 3 + 2μσ 2 σ2 . 4σ 3 + 2μσ 2 4σ 2 (μ2 + 4μσ + 5σ 2 )

To find a CAN estimator for (μ, σ ) , suppose g = (g1 , g2 ) : R2 → R2 is defined   as g1 (x1 , x2 ) = x1 − x2 − x12 and g2 (x1 , x2 ) = x2 − x12 so that g(φ) = θ. Further, x1 ∂ g1 (x1 , x2 ) = 1 +  , ∂ x1 x2 − x12

∂ 1 g1 (x1 , x2 ) = −  ∂ x2 2 x − x2 2

1

∂ x1 ∂ 1 g2 (x1 , x2 ) = −  & g2 (x1 , x2 ) =  . ∂ x1 ∂ x 2 x2 − x12 2 x2 − x12 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is  M=

μ+2σ σ − μ+σ σ

1 − 2σ



1 2σ

.

Hence, by Theorem 3.3.4, g(T n ) = (g1 (m 1 , m 2 ), g2 (m 1 , m 2 )) = (m 1 −

√ √ m 2 , m 2 ) = (μ˜ n , σ˜ n )

3.3 CAN Estimator: Vector Parameter Setup

133

is CAN for g(φ) = θ = (μ, σ ) , with the approximate dispersion matrix M M  /n, where   σ 2 −σ 2 . M M  = −σ 2 2σ 2  In Example 3.3.5, the distribution of X involves two parameters μ and σ . Further, E(X ) is a function of both the parameters but V ar (X ) is a function of only σ . In Example 3.2.5, we have shown that the sample variance is CAN for population variance. Thus, it is of interest to examine whether a random vector of sample mean and sample variance is CAN for a vector of population mean and population variance. In Theorem 3.3.2, we have proved that a random vector of raw moments is CAN for a vector of corresponding population raw moments. In the following example, we show that a random vector (X n , Sn2 ) of sample mean and sample variance is CAN for (E(X ), V ar (X )) .  Example 3.3.6

Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X with mean μ, variance σ 2 and finite raw moments up to order 4. Then from Theorem 3.3.2, T n = (m 1 , m 2 ) is CAN for φ = (μ1 , μ2 ) with approximate dispersion matrix /n where  is  =

V ar (X ) Cov(X , X 2 ) 2 Cov(X , X ) V ar (X 2 )



 =

 μ2 − μ2 μ3 − μ1 μ2 1 . μ3 − μ1 μ2 μ4 − μ2 2

Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = x1 and g2 (x1 , x2 ) = x2 − x12 so that g(φ) = θ = (μ, σ 2 ) . Further, ∂ ∂ ∂ ∂ x1 g1 (x 1 , x 2 ) = 1, ∂ x2 g1 (x 1 , x 2 ) = 0, ∂ x1 g2 (x 1 , x 2 ) = −2x 1 and ∂ ∂ x2 g2 (x 1 , x 2 ) = 1. These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is given by   1 0 M = −2μ 1 . 1

Hence, by Theorem 3.3.4, g(T n ) = (g1 (m 1 , m 2 ), g2 (m 1 , m 2 )) = (m 1 , m 2 ) = (X n , Sn2 ) is CAN for g(φ) = θ = (μ, σ 2 ) with the approximate dispersion matrix M M  /n, where   μ3 μ2 . M M  = μ3 μ4 − μ22

134

3

Consistent and Asymptotically Normal Estimators

We assume here that μ4 = μ22 .



 Remark 3.3.1

In Example 3.3.2 when X ∼ N (μ, σ 2 ) distribution, we have shown that (X n , Sn2 ) is CAN for (μ, σ 2 ) with approximate dispersion matrix D/n, where  D=

σ2 0 0 2σ 4

 .

When X ∼ N (μ, σ 2 ) distribution, it is known that μ3 = 0 and μ4 = 3σ 4 . Thus, μ4 − μ22 = 2σ 4 . Thus, results of Example 3.3.2 can be derived from Example 3.3.6.  Example 3.3.7

Suppose X follows a Laplace distribution, also known as a double exponential distribution, with probability density function   1 |x − μ| , f (x, μ, α) = exp − 2α α

x, μ ∈ R, α > 0.

We obtain a CAN estimator of (μ, α) based on (i) the sample quantiles and (ii) the sample moments. The distribution function FX (x) of X is given by

FX (x) =

1 2

1−

exp{(x − μ)/α}, if x < μ exp{−(x − μ)/α}, if x ≥ μ.

1 2

Hence, FX (x) =

1 1 1 ⇒ exp(x − μ)/α = 4 2 4

⇒ a1/4 (θ) = μ − α loge 2 and

3 1 3 ⇒ 1 − exp{−(x − μ)}/α = ⇒ a3/4 (θ) = μ + α loge 2. 4 2 4 It is to be noted that in view of symmetry of the double exponential distribution around μ, the first and the third quartile are equidistant from μ. Further, FX (x) =

  1 | − α loge 2| f (a1/4 (θ), μ, α) = exp − 2α α 1 1 1 1 = exp(log ) = , & f (a3/4 (θ ), μ, α) = . 2α 2 4α 4α

3.3 CAN Estimator: Vector Parameter Setup

135

Suppose {X 1 , X 2 , . . . , X n } is a random of X . From   sample from the distribution Theorem 3.3.3 it follows that T n = X ([n/4]+1) , X ([3n/4]+1) is CAN for φ = (μ − α loge 2, μ + α loge 2) with approximate dispersion matrix  =

3α 2 α 2 α 2 3α 2



 =α

2

 3 1 . 1 3

We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (μ, α) . Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = (x1 + x2 )/2 and g2 (x1 , x2 ) = (x2 − x1 )/2 loge 2 . Then ∂ 1 ∂ 1 g1 (x1 , x2 ) = , g1 (x1 , x2 ) = , ∂ x1 2 ∂ x2 2 1 1 ∂ ∂ g2 (x1 , x2 ) = − g2 (x1 , x2 ) = & . ∂ x1 2 loge 2 ∂ x2 2 loge 2 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is given by  M=

1/2 1/2 −1/2 loge 2 1/2 loge 2



 =

 0.5 0.5 . −0.7213 0.7213

Hence by Theorem 3.3.4,  g(T n ) =

X ([n/4]+1) + X ([3n/4]+1) X ([3n/4]+1) − X ([n/4]+1) , 2 2 loge 2



= (μˆ n , αˆ n )

is CAN for g(φ) = θ = (μ, α) with the approximate dispersion matrix M M  /n, ! " ! " which is diag 2α 2 , α 2 /(log 2)2 = diag 2α 2 , 2.0814α 2 . It implies that for large n, μˆ n and αˆ n are independent. To obtain a CAN estimator for (μ, α) based on the sample moments, note that the double exponential distribution is symmetric around μ, hence E(X ) = μ. To find V ar (X ), we define Y = X − μ, then V ar (X ) = V ar (Y ). Further, Y is distributed as U1 − U2 , where U1 and U2 are independent random variables each having an exponential distribution with mean α and E(Ui )r = r !αr , i = 1, 2. Hence, V ar (X ) = V ar (Y ) = V ar (U1 − U2 ) = 2α 2 . Thus, for a double exponential distribution, E(X ) is a function of μ only and V ar (X ) is a function of α only. Hence, instead of using Theorem 3.3.2, it is better to use the result established in Example 3.3.6, which states that (m 1 , m 2 ) is CAN for (μ1 , μ2 ) = (μ, 2α 2 ) = φ, say with approximate dispersion matrix /n, where  is  =

μ3 μ2 μ3 μ4 − μ22

 .

136

3

Consistent and Asymptotically Normal Estimators

Now to find the central moments μ3 and μ4 of X , we use the fact that Y = X − μ is distributed as U1 − U2 , where U1 and U2 are independent random variables each having an exponential distribution with mean α. Hence the common characteristic functions of U j , j = 1, 2 is φ(t) = (1 − itα)−1 . Thus, the characteristic function of Y is given by φY (t) = E(exp{itY }) = E(exp{itU1 })E(exp{−itU1 }) = (1 − itα)−1 (1 + itα)−1 = (1 + t 2 α 2 )−1 . It is now clear that all odd ordered raw moments of Y , which are same as the central moments, are 0 and μ2r = μ2r = (2r )!α 2r . Thus, μ3 = 0 and μ4 − μ22 = 24α 4 − 4α 4 = 20α 4 and hence  is diag[2α 2 , 20α 4 ]. To find CAN estimator for (μ, α) , suppose g = (g1 , g2 ) : R2 → R2 is defined as √ g1 (x1 , x2 ) = x1 and g2 (x1 , x2 ) = x2 /2 so that g(φ) = (μ, α) . Now, ∂ ∂ g1 (x1 , x2 ) = 1, g1 (x1 , x2 ) = 0, ∂ x1 ∂ x2 ∂ ∂ 1 g2 (x1 , x2 ) = 0 & g2 (x1 , x2 ) = − √ . ∂ x1 ∂ x2 2 2x2 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives, evaluated at φ, is given by   1 0 M = 0 −1/4α . Hence, by Theorem 3.3.4, g(T n ) = (g1 (m 1 , m 2 ), g2 (m 1 , m 2 ))    = m 1 , m 2 /2 is CAN for g(φ) = (μ, α)  with the approximate " matrix M M /n, where ! 2 dispersion  2 M M = diag 2α , 1.25α .



 Remark 3.2.2

In a multiparameter setup, the estimators are compared on the basis of generalized variance, which is nothing but the determinant of the approximate dispersion matrix. In the above example, we have obtained two CAN estimators of (μ, α) , one based on the sample quantiles and the other based on the sample moments. We compare their generalized variances to examine which one is better. Generalized variance of the CAN estimator based on the sample quantiles is

3.3 CAN Estimator: Vector Parameter Setup

137

2α 4 /(log2)2 = 4.1627α 4 and generalized variance of the CAN estimator based on the sample moments is 2.5α 4 . Thus, CAN estimator based on the sample moments is better than that based on the sample quantiles.  Example 3.3.8

Suppose X ∼ C(μ, σ ) distribution with location parameter μ and scale parameter σ . The probability density function f X (x, θ ) and the distribution function FX (x, θ) of X are given by   1 σ 1 1 −1 x − μ , f X (x, θ) = and FX (x, θ ) = + tan π σ 2 + (x − μ)2 2 π σ x ∈ R, μ ∈ R, σ > 0. From the distribution function, we have a1/4 (θ) = μ − σ and a3/4 (θ) = μ + σ . Suppose {X 1 , X 2 , . . . , X n } is a random sample from Cauchy C(μ, σ ) distribution. From Theorem 2.2.6, X ([n/4]+1) is consistent for μ − σ and X ([3n/4]+1) is  consistent for μ + σ . Hence, (X ([n/4]+1) + X ([3n/4]+1) )/2, (X ([3n/4]+1) − X ([n/4]+1) )/2 is consistent for θ = (μ, σ ) . We examine if it is CAN for θ. From Theorem 3.3.3, it follows that   T n = X ([n/4]+1) , X ([3n/4]+1) is CAN for φ = (μ − σ, μ + σ ) with approximate dispersion matrix,  =

3σ 2 π 2 /4 σ 2 π 2 /4 σ 2 π 2 /4 3σ 2 π 2 /4

 =

σ 2π 2 4



 3 1 . 1 3

We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (μ, σ ) . Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = (x1 + x2 )/2 and g2 (x1 , x2 ) = (x2 − x1 )/2. Then ∂ 1 ∂ 1 g1 (x1 , x2 ) = , g1 (x1 , x2 ) = , ∂ x1 2 ∂ x2 2 1 1 ∂ ∂ g2 (x1 , x2 ) = − & g2 (x1 , x2 ) = . ∂ x1 2 ∂ x2 2 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is given by   1/2 1/2 M = −1/2 1/2 .

138

3

Consistent and Asymptotically Normal Estimators

Hence, by Theorem 3.3.4, 

X ([n/4]+1) + X ([3n/4]+1) X ([3n/4]+1) − X ([n/4]+1) , 2 2  g(φ) = θ = (μ, σ )

g(T n ) =

 is CAN for

with the approximate matrix M M  /n, where ! 2 2dispersion "  2 2 M M = diag σ π /2, σ π /4 . We know that for C(μ, σ ) distribution, the second quartile, that is, median  is μ. From Theorem 3.3.3, it follows that T n = X ([n/4]+1) X ([n/2]+1) is CAN for φ = (μ − σ, μ) with approximate dispersion matrix,  =

3σ 2 π 2 /4 σ 2 π 2 /4 σ 2 π 2 /4 σ 2 π 2 /4

 =

σ 2π 2 4



 3 1 . 1 1

We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (μ, σ ) . Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = x2 and g2 (x1 , x2 ) = x2 − x1 . Then ∂∂x1 g1 (x1 , x2 ) = 0, ∂ ∂ ∂ ∂ x2 g1 (x 1 , x 2 ) = 1, ∂ x1 g2 (x 1 , x 2 ) = −1 and ∂ x2 g2 (x 1 , x 2 ) = 1. These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is given by  M=

 0 1 −1 1 .

  Hence, by Theorem 3.3.4, g(T n ) = X ([n/2]+1) , X ([n/2]+1) − X ([n/4]+1) is CAN for g(φ) = θ = (μ, σ ) with the approximate dispersion matrix M M  /n, ! " where M M  = diag σ 2 π 2 /4, σ 2 π 2 /2 . Now, we proceed and find a CAN estimator  based on the second  and the third quartiles, from Theorem 3.3.3, T n = X ([n/2]+1) , X ([3n/4]+1) is CAN for φ = (μ, μ + σ ) with approximate dispersion matrix,  =

σ 2 π 2 /4 σ 2 π 2 /4 σ 2 π 2 /4 3σ 2 π 2 /4

 =

σ 2π 2 4



 1 1 . 1 3

We search for a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (μ, σ ) . Suppose g = (g1 , g2 ) : R2 → R2 is defined as g1 (x1 , x2 ) = x1 and g2 (x1 , x2 ) = x2 − x1 . Then ∂∂x1 g1 (x1 , x2 ) = 1, ∂ ∂ ∂ ∂ x2 g1 (x 1 , x 2 ) = 0, ∂ x1 g2 (x 1 , x 2 ) = −1 and ∂ x2 g2 (x 1 , x 2 ) = 1. These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at φ is given by

3.3 CAN Estimator: Vector Parameter Setup

139

 M=

 1 0 −1 1 .

Hence, by Theorem 3.3.4,   g(T n ) = X ([n/2]+1) , X ([3n/4]+1) − X ([n/2]+1) is CAN for g(φ) = θ = (μ, σ ) with the approximate dispersion matrix M  is again" a diagonal matrix given by M M  /n, where ! 2M  2 M M = diag σ π /4, σ 2 π 2 /2 . Thus, CAN estimator based on the second and third quartiles has the same approximate dispersion matrix as that based on the first and the second quartiles. Further, CAN estimators of (μ, σ ) based on (i) the first and the third quartiles, (ii) the first and the second quartiles and (iii) the second and the third quartiles have the same generalized variance σ 4 π 4 /8. In this sense, all the three estimators are equivalent.  In the following example using multivariate CLT and the delta method as discussed in Theorem 3.3.4, we find famous arctan transformation of Fisher, popularly known as Fisher’s Z transformation, for the correlation parameter in the bivariate normal distribution.  Example 3.3.9

Suppose (X , Y ) has a bivariate normal distribution with zero mean vector and dispersion matrix  given by  =

1ρ ρ 1

 ,

ρ ∈ (−1, 1). Suppose {(X i , Yi ) , i = 1, 2, . . . , n} is a random sample from the distribution of (X , Y ) . The sample correlation coefficient Rn is defined as n S X2 Y 1 2 Rn = where S X Y = (X i − X n )(Yi − Y n ), S X SY n i=1

n n 1 1 S X2 = (X i − X n )2 & SY2 = (Yi − Y n )2 . n n i=1

i=1

From Example 2.5.4, it follows that Rn is consistent for ρ. We now examine whether Rn is CAN for ρ. Note that X ∼ N (0, 1), Y ∼ N (0, 1) and the conditional distribution of X given Y = y is also normal N (ρ y, 1 − ρ 2 ). Suppose a

140

3

Consistent and Asymptotically Normal Estimators

random vector U is defined as U = (X 2 , Y 2 , X Y ) . Then μ = E(U ) = (1, 1, ρ) . To find the dispersion matrix V of U , note that V ar (X 2 ) = 2, V ar (Y 2 ) = 2 & V ar (X Y ) = E(X 2 Y 2 ) − (E(X Y ))2 = E(X 2 Y 2 ) − ρ 2 . Using the conditional distribution of X given Y = y, we find E(X 2 Y 2 ) and the remaining elements of V as follows: E(X 2 Y 2 ) = = = = ⇒ V ar (X Y ) = Cov(X 2 , X Y ) = = 3 E((X − ρY ) |Y ) = ⇒ E(X 3 |Y ) = = 3 ⇒ E(X Y ) = ⇒ Cov(X 2 , X Y ) =

E(E(X 2 Y 2 )|Y ) = E(Y 2 E(X 2 |Y )) E(Y 2 (V ar (X |Y ) + (E(X |Y ))2 )) E(Y 2 (1 − ρ 2 + ρ 2 Y 2 )) = 1 − ρ 2 + ρ 2 E(Y 4 ) 1 − ρ 2 + 3ρ 2 = 1 + 2ρ 2 1 + ρ 2 & Cov(X 2 , Y 2 ) = 1 + 2ρ 2 − 1 = 2ρ 2 E(X 3 Y ) − E(X 2 )E(X Y ) E(E(X 3 Y |Y )) − ρ = E(Y E(X 3 |Y )) − ρ 0 since X |Y = y ∼ N (ρ y, 1 − ρ 2 ) 3ρY E(X 2 |Y ) − 3ρ 2 Y 2 E(X |Y ) + ρ 3 Y 3 3ρ(1 − ρ 2 )Y + ρ 3 Y 3 E(Y E(X 3 |Y )) = 3ρ(1 − ρ 2 )E(Y 2 ) + ρ 3 E(Y 4 ) = 3ρ 2ρ similarly Cov(Y 2 , X Y ) = 2ρ.

Hence the dispersion matrix V of U is given by ⎞ ⎛ 2 2ρ 2 2ρ 2 ⎜ 2ρ ⎟ V = ⎝ 2ρ 2 ⎠. 2ρ 2ρ 1 + ρ 2 From the principal minors of V it follows that it is a positive definite matrix with determinant 4(1 − ρ 2 )2 = 0. A random sample {(X i , Yi ) , i = 1, 2, . . . , n} gives a random sample {U 1 , U 2 , . . . , U n } from the distribution of U . Hence by the multivariate CLT, √

L

n(U n − μ) → W ∼ N3 (0, V ) where  n  n n    2 2 Un = X i /n, Yi /n, X i Yi /n . i=1

i=1

i=1

It is to be noted that √

n((S X2 , SY2 , S X Y ) − μ) =

√ √ 2 2 n(U n − μ) − n(X n , Y n , X n Y n ) .

3.3 CAN Estimator: Vector Parameter Setup

141

Now, X ∼ N (0, 1) ⇒

√ √

P

n X n ∼ N (0, 1) & X n → 0 by WLLN P

n Y n ∼ N (0, 1) & Y n → 0 by WLLN √ √ P ⇒ ( n X n )X n → 0 as n X n is bounded in probability √ √ P ⇒ ( n Y n )Y n → 0 as nY n is bounded in probability √ √ P ⇒ ( n X n )Y n → 0 as n X n is bounded in probability √ √ P ⇒ n((S X2 , SY2 , S X Y ) − μ) − n(U n − μ) → 0 √ L ⇒ n((S X2 , SY2 , S X Y ) − μ) → W ∼ N3 (0, V ) as √ L n(U n − μ) → W .

Y ∼ N (0, 1) ⇒

To examine whether the sample correlation coefficient Rn is CAN for ρ, observe that Rn =

S X2 Y = g(S X2 , SY2 , S X Y ) where g(x1 , x2 , x3 ) = x3 /(x1 x2 )1/2 , S X SY

a function from R3 − {(0, 0, 0)} → (−1, 1). Hence, the vector of partial derivatives of g is given by −x3 ∂ g(x1 , x2 , x3 ) = 3/2 1/2 , ∂ x1 2x1 x2 −x3 ∂ 1 ∂ g(x1 , x2 , x3 ) = 1/2 3/2 , g(x1 , x2 , x3 ) = 1/2 1/2 . ∂ x2 ∂ x 3 2x1 x2 x1 x2 By Theorem 3.3.4, √

n(g(S X2 , SY2 , S X Y ) − g(μ)) =

√ L n(Rn − ρ) → W1 ∼ N (0, σ 2 ) where

σ 2 =  V  and  = (−ρ/2, −ρ/2, 1). Hence σ 2 = (1 − ρ 2 )2 . Thus, Rn is a CAN estimator of ρ with approximate variance (1 − ρ 2 )2 /n. To obtain the asymptotic confidence interval for ρ, we use variance stabilization technique and find a function g so that  2 2 2 the approximate variance  1of g(Rn ) is free from ρ, that is, (g (ρ)) (1 − ρ ) = c. With c = 1, g(ρ) = 1−ρ 2 dρ. Now,  g(ρ) =

 1/2 1/2 dρ + 1−ρ 1+ρ 1 1+ρ = log = tanh−1 (ρ) . 2 1−ρ

1 dρ = 1 − ρ2

 

142

3

Consistent and Asymptotically Normal Estimators

The transformation g(ρ) = 21 log 1+ρ 1−ρ is known as Fisher’s Z transformation or arctan transformation. Thus,   √ 1 1 1 + Rn 1+ρ L n − log → Z ∼ N (0, 1). log 2 1 − Rn 2 1−ρ Using this transformation and routine calculations, we get the asymptotic confidence interval for θ = 21 log 1+ρ 1−ρ , and further obtain the asymptotic confidence √ n interval for ρ from it. With Wn = 21 log 1+R 1−Rn , Q n = n(Wn − θ ) is a pivotal quantity having the standard normal distribution. Hence, we find a(1−α/2) such that P[−a(1−α/2) < Q n < a(1−α/2) ] = 1 − α, the given confidence coefficient. Now, −a(1−α/2) < Q n < a(1−α/2) √ √ ⇔ Wn − a(1−α/2) / n < θ < Wn + a(1−α/2) / n √ √ 1+ρ ⇔ 2(Wn − a(1−α/2) / n) < log < 2(Wn + a(1−α/2) / n) 1−ρ √ √ 1 + ρ ⇔ e{2(Wn −a(1−α/2) / n)} < < e{2(Wn +a(1−α/2) / n)} 1−ρ ⇔

e{2(Wn −a(1−α/2) /

√ √

n)}

e{2(Wn −a(1−α/2) / n)}

−1 +1

< ρ
0 . Then its distribution function F(x, θ ) for x > 0 is given by F(x, θ ) = 1 − exp{−x θ }, which implies that the population median is a1/2 (θ ) = (log 2)1/θ = (0.6931)1/θ . Hence by the delta method, θ˜n = log(log 2)/(log X ([n/2]+1) ) = −0.3666/(log X ([n/2]+1) ) is CAN for θ with approximate variance v(θ ) θ2 θ2 = = . n n(log 2)2 (log(log 2))2 0.06456n Thus for large n, the distribution of √ √     −0.3666 0.3666 0.06456n 0.2541 n −θ =− +θ Tn = θ log X ([n/2]+1) θ log X ([n/2]+1) is approximately normal. We verify the same by simulation using following R code. Consistency of θ˜n can be verified using the procedure described in Sect. 2.7. We take θ = 2 and sample sizes from 500 to 7500 with the increment of 1000. th = 2; n = seq(500,7500,1000); length(n); nsim = 1000 t = matrix(nrow=length(n),ncol=nsim); p = c() for(m in 1:length(n)) { for(i in 1:nsim) { set.seed(i) x = rweibull(n[m],th,1) me = median(x) t[m,i] = -(0.2541*sqrt(n[m])/th)*((0.3666/log(me))+th) } p[m] = shapiro.test(t[m,])$p.value } d = round(data.frame(n,p),4); d u = t[length(n),]; r = seq(-4,4,.1); y = dnorm(r) par(mfrow=c(2,2)) hist(u,freq=FALSE,main="Histogram",ylim=c(0,0.4), xlab = expression(paste("T"[n])),col="light blue") lines(r,y,"o",pch=20,col="dark blue") boxplot(u,main="Box Plot",xlab = expression(paste("T"[n])))



Fig. 3.3 Weibull distribution: approximate normality of normalized sample median

qqnorm(u); qqline(u)
plot(ecdf(u), main="Empirical Distribution Function",
     xlab=expression(paste("T"[n])), ylab=expression(paste("F"[n](t))))
lines(r, pnorm(r), col="blue")

The p-values of Shapiro-Wilk test for normality for sample sizes 500, 1500, 2500, 3500 are 0, for sample sizes 4500, 5500, 6500, 7500 are 0.0033, 0.0242, 0.0107 and 0.1104 respectively. Thus, for sample size n = 7500, normality of sample median is acceptable according to Shapiro-Wilk test with the p-value 0.1104. It is to be noted that the sample size required to achieve similar results in Example 3.4.1 for Cauchy distribution is 80. Thus, the requirement of sample size depends on the underlying probability model and even the values of the indexing parameter. Figure 3.3 displays four plots for n = 7500, which indicate that the distribution of Tn is approximately normal. The same procedure can be repeated for various values of θ ∈ . 
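To repeat the check for other values of θ without copying the whole script, the simulation can be wrapped in a small function; median_normality_p below is a name introduced here only for illustration and returns the Shapiro-Wilk p-value for given θ and n, using the same normalization as above.

median_normality_p = function(th, n, nsim = 1000)
{
t = c()
for(i in 1:nsim)
{
set.seed(i)
x = rweibull(n, th, 1)
t[i] = -(0.2541*sqrt(n)/th)*((0.3666/log(median(x))) + th)
}
shapiro.test(t)$p.value
}
median_normality_p(th = 2, n = 7500)
median_normality_p(th = 0.5, n = 7500)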


 Example 3.4.3

Suppose {X 1 , X 2 , . . . , X n } is a random sample from an exponential distribution, with location parameter θ ∈ R and scale parameter 1. In Example 3.2.7, it is proved that X (1) is not a CAN estimator of θ , that is, there does not exist any sequence {an , n ≥ 1} of real numbers, such that an → ∞ as n → ∞ for which the asymptotic distribution of an (X (1) − θ ) is normal. By taking an = n, we verify it by simulation. Further, it is known that for each n and hence asymptotically, distribution of Tn = n(X (1) − θ ) is exponential with mean 1. We verify this result by simulation, graphically and with goodness of fit test, using following R code. We also use one built-in function for goodness of fit test. th = 1.5; n = 60; nsim = 2000; t = c() for(i in 1:nsim) { set.seed(i) x = th + rexp(n,rate=1) t[i] = min(x) } tn = n*(t-th); summary(tn); v=var(tn)*(nsim-1)/nsim; v r = seq(-4,4,.2); y = dnorm(r) par(mfrow= c(2,2)) hist(tn,freq=FALSE,main="Histogram",ylim=c(0,0.4),xlim=c(-4,5), xlab = expression(paste("T"[n])), col="light blue") lines(r,y,"o",pch=20,col="dark blue") boxplot(tn,main="Box Plot",xlab = expression(paste("T"[n]))); qqnorm(tn); qqline(tn) plot(ecdf(tn),main="Empirical Distribution Function", xlab=expression(paste("T"[n])),ylab = expression(paste("F"[n](t)))) lines(r,pnorm(r),"o",pch=20,col="blue") shapiro.test(tn) O = hist(tn,plot=FALSE)$counts; sum(O) bk = hist(tn,plot=FALSE)$breaks; bk M = max(bk); u = seq(0,M,.1); y1 = dexp(u, rate=1) par(mfrow= c(1,1)) hist(tn,freq=FALSE,main="Histogram of T_n",ylim=c(0,1),xlim=c(0,M), xlab = expression(paste("T"[n])),col="light blue") lines(u,y1,"o",pch=20,col="dark blue") e = exp(1); ep = c() for(i in 1:(length(bk)-1)) { ep[i] = eˆ(-bk[i]) - eˆ(-bk[i+1]) } a = 1-sum(ep); a; ep1 = c(ep, a); ef = sum(O)*ep1; O1 = c(O,0) d = data.frame(O1,round(ef,2)); d



Fig. 3.4 Exponential E x p(θ, 1) distribution: MLE is not CAN

ts = sum((O1-ef)^2/ef); ts
df = length(ef)-1; df; b = qchisq(.95,df); b; p1 = 1-pchisq(ts,df); p1
chisq.test(O1,p=ep1) ### pooling of frequencies less than 5 is ignored

We know that the distribution of Tn = n(X (1) − θ ) is exponential with mean 1. Thus, V ar (Tn ) = 1. From summary statistic and variance, we note that based on 2000 simulations, the mean of Tn is 1.0055, very close to theoretical mean 1 and variance is 1.0371, again close to theoretical variance 1. From all the four graphs in Fig. 3.4, it is clear that the asymptotic distribution of Tn is not normal. Shapiro-Wilk test also rejects the null hypothesis of normality of Tn as its p-value is 0 (less than 2.2e−16 ). All these results are for sample size 60, but even for the large sample size we get similar results and normality of Tn is not acceptable. The first graph of histogram in Fig. 3.4, superimposed by the curve of probability density function of the standard normal distribution indicates that an exponential distribution will be a better fit. Figure 3.5 supports such a hunch

3.4 Verification of CAN Property Using R

151 Histogram of T_n

0.6 0.4 0.0

0.2

Density

0.8

1.0

Fig. 3.5 Exponential E x p(θ, 1) distribution: asymptotic distribution of MLE

0

2

4

6

8

10

Tn

and the theory that the distribution of Tn is exponential with mean 1. To verify the visual impression analytically, we carry out goodness of fit test by using Karl Pearson’s chi-square test statistic. We obtain observed frequencies O by extracting counts from hist and class intervals by extracting breaks from hist. We take the last interval as (M, L), where M is the maximum value of vector breaks and L denotes the maximum value of support of the distribution in null setup. In this example, L = ∞. Expected frequencies are obtained by computing the probabilities of the class intervals using the exponential distribution with mean 1. The data frame d shows the close agreement between observed and expected frequencies. With these data, we compute Karl Pearson’s test statistic and the corresponding p-value. It is 0.5668, which supports the theoretical result that Tn has the exponential distribution with mean 1. There is a built-in function chisq.test(O1,p=ep1) for Karl Pearson’s goodness of fit test, where O1 is a vector of observed frequencies, when data are grouped in class intervals and ep1 is a vector specifying probabilities of these class intervals under the null hypothesis. The built-in function chisq.test also gives the same results. It is to be noted that we get these results for the sample size 60 as for finite n also  distribution of Tn = n(X (1) − θ ) is exponential. In the next example we illustrate how to verify a CAN property of a vector estimator for a vector parameter.
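As an additional check on the same simulated values tn: since the hypothesized null distribution is continuous and completely specified, the Kolmogorov-Smirnov test can also be applied, as in the one-line sketch below.

ks.test(tn, "pexp", 1)   # KS test of tn against the exponential distribution with rate 1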


 Example 3.4.4

Suppose X follows an exponential distribution with location parameter μ and scale parameter 1/σ with probability density function #

1 x −μ , x ≥ μ, σ > 0, μ ∈ R . f (x, μ, σ ) = exp − σ σ √ √ In Example 3.3.5, it is shown that (μ˜ n , σ˜ n ) = (m 1 − m 2 , m 2 ) is CAN for (μ, σ ) , with the approximate dispersion matrix D/n, where  D=

σ 2 −σ 2 −σ 2 2σ 2

 .

√ √ There are two approaches to verify that (μ˜ n , σ˜ n ) = (m 1 − m 2 , m 2 ) is CAN for (μ, σ ) . One is based on the Cramér-Wold device which states that L

L

X n → X if and only if L  X n → L  X ∀ L ∈ Rk , L = 0 . Cramér-Wold device allows higher dimensional problems to reduce to the onedimensional case. Thus, X n converges to k-multivariate normal distribution, if and only if l  X n converges to univariate normal distribution for any k-dimensional vector l of real numbers. In the following code, we adopt this approach to verify normality of (μ˜ n , σ˜ n ) for five vectors. mu = 3; si = 2; n = 1000; nsim = 1500; m1 = m2 = m3 = m4 = c() for(i in 1:nsim) { set.seed(i) u = runif(n,0,1) x = mu-si*log(1-u) m1[i] = mean(x) m2[i] = mean(xˆ2) m3[i] = m2[i]-m1[i]ˆ2 m4[i] = sqrt(m3[i]) } x1 = m1-m4; summary(x1) x2 = m4; summary(x2) y = matrix(nrow=nsim,ncol=2) y = cbind(sqrt(n)*(x1-mu),sqrt(n)*(x2-si)) v = matrix(c(siˆ2,-siˆ2,-siˆ2,2*siˆ2),nrow=2,byrow=TRUE); v l = matrix(nrow=5,ncol=2) l[,1] = c(2,5,7,8,10); l[,2] = c(4,8,12,14,18) l1 = t(l); pv = wi = c()


for(i in 1:5) { di = l[i,]%*%v%*%l1[,i] di = as.numeric(di) wi = (y%*%l1[,i])/sqrt(di) pv[i] = shapiro.test(wi)$p.value } round(pv,3)

For five two-dimensional vectors as specified in l, p-values of Shapiro-Wilk test are 0.2530.1710.1840.1930.208. These √  convey that the distribution of √ Tn = l  n(μ˜ n − μ), n(σ˜ n − σ ) / l  V l can be approximated by the standard normal distribution, which further implies that (μ˜ n , σ˜ n ) is CAN for (μ, σ ) , with the approximate dispersion matrix D/n. √ √ In the second approach, we use the result that if (μ˜ n , σ˜ n ) = (m 1 − m 2 , m 2 ) is CAN for (μ, σ ) , then √ √ √ √ m 2 − σ ) D −1 (m 1 − m 2 −μ, m 2 − σ ) ∼ χ22 . Tn = n(m 1 − m 2 − μ, 2 We verify whether Tn has χ2 distribution graphically and by the chi-square test using the following R code. mu = 3; si = 2; n = 500; nsim = 1200; m1 = m2 = m3 = m4 = w = c() for(i in 1:nsim) { set.seed(i) u = runif(n,0,1) x = mu-si*log(1-u) m1[i] = mean(x) m2[i] = mean(xˆ2) m3[i] = m2[i]-m1[i]ˆ2 m4[i] = sqrt(m3[i]) } x1 = m1-m4; x2 = m4; summary(x1); summary(x2); y = cbind(x1-mu,x2-si);summary(y); v = matrix(c(siˆ2,-siˆ2,-siˆ2,2*siˆ2),nrow=2,byrow=TRUE);v v1 = solve(v); y1 = t(y) for(i in 1:nsim) { w[i] = n*y[i,]%*%v1%*%y1[,i] } summary(w) O = hist(w,plot=FALSE)$counts; sum(O) bk = hist(w,plot=FALSE)$breaks; bk M = max(bk); u = seq(0,M,.2); y2 = dchisq(u,2) hist(w,freq=FALSE,main="Histogram of T_n",xlim=c(0,M),ylim=c(0,.5), xlab = expression(paste("T"[n])),col="light blue")

Fig. 3.6 Exponential Exp(μ, σ) distribution: CAN estimator based on moments (histogram of T_n with the χ²₂ density curve; x-axis: T_n, y-axis: density)

lines(u,y2,"o",pch=20,col="dark blue")
ep = c()
for(i in 1:(length(bk)-1))
{
ep[i] = pchisq(bk[i+1],2) - pchisq(bk[i],2)
}
a = 1-sum(ep); a; ep1 = c(ep, a); ef = sum(O)*ep1; O1 = c(O,0)
d = data.frame(O1,round(ef,2)); d
ts = sum((O1-ef)^2/ef); ts; df = length(ef)-1; df
b = qchisq(.95,df); b; p1 = 1-pchisq(ts,df); p1
chisq.test(O1,p=ep1)

In the output, x1 is a vector of simulated values of μ̃_n, x2 is a vector of simulated values of σ̃_n and w is a vector of simulated values of Tn. Summary statistics (not shown) of x1, x2 and of w indicate that the mean of the simulated values of μ̃_n is close to the assumed value μ = 3, the mean of the simulated values of σ̃_n is close to σ = 2 and the mean of w is close to 2, which is the mean of the χ²₂ distribution. The p-value 0.1758 of Karl Pearson’s chi-square test statistic supports the theory that the distribution of Tn is χ²₂. The built-in function chisq.test also gives the same results. Figure 3.6 shows that the histogram of the simulated values of w is closely approximated by the probability density function of the χ²₂ distribution. It is to be noted that the sample size required in this example is 500, while in the previous example it is 60. 
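As an additional visual diagnostic (not part of the text), a quantile-quantile plot of the simulated values of Tn against χ²₂ quantiles can be drawn; the sketch below reuses the vector w produced by the code above.

p = ppoints(length(w))
plot(qchisq(p, 2), sort(w), xlab = "Chi-square(2) quantiles",
     ylab = "Ordered simulated values of T_n")
abline(0, 1, col = "dark blue")   # points close to this line support the chi-square(2) approximation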


 Example 3.4.5

In Example 3.2.13, we have obtained the 100(1 − α)% asymptotic confidence interval for the survival function e^{−t/θ}, where t is a fixed positive real number. We have used the following two methods—(i) based on the studentization procedure and (ii) using the fact that e^{−t/θ} is a monotone function of θ. By method (i), the 100(1 − α)% large sample confidence interval for e^{−t/θ} is given by

$$\left( e^{-t/\bar{X}_n}\Big(1 - a_{1-\alpha/2}\,\frac{t}{\bar{X}_n\sqrt{n}}\Big),\;\; e^{-t/\bar{X}_n}\Big(1 + a_{1-\alpha/2}\,\frac{t}{\bar{X}_n\sqrt{n}}\Big)\right)$$

and by method (ii) it is

$$\left( \exp\Big\{\frac{-t\big(1 + a_{1-\alpha/2}/\sqrt{n}\big)}{\bar{X}_n}\Big\},\;\; \exp\Big\{\frac{-t\big(1 - a_{1-\alpha/2}/\sqrt{n}\big)}{\bar{X}_n}\Big\}\right).$$

We compare these two asymptotic confidence intervals by simulation, using the R code given below. For the comparison, we obtain the empirical confidence coefficient (ECC), an estimate of the average length of the interval (AL) and the variance of the length (VL).

th = 2; t = 0.4; par = exp(-t/th); nsim = 2000
n = c(50,100,150,200,250); alpha = 0.05; a = qnorm(1-alpha/2,0,1)
mth_est = mest = cc1 = cc2 = alc1 = alc2 = vlc1 = vlc2 = c()
for(s in 1:5)
{
L1 = U1 = L2 = U2 = est = th_est = c()
for(j in 1:nsim)
{
set.seed(j)
x = rexp(n[s],rate = 1/th)
th_est[j] = mean(x)
est[j] = exp(-t/th_est[j])
L1[j] = est[j]-a*t*est[j]/(sqrt(n[s])*th_est[j])
U1[j] = est[j]+a*t*est[j]/(sqrt(n[s])*th_est[j])
L2[j] = exp(-t*(1+a/sqrt(n[s]))/th_est[j])
U2[j] = exp(-t*(1-a/sqrt(n[s]))/th_est[j])
}
cc1[s] = length(which(L1 < par & U1 > par))/nsim   # empirical confidence coefficient, method (i)
cc2[s] = length(which(L2 < par & U2 > par))/nsim   # empirical confidence coefficient, method (ii)
length1 = U1-L1
length2 = U2-L2
alc1[s] = mean(length1)
alc2[s] = mean(length2)

vlc1[s] = var(length1)
vlc2[s] = var(length2)
mth_est[s] = mean(th_est)
mest[s] = mean(est)
}
d = cbind(mth_est,mest,cc1,cc2,alc1,alc2,vlc1,vlc2)
d = round(d,4); d1 = data.frame(n,d); d1

Table 3.1 Comparison of two methods of constructing an asymptotic confidence interval

n     θ̂n      e^{−t/θ̂n}   ECC1    ECC2    AL1     AL2     VL1     VL2
50    2.0055  0.8160      0.9440  0.9450  0.0918  0.0918  0.0001  0.0001
100   2.0047  0.8186      0.9545  0.9540  0.0645  0.0645  0       0
150   2.0052  0.8182      0.9550  0.9565  0.0525  0.0525  0       0
200   2.0042  0.8183      0.9580  0.9565  0.0455  0.0455  0       0
250   2.0032  0.8184      0.9510  0.9520  0.0406  0.0406  0       0

The output of data frame d1 is displayed in Table 3.1. The first and the second columns present the means of the 2000 values of θ̂_n and e^{−t/θ̂_n} respectively, for sample sizes n = 50, 100, . . ., 250. From these columns, we note that for all the sample sizes, the estimate of θ is close to 2 and that of e^{−t/θ} is close to e^{−0.4/2} = e^{−0.2} = 0.8187. Further, the average length of the confidence interval is the same for both the methods and all the sample sizes. It decreases as the sample size increases, as is expected. The variance of the length is almost 0 for both the methods and all the sample sizes. The empirical confidence coefficients for both the methods are more or less the same for all the sample sizes and approach 0.95 as the sample size increases. Thus, there is not much variation in the properties of the large sample confidence intervals corresponding to the two methods. 

In Example 3.3.9, we have shown that the asymptotic distribution of √n(Rn − ρ)/(1 − ρ²) and also of Fisher’s Z transformation of Rn is standard normal, where Rn is the sample correlation coefficient. However, as pointed out in Remark 3.3.3, the rates of convergence to normality are different. In the next example, we compare the rate of convergence to normality of Rn and of Fisher’s Z transformation of Rn. To generate a random sample from a bivariate normal distribution, we use the package mvtnorm. For details of this package, refer to Genz and Bretz [4] and Genz et al. [5].

 Example 3.4.6

Suppose (X, Y)′ has a bivariate normal distribution with zero mean vector and dispersion matrix Σ given by

$$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix},$$


ρ ∈ (−1, 1). In this example, we compare the rate of convergence to normality of Rn and of Fisher’s Z transformation of Rn. For various sample sizes, we simulate from the bivariate normal distribution and judge the rate of convergence to normality based on the p-values of the Shapiro-Wilk test. We also compare the performance based on the coefficient of skewness and the coefficient of kurtosis. The R code is given below.

library(mvtnorm)
rho = .5; a = .5*(log((1+rho)/(1-rho)))
mu = c(0,0); sig = matrix(c(1,rho,rho,1),nrow=2)
n = c(100,140,180,220); nsim = 1200; pt = pz = c()
t = matrix(nrow=length(n),ncol=nsim); z = matrix(nrow=length(n),ncol=nsim)
for(m in 1:length(n))
{
for(i in 1:nsim)
{
set.seed(i)
x = rmvnorm(n[m],mu,sig)
v = cor(x)
R = v[1,2]
s = 0.5*(log((1+R)/(1-R)))
t[m,i] = (sqrt(n[m])/(1-rho^2))*(R-rho)
z[m,i] = sqrt(n[m])*(s-a)
}
pt[m] = shapiro.test(t[m,])$p.value
pz[m] = shapiro.test(z[m,])$p.value
}
d = round(data.frame(n,pt,pz),4); d
ct = matrix(nrow=length(n),ncol=3)
cz = matrix(nrow=length(n),ncol=3)
skt = kut = c(); skz = kuz = c(); r = 2:4
for(m in 1:length(n))
{
for(l in 1:length(r))
{
ct[m,l] = mean((t[m,] - mean(t[m,]))^r[l])
cz[m,l] = mean((z[m,] - mean(z[m,]))^r[l])
}
}
ct = round(ct,4); cz = round(cz,4); ct; cz
for(m in 1:length(n))
{
skt[m] = (ct[m,2])^2/(ct[m,1])^3
skz[m] = (cz[m,2])^2/(cz[m,1])^3
kut[m] = ct[m,3]/(ct[m,1])^2
kuz[m] = cz[m,3]/(cz[m,1])^2
}
d1 = round(data.frame(n,skt,skz,kut,kuz),4); d1


Table 3.2 N₂(0, 0, 1, 1, ρ) distribution: p-values of Shapiro-Wilk test

n     Rn      Z
100   0.0000  0.9611
140   0.0009  0.7433
180   0.0144  0.4470
220   0.0755  0.9388

The p-values of the Shapiro-Wilk test are displayed in Table 3.2. It is noted that for samples of size 100, 140 and 180, normality of √n(Rn − ρ)/(1 − ρ²) is not accepted, but normality of the corresponding Z transformation is accepted. For a sample of size 220, normality of √n(Rn − ρ)/(1 − ρ²) is acceptable with p-value 0.0755, while the p-value for the corresponding Z transformation is 0.9388. Observe that the p-values corresponding to the Z transformation are not monotone functions of the sample size; they decrease initially and increase at a later stage. Such a feature is in view of the fact that the p-value is not just a function of the sample size, but also a function of the observed value of the test statistic, which changes as the observed data change. To summarize, normality of the Z transformation is acceptable for small sample sizes, but that of √n(Rn − ρ)/(1 − ρ²) is not. Hence, we conclude that the rate of convergence to normality of Rn is much slower than that of its Z transformation. Table 3.3 displays the values of the coefficient of skewness β₁ = μ₃²/μ₂³ and of the coefficient of kurtosis γ₁ = μ₄/μ₂² of √n(Rn − ρ)/(1 − ρ²) and of the normalized Fisher’s Z transformation for the four sample sizes. From the table, we note that there is a difference between the values of the coefficient of skewness β₁, but the values of the coefficient of kurtosis are more or less the same for all the sample sizes. The coefficient of skewness of normalized Z is closer to 0 than that of normalized Rn, which again implies that the rate of convergence to normality of Z is faster than that of Rn. From the definitions of β₁ and γ₁, we note that the difference between the values of the coefficient of skewness and the similarity between the values of the coefficient of kurtosis should depend on the sample central moments. Table 3.4 displays the values of the second, third and fourth sample central moments, m₂, m₃, m₄ respectively, both for normalized Rn and normalized Z. From the table, we observe that the second central moment for both Rn and Z is close to 1 as expected, because Rn and Z are normalized to have variance 1.

Table 3.3 Coefficient of skewness and kurtosis of normalized Rn and normalized Z

n     β₁(Rn)  β₁(Z)   γ₁(Rn)  γ₁(Z)
100   0.1122  0.0016  3.1402  3.0091
140   0.0652  0.0000  3.1788  3.0364
180   0.0350  0.0012  3.0858  2.9764
220   0.0324  0.0007  3.1598  3.1012


Table 3.4 Sample central moments of normalized Rn and normalized Z

n     m2(Rn)  m2(Z)   m3(Rn)   m3(Z)    m4(Rn)  m4(Z)
100   1.0238  1.0228  -0.3470  -0.0411  3.2913  3.1481
140   0.9882  0.9940  -0.2508  -0.0021  3.1045  3.0003
180   1.0119  1.0210  -0.1905  0.0359   3.1595  3.1028
220   0.9682  0.9723  -0.1715  0.0262   2.9621  2.9316

Further, the fourth central moment for both Rn and Z is close to 3, which is reflected in the values of the coefficient of kurtosis being close to 3. The third central moments of Rn and Z are different; the third central moment of Z is closer to 0, which is reflected in the values of the coefficient of skewness. 

Fisher’s Z transformation, being a variance stabilizing transformation, is useful for finding an asymptotic confidence interval for ρ. In the next example, we discuss how to compute an asymptotic confidence interval for ρ using the Z transformation and using the studentization procedure. We compare the performance of these two procedures by the empirical confidence coefficient, the average length of the confidence interval and the variance of the length of the confidence interval.

 Example 3.4.7

Suppose (X, Y)′ has a bivariate normal distribution with zero mean vector and dispersion matrix Σ given by

$$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix},$$

ρ ∈ (−1, 1). In Example 3.3.9, we have shown that the asymptotic distribution of √n(Wn − θ) is standard normal, where Wn is Fisher’s Z transformation of Rn given by Wn = (1/2) log{(1 + Rn)/(1 − Rn)}, Rn is the sample correlation coefficient and θ = (1/2) log{(1 + ρ)/(1 − ρ)}. In the same example, we have obtained the asymptotic confidence interval for ρ using Fisher’s Z transformation, that is, using the variance stabilization technique, and also the asymptotic confidence interval for ρ using the studentization procedure. The asymptotic confidence interval with confidence coefficient (1 − α) for ρ using the variance stabilization technique is given by

$$\left( \frac{\exp\{2(W_n - a_{1-\alpha/2}/\sqrt{n})\} - 1}{\exp\{2(W_n - a_{1-\alpha/2}/\sqrt{n})\} + 1},\;\; \frac{\exp\{2(W_n + a_{1-\alpha/2}/\sqrt{n})\} - 1}{\exp\{2(W_n + a_{1-\alpha/2}/\sqrt{n})\} + 1}\right),$$

where Wn = (1/2) log{(1 + Rn)/(1 − Rn)}. The asymptotic confidence interval with confidence coefficient (1 − α) for ρ using the studentization procedure is given by

$$\left( R_n - \frac{a_{1-\alpha/2}(1 - R_n^2)}{\sqrt{n}},\;\; R_n + \frac{a_{1-\alpha/2}(1 - R_n^2)}{\sqrt{n}}\right).$$


The following R code computes the asymptotic confidence intervals for ρ for a sample of size 250 from the bivariate normal distribution. The performance of the two procedures is compared on the basis of 1500 simulations.

library(mvtnorm)
n = 250; nsim = 1500
rho = .5; a = .5*(log((1+rho)/(1-rho)))
alpha = 0.05; b = qnorm(1-alpha/2)
mu = c(0,0); sig = matrix(c(1,rho,rho,1),nrow=2)
R = s = LCI = UCI = LR = UR = c()
for(i in 1:nsim)
{
set.seed(i)
x = rmvnorm(n,mu,sig)
v = cor(x)
R[i] = v[1,2]
s[i] = .5*(log((1+R[i])/(1-R[i])))
LCI[i] = (exp(2*(s[i]-b/sqrt(n)))-1)/(exp(2*(s[i]-b/sqrt(n)))+1)
UCI[i] = (exp(2*(s[i]+b/sqrt(n)))-1)/(exp(2*(s[i]+b/sqrt(n)))+1)
LR[i] = R[i]-b*(1-R[i]^2)/sqrt(n)
UR[i] = R[i]+b*(1-R[i]^2)/sqrt(n)
}
mean(R); shapiro.test(s); shapiro.test(R)
ecc = length(which(LCI < rho & UCI > rho))/nsim; ecc    ## empirical conf. coeff.
eccR = length(which(LR < rho & UR > rho))/nsim; eccR
d = data.frame(head(LCI),head(UCI),head(LR),head(UR))
d1 = round(d,4); d1
length_CI = UCI-LCI; AVG_L_CI = mean(length_CI); V_L_CI = var(length_CI)
AVG_L_CI; V_L_CI
length_R = UR-LR; AVG_L_R = mean(length_R); V_L_R = var(length_R)
AVG_L_R; V_L_R

The mean of 1500 simulated values of Rn comes out to be 0.4998, very close to the value of ρ = 0.5. The p-values of Shapiro-Wilk test for normality are 0.6128 and 0.1896 for Fisher’s Z transformation and Rn respectively, supporting the normality of both. It is to be noted that as in the previous example, the p-value corresponding to Fisher’s Z transformation is larger than that of Rn . Based on 1500 simulations, the empirical confidence coefficient is 0.946 for Fisher’s Z transformation and 0.9413 for Rn , both of them being close to the confidence coefficient 0.95. The average length of the confidence interval is 0.1852 for Fisher’s Z transformation with the variance of the length being 0.00013, very small. The average length of the confidence interval is 0.1854 for Rn , with the variance of the length being 0.00014. It is also very small. Thus, on the basis of these three criteria, the performance of the two techniques is almost the same. The first 6 confidence intervals obtained using both the techniques are displayed in


Table 3.5. From the table, we note that the confidence intervals corresponding to the two techniques are comparable. More precisely, the confidence intervals obtained using the studentization technique are slightly shifted to the right of those obtained using the variance stabilization technique. 

Table 3.5 Confidence intervals for ρ based on Rn and Fisher’s Z transformation

Number  Variance stabilization technique  Studentization technique
1       (0.3707, 0.5630)                  (0.3762, 0.5687)
2       (0.3891, 0.5775)                  (0.3946, 0.5833)
3       (0.3522, 0.5483)                  (0.3575, 0.5540)
4       (0.4279, 0.6076)                  (0.4336, 0.6135)
5       (0.4189, 0.6007)                  (0.4246, 0.6066)
6       (0.3931, 0.5806)                  (0.3986, 0.5864)

3.5 Conceptual Exercises

3.5.1 Suppose {X_1, X_2, . . ., X_n} is a random sample from a uniform U(0, θ) distribution, θ > 0. (i) Examine whether the maximum likelihood estimator of θ is a CAN estimator of θ. (ii) Examine whether the moment estimator of θ is a CAN estimator of θ. (iii) Solve (i) and (ii) if {X_1, X_2, . . ., X_n} are independent random variables where X_i has a uniform U(0, iθ) distribution, θ > 0.
3.5.2 Suppose X ≡ {X_1, X_2, . . ., X_n} is a random sample from a uniform U(0, θ) distribution. Obtain the 100(1 − α)% asymptotic confidence interval for θ based on a sufficient statistic.
3.5.3 Suppose {X_1, X_2, . . ., X_n} is a random sample from a uniform U(θ, 1) distribution, 0 < θ < 1. Find the maximum likelihood estimator of θ and the moment estimator of θ. Examine whether these are CAN estimators of θ.
3.5.4 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Bernoulli B(1, θ) distribution, θ ∈ (0, 1). (i) Suppose an estimator θ̂_n is defined as follows:

$$\hat{\theta}_n = \begin{cases} 0.01, & \text{if } \bar{X}_n = 0 \\ \bar{X}_n, & \text{if } 0 < \bar{X}_n < 1 \\ 0.98, & \text{if } \bar{X}_n = 1. \end{cases}$$

Examine whether it is a CAN estimator of θ. (ii) Examine whether the maximum likelihood estimator of θ is a CAN estimator of θ if θ ∈ [a, b] ⊂ (0, 1).
3.5.5 Suppose {X_1, X_2, . . ., X_n} is a random sample from a normal N(θ, 1) distribution. Find the maximum likelihood estimator of θ and examine if it is CAN for θ if θ ∈ [0, ∞). Identify the limiting distribution at θ = 0.
3.5.6 Suppose {X_1, X_2, . . ., X_n} is a random sample from a distribution of a random variable X with probability density function

f(x, θ) = kθ^k / x^{k+1}, x ≥ θ, k ≥ 3. (i) Find the maximum likelihood estimator of θ and examine whether it is CAN for θ. (ii) Find the moment estimator of θ and examine whether it is CAN for θ. (iii) Find the 95% asymptotic confidence interval for θ based on the maximum likelihood estimator.
3.5.7 Suppose {X_1, X_2, . . ., X_n} is a random sample from an exponential distribution with mean θ. (i) Find the CAN estimator for the mean residual life E(X − t | X > t), t > 0. (ii) Show that for some constant c(p), √n (c(p) X_([np]+1) − θ) converges in law to N(0, σ²(p)). Find the constant c(p) and σ²(p).
3.5.8 Suppose {X_1, X_2, . . ., X_n} are independent and identically distributed random variables each having a Poisson distribution with mean θ, θ > 0. Find a CAN estimator of P[X_1 = 1]. Is it necessary to impose any condition on the parameter space? Under this condition, using the CAN estimator of P[X_1 = 1], obtain a large sample confidence interval for P[X_1 = 1].
3.5.9 Suppose {X_1, X_2, . . ., X_n} is a random sample from f(x, θ) = θ x^{θ−1}, 0 < x < 1, θ > 0. Find a CAN estimator of e^{−θ} based on the sample mean and also based on a sufficient statistic. Compare the two estimators.
3.5.10 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Bernoulli B(1, θ). Find a CAN estimator of θ(1 − θ) when θ ∈ (0, 1) − {1/2}. What is the limiting distribution of the estimator when θ = 1/2 and when the norming factor is √n and n?
3.5.11 Suppose {X_1, X_2, . . ., X_n} is a random sample from a geometric distribution, with probability mass function p(x, θ) = θ(1 − θ)^x, x = 0, 1, . . . .

However, X_1, X_2, . . ., X_n are not directly observable, but one can note whether X_i ≥ 2 or not. (i) Find a CAN estimator for θ, based on the observed data. (ii) Find a CAN estimator for θ, if X_i ≥ 2 is replaced by X_i > 2.
3.5.12 Suppose {X_1, X_2, . . ., X_n} is a random sample from a normal N(θ, 1) distribution. However, X_1, X_2, . . ., X_n are not directly observable, but one can note whether X_i > 2 or not. Find a CAN estimator for θ, based on the observed data.
3.5.13 Suppose {X_1, X_2, . . ., X_n} is a random sample from a distribution of X, with probability density function f(x, α, θ) as given by

$$f(x, \alpha, \theta) = \begin{cases} \dfrac{2x}{\alpha\theta}, & \text{if } 0 < x \le \theta \\[4pt] \dfrac{2(\alpha - x)}{\alpha(\alpha - \theta)}, & \text{if } \theta < x \le \alpha. \end{cases}$$

Find a CAN estimator of θ when α is known.
3.5.14 Suppose {X_1, X_2, . . ., X_n} is a random sample from a distribution with probability mass function p(x, k, p), x = 0, 1, . . ., 0 < p < 1, k > 0. Obtain a CAN estimator for p assuming k to be known.
3.5.15 Suppose {X_1, X_2, . . ., X_n} is a random sample from an exponential distribution with mean θ. Suppose T_{1n} = Σ_{i=1}^n X_i/n and T_{2n} = Σ_{i=1}^n X_i/(n + 1). (i) Examine whether T_{1n} and T_{2n} are consistent for θ. Prove that T_{1n} is CAN for θ. (ii) Prove that √n(T_{2n} − T_{1n}) → 0 in probability under θ and hence both T_{1n} and T_{2n} are CAN for θ with the same approximate variance, but MSE_θ(T_{2n}) < MSE_θ(T_{1n}) ∀ n ≥ 1. (iii) Find a CAN estimator for P[X_1 > t], where t is a positive real number.
3.5.16 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Poisson Poi(θ) distribution. Examine whether the sample variance is CAN for θ.
3.5.17 Show that the empirical distribution function F_n(a) is CAN for F(a), where a is a fixed real number. Hence obtain an asymptotic confidence interval for F(a).
3.5.18 Suppose {X_1, X_2, . . ., X_n} is a random sample from a normal N(θ, aθ²) distribution, θ > 0 and a is a known positive real number. Find the maximum likelihood estimator of θ. Examine whether it is CAN for θ.
3.5.19 Suppose {X_1, X_2, . . ., X_n} is a random sample from a uniform U(−θ, θ) distribution. Find a CAN estimator for θ based on Σ_{i=1}^n |X_i|. Are the sample mean and the sample median CAN for θ? Justify your answer. Find a consistent estimator for θ based on X_(1) and a consistent estimator for θ based on X_(n). Examine if these are CAN for θ.
3.5.20 Suppose {X_1, X_2, . . ., X_n} is a random sample from a uniform U(0, θ) distribution, θ > 0. Examine whether S_n = (Π_{i=1}^n X_i)^{1/n} is CAN for θe^{−1}.
3.5.21 Suppose {X_1, X_2, . . ., X_{2n+1}} is a random sample from a uniform U(θ − 1, θ + 1) distribution. (i) Show that X̄_{2n+1} and X_(n+1) are both CAN for θ. Compare the two estimators. (ii) Using the large sample distribution, obtain the minimum sample size n_0 required for both the estimators to attain a given level of accuracy specified by ε and δ such that P[|T_n − θ| < ε] ≥ 1 − δ, ∀ n ≥ n_0, where T_n is either X̄_{2n+1} or X_(n+1).
3.5.22 Suppose {X_1, X_2, . . ., X_n} is a random sample from X with probability density function f(x, θ) = θ/x², x ≥ θ, θ > 0. (i) Find the maximum likelihood estimator of θ and examine if it is CAN for θ. (ii) Find a CAN estimator of θ based on the sample quantiles.
3.5.23 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Weibull distribution with probability density function f(x, θ) = θ x^{θ−1} exp{−x^θ}, x > 0, θ > 0.

Obtain the estimator of θ based on the sample quantiles. Is it CAN? Justify your answer.
3.5.24 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Laplace (θ, 1) distribution. Find a family of CAN estimators of θ based on the sample quantiles. Also find the CAN estimator of θ based on the sample mean and the sample median. Which one is better? Why?
3.5.25 Suppose {X_1, X_2, . . ., X_n} is a random sample from a gamma G(θ, λ) distribution. Find the 100(1 − α)% asymptotic confidence interval for the mean of the distribution.
3.5.26 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Bernoulli distribution with mean θ. Find an asymptotic confidence interval for θ with confidence coefficient (1 − α) using both the studentization procedure and the variance stabilization technique.
3.5.27 Suppose {X_1, X_2, . . ., X_n} is a random sample from a normal N(μ, σ²) distribution. Find a CAN estimator of the p-th population quantile based on the p-th sample quantile as well as based on a CAN estimator of θ = (μ, σ²)′. Which is better? Why? Obtain the asymptotic confidence interval for the p-th population quantile using the estimator which is better between the two.
3.5.28 Suppose {X_1, X_2, . . ., X_n} is a random sample from an exponential distribution with location parameter μ and scale parameter 1/σ. (i) Obtain an asymptotic confidence interval for μ when σ is known and when it is unknown. (ii) Obtain an asymptotic confidence interval for σ when μ is known and when it is unknown. (iii) Obtain an asymptotic confidence interval for the p-th population quantile when both μ and σ are unknown.
3.5.29 Suppose {X_1, X_2, . . ., X_n} is a random sample of size n from a uniform U(θ_1, θ_2) distribution, −∞ < θ_1 < θ_2 < ∞. Obtain a CAN estimator of the p-th population quantile and, based on it, obtain the asymptotic confidence interval for the p-th population quantile.
3.5.30 Suppose {X_1, X_2, . . ., X_n} is a random sample from a Poisson Poi(θ) distribution, θ > 0. (i) Obtain a CAN estimator of the coefficient of variation cv(θ) of X when it is defined as cv(θ) = standard deviation/mean = 1/√θ. (ii) If the estimator of cv(θ) is proposed as cṽ(θ) = S_n/X̄_n, where X̄_n is the sample mean and S_n is the sample standard deviation, examine if it is CAN for θ. Compare the two estimators.
3.5.31 Suppose {X_1, X_2, . . ., X_n} is a random sample from a log-normal distribution with parameters μ and σ². Find a CAN estimator of (μ_1′, μ_2′)′. Hence obtain a CAN estimator for θ = (μ, σ²)′ and its approximate variance-covariance matrix.
3.5.32 Suppose {X_1, X_2, . . ., X_n} is a random sample from a gamma distribution with scale parameter α and shape parameter λ. Find a moment estimator of (α, λ)′ and examine whether it is CAN. Find its approximate variance-covariance matrix.
3.5.33 Suppose X_{ij} = μ_i + ε_{ij}, where {ε_{ij}, i = 1, 2, 3, j = 1, 2, . . ., n} are independent and identically distributed random variables each having a normal


N(0, σ²) distribution. (i) Obtain a CAN estimator for θ = μ_1 − 2μ_2 + μ_3. (ii) Suppose {ε_{ij}, i = 1, 2, 3, j = 1, 2, . . ., n} are independent and identically distributed random variables with E(ε_{ij}) = 0 and Var(ε_{ij}) = σ². Is the estimator of θ obtained in (i) still a CAN estimator of θ? Justify your answer.
3.5.34 Suppose {X_1, X_2, . . ., X_n} is a random sample from a uniform U(θ_1, θ_2) distribution, where θ_1 < θ_2 ∈ R. (i) Find the maximum likelihood estimator of (θ_1, θ_2)′. Show that it is consistent but not CAN. (ii) Find a CAN estimator of (θ_1 + θ_2)/2.
3.5.35 Suppose {X_1, X_2, . . ., X_n} are independent and identically distributed random variables with finite fourth order moment. Suppose E(X_1) = μ and Var(X_1) = σ². Find a CAN estimator of the coefficient of variation σ/μ.
3.5.36 Suppose {X_1, X_2, . . ., X_n} is a random sample from a distribution of X with distribution function F. Suppose random variables Z_1 and Z_2 are defined as follows. For a < b,

$$Z_1 = \begin{cases} 1, & \text{if } X \le a \\ 0, & \text{if } X > a, \end{cases} \qquad Z_2 = \begin{cases} 1, & \text{if } X \le b \\ 0, & \text{if } X > b. \end{cases}$$

Show that for large n, the distribution of (Z̄_{1n}, Z̄_{2n})′ is bivariate normal. Hence obtain a CAN estimator for (F(a), F(b))′.
3.5.37 Suppose {X_1, X_2, . . ., X_n} is a random sample from the following distributions—(i) normal N(μ, σ²) and (ii) exponential distribution with location parameter θ and scale parameter λ. Find the maximum likelihood estimators of the parameters using the stepwise maximization procedure and examine whether these are CAN.

3.6 Computational Exercises

Verify the results of the following exercises and examples by simulation using R code.

3.6.1 Exercise 3.5.1 (i) and (ii) (Hint: Code will be similar to that of Example 3.4.4).
3.6.2 Exercise 3.5.2 (Hint: Code will be similar to that of Example 3.4.4).
3.6.3 Exercise 3.5.4 (Hint: Code will be similar to that of Example 3.4.1).
3.6.4 In Exercise 3.5.5, compare the performance of the asymptotic confidence intervals based on the maximum likelihood estimator and the moment estimator of θ. (Hint: Use the approach similar to that of Example 3.4.5 and Example 3.4.7).
3.6.5 In Exercise 3.5.7, find the value of p for which σ²(p) is minimum. (Hint: In the solution of Exercise 3.5.7, you will find the expression of σ²(p).)


3.6.6 Exercise 3.5.15, (i) & (ii) (Hint: Code will be similar to that of Example 3.4.1. See the solution of Exercise 3.5.15 for the expressions of MSE).
3.6.7 Exercise 3.5.21 (Hint: Code will be similar to that of Example 3.4.1. See the solution of Exercise 3.5.21 for the expressions of the minimum sample size).
3.6.8 Exercise 3.5.26. Display the first 6 confidence intervals for both the procedures. Also compute the empirical confidence coefficient for both the procedures. (Hint: Use the approach similar to that of Example 3.4.7).
3.6.9 Exercise 3.5.27. Take p = 0.40. Display the first 6 confidence intervals for both the procedures. Also compute the empirical confidence coefficient for both the procedures.
3.6.10 Exercise 3.5.30 (Hint: See the solution of Exercise 3.5.30)
3.6.11 Exercise 3.5.31 (Hint: See the solution of Exercise 3.5.31)
3.6.12 Example 3.2.2
3.6.13 Example 3.2.6. Take any value of p.
3.6.14 Example 3.2.8 (Hint: Code will be similar to that of Example 3.4.4).
3.6.15 Example 3.2.11 (Hint: Use the approach similar to that of Example 3.4.7).
3.6.16 Example 3.3.1 (Hint: Code will be similar to that of Example 3.4.1).

References

1. Serfling, R. J. (1980). Approximation theorems of mathematical statistics. New York: Wiley.
2. DasGupta, A. (2008). Asymptotic theory of statistics and probability. New York: Springer.
3. Anderson, T. W. (2003). An introduction to multivariate statistical analysis. New York: Wiley.
4. Genz, A., & Bretz, F. (2009). Computation of multivariate normal and t probabilities: Lecture notes in statistics (Vol. 195). Heidelberg: Springer.
5. Genz, A., Bretz, F., Miwa, T., Mi, X., Leisch, F., Scheipl, F., & Hothorn, T. (2020). mvtnorm: Multivariate normal and t distributions. R package version 1.1-0. http://CRAN.R-project.org/package=mvtnorm.

4 CAN Estimators in Exponential and Cramér Families

Contents
4.1 Introduction
4.2 Exponential Family
4.3 Cramér Family
4.4 Iterative Procedures
4.5 Maximum Likelihood Estimation Using R
4.6 Conceptual Exercises
4.7 Computational Exercises

Learning Objectives
After going through this chapter, the readers should be able
– to derive CAN estimators of parameters of the probability models belonging to one-parameter and multiparameter exponential family
– to appreciate the Cramér-Huzurbazar theory of maximum likelihood estimation of parameters of the probability models belonging to one-parameter and multiparameter Cramér family
– to compare CAN estimators on the basis of asymptotic relative efficiency
– to implement the iterative procedures for computation of maximum likelihood estimators using R

4.1 Introduction

In Chap. 2, we discussed the concept of consistency of an estimator based on a random sample from the distribution of X, which is either a random variable or a random vector with probability law f(x, θ), indexed by a parameter θ ∈ Θ. It may be a real parameter or a vector parameter. Chapter 3 was devoted to the discussion of a CAN estimator of a parameter. The present chapter is concerned with the study of a CAN estimator of a parameter, when the probability distribution of X belongs to a specific family of distributions such as an exponential family or a Cramér family. An exponential family is a subclass of a Cramér family. In Sect. 4.2, we prove that in a one-parameter exponential family and in a multiparameter exponential family, the maximum likelihood estimator and the moment estimator based on a sufficient statistic are the same and these are CAN estimators. Section 4.3 presents the Cramér-Huzurbazar theory for the distributions belonging to a Cramér family. Cramér-Huzurbazar theory, which is usually referred to as the standard large sample theory of maximum likelihood estimation, asserts that for a large sample size, with high probability the maximum likelihood estimator of a parameter is a CAN estimator. These results are heavily used in Chaps. 5 and 6 to find the asymptotic null distribution of the likelihood ratio test statistic, Wald's test statistic and the score test statistic. In many models, the system of likelihood equations cannot be solved explicitly and we need some numerical procedures. In Sect. 4.4, we discuss frequently used numerical procedures to solve the system of likelihood equations, such as the Newton-Raphson procedure and the method of scoring. The last section illustrates the results established in Sects. 4.2 to 4.4 using R software.
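As a preview of the iterative procedures of Sect. 4.4, the sketch below shows a bare-bones Newton-Raphson iteration for a single likelihood equation. It is not the book's code; the gamma shape model with scale 1 and the simulated data are assumptions chosen here only for illustration.

# A sketch: Newton-Raphson for the likelihood equation of the gamma shape parameter (scale 1).
set.seed(4)
x = rgamma(500, shape = 3, scale = 1)                       # illustrative data
score = function(th) sum(log(x)) - length(x)*digamma(th)    # first derivative of log-likelihood
d2l   = function(th) -length(x)*trigamma(th)                # second derivative of log-likelihood
th = mean(x)                                                # moment estimate as starting value
for(k in 1:20) th = th - score(th)/d2l(th)                  # Newton-Raphson updates
th                                                          # approximate maximum likelihood estimate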

4.2 Exponential Family

An exponential family is a practically convenient and widely used unified family of distributions indexed either by a real parameter or by a vector parameter. It contains most of the standard discrete and continuous distributions that are used for modeling, such as the normal, Poisson, binomial, exponential, gamma and multivariate normal. The reason for the special status of an exponential family is that a number of important and useful results regarding estimation and testing of hypotheses can be unified within the framework of an exponential family. This family also forms the basis for an important class of models, known as generalized linear models. A fundamental treatment of the general exponential family is available in Lehmann and Romano [1] and other books cited therein.
Suppose X is a random variable or a random vector with probability law f(x, θ), which is either a probability density function or a probability mass function, θ ∈ Θ ⊂ R, and θ is an indexing parameter. The distribution of X belongs to a one-parameter exponential family if the following conditions are satisfied.
(i) The support S_f of f(x, θ), defined as S_f = {x | f(x, θ) > 0}, is free from θ.
(ii) The parameter space Θ is such that ∫_{S_f} f(x, θ) dx = 1 and it is an open set.
(iii) The probability law f(x, θ) is expressible as f(x, θ) = exp{U(θ)K(x) + V(θ) + W(x)},


where U and V are functions of θ only and K and W are functions of x only.
(iv) U is a twice differentiable function of θ with U′(θ) ≠ 0.
(v) K(x) and 1 are linearly independent, that is, a + bK(x) = 0 ⇒ a = 0 and b = 0.
In condition (ii), it is enough to assume that the true parameter θ_0 is an interior point. If Θ is assumed to be an open set, it is always satisfied. It has been proved that Θ is a convex set (see Lehmann and Romano [1]). It is easy to verify that all standard distributions, such as N(θ, 1), θ ∈ R, the exponential distribution with mean θ > 0, binomial B(n, θ) where n is known and θ ∈ (0, 1), Poisson Poi(θ), θ > 0, geometric with success probability θ ∈ (0, 1), and the truncated versions of these distributions, constitute one-parameter exponential families of distributions. The N(θ, θ) distribution, where θ > 0, belongs to a one-parameter exponential family. However, the N(θ, θ²) distribution, where θ > 0, does not belong to a one-parameter exponential family, but belongs to a curved exponential family; refer to van der Vaart [2] for details. In Sect. 4.3, we show that it belongs to a Cramér family. A family of normal N(θ, 1) distributions, with θ ∈ I⁺ where I⁺ = {1, 2, . . .} or θ ∈ {−1, 0, 1}, is not an exponential family as the parameter space is not open. Similarly, a family of gamma distributions with a known scale parameter and shape parameter belonging to I⁺, also known as the Erlang distribution, is not an exponential family, as the parameter space is not open. A uniform U(0, θ) or a uniform U(θ − 1, θ + 1) distribution, and an exponential distribution with scale parameter 1 and location parameter θ, do not belong to an exponential family as the support of the distribution depends on the parameter θ. A Cauchy distribution with scale parameter 1 and location parameter θ does not belong to the exponential family, as its probability density function f(x, θ) cannot be expressed in the required form. For the same reason, a Laplace distribution does not belong to an exponential family. Some of the properties of the distributions belonging to an exponential family are as follows:
(i) If {X_1, X_2, . . ., X_n} is a random sample from a distribution belonging to a one-parameter exponential family, then Σ_{r=1}^n K(X_r) is a minimal sufficient statistic.
(ii) The function V is differentiable any number of times with respect to θ.
(iii) The identity ∫_{S_f} f(x, θ) dx = 1 can be differentiated with respect to θ any number of times under the integral sign. As a consequence, E_θ[∂/∂θ log f(X, θ)] = 0 and the information function I(θ) is given by

$$I(\theta) = E_\theta\Big(\frac{\partial}{\partial\theta}\log f(X,\theta)\Big)^2 = E_\theta\Big(-\frac{\partial^2}{\partial\theta^2}\log f(X,\theta)\Big) = Var_\theta\Big(\frac{\partial}{\partial\theta}\log f(X,\theta)\Big).$$



Observe that I (θ) = V arθ



∂ ∂θ


   ∂ log f (X , θ) ≥ 0. If V arθ ∂θ log f (X , θ) = 0,

then ∂/∂θ log f(X, θ) = U′(θ)K(X) + V′(θ) is a degenerate random variable; it may be a function of θ. It further implies that K(X) is some function of θ, which is contrary to the assumption that K(X) is a function of X only. Thus, it follows that I(θ) = Var_θ(∂/∂θ log f(X, θ)) > 0. Further,

$$E_\theta\Big(\frac{\partial}{\partial\theta}\log f(X,\theta)\Big) = 0 \;\Rightarrow\; E_\theta(K(X)) = \frac{-V'(\theta)}{U'(\theta)} = \eta(\theta), \text{ say.} \qquad (4.2.1)$$

It is to be noted that E_θ(K(X)) < ∞ as U′(θ) ≠ 0. We find the variance of K(X) using the following formula for I(θ). We have

$$I(\theta) = E_\theta\Big(\frac{\partial}{\partial\theta}\log f(X,\theta)\Big)^2 = E_\theta\big(U'(\theta)K(X) + V'(\theta)\big)^2 = (U'(\theta))^2 E_\theta\Big(K(X) - \frac{-V'(\theta)}{U'(\theta)}\Big)^2$$
$$= (U'(\theta))^2 E_\theta\big(K(X) - E(K(X))\big)^2 = (U'(\theta))^2 Var_\theta(K(X)) \;\Rightarrow\; Var_\theta(K(X)) = \frac{I(\theta)}{(U'(\theta))^2}, \text{ as } U'(\theta) \ne 0. \qquad (4.2.2)$$


Since I(θ) = (U′(θ))² Var_θ(K(X)) and U′(θ) ≠ 0, if Var_θ(K(X)) < ∞ then I(θ) < ∞. Thus, if the variance of K(X) is finite, then 0 < I(θ) < ∞. We find one more expression for I(θ) as follows:

$$I(\theta) = E_\theta\Big(-\frac{\partial^2}{\partial\theta^2}\log f(X,\theta)\Big) = E_\theta\big(-U''(\theta)K(X) - V''(\theta)\big) = U''(\theta)\frac{V'(\theta)}{U'(\theta)} - V''(\theta)$$
$$= U'(\theta)\,\frac{U''(\theta)V'(\theta) - V''(\theta)U'(\theta)}{(U'(\theta))^2} = U'(\theta)\,\frac{d}{d\theta}\Big(\frac{-V'(\theta)}{U'(\theta)}\Big) = U'(\theta)\,\eta'(\theta). \qquad (4.2.3)$$

We use these expressions for E(K(X)), Var(K(X)) and I(θ) in the following Theorem 4.2.1, which proves an important result: for the distributions belonging to an exponential family, method of moment estimation and maximum likelihood estimation lead to the best CAN estimator of the indexing parameter θ. It is true when the moment estimator is a function of a sufficient statistic. We use the inverse function theorem, stated in Chap. 1, in the proofs of Theorem 4.2.1 and Theorem 4.2.2 to examine whether the inverse of some parametric function exists and has some desirable properties.
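Relations (4.2.1)–(4.2.3) can also be checked numerically. The following sketch is not from the text; it takes the Poisson(θ) model, for which U(θ) = log θ, K(x) = x and V(θ) = −θ, with θ = 2 chosen only for illustration.

# A sketch: numerical check of (4.2.1)-(4.2.3) for the Poisson(theta) model, theta = 2.
set.seed(2)
theta = 2
x = rpois(1e5, theta)
mean(x)   # close to eta(theta) = -V'(theta)/U'(theta) = theta = 2, as in (4.2.1)
var(x)    # close to I(theta)/U'(theta)^2 = (1/theta)/(1/theta^2) = theta = 2, as in (4.2.2)
# here U'(theta) = 1/theta and eta'(theta) = 1, so U'(theta)*eta'(theta) = 1/theta = I(theta), as in (4.2.3)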



Theorem 4.2.1 Suppose the distribution of a random variable or a random vector X belongs to a one-parameter exponential family with indexing parameter θ. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . Then the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN for θ with approximate variance 1/n I (θ).

Proof Since the distribution of X belongs to a one-parameter exponential family with indexing parameter θ, its probability law is given by f(x, θ) = exp{U(θ)K(x) + V(θ) + W(x)}, where U and V are differentiable functions of θ with U′(θ) ≠ 0. Further, 1 and K(x) are linearly independent, the parameter space Θ is an open set and the support S_f of f(x, θ) is free from θ. Corresponding to a random sample X = {X_1, X_2, . . ., X_n} of size n from the distribution of X, the likelihood of θ is given by

$$L_n(\theta|X) = \prod_{r=1}^{n} \exp\{U(\theta)K(X_r) + V(\theta) + W(X_r)\} = \exp\Big\{U(\theta)\sum_{r=1}^{n}K(X_r) + nV(\theta) + \sum_{r=1}^{n}W(X_r)\Big\}. \qquad (4.2.4)$$


From the Neyman-Fisher factorization theorem, it follows that Σ_{r=1}^n K(X_r) is a sufficient statistic for the family. Thus, the moment estimator of θ based on a sufficient statistic is a solution of the equation T_n = Σ_{r=1}^n K(X_r)/n = E_θ(K(X)) = η(θ) = −V′(θ)/U′(θ) by Eq. (4.2.1). To investigate whether this equation has a solution, we note that I(θ) = U′(θ)η′(θ) from Eq. (4.2.3). Further, I(θ) > 0 and U′(θ) ≠ 0 ∀ θ ∈ Θ; thus if U′(θ) > 0, then η′(θ) > 0 and if U′(θ) < 0, then η′(θ) < 0, which implies that η′(θ) ≠ 0 ∀ θ ∈ Θ. Hence, by the inverse function theorem, a unique η⁻¹ exists and the moment equation T_n = η(θ) has a unique solution. Thus, the moment estimator θ̃_n based on a sufficient statistic is given by θ̃_n = η⁻¹(T_n). To find the maximum likelihood estimator, from the likelihood as given in Eq. (4.2.4), we have

$$\frac{\partial}{\partial\theta}\log L_n(\theta|X) = U'(\theta)\,nT_n + nV'(\theta) \quad\text{and}\quad \frac{\partial^2}{\partial\theta^2}\log L_n(\theta|X) = U''(\theta)\,nT_n + nV''(\theta).$$

Thus, the likelihood equation is given by T_n = η(θ) with its solution θ = η⁻¹(T_n). To claim it to be the maximum likelihood estimator, we examine whether the second derivative of the log-likelihood is negative at the solution of the likelihood equation. Observe that at T_n = η(θ), that is, at θ = η⁻¹(T_n),

$$\frac{\partial^2}{\partial\theta^2}\log L_n(\theta|X)\Big|_{\theta=\eta^{-1}(T_n)} = n\big(U''(\theta)\eta(\theta) + V''(\theta)\big) = n\Big(U''(\theta)\frac{-V'(\theta)}{U'(\theta)} + V''(\theta)\Big)$$






$$= n\,\frac{-U''(\theta)V'(\theta) + U'(\theta)V''(\theta)}{U'(\theta)} = -n\,I(\theta) = -n\,I(\eta^{-1}(T_n)) < 0 \quad\text{a.s.}$$

Hence, θ̂_n = η⁻¹(T_n) is the maximum likelihood estimator of θ and it is the same as the moment estimator θ̃_n = η⁻¹(T_n) based on a sufficient statistic. To establish that θ̂_n = θ̃_n is CAN, observe that E(K(X)) = η(θ) < ∞ as U′(θ) ≠ 0 and hence, by Khintchine's WLLN, T_n = (1/n)Σ_{r=1}^n K(X_r) converges in probability to E_θ(K(X)) = η(θ) as n → ∞, ∀ θ ∈ Θ. Thus, T_n is consistent for η(θ). To find its asymptotic distribution with suitable normalization, it is to be noted that {K(X_1), K(X_2), . . ., K(X_n)} are independent and identically distributed random variables with E_θ(K(X)) = η(θ) and Var_θ(K(X)) = I(θ)/(U′(θ))², which is positive and finite. Hence, by the CLT applied to {K(X_1), K(X_2), . . ., K(X_n)}, we have

K (X r ) − nη(θ) √ L L → Z ∼ N (0, 1) ⇔ n(Tn − η(θ)) → Z 1 ∼ N (0, σ(θ)), √  n I (θ)/U (θ)

r =1

where σ(θ) = I (θ)/(U  (θ))2 . Thus, Tn is CAN for η(θ) = φ, say with approximate variance σ(θ)/n = σ1 (φ)/n. To find the CAN estimator for θ, we use the delta method. Suppose g(φ) = η −1 (φ) then g(φ) = η −1 (φ) = η −1 (η(θ)) = θ. Further, η(θ) = −V  (θ)/U  (θ). Since U and V are differentiable twice, η(θ) is differentiable and by the inverse function theorem g(φ) = η −1 (φ) is a differentiable function and hence a continuous function. Hence, by the invariance property of consistency under continuous transformation, Pθ



Tn → η(θ) ⇒ η −1 (Tn ) = θˆ n = θ˜ n → η −1 (η(θ)) = θ , ∀ θ ∈  . Thus, θˆ n = θ˜ n is consistent for θ. Now, g is differentiable and g  (φ) =

d −1 d dθ η (φ) = η −1 (η(θ)) = . dφ dη(θ) dη(θ)

To find the expression for

dθ dη(θ)

observe that,

d −1 d η (η(θ)) = θ=1 dθ dθ dη −1 (η(θ)) dη(θ) = 1, by chain rule ⇒ dη(θ) dθ dθ  ⇒ η (θ) = 1 dη(θ) dθ U  (θ) ⇒ = (η  (θ))−1 = = 0, dη(θ) I (θ)

η −1 (η(θ)) = θ ⇒



as U  (θ) = 0 and I (θ) < ∞. Thus, g  (φ) =

dθ dη(θ)

for all φ. Hence, by the delta method, g(Tn ) = g(φ) = θ with approximate variance given by 1 1 I (θ) V1 (φ)(g  (φ))2 = n n (U  (θ))2



U  (θ) I (θ)  = 0 ∀ θ ∈ −1 η (Tn ) = θˆ n = θ˜ n

=

U  (θ) I (θ)

2 =

 and hence is CAN for

1 . n I (θ)

Thus, for the distributions belonging to a one-parameter exponential family with indexing parameter θ, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is a CAN estimator of θ with approximate variance 1/n I (θ).  It is to be noted that θˆ n is only a local maximum likelihood estimator of θ. Theorem 4.2.1 can be easily verified for all standard distributions, such as normal N (θ, 1), θ ∈ R, exponential distribution with mean θ > 0, gamma distribution with scale 1 and shape parameter λ > 0, Binomial B(n, θ) where n is known and θ ∈ (0, 1), Poisson Poi(θ), θ > 0, geometric with success probability θ ∈ (0, 1) and even the truncated versions of these distributions. In the following examples we illustrate the results of Theorem 4.2.1 for some distributions.  Example 4.2.1

The number of insects per leaf is believed to follow a Poisson Poi(θ) distribution. Many leaves have no insects, because those are unsuitable for feeding. For the given situation, if X denotes the number of insects per leaf, then the distribution of X is modeled as a Poisson Poi(θ) distribution truncated at 0, θ > 0. The probability mass function of the Poisson Poi(θ) distribution truncated at 0 is given by

$$f(x, \theta) = P_\theta[X = x] = \frac{e^{-\theta}\theta^x}{(1 - e^{-\theta})\,x!} \;\Rightarrow\; \log f(x, \theta) = U(\theta)K(x) + V(\theta) + W(x),$$

where x = 1, 2, . . . , and U (θ) = log θ, K (x) = x, V (θ) = −θ − log(1 − e−θ ) and W (x) = − log x!. Further, U and V are differentiable functions of θ and can be differentiated any number of times and U  (θ) = 1/θ = 0. 1 and k(x) = x are linearly independent. The parameter space is an open set and support of X , which is a set of natural numbers, and is free from θ. Thus, Poisson Poi(θ) distribution truncated at 0 is a member of a one-parameter exponential family. Hence, by Theorem 4.2.1, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN with approximate variance I −1 (θ)/n. We now proceed to find the expressions for the estimator and I (θ). Corresponding to a random sample of size n from the distribution of X , the likelihood of θ is given by




L n (θ|X ) ≡ L n (θ|X 1 , X 2 , . . . , X n ) =

n

i=1

e−θ θ Xi (1 − e−θ ) X i !

 =

n

e−θ 1 − e−θ

n 

Xi

θi=1 . n  Xi ! i=1

n

By the Neyman-Fisher factorization theorem, it follows that i=1 X i is a sufficient statistic. The moment estimator of θ based on the sufficient statistic is then given by the equation, X n = E(X ) = θ/(1 − e−θ ) = η(θ) say. It is to be noted that η  (θ) =

1 − e−θ − θe−θ Pθ [Y > 1] = > 0, (1 − e−θ )2 (1 − e−θ )2

∀ θ > 0,

where Y ∼ Poi(θ) with support {0, 1, . . . , }. Hence, by the inverse function theorem, η −1 exists and by using numerical methods, which are discussed in Sect. 4.4, we get the moment estimator θ˜ n of θ based on the sufficient statistic as θ˜ n = η −1 (X n ). We now proceed to find the maximum likelihood estimator. From the likelihood of θ, as specified above we get the following likelihood equation: n 

−n − Further,

ne−θ + 1 − e−θ

i=1

θ

Xi =0



Xn = n 

∂2 ∂θ2

log L n (θ|X ) =

ne−θ (1 − e−θ )2



θ . 1 − e−θ Xi

i=1

θ2

and at the solution of the likelihood equation it is −n(1 − e−θ − θe−θ )/θ(1 − e−θ )2 < 0 for all θ > 0. Thus, the maximum likelihood estimator θˆ n of θ is given by θˆ n = η −1 (X n ), which is the same as the moment estimator based on the sufficient statistic. The information function I (θ) is given by   −ne−θ nθ ∂2 + 2 n I (θ) = E − 2 log L n (θ|X ) = −θ 2 ∂θ θ (1 − e ) =

n(1 − e−θ − θe−θ ) = nU  (θ)η  (θ) . θ(1 − e−θ )2

Thus, θ˜ n = θˆ n is CAN for θ with approximate variance 1/n I (θ) = θ(1 − e−θ )2 /n(1 − e−θ − θe−θ ).
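The equation X̄_n = θ/(1 − e^{−θ}) has no closed-form solution; iterative methods are taken up in Sect. 4.4. As a quick illustration (a sketch, not the book's code; the simulated data and the use of uniroot are assumptions made here), the estimate can be computed numerically in R as follows.

# A sketch: solving xbar = theta/(1 - exp(-theta)) for the zero-truncated Poisson model.
set.seed(3)
theta0 = 1.5
y = rpois(5000, theta0)
x = y[y > 0]                          # zero-truncated sample
xbar = mean(x)
g = function(theta) theta/(1 - exp(-theta)) - xbar
theta_hat = uniroot(g, lower = 1e-6, upper = 50)$root
theta_hat                             # maximum likelihood (= moment) estimate of theta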





 Example 4.2.2

Suppose X ∼ N (θ, θ), θ > 0. Then its probability density function is given by   1 exp − (x − θ)2 2θ 2πθ   1 θ 1 , x ∈ R θ > 0. = √ exp − x 2 + x − 2θ 2 2πθ

f (x, θ) = √

1

Hence, log f (x, θ) = U (θ)K (x) + V (θ) + W (x), where U (θ) = −1/2θ, K (x) = x 2 , W (x) = x and V (θ) = − log(2πθ)/2 − θ/2. Further U and V are differentiable functions of θ and can be differentiated any number of times and U  (θ) = 1/2θ2 = 0. To prove that 1 and k(x) = x 2 are linearly independent, consider a + bx 2 = 0. Taking derivative with respect to x, we get 2bx = 0, with one more derivative 2b = 0, that is b = 0 and then from a + bx 2 = 0, we have a = 0 which implies that 1 and k(x) = x 2 are linearly independent. The parameter space (0, ∞) is an open set and support of X , which is a real line, is free from θ. Thus, a family of normal N (θ, θ) distributions with θ > 0 is a one-parameter exponential family. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . By Theorem 4.2.1, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN with approximate variance 1/n I (θ). We now proceed to find the expressions for the estimator and I (θ). Corresponding to a random sample of size n from the distribution of X , the likelihood of θ is given by   n

1 1 2 exp − (X i − θ) L n (θ|X ) ≡ L n (θ|X 1 , X 2 , . . . , X n ) = √ 2θ 2πθ i=1

 n n n 1 1 2 nθ = √ exp − Xi + Xi − . 2θ 2 2πθ i=1

i=1

n X i2 is a sufBy the Neyman-Fisher factorization theorem, it follows that i=1 ficient statistic. The moment estimator of θ based on the sufficient statistic is the solution of the equation, m 2 = E(X 2 ).It is a quadratic equation given by   θ2 + θ − m 2 = 0 with solution θ = −1 ± 1 + 4m 2 /2. Since θ > 0, we dis   card the root θ = −1 − 1 + 4m 2 /2 and hence the moment estimator θ˜ n of θ    based on the sufficient statistic is θ˜ n = −1 + 1 + 4m 2 /2. To find the maximum likelihood estimator, from the likelihood of θ as specified above we get the likelihood equation and the second derivative of the log likelihood at the solution as follows: n n ∂ 1 2 n X i − = 0 ⇔ θ2 + θ − m 2 = 0 log L n (θ|X ) = − + 2 ∂θ 2θ 2θ 2 i=1

176

4

CAN Estimators in Exponential and Cramér Families

n 

&

∂2 n log L n (θ|X ) = 2 − ∂θ2 2θ

i=1



X i2

θ3 n 



n 

X i2

n 

X i2



⎥ 1⎢ n i=1 i=1 ⎥ − + =− ⎢ + ⎣ θ 2θ 2θ2 2θ2 ⎦

X i2



⎥ 1 ⎢ n i=1 ⎥ at the solution of likelihood equation =− ⎢ + ⎣ 2 θ 2 2θ ⎦ < 0,

∀ θ>0.

Thus, the maximum likelihood estimator θˆ n of θ is given by  ˆθn = (−1 + 1 + 4m  )/2, which is the same as the moment estimator based on 2

the sufficient statistic. The information function I (θ) is given by   ∂2 n n(θ + θ2 ) n(1 + 2θ) n I (θ) = E − 2 log L n (θ|X ) = − 2 + = . 3 ∂θ 2θ θ 2θ2 Thus, θ˜ n = θˆ n is CAN for θ with approximate variance 1/n I (θ) = 2θ2 /n(1 + 2θ). We can also prove that θ˜ n = θˆ n is CAN for θ using the expression for θˆ n and the delta method. From Theorem 2.2.5, we have Pθ

m 2 → μ2 (θ) = θ + θ2 . Hence, θˆ n =

−1 +



1 + 4m 2

2





−1 +



1 + 4(θ + θ2 ) =θ. 2

To prove that it is CAN, we use the delta method. From Example 3.3.2, we have T n = (m 1 , m 2 ) is CAN for (θ, θ2 + θ) = φ, say, with approximate dispersion matrix /n, where  is given by   θ 2θ2 . = 2θ2 2θ2 + 4θ3 √ We now define a function g : R2 → R such that g(x1 , x2 ) = (−1+ 1 + 4x2 )/2. Its partial derivatives are given by ∂x∂ 1 g(x1 , x2 ) = 0 and √ ∂ ∂x2 g(x 1 , x 2 ) = 1/ 1 + 4x 2 . Thus, partial derivatives exist and are continuous, hence g is a totally  differentiable function. Hence by Theorem 3.3.4, g(T ) = (−1 + 1 + 4m  )/2 = θˆ n is CAN for g(θ, θ2 + θ) = θ with approxin

2

mate variance  /n, where  = (0, 1/(1+2θ)) and   = 2θ2 /(1+2θ). 

4.2 Exponential Family

177

 Remark 4.2.1

In Example 4.2.2, we have seen that a normal N (θ, θ) distribution with θ > 0 forms a one-parameter exponential family. In the next example, we note that a normal N (θ, θ2 ) distribution θ > 0 does not belong to a one-parameter exponential family, but using the WLLN and the CLT along with the delta method, it can be shown that the maximum likelihood estimator is a CAN estimator of θ.  Example 4.2.3

Suppose X ∼ N (θ, θ2 ), where θ > 0. Then its probability density function is given by f (x, θ) = √

    1 1 x2 x 1 . exp − 2 (x − θ)2 = √ exp − 2 + − 2θ 2θ θ 2 2πθ2 2πθ2 1

2

x x In view of the two terms − 2θ 2 and θ in the exponent, both involving θ and x, f (x, θ) cannot be expressed as f (x, θ) = exp{U (θ)K (x) + V (θ) + W (x)} and hence the family of N (θ, θ2 ) distributions, where θ > 0 is not a one-parameter exponential family. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . To find the maximum likelihood estimator of θ, the log-likelihood of θ corresponding to the data X ≡ {X 1 , X 2 , . . . , X n } is given by

log L n (θ|X ) = c −

n n 1 (X i − θ)2 log θ2 − 2 2 2θ i=1

n n n 1 2 1 = c − − n log θ − 2 Xi + Xi , 2 2θ θ i=1

i=1

where c is a constant, free from θ. It is a differentiable function of θ. Hence the likelihood equation and its solution are given by n 

n − + θ

i=1

n 

X i2

θ3



Xi

i=1

θ2

θ2 + θm 1 − m 2 = 0   −m 1 ± m 2 1 + 4m 2 ⇒ θ= . 2 = 0⇔

Since θ > 0, we discard the negative root. The second order partial derivative of the log-likelihood and its value at the positive root of the likelihood equation are given by

178

4

∂2 ∂θ2

= = = =
0 family in Example 4.2.2. We now examine whether it is CAN. Consistency of θˆ n follows immediately from the consistency of raw moments for the corresponding population raw moments and the fact that convergence in probability  is closed under  all arithmetic operations. To examine whether θˆ n = (−m + m 2 + 4m  )/2 is 1

1

2

asymptotically normal, we use Theorem 3.3.2 and an appropriate transformation. From Theorem 3.3.2, we have T n = (m 1 , m 2 ) is CAN for φ = (μ1 , μ2 ) = (θ, 2θ2 ) with approximate dispersion matrix /n where  is given by  =

μ2 − (μ1 )2 μ3 − μ1 μ2 μ3 − μ1 μ2 μ4 − (μ2 )2



 =

 θ2 2θ3 , 2θ3 6θ4

with (μ1 , μ2 ) = (θ, 2θ2 ) . We have obtained the third and fourth raw moments for N (μ, σ 2 ) distribution in Example 3.3.2. From those expressions we have μ3 = μ3 + 3μσ 2 = θ3 + 3θ3 = 4θ3 and μ4 = 3σ 4 + μ4 + 6μ2 σ2 = 10θ4 . We

further define a transformation g : R2 → R as g(x1 , x2 )=(−x1 + x12 + 4x2 )/2. Then ⎞ ⎛ ∂ 1⎝ x1 ⎠ & ∂ g(x1 , x2 ) =  1 g(x1 , x2 ) = . −1 +  ∂x1 2 ∂x2 x 2 + 4x x 2 + 4x 2

1

1

2

These partial derivatives are continuous and hence g is a totally differentiable function. The gradient vector  evaluated at (θ, 2θ2 ) is given by by Theorem 3.3.4,  = [−1/3, 1/3θ] . Hence,     2  g(m , m ) = (−m + m + 4m  )/2 = θˆ n is CAN for g(θ, 2θ2 ) = θ with 1

2

1

1

2

4.2 Exponential Family

179

approximate variance  /n, where   = θ2 /3 > 0 ∀ θ > 0. It is to noted that     ∂2 3n 1 6θ2 2θ n I (θ) = E θ − 2 log L n (θ|X ) = n − 2 + 4 − 3 = 2 . ∂θ θ θ θ θ Thus, θˆ n is CAN for θ ∈ (0, ∞) with approximate variance θ2 /3n = 1/n I (θ).   Example 4.2.4

Suppose a random variable X has a power series distribution with probability mass function as p(x, θ) = Pθ [X = x] = ax θ x /A(θ), x = 0, 1, 2, . . ., where A(θ) = 0 is a norming constant and the parameter space  is such that A(θ) is a convergent series and p(x, θ) ≥ 0. Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . To find a sufficient statistic, the likelihood of θ corresponding to the given random sample is given by  L n (θ|X ) =

n

 aXi

n 

θi=1

Xi

/(A(θ))n .

i=1

n Hence by the Neyman-Fisher factorization theorem, i=1 X i is a sufficient statistic for the family of power series distributions. Thus, the moment estimator of θ based on na sufficient statistic is a solution of the equation X n = i=1 X i /n = E(X ). However, we do not have the explicit form of the probability mass function and hence E(X ) cannot be evaluated using the for∞ xi p(xi , θ). To find E(X ), we use the following theorem for mula E(X ) = i=1 the power series.  Theorem: If the power series expansion f (x) = n≥1 an x n is valid in an open interval (−r , r ) then for every x ∈ (−r , r ) the derivative f  (x) exists and is given  by the power series expansion f (x) = n≥1 nan x n−1 . (Apostol [3], p. 448). As a corollary to this theorem, f has derivatives of every order and these can be obtained by repeated differentiation, term by term of the power series. Thus, ∞

p(xi , θ) = 1

i=1





ax θ x = A(θ)

i=1

is a power series, hence is differentiable any number of times with respect to θ. As a consequence, ∞ i=1

∞ ∞ ∂ ∂ p(xi , θ) = 0 ⇒ p(xi , θ) = 0 ∂θ ∂θ i=1 i=1  ∞  ∂ ⇒ log p(xi , θ) p(xi , θ) = 0 ∂θ

p(xi , θ) = 1 ⇒

i=1

180

4

CAN Estimators in Exponential and Cramér Families

 ∂ log p(X , θ) = 0 ⇒E ∂θ   −A (θ) X ⇒E =0 + A(θ) θ θ A (θ) ⇒ E(X ) = = η(θ), say. A(θ) 

Thus, the moment equation is given by X n = η(θ). To investigate whether this equation has a solution, we proceed as follows. We have 

 ∂2 I (θ) = E − 2 log p(X , θ) ∂θ   A(θ)A (θ) − (A (θ))2 X + =E θ2 (A(θ))2 θ A (θ) A(θ)A (θ) − (A (θ))2 + 2 A(θ)θ (A(θ))2 A(θ)A (θ) + θ A(θ)A (θ) − θ(A (θ))2 = . θ(A(θ))2 =

Now observe that η(θ) = θ A (θ)/A(θ). Since A(θ) is differentiable any number of times with respect to θ, η(θ) is also differentiable any number of times. We have η  (θ) =

A(θ)A (θ) + θ A(θ)A (θ) − θ(A (θ))2 = θI (θ) . (A(θ))2

Further, I (θ) > 0 and we can assume θ = 0, as if θ = 0 then the distribution of X is degenerate at 0. Thus if θ > 0, then η  (θ) > 0 and if θ < 0, then η  (θ) < 0 which implies that η  (θ) = 0 ∀ θ ∈ . Hence, by the inverse function theorem, a unique η −1 exists. Since η(θ) is differentiable, η −1 is also differentiable and hence continuous. Thus, the moment equation X n = η(θ) has a unique solution and the moment estimator θ˜ n based on the sufficient statistic is given by θ˜ n = η −1 (X n ). that E(X ) = η(θ) < ∞ as A(θ) = 0 To examine whether θ˜ n is CAN, observe  n X i /n → E(X ) = η(θ) as n → and hence by Khintchine’s WLLN, X n = i=1 ∞, ∀ θ ∈ . Thus, X n is consistent for η(θ) and by the invariance of consistency under continuous transformation θ˜ n = η −1 (X n ) is consistent for θ. To find its asymptotic distribution with suitable normalization, we first find the variance of X using the following formula for I (θ). We have   2 X A (θ) 2 ∂ log p(X , θ) = E − I (θ) = E ∂θ θ A(θ) 2   1 θ A (θ) 1 1 = 2E X − = 2 E(X − E(X ))2 = 2 V ar (X ) . θ A(θ) θ θ 


Thus, Var(X) = θ² I(θ) and it is positive and finite. Hence, by the CLT,

(∑_{i=1}^n X_i − nη(θ)) / √(nθ²I(θ)) → Z ∼ N(0, 1) in law ⇔ √n(X̄_n − η(θ)) → Z₁ ∼ N(0, σ(θ)) in law,

where σ(θ) = θ² I(θ). Thus, X̄_n is CAN for η(θ) = φ, say, with approximate variance σ(θ)/n = σ₁(φ)/n, say. To find a CAN estimator for θ, we use the delta method. Suppose g(φ) = η⁻¹(φ); then g(φ) = η⁻¹(φ) = η⁻¹(η(θ)) = θ. Further, η(θ) is differentiable any number of times and by the inverse function theorem g(φ) = η⁻¹(φ) is a differentiable function. Now,

g′(φ) = (d/dφ) η⁻¹(φ) = (d/dη(θ)) η⁻¹(η(θ)) = dθ/dη(θ).

Now, as shown in Theorem 4.2.1, we have

dθ/dη(θ) = (η′(θ))⁻¹ = 1/(θI(θ)) ≠ 0, as θ ≠ 0 and I(θ) < ∞,

and hence g′(φ) = dθ/dη(θ) ≠ 0 ∀ θ ∈ Θ, and hence for all φ. Hence, by the delta method, g(X̄_n) = η⁻¹(X̄_n) = θ̃_n is CAN for g(φ) = θ and its approximate variance is given by

(1/n) σ₁(φ)(g′(φ))² = (1/n) θ²I(θ) (1/(θI(θ)))² = 1/(n I(θ)).

To find the maximum likelihood estimator, from the likelihood as specified above, we have

(∂/∂θ) log L_n(θ|X) = −nA′(θ)/A(θ) + ∑_{i=1}^n X_i/θ and
(∂²/∂θ²) log L_n(θ|X) = [−nA(θ)A″(θ) + n(A′(θ))²]/(A(θ))² − ∑_{i=1}^n X_i/θ².

Thus, the likelihood equation is given by X̄_n = η(θ), with its solution θ = η⁻¹(X̄_n). To claim it to be the maximum likelihood estimator, we examine whether the second derivative of the log-likelihood is negative at the solution of the likelihood equation. Observe that at X̄_n = η(θ), that is, at θ = η⁻¹(X̄_n),

(∂²/∂θ²) log L_n(θ|X) = [−nA(θ)A″(θ) + n(A′(θ))²]/(A(θ))² − nθA′(θ)/(θ²A(θ))
 = [−nA(θ)A′(θ) − nθA(θ)A″(θ) + nθ(A′(θ))²]/(θ(A(θ))²)
 = −n I(θ) = −n I(η⁻¹(X̄_n)) < 0 a.s.

Hence, θ̂_n = η⁻¹(X̄_n) is the maximum likelihood estimator of θ. It is the same as the moment estimator θ̃_n = η⁻¹(X̄_n) based on the sufficient statistic and hence is CAN for θ with approximate variance 1/n I(θ).

Thus, for the power series distribution with indexing parameter θ, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN for θ with approximate variance 1/n I(θ). This result is the same as that for a one-parameter exponential family. In fact, the family of power series distributions, which are discrete distributions, is also a one-parameter exponential family, as shown below.

Suppose the random variable X has a power series distribution. Then its probability mass function is given by p(x, θ) = P_θ[X = x] = a_x θ^x / A(θ), x = 0, 1, 2, …, where A(θ) ≠ 0 is a norming constant and Θ is such that A(θ) is a convergent series and p(x, θ) ≥ 0. Some of the a_x may be zero. The probability mass function p(x, θ) can be rewritten as follows:

log p(x, θ) = log a_x + x log θ − log A(θ) = U(θ)K(x) + V(θ) + W(x),

where U(θ) = log θ, K(x) = x, W(x) = log a_x and V(θ) = −log A(θ). Further, U is a differentiable function of θ, can be differentiated any number of times, and U′(θ) = 1/θ ≠ 0. Differentiability of V(θ) follows from the theorem for power series, which states that a power series has derivatives of every order and these can be obtained by repeated differentiation, term by term, of the series. To prove that 1 and K(x) = x are linearly independent, consider a + bx = 0; if x = 0 then a = 0, which implies bx = 0, and taking x = 1 gives b = 0. Thus, 1 and K(x) = x are linearly independent. The parameter space is an open set (−r, r), where r is the radius of convergence of the series A(θ). The support of X is {0, 1, 2, …}, which is free from θ. Thus, the family of power series distributions is a one-parameter exponential family.

Many standard discrete distributions, such as the Poisson, binomial, geometric, negative binomial and logarithmic series distributions and their truncated versions, belong to the class of power series distributions. For example, suppose X follows a Poisson Poi(θ) distribution truncated at 0, θ > 0. The probability mass function of the Poisson Poi(θ) distribution truncated at 0 is given by

f(x, θ) = P_θ[X = x] = a_x θ^x / A(θ) = e^{−θ} θ^x / ((1 − e^{−θ}) x!), x = 1, 2, …,

where a_x = 0 for x = 0 and a_x = 1/x! for x = 1, 2, …, and A(θ) = (1 − e^{−θ})/e^{−θ} = e^θ − 1. Thus, the Poisson Poi(θ) distribution truncated at 0 is a power series distribution. In Example 4.2.1, it is shown that the maximum likelihood estimator of θ is the same as the moment estimator of θ based on a sufficient statistic and is CAN for θ with approximate variance 1/n I(θ). A small numerical illustration in R is sketched below.
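For the zero-truncated Poisson family, η(θ) = θA′(θ)/A(θ) = θ/(1 − e^{−θ}), so the moment/likelihood equation X̄_n = η(θ) can be solved with a one-dimensional root finder. The following is a minimal sketch, assuming simulated data; the sample size, true value of θ and search interval are illustrative choices.

## Zero-truncated Poisson: solve mean(x) = eta(theta) = theta / (1 - exp(-theta))
set.seed(1)
n     <- 200
theta <- 2.5
x <- rpois(5 * n, theta)          # simulate Poi(theta) and
x <- x[x > 0][1:n]                # keep the first n non-zero values

eta <- function(t) t / (1 - exp(-t))
mle <- uniroot(function(t) eta(t) - mean(x), interval = c(1e-6, 50))$root
mle

# I(theta) = eta'(theta)/theta, so the approximate variance is 1/(n I(theta))
eta_dash <- function(t) (1 - exp(-t) - t * exp(-t)) / (1 - exp(-t))^2
1 / (n * eta_dash(mle) / mle)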


We now proceed to extend the results of a one-parameter exponential family to a multiparameter exponential family. We first define it and state some of its important properties. For the proof of these we refer to Lehmann and Romano [1].

Suppose X is a random variable or a random vector with probability law f(x, θ), which is either a probability density function or a probability mass function. It is indexed by a vector parameter θ ∈ Θ ⊂ R^k. The distribution of X is said to belong to a k-parameter exponential family if the following conditions are satisfied.
(i) The parameter space Θ contains an open rectangle of dimension k, which is satisfied if Θ is an open set.
(ii) The support S_f of f(x, θ) is free from θ.
(iii) The probability law f(x, θ) is expressible as

f(x, θ) = exp{∑_{i=1}^k U_i(θ)K_i(x) + V(θ) + W(x)},

where U_i and V are functions of θ only and K_i and W are functions of x only, i = 1, 2, …, k.
(iv) U_i, i = 1, 2, …, k have continuous partial derivatives with respect to θ_1, θ_2, …, θ_k and |J| = |dU_i/dθ_j| ≠ 0.
(v) The functions 1 and K_i(x), i = 1, 2, …, k are linearly independent, that is, l_0 + l_1K_1(x) + l_2K_2(x) + ⋯ + l_kK_k(x) = 0 ⇒ l_i = 0 ∀ i = 0, 1, 2, …, k. This condition implies that 1, K_1(x), K_2(x), …, K_k(x) are not functionally related to each other, and it further implies that V(θ), U_1(θ), …, U_k(θ) are also not functionally related to each other. This fifth condition is useful in the proof of Theorem 4.2.2.

The distributions belonging to a multiparameter exponential family satisfy the following properties. Suppose the dimension of the parameter space is k.
(i) If {X_1, X_2, …, X_n} is a random sample from a distribution belonging to a k-parameter exponential family, then {∑_{r=1}^n K_i(X_r), i = 1, 2, …, k} is a minimal sufficient statistic for θ.
(ii) U_i, i = 1, 2, …, k and V have partial derivatives up to second order with respect to the θ_i's.
(iii) The identity ∫_{S_f} f(x, θ) dx = 1 can be differentiated with respect to the θ_i's under the integral sign at least twice. As a consequence,




E_θ[(∂/∂θ_i) log f(X, θ)] = 0, i = 1, 2, …, k, and the information matrix I(θ) = [I_ij(θ)] is given by

I_ij(θ) = E_θ[((∂/∂θ_i) log f(X, θ)) ((∂/∂θ_j) log f(X, θ))] = E_θ[−(∂²/∂θ_i∂θ_j) log f(X, θ)], i, j = 1, 2, …, k.

Further, I(θ) is also a dispersion matrix, as

I_ij(θ) = Cov((∂/∂θ_i) log f(X, θ), (∂/∂θ_j) log f(X, θ))

and it is a positive definite matrix.

By the inverse function theorem, the condition that |J| = |dU_i/dθ_j| ≠ 0 implies that U_i(θ), i = 1, 2, …, k are one-to-one functions of {θ_1, θ_2, …, θ_k} and are invertible. Hence, if we relabel U_i(θ) = φ_i, i = 1, 2, …, k, then {θ_1, θ_2, …, θ_k} can be uniquely expressed in terms of φ = {φ_1, φ_2, …, φ_k}. With such relabeling, the probability law f(x, θ) = exp{∑_{i=1}^k U_i(θ)K_i(x) + V(θ) + W(x)} is expressible as

f(x, φ) = exp{∑_{i=1}^k φ_i K_i(x) + V_1(φ) + W(x)} = β(φ) exp{∑_{i=1}^k φ_i K_i(x)} g(x),   (4.2.5)

where β(φ) = exp(V_1(φ)) and g(x) = exp(W(x)). The representation of f(x, φ) as in (4.2.5) is known as a canonical representation of a k-parameter exponential family and {φ_1, φ_2, …, φ_k} are known as natural parameters. The condition that 1 and K_i(x), i = 1, 2, …, k are linearly independent implies that there is no functional relation among {V_1(φ), φ_1, φ_2, …, φ_k}; in particular, there is no functional relation between φ_i and φ_j. As a consequence,

∂φ_i/∂φ_j = 1 if i = j, and ∂φ_i/∂φ_j = 0 if i ≠ j.

We use this result in the proof of Theorem 4.2.2. There is an interesting relation between information matrices I (θ) and I (φ), when the probability law f (x, θ) is expressed in a general form and when the same is expressed in a canonical form. We derive it below. We first find it for k = 1. Suppose f (x, φ) = β(φ) exp{φK (x)}g(x) & f (x, θ) = β1 (θ) exp{U (θ)K (x)}W1 (x), where φ = U (θ) .


By the chain rule,

∂ log f(x, θ)/∂θ = (∂ log f(x, φ)/∂φ)(∂φ/∂θ)
⇒ E_θ[(∂/∂θ) log f(X, θ)]² = E_φ[(∂/∂φ) log f(X, φ)]² (∂φ/∂θ)²
⇒ I(θ) = (U′(θ))² I(φ).

The identity I(θ) = (U′(θ))² I(φ) has the following nice interpretation. The Cramér-Rao lower bound for the variance of an unbiased estimator of φ is 1/nI(φ), whereas the Cramér-Rao lower bound for the variance of an unbiased estimator of U(θ) is (U′(θ))²/nI(θ). Here φ = U(θ) and hence

1/nI(φ) = (U′(θ))²/nI(θ) ⇔ I(θ) = (U′(θ))² I(φ).
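As a quick numerical check of this identity, consider the Poisson family, for which the natural parameter is φ = U(θ) = log θ, I(θ) = 1/θ and I(φ) = e^φ = θ. The following short R sketch evaluates both sides at a few illustrative values of θ.

## Check I(theta) = (U'(theta))^2 * I(phi) for the Poisson family, phi = log(theta)
theta   <- c(0.5, 1, 2, 5)
phi     <- log(theta)
I_theta <- 1 / theta      # information in the mean parametrisation
I_phi   <- exp(phi)       # information in the natural parametrisation
U_dash  <- 1 / theta      # derivative of U(theta) = log(theta)
cbind(I_theta, check = U_dash^2 * I_phi)   # the two columns agree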

The identity I(θ) = (U′(θ))² I(φ) is extended to vector parameters θ and φ as follows. Suppose in a k-parameter exponential family the probability law is

f(x, θ) = exp{∑_{i=1}^k U_i(θ)K_i(x) + V(θ) + W(x)} and f(x, φ) = β(φ) exp{∑_{i=1}^k φ_i K_i(x)} g(x),

where φ = (U_1(θ), U_2(θ), …, U_k(θ))′. By the chain rule,

∂ log f(x, θ)/∂θ_i = ∑_{r=1}^k (∂ log f(x, φ)/∂φ_r)(∂φ_r/∂θ_i) and ∂ log f(x, θ)/∂θ_j = ∑_{s=1}^k (∂ log f(x, φ)/∂φ_s)(∂φ_s/∂θ_j).

Suppose a matrix J_{k×k} is defined as J = [∂U_i(θ)/∂θ_j] = [∂φ_i/∂θ_j]. From the conditions of a k-parameter exponential family we know that |J| ≠ 0. Thus,

I_ij(θ) = E_θ[(∂ log f(X, θ)/∂θ_i)(∂ log f(X, θ)/∂θ_j)]
        = ∑_{r=1}^k ∑_{s=1}^k (∂φ_r/∂θ_i)(∂φ_s/∂θ_j) E_φ[(∂ log f(X, φ)/∂φ_r)(∂ log f(X, φ)/∂φ_s)]
        = ∑_{r=1}^k ∑_{s=1}^k (∂φ_r/∂θ_i) I_rs(φ) (∂φ_s/∂θ_j)
        = ∑_{r=1}^k ∑_{s=1}^k [(i, r)-th element of J′] × [(r, s)-th element of I(φ)] × [(s, j)-th element of J]
        = (i, j)-th element of J′I(φ)J.

Thus, we have the identity I(θ) = J′I(φ)J, which is analogous to that for k = 1. In the following example, we illustrate how to express a two-parameter exponential family in a canonical form and also verify the relation derived above between the information matrices.

Example 4.2.5

Suppose (X, Y) has a bivariate normal distribution with zero mean vector and dispersion matrix Σ given by

Σ = σ² [[1, ρ], [ρ, 1]], σ² > 0, −1 < ρ < 1.

We examine whether the distribution belongs to a two-parameter exponential family and then express the probability law in a canonical form. The probability density function f(x, y, σ², ρ) of Z = (X, Y) is given by

f(x, y, σ², ρ) = (1/(2πσ²√(1 − ρ²))) exp{−(x² − 2ρxy + y²)/(2σ²(1 − ρ²))}, (x, y) ∈ R², σ² > 0, −1 < ρ < 1.

(i) It is to be noted that the parameter space Θ = {(σ², ρ) | σ² > 0, −1 < ρ < 1} is an open set. (ii) The support of Z = (X, Y) is R², which does not depend on the parameters. (iii) The probability density function f(x, y, σ², ρ) can be rewritten as follows:

log f(x, y, σ², ρ) = −log 2π − log σ² − (1/2) log(1 − ρ²) − [(x² + y²) − 2ρxy]/(2σ²(1 − ρ²))
                   = U₁(σ², ρ)K₁(x, y) + U₂(σ², ρ)K₂(x, y) + V(σ², ρ) + W(x, y),

where U₁(σ², ρ) = −1/(2σ²(1 − ρ²)), K₁(x, y) = x² + y², U₂(σ², ρ) = ρ/(σ²(1 − ρ²)), K₂(x, y) = xy, V(σ², ρ) = −log σ² − (1/2) log(1 − ρ²) and W(x, y) = −log 2π. (iv) The matrix J = [dU_i/dθ_j] of partial derivatives, where θ₁ = σ² and θ₂ = ρ, is given by

J = [[1/(2σ⁴(1 − ρ²)), −ρ/(σ²(1 − ρ²)²)], [−ρ/(σ⁴(1 − ρ²)), (1 + ρ²)/(σ²(1 − ρ²)²)]].

It is clear that U₁ and U₂ have continuous partial derivatives with respect to σ² and ρ, and |J| = 1/(2σ⁶(1 − ρ²)²) ≠ 0. (v) To examine whether the functions 1, x² + y² and xy are linearly independent, suppose g(x, y) = l₁(x² + y²) + l₂xy + l₃ = 0. Then

∂²g(x, y)/∂x² = 2l₁ = 0 ⇒ l₁ = 0, and then ∂g(x, y)/∂x = 2xl₁ + l₂y = l₂y = 0 for all y ⇒ l₂ = 0.

Now g(x, y) = l₁(x² + y²) + l₂xy + l₃ = 0 then implies l₃ = 0. Thus, a bivariate normal distribution with zero mean vector and dispersion matrix Σ satisfies all the requirements of a two-parameter exponential family and hence belongs to a two-parameter exponential family. The condition that |J| ≠ 0 implies that U_i(σ², ρ), i = 1, 2 are one-to-one functions of (σ², ρ) and are invertible. Hence, we define

U₁(σ², ρ) = φ₁ = −1/(2σ²(1 − ρ²)) and U₂(σ², ρ) = φ₂ = ρ/(σ²(1 − ρ²))
⇔ σ² = −2φ₁/(4φ₁² − φ₂²) and ρ = −φ₂/(2φ₁).

With such a relabeling, the probability law f(x, y, σ², ρ) is expressible as

f(x, y, φ₁, φ₂) = β(φ₁, φ₂) exp{φ₁K₁(x, y) + φ₂K₂(x, y)} g(x, y),   (4.2.6)

where β(φ) = 1/(σ²√(1 − ρ²)) and g(x, y) = 1/2π. Thus, the probability law of a bivariate normal distribution, as expressed in Eq. (4.2.6), is a canonical representation of a two-parameter exponential family and {φ₁, φ₂} are natural parameters.

We now find the information matrices I(σ², ρ) and I(φ₁, φ₂) and verify the relation I(σ², ρ) = J′I(φ₁, φ₂)J, where J is as derived above. To find the information matrix I(σ², ρ), we note that E(K₁(X, Y)) = E(X² + Y²) = 2σ² and E(K₂(X, Y)) = E(XY) = σ²ρ. The derivatives of log f(x, y, σ², ρ) are given below.

∂ log f/∂σ² = −1/σ² + K₁(X, Y)/(2(1 − ρ²)σ⁴) − ρK₂(X, Y)/((1 − ρ²)σ⁴)
∂² log f/∂(σ²)² = 1/σ⁴ − K₁(X, Y)/((1 − ρ²)σ⁶) + 2ρK₂(X, Y)/((1 − ρ²)σ⁶)
∂² log f/∂ρ∂σ² = ρK₁(X, Y)/((1 − ρ²)²σ⁴) − (1 + ρ²)K₂(X, Y)/((1 − ρ²)²σ⁴)
∂ log f/∂ρ = ρ/(1 − ρ²) − ρK₁(X, Y)/((1 − ρ²)²σ²) + (1 + ρ²)K₂(X, Y)/((1 − ρ²)²σ²)
∂² log f/∂ρ² = (1 + ρ²)/(1 − ρ²)² − (1 + 3ρ²)K₁(X, Y)/((1 − ρ²)³σ²) + 2ρ(3 + ρ²)K₂(X, Y)/((1 − ρ²)³σ²).

Hence, by the definition, the information matrix I(σ², ρ) is given by

I(σ², ρ) = [[1/σ⁴, −ρ/(σ²(1 − ρ²))], [−ρ/(σ²(1 − ρ²)), (1 + ρ²)/(1 − ρ²)²]].

To find I(φ₁, φ₂) we have

log f(x, y, φ₁, φ₂) = log β(φ₁, φ₂) + φ₁K₁(x, y) + φ₂K₂(x, y) + log g(x, y),

where log β(φ₁, φ₂) = log(1/(σ²√(1 − ρ²))) = (1/2) log(4φ₁² − φ₂²). Hence,

∂ log f/∂φ₁ = ∂ log β/∂φ₁ + K₁(x, y) = 4φ₁/(4φ₁² − φ₂²) + K₁(x, y)
∂² log f/∂φ₁² = ∂² log β/∂φ₁² = −4(4φ₁² + φ₂²)/(4φ₁² − φ₂²)²
∂² log f/∂φ₂∂φ₁ = ∂² log β/∂φ₂∂φ₁ = 8φ₁φ₂/(4φ₁² − φ₂²)²
∂ log f/∂φ₂ = ∂ log β/∂φ₂ + K₂(x, y) = −φ₂/(4φ₁² − φ₂²) + K₂(x, y)
∂² log f/∂φ₂² = ∂² log β/∂φ₂² = −(4φ₁² + φ₂²)/(4φ₁² − φ₂²)².

Hence, the information matrix I(φ₁, φ₂) is given by

I(φ₁, φ₂) = (1/(4φ₁² − φ₂²)²) [[4(4φ₁² + φ₂²), −8φ₁φ₂], [−8φ₁φ₂, 4φ₁² + φ₂²]] = σ⁴ [[4(1 + ρ²), 4ρ], [4ρ, 1 + ρ²]].

Now the matrix J is given by

J = (1/(σ²(1 − ρ²))) [[1/(2σ²), −ρ/(1 − ρ²)], [−ρ/σ², (1 + ρ²)/(1 − ρ²)]].

It then follows that I(σ², ρ) = J′I(φ₁, φ₂)J.
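The identity can also be checked numerically. The following R sketch evaluates I(σ², ρ), I(φ₁, φ₂) and J at the illustrative values σ² = 2 and ρ = 0.5 and verifies that J′I(φ₁, φ₂)J reproduces I(σ², ρ).

## Numerical check of I(sigma^2, rho) = t(J) %*% I(phi1, phi2) %*% J
s2 <- 2; rho <- 0.5                      # illustrative parameter values
I_theta <- matrix(c(1 / s2^2,                 -rho / (s2 * (1 - rho^2)),
                    -rho / (s2 * (1 - rho^2)), (1 + rho^2) / (1 - rho^2)^2),
                  2, 2)
I_phi <- s2^2 * matrix(c(4 * (1 + rho^2), 4 * rho,
                         4 * rho,         1 + rho^2), 2, 2)
J <- matrix(c(1 / (2 * s2^2 * (1 - rho^2)), -rho / (s2 * (1 - rho^2)^2),
              -rho / (s2^2 * (1 - rho^2)),  (1 + rho^2) / (s2 * (1 - rho^2)^2)),
            2, 2, byrow = TRUE)
all.equal(I_theta, t(J) %*% I_phi %*% J)   # TRUE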



Theorem 4.2.1 is extended to a multiparameter exponential family in two steps. First, we prove it for a multiparameter exponential family with a canonical representation of its probability law and later for a general form. Following theorem is an extension of Theorem 4.2.1 to a k-parameter exponential family expressed in a canonical form. Theorem 4.2.2 Suppose the distribution of a random variable or a random vector X belongs to a k-parameter exponential ( family with ) probability law k f (x, φ) = β(φ) exp i=1 φi K i (x) g(x) with indexing parameter φ. Suppose X ={X 1 , X 2 , . . . , X n} is a random sample from the distribution of X . Then the moment estimator of φ based on a sufficient statistic is the same as the maximum likelihood estimator of φ and it is CAN for φ with approximate dispersion matrix I −1 (φ)/n.

Proof The distribution of X belongs to a k-parameter exponential family with indexing parameter φ = (φ1 , φ2 , . . . , φk ) , where {φ1 , φ2 , . . . , φk } are natural parameters. ( ) k Hence its probability law is given by f (x, φ) = β(φ) exp i=1 φi K i (x) g(x). Corresponding to a random sample X from the distribution of X , the likelihood of φ is given by k

n

L n (φ|X ) = β(φ) exp φi K i (xr ) g(xr ) r =1



= (β(φ)) exp n

i=1 n k r =1 i=1

φi K i (xr )

n

g(xr ) .

(4.2.7)

r =1

"n From the Neyman-Fisher factorization theorem, it follows that r =1 K i (X r ), i = 1, 2, . . . , k} is jointly sufficient for the family. Thus, the system of moment equations to find moment estimator of φ based on a sufficient statistic is given by  Tin = rn=1 K i (X r )/n = E φ (K i (X )), i = 1, 2, . . . , k. To find E φ (K i (X )) we use

∂ the result that E φ ( ∂φ log f (X , φ)) = 0, i = 1, 2, . . . , k. Observe that i

log f (x, φ) = log β(φ) +

k

φi K i (x) + log g(x)

i=1



∂ ∂ log f (X , φ) = log β(φ) + K i (X ) . ∂φi ∂φi


Hence,  Eφ

 ∂ log f (X , φ) = 0 ∂φi ⇒ E φ (K i (X )) = −

∂ log β(φ) = h i (φ), say, i = 1, 2, . . . , k . ∂φi

Thus, the system of moment equations is given by Tin = h i (φ), i = 1, 2, . . . , k. To ensure this system of equations has a unique solution, we verify whether & that '  ∂h i (φ)  |H | =  ∂φ j  = 0. We have ∂ ∂ log f (X , φ) = log β(φ) + K i (X ) ∂φi ∂φi ∂2 ∂2 log f (X , φ) = log β(φ) . ⇒ ∂φ j ∂φi ∂φ j ∂φi Now,  Ii j (φ) = E φ −

 ∂2 ∂2 ∂ log f (X , φ) = − log β(φ) = h i (φ) . ∂φ j ∂φi ∂φ j ∂φi ∂φ j

Thus, H = I (φ), which is positive definite and hence |H | = 0. Thus, by the inverse function theorem the system of moment equations given by Tin = h i (φ), i = 1, 2, . . . , k has a unique solution given by φi = qi (T1n , T2n , . . . , Tkn ), say, for i = 1, 2, . . . , k and hence the moment estimator φ˜n of φ based on the sufficient statistic is given by φ˜n = q(T n ),

where

T n = (T1n , T2n , . . . , Tkn ) & q = (q1 , q2 , . . . , qk ) .

To find the maximum likelihood estimator of φ, from the likelihood as given in Eq. (4.2.7), we have the system of likelihood equations as ∂ ∂ log L n (φ|X ) = n log β(φ) + K i (X r ) = 0 ∂φi ∂φi n



r =1

Tin = h i (φ), i = 1, 2, . . . , k .

Thus, the system of likelihood equations is the same as the system of moment equations and hence it has a unique solution given by φi = qi (T1n , T2n , . . . , Tkn ) i = 1, 2, . . . , k. To examine whether this solution gives the maximum likelihood estimator, we verify whether the matrix of second order partial derivatives of the log-likelihood is negative definite almost surely at the solution. Now, ∂2 ∂ ∂2 log L n (φ|X ) = n log β(φ) = −n ∂φ j ∂φi ∂φ j ∂φi ∂φ j

 −

 ∂ log β(φ) ∂φi


= −n

∂ h i (φ) = −n Ii j (φ) . ∂φ j

Hence, the matrix of second order partial derivatives of the log-likelihood is −n I (φ), which is a negative definite matrix for any φ and hence at the solution of the system of likelihood equations. Thus, the maximum likelihood estimator φˆ of φ is given n

by φˆ in = qi (T1n , T2n , . . . , Tkn ), i = 1, 2, . . . , k. Thus, the maximum likelihood estimator of φ is the same as the moment estimator based on a sufficient statistic and is given by φ˜n = φˆn = q(T n ), where T n = (T1n , T2n , . . . , Tkn ) &

q(T n ) = (q1 (T n ), q2 (T n ), . . . , qk (T n )) .

To establish that φ˜n = φˆn is CAN, observe that {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables implies that {K i (X 1 ), K i (X 2 ), . . . , K i (X n )} are also independent and identically distributed random variables, being Borel functions, for all i = 1, 2, . . . , k. Hence by Khintchine’s WLLN, Tin =

n 1 P K i (X r ) → E φ (K i (X )) = h i (φ), as n → ∞, i = 1, 2, . . . , k, ∀ φ . n r =1

Thus, T n is consistent for h ≡ h(φ) = (h 1 (φ), h 2 (φ), . . . , h k (φ)) . It is known that β(φ) has partial derivatives up to order 2, hence ∂φ j∂∂φi log β(φ) = ∂φ∂ j h i (φ) exists and is continuous. Thus, {h 1 (φ), h 2 (φ), . . . , h k (φ)} are totally differentiable functions. Hence, by the inverse function theorem, {q1 , q2 , . . . , qk } are also totally difP

ferentiable functions and hence are continuous. Now we have proved that T n → h, hence by the invariance property of consistency under continuous transformation, we P

get qi (T n ) → qi (h) = φi , i = 1, 2, . . . , k. Since marginal consistency and joint consistency are equivalent, P φ˜n = φˆn = q(T n ) → q(h) = φ ∀ φ .

Thus, φ˜n = φˆn is a consistent estimator of φ. To find its asymptotic distribution with suitable normalization, we first find the dispersion matrix D of U = (K 1 (X ), K 2 (X ), . . . , K k (X )) using the following formula for I (φ). We have 

 ∂ ∂ Ii j (φ) = E φ log f (X , φ) log f (X , φ) ∂φi ∂φ j    ∂ ∂ log β(φ) + K i (X ) log β(φ) + K j (X ) = Eφ ∂φi ∂φ j " # = E φ (−h i + K i (X ))(−h j + K j (X ))


# " = E φ (K i (X ) − E(K i (X )))(K j (X ) − E(K j (X ))) = Cov(K i (X ), K j (X )) i, j = 1, 2, . . . , k . Hence, D = I (φ), which is a positive definite matrix. Thus, U r = (K 1 (X r ), K 2 (X r ), . . . , K k (X r )) , r = 1, 2, . . . , n are independent and identically distributed random vectors with E φ (U ) = h and dispersion matrix I (φ), which is positive definite. Hence, by the multivariate CLT applied to {U 1 , U 2 , . . . , U n } we have √ √ L n(U n − h) = n(T n − h) → Z 1 ∼ Nk (0, I (φ)) . Thus, T n is CAN for h with approximate dispersion matrix I (φ)/n. To find the CAN estimator for φ we use the delta method. It is known that {q1 , q2 , . . . , qk } are totally differentiable functions, hence φ˜n = φˆn = q(T n ) is CAN for q(h) = φ with & ' ∂qi . Now to find the approximate dispersion matrix M I (φ)M  /n, where M = ∂h j matrix M I (φ)M  , note that φi = qi (h 1 , h 2 , . . . , h k ), hence ∂φi ∂qi (h 1 , h 2 , . . . , h k ) = ∂φ j ∂φ j k ∂qi (h 1 , h 2 , . . . , h k ) ∂h m = by chain rule ∂h m ∂φ j m=1

=

k

(i, m)-th element of M × (m, j)-th element of I (φ)

m=1

= (i, j)-th element of M I (φ) . ∂φi To find the value of ∂φ , we use the property that K 1 (X ), K 2 (X ), . . . , K k (X ) and 1 j are linearly independent which implies that {φ1 , φ2 , . . . , φk , 1} are not functionally related to each other. Hence,  ∂φi 1, if i = j = 0, if i = j ∂φ j

Thus, M I (φ) is an identity matrix which implies that M = I −1 (φ) and hence M I (φ)M  = I −1 (φ), as I (φ) is a symmetric matrix. Thus, φ˜ = φˆ = q(T ) is n

n

n

CAN for q(h) = φ with approximate dispersion matrix M I (φ)M  /n = I −1 (φ)/n.  Using the relation I (θ) = J  I (φ)J between information matrices, we extend Theorem 4.2.2 to a multiparameter exponential family with the probability law f (x, θ) in a general form.
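Before extending the result, a small simulation sketch in R illustrates Theorem 4.2.2 for the N(μ, σ²) family: the estimator based on the sufficient statistics (∑X_r, ∑X_r²) coincides with the maximum likelihood estimator, and its approximate dispersion matrix is I⁻¹(θ)/n. The true parameter values and the sample size below are illustrative choices.

## Theorem 4.2.2 for N(mu, sigma^2): MLE from the sufficient statistics
set.seed(11)
n <- 500; mu <- 2; sig2 <- 4
x <- rnorm(n, mu, sqrt(sig2))

t1 <- sum(x); t2 <- sum(x^2)          # jointly sufficient statistics
mu_hat   <- t1 / n
sig2_hat <- t2 / n - mu_hat^2         # MLE = moment estimator based on (t1, t2)

# inverse information matrix (per observation) for (mu, sigma^2): diag(sigma^2, 2 sigma^4)
I_inv <- diag(c(sig2_hat, 2 * sig2_hat^2))
list(estimate = c(mu_hat, sig2_hat), approx_dispersion = I_inv / n)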


Theorem 4.2.3 Suppose the distribution of a random variable or a random vector X belongs to a k-parameter (exponential family with the probability ) law k f (x, θ) = exp i=1 Ui (θ)K i (x) + V (θ) + W (x) , where θ is an indexing parameter. Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . Then the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN for θ with approximate dispersion matrix I −1 (θ)/n.

Proof Since the distribution of X belongs with ( to a k-parameter exponential family ) k the probability law f (x, θ) = exp i=1 Ui (θ)K i (x) + V (θ) + W (x) , we have & '  dUi   dθ j  = 0. Hence by the inverse function theorem, Ui (θ) can be uniquely relabeled as Ui (θ) = φi , i = 1, 2, . . . , k and the inverse exists which is continuous and totally differentiable. Suppose ⇔

θi = pi (φ1 , φ2 , . . . , φk ), i = 1, 2, . . . , k θ = p(φ) = ( p1 (φ), p2 (φ), . . . , pk (φ)) .

Then each of { p1 , p2 , . . . , pk } are continuous and totally differentiable functions from Rk to R. With such a relabeling of the kparameters, the probability law f (x, θ) φi K i (x)}g(x), which is a canonical can be expressed as f (x, φ) = β(φ) exp{ i=1 representation of the exponential family and {φ1 , φ2 , . . . , φk } are natural parameters. Hence by Theorem 4.2.2, the moment estimator φ˜n of φ based on a sufficient statistic is the same as the maximum likelihood estimator φˆ of φ and it is CAN for φ with approximate dispersion matrix I −1 (φ)/n. Now,

n

φ˜ in = φˆ in ∀ i = 1, 2, . . . , k ⇒ pi (φ˜ 1n , φ˜ 2n , . . . , φ˜ kn ) = pi (φˆ 1n , φˆ 2n , . . . , φˆ kn ) ∀ i = 1, 2, . . . , k ⇒ θ˜ in = θˆ in ∀ i = 1, 2, . . . , k ⇒ θ˜n = θˆn . θˆn is a maximum likelihood estimator of θ, being a function of maximum likelihood estimators φˆ in i = 1, 2, . . . , k. Further, the system of moment equations based on a sufficient statistic is given by Ui (θ) = φi = qi (T n ), i = 1, 2, . . . , k. Thus, the moment estimator based on a sufficient statistic is given by Ui (θ˜n ) = φ˜ in = qi (T n ), i = 1, 2, . . . , k ⇒

U (θ˜n ) = φ˜n

⇔ θ˜n = p(φ˜n )

is a moment estimator of θ based on a sufficient statistic. On the other hand, we can obtain a system of moment equations and a system of likelihood equations in terms of θ and show that the two are the same. Now φ˜n = φˆn is consistent for φ. Further, p = ( p1 , p2 , . . . , pk ) is a totally differentiable and hence continuous function from


Rk to Rk and hence by the invariance property of consistency under continuous transformation we have p(φˆn ) = ( p1 (φˆn ), p2 (φˆn ), . . . , pk (φˆn )) = (θˆ 1n , θˆ 2n , . . . , θˆ kn ) P

θ = θˆ n = θ˜ n → θ ∀ θ ∈ .

From Theorem 4.2.2, we know that φˆn is CAN for φ with approximate dispersion matrix I −1 (φ)/n. Further p = ( p1 , p2 , . . . , pk ) is a totally differentiable function. Hence by the delta method, p(φˆn ) is CAN for p(φ) = θ with approx' & ∂ pi . To find the matrix imate dispersion matrix M I −1 (φ)M  /n where M = ∂φ j & ' ∂φi M I −1 (φ)M  , we use the identity I (θ) = J  I (φ)J where J = ∂θ . Observe that j φi = Ui (θ) i = 1, 2, . . . , k. Hence, k k ∂φi ∂Ui (θ) ∂θr ∂φi ∂ pr (φ) = = ∂φ j ∂θr ∂φ j ∂θr ∂φ j r =1

=

k k

r =1

(i, r )-th element of J × (r , j)-th element of M

r =1 s=1

= (i, j)-th element of J M . It is known that {φ1 , φ2 , . . . , φk } are not functionally related to each other. Hence, ∂φi ∂φ j

 =

1, 0,

if if

i= j i = j

Hence, J M is an identity matrix which implies that M = J −1 . Thus, M I −1 (φ)M  = J −1 I −1 (φ)(J −1 ) = (J  I (φ)J )−1 = I −1 (θ) . Hence, we have proved that if the distribution of X belongs to a multiparameter exponential family, then the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN for θ with approximate dispersion matrix I −1 (θ)/n. In the following example, we verify that normal N (μ, σ 2 ) distribution with μ ∈ R and σ 2 > 0, which belongs to a two-parameter exponential family.


 Example 4.2.6

Suppose X ∼ N(μ, σ²) with μ ∈ R and σ² > 0. Then, (i) the parameter space Θ = {(μ, σ²) | μ ∈ R, σ² ∈ R⁺} is an open set. (ii) The support of X is the real line, which does not depend on the parameters. (iii) The probability density function f(x, μ, σ²) is given by

f(x, μ, σ²) = (1/√(2πσ²)) exp{−(x − μ)²/(2σ²)} = (1/√(2πσ²)) exp{−x²/(2σ²) + μx/σ² − μ²/(2σ²)}.

Hence,

log f(x, μ, σ²) = −x²/(2σ²) + μx/σ² − (1/2) log 2πσ² − μ²/(2σ²) = U₁(θ)K₁(x) + U₂(θ)K₂(x) + V(θ) + W(x),

where U₁(θ) = −1/(2σ²), K₁(x) = x², U₂(θ) = μ/σ², K₂(x) = x, V(θ) = −(1/2) log 2πσ² − μ²/(2σ²) and W(x) = 0. (iv) The matrix J = [dU_i/dθ_j] of partial derivatives is given by

J = [[0, 1/(2σ⁴)], [1/σ², −μ/σ⁴]].

It is clear that U₁ and U₂ have continuous partial derivatives with respect to μ and σ², and |J| = −1/(2σ⁶) ≠ 0. (v) Using a routine procedure, the functions 1, x and x² can be shown to be linearly independent. Thus, a normal N(μ, σ²) distribution with μ ∈ R and σ² > 0 satisfies all the requirements of a two-parameter exponential family. In Example 3.3.2, we have noted that the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN for θ with approximate dispersion matrix I⁻¹(θ)/n.

Example 4.2.7

Suppose (Y1 , Y2 ) has a multinomial distribution in three cells with cell probabilities (θ + φ)/2, (1 − θ)/2 and (1 − φ)/2, 0 < θ, φ < 1. It is to be noted that (i) the parameter space is  = {(θ, φ) |0 < θ, φ < 1} and it is an open set. (ii)The support of (Y1 , Y2 ) is {(0, 0), (0, 1), (1, 0)}, which does not depend on the parameters. (iii) The joint probability mass function of (Y1 , Y2 ) is given by p y1 y2 = P[Y1 = y1 , Y2 = y2 ] =



θ+φ 2

 y1 

1−θ 2

 y2 

1−φ 2

1−y1 −y2

y1 , y2 = 0, 1 & y1 + y2 ≤ 1.

,


Hence,      θ+φ 1−θ 1−φ + y2 log + (1 − y1 − y2 ) log y1 log 2 2 2 y1 [log(θ + φ) − log(1 − φ)] y2 [log(1 − θ) − log(1 − φ)] − log 2 + log(1 − φ) K 1 (y1 , y2 )U1 (θ, φ) + K 2 (y1 , y2 )U2 (θ, φ) + W (y1 , y2 ) + V (θ, φ), 

log p y1 y2 = = + =

where K 1 (y1 , y2 ) = y1 , K 2 (y1 , y2 ) = y2 , W (y1 , y2 ) = − log 2, U1 (θ, φ) = log(θ + φ) − log(1 − φ), U2 (θ, φ) =&log(1 ' − θ) − log(1 − φ) and dUi V (θ, φ) = log(1 − φ). (iv) Now the matrix J = dθ j of partial derivatives is given by  1  1 1 θ+φ θ+φ + 1−φ J= . 1 −1 1−θ

1−φ

It is clear that U1&and U '2 have continuous partial derivatives with respect to θ  dUi  −2θ and φ and |J | =  dθ j  = − (θ+φ)(1−θ)(1−φ) = 0. (v) To examine whether the functions 1, K 1 (y1 , y2 ) = y1 and K 2 (y1 , y2 ) = y2 are linearly independent, suppose l1 y1 + l2 y2 + l3 = 0. Then y1 = y2 = 0 implies l3 = 0. In l1 y1 + l2 y2 = 0, y1 = 1, y2 = 0 implies l1 = 0 and in l2 y2 = 0 if y2 = 1, then l2 = 0. Thus, multinomial distribution when the cell probabilities are (θ + φ)/2, (1 − θ)/2 and (1 − φ)/2 satisfies all the requirements of a two-parameter exponential family and hence belongs to a two-parameter exponential family. Thus, by Theorem 4.2.3, based on a random sample of size n, the moment estimator of (θ, φ) based on a sufficient statistic is the same as the maximum likelihood estimator of (θ, φ) and it is CAN for (θ, φ) with approximate dispersion matrix I −1 (θ, φ)/n. Suppose (X 1 , X 2 , X 3 ) denote the cell frequencies corresponding to the random sample of size n from the given trinomial distribution, X 1 + X 2 + X 3 = n. Then the likelihood of (θ, φ) corresponding to the observed data (X 1 , X 2 , X 3 ) is given by log L n (θ, φ|X 1 , X 2 , X 3 ) = X 1 [log(θ + φ) − log(1 − φ)] + X 2 [log(1 − θ) − log(1 − φ)] − n log 2 + n log(1 − φ) . Hence, ∂ X1 X2 log L n (θ, φ|X 1 , X 2 , X 3 ) = − ∂θ θ+φ 1−θ   1 X2 − n ∂ 1 + log L n (θ, φ|X 1 , X 2 , X 3 ) = X 1 + ∂φ θ+φ 1−φ 1−φ X1 1 = + (X 1 + X 2 − n) θ+φ 1−φ X1 X3 = − . θ+φ 1−φ


Thus, the system of likelihood equations is given by ∂ X1 X2 log L n (θ, φ|X 1 , X 2 , X 3 ) = − =0 ∂θ θ+φ 1−θ X1 ∂ X3 log L n (θ, φ|X 1 , X 2 , X 3 ) = − =0. ∂φ θ+φ 1−φ

(4.2.8) (4.2.9)

From the Eqs. (4.2.8) and (4.2.9) we get X2 X3 = 1−θ 1−φ

X 3θ − X 2φ = X 3 − X 2 .



(4.2.10)

Further from Eq. (4.2.8) we have (1 − θ)X 1 = (θ + φ)X 2



(X 1 + X 2 )θ + X 2 φ = X 1 .

(4.2.11)

From Eqs. (4.2.10) and (4.2.11) we get θ=

X1 + X3 − X2 X 1 + X 2 + X 3 − 2X 2 2X 2 = =1− . n n n

From Eq. (4.2.10) we have  2X 2 − X3 + X2 X 2φ = X 3θ − X 3 + X 2 = X 3 1 − n 2X 2 X 3 2X 3 = X2 − ⇒ φ=1− . n n 

To verify that the solution of the system of likelihood equations leads to a maximum likelihood estimator, we examine whether the matrix of second order partial derivatives of the log-likelihood is negative definite almost surely, at the solution. The second order partial derivatives of the log-likelihood are given by −X 1 X2 ∂2 log L n (θ, φ|X 1 , X 2 , X 3 ) = − 2 2 ∂θ (θ + φ) (1 − θ)2 2 ∂ −X 1 X3 log L n (θ, φ|X 1 , X 2 , X 3 ) = − 2 2 ∂φ (θ + φ) (1 − φ)2 ∂2 −X 1 . log L n (θ, φ|X 1 , X 2 , X 3 ) = ∂θ∂φ (θ + φ)2 If M denotes the matrix of second order partial derivatives of the log-likelihood, then it is clear that the first principal minor is negative and the second is positive for all θ and φ. Hence, M is negative definite almost surely, at the solution. Thus, the maximum likelihood estimators θˆ n and φˆ n of θ and φ are given by 2X 2 θˆ n = 1 − n

&

2X 3 φˆ n = 1 − . n


Now we find the moment estimators based on the sufficient statistic. From the likelihood equation, it is clear that (X 1 , X 2 ) is jointly sufficient for the family. Since X 1 + X 2 + X 3 = n, (X 2 , X 3 ) is also jointly sufficient for the family. Hence, to find the moment estimators based on the sufficient statistic, we have following two equations. n X2 1−θ 1 Y2r = E(Y2 ) = = ⇒ n n 2

θ =1−

r =1

&

n X3 1−φ 1 Y3r = E(Y3 ) = = ⇒ n n 2

2X 2 n

φ=1−

r =1

2X 3 . n

Thus, the moment estimators based on the sufficient statistic are given by 2X 2 θ˜ n = 1 − n

&

2X 3 φ˜ n = 1 − , n

which are the same as the maximum likelihood estimators. From Theorem 4.2.3, (θˆ n , φˆ n ) is CAN for (θ, φ) with approximate dispersion matrix I −1 (θ, φ)/n. Now, from the second order partial derivatives of the log-likelihood as given above we find the information matrix I (θ, φ). We have E(X 1 ) = n(θ + φ)/2, E(X 2 ) = n(1 − θ)/2 and E(X 3 ) = n(1 − φ)/2. Hence, the information matrix I (θ, φ) is given by   1+φ 1 I (θ, φ) =

2(θ+φ)(1−θ) 1 2(θ+φ)

which is a positive definite matrix.

2(θ+φ) 1+θ 2(θ+φ)(1−φ)

, 

In the following section, we discuss CAN estimators for the parameters of a distribution belonging to a Cramér family which includes an exponential family.

4.3 Cramér Family

Suppose X is a random variable or a random vector with a probability law f(x, θ), indexed by a real parameter θ ∈ Θ ⊂ R. Suppose the probability law f(x, θ) satisfies the following conditions in an open interval N_ρ(θ₀) = (θ₀ − ρ, θ₀ + ρ) ⊂ Θ, where ρ > 0 and θ₀ is a true parameter value, that is, θ₀ is the value of the parameter which generated a random sample {X₁, X₂, …, X_n} from the distribution of X.

C-1 The support S_f is free from the parameter θ.
C-2 The parameter space is an open set.
C-3 The partial derivatives (∂/∂θ) log f(x, θ), (∂²/∂θ²) log f(x, θ) and (∂³/∂θ³) log f(x, θ) exist for almost all values of x ∈ S_f.
C-4 The identity ∫_{S_f} f(x, θ) dx = 1 can be differentiated with respect to θ under the integral sign at least twice. Thus, the information function I(θ) is given by I(θ) = E_θ[(∂/∂θ) log f(X, θ)]² = E_θ[−(∂²/∂θ²) log f(X, θ)] and 0 < I(θ) < ∞.
C-5 |(∂³/∂θ³) log f(x, θ)| < M(x), where M(x) may depend on θ₀ and ρ and E(M(X)) < ∞.

A family of distributions satisfying conditions C-1 to C-5 is known as a Cramér family. For a Laplace distribution with location parameter θ, the derivative (∂/∂θ) log f(x, θ) does not exist at the point x = θ and hence the third condition gets violated. However, a Laplace distribution with probability density function given by f(x, θ) = (2θ)⁻¹ exp{−|x|/θ}, x ∈ R, θ > 0 belongs to a Cramér family. A gamma distribution with a known scale parameter and shape parameter λ ∈ I⁺ does not belong to a Cramér family as the parameter space is not an open set. Suppose {X₁, X₂, …, X_n} is a random sample from the distribution of X with the probability law f(x, θ) and θ₀ is a true parameter. For the distributions belonging to a Cramér family, the following four results are true.

 Result 4.3.1

With probability approaching 1 as n → ∞, the likelihood equation (∂/∂θ) log L_n(θ|X) = 0 admits a solution θ̂_n(X) and it is consistent for θ₀.


 Result 4.3.2

For large n, the distribution of θ̂_n(X) can be approximated by the normal distribution N(θ₀, 1/n I(θ₀)), that is, √n(θ̂_n(X) − θ₀) → Z₁ ∼ N(0, 1/I(θ₀)) in law.

 Result 4.3.3

With probability approaching 1 as n → ∞, there is a relative maximum of the likelihood function at θ̂_n(X), that is,

P[(∂²/∂θ²) log L_n(θ|X)|_{θ̂_n(X)} < 0] → 1, as n → ∞.

 Result 4.3.4

With probability approaching 1 as n → ∞, the consistent solution of the likelihood equation is unique.

The first two results were established by Cramér [4]. However, issues such as whether there is a relative maximum at the solution of the likelihood equation and whether the consistent solution is unique were not addressed by Cramér [4]. Huzurbazar [5] established these results, which are the last two results listed above. The four results collectively are known as the Cramér-Huzurbazar theorem (Kale and Muralidharan [6]). These are usually referred to as the standard theory of maximum likelihood estimation when the estimation is based on a random sample {X₁, X₂, …, X_n}. Now we prove these results. We first prove a lemma, which is heavily used to prove the Cramér-Huzurbazar theorem. Suppose f and g are two probability density functions; then the Kullback-Leibler distance between f and g is defined as D(f, g) = ∫ log(f(x)/g(x)) f(x) dx. It can be shown that D(f, g) ≥ 0. In the following lemma, we prove a particular case of this result.
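The Kullback-Leibler distance can also be evaluated numerically. The following R sketch computes D(f, g) by numerical integration for two illustrative normal densities, N(0, 1) and N(1, 1), for which the analytic value is 1/2.

## D(f, g) = integral of log(f(x)/g(x)) f(x) dx, for f = N(0,1) and g = N(1,1)
integrand <- function(x)
  (dnorm(x, 0, 1, log = TRUE) - dnorm(x, 1, 1, log = TRUE)) * dnorm(x, 0, 1)
integrate(integrand, -Inf, Inf)$value   # approximately 0.5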

 Lemma 4.3.1

Suppose f(x, θ) is the probability density function of X and it is indexed by θ. Then

I(θ₁, θ₀) = E_{θ₀}[−log(f(X, θ₁)/f(X, θ₀))] ≥ 0,

and the equality holds if and only if θ₁ = θ₀.

Proof It is clear that

E_{θ₀}[f(X, θ₁)/f(X, θ₀)] = ∫_{S_f} (f(x, θ₁)/f(x, θ₀)) f(x, θ₀) dx = ∫_{S_f} f(x, θ₁) dx = 1
⇒ log E_{θ₀}[f(X, θ₁)/f(X, θ₀)] = 0.

Suppose a function g is defined as g(u) = −log u, u > 0. Then g′(u) = −1/u and g″(u) = 1/u² > 0, so g is a convex function. Thus, by Jensen's inequality,

E_{θ₀}[−log(f(X, θ₁)/f(X, θ₀))] ≥ −log E_{θ₀}[f(X, θ₁)/f(X, θ₀)] = 0 ⇒ I(θ₁, θ₀) ≥ 0.

Equality holds if and only if, for all x ∈ S_f, f(x, θ₁)/f(x, θ₀) = 1 ⇔ f(x, θ₁) = f(x, θ₀) ⇔ θ₁ = θ₀, as θ is an indexing parameter.



 Remark 4.3.1

I(θ₁, θ₀) is known as the Kullback-Leibler information per unit of observation. It is a measure of the ability of the likelihood ratio to distinguish between f(x, θ₁) and f(x, θ₀) when θ₀ is the true parameter value. The inequality I(θ₁, θ₀) ≥ 0 is known as the Shannon-Kolmogorov information inequality. Suppose L_n(θ|X) denotes the likelihood of θ given data X. Then by Kolmogorov's SLLN and Lemma 4.3.1, we have

(1/n) log(L_n(θ|X)/L_n(θ₀|X)) = (1/n) ∑_{i=1}^n log(f(X_i, θ)/f(X_i, θ₀)) → −I(θ, θ₀) < 0 a.s.   (4.3.1)

Thus for large n, the likelihood function has a higher value at θ₀ than at any other specific value of θ, provided different θ correspond to different distributions, that is, provided θ is a labeling parameter. The inequality (4.3.1) specifies the rate of convergence of the likelihood ratio. If θ₀ is the true parameter value, then the likelihood ratio L_n(θ|X)/L_n(θ₀|X) converges to 0 exponentially fast at the rate exp(−nI(θ, θ₀)). Another result needed in the proofs of the Cramér-Huzurbazar theorem is Theorem 2.2.2, proved in Sect. 2.2, which states that if W_n → C < 0 in probability, then P[W_n ≤ 0] → 1 as n → ∞. In the following theorem, we prove Result 4.3.1 and Result 4.3.3 together. Proof of Result 4.3.3 is also given separately later.
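The rate in (4.3.1) is easy to visualize by simulation. The following R sketch, for the N(θ, 1) family with true value θ₀ = 0 and θ = 1 (illustrative choices), shows the scaled log-likelihood ratio approaching −I(θ, θ₀) = −1/2 as n grows.

## (1/n) log likelihood ratio converges a.s. to -I(theta, theta0)
set.seed(123)
theta0 <- 0; theta <- 1
n <- 10^(2:5)
llr <- sapply(n, function(m) {
  x <- rnorm(m, mean = theta0, sd = 1)
  mean(dnorm(x, theta, 1, log = TRUE) - dnorm(x, theta0, 1, log = TRUE))
})
cbind(n, llr, limit = -(theta - theta0)^2 / 2)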


Theorem 4.3.1 Suppose X = {X₁, X₂, …, X_n} is a random sample from the distribution of X which belongs to a Cramér family and θ₀ is a true parameter value. With probability approaching 1 as n → ∞, the likelihood equation (∂/∂θ) log L_n(θ|X) = 0 admits a solution θ̂_n(X) and θ̂_n(X) is a consistent estimator of θ₀. Further, the likelihood function has a relative maximum at θ̂_n(X).

Proof The likelihood of θ corresponding to the given sample X is given by

L_n(θ|X) = ∏_{i=1}^n f(X_i, θ) ⇔ log L_n(θ|X) = ∑_{i=1}^n log f(X_i, θ).

For δ > 0, the logarithm of the likelihood ratio is expressed as follows:

log[L_n(θ₀ + δ|X)/L_n(θ₀|X)] = log L_n(θ₀ + δ|X) − log L_n(θ₀|X)
  = ∑_{i=1}^n log f(X_i, θ₀ + δ) − ∑_{i=1}^n log f(X_i, θ₀)
  = ∑_{i=1}^n log[f(X_i, θ₀ + δ)/f(X_i, θ₀)]
  = ∑_{i=1}^n Y_i, where Y_i = log[f(X_i, θ₀ + δ)/f(X_i, θ₀)].

Note that Y_i is a Borel function of X_i, i = 1, 2, …, n. Since {X₁, X₂, …, X_n} are independent and identically distributed random variables, being Borel functions, {Y₁, Y₂, …, Y_n} are also independent and identically distributed random variables with E_{θ₀}(Y_i) = −I(θ₀ + δ, θ₀) < ∞. Hence, by Khintchine's WLLN,

(1/n) log[L_n(θ₀ + δ|X)/L_n(θ₀|X)] = (1/n) ∑_{i=1}^n Y_i → −I(θ₀ + δ, θ₀) < 0 in probability under θ₀.   (4.3.2)

On similar lines,

(1/n) log[L_n(θ₀ − δ|X)/L_n(θ₀|X)] → −I(θ₀ − δ, θ₀) < 0 in probability under θ₀.   (4.3.3)

It is to be noted that, θ₀ being an indexing parameter, L_n(θ₀ ± δ|X) ≠ L_n(θ₀|X) for any realization of X. Suppose events E_n and F_n are defined as follows:

E_n = {ω | log L_n(θ₀ + δ|X(ω)) < log L_n(θ₀|X(ω))},
F_n = {ω | log L_n(θ₀ − δ|X(ω)) < log L_n(θ₀|X(ω))}
⇒ log L_n(θ₀ ± δ|X(ω)) < log L_n(θ₀|X(ω)) ∀ ω ∈ E_n ∩ F_n.


Further from the Cramér regularity conditions, it follows that the likelihood function is differentiable and hence continuous over the interval (θ0 − δ, θ0 + δ). Hence, there exists a point θˆ n (X ) ∈ (θ0 − δ, θ0 + δ) at which log-likelihood attains its maximum. Again from the Cramér regularity conditions, the first and the second derivative of log-likelihood exist. Thus, using the theory of calculus we have ∂ ∂2 log L n (θ0 |X )|θˆ n (X ) < 0 log L n (θ0 |X )|θˆ n (X ) = 0 and ∂θ ∂θ2 for θˆ n (X ) ∈ (θ0 − δ, θ0 + δ). Suppose an event Hn is defined as Hn = An ∩ Bn ∩ Cn , where   ∂ An = ω| log L n (θ0 |X (ω))|θˆ n (X (ω)) = 0 , ∂θ ( ) Bn = ω|θˆ n (X (ω)) ∈ (θ0 − δ, θ0 + δ)   ∂2 & Cn = ω| 2 log L n (θ0 |X (ω))|θˆ n (X (ω)) < 0 . ∂θ It is to be noted that ω ∈ E n ∩ Fn ⇒ ω ∈ Hn ⇒ E n ∩ Fn ⊂ Hn ⇒ P(Hn ) ≥ P(E n ∩ Fn ). From (4.3.2), we have Pθ0 $ % Wn = n1 log L n (θ0 + δ|X )/L n (θ0 |X ) → − I (θ0 + δ, θ0 ) < 0. Hence by Theorem 2.2.2, Pθ0 [Wn ≤ 0] → 1 as n → ∞. However, θ0 being an indexing parameter Wn = 0 a.s. Thus, P(E n ) → 1. Similarly, from Eq. (4.3.3), P(Fn ) → 1. Now P(E n ) → 1 & P(Fn ) → 1 ⇒ P(E nc ) → 0 & P(Fnc ) → 0 ⇒ P(E nc ∪ Fnc ) ≤ P(E nc ) + P(Fnc ) → 0 ⇒ P(E n ∩ Fn ) → 1 ⇒ P(Hn ) → 1 ⇒ P(An ) ≥ P(Hn ) → 1, P(Bn ) ≥ P(Hn ) → 1 & P(Cn ) ≥ P(Hn ) → 1, P(An ∩ Bn ) ≥ P(Hn ) → 1. Now P(An ) → 1 implies that with probability approaching 1, there is a solution to the likelihood equation. Similarly, P(Bn ) → 1 implies that θˆ n (X ) is consistent for θ0 as δ is arbitrary. The fact that P(An ∩ Bn ) → 1 states that with probability approaching 1, there is a consistent solution of the likelihood equation and the Result 4.3.1 is proved. From the statement P(Cn ) → 1, we conclude that with probability approaching 1, there is a relative maximum of the likelihood at θˆ n (X ) and the Result 4.3.3 is proved. 


In the following theorem, we give an alternative proof of Result 4.3.3; it is as given by Huzurbazar [5].

Theorem 4.3.2 With probability approaching 1 as n → ∞, there is a relative maximum of the likelihood function at θ̂_n(X), that is,

P[(∂²/∂θ²) log L_n(θ|X)|_{θ̂_n(X)} < 0] → 1, as n → ∞.

Proof Under the Cramér regularity conditions, the likelihood function is twice differentiable and the information function I (θ) is positive and finite. Observe that by Khintchine’s WLLN n n 1 ∂2 1 ∂2 1 ∂2 log L n (θ|X )|θ0 = log f (X i , θ)|θ0 = log f (X i , θ)|θ0 n ∂θ2 n ∂θ2 n ∂θ2 i=1

i=1

Pθ0

→ −I (θ0 ). Suppose

∂2 ∂θ2

(4.3.4)

log L n (θ|X )|θˆ n (X ) = 0. Then by the mean value theorem

1 ∂2 1 ∂2 log L (θ|X )| − log L n (θ|X )|θ0 n θˆ n (X ) n ∂θ2 n ∂θ2 1 ∂3 = (θˆ n (X ) − θ0 ) 3 log L n (θ|X )|θn∗ (X ) , n ∂θ where θn∗ (X ) is a convex combination of θˆ n (X ) and θ0 given by θn∗ (X ) = αθˆ n (X ) + (1 − α)θ0 , 0 < α < 1. Now using the fifth regularity condition we have    n     1 ∂3 1  ∂ 3    ∗ ∗ ≤ log L (θ|X )| log f (X , θ)| n i θn (X )  θn (X )    n ∂θ3 3 n ∂θ i=1   3  ∂ Pθ0 → E θ0  3 log f (X , θ)|θn∗ (X )  ∂θ ≤ E θ0 (M(X )) < ∞.   3   ∂ ∗ log L (θ|X )| Thus, using the fact that  n1 ∂θ n θn (X )  converges in probability to a finite 3 number a say, which may depend on θ0 , we have


   1 ∂2  1 ∂2    n ∂θ2 log L n (θ|X )|θˆ n (X ) − n ∂θ2 log L n (θ|X )|θ0      1 ∂ 3   ˆ  = (θn (X ) − θ0 )  log L n (θ|X )|θn∗ (X )  3 n ∂θ Pθ0

→ 0,   2   ∂ 1 ∂2 as θˆ n (X ) is consistent for θ0 . Since  n1 ∂θ 2 log L n (θ|X )|θˆ n (X ) − n ∂θ 2 log L n (θ|X )|θ0  converges to 0 in probability, both the terms have the same limit for convergence in probability. But as proved in (4.3.4), Pθ0 Pθ0 1 ∂2 1 ∂2 log L (θ|X )| → −I (θ ) ⇒ log L n (θ|X )|θˆ n (X ) → −I (θ0 ) < 0. n 0 θ 0 2 2 n ∂θ n ∂θ & 2 ' ∂ By Theorem 2.2.2, we conclude that Pθ0 ∂θ log L (θ|X )| < 0 →1 n 2 θˆ n (X ) as n → ∞ and the Result 4.3.3 is proved. 
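The conclusion of Theorem 4.3.2 can be checked numerically on a single sample: at the maximizer of the log-likelihood, a finite-difference estimate of the second derivative is negative. The following R sketch does this for an exponential sample with mean θ; the sample size, true value and step size are illustrative choices.

## Observed information at the MLE is positive (second derivative negative)
set.seed(5)
x <- rexp(200, rate = 1 / 2)                # true mean theta0 = 2
loglik <- function(t) sum(dexp(x, rate = 1 / t, log = TRUE))
theta_hat <- mean(x)                        # MLE of the mean solves the likelihood equation
h <- 1e-4
d2 <- (loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h^2
d2 < 0                                      # TRUE with probability approaching one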

We now prove a lemma, proof of which is similar to that of Theorem 4.3.2. It is needed in the proof of Result 4.3.4.

 Lemma 4.3.2

Suppose Tn is any consistent estimator of θ0 . Then ! ∂2 Pθ0 log L n (θ|X )|Tn < 0 → 1 as n → ∞. ∂θ2

Proof Using arguments similar to those in the proof of Theorem 4.3.2 and using the consistency of Tn we have    1 ∂2  1 ∂2    n ∂θ2 log L n (θ|X )|Tn − n ∂θ2 log L n (θ|X )|θ0      1 ∂3  ∗ = |(Tn − θ0 )|  log L (θ|X )| n θn (X )  3 n ∂θ Pθ0

→ 0, where θn∗ (X ) is a convex combination of Tn and θ0 . Observe that Pθ0 Pθ0 1 ∂2 1 ∂2 log L (θ|X )| → −I (θ ) ⇒ log L (θ|X )| → −I (θ0 ) < 0. n 0 n T θ n 0 n ∂θ2 n ∂θ2

Hence, we conclude that Pθ0

! ∂2 log L (θ|X )| < 0 → 1 as n → ∞. n T n ∂θ2 


Theorem 4.3.3 With probability approaching 1 as n → ∞, a consistent solution of the likelihood equation is unique.

Proof The proof is by contradiction. Suppose if possible, θˆ 1n (X ) and θˆ 2n (X ) are two consistent solutions of the likelihood equation in (θ0 − δ, θ0 + δ), where X ∈ E n ∩ Fn . Hence, with probability approaching 1 as n → ∞, 1 ∂ log L n (θ|X )|θˆ 1n (X ) = 0 n ∂θ

and

1 ∂ log L n (θ|X )|θˆ 2n (X ) = 0. n ∂θ

Hence, by Rolle’s theorem, with probability approaching 1 as n → ∞, 1 ∂2 log L n (θ|X )|θ˜ n (X ) = 0 where n ∂θ2 θ˜ n (X ) = αθˆ 1n (X ) + (1 − α)θˆ 2n (X ), 0 < α < 1. It is to be noted that θˆ 1n (X ) and θˆ 2n (X ) are consistent estimators of θ0 and hence, being a convex combination, θ˜ n (X ) is also a consistent estimator of θ0 . Hence by the Lemma 4.3.2, Pθ0 1 ∂2 log L (θ|X )| → −I (θ0 ) < 0 n ˜ θ (X ) n n ∂θ2 ! ∂2 log L (θ|X )| < 0 → 1, as n → ∞ . ⇒ P n θ˜ n (X ) ∂θ2

It is a contradiction to the statement that, with probability approaching 1 as n → ∞, (1/n)(∂²/∂θ²) log L_n(θ|X)|_{θ̃_n(X)} = 0. Hence, it is proved that with probability approaching 1 as n → ∞, a consistent solution of the likelihood equation is unique.

We now proceed to prove Result 4.3.2, which states that the large sample distribution of a consistent solution of the likelihood equation is normal.

Theorem 4.3.4 For large n, the distribution of θ̂_n(X) is approximately normal N(θ₀, 1/n I(θ₀)), that is,

√n(θ̂_n(X) − θ₀) → Z₁ ∼ N(0, 1/I(θ₀)) in law.

Proof From the regularity conditions, likelihood is thrice differentiable, hence by the Taylor series expansion we have 0=

∂ ∂ ∂2 log L n (θ|X )|θˆ n (X ) = log L n (θ|X )|θ0 +(θˆ n (X ) − θ0 ) 2 log L n (θ|X )|θ0 ∂θ ∂θ ∂θ 3 ∂ 1 ˆ + (θn (X ) − θ0 )2 3 log L n (θ|X )|θ∗ (X ) , 2 ∂θ


where θ∗ (X ) = α θˆ n (X ) + (1 − α)θ0 . By rearranging the terms, we get (θˆ n (X ) − θ0 ) = ⇒

∂2 ∂θ2

∂ − ∂θ log L n (θ|X )|θ0 ∂3 log L n (θ|X )|θ0 + 21 (θˆ n (X ) − θ0 ) ∂θ 3 log L n (θ|X )|θ ∗

√ n(θˆ n (X ) − θ0 )

=

√1 ∂ log L n (θ|X )|θ 0 n ∂θ 1 ∂2 1 ˆ 1 ∂3 − n ∂θ2 log L n (θ|X )|θ0 − 2 (θn (X ) − θ0 ) n ∂θ3

=

Un , Vn

log L n (θ|X )|θ∗ (X )

where 1 ∂ Un = √ log L n (θ|X )|θ0 & n ∂θ 1 ∂2 1 ˆ 1 ∂3 Vn = − log L (θ|X )| − (X ) − θ ) log L n (θ|X )|θ∗ (X ) . ( θ n n 0 θ 0 n ∂θ2 2 n ∂θ3 The denominator Vn is different from 0 for large n as is clear from the following:  n  Pθ0 ∂2 1 ∂2 1 − − log L (θ|X )| = log f (X , θ)| → − I (θ0 ) , n i θ0 θ0 2 2 n ∂θ n ∂θ i=1

by WLLN. Further, using the arguments similar to those in the proof of Theorem 4.3.2, we have    n     1 ∂3 1  ∂ 3     ∂θ3 log f (X i , θ)|θn∗ (X )   n ∂θ3 log L n (θ|X )|θn∗ (X )  ≤ n i=1   3  ∂ Pθ0 → E θ0  3 log f (X , θ)|θn∗ (X )  ∂θ ≤ E θ0 (M(X )) < ∞ . Moreover, θˆ n (X ) is consistent for θ0 , and hence 1 ˆ 1 ∂3 2 (θn (X ) − θ0 ) n ∂θ3

Pθ0

log L n (θ|X )|θ∗ (X ) → 0. Consequently, the denominator Vn = −

Pθ0 1 ∂2 1 ˆ 1 ∂3 log L (θ|X )| − (X ) − θ ) log L n (θ|X )|θ∗ (X ) → I (θ0 ). ( θ n n 0 θ 0 2 3 n ∂θ 2 n ∂θ

The numerator Un can be expressed as n 1 ∂ 1 ∂ Un = √ log L n (θ|X )|θ0 = √ log f (X i , θ)|θ0 . ∂θ n ∂θ n i=1


It is given that {X 1 , X 2 , . . . , X n } are independent and identically distributed random ∂ log f (X i , θ)|θ0 for i = 1, 2, . . . , n are also variables and being Borel functions, ∂θ independent and identically distributed random variables with  E θ0

∂ log f (X i , θ)|θ0 ∂θ



 = 0 & V arθ0

∂ log f (X i , θ)|θ0 ∂θ

 = I (θ0 ),

i = 1, 2, . . . , n. From the regularity conditions, we have 0 < I (θ0 ) < ∞. Hence by the CLT for the independent and identically distributed random variables with positive finite variance, n 

∂ ∂θ

log f (X i , θ)|θ0 − n ∗ 0 L → Z ∼ N (0, 1) √ n I (θ0 ) 1 ∂ L ⇔ Un = √ log L n (θ|X )|θ0 → U ∼ N (0, I (θ0 )). n ∂θ i=1

Using Slutsky’s theorem, √ Un L n(θˆ n (X ) − θ0 ) = → Z 1 ∼ N (0, 1/I (θ0 )). Vn Thus, it is proved that θˆ n (X ) is CAN for θ0 with approximate variance  1/n I (θ0 ).  Remark 4.3.2

Results 4.3.1 to 4.3.4 collectively convey that under the Cramér regularity conditions, for large n, the likelihood equation has a unique consistent solution θˆ n (X ) and a relative maximum is attained at this solution. Further Result 4.3.2 asserts that it is CAN for θ0 with approximate variance 1/n I (θ0 ). It is to be noted that for finite n, 1/n I (θ0 ) is the Cramér-Rao lower bound for the variance of an unbiased estimator of θ0 . For large n, θˆ n (X ) is unbiased for θ0 , thus for large n, variance of θˆ n (X ) attains the Cramér-Rao lower bound and hence it is an asymptotically efficient estimator. Hence, θˆ n (X ) is referred to as the best asymptotically normal (BAN) estimator of θ0 , (Rohatgi and Saleh [7]). 1/n I (θ0 ) is also referred to as the Fisher lower bound for the variance, (Kale and Muralidharan [6]). Using similar arguments as in Results 4.3.1 to 4.3.4, it can be proved that all the four results are true for any interior point θ ∈ . It is already noted that all the standard distributions, such as N (θ, 1) when θ ∈ R, exponential distribution with mean θ > 0, gamma distribution with scale 1 and shape θ > 0, Binomial B(n, θ) where n is known and θ ∈ (0, 1), Poisson Poi(θ), θ > 0, geometric distribution with success probability θ ∈ (0, 1) and the truncated


versions of these, constitute a one-parameter exponential family of distributions. Hence, all these distributions satisfy the Cramér regularity conditions and Results 4.3.1 to 4.3.4 are valid. It is noted that a Cauchy distribution with location parameter θ and scale parameter 1 does not belong to a one-parameter exponential family. In the following example, we show that it belongs to a Cramér family and hence the maximum likelihood estimator of θ is BAN for θ. For a Cauchy distribution, it is not possible to solve the likelihood equation explicitly and hence it becomes necessary to apply suitable iterative procedures to get the numerical solution of the likelihood equation. Most commonly used iterative procedures are Newton-Raphson procedure obtained from the Taylor series expansion and the method of scoring as proposed by Fisher. These are discussed in Sect. 4.4.  Example 4.3.1

Suppose a random variable X follows a Cauchy C(θ, 1) distribution with location parameter θ and scale parameter 1, then its probability density function f (x, θ) is given by f (x, θ) =

1 1 , x ∈ R, θ ∈ R . π 1 + (x − θ)2

The support of X is a real line and it is free from θ. Further, the parameter space is also a real line, which an open set. Now, ∂ ∂2 −2 + 2(x − θ)2 2(x − θ) , log f (x, θ) = log f (x, θ) = ∂θ 1 + (x − θ)2 ∂θ2 [1 + (x − θ)2 ]2 3 3 ∂ 4(x − θ) − 12(x − θ) 4(x − θ)((x − θ)2 − 12) log f (x, θ) = = . ∂θ3 [1 + (x − θ)2 ]3 [1 + (x − θ)2 ]3 ∂ ∂ ∂ Thus, the partial derivatives ∂θ log f (x, θ), ∂θ 2 log f (x, θ) and ∂θ 3 log f (x, θ) exist for almost all values of x ∈ S f . It is in view of the fact that for fixed x, log f (x, θ) is a logarithm of a polynomial of degree 2 in θ and hence it is an analytic function of θ. Further, 2

3

  3  4|x − θ|((x − θ)2 + 12) ∂ ≤  log f (x, θ) = M(x), say.   ∂θ3 [1 + (x − θ)2 ]3 Now we examine whether E(M(X )) < ∞. We have E(M(X )) = =

1 π 1 π

*



*

−∞ ∞ −∞

* 1 ∞ = π 0 * 1 ∞ = π 0

4|x − θ|((x − θ)2 + 12) 1 dx 2 3 [1 + (x − θ) ] 1 + (x − θ)2 4|y|(y 2 + 12) 1 dy (1 + y 2 )3 1 + y 2 8y(y 2 + 12) dy (1 + y 2 )4 P3 (y) dy, P8 (y)


where P3 (y) and P8 (y) are polynomial functions of y with degrees 3 and 8 ∞ respectively. The infinite integral π1 0 PP38 (y) (y) dy is convergent as the degree of polynomial in denominator is larger by 1, than the degree of polynomial in numerator, which implies that the third partial derivative of log f (x, θ) is bounded by an integrable function. Now to examine whether the differentiation  and integration in the identity S f f (x, θ) = 1 can be interchanged, we note that ∂ ∂θ

2(x−θ) f (x, θ) = π1 (1+(x−θ) 2 )2 is a continuous function of θ and hence integrable over a finite interval (a, b). Further, this integral is uniformly convergent as a → −∞ and b → ∞ as the integrand behaves like x13 near ±∞. Using similar arguments

∂ for ∂θ 2 f (x, θ), differentiation and integration can be interchanged second time. It follows that   * * ∂ 2(x − θ) 2t 1 ∞ 1 Eθ = =0, log f (X , θ) = ∂θ π S f (1 + (x − θ)2 )2 π −∞ (1 + t 2 )2 2

integrand being an odd function. We now find the information function I (θ) as follows:     ∂2 2 − 2(X − θ)2 I (θ) = E − 2 log f (X , θ) = E ∂θ [1 + (X − θ)2 ]2 * ∞ 2 1 − (X − θ)2 = dx π −∞ [1 + (X − θ)2 ]3 * * 2 ∞ 1 − y2 4 ∞ 1 − y2 = dy = dy π −∞ [1 + y 2 ]3 π 0 [1 + y 2 ]3 , + * 1 1 * 4 1 ∞ u− 2 1 ∞ u2 = du − du with y 2 = u π 2 0 [1 + u]3 2 0 [1 + u]3 ! * * 4 1 ∞ u 1/2−1 u 3/2−1 1 ∞ = du − du π 2 0 [1 + u]1/2+5/2 2 0 [1 + u]3/2+3/2    ! 1 5 3 3 2 B −B by definition of beta function = , , π 2 2 2 2 ! 2 (1/2)(5/2) (3/2)(3/2) = − π (3) (3) ! 1 1 3π π = . = − π 4 4 2 Thus I (θ) is positive and finite. Thus, a Cauchy C(θ, 1) distribution satisfies all the Cramér regularity conditions, hence it belongs to a Cramér family. By the Cramér-Huzurbazar theorem, for large n, the maximum likelihood estimator θˆ n of θ is a CAN estimator of θ with the approximate variance 1/n I (θ) = 2/n. For large n, the approximate variance attains the Cramér lower bound for the variance, hence it is a BAN estimator. However, the likelihood equation given by


(∂/∂θ) log L_n(θ|X) = ∑_{i=1}^n 2(X_i − θ)/(1 + (X_i − θ)²) = 0 cannot be solved explicitly and we need to use the numerical methods, discussed in Sect. 4.4, to obtain the value of the maximum likelihood estimator corresponding to the given random sample. It is to be noted that for each n, there are multiple roots of the likelihood equation, out of which some root is consistent. Since a Cauchy distribution belongs to a Cramér family, for large n, with high probability, the consistent solution of the likelihood equation is unique. A numerical sketch in R follows.
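The following is a minimal sketch, assuming simulated data: the Cauchy log-likelihood is maximized numerically over an interval around the sample median, a convenient consistent starting region. The sample size, true θ and search interval are illustrative choices.

## Cauchy C(theta, 1): numerical maximization of the log-likelihood
set.seed(7)
n <- 500; theta <- 3
x <- rcauchy(n, location = theta, scale = 1)

loglik <- function(t) sum(dcauchy(x, location = t, scale = 1, log = TRUE))
theta_hat <- optimize(loglik, interval = median(x) + c(-2, 2), maximum = TRUE)$maximum
c(mle = theta_hat, approx_var = 2 / n)   # approximate variance 1/(n I(theta)) = 2/n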

 Remark 4.3.3

In Example 3.2.6, it is shown that for a Cauchy C(θ, 1) distribution, the sample median is CAN for θ with the approximate variance π 2 /4n. It is to be noted that π 2 /4n = 2.4694/n > 2/n as expected. In Example 4.2.3, we have noted that a normal N (θ, θ2 ) distribution, when θ > 0, does not belong to a one-parameter exponential family, but still the maximum likelihood estimator of θ is CAN with approximate variance 1/n I (θ). These results lead to a conjecture that this distribution may be a member of a Cramér family, which is verified in the next example.  Example 4.3.2

Suppose X ∼ N (θ, θ2 ) distribution, θ ∈  = (0, ∞). Thus, the parameter space (0, ∞) is an open set. The support of the distribution is a real line, which is free from θ. The partial derivatives of log f (x, θ) up to order three exist and are as follows:

∂ ∂θ ∂2 ∂θ2 ∂3 ∂θ3

x 1 x2 1 log f (x, θ) = − log 2π − log θ − 2 + − , 2 2θ θ 2 x 1 x2 log f (x, θ) = − + 3 − 2 θ θ θ 1 3x 2 2x log f (x, θ) = 2 − 4 + 3 , θ θ θ 2 12x 2 6x log f (x, θ) = − 3 + 5 − 4 . θ θ θ

To examine whether the third derivative of log f (x, θ) is bounded by an integrable function, observe that for θ ∈ Nδ (θ0 ), θ0 − δ < θ < θ0 + δ ⇔ θ01+δ < 1θ < θ01−δ . Hence,           3   2 ∂ 12x 2 6x   2   12x 2   6x      ∂θ3 log f (x, θ) = − θ3 + θ5 − θ4  ≤  θ3  +  θ5  +  θ4           12x 2   6x  2 + +  = M(x), ≤      3 4 5 (θ0 − δ) (θ0 − δ)  (θ0 − δ)

212

4

CAN Estimators in Exponential and Cramér Families

       2   12x 2   6x  + + where M(x) =  (θ −δ)     . Now E(X 2 ) = 2θ2 . We want to 3 (θ0 −δ)4 (θ0 −δ)5 0 find a bound on E(M(X )), thus it is enough to find a bound on E(|X |). Observe that E(|X |) = E(|X − θ + θ|) ≤ E(|X − θ|) + |θ|. Now, 1

E(|X − θ|) = √ 2πθ 1 =√ 2πθ 1 =√ 2πθ 1 =√ 2πθ 2θ2 =√ 2πθ

 1 2 |x − θ| exp − 2 (x − θ) d x 2θ −∞   * ∞ 1 2 dy with y = x − θ |y| exp − 2 y 2θ −∞   * ∞ 1 y exp − 2 y 2 dy 2 2θ 0 as integrand is an even function   * ∞ 1 exp − 2 t dt with y 2 = t 2θ 0 ! 2 −t ∞ =θ − exp{ 2 } . 2θ 0 π *





Thus, E(|X |) = E(|X − θ + θ|) ≤ E(|X − θ|) + |θ| = θ quence,



2 π

+ θ. As a conse-

             6 2 12 2 2      + 2θ  + θ E(M(X )) ≤  +θ (θ0 − δ)3  π (θ0 − δ)5   (θ0 − δ)4  < ∞ ∀ θ ∈ (θ0 − δ, θ0 + δ). Thus, N (θ, θ2 ) distribution when θ ∈ (0, ∞) satisfies all the Cramér regularity conditions and hence it belongs to a Cramér family.   Remark 4.3.4

   2        6x  It is to be noted that the function  θ23  +  12x  +  θ4  for θ ∈  is not bounded θ5 2

as θ23 , 12x and 6x can be made arbitrarily large by selecting θ sufficiently θ4 θ5 small. However, we want the Cramér regularity conditions to be satisfied for all     2   12x 2   6x  θ ∈ Nρ (θ0 ) and the function  (θ −δ)3  +  (θ −δ)5  +  (θ −δ)4  remains bounded 0 0 0 for all θ ∈ Nρ (θ0 ) for δ < ρ. In Example 3.3.9, it is shown that if (X , Y ) has a bivariate normal distribution with parameter (0, 0, 1, 1, ρ) , ρ ∈ (−1, 1), then the sample correlation coefficient Rn is a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n. In the next example, we show that the family of a bivariate normal distribution with parameter (0, 0, 1, 1, ρ) , ρ ∈ (−1, 1), belongs to a Cramér family. Hence, for large n, the

4.3 Cramér Family

213

maximum likelihood estimator of ρ exists and is a CAN estimator of ρ with approximate variance 1/n I (ρ) = (1 − ρ2 )2 /n(1 + ρ2 ), which is less than the approximate variance (1 − ρ2 )2 /n of Rn as expected.  Example 4.3.3

Suppose (X , Y ) has a bivariate normal distribution with zero mean vector and dispersion matrix  given by  =

 1 ρ , ρ 1

ρ ∈ (−1, 1). Then its probability density function f (x, ρ) is given by   1 (x 2 + y 2 − 2ρx y)  , (x, y) ∈ R2 , − 1 < ρ < 1 . exp − 2(1 − ρ2 ) 2π 1 − ρ2 It is to be noted that the parameter space (−1, 1) is an open set and the support of the distribution is R2 which does not depend on the parameter ρ. Observe that 1 x 2 + y2 ρx y log f (x, ρ) = − log(1 − ρ2 ) − + − log 2π 2 2(1 − ρ2 ) (1 − ρ2 ) and it cannot be expressed in the form required for a one-parameter exponential family. Thus the distribution of (X , Y ) does not belong to a one-parameter exponential family. To examine if it belongs to a Cramér family, we have already noted that the first two conditions are satisfied. Now we verify whether the derivatives of log f (x, ρ) up to order 3 exist. From the expression of log f (x, ρ) we have ρ(x 2 + y 2 ) x y(1 + ρ2 ) ∂ ρ − + log f (x, ρ) = 2 ∂ρ 1−ρ (1 − ρ2 )2 (1 − ρ2 )2 ∂2 1 + ρ2 (x 2 + y 2 )(1 + 3ρ2 ) x y(6ρ + 2ρ3 ) log f (x, ρ) = − + 2 2 2 ∂ρ (1 − ρ ) (1 − ρ2 )3 (1 − ρ2 )3 ∂3 1 + 4ρ + 2ρ3 (x 2 + y 2 )(12ρ(1 + ρ2 )) log f (x, ρ) = − ∂ρ3 (1 − ρ2 )3 (1 − ρ2 )4 2 4 6x y(1 + 6ρ + ρ ) + . (1 − ρ2 )4 Suppose ρ ∈ Nδ (ρ0 ), that is, ρ0 − δ < ρ < ρ0 + δ ⇔ 1/(ρ0 + δ) < 1/ρ < 1/(ρ0 − δ). Then  3      ∂   1 + 4ρ + 2ρ3   (x 2 + y 2 )(12ρ(1 + ρ2 ))   ≤ +  log f (x, ρ)  ∂ρ3   (1 − ρ2 )3    (1 − ρ2 )4    6x y(1 + 6ρ2 + ρ4 )   +   (1 − ρ2 )4

214

4

CAN Estimators in Exponential and Cramér Families

   1 + 4(ρ0 + δ) + 2(ρ0 + δ)3   ≤   (1 − (ρ0 − δ)2 )3  2   (x + y 2 )(12(ρ0 + δ)(1 + (ρ0 + δ)2 ))   +   (1 − (ρ0 − δ)2 )4    6x y(1 + 6(ρ0 + δ)2 + (ρ0 + δ)4 )   +   (1 − (ρ − δ)2 )4 0

= M(x, y). Using the fact that E(X 2 + Y 2 ) = 2 and E|X Y | ≤ (E(X 2 )E(Y 2 ))1/2 = 1, we note that M(X , Y ) is an integrable random variable ∀ ρ ∈ (ρ0 − δ, ρ0 + δ). From the second derivative of log f (x, ρ) we find the information function as follows:   ∂2 I (ρ) = E − 2 log f (X , ρ) ∂ρ E(X 2 + Y 2 )(1 + 3ρ2 ) E(X Y )(6ρ + 2ρ3 ) 1 + ρ2 + − 2 2 2 3 (1 − ρ ) (1 − ρ ) (1 − ρ2 )3 1 + ρ2 2(1 + 3ρ2 ) ρ(6ρ + 2ρ3 ) =− + − (1 − ρ2 )2 (1 − ρ2 )3 (1 − ρ2 )3 4 2 2(1 − ρ ) 1+ρ 1 + ρ2 = − = . (1 − ρ2 )3 (1 − ρ2 )2 (1 − ρ2 )2

=−

Thus, 0 < I (ρ) < ∞. Thus, all the Cramér regularity conditions are satisfied and the distribution of (X , Y ) belongs to a Cramér family. Hence for large n, the maximum likelihood estimator of ρ exists and it is a CAN estimator of ρ with approximate variance 1/n I (ρ) = (1 − ρ2 )2 /n(1 + ρ2 ). Now to find the maximum likelihood estimator of ρ, the log-likelihood function is given by n 

n log L n (ρ|(X , Y )) = − log(1 − ρ2 ) − 2

(X i2 + Yi2 )

i=1

2(1 − ρ2 )

n 

ρ +

X i Yi

i=1

(1 − ρ2 )

.

Hence the likelihood equation is as follows: nρ ∂ − log L n (ρ|(X , Y )) = ∂ρ 1 − ρ2

ρ

⇒ ρ(1 − ρ )−

n 

(X i2 + Yi2 )

i=1

(1 + ρ2 ) +

(1 − ρ2 )2 n  ρ (X i2 + Yi2 ) i=1

X i Yi

i=1 (1 − ρ2 )2

(1 + ρ2 )

+ n ⇒ ρ3 − ρ2 Vn − ρ(1 − Un ) − Vn = 0 ⇒ aρ3 + 3bρ2 + 3cρ + d = 0 , say 2

n 

n 

i=1

n

=0 X i Yi =0

4.3 Cramér Family

215 n 

where Un =

n 

(X i2 + Yi2 )

i=1

, Vn =

n

X i Yi

i=1

& a = 1, 3b = −Vn ,

n

3c = −(1 − Un ) & d = −Vn .

Thus, the likelihood equation is a cubic equation in ρ. The condition for a unique real root for the cubic equation is G 2 + 4H 3 > 0, where G = a 2 d − 3abc + 2b3 and H = ac − b2 . To examine whether this condition is satisfied, suppose random variables G n and Hn are defined as follows: ! −4Vn 1 Vn2 Un Vn 2Vn3 . Gn = + − & Hn = − 1 − Un + 3 3 27 3 3 It is to be noted that n 

Un =

(X i2 + Yi2 )

i=1 n 

Vn =

n X i Yi

i=1

n



→ E(X 2 + Y 2 ) = 2 & Pρ

→ E(X Y ) = ρ .

As a consequence, ! Pρ 1 & ρ' 1 Vn2 1 − Un + 1−2+ −Hn = → 3 3 3 3 & ' 1 ρ = −1 + = C ∈ (−4/3, −2/3) as ρ ∈ (−1, 1) . 3 3 Now Pρ

−Hn → C < 0 ⇒ P[−Hn ≤ 0] → 1 by Theorem 2.2.2 ⇒ P[Hn ≥ 0] → 1 ⇒ P[Hn > 0] → 1 as P[Hn = 0] = 0 ⇒ Pρ [G 2n + 4Hn3 > 0] → 1 as n → ∞. Hence, with probability approaching 1 the cubic equation ρ3 − ρ2 Vn − ρ(1 − Un ) − Vn = 0 has a unique real root. Suppose g(ρ) = ρ3 − ρ2 Vn − ρ(1 − Un ) − Vn , then g(−1) = −2Vn − Un = −

n 1 2 (X i + Yi2 + 2X i Yi ) n i=1

n 1 =− (X i + Yi )2 < 0 a.s. n i=1

216

4

CAN Estimators in Exponential and Cramér Families

n 1 2 & g(1) = −2Vn + Un = (X i + Yi2 − 2X i Yi ) n i=1

n 1 = (X i − Yi )2 > 0 a.s. n i=1

Further g is a continuous function, hence with probability 1, the unique root is in (−1, 1). Suppose the root is denoted by ρˆ n . Then by the Cramér-Huzurbazar theory, it is the maximum likelihood estimator of ρ and it is a CAN estimator of ρ with approximate variance 1/n I (ρ) = (1 − ρ2 )2 /n(1 + ρ2 ). n n X i Yi /n = i=1 Ui /n, where Ui = X i Yi , i = 1, 2, . . . , n. Suppose Vn = i=1 Corresponding to a random sample from the distribution of (X , Y ) , we have a random sample {U1 , U2 , . . . , Un }. Further, E(Ui ) = ρ & V ar (Ui ) = E(X 2 Y 2 ) − (E(X Y ))2 = E(X 2 Y 2 ) − ρ2 . To find E(X 2 Y 2 ), we use the result that the conditional distribution of X given Y is normal N (ρY , 1 − ρ2 ). Observe that E(X 2 Y 2 ) = E({E(X 2 Y 2 )|Y }) = E(Y 2 {E(X 2 )|Y }) = E(Y 2 {V ar (X |Y ) + (E(X |Y ))2 }) = E(Y 2 {(1 − ρ2 ) + (ρY )2 }) = (1 − ρ2 )E(Y 2 ) + ρ2 E(Y 4 ) = (1 − ρ2 ) + 3ρ2 = 1 + 2ρ2 . Thus, V ar (Ui ) = 1 + ρ2 , which is positive and finite. By the WLLN and by the CLT Pρ

Vn → ρ &

√ L n(Vn − ρ) → Z 1 ∼ N (0, 1 + ρ2 ) .

Hence, Vn is CAN for ρ with approximate variance (1 + ρ2 )/n. It is to be noted that Vn is a moment estimator of ρ. Thus, both Vn and ρˆ n are CAN for ρ. Further, 1 + ρ2 −

(1 − ρ2 )2 (1 + ρ2 )2 − (1 − ρ2 )2 4ρ2 = = >0, (1 + ρ2 ) (1 + ρ2 ) (1 + ρ2 )

implying that ρˆ n is more efficient than Tn , which is expected as ρˆ n is a BAN estimator of ρ. 

4.3 Cramér Family

217

 Remark 4.3.5

From Example 3.3.9 and Example 4.3.3, we note that if (X , Y ) has a bivariate normal distribution with parameter (0, 0, 1, 1, ρ) , ρ ∈ (−1, 1), then we have the following three CAN estimators of ρ: (i) the sample correlation coefficient Rn with approximate variance (1 − ρ2 )2 /n, (ii) the maximum likelihood (1 − ρ2 )2 /n(1 + ρ2 ) and (iii) the estimator ρˆ n of ρ with approximate variance n moment estimator of ρ given by Vn = i=1 X i Yi /n with approximate variance (1 + ρ2 )/n. Among these three ρˆ n has the smallest variance. In Example 4.5.4, R code is given to derive the maximum likelihood estimator ρˆ n and to verify that it is a CAN estimator of ρ. In the same example it is shown that for a given sample, values of ρˆ n and Rn are close. In Example 5.3.2, we also note that for various values of ρ, the values of the approximate variances of ρˆ n and Rn are close. Using routine procedure, it can be proved that if (X , Y ) has a bivariate normal distribution with parameter (μ1 , μ2 , σ12 , σ22 , ρ) , where μ1 , μ2 ∈ R, σ12 , σ22 > 0 and ρ ∈ (−1, 1), then the distribution belongs to a five-parameter exponential family. The sample correlation coefficient Rn is the maximum likelihood estimator of ρ as well as a moment estimator of ρ based on a sufficient statistic and is a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n. We have noted that if the probability distribution belongs to either a one-parameter exponential family or a Cramér family with indexing parameter θ, then for large n, the maximum likelihood estimator of θ is BAN for θ. Hence, it is better to obtain an interval estimator based on the maximum likelihood estimator of θ or to define a test statistic for testing certain hypotheses using the maximum likelihood estimator of θ. Hodges and Lecam have given one example in 1953 in which they have proposed an estimator which is better than the maximum likelihood estimator, in the sense of having smaller variance at least at one parametric point, (Kale and Muralidharan [6]). The estimator proposed by Hodges and Lecam is hence known as super efficient estimator. We discuss it below: Super efficient estimator: Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normal N (θ, 1) distribution. Then the sample mean X n ∼ N (θ, 1/n) distribution for any n. It is to be noted that the variance 1/n of X n is same as the Cramér Rao lower bound for the variance of an unbiased estimator. Hence, X n is an efficient estimator of θ. Hodges and Lecam proposed an estimator Tn of θ as follows. Suppose for 0 < α < 1,  X n , if |X n | > n −1/4 Tn = αX n , if |X n | ≤ n −1/4 . A technique of defining Tn in this way is known as a shrinkage technique. If X n is small, it is made further small by multiplying it by a fraction, to define Tn . The cutoff n −1/4 can be replaced by any sequence {an , n ≥ 1} such that an → 0 and √ −1 nan → ∞. For example, an = 1/ log n or an = n 2 +δ where 0 < δ < 1/2. We now show that Tn with any such an is CAN for θ and find its approximate variance.

218

4

CAN Estimators in Exponential and Cramér Families

√ √ √ Suppose Yn = n(Tn −θ) − n(X n − θ) = n(Tn − X n ). Suppose θ = 0. Then for  > 0, √ P[|Yn | < ] = P[| n(Tn − X n )| < ] ≥ P[Tn = X n ] = P[|X n | > an ] √ √ = 1− P[−an < X n < an ] = 1−( n(an − θ))+(− n(an + θ)) . Now θ = 0 ⇒ d(θ, 0) > 0, where d is a distance function. Suppose d(θ, 0) = λ. The sequence {an , n ≥ 1} is such that an → 0, that is, given 1 > 0, there exists n 0 (1 ) such that ∀ n ≥ n 0 (1 ), |an | < 1 . Suppose 1 = λ. Then ∀ n ≥ n 0 (λ), |an | < λ. Case(i): Suppose θ < −an then θ + an < 0. Further, θ < −an < an ⇒ θ − an < 0. Thus, in this case P[|Yn | < ] → 1. Case(ii): Suppose θ > an then θ − an > 0. Further, θ > an > −an ⇒ θ + an > 0 P

θ = 0, Yn → 0. As a conseand in this case also P[|Yn | < ] → √ 1. Hence, when √ quence, asymptotic distribution of n(Tn − θ) and of n(X n − θ) is the same. But √ √ L L n(X n − θ) → Z ∼ N (0, 1) and hence n(Tn − θ) → Z ∼ N (0, 1). Suppose θ = 0. Observe that √ √ √ √ √ n(Tn ) = n(Tn − αX n + αX n ) = n(Tn − αX n ) + nαX n = Un + nαX n . Then for  > 0, √ P[|Un | < ] = P[| n(Tn − αX n )| < ] ≥ P[Tn = αX n ] = P[|X n | < an ] √ √ √ √ √ = P[− nan < n (X n ) < nan ] = ( nan ) − (− nan ) √ → 1 as n → ∞ as nan → ∞. √ P Hence, when θ = 0, Un → 0. As a consequence, asymptotic distribution of n(Tn ) √ √ L and of α n(X n ) is the same. But α n(X n ) → Z 1 ∼ N (0, α2 ) and hence √ L n(Tn ) → Z 1 ∼ N (0, α2 ). Thus for all θ, asymptotic distribution of Tn is normal with approximate variance v(θ)/n, where v(θ) is given by  1, if θ = 0 v(θ) = α2 , if θ = 0 Thus at θ = 0, V ar (Tn ) < V ar (X n ) and hence Tn is labeled as a super efficient estimator. In general, one can use similar shrinkage technique to improve a CAN estimator that minimizes the variance at a fixed point θ0 . More specifically, suppose Sn is a CAN estimator of θ with approximate variance σ(θ)/n. Suppose {an , n ≥ 1} is a √ sequence of real numbers such that an → 0 and nan → ∞. For 0 < α < 1, an estimator Tn is defined as  if |Sn − θ0 | > an Sn , Tn = α(Sn − θ0 ) + θ0 , if |Sn − θ0 | ≤ an

4.3 Cramér Family

219

Then it can be shown that Tn has asymptotically normal distribution with mean θ and approximate variance w(θ)/n, where w(θ) is given by  w(θ) =

σ(θ), α2 σ(θ),

if if

θ  = θ0 θ = θ0

The shrinkage technique can be extended to reduce the variance at a finite number of points. Thus, the set of parameters, at which approximate variance of the super efficient estimator can be made smaller than the approximate variance of the maximum likelihood estimator, has Lebesgue measure 0. In view of the fact that the reduction in variance can be achieved only at finitely many parametric points, the super efficient estimator is not practically useful. For more discussion on this, we refer to Kale and Muralidharan [6].  Remark 4.3.6

In view of the above example of a super efficient estimator given by Hodges and Lecam, many statisticians object to label the CAN estimator with approximate variance 1/n I (θ) as a BAN estimator. More discussion on this issue can be found in Rao [8], p. 348. We now discuss the concept of asymptotic relative efficiency (ARE). It is useful to compare two CAN estimators via their asymptotic variances. It is defined as follows:

 Definition 4.3.1 Asymptotic Relative Efficiency: Suppose T1n and T2n are two CAN estimators of θ with approximate variance σ12 /n and σ22 /n respectively. Then ARE of T1n with respect to T2n is σ2 A R E(T1n , T2n ) = 22 . σ1

If the norming factors of T1n and T2n are an and bn respectively, then ARE of T1n with respect to T2n is defined as σ22 /bn2 . n→∞ σ 2 /a 2 n 1

A R E(T1n , T2n ) = lim  Remark 4.3.7

If the norming factors an and bn tend to ∞ at the same rate then, A R E(T1n , T2n ) = σ22 /σ12 . If A R E(T1n , T2n ) > 1, then T1n is preferred to T2n as an estimator of θ. For a Cauchy C(θ, 1) distribution, the maximum likelihood estimator T1n of θ is CAN for

220

4

CAN Estimators in Exponential and Cramér Families

θ with the approximate variance 2/n. Further, the sample median T2n is CAN for θ with the approximate variance π 2 /4n. Thus A R E(T1n , T2n ) =

π2 = 1.2324 > 1 . 8

Thus, the maximum likelihood estimator T1n is preferred to the sample median. However, the gain in efficiency is marginal and it would be better to use the sample median in view of its computational ease. For a normal N (θ, 1) distribution, the maximum likelihood estimator T1n of θ is CAN for θ with the approximate variance 1/n and the sample median T2n is CAN for θ with the approximate variance π/2n. Thus, ARE of sample mean with respect to sample median is A R E(T1n , T2n ) =

π = 1.5714 > 1 . 2

Thus, the maximum likelihood estimator T1n is again preferred to the sample median. In other words, ARE of the sample median with respect to the sample mean is 2/π = 0.6364. It is interpreted as follows. If we use the sample mean instead of the sample median to estimate θ, then we get the same accuracy with 64% of observations. Suppose X follows a Laplace (θ, 1) distribution with probability density function given by f (x, θ) = (1/2) exp{−|x − θ|}, x ∈ R, θ ∈ R. It is easy to verify that the sample median T1n = X ([n/2]+1) is CAN for θ with approximate variance 1/n. If X follows Laplace distribution, then E(X ) = θ and V ar (X ) = 2 < ∞. Thus, by the WLLN and by the CLT, T2n = X n is CAN for θ with approximate variance 2/n. Hence, A R E(T1n , T2n ) = σ22 /σ12 = 2, which implies that the sample median is a better estimator of θ than the sample mean. It is to be noted that the sample median is the maximum likelihood estimator of θ. If the distribution belongs to either a one-parameter exponential family or a Cramér family, then we know that the maximum likelihood estimator is asymptotically efficient and the other estimator cannot be better than that. However, other estimators may have some desirable properties, such as ease of computation, robustness to underlying assumptions, which make them desirable. For example, in gamma G(α, λ) distribution, finding the maximum likelihood estimator of (α, λ) is difficult than the moment estimator. In such cases, efficiency of the maximum likelihood estimator becomes important in calibrating what we are giving up, if we use another estimator. We now briefly discuss the extension of Cramér-Huzurbazar theory to a multiparameter setup.

4.3 Cramér Family

221

Cramér Huzurbazar theory in a multiparameter setup: Suppose X is a random variable or a random vector with the probability law f (x, θ) which is indexed by a vector parameter θ = (θ1 , θ2 , . . . , θk ) ∈  ⊂ Rk . Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from the distribution of X . Suppose the probability law f (x, θ) satisfies the following conditions in a neighborhood Nρ (θ0 ) ⊂ , where θ0 is a true parameter value. C-1 The support S f is free from the parameter θ. C-2 There exists an open subset of a parameter space which contains the true parameter point θ0 . 2 C-3 The partial derivatives ∂θ∂ i log f (x, θ), i = 1, 2, . . . , k, ∂θ∂i ∂θ j log f (x, θ), ∂ log f (x, θ), i, j, l = 1, 2, . . . , k exist for i, j = 1, 2, . . . , k and ∂θi ∂θ j ∂θl almost all values of x ∈ S . f  C-4 The identity S f f (x, θ)d x = 1 can be differentiated with respect to θi s under   the integral sign at least twice. As a consequence, E ∂θ∂ i log f (X , θ) = 0, $ % i = 1, 2, . . . , k and the information matrix I (θ) = Ii j (θ) is given by 3



 ∂ ∂ Ii j (θ) = E log f (X , θ) log f (X , θ) ∂θi ∂θ j   ∂2 log f (X , θ) i, j = 1, 2, . . . , k. =E − ∂θi ∂θ j Further I (θ) is a positive definite matrix.    ∂3  log f (x, θ) C-5 There exist functions Mi jl (x) such that  ∂θi ∂θ  < Mi jl (x), where j ∂θl Mi jl (x) may depend on θ0 and ρ and E(Mi jl (X )) < ∞ for all i, j, l = 1, 2, . . . , k. Thus, the third order partial derivatives of log f (x, θ) are bounded by integrable functions. If the probability law f (x, θ) satisfies these Cramér regularity conditions in a neighborhood Nρ (θ0 ) ⊂ , then the corresponding family of distributions is a multiparameter Cramér family. It can be verified that a multiparameter exponential family is a subclass of a multiparameter Cramér family. As in Example 4.3.1, we can show that a Cauchy C(θ, σ) distribution with location parameter θ and scale parameter σ belongs to a two-parameter Cramér family. However, the Laplace distribution with probability density function f (x, θ, α) =

  |x − θ| 1 , x ∈ R, θ ∈ R, α > 0 exp − 2α α

does not belong to a two-parameter Cramér family, as the third condition gets violated. A uniform U (θ − α, θ + α) distribution is not a member of a two-parameter

222

4

CAN Estimators in Exponential and Cramér Families

Cramér family, as its support depends on the parameters. A negative binomial distribution with probability mass function,  Pθ [X = x] =

 x +k−1 k p (1 − p)x , x = 0, 1, . . . , 0 < p < 1, k ∈ I + x

does not belong to a two-parameter Cramér family as the parameter space in not open. If k ∈ (0, ∞), then it belongs to a two-parameter Cramér family. As in the case of the distributions belonging to a Cramér family with real indexing parameter, following four results are true in a multiparameter setup when the distribution satisfies the five regularity conditions stated above. We state these below, for more details, we refer to Kale and Muralidharan [6] and references cited therein. Cramér-Huzurbazar theorem in a k-parameter Cramér family: Suppose {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X having the probability law f (x, θ) and θ0 is a true parameter.

 Result 4.3.5

With probability approaching 1 as n → ∞, a system of likelihood equations given by ∂θ∂ i log L n (θ|X ) = 0, i = 1, 2, . . . , k admits a solution θˆ n (X ) which is consistent for θ0 .

 Result 4.3.6

Forlarge n , the distribution of θˆ n (X ) can be approximated by the normal distribution −1 Nk θ0 , I (θ0 )/n , that is,  L  √  n θˆ n (X ) − θ0 → Z 1 ∼ Nk 0, I −1 (θ0 ) .

 Result 4.3.7

With probability approaching 1 as n → ∞, there& is a relative maximum of ' the likeli∂2 ˆ hood function at θn (X ), that is, the matrix D = ∂θi ∂θ j log L n (θ|X )|θˆ (X ) of second n order partial derivatives evaluated at θˆ n (X ) is almost surely negative definite.

 Result 4.3.8

With probability approaching 1 as n → ∞, a consistent solution of the system of likelihood equations is unique.  Remark 4.3.8

If X ∼ N (μ, σ 2 ), then we have seen in Example 4.2.6 that it belongs to a two-parameter exponential family, hence it also belongs to a Cramér family. If X 1 , X 2 , . . . , X n is a random sample from normal N (μ, σ 2 ), then it is shown in Example 3.3.2, that the maximum likelihood estimator of (μ, σ 2 ) is CAN with

4.3 Cramér Family

223

approximate variance-covariance matrix I −1 (μ, σ 2 ). Similarly, it can be shown that the distribution of (X , Y ) with the probability mass function as   x −λ x y P[X = x, Y = y] = e λ p (1 − p)x−y /x!, y y = 0, 1, . . . , x; x = 0, 1, 2, . . . , where λ > 0 and 0 < p < 1, belongs to a two-parameter exponential family and hence it also belongs to a two-parameter Cramér family. In Example 3.3.4, it is shown that the maximum likelihood estimator of (λ, p) is CAN with approximate dispersion matrix I −1 (λ, p). As expected these results are consistent with the Cramér-Huzurbazar theorem in a multiparameter setup. We now discuss two examples in which the distributions belong to a multiparameter Cramér family.  Example 4.3.4

Suppose a random vector (X , Y ) follows a bivariate normal N2 (0, 0, σ12 , σ22 , ρ) distribution where ρ = 0 is a known correlation coefficient. The probability density function of (X , Y ) is given by   2 2 x 1 1 2ρx y y  f (x, y, σ12 , σ22 ) = exp − − + 2 , 2(1 − ρ2 ) σ12 σ1 σ2 σ2 2πσ1 σ2 1 − ρ2 (x, y) ∈ R2 , σ12 , σ22 > 0. Thus, the support of the distribution is free from the parameters, parameter space  is  = {(σ12 , σ22 ) |σ12 , σ22 > 0} and it is an open set. However, f (x, y, σ12 , σ22 ) cannot be expressed in a form required for a two-parameter family. Hence, the distribution does not belong to a two-parameter exponential family. We examine whether it belongs to a two-parameter Cramér family. We have noted that the first two conditions are satisfied. The partial derivatives up to order three, of log f (x, y, σ12 , σ22 ) are as follows. Suppose c is a constant free from parameters. Then log f (x, y, σ12 , σ22 ) = c − +

1 1 x2 log σ12 − log σ22 − 2 2 2(1 − ρ2 )σ12

ρx y y2 − (1 − ρ2 )(σ12 )1/2 (σ22 )1/2 2(1 − ρ2 )σ22

∂ 1 x2 2 2 log f (x, y, σ , σ ) = − + 1 2 ∂σ12 2σ12 2(1 − ρ2 )σ14 ρx y − 2 2(1 − ρ )(σ12 )3/2 (σ22 )1/2

224

4

CAN Estimators in Exponential and Cramér Families

∂ 1 y2 log f (x, y, σ12 , σ22 ) = − 2 + 2 ∂σ2 2σ2 2(1 − ρ2 )σ24 ρx y − 2 2(1 − ρ )(σ12 )1/2 (σ22 )3/2 ∂2 1 x2 2 2 log f (x, y, σ , σ ) = − 1 2 ∂(σ12 )2 2σ14 (1 − ρ2 )σ16 3ρx y + 2 4(1 − ρ )(σ12 )5/2 (σ22 )1/2 ∂2 1 y2 log f (x, y, σ12 , σ22 ) = − 2 4 2 ∂(σ2 ) 2σ2 (1 − ρ2 )σ26 3ρx y + 4(1 − ρ2 )(σ22 )5/2 (σ12 )1/2 ∂2 ρx y log f (x, y, σ12 , σ22 ) = 2 2 2 ∂(σ2 )∂(σ1 ) 4(1 − ρ )(σ22 )3/2 (σ12 )3/2 ∂3 1 3x 2 2 2 log f (x, y, σ , σ ) = − + 1 2 ∂(σ12 )3 σ16 (1 − ρ2 )σ18 15ρx y − 2 8(1 − ρ )(σ12 )7/2 (σ22 )1/2 ∂3 1 3y 2 2 2 log f (x, y, σ , σ ) = − + 1 2 ∂(σ22 )3 σ26 (1 − ρ2 )σ28 15ρx y − 2 8(1 − ρ )(σ12 )1/2 (σ22 )7/2 ∂3 ∂(σ22 )∂(σ12 )2 ∂3 ∂(σ12 )∂(σ22 )2

log f (x, y, σ12 , σ22 ) = − log f (x, y, σ12 , σ22 ) = −

3ρx y 8(1 − ρ2 )(σ12 )5/2 (σ22 )3/2 3ρx y 8(1 − ρ2 )(σ12 )3/2 (σ22 )5/2

.

Thus, partial derivatives of log f (x, y, σ12 , σ22 ) up to order three exist. We further examine whether the third order partial derivatives are bounded by integrable 2 − δ, σ 2 + δ) and functions. Observe that for δ > 0, σ12 ∈ (σ01 01 2 2 2 2 2 σ2 ∈ (σ02 − δ, σ02 + δ), where σ01 and σ02 are true parameter values, we have        ∂3      1 3x 2  2 2      + log f (x, y, σ1 , σ2 ) ≤    6 2 8  ∂(σ12 )3  (σ01 − δ) (1 − ρ )(σ01 − δ)      15ρx y  +  8(1 − ρ2 )(σ01 − δ)7 (σ02 − δ)  = M111 (x, y) say

4.3 Cramér Family

225

     2    3σ 1  1 + & E(M111 (X , Y )) ≤   (σ01 − δ)6   (1 − ρ2 )(σ01 − δ)8      15ρσ1 σ2  < ∞. +  2 7 8(1 − ρ )(σ01 − δ) (σ02 − δ)  ∂3 ∂(σ12 )3

Hence, the third order partial derivative

log f (x, y, σ12 , σ22 ) is bounded by

an integrable function. On similar lines, we can show that the remaining third order partial derivatives are bounded by integrable functions. We now find the information matrix I (σ12 , σ22 ) = [Ii, j (σ12 , σ22 )]. We have E(X 2 ) = σ12 , E(Y 2 ) = σ22 & E(X Y ) = ρσ1 σ2 . Hence,  I1,1 (σ12 , σ22 ) = E −  = E − =−

1 2σ14

∂2 ∂(σ12 )2

 log f (X , Y , σ12 , σ22 )

X2 3ρX Y + − 4 2σ1 (1 − ρ2 )σ16 4(1 − ρ2 )(σ12 )5/2 (σ22 )1/2 1

+

1 (1 − ρ2 )σ14



3ρ2 4(1 − ρ2 )σ14

=

2 − ρ2 4(1 − ρ2 )σ14



.

2 − ρ2

Similarly, I2,2 (σ12 , σ22 ) =

4(1 − ρ2 )σ24   ∂2 2 2 2 2 2 2 log f (X , Y , σ1 , σ2 ) Now, I1,2 (σ1 , σ2 ) = I2,1 (σ1 , σ2 ) = E − ∂(σ22 )∂(σ12 )   ρ2 ρX Y . = − = E − 4(1 − ρ2 )(σ22 )3/2 (σ12 )3/2 4(1 − ρ2 )σ12 σ22

Thus, the information matrix I (σ12 , σ22 ) is given by ⎛ ⎞ 2 2−ρ2 − 4(1−ρρ2 )σ2 σ2 2 )σ 4 4(1−ρ 1 1 2 ⎠ 2 I (σ12 , σ22 ) = ⎝ . 2−ρ2 − 4(1−ρρ2 )σ2 σ2 4(1−ρ2 )σ 4 1 2

2

It is a positive definite matrix as its first principle minor is positive and |I (σ12 , σ22 )| = 1/4(1 − ρ2 )σ14 σ24 > 0. Thus, all the Cramér regularity conditions are satisfied and hence the bivariate normal N2 (0, 0, σ12 , σ22 , ρ) distribution belongs to a two-parameter Cramér family. Hence, based on the sample of size n 2 ,σ 2 ) of ˆ 2n from the distribution of (X , Y ) , the maximum likelihood estimator (σˆ 1n 2 2 2  −1 (σ1 , σ2 ) is a CAN estimator with approximate dispersion matrix I (σ1 , σ22 )/n. Thus, √ L 2 2 n(σˆ 1n − σ12 , σˆ 2n − σ22 ) → Z ∼ N2 (0, I −1 (σ12 , σ22 )). The inverse of the information matrix I (σ12 , σ22 ) is given by I

−1

 (σ12 , σ22 )

=

 ρ2 σ12 σ22 (2 − ρ2 )σ14 . ρ2 σ12 σ22 (2 − ρ2 )σ24

226

4

CAN Estimators in Exponential and Cramér Families

To find the maximum likelihood estimators, the system of likelihood equations is as given below. n 

ρ

X i2

n 

X i Yi n i=1 i=1 + − =0 2σ12 2(1 − ρ2 )σ14 2(1 − ρ2 )(σ13 )(σ2 ) n n   Yi2 X i Yi ρ n i=1 i=1 − − 2 + = 0. 2σ2 2(1 − ρ2 )σ24 2(1 − ρ2 )(σ1 )(σ23 ) −

(4.3.5)

(4.3.6)

Multiplying Eq. (4.3.5) by 1/σ22 and subtracting from it 1/σ12 × Eq. (4.3.6), we have  n  n 1 X i2 /σ12 − Yi2 /σ22 = 0. 2(1 − ρ2 )σ1 σ2 i=1

Hence,

n

X i2 /σ12 =

i=1

n

 Yi2 /σ22

i=1

⇒ σ12 =

i=1



1 1 − ρ2

⇒ 1/σ2 =

n i=1

n 

n 

X i2

⎜ ⎜ i=1 ⎜ ⎝ n

−ρ

X i2 /σ12

n

1/2 Yi2

i=1

⎞1/2 ⎞ ⎜ i=1 ⎟ ⎟ ⎜ ⎟ ⎟ n ⎝ ⎠ ⎟ ⎠ Yi2 ⎛ n

X i Yi

i=1

n

X i2

i=1

from Eq. (4.3.5)  n  n 2 2 2 2 & σ2 = σ1 Yi / Xi i=1



=

1 1 − ρ2

i=1 n 

n 

Yi2

⎜ ⎜ i=1 ⎜ ⎝ n

−ρ

X i Yi

i=1

n

⎛ n ⎜ i=1 ⎜ n ⎝ i=1

⎞1/2 ⎞ 2 Yi ⎟ ⎟ X i2

⎟ ⎠

⎟ ⎟. ⎠

Thus, we have a solution of a system of likelihood equations. It can be shown that the matrix of second order partial derivatives at the solution is almost surely 2 ,σ 2 ) ˆ 2n negative definite matrix. Hence, the maximum likelihood estimator (σˆ 1n 2 2  of (σ1 , σ2 ) is given by ⎛ 2 σˆ 1n =

1 1 − ρ2

n 

n 

X i2

⎜ ⎜ i=1 ⎜ ⎝ n

−ρ

X i Yi

i=1

n

⎞1/2 ⎞ ⎜ i=1 ⎟ ⎟ ⎜ ⎟ ⎟ n ⎝ ⎠ ⎟ ⎠ Yi2 ⎛ n

i=1

X i2

4.3 Cramér Family

227

⎛ 2 & σˆ 2n =

1 1 − ρ2

n 

n 

Yi2

⎜ ⎜ i=1 ⎜ ⎝ n

−ρ

X i Yi

i=1

n

⎛ n ⎜ i=1 ⎜ n ⎝ i=1

⎞1/2 ⎞ ⎟ ⎟ ⎟ ⎟ . ⎠ ⎟ ⎠ 2

Yi2 Xi

  Remark 4.3.9

In Example 4.3.4, it is shown that when ρ = 0, then √ L 2 2 n(σˆ 1n − σ12 , σˆ 2n − σ22 ) → Z ∼ N2 (0, I −1 (σ12 , σ22 )). Hence, √

L

2 n(σˆ 1n − σ12 ) → Z 1 ∼ N (0, (2 − ρ2 )σ14 ) &

√ L 2 n(σˆ 2n − σ22 ) → Z 2 ∼ N (0, (2 − ρ2 )σ24 ).

(4.3.7)

If (X , Y ) has bivariate normal distribution, then X ∼ N (0, σ12) and Y ∼ N (0, σ22). 2 and σ 2 denote the maximum likelihood estimator of σ 2 and σ 2 ˜ 2n Suppose σ˜ 1n 1 2 respectively, based on the random sample of size n from the distributions of X and Y . Then as shown in Example 3.3.2, √

L

2 n(σ˜ 1n − σ12 ) → Z 1 ∼ N (0, 2σ14 ) &



L

2 n(σ˜ 2n − σ22 ) → Z 2 ∼ N (0, 2σ24 ). (4.3.8)

Apparently results in Eq. (4.3.7) and Eq. (4.3.8) seem to be inconsistent, as the approximate variances are different. However, it is to be noted that the estimators in Eq. (4.3.7) and in Eq. (4.3.8) are obtained under different models. In Eq. (4.3.7), the estimators are derived under bivariate normal model, while in Eq. (4.3.8), these are derived under univariate normal model. Estimators of parameters involved in the marginal distribution in a bivariate model will have a different behavior than the estimators of the same parameters of the same distribution, but treated 2 is the maximum likelihood estimator as a univariate model. In this example, σˆ 1n 2 2 is the maximum of σ1 of a marginal distribution in a bivariate model, while σ˜ 1n 2 involves data likelihood estimator of σ12 in a univariate model. Observe that σˆ 1n 2 on Y variables and σˆ 2n involves data on X variables also. On the other hand, n n 2 = 2 2 = 2 ˜ 2n σ˜ 1n i=1 X i /n and σ i=1 Yi /n. If ρ = 0, then X and Y are independent random variables, with X ∼ N (0, σ12 ) and Y ∼ N (0, σ22 ). In this case, the results based on bivariate and univariate models match as expected, in view of the fact that X and Y are uncorrelated and the bivariate probability density function is just the product of marginal probability density functions. One more important point to be noted is as follows: 2 = (2 − ρ2 )σ14 ≤ 2σ14 Approximate variance of σˆ 1n 2 = Approximate variance of σ˜ 1n .

228

4

CAN Estimators in Exponential and Cramér Families

In a bivariate model, information on σ12 is available not only via X variable but also from Y variable through the correlation coefficient ρ and hence approximate 2 is smaller than that of σ 2 . For the similar reason, ˜ 1n variance of σˆ 1n 2 Approximate variance of σˆ 2n = (2 − ρ2 )σ24 ≤ 2σ24 2 = Approximate variance of σ˜ 2n .

Such a feature is reflected in the information function also. The (1, 1)-th element from the information matrix in a bivariate normal model is larger than the information function from the univariate normal model where X ∼ N (0, σ12 ) as I1,1 (σ12 , σ22 ) − I (σ12 ) =

2 − ρ2 1 ρ2 − = > 0. 4(1 − ρ2 )σ14 2σ14 4(1 − ρ2 )σ14

Similarly, I2,2 (σ12 , σ22 ) − I (σ22 ) =

2 − ρ2 1 ρ2 − = > 0. 4(1 − ρ2 )σ24 2σ24 4(1 − ρ2 )σ24

In the following example of a bivariate Cauchy distribution, we observe similar features of a bivariate and corresponding univariate models. In the next example, we verify whether a bivariate Cauchy distribution (Kotz et al. [9]) belongs to a two-parameter Cramér family.  Example 4.3.5

Suppose a random vector (X , Y ) follows a bivariate Cauchy C2 (θ1 , θ2 ) distribution with probability density function given by f (x, y, θ1 , θ2 ) =

#−3/2 1 " (x, y) ∈ R2 , θ1 , θ2 ∈ R. 1+(x − θ1 )2 + (y − θ2 )2 2π

It has been shown that the Cauchy C(θ, 1) distribution with location parameter θ does not belong to a one-parameter exponential family. On similar lines, we can show that the probability law of bivariate Cauchy C2 (θ1 , θ2 ) distribution cannot be expressed in a form required for a two-parameter exponential family and hence it does not belong to a two-parameter exponential family. In Example 4.3.1, we have shown that Cauchy C(θ, 1) distribution belongs to a Cramér family. On similar lines, we examine whether C2 (θ1 , θ2 ) distribution belongs to a twoparameter Cramér family. Observe that the parameter space is R2 and it is an open set. The support of the distribution is also R2 and it does not depend on the parameters. We now examine whether log f (x, y, θ1 , θ2 ) has partial derivatives up to order three. We have 3 log{1 + (x − θ1 )2 + (y − θ2 )2 } 2 ∂ 3(x − θ1 ) log f (x, y, θ1 , θ2 ) = ∂θ1 {1 + (x − θ1 )2 + (y − θ2 )2 } log f (x, y, θ1 , θ2 ) = − log 2π −

4.3 Cramér Family

229

∂ 3(y − θ2 ) log f (x, y, θ1 , θ2 ) = ∂θ2 {1 + (x − θ1 )2 + (y − θ2 )2 } 2 ∂ −3{1 − (x − θ1 )2 + (y − θ2 )2 } log f (x, y, θ , θ ) = 1 2 {1 + (x − θ1 )2 + (y − θ2 )2 }2 ∂θ12 ∂2 −3{1 + (x − θ1 )2 − (y − θ2 )2 } log f (x, y, θ , θ ) = 1 2 {1 + (x − θ1 )2 + (y − θ2 )2 }2 ∂θ22 ∂2 6(x − θ1 )(y − θ2 ) log f (x, y, θ1 , θ2 ) = ∂θ2 ∂θ1 {1 + (x − θ1 )2 + (y − θ2 )2 }2 ∂3 −6(x − θ1 ){3 − (x − θ1 )2 + 3(y − θ2 )2 } log f (x, y, θ1 , θ2 ) = 3 {1 + (x − θ1 )2 + (y − θ2 )2 }3 ∂θ1 ∂3 −6(y − θ2 ){3 + 3(x − θ1 )2 − (y − θ2 )2 } log f (x, y, θ , θ ) = 1 2 {1 + (x − θ1 )2 + (y − θ2 )2 }3 ∂θ23 ∂3 −6(y − θ2 ){1 − 3(x − θ1 )2 + (y − θ2 )2 } log f (x, y, θ , θ ) = 1 2 {1 + (x − θ1 )2 + (y − θ2 )2 }3 ∂θ2 ∂θ12 ∂3 −6(x − θ1 ){1 + (x − θ1 )2 − 3(y − θ2 )2 } log f (x, y, θ1 , θ2 ) = . 2 {1 + (x − θ1 )2 + (y − θ2 )2 }3 ∂θ1 ∂θ2 Thus, partial derivatives of log f (x, y, θ1 , θ2 ) up to order three exist. We further examine whether the third order derivatives are bounded by integrable functions. Observe that    ∂3  6|x − θ1 |{3 + (x − θ1 )2 + 3(y − θ2 )2 }    3 log f (x, y, θ1 , θ2 ) ≤  ∂θ1  {1 + (x − θ1 )2 + (y − θ2 )2 }3 = M111 (x, y), say . Suppose u 1 = x − θ1 and u 2 = y − θ2 . Then using the fact that the integrand is an even function, we have 1 E(M111 (X , Y )) = 2π = = =

*



*



6|u 1 |{3 + u 21 + 3u 22 }

−∞ −∞ * ∞* ∞ u

{1 + u 21 + u 22 }9/2

du 1 du 2

2 2 1 {3 + u 1 + 3u 2 } du 1 du 2 2 2 {1 + u 1 + u 2 }9/2 0 0 +* * ∞ u {3 + u 2 /(1 + u 2 )} 1 12 ∞ 1 1 2 π 0 (1 + u 22 )7/2 0 {1 + u 21 /(1 + u 22 )}9/2 ! * * ∞ 1 3+t 6 ∞

12 π

π

0

with

(1 + u 22 )5/2 u 21 1 + u 22

=t

0

(1 + t)9/2

dt du 2

, du 1 du 2

230

4

=

6 π

* 0



CAN Estimators in Exponential and Cramér Families

1 [3B(1, 7/2) + B(2, 5/2)] du 2 (1 + u 22 )5/2

* 132 ∞ t −1/2 dt 35π 0 (1 + t)5/2 132 = B(1/2, 2) < ∞, 35π

=

with u 22 = t. Hence, the third order partial derivative

∂3 ∂θ13

log f (x, y, θ1 , θ2 ) is

bounded by an integrable function. On similar lines, we can show that the remaining third order partial derivatives of log f (x, y, θ1 , θ2 ) are bounded by integrable functions. We now find the information matrix I (θ1 , θ2 ) = [Ii, j (θ1 , θ2 )]. Observe that   ∂2 I1,2 (θ1 , θ2 ) = E − log f (x, y, θ1 , θ2 ) ∂θ2 ∂θ1   −6(X − θ1 )(Y − θ2 ) =E {1 + (X − θ1 )2 + (Y − θ2 )2 }2 * ∞* ∞ 6 u1u2 =− du 1 du 2 2π −∞ −∞ {1 + u 21 + u 22 }7/2 ⎡ ⎤ * ∞ * ∞ u2 u1 6 ⎢ ⎥ =− du 1 ⎦ du 2 ⎣ 2 2 7/2 u 2π −∞ {1 + u 2 } 1 −∞ {1 + }7/2 1+u 22 * ∞ u2 6 u1 =− ×0, 2 7/2 u2 2π −∞ {1 + u 2 } {1 + 1+u1 2 }7/2 2

being an odd function =0. 

I1,1 (θ1 , θ2 ) = = = =

=

 ∂2 E − 2 log f (x, y, θ1 , θ2 ) ∂θ1   3{1 − (X − θ1 )2 + (Y − θ2 )2 } E {1 + (X − θ1 )2 + (Y − θ2 )2 }2 * ∞* ∞ 1 − u 21 + u 22 3 du 1 du 2 2π −∞ −∞ {1 + u 21 + u 22 }7/2 +* , * ∞ 1 − u 21 + u 22 3 ∞ du 1 du 2 π −∞ 0 {1 + u 21 + u 22 }7/2 ⎡ ⎤ 2 * ∞ * ∞ 1 − u1 2 1 3 1+u 2 ⎢ ⎥ du 1 ⎦ du 2 ⎣ 2 5/2 u 21 7/2 π −∞ (1 + u 2 ) 0 {1 + 1+u 2 } 2

4.3 Cramér Family

231

=

3 2π

*



−∞

1 (1 + u 22 )2

* 0

∞ t −1/2 (1 − t)

{1 + t}7/2

3 2π *

*



−∞ ∞

du 2

u 21

with =

! dt

1 + u 22

=t

1 (1 + u 22 )2

! * ∞ t 1/2−1 t 3/2−1 dt − dt du 2 {1 + t}1/2+3 {1 + t}3/2+2 0 0 * ∞ 1 3 = [B(1/2, 3) − B(3/2, 2)] du 2 2π −∞ (1 + u 22 )2 ! * ∞ (1/2)(3) (3/2)(2) 1 3 du 2 = − 2π −∞ (1 + u 22 )2 (7/2) (7/2) * ∞ * ∞ 1 t 1/2−1 6 6 = du = dt 2 5π −∞ (1 + u 22 )2 5π 0 {1 + t}1/2+3/2 with u 22 = t 6 6 (1/2)(3/2) 3 = B(1/2, 3/2) = = . 5π 5π (2) 5 Similarly,



I2,2 (θ1 , θ2 ) = = = =

=

=

 ∂2 E − 2 log f (x, y, θ1 , θ2 ) ∂θ2   3{1 + (X − θ1 )2 − (Y − θ2 )2 } E {1 + (X − θ1 )2 + (Y − θ2 )2 }2 * ∞* ∞ 1 + u 21 − u 22 3 du 1 du 2 2π −∞ −∞ {1 + u 21 + u 22 }7/2 * * 3 ∞ ∞ 1 + u 21 − u 22 du 1 du 2 π −∞ 0 {1 + u 21 + u 22 }7/2 ⎡ ⎤ 2 * ∞ * ∞ 1 − u2 2 1 3 1+u 1 ⎢ ⎥ du 2 ⎦ du 1 ⎣ 2 u π −∞ (1 + u 21 )5/2 0 {1 + 1+u2 2 }7/2 1 ! * ∞ −1/2 * ∞ 1 t (1 − t) 3 dt du 1 2π −∞ (1 + u 21 )2 0 {1 + t}7/2 with

6 = 5π

*

∞ −∞

1 6 du 1 = 2 2 5π (1 + u 1 )

* 0



u 22 1 + u 21

=t

t 1/2−1 dt {1 + t}1/2+3/2

with u 21 = t

232

4

=

CAN Estimators in Exponential and Cramér Families

6 6 (1/2)(3/2) 3 B(1/2, 3/2) = = . 5π 5π (2) 5

Thus, the information matrix I (θ1 , θ2 ) is a diagonal matrix with diagonal elements 3/5 each. It is a positive definite matrix. It then follows that the bivariate Cauchy distribution satisfies all the Cramér regularity conditions, hence it belongs to a two-parameter Cramér family. By the Cramér-Huzurbazar theorem, for large n the maximum likelihood estimator (θˆ 1n , θˆ 2n ) of (θ1 , θ2 ) is a CAN estimator with the approximate dispersion matrix I −1 (θ1 , θ2 )/n =diag[5/3, 5/3]/n. The system of likelihood equations cannot be solved explicitly and we need to use the numerical methods, discussed in Sect. 4.4, to obtain the value of the maximum likelihood estimator corresponding to the given random sample.   Remark 4.3.10

As in the bivariate normal model in the Example 4.3.4, we note that the approximate variance 5/3n = 1.6667/n of θˆ 1n in a bivariate Cauchy model is smaller than the approximate variance 2/n of the maximum likelihood estimator of θ1 in a corresponding univariate Cauchy C(θ1 , 1) distribution based on the random sample from X . We have similar scenario for θˆ 2n . Similarly, I1,1 (θ1 , θ2 ) = 0.6 > 0.5 = I (θ1 ) & I 2,2 (θ1 , θ2 ) = 0.6 > 0.5 = I (θ2 ). We get such a relation in view of the dependence between X and Y which we cannot quantify as the correlation coefficient does not exist. Further, we cannot derive the explicit expressions for the maximum likelihood estimators, either for a bivariate Cauchy distribution or a univariate Cauchy distribution, so it is not visible whether data on Y is involved in the estimator of θ1 . However, the information on θ1 is also derived from the component Y as X and Y are associated.

In Sect. 4.5, we discuss how to draw a random sample from a bivariate Cauchy distribution and based on it, how to obtain the maximum likelihood estimator of (θ1 , θ2 ) . We also obtain the maximum likelihood estimator of θ1 , treating it as a location parameter of a marginal distribution of X using data generated under bivariate Cauchy model and note that the two estimates are different. Similar scenario is observed for θ2 . We obtain Spearman’s rank correlation coefficient as a measure of association between X and Y on the basis of generated sample.

4.4

Iterative Procedures

In Example 4.2.1, we have noted that for a Poisson Poi(θ) distribution, truncated at 0, likelihood equation or the moment equation to get the moment estimator based on a sufficient statistic, cannot be solved explicitly. A Cauchy distribution with location

4.4 Iterative Procedures

233

parameter θ and scale parameter 1 belongs to a Cramér family and we have to face the same problem to solve the likelihood equation to get the maximum likelihood estimator. In the above example of bivariate Cauchy distribution, we come across the same problem. In such situations, we adopt the numerical methods. We describe two such procedures, for a real parameter setup and for a vector parameter setup. Examples illustrating these procedures are discussed in Sect. 4.5, using R software. Iterative procedures in a real parameter setup: Suppose the distribution of X is indexed by a real parameter θ and L n (θ|X ) is a likelihood function of θ corresponding to a random sample from the distribution of X . Newton-Raphson procedure: This procedure has been derived from the Taylor’s series expansion of L n (θ|X ) up to second order. In this procedure, the consecutive iterative values are obtained by the following formula: θ(i+1) = θ(i) −

∂ ∂θ log L n (θ|X )|θ(i) , ∂2 log L n (θ|X )|θ(i) ∂θ2

where θ(i) denotes the i th iterative value. Method of scoring: This procedure is proposed by Fisher in 1925. The consecutive iterative values in it are obtained by the following formula: θ(i+1) = θ(i) +

∂ ∂θ

log L n (θ|X )|θ(i) n I (θ(i) )

,

where θ(i) denotes the i th iterative value. The method of scoring is essentially derived from the Newton-Raphson procedure by replacing the denominator of the second term by its expectation. Hence it is also known Fisher-Newton-Raphson procedure. ∂2 It is well justified by the WLLN as − n1 ∂θ 2 log L n (θ|X )|θ converges in probability to the information function I (θ). Iterative procedures in a multiparameter setup: Suppose the distribution of X is indexed by a vector parameter θ = (θ1 , θ2 , . . . , θk ) and L n (θ|X ) is a likelihood function of θ corresponding to a random sample from the distribution of X . Newton-Raphson procedure: In this procedure, the consecutive iterative values are obtained by the following formula:

θ(i+1)

!−1 ∂2 = θ(i) − log L n (θ|X ) × ∂θi ∂θ j |θ(i)   ∂ ∂ ∂ log L n , log L n , . . . , log L n ∂θ1 ∂θ2 ∂θk |θ

, (i)

234

4

CAN Estimators in Exponential and Cramér Families

where θ(i) denotes the i th iterative value. Method of scoring: In this procedure, the consecutive iterative values are obtained by the following formula:   ∂ 1 −1 ∂ ∂ log L n , log L n , . . . , log L n , θ(i+1) = θ(i) + I (θ)|θ(i) n ∂θ1 ∂θ2 ∂θk |θ (i)

where θ(i) denotes the i th iterative value. In both the cases and for both the methods, the iterative procedure terminates when the consecutive iterative values are approximately close to each other. It has been proved in Kale [10] and Kale [11] that if the initial iterative value θ(0) is taken as a value of any consistent estimator of θ, then under Cramér regularity conditions, the iterative procedure terminates with probability approaching 1 as n → ∞. The method of scoring is preferred to the Newton-Raphson procedure, if the information function is free from θ as there is considerable simplification in the numerical procedure. For example, for Cauchy C(θ, 1), I (θ) = 1/2 and it is better to use method of scoring than the Newton-Raphson procedure by taking value of the sample median as an initial iterative value.

4.5

Maximum Likelihood Estimation Using R

This section presents some examples illustrating the use of R software to find out the maximum likelihood estimator graphically, by numerical methods discussed in previous section and some built-in functions in R.  Example 4.5.1

In Example 4.2.1, we discussed the maximum likelihood estimation of the parameter θ on the basis of a random sample from a truncated Poisson distribution, truncated at 0. It is noted that the likelihood equation given by X n = θ/(1 − e−θ ) cannot be solved explicitly. We now discuss how to approximately decide the root from the graph of log-likelihood and the graph of the first derivative of loglikelihood. We then use Newton-Raphson procedure and method of scoring to solve the equation using R software. We have log L n (θ|X ) = −nθ − n log(1 − e−θ ) + log θ

n i=1

n 

Xi ∂ ne−θ i=1 + log L n (θ|X ) = −n − & ∂θ θ 1 − e−θ n  Xi 2 −θ ne ∂ i=1 log L n (θ|X ) = − . ∂θ2 θ2 (1 − e−θ )2

Xi −

n i=1

log X i !

4.5 Maximum Likelihood Estimation Using R

235

Further the information function is given by I (θ) = (1 − e−θ − θe−θ )/θ(1 − e−θ )2 . As a first step we generate a random sample of size n = 300 from the truncated Poisson distribution with θ = 3. Using graphs as well as both the iterative procedures, we find the maximum likelihood estimator of θ. n=300; e=exp(1); a=3 ## parameter of

Poisson distribution truncated at 0 y=1:100; p=eˆ(-a)*aˆy/((1-eˆ(-a))*factorial(y)); sum(p); set.seed(123) x=sample(y,n,replace=TRUE, prob=p) ## random sample # from truncated Poisson distribution x1=sum(x); x2=sum(log(factorial(x))); th=seq(.5,5,.005) logl= -n*th - n*log(1-eˆ(-th)) + log(th)*x1 - x2 par(mfrow=c(1,2)) plot(th,logl,"l",pch=20,xlab="Theta",ylab="Log-likelihood") b=which.max(logl); mle = th[b]; mle; abline(v=mle) dlogl=-n-n*eˆ(-th)/(1-eˆ(-th)) + x1/th; summary(dlogl) plot(th,dlogl,"l",pch=20,xlab="Theta", ylab="First Derivative of Log-likelihood") abline(h=0) p=th[which.min(abs(dlogl))]; p; abline(v=p) ## Newton-Raphson procedure dlogl=function(a) { t1=-n-n*eˆ(-a)/(1-eˆ(-a)) + x1/a return(t1) } d2logl=function(a) { t2=-n*eˆ(-a)/(1-eˆ(-a))ˆ2 - x1/aˆ2 return(t2) } thini=2.93 # initial value of th r=c();r[1]=thini;k=1;diff=1 while(diff > 10ˆ(-4)) { r[k+1]=r[k]-dlogl(r[k])/d2logl(r[k]) diff=abs(r[k+1]-r[k]) k=k+1 } thmle=r[k]; thmle ### Method of scoring inf=function(a) { t3=(1-eˆ(-a)-a*eˆ(-a))/(a*(1-eˆ(-a))ˆ2) return(t3) } s=c(); s[1]=thini; k=1; diff=1 while(diff > 10ˆ(-4)) { s[k+1]=s[k]+dlogl(s[k])/(n*inf(s[k])) diff=abs(s[k+1]-s[k]) k=k+1 } thmles=s[k]; thmles

236

4

CAN Estimators in Exponential and Cramér Families

800 600 400

−1200

0

200

−800 −1000

Log−likelihood

−700

First Derivative of Log−likelihood

−600

1000

From both the graphs, displayed in Fig. 4.1, the approximate value of the maximum likelihood estimator θˆ n is 2.93. By definition of the maximum likelihood ∂ log L n (θ|X ) = 0. However, it may not be exactly 0 estimator, it is a solution of ∂θ for the given realization. Hence we try to find that θ, for which it is minimum using the function p=th[which.min(abs(dlogl))]. By Newton-Raphson procedure, θˆ n = 2.9278 and by the method of scoring, θˆ n = 2.9278. In both these iterative procedures, we have taken the value of initial iterate as 2.93, as it is a solution of the likelihood equation obtained graphically. All the four procedures produce approximately the same value of θˆ n , note that it will change for each realization of the truncated Poisson distribution. One can find out the root of the likelihood equation directly in R using the uniroot function. We have to provide an interval such that at the lower and ∂ log L n (θ|X ) is of opposite sign. In the folat the upper limit of the interval, ∂θ lowing, we adopt this approach to find θˆ n and examine whether the large sample √ distribution of Tn = n I (θ)(θˆ n − θ) is standard normal.

1

2

3

4

5

Theta

Fig. 4.1 Truncated Poisson distribution: MLE

1

2

3

Theta

4

5

4.5 Maximum Likelihood Estimation Using R

237

e=exp(1); y=1:100; a=3; p=eˆ(-a)*aˆy/((1-eˆ(-a))*factorial(y)) sum(p); n=600; nsim=1500; x1 = x2 = c() for(m in 1:nsim) { set.seed(m) x=sample(y,n,replace=TRUE, prob=p) x1[m]=sum(x) x2[m]=mean(x) } dlogl=function(par) { term=-n-n*eˆ(-par)/(1-eˆ(-par)) + x1[i]/par return(term) } mle=c() for(i in 1:nsim) { mle[i]=uniroot(dlogl,c(x2[i]-1,5))$root } summary(mle); var(mle)*(nsim-1)/nsim v=a*(1-eˆ(-a))ˆ2/(n*(1-eˆ(-a)-a*eˆ(-a))); v; v1=sqrt(v) t=(mle-a)/v1; summary(t); shapiro.test(t) r=seq(-4,4,.1); y=dnorm(r) par(mfrow= c(2,2)) hist(t,freq=FALSE,main="Histogram", ylim=range(0,max(y)), xlab=expression(paste("T"[n])), col="light blue") lines(r,y,"o",pch=20,col="dark blue") boxplot(t,main="Box Plot", xlab=expression(paste("T"[n]))) qqnorm(t); qqline(t) plot(ecdf(t),main="Empirical Distribution Function", ylab=expression(paste("F"[n](t)))) lines(r,pnorm(r),col="red")

We have generated 1500 random samples, each of size 600, from the truncated Poisson distribution, truncated at 0, with θ = 3. The output of the function summary(mle) gives median 2.999 and mean to be 3 of 1500 values of the maximum likelihood estimator. Further, the variance comes out to be 0.0058, which is very close to the value 0.0056 of the approximate variance 1/n I (θ) of θˆ n √ with θ = 3. Approximate normality of the distribution of Tn = n I (θ)(θˆ n − θ) is supported by the p-value 0.3914 of Shapiro-Wilk test and the four graphs in Fig. 4.2. 

238

4

CAN Estimators in Exponential and Cramér Families

Box Plot

−3

−1

1

3

0.0 0.1 0.2 0.3 0.4

−1 0

1

2

3 Tn

Normal Q−Q Plot

Empirical Distribution Function

−1

Fn(t)

1

0.8

3

Tn

0.0

−3

Sample Quantiles

−3

0.4

Density

Histogram

−3

−1 0

1

2

3

−4

−2

0

2

4

x

Theoretical Quantiles

Fig. 4.2 Truncated Poisson distribution: approximate normality of normalized MLE

 Example 4.5.2

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normalN (θ, θ2 ) distribution, θ > 0. In Example 4.2.3, it is shown that θˆ n = (−m  + m 2 + 4m  )/2 is 1

1

2

CAN for θ having the approximate variance 1/n I (θ) = θ2 /3n. We verify these results by simulation. We first find the maximum likelihood estimate from the graph of log-likelihood function. Following is the R code for plotting the graph and locating the maximum.

4.5 Maximum Likelihood Estimation Using R

239

th = 1.25; n = 250; set.seed(14); x = rnorm(n,th,th) m1 = mean(x); m1 ;m2 = mean(xˆ2); m2; v = m2-m1ˆ2; v tn = (-m1+sqrt(m1ˆ2+4*m2))/2; tn loglik = function(a) { LL = 0 for(i in 1:n) { LL = LL + log(dnorm(x[i],a,a)) } return(LL) } theta = seq(0.1,4,0.02);length(theta) Ltheta = loglik(theta) plot(theta,Ltheta,"l",xlab="Theta",ylab="Log-likelihood", ylim=range(-2500,-100)) b = which.max(Ltheta); mle = theta[b]; mle abline(v=mle,col="blue")

−1000 −1500 −2500

−2000

Log−likelihood

−500

On the basis of a random sample of size 250 generated with θ = 1.25, we note that the maximum likelihood estimate is 1.22, both from the formula and from the graph of log-likelihood displayed in Fig. 4.3. Following R code verifies that √ the distribution of Tn = n I (θ)(θˆ n − θ) can be approximated by the standard normal distribution.

0

1

2 Theta

Fig. 4.3 Normal N (θ, θ2 ) distribution: log-likelihood

3

4

240

4

CAN Estimators in Exponential and Cramér Families

th=1.25; n=250; nsim=1500; m1 = m2 = c() for(m in 1:nsim) { set.seed(m) x=rnorm(n,th,th) m1[m]=mean(x) m2[m]=mean(xˆ2) } T1=(-m1+sqrt(m1ˆ2+4*m2))/2 summary(T1); v=var(T1)*(nsim-1)/nsim; v sigma=thˆ2/(3*n); sigma; s1=(sqrt(3*n)/th)*(T1-th) r=seq(-4,4,.08); length(r); y=dnorm(r) par(mfrow= c(2,2)) hist(s1,freq=FALSE,main="Histogram", ylim=range(0,max(y)), xlab=expression(paste("T"[n])), col="light blue") lines(r,y,"o",pch=20,col="dark blue") boxplot(s1,main="Box Plot",xlab=expression(paste("T"[n]))) qqnorm(s1); qqline(s1) plot(ecdf(s1),main="Empirical Distribution Function", ylab=expression(paste("F"[n](t)))) lines(r,pnorm(r),col="blue") shapiro.test(s1)

Summary statistic of Tn shows close agreement with the true parameter value θ = 1.25 as the sample median and the sample mean based on 1500 simulations come out to be 1.25. Further, the variance 0.002053 of the simulated values is close to the approximate variance θ2 /3n = 0.002083. Figure 4.4 and the p-value 0.8469 of Shapiro-Wilk test support the theoretical claim that the maximum likelihood estimator is CAN.   Example 4.5.3

Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from the logistic distribution with probability density function f (x, θ) =

exp{−(x − θ)} , x ∈ R, θ ∈ R . (1 + exp{−(x − θ)})2

The log-likelihood of θ corresponding to given random sample X is log L n (θ|X ) = −

n i=1

(X i − θ) − 2

n i=1

log(1 + exp{−(x − θ)}) .

4.5 Maximum Likelihood Estimation Using R

241

1 2 3 −1 −3 −3

−1 0

1

2

3 Tn

Normal Q−Q Plot

Empirical Distribution Function

0.4

−1

Fn(t)

0.8

1 2 3

Tn

0.0

−3

Sample Quantiles

Box Plot

0.0 0.1 0.2 0.3 0.4

Density

Histogram

−3

−1 0

1

2

−2

3

Theoretical Quantiles

0

2

4

x

Fig. 4.4 Normal N (θ, θ2 ) distribution: approximate normality of MLE

Further, the first and the second order partial derivatives of the log likelihood are given by exp{−(X i − θ)} ∂ log L n (θ|X ) = n − 2 ∂θ 1 + exp{−(X i − θ)} n

∂2 ∂θ2

log L n (θ|X ) = −2

i=1 n i=1

exp{−(X i − θ)} . (1 + exp{−(X i − θ)})2

n exp{−(X i −θ)} Thus, the likelihood equation is given by 2 i=1 1+exp{−(X i −θ)} − n = 0. We can find its solution either by the Newton-Raphson procedure or the method of scoring. Method of scoring is easier as the information function I (θ) = 1/3 is free from θ. The information function I (θ) can be easily computed from the second derivative of the log-likelihood function as given above and the beta function. As an initial iterative value, one may take sample median or the sample mean. The logistic distribution is symmetric around the location parameter θ, hence the population mean and population median both are θ, further population variance

242

4

CAN Estimators in Exponential and Cramér Families

is π 2 /3 (Kotz et al. [12], p. 117). By WLLN and by the CLT, the sample mean X n is CAN estimator of θ with approximate variance π 2 /3n = 3.2899/n. By Theorem 3.2.3, the sample median X [n/2]+1 is CAN estimator of θ with approximate variance 4/n, which is larger than the approximate variance of X n . Thus, sample mean is a better CAN estimator of θ. It can be shown that the logistic distribution belongs to a Cramér family and hence the maximum likelihood estimator θˆ n of θ is a BAN estimator of θ with approximate variance 3/n. Following is the R code to find the maximum likelihood estimate corresponding to the given data, by using uniroot function from R and to examine whether it is CAN for θ with approximate variance 3/n, that is, to examine the asymptotic normality of √ Tn = n/3(θˆ n − θ). e=exp(1); a=3; n=120; nsim=1000; x=matrix(nrow=n,ncol=nsim) for(j in 1:nsim) { set.seed(j) x[,j]=rlogis(n,a,1) } dlogl=function(par) { term2=0 for(i in 1:n) { term2=term2 -2*eˆ(-x[i,j]+par)/(1+eˆ(-x[i,j]+par)) } dlogl= n + term2 return(dlogl) } mle=c() for(j in 1:nsim) { mle[j]=uniroot(dlogl,c(10,.01))$root } summary(mle); var(mle)*(nsim-1)/nsim; v=3/n; v; v1=sqrt(v) t=(mle-a)/v1; summary(t); shapiro.test(t) r=seq(-4,4,.1); y=dnorm(r) par(mfrow= c(2,2)) hist(t,freq=FALSE,main="Histogram", ylim=range(0,max(y)), xlab=expression(paste("T"[n])), col="light blue") lines(r,y,"o",pch=20,col="dark blue") boxplot(t,main="Box Plot",xlab=expression(paste("T"[n]))) qqnorm(t); qqline(t) plot(ecdf(t),main="Empirical Distribution Function", ylab=expression(paste("F"[n](t)))) lines(r,pnorm(r),col="blue")

4.5 Maximum Likelihood Estimation Using R

243

From the output, the summary statistic of mle shows that from 1000 simulated values, the median is 3, mean is 2.997 with variance 0.02489 which is very close to 3/n = 0.025. The Shapiro-Wilk test procedure with p-value 0.5313 supports the claim that the large sample distribution of the maximum likelihood estimator of θ is normal with approximate variance 3/n. The four plots, which are not shown here, also support the claim. All these results are for the sample size n = 120 and for θ = 3.  In Example 3.3.9, it is shown that if (X , Y ) has a bivariate normal N2 (0, 0, 1, 1, ρ) distribution, ρ ∈ (−1, 1), then the sample correlation coefficient Rn is a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n and in Example 4.3.3, it is shown that the maximum likelihood estimator ρˆ n of ρ, which is a unique real root of a cubic equation, is a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n(1 + ρ2 ). In the following example, R code is given to find the maximum likelihood estimator of ρ and to verify that it is a CAN estimator.  Example 4.5.4

Suppose (X , Y ) ∼ N2 (0, 0, 1, 1, ρ), ρ ∈ (−1, 1). In Example 4.3.3, we have shown that the maximum likelihood estimator of ρ is a unique real root of the cubic equation given by ρ3 − ρ2 Vn − ρ(1 − Un ) − Vn = 0 where n n Un = (X i2 + Yi2 )/n & Vn = X i Yi /n. i=1

i=1

We generate a random sample from the distribution of (X , Y ) and use NewtonRaphson procedure to solve this equation. As an initial value in the iterative n X i Yi /n, which is a moment estimator procedure, we take the value of Vn = i=1 of ρ and is a consistent estimator of ρ. On the basis of multiple simulations, we examine whether ρˆ n and Rn are CAN for ρ. We also examine the normality of corresponding Fisher’s Z transformation. rho = 0.3;th=0.5*log((1+rho)/(1-rho)); th mu = c(0,0); sig=matrix(c(1,rho,rho,1),nrow=2) n = 270; nsim = 1200 library(mvtnorm) u = v = Z = T = s = R = mle = c() g=function(a) { term=aˆ3-aˆ2*v1-a*(1-u1)-v1 return(term) }

244

4

CAN Estimators in Exponential and Cramér Families

dg=function(a) { term=3*aˆ2-a*2*v1 + u1-1 return(term) } for(i in 1:nsim) { set.seed(i) x = rmvnorm(n,mu,sig) R[i] = cor(x)[1,2] s[i] = 0.5*(log((1+R[i])/(1-R[i]))) Z[i] = sqrt(n)*(s-th) T[i] = sqrt(n)*(R-rho)/(1-rhoˆ2) u[i] = sum((x[,1]ˆ2+x[,2]ˆ2))/n v[i] = sum((x[,1]*x[,2]))/n } m = 5; e = matrix(nrow=m,ncol=nsim) for(i in 1:nsim) { e[1,i] = v[i]; v1 = v[i]; u1 = u[i]; j = 1; diff = 1 while(diff>10ˆ(-4)) { e[j+1,i] = e[j,i]-g(e[j,i])/dg(e[j,i]) diff = abs(e[j+1,i]-e[j,i]); j = j+1 } mle[i] = e[j,i] } d = round(data.frame(mle,R),4); View(d); head(d); tail(d) summary(mle); summary(s); summary(R) vmle = (n-1)*var(mle)/n; avmle = (1-rhoˆ2)ˆ2/n*(1+rhoˆ2); vmle; avmle vZ = (n-1)*var(s)/n; avZ = 1/n;vZ;avZ vR = (n-1)*var(R)/n; avR = (1-rhoˆ2)ˆ2/n;vR;avR shapiro.test(mle); shapiro.test(R); shapiro.test(s)

We have generated 1200 random samples each of size n = 270 with ρ = 0.3. The function View(d) displays the values of ρˆ n and Rn for all the simulations. The first six values are reported in Table 4.1. From summary(mle), the median and the mean of 1200 maximum likelihood estimators are 0.3007 and 0.3001 respectively, very close to ρ = 0.3. The variance of these 1200 maximum likelihood estimators is 0.0027 which is close to the approximate variance of ρˆ n given by (1 − ρ2 )2 /n(1 + ρ2 ) = 0.0033. For comparison, we have also computed the Table 4.1 N2 (0, 0, 1, 1, ρ) Distribution: values of ρˆ n and Rn ρˆ n

0.2672

0.2684

0.2573

0.3389

0.3008

0.3096

Rn

0.2672

0.2788

0.2627

0.3204

0.3078

0.2925

4.5 Maximum Likelihood Estimation Using R

245

sample correlation coefficient Rn and corresponding Z transformation for each of 1200 random samples. From summary(s), we note that the median and the mean of 1200 Z values are 0.3111 and 0.3101 respectively and these are close to θ = (1/2) log(1 + ρ)/(1 − ρ) = 0.3095. From summary(R), we observe that the median and the mean of 1200 Rn values are 0.3014 and 0.2996 respectively, again very close to ρ = 0.3. The variance of these 1200 Z values is 0.0037, and it is the same as the approximate variance of Z which is 1/n = 0.0037. Similarly, the variance of these 1200 Rn values is 0.0030, and it is also close to the approximate variance of Z given by (1 − ρ2 )2 /n = 0.0031. The asymptotic normality of ρˆ n , Rn and Z is supported by the p-values of the Shapiro-Wilk test for normality. The p-values corresponding to ρˆ n , Rn and Z are 0.3548, 0.2355 and 0.726 respectively. It is to be noted that the p-value corresponding to Z is larger than the other two, supporting Remark 3.3.3 that convergence to normality of Z  is faster than that of Rn . In the following examples, we discuss how to find a maximum likelihood estimator for a vector parameter using the Newton-Raphson iterative procedure and the method of scoring.  Example 4.5.5

Suppose a random variable X follows a gamma distribution with scale parameter α and shape parameter λ. Its probability density function is given by f (x, α, λ) =

αλ −αx λ−1 x , x > 0, α > 0, λ > 0. e (λ)

It belongs to a two-parameter exponential family. In Example 2.7.3, we have verified that on the basis of a random sample X ≡ {X 1 , X 2 , . . . , X n } from the  distribution of X , (αˆ n , λˆ n ) = (m 1 /m 2 , m 2 1 /m 2 ) is a consistent estimator of  (α, λ) . Using these as initial estimates in the Newton-Raphson procedure, we now find the maximum likelihood estimator of (α, λ) . We also find it by finding the point at which log-likelihood attains the maximum. The log-likelihood of (α, λ) corresponding to the given random sample and the system of likelihood equations are given by log L n (α, λ|X ) = nλ log α − n log (λ) − α

n

X i + (λ − 1)

i=1

∂ nλ X i = 0 ⇔ α = λ/X n log L n (α, λ|X ) = − ∂α α n

i=1

∂ log X i = 0 . log L n (α, λ|X ) = n log α − ndigamma(λ) + ∂λ n

i=1

n i=1

log X i

246

4

CAN Estimators in Exponential and Cramér Families

The two likelihood equations lead to an equation in λ given by  g(λ) = n log

λ Xn

 − ndigamma(λ) +

Here digamma(λ) is defined as digamma(λ) = tion by Newton-Raphson procedure. Now, g  (λ) =

n

log X i = 0 .

i=1 d dλ

log (λ). We solve the equa-

n d2 log (λ) . − ntrigamma(λ), where trigamma(λ) = λ dλ2

Thus, the iterative formula by Newton-Raphson procedure is λ(i+1) = λ(i) − g(λ(i) )/g  (λ(i) ). As an initial iterate value we take λ(0) = m 2 1 /m 2 . Following R code gives the maximum likelihood estimator of (α, λ) . alpha=3; lambda=4; n=250; set.seed(120) z=rgamma(n,shape = lambda,scale = 1/alpha ); summary(z) v=(n-1)*var(z)/n;v; mu=lambda/alpha;mu si=lambda/alphaˆ2;si m1=mean(z); m2=mean(zˆ2); m3=m2-m1ˆ2; alest=m1/m3; laest=m1ˆ2/m3; alest; laest # log-likelihood function loglike=function(a,b) { n*b*log(a) - n*log(gamma(b))-a*sum(z) + (b-1)*sum(log(z)) } a=seq(2,4,by=0.01); length(a); b=seq(3,5,by=0.01); length(b) L=matrix(nrow=length(a),ncol=length(b)) for(i in 1:length(a)) for(j in 1:length(b)) { L[i,j]=loglike(a[i],b[j]) } index=max(L);index indices=which(L==max(L),arr.ind=TRUE);indices alphamle=a[indices[1]];alphamle; lambdamle=b[indices[2]];lambdamle ### Newton-Raphson procedure g=function(b) { term= n*log(b) - n*log(m1)-n*digamma(b)+sum(log(z)) return(term) } dg=function(b) { term= n/b - n*trigamma(b) return(term) }

4.5 Maximum Likelihood Estimation Using R

247

u=c(); u[1]=laest;i=1;diff=1 while(diff>10ˆ(-4)) { u[i+1]=u[i]- g(u[i])/dg(u[i]) diff=abs(u[1+i]-u[i]) i=i+1 } mlelambda=u[i];mlealpha=mlelambda/m1;mlealpha;mlelambda

Summary statistics show that the sample mean 1.3555 is close to the population mean 1.3333 and the sample variance 0.4711 is close to population variance 0.4444. The moment estimator of α is 2.8775 and that of λ is 3.9004. The loglikelihood attains the maximum when α is 2.95 and λ is 4.00. By NewtonRaphson iterative procedure the maximum likelihood estimators of α and λ are 2.9515 and 4.0007 respectively.  In the above example, a gamma distribution involves two parameters, but the system of likelihood equations can be reduced to a single equation in λ by expressing α in terms of λ from the first equation. Thus, iterative procedures for a real parameter can be used to find the maximum likelihood estimator of λ and then of α. In the next example, we illustrate the application of both Newton-Raphson procedure and the method of scoring in the multiparameter setup.  Example 4.5.6

Suppose a random variable X follows a Cauchy C(θ, λ) distribution with location parameter θ and shape parameter λ. Its probability density function is given by f (x, θ, λ) =

1 λ , x ∈ R, θ ∈ R, λ > 0 . 2 π λ + (x − θ)2

Our aim is to find the maximum likelihood estimator (θ, λ) on the basis of a random sample X ≡ {X 1 , X 2 , . . . , X n } from the distribution of X . The loglikelihood function of (θ, λ) is given by log L n (θ, λ|X ) = n log λ − n log π −

n

 log λ2 + (X i − θ)2 .

i=1

Hence, the system of likelihood equations is given by Xi − θ ∂ =0 log L n (θ, λ|X ) = 2 2 ∂θ λ + (X i − θ)2 n

i=1

∂ 1 n =0. log L n (θ, λ|X ) = − 2λ ∂λ λ λ2 + (X i − θ)2 n

i=1

248

4

CAN Estimators in Exponential and Cramér Families

Using following R code, we solve these by (i) maximizing the log-likelihood function, (ii) by Newton-Raphson procedure and (iii) by the method of scoring. For Newton-Raphson procedure, we require the second order partial derivatives. These are as follows: (X i − θ)2 − λ2 ∂2 log L (θ, λ|X ) = 2 n ∂θ2 (λ2 + (X i − θ)2 )2 n

i=1

∂2 Xi − θ log L n (θ, λ|X ) = −4λ 2 ∂λ∂θ (λ + (X i − θ)2 )2 n

i=1

∂2 ∂λ2

1 n − 2 2 2 λ λ + (X i − θ)2 n

log L n (θ, λ|X ) = −

i=1

+ 4λ2

n i=1

1 . (λ2 + (X i − θ)2 )2

For the method of scoring, we first find the information matrix I (θ, λ) = [Ii, j (θ, λ)], i, j = 1, 2. Observe that     ∂2 4λ(X − θ) I1,2 (θ, λ) = E − log f (X , θ, λ) = E ∂λ∂θ (λ2 + (X − θ)2 )2 * ∞ 4 u x −θ = du, with =u 2 3 πλ −∞ (1 + u ) λ * ∞ u du exists and integrand is an odd function . = 0 , as 2 3 −∞ (1 + u ) Now using beta function, we have    ∂2 ((X − θ)2 − λ2 ) E − 2 log f (X , θ, λ) = −2E ∂θ (λ2 + (X − θ)2 )2 * ∞ 2 2 −1 + u x −θ − 2 du, where =u 2 3 πλ −∞ (1 + u ) λ * ∞ * ∞ 1 u2 2 2 du − du 2 2 3 2 πλ −∞ (1 + u ) πλ −∞ (1 + u 2 )3 * ∞ * ∞ 4 1 u2 4 du − du , πλ2 0 (1 + u 2 )3 πλ2 0 (1 + u 2 )3 integrand being even function * ∞ 1 −1 * ∞ 3 −1 t2 t2 2 2 dt − dt , with u 2 = t πλ2 0 (1 + t)3 πλ2 0 (1 + t)3 1 2 [B(1/2, 5/2) − B(3/2, 3/2)] = 2 . 2 πλ 2λ 

I1,1 (θ, λ) = = = =

= =

4.5 Maximum Likelihood Estimation Using R

249

Proceeding with similar substitutions as in I1,1 (θ, λ), we get 

 ∂2 I2,2 (θ, λ) = E − 2 log f (X , θ, λ) ∂λ   1 1 1 2 +2 2 =E − 4λ λ2 (λ + (X − θ)2 ) (λ2 + (X − θ)2 )2 1 . = 2λ2 Thus, the information matrix is a diagonal matrix with each diagonal element 1/2λ2 . Following R code gives the maximum likelihood estimates of θ and λ using the three approaches. th=3; lambda=2; n=200; set.seed(50); z=rcauchy(n,location=th,scale=lambda ) summary(z); m1=median(z); m2=quantile(z,.75)-m1; m2 thest=m1; thest; laest=m2; laest #### log-likelihood function loglike=function(a,b) { n*log(b) - n*log(pi)-sum(log(bˆ2+(z-a)ˆ2)) } a=seq(2,4,by=0.01); b=seq(1,3,by=0.01); length(a); length(b) L=matrix(nrow=length(a),ncol=length(b)) for(i in 1:length(a)) for(j in 1:length(b)) { L[i,j]=loglike(a[i],b[j]) } max=max(L); max; indices=which(L==max(L),arr.ind=TRUE);indices thmle=a[indices[1]];thmle; lambdamle=b[indices[2]];lambdamle ### Newton-Raphson procedure dloglth=function(a,b) { term=0 for(i in 1:n) { term= term + (z[i]-a)/(bˆ2 + (z[i]-a)ˆ2) } term1=2*term return(term1) } dloglla=function(a,b)

250

4

CAN Estimators in Exponential and Cramér Families

{ term=0 for(i in 1:n) { term= term + 1/(bˆ2 + (z[i]-a)ˆ2) } term2= n/b -2*b*term return(term2) } L11=function(a,b) { term=0 for(i in 1:n) { term= term + ((z[i]-a)ˆ2 - bˆ2)/((bˆ2 + (z[i]-a)ˆ2)ˆ2) } term1=2*term return(term1) } L12=function(a,b) { term=0 for(i in 1:n) { term= term + (z[i]-a)/((bˆ2 + (z[i]-a)ˆ2)ˆ2) } term2=-4*b*term return(term2) } L22=function(a,b) { term=0 for(i in 1:n) { term= term - 2/(bˆ2 + (z[i]-a)ˆ2) + (4*bˆ2)/((bˆ2 + (z[i]-a)ˆ2)ˆ2) } term3=-n/bˆ2 +term return(term3) } L=function(a,b) { f=matrix(c(L11(a,b),L12(a,b),L12(a,b),L22(a,b)), byrow=TRUE,ncol=2) return(f) } v=function(a,b) { f=matrix(c(dloglth(a,b), dloglla(a,b)),byrow=TRUE,ncol=2) return(f) }

4.5 Maximum Likelihood Estimation Using R

251

m=5; EstMat=matrix(nrow=m,ncol=2); EstMat[1,]=c(thest,laest); diff=2; i=1 while(diff>0.00001) { EstMat[i+1,]=EstMat[i,] - v(EstMat[i,1],EstMat[i,2])%*%solve(L(EstMat[i,1],EstMat[i,2])) diff=sum((EstMat[i+1,]-EstMat[i,])ˆ2) i=i+1 } EstMat #### Method of scoring I=matrix(nrow=2,ncol=2) I=function(a,b) { f=matrix(c(1/(2*bˆ2),0,0,1/(2*bˆ2)), byrow=TRUE,ncol=2) return(f) } Mat=matrix(nrow=m,ncol=2); Mat[1,]=c(thest,laest); diff=2; i=1 while(diff>0.00001) { Mat[i+1,]=Mat[i,] + (1/n)* v(Mat[i,1],Mat[i,2]) %*%solve(I(Mat[i,1],Mat[i,2])) diff=sum((Mat[i+1,]-Mat[i,])ˆ2) i=i+1 } Mat

A random sample of size 200 is generated from the Cauchy distribution with θ = 3 and λ = 2. By maximizing the log-likelihood, the maximum likelihood estimate of θ is 3.15 and that of λ is 2.12. Using initial estimates as θ0 = 3.14, which is the sample median, and λ0 = 2.08, which is the difference between the third sample quartile and the sample median, both the iterative procedures produce the maximum likelihood estimate of θ as 3.15 and that of λ as 2.12. It can be verified that the distribution belongs to a two-parameter Cramér family and hence the maximum likelihood estimator is CAN with approximate dispersion  matrix I −1 (θ, λ)/n.  Remark 4.5.1

In the above example, we have used three approaches to find the maximum likelihood estimators, but we note that, once we have an information matrix, the method of scoring is better than the Newton-Raphson method. In the next example of a bivariate Cauchy distribution, the information matrix is free from the parameters also. Hence, it is better to use the method of scoring for finding the maximum likelihood estimates.

252

4

CAN Estimators in Exponential and Cramér Families

In the following example, we verify the results established in Example 4.3.5. There is no built-in function to generate a random sample from a bivariate Cauchy distribution. Hence, we first discuss how to draw a random sample from a bivariate Cauchy distribution. If a random vector (X , Y ) follows a bivariate Cauchy C2 (θ1 , θ2 ) distribution, then the marginal probability density function of Y is derived as follows: *∞ f (y, θ2 ) = −∞

#−3/2 1 " dx 1 + (x − θ1 )2 + (y − θ2 )2 2π

1 1 = 2π (1 + (y − θ2 )2 )3/2 =

=

=

1 1 π (1 + (y − θ2 )2 )3/2 1 1 π 1 + (y − θ2 )2 1 1 π 1 + (y − θ2 )2

*∞ 0

*∞ 0

*∞ −∞

*∞ 0

1 dx (1 + (x − θ1 )2 /(1 + (y − θ2 )2 ))3/2

1 du u = (x − θ1 ) (1 + u 2 /(1 + (y − θ2 )2 ))3/2

t −1/2 dt with u 2 /(1 + (y − θ2 )2 ) = t 2(1 + t)3/2 t 1/2−1 dt B(1/2, 1)(1 + t)1/2+1

1 1 = , y ∈ R. π 1 + (y − θ2 )2 Thus, the marginal distribution of Y is Cauchy with location parameter θ2 . On similar lines, it follows that the marginal distribution of X is also Cauchy with location parameter θ1 . Hence, the conditional distribution of X given Y = y has the probability density function as f (x, θ1 , θ2 |y) =

1 + (y − θ2 )2 x ∈ R, θ1 , θ2 ∈ R. 2{1 + (x − θ1 )2 + (y − θ2 )2 }3/2

To generate a random sample from a bivariate Cauchy distribution, we first generate a random observation from Y and corresponding to realized value y, generate x from the conditional probability density function. Hence, we find the distribution function of a conditional distribution of X given Y = y as follows. Observe that for each fixed y, the conditional distribution of X given Y = y is symmetric around θ1 . Suppose the conditional distribution function FX |Y =y (x) of X given Y = y is denoted by Fy (x). With 1 + (y − θ2 )2 = a we have 1 Fy (x) = 2

*x −∞

(1 + (y − θ2 )2 ) du (1 + (u − θ1 )2 + (y − θ2 )2 )3/2

4.5 Maximum Likelihood Estimation Using R

1 = √ 2 a

*x −∞

253

1 du. (1 + (u − θ1 )2 /a)3/2

Suppose x = θ1 . Then 1 Fy (θ1 ) = √ 2 a 1 = √ 2 a 1 = √ 2 a

*θ1 −∞

*0 −∞ *∞

0

=

1 2

*∞ 0

1 du (1 + (u − θ1 )2 /a)3/2 1 dw with (u − θ1 ) = w (1 + w2 /a)3/2 1 1 dw = 2 3/2 (1 + w /a) 2

*∞ 0

t −1/2 dt with w2 /a = t 2(1 + t)3/2

t 1/2−1

1 1 du = . 1/2+1 B(1/2, 1) (1 + t) 2

Thus, θ1 is the median of the conditional distribution of X given Y = y and the distribution is symmetric around θ1 . Suppose x > θ1 and ((x − θ1 )2 /a)/(1 + (x − θ1 )2 /a) = b(x), say. Then 1 Fy (x) = √ 2 a

*x −∞

1 1 = + √ 2 2 a

1 du (1 + (u − θ1 )2 /a)3/2 *x θ1

x−θ * 1

1 1 = + √ 2 2 a

0

=

1 1 + 2 2 1 1 + 2 2

b(x) *

0

=

1 1 + 2 2

1 du (1 + w2 /a)3/2

2 (x−θ * 1 ) /a

0

=

1 du (1 + (u − θ1 )2 /a)3/2

b(x) *

0

t −1/2 dt 2(1 + t)3/2

with (u − θ1 ) = w

by w2 /a = t

1 (v/(1 − v))−1/2 (1 − v)3/2 dv 2 (1 − v)2

by t = v/(1 − v)

1 1 1 v 1/2−1 (1 − v)1−1 dv = + G(b(x)), B(1/2, 1) 2 2

254

4

CAN Estimators in Exponential and Cramér Families

where G is a distribution function of a beta distribution of first kind with shape 1 parameter 1/2 and shape 2 parameter 1. By symmetry, for x < θ1  Fy (x) = 1 −

 1 1 1 1 + G(b(x)) = − G(b(x)). 2 2 2 2

Thus,  Fy (x) =

1/2 − (1/2)G(b(x)), if x < θ1 1/2 + (1/2)G(b(x)) if x ≥ θ1 .

By the probability integral transformation, to obtain a random sample from the conditional distribution of X given Y = y, we solve Fy (x) = u, where u is a random observation from a U (0, 1) distribution. Suppose qbeta(1 − 2u, 1/2, 1) = q, say. If u < 1/2 then 1 1 − G(b(x)) = u ⇒ b(x) = qbeta(1 − 2u, 1/2, 1) = q 2 2  ⇒ (x − θ1 )2 = aq/(1 − q) ⇒ x = θ1 − aq/(1 − q), we take negative root as u < 1/2 implies x < θ1 . Suppose qbeta(2u − 1, 1/2, 1) = p, say. If u ≥ 1/2 then 1 1 + G(b(x)) = u ⇒ b(x) = qbeta(2u − 1, 1/2, 1) = p 2 2  ⇒ (x − θ1 )2 = ap/(1 − p) ⇒ x = θ1 + ap/(1 − p), we take positive root as u ≥ 1/2 implies x ≥ θ1 . Thus, we adopt the following stepwise procedure to generate a random sample from a bivariate Cauchy distribution. 1. 2. 3. 4.

Generate y from C(θ2 , 1) and hence find a = 1 + (y − θ2 )2 . Generate u from U (0, 1). Depending on the√value of u find d = qbeta((·), √ 1/2, 1). Find x = θ1 − ad/(1 − d) or x = θ1 + ad/(1 − d) corresponding to u < 1/2 and u ≥ 1/2 respectively.

 Example 4.5.7

Suppose a random vector (X , Y ) follows a bivariate Cauchy C2 (θ1 , θ2 ) distribution with probability density function given by f (x, y, θ1 , θ2 ) =

#−3/2 1 " (x, y) ∈ R2 , 1 + (x − θ1 )2 + (y − θ2 )2 2π θ1 , θ2 ∈ R.

4.5 Maximum Likelihood Estimation Using R

255

In Example 4.3.5, we have obtained the information matrix of the distribution and it is free from the parameters. Hence, we use the method of scoring to obtain the maximum likelihood estimator of (θ1 , θ2 ) . We also obtain the maximum likelihood estimator of θ1 , which is a location parameter of the marginal distribution of X , but use data generated under the bivariate model. Similarly, we obtain the maximum likelihood estimator of θ2 , which is a location parameter of the marginal distribution of Y , but use data generated under the bivariate model. We compare these estimates with the estimates in the bivariate model. Following is a R code for these computations. n=300; th1=1; th2=2; x=b=c(); set.seed(40) y=rcauchy(n,location=th2,scale=1); a= 1+(y-th2)ˆ2 set.seed(12); u=runif(n,0,1) for(i in 1:n) { if(u[i]10ˆ(-4)) { Mat[k+1,]=Mat[k,] + (1/n)* v(Mat[k,1],Mat[k,2])%*%solve(I) diff=sqrt(sum(Mat[k+1,]-Mat[k,])ˆ2) k=k+1 } mle=Mat[k,] ;mle ### Marginal of X dth1=function(a) { term=0 for(i in 1:n) { term=term + (x[i]-a)/(1 + (x[i]-a)ˆ2) term1=2*term return(term1) } s=c(); s[1]=th1est; k=1; diff=1 while(diff > 10ˆ(-4)) { s[k+1]=s[k] + (2/n)* dth1(s[k]) diff=abs(s[k+1]-s[k]) k=k+1 } mleth1=s[k]; mleth1 ### Marginal of Y dth2=function(a) { term=0 for(i in 1:n) { term=term + (y[i]-a)/(1 + (y[i]-a)ˆ2) }

4.5 Maximum Likelihood Estimation Using R

257

term1=2*term return(term1) } u=c(); u[1]=th2est; k=1; diff=1 while(diff > 10ˆ(-4)) { u[k+1]=u[k] + (2/n)* dth2(u[k]) diff=abs(u[k+1]-u[k]) k=k+1 } mleth2=u[k]; mleth2

On the basis of a generated sample, we have obtained Spearman’s rank correlation coefficient and it is −0.0355. It indicates the association between X and Y . The estimate of the information matrix on the basis of generated data is given by

Iˆn (θ1 , θ2 ) =



 0.6417 0.0263 . 0.0263 0.6584

The (1, 1)-th element of Iˆn (θ1 , θ2 ) is obtained as the mean of the values of second derivative of the logarithm of joint density function, multiplied by −1. The other elements are obtained on similar lines. The diagonal elements are close to 0.6, but off-diagonal elements are not close to 0. If we increase the sample size, then these will approach to 0. Using the same method, estimates of expected values of score functions are obtained and these are −0.0039 and −0.0085, which are close to 0 as expected. In the method of scoring, we take the initial estimate of θ1 as 0.9863, which is the sample median of X and the initial estimate of θ2 as 1.9250, which is the sample median of Y . The maximum likelihood estimate of (θ1 , θ2 ) comes out to be (0.9934, 1.9868) , which is close to the true parameter value (1, 2) of (θ1 , θ2 ) . The maximum likelihood estimate of θ1 , treating it as a location parameter of the marginal distribution of X , but using the same data, is 0.9969. It is different from 0.9934, as expected. Similarly, the maximum likelihood estimate of θ2 , treating it as a location parameter of the marginal distribution of Y , but using the same data, is 1.9679. It is also different from 1.9868. In view of the association between X and Y , the maximum likelihood estimates from the bivariate model and the corresponding univariate models are different, when these are based on the bivariate data.  In the next example, we use the same model used in Example 4.5.7 and generate multiple random samples from a bivariate Cauchy distribution to obtain the estimate of approximate dispersion matrix of (θˆ 1n , θˆ 2n ) . We also obtain the estimates of approximate variances of the estimators of parameters of the marginal distributions and compare with those of joint distribution.

258

4

CAN Estimators in Exponential and Cramér Families

 Example 4.5.8

Suppose a random vector (X , Y ) follows a bivariate Cauchy C2 (θ1 , θ2 ) distribution as specified in Example 4.5.7. Following is a R code to generate multiple random samples from the bivariate Cauchy distribution and to obtain the estimate of approximate dispersion matrix of (θˆ 1n , θˆ 2n ) and approximate variances of the estimators of parameters of the marginal distributions. n=300; nsim=1500; th1=1; th2=2 y=u=x=b=a=matrix(nrow=n,ncol=nsim);r=c() for(j in 1:nsim) { set.seed(j) y[,j]=rcauchy(n,location=th2,scale=1) u[,j]=runif(n,0,1) } for(j in 1:nsim) { for(i in 1:n) { a[i,j]=1+(y[i,j]-th2)ˆ2 if(u[i,j]10ˆ(-4)) { Mat[k+1,]=Mat[k,] + (1/n)* v(Mat[k,1],Mat[k,2])%*%solve(I) diff=sqrt(sum(Mat[k+1,]-Mat[k,])ˆ2) k=k+1 } mle[j,]=Mat[k,] } summary(mle); D1=round(cov(mle),5); D2=round(solve(I)/n,4); D1;D2 ### Marginal dth=function(a) { term=0 for(i in 1:n) { term=term + (samp[i]-a)/(1 + (samp[i]-a)ˆ2) } term1=2*term return(term1) } marmle=matrix(nrow=nsim,ncol=2) for(j in 1:nsim) { m1[j]=median(x[,j]);m2[j]=median(y[,j]);th1est=m1[j];th2est=m2[j] samp=x[,j]; s=c();s[1]=th1est; k=1; diff=1 while(diff > 10ˆ(-4)) { s[k+1]=s[k] + (2/n)* dth(s[k]) diff=abs(s[k+1]-s[k]) k=k+1 } marmle[j,1]=s[k] samp=y[,j]; s=c();s[1]=th2est; k=1; diff=1 while(diff > 10ˆ(-4)) { s[k+1]=s[k] + (2/n)* dth(s[k]) diff=abs(s[k+1]-s[k]) k=k+1 }

260

4

CAN Estimators in Exponential and Cramér Families

marmle[j,2]=s[k] } summary(marmle); apply(marmle,2,var); v=2/n; v ### Scatter plots par(mfrow=c(1,2)) plot(mle[,1],marmle[,1],xlab="MLE of Th1: Joint Distribution", ylab="MLE of Th1: Marginal Distribution",col="blue") abline(0,1,col="maroon") plot(mle[,2],marmle[,2],xlab="MLE of Th2: Joint Distribution", ylab="MLE of Th2: Marginal Distribution",col="blue") abline(0,1,col="maroon") ### Density plots par(mfrow=c(1,2)) plot(density(mle[,1]),main="MLE of Theta_1") lines(density(marmle[,1]),col=2,lty=2) legend("bottomleft",legend=c("Joint","Marginal"), col=1:2,lty=1:2,bty="n") plot(density(mle[,2]),main="MLE of Theta_2") lines(density(marmle[,2]),col=2,lty=2) legend("bottomleft",legend=c("Joint","Marginal"), col=1:2,lty=1:2,bty="n")

On the basis of 1500 random samples, each of size n = 300, the mean and variance of Spearman’s rank correlation coefficients is −0.0032 and 0.0049 respectively. It indicates the association between X and Y , which is rather weak. In the method of scoring, we take the initial estimate of θ1 and of θ2 , as the sample medians of X and Y respectively. The summary of the maximum likelihood estimates of (θ1 , θ2 ) gives mean to be (1.0021, 2.000) and median to be (1.0035, 1.999) . Both are close to the true parameter value (1, 2) of (θ1 , θ2 ) . The estimate of approximate dispersion matrix D of (θˆ 1n , θˆ 2n ) and I −1 (θ1 , θ2 )/n are as follows:  D=

0.00549 −0.00008 −0.00008 0.00556

 &

I

−1

 (θ1 , θ2 )/n =

 0.0056 0 . 0 0.0056

We note that the two are very close to each other. We also obtained the maximum likelihood estimates of (θ1 , θ2 ) , treating θ1 , θ2 as the location parameters of the marginal distributions, but using the data generated under joint distribution. The summary of these gives mean to be (1.0007, 1.998) and median to be (1.0004, 1.998) , slightly different than those obtained for the joint distribution. The estimates of approximate variances of θ˜ 1n and θ˜ 2n , which are the maximum likelihood estimates of θ1 and θ2 respectively, treating these as the location parameters of the marginal distributions, are 0.0064 and 0.0069 respectively. These are close to 2/n = 0.0067 but are different from those obtained under joint distribution. Under joint distribution, the approximate variance of θˆ 1n is 0.00549 and it

261

2.2 2.1 2.0 1.9 1.7

1.8

MLE of Th2: Marginal Distribution

1.1 1.0 0.9 0.8

MLE of Th1: Marginal Distribution

1.2

2.3

4.5 Maximum Likelihood Estimation Using R

0.8

1.0

1.2

MLE of Th1: Joint Distribution

1.7

1.9

2.1

2.3

MLE of Th2: Joint Distribution

Fig. 4.5 Scatter plots: MLE in joint and marginal models

is smaller than the approximate variance of θ˜ 1n which is 0.0064. Similarly, the approximate variance of θˆ 2n is 0.00556 and it is smaller than the approximate variance of θ˜ 1n which is 0.0069. Thus, simulation results do support the theoretical results as derived in Example 4.3.5. Figure 4.5 displays the scatter plots of the maximum likelihood estimators of θ1 and of θ2 under joint and marginal setup. We observe that the estimates under two setups are different, as expected. Figure 4.6 displays the density plots of the maximum likelihood estimators of θ1 and of θ2 under joint and marginal setup. From this figure also we note that the estimates under the two setups are different. 

4.6

Conceptual Exercises

4.6.1 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X with probability density function f (x, θ) = θ/x θ+1 x > 1, θ > 0. (i) Examine whether the distribution belongs to a one-parameter exponential family. (ii) On the basis of a random sample of size n from the distribution of X , find the moment estimator of θ based on a sufficient statistic and the maximum likelihood estimator of θ. (iii) Examine whether these are CAN estimators of θ. (iv) Obtain the CAN estimator for P[X ≥ 2]. 4.6.2 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a binomial B(m, θ) distribution, truncated at 0, 0 < θ < 1 and m is a known positive integer. Examine whether the distribution belongs to a one-parameter exponential

262

4

CAN Estimators in Exponential and Cramér Families

MLE of Theta_2

3 1

1

2

2

Density

Density

3

4

4

5

5

MLE of Theta_1

0.8

1.0

Joint Marginal

0

0

Joint Marginal 1.2

N = 1500 Bandwidth = 0.01545

1.7

1.9

2.1

2.3

N = 1500 Bandwidth = 0.01517

Fig. 4.6 Density plots: MLE in joint and marginal models

family. Find the moment estimator of θ based on a sufficient statistic and the maximum likelihood estimator of θ. Examine whether the two are the same and whether these are CAN. Find their approximate variances. 4.6.3 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X with probability density function (i) f (x, θ) = (x/θ) exp{−x 2 /2θ}, x > 0, θ > 0 and (ii) f (x, θ) = (3x 2 /θ3 ) exp{−x 3 /θ3 } x > 0, θ > 0. Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from these distributions, find the moment estimator based on a sufficient statistic and the maximum likelihood estimator of θ. Examine whether the two are the same and whether these are CAN. Find their approximate variances. 4.6.4 Suppose X has a logarithmic series distribution with probability mass function given by p(x, θ) =

θx −1 x = 1, 2, . . . , 0 < θ < 1 . log(1 − θ) x

Show that the logarithmic series distribution is a power series distribution. On the basis of a random sample from the logarithmic series distribution, find the moment estimator of θ based on a sufficient statistic and the maximum

4.6 Conceptual Exercises

263

likelihood estimator of θ. Examine whether the two are the same and whether these are CAN. Find their approximate variances. 4.6.5 Suppose (X , Y ) has a bivariate normal distribution with zero mean vector and dispersion matrix  given by   = σ2

 1 ρ , ρ 1

σ 2 > 0, − 1 < ρ < 1. On the basis of a random sample of size n from the distribution of (X , Y ) find the maximum likelihood estimator of (σ 2 , ρ) and examine if it is CAN. Find the approximate dispersion matrix. 4.6.6 Suppose (X , Y ) is random vector with a joint probability mass function as   x −λ x y P[X = x, Y = y] = e λ p (1 − p)x−y /x!, y y = 0, 1, . . . , x; x = 0, 1, 2, . . . , where λ > 0 and 0 < p < 1. Examine if the distribution belongs to a twoparameter exponential family. Hence, find a CAN estimator for (λ, p) and its approximate dispersion matrix. 4.6.7 Suppose (X , Y ) has a bivariate normal distribution with mean vector (μ1 , μ2 ) and dispersion matrix  given by  =

 1 ρ , ρ 1

where ρ = 0 and is known and μ1 , μ2 ∈ R. Show that the distribution belongs to a two-parameter exponential family. Hence, find a CAN estimator (μ1 , μ2 ) and its approximate dispersion matrix. 4.6.8 Suppose a random variable X has a negative binomial distribution with parameters (k, p) and with the following probability mass function. 

 x +k−1 k P[X = x] = p (1 − p)x x = 0, 1, 2, . . . . k−1 (i) Show that the distribution belongs to a one-parameter exponential family, if k is known and p ∈ (0, 1) is unknown. Hence obtain a CAN estimator of p. (ii) Examine whether the distribution belongs to a one-parameter exponential family, if p is known and k is unknown positive integer. (iii) Examine whether the distribution belongs to a two-parameter exponential family, if both p ∈ (0, 1) and k are unknown, where k is a positive integer. 4.6.9 Examine whether a logistic distribution with probability density function f (x, θ) =

exp{−(x − θ)} , x ∈ R, θ ∈ R (1 + exp{−(x − θ)})2

264

4

CAN Estimators in Exponential and Cramér Families

belongs to a one-parameter exponential family. If not, examine if it belongs to a one-parameter Cramér family. If yes, find a CAN estimator of θ and its approximate variance. 4.6.10 Suppose a random variable X follows a Cauchy C(θ, λ) distribution with location parameter θ and shape parameter λ. Examine whether the distribution belongs to a two-parameter Cramér family. 4.6.11 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Poisson distribution with parameter θ > 0. An estimator Tn is defined as  Tn =

Xn, 0.01,

if if

Xn > 0 Xn = 0

Show that T1n = e−Tn is a CAN estimator of P[X 1 = 0] = e−θ . Find its approximate variance. Suppose random variables Yi , i = 1, 2, . . . , n are defined as follows:  1, if Xi = 0 Yi = 0, otherwise . Obtain a CAN estimator T2n for e−θ based on {Y1 , Y2 , . . . , Yn } and find its approximate variance. Find A R E(T1n , T2n ) .

4.7

Computational Exercises

Verify the results by simulation using R. 4.7.1 Suppose a random variable X follows a Cauchy C(θ, 1) distribution with location parameter θ and shape parameter 1. Using R draw a random sample from the distribution of X , plot the likelihood function and find the maximum likelihood estimator θ. Using simulation examine whether it is CAN for θ. (Hint: Use the code similar to that Example 4.5.3.) 4.7.2 In Exercise 4.6.2, you have obtained the moment estimator of θ based on the sufficient statistic and the maximum likelihood estimator of θ corresponding to a random sample {X 1 , X 2 , . . . , X n } from the binomial B(m, θ) distribution, truncated at 0, 0 < θ < 1 and m is a known positive integer. Using R draw a random sample from the binomial B(m, θ) distribution, truncated at 0. Plot the likelihood function and approximately identify the solution of the likelihood equation. Use Newton-Raphson iterative procedure and method of scoring to solve the likelihood equation and find the maximum likelihood estimator of θ on the basis of a random sample generated from B(m, θ) distribution, truncated at 0 assuming some values for m and θ. Verify that the maximum likelihood estimator of θ is CAN. (Hint: Use the code similar to that for truncated Poisson distribution.)

4.7 Computational Exercises

265

4.7.3 Suppose (X , Y ) has bivariate normal distribution with zero mean vector and dispersion matrix  given by  =

 1 ρ , ρ 1

−1 < ρ < 1. We have discussed the maximum likelihood estimation of ρ in Example 4.3.3. It is noted that we cannot find the explicit solution but a unique root of the likelihood equation exists. Hence use the simulation approach to get the solution of the likelihood equation. Draw a random sample from the above distribution using rmvnorm command from the MASS library to draw a sample from bivariate normal distribution. Plot the likelihood function and approximately identify the solution of the likelihood equation. Solve the likelihood equation n by the Newton-Raphson procedure. Take the initial value X i Yi /n (Hint: Use the code similar to that for truncated of ρ as ρ0 = i=1 Poisson distribution.) 4.7.4 Suppose (X , Y ) has a bivariate normal distribution with zero mean vector and dispersion matrix  given by,   = σ2

 1 ρ , ρ 1

σ 2 > 0, − 1 < ρ < 1. In Exercise 4.6.5, you have obtained the maximum likelihood estimators of (σ 2 , ρ) and shown that it is CAN . Verify the results by simulation using R code. Solve the system of likelihood equations using Newton-Raphson procedure and also by the method of scoring. 4.7.5 Suppose a random vector (X , Y ) follows a bivariate normal N2 (0, 0, σ12 , σ22 , ρ) distribution where ρ = 0 is a known correlation coefficient. In Example 4.3.4, it is shown that the maximum likelihood estimator of (σ12 , σ22 ) is a CAN estimator of (σ12 , σ22 ) . Verify the result by simulation. Based on the bivariate data, obtain the maximum likelihood estimator of σ12 and of σ12 , treating these as parameters of marginal distributions. Comment on the results. 4.7.6 Suppose a random vector (X , Y ) follows a bivariate normal N2 (μ1 , μ2 , σ12 , σ22 , ρ) distribution where where μ1 , μ2 ∈ R, σ12 , σ22 > 0 and ρ ∈ (−1, 1). On the basis of simulated data, obtain the maximum likelihood estimator of (μ1 , μ2 , σ12 , σ22 , ρ) . (Hint: The distribution belongs to five parameter exponential family, hence the maximum likelihood estimator of (μ1 , μ2 , σ12 , σ22 , ρ) is same as the moment estimator of (μ1 , μ2 , σ12 , σ22 , ρ) based on the sufficient statistic.) 4.7.7 Suppose a random variable X follows a Cauchy C(θ, λ) distribution with location parameter θ and shape parameter λ. Using R and Cramér-Wold device verify that the maximum likelihood estimator of (θ, λ) is CAN with approximate dispersion matrix I −1 (θ, λ)/n. (Hint: I (θ, λ) is obtained in Example 4.5.5.)

266

4

CAN Estimators in Exponential and Cramér Families

4.7.8 For the multinomial distribution as specified in Example 4.2.7, simulate the sample of size n and based on that find the maximum likelihood estimators of the parameters θ and φ using Newton-Raphson procedure and using the method of scoring. Comment on the result. It has been proved in Example 4.2.7 that the the maximum likelihood estimator of (θ, φ) is CAN. Verify it by simulation. 4.7.9 Suppose a random variable X follows a gamma distribution with scale parameter α and shape parameter λ, with probability density function given by f (x, α, λ) =

αλ −αx λ−1 x , x > 0, α > 0, λ > 0 . e (λ)

Find the maximum likelihood estimator of (α, λ) using method of scoring. (Hint: Use procedure as in Example 4.5.5) 4.7.10 Suppose a random variable X follows a logistic distribution with probability density function f (x, θ) =

exp{−(x − θ)} , x ∈ R, θ ∈ R. (1 + exp{−(x − θ)})2

Find the maximum likelihood estimator of θ using the Newton-Raphson procedure and the method of scoring. As an initial iterative value one may take the sample median or the sample mean as both are consistent for θ. (Hint: Some part of the code of Example 4.5.3 will be useful.)

References 1. Lehmann, E. L., & Romano, J. P. (2005). Testing of statistical hypothesis (3rd ed.). New York: Springer. 2. van der Vaart, A. (1998). Asymptotic statistics. Cambridge: Cambridge University Press. 3. Apostol, T. (1967). Calculus (2nd ed., Vol. I). New York: Wiley. 4. Cramér, H. (1946). Mathematical methods of statistics. Princeton: Princeton University Press. 5. Huzurbazar, V. S. (1948). The likelihood equation, consistency and maxima of the likelihood function. The Annals of Eugenics, 14, 185–200. 6. Kale, B. K., & Muralidharan, K. (2016). Parametric inference: An introduction. Delhi: Narosa. 7. Rohatgi, V. K., & Saleh, A. K. Md. E. (2001). Introduction to probability and statistics. New York: Wiley. 8. Rao, C. R. (1978). Linear statistical inference and its applications. New York: Wiley. 9. Kotz, S., Balakrishnan, N., & Johnson, N. L. (2000). Continuous multivariate distributions: Models and applications (2nd ed., Vol. I). New York: Wiley. 10. Kale, B. K. (1961). On the solution of the likelihood equation by iteration processes. Biometrika, 48, 452–456. 11. Kale, B. K. (1962). On the solution of the likelihood equation by iteration processes, the multiparameter case. Biometrika, 49, 479–486. 12. Kotz, S., Balakrishnan, N., & Johnson, N. L. (1995). Continuous univariate distributions (2nd ed., Vol. II). New York: Wiley.

5

Large Sample Test Procedures

Contents 5.1 5.2 5.3 5.4

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Likelihood Ratio Test Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Large Sample Tests Using R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Conceptual Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

267 274 292 304

5 Learning Objectives After going through this chapter, the readers should be able – to perform the large sample test procedures using the test statistic based on the CAN estimator and judge the performance of a test procedure using the power function – to carry out the likelihood ratio test procedure and decide the asymptotic null distribution of the likelihood ratio test statistic – to use R software in the large sample test procedures and the likelihood ratio test procedure

5.1

Introduction

In Chaps. 2, 3 and 4, we discussed point estimation of a parameter and studied the large sample optimality properties of the estimators. We also discussed interval estimation for large n. The present and the next chapters are devoted to the large sample test procedures. All the results about the estimators established in Chaps. 2, 3, and

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 S. Deshmukh and M. Kulkarni, Asymptotic Statistical Inference, https://doi.org/10.1007/978-981-15-9003-0_5

267

268

5

Large Sample Test Procedures

4 are heavily used in both the chapters. Most of the theory of testing of hypotheses has revolved around the Neyman-Pearson lemma, which leads to the most powerful test for simple null against simple alternative hypothesis. It also leads to the uniformly most powerful tests in certain models, in particular for exponential families. A likelihood ratio test procedure, which we discuss in the second section, is also an extension of Neyman-Pearson lemma in some sense. Wald’s test procedure, the score test procedure, which are frequently used in statistical modeling and analysis, are related to the likelihood ratio test procedure. All these test procedures, when the underlying probability model is a multinomial distribution, play a significant role in tests for validity of a proposed model, goodness of fit tests and tests for contingency tables. In the present chapter, we discuss the likelihood ratio test procedure and its asymptotic theory. The next chapter is devoted to likelihood ratio test procedures associated with a multinomial distribution, in particular, the tests for goodness of fit and the tests for contingency tables. Suppose X is a random variable or a random vector whose distribution is indexed by a real parameter θ. On the basis of a random sample of size n from the distribution of X , it is of interest to test the following hypotheses. (i) H0 : θ = θ0 against the alternative H1 : θ > θ0 . (ii) H0 : θ = θ0 against the alternative H1 : θ < θ0 . (iii) H0 : θ = θ0 against the alternative H1 : θ = θ0 . θ0 is a specified value of the parameter. Optimal test procedures, such as uniformly most powerful tests, uniformly most powerful unbiased tests, have been developed to test such hypotheses. If the distribution satisfies the monotone likelihood ratio property or if it belongs to an exponential family, then uniformly most powerful tests exist for the one-sided hypotheses. However, these may not exist for all the distributions. Thus, in a general setup, test procedures are based on a suitable test statistic, which is a function of the deviation of θ0 from an appropriate estimator of θ. To account for the variation in the deviation, it is divided by the standard error of the estimator. Thus, in most of the cases the test statistic is given by Tn =

θˆ n − θ0 , se(θˆ n )

where θˆ n is a suitable estimator of θ and se(θˆ n ) is the standard error of the estimator, that is, the estimator of the standard deviation of θˆ n . The test procedure is to reject H0 , if Tn > c1 in case (i), if Tn < c2 , in case (ii) and in case (iii) if |Tn | > c3 . The constants c1 , c2 and c3 are determined so that the probability of type I error is fixed at a specified level of significance α. To determine the constants c1 , c2 and c3 , it is essential to know the null distribution of Tn . In some situations, it is difficult to find out the null distribution of Tn for finite n. However, in most of the cases, it is possible to find the asymptotic null distribution of Tn . Using the asymptotic null distribution of Tn , we can find the approximate values of constants c1 , c2 and c3 . Using the

5.1 Introduction

269

results studied in previous chapters, we now discuss how to obtain the asymptotic null distribution of Tn . Suppose θ˜ n is a CAN estimator of θ with approximate variance v(θ)/n. The test statistic Tn or Sn to test the null hypothesis H0 can be defined as follows: Tn =

 n/v(θ0 ) (θ˜ n − θ0 )

 or

Sn =

n/v(θ˜ n ) (θ˜ n − θ0 ) .

Under H0 , the asymptotic distribution of Tn is standard normal, or equivalently, the asymptotic null distribution of Tn2 is χ21 . If v(θ) is a continuous function of θ, then v(θ˜ n ) is a consistent estimator of v(θ) and hence by Slutsky’s theorem, the asymptotic null distribution of Sn is also standard normal, which implies that the asymptotic null distribution of Sn2 is χ21 . Using these asymptotic null distributions, one can determine the approximate values of the constants c1 , c2 and c3 . If the distribution of X belongs to a one-parameter exponential family or a Cramér family, then it is proved in Chap. 4, that for large n, the maximum likelihood estimator θˆ n of θ is a CAN estimator of θ with approximate variance 1/n I (θ). In such cases the test statistic is defined as   Tn = n I (θ0 )(θˆ n − θ0 ) or Sn = n I (θˆ n )(θˆ n − θ0 ) . Under H0 , the asymptotic distribution of Tn is standard normal. If I (θ) is a continuous function of θ then the asymptotic null distribution of Sn is also standard normal or equivalently, the asymptotic null distributions of Tn2 and Sn2 are χ21 . On similar lines in a k-parameter setup, If θ˜ n is CAN for θ with approximate dispersion matrix (θ)/n, then for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , the test statistic is defined as Wn = n(θ˜ n − θ0 )  −1 (θ0 )(θ˜ n − θ0 ) or

Un = n(θ˜ n − θ0 )  −1 (θ˜ n )(θ˜ n − θ0 ) .

The null hypothesis is rejected if Wn > c or Un > c. The asymptotic null distribution of Wn is χ2k . If each element of (θ) is a continuous function of θ, then (θ˜ n ) is P

a consistent estimator of (θ). Consequently, Wn − Un → 0, hence the asymptotic null distribution of Un is also χ2k . Thus, the constant c can be determined or corresponding p-value can be computed. If the distribution belongs to a k-parameter exponential family or a k-parameter Cramér family, the large sample distribution of the maximum likelihood estimator θˆ n is Nk (θ, I −1 (θ)/n), where I (θ) is the information matrix. Hence, for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , the test statistic is defined as Wn = n(θˆ n − θ0 ) I (θ0 )(θˆ n − θ0 ) or

Un = n(θˆ n − θ0 ) I (θˆ n )(θˆ n − θ0 ) .

The null hypothesis is rejected if Wn > c or Un > c. The asymptotic null distributions of Wn and Un are χ2k and hence the constant c can be determined or corresponding p-value can be computed.

270

5

Large Sample Test Procedures

For some distributions more than one CAN estimators exist for the indexing parameter. In such situations, we select the estimator which has the smallest approximate variance, to propose the test statistic. Following examples illustrate these test procedures.  Example 5.1.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Laplace distribution with location parameter θ and scale parameter 1. Then the sample median is the maximum likelihood estimator of θ and it is CAN with approximate variance 1/n. The sample mean is also CAN for θ with approximate variance 2/n. Thus, the sample median is a better estimator of θ. Hence, the test procedure to test H0 : θ = 1 against H1 : θ > 1, is√based  on the sample  median X ([n/2]+1) . We propose the test statistic Tn as Tn = n X ([n/2]+1) − 1 . For large n, under H0 , Tn ∼ N (0, 1) distribution. The null hypothesis H0 : θ = 1 is rejected against H1 : θ > 1, if Tn > c where c is determined corresponding to the given level of significance α  and the asymptotic null distribution of Tn . Thus, c = a1−α .  Remark 5.1.1

It is to be noted that a Laplace distribution with location parameter θ and scale parameter 1 is not a member of Cramér family, but the maximum likelihood estimator of θ exists and it is CAN.  Example 5.1.2

Suppose X follows a Laplace distribution with probability density function   1 |x − μ| , x, μ ∈ R, α > 0. f (x, μ, α) = exp − 2α α In Example 3.3.7, we have obtained CAN estimators of (μ, α) based on (i) the sample quantiles and (ii) the sample moments. On the basis of generalized variance, we have noted that the CAN estimator based on the sample moments is better than that based on the sample quantiles. Hence, the test statis, is based tic to test H0 : μ = μ0 , α = α0 against H1 : μ = μ0 , α= α0√  on the sample moments. In Example 3.3.7, we have shown that m 1 , m 2 /2 is CAN   for (μ, α) with the approximate dispersion matrix /n = diag 2α2 , 1.25α2 /n. Hence, we propose the test statistic as      Wn = n m 1 − μ0 , m 2 /2 − α0  −1 (μ0 , α0 ) m 1 − μ0 , m 2 /2 − α0  = n(m 1 − μ0 )2 /2α02 + n( m 2 /2 − α0 )2 /1.25α02 . For large n under H0 , Wn ∼ χ22 distribution. The null hypothesis H0 is rejected against H1 if Wn > c, where c is determined corresponding to the given level

5.1 Introduction

271

of significance α and the asymptotic null distribution of Wn . Thus, c = χ21−α,2 . Another test statistic is defined as      Un = n m 1 − μ0 , m 2 /2 − α0  −1 m 1 , m 2 /2 ×   m 1 − μ0 , m 2 /2 − α0 2   2 = n m 1 − μ0 /m 2 + n m 2 /2 − α0 /0.625m 2 . For large n, under H0 , Un ∼ χ22 distribution. The null hypothesis H0 is rejected  against H1 if Un > c, where c = χ21−α,2 .  Example 5.1.3

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Poisson Poi(θ) distribution. Then the sample mean X n is CAN for θ with approximate variance θ/n. To derive the large sample test procedure to test H0 : P[X > 0] = 2/3 against H1 : P[X > 0] < 2/3, note that P[X > 0] = 1 − e−θ = g(θ), say. Then g is a differentiable function with g  (θ) = e−θ = 0 for all θ > 0. Hence by the delta method, 1 − e−X n is CAN for 1 − e−θ with approximate variance θe−2θ /n. By Slutsky’s theorem,

 n L e X n (1−e−X n )−(1 − e−θ ) = e X n (e−θ − e X n ) → Z ∼ N (0, 1) . Xn Xn  Hence, we propose the test statistic Sn as Sn = n/X n e X n (1/3 − e−X n ). For large n under H0 , Sn ∼ N (0, 1) distribution. The null hypothesis H0 is rejected against H1 if Sn < c, where c is determined corresponding to the given level of significance α and the asymptotic null distribution of Sn . Thus, c = −a1−α .  n

 Remark 5.1.2

In Example 5.1.3, P[X > 0] = 1 − e−θ = 2/3 ⇒ θ = log 3, hence the null hypothesis H0 : P[X > 0] = 2/3 can be expressed as H0 : θ = log 3 = θ0 , say. For a Poisson Poi(θ) distribution, it is known that the sample mean X n is CAN for θ with  approximate variance θ/n and hence the test statistic Sn is given by

Sn = n/X n (X n − θ0 ) and the test statistic Tn is given by √ Tn = n/θ0 (X n − θ0 ). For large n under H0 , Sn ∼ N (0, 1) distribution. The null hypothesis H0 is rejected against H1 if Sn < c, where c = −a1−α . The test procedure based on Tn will be similar to that based on Sn . Such a conversion may not be possible for all the distributions. For example, in Exercise 5.4.4 we cannot have such a conversion.

272

5

Large Sample Test Procedures

 Example 5.1.4

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normal N (μ, σ 2 ) distribution and we want to derive a large sample test procedure to test H0 : μ = μ0 , σ 2 = σ02 against H1 : μ = μ0 , σ 2 = σ02 . In Example 3.3.2 we have shown that θˆ n = (X n , Sn2 ) is CAN for θ = (μ, σ 2 ) with approximate dispersion matrix /n, where   2 0 σ . = 0 2σ 4 As a consequence, for large n, n(θˆ n − θ)  −1 (θˆ n − θ) ∼ χ22 and by Slutsky’s ˆ n−1 (θˆ n − θ) ∼ χ2 , where  ˆ n is diag(Sn2 , 2Sn4 ). theorem, for large n, n(θˆ n − θ)  2 2  Suppose θ0 = (μ0 , σ0 ) . We propose the test statistic as ˆ n−1 (θˆ n − θ0 ). For large n under H0 , Tn ∼ χ2 distribution. The Tn = n(θˆ n − θ0 )  2 null hypothesis H0 is rejected against H1 , if Tn > c where c is determined corresponding to the given level of significance α and the asymptotic null distribution  of Tn . Thus, c = χ22,(1−α) .  Example 5.1.5

Suppose X and Y are independent random variables having Bernoulli B(1, p1 ) and B(1, p2 ) distributions respectively, 0 < p1 , p2 < 1. Suppose X = {X 1 , X 2 , . . . , X n 1 } is a random sample from the distribution of X and Y = {Y1 , Y2 , . . . , Yn 2 } is a random sample from the distribution of Y . On the basis of these samples we want to test H0 : p1 = p2 against the alternative p1 = p2 . Suppose P1n 1 = X n 1 =

n1

X i /n 1 & P2n 2 = Y n 2 =

i=1

n1

Yi /n 2

i=1

denote the proportion of successes in X and the proportion of successes in Y respectively. The maximum likelihood estimator of ( p1 , p2 ) when 0 < p1 , p2 < 1 is (X n 1 , Y n 2 ) ≡ (P1n 1 , P2n 2 ) . Further, by the WLLN and the CLT as n 1 → ∞ & n 2 → ∞, P

L

P1n 1 → p1 , P2n 2 → p2 ,



L

n 1 (P1n 1 − p1 ) → Z 1 &

√ P n 2 (P2n 2 − p2 ) → Z 2

where Z 1 ∼ N (0, p1 (1 − p1 )) & Z 2 ∼ N (0, p2 (1 − p2 )) distribution. Since X and Y are independent, √

√ L n 1 (P1n 1 − p1 ) − n 2 (P2n 2 − p2 ) → Z 3 , where Z 3 ∼ N (0, p1 (1 − p1 ) + p2 (1 − p2 )). Suppose n 1 → ∞ & n 2 → ∞ such that (n 1 + n 2 )/n 1 → a & (n 1 + n 2 )/n 2 → b where a and b are constants. Then

5.1 Introduction

273



n1 + n2 √ n 1 (P1n 1 − p1 ) − n1 where Z 4 ∼ N (0, v),



n1 + n2 √ L n 2 (P2n 2 − p2 ) → Z 4 , n2

where v = ap1 (1 − p1 ) + bp2 (1 − p2 ). Suppose under H0 the common value of p1 and p2 is denoted by p. Then a test statistic Wn to test H0 : p1 = p2 is defined as follows: Wn = 

(P1n 1 − P2n 2 ) P1n 1 (1 − P1n 1 )/n 1 + P2n 2 (1 − P2n 2 )/n 2   √ n 1 +n 2 √ 2 n 1 (P1n 1 − p) − n 1n+n n 2 (P2n 2 − p) n1 2

=  ((n 1 + n 2 )/n 1 )P1n 1 (1 − P1n 1 ) + ((n 1 + n 2 )/n 2 )P2n 2 (1 − P2n 2 ) Un = √ , Vn L

P

where Un → Z 4 and Vn → ap1 (1 − p1 ) + bp2 (1 − p2 ). Hence, by Slutsky’s theorem, the asymptotic null distribution of Wn is standard normal. The null hypothesis H0 is rejected if |Wn | > a1−α/2 . We define one more test statistic as follows. In the null set up p1 = p2 = p. Then the log-likelihood of p given random samples X and Y , using independence of X and Y is given by n

n2 1 log L n 1 +n 2 ( p|X , Y ) = Xi + Yi log p

i=1

i=1 n1

+ n1 + n2 −

i=1

Xi −

n2

Yi log(1 − p).

i=1

It follows that the maximum likelihood estimator of p is n 1 pˆ n =

Xi +

i=1

n2  i=1

(n 1 + n 2 )

 Yi =

(n 1 P1n 1 + n 2 P2n 2 ) = Pn , say. (n 1 + n 2 )

P

Further under H0 , pˆ n → p. Another test statistic Sn is defined as follows: (P1n 1 − P2n 2 ) Sn = √ Pn (1 − Pn )(1/n 1 + 1/n 2 )   n 1 +n 2 √ n 1 +n 2 √ n (P − p) − n 2 (P2n 2 − p) 1 1n 1 n1 n2 = √ Pn (1 − Pn )((n 1 + n 2 )/n 1 + (n 1 + n 2 )/n 2 ) Nn = √ , Dn

274

5

Large Sample Test Procedures

L

where Nn → Z 4 ∼ N (0, v), where v = ap1 (1 − p1 ) + bp2 (1 − p2 ) = p(1 − p)(a + b) under H0 . Note that P

Dn → p(1 − p)(a + b). Hence, by Slutsky’s theorem, under H0 the asymptotic distribution of Sn is standard normal. The null hypothesis H0 is rejected if  |Sn | > a1−α/2 . We will discuss both these test procedures again in Sect. 6.4, where we prove that Wn is Wald’s test statistic and Sn is a score test statistic for testing H0 : p1 = p2 against the alternative p1 = p2 based on samples from two independent Bernoulli distributions. In the next section, we discuss the most frequently used large sample test procedure, known as likelihood ratio test procedure. All the tests for contingency table and goodness of fit tests are likelihood ratio test procedures, when the underlying probability model is a multinomial distribution. These tests are discussed in Chap. 6.

5.2

Likelihood Ratio Test Procedure

Likelihood ratio test procedure is the most general test procedure when the parameter space is either a subset of R or Rk . Whenever an optimal test exists, such as the most powerful test, uniformly most powerful test or uniformly most powerful unbiased test, the likelihood ratio test procedure leads to the optimal test procedure. Suppose X is a random variable or a random vector whose probability law f (x, θ) is indexed by a parameter θ, which may be a real parameter or a vector parameter. Suppose , 0 and 1 denote the parameter space, the parameter space corresponding to a null hypothesis and the parameter space corresponding to an alternative hypothesis respectively, where 0 ∩ 1 = ∅ and 0 ∪ 1 = . On the basis of a random sample X = {X 1 , X 2 , . . . , X n } of size n from the distribution of X , suppose we are interested in testing H0 : θ ∈ 0 against the alternative H1 : θ ∈ 1 . Likelihood n of θ corresponding to the data X = {X 1 , X 2 , . . . , X n } is f (X i , θ). If both the null and alternative hypotheses are given by L n (θ|X ) = i=1 simple, then by the Neyman-Pearson lemma, the most powerful test is based on the likelihood ratio L n (θ1 |X )/L n (θ0 |X ). For certain special models and certain composite hypotheses, the most powerful test turns out to be independent of θ1 ∈ 1 . Thus, we get an uniformly most powerful test for testing H0 against H1 . When both H0 and H1 are composite, a sensible extension of the idea behind the Neyman-Pearson lemma is to base a test on Tn = sup0 L n (θ|X )/ sup1 L n (θ|X ). Thus, the single points {θ0 } and {θ1 } are replaced by sup0 and sup1 respectively. For mathematical simplicity in the denominator of Tn , supremum over 1 is replaced by the supremum over  and the likelihood ratio test statistic λ(X ) is defined as sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

.

5.2 Likelihood Ratio Test Procedure

275

The likelihood ratio test procedure is also proposed by Neyman and Pearson in 1928. It is to be noted that the likelihood ratio test statistic is a function of the minimal sufficient statistic and thus has the desirable property of achieving the reduction of data by sufficiency. If X is a discrete random variable then sup0 L n (θ|X ) denotes the maximum possible probability of obtaining the data X if θ ∈ 0 . This is compared with the maximum possible probability of obtaining the data X if θ ∈ . It is to be noted that sup L n (θ|X ) ≥ 0 & sup L n (θ|X ) > 0 ⇒ λ(X ) ≥ 0 0



0 ⊂  ⇒ sup L n (θ|X ) ≤ sup L n (θ|X ) ⇒ λ(X ) ≤ 1 0



⇒ 0 ≤ λ(X ) ≤ 1. If λ(X ) is near 1, then the numerator and denominator are close to each other and it indicates that the support of the data is to a null setup. On the other hand, if λ(X ) is small, then the data support an alternative setup. Hence, the likelihood ratio test procedure rejects H0 if λ(X ) < c, 0 < c < 1, where c is determined so that size of the test is α, that is, sup0 Pθ [λ(X ) < c] = α. In carrying out the test, one encounters two difficulties-one is finding the null distribution of λ(X ) and the second is finding supremum of the likelihood in the null setup as well as in the entire parameter space. The problem of finding supremum is essentially that of finding the maximum likelihood estimators of the parameters in  and in 0 . For some distributions, such as normal or exponential, the critical region [λ(X ) < c] is equivalent to [Tn (X ) > k] or [Tn (X ) < k] or [|Tn (X )| > k] where k is a constant so that size of the test is α and is determined using the null distribution of Tn , which is easy to obtain. In many cases, it is difficult to find the exact null distribution of λ(X ). But this problem is resolved by considering the null distribution for large n, to find the approximate value of the cut-off point. It has been proved by Wilks [1] that under certain conditions for large n, −2 log λ(X ) ∼ χr2 where r is the difference between the number of parameters estimated in  and in 0 . In spite of these apparent difficulties of a likelihood ratio test procedure, it does provide a unified approach for developing test procedures. Besides testing of hypotheses, a likelihood ratio test procedure is also used to construct a confidence interval for the desired parametric function. It is defined as usual by the acceptance region. The likelihood ratio test procedure is closely related to the score test and Wald’s test. We will elaborate on this in the next chapter. In addition to the intuitive interpretation of the likelihood ratio test, in many cases of interest it is equivalent to optimal tests. We now show that if the most powerful test exists for testing a simple null hypothesis against a simple alternative, then the likelihood ratio test and the most powerful test are equivalent. Suppose 0 = {θ0 } and 1 = {θ1 }. Then according to the Neyman-Pearson lemma the most powerful test is as given below.

φ(X ) =

⎧ ⎪ ⎨ 1,

if

L n (θ1 |X ) L n (θ0 |X )

>k

⎪ ⎩ 0,

if

L n (θ1 |X ) L n (θ0 |X )

≤ k.

276

5

Large Sample Test Procedures

L n (θ1 |X ) > k L n (θ0 |X ). Now the likelihood ratio test statistic λ(X ) is given by

(5.2.1)

H0 is rejected if

λ(X ) = sup L n (θ|X )/ sup L n (θ|X ) = L n (θ0 |X )/ sup L n (θ|X ). 0





H0 is rejected if λ(X ) < c ⇔ L n (θ0 |X ) < c sup L n (θ|X ). If L n (θ0 |X ) > L n (θ1 |X ) then λ(X ) = 1 and H0 is accepted. If L n (θ1 |X ) > L n (θ0 |X ) then H0 is rejected if L n (θ0 |X ) 1 < c ⇔ L n (θ1 |X ) > L n (θ0 |X ). L n (θ1 |X ) c

(5.2.2)

From (5.2.1) and (5.2.2), it is clear that the likelihood ratio test procedure and the most powerful test procedure are equivalent. On similar lines, it can be shown that the likelihood ratio test procedure and the uniformly most powerful test procedure are equivalent. Following examples illustrate the likelihood ratio test procedure and how the rejection region λ(X ) < c is reduced to Tn > k or Tn < k, where Tn is a test statistic whose null distribution can be obtained easily.  Example 5.2.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normal N (θ, 1) distribution. To derive the likelihood ratio test procedure for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , the first step is to obtain the maximum likelihood estimator of θ in the entire parameter space  and in the null space 0 . Corresponding to a random sample X ≡ {X 1 , X 2 , . . . , X n } of size n from a normal N (θ, 1) distribution, the likelihood of θ is given by   n  1 1 2 − θ) L n (θ|X ) = exp − (X √ i 2 2π i=1   n √ −n 1 2 = exp − (X i − θ) . 2π 2 i=1

Here  = R and it is well-known that the sample mean X n is the maximum likelihood√ estimator of θ and X n has normal N (θ, 1/n) distribution, which implies that Z = n(X n − θ) has standard normal distribution. The null space is {θ0 }, thus the supremum of the likelihood in null space is attained at θ = θ0 . Hence, the likelihood ratio test statistic λ(X ) is given by    n sup L n (θ|X ) n 1 0 2 2 λ(X ) = (X i − θ0 ) − (X i − X n ) = exp − sup L n (θ|X ) 2 i=1 i=1   n  = exp − (X n − θ0 )2 . (5.2.3) 2

5.2 Likelihood Ratio Test Procedure

277

The likelihood ratio √ test procedure rejects H0 if λ(X ) < c ⇔√ n|X n − θ0 | > k, where k is determined √ so that size of the test is α, that is, Pθ0 [ n|X n − θ0 | > k] = α. Under H0 , Z = n(X n − θ0 ) ∼ N (0, 1) and hence k = a1−α/2 . Suppose now H0 : θ ≥ θ0 and the alternative is H1 : θ < θ0 . In this case 0 = [θ0 , ∞) and  = R. As discussed in Example 2.2.3, if the parameter space is 0 = [θ0 , ∞), then the maximum likelihood estimator θˆ 0n of θ is given by ⎧ if X n < θ0 ⎨ θ0 , θˆ 0n = ⎩ Xn, if X n ≥ θ0 . Suppose X n ≥ θ0 , then the maximum likelihood estimator of θ in  and in 0 is the same and hence λ(X ) = 1 and data support the null setup, hence H0 : θ ≥ θ0 is not rejected, which is quite reasonable. If X n < θ0 , the likelihood ratio test statistic λ(X ) is as in Eq. (5.2.3). The likelihood ratio test procedure rejects H0 if λ(X ) < c ⇔ n(X n − θ0 )2 > c1 ⇔



n(X n − θ0 ) < k, as X n < θ0 .

The constant √ k is determined so that size of the test is α, that is, sup0 Pθ [ n(X n − θ0 ) < k] = α. At √ θ = θ0 , Pθ0 [ n(X n − θ0 ) < k] = α ⇒ k = aα . Now for θ > θ0 consider, √ √ Pθ [ n(X n − θ0 ) < k] = Pθ [ n(X n − θ + θ − θ0 ) < k] √ = Pθ [ n(X n − θ) < k1 ] , √ where √ k1 =k − n(θ − θ0 ) < k = aα as (θ − θ0 ) > 0. Thus, Pθ [ n(X n√− θ0 ) < k] < α and hence √ sup0 Pθ [ n(X n − θ0 ) < k] = Pθ0 [ n(X n − θ0 ) < k] = α. Thus, at k = aα , √ size of the test is α. Thus, if X n < θ0 , then H0 is rejected if n(X n −θ0 ) < k = aα . Suppose now we want to test H0 : θ ≤ θ0 against the alternative H1 : θ > θ0 . In this case 0 = (−∞, θ0 ]. As discussed in Example 2.2.3, the maximum likelihood estimator θˆ 0n of θ is given by ⎧ if X n ≤ θ0 ⎨ Xn, θˆ 0n = ⎩ θ0 , if X n > θ0 . Suppose X n ≤ θ0 , then the maximum likelihood estimator of θ in  and in 0 is the same and hence λ(X ) = 1 and data support the null setup. Hence, H0 : θ ≤ θ0 is not rejected. Now suppose X n > θ0 . In this case the likelihood ratio test statistic λ(X ) is as in Eq. (5.2.3). The likelihood ratio test procedure rejects H0 if λ(X ) < c ⇔ n(X n − θ0 )2 > c1 ⇔



n(X n − θ0 ) > k, as X n > θ0 .

278

5

Large Sample Test Procedures

The constant √ k is determined so that size of the test is α, that is, sup0 Pθ [ n(X n − θ0 ) > k] = α. At √ θ = θ0 , Pθ0 [ n(X n − θ0 ) > k] = α ⇒ k = a1−α . Now for θ < θ0 consider, √ √ Pθ [ n(X n − θ0 ) > k] = Pθ [ n(X n − θ + θ − θ0 ) > k] √ = Pθ [ n(X n − θ) > k1 ] , √ where √ k1 = k − n(θ − θ0 ) > k = a1−α as (θ − θ0 ) < 0. Thus, Pθ [ n(X n√− θ0 ) > k] < α and hence √ sup0 Pθ [ n(X n − θ0 ) > k] = Pθ0 [ n(X n − θ0 ) > k] = α. Thus, at k = a1−α , size of the test is α. Hence, if X n > θ0 , then H0 is rejected if √ n(X n − θ0 ) > k = a1−α .   Example 5.2.2

Suppose X ∼ N (θ, σ 2 ). Suppose we want to derive the likelihood ratio test procedure for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 when σ 2 unknown. Corresponding to a random sample X ≡ {X 1 , X 2 , . . . , X n } of size n from normal N (θ, σ 2 ) distribution, the likelihood of (θ, σ 2 ) is given by   n  1 1 2 2 exp − 2 (X i − θ) L n (θ, σ |X ) = √ 2σ 2πσ i=1   n √ −n 1 2 = 2πσ exp − 2 (X i − θ) . 2σ i=1

If  = {(θ, σ 2 )|θ ∈ R, σ 2 > 0}, then the samplemean X n is the maximum liken (X i − X n )2 /n is the maxilihood estimator of θ and sample variance Sn2 = i=1 mum likelihood estimator√ of σ 2 . Further, X n has normal N (θ, σ 2 /n) distribution which implies n − θ)/σ has standard normal distribution and n that Z = n(X (X i − X n )2 /σ 2 ∼ χ2n−1 distribution. It is also known that X n nSn2 /σ 2 = i=1 and nSn2 /σ 2 are independent random variables. To derive the likelihood ratio test procedure for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 when 2 2 σ 2 unknown, the null space is 0 = {(θ, n σ )|θ = θ0 ,2σ > 0} and the maximum 2 2 likelihood estimator of σ is S0n = i=1 (X i − θ0 ) /n. Hence, the likelihood ratio test statistic λ(X ) is given by sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 



n 

(X i − θ0 )2

−n/2

i=1

=



n 

(X i − X n )2

i=1

−n/2

⎧ ⎨

exp − 21 ⎩ ⎧ ⎨

exp − 21 ⎩

n 

(X i −θ0 )2

i=1

n 

(X i −θ0 )2 /n ⎭

i=1 n 

(X i −X n )2

i=1

n 

⎫ ⎬ ⎫ ⎬

(X i −X n )2 /n ⎭

i=1

5.2 Likelihood Ratio Test Procedure

⎛ n

(X i − X n )2

⎜ i=1 =⎜ n ⎝ ⎛ ⎜ =⎜ ⎝

(X i − θ0 )2

279

⎞n/2



⎟ ⎟ ⎠

n 

⎜ =⎜ ⎝

i=1

i=1 n 

(X i − X n + X n − θ0 )2

i=1 n 

(X i − X n )2

i=1 n 

(X i − X n )2

(X i − X n )2 + n(X n − θ0 )2

⎞n/2 ⎟ ⎟ ⎠

⎞n/2 ⎟ ⎟ ⎠ ⎞−n/2



⎜ n(X n − θ0 )2 ⎟ ⎟ =⎜ 1 + n ⎠ ⎝  2 (X i − X n )

i=1

i=1

.

(5.2.4)

The likelihood ratio test procedure rejects H0 if n(X n − θ0 )2

λ(X ) < c ⇔

n 

> c1 ⇔ |Tn | > k

(X i − X n )2

i=1

√ where Tn = $

n 

n(X n − θ0 )

(X i − X n )2 /(n − 1)

i=1

and k is determined so that size of the test is α, that is, Pθ0 [|Tn | > k] = α. Under H0 , Tn ∼ tn−1 distribution and hence k = tn−1,1−α/2 . Suppose now H0 : θ ≥ θ0 and the alternative is H1 : θ < θ0 . Further, σ 2 is unknown. In this case, null space is 0 = {(θ, σ 2 )|θ ≥ θ0 , σ 2 > 0} and the maximum likelihood estimators of θ and of σ 2 are given by θˆ 0 n =

2 σˆ 0n =

⎧ ⎨ Xn,

if

X n ≥ θ0



if

X n < θ0 .

θ0 ,

⎧ n ⎪ 1  ⎪ (X i − X n )2 , ⎪ n ⎪ ⎨ i=1 ⎪ ⎪ ⎪ ⎪ ⎩

1 n

n 

(X i − θ0 )2 ,

if

X n ≥ θ0

if

X n < θ0 .

i=1

It is to be noted that if X n ≥ θ0 , then the maximum likelihood estimators of θ and of σ 2 are the same in  and in 0 . Hence, the likelihood ratio test statistic λ(X ) = 1 when X n ≥ θ0 and H0 : θ ≥ θ0 is not rejected. If X n < θ0 then the likelihood ratio test statistic λ(X ) is as given in Eq. (5.2.4). Thus, if X n < θ0 then the likelihood ratio test procedure rejects H0 if

280

5

Large Sample Test Procedures

n(X n − θ0 )2 > c1 ⇔ Tn < k n  (X i − X n )2

λ(X ) < c ⇔

i=1

where Tn = $

n 

√ n(X n − θ0 ) (X i − X n

)2 /(n

, − 1)

i=1

as X n < θ0 . The constant k is determined so that size of the test is α, that is, sup0 Pθ [Tn < k] = α. When θ = θ0 , Tn ∼ tn−1 distribution and as discussed  in Example 5.2.1, supremum is attained at θ0 and hence k = tn−1,α .  Example 5.2.3

Suppose X ∼ N (θ, 1) and we want to derive the likelihood ratio test procedure for testing H0 : |θ| ≤ a against the alternative H1 : |θ| > a, where a is a positive real number. Corresponding to a random sample X ≡ {X 1 , X 2 , . . . , X n } of size n from a normal N (θ, 1) distribution, the likelihood of θ is given by   1 1 √ exp − (X i − θ)2 2 2π i=1   n √ −n 1 2 = 2π exp − (X i − θ) . 2

L n (θ|X ) =

n 

i=1

Further,  = R and the sample mean X n is the maximum likelihood estimator of θ and it has normal N (θ, 1/n) distribution. In null space θ0 ∈ [−a, a] hence as discussed in Example 2.2.3, the maximum likelihood estimator of θ is given by ⎧ X n < −a ⎨ −a, if θˆ 0n = X , if X n ∈ [−a, a] ⎩ n a, if Xn > a . As in the previous example, if X n ∈ [−a, a], then the maximum likelihood estimator of θ is same in  and in 0 . Hence, the likelihood ratio test statistic λ(X ) = 1 when X n ∈ [−a, a] and H0 : |θ| ≤ a is not rejected, which seems to be reasonable. If X n < −a or X n > a then the likelihood ratio test statistic λ(X ) is given by  λ(X ) =

2 exp{ −n 2 (X n + a) }, if X n < −a −n exp{ 2 (X n − a)2 }, if X n > a

5.2 Likelihood Ratio Test Procedure

281

The likelihood ratio test procedure rejects H0 if λ(X ) < k, that is, if     −n −n 2 2 exp (X n + a) < c1 & X n < −a or exp (X n − a) < c2 2 2 & Xn > a ⇔ (X n + a)2 > c3 & X n < −a or (X n − a)2 > c4 & Xn > a ⇔ (X n + a) < −c5 or (X n − a) > c6 ⇔ X n < −c5 − a or X n > c6 + a ⇔ X n < −c − a or X n > c + a, if c5 = c6 = c ⇔ |X n | > a + c. The constant c is determined so that size of the test is α, that is, α = sup Pθ [|X n | > a + c] θ0

⇔ sup{1 − Pθ [−a − c < X n < a + c]} = α θ0

√ √ ⇔ sup{1 − ( n(a + c − θ)) + ( n(−a − c − θ))} = α θ0

√ √ inf {( n(a + c − θ)) − ( n(−a − c − θ))} = 1 − α −a≤θ≤a √ √ ⇔ ( nc) − (− nc) = 1 − α √ ⇔ ( nc) = 1 − α/2 1 1 ⇔ c = √ −1 (1 − α/2) = √ a1−α/2 . n n ⇔

Thus, H0 : |θ| ≤ a against the alternative H1 : |θ| > a is rejected if |X n | > a + c,  where c = √1n a1−α/2 . In the three examples discussed above, it is possible to convert the critical region [λ(X ) < c] to a critical region in terms of a test statistic Tn , whose null distribution can be obtained for each n. However, such a conversion is not possible for many distributions and hence we have to use the asymptotic distribution of λ(X ), which is discussed in the following theorems. It is assumed that the probability law f (x, θ), indexed by a parameter θ ∈  ⊂ R, belongs to a Cramér family. As a consequence, for large n, the maximum likelihood estimator of θ exists and is CAN with approximate variance 1/n I (θ). This result is heavily used in deriving the asymptotic null distribution of λ(X ).

282

5

Large Sample Test Procedures

Theorem 5.2.1 Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ R. Suppose f (x, θ) belongs to a Cramér family. If λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , where θ0 is a specified value of the parameter, then under H0 , L

−2 log λ(X ) → U ∼ χ21 as n → ∞. Proof Suppose θˆ n is a maximum likelihood estimator of θ based on a random sample X . Since the distribution of X belongs to a Cramér family, θˆ n is CAN with approximate variance 1/n I (θ). The likelihood ratio test procedure rejects H0 : θ = θ0 against the alternative H1 : θ = θ0 , if λ(X ) < c ⇔ − 2 log λ(X ) > c1 where c and c1 are constants and −2 log λ(X ) is given by − 2 log λ(X ) = 2[log L n (θˆ n |X ) − log L n (θ0 |X )].

(5.2.5)

Expanding log L n (θ0 |X ) around θˆ n using Taylor series expansion, we have ∂ log L n (θ0 |X )|θˆ n log L n (θ0 |X ) = log L n (θˆ n |X ) + (θ0 − θˆ n ) ∂θ0 (θˆ n − θ0 )2 ∂ 2 + log L n (θ0 |X )|θˆ n + Rn , 2 ∂θ02 where Rn is the remainder term given by Rn =

(θˆ n −θ0 )3 ∂ 3 3! ∂θ03

log L n (θ0 |X )|θn∗ , where

P θn∗ = αθ0 + (1 − α)θˆ n , 0 < α < 1. It is to be noted that under H0 , θn∗ → θ0 as n → ∞. Further, ∂θ∂ 0 log L n (θ0 |X )|θˆ n = 0, as θˆ n is a solution of the likelihood equation. Substituting the expansion in (5.2.5) we have

−2 log λ(X ) = 2[log L n (θˆ n |X ) − log L n (θ0 |X )]

∂2 2 ˆ = (θn − θ0 ) − 2 log L n (θ0 |X )|θˆ n − 2Rn ∂θ0

2 √ ∂ 1 2 = ( n(θˆ n − θ0 )) − log L n (θ0 |X )|θˆ n − 2Rn . n ∂θ02 Observe that

% % % (θˆ − θ )3 ∂ 3 % 0 % n % ∗ |Rn | = % log L (θ |X )| n 0 θn % % % 3! ∂θ03 % % % % % 1 ∂3 1 %% √ ˆ % % % ∗% . ≤ √ %( n(θn − θ0 ))3 % % log L (θ |X )| n 0 θ n% 3 % n ∂θ0 6 n

5.2 Likelihood Ratio Test Procedure

283

% % % 1 ∂3 % % As shown in the proof of Result 4.3.3 as given by Huzurbazar, % n ∂θ3 log L n (θ0 |X )|θn∗ %% 0 Pθ0 √ ˆ L → K < ∞, where K is a constant. Further, n(θn − θ0 ) → Z 1 , hence by the √ L continuous mapping theorem |( n(θˆ n − θ0 ))3 | → |Z 3 | and hence is bounded in 1

Pθ0

probability. Thus, Rn → 0. By Cramér-Huzurbazar theory, √ L n(θˆ n − θ0 ) → Z 1 ∼ N (0, I −1 (θ0 ))



L n I (θ0 )(θˆ n − θ0 )2 → U ∼ χ21 .

Pθ0

∂ Further, − n1 ∂θ 2 log L n (θ0 |X )|θˆ n → I (θ0 ) . Hence by Slutsky’s theorem under H0 , 2

0

√ 1 ∂2 2 I (θ0 ) ˆ −2 log λ(X ) = ( n(θn − θ0 )) log L n (θ0 |X )|θˆ n − I (θ0 ) n ∂θ02 L

− 2Rn → U ∼ χ21 .    Remark 5.2.1

From Theorem 5.2.1, we note that if λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , where θ0 is a specified real number,  then under H0 , √ ˆ 2 ∂ have the same lim−2 log λ(X ) and ( n(θn − θ0 ))2 − n1 ∂θ 2 log L n (θ0 |X )|θˆ n 0

iting distribution.  Example 5.2.4

Suppose a coin with probability θ of getting heads is tossed 100 times and 60 heads are observed. We derive a likelihood test procedure to test H0 : θ = 0.5 against the alternative H1 : θ = 0.5. Suppose X denotes the outcome of a toss of a coin with probability θ of getting heads. Then X has Bernoulli B(1, θ) distribution. On the basis of a random sample of size 100 from the distribution of X , we want to test the hypothesis H0 : θ = 0.5 against the alternative H1 : θ = 0.5. The likelihood of θ given the data X is given by L n (θ|X ) = θn X n (1 − θ)n−n X n . To derive a likelihood ratio test procedure, we note that the sample mean X n is the maximum likelihood estimator of θ in the entire parameter space  = (0, 1) and the null space 0 is 0 = {0.5} and hence the supremum of the likelihood

284

5

Large Sample Test Procedures

in the null space is attained at θ = 0.5. The likelihood ratio test statistic λ(X ) is given by sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

0.5n X n 0.5n−n X n (X n )n X n (1 − X n )n−n X n

.

It is difficult to find the null distribution of λ(X ) and also difficult to convert the rejection region λ(X ) < c in terms of some statistic Tn . Hence, we use Theorem 5.2.1 to get the asymptotic null distribution of λ(X ). Under H0 , −2 log λ(X ) ∼ χ21 distribution. Thus, H0 is rejected if −2 log λ(X ) > c where c = χ21,1−α = 3.8415, if α = 0.05. For the given data −2 log λ(X ) = 2[60 log(0.6) + 40 log(0.4) − 100 log(0.5)] = 4.0271 > c = 3.8415 

and hence H0 is rejected. The next theorem is an extension of Theorem 5.2.1 to a multiparameter setup.

Theorem 5.2.2 Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk . Suppose f (x, θ) belongs to a Cramér family. If λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , L

where θ0 is a specified vector, then under H0 , −2 log λ(X ) → U ∼ χ2k as n → ∞. Proof Suppose θˆ n is a maximum likelihood estimator of θ based on a random sample X . Since the distribution of X belongs to a Cramér family, θˆ n is CAN with approximate dispersion matrix I −1 (θ)/n. According to the likelihood ratio test procedure, H0 : θ = θ0 is rejected against the alternative H1 : θ = θ0 if λ(X ) < c ⇔ − 2 log λ(X ) > c1 where c and c1 are constants and −2 log λ(X ) is given by −2 log λ(X ) = 2[log L n (θˆ n |X ) − log L n (θ0 |X )]. As in Theorem 5.2.1, we expand log L n (θ0 |X ) around θˆ n using Taylor series expansion. Thus we have k ∂ ˆ (θi0 − θˆ in ) log L n (θ0 |X )|θˆ log L n (θ0 |X ) = log L n (θn |X ) + n ∂θi0 i=1

+

k k 1

2

i=1 j=1

(θi0 − θˆ in )(θ j0 − θˆ jn )

∂2 log L n (θ0 |X )|θˆ + Rn , n ∂θi0 ∂θ j0

where Rn is the remainder term. Further, ∂θ∂i0 log L n (θ0 |X )|θˆ =0, ∀ i = 1, 2, . . . , k, n as θˆ is a solution of the system of likelihood equations. Hence we get n

5.2 Likelihood Ratio Test Procedure

285

−2 log λ(X ) = 2[log L n (θˆ n |X ) − log L n (θ0 |X )] k k

=n

(θi0 − θˆ in )(θ j0 − θˆ jn ) ×

i=1 j=1



 1 ∂2 log L n (θ0 |X )|θˆ − 2Rn n n ∂θi0 ∂θ j0  = n(θˆ n − θ0 ) Mn (θˆ n − θ0 ) − 2Rn , −

where Mn is a matrix with (i, j)-th element given by 2 Mn (i, j) = − n1 ∂θ0i∂∂θ0 j log L n (θ0 |X )|θˆ . By the Cramér-Huzurbazar theorem, n

√ L n(θˆ n − θ0 ) → Z 1 ∼ Nk (0, I −1 (θ0 )) ⇒

n(θˆ n − θ0 ) I (θ0 )(θˆ n − θ0 ) → U ∼ χ2k . L

Further, Mn (i, j) = −

Pθ0 ∂2 1 log L n (θ0 |X )|θˆ → Ii j (θ0 ) , n n ∂θ0i ∂θ0 j Pθ0

for all i, j = 1, 2, . . . , k. Hence, Mn → I (θ0 ). As a consequence, n(θˆ n − θ0 ) Mn (θˆ n − θ0 ) − n(θˆ n − θ0 ) I (θ0 )(θˆ n − θ0 ) P

θ0 = n(θˆ n − θ0 ) (Mn − I (θ0 ))(θˆ n − θ0 ) → 0 ,

L as n → ∞. Hence, n(θˆ n − θ0 ) Mn (θˆ n − θ0 ) → U ∼ χ2k . Now the remainder term Rn is given by

Rn =

  3 k k k 1 ∂ log L n (θ0 |X ) (θi0 − θˆ in )(θ j0 − θˆ jn )(θl0 − θˆ ln ) |θ∗n , 3! ∂θi0 ∂θ j0 ∂θl0 i=1 j=1 l=1

Pθ where, θ∗n = αθ0 + (1 − α)θˆ n , 0 < α < 1 and under H0 , θ∗n → θ0 . To show that Pθ

Rn → 0, consider % % % %  3 k k % %1 k log L (θ |X ) ∂ n 0 (θi0 − θˆ in )(θ j0 − θˆ jn )(θl0 − θˆ ln ) |θ∗n %% |Rn | = %% ∂θi0 ∂θ j0 ∂θl0 % % 3! i=1 j=1 l=1 k k k √ √ 1 √ ≤ √ | n(θi0 − θˆ in )|| n(θ j0 − θˆ jn )|| n(θl0 − θˆ ln )| × 6 n i=1 j=1 l=1 % 3 % % 1 ∂ log L n (θ0 |X ) % % % ∗ | θn % . % n ∂θ ∂θ ∂θ i0 j0 l0

286

5

Large Sample Test Procedures

Now we use the condition that the third order partial derivatives of log f (x, θ) are bounded by integrable functions and proceed on the similar lines as in the proof of % % 3 % ∂ log L (θ |X ) % Pθ0 Result 4.3.3, as given by Huzurbazar. It then follows that % n1 ∂θi0 ∂θnj0 ∂θ0 l0 |θ∗n % → K < ∞, where K is a constant. Further, L √ √  n(θˆ n − θ0 ) = n (θˆ 1n − θ10 ), (θˆ 2n − θ20 ), . . . , (θˆ kn − θk0 ) → Z 1 . k  k  k Suppose g : Rk →R is defined as g(x1 , x2 , . . . , xk ) = i=1 j=1 l=1 |x i ||x j ||xl |. Then g is a continuous function and hence by the continuous mapping theo√ L rem, g( n (θˆ 1n − θ10 ), (θˆ 2n − θ20 ), . . . , (θˆ kn − θk0 ) → g(Z 1 ) which implies  k k √ √ √ that k | n(θi0 − θˆ in )|| n(θ j0 − θˆ jn )|| n(θl0 − θˆ ln )| is bounded i=1

j=1

l=1

Pθ0

in probability and hence Rn → 0. Thus, by Slutsky’s theorem under H0 , −2 log λ(X ) = n(θˆ n − θ0 ) Mn (θˆ n − θ0 ) − 2Rn → U ∼ χ2k . L

 

 Remark 5.2.2

As in Theorem 5.2.1, in multiparameter set up, if λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , where θ0 is a specified real number, then under H0 , −2 log λ(X ) and n(θˆ n − θ0 ) Mn (θˆ n − θ0 ) have the same limiting distribution. In the next chapter we will discuss how these two statistics are related to a score test statistic and Wald’s test statistic.  Example 5.2.5

Suppose {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables with following probability mass function. P[X 1 = 1] = (1 − θ1 )/2, P[X 1 = 2] = θ2 /2, & P[X 1 = 4] = (1 − θ2 )/2 .

P[X 1 = 3] = θ1 /2

We derive a likelihood ratio test procedure to test H0 : θ1 = 1/2, θ2 = 1/4 against the alternative H1 : θ = 1/2, θ2 = 1/4. The likelihood of θ corresponding to the data X ≡ {X 1 , X 2 , . . . , X n } is given by        1 − θ1 n 1 θ 2 n 2 θ 1 n 3 1 − θ2 n 4 L n (θ1 , θ2 |X ) = 2 2 2 2 −n n1 n2 n3 n4 = 2 (1 − θ1 ) θ2 θ1 (1 − θ2 ) , 

5.2 Likelihood Ratio Test Procedure

287

as n 1 + n 2 + n 3 + n 4 = n, where n i denotes the frequency of i, i = 1, 2, 3, 4 in the given sample of size n. To develop a likelihood ratio test procedure, we first obtain the maximum likelihood estimator of (θ1 , θ2 ) . Likelihood is a differentiable function of θ1 and θ2 . Hence, the system of likelihood equations is given by ∂ n1 n3 log L n (θ1 , θ2 |X ) = − + =0 ∂θ1 1 − θ1 θ1 n2 n4 ∂ log L n (θ1 , θ2 |X ) = − =0 & ∂θ2 θ2 1 − θ2 and its solution θˆ n = (θˆ 1n , θˆ 2n ) is given by θˆ 1n =

n3 n1 + n3

&

θˆ 2n =

n4 . n2 + n4

 n1 The matrix of second partial derivatives is diag − (1−θ



2 1)



n3 , − nθ22 θ12 2



n4 (1−θ2 )2

and it is negative definite for all (θ1 , θ2 ) . Hence, θˆ n = (θˆ 1n , θˆ 2n ) is the maximum likelihood estimator of θ = (θ1 , θ2 ) . The entire parameter space  is  = {(θ1 , θ2 ) |θ1 , θ2 ∈ (0, 1)} and the null space 0 is 0 = {(θ1 , θ2 ) |θ1 = 1/2, θ2 = 1/4}. The likelihood ratio test statistic λ(X ) for testing the null hypothesis H0 : θ1 = 1/2, θ2 = 1/4 against the alternative H1 : θ = 1/2, θ2 = 1/4 is given by sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

2−n

( 41 )n 1 +n 3 ( 18 )n 2 ( 38 )n 4 . (1 − θˆ 1n )n 1 θˆ n 2 θˆ n 3 (1 − θˆ 2n )n 4 2n

1n

The null hypothesis is rejected if λ(X ) < c < 1 ⇔ − 2 log λ(X ) > c1 . For the large sample size, −2 log λ(X ) ∼ χ22 distribution. H0 is rejected if −2 log λ(X ) > χ22,1−α , where χ22,1−α is (1 − α)-th quantile of χ22 distribution.  The next theorem is a further extension of Theorem 5.2.2 in a multiparameter setup. In many cases, θ0 is not completely specified. In null setup, few parameters out of {θ1 , θ2 , . . . , θk } are specified and remaining need to be estimated on the basis of the given data. More precisely, suppose θ is partitioned as θ = (θ(1) , θ(2) ) , where θ(1) = (θ1 , θ2 , . . . , θm ) and θ(2) = (θm+1 , θm+2 , . . . , θk ) . In null setup θ(1) is completely specified and one needs to estimate the parameters involved in θ(2) . θ(2) is known as a nuisance parameter. We come across with such a setup in many practical situations. For example, in regression analysis, the global F-test and the partial Ftest for testing the significance of regression coefficients are the likelihood ratio test procedures where few parameters are specified under null setup and the remaining are estimated from the given data. Goodness of fit tests, test for validity of the model

288

5

Large Sample Test Procedures

and the tests for contingency table are the likelihood ratio tests when underlying probability model is a multinomial distribution in k cells. In null setup, cell probabilities are either completely specified or indexed by an unknown parameter θ or in some cases there is a certain relation amongst the cell probabilities. Following two theorems are related to the asymptotic null distribution of −2 log λ(X ), when the null hypothesis is a composite hypothesis. We outline the proof of this theorem, for details one may refer to Ferguson [2], Theorem 22. Theorem 5.2.3 Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk and the distribution of X belongs to a Cramér family. Suppose λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ(1) = θ(1) 0 against the alternative (1) (1) (1) (2)  H1 : θ = θ0 , where θ = (θ1 , θ2 , . . . , θm ) and θ = (θm+1 , θm+2 , . . . , θk ) is partition of θ with m < k and θ(1) 0 is a specified vector. Then under H0 , L

−2 log λ(X ) → U ∼ χr2 as n → ∞, where r = k − (k − m) = m. (1) (2) Proof Suppose θˆ n = (θˆ n , θˆ n ) is a maximum likelihood estimator of (2) θ=(θ(1) , θ(2) ) in the entire parameter space and θ˜ is a maximum likelihood estin

(1)

mator of θ(2) when θ(1) = θ0 , based on a random sample X . Since the distribution of X belongs to a Cramér family, θˆ n is CAN for θ with approximate dispersion (2) matrix I −1 (θ)/n. Similarly, θ˜ n is CAN for θ(2) with approximate dispersion matrix 22 (θ)/n, say. The likelihood ratio test statistic λ(X ) is then given by (1) (2) (2) (1) λ(X ) = sup L n (θ|X )/ sup L n (θ|X ) = L n ((θ0 , θ˜ n )|X )/L n ((θˆ n , θˆ n )|X ) . 0



(2)  Suppose θ0 = (θ(1) 0 , θ 0 ) , then −2 log λ(X ) can be expressed as (1)

(2)

(1)

(2)

(1)

(2)

−2 log λ(X ) = 2[log L n ((θˆ n , θˆ n )|X ) − log L n ((θ0 , θ˜ n )|X )] (2) = 2[log L n ((θˆ n , θˆ n )|X )) − log L n ((θ(1) 0 , θ 0 )|X )] (1) (2) ˜ (2) − 2[log L n ((θ(1) 0 , θ n )|X ) − log L n ((θ 0 , θ 0 )|X )] = 2[log L n (θˆ |X ) − log L n (θ |X )]



n 0 (1) ˜ (2) (1) (2) 2[log L n ((θ0 , θn )|X ) − log L n ((θ0 , θ0 )|X )].

Thus, −2 log λ(X ) = Un − Wn , where Un = 2[log L n (θˆ n |X ) − log L n (θ0 |X )] (1) (2) ˜ (2) & Wn = 2[log L n ((θ(1) 0 , θ n )|X ) − log L n ((θ 0 , θ 0 )|X )] .

5.2 Likelihood Ratio Test Procedure

289 L

Since θ0 is a known vector, by Theorem 5.2.2, Un → U ∼ χ2k . Expanding log L n (1) (2) (1) (2) ((θ , θ )|X ) around (θ , θ˜ ) by Taylor series expansion and proceeding on 0

0

n

0

L

similar lines as in the proof of Theorem 5.2.2, one can show that Wn → W ∼ χ2k−m . Thus, for large n, −2 log λ(X ) is distributed as U − W . It has been proved by Wilks [1] that Cochran’s theorem on quadratic forms in normal variates holds for the asymptotic distributions and hence U − W is distributed as χ2k−(k−m) . Thus, under H0 for large n, −2 log λ(X ) has χ2m distribution. It is to be noted that the degrees of freedom m is the difference between the number of parameters estimated in the entire parameter space and the number of parameters estimated in the null parameter space.   Theorem 5.2.3 is useful in a variety of tests in regression analysis. Following example illustrates its application.  Example 5.2.6

Suppose {X 1 , X 2 , . . . , X n } is a random sample from normal N (μ, σ 2 ) distribution. We illustrate Theorem 5.2.3 to derive the likelihood ratio test procedure for testing H0 : σ = σ0 against the alternative H0 : σ = σ0 when μ is unknown. Suppose X ∼ N (μ, σ 2 ) distribution. We first derive a likelihood ratio test procedure for testing H0 , when μ is known. Then the null hypothesis H0 : σ = σ0 is a simple null hypothesis. The null parameter space is 0 = {σ0 } and the entire parameter space is  = (0, ∞). Corresponding to a random sample X ≡ {X 1 , X 2 , . . . , X n } of size n from normal N (μ, σ 2 ) distribution, the likelihood of σ 2 is given by L n (σ 2 |X ) =

n 



i=1

1 2πσ

exp{−

1 (X i − μ)2 } . 2σ 2

It then follows that the maximum estimator of σ 2 in the entire parameter n likelihood 2 2 space is given by S0n = i=1 (X i − μ) /n. The likelihood ratio test statistic λ(X ) is then given by

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

2 S0n

σ02

n/2

⎧ ⎫ n 2  ⎪ ⎪  (X − μ) ⎪ ⎪ i ⎨ 1 1 ⎬ i=1 . exp − 2 2 ⎪ 2 S0n σ0 ⎪ ⎪ ⎪ ⎩ ⎭

The null hypothesis is rejected if λ(X ) < c < 1 ⇔ − 2 log λ(X ) > c1 . If sample size is large, then by Theorem 5.2.1, −2 log λ(X ) ∼ χ21 distribution and H0 is rejected if −2 log λ(X ) > χ21,1−α where χ21,1−α is (1 − α)-th quantile of χ21 distribution. Suppose μ is not known. Then the null hypothesis is also a composite hypothesis and μ is a nuisance parameter. The entire parameter space  is  = {(μ, σ 2 )|μ ∈ R, σ 2 > 0} and the null space 0 is

290

5

Large Sample Test Procedures

0 = {(μ, σ 2 )|μ ∈ R, σ 2 = σ02 }. In the null space the maximum likelihood estimators of μ is X n . In Example 3.3.2 we have obtained the maximum likelihood n (X i − X n )2 /n estimators of μ and σ 2 which are given by X n and Sn2 = i=1 respectively. Hence, the likelihood ratio test statistic λ(X ) is given by

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

Sn2 σ02

n/2

⎧ ⎫ n 2  ⎪ ⎪  (X − X ) ⎪ ⎪ i n ⎨ 1 1 ⎬ i=1 . exp − 2 ⎪ 2 Sn2 σ0 ⎪ ⎪ ⎪ ⎩ ⎭

The null hypothesis is rejected if λ(X ) < c < 1 ⇔ − 2 log λ(X ) > c1 . If sample size is large, then by Theorem 5.2.3, −2 log λ(X ) ∼ χ21 distribution, since we estimate two parameters in the entire parameter space and only one parameter in the null space. H0 is rejected if −2 log λ(X ) > χ21,1−α where χ21,1−α is (1 − α)th quantile of χ21 distribution.  The next theorem is an extension of Theorem 5.2.3 where in null space the parameters have some functional relationship among themselves. More precisely, in the null setup, θi = gi (β1 , β2 , . . . , βm ) , i = 1, 2, . . . , k, where m ≤ k and g1 , g2 , . . . , gk are Borel measurable functions from Rm to R. Thus, k parameters are expressed in terms of m parameters. Such a scenario is very common in the tests for contingency tables and the goodness of fit tests. In Theorem 5.2.4, we derive the large sample distribution of −2 log λ(X ) in this case. The theorem is heavily used in all tests related to contingency tables and in all the goodness of fit tests, which are discussed in the next chapter. Theorem 5.2.4 Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk and the distribution of X belongs to a Cramér family. Suppose λ(X ) is a likelihood ratio test statistic based on a random sample X = {X 1 , X 2 , . . . , X n } for testing H0 : θ ∈ 0 against the alternative H1 : θ ∈ 1 , where in 0 , θi = gi (β1 , β2 , . . . , βm ) , i = 1, 2, . . . , k, where m ≤ k and g1 , g2 , . . . , gk are Borel measurable functions from Rm to R, having continuous partial derivatives of first order. Then under L

H0 , −2 log λ(X ) → U ∼ χr2 as n → ∞, where r = k − m, k the number of parameters estimated in the entire parameter space and m the number of parameters estimated in the null parameter space. Proof Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk , k ≥ 1. Suppose θˆ n is the maximum likelihood estimator of θ in the entire parameter space, based on the random sample {X 1 , X 2 , . . . , X n } from the distribution of X . Further, θˆ n is a CAN estimator of θ with approximate dispersion matrix I −1 (θ)/n of order k × k. In the null

5.2 Likelihood Ratio Test Procedure

291

setup θi = gi (β1 , β2 , . . . , βm ) , i = 1, 2, . . . , k, where m ≤ k and g1 , g2 , . . . , gk are Borel measurable functions from Rm to R. Suppose θ˜ n is the maximum likelihood estimator of θ in the null setup, that is, θ˜ n = g(β˜ n ), where β˜ n is the maximum likelihood estimator of β = ((β1 , β2 , . . . , βm ) based on the random sample {X 1 , X 2 , . . . , X n } from the distribution of X . Further, β˜ n is a CAN estimator of β with approximate dispersion matrix I −1 (β)/n of order m × m. The likelihood ratio test statistic λ(X ) is then given by λ(X ) = sup L n (θ|X )/ sup L n (θ|X ) = L n (θ˜ n |X )/L n (θˆ n |X ) . 0



Hence, −2 log λ(X ) = 2[log L n (θˆ n |X ) − log L n (θ˜ n |X )] = 2[log L n (θˆ |X ) − log L n (θ |X )] n

0

− 2[log L n (β˜ n |X ) − log L n (β 0 |X )] = U n − Wn where Un = 2[log L n (θˆ n |X ) − log L n (θ0 |X )] Wn = 2[log L n (β˜ n |X ) − log L n (β 0 |X )], and β 0 is a known vector such that θ0 = g(β 0 ) and log L n (β 0 |X ) = log L n (θ0 |X ). L

L

From Theorem 5.2.2, Un → U ∼ χ2k and Wn → W ∼ χ2m . Thus, for large n, −2 log λ(X ) is distributed as U − W . Again using the result by Wilks [1], we claim that U − W is distributed as χ2k−m . Thus for large n under H0 , −2 log λ(X ) has χ2k−m distribution.    Example 5.2.7

Suppose X ∼ Poi(θ1 ) and Y ∼ Poi(θ2 ). Suppose X = {X 1 , X 2 , . . . , X n 1 } is a random sample from a Poisson distribution with parameter θ1 and Y = {Y1 , Y2 , . . . , Yn 2 } is a random sample from a Poisson distribution with parameter θ2 . Suppose X and Y are independent. We derive the likelihood ratio test procedure to test H0 : θ1 = θ2 against H1 : θ1 = θ2 . In the entire parameter space, the maximum likelihood estimator of θ1 is θˆ 1n 1 = X n 1 and the maximum likelihood estimator of θ2 is θˆ 2n 2 = Y n 2 . In the null setup θ1 = θ2 = θ, say. Then using independence of X and Y , the likelihood of θ given random samples X and Y is L n (θ|X , Y ) = e

−(n 1 +n 2 )θ

θ

  n 1 X n 1 +n 2 Y n 2

n 1  i=1

Xi !

−1 n 2  i=1

−1 Yi !

.

292

5

Large Sample Test Procedures

It then follows that the maximum likelihood  estimator  θˆ n 1 +n 2 of θ is θˆ n 1 +n 2 = n 1 X n 1 + n 2 Y n 2 /(n 1 + n 2 ). The likelihood ratio test statistic λ(X ) is then given by 

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

n 1 X n +n 2 Y n 2 ˆ e−(n 1 +n 2 )θn1 +n2 θˆ n 1 +n 21 ˆ

ˆ



n1 X n1 n2 Y n2 θˆ 2n 2

e−(n 1 θ1n1 +n 2 θ2n2 ) θˆ 1n 1

.

The null hypothesis is rejected if λ(X ) < c < 1 ⇔ − 2 log λ(X ) > c1 . If sample sizes are large, then by Theorem 5.2.4, −2 log λ(X ) ∼ χ21 distribution, as in the entire parameter space we estimate two parameters and in null space we estimate one parameter. H0 is rejected if −2 log λ(X ) > χ21,1−α where χ21,1−α is (1 − α)-th quantile of χ21 distribution. 

5.3

Large Sample Tests Using R

In this section we illustrate how the R software is useful in large sample test procedures and in likelihood ratio test procedures.  Example 5.3.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a Cauchy C(θ, 1) distribution. We derive a large sample test procedure for testing H0 : θ = 0 against the alternative H1 : θ > 0, based on the maximum likelihood estimator θˆ n of θ as it is the BAN estimator of θ with approximate variance 2/n. Thus, the test statistic √ Tn is given by Tn = n/2 θˆ n and H0 is rejected if Tn > c. Under H0 for large n, Tn ∼ N (0, 1) distribution. Hence, P0 [Tn > c] = α ⇒ c = −1 (1 − α). The power function β(θ) for θ ≥ 0 is given by & '  β(θ) = Pθ [Tn > c] = Pθ θˆ n > 2/n c & '    = Pθ n/2(θˆ n − θ) > c − n/2 θ = 1 −  c − n/2 θ . Following R code is used to find the maximum likelihood estimator θˆ n on the basis of a random sample generated when θ = 0.5, it computes Tn and the power function with α = 0.05.

5.3 Large Sample Tests Using R

293

th=.5; n=120; set.seed(123); x=rcauchy(n,th,1); summary(x) dlogl=function(par) { term=0 for(i in 1:n) { term=term + (x[i] - par)/(1+(x[i]-par)ˆ2) } dlogl= 2*term return(dlogl) } dlogl(1); dlogl(0.1); mle=uniroot(dlogl,c(.1,1))$root; mle T = sqrt(n/2)*mle; T; b = qnorm(.95); b ; p = 1-pnorm(T); p ### p-value th1=seq(0,1.5,.03); beta = 1 - pnorm(b-sqrt(n/2)*th1); beta[1] plot(th1, beta, "o", pch = 20, main="Power Function", xlab="Theta", ylab = "Power",col="blue") abline(h=beta[1], col="dark blue")

For the random sample of size n = 120 generated with θ = 0.5, θˆ n = 0.5253, Tn = 4.0689 which is larger than 1.65, 95% quantile of the standard normal distribution, p-value is almost 0. Hence, the data do not have sufficient support to null setup and we reject H0 . From Fig. 5.1, we observe that the power function β(θ) is an increasing function of θ and is almost 1 for all θ > 0.5. It is 0.05 at θ = 0, since we have decided the cut-off c so that power function at θ = 0 is 0.05. It can be verified that if we generate a random sample with θ = 0, then the

0.6 0.2

0.4

Power

0.8

1.0

Power Function

0.0

0.5

1.0 Theta

Fig. 5.1 Cauchy C(θ, 1) distribution: power function

1.5

294

5

Large Sample Test Procedures

data support to null setup. In fact generating random samples with θ = 0, we can find the false positive rate, that is, estimate of probability of type I error, also known as empirical level of significance. Following R code computes the false positive rate, when random samples of size 120 are generated m = 1000 times with θ = 0. th=0; n=120; m=1000; x=matrix(nrow=n,ncol=m) for(j in 1:m) { set.seed(j) x[,j]=rcauchy(n,th,1) } dlogL=function(par) (T > b))/m; FPR { term=0 for(i in 1:n) { term=term + (x[i,j] - par)/(1+(x[i,j]-par)ˆ2) } dlogL= 2*term return(dlogL) } mle=med=c() for(j in 1:m) { med[j]=median(x[,j]) mle[j]=uniroot(dlogL,c(med[j]-3,med[j]+3))$root } summary(mle);T = sqrt(n/2)*mle; b = qnorm(.95); b FPR=length(which(T > b))/m; FPR

From the 1000 simulations, estimate of probability of type I error comes out to be 0.051. Summary statistic of 1000 values of θˆ n shows that the sample median is 0.0004, which is close to 0.  In Example 3.3.9, we have shown that if (X , Y ) ∼ N2 (0, 0, 1, 1, ρ) distribution, ρ ∈ (−1, 1), then the sample correlation coefficient Rn is a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n. In Example 4.3.3, it is shown that the maximum likelihood estimator ρˆ n of ρ is also a CAN estimator of ρ with approximate variance (1 − ρ2 )2 /n(1 + ρ2 ) and it is smaller than that of Rn . In the following example, we derive a test procedure to test H0 : ρ = ρ0 against the alternative H1 : ρ = ρ0 based on Rn and ρˆ n and compare their performance on the basis of the power function.

5.3 Large Sample Tests Using R

295

 Example 5.3.2

Suppose (X , Y ) has a bivariate normal distribution with zero mean vector and dispersion matrix  given by   1 ρ = , ρ 1 ρ ∈ (−1, 1). Suppose we want to test H0 : ρ = ρ0 against the alternative H1 : ρ = ρ0 . In Example 3.3.9 and Example 4.3.3, we have shown that  √ n n(1 + ρ2 ) L L (Rn − ρ) → N (0, 1) & (ˆρn − ρ) → N (0, 1). 2 (1 − ρ ) (1 − ρ2 ) Thus, to test H0 : ρ = ρ0 against the alternative H1 : ρ = ρ0 , we have the following two test statistics.  Tn =

n(1 + ρ20 )

(1 − ρ20 )

(ˆρn − ρ0 ) &

√ n Sn = (Rn − ρ0 ). (1 − ρ20 )

The critical region is [|Tn | > c] and [|Sn | > c]. The cut-off c is decided using the given level of significance α and the large sample null distribution of the test statistic, which is standard normal for both the test statistics. Hence, c = a1−α/2 . We compare the performance of these test statistics, based on the empirical level of significance and the power function. The power function βTn (ρ) of Tn is given by βTn (ρ) = Pρ [|Tn | > c] = 1 − Pρ [[|Tn | ≤ c] ⎤ ⎡ c(1 − ρ20 ) c(1 − ρ20 ) = 1 − Pρ ⎣−  + (ρ0 − ρ) < (ˆρn − ρ) ≤  + (ρ0 − ρ)⎦ n(1 + ρ20 ) n(1 + ρ20 ) = 1 −  (a(ρ) + c b(ρ)) +  (a(ρ) − c b(ρ)) ,

where   n(1 + ρ2 ) 1 + ρ2 (1 − ρ20 ) a(ρ) = (ρ0 − ρ) & b(ρ) =  . 2 2 1−ρ 1 + ρ20 (1 − ρ ) The power function β Sn (ρ) of Sn is given by β Sn (ρ) = Pρ [|Sn | > c] = 1 − Pρ [[|Sn | ≤ c] √ √ = 1 − Pρ [−c(1 − ρ20 )/ n + ρ0 − ρ < (Rn − ρ) ≤ c(1 − ρ20 )/ n + ρ0 − ρ] ,√ √ √ c(1 − ρ20 ) c(1 − ρ20 ) n(ρ0 − ρ) n n(ρ0 − ρ) − ρ) ≤ − < (R + = 1 − Pρ n (1 − ρ2 ) (1 − ρ2 ) (1 − ρ2 ) (1 − ρ2 ) (1 − ρ2 ) √



c(1 − ρ20 ) c(1 − ρ20 ) n(ρ0 − ρ) n(ρ0 − ρ) + − = 1− + . 2 2 2 (1 − ρ ) (1 − ρ ) (1 − ρ ) (1 − ρ2 )

296

5

Large Sample Test Procedures

Using the following R code, we compute the empirical levels of significance and the power functions. rho_0=0.3;mu = c(0,0); sig=matrix(c(1,rho_0,rho_0,1),nrow=2) n = 80; nsim = 1500; R = u = v = mle = c(); library(mvtnorm) g=function(a) { term=aˆ3-aˆ2*v1-a*(1-u1)-v1 return(term) } dg=function(a) { term=3*aˆ2-a*2*v1 + u1-1 return(term) } for(i in 1:nsim) { set.seed(i) x = rmvnorm(n,mu,sig) R[i] = cor(x)[1,2] u[i] = sum((x[,1]ˆ2+x[,2]ˆ2))/n v[i] = sum((x[,1]*x[,2]))/n } m=5;e=matrix(nrow=m,ncol=nsim) for(i in 1:nsim) { e[1,i]=v[i] v1 = v[i] u1 = u[i] j = 1; diff = 1 while(diff > 10ˆ(-4)) { e[j+1,i]= e[j,i]-g(e[j,i])/dg(e[j,i]) diff=abs(e[j+1,i]-e[j,i]) j=j+1 } mle[i]=e[j,i] } summary(mle); tn = sn = pft = pfs = a = d = e = f = c() alpha = 0.05; b = qnorm(1-alpha/2); b for(i in 1:nsim) { tn[i] = sqrt(n*(1+rho_0ˆ2))*(mle[i]-rho_0)/(1-rho_0ˆ2) sn[i] = sqrt(n)*(R[i]-rho_0)/(1-rho_0ˆ2) } d0=data.frame(tn, sn); d1=round(d0,4); View(d1);head(d1); tail(d1) elost = length(which(abs(tn)>b))/nsim; elost eloss = length(which(abs(sn)>b))/nsim; eloss ### power function

5.3 Large Sample Tests Using R

297

rho = seq(-0.36,0.86,0.03); lrho = length(rho); lrho; for(i in 1:lrho) { a[i] = sqrt(n*(1+rho[i]ˆ2))*(rho_0-rho[i])/(1-rho[i]ˆ2) d[i] = sqrt((1+rho[i]ˆ2))*(1-rho_0ˆ2)/((1-rho[i]ˆ2)*sqrt(1+rho_0ˆ2)) e[i] = sqrt(n)*(rho_0-rho[i])/(1-rho[i]ˆ2) f[i] = (1-rho_0ˆ2)/(1-rho[i]ˆ2) pft[i] = 1 - pnorm(a[i]+b*d[i]) + pnorm(a[i]- b*d[i]) pfs[i] = 1 - pnorm(e[i]+b*f[i]) + pnorm(e[i]-b*f[i]) } s=seq(-.4,.9,.1);u=c(0,0.05,seq(.1,.9,.1)) plot(rho,pft,"o",lty=1,main="Power Function", xlab = expression(paste("rho")), ylab=expression(paste("Power")),col="dark blue", xlim=c(-.4,1.3),xaxt="n",ylim=c(0,1),yaxt="n") lines(rho,pfs,"o",lty=2,col="green") abline(h=alpha,col="purple") axis(1,at = s,las = 1,cex.axis = 0.7) axis(2,at = u,las = 1,cex.axis = 0.7) legend("topleft",legend=c("Tn","Sn"),lty=c(1,2), col=c("dark blue","green"), title=expression(paste("Test Statistic"))) umle=a+b*d; uR=e+b*f; lmle=a-b*d; lR=e-b*f d2=data.frame(umle,uR,lmle,lR);d3=round(d2,4);head(d3) avmle=(1-rhoˆ2)ˆ2/(n*(1+rhoˆ2));avR=(1-rhoˆ2)ˆ2/n d4=data.frame(rho,avmle,avR);d5=round(d4,4);View(d5); head(d5);tail(d5)

The values of both the test statistics are displayed by the View(d1) function, the first 6 and the last 6 can be obtained from the head(d1) and tail(d1) function respectively. The empirical level of significance is 0.054 and 0.053 corresponding to Tn and Sn respectively, which is very close to the given level of significance 0.05. From Fig. 5.2, we note that power functions of the tests based on both the test statistics are almost the same. At ρ = 0.3, the power functions of both the test statistics have value 0.05, as the cut-off is determined corresponding to α = 0.05. As the values of ρ shift away from ρ = 0.3 in either direction, the power functions increase as expected. From data frame d3, we note that 95% confidence intervals for ρ based on Tn and Sn are almost the same. Further, the behavior of the power functions of both the test statistics is the same. Such a similarity is in view of the fact that the large sample distributions of both ρˆ n and of Rn are normal, with the same mean and their approximate variances are almost the same, for large n. These are displayed in Table 5.1, for some values of ρ and for n = 80. These can be observed for all values of ρ with View(d5) function. 

298

5

Large Sample Test Procedures

Power Function Test Statistic Tn Sn

0.90 0.80 0.70

Power

0.60 0.50 0.40 0.30 0.20 0.10 0.05 0.00 −0.4

−0.2

0.0

0.2

0.4

0.6

0.8

rho

Fig. 5.2 Bivariate normal N2 (0, 0, 1, 1, ρ) distribution: power function Table 5.1 N2 (0, 0, 1, 1, ρ) distribution: approximate variances of ρˆ n and Rn ρ

A.Variance of MLE

A.Variance of R

−0.36 −0.33 −0.30 −0.27 −0.24 −0.21

0.0084 0.0090 0.0095 0.0100 0.0105 0.0109

0.0095 0.0099 0.0104 0.0107 0.0111 0.0114

In the following examples we discuss how to carry out likelihood ratio test procedures using R. As discussed in Sect. 5.2, the first step in this procedure is to find the maximum likelihood estimator in the null and the entire parameter space. Thus, all the tools presented in Sect. 4.5 can be utilized to find the maximum likelihood estimator.  Example 5.3.3

Suppose X has a truncated exponential distribution, truncated above at 5. Its probability density function is given by f (x, θ) =

θe−xθ , 1 − e−5θ

0 < x < 5,

θ>0.

5.3 Large Sample Tests Using R

299

Suppose we want to test H0 : θ = θ0 = 2 against the alternative H1 : θ = θ0 on the basis of a random sample X ≡ {X 1 , X 2 , . . . X n } generated from this distribution. To generate a random sample, we use a probability integral transformation. Thus, invert the distribution function FX (x) of X . It is given by ⎧ 0, if x 0 for all pi (θ) = 1. The likelihood ratio test is useful to validate such a model. i and i=1 The likelihood ratio test is related to Wald’s test and the score test. Sect. 6.4 is concerned with Wald’s test procedure and the score test procedure and their relation with the likelihood ratio test procedure. An important finding of this section is the link between a score test statistic and Karl Pearson’s chi-square test statistic. It is proved that while testing any hypothesis about the parameters of a multinomial distribution, these two statistics are identical. Section 6.6 presents a brief introduction to the concept of a consistency of a test procedure. Section 6.7 elaborates on the application of R software to validate various results proved in earlier sections, to perform the goodness of test procedures and tests for contingency tables. Thus, in this chapter most of the tests are associated with a multinomial distribution. Hence, the next section is devoted to the in-depth study of a multinomial distribution. Illustrative examples are given to show that in some cases a multinomial distribution belongs to an exponential family or a Cramér family and we can use all the results established in Chap. 4. We focus on the maximum likelihood estimation of cell probabilities p and the asymptotic properties of the maximum likelihood estimator of p. Some tests associated with multinomial distribution are also developed.

6.2

Multinomial Distribution and Associated Tests

In an experiment E with only two outcomes, the random variable X is defined as X = 1, if outcome of interest occurs and X = 0 otherwise. For example, in Bernoulli trials there are only two outcomes, the outcome of interest is labeled as success and the other as failure. The distribution of X is Bernoulli B(1, θ) distribution where θ is interpreted as the probability of success. Multinomial distribution is an extension of the Bernoulli distribution and it is a suitable model for the experiment with k outcomes. Suppose an experiment E results in k outcomes {O1 , O2 , . . . , Ok } and pi denotes the probability that E results in outcome Oi , pi > 0, i = 1, 2, . . . , k and k p i=1 i = 1. A random variable Yi , for i = 1, 2, . . . , k, is defined as Yi =

⎧ ⎨ 1, ⎩

0,

if outcome is Oi , otherwise ,

k Yi = 1, implying that Yi will be 1 only for one i and will be 0 for all where i=1 other i. The joint probability mass function p(y1 , y2 , . . . , yk ) of {Y1 , Y2 , . . . , Yk } is

6.2 Multinomial Distribution and Associated Tests

311

given by p(y1 , y2 , . . . , yk ) =

k 

y

pi i ,

i=1

yi = 0, 1,

k 

yi = 1 &

i=1

k 

pi = 1 .

i=1

The joint distribution of {Y1 , Y2 , . . . , Yk } is a multinomial distribution in k cells, specified k by the cell probabilities p1 , p2 , . . . , pk with pi > 0 for all i = 1, 2, . . . , k pi = 1. Thus, the parameter space of the multinomial distribution in k and i=1 cells is given by =

p = ( p1 , p2 , . . . , pk )| pi > 0 ∀ i = 1, 2, . . . , k &

k 

pi = 1 .

i=1

It is clear that Yi ∼ B(1, pi ) with E(Yi ) = pi and V ar (Yi ) = pi (1 − pi ), i = 1, 2, . . . , k. If Yi = 1 then Y j = 0 for any j = i, implies that Yi Y j = 0 with probability 1. As a consequence, E(Yi Y j ) = 0 & Cov(Yi , Y j ) = −E(Yi )E(Y j ) = − pi p j ∀ i = j = 1, 2, . . . , k . Suppose a random vector Y is defined as Y = (Y1 , Y2 , . . . , Yk ) , then E(Y ) = ( p1 , p2 , . . . , pk ) and the dispersion matrix D = [σi j ] of Y is of order k × k, where σii = pi (1 − pi ) and σi j = − pi p j . If we add all the columns of D, then for each row, the sum comes out to be 0 implying that D is a singular matrix. This is aconsequence of the fact that {Y1 , Y2 , . . . , Yk } are linearly related by the identity k  i=1 Yi = 1. Hence, we consider the random vector Y as Y = (Y1 , Y2 , . . . , Yk−1 ) ,  then E(Y ) = ( p1 , p2 , . . . , pk−1 ) and dispersion matrix D = [σi j ] of Y is of order (k − 1) × (k − 1), where σii = pi (1 − pi ) and σi j = − pi p j . D can be shown to be positive definite. For example for k = 3,   p1 (1 − p1 ) − p1 p2 D= − p1 p2 p2 (1 − p2 ). The first principal minor is p1 (1 − p1 ) > 0 and the second principal minor, which is the same as the determinant |D| is |D| = p1 (1 − p1 ) p2 (1 − p2 ) − p12 p22 = p1 p2 (1 − p1 − p2 ) = p1 p2 p3 > 0 implying that D is a positive definite matrix. In general, |D| = p1 p2 . . . pk . The joint probability mass function p(y) = p(y1 , y2 , . . . , yk−1 ) of {Y1 , Y2 , . . . , Yk−1 } is given by k k   y p(y) = pi i ⇔ log p(y) = yi log pi , i=1

where pk = 1 −

i=1 k−1  i=1

pi & yk = 1 −

k−1  i=1

yi .

312

6

Goodness of Fit Test and Tests for Contingency Tables

In further discussion we assert that the joint distribution of {Y1 , Y2 , . . . , Yk−1 } is a multinomial distribution in k cells with cell probabilities pi , i = 1, 2, . . . , k k pi = 1. We now find the information matrix I ( p) = [Ii j ( p)] of order with i=1 k (k − 1) × (k − 1) where p = ( p1 , p2 , . . . , pk−1 ). From log p(y) = i=1 yi log pi , we have ∂ yi yk ∂2 yi yk log p(y) = − , log p(y) = − 2 − 2 , i = 1, 2, . . . , k − 1 2 ∂ pi pi pk ∂ pi pi pk ∂2 yk log p(y) = − 2 , ∂ p j ∂ pi pk

j = i = 1, 2, . . . , k − 1.

Hence, 

   Yi ∂2 Yk 1 1 + 2 = + and Iii ( p) = E − 2 log p(Y ) = E pi pk ∂ pi pi2 pk     Yk ∂2 1 Ii j ( p) = E − log p(Y ) = E . = 2 ∂ p j ∂ pi pk pk It can be shown that D = I −1 ( p), which can be easily verified for k = 3 and k = 4. For k = 3,

D −1

  1− p2  1 p1 p2 p2 (1 − p2 ) = p11p3 = p1 p2 p1 (1 − p1 ) p1 p2 p3 p3   1 1 1 + = p1 1 p3 1 p3 1 , p3 p2 + p3

1 p3 1− p1 p2 p3



which is the same as I ( p1 , p2 ). For k = 4, D −1 =

1 p1 p2 p3 p4 ⎛

⎞ p2 p3 (1 − p2 − p3 ) p1 p2 p3 p1 p2 p3 ⎝ ⎠ p1 p2 p3 p1 p3 (1 − p1 − p3 ) p1 p2 p3 p1 p2 p3 p1 p2 p3 p1 p2 (1 − p1 − p2 )

which simplifies to ⎛ ⎜ D −1 = ⎝

1 p1

+ 1 p4 1 p4

1 p4

1 p2

1 p4

+ 1 p4

1 p4

1 p3

1 p4 1 p4

+

⎞ 1 p4

⎟ ⎠ = I ( p1 , p2 , p3 ).

Suppose the experiment E is repeated under identical conditions, so that the probabilities of k outcomes remain the same for each repetition. We further assume that

6.2 Multinomial Distribution and Associated Tests

313

the repetitions are independent of each other. Suppose  X i denotes the frequency of occurrence of Oi in n repetitions. Then X i = rn=1 Yir , i = 1, 2, . . . , k with k−1 X k = n −  i=1 X i . If a random vector X is defined as X = (X 1 , X 2 , . . . , X k−1 ) , n then X = r =1 Y r , where Y r is the r -th observation on Y = (Y1 , Y2 , . . . , Yk−1 ) . The likelihood of p = ( p1 , p2 , . . . , pk−1 ) given the data U = {Y 1 , Y 2 , . . . , Y n } ≡ (X 1 , X 2 , . . . , X k ) is given by n  k 

L n ( p|U ) =

piYir =

r =1 i=1



k 

piX i

i=1

log L n ( p|U ) =

n  k 

Yir log pi =

r =1 i=1

k 

X i log pi .

i=1

From the likelihood, it is clear that {X 1 , X 2 , . . . , X k−1 } is a sufficient statistic for the family. To find the maximum likelihood estimator of p = ( p1 , p2 , . . . , pk−1 ), we need to maximize log L n ( p|U ) with respect to the variation in p subject to k the condition that i=1 pi = 1. Hence, we use Lagrange’s method of multipliers. Suppose a function g defined as g( p1 , p2 , . . . , pk , λ) =

k 

 X i log pi + λ

i=1

k 

 pi − 1

.

i=1

Solving the system of equations given by ∂ Xi g( p1 , p2 , . . . , pk , λ) = − λ = 0, i = 1, 2, . . . , k and ∂ pi pi  ∂ pi − 1 = 0 g( p1 , p2 , . . . , pk , λ) = ∂λ k

i=1

and using the condition that pˆ in of pi as pˆ in = It is to be noted that

k i=1

k i=1

X i = n, we get the maximum likelihood estimator

n Xi 1 Yir , i = 1, 2, . . . , k. = n n r =1

pˆ in = 1, as expected.

We now discuss the properties of pˆ in . Observe that X i ∼ B(n, pi ) ⇒ E(X i ) = npi , V ar (X i ) = npi (1 − pi ), i = 1, 2, . . . , k

314

6

 & Cov(X i , X j ) = Cov

n 

Goodness of Fit Test and Tests for Contingency Tables

Yir ,

r =1

n 

 Y js

=

s=1

=

n  n 

Cov(Yir , Y js )

r =1 s=1 n 

Cov(Yir , Y jr ) = −npi p j ,

r =1

as Cov(Yir , Y js ) = 0 ∀ r = s, {Y 1 , Y 2 , . . . , Y n } being independent random vectors. Thus, E( pˆ in ) = E(X i /n) = pi & V ar ( pˆ in ) = pi (1 − pi )/n → 0 as n → ∞ . Thus, pˆ in is an unbiased estimator of pi and its variance converges to 0 and hence it is an MSE consistent estimator of pi . The consistency also follows from the WLLN as n Xi 1 P pˆ in = Yir → E(Yir ) = pi , i = 1, 2, . . . , k − 1. = n n r =1

Since joint consistency is equivalent to the marginal consistency, pˆ n =(ˆp1n , pˆ 2n , . . . , pˆ (k−1)n ) is consistent for p. To examine whether it is a CAN estimator of p we note that   X1 X2 X k−1  pˆ n = , ,..., n n n  n  n n   1 1 1 = Y1r , Y2r , . . . , Y(k−1)r = Y n , n n n r =1

r =1

r =1

where Y = (Y1 , Y2 , . . . , Yk−1 ) . Repeating the experiment E, n times gives a random sample {Y 1 , Y 2 , . . . , Y n } of size n from the distribution of Y . Further, E(Y ) = p and dispersion matrix of Y is D = I −1 ( p) as specified above. It is positive definite. Hence, by the multivariate CLT, √

L

n(Y n − p) → Z 1 ∼ Nk−1 (0, D)



√ L n( pˆ n − p) → Z 1 ∼ Nk−1 (0, D).

Thus, pˆ n is a CAN estimator of p, with approximate dispersion matrix D/n = I −1 ( p)/n.  Remark 6.2.1

It is to be noted that for each i = 1, 2, . . . , k − 1, √ L n( pˆ n − p) → Z 1 ∼ Nk−1 (0, I −1 ( p)) √ L n( pˆ in − pi ) → Z 1 ∼ N (0, pi (1 − pi )), ⇒

6.2 Multinomial Distribution and Associated Tests

315

where pˆ in = X i /n. Suppose k = 3. Thus, the maximum likelihood estimator of ( p1 , p2 ) derived from the trinomial model and the two marginal univariate models √ component √ are the same. Further, the large sample distribution of the first in n( pˆ n − p) is the same as the large sample distribution of n( pˆ 1n − p1 ), derived from the univariate Bernoulli model. However, 1 + p1 1− 1 & I2,2 ( p1 , p2 ) = + p2 1− I1,1 ( p1 , p2 ) =

1 > I ( p1 ) = p1 − p2 1 > I ( p2 ) = p1 − p2

1 1 + p1 1 − p1 1 1 + p2 1 − p2

as observed in Exercise 4.6.7 for bivariate normal N2 (μ1 , μ2 , 1, 1, ρ) model, where ρ = 0 is known. In Example 4.3.4, in which we discussed bivariate normal N2 (0, 0, σ12 , σ22 , ρ) model with known ρ = 0, and in Example 4.3.5, where we discussed a bivariate Cauchy model with location parameters θ1 and θ2 , we observed similar feature in information functions for bivariate and univariate models. In both these examples, the maximum likelihood estimator of the parameters derived from the bivariate and univariate models are different and the approximate variances in the asymptotic distributions are also different. Both these models belong to a two-parameter Cramér family while both the trinomial distribution and bivariate normal N2 (μ1 , μ2 , 1, 1, ρ) distribution, belong to a two-parameter exponential family. From the asymptotic distribution of pˆ n , we observe that √ L n( pˆ n − p) → Z 1 ∼ Nk−1 (0, I −1 ( p)) L

⇒ Q n = n( pˆ n − p) I ( p)( pˆ n − p) → χ2k−1 . Further, pˆ n is consistent for p and each element of matrix I ( p) is a continuous P

function of p, hence I ( pˆ n ) → I ( p). Suppose Wn is defined as Wn = n( pˆ n − p) I ( pˆ n )( pˆ n − p). Then, P

L

Wn − Q n = n( pˆ n − p) (I ( pˆ n ) − I ( p))( pˆ n − p) → 0 ⇒ Wn → χ2k−1 . All these results are useful to obtain the asymptotic null distribution of a test statistic for certain tests of interest. Following examples illustrate the large sample test procedures associated with a multinomial distribution.  Example 6.2.1

Suppose we have a trinomial distribution with cell probabilities p1 (θ) = (1 + θ)/2 and p2 (θ) = p3 (θ), 0 < θ < 1. Hence,

316

6

Goodness of Fit Test and Tests for Contingency Tables

p2 (θ) = p3 (θ) = (1 − θ)/4. The likelihood of θ corresponding to the data X ≡ {X 1 , X 2 , X 3 } is given by  L n (θ|X ) =  =

1+θ 2 1+θ 2

X1  X1 

1−θ 4 1−θ 4

 X 2 +X 3 n−X 1

, as X 1 + X 2 + X 3 = n,

where X i denotes the frequency of cell i, i = 1, 2, 3. The likelihood is a differentiable function of θ, hence the likelihood equation and its solution θˆ n are given by X1 2X 1 − n n − X1 X1 − =0 ⇒ θˆ n = =2 − 1. 1+θ 1−θ n n The second order derivative ∂2 log L n (θ|X ) = − X 1 /(1 + θ)2 −(n − X 1 )/(1 − θ)2 < 0 a.s. which implies ∂θ2 that the maximum likelihood estimator θˆ n of θ is given by θˆ n = (2X 1 − n)/n = 2X 1 /n − 1. To develop a procedure to test the null hypothesis H0 : p1 (θ) = 0.6 against the alternative H1 : p1 (θ) < 0.6, we obtain the large sample distribution of θˆ n with a suitable normalization. It is well-known that for the trinomial distribution, the cell frequency X i has a binomial distribution, i = 1, 2. Thus,  X 1 ∼ B(n, p1 (θ)) distribution. Further, X 1 /n can be expressed as X 1 /n = nj=1 Y1 j /n, where Y1 j , j = 1, 2, . . . , n are independent and identically distributed random variables each having Bernoulli B(1, p1 (θ)) distribution with mean p1 (θ) and variance 0 < p1 (θ)(1 − p1 (θ)) < ∞. Hence by the WLLN, n 2X 1 − n X1 1 X1 Pθ Pθ Y1 j → p1 (θ) ⇒ θˆ n = = =2 −1→θ n n n n j=1

and by the CLT ⎛ ⎞ n  √ 1 L n⎝ Y1 j − p1 (θ)⎠ → Z 1 ∼ N (0, p1 (θ)(1 − p1 (θ))) n j=1



√ L n(θˆ n − θ) → Z 1 ∼ N (0, 1 − θ2 ).

Further, to test the null hypothesis H0 : p1 (θ) = 0.6 ⇔ (1 + θ)/2 = 0.6 ⇔ θ = θ0 = 0.2 against the alternative H1 : θ < 0.2, we propose two test statistics Sn and Wn as   √ √ 2 ˆ ˆ Sn = n(θn − 0.2)/ 1 − θ0 & Wn = n(θn − 0.2)/ 1 − θˆ n2 .

6.2 Multinomial Distribution and Associated Tests L

317 L

Under H0 , Sn → Z ∼ N (0, 1) and by Slutsky’s theorem Wn → Z ∼ N (0, 1). H0 is rejected if Sn < c or Wn < c where c is such that Pθ0 [Sn < c] = α. If & α = 0.05 then c = −1.65. If X 1 = 33 and n = 50, Sn = 0.8660 Wn = 0.8956. Both are larger than c and hence we conclude that data do not  have sufficient evidence to reject H0 . When the cell probabilities are indexed by the parameter θ, real or vector valued, then in some cases a multinomial distribution belongs to an exponential family and all the results established for an exponential family in Chap. 4 are applicable. In some other cases it belongs to a Cramér family and results valid for a Cramér family proved in Chap. 4 are applicable. Following examples illustrate these applications. In the next example, we show that a multinomial distribution, as a model for a genetic experiment, belongs to a one-parameter exponential family and how this fact is used to derive a large sample test procedure to test some hypothesis.  Example 6.2.2

When the cell probabilities are indexed by the parameter θ, real or vector valued, then in some cases a multinomial distribution belongs to an exponential family and all the results established for an exponential family in Chap. 4 are applicable. In some other cases it belongs to a Cramér family and the results valid for a Cramér family proved in Chap. 4 are applicable. The following examples illustrate these applications. In the next example, we show that a multinomial distribution, as a model for a genetic experiment, belongs to a one-parameter exponential family, and how this fact is used to derive a large sample test procedure.

Example 6.2.2

According to a certain genetic model, the probabilities for three outcomes are θ², 2θ(1 − θ) and (1 − θ)², 0 < θ < 1. The appropriate probability distribution for this model is a multinomial distribution in three cells. Suppose the random vector (Y₁, Y₂) has trinomial distribution with cell probabilities θ², 2θ(1 − θ) and (1 − θ)²; then its joint probability mass function is given by

P_θ[Y₁ = y₁, Y₂ = y₂] = (θ²)^{y₁} (2θ(1 − θ))^{y₂} ((1 − θ)²)^{1−y₁−y₂} = 2^{y₂} θ^{2y₁+y₂} (1 − θ)^{2−2y₁−y₂}.

To examine whether it belongs to a one-parameter exponential family, observe that the joint probability mass function can be expressed as

log P_θ[Y₁ = y₁, Y₂ = y₂] = y₂ log 2 + (2y₁ + y₂) log θ + (2 − 2y₁ − y₂) log(1 − θ)
                          = y₂ log 2 + (2y₁ + y₂)(log θ − log(1 − θ)) + 2 log(1 − θ)
                          = U(θ)K(y₁, y₂) + V(θ) + W(y₁, y₂),

where U(θ) = log θ − log(1 − θ), K(y₁, y₂) = 2y₁ + y₂, V(θ) = 2 log(1 − θ) and W(y₁, y₂) = y₂ log 2. Thus, the probability law of (Y₁, Y₂) is expressible in the form required for a one-parameter exponential family. The support of the probability mass function is {(0, 0), (0, 1), (1, 0)} and it is free from θ; the parameter space is (0, 1), which is an open set. Further, U′(θ) = 1/θ + 1/(1 − θ) = 1/(θ(1 − θ)) ≠ 0. K(y₁, y₂) and 1 are linearly independent because in the identity a + b(2y₁ + y₂) = 0, taking y₁ = y₂ = 0 gives a = 0, and then taking either y₁ = 0, y₂ = 1 or y₁ = 1, y₂ = 0 in b(2y₁ + y₂) = 0 gives b = 0. Thus, all the requirements of a one-parameter exponential family are satisfied and hence the joint probability mass function of (Y₁, Y₂)


belongs to a one-parameter exponential family. To find the maximum likelihood estimator of θ, the likelihood of θ corresponding to the data X ≡ {X₁, X₂, X₃} is given by

L_n(θ|X) = 2^{X₂} θ^{2X₁+X₂} (1 − θ)^{2n−2X₁−X₂}, X₁ + X₂ + X₃ = n,

where Xᵢ is the frequency of the i-th cell in the sample, i = 1, 2, 3. The likelihood is a differentiable function of θ, hence the likelihood equation and its solution θ̂_n are given by

(2X₁ + X₂)/θ − (2n − 2X₁ − X₂)/(1 − θ) = 0  ⇒  θ̂_n = (2X₁ + X₂)/2n.

Further, the second order derivative

∂²/∂θ² log L_n(θ|X) = −(2X₁ + X₂)/θ² − (2n − 2X₁ − X₂)/(1 − θ)² < 0, ∀ θ ∈ (0, 1) and ∀ X₁, X₂,

since 2n − 2X₁ − X₂ = X₂ + 2X₃. Hence, θ̂_n is the maximum likelihood estimator of θ. Next we find a moment estimator θ̃_n of θ based on a sufficient statistic. From the likelihood we observe that 2X₁ + X₂ is a sufficient statistic. Thus, θ̃_n is the solution of the equation

(1/n) Σᵣ₌₁ⁿ (2Y₁ᵣ + Y₂ᵣ) = (2X₁ + X₂)/n = E(2Y₁ + Y₂) = 2θ² + 2θ(1 − θ) = 2θ  ⇒  θ̃_n = (2X₁ + X₂)/2n.

Since the distribution of (Y₁, Y₂) belongs to a one-parameter exponential family, by Theorem 4.2.1, θ̂_n = θ̃_n is CAN for θ with approximate variance 1/nI(θ). Now,

nI(θ) = E_θ[−∂²/∂θ² log L_n(θ|X)] = n(2θ² + 2θ(1 − θ))/θ² + (2n − 2nθ² − 2nθ(1 − θ))/(1 − θ)² = 2n/θ + 2n/(1 − θ) = 2n/(θ(1 − θ)).

Thus, θ̂_n is a CAN estimator of θ with approximate variance θ(1 − θ)/2n. We use these results to develop a test procedure for testing H₀: θ = θ₀ against the alternative H₁: θ ≠ θ₀. Suppose two test statistics S_n and W_n are defined as

S_n = √(2n/(θ₀(1 − θ₀))) (θ̂_n − θ₀)  &  W_n = √(2n/(θ̂_n(1 − θ̂_n))) (θ̂_n − θ₀).


Then under H₀, for large n, S_n ∼ N(0, 1). By Slutsky's theorem, under H₀, for large n, W_n ∼ N(0, 1), and hence H₀: θ = θ₀ is rejected against the alternative H₁: θ ≠ θ₀ at level of significance α if |S_n| > c or if |W_n| > c, where c = a_{1−α/2}. One more approach to test H₀: θ = θ₀ against the alternative H₁: θ ≠ θ₀ is the likelihood ratio test. The likelihood ratio test statistic λ(X) is given by

λ(X) = sup_{Θ₀} L_n(θ|X) / sup_{Θ} L_n(θ|X) = [θ₀^{2X₁+X₂} (1 − θ₀)^{2n−2X₁−X₂}] / [θ̂_n^{2X₁+X₂} (1 − θ̂_n)^{2n−2X₁−X₂}].

It is difficult to get the finite sample distribution of the likelihood ratio. Hence, we use its asymptotic distribution. From Theorem 5.2.1, for large n under H₀, −2 log λ(X) ∼ χ²₁ distribution. For large n, H₀ is rejected if −2 log λ(X) > c, where c is such that the size of the test is α and is determined using the χ²₁ distribution. Thus, c = χ²_{1,1−α}. For this model, we can obtain the variance of θ̂_n for finite n. It is given by

Var(θ̂_n) = Var((2X₁ + X₂)/2n) = (1/4n²)(4 Var(X₁) + Var(X₂) + 4 Cov(X₁, X₂))
          = (1/4n²)[4nθ²(1 − θ²) + 2nθ(1 − θ)(1 − 2θ(1 − θ)) − 4nθ² · 2θ(1 − θ)]
          = θ(1 − θ)/2n,

which is the same as 1/nI(θ). Further,

E(θ̂_n) = E((2X₁ + X₂)/2n) = (1/2n)(2nθ² + 2nθ(1 − θ)) = θ.

Thus, θ̂_n is an unbiased estimator of θ. Since its variance attains the Cramér–Rao lower bound for the variance, it is the MVBUE of θ. It is to be noted that θ̂_n is a function of the sufficient statistic 2X₁ + X₂. Further, the dimension of the sufficient statistic and the dimension of the parameter space are the same, hence 2X₁ + X₂ is a complete statistic. Thus, θ̂_n is a function of a complete and sufficient statistic and it is an unbiased estimator of θ; hence by the Rao–Blackwell theorem and the Lehmann–Scheffé theorem it is the UMVUE of θ.
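Example 6.2.2 supplies no data, so the R sketch below uses illustrative frequencies of our own to show how θ̂_n, S_n, W_n and the likelihood ratio statistic would be computed.

## Example 6.2.2: genetic model with cell probabilities theta^2, 2*theta*(1-theta), (1-theta)^2
x <- c(26, 48, 26); n <- sum(x)        # hypothetical cell frequencies
theta0 <- 0.5
theta.hat <- (2 * x[1] + x[2]) / (2 * n)                     # MLE (= moment estimator)
Sn <- sqrt(2 * n / (theta0    * (1 - theta0)))    * (theta.hat - theta0)
Wn <- sqrt(2 * n / (theta.hat * (1 - theta.hat))) * (theta.hat - theta0)
loglik <- function(th) (2 * x[1] + x[2]) * log(th) + (2 * n - 2 * x[1] - x[2]) * log(1 - th)
LRT <- 2 * (loglik(theta.hat) - loglik(theta0))              # -2 log lambda(X)
c(theta.hat = theta.hat, Sn = Sn, Wn = Wn, LRT = LRT,
  z.cut = qnorm(0.975), chisq.cut = qchisq(0.95, df = 1))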

Remark 6.2.2

In Example 6.2.2, we have discussed three test procedures to test H₀: θ = θ₀ against the alternative H₁: θ ≠ θ₀. Tests based on S_n and W_n reject H₀ when |S_n| > c and |W_n| > c respectively. These rejection regions are also equivalent


to S_n² > c² and W_n² > c², and the asymptotic distribution of both S_n² and W_n² is χ²₁. Thus, the likelihood ratio test procedure, the test procedure based on S_n² and the one based on W_n² for testing H₀: θ = θ₀ against the alternative H₁: θ ≠ θ₀ are equivalent. This result holds true in a general setup and we discuss it in Sect. 6.4. The test based on S_n² is a score test and the one based on W_n² is Wald's test. The following example shows that a multinomial distribution, when cell probabilities are indexed by a parameter θ, does not belong to a one-parameter exponential family, but belongs to a Cramér family, and hence all the results established for a Cramér family are useful to develop large sample test procedures.

Example 6.2.3

Fisher has used a multinomial distribution in 4 cells to analyze Carver's data on two varieties of maize classified as starchy versus sugary and further cross classified with the color as green and white (refer to Kale and Muralidharan [1]). The cell probabilities depend on a parameter θ ∈ (0, 1), which is known as a linkage factor. The cell probabilities are given in Table 6.1, along with the observed frequencies as obtained in the experiment.

Table 6.1 Carver's data: two varieties of maize

Category             | Starchy green | Starchy white | Sugary green | Sugary white
Cell probabilities   | (2 + θ)/4     | (1 − θ)/4     | (1 − θ)/4    | θ/4
Observed frequencies | 1977          | 906           | 904          | 32

An appropriate probability model for the outcome of the given genetic experiment is a multinomial distribution in four cells with cell probabilities (2 + θ)/4, (1 − θ)/4, (1 − θ)/4 and θ/4. Suppose (Y₁, Y₂, Y₃) has multinomial distribution in four cells with these cell probabilities. Then its joint probability mass function P_θ[Y₁ = y₁, Y₂ = y₂, Y₃ = y₃] = p_θ(y₁, y₂, y₃) is given by

p_θ(y₁, y₂, y₃) = ((2 + θ)/4)^{y₁} ((1 − θ)/4)^{y₂+y₃} (θ/4)^{y₄} = 4⁻¹ (2 + θ)^{y₁} (1 − θ)^{y₂+y₃} θ^{y₄},

yᵢ = 0, 1 for i = 1, 2, 3 and Σᵢ₌₁⁴ yᵢ = 1. With y₄ = 1 − y₁ − y₂ − y₃, the joint probability mass function can be expressed as

log p_θ(y₁, y₂, y₃) = −log 4 + y₁ log(2 + θ) + (y₂ + y₃) log(1 − θ) + y₄ log θ
                    = −log 4 + y₁ log((2 + θ)/θ) + (y₂ + y₃) log((1 − θ)/θ) + log θ.

However, it cannot be expressed as U(θ)K(y₁, y₂, y₃) + V(θ) + W(y₁, y₂, y₃); in particular we cannot get the term U(θ)K(y₁, y₂, y₃). Thus, the probability law of (Y₁, Y₂, Y₃) does not belong to a one-parameter exponential family.


We now examine whether it belongs to a Cramér family. It is to be noted that the parameter space Θ is (0, 1) and it is an open set. The set of possible values of Yᵢ for i = 1, 2, 3 is {0, 1}, which is free from θ. Further, all the cell probabilities as functions of θ are analytic functions of θ and hence are differentiable any number of times. We have

∂/∂θ log p_θ(y₁, y₂, y₃) = y₁/(2 + θ) − (y₂ + y₃)/(1 − θ) + y₄/θ,
∂²/∂θ² log p_θ(y₁, y₂, y₃) = −y₁/(2 + θ)² − (y₂ + y₃)/(1 − θ)² − y₄/θ²,
∂³/∂θ³ log p_θ(y₁, y₂, y₃) = 2y₁/(2 + θ)³ − 2(y₂ + y₃)/(1 − θ)³ + 2y₄/θ³.

All these partial derivatives exist for θ in any open subset of Θ. Further, if θ₀ is the true parameter value, then for θ ∈ N_δ(θ₀) we have θ₀ − δ < θ < θ₀ + δ ⇔ 1/(θ₀ + δ) < 1/θ < 1/(θ₀ − δ). Hence,

|∂³/∂θ³ log p_θ(y₁, y₂, y₃)| ≤ 2y₁/(2 + θ)³ + 2(y₂ + y₃)/(1 − θ)³ + 2y₄/θ³
                             ≤ 2y₁/(2 + θ₀ − δ)³ + 2(y₂ + y₃)/(1 − θ₀ − δ)³ + 2y₄/(θ₀ − δ)³ = M(y₁, y₂, y₃), say,

and E_θ(M(Y₁, Y₂, Y₃)) =

(2 + θ)/[2(2 + θ₀ − δ)³] + (1 − θ)/(1 − θ₀ − δ)³ + θ/[2(θ₀ − δ)³] < ∞. Thus, all the Cramér regularity conditions are satisfied. For the observed cell frequencies X ≡ {X₁, X₂, X₃, X₄} with X₁ + X₂ + X₃ + X₄ = n, the log likelihood is, up to an additive constant, X₁ log(2 + θ) + (X₂ + X₃) log(1 − θ) + X₄ log θ, and the likelihood equation reduces to the quadratic equation g(θ) = nθ² − (X₁ − 2X₂ − 2X₃ − X₄)θ − 2X₄ = 0, whose discriminant (X₁ − 2X₂ − 2X₃ − X₄)² + 8nX₄ > 0. Hence, both the roots of g(θ) = 0 are real. Further, the product of the two roots is c/a = −2X₄/n < 0, which


implies that one root is positive and the other is negative. To examine whether the positive root is in (0, 1), observe that

g(0) = −2X₄ < 0  &  g(1) = n − (X₁ − 2X₂ − 2X₃ − X₄) − 2X₄ = 3(X₂ + X₃) > 0.

Further, g is a continuous function, thus there exists θ ∈ (0, 1) such that g(θ) = 0. Moreover,

∂²/∂θ² log L(θ|X) = −X₁/(2 + θ)² − (X₂ + X₃)/(1 − θ)² − X₄/θ² < 0

for any set of frequencies {X₁, X₂, X₃, X₄}, and hence the likelihood attains its maximum at the root of the likelihood equation lying in (0, 1). Hence, the positive root θ̂_n of the quadratic equation g(θ) = 0 is the maximum likelihood estimator of θ. Thus,

θ̂_n = [(X₁ − 2X₂ − 2X₃ − X₄) + √((X₁ − 2X₂ − 2X₃ − X₄)² + 8nX₄)] / 2n.

Since the distribution belongs to a Cramér family, θ̂_n is a CAN estimator of θ with approximate variance 1/nI(θ) = 2θ(1 − θ)(2 + θ)/[n(2θ + 1)]. For the given data, the likelihood equation is 3819θ² + 1675θ − 64 = 0 and θ̂_n = 0.0354. We use this result to develop a test procedure for testing H₀: θ = 0.02 against the alternative H₁: θ ≠ 0.02. Suppose the test statistic S_n is given by

S_n = √(nI(θ₀)) (θ̂_n − θ₀) = √(nI(0.02)) (θ̂_n − θ₀) = √(13.134 n) (θ̂_n − θ₀),

324

6

Goodness of Fit Test and Tests for Contingency Tables

with cell probabilities pi (θ), i = 1, 2, . . . k indexed by a parameter θ ∈  ⊂ R. The joint probability mass function pθ (y1 , y2 , . . . , yk−1 ) = pθ (y) of Y is given by pθ (y) = Pθ [Y1 = y1 , Y2 = y2 , . . . , Yk−1 = yk−1 ] =

k  ( pi (θ)) yi , i=1

yi = 0, 1 &

k 

yi = 1 .

i=1

k pi (θ) = 1. We assume it to be an open The parameter space  is such that i=1 set. Further, the support of each Yi is {0, 1}, which is free from θ. Suppose the partial derivatives of pi (θ) exist up to order 3 for all i = 1, 2, . . . k . From the joint probability mass function pθ (y), we have  ∂ ∂ yi log pθ (y) = log pi (θ), ∂θ ∂θ

 ∂2 ∂2 log p (y) = yi 2 log pi (θ) θ ∂θ2 ∂θ

k

k

i=1

&

∂3 ∂θ3

log pθ (y) =

k 

yi

i=1

∂3 ∂θ3

i=1

log pi (θ) ,

as being finite summation, derivatives can be taken inside the sum. Thus, if partial derivatives of pi (θ), i = 1, 2, . . . k up to order 3 exist, then partial derivatives of log pθ (y) up to order 3 exist. Now,   k    ∂ ∂ Eθ Yi log pθ (Y ) = E θ log pi (θ) ∂θ ∂θ i=1

=

k 

 ∂ ∂ log pi (θ) = pi (θ) . ∂θ ∂θ k

pi (θ)

i=1

i=1

To find its value, we note that k  i=1

pi (θ) = 1 ⇒

k k  ∂ ∂  pi (θ) = 0 ⇒ pi (θ) = 0 . ∂θ ∂θ i=1

i=1

  ∂ log pθ (Y ) = 0. Further, using the fact that E(Yi Y j ) = 0 ∀ i = j = Thus, E θ ∂θ 1, 2, . . . , k, we have  Eθ

2  k 2  ∂ ∂ Yi log pθ (Y ) = E θ log pi (θ) ∂θ ∂θ i=1  k 2   ∂ 2 = Eθ Yi log pi (θ) ∂θ i=1


⎞ ⎛ k  k  ∂ ∂ + Eθ ⎝ Yi Y j log pi (θ) log p j (θ)⎠ ∂θ ∂θ j=1 i=1



2 ∂ pi (θ) . log pi (θ) ∂θ

k 

=

i=1

Now observe that  Eθ

  k  k  ∂2  ∂2 ∂2 log p (Y ) = E Y log p (θ) = pi (θ) 2 log pi (θ) . i i θ θ 2 2 ∂θ ∂θ ∂θ i=1

 To find a relation between E θ  Eθ

∂ ∂θ

i=1

 2 2  ∂ log pθ (Y ) and E θ ∂θ log p (Y ) we note that θ 2

 ∂ log pθ (Y ) = 0 ∂θ ⇒

k  i=1

∂ ⇒ ∂θ

pi (θ) 

k  i=1

∂ log pi (θ) = 0 ∂θ

 ∂ pi (θ) log pi (θ) = 0 ∂θ

k k   ∂ ∂ ∂2 pi (θ) 2 log pi (θ) = 0 ⇒ pi (θ) log pi (θ) + ∂θ ∂θ ∂θ i=1

i=1

2  k ∂ ∂2 pi (θ) pi (θ) 2 log pi (θ) = 0 ⇒ log pi (θ) + ∂θ ∂θ i=1 i=1   2 2  ∂ ∂ log p (Y ) =0 ⇒ Eθ log pθ (Y ) + E θ θ ∂θ ∂θ2   2  ∂2 ∂ ⇒ Eθ log pθ (Y ) = E θ − 2 log pθ (Y ) . ∂θ ∂θ k 



Thus, for a multinomial distribution in k cells with cell probabilities pi (θ), i = 1, 2, . . . k indexed by a parameter θ, the information function I (θ) is given by 

 2  ∂ ∂2 log pθ (Y ) = E θ − 2 log pθ (Y ) ∂θ ∂θ   k k 2   ∂ ∂2 pi (θ) pi (θ) 2 log pi (θ) . log pi (θ) = − = ∂θ ∂θ

I (θ) = E θ

i=1

i=1


We assume that 0 < I (θ) < ∞. As an illustration, in Example 6.2.3 we have discussed a multinomial distribution in four cells with cell probabilities (2 + θ)/4, (1 − θ)/4, (1 − θ)/4 and θ/4. Hence, 1 −1 ∂ ∂ log p1 (θ) = , log p2 (θ) = ∂θ 2+θ ∂θ 1−θ ∂ −1 1 ∂ log p3 (θ) = & log p4 (θ) = ∂θ 1−θ ∂θ θ and

−1 ∂2 −1 ∂2 log p (θ) = , log p2 (θ) = 1 2 2 2 ∂θ (2 + θ) ∂θ (1 − θ)2 2 2 ∂ −1 ∂ −1 log p3 (θ) = & log p4 (θ) = 2 . 2 2 2 ∂θ (1 − θ) ∂θ θ

Thus, I (θ) =

4 

 pi (θ)

i=1

2 ∂ 2+θ 2(1 − θ) θ + + 2 log pi (θ) = ∂θ 4(2 + θ)2 4(1 − θ)2 4θ =

& I (θ) = −

4 

pi (θ)

i=1

(2θ + 1) 2θ(1 − θ)(2 + θ)

∂2 2+θ 2(1 − θ) θ log pi (θ) = + + 2 2 2 2 ∂θ 4(2 + θ) 4(1 − θ) 4θ =

(2θ + 1) . 2θ(1 − θ)(2 + θ)

To examine the last condition in the Cramér regularity conditions, consider     3  3   k ∂ ∂  =  log p (y) y log p (θ)  i i θ    ∂θ3  ∂θ3 i=1


0 ∀ i = 1, 2, . . . , k and i=1 pi = 1, suppose we want to test H0 : p = p(θ) against the alternative H1 : p = p(θ), where θ is an indexing parameter of dimension l < k. Suppose λ(X ) is a likelihood ratio test statistic based on a random sample of size n. If the multinomial distribution with cell probabilities indexed by θ belongs to a Cramér family, then for large n under H0 , −2 log λ(X ) has χ2k−1−l distribution.

Proof Suppose Y = (Y1 , Y2 , . . . , Yk−1 ) has multinomial distribution in k cells with cell probabilities p. Suppose X = (X 1 , Y2 , . . . , X k ) denotes the vector of cell frek X i = n. quencies corresponding to a random sample of size n from Y with i=1  Then the maximum likelihood estimator pˆ n = ( pˆ 1n , pˆ 2n , . . . , pˆ kn ) of p is given by, pˆ n = (X 1 /n, X 2 /n, . . . , X k /n) . Suppose the cell probabilities are indexed by a parameter θ, which is a vector valued parameter of dimension l < k. To test H0 : p = p(θ) against the alternative H1 : p = p(θ) using the likelihood ratio test procedure, the entire parameter space is

328

6

Goodness of Fit Test and Tests for Contingency Tables

 = { p = ( p1 , p2 , . . . , pk ) | pi > 0, i = 1, 2, . . . , k &

k 

pi = 1}

i=1

and the null space is 0 = { p = ( p1 , p2 , . . . , pk ) | p = p(θ)}. Suppose θˆ n denotes the maximum likelihood estimator of θ based on the observed data X . Since the distribution belongs to the Cramér family, θˆ n is CAN for θ with approximate dispersion matrix I −1 (θ)/n. Now the likelihood ratio test statistic λ(X ) is given by λ(X ) = sup L n ( p|X )/ sup L n ( p|X ) = L n (θˆ n |X )/L n ( pˆ n |X ) . 0



Hence as in Theorem 5.2.4, −2 log λ(X ) = 2[log L n ( pˆ n |X ) − log L n (θˆ n |X )] = 2[log L n ( pˆ n |X ) − log L n ( p 0 |X )] − 2[log L n (θˆ n |X ) − log L n (θ0 |X )] = U n − Wn , where Un = 2[log L n ( pˆ n |X ) − log L n ( p 0 |X )] and Wn = 2[log L n (θˆ n |X ) − log L n (θ0 |X )], θ0 is a known vector and p 0 = p(θ0 ) so that L

log L n ( p 0 |X )= log L n (θ0 |X). From Theorem 5.2.2, Un → U ∼ χ2k−1 and L

Wn → W ∼ χl2 . Thus, for large n, −2 log λ(X ) is distributed as U − W , which has  χ2k−1−l distribution, using the result of Wilks [2].  Remark 6.2.3

As in Theorem 5.2.4, the parameters pi , i = 1, 2, . . . , k − 1 are functions of l parameters θ1 , θ2 , . . . , θl . In all the goodness of fit tests, in the null hypothesis cell probabilities are indexed by parameters. Following example illustrates the application of Theorem 6.2.1 to examine validity of the probability model as proposed in Example 6.2.3.  Example 6.2.4

In Example 6.2.3, Carver’s data on two varieties of maize classified as starchy versus sugary and further cross classified with the color as green and white are given in Table 6.1. We examine whether the proposed theoretical model is valid on the basis of given data, using likelihood ratio test procedure. An appropriate probability model for the outcome of the given experiment is a multinomial distribution in four cells with cell probabilities p = ( p1 , p2 , p3 , p4 ) . We want to test H0 : p = p(θ) against the alternative that pi ’s do not depend on θ, the only restriction is these are positive and add up to 1. Corresponding to given data the

6.2 Multinomial Distribution and Associated Tests

329

maximum likelihood estimator of p is pˆn = (X 1 /n, X 2 /n.X 3 /n, X 4 /n) , where X i denotes the frequency of i-th cell and its total is n = 3819. In the null setup, the cell probabilities (2 + θ)/4, (1 − θ)/4, (1 − θ)/4 and θ/4 depend on θ. On the basis of the given data, we have obtained the maximum likelihood estimator θˆ n of θ in Example 6.2.3. Its value is 0.0354. Using it, we obtain the maximum likelihood estimate of p(θ). Thus, we compute the likelihood ratio test statistic λ(X ). For the given data,  k  k   X i log( pˆ in ) − X i log( pi (θˆ n )) = 1.2398. −2 log(λ(X )) = 2 i=1

i=1

From Theorem 6.2.1 for large n, −2 log λ(X ) ∼ χ2m where m = 3 − 1 = 2. H0 is rejected if −2 log λ(X ) > c where c is such that the size of the test is α and is determined using the χ22 distribution. Thus c = χ21,0.95 = 5.9914. The corresponding p-value is 0.5380. Hence on the basis of the given data, we conclude that the proposed model is a valid model.  In the goodness of fit tests and in tests for validity of the model, the cell probabilities are usually indexed by a parameter θ, which may be real or vector valued, as in the above example. In general suppose we have a multinomial distribution in k cells k pi (θ) = 1. θ is an indexwith cell probabilities pi (θ), where pi (θ) > 0 and i=1 ing parameter, in the sense that pi (θ1 ) = pi (θ2 ) for any i = 1, 2, . . . , k implies that θ1 = θ2 . On the basis of the given data in terms of cell frequencies {X 1 , X 2 , . . . , X k }, k X i = n, we obtain maximum likelihood estimator θˆ n of θ and hence of with i=1 cell probabilities as pi (θˆ n ). For the likelihood ratio test procedure, in the null setup, the cell probabilities are indexed by θ while in the entire parameter space there is no restriction on the cell probabilities, except the condition that they add up to 1. The likelihood ratio test statistics λ(X ) is then given by λ(X ) = sup L n ( p|X )/ sup L n ( p|X ) = 0



k k   ( pi (θˆ n )) X i / ( pˆ in ) X i . i=1

i=1

The likelihood ratio test procedure is used to test the goodness of fit of the proposed distribution or to test the validity of the proposed model. For large n, H0 is rejected if −2 log λ(X ) > c where c is such that the size of the test is α and is determined using the large sample distribution of −2 log λ(X ), which is χ2k−1−l , as derived in Theorem 6.2.1. In the next section, we discuss the role of a multinomial distribution in a test for goodness of fit, which is essentially a test for validity of the model. Theorem 6.2.1 is heavily used in a goodness of fit test.

330

6

6.3

Goodness of Fit Test and Tests for Contingency Tables

Goodness of Fit Test

As discussed in Sect. 6.1, suppose the observed data on a characteristic X are in the form of (yi , f i ) or ([xi , xi+1 ), f i ) i = 1, 2, . . . , k. where yi is the possible value of a discrete random variable X and f i denotes the frequency of yi in a random sample of size n from X , i = 1, 2, . . . , k. A random sample of size n from a continuous distribution is grouped as ([xi , xi+1 ), f i ) i = 1, 2, . . . , k, where f i denotes the number of observations in the class interval [xi , xi+1 ). On the basis of these observed data, we wish test whether the data are from a specific distribution. In the following example, we illustrate how the likelihood ratio test for a multinomial distribution is useful to test the conjecture that the data are from a specified distribution.  Example 6.3.1

A computer program is written to generate random numbers from a uniform U (0, 4) distribution. 200 observations are generated and are grouped in 8 classes. Table 6.2 displays frequencies of these 8 classes. We examine whether these data offer any evidence of the accuracy of the program, that is, we examine whether the data correspond to a random sample from a uniform U (0, 4) distribution. Hence, we set our null hypothesis as H0 : X ∼ U (0, 4) distribution against the alternative that X has any other continuous distribution. Thus, in the entire setup, the appropriate probability model for these data is a multinomial 8 distribution pi = 1. The in 8 cells with cell probabilities p = ( p1 , p2 , . . . , p8 ) with i=1 conjecture that X ∼ U (0, 4) is converted in terms of the null hypothesis H0 : p = p 0 where cell probabilities p 0 = ( p01 , p02 , . . . , p08 ) are completely specified as follows: p0r = P[xr −1 ≤ X ≤ xr ] = 0.5/4 = 0.125, , r = 1, 2, . . . , 8 as each interval [xr −1 , xr ] is of the same length 0.5. The alternative H1 : p = p 0 is equivalent to stating that X does not have U (0, 4) distribution. To test H0 : p = p 0 against the alternative H1 : p = p 0 , we adopt a likelihood ratio test procedure when underlying probability model is a multinomial distribution with 8 cells. Thus, the entire parameter space is 8  = { p = ( p1 , p2 , . . . p8 ) | pi > 0, i = 1, 2, . . . , 8 & i=1 pi = 1} and the maximum likelihood estimator pˆ in of pi is given by pˆ in = X i /n i = 1, 2, . . . , 8, where X i denotes the frequency of the i-th class, i = 1, 2, . . . , 8. The null space 0 is 0 = { p| p = p 0 }. Hence, the likelihood ratio test statistics λ(X ) is Table 6.2 Uniform U (0, 4) distribution: grouped frequency distribution Class interval [0, 0.5)

[0.5, 1)

[1, 1.5)

[1.5, 2)

[2, 2.5)

[2.5, 3)

[3, 3.5)

[3.5, 4]

Frequency

17

25

23

29

31

21

25

29

6.3 Goodness of Fit Test

331

given by 8 

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

( p0i ) X i

i=1 8 

.

( pˆ in ) X i

i=1

From Theorem 5.2.2, for large n under H0 , −2 log λ(X ) ∼ χ27 distribution. H0 is rejected if −2 log λ(X ) > c where c is such that the size of the test is α and is determined using the χ27 distribution. Thus c = χ27,1−α . For the given data −2 log λ(X ) = 6.2827, with α = 0.5, c = 14.06 and p-value is 0.5072. So we may conclude that data offer evidence that the program is written properly and the observed data are from U (0, 4) distribution. There is a built-in function chisq.test to carry out this test for goodness of fit. We discuss it in Sect. 6.7.  In the goodness of fit test procedures, the most frequently used test statistics is Karl Pearson’s test statistic, proposed by Karl Pearson in 1900. It is given chi-square k (oi − ei )2 /ei , where oi and ei denote the observed and expected by Tn (P) = i=1 frequencies of i-th class, i = 1, 2, . . . , k, ei is labeled as expected frequency, since it denotes a frequency expected under the null setup. Thus, the test statistic Tn (P) measures the deviation of the observed frequencies from the frequencies expected under the hypothesized distribution. The deviation may be due to sampling fluctuations or may be large enough which suggests that the data may not be from the assumed distribution. Hence, large values of Tn (P) do not support the null setup that the data are generated under a specific distribution. Thus, the null hypothesis H0 : p = p 0 is rejected if Tn (P) > c, where c is determined using the null distribution of Tn (P). In the following Theorem 6.3.1, we prove that the likelihood ratio test statistic and Karl Pearson’s chi-square test statistic for testing H0 : p = p 0 against the alternative H1 : p = p 0 in a multinomial distribution are equivalent, in the sense that their asymptotic null distributions are the same. Theorem 6.3.1 Suppose a multinomial distribution in k cells with cell probabilities k pi = 1 belongs p = ( p1 , p2 , . . . , pk ) where pi > 0 ∀ i = 1, 2, . . . , k and i=1 to a Cramér family. For testing H0 : p = p 0 against the alternative H1 : p = p 0 , where p 0 is a completely specified vector (i) the likelihood ratio test statistic and Karl Pearson’s chi-square statistic k (oi − ei )2 /ei have the same asymptotic null distribution, which is Tn (P) = i=1 χ2k−1 , where oi and ei denote the observed and expected frequencies respectively of the i-th class, i = 1, 2, . . . , k. k (oi − ei )2 /oi also has the asymptotic null distri(ii) the test statistic Wn = i=1 2 bution to be χk−1 .

332

6

Goodness of Fit Test and Tests for Contingency Tables

Proof Suppose Y = (Y1 , Y2 , . . . , Yk−1 ) has a multinomial distribution in k cells with cell probabilities p. Suppose X = (X 1 , X 2 , . . . , X k ) denotes the vector of cell k frequencies corresponding to a random sample of size n from Y with i=1 X i = n. The maximum likelihood estimator pˆ n = ( pˆ 1n , pˆ 2n , . . . , pˆ kn ) of p is then given by pˆ n = (X 1 /n, X 2 /n, . . . , X k /n) . To test H0 : p = p 0 against the alternative H1 : p = p 0 using the likelihood ratio test procedure, note that the entire parameter space is

k   pi = 1  = p = ( p1 , p2 , . . . , pk ) | pi > 0, i = 1, 2, . . . , k & i=1

and the null space 0 is 0 = { p| p = p 0 }. Hence, the likelihood ratio test statistic λ(X ) is given by k 

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

( p0i ) X i

i=1 k 

=

( pˆ in

)Xi

 k   p0i X i i=1

pˆ in

.

i=1

From Theorem 5.2.2, for large n under H0 , −2 log λ(X ) ∼ χ2k−1 distribution. To prove that Karl Pearson’s test chi-square statistic Tn (P) also has χ2k−1 distribution under H0 , we proceed as follows. It is to be noted that oi = X i and ei = np0i denote the observed and expected frequencies of i-th class, i = 1, 2, . . . , k. Suppose ui =

√ ui n( pˆ in − p0i ) ⇔ pˆ in = p0i + √ , i = 1, 2, . . . , k . n

k u i k k √ = Further, observe that i=1 i=1 pˆ in − i=1 p0i = 0. Now, n k  p0i  X i λ(X ) = i=1 pˆ implies in

−2 log λ(X ) = 2

k 

X i (log pˆ in − log p0i ) = 2n

i=1 k  

= 2n

i=1

ui p0i + √ n



pˆin (log pˆ in − log p0i )

i=1

   ui log p0i + √ − log p0i n

  ui log 1 + √ n p0i i=1    k   u i2 u i3 ui ui p0i + √ = 2n − + − ··· √ 2 3 n n p0i 2np0i 3n 3/2 p0i i=1

= 2n

k  

ui p0i + √ n



k 

6.3 Goodness of Fit Test

333





k 

u i3 u i2 u i3 u i4 u i2 ui = 2n + + − + + ··· √ − 2 2 3 2np0i np0i n 3n 3/2 p0i 2n 3/2 p0i 3n 2 p0i i=1   k  u i2 ui +√ = 2n + Vn , 2np0i n i=1

where Vn = 2n Thus, using the

k

k u i3 u i4 i=1 a1 n 3/2 + 2n i=1 a2 n 2 + · · · , where a1 k u i √ = 0, we have fact that i=1 n

−2 log λ(X ) =

k  u i2 + Vn p0i

=

i=1

=

k  i=1

and a2 are constants.

k  n( pˆ in − p0i )2 + Vn p0i i=1

(n pˆ in − np0i )2 + Vn np0i

=

k  (oi − ei )2 + Vn . ei i=1

P

Thus, −2 log λ(X ) − Tn (P) = Vn . If we show that Vn → 0 then −2 log λ(X ) L

and Tn (P) have the same limit law. But −2 log λ(X ) → U ∼ χ2k−1 and hence P

L

Tn (P) → U ∼ χ2k−1 . To prove that Vn → 0, we consider the first term of Vn given by T1n = 2n

k  i=1

a1

k−1 u i3 2a1  √ 2a1 √ = ( n( pˆ in − p0i ))3 + √ ( n( pˆ kn − p0k ))3 . √ 3/2 n n n i=1

In Sect. 6.2, it is proved that for a multinomial distribution with k cells, Zn =

√  √ √ n( pˆ 1n − p01 ), n( pˆ 2n − p02 ), . . . , n( pˆ (k−1)n − p0(k−1) )

L

→ Z ∼ Nk−1 (0, I −1 ( p 0 )), where I ( p 0 ) is a positive definite matrix. Suppose a function g : Rk−1 → R is k−1 3 defined as g(x) = i=1 xi , then it is a continuous function and by the continuous mapping theorem g(Z n ) =

k−1  √ L ( n( pˆ in − p0i ))3 → g(Z ), i=1

which implies that g(Z n ) is bounded in probability and hence k−1 √ 2a 3 P √1 i=1 ( n( pˆ in − p0i )) → 0. Now using the fact that n pk = 1 − p1 − p2 − · · · pk−1 , the second term in T1n can be expressed as 3 2a1 √ 2a1 n( pˆ kn − p0k ) = √ √ n n

3 k−1 √ ( n( pˆ in − p0i )) . i=1

334

6

Goodness of Fit Test and Tests for Contingency Tables

Further, we define a function g : Rk−1 → R as g(x) = continuous function. By the continuous mapping theorem



k−1 i=1 x i

3

, which is a

k−1 3 √ 3 P 2a1 √ L ( n( pˆ in − p0i )) → g(Z ) ⇒ √ n( pˆ kn − p0k ) → 0 . g(Z n ) = n i=1

P

Thus, T1n → 0. Using similar logic, we can prove that the remaining terms in Vn P

converge in probability to 0 and hence Vn → 0. Thus under H0 , −2 log λ(X ) and Tn (P) have the same limit law and it is χ2k−1 . (ii) The test statistic Tn (P) can be expressed as k  n( pˆ in − p0i )2 = n( pˆ n − p 0 ) A( pˆ n − p 0 ) p0i i=1   1 1 1 . , ,..., where A = diag p01 p02 p0k

Tn (P) =

Similarly, Wn can be expressed as

Wn =

k k   (oi − ei )2 (n pˆ in − np0i )2 = oi n pˆ in i=1

i=1

k  n( pˆ in − p0i )2 = = n( pˆ n − p 0 ) An ( pˆ n − p 0 ), pˆ in i=1

where An = diag





1 , 1 , . . . , pˆ1 pˆ 1n pˆ 2n kn P that An → A and

P

. As in Theorem 5.2.2, the fact that pˆ n → p 0 P

under H0 , implies hence Wn − Tn (P) → 0. Thus, the limit law  of Wn and Tn (P) is the same and it is χ2k−1 . Wn = 6.9652. Further, For the data in Example 6.3.1, Tn (P) = 6.08 & c = χ27,0.95 = 14.06 and values of both Tn (P) and Wn is less than the cut-off c. The p-values corresponding to Tn (P) and Wn are 0.5304 and 0.4325 respectively. Hence, data do not have sufficient evidence to reject H0 . Next example illustrates a goodness of fit test procedure for a discrete distribution, it is similar to that in Example 6.3.1.  Example 6.3.2

Table 6.3 displays possible values of a discrete random variable Y and the corresponding frequencies f i in a random sample of size n = 50 from the distribution of Y . We test the claim that the data are from binomial B(6, 0.6) distribution,

6.3 Goodness of Fit Test

335

Table 6.3 Truncated binomial distribution: frequency distribution i

1

2

3

4

5

6

fi

3

8

12

15

9

3

truncated at 0, using likelihood ratio test statistic, Pearson’s chi-square statistic and Wn . We wish to test the conjecture on the basis of data conveying the frequency X = { f 1 , f 2 , f 3 , f 4 , f 5 , f 6 } of 6 possible values of Y in a random sample of size 50 from Y . The appropriate probability model for these data is a multinomial distribution in 6 cells with cell probabilities p = ( p1 , p2 , p3 , p4 , p5 , p6 ) with 6 i=1 pi = 1. The conjecture that Y follows binomial B(6, .6) distribution, truncated at 0, can be converted in terms of the null hypothesis H0 : p = p 0 where cell probabilities p 0 = ( p01 , p02 , p03 , p04 , p05 , p06 ) are completely specified by the B(6, 0.6) distribution, truncated at 0. Thus, 6 (0.6)i (0.4)6−i p0i = i , i = 1, 2, . . . , 6 . 1 − (0.4)6 In the entire parameter space, the maximum likelihood estimator of p is given by pˆ in = X i /n, i = 1, 2, . . . , 6. Hence, the likelihood ratio test statistic λ(X ) is given by 6 

sup L n ( p|X ) λ(X ) =

0

sup L n ( p|X ) 

=

( p0i ) X i

i=1 6 

( pˆ in

. )Xi

i=1

Under H0 , −2 log λ(X ) ∼ χ25 distribution and H0 is rejected if −2 log λ(X ) > c, where c = χ21−α,5 . For the given data, −2 log λ(X ) = 1.2297 and c = 11.0705 with α = 0.05 and the p-value is 0.9420. Hence, data provides strong support to H0 . We now find the value of Karl Pearson’s chi-square test statistic and of Wn . The expected frequencies ei are then given by ei = np0i . The observed frequencies are oi = f i . Table 6.4 displays the values of observed and expected frequencies. k (oi − ei )2 /ei is The value of Pearson’s chi-square test statistic Tn (P) = i=1 k 1.3537 and of Wn = i=1 (oi − ei )2 /oi is 1.06 with corresponding p-values 0.9293 and 0.9576. Thus, on the basis of these two test procedures, we note that the data strongly support the null setup. From Table 6.4, we note that the observed and the expected frequencies of the first and the last class are less than 5. Hence, according to the convention, we may pool the observed and the expected frequency of the first two classes and the last two classes. For the pooled data, Tn (P) = 0.8412. Under H0 , Tn ∼ χ23 . For large n, H0 is rejected if Tn (P) > c, with α = 0.05, c = χ23,0.95 = 7.8147, which is larger than value of Tn (P). Hence, the conclusion remains the same that data do not show sufficient evidence to reject

336

6

Goodness of Fit Test and Tests for Contingency Tables

Table 6.4 Truncated binomial distribution: observed and expected frequencies i

oi

p0i

ei

1 2 3 4 5 6

3 8 12 15 9 3

0.0370 0.1388 0.2776 0.3123 0.1874 0.0468

1.8508 6.9404 13.8809 15.6159 9.3696 2.3424

H0 . The claim that the data are from binomial B(6, .6) distribution, truncated at 0 may be accepted. In R, there is a built-in function chisq.test to test the goodness of fit. It is based on the Karl Pearson’s chi-square test statistic. We demonstrate it in Sect. 6.7.  In the goodness of fit test in Example 6.3.1 and in Example 6.3.2, the null hypothesis is simple. Hence, it is expressed as H0 : p = p 0 in the setup of multinomial distribution, where p 0 is a completely specified vector. Suppose we have data on scores of students, classified in k classes with corresponding frequencies. The conjecture is that the scores have a normal N (μ, σ 2 ) distribution, then the hypothesis can be expressed as H0 : p = p(θ), where θ = (μ, σ 2 ) and the cell probabilities are functions of an unknown parameter θ. Thus, the null hypothesis is a composite hypothesis as in Theorem 5.2.4 or Theorem 6.2.1. In such a setup, the first step is to estimate θ from the given data and then estimate cell probabilities. We use the likelihood ratio test, as we have used for testing the validity of the model. This is the most common scenario in the goodness of fit test. The next example is a typical example of fitting a continuous distribution, normal in this case, to the data presented in the form of grouped frequency distribution. The procedure is similar for any other continuous distribution. We use Theorem 6.2.1 to determine the critical region.  Example 6.3.3

It is often assumed the IQ scores of human beings are normally distributed. We test this claim for the data given in Table 6.5, using the likelihood ratio test procedure. Suppose a random variable Y denotes the IQ score. Then the conjecture is Y ∼ N (μ, σ 2 ) distribution, which we test on the basis of the data conveying the number X = {X 1 , X 2 , X 3 , X 4 , X 5 , X 6 } of human beings with IQ score within a specified class interval for 6 class intervals. Observe that X 1 + X 2 + X 3 + X 4 + X 5 + X 6 = n = 100. The parameters μ and σ 2 are unknown and we estimate these on the basis of the given data. The appropriate probability model for these data is again a multinomial distribution in 6 cells with cell probabilities depending on θ = (μ, σ 2 ) in the null setup. Thus, for i = 1, 2, . . . , 6,     xi−1 − μ xi − μ − pi (θ) = Pθ [xi−1 ≤ X ≤ xi ] =  σ σ = gi (μ, σ 2 ), say .

6.3 Goodness of Fit Test

337

Table 6.5 IQ scores: grouped frequency distribution IQ score

≤ 90

(90, 100]

(100, 110]

(110, 120]

(120, 130]

> 130

Frequency

10

18

23

22

18

9

In the entire parameter space, the parameter is p = ( p1 , p2 , p3 , p4 , p5 , p6 ) with 6 2 i=1 pi = 1. The conjecture that Y ∼ N (μ, σ ) distribution can be converted in terms of the null hypothesis H0 : p = p(θ) against the alternative H1 : p = p(θ), where p(θ) = ( p1 (θ), p2 (θ), . . . , p6 (θ)) . Suppose θˆ n denotes the maximum likelihood estimator of θ on the basis of given data. As the underlying probability model is a normal distribution, θˆ n is CAN for θ. Further, pˆ n (θˆ n ) is the maximum likelihood estimator of p(θ). In the entire parameter space, the maximum likelihood estimator of p is given by pˆ in = X i /n, i = 1, 2, . . . , 6 and in the null space pˆ in (θˆ n ) = ((xi − μˆ n )/σˆ n ) − ((xi−1 − μˆ n )/σˆ n ), i = 1, 2, . . . , k. From Theorem 6.2.1, for large n under H0 , −2 log λ(X ) ∼ χ2m where m = 5 − 2 = 3 as in the entire parameter space we estimate 5 parameters and in the null space we estimate 2 parameters. For large n, H0 is rejected if −2 log λ(X ) > c where c is such that the size of the test is α and is determined using the χ23 distribution. Thus c = χ23,1−α . For the given data μˆ n = X n = 109.7 and σˆ n2 = 210.91, which are calculated by taking the frequency of class (140) as 0. Further, −2 log λ(X ) = 2 & c=

6 

[X i (log pˆ in − log pˆ in (θˆ n ))] = 5.694188

i=1 2 χ3,0.95

= 7.8147 > −2 log λ(X )

with α = 0.05. Thus, data do not have sufficient evidence to reject H0 and we may conclude that the normal distribution seems to be an appropriate model for IQ scores.  Example 6.3.3 is a typical example of a goodness of fit test, in which null hypothesis is H0 : p = p(θ), where the cell probabilities are functions of an unknown paramcommonly used test statistic is Karl Pearson’s eter θ of dimension l × 1 say. The most k (oi − ei )2 /ei . In this setup, the Pearsonchi-square statistic given by Tn (P) = i=1 Fisher theorem (Kale and Muralidharan [1]) states that under H0 , Tn (P) ∼ χ2k−1−l distribution where l is the number of parameters estimated in the null setup. In the following Theorem 6.3.2, we prove that for a multinomial distribution when cell probabilities are indexed by θ, the likelihood ratio test statistic and Karl Pearson’s chisquare test statistic for testing H0 : p = p(θ) against the alternative H1 : p = p(θ) have the null distribution. We further show that a test statistic ksame asymptotic (oi − ei )2 /oi also has the same asymptotic null distribution, which is Wn = i=1 chi-square.

338

6

Goodness of Fit Test and Tests for Contingency Tables

Theorem 6.3.2 In a multinomial distribution with k cells having cell probabilities k pi = 1, supp = ( p1 , p2 , . . . , pk ) where pi > 0 ∀ i = 1, 2, . . . , k and i=1 pose we want to test H0 : p = p(θ) against the alternative H1 : p = p(θ), where θ is an indexing parameter of dimension l < k. It is assumed that a multinomial distribution, when cell probabilities are indexed by θ, belongs to a Cramér family. Then (i) the likelihood ratio −2 log λ(X ) and Karl Pearson’s chi-square k test statistic (oi − ei )2 /ei have the same asymptotic null distribution statistic Tn (P) = i=1 as χ2k−1−l , k (oi − ei )2 /oi also has the asymptotic null distri(ii) the test statistic Wn = i=1 2 bution as χk−1−l .

Proof Suppose Y = (Y1 , Y2 , . . . , Yk−1 ) has a multinomial distribution in k cells with cell probabilities p. Suppose X = (X 1 , X 2 , . . . , X k ) denotes the vector of cell k frequencies corresponding to a random sample of size n from Y with i=1 X i = n. The maximum likelihood estimator pˆ n = ( pˆ 1n , pˆ 2n , . . . , pˆ kn ) of p is then given by pˆ n = (X 1 /n, X 2 /n, . . . , X k /n) . Suppose the cell probabilities are indexed by a parameter θ, which is a vector valued parameter of dimension l < k. To test H0 : p = p(θ) against the alternative H1 : p = p(θ) using the likelihood ratio test procedure, note that the null space 0 is given by k ˆ 0 = { p| p = p(θ), i=1 pi (θ) = 1}. Suppose θ n denotes the maximum likelihood estimator of θ based on the observed data X . Since the distribution belongs to a Cramér family, θˆ n is CAN for θ with approximate dispersion matrix I −1 (θ)/n. In Theorem 6.2.1, we have proved that −2 log λ(X ) has χ2k−l−1 distribution. (i) To prove that Tn (P) ∼ χ2k−l−l distribution under H0 , we proceed on similar lines as in Theorem 6.3.1. Note that oi = X i and ei = npi (θˆ n ) denote the observed and expected frequencies of i-th class, i = 1, 2, . . . , k. Suppose √ √ ˆ ˆ u i = n( pˆ in − pi (θn )) ⇔ pˆ in = pi (θn ) + u i / n , i = 1, 2, . . . , k. Further, k k k √ it is to be noted that i=1 u i / n = i=1 pˆ in − i=1 pi (θˆ n ) = 0. With these substitutions, −2 log λ(X ) can be expressed as follows: −2 log λ(X ) = 2

k 

X i (log pˆ in − log pi (θˆ n )) = 2n

i=1

k  

i=1



pˆ in (log pˆ in − log pi (θˆ n ))

i=1

  ui ˆ ˆ log pi (θn ) + √ − log pi (θn ) = 2n n i=1    k   u u i i pi (θˆ n ) + √ log 1 + √ = 2n n n pi (θˆ ) ui pi (θˆ n ) + √ n



k 

n

6.3 Goodness of Fit Test

= 2n

k   i=1



339

ui pi (θˆ n ) + √ n

 ×

 u i3 u i2 + − ··· − √ n pi (θˆ n ) 2n( pi (θˆ n ))2 3n 3/2 ( pi (θˆ n ))3   k  u i2 u i3 ui = 2n + ··· + √ − n 2npi (θˆ n ) 3n 3/2 ( pi (θˆ n ))2 i=1   k  u i2 u i4 u i3 + 2n + + ··· − ˆ 2n 3/2 ( pi (θˆ n ))2 3n 2 ( pi (θˆ n ))3 i=1 npi (θ n )   k  u i2 ui = 2n +√ + Vn , n 2npi (θˆ n ) ui

i=1

k u i4 + 2n i=1 a2 + · · · , where a1 and a2 ˆ ( pi (θn ))3 n 2 k u i are constants. Thus, using the fact that i=1 √n = 0, we have where Vn = 2n

k

u i3 i=1 a1 ( p (θˆ ))2 n 3/2 i n

−2 log λ(X ) =

k  i=1

u i2 + Vn pi (θˆ n )

=

k  n( pˆ in − pi (θˆ n ))2 + Vn pi (θˆ )

i=1

n

i=1

k  (n pˆ in − npi (θˆ n ))2 + Vn = npi (θˆ ) n

=

k  (oi − ei )2 + Vn . ei i=1

P

Thus, −2 log λ(X ) − Tn (P) = Vn . If we show that Vn → 0 then −2 log λ(X ) and P

Tn (P) have the same limit law. To prove Vn → 0, we consider the first term of Vn given by √ k u i3 2a1  ( n( pˆ in − pi (θˆ n )))3 = √ 2 3/2 ˆ n ( pi (θn )) n ( pi (θˆ n ))2 i=1 i=1   √ √ k 2a1  ( n( pˆin − p0i ) − n( pi (θˆ n ) − p0i ))3 = √ n ( pi (θˆ n ))2 i=1   √ √ k 2a1  ( n( pˆin − p0i ))3 − ( n( pi (θˆ n ) − p0i ))3 = √ n ( pi (θˆ n ))2 i=1   √ √ √ √ k 2a1  3( n( pˆin − p0i ))2 ( n( pi (θˆ n ) − p0i )) − 3( n( pˆ in − p0i ))( n( pi (θˆ n ) − p0i )2 ) , + √ n ( pi (θˆ ))2

T1n = 2n

k 

a1

i=1

n

where p0i = pi (θ0 ). We first consider the term √ k k−1 √ 2a1  ( n( pˆ in − p0i ))3 2a1  ( n( pˆ in − p0i ))3 = √ √ n n ( pi (θˆ n ))2 ( pi (θˆ n ))2 i=1 i=1 √ 2a1 ( n( pˆ kn − p0k ))3 + √ . n ( pk (θˆ n ))2

340

6

Suppose Y n =

√ 

n( pˆ 1n − p01 ) , ( p1 (θˆ n ))2

Goodness of Fit Test and Tests for Contingency Tables

 √ √ n( pˆ (k−1)n − p0(k−1) ) n( pˆ 2n − p02 ) , . . . , ( p2 (θˆ n ))2 ( pk−1 (θˆ n ))2 

= An Z n ,

where An = diag 1/( p1 (θˆ n ))2 , 1/( p2 (θˆ n ))2 , . . . , 1/( pk−1 (θˆ n ))2 . Suppose   P P θˆ n → θ0 , then An → A = diag 1/( p1 (θ0 )2 , 1/( p2 (θ0 )2 , . . . , 1/( pk−1 (θ0 ))2 . Further, Zn =

√  √ √ n( pˆ 1n − p01 ), n( pˆ 2n − p02 ), . . . , n( pˆ (k−1)n − p0(k−1) )

L

→ Z ∼ Nk−1 (0, I −1 ( p 0 )), where I ( p 0 ) is a positive definite matrix. Hence by Slutsky’s theorem, L

Y n = An Z n → AZ . As in Theorem 6.3.1, a function g : Rk−1 → R is defined k−1 3 xi , then it is a continuous function. By the continuous mapping as g(x) = i=1 theorem  √  k−1  ( n( pˆ in − p0i ))3 L → g(AZ ), g(Y n ) = 2 ˆ ( pi (θ )) n

i=1

which implies is bounded in probability and hence  that g(Y n )  k−1 (√n( pˆin − p0i ))3 P 2a 1 √ → 0. Now using the fact that i=1 ˆ 2 n ( pi (θn ))

pk = 1 − p1 − p2 − · · · pk−1 , the term

√ ( n( pˆ kn − p0k ))3 2a √1 n ( pk (θˆ n ))2

can be expressed as

3 k−1 √ √ 2a1 ( n( pˆ kn − p0k ))3 2a1 1 = √ ( n( pˆ in − p0i )) . √ n n ( pk (θˆ n ))2 ( pk (θˆ n ))2 i=1 Now defining a function g : Rk−1 → R as g(x) =



k−1 i=1 x i

3

, which is a continu 3 k−1 √ ous function, by the continuous mapping theorem g(Z n )= i=1 ( n( pˆ in − p0i ))

→ g(Z ). Further, ( pk (θˆ n ))2 → ( pk (θ0 ))2 . Hence, √ √ P (2a1 /  n)( n( pˆ kn − p0k ))3 /( pk (θˆ n ))2 → 0. To examine whether k √ 3 ˆ ˆ 2 converges in law, it is to be noted that by i=1 ( n( pi (θ n ) − p0i )) /( pi (θ n )) the mean value theorem, L

P

√ √ n( pi (θˆ n ) − p0i ) = δi |θ∗n × n(θˆ n − θ0 )   ∂ ∂ ∂ where δi = pi (θˆ n ), pi (θˆ n ), . . . , pi (θˆ n ) ∂θ1 ∂θ2 ∂θl P P and θ∗n = αθ0 + (1 − α)θˆ n , 0 < α < 1. Since θˆ n → θ0 , we have θ∗n → θ0 . Hence,

6.3 Goodness of Fit Test

341

√  √ √ n( pk (θˆ n ) − p0k ) n( p1 (θˆ n ) − p01 ) n( p2 (θˆ n ) − p02 ) Un = , ,..., ( p1 (θˆ n ))2 ( p2 (θˆ n ))2 ( pk (θˆ n ))2 √  √ √ = Bn n( p1 (θˆ n ) − p01 ), n( p2 (θˆ n ) − p02 ), . . . , n( pk (θˆ n ) − p0k )   √ √ √ = Bn δ1 |θ∗n n(θˆ n − θ0 ), δ2 |θ∗n n(θˆ n − θ0 ), . . . , δk |θ∗n n(θˆ n − θ0 ) √ = Bn Mn n(θˆ n − θ0 ), where Bn is a diagonal matrix of order k × k with diagonal elements 1/( pi (θˆ n ))2 , i = 1, 2, . . . , k and Mn is a matrix of order k × l with i-th row as δi |θ∗n , P P i = 1, 2, . . . , k. Since θˆ n → θ0 , we have Bn → B where B is a matrix of order P

k × k with diagonal elements 1/( pi (θ0 ))2 , i = 1, 2, . . . , k. Since θ∗n → θ0 , we have P

Mn → M where M is a matrix of order k × l with i-th row as δi |θ0 , i = 1, 2, . . . , k. √ L Further, n(θˆ n − θ0 ) → Z ∼ Nl (0, ), where  is a positive definite matrix. √ ˆ L Hence, Bn Mn n(θn − θ0 ) → B M Z . Again defining a function g : Rk → R as k g(x) = i=1 xi3 , which is a continuous function and by using the continuous map√ k P (( pi (θˆ n ) − p0i ))3 /( p1 (θˆ n ))2 → 0. Now ping theorem, we get that (2a1 / n) i=1 using the similar arguments, it can be shown that the remaining terms in T1n P

and Vn converge in probability to 0. Thus, Vn → 0 and hence −2 log λ(X ) and k (oi − ei )2 /ei both have the limit law as χ2k−l−1 under H0 . Tn (P) = i=1 (ii) As in Theorem 6.2.1, Tn (P) can be expressed as k  Tn (P) = n( pˆ in − pi (θˆ n ))2 / pi (θˆ n ) = n( pˆ n − p(θˆ n )) An ( pˆ n − p(θˆ n )) , i=1

  where An = diag 1/ p1 (θˆ n ), 1/ p2 (θˆ n ), . . . , 1/ pk (θˆ n ) . Similarly, Wn can be expressed as Wn =

k k   (oi − ei )2 (n pˆ in − npi (θˆ n ))2 = oi n pˆ in i=1

i=1

=

k  n( pˆ in − pi (θˆ n ))2 = n( pˆ n − p(θˆ n )) Bn ( pˆ n − p(θˆ n )), pˆ in i=1

  where Bn = diag 1/ pˆ 1n , 1/ pˆ 2n , . . . , 1/ pˆ kn . Suppose A = diag(1/ p01 , 1/ p02 , . . . , 1/ p0k ). Now under H0 , P P P P θˆ n → θ0 ⇒ An → A & pˆ n → p 0 ⇒ Bn → A

⇒ Tn (P) − Wn = n( pˆ n − p(θˆ n )) (An − Bn )( pˆ n − p(θˆ n )) → 0 P

⇒ Wn =

k 

L

L

(oi − ei )2 /oi → U ∼ χ2k−l−1 as Tn (P) → U ∼ χ2k−l−1 .

i=1



342

6

Goodness of Fit Test and Tests for Contingency Tables

Table 6.6 IQ scores: observed and expected frequencies IQ score

oi = X i

pˆ in (θˆ n )

ei

≤90 (90, 100] (100, 110] (110, 120] (120, 130] >130

10 18 23 22 18 9

0.067046 0.164623 0.256148 0.252670 0.158005 0.062613

6.7046 16.4622 25.6148 25.2670 15.8005 6.2613

For the data given in Example 6.3.3, we find the values of Tn and Wn . We have μˆ n = X n = 109.7 and σˆ n2 = 210.91. Further, we note that oi = X i and we find ei as ei = n pˆ in (θˆ n ), i = 1, 2, . . . , 6. Table 6.6 displays the values of observed and expected frequencies. For the given data, Tn (P) = 3.9567. For large n, H0 is rejected if Tn > c where c is such that the size of the test is α and the asymptotic null distribution of Tn (P), which is χ23 . Thus c = χ23,1−α . Further for the given data, the value of the test statistic Wn is 3.1019. For large n, H0 is rejected if Wn > c, where c is such that the size of the test is α and is determined using the χ23 distribution. Thus if α = 0.05 then c = χ23,0.95 = 7.8147 which is larger than values of both Tn (P) and Wn . Hence, data do not show the sufficient evidence to reject H0 and normal distribution can be taken as a good model for IQ scores.  Remark 6.3.1

In Example 6.3.2, the parameters of the binomial distribution are known. In most of the situations, the numerical parameter is known, but the probability of success θ is not known. We then estimate it from the given data. Thus, for binomial B(m, θ) distribution, truncated at 0, the maximum likelihood estimator which is same as the moment estimator based on the sufficient statistic is given by the solution of the equation X n = mθ/(1 − (1 − θ)m ). In such a situation the degrees of freedom of the asymptotic null chi-square distribution is reduced by 1, as we estimate one parameter in the null space. In the next section, we discuss a class of large sample tests and exhibit how likelihood ratio test, test based on Karl Pearson’s chi-square statistic and a test based on Wn = (oi − ei )2 /oi are related to each other in a general setup.

6.4

Score Test and Wald’s Test

In Theorem 6.3.1 and Theorem 6.3.2, we have noted that for a multinomial distribution in k cells, for testing H0 : p = p 0 against the alternative H1 : p = p 0 or for testing H0 : p = p(θ) against the alternative H1 : p = p(θ), the likelihood ratio test, test based on Karl Pearson’s chi-square statistic and a test based on the test

6.4 Score Test and Wald’s Test

343

k statistic Wn = i=1 (oi − ei )2 /oi are asymptotically equivalent, in the sense that the asymptotic null distributions of all the three test statistics are the same. In this section, we study a general class of large sample tests where these three test statistics are asymptotically equivalent. This class includes a score test and Wald’s test. Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk , k ≥ 1. Suppose θˆ n is the maximum likelihood estimator of θ based on a random sample {X 1 , X 2 , . . . , X n } from the distribution of X . Further, we assume that θˆ n is a CAN estimator of θ with approximate dispersion matrix I −1 (θ)/n, which is true if the probability law f (x, θ) belongs to a k-parameter exponential family or a k-parameter Cramér family. As a consequence Q n (X , θ) = n(θˆ n − θ) I (θ)(θˆ n − θ) → U ∼ χ2k . L

Suppose Sn (X , θ) is defined as Sn (X , θ) = n(θˆ n − θ) Mn (θˆ n − θ), where Mn = [Mn (i, j)] & Mn (i, j) = −

1 ∂ 2 log L n (θ|X ) |θˆ . n n ∂θi ∂θ j





Since Mn (i, j) → Ii j (θ) for all i, j = 1, 2, . . . , k, we have Mn → I (θ). Hence as n → ∞, Sn (X , θ) − Q n (X , θ) = n(θˆ n − θ) (Mn − I (θ))(θˆ n − θ) Pθ

→0



L

Sn (X , θ) → U ∼ χ2k .

Further, we define Wn (X , θ) as Wn (X , θ) = n(θˆ n − θ) I (θˆ n )(θˆ n − θ) . We have θˆ n to be consistent for θ and hence if each element of matrix I (θ) is a P continuous function of θ, then I (θˆ ) → I (θ). Consequently, n

Wn (X , θ) − Q n (X , θ) = n(θˆ n − θ) (I (θˆ n ) − I (θ))(θˆ n − θ) P

L

→ 0 ⇒ Wn (X , θ) → U ∼ χ2k . Suppose Ui (X , θ) for i = 1, 2, . . . , k is defined as n 1 ∂ log L n (θ|X ) 1  ∂ log f (X r , θ) Ui (X , θ) = √ =√ ∂θi ∂θi n n r =1

344

6

Goodness of Fit Test and Tests for Contingency Tables

and a random vector V n (X , θ) is defined as V n (X , θ) = (U1 (X , θ), U2 (X , θ), . . . , Uk (X , θ)) . V n (X , θ) is known as a vector of score functions. Further it is known that  Eθ

∂ log f (X , θ) ∂θi





∂ log f (X , θ) ∂ log f (X , θ) = 0 & Cov , ∂θi ∂θ j = Ii j (θ), i, j = 1, 2, . . . , k .



As a consequence, E θ (V n (X , θ)) = 0 and dispersion matrix D of V n (X , θ) is I (θ). f (X r ,θ) is denoted by Yir , and (Y1r , Y2r , . . . , Ykr ) by Y r , r = 1, 2, . . . , n, If ∂ log ∂θ i then E(Y r ) = 0 and its dispersion matrix is given by I (θ), which is positive definite. Further, {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables, implies {Y 1 , Y 2 , . . . , Y n } are independent and identically distributed random vectors. Hence by the multivariate CLT, V n (X , θ) = (U1 (X , θ), U2 (X , θ), . . . , Uk (X , θ) n 1  L = √ Y → Z 2 ∼ Nk (0, I (θ)) . n r =1 r Consequently, L

Fn (X , θ) = V n (X , θ)I −1 (θ)V n (X , θ) → U ∼ χ2k . Thus, Q n (X , θ), Sn (X , θ), Wn (X , θ) and Fn (X , θ) have the same limiting distribution and it is χ2k . Further, observe that L

V n (X , θ) → Z 2 ∼ Nk (0, I (θ)) ⇒

L

I −1 (θ)V n (X , θ) → Z 1 ∼ Nk (0, I −1 (θ)) .

√ L We know that n(θˆ n − θ) → Z 1 ∼ Nk (0, I −1 (θ)). Thus, I −1 (θ)V n (X , θ) and √ ˆ n(θn − θ) have the same limit law. In Table 6.7 we summarize all these results. These results are heavily used in testing null hypothesis H0 : θ = θ0 against the alternative H1 : θ = θ0 based on a random sample {X 1 , X 2 , . . . , X n } from the distribution of X . We assume that θ0 is a completely specified vector, thus H0 is a simple null hypothesis. We now discuss three test procedures to test H0 against H1 . Table 6.7 Limit laws of quadratic forms Quadratic form

Limit law

Q n (X , θ) = n(θˆ n − θ) I (θ)(θˆ n − θ) Sn (X , θ) = n(θˆ n − θ) Mn (θˆ n − θ) Wn (X , θ) = n(θˆ − θ) I (θˆ )(θˆ − θ)

χ2k χ2k χ2k χ2k

n

n

n

Fn (X , θ) = V n (X , θ)I −1 (θ)V n (X , θ)

6.4 Score Test and Wald’s Test

345

Likelihood ratio test: In Chap. 5 we have discussed this test procedure in detail. Suppose λ(X ) is a test statistic for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 . In Theorem 5.2.2, it is proved that P

−2 log λ(X ) − Sn (X , θ) → 0



L

− 2 log λ(X ) → U ∼ χ2k .

H0 is rejected if −2 log λ(X ) > c where the cut-off point c is determined corresponding to the given level of significance α and using the asymptotic null distribution of −2 log λ(X ), which is χ2k , which implies that c = χ2k,1−α . Neyman and Pearson proposed this test procedure in 1928. Wald’s test: Wald’s test for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 is based on the test statistic Tn (W ) given by Tn (W ) = Wn (X , θ0 ) = n(θˆ n − θ0 ) I (θˆ n )(θˆ n − θ0 ) . H0 is rejected if Tn (W ) > c where the cut-off point c is determined corresponding to the given level of significance α and using the null distribution of Tn (W ). We L have proved above that Wn (X , θ) = n(θˆ − θ) I (θˆ )(θˆ − θ) → U ∼ χ2 , hence n

n

n

k

the asymptotic null distribution of Wald’s test statistic Tn (W ) is χ2k , which implies c = χ2k,1−α . Wald proposed this test procedure in 1943. Score test: It is proposed by C.R.Rao in 1947. The test statistic for a score test for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 is based on a score function and is given by Tn (S) = Fn (X , θ0 ) = V n (X , θ0 )I −1 (θ0 )V n (X , θ0 ) . H0 is rejected if Tn (S) > c where the cut-off point c is determined corresponding to the given level of significance α and using the asymptotic null distribution L

of Tn (S). We have proved above that Fn (X , θ) → U ∼ χ2k , hence the asymptotic null distribution of the score test statistic Tn (S) is χ2k , which implies that c = χ2k,1−α . Thus, all the three test procedures described above are asymptotically equivalent, in the sense that, the asymptotic null distribution for all the three statistics is the same. For a score test, computation of θˆ n is not necessary, which can be a major advantage for some probability models.  Remark 6.4.1

 It is to be noted that s.e.(θˆ n ) = 1/n I (θˆ n ) is the standard error of θˆ n . Hence, in real parameter setup, Wald’s test statistic and a score test statistic can also be defined as

346

6

Tn∗ (W ) Tn∗ (S)

= =

 

Goodness of Fit Test and Tests for Contingency Tables



Tn (W ) = n I (θˆ n )(θˆ n − θ0 ) = (θˆ n − θ0 )/s.e.(θˆ n )  Tn (S) = n I (θ0 )(θˆ n − θ0 ) = (θˆ n − θ0 )/s.e.(θˆ n )|θ0 ,

where s.e.(θˆ n )|θ0 is the standard error of θˆ n evaluated at θ0 . In both the procedures under H0 , the asymptotic null distributions of the test statistics Tn∗ (W ) and Tn∗ (S) are standard normal. In Wald’s test procedure H0 is rejected at level of significance α if |Tn∗ (W )| > c, similarly in a score test procedure H0 is rejected if |Tn∗ (S)| > c, where c = a1−α/2 . In simple or multiple liner regression or in logistic regression, for testing significance of the regression coefficients, the most frequently used test is Wald’s test. For example, in a simple liner regression model Y = β0 + β1 X + , the test statistic for testing H0 : β1 = 0 against the alternative H1 : β1 = 0 is Tn = βˆ 1n /s.e.(βˆ 1n ), which is Wald’s test statistic. For large n, its null distribution is standard normal and H0 is rejected at level of significance α, if |Tn | > a1−α/2 . Many statistical software use Wald’s test. In the following examples we illustrate these test procedures.  Example 6.4.1

Suppose {X 1 , X 2 , . . . , X n } is a random sample from Cauchy C(θ, 1) distribution. We derive Wald’s test procedure and a score test procedure for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 . In Chap. 4, we have proved that Cauchy C(θ, 1) distribution belongs to a Cramér family. Hence for large n, the maximum likelihood estimator θˆ n is CAN for θ with approximate variance 1/n I (θ) = 2/n √ as I (θ) = 1/2 for Cauchy C(θ, 1) distribution. Thus s.e.(θˆ n ) = 2/n, which is ∗ free from θ. Hence, Wald’s test statistic Tn (W ) and the score test statistic Tn∗ (S) are the same and are given by Tn∗ (W ) = Tn∗ (S) = (θˆ n − θ0 )/s.e.(θˆ n ) =

 n/2(θˆ n − θ0 ) .

For large n, its null distribution is standard normal and H0 is rejected at level of  significance α if |Tn∗ (W )| > a1−α/2 . In the next example for a Bernoulli B(1, θ) distribution, we compute the limit laws of four quadratic forms as listed in Table 6.7 and illustrate how these are useful to test the null hypothesis H0 : θ = θ0 .  Example 6.4.2

Suppose X follows Bernoulli B(1, θ) distribution and X = {X 1 , X 2 , . . . , X n } is a random sample from the distribution of X . We derive likelihood ratio test procedure, Wald’s test procedure and the score test procedure for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 , where θ0 is a specified constant, assuming

6.4 Score Test and Wald’s Test

347

sample size n is large. Since X ∼ B(1, θ), its probability mass function p(x, θ) is p(x, θ) = Pθ [X = x] = θ x (1 − θ)1−x , x = 0, 1; 0 < θ < 1 . Hence, the likelihood function of θ corresponding to the random sample X is n 

L n (θ|X ) = θi=1

Xi

n 

(1−X i )

(1 − θ)i=1 ⇔ log L n (θ|X ) n n   = log θ X i + log(1 − θ)(n − Xi ) . i=1

i=1

log L n (θ|X ) is a differentiable function of θ. Hence, n 

∂ log L n (θ|X ) = ∂θ &

n−

Xi

i=1

θ n 



n 

Xi

i=1

1−θ n  Xi n−

Xi ∂ 2 log L n (θ|X ) i=1 i=1 = − − . ∂θ2 θ2 (1 − θ)2

We assume that all X i s are not 0 or not 1. It is to be noted that ∂ log∂θL n2 (θ|X ) < 0 for any realization of the random sample {X 1 , X 2 , . . . , X n } which implies that at the solution of the likelihood equation, the likelihood attains the maximum. Thus, the maximum likelihood estimator θˆ n of θ is given by θˆ n = X n , which is the relative frequency of occurrence of outcome 1 in the random sample of size L n. By the WLLN, θˆ n → θ and by the CLT 2

√ L n(θˆ n − θ) → Z 1 ∼ N (0, θ(1 − θ)) √ n L ⇔ Zn = √ (θˆ n − θ) → Z ∼ N (0, 1) . θ(1 − θ) Thus, θˆ n is CAN estimator of θ with approximate variance θ(1 − θ)/n. Now, L

Z n → Z ∼ N (0, 1), ⇒ Q n (X , θ) = Z n2 =

n L (θˆ n − θ)2 → U ∼ χ21 . θ(1 − θ)

Further, Mn = −

1 ∂ 2 log L n (θ|X ) 1 |θˆ n = ˆθn (1 − θˆ n ) n ∂θ2 n L (θˆ n − θ)2 → U ∼ χ21 . ⇒ Sn (X , θ) = X n (1 − X n )

348

6

Goodness of Fit Test and Tests for Contingency Tables

The information  2 function I (θ) is given by ,θ) I (θ) = E θ − ∂ log∂θp(X = 1/θ(1 − θ). Hence, 2 Wn (X , θ) = n I (θˆ n )(θˆ n − θ)2 =

n L (θˆ n − θ)2 → U ∼ χ21 . ˆθn (1 − θˆ n )

The score function Vn (X , θ) is given by 1 ∂ log L n (θ|X ) Vn (X , θ) = √ ∂θ n ⎞ ⎛ n n  Xi Xi n− √ ˆ ⎟ 1 ⎜ i=1 i=1 ⎟ = n(θn − θ) . = √ ⎜ − 1−θ ⎠ θ(1 − θ) n⎝ θ

 1 Now, Vn (X , θ) → Z 2 ∼ N 0, θ(1 − θ) n L ⇒ Fn (X , θ) = (θˆ n − θ)2 → U ∼ χ21 . θ(1 − θ) 

L

L

Thus, corresponding to a random sample from B(1, θ), Sn (X , θ) → U , L

L

Wn (X , θ) → U and Fn (X , θ) → U & U ∼ χ21 . Now to test H0 : θ = θ0 against the alternative H1 : θ = θ0 , the likelihood ratio test statistic −2 log λ(X ), Wald’s test statistic Tn (W ) and the score test statistic is Tn (S) are given by   Xn 1 − Xn −2 log λ(X ) = 2n X n log + (1 − X n ) log θ0 1 − θ0 n Tn (W ) = Wn (X , θ0 ) = (θˆ n − θ0 )2 θˆ n (1 − θˆ n ) n & Tn (S) = Fn (X , θ0 ) = (θˆ n − θ0 )2 . θ0 (1 − θ0 ) For large n under H0 , −2 log λ(X ) is distributed as Sn (X , θ0 ) and under H0 , L

L

L

Sn (X , θ0 ) → U , Wn (X , θ0 ) → U , & Fn (X , θ0 ) → U & U ∼ χ21 . H0 is rejected if the value of the test statistic is larger than c, where the cutoff point c is determined corresponding to the given level of significance α and using the null distribution of the test statistic. For all the three test procedures, the asymptotic null distribution is the same as χ21 which implies that c = χ21,1−α .

6.4 Score Test and Wald’s Test

349

We now express Tn (W ) and Tn (S) as follows. Note that the sample mean X n is nothing but the proportion of successes in the sample of size n and is given by θˆ n = X n = o1 /n where o1 denotes the number of 1’s, that is, number of successes in n trials. Suppose o0 denotes the number of 0’s, that is, number of failures in n trials. Suppose e1 = nθ0 and e0 = n(1 − θ0 ) denote the expected number of successes and failures in n trials under the null setup. Further, o1 + o0 = n and e1 + e0 = n. With this notation, Tn (W ) can be rewritten as follows: Tn (W ) = = = = = similarly, Tn (S) = = =

n

(θˆ n − θ0 )2 =

n (o1 /n − θ0 )2 o1 /n(1 − o1 /n)

θˆ n (1 − θˆ n ) (o1 − e1 )2 n (o1 − e1 )2 (o1 + o0 ) = o1 o0 o1 o0 2 (o1 − e1 ) o1 (o1 − e1 )2 (o1 − e1 )2 o1 (1 + ) = + o1 o0 o1 o1 o0 2 2 (o1 − e1 ) (n − o0 − n + e0 ) o1 + o1 o1 o0 2 2 (o1 − e1 ) (o0 − e0 ) + o1 o0 n n3 (θˆ n − θ0 )2 = (o1 /n − θ0 )2 θ0 (1 − θ0 ) nθ0 n(1 − θ0 ) (o1 − e1 )2 n (o1 − e1 )2 (e0 + e1 ) = e1 e0 e1 e0 (o1 − e1 )2 (o1 − e1 )2 e1 (o1 − e1 )2 (o0 − e0 )2 + = + . e1 e1 e0 e1 e0

Thus, Tn (S) is exactly same as Karl Pearson’s chi-square test statistic Tn (P).   Remark 6.4.2

 √ In the above example, it is to be noted that s.e.(θˆ n ) = θˆ n (1 − θˆ n )/ n is the standard error of θˆ n . Hence, for testing H0 : θ = θ0 against the alternative H1 : θ = θ0 for B(1, θ) distribution, Wald’s test statistic Tn∗ (W ) and the score test statistic Tn∗ (S) are as follows: √  θˆ n − θ0 n Tn∗ (W ) = Tn (W ) =  (θˆ n − θ0 ) = s.e.(θˆ n ) θˆ n (1 − θˆ n ) √  θˆ n − θ0 n ∗ Tn (S) = Tn (S) = √ . (θˆ n − θ0 ) = θ0 (1 − θ0 ) s.e.(θˆ n )|θ0 In both the procedures under H0 , the asymptotic null distribution of the test statistics is standard normal. In Wald’s test procedure H0 is rejected if

350

6

Goodness of Fit Test and Tests for Contingency Tables

|Tn∗ (W )| > c and in the score test procedure H0 is rejected if |Tn∗ (S)| > c, where c = a1−α/2 . We rewrite Tn∗ (S) as follows: √ √ n(U /n − p0 ) n Tn∗ (S) = √ (θˆ n − θ0 ) = √ p0 (1 − p0 ) θ0 (1 − θ0 ) √ n(Pn − p0 ) , where p0 = θ0 , = √ p0 (1 − p0 ) U denotes the total number of successes in a random sample of size n and Pn is the proportion of successes. The function prop.test from R uses the score test statistic for testing the null hypothesis H0 : p = p0 based on the sample proportion Pn , where p denotes the population proportion. We illustrate it in Sect. 6.7. In Theorem 6.3.1, it is proved that for a multinomial distribution with k cells, the 2 i) likelihood ratio test statistic and Karl Pearson’s chi-square statistic Tn (P) = (oi −e ei

i) and a statistic Wn = (oi −e for testing H0 : p = p 0 against the alternative oi H1 : p = p 0 , have the same limiting null distribution as χ2k−1 . In the following theorem we prove an additional feature of these test statistics. In Example 6.4.2 for a Bernoulli distribution, we have shown that Wald’s test statistic Tn (W ) can 2 (oi − ei )2 /oi and the score test statistic Tn (S) reduces to be expressed as i=1 2 2 i=1 (oi − ei ) /ei . In the following theorem, we extend the results of this example and prove that in a multinomial distribution with k cells, for testing H0 : p = p 0 against the alternative H1 : p = p 0 , Wald’s test statistic Tn (W ) simk plifies to i=1 (oi − ei )2 /oi while the score test statistic Tn (S) simplifies to Karl k Pearson’s chi-square test statistic Tn (P) = i=1 (oi − ei )2 /ei . 2

Theorem 6.4.1 Suppose Y = {Y1 , Y2 , . . . , Yk−1 } has a multinomial distribution in k cells with cell probabilities p = { p1 , p2 , . . . , pk−1 }, where pi > 0, i = 1, 2, . . . , k with k−1 pk = 1 − i=1 pi . On the basis of a random sample of size n from the distribution of Y , suppose we want to test H0 : p = p 0 against the alternative H1 : p = p 0 , where p 0 is a completely specified vector. Suppose oi and ei denote that observed and expected cell frequencies of i-th cell, i = 1, 2, . . . , k. Then (i) Wald’s test the k (oi − ei )2 /oi and (ii) The score test statistic statistic Tn (W ) is the same as i=1 k Tn (S) is the same as Tn (P) = i=1 (oi − ei )2 /ei .

Proof Suppose X = (X 1 , X 2 , . . . , X k ) denotes the vectorof cell frequencies cork X i = n. The maxresponding to a random sample of size n from Y with i=1  imum likelihood estimator pˆ n = ( pˆ 1n , pˆ 2n , . . . , pˆ (k−1)n ) of p is then given by pˆ n = (X 1 /n, X 2 /n, . . . , X k−1 /n) . (i) Suppose I ( p)(k−1)×(k−1) = [Ii j ( p)] is the information matrix for the multinomial distribution in k cells, which is obtained in Sect. 6.2. Wald’s test for testing

6.4 Score Test and Wald’s Test

351

H0 : p = p 0 against the alternative H1 : p = p 0 is based on the test statistic Tn (W ) = n( pˆ n − p 0 ) I ( pˆ n )( pˆ n − p 0 ) with Iii ( p) =

1 1 1 + & Ii j ( p) = . pi pk pk

Suppose o = (o1 , o2 , . . . , ok−1 ) = n pˆ n and e = (e1 , e2 , . . . , ek−1 ) = n p 0 denote the vector of observed and expected cell frequencies. Then Tn (W ) can be expressed as 1 (n pˆ n − n p 0 ) I ( pˆ n )(n pˆ n − n p 0 ) n 1 = (o − e) I ( pˆ n )(o − e), n

Tn (W ) = n( pˆ n − p 0 ) I ( pˆ n )( pˆ n − p 0 ) =

which after using the form of I ( pˆ n ), simplifies as follows:   k−1  k−1 1 1 1 + Tn (W ) = (oi − ei ) + (oi − ei )(o j − e j ) n pˆ in n pˆ kn n pˆ kn i=1 i=1 j=i=1 ⎧ ⎫ k−1 k−1 k−1  k−1 ⎨ ⎬  (oi − ei )2   1 = + (oi − ei )2 + (oi − ei )(o j − e j ) ⎭ oi ok ⎩ k−1 

i=1

=

k−1  i=1



2

i=1

2 k−1 (oi − ei )2 1  + (oi − ei ) oi ok

i=1 j=i=1

i=1

k−1 k   (oi − ei )2 1 = + (ok − ek )2 as (oi − ei ) = 0 oi ok i=1

i=1

k  (oi − ei )2 . = oi i=1

(ii) Score test for testing H0 : p = p 0 against the alternative H1 : p = p 0 is based on the test statistic Tn (S) = V n (X , p 0 )I −1 ( p 0 )V n (X , p 0 ), where V n (X , p) is a vector of score functions. It is defined as V n (X , p) = (U1 (X , p), U2 (X , p), . . . , Uk−1 (X , p)) 1 ∂ log L n ( p|X ) where Ui (X , p) = √ . ∂ pi n L

It is proved that V n (X , p) → Z 1 ∼ Nk−1 (0, I ( p)). Consequently, L

V n (X , p)I −1 ( p)V n (X , p) → U which has χ2k−1 distribution. In Sect. 6.2, we have verified that the information matrix I ( p) is the inverse of the dispersion matrix

352

6

Goodness of Fit Test and Tests for Contingency Tables

D = [σi j ] where σii = pi (1 − pi ) and σi j = − pi p j of Y = (Y1 , Y2 , . . . , Yk−1 ) having multinomial distribution in k cells. Hence, with V n ≡ V n (X , p 0 ), Tn (S) can be written as Tn (S) = V n D( p 0 )V n . To show that Tn (S) = Tn (P), note that X i = oi and ei = np0i . The log-likelihood of p corresponding to the observed cell frequencies k X i log pi . Hence, X is given by log L n ( p|X ) = i=1   1 ∂ log L n ( p|X ) 1 Xk Xi Ui (X , p) = √ = √ − ∂ pi pk n n pi   X1 1 Xk X2 Xk X k−1 Xk − , − , ..., − ⇒ V n = √ p0k p02 p0k p0(k−1) p0k n p01   1 ok o2 ok ok−1 ok o1 = √ n − , − , ..., − e1 ek e2 ek ek−1 ek n   √ ok − ek o2 − e2 ok − ek ok−1 − ek−1 ok − ek o1 − e1 . = n − , − , ..., − e1 ek e2 ek ek−1 ek

Further, the dispersion matrix D = [σi j ] under H0 can be rewritten as σii = p0i (1 − p0i ) =

ei  ei (n − ei ) ei  1− = & n n n2

σi j = − p0i p0 j = −

ei e j . n2

With these substitutions, Tn (S), which is a quadratic form, can be expressed as follows. nTn (S) =

 k−1   oi − ei ok − ek 2 − ei (n − ei ) ei ek i=1

−2

  k−1  k−1   oj − ej oi − ei ok − ek ok − ek ei e j − − ei ek ej ek i=1 j=i+1

  k−1  k−1   oi − ei 2 ok − ek 2  = ei (n − ei ) + ei (n − ei ) ei ek i=1



−2

ok − ek ek

 k−1  i=1

oi − ei ei

i=1

 ei (n − ei )

   k−1 k−1  k−1  k−1   oj − ej oi − ei ok − ek 2   ei e j − 2 ei e j −2 ei ej ek i=1 j=i+1

i=1 j=i+1

 k−1 k−1 ! " oj − ej ok − ek   oi − ei ei e j +2 + ek ei ej 

i=1 j=i+1

=n

k−1 k−1 k−1  k−1   (oi − ei )2  − (oi − ei )2 − 2 (oi − ei )(o j − e j ) ei i=1

i=1

i=1 j=i+1

6.4 Score Test and Wald’s Test

353





 k−1 k−1  k−1  ok − ek 2 ⎣ ei (n − ei ) − 2 ei e j ⎦ ek i=1 i=1 j=i+1   ok − ek +2 ek ⎤ ⎡ "  k−1 k−1  k−1   ! oi − ei  oj − ej oi − ei ⎣ ei e j − ei (n − ei )⎦ + ei ej ei 

+

i=1 j=i+1

.

i=1

(6.4.1) From the definition of expected frequencies we have k−1 

ei (n − ei ) − 2

i=1

k−1  k−1 

k

i=1 ei

ei e j = n(n − ek ) −

i=1 j=i+1

k−1 

= n. As a consequence,

ei2 − 2

i=1

k−1 2  = n(n − ek ) − ei

k−1  k−1 

ei e j

i=1 j=i+1

i=1

= n(n − ek ) − (n − ek )2 = nek − ek2 . Hence the term ⎡ ⎤   k−1 k−1  k−1  ok − ek 2 ⎣ n(ok − ek )2 ei (n − ei ) − 2 ei e j ⎦ = − (ok − ek )2 . ek ek i=1

i=1 j=i+1

Now, we simplify the sum Un =

Un =

k−1  k−1 

'

k−1 k−1 i=1

j=i+1

oi −ei ei

+

o j −e j ej

( ei e j as follows.

[(oi − ei )e j + (o j − e j )ei ]

i=1 j=i+1

=

k−1  i=1

=

k−1 

k−1 

k−1 

(oi − ei )

ej +

k−1 

(oi − ei )

k−1  i=1

k−1  i=1

j=i+1

i=1

+

ej +

j=i+1

i=1

=

k−1 

(oi − ei )

k−1  j=1

ei

k−1 

(o j − e j )

j=i+1

(o j − e j )

j−1 

ei by interchanging the sums

i=1

ej

j=i+1

(oi − ei )

i−1  j=1

e j by interchanging i and j in second term

354

6

=

k−1 

⎡ (oi − ei ) ⎣

i=1

i−1 

ej +

j=1

Goodness of Fit Test and Tests for Contingency Tables



k−1 

ej⎦ =

k−1  (oi − ei )(n − ei − ek ).

j=i+1

i=1

Further,  k−1  k−1   oi − ei ei (n − ei ) = (oi − ei )[(n − ei − ek ) − (n − ei )] Un − ei i=1

i=1

=−

k−1 

(oi − ei )ek .

i=1

  ' ' ( o j −e j k−1 k−1 oi −ei k Substituting this expression in 2 oke−e + i=1 j=i+1 e e k i j ( k−1  oi −ei  k−1 ei e j − i=1 e (n − e ) , it reduces to −2(o − e ) (o − e ). With i i k k i i=1 i ei these simplifications, expression for nTn (S) in (6.4.1) can be written as, nTn (S) = n

k−1 k−1 k−1  k−1   (oi − ei )2  − (oi − ei )2 − 2 (oi − ei )(o j − e j ) ei i=1

i=1

i=1 j=i+1

 n(ok − ek )2 − (ok − ek )2 − 2(ok − ek ) (oi − ei ) ek k−1

+

i=1

k k k k−1    (oi − ei )2  − (oi − ei )2 − 2 (oi − ei )(o j − e j ) =n ei i=1

=n

k  (oi − ei )2 − ei i=1

=n

i=1

 k 

2

i=1 j=i+1

(oi − ei )

i=1

k k   (oi − ei )2 as (oi − ei ) = 0 . ei i=1

i=1

k Thus, the score test statistic Tn (S) simplifies to Tn (P) = i=1 (oi − ei )2 /ei , the most frequently used Karl Pearson’s chi-square statistic. In Theorem 6.3.1, it is  proved that for large n, Tn (W ) and Tn (S) have χ2k−1 distribution under H0 . In Sect. 6.7 we verify the results of Theorem 6.4.1 by simulation using R. For agiven sample from a multinomial distribution, it is always simple to compute k 2 i=1 (oi − ei ) /oi , instead of Tn (W ) and Tn (P) instead of Tn (S). It will be clear from Example 6.7.6 in Sect. 6.7.

6.4 Score Test and Wald’s Test

355

 Remark 6.4.3

At the beginning of this section we have shown that for a class of distributions with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk , k ≥ 1, if the maximum likelihood estimator θˆ n of θ is a CAN estimator of θ with approximate dispersion matrix I −1 (θ)/n, then the asymptotic null distributions of Wald’s test statistic Tn (W ) and the score test statistic Tn (S) is χ2k−1 . In Theorem 6.3.1, it is proved that for a class of multinomial distributions in k cells, Tn (W ) and k  k 2 2 distributed i=1 (oi − ei ) /oi , also Tn (S) and i=1 (oi − ei ) /ei are identically k for large n in null setup. Theorem 6.4.1 proves that Tn (W ) and i=1 (oi − ei )2 /oi k are identical random variables, similarly Tn (S) and Tn (P) = i=1 (oi − ei )2 /ei are identical random variables, for any n.

 Remark 6.4.4

Suppose Y = {Y1 , Y2 , . . . , Yk−1 } has a multinomial distribution in k cells with cell probabilities being function of θ, which may be real or vector valued parameter. Thus, p(θ) = { p1 (θ), p2 (θ), . . . , pk (θ)}, where pi (θ) > 0, i = 1, 2, . . . , k k pi (θ) = 1. On the basis of a random sample of size n from the and i=1 distribution of Y , suppose we want to test H0 : θ = θ0 which is equivalent to H0 : p = p 0 = p(θ0 ) against the alternative H1 : p = p 0 . Thus, p 0 is a again completely specified vector.  However, in this setup Wald’s test statistic k (oi − ei )2 /oi but score test statistic is is Tn (W ) is in general not equal to i=1 k 2 Tn (S) = i=1 (oi − ei ) /ei . It is illustrated in Example 6.7.7.  Remark 6.4.5

In Theorem 6.4.1, the null hypothesis is simple. In many test procedures the null hypothesis is composite, for example, goodness of fit test procedures as in Example 6.3.3. Suppose Y = {Y1 , Y2 , . . . , Yk−1 } has a multinomial distribution in k cells with cell probabilities p being a function of θ, a vector valued parameter of dimension l × 1, l < k. On the basis of a random sample of size n from the distribution of Y , suppose we want to test H0 : p = p(θ) against the alternative H1 : p = p(θ). Observe that the null hypothesis is a composite hypothesis. In null space we first obtain the maximum likelihood estimator θˆ of θ and hence of p(θ). n

Thus, the expected frequency of i-th cell is ei = npi (θˆn ). It is to be noted that in the proof of Theorem 6.4.1, when the null hypothesis is simple, the derivation to show that Tn (S) = Tn (P) depends on the vector of score functions, expressed in terms of oi and ei and the inverse of information matrix, again expressed in terms of ei ’s. The derivation remains valid when the null hypothesis is H0 : p = p(θ). Thus, in this setup also the score test statistic and Karl Pearson’s chi-square test statistic are k (oi −ei )2 the same. However, Wald’s test statistic does not reduce to i=1 . In the oi next section, we prove that the score test statistic and Karl Pearson’s test statistic

356

6

Goodness of Fit Test and Tests for Contingency Tables

are the same in more general setup as well. For example, in a r × s contingency table, underlying model is again a multinomial distribution. Suppose we wish to test that the two criteria A and B are not associated with each other, then the null hypothesis is composite and in the null setup the cell probabilities have some relations among them. In particular, the r s − 1 cell probabilities are expressed in terms of r + s − 2 parameters. In this case also, the score test statistic and Karl Pearson’s test statistic are the same. We will elaborate on this in Sect. 6.5. We now briefly discuss the score test and Wald’s test for testing a composite null hypothesis, when in the null setup the parameters have some functional relations among themselves and when underlying model need not be multinomial. Suppose X is a random variable or a random vector with probability law f (x, θ) indexed by a parameter θ ∈  ⊂ Rk , k ≥ 1. Suppose θˆ n is the maximum likelihood estimator of θ in the entire parameter space, based on a random sample {X 1 , X 2 , . . . , X n } from the distribution of X . Further, we assume that θˆ n is a CAN estimator of θ with approximate dispersion matrix I −1 (θ)/n of order k × k. Suppose the null hypothesis is H0 : θi = gi (β1 , β2 , . . . , βm ) , i = 1, 2, . . . , k, m ≤ k and g1 , g2 , . . . , gk are Borel measurable functions from Rm to R, having continuous partial derivatives of first order. Thus k parameters are expressed in terms of m parameters. Suppose θ˜ n is the maximum likelihood estimator of θ in a null setup, that is, θ˜ n = g(β˜ n ), where β˜ n is the maximum likelihood estimator of β = (β1 , β2 , . . . , βm ) based on the random sample {X 1 , X 2 , . . . , X n } from the distribution of X . Further, we assume that β˜ n is a CAN estimator of β with approximate dispersion matrix I −1 (β)/n of order m × m. The likelihood ratio test statistic λ(X ) is then given by sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

L n (θ˜ n |X ) . L n (θˆ |X ) n

L

In Theorem 5.2.4, we have proved that −2 log λ(X ) → U ∼ χ2k−m in the null setup. The score test statistic in such a setup is given by Tn(c) (S) = V n (X , θ˜ n )I −1 (θ˜ n )V n (X , θ˜ n ) , where V n (X , θ) is a vector of score functions (Rao [3], p. 418). It is to be noted that the score test statistic is obtained by replacing θ in the score function by its maximum likelihood estimator in the null setup. To define Wald’s test statistic when the null hypothesis is H0 : θi = gi (β1 , β2 , . . . , βm ) , i = 1, 2, . . . , k, we express the conditions imposed by null hypothesis on θ as Ri (θ) = 0, i = 1, 2, . . . , k − m. It is assumed that Ri ’s admit continuous partial derivative of first order. Wald’s test statistic is then given by

6.4 Score Test and Wald’s Test

357

Tn(c) (W ) =

k−m  k−m 

λi j (θˆ n )Ri (θˆ n )R j (θˆ n ),

i=1 j=1

where θˆ n is the maximum likelihood estimator of θ in the entire parameter space and λi j (θˆ n ) is the (i, j)-th element of inverse of the approximate dispersion matrix of (R1 (θˆ n ), R2 (θˆ n ), . . . , Rk−m (θˆ n )) , evaluated at θˆ n (Rao [3], p. 419). For both the procedures, the null hypothesis is rejected if the value of the test statistic is larger than c, where c is determined using the given level of significance (c) and the asymptotic null distribution. The asymptotic null distribution of both Tn (S) (c) and Tn (W ) is χ2k−m (Rao [3], p. 419). We illustrate these two test procedures in the following example.  Example 6.4.3

Suppose X and Y are independent random variables having Bernoulli B(1, p1 ) and B(1, p2 ) distributions respectively, 0 < p1 , p2 < 1. Suppose X = {X 1 , X 2 , . . . , X n 1 } is a random sample from the distribution of X and Y = {Y1 , Y2 , . . . , Yn 2 } is a random sample from the distribution of Y . On the basis of these samples we want to test H0 : p1 = p2 against the alternative H1 : p1 = p2 . Note that the null hypothesis is a composite hypothesis. In this example, we derive a score test procedure and Wald’s test procedure for testing H0 against the alternative H1 . Suppose P1n 1 = X n 1 =

n1 

X i /n 1 & P2n 2 = Y n 2 =

i=1

n1 

Yi /n 2

i=1

denote the proportion of successes in X and the proportion of successes in Y respectively. In Example 5.1.5, we have derived two large sample test procedures for testing H0 against the alternative H1 , based on the following two test statistics. (P1n 1 − P2n 2 )

Wn = 

P1n 1 (1 − P1n 1 )/n 1 + P2n 2 (1 − P2n 2 )/n 2 (P1n 1 − P2n 2 ) , & Sn = √ Pn (1 − Pn )(1/n 1 + 1/n 2 ) where Pn = (n 1 P1n 1 + n 2 P2n 2 )/(n 1 + n 2 ) and n = n 1 + n 2 . In this example we show that Wn is a square root of Wald’s test statistic, while Sn is a square root of the score test statistic. The log-likelihood of ( p1 , p2 ) in the entire parameter space, corresponding to random samples X and Y , using independence of X and Y is given by   n1 n1   X i log p1 + n 1 − X i log(1 − p1 ) log L n ( p1 , p2 |X , Y ) = i=1

+

n2  i=1

 Yi log p2 + n 2 −

i=1 n2  i=1

 Yi log(1 − p2 )

358

6

Goodness of Fit Test and Tests for Contingency Tables

= n 1 P1n 1 log p1 + n 1 (1 − P1n 1 ) log(1 − p1 ) + n 2 P2n 2 log p2 + n 2 (1 − P2n 2 ) log(1 − p2 ). From the log-likelihood it is easy to show that the maximum likelihood estimator of ( p1 , p2 ) is (P1n 1 , P2n 2 ) . In the null setup p1 = p2 = p, say. Then the log-likelihood of p corresponding to given random samples X and Y , using independence of X and Y , is given by  log L n ( p|X , Y ) =

n1 



Xi +

i=1

n2 

 Yi log p

i=1 n1 

+ n1 + n2 −

i=1

Xi −

n2 

 Yi log(1 − p).

i=1

It then follows that the maximum likelihood estimator of p is n 1 pˆ n =

i=1

Xi +

n2  i=1

(n 1 + n 2 )

 Yi =

(n 1 P1n 1 + n 2 P2n 2 ) = Pn . (n 1 + n 2 )

To derive the score test statistic, we obtain a vector of score functions as follows. From the log-likelihood of ( p1 , p2 ) in the entire parameter space, we have n 1 P1n 1 n 1 (1 − P1n 1 ) n 1 (P1n 1 − p1 ) ∂ log L n ( p1 , p2 |X , Y ) = − = ∂ p1 p1 1 − p1 p1 (1 − p1 ) ∂ log L n ( p1 , p2 |X , Y ) n 2 P2n 2 n 2 (1 − P2n 2 ) n 2 (P2n 2 − p2 ) = − = ∂ p2 p2 1 − p2 p2 (1 − p2 )   1 n 1 (P1n 1 − p1 ) 1 n 2 (P2n 2 − p2 )  ⇒ Vn ( p1 , p2 ) = √ ,√ n 2 p2 (1 − p2 ) n 1 p1 (1 − p1 )   1 n 1 (P1n 1 − pˆ n ) 1 n 2 (P2n 2 − pˆ n )  ⇒ Vn ( pˆ n , pˆ n ) = √ , ,√ n 1 pˆ n (1 − pˆ n ) n 2 pˆ n (1 − pˆ n ) as in the null setup p1 = p2 = p and pˆ n is its maximum likelihood estimator. It √ is to be noted that √ in the first component of Vn ( p1 , p2 ), the first factor is Similarly, in 1/ n 1 and not 1/ n 1 + n 2 as P1n 1 is based on n 1 observations. √ the second component of Vn ( p1 , p2 ), the first factor is 1/ n 2 . From the loglikelihood, it is easy to find the information matrix. Its inverse at pˆ n is given by, I −1 ( pˆ n , pˆ n ) = diag( pˆ n (1 − pˆ n ), pˆ n (1 − pˆ n )) = pˆ n (1 − pˆ n )I2 , where I2 is an identity matrix of order 2. Observe that P1n 1 − pˆ n = P1n 1 − (n 1 P1n 1 + n 2 P2n 2 )/(n 1 + n 2 ) = n 2 (P1n 1 − P2n 2 )/(n 1 + n 2 ) P2n 2 − pˆ n = −n 1 (P1n 1 − P2n 2 )/(n 1 + n 2 )

6.4 Score Test and Wald’s Test

359



⇒ Vn ( pˆ n , pˆ n ) =

√ n 1 n 2 (P1n 1 − P2n 2 ) √ ( n 2 , − n 1 ) (n 1 + n 2 ) pˆ n (1 − pˆ n )

⇒ Tn(c) (S) = V n (X , pˆ n )I −1 ( pˆ n , pˆ n )V n (X , pˆ n ) n 1 n 2 (P1n 1 − P2n 2 )2 pˆ n (n 1 + n 2 )2 ( pˆ n (1 − pˆ n ))2 √ √ √ √ (1 − pˆ n ( n 2 , − n 1 ) I2 ( n 2 , − n 1 ) n 1 n 2 (P1n 1 − P2n 2 )2 = (n 1 + n 2 ) (n 1 + n 2 )2 ( pˆ n (1 − pˆ n )) ⎛ (P1n 1 − P2n 2 ) n 1 n 2 (P1n 1 − P2n 2 )2 = = ⎝ (n 1 + n 2 )( pˆ n (1 − pˆ n )) P (1 − P )( 1 + =

 ⇒

n

Tn(c) (S) = 

(P1n 1 − P2n 2 ) Pn (1 − Pn )( n11 +

1 n2 )

n

n1

⎞2 ⎠ 1 n2 )

= Sn .

The asymptotic null distribution of Tn(c) (S) is χ21 , as k = 2 parameters are estimated in the entire space and in the null setup both are expressed in term of m = 1 (c) parameter. The null hypothesis is rejected if Tn (S) > c, where c = χ21−α,1 , where α is the given level of significance. Equivalently the test statistic Sn can also be used to test the null hypothesis. Its asymptotic null distribution is standard normal and H0 is rejected if |Sn | > a1−α/2 . (c) We now find the Wald’s test statistic Tn (W ). In the null setup, the condition p1 = p2 can be expressed as R( p1 , p2 ) = p1 − p2 = 0, ⇒ R( pˆ 1n , pˆ 2n ) = pˆ 1n − pˆ 2n = P1n 1 − P2n 2 p1 (1 − p1 ) p2 (1 − p2 ) V ar (R( pˆ 1n , pˆ 2n )) = + n1 n2 − P2n 2 )2 (P 1n 1 ⇒ Tn(c) (W ) = pˆ 1n (1 − pˆ 1n )/n 1 + pˆ 2n (1 − pˆ 2n )/n 2  (P1n 1 − P2n 2 ) (c) ⇒ Tn (W ) =  = Wn . P1n 1 (1 − P1n 1 )/n 1 + P2n 2 (1 − P2n 2 )/n 2 The asymptotic null distribution of Tn(c) (W ) is χ21 . The null hypothesis is rejected if Tn(c) (W ) > c, where c = χ21−α,1 . Equivalently the test statistic Wn can also be used to test the null hypothesis. Its asymptotic null distribution is standard normal  and H0 is rejected if |Wn | > a1−α/2 .

360

6

Goodness of Fit Test and Tests for Contingency Tables

 Remark 6.4.6

The function prop.test from R uses the score test statistic for testing the hypothesis of equality of population proportions based on the sample proportions, with the sample of size n 1 from population 1 and the sample of size n 2 from population 2, when two populations are independent. We illustrate it in Sect. 6.7. Following example illustrates the score test and Wald’s test for a composite hypothesis in a multinomial distribution.  Example 6.4.4

Suppose Y = (Y1 , Y2 , Y3 ) has a multinomial distribution in 4 cells with cell 4 pi = 1. Suppose p = ( p1 , p2 , p3 ). probabilities pi > 0, i = 1, 2, 3, 4 and i=1 On the basis of a random sample of size n from the distribution of Y , we want to test the null hypothesis H0 : p1 = p3 & p2 = p4 against the alternative that at least one of two equalities in the null setup are not valid. Suppose in the null setup p1 = p3 = α & p2 = p4 = β then 2α + 2β = 1, thus in null setup there is only one unknown parameter. We obtain its maximum likelihood estimator as follows. The log-likelihood of (α, β) given observed cell frequencies X = (X 1 , X 2 , X 3 , X 4 ) with X 1 + X 2 + X 3 + X 4 = n, is given by. log L n (α, β|X 1 , X 2 , X 3 , X 4 ) = (X 1 + X 3 ) log α + (X 2 + X 4 ) log β. Maximizing it subject to the condition α + β = 1/2, the maximum likelihood estimator of α is αˆ n = (X 1 + X 3 )/2n and of β is βˆ n = (X 2 + X 4 )/2n. Note that αˆ n + βˆ n = 1/2. Suppose pˆ 0n denote the maximum likelihood estimator of p in the null setup. To derive the score test statistic, we first find the vector of score 4 X i log pi . Thus, the vector V n of score functions from the log-likelihood i=1 functions is given by       X2 X3 1 X4 X4 X4 X1 , , V n (X , p) = √ − − − p1 p4 p2 p4 p3 p4 n       √ o2 o3 o1 o4 o4 o4 , , , ⇒ V n (X , pˆ 0n ) = n − − − e1 e4 e2 e4 e3 e4 where oi = X i and e1 = e3 = (X 1 + X 3 )/2 & e2 = e4 = (X 2 + X 4 )/2. Further, Information matrix I ( p) can also be expressed in terms of ei and proceeding as in Theorem 6.4.1, it follows that the score test statistic 4 (oi − ei )2 /ei . Tn(c) (S) = Tn (P) = i=1 To obtain Wald’s test statistic, observe that in the null setup, the condition p1 = p3 & p2 = p4 ⇔ R1 ( p) = p1 − p3 = 0 & R2 ( p) = p2 − p4 = 0. We now find the approximate dispersion matrix of D/n = [λi j ] of (R1 ( pˆ n ), R2 ( pˆ n )), where pˆ n is the maximum likelihood estimator of p in the

6.4 Score Test and Wald’s Test

361

entire parameter space and is given by pˆ n = (X 1 /n, X 2 /n, X 3 /n) . Suppose pˆ 4n = X 4 /n. Observe that λ11 = V ar (R1 ( pˆ n ) = V ar ( pˆ 1n − pˆ 3n )

λ22

= V ar ( pˆ 1n ) + V ar ( pˆ 3n ) − 2cov( pˆ 1n , pˆ 3n ) p1 (1 − p1 ) p3 (1 − p3 ) 2 p1 p3 = + + n n n = V ar (R2 ( pˆ n ) = V ar ( pˆ 2n − pˆ 4n )

λ12

= V ar ( pˆ 2n ) + V ar ( pˆ 4n ) − 2cov( pˆ 2n , pˆ 4n ) p2 (1 − p2 ) p4 (1 − p4 ) 2 p2 p4 = + + n n n = λ21 = Cov(R1 ( pˆ n ), R2 ( pˆ n ) = Cov( pˆ 1n − pˆ 3n , pˆ 2n − pˆ 4n ) = Cov( pˆ 1n , pˆ 2n ) − Cov( pˆ 1n , pˆ 4n ) − Cov( pˆ 3n , pˆ 2n ) + Cov( pˆ 3n , pˆ 4n ) p1 p2 p1 p4 p3 p2 p3 p4 =− + + − . n n n n

Suppose λi j is the (i, j)-th element of (D/n)−1 . Then by definition, Wald’s test statistic Tn(c) (W ) is given by Tn(c) (W ) =

2  2 

λi j ( pˆ n )Ri ( pˆ n )R j ( pˆ n )

i=1 j=1

= λ11 ( pˆ n )( pˆ 1n − pˆ 3n )2 + λ22 ( pˆ n )( pˆ 2n − pˆ 4n )2 + 2λ12 ( pˆ n )( pˆ 1n − pˆ 3n )( pˆ 2n − pˆ 4n ). (c)

(c)

The asymptotic null distribution of both Tn (S) and Tn (W ) is χ22 . The null hypothesis is rejected if value of the test statistic is > c, where c = χ21−α,2 .  We illustrate the computation of the test statistics in Example 6.7.9 in Sect. 6.7. In the next section we find Tn(c) (S) for testing hypothesis of independence of two attributes in a r × s contingency table and show that it is the same as Tn (P).

6.5

Tests for Contingency Tables

As discussed in Sect. 6.1, when n objects are classified according to two criteria A and B, with r levels of A and s levels of B, then the count data are presented as an r × s contingency table. Suppose n i j is a frequency of (i, j)-th cell, n i j being the number of objects having i-th level of attributeA and  j-th level of attribute B, n i j ≥ 0 ∀ i = 1, 2, . . . , r , j = 1, 2, . . . , s and ri=1 sj=1 n i j = n. The probability model underlying the r × s contingency table is a multinomial distribution in for the(i, j)th cell. We assume pi j > 0 r × s cells with cell probabilities as pi j  ∀ i = 1, 2, . . . , r , j = 1, 2, . . . , s and ri=1 sj=1 pi j = 1. On the basis of the

362

6

Goodness of Fit Test and Tests for Contingency Tables

data in a contingency table, we can investigate relationship between the two criteria. In tests related to a contingency table, we set up a null hypothesis which reflects some relation among the cell probabilities, depending on the possible relationship. We investigate such relations using the likelihood ratio tests for the multinomial distribution, a score test which comes out to be the same as Karl Pearson’s test and also a test based on a statistic similar to Wn as defined in Sect. 6.3. We obtain the maximum likelihood estimator of the cell probabilities, which are governed by the assumed relationship among cell probabilities, to carry out various test procedures. In the entire setup the parameter p is a vector of r s − 1 dimension with components as pi j , which are positive ∀ i = 1, 2, . . . , r , j = 1, 2, . . . , s − 1 and r s−1 i=1 j=1 pi j ≤ 1. As shown in Sect. 6.2, using Lagrange’s method of multipliers the maximum likelihood estimator of p corresponding to the observed cell frequencies n i j is given by pˆ i jn = n i j /n, i = 1, 2, . . . , r , j = 1, 2, . . . , s, where   √ L ˆ n − p) → Z 1 ∼ Nr s−1 (0, I −1 ( p)). pˆr sn = 1 − ri=1 s−1 j=1 pˆ i jn . Further, n( p We begin with the most frequently used test procedure in a two-way contingency table. It is about investigating the conjecture that A and B are two independent criteria which is equivalent to the statement that A and B are not associated with each other. Test for independence of two attributes in a two-way contingency table: A conjecture of no association between the two attributes A and B in a two-way contingency table can be expressed in terms of the parameters indexing the underlying probability model. The statement that A and B are two independent criteria is equivalent to the statement that pi j = pi. p. j ∀ i = 1, 2, . . . , r , j = 1, 2, . . . , s, where pi. = sj=1 pi j is the probability that the object possesses the i-th level of A and  p. j = ri=1 pi j is the probability that the object possesses the j-th level of B. To elaborate on this relation, suppose two categorical random variables X 1 and X 2 are defined as X 1 = i & X 2 = j, if the given object possesses i-th level of A and j-th level of B. Hence, independence of A and B can be expressed as pi j = P[X 1 = i, X 2 = j] = P[X 1 = i]P[X 2 = j] = pi. p. j ∀ i = 1, 2, . . . , r , j = 1, 2, . . . , s. Thus, if we are interested in testing the hypothesis of independence of two attributes then the null hypothesis is H0 : pi j = pi. p. j ∀ i & j against the alternative H1 : pi j = pi. p. j for at least one pair (i, j) i = 1, 2, . . . , r & j = 1, 2, . . . , s. The null parameter space 0 in this setup is ⎧ ⎫ s r ⎨ ⎬   p. j = 1 & pi. = 1 . 0 = ( p11 , p12 , . . . , pr s ) | pi j = pi. p. j ∀ i & j, ⎩ ⎭ j=1

i=1

To find the maximum likelihood estimators of the parameters in the null space, observe that the likelihood of p under H0 is

6.5 Tests for Contingency Tables

363

L n ( p|n 11 , n 12 , . . . , nr s ) =

r  s 

( pi. p. j )

i=1 j=1

s

ni j

=

r  i=1

pi.ni.

s 

n

p. j. j ,

j=1

where n i. = j=1 n i j is the marginal frequency of the i-th level of A and  n . j = ri=1 n i j is the marginal frequency of j-th level of B. We maximize  the likelihood with respect to variations in pi. and p. j under the condition that ri=1 pi. = 1 and sj=1 p. j = 1. Again using Lagrange’s method of multipliers and proceeding on similar lines as in Sect. 6.2, we get the maximum likelihood estimator pˆ i.n of pi. and pˆ . jn of p. j as pˆ i.n = n i. /n, i = 1, 2, . . . , r & pˆ . jn = n . j /n, j = 1, 2, . . . , s . It is to be noted that the joint distribution of (n 1. , n 2. , . . . , nr . ) is multinomial in r cells with cell probabilities ( p1. , p2. , . . . , pr . ) and joint distribution of (n .1 , n .2 , . . . , n .s ) is also multinomial in s cells with cell probabilities ( p.1 , p.2 , . . . , p.s ) . As a consequence, the maximum likelihood estimator ( pˆ 1.n , pˆ 2.n , . . . , pˆ (r −1).n ) is a CAN estimator of ( p1. , p2. , . . . , p(r −1). ) . Similarly, the maximum likelihood estimator ( pˆ .1n , pˆ .2n , . . . , pˆ .(s−1)n ) is a CAN estimator of ( p.1 , p.2 , . . . , p.(s−1) ) . The likelihood ratio test statistic λ(X ) is given by λ(X ) = sup L n ( p|X )/ sup L n ( p|X ) 0



⎞⎛ ⎞−1 ⎛ r  s  r  s  ni.  n . j n i j   n n n .j ij i. ⎠⎝ ⎠ . =⎝ n n n i=1

j=1

(6.5.1)

i=1 j=1

From Theorem 5.2.4, for large n under H0 , −2 log λ(X ) has χl2 distribution, where l = r s − 1 − (r − 1 + s − 1) = (r − 1)(s − 1). H0 is rejected if −2 log λ(X ) > c, where c is determined by the size of the test and the null distribution of −2 log λ(X ). Thus, c = χ2(r −1)(s−1),1−α . Proceeding as in Theorem 6.4.1, in the following theorem we now prove that, for testing the hypothesis of independence of two attributes in a r × s contingency table, the likelihood ratio test statistic and Karl Pearson’s chi-square test statistic Tn (P) are asymptotically equivalent, under H0 . In practice, it is always simpler to compute Tn (P) than −2 log λ(X ). Theorem 6.5.1 In a r × s contingency table for testing H0 : Two attributes A and B are independent against the alternative H1 : A and B are not independent, the asymptotic null distribution of the likelihood ratio test statistic −2 log λ(X ), Karl Pearson’s chi-square test statistic Tn (P) and Wn is χ2(r −1)(s−1) , where

364

6

Tn (P) =

Goodness of Fit Test and Tests for Contingency Tables

r  r  s s   (oi j − ei j )2 (oi j − ei j )2 & Wn = , ei j oi j i=1 j=1

i=1 j=1

where oi j and ei j are the observed and the expected frequencies of (i, j)-th cell respectively. Proof For testing H0 : Two attributes A and B are independent against the alternative H1 : A and B are not independent, Karl Pearson’s chi-square test statistic Tn (P) is given by r  s  (oi j − ei j )2 , Tn (P) = ei j i=1 j=1

where oi j = n pˆ i jn = n i j & ei j = n p˜ i jn = n pˆ i.n pˆ . jn =

n i. n. j n

are the observed and the expected frequencies of (i, j)-th cell respectively, pˆ i jn is the maximum likelihood estimator of pi j in the entire parameter space and p˜ i jn = pˆ i.n pˆ . jn is the maximum likelihood estimator of pi j in the null space. Suppose ui j =



ui j n( pˆ i jn − pˆ i.n pˆ . jn ) ⇔ pˆ i jn = pˆ i.n pˆ . jn + √ , n i = 1, 2, . . . , r , j = 1, 2, . . . , s.

Further noted  that  r its is to be√ r s r s u / n = i=1 j=1 i j i=1 j=1 pˆ i jn − i=1 pˆ i.n j=1 pˆ . jn = 0. Now, from the expression of λ(X ) as given in Eq. (6.5.1), we have ⎛ ⎞ s r  r s    ⎝ −2 log λ(X ) = 2 n i j log pˆi jn − n i. log pˆi.n − n . j log pˆ . jn ⎠ i=1 j=1

i=1

j=1

r  s   ni j  log pˆi jn − log pˆ i.n pˆ . jn = 2n n i=1 j=1

= 2n

r  s   i=1 j=1

= 2n

r  s   i=1 j=1

= 2n

r  s   i=1 j=1 r  s 



ui j pˆi.n pˆ . jn + √ n

  ui j log 1 + √ n pˆi.n pˆ . jn   u i2j u i3j ui j ui j +√ − + − · · · √ 2 pˆ 2 3 pˆ 3 n n pˆi.n pˆ . jn 2n pˆi.n 3n 3/2 pˆi.n . jn . jn

ui j pˆi.n pˆ . jn + √ n pˆi.n pˆ . jn

    ui j log pˆi.n pˆ . jn + √ − log pˆ i.n pˆ . jn n 

u i3j u i2j u i3j u i2j ui j + 3/2 2 2 + − 3/2 2 2 + · · · √ − 2n pˆi.n pˆ . jn n p ˆ p ˆ n 3n pˆ i.n pˆ . jn 2n pˆ i.n pˆ . jn i.n . jn i=1 j=1   r  s 2  ui j ui j = 2n +√ + Vn , 2n pˆi.n pˆ . jn n = 2n

i=1 j=1



6.5 Tests for Contingency Tables

365

where Vn consists of the terms with powers of n in the denominator. As in Theorem 6.4.1, we can show that the numerator of Vn is bounded in probability and hence   √ P Vn → 0 as n → ∞. Observe that ri=1 sj=1 u i j / n = 0 and r  s 

u i2j

i=1 j=1

pˆ i.n pˆ . jn

=

r  r  s s   n 2 ( pˆ i jn − pˆ i.n pˆ . jn )2 (oi j − ei j )2 = = Tn (P). n pˆ i.n pˆ . jn ei j i=1 j=1

i=1 j=1

P

Hence, −2 log λ(X ) − Tn (P) → 0. Hence, for large n under H0 , −2 log λ(X ) and Tn (P) have the same distribution and it is χ2(r −1)(s−1) . The null hypothesis H0 is rejected if Tn (P) > c, where the cut-off c is determined corresponding to the size of the test and the asymptotic null distribution of Tn (P). To prove that Wn also has the same √ asymptotic null distribution, suppose elements of a matrix with (i, j)-th element n( pˆ i jn − pˆ i.n pˆ . jn ) are presented in a vector Y n of dimension r s. Then Tn (P) can be expressed as Y n An Y n , where An is a diagonal P

P

matrix with typical element 1/ pˆ i.n pˆ . jn . It is known that pˆ i.n → pi. and pˆ . jn → p. j . P

Hence, An → A where A is a diagonal matrix with typical element 1/ pi. p. j = 1/ pi j under H0 . Observe that a test statistic Wn defined as Wn =

r  s 

(oi j − ei j )2 /oi j =

i=1 j=1

r  s 

n( pˆ i jn − pˆ i.n pˆ . jn )2 / pˆ i jn

i=1 j=1

can be expressed as Y n Bn Y n , where Bn is a diagonal matrix with typical element P

P

1/ pˆ i jn . It is known that pˆ i jn → pi j and hence, Bn → B where B is a diagonal P

matrix with typical element 1/ pi j . Observe that under H0 , An − Bn → 0, a null P

matrix, which implies that Y n Bn Y n − Y n An Y n → 0. Thus under H0 , the asymptotic  distribution of Wn is the same as that of Tn (P), which is χ2(r −1)(s−1) .  Remark 6.5.1

It is to be noted that Wn is similar to the Wald’s statistic for testing H0 : p = p 0 against the alternative H0 : p = p 0 in a multinomial distribution. In Sect. 6.4, we have defined a score test statistic for a composite null hypothesis. A null hypothesis of independence of two attributes in a r × s contingency table is a composite null hypothesis. In the following theorem we prove that a score test statistic is the same as Karl Pearson’s chi-square test statistic for testing a null hypothesis of independence of two attributes in a r × s contingency table.

366

6

Goodness of Fit Test and Tests for Contingency Tables

Theorem 6.5.2 In a r × s contingency table for testing the null hypothesis of independence of two attributes, a score test statistic is the same as Karl Pearson’s chi-square test statistic.

Proof A score test statistic Tn(c) (S) is given by Tn(c) (S) = V n (X , θ˜ n )I −1 (θ˜ n )V n (X , θ˜ n ), where θ˜ n is the maximum likelihood estimator of θ in the null setup. For a r × s contingency table θ = p, a vector of pi j of dimension r s − 1 and in the gi j ( p1. , p2. , . . . , p null  setup  pi j = pi. p. j =  r . , p.1 , p.2 , . . . , p.s ), with the condition ri=1 sj=1 pi j = 1, ri=1 pi. = 1 and sj=1 p. j = 1. Thus, r s − 1 parameters are expressed in terms of r + s − 2 parameters. The maximum likelihood estimators of these in the null setup are given by p˜ i jn = pˆ i.n pˆ . jn , i = 1, 2, . . . , r and j = 1, 2, . . . , s. To find the vector V n of score functions, note that oi j = n i j is the observed frequency and ei j = n pˆ i.n pˆ . jn is the expected frequency of (i, j)-th cell. Thus, log L n ( p|X ) =

r  s 

n i j log pi j

i=1 j=1

  ni j 1 ∂ log L n ( p|X ) 1 nr s ⇒ Ui j (X , p) = √ =√ − ∂ pi j pr s n n pi j     √ oi j oi j 1 or s or s = n − − ⇒ Ui j (X , pˆ i.n pˆ . jn ) = √ pˆr .n pˆ .sn ei j er s n pˆ i.n pˆ . jn   √ oi j − ei j or s − er s . − = n ei j er s Suppose the elements in a two-way table are organized in a vector, then the diagonal element in I −1 ( p) corresponding to (i, j)-th cell is pi j (1 − pi j ). In the null setup, the maximum likelihood estimator of pi j is pˆ i.n pˆ . jn . Thus, the diagonal element of I −1 ( p) at pˆ i.n pˆ . jn is given by pˆ i.n pˆ . jn (1 − pˆ i.n pˆ . jn ) = (ei j /n)(1 − ei j /n). The offdiagonal element is − pkl puv . At the maximum likelihood estimator in the null setup, it is given by − pˆ k.n pˆ .ln pˆ u.n pˆ .vn = −(ekl /n)(euv /n). The expressions of the score function Ui j and of the elements in the inverse of information matrix are similar to those as in Theorem 6.4.1. Hence, proceeding exactly on the same lines as in   (c) Theorem 6.4.1, we can show that Tn (S) = ri=1 sj=1 (oi j − ei j )2 /ei j , which is Karl Pearson’s chi-square test statistic.  In the following example, we illustrate the derivation of Theorem 6.5.2 for a 2 × 2 contingency table.

6.5 Tests for Contingency Tables

367

 Example 6.5.1

Suppose we want to test the hypothesis of independence of two attributes in a 2 × 2 contingency table. Suppose oi j = n i j and ei j = n pˆ i.n pˆ . jn = n i. n . j /n are the observed and expected frequency of (i, j)-th cell respectively. The vector V n ≡ V n (X , p˜ n ) of score functions is given by V n = =

√ n



o11 − e11 o22 − e22 o12 − e12 o22 − e22 o21 − e21 o22 − e22 − , − , − e11 e22 e12 e22 e21 e22



√  nU n , say.

The inverse of information matrix I −1 ( p˜ n ) is given by n12 Mn , where ⎛ ⎞ e11 (n − e11 ) −e11 e12 −e11 e21 Mn = ⎝ −e11 e12 e12 (n − e12 ) −e12 e21 ⎠ . −e11 e21 −e12 e21 e21 (n − e21 ) Suppose the column of Mn are denoted by C1 , C2 , C3 . Then U n × C1 = (o11 − e11 )(n − e11 ) − (o12 − e12 )e11 − (o21 − e21 )e11 (o22 − e22 ) + (e11 e12 + e11 e21 − e11 (n − e11 )) e22 = n(o11 − e11 ) − e11 ((o11 − e11 ) + (o12 − e12 ) + (o21 − e21 )) (o22 − e22 ) + (e11 (e12 + e21 − n + e11 )) e22 (o22 − e22 ) = n(o11 − e11 ) + e11 (o22 − e22 ) − e11 e22 e22 = n(o11 − e11 ). Simplifying in a similar way, U n Mn reduces to n((o11 − e11 ), (o12 − e12 ), (o21 − e21 )). Hence, observe that Tn(c) (S) = V n (X , p˜ n )I −1 ( p˜ n )V n (X , p˜ n ) √ √ 1 = ( nU n )( 2 Mn )( nU n ) n = ((o11 − e11 ), (o12 − e12 ), (o21 − e21 )) × U n (o11 − e11 )2 (o12 − e12 )2 (o21 − e21 )2 + + e11 e12 e21 (o22 − e22 ) − ((o11 − e11 ) + (o12 − e12 ) + (o21 − e21 )) e22 (o11 − e11 )2 (o12 − e12 )2 (o21 − e21 )2 = + + e11 e12 e21 (o22 − e22 ) − (n − o22 − n + e22 ) e22 2  2  (oi j − ei j )2 /ei j . = =

i=1 j=1

368

6

Goodness of Fit Test and Tests for Contingency Tables

Thus, Karl Pearson’s chi-square test statistic and the score test statistic are the same.  In Example 6.7.9, we verify that for testing independence of two attributes for data in a 2 × 3 contingency table, values of Karl Pearson’s chi-square test statistic and of the score test statistic are the same. For a two-way contingency table, other hypotheses of interest are as follows: (i) Irrelevance of criterion B: In this setup the pi j ’s do not change as levels of criterion B change, for all i, that is, pi j = ai , say ∀ j and for i = 1, 2, . . . , r . Then pi. =

s  j=1

pi j =

s  j=1

ai = sai ⇒ ai = pi.

1 1 ⇒ pi j = pi. . s s

Hence, the null hypothesis is expressible as H0 : pi j = pi. /s. The maximum likelihood estimator of pi j in null setup is pˆ i jn = (n i. /n)(1/s) = n i. /ns. (ii) Irrelevance of criterion A: As discussed in (i) above, in this setup pi j ’s do not change as levels of criterion A change, for all j. Hence the null hypothesis is given by, H0 : pi j = p. j /r and the maximum likelihood estimator of pi j in null setup is pˆ i jn = n . j /nr . (iii) Complete irrelevance: In this setup pi j ’s do not change as levels of either criterion A or B change. Hence, the null hypothesis is H0 : pi j = 1/r s. Hence, the maximum likelihood estimator of pi j in null setup is pˆ i jn = 1/r s. All these hypotheses can be tested using either the likelihood ratio test or the score test or Karl Pearson’s chi-square test. The asymptotic null distribution of all these statistics is χr2s−1−l , where l is the number of parameters estimated in the null setup. Thus values of l are r − 1, s − 1 and 0 respectively in above three cases. If the value of the test statistic is larger than χr2s−1−l,1−α , then H0 is rejected.  Remark 6.5.2

From the proof of Theorem 6.5.2, we note that the vector of score functions and the inverse of information matrix are expressed in terms of oi j and ei j . Hence the proof remains valid for any other null hypotheses, which are listed above. The formula for ei j will change according to the null hypothesis of interest. Thus, for a two-way contingency table, to test any hypothesis the score test statistic and Karl Pearson’s chi-square test statistic are identical. We may come across a situation, when pi j ’s do not change as levels of criterion B change, for some i. In such a case we write the hypothesis accordingly and adopt the same procedure as outlined above. For example, suppose in a 3 × 3 contingency table, we want to test that pi j ’s do not change as levels of criterion B change, for i = 1, 2, but change for i = 3. Then the null hypothesis is expressed as H0 : p1 j = a1 , p2 j =

6.5 Tests for Contingency Tables

369

a2 , j = 1, 2, 3. In the null setup, we have 5 parameters a1 , a2 , p31 , p32 and p33 such that 3a1 + 3a2 + p31 + p32 + p33 = 1. Subject to this condition, we maximize the likelihood to obtain the maximum likelihood estimators of these parameters and then use either the likelihood ratio test or the score test to test the given hypothesis. Test procedures for a two-way contingency table can be extended to a threeway contingency table. Suppose n i jk denotes the frequency of (i, j, k)-th cell, i = 1, 2, . . . , r , j = 1, 2, . . . , s, k = 1, 2, . . . , m, when n objects are classified according to three criteria, A with r levels, B with s levels and C with m levels. As in a two-way contingency table, joint distribution of n i jk is a multinomial distribution in r sm − 1 cells, with cell probabilities pi jk , i = 1, 2, . . . , r , j = 1, 2, . . . , s, k = 1, 2, . . . , m. In the entire parameter space, the maximum likelihood estimator of pi jk is pˆ i jkn = n i jk /n. Once we have the maximum likelihood estimators of the parameters in null and in the entire parameter space, we can find likelihood ratio test statistic −2 log λ(X ). Similarly, we can find the expected frequencies and hence Karl Pearson’s chi-square test statistic. It can be proved that for a three-way contingency table also, to test any hypothesis, the score test statistic and Karl Pearson’s chi-square test statistic are the same, as underlying probability model is again a multinomial distribution.  Remark 6.5.3

In summary, if underlying probability model is a multinomial distribution, then the score test statistic and Karl Pearson’s chi-square test statistic are the same for the following three types of null hypotheses. (i) H0 : p = p 0 , where p 0 is a completely specified vector, (ii) H0 : p = p(θ), where θ is an unknown vector and (iii) H0 specifies some functional relations among the cell probabilities, as in two-way or three-way contingency table. We get the two statistics to be the same in view of the fact that Tn (S) = V n (X , p˜ n )I −1 ( p˜ n )V n (X , p˜ n ) is a quadratic form and Karl Pearson’s   chi-square test statistic Tn (P) = ri=1 sj=1 (oi j − ei j )2 /ei j can also be expressed as a quadratic form Y n An Y n . With a peculiar form of the probability mass function of the multinomial distribution and its dispersion matrix, which is nothing but the inverse of the information matrix, the vector of sore functions and the inverse of the information matrix evaluated at the maximum likelihood estimator of p in null setup, result in V n (X , p˜ n ) = Y n and I −1 ( p˜ n ) = An and hence the two test statistics are exactly the same. For a three-way contingency table, Karl Pearson’s chi-square test statistic is given by r  s  m  (oi jk − ei jk )2 Tn (P) = . ei jk i=1 j=1 k=1

370

6

Goodness of Fit Test and Tests for Contingency Tables

The null hypothesis is rejected when the value of the test statistic is larger than a constant c, where c is determined corresponding to the given size of the test and the asymptotic null distribution of the test statistic, which is χr2sm−1−l , where l is the number of parameters estimated in the null space. The most frequently encountered hypotheses in a three-way contingency table are listed below. (i) Complete or mutual independence among A, B and C: In this case the null hypothesis is H0 : pi jk = pi.. p. j. p..k ∀ i, j, k, where pi.. =

s  m  j=1 k=1

pi jk , p. j. =

r  m  i=1 k=1

pi jk & p..k =

r  s 

pi jk .

i=1 j=1

To clarify such a relation under H0 , as in a two-way contingency table, we define three categorical random variables X 1 , X 2 and X 3 as X 1 = i, X 2 = j & X 3 = k, if the given object possesses i-th level of A, j-th level of B and k-th level of C. Hence, the mutual independence can be expressed as pi jk = P[X 1 = i, X 2 = j, X 3 = k] = P[X 1 = i]P[X 2 = j]P[X 3 = k] = pi.. p. j. p..k ∀ i, j, k. The number l of parameters one need to estimate in null setup is l = (r − 1) + (s − 1) + (m − 1). The alternative in this case is H1 : pi jk = pi.. p. j. p..k for at least one triplet (i, j, k). Proceeding on similar lines as in the case of a two-way contingency table, the maximum likelihood estimator of pi jk in the null setup is pˆ i jkn = (n i.. /n)(n . j. /n)(n ..k /n). (ii) Conditional independence: Suppose it is of interest to test whether two attributes are independent given the levels of the third attribute. In particular, suppose we want to test whether A and C are conditionally independent given B. In terms of probability distribution of random variables it can expressed as follows: P[X 1 = i, X 3 = k|X 2 = j] = P[X 1 = i|X 2 = j]P[X 3 = k|X 2 = j] pi jk pi j. p. jk ⇔ = p. j. p. j. p. j. pi j. p. jk ⇔ pi jk = . p. j. Thus, the null hypothesis that A and C are conditionally independent given B is expressed as H0 : pi jk = pi j. p. jk / p. j. ∀ i, j, k. Hence, the maximum likelihood estimator of pi jk in the null setup is pˆ i jkn = (n i j. /n)(n . jk /n)(n/n . j. ) = (n i j. n . jk )/(nn . j. ). In the null setup, we estimate pi j. , these are r s − 1 parameters, p. jk , these are sm − 1 parameters and p. j. , which are s − 1 parameters. However, p. j. can be obtained from pi j. by taking sum over i or from p. jk by taking sum over k. Hence, the number l of parameters one estimates in null setup is l = r s − 1 + sm − 1 + s − 1 − 2(s − 1) = r s + sm − s − 1.

6.5 Tests for Contingency Tables

371

(iii) Independence between A and (B, C): In this case the null hypothesis is H0 : pi jk = pi.. p. jk ∀ i, j, k, which again follows from pi jk = P[X 1 = i, X 2 = j, X 3 = k] = P[X 1 = i]P[X 2 = j, X 3 = k] = pi.. p. jk . Thus the maximum likelihood estimator of pi jk in null setup is pˆ i jkn = (n i.. /n)(n . jk /n). The number l of parameters we have to estimate in null set up is l = (r − 1) + (sm − 1). (iv) Suppose we want to test the hypothesis that the probabilities of classification according to criterion A are known, given by πi , say. Then H0 : pi jk = πi p. jk ∀ i, j, k. The maximum likelihood estimator of pi jk in the null setup is pˆ i jkn = πi (n . jk /n). The number l of parameters one need to estimate in null set up is l = sm − 1. In Sect. 6.7, we discuss some examples in which we carry out these tests for contingency table using R. In the next section, we briefly discuss the concept of a consistency of a test procedure.

6.6

Consistency of a Test Procedure

Consistency of a test procedure is an optimality criterion for a test procedure. It is defined as follows:

 Definition 6.6.1

Consistency of a Test Procedure: Suppose X = {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X , whose probability law is indexed by a parameter θ ∈ , which may be real or vector valued. Suppose {φn (X ), n ≥ 1} is a sequence of test functions based on X for testing H0 : θ ∈ 0 against the alternative H1 : θ ∈ 1 where 0 ∩ 1 = ∅ and 0 ∪ 1 = . The test procedure governed by a test function φn is said to be consistent if (i) sup E θ (φn (X )) → α ∈ (0, 1) θ∈0

&

(ii)E θ (φn (X )) → 1 ∀ θ ∈ 1 ,

where α is a size of the test. Most of the test procedures discussed in Chap. 5 and in this chapter are consistent. In view of this fact, the consistency of a test procedure is a too weak property to be really useful. If a given test procedure is not consistent, then it conveys that something must be fundamentally wrong with the test. If a test procedure is not consistent against a large class of alternatives, then it is considered as an undesirable test. For example, suppose we want to test H0 : θ = 0 against the alternative H1 : θ > 0 based

372

6

Goodness of Fit Test and Tests for Contingency Tables

on a random sample of size n from a Cauchy C(θ, 1) distribution. Suppose the test function φn is given by ) φn =

1, 0,

Xn > k if otherwise.

The cut-off point c is determined so thatPθ=0 [X n > k] = α, the given level of significance. It is known that if X ∼ C(θ, 1) distribution then X n ∼ C(θ, 1) distribution. Hence, c is the (1 − α)-th quantile of the C(θ, 1) distribution. Thus, E θ=0 (φn (X )) = α and the first requirement of the consistency of a test procedure is satisfied. Suppose β(θ) denotes the power function, then for θ > 0, β(θ) = Pθ [X n > k] = Pθ [X n − θ > k − θ] = P[U > k − θ], where U ∼ C(0, 1) . Thus, Pθ [X n > k] does not depend on n at all, so will not converge to 1 as n → ∞. Hence, the test procedure based on X n is not consistent. We have noted in Exercise 2.8.15 of Chap. 2, that for a C(θ, 1) distribution, X n is not consistent for θ. However, the sample median is consistent for θ. In Chap. 4, we have proved that C(θ, 1) distribution belongs to a Cramér family and hence the maximum likelihood estimator of θ is CAN for θ. Thus, X n is not at all a desirable estimator for θ which is reflected in the test procedure based on X n . In the following example we show that for a Cauchy C(θ, 1) distribution, the test procedure based on the maximum likelihood estimator of θ is consistent.  Example 6.6.1

Suppose X = {X 1 , X 2 , . . . , X n } is a random sample from a Cauchy C(θ, 1) distribution. Suppose a test procedure for testing H0 : θ = 0 against the alternative H1 : θ > 0 is given by ) 1, if Tn > k φn (X ) = 0, otherwise, where Tn is the maximum likelihood estimator of θ. We examine if the test procedure with level of significance α is consistent. It is known that C(θ, 1) distribution belongs to a Cramér family and hence the maximum likelihood estimator Tn of θ is CAN for θ with approximate variance 1/n I (θ) = 2/n. Thus √ L n/2(Tn − θ) → Z ∼ N (0, 1). For large n, the cut-off point k is decided so that ' (   n/2 Tn > n/2 k = α ⇒ k = a1−α 2/n , P0 [Tn > k] = α ⇔ P0 where a1−α is (1 − α)-th quantile of standard normal distribution. Thus, E θ=0 (φn (X )) = α. Suppose β(θ) denotes the power function, then

β(θ) = Pθ [Tn > a1−α √(2/n)] = Pθ [√(n/2) (Tn − θ) > a1−α − √(n/2) θ] = 1 − Φ(a1−α − √(n/2) θ) → 1, ∀ θ > 0. Thus, the test procedure is consistent. 
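As an aside that is not part of the original example, the contrast between the two test procedures can also be seen by simulation. The sketch below is ours: it uses the sample median as a CAN surrogate for the maximum likelihood estimator, with null cut-off based on its approximate N (0, π²/(4n)) distribution, while the mean based test uses the C(0, 1) quantile, which does not change with n.

theta = 0.5; alpha = 0.05; nsim = 2000     ## illustrative settings, not from the text
for(n in c(20, 100, 500))
{
  rej.mean = rej.med = 0
  for(j in 1:nsim)
  {
    set.seed(j)
    x = rcauchy(n, location = theta)
    rej.mean = rej.mean + (mean(x) > qcauchy(1 - alpha))               ## cut-off free of n
    rej.med = rej.med + (median(x) > qnorm(1 - alpha)*pi/(2*sqrt(n)))  ## cut-off shrinks with n
  }
  cat("n =", n, "power of mean based test =", rej.mean/nsim,
      "power of median based test =", rej.med/nsim, "\n")
}

The rejection rate of the median based test should approach 1 as n grows, while that of the mean based test stays bounded away from 1, in line with the discussion above.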

The next section is devoted to illustrations of various test procedures discussed in this chapter using R software.

6.7 Large Sample Tests Using R

In the present chapter, we discussed tests for validity of the model, the test for goodness of fit and tests for contingency tables. All these tests are likelihood ratio tests when the underlying probability model is a multinomial distribution. Further, it is noted that the score test, Wald's test and the likelihood ratio test are asymptotically equivalent. In addition, we have also proved that in a multinomial distribution, for testing simple and composite null hypotheses about the probability vector, the score test statistic and Karl Pearson's chi-square test statistic are identical. In this section, we verify all these results and illustrate how to carry out these tests using R software.

 Example 6.7.1

In Example 6.2.2, we have discussed a genetic model in which the probabilities for three outcomes are θ², 2θ(1 − θ) and (1 − θ)², 0 < θ < 1. The appropriate probability distribution for this model is a multinomial distribution in three cells. We have shown that the multinomial distribution, with these cell probabilities, belongs to a one-parameter exponential family and the maximum likelihood estimator of θ is θ̂n = (2X 1 + X 2 )/2n. Further, it is shown that it is a CAN estimator of θ with approximate variance θ(1 − θ)/2n. We have defined two test statistics Sn = √(2n/(θ0 (1 − θ0 ))) (θ̂n − θ0 ) and Wn = √(2n/(θ̂n (1 − θ̂n ))) (θ̂n − θ0 ) for testing H0 : θ = θ0 against the alternative H1 : θ ≠ θ0 . In both the cases, H0 is rejected if the absolute value of the test statistic is larger than c = a1−α/2 . We verify these results by simulation using the following code. It is to be noted that when a random sample of size n is drawn from a multinomial distribution with k cells, the vector of cell frequencies (X 1 , X 2 , . . . , X k ) with X 1 + X 2 + · · · + X k = n is a sufficient statistic and the joint distribution of (X 1 , X 2 , . . . , X k ) is also a multinomial distribution with parameter n and with the same cell probabilities. In all the test procedures related to a multinomial distribution, the observed data are (X 1 , X 2 , . . . , X k ). Hence to generate such data, we draw a random sample of size 1 from a multinomial distribution with parameters n and cell probabilities p = ( p1 , p2 , . . . , pk ).


th =.4;

th0 =.3; b = qnorm(.95); b; p = c(thˆ2,2*th*(1-th),(1-th)ˆ2); p n = 150; set.seed(21); x = rmultinom(1,n,p); x; dlogl=function(par) { dlogl=(2*x[1]+x[2])/par -(2*n-2*x[1]-x[2])/(1-par) return(dlogl) } dlogl(.5); dlogl(0.3) mle=uniroot(dlogl,c(.5,.3))$root; mle a = (2*x[1]+x[2])/(2*n);a ## mle by formula Sn = sqrt(2*n/(th0*(1-th0)))*(mle-th0);Sn Wn = sqrt(2*n/(mle*(1-mle)))*(mle-th0);Wn pv1 = 1-pnorm(Sn)+pnorm(-Sn); pv1 pv2 = 1-pnorm(Wn)+pnorm(-Wn); pv2 ### Verification of CAN property nsim = 1000; n = 150; x = matrix(nrow=length(p),ncol=nsim); mle = c() for(j in 1:nsim) { set.seed(j) x[,j] = rmultinom(1,n,p) } dlogl = function(par) { dlogl = (2*x[1,j]+x[2,j])/par -(2*n-2*x[1,j]-x[2,j])/(1-par) return(dlogl) } for(j in 1:nsim) { mle[j] = uniroot(dlogl,c(.6,.2))$root } summary(mle); Tn = sqrt(2*n/(th*(1-th)))*(mle-th) Sn = sqrt(2*n/(th0*(1-th0)))*(mle-th0); Wn = sqrt(2*n/(mle*(1-mle)))*(mle-th0) shapiro.test(Tn); shapiro.test(Sn); shapiro.test(Wn)

For θ = 0.4, the vector of cell probabilities is (0.16, 0.48, 0.36) and the observed cell frequencies are (28, 71, 51), which add up to 150. The maximum likelihood estimate of θ using the uniroot function is 0.4233. From the formula derived in Example 6.2.2, the estimate is the same, 0.4233. To test H0 : θ = θ0 = 0.3 against the alternative H1 : θ ≠ θ0 , the value of the test statistic Sn corresponding to the observed data is 3.4380 and that of Wn is 3.2719; the corresponding p-values are 0.00058 and 0.00107, respectively. Thus, according to both the test procedures, the data do not support the null setup. In Example 6.2.2, we have shown that the maximum likelihood estimator of θ is a CAN estimator. Further, the large sample


distribution of both Sn and Wn is standard normal. This is verified on the basis of 1000 simulations, each of sample size 150. The p-values of the Shapiro-Wilk test come out to be 0.216, 0.216 and 0.225 for Tn , Sn and Wn respectively, supporting the claim that the maximum likelihood estimator of θ is a CAN estimator and that the large sample distributions of both Sn and Wn are standard normal. 

The following example is concerned with the test for validity of a model.

 Example 6.7.2

According to genetic linkage theory, observed frequencies of four phenotypes resulting from crossing tomato plants are in the ratio 9/16 + θ : 3/16 − θ : 3/16 − θ : 1/16 + θ. A researcher reported the frequencies of the four phenotypes as displayed in Table 6.8. Our aim is to check whether genetic linkage theory seems plausible on the basis of the given data. In the entire parameter space, the data in 4 cells are modeled by a multinomial distribution in 4 cells with cell probabilities p1 , p2 , p3 , p4 , which are positive and add up to 1. The maximum likelihood estimator p̂in of pi is given by p̂in = n i /n, where n i denotes the observed frequency of the i-th cell, i = 1, 2, 3, 4. To test the validity of the proposed model, we use Karl Pearson's chi-square test, Wald's test and the likelihood ratio test procedure. Under H0 , Karl Pearson's chi-square test statistic, Wald's test statistic and −2 log(λ(X )) follow the χr² distribution, where r = 3 − 1 = 2. In the null space the cell probabilities are given by p1 (θ) = 9/16 + θ, p2 (θ) = p3 (θ) = 3/16 − θ and p4 (θ) = 1/16 + θ, 0 < θ < 3/16.


pWnl = 1-pnorm(Wn); pWnl pSnl = 1-pnorm(Sn); pSnl prop.test(sum(x),n,p=p0,alt="greater",correct=FALSE) ### Null:p > p0, H_0 is rejected if value of Wn or Sn < constant pWng = pnorm(Wn); pWng pSng = pnorm(Sn);pSng prop.test(sum(x),n,p=p0,alt="less",correct=FALSE)
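As a side note that is not part of the original example, an exact counterpart of these large sample procedures is available through the built-in function binom.test; with the sample size used here its p-value should be close to the approximations summarized below (x, n and p0 are the objects created in the example's code).

binom.test(sum(x), n, p = p0, alternative = "two.sided") ### aside: exact test of H0: p = p0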

The output for the alternative H1 : p ≠ p0 is summarized in Table 6.11. From the p-values reported in Table 6.11, we note that on the basis of the simulated data, the null hypothesis H0 : p = p0 against the alternative H1 : p ≠ p0 gets rejected. It is to be expected as the sample is generated with p = 0.4 and p0 is taken as 0.3. It is to be noted that the p-value for Wald's test or the score test can be computed using the asymptotic null distribution of Wn and Sn , which is standard normal, or using the asymptotic null distribution of Wn² and Sn² , which is χ1². These come out to be the same. We note that the value of Karl Pearson's test statistic Tn (P) is the same as that of the score test statistic Sn² and the value of the test statistic Un is the same as that of Wn² . Further, observe that the value of the test statistic and the p-value given by the built-in function prop.test are the same as those of the score test statistic Sn² . If the alternative hypothesis is H1 : p > p0 , then H0 is rejected if Wn > c or Sn > c. The cut-off c and the corresponding p-values are obtained using the asymptotic null distribution of Wn and Sn , which is standard normal. Similarly, if the alternative hypothesis is H1 : p < p0 , then H0 is rejected if Wn < c or Sn < c. The cut-off c and the corresponding p-values are obtained using the asymptotic null distribution of Wn and Sn . We note that when the alternative hypothesis is H1 : p > p0 , the p-values are 0.0046 and 0.0029 for Wald's test procedure and the score test procedure respectively. Thus, H0 is rejected on the basis of the simulated data; again it is as per the expectations as p = 0.4 and p0 = 0.3. When the alternative hypothesis is H1 : p < p0 , the p-values are 0.9954 and 0.9971 for Wald's test procedure and the score test procedure respectively, giving strong support to the null setup, as p = 0.4 and p0 = 0.3. Again observe that the value of the test statistic and the p-value given by the built-in function prop.test with the respective options “greater” and “less” for the alternative hypothesis are the same as that of the score test statistic Sn .

Table 6.11 Test for proportion: summary of test procedures

Test procedure          Value of test statistic            p-value
Likelihood ratio test   Tn = 7.2920                        0.0069
Wald's test             Wn = 2.6060, Un = Wn² = 6.7912     0.0092
Score test              Sn = 2.7603, Sn² = 7.6190          0.0058
Karl Pearson's test     Tn (P) = 7.6190                    0.0058
prop.test               7.6190                             0.0058


To compute p-values, the asymptotic null distribution of Sn is used, which is standard normal. 

The next example verifies the result proved in Theorem 6.4.1.

 Example 6.7.6

In Theorem 6.4.1, it is proved that for a multinomial distribution in k cells, for testing H0 : p = p0 against the alternative H1 : p ≠ p0 , Wald's test statistic Tn (W ) simplifies to Σi (oi − ei )²/oi , the sum being over i = 1, . . . , k, while the score test statistic Tn (S) simplifies to Karl Pearson's chi-square test statistic. In this example we verify these results by simulation using R. We find the value of Wald's test statistic using the following two formulae:

(i) Tn (W ) = n( p̂n − p0 )′ I ( p̂n )( p̂n − p0 )  and  (ii) Tn (W ) = Σi (oi − ei )²/oi .

We find the value of the score test statistic using the following two formulae:

(i) Tn (S) = V n′ [D( p0 )]⁻¹ V n , where D( p0 ) is the information matrix and V n = (1/√n) (X 1 /p01 − X k /p0k , X 2 /p02 − X k /p0k , . . . , X k−1 /p0(k−1) − X k /p0k )′ , and (ii) Tn (S) = Σi (oi − ei )²/ei , where oi and ei denote the observed and expected cell frequencies of the i-th cell.

p = c(.3,.2,.3,.1,.1); p0 = c(.35,.15,.2,.15,.15); n = 100; set.seed(20)
x = rmultinom(1,n,p); mle = x[1:4]/n; mle
D = function(u) ### Information matrix
{
  D = matrix(nrow = 4,ncol=4)
  for(i in 1:4)
  {
    for(j in 1:4)
    {
      if(i==j)
      {
        D[i,j] = 1/u[i]+1/u[5]
      }
      else
      {


        D[i,j] = 1/u[5]
      }
    }
  }
  return(D)
}
u1 = x/n; p1 = p0[1:4]
TWn = n*t((mle-p1))%*%D(u1)%*%(mle-p1); TWn ### Wald's test statistic
u2 = x[1:4]/p0[1:4]; u3 = rep(x[5]/p0[5],4)
vn = 1/sqrt(n)*(u2-u3); M = solve(D(p0))
TSn = t(vn)%*%M%*%vn; TSn ### score test statistic
o = x; e = n*p0 ### vectors of observed and expected frequencies
Wn = sum((o-e)^2/o); Wn
Sn = sum((o-e)^2/e); Sn
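As a quick cross-check that is not part of the original code, the built-in function chisq.test with the null probabilities p0 reproduces Karl Pearson's statistic, which Theorem 6.4.1 identifies with the score test statistic computed above.

chisq.test(as.vector(x), p = p0) ### aside: X-squared should equal Sn above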

We note that Tn (W ) = 11.1127 = Σi (oi − ei )²/oi and Tn (S) = 11.6405 = Σi (oi − ei )²/ei .

 Example 6.7.7

Suppose we have a multinomial distribution as discussed in Example 6.2.2 with cell probabilities θ², 2θ(1 − θ) and (1 − θ)², 0 < θ < 1. Thus the cell probabilities depend on θ. Suppose we want to test H0 : θ = 0.3 against the alternative H1 : θ ≠ 0.3. As stated in Remark 6.4.4, we examine whether in this setup Wald's test statistic Tn (W ) is equal to Σi (oi − ei )²/oi and the score test statistic Tn (S) is equal to Σi (oi − ei )²/ei .

th = .4; n = 100; set.seed(20); p = c(th^2,2*th*(1-th),(1-th)^2); p
x = rmultinom(1,n,p); x
dlogl = function(par)
{
  dlogl = (2*x[1]+x[2])/par -(2*n-2*x[1]-x[2])/(1-par)
  return(dlogl)
}
dlogl(.5); dlogl(0.3)
mle = uniroot(dlogl,c(.5,.3))$root; mle
th0 =.3; o = x; e = n*c(th0^2,2*th0*(1-th0),(1-th0)^2); e; o
Wn = sum((o-e)^2/o); Wn; Sn = sum((o-e)^2/e); Sn
D=function(u)
{
  D = matrix(nrow = 2,ncol=2)
  for(i in 1:2)
  {


    for(j in 1:2)
    {
      if(i==j)
      {
        D[i,j] = 1/u[i]+1/u[3]
      }
      else
      {
        D[i,j] = 1/u[3]
      }
    }
  }
  return(D)
}
u1 = c(mle^2,2*mle*(1-mle),(1-mle)^2); p0 = c(th0^2,2*th0*(1-th0),(1-th0)^2)
p1 = p0[1:2]; u2 = u1[1:2]
TWn = n*t((u2-p1))%*%D(u1)%*%(u2-p1); TWn
u3 = x[1:2]/p0[1:2]; u4 = rep(x[3]/p0[3],2)
vn = 1/sqrt(n)*(u3-u4); M = solve(D(p0))
TSn = t(vn)%*%M%*%vn; TSn

From the output, we note that for the simulated sample, Σi (oi − ei )²/oi = 8.6385 ≠ Tn (W ) = 9.3597 and Σi (oi − ei )²/ei = 15.5090 = Tn (S).
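As a small addition of ours, not from the text, the Wald statistic can also be computed directly in the θ parametrization, using the approximate variance θ(1 − θ)/2n of the maximum likelihood estimator from Example 6.2.2; with the objects mle, n and th0 created above, this is simply the square of the statistic Wn of Example 6.7.1.

Wtheta = 2*n*(mle - th0)^2/(mle*(1 - mle)); Wtheta ### aside: Wald statistic for theta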

  Example 6.7.8

In Example 6.4.3 we have derived test procedures for testing H0 : p1 = p2 against the alternative H1 : p1 ≠ p2 on the basis of random samples drawn from X ∼ B(1, p1 ) and Y ∼ B(1, p2 ), where X and Y are independent random variables. In this example we simulate samples from two Bernoulli distributions and carry out these test procedures. Further, we show that the built-in function prop.test is based on the score test statistic.


m = 150; n = 170 # Sample size for X and Y respectively p1 = .4; p2 = .5; set.seed(20) x = rbinom(m,1,p1); y = rbinom(n,1,p2); mx = mean(x); my = mean(y); mx; my mp = (sum(x)+sum(y))/(m+n); mp logLx = function(a) { LL = 0 for(i in 1:m) { LL = LL + log(dbinom(x[i],1,a)) } return(LL) } logLy = function(a) { LL = 0 for(i in 1:n) { LL = LL + log(dbinom(y[i],1,a)) } return(LL) } p = seq(0.1,.9,0.01); length(p) Lx = logLx(p); bx = which.max(Lx); mlex = p[bx]; mlex Ly = logLy(p); by = which.max(Ly); mley = p[by]; mley Lnull = logLx(p) + logLy(p); b = which.max(Lnull); mlenull = p[b]; mlenull logL = logLx(mlex) + logLy(mley) logLnull = logLx(mlenull) + logLy(mlenull) Tn = -2*(logLnull-logL); Tn ## LRTS b = qchisq(.95,1); b; p=1-pchisq(Tn,1); p Wn = (mx-my)/(mx*(1-mx)/m +my*(1-my)/n)ˆ(0.5); Wn; Wnˆ2 ## Wald’s test pWn = 1-pnorm(abs(Wn)) + pnorm(-abs(Wn)); pWn Sn = (mx-my)/(mp*(1-mp)*(1/m +1/n))ˆ(0.5); Sn; Snˆ2 ## score test pSn = 1-pnorm(abs(Sn)) + pnorm(-abs(Sn)); pSn a = c(sum(x),sum(y)) d = c(m,n) prop.test(a,d,alt="two.sided",correct=FALSE) ## built-in function pWn2 = 1-pchisq(Wnˆ2,1);pWn2 pSn2 = 1-pchisq(Snˆ2,1);pSn2 ### Null:p1 < p2 pWnl = 1-pnorm(Wn); pWnl pSnl = 1-pnorm(Sn);pSnl prop.test(a,d,alt="greater",correct=FALSE) ### Null:p1 > p2 pWng = pnorm(Wn); pWng pSng = pnorm(Sn);pSng prop.test(a,d,alt="less",correct=FALSE)
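As an aside that is not part of the original example, the same comparison can also be made with Fisher's exact test on the 2 × 2 table of successes and failures, which is a standard alternative when the sample sizes are small (x, y, m and n are the objects created above).

tab = rbind(c(sum(x), m - sum(x)), c(sum(y), n - sum(y))) ### aside: 2 x 2 table of successes and failures
fisher.test(tab)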


Table 6.12 Test for equality of proportions: summary of test procedures

Test procedure          Value of test statistic          p-value
Likelihood ratio test   Tn = 8.3891                      0.0038
Wald's test             Wn = −2.9274, Wn² = 8.5699       0.0035
Score test              Sn = −2.8844, Sn² = 8.3198       0.0039
prop.test               8.3198                           0.0039

On the basis of the simulated samples, the maximum likelihood estimate of p1 is 0.38 and that of p2 is 0.54. Under the null setup p1 = p2 = p and the maximum likelihood estimate of p is 0.4656. The output for the alternative H1 : p1 ≠ p2 is summarized in Table 6.12. From the p-values reported in Table 6.12, we note that on the basis of the simulated data, the null hypothesis H0 : p1 = p2 against the alternative H1 : p1 ≠ p2 gets rejected. It is to be expected as the samples are generated with p1 = 0.4 and p2 = 0.5. Note that the p-values for Wald's test and the score test, computed using the asymptotic null distribution of Wn and Sn , which is standard normal, are the same as those computed using the asymptotic null distribution of Wn² and Sn² , which is χ1² . Further, observe that the value of the test statistic and the p-value given by the built-in function prop.test are the same as those of the score test statistic Sn² . If the alternative hypothesis is H1 : p1 > p2 , then H0 is rejected if Wn > c or Sn > c. Similarly, if the alternative hypothesis is H1 : p1 < p2 , then H0 is rejected if Wn < c or Sn < c. The cut-off c and the corresponding p-values are obtained using the asymptotic null distribution of Wn and Sn . We note that when the alternative hypothesis is H1 : p1 > p2 , the p-values are 0.9982 and 0.9980 for Wald's test procedure and the score test procedure respectively, giving strong support to the null setup, as p1 = 0.4 and p2 = 0.5. When the alternative hypothesis is H1 : p1 < p2 , the p-values are 0.0017 and 0.0019 for Wald's test procedure and the score test procedure respectively. Thus, H0 is rejected, giving strong support to the alternative setup, as p1 = 0.4 and p2 = 0.5. Again observe that the value of the test statistic and the p-value given by the built-in function prop.test with the respective options “greater” and “less” for the alternative hypothesis are the same as that of the score test statistic Sn . To compute p-values, the asymptotic null distribution of Sn is used, which is standard normal. 

In the next example, we illustrate the computation of Wald's test statistic in Example 6.4.4.

 Example 6.7.9

Suppose Y = (Y1 , Y2 , Y3 ) has a multinomial distribution in 4 cells with cell probabilities pi > 0, i = 1, 2, 3, 4 and Σi pi = 1. Suppose p = ( p1 , p2 , p3 ). On the basis of a random sample of size n from the distribution of Y , we want


to test the null hypothesis H0 : p1 = p3 & p2 = p4 against the alternative that at least one of the two equalities in the null setup is not valid. Suppose in the null setup p1 = p3 = α & p2 = p4 = β; then 2α + 2β = 1. The maximum likelihood estimator of α is α̂n = (X 1 + X 3 )/2n and that of β is β̂n = (X 2 + X 4 )/2n. The score test statistic is Tn(c) (S) = Tn (P) = Σi (oi − ei )²/ei and Wald's test statistic is given by Tn(c) (W ) = λ11 ( p̂n )( p̂1n − p̂3n )² + λ22 ( p̂n )( p̂2n − p̂4n )² + 2λ12 ( p̂n )( p̂1n − p̂3n )( p̂2n − p̂4n ), where λi j , i, j = 1, 2 are as given in Example 6.4.4 and p̂in = X i /n, i = 1, 2, 3, 4. We generate a random sample of size n = 100 from a multinomial distribution in 4 cells to test the null hypothesis.

p = c(.35,.23,.25,.17); n = 100; set.seed(110)
x = rmultinom(1,n,p); mle = x/n; mle
e1 = e3 = (x[1]+x[3])/2; e2 = e4 = (x[2]+x[4])/2; e1/n; e4/n
o = x; e = c(e1,e2,e3,e4)
Wn = sum((o-e)^2/o); Wn; Sn = sum((o-e)^2/e); Sn
pWn = 1-pchisq(Wn,2); pSn = 1-pchisq(Sn,2); pWn; pSn
yn = c((mle[1]-mle[3]), (mle[2]-mle[4])); yn = as.vector(yn)
a11 = (mle[1]*(1-mle[1]) + mle[3]*(1-mle[3])+2*mle[1]*mle[3])/n
a22 = (mle[2]*(1-mle[2]) + mle[4]*(1-mle[4])+2*mle[2]*mle[4])/n
a12 = (-mle[1]*mle[2] + mle[1]*mle[4] + mle[2]*mle[3] - mle[3]*mle[4])/n
M = matrix(c(a11,a12,a12,a22),nrow=2,byrow=TRUE)
Tn = t(yn)%*%solve(M)%*%yn; Tn; pTn = 1-pchisq(Tn,2); pTn

On the basis of the generated data, the maximum likelihood estimate of p in the entire parameter space is (0.44, 0.22, 0.23, 0.11) while in the null space it is (0.335, 0.165, 0.335, 0.165). The value of the score test statistic, which is the same as Karl Pearson's chi-square test statistic, is 10.2488, while the value of Wald's test statistic is 11.4191. The respective p-values are 0.0060 and 0.0033. Hence, the data do not support the null setup. It seems reasonable as we have generated data under p = (0.35, 0.23, 0.25, 0.17), in which p1 and p3 are not close to each other; similarly p2 and p4 are also not close. We have also computed the test statistic Wn = Σi (oi − ei )²/oi ; its value is 11.4242, with p-value 0.0033. Hence, the null hypothesis is rejected on the basis of Wn also. It is to be noted that the value of Wn is close to the value of Wald's test statistic, but these are not the same. Observe that the null hypothesis H0 : p1 = p3 & p2 = p4 can be expressed as H0 : p = p(α) where p(α) = (α, 1/2 − α, α, 1/2 − α), 0 < α < 1/2. Thus, the cell probabilities are indexed by a real parameter α. As stated in


Remark 6.4.4 and as noted in Example 6.7.7, if the null hypothesis is composite, in general Wald's test statistic does not simplify to Wn = Σi (oi − ei )²/oi .

In the following example, we verify Theorem 6.5.1, which states that the values of Karl Pearson's chi-square test statistic and the score test statistic are the same for testing the hypothesis of independence of attributes in a two-way contingency table.

 Example 6.7.10

Table 6.13 presents cross-classification of two attributes gender (A) with two levels as female and male, and political party identification (B) with three levels as democratic, republican party and independents. Data are from the book Agresti [4], p. 38. We want to test whether there is any association between gender and political party identification. In this example we verify Theorem 6.5.1. Thus we find Karl Pearson’s chi-square test statistic and the score test statistic and show that the two are same. A = matrix(c(762,327,468,484,239,477), byrow=TRUE,ncol=3); A; r = 2; s = 3 E = matrix(nrow=2,ncol=3) for(i in 1:r) { for(j in 1:s) { E[i,j] = (sum(A[i,])*sum(A[,j]))/sum(A) } } T = sum((A-E)ˆ2/E); T; ### Karl Pearson’s test statistic df = (r-1)*(s-1); df; b = qchisq(.95,df); b; p = 1-pchisq(T,df); p chisq.test(A) A1 = as.vector(A); A1; E1 = as.vector(E); E1; n = sum(A1); n A2 = A1[-length(A1)]; A2; E2 = E1[-length(A1)]; E2 vn = nˆ(.5)*(A2/E2-A1[length(A1)]/E1[length(A1)]); vn; b = E2/n; b D=function(u) ### inverse of Information matrix { D = matrix(nrow = 5,ncol=5) for(i in 1:5) { for(j in 1:5) { if(i==j) { D[i,j] = u[i]*(1-u[i]) } else {


Table 6.13 Cross-classification by gender and political party identification

         Political party identification
         Democratic   Republican   Independents
Female   762          327          468
Male     484          239          477

D[i,j] = -u[i]*u[j] } } } return(D) } kp = t(vn)%*%D(b)%*%vn;kp ## score test statistic

From the output we note that the value of Karl Pearson's chi-square test statistic is 30.0702 and that of the score test statistic is also 30.0702. The built-in function chisq.test(A) also gives the same value. On the basis of the given data, the null hypothesis of independence between gender and political party identification is rejected, the p-value being almost 0. 

In the next example, we test the hypothesis of independence of attributes, again in a two-way contingency table, by using three different approaches.

 Example 6.7.11

Table 6.14 displays a report on the relationship between aspirin use and heart attacks by the Physicians' Health Study Research Group at Harvard Medical School. Data are from the book Agresti [5], p. 37. The attribute A “Myocardial Infarction” has three levels, “Fatal Attack”, “Nonfatal Attack” and “No Attack”. The attribute B has two levels, “Placebo” and “Use of Aspirin”. On the basis of these data, it is of interest to examine whether the use of aspirin and the incidence of heart attack are associated. It is examined by applying the test of independence of two attributes to these data. The following R code performs the test in three different ways. In the first approach, we find the expected frequencies and use Karl Pearson's chi-square test statistic. In the second approach, the built-in function chisq.test(data) gives the result of Karl Pearson's chi-square test. Thirdly, we use the function xtabs from R to prepare a 2 × 3 contingency table and the built-in function summary() gives the result.


A = matrix(c(18,171,10845,5,99,10933), byrow=TRUE,ncol=3); A ## Given data r = 2; s = 3; E = matrix(nrow=r,ncol=s) for(i in 1:r) { for(j in 1:s) { E[i,j] = (sum(A[i,])*sum(A[,j]))/sum(A) } } E ## Matrix of expected frequencies T = sum((A-E)ˆ2/E); T # Karl Pearson’s chi-square test statistic df = (r-1)*(s-1); df; b = qchisq(.95,2); b; p = 1-pchisq(T,2) ; p # p-value chisq.test(A) # built-in function #### To construct a two-way contingency table A = c("P","A","P","A","P","A"); B = c("FA","FA","NFA","NFA","NA","NA") D = data.frame(A,B); D; Dt = c(18,5,171,99,10845,10933) U = xtabs(Dt˜.,D); U ## contingency table summary(U)

The value of Karl Pearson’s chi-square test statistic is 26.9030, which is larger than the cut-off 5.9915 with p-value 1.439e − 06 and hence on the basis of the given data we can say that the use of aspirin and incidence of heart attack is associated. The output of a built-in function chisq.test(A) is given below. The results are same as stated above. > chisq.test(A) Pearson’s Chi-squared test data: A X-squared = 26.903, df = 2, p-value = 1.439e-06
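As a small aside that is not in the text, the object returned by chisq.test can be inspected for its components; for instance, the expected frequencies it stores should match the matrix E computed above.

out = chisq.test(A) ### aside: components of the built-in test
out$expected; out$statistic; out$p.value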

Output of the cross tabulation is given below.

Table 6.14 Cross-classification of aspirin use and myocardial infarction

           A
B          Fatal attack   Nonfatal attack   No attack
Placebo    18             171               10845
Aspirin    5              99                10933


D A B 1 P FA 2 A FA 3 P NFA 4 A NFA 5 P NA 6 A NA > Dt=c(18,5,171,99,10845,10933) > U B A FA NA NFA A 5 10933 99 P 18 10845 171 > summary(U) Call: xtabs(formula = Dt˜., data = D) Number of cases in table: 22071 Number of factors: 2 Test for independence of all factors: Chisq = 26.903, df = 2, p-value = 1.439e-06

The data frame D specifies the levels of the 6 cells and the vector Dt gives the counts of the 6 cells according to the levels as specified in D. The function U=xtabs(Dt ∼ .,D) provides the 2 × 3 contingency table and summary(U) gives the value of Karl Pearson's chi-square test statistic and the corresponding p-value. The results are the same as above. 

The next example gives R code to carry out test procedures in a three-way contingency table as discussed in Sect. 6.5. We extend the techniques used in the previous example.

 Example 6.7.12

Table 6.15 displays data on presence and absence of a coronary artery disease, serum cholesterol level and blood pressure. It is of interest to examine various relations among these three attributes based on the given data. We list these below.

Table 6.15 Count data in a three-way contingency table

                    Disease
Serum cholesterol   Present, Low BP   Present, High BP   Absent, Low BP   Absent, High BP
Low                 10                38                 421              494
High                11                34                 432              322


1. One would like to test whether the presence of the disease depends on the blood pressure and serum cholesterol, that is, in terms of tests for contingency tables, whether the three attributes are associated with each other.
2. Another hypothesis of interest is whether blood pressure levels and presence or absence of the disease are independent given the serum cholesterol levels.
3. Similarly, one may like to see whether serum cholesterol levels and presence or absence of the disease are independent given the blood pressure levels.
4. One more test of interest is whether the attribute “disease” is independent of blood pressure and serum cholesterol, while blood pressure and serum cholesterol may not be independent.

We have discussed these four types of hypotheses in Sect. 6.5. The first conjecture is about the mutual independence of the three attributes, while the second and third are related to conditional independence. The fourth is independence of A with (B, C), where A denotes the attribute “disease”, B denotes “BP” and C denotes “serum cholesterol”. We carry out the test procedures to examine these claims, under the assumption that the joint distribution of cell counts is multinomial. As a first step, we construct a three-way contingency table for the above data using the xtabs function in R. Depending on the hypothesis, we find the expected frequency for each cell and use Karl Pearson's chi-square test statistic with appropriate degrees of freedom. In Table 6.16 we list the null hypothesis and the formulae for the expected frequencies (E) and the degrees of freedom. There is one more approach to analyze the count data in a contingency table, via Poisson regression, with three factors: presence or absence of a coronary artery disease, serum cholesterol level and blood pressure. Using the loglm function in the library MASS of R (Venables and Ripley [6]), we can analyze these count data by the Poisson regression approach. Both the approaches are illustrated below and they yield the same results.

Table 6.16 Three-way contingency table: formulae for expected frequencies and degrees of freedom

Hypothesis                                           Null hypothesis                   E                         Degrees of freedom
1. Mutual independence                               H0 : pi jk = pi.. p. j. p..k      n i.. n . j. n ..k /n²    rsm − r − s − m + 2
2. Conditional independence given cholesterol       H0 : pi jk = pi.k p. jk /p..k     n i.k n . jk /n ..k       (r − 1)(s − 1)m
3. Conditional independence given BP                H0 : pi jk = pi j. p. jk /p. j.   n i j. n . jk /n . j.     (r − 1)s(m − 1)
4. Independence of disease with BP and cholesterol  H0 : pi jk = pi.. p. jk           n i.. n . jk /n           (r − 1)(sm − 1)


### To construct 2x2x2 contingency table Disease = c("Yes","Yes","Yes","Yes","No","No","No","No") BP = c("Low","Low","High","High","Low","Low","High","High") Cholesterol = c("Low","High","Low","High","Low","High","Low","High") D = data.frame(Disease,BP,Cholesterol); D Dt = c(10,11,38,34,421,432,494,322) T = xtabs(Dt˜.,D); T ## contingency table summary(T)### Results for mutual independence r = 2; s = 2; m = 2; n = sum(T); n; E1 = E2 = E3 = E4 = O = c() t=1 for(i in 1:r) for(j in 1:s) for(k in 1:m) { O[t] = T[i,j,k] E1[t] = sum(T[i,,])*sum(T[,j,])*sum(T[,,k])/(nˆ2) E2[t] = sum(T[i,,k])*sum(T[,j,k])/sum(T[,,k]) E3[t] = sum(T[i,j,])*sum(T[,j,k])/sum(T[,j,]) E4[t] = sum(T[i,,])*sum(T[,j,k])/n t = t+1 } d = round(data.frame(O,E1,E2,E3,E4),4); d d = as.matrix(d) df = c(r*s*m-r-s-m+2,m*(r-1)*(s-1),s*(r-1)*(m-1),(s*m-1)*(r-1)); df TS = p = b = c() for(i in 1:4) { TS[i] = sum(((d[,1]-d[,i+1])ˆ2)/d[,i+1]) p[i] = 1-pchisq(TS[i],df[i]) b[i] = qchisq(0.95,df[i]) } d1 = round(data.frame(TS,df,p,b),4); d1 #### Poison regression approach library("MASS") data = data.frame(Disease,BP,Cholesterol,Dt) loglm(Dt˜Disease+BP+Cholesterol,data=data) loglm(Dt˜Disease*Cholesterol+BP*Cholesterol,data=data) loglm(Dt˜Disease*BP+Cholesterol*BP,data=data) loglm(Dt˜Disease+(BP*Cholesterol),data=data)

A partial output corresponding to function T=xtabs(Dt∼ .,D) is given below, to specify how the three-way contingency table is constructed.


### Output D Disease BP Cholesterol 1 Yes Low Low 2 Yes Low High 3 Yes High Low 4 Yes High High 5 No Low Low 6 No Low High 7 No High Low 8 No High High Dt=c(10,11,38,34,421,432,494,322) T , , Cholesterol = High BP Disease High Low No 322 432 Yes 34 11 , , Cholesterol = Low BP Disease High Low No 494 421 Yes 38 10

Data frame D specifies the levels of the three attributes according to which the data in the vector Dt of counts are entered; for example, there are 10 patients suffering from the disease when the levels of BP and cholesterol are both low, and there are 322 patients not suffering from the disease when the levels of BP and cholesterol are both high. Object T displays the cross tabulation of the data in vector Dt according to the levels specified in D. It prepares two tables corresponding to the two levels of the third attribute “cholesterol”. Thus, when the cholesterol level is high, we get a 2 × 2 table where rows correspond to presence of disease and columns correspond to levels of BP. Table 6.17 (data frame d) displays the observed frequencies O and the expected frequencies E i , where E i denotes the frequency expected under hypothesis i, i = 1, 2, 3, 4. The vector O of observed frequencies is formed as follows. Index i corresponds to attribute “Disease” with level 1 for absence and level 2 for presence, index j corresponds to attribute “BP” with level 1 for high and level 2 for low, and index k corresponds to attribute “cholesterol” with level 1 for high and level 2 for low. As indices i, j and k run from 1 to r = 2, 1 to s = 2 and 1 to m = 2 respectively, vector O corresponds to counts according to the combinations of levels (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2). The vectors of the expected frequencies follow the same pattern. The results of the test procedures corresponding to the hypotheses 1 to 4 are displayed in Table 6.18. From the first row of Table 6.18, we note that the data do not have sufficient evidence to accept the hypothesis of mutual independence.


Table 6.17 Three-way contingency table: observed and expected frequencies

O    322      494      432      421      34      38      11      10
E1   381.42   459.71   375.41   452.46   21.25   25.62   20.92   25.21
E2   335.95   505.48   418.05   409.52   20.05   26.52   24.95   21.48
E3   327.14   488.86   432.35   420.64   28.86   43.14   10.64   10.36
E4   337.21   503.92   419.62   408.25   18.79   28.08   23.38   22.75

Table 6.18 Three-way contingency table: values of test statistic and p-values

Hypothesis                                           Test statistic   df   p-value   Cut-off point
Mutual independence                                  50.0468          4    0.0000    9.4877
Conditional independence given cholesterol           30.2432          2    0.0000    5.9915
Conditional independence given BP                    1.6841           2    0.4308    5.9915
Independence of disease with BP and cholesterol      31.1632          3    0.0000    7.8147

Thus, the presence of disease and levels of BP and cholesterol are associated with each other. The built-in function summary(T) gives the results only for the hypothesis of mutual independence. These are the same as displayed in Table 6.18. Further, we note that given the attribute “cholesterol”, disease and BP are associated with each other, but the two attributes “Disease” and “cholesterol” are not associated with each other given the attribute “BP”. It is to be noted from Table 6.17 that under this hypothesis, the observed frequencies and the frequencies expected under null setup are in close agreement with each other. The fourth hypothesis of independence of “Disease” with “BP” and “cholesterol” is rejected. Results for the four tests using Poisson regression approach are displayed in Table 6.19. In addition to Karl Pearson’s test, this approach also uses likelihood ratio test procedure. The values of the test statistic for Karl Pearson’s test and likelihood ratio test are close to each other. The results obtained by the Poisson regression approach are exactly the same as in Table 6.18. 
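The following lines are an addition of ours, not part of the book's code: the same mutual independence hypothesis can be fitted directly as a Poisson log-linear model with the base function glm, and the residual deviance of the main-effects model should reproduce the likelihood ratio statistic reported for hypothesis 1 in Table 6.19 (about 51.93 on 4 degrees of freedom); the data frame data is the one created in the code above.

fit = glm(Dt ~ Disease + BP + Cholesterol, family = poisson, data = data) ### aside
fit$deviance; fit$df.residual; 1 - pchisq(fit$deviance, fit$df.residual)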


Table 6.19 Three-way contingency table: analysis by Poisson regression

Hypothesis                                           Test procedure     Test statistic   df   p-value
Mutual independence                                  Likelihood ratio   51.93            4    1.42e−10
                                                     Karl Pearson       50.04            4    3.53e−10
Conditional independence given cholesterol           Likelihood ratio   31.58            2    1.39e−7
                                                     Karl Pearson       30.24            2    2.71e−7
Conditional independence given BP                    Likelihood ratio   1.66             2    0.4360
                                                     Karl Pearson       1.68             2    0.4308
Independence of disease with BP and cholesterol      Likelihood ratio   31.94            3    5.38e−7
                                                     Karl Pearson       31.16            3    7.85e−7

6.8 Conceptual Exercises

6.8.1 In a multinomial distribution with 3 cells, the cell probabilities are p1 (θ) = p2 (θ) = (1 + θ)/3 and p3 (θ) = (1 − 2θ)/3, 0 < θ < 1/2. (i) Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from this distribution find the maximum likelihood estimator and the moment estimator based on the sufficient statistic for θ and examine if these are CAN. (ii) Use the result to derive Wald's test and the score test procedure for testing H0 : θ = θ0 against the alternative H1 : θ ≠ θ0 . 6.8.2 In a multinomial distribution with four cells, the cell probabilities are p1 (θ) = p4 (θ) = (2 − θ)/4 and

p2 (θ) = p3 (θ) = θ/4 , 0 < θ < 2.

Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from this distribution find the maximum likelihood estimator of θ and examine if it is CAN. Use the result to derive (i) a likelihood ratio test, (ii) Wald's test, (iii) a score test and (iv) Karl Pearson's chi-square test to test H0 : θ = θ0 against the alternative H1 : θ ≠ θ0 . 6.8.3 In a certain genetic experiment two different varieties of a certain species are crossed. A specific characteristic of an offspring can occur at three levels A, B and C. According to the proposed model, the probabilities for the three levels A, B and C are 1/12, 3/12 and 8/12 respectively. Out of fifty offspring 6, 8 and 36 have levels A, B and C respectively. Test the validity of the proposed model by a score test, Karl Pearson's test and by Wald's test.


6.8.4 On the basis of data in a 3 × 3 contingency table, derive a likelihood ratio test procedure and Karl Pearson's test procedure to test H0 : pi j = p ji , i ≠ j = 1, 2, 3 against the alternative H1 : pi j ≠ p ji for at least one pair i ≠ j. 6.8.5 On the basis of data in a 2 × 3 contingency table, derive a likelihood ratio test procedure and Karl Pearson's test procedure to test H0 : p11 = p12 = p13 against the alternative that there is no restriction as specified in H0 . 6.8.6 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Laplace distribution with location parameter θ and scale parameter 1. Derive a large sample test procedure to test H0 : θ = θ0 against the alternative H1 : θ > θ0 and examine whether it is a consistent test procedure.

6.9 Computational Exercises

Verify the results by simulation using R. 6.9.1 For the multinomial distribution of Example 6.2.1, obtain the maximum likelihood estimator of θ on the basis of a simulated sample. Find the value of the test statistic Tn and write the conclusion about the test for H0 against H1 as specified in Example 6.2.1. Verify that the maximum likelihood estimator of θ is a CAN estimator of θ. (Hint: Use code similar to Example 6.7.1.) 6.9.2 On the basis of data given in Example 6.2.3, examine whether the proposed model is valid using a test based on the score test and on Wn = Σi (oi − ei )²/oi . Use the formula for the score test statistic in terms of a quadratic form and find its value. Examine whether it is the same as the value of Karl Pearson's test statistic. (Hint: Use code similar to Example 6.7.6.) 6.9.3 For the multinomial distribution in Exercise 6.8.1, on the basis of a simulated sample, test H0 : θ = 1/4 against the alternative H1 : θ ≠ 1/4 using Wald's test. Find the p-value. Plot the power function and hence examine whether the test is consistent. (Hint: Use code similar to Example 6.7.2.) 6.9.4 For the multinomial distribution in Exercise 6.8.2, on the basis of a simulated sample, test H0 : θ = 1 against the alternative H1 : θ ≠ 1 using the score test. Find the p-value. Plot the power function and hence examine whether the test is consistent. (Hint: Use code similar to Example 6.7.2.) 6.9.5 Suppose (Y1 , Y2 ) has a multinomial distribution in three cells with cell probabilities (θ + φ)/2, (1 − θ)/2 and (1 − φ)/2, 0 < θ, φ < 1. On the basis of a simulated sample, test the hypothesis H0 : θ = θ0 , φ = φ0 against the alternative H1 : θ ≠ θ0 , φ ≠ φ0 using (i) a likelihood ratio test, (ii) Wald's test and (iii) a score test. (Hint: Use code similar to Example 6.7.2.)


6.9.6 A gene in a particular organism is either dominant (A) or recessive (a). Under the assumption that the members of the population of this organism choose their mating partner in a manner that is completely unrelated to the type of the gene, there are three possible genotypes which can be observed namely, A A, Aa and aa respectively. Table 6.20 provides the number of organisms possessing these genotypes when a sample of 600 organisms is selected. Test the claim that the proportions of the organisms in a population corresponding to these genotypes are as provided in Table 6.20. (Hint: Use code similar to Example 6.7.2.) 6.9.7 Table 6.21 shows the number (x) of a particular organism found in 100 samples of water from a pond. Test the hypothesis that these data are from a binomial B(6, p) distribution. (Hint: Use code similar to Example 6.7.4.) 6.9.8 Table 6.22 presents the distribution of heights collected on 300 8-year old girls. Examine whether the data are from normal distribution. (Hint: Use code similar to Example 6.7.4.) 6.9.9 Table 6.23 displays data on number of individuals classified according to party and race. Test the hypothesis of independence between party identification and race, using three approaches discussed in Example 6.7.10. 6.9.10 A sample from 200 married couples was taken from a certain population. Husbands and wives were interviewed separately to determine whether their main source of news was from the newspapers, radio or television. The results are displayed in Table 6.24. (i) Test the hypothesis of symmetry specified by H0 : pi j = p ji . (ii) Test the hypothesis of symmetry and independence specified by H0 : pi j = πi π j where π1 + π2 + π3 = 1. 6.9.11 When a new TV serial is launched, the producer wants to get a feedback from the viewers. Random samples of 250, 200 and 350 consumers from three cities are selected and the following data is obtained from them. Suppose three categories A, B and C are defined as A: Never heard about the serial, B: Heard about the serial but did not watch and C: saw it at least once. Can we claim on the basis of the data in Table 6.25 that the viewers preferences differ in the three cities? Table 6.20 Number of organisms with specific genotype Genotype

          Proportion     Number of organisms
AA        θ²             200
Aa        2θ(1 − θ)      300
aa        (1 − θ)²       100

Table 6.21 Number of organisms

x           0    1    2    3    4    5    6
Frequency   15   30   25   20   5    4    1


Table 6.22 Heights of eight year old girls

Height (in cms)      [114, 120)   [120, 126)   [126, 132)   [132, 138)   [138, 144)
Observed frequency   29           91           130          46           4

Table 6.23 Cross-classification by race and party identification

         Party identification
Race     Democrat   Independent   Republican
Black    103        15            11
White    341        105           405

Table 6.24 Classification according to source of news

                 Husband
Wife      Papers   Radio   TV
Papers    15       6       10
Radio     11       10      20
TV        23       15      90

Table 6.25 Data on feedback of viewers of TV serial

         A     B     C     Total
City 1   51    70    129   250
City 2   60    71    69    200
City 3   69    95    188   350
Total    180   234   386   800

6.9.12 The resident data set “HairEyeColor” in R gives the distribution of hair and eye color and sex for 592 statistics students. It is a three-dimensional array resulting from cross-tabulating 592 observations on 3 attributes. The attributes and their levels are as displayed in Table 6.26. The data can be obtained by giving commands data(HairEyeColor). The commands HairEyeColor and help(HairEyeColor) give the description of the data set. On the basis of these data, test whether (i) the three attributes are associated with each other, (ii) hair color and eye color are independent attributes given the sex of an individual and (iii) hair color and sex are independent attributes given the eye color of an individual. (Hint: Use code similar to Example 6.7.12.)


Table 6.26 Levels of three variables: hair color, eye color and sex

Number   Attribute    Levels
1        Hair Color   Black, Brown, Red, Blond
2        Eye Color    Brown, Blue, Hazel, Green
3        Sex          Male, Female

References

1. Kale, B. K., & Muralidharan, K. (2016). Parametric inference: An introduction. Delhi: Narosa.
2. Wilks, S. S. (1938). The large sample distribution of the likelihood ratio test for testing composite hypotheses. Annals of Mathematical Statistics, 9, 60–62.
3. Rao, C. R. (1978). Linear statistical inference and its applications. New York: Wiley.
4. Agresti, A. (2007). An introduction to categorical data analysis (2nd ed.). New York: Wiley.
5. Agresti, A. (2002). Categorical data analysis (2nd ed.). New York: Wiley.
6. Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). New York: Springer. http://www.stats.ox.ac.uk/pub/MASS4.

7 Solutions to Conceptual Exercises

Contents
7.1 Chapter 2
7.2 Chapter 3
7.3 Chapter 4
7.4 Chapter 5
7.5 Chapter 6
7.6 Multiple Choice Questions
7.6.1 Chapter 2: Consistency of an Estimator
7.6.2 Chapter 3: Consistent and Asymptotically Normal Estimators
7.6.3 Chapter 4: CAN Estimators in Exponential and Cramér Families
7.6.4 Chapter 5: Large Sample Test Procedures
7.6.5 Chapter 6: Goodness of Fit Test and Tests for Contingency Tables

7.1 Chapter 2

2.8.1 Suppose Tn is a consistent estimator of θ. Obtain conditions on the sequence {an , n ≥ 1} such that the following are also consistent estimators of θ: (i) an Tn , (ii) an + Tn and (iii) (an + nTn )/(n + 1).

Solution: It is given that Tn is a consistent estimator of θ, that is, Tn → θ in probability under Pθ , ∀ θ ∈ Θ. In (i), if an → 1 as n → ∞ then an Tn → θ in probability, ∀ θ ∈ Θ. In (ii), if an → 0 as n → ∞ then an + Tn → θ in probability, ∀ θ ∈ Θ. In (iii), (an + nTn )/(n + 1) = an /(n + 1) + (n/(n + 1))Tn . Further, n/(n + 1) → 1; hence if the sequence {an , n ≥ 1} is such that an /(n + 1) → 0 as n → ∞, then (an + nTn )/(n + 1) → θ in probability, ∀ θ ∈ Θ. For example, an = n^δ , δ < 1 or

404

7

Solutions to Conceptual Exercises

an = exp(−n). It is to be noted that if an = n δ , 0 < δ < 1, then {an , n ≥ 1} is not a convergent sequence. 2.8.2 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a distribution of X , where  E(X ) = θ and E(X 2 ) = V < ∞. Show that n 2 i X i is a consistent estimator of θ . Tn = n(n+1) i=1 Solution: It is given that E(X ) = θ , hence  E(Tn ) = E

 2 i Xi n(n + 1) n



 2 i E(X i ) n(n + 1) n

=

i=1

i=1

n(n + 1) 2θ = = θ. n(n + 1) 2 Thus, Tn is an unbiased estimator of θ . Since {X 1 , X 2 , . . . , X n } is a random sample, {i X i , i = 1, 2, . . . , n} are independent random variables. Further, V ar (X ) = V − θ 2 = σ 2 , say and it is finite. Hence,  n  2  n   2 4 V ar (Tn ) = V ar i Xi = 2 i 2σ 2 n(n + 1) n (n + 1)2 i=1

i=1

n(n + 1)(2n + 1) 4σ 2 = 2 2 n (n + 1) 6 2 2σ (2n + 1) = → 0 as n → ∞ . 3n(n + 1) Thus, Tn is an unbiased estimator of θ and its variance to 0 and hence it is a MSE consistent estimator of θ .

2σ 2 (2n+1) 3n(n+1)

converges

sample from a distribution with 2.8.3 Suppose {X 1 , X 2 , . . . , X n } is a random  n n ai X i , where i=1 ai → 1 meanθ and variance σ 2 . Suppose Tn = i=1 n ai2 → 0. Show that Tn is consistent for θ . and i=1 Solution: Since E(X i ) = θ and V ar (X i ) = σ 2 , we have  E(Tn ) = E

n 

 ai X i



i=1

n 

ai → θ

i=1

⇒ Biasθ (Tn ) = (E(Tn ) − θ ) → 0 and  V ar (Tn ) = V ar

n  i=1

Hence, Tn is consistent for θ .

 ai X i

= σ2

n  i=1

ai2 → 0 .

7.1 Chapter 2

405

2.8.4 Suppose g(x) is an even, non-decreasing and non-negative function on [0, ∞). Then show that Tn is consistent for η(θ ) if E(g(Tn − η(θ )) → 0 ∀ θ . Solution: The basic inequality from probability theory states that if X is an arbitrary random variable and g(·) is even, non-decreasing and non-negative Borel function on [0, ∞), then for every a > 0, P[|X | ≥ a] ≤

E(g(X )) . g(a)

Using the basic inequality and the fact that g is an even function, for all  > 0, we have, Pθ [|Tn − η(θ )| > ] ≤

E(g(|Tn − η(θ )|)) E(g(Tn − η(θ )) = →0 ∀ θ. g() g()

Hence, Tn is consistent for η(θ ). 2.8.5 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from the following distributions—(i) Bernoulli B(1, θ ), (ii) Poisson Poi(θ ), (iii) uniform U (0, 2θ ) and exponential distribution with mean θ . Show that the sample mean X n is consistent for θ using the four approaches discussed in Example 2.2.2. Solution: For Bernoulli B(1, θ ), Poisson Poi(θ ), uniform U (0, 2θ ) and exponential distribution, mean is θ and respective variances v(θ ) are θ (1 − θ ), θ, θ 2 /3 and θ 2 and these are finite. Hence, by the CLT √ L n/v(θ )(X n − θ ) → Z ∼ N (0, 1). We use this result to find the limit of coverage probability. As discussed in Example 2.2.2, (i) the first approach is verification of consistency by the definition. Observe that, for given  > 0, 



n/v(θ ) X n − θ < n/v(θ ) Pθ [|X n − θ | < ] = Pθ



≈ n/v(θ ) −  − n/v(θ ) → 1, as n → ∞ , ∀ θ ∈ . Thus, the coverage probability converges to 1 as n → ∞, ∀ θ ∈  and hence the sample mean X n is a consistent estimator of θ . (ii) It is to be noted that E(X n − θ )2 = V ar (X n ) = v(θ )/n → 0, as n → ∞ , ∀ θ ∈ . Thus, X n converges in quadratic mean to θ and hence converges in probability to θ .

406

7

Solutions to Conceptual Exercises

(iii) Suppose Fn (x), x ∈ R denotes the distribution function of X n − θ . Then  n/v(θ )(X n − θ ) ≤ n/v(θ ) x Fn (x) = Pθ [X n − θ ≤ x] = Pθ

≈ n/v(θ ) x , x ∈ R and limiting behavior of Fn (x) as n → ∞ is as follows: ⎧ ⎨ 0, if x < 0 1/2, if x = 0 Fn (x) → ⎩ 1, if x > 0. Thus, Fn (x) → F(x), ∀ x ∈ C F (x) = R − {0}, where F is a distribution function of a random variable degenerate at 0 and C F (x) is a set of points L

of continuity of F. It implies that (X n − θ ) → 0, where the limit law is Pθ

degenerate and hence, (X n − θ ) → 0, for all θ ∈ , which proves that X n is consistent for θ . Pθ (iv) Further, by Khinchine’s WLLN, X n → θ , for all θ ∈ . 2.8.6 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample of size n from a Poisson Poi(θ ) distribution. Find the maximum likelihood estimator of θ and examine whether it is consistent for θ when (i) θ ∈ [a, b] ⊂ (0, ∞) and (ii) θ ∈ {1, 2}. Solution: Corresponding to a random sample X from Poi(θ ) distribution, the likelihood of θ is given by  n  n  n   Xi Xi (1/X i !) exp(−θ )θ = (1/X i !) exp(−nθ ) θ i=1 . L n (θ |X ) = i=1

i=1

(i) The log likelihood function Q(θ ) = log L n (θ |X ) and its first and second derivatives are given by Q(θ ) = c − nθ + n X n log θ, Q  (θ ) = −n + and Q  (θ ) = −

nXn = n(X n − θ )/θ θ

nXn , θ2

where c is a constant free from θ . Thus, the solution of the likelihood equation is given by θ = X n and at this solution the second derivative is negative if X n > 0. Hence, the maximum likelihood estimator θˆn of θ is given by θˆn = X n , provided X n ∈ [a, b]. However, for any θ ∈ [a, b], it is possible

7.1 Chapter 2

407

that X n < a and X n > b as shown below. Suppose Un = Un ∼ Poi(nθ ). Hence,

n i=1

X i then

Pθ [X n < a] = Pθ [Un < na] > 0 & Pθ [X n > b] = Pθ [Un > nb] > 0. Now Xn < a ≤ θ ⇒ Xn − θ < 0



Q  (θ ) = n(X n − θ )/θ < 0

and hence Q(θ ) is a decreasing function of θ . It attains maximum at the smallest possible value of θ which is a. Similarly, Xn > b ≥ θ ⇒ Xn − θ > 0



Q  (θ ) > 0

and hence Q(θ ) is an increasing function of θ . It attains maximum at the largest possible value of θ which is b. Thus, the maximum likelihood estimator θˆn of θ is given by ⎧ ⎨ a, if X n < a θˆn = X , if X n ∈ [a, b] ⎩ n b, if X n > b. Pθ

To verify its consistency, observe that by WLLN X n → θ , for all θ ∈ [a, b]. Now for  > 0 and θ ∈ (a, b), for large n Pθ [|θˆn − X n | < ] ≥ Pθ [θˆn = X n ] = Pθ [a ≤ X n ≤ b] √ √ √ √ ≈ ( n(b − θ )/ θ ) − ( n(a − θ )/ θ) Pθ

and it converges to 1 as n → ∞. Hence, ∀ θ ∈ (a, b), θˆn → θ . Now to examine convergence in probability at the boundary points a and b, note that for θ = a, Pa [|θˆn − a| > ] = Pa [θˆn − a > ] = Pa [θˆn > a + ] and Pa [θˆn > a + ] =



0, if  > b − a √ √ Pa [X n > a + ] ≈ 1 − ( n/ a) → 0, if  ≤ b − a.

Pa Thus, θˆn → a. Further, for the boundary point b, Pb [|θˆn − b| > ] = Pb [b − θˆn > ] = Pb [θˆn < b − ] and  0, if  > b − a ˆ √ √ Pb [θn < b − ] = Pb [X n < b − ] ≈ (− n/ b) → 0, if  ≤ b − a.

408

7

Solutions to Conceptual Exercises

Pb Thus, θˆn → b and hence θˆn is a consistent estimator of θ . (ii) It is to be noted that if θ ∈  = {1, 2}, the likelihood is not even a continuous function of θ and hence to find the maximum likelihood estimator of θ , ) nXn . we compare L n (2|X ) with L n (1|X ). Observe that, LL nn (2|X (1|X ) = exp(−n) 2 Now

exp(−n)2n X n > 1 ⇔

− n + n X n log 2 > 0 ⇔ X n > 1/ log 2 = 1.4427 .

Thus, L n (2|X ) > L n (1|X ) if X n > 1.4427. Hence, the maximum likelihood estimator θˆn of θ is given by θˆn =



2, if X n > 1.4427 1, if X n ≤ 1.4427.

Pθ To verify consistency of θˆn , we have to check whether θˆn → θ for every θ , P2 P1 that is, whether θˆn → 2 and θˆn → 1. Observe that

P2 [|θˆn − 2| < ] =



1, if  > 1 ˆ P2 [θn = 2], if 0 <  ≤ 1.

Further,   P2 [θˆn = 2] = P2 X n > 1.4427 √ √ ≈ 1− n(1.4427 − 2)/ 2 → 1 as

n → ∞.

On similar lines, P1 [|θˆn − 1| < ] = P1 [1 −  < θˆn < 1 + ] =



1, if  > 1 ˆ P1 [θn = 1], if 0 <  ≤ 1

and,   √  P1 [θˆn = 1] = P1 X n < 1.4427 ≈  n(1.4427 − 1) → 1 as n → ∞. P2 P1 Thus, θˆn → 2 and θˆn → 1 implying that θˆn is consistent for θ .

2.8.7 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (θ − 1, θ + 1) distribution, θ ∈ R. (i) Examine whether T1n = X n , T2n = X (1) + 1 and T3n = X (n) − 1 are consistent estimators for θ . Which one is better? Why? (ii) Find an uncountable family of consistent estimators of θ based on sample quantiles.

7.1 Chapter 2

409

Solution: Suppose X ∼ U (θ − 1, θ + 1) distribution, then E(X ) = θ and hence by the WLLN, X n is consistent for θ . Distribution function FX (x, θ ) of X is given by ⎧ 0, if x 21. If we compare the rate of convergence of MSE to 0, then also MSE of T2n , which is same as that of T3n , converges to 0 faster than that of T1n . Thus, T2n and T3n are better than T1n .

410

7

Solutions to Conceptual Exercises

(ii) From the distribution function of X , the p-th population quantile a p (θ ) is given by the solution of the equation, FX (x, θ ) = p ⇔ (x − (θ − 1))/2 = p ⇒ a p (θ ) = 2 p − 1 + θ, 0 < p < 1. Suppose rn = [np] + 1. Then the p-th sample quantile X (rn ) is consistent for a p (θ ) and hence X (rn ) − 2 p + 1 is consistent for θ . Thus, the uncountable family of consistent estimators of θ based on the sample quantiles is given by {X (rn ) − 2 p + 1, 0 < p < 1}. 2.8.8 Suppose {X 1 , X 2 , . . . , X n } are independent random variables where X i follows a uniform U (i(θ − 1), i(θ + 1)) distribution, θ ∈ R. Find the maximum likelihood estimator of θ and examine whether it is a consistent estimator of θ . Solution: Suppose a random variable Yi is defined as Yi = X i /i, i = 1, 2, . . . , n. Then {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables where Yi ∼ U (θ − 1, θ + 1) distribution. Corresponding to observations {X 1 , X 2 , . . . , X n }, we have a random sample Y ≡ {Y1 , Y2 , . . . , Yn } from the distribution of Y ∼ U (θ − 1, θ + 1). The likelihood of θ given Y is given by  n n  1 1 L n (θ |Y ) = , θ − 1 ≤ Yi ≤ θ + 1, = 2 2 i=1

∀ i ⇔ Y(1) ≥ θ − 1, Y(n) ≤ 1 + θ. Thus, the likelihood is constant for Y(n) − 1 ≤ θ ≤ Y(1) + 1. Thus, any value between (Y(n) − 1, Y(1) + 1) can be taken as the maximum likelihood estimator of θ . Thus, we define the maximum likelihood estimator θˆn of θ as θˆn = α(Y(n) − 1) + (1 − α)(Y(1) + 1). Now as shown in Example 2.5.2, Pθ

Y(1) → θ − 1 &



Y(n) → θ + 1 ∀ θ .



Thus, θˆn → θ ∀ θ , which implies that θˆn is consistent for θ . 2.8.9 Suppose {X 1 , X 2 , . . . , X n } are independent random variables where X i ∼ U (0, iθ ) distribution, θ ∈  = R+ . (i) Find the maximum likelihood estimator of θ and examine whether it is consistent for θ . (ii) Find the moment estimator of θ and examine whether it is consistent for θ . Solution: Suppose X i ∼ U (0, iθ ), then it is easy to verify that Yi = X i /i ∼ U (0, θ ). Thus, {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables each having a uniform U (0, θ ). Proceeding on similar lines as in Example 2.2.1, we get that the maximum likelihood estimator of θ is Y(n) and it is consistent for θ . The moment estimator of θ is 2Y n and it is consistent for θ .

7.1 Chapter 2

411

2.8.10 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (−θ, θ ) distribution. Examine whether −X (1) and X (n) are both consistent for θ . Is (X (n) − X (1) )/2 consistent for θ ? Justify your answer. Solution: We define a random variable Y as Y = (X + θ )/2θ , then by the probability integral transformation, Y ∼ U (0, 1). In Example 2.5.2, we have shown that corresponding to a random sample of size n from the uniform U (0, 1) distribution, P

Y(1) → 0 &

P

Y(n) → 1





X (1) → − θ &



X (n) → θ .

Thus, −X (1) and X (n) both are consistent for θ . Further, convergence in probPθ

ability is closed under all arithmetic operations, thus (X (n) − X (1) )/2 → θ and hence is consistent for θ . 2.8.11 Suppose {X 1 , X 2 , . . . , X 2n+1 } is a random sample from a uniform U (θ − 1, θ + 1) distribution, θ ∈ R. Examine whether X (n) − 1 and X ([n/4]+1) is consistent for θ . Solution: Suppose X ∼ U (θ − 1, θ + 1) distribution. Then its distribution function F(x, θ ) is given by ⎧ 0, if x y, then (2n + 1) p = n/4 + y − x < n/4 which implies that p < n/4(2n + 1) < 1/8. Thus, Pθ

X ([n/4]+1) → a p (θ ) < a1/8 (θ ) < θ . If x < y, then maximum possible value for y − x can be 1. Hence, (2n + 1) p = n/4 + y − x < n/4 + 1 ⇒ p < n/4(2n + 1) + 1/(2n + 1) < 1/2. Pθ

Thus, X ([n/4]+1) → a p (θ ) < θ . Hence, X ([n/4]+1) does not converge in probability to θ and hence X ([n/4]+1) is not consistent for θ . 2.8.12 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample of size n from a binomial B(1, θ ) distribution, θ ∈  = (0, 1). (i) Find the maximum likelihood estimator of θ and examine whether it is consistent for θ . (ii) Find the moment estimator of θ and examine whether it is consistent for θ . (iii) Find the maximum likelihood estimator of θ and examine whether it is consistent for θ , if  = (a, b) ⊂ (0, 1). Solution: Suppose X ∼ B(1, θ ), then the likelihood of θ corresponding to the given random sample is given by log L n (θ |X ) = log θ

Σ_{i=1}^n Xi + log(1 − θ)·(n − Σ_{i=1}^n Xi).

To find the maximum likelihood estimator of θ, the first and the second derivatives of log Ln(θ|X) are given by

Q(θ) = ∂/∂θ log Ln(θ|X) = nX̄n/θ − (n − nX̄n)/(1 − θ) = nX̄n/(θ(1 − θ)) − n/(1 − θ) = (n/(1 − θ))·((X̄n − θ)/θ)

and

∂²/∂θ² log Ln(θ|X) = −nX̄n(1/θ² − 1/(1 − θ)²) − n/(1 − θ)² < 0 ∀ θ ∈ (0, 1).

Solving the likelihood equation Q(θ) = 0, we get the solution as θ = X̄n and at this solution, the second derivative is almost surely negative. (i) If the parameter space is Θ = (0, 1), then X̄n is an estimator provided X̄n ∈ (0, 1). However, it is possible that X̄n = 0 ⇔ Xi = 0 ∀ i = 1, 2, ..., n, the probability of which is (1 − θ)ⁿ > 0. In this case, the likelihood of θ is given by (1 − θ)ⁿ. It is a decreasing function of θ and attains


supremum at θ = 0. Similarly, it is possible that X n = 1 ⇔ X i = 1 ∀ i = 1, 2, . . . , n, the probability of which is θ n > 0. In this case, the likelihood of θ is given by θ n . It is an increasing function of θ and attains supremum at θ = 1. However, 0 and 1 are not included in the parameter space. Hence, the maximum likelihood estimator of θ does not exist if θ ∈ (0, 1). However, it is to be noted that both P[X n = 0] = (1 − θ )n and P[X n = 1] = θ n converge to 0 as n increases, that is with probability approaching 1, 0 < X n < 1 and for large n, X n as the maximum likelihood estimator of θ . By the WLLN it is consistent for θ . (ii) The moment estimator of θ is X n provided X n ∈ (0, 1), however as discussed above it is possible that X n = 0 and 1. Thus, a moment estimator / (0, 1). If 0 < X n < 1, by the WLLN it is consistent does not exist if X n ∈ for θ . (iii) In (i) we have seen that the solution of the likelihood equation is θ = X n and the second derivative is negative at this solution. Thus X n is the maximum likelihood estimator if X n ∈ [a, b]. However, 



P[X̄n < a] = P[Σ_{i=1}^n Xi < an] = Σ_{r=0}^{[an]−1} (n choose r) θʳ (1 − θ)^{n−r} > 0.

  Similarly, P X n > b > 0. Suppose X n < a. Further, a ≤ θ ⇒ X n < θ ⇒ Q(θ ) < 0. In this case, the likelihood is a decreasing function of θ and attains supremum at smallest possible value of θ , which is a. Similarly, if X n > b then θ ≤ b ⇒ X n > θ ⇒ Q(θ ) > 0. In this case, the likelihood is an increasing function of θ and attains supremum at largest possible value of θ , which is b. Hence, the maximum likelihood estimator θˆn of θ is given by ⎧ ⎨ a, if X n < a θˆn = X , if a ≤ X n ≤ b ⎩ n b, if X n > b. Now to verify whether it is consistent, we proceed as follows. By WLLN, Pθ

X̄n → θ in Pθ-probability, ∀ θ ∈ [a, b], and by the CLT, √n(X̄n − θ) → Z1 ∼ N(0, θ(1 − θ)) in law. For ε > 0,

Pθ[|θ̂n − X̄n| < ε] ≥ Pθ[θ̂n = X̄n] = Pθ[a ≤ X̄n ≤ b]
 = Pθ[√n(a − θ)/√(θ(1 − θ)) ≤ √n(X̄n − θ)/√(θ(1 − θ)) ≤ √n(b − θ)/√(θ(1 − θ))]
 ≈ Φ(√n(b − θ)/√(θ(1 − θ))) − Φ(√n(a − θ)/√(θ(1 − θ))) → 1 as n → ∞,


∀ θ ∈ (a, b). Thus, (θ̂n − X̄n) → 0 in Pθ-probability ∀ θ ∈ (a, b) and hence, θ̂n → θ in Pθ-probability ∀ θ ∈ (a, b). Now for θ = a,

Pa[|θ̂n − a| > ε] = Pa[θ̂n − a > ε] = Pa[θ̂n > a + ε] = 0 if a + ε > b ⇔ ε > b − a.

Suppose 0 < ε ≤ b − a, then

Pa[θ̂n > a + ε] = Pa[a + ε < X̄n ≤ b]
 = Pa[√n ε/√(a(1 − a)) < √n(X̄n − a)/√(a(1 − a)) ≤ √n(b − a)/√(a(1 − a))]
 ≈ Φ(√n(b − a)/√(a(1 − a))) − Φ(√n ε/√(a(1 − a))) → 0 as n → ∞.

Thus, θ̂n → a in Pa-probability. Suppose θ = b, then

Pb[|b − θ̂n| > ε] = Pb[b − θ̂n > ε] = Pb[θ̂n < b − ε] = 0 if b − ε < a ⇔ ε > b − a.

Suppose 0 < ε ≤ b − a, then

Pb[θ̂n < b − ε] = Pb[a ≤ X̄n < b − ε]
 = Pb[√n(a − b)/√(b(1 − b)) ≤ √n(X̄n − b)/√(b(1 − b)) < −√n ε/√(b(1 − b))]
 ≈ Φ(−√n ε/√(b(1 − b))) − Φ(√n(a − b)/√(b(1 − b))) → 0 as n → ∞.

Hence, θ̂n → b in Pb-probability. Thus, θ̂n → θ in Pθ-probability for all θ ∈ [a, b] and hence is consistent for θ.
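As a quick empirical check of the truncated maximum likelihood estimator in part (iii), the following R sketch (illustrative only; the values of a, b, θ, the number of replications and the sample sizes are arbitrary choices) computes θ̂n = min(max(X̄n, a), b) over repeated Bernoulli samples, both for an interior value of θ and at the boundary θ = a.

set.seed(2)
a <- 0.3; b <- 0.7
trunc_mle <- function(x) min(max(mean(x), a), b)   # returns a, Xbar_n or b
for (theta in c(0.5, a)) {                         # interior point and boundary
  for (n in c(30L, 300L, 3000L)) {
    est <- replicate(2000, trunc_mle(rbinom(n, 1, theta)))
    cat(sprintf("theta = %.1f  n = %4d  mean = %.4f  sd = %.4f\n",
                theta, n, mean(est), sd(est)))
  }
}

In both cases the estimates concentrate around the true θ as n grows, as the coverage probability argument above indicates.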

2.8.13 Suppose {X1, X2, ..., Xn} is a random sample of size n from a normal N(θ, 1) distribution, θ ∈ Θ = {0, 1}. An estimator Tk(X̄n) is defined as Tk(X̄n) = 0 if X̄n < k, and 1 if X̄n ≥ k. Prove that it is consistent for θ if and only if 0 < k < 1.
Solution: Suppose 0 < k < 1. To verify consistency of Tk(X̄n), we examine whether Tk(X̄n) → 0 in P0-probability and Tk(X̄n) → 1 in P1-probability. Observe that

P0[|Tk(X̄n) − 0| < ε] = 1 if ε > 1, and P0[Tk(X̄n) = 0] if 0 < ε ≤ 1,

and P0[Tk(X̄n) = 0] = P0[X̄n ≤ k] = Φ(√n k) → 1 as n → ∞, as k > 0.


On similar lines, P1 [|Tk (X n ) − 1| < ] = P1 [1 −  < Tk (X n ) < 1 + ]  1, if  > 1 = P1 [Tk (X n ) = 1], if 0 <  ≤ 1  √   Further, P1 [Tk (X n ) = 1] = P1 X n > k = 1 −  n(k − 1) → 1 as n → ∞, as k < 1. Thus, if 0 < k < 1, Tk (X n ) is consistent for θ . Now suppose Tk (X n ) is P0

P1

consistent for θ , that is, Tk (X n ) → 0 and Tk (X n ) → 1, that is ∀  > 0, P0 [|Tk (X n ) − 0| < ] → 1 and P1 [|Tk (X n ) − 1| < ] → 1 as n → ∞. 1, coverage probability is 1 for Since Tk (X n ) is either 0 or 1, for  >√  both θ= 0 and θ = 1. For  ≤ 1,  nk → 1 implies k > 0 and √ 1 −  n(k − 1) → 1 implies k < 1. For these two implications to be true, k must be in (0, 1). 2.8.14 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (θ, 1) distribution, θ ∈  = {−1, 0, 1}. (i) Find the maximum likelihood estimator of θ and examine whether it is consistent for θ . (ii) Examine whether it is unbiased for θ . Examine whether it is asymptotically unbiased for θ . Solution: Corresponding to a random sample X from a normal N (θ, 1) distribution, the likelihood of θ is given by   1 1 2 L n (θ |X ) = √ exp − (X i − θ ) 2 2π i=1   n  √ 1 = ( 2π)−n exp − (X i − θ )2 . 2 n 

(i) It is to be noted that for θ ∈ Θ = {−1, 0, 1}, the likelihood is not a continuous function of θ and hence, to find the maximum likelihood estimator of θ, we compare Ln(−1|X) with Ln(0|X) and Ln(0|X) with Ln(1|X). Observe that, with c = (√(2π))^{−n},

Ln(−1|X) = c exp{−(1/2)(Σ_{i=1}^n Xi² + 2 Σ_{i=1}^n Xi + n)},
Ln(0|X) = c exp{−(1/2) Σ_{i=1}^n Xi²},
Ln(1|X) = c exp{−(1/2)(Σ_{i=1}^n Xi² − 2 Σ_{i=1}^n Xi + n)}.


Further,

Ln(−1|X)/Ln(0|X) = exp{−(Σ_{i=1}^n Xi + n/2)} = exp{−n(X̄n − (−1/2))} > 1 if X̄n < −1/2, and ≤ 1 if X̄n ≥ −1/2.

Similarly,

Ln(0|X)/Ln(1|X) = exp{(1/2)(n − 2 Σ_{i=1}^n Xi)} = exp{−n(X̄n − 1/2)} > 1 if X̄n < 1/2, and ≤ 1 if X̄n ≥ 1/2.

Hence, the maximum likelihood estimator θ̂n of θ is given by

θ̂n = −1 if X̄n < −1/2, 0 if −1/2 ≤ X̄n < 1/2, and 1 if X̄n ≥ 1/2.

θ̂n will be a consistent estimator of θ if and only if θ̂n → −1 in P−1-probability, θ̂n → 0 in P0-probability and θ̂n → 1 in P1-probability. Observe that for 0 < ε ≤ 1,

P−1[|θ̂n − (−1)| < ε] = P−1[−1 − ε < θ̂n < −1 + ε] = P−1[θ̂n = −1] = P−1[X̄n < −1/2] = Φ(√n/2) → 1 as n → ∞.

Further, for 1 < ε ≤ 2,

P−1[|θ̂n − (−1)| < ε] = P−1[−1 − ε < θ̂n < −1 + ε] = P−1[θ̂n ∈ {−1, 0}] = P−1[X̄n < 1/2] = Φ(3√n/2) → 1 as n → ∞.

Now suppose ε > 2, then P−1[|θ̂n − (−1)| < ε] = P−1[θ̂n ∈ {−1, 0, 1}] = 1. Thus, θ̂n → −1 in P−1-probability. For θ = 0 and for 0 < ε ≤ 1,

P0[|θ̂n| < ε] = P0[−ε < θ̂n < ε] = P0[θ̂n = 0] = P0[−1/2 ≤ X̄n < 1/2] = Φ(√n/2) − Φ(−√n/2) → 1 as n → ∞.

Further, for ε > 1, P0[|θ̂n| < ε] = P0[−ε < θ̂n < ε] = P0[θ̂n ∈ {−1, 0, 1}] = 1.


Hence, θ̂n → 0 in P0-probability. Now for θ = 1 and for 0 < ε ≤ 1,

P1[|θ̂n − 1| < ε] = P1[1 − ε < θ̂n < 1 + ε] = P1[θ̂n = 1] = P1[X̄n ≥ 1/2] = 1 − Φ(−√n/2) → 1 as n → ∞.

For 1 < ε ≤ 2,

P1[|θ̂n − 1| < ε] = P1[1 − ε < θ̂n < 1 + ε] = P1[θ̂n ∈ {0, 1}] = P1[X̄n ≥ −1/2] = 1 − Φ(−3√n/2) → 1 as n → ∞.

Further, for ε ≥ 2, P1[|θ̂n − 1| < ε] = P1[1 − ε < θ̂n < 1 + ε] = 1.

Hence, θˆn → 1. Thus, θˆn → θ, ∀ θ ∈  = {−1, 0, 1} and hence θˆn is consistent for θ . (ii) The estimator θˆn is unbiased for θ , if E θ (θˆn ) = θ, ∀ θ ∈ . The  possible values of θˆn are {−1, 0, 1} with probabilities Pθ X n < −1/2 ,     Pθ −1/2 ≤ X n < 1/2 and Pθ X n ≥ 1/2 for θ = −1, 0, 1 respectively. Hence,     E −1 (θˆn ) = −1P−1 X n < −1/2 + 0P−1 −1/2 ≤ X n < 1/2   + 1P−1 X n ≥ 1/2 √  √ = −1P−1 n(X n − (−1)) < n(−1/2 − (−1))  √ √ + 1P−1 n(X n − (−1)) ≥ n(1/2 − (−1)) √   √  = − n/2 + 1 −  3 n/2 . For n = 4, E −1 (θˆn ) = −0.8386, for n = 49, E −1 (θˆn ) = −0.99976 and for n = 100, it is approximately −1. Hence, we conclude that θˆn is not unbiased for θ . However, for n = 100, it is approximately −1, indicates that it may be asymptotically unbiased for θ , which is verified below. It is to be noted that √   √  E −1 (θˆn ) = − n/2 + 1 −  3 n/2 → −1 as n → ∞,       E 0 (θˆn ) = −1P0 X n < −1/2 + 0P0 −1/2 ≤ X n < 1/2 + 1P0 X n ≥ 1/2 √  √ = −1P0 n(X n − 0) < n(−1/2 − 0) √  √ + 1P0 n(X n − 0) ≥ n(1/2 − 0)  √  √  = − − n/2 + 1 −  n/2 → 0 as n → ∞.


    & E 1 (θˆn ) = −1P1 X n < −1/2 + 0P1 −1/2 ≤ X n < 1/2   + 1P1 X n ≥ 1/2 √  √ = −1P0 n(X n − 1) < n(−1/2 − 1)  √ √ + 1P0 n(X n − 1) ≥ n(1/2 − 1)  √   √  = − −3 n/2 + 1 −  − n/2 → 1 as n → ∞. Thus, E θ (θˆn ) → θ as n → ∞ for all θ ∈ . Hence, θˆn is asymptotically unbiased for θ . 2.8.15 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Cauchy C(θ, 1) distribution, where θ ∈ R. Examine whether the sample mean is consistent for θ . Solution: If {X 1 , X 2 , . . . , X n } is a random sample from a Cauchy C(θ, 1) distribution, then the sample mean X n also follows a Cauchy C(θ, 1) distribution. Hence for  > 0, 

Pθ[|X̄n − θ| < ε] = ∫_{θ−ε}^{θ+ε} (1/π)·(1/(1 + (x − θ)²)) dx = (1/π)[tan⁻¹(x − θ)]_{θ−ε}^{θ+ε} = (2/π) tan⁻¹(ε),

which is a constant free from n and hence does not converge to 1 as n → ∞. Hence, the sample mean is not consistent for θ . 2.8.16 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from X with probability density function f (x, θ ) = θ/x 2 , x ≥ θ, θ > 0. (i) Find the maximum likelihood estimator of θ and examine its consistency for θ by computing the coverage probability and also the MSE. (ii) Examine if X (n) is consistent for θ . Solution: (i) The probability density function of a random variable X is given by f X (x, θ ) = θ/x 2 , x ≥ θ. Corresponding to a random sample X from this distribution, the likelihood of θ is given by L n (θ |X ) =

∏_{i=1}^n θ/Xi² = θⁿ ∏_{i=1}^n Xi⁻², Xi ≥ θ ∀ i ⇔ X(1) ≥ θ ⇔ θ ≤ X(1).

Thus, the likelihood is an increasing function of θ and attains maximum at the maximum possible value of θ given data {X 1 , X 2 , . . . , X n }. The maximum possible value of θ given data is X (1) and hence the maximum likelihood


estimator θ̂n of θ is given by X(1). To verify the consistency of X(1) as an estimator of θ, we find the coverage probability using the distribution function of X(1). The distribution function FX(x) of X is given by

FX(x) = 0 if x < θ, and 1 − θ/x if x ≥ θ.

Hence, the distribution function of X(1) is given by

FX(1)(x) = 1 − [1 − FX(x)]ⁿ = 0 if x < θ, and 1 − θⁿ/xⁿ if x ≥ θ.

For ε > 0, the coverage probability is given by

Pθ[|X(1) − θ| < ε] = Pθ[θ − ε < X(1) < θ + ε] = Pθ[θ < X(1) < θ + ε], as X(1) ≥ θ,
 = FX(1)(θ + ε) − FX(1)(θ) = 1 − θⁿ/(θ + ε)ⁿ → 1 ∀ ε > 0 and ∀ θ as n → ∞.

Hence, X(1) is consistent for θ. To compute the MSE of X(1) as an estimator of θ, we find the probability density function g(x, θ) of X(1) from its distribution function. It is given by g(x, θ) = nθⁿ/x^{n+1}, x ≥ θ. Hence,

E(X(1)) = nθ/(n − 1), E(X(1)²) = nθ²/(n − 2) ⇒ MSEθ(X(1)) = 2θ²/((n − 1)(n − 2)) → 0 as n → ∞.

Thus, X(1) is MSE consistent for θ. (ii) To examine if X(n) is consistent for θ, consider for ε > 0 the coverage probability

Pθ[|X(n) − θ| < ε] = Pθ[θ − ε < X(n) < θ + ε] = Pθ[θ < X(n) < θ + ε], as X(n) ≥ θ,
 = FX(n)(θ + ε) − FX(n)(θ) = (1 − θ/(θ + ε))ⁿ → 0 ∀ ε > 0 and ∀ θ as n → ∞.

Hence, X(n) is not consistent for θ.
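The contrast between X(1) and X(n) in Exercise 2.8.16 is also easy to see by simulation. The sketch below is an aside, not part of the original solution; it draws from f(x, θ) = θ/x², x ≥ θ, via the inverse transform X = θ/U with U ∼ U(0, 1), for an arbitrarily chosen θ.

set.seed(3)
theta <- 1.5
for (n in c(20L, 200L, 2000L)) {
  x <- theta / runif(n)    # inverse-transform draw: F(x) = 1 - theta/x
  cat(sprintf("n = %4d  X(1) = %.4f  X(n) = %.2f\n", n, min(x), max(x)))
}

X(1) approaches θ = 1.5 while X(n) drifts off, matching the two coverage probability limits derived above.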


2.8.17 Suppose X follows a Laplace distribution with probability density function f (x, θ ) = exp{−|x − θ |}/2, x ∈ R, θ ∈ R. A random sample of size n is drawn from the distribution of X . Examine whether X n is consistent for θ . Examine if it is MSE consistent. Is sample median consistent for θ ? Justify. Find the maximum likelihood estimator of θ and examine whether it is consistent for θ . Find a family of consistent estimators of θ based on the sample quantiles. Solution: Since X follows a Laplace distribution, E(X ) = θ < ∞ and Pθ

V ar (X ) = 2. Thus, by the WLLN X n → θ . Hence, X n is a consistent estimator of θ . Further, it is unbiased and hence its MSE is given by V ar (X n ) = 2/n → 0. Thus, X n is MSE consistent for θ . A Laplace distribution is a symmetric distribution, symmetric around θ , thus the population median is also θ . Hence by Theorem 2.2.6, the sample median is consistent for θ . To find the maximum likelihood estimator of θ , note that the likelihood of θ given the data X is, L n (θ |X ) =

∏_{i=1}^n exp{−|Xi − θ|}/2 = (1/2ⁿ) exp{−Σ_{i=1}^n |Xi − θ|}.

It is maximum with respect to variations in θ when Σ_{i=1}^n |Xi − θ| is minimum, and it is minimized when θ is the sample median. Thus, the maximum likelihood estimator of θ is the sample median and it is consistent for θ. To find a family of consistent estimators of θ based on the sample quantiles, we first find the distribution function F(x, θ) of X. If x < θ then |x − θ| = −(x − θ). Hence for x < θ,

F(x, θ) = ∫_{−∞}^x (1/2) exp{−|u − θ|} du = ∫_{−∞}^x (1/2) exp{u − θ} du = (1/2) exp{x − θ}.

For x ≥ θ,

F(x, θ) = ∫_{−∞}^θ (1/2) exp{−|u − θ|} du + ∫_θ^x (1/2) exp{−(u − θ)} du = 1/2 + (1/2)[1 − exp{−(x − θ)}] = 1 − (1/2) exp{−(x − θ)}.


Now the p-th population quantile a p (θ ) is a solution of the equation F(x, θ ) = p. For p < 1/2, F(x, θ ) = p ⇒ (1/2) exp{x − θ } = p ⇒ x = a p (θ ) = θ + log 2 p . For p ≥ 1/2, F(x, θ ) = p ⇒ 1 − (1/2) exp{−(x − θ )} = p ⇒ x = a p (θ ) = θ − log 2(1 − p) . Pθ

By Theorem 2.2.6, the p-th sample quantile X ([np]+1) → a p (θ ). Hence, for all p ∈ (0, 1/2), X ([np]+1) − log 2 p is consistent for θ and, for all p ∈ [1/2, 1), X ([np]+1) + log 2(1 − p) is consistent for θ . 2.8.18 Suppose {X 1 , X 2 , . . . , X n } is a random sample from an exponential distribution with mean θ . Show that X n is MSE consistent for θ . Find a constant c ∈ R such that Tn = n X n /(n + c) has MSE smaller than that of X n . Is Tn consistent for θ ? Justify. Solution: Since X follows an exponential distribution with mean θ , V ar (X )=θ 2 . Further, X n is an unbiased estimator of θ , hence M S E θ (X n ) = V ar (X n )=θ 2 /n → 0 as n → ∞. Thus, X n is MSE consistent for θ . Now for Tn =

nX̄n/(n + c), we have

E(Tn) = nθ/(n + c), Biasθ(Tn) = −cθ/(n + c) and Var(Tn) = nθ²/(n + c)²
⇒ MSEθ(Tn) = Var(Tn) + (Biasθ(Tn))² = nθ²/(n + c)² + c²θ²/(n + c)² = (n + c²)θ²/(n + c)².

We find c ∈ R such that M S E θ (Tn ) is minimum. Suppose g(c) =

(n + c²)/(n + c)². Then

g′(c) = [(n + c)²·2c − (n + c²)·2(n + c)]/(n + c)⁴ = 2n(c − 1)/(n + c)³.

Now g′(c) = 0 ⇒ c = 1; c cannot be −n as in that case Tn is not defined. Thus, for c = 1, the MSE of Tn is smaller than that of X̄n. Consistency of Tn follows from the consistency of X̄n and the fact that n/(n + c) → 1 as n → ∞.

2.8.19 Suppose {X1, X2, ..., Xn} is a random sample from a Laplace distribution with probability density function f(x, θ) given by

f(x, θ) = (1/2θ) exp{−|x|/θ}, x ∈ R, θ > 0.

Examine whether the following estimators are consistent for θ: (i) sample mean, (ii) sample median, (iii) Σ_{i=1}^n |Xi|/n and (iv) (Σ_{i=1}^n Xi²/n)^{1/2}.


Solution: Since X follows a Laplace distribution, it is distributed as Y1 − Y2 , where Y1 and Y2 are independent random variables each having an exponential distribution with scale parameter 1/θ . Hence, E(X ) = 0 and V ar (X ) = Pθ

2θ 2 . By the WLLN, the sample mean X n → E(X ) = 0, hence X n cannot be consistent for θ as the limit random variable in convergence in probability is almost surely unique. Since the distribution of X is symmetric around 0, the median of X is the same as the mean of X which is zero. From Theorem 2.2.6, the sample median is consistent for population median which is 0 hence sample median cannot be consistent for θ . Now,  E(|X |) = (1/2θ )



−∞





|x| exp {−|x|/θ } d x = (2/2θ )

x exp {−x/θ } d x = θ.

0

n Pθ |X i |/n → θ and hence is consistent for θ . Hence by the WLLN, i=1 It is to be noted that E(X 2 ) = V ar (X ) = 2θ 2 as E(X ) = 0. Hence by the n Pθ WLLN, i=1 X i2 /n → 2θ 2 . By the invariance property of consistency n Pθ √ X i2 /n)1/2 → 2θ and hence it is under continuous transformations, ( i=1 not consistent for θ . 2.8.20 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a distribution of X with probability density function θ/x θ +1 , x > 1, θ > 0. Examine whether X n is consistent for θ for θ > 1. What happens if 0 < θ ≤ 1? Obtain a consistent estimator of θ based on the transformations g(x) = log x and g(x) = 1/x. ∞ Solution: For the given distribution E(X ) = θ 1 x/x θ +1 d x. The integral is convergent if the degree of polynomial in denominator is larger by 1 than the degree of polynomial in numerator, that is if θ + 1 − 1 > 1 ⇔ θ > 1. If 0 < θ ≤ 1, then the integral is divergent and E(X ) does not exist. For Pθ

θ > 1, E(X ) = θ/(θ − 1). By the WLLN, X n → θ/(θ − 1) and hence it cannot be consistent for θ , since the limit random variable in convergence in probability is almost surely unique. To find a consistent estimator of θ based on the transformation log x, define Y = log X , then using jacobian of transformation method, it follows that Y has an exponential distribution with scale parameter θ , that is, mean 1/θ . Corresponding to a random sample {X 1 , X 2 , . . . , X n }, we have a random sample {Y1 , Y2 , . . . , Yn } and by the n n Pθ WLLN, Y n = i=1 log X i /n → 1/θ, ∀ θ > 0. Thus, n/ i=1 log X i is a consistent estimator of θ . Now to find a consistent estimator of θ based on the transformation 1/x, observe that  E

(1/X) = θ ∫_1^∞ x^{−(θ+2)} dx = θ/(θ + 1) > 0.

7.1 Chapter 2

423

Hence by the WLLN, Tn =

n 1  1 Pθ θ → ⇒ n Xi θ +1 i=1

Tn Pθ → θ. 1 − Tn

Thus, Tn /(1 − Tn ) is a consistent estimator of θ . 2.8.21 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from an exponential distribution with scale parameter 1 and location parameter θ . Examine whether X (1) is strongly consistent for θ . Solution: If X follows an exponential distribution with scale parameter 1 and location parameter θ , then its probability density function f X (x, θ ) is given by f X (x, θ ) = exp{−(x − θ )}, x ≥ θ . It is shown in Example 2.2.16 that the distribution of X (1) is again exponential with scale parameter n and location parameter θ . Hence, E(X (1) ) = θ + 1/n and V ar (X (1) ) = 1/n 2 , which implies that M S E θ (X (1) ) = E(X (1) − θ )2 = (Biasθ X (1) )2 + V ar (X (1) ) = 2/n 2 . To examine strong consistency we use the sufficient condition which states  a.s. r that if for some r > 0, n≥1 E(|X n − X | ) < ∞, then X n → X . Observe that   2 a.s. E(|X (1) − θ |2 ) = < ∞ ⇒ X (1) → θ. n2 n≥1

n≥1

Hence, X(1) is strongly consistent for θ. We may use another sufficient condition of almost sure convergence, which states that if ∀ ε > 0, Σ_{n≥1} P[|Xn − X| > ε] < ∞, then Xn → X almost surely. Using the exponential distribution of X(1), we have for ε > 0,

P[|X(1) − θ| > ε] = 1 − P[θ < X(1) < θ + ε] = e^{−nε}
⇒ Σ_{n≥1} P[|X(1) − θ| > ε] = Σ_{n≥1} (e^{−ε})ⁿ < ∞, as e^{−ε} < 1 for ε > 0,

⇒ X(1) → θ almost surely.

2.8.22 Suppose X ≡ {X1, X2, ..., Xn} is a random sample from a normal N(θ, 1) distribution, where θ ∈ {0, 1}. Examine whether the maximum likelihood estimator θ̂n of θ is strongly consistent for θ. Examine whether θ̂n = θ almost surely for large n.


Solution: In Example 2.2.3, we have obtained the maximum likelihood estimator of θ corresponding to a random sample X from a normal N(θ, 1) distribution, when θ ∈ {0, 1}. It is given by

θ̂n = 1 if X̄n > 1/2, and 0 if X̄n ≤ 1/2.

Further, we have shown that it is consistent for θ, since for ε > 1, Pθ[|θ̂n − θ| < ε] = 1 for θ = 0 and θ = 1, and for 0 < ε ≤ 1, Pθ[|θ̂n − θ| < ε] = Φ(√n/2) → 1 as n → ∞, for both θ = 0 and θ = 1. Thus, for large n and 0 < ε ≤ 1,

Pθ[|θ̂n − θ| ≥ ε] = 1 − Φ(√n/2) ≤ φ(√n/2)·(2/√n) = (2/(√n √(2π))) e^{−n/8},

where φ(·) is the probability density function of the standard normal distribution. By the ratio test for convergence of series,

(e^{−(n+1)/8} √n)/(e^{−n/8} √(n + 1)) = √(n/(n + 1)) e^{−1/8} → e^{−1/8} < 1
⇒ Σ_{n≥1} Pθ[|θ̂n − θ| > ε] ≤ Σ_{n≥1} (2/(√n √(2π))) e^{−n/8} < ∞ ∀ ε > 0.

Hence, by the sufficient condition of almost sure convergence, θ̂n → θ almost surely. Suppose an event An is defined as An = {ω : θ̂n(ω) ≠ θ}; then

Σ_{n≥1} Pθ[θ̂n ≠ θ] < ∞ ⇔ Σ_{n≥1} P(An) < ∞ ⇒ P(lim sup An) = 0 ⇔ P(lim inf Anᶜ) = 1
⇒ Pθ{ω : θ̂n(ω) = θ, ∀ n ≥ n0(ω)} = 1,

by the Borel–Cantelli lemma. Hence, we conclude that θ̂n = θ almost surely for large n.
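A small numerical illustration (not in the text; the sample sizes, the number of replications and the choice θ = 1 are arbitrary) shows how fast Pθ[θ̂n ≠ θ] decays, which is exactly what makes the series Σ_{n≥1} Pθ[θ̂n ≠ θ] converge in the Borel–Cantelli argument above.

set.seed(4)
theta <- 1                                   # true value, either 0 or 1
p_wrong <- sapply(c(5L, 10L, 20L, 40L), function(n) {
  est <- replicate(1e5, mean(rnorm(n, mean = theta, sd = 1)) > 0.5)
  mean(est != (theta == 1))                  # estimate of P(theta_hat != theta)
})
print(p_wrong)                               # decays roughly like exp(-n/8)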


2.8.24 Suppose {X1, X2, ..., Xn} is a random sample of size n from a Bernoulli B(1, θ) distribution. Examine whether X̄n is uniformly consistent for θ ∈ Θ = (0, 1).
Solution: Since X ∼ B(1, θ) distribution, E(X̄n) = θ and Var(X̄n) = θ(1 − θ)/n. Consistency of X̄n for θ follows from the WLLN. We use Chebyshev's inequality to find the minimum sample size n0(ε, δ, θ). By Chebyshev's inequality,

Pθ[|X̄n − θ| < ε] ≥ 1 − E(X̄n − θ)²/ε² = 1 − θ(1 − θ)/(nε²) ≥ 1 − 1/(4nε²), as θ(1 − θ) ≤ 1/4 ∀ θ ∈ Θ.

We select n0(ε, δ, θ) such that

1 − 1/(4nε²) ≥ 1 − δ ⇒ n ≥ 1/(4ε²δ) ⇒ n0(ε, δ, θ) = [1/(4ε²δ)] + 1;

thus, n0(ε, δ, θ) does not depend on θ and hence X̄n is uniformly consistent for θ.

2.8.25 Suppose X follows an exponential distribution with location parameter μ and scale parameter σ, with probability density function f(x, μ, σ) given by f(x, μ, σ) = (1/σ) exp{−(x − μ)/σ}, x ≥ μ, σ > 0, μ ∈ R. Suppose {X1, X2, ..., Xn} is a random sample from the distribution of X. (i) Verify whether X̄n is consistent for μ or σ. (ii) Find a consistent estimator for θ = (μ, σ)′ based on the sample median and the sample mean.
Solution: (i) For a random variable X following an exponential distribution with location parameter μ and scale parameter σ, E(X) = μ + σ. Hence,

given a random sample {X1, X2, ..., Xn}, by the WLLN, X̄n → E(X) = μ + σ in Pθ-probability. Thus, X̄n is consistent for μ + σ and it cannot be consistent for any other parametric function, since the limit law in convergence in probability is almost surely unique. Hence, X̄n is not consistent for μ or σ. (ii) We find the median of X from its distribution function FX(x) given by

FX(x) = 0 if x < μ, and 1 − exp{−(x − μ)/σ} if x ≥ μ.

Solution of the equation FX (x) = 1/2 gives the median a1/2 (θ) = μ + σ loge 2. Sample median X ([n/2]+1) is consistent for the population median a1/2 (θ ) = μ + σ loge 2. Thus,




X̄n → μ + σ and X([n/2]+1) → μ + σ logₑ2 in Pθ-probability
⇒ X̄n − (X([n/2]+1) − X̄n)/(logₑ2 − 1) → μ and (X([n/2]+1) − X̄n)/(logₑ2 − 1) → σ in Pθ-probability,

as convergence in probability is closed under arithmetic operations. Thus, (X n + 3.2589(X ([n/2]+1) − X n ), − 3.2589(X ([n/2]+1) − X n )) is a consistent estimator of (μ, σ ) . 2.8.26 Suppose {(X 1 , Y1 ) , (X 2 , Y2 ) , . . . , (X n , Yn ) } is a random sample from a bivariate Cauchy C2 (θ1 , θ2 , λ) distribution, with probability density function, f (x, y, θ1 , θ2 , λ) = (λ/2π ){λ2 + (x − θ1 )2 + (y − θ2 )2 }−3/2 (x, y) ∈ R2 , θ1 , θ2 ∈ R, λ > 0. Using the sample quartiles based on the samples from marginal distributions, obtain two distinct consistent estimators of (θ1 , θ2 , λ) . Hence, obtain a family of consistent estimators of (θ1 , θ2 , λ) . Solution: Since (X , Y ) has a bivariate Cauchy distribution, the marginal distribution of X is Cauchy C(θ1 , λ) and that of Y is Cauchy C(θ2 , λ). Hence, the quartiles of X and Y are given by Q 1 (X ) = θ1 − λ, Q 2 (X ) = θ1 , Q 3 (X ) = θ1 + λ & Q 1 (Y ) = θ2 − λ, Q 2 (Y ) = θ2 , Q 3 (Y ) = θ2 + λ. From Theorem 2.2.6, the sample quartiles are consistent for the corresponding population quartiles. Hence, P(θ1 ,λ)

X([n/4]+1) → θ1 − λ, X([n/2]+1) → θ1 and X([3n/4]+1) → θ1 + λ in P(θ1, λ)-probability,
and Y([n/4]+1) → θ2 − λ, Y([n/2]+1) → θ2 and Y([3n/4]+1) → θ2 + λ in P(θ2, λ)-probability.

Thus, Tn1 = X([n/2]+1) is consistent for θ1. To get another consistent estimator for θ1, we use the result that convergence in probability is closed under arithmetic operations. Hence, we have Tn2 = (X([n/4]+1) + X([3n/4]+1))/2 to be consistent for θ1. On similar lines, Sn1 = Y([n/2]+1) and Sn2 = (Y([n/4]+1) + Y([3n/4]+1))/2 are consistent for θ2. Now to get a consistent estimator of λ, observe that Un1 = X([n/2]+1) − X([n/4]+1) and Un2 = Y([n/2]+1) − Y([n/4]+1) both are consistent for λ. Thus, Vn1 = (Tn1, Sn1, Un1)′ and Vn2 = (Tn2, Sn2, Un2)′ are two distinct consistent estimators of (θ1, θ2, λ)′. Further, the convex combination of Vn1 and Vn2 given by αVn1 + (1 − α)Vn2, 0 < α < 1, is a consistent estimator of (θ1, θ2, λ)′. Thus, a family of consistent estimators of (θ1, θ2, λ)′ is given by {αVn1 + (1 − α)Vn2, 0 < α < 1}.
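The quartile-based estimators of Exercise 2.8.26 can be illustrated numerically. The R sketch below is only an illustration under stated assumptions: it simulates the bivariate Cauchy C2(θ1, θ2, λ) via the standard bivariate t representation with 1 degree of freedom, (X, Y) = (θ1, θ2) + λ(Z1, Z2)/|Z3| with Z1, Z2, Z3 independent N(0, 1), and it uses R's quantile() in place of the exact order-statistic definition of the sample quartiles (the difference is asymptotically negligible); the parameter values and n are arbitrary choices.

set.seed(5)
theta1 <- 1; theta2 <- -2; lambda <- 0.5; n <- 10000
z <- matrix(rnorm(3 * n), ncol = 3)
x <- theta1 + lambda * z[, 1] / abs(z[, 3])
y <- theta2 + lambda * z[, 2] / abs(z[, 3])
q <- function(v) quantile(v, c(0.25, 0.5, 0.75), names = FALSE)
qx <- q(x); qy <- q(y)
Vn1 <- c(qx[2], qy[2], qx[2] - qx[1])                              # (Tn1, Sn1, Un1)
Vn2 <- c((qx[1] + qx[3]) / 2, (qy[1] + qy[3]) / 2, qy[2] - qy[1])  # (Tn2, Sn2, Un2)
print(rbind(Vn1, Vn2))   # both rows should be close to (theta1, theta2, lambda)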


2.8.27 Suppose {X 1 , X 2 , . . . , X n } is a random sample from an exponential distribution with probability density function f (x, θ, α) given by f (x, θ, α) = (1/α) exp{−(x − θ )/α}, x ≥ θ, θ ∈ R, α > 0 .   n  Show that X (1) , i=2 (X (i) − X (1) )/(n − 1) is consistent for (θ, α) .  Obtain a consistent estimator of (θ, α) based on the sample moments. Solution: To verify the consistency of X (1) as an estimator of θ , we find the distribution function of X (1) , it is given by FX (1) (x) = 1 − [1 − FX (x)]n , x ∈ R. The distribution function FX (x) is given by  0, if x < θ FX (x) = 1 − exp{−(x − θ )/α}, if x ≥ θ. Hence, the distribution function of X (1) is given by  0, if x < θ FX (1) (x) = 1 − exp{−n(x − θ )/α}, if x ≥ θ. Thus, the distribution of X (1) is again exponential with location parameter θ and scale parameter n/α. Hence, E(X (1) ) = θ + α/n which implies that the bias of X (1) as an estimator of θ is α/n and it converges to 0 as n → ∞. Further, V ar (X (1) ) = α 2 /n 2 → 0 as n → ∞. Hence, X (1) is consistent for θ . We have derived this result in Example 2.2.16 when α = 1. Now to examn (X (i) − X (1) )/(n − 1) is consistent for α, we define ine whether Tn = i=2 random variables Yi as Yi = (n − i + 1)(X (i) − X (i−1) ), i = 2, 3, . . . , n. Then n 

Σ_{i=2}^n Yi = Σ_{i=2}^n X(i) − (n − 1)X(1) = Σ_{i=2}^n (X(i) − X(1)) = (n − 1)Tn.

It can be proved that {Y2 , Y3 , . . . , Yn } are independent and identically distributed random variables each following an exponential distribution with location parameter 0 and scale parameter 1/α. Thus, E(Y2 ) = α < ∞. n Pθ,α Hence by the WLLN, i=2 Yi /(n − 1) → α and hence Tn is consistent for Since joint consistency is equivalent to the marginal consistency,  α.   n (X (i) − X (1) )/(n − 1) is consistent for (θ, α) . To obtain a X (1) , i=2 consistent estimator for (θ, α) based on the sample moments, observe Pθ

that E(X ) = θ + α and V ar (X ) = α 2 . By the WLLN X n → θ + α and Pθ

m2 → α² in Pθ-probability. Convergence in probability is closed under all arithmetic operations, hence √m2 → α and m1 − √m2 → θ in Pθ-probability. Thus, a consistent estimator of (θ, α)′ based on the sample moments is (m1 − √m2, √m2)′.
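The two consistent estimators of (θ, α)′ obtained in Exercise 2.8.27 can be compared by simulation. The following R sketch is illustrative only (θ, α and n are arbitrary choices); it computes the order-statistics-based estimator and the moment-based estimator from the same sample.

set.seed(6)
theta <- 2; alpha <- 3; n <- 5000
x  <- theta + rexp(n, rate = 1 / alpha)          # exponential with location theta and mean alpha
xs <- sort(x)
est1 <- c(xs[1], sum(xs[-1] - xs[1]) / (n - 1))  # (X(1), Tn)
m2   <- mean((x - mean(x))^2)
est2 <- c(mean(x) - sqrt(m2), sqrt(m2))          # (m1 - sqrt(m2), sqrt(m2))
print(rbind(est1, est2))                         # both rows close to (theta, alpha)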


2.8.28 Suppose {X i j , i = 1, 2, j = 1, 2, . . . , n} are independent random variables such that X i j ∼ N (μi , σ 2 ) distribution. Find the maximum likelihood estimators of μi i = 1, 2 and σ 2 . Examine whether these are consistent. Solution: It is given that the random variables X i j ∼ N (μi , σ 2 ) distribution, hence the likelihood of μi , i = 1, 2 and σ 2 given the data X = {X i j , i = 1, 2, n, j = 1, 2, . . . , n} is given by L n (μ1 , μ2 , σ 2 |X ) =



(2πσ²)^{−n} exp{−(1/2σ²) Σ_{i=1}^2 Σ_{j=1}^n (Xij − μi)²}

and the maximum likelihood estimators are given by

μ̂in = (1/n) Σ_{j=1}^n Xij = X̄in, i = 1, 2, and σ̂n² = (1/2n) Σ_{i=1}^2 Tin, where Tin = Σ_{j=1}^n (Xij − X̄in)², i = 1, 2.

By the WLLN, μ̂in → μi in P-probability, i = 1, 2, and hence it is consistent for μi, i = 1, 2. To examine whether σ̂n² is consistent for σ², observe that Tin/n is the sample central moment of order 2 for i = 1, 2, hence it is consistent for the population central moment of order 2, which is σ². Thus,

(1/n) Tin → σ² in P-probability ⇒ σ̂n² = (1/2n) Σ_{i=1}^2 Tin → σ² in P-probability,

and σ̂n² is consistent for σ².

2.8.29 Suppose {X1, X2, ..., Xn} is a random sample from a normal N(θ, σ²) distribution. Suppose Sn² = Σ_{i=1}^n (Xi − X̄n)². (i) Examine whether T1n = Sn²/n and T2n = Sn²/(n − 1) are consistent for σ². (ii) Show that MSEθ(T1n) < MSEθ(T2n), ∀ n ≥ 2. (iii) Show that T3n = Sn²/(n + k) is consistent for σ². Determine k such that MSEθ(T3n) is minimum.
Solution: (i) Suppose X ∼ N(θ, σ²) distribution. Then Tn = Sn²/σ² ∼ χ²_{n−1} distribution. Hence, E(Tn) = n − 1 and Var(Tn) = 2(n − 1). As a consequence,

E(T1n) = σ²(n − 1)/n, Var(T1n) = 2σ⁴(n − 1)/n² and MSEθ(T1n) = σ⁴(2n − 1)/n² → 0


as n → ∞. Similarly, E(T2n) = σ², Var(T2n) = MSEθ(T2n) = 2σ⁴/(n − 1) → 0 as n → ∞.

Thus, both T1n = Sn²/n and T2n = Sn²/(n − 1) are consistent for σ². (ii) It is easy to verify that

MSEθ(T1n) − MSEθ(T2n) = σ⁴(2n − 1)/n² − 2σ⁴/(n − 1) = σ⁴(1 − 3n)/(n²(n − 1)) < 0, ∀ n ≥ 2.

(iii) As in (i) we have

E(T3n) = σ²(n − 1)/(n + k), Var(T3n) = 2σ⁴(n − 1)/(n + k)² and MSEθ(T3n) = σ⁴(2n − 2 + (k + 1)²)/(n + k)²,

and it converges to 0 as n → ∞. Thus, T3n is consistent for σ². To determine k such that MSEθ(T3n) is minimum, suppose MSEθ(T3n) = σ⁴ g(k), where

g(k) = (2n − 2 + (k + 1)²)/(n + k)²
⇒ g′(k) = [(n + k)²·2(k + 1) − (2n − 2 + (k + 1)²)·2(n + k)]/(n + k)⁴,

and g′(k) = 0 ⇒ (n + k)(k + 1) − (2n − 2 + (k + 1)²) = 0 ⇒ k(n − 1) − n + 1 = 0 ⇒ k = 1.

Thus, for k = 1, M S E θ (T3n ) is minimum. 2.8.30 An electronic device is such that the probability of its instantaneous failure is θ , that is, if X denotes the life length random variable of the device, then P[X = 0] = θ . Given that X > 0, the conditional distribution of life length is exponential with mean α. In a random sample of size n, it is observed that r items failed instantaneously and remaining n − r items had life times {X i1 , X i2 , . . . , X in−r }. On the basis of these data, find a consistent estimator of θ and of α. Solution: Suppose F denotes the distribution function of X , then F(x, θ, α) = 0 ∀ x < 0 as X ≥ 0 a. s. For x = 0, F(x, θ, α) = P[X ≤ 0] =


P[X = 0] = θ . For x > 0, F(x, θ, α) = P[X ≤ x] = P[X = 0] + P[0 < X ≤ x] = P[X = 0] + P[X ≤ x|X > 0]P[X > 0] = θ + (1 − e−x/α )(1 − θ ). Hence, the distribution function F(x, θ, α) is given by F(x, θ, α) =

0 if x < 0, θ if x = 0, and θ + (1 − e^{−x/α})(1 − θ) if x > 0.

It is neither continuous nor discrete. To find a consistent estimator of θ , we define a random variable Yi , i = 1, 2, . . . , n as  1, if X i = 0 Yi = 0, if X i > 0. Thus, Yi is a Borel function of X i , i = 1, 2, . . . , n and hence {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables each having Bernoulli B(1, θ ) distribution. Hence by the WLLN, Y n = r /n is a consistent estimator of E(Y1 ) = θ . To find a consistent estimator for α, we define a random variable Ui , i = 1, 2, . . . , n as  Ui =

0 if Xi = 0, and Xi if Xi > 0.

{U1, U2, ..., Un} are also independent and identically distributed random variables. Observe that

E(U1) = E(X1 I[X1>0]) = E[E(X1 I[X1>0] | I[X1>0])] = E[I[X1>0] E(X1 | X1 > 0)] = α(1 − θ).

Hence by the WLLN,

Ūn = (Xi1 + Xi2 + ··· + Xin−r)/n = Sn/n → α(1 − θ) in P-probability, where Sn = Xi1 + Xi2 + ··· + Xin−r.

Hence,

Tn = (Xi1 + Xi2 + ··· + Xin−r)/(n − r) = Sn/(n − r) = (n/(n − r))·(Sn/n) → α in P-probability.
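The estimators of Exercise 2.8.30 are easy to try out on simulated data. The R sketch below is not part of the original solution; it generates the zero-inflated lifetimes directly from the model (the values of θ, α and n are arbitrary) and then forms r/n and Sn/(n − r).

set.seed(7)
theta <- 0.3; alpha <- 5; n <- 4000
fail_now <- rbinom(n, 1, theta)                       # 1 = instantaneous failure
x <- ifelse(fail_now == 1, 0, rexp(n, rate = 1 / alpha))
r <- sum(x == 0)
theta_hat <- r / n                                    # consistent for theta
alpha_hat <- sum(x) / (n - r)                         # S_n/(n - r), consistent for alpha
print(c(theta_hat = theta_hat, alpha_hat = alpha_hat))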


2.8.31 A linear regression model is given by Y = a + bX + , where E() = 0 and V ar () = σ 2 . Suppose {(X i , Yi ) , i = 1, 2, . . . , n} is a random sample from the distribution of (X , Y ) . Examine whether the least square estimators of a and b are consistent for a and b respectively. Solution: For a linear regression model Y = a + bX + , it is known that a = E(Y ) − bE(X ) and b = Cov(X , Y )/V ar (X ). Further, corresponding to a random sample {(X i , Yi ) , i = 1, 2, . . . , n}, the least square estimators of a and b are given by SX Y & aˆ n = Y n − bˆn X n , bˆn = SX X where S X Y is the sample covariance and S X X is the sample variance of X . In Theorem 2.5.4, it is proved that the sample covariance is consistent for the population covariance. From Theorem 2.5.3, the sample variance is consistent for the population variance. Further, the sample mean is consistent for a population mean. Convergence in probability is closed under arithmetic operations, hence it follows that aˆ n and bˆn are consistent for a and b respectively. 2.8.32 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (μ, σ 2 ) distribution. Find a consistent estimator of P[X 1 < a] where a is any real number. Solution: Suppose X ∼ N (μ, σ 2 ), then (X − μ)/σ ∼ N (0, 1). Thus, P[X 1 < a] =  ((a − μ)/σ ) = g(μ, σ 2 ), say . Since (·) is a distribution function of a continuous random variable, g is a continuous function from R2 to R. In Example 2.5.3, it shown that if {X 1 , X 2 , . . . , X n } is a random sample of size n from a normal N (μ, σ 2 ) distribution then (m 1 , m 2 ) is a consistent estimator of (μ, σ 2 ) . Hence, bythe invariance property of consistency under continuous transformation, √   (a − m 1 )/ m 2 is a consistent estimator of P[X 1 < a] =  ((a − μ)/σ ). 2.8.33 Suppose {Z 1 , Z 2 , . . . , Z n } is a random sample of size n from a multivariate normal N p (μ, ) distribution. Find a consistent estimator of θ = (μ, ). Also find a consistent estimator of l  μ where l is a vector in R p . Solution: Suppose Z = (X 1 , X 2 , . . . , X p ) ∼ N p (μ, ), where μ = (μ1 , μ2 , . . . , μ p ) and  = [σi j ] p× p . Then X i ∼ N (μi , σii ), i = 1, 2, . . . , p. A random sample {Z 1 , Z 2 , . . . , Z n } gives a random sample of size  n on each X i . Hence as shown in Example 2.5.3, the sample mean X in = rn=1 X ir /n is consistent for μi , i = 1, 2, . . . , p. Since joint consistency and marginal consistency are equivalent,


Z n = (X 1n , X 2n , . . . , X pn ) is consistent for μ = (μ1 , μ2 , . . . , μ p ) . Now ar (X i ), hence as shown in Example 2.5.3, sample variance σii = V Siin = rn=1 (X ir − X in )2 /n is consistent for σii . Further, σi j = Cov(X i , X j ). In Theorem 2.5.4, it is proved that the sample covariance is consistent for population covariance. Hence, the sample covari ance Si jn = rn=1 (X ir − X in )(X jr − X jn )/n is consistent for population covariance σi j . Thus, if Sn = [Si jn ] denotes the sample dispersion matrix then Sn is consistent for . Hence, the consistent estimator for θ = (μ, ) is (Z n , Sn ). To find a consistent estimator for l  μ, we use invariance property of consistency under continuous transformation. We define a function and hence the cong : R p → R as g(x) = l  x.It is a continuous function p p sistent estimator for l  μ = i=1 li μi is l  Z n = i=1 li X in . 2.8.34 On the basis of a random sample of size n from a multinomial distribution k in k cells with cell probabilities ( p1 , p2 , . . . , pk ), with i=1 pi = 1, find a consistent estimator for p = ( p1 , p2 , . . . , pk ). Solution: Suppose Y = (Y1 , Y2 , . . . , Yk ) has a multinomial k distribution pi = 1 and in k cells with cell probabilities ( p1 , p2 , . . . , pk ), with i=1 k Y = 1. Then for each i, Y has Bernoulli B(1, p ) distribution with i i i i=1 E(Yi ) = pi , i = 1, 2, . . . , k. Suppose {Y r , r = 1, 2, . . . , n} is a random sample from the distribution  of Y . If X i denotes the frequency k of i-th cell X i = n. By in the sample, then X i = rn=1 Yir , i = 1, 2, . . . , k and i=1 the WLLN, pˆ in =

n Pp Xi 1 Yir → pi , i = 1, 2, . . . , k . = n n r =1

Thus, (X 1 /n, X 2 /n, . . . , X k /n) is a consistent estimator of p = ( p1 , p2 , . . . , pk ). It is to be noted that (X 1 /n, X 2 /n, . . . , X k /n) is a vector of relative frequencies of k cells in a sample of size n which add up to 1. 2.8.35 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a uniform U (θ1 , θ2 ) distribution, −∞ < θ1 < x < θ2 < ∞. Examine whether (X (1) , X (n) ) is a consistent estimator of θ = (θ1 , θ2 ) . Obtain a consistent estimator of (θ1 + θ2 )/2 and of (θ2 − θ1 )2 /12 based on (X (1) , X (n) ) and also based on the sample moments. Solution: Suppose we define a random variable Y as Y = (X − θ1 )/(θ2 − θ1 ), then by probability integral transformation, Y ∼ U (0, 1). In Example 2.5.2, we have shown that corresponding to a random sample of size n from the uniform U (0, 1) distribution, P

Y(1) → 0 &

P

Y(n) → 1





X (1) → θ1 &



X (n) → θ2 .

434

7

Solutions to Conceptual Exercises

Thus, (X (1) , X (n) ) is consistent for (θ1 , θ2 ) . Further, convergence in probability is closed under all arithmetic operations, hence (X (n) + X (1) ) Pθ θ1 + θ2 → 2 2

&

(X (n) − X (1) )2 Pθ (θ2 − θ1 )2 → . 12 12

To obtain a consistent estimator of (θ1 + θ2 )/2 and of (θ2 − θ1 )2 /12 based on the sample moments, it is to be noted that (θ1 + θ2 )/2 is the mean and (θ2 − θ1 )2 /12 is the variance of the U (θ1 , θ2 ) distribution. Hence by the WLLN, the sample mean X n is consistent for (θ1 + θ2 )/2 and by Theorem 2.5.3, the sample variance m 2 is consistent for (θ2 − θ1 )2 /12. 2.8.36 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample of size n from a Laplace distribution with probability density function given by f (x, θ, λ) = (1/2λ) exp {−|x − θ |/λ} , x ∈ R, θ ∈ R, λ > 0 . Using stepwise maximization procedure, find the maximum likelihood estimator of θ and of λ and examine if those are consistent for θ and λ respectively. Solution: Corresponding to a random sample X from a Laplace distribution, the log-likelihood of (θ, λ) is given by 1 |X i − θ |. λ n

log L n (θ, λ|X ) = −n log 2λ −

i=1

Suppose λ is fixed at λ0 , the log likelihood is maximized with respect to variations in θ when θ is the sample median X ([n/2]+1) , it does not depend on the fixed value of  λ. Now we consider a function n |X i − X ([n/2]+1) |/λ. It is a differentiable funch(λ) = −n log 2λ − i=1 tion of λ and by the calculus method we get that h(λ) is maximum when n |X i − X ([n/2]+1) |/n. Hence, the maximum likelihood estimators λ = i=1 n of θ and λ are given by θˆn = X ([n/2]+1) and λˆ n = i=1 |X i − X ([n/2]+1) |/n. To examine whether these are consistent, we proceed as follows. For a Laplace distribution with location parameter θ and scale parameter λ, the population median is θ and hence by Theorem 2.2.6, the sample median X ([n/2]+1) = θˆn is consistent for θ . Now to examine whether λˆ n is consistent, we define Y = (X − θ )/λ, then Y follows a Laplace distribution with location parameter 0 and scale parameter 1. Hence, E(|Y |) = 1. Since {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables, being Borel functions, {|Y1 |, |Y2 |, . . . , |Yn |} are also independent and identically distributed random variables with mean 1. Hence by the WLLN, n n 1 1 P P |Yi | → 1 ⇒ Tn = |X i − θ | → λ. n n i=1

i−1

7.1 Chapter 2

435

Now we use following two inequalities related to the absolute values, which are given by, |a + b| ≤ |a| + |b| and ||a| − |b|| ≤ |a − b| to establish consistency of λˆ n . We have,

n

|Tn − λ̂n| = |(1/n) Σ_{i=1}^n |Xi − θ| − (1/n) Σ_{i=1}^n |Xi − θ̂n||
 ≤ (1/n) Σ_{i=1}^n ||Xi − θ| − |Xi − θ̂n||
 ≤ (1/n) Σ_{i=1}^n |Xi − θ − Xi + θ̂n| = |θ̂n − θ|.

Thus, we get |Tn − λ̂n| ≤ |θ̂n − θ|, which implies that if |θ̂n − θ| < ε, then |Tn − λ̂n| < ε. Hence for every ε > 0, P[|Tn − λ̂n| < ε] ≥ P[|θ̂n − θ| < ε] → 1 as n → ∞. As a consequence, Tn − λ̂n → 0 in P-probability. We have proved that Tn → λ in P-probability and hence λ̂n → λ in P-probability.
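The stepwise maximum likelihood estimators of Exercise 2.8.36 can be verified numerically. The R sketch below is illustrative only: it draws from the Laplace(θ, λ) distribution by the usual inverse-CDF formula and uses R's median() in place of X([n/2]+1) (the difference is asymptotically negligible); θ, λ and the sample sizes are arbitrary choices.

set.seed(8)
theta <- 1; lambda <- 2
for (n in c(100L, 1000L, 10000L)) {
  u <- runif(n) - 0.5
  x <- theta - lambda * sign(u) * log(1 - 2 * abs(u))  # inverse-CDF draw from Laplace(theta, lambda)
  theta_hat  <- median(x)                              # stepwise MLE of theta
  lambda_hat <- mean(abs(x - theta_hat))               # stepwise MLE of lambda
  cat(sprintf("n = %5d  theta_hat = %.3f  lambda_hat = %.3f\n", n, theta_hat, lambda_hat))
}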

7.2 Chapter 3

3.5.1 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (0, θ ) distribution, θ > 0. (i) Examine whether the maximum likelihood estimator of θ is a CAN estimator of θ . (ii) Examine whether the moment estimator of θ is a CAN estimator of θ . (iii) Solve (i) and (ii) if {X 1 , X 2 , . . . , X n } are independent random variables where X i follows a uniform U (0, iθ ) distribution, θ > 0. Solution: (i) Corresponding to a random sample of size n from a uniform U (0, θ ) distribution, it is shown in Example 2.2.1 that the maximum likelihood estimator θˆn of θ is given by X (n) and is consistent for θ . To derive the asymptotic distribution of X (n) with suitable norming, we define Yn = n(θ − X (n) ) and derive its distribution function G Yn (y) for y ∈ R from the distribution function of X (n) . It is given by FX (n) (x) = [FX (x)]n

= 0 if x < 0, (x/θ)ⁿ if 0 ≤ x < θ, and 1 if x ≥ θ.

Since X (n) ≤ θ, Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then


GYn(y) = Pθ[n(θ − X(n)) ≤ y] = Pθ[X(n) ≥ θ − y/n] = 1 − FX(n)(θ − y/n); hence

GYn(y) = 0 if y ≤ 0, 1 − (1 − y/nθ)ⁿ if 0 < y ≤ nθ, and 1 if y ≥ nθ.

As n → ∞,

GYn(y) → 0 if y ≤ 0, and 1 − e^{−y/θ} if y ≥ 0.

Thus, Yn = n(θ − X (n) ) converges in distribution to an exponential distribution with location parameter 0 and scale parameter 1/θ . Thus, with norming factor n, the asymptotic distribution of X (n) is not normal. Proceeding on similar lines as in Example 3.2.1, there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞, such that the asymptotic distribution of an (X (n) − θ ) is normal. Hence, X (n) is not CAN for θ . (ii) If a random variable X follows a uniform U (0, θ ) distribution, then Pθ

E(X ) = θ/2 < ∞. Hence by the WLLN, X n → E(X ) = θ/2, ∀ θ . Hence, θ˜n = 2X n is consistent for θ . Further, V ar (X ) = θ 2 /12, which is positive and finite and hence by the CLT,  L   √  n X n − θ/2 → Z 1 ∼ N 0, θ 2 /12   √ L ⇔ n(2X n − θ ) → Z 2 ∼ N 0, θ 2 /3 , as n → ∞. Hence, θ˜n = 2X n is CAN for θ with approximate variance θ 2 /3n. (iii) Suppose X i ∼ U (0, iθ ), then it is easy to verify that Yi = X i /i ∼ U (0, θ ). Thus, {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables each having a uniform U (0, θ ). Hence, proceeding on similar lines as in (i) and (ii), the maximum likelihood estimator of θ is Y(n) and it is consistent but not CAN for θ . The moment estimator of θ is 2Y n and it is CAN for θ with approximate variance θ 2 /3n. 3.5.2 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (0, θ ) distribution. Obtain 100(1 − α)% asymptotic confidence interval for θ based on a sufficient statistic. Solution: In Example 2.2.1 we have shown that corresponding to a random sample X = {X 1 , X 2 , . . . , X n } from a uniform U (0, θ ) distribution, X (n)


is a sufficient statistic for the family of U(0, θ) distributions for θ > 0. In the solution of Exercise 3.5.1 it is shown that Yn = n(θ − X(n)) converges in distribution to an exponential distribution with location parameter 0 and scale parameter 1/θ. Hence, Qn = n(θ − X(n))/θ = n(1 − X(n)/θ) converges in distribution to the exponential distribution with location parameter 0 and scale parameter 1. Thus, Qn is a pivotal quantity. Given a confidence coefficient (1 − α), we can find a and b so that P[a < Qn < b] = 1 − α. Inverting the inequality a < Qn < b, we get

X(n)/(1 − a/n) < θ < X(n)/(1 − b/n).

Hence, the 100(1 − α)% large sample confidence interval for θ is given by

(X(n)/(1 − a/n), X(n)/(1 − b/n)).
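The large sample confidence interval of Exercise 3.5.2 can be checked by simulation. The sketch below is an illustration only (θ, n, α and the equal-tail choice of a and b are arbitrary decisions not prescribed by the solution): it uses the limiting exponential distribution of Qn = n(1 − X(n)/θ) to pick a and b and then estimates the empirical coverage.

set.seed(9)
theta <- 4; n <- 200; alpha <- 0.05
a <- qexp(alpha / 2); b <- qexp(1 - alpha / 2)   # equal-tail quantiles of exp(1)
covered <- replicate(10000, {
  mx <- max(runif(n, 0, theta))
  lower <- mx / (1 - a / n)
  upper <- mx / (1 - b / n)
  lower < theta && theta < upper
})
mean(covered)    # should be close to 1 - alpha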

3.5.3 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (θ, 1) distribution, 0 < θ < 1. Find the maximum likelihood estimator of θ and the moment estimator of θ . Examine whether these are CAN estimators for θ . Solution: If X ∼ U (θ, 1), then its probability density function f (x, θ ) and the distribution function F(x, θ ) are as follows:  1 if θ < x < 1 1−θ , f (x, θ ) = 0, otherwise. ⎧ ⎨ 0, if x < θ x−θ , if θ ≤ x < 1 F(x, θ ) = ⎩ 1−θ 1, if x ≥ 1. Corresponding to a random sample X from this distribution, the likelihood of θ is given by L n (θ |X ) =

∏_{i=1}^n 1/(1 − θ) = 1/(1 − θ)ⁿ, Xi ≥ θ ∀ i ⇔ X(1) ≥ θ.

Thus, the likelihood is an increasing function of θ and attains maximum at the maximum possible value of θ given data X . The maximum possible value of θ given data is X (1) and hence the maximum likelihood estimator θˆn of θ is given by X (1) . To verify the consistency of X (1) as an estimator of θ , we find the coverage probability using the distribution function of X (1) . The distribution function of X (1) is given by

FX(1)(x) = 1 − [1 − FX(x)]ⁿ = 0 if x < θ, 1 − ((1 − x)/(1 − θ))ⁿ if θ ≤ x < 1, and 1 if x ≥ 1.

For ε > 0, the coverage probability is given by

Pθ[|X(1) − θ| < ε] = Pθ[θ − ε < X(1) < θ + ε] = Pθ[θ < X(1) < θ + ε], as X(1) ≥ θ,
 = FX(1)(θ + ε) − FX(1)(θ) = 1 − ((1 − θ − ε)/(1 − θ))ⁿ − 0 → 1 ∀ ε > 0 and ∀ θ as n → ∞.

Hence, X(1) is consistent for θ. To derive the asymptotic distribution of X(1) with suitable norming, we define Yn = n(X(1) − θ) and derive its distribution function GYn(y) for y ∈ R. Since X(1) ≥ θ, Yn ≥ 0, hence for y < 0, GYn(y) = 0. Suppose y ≥ 0, then GYn(y) = Pθ[n(X(1) − θ) ≤ y] = Pθ[X(1) ≤ θ + y/n] = FX(1)(θ + y/n). Thus,

GYn(y) = 0 if y < 0, 1 − ((1 − θ − y/n)/(1 − θ))ⁿ if 0 ≤ y < n(1 − θ), and 1 if y ≥ n(1 − θ).

Hence,

GYn(y) → 0 if y < 0, and 1 − e^{−y/(1−θ)} if y ≥ 0.

Thus, the asymptotic distribution of Yn = n(X (1) − θ ) is exponential with location parameter 0 and scale parameter 1/(1 − θ ) and with norming factor n, the asymptotic distribution of X (1) is not normal. Proceeding on similar lines as in Example 3.2.1, it follows that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of an (X (1) − θ ) is normal, hence we claim that X (1) is not CAN for θ . Another approach to claim that X (1) is not CAN is as follows. Suppose Z n = an (X (1) − θ ). Then Fn (x) = P[Z n ≤ x] = 0 if x < 0. Hence, limn→∞ Fn (x) = 0 if x < 0. However, (x) > 0 if x < 0, where (·) denotes the distribution function of the standard normal distribution. Hence, there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of an (X (1) − θ ) is normal.


(ii) If a random variable X ∼ U (θ, 1) distribution, then E(X ) = (1 + θ )/2 < ∞. Hence, by the WLLN, Pθ X n → E(X ) = (1 + θ )/2, ∀ θ . Hence, θ˜n = 2X n − 1 is consistent for θ . Further, V ar (X ) = (1 − θ )2 /12, which is positive and finite and hence by the CLT,     √ 1+θ (1 − θ )2 L n Xn − → Z 1 ∼ N 0, 2 12   √ (1 − θ )2 L , ⇔ n(2X n − 1 − θ ) → Z 2 ∼ N 0, 3 as n → ∞. Hence, θ˜n = 2X n − 1 is CAN for θ with approximate variance (1 − θ )2 /3n. 3.5.4 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Bernoulli B(1, θ ) distribution, θ ∈ (0, 1). (i) Suppose an estimator θˆn is defined as follows: θˆn

= 0.01 if X̄n = 0, X̄n if 0 < X̄n < 1, and 0.98 if X̄n = 1.

Examine whether it is a CAN estimator of θ . (ii) Examine whether the maximum likelihood estimator of θ is a CAN estimator of θ if θ ∈ [a, b] ⊂ (0, 1). Solution: (i) Suppose the distribution of a random variable X is a Bernoulli B(1, θ ), then its probability mass function is given by, P[X = x] = θ x (1 − θ )1−x , x = 0, 1, θ ∈ (0, 1). Given a random sample {X 1 , X 2 , . . . , X n } from Bernoulli B(1, θ ) distribution, by the WLLN X n √ L is consistent for θ and by the CLT n(X n − θ ) → Z 1 ∼ N (0, θ (1 − θ )). Further for any  > 0, P[|X n − θˆn | < ] ≥ P[θˆn = X n ] = P[0 < X n < 1] = 1 − θ n − (1 − θ )n → 1, if θ ∈ (0, 1). Hence, X n and θˆn have the same limit in convergence in probability. Thus, consistency of X n implies consistency of θˆn . To find its asymptotic distribution, note that √ √ √ n(X n − θ ) − n(θˆn − θ ) = n(X n − θˆn ) and √ P[ n|X n − θˆn | < ] ≥ P[θˆn = X n ] = P[0 < X n < 1] = 1 − θ n − (1 − θ )n → 1, if θ ∈ (0, 1).


√ √ Thus, for θ ∈ (0, 1), the limit law of n(X n − θ ) and of n(θˆn − θ ) is the √ L same. But by the CLT n(X n − θ ) → Z 1 ∼ N (0, θ (1 − θ )) and hence √ L n(θˆn − θ ) → Z 1 ∼ N (0, θ (1 − θ )), ∀ θ ∈ (0, 1). (ii) Here the parameter space is [a, b]. It is shown in the solution of Exercise 2.8.12 that the maximum likelihood estimator θˆn of θ is given by θˆn

= a if X̄n < a, X̄n if a ≤ X̄n ≤ b, and b if X̄n > b.

It is √ shown to be consistent for θ . To √ √ find its asymptotic distribution, note that n(X n − θ ) − n(θˆn − θ ) = n(X n − θˆn ) and √ P[ n|X n − θˆn | < ] ≥ P[θˆn = X n ] = P[a ≤ X n ≤ b] → 1, if θ ∈ (a, b) , as shown√in the solution of Exercise 2.8.12. Thus, for θ ∈ (a, b), the limit √ law of n(X n − θ ) and of n(θˆn − θ ) is the same. But by the CLT √ √ L L n(X n − θ ) → Z 1 ∼ N (0, θ (1 − θ )) and hence n(θˆn − θ ) → Z 1 ∼ N (0, θ (1 − √ θ )), ∀ θ ∈ (a, b). We now investigate the asymptotic distrin(θˆn − θ ) at θ = a and at θ = b. Suppose θ = a, then bution of √ Pa [ n(θˆn − a) ≤ x] = 0 if x < 0 as θˆn ≥ a. If x = 0, then √ Pa [ n(θˆn − a) ≤ 0] = Pa [θˆn = a] = Pa [X n < a]  √ √ n(a − a) n(X n − a) < √ → (0), = Pa √ a(1 − a) a(1 − a) which is 1/2. Suppose x > 0 then, √ √ √ Pa [ n(θˆn − a) ≤ x] = Pa [ n(θˆn − a) ≤ 0] + Pa [0 < n(θˆn − a) ≤ x] √ = Pa [ n(θˆn − a) ≤ 0]   √ n(X n − a) x ≤√ + Pa 0 < √ a(1 − a) a(1 − a)   1 x → + √ − (0) 2 a(1 − a)   x . =  √ a(1 − a) Thus, √ Pa [ n(θˆn − a) ≤ x] →

0 if x < 0, 1/2 if x = 0, and Φ(x/√(a(1 − a))) if x > 0.


√ Suppose θ = b, then θˆn ≤ b ⇒ Pb [ n(θˆn − b) ≥ x] = 0 if x > 0. If x = 0, then √ Pb [ n(θˆn − b) ≥ 0] = Pb [θˆn = b] = Pb [X n > b]  √ √ n(X n − b) n(b − b) 1 > √ → . = Pb √ 2 b(1 − b) b(1 − b) Suppose x < 0 then, √ √ √ Pb [ n(θˆn − b) ≥ x] = Pb [x ≤ n(θˆn − b) < 0] + Pb [ n(θˆn − b) ≥ 0]   √ x n(X n − b) ≤ √ 0

if x = 0 ⎪ ⎩1 −  √ x , if x < 0. b(1−b) 0, 1 2,

which is equivalent to ⎧

⎪ √ x , if x < 0  ⎨ √ b(1−b) ˆ 1 Pb [ n(θn − b) ≤ x] → if x = 0 ⎪ 2, ⎩ 1, if x > 0. √ L Thus, n(θˆn − θ ) → Z 1 ∼ N (0, θ (1 − θ )), ∀ θ ∈ (a, b), but at θ = a and θ = b, which √ are the boundary points of the parameter space, asymptotic distribution of n(θˆn − θ ) is not normal and hence we conclude that θˆn is not CAN for θ ∈ [a, b]. It is noted that for θ ∈ (a, b), Pθ [a ≤ X n ≤ b] → 1 as n → ∞, thus for large n, θˆn = X n and will have approximate normal distribution. 3.5.5 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a normal N (θ, 1) distribution. Find the maximum likelihood estimator of θ and examine if it is CAN for θ if θ ∈ [0, ∞). Identify the limiting distribution at θ = 0. Solution: In Example 2.2.3, we have obtained the maximum likelihood estimator θˆn of θ as

θ̂n = X̄n if X̄n ≥ 0, and 0 if X̄n < 0,

and it is shown to be consistent.√ To examine whether it is CAN we proceed as follows. Since X ∼ N (θ, 1), √ √ n(X n − θ ) ∼ N (0, 1). Further, √ n(X n − θ ) − n(θˆn − θ ) = n(X n − θˆn ). Observe that for θ > 0 and for  > 0, √ Pθ [| n(X n − θˆn )| < ] ≥ Pθ [X n = θˆn ] = Pθ [X n ≥ 0] √ = 1 − (− nθ ) → 1 if θ > 0. √ √ √ Pθ Thus, for θ > 0, n(X n − θ ) − n(θˆn − θ ) → 0, hence n(X n − θ ) and √ √ L n(θˆn − θ ) have the same limit law. But n(X n − θ ) → Z ∼ N (0, 1) and √ L hence n(θˆn − θ ) → Z ∼ N (0, 1), for θ > 0. Suppose now θ = 0. √ If x < 0, P0 [ n(θˆn − 0) ≤ x] = 0 as θˆn ≥ 0. √ √ For x = 0, P0 [ n θˆn ≤ 0] = P0 [ n θˆn = 0] = P0 [X n < 0] = (0) = 1/2. √ √ √ For x > 0, P0 [ n θˆn ≤ x] = P0 [ n θˆn ≤ 0] + P0 [0 < n θˆn ≤ x] √ = 1/2 + P0 [0 < n(X n ) ≤ x] = 1/2 + (x) − (0) = (x). Thus for θ = 0,

P0[√n θ̂n ≤ x] = 0 if x < 0, 1/2 if x = 0, and Φ(x) if x > 0,

√ which shows that at θ = 0, the asymptotic distribution of n θˆn is not normal and 0 is a point of discontinuity. Suppose U1 is a random variable with a distribution degenerate at 0. Then its distribution function is given by  0, if x < 0 FU1 (x) = 1, if x ≥ 0 Suppose a random variable U2 is defined as U2 = |U | where U ∼ N (0, 1). Then P[U2 ≤ x] = 0 if x < 0. Suppose x ≥ 0, then P[U2 ≤ x] = P[|U | ≤ x] = P[−x ≤ U ≤ x] = (x) − (−x) = 2(x) − 1. Thus, the distribution function of U2 is given by  0, if x < 0 FU2 (x) = 2(x) − 1, if x ≥ 0 √ It is easy to verify that P0 [ n(θˆn − 0) ≤ x] = (1/2)FU1 (x) + (1/2)FU2 (x).


3.5.6 Suppose X ≡ {X 1 , X 2 , . . ., X n } is a random sample from a distribution of a random variable X with probability density function f (x, θ )=kθ k /x k+1 , x ≥ θ, θ > 0 & k ≥ 3 is a fixed positive integer. (i) Find the maximum likelihood estimator of θ and examine whether it is CAN for θ . (ii) Find the moment estimator of θ and examine whether it is CAN for θ . (iii) Find 95% asymptotic confidence interval for θ based on the maximum likelihood estimator. Solution: Corresponding to a random sample X from this distribution, the likelihood of θ is given by

Ln(θ|X) = ∏_{i=1}^n kθᵏ/Xi^{k+1} = kⁿθ^{kn} ∏_{i=1}^n Xi^{−(k+1)}, Xi ≥ θ ∀ i ⇔ X(1) ≥ θ.

Thus, the likelihood is an increasing function of θ and attains the maximum at the maximum possible value of θ given data X . The maximum possible value of θ given data is X (1) and hence the maximum likelihood estimator θˆn of θ is given by X (1) . To verify the consistency of X (1) as an estimator of θ , we find the coverage probability using the distribution function of X (1) . The distribution function FX (x) of X is given by  0, if x < θ FX (x) = 1 − θ k /x k , if x ≥ θ. Hence, the distribution function of X (1) is given by  0, if x < θ n FX (1) (x) = 1 − [1 − FX (x)] = 1 − θ kn /x kn , if x ≥ θ. For  > 0, the coverage probability is given by Pθ [|X (1) − θ | < ] = Pθ [θ −  < X (1) < θ + ] = Pθ [θ < X (1) < θ + ] as X (1) ≥ θ = FX (1) (θ + ) − FX (1) (θ ) θ kn −0 (θ + )kn → 1 ∀  > 0 and ∀ θ as n → ∞. = 1−

Hence, X (1) is consistent for θ . To derive the asymptotic distribution of X (1) with suitable norming, we define Yn = n(X (1) − θ ) and derive its distribution function G Yn (y) for y ∈ R. Since X (1) ≥ θ, Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then G Yn (y) = Pθ [n(X (1) − θ ) ≤ y] = Pθ [X (1) ≤ θ + y/n] = FX (1) (θ + y/n).

Hence,

GYn(y) = 0 if θ + y/n < θ ⇔ y < 0, and 1 − (θ/(θ + y/n))^{kn} if θ + y/n ≥ θ ⇔ y ≥ 0.

As n → ∞,

GYn(y) → 0 if y ≤ 0, and 1 − exp(−ky/θ) if y ≥ 0.

the WLLN, X n → E(X ) = kθ/(k − 1), ∀ θ . Hence, (k − 1)X n /k is consistent for θ . Further, E(X 2 ) = kθ 2 /(k − 2) and V ar (X ) = kθ 2 /(k − 2)(k − 1)2 , which is positive and finite for k ≥ 3 and hence by the CLT,     √ kθ kθ 2 L as n → ∞. → Z 1 ∼ N 0, n Xn − k−1 (k − 2)(k − 1)2 Using delta method, (k − 1)X n /k is CAN for θ with approximate variance θ 2 /nk(k − 2). (iii) The maximum likelihood estimator θˆn of θ is X (1) , it is consistent and L

Y_n = n(X_(1) − θ) → Y ∼ exp(k/θ) in law. To find a pivotal quantity based on the maximum likelihood estimator, note that if Y ∼ exp(k/θ), then its moment generating function is (1 − θt/k)^{-1}. Thus, if a random variable U is defined as U = kY/θ, then its moment generating function is (1 − t)^{-1}, implying that U has the exponential distribution with location parameter 0 and scale parameter 1. Suppose Q_n is defined as Q_n = (k/X_(1)) n(X_(1) − θ); then by Slutsky's theorem,

Q_n = (k/X_(1)) n(X_(1) − θ) = (θ/X_(1)) (k/θ) Y_n → 1 × U = U ∼ exp(1) in law.

Thus, for large n, Q_n is a pivotal quantity and we can find uncountably many pairs (a, b) such that ∫_a^b e^{−y} dy = 1 − α.
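The limiting behaviour derived above is easy to check numerically. The following R sketch (not part of the original solution; the values of k, θ and the sample size are arbitrary illustrative choices) simulates the density kθ^k/x^{k+1} and compares n(X_(1) − θ) with the exponential distribution with rate k/θ.

# illustrative check: n*(X_(1) - theta) is approximately exponential with rate k/theta
set.seed(1)
k <- 3; theta <- 2; n <- 500; nsim <- 5000
yn <- replicate(nsim, {
  u <- runif(n)
  x <- theta / u^(1 / k)        # inverse CDF sampling: F(x) = 1 - (theta/x)^k
  n * (min(x) - theta)
})
c(simulated.mean = mean(yn), limiting.mean = theta / k)
ks.test(yn, "pexp", rate = k / theta)

The histogram of yn shows the skewed exponential shape rather than a normal one, in agreement with the conclusion that X_(1) is consistent but not CAN.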
3.5.7 Suppose {X_1, X_2, . . . , X_n} is a random sample from an exponential distribution with mean θ. (i) Show that X̄_n is CAN for E(X − t|X > t), t > 0. (ii) Show that √n(c(p)X_([np]+1) − θ) converges in law to N(0, σ²(p)). Find the constant c(p) and σ²(p). Solution: (i) If X has an exponential distribution with mean θ, then it is known that the distribution of the residual life random variable X − t|X > t is also exponential with mean θ. If {X_1, X_2, . . . , X_n} is a random sample from

an exponential distribution with mean θ < ∞, then by the WLLN X̄_n → θ in probability. Further, Var(X) = θ², which is positive and finite, and hence by the CLT, √n(X̄_n − θ) → Z_1 ∼ N(0, θ²) in law. Thus, X̄_n is CAN for θ with approximate variance θ²/n, and it is also CAN for E(X − t|X > t) with approximate variance θ²/n. (ii) For an exponential distribution with mean θ, the p-th population quantile is given by a_p(θ) = −θ log(1 − p). Hence, √n(X_([np]+1) − (−θ log(1 − p))) → Z_1 ∼ N(0, v(θ, p)) in law, where v(θ, p) = θ²p/(1 − p). Thus, √n(c(p)X_([np]+1) − θ) → Z_2 ∼ N(0, σ²(p)) in law, where σ²(p) = θ²p/((1 − p)(log(1 − p))²) and c(p) = −1/log(1 − p).
3.5.8 Suppose {X_1, X_2, . . . , X_n} are independent and identically distributed random variables each having a Poisson distribution with mean θ, θ > 0. Find a CAN estimator of P[X_1 = 1]. Is it necessary to impose any condition on the parameter space? Under this condition, using the CAN estimator of P[X_1 = 1], obtain a large sample confidence interval for P[X_1 = 1]. Solution: In Example 2.2.5, it is shown that if {X_1, X_2, . . . , X_n} is a random sample of size n from the Poisson Poi(θ) distribution with θ > 0, then the estimator T_n of θ defined as T_n = X̄_n if X̄_n > 0 and T_n = 0.05 if X̄_n = 0 is consistent for θ. We now examine if its asymptotic distribution with suitable norming is normal. By the CLT it immediately follows that √n(X̄_n − θ) → Z_1 ∼ N(0, θ) in law. Now, √n(T_n − θ) − √n(X̄_n − θ) = √n(T_n − X̄_n), and if it converges to 0 in probability, then normality of √n(X̄_n − θ) implies normality of √n(T_n − θ). Observe that, for ε > 0,


P[√n |T_n − X̄_n| < ε] ≥ P[T_n = X̄_n] = P[X̄_n > 0] = 1 − exp(−nθ) → 1, ∀ θ > 0. Thus, √n(T_n − X̄_n) → 0 in probability, ∀ θ > 0, and hence

√n(X̄_n − θ) → Z_1 ∼ N(0, θ) in law   ⇒   √n(T_n − θ) → Z_1 ∼ N(0, θ) in law, ∀ θ > 0,

which proves that T_n is CAN for θ with approximate variance θ/n. Now P[X_1 = 1] = θe^{−θ} = g(θ), say. It is clear that g is a differentiable function and g′(θ) = (1 − θ)e^{−θ} ≠ 0 ∀ θ ≠ 1. Hence, ∀ θ ≠ 1, by the delta method g(T_n) = T_n e^{−T_n} is CAN for g(θ) = θe^{−θ} with approximate variance e^{−2θ}θ(1 − θ)²/n, that is,

√n (T_n e^{−T_n} − P[X_1 = 1]) → Z_1 ∼ N(0, e^{−2θ}θ(1 − θ)²) in law.

Thus, by Slutsky's theorem,

Q_n = √( n / (e^{−2T_n} T_n (1 − T_n)²) ) (T_n e^{−T_n} − P[X_1 = 1]) → Z ∼ N(0, 1) in law.

Hence, Q_n is a pivotal quantity and is useful to find a large sample confidence interval for P[X_1 = 1]. Thus, given a confidence coefficient (1 − α), we can find the quantile a_{1−α/2} of the standard normal distribution so that P[−a_{1−α/2} < Q_n < a_{1−α/2}] = 1 − α. Inverting the inequality −a_{1−α/2} < Q_n < a_{1−α/2}, we get

T_n e^{−T_n} − a_{1−α/2} e^{−T_n} |1 − T_n| √(T_n/n) < P[X_1 = 1] < T_n e^{−T_n} + a_{1−α/2} e^{−T_n} |1 − T_n| √(T_n/n).

Hence, for all θ ≠ 1, the 100(1 − α)% large sample confidence interval for P[X_1 = 1] is given by

( T_n e^{−T_n} − a_{1−α/2} e^{−T_n} |1 − T_n| √(T_n/n),  T_n e^{−T_n} + a_{1−α/2} e^{−T_n} |1 − T_n| √(T_n/n) ).
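As a quick numerical illustration (not part of the original solution; θ, n and the confidence coefficient below are arbitrary choices), the interval can be computed in R as follows.

# illustrative 95% large sample confidence interval for P[X = 1] = theta*exp(-theta)
set.seed(2)
n <- 200; theta <- 1.5
x <- rpois(n, theta)
Tn <- if (mean(x) > 0) mean(x) else 0.05
est <- Tn * exp(-Tn)                           # CAN estimator of P[X = 1]
se  <- exp(-Tn) * abs(1 - Tn) * sqrt(Tn / n)   # estimated standard error
c(lower = est - qnorm(0.975) * se, upper = est + qnorm(0.975) * se)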

3.5.9 Suppose {X 1 , X 2 , . . . , X n } is a random sample from f (x, θ ) = θ x θ −1 , 0 < x < 1, θ > 0. Find a CAN estimator of e−θ based on the sample mean and also based on a sufficient statistic. Compare the two estimators. Solution: In Example 3.2.2, it is shown that if {X 1 , X 2 , . . . , X n } is a random sample from a distribution with probability density function f (x, θ ) = θ x θ −1 , 0 < x < 1, θ > 0, then a CAN estimator of θ based on the sample mean is given by θˆn = X n /(1 − X n ) with approximate variance θ (1 + θ )2 /n(θ + 2). A CAN estimator for θ based on a sufficient statistic is


given by θ̃_n = n/S_n = n/(−∑_{i=1}^{n} log X_i) with approximate variance θ²/n. We now use the delta method to find a CAN estimator of e^{−θ} based on the sample mean and also based on the sufficient statistic. Suppose g(θ) = e^{−θ}; then g is a differentiable function with g′(θ) = −e^{−θ} ≠ 0 for all θ > 0. Hence, by the delta method, g(θ̂_n) = exp{−X̄_n/(1 − X̄_n)} is CAN for g(θ) = e^{−θ} with approximate variance e^{−2θ}θ(1 + θ)²/(n(θ + 2)). Similarly, the CAN estimator of e^{−θ} based on the sufficient statistic is given by g(θ̃_n) = exp{−n/S_n} with approximate variance e^{−2θ}θ²/n. As discussed in Example 3.2.2, it can be shown that g(θ̃_n) is a better CAN estimator than g(θ̂_n) as an estimator of e^{−θ}.
3.5.10 Suppose {X_1, X_2, . . . , X_n} is a random sample from a Bernoulli B(1, θ) distribution. Find a CAN estimator of θ(1 − θ) when θ ∈ (0, 1) − {1/2}. What is the limiting distribution of the estimator when θ = 1/2 and when the norming factor is √n and n? Solution: Suppose X ∼ B(1, θ). Then E(X) = θ and Var(X) = θ(1 − θ), which is positive and finite for all θ ∈ (0, 1). Then, by the WLLN and the CLT, it follows that X̄_n is CAN for θ with approximate variance θ(1 − θ)/n. Suppose g(θ) = θ(1 − θ); then g is a differentiable function and g′(θ) = (1 − 2θ) ≠ 0 for all θ ∈ (0, 1) − {1/2}. Hence, by the delta method, g(X̄_n) = X̄_n(1 − X̄_n) is CAN for g(θ) = θ(1 − θ) with approximate variance θ(1 − θ)(1 − 2θ)²/n. Suppose θ = 1/2. Then √n(X̄_n − 1/2) → Z_1 ∼ N(0, 1/4) in law and hence it is bounded in probability. Hence,

√n (X̄_n(1 − X̄_n) − 1/4) = √n (X̄_n − X̄_n² − 1/4) = −√n (X̄_n − 1/2)² = (−1/√n)(√n(X̄_n − 1/2))² → 0 in probability.

Thus, at θ = 1/2, √n(X̄_n(1 − X̄_n) − 1/4) does not have a normal distribution as its limiting distribution. Proceeding on similar lines as in Example 3.2.1, we claim that there exists no sequence {a_n, n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of a_n(X̄_n(1 − X̄_n) − 1/4) is normal. If instead of √n we take the norming factor as n, then

n (X̄_n(1 − X̄_n) − 1/4) = n (X̄_n − X̄_n² − 1/4) = −n (X̄_n − 1/2)² = −(√n(X̄_n − 1/2))² → −U in law, where U = Z_1² with Z_1 ∼ N(0, 1/4), that is, U ∼ (1/4)χ²_1.

Thus, with norming factor n, the limiting distribution is non-degenerate but not normal.
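A short R simulation (a sketch, not part of the original solution; the sample size and replication count are arbitrary) makes the behaviour at θ = 1/2 visible: with norming n the statistic settles on a negative, skewed limit rather than a normal one.

# illustrative simulation at theta = 1/2
set.seed(3)
n <- 1000; nsim <- 5000
w <- replicate(nsim, {
  xbar <- mean(rbinom(n, 1, 0.5))
  n * (xbar * (1 - xbar) - 1/4)
})
summary(w)      # all values are <= 0: the limit is supported on the negative half line
hist(w, breaks = 50)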


3.5.11 Suppose {X_1, X_2, . . . , X_n} is a random sample from a geometric distribution with probability mass function p(x, θ) = θ(1 − θ)^x, x = 0, 1, . . . . However, X_1, X_2, . . . , X_n are not directly observable, but one can note whether X_i ≥ 2 or not. (i) Find a CAN estimator for θ based on the observed data. (ii) Find a CAN estimator for θ if X_i ≥ 2 is replaced by X_i > 2. Solution: (i) Suppose X has a geometric distribution with probability mass function p(x, θ) = θ(1 − θ)^x, x = 0, 1, . . . . Then

h(θ) = P_θ[X ≥ 2] = θ ∑_{x≥2} (1 − θ)^x = θ(1 − θ)² ∑_{x≥2} (1 − θ)^{x−2} = θ(1 − θ)² θ^{−1} = (1 − θ)².

We now define a random variable Y_i, i = 1, 2, . . . , n, as Y_i = 1 if X_i ≥ 2 and Y_i = 0 if X_i < 2. Since {X_1, X_2, . . . , X_n} are independent and identically distributed random variables, being Borel functions, {Y_1, Y_2, . . . , Y_n} are also independent and identically distributed random variables, each having the Bernoulli B(1, h(θ)) distribution. By the WLLN and the CLT, it follows that Ȳ_n is CAN for h(θ) = φ, say, with approximate variance h(θ)(1 − h(θ))/n. To find a CAN estimator of θ, we find a transformation g : R → R such that g(φ) = θ. Now (1 − θ)² = φ ⇒ θ = 1 − φ^{1/2}, so g(φ) = 1 − φ^{1/2}. It is a differentiable function of φ with g′(φ) = −1/(2√φ) ≠ 0. Hence, by the delta method, g(Ȳ_n) = 1 − Ȳ_n^{1/2} is CAN for g(φ) = θ with approximate variance (h(θ)(1 − h(θ))/n) × (1/(4φ)) = (1 − (1 − θ)²)/(4n).
(ii) In this setup we define a random variable Y_i, i = 1, 2, . . . , n, as Y_i = 1 if X_i > 2 and Y_i = 0 if X_i ≤ 2. Observe that

h(θ) = P_θ[X > 2] = θ ∑_{x≥3} (1 − θ)^x = θ(1 − θ)³ ∑_{x≥3} (1 − θ)^{x−3} = θ(1 − θ)³ θ^{−1} = (1 − θ)³.

Thus, {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables, each having Bernoulli B(1, h(θ )) distribution. By the WLLN and


the CLT, Ȳ_n is CAN for h(θ) = φ, say, with approximate variance h(θ)(1 − h(θ))/n. Proceeding on similar lines as in (i), we get that 1 − Ȳ_n^{1/3} is CAN for θ with approximate variance (3θ − 3θ² + θ³)/(9n(1 − θ)).
3.5.12 Suppose {X_1, X_2, . . . , X_n} is a random sample from a normal N(θ, 1) distribution, θ ∈ R. However, X_1, X_2, . . . , X_n are not directly observable, but one can note whether X_i > 2 or not. Find a CAN estimator for θ based on the observed data. Solution: Suppose X ∼ N(θ, 1). Then h(θ) = P_θ[X > 2] = 1 − Φ(2 − θ), where Φ denotes the standard normal distribution function. As in the previous example, we define a random variable Y_i, i = 1, 2, . . . , n, as Y_i = 1 if X_i > 2 and Y_i = 0 if X_i ≤ 2. Proceeding on similar lines as in the previous example, it can be shown that 2 − Φ^{−1}(1 − Ȳ_n) is CAN for θ with approximate variance (1/n)(1 − Φ(2 − θ))Φ(2 − θ)/φ²(2 − θ), where φ denotes the standard normal density.
3.5.13 Suppose {X_1, X_2, . . . , X_n} is a random sample from a distribution of X with probability density function f(x, α, θ) given by f(x, α, θ) = 2x/(αθ) if 0 < x ≤ θ, and f(x, α, θ) = 2(α − x)/(α(α − θ)) if θ < x ≤ α.

Find a CAN estimator of θ when α is known. Solution: From the probability density function f(x, α, θ) of X we have E(X) = (α + θ)/3 and Var(X) = (α² − αθ + θ²)/18 < ∞. Hence by the WLLN, X̄_n → E(X) = (α + θ)/3 in probability, and by the CLT, √n(X̄_n − E(X)) → Z_1 ∼ N(0, Var(X)) in law. Hence, θ̂_n = 3X̄_n − α is consistent for θ and √n(θ̂_n − θ) → Z_2 ∼ N(0, (α² − αθ + θ²)/2) in law. Thus, θ̂_n is CAN for θ with approximate variance (α² − αθ + θ²)/2n.
3.5.14 Suppose {X_1, X_2, . . . , X_n} is a random sample from a negative binomial distribution with probability mass function given by

P_θ[X = x] = (x + k − 1 choose x) p^k (1 − p)^x,  x = 0, 1, . . . ,  0 < p < 1, k > 0.

Obtain a CAN estimator for p assuming k to be known.


Solution: If X follows a negative binomial distribution, then E(X) = k(1 − p)/p and Var(X) = k(1 − p)/p² < ∞. Hence, by the WLLN and by the CLT,

X̄_n → k(1 − p)/p in probability   and   √n (X̄_n − k(1 − p)/p) → Z_1 ∼ N(0, k(1 − p)/p²) in law.

Thus, X̄_n is CAN for k(1 − p)/p = φ, say, with approximate variance k(1 − p)/(np²). To get a CAN estimator for p, we find a transformation g such that g(φ) = p. Suppose g(y) = k/(k + y), y > 0; then g is a differentiable function with g′(y) = −k/(k + y)² ≠ 0 ∀ y > 0. Hence, by the delta method, g(X̄_n) = k/(k + X̄_n) is CAN for g(φ) = p with approximate variance (k(1 − p)/(np²)) × (p⁴/k²) = p²(1 − p)/(nk).
3.5.15 Suppose {X_1, X_2, . . . , X_n} is a random sample from an exponential distribution with mean θ. Suppose T_{1n} = ∑_{i=1}^{n} X_i/n and T_{2n} = ∑_{i=1}^{n} X_i/(n + 1). (i) Examine whether T_{1n} and T_{2n} are consistent for θ. (ii) Prove that √n(T_{2n} − T_{1n}) → 0 in probability and hence both T_{1n} and T_{2n} are CAN for θ with the same approximate variance, but MSE_θ(T_{2n}) < MSE_θ(T_{1n}) ∀ n ≥ 1. (iii) Find a CAN estimator for P[X_1 > t], where t is a positive real number. Solution: (i) Suppose X has an exponential distribution with mean θ. Then Var(X) = θ², which is positive and finite. Hence, by the WLLN and by the CLT, ∀ θ ∈ Θ,

X̄_n → θ in probability   and   √n(X̄_n − θ) = √n(T_{1n} − θ) → Z_1 ∼ N(0, θ²) in law.

Thus, T_{1n} = X̄_n is CAN for θ with approximate variance θ²/n. Now,

T_{2n} = (1/(n + 1)) ∑_{i=1}^{n} X_i = (n/(n + 1)) T_{1n} → θ in probability.

(ii) To examine whether T_{2n} is CAN, observe that

√n(T_{1n} − θ) − √n(T_{2n} − θ) = √n(T_{1n} − T_{2n}) = √n (T_{1n} − (n/(n + 1))T_{1n}) = (√n/(n + 1)) T_{1n} = (1/√(n + 1)) √(n/(n + 1)) T_{1n} → 0 in probability,

since T_{1n} → θ in probability and hence is bounded in probability.


Thus, √n(T_{1n} − θ) and √n(T_{2n} − θ) have the same asymptotic distribution, N(0, θ²). Hence, both T_{1n} and T_{2n} are CAN for θ with the same approximate variance θ²/n. It is to be noted that T_{1n} is unbiased for θ and hence MSE_θ(T_{1n}) = Var(T_{1n}) = θ²/n. To find MSE_θ(T_{2n}), observe that

T_{2n} = (n/(n + 1)) T_{1n}  ⇒  E(T_{2n}) = (n/(n + 1)) θ   and   Var(T_{2n}) = (n/(n + 1))² (θ²/n) = nθ²/(n + 1)².

Thus,

MSE_θ(T_{2n}) = nθ²/(n + 1)² + θ²/(n + 1)² = θ²/(n + 1) < θ²/n = MSE_θ(T_{1n}) ∀ n ≥ 1.
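The exact comparison in (ii) is easy to reproduce by simulation. The following R sketch (not part of the original solution; θ, n and the number of replications are arbitrary choices) estimates both mean squared errors.

# illustrative Monte Carlo comparison of MSE(T1n) and MSE(T2n)
set.seed(4)
theta <- 2; n <- 20; nsim <- 1e5
x <- matrix(rexp(n * nsim, rate = 1 / theta), nrow = nsim)
T1n <- rowMeans(x)
T2n <- rowSums(x) / (n + 1)
c(mse.T1n = mean((T1n - theta)^2), theory = theta^2 / n)
c(mse.T2n = mean((T2n - theta)^2), theory = theta^2 / (n + 1))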

(iii) If X follows an exponential distribution with mean θ, then P[X > t] = exp(−t/θ) = g(θ), say. It is clear that g is a differentiable function and hence continuous. Thus, a consistent estimator for g(θ) is given by g(T_{1n}). To examine if it is CAN, we use the delta method. Note that g′(θ) = exp(−t/θ)(t/θ²) ≠ 0 ∀ θ > 0. Hence, g(T_{1n}) = exp(−t/T_{1n}) is CAN for g(θ) = exp(−t/θ) with approximate variance (exp(−t/θ)(t/θ²))²(θ²/n) = t² exp(−2t/θ)/(nθ²).
3.5.16 Suppose {X_1, X_2, . . . , X_n} is a random sample from a Poisson Poi(θ) distribution. Examine whether the sample variance is CAN for θ. Solution: Suppose X ∼ Poi(θ); then E(X) = Var(X) = θ < ∞. From Theorem 2.5.3, the sample variance m_2 = S_n² = (1/n)∑_{i=1}^{n}(X_i − X̄_n)² is consistent for Var(X) = θ. To examine whether it is asymptotically normal, observe that

S_n² = (1/n)∑_{i=1}^{n}(X_i − X̄_n)² = (1/n)∑_{i=1}^{n}((X_i − θ) − (X̄_n − θ))² = (1/n)∑_{i=1}^{n}(Y_i − Ȳ_n)² = (1/n)∑_{i=1}^{n}Y_i² − Ȳ_n²,

where Y_i = X_i − θ. Thus, E(Y_i) = 0 and E(Y_i²) = Var(Y_i) = θ. Hence, by the WLLN, Ȳ_n → E(Y_i) = 0 in probability. By the CLT, √n Ȳ_n → Z_1 ∼ N(0, θ) in law. Hence, √n(Ȳ_n)(Ȳ_n) → 0 in probability. Further, by the CLT applied to {Y_1², Y_2², . . . , Y_n²}, √n((1/n)∑_{i=1}^{n}Y_i² − θ) → Z_2 ∼ N(0, v(θ)) in law, where v(θ) = Var(Y_i²) = E(X_i − θ)⁴ − (E(X_i − θ)²)² = μ_4(θ) − θ². As a consequence,

√n(S_n² − θ) = √n((1/n)∑_{i=1}^{n}Y_i² − Ȳ_n² − θ) = √n((1/n)∑_{i=1}^{n}Y_i² − θ) − √n(Ȳ_n)(Ȳ_n) → Z_2 in law,

where Z_2 ∼ N(0, v(θ)). Thus, the sample variance S_n² is CAN for θ with approximate variance v(θ)/n. To find v(θ) we need the fourth central moment of the Poi(θ) distribution. We obtain it from the cumulant generating function C_X(t) of X. The moment generating function of X is M_X(t) = exp(θ{e^t − 1}) and hence the cumulant generating function of X is C_X(t) = θ(e^t − 1) = θ(t + t²/2! + t³/3! + t⁴/4! + · · · ). Using the relation between cumulants k_i and central moments μ_i, we have k_i = μ_i = θ, i = 2, 3, and μ_4 = k_4 + 3k_2² = θ + 3θ². Thus, v(θ) = μ_4(θ) − θ² = θ + 2θ².
3.5.17 Show that the empirical distribution function F_n(a) is CAN for F(a), where a is a fixed real number. Hence obtain a large sample confidence interval for F(a). Solution: Suppose {X_1, X_2, . . . , X_n} is a random sample from the distribution of X with distribution function F(x), x ∈ R. For each fixed a ∈ R, the empirical distribution function F_n(a) corresponding to the given random sample is defined as

F_n(a) = (number of X_i ≤ a)/n = (1/n)∑_{i=1}^{n} Y_i = Ȳ_n,

where for i = 1, 2, . . . , n, Y_i = 1 if X_i ≤ a and Y_i = 0 if X_i > a.

In Example 2.2.9, it is shown that F_n(a) is consistent for F(a). Now {X_1, X_2, . . . , X_n} are independent and identically distributed random variables; hence, being Borel functions, {Y_1, Y_2, . . . , Y_n} are also independent and identically distributed random variables with E(Y_i) = P_θ[X_i ≤ a] = F(a) and Var(Y_i) = v = F(a)(1 − F(a)) < ∞. Hence, by the CLT,

√n(Ȳ_n − F(a)) → Z_1 ∼ N(0, v) in law   ⇔   √n(F_n(a) − F(a)) → Z_1 ∼ N(0, v) in law.

Thus, the empirical distribution function F_n(a) is CAN for F(a) with approximate variance v/n. To obtain the asymptotic confidence interval for F(a), note that by Slutsky's theorem,

Q_n = √( n / (F_n(a)(1 − F_n(a))) ) (F_n(a) − F(a)) → Z ∼ N(0, 1) in law.


Hence, Q_n is a pivotal quantity and is useful to find the asymptotic confidence interval for F(a). Thus, given the confidence coefficient (1 − α), we find the (1 − α/2)-th quantile a_{1−α/2} of the standard normal distribution and invert the inequality −a_{1−α/2} < Q_n < a_{1−α/2}. Hence, the 100(1 − α)% asymptotic confidence interval for F(a) is given by

( F_n(a) − a_{1−α/2} √(F_n(a)(1 − F_n(a))/n),  F_n(a) + a_{1−α/2} √(F_n(a)(1 − F_n(a))/n) ).
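In R the interval above can be computed directly from the data; the sketch below (not part of the original solution; the N(0, 1) sample, the point a and the confidence coefficient are arbitrary choices) uses the empirical distribution function.

# illustrative large sample confidence interval for F(a)
set.seed(5)
n <- 150; a <- 1; alpha <- 0.05
x <- rnorm(n)
Fn.a <- mean(x <= a)                     # empirical distribution function evaluated at a
half <- qnorm(1 - alpha / 2) * sqrt(Fn.a * (1 - Fn.a) / n)
c(lower = Fn.a - half, upper = Fn.a + half)   # the true value here is pnorm(1)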

3.5.18 Suppose {X_1, X_2, . . . , X_n} is a random sample from a normal N(θ, aθ²) distribution, θ > 0, and a is a known positive real number. Find the maximum likelihood estimator of θ. Examine whether it is CAN for θ. Solution: In Example 4.2.3, we have obtained the maximum likelihood estimator of θ and shown it to be CAN for θ when X ∼ N(θ, θ²). Here X ∼ N(θ, aθ²), hence we proceed on the same lines to find the maximum likelihood estimator of θ. It is given by θ̂_n = (−m_1′ + √(m_1′² + 4a m_2′))/(2a). Now,

θ̂_n = (−m_1′ + √(m_1′² + 4a m_2′))/(2a) → (−θ + √(θ² + 4a(aθ² + θ²)))/(2a) = (−θ + θ(1 + 2a))/(2a) = θ in probability.

Hence, it is consistent for θ. To examine whether θ̂_n is CAN, we use Theorem 3.3.2 and an appropriate transformation. From Theorem 3.3.2, T_n = (m_1′, m_2′)′ is CAN for φ = (μ_1′, μ_2′)′ = (θ, θ²(1 + a))′ with approximate dispersion matrix Σ/n, where Σ is given by

Σ = ( aθ²  2aθ³ ; 2aθ³  2aθ⁴(a + 2) ).

We further define a transformation g : R² → R such that g(x_1, x_2) = (−x_1 + √(x_1² + 4ax_2))/(2a). Then, with the routine procedure, we get that θ̂_n is CAN for θ with approximate variance aθ²/(n(1 + 2a)) > 0 ∀ θ > 0. It is to be noted that

n I(θ) = E_θ(−∂²/∂θ² log L_n(θ|X)) = n(−1/θ² + 3θ²(a + 1)/(aθ⁴) − 2θ/(aθ³)) = (1 + 2a)n/(aθ²).

Thus, θ̂_n is CAN for θ with approximate variance aθ²/(n(1 + 2a)) = 1/(n I(θ)).
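The claimed approximate variance can be checked by a small simulation; the R sketch below (not part of the original solution; a, θ, n and the number of replications are arbitrary choices) computes the estimator from the first two raw sample moments.

# illustrative check of the MLE for N(theta, a*theta^2) and its approximate variance
set.seed(6)
a <- 0.5; theta <- 3; n <- 400; nsim <- 2000
th.hat <- replicate(nsim, {
  x  <- rnorm(n, mean = theta, sd = sqrt(a) * theta)
  m1 <- mean(x); m2 <- mean(x^2)          # first two raw sample moments
  (-m1 + sqrt(m1^2 + 4 * a * m2)) / (2 * a)
})
c(simulated.var = var(th.hat), approx.var = a * theta^2 / (n * (1 + 2 * a)))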


3.5.19 Suppose {X 1 , X 2 , . . . , X n } is a random sample  from a uniform U (−θ, θ ) n |X i |. Are the sample distribution. Find a CAN estimator of θ based on i=1 mean and the sample median CAN for θ ? Justify your answer. Find a consistent estimator for θ based on X (1) and find a consistent estimator for θ based on X (n) . Examine if these are CAN for θ . Solution: Suppose X ∼ U (−θ, θ ). Then its distribution function FX (x, θ ) is given by ⎧ ⎨ 0, if x ≤ −θ FX (x, θ ) = x+θ , if −θ ≤ x < θ ⎩ 2θ 1, if x ≥ θ. Suppose Y = |X |, then Y ≥ 0 hence P[Y ≤ y] = 0 if y < 0. Further, Y ≤ θ . Hence, P[Y ≤ y] = 1 if y > θ . Suppose 0 ≤ y < θ , then P[Y ≤ y] = P[|X | ≤ y] = FX (y, θ ) − FX (−y, θ ) =

y+θ −y + θ y − = . 2θ 2θ θ

Thus, distribution function FY (y, θ ) of Y = |X | is given by ⎧ ⎨ 0, if y ≤ 0 FY (y, θ ) = θy , if 0 ≤ y < θ ⎩ 1, if y ≥ θ. Thus, Y = |X | ∼ U (0, θ ). Hence, E(|X |) = θ/2 and V ar (|X |) = θ 2 /12 and by the WLLN and by the CLT,     n √ 1 θ θ2 Pθ θ L . Tn = |X i | → → Z 1 ∼ N 0, & n Tn − n 2 2 12 i=1

As a consequence 2Tn is CAN for θ with approximate variance θ 2 /3n. For U (−θ, θ ) distribution, both the population mean and median are 0. Hence, the sample mean and the sample median will converge to 0 in probability. Thus, the sample mean and the sample median are not consistent for θ and Pθ



hence not CAN for θ . From Example 2.5.2, X (1) → −θ and X (n) → θ . Thus, −X (1) and X (n) are consistent for θ . However, using arguments similar to those in the solution of Exercise 3.5.1, we can show that these are not CAN. 3.5.20 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (0, θ ) dis1/n  n ( tribution, θ > 0. Examine whether Sn = Xi is CAN for θ e−1 . n

i=1

1 n Solution: Suppose Tn = − log Sn = i=1 (− log X i ) = n i=1 Yi where Yi = − log X i , i = 1, 2, . . . , n. Suppose X ∼ U (0, θ ) and Y = − log X . Then the probability density function f Y (y, θ ) of Y is given by 1 n



f Y (y, θ ) = (1/θ )e−y , − log θ < y < ∞. Further using the method of integration by parts, we have !  ∞  1 ∞ 1 d E(Y ) = ye−y = y (−e−y ) = 1 − log θ θ − log θ θ − log θ dy !  ∞  ∞ d 1 1 y 2 e−y = y 2 (−e−y ) E(Y 2 ) = θ − log θ θ − log θ dy !  ∞ 1 2 −y ∞ −y (−y e )− log θ + 2 ye = θ − log θ = (log θ )2 + 2 − 2 log θ = 1 + (1 − log θ )2 . Thus, V ar (Y ) = 1 and {Y1 , Y2 , . . . , Yn } are independent and identically distributed random variables with finite mean 1 − log θ and variance 1. Hence by the WLLN and CLT, Tn =

n √ 1 Pθ L Yi → 1 − log θ & n(Tn − (1 − log θ )) → Z ∼ N (0, 1). n i=1

Thus, Tn is CAN for 1 − log θ with approximate variance 1/n. To examine whether Sn is CAN for θ e−1 , we consider a transformation g(x) = e−x , it is differentiable function with g  (x) = −e−x = 0. Hence by the delta method, 1/n (n is CAN for g(Tn ) = e−Tn = Sn = i=1 X i −1+log θ −1 = θ e with approximate variance g(1 − log θ ) = e (1/n)(−e−1+log θ )2 which reduces to θ 2 e−2 /n. 3.5.21 Suppose {X 1 , X 2 , . . . , X 2n+1 } is a random sample from a uniform U (θ − 1, θ + 1) distribution. (i) Show that X 2n+1 and X (n+1) are both CAN for θ . Compare the two estimators. (ii) Using the large sample distribution, obtain the minimum sample size n 0 required for both the estimators to attain a given level of accuracy specified by  and δ, such that P[|Tn − θ | < ] ≥ 1 − δ, ∀ n ≥ n 0 , where Tn is either X 2n+1 or X (n+1) . Solution: (i) Suppose X ∼ U (θ − 1, θ + 1) distribution. Then E(X ) = θ , V ar (X ) = 1/3 and population median is θ . By the WLLN and the CLT X 2n+1 is CAN for θ with approximate variance 1/3(2n + 1). Further by Theorem 3.3.3, the sample median X (n+1) is CAN for θ with approximate variance 1/(2n + 1). Comparing the approximate variances of the two estimators we get that sample mean is better than the sample median. (ii) By the   L √ CLT 3(2n + 1) X 2n+1 − θ → Z ∼ N (0, 1). Hence,



  P |X 2n+1 − θ | <  ≈  3(2n + 1)  −  − 3(2n + 1) 

= 2 3(2n + 1)  − 1 > 1 − δ




δ 3(2n + 1)  ≥ 1 − 2   δ ⇒ 3(2n + 1)  ≥ −1 1 − 2  −1  δ 2  (1 − 2 ) 1 ⇒n≥ − 2 6 2    δ 2 −1  (1 − 2 ) 1 − ⇒ n0 = + 1. 6 2 2

⇒



For large n, X (n+1) ∼ N (θ, 1/(2n + 1)). Hence,



  P |X (n+1) − θ | <  ≈  (2n + 1)  −  − (2n + 1) 

= 2 (2n + 1)  − 1 > 1 − δ

δ (2n + 1)  ≥ 1 − ⇒  2  δ ⇒ (2n + 1)  ≥ −1 1 − 2  −1 2  (1 − 2δ ) 1 ⇒n≥ − 2 2 2   2 −1 (1 − 2δ ) 1 − ⇒ n0 = + 1, 2 2 2 it is larger than that for X 2n+1 . 3.5.22 Suppose {X 1 , . . . , X n } is a random sample from X with probability density function f (x, θ ) = θ/x 2 , x ≥ θ, θ > 0. (i) Find the maximum likelihood estimator of θ and examine if it is CAN for θ . (ii) Find a CAN estimator of θ based on the sample quantiles. Solution: (i) The probability density function of a random variable X is given by f X (x, θ ) = θ/x 2 , x ≥ θ. In Exercise 2.8.16, we have obtained the maximum likelihood estimator θˆn of θ which is X (1) and it is shown to be consistent for θ . To derivethe asymptotic  distribution of X (1) with suitable norming, we define Yn = n X (1) − θ and derive its distribution function G Yn (y) for y ∈ R. Since X (1) ≥ θ, Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then G Yn (y) = Pθ [n(X (1) − θ ) ≤ y] = Pθ [X (1) ≤ θ + y/n] = FX (1) (θ + y/n).



Now, the distribution function F_X(x) of X is given by F_X(x) = 0 if x < θ and F_X(x) = 1 − θ/x if x ≥ θ. Hence, the distribution function of X_(1) is F_{X_(1)}(x) = 1 − [1 − F_X(x)]^n, that is, F_{X_(1)}(x) = 0 if x < θ and F_{X_(1)}(x) = 1 − θ^n/x^n if x ≥ θ. Consequently, the distribution function G_{Y_n}(y) is given by

G_{Y_n}(y) = 0, if θ + y/n < θ ⇔ y < 0;   G_{Y_n}(y) = 1 − (θ/(θ + y/n))^n, if θ + y/n ≥ θ ⇔ y ≥ 0.

⇔y 0, θ > 0 . Obtain an estimator of θ based on the sample quantiles. Is it CAN? Justify your answer. Solution: The distribution function F(x, θ ) of a Weibull distribution is given by  x u θ −1 exp{−u θ }du, x > 0 F(x, θ ) = θ 0


= ∫_0^{x^θ} exp{−y} dy (substituting u^θ = y) = 1 − exp{−x^θ}.
0. √   −1 Hence, D − v(θ) < 0. Thus, Tn = m 1 + m 2  ( p) is a better CAN estimator of a p (θ ) with approximate variance σ 2 + σ 2 −2 ( p)/2. To find the asymptotic confidence interval for a p (θ ) based on Tn we define a pivotal  −1/2   Tn − a p (θ ) . Given a quantity Q n as Q n = (m 2 + m 2 −2 ( p)/2)/n confidence coefficient (1 − α), we can find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Q n < a1−α/2 ] = 1 − α. Inverting the inequality −a1−α/2 < Q n < a1−α/2 , we get 100(1 − α)% large sample confidence interval for a p (θ ) as ⎛ ⎝Tn − a1−α/2

+

−2 m 2 + m 2  2( p)

n

+ , Tn + a1−α/2

−2 m 2 + m 2  2( p)

n

⎞ ⎠ .

3.5.28 Suppose {X 1 , X 2 , . . . , X n } is a random sample from an exponential distribution with location parameter μ and scale parameter 1/σ . (i) Obtain an asymptotic confidence interval for μ when σ is known and when it is unknown. (ii) Obtain an asymptotic confidence interval for σ when μ is known and when it is unknown. (iii) Obtain an asymptotic confidence interval for the p-th quantile when both μ and σ are unknown.



Solution: X follows an exponential distribution with location parameter μ and scale parameter 1/σ , hence its probability density function is f (x, μ, σ ) = (1/σ ) exp{−(x − μ)/σ }, x ≥ μ. (i) Suppose σ is known. Proceeding as in Example 2.2.16, the maximum likelihood estimator of μ is X (1) and it is consistent for μ. Further, the distribution function FX (1) (x, μ, σ ) of X (1) is given by  FX (1) (x, μ, σ ) =

0, if x < μ 1 − exp{− σn (x − μ)}, if x ≥ μ.

It thus follows that for each n, X (1) has an exponential distribution with location parameter μ and scale parameter n/σ which further implies that Yn = (n/σ )(X (1) − μ) has the exponential distribution with location parameter 0 and scale parameter 1 for each n and Yn is a pivotal quantity. The asymptotic confidence interval for μ when σ is known is based on Yn . Given a confidence coefficient (1 − α), we can find a and b so that P[a < Yn < b] = 1 − α. Inverting the inequality a < Yn < b, we get X (1) −

bσ aσ < μ < X (1) − . n n

Hence, 100(1 − α)% large sample confidence interval for μ is given by 

bσ aσ X (1) − , X (1) − n n

 .

When σ is unknown, pivotal quantity Yn is of no use to find the confidence interval. We then use the studentization procedure and replace σ by its consistent estimator. Now V ar (X ) = σ 2 , hence the sample variance Sn2 is a consistent estimator of σ 2 . Hence,  . / L  n n  σ Qn = X (1) − μ = (X (1) − μ) → U , Sn Sn σ where U follows the exponential distribution with location parameter 0 and scale parameter 1. Given a confidence coefficient (1 − α), we can find a and b so that P[a < Q n < b] = 1 − α. Inverting the inequality a < Q n < b, we get 100(1 − α)% large sample confidence interval for μ as 

bSn aSn X (1) − , X (1) − n n

 .

(ii) Suppose μ is known, then Y = X − μ follows an exponential distribution with location parameter 0 and scale parameter 1/σ . A random sample {X 1 , X 2 , . . . , X n } gives a random sample {Y1 , Y2 , . . . , Yn } from the distribution of Y . Hence, by the WLLN and the CLT, Y n = X n − μ is CAN for σ with approximate variance σ 2 /n. Thus,






  L n/σ Y n − σ → Z ∼ N (0, 1) '   L ⇒ Q n = n/Y n Y n − σ → Z ∼ N (0, 1) , by Slutsky’s theorem. Thus, Q n is a pivotal quantity. Given a confidence coefficient (1 − α), we can find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Q n < a1−α/2 ] = 1 − α. Inverting the inequality −a1−α/2 < Q n < a1−α/2 , we get ' Y n − a1−α/2

' Y n /n < σ < Y n + a1−α/2

Y n /n .

Hence using the studentization technique, 100(1 − α)% large sample confidence interval for σ is given by 

' Y n − a1−α/2



' Y n /n, Y n + a1−α/2

Y n /n

.

Suppose now that μ is unknown. In Example 3.3.5, we have shown that √ √ (m 1 − m 2 , m 2 ) is CAN for (μ, σ ) with the approximate dispersion matrix D/n, where   2 −σ 2 σ . D= −σ 2 2σ 2 √ Thus when μ is unknown, m 2 is CAN√for σ with variance √ approximate  2 2σ /n. Suppose Q n is defined as Q n = n/2m 2 m 2 − σ . Then & Qn =

⎫ ⎧+ &  ⎨ 2    n √ 2σ ⎬ n √ m2 − σ = m − σ 2 ⎩ 2m 2 ⎭ 2m 2 2σ 2 L

→ Z ∼ N (0, 1), by Slutsky’s theorem. Thus, Q n is a pivotal quantity. Given a confidence coefficient (1 − α), we can find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Q n < a1−α/2 ] = 1 − α. Inverting the inequality −a1−α/2 < Q n < a1−α/2 , we get √ √ m 2 − a1−α/2 2m 2 /n < σ < m 2 + a1−α/2 2m 2 /n . Hence using the studentization technique, 100(1 − α)% large sample confidence interval for σ is given by √

m 2 − a1−α/2



2m 2 /n,

√ m 2 + a1−α/2 2m 2 /n .

7.2 Chapter 3

465

(iii) Suppose θ = (μ, σ ) . The p-th quantile a p (θ ) is a solution of the equation FX (a p (θ ), μ, σ ) = p

1 1 − exp{− (a p (θ ) − μ)} = p σ a p (θ) = μ − σ log(1 − p) .

⇔ ⇒

√ √ Thus, a p (θ ) is a function of μ and σ . Now (m 1 − m 2 , m 2 ) is CAN for (μ, σ ) with the approximate dispersion matrix D/n. Suppose a function g : R2 → R is defined as g(x1 , x2 ) = x1 − x2 log(1 − p). Further, ∂ ∂ ∂ x1 g(x 1 , x 2 ) = 1 and ∂ x2 g(x 1 , x 2 ) = − log(1 − p). These partial derivatives are continuous and hence g is a totally differentiable function. The gradient vector  evaluated at (μ, σ ) is given by  = [1, − log(1 − p)] . Hence by the delta method, g(m 1 −

√ √ √ √ m 2 , m 2 ) = m 1 − m 2 − log(1 − p) m 2 √ = m 1 − m 2 (1 + log(1 − p)) = Tn ,

say, is CAN for g(μ, σ ) = μ − σ log(1 − p) = a p (θ ) with approximate variance  D/n, where 2 2  D = σ 2 [1 + 2 log(1√− p) √+ 2(log(1 − p)) ] = σ h( p), say. Suppose Q n is defined as Q n = ( n/ m 2 h( p)) Tn − a p (θ) Then  Qn =

√     L n σ h( p) Tn − a p (θ ) → Z ∼ N (0, 1), √ √ √ m 2 h( p) σ h( p)

by Slutsky’s theorem. Thus, Q n is a pivotal quantity. Given a confidence coefficient (1 − α), we can find the quantile a1−α/2 of the standard normal distribution so that P[−a1−α/2 < Q n < a1−α/2 ] = 1 − α. Inverting the inequality −a1−α/2 < Q n < a1−α/2 , we get 100(1 − α)% large sample confidence interval for a p (θ ) as

Tn − a1−α/2



m 2 h( p)/n, Tn + a1−α/2



m 2 h( p)/n

.

3.5.29 Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a uniform U (θ1 , θ2 ) distribution, −∞ < θ1 < θ2 < ∞. Obtain a CAN estimator of p-th population quantile and hence based on it obtain an asymptotic confidence interval for p-th population quantile. Solution: Suppose X ∼ U (θ1 , θ2 ) distribution. Then its distribution function F(x, θ1 , θ2 ) is given by F(x, θ1 , θ2 ) =

⎧ ⎨ 0, ⎩

x−θ1 θ2 −θ1 ,

1,

if x < θ1 if θ1 ≤ x < θ2 if x ≥ θ2 .

466

7

Solutions to Conceptual Exercises

Hence the p-th population quantile a p (θ) = θ1 + p(θ2 − θ1 ), where θ = (θ1 , θ2 ) . From Theorem 3.3.3, the p-th sample quantile X ([np]+1) is CAN for p-th population quantile a p (θ) with approximate variance p(1 − p)(θ2 − θ1 )2 /n. In the solution of Exercise 2.8.35, we have shown that (X (1) , X (n) ) is consistent for (θ1 , θ2 ) . Thus, the consistent estimator for the approximate variance p(1 − p)(θ2 − θ1 )2 /n is p(1 − p)(X (n) − X (1) )2 /n. Hence by Slutsky’s theorem, &  L  1 n Qn = X ([np]+1) − a p (θ ) → Z ∼ N (0, 1) . p(1 − p) (X (n) − X (1) ) Thus, Q n is a pivotal quantity for large n. Given a confidence coefficient (1 − α), adopting the usual procedure we get 100(1 − α)% large sample confidence interval for a p (θ ) as √

 X ([np]+1) − a1−α/2

p(1 − p)(X (n) − X (1) ) , X ([np]+1) + a1−α/2 √ n



p(1 − p)(X (n) − X (1) ) √ n

 .

3.5.30 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Poisson Poi(θ ) distribution, θ > 0. (i) Obtain a CAN estimator of the coefficient of variation √ cv(θ ) of X when it is defined as cv(θ ) = standard deviation/mean = 1/ θ. (ii) If the estimator of cv(θ ) is proposed as cv(θ ˜ ) = Sn /X n , where X n is the sample mean and Sn is the sample standard deviation, examine if it is CAN for θ . Compare the two estimators. Solution: A random variable X ∼ Poi(θ ) distribution, hence E(X ) = V ar (X ) = θ < ∞. By the WLLN and the CLT X n is CAN for θ with approximate variance θ/n. √ (i) The coefficient of variation cv(θ ) of X is 1/ θ. To find its CAN√estimator, suppose a function g : (0, ∞) → (0, ∞) is defined as g(x) = 1/ x, then g is a differentiable function with g  (x) = −1/2x 3/2 = 0. Hence, by the delta √ method g(X n ) = 1/ X n is CAN for g(θ ) = 1/ θ with approximate variance (θ/n) × (1/4θ 3 ) = 1/4nθ 2 . (ii) Suppose the estimator of cv(θ ) is proposed as cv(θ ˜ ) = Sn /X n . Consistency of cv(θ ˜ ) follows from the consistency of Sn for standard deviation of X and consistency of X n for mean of X and the invariance property of consistency. To examine if it CAN, we use the result established in Example 3.3.6 that T n = (m 1 , m 2 ) is CAN for φ = (μ1 , μ2 ) with approximate dispersion matrix /n, where  is given by  =

 μ3 μ2 . μ3 μ4 − μ22

To find the elements of , we note that the moment generating function M X (t) of X is M X (t) = exp(θ {et − 1}) and hence the cumulant generating function



C X (t) of X is C X (t) = θ (et − 1) = θ (t + t 2 /2! + t 3 /3! + t 4 /4! + · · · ). It is known that the i-th cumulant is the coefficient of t i /i!. Thus, ki = θ for all i. Using the relation between cumulants ki and moments we have μ1 = k1 = θ, μi = ki = θ, i = 2, 3 and μ4 = k4 + 3k22 = θ + 3θ 2 . Thus the matrix  is given by   θ θ . = θ θ + 2θ 2 To examine whether cv(θ ˜ ) = Sn /X n is CAN, we further define a transformation g : R2 → R such that √ √ x2 x2 ∂ ∂ 1 g(x1 , x2 ) = ⇒ g(x1 , x2 ) = − 2 & g(x1 , x2 ) = √ . x1 ∂ x1 ∂ x2 2x1 x2 x1 These partial derivatives are continuous and hence g is a totally differentiable function. The gradient vector  evaluated at (θ, θ ) is given by  = [−1/θ 3/2 , 1/2θ 3/2 ] . Hence, by Theorem√3.3.4, we get that g(X n , Sn2 ) = Sn /X n is CAN for g(θ, θ ) = 1/ θ with approximate variance . It is to be noted that D /n, where D = (1 + 2θ )/4θ 2 (1 + 2θ )/4θ 2 > 1/4θ 2 ∀ θ > 0. Hence, 1/ X n is a better CAN estimator of cv(θ ) than the CAN estimator Sn /X n of cv(θ ). 3.5.31 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a log-normal distribution with parameters μ and σ 2 . Find a CAN estimator of (μ1 , μ2 ) . Hence obtain a CAN estimator for θ = (μ, σ 2 ) and its approximate variance-covariance matrix. Solution: Suppose a random variable X follows log-normal distribution with parameters μ and σ 2 . Hence, Y = log X ∼ N (μ, σ 2 ) distribution. This relation is useful to find moments of X = eY from the moment generating function MY (·) of Y . Thus, E(X r ) = E(er Y ) = MY (r ) = exp{μr + σ 2 r 2 /2}. Hence, μ1 = exp{μ + σ 2 /2} and μ2 = exp{2μ + 2σ 2 } μ3 = exp{3μ + 9σ 2 /2} and μ4 = exp{4μ + 8σ 2 } V ar (X ) = exp{2μ + 2σ 2 } − exp{2μ + σ 2 } = exp{2μ + σ 2 }(exp{σ 2 } − 1) Cov(X , X 2 ) = exp{3μ + 5σ 2 /2}(exp{2σ 2 } − 1) and V ar (X 2 ) = exp{4μ + 4σ 2 }(exp{4σ 2 } − 1). By the WLLN and by the equivalence of marginal and joint consistency, T n = (m 1 , m 2 ) is consistent for (μ1 , μ2 ) . Further, by Theorem 3.3.2 it is CAN for φ = (μ1 , μ2 ) with approximate dispersion matrix /n, where the dispersion matrix  is given by



 =


V ar (X ) Cov(X , X 2 ) 2 Cov(X , X ) V ar (X 2 )



and the elements of the matrix are as specified above. We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = θ = (μ, σ 2 ) . Suppose g = (g1 , g2 ) is defined as g1 (x1 , x2 ) = 2 log x1 − (1/2) log x2 and g2 (x1 , x2 ) = log x2 − 2 log x1 . Then 2 ∂ 1 ∂ 2 ∂ g1 (x1 , x2 ) = , g1 (x1 , x2 ) = − , g2 (x1 , x2 ) = − ∂ x1 x1 ∂ x2 2x2 ∂ x1 x1 ∂ 1 & g2 (x1 , x2 ) = . ∂ x2 x2 These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at (μ1 , μ2 ) is given by  M=

 2/μ1 −1/(2μ2 ) . 1/μ2 −2/μ1

Hence, by Theorem 3.3.4, g(T n ) = (g1 (m 1 , m 2 ), g2 (m 1 , m 2 )) = (2 log m 1 − (1/2) log m 2 , log m 2 − 2 log m 1 ) is CAN for g(μ1 , μ2 ) = θ = (μ, σ 2 ) , with approximate dispersion matrix M M  /n, where M M  is,  M M  =

 2 2 2 2 2 2 4eσ − 2e2σ + e4σ /4 − 9/4 −4eσ + 3e2σ − e4σ /2 + 3/2 . 2 2 2 2 2 2 4eσ − 4e2σ + e4σ − 1 −4eσ + 3e2σ − e4σ /2 + 3/2

It is to be noted that σ 2 > 0 ⇒ eσ > 1. Hence, 2

4eσ − 2e2σ + e4σ /4 − 9/4 = (1/4)(eσ − 1)(e3σ + e2σ + 9 − 7eσ ) > 0. 2

2

2

2

2

2

2

Similarly, 4eσ − 4e2σ + e4σ − 1 = e4σ − (2eσ − 1)2 2

2

2

2

2

= (e2σ − 2eσ + 1)(e2σ + 2eσ − 1) 2

2

2

2

= (eσ − 1)2 (e2σ + eσ + eσ − 1) > 0. 2

2

2

2

7.2 Chapter 3

469

3.5.32 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a gamma distribution with scale parameter α and shape parameter λ. Find a moment estimator of (α, λ) and examine whether it is CAN. Find its approximate variancecovariance matrix. Solution: Suppose X follows gamma distribution with scale parameter α and shape parameter λ. Then its probability density function is given by f (x, α, λ) =

α λ −αx λ−1 x , x > 0, α > 0, λ > 0 . e (λ)

Further, E(X ) = λ/α and V ar (X )=λ/α 2 which implies that E(X 2 )=(λ + λ2 )/α 2 . Thus, the moment estimator of (α, λ) is a solution of the system of equations given by m 1 = E(X ) =

λ λ + λ2 & m 2 = E(X 2 ) = α α2 λ & m 2 = V ar (X ) = 2 . α



m 1 = E(X ) =

λ α

The solution is given by α = m 1 /m 2 & λ = m 2 1 /m 2 . Thus, the moment esti mator of (α, λ) is given by (αˆ n , λˆ n ) = (m 1 /m 2 , m 2 1 /m 2 ) . To examine if it is CAN, we use the result established in Example 3.3.6 and the delta method. We find the higher order moments of X from its moment generating function M X (t). It is given by M X (t) = (1 − t/α)−λ , t < α. Hence, its cumulant generating function C X (t) is C X (t) = log M X (t) = −λ log(1 − t/α). Expanding it we have,   t t3 t4 t2 C X (t) = λ + 2 + 3 + 4 + ··· α 2α 3α 4α   2 3 2t 6t 4 t t + + + · · · . =λ + α 2!α 2 3!α 3 4!α 4 Using the relation between cumulants ki and moments we have, μ1 = k1 =

λ λ 2λ , μ2 = k2 = 2 , μ3 = k3 = 3 α α α & μ4 = k4 + 3k22 =

6λ 3λ2 + 4 . α4 α

Thus moments of X up to order 4 are finite and hence from Example 3.3.6, we have T n = (m 1 , m 2 ) is CAN for φ = (μ1 , μ2 ) = (λ/α, λ/α 2 ) with approximate dispersion matrix /n where  is

470

7

 =

μ3 μ2 μ3 μ4 − μ22



 =

λ α2 2λ α3

Solutions to Conceptual Exercises



2λ α3 6λ+2λ2 α4



λ = 2 α

1 2 α

2 α 6+2λ α2

 .

We now find a transformation g : R2 → R2 such that g(T n ) is CAN for g(φ) = (α, λ) . Suppose g = (g1 , g2 ) is defined as g1 (x1 , x2 ) = x1 /x2 and g2 (x1 , x2 ) = x12 /x2 . Then ∂ 1 ∂ x1 ∂ 2x1 g1 (x1 , x2 ) = , g1 (x1 , x2 ) = − 2 , g2 (x1 , x2 ) = ∂ x1 x2 ∂ x2 x2 x2 ∂ x1 &

x2 ∂ g2 (x1 , x2 ) = − 12 . ∂ x2 x2

These partial derivatives are continuous and hence g1 and g2 are totally differentiable functions. The matrix M of partial derivatives evaluated at (μ1 , μ2 ) is given by M=

 α2

− αλ 2α −α 2 3



λ





− αλ 2 −α λ

2

 .

Hence, by Theorem 3.3.4, g(T n ) = (g1 (m 1 , m 2 ), g2 (m 1 , m 2 )) =    (m 1 /m 2 , m 2 1 /m 2 ) is CAN for g(μ1 , μ2 ) = (α, λ) , with approximate dis  persion matrix M M /n, where M M is, M M  = λ

 3α 2

λ2 2α λ

2

+ 2αλ + 2α

 + 2α . 2λ + 2

2α λ

3.5.33 Suppose X i j = μi + i j where {i j , i = 1, 2, 3, j = 1, 2, . . . , n} are independent and identically distributed random variables each having a normal N (0, σ 2 ) distribution. (i) Obtain a CAN estimator of θ = μ1 − 2μ2 + μ3 . (ii) Suppose {i j } are independent and identically distributed random variables with E(i j ) = 0 and V ar (i j ) = σ 2 . Is the estimator of θ obtained in (i) still a CAN estimator of θ ? Justify your answer. Solution: (i) We have X i j = μi + i j ∼ N (μi , σ 2 ), i = 1, 2, 3. Further, X in ∼ N (μi , σ 2 /n), i = 1, 2, 3 and X in , i = 1, 2, 3 are independent. Hence Tn = X 1n − 2X 2n + X 3n follows N (θ, 6σ 2 /n) distribution. Thus, Tn is unbiased for θ and its variance converges to 0 as n → ∞, hence Tn is consistent for θ . Further for each n, Tn ∼ N (θ, 6σ 2 /n) which implies that its asymptotic distribution is also normal. Thus, Tn is CAN for θ with approximate variance 6σ 2 /n.

7.2 Chapter 3

471

(ii) It is given that {i j , i = 1, 2, 3, j = 1, 2, . . . , n} are independent and identically distributed random variables. Hence, {X i j , i = 1, 2, 3, j = 1, 2, . . . , n} are also independent random variables with E(X i j ) = μi and V ar (X i j ) = σ 2 . By the WLLN, P

P

X in → μi , i = 1, 2, 3. Hence, Tn → θ , convergence in probability being closed under arithmetic operations. To examine whether Tn is CAN for θ , suppose a random vector Z j is defined as Z j = (X 1 j , X 2 j , X 3 j ) , j = 1, 2, . . . , n. Observe that {Z 1 , Z 2 , . . . , Z n } are independent and identically distributed random vectors with mean vector μ = (μ1 , μ2 , μ3 ) and dispersion matrix  = σ 2 I3 , which is positive definite. Hence by the multivariate CLT, √

L

n(Z n − μ) → Z 1 ∼ N3 (0, σ 2 I3 ), where Z n = (X 1n , X 2n , X 3n ) .

We further define a transformation g : R3 → R such that g(x1 , x2 , x3 ) = x1 − 2x2 + x3 . Then ∂∂x1 g(x1 , x2 , x3 ) = 1, ∂ ∂ ∂ x2 g(x 1 , x 2 , x 3 ) = −2 and ∂ x3 g(x 1 , x 2 , x 3 ) = 1. These partial derivatives are continuous and hence g is a totally differentiable function. The gradient vector  evaluated at μ is given by δ = [1, −2, 1] . Hence, by Theorem 3.3.4, it follows that g(X 1n , X 2n , X 3n ) = Tn is CAN for g(μ1 , μ2 , μ3 ) = θ with approximate variance  /n, where  = 6σ 2 . Thus, estimator Tn of θ obtained in (i) is CAN estimator for θ , even if the assumption of normality is relaxed. 3.5.34 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a uniform U (θ1 , θ2 ) distribution, where θ1 < θ2 ∈ R. (i) Find the maximum likelihood estimator of (θ1 , θ2 ) . Show that it is consistent but not CAN. (ii) Find a CAN estimator of (θ1 + θ2 )/2. Solution: (i) If X ∼ U (θ1 , θ2 ), then its probability density function f (x, θ1 , θ2 ) and the distribution function F(x, θ1 , θ2 ) are as follows:  f (x, θ1 , θ2 ) =

F(x, θ1 , θ2 ) =

1 θ2 −θ1 ,

0,

⎧ ⎨ 0, ⎩

x−θ1 θ2 −θ1 ,

1,

if

θ1 < x < θ2 otherwise.

if x < θ1 if θ1 ≤ x < θ2 if x ≥ θ2 .

Corresponding to a random sample X , the likelihood of θ1 , θ2 is given by L n (θ1 , θ2 |X ) =

n  i=1

1 1 = , θ2 − θ1 (θ2 − θ1 )n

θ1 ≤ X i ≤ θ2 ,

∀ i ⇔ X (1) ≥ θ1 , X (n) ≤ θ2 .

472

7

Solutions to Conceptual Exercises

Thus, the likelihood attains maximum when (θ2 − θ1 ) is minimum. Now, θ2 ≥ X (n) and −θ1 ≥ −X (1) . Thus, (θ2 − θ1 ) ≥ X (n) − X (1) implying that (θ2 − θ1 ) is minimum, given data X , when θ1 = X (1) and θ2 = X (n) . Hence the maximum likelihood estimator θˆ1n of θ1 and θˆ2n of θ2 are given by θˆ1n = X (1) and θˆ2n = X (n) . To verify the consistency of these estimators of θ , we define Y = (X − θ1 )/(θ2 − θ1 ) Then Y ∼ U (0, 1) and from Example 2.5.2 we have, P

P

P

P

Y(1) → 0 & Y(n) → 1 ⇒ X (1) → θ1 & X (n) → θ2 . To derive the asymptotic distribution of X (1) with suitable norming, we define Yn = n(X (1) − θ1 ) and derive its distribution function G Yn (y) for y ∈ R. Since X (1) ≥ θ1 , Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then G Yn (y) = P[n(X (1) − θ1 ) ≤ y] = Pθ [X (1) ≤ θ1 + y/n] = FX (1) (θ1 + y/n). Thus, ⎧ n if θ1 + y/n < θ1 ⇔ y < 0 ⎪ ⎨ 1 − (1 − 0) = 0, n θ2 −θ1 −y/n G Yn (y) = 1− , if θ ≤ θ + y/n < θ2 ⇔ 0 ≤ y < n(θ2 − θ1 ) 1 1 θ2 −θ1 ⎪ ⎩ 1 − [1 − 1] = 1, if y ≥ n(θ2 − θ1 ).

Hence,  G Yn (y) →

0, 1−e

−θ

y 2 −θ1

if y < 0 , if y ≥ 0.

Thus, the asymptotic distribution of Yn = n(X (1) − θ1 ) is exponential with location parameter 0 and scale parameter 1/(θ2 − θ1 ). Thus, with norming factor n, the asymptotic distribution of X (1) is not normal. Proceeding on similar lines as in Example 3.2.1, it follows that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞, such that the asymptotic distribution of an (X (1) − θ1 ) is normal. Hence, we conclude that X (1) is not CAN for θ1 . To derive the asymptotic distribution of X (n) with suitable norming, we define Yn = n(θ2 − X (n) ) and derive its distribution function G Yn (y) for y ∈ R. Since X (n) ≤ θ2 , Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then G Yn (y) = P[n(θ2 − X (n) ) ≤ y] = P[X (n) ≥ θ2 − y/n] = 1 − FX (n) (θ2 − y/n), hence

G Yn (y) =

⎧ ⎨

1 − 0, if θ2 − y/n < θ1 ⇔ y ≥ n(θ2 − θ1 ) 0 < y ≤ n(θ2 − θ1 ) 1 − (1 − n(θ2y−θ1 ) )n , if ⎩ 1 − 1, if y ≤ 0.

7.2 Chapter 3

473

As n → ∞,

 G Yn (y) →

0, 1−e

−θ

y 2 −θ1

if y ≤ 0 , if y ≥ 0.

Thus, Yn = n(θ2 − X (n) ) converges in distribution to an exponential distribution with location parameter 0 and scale parameter 1/(θ2 − θ1 ). Thus, with norming factor n, the asymptotic distribution of X (n) is not normal. Proceeding on similar lines as in Example 3.2.1, it follows that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞, such that the asymptotic distribution of an (X (n) − θ ) is normal. Hence, it is proved that X (n) is not CAN for θ2 . Thus, (X (1) , X (n) ) is not CAN for (θ1 , θ2 ) . It is to be noted that it is enough to prove that one of the two X (1) and X (n) is not CAN to conclude that (X (1) , X (n) ) is not CAN for (θ1 , θ2 ) . (ii) Since X ∼ U (θ1 , θ2 ), E(X ) = (θ1 + θ2 )/2 and V ar (X ) = (θ2 − θ1 )2 /12 < ∞. Hence by the WLLN and by the CLT,     √ θ1 + θ2 (θ2 − θ1 )2 P θ1 + θ2 L  . Xn → → Z 1 ∼ N 0, & n Xn − 2 2 12 Hence, X n is CAN for (θ1 + θ2 )/2 with approximate variance (θ2 − θ1 )2 /12n. 3.5.35 Suppose {X 1 , X 2 , . . . , X n } are independent and identically distributed random variables with finite fourth order moment. Suppose E(X 1 ) = μ and V ar (X 1 ) = σ 2 . Find a CAN estimator of the coefficient of variation σ/μ. Solution: From Example 3.3.6, we know that (m 1 , m 2 ) is CAN for (μ1 , μ2 ) with approximate dispersion matrix /n where  is   μ3 μ2 . = μ3 μ4 − μ22 √ To find a CAN estimator of σ/μ, we note that σ/μ = μ2 /μ1 = g(μ1 , μ2 ) √ 2 say, where g : R → R is a function defined as g(x1 , x2 ) = x2 /x1 . Then ∂ x2 ∂ 1 g(x1 , x2 ) = − 2 & g(x1 , x2 ) = √ . ∂ x1 ∂ x2 2x1 x2 x1 These partial derivatives are continuous and hence g is a totally differentiable function. The gradient vector  evaluated at (μ1 , μ2 ) is  √  √   − μ2 /(μ1 )2 , (1/2)μ1 μ2 = −σ/μ2 , (1/2)μσ . Hence, by Theorem √ 3.3.4, g(m 1 , m 2 ) = m 2 /m 1 is CAN for g(μ1 , μ2 ) = σ/μ with approximate variance D /n, where D = σ 4 /μ4 − μ3 /μ31 + (μ4 − σ 4 )/4μ2 σ 2 .

474

7

Solutions to Conceptual Exercises

3.5.36 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X with distribution function F. Suppose random variables Z 1 and Z 2 are defined as follows. For a < b,  1, if X ≤ a Z1 = 0, if X > a.  Z2 =

1, if X ≤ b 0, if X > b.

Show that for large n, the distribution of (Z 1n , Z 2n ) is bivariate normal. Hence obtain a CAN estimator for (F(a), F(b)) . Solution: From the definition of Z 1 and Z 2 we have, E(Z 1 ) = F(a), V ar (Z 1 ) = F(a)(1 − F(a)) & E(Z 2 ) = F(b), V ar (Z 2 ) = F(b)(1 − F(b)). To find Cov(Z 1 , Z 2 ), observe that for a < b, E(Z 1 Z 2 ) = P[Z 1 = 1, Z 2 = 1] = P[X ≤ a, X ≤ b] = P[X ≤ a] = F(a) and hence Cov(Z 1 , Z 2 ) = F(a) − F(a)F(b) = F(a)(1 − F(b)). Thus, if we define Z = (Z 1 , Z 2 ) then E(Z ) = (F(a), F(b)) and dispersion matrix  of Z is given by  =

F(a)(1 − F(a)) F(a)(1 − F(b)) F(a)(1 − F(b)) F(b)(1 − F(b))



Observe that the first principal minor of  is F(a)(1 − F(a)) and it is positive. The second principal minor which is the determinant of  is F(a)(1 − F(b))(F(b) − F(a)) and it is also positive. Hence  is a positive definite matrix. Now a random sample {X 1 , X 2 , . . . , X n } from the distribution of X gives a random sample {Z 1 , Z 2 , . . . , Z n } from the distribution of Z . Hence by the WLLN and by the multivariate CLT, P

Z n = (Z 1n , Z 2n ) → (F(a), F(b))

&

√ L n(Z n − (F(a), F(b)) ) → Y ∼ N2 (0, ).

Thus, we have proved that for large n the distribution of (Z 1n , Z 2n ) is bivariate normal and (Z 1n , Z 2n ) is CAN for (F(a), F(b)) with approximate dispersion matrix /n.

7.2 Chapter 3

475

3.5.37 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from the following distributions (i) Normal N (μ, σ 2 ) and (ii) exponential distribution with location parameter θ and scale parameter λ. Find the maximum likelihood estimators of the parameters using stepwise maximization procedure and examine whether these are CAN. Solution: (i) Suppose X ∼ N (μ, σ 2 ). Corresponding to a random sample X of size n from normal N (μ, σ 2 ) distribution, the likelihood of θ = (μ, σ 2 ) is given by

L n (θ |X ) =

n  i=1



1 2πσ

exp{−

1 (X i − μ)2 } 2σ 2

⇔ log L n (θ |X ) = c −

n n 1  (X i − μ)2 , log σ 2 − 2 2σ 2 i=1

where c is a constant free from θ. Suppose σ 2 is fixed at σ02 and we find maximum of the likelihood with respect to the variations in μ. Thus, log L n (θ |X ) = c −

n n 1  (X i − μ)2 . log σ02 − 2 2σ02 i=1

By the usual method, log-likelihood is maximum when μ = X n , hence μˆ n (σ02 ) = X n . It is to be noted that μˆ n (σ02 ) does not depend on the fixed value of σ 2 . Now we consider na function 2 (X i − X n ) /2σ 2 . It is a differentiable funch(σ 2 ) = c − n log σ 2 /2 − i=1 n (X i − X n )2 /n. Hence the tion of σ 2 and is maximum when σ 2 = i=1 2 maximum of μ and σ are given by μˆ n = X n and n likelihood estimators (X i − X n )2 /n. In Example 3.3.2, these are shown to be CAN. σˆ n2 = i=1 (ii) Suppose X follows an exponential distribution with location parameter θ and scale parameter λ. Hence its probability density function is given by f (x, θ, λ) = (1/λ) exp {−(x − θ )/λ} , x ≥ θ, θ ∈ R, λ > 0. Corresponding to a random sample X ≡ {X 1 , X 2 , . . . , X n } from this distribution, the likelihood of (θ, λ) is given by L n (θ, λ|X ) =

n  (1/λ) exp {−(X i − θ )/λ} , X i ≥ θ ∀ i = 1, 2, . . . , n, i=1

which can be expressed as  L n (θ, λ|X ) = (1/λ) exp − n

n  i=1

 X i /λ + nθ/λ

, X (1) ≥ θ.

476

7

Solutions to Conceptual Exercises

Suppose λ is fixed at λ0 , then the likelihood is maximized with respect to variations in θ when θ is maximum given the data. The maximum value of the fixed value of θ given the data is X (1) . Note that it does not depend  on  λ. n (X i − X (1) )/λ . It Now we consider a function h(λ) = (1/λ)n exp − i=1 is a differentiable function of λ and h(λ) is maximum when n (X i − X (1) )/n. Hence the maximum likelihood estimators of θ λ = i=1 n (X i − X (1) )/n. To verify the and λ are given by θˆn = X (1) and λˆ n = i=1 consistency of X (1) as an estimator of θ , we find the distribution function of X (1) , it is given by FX (1) (x) = 1 − [1 − FX (x)]n , x ∈ R. The distribution function FX (x) is given by  0, if x < θ FX (x) = 1 1 − exp{− λ (x − θ )}, if x ≥ θ. Hence, the distribution function of X (1) is given by  0, if x < θ FX (1) (x) = 1 − exp{− λn (x − θ )}, if x ≥ θ. Thus, the distribution of X (1) is again exponential with location parameter θ and scale parameter n/λ. Hence E(X (1) ) = θ + λ/n which implies that the bias of X (1) as an estimator of θ is λ/n and it converges to 0 as n → ∞. Further, V ar (X (1) ) = λ2 /n 2 → 0 as n → ∞. Hence, X (1) is consistent for θ . To derive the asymptotic distribution of X (1) , with suitable norming, we define Yn = n(X (1) − θ ) and derive its distribution function G Yn (y) for y ∈ R. Since X (1) ≥ θ, Yn ≥ 0, hence for y < 0, G Yn (y) = 0. Suppose y ≥ 0, then G Yn (y) = Pθ [n(X (1) − θ ) ≤ y] = Pθ [X (1) ≤ θ + y/n] = FX (1) (θ + y/n) n = 1 − exp{− (θ + y/n − θ )} = 1 − exp{−y/λ}, y ≥ 0. λ Thus, for each n, Yn follows, an exponential distribution with location parameter 0 and scale parameter 1/λ and hence the asymptotic distribution of Yn = n(X (1) − θ ) is the same. Thus, with norming factor n, the asymptotic distribution of X (1) is not normal. Proceeding as in Example 3.2.1, we can show that there exists no sequence {an , n ≥ 1} of real numbers tending to ∞ as n → ∞ such that the asymptotic distribution of an (X (1) − θ ) is normal, hence we claim that X (1) is consistent but not CAN for θ . Now to examn (X i − X (1) )/n is CAN for λ, observe that λˆ n can be ine whether λˆ n = i=1 expressed as 1 λˆ n = n



n  i=1

 X i − n X (1)

1 = n



n  i=1

 X (i) − n X (1)

7.2 Chapter 3

477 n n 1 1 = (X (i) − X (1) ) = (X (i) − X (1) ). n n i=1

i=2

We define random variables Yi as Yi = (n − i + 1)(X (i) − X (i−1) ), i = 2, 3, . . . , n. Then n 

Yi =

i=2

n 

X (i) − (n − 1)X (1) =

i=2

n 

(X (i) − X (1) ) = n λˆ n .

i=2

It can be proved that {Y2 , Y3 , . . . , Yn } are independent and identically distributed random variables each having an exponential distribution with location parameter 0 and scale parameter 1/λ. Thus, E(Y2 ) = λ and V ar (Y2 ) = λ2 < ∞. Hence by the WLLN, n n n 1  Pθ,λ 1 n−1 1  P Yi → λ ⇒ λˆ n = Yi = Yi → λ, ∀ θ & λ n−1 n n n−1 i=2

i=2

i=2

which proves that λˆ n is consistent. By the CLT √



 n 1  L n−1 Yi − λ → Z 1 ∼ N (0, λ2 ). n−1 i=2

From Slutsky’s theorem, √

 n

   √ n n 1  1  n √ Yi − λ = √ n−1 Yi − λ n−1 n−1 n−1 i=2 i=2 L

→ Z 1 ∼ N (0, λ2 ). Now, √ √ n(λˆ n − λ) − n



   n n √  1  1 1 Yi − λ = n Yi − n−1 n n−1 i=2

i=2

−1 1  Yi = √ nn−1 n

i=2

P 1 n converges to 0 in probability, as n−1 i=2 Yi → λ and hence is bounded in √ probability. Hence, the limit distribution of n(λˆ n − λ) and √ √ L 1 n 2 ˆ n( n−1 i=2 Yi − λ) is the same. Thus, n(λn − λ) → Z 1 ∼ N (0, λ ). Hence, λˆ n is CAN for λ.

478

7.3

7

Solutions to Conceptual Exercises

Chapter 4

4.6.1 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a distribution of X with probability density function f (x, θ ) = θ/x θ +1 x > 1, θ > 0. (i) Examine whether the distribution belongs to a one-parameter exponential family. (ii) On the basis of a random sample of size n from the distribution of X , find the moment estimator of θ based on a sufficient statistic and the maximum likelihood estimator of θ . (iii) Examine whether these are CAN estimators of θ . (iv) Obtain the CAN estimator for P[X ≥ 2]. Solution: (i) The probability density function f (x, θ ) of X can be expressed as    θ θ = exp{log θ − (θ + 1) log x} f (x, θ ) = θ +1 = exp log x x θ +1 = exp{U (θ )K (x) + V (θ ) + W (x)}, where U (θ ) = −θ, K (x) = log x, V (θ ) = log θ and W (x)= − log x. Thus, (1) the probability law of X is expressible in the form required in a oneparameter exponential family, (2) support of the probability density function is (1, ∞) and it is free from θ , (3) the parameter space is (0, ∞) which is an open set, (4) U  (θ ) = −1 = 0 and (5) K (x) and 1 are linearly independent because in the identity a + b log x = 0 if x = 1, then a = 0, if further in the identity b log x = 0, if x = 2 then b = 0. Thus, all the requirements of a one-parameter exponential family are satisfied and hence the distribution of X belongs to a one-parameter exponential family. (ii) To find the maximum likelihood estimator of θ , the log-likelihood of θ corresponding to the data X is given by log L n (θ |X ) = n log θ − (θ + 1)

n 

log X i

i=1

∂ n  log X i . log L n (θ |X ) = − ∂θ θ n



i=1

ˆ Hence, the nsolution θn of the likelihood equation is given by ˆθn = n/ i=1 log X i . The second derivative ∂2 log L n (θ |X ) = −n/θ 2 < 0, ∀ θ > 0. Hence, θˆn is the maximum like∂θ 2 n lihood estimator of θ . From the likelihood it is clear that i=1 log X i is a sufficient statistic. Hence the momentestimator of θ based on the suffin log X i /n = E(log X ). To find cient statistic is solution of the equation i=1  ∂ E(log X ), we use the identity E ∂θ log f (X , θ ) = 0. Hence, E(log X ) = 1/θ . Hence n the moment estimator of θ based on the sufficient log X i . statistic is θ˜n = n/ i=1

7.3 Chapter 4

479

(iii) Since the distribution of X belongs to a one-parameter exponential family, the moment estimator of θ based on the sufficient statistic and the maximum likelihood estimator of θ are the same. These are CAN with approximate variance 1/n I (θ ). Now,   ∂2 n n I (θ ) = E θ − 2 log L n (θ |X ) = 2 . ∂θ θ Thus, θˆn = θ˜n is CAN for θ with approximate variance θ 2 /n. (iv) From the probability density function of X ,  ∞ θ d x = 2−θ = g(θ ), say. P[X ≥ 2] = θ x +1 2 It is clear that g is a differentiable function and g  (θ ) = −(1/2θ )2 2θ log 2 = ˆ −2−θ log 2 = 0, ∀ θ > 0. Hence by the delta method, 2−θn is CAN for 2−θ 2 −θ 2 2 with approximate variance (θ /n) × (2 log 2) = θ (log 2)2 /4θ n. 4.6.2 Suppose X ≡ {X 1 , X 2 , . . . , X n } is a random sample from a binomial B(m, θ ) distribution, truncated at 0, 0 < θ < 1 and m is a known positive integer. Examine whether the distribution belongs to a one-parameter exponential family. Find the moment estimator of θ based on a sufficient statistics and the maximum likelihood estimator of θ . Examine whether the two are the same and whether these are CAN. Find their approximate variances. Solution: The distribution of X is binomial B(m, θ ), truncated at 0, hence its probability mass function is given by   x m θ (1 − θ )m−x ⇒ log f (x, θ ) f (x, θ ) = Pθ [X = x] = x (1 − (1 − θ )m ) = U (θ )K (x) + V (θ ) + W (x), where x = 1, 2, . . . , m and U (θ ) = log θ − log(1 − θ ),   V (θ ) = m log(1 − θ ) − log(1 − (1 − θ )m ), K(x) = x and W (x) = mx . Further, U and V are differentiable functions of θ and can be differentiated any number of times and U  (θ ) = 1/θ (1 − θ ) = 0. 1 and K (x) are linearly independent. The parameter space is an open set and support of X is free from θ . Thus, binomial B(m, θ ) distribution, truncated at 0, is a member of a one-parameter exponential family. Hence by Theorem 4.2.1, the moment estimator of θ based on a sufficient statistics is the same as the maximum likelihood estimator of θ and it is CAN with approximate variance 1/n I (θ ). Corresponding to an random sample of size n from the distribution of X , n i=1 K (X i ) = i=1 X i is a sufficient statistics. The moment estimator of θ based on the sufficient statistics is then given by the equation, X n = E(X ) = mθ (1 − (1 − θ )m )−1 = η(θ ) say. It is to be noted that

η'(θ) = m[1 − (1 − θ)^m − mθ(1 − θ)^(m−1)] / (1 − (1 − θ)^m)² = m P_θ[Y > 1] / (1 − (1 − θ)^m)² > 0, ∀ θ ∈ (0, 1),

where Y ∼ B(m, θ) with support {0, 1, ..., m}. Hence by the inverse function theorem, η^(−1) exists and, using the numerical methods discussed in Sect. 4.4, we get the moment estimator θ̃_n of θ based on the sufficient statistic as θ̃_n = η^(−1)(X̄_n). To find the maximum likelihood estimator, the likelihood and the log-likelihood of θ are given by

L_n(θ|X) = ∏_{i=1}^n C(m, X_i) θ^(X_i) (1 − θ)^(m−X_i) / (1 − (1 − θ)^m)
and log L_n(θ|X) = c + log(θ/(1 − θ)) Σ_{i=1}^n X_i + mn log(1 − θ) − n log(1 − (1 − θ)^m),

where c is a constant free from θ. Thus, the likelihood equation is

Σ_{i=1}^n X_i / [θ(1 − θ)] − mn/(1 − θ) − mn(1 − θ)^(m−1)/(1 − (1 − θ)^m) = 0  ⇔  X̄_n = mθ/(1 − (1 − θ)^m).

Thus, the maximum likelihood estimator θ̂_n of θ is given by θ̂_n = η^(−1)(X̄_n), which is the same as the moment estimator based on the sufficient statistic. The information function I(θ) is given by

I(θ) = U'(θ) η'(θ) = m[1 − (1 − θ)^m − mθ(1 − θ)^(m−1)] / [θ(1 − θ)(1 − (1 − θ)^m)²].

Thus, θ̃_n = θ̂_n is CAN for θ with approximate variance 1/nI(θ).

4.6.3 Suppose {X_1, X_2, ..., X_n} is a random sample from a distribution of X with probability density function (i) f(x, θ) = (x/θ) exp{−x²/2θ}, x > 0, θ > 0 and (ii) f(x, θ) = (3x²/θ³) exp{−x³/θ³}, x > 0, θ > 0. Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from these distributions, find the moment estimator based on a sufficient statistic and the maximum


likelihood estimator of θ. Examine whether the two are the same and whether these are CAN. Find their approximate variances.

Solution: (i) It is easy to verify that the distribution of X belongs to a one-parameter exponential family. Hence by Theorem 4.2.1, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN with approximate variance 1/nI(θ). The maximum likelihood estimator θ̂_n of θ is given by θ̂_n = Σ_{i=1}^n X_i²/2n, which is the same as the moment estimator based on the sufficient statistic. The information function I(θ) is given by

I(θ) = E[−∂²/∂θ² log f(X, θ)] = E[−1/θ² + X²/θ³] = 1/θ².

Thus, θ̃_n = θ̂_n is CAN for θ with approximate variance 1/nI(θ) = θ²/n.

(ii) This distribution also belongs to a one-parameter exponential family. Hence by Theorem 4.2.1, the moment estimator of θ based on a sufficient statistic is the same as the maximum likelihood estimator of θ and it is CAN with approximate variance 1/nI(θ). The maximum likelihood estimator θ̂_n of θ is given by θ̂_n = (m_3)^(1/3), where m_3 = Σ_{i=1}^n X_i³/n is the third sample raw moment, which is the same as the moment estimator based on the sufficient statistic. The information function I(θ) is given by

I(θ) = E[−∂²/∂θ² log f(X, θ)] = E[−3/θ² + 12X³/θ⁵] = 9/θ².

Thus, θ̃_n = θ̂_n is CAN for θ with approximate variance 1/nI(θ) = θ²/9n.

4.6.4 Suppose X has a logarithmic series distribution with probability mass function given by

p(x, θ) = (−1/log(1 − θ)) θ^x/x, x = 1, 2, ..., 0 < θ < 1.

Show that a logarithmic series distribution is a power series distribution. On the basis of a random sample from a logarithmic series distribution, find the moment estimator of θ based on a sufficient statistic and the maximum likelihood estimator of θ. Examine whether the two are the same and whether these are CAN. Find their approximate variances.

Solution: The probability mass function of X can be expressed as

p(x, θ) = (−1/log(1 − θ)) θ^x/x = a_x θ^x / A(θ), x = 1, 2, ...,

where a_x = 1/x for x = 1, 2, ..., and A(θ) = −log(1 − θ), 0 < θ < 1. Thus, a logarithmic series distribution is a power series distribution. Suppose


{X_1, X_2, ..., X_n} is a random sample from a logarithmic series distribution. Hence, as shown in Example 4.2.4, the moment estimator of θ based on the sufficient statistic Σ_{i=1}^n X_i is the same as the maximum likelihood estimator of θ. It is CAN with approximate variance 1/nI(θ). To find the expression for the estimator and I(θ), we find

E(X) = Σ_{x≥1} x (−1/log(1 − θ)) θ^x/x = −θ(1 − θ)^(−1)/log(1 − θ).

Thus, the moment estimator of θ based on the sufficient statistic, which is the same as the maximum likelihood estimator of θ, is a solution of the equation X̄_n = −θ(1 − θ)^(−1)/log(1 − θ). It exists uniquely and can be obtained by the numerical methods discussed in Sect. 4.4; an R illustration is given below. Now, to find the information function, we have

log p(x, θ) = −log(−log(1 − θ)) + x log θ − log x,
∂/∂θ log p(x, θ) = 1/[(1 − θ) log(1 − θ)] + x/θ,
∂²/∂θ² log p(x, θ) = 1/[(1 − θ)² log(1 − θ)] + 1/[(1 − θ)² (log(1 − θ))²] − x/θ².

Hence the information function I(θ) is given by

I(θ) = E[−∂²/∂θ² log p(X, θ)]
     = −1/[(1 − θ)² log(1 − θ)] − 1/[(1 − θ)² (log(1 − θ))²] − 1/[θ(1 − θ) log(1 − θ)]
     = −(θ + log(1 − θ)) / [θ(1 − θ)²(log(1 − θ))²].

Suppose g(θ) = θ + log(1 − θ); then g(0) = 0 and g'(θ) = 1 − 1/(1 − θ) = −θ/(1 − θ) < 0, which implies that g is a decreasing function. Hence θ > 0 ⇒ g(θ) < g(0) = 0. Thus, −(θ + log(1 − θ)) > 0, implying that I(θ) > 0.
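As a numerical illustration of the preceding remark, the following R sketch (not part of the original solution) solves the estimating equation with uniroot on simulated data; the helper rlogser and the chosen values of θ and n are illustrative assumptions.

# Sketch: solving the logarithmic series estimating equation of 4.6.4 numerically
set.seed(123)
theta <- 0.6; n <- 200                      # illustrative true value and sample size
rlogser <- function(n, theta) {             # crude simulator: pmf truncated at x = 1000
  x <- 1:1000
  p <- -theta^x / (x * log(1 - theta))
  sample(x, n, replace = TRUE, prob = p)
}
xbar <- mean(rlogser(n, theta))
eqn <- function(t) -t / ((1 - t) * log(1 - t)) - xbar
theta_hat <- uniroot(eqn, interval = c(1e-6, 1 - 1e-6))$root
theta_hat                                   # common value of the moment and ML estimators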

4.6.5 Suppose (X, Y)′ has a bivariate normal distribution with zero mean vector and dispersion matrix Σ given by

Σ = σ² [ 1  ρ ; ρ  1 ],  σ² > 0, −1 < ρ < 1.

On the basis of a random sample of size n from the distribution of (X, Y)′, find the maximum likelihood estimator of (σ², ρ)′ and examine if it is CAN. Find the approximate dispersion matrix.


Solution: Z = (X, Y)′ has a bivariate normal distribution, hence its probability density function f(x, y, σ², ρ) is given by

f(x, y, σ², ρ) = [1/(2πσ²√(1 − ρ²))] exp{−(x² − 2ρxy + y²)/(2σ²(1 − ρ²))},

(x, y) ∈ R², σ² > 0, −1 < ρ < 1. In Example 4.2.5, we have shown that the bivariate normal distribution belongs to a two-parameter exponential family. Hence by Theorem 4.2.3, the moment estimator of (σ², ρ)′ based on a sufficient statistic is the same as the maximum likelihood estimator of (σ², ρ)′ and it is CAN for (σ², ρ)′ with approximate dispersion matrix I^(−1)(σ², ρ)/n. The sufficient statistic for the family is (Σ_{i=1}^n (X_i² + Y_i²), Σ_{i=1}^n X_i Y_i)′, hence the moment estimator for (σ², ρ)′ based on the sufficient statistic is given by the following system of equations:

(1/n) Σ_{i=1}^n (X_i² + Y_i²) = E(K_1(X, Y)) = E(X² + Y²) = 2σ²,
(1/n) Σ_{i=1}^n X_i Y_i = E(K_2(X, Y)) = E(XY) = σ²ρ.

Hence, the moment estimator for (σ², ρ)′ based on the sufficient statistic, which is the same as the maximum likelihood estimator of (σ², ρ)′, is given by

σ̂_n² = (1/2n) Σ_{i=1}^n (X_i² + Y_i²)  and  ρ̂_n = Σ_{i=1}^n X_i Y_i / (n σ̂_n²) = 2 Σ_{i=1}^n X_i Y_i / Σ_{i=1}^n (X_i² + Y_i²).

It is to be noted that for this example, it is simpler to find the moment estimator than the maximum likelihood estimator of (σ², ρ)′. The information matrix I(σ², ρ) is derived in Example 4.2.5 and is given by

I(σ², ρ) = [ 1/σ⁴ , −ρ/(σ²(1 − ρ²)) ; −ρ/(σ²(1 − ρ²)) , (1 + ρ²)/(1 − ρ²)² ].

The inverse of I(σ², ρ) is given by

I^(−1)(σ², ρ) = [ (1 + ρ²)σ⁴ , σ²ρ(1 − ρ²) ; σ²ρ(1 − ρ²) , (1 − ρ²)² ].

Thus, (σ̂_n², ρ̂_n)′ is a CAN estimator of (σ², ρ)′ with approximate dispersion matrix I^(−1)(σ², ρ)/n.
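A short R sketch (not from the text) of the closed-form estimators in 4.6.5 follows; it assumes the MASS package is available for simulating the sample, and the parameter values are illustrative.

# Sketch: MLE of (sigma^2, rho)' from a zero-mean bivariate normal sample
library(MASS)
set.seed(1)
n <- 500; sigma2 <- 2; rho <- 0.5
z <- mvrnorm(n, mu = c(0, 0), Sigma = sigma2 * matrix(c(1, rho, rho, 1), 2, 2))
x <- z[, 1]; y <- z[, 2]
sigma2_hat <- sum(x^2 + y^2) / (2 * n)
rho_hat    <- 2 * sum(x * y) / sum(x^2 + y^2)
c(sigma2_hat, rho_hat)
# estimated approximate dispersion matrix I^{-1}(sigma^2, rho)/n, evaluated at the estimates
I_mat <- matrix(c(1 / sigma2_hat^2, -rho_hat / (sigma2_hat * (1 - rho_hat^2)),
                  -rho_hat / (sigma2_hat * (1 - rho_hat^2)),
                  (1 + rho_hat^2) / (1 - rho_hat^2)^2), 2, 2)
solve(I_mat) / n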


4.6.6 Suppose (X, Y)′ is a random vector with joint probability mass function

p_xy = P[X = x, Y = y] = e^(−λ) λ^x C(x, y) p^y (1 − p)^(x−y)/x!,  y = 0, 1, ..., x, x = 0, 1, 2, ...,

where C(x, y) is the binomial coefficient, λ > 0 and 0 < p < 1. Examine if the distribution belongs to a two-parameter exponential family. Hence, find a CAN estimator for (λ, p)′ and its approximate dispersion matrix.

Solution: It is to be noted that (i) the parameter space Θ = {(λ, p)′ | λ > 0, 0 < p < 1} is an open set. (ii) The support S of (X, Y)′ is S = {(x, y) | y = 0, 1, ..., x; x = 0, 1, 2, ...}, which does not depend on the parameters. (iii) The logarithm of the probability mass function of (X, Y)′ is

log p_xy = log C(x, y) − log x! − λ + x(log λ + log(1 − p)) + y(log p − log(1 − p))
         = W(x, y) + V(λ, p) + K_1(x, y)U_1(λ, p) + K_2(x, y)U_2(λ, p),

where W(x, y) = log C(x, y) − log x!, V(λ, p) = −λ, K_1(x, y) = x, U_1(λ, p) = log λ + log(1 − p), K_2(x, y) = y and U_2(λ, p) = log p − log(1 − p). (iv) The matrix J = (dU_i/dθ_j) of partial derivatives, where (θ_1, θ_2)′ = (λ, p)′, is given by

J = [ 1/λ , −1/(1 − p) ; 0 , 1/(p(1 − p)) ].

It is clear that U_1 and U_2 have continuous partial derivatives with respect to λ and p and |J| = |dU_i/dθ_j| = 1/[λp(1 − p)] ≠ 0. (v) The functions 1, x and y


are linearly independent. Thus, the joint distribution of (X, Y)′ satisfies all the requirements of a two-parameter exponential family and hence it belongs to a two-parameter exponential family. Hence by Theorem 4.2.3, based on a random sample of size n, the moment estimator of (λ, p)′ based on a sufficient statistic is the same as the maximum likelihood estimator of (λ, p)′ and it is CAN for (λ, p)′ with approximate dispersion matrix I^(−1)(λ, p)/n. In Example 3.3.4, we have shown that the moment estimator based on the sufficient statistic is the same as the maximum likelihood estimator of (λ, p)′ and it is given by (λ̃_n, p̃_n)′ = (X̄_n, Ȳ_n/X̄_n)′. Further, the information matrix as obtained in Example 3.3.4 is I(λ, p) = diag[1/λ, λ/(p(1 − p))].

4.6.7 Suppose (X, Y)′ has a bivariate normal distribution with mean vector (μ_1, μ_2)′ and dispersion matrix Σ given by

Σ = [ 1  ρ ; ρ  1 ],

where ρ ≠ 0 is known and μ_1, μ_2 ∈ R. Show that the distribution belongs to a two-parameter exponential family. Hence, find a CAN estimator of (μ_1, μ_2)′ and its approximate dispersion matrix.

Solution: The probability density function of (X, Y)′ is given by

f(x, y, μ_1, μ_2) = [1/(2π√(1 − ρ²))] exp{−[(x − μ_1)² − 2ρ(x − μ_1)(y − μ_2) + (y − μ_2)²]/(2(1 − ρ²))},

(x, y) ∈ R², μ_1, μ_2 ∈ R. Thus, the support of the distribution does not depend on the parameters and the parameter space is an open set. Observe that log f(x, y, μ_1, μ_2) can be expressed as follows. Suppose c is a constant free from the parameters; then

log f(x, y, μ_1, μ_2) = c − [x² + y² − 2ρxy + μ_1² + μ_2² − 2ρμ_1μ_2]/(2(1 − ρ²)) + [2μ_1(x − ρy) + 2μ_2(y − ρx)]/(2(1 − ρ²))
                      = U_1(μ_1, μ_2)K_1(x, y) + U_2(μ_1, μ_2)K_2(x, y) + V(μ_1, μ_2) + W(x, y),

where U_1(μ_1, μ_2) = μ_1/(1 − ρ²), K_1(x, y) = x − ρy, U_2(μ_1, μ_2) = μ_2/(1 − ρ²), K_2(x, y) = y − ρx, V(μ_1, μ_2) = −(μ_1² + μ_2² − 2ρμ_1μ_2)/(2(1 − ρ²)) and W(x, y) = −(x² + y² − 2ρxy)/(2(1 − ρ²)). The matrix J = (dU_i/dθ_j) of partial derivatives, where (θ_1, θ_2)′ = (μ_1, μ_2)′, is given by

J = [1/(1 − ρ²)] [ 1  0 ; 0  1 ].


It is clear that U_1 and U_2 have continuous partial derivatives with respect to μ_1 and μ_2 and |J| = |dU_i/dθ_j| = 1/(1 − ρ²)² ≠ 0. (v) It is easy to verify that the functions 1, (x − ρy) and (y − ρx) are linearly independent. Thus, the joint distribution of (X, Y)′ satisfies all the requirements of a two-parameter exponential family and hence it belongs to a two-parameter exponential family. Hence by Theorem 4.2.3, based on a random sample of size n, the moment estimator of (μ_1, μ_2)′ based on a sufficient statistic is the same as the maximum likelihood estimator of (μ_1, μ_2)′ and it is CAN for (μ_1, μ_2)′ with approximate dispersion matrix I^(−1)(μ_1, μ_2)/n. To find the information matrix, we find the partial derivatives of log f(x, y, μ_1, μ_2). These are as follows:

∂/∂μ_1 log f(x, y, μ_1, μ_2) = −[2μ_1 − 2ρμ_2 − 2(x − ρy)]/(2(1 − ρ²)),
∂²/∂μ_1² log f(x, y, μ_1, μ_2) = −2/(2(1 − ρ²)) = −1/(1 − ρ²),
∂²/∂μ_2∂μ_1 log f(x, y, μ_1, μ_2) = 2ρ/(2(1 − ρ²)) = ρ/(1 − ρ²),
∂/∂μ_2 log f(x, y, μ_1, μ_2) = −[2μ_2 − 2ρμ_1 − 2(y − ρx)]/(2(1 − ρ²)),
∂²/∂μ_2² log f(x, y, μ_1, μ_2) = −2/(2(1 − ρ²)) = −1/(1 − ρ²).

Hence, the information matrix I(μ_1, μ_2) and its inverse are given by

I(μ_1, μ_2) = [1/(1 − ρ²)] [ 1  −ρ ; −ρ  1 ]  and  I^(−1)(μ_1, μ_2) = [ 1  ρ ; ρ  1 ].

From the system of likelihood equations, the maximum likelihood estimator of (μ_1, μ_2)′ is given by (X̄_n, Ȳ_n)′. It is to be noted that the maximum likelihood estimator of (μ_1, μ_2)′ derived from the bivariate model and from the two marginal univariate models is the same. However, when ρ ≠ 0,

I_{1,1}(μ_1, μ_2) = 1/(1 − ρ²) > 1 = I(μ_1)  and  I_{2,2}(μ_1, μ_2) = 1/(1 − ρ²) > 1 = I(μ_2),

as observed in Example 4.3.4 and in Example 4.3.5.

4.6.8 Suppose a random variable X has a negative binomial distribution with parameters (k, p) and with the following probability mass function:

P[X = x] = C(x + k − 1, k − 1) p^k (1 − p)^x, x = 0, 1, 2, ....

(i) Show that the distribution belongs to a one-parameter exponential family, if k is known and p ∈ (0, 1) is unknown. Hence obtain a CAN estimator of p.


(ii) Examine whether the distribution belongs to a one-parameter exponential family, if p is known and k is an unknown positive integer. (iii) Examine whether the distribution belongs to a two-parameter exponential family, if both p ∈ (0, 1) and k are unknown, where k is a positive integer.

Solution: (i) If k is known and p ∈ (0, 1) is unknown, then the probability mass function can be expressed as

log p(x, k, p) = log C(x + k − 1, k − 1) + k log p + x log(1 − p) = U(p)K(x) + V(p) + W(x),

where U(p) = log(1 − p), K(x) = x, V(p) = k log p and W(x) = log C(x + k − 1, k − 1). Thus, (i) the probability law of X is expressible in the form required in a one-parameter exponential family, (ii) the support of the probability mass function is {0, 1, 2, ...} and it is free from p, (iii) the parameter space is (0, 1), which is an open set, (iv) U'(p) = −1/(1 − p) ≠ 0 and (v) x and 1 are linearly independent. Thus, the distribution belongs to a one-parameter exponential family if k is known and p ∈ (0, 1) is unknown. Suppose X = {X_1, X_2, ..., X_n} is a random sample from the distribution of X. By Theorem 4.2.1, the moment estimator of p based on a sufficient statistic is the same as the maximum likelihood estimator of p and it is CAN with approximate variance 1/nI(p). The moment estimator p̂_n of p based on the sufficient statistic is given by p̂_n = k/(X̄_n + k), which is the same as the maximum likelihood estimator of p. The information function I(p) is given by

I(p) = E[−∂²/∂p² log p(X, k, p)] = E[k/p² + X/(1 − p)²] = k/[p²(1 − p)].

If X follows a negative binomial distribution with parameters (k, p), then

E(X) = k(1 − p)/p  and  Var(X) = k(1 − p)/p² < ∞.

Hence by the WLLN and by the CLT,

X̄_n → k(1 − p)/p in probability  and  √n (X̄_n − k(1 − p)/p) → Z_1 ∼ N(0, k(1 − p)/p²) in law.

Thus, X̄_n is CAN for k(1 − p)/p = φ, say, with approximate variance k(1 − p)/np². To obtain a CAN estimator for p, we find a transformation g such that g(φ) = p. Suppose g(y) = k/(k + y), y > 0; then g is a differentiable function with g'(y) = −k/(k + y)², which is not 0, ∀ y > 0. Hence by

488

7

Solutions to Conceptual Exercises

the delta method, g(X n ) = k/(k + X n ) is CAN for g(φ) = p with approximate variance (k(1 − p)/np 2 ) × ( p 4 /k 2 ) = p 2 (1 − p)/nk = 1/n I ( p). (ii) If p is known and k is an unknown positive integer, the parameter space is not an open set, hence the distribution does not belong to a one-parameter exponential family. (iii) If both p ∈ (0, 1) and k are unknown, where k is a positive integer, then again the parameter space is not an open set, hence the distribution does not belong to a two-parameter exponential family. 4.6.9 Examine whether a logistic distribution with probability density function f (x, θ ) =

exp{−(x − θ )} , x ∈ R, θ ∈ R (1 + exp{−(x − θ )})2

belongs to a one-parameter exponential family. If not, examine if it belongs to a one-parameter Cramér family. If yes, find a CAN estimator of θ and its approximate variance.

Solution: If X follows a logistic distribution, then log f(x, θ) is given by

log f(x, θ) = −(x − θ) − 2 log(1 + exp{−(x − θ)}),

and it cannot be expressed in the form U(θ)K(x) + V(θ) + W(x). Hence, the distribution does not belong to a one-parameter exponential family. To examine whether it belongs to a Cramér family, we note that (i) the support of the probability density function is R and it is free from θ, and (ii) the parameter space is R, which is an open set. The partial derivatives of log f(x, θ) up to order 3 are given by

∂/∂θ log f(x, θ) = 1 − 2 exp{−(x − θ)}/(1 + exp{−(x − θ)}),
∂²/∂θ² log f(x, θ) = −2 exp{−(x − θ)}/(1 + exp{−(x − θ)})²,
∂³/∂θ³ log f(x, θ) = −2 exp{−(x − θ)}(1 − exp{−(x − θ)})/(1 + exp{−(x − θ)})³.

Thus, partial derivatives of log f(x, θ) up to order 3 exist. Now observe that

|∂³/∂θ³ log f(x, θ)| ≤ 2 exp{−(x − θ)}/(1 + exp{−(x − θ)})³ + 2 exp{−2(x − θ)}/(1 + exp{−(x − θ)})³ = M(x),


say. Further,

E(M(X)) = ∫_{−∞}^{∞} M(x) f(x, θ) dx
        = ∫_{−∞}^{∞} [2 exp{−2(x − θ)}/(1 + exp{−(x − θ)})⁵ + 2 exp{−3(x − θ)}/(1 + exp{−(x − θ)})⁵] dx
        = ∫_0^{∞} 2y/(1 + y)⁵ dy + ∫_0^{∞} 2y²/(1 + y)⁵ dy < ∞,

with the substitution exp{−(x − θ)} = y. Thus, the third order partial derivative of the log-likelihood is bounded by an integrable random variable. From the second order partial derivative of the log-likelihood we have

I(θ) = E[2 exp{−(X − θ)}/(1 + exp{−(X − θ)})²]
     = ∫_{−∞}^{∞} 2 exp{−2(x − θ)}/(1 + exp{−(x − θ)})⁴ dx
     = 2 ∫_0^{∞} y/(1 + y)⁴ dy, again with exp{−(x − θ)} = y,
     = 2B(2, 2) = 2/6 = 1/3.

Thus, 0 < I(θ) < ∞. Hence all the Cramér regularity conditions are satisfied and the logistic distribution belongs to a one-parameter Cramér family, so the maximum likelihood estimator of θ is CAN for θ with approximate variance 3/n. The likelihood equation is given by

2 Σ_{i=1}^n exp{−(X_i − θ)}/(1 + exp{−(X_i − θ)}) − n = 0.

We can obtain its solution either by the Newton-Raphson procedure or by the method of scoring (see Exercise 4.7.10). As an initial iterative value one may take the sample median or the sample mean, as both are consistent for θ. The method of scoring is easier, as the information function I(θ) = 1/3 is free from θ. We have discussed in Example 4.5.3 how to find its solution using the uniroot function.

4.6.10 Suppose a random variable X follows a Cauchy C(θ, λ) distribution with location parameter θ and shape parameter λ. Examine whether the distribution belongs to a two-parameter Cramér family.

Solution: Part of the solution is given in Example 4.3.1 and in Example 4.5.6. It remains to find the third order partial derivatives and verify that these are bounded by integrable functions.
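The following R sketch (not part of the original solution) shows one way to solve the likelihood equation of 4.6.9 with uniroot and to take a single method-of-scoring step; the simulated sample and the starting value are illustrative assumptions.

# Sketch: logistic location MLE via the score equation of 4.6.9
set.seed(7)
x <- rlogis(100, location = 2)                           # simulated data, true theta = 2
U <- function(t) length(x) - 2 * sum(plogis(-(x - t)))   # score function of theta
theta_hat <- uniroot(U, interval = range(x) + c(-1, 1))$root
theta_hat
theta0 <- median(x)                                      # consistent starting value
theta1 <- theta0 + 3 * U(theta0) / length(x)             # one scoring step, using I(theta) = 1/3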


4.6.11 Suppose {X_1, X_2, ..., X_n} is a random sample from a Poisson distribution with parameter θ > 0. An estimator T_n is defined as

T_n = X̄_n if X̄_n > 0, and T_n = 0.01 if X̄_n = 0.

Show that T_{1n} = e^(−T_n) is a CAN estimator of P[X_1 = 0] = e^(−θ). Find its approximate variance. Suppose random variables Y_i, i = 1, 2, ..., n, are defined as Y_i = 1 if X_i = 0 and Y_i = 0 otherwise. Obtain a CAN estimator T_{2n} for e^(−θ) based on {Y_1, Y_2, ..., Y_n} and find its approximate variance. Find ARE(T_{1n}, T_{2n}).

Solution: For a Poisson distribution with parameter θ > 0, E(X) = Var(X) = θ < ∞. Hence by the WLLN and by the CLT, X̄_n is CAN for θ with approximate variance θ/n. Observe that, for ε > 0,

P[|T_n − X̄_n| < ε] ≥ P[T_n = X̄_n] = P[X̄_n > 0] = 1 − exp(−nθ) → 1, for θ > 0.

Hence, T_n is also consistent for θ. Similarly,

P[√n |T_n − X̄_n| < ε] ≥ P[T_n = X̄_n] = P[X̄_n > 0] = 1 − exp(−nθ) → 1, for θ > 0.

Thus √n(T_n − X̄_n) → 0 in P_θ-probability, ∀ θ > 0, so √n(T_n − θ) and √n(X̄_n − θ) have the same limiting distribution. By the CLT, √n(X̄_n − θ) → Z_1 ∼ N(0, θ) in law and hence √n(T_n − θ) → Z_1 ∼ N(0, θ) in law, ∀ θ > 0, which proves that T_n is CAN for θ with approximate variance θ/n. Suppose g(θ) = e^(−θ); then g is differentiable with g'(θ) = −e^(−θ) ≠ 0. Hence by the delta method, e^(−T_n) is CAN for e^(−θ) = P[X_1 = 0] with approximate variance θe^(−2θ)/n.

In the other approach to find a CAN estimator of P[X_1 = 0], we use the random variables Y_i defined above. Since {X_1, X_2, ..., X_n} are independent and identically distributed, being Borel functions, {Y_1, Y_2, ..., Y_n} are also independent and identically distributed random variables, each having a Bernoulli B(1, p) distribution where p = E(Y_i) = P[X_i = 0] = e^(−θ). Further, Var(Y_i) = e^(−θ)(1 − e^(−θ)) < ∞. Hence by the WLLN and the CLT, the sample mean Ȳ_n = T_{2n}, say, is CAN for e^(−θ) with approximate variance e^(−θ)(1 − e^(−θ))/n. Hence,

ARE(T_{1n}, T_{2n}) = e^(−θ)(1 − e^(−θ)) / (θ e^(−2θ)) = (e^θ − 1)/θ = 1 + Σ_{r≥2} θ^(r−1)/r! > 1.

Thus, T_{1n} is preferred to T_{2n}.
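A small Monte Carlo sketch in R (not from the text) comparing T_{1n} and T_{2n}; the values of θ, n and the number of replications are illustrative.

# Sketch: simulated comparison of T1n = exp(-Tn) and T2n = proportion of zeros
set.seed(42)
theta <- 1.5; n <- 100; nsim <- 5000
T1 <- T2 <- numeric(nsim)
for (s in 1:nsim) {
  x <- rpois(n, theta)
  Tn <- if (mean(x) > 0) mean(x) else 0.01
  T1[s] <- exp(-Tn)
  T2[s] <- mean(x == 0)
}
c(var(T2) / var(T1), (exp(theta) - 1) / theta)   # simulated ratio versus theoretical ARE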

7.4 Chapter 5

5.4.1 Suppose {X_1, X_2, ..., X_n} is a random sample from a Laplace distribution with probability density function f(x, θ) given by

f(x, θ) = (1/2θ) exp{−|x|/θ}, x ∈ R, θ > 0.

Derive a large sample test procedure for testing H_0: θ = θ_0 against the alternative H_1: θ < θ_0, when the test statistic is a function of U_n = Σ_{i=1}^n |X_i|/n. Find the power function.

Solution: If Y = |X|, then it is easy to verify that Y follows an exponential distribution with mean θ. Hence by the WLLN and the CLT, U_n is CAN for θ with approximate variance θ²/n. Hence we define the test statistics as

T_n = (√n/θ_0)(U_n − θ_0)  or  S_n = (√n/U_n)(U_n − θ_0).

The asymptotic null distribution of both T_n and S_n is the standard normal. Hence H_0: θ = θ_0 is rejected against the alternative H_1: θ < θ_0 if T_n < −a_{1−α} or S_n < −a_{1−α}. The power function corresponding to the test statistic T_n is given by

β(θ) = P_θ[T_n < −a_{1−α}] = P_θ[(√n/θ_0)(U_n − θ_0) < −a_{1−α}]
     = P_θ[(√n/θ)(U_n − θ) < (√n/θ)(θ_0 − θ) − a_{1−α} θ_0/θ]
     = Φ((√n/θ)(θ_0 − θ) − a_{1−α} θ_0/θ).

The power function corresponding to the test statistic S_n can be obtained on similar lines.
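A brief R sketch (not from the text) of the test and power function in 5.4.1; the helper names laplace_test and power_Tn are illustrative.

# Sketch: large sample test for the Laplace scale parameter and its power
laplace_test <- function(x, theta0, alpha = 0.05) {
  Un <- mean(abs(x))
  Tn <- sqrt(length(x)) * (Un - theta0) / theta0
  list(statistic = Tn, reject = Tn < -qnorm(1 - alpha))
}
power_Tn <- function(theta, theta0, n, alpha = 0.05)
  pnorm(sqrt(n) * (theta0 - theta) / theta - qnorm(1 - alpha) * theta0 / theta)
power_Tn(theta = c(1, 0.8, 0.6), theta0 = 1, n = 50)   # power grows as theta falls below theta0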


5.4.2 Suppose {X_1, X_2, ..., X_n} is a random sample from a Cauchy C(θ, 1) distribution. Derive a large sample test procedure for testing H_0: θ = 0 against the alternative H_1: θ ≠ 0. Obtain the power function.

Solution: Suppose X ∼ C(θ, 1). Then, given a random sample of size n from the distribution of X, the maximum likelihood estimator θ̂_n of θ is CAN for θ with approximate variance 2/n. Hence the test statistic T_n is defined as T_n = √(n/2) θ̂_n. For large n, under H_0, T_n ∼ N(0, 1). The null hypothesis H_0 is rejected against H_1 if |T_n| > c, where c is determined corresponding to the given level of significance α and the asymptotic null distribution of T_n. Thus, c = a_{1−α/2}. The power function β(θ) is derived as follows:

β(θ) = P_θ[|T_n| > c] = 1 − P_θ[−c < √(n/2) θ̂_n < c]
     = 1 − P_θ[−c√2/√n < θ̂_n < c√2/√n]
     = 1 − P_θ[−c − θ√n/√2 < √(n/2)(θ̂_n − θ) < c − θ√n/√2]
     = 1 − Φ(c − θ√n/√2) + Φ(−c − θ√n/√2).

It is to be noted that at θ = 0, β(θ) = α.

5.4.3 Suppose X ≡ {X_1, X_2, ..., X_n} is a random sample from an exponential distribution with scale parameter 1 and location parameter θ. Develop a test procedure to test H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0.

Solution: The probability density function of a random variable X having an exponential distribution with location parameter θ and scale parameter 1 is given by f_X(x, θ) = exp{−(x − θ)}, x ≥ θ. In Example 3.2.7, it is shown that X_(1) is the maximum likelihood estimator of θ and that the distribution of X_(1) is exponential with location parameter θ and scale parameter n. If we define Y_n = n(X_(1) − θ), then its distribution is exponential with location parameter 0 and scale parameter 1. To test H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0, we propose a test statistic T_n as T_n = n(X_(1) − θ_0). Under H_0: θ = θ_0, T_n has the exponential distribution with scale parameter 1. H_0 is rejected against H_1: θ ≠ θ_0 if T_n > c, where c is such that

P_{θ_0}[T_n > c] = α ⇔ exp{−c} = α ⇔ c = −log α.
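The power function of 5.4.2 is easy to evaluate numerically; the R sketch below (not from the text) does so for a few illustrative values of θ.

# Sketch: power of the two-sided test for the Cauchy location parameter
power_cauchy <- function(theta, n, alpha = 0.05) {
  cc <- qnorm(1 - alpha / 2)
  1 - pnorm(cc - theta * sqrt(n / 2)) + pnorm(-cc - theta * sqrt(n / 2))
}
power_cauchy(theta = c(0, 0.25, 0.5, 1), n = 50)   # equals alpha at theta = 0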


5.4.4 Suppose {X_1, X_2, ..., X_n} is a random sample from a normal distribution with mean μ and variance σ². Develop a test procedure to test the hypothesis H_0: P[X < a] = p_0 against H_1: P[X < a] ≠ p_0.

Solution: In Example 3.3.2, it is shown that μ̂_n = X̄_n and σ̂_n² = (1/n) Σ_{i=1}^n (X_i − X̄_n)² are the maximum likelihood estimators of μ and σ², respectively. Further, (μ̂_n, σ̂_n²)′ is CAN for (μ, σ²)′ with approximate dispersion matrix D/n, where

D = [ σ²  0 ; 0  2σ⁴ ].

Now P[X < a] = Φ((a − μ)/σ) = g(μ, σ²), where g: R² → R is such that

g(x_1, x_2) = Φ((a − x_1)/√x_2)
⇒ ∂g/∂x_1 = −φ((a − x_1)/√x_2) (1/√x_2)  and  ∂g/∂x_2 = −φ((a − x_1)/√x_2) (a − x_1)/(2x_2^(3/2)).

These partial derivatives are continuous and hence g is a totally differentiable function. The vector ∇g of these partial derivatives evaluated at (μ, σ²)′ is

∇g = (−φ((a − μ)/σ)(1/σ), −φ((a − μ)/σ)(a − μ)/(2σ³))′.

Hence by the delta method, Φ((a − μ̂_n)/σ̂_n) is CAN for P[X < a] = Φ((a − μ)/σ) with approximate variance v(μ, σ²) = ∇g′ D ∇g / n, where

∇g′ D ∇g = σ² φ²((a − μ)/σ)/σ² + 2σ⁴ φ²((a − μ)/σ)(a − μ)²/(4σ⁶)
         = [e^(−(a−μ)²/σ²)/2π] [1 + (a − μ)²/(2σ²)].

Consequently, by Slutsky's theorem,

U_n = [Φ((a − μ̂_n)/σ̂_n) − Φ((a − μ)/σ)] / √v(μ, σ²) → Z ∼ N(0, 1) in law
and V_n = [Φ((a − μ̂_n)/σ̂_n) − Φ((a − μ)/σ)] / √v(μ̂_n, σ̂_n²) → Z ∼ N(0, 1) in law.

Hence we propose the test statistic T_n as

T_n = [Φ((a − μ̂_n)/σ̂_n) − p_0] / √v(μ̂_n, σ̂_n²).


H_0 is rejected if |T_n| > c, where c is such that P_{H_0}[|T_n| > c] = α. For large n, under H_0, T_n ∼ N(0, 1). Hence c = a_{1−α/2}.

5.4.5 Suppose X ≡ {X_1, X_2, ..., X_n} is a random sample from a Poisson distribution with parameter θ. Obtain the likelihood ratio test to test H_0: P[X = 0] = 1/3 against the alternative H_1: P[X = 0] ≠ 1/3.

Solution: Suppose X ∼ P(θ). Testing H_0: P[X = 0] = 1/3 against H_1: P[X = 0] ≠ 1/3 is equivalent to testing H_0: θ = θ_0 against H_1: θ ≠ θ_0, where θ_0 = log_e 3. Now the entire parameter space is Θ = (0, ∞) and the null space is Θ_0 = {θ_0}. The maximum likelihood estimator of θ in the entire parameter space is X̄_n. Hence, the likelihood ratio test statistic λ(X) is given by

λ(X) = sup_{Θ_0} L_n(θ|X) / sup_{Θ} L_n(θ|X) = e^(−nθ_0) θ_0^(nX̄_n) / [e^(−nX̄_n) X̄_n^(nX̄_n)].

The null hypothesis H_0 is rejected against the alternative H_1 if λ(X) < c, that is, −2 log λ(X) > c_1. If the sample size is large, then −2 log λ(X) ∼ χ_1² and H_0 is rejected if −2 log λ(X) > χ²_{1,1−α}, where χ²_{1,1−α} is the (1 − α)-th quantile of the χ_1² distribution.

5.4.6 Suppose {X_1, X_2, ..., X_n} is a random sample from a lognormal distribution with parameters μ and σ². Derive a large sample test procedure to test H_0: μ = μ_0, σ² = σ_0² against H_1: μ ≠ μ_0, σ² ≠ σ_0².

Solution: In Exercise 3.5.31 we have obtained a CAN estimator of (μ, σ²)′. Using it and the procedure adopted in Example 5.2.2, we can develop a test procedure to test H_0: μ = μ_0, σ² = σ_0² against H_1: μ ≠ μ_0, σ² ≠ σ_0².

5.4.7 Suppose {X_1, X_2, ..., X_n} is a random sample from a Gamma G(α, λ) distribution. Derive a large sample test procedure to test H_0: α = α_0 against H_1: α ≠ α_0 when λ is (i) known and (ii) unknown.

Solution: Suppose X follows a Gamma G(α, λ) distribution. Then E(X) = λ/α and Var(X) = λ/α². (i) If λ is known, then by the WLLN and CLT, X̄_n is CAN for λ/α with approximate variance λ/nα². By the delta method, U_n = λ/X̄_n is CAN for α with approximate variance α²/nλ. Hence we define a test statistic as

T_n = (√(nλ)/α_0)(U_n − α_0)  or  S_n = (√(nλ)/U_n)(U_n − α_0).

The null hypothesis is rejected if |T_n| > c or |S_n| > c. The cut-off c is determined using the given level of significance and the asymptotic null distribution. Under H_0, the asymptotic distribution of both test statistics is standard normal.


(ii) Suppose λ is unknown. In Exercise 3.5.32 we have shown that (m_1/m_2, m_1²/m_2)′ is CAN for (α, λ)′ with approximate dispersion matrix D/n, where

D = λ [ 3α²/λ² + 2α²/λ , 2α/λ + 2α ; 2α/λ + 2α , 2λ + 2 ].

Hence, α̂_n = m_1/m_2 is CAN for α with approximate variance v(α, λ) = λ(3α²/λ² + 2α²/λ)/n. Further, λ̂_n = m_1²/m_2 is a consistent estimator of λ. A large sample test procedure to test H_0: α = α_0 against H_1: α ≠ α_0 is then based on one of the following two test statistics, both of which have the standard normal as their asymptotic null distribution:

T_n = (α̂_n − α_0)/√v(α_0, λ̂_n)  or  S_n = (α̂_n − α_0)/√v(α̂_n, λ̂_n).

The null hypothesis is rejected if |T_n| > c or |S_n| > c, where c = a_{1−α/2}.
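The following R sketch (not from the text) computes the statistics of part (ii) from simulated data, using the approximate variance displayed above; the parameter values are illustrative and, in this parameterization, α plays the role of the rate.

# Sketch: test for the gamma parameter alpha with lambda unknown (5.4.7 (ii))
set.seed(3)
x <- rgamma(200, shape = 4, rate = 2)    # E(X) = lambda/alpha with lambda = 4, alpha = 2
m1 <- mean(x); m2 <- mean((x - m1)^2)
alpha_hat  <- m1 / m2
lambda_hat <- m1^2 / m2
v <- function(a, l) l * (3 * a^2 / l^2 + 2 * a^2 / l) / length(x)
alpha0 <- 2
Tn <- (alpha_hat - alpha0) / sqrt(v(alpha0, lambda_hat))
Sn <- (alpha_hat - alpha0) / sqrt(v(alpha_hat, lambda_hat))
c(Tn, Sn, qnorm(0.975))                  # reject H0 if |Tn| or |Sn| exceeds the cut-off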

L n (θ |X ) =

496

7

Solutions to Conceptual Exercises

where n i is number of times i occurs in the sample, i = 1, 2, 3. Likelihood is a differentiable function of θ , hence the likelihood equation and its solution θˆn are given by −

n1 n3 + =0 1−θ θ



θˆn =

n3 . n1 + n3

n1 n3 ∂ The second order derivative ∂θ 2 log L n (θ |X ) = − (1−θ )2 − θ 2 < 0, ∀ θ ∈ (0, 1). Hence the maximum likelihood estimator of θ is θˆn = n 3 /(n 1 + n 3 ). Now the entire parameter space  is  = (0, 1) and the null space is 0 = {1/2}. Hence, the likelihood ratio test statistic λ(X ) is given by 2

sup L n (θ |X ) λ(X ) =

0

sup L n (θ |X )

=



( 21 )n ( 21 )n 1 ( 21 )n 3 (n 1 + n 3 )(n 1 +n 3 ) = . 2(n 1 +n 3 ) n n1 1 n n3 3 ( 21 )n (1 − θˆn )n 1 (θˆn )n 3

The null hypothesis H0 : θ = 1/2 against the alternative H1 : θ = 1/2 is rejected if λ(X ) < c ⇔ − 2 log λ(X ) > c1 . If sample size is large, then 2 −2 log λ(X ) ∼ χ12 distribution and H0 is rejected if −2 log λ(X ) > χ1,1−α 2 where χ1,1−α is (1 − α)-th quantile of χ12 distribution. 5.4.10 Suppose X has a discrete distribution with possible values 1, 2, 3, 4 with probabilities (2 − θ1 )/4, θ1 /4, θ2 /4, (2 − θ2 )/4 respectively. On the basis of a random sample from the distribution of X , derive a likelihood ratio test procedure to test H0 : θ1 = 1/3, θ2 = 2/3 against the alternative H1 : θ = 1/3, θ2 = 2/3. Solution: Using the procedure similar to that in Example 5.2.5, we get the solution. 5.4.11 Suppose X ≡ {X 1 , X 2 , . . . , X n 1 } is a random sample from a Bernoulli B(1, θ1 ) distribution and Y ≡ {Y1 , Y2 , . . . , Yn 2 } is a random sample from a Bernoulli B(1, θ2 ) distribution. Suppose X and Y are independent random variables. Derive a likelihood ratio test procedure for testing H0 : θ1 = θ2 against the alternative H1 : θ1 = θ2 . Solution: Suppose X ∼ B(1, θ1 ) and Y ∼ B(1, θ2 ). In the null setup θ1 = θ2 = θ , say. Then the likelihood of θ given random samples X and Y , using independence of X and Y is given by n 1 

L n (θ |X , Y ) = θ

i=1

Xi +

n2 

i=1

 Yi

  n1 n2   n 1 +n 2 − X i − Yi

(1 − θ )

i=1

i=1

.

Then the maximum likelihood n 1 n 2 estimator of θ is X i + i=1 Yi )/(n 1 + n 2 ). In the entire parameter space the θˆn 1 +n 2 = ( i=1

7.4 Chapter 5

497

maximum likelihood estimator θˆ1n 1 of θ1 is θˆ1n 1 = X n 1 and the maximum likelihood estimator θˆ2n 2 of θ2 is θˆ2n 2 = Y n 2 . The likelihood ratio test statistic λ(X ) is then given by n 1 

sup L n (θ|X ) λ(X ) =

0

sup L n (θ|X ) 

=

n 1 

θˆ

i=1 n1

Xi +

n2 

θˆn 1i=1 +n 2

i=1

Xi

Yi



n 2 

θˆ

i=1 2n 2

 Yi





(1 − θˆn 1 +n 2 )

(1 − θˆn 1 )

n 1 +n 2 −

  n1  n1 − Xi i=1

n1 

i=1

Xi −

n2 

 Yi

i=1



(1 − θˆ2n 2 )

n2 −

n2 



.

Yi

i=1

The null hypothesis is rejected if −2 log λ(X ) > c. If sample size is large, −2 log λ(X ) ∼ χ12 distribution, as in the entire parameter space we estimate two parameters and in null space we estimate one parameter. H0 is rejected if 2 2 where χ1,1−α is (1 − α)th quantile of χ12 distribution. −2 log λ(X ) > χ1,1−α
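A compact R sketch (not from the text) of the likelihood ratio statistic in 5.4.11; lrt_two_prop is an illustrative helper, the simulated samples are assumptions, and the sample proportions are taken to lie strictly inside (0, 1).

# Sketch: -2 log lambda for two Bernoulli samples, compared with the chi-square cut-off
lrt_two_prop <- function(x, y, alpha = 0.05) {
  n1 <- length(x); n2 <- length(y)
  p1 <- mean(x); p2 <- mean(y); p0 <- (sum(x) + sum(y)) / (n1 + n2)
  loglik <- function(p, s, n) s * log(p) + (n - s) * log(1 - p)
  stat <- -2 * (loglik(p0, sum(x) + sum(y), n1 + n2) -
                loglik(p1, sum(x), n1) - loglik(p2, sum(y), n2))
  list(minus2loglambda = stat, reject = stat > qchisq(1 - alpha, df = 1))
}
set.seed(11)
lrt_two_prop(rbinom(60, 1, 0.4), rbinom(80, 1, 0.55))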

7.5 Chapter 6

6.8.1 In a multinomial distribution with 3 cells, the cell probabilities are p_1(θ) = p_2(θ) = (1 + θ)/3 and p_3(θ) = (1 − 2θ)/3, 0 < θ < 1/2. (i) Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from this distribution find the moment estimator based on the sufficient statistic and the maximum likelihood estimator of θ and examine if these are CAN. (ii) Use the result to derive Wald's test and a score test procedure for testing H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0.

Solution: (i) Suppose Y = (Y_1, Y_2)′ has a multinomial distribution in three cells with the given cell probabilities. Hence the joint probability mass function of (Y_1, Y_2)′ is given by

P_θ[Y_1 = y_1, Y_2 = y_2] = ((1 + θ)/3)^(y_1+y_2) ((1 − 2θ)/3)^(1−y_1−y_2), y_1, y_2 = 0, 1 and y_1 + y_2 ≤ 1.

Thus,

log P_θ[Y_1 = y_1, Y_2 = y_2] = (y_1 + y_2)[log(1 + θ) − log(1 − 2θ)] + log(1 − 2θ) − log 3 = U(θ)K(y_1, y_2) + V(θ) + W(y_1, y_2),

where U(θ) = log(1 + θ) − log(1 − 2θ), K(y_1, y_2) = y_1 + y_2, V(θ) = log(1 − 2θ) − log 3 and W(y_1, y_2) = 0. Thus, (1) the probability law of (Y_1, Y_2)′ is expressible in the form required in a one-parameter exponential family, (2) the support of the probability mass function is {(0, 0), (0, 1), (1, 0)} and it is free from θ, (3) the parameter space is (0, 1/2), which is an open set, (4) U'(θ) = 1/(1 + θ) + 2/(1 − 2θ) = 3/[(1 + θ)(1 − 2θ)] ≠ 0 and (5) K(y_1, y_2) and 1 are linearly independent, because in the identity a + b(y_1 + y_2) = 0, taking y_1 = y_2 = 0 gives a = 0, and then taking either y_1 = 0, y_2 = 1 or y_1 = 1, y_2 = 0 in b(y_1 + y_2) = 0 gives b = 0. Thus, all the requirements of a one-parameter exponential family are satisfied and hence the joint probability mass function of (Y_1, Y_2)′ belongs to a one-parameter exponential family. To find the maximum likelihood estimator of θ, the likelihood of θ corresponding to the data X ≡ {X_1, X_2, X_3} is given by L_n(θ|X) = ((1 + θ)/3)^(X_1+X_2) ((1 − 2θ)/3)^(X_3), X_1 + X_2 + X_3 = n, where X_i is the frequency of the i-th cell in the sample, i = 1, 2, 3. The likelihood is a differentiable function of θ, hence the likelihood equation and its solution θ̂_n are given by

(X_1 + X_2)/(1 + θ) − 2X_3/(1 − 2θ) = 0  ⇒  θ̂_n = (X_1 + X_2 − 2X_3)/2n = 1/2 − (3/2)(X_3/n).

The second derivative

∂²/∂θ² log L_n(θ|X) = −(X_1 + X_2)/(1 + θ)² − 4X_3/(1 − 2θ)² < 0 a.s., ∀ θ ∈ (0, 1/2) and ∀ X_1, X_2.

Hence θ̂_n is the maximum likelihood estimator of θ. Since the distribution of (Y_1, Y_2)′ belongs to a one-parameter exponential family, θ̂_n is CAN for θ with approximate variance 1/nI(θ). Now,

nI(θ) = E_θ[−∂²/∂θ² log L_n(θ|X)] = [np_1(θ) + np_2(θ)]/(1 + θ)² + 4np_3(θ)/(1 − 2θ)²
      = (n/3)[2/(1 + θ) + 4/(1 − 2θ)] = 2n/[(1 + θ)(1 − 2θ)].

Thus, θ̂_n is CAN for θ with approximate variance (1 + θ)(1 − 2θ)/2n.

(ii) As discussed in Sect. 6.4, in the real parameter setup, Wald's test statistic T_n(W) and the score test statistic T_n(S) are defined as

T_n(W) = (θ̂_n − θ_0)/s.e.(θ̂_n)  and  T_n(S) = (θ̂_n − θ_0)/s.e.(θ̂_n)|_{θ_0},


where s.e.(θ̂_n)|_{θ_0} is the standard error of θ̂_n evaluated at θ_0. Here s.e.(θ̂_n) = √[(1 + θ̂_n)(1 − 2θ̂_n)/2n]. In both procedures, under H_0 the asymptotic null distribution of the test statistics is standard normal. In Wald's test procedure H_0 is rejected if |T_n(W)| > c and in the score test procedure H_0 is rejected if |T_n(S)| > c, where c = a_{1−α/2}.

6.8.2 In a multinomial distribution with four cells, the cell probabilities are

p_1(θ) = p_4(θ) = (2 − θ)/4 and p_2(θ) = p_3(θ) = θ/4, 0 < θ < 2.

Examine whether the distribution belongs to a one-parameter exponential family. On the basis of a random sample of size n from this distribution find the maximum likelihood estimator of θ and examine if it is CAN. Use the result to derive (i) a likelihood ratio test, (ii) Wald's test, (iii) a score test and (iv) Karl Pearson's chi-square test for testing H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0.

Solution: Suppose Y = (Y_1, Y_2, Y_3)′ has a multinomial distribution in four cells with the given cell probabilities. Hence the joint probability mass function p(y_1, y_2, y_3) = P_θ[Y_1 = y_1, Y_2 = y_2, Y_3 = y_3] of (Y_1, Y_2, Y_3)′ is given by

p(y_1, y_2, y_3) = ((2 − θ)/4)^(y_1+y_4) (θ/4)^(1−y_1−y_4), y_1, y_2, y_3 = 0, 1 and y_1 + y_2 + y_3 + y_4 = 1.

Thus,

log p(y_1, y_2, y_3) = −log 4 + (y_1 + y_4)[log(2 − θ) − log θ] + log θ = U(θ)K(y_1, y_2, y_3) + V(θ) + W(y_1, y_2, y_3),

where U(θ) = log(2 − θ) − log θ, K(y_1, y_2, y_3) = y_1 + y_4, V(θ) = log θ and W(y_1, y_2, y_3) = −log 4. Thus, (1) the probability law of (Y_1, Y_2, Y_3)′ is expressible in the form required in a one-parameter exponential family, (2) the support of Y_i is {0, 1}, i = 1, 2, 3, and it is free from θ, (3) the parameter space is (0, 2), which is an open set, (4) U'(θ) = −2/[(2 − θ)θ] ≠ 0 and (5) K(y_1, y_2, y_3) and 1 are linearly independent. Thus, all the requirements of a one-parameter exponential family are satisfied and hence the joint probability mass function of (Y_1, Y_2, Y_3)′ belongs to a one-parameter exponential family. By Theorem 4.2.1, the maximum likelihood estimator of θ is CAN for θ with approximate variance 1/nI(θ). To find the maximum likelihood estimator of θ, the log-likelihood of θ corresponding to the data X ≡ {X_1, X_2, X_3, X_4} is given by

log L_n(θ|X) = −n log 4 + (X_1 + X_4)[log(2 − θ) − log θ] + n log θ, X_1 + X_2 + X_3 + X_4 = n,

where X_i is the frequency of the i-th cell in the sample, i = 1, 2, 3, 4. The likelihood is a differentiable function of θ, hence the likelihood equation and its solution θ̂_n are given by

2n − 2(X_1 + X_4) − nθ = 0  ⇒  θ̂_n = (2X_2 + 2X_3)/n.

Now,

I(θ) = E_θ[−∂²/∂θ² log p(Y_1, Y_2, Y_3)] = E_θ[(Y_1 + Y_4)/(2 − θ)² + (1 − Y_1 − Y_4)/θ²]
     = [(2 − θ)/2]/(2 − θ)² + [θ/2]/θ² = 1/[2(2 − θ)] + 1/(2θ) = 1/[θ(2 − θ)].

Thus, θ̂_n is CAN for θ with approximate variance θ(2 − θ)/n.

(i) To test H_0: θ = θ_0 against the alternative H_1: θ ≠ θ_0, the likelihood ratio test statistic λ(X) is given by

λ(X) = sup_{Θ_0} L_n(θ|X) / sup_{Θ} L_n(θ|X) = ((2 − θ_0)/(2 − θ̂_n))^(X_1+X_4) (θ_0/θ̂_n)^(X_2+X_3).

The null hypothesis is rejected if −2 log λ(X) > c. If the sample size is large, then by Theorem 5.2.1, −2 log λ(X) ∼ χ_1². H_0 is rejected if −2 log λ(X) > χ²_{1,1−α}.

θˆn − θ0 & s.e.(θˆn )

Tn (S) =

θˆn − θ0 . s.e.(θˆn )|θ0

' ˆ θˆn ) Here s.e.(θˆn ) = θn (2− . In both the procedures under H0 , the asymptotic n null distribution of the test statistics is standard normal. In Wald’s test procedure H0 is rejected if |Tn (W )| > c and in score test procedure H0 is rejected if |Tn (S)| > c, where c = a1−α/2 . (iv) It is known that Karl Pearson’s chi-square test statistic Tn (P) is same as the score test statistic. Hence, H0 is rejected if |Tn (P)| > c, where c = a1−α/2 . 6.8.3 In a certain genetic experiment two different varieties of certain species are crossed. A specific characteristic of an offspring can occur at three levels A, B and C. According to the proposed model, probabilities for three levels A, B and C are 1/12, 3/12 and 8/12 respectively. Out of fifty offspring 6, 8 and 36 have levels A, B and C respectively. Test the validity of the proposed model by a score test and by Wald’s test.

7.5 Chapter 6

501

Solution: The probability model for the given experiment is a trinomial distribution with cell probabilities p1 , p2 , p3 where pi > 0 for i = 1, 2, 3 3 pi = 1. Under the proposed model p1 = 1/12, p2 = 3/12 and and i=1 p3 = 8/12. Thus, to test the validity of the proposed model, the null hypothesis is H0 : p1 = 1/12, p2 = 3/12, p3 = 8/12 against the alternative that at least one of the pi ’s are not as specified by the model. To find the score test statistic and Wald’s test statistic, we find the expected frequencies ei , expected under the proposed model, as e1 = n × 1/12 = 4.1667, e2 = n × 3/12 = 12.5 and e3 = n × 8/12 = 33.3333 with n = 50. Thus, the score test statistic Tn (S) = 3 (oi −ei )2 3 (oi −ei )2 = 2.21 and Wald’s test statistic Tn (W ) = i=1 = i=1 ei oi 3.03. Under H0 , asymptotic null distribution of both the statistics is χ22 . The null hypothesis H0 is rejected Tn (S) > c and if Tn (W ) > c, where c is determined corresponding to the given level of significance α = 0.05 and 2 = 5.99. Thus, value of both the asymptotic null distribution. Here c = χ2,0.95 the test statistics is less than c. Hence, data do not have sufficient evidence to reject H0 and the proposed model may be considered to be valid. 6.8.4 On the basis of data in a 3 × 3 contingency table, derive a likelihood ratio test procedure and Karl Pearson’s test procedure to test H0 : pi j = p ji , i = j = 1, 2, 3 against the alternative H1 : pi j = p ji , i = j = 1, 2, 3 for at least one pair. Solution: To derive a likelihood ratio test procedure and Karl Pearson’s test procedure, as a first step we find the maximum likelihood estimators of cell probabilities in the null setup. Under H0 : pi j = p ji , i = j = 1, 2, 3, suppose p12 = p21 = a, p13 = p31 = b and p23 = p32 = c, say. Thus, in the null setup, the parameter to be estimated is θ = ( p11 , p22 , p33 , a, b, c) such that p11 , p22 , p33 , a, b, c > 0 & p11 + p22 + p33 + 2a + 2b + 2c = 1. Corresponding to observed cell frequencies n i j , i, j = 1, 2, 3, adding to n, the log-likelihood of θ is given by log L n (θ|n i j , i, j = 1, 2, 3) = n 11 log p11 + n 22 log p22 + n 33 log p33 + (n 12 + n 21 ) log a + (n 13 + n 31 ) log b + (n 23 + n 32 ) log c. Using Lagranges’ method of multipliers, we maximize the log-likelihood with respect to variation in θ subject to the condition that p11 + p22 + p33 + 2a + 2b + 2c = 1. Thus, the maximum likelihood estimators of cell probabilities in the null setup are pˆ iin =

n ii n 12 + n 21 ˆ n 13 + n 31 n 23 + n 32 , i = 1, 2, 3, aˆ n = , bn = & cˆn = . n 2n 2n n

502

7

Solutions to Conceptual Exercises

Once we have these maximum likelihood estimators, we can find the expected frequencies and can carry out both the likelihood ratio test procedure and Karl Pearson’s test procedure. 6.8.5 On the basis of data in a 2 × 3 contingency table, derive a likelihood ratio test procedure and Karl Pearson’s test procedure to test H0 : p11 = p12 = p13 against the alternative that there is no restriction as specified in H0 . Solution: As in the previous example, to derive a likelihood ratio test procedure and Karl Pearson’s test procedure, we first find the maximum likelihood estimators of cell probabilities in the null setup. Under H0 : p11 = p12 = p13 = a,say. Thus, in the null setup, the parameter to be estimated is θ = (a, p21 , p22 , p23 ) such that a, p21 , p22 , p23 > 0 and 3a + p21 + p22 + p23 = 1. Corresponding to observed cell frequencies n i j , i = 1, 2, j = 1, 2, 3, adding to n, the log-likelihood of θ is given by log L n (θ |n i j , i, j = 1, 2, 3) = (n 11 + n 12 + n 13 ) log a + n 21 log p21 + n 22 log p22 + n 23 log p23 . Using Lagranges’ method of multipliers, we maximize the log-likelihood with respect to variation in θ subject to the condition that 3a + p21 + p22 + p23 = 1. Thus, the maximum likelihood estimators of cell probabilities in the null setup are aˆ n =

n 11 + n 12 + n 13 , 3n

pˆ 2 jn =

n2 j , j = 1, 2, 3. n

Once we have these maximum likelihood estimators, we can find the expected frequencies and can carry out both the likelihood ratio test procedure and Karl Pearson’s test procedure. 6.8.6 Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Laplace distribution with location parameter θ and scale parameter 1. Derive a large sample test procedure to test H0 : θ = θ0 against the alternative H0 : θ > θ0 and examine whether it is a consistent test procedure. Solution: The probability density function f (x, θ ) of X having Laplace distribution with location parameter θ and scale parameter 1 is given by f (x, θ ) =

1 exp{−|x − θ |}, x ∈ R, θ ∈ R. 2

For this distribution, the sample median is the maximum likelihood estimator of θ and it is CAN with approximate variance 1/n. Hence, the test procedure to based on the sample median X ([n/2]+1) . test H0 : θ = θ0 against H1 : θ > θ0 is√ The test statistic Tn is given by Tn = n(X ([n/2]+1) − θ0 ). For large n under H0 , Tn ∼ N (0, 1) distribution. The null hypothesis H0 : θ = θ0 is rejected

7.5 Chapter 6

503

against H1 : θ > θ0 if Tn > c, where c is determined corresponding to the given level of significance α and the asymptotic null distribution of Tn . Thus, c = a1−α . To examine whether it is a consistent test, we find the power function β(θ ). Thus,  √  β(θ ) = Pθ [Tn > c] = Pθ [ n X ([n/2]+1) − θ0 > c]  !  √ √  c = Pθ n X ([n/2]+1) − θ > n θ0 − θ + √ n   √ = 1 −  c + n(θ0 − θ ) → 1 ∀ θ > θ0 . Hence the test is consistent.
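Returning to Exercise 6.8.3, the following R sketch (not from the text) recomputes the score and Wald statistics from the stated counts and compares them with the 5% χ²_2 cut-off; chisq.test gives the score (Karl Pearson) statistic directly.

# Sketch: score and Wald statistics for the genetic experiment of 6.8.3
obs <- c(6, 8, 36)                      # observed frequencies for levels A, B, C
p0  <- c(1, 3, 8) / 12                  # cell probabilities under the proposed model
e   <- sum(obs) * p0                    # expected frequencies
Tn_S <- sum((obs - e)^2 / e)            # score statistic
Tn_W <- sum((obs - e)^2 / obs)          # Wald statistic
c(Tn_S, Tn_W, qchisq(0.95, df = 2))     # both statistics fall below the cut-off 5.99
chisq.test(obs, p = p0)                 # the score statistic via the built-in test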

7.6 Multiple Choice Questions

In the following multiple choice questions, more than one option may be correct. Answers to all the questions are given in Table 7.1 at the end of the chapter.

7.6.1 Chapter 2: Consistency of an Estimator

1. Suppose X i j ∼ N (μi , σ 2 ) distribution for i = 1, 2, . . . k, j = 1, 2, . . . n and  X i j ’s are independent random variables. If μˆ in = X in = nj=1 X i j /n & σˆ n2 = k  n 2 i=1 j=1 (X i j − X in ) /nk, then which of the following statements is/are correct? (a) (b) (c) (d)

μˆ in is a maximum likelihood estimator of μi and is consistent for μi . σˆ n2 is an unbiased estimator of σ 2 . σˆ n2 is a maximum likelihood estimator of σ 2 and is consistent for σ 2 . μˆ in is not consistent for μi .

2. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables each following a uniform U (0, θ ) distribution. Then which of the following statements is/are NOT consistent for θ ? (a) (b) (c) (d)

X (1) . X (n) . X (n−1) . X ([n/2]+1) .

3. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables each following normal N (μ, μ2 ) distribution. Which of the following statements is/are correct?

504

7

(a) (b) (c) (d)

Solutions to Conceptual Exercises

X n is a consistent estimator of μ. n (X i − X )2 is a consistent estimator of μ2 . Sn2 = n1 i=1 Sn = Sn2 cannot be a consistent estimator of μ. Sample median is a consistent estimator of μ.

4. Suppose {X 1 , X 2 . . . X n } are independent random variables, where X i ∼ U (0, iθ ) for i = 1, 2, . . . n. Which of the following statements is/are correct?  (a) n2 n1 Xi i is not consistent for θ . (b) Sample mean is consistent for θ . (c) max{X 1 , X 2 /2, X 3 /3, . . . X n /n} is consistent for θ . (d) X (n) is consistent for θ . 5. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables with probability density function f (x, θ ) = θ x θ −1 , 0 < x < 1, θ > 0. Which of the following statements is/are correct? (a) (b)

log(0.5) log(X ([n/2)]+1) ) is consistent Xn is consistent for θ . 1−X n

for θ .

(c) X n is consistent for θ . log(X ([n/2)]+1) ) is consistent for θ . (d) log(0.5) 6. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables each following Bernoulli B(1, θ ) distribution, where θ ∈ [0.25, 0.75]. Which of the following is/are NOT consistent for θ ? (a) X n . ⎧ ⎪ ⎨0.25 ifX n < 0.25 (b) θˆ = X n if 0.25 ≤ X n ≤ 0.75 . ⎪ ⎩ 0.75 i f X n > 0.75 (c) min{0.25,  2 X n }. (d) n1 Xi . 7. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables with probability density function f (x, θ ) = 2θ 2 /x 3 x ≥ θ, θ > 0. Then which of the following statements is/are correct? (a) (b) (c) (d)

2 2 2 3 X ([n/3]+1) is a consistent estimator of θ . 1 2 2 3 X ([n/3]+1) is a consistent estimator of θ . 2 3 X ([n/3]+1) is a consistent estimator of θ . ' 2 3 X ([n/3]+1) is a consistent estimator of θ .

7.6 Multiple Choice Questions

505

8. Suppose {X 1 , X 2 . . . X n } is a random sample from a uniform U (0, 3θ ) distribution, θ > 0. If Tn = max{X 1 , . . . , X n }/3, which of the following statements is/are NOT correct? (a) (b) (c) (d)

is consistent for θ . is unbiased for θ . is a sufficient statistic for the family of U (0, 3θ ) distributions. is a maximum likelihood estimator of θ .

Tn Tn Tn Tn

9. Suppose {X 1 , X 2 . . . X n } is a random sample from the distribution with the following probability density function.  f (x, μ, α) =

1 α (x

0

− μ)α−1 exp{−(x − μ)} x ≥ μ, μ ∈ R, α > 0 otherwise

Which of the following statements is/are correct? (a) (b) (c) (d)

X n is a consistent estimator of μ. X n − Sn2 is a consistent estimator of μ, where Sn2 is the sample variance. Sn2 is a consistent estimator of α. X n is a consistent estimator of μ + α.

10. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables with probability density function f (x, θ ) = (2/θ 2 )(x − θ ), θ < x < 2θ, θ > 0. Which of the following statements is/are correct? (a) (b) (c) (d)

X (1) is a consistent estimator of θ . X (n) is a consistent estimator of θ . X (n) /2 is a consistent estimator of θ . X n is a consistent estimator of θ .

11. Suppose (X , Y ) is a random vector with joint distribution given by f (x, y) =

e−β y (β y)x −θ y y ≥ 0, x = 0, 1, 2, . . . , β, θ > 0. θe x!

Then X n is a consistent estimator of (a) βθ . (b) β. (c) βθ . (d)

β . θ2

506

7

Solutions to Conceptual Exercises

12. Suppose {X 1 , X 2 , . . . , X n } is a random sample of size n from a uniform U (θ − 1, θ + 1) distribution. Then which of the following statements is/are correct? (a) (b) (c) (d)

sample mean is consistent for θ . sample median is consistent for θ . X (n) is consistent for θ . X (1) is consistent for θ .

13. Suppose {X 1 , X 2 , . . . , X 2n } is a random sample of size 2n from a uniform U (θ − 1, θ + 1) distribution. Then which of the following statements is/are correct? (a) (b) (c) (d)

X (1) is consistent for θ . X (1) + 1 is consistent for θ . X (n+1) is consistent for θ . X (2n) − 1 is consistent for θ .

14. Following are two statements about an estimator Tn of θ . (I) If MSE of Tn as an estimator of θ converges to 0 as n → ∞, then Tn is a consistent estimator of θ . (II) If Tn is a consistent estimator of θ , then MSE of Tn as an estimator of θ converges to 0 as n → ∞. Which of the following statements is/are correct? (a) (b) (c) (d)

Both (I) and (II) are true. Both (I) and (II) are false. (I) is true but (II) is false. (I) is false but (II) is true.

15. Suppose {X 1 , X 2 , . . . , X n } is a random sample from Cauchy C(θ, 1) distribution. Then which of the following statements is/are correct? (a) (b) (c) (d)

Sample mean is consistent for θ . Sample median is consistent for θ . The maximum likelihood estimator of θ is consistent for θ . The first sample quartile is consistent for θ .

16. Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Laplace distribution with probability density function f (x, θ ) given by f (x, θ ) = (1/2θ ) exp {−|x|/θ } , x ∈ R, Which of the following estimators are consistent for θ ?

θ > 0.

7.6 Multiple Choice Questions

(a) (b) (c) (d)

507

Sample mean. sample median. n |X i |/n. i=1 n X i2 /n)1/2 . ( i=1

17. Following are two statements about an estimator Tn of θ . (I) If Tn is a strongly consistent estimator of θ , then it is a weakly consistent estimator of θ . (II) If Tn is a weakly consistent estimator of θ , then it is a strongly consistent estimator of θ . Which of the following statements is/are correct? (a) (b) (c) (d)

Both (I) and (II) are true. Both (I) and (II) are false. (I) is true but (II) is false. (I) is false but (II) is true.

18. Suppose {X 1 , X 2 , . . . , X n } is a random sample from a Bernoulli distribution B(1, θ ). Which of the following statements is/are correct? (a) (b) (c) (d)

Sample mean is a consistent estimator of θ . Sample mean is a uniformly consistent estimator of θ . Sample mean is a strongly consistent estimator of θ . None of a, b, c is true.

19. Suppose {X 1 , X 2 , . . . , X n } is a random sample from a distribution with finite second order moment. Which of the following statements is/are correct? (a) Sample mean is a consistent and unbiased estimator of population mean. (b) Sample mean is a consistent but a biased estimator of population mean. (c) Sample variance is a consistent and unbiased estimator of population variance. (d) Sample variance is a consistent but a biased estimator of population variance. 20. Following are two statements about an estimator Tn of θ . (I) If Tn is a consistent estimator of θ and g is a continuous function, then g(Tn ) is a consistent estimator of g(θ ). (II) If Tn is a consistent estimator of θ and g is not a continuous function, then g(Tn ) is not a consistent estimator of g(θ ). Which of the following statements is/are correct? (a) (b) (c) (d)

Both (I) and (II) are true. Both (I) and (II) are false. (I) is true but (II) is false. (I) is false but (II) is true.

508

7

7.6.2

Solutions to Conceptual Exercises

Chapter 3: Consistent and Asymptotically Normal Estimators

1. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables, each with probability density function f (x, θ ) = (2/θ 2 )(x − θ ), θ < x < 2θ, θ > 0. Which of the following statements is/are NOT correct? √ L θ2 n(X n − 5θ 3 ) → N (0, 9 ) as n → ∞. √ L θ2 (b) n( 35 X n − θ ) → N (0, 50 ) as n → ∞. √ L 3θ θ2 (c) n(X n − 5 ) → N (0, 9 ) as n → ∞. √ L θ2 (d) n(X n − 5θ 3 ) → N (0, 18 ) as n → ∞. (a)

2. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables, each following a Poisson distribution with mean μ. Then the asymp√ totic distribution of n(e−X n − e−μ ) is given by (a) (b) (c) (d)

N (0, μe−μ ). N (0, e−μ ). N (0, μ2 e−2μ ). N (0, μe−2μ ).

3. Suppose {X 1 , X 2 . . . X n } is a random sample from a uniform U (0, θ + 1) distribution. Then the maximum likelihood estimator of θ is (a) (b) (c) (d)

X (1) + 1. X (n) − 1. X (n) . any value in the interval [X (1) , X (n) ].

4. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables each having√an exponential distribution with mean θ . Then the asymp 2 totic distribution of Snn (X n − θ ), where Sn2 = n1 n1 X i2 − X n is (a) (b) (c) (d)

N (0, 1). N (0, θ ). N (1, 1). exponential with mean θ .

5. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables, each with probability density function f (x, α, θ ) =

1 θ α α

x α−1 e−x/θ x > 0, α, θ > 0.

Then which of the following statements is/are correct?

7.6 Multiple Choice Questions

509

√ L n(X n − αθ ) → N (0, αθ 2 ). √ L (b) n(log X n − log θ ) → N (0, α). √ L (c) n(X n − α) → N (0, θ ). √ L (d) n(log X n − log αθ ) → N (0, 1/α). (a)

6. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables, each with probability density function f (x, θ ) = θ/x 2 , x ≥ θ . Then which of the following statements is/are correct? √ L 2 n(X ([n/4]+1) − 4θ 3 ) → N (0, 16θ ). √ L 2 (b) n(X ([n/4]+1) − 4θ 3 ) → N (0, 16θ /27). √ L (c) n( 43 X ([n/4]+1) − θ ) → N (0, 9θ 2 ). √ L (d) n( 43 X ([n/4]+1) − θ ) → N (0, θ 2 /3). (a)

7. Suppose {X 1 , X 2 . . . X n } are independent and identically distributed random variables, each following an exponential distribution with mean θ . Then e−X n is a CAN estimator of e−θ with approximate variance (a) (b) (c) (d)

θ e−2θ /n. θ 2 e−θ /n. θ e−θ /n. θ 2 e−2θ /n.

8. The moment estimator of $\theta$ based on a random sample of size $n$ from a distribution with the probability density function $f(x, \theta) = (\theta + 1)x^\theta$, $0 < x < 1$, $\theta > -1$, is given by
(a) $\bar{X}_n/(1 - \bar{X}_n)$.
(b) $\bar{X}_n$.
(c) $(1 - 2\bar{X}_n)/(\bar{X}_n - 1)$.
(d) $(4\bar{X}_n - 1)/(2\bar{X}_n - 1)$.

9. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a uniform $U(0, 2\theta)$ distribution. Then the maximum likelihood estimator of $\theta$
(a) is $X_{(n)}/2$.
(b) is $X_{(n)}$.
(c) is $X_{([n/2]+1)}$.
(d) does not exist.


10. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution with probability density function
$f(x, \theta, \lambda) = \frac{1}{\pi}\,\frac{\lambda}{\lambda^2 + (x - \theta)^2}$, $x \in \mathbb{R}$, $\lambda > 0$, $\theta \in \mathbb{R}$.
Then which of the following statements is/are correct?
(a) $\sqrt{n}(X_{([n/4]+1)} - (\theta - \lambda)) \xrightarrow{L} N(0, \pi^2\lambda^2/4)$.
(b) $\sqrt{n}(X_{([n/4]+1)} - (\lambda - \theta)) \xrightarrow{L} N(0, \pi^2\lambda^2/4)$.
(c) $\sqrt{n}(X_{([3n/4]+1)} - (\theta + \lambda)) \xrightarrow{L} N(0, 3\pi^2\lambda^2/4)$.
(d) $\sqrt{n}(X_{([3n/4]+1)} - (\theta + \lambda)) \xrightarrow{L} N(0, \pi^2\lambda^2/4)$.

11. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a normal $N(\mu, \sigma^2)$ distribution. Then the approximate dispersion matrix of $\sqrt{n}((\bar{X}_n, S_n^2)' - (\mu, \sigma^2)')$ is given by
(a) $\begin{pmatrix} \sigma^2 & 0 \\ 0 & \sigma^4 \end{pmatrix}$.
(b) $\begin{pmatrix} \sigma^2 & 0 \\ 0 & 2\sigma^4 \end{pmatrix}$.
(c) $\begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/\sigma^4 \end{pmatrix}$.
(d) $\begin{pmatrix} \sigma^2 & 0 \\ 0 & \frac{1}{2}\sigma^4 \end{pmatrix}$.

12. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from an exponential distribution with mean $\theta$. Then a $100(1 - \alpha)\%$ asymptotic confidence interval for $\theta$
(a) is $\left(\frac{\sqrt{n}\bar{X}_n}{a_{1-\alpha/2} + \sqrt{n}},\ \frac{\sqrt{n}\bar{X}_n}{a_{\alpha/2} + \sqrt{n}}\right)$.
(b) is $\left(\frac{\sqrt{n}\bar{X}_n}{a_{\alpha/2} + \sqrt{n}},\ \frac{\sqrt{n}\bar{X}_n}{a_{1-\alpha/2} + \sqrt{n}}\right)$.
(c) is $\left(\frac{\bar{X}_n}{a_{1-\alpha/2} + \sqrt{n}},\ \frac{\bar{X}_n}{a_{\alpha/2} + \sqrt{n}}\right)$.
(d) cannot be constructed using $\bar{X}_n$.

13. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a Poisson $Poi(\theta)$ distribution. The variance stabilizing transformation for constructing an asymptotic confidence interval for $\theta$ is
(a) $g(\theta) = \log(\theta)$.
(b) $g(\theta) = 1/\theta$.
(c) $g(\theta) = 1/\sqrt{\theta}$.
(d) $g(\theta) = 2\sqrt{\theta}$.
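As an illustration of Question 13, the transformation $g(\theta) = 2\sqrt{\theta}$ can be used in R to build an asymptotic confidence interval for a Poisson mean, since $\sqrt{n}(2\sqrt{\bar{X}_n} - 2\sqrt{\theta})$ is approximately $N(0, 1)$. The sketch below uses simulated data with an arbitrary true mean; it is not part of the original text.

## Variance stabilizing transformation for Poisson: g(theta) = 2 * sqrt(theta).
set.seed(1)
x <- rpois(100, lambda = 4)                         # hypothetical sample
n <- length(x); z <- qnorm(0.975)
lim <- 2 * sqrt(mean(x)) + c(-1, 1) * z / sqrt(n)   # 95% CI for 2 * sqrt(theta)
(lim / 2)^2                                         # back-transformed CI for theta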


14. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a normal $N(\theta, \theta^2)$, $\theta > 0$ distribution. Then, the variance stabilizing transformation for constructing an asymptotic confidence interval for $\theta$ based on $\bar{X}_n$ is
(a) $g(\theta) = \log(\theta)$.
(b) $g(\theta) = 1/\theta$.
(c) $g(\theta) = 1/\sqrt{\theta}$.
(d) $g(\theta) = 2\sqrt{\theta}$.

15. Following are two statements about the estimator of an indexing real parameter $\theta$.
(I) If $T_n$ is a consistent estimator of $\theta$, then its asymptotic distribution is degenerate at $\theta$.
(II) If $\sqrt{n}(T_n - \theta) \xrightarrow{L} Z$, then $T_n$ is a consistent estimator of $\theta$.
Which of the following statements is/are correct?
(a) Both (I) and (II) are true.
(b) Both (I) and (II) are false.
(c) (I) is true but (II) is false.
(d) (I) is false but (II) is true.

16. Suppose $a_n(T_n - \theta) \xrightarrow{L} Z$, where $a_n \to \infty$ as $n \to \infty$. Which of the following statements is/are correct?
(a) If $g$ is a continuous function, then $g(a_n(T_n - \theta)) \xrightarrow{L} g(Z)$.
(b) If $g$ is a differentiable function, then $g(a_n(T_n - \theta)) \xrightarrow{L} g(Z)$.
(c) Suppose $Z \sim N(0, 1)$. If $g$ is a differentiable function, then the asymptotic distribution of $g(a_n(T_n - \theta))$ is also normal.
(d) Suppose $Z \sim N(0, 1)$. If $g$ is a differentiable function and the first derivative is always non-zero, then the asymptotic distribution of $g(a_n(T_n - \theta))$ is also normal.

17. Suppose $X \sim C(\theta, 1)$. Which of the following statements is/are correct?
(a) Approximate variance of the sample median is less than the approximate variance of the sample first quartile.
(b) Approximate variance of the sample median is less than the approximate variance of the sample third quartile.
(c) Approximate variance of the sample third quartile is less than the approximate variance of the sample first quartile.
(d) Approximate variance of the sample third quartile is the same as the approximate variance of the sample first quartile.


18. Suppose $X \sim N(\theta, 1)$. Which of the following statements is/are correct?
(a) Approximate variance of the sample median is less than the approximate variance of the sample first quartile.
(b) Approximate variance of the sample median is less than the approximate variance of the sample third quartile.
(c) Approximate variance of the sample third quartile is less than the approximate variance of the sample first quartile.
(d) Approximate variance of the sample third quartile is the same as the approximate variance of the sample first quartile.

19. Suppose a distribution of $X$ is indexed by a parameter $\theta = (\theta_1, \theta_2)'$. Following are two statements about the estimator of $\theta$.
(I) If $T_{in}$ is a consistent estimator of $\theta_i$, $i = 1, 2$, then $T_n = (T_{1n}, T_{2n})'$ is a consistent estimator of $\theta$.
(II) If $T_{in}$ is a CAN estimator of $\theta_i$, $i = 1, 2$, and both have the same norming factor, then $T_n = (T_{1n}, T_{2n})'$ is a CAN estimator of $\theta$.
Which of the following statements is/are correct?
(a) Both (I) and (II) are true.
(b) Both (I) and (II) are false.
(c) (I) is true but (II) is false.
(d) (I) is false but (II) is true.

20. Suppose $(X, Y)' \sim N_2(0, 0, 1, 1, \rho)$ distribution, $-1 < \rho < 1$. To construct a large sample confidence interval for $\rho$, based on the sample correlation coefficient, the variance stabilizing transformation is
(a) $g(\rho) = \log\frac{1+\rho}{1-\rho}$.
(b) $g(\rho) = \frac{1}{2}\log\frac{1-\rho}{1+\rho}$.
(c) $g(\rho) = \log\frac{1-\rho}{1+\rho}$.
(d) $g(\rho) = \frac{1}{2}\log\frac{1+\rho}{1-\rho}$.
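The transformation in option (d) of Question 20 is Fisher's Z transformation, $g(\rho) = \mathrm{atanh}(\rho)$. A minimal R sketch of the resulting interval, using simulated bivariate normal data, is given below; the norming factor $\sqrt{n}$ is used here, while the finite-sample refinement $\sqrt{n-3}$ is also common.

## Fisher's Z transformation: atanh(r) is approximately N(atanh(rho), 1/n).
set.seed(7)
n <- 200; rho <- 0.6
x <- rnorm(n); y <- rho * x + sqrt(1 - rho^2) * rnorm(n)  # bivariate normal pairs
r <- cor(x, y)
z <- qnorm(0.975)
tanh(atanh(r) + c(-1, 1) * z / sqrt(n))   # approximate 95% confidence interval for rho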

7.6.3 Chapter 4: CAN Estimators in Exponential and Cramér Families

1. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution with probability density function $f(x, \theta) = \theta x^{\theta-1}$, $0 < x < 1$, $\theta > 0$. Then which of the following statements is/are correct?
(a) The maximum likelihood estimator of $\theta$ is a CAN estimator of $\theta$.
(b) $\bar{X}_n/(\bar{X}_n - 1)$ is MLE of $\theta$.
(c) $\bar{X}_n$ is a CAN estimator of $\theta$.
(d) There does not exist any CAN estimator of $\theta$ which attains the Cramér-Rao lower bound.

2. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution with probability density function given by $f(x, \theta) = (\theta + 1)x^\theta$, $0 < x < 1$, $\theta > -1$. Suppose $T_n$ is a moment estimator of $\theta$ based on a sufficient statistic. Which of the following statements is/are correct?
(a) $T_n$ is not CAN for $\theta$.
(b) $T_n$ is not MLE of $\theta$.
(c) $T_n = (1 - 4\bar{X}_n)/(2\bar{X}_n - 1)$.
(d) $T_n = -n/\sum_{i=1}^n \log(X_i) - 1$.

3. In a multinomial distribution with 4 cells, the cell probabilities and the observed cell frequencies are given by $1/16 + \theta$, $3/16 - \theta$, $3/16 - \theta$, $9/16 + \theta$ and $31, 37, 35, 187$ respectively. Then the likelihood equation can be simplified as
(a) $31(1 + 16\theta) - 72(3 - 16\theta) + 187(9 + 16\theta) = 0$.
(b) $(\frac{1}{16} + \theta)^{31}(\frac{3}{16} - \theta)^{72}(\frac{9}{16} + \theta)^{187} = 0$.
(c) $\frac{31}{1+16\theta} - \frac{72}{3-16\theta} + \frac{187}{9+16\theta} = 0$.
(d) $\frac{72}{1+16\theta} - \frac{31}{3-16\theta} + \frac{187}{9+16\theta} = 0$.

4. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a double exponential distribution with location parameter $\theta$, with probability density function given by $f(x) = 0.5e^{-|x-\theta|}$, $x \in \mathbb{R}$, $\theta \in \mathbb{R}$. Then the maximum likelihood estimator of $\theta$
(a) does not exist.
(b) is the sample median.
(c) is the sample mean.
(d) is the mean of $|X_i|$, $i = 1, 2, \ldots, n$.
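The likelihood equation of option (c) in Question 3 has no convenient closed form solution, but it can be solved numerically in R, for example with uniroot(). This is a minimal sketch, not from the text, with the search interval chosen so that all cell probabilities stay positive.

## Solve 31/(1 + 16 t) - 72/(3 - 16 t) + 187/(9 + 16 t) = 0 numerically.
score <- function(t) 31 / (1 + 16 * t) - 72 / (3 - 16 * t) + 187 / (9 + 16 * t)
## admissible range: -1/16 < theta < 3/16 keeps all cell probabilities positive
uniroot(score, interval = c(-1/16 + 1e-6, 3/16 - 1e-6))$root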

5. Suppose $\{X_1, X_2, \ldots, X_n\}$ are independent random variables, where $X_i \sim U(0, i\theta)$ for $i = 1, 2, \ldots, n$. Then the maximum likelihood estimator of $\theta$ is
(a) $\bar{X}_n$.
(b) $\max\{X_1, X_2, \ldots, X_n\}$.
(c) $\max\{X_1, X_2/2, X_3/3, \ldots, X_n/n\}$.
(d) $\frac{1}{n}\sum_{i=1}^n \frac{X_i}{i}$.

6. Suppose $(X, Y)'$ is a random vector with joint distribution given by
$\frac{e^{-\lambda}\lambda^x}{x!}\binom{x}{y}\theta^y(1-\theta)^{x-y}$, $y = 0, 1, 2, \ldots, x$, $x = 0, 1, 2, \ldots$, $0 < \theta < 1$, $\lambda > 0$.
Then a system of likelihood equations based on a random sample of size $n$ is given by
(a) $-n + \frac{\sum_{i=1}^n X_i}{\lambda} = 0$ and $\frac{\sum_{i=1}^n Y_i}{\theta} - \frac{\sum_{i=1}^n X_i - \sum_{i=1}^n Y_i}{1-\theta} = 0$.
(b) $-1 + \frac{X_i}{\lambda} = 0$ and $\frac{Y_i}{\theta} - \frac{X_i - Y_i}{1-\theta} = 0$.
(c) $-n + \lambda\sum_{i=1}^n X_i = 0$ and $\frac{\sum_{i=1}^n Y_i}{\theta} - \frac{\sum_{i=1}^n X_i - \sum_{i=1}^n Y_i}{1-\theta} = 0$.
(d) none of the above.

7. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a uniform $U(-\theta, \theta)$ distribution. Then the maximum likelihood estimator of $\theta$ is
(a) $X_{(n)}$.
(b) $\max\{|X_1|, \ldots, |X_n|\}$.
(c) $-X_{(1)}$.
(d) $\max\{-X_{(1)}, X_{(n)}\}$.

8. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution with probability density function
$f(x, \mu, \sigma) = \frac{1}{\sigma}\exp\left(-\frac{x-\mu}{\sigma}\right)$, $x > \mu$, $\sigma > 0$.
If $\hat{\mu}_n$ and $\hat{\sigma}_n$ denote the maximum likelihood estimators of $\mu$ and $\sigma$ respectively, then which of the following statements is/are correct?
(a) $(\hat{\mu}_n, \hat{\sigma}_n)' = (X_{(1)}, S_n^2)'$, where $S_n^2$ is the sample variance.
(b) $(\hat{\mu}_n, \hat{\sigma}_n)' = (X_{(1)}, \bar{X}_n)'$.
(c) $(\hat{\mu}_n, \hat{\sigma}_n)' = (X_{(1)}, \bar{X}_n - X_{(1)})'$.
(d) $(\hat{\mu}_n, \hat{\sigma}_n)' = (\bar{X}_n, S_n^2)'$.

9. On the basis of a random sample of size $n$ from a probability law which is indexed by a real valued parameter $\theta$, which of the following statements is/are correct?
(a) the maximum likelihood estimator of $\theta$ may not exist.
(b) the maximum likelihood estimator of $\theta$, if it exists, is always a function of sufficient statistic.
(c) the maximum likelihood estimator is always consistent for $\theta$.
(d) the maximum likelihood estimator of $\theta$ is always asymptotically normally distributed.

10. Following are two statements about the estimator of an indexing parameter $\theta$ of a $k$-parameter exponential family.
(I) The maximum likelihood estimator of $\theta$ is a CAN estimator of $\theta$.
(II) The moment estimator of $\theta$ based on a sufficient statistic is a CAN estimator of $\theta$.
Which of the following statements is/are correct?
(a) Both (I) and (II) are true.
(b) Both (I) and (II) are false.
(c) (I) is true but (II) is false.
(d) (I) is false but (II) is true.

11. Suppose $X$ follows an exponential distribution with mean $1/\theta$. Suppose $X$ is discretized to define $Y$ as $Y = k$ if $k < X \le k + 1$ for $k = 0, 1, \ldots$. Suppose a random sample $\{Y_1, Y_2, \ldots, Y_n\}$ is available. Then the moment estimator $\hat{\theta}$ of $\theta$ based on a sufficient statistic is
(a) $\hat{\theta} = 1/\bar{Y}_n$.
(b) $\hat{\theta} = 1/\bar{Y}_n + 1$.
(c) $\hat{\theta} = \log(1 + 1/\bar{Y}_n)$.
(d) same as the maximum likelihood estimator of $\theta$.

12. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from the distribution with probability mass function
$f(x, p) = p(1 - p)^{\sqrt{x}}$, $x = 0, 1, 4, 9, 16, \ldots$, $0 < p < 1$.
Then which of the following statements is/are correct?
(a) The probability mass function $f(x, p)$ satisfies the Cramér regularity conditions.
(b) The maximum likelihood estimator of $p$ does not exist.
(c) A CAN estimator of $p$ does not exist.
(d) The maximum likelihood estimator of $p$ is not CAN for $p$.

13. Which one of the following probability laws does NOT fulfill Cramér regularity conditions?
(a) $f(x, \theta) = 1$, $\theta < x < \theta + 1$, $\theta > 0$.
(b) $f(x, \theta) = \theta^x(1 - \theta)^{1-x}$, $x = 0, 1$, $0 < \theta < 1$.
(c) $f(x, \theta) = \theta e^{-\theta x}$, $x > 0$, $\theta > 0$.
(d) $f(x, \theta) = (\theta + 1)x^\theta$, $0 < x < 1$, $\theta > -1$.

14. Which of the following probability laws is/are NOT a member of exponential family?
(a) $N(\theta, \theta)$, $\theta > 0$.
(b) $N(\theta, \theta^2)$, $\theta > 0$.
(c) $f(x, \theta) = \theta(1 - x)^{\theta-1}$, $0 < x < 1$, $\theta > 0$.
(d) $f(x, \theta) = \theta^x(1 - \theta)^{1-x}$, $x = 0, 1$, $0 < \theta < 1$.

15. Suppose that the joint distribution of a random vector $(X, Y)'$ is
$f(x, y, \beta, \theta) = \frac{\theta e^{-(\beta+\theta)y}(\beta y)^x}{x!}$, $y > 0$, $x = 0, 1, \ldots$, $\beta, \theta > 0$.
Which of the following statements is/are correct?
(a) $f(x, y, \beta, \theta)$ is a member of a two-parameter exponential family.
(b) $1/\bar{Y}_n$ is a maximum likelihood estimator of $\theta$.
(c) $\bar{X}_n/\bar{Y}_n$ is a maximum likelihood estimator of $\beta$.
(d) $(\bar{X}_n, \bar{Y}_n)'$ is consistent for $(\beta, \theta)'$.

16. Suppose E denotes a one-parameter exponential family. Then which of the following statements is/are NOT correct?
(a) Exponential distribution with location parameter $\theta = 0$ and scale parameter 1 belongs to E.
(b) Cauchy distribution with location parameter $\theta$ belongs to E.
(c) Normal distribution with mean $\theta$ and variance $\theta$, $\theta > 0$ belongs to E.
(d) Laplace distribution with location parameter $\theta$ and scale parameter 1 belongs to E.

17. Suppose E denotes a one-parameter exponential family. Then which of the following statements is/are correct?
(a) Uniform distribution $U(0, \theta)$ belongs to E.
(b) Gamma distribution with scale parameter $\theta$ and shape parameter 5 belongs to E.
(c) Normal distribution with mean $\theta$ and variance $\theta^2$ belongs to E.
(d) Laplace distribution with location parameter 0 and scale parameter $\theta$ belongs to E.


18. Suppose C denotes a Cramér family. Then which of the following statements is/are correct?
(a) Exponential distribution with location parameter $\theta = 0$ and scale parameter 1 belongs to C.
(b) Cauchy distribution with location parameter $\theta$ belongs to C.
(c) Normal distribution with mean $\theta$ and variance $\theta$, $\theta > 0$ belongs to C.
(d) Laplace distribution with location parameter 0 and scale parameter $\theta$ belongs to C.

19. Following are two statements about a multinomial distribution in three cells with cell probabilities as $\theta^2$, $2\theta(1 - \theta)$ and $(1 - \theta)^2$, $0 < \theta < 1$.
(I) It belongs to an exponential family.
(II) It belongs to a Cramér family.
Which of the following statements is/are correct?
(a) Both (I) and (II) are true.
(b) Both (I) and (II) are false.
(c) (I) is true but (II) is false.
(d) (I) is false but (II) is true.

20. Following are two statements about the estimator of an indexing parameter $\theta$ of a $k$-parameter exponential family.
(I) The maximum likelihood estimator of $\theta$ is a CAN estimator of $\theta$.
(II) The moment estimator of $\theta$ based on a sufficient statistic is a CAN estimator of $\theta$.
Which of the following statements is/are correct?
(a) Both (I) and (II) are true.
(b) Both (I) and (II) are false.
(c) (I) is true but (II) is false.
(d) (I) is false but (II) is true.

7.6.4 Chapter 5: Large Sample Test Procedures

1. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a normal $N(\theta, 1)$ distribution. Then the likelihood function attains its maximum under $H_0: \theta \le \theta_0$ at
(a) $\theta_0$.
(b) $\bar{X}_n$.
(c) $\min\{\theta_0, \bar{X}_n\}$.
(d) $\max\{\theta_0, \bar{X}_n\}$.


2. Suppose $X$ is a random variable or a random vector with probability law $f(x, \theta)$ indexed by a parameter $\theta \in \Theta \subset \mathbb{R}^k$ and the distribution of $X$ belongs to a Cramér family. Suppose $\lambda(X)$ is a likelihood ratio test statistic based on a random sample $X = \{X_1, X_2, \ldots, X_n\}$ for testing $H_0: \theta^{(1)} = \theta^{(1)}_0$ against the alternative $H_1: \theta^{(1)} \ne \theta^{(1)}_0$, where $\theta^{(1)} = (\theta_1, \theta_2, \ldots, \theta_m)'$ and $\theta^{(2)} = (\theta_{m+1}, \theta_{m+2}, \ldots, \theta_k)'$ is a partition of $\theta$ with $m < k$ and $\theta^{(1)}_0$ is a specified vector. Then the asymptotic null distribution of $-2\log\lambda(X)$ is
(a) $\chi^2_m$.
(b) $\chi^2_{k-1}$.
(c) $\chi^2_{k-m}$.
(d) $\chi^2_{k-m-1}$.

3. Suppose $X \sim B(1, p)$ distribution, $0 < p < 1$. On the basis of a random sample of size $n$ from the distribution of $X$, we want to test $H_0: p = p_0$ against the alternative $H_1: p \ne p_0$, where $p_0$ is a specified constant. Then asymptotic null distribution of which of the following test statistics is $N(0, 1)$?
(a) $\frac{\bar{X}_n - p_0}{\sqrt{\bar{X}_n(1 - \bar{X}_n)/n}}$.
(b) $\frac{\bar{X}_n - p_0}{\sqrt{\bar{X}_n(1 - \bar{X}_n)}}$.
(c) $\frac{\bar{X}_n - p_0}{\sqrt{p_0(1 - p_0)/n}}$.
(d) $\frac{\bar{X}_n - p_0}{\sqrt{p_0(1 - p_0)}}$.

4. Suppose $X$ and $Y$ are independent random variables having Bernoulli $B(1, p_1)$ and $B(1, p_2)$ distributions respectively, $0 < p_1, p_2 < 1$. On the basis of random samples of sizes $n_1$ and $n_2$ from the distribution of $X$ and $Y$ respectively, we want to test $H_0: p_1 = p_2$ against the alternative $H_1: p_1 \ne p_2$. Then asymptotic null distribution of which of the following test statistics is $N(0, 1)$?
(a) $\frac{\bar{X}_{n_1} - \bar{Y}_{n_2}}{\sqrt{\bar{X}_{n_1}(1 - \bar{X}_{n_1})/n_1 + \bar{Y}_{n_2}(1 - \bar{Y}_{n_2})/n_2}}$.
(b) $\frac{\bar{X}_{n_1} - \bar{Y}_{n_2}}{\sqrt{\bar{X}_{n_1}(1 - \bar{X}_{n_1}) + \bar{Y}_{n_2}(1 - \bar{Y}_{n_2})}}$.
(c) $\frac{\bar{X}_{n_1} - \bar{Y}_{n_2}}{\sqrt{P_n(1 - P_n)(1/n_1 + 1/n_2)}}$,
(d) $\frac{\bar{X}_{n_1} - \bar{Y}_{n_2}}{\sqrt{P_n(1 - P_n)}}$,
where $P_n = (n_1\bar{X}_{n_1} + n_2\bar{Y}_{n_2})/(n_1 + n_2)$.
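The pooled statistic of option (c) in Question 4 is essentially what prop.test() computes in R, which reports the squared value as a chi-square statistic. The following is a minimal sketch with hypothetical counts, not taken from the text.

## Two-sample test for proportions; counts below are made up for illustration.
x1 <- 45; n1 <- 100
x2 <- 60; n2 <- 120
p1 <- x1 / n1; p2 <- x2 / n2
pool <- (x1 + x2) / (n1 + n2)                               # pooled estimate P_n
z <- (p1 - p2) / sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
z^2                                                         # squared pooled statistic
prop.test(c(x1, x2), c(n1, n2), correct = FALSE)$statistic  # same value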


5. Suppose $X$ is a random variable or a random vector with probability law $f(x, \theta)$ indexed by a parameter $\theta \in \Theta \subset \mathbb{R}^k$ and the distribution of $X$ belongs to a Cramér family. Suppose $\lambda(X)$ is a likelihood ratio test statistic based on a random sample $X = \{X_1, X_2, \ldots, X_n\}$ for testing $H_0: \theta \in \Theta_0$ against the alternative $H_1: \theta \in \Theta_1$, where in $\Theta_0$, $\theta_i = g_i(\beta_1, \beta_2, \ldots, \beta_m)$, $i = 1, 2, \ldots, k$, where $m \le k$ and $g_1, g_2, \ldots, g_k$ are Borel measurable functions from $\mathbb{R}^m$ to $\mathbb{R}$, having continuous partial derivatives of first order. Then the asymptotic null distribution of $-2\log\lambda(X)$ is $\chi^2_r$ where
(a) $r = k$.
(b) $r = k - m - 1$.
(c) $r = m$.
(d) $r = k - m$.

7.6.5 Chapter 6: Goodness of Fit Test and Tests for Contingency Tables

1. Suppose the distribution of a random variable or a random vector $X$, indexed by a vector parameter $\theta$, belongs to a Cramér family. Suppose $\hat{\theta}_n$ is a maximum likelihood estimator of $\theta$ based on a random sample of size $n$ from the distribution of $X$. Then Wald's test statistic for testing $H_0: \theta = \theta_0$ is given by
(a) $n(\hat{\theta}_n - \theta_0)'(\hat{\theta}_n - \theta_0)$.
(b) $n(\hat{\theta}_n - \theta_0)' M_n (\hat{\theta}_n - \theta_0)$, where $M_n = \left(-\frac{\partial^2 \log L}{\partial\theta_i\,\partial\theta_j}\right)$.
(c) $n(\hat{\theta}_n - \theta_0)' I(\theta_0)(\hat{\theta}_n - \theta_0)$.
(d) $n(\hat{\theta}_n - \theta_0)' I(\hat{\theta}_n)(\hat{\theta}_n - \theta_0)$.

2. Suppose the distribution of a random variable or a random vector $X$, indexed by a vector parameter $\theta$, belongs to a Cramér family. Suppose $\hat{\theta}_n$ is a maximum likelihood estimator of $\theta$ based on a random sample of size $n$ from the distribution of $X$. Suppose $V_n(X, \theta_0)$ denotes the vector of score functions evaluated at $\theta_0$. Then the score test statistic for testing $H_0: \theta = \theta_0$ is given by
(a) $V_n'(X, \theta_0)V_n(X, \theta_0)$.
(b) $V_n'(X, \theta_0)I^{-1}(\theta_0)V_n(X, \theta_0)$.
(c) $V_n'(X, \theta_0)I(\theta_0)V_n(X, \theta_0)$.
(d) $V_n'(X, \theta_0)I^{-1}(\hat{\theta}_n)V_n(X, \theta_0)$.

3. Suppose $Y$ follows a trinomial distribution with cell probabilities $p_1, p_2, p_3 > 0$ and $p_1 + p_2 + p_3 = 1$. Suppose $\hat{p}_{in}$ denotes the maximum likelihood estimator of $p_i$, $i = 1, 2, 3$ and $I \equiv I(p_1, p_2, p_3)$ denotes the information matrix. Which of the following statements is/are correct?
(a) $\sqrt{n}((\hat{p}_{1n}, \hat{p}_{2n}, \hat{p}_{3n})' - (p_1, p_2, p_3)') \xrightarrow{L} N_3(0, I)$.
(b) $\sqrt{n}((\hat{p}_{1n}, \hat{p}_{2n}, \hat{p}_{3n})' - (p_1, p_2, p_3)') \xrightarrow{L} N_3(0, I^{-1})$.
(c) $\sqrt{n}((\hat{p}_{1n}, \hat{p}_{2n})' - (p_1, p_2)') \xrightarrow{L} N_2(0, I)$.
(d) $\sqrt{n}((\hat{p}_{1n}, \hat{p}_{2n})' - (p_1, p_2)') \xrightarrow{L} N_2(0, I^{-1})$.

4. Suppose $Y$ follows a multinomial distribution in $k$ cells with cell probabilities $p_i > 0$ and $\sum_{i=1}^k p_i = 1$. Suppose $\hat{p}_n$ denotes the maximum likelihood estimator of $p = (p_1, p_2, \ldots, p_{k-1})'$ and $I(p)$ denotes the information matrix. Which of the following statements is/are correct?
(a) $n(\hat{p}_n - p)' I(p)(\hat{p}_n - p) \xrightarrow{L} \chi^2_{k-1}$.
(b) $n(\hat{p}_n - p)' I^{-1}(p)(\hat{p}_n - p) \xrightarrow{L} \chi^2_{k-1}$.
(c) $n(\hat{p}_n - p)' I(\hat{p}_n)(\hat{p}_n - p) \xrightarrow{L} \chi^2_{k-1}$.
(d) $n(\hat{p}_n - p)' I^{-1}(\hat{p}_n)(\hat{p}_n - p) \xrightarrow{L} \chi^2_{k-1}$.

5. For a multinomial distribution in $k$ cells with cell probabilities $p = (p_1, p_2, \ldots, p_k)'$ with $p_i > 0\ \forall\ i = 1, 2, \ldots, k$ and $\sum_{i=1}^k p_i = 1$, suppose we want to test $H_0: p = p(\theta)$ against the alternative $H_1: p \ne p(\theta)$, where $\theta$ is an indexing parameter of dimension $l < k$. Suppose $\lambda(X)$ is a likelihood ratio test statistic based on a random sample of size $n$. If the multinomial distribution with cell probabilities indexed by $\theta$ belongs to a Cramér family, then which of the following statements is/are correct? For large $n$, $-2\log\lambda(X)$ follows
(a) $\chi^2_{k-1}$ distribution.
(b) $\chi^2_{k-l}$ distribution.
(c) $\chi^2_{kl-1}$ distribution.
(d) $\chi^2_{k-1-l}$ distribution.

6. Suppose a multinomial distribution with cell probabilities $p = (p_1, p_2, \ldots, p_k)'$, where $p_i > 0\ \forall\ i = 1, 2, \ldots, k$ and $\sum_{i=1}^k p_i = 1$, belongs to a Cramér family. For testing $H_0: p = p_0$ against the alternative $H_1: p \ne p_0$, which of the following statements is/are correct? The asymptotic null distribution of
(a) the likelihood ratio test statistic is $\chi^2_{k-1}$.
(b) Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ is $\chi^2_{k-1}$.
(c) Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ is $\chi^2_k$.
(d) $\sum_{i=1}^k (o_i - e_i)^2/o_i$ is $\chi^2_k$.
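Karl Pearson's statistic of option (b) in Question 6 is what chisq.test() computes in R for a completely specified null vector. The following minimal sketch uses made-up frequencies and probabilities, not data from the text.

## Goodness of fit test for a completely specified multinomial.
o  <- c(18, 55, 27)          # observed cell frequencies (hypothetical)
p0 <- c(0.2, 0.5, 0.3)       # hypothesised cell probabilities
e  <- sum(o) * p0            # expected frequencies under H0
sum((o - e)^2 / e)           # Karl Pearson's chi-square statistic
chisq.test(o, p = p0)        # same statistic, k - 1 = 2 degrees of freedom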


7. In a multinomial distribution with $k$ cells having cell probabilities $p = (p_1, p_2, \ldots, p_k)'$, where $p_i > 0\ \forall\ i = 1, 2, \ldots, k$ and $\sum_{i=1}^k p_i = 1$, suppose we want to test $H_0: p = p(\theta)$ against the alternative $H_1: p \ne p(\theta)$, where $\theta$ is an indexing parameter of dimension $l < k$. It is assumed that the multinomial distribution when cell probabilities are indexed by $\theta$ belongs to a Cramér family. Then which of the following statements is/are correct? The asymptotic null distribution of
(a) the likelihood ratio test statistic is $\chi^2_{k-l-1}$.
(b) Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ is $\chi^2_{k-l-1}$.
(c) Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ is $\chi^2_{k-l}$.
(d) $\sum_{i=1}^k (o_i - e_i)^2/o_i$ is $\chi^2_{k-l-1}$.

8. Suppose $Y = \{Y_1, Y_2, \ldots, Y_{k-1}\}$ has a multinomial distribution in $k$ cells with cell probabilities $p = \{p_1, p_2, \ldots, p_{k-1}\}$, where $p_i > 0$, $i = 1, 2, \ldots, k$, with $p_k = 1 - \sum_{i=1}^{k-1} p_i$. On the basis of a random sample of size $n$ from the distribution of $Y$, suppose we want to test $H_0: p = p_0$ against the alternative $H_1: p \ne p_0$, where $p_0$ is a completely specified vector. Then which of the following statements is/are correct?
(a) Wald's test statistic and $\sum_{i=1}^k (o_i - e_i)^2/o_i$ are identically distributed for large $n$, under $H_0$.
(b) Wald's test statistic and $\sum_{i=1}^k (o_i - e_i)^2/o_i$ are identical random variables, under $H_0$.
(c) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ are identically distributed for large $n$, under $H_0$.
(d) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ are identical random variables, under $H_0$.

9. Suppose $Y = \{Y_1, Y_2, \ldots, Y_{k-1}\}$ has a multinomial distribution in $k$ cells with cell probabilities $p$ being a function of $\theta$, a vector valued parameter of dimension $l \times 1$, $l < k$. On the basis of a random sample of size $n$ from the distribution of $Y$, suppose we want to test $H_0: p = p(\theta)$ against the alternative $H_1: p \ne p(\theta)$. Then which of the following statements is/are correct?
(a) Wald's test statistic and $\sum_{i=1}^k (o_i - e_i)^2/o_i$ are identically distributed for large $n$, under $H_0$.
(b) Wald's test statistic and $\sum_{i=1}^k (o_i - e_i)^2/o_i$ are identical random variables, under $H_0$.
(c) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ are identically distributed for large $n$, under $H_0$.
(d) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^k (o_i - e_i)^2/e_i$ are identical random variables, under $H_0$.

10. In an $r \times s$ contingency table for testing $H_0$: Two attributes A and B are independent against the alternative $H_1$: A and B are not independent, which of the following statements is/are correct? The asymptotic null distribution of
(a) the likelihood ratio test statistic $-2\log\lambda(X)$ is $\chi^2_{(r-1)(s-1)}$.
(b) Karl Pearson's chi-square test statistic $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/e_{ij}$ is $\chi^2_{(r-1)(s-1)}$.
(c) $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/o_{ij}$ is $\chi^2_{(r-1)(s-1)}$.
(d) $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/o_{ij}$ is $\chi^2_{rs-1}$.

11. In an $r \times s$ contingency table for testing $H_0$: Two attributes A and B are independent against the alternative $H_1$: A and B are not independent, which of the following statements is/are correct?
(a) Wald's test statistic and $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/o_{ij}$ are identically distributed for large $n$, under $H_0$.
(b) Wald's test statistic and $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/o_{ij}$ are identical random variables, under $H_0$.
(c) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/e_{ij}$ are identically distributed for large $n$, under $H_0$.
(d) the score test statistic and Karl Pearson's chi-square test statistic $\sum_{i=1}^r\sum_{j=1}^s (o_{ij} - e_{ij})^2/e_{ij}$ are identical random variables, under $H_0$.

12. In an $r \times s$ contingency table for testing $H_0$: Two attributes A and B are independent against the alternative $H_1$: A and B are not independent, the expected frequency $e_{ij}$ of the $(i, j)$-th cell is given by
(a) $e_{ij} = n_{ij}/n$.
(b) $e_{ij} = n_{i.}\, n_{.j}$.
(c) $e_{ij} = n_{i.}\, n_{.j}/n$.
(d) $e_{ij} = n_{i.}\, n_{.j}/n^2$.
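For Question 12, the expected frequencies under independence can be computed directly in R from the marginal totals. The sketch below uses a hypothetical 2 x 3 table, not data from the text, and checks the result against chisq.test().

## Expected cell frequencies e_ij = n_i. * n_.j / n under independence.
tab <- matrix(c(20, 30, 25, 15, 35, 25), nrow = 2, byrow = TRUE)
n   <- sum(tab)
e   <- outer(rowSums(tab), colSums(tab)) / n
e
chisq.test(tab)$expected     # agrees with the matrix above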

13. In a three way contingency table for testing $H_0$: Three attributes A, B and C are mutually independent against the alternative $H_1$: A, B and C are not mutually independent, the expected frequency $e_{ijk}$ of the $(i, j, k)$-th cell is given by
(a) $e_{ijk} = n_{ijk}/n$.
(b) $e_{ijk} = (n_{i..}/n)(n_{.j.}/n)(n_{..k})$.
(c) $e_{ijk} = (n_{i..})(n_{.j.})(n_{..k})/n^3$.
(d) $e_{ijk} = (n_{i..})(n_{.j.})(n_{..k})/n$.

14. In a three way contingency table, suppose there are $r$ levels of attribute A, $s$ levels of attribute B and $m$ levels of attribute C. While testing $H_0$: Three attributes A, B and C are mutually independent against the alternative $H_1$: A, B and C are not mutually independent, the number of parameters to be estimated in the null space is given by
(a) $rsm - 1$.
(b) $r + s + m - 3$.
(c) $r + s + m$.
(d) $r + s + m - 1$.

15. In a three way contingency table for testing $H_0$: A and C are conditionally independent given B, the maximum likelihood estimator of $p_{ijk}$, probability of the $(i, j, k)$-th cell, is given by
(a) $(n_{ij.}\, n_{.jk})/(n\, n_{.j.})$.
(b) $(n_{ij.}\, n_{.jk})/n_{.j.}$.
(c) $(n_{ij.}\, n_{.jk})/(n^2\, n_{.j.})$.
(d) $n_{ijk}/n$.

16. In a three way contingency table for testing $H_0$: A and C are conditionally independent given B, the number of parameters to be estimated in the null space is given by
(a) $rsm - 1$.
(b) $rs + sm - s + 1$.
(c) $rs + sm - rm$.
(d) $rs + sm - s - 1$.

17. In a three way contingency table for testing the null hypothesis that A and (B, C) are independent, the null hypothesis can be expressed as
(a) $H_0: p_{ijk} = p_{ij.}\, p_{.jk}\ \forall\ i, j, k$.
(b) $H_0: p_{ijk} = p_{i..}\, p_{ij.}\ \forall\ i, j, k$.
(c) $H_0: p_{ijk} = p_{i..}\, p_{.jk}\ \forall\ i, j, k$.
(d) $H_0: p_{ijk} = p_{ij.}\, p_{i.k}\ \forall\ i, j, k$.


18. In a three way contingency table for testing the null hypothesis that A and (B, C) are independent, the asymptotic null distribution of Karl Pearson's chi-square test statistic is $\chi^2_l$, where $l$ is
(a) $rsm - 1$.
(b) $(r - 1)(sm - 1)$.
(c) $(r - 1)sm$.
(d) $r(sm - 1)$.

19. Suppose $\{\phi_n(X), n \ge 1\}$ is a sequence of test functions based on $X = \{X_1, X_2, \ldots, X_n\}$ for testing $H_0: \theta \in \Theta_0$ against the alternative $H_1: \theta \in \Theta_1$, where $\Theta_0 \cap \Theta_1 = \emptyset$ and $\Theta_0 \cup \Theta_1 = \Theta$. The test procedure governed by a test function $\phi_n$ is said to be consistent if
(a) $\sup_{\theta \in \Theta_0} E_\theta(\phi_n(X)) \to \alpha \in (0, 1)$ and $E_\theta(\phi_n(X)) \to 1\ \forall\ \theta \in \Theta_1$.
(b) $\sup_{\theta \in \Theta_0} E_\theta(\phi_n(X)) \to \alpha \in (0, 1)$.
(c) $E_\theta(\phi_n(X)) \to 1\ \forall\ \theta \in \Theta_1$.
(d) $E_\theta(\phi_n(X)) \to \alpha \in (0, 1)$ for some $\theta \in \Theta_0$ and $E_\theta(\phi_n(X)) \to 1$ for some $\theta \in \Theta_1$.

20. Suppose $\{X_1, X_2, \ldots, X_n\}$ is a random sample from a Cauchy $C(\theta, 1)$ distribution. A test procedure for testing $H_0: \theta = 0$ against the alternative $H_1: \theta \ne 0$ is consistent if the test statistic is based on the
(a) sample median.
(b) sample first quartile.
(c) sample third quartile.
(d) sample mean.


Table 7.1 Answer Key

Q.No.   Chapter 2   Chapter 3   Chapter 4   Chapter 5   Chapter 6
1       a,c         a,c         a           c           d
2       a,d         d           d           a           b
3       a,b,d       b           c           a,c         d
4       c           a           b           a,c         a,c
5       a,b         a,d         c           d           d
6       a,c,d       b,d         a                       a,b
7       a,d         d           d                       a,b,d
8       b           c           c                       a,b,c,d
9       b,c,d       a           a,b                     a,c,d
10      a,c         c           a                       a,b,c
11      a           b           c,d                     a,c,d
12      a,b         a           a                       c
13      b,c,d       d           a                       b
14      c           a           b                       b
15      b,c         a           a,b,c                   a
16      c           a,b,d       a,b,d                   d
17      c           a,b,d       d                       c
18      a,b,c       a,b,d       b,c,d                   b
19      a,d         c           a                       a
20      c           d           a                       a,b,c

Index

A Almost sure convergence, 15 Approximate dispersion matrix, 120, 121, 124, 133, 192 Approximate variance, 96, 102, 103, 111, 172, 173, 216 Asymptotically efficient estimator, 208 Asymptotic confidence interval, 113, 115, 117, 119, 142, 155, 159 Asymptotic null distribution, 112, 269, 273, 281, 288, 364 Asymptotic relative efficiency, 219

B BAN estimator, 208, 210, 216, 219, 242 Basic inequality, 19, 405 Borel-Cantelli lemma, 17, 56, 425, 426 Bounded in probability, 17, 96, 103, 109, 123, 283, 286, 334, 340

C CAN estimator, 96, 98–100, 102, 110, 111, 120, 121, 143, 168, 173, 181, 210, 214, 216, 217, 225, 232, 314 Canonical representation, 184, 187, 189, 193 Chebyshev’s inequality, 19, 46, 57 Chisq.test, 151, 154, 302, 377, 378, 381 CLT, 18, 77, 102, 106, 172, 181, 208, 216 Complete statistic, 8, 319 Composite hypothesis, 10 Consistency of a test procedure, 372 Consistent estimator, 32, 33, 36–40, 43, 44, 46, 50, 52, 54, 74 Continuous mapping theorem, 17, 103, 283, 286, 334, 341

Convergence in law, 15, 34, 96 Convergence in probability, 15, 31, 47, 69 Convergence in r -th mean, 15, 34, 47 Coverage probability, 30, 31, 69, 97 Cramér family, 168, 199, 210, 212, 214, 242, 281, 322, 327 Cramér-Huzurbazar theory, 168, 216, 220, 283, 327 Cramér-Rao lower bound, 6, 7, 185, 208, 319 Cramér regularity conditions, 199, 203, 210, 212, 214, 221, 232, 327

D Delta method, 103, 104, 117, 122, 147, 172, 173, 194

E Empirical confidence coefficient, 155, 160 Empirical distribution function, 43, 59 Empirical level of significance, 294, 295 Estimate, 3 Estimator, 3

F False positive rate, 294 Fisher lower bound, 208 Fisher’s Z transformation, 139, 141, 156, 159

G Generalized variance, 136, 139 Glivenko-Cantelli theorem, 59



H Hypothesis of independence of attributes, 389, 390

I Identifiability, 2 Indexing parameter, 2 Information function, 6, 105, 169, 210, 214 Information matrix, 124, 127, 184, 225, 232 Invariance property of CAN estimator, 115, 122 Invariance property of consistency, 40, 61, 62, 65, 67, 114, 172, 191, 194 Inverse function theorem, 19, 171, 172, 174, 180, 184, 190, 191, 193

J Jensen’s inequality, 18, 201 Joint consistency, 60, 61

K Karl Pearson’s chi-square test statistic, 151, 154, 331, 338, 350, 351, 356, 364, 366, 368, 370, 376, 377, 380, 383, 389, 393 Khintchine’s WLLN, 18, 33, 63, 172, 202, 204 Kolmogorov’s SLLN, 18, 55, 201 Kullback-Leibler information, 201

L Lehmann-Scheffe theorem, 9, 319 Level of significance, 10 Likelihood function, 4 Likelihood ratio test procedure, 275, 299, 300, 337, 376 Likelihood ratio test statistic, 274, 331, 338, 349, 364, 377, 380

M Marginal consistency, 60, 61 Markov inequality, 19, 57 Maximum likelihood estimator, 5, 31, 36– 38, 43, 105, 171, 181, 210, 214, 216, 217, 226, 313, 363 Mean squared error, 6, 69 Method of scoring, 168, 233, 234, 241, 248, 255 Minimum sample size, 32, 77, 82, 97

Moment estimator, 5, 33, 171, 179, 217 Most powerful test, 275 MSE consistent estimator, 47–49, 53, 314 Multinomial distribution, 310, 312 Multiparameter Cramér family, 221, 223, 327 Multiparameter exponential family, 168, 183, 189, 192, 221 Multivariate CLT, 120, 121, 124, 140, 192, 314, 344

N Natural parameters, 184, 187, 189, 193 Newton-Raphson procedure, 168, 233, 234, 245, 248 Neyman-Fisher factorization theorem, 8, 42, 128, 171 Neyman-Pearson lemma, 12, 87, 274, 275 Neyman-Scott example, 68

O One-parameter exponential family, 168, 169, 171, 173, 182, 199, 317, 374

P Poisson regression, 394 Population quantile, 51 Power function, 11, 292, 295 Power series distribution, 179, 182 Prop.test, 350, 360, 383, 388 P-value, 13, 145, 158

R Rao-Blackwell theorem, 9, 319 R software, 20, 23, 26, 74, 77, 143, 167, 168, 234, 292, 374

S Sample distribution function, 43 Sample moments, 53 Sample quantile, 51, 53, 109 Sampling distribution, 5 Score function, 6, 344 Score test statistic, 274, 346, 349, 350, 356, 357, 359–361, 366, 368, 370, 377, 382, 383, 386, 389 Setting seed, 74 Shanon-Kolmogorov information inequality, 201

Shapiro-Wilk test, 143, 157 Shrinkage technique, 217 Simple hypothesis, 10 Size of the test, 11, 275 Slutsky's theorem, 18, 103, 107, 108, 114, 116, 208, 273, 283 Standard error, 115, 268, 346 Strongly consistent estimator, 55–57 Studentization procedure, 114, 142, 159 Sufficient statistic, 8, 31, 42, 169, 171, 183, 313 Super efficient estimator, 217, 219 T Test for goodness of fit, 309, 330, 377 Test for validity of a model, 375 Test function, 11 Three-way contingency table, 308, 393 Two-way contingency table, 308, 362, 389, 390 Type I error, 10, 87

Type II error, 10, 87

U Unbiased estimator, 5 Uniform consistency, 57 Uniformly consistent estimator, 58 Uniformly Minimum Variance Unbiased Estimator (UMVUE), 7, 9, 48, 319 Uniformly strongly consistent estimator, 59

V Variance stabilization technique, 114, 115, 141, 159

W Wald’s test statistic, 274, 345, 346, 349, 351, 357, 360, 361, 376, 383, 388 Weakly consistent estimator, 30