Asymptotic Theory of Testing Statistical Hypotheses: Efficient Statistics, Optimality, Power Loss and Deficiency (ISBN 9783110935998)



English Pages 299 [304] Year 2011


Table of contents:
Foreword
Preface
Notations
1 Asymptotic test theory
1.1 First-order asymptotic theory
1.2 Second order efficiency
1.3 On efficiency of first and second order
1.4 Power loss
1.5 Efficiency and deficiency
1.6 Deficiency results for the symmetry problem
2 Asymptotic expansions under alternatives
2.1 Introduction
2.2 A formal rule
2.3 General Theorem
2.4 Proof of General Theorem
2.5 L-, R-, and U-statistics
2.6 Auxiliary lemmas
3 Power loss
3.1 Introduction
3.2 General theorem
3.3 Tests based on L-, R-, and U-statistics
3.4 Proof of General Theorem: Lemmas
3.5 Proof of Lemmas
3.6 Power loss for L-, R-, and U-tests
3.7 Proofs of Theorems
3.8 Combined L-tests
3.9 Other statistics
4 Edgeworth expansion for the likelihood ratio
4.1 Introduction
4.2 Moment conditions
4.3 Case of independent but not identically distributed terms
A LeCam’s Third Lemma
B Convergence rate under alternatives
B.1 General theorem
B.2 Proof of Theorem B.1.1
B.3 L-, R-, and U-statistics
B.4 Proof of Theorem B.3.1
C Proof of Theorem 1.3.1
D The Neyman-Pearson Lemma
E Edgeworth expansions
F Proof of Lemmas 2.6.1–2.6.5
F.1 Proof of Lemma 2.6.1
F.2 Proof of Lemma 2.6.2
F.3 Proof of Lemma 2.6.3
F.4 Proof of Lemma 2.6.4
F.5 Proof of Lemma 2.6.5
G Proof of Lemmas 3.7.1–3.7.5
G.1 Proof of Lemma 3.7.1
G.2 Proof of Lemma 3.7.2
G.3 Proof of Lemma 3.7.3
G.4 Proof of Lemma 3.7.4
G.5 Proof of Lemma 3.7.5
H Asymptotically complete classes
H.1 Non-asymptotic theorem on complete classes
H.2 Asymptotic theorem on complete classes
H.3 Power functions of complete classes
I Higher order asymptotics for R-, L-, and U-statistics
I.1 R-statistics
I.2 L-statistics
I.3 U-statistics
I.4 Symmetric statistics
Bibliography
Subject Index
Author Index

MODERN PROBABILITY AND STATISTICS

ASYMPTOTIC THEORY OF TESTING STATISTICAL HYPOTHESES: EFFICIENT STATISTICS, OPTIMALITY, POWER LOSS, AND DEFICIENCY

ALSO AVAILABLE IN MODERN PROBABILITY AND STATISTICS:

Selected Topics in Characteristic Functions by N.G. Ushakov
Chance and Stability: Stable Distributions and Their Applications by V.V. Uchaikin and V.M. Zolotarev
Normal Approximation: New Results, Methods and Problems by Vladimir V. Senatov
Modern Theory of Summation of Random Variables by Vladimir M. Zolotarev

MODERN PROBABILITY AND STATISTICS

Asymptotic Theory of Testing Statistical Hypotheses: Efficient Statistics, Optimality, Power Loss, and Deficiency

Vladimir E. BENING Moscow State University, Russia

VSP
UTRECHT · BOSTON · KÖLN · TOKYO

VSP BV
P.O. Box 346
3700 AH Zeist
The Netherlands

Tel: +31 30 6925790
Fax: +31 30 6932081
[email protected]
www.vsppub.com

© VSP BV 2000
First published in 2000
ISBN 90-6764-301-7

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

Printed in The Netherlands by Ridderprint bv, Ridderkerk.


Foreword

This book is the fifth in the series of monographs 'Modern Probability and Statistics', following the books
• V.M. Zolotarev, Modern Theory of Summation of Random Variables;
• V.V. Senatov, Normal Approximation: New Results, Methods and Problems;
• V.M. Zolotarev and V.V. Uchaikin, Chance and Stability: Stable Distributions and their Applications;
• N.G. Ushakov, Selected Topics in Characteristic Functions.

The Russian school of probability theory and mathematical statistics has made a universally recognized contribution to these sciences. Its potential is very far from being exhausted and is still increasing. During the last decade many remarkable results, methods and theories appeared which undoubtedly deserve to be presented in the monographic literature in order to make them widely known to specialists in probability theory, mathematical statistics and their applications. However, due to recent political changes in Russia followed by some economic instability, it is for the time being rather difficult to organize the publication of a scientific book in Russia. Therefore, a considerable stock of knowledge accumulated during recent years remains scattered over various scientific journals. To improve this situation, together with the VSP publishing house and, first of all, its director, Dr. Jan Reijer Groesbeek, who readily took up the idea, we present this series of monographs.

The scope of the series can be seen from both the title of the series and the titles of the published and forthcoming books:
• Yu.S. Khokhlov, Generalizations of Stable Distributions: Structure and Limit Theorems;
• G.L. Shevlyakov and N.O. Vilchevskii, Robust Estimation: Criteria and Methods.

Among the proposals under discussion are the following books:

• A.V. Bulinski and M.A. Vronski, Limit Theorems for Associated Random Variables;
• V.E. Bening and V.Yu. Korolev, Compound Cox Processes and their Applications in Insurance and Finance;
• E.V. Morozov, General Queueing Networks: the Method of Regenerative Decomposition;
• G.P. Chistyakov, Analytical Methods in the Problem of Stability of Decompositions of Random Variables;
• A.N. Chuprunov, Random Processes Observed at Random Times;
• D.H. Mushtari, Probabilities and Topologies on Linear Spaces;
• V.G. Ushakov, Priority Queueing Systems;
• V.Yu. Korolev and V.M. Kruglov, Random Sequences with Random Indices;
• Yu.V. Prokhorov and A.P. Ushakova, Reconstruction of Distribution Types;
• L. Szeidl and V.M. Zolotarev, Limit Theorems for Random Polynomials and Related Topics;
• E.V. Bulinskaya, Stochastic Inventory Systems: Foundations and Recent Advances;

as well as many others. To provide highly qualified international examination of the proposed books, we invited well-known specialists to join the Editorial Board. All of them kindly agreed, so now the Editorial Board of the series is as follows:

L. Accardi (University Roma Tor Vergata, Rome, Italy)
A. Balkema (University of Amsterdam, the Netherlands)
M. Csörgö (Carleton University, Ottawa, Canada)
W. Hazod (University of Dortmund, Germany)
V. Kalashnikov (Moscow Institute for Systems Research, Russia)
V. Korolev (Moscow State University, Russia)—Editor-in-Chief
V. Kruglov (Moscow State University, Russia)
M. Maejima (Keio University, Yokohama, Japan)
J.D. Mason (University of Utah, Salt Lake City, USA)
E. Omey (EHSAL, Brussels, Belgium)
K. Sato (Nagoya University, Japan)
J.L. Teugels (Katholieke Universiteit Leuven, Belgium)
A. Weron (Wroclaw University of Technology, Poland)
M. Yamazato (University of Ryukyu, Japan)

V. Zolotarev (Steklov Institute of Mathematics, Moscow, Russia)—Editor-in-Chief

We hope that the books of this series will be interesting and useful both to specialists in probability theory and mathematical statistics and to those professionals who apply the methods and results of these sciences to solving practical problems. Of course, the choice of authors primarily from Russia is due only to the reasons mentioned above and by no means signifies that we keep to some national policy. We invite authors from all countries to contribute their books to this series.

V. Yu. Korolev, V. M. Zolotarev, Editors-in-Chief Moscow, November 1999.


Preface

The purpose of writing this book was to present an introduction to the modern asymptotic theory of testing statistical hypotheses and to give a review of the methods used for the investigation of higher order asymptotics of widely used statistics such as linear combinations of order statistics (L-statistics), linear rank statistics (R-statistics), U-statistics, etc. The author did not intend to embrace the subject completely, so the choice of material was to a great extent inspired by his own interests. In particular, and this is an indisputable drawback of the book, only purely theoretical, mathematical aspects of the subject are considered here. However, the author can justify ignoring the applied problems by the fact that otherwise the book would become frighteningly voluminous. Although all statistical notions used in the book are strictly defined, their 'physical' meaning or practical usefulness is not always discussed in full detail. Therefore, the reader who wishes to get acquainted with the statistical motivation of these notions more thoroughly is recommended to refer beforehand to some advanced textbooks in mathematical statistics, e.g., to H. Cramer (Cramer, 1946), Mathematical Methods of Statistics, E.L. Lehmann (Lehmann, 1959), Testing Statistical Hypotheses, and C.R. Rao (Rao, 1965), Linear Statistical Inference and Its Applications.

In the book, the problem of testing a simple hypothesis concerning a one-dimensional parameter against a sequence of close one-sided alternatives is considered. In doing so, we follow the so-called Pitman approach, where both the size and the power of tests are separated from zero and one, respectively, and study the second order efficiency, asymptotic deficiency and power loss.
Unlike most preceding works, where very complicated techniques involving extremely cumbersome calculations were used, in this book the basic statements are proved with the use of a method which is technically rather simple, clearer and more transparent. This method is based on the ideas of the theory due to L. LeCam. The main attention is paid to tests based on linear rank statistics, linear combinations of order statistics, and U-statistics. The book contains a great deal of heuristic reasoning and repetition which, as the author hopes, will not fatigue the readers but, on the contrary, will favor a better understanding of the material and facilitate the process of reading. Most formal proofs are taken out of the main text to the Appendices.

Asymptotic methods are widely used in mathematical statistics. This is due to the fact that within non-asymptotic settings the solution often depends on a particular type of distribution, sample size, etc., whereas the asymptotic approach provides a more general inference concerning the object under investigation. Moreover, the analysis of higher order asymptotics in problems of mathematical statistics is important from both applied and theoretical viewpoints, since it makes it possible to compare asymptotically equivalent statistical procedures (for example, tests with the same limit power or asymptotic efficiency). As a rule, the analysis of higher order asymptotics is based on asymptotic expansions, which are important by themselves as they can provide a considerable increase in the accuracy of numerical computation. Many papers deal with the applications of asymptotic expansions in statistics (see, for example, (Cramer, 1946; Bickel, 1974; Barndorff-Nielsen and Cox, 1989; Hall, 1992)). The possibility of constructing asymptotic expansions (and hence, of the analysis of higher order asymptotics) is usually connected with the existence of explicit representations for the distribution (or its characteristic function) or with the representability of the statistic of interest as a sum of independent random variables. In the first case the asymptotic expansion is derived by means of calculus, whereas in the second case the main tool is the well-developed theory of summation of independent random variables. The problems related to the construction of asymptotic expansions within this theory are expounded in the books (Cramer, 1962; Gnedenko and Kolmogorov, 1954; Petrov, 1975; Bhattacharya and Ranga Rao, 1976; Feller, 1971). However, there are many estimators and test statistics to which these methods cannot be applied. Here we can separate two approaches which make it possible to investigate the asymptotic properties of statistics in some cases.

The first approach is based on the possibility of using a special stochastic expansion of the statistic under consideration, that is, of representing the statistic, up to a remainder term, as a polynomial in normalized sums of independent random variables. The application of such expansions to testing hypotheses and problems of estimation has attracted serious attention (see, e.g., (Chibisov, 1983; Pfanzagl, 1980; Götze and Milbrodt, 1999)). The other approach (see also Appendix I), involving rather complicated techniques, was applied in the papers (Albers et al., 1976; Bickel and van Zwet, 1978) (see Section 1.6) to the construction of asymptotic expansions for linear R-statistics in the symmetry problem and the two-sample problem, and consisted in the construction of an asymptotic expansion for some conditional distribution of the initial statistic. The desired asymptotic expansion was then obtained as a result of averaging. Note that, in fact, it took almost 40 and 70 pages, respectively, to do so. However, even these methods are not always applicable. For example, consider L-statistics and U-statistics. The stochastic expansions of these statistics are not representable as polynomials in normalized sums of independent random variables, while their conditional distributions are extremely complicated. Recently, interest in L-, R-, and U-statistics has grown considerably, see, e.g., (Hájek and Šidák, 1967; Serfling, 1980; Hettmansperger, 1984; Helmers, 1984; Bickel et al., 1986; Korolyuk and Borovskikh, 1994; Lee, 1990; Bentkus et al., 1997). This growth of interest is caused by the fact that these statistics are widely used in the theory of estimation and non-parametric statistics. Important merits of these statistics are their computational simplicity and robustness (see (Huber, 1972)). Moreover, under rather general regularity conditions, these classes admit asymptotically efficient statistics. For R-, L-, and U-statistics, under some regularity conditions, many authors obtained asymptotic normality, Berry-Esseen type estimates and asymptotic expansions (see Appendix I). All of the above may be interpreted as results concerning the distributions of test statistics under the null hypothesis. However, when the properties of power functions of tests are investigated in problems of testing hypotheses, it is necessary to consider the distributions of test statistics under alternatives as well. The book is devoted to the investigation of such properties of distributions as convergence rate estimates and asymptotic expansions of asymptotically efficient statistics under alternative hypotheses. These problems can also be treated as problems of stability of convergence rate estimates and asymptotic expansions when the distribution under consideration is known up to a parameter. We also study some second order properties of the powers of tests based on asymptotically efficient statistics. In particular, we consider their asymptotic deficiency (see (Hodges and Lehmann, 1970)) and power loss. As a rule, we formulate the results in general terms of an arbitrary statistical experiment and then, as particular cases, we obtain the results for L-, R-, and U-statistics.
We consider only an asymptotic approach within which, as the sample size n grows, the size of the test remains separated from zero and a sequence of local alternatives is considered under which the power function is separated from one. Special attention is paid to asymptotically efficient tests based on L-, R-, and U-statistics for testing a simple hypothesis in a regular one-parameter family. We consider only 'regular' families in which these local alternatives approach the hypothesis as tn^{-1/2}, 0 < t < C, C > 0. In many cases the distributions under alternatives were investigated by the same methods as those used under the hypothesis. For example, let the distribution of observations depend on a parameter θ and let the simple hypothesis H_0: θ = θ_0 be tested. Assume that the test is based on a statistic T_n whose asymptotic normality with parameters (μ_n(θ), σ_n^2(θ)) can be established. Then the substitution θ = θ_0 yields the distribution under the hypothesis H_0, whereas the substitution θ ≠ θ_0 yields the distribution of the statistic T_n under the

alternative H_1. Under local or close alternatives, θ_n → θ_0, we obtain that the distribution of T_n is asymptotically normal with parameters (μ_n(θ_n), σ_n^2(θ_n)). However, this direct approach does not take into account the specificity of the problem connected with the locality of the alternatives and leads to extra regularity conditions imposed under the alternatives. A method which is adequate to the setting of the problem is provided by the theory due to L. LeCam (see (LeCam, 1955; LeCam, 1956; LeCam, 1960; LeCam, 1986)). This theory is based on the concept of contiguity (see also (Hájek, 1962; Hájek and Šidák, 1967)). In particular, if Λ_n is the logarithm of the likelihood ratio of θ_n to θ_0 and the joint distribution of (T_n, Λ_n) under θ = θ_0 has a limit, then the distribution of (T_n, Λ_n) under θ = θ_n also has a limit which is easily determined. If the limit distributions have densities p_0(x, y) and p_1(x, y), then

p_1(x, y) = e^y p_0(x, y).
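This relation (LeCam's third lemma, see Appendix A) can be verified numerically. The following sketch uses the simplest Gaussian shift model X_i ~ N(θ, 1), which is an assumed example rather than one from the book: under H_0 the pair (T_n, Λ_n) has cov(T_n, Λ_n) ≈ t, and the lemma then predicts that under θ_n = t/√n the statistic T_n is asymptotically N(t, 1).

```python
import numpy as np

# Illustration of LeCam's third lemma in the Gaussian shift model (an
# assumed example): X_i ~ N(theta, 1), H_0: theta = 0, theta_n = t/sqrt(n).
rng = np.random.default_rng(0)
n, t, reps = 400, 1.5, 20000

def joint_statistics(theta, rng):
    x = rng.normal(theta, 1.0, size=n)
    T = x.sum() / np.sqrt(n)                    # efficient test statistic
    theta_n = t / np.sqrt(n)
    # log-likelihood ratio Lambda_n(t) of theta_n against 0
    L = theta_n * x.sum() - n * theta_n**2 / 2
    return T, L

T0, L0 = np.transpose([joint_statistics(0.0, rng) for _ in range(reps)])
T1, _ = np.transpose([joint_statistics(t / np.sqrt(n), rng) for _ in range(reps)])

# Under H_0: Lambda_n ~ N(-t^2/2, t^2) and cov(T_n, Lambda_n) = t; LeCam's
# third lemma then predicts T_n ~ N(t, 1) under theta_n, confirmed below.
print(L0.mean(), L0.var(), np.cov(T0, L0)[0, 1])   # ≈ -1.125, 2.25, 1.5
print(T1.mean(), T1.var())                          # ≈ 1.5, 1.0
```

The simulated mean of T_n under θ_n equals the H_0-covariance of (T_n, Λ_n), which is exactly the shift predicted by the lemma.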

One of the most effective applications of this method is to rank statistics (see (Hájek and Šidák, 1967)), since the distribution of the vector of ranks under an alternative is considerably more complicated than under the hypothesis, which hampers the application of the direct approach. For this reason, conditional distributions were used in (Albers et al., 1976; Bickel and van Zwet, 1978) to construct the asymptotic expansion (see Section 1.6).

Let X_1, ..., X_n be independent identically distributed real-valued random variables with density p(x, θ), θ ∈ Θ ⊂ R^1, Θ being an open set containing zero. We are interested in the problem of testing the simple hypothesis

H_0: θ = 0    (1)

against the simple one-sided alternative H_1: θ > 0, θ ∈ Θ. Within this setting, with fixed n, generally speaking, there is no best (uniformly most powerful) test. But if n → ∞, then for each θ > 0, θ ∈ Θ, the power of any reasonable test tends to one (see Section 1.1). In this situation it is difficult to compare different tests. We therefore consider the power functions at 'close' alternatives of the form

H_{n1}: θ = τt,   τ = n^{-1/2},   0 < t < C,   C > 0.

Here

p(x) = p(x, 0),   F(x) = ∫_{-∞}^{x} p(t) dt,   F^{-1}(s) = inf{x: F(x) > s},   s ∈ [0, 1],

J(s) = (l^{(1)}(x))'|_{x=F^{-1}(s)},   J_k(s) = (l^{(k)}(x))'|_{x=F^{-1}(s)},   k = 0, 1,

the weights n ∫_{(i-1)/n}^{i/n} J(s) ds enter the definition of the L-statistics, and

h(x, y) = l^{(1)}(x) + l^{(1)}(y) + Ψ(x, y),    (27)

and Ψ(x, y) is a function which is measurable and symmetric in its two arguments and satisfies E_0[Ψ(X_1, X_2) | X_1] = 0 almost surely. In Chapter 2, Appendix B and Appendix F we present the asymptotic expansions and estimates of the rate of convergence of the distribution functions of these statistics under the alternative hypothesis H_{n1}. In doing so, we obtain the asymptotic expansions under alternatives from the asymptotic expansions constructed under the null hypothesis H_0. These results can be used for the construction of asymptotic expansions and estimates of power functions of tests based on such statistics. They also describe the changes in the limit law under deviations from the initial

distribution. We note once more that in the case of U-statistics, unlike the approach used in (Albers et al., 1976), the asymptotic expansion is obtained by a method which is technically simpler, does not make use of conditional distributions, and works under relaxed regularity conditions. Note that the imposed regularity conditions can be divided into two groups: regularity conditions concerning the logarithm of the likelihood ratio Λ_n(t) under the null hypothesis H_0 (we consider the weakest regularity conditions obtained in (Chibisov and van Zwet, 1984a) and presented in Chapter 4, under which, for example, the support of the density p(x, θ) may depend on θ), and regularity conditions used in the proof of the asymptotic normality and in the construction of convergence rate estimates and asymptotic expansions for L-, R-, and U-statistics (see (Albers et al., 1976; Helmers, 1984; Bickel et al., 1986)). In Chapter 3 and Appendix G we formulate and prove theorems which make it possible to determine r(t) for the statistics T_{ni}, i = 1, 2, 3, 4. As an illustration, here we present the expression for

r_2(t) = lim_{n→∞} n(β*(t) − β_{n2}(t)),

where β_{n2}(t) and β*(t) are the power functions of tests of level α ∈ (0, 1) based on the statistics T_{n2} and Λ_n(t), respectively (see (24) and (3)). Then, under the corresponding regularity conditions (see Section 3.6), the representation

r_2(t) = (φ(u_α − t√I) / (2√I)) Var[K_2 − tL_2 | L_1 = u_α √I]    (28)

holds, where φ(x) is the standard normal density and

K_2 = −(1/2) ∫_0^1 J_2(s) B^2(s) dF^{-1}(s),    (29)

L_1 = −∫_0^1 J_1(s) B(s) dF^{-1}(s),    (30)

L_2 = −∫_0^1 J_2(s) B(s) dF^{-1}(s),    (31)

and B(s) is the Brownian bridge on [0, 1] (i.e., the Gaussian process with zero expectation and covariances E B(s)B(t) = min(s, t) − st, s, t ∈ [0, 1]). Moreover, the conditional variance on the right-hand side of (28) can be written out explicitly (see Section 3.3), namely,

Var[K_2 − tL_2 | L_1 = u_α √I] = v_0 + v_1 t + v_2 t^2,    (32)

where the coefficients v_0, v_1 and v_2 are expressed through u_α and integrals of the functions J_1 and J_2 against the kernel K(s, t) = min(s, t) − st and the function μ(s) = I^{-1/2} ∫_0^s l^{(1)}(F^{-1}(t)) dt (see Section 3.3).
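The limit variables K_2, L_1 and L_2 above are functionals of a Brownian bridge. As a self-contained numerical sketch (the grid discretisation and parameter choices are assumptions of this illustration, not taken from the book), one can simulate B(s) = W(s) − sW(1) on a grid and check the covariance E B(s)B(t) = min(s, t) − st, i.e. the kernel K(s, t) entering the coefficients of (32):

```python
import numpy as np

# Simulate a Brownian bridge on a grid and verify its covariance kernel
# K(s, t) = min(s, t) - s*t (discretisation parameters are illustrative).
rng = np.random.default_rng(2)
m, reps = 200, 40000
grid = np.arange(1, m + 1) / m
steps = rng.normal(0.0, np.sqrt(1.0 / m), size=(reps, m))
W = steps.cumsum(axis=1)              # Brownian motion W(s) on the grid
B = W - grid * W[:, [-1]]             # Brownian bridge B(s) = W(s) - s W(1)

s_idx, t_idx = m // 4 - 1, m // 2 - 1  # s = 0.25, t = 0.50
emp = (B[:, s_idx] * B[:, t_idx]).mean()
s, t = grid[s_idx], grid[t_idx]
print(emp, min(s, t) - s * t)          # both ≈ 0.125
```

The same discretisation yields Monte Carlo approximations of integrals such as L_k once J_k and F^{-1} are specified for a concrete model.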

The functions J_k(s) here are defined in (27).

Section 3.8 is devoted to combined L-tests. Consider these tests in more detail. Assume that we are still interested in testing the hypothesis H_0 against a sequence of alternatives H_{n1}, but the original sample is split into a fixed number m > 1 of parts

(X_1, ..., X_{n_1}), ..., (X_{n−n_m+1}, ..., X_n),   n_1 + ... + n_m = n,    (33)

and, moreover, the ratios n_l/n tend to positive limits as n → ∞. Assume that on the basis of each subsample, asymptotically efficient L-statistics T^{(l)}_{n2}, l = 1, ..., m, are constructed by formula (24). Form the combined L-statistic T̄_{n2} by summing the statistics T^{(l)}_{n2}, l = 1, ..., m, with appropriate normalizing weights (formula (35)),

and consider the test of level α ∈ (0, 1) based on the statistic T̄_{n2}. It is clear that such a test is asymptotically efficient, since the principal terms of the stochastic expansions of Λ_n(t) and S̄_{n2} = ā_n T̄_{n2} + b̄_n coincide under an appropriate choice of ā_n > 0 and b̄_n. Therefore, it is interesting to determine the quantity

r̄_2^{(m)}(t) = lim_{n→∞} n(β_{n2}(t) − β̄_{n2}(t)) = lim_{n→∞} n(β*(t) − β̄_{n2}(t)) − r_2(t) = r_2^{(m)}(t) − r_2(t),    (36)

where β_{n2}(t), β̄_{n2}(t) and β*(t) are the power functions of tests of level α based on the statistics T_{n2}, T̄_{n2} and Λ_n(t), respectively. In Section 3.8 we present the formulas for r_2^{(m)}(t) and r̄_2^{(m)}(t). Combined tests appear, for example, if the whole sample X_1, ..., X_n is unavailable, but only its parts (33) are at hand which, due to some reasons, cannot be united into a single whole. In this case the quantities r̄_2^{(m)}(t) characterize the 'losses' due to the lack of information concerning the whole sample. These tests can also be used in the case where the initial sample is assumed inhomogeneous (although it is homogeneous in reality) and consists of homogeneous groups (33). Then r̄_2^{(m)}(t) characterizes the 'losses' due to the wrong assumption concerning the sample. Combined tests were considered in (van Zwet and Oosterhoff, 1967; Albers and Akritas, 1987). One- and two-sample combined rank tests were considered in (Albers, 1989; Albers, 1991). In these papers, R-tests were investigated and quantities similar to r̄_2^{(m)}(t) were found by a method essentially based on the construction of asymptotic expansions for the corresponding power functions. The limit under consideration was obtained from these expansions by direct cumbersome calculations without using conditional variances. The method we use here can also be applied to combined R- and U-tests and, moreover, it considerably simplifies the calculations. Chapter 1 contains the necessary introduction; in Chapter 4 we consider problems related to the construction of asymptotic expansions for the logarithm of the likelihood ratio. For completeness, in Appendices A, C, D, E, and H we present some facts and results which are intensively exploited in the main part of the book.

The author started to study the asymptotic theory of testing statistical hypotheses under the influence of D.M. Chibisov. His lively interest, numerous discussions and benevolent criticism to a great extent stimulated the work on the book.
In my work, I was supported by the advice and all-round help of my friends and colleagues. Among them, I should first mention V.Yu. Korolev, who actually was the initiator of writing this book. Throughout my work I felt consistent moral support from V.M. Kruglov and V.M. Zolotarev. Many useful remarks aimed at improving the presentation of the material were made by N.G. Ushakov. The manuscript was carefully read and prepared for publication by A.V. Kolchin. I would like to express my sincere gratitude and thanks to all my friends and colleagues mentioned above for their assistance in the writing and preparation of this book. In its final stages, the work on the book was supported by the Russian Foundation for Basic Research, grants 97-01-00271 and 99-01-00847, by the Russian Humanitarian Scientific Foundation, grant 97-02-02235, and by the grant RFBR-DFG 98-01-04108.

V.E. Bening

Notations

Throughout the book, a triple numeration is used: the first digit denotes the number of a Chapter, the second one the number of the Section in the Chapter, and the last one the number of an item (Theorem, Lemma, formula) in the Section. For example, a reference to Theorem 3.2.1 is a reference to the first Theorem of the second Section of the third Chapter.

The following notation is used throughout the book. The kth derivative of a function of θ (if it exists) is denoted by the superscript (k), e.g.,

f^{(k)}(x, θ) = ∂^k f(x, θ) / ∂θ^k,

f^{(k)}(x, 0) being the right-hand derivative at θ = 0. Derivatives with respect to x (or s) are denoted by a prime, e.g.,

f'(x, θ) = ∂f(x, θ) / ∂x.

For a function of θ at θ = 0 the argument θ will be omitted, e.g.,

f(x) = f(x, 0),   f^{(2)}(x) = ∂^2 f(x, θ)/∂θ^2 |_{θ=0}.

Let →^{a.s.}, →^P and →^d denote, respectively, almost sure convergence, convergence in probability and convergence in distribution; '=^{a.s.}' means that an equality holds with probability one. If A is an event, then 1_A(·) is the indicator of A and A^c is the complement of the set A. L_1[0,1] and L_2[0,1] denote the sets of integrable and square integrable functions on [0, 1], respectively. The letter C, sometimes with subscripts, will be used in inequalities to express the assertion that there exists a positive constant for which the inequality holds true.

N is the set of natural numbers; N_0 = N ∪ {0}; R^1 is the real line; R^p is the space of (row) vectors x = (x_1, ..., x_p), x_j ∈ R^1. X_n is a random element taking values in an arbitrary sample space 𝒳_n. Denote by ℒ(X_n) the distribution of X_n. The symbols H_{n0} and H_{n1} denote the hypotheses

H_{n0}: ℒ(X_n) = P_{n,0},   H_{n1}: ℒ(X_n) = P_{n,1},

where P_{n,0} and P_{n,1} mean distributions of X_n having densities p_{n,0}(x) and p_{n,1}(x) with respect to some σ-finite measure under H_{n0} and H_{n1}, respectively. We write E_{n,0}, E_{n,1} and Var_0, Var_1 for the expectations and variances with respect to P_{n,0} and P_{n,1}. Define the logarithm of the likelihood ratio as

Λ_n = log (p_{n,1}(X_n) / p_{n,0}(X_n))   if p_{n,0}(X_n) > 0,
Λ_n = 0   if p_{n,0}(X_n) = p_{n,1}(X_n) = 0,
Λ_n = +∞   if p_{n,0}(X_n) = 0, p_{n,1}(X_n) > 0.
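The case distinctions in this convention can be transcribed directly into code. A minimal sketch (the function name and the plain-number interface are illustrative assumptions):

```python
import math

# Logarithm of the likelihood ratio Lambda_n, with the degenerate-case
# conventions above (0 when both densities vanish, +inf when only the
# null density vanishes).
def log_likelihood_ratio(p0, p1):
    """Lambda_n for observed density values p0 = p_{n,0}(X_n), p1 = p_{n,1}(X_n)."""
    if p0 > 0:
        return math.log(p1 / p0) if p1 > 0 else float("-inf")
    if p1 == 0:
        return 0.0           # both densities vanish at the observed point
    return float("inf")      # p0 = 0, p1 > 0

print(log_likelihood_ratio(0.2, 0.4))   # log 2
print(log_likelihood_ratio(0.0, 0.0))   # 0.0
print(log_likelihood_ratio(0.0, 0.3))   # inf
```

For product densities, p0 and p1 would be the products p_{n,0}(X_n) and p_{n,1}(X_n) evaluated at the observed sample.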

For a real-valued test statistic T_n = T_n(X_n), introduce

S_n = a_n T_n + b_n,

where a_n > 0 and b_n are some real numbers; τ = n^{-1/2}, n ∈ N; τ_n ↓ 0 is a small parameter. N(μ, σ^2) means the normal distribution in R^1 with parameters μ and σ^2; Φ(·) and φ(·) denote the distribution function and the density of N(0, 1), respectively; for α ∈ (0, 1) let

u_α = Φ^{-1}(1 − α)

be the upper α-quantile of Φ(x). (X_1, ..., X_n) means independent identically distributed random variables with common density p(x, θ), with θ ranging over an open set Θ ⊂ R^1 containing zero. Denote

p(x) = p(x, 0),   l(x, θ) = log p(x, θ),   l(x) = l(x, 0),

l^{(k)}(x) = ∂^k l(x, θ)/∂θ^k |_{θ=0},   k ∈ N_0,

F(x) = ∫_{-∞}^{x} p(t) dt,   F^{-1}(s) = inf{x: F(x) > s},   s ∈ [0, 1],

J_k(s) = (l^{(k)}(x))'|_{x=F^{-1}(s)},   J(s) = J_1(s),

h_θ(x) = θ^{-1}(l(x, θ) − l(x, 0)),   θ > 0.

(R_1^+, ..., R_n^+) are the ranks of (|X_1|, ..., |X_n|); (X_{1:n}, ..., X_{n:n}) are the order statistics of the sample (X_1, ..., X_n); Γ_n(s), F_n(x), F̂_n(x) are the corresponding empirical distribution functions. Let B_n(s) = √n (Γ_n(s) − s), s ∈ [0, 1], and let B(s) stand for a Brownian bridge on [0, 1];

L_k = −∫_0^1 J_k(s) B(s) dF^{-1}(s),   L_{nk} = −∫_0^1 J_k(s) B_n(s) dF^{-1}(s),   k ∈ N,

and I = E_0 (l^{(1)}(X_1))^2 is the Fisher information. Given α ∈ (0, 1), define by β* and β_n the powers of the size-α tests based on Λ_n and T_n, respectively, and set

r = lim_{n→∞} n(β* − β_n)   or   r = lim_{n→∞} τ_n^{-2}(β* − β_n).
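The Fisher information I = E_0 (l^{(1)}(X_1))^2 defined above can be checked numerically in any concrete model. A sketch for the Gaussian location family p(x, θ) = N(θ, 1), an assumed example for which l^{(1)}(x) = x and hence I = 1:

```python
import numpy as np

# Numerical check of I = E_0 (l^{(1)}(X_1))^2 for p(x, theta) = N(theta, 1):
# the score is computed by a central difference in theta, then integrated
# against the density under theta = 0 on a fine grid.
eps = 1e-5
x = np.linspace(-8, 8, 200001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)           # density under theta = 0

def logp(x, theta):
    return -(x - theta)**2 / 2 - 0.5 * np.log(2 * np.pi)

score = (logp(x, eps) - logp(x, -eps)) / (2 * eps)   # l^{(1)}(x), here = x
I = np.sum(score**2 * p) * dx                         # E_0 (l^{(1)})^2
print(I)                                              # ≈ 1.0
```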

Let μ_1 and σ_1^2 be the expectation and the variance of l(X_1, θ) − l(X_1, 0) under H_1, respectively. Applying once again the Central Limit Theorem to Λ_n, we obtain

(Λ_n − nμ_1) / (σ_1 √n) →^d N(0, 1).

Therefore, in view of (1.1.1),

β_n = P_{n,1}{Λ_n > c_n} = 1 − Φ((c_n − nμ_1)/(σ_1 √n)) + o(1).    (1.1.2)

Application of Jensen's inequality to μ_0 and μ_1 yields μ_0 < 0 < μ_1. Hence it follows that √n (μ_1 − μ_0) → ∞ and, due to (1.1.2),

β_n = P_{n,1}{Λ_n > c_n} → 1.
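The convergence β_n → 1 at a fixed alternative is easy to observe numerically. A sketch in the Gaussian shift model X_i ~ N(θ, 1) (this model, the level α = 0.05 and the hardcoded quantile u_α are assumptions of the illustration):

```python
import numpy as np

# For a FIXED alternative theta > 0 the power of the likelihood-ratio test
# beta_n = P_{n,1}{Lambda_n > c_n} tends to 1 as n grows.
rng = np.random.default_rng(1)
theta, u_alpha, reps = 0.5, 1.6449, 20000   # u_alpha = Phi^{-1}(0.95)

def power(n):
    # Lambda_n = theta * sum(X_i) - n*theta^2/2; under H_0 it is
    # N(-n theta^2/2, n theta^2), so c_n = -n theta^2/2 + u_alpha*theta*sqrt(n).
    c_n = -n * theta**2 / 2 + u_alpha * theta * np.sqrt(n)
    x = rng.normal(theta, 1.0, size=(reps, n))        # samples under H_1
    lam = theta * x.sum(axis=1) - n * theta**2 / 2
    return (lam > c_n).mean()

print([power(n) for n in (25, 100, 400)])   # increases towards 1
```

Since the power at every fixed θ tends to 1, the comparison of tests is carried out at the local alternatives introduced next.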

This result is not sufficiently informative for the comparison of test performance, because such an evaluation would require knowledge of the rate of convergence of their powers to 1. This, however, is a complicated matter and we will not consider this problem here (see, e.g., (Chernoff, 1952; Chernoff, 1956; Bahadur, 1960a; Bahadur, 1960b; Bahadur, 1965; Bahadur, 1967; Bahadur, 1971; Groeneboom and Oosterhoff, 1977; Groeneboom, 1979; Groeneboom and Oosterhoff, 1981; Kester, 1983; Kallenberg, 1978; Kallenberg, 1981; Nikitin, 1991)). Usually, the following Pitman approach (see Subsection 1.5.1 and (Pitman, 1948; Noether, 1955)) is used: the test size α ∈ (0, 1) remains fixed, but instead of a fixed alternative θ > 0 we consider so-called local or contiguous alternatives {θ_n} for which θ_n → 0 as n → ∞ in such a way that the power tends to a limit which lies strictly between α and 1. Under natural regularity conditions (see, e.g., (1.1.2)), it can easily be shown that the class of such sequences is the class of sequences θ_n for which lim_{n→∞} √n θ_n = t for some constant t with 0 < t < ∞. To justify this approach, we might argue that large sample sizes would be relevant in practice only if the alternatives of interest were close to the null hypothesis and thus hard to distinguish with only a small sample. Relation (1.1.2) shows that we are concerned with θ_n such that

μ_1 − μ_0 = O(n^{-1/2}).

Thus, for any 0 < t < C, C > 0, we consider testing Ho: θ = 0 against H„ ( : θ = τί. Throughout the book we use the abbreviation τ = η -1/2

(1.1.3)

4

1. Asymptotic test theory

and let Ρ„,ο and Pn>i denote the distributions ofX„ = (X\, ...,Xn) under Ho and H„ ( respectively. Obviously, they have the densities η

π.

Pn, o(x) = ΠΡοίΧί), 1

Ρ/ι,ί(χ) = I J p « ^ ) 1

(1.1.4)

with respect to the corresponding product measure, χ = (χι, ...,xn). The respective expectations are denoted by En>o and En>i (with subscript η omitted when applied to a function of single Xi). Denote by ßn(t) = Ε ηιί Ψ η (Χ„)

(1.1.5)

the power of a test Ψ η for Ho against the local alternative H n i . Considered as a function of t, this sequence converges for every feasible test to a monotone continuous function taking values in (0,1) (see (1.1.14)). Using the reasoning from (Pfanzagl, 1980, p. 43), we observe that this reparametrization is not only a matter of technical convenience. In meaningful applications of tests, we know which alternatives we wish to discriminate from the hypothesis with high probability, say, y, and we choose the sample size accordingly. Hence it is reasonable to compare the power of different tests for alternatives with maximal rejection probability y, irrespective of the sample size. Exactly this is achieved if we compare the power functions βn(t) for fixed t. Assume that all measures Ρ n t are mutually absolutely continuous. Consider the loglikelihood ratio * (t) ί+\ = log ι — ^ »— ^ / (XJ i r \ = log 1 Pnjt&n) A ——. n ""ra,0

Ρη,ΟίΛη'

Then by (1.1.4) η

Λ„(ί) =

τί) - OXi, 0)].

(1.1.6)

i=l By the Taylor series expansion, l{Xi,τί)

= Ttla\Xi)

- l(Xi,0)

+ \{xtfl(2\Xi)

+ ...

(1.1.7)

Here and in what follows the Ath derivative of a function with respect to θ is denoted by the superscript k. For a function of θ at θ = 0 the argument θ will be often omitted, e.g., li2\x)=

d2 ^2*i rejects Ho when An(i) > c n }

with c n f defined by (assuming continuity of the corresponding distribution) Pn,o{A„(i) > c n>i } = a. Using (1.1.9) and the Central Limit Theorem we obtain JS?(A„(f) I H 0 )

• JV (-\t2I,t2l)

.

(1.1.10)

Hence - \t2I,

(1.1.11)

ß*(t) = Ρ Πιί (Λ„ω > cn,t).

(1.1.12)

Cnf

> Ct = tVlUa Φ(ua) = 1 -

a.

The power of this most powerful test is

It is well known from the LAN theory that (it also easily follows from (1.1.9)) seuΜ*) I H n>i ) — > J / (±t 2 i, t 2 i ) .

(1.1.13)

6

1. Asymptotic test theory

Thus (1.1.11H1.1.13) yield > ß(t) = Φ(tVl - ua).

ß*(t)

(1.1.14)

These results have been obtained in (Wald, 1941). Observe that /3*(i), known as the envelope power function (i.e., the supremum over all size-α tests of the power at rt), is not the power function of a single test. The envelope power function renders a standard for evaluating the power function of any particular test. For each t > 0 it is the power of the most powerful test against H„i( based on An(t). Thus it provides an upper bound for the power of any test for Ho against H i : ί > 0. It is well known that there are many (first order) asymptotically efficient tests, i.e., the tests whose power function ßn(t) converges to the same limit as ß^(t). Those are, for example, the tests based on on An(to) with an arbitrary ίο > 0, on the maximum likelihood estimator Θη, on a certain linear combination of order statistics; on a certain [/-statistics; for θ location parameter there are asymptotically efficient rank tests (see Chapter 3). Hence there is an abundance of tests such that ßn(t)

>ß(t),

t> 0,

(1.1.15)

1.e., of tests which are most powerful for Ho against H„ ( to within o(l) for every t > 0. They can be compared with each other by higher order terms of their power. We even have the result (see (Pfanzagl, 1974, p. 31, Theorem 6)) that if (1.1.15) is satisfied for a single t > 0, then (1.1.15) holds true for all t > 0 (efficiency up to o(l) for a single t > 0 implies efficiency up to o(l) for all t > 0). Before proceeding to the higher-order theory, let us derive some simple formulas to be used in what follows. Denote by poAx) a n d Pi,t(x) the limiting densities of An(t) under Ho and Ηη>ί respectively, which correspond to the normal distributions in (1.1.10) and (1.1.13). Observe that they are related to each other by exPo,t(x) = Pi,t(*)> which follows from the properties of the loglikelihood ratio or can be verified directly. We will need expressions forpo,t(ct) andp\f{c t ). Substituting (1.1.11) into the explicit expressions for normal densities (1.1.10), (1.1.13) yields P 0 , ( c ) = ^= 0,

and let us need expressions for the differences of the corresponding sizes and powers up to ο(δη). Assuming certain regularity, so that the distribution functions of An(t) under Pn>o and Pn i possess the Edgeworth expansions (see Appendix E), it is easy to see that these differences are completely determined by the leading terms of these expansions, because the subsequent terms will contribute at most 0(τδη) = ο(δη). The leading terms are the normal distributions we have just discussed. Thus it is immediately seen that β

0^-0^

= δηρο/ct) + ο(δη) = ^=φ(ω α ) + ο(δη),

tv*

(1.1.17)

(1.1.18)

1.2. Second order efficiency Topically, an asymptotically efficient test statistic (properly normalized) has the score function L ^ as its leading term, so that it is of the form Tn=L(»

+ xQn + ...,

(1.2.1)

with Qn bounded in probability. For example (see (1.1.9)), A„(io) is equivalent Tn=L?+\rt«Lf.

For rank statistics (R-statistics) and linear combinations of order statistics (L-statistics) Qn can be written as a quadratic functional of the empirical process (centered and normalized empirical distribution function) (see Chapter 3, Section 3.3). In the seventieths, for the power functions ßn(t) of various asymptotically efficient tests an expansion in τ to terms of order τ2 was obtained. The objective was to study the deficiencies (see Subsection 1.5.3) of the corresponding tests, which we will briefly discuss later on. Writing down such expansions in an explicit form required very involved calculations. For 'parametric' test statistics, first a 'stochastic expansion' of the form (1.2.1), but containing also the T2 term was derived. It was used to obtain the Edgeworth expansions (see Appendix E) for the distributions of Tn under Ho and H n i . For the rank statistics, a different technique based on a certain conditioning was used in (Albers et al., 1976; Bickel and van Zwet, 1978). The Edgeworth expansion

8

1. Asymptotic test theory

under Ho was used to obtain an expansion in τ for the critical value dn defined by P„,0{T n > dn} = a. Then the Edgeworth expansion for ßn(t) = Ρn,t{Tn > 0} forms an asymptotically complete class. This means that for any sequence of size-α tests having powers ßn(t), there exists a sequence s„ > 0 such that 0. Here (t) stands for the power of the size-α test based on An(sn). It was shown in (Chibisov, 1983, p. 1069, Example 2.2) that βB*(f) - ß : ß ) = ^ ^ j f ^

1

-"«)

where A , = \t\t

- sf (Var0 l{2\Xr) - Γ1 C o v ^ 1 ^ ) ,

P\X0))

These formulas show that the loglikelihood ratio tests for different s do not dominate each other and their powers differ by terms of order τ 2 , unless Dt s vanishes. It does so when {PQ, θ e 0 c R 1 } is an exponential family because then a uniformly most powerful test exists and ß ^ i t ) does not depend on s > 0.

1.3. On efficiency of first and second order It has been observed in Section 1.2 that if two tests are asymptotically efficient for the same testing problem, then typically their powers will not only agree to first but to second order. A general result of this type was given in (Pfanzagl, 1979) entitled 'First order efficiency implies second order efficiency\ Because of their technical nature, however, these contributions give little insight into the nature of this phenomenon. In this section, following (Bickel et al., 1981), we provide an intuitive understanding of the phenomenon (see Section 1.2) by proving Theorem 1.3.1 of this kind under a somewhat loose assumption in terms of a general statistical experiment (3Cn, s f n , {Pra,o, Ρη,ι})· A formal proof of Theorem 1.3.1 is given in Appendix C. Though this proof is straightforward, a non-mathematically inclined reader may wish to skip it. Let a random element of a measurable space srf n ) be observed, η = 1,2,3,... Denote by JSf(Xn) the distribution of X„. Consider the problem to test the hypothesis H„o: Jf(Xn) = P„,o

against the alternative H„i: J f ( X n ) = Pn>i,

1.3. On efficiency of first and second order

11

where Ρ„,ο and Pn_i are two distributions on n , srf n ) having densities p n o M andPni(x) with respect to some σ-finite measure. We write E„,o, E„ i and Varo, Vari for the expectations and variances with respect to Ρ„,ο and Ρ„,ι. Define the logarithm of the likelihood ratio as log^"'1^!, PnßiXm) Λ„ = < 0, +oo,

pnfi(Xn) > 0, Ρη,θ(Χη) =Ρπ,ΐ(Χίΐ) = 0, Ρη,θ(Χη) = 0, Ρη,ΐίΧη) > °·

By the Neyman-Pearson Lemma (see Appendix D), the most powerful level-α η test is based on A„: τ;(Λ„,α„) = {°· [1,

^"^ί01"'· An > cn(an),

with Εη,θΚ(Λ*> «η) = ßn(1Ψ*η(Αη, On), where /3*(α„) is the maximum attainable power against Pn>i at level a n . For a real-valued competing test statistic Tn = T„(X„), consider also a level-a n right-sided test based on Tn, i.e., Ψ(Τ

a)_i0,Tndn(an). We have Ε„, 0 Ψ η (Γ η , On) = a n ,

ßniOn) = ΕΠ>1Ψ„(Τ„, CCn),

where βη(θη) is the power of this test against Pn>i at level a n . Assume t h a t this test is asymptotically efficient (see Section 1.1). For most statistical problems such efficient tests exist. Typically, an efficient test statistic after a linear transform is close to the loglikelihood ratio (see Sections 1.1 and 1.2), i.e., Δ„ =

0, Sn — An,

Sn= Αη = ±eo, otherwise.

with Sn = anTn + bn, is small in a proper sense for some an > 0 and bn e R 1 . For independent identically distributed observations from a smooth parametric family, this is the case where Tn possesses an efficient score statistic as its leading term (see Sections 1.1 and 1.2). For a sequence y{n) e (0,1], we say that the sequence of level-an tests Ψη(Τη, is y{n)-efficient i f , for η —> ßn(an) ~ ßn(an) = o(y(n)).

an)

(1.3.1)

12

1 .Asymptotic

test theory

In a more usual terminology (when X„ is formed by η independent identically distributed observations, see Section 1.2), the first and second order efficiency correspond to y(ra)-efficiency with γ(η) = 1 and γ(η) = τ respectively. THEOREM 1.3.1 (Bickel et al, 1981). Let lim inf a n > 0, 71—

(1.3.2)

and let there exist Β > 0 such that for any y e R 1 , any δ > 0 and η —>

sup P„,o{x - y(n) ß } = ο(γ(η)),

(1.3.5)

Ρ „ , ι { Δ η < - Β } =o(y(n)).

(1.3.6)

of level-an

tests Ψη(Τη,

an) is

y{n)-efßcient.

Theorem 1.3.2 below concerning the rate of convergence in (1.3.1) immediately follows from the proof of Theorem 1.3.1 (see Appendix C). THEOREM 1.3.2. Suppose

that

lim inf On = C0, 71— and that for anyy

C0 > 0,

e R 1 there exists C i ( y ) > 0 such

sup Ρ„,ο{* - γ(η) 0, i = 2 , 3 , 4 , and δη i 0, δη i 0, such that

Ε η , 0 |Δ η |Ι ( ^ ( η ) 3 ) (|Δ„|) < 02(Β)δηγ(η),

(1.3.9)

Ρ„,ο{Δα > B } < C3(ß)5„y(n),

(1.3.10)

Ρ„,ι{Δ„ < -Β} < 04(Β)δηγ(η).

(1.3.11)

Then

0 < ßn(ccn) - ßniOn.) < γ(η)εη,

(1.3.12)

where εη = 2e2DB%C1(2D)

+ eB+DC2(B)5n

+ eDC3(B)5n

+

C4(B)8n,

(1.3.13) D = Β - log(C0 - C3(B)önY(n)).

(1.3.14)

1.4. Power loss

13

1.4. Power loss The difference ß*n(t) -

ßn(t)

is closely related to the deficiency of the corresponding test (see Subsection 1.5.3), which is the number of additional observations needed for this test to achieve the same power as the most powerful test. This notion was introduced in (Hodges and Lehmann, 1970). Deficiencies of various tests were extensively studied in 70s in (Albers et al., 1976) for rank tests, in (Chibisov, 1983; Pfanzagl, 1980) for 'parametric' tests, in (Jureckovä, 1979; Bender, 1980; Albers, 1984a; Albers, 1984b; Klaassen and van Zwet, 1985; Bening and Chibisov, 1999) for the test theory with nuisance parameters, and others. When the limit r(i):= Lim n(/3„*(f ) - & ( * ) )

(1.4.1)

exists, the asymptotic deficiency is finite and can be directly expressed through this limit. We do not present this relationship here (see Subsection 1.5.3, (1.5.39)). Rather, we directly deal with quantity (1.4.1), which we refer to as the power loss. This quantity was actually the object of the studies on deficiency. As we pointed out, its derivation was very involved. An elaboration of the argument given in the previous Section leads to the following formula for the power loss. Suppose that An=Sn-

An(t)

as in (1.2.2) is of order τ in a somewhat stronger sense than before. Namely, assume that (VnAn,An(t))

converges in distribution under P„,o to a certain bivariate random variable. Denoting Π„ =

\/nAn,

we rewrite it as (Π„,Λπ(ί))-^(Π,Λ).

(1.4.2)

In all regular cases Λ is a normal random variable (see (1.4.9)). Denote its distribution function and density by Φι(χ) and Po/x). Let ct be the limiting critical value defined by Φι(θί) = 1 - a. Then r(i) = L i m n ( ß * ( t ) - &(*)) = \eCtp0f(ct) Var[IT | Λ = cj.

(1.4.3)

14

1. Asymptotic test theory

Observe that (see (1.1.10H1.1.13), (1.1.16)) ec'Po,t(ct) =

Pu(ct),

where pi,t(x) is the limiting density of A„(i) under Pn>i and r ct = tVIua 1

ι ο

1 p0,t(x) = j-^φ

- ψΐ,

f2x - t2l\

(2x + t2l\ I J,

(1.4.4)

(2x + t2l\ =

α4

·5>

Combined with (1.4.3) and (1.1.16), these relations yield r(t) = —^ 0,

τ = ή~υ2.

Consider an asymptotically efficient test based on Τ η1 = τ Σ ΐ α Χ Χ 0 = ΐ } » , t=l la\x)=

^log

(1.4.7)

ρ(χ,θ) θ=0

Writing out the Taylor expansion of A„(i) as described in Section 1.1 (see (1.1.8) for the notation and (1.1.9)), we obtain An(t) = tL™ - \t2I + \rt2 (Lf mk = E0l{k\Xi),

+ i f m 3 ) + • • ·,

(1-4.8)

k = 1,2,...

We introduce Sn — tTn — gi I-

Assume that the joint distribution of ( L ^ . L ® ) converges under Ho to a normal one. Denote by (L ( 1 ) ,L ( 2 ) ) a random vector in R 2 having this limiting distribution.

15

1.5. Efficiency and deficiency

Then (1.4.2) holds with Δπ = - | τ i 2 ( 4 2 ) + i i m 3 ) ...,

(1.4.9)

A = £L(1) — \t2I, n=-ii2(L(2>

+

iim3).

Then we obtain (see (1.4.4), (1.4.6)) tit) = 2tVl

- tVl) Var[n I L (1) = V!ua]

t3 = —j=(p(ua - tVi)(y/ar0l(2)(Xi) 8

-

I^El^iX^iX^).

v7

(1.4.10)

In the above reasoning we assume that the tests are of size α exactly, but formula (1.4.3) remains valid as the sizes converge to α and equal each other up to oil 2 ), i.e.,

Ρη,θ{Λ„(ί) > Cnf}

2

- Pnfi{Tn > dn) = θ(T ).

Formula (1.4.3) demonstrates, in particular, that the power loss (hence the deficiency) is determined by the terms of order τ of the asymptotically efficient test statistic. We give an informal proof of (1.4.3) in Section 3.2. This 'proof was first presented in (Chibisov, 1982a; Chibisov, 1982b). Its justification, however, depends on the structure of Tn. Formula (1.4.3) was proved in (Chibisov, 1985) for statistics admitting a stochastic expansion in terms of sums of independent identically distributed random variables (which is typical for 'parametric' problems) subject to certain conditions (see Section 3.9).

1.5. Efficiency and deficiency In this section, following (Hodges and Lehmann, 1956; Hodges and Lehmann, 1960; Hodges and Lehmann, 1970; Serfling, 1980; Pitman, 1948; Noether, 1955; Albers, 1974; Albers, 1975; Hettmansperger, 1984), we consider the concepts of asymptotic relative efficiency and deficiency, which were introduced in (Pitman, 1948; Hodges and Lehmann, 1970) respectively.

1.5.1. Asymptotic relative efficiency Consider two competing statistical procedures Πι and Π2 for the same problem. Assume that we agree on a criterion by which the performance of Πι and Π2 is measured, which we denote as π\ and πι, respectively. If Πχ and Π2 are

1 .Asymptotic test theory

16

unbiased estimators, it will usually be the variance of the estimates, if Πι and Π2 are tests, this criterion will usually be the power of the tests, etc. For η = 1,2,..., consider statistical procedures Πι = Π„ι and Π2 = Π„2 based on η observations. Let π„ι and 7^2 denote its qualities. Define kn in the following way: if Π„ι is based on η observations then kn is the number of observations which is needed for Π„2 to attain the same level of performance as Π„ι· So we search for a number kn such that

Here kn is treated as a continuous variable, the performance of Π„2 being defined for real η by linear interpolation between consecutive integers. Comparison of the procedures Π„ι and Π„2 involves comparison of η and kn, and this can be carried out in various ways. A possible way to compare Π„ι and Π„2 is to study the behavior of η K

n

which is usually regarded as the relative efficiency of the procedure Π„2 with respect to the procedure Π η χ. In general, it is not possible to find en for a fixed value of n, as the exact values of the qualities of Π„ι and Π„2 are usually not known. Suppose t h a t the specified performance criterion is tightened in a way that causes the required sample size η and kn to tend to infinity. If in this case the ratio en tends to some limit e, the value e is called the asymptotic relative efficiency of Π„2 with respect to Π„ι, which conveys a great deal of useful information in a compact form. As an example, let us consider estimating. Let θ„ι and θη2 denote competing estimators for a parameter Θ. Suppose that Jz? (ηγσ-\θ)(θηι - θ)) 1), y > 0, i = 1,2, as η - > If our criterion is based on the variance of the asymptotic distribution, then the two estimates 'perform equivalently' if σ?(θ) η*

of(0) ~ k*

η '

e

~"™kn-{a2W)J

[σ!(θ)\νγ •

If, however, we adopt as the performance criterion the probability concentration of the estimate in an ε-neighborhood of Θ, for ε > 0 specified and fixed, then a different quantity emerges as the measure of asymptotic relative efficiency. For a comparison of θη\ and θ„2 by this criterion, we may consider the quantities Jtni = Ρη,θ{\θηί ~ θ\ > ε}, 1 = 1,2,

17

1.5. Efficiency and deficiency and compare the rates at which these quantities tend to 0 as η —» cases, the convergence is 'exponentially fast': 7i - 1 log/foi -> γί(θ,ε),

In typical

i = 1,2.

In such a case, the two procedures may be said to 'perform equivalently" at respective sample sizes kn and η satisfying Πηΐ

>1,

Jini = log ηni,

1-1,2.

In this case we arrive at . τι γ2(θ, ε) e = Lim — = ———-. Κ η(θ, ε) It is thus seen that the 'asymptotic variance' and 'probability concentration' criteria yield differing measures of asymptotic relative efficiency. It can happen in a given problem that these two approaches lead to discordant measures (one having value greater than 1, the other, less than 1). For example, see (Basu, 1956). Further introductory discussion of asymptotic relative efficiency may be found in (Bahadur, 1967; Fraser, 1957a; Rao, 1965; Cramer, 1946). Now we consider the Pitman approach to comparison of tests. Let Ψ„χ = Ψ„ι(Χη), X„ = (Χι, ...,Xn), and Ψη2 = Ψη2(Χη) be two tests for the hypothesis Ho: θ = θο against the alternative Hi: θ > θ0. Then the relative efficiency of Ψη2 with respect to Ψ„ι is the ratio n/kn, where η and kn are the number of observations necessary to give Ψ„χ and Ψ„2 the same power β for a given level a. In general, this ratio depends on the particular alternative chosen (as well as n). However, in the asymptotic case, this somewhat undesirable phenomenon can be avoided. It might be argued that restriction to the asymptotic case is even more undesirable in itself, but the unfortunate fact remains that for many test procedures in current use the asymptotic power function is only one available. The concept of asymptotic relative efficiency is due to (Pitman, 1948) (see also (Noether, 1955)). Observe that, at least for consistent tests, the power with respect to a fixed alternative tends to 1 if the number of observations is sufficiently large (see also Section 1.1). Therefore, the power no longer provides a worthwhile criterion for preferring one test over another. On the other hand, it is possible to define sequences of alternatives θ η changing with η in such a way that as η °° the power of the corresponding sequence of tests converges to some number less than 1. It seems then reasonable to define the asymptotic relative efficiency

18

1. Asymptotic

test theory

of the second test with respect to the first test as a limit of the corresponding ratios nlkn, where kn is defined by Ε η , Λ ι ί Χ η ) = α,

Ε„>θ0Ψ*,,2(Χ*π) = α,

α e (0,1)

and = βηΐ(θη) = Εη,β,Ύηΐ&η) = ß, nkn2 = ßkn2(en) = Ε„,θηΨΑη2(ΧΑ„ ) = ß,

ß e (0,1).

A theorem due to Pitman allows us to compute this limit if certain general conditions are satisfied. Namely, let Ψ„(Χ„) be a test for the hypothesis Ho against the alternative θ > θο based on η observations, let Tn = Tn(Xn) be the test statistic, and let μ*(θ)= ΕnßTn>

σ^(θ) = Var0 Τη.

Now we formulate Pitman's Conditions relative to a neighborhood θο < θ < θο + δ, δ > 0 of the null hypothesis. (PA) For θ e [θο, θο + δ], μ„(θ) is τη times differentiable, with μ^\θ0) = ... = μ^ _ 1 ) (θ 0 ) = 0,

μ(™\θο) > 0.

(ΡΒ) For some φ(η) —> °° and some constant ν > 0, _ . φ(η)σ (θο) Lim —τ - ; η = ν. ^ μη (θο) (PC) For θη = θο + 0(φ~υηι(η)), and for any sequence θ„ 1 θο Lim

—- = 1, σ·2(θ0)

Lim -7-; = 1. μη (θο)

(PD) For some continuous strictly increasing distribution function Fix) sup sup I Ρη,θ {σ~Ηθ){Τη - μ„(θ) < χ} - F(*)| θο—θ-θο+5 χ

> 0,

η ->

In typical cases, the test statistics under consideration will satisfy (PA)(PD) with F(x) = Φ(ζ) in (PD), m = 1 in (PA), and φ(η) = y/n,

θη - θ0 + it,

in (PC). In this case . >/ησ (θο) Lim —7TTη = v. Μη (θο) T

t > 0,

19

1.5. Efficiency and deficiency THEOREM 1.5.1 (Pitman-Noether).

(i) Let Tn satisfy (PAMPD) and

Ψη(Χ„) = I(Clli00)(Tn(Xn)) with {Tn >cn}

a,

(0,1).

ae

For a < β θηΨη(Χ„) = Ρ η β η { Τ η >Cn}^

β

if and. only if (θη - θοΓ φ(η) IJt— ml ν

> F , (1 — a) — F,

(1 — β).

(1.5.1)

(ii) Let W

=

ί = 1>2>

where Ί^ each satisfy (PAMPD) with common F(x), m, φ(η) and V;, i = 1,2 in (PB). Let φ(η) = ηγ, γ > 0. Then Pitman's asymptotic relative efficiency ο/"Ψ„2(Χη) relative to Ψ„ι(Χη) is e = Lim — = ( — ) kn \v2J PROOF. The proof is based on (Serfling, 1980). Using (PD), we obtain |1 - βη(θη) ~ F(a-\en)(cn

- μη(θη))\

> 0,

η ->

Thus, βηΟη) -> β if and only if a~\entcn By the above reasoning,

- μη(θη)) -> F ~ x ( l - β). an->

(1.5.2)

a

if and only if o^HflbXc» - μη(θ0)) -> F ^ U - a).

(1.5.3)

20

1. Asymptotic

test theory

In view of (PC), hence it follows that (1.5.2) and (1.5.3) together are equivalent to (1.5.3) together with σ~Ηθο)(μη(θη)

-

μ„(θ0))

F~Hl

- a ) - F~Hl

(1.5.4)

~ β)•

Making use of (PA) and (PB), we obtain -hav

ra^

f o

*

σ η (θθΧμη(θη)-μη(θθ))

^θη)(θη

-

vrnU Am) (f)

νπι\μη

θοΓφ(η)

,

>

(βο)

where θη e [θο, θ„]. Thus, by (PC) (1.5.4) is equivalent to (1.5.1), which proves part (i). Now consider the tests based on T^iXn) and T*2)(X„) with sizes ccni = Εη>θοΨηί(Χη) = Ρπ,θοί^ > Cm} -> a, Let a < β < 1 and

θη = θ0 + ίφ-1/πι(η),

ae (0,1),

i = 1,2.

t > 0.

By (i), if kn is the sample size at which performs 'equivalently' to with sample size n, that is, at which and T^1* have the same limiting power β for the given sequence of alternatives, so that ßnlWn) = Ε„,θ„Ψηι(Χ„) = Ρη,θΛΤ^ > Cnl} ß k M )

2)

= Ε„,θ„Ψ,„2(Χ,„) = Ρ η , θη {7ΐ > ckn2}

β, β,

then we must have (kn) proportional to φ(η) and (θη - θ0Γ

Φ(η) _ (θη - θρΓ

m!

vi

ml

φ(Ηη) V2

or (kn)

^ V2

φ(η)

vi'

For φ(η) = ηγ,

we thus conlude that (\ Kf J ) ' ^ - V·

2

This gives the result sought for.



Clearly, e is of a local nature; it is sometimes called the Pitman-efficiency to avoid confusion with other efficiency measures.

1.5. Efficiency and deficiency

21

REMARK 1.5.1. It is clear that alternatives of the type

θηe -distribution. Now let $ be the class of all tests of Ho satisfying (PAMPE) and suppose that 9 contains a test statistic, say T*, such that (1) For every given a e (0,1) and t > 0, no other test in 2) has a larger asymptotic power than T*; then ν* < ν

for all

Tn e 9.

(1.5.5)

(2) v* > 0. A test based on statistic T* satisfying (1.5.5) is called the best test in and the following theorem is true. THEOREM 1.5.2 (van Eeaden, 1963). If & contains a best test based on statistic T* and test based on Tn e Q> with ν > 0 and ρ(θ„)

> ρ(θ0) = P,

θ0,

then

PROOF. Consider the test Ψ„λ, λ > 0, based on the statistic Τηλ = λ!Γ* + (1 — λ)Τη, where Λ is a constant independent of n. It is easy to verify that Τηχ e 3) for all λ >0. Moreover, μ^(θ)

= λμ™(θ) + (1 -

λ)μ£\θ),

σξλ(θη) -> λ 2 + (1 - λ) 2 + 2λ(1 - λ)ρ,

θη -> θ0,

where μηλ(θ) = Ε η_θΤηλ, μη(θ) = Ε

ηβΤη,

Μη*(θ) = Επ>θΤ*,

i=

l,...,n.

Finally, let (α(1), ...,a(n)) be a vector of scores (which are some given functions of i and ή), and set η Tn = Y/aiR+)sgniXi). i=1

(1.5.6)

Every test for the symmetry problem t h a t is based on a statistic of the form (1.5.8), is called a linear rank test. Examples of test statistics of the form (1.5.6) are (1) The sign test with aii) = 1,

i = 1, ...,n.

1. Asymptotic test theory

26

(2) Wilcoxon's signed rank test with a{i) = i,

i = l, ...,n.

(3) A test based on the statistic Tn with weights a(i) satisfying ra(i)

2 /

(1.5.7)

φ(χ) dx

Jo

Thus. i = 1, ...,n.

(1.5.8)

This test is similar to van der Waerden's two sample test, and we will call this test van der Waerden's one sample test. (4) A test where a(i) is the expected value of the ith order statistic of a sample of size η from a ^-distribution. This test, which is asymptotically identical with van der Waerden's one sample test (cf., e.g., (Lehmann, 1959)), was considered in (Fraser, 1957b). The fact that the linear rank tests introduced above are valid for all symmetric distribution functions H(x) yields, of course, an advantage over the tests like the f-test and the Xn-test. On the other hand, it seems plausible that, as a price for their wider validity, the former are less powerful than the latter. Obviously, the choice between the two types of tests heavily depends on the weight of this price. In this connection, it seems interesting to know efficiencies (or deficiencies, see Subsection 1.5.3) of the linear rank tests with respect to the tests like the ί-test and the X„-test (see also (Hodges and Lehmann, 1956; Hodges and Lehmann, I960)). We consider here only the X n -test for Ho based on the sample mean (1.5.9) Before we can evaluate them, we have to give a more precise formulation of the circumstances under which we want to compare the two types of tests. Consider again some parametric family & = {^(σ _ 1 (χ - θ))}, where F(x) is known, continuous, and symmetric about zero. Let Wn(Sn) be a test for Ho: θ = 0,

27

1.5. Efficiency and deficiency

based on the statistic Sn. The most general alternative is, of course, (Ho)c, but for the sake of simplicity we restrict our attention to the one-sided alternative hypothesis Hi: θ > 0. Let Ψη(Τη) be a linear rank test for Ho, based on some statistic T„ of the form (1.5.6). Then we have to compare the performance of Ψ„(Τ„) and Ψ η (5 η ) under Ηχ. As the performance criteria, we take the power functions of Ψ η (Τ„) sind Ύ η (8 η ), denoted by /3j, respectively. To find the efficiency of Ψ η (Τ„) with respect to Ψ„(5 η ), we have to know the behavior of /3j and /3f as the sample size η —» For a fixed test size a e (0,1) and a fixed alternative Θ, the power of every feasible test tends to 1 as η —» °° (see Section 1.1). Typically, Ψ η (5„) and Ψη(Τη) are such feasible tests, and therefore we obtain Lim βJ = Limßf = 1. ra-x» " n-x» " This result is not sufficiently informative for the evaluation of efficiencies. Such an evaluation would require knowledge of the rate of convergence of /3j and to 1. This, however, is a complicated matter, and for linear rank tests very little is known about it. Therefore, the following Pitman approach is used: the test size a e (0,1) remains fixed but instead of a fixed alternative θ > 0 we consider the so-called local or contiguous alternatives. This means that we look at sequences of alternatives {θ„} for which θη —> 0 as τι °° at such rate that the power tends to a limit which lies strictly between α and 1. It can be demonstrated that under natural regularity conditions (see, e.g. (1.1.2)) the class of these sequences is usually the class of sequences θη for which Lim >/ηθη = t, η— for some constant t with 0 < t < For such sequences, methods are available to find Lim ßZ(a, θη), Lim βξ(α, θη). By comparing these limits we can evaluate the asymptotic relative efficiency e = e(T,S) of Ψ„(Τ„) with respect to Ψ„(δ η ) for fixed test sizes a and local alternatives H„i: θ = tn~m, 0 0.

1. Asymptotic test theory

28

Similatly, there exist, for each a linear r a n k test Ψ„(Τ*), which is asymptotically most powerful among all linear r a n k tests for the hypothesis of symmetry Ho against H„i. For example, assume that = { Φ ί σ Λ χ - θ))}, where Φ(χ) is the standard normal distribution function. If the scale parameter σ is known, Ψ η (5*) is the test based on Xn (see (1.5.9)); if the scale parameter σ is unknown, Ψη(Ξ*) is the ί-test. In both cases, Ψη(Τ*) is the normal scores test. Obviously, Ψ η (Τ*) cannot be better than Ψ„(δ*), as the linear r a n k tests for Ho and H„i form a sub-class of the class of all tests for Ho and H„i. In particular, e* =e(T*,S*)
η—»οβ I ' J n—wo I ' 0 't 1 I 1=1 / \ i=l = (Eol-Xil)2 =

(1.5.13)

π

For the Wilcoxon test, the asymptotic relative efficiency with respect to a best test may be found from (1.5.10) with a(i) = i,

α(ί) = Φ

1 1

/n + l + i\ (— — I, V 2n + 2 J '

i=

l,...,n,

namely

^

Σ"=ι i2 Σ?=ι(Φ" 1 ((2η + 2)~Hn + 1 + i)))2

( / ο ^ Φ - ^ Ι + βίβ**») 2 3 =3 ^ '— = - = 0.9549. β(φ-Η(1 + s)/2))2ds π

(1.5.14)

The asymptotic relative efficiency of the sign test with respect to the Wilcoxon test follows from (1.5.12) and (1.5.14): e = 2/3. This efficiency is not equal to the squared correlation coefficient between the test statistics. For the correlation coefficient, we find from (1.5.10) (cf. (van Eeaden and Benard, 1957)) γ · /q ρ = Lim =— 2 ^ yjn ΣΓ=ιί 2 and

e = 3/4.

(1.5.15)

30

1. Asymptotic test theory 2. Χ ι HAS A DOUBLE EXPONENTIAL DISTRIBUTION:

f(x - Θ) = F\x - θ) =

The best test in this case is the sign test (cf. (Ruist, 1954; Hoeffding and Rosenblatt, 1955)). The asymptotic relative efficiency of van der Waerden's one sample test with respect to the best test thus follows from the correlation coefficient between the test statistics. This correlation coefficient is independent of the distribution of Xi, the two tests being simultaneously nonparametric. The asymptotic relative efficiency of van der Waerden's one sample test with respect to the sign test is thus 2/π for a sample from a double exponential distribution, the same as the asymptotic relative efficiency of the sign test with respect to van der Waerden's one sample test for a sample from a normal distribution. For the Wilcoxon test, we find (cf. (1.5.15) e = 3/4

for the asymptotic relative efficiency with respect to the best test. For the test based on the sample mean, the asymptotic relative efficiency with respect to the best test follows from (1.5.11) with a(i) = 1, i = 1, ...,η; σ 2 is the variance of a double exponential distribution with density fix — Θ) =

ie-l*-0l

and Zi is the ith order statistic of the absolute value of (Xi,...,Xn); hence e = p 2 = Lim ^(ση)- 1 £ En,oZ; j

= (σ^ΕοίΧιΐ) 2 = 1/2.

(1.5.16)

Finally, we consider a test similar to van der Waerden's one sample test, i.e., we choose a(i) so that (cf. (1.5.7)) ra(i) / e~xdx= Jo

i i = Ι,.,.,η,

(1.5.17)

i = Ι,.,.,η.

(1.5.18)

n +1

or a(t) = log

71 + 1

:,

η+1—ι

The asymptotic relative efficiency of this test with respect to the best test follows from (1.5.10) with a(i) = l,

ä(i) = log —^-7—:, n+1—ι

i = Ι,.,.,η,

1.5. Efficiency and deficiency

namely

e = ρ² = Lim [Σ_{i=1}^n log((n + 1)/(n + 1 − i))]² / [n Σ_{i=1}^n log²((n + 1)/(n + 1 − i))] = [∫_0^1 log(1 − s) ds]² / [∫_0^1 log²(1 − s) ds] = 1/2,    (1.5.19)
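The two Riemann integrals behind (1.5.19) can be checked by direct quadrature; a midpoint-rule sketch in Python (the logarithmic singularity at the endpoint is integrable, so the crude rule suffices):

```python
import math

N = 1_000_000
h = 1.0 / N
# midpoint rule for int_0^1 log(u) du = -1 and int_0^1 log(u)^2 du = 2
I1 = h * sum(math.log((j + 0.5) * h) for j in range(N))
I2 = h * sum(math.log((j + 0.5) * h) ** 2 for j in range(N))

# efficiency in (1.5.19): (int_0^1 log(1-s) ds)^2 / int_0^1 log^2(1-s) ds = 1/2
print(I1 ** 2 / I2)
```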

so the test with weights (1.5.18) is of the same efficiency as the test based on the sample mean. The same holds true for the normal distribution: van der Waerden's one sample test is of the same efficiency as the test based on the sample mean.

REMARK 1.5.5. As already observed earlier in this subsection, van der Waerden's one sample test and Fraser's test are asymptotically identical; their asymptotic relative efficiency is 1 for a sample from any distribution and their correlation coefficient is asymptotically 1. Thus, these tests are, for a sample from, e.g., a double exponential distribution, an example of the case where the relation e = ρ² holds and neither of the two tests is the best one. We now present an example of this situation with e ≠ 1.

Let T_{n1} and T_{n2} be two test statistics in 𝒯 with

0 < v_2 < v_1,   ρ(θ_n) → ρ(θ_0),   θ_n → θ_0,    (1.5.20)

and

ρ ≠ v_2/v_1.    (1.5.21)

Consider the test based on the statistic

T_{nλ} = λT_{n2} + (1 − λ)T_{n1},   λ > 0,

where λ is a constant independent of n. Choose λ = λ* so that v_λ is a minimum. Then

λ* = (v_1 − ρv_2) / ((1 − ρ)(v_1 + v_2)) > 0,   v_{λ*}² = v_1²v_2²(1 − ρ²) / (v_1² − 2ρv_1v_2 + v_2²) < v_2².    (1.5.22)


Thus, if T_{nλ*} ∈ 𝒯, then T_{nλ*} is of a higher efficiency than T_{n1} and T_{n2}. Further, for the asymptotic correlation coefficient between T_{nλ*} and T_{n2} we arrive at the representation

ρ* = (λ* + (1 − λ*)ρ) / (λ*² + (1 − λ*)² + 2λ*(1 − λ*)ρ)^{1/2}.    (1.5.23)

Thus, T_{n2} and T_{nλ*} are two test statistics for which the relation e = ρ²

holds. The fact that T_{nλ*} is not necessarily a best test follows from the example below. Let (X₁, ..., X_n) be a sample from a normal distribution with known variance and let T_{n1} and T_{n2} be the Wilcoxon and the sign test statistics. Then

v_1² = 3/π,   v_2² = 2/π,   ρ² = 3/4

(see (1.5.15)). Then ρ ≠ v_2/v_1 and

v_{λ*}² = v_1²v_2²(1 − ρ²) / (v_1² − 2ρv_1v_2 + v_2²) = 3/(2π(5 − 3√2)) < 1.

A best test in this case has v* = 1; therefore, T_{nλ*} is not a best test.
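The closed form quoted in this example can be reproduced directly; a short Python check with v₁² = 3/π, v₂² = 2/π and ρ² = 3/4 (cf. (1.5.15)):

```python
import math

v1_sq, v2_sq = 3 / math.pi, 2 / math.pi   # Wilcoxon and sign constants
rho = math.sqrt(3) / 2                     # rho^2 = 3/4

num = v1_sq * v2_sq * (1 - rho ** 2)
den = v1_sq - 2 * rho * math.sqrt(v1_sq * v2_sq) + v2_sq
v_lam_sq = num / den

closed_form = 3 / (2 * math.pi * (5 - 3 * math.sqrt(2)))
print(v_lam_sq, closed_form)               # both ~0.6304, and less than 1
```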

1.5.3. Deficiency

In Subsection 1.5.1, a possible way to compare statistical procedures Π_{n1} and Π_{n2} with the help of the ratio n/k_n was considered. Another way to compare Π_{n1} and Π_{n2} is to consider the behavior of the difference k_n − n, i.e., the number of additional observations required by the less effective procedure. Such difference comparisons have been performed from time to time (see, for example, (Fisher, 1925; Walsh, 1949; Pearson, 1950)). Although this difference seems to be a very natural object to examine, historically the ratio n/k_n was preferred by almost all authors in view of its simpler behavior (see, e.g., Subsection 1.5.2). The first general investigation of k_n − n was carried out in (Hodges and Lehmann, 1970), where k_n − n is called the deficiency of Π_{n2} with respect to Π_{n1} and denoted by

d_n = k_n − n.

If Lim_{n→∞} d_n exists, it is called the asymptotic deficiency of Π_{n2} with respect to Π_{n1} and denoted by

d = Lim_{n→∞} d_n.


As concerns the relation between efficiency and deficiency, we distinguish two cases. In the first case, if

e = Lim_{n→∞} n/k_n ≠ 1,

it does not make much sense to consider the deficiency: if n tends to infinity, d_n does so at the same rate as n. If e = 1, however, the situation is essentially different. In this case, Π_{n1} and Π_{n2} perform equally well in the first order, and from the fact that e = 1 we cannot even deduce which of the two procedures is better. Hence, to be able to judge the difference in performance between Π_{n1} and Π_{n2}, we have to apply a more refined measure here, and it becomes interesting to consider d_n. In view of this, we restrict the study of deficiencies to the case where e = 1, which occurs in many important statistical problems. Of course, d is not so easy to compute as e because it requires computing an additional term in the corresponding asymptotic expansions. Under the assumption that e = 1, we calculate d_n and d as follows. Denote the performance criteria for Π_{n1} and Π_{n2} by π_{n1} and π_{n2}, respectively. By definition, d_n = k_n − n may, for each n, be found from the equation

π_{k_n,2} = π_{n1}.    (1.5.24)

If Π_{n1} and Π_{n2} are tests for the same hypothesis at the same level α ∈ (0,1), then, typically, π_{ni} = π_{ni}(θ_n), i = 1, 2, are the powers of these tests against the same sequence of local alternatives parameterized by a parameter θ. If Π_{n1} is more powerful than Π_{n2}, we search for a number k_n = n + d_n such that π_{k_n,2}(θ_n) = π_{n1}(θ_n). In order to solve (1.5.24), k_n has to be treated as a continuous variable. There are several ways to do this in a quite satisfactory manner. One possibility is the following: let [x] denote the integer part of x; then for non-integer k_n we take the sample size [k_n] or [k_n] + 1 with probability 1 − k_n + [k_n] or k_n − [k_n], respectively, thus yielding a continuous expected sample size k_n. The performance of Π_{n2} is then measured by

π_{k_n,2} = (1 − k_n + [k_n])π_{[k_n],2} + (k_n − [k_n])π_{[k_n]+1,2}.

As we already mentioned, π_{n1} and π_{n2} are generally not known exactly, and we have to use asymptotic results. To find the asymptotic relative efficiency


of Π_{n2} with respect to Π_{n1}, it suffices to arrive at an asymptotic result of the following kind:

π_{n1} = v_1/n^r + o(n^{−r}),   π_{n2} = v_2/n^r + o(n^{−r}),    (1.5.25)

for certain v_1 and v_2 not depending on n and for a positive constant r. From (1.5.24) and (1.5.25) it follows that

(n/k_n)^r = v_1/v_2 + o(1),

and therefore,

e = (v_1/v_2)^{1/r}.

In particular, if v_1 = v_2 = v, we arrive at e = 1. In order to calculate the deficiencies, the information contained in (1.5.25) is not sufficient; we need a somewhat stronger result, namely

π_{n1} = v/n^r + γ_1/n^{r+s} + o(n^{−r−s}),   π_{n2} = v/n^r + γ_2/n^{r+s} + o(n^{−r−s}),    (1.5.26)

for certain v and γ_i, i = 1, 2, not depending on n and certain positive constants r and s. The leading terms of both expansions in (1.5.26) are chosen to be identical, because we are only interested in the deficiencies if e = 1. From (1.5.24) and (1.5.26) we can solve for d_n:

rv d_n/n^{r+1} = (γ_2 − γ_1)/n^{r+s} + o(n^{−r−s}),

which yields

d_n = ((γ_2 − γ_1)/(rv)) n^{1−s} + o(n^{1−s}).    (1.5.27)
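The passage from (1.5.26) to (1.5.27) can be illustrated by solving (1.5.24) numerically for a continuous k_n; the constants below are hypothetical, chosen only so that the case s = 1 (a finite deficiency) is visible:

```python
# Illustration of (1.5.26)-(1.5.27) with hypothetical constants:
# pi_n1 = v/n^r + g1/n^(r+s),  pi_n2 = v/n^r + g2/n^(r+s).
v, g1, g2, r, s = 2.0, 1.0, 3.0, 1.0, 1.0

def pi1(n): return v / n**r + g1 / n**(r + s)
def pi2(n): return v / n**r + g2 / n**(r + s)

def k_of_n(n):
    # solve pi2(k) = pi1(n) for a continuous k by bisection (pi2 is decreasing)
    lo, hi = float(n), 10.0 * n
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pi2(mid) > pi1(n):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 10**4
d_n = k_of_n(n) - n
print(d_n, (g2 - g1) / (r * v))   # with s = 1, d_n tends to (g2 - g1)/(r v) = 1
```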

Hence

d = ±∞ (with the sign of γ_2 − γ_1) if 0 < s < 1;   d = (γ_2 − γ_1)/(rv) if s = 1;   d = 0 if s > 1.

We compare the test that is appropriate when a given value of σ can be relied upon with the t-test, which places no reliance on such an assumed value of σ, in terms of deficiency. When σ is known, the hypothesis H₀ is accepted at level α ∈ (0,1) on the basis of n observations (X₁, ..., X_n) when

√n X̄_n ≤ u_α,    (1.5.31)

where Φ(u_α) = 1 − α. For the sake of simplicity, set σ = 1. Then the power of the test (1.5.31) is

β_{n1}(θ) = π_{n1}(θ) = Φ(√n θ − u_α).    (1.5.32)

Consider now the t-test based on k observations, with the acceptance region

√k X̄_k ≤ c_k S_k,    (1.5.33)

where S_k denotes the sample standard deviation and c_k is chosen so that the test has level α.

The power of the latter test can be represented as

β_{k2}(θ) = π_{k2}(θ) = EΦ(√k θ − c_k S_k).    (1.5.34)

We search for k_n such that β_{k_n,2}(θ) = β_{n1}(θ) = Φ(√n θ − u_α).
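Both powers (1.5.32) and (1.5.34) are easy to approximate numerically. A Monte Carlo sketch in Python with illustrative values k = 20, θ = 0.6, α = 0.05; here c_k is calibrated empirically on the simulated S_k sample rather than taken from the t distribution:

```python
import math, random

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
k, alpha, theta = 20, 0.05, 0.6
u_a = 1.6448536269514722            # upper 0.05 quantile: Phi(u_a) = 0.95

# draws of S_k, the sample standard deviation of k standard normals
S = []
for _ in range(20000):
    xs = [random.gauss(0.0, 1.0) for _ in range(k)]
    m = sum(xs) / k
    S.append(math.sqrt(sum((x - m) ** 2 for x in xs) / (k - 1)))

# calibrate c_k so that the level E Phi(-c_k S_k) equals alpha (cf. (1.5.33))
lo, hi = 0.0, 10.0
for _ in range(60):
    c = 0.5 * (lo + hi)
    level = sum(Phi(-c * s) for s in S) / len(S)
    if level > alpha:
        lo = c                      # level too high: increase the critical value
    else:
        hi = c
c_k = 0.5 * (lo + hi)

power_t = sum(Phi(math.sqrt(k) * theta - c_k * s) for s in S) / len(S)  # (1.5.34)
power_z = Phi(math.sqrt(k) * theta - u_a)                               # (1.5.32)
print(power_z, power_t)
```

As expected, the t-test pays a small price in power for not knowing σ, which is exactly what the deficiency quantifies.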


Let us set η_k = √k θ, add and subtract c_k within the parentheses, and expand the right-hand side about η_k − c_k. Formula (1.5.34) then becomes

π_{k2}(θ) = EΦ(√k θ − c_k S_k) = E[Φ(η_k − c_k) + ⋯].

For τ = n^{−1/2} and t > 0, we consider testing H₀: θ = 0 against

H_{n,t}: θ = τt,



based on the sample (X₁, ..., X_n). It is well known (see Sections 1.1–1.4) that there are many (first order) asymptotically efficient tests, i.e., tests whose power function β_n(t) converges to the same limit as β*(t). Though the Edgeworth expansions for the distributions of various asymptotically efficient test statistics and of Λ_n(t) differ by terms of order τ, it was observed that their powers β_n(t) differ from each other and from β*(t) by o(τ), the latter meaning that the power agrees with β*(t) up to terms of order τ. Edgeworth expansions for β_n(t) and β*(t) can then be written down (see Sections 1.2–1.4).

For the sign test, T_n has a binomial distribution, and its standardized version is asymptotically normal; consequently, P{|T_n − E T_n| ≤ c √(Var T_n)} converges to a positive limit as n → ∞, for ε < p < 1 − ε. The binomial distribution hence places asymptotically a strictly positive mass on an interval of length 2c√(Var T_n). As the binomial distribution is integer-valued, this mass has to be divided over at most 2c√(Var T_n) + 1 points. But this implies that there is at least one point where the binomial distribution function has a jump which is at least of order 1/√(Var T_n). Since Var T_n = np(1 − p), this jump is at least of order τ. However, in view of (1.6.2), the Edgeworth expansions are continuous functions. As continuous functions obviously approximate functions with jumps of order τ no sharper than to order τ, it becomes obvious that in the present case (1.6.3) never holds. Hence, as far as the sign test is concerned, this approach does not work. Fortunately, the relative simplicity of the distribution of T_n in this case allows us to establish an expansion for the power of the sign test by other methods.

If the scores a(i) are not all equal, the situation may be different: T_n will attain more values than in the case of the sign test. Hence the probability mass of its distribution can be divided over more points, which may lead to jumps in the distribution function of T_n that are of sufficiently small order. To illustrate this, we consider Wilcoxon's signed rank test, where

a(i) = i,   i = 1, ..., n.

In this case, too, the distribution of T_n is asymptotically normal, and moreover, T_n is also integer-valued. Hence, the same reasoning as in the case of the sign test implies that the distribution function of T_n must have at least one jump of order at least 1/√(Var T_n). But here Var T_n is of order Σ_{i=1}^n i², i.e., of order n³, which implies that at least one jump of order at least τ³ occurs. Under these circumstances, an Edgeworth expansion satisfying (1.6.3) might exist.

The examples above suggest the following conclusions: it is not necessary to require that each summand a(i)V_i itself is non-lattice, as is prescribed by the standard conditions. We only need that the lattice on which the standardized sum (T_n − E T_n)/√(Var T_n) is concentrated is sufficiently fine. This can be achieved by imposing a suitable condition on the a(i) which prevents them from getting too close to each other. It can be demonstrated that these conclusions are true and that the following condition on the a(i) is sufficient: assume that there exists a constant δ > 0 such that

ν{x: there exists j such that |x − a(j)| < ζ} ≥ δζn
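The two lattice claims above (largest jump of order τ for the sign statistic, of order τ³ for the signed rank statistic) can be checked exactly for moderate n; a Python sketch computing both null distributions:

```python
import math

def max_jump_sign(n):
    # sign statistic: Binomial(n, 1/2); largest jump of its distribution function
    return max(math.comb(n, j) for j in range(n + 1)) / 2.0 ** n

def max_jump_signed_rank(n):
    # null distribution of Wilcoxon's signed rank statistic via the
    # generating polynomial prod_{i=1}^{n} (1 + x^i), divided by 2^n
    p = [1.0]
    for i in range(1, n + 1):
        q = p + [0.0] * i
        for j, c in enumerate(p):
            q[j + i] += c
        p = q
    return max(p) / 2.0 ** n

n = 200
print(max_jump_sign(n) * math.sqrt(n))      # approaches sqrt(2/pi): order n^(-1/2)
print(max_jump_signed_rank(n) * n ** 1.5)   # stays bounded: order n^(-3/2)
```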

i = l, ...,n.

In this case, too, the distribution of Tn is asymptotically normal, and moreover, Tn is also integer-valued. Hence, the same reasoning as in the case of the sign test implies that the distribution function of Tn must have at least one jump of order at least l/^/Var Tn. But here Var Tn is of order £)*=1 i2, i.e., of order n3, which implies that at least one jump of order at least τ 3 occurs. Under these circumstances, an Edgeworth expansion satisfying (1.6.3) might exist. The examples above suggest the following conclusions: it is not necessary to require that each summand a(i)Vi itself is non-lattice, as is prescribed by the standard conditions. We only need that the lattice which the standardized rp ρ rp sum Jy arT " is concentrated on is sufficiently fine. This can be achieved by imposing a suitable condition on a(i) which prevents them from getting too close to each other. It can be demonstrated that these conclusions are true and that the following condition on the α(ΐ) is sufficient: assume that there exists a constant δ > 0 such that v{x: there exists j such that \x — a(j)\ < ζ} > δζτι



for some ζ > τ³ log n,


where ν{·} denotes the Lebesgue measure. This weak condition is satisfied, e.g., for the scores of Wilcoxon's signed rank test and for those of the one-sample normal scores test. For the scores of the sign test, it is, obviously, not satisfied. The last problem we have to deal with is the fact that the a(i)V_i are not independent. To reveal the nature of this dependence, we consider the joint distribution of V₁, ..., V_n. First we recall briefly the relevant notation from Subsection 1.5.2. Let (X₁, ..., X_n) be n independent identically distributed random variables with distribution function F(x). Let 0 < Z₁ < ... < Z_n denote the ordered sequence of the absolute values |X₁|, ..., |X_n|. Define V₁, ..., V_n as follows: V_i = 1 if the X_j corresponding to Z_i is positive, and V_i = 0 otherwise, i = 1, ..., n. Finally, denote the corresponding vectors by X, Z, and V. Now we immediately see that, conditioned on Z = z, the random variables V₁, ..., V_n are independent with P{V_i = 1 | Z

= z} = f(z_i)/(f(z_i) + f(−z_i)),   f(x) = F′(x),   i = 1, ..., n.

Therefore,

P{V₁ = 1, ..., V_n = 1} = E ∏_{i=1}^n f(Z_i)/(f(Z_i) + f(−Z_i))

and

P{V_i = 1} = E f(Z_i)/(f(Z_i) + f(−Z_i)),   i = 1, ..., n.

As 0 < Z₁ < ... < Z_n are obviously dependent, it follows that, in general,

P{V₁ = 1, ..., V_n = 1} ≠ ∏_{i=1}^n P{V_i = 1}.

Hence, V₁, ..., V_n are not independent, and we cannot give an Edgeworth expansion directly for the distribution function F_n(x) in (1.6.1). However, as V₁, ..., V_n are independent conditioned on Z = z, we can give an Edgeworth expansion Φ_n(x | z) for the conditional distribution function of the standardized T_n given Z = z.

By taking the expectation of this conditional expansion with respect to Z, we then find for F_n(x) the expansion

Φ_n(x) = EΦ_n(x | Z).

(1.6.4)
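The dependence of the V_i can be made concrete for n = 2 and a normal shift (illustrative choice θ = 0.5, F = Φ(· − θ)): the event {V₁ = V₂ = 1} is just {both observations positive}, while the marginals follow by quadrature from the conditional formula above. A deterministic Python check:

```python
import math

theta = 0.5

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P{V_1 = 1} for n = 2: the observation with the smaller |X| is positive,
# P{V_1 = 1} = 2 * int_0^inf phi(x - theta) * P(|X| > x) dx
N, top = 200000, 12.0
h = top / N
p1 = 2.0 * h * sum(
    phi(x - theta) * (1.0 - Phi(x - theta) + Phi(-x - theta))
    for x in ((j + 0.5) * h for j in range(N))
)
p2 = 2.0 * Phi(theta) - p1       # since E(V_1 + V_2) = 2 P{X_1 > 0}
joint = Phi(theta) ** 2          # P{V_1 = 1, V_2 = 1} = P{X_1 > 0, X_2 > 0}

print(joint, p1 * p2)            # joint > p1 * p2: the V_i are dependent
```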

1.6. Deficiency results for the symmetry problem

The result in (1.6.4) is the formal solution to our problem: after removing several obstacles, in (Albers et al., 1976) the Edgeworth expansion was obtained for the distribution function of T_n. It remains, however, to formulate the conditions under which this formal result can be translated into a more explicit one. This is a very technical matter which we leave out of account almost completely here. We merely indicate the kind of restrictions under which such a simplification can be achieved. First, if we do not consider general distribution functions F(x), but only those which occur under the hypothesis of symmetry or under contiguous alternatives, we can evaluate the expectation in (1.6.4). For simplicity, we restrict ourselves to the particular kind of contiguous alternatives determined above (see Subsection 1.5.2), i.e., we consider the contiguous location alternatives

H_{n1}: θ = θ_n = tn^{−1/2},

where below we write η_n = √n θ_n.

From (1.6.6) and (1.6.8), the deficiency d_n of the normal scores test with respect to the X̄_n-test is easily found. As

√(n + d_n) θ_n = η_n(1 + (1/2)τ²d_n + O(τ⁴)),

the power of the normal scores test based on n + d_n observations is

β_{n+d_n}(θ_n) = Φ(η_n − u_α) + (1/2)τ²d_n η_n φ(η_n − u_α) + o(τ²),   n → ∞.

This result shows that in the normal case the price for using a distribution-free test is surprisingly low.

As the second example, we briefly consider the logistic case, where

F(x) = 1/(1 + e^{−x}).

Here Wilcoxon's signed rank test is the asymptotically most powerful rank test. For any particular θ₀, let Ψ*(θ₀) be the most powerful test for H₀: θ = 0 against the simple alternative θ = θ₀, and let d_n denote the deficiency of Wilcoxon's test with respect to Ψ*(θ₀). It can be shown (see (Albers et al., 1976)) that d_n has a finite limit d, depending on u_α and θ₀.

Hence, against the logistic alternatives of type (1.6.5), a finite number of additional observations suffices to compensate for the amount by which the power of Wilcoxon's test falls short of the maximum power attainable.

Finally, we pay some attention to the question whether the asymptotic results in (1.6.6), (1.6.7), (1.6.9), and (1.6.10) are of value for small to moderate sample sizes. Using exact power results from the literature, it can be shown that the power expansions (1.6.6) and (1.6.7) already yield excellent approximations for samples of sizes 5–20. The situation becomes entirely different, however, if we have long-tailed distributions under the alternative. For example, in the case of Wilcoxon's signed rank test against Cauchy alternatives, the expansion obtained by the above methods leads to very bad results for the same range of sample sizes. As concerns the deficiency approximations in (1.6.9) and (1.6.10), there appears to exist a satisfactory agreement between the values these approximations yield and those which are derived from the


exact power results from the literature, again for sample sizes 5–20. More information on finite sample results, including some tables, is given in (Albers, 1974).

2. Asymptotic expansions under alternatives

When the limiting distributions of test statistics are specified under the hypothesis in a certain sense, LeCam's Third Lemma (see Appendix A) allows us to obtain their limiting distributions under close alternatives. In this chapter we generalize LeCam's Third Lemma by using the Edgeworth-type asymptotic expansions (see Appendix E) in the case of asymptotically efficient test statistics (see Chapter 1). A general theorem (see Theorem 2.3.1) is proved, which is then re-formulated for linear combinations of order statistics (L-statistics), linear rank statistics (R-statistics), and U-statistics (see Theorem 2.5.1).

2.1. Introduction

Asymptotic expansions under alternatives play an important role in the theory of testing hypotheses (see, e.g., (Bickel, 1974; Chibisov, 1983; Pfanzagl, 1974; Pfanzagl, 1980; Does, 1984)). Many authors have used asymptotic expansions for the investigation of the asymptotic efficiency of tests (see, e.g., (Chibisov, 1974; Pfanzagl, 1974; Pfanzagl, 1979; Albers et al., 1976; Bickel and van Zwet, 1978)). Along with providing better approximations to be used for numerical purposes, these expansions were applied to obtaining asymptotic expansions for the power of tests. Such expansions are of special interest in the case of asymptotically efficient tests (see Chapter 1), which have the same limiting power and may be distinguished from one another by higher order terms.

Let a random element X_n of a measurable space (𝒳_n, 𝒜_n) be observed, n = 1, 2, 3, ... Denote by ℒ(X_n) the distribution of X_n. Consider the problem of testing the hypothesis

H_{n0}: ℒ(X_n) = P_{n,0}


against the alternative

H_{n1}: ℒ(X_n) = P_{n,1},

where P_{n,0} and P_{n,1} are two distributions on (𝒳_n, 𝒜_n) having densities p_{n,0}(x) and p_{n,1}(x) with respect to some σ-finite measure. We write E_{n,0}, E_{n,1} and Var₀, Var₁ for the expectations and variances with respect to P_{n,0} and P_{n,1}. Define the logarithm of the likelihood ratio, log Λ_n, via

Λ_n = p_{n,1}(X_n)/p_{n,0}(X_n)   if p_{n,0}(X_n) > 0,
Λ_n = 0   if p_{n,0}(X_n) = p_{n,1}(X_n) = 0,
Λ_n = +∞   if p_{n,0}(X_n) = 0, p_{n,1}(X_n) > 0.

By the Neyman-Pearson Lemma (see Appendix D), the most powerful test is based on Λ_n. For a real-valued competing test statistic T_n = T_n(X_n), consider also a test based on T_n. A widely used approach consists in the study of the asymptotic behavior of a test statistic for an arbitrary underlying distribution, which may correspond either to the hypothesis or to the alternative. Namely, let the underlying distribution depend on θ and test statistics T_n be used to test H₀: θ = θ₀.

Suppose that, for every θ, one can prove that T_n is asymptotically normal 𝒩(μ_n(θ), σ_n²(θ)). Then, for a sequence of local alternatives, say θ_n → θ, we obtain 𝒩(μ_n(θ_n), σ_n²(θ_n)) as the asymptotic distribution of T_n, provided the convergence is uniform in θ. This 'direct' approach may be suitable when the hypothesis plays no special role in the whole family of distributions. But even then, a more appropriate method would be to take the local nature of the alternatives into account at the very beginning. This is done in the theory based on the concept of contiguity (see Appendix A) developed in (LeCam, 1960); see, e.g., the monographs (Roussas, 1972; Hájek and Šidák, 1967). In particular, if Λ_n is the logarithm of the likelihood ratio of the distributions of the sample under θ_n and θ₀ and the joint distribution of (T_n, Λ_n) under H₀ converges to a limit, then the distribution of (Λ_n, T_n) under θ = θ_n also has a limit, which is easily determined: if the limiting distributions have densities p₀(x, y) and p₁(x, y), then p₁(x, y) = e^y p₀(x, y). This rests on the fact that the difference Δ_n = a_nT_n + b_n − Λ_n is asymptotically negligible for suitable constants a_n > 0 and b_n. For independent identically distributed observations from a smooth parametric family, this is the case when T_n has an efficient score statistic as its leading term (see Section 1.2). Based on this fact, we may expect that an asymptotic expansion under the alternative for the distribution function of such a statistic can be obtained from the asymptotic expansion under H_{n0} (as in (2.1.1)). In this chapter we present sufficient conditions under which the above appears to be true. Under some additional assumptions, our main theorem allows us to obtain an asymptotic expansion for the distribution function of T_n under H_{n1}. In doing so, we refine LeCam's Third Lemma by using asymptotic expansions for distributions of these statistics. The regularity conditions required to arrive at these asymptotic expansions are imposed under the main hypothesis H_{n0} and are formulated in terms of a general statistical experiment.
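The relation p₁(x, y) = e^y p₀(x, y) is the limiting form of the exact change-of-measure identity E₀[Λ_n f(T_n)] = E₁[f(T_n)], which can be simulated directly; a Python sketch for the normal location model (illustrative values n = 20, θ = 0.2):

```python
import math, random

random.seed(7)
n, theta, thr, reps = 20, 0.2, 0.3, 100000

weighted, direct = 0.0, 0.0
for _ in range(reps):
    # sample under the hypothesis N(0,1) and reweight with the likelihood ratio
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    log_lr = theta * sum(xs) - n * theta * theta / 2.0
    if sum(xs) / n < thr:
        weighted += math.exp(log_lr)
    # sample under the alternative N(theta,1) directly
    ys = [random.gauss(theta, 1.0) for _ in range(n)]
    if sum(ys) / n < thr:
        direct += 1.0

weighted /= reps
direct /= reps
exact = 0.5 * (1.0 + math.erf(math.sqrt(n) * (thr - theta) / math.sqrt(2.0)))
print(weighted, direct, exact)   # all three estimate P{mean < thr} under the alternative
```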
A general theorem (see Theorem 2.3.1) is proved, which is then re-formulated for linear combinations of order statistics (L-statistics), linear rank statistics (R-statistics), and U-statistics (see Theorem 2.5.1). Theorem 2.5.1 gives a higher order generalization of LeCam's Third Lemma and allows us to give an automatic formula under the alternative for the Edgeworth expansion of the distribution of any asymptotically efficient test statistic T_n once the asymptotic distribution of T_n is specified under the hypothesis.

Throughout the chapter, we use the following notation. The kth derivative of a function of θ (if it exists) is denoted by the superscript (k), e.g., f^{(1)}(x, 0) being the right derivative of f(x, θ) at θ = 0. Derivatives with respect to x (or s) are denoted by the prime.


For a function of θ at θ = 0, the argument θ is omitted, e.g.,

f(x) = f(x, θ)|_{θ=0} = f(x, 0).

Let τ = n^{−1/2}, n ∈ ℕ, and let τ_n ↓ 0 be a small parameter. Let 𝒩(μ, σ²) stand for the normal distribution in ℝ¹ with parameters μ and σ²; Φ(·) and φ(·) denote the distribution function and the density of 𝒩(0, 1), respectively; for α ∈ (0,1), let

u_α = Φ^{−1}(1 − α)

be the upper α quantile of Φ(x). The letter C, sometimes with subscripts, is used in inequalities to express the assertion that there exists a positive constant for which the inequality is satisfied.

2.2. A formal rule

In this section we heuristically describe a formal rule for obtaining the first k ∈ ℕ₀ terms of the asymptotic expansion in powers of a small parameter τ_n ↓ 0 under alternatives. The regularity conditions for obtaining this expansion are given in Section 2.3. Assume that there exist some constants a_n > 0 and b_n for which (see Section 2.1)

Δ_n = S_n − Λ_n,   S_n = a_nT_n + b_n,

is small in an appropriate sense. Approximating the expression exp{−Δ_n} by the Taylor formula

e^{−Δ} = 1 − Δ + Δ²/2! − Δ³/3! + ⋯,

we obtain the following representation for the probability under consideration:

P_{n,1}{T_n < x} = E_{n,0} exp{−Δ_n + a_nT_n + b_n} 1_{(−∞,x)}(T_n)
= E_{n,0} exp{a_nT_n + b_n} 1_{(−∞,x)}(T_n) E_{n,0}[exp{−Δ_n} | T_n]
≈ Σ_{l=0}^{k} (1/l!) E_{n,0} exp{a_nT_n + b_n} 1_{(−∞,x)}(T_n) E_{n,0}[(−Δ_n)^l | T_n].


Assume that the distribution of the statistic T_n under the hypothesis H_{n0} has a density p_n(v). Then this probability can also be written as

P_{n,1}{T_n < x} = Σ_{l=0}^{k} (1/l!) ∫_{−∞}^{x} exp{a_nv + b_n} E_{n,0}[(−Δ_n)^l | T_n = v] p_n(v) dv.    (2.2.1)

Let

g_{n,l}(s) = E_{n,0} exp{isT_n}(−Δ_n)^l,   l ∈ ℕ₀,    (2.2.2)

be the Fourier transform of the function E_{n,0}[(−Δ_n)^l | T_n = v] p_n(v). Assume also that g_{n,l}(s) admits an asymptotic expansion of the form

g_{n,l}(s) = g(s) Σ_{j=0}^{k−l} τ_n^{j+l} Q_{jl}(is) + o(τ_n^k),    (2.2.3)

where g(s) is some function, Q_{jl}(·) are some polynomials, and τ_n ↓ 0 is a small parameter. Then, by the inversion formula,

E_{n,0}[(−Δ_n)^l | T_n = v] p_n(v) = Σ_{j=0}^{k−l} τ_n^{j+l} Q_{jl}(−D_v)G(v),    (2.2.4)

where D_v denotes differentiation with respect to v, and the function G(v) is defined by the equality

g(s) = ∫ exp{isv}G(v) dv.    (2.2.5)

Relations (2.2.1) and (2.2.4) imply that

P_{n,1}{T_n < x} = Σ_{i=0}^{k} τ_n^i Σ_{j=0}^{i} (1/(i − j)!) ∫_{−∞}^{x} exp{a_nv + b_n} Q_{j(i−j)}(−D_v)G(v) dv.    (2.2.6)
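The passage from (2.2.3) to (2.2.4) rests on the Fourier duality between multiplication by (is)^j and the differential operator (−D_v)^j. This is easy to verify numerically in the Gaussian case g(s) = exp{−s²/2}, G(v) = φ(v); a Python sketch for one factor of is:

```python
import cmath, math

def G(v):
    # standard normal density, the inverse transform of g(s) = exp(-s^2/2)
    return math.exp(-0.5 * v * v) / math.sqrt(2.0 * math.pi)

def dG(v):
    return -v * G(v)               # G'(v)

def fourier(h, s, N=40000, L=10.0):
    # midpoint rule for int_{-L}^{L} exp(i s v) h(v) dv
    step = 2.0 * L / N
    total = 0.0 + 0.0j
    for j in range(N):
        v = -L + (j + 0.5) * step
        total += cmath.exp(1j * s * v) * h(v)
    return total * step

s = 1.7
lhs = fourier(lambda v: -dG(v), s)         # transform of (-D_v) G
rhs = 1j * s * math.exp(-0.5 * s * s)      # (is) g(s)
print(lhs, rhs)
```

Iterating the same integration by parts gives the correspondence between any polynomial Q(is) and Q(−D_v).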

Now we give a heuristic example of how this formula can be used. Let X_n = (X₁, ..., X_n), where the X_i are independent identically distributed random variables with density p(x, θ), θ ∈ ℝ¹. Let us consider the problem of testing the hypothesis H₀: θ = 0


against a sequence of close alternatives of the form (see Section 1.1)

H_{n,t}: θ = τt,   τ = n^{−1/2},   0 < t ≤ C,

with regularity conditions imposed on the quantities

E_{n,0} exp{isT_n}(Λ̄_n)^l,   l = 0, 1, ..., k.

Finally, the following condition characterizes the asymptotic expansion under the main hypothesis H_{n0}.

CONDITION 4. There exist polynomials Q_{jl}(·), l = 0, 1, ..., k, j = 0, 1, ..., k − l, and an integrable function g(s) such that

∫_{|s| ≤ V(n)} |g_{n,l}(s) − ĝ_{n,l}(s)| ds = o(τ_n^k),   l = 0, 1, ..., k,

∫_{|s| > V(n)} |Q_{jl}(is)g(s)| ds = o(τ_n^k),   l = 0, 1, ..., k,   j = 0, ..., k − l,

where

g_{n,l}(s) = E_{n,0} exp{isT_n}(−Δ_n)^l,   l = 0, 1, ..., k,

ĝ_{n,l}(s) = g(s) Σ_{j=0}^{k−l} τ_n^{j+l} Q_{jl}(is).

THEOREM 2.3.1 (Bening, 1982; Bening, 1994b). Let Conditions 1–4 hold. Then

sup_x | P_{n,1}{T_n < x} − Σ_{l=0}^{k} Σ_{j=0}^{k−l} (τ_n^{j+l}/l!) ∫_{−∞}^{x} exp{a_nv + b_n} Q_{jl}(−D_v)G(v) dv | = o(τ_n^k).

P_{n,1}{|Λ_n − b_n| > ā_n + ε} = P_{n,1}{|Λ_n − b_n| > ā_n + ε, |Δ_n| < ε} + o(τ_n^k).

It follows from Condition 1 that

P_{n,1}{T_n* < x} = Σ_{l=0}^{k} (1/l!) ∫_{−∞}^{x} exp{a_nv + b_n} E_{n,0}[(−Δ_n*)^l I_{B_{n,ε}}(Λ_n − b_n) I_{A_n}(Δ_n) | T_n* = v] p_n(v) dv + R_{n2}(x) + o(τ_n^k),

where

Δ_n* = a_nT_n* + b_n − Λ_n,    (2.4.5)

(2.4.5) An,

and the remainder term Rn2(x) satisfies the inequalities ^

+

1

J

rx

exp{a„u + bn}

χ Ε η>0 [|Δ;| Α+1 Ι βη , ε (Λ; - bn)Ιλ„(Δ„)exp{|Δ* |} | T* = v]pn(v)dv 1 fx - 71—^ e x P { ö n + bn + e} / exp{2a n u + 2b n } {κ + 1)! J— ο» xE n > 0 [K|* + 1 I A ) l (Ä n ) I rn = v]pn(.v)dv < Ci exp{ä„ + bn + 2xan + 2&„}E„,oK|* +1 I An (Ä ;i ),

Ci > 0;

by virtue of Condition 2, hence Rn2(x) = O(^), for all χ < γ η and

&n,oexp{2y„an} = ο(τ£).

Now let us prove that E„,0 exp{a„T* + ^ ( - A ^ I ^ A J I b , . , « = E„,o exp{a„T* + ^}(-Α*η)ιΙ(_^)(Ό

bn)l(^)(T*n) + ο(τ£),

I = 0,1

k,

2.4. Proof of General Theorem

61

for all χ < y„. The Holder inequality and Condition 2 yield |E„.o exp{a n T* n + bn}(-A*n)llAri(Än)lirjK

-

bn)I(^O\

< exp{a„jc + &„}Ε„ι0|Δ* ^ „ ( Ä J I ^ t A * - bn) < exp{a„y n + 6 Λ }(Ε η ,ο|Δ;| Α+1 Ι Α „(Δ η ))ώ(Ρ η) οΚ ~ h e k+l-l < C% exp{a„y„ + un β»«,

K A ^

= C 2 (exp{a„y„ + &„}6„ >0 )*^(exp{a n y n + 6 „ } ε „ ( 0 ) ) ^ Γ = 0,

for Kfi exp{α η γ η + bn} = ο(τ£), ε„(0) exp {a nYn + On In a similar way, Condition 2 implies the equalities E„, 0 exp{a„T* + Μ ί - Δ ^ Ι ^ Α ^ . » , , , ^ ) = ο(τ*),

I = 0,1, ...,k,

for χ < γη and exp{a„y„ + bn}bnJL+\ =

ο(φ.

Therefore, it follows from (2.4.5) that

P_{n,1}{T_n* < x} = Σ_{l=0}^{k} (1/l!) ∫_{−∞}^{x} exp{a_nv + b_n} E_{n,0}[(−Δ_n*)^l | T_n* = v] p_n(v) dv + o(τ_n^k).    (2.4.6)

We set

g*_{n,l}(s) = ∫ exp{isv} E_{n,0}[(−Δ_n*)^l | T_n* = v] p_n(v) dv,

that is,

g*_{n,l}(s) = E_{n,0} exp{isT_n*}(−Δ_n*)^l,   l = 0, 1, ..., k.

I L

|ϊ| 0 such that, for any me N, n m |E n > 0 exp{isiT n 3 }(A n ) / | - > 0,

exp{isT n 3 }(T n S f I —» 0, for log(n + 1) < |s|


bnm.

(2) The relation Εη,ο (Δ π 3 (ί)) 2 = ο(τ 1+δ ° /3 )

holds, where bait) = tTn3 - ± 0 such that 1 =

JlsHn^

0,1,

where ^ ( s ) = En,oexp{isTn3},

= E„,o exp{isTn3}An3(t), ^ > )

Ά

= exp{-I

S

V},

i W = 0·

LEMMA 2.6.5. Let Conditions (C4) and (L₃) hold; then

(1) There exist positive constants δ₄ > 0, δ₅ and δ₆ > 0 such that

n^{1+δ₆} |E_{n,0} exp{isT_{n4}}(Λ_n)^l| → 0,   n^{1+δ₆} |E_{n,0} exp{isT_{n4}}(T_{n4})^l| → 0,   l = 0, 1,

for n^{δ₅} ≤ |s| ≤ n^{1/2}.

(2) The relation

E_{n,0}(Δ_{n4}(t))² = o(τ^{1+δ₀/3}),   n → ∞,

is true, where

Δ_{n4}(t) = tT_{n4} − (1/2)t²σ² − Λ_n.



(3) There exists