SpringerBriefs in Statistics JSS Research Series in Statistics Yuichi Goto · Hideaki Nagahata · Masanobu Taniguchi · Anna Clara Monti · Xiaofei Xu
ANOVA with Dependent Errors
SpringerBriefs in Statistics
JSS Research Series in Statistics Editors-in-Chief Naoto Kunitomo, The Institute of Mathematical Statistics, Tachikawa, Tokyo, Japan Akimichi Takemura, The Center for Data Science Education and Research, Shiga University, Hikone, Shiga, Japan Series Editors Shigeyuki Matsui, Graduate School of Medicine, Nagoya University, Nagoya, Aichi, Japan Manabu Iwasaki, School of Data Science, Yokohama City University, Yokohama, Kanagawa, Japan Yasuhiro Omori, Graduate School of Economics, The University of Tokyo, Bunkyo-ku, Tokyo, Japan Masafumi Akahira, Institute of Mathematics, University of Tsukuba, Tsukuba, Ibaraki, Japan Masanobu Taniguchi, School of Fundamental Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan Hiroe Tsubaki, The Institute of Statistical Mathematics, Tachikawa, Tokyo, Japan Satoshi Hattori, Faculty of Medicine, Osaka University, Suita, Osaka, Japan Kosuke Oya, School of Economics, Osaka University, Toyonaka, Osaka, Japan Taiji Suzuki, School of Engineering, University of Tokyo, Tokyo, Japan Kunio Shimizu, The Institute of Mathematical Statistics, Tachikawa, Tokyo, Japan
The current research of statistics in Japan has expanded in several directions in line with recent trends in academic activities in the area of statistics and statistical sciences over the globe. The core of these research activities in statistics in Japan has been the Japan Statistical Society (JSS). This society, the oldest and largest academic organization for statistics in Japan, was founded in 1931 by a handful of pioneer statisticians and economists and now has a history of about 80 years. Many distinguished scholars have been members, including the influential statistician Hirotugu Akaike, who was a past president of JSS, and the notable mathematician Kiyosi Itô, who was an earlier member of the Institute of Statistical Mathematics (ISM), which has been a closely related organization since the establishment of ISM. The society has two academic journals: the Journal of the Japan Statistical Society (English Series) and the Journal of the Japan Statistical Society (Japanese Series). The membership of JSS consists of researchers, teachers, and professional statisticians in many different fields including mathematics, statistics, engineering, medical sciences, government statistics, economics, business, psychology, education, and many other natural, biological, and social sciences. The JSS Series of Statistics aims to publish recent results of current research activities in the areas of statistics and statistical sciences in Japan that otherwise would not be available in English; they are complementary to the two JSS academic journals, both English and Japanese. Because the scope of a research paper in academic journals inevitably has become narrowly focused and condensed in recent years, this series is intended to fill the gap between academic research activities and the form of a single academic paper. The series will be of great interest to a wide audience of researchers, teachers, professional statisticians, and graduate students in many countries who are interested in statistics and statistical sciences, in statistical theory, and in various areas of statistical applications.
Yuichi Goto · Hideaki Nagahata · Masanobu Taniguchi · Anna Clara Monti · Xiaofei Xu
ANOVA with Dependent Errors
Yuichi Goto Faculty of Mathematics Kyushu University Fukuoka, Japan
Hideaki Nagahata Risk Analysis Research Center The Institute of Statistical Mathematics Tachikawa, Tokyo, Japan
Masanobu Taniguchi Waseda University Shinjuku City, Tokyo, Japan
Anna Clara Monti Department of Law, Economics Management and Quantitative Methods Università degli Studi del Sannio Benevento, Italy
Xiaofei Xu Department of Probability and Statistics School of Mathematics and Statistics Wuhan University Wuhan, Hubei, China
ISSN 2191-544X ISSN 2191-5458 (electronic) SpringerBriefs in Statistics ISSN 2364-0057 ISSN 2364-0065 (electronic) JSS Research Series in Statistics ISBN 978-981-99-4171-1 ISBN 978-981-99-4172-8 (eBook) https://doi.org/10.1007/978-981-99-4172-8 © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
To our families
Preface
The analysis of variance (ANOVA) is a statistical method for assessing the impact of multiple factors and their interactions when there are three or more factors. The method was first developed by R. A. Fisher in the 1910s and has since been studied extensively by many authors. In the case of i.i.d. data, most literature has focused on the setting where the number of groups (a) and the number of observations in each group (n) are small, referred to as fixed-a and -n asymptotics. ANOVA for time series data, commonly referred to as longitudinal or panel data analysis, has been extensively studied in econometrics. In this field, large-a and fixed-n asymptotics or large-a and -n asymptotics are commonly examined, with a primary focus on the regression coefficient. Consequently, results regarding the existence of fixed and random effects of factors have hardly been developed. This monograph aims to present the recent developments related to one- and two-way models, mainly for time series data, under the framework of fixed-a and large-n asymptotics. In particular, we focus on (i) the testing problems for the existence of fixed and random effects of factors and interactions among factors under various settings, including uncorrelated and correlated groups, fixed and random effects, multi- and high-dimension, parametric and nonparametric spectral densities, and (ii) the local asymptotic normality (LAN) property for one-way models on i.i.d. data. This book is suitable for statisticians and economists as well as psychologists and data analysts. Figure 1 illustrates the relationships between the chapters. In Chapter 1, a historical overview of ANOVA and the fundamentals of time series analysis are provided, along with the motivation and a concise summary of the content covered in the book. Chapter 2 examines a test for the presence of fixed effects in the one-way model with independent groups. Chapter 3 extends the analysis to high-dimensional settings. Chapters 4 and 5 address correlated groups in one-way and two-way models, respectively. Lastly, Chapter 6 explores the log-likelihood ratio process to construct optimal tests in the context of i.i.d. settings.
Fig. 1 Relationships between chapters
We are greatly indebted to Profs. M. Hallin, B. Shumway, D.S. Stoffer, C.R. Rao, P.M. Robinson, T. DiCiccio, S. Lee, C.W.S. Chen, Y. Chen, S. Yamashita, Y. Yajima, and Y. Matsuda for their valuable comments of fundamental impact on ANOVA and time series regression analysis. Thanks are extended to Drs. Y. Liu, F. Akashi, and Y. Xue for their collaboration and assistance with simulations. The research was supported by JSPS Grant-in-Aid for Research Fellow under Grant Number JP201920060 (Y.G.); JSPS Grant-in-Aid for Research Activity Start-up under Grant Number JP21K20338 (Y.G.); JSPS Grant-in-Aid for Early-Career Scientists JP23K16851 (Y.G.); JSPS Grant-in-Aid for Early-Career Scientists JP20K13581 (H.N.); JSPS Grant-in-Aid for Challenging Exploratory Research under Grant Number JP26540015 (M.T.); JSPS Grant-in-Aid for Scientific Research (A) under Grant Number JP15H02061 (M.T.); JSPS Grant-in-Aid for Scientific Research (S) under Grant Number JP18H05290 (M.T.); the Research Institute for Science and Engineering (RISE) of Waseda University (M.T.); and a start-up research grant of Wuhan University under Grant Number 600460031 (X.X.).
Last but certainly not least, we extend our sincerest appreciation to Mr. Yutaka Hirachi of Springer Japan and Mrs. Kavitha Palanisamy of Springer Nature for their assistance and patience.

Fukuoka, Japan    Yuichi Goto
Tokyo, Japan      Hideaki Nagahata
Tokyo, Japan      Masanobu Taniguchi
Benevento, Italy  Anna Clara Monti
Wuhan, China      Xiaofei Xu

February 2023
Contents
1 Introduction
  1.1 Foundations
  1.2 Overview of Chapters
  References
2 One-Way Fixed Effect Model
  2.1 Tests for Time-Dependent Errors
    2.1.1 Three Classical and Famous Test Statistics
    2.1.2 Likelihood Ratio Test Based on Whittle Likelihood
  References
3 One-Way Fixed Effect Model for High-Dimensional Time Series
  3.1 Tests for High-Dimensional Time-Dependent Errors
    3.1.1 Asymptotics of Fundamental Statistics for High-Dimensional Time Series
    3.1.2 Test Statistics for High-Dimensional Time Series
  References
4 One-Way Fixed and Random Effect Models for Correlated Groups
  4.1 Test for the Existence of Fixed Effects
    4.1.1 Classical Test Statistics
    4.1.2 A Test Statistic for Correlated Groups
  4.2 Test for the Existence of Random Effects
  References
5 Two-Way Random Effect Model for Correlated Cells
  5.1 Test for the Existence of Random Effects
  5.2 Test for the Existence of Random Interactions
  References
6 Optimal Test for One-Way Random Effect Model
  6.1 Log-Likelihood Ratio Process for One-Way Random Effect Model
  6.2 The Hypothesis That the Variance of Random Effect Equals Zero Under the Null
  6.3 The Hypothesis That the Variance of Random Effects is not Equal to Zero Under the Null
  References
7 Numerical Analysis
  7.1 One-Way Effect Model with Independent Groups
    7.1.1 The Empirical Size
    7.1.2 The Power
  7.2 One-Way Effect Model with Correlated Groups
    7.2.1 Setup
    7.2.2 Test Results
  7.3 Two-Way Effect Model with Correlated Groups
    7.3.1 Test for the Existence of Random Effects
    7.3.2 Test of Interaction Effects
  Reference
8 Empirical Data Analysis
  8.1 Average Wind Speed Data
Index
Acronyms
N    The set of natural numbers
Z    The set of integers
R^d    The set of d-tuples of real numbers
A′    The transpose of a matrix A
det(A)    The determinant of a matrix A
1{·}    The indicator function
tr(A)    The trace of a matrix A
A*    The conjugate transpose of a matrix A
‖A‖    The Euclidean norm of a matrix A, defined by √tr(A*A)
A⁻    The Moore–Penrose inverse of a matrix A
0_p    A p-dimensional vector whose entries are all zero
1_p    A p-dimensional vector whose entries are all one
O_p    A p-by-p zero matrix
I_p    A p-by-p identity matrix
J_p    A p-by-p matrix whose entries are all one
X_n → X    X_n converges in probability to X
X_n ⇒ X    X_n converges in distribution to X
o_p(a_n)    An order of the probability, that is, for a sequence of random variables {X_n} and {a_n}, 0 < a_n ∈ R, a_n^{−1} X_n converges in probability to zero
O_p(a_n)    An order of the probability, that is, for a sequence of random variables {X_n} and {a_n}, 0 < a_n ∈ R, a_n^{−1} X_n is bounded in probability
U_p(·)    A p-by-p matrix whose elements are of probability order O_p(·) with respect to all the elements uniformly
a and b    The number of groups
n_i    Sample size for an i-th group
n_ij    Sample size for an (i, j)-th group
n    Total sample size for all groups or cells
ρ_i and ρ_ij    A constant related to unbalanced data
φ    Significance level
χ²_r[1 − φ]    The upper φ-percentiles of the chi-square distribution with r degrees of freedom
(·)_{r,δ}    The cumulative distribution function of the noncentral chi-square with r degrees of freedom and the noncentrality parameter δ
δ    The noncentrality parameter of the noncentral chi-square distribution
Z_it    A t-th p-dimensional observation from an i-th group
Z_ijt    A t-th p-dimensional observation from an (i, j)-th cell
μ    A p-dimensional general mean which is common to all groups or cells
α_i    A p-dimensional fixed or random effect for an i-th group or an i-th level of factor A
β_j    A p-dimensional fixed or random effect for a j-th level of factor B
γ_ij    A p-dimensional fixed or random interaction between the i-th level of factor A and the j-th level of factor B
Σ_α    The variance of random effects (α_1, ..., α_a)
Σ_β    The variance of random effects (β_1, ..., β_b)
Σ_γ    The variance of random interactions (γ_11, γ_21, ..., γ_a1, γ_12, ..., γ_a2, ..., γ_1b, ..., γ_ab)
e_it    A p-dimensional time series disturbance from the i-th group at time t
e_ijt    A p-dimensional time series disturbance from the (i, j)-th cell at time t
f(λ)    A spectral density matrix
f̂_n(λ)    Some consistent estimator of a spectral density matrix
W(·)    A window function
ω(·)    A lag window function
M_n    The bandwidth parameter of the kernel method
T_LH,n    The classical Lawley–Hotelling test statistic, defined in (2.5), for independent observations for one-way model with the independent groups
T_LR,n    The classical likelihood ratio test statistic, defined in (2.6), for independent observations for one-way model with the independent groups
T_BNP,n    The classical Bartlett–Nanda–Pillai test statistic, defined in (2.7), for independent observations for one-way model with the independent groups
T_NT,n    The test statistic, defined in (2.16), for time series for one-way model with the independent groups
T_mLH,n,p    The modified Lawley–Hotelling test statistic for high-dimensional time series for one-way model with the independent groups defined in (3.10)
T_mLR,n,p    The modified likelihood ratio test statistic for high-dimensional time series for one-way model with the independent groups defined in (3.11)
T_mBNP,n,p    The modified Bartlett–Nanda–Pillai test statistic for high-dimensional time series for one-way model with the independent groups defined in (3.12)
T_iid,n    The classical F-statistic defined in (4.5)
T_ts,n    The extended F-statistic to time series for one-way model with the independent groups defined in (4.6)
T_GALT,n    The test statistic, defined in (4.7), for the existence of fixed and random effects for one-way model with correlated groups
T_α,GSXT,n    The test statistic, defined in (5.5), for the existence of random effects for two-way model with correlated groups
T_γ,GSXT,n    The test statistic, defined in (5.8), for the existence of interactions for two-way model with correlated groups
L_Z    The log-likelihood function of Z defined in (6.2)
(θ_0, θ_n)    The log-likelihood ratio process for parameters θ_0 and θ_n defined in (6.3)
T_1,GKKT,n    The quantity defined in (6.4) which appears in the asymptotic distribution of (θ_0, θ_n)
T_2,GKKT,n    The quantity defined in (6.6) which appears in the asymptotic distribution of (θ_0, θ_n)
φ_GKKT,n    The test function defined in (6.7)
Chapter 1
Introduction
This chapter describes elements of stationary processes. Concretely, we introduce the spectral distribution, the orthogonal increment process, linear filters, and the response function. We explain the spectral representation of autocovariance functions and of the stationary process itself. Also, the relationship between linear filters and response functions is discussed. Finally, we mention a classical ANOVA approach for time series and give a brief outline of this book.
1.1 Foundations

Analysis of variance (ANOVA) has a long history. ANOVA deals with the problem of testing the null hypothesis that the means of different populations or the within-group means are all equal. The established theory covers testing and inference for independent observations (e.g., Rao, 1973; Anderson, 2003). However, recently, we observe dependent data in a variety of fields, e.g., finance, medical science, environmental science, engineering, and signal processing. For these dependent observations, we need the ANOVA approach, i.e., ANOVA with dependent errors. As an illustration, the plots G1–G3 in Fig. 1.1 show the financial returns for IBM, FORD, and Merck from February 12, 2021, to February 10, 2023, which are recognized as dependent data in view of substantial evidence. The data can be accessed from Yahoo Finance at https://finance.yahoo.com/. For G_i, i = 1, 2, 3, we assume the following stochastic models:

G_i(t) = z_{it} = μ_i + e_{it},  i = 1, 2, 3,    (1.1)

where the {e_{it}}'s are stationary processes, which will be detailed later. Consider the problem of testing the hypothesis:

H : μ_1 = μ_2 = μ_3  versus  K : H does not hold.
Fig. 1.1 Plots for the financial returns of IBM, FORD, and Merck
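As a minimal illustration of model (1.1), the following Python sketch simulates three group series with AR(1) disturbances in place of the downloaded returns; the AR coefficient, series length, and seed are illustrative assumptions, not values used in the book.

import numpy as np

rng = np.random.default_rng(0)
n = 500                            # length of each observed stretch
mu = np.array([0.0, 0.0, 0.0])     # group means under the null H: mu1 = mu2 = mu3

def ar1(n, phi=0.3, sigma=1.0, rng=rng):
    """Simulate a stationary AR(1) disturbance {e_t}."""
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + sigma * rng.standard_normal()
    return e

# z_{it} = mu_i + e_{it}, i = 1, 2, 3  -- the stochastic model (1.1)
Z = np.vstack([mu[i] + ar1(n) for i in range(3)])

# naive group means; Chapter 2 develops tests that account for the dependence in {e_it}
print(Z.mean(axis=1))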
In Chapter 2, the test statistics will be introduced, and their statistical properties will be elucidated as well. The model (1.1) is described by a stochastic process. In what follows, we provide its elements in vector form. Throughout the book, we use the following notations: N = the set of all positive integers, Z = the set of all integers, and R^m = the m-dimensional Euclidean space. Let {X_t = (X_{1t}, ..., X_{mt})′ : t ∈ Z} be a family of vector random variables, called an m-dimensional vector process, where ′ indicates transposition of the vector. The autocovariance function Γ(·, ·) is defined by

Γ(t, s) := Cov(X_t, X_s) = E{(X_t − EX_t)(X_s − EX_s)*},  t, s ∈ Z,

where * denotes complex conjugate transpose. An m-dimensional vector process {X_t : t ∈ Z} is said to be stationary if
(i) E(X_t* X_t) < ∞ for all t ∈ Z;
(ii) E(X_t) = c for all t ∈ Z, where c is a constant vector;
(iii) Γ(t, s) = Γ(0, s − t) for all s, t ∈ Z.
We state three fundamental theorems related to stationary processes (for the proofs, see, e.g., Hannan, 1970).

Theorem 1.1 If Γ(·) is the autocovariance function of an m-dimensional vector stationary process {X_t}, then it has the representation

Γ(s) = ∫_{−π}^{π} e^{isλ} dF(λ),  s ∈ Z,
where F(λ) is a matrix-valued function whose increment F(λ1 ) − F(λ2 ), λ1 ≥ λ2 , is non-negative definite. The function F(λ) is uniquely defined if in addition we require that
(i) F(−π) = 0, and (ii) F(λ) is right continuous.
The matrix F(λ) is called the spectral distribution matrix. If F(λ) is absolutely continuous with respect to the Lebesgue measure μ on [−π, π], so that

F(λ) = ∫_{−π}^{λ} f(μ) dμ,

where f(λ) is a matrix with entries f_{jk}(λ), then f(λ) is called the spectral density matrix. If

∑_{h=−∞}^{∞} ‖Γ(h)‖ < ∞,

where ‖Γ(h)‖ := the greatest eigenvalue of Γ(h)*Γ(h), the spectral density matrix is given by

f(λ) = (1/2π) ∑_{h=−∞}^{∞} Γ(h) e^{−ihλ}.
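For a process with absolutely summable autocovariances, this formula can be evaluated directly by truncating the sum. The following minimal Python sketch does so for a toy MA(1) example; the function and example are illustrative assumptions, not code from the book.

import numpy as np

def spectral_density(gamma, lam):
    """Evaluate f(lam) = (1/2pi) * sum_h Gamma(h) e^{-i h lam} from a truncated
    dictionary {h: Gamma(h)} of m x m autocovariance matrices."""
    m = next(iter(gamma.values())).shape[0]
    f = np.zeros((m, m), dtype=complex)
    for h, G in gamma.items():
        f += G * np.exp(-1j * h * lam)
    return f / (2 * np.pi)

# example: MA(1) process X_t = U_t + 0.5 U_{t-1}, U_t ~ i.i.d.(0, I_2)
I2 = np.eye(2)
gamma = {0: 1.25 * I2, 1: 0.5 * I2, -1: 0.5 * I2}
print(spectral_density(gamma, lam=0.0).real)   # equals (1/2pi) * (1.5**2) * I_2 at lam = 0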
For the spectral representation of {X_t}, we need the concept of an orthogonal increment process. We say that {Z(λ) : −π ≤ λ ≤ π} is an m-dimensional vector-valued orthogonal increment process if
(i) all the components of E{Z(λ)Z(λ)*} are finite, −π ≤ λ ≤ π;
(ii) E{Z(λ)} = 0, −π ≤ λ ≤ π;
(iii) E[{Z(λ_4) − Z(λ_3)}{Z(λ_2) − Z(λ_1)}*] = 0 if (λ_1, λ_2] ∩ (λ_3, λ_4] = ∅;
(iv) E[{Z(λ + δ) − Z(λ)}{Z(λ + δ) − Z(λ)}*] → 0 as δ → 0.
Theorem 1.2 If {X_t : t ∈ Z} is an m-dimensional vector stationary process with E(X_t) = 0 and spectral distribution matrix F(λ), then there exists a right continuous orthogonal increment process {Z(λ) : −π ≤ λ ≤ π} such that
(i) E[{Z(λ) − Z(−π)}{Z(λ) − Z(−π)}*] = F(λ), −π ≤ λ ≤ π;
(ii)

X_t = ∫_{−π}^{π} e^{−itλ} dZ(λ).    (1.2)
If a sequence of random vectors {a_n} satisfies E‖a_n − a‖² → 0 as n → ∞, for a constant vector a, we say that {a_n} converges to the limit a in the mean. We denote this by l.i.m._{n→∞} a_n = a.

Theorem 1.3 Suppose that {X_t : t ∈ Z} is an m-dimensional vector stationary process with E(X_t) = 0, spectral distribution matrix F(·), and spectral representation (1.2), and that {A_j : j ∈ Z} is a sequence of m × m matrices. Write

Y_t = ∑_{j=0}^{∞} A_j X_{t−j},    (1.3)
and

h(λ) = ∑_{j=0}^{∞} A_j e^{ijλ}.

Then,
(i) the necessary and sufficient condition for (1.3) to exist as the l.i.m. of partial sums is

tr{ ∫_{−π}^{π} h(λ) dF(λ) h(λ)* } < ∞;    (1.4)
(ii) if (1.4) holds, the process {Y_t : t ∈ Z} is stationary with the autocovariance function and spectral representation

Γ_Y(s) = ∫_{−π}^{π} e^{isλ} h(λ) dF(λ) h(λ)*  and  Y_t = ∫_{−π}^{π} e^{−itλ} h(λ) dZ(λ),

respectively.
Let {A_j} be a sequence of m × m matrices satisfying ∑_{j=0}^{∞} ‖A_j‖² < ∞,
and let {U_t : t ∈ Z} be a sequence of i.i.d. random vectors with mean zero and covariance matrix V (for short, {U_t} ∼ i.i.d.(0, V)). Then, from Theorem 1.3, we can see that the generalized linear process

X_t = ∑_{j=0}^{∞} A_j U_{t−j}    (1.5)

is defined in the l.i.m. of partial sums, and that {X_t} has the spectral density matrix

f(λ) = (1/2π) ( ∑_{j=0}^{∞} A_j e^{ijλ} ) V ( ∑_{j=0}^{∞} A_j e^{ijλ} )*.

If {X_t} is generated by

X_t + B_1 X_{t−1} + · · · + B_p X_{t−p} = U_t + C_1 U_{t−1} + · · · + C_q U_{t−q},

where B_1, ..., B_p, C_1, ..., C_q are m × m matrices and {U_t} ∼ i.i.d.(0, V), it is called an m-dimensional vector autoregressive moving average of order (p, q) (for
short, VARMA(p, q)). Letting B(z) = I + B_1 z + · · · + B_p z^p, where I is the m × m identity matrix, we assume det B(z) ≠ 0 for all z ∈ C such that |z| ≤ 1. Then, we can see that {X_t} is stationary and has the spectral density matrix

f(λ) = (1/2π) B(e^{iλ})^{−1} C(e^{iλ}) V C(e^{iλ})* {B(e^{iλ})^{−1}}*,

where C(z) = I + C_1 z + · · · + C_q z^q.
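The VARMA spectral density matrix is straightforward to evaluate numerically. The sketch below is a minimal implementation of the displayed formula; the bivariate VAR(1) example and the function name are illustrative assumptions, not code from the book.

import numpy as np

def varma_spectral_density(B, C, V, lam):
    """Spectral density matrix of a VARMA(p, q) process at frequency lam.

    B and C are lists of m x m coefficient matrices [B_1, ..., B_p] and
    [C_1, ..., C_q] (so B(z) = I + B_1 z + ... + B_p z^p, similarly for C),
    and V is the innovation covariance matrix."""
    m = V.shape[0]
    z = np.exp(1j * lam)
    Bz = np.eye(m, dtype=complex) + sum(Bj * z ** (j + 1) for j, Bj in enumerate(B))
    Cz = np.eye(m, dtype=complex) + sum(Cj * z ** (j + 1) for j, Cj in enumerate(C))
    H = np.linalg.solve(Bz, Cz)          # B(e^{i lam})^{-1} C(e^{i lam})
    return (H @ V @ H.conj().T) / (2 * np.pi)

# bivariate VAR(1): X_t - 0.5 X_{t-1} = U_t, i.e. B_1 = -0.5 I, no MA part
f0 = varma_spectral_density([-0.5 * np.eye(2)], [], np.eye(2), lam=0.0)
print(f0.real)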
The VARMA(p, q) models are often used in many applications, and have the linear form of (1.5).
Consider the vector form of (1.1):

Z_t = μ + e_t,    (1.6)

where Z_t = (z_{1t}, z_{2t}, z_{3t})′, μ = (μ_1, μ_2, μ_3)′, and e_t = (e_{1t}, e_{2t}, e_{3t})′. If we assume that {e_t} is stationary with spectral density matrix f(λ), we can apply the spectral analysis for (1.6). Brillinger (1981) introduced the following model:

z_{jkt} = μ + α_t + β_{jt} + e_{jkt},    (1.7)
where μ is a constant, {α_t} is a stationary process with E{α_t} = 0 and spectral density f_αα(λ), {β_{jt}}, j = 1, ..., J, are stationary with E{β_{jt}} = 0 and spectral density f_ββ(λ), and {e_{jkt}}, j = 1, ..., J, k = 1, ..., K, are stationary with E{e_{jkt}} = 0 and spectral density f_ee(λ). For an observed stretch {z_{jkt}}, t = 0, 1, ..., n − 1, let

d^{(n)}_{z_{jk}}(λ) := ∑_{t=0}^{n−1} z_{jkt} e^{−iλt},  d^{(n)}_{z_{j·}}(λ) := (1/K) ∑_{k=1}^{K} d^{(n)}_{z_{jk}}(λ),  and  d^{(n)}_{z_{··}}(λ) := (1/(KJ)) ∑_{k=1}^{K} ∑_{j=1}^{J} d^{(n)}_{z_{jk}}(λ).

Brillinger (1981) showed that

A_1(λ) := (1/(J(K − 1))) ∑_{j=1}^{J} ∑_{k=1}^{K} (1/(2πn)) | d^{(n)}_{z_{jk}}(λ) − d^{(n)}_{z_{j·}}(λ) |²,

A_2(λ) := (K/(J − 1)) ∑_{j=1}^{J} (1/(2πn)) | d^{(n)}_{z_{j·}}(λ) − d^{(n)}_{z_{··}}(λ) |²,  and  A_3(λ) := (1/(2πn)) | d^{(n)}_{z_{··}}(λ) |²
1 Introduction
are asymptotically, independently chi-square distributed. For each frequency λ, the statistics A1 (λ), A2 (λ), and A3 (λ) can be used for the testing problems for the model (1.7). Shumway and Stoffer (2006) introduced the following model: z i jkt = μt + αit + β jt + γi jt + ei jkt , and, for each frequency λ, they developed a similar analysis. In this book, we will develop systematic inference theories for time series ANOVA models by the use of the total information of the observed stretch instead of the information from each frequency.
1.2 Overview of Chapters In the rest of this book, we will examine the following contents. Chapter 2 addresses the problem of testing the null hypothesis H that the within-group means of time series ANOVA model are equal. For ANOVA with independent disturbances, the following tests, LR = likelihood ratio test, LH = Lawley–Hotelling test and BNP = Bartlett–Nanda–Pillai test, are illustrated. For ANOVA with dependent disturbances, we will give a sufficient condition for LR, LH, and BNP to be asymptotically chi-square distributed under H . It is shown that generalized autoregressive conditional heteroscedasticity (GARCH) disturbance satisfies this sufficient condition. Chapter 3 introduces p-dimensional ANOVA models Zit = μ + α i + eit , i = 1, . . . , a, and considers the problem of testing H : α 1 = · · · = α a = 0 v.s. K : α i = 0 for some i. Let Ti , i = 1, 2, 3, √ be the standardized versions of LH, LR, and BNP, respectively. By assuming p 3/2 / n → 0 as n, p → ∞, we show that, under H , Ti , i = 1, 2, 3, follow asymptotically a standard normal distribution. Chapter 4 introduces a random effect model Zit = μ + τ i + eit , i = 1, . . . , a, t = 1, . . . , n i , where Zit is a p-dimensional observation of the i-th group, μ is the general mean, τ i is a p-dimensional normal random vector with mean 0 and variance matrix τ , and (e 1t , . . . , eat ) is a sequence of stationary processes with spectral density matrix f (λ). We consider the problem of testing H : τ = 0 (no random effect) versus K : τ = 0 (random effect),
References
7
and propose a new test TGALT,n with a quadratic form. Under H , it is shown that TGALT,n converges in a distribution to a chi-square distribution. Under K , we prove that TGALT,n is consistent. The results of Chapter 4 will be generalized to the case of two-way models in Chapter 5. Chapter 6 explores the asymptotics of the likelihood ratio processes under nonstandard settings. We consider a family of one-way random ANOVA models, and show that the model does not hold local asymptotic normality (LAN). Hence, we cannot apply the ordinary optimal theory based on LAN. However, we can show that the log-likelihood ratio test is asymptotically most powerful via Neyman–Pearson’s lemma. Chapter 7 deals with numerical studies for a variety of tests and models. Chapter 8 provides the empirical data analysis. We assess the existence of area effects for the average wind speed data observed in seven cities located in Japan.
References Anderson, T. W. (2003). An introduction to multivariate statistical analysis (3rd ed.). Wiley. Brillinger, D. R. (1981). Time series: Data analysis and theory. San Francisco: Holden-Day. Hannan, E. J. (1970). Multiple time series. Wiley. Rao, C. R. (1973). Linear statistical inference and its applications. New York: Wiley. Shumway, R. H., & Stoffer, D. S. (2006). Time series analysis and its application with R examples (2nd ed.). New York: Springer.
Chapter 2
One-Way Fixed Effect Model
While ANOVA for independent errors has been well-tuned, ANOVA for timedependent errors is in its infancy. In this chapter, we extend the one-way fixed model with independent errors and groups to a one-way fixed model with time-dependent errors and independent groups. We illustrate the asymptotics of the classical test for time-dependent errors and propose the test for time-dependent errors introduced in Section 2.1. For the classical tests proposed for independent errors, we give sufficient conditions for their asymptotic chi-square distribution. For the case where sufficient conditions are violated, we propose to use a likelihood ratio test based on the Whittle likelihood. In what follows, we develop our discussion based on the results by Nagahata and Taniguchi (2017).
2.1 Tests for Time-Dependent Errors In this section, we consider tests for the existence of fixed effects by introducing a one-way fixed effect model with time-dependent errors and independent groups. Let Zi1 , . . . , Z ini be p-dimensional stretches, observed from the following oneway fixed model: Zit = μ + α i + eit , i = 1, . . . , a, t = 1, . . . , n i ,
(2.1)
where μ = (μ1 , . . . , μ p ) is a grand mean, α i = (αi1 , . . . , αi p ) is a fixed effect, . . , eit p ) ; i = 1, . . . , a, t = 1, . . . , n i } is the disturbance process. and {eit = (eit1 , . a αi = 0 p . Here we assume i=1 Suppose that (i) the observed stretch {Zit ; i = 1, . . . , a, t = 1, . . . , n i } is available. (ii) the p-dimensional time series {eit } is a centered stationary process, which has a p-by- p lag h autocovariance matrix (h) = { j,k (h) ; j, k = 1, . . . , p}, h ∈ Z, and spectral density matrix f (λ) = ( f i (λ))i=1,...,a for λ ∈ [−π, π ], where © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. Goto et al., ANOVA with Dependent Errors, JSS Research Series in Statistics, https://doi.org/10.1007/978-981-99-4172-8_2
9
10
2 One-Way Fixed Effect Model
f i (λ) is a p-by- p spectrum of {eit }. Moreover, {ei· }, i = 1, . . . , a are mutually independent. a ni . (iii) n 1 = · · · = n a with n = i=1 Remark 2.1 Condition (ii) is a standard assumption, called homoscedasticity (e.g., Section 8.9 in Anderson (2003)) and allows for the case of within-group correlation. In ANOVA and Design of Experiments, Condition (iii) is called balanced design. Let {eit } be generated from eit =
∞
A j ηi(t− j) ,
j=0
∞
A j 2 < ∞, i = 1, . . . , a,
(2.2)
j=0
where the ηit ’s are i.i.d. centered p-dimensional random vectors with variance G, and the A j ’s are p × p constant matrices. Then, {eit } has the autocovariance matrices: (l) =
∞
A j G Aj+l ,
j=0
and the spectral density matrix: ∗ 1 A j e i jλ G A j e i jλ . 2π j=0 j=0 ∞
f (λ) =
∞
We are interested in testing the hypothesis: H21 : α 1 = · · · = α a versus K 21 : α i = 0 for some i.
(2.3)
The null hypothesis H21 implies that all the effects are zero. Under the null hypothesis H21 defined in (2.3), we derive the asymptotic distribution of the test statistics.
2.1.1 Three Classical and Famous Test Statistics In this subsection, we discuss the Lawley–Hotelling test, the likelihood ratio test, and the Bartlett–Nanda–Pillai test statistic proposed for independent observations. To write the test statistics, we introduce ni ni a 1 1 Zit , Zˆ ·· := Zit , Zˆ i· := n i t=1 n i=1 t=1
Sˆ H :=
a i=1
n i ( Zˆ i· − Zˆ ·· )( Zˆ i· − Zˆ ·· ) , and Sˆ E :=
(2.4) ni a (Zit − Zˆ i· )(Zit − Zˆ i· ) , i=1 t=1
2.1 Tests for Time-Dependent Errors
11
where n = n 1 + · · · + n a . These are called within-group mean (for the i-th treatment group), grand mean, between-group sum of squares or sum of squares for treatment, and within-group sum of squares or sum of squares for error, respectively. If eit s are mutually independent with respect to i, the following test statistics are proposed under normality: −1 TLH,n := ntr{ Sˆ H Sˆ E } (Lawley − −Hotelling test), TLR,n := −n log{| Sˆ E |/| Sˆ E + Sˆ H |} (likelihood ratio test),
TBNP,n
:= ntr Sˆ H ( Sˆ E + Sˆ H )−1 (Bartlett–Nanda–Pillai test).
(2.5) (2.6) (2.7)
We consider the following assumption: Assumption 2.1 det{ f (0)} > 0. i √ √ eit , and e := ( n 1 eˆ 1 , . . . , n a eˆ a· ). Let eˆ i· := n i−1 nt=1 The following lemma is due to Hannan (1970) (p. 208, p. 221). Lemma 2.1 Under Assumption 2.1, as n i , i = 1, . . . , a, tends to ∞, if eit is generated by the generalized linear process (2.2), then (i) eˆ i· → 0 p in probability for i = 1, . . . , a; (ii) vec{e} converges in distribution to the ap-dimensional centered normal with variance ⎛ ⎞ f (0) O p · · · O p ⎜ O p f (0) · · · O p ⎟ ⎜ ⎟ 2π ⎜ . .. .. ⎟ . ⎝ .. . . ⎠ O p · · · f (0)
Op
Theorem 2.1 Assume that the processes {eit } in (2.2) have the finite fourth-order cumulant, and that (2.8) ( j) = O p f or all j = 0. Then, under H21 and Assumption 2.1, all the tests TLH,n , TLR,n , and TBNP,n follow the asymptotic chi-square distribution with (a − 1) p degrees of freedom. Proof (Theorem 2.1) By the transformation Zit → (0)−1/2 Zit , we observe that the three test statistics TLH,n , TLR,n , and TBNP,n are invariant. Thus, without loss of generality, we can assume (0) = I p . Under the setting of Section 2.1, we show that 1ˆ SE = I p + O p n
1 √ ni
.
(2.9)
Here, note that TLH,n
= tr SˆH
1 ˆ SE n
−1
,
(2.10)
12
2 One-Way Fixed Effect Model
TLR,n = n log I p + SˆH SˆE−1 ,
−1 1 1 TBNP,n = tr SˆH SˆE + SˆH . n n
(2.11) (2.12)
By substituting (2.9) for (2.10), (2.11), and (2.12) and noting that d log |F| = tr F −1 d F (Magnus and Neudecker, 1999), we see that the stochastic expansion of the three statistics TLH,n , TLR,n , TBNP,n (= T ) is given by T = tr ee + O p
1 √ ni
,
√ √ where = I a − ρρ , with ρ = ( n 1 /n, . . . , n a /n) (for i.i.d. case, e.g., Fujikoshi 2011, pp. 164–165). Since ( j) = O p , ( j = 0), Lemma 2.1 implies et al. that tr ee has an asymptotic chi-square distribution with (a − 1) p degrees of freedom by Rao (2009). Thus, the results follow. Remark 2.2 The condition (2.8) implies that the {eit }’s are uncorrelated processes. Theorem 2.1 shows that the three test statistics proposed for independent observations apply also to dependent observations satisfying (2.8). Note that the condition (2.8) is not very stringent since the following practical nonlinear time series model satisfies (2.8). Bollerslev et al. (1988) introduced the vector generalized autoregressive conditional heteroscedasticity (GARCH(u, v)) model: et = H t 1/2 ηt and vech(H t ) = w +
u i=1
B i vech (H t−i ) +
v
C j vech et− j e t− j ,
j=1
where et = (et1 , . . . , et p ) , {ηt ; t = 1, 2, . . .} follows the i.i.d. centered pdimensional vector with variance I p , vech(·) denotes the column stacking operator of the lower portion of a symmetric matrix, w is p( p + 1)/2 constant vector, and B i s and C j s are ( p( p + 1)/2) × ( p( p + 1)/2) constant matrices. Let Ft−1 be the σ -algebra generated by {et−1 , et−2 , . . .}. We assume H t is measurable with respect to Ft−1 and ηt ⊥ Ft−1 . Since this model is critical and widespread in nonlinear time series analysis of financial data, the three tests based on TLH,n , TLR,n , and TBNP,n can be applied to financial data.
2.1.2 Likelihood Ratio Test Based on Whittle Likelihood In Theorem 2.1, we saw that the classical test statistics TLH,n , TLR,n , and TBNP,n are asymptotically chi-square distributed when (2.8) holds. However, the tests TLH,n , TLR,n , and TBNP,n are not available if one wants to test the hypothesis defined
2.1 Tests for Time-Dependent Errors
13
in (2.3) for general disturbances which do not satisfy (2.8). For this reason, we propose a new test based on the Whittle likelihood. Whittle’s approximation to the Gaussian likelihood function is given by n −1
i 1 tr I i (λs ) f (λs )−1 , 2 i=1 s=0
a
l(μ, α) := −
where λs = 2π s/n i and n n ∗ i i 1 iλt iλu I i (λ) := (Zit − μ − α i )e (Ziu − μ − α i )e . 2π n i t=1 u=1 Under the null hypothesis H21 defined in (2.3), we know from ∂l(μ,α) ∂μ
= 0p,
∂l(μ,α) ∂α i
= 0 p that the solutions are
ˆ ·· := μ=μ
∂l(μ,0 p ) ∂μ
= 0p,
ni ni a 1 1 ˆ ·· ). Zit and α = αˆ i· := (Zit − μ n i=1 t=1 n i t=1
Hence, we introduce the following test statistic:

T_WLR := 2{ l(μ̂_··, α̂) − l(μ̂_··, 0_p) },

or equivalently,

T_WLR = ∑_{i=1}^{a} (√n_i α̂_{i·})′ {2π f(0)}^{−1} (√n_i α̂_{i·}).    (2.13)
The following result holds.

Lemma 2.2 Suppose Assumption 2.1 holds. If e_{it} is generated by the generalized linear process (2.2), then under H21, the test statistic T_WLR has an asymptotic chi-square distribution with (a − 1)p degrees of freedom.

Proof (Lemma 2.2) Under H21, we obtain

∂l(μ, 0_p)/∂μ = −(1/2) ∑_{i=1}^{a} ∑_{s=0}^{n_i−1} f(λ_s)^{−1} [ (1/(2π n_i)) { ∑_{t=1}^{n_i} (−e^{iλ_s t}) } { ∑_{u=1}^{n_i} (Z_{iu} − μ) e^{−iλ_s u} } + (1/(2π n_i)) { ∑_{t=1}^{n_i} (Z_{it} − μ) e^{iλ_s t} } { ∑_{u=1}^{n_i} (−e^{−iλ_s u}) } ].

Noting that

(1/n_i) ∑_{t=1}^{n_i} e^{iλ_s t} = 1 (s = 0) and 0 (s ≠ 0),    (2.14)
we can see that
∂l(μ,0 p ) ∂μ
= 0 p leads to the solution μ=
ni a 1 Zit . n i=1 t=1
Next, n i −1 ∂l(μ, α) 1 =− f (λs )−1 ∂α i 2 s=0
+
1 2π n i
ni ni 1 (−eiλs t ) (Ziu − μ − α i )e−iλs u 2π n i t=1 u=1 ni ni (Zit − μ − α i )eiλs t (−e−iλs u ) = 0 p t=1
u=1
leads to αi = Similarly, from
∂l(μ,α) ∂μ
ni 1 (Zit − μ). n i t=1
= 0 p , we obtain μ=
ni a 1 (Zit − α i ). n i=1 t=1
As a solution, we may take α = αˆ i· :=
ni ni a 1 1 ˆ ·· ) and μ = μ ˆ ·· := (Zit − μ Zit . n i t=1 n i=1 t=1
From the above, it follows that ˆ ·· , α) ˆ − l(μ ˆ ·· , 0 p ) TWLR = 2 l(μ ni ni a n i −1 1 ˆ ·· )eiλs t ˆ ·· ) e−iλs u f (λs )−1 = tr (Zit − μ (Ziu − μ 2π n i t=1 u=1 i=1 s=0 ni ni 1 ˆ ·· − αˆ i· )eiλs t ˆ ·· − αˆ i· ) e−iλs u f (λs )−1 − (Zit − μ (Ziu − μ 2π n i t=1 u=1 n −1 n n a i i i 1 αˆ i· eiλs t αˆ i· eiλs u f (λs )−1 = tr − 2π n i t=1 u=1 i=1 s=0
+
ni ni 1 ˆ ·· ) e−iλs u f (λs )−1 αˆ i· eiλs t (Ziu − μ 2π n i t=1 u=1
2.1 Tests for Time-Dependent Errors
15
ni ni 1 −iλs u iλs t −1 ˆ ·· )e + (Zit − μ f (λs ) αˆ i e . 2π n i t=1 u=1 Recalling (2.14), we obtain TWLR =
ni ni a 1 1 ˆ ·· ) f (0)−1 αˆ i· αˆ i· f (0)−1 + αˆ i· tr − (Ziu − μ 2π 2π u=1 u=1 i=1
+ =
ni 1 ˆ ·· )αˆ i· f (0)−1 (Zit − μ 2π t=1
a tr −αˆ i· n i αˆ i· {2π f (0)}−1 + 2αˆ i· n i αˆ i· {2π f (0)}−1 i=1
=
a √
n i αˆ i· {2π f (0)}−1
√
n i αˆ i· .
(2.15)
i=1
Note that ni a 1 1 αˆ i· = α¯ i· − n i α¯ i· , where α¯ i· = (Zit − μ). n i=1 n i t=1
√ Expression (2.15) is a mean-corrected quadratic form. Since n i α¯ i· converges in distribution to the p-dimensional centered normal distribution with variance 2π f (0) (see Hannan, 1970, p. 208), we can see that TWLR converges in distribution to the chi-square distribution with (a − 1) p degrees of freedom. Remark 2.3 As Lemma 2.2, the new test statistic based on the Whittle likelihood is asymptotically chi-square distributed even if the condition (2.8) does not hold. Furthermore, to propose a practical version of TWLR we consider ˆf i (λ) := 1 2π
n i −1 h=−(n i −1)
ω
h Mn i
e
−ihλ
|h| ˆ 1− i (h), ni
n i −h 1 ˆ i (h) := (Zi(t+h) − Zˆ i· )(Zit − Zˆ i· ) , n i t=1
where Mni is a positive sequence of integers, Zˆ i· in ˆ i (h) is defined in (2.4), ∞ ω(x) := −∞ W (t)eixt dt, and the function W (·) satisfies the following conditions: ∞ W (·) is a real, bounded, non-negative, even function such that −∞ W (t)dt = 1 and ∞ 2 −∞ W (t)dt < ∞ with a bounded derivative.
16
2 One-Way Fixed Effect Model
The following assumption is made. Assumption 2.2 For the same integer ν, (i) for some ν ≥ 1, lim
x→0
1 − ω(x) < ∞, |x|ν
(ii) Mni → ∞, {Mni }ν /n i → 0, (iii) ∞
∞
∞
s2 =−∞ s3 =−∞ s4 =−∞
|κri1 ,r2 ,r3 ,r4 (s2 , s3 , s4 )|
< ∞,
∞
|h|ν ||(h)|| < ∞, ν ≥ 0
h=−∞
where κri1 ,r2 ,r3 ,r4 (s2 , s3 , s4 ) := Cum{ei0r1 , eis2 r2 , eis3 r3 , eis4 r4 } for i = 1, . . . , a and r1 , r2 , r3 , r4 = 1, . . . , p. The following lemma is due to Hannan (1970) (p. 280, p. 283, and p. 331). Lemma 2.3 Under Assumption 2.2, as n i , i = 1, . . . , a, tend to ∞, ˆf i (λ) → f (λ) in probability for i = 1, . . . , a. Under Assumption 2.2, we can replace f (0) in (2.13) by ˆf i (0): TNT,n =
a −1 √ √ ˆ n i αˆ i· 2π f i (0) n i αˆ i· .
(2.16)
i=1
Therefore, the following result immediately follows from Slutsky’s theorem. Theorem 2.2 Under H21 and Assumptions 2.1 and 2.2, if eit is generated by the generalized linear process (2.2), then the test statistic TNT,n has an asymptotic chisquare distribution with (a − 1) p degrees of freedom. Remark 2.4 The test statistic TNT,n in Theorem 2.2 is practically useful because it can be calculated directly from the observed values.
References Anderson, T. W. (2003). An introduction to multivariate statistical analysis (3rd ed.). Wiley. Bollerslev, T., Engle, R. F., & Wooldridge, J. M. (1988). A capital asset pricing model with timevarying covariances. The Journal of Political Economy, 116–131. Fujikoshi, Y., Ulyanov, V. V., & Shimizu, R. (2011). Multivariate statistics: High-dimensional and large-sample approximations 760. Wiley. Hannan, E. J. (1970). Multiple time series. Wiley.
References
17
Magnus, J. R., & Neudecker, H. (1999). Matrix differential calculus with applications in statistics and econometrics. New York: Wiley. Nagahata, H., & Taniguchi, M. (2017). Analysis of variance for multivariate time series. Metron, 76, 69–82. Rao, C. R. (2009). Linear statistical inference and its applications 22. Wiley.
Chapter 3
One-Way Fixed Effect Model for High-Dimensional Time Series
Although finite dimensionality was assumed in Chapter 2, ANOVA for highdimensional time-dependent errors has not been fully developed. In this chapter, we extend the one-way fixed model with time-dependent errors and independent groups to a one-way fixed model with high-dimensional time-dependent errors and independent groups. In Section 3.1, we develop the asymptotics of the basic statistics and modified classical tests for high-dimensional time-dependent errors. A sufficient condition for the modified classical tests to be asymptotically normal is also presented. This chapter is based mostly on Nagahata and Taniguchi (2018).
3.1 Tests for High-Dimensional Time-Dependent Errors Throughout, we consider the one-way fixed effect model under which a a-tuple of p-dimensional time series Zi1 , . . . , Zini , i = 1, . . . , a satisfies Zit = μ + α i + eit , i = 1, . . . , a, t = 1, . . . , n i ,
(3.1)
where the disturbances eit = (eit1 , . . . , eit p ) . Here, μ is the global mean of the the i-th treatment, which measures model (3.1), and α i denotes the fixed effects of a αi = 0 p . the deviation from the grand mean μ satisfying i=1 We suppose that (i) the observed stretch {Zit ; i = 1, . . . , a, t = 1, . . . , n i } is available, process, which has a (ii) the p-dimensional time series {eit } is a centered stationary p-by- p lag h autocovariance matrix (h) = jk (h) 1≤ j,k≤ p , h ∈ Z and spectral density matrix f (λ) = ( f i (λ))i=1,...,a for λ ∈ [−π, π ], where f i (λ) is a p-by- p spectrum of {eit }, furthermore {ei· }, i = 1, . . . , a are mutually independent, and a ni . (iii) n 1 = · · · = n a with n = i=1 © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. Goto et al., ANOVA with Dependent Errors, JSS Research Series in Statistics, https://doi.org/10.1007/978-981-99-4172-8_3
19
20
3 One-Way Fixed Effect Model for High-Dimensional Time Series
Remark 3.1 The above conditions are the same as those considered at the beginning of Chapter 2. Since the sum of the treatment effects is zero, we consider the hypotheses: H31 : α 1 = · · · = α a = 0 p versus K 31 : α i = 0 p for some i. The null hypothesis H31 implies that all the effects are zero. For our high-dimensional dependent observations, we use the following Lawley– Hotelling test statistic TLH,n , the likelihood ratio test statistic TLR,n , and the Bartlett– Nanda–Pillai test statistic TBNP,n : −1
TLH,n := ntr Sˆ H Sˆ E , TLR,n := −n log | Sˆ E |/| Sˆ E + Sˆ H |, and TBNP,n := ntr Sˆ H ( Sˆ E + Sˆ H )−1 , where Sˆ H :=
a
n i ( Zˆ i· − Zˆ ·· )( Zˆ i· − Zˆ ·· ) , and Sˆ E :=
i=1
with Zˆ i· :=
1 ni
ni t=1
1 Zit and Zˆ ·· := n
ni a
(Zit − Zˆ i· )(Zit − Zˆ i· )
i=1 t=1 ni a
Zit .
i=1 t=1
Here, Sˆ H and Sˆ E are called the between-group sum of squares and the within-group sum of squares, respectively. We derive the null asymptotic distribution of the three test statistics under the following assumptions. Assumption 3.1 For the dimension p of Zit , the sample size of the i-th group n i and the total sample size n satisfy p 3/2 √ → 0 as n, p → ∞ and n ni → ρi > 0 as n → ∞. n
(3.2)
Remark 3.2 Condition (3.2) implies that the sample size of the i-th group n i and the total sample size n of all the groups are asymptotically of the same order.
3.1 Tests for High-Dimensional Time-Dependent Errors
21
Assumption 3.2 For the p-vectors eit = (eit1 , . . . , eit p ) given in (3.1), there exists a non-negative integer such that ⎫ ⎧⎛ ⎞ ⎬ ⎨ ⎝1 + |s j |⎠ |κri1 ,...,r (s2 , . . . , s )| < ∞, ⎭ ⎩ ,...,s =−∞ ∞
s2
j=2
for any -tuple r1 , . . . , r ∈ {1, . . . , p} and i = 1, . . . , a. Here, κri1 ,...,r (s2 , . . . , s ) := Cum{ei0r1 , eis2 r2 , . . . , eis r }, where, for a random variable {X t }, Cum(X 1 , . . . , X ) denotes the cumulant of order of (X 1 , . . . , X ) (see Brillinger, 1981, p. 19). Remark 3.3 As per Condition (ii) at the beginning of section 3.1, {ei· }, i = 1, . . . , a are mutually independent. Hence, Cum{ei·· , e j·· , · · · } = 0 (i = j). Remark 3.4 If eitrm1 , . . . , eitrm h for some -tuple m 1 , . . . , m h ∈ {1, . . . , } are independent of eitrm h+1 , . . . , eitrm for the remaining ( − h)-tuple m h+1 , . . . , m ∈ {1, . . . , }, then κrim ,...,rm (sm 1 +1 , . . . , sm ) = 0 (Brillinger, 1981, p. 19). Assump1 tion 3.2 implies that if the time points of a group of eit A r∗ ’s are well separated from the remaining time points of eit B r∗ ’s, the values of κri1 ,...,r (s2 , . . . , s ) become small (and hence summable) (see Brillinger, 1981, p. 19). This property is natural for stochastic processes with short memory. Although some readers may believe that Assumption 3.2 is very restrictive, it is not so. Nisio (1960) studied a sequence of polynomial processes: X L (t) =
L
a J (t − u 1 , . . . , t − u J )W (u 1 ) · · · W (u J ),
(3.3)
J =0 u 1 ,...,u J i.i.d.
where the a J ’s are absolutely summable, and {W (u)} ∼ N (0, 1). Nisio (1960) showed that, when a process {Y (t)} is strictly stationary and ergodic, there is a sequence of polynomial processes that converges to {Y (t)} in law. The process X L (t) is called the Volterra series. Clearly, the sequence of polynomial processes (3.3) satisfies Assumption 3.2. Also, in the following (3.5), we present a very practical nonlinear time series model, which satisfies Assumption 3.2. Assumption 3.3 The disturbance process {ei· } is an uncorrelated process, that is, ( j) = O p for all j = 0.
(3.4)
Remark 3.5 Assumption 3.3 means that we replace independent disturbances in the classical ANOVA by dependent ones like white noise and GARCH type disturbances. It should be noted here that the condition (3.4) is of course restrictive, but includes practical nonlinear time series models such as DCC-GARCH(u, v):
22
3 One-Way Fixed Effect Model for High-Dimensional Time Series
i.i.d. 1/2 eit = H it ηit , ηit ∼ 0 p , I p , √ √ H = D R D , Dit = diag σit1 , . . . , σit p , ⎛ it ⎞ it it it eit1 2 u v ⎜ .. ⎟ eit = ⎝ . ⎠ , σit j = w j + b j l=1 ei(t−l) j , σi(t−l) j + c j l=1 eit p −1/2 −1/2 Q it diag Q it , Rit = diag Q it ⎞ ⎛ e˜it1 e ⎟ ⎜ ˜ + α e˜ i(t−1) e˜ e˜ it = ⎝ ... ⎠ , e˜it j = √σit j , Q it = (1 − α − β) Q i(t−1) + β Q i(t−1) , it j e˜it p
(3.5) ˜ the unconditional correlation matrix, is a constant positive semidefinite where Q, matrix, and H it ’s are measurable with respect to ηi(t−1) , ηi(t−2) , · · · (see Engle, 2002) satisfying (3.4). By Giraitis et al. (2000, formula (2.3)), a typical component is expressed as ∞ bt− j1 · · · b jl−1 − jl η j1 · · · η jl , l=0 jl < jl−1 0, k2 > 0, k3 > 0, θ2 > 0, h 1 > 0, and h 2 > −n k2 θ2 . Theorem 6.1 (i) Under the null hypothesis H61 , the log-likelihood ratio (θ 0,61, θ n,61) has the following asymptotic expansion: for h ∈ R and sufficiently large n such that h 2 > −n k2 θ2 , (θ 0,61 , θ n,61 ) ⎧ √ ah 2 g1 (T 1,GKKT,n ) ⎪ h h1 h3 θ 2 ⎪ 3 ⎪ − a2 log(1 + θ 1 ) − 2(θ +h + o p (1) ⎪ θ2 +h 1 2(θ2 +h 1 ) ⎪ 2 2 1) g2 (T 1,GKKT,n ) ⎪ ⎪ ⎪ ⎨ 2 ah h = √ 3 g1 (T 1,GKKT,n ) − 2θ 3 + o p (1) θ2 ⎪ 2 ⎪ ⎪ h1 h ⎪ ⎪ g2 (T 1,GKKT,n ) − a2 log(1 + θ 1 ) + o p (1) ⎪ ⎪ 2(θ +h ) 2 ⎪ 2 1 ⎩ o p (1)
k2 ≥ 1, k3 = 21 , k1 = 1, k2 ≥ 1, k3 = 21 , k1 > 1, k2 ≥ 1, k3 > 21 , k1 = 1, k2 ≥ 1, k3 > 21 , k1 > 1,
6.2 The Hypothesis That the Variance of Random Effect Equals …
57
where √ T 1,GKKT,n :=
n(Z 1· − θ3 ) ,..., √ θ2
√ n(Z a· − θ3 ) , √ θ2
(6.4)
(n) , the a-dimensional standard normal distribution, and which follows, under H61
a a 2 g1 (x1 , . . . , xa ) g2 (x1 , . . . , xa ) := . i=1 x i i=1 x i
(n) (ii) Under the null hypothesis H61 , the Fisher information of the model (6.1) is given by ⎞ ⎛ 1 1 0 2(θ1 +θ2 )2 2(θ1 +θ2 )2 ⎟ ⎜ 1 1 0 ⎠. I(θ 0,61 ) := ⎝ 2(θ1 +θ 2 2(θ +θ )2 2) 1 2 1 0 0 θ1 +θ2
Proof (Theorem 6.1) We only prove (i) since (ii) is an immediate consequence of the definition of the Fisher information matrix. Let the elements of θ n,63 denote (n) (n) (n) . From the definition of (θ 0,61 , θ n,61 ), a lengthy but straightforward θ1 θ2 θ3 calculation gives
=
(θ 0,61 , θ n,61 ) a n θ3 − θ3(n) θ2(n) + nθ1(n) +
2 an θ3(n) − θ3 θ (n) a(n − 1) − log 2 (Z i. − θ3 ) − 2 θ2 2 θ2(n) + nθ1(n) i=1
n a θ2(n) − θ2
2θ2(n) θ2
(Z it − Z i. )2 −
i=1 t=1
θ (n) + nθ1(n) a log 2 2 θ2 + nθ1
a a θ1 − θ2 ) + n 2 θ1(n) n2 2 + (Z i. − θ3 ) − (Z i· − θ3 )2 2 θ + nθ (θ ) 2 2 1 2θ2 θ2(n) + nθ1(n) i=1 i=1
n(θ2(n)
= L 1 + L 2 + L 3 + L 4, where
L 1 :=
a n θ3 − θ3(n) θ2(n) + nθ1(n)
L 3 := −
i=1
(Z i. − θ3 ),
2 an θ3(n) − θ3 , L 2 := − 2 θ2(n) + nθ1(n)
a n θ (n) a(n − 1) θ (n) − θ2 log 2 + 2 (n) (Z it − Z i. )2 , 2 θ2 2θ2 θ2 i=1 t=1
58
6 Optimal Test for One-Way Random Effect Model
and L 4 := −
a θ (n) + nθ1(n) a n(θ2(n) − θ2 ) + n 2 θ1(n) log 2 + (Z i. − θ3 )2 (n) (n) 2 θ2 + nθ1 2θ2 θ2 + nθ1 i=1
θ1 n2 (Z i· − θ3 )2 . 2 θ2 (θ2 + nθ1 ) i=1 a
−
We can show the following convergences: for k1 , k2 , k3 > 0, (i) h3 (Z i. − θ3 ) h2 h1 k −1 3 θ2 + n k2 + n n k1 i=1 n √ a √ θ 2h3 n(Z i. − θ3 ) = − √ k3 − 21 k3 −k2 − 21 k3 + 21 −k1 θ2 n θ2 + n h2 + n h 1 i=1 ⎧ 0√ (k3 ≤ 21 , k1 < 21 + k3 ) or (k3 > 21 ), ⎪ ⎪ ⎪ ⎪ θ 2 h3 1 1 ⎪ ⎪ ⎨ h 1 g1 (T n ) k3 < 2 , k1 = 2 + k3 , or − ∞ k3 < 21 , k1 > 21 + k3 , ⇒ +∞ (6.5) √ ⎪ ⎪ h θ 1 3 2 ⎪ g (T ) k = , k = 1, ⎪ θ2 +h 1 1 n 3 1 2 ⎪ ⎪ ⎩ √h 3 g (T ) k = 1 , k > 1, a
L1 = −
θ2 1
n
3
2
1
as n → ∞, (ii) L2 = −
ah 23 2n 2k3 −1 θ2 + nhk22 + n nhk11
ah 23 2 n 2k3 −1 θ2 + h 2 n 2k3 −1−k2 + n 2k3 −k1 h 1 ⎧ 0 (k3 ≤ 21 , k1 < 2k3 ) or (k3 > 21 ), ⎪ ⎪ ⎪ 2 ⎪ ah 3 ⎪ k3 < 21 , k1 = 2k3 , ⎪ ⎨− 2h 1 k3 < 21 , k1 > 2k3 , ⇒ −∞ ⎪ 2 ⎪ ah 3 ⎪ k3 = 21 , k1 = 1, − 2(θ +h ⎪ ⎪ ) ⎪ ⎩ ah 232 1 − 2θ2 k3 = 21 , k1 > 1,
=−
as n → ∞, (iii) h2a L2 = − 2θ2 n k2 −1
ah 2 1 − n1 1 1− + n 2(n k2 −1 θ2 + hn2 )
6.2 The Hypothesis That the Variance of Random Effect Equals …
59
a n 1 2 h 2 2a 1 − n1 i=1 t=1 (Z i j − Z i· ) − a(n − 1) θ2 + √ 1 2a(n − 1) 2 n k2 − 2 θ2 + √h 2n + O(n 1−2k2 ) indeterminate form 0 < k2 < 1, ⇒ 0 k2 ≥ 1, as n → ∞, (iv) L4 a a n(θ2(n) − θ2 ) + n 2 θ1(n) θ2(n) + nθ1(n) = − log + (Z i. − θ3 )2 2 θ2 2θ2 θ2(n) + nθ1(n) i=1 h2 a h1 = − log 1 + k + k −1 2 n 2 θ2 n 1 θ2 h2 h1 + + 2 n k2 θ2 + h 2 + n k2 −k1 +1 h 1 2 n k1 −1 θ2 + n k1 −k2 −1 h 2 + h 1 2 a √ n(Z i. − θ3 ) × √ θ2 i=1 ⎧ ⎪ 0 < k1 < 1, ⎨−∞ h1 h1 a ⇒ 2(θ2 +h 1 ) g2 (T n ) − 2 log(1 + θ2 ) k1 = 1, ⎪ ⎩ 0 k1 > 1,
as n → ∞. Here, we employed the Taylor √ expansion in (ii) and exploited the fact that, under √ the null H , n(Z − θ )/ θ 2 follows the standard normal distribution in (i), 61 i. 3 a √ 2 n(Z i. − θ 3 ) /θ 2 follows the chi-square distribution with a degrees of freei=1 a n 2 dom in (iv), and i=1 t=1 (Z i j − Z i· ) /θ2 follows the chi-square distribution with a(n − 1) degrees of freedom (iii). Remark 6.1 The log-likelihood ratio process (θ 0,61 , θ n,61 ) tends to an indeterminate form as n → ∞ on the set {k2 ; 0 < k2 < 1} of related contiguous orders. On the other hand, (θ 0,61 , θ n,61 ) tends to −∞ as n → ∞ on the set {(k1 , k2 , k3 ); k2 ≥ 1, k1 < 1, k1 ≤ k3 + 21 }. Due to the term L 1 defined in (6.5), (θ 0,61 , θ n,61 ) tends to −∞ or to an indeterminate form as n → ∞ on the set {(k1 , k2 , k3 ); k2 ≥ 1, k3 < 1 , k1 > k3 + 21 }. 2 Remark 6.2 Theorem 6.1 tells us that the asymptotic behavior of the likelihood ratio process for the random effect model is non-standard. We wish that the random effect model has the LAN structure in order to construct the optimal test based on the
60
6 Optimal Test for One-Way Random Effect Model
LAN, but the random effect model does nothold the LANproperty.The asymptotic distribution of (θ 0,61 , θ n,61 ) includes g1 (x1 , . . . , xa ) and g2 (x1 , . . . , xa ) , which tend to the centered normal distribution with variance a and the chi-square distribution with a degrees of freedom, respectively, on the set {k2 ≥ 1, k3 = 21 , k1 = 1}. The Fisher information matrix is singular. From Theorem 6.1 and Remark 6.2, we cannot use the LAN framework for our setting, and thus, a different approach is needed. Fortunately, the Neyman–Pearson lemma leads to an asymptotically most powerful test (see Definition 6.1). For simplicity, we shall consider the simpler contiguous hypothesis: H62 : θ = θ 0,62
⎛ ⎞ 0 := ⎝θ2 ⎠ vs θ3
⎛ h1 ⎞ n k1
(n) K62 : θ = θ n,62 := ⎝ θ2 ⎠ , θ3
where θ2 > 0 and h 1 > 0. Then, the simpler result corresponding to Theorem 6.1 holds. Theorem 6.2 Under the null hypothesis H62 , (θ 0,62 , θ n,62 ) has the following the asymptotic expansion: (θ 0,62 , θ n,62 ) =
− a2 log(1 + o p (1)
h1 ) θ2
+
h1 g (T 1,GKKT,n ) 2(θ2 +h 1 ) 2
+ o p (1) k1 = 1, k1 > 1.
Proof (Theorems 6.2) Let the elements of θ n,62 denote θ1(n) θ2 θ3 . Then, from the definition of (θ 0,62 , θ n,62 ), a θ2 + nθ1(n) a n 2 θ1(n) log + (Z i· − θ3 )2 2 θ2 2θ2 (θ2 + nθ1(n) ) i=1 a h1 h1 n a + = − log 1 + (Z i· − θ3 )2 , 2 θ2 n k1 −1 2(n k1 −1 θ2 + h 1 ) θ2 i=1
(θ 0,62 , θ n,62 ) = −
which gives the conclusion.
Remark 6.3 The log-likelihood ratio process (θ 0 , θ n ) tends to −∞ as n → ∞ on the set {k1 ; k1 < 1}. (n) The asymptotic distribution of (θ 0,62 , θ n,62 ) under the alternative K62 can be derived as follows. (n) Theorem 6.3 Under the null hypothesis K62 , the following the asymptotic expansion holds:
6.2 The Hypothesis That the Variance of Random Effect Equals …
− a2 log(1 + (θ 0,62 , θ n,62 ) = o p (1)
h1 ) θ2
+
h1 g (T 2,GKKT,n ) 2θ2 2
61
+ o p (1) k1 = 1, k1 > 1,
as n → ∞, where ⎞ √ √ n(Z 1· − θ3 ) n(Z a· − θ3 ) ⎠ := ⎝ ,..., , (n) nθ1 + θ2 nθ1(n) + θ2 ⎛
T 2,GKKT,n
(6.6)
(n) which follows, under H61 , the a-dimensional standard normal distribution.
Proof (Theorems 6.3) Let the elements of θ n,62 denote θ1(n) θ2 θ3 . From the definition of (θ 0,62 , θ n,62 ), a a n 2 θ1(n) θ2 + nθ1(n) + (Z i· − θ3 )2 log 2 θ2 2θ2 (θ2 + nθ1(n) ) i=1 a h1 h1 n a + = − log 1 + (Z i· − θ3 )2 , 2 θ2 n k1 −1 2θ2 n k1 −1 θ2 + nθ1(n) i=1
(θ 0,62 , θ n,62 ) = −
which gives the conclusion.
Remark 6.4 The log-likelihood ratio process Λ(θ0,62, θn,62) tends to an indeterminate form as n → ∞ on the set {k1; k1 < 1}.

To state the main result, let us define the asymptotically most powerful test at the asymptotic level α (Lehmann & Romano, 2006, Definition 13.3.1, p. 541).

Definition 6.1 For the simple hypothesis ϑ = ϑ0 against ϑ = ϑn := ϑ0 + h n^{−k} for some k > 0 and h > 0, a sequence of tests {φn} is asymptotically most powerful at the asymptotic level α if
$$
\limsup_{n\to\infty} E_{\vartheta_0}(\phi_n) \le \alpha,
$$
and, for any test {ψn} such that lim sup_n E_{ϑ0}(ψn) ≤ α,
$$
\limsup_{n\to\infty}\bigl(E_{\vartheta_n}(\phi_n) - E_{\vartheta_n}(\psi_n)\bigr) \ge 0.
$$
By recalling the Neyman–Pearson lemma, we define the test function, for αn such that αn → α as n → ∞, as
$$
\phi_{\mathrm{GKKT},n} :=
\begin{cases}
1 & \Lambda(\theta_{0,62},\theta_{n,62}) > c_n,\\
\gamma_n & \Lambda(\theta_{0,62},\theta_{n,62}) = c_n,\\
0 & \Lambda(\theta_{0,62},\theta_{n,62}) < c_n,
\end{cases}
\tag{6.7}
$$
where the critical value cn and the constant γn are determined by E_{θ0}(φGKKT,n) = αn. Then, we can show that the test defined in (6.7) is asymptotically most powerful.

Theorem 6.4 Fix k1 = 1. Then, (i) the critical value cn converges to
$$
c := -\frac{a}{2}\log\Bigl(1+\frac{h_1}{\theta_2}\Bigr) + \frac{h_1}{2(\theta_2+h_1)}\,\chi^2_a[1-\alpha]
\quad\text{as } n\to\infty,
$$
where χ²_a[1 − α] denotes the upper α quantile of the chi-square distribution with a degrees of freedom, (ii) the asymptotic power of the test under the local alternative K62^(n) is given by
$$
\lim_{n\to\infty} P\bigl(\Lambda(\theta_{0,62},\theta_{n,62}) \ge c_n\bigr)
= P\Bigl(X \ge \frac{\theta_2}{\theta_2+h_1}\,\chi^2_a[1-\alpha]\Bigr),
$$
where X follows the chi-square distribution with a degrees of freedom, and (iii) the test {φGKKT,n} is asymptotically most powerful at asymptotic level α.

Proof (Theorem 6.4) Denote the asymptotic distributions of the log-likelihood ratio process Λ(θ0,62, θn,62) under the null and the alternative by L_H and L_K, respectively. (i) From Lemma 2.11 of Van der Vaart (2000), Theorem 6.2 yields, under the null H62,
$$
\sup_{x\in\mathbb R}\bigl|P\bigl(\Lambda(\theta_{0,62},\theta_{n,62}) < x\bigr) - P(L_H < x)\bigr| \to 0
\quad\text{as } n\to\infty,
$$
and, thus, αn = P(Λ(θ0,62, θn,62) ≥ cn) → P(L_H ≥ c) as n → ∞, where c is a constant satisfying P(L_H ≥ c) = α. Since, by Theorem 6.2, L_H is distributed as −(a/2) log(1 + h1/θ2) + h1 X/(2(θ2 + h1)) with X following the chi-square distribution with a degrees of freedom, solving P(L_H ≥ c) = α implies
$$
c_n \to c := -\frac{a}{2}\log\Bigl(1+\frac{h_1}{\theta_2}\Bigr) + \frac{h_1}{2(\theta_2+h_1)}\,\chi^2_a[1-\alpha]
\quad\text{as } n\to\infty.
$$
(ii) Apply Theorems 6.3 and 6.4 (i) to deduce
$$
\Lambda(\theta_{0,62},\theta_{n,62}) - c_n \Rightarrow L_K - c \quad\text{under } K_{62}^{(n)},
$$
and, thus, under K62^(n), we have
$$
\lim_{n\to\infty} P\bigl(\Lambda(\theta_{0,62},\theta_{n,62}) \ge c_n\bigr)
= P(L_K \ge c)
= P\Bigl(X \ge \frac{\theta_2}{\theta_2+h_1}\,\chi^2_a[1-\alpha]\Bigr).
$$
(iii) Applying the Neyman–Pearson lemma, we obtain, for any n ∈ ℕ and any test {ψn} such that E_{θ0}(ψn) ≤ αn, that E_{θn}(φGKKT,n) − E_{θn}(ψn) ≥ 0.
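To make the construction of φGKKT,n concrete, the following sketch (our own illustration, not code from the book) evaluates Λ(θ0,62, θn,62) in the closed form obtained in the proof of Theorem 6.2 for a balanced Gaussian one-way layout with k1 = 1 and compares it with the limiting critical value c of Theorem 6.4 (i). The randomization constant γn is omitted because the statistic is continuous, and the function names and toy parameter values are ours.

```python
# A minimal sketch (not the authors' code) of the likelihood-ratio test of
# Section 6.2 for a balanced Gaussian one-way random effect model with known
# theta_2 and theta_3, and local alternative theta_1^(n) = h1/n (k_1 = 1).
import numpy as np
from scipy.stats import chi2


def log_lr(Z, theta2, theta3, h1):
    """Lambda(theta_{0,62}, theta_{n,62}) via the closed form in the proof of Theorem 6.2."""
    a, n = Z.shape                         # a groups, n observations per group
    theta1n = h1 / n                       # contiguous order k_1 = 1
    quad = np.sum((Z.mean(axis=1) - theta3) ** 2)
    return (-0.5 * a * np.log((theta2 + n * theta1n) / theta2)
            + n ** 2 * theta1n / (2.0 * theta2 * (theta2 + n * theta1n)) * quad)


def gkkt_test(Z, theta2, theta3, h1, alpha=0.05):
    """Reject H_62 when the log-likelihood ratio exceeds the limit c of Theorem 6.4 (i)."""
    a, _ = Z.shape
    c = (-0.5 * a * np.log(1.0 + h1 / theta2)
         + h1 / (2.0 * (theta2 + h1)) * chi2.ppf(1.0 - alpha, df=a))
    lam = log_lr(Z, theta2, theta3, h1)
    return lam, c, lam > c


# toy illustration under the null H_62 (no random effect)
rng = np.random.default_rng(0)
a, n, theta2, theta3 = 5, 2000, 1.0, 0.0
Z = theta3 + rng.normal(scale=np.sqrt(theta2), size=(a, n))
print(gkkt_test(Z, theta2, theta3, h1=1.0))
```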
6.3 The Hypothesis That the Variance of Random Effects Is Not Equal to Zero Under the Null

In this section, we consider the contiguous hypothesis that the variance of random effects belongs to the interior of the parameter space under the null, which can be written as
$$
H_{63}: \theta = \theta_{0,63} := \begin{pmatrix}\theta_1\\ \theta_2\\ \theta_3\end{pmatrix},
\qquad
K_{63}^{(n)}: \theta = \theta_{n,63} := \begin{pmatrix}\theta_1+\dfrac{h_1}{n^{k_1}}\\[4pt] \theta_2+\dfrac{h_2}{n^{k_2}}\\[4pt] \theta_3+\dfrac{h_3}{n^{k_3}}\end{pmatrix},
$$
where θ1 > 0, θ2 > 0, h1 > 0, h1 > −n^{k1}θ1, and h2 > −n^{k2}θ2.

Theorem 6.5 Under the null hypothesis H63, for all k1 ≥ 1, k2 > 0, and k3 > 0, the log-likelihood ratio Λ(θ0,63, θn,63) degenerates to zero, that is, Λ(θ0,63, θn,63) converges in probability to zero as n → ∞.

Proof (Theorem 6.5) Let the elements of θn,63 be denoted by (θ1^(n), θ2^(n), θ3^(n))′. A lengthy but straightforward computation gives that
$$
\begin{aligned}
\Lambda(\theta_{0,63},\theta_{n,63})
&= -\frac{a}{2}(n-1)\log\frac{\theta_2^{(n)}}{\theta_2}
 -\frac{a}{2}\log\frac{\theta_2^{(n)}+n\theta_1^{(n)}}{\theta_2+n\theta_1}
 +\frac{\theta_2^{(n)}-\theta_2}{2\theta_2^{(n)}\theta_2}\sum_{i=1}^{a}\sum_{t=1}^{n}(Z_{it}-\bar Z_{i\cdot})^2\\
&\quad+\frac{\bigl(\theta_2^{(n)}-\theta_2\bigr)+n\bigl(\theta_1^{(n)}-\theta_1\bigr)}{2\bigl(\theta_2^{(n)}+n\theta_1^{(n)}\bigr)}
 \sum_{i=1}^{a}\frac{n(\bar Z_{i\cdot}-\theta_3)^2}{\theta_2+n\theta_1}\\
&\quad+\frac{\sqrt{n\theta_2+n^2\theta_1}\,\bigl(\theta_3^{(n)}-\theta_3\bigr)}{\theta_2^{(n)}+n\theta_1^{(n)}}
 \sum_{i=1}^{a}\frac{\sqrt n\,(\bar Z_{i\cdot}-\theta_3)}{\sqrt{\theta_2+n\theta_1}}
 -\frac{a n\bigl(\theta_3^{(n)}-\theta_3\bigr)^2}{2\bigl(\theta_2^{(n)}+n\theta_1^{(n)}\bigr)}\\
&= -\frac{a}{2}(n-1)\log\Bigl(1+\frac{h_2}{n^{k_2}\theta_2}\Bigr)
 -\frac{a}{2}\log\Bigl(1+\frac{h_2}{n^{k_2}\theta_2+n^{k_2+1}\theta_1}+\frac{h_1}{n^{k_1-1}\theta_2+n^{k_1}\theta_1}\Bigr)\\
&\quad+\frac{h_2\sqrt{2a\bigl(1-\tfrac1n\bigr)}}{2\bigl(n^{k_2-\frac12}\theta_2+\tfrac{h_2}{\sqrt n}\bigr)}\,
 \frac{1}{\sqrt{2a(n-1)}}\Bigl(\sum_{i=1}^{a}\sum_{t=1}^{n}\frac{(Z_{it}-\bar Z_{i\cdot})^2}{\theta_2}-a(n-1)\Bigr)
 +\frac{a h_2\bigl(1-\tfrac1n\bigr)}{2\bigl(n^{k_2-1}\theta_2+\tfrac{h_2}{n}\bigr)}\\
&\quad+\frac{h_2}{2\bigl(n^{k_2}\theta_2+h_2+n^{k_2+1}\theta_1+n^{k_2-k_1+1}h_1\bigr)}
 \sum_{i=1}^{a}\Bigl(\frac{\sqrt n\,(\bar Z_{i\cdot}-\theta_3)}{\sqrt{\theta_2+n\theta_1}}\Bigr)^{2}\\
&\quad+\frac{h_1}{2\bigl(n^{k_1-1}\theta_2+n^{k_1-k_2-1}h_2+n^{k_1}\theta_1+h_1\bigr)}
 \sum_{i=1}^{a}\Bigl(\frac{\sqrt n\,(\bar Z_{i\cdot}-\theta_3)}{\sqrt{\theta_2+n\theta_1}}\Bigr)^{2}\\
&\quad+\frac{h_3\sqrt{\tfrac{\theta_2}{n}+\theta_1}}{n^{k_3-1}\theta_2+n^{k_3-k_2-1}h_2+n^{k_3}\theta_1+n^{k_3-k_1}h_1}
 \sum_{i=1}^{a}\frac{\sqrt n\,(\bar Z_{i\cdot}-\theta_3)}{\sqrt{\theta_2+n\theta_1}}\\
&\quad-\frac{a h_3^{2}}{2\bigl(n^{2k_3-1}\theta_2+n^{2k_3-k_2-1}h_2+n^{2k_3}\theta_1+n^{2k_3-k_1}h_1\bigr)},
\end{aligned}
$$
which, in conjunction with the fact that Σ_{i=1}^{a} (√n(Z̄i· − θ3)/√(θ2 + nθ1))² follows the chi-square distribution with a degrees of freedom under the null hypothesis H63, tends in probability to zero as n → ∞ on the set {k1; k1 ≥ 1}.

Remark 6.5 For {k1; k1 < 1}, Λ(θ0,63, θn,63) tends to an indeterminate form as n → ∞.

Remark 6.6 Theorem 6.5 shows that the asymptotic behavior of the log-likelihood ratio process Λ(θ0,63, θn,63) is non-standard even if the variance of the random effects belongs to the interior of the parameter space under the null.

Remark 6.7 For the contiguous hypothesis
$$
H_{64}: \theta = \theta_{0} := \begin{pmatrix}\theta_1\\ \theta_2\\ \theta_3\end{pmatrix},
\qquad
K_{64}^{(n)}: \theta = \theta_{n} := \begin{pmatrix}\theta_1+\dfrac{h_1}{n^{k_1}}\\[4pt] \theta_2\\ \theta_3\end{pmatrix},
$$
where θ1 > 0, θ2 > 0, h1 > 0, and h1 > −n^{k1}θ1, we can show the same result as Theorem 6.5, that is, under the null hypothesis H64, Λ(θ0,64, θn,64) converges in probability to zero as n → ∞.

Proof (Remark 6.7) Let θn,64 = (θ1^(n), θ2, θ3)′. A straightforward calculation yields
$$
\begin{aligned}
\Lambda(\theta_{0,64},\theta_{n,64})
&= -\frac{a}{2}\log\frac{\theta_2+n\theta_1^{(n)}}{\theta_2+n\theta_1}
 + \frac{n\bigl(\theta_1^{(n)}-\theta_1\bigr)}{2\bigl(\theta_2+n\theta_1^{(n)}\bigr)}
 \sum_{i=1}^{a}\frac{n(\bar Z_{i\cdot}-\theta_3)^2}{\theta_2+n\theta_1}\\
&= -\frac{a}{2}\log\Bigl(1+\frac{h_1}{n^{k_1-1}\theta_2+n^{k_1}\theta_1}\Bigr)
 + \frac{h_1}{2\bigl(n^{k_1-1}\theta_2+n^{k_1}\theta_1+h_1\bigr)}
 \sum_{i=1}^{a}\frac{n(\bar Z_{i\cdot}-\theta_3)^2}{\theta_2+n\theta_1},
\end{aligned}
$$
which tends to 0 as n → ∞.
References
Goto, Y., Kaneko, T., Kojima, S., & Taniguchi, M. (2022). Likelihood ratio processes under nonstandard settings. Theory of Probability and Its Applications, 67, 246–260.
Hallin, M., Hlubinka, D., & Hudecová, Š. (2021). Efficient fully distribution-free center-outward rank tests for multiple-output regression and MANOVA. Journal of the American Statistical Association, 1–43.
Lehmann, E. L., & Romano, J. P. (2006). Testing statistical hypotheses. Springer.
Searle, S. R., Casella, G., & McCulloch, C. E. (1992). Variance components. New York: Wiley.
Van der Vaart, A. W. (2000). Asymptotic statistics. Cambridge University Press.
Chapter 7
Numerical Analysis
In this chapter, we illustrate some numerical studies investigating the finite sample performance of the tests for one-way and two-way models presented in the previous chapters. First, we demonstrate the performance of the Lawley–Hotelling test statistic (LH), the likelihood ratio test statistic (LR), and the Bartlett–Nanda–Pillai test statistic (BNP), defined in (2.5)–(2.7) of Chapter 2, respectively. Second, we investigate the finite sample performance of the tests based on (4.6) and (4.7) in Chapter 4. Note that the test statistic defined in (4.6) is almost the same as that defined in (2.16). Third, we illustrate the performance of the tests based on (5.5) and (5.8) in Chapter 5.
7.1 One-Way Effect Model with Independent Groups

We consider the one-way fixed effect model (see also (2.1)):
$$
\boldsymbol Z_{it} = \boldsymbol\mu + \boldsymbol\alpha_i + \boldsymbol\epsilon_{it},
\qquad t = 1,\ldots,n_i,\quad i = 1,\ldots,a,
$$
where Zit is a p-dimensional random vector, μ = (μ1, …, μp)^T, αi = (αi,1, …, αi,p)^T, and εit is the disturbance term. Recall that the hypothesis for the treatment effects given in (2.3) is
$$
H_{21}: \boldsymbol\alpha_1 = \boldsymbol\alpha_2 = \cdots = \boldsymbol\alpha_a = \boldsymbol 0
\quad\text{vs}\quad
K_{21}: \boldsymbol\alpha_i \ne \boldsymbol 0 \ \text{for some } i.
$$
The null hypothesis H21 implies that all the effects are zero.

For εit = (εit^(1), …, εit^(p))^T, the DCC-GARCH(1,1) model is a typical example of an uncorrelated process (see Engle, 2002). We assume the DCC-GARCH(1,1) model for εit, that is, εit follows the p-dimensional centered normal distribution with variance Hit, where
$$
\boldsymbol D_{it} = \mathrm{diag}\bigl(\sigma_{it}^{(1)},\ldots,\sigma_{it}^{(p)}\bigr),
\qquad
\boldsymbol H_{it} = \boldsymbol D_{it}\boldsymbol R_{it}\boldsymbol D_{it},
$$
$$
\bigl(\sigma_{it}^{(j)}\bigr)^{2} = c_j + a_j\bigl(\epsilon_{i(t-1)}^{(j)}\bigr)^{2} + b_j\bigl(\sigma_{i(t-1)}^{(j)}\bigr)^{2},
\tag{7.1}
$$
$$
\boldsymbol R_{it} = \mathrm{diag}\bigl(\boldsymbol Q_{it}\bigr)^{-1/2}\,\boldsymbol Q_{it}\,\mathrm{diag}\bigl(\boldsymbol Q_{it}\bigr)^{-1/2},
\qquad
\boldsymbol Q_{it} = (1-\alpha-\beta)\,\widetilde{\boldsymbol Q}
 + \alpha\,\widetilde{\boldsymbol\epsilon}_{i(t-1)}\widetilde{\boldsymbol\epsilon}_{i(t-1)}^{\,T}
 + \beta\,\boldsymbol Q_{i(t-1)},
\tag{7.2}
$$
where ε̃it = (ε̃it^(1), …, ε̃it^(p))^T and ε̃it^(j) = εit^(j)/σit^(j), j = 1, …, p.
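As an illustration of the data-generating process (7.1)–(7.2), the following sketch simulates a p-dimensional DCC-GARCH(1,1) disturbance series. This is our own minimal implementation, not the code used for the experiments; the parameter values are the p = 2 setting listed in the Appendix at the end of this chapter, together with α = 0.05 and β = 0.85, and the function name and initialization choices are ours.

```python
# A hedged sketch of simulating p-dimensional DCC-GARCH(1,1) disturbances
# following (7.1)-(7.2); not the authors' code.
import numpy as np


def simulate_dcc_garch(T, c, a_coef, b_coef, Q_bar, alpha=0.05, beta=0.85, seed=0):
    rng = np.random.default_rng(seed)
    p = len(c)
    sigma2 = c / (1.0 - a_coef - b_coef)   # start variances at their unconditional level
    Q = Q_bar.copy()
    eps_std_prev = np.zeros(p)             # standardized residual of the previous period
    eps = np.zeros((T, p))
    for t in range(T):
        # (7.2): DCC recursion for Q_t and the conditional correlation matrix R_t
        Q = (1.0 - alpha - beta) * Q_bar + alpha * np.outer(eps_std_prev, eps_std_prev) + beta * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R = Q * np.outer(d, d)
        D = np.diag(np.sqrt(sigma2))
        H = D @ R @ D                      # conditional covariance H_t = D_t R_t D_t
        eps[t] = rng.multivariate_normal(np.zeros(p), H)
        # (7.1): GARCH(1,1) update of the variances for the next period
        eps_std_prev = eps[t] / np.sqrt(sigma2)
        sigma2 = c + a_coef * eps[t] ** 2 + b_coef * sigma2
    return eps


# p = 2 parameters from the Appendix (c_j, a_j, b_j) and unconditional correlation Q tilde
c = np.array([0.001, 0.001])
a_coef = np.array([0.05, 0.10])
b_coef = np.array([0.90, 0.85])
Q_bar = np.array([[1.0, 0.5], [0.5, 1.0]])
eps = simulate_dcc_garch(T=1000, c=c, a_coef=a_coef, b_coef=b_coef, Q_bar=Q_bar)
print(eps.shape, np.corrcoef(eps, rowvar=False))
```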
7.1.1 The Empirical Size

This subsection focuses on the tests under the null hypothesis. We carry out two experiments to study the finite sample performance of the LH, the LR, and the BNP tests under the null (2.3). In the first experiment, we investigate the performance of the three tests with different p and sample sizes under fixed a = 3. Specifically, we let the dimension p of Zit vary between 2 and 5 and consider a = 3 groups with sample sizes n1 = 1000, n2 = 1500, and n3 = 2000. The parameters of Eq. (7.1) are given in the Appendix at the end of this chapter, along with the unconditional correlation matrix Q̃. In (7.2), we set α = 0.05 and β = 0.85.

The empirical size from 10,000 replications is presented in Table 7.1 for the LH, LR, and BNP tests. It can be seen that, under the null hypothesis, regardless of the value of p and the nominal level, the actual level is always remarkably close to the nominal level, and the performance of the three tests is very similar. For example, for the LH test, given the nominal level 0.1, the simulated significance level varies from 0.102 to 0.107 as p changes from 2 to 5, all of which are quite close to 0.1. In summary, all the classical tests are effective, and varying the dimension of Zit does not affect the accuracy of the three tests.
Table 7.1 Empirical size of the LH, LR, and BNP tests when p varies (a = 3)

Test  Nominal level   p = 2   p = 3   p = 4   p = 5
LH    0.10            0.102   0.104   0.102   0.107
      0.05            0.053   0.054   0.054   0.054
      0.01            0.009   0.012   0.010   0.011
LR    0.10            0.102   0.104   0.102   0.105
      0.05            0.053   0.054   0.053   0.054
      0.01            0.009   0.012   0.010   0.011
BNP   0.10            0.102   0.103   0.102   0.105
      0.05            0.053   0.054   0.053   0.053
      0.01            0.009   0.012   0.010   0.010
Fig. 7.1 Q-Q plots of the LH (left upper panel), LR (right upper panel), and BNP (lower panel) statistics versus the chi-square distribution with p(a − 1) degrees of freedom under H0 when p = 5 and a = 3
The Q-Q plots of the LH, LR, and BNP test statistics versus the chi-square distribution with p(a − 1) degrees of freedom, under H0, when p = 5 and a = 3 are in Fig. 7.1. It is evident that the distribution of the three statistics is approximated extremely well by the chi-square distribution with p(a − 1) degrees of freedom, even far away in the tails.

In the second simulation experiment, we fix p = 2 while the number a of groups varies, i.e., a = 3, 4, 5, 7, 10, and the sample sizes are set as follows:

• n1 = 1000, n2 = 1500, n3 = 2000 for a = 3
• n1 = 1000, n2 = 1300, n3 = 1700, n4 = 2000 for a = 4
• n1 = 1000, n2 = 1250, n3 = 1500, n4 = 1750, n5 = 2000 for a = 5
• n1 = 1000, n2 = 1250, n3 = 1500, n4 = 1750, n5 = 2000, n6 = 2250, n7 = 2500 for a = 7
• n1 = 1000, n2 = 1200, n3 = 1400, n4 = 1600, n5 = 1800, n6 = 2000, n7 = 2200, n8 = 2400, n9 = 2600, n10 = 2800 for a = 10.
Table 7.2 Empirical size of the LH, LR, and BNP tests when a varies (p = 2)

Test  Nominal level   a = 3   a = 4   a = 5   a = 7   a = 10
LH    0.10            0.102   0.098   0.100   0.096   0.102
      0.05            0.053   0.048   0.049   0.047   0.053
      0.01            0.009   0.010   0.011   0.008   0.009
LR    0.10            0.102   0.098   0.100   0.096   0.102
      0.05            0.053   0.048   0.048   0.047   0.053
      0.01            0.009   0.010   0.011   0.008   0.009
BNP   0.10            0.102   0.097   0.100   0.095   0.102
      0.05            0.053   0.048   0.048   0.047   0.053
      0.01            0.009   0.010   0.011   0.008   0.009
Table 7.2 displays the average simulated level of the LH, LR, and BNP tests over 10,000 replications for various numbers of groups. It shows that, whatever the number of groups, the actual level of the three tests is again very close to the nominal level under the null hypothesis. In other words, all the classical tests are robust to the number of groups. The Q-Q plots of the LH, LR, and BNP statistics versus the chi-square distribution with p(a − 1) degrees of freedom, under H0, when p = 2 and a = 10 are displayed in Fig. 7.2. It can be seen that the distribution of the three statistics is impressively close to the chi-square distribution with p(a − 1) degrees of freedom even for a number of groups as large as 10.
7.1.2 The Power

To investigate the power of the tests under the alternative hypothesis, we consider the same setup as in Section 7.1 but set fixed effects αi = δi 1p, where 1p is a p-dimensional vector of ones. In the case a = 3, we fix δ1 = −0.01, δ2 = 0, and δ3 = 0.01. Notice that this is a very small deviation from the null hypothesis. Table 7.3 reports the empirical power with nominal levels 0.01, 0.05, and 0.1 when p varies. Unlike the null hypothesis case, the power of all three tests is affected by the dimension of Zit: the power decreases as p increases. Again, the differences in the performances of the three tests are negligible. Figure 7.3 illustrates the receiver operating characteristic (ROC) curves for the LR, the LH, and the BNP tests when p = 5 and a = 3. The ROC curve is obtained by plotting the empirical power (under the alternative) on the y-axis against the empirical size (under the null) on the x-axis. The closer the ROC curve is to the upper left corner of the graph, the better the performance of the test.
Fig. 7.2 Q-Q plots of the LH (left upper panel), LR (right upper panel), and BNP (lower panel) statistics versus the chi-square distribution with p(a − 1) degrees of freedom under H0 when p = 2 and a = 10

Table 7.3 Empirical power of the LR, LH, and BNP tests when p varies (a = 3)

Nominal level  Test   p = 2   p = 3   p = 4   p = 5
0.01           LR     0.973   0.651   0.630   0.567
               LH     0.974   0.651   0.632   0.569
               BNP    0.973   0.649   0.629   0.566
0.05           LR     0.948   0.840   0.827   0.784
               LH     0.948   0.841   0.828   0.786
               BNP    0.947   0.839   0.827   0.783
0.1            LR     0.849   0.650   0.630   0.567
               LH     0.849   0.651   0.632   0.569
               BNP    0.848   0.649   0.629   0.566
Fig. 7.3 The ROC curve for the LR (solid line with circles), the LH (dashed line with stars), and the BNP (dotted line with crosses) tests when a = 3 and p = 5 (the knots correspond to the ξ percentiles of the chi-square distribution with p(a − 1) degrees of freedom, where ξ = 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 0.995). The x-axis is the empirical size and the y-axis is the empirical power
It can be seen that the three curves overlap, which indicates that there is no appreciable difference among the tests.

Next, the power is investigated when a varies with fixed p = 2. The values of the δi's for a = 3, 4, 5, 7, 10 are reported below.

• a = 3 ⇒ δ1 = −0.01, δ2 = 0, δ3 = 0.01
• a = 4 ⇒ δ1 = −0.015, δ2 = −0.005, δ3 = 0.005, δ4 = −0.015
• a = 5 ⇒ δ1 = −0.02, δ2 = −0.01, δ3 = 0, δ4 = 0.01, δ5 = 0.02
• a = 7 ⇒ δ1 = −0.03, δ2 = −0.03, δ3 = −0.01, δ4 = 0, δ5 = 0.01, δ6 = 0.02, δ7 = 0.03
• a = 10 ⇒ δ1 = −0.045, δ2 = −0.035, δ3 = −0.025, δ4 = −0.015, δ5 = −0.005, δ6 = 0.005, δ7 = 0.015, δ8 = 0.025, δ9 = 0.035, δ10 = 0.045

Table 7.4 presents the simulated power of the three tests with nominal levels equal to 0.01, 0.05, and 0.1 when a varies from 3 to 10.
Table 7.4 Empirical power of the LR, LH, and BNP tests when a varies (p = 2)

Nominal level  Test   a = 3   a = 4   a = 5   a = 7   a = 10
0.01           LR     0.849   0.999   1.000   1.000   1.000
               LH     0.849   1.000   1.000   1.000   1.000
               BNP    0.848   0.999   1.000   1.000   1.000
0.05           LR     0.948   1.000   1.000   1.000   1.000
               LH     0.948   1.000   1.000   1.000   1.000
               BNP    0.947   1.000   1.000   1.000   1.000
0.1            LR     0.973   1.000   1.000   1.000   1.000
               LH     0.974   1.000   1.000   1.000   1.000
               BNP    0.973   1.000   1.000   1.000   1.000
It shows that all three tests exhibit strong power under the alternative hypothesis, and increasing the number of groups enhances the power of the tests. For example, when the nominal level equals 0.01 or 0.05 and a = 3, the power of all tests is slightly smaller than one, while when a is larger than 3, the power increases to one. In summary, all three tests show satisfactory performance in terms of simulated level and power, and no substantial differences are observed among them. Changing the dimension p and the number of groups does not appreciably affect the simulated significance level, while increasing p reduces the power and increasing the number of groups enhances the power of these tests.
7.2 One-Way Effect Model with Correlated Groups

In this part, we show the finite sample performance of the test statistics TGALT,n and Tts,n for fixed and random effect models with time-dependent errors and correlated groups, based on Chapter 4.
7.2.1 Setup

We generate data from the one-way effect model: Zit = μ + αi + eit for i = 1, …, a; t = 1, …, ni. We set the dimensions p, q, and the number of groups a as follows: p = 1, q = 1, and a = 3, 6, 9, respectively. We consider five experimental designs with different group sizes: (BI) n1 = · · · = na = 300, (BII) n1 = · · · = na = 1000, (BIII) n1 = · · · = na = 2000, (BIV) n1 = · · · = na = 3000, and (Unbalanced) n_{3k−1} = 500 with all other sample sizes equal to 2000. That is, (BI)–(BIV) are balanced designs with equal group size, with the size varying from small to larger values, while (Unbalanced) is an unbalanced design with unequal group sample sizes.

We consider two scenarios for the disturbance process {e_t} := {(e_{1t}, …, e_{at})′}: independent groups (Scenario 1) and correlated groups (Scenario 2). For both scenarios, {e_t} follows a multivariate moving average model. Specifically, we suppose that
$$
\boldsymbol e_t = \boldsymbol\epsilon_t + \Psi\,\boldsymbol\epsilon_{t-1},
\tag{7.3}
$$
where Ψ = (Ψ_{ij}) is the coefficient matrix. In Scenario 1, Ψ = 0.7 I_a. In Scenario 2, Ψ_{3k−2,3k−2} = −0.5, Ψ_{3k,3k} = 0.4, Ψ_{3k,3k−2} = 0.4, and Ψ_{3k,3k−1} = 0.2 for positive integers k ≤ a/3, and Ψ_{ij} = 0 otherwise. The disturbance process ε_t is an i.i.d. sequence and follows the centered multivariate normal distribution with variance Σ or the centered multivariate t distribution with scale Σ and 5 degrees of freedom.
For both distributions of ε_t, in Scenario 1, Σ is an identity matrix. In Scenario 2, the diagonal elements of Σ are equal to 1, and Σ_{j,j+1} = Σ_{j+1,j} = 0.5 for 1 ≤ j ≤ a − 1. The following three situations for αi are considered: (A) α = (α1, …, αa)′ = 0_a, which corresponds to the null hypothesis for both the fixed effect and random effect models, i.e., H41 and H42; (B) α = (α1, …, αa)′, where α_{3k−2} = 0.1, α_{3k−1} = −0.1, α_{3k} = 0.2 for k ≤ a/3, which corresponds to the alternative hypothesis for the fixed effect model K41; and (C) α follows the centered normal distribution with variance Σ_α, where Σ_α is a block diagonal matrix whose off-diagonal blocks are all 3-by-3 zero matrices and whose main-diagonal blocks are all the same 3-by-3 matrix Σ_b:
$$
\Sigma_b = \frac{1}{1000}\begin{pmatrix} 2 & 1 & 0\\ 1 & 4 & 0.5\\ 0 & 0.5 & 1 \end{pmatrix}.
$$
Situation (C) corresponds to the alternative K42.
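As a concrete illustration of the data-generating process (7.3), the sketch below builds the Scenario 2 coefficient and innovation covariance matrices and draws the Gaussian version of the disturbances. It is our own sketch, not the authors' simulation code; the letters Ψ and Σ follow the reconstruction used above, the function names are ours, and the multivariate-t case is omitted.

```python
# A hedged sketch of generating the Scenario 2 disturbances e_t = eps_t + Psi eps_{t-1}
# of (7.3) for a groups, with eps_t i.i.d. N(0, Sigma); not the authors' code.
import numpy as np


def build_psi(a):
    """Scenario 2 moving-average coefficient matrix."""
    Psi = np.zeros((a, a))
    for k in range(1, a // 3 + 1):
        i1, i2, i3 = 3 * k - 3, 3 * k - 2, 3 * k - 1   # 0-based positions of cells 3k-2, 3k-1, 3k
        Psi[i1, i1] = -0.5
        Psi[i3, i3] = 0.4
        Psi[i3, i1] = 0.4
        Psi[i3, i2] = 0.2
    return Psi


def build_sigma(a):
    """Scenario 2 innovation covariance: unit diagonal, 0.5 on the first off-diagonals."""
    Sigma = np.eye(a)
    idx = np.arange(a - 1)
    Sigma[idx, idx + 1] = 0.5
    Sigma[idx + 1, idx] = 0.5
    return Sigma


def simulate_ma1(n, a, seed=0):
    """Generate n observations of e_t = eps_t + Psi eps_{t-1} with Gaussian eps_t."""
    rng = np.random.default_rng(seed)
    Psi, Sigma = build_psi(a), build_sigma(a)
    eps = rng.multivariate_normal(np.zeros(a), Sigma, size=n + 1)
    return eps[1:] + eps[:-1] @ Psi.T      # row t holds e_t for the a groups


e = simulate_ma1(n=2000, a=6)
print(e.shape)
```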
7.2.2 Test Results

We report the rejection probabilities of the test statistic TGALT,n defined in (4.7) and the classical test Tts,n defined in (4.6) over 1000 simulations. Figure 7.4 shows the empirical size of the tests under the null hypothesis. Both tests work relatively well for a = 3 and Scenario 1 (the top-left plot) for all processes. The test based on TGALT,n performs better than Tts,n for a = 6, 9 under Scenarios 1 and 2 if the sample size of each group is larger than or equal to 2000. Under Scenario 2 with correlated groups, the test based on TGALT,n outperforms the classical test based on Tts,n with smaller size distortion for all designs; this is not surprising, as the correlated groups are dealt with. On the other hand, when the sample size of each group is small, e.g., (BI) n1 = · · · = na = 300, the test based on TGALT,n performs worse than the classical test and exhibits some size distortion, as shown from the second to the last plot, although the performance considerably improves as the sample size increases. The difference in performance between the two distributions of ε_t, i.e., normal and Student t, is mixed, and becomes negligible in the independent case when the sample size is large, as well as in the correlated case.

Figure 7.5 shows the empirical power of the tests under the alternative hypothesis of fixed effects. The upper panels for a = 3, 6, 9 and Scenario 1 display that the empirical power of the two tests is nearly equal in each model. Under Scenario 2, both tests perform well: the power of the test based on TGALT,n is always around 1, regardless of the number of groups a, the sample size, and whether the design is balanced or not, while the power of Tts,n reaches one for the larger sample sizes. For Scenario 1, the power of TGALT,n is better than or similar to that of Tts,n, and approaches one when the sample size is greater than 1000.
Fig. 7.4 Empirical size of tests for the existence of fixed and random effects based on TGALT,n and Tts,n . The three columns correspond to a = 3, 6, and 9, respectively. The upper and lower panels correspond to Scenarios 1 (independent groups) and 2 (correlated groups), respectively. The tick marks of the x-label (BI), (BII), (BIII), (BIV), and (Unbalanced) correspond to the five sample size cases, respectively
Figure 7.6 shows the empirical power of the tests under the alternative hypothesis of random effects. It shows that the performance improves as the sample size increases. Again, the upper panels for a = 3, 6, 9 and Scenario 1 show that the empirical power of the two tests is similar in each model, while, as expected, the lower panels highlight that the test based on TGALT,n outperforms the classical test based on Tts,n under correlated groups (Scenario 2). In most cases, the size and the power for the unbalanced design (Unbalanced) are close to the results for the balanced designs (BII) and (BIII), i.e., ni = 1000 and 2000 for all groups, respectively. For almost all cases, the test based on TGALT,n is better than the classical test based on Tts,n
Fig. 7.5 Empirical power of tests for the existence of fixed effects based on TGALT,n and Tts,n . The three columns correspond to a = 3, 6, and 9, respectively. The upper and lower panels correspond to Scenarios 1 (independent groups) and 2 (correlated groups), respectively. The tick marks of the x-label (BI), (BII), (BIII), (BIV), and (Unbalanced) correspond to the five sample size cases, respectively
with larger empirical power. Under the case with correlated groups (Scenario 2), the empirical power of both tests approaches one when the sample size is larger than 2000 and a = 6, 9, as shown in the last two plots of the bottom row. The distribution of ε_t makes a significant difference: the power under the normal distribution is higher than that under the Student t for all designs. Overall, the test based on TGALT,n works well in detecting the existence of fixed or random effects. In summary, the test based on TGALT,n outperforms the classical test when the groups are correlated and the sample size of each group is sufficiently large.
Fig. 7.6 Empirical power of tests for the existence of random effects based on TGALT,n and Tts,n . The columns correspond to a = 3, 6, and 9, respectively. The upper and lower panels correspond to Scenarios 1 (independent groups) and 2 (correlated groups), respectively. The tick marks of the x-label (BI), (BII), (BIII), (BIV), and (Unbalanced) correspond to the five sample size cases, respectively
7.3 Two-Way Effect Model with Correlated Groups

In this part, we study the finite sample performance of the tests for random effect models with time-dependent errors and correlated groups. As an illustration, we investigate the performance of the tests for the existence of random effects and random interactions based on the two-way random effect models in Chapter 5. Specifically, two experiments are carried out. The first deals with random effects in two-way models without interaction. The second concerns interaction effects in two-way models with interaction. The empirical size of the tests under the null hypothesis and the power under the alternative hypothesis with different types of dependent errors are investigated.
7.3.1 Test for the Existence of Random Effects

In the first experiment, we focus on the tests for random effects without the interaction term γij, that is, the data are generated from model (5.1). The parameters are (a, b, p) = (2, 2, 1), the sample size is n ∈ {100, 250, 500, 1000, 2000, 3000}, the number of replications is R = 1000, the nominal level is τ = 0.05, and the random effects (α1, α2)′ follow the centered bivariate normal distribution with variance Σ_α = σ_α² I_2 and σ_α ∈ {0, 0.1, 0.3, 0.5}, corresponding to the null and the alternative, respectively. For the innovation time series e_t := (e_{11t}, e_{21t}, e_{12t}, e_{22t})′, we consider vector AR(1) or MA(1) models, that is, e_t := Ψ e_{t−1} + ε_t or e_t := ε_t + Ψ ε_{t−1}, respectively. Here, ε_t := (ε_{11t}, ε_{21t}, ε_{12t}, ε_{22t})′ is an i.i.d. white noise sequence. We consider two distributions for the noise ε_t: the standard normal distribution and the centered t-distribution with 5 degrees of freedom and unit variance. For both cases, the coefficient matrix Ψ is defined as
$$
\Psi := \begin{pmatrix} -0.4 & 0 & 0 & c_1\\ 0 & 0.7 & 0 & 0\\ c_2 & 0 & 0.5 & 0\\ 0 & 0.3 & 0 & 0.3 \end{pmatrix},
$$
for (c1, c2) ∈ {(0, 0), (−0.5, 0.2), (−0.5, 0), (−0.5, −0.5), (0.2, −0.5)}, respectively. The element c1 in Ψ refers to the inter-group correlation between the (1,1)-cell and the (2,2)-cell, and c2 indicates within-group correlation with respect to factor A between the (1,1)-cell and the (1,2)-cell. The empirical size (under the null) and the power (under the alternative) are computed by
$$
\frac{1}{R}\sum_{i=1}^{R} I\Bigl\{T^{(i)}_{\alpha,\mathrm{GSXT},n} \ge \chi^2_{\hat r_{n,\alpha}}[1-\tau]\Bigr\},
\tag{7.4}
$$
where T^{(i)}_{α,GSXT,n} denotes the value of the statistic (5.5) in the i-th replication and I{ω} is an indicator function which takes value one when ω holds and zero otherwise.

Figures 7.7 and 7.8 display the empirical size and the empirical power of the test for the existence of random effects with different setups, with the disturbances following the normal and the t-distribution, respectively. We can see that, as the sample size n increases, the performance of the test improves in all cases. From the top panels, which refer to the case under the null hypothesis without random effects (σ_α = 0), the simulated level of the test for the model with the AR(1)-type innovation approaches the nominal level 0.05 when n = 2000, while the empirical size of the test for the model with the MA(1) error reaches the nominal level faster, when n is as large as 500, and the performance is slightly more stable than in the AR(1) case. Under the alternative, the results show that the test under both AR(1) and MA(1) errors exhibits increasing power with n, and the test shows larger power with the MA(1)-type innovation than with the AR(1)-type innovation at small sample sizes and larger σ_α.
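The Monte Carlo recipe in (7.4) can be written compactly as below. This is our own sketch, not the authors' code; the test statistic of (5.5) and its estimated degrees of freedom r̂_{n,α} are not reproduced here, so `statistic` and `generate_sample` are user-supplied placeholders.

```python
# A sketch of the Monte Carlo rejection frequency (7.4); the statistic of (5.5)
# is supplied by the user as a callable returning (T_alpha_GSXT_n, r_hat).
import numpy as np
from scipy.stats import chi2


def rejection_rate(generate_sample, statistic, R=1000, tau=0.05, seed=0):
    """Average of I{T^(i) >= chi^2_{r_hat}[1 - tau]} over R replications, as in (7.4)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(R):
        sample = generate_sample(rng)      # one simulated data set, e.g. from model (5.1)
        T, r_hat = statistic(sample)       # value of (5.5) and its estimated degrees of freedom
        rejections += T >= chi2.ppf(1.0 - tau, df=r_hat)
    return rejections / R
```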
Fig. 7.7 Empirical size (the top row) and power (the second to bottom rows) for the test of random effects. The x-axis is sample size and y-axis is the average rejection probabilities over 1000 replications in each plot. The first panel refers to the null (σα = 0), and the second to bottom panel corresponds to the alternative (σα = 0.1, 0.3, 0.5). The left and right columns correspond to the AR(1)- and MA(1)-type innovations with disturbances following the normal distribution, respectively
As for the distributions of the disturbances e_t, our test shows no significant difference between the normal and the Student t-distribution under the null, whereas under the alternative, the power is slightly higher when e_t follows the normal distribution rather than the t-distribution. As for σ_α, it can be seen from the second to fourth rows that the power of our test becomes larger as σ_α increases. For example, when σ_α = 0.1, the power of the test is close to 0.6 when n = 3000 in the case of AR(1)-type innovation and normal disturbances, while the power increases to 0.9 when σ_α becomes 0.5. This is not surprising, as a larger σ_α indicates more significant random effects, which augments the power of the test.
Fig. 7.8 Empirical size (the top row) and power (the second to bottom rows) for the test of random effects. The x-axis is sample size and y-axis is the average rejection probabilities over 1000 replications in each plot. The first panel refers to the null (σα = 0), and the second to bottom panel corresponds to the alternative (σα = 0.1, 0.3, 0.5). The left and right columns correspond to the AR(1)- and MA(1)-type innovations with disturbances following the t-distribution, respectively
7.3.2 Test of Interaction Effects

In this subsection, we include nonzero interactions γij in the two-way effect model and generate data from model (5.6). We consider the same settings as in Section 7.3.1, except that the interaction terms (γ11, γ21, γ12, γ22)′ follow the centered multivariate normal distribution with variance Σ_γ = σ_γ² I_4 for σ_γ ∈ {0, 0.1, 0.3, 0.5}. Here, σ_γ = 0 and nonzero σ_γ correspond to the null and the alternative hypothesis, respectively. Similarly, we define the rejection probability, i.e., the empirical size (under the null) and the power (under the alternative), by
$$
\frac{1}{R}\sum_{i=1}^{R} I\Bigl\{T^{(i)}_{\gamma,\mathrm{GSXT},n} \ge \chi^2_{\hat r_{n,\gamma}}[1-\tau]\Bigr\},
\tag{7.5}
$$
where T^{(i)}_{γ,GSXT,n} denotes the value of the test statistic defined in (5.8) in the i-th replication. Figures 7.9 and 7.10 show the empirical size and power of the tests for γ with the disturbances following the normal and the t-distribution, respectively. The results
Fig. 7.9 Empirical size (the top row) and power (the second to bottom rows) for the test of interaction effects. The x-axis is sample size and y-axis is the average rejection probabilities over 1000 replications in each plot. The first panel corresponds to the null (σγ = 0), and the second to bottom panel refers to the alternative (σγ = 0.1, 0.3, 0.5). The left and right columns correspond to the AR(1)- and MA(1)-type innovations with disturbances following the normal distribution, respectively
Fig. 7.10 Empirical size (the top row) and power (the second to bottom rows) for the test of interaction effects. The x-axis is sample size and y-axis is the average rejection probabilities over 1000 replications in each plot. The first panel corresponds to the null (σγ = 0), and the second to bottom panel refers to the alternative (σγ = 0.1, 0.3, 0.5). The left and right columns correspond to the AR(1)- and MA(1)-type innovations with disturbances following the t-distribution, respectively
are similar to those of the previous experiment, that is, the test performance improves with larger sample sizes, and the test for the model with the MA(1) error reaches the nominal level faster and generally exhibits higher power than in the case of the AR(1)-type innovation. No significant difference is observed between the normal and the Student t-distributed noise under the null. The power of the test increases with σ_γ, which corresponds to stronger interaction effects in the two-way models.
Appendix: DCC-GARCH Parameters

DCC-GARCH, p = 2:
$$
\begin{aligned}
\bigl(\sigma_{it}^{(1)}\bigr)^2 &= 0.001 + 0.05\,\bigl(\epsilon_{i(t-1)}^{(1)}\bigr)^2 + 0.90\,\bigl(\sigma_{i(t-1)}^{(1)}\bigr)^2,\\
\bigl(\sigma_{it}^{(2)}\bigr)^2 &= 0.001 + 0.10\,\bigl(\epsilon_{i(t-1)}^{(2)}\bigr)^2 + 0.85\,\bigl(\sigma_{i(t-1)}^{(2)}\bigr)^2,
\end{aligned}
\qquad
\widetilde{\boldsymbol Q} = \begin{pmatrix} 1.0 & 0.5\\ 0.5 & 1.0 \end{pmatrix}.
$$

DCC-GARCH, p = 3:
$$
\begin{aligned}
\bigl(\sigma_{it}^{(1)}\bigr)^2 &= 0.001 + 0.05\,\bigl(\epsilon_{i(t-1)}^{(1)}\bigr)^2 + 0.92\,\bigl(\sigma_{i(t-1)}^{(1)}\bigr)^2,\\
\bigl(\sigma_{it}^{(2)}\bigr)^2 &= 0.001 + 0.10\,\bigl(\epsilon_{i(t-1)}^{(2)}\bigr)^2 + 0.85\,\bigl(\sigma_{i(t-1)}^{(2)}\bigr)^2,\\
\bigl(\sigma_{it}^{(3)}\bigr)^2 &= 0.001 + 0.07\,\bigl(\epsilon_{i(t-1)}^{(3)}\bigr)^2 + 0.90\,\bigl(\sigma_{i(t-1)}^{(3)}\bigr)^2,
\end{aligned}
\qquad
\widetilde{\boldsymbol Q} = \begin{pmatrix} 1.0 & 0.5 & 0.4\\ 0.5 & 1.0 & 0.3\\ 0.4 & 0.3 & 1.0 \end{pmatrix}.
$$

DCC-GARCH, p = 4:
$$
\begin{aligned}
\bigl(\sigma_{it}^{(1)}\bigr)^2 &= 0.001 + 0.05\,\bigl(\epsilon_{i(t-1)}^{(1)}\bigr)^2 + 0.92\,\bigl(\sigma_{i(t-1)}^{(1)}\bigr)^2,\\
\bigl(\sigma_{it}^{(2)}\bigr)^2 &= 0.001 + 0.10\,\bigl(\epsilon_{i(t-1)}^{(2)}\bigr)^2 + 0.85\,\bigl(\sigma_{i(t-1)}^{(2)}\bigr)^2,\\
\bigl(\sigma_{it}^{(3)}\bigr)^2 &= 0.001 + 0.07\,\bigl(\epsilon_{i(t-1)}^{(3)}\bigr)^2 + 0.90\,\bigl(\sigma_{i(t-1)}^{(3)}\bigr)^2,\\
\bigl(\sigma_{it}^{(4)}\bigr)^2 &= 0.001 + 0.06\,\bigl(\epsilon_{i(t-1)}^{(4)}\bigr)^2 + 0.88\,\bigl(\sigma_{i(t-1)}^{(4)}\bigr)^2,
\end{aligned}
\qquad
\widetilde{\boldsymbol Q} = \begin{pmatrix} 1.00 & 0.45 & 0.40 & 0.50\\ 0.45 & 1.00 & 0.30 & 0.73\\ 0.40 & 0.30 & 1.00 & 0.40\\ 0.50 & 0.73 & 0.40 & 1.00 \end{pmatrix}.
$$

DCC-GARCH, p = 5:
$$
\begin{aligned}
\bigl(\sigma_{it}^{(1)}\bigr)^2 &= 0.001 + 0.05\,\bigl(\epsilon_{i(t-1)}^{(1)}\bigr)^2 + 0.92\,\bigl(\sigma_{i(t-1)}^{(1)}\bigr)^2,\\
\bigl(\sigma_{it}^{(2)}\bigr)^2 &= 0.001 + 0.10\,\bigl(\epsilon_{i(t-1)}^{(2)}\bigr)^2 + 0.85\,\bigl(\sigma_{i(t-1)}^{(2)}\bigr)^2,\\
\bigl(\sigma_{it}^{(3)}\bigr)^2 &= 0.001 + 0.07\,\bigl(\epsilon_{i(t-1)}^{(3)}\bigr)^2 + 0.90\,\bigl(\sigma_{i(t-1)}^{(3)}\bigr)^2,\\
\bigl(\sigma_{it}^{(4)}\bigr)^2 &= 0.001 + 0.06\,\bigl(\epsilon_{i(t-1)}^{(4)}\bigr)^2 + 0.88\,\bigl(\sigma_{i(t-1)}^{(4)}\bigr)^2,\\
\bigl(\sigma_{it}^{(5)}\bigr)^2 &= 0.001 + 0.08\,\bigl(\epsilon_{i(t-1)}^{(5)}\bigr)^2 + 0.92\,\bigl(\sigma_{i(t-1)}^{(5)}\bigr)^2,
\end{aligned}
\qquad
\widetilde{\boldsymbol Q} = \begin{pmatrix} 1.00 & 0.47 & 0.43 & 0.50 & 0.50\\ 0.47 & 1.00 & 0.37 & 0.73 & 0.45\\ 0.43 & 0.37 & 1.00 & 0.40 & 0.65\\ 0.50 & 0.73 & 0.40 & 1.00 & 0.50\\ 0.50 & 0.45 & 0.65 & 0.50 & 1.00 \end{pmatrix}.
$$
Reference

Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. Journal of Business & Economic Statistics, 20, 339–350.
Chapter 8
Empirical Data Analysis
This chapter analyzes the average wind speed data observed in seven cities located in coastal and inland areas in Japan by assuming the one-way effect model with time-dependent errors and correlated groups. The purpose is to assess the existence of area effects. To this end, both the TGALT,n statistic designed for correlated groups and the Tts,n statistic designed for independent groups are applied. The results highlight the greater accuracy of the former test, which appropriately takes into account the correlation structure among the series.
8.1 Average Wind Speed Data

In this chapter, we carry out a real data analysis to investigate the finite sample performance of the tests for one-way models presented in the previous chapters. Specifically, we illustrate the performance of the test in the one-way effect model with time-dependent errors and correlated groups based on model (4.1) of Chapter 4. We consider the average wind speed data observed in seven cities (points) in Japan: Matsumoto, Maebashi, Kumagaya, Chichibu, Tokyo, Chiba, and Yokohama. The location map of the seven points is displayed in Fig. 8.1. Three cities are located in coastal areas (Tokyo, Chiba, and Yokohama), while the other four cities are in inland areas. The data come from the Japan Meteorological Agency (JMA) and can be found at https://www.data.jma.go.jp/gmd/risk/obsdl/index.php. The JMA records weather observations for the purpose of preventing weather disasters by issuing domestic weather forecasts, warnings, and advisories. An additional purpose is to monitor the climate for the development of industry and the conservation of the global environment. The JMA collects automatic and visual observations of atmospheric pressure, wind, and other meteorological phenomena at approximately 150 observation facilities nationwide. In addition, in order to grasp wind and other meteorological phenomena in more detail, the JMA automatically observes wind speed at approximately 1,300 observation stations throughout Japan.
Fig. 8.1 Map of seven cities and corresponding locations
Table 8.1 The mean, minimum value, maximum value, and variance of the daily average wind speed data from May 1, 2020, to April 1, 2023 (1,066 observations)

           Matsumoto  Maebashi  Kumagaya  Chichibu  Tokyo  Chiba  Yokohama
Mean       2.50       2.30      2.42      1.62      2.70   3.63   3.46
Min        0.50       0.70      0.90      0.50      1.20   1.20   1.30
Max        7.00       6.40      7.10      4.20      6.50   11.20  8.90
Variance   1.33       0.75      0.98      0.36      0.71   2.28   1.41
We apply the test statistics to the average wind speed series recorded daily from May 1, 2020, to April 1, 2023. Hence, the dataset consists of daily average wind speed series from the seven points, covering 1,066 observations for each city. Table 8.1 reports the mean, minimum value, maximum value, and variance of the average wind speed time series data for each point. It can be seen that the average wind speeds in the coastal prefectures (Tokyo, Chiba, and Yokohama) are higher than those in the inland prefectures (Matsumoto, Maebashi, Kumagaya, and Chichibu), with a greater
Fig. 8.2 Time series plots of the average wind speed data of seven points from May 1, 2020, to April 1, 2023
magnitude of the mean. It is also interesting to note that wind speeds in Chichibu are smaller than those in the other inland prefectures. Figure 8.2 displays the time series plots of the average wind speed data of the seven points from May 1, 2020, to April 1, 2023. All the series are relatively stationary, with a slightly increasing trend since the middle of 2022, and the patterns of these observation sequences are similar. Figure 8.3 displays the heatmap of the correlations of the average wind speed series among the seven points. It can be seen that the correlations of the average wind speeds are high within both the coastal and the inland areas, except for Matsumoto. This may be due to the close distance between the observation sites, whereas Matsumoto is located relatively far from the other observation sites. The significant correlation shown in the heatmap indicates that the test based on TGALT,n designed for correlated
Fig. 8.3 The heatmap of sample correlations between points
groups may be more appropriate than that based on Tts,n designed for independent groups. We applied the test statistic TGALT,n for the one-way effect model with correlated groups to the seven average wind speed series. Each city is located in a different prefecture and is treated as a group, corresponding to the area effect. As a comparison, we also carried out the classical test based on Tts,n, which is designed for independent groups. The values of the statistics are 1796 and 973, and the corresponding p-values are 0 and 5.22 × 10^{−207} for TGALT,n and Tts,n, respectively. This indicates that, for both tests, there is strong evidence to reject the null hypothesis that there is no area effect. However, Tts,n yields a value of 973, low compared with the 1796 of TGALT,n, indicating that ignoring the between-group correlation may cause misleading results. This analysis points out that it is crucial to apply tests that are suited to the features of the data to ensure reliable conclusions.
Index
A
Analysis of Variance (ANOVA), 1
Asymptotically most powerful test, 61
Autocovariance function, 2, 9, 10, 19
Autoregressive (AR) model, 78

B
Balanced design, 10
Bartlett–Nanda–Pillai test statistic, 10, 20, 67
Bartlett–Nanda–Pillai test statistic for high-dimensional time series, 25

C
Chi-square distribution, 6, 11–13, 16, 37, 40, 47, 51
Consistency (test), 37, 40, 47, 52
Contiguous hypothesis, 56, 60, 63, 64

D
DCC-GARCH model, 21, 67, 83

F
Fisher information matrix, 57
Fixed effect, 9, 19, 29
Fixed interaction, 52

G
General (grand) mean, 6, 9, 19, 29, 43, 55
Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model, 6, 12
Generalized linear process, 4, 10

L
Lawley–Hotelling test statistic, 10, 20, 67
Lawley–Hotelling test statistic for high-dimensional time series, 25
Likelihood ratio test statistic, 10, 20, 67
Likelihood ratio test statistic for high-dimensional time series, 25
Limit in the mean, 3
Local alternative, 38, 41, 47, 49
Local asymptotic normality, 7, 55
Local power, 38, 41, 48, 52
Log-likelihood function, 56
Log-likelihood ratio process, 56

M
Minimum discrepancy estimator, 39
Mixed effect, 49
Moore–Penrose inverse matrix, 34, 46, 51
Moving Average (MA) model, 78

N
Noncentral chi-square distribution, 34, 38, 39, 49

O
One-way fixed effects model, 9, 19, 29
One-way random effects model, 39, 55
Orthogonal increment process, 3

R
Random effect, 39, 43, 55
Random interaction, 50

S
Spectral density matrix, 3–5, 9, 10, 19, 29, 39, 44
Spectral distribution matrix, 3
Stationary process, 2
Sum of squares, 11, 20, 30, 50

T
Two-way fixed effects model, 49
Two-way mixed effects model, 49
Two-way random effects model, 43
Two-way random effects model with interactions, 50

U
Uncorrelated process, 12, 21

V
Vector Autoregressive Moving Average (VARMA) model, 5
Volterra series, 21

W
Whittle likelihood, 13