
Biostatistics (2005), 6, 2, pp. 271–278 doi:10.1093/biostatistics/kxi008

A nonparametric approach to the analysis of longitudinal data via a set of level crossing problems with application to the analysis of microarray time course experiments

CAVAN REILLY*
Division of Biostatistics, University of Minnesota, A460 Mayo Building, MMC 303, 420 Delaware Street SE, Minneapolis, MN 55455-0378, USA
[email protected]

SUMMARY

Here we develop a completely nonparametric method for comparing two groups on a set of longitudinal measurements. No assumptions are made about the form of the mean response function, the covariance structure or the distributional form of disturbances around the mean response function. The solution proposed here is based on the realization that every longitudinal data set can also be thought of as a collection of survival data sets in which the events of interest are level crossings. The method for testing for differences in the longitudinal measurements is then as follows: for an arbitrarily large set of levels, determine for each subject the first time the subject has an upcrossing and a downcrossing of each level. For each level one then computes the log rank statistic, and the maximum in absolute value of all these statistics serves as the test statistic. By permuting group labels we obtain a permutation test of the hypothesis that the joint distribution of the measurements over time does not depend on group membership. Simulations are performed to investigate the power of the test, and the method is applied to the area that motivated it: the analysis of microarrays. In this area, small sample sizes, few time points and far too many genes to permit genuine gene-level longitudinal modeling have created a need for a simple, model-free test to screen for interesting features in the data.

Keywords: Level crossing problems; Longitudinal analysis; Microarrays; Nonparametric tests; Survival analysis.

1. INTRODUCTION

In many applications, one has a collection of measurements on subjects over time, and there is interest in the hypothesis that the joint distribution of the measurements over time depends on group membership. The first step in analyzing such data is exploratory data analysis, typically followed by a variety of tools for inference. For example, a common data analytic device is a plot of the response variable over time for each subject, with different colors or symbols for each group. If such a plot suggests substantively meaningful differences between the two groups, then one typically addresses the hypothesis of interest using more sophisticated tools. In the usual parametric approach to the analysis of such longitudinal data, one specifies the joint distribution in some parametric fashion and estimates the parameters. Often, model selection is guided by some information measure.

* To whom correspondence should be addressed.

© The Author 2005. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected].


There has been a considerable amount of research aimed at developing methods that require fewer assumptions. For example, semiparametric methods allow one to avoid specifying distributional forms for error terms. But even the most general semiparametric methods require the analyst to make difficult choices regarding the mean response and covariance structure. For example, in the case of generalized estimating equations, the choice of the working covariance structure can impact the analysis for small sample sizes. Additionally, one can use tools from nonparametric regression to allow flexible models for the mean response function; here too there are many options open to the analyst. These features of longitudinal analysis make it complex, and the results of the analysis are sometimes dependent on the approach taken by the analyst.

Longitudinal analysis becomes even more difficult when one analyzes microarray data. Often, one is interested in discovering which genes have different joint distributions of expression over time between two groups. The first difficulty one encounters is that it is not possible to look at a plot of how the response depends on time and group because there are far too many genes (one cannot examine 10 000 plots). In addition, there are often few subjects in each group and not many time points. These latter two facts imply that the data will provide little evidence for any given mean and covariance structure, hence the usual information measures will not be of much use once one considers the variability of such measures. Moreover, due to the small sample size and the impossibility of examining plots, nonparametric methods (such as a kernel-based mean response function) will not be of much use either. Other salient features of typical microarray data sets that make longitudinal analysis difficult include the following: stationarity in time is unlikely; there is no reason to believe that any genes have the same mean response function over time; and there are likely complex interactions between the genes over time. Given the complexities of attempting to model each gene, the uncertainty associated with a list of genes that differ between the two groups (the end product of the analysis) is enormous.

1.1 A level crossing approach

One solution to this problem is to realize that every longitudinal data set can also be thought of as a collection of survival data sets. There are a variety of ways one can make this transformation; here is one method. Suppose we have measurements on some real valued random variable $y(t)$ at a collection of times $t_i \in [0, T]$ for $i = 1, \ldots, N$. Suppose $C_0 \leq y(t_i) \leq C_K$ for some $C_0$ and $C_K$ for all $t_i$, and set $C_k = C_0 + k(C_K - C_0)/K$ for $k = 0, \ldots, K$. Now define $T_k^U = \min\{t > t_1 : y(t) \geq C_k\}$ and $T_k^D = \min\{t > t_1 : y(t) \leq C_k\}$, and define the indicator variables $\delta^U$ and $\delta^D$ that indicate whether the event occurs or not (the $U$ and $D$ stand for upcrossing and downcrossing). If a subject starts below $C_k$ then we define that subject to be censored at time zero for downcrossing (with a similar convention for upcrossings). If one has data on a collection of subjects, then one gets a collection of survival times and indicators for each level. A simple approach is to then compute a log rank test statistic for each level, and to take as the test statistic the largest of these statistics in absolute value. Finally, with just two groups one can easily compute the null distribution of the test statistic using permutations of the group labels.

More generally, given the set of levels $C_k$, one can define a vector $T^G = f(y(t_1), \ldots, y(t_N))$ to convert to a collection of survival data sets. Then one can use some other method for testing whether there are differences between the groups, and other methods for synthesizing the resulting collection of test statistics. For example, rather than just using the first time there is an upcrossing, one could consider all times at which there is an upcrossing, and then test for differences between the groups using methods for the analysis of recurrent events. As another extension, if one has measurements on a set of covariates for each subject, then one could use the Cox proportional hazards model with an indicator for group membership to obtain a semiparametric test (though one would have to be careful about the proportionality assumption). In this paper we concentrate on the simple approach using the log rank statistic outlined above; a code sketch of this construction follows.
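The following is a minimal sketch of this construction, not the author's software: it hand-rolls the two-sample log rank statistic and enumerates all group-label permutations. The function names and the convention of censoring a subject that never crosses a level at its last observation time are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def crossing_survival(y, times, level, kind="up"):
    """First-crossing time and event indicator for one trajectory.
    Following the paper's convention, a subject already past the level at
    the first time point is censored at time zero; a subject that never
    crosses is censored at the last observation time (our assumption)."""
    y = np.asarray(y, dtype=float)
    hit = y >= level if kind == "up" else y <= level
    if hit[0]:
        return 0.0, 0                     # censored at time zero
    idx = np.flatnonzero(hit[1:])
    if idx.size:
        return times[idx[0] + 1], 1       # first crossing after t_1
    return times[-1], 0                   # never crosses: censored at T

def logrank_z(time, event, group):
    """Standardized two-sample log rank statistic."""
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        dead = (time == t) & (event == 1)
        d = dead.sum()
        d1 = (dead & (group == 1)).sum()
        num += d1 - d * n1 / n            # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return num / np.sqrt(var) if var > 0 else 0.0

def level_crossing_test(Y, times, group, levels):
    """Permutation p-value for the maximum over levels and crossing
    directions of the absolute log rank statistic."""
    Y, group = np.asarray(Y, dtype=float), np.asarray(group)

    def stat(g):
        zs = []
        for level in levels:
            for kind in ("up", "down"):
                tv, ev = zip(*(crossing_survival(y, times, level, kind)
                               for y in Y))
                zs.append(abs(logrank_z(np.array(tv), np.array(ev), g)))
        return max(zs)

    obs = stat(group)
    n, n1 = len(Y), int(group.sum())
    # enumerate all relabelings preserving group sizes; feasible for the
    # small samples targeted here
    null = [stat(np.isin(np.arange(n), c).astype(int))
            for c in combinations(range(n), n1)]
    return obs, np.mean(np.array(null) >= obs)
```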


While the topic of level crossings has received a great deal of attention from theoretical perspectives, there have been few applications of the technique outside direct process control settings. Most of the theoretical work has focused on stationary processes (for example, Kedem, 1978; Adler and Samorodnitsky, 1997; Illsley, 2001) and sought to derive probability distributions associated with level crossings of such processes. Some more applied work has been conducted recently by Leadbetter and Spaniolo (2002), who computed the Palm distributions associated with the point process defined by the crossings of a fixed level by a Gaussian process and used this in a process control type application. The work of Lindgren (1974) and Björnham and Lindgren (1976) is closer to what is proposed here. In that work, the number of zero crossings is used to estimate the spectral moments of stationary Gaussian processes (the mean of the process is known in the first reference and estimated in the second). While this is closer to what we propose here in that level crossings are used to conduct inference for a process outside of direct process control applications, we do not assume the process is stationary or Gaussian, and the proposed use of level crossings is very different.

2. SOME ISSUES SURROUNDING THE TEST

2.1 Dependence on the set of levels

Thus far we have simply fixed K, the number of levels, at some arbitrary value. To obtain the most general test, we define our test statistic to be the limit of the sequence of test statistics we obtain as K goes to infinity. In fact, it is quite simple to find the set of levels that gives the limiting value of the statistic, as we show in Section 2.2, but first we show that the limit exists. To see that the limit in question exists, we will show that one can generate only finitely many possible survival data sets from a longitudinal data set with a finite number of observations. Let $T(C) = \inf\{t > t_1 : y(t) \geq C\}$ for $C \in [C_0, C_K]$. As C varies over the reals, T(C) will change at only a finite number of values (a worked example follows this paragraph); in fact, the number of times T(C) changes will be at most the number of time points for which there are measurements (with equality when y is strictly monotone increasing). But since for each subject there are only finitely many possible values for the event time, there are only finitely many possible data sets due to the finite sample size. Since there are only finitely many possible data sets, the number of possible values for the test statistic at a given level is finite. Hence, as K increases we eventually reach the point where the maximum of this finite set is included in the set over which we maximize. The asymptotics for either n (the number of subjects) or N appear complex; however, the primary motivation for the approach is the small n and N situation.
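As a concrete illustration (our example, not the paper's): suppose a subject is measured at times $t_1, \ldots, t_4$ with values $y = (1, 3, 2, 5)$. Then for upcrossings,

$$
T(C) =
\begin{cases}
\text{censored at time zero}, & C \leq 1,\\
t_2, & 1 < C \leq 3,\\
t_4, & 3 < C \leq 5,\\
\text{censored at } t_4, & C > 5,
\end{cases}
$$

so the survival datum changes at only the three values $C = 1, 3, 5$, fewer than the four time points.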

2.2 Finding the smallest set of levels needed to obtain the limiting value of the test statistic

An examination of the argument showing that the limit of the sequence of maximal test statistics exists reveals that one can easily determine the smallest number of levels necessary to obtain the limiting value of the statistic. For the sake of concreteness, consider the case of just upcrossings, and consider the possible survival data that could arise from the trajectory of a single individual. If $y(t+1) > y(t)$ and $y(s) < y(t)$ for all $s < t$, then for all $C \in (y(t), y(t+1)]$, $T(C) = t + 1$. Hence, when we select a set of levels, we will need a level at the value $y(t) + \epsilon$, and since any other $C \in (y(t), y(t+1)]$ will yield the same survival data for this subject, we do not need any other levels in the interval $(y(t), y(t+1)]$. A slightly more complicated case is when $y(t+1) > y(t)$ but there exists $s$ such that $y(t) < y(s) < y(t+1)$. In this case, if $y(s) > y(r)$ for all $r < t$, then we just need a level at $C = y(s) + \epsilon$. The resulting set of levels can be described by $\{y(t) + \epsilon : \text{there exists } s > 0 \text{ such that } y(t) < y(t+s) \text{ and } y(r) < y(t) \text{ for all } 1 \leq r < t+s\}$.


Finally, we need to be concerned with what occurs at the extremes of the values of the trajectory. Thus, a level will be necessary at $\max_t y(t) + \epsilon$ and at $\min_t y(t) - \epsilon$. One can also find the set of levels that are necessary for downcrossings using a similar argument. Once one has the collection of levels necessary for each individual, one obtains the entire set of levels as the union of the levels over individuals. Software for finding the set of levels and performing the test is available at www.biostat.umn.edu/~cavanr for the case of equal numbers of subjects per group, an equal number of observations for each subject and no missing data. These restrictions on the software are unnecessary, and work is under way to develop more general software. A sketch of the level-finding step appears below.
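What follows is a minimal sketch of the level-finding step for upcrossings under the construction just described; it is not the author's software, and the function names and the choice of ε are illustrative.

```python
import numpy as np

def upcrossing_levels(y, eps=1e-9):
    """Minimal level set for the upcrossings of one trajectory: a level
    just above y(t) whenever y(t) is a strict running maximum that some
    later measurement exceeds, plus levels just above the overall maximum
    and just below the overall minimum."""
    y = np.asarray(y, dtype=float)
    levels = {y.max() + eps, y.min() - eps}
    for t in range(len(y) - 1):
        is_running_max = t == 0 or y[t] > y[:t].max()
        if is_running_max and np.any(y[t + 1:] > y[t]):
            levels.add(y[t] + eps)
    return levels

def all_levels(trajectories, eps=1e-9):
    """Union of the per-subject level sets (the analogous construction
    with running minima would handle downcrossings)."""
    out = set()
    for y in trajectories:
        out |= upcrossing_levels(y, eps)
    return sorted(out)
```

The sorted union can be supplied as the `levels` argument of the `level_crossing_test` sketch in Section 1.1.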

2.3 The null hypothesis and considerations regarding alternatives

Thus far the motivation has largely been in terms of a pure test of significance, so here we consider in more detail what hypothesis we are testing and compare it to various alternative approaches. Suppose we are interested in the null hypothesis that the joint distribution of the measurements over time does not depend on group membership, and our alternative hypothesis specifies that the null is false and the mean is a monotone function of time. Then the mean response is invertible and there is no loss in considering the distribution of hitting times. In contrast, if the alternative specifies that the mean response is not monotone, then $T_k^U$ and $T_k^D$ would be inappropriate summaries and lead to a test with low power. For example, suppose we irradiate some cells with two levels of radiation and measure gene expression over time for many genes. If one wanted to find genes that were permanently altered, and one suspects that there is a transitory change in all (or many) of the genes, then one would want to use $T_k^1 = \max\{t : y(t) \geq C_k\}$ and $T_k^2 = \max\{t : y(t) \leq C_k\}$ instead of the previously proposed statistics (a code sketch of this variant follows below). In many applications we expect the mean response to be monotone in time. Note that the statistic can be powerful against some nonmonotone alternatives (e.g. $\sin(t)$ versus $\cos(t) - 1$ for $t \in [0, 5]$). The worst situation for the first upcrossing and downcrossing statistics is when the mean response for the two groups is the same and monotone increasing up until time $t^*$, after which both mean responses decline but at different rates. As in the previous microarray example, a time reversal would be appropriate before applying the method.
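For completeness, here is a hedged sketch of the last-crossing variant just described, mirroring the first-crossing transform above; the censoring convention for trajectories that never reach the level is our assumption.

```python
import numpy as np

def last_crossing_survival(y, times, level, kind="up"):
    """Last time the trajectory is at or beyond the level (T_k^1 for
    upcrossings, T_k^2 for downcrossings), intended for transient-response
    alternatives; never reaching the level yields censoring at the last
    observation time (assumption)."""
    y = np.asarray(y, dtype=float)
    hit = y >= level if kind == "up" else y <= level
    idx = np.flatnonzero(hit)
    if idx.size:
        return times[idx[-1]], 1
    return times[-1], 0
```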

3. SIMULATION STUDIES

Given the nature of the p-value calculation, simulations were performed to assess the power of the method. In addition to the proposed nonparametric method, we consider two parametric models: a repeated measures ANOVA model and a normal theory linear model with an unstructured mean and unstructured covariance. We consider the repeated measures ANOVA model because it is commonly used to analyze longitudinal data by those reluctant to model the response directly. The repeated measures ANOVA model corresponds to a mean response model with a different mean for each time point and normally distributed errors with a compound symmetric covariance matrix; we use the F statistic for testing the effect of group (see, e.g., Fleiss, 1986). The unstructured model has the same mean structure as the repeated measures ANOVA model but has a covariance matrix that is only constrained to be positive definite. One can obtain a test for group differences with this model using a $\chi^2$ test of the hypothesis that the mean structure is the same in both groups (one possible construction is sketched below). This test is only valid asymptotically, and for the small sample sizes used in the following simulations, the asymptotic approximation can behave quite poorly. Nonetheless, when we compute power we do not attempt to control for the failure of the asymptotic approximation. For the situation considered here, the effect of this deviation from the approximation is to overstate the power of these tests, sometimes considerably. Our general findings are that a fair amount of power is lost relative to the unstructured model when the assumptions of that model are valid, but the method can have higher power than repeated measures ANOVA when the covariance matrix is not compound symmetric.
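The paper does not specify how the $\chi^2$ test is computed; one natural construction, sketched here under that assumption, is a likelihood ratio test comparing multivariate normal models with a common versus group-specific mean vector and an unstructured covariance.

```python
import numpy as np
from scipy import stats

def chi2_group_test(Y1, Y2):
    """Likelihood ratio test that two groups share a mean vector, with
    unstructured mean and covariance; asymptotically chi-square with
    df = number of time points. Y1, Y2: (subjects x time points) arrays;
    requires more subjects in total than time points."""
    Y = np.vstack([Y1, Y2])
    n, p = Y.shape

    def scatter(X):
        R = X - X.mean(axis=0)
        return R.T @ R

    S_null = scatter(Y) / n                    # common mean vector
    S_alt = (scatter(Y1) + scatter(Y2)) / n    # group-specific means
    lrt = n * (np.linalg.slogdet(S_null)[1] - np.linalg.slogdet(S_alt)[1])
    return lrt, stats.chi2.sf(lrt, df=p)
```

As the text notes, with few subjects the asymptotic approximation is anti-conservative, which overstates the power reported in the $\chi^2$ columns of Table 1.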


Table 1. Power of the F test in repeated measures ANOVA, a χ² test for a difference between groups in a linear model with unstructured mean and covariance, and the nonparametric test proposed here, when there are n observations per group and either 3, 6 or 9 measurements per subject

          3 measurements           6 measurements           9 measurements
  n     χ²     F    nonpar       χ²     F    nonpar       χ²     F    nonpar
  5    0.60  0.21   0.29       0.94   0.41   0.20        NA   0.49   0.31
 10    0.83  0.52   0.51       0.93   0.73   0.56       0.94  0.84   0.61
 15    0.92  0.66   0.69       0.98   0.89   0.75       0.99  0.99   0.70
 20    0.98  0.80   0.84       1.0    0.97   0.86       1.0   1.0    0.81

For the results presented here, we generate the data from a multivariate normal distribution with mean linear in time and a first order autoregressive error structure. We suppose there are two equal sized groups, with each subject having the same number of measurements taken at the same times (equal numbers of measurements occurring at the same times are necessary for repeated measures ANOVA but not for the test developed here). We suppose the mean response function is linear in both groups with a common intercept. The value of the intercept used here is 4.0 and the slopes are −0.2 and −0.5. We set the standard deviation of the regression errors to 0.05, and the correlation between two adjacent measurements is 0.4. The time interval is [0, 1], with equally spaced measurements. All simulation results are based on 100 simulations. A sketch of this simulation design appears below.

The results are displayed in Table 1. Note that the power of the nonparametric method hardly depends on the number of repeated measures, whereas the parametric methods benefit greatly from the additional measurements; this is most likely due to the use of only the first upcrossing and first downcrossing. In addition, we note that there is a considerable loss of power relative to the unstructured model when the data are normally distributed. This indicates that if there are only a handful of variables under consideration and there is enough data to make diagnostics feasible, then fitting an appropriate parametric model is the best approach. Finally, note that the nonparametric test is comparable to repeated measures ANOVA when there are just a few measurements for each subject.
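A minimal sketch of the data generation just described; the function name and the use of numpy's default generator are our choices.

```python
import numpy as np

def simulate_group(n, times, slope, intercept=4.0, sigma=0.05, rho=0.4,
                   rng=None):
    """Trajectories with mean linear in time and AR(1) errors:
    sd 0.05 and correlation 0.4 between adjacent measurements."""
    rng = np.random.default_rng() if rng is None else rng
    times = np.asarray(times, dtype=float)
    p = len(times)
    lags = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    cov = sigma**2 * rho**lags           # AR(1) covariance matrix
    return rng.multivariate_normal(intercept + slope * times, cov, size=n)

times = np.linspace(0, 1, 6)             # 6 equally spaced measurements
Y1 = simulate_group(10, times, slope=-0.2)
Y2 = simulate_group(10, times, slope=-0.5)
```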

4. AN APPLICATION TO A MICROARRAY DATA SET

Here we examine the performance of the test on a data set collected to understand the role of mRNA decay in the regulation of gene expression. While it is widely accepted that the primary mechanism for control of gene expression is the regulation of transcription, it is becoming increasingly clear that other factors also contribute. For example, if two mRNA molecules decay at different rates once they are synthesized, the molecule that decays more rapidly would have to be transcribed at a higher rate in order for the equilibrium quantities of the two molecules to be equal. Thus, the rate of decay of an mRNA molecule is potentially an important factor in the regulation of gene expression. The basic strategy for measuring mRNA decay is to use a chemical (such as Actinomycin D, the agent used in these experiments) to stop transcription, then measure quantities of mRNA over time. Further background and a description of the analysis of a similar experiment can be found in Raghavan et al. (2002).

The data set consists of microarrays for two cell lines (H9 cells and Jurkat cells) measured at four time points, and there are three independent replications of the experiment (for a total of 24 arrays). With only three replicates per group, there are only $\binom{6}{3}/2 = 10$ possible values for the maximum of the log rank test, hence we use α = 0.1 as a cutoff for testing. One would expect correlation within each replicate since each replicate involves stopping transcription for a collection of cells, then obtaining RNA from the collection of cells at four time points. The arrays are oligonucleotide arrays produced by Affymetrix (HU95Av2) and contain probes for 12 625 transcripts. The question of primary interest is whether there are differences in the patterns of decay across cell lines, as such differences would indicate that the cells degrade mRNA molecules at different rates, thereby demonstrating that mRNA degradation is a mechanism of regulation of gene expression.

After normalizing the data as described in Raghavan et al. (2002), we applied our test to each gene separately. Of the 12 625 genes, we found that 3738 differ across conditions using α = 0.1, far exceeding what one would expect by chance. Figure 1 shows a histogram of the resulting p-values.

Fig. 1. The distribution of the p-values from the microarray experiment.

Note that we do not claim that any of these individual rejected hypotheses are indeed "true differences," so test multiplicity is not a concern here. This is because the primary goal is to determine if mRNA decay regulates gene expression in these cell lines, not to determine if a particular gene has a slope that differs between the two cell lines.

As an alternative, we could fit a linear model to the log expression levels and test for differences in the intercept and the slope across conditions. While it is reasonable to suspect that most genes would decay exponentially once transcription is stopped, there are other possibilities, especially if the chemical used to stop transcription is not completely effective. Another shortcoming of this approach is that it assumes independent errors within replicates over time, which would make tests on the slopes and intercepts anti-conservative. Despite these obvious shortcomings of this model based approach (neither of which is easily remedied given the small sample size), we find that 4833 genes have either the slope or intercept differing across cell lines with α = 0.1 (736 genes have different slopes and 4555 have different intercepts). There is a moderate amount of agreement between the two methods if we say that the parametric method finds a difference when either the slope or the intercept differs across conditions: if we classify genes as either altered at α = 0.1 or not altered, then Cohen's kappa for comparing the classifications is 0.55.

Many of the differences between the two cell lines are likely attributable to differences in the baseline levels of gene expression, but here the substantive question of interest is whether there are differences in the patterns of decay. Hence, we would like to know which of the genes detected to differ between the two cell lines do not merely reflect differences in baseline gene expression. From the parametric model, we find that only 218 genes have slopes that differ but intercepts that do not. Note that we would expect to find 10% of 12 625, or 1263, genes that differ in their slopes just by chance, and 90% of these (i.e. 1136, with a standard deviation of 32) would have intercepts that do not differ if there are no real differences between the two samples (assuming independence). If we use the usual nonparametric test for differences in gene expression at baseline, Wilcoxon's test, then we must use α = 0.1 due to the small number of replicates, as explained above (a code sketch of this per-gene test follows below). This test indicates that there are 3690 genes that differ at baseline. Of the 3738 genes detected to differ by the nonparametric method, 2495 differ at baseline by Wilcoxon's test; thus this nonparametric approach identifies 1243 genes that differ for reasons other than differences in baseline values, more than one would expect by chance.
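A sketch of the per-gene baseline comparison, assuming a recent SciPy with exact rank-sum p-values; with three replicates per group the smallest attainable exact two-sided p-value is 0.1, which is why α = 0.1 is forced.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def baseline_differs(base1, base2, alpha=0.1):
    """Flag genes whose baseline expression differs between cell lines by
    an exact two-sided Wilcoxon rank-sum test; base1, base2 are
    (genes x replicates) arrays of baseline measurements."""
    flags = np.empty(base1.shape[0], dtype=bool)
    for g in range(base1.shape[0]):
        p = mannwhitneyu(base1[g], base2[g], alternative="two-sided",
                         method="exact").pvalue
        flags[g] = p <= alpha
    return flags
```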


Fig. 2. The trajectories of gene expression for each replicate for six selected genes. The line type indicates the cell type.

One shortcoming of this nonparametric approach to detecting genes that differ for reasons other than baseline gene expression is that the power of the Wilcoxon test may be so much lower than the power of the test on the intercept in the linear model that we are unable to identify all the genes that differ at baseline. This is not a great concern, because one would also expect the power of the nonparametric longitudinal test to suffer relative to the parametric test. The differences between the two approaches can mostly be attributed to nonlinearities the parametric approach fails to detect, differing variances of the measurements about the regression lines (which the parametric approach does not test), differences in the robustness of the estimates and granularity in the p-value distribution of the permutation tests due to the small sample size.

Figure 2 shows some of the data plotted over time, with a line for each replicate and different line types for each of the cell lines. The first five panels are for genes for which the nonparametric approach detected a difference but the parametric approach did not. The first two plots (reading left to right, top row first) look as if there are nonlinearities in the trend over time in one group, the next one looks as if the variances of the measurements about the regression lines differ, and the next two look as if there is an outlier curve. The final plot shows data for a gene for which the nonparametric approach failed to detect a difference while the parametric approach did. While it is surprising that any method would miss this gene, the reason is the small sample size.


There are only 10 possible values for the test statistic, and for this gene some of these 10 values coincide; hence, although the test statistic is as extreme as possible, the p-value is not 0.1 because other permutations lead to the same value of the test statistic. In fact, 4371 genes are such that the test statistic is as extreme as possible, and 82% of these are significant at α = 0.1. This granularity in the distribution of the p-values is unlikely to be a problem unless there are very few replicates, as here.

5. DISCUSSION

Ultimately the role of this approach will be like that of an overall F test in a regression context: it will indicate whether some aspect of the data is inconsistent with the null, without much guidance as to what that aspect is. In a clinical trials setting, univariate outcomes are often used as end points and the repeated measurements are not used to assess the primary aim. This is a sensible strategy given that longitudinal modeling can be highly dependent on the analyst. When longitudinal data are used in such settings, a repeated measures ANOVA model is often used in practice, although it is not optimal. The popularity of this method among researchers stems from its perceived generality and lack of a need for user input. The test proposed here could be used in those situations: it requires no user input and is far more general than repeated measures ANOVA. Moreover, our simulations indicate that there is only a small loss of power unless the number of measurements over time is large (say 10), whereas the usual case in practice is no more than five repeated measures. Nonetheless, with sufficient data to specify a model and conduct diagnostic tests, a careful analysis that models the correlation structure is the best way to analyze a longitudinal data set with a few outcome measures (as witnessed in the simulations in Section 3). In the area of microarrays, the role of this test statistic is to screen all the genes for differences using this very general test; once a set of genes is recognized, one can then examine this smaller set in greater detail.

ACKNOWLEDGMENTS

Thanks to the NIH for the Great Lakes Center for AIDS Research grant 1P30-CA79458-01.

REFERENCES

ADLER, R. AND SAMORODNITSKY, G. (1997). Level crossings of absolutely continuous stationary symmetric α-stable processes. Annals of Applied Probability 7, 460–493.

BJÖRNHAM, Å. AND LINDGREN, G. (1976). Frequency estimation from crossing of an observed mean level. Biometrika 63, 507–512.

FLEISS, J. (1986). The Design and Analysis of Clinical Experiments. New York: Wiley.

ILLSLEY, R. (2001). Excursions of a stationary Gaussian process outside a large two-dimensional region. Advances in Applied Probability 33, 141–159.

KEDEM, B. (1978). Sufficiency and the number of level crossings by a stationary process. Biometrika 65, 207–210.

LEADBETTER, M. AND SPANIOLO, G. (2002). On statistics at level crossings by a stationary process. Statistica Neerlandica 56, 152–164.

LINDGREN, G. (1974). Spectral moment estimation by means of level crossings. Biometrika 61, 401–418.

RAGHAVAN, A., OGILVIE, R., REILLY, C., ABELSON, M., RAGHAVAN, S., VASDEWANI, J., KRATHWOHL, M. AND BOHJANEN, P. (2002). Genome-wide analysis of mRNA decay in resting and activated primary human T lymphocytes. Nucleic Acids Research 30, 5529–5538.

[Received June 2, 2004; revised November 12, 2004; accepted for publication November 16, 2004]