ACM Transactions on Information Systems, Volume 23, Number 1 (January 2005)


Introduction to Genomic Information Retrieval

Genomic information retrieval (GIR) is an emerging area at the intersection of information retrieval (IR) and bioinformatics. Bioinformaticians have defined their discipline as the “research, development, or application of computational tools and approaches for expanding the use of biological, medical, behavioral or health data, including those to acquire, store, organize, archive, analyze, or visualize such data.” [1] This definition clearly overlaps with that of information retrieval. Indeed, IR practitioners can bring substantial expertise to bioinformatics by applying and innovating with techniques used in text and multimedia retrieval for storing, organizing, analyzing, and visualizing data.

The application domain of GIR is broad. Expertise can be applied to the retrieval and management of collections of genomic data (such as DNA and protein sequences), medical and biological literature, microarray data, or other bioinformatics domains. Problems that can be solved include understanding the structure, function, and role of proteins, discovering evidence for the causes of disease, and annotating medical and biological artifacts. However, the skills needed to work on such problems are broad: sufficient understanding of biology and genetics, information retrieval expertise, and skills in fundamental computer science techniques such as compression, string matching, data mining, and data structures and algorithms. This special section serves to bring together articles that show how diverse, important bioinformatics problems can be solved with GIR. As editor of this section, I was pleased by the response to the call for papers, and particularly impressed by the breadth of work submitted for review.

Articles In This Section

This special section contains two articles that describe solutions to very different problems in GIR.

Korodi and Tabus describe, to my knowledge, the most effective DNA compression scheme to date. Compression of DNA is interesting for both practical reasons (such as reduced storage and transmission costs) and functional reasons (such as inferring structure and function from compression models). DNA consists of four symbols, and so compaction of the ASCII representation to two bits per symbol is trivial. However, compression to less than two bits per symbol is much harder: for example, compression with the well-known bzip2 algorithm is ineffective, typically leading to expansion. Prior to the work described in Korodi and Tabus's article, the most effective schemes (described in detail in the article) achieved compression to between 1.41 and 1.86 bits per symbol. This article describes a scheme that is always more effective than other approaches, achieving between 1.36 and 1.82 bits per symbol. Their approach is based on a normalized maximum likelihood model, with heuristics that make the scheme fast on desktop hardware.

[1] This is a working definition proposed by the US National Institutes of Health. The working document can be found at http://grants1.nih.gov/grants/bistic/CompuBioDef.pdf.

The article by Sander et al. proposes and examines techniques for analyzing Serial Analysis of Gene Expression (SAGE) data. SAGE data is a new but important resource that allows the comparison of normal human tissues, cells, and cancers. The work in this article shows that SAGE data is robust, that is, that data produced by different laboratories is comparable and can be used together for large studies. Importantly, the work also suggests interesting similarities between brain and ovarian cancer, and shows that brain and breast cancer have strong gene expression signatures. This may be an interesting direction for further biomedical research. The GIR techniques described and used include methods for preprocessing the SAGE data, clustering and selecting data subsets of interest, and methods for presenting and understanding SAGE experiments.

ACKNOWLEDGMENTS

I thank the referees for their hard work in reviewing and providing feedback on the papers, in some cases, to very tight deadlines. I also thank the authors of all submitted papers. Last, I am grateful to the Editor-in-Chief, Gary Marchionini, for supporting the GIR special section, and to Associate Editor Javed Mostafa for his hands-on support and mentoring throughout the reviewing, discussion, and finalizing of the special section. I hope that TOIS readers enjoy the special section on Genomic Information Retrieval.

HUGH E. WILLIAMS
RMIT University
Melbourne, Australia


An Efficient Normalized Maximum Likelihood Algorithm for DNA Sequence Compression

GERGELY KORODI and IOAN TABUS
Tampere University of Technology

This article presents an efficient algorithm for DNA sequence compression, which achieves the best compression ratios reported over a test set commonly used for evaluating DNA compression programs. The algorithm introduces many refinements to a compression method that combines: (1) encoding by a simple normalized maximum likelihood (NML) model for discrete regression, through reference to preceding approximately matching blocks, (2) encoding by a first-order context coder, and (3) representing strings in clear, to make efficient use of the redundancy sources in DNA data while keeping execution times fast. One of the main algorithmic features is the constraint that matching blocks include reasonably long contiguous matches, which not only reduces the search time significantly, but also can be used to modify the NML model to exploit the constraint for obtaining smaller code lengths. The algorithm handles the changing statistics of DNA data in an adaptive way, and by predictively encoding the matching pointers it is successful in compressing long approximate matches. Apart from a comparison with previous DNA encoding methods, we present compression results for the recently published human genome data.

Categories and Subject Descriptors: E.4 [Coding and Information Theory]—Data compaction and compression; G.3 [Probability and Statistics]—Correlation and regression analysis; Markov processes; J.3 [Life and Medical Sciences]—Biology and genetics

General Terms: Algorithms, Theory

Additional Key Words and Phrases: Approximate sequence matching, DNA compression, normalized maximum likelihood model

Authors’ address: Institute of Signal Processing, Tampere University of Technology, P.O. Box 553, FIN-33101 Tampere, Finland; email: {gergely.korodi,ioan.tabus}@tut.fi.

1. INTRODUCTION

Being the definitive code book for the traits and biological functional behavior of every living organism, DNA sequences are the most challenging information sources that humans try to decipher and use. An enormous quantity of different DNA sequences exists, the size of each sequence varying in the range of millions to billions of nucleotides. In both scientific and commercial communities there is intense activity targeted at sequencing the DNA of many species and studying the variability of DNA between individuals of the same species, which produces
huge amounts of information that needs to be stored and communicated to a large number of people. Therefore there is a great need for fast and efficient compression of DNA sequences.

The traditional way to tackle the lossless compression of data is to make use of statistically significant repetitions of the symbols or groups of symbols in the data. Many of the commonly used general-purpose methods eliminate the redundancies of exact replicas of a certain subsequence by using efficient referencing to its closest occurrence in the past. Unfortunately, these general-purpose compression algorithms typically fail to attain any size reduction when applied to DNA sequences, which implies that these sequences rarely possess the kind of regularities (like high predictability of the next symbol based on its context) that are typical for, say, human-created text. However, it is well known that redundancy is commonly present in DNA, but it involves more subtle forms, such as palindrome matches and approximate repetitions. The effective handling of these features is the key to successful DNA compression.

The direct practical advantages of compressing DNA sequences stem from the already large and rapidly increasing amount of available data. Although in recent years mass storage media and devices became easily affordable and ubiquitous, data compression now has an even more important role in reducing the costs of data transmission, since DNA files are typically shared and distributed over the Internet. Space-efficient representation of the data reduces the load on FTP service providers, as the transmissions are done faster, and it also saves costs for clients who access and download the sequences. Since the price of transmission is proportional to the sizes of accessed files, even savings of 10–20% are useful. In this article we demonstrate that this size reduction can be easily accomplished for typical eukaryotic data.

The compression of DNA has an intrinsic value in itself, but additionally it provides many useful clues to the nature of regularities that are statistically significant in the DNA data, indicating how different parts of the sequence are related to each other, how sensitive the genome is to random changes such as crossover and mutation, what the average composition of a sequence is, and where the important composition changes occur. Due to its many potential uses as a statistical analysis tool for DNA sequences, any DNA compression method is faced with all of the challenges that modeling tools are facing: maximizing the statistical relevance of the model used, and choosing between various models based on the overall performance-complexity merit. One example of a statistical question easily solved by an efficient compression scheme is whether or not a certain approximate match of two subsequences is statistically relevant. To solve this, one simply has to check whether encoding one subsequence based on a reference to the other subsequence, and also including the description of the mismatches, is able to compress the overall DNA sequence or not. To maximize the likelihood of a correct answer, the compression program that is used has to be the best known compressor, otherwise statistically relevant matches will remain undetected. This shows that designing the most efficient compressor is a goal of paramount importance for statistical and
biological inference [Li et al. 2001; Li et al. 2003], even if the gains over other compressors are just marginal when judging only from the storage savings benefits.

The complexity of the models that are implicitly or explicitly used by each specific lossless compression method for capturing the regularities in the data should be weighed carefully when comparing various compression methods, due to its impact on aspects such as statistical relevance of model parameters, simplicity of parameter estimation, simplicity of using the model for encoding, and attainable compression rate.

The present article discusses a compression method aimed at achieving very efficient compression with a fast execution time. The foundations for this work have been laid down in Tabus et al. [2003a], which is in turn based on the Normalized Maximum Likelihood model presented in Tabus et al. [2003b]. Whereas in Tabus et al. [2003a] the simplicity of the model was one of the stipulated objectives, here we relax this restriction in favor of increased compression efficiency and practical feasibility. As a result, in Section 2 we introduce the framework for our algorithm, more complex than the one considered in Tabus et al. [2003a], followed by a detailed discussion and analysis in Section 3 regarding implementation issues, solutions and performance evaluations. In Section 4 we present the empirical evaluation of our method, concentrating on constraints such as compression efficiency, processing speed and memory requirements. We show that at the time of this writing our program surpasses in terms of compressibility all previously published results on a de facto set of test sequences. The practical feasibility of our program is demonstrated by encoding the entire human genome, achieving a compression rate of 1.66 bits/base.

1.1 Connections with Previous Works

The first compression algorithm dedicated exclusively to nucleic acid sequences, and hence significantly surpassing general-purpose methods, was published by Grumbach and Tahi [1993]. Their program, called Biocompress, used a Lempel-Ziv style substitutional algorithm [Ziv and Lempel 1977] that, apart from the exact direct matches, was also able to detect complementary palindromes, which are exact matches of an invertible transform of the subsequence, obtained by taking the sequence in reversed order and replacing each base with its complement (A ↔ T and C ↔ G). Such palindrome matches are in general as frequent as direct matches, being a redundancy source worth exploiting for compression. In a later method, called Biocompress-2, Grumbach and Tahi [1994] improved Biocompress by adding an order-2 context coder followed by arithmetic coding [Rissanen 1976].

A new generation of DNA compression algorithms appeared when practical use was made of the data redundancy involved in approximate repetitions [Chen et al. 2001, 2002]. Especially efficient are two algorithms introduced in Chen et al. [2001] by Chen, Kwong and Li: the first version, GenCompress-1, again used the Lempel-Ziv algorithm and an order-2 context coder, but here the substitutional method only used replacement operations when matching two subsequences. In the second version, GenCompress-2, insertion and deletion
operations were allowed in addition to the replacements, when looking for an approximate match. However, this did not prove to offer consistent improvements over GenCompress-1, according to the reported results [Chen et al. 2001], leaving open the usefulness of considering insertions and deletions in addition to substitutions in their scheme. In both GenCompress-1 and GenCompress-2, along with the pointer to the matched subsequence (including offset and length), the encoder has to provide codes for the edit operations involved in the approximate match, in order that compression be lossless. Establishing codes for the edit operations was an empirical process and, as such, subject to overtraining with respect to the test data that was used.

An earlier substitutional method, Cfact [Rivals et al. 1995], used a two-pass coding strategy: in the first pass the exact matches that would provide a gain in compression were detected in the overall file, and in the second pass the encoding was performed. As such, it can be thought of as a precursor to one of the best DNA compression algorithms, DNACompress, introduced by Chen, Li, Ma, and Tromp [Chen et al. 2002], which also employs a two-pass strategy, and is again based on a substitutional (Lempel-Ziv style) compression scheme. In the first pass a specialized program called PatternHunter is used as a preprocessor for finding significant approximate repetitions. The second pass then encodes these by a pointer to their previous occurrence, and it also encodes the edit operations to correct the approximations of the match, while the rest of the sequences are coded using an order-2 context arithmetic coder. The success of DNACompress shows the importance of the method used for finding significant approximate matches, which in one way or another should be part of a substitutional compression algorithm.

We briefly mention a few major methods for aligning sequences, because they are well related to the approximate matching problem, and furthermore because their principle of search is worth exploiting in a compression algorithm for the approximate match search task. Locating in a database the sequences that are significantly well aligned with a given sequence was a task first solved in the seventies by exact computing methods based on dynamic programming, but the methods were too slow for searching large databases in a practical time. Wilbur and Lipman developed several algorithms for aligning two nucleotide sequences, such as in Wilbur and Lipman [1983], which was based on matching fixed-length seeds, that is, consecutive strings of nucleotides. This approach became popular and it was later refined in further algorithms with the same objective, such as the FASTA algorithm by Pearson and Lipman [1988], and the family of BLAST programs [Altschul et al. 1990], which became very popular ways of comparing one sequence against all the sequences of a database. A more recent search algorithm attaining significantly faster speed than BLAST programs at the same level of sensitivity is PatternHunter by Ma, Tromp and Li [Ma et al. 2002]. In contrast to earlier algorithms, it uses strings of nonconsecutive symbols as seeds for search. All of the mentioned methods are well optimized for the task of finding alignment scores for sequences of arbitrary (not necessarily the same) lengths, and they operate in two steps: first locating a potential good match using a seed, then
subsequently trying to extend the seed sequence on both sides and evaluating after each extension the matching (or alignment, if gaps are included) score, to get an approximate match as long as possible. This second part makes the process even more computationally expensive. In our compression program the approximate search task is more specialized: it only needs to compare blocks of the same size, and although we use the same principle of “seed”-based search acceleration, the subsequent operations are less expensive than those used by the BLAST family of programs, and we optimize them to get a very fast encoding algorithm. At the decoder side, the search process is not needed, and hence the decoding time is much faster than the encoding time, which is true for all substitutional compression methods.

Several other methods that do not use the substitution principle have also been proposed for DNA compression. Some of them only offer estimates of the achievable compression, or estimates of the entropy [Loewenstern and Yianilos 1997; Lanctot et al. 2000]; others are full compression programs. From the latter category we mention a combination of the context tree weighting (CTW) method with Lempel-Ziv coding called CTW + LZ [Matsumoto et al. 2000], which achieves very good compression results but is very slow, and a Burrows-Wheeler Transform-based compressor [Adjeroh et al. 2002], which attains only modest compression results. Though several fundamentally different algorithms have also been proposed in the 10 years following the introduction of Biocompress, Lempel-Ziv style algorithms have always represented the state-of-the-art in DNA compression.

A departure from Lempel-Ziv style algorithms is the method NMLComp presented in Tabus et al. [2003a], which replaces the Lempel-Ziv method by encoding with the Normalized Maximum Likelihood model for discrete regression [Tabus et al. 2003b]. In Tabus et al. [2003a] it was demonstrated that the NML model is well suited for encoding approximate matches based on replacement operations and also that it can be implemented with very low algorithmic and computational requirements. In our present article we introduce refinements and algorithmic improvements for the method introduced in Tabus et al. [2003a], directed towards speeding up the execution process and obtaining better compression results. The kernel of the method was presented in Tabus et al. [2003a] and it consists of block parsing the DNA sequence and using one of three options for encoding: the first uses a reference to a previously encoded DNA segment for conditionally encoding the current block, by means of a simple NML model, the second uses an order-1 context coder, and the third uses an “in clear” representation of the current block. The present article refines the kernel scheme, proposing improvements in the way the reference to the previous match is encoded, in how the very expensive search for the previous match can be accelerated by constraining the matches, with just a small price in coding performance, and in how the new match constraints can be used to derive a more efficient NML model, which is as easy to handle algorithmically as the one in Tabus et al. [2003a]. Additionally, more adaptive policies are introduced, such as the selection of the block size used in parsing, and scalable forgetting factors for the memory model. As a result of these and other algorithmic improvements, our
program achieves the best compression ratios published to date for a de facto standard test set, with very fast execution.

1.2 Organization of the Article

In Section 2 we start by reviewing the three encoders that compete for encoding a block of data, and present all components of Tabus et al. [2003a] that are necessary to make this paper self-contained. The NML encoder principle is discussed and some important issues are elaborated in more detail than in Tabus et al. [2003a], for example, the analysis of the threshold for deciding which strings to include in the normalization sum, in Sections 2.1.3 and 2.1.4. The selection of the block size becomes a more refined process, being intertwined with decisions over other important parameters, for example, forgetting memory, in an overall algorithm, which is presented in Section 2.3. Section 3 contains the other major algorithmic novelties in the article, first addressing the acceleration of the approximate match search, and then considering improvements in performance that can be achieved by exploiting the particular search algorithm used. A new version of NML encoding is derived that uses an additional constraint in the block model, namely that in each approximate match found, a contiguous run of matched characters of at least a (given) fixed length is guaranteed to exist. The algorithmic improvements are assessed over artificial random sequences (where expected values can be computed) and over a benchmark of DNA sequences. In Section 4 we compare our results with previous DNA compression programs over some DNA sequences, then evaluate various options in our algorithm by tuning it to different constraints, and finally we present the results of our program when compressing all of the 3 billion bases of the human chromosomes, revealing an average compressibility of 1.66 bits/base for the well-specified bases.

2. ALGORITHM DESCRIPTION

In this section we present an algorithm with the objective of attaining good compression of DNA sequences while being practically low-cost, especially on the side of the decoder. In order to efficiently process various types of DNA sequences, we build up our encoder from three modules, each with several parameters, targeting different types of redundancy in the input sequence. In Sections 2.1 and 2.2 we present these three algorithms, while the final encoder is described in Section 2.3.

2.1 The NML Model for Discrete Regression

The NML model for discrete regression is a powerful tool to detect hidden regularities in the input data [Tabus et al. 2003b]. Such regularities relevant to DNA compression are the approximate repetitions, by which one sequence can be efficiently represented using knowledge of a sequence that occurred earlier, by replacing, deleting or inserting characters in the earlier one to match the latter [Chen et al. 2001]. Our strategy to handle approximate repeats is to focus on blocks of relatively small length and study their encoding based on substitution operations alone (not considering insertions and deletions). Thus our
first objective is to match two sequences only by possibly substituting a certain number of bases in the first sequence, and to derive an efficient representation for the operations needed to recover the second sequence from the first. This demand is efficiently accomplished by a simplified version of the NML model, first described in Tabus et al. [2003a]. The idea behind this approach is to split the input sequence into fixed-length blocks, and to find for each block an appropriate regressor sequence in the already encoded—thus known—data to code that block. The regressor sequence should have minimal Hamming distance to the current block, among all possible sequences in the past. By our independent addressing of the blocks, we can model long approximate repeats well, since the block concatenation can model substitution operations (inside blocks), and insertions and deletions (by shifting the regressor blocks).

A more complex NML model would enable us to capture and eliminate more sophisticated redundancies from the DNA sequence. However, such a model would require a different choice of the regressor data, which naturally imposes larger costs for finding the best matching sequence, both in terms of speed and memory requirements. Since in practical applications the data to be processed can have enormous sizes (a typical chromosome of a eukaryote organism varies in the range of several tens up to several hundreds of millions of base pairs), the advantage of using complex NML models, which may offer only slightly better compression rates but at the same time increase computational requirements considerably, is not yet clear. For this reason, in the following we base our algorithm on the simpler NML variant first described in Tabus et al. [2003a]. In the present section we give an overview of this algorithm, as well as improvements that have not been discussed in Tabus et al. [2003a] and a suitable way of integrating them into an efficient implementation.

2.1.1 The Probability Model. From the statistical point of view, DNA sequences are messages s = s_0, s_1, s_2, . . . , s_{N_s−1} emitted by a source with an alphabet size of M = 4 symbols [Grumbach and Tahi 1993]. In the biological literature these symbols, called bases, are denoted by the letters A, C, G and T, but for convenience we associate with them the numbers 0, 1, 2 and 3 in respective order; thus s_k ∈ {0, 1, 2, 3}. In our method we split the sequence s into successive, nonoverlapping blocks of length n, where one generic block s_{ℓn}, . . . , s_{ℓn+n−1} will be referred to as y^n = y_0, . . . , y_{n−1}. For convenience, we drop the reference to the running index ℓ from the block notation, although we always consider y^n to be the “current” block, with y_0 = s_{ℓn}, . . . , y_{n−1} = s_{ℓn+n−1} for some integer ℓ. Each block y^n is encoded with the help of some regressor block s_k, . . . , s_{k+n−1}, written as x^n = x_0, . . . , x_{n−1}, in the already encoded stream, which does not necessarily start at a position that is a multiple of n. We denote by θ the probability of the event that the symbol at any position in the current block matches the corresponding symbol in the regressor block. If the symbols do not match, there is an additional choice of 1 in M − 1 to specify the correct base y_i, and thus the probability model is

    P(y_i | x_i; θ) = θ  if y_i = x_i,   ψ  if y_i ≠ x_i,    (1)
where ψ = (1 − θ)/(M − 1). From now on we assume that the symbols within the sequence are independent—though this is not true for real DNA sequences, it does provide us a simple model, which turns out to be very efficient for our purposes. The probability model (1) extended to blocks of n symbols is then

    P(y^n | x^n; θ) = θ^{Σ_{i=0}^{n−1} χ(y_i = x_i)} ψ^{Σ_{i=0}^{n−1} χ(y_i ≠ x_i)} = θ^{n_m} ψ^{n − n_m},    (2)

where χ(condition) is 1 if condition is true; 0 otherwise. The number of matching bases in blocks x^n and y^n is indicated by n_m. Consequently, the maximum likelihood estimate of the parameter θ is θ̂(y^n, x^n) = n_m/n, and the maximized likelihood is

    P(y^n | x^n; θ̂(y^n, x^n)) = (n_m/n)^{n_m} ((n − n_m)/(n(M − 1)))^{n − n_m}.    (3)

It turns out that there are many possible blocks that are dissimilar to the extent that there is no gain in using x^n for them as regressor. Collecting only the “good” blocks for which NML is profitable into the set Y_{x^n}, we normalize the maximized likelihood to obtain the universal model

    P̂(y^n | x^n) = P(y^n | x^n; θ̂(y^n, x^n)) / Σ_{z^n ∈ Y_{x^n}} P(z^n | x^n; θ̂(z^n, x^n))
                 = (n_m/n)^{n_m} ((n − n_m)/(n(M − 1)))^{n − n_m} / Σ_{m ∈ Ω_n} \binom{n}{m} (M − 1)^{n−m} (m/n)^m ((n − m)/(n(M − 1)))^{n−m},    (4)

where the second expression makes use of the fact that the “good” blocks are clearly those with high probability in (3), and since this probability depends only on n_m, constraining Y_{x^n} is straightforwardly done by constraining n_m to a set Ω_n ⊆ {0, . . . , n}.

2.1.2 The NML Code Length. From (4) it follows that the code length that is spent on encoding the block y^n knowing the regressor block x^n is

    L_NML(n_m, Ω_n) = −log_2 P̂(y^n | x^n).    (5)

(The arguments of the code length, n_m and Ω_n, indicate that this length depends only on the number of matching bases and the set to which n_m is constrained.) The block x^n is most conveniently specified by a pointer to its start, and one bit indicating if direct or palindrome matching is used. Since the ℓth block y^n starts at position ℓn, and we exclude regressor sequences overlapping this block, the pointer to x^n can be encoded with log_2(ℓn − n + 1) bits. However, to prevent exploding run-time requirements, in practice it is often important to introduce a window W = {ℓn − n − w + 1, . . . , ℓn − n} of size w in which the search for the starting position of the regressor is carried out. This way the pointer can also be sent in log_2 w bits, so the number of bits needed to completely specify the block x^n is either 1 + log_2(ℓn − n + 1) or 1 + log_2 w, whichever is smaller. This means that the overall number of bits the NML-algorithm spends on encoding y^n is

    L_1(n_m, Ω_n) = L_NML(n_m, Ω_n) + log_2 min{(ℓ − 1)n + 1, w} + 1.    (6)
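To make Eqs. (4)–(6) concrete, the following Python sketch evaluates the NML code length for a block of n bases with n_m matches when the normalization set is restricted to Ω_n = {N, . . . , n}. It is an illustration only; the function names and the direct floating-point evaluation of the normalization sum are our own choices, not taken from the article (the actual encoder works with the simplified form of Section 2.1.3 and an arithmetic coder).

```python
from math import comb, log2

def nml_code_length(n, n_m, N, M=4):
    """Sketch of Eqs. (4)-(5): bits needed by the NML model for a block of n
    symbols with n_m matches, normalization restricted to m in {N, ..., n}."""
    def max_likelihood(m):
        # Eq. (3) evaluated at n_m = m; Python's 0**0 == 1 handles the edges.
        return (m / n) ** m * ((n - m) / (n * (M - 1))) ** (n - m)
    norm = sum(comb(n, m) * (M - 1) ** (n - m) * max_likelihood(m)
               for m in range(N, n + 1))
    return -log2(max_likelihood(n_m) / norm)

def nml_total_cost(n, n_m, N, block_index, w):
    """Eq. (6): NML length plus pointer and direction bit for the regressor."""
    return nml_code_length(n, n_m, N) + log2(min((block_index - 1) * n + 1, w)) + 1
```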

Fig. 1. NML code length when Y contains all possible blocks (unconstrained normalization) and when Y_{x^n} contains only the blocks with a number of correct matches larger than N = 30, for n = 48.

The size of the search window is typically much smaller than the DNA sequence. Throughout the remainder of this section we are going to consider a window of size w_0 = 2^18, because this size permits fast execution. When the window size is important, we indicate it in the upper index of the code length, such as

    L_1^{w_0}(n_m, Ω_n) = L_NML(n_m, Ω_n) + 19,    (7)

which is valid when the whole window W is already available (that is, when the index ℓ of the current block satisfies 0 ≤ ℓn − n − w + 1).

2.1.3 Restricting the Set of Possible Regressors. The NML code length is a function of the search area w, the block size n and the number of matching bases n_m. Figure 1 shows with dash-dotted line how the code length L_1^{w_0}(n_m, Ω_n) looks as a function of n_m when the block size n is 48 and Ω_n = {0, . . . , n}. We see that if n_m < 30, then L_1^{w_0}(n_m, Ω_n) > 96 = 2n, which means that writing the block in the clear representation with 2 bits per base is more advantageous. This suggests that we consider only those blocks as candidates for NML for which L_1^{w_0}(n_m, Ω_n) < 2n, which is seen to happen for Ω_n = {30, 31, . . . , 48}. In general, the task is to determine a threshold N = N(w, n) for n_m so that the NML code is used for n_m ≥ N and the clear code otherwise. This means that Ω_n becomes the set {N, . . . , n}, and we write L(n_m, Ω_n) as L(n_m, N). The optimal threshold depends on the window size, which is not constant at the beginning of the file. However, experience shows that little is lost if we determine the threshold for the maximum window size w. The optimal threshold
is seen to be determined by the smallest N satisfying the implicit equation

    N(w, n) = min{n_m | L_NML^w(n_m, N(w, n)) + log_2 w + 1 < 2n}.    (8)

The NML code length simplifies to the form

    L_NML(n_m, N) = −log_2 P̂(y^n | x^n)
                  = −log_2 [ (n_m/n)^{n_m} ((n − n_m)/(n(M − 1)))^{n − n_m} / Σ_{m ≥ N} \binom{n}{m} (M − 1)^{n−m} (m/n)^m ((n − m)/(n(M − 1)))^{n−m} ]
                  = −n_m log_2 (n_m/n) − (n − n_m) log_2 ((n − n_m)/n) + log_2 Σ_{m ≥ N} \binom{n}{m} (m/n)^m ((n − m)/n)^{n−m} + (n − n_m) log_2 (M − 1).    (9)

Introducing the notation

    C_{n,N} = Σ_{m ≥ N} \binom{n}{m} (m/n)^m ((n − m)/n)^{n−m}    (10)

for the normalization factor of an n bits long binary mask containing at least N ones, we can write the code length as follows:

    L_NML(n_m, N) = log_2 C_{n,N} − n_m log_2 (n_m/n) − (n − n_m) log_2 ((n − n_m)/n) + (n − n_m) log_2 (M − 1).    (11)

2.1.4 NML Threshold. The optimal threshold N(w, n) of NML for a specific block size and search area can be easily calculated from (8), since N belongs to a small, finite set. Table I shows these values for w_0 = 2^18. Notice that these values can be approximated very well by a linear function (this is generally true for other window sizes as well). The linear function minimizing the mean square error for N(w_0, n) is Ñ(w_0, n) = 0.4242n + 9.5455, which is also shown in Table I (the values of Ñ(w_0, n) are rounded). For comparison, we calculated the practically optimal threshold for w_0 on a set of test DNA sequences by compressing it with all possible thresholds and taking the value for the smallest compressed size. These, again for various block sizes, are included in column “Exp.” of Table I. We note that the theoretical values agree well with the experimental ones. In the rest of the article we use the thresholds given by (8), which are supposed to be tabulated in both the encoder and decoder.

2.1.5 Encoding with the NML Model. Encoding a block of characters y^n with the NML model is outlined in the following steps:

(1) find the best regressor x^n,
(2) encode the position and direction (normal or palindrome) of x^n,
(3) encode the binary mask b^n where b_i = χ(y_i = x_i),
(4) correct the non-matching characters indicated by b^n.
Table I. Comparison of Threshold Values: the Optimal Value from (8), in the Second Column its Approximation as a Linear Function, and in the Third Column Experimental Values Found on a Test Set of 17 Files, Totaling 1.615 MB. The Window Size is w_0 = 2^18

n     Eq. (8)   Ñ(w_0, n)   Exp.
24    19        20          19
32    23        23          23
40    27        27          27
48    30        30          30
56    34        33          34
64    37        37          37
72    40        40          41
80    43        43          44
88    47        47          47
96    50        50          50
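As a rough check on the values above, Eq. (8) can be solved numerically. The sketch below is an illustration under our own reconstruction of Eqs. (8)–(11) (direct summation, no tabulation); if that reconstruction is right, it should reproduce the “Eq. (8)” column of Table I, for example N = 19 for n = 24 with w_0 = 2^18.

```python
from math import comb, log2

def mask_normalizer(n, N):
    # C_{n,N} of Eq. (10).
    return sum(comb(n, m) * (m / n) ** m * ((n - m) / n) ** (n - m)
               for m in range(N, n + 1))

def nml_cost(n, n_m, N, M=4):
    # Eq. (11); the 0 * log2(0) terms are treated as 0.
    cost = log2(mask_normalizer(n, N)) + (n - n_m) * log2(M - 1)
    if n_m:
        cost -= n_m * log2(n_m / n)
    if n - n_m:
        cost -= (n - n_m) * log2((n - n_m) / n)
    return cost

def threshold(n, w=2 ** 18):
    """Smallest N for which a block with exactly N matches is still cheaper
    under NML (plus pointer and flag, Eq. (6)) than the 2n-bit clear code."""
    for N in range(n + 1):
        if nml_cost(n, N, N) + log2(w) + 1 < 2 * n:
            return N
    return n

print([threshold(n) for n in range(24, 97, 8)])   # expected: Eq. (8) column of Table I
```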

The regressor x^n is searched in the window s_{max{ℓn−n−w+1, 0}}, . . . , s_{ℓn−n}, where ℓn is the current position. The match can be either normal or palindrome, but in both cases, it must be situated fully inside the window. Details about how the search is carried out are given in Section 3.1. Once x^n is found, its location is encoded in

    log_2 min{(ℓ − 1)n + 1, w} + 1    (12)

bits (Section 2.1.2). The binary mask b^n cannot be encoded in one step, since its probability

    P̂(b^n) = (n_m/n)^{n_m} ((n − n_m)/n)^{n − n_m} / C_{n,N}    (13)

for large block sizes is generally too low to be handled even with 64 bits precision. To solve this, we first output the number of matching bases n_m according to the probability model

    P(n_m) = Σ_{b^n : Σ_i b_i = n_m} P̂(b^n) = \binom{n}{n_m} (n_m/n)^{n_m} ((n − n_m)/n)^{n − n_m} / C_{n,N},    (14)

then the binary mask b^n is encoded bit-wise with the distribution

    P(b_k = 0) = (n − k − n(k))/(n − k),   P(b_k = 1) = n(k)/(n − k),    (15)

where n(k) = Σ_{j=k}^{n−1} b_j. The overall length spent on b^n this way is

    −log P(n_m) − log P(b^n | n_m) = log C_{n,N} − n_m log (n_m/n) − (n − n_m) log ((n − n_m)/n).    (16)

Finally, the bases at non-matching positions can be corrected in log(M − 1) bits each, because at such positions we can exclude the base x_i. This requires (n − n_m) log(M − 1) bits, which when added to (12) and (16), gives the overall NML code length shown in (6).
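The bit-wise encoding of the mask can be illustrated with a few lines of Python. The sketch below follows our reconstruction of Eq. (15), in which b_i = 1 marks a matching position and n(k) counts the ones from position k onward; with these conventions the per-bit probabilities multiply out to 1/\binom{n}{n_m}, which is how Eq. (16) follows from Eq. (14).

```python
from math import comb, log2

def mask_bit_probs(mask):
    """Per-bit probabilities of Eq. (15) for a given mask b^n (1 = match)."""
    n, ones_left = len(mask), sum(mask)       # ones_left is n(k) for k = 0
    probs = []
    for k, b in enumerate(mask):
        p_one = ones_left / (n - k)           # P(b_k = 1) = n(k) / (n - k)
        probs.append(p_one if b else 1.0 - p_one)
        ones_left -= b
    return probs

mask = [1, 1, 0, 1, 1, 1, 0, 1]               # n = 8, n_m = 6
bits = -sum(log2(p) for p in mask_bit_probs(mask))
# Coding the mask bit-wise costs exactly log2 C(n, n_m) bits, so together with
# -log2 P(n_m) the total reduces to Eq. (16).
assert abs(bits - log2(comb(8, 6))) < 1e-9
```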

2.1.6 Block Size. An important parameter for the algorithm given in Section 2.1.5 is the block size n. Since in general the block size providing the best compression can vary according to local changes in the DNA sequence, we propose a scheme where several NML models are used with different block sizes, and the best one for the current block is selected for encoding. In this scheme, the block sizes form a geometric progression; they start from a basic size n_0, and progressively increase by a factor of δ. The parameter C gives the number of different block sizes. It turns out that for practical reasons additional restrictions have to be imposed on the block sizes; this is discussed in Section 3.2. An evaluation of different parameters n_0, δ and C, as well as other variables of the algorithm, is provided in Section 4.1.

2.1.7 Match Prediction. Approximate and exact matches of several hundreds or even thousands of base pairs are not uncommon in DNA sequences. Since such matches generate a significant amount of redundancy, their efficient compression is important. When NML encodes a match much longer than its block size, the match is split into blocks that are processed independently, though obviously there is considerable redundancy among the positions of the best matches, as they come in an arithmetic progression. Since encoding the position generally requires a considerable number of bits (e.g. 18 + 1 = 19 bits for w_0 = 2^18, which is quite high compared to the total size of the clear representation, like 96 bits for n = 48), we can be more efficient by “guessing” the position of the best match for the next block, rather than writing it out each time. Let p_ℓ be the palindrome indicator of the best match of the ℓth block (p_ℓ = 0 for normal matches, p_ℓ = 1 for palindromes) and q_ℓ its position. Define

    q̂_{ℓ+1} = −1         if n_m < g(n),
            = q_ℓ + n    if n_m ≥ g(n) and p_ℓ = 0,    (17)
            = q_ℓ − n    if n_m ≥ g(n) and p_ℓ = 1.

The function g(n) specifies, for a certain block size, the minimal number of matching bases such that the block is worth being used for prediction. We have found empirically that g(n) = N(n) + 1 is a good choice. When q̂_{ℓ+1} < 0, the best match for the (ℓ+1)th block is written out in clear. If q̂_{ℓ+1} ≥ 0, a flag bit signals whether the predicted match q̂_{ℓ+1} or the actual best match q_{ℓ+1} is used. Only in the latter case is the position q_{ℓ+1} sent in log((ℓ − 1)n + 1) bits. Because of the significant difference between the cost of transmitting q̂_{ℓ+1} and q_{ℓ+1}, it may happen that encoding with the regressor pointed to by q̂_{ℓ+1} is more efficient than encoding with the regressor pointed to by q_{ℓ+1}, even if the latter has a higher n_m than the predicted regressor. Table II(a) illustrates some statistics justifying this approach. It shows that in practice the predicted match is often preferable, because we can save the bits otherwise necessary for encoding the position and direction of the regressor block.
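A literal transcription of the prediction rule (17) is given below; the function name and argument order are ours, not the article's. The encoder then spends at most one flag bit per block to tell the decoder whether this prediction or an explicitly transmitted position is used.

```python
def predict_position(q_prev, p_prev, n_m_prev, n, g):
    """Eq. (17): predicted regressor position for the next block, or -1 when
    the previous block's match was too weak to extrapolate (n_m < g(n))."""
    if n_m_prev < g:
        return -1
    return q_prev + n if p_prev == 0 else q_prev - n
```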

Table II. (a) The Percentages of NML-Encoded Blocks in Which the Best Match is Used (Labeled “Best”), the Predicted Match is Used and it Differs from the Best Match (Labeled “Pred”), and the Predicted Match is the Same as the Best Match in NML (Labeled “Equal”). (b) The Respective Percentages of All Blocks in Which the Three Algorithms (Clear Representation, Order-1 Context and NML with n = 48) Prove to be the Best. In Both Tests the Files were MPOMTCG, CHMPXX and HUMGHCSA, with Compression Rates 1.901, 1.672 and 1.067, Respectively

(a)           Best    Pred    Equal
MPOMTCG       71.2    2.0     26.8
CHMPXX        14.6    0.4     85.0
HUMGHCSA      37.9    15.0    47.0

(b)           Clear   Context   NML
MPOMTCG       43.9    44.5      11.6
CHMPXX        12.3    77.7      10.0
HUMGHCSA      18.2    9.7       72.1

Table III. Compression Rates of Context Coders of Different Orders, in Bits per Symbol. From Each File Shown, the Blocks that Could Not be Compressed Efficiently by NML were Collected in a Stream and then Processed with Context Coders

Order         0       1       2       3       4
MPOMTCG       1.972   1.961   1.960   1.962   1.968
CHMPXX        1.824   1.810   1.812   1.814   1.825
HUMGHCSA      1.990   1.933   1.937   1.949   1.980

2.2 Alternative Coders

NML relies on the existence of an approximate match in the past. On the other hand, nucleotides in DNA sequences often exhibit features specific to local redundancy—for example, abrupt changes in statistics that are unique to a certain segment of the sequence. More noticeable, and fairly frequent, examples are when only two or three of the four bases alternate in a segment, or when the probability of a certain base changes significantly, though always just transiently. Such cases can occur unexpectedly, making it impossible for NML to handle them effectively, or at all. Because of the local irregularities we cannot rely on repeating subsequences, thus these types of segments are best processed with some low-order, adaptive Markov model. Though the compression performance of an order-1 or order-2 context coder falls short of that of NML in general, it is still beneficial if local redundancy is stronger than global redundancy in a segment. In such cases, a sophisticated algorithm would signal the change, and switch to the context coder to process the current segment. This approach was presented in Grumbach and Tahi [1994] and more recently in Chen et al. [2001], where an order-2 context coder was used when the primary substitutional method could not perform better. However, using our test sequences we have found that an order-1 model is just as adequate, especially because it uses fewer parameters that need to be updated, while at the same time offering similar performance. It is also important to mention that lower-order models can adapt more quickly to local changes, and to certain types of data (and DNA seems to be in this category). This leads to improved efficiency compared to higher-order, but also more robust, models. Table III illustrates this phenomenon on some DNA sequences.

In light of these, for each block y^n we also consider an order-1 model with parameters η(a_k | a_j) = n(a_k | a_j) / Σ_a n(a | a_j) and the associated code length

    L_2(y^n) = −Σ_{i=1}^{n} log_2 η(y_i | y_{i−1}).    (18)

This length is further compared to L_1 as described in Section 2.3, and if the context coder performs better, the block y^n is encoded with it and the frequencies n(y_i | y_{i−1}) are collected for all pairs (y_i, y_{i−1}) for i = 1, . . . , n in the block and they are added to the corresponding counts n(a_k | a_j). Otherwise, if the current block is not encoded by the context coder, its model is not updated either. In order to make the model react to sudden changes, we introduce a parameter H. If the frequency n(a_k | a_j) of a certain context a_j ∈ {A, C, G, T} exceeds H, this frequency and the frequencies of all bases appearing in that context, n(a_i | a_j), i = 1, . . . , M − 1, are halved. Smaller values of H make the model more adaptive, but less accurate. As with the NML block sizes, we have found that a good way is to use several models with different H parameters forming a geometric progression (see the final algorithm in Section 2.3).

While the NML and context coder algorithms can achieve very good compression on redundant blocks, they can also expand, instead of compressing, some blocks when no redundancy is found. Following the paradigm of Grumbach and Tahi [1994], we limit the worst-case code lengths by also considering the clear representation for each block y^n with the obvious code length

    L_3(y^n) = 2n.    (19)
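The order-1 coder of this section is easy to prototype. The sketch below is an interpretation, not the authors' implementation: the initial counts of 1 and the exact trigger for halving (here, the total count of a context exceeding H) are assumptions where the text leaves room.

```python
from math import log2

class Order1Coder:
    """Sketch of the adaptive order-1 model of Eq. (18) with count halving."""
    def __init__(self, H, alphabet="ACGT"):
        self.H = H
        # Assumption: all conditional counts start at 1.
        self.counts = {c: {d: 1 for d in alphabet} for c in alphabet}

    def code_length(self, block, prev):
        """L_2(y^n) = -sum_i log2 eta(y_i | y_{i-1}); counts are not touched."""
        bits, ctx = 0.0, prev
        for sym in block:
            row = self.counts[ctx]
            bits -= log2(row[sym] / sum(row.values()))
            ctx = sym
        return bits

    def update(self, block, prev):
        """Called only when the context coder actually encoded the block."""
        ctx = prev
        for sym in block:
            row = self.counts[ctx]
            row[sym] += 1
            if sum(row.values()) > self.H:     # rescale to stay adaptive
                for d in row:
                    row[d] = max(1, row[d] // 2)
            ctx = sym
```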

Table II(b) compares the three algorithms on some DNA sequences. The numbers show the percentages of blocks where a particular algorithm performed better than the other two. Even though NML is relatively rarely used in some cases, its contribution to the compression performance is almost always the most significant.

2.3 The GeNML Algorithm

In this section we present our solution for DNA sequence compression, intended for maximizing the compression ratio while keeping the algorithm usable in practical applications for even the largest genomes. The Genome NML algorithm (or GeNML for short) is constructed from the modules discussed in Sections 2.1 and 2.2, where for the NML method and the context coder we consider various parameter settings. Since selecting from a large number of algorithms would impose high bit rates for specifying which method is used to compress the current block, we divide the DNA sequence into larger entities called macroblocks, arrange the algorithms in several groups each having a specific setting for the parameters of the three coders (NML, context and clear coders), and encode the macroblock with the group that gives the best compression on it. This way, inside the macroblock only three algorithms need to be selected for each block. The formal specification of the final algorithm is shown in Figure 2.

Fig. 2. The specification of the GeNML algorithm. This algorithm compresses a DNA sequence s = s_0, s_1, . . . , s_{N_s−1}. The parameters are: base block size (n_0), context halving limit (H_0), step size (δ), group number (C), window size (w, used by NML implicitly). The numbers in parentheses in Step 2.3.1 refer to the corresponding equations in Section 2.

In the algorithm, Step 1 initializes the variables. The smallest NML block size (divisible by 8) is n_0, the largest halving limit for the context coder is H_0, the step size is δ, and C instances are created from each algorithm (with different settings). One instance of NML, context coder and clear coder form a group, so there is a total of C groups. The variable m denotes the macroblock size. Step 2 shows how to encode a generic macroblock selected by the running index k. Steps 2.1–2.4 compute the code length in which each group would encode the macroblock. The code length, denoted by L_n, is a function of the variable index n, which is the block size of the NML coder in the given group (these block sizes are unique among different groups). The “then” branch of Step 2.4 gives the correspondence between the parameter settings of subsequent groups: the NML block sizes form an increasing geometric progression, while the context coder halving limits form a decreasing geometric progression. Note that these steps do not yet encode the data, and as such, models are not updated here. When all the C groups have been examined, the best of them is selected in Step 2.5 and its NML block size and context halving limit are stored in the variables n_b and H_b, respectively. This group is used to encode the macroblock in Step 2.6, and after each block the model of the algorithm that encoded that block is updated.
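Since Figure 2 itself is not reproduced in this text, the following Python-style sketch restates the control flow just described. Everything concrete in it (the coders object with its cost and encode methods, the way the macroblock is sliced) is hypothetical glue of ours; only the structure (C parameter groups, tentative costing with frozen models, then encoding with the winning group) follows the description above.

```python
def genml_encode(seq, n0, H0, delta, C, m, coders):
    """Control-flow sketch of GeNML; 'coders' is a hypothetical object that
    bundles one NML + order-1 context + clear coder per parameter group and
    exposes cost() (no model update) and encode() (with model update)."""
    # Step 1: C groups; NML block sizes grow and halving limits shrink
    # geometrically (block sizes kept divisible by 8, cf. Section 3.2).
    groups = [(8 * round(n0 * delta ** c / 8), max(1, round(H0 / delta ** c)))
              for c in range(C)]
    total_bits = 0
    for start in range(0, len(seq), m):                  # Step 2: per macroblock
        macro = seq[start:start + m]
        costs = [coders.cost(macro, start, n, H) for n, H in groups]   # 2.1-2.4
        n_b, H_b = groups[costs.index(min(costs))]                     # 2.5
        total_bits += coders.encode(macro, start, n_b, H_b)            # 2.6
    return total_bits
```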

3. IMPLEMENTATION

In Section 2.3 we presented an algorithm suitable for obtaining good compression ratios on DNA sequences. However, since our objective is to create a practically feasible encoder, complexity issues are also important to consider. While we sacrifice the requirement of small and simple models in favor of improved compression performance, computational overheads and memory requirements remain very important and as such, subject to further analysis and discussion. Our final model is basically a composition of NML, order-1 context and clear copy encoders, all of these followed by arithmetic encoding [Rissanen 1976]. The probabilities of the symbols needed by the arithmetic coder are provided by the two modules: the NML coder and the order-1 context coder, described in Sections 2.1.5 and 2.2.

In the following we focus on the efficient implementation of the algorithmic steps of NML coding, since they are the most demanding in terms of memory requirements and execution time. The NML coding has two parts: the search for the best regressor, and encoding with the regressor. While in principle these parts can be implemented independently, in practice it is easy to integrate them, resulting in a 1-pass encoding algorithm. The compression is done block-wise on the input data, and the best match for a particular block is computed only when it is needed. Later it will be shown that the search part uses certain data structures to accelerate the process; the structures that are independent of the input data can be generated prior to compression and stored in memory for fast access; data-dependent values are computed on the fly. Once the best match is found, the rest of the NML encoding process is fast to run, therefore in our DNA encoding model the only bottleneck from the viewpoint of execution speed is the search part for NML coding, which is described next.

3.1 The Search for the Best Regressor

The task is to find for each block the previous, nonoverlapping segment that has minimal Hamming distance (or the maximal matching score) to that block. Mathematically this is accomplished by evaluating the expressions

    S_1 = max{ Σ_{j=0}^{n−1} χ(s_{i+j} = s_{ℓn+j}) : (ℓ − 1)n − w + 1 ≤ i ≤ (ℓ − 1)n }    (20)

and

    S_1^p = max{ Σ_{j=0}^{n−1} χ(s_{i+n−j} = P(s_{ℓn+j})) : (ℓ − 1)n − w + 1 ≤ i ≤ (ℓ − 1)n }.    (21)

Here the function P defines the complementary bases in palindrome matches: P(A) = T, P(C) = G, P(G) = C, P(T) = A. Because the two expressions are similar, we focus our discussion on the normal matches, since the equivalent algorithm modifications for palindrome matches can be easily carried out. The most noticeable problem with the direct evaluation of S_1 is its immense computational requirements, as it performs wn comparisons for each block (we denote by w the size of the window, in which we are looking for matching blocks). For a DNA sequence having N_s bases, that would mean roughly 2N_s·w comparisons (taking palindrome matches into account, too). In order to be useful, a genome compression program must be able to handle sequences with N_s > 10^8, and we found w = 10^6 to be a conservative lower limit for the window size if one wants to achieve reasonably good compression. However, with these parameters the evaluation of S_1 is not practical even with the fastest computers. Thus the development of much faster search algorithms is of utmost importance. In this section we discuss such improvements.
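For reference, a direct transcription of Eqs. (20)–(21) looks as follows. It is the O(wn)-per-block baseline whose cost motivates Sections 3.2 and 3.3, not a recommended implementation; names and tie-breaking are our own choices.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def best_match_exhaustive(s, block_start, n, w):
    """Direct evaluation of Eqs. (20)-(21): score every window position for
    normal and palindrome matches; O(w*n) work per block."""
    lo = max(block_start - n - w + 1, 0)
    best_score, best_pos, best_palin = -1, None, 0
    for i in range(lo, block_start - n + 1):
        direct = sum(s[i + j] == s[block_start + j] for j in range(n))
        palin = sum(s[i + n - j] == COMPLEMENT[s[block_start + j]] for j in range(n))
        for score, flag in ((direct, 0), (palin, 1)):
            if score > best_score:
                best_score, best_pos, best_palin = score, i, flag
    return best_score, best_pos, best_palin
```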

For further analysis it is important to split the evaluation of S_1 into two parts. The first one concerns the computation of the matching score Σ_{j=0}^{n−1} χ(s_{p+j} = s_{q+j}) between two blocks of length n, starting at positions p and q, respectively. The simplest way to do this is the symbol-wise comparison, which we denote by K_1. The second part is the evaluation of the maximum, which requires an iteration through the existing blocks to find the best matching one. We refer to the exhaustive iteration, which sequentially goes through all positions in the window, by the notation I_1. In the following we show improvements for both parts.

3.2 Fast Computation of the Matching Score

In order to accelerate the computation of the matching score we compare several neighboring bases at once, so two sequences are compared to each other in one operation, which returns the number of matching bases in the sequences. Let us consider a computer architecture having instructions for bit-wise logical operations on words of B bits. Since each nucleotide can be represented by 2 bits, we can pack B/2 neighboring bases in one register u_{B−1} u_{B−2} . . . u_0, each base occupying the positions u_{2i+1} u_{2i}, 0 ≤ i < B/2. This way the content of the register represents a sequence of bases that we can match to another such sequence in one operation. Taking the bit-wise exclusive-or between the two registers, a “00” result at the location of one base indicates a corresponding match, so counting the occurrences of “00” at positions u_{2i+1} u_{2i} in the result gives us the number of matching bases in the sequences. If we tabulate all possible values of these results, the requested number can then be obtained with a single table look-up. We denote this version of the comparison kernel by K_2. Note that unlike K_1, K_2 is subject to the restriction that 2n should be divisible by B; otherwise, K_1 and K_2 are equivalent in terms of the result of the comparison.

In contrast to the algorithm K_1, which consisted only of comparisons and the necessary iteration, the kernel of K_2 is composed of table look-ups and exclusive-or operations. Not considering the administrative iteration, which is basically the same for both kernels, we have already pointed out that K_1 required wn comparisons per block for each direction, but this also involves 2wn table look-ups when the bases to be compared are read from memory. A straightforward implementation of K_2 would use 2wn/B exclusive-or operations and 6wn/B table look-ups. Because for current computer architectures it is reasonable to assume that there is no significant difference between the execution time of a comparison and an exclusive-or operation, we can conclude that K_2 needs B/2 times fewer operations and B/3 times fewer table look-ups than K_1. However, K_2 also requires the initialization and storage of some auxiliary data structures used for the comparison.

When choosing the value of B, we have to take into account that the size of the look-up tables that give the number of matching bases is B · 2^B bits, and that B-bit operations must be supported by conventional computer architectures. These restrictions are quite strict, and actually they rule out any values but B = 16; the value B = 32 is prohibitive because of memory requirements, and values below 16 are uninteresting because of their obviously inferior performance. By setting B to 16 in our implementation, the comparison K_2 could yield up to 4–5 times faster search when the DNA sequence size was much larger than the window size, compared to the conventional approach of K_1. For very short sequences, the one-time initialization of look-up tables becomes considerable, but even in such cases we have observed an average of 2 times improvement in speed for the whole process. We also note that choosing B = 16 implies that the block size be divisible by 8. In practice we have observed that there is little difference between the performances of neighboring block sizes, therefore such a restriction does not have any disadvantage in the scalability of block sizes.
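The idea behind the K_2 kernel can be sketched in a few lines of Python. The code below is illustrative only and, unlike the real encoder, assumes both blocks start at word-aligned positions (a real implementation must handle arbitrary offsets, for example by keeping shifted packings); it uses B = 16, that is, 8 bases per word.

```python
BASE_CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
B = 16                                    # word width: 8 bases of 2 bits each

def pack_blocks(seq):
    """Pack every aligned run of 8 bases into one 16-bit word."""
    words = []
    for k in range(0, len(seq) - 7, 8):
        w = 0
        for j, base in enumerate(seq[k:k + 8]):
            w |= BASE_CODE[base] << (2 * j)
        words.append(w)
    return words

# For every possible XOR of two packed words, count the 2-bit slots equal to
# 00, i.e. the number of matching bases (the tabulation described in the text).
MATCHES = [sum(((x >> (2 * i)) & 3) == 0 for i in range(8)) for x in range(1 << B)]

def matching_score(packed_a, packed_b):
    """K_2 kernel: one XOR and one table look-up per 8 bases."""
    return sum(MATCHES[a ^ b] for a, b in zip(packed_a, packed_b))
```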

3.3 Additionally Constraining the Approximate Matches for Accelerating their Search

We now concentrate on the most time consuming component of the coding algorithm, finding for each “current” block y^n = s_{ℓn}, . . . , s_{ℓn+n−1} the best matching block x^n = s_p, . . . , s_{p+n−1}, where the starting position of the block, p, can have only w different values, those in the past window W = {ℓn − n − w + 1, . . . , ℓn − n}. The exhaustive search algorithm, denoted by I_1, evaluates the matching score K at each of the w possible positions of p, and picks the absolute maximum

    n_m(ℓ, W) = max{ K(p, ℓn) | p ∈ W },    (22)

the overall effort being overwhelming for the values of n and w of interest. To get a practical algorithm, one needs simple ways to exclude (many) positions p that are unlikely to be good matches before the search is done, with the possible trade-off that we do not always find the best matching block, but rather the best matching block subject to a supplementary constraint. The most natural approach, taken by many homology search tools [Wilbur and Lipman 1983], for example the BLAST search algorithm [Altschul et al. 1990] and its derivatives, is to additionally require that the searched approximate matching sequence contain a contiguous match of length no less than r; that is, for a certain displacement $i \le n-r$ one can extract from the sequences $y^n = s_{\ell n}^{\ell n+n-1}$ and $x^n = s_p^{p+n-1}$ two identical "seed" sequences, $s_{\ell n+i}^{\ell n+i+r-1}$ and $s_{p+i}^{p+i+r-1}$, respectively. The search process can be considerably accelerated by imposing this constraint, since the pointers p to sequences $x^n$ not satisfying the additional constraint can be easily excluded from the set W by checking a list of possible seeds, determined once for the whole file.

The additional constraint on the approximate repetitions helps filter out the random repetitions, keeping only those that may have a (closely homologous) biological relevance. However, from the point of view of compression alone, only the matching score $n_m$ is important (the larger the $n_m$, the better the compression), and the new constraint that imposes the existence of a long enough contiguous match may lead to losing some segments with the highest $n_m$'s, which may slightly decrease the compression ratio. This makes the choice of the seed length r important, as it enables balancing the search acceleration against the decrease in compression ratio.


For enabling the fast exclusion process, the following preliminary task has to be performed before starting any of the searches: go through the file and mark the start of the seeds, that is, the start of all contiguous exact repetitions of length r, such that for each possible segment of r bases, denoted by $z^r$, we collect in a list $L(z^r)$ the set of positions where the segment occurs in the whole file $s_0, \ldots, s_{N_s-1}$. That is,

$L(z^r) = \{\, p \mid \forall i \in \{0,\ldots,r-1\} : s_{p+i} = z_i \,\}.$   (23)
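One possible way to build the lists $L(z^r)$ is sketched below in C; the chained-list representation, the 2-bit base coding and all names are assumptions made for illustration (the sketch additionally assumes r ≤ 15 so that a seed code fits in 32 bits), not a description of the authors' data structure.

```c
#include <stdint.h>
#include <stdlib.h>

/* Seed index: head[code] is the most recently seen start position of the
   r-base segment with the given 2r-bit code, next[p] chains earlier ones. */
typedef struct {
    int32_t *head;                    /* 4^r entries, -1 = empty list  */
    int32_t *next;                    /* one entry per file position   */
    int      r;
} SeedIndex;

static SeedIndex build_seed_index(const uint8_t *s, int32_t Ns, int r)
{
    SeedIndex idx;
    uint32_t  n_codes = 1u << (2 * r);          /* 4^r possible seeds   */
    uint32_t  mask    = n_codes - 1;
    uint32_t  code    = 0;

    idx.r    = r;
    idx.head = malloc(n_codes * sizeof *idx.head);
    idx.next = malloc((size_t)Ns * sizeof *idx.next);
    for (uint32_t c = 0; c < n_codes; c++)
        idx.head[c] = -1;

    for (int32_t p = 0; p + r <= Ns; p++) {
        if (p == 0) {                            /* pack the first seed        */
            for (int i = 0; i < r; i++)
                code = (code << 2) | s[i];
        } else {                                 /* slide the seed by one base */
            code = ((code << 2) | s[p + r - 1]) & mask;
        }
        idx.next[p]    = idx.head[code];         /* prepend p to L(z^r)        */
        idx.head[code] = p;
    }
    return idx;
}
```

Note that the head array alone has $4^r$ entries, which is consistent with the article's remark that larger seed lengths increase the memory needed for generating the lists $L(z^r)$.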

After these $4^r$ sets of pointers are collected, we start the process of finding matches for the consecutive blocks; we now describe the process at one such current block, $y^n = s_{\ell n}, \ldots, s_{\ell n+n-1}$. By going through the appropriate lists L, marked by seeds fully inside $x^n$, we construct the subsets of positions in W defined as

$M_{i,r}(W) = \{\, p \mid p \in W \wedge \forall j \in \{0,\ldots,r-1\} : s_{p+i+j} = s_{\ell n+i+j} \,\} = \{\, p \in W \mid p+i \in L\big(s_{\ell n+i}^{\ell n+i+r-1}\big) \,\}, \quad i \in \{0,\ldots,n-r\},$   (24)

which will be tested for finding the best match at the current block. As defined, the set $M_{i,r}(W)$ collects all positions p of W such that at p+i starts the seed identical to the contiguous segment $z^r = s_{\ell n+i}, \ldots, s_{\ell n+i+r-1}$ of length r, and we have such a set for each displacement variable $i \in \{0,\ldots,n-r\}$. The efficient search algorithm, denoted by I2, will test only the pointers in the sets $M_{i,r}(W)$, for $i \in \{0,\ldots,n-r\}$, and thus the value of the best (constrained) match found will be

$\tilde n_m(\ell, r, W) = \max\Big\{\, K(p, \ell n) \;\Big|\; p \in \bigcup_{i=0}^{n-r} M_{i,r}(W) \,\Big\}.$   (25)

We note that I2, like I1, can use any of the comparison functions $K(p, \ell n)$.

3.3.1 The Evaluation of the $M_{i,r}$ Sets. Even though generating the lists $L(z^r)$ and, consequently, the sets $M_{i,r}(W)$ is very fast, from (25) we see that it is actually the union of the $M_{i,r}$ sets that we need. The simplest way to achieve this is to go sequentially through the $M_{i,r}$ sets, for $i = 0, 1, \ldots, n-r$, then for each particular $M_{i,r}(W)$ go through its elements and process them only if they have not occurred before, for a displacement smaller than i. The already processed positions can be recorded in a flag array F, and also in a position list G that enables F to be cleared very quickly. With these data structures, the pseudo-code for iteration I2 is given in Figure 3. Note that even though the generation of the lists $L(z^r)$ is independent of the rest of the procedures, it can easily be integrated, resulting in a one-pass algorithm.

An alternative version of this iteration, called $I_2^*$, evaluates each set $M_{i,r}(W)$ completely, regardless of the sets for other offsets i. The candidate pointers are taken sequentially from the sets $M_{i,r}$, $i = 0, \ldots, n-r$ (where there is a risk that the same pointer is tested more than once) instead of being constrained to the nonredundant set obtained by the union $\bigcup_{i=0}^{n-r} M_{i,r}$. On the other hand, this version does not need the F array and the G list, saving memory and some additional operations.


Fig. 3. The algorithm of the I2 iteration. The parameters are: block size (n), seed length (r).
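The following C sketch mirrors the flow of the I2 iteration described above; it is not the authors' Figure 3, and it assumes the hypothetical SeedIndex from the previous sketch plus a placeholder comparison kernel score() standing in for K1 or K2.

```c
/* Placeholder for the comparison kernel K (symbol-wise K1 or packed K2). */
extern int score(const uint8_t *s, int32_t p, int32_t q, int n);

/* Best constrained match for the block starting at ell_n.  W = [w_lo, w_hi] is
   the past window, F a zero-initialized flag array of window length, G a scratch
   list used to clear F quickly.  Returns the best score (-1 if no candidate). */
static int best_constrained_match(const uint8_t *s, int32_t ell_n, int n,
                                  int32_t w_lo, int32_t w_hi,
                                  const SeedIndex *idx,
                                  uint8_t *F, int32_t *G, int32_t *best_p)
{
    int     best = -1, r = idx->r;
    int32_t n_seen = 0;

    for (int i = 0; i + r <= n; i++) {                /* seed displacement i        */
        uint32_t code = 0;                            /* seed of the current block  */
        for (int j = 0; j < r; j++)
            code = (code << 2) | s[ell_n + i + j];
        for (int32_t q = idx->head[code]; q >= 0; q = idx->next[q]) {
            int32_t p = q - i;                        /* candidate block start      */
            if (p < w_lo || p > w_hi || F[p - w_lo])
                continue;                             /* outside W or seen already  */
            F[p - w_lo] = 1;
            G[n_seen++] = p - w_lo;
            int sc = score(s, p, ell_n, n);
            if (sc > best) { best = sc; *best_p = p; }
        }
    }
    while (n_seen > 0)                                /* reset F for the next block */
        F[G[--n_seen]] = 0;
    return best;
}
```

Dropping the F/G bookkeeping and simply scoring every pointer returned by the seed lists gives the $I_2^*$ variant described above.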

Denoting the number of block comparisons done by an iteration algorithm I by |I| and the cardinality of a set M by |M|, we have $|I_1| = |W| = w$, $|I_2| = \big|\bigcup_{i=0}^{n-r} M_{i,r}(W)\big|$ and $|I_2^*| = \sum_{i=0}^{n-r} |M_{i,r}(W)|$. Though $|I_2| \le |I_2^*|$ always holds, the extra operations we save by omitting the data structures F and G allow the effectiveness of $I_2^*$ against $I_2$ to depend on the actual values of the seed length r and the block size n. For larger r's the redundancy among the sets $M_{i,r}(W)$, $i \in \{0,\ldots,n-r\}$, can be expected to decrease, whereas larger block sizes n make the penalty for a redundant block comparison more serious. So for small r and large n we prefer $I_2$, and $I_2^*$ for large r and small n. At this point we basically face three questions: (1) how much the iteration $I_2$ improves speed over $I_1$; (2) how much this improvement would be if we used $I_2^*$ instead of $I_2$; (3) how the application of $I_2$ instead of $I_1$ affects the compression ratio. The performance of $I_2$ and $I_2^*$ in terms of compression ratio is, of course, the same.

3.3.2 Acceleration of the Search by Increasing Seed Length. The constrained search $I_2$ is accelerated with respect to the exhaustive search by the fact that instead of w evaluations of the counting function K, only $\big|\bigcup_{i=0}^{n-r} M_{i,r}\big|$ evaluations are needed. Considering only the subset $\bigcup_{i=0}^{n-r} M_{i,r}$ of W has the main effect of filtering out the low-$n_m$ approximate matches that are expected to occur only by chance in the DNA string, similar to the repetitions occurring in any random string. We therefore introduce the indicator of the achievable average acceleration for $I_2$,

$A(n,r) = \dfrac{w}{E\big|\bigcup_{i=0}^{n-r} M_{i,r}\big|},$   (26)

and for $I_2^*$,

$D(n,r) = \dfrac{w}{\sum_{i=0}^{n-r} E|M_{i,r}|}.$   (27)

As we noted earlier, the achievable average acceleration for $I_2$ is bounded below as $A(n,r) \ge D(n,r)$. For a random string, having the symbols generated independently with the probability distribution $p(i)$, $i \in \{0,1,2,3\}$, the probability of a match between two bases is $P_c = \sum_{i=0}^{3} p(i)^2$, and since $E|M_{i,r}| = w \cdot P_c^r$,

$D(n,r) = \dfrac{1}{(n-r+1)\,P_c^r}.$   (28)
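For orientation only (an added illustration, not a result from the article): assuming a uniform base distribution, so that $P_c = 1/4$, and a block size n = 48, formula (28) gives $D(48,12) = 4^{12}/37 \approx 4.5\times10^{5}$, whereas $D(48,8) = 4^{8}/41 \approx 1.6\times10^{3}$; each extra seed base thus multiplies the expected filtering power by roughly $1/P_c = 4$.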


The probability of a match at the positions in the binary mask $b^n$, where there are $n_m$ ones, is $P(b^n) = P_c^{n_m}(1-P_c)^{n-n_m}$. We denote by $\alpha(r,n,n_m)$ the number of strings $b^n$ that have a run of ones of length at least r. The probability to test a pointer p is therefore $\sum_{n_m=N(w,n)}^{n} P_c^{n_m}(1-P_c)^{n-n_m}\,\alpha(r,n,n_m)$, and the expected number of elements in the constrained subset of $I_2$ is thus

$E\Big|\bigcup_{i=0}^{n-r} M_{i,r}\Big| = w \sum_{n_m=N(w,n)}^{n} P_c^{n_m}(1-P_c)^{n-n_m}\,\alpha(r,n,n_m).$   (29)

(The function N (w, n) was discussed in Section 2.1.3).

3.3.3 Study of the Recursions for Counting the Blocks Marked by Seeds. We denoted by $\alpha(r,n,n_m)$ the number of strings of length n having $n_m$ ones and a contiguous run of ones of length at least r, and symmetrically we denote by $\beta(r,n,n_m) = \binom{n}{n_m} - \alpha(r,n,n_m)$ the number of strings that do not have a run of length at least r. The $\alpha(r,n,n_m)$ strings of length n that have contiguous runs of ones of length r can be considered as strings with a shorter window, n-1, which are extended on the right side with a bit 0 (and there are $\alpha(r,n-1,n_m)$ such shorter strings) or with a bit 1 (and there are at least $\alpha(r,n-1,n_m-1)$ such strings). When extending with a 1, some of the strings that did not have a run of length r may also get such a long run. These new qualifying strings of length n-1 have the pattern $0\underbrace{1\ldots1}_{r-1}$ on the right side, and the n-r-1 bits on the left side have $n_m-r$ ones and must not contain any run of ones of length r or larger. The number of such strings is $\beta(r,n-r-1,n_m-r) = \binom{n-r-1}{n_m-r} - \alpha(r,n-r-1,n_m-r)$; thus the recursion formula is

$\alpha(r,n,n_m) = \begin{cases} 0, & \text{if } (n < n_m) \vee (n_m < r) \\ 1, & \text{if } (n = n_m) \wedge (n \ge r) \\ \alpha(r,n-1,n_m) + \alpha(r,n-1,n_m-1) + \binom{n-r-1}{n_m-r} - \alpha(r,n-r-1,n_m-r), & \text{otherwise.} \end{cases}$   (30)
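Recursion (30) translates directly into a memoized routine; the following C sketch is one possible transcription (the array bound, the use of doubles and the function names are assumptions of this illustration), and its cache is valid for a single seed length r per run.

```c
#define NMAX 128                          /* assumed upper bound on the block size n */

static double alpha_tab[NMAX + 1][NMAX + 1];
static int    alpha_set[NMAX + 1][NMAX + 1];

/* Binomial coefficient as a double (counts overflow 64-bit integers for large n). */
static double binom(int n, int k)
{
    if (k < 0 || k > n) return 0.0;
    double b = 1.0;
    for (int i = 1; i <= k; i++)
        b = b * (n - k + i) / i;
    return b;
}

/* alpha(r, n, nm): number of length-n binary strings with nm ones containing a
   run of ones of length at least r, computed with recursion (30). */
static double alpha(int r, int n, int nm)
{
    if (n < nm || nm < r) return 0.0;
    if (n == nm)          return 1.0;     /* here nm >= r, hence n >= r as well */
    if (alpha_set[n][nm]) return alpha_tab[n][nm];
    double a = alpha(r, n - 1, nm) + alpha(r, n - 1, nm - 1)
             + binom(n - r - 1, nm - r) - alpha(r, n - r - 1, nm - r);
    alpha_set[n][nm] = 1;
    alpha_tab[n][nm] = a;
    return a;
}
```

As a small sanity check, alpha(2, 4, 2) evaluates to 3, matching the three masks 1100, 0110 and 0011.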

Figure 4 shows the acceleration indicator computed using the exact formula (29) and the lower bound given by (28), where we see that the lower bound is not too distant from the true value. This suggests that the difference between the computational requirements of $I_2$ and $I_2^*$ regarding the number of block comparisons is not significant, and thus $I_2^*$ may be preferred in practical applications for its simplicity.

3.3.4 Changes in the NML Code Length due to Constrained Matching. The NML code length assignment can be improved by taking into account the new constraint on the pattern of the approximate matches, which now contains r contiguous exact matches. With the constrained search, the binary vectors $b^n$ expressing the pattern of approximate matching (with 1 for a match and 0 for a miss) are constrained to a set denoted by $B^n_{n_m,r}$, containing all vectors with $n_m$ ones, r of which are contiguous.


Fig. 4. Acceleration of the search using seeds of length r, when the block size is n = 48.

Therefore it is possible to adapt the formulas (11), (10), (13) and (14) to the search performed by algorithm $I_2$ in the following way: the new code length for NML (11) is

$L_{NML}(\tilde n_m, N(n), r) = -\log_2 \dfrac{\left(\frac{\tilde n_m}{n}\right)^{\tilde n_m}\left(\frac{n-\tilde n_m}{n(M-1)}\right)^{n-\tilde n_m}}{\sum_{m \ge N(n)} (M-1)^{n-m}\,\alpha(r,n,m)\left(\frac{m}{n}\right)^{m}\left(\frac{n-m}{n(M-1)}\right)^{n-m}},$   (31)

the normalization factor for $b^n$ (10) is

$C_{n,N_t} = \sum_{m \ge N(n)} \alpha(r,n,m)\left(\frac{m}{n}\right)^{m}\left(\frac{n-m}{n}\right)^{n-m},$   (32)

the probability of $b^n$ (13) is

$\hat P(b^n) = \dfrac{\left(\frac{\tilde n_m}{n}\right)^{\tilde n_m}\left(\frac{n-\tilde n_m}{n}\right)^{n-\tilde n_m}}{C_{n,N_t}},$   (33)

the probability of the number of matching bases (14) is

$P(\tilde n_m) = \sum_{b^n \in B^n_{\tilde n_m,r}} \hat P(b^n) = \dfrac{\alpha(r,n,\tilde n_m)\left(\frac{\tilde n_m}{n}\right)^{\tilde n_m}\left(\frac{n-\tilde n_m}{n}\right)^{n-\tilde n_m}}{C_{n,N_t}},$   (34)

and consequently, $P(b^n \mid \tilde n_m) = 1/\alpha(r,n,\tilde n_m)$.
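As a numeric illustration of (31) exactly as reconstructed above (a sketch under that reading, not the authors' code; alpha() is the memoized recursion sketched earlier, M = 4 is the DNA alphabet size, and the function name is an assumption):

```c
#include <math.h>

/* Code length (31) assigned to a constrained match with score nm, for block
   size n, minimum acceptable score Nmin = N(n) and seed length r. */
static double nml_code_length_constrained(int n, int nm, int Nmin, int r)
{
    const double M = 4.0;
    double denom = 0.0;
    for (int m = Nmin; m <= n; m++)
        denom += pow(M - 1.0, n - m) * alpha(r, n, m)
               * pow((double)m / n, m)
               * pow((double)(n - m) / (n * (M - 1.0)), n - m);
    double num = pow((double)nm / n, nm)
               * pow((double)(n - nm) / (n * (M - 1.0)), n - nm);
    return -log2(num / denom);
}
```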

3.3.5 Efficient Representation of the Matching Bit Mask. In this section we present an efficient and computationally low-cost solution that encodes a given bit mask $b^n$ according to the probability shown in (33). This method replaces the algorithm for the simple probability distribution (15).


We have to encode the bit mask $b^n$ and we know the following: (a) the block size n, (b) the number of matches $n_m = \sum_{i=0}^{n-1} b_i$, (c) $b^n$ contains an r bits long contiguous sequence of 1's. We parse $b^n$ sequentially, and for any position $k \in \{0,\ldots,n-1\}$ we determine $P(b_k = 0 \mid b_0 b_1 \ldots b_{k-1})$, taking into account the already known bits. We define $N_T(k)$ as the number of possible sequences $b_k \ldots b_{n-1}$ subject to the constraints (a), (b), (c) and the known bits $b_0 \ldots b_{k-1}$. Similarly, let $N_0(k)$ denote the number of possible sequences $b_k \ldots b_{n-1}$ subject to the previous constraints and additionally requiring $b_k = 0$. As in Section 2.1.5, we define $n(k) = \sum_{i=k}^{n-1} b_i$, the number of remaining 1's in $b^n$. Then we clearly have $P(b_k = 0) = \frac{N_0(k)}{N_T(k)}$.

Once we have seen a contiguous sequence of 1's of length r, the constraint (c) is void for the rest of $b^n$ and thus $N_T(k) = n-k$, $N_0(k) = n-k-n(k)$. Otherwise, let $\ell(k)$ denote the number of contiguous 1's immediately preceding position k (i.e., $\ell(k) = 0$ if either $k = 0$ or $b_{k-1} = 0$; $\ell(k) = 1$ if $b_{k-1} = 1$ and $b_{k-2} = 0$; etc.). If $b_k = 0$, then an r long run of 1's must lie fully inside $b_{k+1} \ldots b_{n-1}$, which means that

$N_0(k) = \alpha(r, n-k-1, n(k)).$   (35)

In order to determine $N_T(k)$, let us denote by t the index of the first 0 relative to k, that is, $b_k = b_{k+1} = \cdots = b_{k+t-1} = 1$, $b_{k+t} = 0$. If there are no more 0's in the sequence, then $b_k, \ldots, b_{n-1}$ are already determined, as all of them are 1. Otherwise, $0 \le t < n-k$. If $\ell(k) + t < r$, then the bits $b_{k+t+1}, \ldots, b_{n-1}$ must contain an r long run of 1's, which can be arranged in $\alpha(r, n-k-t-1, n(k)-t)$ ways. Otherwise, if $\ell(k) + t \ge r$, then we have the r long run of 1's from $b_{k-\ell(k)}$ to $b_{k+t-1}$, and only the constraints (a) and (b) apply to the following bits, making the total number of possible arrangements $\binom{n-k-t-1}{n(k)-t}$. Summing these over all possible values of t, we get

$N_T(k) = \sum_{t=0}^{r-\ell(k)-1} \alpha(r, n-k-t-1, n(k)-t) + \sum_{t=r-\ell(k)}^{n-k-1} \binom{n-k-t-1}{n(k)-t}.$   (36)

Now that we know both $N_0(k)$ and $N_T(k)$, we can compute $P(b_k = 0)$, but calculating the complex expression (36) would be time consuming. However, notice that if $\ell(k) = 0$, then

$N_T(k) = \alpha(r, n-k, n(k)),$   (37)

and for $\ell(k) > 0$: $\ell(k-1) = \ell(k) - 1$, $n(k-1) = n(k) + 1$, which, when substituted into the expansion of $N_T(k-1)$ from (36), gives

$N_T(k) = N_T(k-1) - \alpha(r, n-k, n(k)+1).$   (38)

Expression (38) can be illustrated in the following way: when we advance from position k-1 to k in $b^n$ and find that $b_{k-1} = 1$, we lose all of those possible combinations from the total that could have finished the bit mask correctly with $b_{k-1} = 0$. Since such a 0 would interrupt any preceding run of 1's, the r long run would still have to come in $b_k, \ldots, b_{n-1}$, and that can be done in $\alpha(r, n-k, n(k-1)) = \alpha(r, n-k, n(k)+1)$ ways. So we have to subtract this amount from the total.


Fig. 5. Algorithm for encoding an n bits long array $b^n$ that has $n_m$ 1's and an r bits long contiguous run of 1's, at the total cost of the optimal $\log \alpha(r, n, n_m)$ bits. The parameters are: block size (n), number of 1's ($n_m$), seed length (r).

Expressions (37) and (38) provide an easy way to compute the values of $N_T(k)$ recursively, as follows. Whenever the previous bit in $b^n$ is 0 (or we are at the first position), we write the current bit with the probability distribution $P(0) = \alpha(r, n-k-1, n(k))/\alpha(r, n-k, n(k))$, $P(1) = 1 - P(0)$, and set $N_T = \alpha(r, n-k, n(k))$. Otherwise, $P(0) = \alpha(r, n-k-1, n(k))/(N_T - \alpha(r, n-k, n(k)+1))$, and $N_T$ is again set to the denominator. This goes on until a contiguous run of r 1's is encountered, when the distribution is switched back to (15). The algorithm that implements this is shown in Figure 5.
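A C sketch consistent with the rule just described follows; it is not the code of Figure 5, encode_bit() is only a placeholder for a binary arithmetic coder, and alpha() refers to the memoized recursion sketched earlier.

```c
/* Encode the bit mask b[0..n-1], which has nm ones including a run of at least
   r ones, by feeding P(b_k = 0) to a binary arithmetic coder. */
static void encode_mask(const uint8_t *b, int n, int nm, int r,
                        void (*encode_bit)(int bit, double p0))
{
    int    remaining   = nm;       /* n(k): ones still to come in b[k..n-1]       */
    int    run         = 0;        /* l(k): ones immediately preceding position k */
    int    constrained = 1;        /* run of r ones not yet seen?                 */
    double NT          = 0.0;

    for (int k = 0; k < n; k++) {
        double p0;
        if (!constrained) {
            p0 = (double)(n - k - remaining) / (double)(n - k);   /* distribution (15) */
        } else if (run == 0) {
            NT = alpha(r, n - k, remaining);                      /* expression (37)   */
            p0 = alpha(r, n - k - 1, remaining) / NT;
        } else {
            NT = NT - alpha(r, n - k, remaining + 1);             /* expression (38)   */
            p0 = alpha(r, n - k - 1, remaining) / NT;
        }
        encode_bit(b[k], p0);
        if (b[k]) {
            run++; remaining--;
            if (constrained && run >= r)
                constrained = 0;           /* constraint (c) satisfied from here on */
        } else {
            run = 0;
        }
    }
}
```

The decoder can mirror the same probability computation step by step, so both sides stay synchronized without transmitting anything beyond n, $n_m$ and r.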

3.3.6 Decrease in Compression Ratio for Increasing Seed Length. The search constrained only to the seeds has the clear disadvantage that it will miss some of the matches having a high number of hits. For an independent, random sequence the probability of missing an existing match of $n_m$ hits is

$P(\text{miss} \mid n_m) = 1 - \dfrac{\alpha(r,n,n_m)}{\binom{n}{n_m}}.$   (39)

This probability is illustrated in Figure 6(a), where the values of (39) are plotted against the number of matching bases for different values of r, for the random string. Figure 6(b) shows empirical results obtained on a test set of real DNA sequences that were processed by both $I_1$ and $I_2$. The number of times the match found by $I_2$ was worse than the best match was recorded, from which the probability of a miss was computed. Comparing Figure 6(b) to Figure 6(a) shows that for DNA sequences the constraint imposed by $I_2$ is even less serious than it is for random sequences, since in DNA good matches have contextual correlation, which implies the existence of long runs of matching bases in the block. Generally, any miss of $I_2$ with respect to $I_1$ is unlikely to have a devastating effect, since there are probably other block matches with high values of $n_m$ that $I_2$ can find.


Fig. 6. (a) Probability of miss of a matching block with nm correct matches by I2 , using seeds of length r, when the block size is n = 48, for random files. (b) Probability that I2 missed the best match with score nm , on a DNA test set containing 17 files, total size 1.615 MB.

The average length for encoding a block of length n when using the NML algorithm is

$E L_{I_1} = \sum_{n_m=N}^{n} P(n_m \mid I_1)\big(L_{NML}(n_m, N) + 1\big) + \Big(1 - \sum_{n_m=N}^{n} P(n_m \mid I_1)\Big)(2n+1)$   (40)

with exhaustive search, and

$E L_{I_2^*} = E L_{I_2} = \sum_{n_m=N}^{n} P(n_m \mid I_2)\big(L_{NML}(n_m, N, r) + 1\big) + \Big(1 - \sum_{n_m=N}^{n} P(n_m \mid I_2)\Big)(2n+1)$   (41)

with constrained search. In order to consider a scenario close to the statistics of DNA sequences, we have collected the empirical distribution $P_{DNA}(n_m)$ from a DNA file and redefined $P(n_m \mid I_1) = P_{DNA}(n_m)$ and $P(n_m \mid I_2) = \frac{\alpha(r,n,n_m)\,P_{DNA}(n_m)}{\binom{n}{n_m}}$. The relative decrease in the compression ratio is given by

$\dfrac{E L_{I_2} - E L_{I_1}}{E L_{I_1}},$   (42)

as shown in Figure 7. Figure 7 also shows the reduction in performance of the GeNML algorithm for different seed lengths. Note that the reduction for GeNML is consistently smaller than the values of (42). This is because $I_2$ is more likely to miss a match with a score close to N(w,n), in which case (42) takes the clear length instead. However, for such blocks GeNML can always use the context coder, which in these cases may be more effective than NML with iteration $I_1$, since the matching score is so low. The presence of the context coder thus attenuates the decrease in compression performance of $I_2$ compared to $I_1$.

4. RESULTS

In this section we show the results of our implementation of the GeNML algorithm, as discussed in Sections 2.3 and 3.


Fig. 7. Relative reduction in compression ratio $(E L_{I_2} - E L_{I_1})/E L_{I_1}$ when using seeds of length r, against the case of exhaustive search, when the block size is n = 48. Circular marks show $P(\text{miss} \mid n_m)$ obtained on a random sequence, but with probabilities $P(n_m)$ collected from the DNA sequence HUMGHCSA. Triangular marks show the change in performance of GeNML on the same file.

In Section 4.1 we show some experimental evaluation and tuning of the various parameters used by the algorithm of Section 2.3. In Section 4.2 we compare the GeNML performance to other existing DNA compression programs, while in Section 4.3 the compression results for the complete human genome are given. We have implemented the GeNML algorithm in the C programming language. The test platform for all of the tests in this section was a Pentium 4 processor running at 2.8 GHz under the GNU/Linux operating system.

4.1 Parameter Evaluation

For the evaluation of the GeNML algorithm we used several "real" DNA sequences. It turns out that the window size w in which the search for the best regressor is carried out is best kept as large as permitted by resources, since increasing the window size almost always increases compression efficiency. Apart from w, the most important parameter is probably the NML block size n; we have found that a useful range for n is from 20 up to 120. Test results have also confirmed that the implementation restriction that confines the block size to the set {8i | 3 ≤ i ≤ 15} (see Section 3.2) did not result in any significant loss in compression performance.

As introduced in Section 2.1.6, the parameter C defines the number of different NML block sizes; n0 is the smallest block, and δ is the factor of the progression. Here we observed that for a fixed C and varying n0 and δ, the compression performance of our algorithm is essentially the same, as long as n0 and δ


are set so that they cover a broad range. We have found the best settings to be C = 3 and δ = 2 with n0 ∈ {24, 32} (i.e., the block-size sets {24, 48, 96} or {32, 64, 128}), which make the algorithm fairly efficient and robust for all kinds of DNA sequences; testing other values of n0, δ and C showed only marginal changes in compression performance. Generally, decreasing δ and increasing C provides marginally better compression, but at the cost of significantly slower speed.

The seed length r (Section 3.3) is also an important factor for adjusting the program to resource limitations. Specifying values of 12 and above enables the use of very large window sizes, but we must also take into account the growing memory requirements for generating the lists $L(z^r)$. In practice a window size of 10 MB is recommended for very large genomes, but it then practically necessitates r ≥ 10. For sequences of a few hundred kilobases up to a few megabases the processing speed is fast anyway, so here we suggest r = 8 for better compression efficiency.

To illustrate the effects of varying the parameters w, r, C, n0 and δ, we have compressed the complete 64-megabase human chromosome 20 with various settings. This chromosome has 6.6% undetermined bases (denoted by the symbol "N" in the FASTA format), which were also included. The results are shown in Table IV. The last row of the table gives the performance of the general-purpose compression program bzip2 for comparison; from this we can see that small window sizes and large seed lengths make GeNML quite competitive with bzip2 in terms of resource requirements, while still achieving significantly better compression performance.

4.2 General Comparison

The practical evaluation of the compression performance of our algorithm was done so that it is easily comparable with the published results describing the performance of Biocompress-2 [Grumbach and Tahi 1994], GenCompress-2 [Chen et al. 2001], DNACompress [Chen et al. 2002] and CTW-LZ [Matsumoto et al. 2000], which all used the same set of DNA sequences. Unfortunately, these sequences are available in different versions, as biological databases are updated and corrected, and we used the reported file sizes to check whether our sequences were the same as those used in the earlier articles. In one such case, for the file HUMHBB, we were unable to find the version with matching file size, so we have omitted this particular file from the test. For the rest of the files, the compression results, in bits per base, are shown in Table V. These files include sequences from humans (HUMDYSTROP, HUMGHCSA, HUMHDABCD and HUMHPRTB), the complete genomes of two mitochondria (MPOMTCG and MTPACG), two chloroplasts (CHMPXX and CHNTXX), and the complete genomes of two viruses (HEHCMVCG and VACCG). For each file we set the window size used by GeNML to the size of the actual file.

Encoding all the files in Table V took 6.14 seconds on our test platform; the decoding time for the whole set was 0.63 seconds. The performance of the general-purpose compression program bzip2 is also included, and it illustrates the importance of DNA-specific improvements.


Table IV. The Evaluation of GeNML on the 64 Megabase Long Human Chromosome 20. The Columns are: Window Size w, Seed Length r, Block Size Sequence (C, n0, δ), (Rate) Compression Rate in Bits Per Base, (CT) Encoding Time in Seconds, (CM) Memory Used for Encoding in Mbytes, (DT) Decoding Time in Seconds, (DM) Memory Used for Decoding in Mbytes

    w       r    C   n0   δ     Rate      CT      CM    DT     DM
  100 kB   12    1   48   -    1.671     118    64.7    32    0.1
                 3   24   2    1.661     207    64.7    31    0.1
                 3   32   2    1.661     212    64.7    31    0.1
           10    1   48   -    1.668     126     4.7    32    0.1
                 3   24   2    1.658     228     4.7    31    0.1
                 3   32   2    1.658     235     4.7    31    0.1
            8    1   48   -    1.666     182     1.0    32    0.1
                 3   24   2    1.657     355     1.0    31    0.1
                 3   32   2    1.657     396     1.0    31    0.1
    1 MB   12    1   48   -    1.634     173    71.0    31    1.0
                 3   24   2    1.622     346    71.0    31    1.0
                 3   32   2    1.622     383    71.0    31    1.0
           10    1   48   -    1.630     270    11.0    31    1.0
                 3   24   2    1.619     592    11.0    31    1.0
                 3   32   2    1.619     693    11.0    31    1.0
            8    1   48   -    1.628    1166     7.3    31    1.0
                 3   24   2    1.618    3368     7.3    31    1.0
                 3   32   2    1.617    3886     7.3    31    1.0
   10 MB   12    1   48   -    1.612     850   134.0    32   10.0
                 3   24   2    1.599    2479   134.0    31   10.0
                 3   32   2    1.598    3061   134.0    31   10.0
           10    1   48   -    1.609    2109    74.0    32   10.0
                 3   24   2    1.596    6352    74.0    31   10.0
                 3   32   2    1.596    7692    74.0    31   10.0
            8    1   48   -    1.607   12529    70.3    32   10.0
                 3   24   2    1.595   38948    70.3    31   10.0
                 3   32   2    1.594   45668    70.3    31   10.0
  bzip2 (switch -9)            1.924      29     7.6    11    3.9

We note that on the files where our method cannot improve on the known results, none of the previous approximate matching methods could provide an improvement over Biocompress-2 either, and we may conclude that those files (CHNTXX, HEHCMVCG, VACCG) have no (or only very few) statistically significant approximate repeats. Nevertheless, our results improve on the best results known to date for all the files in which approximate repetitions may play an important role, demonstrating our better handling of the encoding of approximate repeats.

Table VI shows the comparison between bzip2, Cfact, GenCompress-2 and GeNML using the data from Rivals et al. [1995] and Chen et al. [2001]. The encoding and decoding times of GeNML were 0.78 and 0.09 seconds, respectively, on the platform described at the beginning of this section. We note that the importance of the way one handles the replacement operations is more noticeable with this test set, and the consistently increased gain of GeNML over the previous methods indicates the better efficiency of GeNML in implicitly encoding the replacement operations.


Table V. Comparison of the Compression (in Bits per Base) Obtained by the Algorithms bzip2 (switch -9), Biocompress-2 (Bio2), GenCompress-2 (Gen2), CTW-LZ (CTW), DNACompress (DNA) and GeNML

  Sequence       Size    bzip2   Bio2   Gen2   CTW    DNA    GeNML
  CHMPXX        121024   2.12    1.68   1.67   1.67   1.67   1.66
  CHNTXX        155844   2.18    1.62   1.61   1.61   1.61   1.61
  HEHCMVCG      229354   2.17    1.85   1.85   1.84   1.85   1.84
  HUMDYSTROP     38770   2.18    1.93   1.92   1.92   1.91   1.91
  HUMGHCSA       66495   1.73    1.31   1.10   1.10   1.03   1.01
  HUMHDABCD      58864   2.07    1.88   1.82   1.82   1.80   1.71
  HUMHPRTB       56737   2.09    1.91   1.85   1.84   1.82   1.76
  MPOMTCG       186608   2.17    1.94   1.91   1.90   1.89   1.88
  MTPACG        100314   2.12    1.88   1.86   1.86   1.86   1.84
  VACCG         191737   2.09    1.76   1.76   1.76   1.76   1.76

Table VI. Comparison of the Compression (in Bits per Base) Obtained by the Algorithms bzip2 (switch -9), Cfact, GenCompress-2 and GeNML

  Sequence       Size    bzip2   Cfact   GenCompress-2   GeNML
  ATATSGS         9647   2.15    1.79        1.67         1.60
  ATEF1A23        6022   2.15    1.58        1.54         1.52
  ATRDNAF        10014   2.15    1.81        1.79         1.76
  ATRDNAI         5287   1.96    1.47        1.41         1.36
  CELK07E12      58949   1.91    1.71        1.61         1.46
  HSG6PDGEN      52173   2.07    1.93        1.80         1.71
  MMZP3G         10833   2.13    1.91        1.86         1.82
  XLXFG512       19338   1.80    1.49        1.39         1.35

4.3 Human Genome Compression

In order to illustrate the practical strength of our algorithm, we have compressed the complete human genome as released on April 14, 2003, taken from the NCBI web site. Apart from the symbols for the nucleotides A, C, G and T, these files, which are distributed in the FASTA format, regularly contain undetermined (possibly unknown or unstable during the sequencing process) bases denoted by the symbol N. The FASTA format also defines other wildcard characters, but they do not occur in this release of the human genome. Since in the present work we focus on DNA compression and do not address special file formats such as FASTA, we do not elaborate upon the representation of wildcard symbols; yet, given the importance of this particular test, we have extended our GeNML program to support the encoding and decoding of N symbols as well (for an algorithm concerning the storage and retrieval of wildcard characters in the FASTA format, see, for example, Williams and Zobel [1997]). Nevertheless, it is important to mention that the statistical nature of the N symbols (and of the wildcard characters in general), since they seem to always come in long runs, is quite different from that of the regular bases. As such, their presence alters the compression efficiency in such a way that it is no longer indicative of sequences entirely composed of the well specified nucleotides A, C, G and T. In order to get a clear picture of the effect of undetermined bases, we have done the test in two variations: first, the files were compressed in their original format, with all the wildcards included; second, we removed the undetermined bases from the files, leaving only the A, C, G and T symbols, and performed the encoding again.


Table VII. Compressing the Human Genome with a 10 MB Window Size. The Compression Efficiency is Expressed as the Ratio Between the Compressed Size Measured in Bits and the Number of Symbols. The Total Number of Bases (Column "Number of Bases") Counts the Symbols A, C, G, T and N, while in Column "Known Bases" Only the Well Specified Bases A, C, G, and T are Counted, Given as a Percentage of the Total

                                 Wildcards Included                    Wildcards Omitted
                                 Compressed Size/Base                  Compressed Size/Base
  Chromosome   Number of Bases    bzip2     GeNML     Known Bases       bzip2     GeNML
  Chr 1           245203898       1.836     1.490        89.2%          2.058     1.670
  Chr 2           243315028       2.023     1.650        97.4%          2.077     1.694
  Chr 3           199411731       2.016     1.650        97.1%          2.077     1.699
  Chr 4           191610523       2.023     1.636        97.4%          2.078     1.681
  Chr 5           180967295       2.034     1.647        98.1%          2.073     1.679
  Chr 6           170740541       2.027     1.651        97.7%          2.074     1.689
  Chr 7           158431299       2.001     1.602        97.5%          2.051     1.642
  Chr 8           145908738       2.015     1.644        97.1%          2.075     1.693
  Chr 9           134505819       1.767     1.398        85.6%          2.063     1.633
  Chr 10          135480874       1.992     1.610        96.5%          2.065     1.669
  Chr 11          134978784       2.001     1.626        96.8%          2.066     1.679
  Chr 12          133464434       1.992     1.627        96.9%          2.055     1.679
  Chr 13          114151656       1.746     1.435        83.7%          2.087     1.715
  Chr 14          105311216       1.708     1.398        82.8%          2.063     1.689
  Chr 15          100114055       1.667     1.328        81.0%          2.057     1.639
  Chr 16           89995999       1.802     1.421        88.8%          2.030     1.600
  Chr 17           81691216       1.907     1.544        94.8%          2.010     1.628
  Chr 18           77753510       2.000     1.658        95.9%          2.086     1.730
  Chr 19           63790860       1.680     1.313        87.4%          1.921     1.502
  Chr 20           63644868       1.924     1.599        93.4%          2.061     1.713
  Chr 21           46976537       1.501     1.244        72.2%          2.078     1.722
  Chr 22           49476972       1.400     1.129        69.4%          2.017     1.627
  Chr X           152634166       1.983     1.531        96.8%          2.049     1.582
  Chr Y            50961097       0.898     0.529        44.7%          2.011     1.185
  Total          3070521116       1.901     1.535        92.2%          2.061     1.664

The parameter settings for the GeNML algorithm in this test were: w = 10 MB, r = 12, C = 3, n0 = 24, δ = 2. The results, along with the results of bzip2 for comparison, are shown in Table VII. The table is split vertically into two sections: the left side shows the results with the undetermined bases included, and the right side when they are omitted. In the left section the total number of bases is also shown; this is complemented in the right section with the "Known Bases" column, which shows the proportion of the well-defined bases compared to the total. The compressed size is always measured in bits; as such, the theoretically maximal compressed size/base ratio would be $\log_2 5 \approx 2.322$ bits/base in the left section (for the random source with the alphabet {A, C, G, T, N}), and $\log_2 4 = 2$ bits/base in the right one.

Comparing the bzip2 columns in the two tests, we can see that all the general-purpose compressor could do in the first test was filter out the long runs of undetermined bases; with those gone, it could not achieve any further reduction


in size. The DNA-specific GeNML program, on the other hand, remains quite successful in the second test. The 1.664 bits/base overall compression rate corresponds to a size reduction of 16.8% with respect to the 2 bits/base clear representation, and a total compressed size of 562 MB (this is also the total compressed size when wildcard symbols are included, because those can be stored in a very compact way).

Table VII shows that the human genome is more compressible than the sequences of lower order life forms. Using sequences of other organisms, we observed that generally the higher the form of the organism, the more compressible (and thus the more redundant) its DNA sequence is. One explanation for this interesting fact could be that higher order organisms deliberately carry some redundancy to protect the vital parts of their more complicated, and thus more sensitive, DNA sequences from the disastrous consequences of mutation.

5. CONCLUSIONS

In this article we have introduced an efficient algorithm for lossless DNA compression. The two most important parts of this algorithm for efficient compression are the approximate sequence matching by the normalized maximum likelihood model for discrete regression, and the integration of different methods with various parameters into a single encoder. Special techniques are used to accelerate the search for the best matching block and the scoring of the approximate match, making the algorithm fast enough for practical use even on the largest DNA sequences. As a result, at the time of this writing, this algorithm represents, to our knowledge, the state of the art in lossless DNA compression.

The 16.8% improvement gained on the human genome still falls short of the compressibility of many other types of data, such as text and images. However, this improvement is enough to justify the importance of DNA compression in practice. One of the most significant applications could be its use for large public databases, such as NCBI, that make thousands of genomes available to everyone over the Internet. This compression algorithm could already help cut transmission costs. Furthermore, decoding has very low resource requirements, making its use possible in many environments.

ACKNOWLEDGMENTS

The authors gratefully acknowledge the useful suggestions and comments of Professor Jorma Rissanen on an early version of the manuscript.

REFERENCES

ADJEROH, D., ZHANG, Y., MUKHERJEE, A., POWELL, M., AND BELL, T. 2002. DNA sequence compression using the Burrows-Wheeler Transform. In IEEE Computer Society Bioinformatics Conference. 303.
ALTSCHUL, S., GISH, W., MILLER, W., MYERS, E., AND LIPMAN, D. 1990. Basic local alignment search tool. J. Mol. Biol. 215, 403–410.
CHEN, X., KWONG, S., AND LI, M. 2001. A compression algorithm for DNA sequences. IEEE Engineering in Medicine and Biology. 61–66.


CHEN, X., LI, M., MA, B., AND TROMP, J. 2002. DNACompress: Fast and effective DNA sequence compression. Bioinformatics 18, 1696–1698.
GRUMBACH, S. AND TAHI, F. 1993. Compression of DNA sequences. In Proceedings of the Data Compression Conference. 340–350.
GRUMBACH, S. AND TAHI, F. 1994. A new challenge for compression algorithms: Genetic sequences. J. Inform. Process. Manage. 30, 6, 875–886.
LANCTOT, K., LI, M., AND YANG, E. 2000. Estimating DNA sequence entropy. In 11th Annual ACM-SIAM Symposium on Discrete Algorithms. 409–418.
LI, M., BADGER, J., CHEN, X., KWONG, S., KEARNEY, P., AND ZHANG, H. 2001. An information based sequence distance and its application to whole mitochondrial genome phylogeny. Bioinformatics 17, 149–154.
LI, M., CHEN, X., LI, X., MA, B., AND VITÁNYI, P. 2003. The similarity metric. In 14th Annual ACM-SIAM Symposium on Discrete Algorithms. 863–872.
LOEWENSTERN, D. AND YIANILOS, P. 1997. Significantly lower entropy estimates for natural DNA sequences. In Proceedings of the Data Compression Conference. 151–160.
MA, B., TROMP, J., AND LI, M. 2002. PatternHunter: Faster and more sensitive homology search. Bioinformatics 18, 440–445.
MATSUMOTO, T., SADAKANE, K., AND IMAI, H. 2000. Biological sequence compression algorithms. In Genome Informatics Workshop. Universal Academy Press, 43–52.
PEARSON, W. AND LIPMAN, D. 1988. Improved tools for biological sequence comparison. In Proc. Natl. Acad. Sci. USA. Vol. 85. 2444–2448.
RISSANEN, J. 1976. Generalized Kraft inequality and arithmetic coding. IBM J. Res. Dev. 20, 3, 198–203.
RIVALS, E., DELAHAYE, J., DAUCHET, M., AND DELGRANGE, O. 1995. A guaranteed compression scheme for repetitive DNA sequences. Tech. Rep. IT-285, LIFL, Lille I Univ.
TABUS, I., KORODI, G., AND RISSANEN, J. 2003a. DNA sequence compression using the normalized maximum likelihood model for discrete regression. In Proceedings of the Data Compression Conference. 253–262.
TABUS, I., RISSANEN, J., AND ASTOLA, J. 2003b. Classification and feature gene selection using the normalized maximum likelihood model for discrete regression. Signal Processing, Special Issue on Genomic Signal Processing 83, 4 (April), 713–727.
WILBUR, W. AND LIPMAN, D. 1983. Rapid similarity searches of nucleic acid and protein data banks. In Proc. Natl. Acad. Sci. USA. Vol. 80. 726–730.
WILLIAMS, H. AND ZOBEL, J. 1997. Compression of nucleotide databases for fast searching. Computer Applications in the Biosciences 13, 5, 549–554.
ZIV, J. AND LEMPEL, A. 1977. A universal algorithm for sequential data compression. IEEE Trans. Inform. Theory 23, 3, 337–343.

Received October 2003; revised June 2004; accepted August 2004


A Methodology for Analyzing SAGE Libraries for Cancer Profiling

JÖRG SANDER
University of Alberta
RAYMOND T. NG, MONICA C. SLEUMER, and MAN SAINT YUEN
University of British Columbia
and
STEVEN J. JONES
British Columbia Genome Sciences Centre

Serial Analysis of Gene Expression (SAGE) has proven to be an important alternative to microarray techniques for global profiling of mRNA populations. We have developed preprocessing methodologies to address problems in analyzing SAGE data due to noise caused by sequencing error, normalization methodologies to account for libraries sampled at different depths, and missing tag imputation methodologies to aid in the analysis of poorly sampled SAGE libraries. We have also used subspace selection using the Wilcoxon rank sum test to exclude tags that have similar expression levels regardless of source. Using these methodologies we have clustered, using the OPTICS algorithm, 88 SAGE libraries derived from cancerous and normal tissues as well as cell line material. Our results produced eight dense clusters representing ovarian cancer cell line, brain cancer cell line, brain cancer bulk tissue, prostate tissue, pancreatic cancer, breast cancer cell line, normal brain, and normal breast bulk tissue. The ovarian cancer and brain cancer cell lines clustered closely together, leading to a further investigation on possible associations between these two cancer types. We also investigated the utility of gene expression data in the classification between normal and cancerous tissues. Our results indicate that brain and breast cancer libraries have strong identities allowing robust discrimination from their normal counterparts. However, the SAGE expression data provide poor predictive accuracy in discriminating between prostate and ovarian cancers and their respective normal tissues.

Categories and Subject Descriptors: J.3 [Life and Medical Sciences]—Biology and genetics; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Clustering

General Terms: Algorithms, Experimentation

Additional Key Words and Phrases: Gene expression, clustering, classification, cancer profiling

This research was supported by NCE IRIS and by NSERC Canada.
Authors' addresses: J. Sander, Department of Computing Science, University of Alberta, Edmonton AB, Canada T6G 2E8; email: [email protected]; R. T. Ng, M. C. Sleumer, and M. S. Yuen, Department of Computer Science, University of British Columbia, Vancouver BC, Canada V6Y 1Z4; email: {mg, sleumer, myuen}@cs.ubc.ca; S. J. Jones, Genome Sciences Centre, British Columbia, Vancouver BC, Canada V5Z 4E6; email: [email protected].


1. INTRODUCTION

Gene expression profiling has been studied extensively in an attempt to determine the transcriptional changes, both causative and correlative, associated with the progression of cancer [Gray and Collins 2000]. Such an approach has the potential to determine new prognostic and diagnostic biomarkers in addition to new gene targets for therapeutic intervention.

1.1 Background

Cancers are typically classified based on macroscopic qualities such as the tissues they developed in. However, the same type of cancer can have different reactions to treatment in different people, and on the other hand, one treatment can be effective for cancers originating in different tissues. A possible explanation for these observations is that cancers, which are similar on a macroscopic level, may in fact be different at the sub-cellular level, and that certain cancers that originate in different tissues may actually be similar to each other at the sub-cellular level.

The sub-cellular level of a tissue is characterized by very complex biochemical processes of which only a very small fraction is understood today. Most of the functions of a cell are performed by proteins that are produced by the cell via a mechanism called gene expression. Gene expression is the process of making so-called messenger RNA (mRNA) copies of a gene (a part of the DNA strand of the cell). mRNA, which is essentially a sequence of four bases denoted by A, C, G, and T, is then translated in the cell into an amino acid sequence that forms the basis of a protein. Each tissue type in an organism requires different amounts of different proteins to perform its duties. The amount of a particular protein produced by a cell is partly controlled by the number of corresponding mRNA copies. The relative levels of mRNA of each gene in a tissue type are called the tissue's gene expression profile. This profile is assumed to be characteristic for a particular tissue type. Biochemists assume that many of the genes in the human genome are only expressed in one tissue type; but there are also so-called "housekeeping genes" that are expressed in all cells, for example, those that control transcription and translation.

Certain diseases, especially cancer, are caused by a sequence of errors that radically alter the normal pattern of gene expression. There may be a mutation in one or several genes that up- or down-regulate (i.e., control the rate of expression of) several other genes. For instance, the proteins that prevent uncontrolled cell growth and promote cell death are no longer produced, and the focus of the cell's gene expression machinery switches from producing the proteins appropriate to its tissue type to producing mostly proteins that are needed for cell growth and division. Theoretically, if the gene expression profile of a diseased tissue sample were known, it could be used to find out what has gone wrong in the cell and provide clues as to how to fix it.

There are two major methods of measurement for gene expression data, which produce a snapshot of the gene expression processes in a cell sample at a certain point in time: the microarray method and the SAGE method. In the microarray method, the sequences of the mRNAs that are measured must be known in advance. Many single-stranded pieces of DNA that


complement these short sequences are printed on a glass chip. The chip is then brought into contact with the mRNAs extracted from a cell sample. The mRNA in the sample binds to its DNA complement on the chip, and causes it to fluoresce. This fluorescence is detected with a laser. The more mRNA of a certain sequence is in the sample, the more its complementary spot on the chip will light up. An advantage of this method is that it is relatively inexpensive to produce large amounts of data. The major disadvantage, however, is that the experimenter must choose the mRNA sequences to be detected in a sample, and the sequences useful for cancer profiling may not be known.

Serial Analysis of Gene Expression (SAGE) [Velculescu et al. 1995] allows for the global profiling of an mRNA population, regardless of whether the transcripts have been previously identified. In the SAGE method, all the mRNA of a cell sample is collected, and a short subsequence, called a tag, is extracted from each mRNA (most commonly 10 base pairs in length, excluding the site for the restriction enzyme). These tags are then enumerated through standard DNA sequencing: the frequency of each tag is counted, giving the relative levels of the corresponding mRNA in the cell sample. The information about the frequencies of detected mRNA tags in a tissue sample is called a SAGE library. The SAGE method is highly quantitative, and SAGE profiles from different tissues can be readily compared.

A substantive resource of SAGE data has been created as part of the SageMap initiative of the National Cancer Institute, USA [Lash et al. 2000], now part of the Gene Expression Omnibus [Edgar et al. 2002]. The availability of such a resource allows for a global comparison of normal tissues, cell lines and cancer types. Such a comparison may shed light on many interesting questions, such as:

- Is the SAGE method robust? Different libraries were created at different places by different groups. Is the SAGE method itself reproducible?
- Do different types of cancer behave differently at the gene expression level?
- Are there readily detectable subtypes of cancers?
- Is there such a notion that cancer type A is closer to type B than to type C?
- Can we identify, by computational analysis, a set of genes that characterize different types of cancer?

Frequently used methods to characterize and extract information from gene expression data are clustering (also known as "unsupervised classification/learning") and classification (also known as "supervised classification/learning" and in some information retrieval contexts as "categorization"). The goal of a clustering algorithm is to find unknown groups that may exist in a data set, whereas classification is supervised in the sense that classes are known beforehand and the methods try to learn a function that assigns objects correctly to one of the known classes (for an overview of different clustering and classification methods see, for instance, Han and Kamber [2000]).

There are many different types of clustering algorithms; the most common distinction is between partitioning and hierarchical algorithms. Partitioning algorithms construct a flat (single level) partition of a database D of n objects into a set of k clusters such that the objects in a cluster are more similar to each


other than to objects in different clusters. For a given number of clusters k, partitioning algorithms typically try to minimize an objective function such as the sum of the (squared) distances of objects to "their" cluster representative (i.e., the representative that is closest to them). Hierarchical clustering algorithms, on the other hand, do not actually partition a data set into clusters, but compute a hierarchical representation, which reflects its possibly hierarchical clustering structure. The well-known single-link method and its variants produce a so-called dendrogram, which is a tree that iteratively splits a data set into smaller subsets until each subset consists of only one object. Consequently, the root represents the whole data set, and the leaves represent individual objects. A different hierarchical clustering algorithm, which generalizes density-based clustering, is OPTICS [Ankerst et al. 1999]. This algorithm produces another type of output, a so-called reachability plot, which is a bar plot showing clusters as "dents" in the plot. In this article, we base much of our analysis on the result of this clustering method, which improves over traditional hierarchical clustering algorithms in several ways:

- The single-link or nearest-neighbor method suffers from the so-called "single-link" or "chaining" effect, by which clusters could be incorrectly merged if they are connected by a single line of points having the same inter-object distances as the points within the clusters (those "lines" often exist especially in data sets that contain background noise). OPTICS generalizes the nearest neighbor method in the sense that for a special parameter setting, the result of OPTICS is equivalent to the result of the nearest neighbor method. The same parameter, however, can be set to different values, which will weaken chaining effects and thus better separate clusters in noisy data sets.
- OPTICS is more efficient on large data sets than traditional hierarchical clustering algorithms, if the dimension of the data set is not too high. For the 88 libraries we are working with in this article, however, this aspect is not important.
- The output of the OPTICS algorithm is a reachability plot, which in our opinion gives a clearer view of the hierarchical clustering structure and the density of the clusters than the typical dendrogram produced by traditional hierarchical clustering methods. However, a dendrogram can be generated from a reachability plot and vice versa [Sander et al. 2003].

The reachability plot is a simple bar plot, which visualizes a cluster ordering of the data set, where each library is represented by one bar. It shows simultaneously:

- which clusters are formed by which points (easily recognized as "valleys" in the plot),
- how dense clusters are in relation to each other (the deeper the "valley", the denser),
- a lower bound on the distance between two clusters (the tallest bar between two "valleys"),
- the hierarchical structure of the clusters ("nested valleys"), and
- which points are outliers (very large bars in the plot, not at the border of a cluster).


Fig. 1. Result of OPTICS for the given 2-dimensional data set (a), represented as a reachability plot (b), and a dendrogram (c). The regions in the plots corresponding to (nested) clusters are indicated by letters A, B, C.

How to read dendrograms and reachability plots is illustrated in Figure 1, using a simple 2-dimensional point data set.

1.2 Related Work

The SAGE method was introduced in 1995 [Velculescu et al. 1995]. The same group [Zhang et al. 1997] also proposed that the SAGE method could be used to study the differences between cancerous and normal cells. However, they only provided a brief example of the analysis that could potentially be done and did not provide the results of such an analysis. In 1999 a website (http://www.ncbi.nlm.nih.gov/SAGE) was created as an offshoot of the Cancer Genome Anatomy Project (CGAP) at the National Center for Biotechnology Information (NCBI). CGAP is dedicated to collecting data on the genetics of cancer. An introduction to CGAP and an explanation of its purpose and the tools it contains is presented in Strausberg et al. [2000]. The purpose of the SAGE website is to provide data to the public, so that researchers can benefit from the SAGE technique without having to bear the expense of creating all of the data themselves [Lal et al. 1999]. Since then, various laboratories have submitted SAGE data to the site, both cancerous and noncancerous, from 10 different tissue types. The website also contains various tools to help researchers analyze the data [Lash et al. 2000]. However, it is hard to perform any analysis of the tag frequency distribution on a larger set of libraries.

The use of a clustering algorithm to study gene expression data was first proposed by the seminal paper of Eisen et al. [1998] in the context of microarray data. The authors applied hierarchical pairwise average-linkage clustering (using the correlation coefficient as similarity measure) to the genes on microarrays for the budding yeast genome and a human fibroblast cell line. The same clustering methodology (software is publicly available from the Eisen lab at http://rana.lbl.gov/) has since then been used for different analyses of microarrays, including the clustering of genes on microarrays for normal and cancerous breast tissue [Perou et al. 1999], the clustering of different tissues on microarrays for different types of lymphoma, in order to detect possible subtypes of


lymphoma [Alizadeh et al. 2000], and the clustering of both the tissues and the genes on microarrays for normal and cancerous colon tissue [Alon et al. 1999]. Other clustering paradigms have also been tried on microarray data, such as k-means (e.g., Tavazoie et al. [1999]), self-organizing maps (e.g., Golub et al. [1999]), model-based clustering (e.g., Yeung et al. [2001]), and specially designed algorithms such as the method of Ben-Dor et al. [1999] which is based on a biological model of genes. Reports on cluster analysis of SAGE data have only recently started to appear (including some preliminary results, which we extend in this article [Ng et al. 2001]. Most approaches simply apply the software package from the Eisen lab. Porter et al. [2001] clustered both genes and libraries of 8 normal and cancerous breast tissues, finding differences in the variability of gene expressions in normal and cancerous cells. Nacht et al. [2001], clustered 9 normal and cancerous lung tissues, and showed that normal and cancer could be separated based even on only the 115 most abundant tags. Van Ruissen et al. [2002] used two-way clustering of both libraries and genes of different skin tissues, finding two clusters of genes that are up-regulated in cancerous tissue. Hashimoto et al. [2003] applied hierarchical clustering to genes of different types of leukocytes, showing that genes are differentially expressed in each leukocyte population, depending on their differentiation stages. Buckhaults et al. [2003] studied 62 surgically removed samples of 4 different cancers (primary cancers and secondary metastases of ovary, breast, pancreas, and colon) in order to predict the origin of a cancer. They first selected only the top-ranked tags identified by a support vector machine classifier: tags that best separated between the four classes. Based on these tags that best distinguish between the four classes they applied twoway clustering to both genes and tissues detecting that metastases of a cancer clustered together with their corresponding primary tumors. 1.3 Overview of Our Contributions The previous clustering studies of SAGE were in general performed by different labs that produced their own (typically very small sets of) libraries. The main contribution of this article is a methodology for analyzing heterogeneous SAGE libraries on a large scale.1 We first develop a sequence of steps to cleanse or preprocess the libraries including missing tag imputation and subspace selection (i.e., removing tags that may just be noise, and selecting tags that may be more discriminating). We show that all four steps have to be applied together for an effective subsequent analysis of the data—none of the steps alone can produce satisfactory results. After the libraries have been cleansed, we perform hierarchical clustering using OPTICS on the libraries. Apart from clustering, we also apply nearestneighbour classification. This is to examine whether a specific type of cancer has a strong identity or “signature” at the gene expression level. We show that our methodology is effective in that it sheds light on some of the biological questions we explore. Our analysis suggests that the SAGE technique is robust. For instance, brain cancer libraries developed in different 1A



laboratories form a strong cluster, suggesting that results obtained in different laboratories are in fact comparable. Our analysis also indicates that brain and breast cancer have much stronger gene expression signatures than ovarian and prostate cancer do. Finally, our analysis finds one strong cluster of brain and ovarian cancer libraries, suggesting that there may be interesting similarities between the two cancer types. We also identify a set of genes that distinguish this cluster from normal brain and ovarian tissues. In many cases, literature searches confirm that the identified genes are promising candidates for further analysis.

2. RESULTS

The following analysis is based on 88 SAGE libraries, which are publicly available on the NCBI SAGE website as of January 2001. Each of these SAGE libraries is made from a sample of one of the following tissues: brain, breast, prostate, ovary, colon, pancreas, vascular tissue, skin, or kidney. The other information consistently included with each library is whether it was made from cancerous or normal tissue, and whether it was made from bulk tissue in vivo or a cell line grown in vitro. The data in each library includes a list of 10-base tags that were detected in the sample, and the number of times each tag was detected. The number of unique tags in a library is the number of different tags that were detected in the sample, each of which was detected with some integer frequency. The total number of tag copies is the sum of all the frequencies of all the tags in a library.
In analyzing the SAGE data, four major issues need to be accounted for. The first three are a consequence of the sampling and sequencing errors associated with the SAGE method [Stollberg et al. 2000]: First, the SAGE method is highly prone to sequencing error, which creates a large amount of noise and obscures the clustering structure. Second, the libraries differ greatly in terms of total number of tag copies due to differences in the depth to which individual SAGE libraries were sampled. Third, some of the SAGE libraries have been subjected to very limited sequencing, having 0 values for most of the tags. The fourth issue is that the “raw” data is extremely high dimensional. For our cluster analysis, each tag corresponds to a dimension. Thus, if we consider 50,000 tags simultaneously, each library corresponds to one point in the 50,000-dimensional space. But it is not clear which of the 50,000 dimensions are relevant for cancer profiling. To deal with these problems, we designed four preprocessing steps: error removal, normalization, missing tag imputation, and subspace selection. Each of these steps is discussed in detail below.

2.1 Removing Erroneous Tags

If we assume that the single-pass sequences used to generate the SAGE tags have a base error rate of approximately 1% per base, then we can expect that approximately 10% (1 − 0.99^10 ≈ 0.096) of the total number of tag copies in each SAGE library will contain at least one sequencing error (an error can occur at each of the 10 base positions). Since these errors result in noise and increase the dimensionality of the data, our first preprocessing step is error removal.
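To make the arithmetic explicit, the following minimal sketch computes the expected fraction of tag copies carrying at least one sequencing error; the 1% per-base error rate and the 10-base tag length are the figures assumed in the text, and the snippet itself is only an illustration.

```python
# Expected fraction of 10-base SAGE tag copies containing at least one sequencing error,
# assuming an independent per-base error rate of 1% (as assumed in the text).
per_base_error = 0.01
tag_length = 10

p_at_least_one_error = 1.0 - (1.0 - per_base_error) ** tag_length
print(f"{p_at_least_one_error:.3f}")  # ~0.096, i.e., roughly 10% of all tag copies
```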


Within one library, up to 80% of the unique tags have a frequency of 1, but these make up about 20% of the total number of copies of tags. Some of the single-frequency tags represent genuine low frequency mRNAs. However, it is impossible to tell which of the single-frequency tags are errors by only looking at a single library. Since the size of the set of possible tags is 4^10 ≈ 10^6, and only about 3 × 10^5 of these tags have ever appeared in any library, a sequence error in a tag most likely represents a completely new tag that never appears in any other library. Therefore, our method of removing erroneous tags without removing legitimate single-frequency tags follows a thresholding approach (as used, e.g., by Hashimoto et al. [2003]): remove those tags that have a frequency of no more than 1 in all the libraries. In this manner, tags that have a frequency of 1 in some libraries but a higher frequency in other libraries are not removed. This approach to removing erroneous tags is similar to the one now taken by the SAGE Genie software, which generates a “confident tag list” (see Boon et al. [2002]). The total number of unique tags across all libraries (which is the dimensionality of the subsequent space upon which the libraries are clustered) before tag removal is about 350,000; after the removal it is 58,524, which is a significant reduction of the dimensionality of the data set as well as the noise. In each library, between 5% and 15% of the total number of tag copies are removed, which is in the range of the expected number of sequencing errors per library.

2.2 Normalization

The 88 SAGE libraries upon which we base our analysis were made by different institutions. Because the amount of resources spent by these institutions for sequencing varies widely, each library has a different total number of tag copies. As a consequence, it is not meaningful to use some of the distance functions, such as the Euclidean distance, to compare the libraries to each other, since their tag frequencies are not on the same scale. Thus, we normalized the tag frequencies by simply scaling them all up to the same total number of tag copies (e.g., tags per million), which is a common way to compensate for unequal tag numbers in SAGE data analysis (see, e.g., Porter et al. [2001]). For other distance functions such as the correlation coefficient, this type of scaling does not have any effect.

2.3 Preliminary Attempts

In our early attempts to cluster the SAGE libraries, we investigated the use of CLARANS [Ng and Han 1994], a k-medoids algorithm, which is a partitioning algorithm that tries to find k objects as representatives of clusters so that the sum of all distances of all objects to their closest representative is minimized. We focused on a single tissue type: 14 breast tissue libraries that were available at the time. We applied some simple screening procedures to reduce the dimensionality of the data set, such as requiring a minimum frequency threshold for tags. We tested several similarity and dissimilarity measures, such as the Euclidean distance and the correlation coefficient. This analysis showed that the breast tissue libraries form clusters, but these clusters were formed according


to tissue source (cell line or bulk tissue) and not according to neoplastic state (cancer or normal). All of the bulk tissue-derived libraries went into one cluster while all the cell line-derived libraries went into another cluster. But even after eliminating all tags that were only found in bulk tissues, the success in separating cancerous breast tissue from normal breast tissue was only moderate. Furthermore, CLARANS requires the user to specify the number of clusters to be formed. From a biological standpoint, this number is hard to determine a priori. Because of these weak results, and because we actually wanted to see whether some types of cancers were related to each other at the gene expression level as well as to find clusters within one category, we decided to use a hierarchical clustering algorithm for a more detailed analysis. In contrast to partitioning algorithms, hierarchical algorithms do not require the number of clusters to be specified a priori. Hierarchical algorithms were used in many previously published papers on the clustering of microarray data. For reasons discussed in Section 1.1, our analysis here is based on the hierarchical clustering algorithm OPTICS [Ankerst et al. 1999], and we will show both the reachability plots and (for readers more familiar with the interpretation of tree representations) the dendrograms of our clustering results. Applying OPTICS naively to the SAGE data (after error removal and normalization) does not produce a strong clustering structure. Figure 2a shows the reachability plot using correlation coefficient as a similarity measure. Figure 2b is the corresponding dendrogram. Each library is labeled by a library number (001 to 088), cancerous or normal indicator (can, nor), bulk tissue or cell line indicator (bk, cl), tissue type (bn = brain, bt = breast, co = colon, ki = kidney, ov = ovary, pa = pancreas, pr = prostate, sk = skin, va = vascular) and a code for the laboratory that created the library. For instance, the label 032-can-bk-bt-po corresponds to the cancerous, bulk tissue breast library whose number was 32 and was created by the laboratory labeled po. There are very few clusters: a brain cancer cell line cluster, a prostate cluster, a pancreas cluster, a mixed cluster containing brain and vascular tissue, and a breast cancer cell line cluster. Each of these clusters contains only 3 to 5 libraries, all of which are of a single type of tissue (with the exception of the cluster containing both brain and vascular tissue). The other 56 libraries do not group into pronounced clusters. Note that there are several different possibilities for a similarity or dissimilarity measure between SAGE libraries, such as the Manhattan distance, the Euclidean distance, and the correlation coefficient. The one that is most commonly used in the literature for microarray data is the correlation coefficient (c.f. Section 1.2). We clustered the SAGE libraries using all of the mentioned functions and found that the clustering structures are typically very robust with respect to the similarity function—they differ in general only slightly. However, the correlation coefficient always resulted in the most pronounced structure, and is also consistent with previously published work. Therefore, we will present our results using the correlation coefficient. Furthermore, note that we do not use the log scale as in typical microarray analyses. 
Intensity values on a microarray chip represent the ratios of gene expressions of


Fig. 2. Result of OPTICS after error removal and normalization.

a condition relative to a reference condition. These values are typically log transformed, mainly so that equivalent fold changes in either direction (over- or under-expression) have the same absolute value (x/y and y/x are very different, whereas log(x/y) = −log(y/x)). In SAGE, tag counts represent the absolute number of occurrences of a gene tag, so a log transformation is not necessary.
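As a concrete illustration of the first two preprocessing steps and of the correlation-based dissimilarity used for clustering, here is a minimal sketch, assuming the libraries are held in a pandas DataFrame with one row per tag and one column per library. The function names, the hypothetical input file, and the tags-per-million scale are illustrative choices, not part of the original study.

```python
import pandas as pd

def remove_erroneous_tags(counts: pd.DataFrame) -> pd.DataFrame:
    """Drop tags whose frequency is at most 1 in every library (likely sequencing errors)."""
    return counts[counts.max(axis=1) > 1]

def normalize(counts: pd.DataFrame, scale: float = 1_000_000) -> pd.DataFrame:
    """Scale every library to the same total number of tag copies (here: tags per million)."""
    return counts / counts.sum(axis=0) * scale

def correlation_dissimilarity(counts: pd.DataFrame) -> pd.DataFrame:
    """Pairwise dissimilarity between libraries: 1 - Pearson correlation of their tag profiles."""
    return 1.0 - counts.corr(method="pearson")

# counts = pd.read_csv("sage_libraries.csv", index_col=0)   # hypothetical input file
# cleaned = normalize(remove_erroneous_tags(counts))
# dissimilarity = correlation_dissimilarity(cleaned)        # input to a hierarchical clustering routine
```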


The weak clustering result is consistent with our preliminary results using CLARANS and indicates that we have to address both the problem of missing values due to incomplete sequencing and the problem of suitable subspace selection.

2.4 Missing Tag Imputation

After the outlined error removal, the total number of tag copies in a library varies from 1,293 to 79,498. Consider all the libraries whose total tag counts are below 30,000. Out of the 88, there are 28 such libraries. If we were to discard these 28 libraries, we would lose the valuable data contained in them. If we were to normalize the libraries just based on the existing tag counts, the counts of some of the tags would be exaggerated. The strategy we adopt is to “impute” the counts of the missing tags. Specifically, for a given library with a low total count, we use similar libraries to conservatively estimate the count of a missing tag. This estimate allows us to include the library and its true tag counts in our analysis. While the details of our imputation strategy are given below, we basically use a method similar to those used for filling in answers on incomplete surveys (see http://www.utexas.edu/cc/faqs/stat/general/gen25.html).
Note that missing value imputation is not new to gene expression analysis. But the estimation methods for DNA microarrays as proposed in Troyanskaya et al. [2001] are not applicable in this case, since missing values in DNA microarrays are due to different types of errors. Those methods look at the expressions of other genes in the experiments and use their values to estimate a missing value. For instance, the best method reported in Troyanskaya et al. [2001], the KNNimpute method, first selects the k genes that have the most similar expression profiles to the gene having a missing value under some condition; then a weighted average of the k most similar genes that have a value under the same condition is used as an estimate for the missing value. This method is based on the assumptions that errors occur randomly and locally at specific spots on the microarray, and that the gene with the missing value is over- or under-expressed by a similar factor as a small cluster of genes. Neither assumption is applicable in the case of our SAGE data set. Here, missing values are systematic in the sense that incomplete sequencing affects all genes in a specific condition or experiment. Furthermore, the conditions or experiments involve very different tissue types, where we cannot assume that every gene is co-expressed in the same cluster of other genes in all tissues, or that the absolute mRNA counts of co-expressed genes are similar (even though they may be up- or down-regulated by the same factor).
Therefore, in our approach, we adjust the frequency of tags in a library that has a low total number of tag copies by using more complete libraries in the same category, that is, from the same tissue, source, and neoplastic state. For our study, libraries containing 30,000 or fewer tags underwent this imputation, the underlying assumption of this process being that it is conservative to attribute the absence of a tag to insufficient sampling alone in cases where the


same tag has been observed in libraries of similar origin sampled at a greater depth. We intend to adjust the truncated SAGE libraries as conservatively as possible in order not to introduce artificial similarities between them. Our method for replacing missing tags is as follows (a code sketch of this procedure follows the subspace-selection overview below):
• Identify libraries with a low total number of tag copies. The threshold value was set to 30,000 in consultation with experts familiar with the SAGE method, subject to the constraint that not more than 30% of the libraries were actually affected.
• For each of these libraries,
  - Create a list of libraries in the same category which have a total number of tag copies exceeding 30,000.
  - Scale them down so that their total number of tag copies matches the total number of tag copies of the low-tag library. This gives an estimate of the expression values in those libraries as if they had been sequenced as incompletely as the truncated library of interest.
  - Replace each frequency 0 in the low-tag library with the lowest non-zero frequency of that tag in the scaled-down libraries.
Since the intention here is to solve the problem of the truncated libraries as conservatively as possible, we replace the zeros with the lowest possible value from the scaled-down libraries, instead of, for instance, a weighted average, or the values of the most similar downscaled library in the same category.
Of the 88 SAGE libraries studied, 28 were subjected to tag imputation. After the low frequency tags were removed, the missing values were filled in, and the libraries were normalized, the resulting data was clustered by OPTICS. Figures 3a (reachability plot) and 3b (dendrogram) show the result, again with the correlation coefficient as similarity measure. Basically, the same clustering structure as before is found; only the pancreas cluster now includes 2 additional libraries. The imputation of missing values did not change the data radically with respect to the clustering structure, and did not create any “artificial” clusters by making libraries too similar to each other. However, the imputation of missing values has important benefits for subspace selection.

2.5 Subspace Selection

So far, we have used the whole set of tags when computing the similarities between libraries on which a clustering algorithm is based, and the resulting clustering structures using this whole space of features (tags) were very weak. Thus, we embark on subspace selection, the goal of which is to distinguish between the following two kinds of tags:
• Tags that have similar expression levels in all or most of the libraries, independent of neoplastic state, source, and tissue type.
• Tags that have different expression patterns in different situations.
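As referenced above, the following is a minimal sketch of the imputation procedure from Section 2.4, again assuming a pandas DataFrame of raw tag counts with one column per library; the category mapping and the 30,000-copy threshold come from the text, while the function and variable names are illustrative assumptions rather than the authors' implementation.

```python
import pandas as pd

def impute_missing_tags(counts: pd.DataFrame, categories: dict,
                        threshold: int = 30_000) -> pd.DataFrame:
    """Conservatively impute zero counts in shallowly sequenced libraries.

    counts: raw tag counts, one row per tag, one column per library.
    categories: maps each library name to its category (tissue, source, neoplastic state).
    """
    imputed = counts.copy()
    totals = counts.sum(axis=0)
    for lib in counts.columns:
        if totals[lib] > threshold:
            continue  # library is sequenced deeply enough; nothing to impute
        # Reference libraries: same category, sequenced beyond the threshold.
        refs = [c for c in counts.columns
                if c != lib and categories[c] == categories[lib] and totals[c] > threshold]
        if not refs:
            continue  # no complete library of the same category is available
        # Scale the reference libraries down to the depth of the truncated library.
        scaled = counts[refs] * (totals[lib] / totals[refs])
        # Replace each zero with the lowest non-zero scaled frequency of that tag.
        lowest_nonzero = scaled.where(scaled > 0).min(axis=1)
        zero_mask = imputed[lib] == 0
        imputed.loc[zero_mask, lib] = lowest_nonzero[zero_mask].fillna(0)
    return imputed
```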


Fig. 3. Result of OPTICS after error removal, normalization, and missing tag imputation.
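For readers who want to produce a reachability plot like the one in Figure 3, here is a minimal sketch that runs an OPTICS implementation on a precomputed correlation-based dissimilarity matrix. It uses scikit-learn's OPTICS, which differs in its details and parameters from the Ankerst et al. [1999] algorithm used by the authors; `dissimilarity` is assumed to be the output of the earlier preprocessing sketch, and `min_samples` is an illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import OPTICS

def reachability_plot(dissimilarity):
    """Bar plot of OPTICS reachability distances; valleys correspond to clusters."""
    optics = OPTICS(metric="precomputed", min_samples=3).fit(dissimilarity.values)
    order = optics.ordering_
    reach = optics.reachability_[order]
    # Undefined reachability (the first point of each component) is drawn as a tall bar.
    reach[~np.isfinite(reach)] = reach[np.isfinite(reach)].max()
    plt.bar(range(len(order)), reach)
    plt.xticks(range(len(order)), dissimilarity.index[order], rotation=90, fontsize=6)
    plt.ylabel("reachability distance")
    plt.tight_layout()
    plt.show()
```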

The first kind of tags does not help in forming clustering structures. It even dilutes the true clusters by distorting the similarities and dissimilarities between libraries. Thus, these tags are removed and only the discriminating tags are kept for further analysis.


The key question here is how to select discriminating tags. Since the number of tags in our data set is extremely high, methods that search over subsets of tags are computationally too expensive, and we have to restrict ourselves to a method that checks properties of single tags. A similar strategy has also been employed in the context of cancer classification using DNA microarray data sets. These methods typically compute a value for each gene that measures the gene’s suitability for class separation. The genes are ranked according to this measure, and the top m, or all genes with a value above a certain threshold t, are selected. The values for m or t have to be specified by the user. Definitions that have been proposed for measuring a gene’s relative class separation include, for instance, Golub et al.’s [1999] correlation metric, based on the means and standard deviations of the expression values for a gene in the different given classes, and Ben-Dor et al.’s [2000] TnoM score, which measures the classification error when assigning the samples to the classes only by comparing the expression value for the gene of interest to a learned threshold. Those methods have some drawbacks in our context because our data set contains samples from very different tissues and the correlation of any tag with the classes cancer or normal might be very low, which in turn would make it very difficult to specify the required threshold parameters for the methods.
We addressed the issue of subspace selection by using the Wilcoxon rank sum test [Wilcoxon 1945], which tests whether two samples are taken from the same population. In addition to not requiring input parameters, it is a nonparametric statistical test: it makes no assumption concerning the distribution of the two classes, and it can be successfully applied even to very small samples. For all tests, a significance level of α = 0.01 was used.
In our application, we want to detect possible similarities between cancers in different tissue types as well as possible subtypes of a cancer in a particular tissue type. Therefore, in order to avoid biasing the result, we cannot use tissue-specific differences in expression to reduce the number of tags. Instead, to select a relevant subspace of tags, we first applied the test on a tag-by-tag basis to determine those tags that have significantly different expression levels in cancerous versus normal cells, without regard to the tissue type. For the Wilcoxon test, we look at two groups of tissues at a time (e.g., cancer versus normal), and we consider the frequencies of a tag T in the libraries of each group as a sample. We then apply the test to these two samples to determine whether the (null) hypothesis that the two samples are taken from the same population can be rejected (at the given confidence level). We retain a tag only if its expression in cancerous and normal cells is significantly different. The intuition behind this heuristic selection of a subspace of tags is that we only want to select “informative” tags and remove those that do not have any individual predictive ability (according to our statistical test). In particular, if the distributions of the expression values of a tag are so similar in both classes (e.g., cancer vs.
normal) that we cannot reject the hypothesis that the values follow the same distribution (according to the Wilcoxon test), its (individual) ability to distinguish between the two classes is consequently very low and we do not include this tag in the final subspace.
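To make the selection step concrete, here is a minimal sketch of a Wilcoxon-based tag filter using SciPy's rank-sum test; the DataFrame layout and function names follow the earlier sketches and are illustrative assumptions rather than the authors' actual implementation.

```python
import pandas as pd
from scipy.stats import ranksums

def select_discriminating_tags(counts: pd.DataFrame, labels: dict,
                               alpha: float = 0.01) -> pd.Index:
    """Keep tags whose expression differs significantly between two groups of libraries.

    counts: normalized tag counts, one row per tag, one column per library.
    labels: maps each library name to one of two group labels, e.g. "cancer" or "normal".
    """
    groups = sorted(set(labels.values()))
    assert len(groups) == 2, "the rank-sum test compares exactly two groups"
    cols_a = [c for c in counts.columns if labels[c] == groups[0]]
    cols_b = [c for c in counts.columns if labels[c] == groups[1]]

    selected = []
    for tag, row in counts.iterrows():
        _, p_value = ranksums(row[cols_a].values, row[cols_b].values)
        if p_value < alpha:          # reject "same population" at the given significance level
            selected.append(tag)
    return pd.Index(selected)

# Following the text, one could select tags for cancer vs. normal and then subtract
# tags that also separate bulk tissue from cell lines:
# cancer_tags = select_discriminating_tags(norm_counts, neoplastic_labels)
# source_tags = select_discriminating_tags(norm_counts, source_labels)
# subspace = cancer_tags.difference(source_tags)
```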


As described earlier, our preliminary results indicated that the tissue source (i.e., bulk tissue or cell line) can dominate the similarity between the tissues. We tried to weaken this kind of distortion by applying the test again to also determine the tags that have significantly different expression levels in bulk tissue versus cell line. Those tags were then subtracted from the set of potentially cancer relevant tags selected in the first step.
To demonstrate the effect of our missing value imputation on the subspace selection method, we first applied the Wilcoxon test to the SAGE data after error removal and normalization, but without imputing missing values. The Wilcoxon test selected 45,354 tags out of 58,524 when testing cancer versus normal. 722 tags out of the original 58,524 were selected when testing bulk tissue versus cell line. Only 49 of these tags were also present in the 45,354 tags selected by the first test. We removed these tags from our data set, leaving us with a dimension of 45,305. When clustered with OPTICS, we obtain the result shown in Figures 4a and 4b. The subspace selection resulted in a slightly more pronounced clustering structure than in the previous experiments. For instance, the figure shows a large prostate tissue cluster containing 13 libraries. However, most of the clusters are still very small and several of them are mixed with respect to tissue type and neoplastic state. Furthermore, a large number of libraries are not contained in any significant cluster.
Finally, the result of OPTICS on the SAGE data using all the pre-processing steps in combination is shown in Figures 5a and 5b. In this case, the Wilcoxon test selected 40,016 tags when testing cancer versus normal, and 186 of these were also selected when testing bulk tissue versus cell line (i.e., 39,830 tags remained). In this result the clusters are more prominent and all of them except one are pure in terms of tissue type and neoplastic state.
• Eight dense clusters formed: ovarian cancer cell line, brain cancer cell line, brain cancer bulk tissue, prostate tissue, pancreatic cancer, breast cancer cell line, normal brain, and normal breast bulk tissue.
• For the soundness of our cluster analysis method for heterogeneous SAGE libraries, it is important to validate that the libraries do not form clusters by laboratories alone. Spurious clusters may be formed due to artificial laboratory effects, and the issue is complicated by the fact that many laboratories produced only libraries of a single tissue type. In this regard, it is reassuring that meaningful clusters consisting of libraries from different laboratories have been found, and that libraries produced by the same laboratory have been grouped into different clusters based on biological similarity.
• The prostate tissue cluster is almost pure: only 3 out of 12 libraries are normal while the rest are prostate cancer. However, since all of them were made from bulk tissue samples, it is still possible that these 3 normal libraries contain a lot of heterogeneity, presumably reflecting the difficulty in the clinical dissection of prostate tumors. The prostate cluster is also interesting in that the libraries came from different laboratories (i.e., pp, sr, ri and ch), strongly suggesting that this cluster is formed for biological reasons.
• The ovarian cancer cluster did not appear in any of the previous results, and is surprisingly close to the brain cancer cell line cluster, which suggests

Fig. 4. Result of OPTICS on the subspace selected by the Wilcoxon test, with error removal and normalization but no missing tag imputation.


Fig. 5. Result of OPTICS on the subspace selected by the Wilcoxon test including error removal, normalization, and missing tag imputation.


that these two cancers may be related in some way. Furthermore, the ovarian libraries and the brain libraries were produced by two different laboratories. In a later section, we will investigate why these two types of cancers clustered together and for which tags they are similar.
• We also see that we have succeeded to some degree in focusing our data on tags that are related to cancer growth and development. For instance, brain cancer bulk tissue and normal brain bulk tissue have been separated, even though they were produced by the same laboratory (ri). But some further analysis may be needed, since the brain cancer bulk tissue and brain cancer cell line libraries fall into two different clusters, which means that there are still significant differences between bulk and cell line tissue in the selected subspace.
• We also see a larger breast cancer cell line cluster than in any of our previous results, and a normal brain cluster and a pancreatic cancer cluster have appeared for the first time.
These interesting observations are only possible when all four preprocessing steps (error removal, imputation of missing values, normalization, and subspace selection by the Wilcoxon test) are combined. It is worth emphasizing that the clusters are not only separated according to neoplastic state but also according to tissue type, although we did not utilize any tissue-specific information in the subspace selection process; this is important for an unbiased analysis of similarities and dissimilarities of cancers across different tissues.
In this investigation, using the limited resources available on the Web, we did not find any sign of a type of cancer having distinct subtypes. This may, however, be due to the fact that there are not enough libraries of any one type to detect subtypes if they exist. Only when more SAGE libraries for cancerous samples of the different tissue types are available will it be possible to investigate this problem more thoroughly.
The SAGE data clearly contains errors from various sources, each of which can be dealt with individually. The noise created by sequencing error can be repaired by removing ultra-low frequency tags. The problems created by truncated libraries can be reduced through missing value imputation. The discrepancies in the sizes of the various libraries can be dealt with by normalization, and the entire clustering structure can be enhanced and focused by using the Wilcoxon test to select the most relevant tags.
It is also clear at this point that the SAGE method is a valid way to measure gene expression. An interesting point in general is that these libraries were created in different laboratories of various research institutions across North America, and yet the libraries clustered very consistently by tissue type, source, and neoplastic state. This is evidence that the SAGE method itself is highly reproducible, an issue that has not been studied very much due to the prohibitive expense of producing duplicated SAGE libraries. Moreover, it can be seen that SAGE libraries cluster primarily by tissue type. Brain and breast tissues appear to undergo further separation by neoplastic state and tissue source, and ovarian and brain cancer cell lines appear to be more similar to each other than to any other tissue types, while some libraries


do not form clusters at all. However, it is also clear that more analysis of both the various tissues and the tags will be necessary.

3. FURTHER INVESTIGATIONS

The clustering structure shown in Figure 5 leads us to further questions about the SAGE data. For instance, it would be of interest to explore which tags are significantly different between the various clusters in order to determine what is unique about each cluster. It is also of value to analyze the properties of SAGE libraries made from different tissues, since some tissues formed into more clusters than others, as well as to study those libraries that did not form clusters or are part of a different cluster than expected. Below we describe our findings on two further investigations.

3.1 Identification of Discriminating Genes

Earlier we demonstrated the use of the Wilcoxon test in selecting an appropriate subspace that would accentuate the differences between cancerous and normal tissues. However, since the Wilcoxon test identifies attributes that are significantly different between any two groups, it can also be used to highlight the differences between different clusters. For example, let us consider the ovarian cancer cell line and brain cancer cell line clusters shown in Figure 5. It would be of interest to determine which tags have different expression levels between the libraries in the brain and ovarian cell line clusters and the normal brain and ovarian libraries (i.e., which transcripts are common to these two cell lines and which discriminate them from their normal counterparts). We expect the tags that passed the Wilcoxon test to represent a subset of genes that allow us to distinguish between normal brain/ovary libraries and the brain/ovarian cancer cell line libraries.
We used the UniGene cluster “reliable” mapping provided on the SAGE website, which maps SAGE tags to UniGene IDs that represent nonredundant sets of gene-oriented sequence clusters in GenBank (a molecular sequence database maintained at http://www.ncbi.nlm.nih.gov/). Using this mapping, 165 tags selected by this test were mapped to 169 genes, with 7 tags mapping to 2 genes. Upon examination, some of the genes were excluded from further literature research due to inadequate information. For the remaining genes, we looked up the entries in the OMIM database [Hamosh et al. 2002]. 88 of them did not map to any entries, while 63 of the rest mapped to entries not related to cancer, and 18 mapped to entries related to cancer.
Table I shows selected genes that we have associated with a role in cancer through literature research. Columns one, two, and three refer to UniGene cluster identification numbers, their description, and OMIM numbers, respectively. The fourth column summarizes the tag counts for the two types of libraries. For instance, in the second entry, the row indicates that among the 9 normal libraries, 8 do not have the gene expressed, and the remaining one has an expression level between 2 and 5. The PubMed ID is included for a quick reference to the literature used in this study. Furthermore, the last column indicates whether there is general consensus within the literature. All the counts are normalized to the lowest value in the table.




Table I. Information about Selected Genes
(N = normal libraries, C = cancer cell line libraries; the Expression cells give the number of libraries whose tag count is 0 / 1 / ≥2 / ≥5.)

UniGene Cluster | UniGene Cluster Description | OMIM Number | Expression N (0/1/≥2/≥5) | Expression C (0/1/≥2/≥5) | PubMed ID | Lit.
Hs.179565 | MCM3 Minichromosome Maintenance Deficient 3 | 602693 | 4 / 2 / 3 / 0 | 1 / 1 / 5 / 7 | 11801723, 10653597 |
Hs.334562 | CDC2 Cell Division Cycle 2 | 116940 | 8 / 0 / 1 / 0 | 3 / 1 / 9 / 1 | 11836499 |
Hs.241257 | LTBP1 Latent Transforming Growth Factor Beta Binding Protein 1 | 150390 | 6 / 2 / 1 / 0 | 2 / 0 / 7 / 5 | 11376559 |
Hs.239 | FOXM1 Forkhead Box M1 | 602341 | | | 11682060 |
Hs.45231 | LDOC1 Leucine Zipper, down regulated by cancer 1 (AAGGTGGCAT) | n/a | | | 10403563 | ×

Due to space limitations, we include only five genes in the following discussion.
The first entry is one of the discriminating genes identified: the minichromosome maintenance deficient 3 (MCM3) gene. The MCM3 protein is involved in eukaryotic genome replication and participates in the regulation of DNA duplication. It has been used as a cell growth and differentiation marker for cancer prognosis. Although MCM3 is also expressed in nonproliferating cells as described in the reference paper, only five of nine normal libraries in our data expressed MCM3, whereas thirteen of fourteen cancer cell line libraries expressed MCM3 with a high tag count.
The second entry is CDC2. Together with BRCA1, CDC2 is involved in modulating the cell cycle arrest process, specifically at the G2/M checkpoint [Yarden et al. 2002]. In OMIM entry 116940, the CDC2 protein is proposed to be involved in the resistance to drug-induced cell death in breast cancer due to its role in the regulation of the cell division cycle. Of the fourteen brain/ovarian cancer cell line libraries, eleven of them expressed CDC2, ten more than the normal libraries.
For the third entry, in a study of ovarian cancer using the differential display method, Higashi et al. [2001] identified the LTBP1 transcript as one of the most highly differentially expressed transcripts in both ovarian cancer tissue and cell line material. In other malignant tissues LTBP genes have been associated with reduced expression [Oklu and Hesketh 2000] and elevated levels in their surrounding extracellular matrix and stromal tissues. This helps confirm our analysis that LTBP1 is a discriminating gene in resolving between ovarian cancer, normal tissues, and other malignancies, as well as indicating that LTBP1 is also upregulated in brain cancers.
FOXM1 (previously known as Trident) is a transcription factor expressed in many cell types undergoing proliferation [Leung et al. 2001]. Its proposed


function is within the process of cell cycle regulation through the control of expression of Cyclin B1. Thirteen cancer cell line libraries (8/8 brain & 5/6 ovary) showed elevated levels of FOXM1 transcripts compared to only one of the normal libraries. Furthermore, three libraries that expressed FOXM1 were also found to express Cyclin B1 but not Cyclin D1, as documented by Leung et al. [2001].
All of these are examples showing that our methodology seems to produce relevant results. In fact, there is little in the literature that directly links the genes to brain and ovarian cancer. Our approach serves to establish previously undetermined linkages and commonalities between brain and ovarian cancer.
We note that not every gene we found agrees with the previously established literature. For example, LDOC1 (leucine zipper down regulated in cancer) was identified as being ubiquitously expressed in normal tissues and downregulated in cancer cell lines by Nagasaki et al. [1999]. Our examination points to a different conclusion. In fact, analysis aside, LDOC1 is not ubiquitously expressed in our normal brain/ovary SAGE libraries as claimed in the paper. Moreover, instead of being down-regulated, the tag counts we observed were significantly higher in brain/ovary cancer cell line libraries than in their normal counterparts. However, one aspect of the SAGE technology is that different transcripts may produce similar tags. Therefore it is possible that this particular tag also derives from a currently undiscovered transcript, although the relatively high expression in cell line material suggests that it would be relatively well represented in existing resources of ESTs (Expressed Sequence Tags) and cDNAs (complementary copies of mRNAs).

3.2 Gene Expression Classification

The analysis just presented provides a method for identifying genes that discriminate one group of libraries from another. The SAGE libraries also present an opportunity for us to consider a related question: whether different cancerous tissues have different identities at the gene expression level. To evaluate this question, we adopt a classification approach. For each case, we randomly selected 50 to 90 percent of normal and cancerous libraries of a specific tissue (e.g., brain) and labelled them as the training libraries. Then we randomly picked a testing library from the remaining ones. The goal was to try to predict whether the testing library is cancerous or normal. Due to the uneven number of libraries in each state (i.e., normal/cancerous, cell line/bulk) in the four tissues, it was fairer just to classify the libraries according to their neoplastic state and disregard whether they are bulk tissue or cell line. The dissimilarities between the testing library and the training libraries were measured using the correlation coefficient, and the closest training library was identified. The testing library was then predicted to be cancerous or normal, depending on whether the closest training library was cancerous or normal. We repeated this procedure a hundred times for different randomizations, and recorded the percentage of times the correct prediction was made. The results are shown in Figure 6.


Fig. 6. Nearest Neighbour Classification with Random Sampling using correlation coefficient. The x-axis of the graph shows the percentage of libraries used as training libraries. The y-axis shows the average percent accuracy over a hundred runs.
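The following is a minimal sketch of this nearest-neighbour evaluation, reusing the correlation-based dissimilarity from the earlier sketches; the split fractions and repetition count follow the text, while the data layout, function names, and random-seed handling are assumptions made for illustration.

```python
import numpy as np
import pandas as pd

def nn_accuracy(counts: pd.DataFrame, neoplastic: dict, train_fraction: float,
                runs: int = 100, seed: int = 0) -> float:
    """Estimate 1-NN accuracy for predicting cancerous vs. normal from tag profiles.

    counts: normalized tag counts for the libraries of ONE tissue (columns = libraries).
    neoplastic: maps each library name to "cancer" or "normal".
    """
    rng = np.random.default_rng(seed)
    libs = list(counts.columns)
    dissim = 1.0 - counts.corr()          # correlation-based dissimilarity between libraries
    correct = 0
    for _ in range(runs):
        n_train = max(1, int(train_fraction * len(libs)))
        train = list(rng.choice(libs, size=n_train, replace=False))
        rest = [l for l in libs if l not in train]
        test = rng.choice(rest)
        nearest = dissim.loc[test, train].idxmin()   # closest training library
        correct += neoplastic[nearest] == neoplastic[test]
    return correct / runs
```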

Both brain and breast cancer libraries have high prediction accuracy, suggesting that these two cancer types have strong identities. Furthermore, the more libraries used for training, the higher the accuracy. In contrast, both prostate and ovarian cancer libraries have low accuracy. Specifically, the numbers of normal and cancerous prostate libraries are 4 and 10, respectively. In this case, always predicting cancer would already give an accuracy of 71% (10 out of 14). The curve in Figure 6 for prostate indicates that the training libraries serve little purpose. This suggests that normal prostate and cancerous prostate libraries are very similar to each other, and there seems to be a lack of a definitive signature of prostate cancer. As a consequence, the increased number of training libraries did not improve the performance, but indeed confused matters further. This may explain the inverse relationship between the percentage of training libraries used and the accuracy of the prediction. A similar comment applies to ovarian tissue.

3.3 Outliers

In general, the clustering structure shown in Figure 5 is quite strong, and there are very few instances of mixed clusters or outliers. The majority of outliers are libraries of tissue types of which there are only 2 instances, such as kidney and skin tissue libraries. It is interesting to note that there are 7 colon tissue libraries which were all closer to each other than to any other tissue type but did not form a pronounced cluster. This may be because the colon tissue libraries were not all from the same tissue source, or perhaps because colon tissues vary more in gene expression from person to person than other tissues. It is also interesting to note that the 6 libraries that were the furthest from all other libraries in the data set were all truncated, which meant that


there were no complete libraries available to perform missing tag imputation. This is additional evidence that the amount of data in a library affects how it clusters.
The only true cases of libraries of different tissues forming a single cluster are the ovarian and brain cancer cell line cluster and the vascular and brain cluster. In the former case, the two tissues form sub-clusters of a larger combined cluster as previously discussed. In the latter case, the two vascular tissue libraries form a small cluster with a brain cancer cell line library. However, with only three libraries in total, the significance of this cluster is not clear. Because the brain cancer library is not close to the other brain cancer libraries, it is possible that this library represents a subtype of brain cancer that is similar to vascular tissue rather than to ovarian cancer tissue.

4. CONCLUSION

In this article we proposed a method for clustering SAGE data to detect similarities and dissimilarities between different types of cancer at the sub-cellular level. We introduced four preprocessing steps to reduce errors, restore missing data, normalize the data, and select an appropriate subspace based on the Wilcoxon test. After all of these steps were performed, the clustering algorithm OPTICS produced a promising hierarchical clustering structure in which the SAGE libraries are grouped according to tissue type and neoplastic state, showing a possible relationship between brain and ovarian cancer.
We have shown that the SAGE data appears to be highly reproducible: although the data was produced in various laboratories, it displays a consistent clustering structure when appropriate preprocessing steps are applied. We have also shown that the Wilcoxon test can be used to identify discriminating tags. Furthermore, we have shown that different cancerous tissues have different degrees of identity at the gene expression level. This suggests that the strong-identity ones, like brain and breast, may be better candidates for the kind of discriminating gene identification described here than the weak-identity ones.
The dissimilarities between the libraries in the subgroups of several tissue types were studied. Brain tissues were found to be completely separated by subgroup: every brain tissue library of every subgroup was found to be closest to other brain libraries of the same subgroup. One breast cancer bulk tissue library was found to be closer to a normal breast bulk tissue library than to other breast cancer tissue libraries, perhaps because of contamination by surrounding cells in the sample. Prostate tissue library subgroups were all close to each other, suggesting that gene expression in cancerous prostate tissues is not very different from that in normal tissues. Normal ovarian cell line tissues were found to be close to ovarian cancer cell line tissues, which may mean that normal cell lines may be used as a model for cancer in ovarian tissue.
This is, however, preliminary work in several respects. Further research is necessary to investigate the significance of our findings, and more data is required before subtypes of cancers can be discovered. We based our analysis on 88 SAGE libraries, but more libraries are added over time. However, SAGE data


is expensive to produce and it may take some time before a large enough data set is available. In addition, more aggressive methods for subspace selection could be explored to reveal more subtle similarities between different cancers and dissimilarities within a specific cancer. Nevertheless, producing more SAGE data is important because, as our experiments indicate, it contains valuable information that is not available elsewhere.
The main purpose of this analysis was to identify cancer types and subtypes, which is why we focused on clustering only the SAGE libraries. In the future it may also be interesting to cluster the SAGE tags. Understanding the clustering structure of the tags will not be trivial, however, because of the large number of tags and the fact that the majority of tags have not yet been mapped to a gene.

ACKNOWLEDGMENTS

We would like to thank the anonymous TOIS reviewers and the editor whose constructive comments contributed greatly to clarity and precision of this article. REFERENCES ALIZADEH, A. A., EISEN, M. B., DAVIS, R. E., MA, C., LOSSOS, I. S., ROSENWALD, A., BOLDRICK, J. C., SABET, H., TRAN, T., YU, X., POWELL, J. I., YANG, L., MARTI, G. E., MOORE, T., HUDSON, J., LU, L., LEWIS, D. B., TIBSHIRANI, R., SHERLOCK, G., CHAN, W. C., GREINER, T. C., WELSENBURGER, D. D., ARMITAGE, J. O., WARNKE, R., LEVY, R., WILSON, W., GREVER, M. R., BYRD, J. C., BOTSTEIN, D., BROWN, P. O., AND STAUDT, L. M. 2000. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 403, 3 (Feb.), 503–511. ALON, U., BARKAI, N., NOTTERMAN, D. A., GISH, K., YBARRA, S., MACK, D., AND LEVINE, A. J. 1999. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci USA, 96, 6745–6750. ANKERST, M., BREUNIG, M., KRIEGEL, H.-P., AND SANDER, J. 1999. OPTICS: Ordering Points to identify the clustering structure. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Philadelphia, PA, June 1999, ACM Press, New York, NY, 49–60. BEN-DOR, A., SHAMIR, R., AND YAHKINI, Z. 1999. Clustering gene expression patterns. J. Comput. Biol. 6, 281–297. BEN-DOR, A., BRUHN, L., FRIEDMAN, N., NACHMAN, I., SCHUMMER, M., AND YAKHINI, Z. 2000. Tissue classification with gene expression profiles. J. Comput. Biol. 7, 559–584. ´ , E. C., GREENHUT, S. F., SCHAEFER, C. F., SHOEMAKER, J., POLYAK, K., MORIN, P. J., BOON, K., OSORIO BUETOW, K. H., STRAUSBERG, R. L., DE SOUZA, S. J., AND RIGGINS, G. J. 2002. An anatomy of normal and malignant gene expression. Proc. Natl. Acad. Sci. USA 99, 11287–11292 . BUCKHAULTS, P., ZHANG, Z., CHEN, Y. C., WANG, T. L., ST CROIX, B., SAHA, S., BARDELLI, A., MORIN, P. J., POLYAK, K., HRUBAN, R. H., VELCULESCU, V. E., AND SHIH, IEM. 2003. Identifying tumor origin using a gene expression-based classification map. Cancer Res. 15, 63, 14, 4144–4149. EDGAR, R., DOMRACHEV, M., AND LASH, A. E. 2002. Gene expression omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 30, 207–210. EISEN, M. B., SPELLMAN, P. T., BROWN, P. O., AND BOTSTEIN, D. 1998. Cluster analysis and display of genome-wide expression patterns. Proc. Natl. Acad. Sci. USA 95, 25, 14863–14868. GRAY, J. W. AND COLLINS, C. 2000. Genome changes and gene expression in human solid tumors. Carcinogenesis 21, 443–52. GOLUB, T. R., SLONIM, D. K., TAMAYO, P., HUARD, C., GAASENBEEK, M., MESIROV, J. P., COLLER, H., LOH, M. L., DOWNING, J. R., CALIGIURI, M. A., BLOOMFIELD, C. D., AND LANDER, E. S. 1999. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 286, 531–537. ACM Transactions on Information Systems, Vol. 23, No. 1, January 2005.


HAMOSH, A., SCOTT, A. F., AMBERGER, J., BOCCHINI, C., VALLE, D., AND MCKUSICK, V. A. 2002. Online mendelian inheritance in man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Res. 30, 52–55. HAN, J. AND KAMBER, M. 2000. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, San Francisco, CA. HASHIMOTO, S., NAGAI, S., SESE, J., SUZUKI, T., OBATA, A., SATO, T., TOYODA, N., DONG, H. Y., KURACHI, M., NAGAHATA, T., SHIZUNO, K., MORISHITA, S., AND MATSUSHIMA, K. 2003. Gene expression profile in human leukocytes. Blood 101, 9, 3509–3513. HIGASHI, T., SASAGAWA, T., INOUE, M., OKA, R., SHUANGYING, L., AND SAIJOH, K. 2001. Overexpression of latent transforming growth factor-beta 1 (TGF-beta 1) binding protein (LTBP-1) in association with TGF-beta 1 in ovarian carcinoma. Jpn. J. Cancer Res. 92, 2, 506–515. LAL, A., LASH, A. E., ALTSCHUL, S. F., VELCULESCU, V., ZHANG, L., MCLENDON, R. E., MARRA, M. A., PRANGE, C., MORIN, P. J., POLYAK, K., PAPADOPOULOS, N., VOGELSTEIN, B., KINZLER, K. W., STRAUSBERG, R. L., AND RIGGINS, G. J. 1999. A public database for gene expression in human cancers. Cancer Res. 59, 5403–5407. LASH, A. E., TOLSTOSHEV, C. M., WAGNER, L., SCHULER, G. D., STRAUSBERG, R. L., RIGGINS, G. J., AND ALTSCHUL, S. F. 2000. SAGEmap: A public gene expression resource. Genome Res, 10, 7, 1051– 1060. LEUNG, T. W., LIN, S. S., TSANG, A. C., TONG, C. S., CHING, J. C., LEUNG, W. Y., GIMLICH, R., WONG, G. G., AND YAO, K. M. 2001. Over-expression of FoxM1 stimulates cyclin B1 expression. FEBS Lett. 507, 59–66. NACHT, M., DRACHEVA, T., GAO, Y., FUJII, T., CHEN, Y., PLAYER, A., AKMAEV, V., COOK, B., DUFAULT, M., ZHANG, M., ZHANG, W., GUO, M., CURRAN, J., HAN, S., SIDRANSKY, D., BUETOW, K., MADDEN, S. L., AND JEN, J. 2001. Molecular characteristics of non-small cell lung cancer. Proc. Natl. Acad. Sci. USA. 98, 26, 15203–15208. NCBI (NATIONAL CENTER FOR BIOTECHNOLOGY INFORMATION) SAGE: Measuring Gene Expression, http://www.ncbi.nlm.nih.gov/SAGE. NAGASAKI, K., MANABE, T., HANZAWA, H., MAASS, N., TSUKADA, T., AND YAMAGUCHI, K. 1999. Identification of a novel gene, LDOC1, down-regulated in cancer cell lines. Cancer Lett. 140, 227–234. NG, R. T. AND HAN, J. 1994. Efficient and effective clustering methods for spatial data mining. In Proceedings of the 20th International Conference on Very Large Data Bases, Santiago, Chile, September 1994, Morgan Kaufmann Publishers, San Francisco, CA, 144–155. NG, R. T., SANDER, J., AND SLEUMER, M. 2001. Hierarchical cluster analysis of SAGE data for cancer profiling. Workshop on Data Mining in Bioinformatics. In Conjunction with 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, August 2001. OKLU, R. AND HESKETH, R. 2000. The latent transforming growth factor beta binding protein (LTBP) family. Biochem J. 352, Pt 3, 601–610. PEROU, C. M., JEFFREY, S. S., VAN DE RIJN, M., REES, C. A., EISEN, M. B., ROSS, D. T., PERGAMENSCHIKOV, A., WILLIAMS, C. F., ZHU, S. X., LEE, J. C. F., LASHKARI, D., SHALON, D., BROWN, P. O., AND BOTSTEIN, D. 1999. Distinctive gene expression patterns in human mammary epithelial cells and breast cancers. Natl. Acad. Sci USA 96, 9212–9217. PORTER, D. A., KROP, I. E., NASSER, S., SGROI, D., KAELIN, C. M., MARKS, J. R., RIGGINS, G., AND POLYAK, K. 2001. A SAGE (serial analysis of gene expression) view of breast tumor progression. Cancer Res. 61, 15, 5697–702. SANDER, J., QIN, X., LU, Z., NIU, N., AND KOVARSKY, A. 2003. 
Automatic extraction of clusters from hierarchical clustering representations. In Proceedings of the 7th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Seoul, Korea, April/May 2003. Lecture Notes in Artificial Intelligence 2637, Springer, Berlin, Germany, 75–87. STOLLBERG, J., URSCHITZ, J., URBAN, Z., AND BOYD, C. D. 2000. A Quantitative Evaluation of SAGE. Genome Res. 10, 1241–1248. STRAUSBERG, R. L, BUETOW, K. H., EMMERT-BUCK, M. R., AND KLAUSNER, R. D. 2000. The cancer genome anatomy project: Building an annotated index. Trends Genet. 16, 3, 103–106. TANNER, M. M., GRENMAN, S., KOUL, A., JOHANNSSON, O., MELTZER, P., PEJOVIC, T., BORG, ˚A., AND ISOLA, J. J. 2000. Frequent Amplification of Chromosomal Regoin 20q12-q13 in Ovarian Cancer. Clin. Cancer Res. 6, 1833–1839. ACM Transactions on Information Systems, Vol. 23, No. 1, January 2005.


TAVAZOIE, S, HUGHES, J. D., CAMPBELL, M. J., CHO, R. J., AND CHURCH, G. M. 1999. Systematic determination of genetic network architecture. Nature Genetics 22, 281–285, TROYANSKAYA, O., CANTOR, M., SHERLOCK, G., BROWN, P. HASTIE, T., TIBSHIRANI, R., BOTSTEIN, D., AND ALTMAN, R. B. 2001. Missing value estimation methods for DNA microarrays. Bioinformatics 17, 6, 520–525. VAN RUISSEN, F., JANSEN, B. J., DE JONGH, G. J., VAN VLIJMEN-WILLEMS, I. M., AND SCHALKWIJK, J. 2002. Differential gene expression in premalignant human epidermis revealed by cluster analysis of serial analysis of gene expression (SAGE) libraries. FASEB J. 16, 2, 246–248. VELCULESCU, V. E., ZHANG, L., VOGELSTEIN, B., AND KINZLER, K. W. 1995. Serial analysis of gene expression. Science 270, 484–487. WILCOXON, F. 1945. Individual Comparisons by Ranking Methods. Biometrics 1, 80–83. YARDEN, R. I., PARDO-REOYO, S., SGAGIAS, M., COWAN, K. H., AND BRODY, L. C. 2002. BRCA1 regulates the G2/M checkpoint by activating Chk1 kinase upon DNA damage. Nature Genetics 30, 285–289. YEUNG, K. Y., FRALEY, C., MURUA, A., RAFTERY, A. E., AND RUZZO, W. L. 2001. Model-based clustering and data transformations for gene expression data. Bioinformatics 17, 977–987. ZHANG, L., ZHOU, W., VELCULESCU, V. E., KERN, S. E., HRUBAN, R. H., HAMILTON, S. R., VOGELSTEIN, B., AND KINZLER, K. W. 1997. Gene expression profiles in normal and cancer cells. Science 276, 1268–1272. Received October 2003; revised June 2004, August 2004; accepted August 2004


Historical Spatio-Temporal Aggregation

YUFEI TAO
City University of Hong Kong
and
DIMITRIS PAPADIAS
Hong Kong University of Science and Technology

Spatio-temporal databases store information about the positions of individual objects over time. However, in many applications such as traffic supervision or mobile communication systems, only summarized data, like the number of cars in an area for a specific period, or phone-calls serviced by a cell each day, is required. Although this information can be obtained from operational databases, its computation is expensive, rendering online processing inapplicable. In this paper, we present specialized methods, which integrate spatio-temporal indexing with pre-aggregation. The methods support dynamic spatio-temporal dimensions for the efficient processing of historical aggregate queries without a priori knowledge of grouping hierarchies. The superiority of the proposed techniques over existing methods is demonstrated through a comprehensive probabilistic analysis and an extensive experimental evaluation.

Categories and Subject Descriptors: H.2 [Database Management]; H.3 [Information Storage and Retrieval]
General Terms: Algorithms, Experimentation
Additional Key Words and Phrases: Aggregation, access methods, cost models

This research was supported by the grants CityU 1163/04E and HKUST 6197/02E from Hong Kong RGC. Authors’ addresses: Y. Tao, Department of Computer Science, City University of Hong Kong, Tat Chee Avenue, Hong Kong; email: [email protected]; D. Papadias, Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong; email: [email protected].
ACM Transactions on Information Systems, Vol. 23, No. 1, January 2005, Pages 61–102.

1. INTRODUCTION

Spatio-temporal databases have received considerable attention during the past few years due to the accumulation of large amounts of multi-dimensional data evolving in time, and the emergence of novel applications such as traffic supervision and mobile communication systems. Research has focused on modeling [Sistla et al. 1997; Güting et al. 2000; Forlizzi et al. 2000], historical information retrieval [Vazirgiannis et al. 1998; Pfoser et al. 2000; Kollios


et al. 2001; Tao and Papadias 2001], indexing of moving objects [Kollios et al. 1999; Agarwal et al. 2000; Saltenis et al. 2000; Hadjieleftheriou et al. 2002; Saltenis and Jensen 2002; Tao et al. 2003a], selectivity estimation [Choi and Chung 2002; Hadjieleftheriou et al. 2003; Tao et al. 2003b], and so on. All these approaches assume that object locations are individually stored, and queries retrieve objects that satisfy some spatio-temporal condition (e.g., mobile users inside a query window during a time interval, or the first car expected to arrive at a destination, etc.).

The motivation of this work is that many (if not most) current spatio-temporal applications require summarized results, rather than information about individual objects. As an example, traffic supervision systems monitor the number of cars in an area of interest [Denny et al. 2003], instead of their IDs. Similarly, mobile phone companies use the number of phone calls per cell in order to identify trends and prevent potential network congestion. Other applications focus directly on numerical aggregate data with spatial and temporal aspects, rather than moving objects. As an example, consider a pollution monitoring system, where the readings from several sensors are fed into a database that arranges them in regions of similar or identical values. These regions should then be indexed for the efficient processing of queries such as "find the areas near the center with the highest pollution levels yesterday."

Although summarized results can be obtained using conventional operations on individual objects (i.e., accessing every single record qualifying the query), the ability to manipulate aggregate information directly is imperative in spatio-temporal databases for several reasons. First, in some cases personal data should not be stored due to legal issues. For instance, keeping historical locations of mobile phone users may violate their privacy. Second, the individual data may be irrelevant or unavailable, as in the traffic supervision system mentioned above. Third, although individual data may be highly volatile and involve extreme space requirements, the aggregate information usually remains fairly constant for long periods, thus requiring considerably less space for storage. For example, although the distinct cars in a city area usually change rapidly, their number at each timestamp may not vary significantly, since the number of objects entering the area is similar to that exiting. This is especially true if only approximate information is kept; instead of the precise number of objects, we store values denoting ranges such as high or low traffic.

We consider, as the finest aggregation unit, a set of regions that can be static (e.g., road segments) or volatile (e.g., areas covered by antenna cells, which can change their extents according to the weather conditions, allocated capacity, etc.). Each region is associated with a set of measures (e.g., number of cars in a road segment, phone calls per cell), whose values are continuously updated. We aim at retrieving aggregate measures over regions satisfying certain spatio-temporal conditions, for example, "return the number of cars in the city center during the last hour" (a formal problem definition is presented in Section 3).

An important fact that differentiates spatio-temporal from conventional aggregation is the lack of predefined groupings on the aggregation units.
Such groupings (e.g., product types) are taken into account in traditional data warehouses so that queries of the form "find the average sales for all products grouped-by product type" can be efficiently answered. In spatio-temporal scenarios, the spatial and temporal extents of queries are not confined to predefined groupings, and cannot be predicted (e.g., queries can inquire about the traffic situation in any district of arbitrary size at any time interval).

This paper presents several multi-tree indexes that combine the spatial and temporal attributes to accelerate query processing involving static or volatile spatial dimensions. The proposed indexes support ad hoc groupings, arbitrary query windows, and historical time intervals. Furthermore, we perform a comprehensive analysis for the existing and proposed solutions, which provides significant insight into their behavior and reveals the superiority of our methods. This analysis leads to a set of cost models directly applicable to query optimization in practice.

The rest of the paper is organized as follows. Section 2 describes related work in the context of spatial and spatio-temporal databases and conventional data warehouses. Section 3 formally describes the problem and elaborates its characteristics. Section 4 presents the proposed solutions, while Section 5 analyzes their performance. Section 6 contains an extensive experimental evaluation, and Section 7 concludes the paper with a discussion on future work.

2. RELATED WORK

Section 2.1 introduces the spatial and spatio-temporal indexes fundamental to our discussions. Then, Section 2.2 surveys existing techniques for multi-dimensional aggregate processing, and Section 2.3 reviews traditional data warehouses and their extensions for spatio-temporal data.

2.1 Spatial and Spatio-Temporal Access Methods

Fig. 1. R-tree example.

Spatial access methods [Gaede and Günther 1998] manage multi-dimensional (typically 2D or 3D) rectangles, and are often optimized for the window query, which retrieves the objects intersecting a query box. One of the most popular indexes is the R-tree [Guttman 1984] and its variations, most notably the R*-tree [Beckmann et al. 1990]. Each intermediate entry r of an R-tree has the form ⟨r.MBR, r.pointer⟩, where r.MBR is the minimum bounding rectangle that tightly encloses all objects in its sub-tree pointed to by r.pointer. For leaf entries, r.MBR stores the corresponding data rectangle whose actual record is referenced by r.pointer. Figure 1(a) illustrates four 2D rectangles R1, . . . , R4, together with the node MBRs of the corresponding R-tree (node capacity = 2) shown in Figure 1(b). Based on their spatial proximity, R1, R2 are grouped together into node N1 (whose parent entry is R5) and R3, R4 into N2 (parent entry R6). Given a window query qR (e.g., the grey rectangle in Figure 1(a)), the qualifying objects (i.e., R1, R2, R3) are retrieved by visiting those nodes whose MBRs intersect qR.


Fig. 2. The multi-version R-tree.

A spatio-temporal index, on the other hand, manages moving objects. In Vazirgiannis et al. [1998] the movements of 2D rectangles are modeled as 3D boxes indexed with a 3DR-tree. Specifically, the temporal projection of a box denotes the period when the corresponding object remains static, while the spatial projection corresponds to the object's position and extents during that period. Whenever an object moves to another position, a new box is created to represent its new static period, position, and extents. A spatio-temporal window query involves, in addition to a spatial region qR, a time interval qT, and returns objects intersecting qR during qT. If we also model the query as a 3D box (bounding qR and qT), the qualifying objects are those whose 3D representation intersects the query box. A similar idea is applied in Pfoser et al. [2000] for storing objects' trajectories.

While the 3DR-tree stores all data versions in a single tree, the partially persistent technique [Becker et al. 1996; Varman and Verma 1997; Salzberg and Tsotras 1999] maintains (in a space-efficient manner) a separate (logical) 2D R-tree for each timestamp, indexing the regions that are alive at this timestamp. The motivation is that the number of records valid at a timestamp is much lower than the total number of data versions in history; hence, a query with a short interval (compared to the history length) only needs to search a small number of R-trees, each indexing a limited number of objects. A popular index is the Multi-version R-tree (MVR-tree) [Kumar et al. 1998; Tao and Papadias 2001]. An entry r has the form ⟨r.MBR, [r.tst, r.ted], r.pointer⟩, where [r.tst, r.ted] denotes the lifespan: the time interval during which r was alive (r.ted = "*" implies that the entry is still alive at the current time). For leaf entries, r.MBR denotes the MBR of the corresponding object, while for intermediate entries it encloses all the child entries alive in its lifespan. The semantics of r.pointer are similar to the ordinary R-tree. Figure 2(a) shows an example where R1 moves to a new position R1′ at timestamp 5 (triggering the change of the parent entry R5 to R5′), and Figure 2(b) illustrates the corresponding MVR-tree. The (logical) R-trees for time interval [1, 4] involve entries in nodes N1, N2, N4 (observe the lifespans of their parent entries), while starting from timestamp 5, the logical trees consist of nodes N5 and N3, which replace N4 and N1, respectively. Note that N2 is shared (i.e., it is the child node of both N4 and N5) because none of its objects issued an update.


The window query algorithm of the MVR-tree is the same as that of normal R-trees, except that search is performed in the logical trees responsible for the query timestamps. If the number of involved timestamps is small, only a few R-trees are accessed, in which case the MVR-tree is more efficient than the 3DR-tree. This benefit comes, however, at the cost of data duplication. In Figure 2, for example, although region R2 does not issue any update, two separate copies R2, R2′ are stored in N1, N3 respectively. As a result, the MVR-tree performs worse for queries involving long temporal intervals and consumes more space than the corresponding 3DR-tree [Tao and Papadias 2001].

2.2 Multi-Dimensional Aggregate Methods

The aggregate R-tree (aR-tree) [Jurgens and Lenz 1998; Papadias et al. 2001] augments traditional R-trees with summarized information. Figure 1(c) shows an example aR-tree for the regions of Figure 1(a). Each leaf entry contains a set of numerical measures, which are the objectives of analysis (e.g., the number of users in a cell, the number of phone calls made). The measures of intermediate entries are computed using some distributive¹ aggregation function (e.g., sum, count, max), and summarize the information in the corresponding subtrees. In Figure 1(c) we assume that there is a single measure per leaf entry (i.e., data region); the measure for intermediate entries is based on the sum function—the measure of entry R5 equals the sum of measures of R1 and R2 (e.g., the total number of users in the regions indexed by its subtree). The same concept has been applied to a variety of indexes [Lazaridis and Mehrotra 2001].

The aR-tree (and other multi-dimensional aggregation structures) aims at the efficient processing of the window aggregate query. Such a query specifies a window qR and returns the aggregated measure of the regions intersecting qR (instead of reporting them individually). For instance, if the query window qR of Figure 1(a) is applied to the aR-tree of Figure 1(c), the result should be 150 + 75 + 132 (i.e., the sum of measures of regions R1, R2, R3). Since R5.MBR is covered by qR, all the objects in its sub-tree must satisfy the query. Thus, the measure (225) stored with R5 is aggregated directly, without accessing its child node. On the other hand, since R6.MBR partially intersects qR, its sub-tree must be visited to identify the qualifying regions (only R3). Hence, the query is answered with only 2 node accesses (root and N2), while a traditional R-tree requires 3 accesses.

Multi-dimensional aggregate processing has also been studied theoretically, leading to several interesting results. Zhang et al. [2001] propose the MVSB-tree, which efficiently solves a window aggregate query on two-dimensional horizontal interval data (i.e., find the number of intervals intersecting a query window) in O(log_B(N/B)) I/Os using O((N/B) log_B(N/B)) space, where N is the dataset cardinality and B the disk page capacity. Their idea is to transform a query into four "less-key-less-time" and two "less-key-single-time" queries, which are supported by two separate structures that constitute a complete MVSB-tree.

¹ A function fagg is distributive [Gray et al. 1996] if, given S1 ∪ S2 = S and S1 ∩ S2 = ∅, fagg(S) can be obtained from fagg(S1) and fagg(S2); namely, the aggregate result for S can be computed by further aggregating the disjoint subsets S1, S2.
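To make the pruning logic of the window aggregate query concrete, the following Python sketch mirrors the traversal described above. The node layout (an entry holds an MBR, a stored aggregate, and an optional child pointer) and the names Entry, Node and window_aggregate are ours, simplified for illustration; they are not the paper's actual structures.

import dataclasses
from typing import List, Optional

@dataclasses.dataclass
class Entry:
    mbr: tuple          # (x1, y1, x2, y2)
    aggr: float         # pre-aggregated measure of the subtree (or the region's measure at a leaf)
    child: Optional["Node"] = None   # None for leaf entries

@dataclasses.dataclass
class Node:
    entries: List[Entry]

def intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def contains(q, r):
    return q[0] <= r[0] and q[1] <= r[1] and r[2] <= q[2] and r[3] <= q[3]

def window_aggregate(node, q):
    """Sum of measures of all regions intersecting window q (distributive function: sum)."""
    total = 0.0
    for e in node.entries:
        if contains(q, e.mbr):
            total += e.aggr              # whole subtree qualifies: use the stored aggregate
        elif intersects(q, e.mbr):
            if e.child is None:          # a partially covered leaf entry still qualifies
                total += e.aggr
            else:                        # descend only into partially overlapping subtrees
                total += window_aggregate(e.child, q)
    return total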


Fig. 3. A data cube example.

This solution also answers aggregate queries on 2D points (e.g., find the number of points in a query window) with the same performance, by treating each point as a special interval with zero length. The aP-tree [Tao et al. 2002b] achieves the same time and space complexity using a simpler conversion of a window aggregate query to two "vertical range queries." Govindarajan et al. [2003] present the CRB-tree, which further lowers the space consumption and supports data points of arbitrary dimensionality. The above techniques target point/interval objects (they are inapplicable to regions), while Zhang et al. [2002] develop two versions of the ECDF-B-tree for answering aggregate queries on rectangular data with different space-query time tradeoffs. Specifically, in the d-dimensional space, the first version consumes O((N/B) · log_B^{d-1}(N/B)) space and answers a query in O(B · log_B^d(N/B)) I/Os, while the corresponding complexities of the second version are O(N · B^{d-2} · log_B^{d-1}(N/B)) (for space) and O(log_B^d(N/B)) (for query cost). Both versions, however, require relatively high space consumption, limiting their applicability in practice. Aggregate processing on one-dimensional intervals has also been addressed in the context of temporal databases [Kline and Snodgrass 1995; Gendrano et al. 1999; Moon et al. 2000; Yang and Widom 2003]. Zhang et al. [2002, 2003] study spatial and temporal aggregation over data streams.

2.3 Data Warehouses

A considerable amount of related research has been carried out on data warehouses and OLAP in the context of relational databases. The most common conceptual model for data warehouses is the multi-dimensional data view. In this model, each measure depends on a set of dimensions, for example, region and time, and thus is a value in the multi-dimensional space. A dimension is described by a domain of values (e.g., days), which may be related via a hierarchy (e.g., day-month-year). Figure 3 illustrates a simple case, where each cell denotes the measure of a region at a certain timestamp. Observe that although regions are 2-dimensional, they are mapped as one dimension in the warehouse.


The star schema [Kimball 1996] is a common way to map a data warehouse onto a relational database. A main table (called the fact table) F stores the multi-dimensional array of measures, while auxiliary tables D1, D2, . . . , Dn store the details of the dimensions. A tuple in F has the form ⟨D1[].key, . . . , Dn[].key, M[]⟩, where Di[].key is the set of foreign keys to the dimension tables and M[] is the set of measures. OLAP operations ask for a set of tuples in F, or for aggregates on groupings of tuples. Assuming that there is no hierarchy in the dimensions of the previous example, the possible groupings in Figure 3 include: (i) group-by Region and Time, which is identical to F, (ii)-(iii) group-by Region (Time), which corresponds to the projection of F on the region- (time-) axis, and (iv) the aggregation over all values of F, which is the projection on the origin (Figure 3 depicts these groupings for the aggregation function sum). The fact table together with all possible combinations of group-bys composes the data cube [Gray et al. 1996].

Although all groupings can be derived from F, in order to accelerate query processing some results may be precomputed and stored as materialized views. A detailed group-by query can be used to answer more abstract aggregates. In our example, the total measure of all regions for all timestamps (i.e., 1828) can be computed either from the fact table, or by summing the projected results on the time or region axis. Ideally, the whole data cube should be materialized to enable efficient query processing. Materializing all possible results may be prohibitive in practice, as there are O(2^n) group-by combinations for a data warehouse with n dimensional attributes. Therefore, several techniques have been proposed for the view selection problem in OLAP applications [Harinarayan et al. 1996; Gupta 1997; Gupta and Mumick 1999; Baralis et al. 1997; Shukla et al. 1998]. In addition to relational databases, data warehouse techniques have also been applied to spatial [Han et al. 1998; Stefanovic et al. 2000] and temporal [Mendelzon and Vaisman 2000; Hurtado et al. 1999] databases.

All these methods, however, only benefit queries on a predefined hierarchy. An ad hoc query not confined by the hierarchy, such as the one in Figure 3 involving the gray cells, would still need to access the fact table, even if the entire data cube were materialized. In the next section we formally define spatio-temporal aggregate processing and explain the inefficiency of existing techniques.

3. PROBLEM DEFINITION AND CHARACTERISTICS

Consider N regions R1, R2, . . . , RN, and a time axis consisting of discrete timestamps 1, 2, . . . , T, where T represents the total number of recorded timestamps (i.e., the length of history). Following conventional spatial object modeling, each region Ri (1 ≤ i ≤ N) is a two-dimensional minimum bounding rectangle of the actual shape (e.g., a road segment, an antenna cell, etc.). The position and area of a region Ri may vary along with time, and we refer to its extent at timestamp t as Ri(t). Each region carries a set of measures Ri(t).ms, which also changes with time (sometimes we refer to Ri(t).ms as the aggregate data of Ri(t)). Note that this modeling trivially captures static objects, for which Ri(t) remains constant for all timestamps t. Further, it also supports region insertions/deletions—the emergence/disappearance of new/existing objects. In this case, the dataset cardinality N should be interpreted as the total number of


distinct regions in the entire history. At a timestamp t, if a region Ri (1 ≤ i ≤ N) is inactive (i.e., it has been deleted or has not been inserted at this time), its extent Ri(t) and measure Ri(t).ms are set to some default "void" values. Without loss of generality, to simplify the discussion, in the sequel we do not consider such appearances/disappearances, and assume that N regions are active at all timestamps.

In practice the measures of regions change asynchronously with their extents. In other words, the measure of Ri (1 ≤ i ≤ N) may change at a timestamp t (i.e., Ri(t).ms ≠ Ri(t − 1).ms) while its extent remains the same (i.e., Ri(t) = Ri(t − 1)), and vice versa. To quantify the rates of these changes, we define the measure agility ams(t) as the percentage of regions that issue measure modifications at time t (e.g., if ams = 100%, then all regions obtain new measures at each timestamp); similarly, the extent agility aext(t) characterizes the percentage for extent changes. In some cases the extent agility is 0 (e.g., road segments are static). Even for volatile regions (i.e., aext(t) > 0), ams(t) is usually considerably higher than aext(t), which is an important property that must be taken into account for efficient query processing.

We aim at answering the spatio-temporal window aggregate query, which specifies a rectangle qR and a time interval qT of continuous timestamps. The goal is to return the aggregated measure Agg(qR, qT, fagg) of all regions that intersect qR during qT, according to some distributive aggregation function fagg, or formally:

    Agg(qR, qT, fagg) = fagg{Ri(t).ms | Ri(t) intersects qR and t ∈ qT}.

If qT involves a single timestamp, the query is a timestamp query; otherwise, it is an interval query. For the following examples and illustrations, we use the static (dynamic) regions of Figure 1 (2), assuming that a region corresponds to the coverage area of an antenna cell. For each data region Ri(t) there is a single measure Ri(t).ms (we use the measures of Figure 3) representing the number of phone calls initiated in Ri at timestamp t, and the aggregate function is sum. A spatio-temporal window aggregate query (qR, qT) retrieves the total number of phone calls initiated during qT in cells intersecting qR. Application to other aggregate functions and query types is, as discussed in Section 7, straightforward. Next, we describe how to adapt existing methods to spatio-temporal aggregation, and explain their inefficiency.
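Before turning to these adaptations, the query semantics defined above can be restated as a brute-force reference computation. The Python sketch below (with a made-up per-timestamp data layout and the hypothetical names region_extents and region_measures) is only an executable restatement of Agg(qR, qT, fagg) with fagg = sum, not one of the proposed methods.

def intersects(a, b):
    # Rectangles are (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def brute_force_agg(region_extents, region_measures, q_r, q_t, f_agg=sum):
    """region_extents[i][t] is the MBR of region i at timestamp t;
    region_measures[i][t] is its measure at t; q_t is a range of timestamps."""
    qualifying = [region_measures[i][t]
                  for t in q_t
                  for i in range(len(region_extents))
                  if intersects(region_extents[i][t], q_r)]
    return f_agg(qualifying)

# Example: two static regions observed over timestamps 0..2.
extents = {0: {t: (0, 0, 2, 2) for t in range(3)},
           1: {t: (5, 5, 7, 7) for t in range(3)}}
measures = {0: {0: 150, 1: 150, 2: 145}, 1: {0: 50, 1: 60, 2: 70}}
print(brute_force_agg(extents, measures, q_r=(1, 1, 3, 3), q_t=range(1, 3)))  # 150 + 145 = 295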


3.1 Using a 3D Aggregate R-Tree

We can consider the problem as multi-dimensional aggregate retrieval in the 3D space and solve it using one of the existing aggregation structures (discussed in Section 2.2). Assume, for instance, that we use aR-trees. Whenever the extent or measure of a region changes, a new 3D box is inserted in a 3D version of the aR-tree, called the a3DR-tree. Using the example of Figure 3, four entries are required for R1: one for timestamps 1 and 2 (when its measure remains 150) and three more entries for the other timestamps. Given a spatio-temporal window aggregate query, we can also model it as a 3D box, which can be processed in a way similar to Figure 1(c). The problem of this solution is that it creates a new box duplicating the region's extent, even though it does not change. Since the measure changes are much more frequent than extent updates, the a3DR-tree incurs high redundancy. The worst case occurs when aext(t) = 0: although the extent of a region remains constant, it is still duplicated at the rate of its measure changes. Bundling the extent and aggregate information in all entries significantly lowers the node fanout and compromises query efficiency because, as analyzed in Section 5, more nodes must be accessed to retrieve the same amount of information. Note that redundancy occurs whenever the extent and measure changes are asynchronous; the above problem also exists when a new box is spawned because of an extent update, in which case the region's measure must be replicated.

3.2 Using a Data Cube

Following the traditional data warehouse approach, we could create a data cube where one axis corresponds to time, the other to regions, and keep the measure values in the cells of this two-dimensional table (see Figure 3). Since the spatial dimension has no one-dimensional order, we store the table in secondary memory ordered by time and build a B-tree index to locate the pages containing information about each timestamp. The processing of a query employs the B-tree index to retrieve the pages (i.e., table columns) containing information about qT; then, these regions (qualifying the temporal condition) are scanned sequentially and the measures of those satisfying qR are aggregated. In the sequel, we refer to this method as column scanning. Even if there exists an additional spatial index on the regions, the simultaneous employment of both indexes has limited effect. Assume that first a window query qR is performed on the spatial index to provide a set of IDs for regions that qualify the spatial condition. Measures of these regions must still be retrieved from the columns corresponding to qT (which, again, are found through the B-tree index). However, the column storage does not preserve spatial proximity; hence, the spatially qualifying regions are expected to be scattered in different pages. Therefore, the spatial index has some effect only on very selective queries (on the spatial conditions). Furthermore, recall that pre-materialization is useless, since the query parameters qR and qT do not conform to predefined groupings.

4. PROPOSED SOLUTIONS

Our solutions are motivated by the facts that (i) the extent and measure updates are asynchronous and (ii) in practice, measures change much more frequently than extents (which may even be static). Therefore, the two types of updates should be managed independently to avoid redundancy. In particular, the proposed solutions involve two types of indexes: (i) a host index, which is an aggregate spatial or spatio-temporal structure managing region extents, and (ii) numerous measure indexes (one for each entry of the host index), which are aggregate temporal structures storing the values of measures during the history. Figure 4 shows a general overview of the architecture.
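The two-level organization can be pictured with a minimal skeleton in which a host entry keeps its extent, a cached aggregate over the whole history, and a reference to its own temporal measure index. The Python classes below, and the names HostEntry and MeasureIndex, are illustrative simplifications of the structures detailed in Sections 4.1-4.4, not the actual implementation.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MeasureIndex:
    """Stand-in for an aggregate B-tree: measures of one host entry keyed by timestamp."""
    measures: Dict[int, float] = field(default_factory=dict)

    def aggregate(self, t_start: int, t_end: int) -> float:
        return sum(v for t, v in self.measures.items() if t_start <= t <= t_end)

@dataclass
class HostEntry:
    mbr: tuple                         # spatial extent of the region / subtree
    aggr: float = 0.0                  # pre-aggregated measure over the entire history
    btree: MeasureIndex = field(default_factory=MeasureIndex)
    children: List["HostEntry"] = field(default_factory=list)  # empty for leaf entries

    def record(self, timestamp: int, value: float) -> None:
        # A measure change touches only the measure index and the cached total,
        # never the spatial part of the host index.
        self.btree.measures[timestamp] = self.btree.measures.get(timestamp, 0.0) + value
        self.aggr += value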


Fig. 4. Overview of the proposed solution.

Fig. 5. An aRB-tree (c.f. regions in Figure 1(a) and measures in Figure 3).

Given a query, the host index is first searched, identifying the set of entries that qualify the spatial condition. The measure indexes of these entries are then accessed to retrieve the timestamps qualifying the temporal conditions. Since the number of records (corresponding to extent changes) in the host index is very small compared to the measure changes, the cost of query processing is expected to be low. As host indexes, we use variations of the R-tree due to its popularity, flexibility (i.e., applicability to spatial or spatio-temporal data), low space consumption (O(N/B)), and good performance in practice. For similar reasons, we use aggregate B-trees as measure indexes. Nevertheless, the same concept can be applied with other spatial or temporal aggregate structures. In Section 4.1, we first solve the case of static regions (i.e., aext(t) = 0). Then, Sections 4.2 and 4.3 address the general problem involving volatile regions (aext(t) > 0). Section 4.4 proposes a space-efficient structure for managing multiple measure indexes.

4.1 The Aggregate R-B-Tree

The aggregate R-B-tree (aRB-tree) adopts an aR-tree as the host index, where an entry r has the form ⟨r.MBR, r.pointer, r.aggr, r.btree⟩; r.MBR and r.pointer have the same semantics as in a normal R-tree, r.aggr keeps the aggregated measure about r over the entire history, and r.btree points to an aggregate B-tree, which stores the detailed measure information of r at concrete timestamps.


Figure 5 illustrates an example using the data regions of Figure 1(a) and the measures of Figure 3. The number 710 stored with R-tree entry R1 equals the sum of measures in R1 for all 5 timestamps (e.g., the total number of phone calls initiated at R1). The first leaf entry of the B-tree for R1, (1, 150), indicates that the measure of R1 at timestamp 1 is 150. Since the measure of R1 at timestamp 2 is the same, there is no special entry, but this knowledge is implied by the previous entry (1, 150). Similarly, the first root entry (1, 445) of the same B-tree indicates that the aggregated measure in R1 during time interval [1, 3] is 445. The topmost B-tree stores aggregated information about the whole space, and its role is to answer queries involving only temporal conditions (similar to that of the extra row in Figure 3).

To illustrate the processing algorithms, consider the query "find the number of phone calls initiated during interval qT = [1, 3] in all cells intersecting the window qR shown in Figure 1(a)." Starting from the root of the R-tree, the algorithm visits the B-tree of R5 since the entry is totally contained in qR. The root of this B-tree has entries (1, 685), (4, 445), meaning that the aggregated measures (of all data regions covered by R5) during intervals [1, 3] and [4, 5] are 685 and 445, respectively. Hence the contribution of R5 to the query result is 685. The second root entry R6 of the R-tree partially overlaps qR, so we visit its child node, where only entry R3 intersects qR; thus its B-tree is retrieved. The first entry of the root (of the B-tree) suggests that the contribution of R3 for the interval [1, 2] is 259. In order to complete the result we will have to descend the second entry and retrieve the measure of R3 at timestamp 3 (i.e., 125). The final result equals 685 + 259 + 125, which corresponds to the sum of measures in the gray cells of Figure 5. The pseudo-code for the algorithm is presented in Figure 6 for the general case where the query has both spatial (qR) and temporal (qT) extents.

Purely spatial queries (e.g., find the total sum of measures—throughout history—for regions intersecting qR) can be answered using only the R-tree, while purely temporal queries (e.g., find the total sum of measures during qT for all regions) can be answered exclusively by the topmost B-tree. In general, the aRB-tree accelerates queries regardless of their selectivity because (i) if the query window qR (interval qT) is large, many nodes in the intermediate levels of the R- (B-) tree will be contained in qR (qT), so the precalculated results are used and visits to the lower tree levels are avoided; (ii) if qR (qT) is small, the aRB-tree behaves as a spatio-temporal index.

Incremental maintenance of the aRB-tree is straightforward. Assume, for example, that at the next timestamp 6 region R1 changes its measure. To update the aRB-tree, we first locate R1 in the R-tree (in Figure 5) by performing an ordinary window query using the extent of R1, after which the B-tree associated with R1 is modified to include the new measure. A change at the lower level may propagate to higher levels; continuing the previous example, after updating R1.btree, we backtrack to the parent entry R5 and modify its B-tree (according to the new aggregate of R1). A faster way to perform updates is to follow a bottom-up approach.²

² Bottom-up updates using hash indexes have been used extensively in spatio-temporal applications involving intensive updates [Kwon et al. 2002; Lee et al. 2003].


Fig. 6. Query processing using the aRB-tree (single window queries).
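A rough Python rendering of the traversal just described is given below. It is a sketch under simplifying assumptions (sum aggregation, a dictionary standing in for the aggregate B-tree, invented names such as arb_query), and is not the authors' Figure 6 pseudo-code.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class AggrEntry:
    mbr: tuple                                  # (x1, y1, x2, y2)
    aggr: float                                 # aggregate over the entry's entire history
    btree: Dict[int, float]                     # simplified measure index: timestamp -> measure
    child: Optional["AggrNode"] = None          # None for leaf entries

@dataclass
class AggrNode:
    entries: List[AggrEntry] = field(default_factory=list)

def contains(q, r):
    return q[0] <= r[0] and q[1] <= r[1] and r[2] <= q[2] and r[3] <= q[3]

def intersects(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def temporal_sum(btree, t_start, t_end):
    # Stand-in for an aggregate B-tree lookup; a real B-tree would reuse
    # per-subtree aggregates instead of scanning individual timestamps.
    return sum(v for t, v in btree.items() if t_start <= t <= t_end)

def arb_query(node, q_r, t_start, t_end):
    """Sum of measures of regions intersecting q_r during [t_start, t_end]."""
    result = 0.0
    for e in node.entries:
        if contains(q_r, e.mbr):
            # Spatially covered: only the temporal condition remains,
            # answered by the entry's own measure index.
            result += temporal_sum(e.btree, t_start, t_end)
        elif intersects(q_r, e.mbr):
            if e.child is None:
                result += temporal_sum(e.btree, t_start, t_end)
            else:
                result += arb_query(e.child, q_r, t_start, t_end)
    return result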

In particular, we can build a hash index on region ID and associate each region with a pointer to the last entry of the B-tree that stores its measure. When new information about a region arrives, the hash index is used to directly locate the appropriate B-tree entry where the measure is stored (thus avoiding the window query on the R-tree). Then, the change propagates upwards through the B-tree and the R-tree, updating the affected entries. Similar update policies can be applied for the volatile regions discussed in subsequent sections.

4.2 The Aggregate Multi-Version R-B-Tree

When the extents of data regions change with time, the aRB-tree is inadequate because its host index is a spatial access method, which does not support moving objects. To overcome this problem, we propose the aggregate multi-version R-B-tree (aMVRB-tree), which adopts the MVR-tree (discussed in Section 2.1) as the host index. Specifically, each entry r in the MVR-tree has the form ⟨r.MBR, r.lifespan, r.pointer, r.aggr, r.btree⟩, where (i) the meanings of r.MBR, r.lifespan, and r.pointer are the same as in the normal MVR-tree, (ii) r.aggr keeps the aggregated measure of r during its lifespan (instead of the whole history as in the aRB-tree), and (iii) r.btree points to a B-tree storing its concrete measures. Figure 7 shows an example for the moving regions in Figure 2(a). The value 580 stored with R1, for example, equals the sum of its aggregate values during interval [1, 4] (the lifespan of R1). On the other hand, the B-tree of R1′ (i.e., the updated version of R1) contains a single entry (5, 130), indicating its measure 130 at the current time 5.

Consider a query asking for the number of phone calls initiated during interval qT = [1, 5] in all cells intersecting the window qR in Figure 2(a).


Fig. 7. An aMVRB-tree (c.f. regions in Figure 2a and measures in Figure 3).

Since R5.MBR is inside qR during time interval [1, 4], its child node (at N4) is not visited. Furthermore, R5.btree is not retrieved either, because its lifespan [1, 4] is contained in the query interval [1, 5]; instead, the summary data (910) of R5 at node N4 is simply aggregated. On the other hand, N2 must be accessed because its parent R6.MBR partially overlaps qR. Inside N2, only R3 intersects qR, and we aggregate its summary (638) without retrieving its B-tree (as its lifespan [1, *] = [1, 5] is also included in qT). Searching the logical R-tree rooted at N5 is similar, except that shared nodes should not contribute more than once. Continuing the example, node N3 is accessed (R5′ partially overlaps qR) without retrieving any B-tree (because the lifespans of R1′ and R2′ are enclosed by qT). Further, since N2 has already been processed, we do not follow R6′.pointer, even though R6′.MBR partially intersects qR. In Figure 7, the entries that contribute to the query are shaded.

In order to avoid multiple visits to a shared node via different parents, we search the MVR-tree in a breadth-first manner. Specifically, at each level, the algorithm visits all the necessary nodes before descending to the lower level. In Figure 7, for example, nodes N4 and N5 (i.e., the root level of the MVR-tree) are searched first, after which we obtain an access list containing the IDs of nodes N2, N3 to be visited at the next level. Thus, multiple visits are trivially avoided by eliminating duplicate entries from the access list. Figure 8 illustrates the complete query algorithm of aMVRB-trees, where function B node aggregate is shown in Figure 6.

Note that the algorithm visits the B-trees of only those entries in the MVR-tree whose lifespans cover the starting or ending timestamps of qT; for (MVR) entries whose lifespans include only the intermediate timestamps of qT, the relevant aggregate data stored in the MVR-tree are used directly. Furthermore, although in Figure 7 we show a separate B-tree for each MVR-tree entry, the B-trees of various entries can be stored together in a space-efficient manner, described in Section 4.4. Finally, the aMVRB-tree can be incrementally maintained in a way similar to aRB-trees.


Fig. 8. Query processing using the aMVRB-tree.
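The breadth-first traversal with duplicate elimination can be sketched generically as follows; the hooks qualifies, contribute and needs_expansion are hypothetical placeholders for the spatial/temporal tests and the summary-versus-B-tree decision described above, and the whole function is an illustration rather than the algorithm of Figure 8.

def bfs_aggregate(roots, qualifies, contribute):
    """Level-by-level traversal of (possibly shared) nodes with an access list."""
    result = 0.0
    level, seen = [], set()
    for r in roots:
        if id(r) not in seen:
            seen.add(id(r))
            level.append(r)
    while level:
        next_level = []
        for node in level:
            for entry in node.entries:
                if not qualifies(entry):
                    continue
                result += contribute(entry)
                child = getattr(entry, "child", None)
                if child is not None and getattr(entry, "needs_expansion", False):
                    if id(child) not in seen:      # a shared child is expanded only once
                        seen.add(id(child))
                        next_level.append(child)
        level = next_level
    return result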

Specifically, given the new spatial extent and aggregate value of a region, the update algorithm first locates the corresponding entry in the MVR-tree (or inserts an entry if the region incurs an extent change), modifies the information in its B-tree, and then propagates the changes to higher levels of the MVR-tree.

4.3 The Aggregate 3-Dimensional R-B-Tree

As mentioned in Section 2.1, the MVR-tree still involves data duplication,³ which has negative effects on the space consumption and query performance. To eliminate this problem, we develop the a3DRB-tree (aggregate 3-dimensional R-B-tree) by adopting the 3DR-tree as the host index. We follow the "3D box" representation of (discretely) moving rectangles (see Sections 2.1 and 3.1), but unlike the a3DR-tree, a new box is necessary only for extent changes (i.e., not for measure changes); hence, there is no redundancy. An entry r in the host index again carries r.MBR, r.lifespan, r.aggr, and r.btree, where r.MBR and r.btree are defined as in aRB-trees, and r.aggr stores aggregated data over r.lifespan. Figure 9 shows an example using the moving regions of Figure 2(a). Region R1 changes to R1′ at timestamp 5, which creates a new box and a new node R7 containing it.

As with the a3DR-tree, a spatio-temporal aggregate query is modeled as a 3D box representing the spatial and temporal ranges. The query algorithm follows the same idea as those for aRB- and aMVRB-trees. Specifically, it starts from the root of the 3DR-tree, and for each entry r, one of the following conditions holds: (i) the entry is covered by both query extents (qR and qT).

³ The data duplication in the MVR-tree does not involve regions' measures (as is the case in a3DR-trees), but is caused by the partially persistent framework.


Fig. 9. Example of a3DRB-tree.

In this case, its precomputed aggregate data r.aggr is used (subtree or B-tree accesses are avoided). (ii) The entry's spatial extent is covered by qR, and its temporal extent partially overlaps qT. The B-tree pointed to by r.btree is accessed to retrieve aggregate information for qT. (iii) The entry's spatial extent partially overlaps qR, and its temporal extent overlaps (or is inside) qT. In this case the algorithm descends to the next R-tree level and the same process is applied recursively. (iv) If none of the previous conditions holds, the entry is ignored.

Although both aMVRB- and a3DRB-trees aim at volatile regions, they have two important differences. (i) The a3DRB-tree maintains a large 3DR-tree for the whole history, while the aMVRB-tree maintains several small trees, each responsible for a relatively short interval. This fact has implications on their query performance, as discussed in Section 5. (ii) The aMVRB-tree is an online structure (i.e., it can be incrementally updated), while the a3DRB-tree is off-line, meaning that all the region extents must be known in advance.⁴ Specifically, to create an a3DRB-tree, we should first build the underlying 3DR-tree according to the regions' spatial extents and lifespans, after which the B-trees of the entries are constructed chronologically by scanning the aggregate changes. Similar to aRB- and aMVRB-trees, for each aggregate change, the algorithm first identifies the leaf entry of the corresponding region (that produces the change), and then modifies its B-tree. Finally, the update propagates to higher levels of the tree.

⁴ Otherwise, we have to store unbounded boxes inside the 3DR-tree, which affects query performance severely. The same problem exists for the a3DR-tree and, in general, any structure based on 3DR-trees.

4.4 Management of B-Trees

Maintaining a separate B-tree for each entry of the aMVRB- (a3DRB-) tree can lead to considerable waste of space if the B-tree contains too few entries.



Fig. 10. A B-File example.

Consider, for example, Figure 7, where region R1 changes to R1′ at timestamp 5; thus, R1.btree contains only 4 entries, although in practice a page has a capacity of 100-1000 entries. If such a situation happens frequently, the average page utilization in the B-trees may be very low. To solve this problem we propose the B-File (BF), which is a space-efficient storage scheme for multiple B-trees. A BF possesses the following properties: (i) the B-trees stored in the same BF manage disjoint sets of keys, which in our case correspond to timestamps (any timestamp can be indexed by at most one B-tree in the same BF), (ii) all the nodes (except, possibly, for the last node of each level) are full (since deletions never happen), and (iii) the search algorithms are the same as those of conventional B-trees (a BF is merely a compact storage scheme for multiple B-trees, each maintaining its logical integrity).

Figure 10(a) illustrates an example BF, which stores the B-trees of two regions R and R′ (for simplicity, in each B-tree entry we include only the timestamps and not the aggregate values). The lifespan of R is [1, 19], while that of R′ is [20, *] (R′ is currently alive). The B-tree of R consists of two levels while, up to timestamp 30, the B-tree of R′ has only one level. Note that the root pointers of R and R′ point to nodes at different levels. The insertion of 35 (in the B-tree of R′) causes node B to overflow, and a new node C is created (Figure 10(b)). An entry 35, pointing to node C, is inserted into A, which becomes the root of the B-tree of R′. If the live B-tree dies (e.g., R′ ceases to exist), the corresponding BF becomes vacant and may be used for any B-tree created at later timestamps. Whenever a new B-tree needs to be initiated, we first search for vacant BFs. If such a BF does not exist, a new one is initiated. In practice, the creation of new BFs is infrequent because, when an object changes its position or extent, the new entry (in the MVR- or 3DR-trees) can use the vacant BF of the previous version. As analyzed in the next section, the BF can achieve significant space savings for highly dynamic datasets.

5. PERFORMANCE ANALYSIS

This section theoretically proves the superiority of our solutions and provides cost models for query optimization. Since column scanning ignores the spatial conditions and (as shown in Section 6) has inferior performance, we focus on the a3DR-tree and the proposed aRB-, aMVRB-, and a3DRB-trees (collectively called multi-tree structures). In Section 5.1 we present a unified (high-level) model that describes the behavior of all structures.


Table I. List of Frequent Symbols

  Symbol         Description
  N              total number of data regions
  D              region density
  T              number of timestamps in history
  aext           extent agility of the dataset
  ams            measure agility of the dataset
  qR, qS, qT     query region, side length, and query interval
  h              height of the host tree
  Ni             number of nodes at the i-th level of the host tree
  aPi            access probability of a level-i node in the host tree
  Ei             number of level-i B-trees to be searched
  NABi           cost of searching a level-i B-tree

Then, Sections 5.2-5.4 develop the complete formulae for space consumption and query cost of each method, assuming uniform locations and velocities. Section 5.5 provides significant insight into the characteristics of alternative solutions, and Section 5.6 extends the analysis to general datasets. Table I lists the symbols that will be frequently used in our derivation.

5.1 A Unified Model

To facilitate discussion, let us first consider the following regular datasets. At the initial timestamp 1, N regions with density⁵ D distribute uniformly in the 2D unit data space [0, 1]². Then, at each of the subsequent T − 1 timestamps, (i) aext percent of the regions change their positions randomly so that the spatial distribution is still uniform (for static dimensions aext = 0), and (ii) ams percent modify their aggregate values, where the extent (aext) and measure (ams) agilities remain fixed at all timestamps. Further, each region has the same chance to produce changes—aext (ams) corresponds to the probability that a region changes its extent (measure) at each timestamp. Such regular data allow us to concentrate on the most crucial factors that affect the performance of each method. We will show, in Section 5.6, that the results obtained from the regular case can be easily extended to general datasets (without the above constraints), using histograms.

⁵ The density D of a set of rectangles is the average number of rectangles that contain a given point in space. Equivalently, D can be expressed as the ratio of the sum of the areas of all rectangles over the data space area.

The objective of the analysis is to predict (i) the number of node accesses in answering a spatio-temporal aggregate query, and (ii) the structure size (in terms of the number of nodes). For this purpose, we separate the derivation for the host index (i.e., the R-, MVR- and 3DR-trees in the aRB-, aMVRB- and a3DRB-trees, respectively) from that for the measure indexes (i.e., aggregate B-trees). For convenience we say that a measure index (interchangeably, a B-tree) is at level i if the corresponding host entry (i.e., the one pointing to the B-tree) is at the i-th level of the host tree. Also, we define the lifespan of a B-tree node as the range of timestamps covered by the sub-tree rooted at it. In particular, the lifespan of the root (of the B-tree) is also the lifespan of the entire B-tree.



For example, for R1.btree (i.e., a level-0 B-tree) in Figure 5, the extents of the first and second leaf nodes are [1, 3] and [4, 5] respectively, while that of the root is [1, 5]. Obviously, the query cost (structure size) equals the sum of the costs (sizes) of the host and measure indexes:

    NA = NA_{host} + NA_{ms},  and  Size = Size_{host} + Size_{ms}.    (5-1)

The a3DR-tree is a special case of our framework that consists of only the host index: Size_{ms} = NA_{ms} = 0 in Equation 5-1. For regular datasets, as defined earlier, the data characteristics are the same across the whole spatio-temporal space, leading to similar properties in all parts of the index. This has several important implications: (i) for all structures, the MBRs of the host entries at the same level have similar sizes, (ii) for a3DR-, aMVRB-, and a3DRB-trees, the lengths of the host entries' lifespans are also similar, and (iii) for the proposed structures, the B-trees of the same level manage an equal number of timestamps (i.e., their lifespans are equally long). In particular, property (iii) is most obvious for the aRB-tree: the B-tree of a host entry at the leaf level indexes all the ams · T measure changes of the corresponding data region, where ams and T are the measure agility and the number of recorded timestamps, respectively.

Next we investigate Equation 5-1. Let h be the height of the host index (the leaves are at level 0), Ni the number of nodes at the i-th (0 ≤ i ≤ h − 1) level, and aPi the probability that a level-i node is visited for answering a query q. Then, NA_{host} can be represented as:

    NA_{host} = \sum_{i=0}^{h-1} (N_i \cdot aP_i).    (5-2)

The above equation already gives the cost (albeit at a coarse level) of the a3DR-tree, which has no measure indexes. For the proposed multi-tree solutions, we still need to consider NA_{ms}, which depends on two factors: (i) the number Ei of B-trees at the i-th level (of the host index) that need to be searched, and (ii) the cost NABi of accessing each level-i B-tree. Then, NA_{ms} (hence the total cost NA in Equation 5-1) can be derived as NA_{ms} = \sum_{i=0}^{h-1} E_i \cdot NAB_i, and combining with Equation 5-2,

    NA = \sum_{i=0}^{h-1} \left( N_i \cdot aP_i + E_i \cdot NAB_i \right).    (5-3)
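Read as a recipe, Equation 5-3 simply accumulates, per level of the host index, the expected host-node accesses plus the expected measure-index accesses. A trivial numerical sketch (with invented per-level inputs) is:

def total_node_accesses(n_nodes, access_prob, n_btrees, btree_cost):
    """Equation 5-3: sum over host-index levels of N_i*aP_i + E_i*NAB_i."""
    return sum(n * p + e * c
               for n, p, e, c in zip(n_nodes, access_prob, n_btrees, btree_cost))

# Hypothetical 3-level host index (level 0 = leaves); the numbers are illustrative only.
print(total_node_accesses(n_nodes=[1000, 100, 10],
                          access_prob=[0.02, 0.05, 0.30],
                          n_btrees=[12, 4, 1],
                          btree_cost=[3, 3, 3]))   # (20+36) + (5+12) + (3+3) = 79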

Now we qualitatively compare, using Equation 5-3, the performance of the a3DR-tree and the multi-tree structures. Towards this, we relate the query cost to the measure agility ams, which determines the total number of records (recall that ams >> aext). In the formula for the a3DR-tree, Ei = NABi = 0, but Ni (i.e., the number of nodes at the i-th level) includes all the extent and measure changes. In particular, since Ni grows linearly with the measure agility ams, the cost of the a3DR-tree is linear in ams. On the other hand, for the multi-tree structures, Ni is very low since it is decided only by the number of extent changes (i.e., not related to ams), which is much smaller than the number of measure changes. As a result, the overall cost NA is dominated by that of searching the measure indexes. Further, as the number Ei of B-trees searched depends only on the host index, it is also independent of ams.


Fig. 11. Two cases of searching a measure index.

As will be explained shortly, the cost NABi of searching each B-tree is logarithmic in the measure agility ams, and therefore the overall query time of the multi-tree structures is logarithmic in ams, which explains their superiority over the a3DR-tree.

NABi is logarithmic in ams because, regardless of how many timestamps are involved in the interval qT, the query accesses at most two complete paths (from the root to the leaf) in a B-tree. Recall that a node is accessed if and only if its lifespan includes the starting or ending timestamp of qT, and the number of such nodes at each level (of the B-tree) is at most 2. This is illustrated in Figure 11(a), which shows a two-level B-tree and the corresponding query range qT. Leaf nodes B and D are visited because their extents partially intersect qT, while leaf node C is not accessed since its extent is contained (in qT); consequently, the aggregate measure stored in the parent entry c is used directly. Figure 11(b) shows another query, where qT is not totally contained in the lifespan of the B-tree. In this case, the cost is even lower—the algorithm only visits a single path from the root to the leaf level (e.g., the nodes visited are the root and node B). In the aRB-tree, a measure index stores all the ams · T changes of a single (static) data region in history. Hence its height is ⌈log_{bB}(ams · T/bB)⌉ (where bB is the node capacity of the B-tree), and NABi is at most twice this number. The situation is more complex for the aMVRB- and a3DRB-trees, but as will be explained in Sections 5.3 and 5.4, the height of a measure index is roughly ⌈log_{bB}[(ams/aext)/bB]⌉, so that NABi is also proportional to log(ams).

In the rest of the section, we extend the above analytical framework for each structure and derive cost models as functions of the data and query properties (specifically, D, N, T, ams, aext, qR, qT). Our discussion utilizes some previous results in the literature of index analysis, which will be well separated from our contributions at the beginning of each subsection.

5.2 Cost Model for aRB-Trees

The analysis of the aRB-tree is based on the following lemmas.

LEMMA 5.1 (PAGEL ET AL. 1993). Let r and s be two m-dimensional rectangles that distribute uniformly in the unit universe [0, 1]^m, and let r_i (s_i) be the side length of r (s) along the i-th dimension (1 ≤ i ≤ m). Then, (i) the probability for r and s to intersect is \prod_{i=1}^{m} (r_i + s_i); (ii) the probability for r to contain s is \prod_{i=1}^{m} (r_i - s_i) if r_i ≥ s_i for 1 ≤ i ≤ m, or 0 otherwise; and (iii) the probability for r to intersect, but not contain, s is \prod_{i=1}^{m} (r_i + s_i) - \prod_{i=1}^{m} (r_i - s_i) if r_i ≥ s_i on all dimensions 1 ≤ i ≤ m, or \prod_{i=1}^{m} (r_i + s_i) otherwise.


LEMMA 5.2 (THEODORIDIS AND SELLIS 1996). Let an R-tree index N two-dimensional regions with density D that distribute uniformly in the data space. The side length s_i of the MBR of a level-i node (0 ≤ i ≤ h − 1, where h is the height of the tree) is:

    s_i = \sqrt{ D_{i+1} \cdot f_R^{i+1} / N },  where  D_{i+1} = \left( 1 + \frac{\sqrt{D_i} - 1}{\sqrt{f_R}} \right)^2,  D_0 = D,

and f_R is the node fanout of the tree.
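Lemma 5.2 is straightforward to evaluate numerically; the helper below (ours, for illustration only) computes the expected node side length at each level of a 2D R-tree.

import math

def node_side_lengths(n_regions, density, fanout, height):
    """Expected MBR side length s_i per level i (Lemma 5.2), for a 2D R-tree."""
    sides = []
    d = density                                                     # D_0 = D
    for i in range(height):
        d = (1 + (math.sqrt(d) - 1) / math.sqrt(fanout)) ** 2       # D_{i+1}
        sides.append(math.sqrt(d * fanout ** (i + 1) / n_regions))  # s_i
    return sides

# e.g., 100,000 regions of density 0.1 indexed by an R-tree with fanout 100:
height = math.ceil(math.log(100_000 / 100, 100))   # h = ceil(log_fR(N / fR)) = 2 levels
print(node_side_lengths(100_000, 0.1, 100, height))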

We first derive the formula that predicts the query cost of the aRB-tree, by rewriting the components of Equation 5-3, specifically h, Ni, aPi, Ei, and NABi, as functions of the dataset properties. The first two components are straightforward: given that the R-tree indexes N regions and the node fanout is f_R, the height of the tree is h = ⌈log_{f_R}(N/f_R)⌉, while the number of nodes at the i-th level is N_i = ⌈N/f_R^{i+1}⌉. The derivation of aPi is also easy. For simplicity, let us consider that the query region qR is a square⁶ with side length qS. As discussed in Section 4.1, a node in the R-tree of the aRB-tree is searched if and only if its MBR intersects, but is not contained in, qR. Therefore, according to Lemma 5.1 (condition iii), we have (after some simplification) aP_i = 4 · qS · s_i if qS > s_i, and aP_i = (qS + s_i)^2 otherwise. Thus it remains to derive E_i (i.e., the number of level-i B-trees searched) and NAB_i (i.e., the number of node accesses in searching a level-i B-tree), for which we prove the following results.

LEMMA 5.3. Given an aRB-tree and a spatio-temporal aggregation query whose region is a square with length qS, the number E_i of B-trees searched at the i-th level of the host index equals, for i = 0 (the leaf level):

    E_0 = \begin{cases} N \left[ \left( \sqrt{D/N} + q_S \right)^2 - (q_S - s_0)^2 \right], & \text{if } q_S > s_0 \\ N \left( \sqrt{D/N} + q_S \right)^2, & \text{otherwise} \end{cases}

and, for i > 0:

    E_i = \begin{cases} \frac{N}{f_R^i} \left[ (q_S - s_{i-1})^2 - (q_S - s_i)^2 \right], & \text{if } q_S > s_{i-1} \text{ and } q_S > s_i \\ \frac{N}{f_R^i} (q_S - s_{i-1})^2, & \text{if } q_S > s_{i-1} \text{ and } q_S < s_i \\ 0, & \text{otherwise} \end{cases}

where the side length s_i of a level-i node in the R-tree is given by Lemma 5.2.

⁶ The simplification to square query windows is common in R-tree analysis; the extension to general query windows is trivial (according to Lemma 5.1).

PROOF. The B-tree associated with a leaf entry of the R-tree is searched if and only if (i) the entry's MBR intersects the query region qR, and (ii) the MBR of the node containing the entry intersects, but is not contained in, qR. Thus, the number E_0 of such leaf entries equals the difference between the total


number of (i) leaf entries whose MBRs intersect qR, and (ii) entries in the leaf nodes completely contained in qR. Given that there are N (N/f_R) leaf entries (leaf nodes), and the node fanout of the R-tree is f_R, E_0 can be represented as N · P_1 − f_R · (N/f_R) · P_2, where P_1 (P_2) denotes the probability that a leaf entry (node) intersects (is contained in) qR. Since an object (node) MBR is a square with side length \sqrt{D/N} (s_0), the derivation of P_1 and P_2 follows Lemma 5.1 directly, leading to the final representation of E_0 shown in Lemma 5.3.

The derivation of E_i for higher levels i > 0 is similar, except that the conditions for a level-i B-tree to be searched are slightly different. Specifically, the conditions include (i) the corresponding (level-i) host entry's MBR is contained in qR, and (ii) as in the case of E_0, the MBR of the node including the entry intersects, but is not contained in, qR. Given that there are N/f_R^i (N/f_R^{i+1}) entries (nodes) at the i-th level of the R-tree, E_i can be represented as (N/f_R^i) · P_1' − f_R · (N/f_R^{i+1}) · P_2', where P_1' (P_2') is the probability that an entry (node) is contained in qR. The final form of E_i in Lemma 5.3 is obtained after solving P_1' and P_2' using Lemma 5.1 (applying the MBR extent of the entry/node given in Lemma 5.2).

LEMMA 5.4. Given an aRB-tree and a spatio-temporal aggregation query whose interval consists of qT timestamps, the cost NAB_i of searching a B-tree at the i-th level of the host index equals:

    NAB_i = \sum_{j=0}^{h_{Bi}-1} NAB_{ij},  where

    NAB_{ij} = \begin{cases} \min\left\{ 2,\ \lceil a_{msi} \cdot T / b_B^{j+1} \rceil \right\}, & \text{if } q_T > b_B^{j+1} / a_{msi} \\ \min\left\{ 1 + q_T \cdot a_{msi} / b_B^{j+1},\ \lceil a_{msi} \cdot T / b_B^{j+1} \rceil \right\}, & \text{otherwise} \end{cases}

    h_{Bi} = \lceil \log_{b_B}(a_{msi} \cdot T / b_B) \rceil,  a_{msi} = 1 - (1 - a_{ms(i-1)})^{f_R},  and  a_{ms0} = a_{ms}.

PROOF. Let us first consider the B-trees associated with the leaf entries of the R-tree. Each of these trees indexes all the measure changes of a particular data region in history, the number of which equals ams · T. Thus, the height of the B-tree equals h_{Bi} = \lceil \log_{b_B}(a_{ms} \cdot T / b_B) \rceil. At each level 0 ≤ j ≤ h_{Bi} − 1 of the B-tree, (i) there are in total N_{Bj} = \lceil a_{ms} \cdot T / b_B^{j+1} \rceil nodes, so (ii) each node covers T/N_{Bj} timestamps. As shown in Figure 11(a), if the query lifespan qT is longer than that of a node, two node accesses are necessary (unless level j is the root). Otherwise, the query only visits those nodes whose lifespans intersect qT, and according to Lemma 5.1, the probability of such an intersection is (T/N_{Bj} + q_T)/T. In this case, the expected number NA_{B0j} of node accesses at level j of the B-tree equals N_{Bj} \cdot (T/N_{Bj} + q_T)/T = 1 + q_T \cdot a_{ms} / b_B^{j+1}.

The analysis generalizes to the B-trees at higher levels, except that the probability a_{msi} (that a level-i B-tree receives a new measure change at each timestamp) varies. Interestingly, a_{msi} (i ≥ 1) can be derived from a_{ms(i−1)} based on the following observation (for i = 0, a_{ms0} = a_{ms}). Let e_1 be a level-i entry in the


R-tree and e_2 be any entry in the child node of e_1; then, whenever a measure change is inserted into the B-tree of e_2, a change is also inserted into that of e_1. Given that the average number of entries in the child node of e_1 equals the node fanout f_R, we have a_{msi} = 1 − (1 − a_{ms(i−1)})^{f_R}. Replacing a_{ms} with a_{msi} in the derivation for NA_{B0j}, we obtain the same representation for NAB_{ij}, and thus complete the proof.

Based on Lemmas 5.3 and 5.4, the following theorem presents the query cost (in terms of the number of node accesses) of the aRB-tree as a function of the dataset properties and query parameters.

THEOREM 5.1 (aRB-TREE QUERY COST). Given a spatio-temporal aggregation query whose region is a square with length qS and whose interval includes qT timestamps, the cost of the aRB-tree equals:

    NA_{aRB} = \frac{4N \cdot q_S \cdot s_0}{f_R} + N \left[ \left( \sqrt{D/N} + q_S \right)^2 - (q_S - s_0)^2 \right] \cdot \left( 2 \lceil \log_{b_B}(a_{ms} \cdot T / b_B) \rceil - 1 \right)
             + \sum_{i=1}^{\lceil \log_{f_R}(N/f_R) \rceil - 1} \left\{ \frac{4N \cdot q_S \cdot s_i}{f_R^{i+1}} + \frac{N}{f_R^i} \left[ (q_S - s_{i-1})^2 - (q_S - s_i)^2 \right] \cdot \left( 2 \lceil \log_{b_B}(a_{msi} \cdot T / b_B) \rceil - 1 \right) \right\},

where N is the dataset cardinality, D the density of data regions, T is the total number of timestamps in history, a_ms is the measure agility, f_R the node fanout of the R-tree, b_B is the node capacity of the B-tree, s_i is given in Lemma 5.2, and a_msi is given in Lemma 5.4.

PROOF. This theorem can be obtained by applying Lemmas 5.3, 5.4 to Equation 5-3. Note that the presented formula corresponds to the costs of "typical" queries, whose regions and intervals are large enough so that we consider the most expensive case in each conditioned expression that appears in Lemmas 5.3 and 5.4.

We also prove the following theorem for the size of the aRB-tree.

THEOREM 5.2 (aRB-TREE SIZE). The number of nodes of the aRB-tree equals:

\[
Size_{aRB} = \sum_{i=0}^{\lceil\log_{f_R}(N/f_R)\rceil-1}\left(\frac{N}{f_R^{\,i+1}}
+ \frac{N}{f_R^{\,i}}\sum_{j=0}^{\lceil\log_{b_B}(a_{msi}\cdot T/b_B)\rceil-1}\left\lceil\frac{a_{msi}\cdot T}{b_B^{\,j+1}}\right\rceil\right),
\]

where amsi is given in Lemma 5.4.

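As a quick plausibility check of the size estimate, the following Python sketch accumulates the host R-tree nodes and the packed B-tree nodes level by level. The function name, the rounding, and the guard on the B-tree height are ours; the formula follows the reconstruction above.

```python
import math

def arb_tree_size(N, f_R, a_ms, T, b_B):
    """Approximate node count of an aRB-tree (sketch of Theorem 5.2)."""
    h = math.ceil(math.log(N / f_R, f_R))               # height of the host R-tree
    a_msi, total = a_ms, 0
    for i in range(h):
        total += math.ceil(N / f_R ** (i + 1))          # host R-tree nodes at level i
        h_Bi = max(1, math.ceil(math.log(a_msi * T / b_B, b_B)))
        btree_nodes = sum(math.ceil(a_msi * T / b_B ** (j + 1)) for j in range(h_Bi))
        total += math.ceil(N / f_R ** i) * btree_nodes  # measure-index (B-tree) nodes
        a_msi = 1.0 - (1.0 - a_msi) ** f_R              # agility at the next level up
    return total

# e.g., LA-like settings: N = 130,000, f_R = 36, a_ms = 10%, T = 1000, b_B = 127
print(arb_tree_size(130_000, 36, 0.1, 1000, 127))
```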
PROOF. The number of nodes at the i-th level (0 ≤ i ≤ h−1) of the R-tree is N_i = N/f_R^{i+1}, where N is the number of data regions, f_R the node fanout, and h = ⌈log_fR(N/f_R)⌉ is the height of the tree. Thus, the size of the R-tree is Σ_{i=0}^{h−1} N/f_R^{i+1}. As for the measure index size, let us first focus on a level-i B-tree, which, as discussed in Lemma 5.4, indexes a_msi · T measures, where a_msi is the probability that a new measure is inserted into this tree at each
timestamp. Following the same reasoning as the R-tree size analysis, the size of such a B-tree equals Σ_{j=0}^{h_Bi−1} ⌈a_msi·T/b_B^{j+1}⌉, where h_Bi = ⌈log_bB(a_msi·T/b_B)⌉ is the height of the tree, and b_B is the node capacity of the B-tree (recall that each B-tree is packed). Since the total number of level-i B-trees equals N/f_R^i (the entries at the i-th level of the R-tree), the total size of the measure indexes is Σ_{i=0}^{h−1} (N/f_R^i · Σ_{j=0}^{h_Bi−1} ⌈a_msi·T/b_B^{j+1}⌉). The formula presented in the theorem corresponds to the sum of the sizes of the host and measure indexes.

5.3 Cost Model for aMVRB-Trees

Our analysis of the aMVRB-tree uses the following lemma on the MVR-tree, which allows us to circumvent a discussion of the complex behavior of multiversion structures.

LEMMA 5.5 (TAO ET AL. 2002A). Given N regions (with density D) evolving for T timestamps with extent agility a_ext, the following estimates hold for the corresponding MVR-tree: (i) the height is h = ⌈log_fMVR(N/f_MVR)⌉, where f_MVR is the node fanout^7; (ii) the total number of nodes (entries) at the i-th level is N_i = N/f_MVR^{i+1} + a_ext·N·(T−1)/(b_MVR − f_MVR)^{i+1} (respectively, N_i·[b_MVR − a_ext·f_MVR^{i+1}/(b_MVR − f_MVR)^i]), where b_MVR is the node capacity; (iii) the side length s_i of a node at the i-th level, and the number t_i of timestamps covered by its lifespan, are:

\[
s_i = \sqrt{\frac{D_{i+1}\cdot f_{MVR}^{\,i+1}}{N}}, \qquad D_0 = D, \qquad
D_{i+1} = \left(1+\frac{\sqrt{D_i}-1}{\sqrt{f_{MVR}}}\right)^2, \qquad
t_i = \frac{(b_{MVR}-f_{MVR})^{i+1}}{a_{ext}\cdot f_{MVR}^{\,i+1}};
\]

(iv) the lifespan et_i of an entry at the i-th level covers t_i·f_MVR/[b_MVR − a_ext·f_MVR^{i+1}/(b_MVR − f_MVR)^i] timestamps.

^7 The fanout of the MVR-tree should be interpreted as the number of entries in a node that are alive at one timestamp. Note that this is different from the number of entries in a node, which includes entries alive at all the timestamps of the node's lifespan.

The above lemma provides the estimation of h and N_i, while for the other components (i.e., aP_i, E_i, NAB_i) in Equation 5-3, we prove the following results:

LEMMA 5.6. Given a spatio-temporal aggregation query, whose region is a square with length q_S and interval includes q_T timestamps, the probability aP_i that a level-i node of the host MVR-tree is accessed can be represented as: aP_i = 4·q_S·s_i·(t_i + q_T)/T, if q_S > s_i; otherwise, aP_i = (q_S + s_i)^2·(t_i + q_T)/T, where s_i, t_i are given by Lemma 5.5.

PROOF. A node in the host MVR-tree is searched if (i) its MBR intersects, but is not contained in, the query region q_R, and (ii) its lifespan intersects the query interval q_T. The formulae presented correspond to the product of the probabilities of (i) and (ii) in Lemma 5.1, applying the MBR extent and lifespan of a node/entry in the MVR-tree from Lemma 5.5.

LEMMA 5.7. Given a spatio-temporal aggregation query, whose region is a square with length q_S and interval includes q_T timestamps, the number of level-i
B-trees searched in the aMVRB-tree equals:

\[
E_0 =
\begin{cases}
N\left[\left(\sqrt{D/N}+q_S\right)^2-(q_S-s_0)^2\right]\left(1+\dfrac{2q_T-2}{et_0+q_T-1}\right), & \text{if } q_S>s_0,\\[10pt]
N\left(\sqrt{D/N}+q_S\right)^2\left(1+\dfrac{2q_T-2}{et_0+q_T-1}\right), & \text{otherwise,}
\end{cases}
\qquad \text{if } i=0 \text{ (leaf level)};
\]

\[
E_i =
\begin{cases}
\dfrac{N}{f_{MVR}^{\,i}}\left[(q_S-s_{i-1})^2-(q_S-s_i)^2\right]\left(1+\dfrac{2q_T-2}{et_i+q_T-1}\right), & \text{if } q_S>s_{i-1} \text{ and } q_S>s_i,\\[10pt]
\dfrac{N}{f_{MVR}^{\,i}}(q_S-s_{i-1})^2\left(1+\dfrac{2q_T-2}{et_i+q_T-1}\right), & \text{if } q_S>s_{i-1} \text{ and } q_S<s_i,\\[10pt]
0, & \text{otherwise,}
\end{cases}
\qquad \text{if } i>0,
\]

where si , eti are given in Lemma 5.5.

PROOF. A timestamp query is answered in the same way as in the aRB-tree, using only the logical R-tree (in the MVR-tree) responsible for the query timestamp. Thus, the estimation of E_i is reduced to that of the aRB-tree (note that, for q_T = 1, the presented formulae have the same form as those in Lemma 5.3). For interval queries, since a B-tree is searched only if its host entry's lifespan covers either the starting or ending timestamp of q_T, we compute E_i as c_1 + c_2 − c_3, where c_1 (c_2) is the number of B-trees searched at the logical R-tree responsible for the starting (ending) timestamp of q_T, and c_3 is the number of common B-trees included in c_1 and c_2 (i.e., their host entries cover the entire q_T). Note that c_1 and c_2 are identical because they both correspond to the E_i estimation of timestamp queries, which is already solved earlier (by reducing to the aRB-tree). Further, given that the lifespan of a level-i host entry covers et_i timestamps (given in Lemma 5.5), the probability that the lifespan covers q_T, provided that it covers the starting or ending timestamp of q_T, equals (et_i − q_T)/(et_i + q_T) if q_T ≤ et_i, in which case c_3 can be obtained as c_1 · (et_i − q_T)/(et_i + q_T). If q_T > et_i, then the entry's lifespan cannot contain q_T, and thus c_3 = 0. The formulae presented in the lemma correspond to the final form after simplification.

LEMMA 5.8. Given an aMVRB-tree and a spatio-temporal aggregation query, whose query interval consists of q_T timestamps, the cost NAB_i of searching a B-tree at the i-th level of the host index equals:

\[
NAB_i = \sum_{j=0}^{h_{Bi}-1} NAB_{ij}, \quad \text{where}
\]
\[
NAB_{ij} =
\begin{cases}
\min\left(2,\; a_{msi}\cdot et_i/b_B^{\,j+1}\right), & \text{if } \min(q_T, et_i) > b_B^{\,j+1}/a_{msi},\\[4pt]
\min\left(1+\min(q_T, et_i)\cdot a_{msi}/b_B^{\,j+1},\; a_{msi}\cdot et_i/b_B^{\,j+1}\right), & \text{otherwise,}
\end{cases}
\]
\[
h_{Bi} = \left\lceil\log_{b_B}(a_{msi}\cdot et_i/b_B)\right\rceil, \qquad a_{msi} = 1-(1-a_{ms(i-1)})^{f_{MVR}}, \qquad a_{ms0} = a_{ms},
\]

and et_i is given in Lemma 5.5.


PROOF. This lemma can be proved in the same way as Lemma 5.4, except that no B-tree manages all the timestamps in history, but rather the timestamps in the lifespan of the associated host entry. Similar to Lemma 5.4, a_msi represents the probability that a new measure change is inserted into the B-tree of a host entry at the i-th level. Consequently, if the lifespan of the entry covers et_i timestamps, its B-tree consists of a_msi · et_i records. The correctness follows by replacing a_msi · T with a_msi · et_i in the proof of Lemma 5.4.

Now we are ready to present the complete models for the cost and space consumption of the aMVRB-tree.

THEOREM 5.3 (aMVRB-TREE QUERY COST). Given a spatio-temporal aggregation query whose region is a square with length q_S and interval includes q_T timestamps, the cost of an aMVRB-tree equals:

\[
\begin{aligned}
NA_{aMVRB} ={}& 4\left[\frac{N}{f_{MVR}}+\frac{a_{ext}\cdot N\cdot(T-1)}{b_{MVR}-f_{MVR}}\right]\frac{q_S\cdot s_0\,(t_0+q_T)}{T}\\
&+ N\left[\left(\sqrt{D/N}+q_S\right)^2-(q_S-s_0)^2\right]\left(1+\frac{2q_T-2}{et_0+q_T-1}\right)(2h_{B0}-1)\\
&+ \sum_{i=1}^{\lceil\log_{f_{MVR}}(N/f_{MVR})\rceil-1}\Bigg\{
4\left[\frac{N}{f_{MVR}^{\,i+1}}+\frac{a_{ext}\cdot N\cdot(T-1)}{(b_{MVR}-f_{MVR})^{i+1}}\right]\frac{q_S\cdot s_i\,(t_i+q_T)}{T}\\
&\qquad\qquad + \frac{N}{f_{MVR}^{\,i}}\left[(q_S-s_{i-1})^2-(q_S-s_i)^2\right]\left(1+\frac{2q_T-2}{et_i+q_T-1}\right)(2h_{Bi}-1)\Bigg\},
\end{aligned}
\]

where N is the dataset cardinality, D the density of data regions, T is the total number of timestamps in history, a_ms is the measure agility, f_MVR the node fanout, b_B is the node capacity of a B-tree, s_i, t_i, et_i are given in Lemma 5.5, and h_Bi is given in Lemma 5.8.

PROOF. The theorem follows by applying Lemmas 5.6–5.8 to Equation 5-3.

THEOREM 5.4 (aMVRB-TREE SIZE). The number of nodes of the aMVRB-tree equals:

\[
Size_{aMVRB} = \sum_{i=0}^{\lceil\log_{f_{MVR}}(N/f_{MVR})\rceil-1}\left(
\frac{N}{f_{MVR}^{\,i+1}}+\frac{a_{ext}\cdot N\cdot(T-1)}{(b_{MVR}-f_{MVR})^{i+1}}
+\frac{N}{f_{MVR}^{\,i}}\left\lceil\frac{a_{msi}\cdot T}{b_B}\right\rceil\right),
\]

where a_msi is given in Lemma 5.8.

PROOF. The proof is similar to Theorem 5.2, except that the B-trees of the host entries (in the MVR-tree) with disjoint lifespans are stored compactly in a B-File.

5.4 Cost Models for a3DR- and a3DRB-Trees

The a3DR-tree does not involve any measure index, but includes all the extent and measure changes in a single structure. Thus, based on Equation 5-3, its query cost depends only on h, N_i, and aP_i, as solved in the following lemma.


LEMMA 5.9. Given N data regions (with density D) evolving for T timestamps with extent (measure) agility a_ext (a_ms), the total number of records in the a3DR-tree equals N + N·(T−1)·(a_ext + a_ms − a_ext·a_ms). As a result, its height is h = ⌈log_f3DR{[N + N·(T−1)·(a_ext + a_ms − a_ext·a_ms)]/f_3DR}⌉, and the number N_i of nodes at the i-th level equals N_i = [N + N·(T−1)·(a_ext + a_ms − a_ext·a_ms)]/f_3DR^{i+1}. The probability aP_i that a level-i node is accessed during a square query, with length q_S and interval q_T, can be represented as:

\[
aP_i =
\begin{cases}
(q_S+s_i)^2(q_T+t_i)/T-(q_S-s_i)^2(q_T-t_i)/T, & \text{if } q_S>s_i \text{ and } q_T>t_i,\\
(q_S+s_i)^2(q_T+t_i)/T, & \text{otherwise,}
\end{cases}
\]

where the side length s_i of a node at the i-th level and the number t_i of timestamps covered by its lifespan can be computed using extent regression functions [Tao and Papadias 2004]. Note that conventional R-tree analysis (e.g., Theodoridis and Sellis [1996]; Theodoridis et al. [2000]) assumes that nodes have similar extents on each dimension, which does not hold for spatio-temporal applications (nodes may be elongated on the temporal dimension).

PROOF. A record is inserted into the a3DR-tree at a timestamp when a data region issues an extent or measure change (with probabilities a_ext and a_ms, respectively). Thus, a region has probability a_ext + a_ms − a_ext·a_ms to create a record in the a3DR-tree every timestamp. Since the dataset contains N regions evolving for T − 1 timestamps, the 3DR-tree has totally N + N·(T−1)·(a_ext + a_ms − a_ext·a_ms) records. Regarding aP_i, recall that a node in the host 3DR-tree is searched if its 3D box (bounding its MBR and lifespan) intersects, but is not contained in, the query box (bounding q_R and q_T). The presented equations result from the application of Lemma 5.1.

On the other hand, since the host index of the a3DRB-tree manages only extent changes, the number of records in it equals N + N·(T−1)·a_ext; thus, its height is h = ⌈log_f3DR{[N + N·(T−1)·a_ext]/f_3DR}⌉, and the number N_i of nodes at the i-th level equals N_i = [N + N·(T−1)·a_ext]/f_3DR^{i+1}. Further, since the conditions for a node (in the host index) to be accessed are the same as those for the aMVRB-tree, the estimation of aP_i is the same as Lemma 5.6. Similarly, Lemmas 5.7 and 5.8 also predict E_i and NAB_i for the a3DRB-tree, except that (i) f_MVR should be replaced with f_3DR·t_{i−1}/t_i (for i ≥ 1), and (ii) et_0 = 1/a_ext (where a_ext is the extent agility), while et_i = t_{i−1} for i ≥ 1. The following theorems summarize the complete models for the a3DR- and a3DRB-trees. The sizes of the a3DR- and a3DRB-trees are derived in a way similar to Theorems 5.2 and 5.4.

THEOREM 5.5 (a3DR-, a3DRB-TREES QUERY COSTS). Given a spatio-temporal aggregation query, whose region is a square with length q_S and interval includes q_T timestamps, the costs of the a3DR- and a3DRB-trees are:

\[
NA_{a3DR} = \sum_{i=0}^{h-1}\frac{N+N(T-1)(a_{ext}+a_{ms}-a_{ext}\cdot a_{ms})}{f_{3DR}^{\,i+1}\cdot T}
\left[(q_S+s_i)^2(q_T+t_i)-(q_S-s_i)^2(q_T-t_i)\right],
\]

\[
\begin{aligned}
NA_{a3DRB} ={}& 4\,\frac{N+a_{ext}\cdot N\cdot(T-1)}{f_{3DR}}\cdot\frac{q_S\cdot s_0\,(t_0+q_T)}{T}\\
&+ N\left[\left(\sqrt{D/N}+q_S\right)^2-(q_S-s_0)^2\right]\left(1+\frac{2q_T-2}{1/a_{ext}+q_T-1}\right)(2h_{B0}-1)\\
&+ \sum_{i=1}^{\lceil\log_{f_{3DR}}(N/f_{3DR})\rceil-1}\Bigg\{
4\,\frac{N+a_{ext}\cdot N\cdot(T-1)}{f_{3DR}^{\,i+1}}\cdot\frac{q_S\cdot s_i\,(t_i+q_T)}{T}\\
&\qquad\qquad+\frac{N}{(f_{3DR}\cdot t_{i-1}/t_i)^i}\left[(q_S-s_{i-1})^2-(q_S-s_i)^2\right]\left(1+\frac{2q_T-2}{t_{i-1}+q_T-1}\right)(2h_{Bi}-1)\Bigg\},
\end{aligned}
\]
where N is the dataset cardinality, D the density of data regions, T the total number of timestamps in history, a_ext (a_ms) the extent (measure) agility, f_3DR the node fanout of the 3DR-tree (its value is different in the a3DR- and a3DRB-tree), s_i, t_i, et_i are computed using extent regression functions, and h_Bi, a_msi are given in Lemma 5.8.

PROOF. The theorem follows by applying Lemmas 5.6–5.9 to Equation 5-3.

THEOREM 5.6 (a3DR-, a3DRB-TREE SIZES). The number of nodes of the a3DR- and a3DRB-trees is:

\[
Size_{a3DR} = \sum_{i=0}^{h-1}\frac{N+N(T-1)(a_{ext}+a_{ms}-a_{ext}\cdot a_{ms})}{f_{3DR}^{\,i+1}},
\]

\[
Size_{a3DRB} = \sum_{i=0}^{\lceil\log_{f_{3DR}}(N/f_{3DR})\rceil-1}\left(
\frac{N+a_{ext}\cdot N\cdot(T-1)}{f_{3DR}^{\,i+1}}
+\frac{N}{f_{3DR}^{\,i}}\left\lceil\frac{a_{msi}\cdot T}{b_B}\right\rceil\right),
\]

where N is the dataset cardinality, D the density of data regions, T is the total number of timestamps in history, a_ext (a_ms) is the extent (measure) agility, b_B is the node capacity of a B-tree and f_3DR the node fanout of the 3DR-tree; a_msi is given in Lemma 5.8.

PROOF. By Lemma 5.9, the total number of entries in the a3DR-tree is N + N·(T−1)·(a_ext + a_ms − a_ext·a_ms), after which the structure size can be obtained in the same way as the R-tree size estimation [Theodoridis and Sellis 1996; Tao and Papadias 2004]. The proof for the size of the a3DRB-tree is similar to that of Theorem 5.4.

5.5 Performance Characteristics and Simplified Models

The previous equations can be used directly for query optimization. Furthermore, they mathematically reveal the factors that determine the performance of each structure and promote our understanding about their behavior. The first observation, which leads to simplification of the formulae, is that the query cost
of each method involves a dominant term. Specifically, the cost of the a3DR-tree is dominated by the number of leaf node accesses (a typical phenomenon for multi-dimensional indexes). For the proposed multi-tree structures, however, the cost is dominated by that of searching the B-trees associated with the leaf entries in the host index. In other words, the query cost on the host index is negligible compared to the total processing time, because a leaf node access in the host index usually necessitates visits to the associated B-trees (each involving at least one node access). Based on this fact, we can simplify the cost of the aRB-tree (Theorem 5.1) into:

\[
NA_{aRB} = 2N\left[\left(\sqrt{D/N}+q_S\right)^2-\left(q_S-\sqrt{f_R\cdot D/N}\right)^2\right]\log_{b_B}\!\frac{a_{ms}\cdot T}{b_B}. \tag{5-4}
\]

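The simplified estimate is straightforward to evaluate. The following Python sketch does so under the reconstruction of Equation 5-4 given above; the function name is ours, and the density value used in the example loop is purely illustrative (it is not taken from the paper).

```python
import math

def simplified_na_arb(N, D, q_S, f_R, a_ms, T, b_B):
    """Simplified aRB-tree query cost (sketch of Equation 5-4)."""
    s_obj = math.sqrt(D / N)            # side length of an object MBR
    s_0 = math.sqrt(f_R * D / N)        # side length of a leaf-node MBR
    e_0 = N * ((s_obj + q_S) ** 2 - (q_S - s_0) ** 2)   # leaf-level B-trees searched
    height = max(1, math.ceil(math.log(a_ms * T / b_B, b_B)))  # guarded B-tree height
    return 2 * e_0 * height

# The estimate grows only logarithmically with the measure agility a_ms:
for a_ms in (0.01, 0.05, 0.10, 0.20):
    print(a_ms, round(simplified_na_arb(N=130_000, D=0.05, q_S=0.3,
                                        f_R=36, a_ms=a_ms, T=1000, b_B=127)))
```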
Its advantage over the a3DR-tree (given in Theorem 5.5, setting a_ext to 0) becomes obvious: its cost increases only logarithmically with a_ms, while that of the a3DR-tree (Theorem 5.5) grows linearly. This and the subsequent observations are experimentally confirmed in Section 6. Regarding the solutions for volatile data regions, an important fact is that for aMVRB- and a3DRB-trees the height of each B-tree associated with a leaf entry in the host index is usually 1 in practice, indicating that the total number of node accesses equals the number of qualifying B-trees. In particular, for the a3DRB-tree, the height h_B0 = ⌈log_bB[(a_ms/a_ext)/b_B]⌉ (Lemma 5.8) equals 1 as long as a_ms/a_ext < b_B. Given that for typical page sizes b_B is 100–1000, this condition holds when the regions' measures change less than 100–1000 times faster than their extents. For the aMVRB-tree, on the other hand, h_B0 = ⌈log_bB[(a_ms·et_0)/b_B]⌉ (Lemma 5.8), where et_0 (given in Lemma 5.5) equals (b_MVR − f_MVR)/[a_ext·(b_MVR − a_ext·f_MVR)] ≤ 1/a_ext (recall that a_ext ≤ 1). Thus, for aMVRB-trees, h_B0 ≤ ⌈log_bB[(a_ms/a_ext)/b_B]⌉—the condition for h_B0 = 1 is even easier to satisfy than for a3DRB-trees. When the height of measure indexes equals 1, the query cost of these two structures can be simplified as follows:

\[
NA_{a3DRB} = N\left[\left(\sqrt{D/N}+q_S\right)^2-(q_S-s_0)^2\right]\left(1+\frac{2q_T-2}{1/a_{ext}+q_T-1}\right) \tag{5-5}
\]

\[
NA_{aMVRB} = N\left[\left(\sqrt{D/N}+q_S\right)^2-(q_S-s_0)^2\right]\left(1+\frac{2q_T-2}{\dfrac{b_{MVR}-f_{MVR}}{a_{ext}\,(b_{MVR}-a_{ext}\cdot f_{MVR})}+q_T-1}\right) \tag{5-6}
\]

In this case the query time of the aMVRB- and a3DRB-trees is independent of the measure agility a_ms, in contrast to the linear deterioration of the a3DR-tree (Theorem 5.5). Further, the relative performance of the two multi-tree structures is also clear (through the comparison^8 of Equations 5-5 and 5-6): the a3DRB-tree always outperforms the aMVRB-tree (i.e., (b_MVR − f_MVR)/[a_ext·(b_MVR − a_ext·f_MVR)] ≤ 1/a_ext) except for q_T = 1 (i.e., timestamp queries).

^8 Strictly speaking, the two equations are not directly comparable due to the different estimates of s_0 (i.e., the side length of the MBR of a leaf node). Nevertheless, the difference is small enough for our discussion to hold.
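To see the comparison implied by Equations 5-5 and 5-6 numerically, here is a rough Python sketch evaluating the two simplified estimates side by side. All parameter values are illustrative (in particular, the density and the MVR-tree fanout are assumptions, since the paper does not report them explicitly), and the common functional form follows the reconstruction above.

```python
import math

def simplified_cost(N, D, q_S, q_T, f_host, et_0):
    """Common form of Equations 5-5 / 5-6: leaf-level E_0 times the interval factor."""
    s_0 = math.sqrt(f_host * D / N)                     # leaf-node side length
    e_0 = N * ((math.sqrt(D / N) + q_S) ** 2 - (q_S - s_0) ** 2)
    return e_0 * (1 + (2 * q_T - 2) / (et_0 + q_T - 1))

N, D, q_S, a_ext = 130_000, 0.05, 0.3, 0.05
b_MVR, f_MVR, f_3DR = 28, 19, 28                        # f_MVR is a hypothetical value
et0_a3drb = 1 / a_ext
et0_amvrb = (b_MVR - f_MVR) / (a_ext * (b_MVR - a_ext * f_MVR))

for q_T in (1, 50, 100, 200):
    print(q_T,
          round(simplified_cost(N, D, q_S, q_T, f_3DR, et0_a3drb)),   # a3DRB estimate
          round(simplified_cost(N, D, q_S, q_T, f_MVR, et0_amvrb)))   # aMVRB estimate
```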


Nevertheless, recall that the aMVRB-tree has wider applicability since it is an online structure, while the a3DRB-tree does not support incremental updates. Finally, the size comparison of the a3DR-tree and the proposed methods is obvious: the multi-trees consume significantly less space due to the lack of redundancy, which is also reflected in the cost models (Theorems 5.2, 5.4, 5.6). Specifically, observe that the total number of records stored in all trees is approximately the same; however, the a3DR-tree has rather low fanout f_a3DR (since each entry of the tree must store both extent and measure information), while for multi-tree structures, most data (i.e., the measures) are stored in packed B-trees with large node capacity b_B (≈3·f_a3DR in our experimental settings), hence requiring fewer nodes.

5.6 Extension to General Datasets

So far we have focused on regular datasets, where the spatial distribution remains uniform and the aggregate and extent agilities are constant throughout the history. As discussed in Tao et al. [2002a], the analysis of general datasets (e.g., non-uniform spatial distribution, variable agilities at different timestamps, etc.) can be reduced to that of regular data, based on the fact that even though the overall data distribution may deviate significantly, for typical queries with small regions (compared to the data space) and intervals (compared to the history length), the distribution of data satisfying the query conditions is usually fairly regular. This permits the application of the regular model at the query spatial and temporal extents, after the local data properties (i.e., data density, average agilities, etc.) are accurately estimated, which can be achieved through histograms [Tao et al. 2002a]. We adopt the same approach in our experimental evaluation for providing cost estimations for general data. Finally, note that all the proposed equations for structure sizes directly support nonregular data (i.e., without the need of histograms).

6. EXPERIMENTS

In this section, we evaluate the proposed methods under a variety of experimental settings. The spatial datasets used in the following experiments include [Tiger]: (i) LA, which contains 130 k rectangles representing street locations, and (ii) LB, which consists of 50 k road segments. Due to the lack of real spatio-temporal (aggregation) data, datasets with static regions are created as follows. At timestamp 0, each object (in a unit spatial universe) of a real dataset is associated with a measure (uniformly distributed in [0, 10000]). Then, for each of the subsequent 999 timestamps (i.e., T = 1000), a_ms percent of the objects are randomly selected to change their measures by offsets uniformly decided in [−100, 100]. Dynamic datasets (volatile regions) are synthesized in a similar manner except that at each timestamp, a_ext percent of the regions move their centroids (towards random directions) by distances that are uniform in [−0.01, 0.01]. We vary a_ms as a dataset parameter from 1% to 20%, and a_ext from 1% to 9%, resulting in a total of 1 to 20 million records. In most of the combinations of a_ms and a_ext, the measure agility is (up to 20 times) larger than the extent agility.
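A minimal Python sketch of this data-generation procedure is given here. The random seed, data layout, and function name are ours; it only mimics the measure/extent update process described above and is not the authors' actual generator.

```python
import random

def generate_histories(objects, T=1000, a_ms=0.10, a_ext=0.0, seed=0):
    """objects: list of (x, y) centroids in the unit universe.
    Returns per-timestamp lists of measure changes and centroid moves."""
    rng = random.Random(seed)
    measures = [rng.uniform(0, 10000) for _ in objects]          # measures at timestamp 0
    centroids = list(objects)
    measure_log, move_log = [], []
    for t in range(1, T):
        changed = rng.sample(range(len(objects)), int(a_ms * len(objects)))
        for oid in changed:
            measures[oid] += rng.uniform(-100, 100)              # measure update
        measure_log.append([(oid, measures[oid]) for oid in changed])
        moved = rng.sample(range(len(objects)), int(a_ext * len(objects)))
        for oid in moved:
            x, y = centroids[oid]
            centroids[oid] = (x + rng.uniform(-0.01, 0.01),
                              y + rng.uniform(-0.01, 0.01))      # extent (centroid) update
        move_log.append([(oid, centroids[oid]) for oid in moved])
    return measure_log, move_log
```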


Fig. 12. Spatial distributions.

Fig. 13. Distribution changes of dynamic data (created from LA).

Figure 12 shows the visualization of LA and LB, while Figure 13 illustrates the distributions of dynamic data (created from LA with a_ext = 5%) at timestamps 250, 500, and 1000. Notice that the distribution gradually becomes uniform. All implementations of R-trees use the R*-tree [Beckmann et al. 1990] update algorithms. The node size is set to 1 k bytes, so that the node capacity of the R-tree (MVR-tree) is 36 (28) entries. The 3DR-trees used in a3DR- and a3DRB-trees have slightly different entry formats, resulting in capacities 31 and 28, respectively. The node capacity of a B-tree equals 127 in all cases. Each query specifies a square spatial region with side length q_S (i.e., if q_S = 0.1, the query occupies 1% area of the universe), and a temporal interval involving q_T timestamps. The distribution of the query regions follows that of the data in order to avoid queries with empty results, while the temporal interval is generated uniformly in [1, 1000] (i.e., the entire history). The cost of a structure is measured as the average number of node accesses for answering a workload of 200 queries with the same parameters q_S and q_T. In the next section, we first measure the performance (i.e., size and query costs) of alternative methods, and then evaluate the accuracy of the proposed cost models in Section 6.2.

6.1 Structure Size and Query Performance

We start from static regions (i.e., a_ext = 0), and compare the proposed aRB-tree with existing solutions (described in Section 3), namely, column scanning (ColS for short), and the a3DR-tree. The first experiment evaluates space consumption by varying the measure agility a_ms from 1% to 20%. As shown in Figure 14,


Fig. 14. Size vs. measure agility.

Fig. 15. Node accesses for LA (static regions).

the aRB-tree is the smallest structure in all cases. Despite the intermediate tree levels, it consumes less space than ColS, because it does not replicate (in the B-trees) measures that remain constant. On the other hand, the fact table approach has to create a new column for each timestamp. The aRB-tree size is constant until 10% agility, after which it stabilizes at some higher values. This is because, when the agility exceeds 10%, the height of a data region's B-tree increases by one level. Notice that for a_ms = 10%, each data region issues around 100 (= a_ms · T) aggregate updates throughout the history, which is smaller than the B-tree node capacity (127): each B-tree has one node. Similarly, for a_ms = 15% each B-tree manages on the average 150 records, thus it requires two levels. The a3DR-tree, on the other hand, grows linearly with a_ms and consumes more space than ColS for a_ms > 10%.

The next experiment measures the query cost as a function of q_S (i.e., the extent of the query MBR), by fixing q_T to 100 timestamps (i.e., 10% of the history) and a_ms to 10%. Figure 15(a) shows the results for dataset LA, varying q_S from 0.1 to 0.5. The aRB-tree outperforms its competitors significantly for all q_S (notice the logarithmic scale). Furthermore, the costs of the aRB- and a3DR-trees initially increase with q_S, but decrease after q_S exceeds 0.4. This is not surprising because, for skewed distributions (see Figure 12), a large query will contain the MBRs of most nodes, thus resulting in fewer node accesses. Similar phenomena have also been observed in Tao et al. [2002b] for spatial aggregation. ColS is worse than the other methods by more than an order of


Fig. 16. Node accesses for LB (static regions).

magnitude, because it retrieves the information of all regions at each queried timestamp, hence its cost is linear to q_T, but not affected by q_S. Since ColS is significantly more expensive (by orders of magnitude) in all our experiments, we omit its results in the sequel.

Next we fix q_S to 0.3, a_ms to 10%, and increase q_T from 1 to 200 timestamps. Figure 15(b) illustrates the number of node accesses as a function of q_T for the aRB- and a3DR-trees. The performance of the aRB-tree does not deteriorate with q_T because, as discussed in Section 5, the cost is dominated by q_S (which determines the number of host entries whose B-trees need to be searched), while visiting each B-tree has almost the same overhead. The a3DR-tree, however, deteriorates very fast with q_T, and becomes almost five times slower than the aRB-tree when q_T = 200.

In Figure 15(c), q_S and q_T are set to their median values 0.3 and 100 respectively, and a_ms ranges from 1% to 20%. The cost of the aRB-tree remains the same until a_ms = 10% because, as discussed for Figure 14, the B-tree height of each host entry does not change until this agility. For a_ms ≥ 15%, each B-tree contains one more level, which almost doubles the query cost. It is worth mentioning that the aRB-tree will not deteriorate until the B-tree height increases again, which, however, will happen only at much higher agility, due to the fact that the height grows logarithmically with the cardinality. Figure 16 shows the results of the same experiments for dataset LB, where similar phenomena can be observed. In summary, the aRB-tree is clearly the most efficient structure for static regions, while at the same time it consumes less space than the other approaches.

Having presented the results for static regions, we now proceed with volatile data (where a dataset is described by both the aggregate a_ms and extent a_ext agilities), and compare the aMVRB- and a3DRB-trees against the a3DR-tree. Figure 17(a) (17(b)) plots the index sizes for dataset LA as a function of a_ms (a_ext), by fixing a_ext (a_ms) to 5% (10%). Observe that a3DRB-trees are the smallest in all cases because they do not incur redundancy. The aMVRB-tree consumes less space than the a3DR-tree unless a_ms ≤ 5% (a_ext ≥ 7%) in Figure 17(a) (17(b)), because for small a_ms (large a_ext), there are relatively few measure (many extent) changes; thus the size of an aMVRB-tree is dominated by the MVR-tree which, due to the data duplication introduced by the multi-version technique, is larger than the a3DR-tree. Figure 18 shows similar results for dataset LB.

The previous diagrams (Figures 17, 18) for size evaluation of aMVRB- and a3DRB-trees correspond to an implementation using B-Files. Figures 19(a) and


Fig. 17. Structure sizes for LA (volatile regions).

Fig. 18. Structure sizes for LB (volatile regions).

Fig. 19. Benefits of using B-Files for LA (volatile regions).

19(b) illustrate the benefit ratio: the ratio of space without/with B-Files, as a function of a_ms and a_ext, respectively, for LA (the results for LB are similar). The inclusion of B-Files results in structures that are between 10 and 27 times smaller. The aMVRB-tree receives higher improvements than the corresponding a3DRB-tree, due to the fact that it contains more host entries and thus requires


Fig. 20. Node accesses for LA (volatile regions).

a larger number of B-trees, leading to more space waste if B-Files are not used. For all subsequent experiments we employ the B-File implementation.

The next set of experiments evaluates the query performance of the methods for volatile regions. In Figure 20(a), we fix q_T, a_ms, and a_ext to their median values, and measure the query cost as a function of q_S. As expected, the proposed structures outperform the a3DR-tree significantly, while the a3DRB-tree is even more efficient than the aMVRB-tree. Similar to Figure 15(a), the costs of all approaches initially grow with q_S, but decrease after the query becomes sufficiently large (q_S > 0.4). Figure 20(b) shows the number of node accesses as a function of q_T, fixing q_S, a_ms, a_ext to 0.3, 10%, 5% respectively. As predicted in Section 5.5, the aMVRB-tree performs better than the a3DRB-tree for timestamp queries (i.e., q_T = 1), for which only one logical R-tree (in the aMVRB-tree) is visited. The a3DRB-tree is the best structure for the other values of q_T, while the a3DR-tree yields the worst performance in all cases. Figure 20(c) shows the cost by varying a_ms from 1% to 20%. Although the performance of the a3DR-tree deteriorates significantly when a_ms increases, the costs of the aMVRB- and a3DRB-trees are not affected at all. Figure 20(d) demonstrates the node accesses by varying a_ext. In general, the a3DRB-tree has the best performance (and the smallest size), followed by the aMVRB-tree. However, the aMVRB-tree is the only online structure, applicable in cases where the region extents are not known in advance. The results for dataset LB are similar and omitted.

6.2 Accuracy of the Cost Models

This section evaluates the accuracy of the cost models proposed in Section 5. Given the actual act and estimated est values, the relative error is defined as


Fig. 21. Size estimation accuracy.

Fig. 22. Node access estimation accuracy for LA (static regions).

|act − est|/act. Based on this, we measure the error for a query workload as the average error of all queries involved. In order to estimate the performance for nonregular data distributions, we maintain histograms as described in Theodoridis et al. [2000] and Tao et al. [2002a]. Specifically, the histogram for static regions consists of a grid with H × H cells that partition the space regularly, and each cell is associated with its local density.^9 For volatile data, the histogram contains a set of grids such that the i-th grid corresponds to the data distribution at the 100·i-th timestamp (i.e., for T = 1000, 11 grids are maintained). Since the variation of distribution is slow with time, the i-th grid can be used to represent the distributions between the 100·i-th and (100·i + 99)-th timestamps [Tao et al. 2002a].

Starting with static regions, Figure 21 shows the relative error (as a function of a_ms) of Theorem 5.2 (5.6) that computes the size of the aRB-tree (a3DR-tree) for datasets LA and LB. The estimated values are very accurate (maximum error 3%) and the precision increases with a_ms. The minimum error will be achieved when a_ms = 100%, in which case the size estimation of the aRB-tree becomes trivial because each B-tree of a host entry simply contains exactly T records, where T is the number of timestamps in history. Next we evaluate Theorems 5.1, 5.5 that predict the number of node accesses for aRB- and a3DR-trees. Figures 22(a), 22(b), and 22(c) illustrate the error as functions of q_S, q_T, and a_ms for dataset LA (by setting the other parameters to their median values in each case).

^9 Assuming there are n data rectangles r_i (1 ≤ i ≤ n) intersecting a cell c, the local density of c is defined as Σ_i x_i / area(c), where x_i is the intersection area between c and r_i, and area(c) is the area of c.
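A compact Python sketch of such a grid histogram follows. The value of H, the cell bookkeeping, and the function name are ours; the density per cell follows the definition in the footnote above.

```python
def build_density_histogram(rectangles, H=50):
    """Grid histogram over the unit universe: each of the H x H cells stores its
    local density, i.e., the sum of its intersection areas with the data
    rectangles divided by the cell area."""
    cell = 1.0 / H
    grid = [[0.0] * H for _ in range(H)]
    for (x1, y1, x2, y2) in rectangles:                      # rectangle in [0,1]^2
        i_lo, i_hi = int(x1 / cell), min(H - 1, int(x2 / cell))
        j_lo, j_hi = int(y1 / cell), min(H - 1, int(y2 / cell))
        for i in range(i_lo, i_hi + 1):                      # only overlapping cells
            for j in range(j_lo, j_hi + 1):
                dx = min(x2, (i + 1) * cell) - max(x1, i * cell)
                dy = min(y2, (j + 1) * cell) - max(y1, j * cell)
                grid[i][j] += (dx * dy) / (cell * cell)
    return grid

# The local densities of the cells covered by a query region can then be averaged
# to instantiate D in the cost formulas of Section 5 for non-uniform data.
```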


Fig. 23. Node access estimation accuracy for LB (static regions).

Fig. 24. Size estimation accuracy (volatile regions).

A general observation is that queries incurring higher overhead can usually be better predicted, which is consistent with previous spatial analysis [Theodoridis and Sellis 1996; Theodoridis et al. 2000; Acharya et al. 1999]. In Figure 22(a), for example, the error initially drops with q_S but grows after q_S exceeds 0.4, corresponding to the same behavior as Figure 15(a). Similar phenomena can also be observed in Figures 22(b) and 22(c), where the settings are the same as Figures 15(b) and 15(c), respectively, as well as Figure 23 for dataset LB. It is worth mentioning that the maximum error (about 20%) in query cost estimation is higher than that of size estimation (Figure 21), because, as indicated in Theorems 5.1 and 5.5, the cost depends not only on the structure size, but also on the node extents. Hence the overall error accumulates the estimation of the node extents (i.e., the imprecision of the previous cost models such as the one in Lemma 5.2), and the inaccuracy introduced by the histogram.

The last set of experiments evaluates the accuracy of the cost models for volatile regions. Figure 24 shows the error of estimating the sizes of the proposed structures on both datasets LA and LB. The estimated values are accurate (maximum error 7%) and the precision improves with the extent and aggregate agilities. Figure 25 demonstrates the error of Theorems 5.3, 5.5 (for query cost estimation) with respect to q_S, q_T, a_ms, and a_ext for LA. Comparing the diagrams in


Fig. 25. Node access estimation accuracy for LA (volatile regions).

Figures 25 and 22, notice that the observation mentioned earlier also applies to volatile regions—the precision, in general, increases with the query overhead. Furthermore, the estimation of the query cost is less accurate than that of the size, as it accumulates the error of the histograms and the corresponding models for node extents. The results for LB are similar and omitted.

To summarize, in this section we have experimentally confirmed the efficiency of the proposed structures for spatio-temporal aggregation. Specifically, for static regions, the aRB-tree, although consuming a fraction of the space required by the a3DR-tree, outperforms the a3DR-tree significantly in all cases. For volatile regions, the a3DRB-tree has the best overall performance in terms of size and query cost. Since, however, it is an offline structure, the aMVRB-tree becomes the best alternative for applications requiring online indexing. In all cases, the traditional data cube approach yields disappointing results. Finally, we also demonstrated that the proposed models can predict the performance accurately, incurring a maximum error of around 20% for real data distributions.

7. DISCUSSION AND CONCLUSION

Numerous real-life applications require fast access to summarized spatio-temporal information. Although data warehouses have been successfully employed in similar problems for relational data, traditional techniques have three basic impediments when applied directly in spatio-temporal applications: (i) no
support for ad hoc hierarchies, unknown at the design time, (ii) lack of spatio-temporal indexing methods, and (iii) limited provision for dimension versioning and volatile regions. Here, we provide a unified solution to these problems by developing spatio-temporal structures that integrate indexing with the pre-aggregation technique. The intuition is that, by keeping summarized information inside the index, aggregation queries with arbitrary groupings can be answered by the intermediate nodes, thus saving accesses to detailed data. The applicability of our methods is demonstrated through a set of experiments that attempt to simulate realistic situations. In order to enable query optimization in practice, we also perform a comprehensive performance study for the existing and proposed structures, and present efficient cost models to capture the index sizes and query costs. Our results provide significant insights into the behavior of alternative methods, and analytically clarify the advantages of the proposed technique.

The proposed techniques can replace the data cube in a star-schema-like implementation of spatio-temporal data warehouses. Consider, for instance, the aRB-tree of Figure 5. Each leaf entry of the host R-tree can keep a pointer (foreign key) to the record storing information about the corresponding cell (e.g., the phone company that owns the cell) in a table of regions. Given this dimension table, the system can answer queries of the form "find the total number of phone-calls (in cells intersecting q_R, during q_T) initiated by customers of Hong Kong Telecom." Similar pointers may be kept for the leaf entries of the B-trees, pointing to a dimension table with information about the type of timestamp (e.g., peak hour, cost of phone-calls) and so on.

Although for simplicity we focused on the sum function, our techniques are directly applicable to multiple measures and functions. Consider, for instance, that queries inquire about the maximum number of phone-calls (during q_T) in some cell (intersecting q_R). Each intermediate entry r in the host and measure indexes must now store the maximum measure in its sub-tree (instead of the sum of measures). The query algorithms are exactly the same as in the case of sum for all the structures: if the extent and lifespan of r are contained in q_R and q_T, its max value is aggregated directly, and so on. In addition to distributive functions, the proposed techniques can also process algebraic functions (e.g., average), since they can be expressed as scalar functions of distributive functions (e.g., sum/count). Obviously, depending on the application needs, it is possible to have several measures (e.g., sum and max) associated with each entry.

Furthermore, it is easy to devise processing algorithms for alternative query types such as: "for every cell in the city center (i.e., q_R) find the total number of phone calls in the last hour (i.e., q_T)." In this case the result contains several tuples, one for each cell qualifying the spatial condition (i.e., similar to a group-by). Query processing must now continue until the leaf level of the host index (the measures of intermediate entries are not aggregated)—the host index acts as a conventional spatio-temporal index.

A final note concerns the interpretation of the results of spatio-temporal aggregate queries, which depends on the application semantics of R_i(t).ms. If, for instance, R_i(t).ms stores the number of mobile users (instead of initiated

phone-calls) in region R_i(t), the result should not be considered as the total number of users in q_R during q_T, since a user may be counted multiple times (if he/she stays in q_R for multiple timestamps). Tao et al. [2004] propose a method for duplicate elimination that combines spatio-temporal aggregation structures (e.g., the aRB-tree) with sketches [Flajolet and Martin 1985] based on probabilistic counting. Furthermore, note that our techniques, following the relevant literature [Jurgens and Lenz 1998; Papadias et al. 2001; Zhang et al. 2002; Govindarajan et al. 2003; Zhang et al. 2003], assume that if the query partially intersects a region, the entire measure of the region contributes to the query result. This is due to the fact that regions represent the highest resolution in the system. If additional information about the distribution of the objects within each region is available, we could take into account only the number of objects in the part of the region that intersects the query.

Spatio-temporal aggregation is a promising research area, combining various concepts of on-line analytical processing and multi-dimensional indexing, which is expected to play an important role in several emerging applications such as mobile computing and data streaming. A direction for future work includes supporting more complex spatio-temporal measures like the direction of movement. This will enable analysts to ask sophisticated queries in order to identify interesting numerical and spatial/temporal trends. The processing of such queries against the raw data is currently impractical considering the huge amounts of information involved in most spatio-temporal applications. Another topic worth studying concerns bulk updates, when a large number of regions issue updates synchronously (e.g., every timestamp). In this case, instead of processing each update individually, we could exploit specialized bulk loading techniques adapted to the current problem.

ACKNOWLEDGMENTS

A short version of this work appears in Papadias et al. [2002]. We would like to thank Panos Kalnis and Jun Zhang for several discussions that led to this paper.

REFERENCES

ACHARYA, S., POOSALA, V., AND RAMASWAMY, S. 1999. Selectivity estimation in spatial databases. In Proceedings of the ACM SIGMOD conference (June), 13–24.
AGARWAL, P., ARGE, L., AND ERICKSON, J. 2000. Indexing moving points. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (May), 175–186.
BARALIS, E., PARABOSCHI, S., AND TENIENTE, E. 1997. Materialized view selection in a multidimensional database. In Proceedings of Very Large Database Conference (VLDB) (August), 156–165.
BECKER, B., GSCHWIND, S., OHLER, T., SEEGER, B., AND WIDMAYER, P. 1996. An asymptotically optimal multiversion B-tree. The VLDB Journal 5, 4, 264–275.
BECKMANN, N., KRIEGEL, H., SCHNEIDER, R., AND SEEGER, B. 1990. The R*-tree: An efficient and robust access method for points and rectangles. In Proceedings of the ACM SIGMOD conference (May), 322–331.
CHOI, Y. AND CHUNG, C. 2002. Selectivity estimation for spatio-temporal queries to moving objects. In Proceedings of the ACM SIGMOD conference (June), 440–451.
DENNY, M., FRANKLIN, M., CASTRO, P., AND PURAKAYASTHA, A. 2003. Mobiscope: A scalable spatial discovery service for mobile network resources. In Proceedings of the 4th Mobile Data Management (MDM) (Jan.), 307–324.


FLAJOLET, P. AND MARTIN, G. 1985. Probabilistic counting algorithms for data base applications. J. Comput. Syst. Sci. 32, 2, 182–209.
FORLIZZI, L., GÜTING, R., NARDELLI, E., AND SCHNEIDER, M. 2000. A data model and data structures for moving objects databases. In Proceedings of the ACM SIGMOD conference (May), 319–330.
GAEDE, V. AND GÜNTHER, O. 1998. Multidimensional access methods. ACM Comput. Surv. 30, 2, 123–169.
GENDRANO, J., HUANG, B., RODRIGUE, J., MOON, B., AND SNODGRASS, R. 1999. Parallel algorithms for computing temporal aggregates. In Proceedings of International Conference on Database Engineering (ICDE), 418–427.
GOVINDARAJAN, S., AGARWAL, P., AND ARGE, L. 2003. CRB-Tree: An efficient indexing scheme for range aggregate queries. In Proceedings of International Conference on Database Theory (ICDT) (Jan.), 143–157.
GRAY, J., BOSWORTH, A., LAYMAN, A., AND PIRAHESH, H. 1996. Data cube: A relational aggregation operator generalizing group-by, cross-tabs and subtotals. In Proceedings of International Conference on Database Engineering (ICDE), 152–159.
GUPTA, H. 1997. Selection of views to materialize in a data warehouse. In Proceedings of International Conference on Database Theory (ICDT) (Jan.), 98–112.
GUPTA, H. AND MUMICK, I. 1999. Selection of views to materialize under a maintenance-time constraint. In Proceedings of International Conference on Database Theory (ICDT) (Jan.), 453–470.
GÜTING, R., BÖHLEN, M., ERWIG, M., JENSEN, C., LORENTZOS, N., SCHNEIDER, M., AND VAZIRGIANNIS, M. 2000. A foundation for representing and querying moving objects. ACM Tran. Datab. Syst. 25, 1, 1–42.
GUTTMAN, A. 1984. R-Trees: A dynamic index structure for spatial searching. In Proceedings of the ACM SIGMOD conference (June), 47–57.
HADJIELEFTHERIOU, M., KOLLIOS, G., AND TSOTRAS, V. 2003. Performance evaluation of spatiotemporal selectivity estimation techniques. In Proceedings of Statistical and Scientific Database Management (SSDBM) (July), 202–211.
HADJIELEFTHERIOU, M., KOLLIOS, G., TSOTRAS, V., AND GUNOPULOS, D. 2002. Efficient indexing of spatiotemporal objects. In Proceedings of Extending Data Base Technology (EDBT) (March), 251–268.
HAN, J., STEFANOVIC, N., AND KOPERSKI, K. 1998. Selective materialization: An efficient method for spatial data cube construction. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) (April), 144–158.
HARINARAYAN, V., RAJARAMAN, A., AND ULLMAN, J. 1996. Implementing data cubes efficiently. In Proceedings of the ACM SIGMOD conference (June), 205–216.
HURTADO, C., MENDELZON, A., AND VAISMAN, A. 1999. Maintaining data cubes under dimension updates. In Proceedings of International Conference on Database Engineering (ICDE) (March), 346–355.
JURGENS, M. AND LENZ, H. 1998. The Ra*-tree: An improved R-tree with materialized data for supporting range queries on OLAP-data. In Proceedings of International Workshop on Database and Expert Systems Applications (Aug.), 186–191.
KIMBALL, R. 1996. The Data Warehouse Toolkit. John Wiley.
KLINE, N. AND SNODGRASS, R. 1995. Computing temporal aggregates. In Proceedings of International Conference on Database Engineering (ICDE) (March), 222–231.
KOLLIOS, G., GUNOPULOS, D., AND TSOTRAS, V. 1999. On indexing mobile objects. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (May), 261–272.
KOLLIOS, G., GUNOPULOS, D., TSOTRAS, V., DELIS, A., AND HADJIELEFTHERIOU, M. 2001. Indexing animated objects using spatiotemporal access methods. Tran. Knowl. Data Eng. (TKDE), 13, 5, 758–777.
KUMAR, A., TSOTRAS, V., AND FALOUTSOS, C. 1998. Designing access methods for bitemporal databases. Tran. Knowl. Data Eng. (TKDE), 10, 1, 1–20.
KWON, D., LEE, S., AND LEE, S. 2002. Indexing the current positions of moving objects using the lazy update R-tree. In Proceedings of the 4th Mobile Data Management (MDM) (Jan.), 113–120.
LAZARIDIS, I. AND MEHROTRA, S. 2001. Progressive approximate aggregate queries with a multi-resolution tree structure. In Proceedings of the ACM SIGMOD conference (June), 401–412.


LEE, M., HSU, W., JENSEN, C., CUI, B., AND TEO, K. 2003. Supporting frequent updates in R-trees: A bottom-up approach. In Proceedings of Very Large Database Conference (VLDB) (Sep.), 608–619.
MENDELZON, A. AND VAISMAN, A. 2000. Temporal queries in OLAP. In Proceedings of Very Large Database Conference (VLDB) (Sep.), 242–253.
MOON, B., LOPEZ, I., AND IMMANUEL, V. 2000. Scalable algorithms for large temporal aggregation. In Proceedings of International Conference on Database Engineering (ICDE) (Feb.), 145–154.
PAGEL, B.U., SIX, H.W., TOBEN, H., AND WIDMAYER, P. 1993. Towards an analysis of range query performance in spatial data structures. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (May), 49–58.
PAPADIAS, D., KALNIS, P., ZHANG, J., AND TAO, Y. 2001. Efficient OLAP operations in spatial data warehouses. In Proceedings of the International Symposium on Spatial and Temporal Databases (SSTD) (July), 443–459.
PAPADIAS, D., TAO, Y., KALNIS, P., AND ZHANG, J. 2002. Indexing spatio-temporal data warehouses. In Proceedings of International Conference on Database Engineering (ICDE) (Feb.), 166–175.
PFOSER, D., JENSEN, C., AND THEODORIDIS, Y. 2000. Novel approaches to the indexing of moving object trajectories. In Proceedings of Very Large Database Conference (VLDB) (Sep.), 395–406.
SALTENIS, S. AND JENSEN, C. 2002. Indexing of moving objects for location-based services. In Proceedings of International Conference on Database Engineering (ICDE) (Feb.), 463–472.
SALTENIS, S., JENSEN, C., LEUTENEGGER, S., AND LOPEZ, M. 2000. Indexing the positions of continuously moving objects. In Proceedings of the ACM SIGMOD conference (June), 331–342.
SALZBERG, B. AND TSOTRAS, V. 1999. A comparison of access methods for temporal data. ACM Computing Surveys 31, 2, 158–221.
SHUKLA, A., DESHPANDE, P., AND NAUGHTON, J. 1998. Materialized view selection for multidimensional datasets. In Proceedings of Very Large Database Conference (VLDB) (Aug.), 488–499.
SISTLA, A., WOLFSON, O., CHAMBERLAIN, S., AND DAO, S. 1997. Modeling and querying moving objects. In Proceedings of International Conference on Database Engineering (ICDE) (April), 422–432.
STEFANOVIC, N., HAN, J., AND KOPERSKI, K. 2000. Object-based selective materialization for efficient implementation of spatial data cubes. Tran. Knowl. Data Eng. (TKDE), 12, 6, 938–958.
TAO, Y., KOLLIOS, G., CONSIDINE, J., LI, F., AND PAPADIAS, D. 2004. Spatio-temporal aggregation using sketches. In Proceedings of International Conference on Database Engineering (ICDE) (March), 214–226.
TAO, Y. AND PAPADIAS, D. 2001. The MV3R-tree: A spatio-temporal access method for timestamp and interval queries. In Proceedings of Very Large Database Conference (VLDB) (Sep.), 431–440.
TAO, Y. AND PAPADIAS, D. 2004. Performance analysis of R*-trees with arbitrary node extents. Tran. Knowl. Data Eng. (TKDE), 16, 6, 653–668.
TAO, Y., PAPADIAS, D., AND SUN, J. 2003a. The TPR*-tree: An optimized spatio-temporal access method for predictive queries. In Proceedings of Very Large Database Conference (VLDB) (Sep.), 790–801.
TAO, Y., PAPADIAS, D., AND ZHANG, J. 2002a. Cost models for overlapping and multi-version structures. ACM Tran. Datab. Syst. 27, 3, 299–342.
TAO, Y., PAPADIAS, D., AND ZHANG, J. 2002b. Efficient processing of planar points. In Proceedings of Extended Database Technology (EDBT) (March), 682–700.
TAO, Y., SUN, J., AND PAPADIAS, D. 2003b. Selectivity estimation for predictive spatio-temporal queries. ACM Tran. Datab. Syst. 28, 4, 295–336.
THEODORIDIS, Y. AND SELLIS, T. 1996. A model for the prediction of R-tree performance. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (June), 161–171.
THEODORIDIS, Y., STEFANAKIS, E., AND SELLIS, T. 2000. Efficient cost models for spatial queries using R-trees. Tran. Knowl. Data Eng. (TKDE), 12, 1, 19–32.
TIGER. http://www.census.gov/geo/www/tiger/.
VARMAN, P. AND VERMA, R. 1997. Optimal storage and access to multiversion data. Tran. Knowl. Data Eng. (TKDE), 9, 3, 391–409.
VAZIRGIANNIS, M., THEODORIDIS, Y., AND SELLIS, T. 1998. Spatio-temporal composition and indexing for large multimedia applications. Multimedia Systems, 6, 4, 284–298.
YANG, J. AND WIDOM, J. 2003. Incremental computation and maintenance of temporal aggregates. The VLDB Journal, 12, 3, 262–283.


YAO, S. 1978. Random 2-3 trees. Acta Informatica, 2, 9, 159–179.
ZHANG, D., GUNOPULOS, D., TSOTRAS, V., AND SEEGER, B. 2002. Temporal aggregation over data streams using multiple granularities. In Proceedings of Extended Database Technology (EDBT) (March), 646–663.
ZHANG, D., GUNOPULOS, D., TSOTRAS, V., AND SEEGER, B. 2003. Spatial and temporal aggregation over data streams using multiple granularities. Information Systems, 28, 1-2, 61–84.
ZHANG, D., MARKOWETZ, A., TSOTRAS, V., GUNOPULOS, D., AND SEEGER, B. 2001. Efficient computation of temporal aggregates with range predicates. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (May), 237–245.
ZHANG, D., TSOTRAS, V., AND GUNOPULOS, D. 2002. Efficient aggregation over objects with extent. In Proceedings of the ACM Symposium on Principles of Database Systems (PODS) (May), 121–132.

Received May 2003; revised May 2004; accepted August 2004


Incorporating Contextual Information in Recommender Systems Using a Multidimensional Approach

GEDIMINAS ADOMAVICIUS
University of Minnesota
RAMESH SANKARANARAYANAN
University of Connecticut
SHAHANA SEN
Fairleigh Dickinson University
and
ALEXANDER TUZHILIN
New York University

The article presents a multidimensional (MD) approach to recommender systems that can provide recommendations based on additional contextual information besides the typical information on users and items used in most of the current recommender systems. This approach supports multiple dimensions, profiling information, and hierarchical aggregation of recommendations. The article also presents a multidimensional rating estimation method capable of selecting two-dimensional segments of ratings pertinent to the recommendation context and applying standard collaborative filtering or other traditional two-dimensional rating estimation techniques to these segments. A comparison of the multidimensional and two-dimensional rating estimation approaches is made, and the tradeoffs between the two are studied. Moreover, the article introduces a combined rating estimation method, which identifies the situations where the MD approach outperforms the standard two-dimensional approach and uses the MD approach in those situations and the standard two-dimensional approach elsewhere. Finally, the article presents a pilot empirical study of the combined approach, using a multidimensional movie recommender system that was developed for implementing this approach and testing its performance.

Authors’ addresses: G. Adomavicius, Department of Information and Decision Sciences, Carlson School of Management, University of Minnesota, 321 19th Avenue South, Minneapolis, MN 55455; email: [email protected]; R. Sankaranarayanan, Department of Operations and Information Management, School of Business, University of Connecticut, 2100 Hillside Road, U-1041 OPIM, Storrs, CT 06269-1041; email: [email protected]; S. Sen, Department of Marketing and Entrepreneurship, Silberman College of Business, Fairleigh Dickinson University, 1000 River Road HDH2-07, Teaneck, NJ 07666; email: [email protected]; A. Tuzhilin, Department of Information, Operations and Management Sciences, Stern School of Business, New York University, 44 West 4th Street, New York, NY 10012; email: [email protected]. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 1515 Broadway, New York, NY 10036 USA, fax: +1 (212) 869-0481, or [email protected].  C 2005 ACM 1046-8188/05/0100-0103 $5.00 ACM Transactions on Information Systems, Vol. 23, No. 1, January 2005, Pages 103–145.


Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems—Human information processing; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Information filtering, Selection process

General Terms: Design, Algorithms, Experimentation, Performance

Additional Key Words and Phrases: Recommender systems, collaborative filtering, personalization, multidimensional recommender systems, context-aware recommender systems, rating estimation, multidimensional data models

1. INTRODUCTION AND MOTIVATION

There has been much work done in the area of recommender systems over the past decade since the introduction of the first papers on the subject [Resnick et al. 1994; Hill et al. 1995; Shardanand and Maes 1995]. Most of this work has focused on developing new methods of recommending items to users and vice versa, such as recommending movies to Web site visitors or recommending customers for books. These recommendation methods are usually classified into collaborative, content-based, and hybrid methods [Balabanovic and Shoham 1997] and are described in more detail in Section 2.

However, in many applications, such as recommending vacation packages, personalized content on a Web site, various products in an online store, or movies, it may not be sufficient to consider only users and items—it is also important to incorporate the contextual information of the user's decision scenario into the recommendation process. For example, in the case of personalizing content on a Web site, it is important to determine what content needs to be delivered (recommended) to a customer and when. More specifically, on weekdays a user might prefer to read world news in the morning and the stock market report in the evening, and on weekends she might prefer to read movie reviews and do shopping. As another example of the need for incorporating contextual information, a "smart" shopping cart providing real-time recommendations to shoppers using wireless location-based technologies [Wade 2003] needs to take into account not only information about products and customers but also such contextual information as shopping date/time, store, who accompanies the primary shopper, products already placed into the shopping cart and its location within the store. Again, a recommender system may recommend a different movie to a user depending on whether she is going to see it with her boyfriend on a Saturday night or with her parents on a weekday. In marketing, behavioral research on consumer decision making has established that decision making, rather than being invariant, is contingent upon the context of decision making; the same consumer may use different decision-making strategies and prefer different products or brands under different contexts [Lussier and Olshavsky 1979; Klein and Yadav 1989; Bettman et al. 1991]. Therefore, accurate prediction of consumer preferences undoubtedly depends upon the degree to which relevant contextual information is incorporated into a recommendation method.

To provide recommendations incorporating contextual information, we present a multidimensional recommendation model (MD model) that makes recommendations based on multiple dimensions and, therefore, extends the

Incorporating Contextual Information in Recommender Systems



105

classical two-dimensional (2D) Users × Items paradigm. The preliminary ideas of the MD model were introduced in an earlier workshop paper [Adomavicius and Tuzhilin 2001a]. In this article, we present the MD model in greater depth and also describe various additional aspects, including a formulation of the MD recommendation problem and an analysis of the rating aggregation capabilities of the MD model. Moreover, we present a particular rating estimation method for the MD model, and an algorithm that combines the MD and the 2D approaches and provides rating estimations based on the combined approach. Finally, we present a pilot study implementing and testing our methods on a movie recommendation application that takes into consideration multidimensional contextual information, such as when the movie was seen, with whom, and where.

Since traditional collaborative filtering systems assume homogeneity of context, they typically utilize all the collected ratings data to determine appropriate recommendations. In contrast, the main rating estimation method presented in the article follows a reduction-based approach, which uses only the ratings that pertain to the user-specified context in which a recommendation is made. For example, to recommend a movie to a person who wants to see it in a movie theater on a Saturday night, our method will use only the ratings of movies seen in movie theaters over the weekends, if it is determined from the data that the place and the time-of-week dimensions affect moviegoers' behavior. Moreover, this method combines some of the multi-strategy and local machine learning methods [Atkeson et al. 1997; Fan and Li 2003; Hand et al. 2001, Sections 6.3.2-6.3.3] with On-Line Analytical Processing (OLAP) [Kimball 1996; Chaudhuri and Dayal 1997] and marketing segmentation methods [Kotler 2003] to predict unknown ratings. We also show in the article that there is a tradeoff between using only the more pertinent data (ratings with the same or similar context) to calculate an unknown rating and having fewer data points available for this calculation: the tradeoff between greater pertinence and higher sparsity of the data. We also show how to achieve better recommendations by exploiting this tradeoff.

This article makes the following contributions. First, it presents the multidimensional recommendation model and discusses some of its properties and capabilities. Second, it proposes a multidimensional rating estimation method based on the reduction-based approach. Third, the article demonstrates that context matters—that the multidimensional approach, via the reduction-based method, can produce better rating estimations in some situations. In addition, the article demonstrates that context may matter in some cases and not in others—that the reduction-based approach outperforms the 2D approach on some contextual segments of the data and underperforms on others. Fourth, the article proposes a combined approach that identifies the contextual segments where the reduction-based approach outperforms the 2D approach and uses the reduction-based approach in these situations and the standard 2D approach in the rest. Fifth, we implement the combined approach and empirically demonstrate that it outperforms the standard 2D collaborative filtering approach on the multidimensional rating data that we collected from a Web site that we designed for this purpose.


In the rest of the article, we present the MD approach in Section 3, the MD rating estimation method in Section 4, and the empirical evaluation of the proposed approach in Section 5. We begin with a review of prior work on recommender systems in Section 2.

2. PRIOR WORK ON RECOMMENDER SYSTEMS

Traditionally, recommender systems deal with applications that have two types of entities, users and items. The recommendation process starts with the specification of the initial set of ratings, which is either explicitly provided by the users or is implicitly inferred by the system. For example, in the case of a movie recommender system, John Doe may assign a rating of 7 (out of 13) for the movie "Gladiator"—set R_movie(John Doe, Gladiator) = 7. Once these initial ratings are specified, a recommender system tries to estimate the rating function R

    R : Users × Items → Ratings                                        (1)

for the (user, item) pairs that have not yet been rated by the users. Conceptually, once function R is estimated for the whole Users × Items domain, a recommender system can select the item i′_u with the highest rating (or a set of k highest-rated items) for user u and recommend that item (or items) to the user:

    ∀u ∈ Users,   i′_u = arg max_{i∈Items} R(u, i).                     (2)
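To make (2) concrete, the following sketch selects the top-k items for a user from an already-estimated rating function; the dictionary of estimated ratings and the item names are hypothetical placeholders, not part of the original formulation.

```python
from typing import Dict, Tuple, List

def top_k_items(estimated: Dict[Tuple[str, str], float], user: str, k: int = 1) -> List[str]:
    """Return the k items with the highest estimated rating R(u, i) for the given user."""
    candidates = [(item, r) for (u, item), r in estimated.items() if u == user]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in candidates[:k]]

# Hypothetical estimated ratings, for illustration only.
R_hat = {("John Doe", "Gladiator"): 7.0, ("John Doe", "Memento"): 9.2, ("Jane Doe", "Gladiator"): 5.5}
print(top_k_items(R_hat, "John Doe", k=2))   # ['Memento', 'Gladiator']
```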

In practice, however, the unknown ratings do not have to be estimated for the whole Users × Items space beforehand, since this can be a very expensive task for large domains of users and items. Instead, various methods have been developed for finding efficient solutions to (2) that require smaller computational effort, for example, as described in Goldberg et al. [2001] and Sarwar et al. [2001]. According to Balabanovic and Shoham [1997], the approaches to recommender systems are usually classified as content-based, collaborative, and hybrid, and we review them in the rest of this section.

Content-Based Recommender Systems. In content-based recommendation methods, the rating R(u, i) of item i for user u is typically estimated based on the ratings R(u, i′) assigned by the same user u to other items i′ ∈ Items that are "similar" to item i in terms of their content. For example, in a movie recommendation application, in order to recommend movies to user u, the content-based recommender system tries to understand user preferences by analyzing commonalities among the content of the movies user u has rated highly in the past. Then, only the movies that have a high degree of similarity to the user's preferences would be recommended.

More formally, let Content(i) be the set of attributes characterizing item i. It is usually computed by extracting a set of features from item i (its content) and is used to determine the appropriateness of the item for recommendation purposes. Since many content-based systems are designed for recommending text-based items, including Web pages and Usenet news messages, the content in these
systems is usually described with keywords, as is done in the Fab [Balabanovic and Shoham 1997] and the Syskill and Webert [Pazzani and Billsus 1997] recommender systems. The "importance" of a keyword is determined with some weighting measure that can be defined using various measures from information retrieval [Salton 1989; Baeza-Yates and Ribeiro-Neto 1999], such as the term frequency/inverse document frequency (TF-IDF) measure [Salton 1989].

In addition, we need to define a profile of user u. Since many content-based systems deal with recommending text-based items, these user profiles are also often defined in terms of weights of important keywords. In other words, ContentBasedProfile(u) for user u can be defined as a vector of weights W_u = (w_{u1}, . . . , w_{uk}), where each weight w_{ui} denotes the importance of keyword k_i to user u and can also be specified using various information retrieval metrics, including the TF-IDF measure [Lang 1995; Pazzani and Billsus 1997]. In content-based recommender systems the rating function R(u, i) is usually defined as

    R(u, i) = score(ContentBasedProfile(u), Content(i)).               (3)
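For the common case where both the profile and the item content are keyword-weight vectors, the score in (3) is often the cosine of the two vectors. A minimal sketch follows; the keyword weights below are invented for illustration, and any TF-IDF weighting scheme could be used to produce them.

```python
import math
from typing import Dict

def cosine_score(profile: Dict[str, float], content: Dict[str, float]) -> float:
    """Cosine similarity between a user's keyword-weight profile and an item's keyword weights."""
    shared = set(profile) & set(content)
    dot = sum(profile[k] * content[k] for k in shared)
    norm_u = math.sqrt(sum(w * w for w in profile.values()))
    norm_i = math.sqrt(sum(w * w for w in content.values()))
    return dot / (norm_u * norm_i) if norm_u and norm_i else 0.0

# Hypothetical TF-IDF-style weights.
profile = {"politics": 0.8, "finance": 0.5, "sports": 0.1}
content = {"finance": 0.9, "markets": 0.7}
print(round(cosine_score(profile, content), 3))   # score of the item against the user's profile
```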

In case ContentBasedProfile(u) and Content(i) are defined as vectors of keyword weights W_u and W_i, as is usually done for recommending Web pages, Usenet messages, and other kinds of textual documents, the rating function R(u, i) is usually represented in the information retrieval literature by some scoring heuristic defined in terms of vectors W_u and W_i, such as the cosine similarity measure [Salton 1989; Baeza-Yates and Ribeiro-Neto 1999]. Besides the traditional heuristics that are based mostly on information retrieval methods, other techniques for content-based recommendations have been used, such as Bayesian classifiers [Pazzani and Billsus 1997; Mooney et al. 1998] and various machine learning techniques, including clustering, decision trees, and artificial neural networks [Pazzani and Billsus 1997]. These techniques differ from information retrieval-based approaches in that they calculate estimated ratings based not on a heuristic formula, such as the cosine similarity measure, but rather on a model learned from the underlying data using statistical learning and machine learning techniques. For example, based on a set of Web pages that were rated as "relevant" or "irrelevant" by the user, Pazzani and Billsus [1997] use the naïve Bayesian classifier [Duda et al. 2001] to classify unrated Web pages.

As was observed in Shardanand and Maes [1995] and Balabanovic and Shoham [1997], content-based recommender systems have several limitations. Specifically, content-based recommender systems have only limited content analysis capabilities [Shardanand and Maes 1995]. In other words, such recommender systems are most useful in domains where content information can be extracted automatically (e.g., using various feature extraction methods on textual data) or where it has been provided manually (e.g., information about movies). It would be much more difficult to use such systems to recommend, say, multimedia items (e.g., audio and video streams) that are not manually "annotated" with content information. Content-based recommender systems can also suffer from over-specialization, since, by design, the user is being recommended
only the items that are similar to the ones she rated highly in the past. However, in certain cases, items should not be recommended if they are too similar to something the user has already seen, such as a different news article describing the same event. Therefore, some content-based recommender systems, such as DailyLearner [Billsus and Pazzani 2000], filter out high-relevance items if they are too similar to something the user has seen before. Finally, the user has to rate a sufficient number of items before a content-based recommender system can really understand her preferences and present reliable recommendations. This is often referred to as the new user problem, since a new user, having very few ratings, often is not able to get accurate recommendations. Some of the problems of the content-based methods described above can be remedied using the collaborative methods presented below.

Collaborative Recommender Systems. Traditionally, many collaborative recommender systems try to predict the rating of an item for a particular customer based on how other customers previously rated the same item. More formally, the rating R(u, i) of item i for user u is estimated based on the ratings R(u′, i) assigned to the same item i by those users u′ who are "similar" to user u. There have been many collaborative systems developed in academia and industry since the development of the first systems, such as GroupLens [Resnick et al. 1994; Konstan et al. 1997], Video Recommender [Hill et al. 1995], and Ringo [Shardanand and Maes 1995], that used collaborative filtering algorithms to automate the recommendation process. Other examples of collaborative recommender systems include the book recommendation system from Amazon.com, MovieCritic that recommends movies on the Web, the PHOAKS system that helps people find relevant information on the Web [Terveen et al. 1997], and the Jester system that recommends jokes [Goldberg et al. 2001].

According to Breese et al. [1998], algorithms for collaborative recommendations can be grouped into two general classes: memory-based (or heuristic-based) and model-based. Memory-based algorithms [Resnick et al. 1994; Shardanand and Maes 1995; Breese et al. 1998; Nakamura and Abe 1998; Delgado and Ishii 1999] are heuristics that make rating predictions based on the entire collection of items previously rated by the users. That is, the value of the unknown rating r_{u,i} for user u and item i is usually computed as an aggregate of the ratings of some other (e.g., the N most similar) users for the same item i:

    r_{u,i} = aggr_{u′∈Û} r_{u′,i},                                     (4)

where Û denotes the set of N users that are the most similar to user u and who have rated item i (N can range anywhere from 1 to the number of all users). Some examples of the aggregation function are:

    (a) r_{u,i} = (1/N) Σ_{u′∈Û} r_{u′,i},
    (b) r_{u,i} = k Σ_{u′∈Û} sim(u, u′) × r_{u′,i},                      (5)
    (c) r_{u,i} = r̄_u + k Σ_{u′∈Û} sim(u, u′) × (r_{u′,i} − r̄_{u′}),

where multiplier k serves as a normalizing factor and is usually selected as k = 1/Σ_{u′∈Û} |sim(u, u′)|, and where the average rating of user u, r̄_u, in (5c) is defined as r̄_u = (1/|S_u|) Σ_{i∈S_u} r_{u,i}, where S_u = {i | r_{u,i} ≠ ε} (we use the notation r_{u,i} = ε to indicate that item i has not been rated by user u). The similarity measure between users u and u′, sim(u, u′), determines the "distance" between users u and u′ and is used as a weight for the ratings r_{u′,i}: the more similar users u and u′ are, the more weight rating r_{u′,i} will carry in the prediction of r_{u,i}.

Various approaches have been used to compute the similarity measure sim(u, u′) between users in collaborative recommender systems. In most of these approaches, sim(u, u′) is based on the ratings of items that both users u and u′ have rated. The two most popular approaches are the correlation-based approach [Resnick et al. 1994; Shardanand and Maes 1995]:

    sim(x, y) = Σ_{s∈S_{xy}} (r_{x,s} − r̄_x)(r_{y,s} − r̄_y) / ( √(Σ_{s∈S_{xy}} (r_{x,s} − r̄_x)²) · √(Σ_{s∈S_{xy}} (r_{y,s} − r̄_y)²) ),        (6)

and the cosine-based approach [Breese et al. 1998; Sarwar et al. 2001]:

    sim(x, y) = cos(X, Y) = (X · Y) / (‖X‖₂ × ‖Y‖₂) = Σ_{s∈S_{xy}} r_{x,s} r_{y,s} / ( √(Σ_{s∈S_{xy}} r²_{x,s}) · √(Σ_{s∈S_{xy}} r²_{y,s}) ),        (7)

where r_{x,s} and r_{y,s} are the ratings of item s assigned by users x and y respectively, S_{xy} = {s ∈ Items | r_{x,s} ≠ ε ∧ r_{y,s} ≠ ε} is the set of all items co-rated by both users x and y, and X · Y denotes the dot-product of the rating vectors X and Y of the respective users.
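As a minimal sketch of how (5c) and (6) fit together in a memory-based predictor, the following code computes correlation-based similarities over co-rated items and a normalized, mean-adjusted weighted-sum prediction. The ratings dictionary is hypothetical, and a missing rating is simply an absent key (the ε of the text).

```python
from typing import Dict

Ratings = Dict[str, Dict[str, float]]   # user -> {item: rating}; absent keys stand in for epsilon

def mean(vals):
    return sum(vals) / len(vals)

def pearson_sim(ratings: Ratings, x: str, y: str) -> float:
    """Correlation-based similarity in the spirit of equation (6), over co-rated items."""
    common = set(ratings[x]) & set(ratings[y])
    if len(common) < 2:
        return 0.0
    rx_bar = mean(list(ratings[x].values()))
    ry_bar = mean(list(ratings[y].values()))
    num = sum((ratings[x][s] - rx_bar) * (ratings[y][s] - ry_bar) for s in common)
    den_x = sum((ratings[x][s] - rx_bar) ** 2 for s in common) ** 0.5
    den_y = sum((ratings[y][s] - ry_bar) ** 2 for s in common) ** 0.5
    return num / (den_x * den_y) if den_x and den_y else 0.0

def predict(ratings: Ratings, u: str, i: str) -> float:
    """Adjusted weighted sum of equation (5c) over the users who have rated item i."""
    u_bar = mean(list(ratings[u].values()))
    neighbors = [(v, pearson_sim(ratings, u, v)) for v in ratings if v != u and i in ratings[v]]
    norm = sum(abs(s) for _, s in neighbors)          # k = 1 / sum of |sim|
    if norm == 0:
        return u_bar
    v_means = {v: mean(list(ratings[v].values())) for v, _ in neighbors}
    return u_bar + sum(s * (ratings[v][i] - v_means[v]) for v, s in neighbors) / norm

# Hypothetical ratings on a 13-point scale, echoing the paper's movie example.
data = {"John": {"Gladiator": 7, "Memento": 10, "Alien": 4},
        "Jane": {"Gladiator": 8, "Memento": 11, "Titanic": 6},
        "Bob":  {"Gladiator": 3, "Alien": 9, "Titanic": 5}}
print(round(predict(data, "John", "Titanic"), 2))   # predicted rating for an unrated pair
```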

Many performance-improving modifications, such as default voting, inverse user frequency, case amplification [Breese et al. 1998], and weighted-majority prediction [Nakamura and Abe 1998; Delgado and Ishii 1999], have been proposed as extensions to these standard correlation-based and cosine-based techniques. Moreover, Aggarwal et al. [1999] propose a graph-theoretic approach to collaborative filtering where similarities between users are calculated in advance and are stored as a directed graph. Also, while the above techniques have traditionally been used to compute similarities between users, Sarwar et al. [2001] proposed to use the same correlation-based and cosine-based techniques to compute similarities between items instead, and to obtain ratings from them. Furthermore, Sarwar et al. [2001] argue that item-based algorithms can provide better computational performance than traditional user-based collaborative methods, while at the same time also providing better quality than the best available user-based algorithms.

In contrast to memory-based methods, model-based algorithms [Breese et al. 1998; Billsus and Pazzani 1998; Ungar and Foster 1998; Chien and George 1999; Getoor and Sahami 1999; Goldberg et al. 2001] use the collection of ratings to learn a model, which is then used to make rating predictions. Therefore, in comparison to model-based methods, the memory-based algorithms can be
thought of as "lazy learning" methods in the sense that they do not build a model but instead perform the heuristic computations at the time recommendations are sought. One example of a model-based recommendation technique is presented in Breese et al. [1998], where a probabilistic approach to collaborative filtering is proposed and the unknown ratings are calculated as

    r_{u,i} = E(r_{u,i}) = Σ_{x=0}^{n} x × Pr(r_{u,i} = x | r_{u,i′}, i′ ∈ S_u),        (8)

where it is assumed that rating values are integers between 0 and n, and the probability expression is the probability that user u will give a particular rating to item i given the previous ratings of the items rated by user u.
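Equation (8) is simply an expectation over a predicted rating distribution; a tiny sketch follows, where the distribution itself is a hypothetical placeholder for whatever cluster or Bayesian-network model produces it.

```python
from typing import Dict

def expected_rating(dist: Dict[int, float]) -> float:
    """E[r] = sum over x of x * Pr(r = x), per equation (8)."""
    return sum(x * p for x, p in dist.items())

# Hypothetical predicted distribution over an integer rating scale 0..5.
pr = {0: 0.05, 1: 0.05, 2: 0.10, 3: 0.30, 4: 0.35, 5: 0.15}
print(round(expected_rating(pr), 2))   # 3.3
```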

To estimate this probability, Breese et al. [1998] propose two alternative collaborative probabilistic models: cluster models and Bayesian networks. Moreover, Billsus and Pazzani [1998] proposed a collaborative filtering method in a machine learning framework, where various machine learning techniques (such as artificial neural networks) coupled with feature extraction techniques (such as singular value decomposition, an algebraic technique for reducing the dimensionality of matrices) can be used. Dimensionality reduction techniques for recommender systems were also studied in Sarwar et al. [2000]. Furthermore, both Breese et al. [1998] and Billsus and Pazzani [1998] compare their respective model-based approaches with standard memory-based approaches and report that model-based methods can in some instances outperform memory-based approaches in terms of accuracy of recommendations.

There have been several other model-based collaborative recommendation approaches proposed in the literature. A statistical model for collaborative filtering was proposed in Ungar and Foster [1998], and several different algorithms for estimating the model parameters were compared, including K-means clustering and Gibbs sampling. Other methods for collaborative filtering include a Bayesian model [Chien and George 1999], a probabilistic relational model [Getoor and Sahami 1999], and a linear regression [Sarwar et al. 2001]. Also, a method combining both memory-based and model-based approaches was proposed in Pennock and Horvitz [1999], where it was empirically demonstrated that the use of this combined approach can provide better recommendations than pure memory-based and model-based approaches. Furthermore, Kumar et al. [2001] use a simple probabilistic model for collaborative filtering to demonstrate that recommender systems can be valuable even with little data on each user, and that simple algorithms can be almost as effective as the best possible ones in terms of utility.

Although pure collaborative recommender systems do not have some of the shortcomings of the content-based systems described earlier, such as limited content analysis or over-specialization, they do have other limitations [Balabanovic and Shoham 1997; Lee 2001]. In addition to the new user problem (the same issue as in content-based systems), collaborative recommender systems also tend to suffer from the new item problem, since they rely solely on rating data to make recommendations. Therefore, the recommender system
would not be able to recommend a new item until it is rated by a substantial number of users. The sparsity of ratings is another important problem that collaborative recommender systems frequently face, since the number of user-specified ratings is usually very small compared to the number of ratings that need to be predicted. For example, in a movie recommendation system there may be many movies that have been rated by only a few people, and these movies would be recommended very rarely, even if those few users gave them high ratings. Also, for a user whose tastes are unusual compared to the rest of the population, there may not be any other users who are particularly similar, leading to poor recommendations [Balabanovic and Shoham 1997]. One way to deal with the sparsity problem is presented in Huang et al. [2004], where this problem is addressed by applying an associative retrieval framework and related spreading activation algorithms to explore transitive associations among users through their past transactions and feedback. Moreover, some researchers proposed to combine the content-based and collaborative approaches into a hybrid approach to deal with this and the other problems described above.

Hybrid Recommender Systems. Content-based and collaborative methods can be combined into a hybrid approach in several different ways [Balabanovic and Shoham 1997; Basu et al. 1998; Ungar and Foster 1998; Claypool et al. 1999; Soboroff and Nicholas 1999; Pazzani 1999; Tran and Cohen 2000]. Many hybrid recommender systems, including Fab [Balabanovic and Shoham 1997] and the "collaboration via content" approach described in Pazzani [1999], combine collaborative and content-based approaches by (1) learning and maintaining user profiles based on content analysis using various information retrieval methods and/or other content-based techniques, and (2) directly comparing the resulting profiles to determine similar users in order to make collaborative recommendations. This means that users can be recommended items either when the items score highly against the user's profile or when they are rated highly by a user with a similar profile. Basu et al. [1998] follow a similar approach and propose to use such customer profiling information as the age and gender of users, and such content-based information as the genre of movies, to aid collaborative filtering predictions. Also, Soboroff and Nicholas [1999] propose using the latent semantic indexing technique to incorporate and conveniently rearrange the collaborative information, such as the collection of user profiles, in the content-based recommendation framework. This enables comparison of items (e.g., textual documents) and user profiles in a unified model; as a result, the commonalities between users can be exploited in the content filtering task.

Another approach to building hybrid recommender systems is to implement separate collaborative and content-based recommender systems. Then we can have two different scenarios. First, we can combine the outputs (ratings) obtained from the individual recommender systems into one final recommendation using either a linear combination of ratings [Claypool et al. 1999] or a voting scheme [Pazzani 1999]. Alternatively, we can use one of the individual recommender systems, at any given moment choosing the one that is "better" than the others based on some recommendation quality metric. For example, the
DailyLearner system [Billsus and Pazzani 2000] selects the recommender system that can give the recommendation with the higher level of confidence, while Tran and Cohen [2000] choose the one whose recommendation is more consistent with past ratings of the user. Yet another hybrid approach to recommendations is used by Condliff et al. [1999] and Ansari et al. [2000], where, instead of combining collaborative and content-based methods, the authors propose to use information about both users and items in a single recommendation model. Both Condliff et al. [1999] and Ansari et al. [2000] use Bayesian mixed-effects regression models that employ Markov chain Monte Carlo methods for parameter estimation and prediction. Finally, it was demonstrated in Balabanovic and Shoham [1997] and Pazzani [1999] that hybrid methods can provide more accurate recommendations than pure collaborative and content-based approaches. In addition, various factors affecting the performance of recommender systems, including product domain, user characteristics, user search mode, and the number of users, were studied in Im and Hars [2001].

All of the approaches described in this section focus on recommending items to users or users to items and do not take into consideration additional contextual information, such as time, place, the company of other people, and the other factors described in Section 1 that affect recommendation experiences. Moreover, the recommendation methods are hard-wired into these recommender systems and provide only particular types of recommendations. To address these issues, Adomavicius and Tuzhilin [2001a] proposed a multidimensional approach to recommendations where the traditional two-dimensional user/item paradigm was extended to support additional dimensions capturing the context in which recommendations are made. This multidimensional approach is based on the multidimensional data model used for data warehousing and On-Line Analytical Processing (OLAP) applications in databases [Kimball 1996; Chaudhuri and Dayal 1997], on hierarchical aggregation capabilities, and on user, item, and other profiles defined for each of these dimensions. Moreover, Adomavicius and Tuzhilin [2001a] also describe how the standard multidimensional OLAP model is adjusted when applied to recommender systems. Finally, to provide more extensive and flexible types of recommendations that can be requested by the user on demand, Adomavicius and Tuzhilin [2001a] present a Recommendation Query Language (RQL) that allows users to express complex recommendations that can take into account multiple dimensions, aggregation hierarchies, and extensive profiling information.

The usage of contextual information in recommender systems can also be traced to Herlocker and Konstan [2001], who argue that the inclusion of knowledge about the user's task into the recommendation algorithm can, in certain applications, lead to better recommendations. For example, if we want to recommend books as gifts for a child, then, according to Herlocker and Konstan [2001], we might want to use several books that the child already has and likes and use this information in calculating new recommendations. However, this approach is still two-dimensional (i.e., it uses only the User and Item dimensions), since the task specification consists of a list of sample items; no additional contextual dimensions are used. Nevertheless, this approach was successful in illustrating the
value of incorporating additional information into the standard collaborative filtering paradigm.

Our proposal to incorporate other dimensions into recommender systems is in line with the research on consumer decision making by behavioral researchers, who have established that decision making, rather than being invariant, is contingent on the context. The same consumer may use different decision-making strategies and prefer different products or brands in different contexts [Bettman et al. 1991]. According to Lilien et al. [1992], "consumers vary in their decision-making rules because of the usage situation, the use of the good or service (for family, for gift, for self) and purchase situation (catalog sale, in-store shelf selection, salesperson aided purchase)." Therefore, accurate prediction of consumer preference undoubtedly depends upon the degree to which the relevant contextual information (e.g., usage situation, purchase situation, who it is for) is incorporated in a recommender system.

3. MULTIDIMENSIONAL RECOMMENDATION MODEL (MD MODEL)

In this section, we describe the multidimensional (MD) recommendation model supporting multiple dimensions, hierarchies, and aggregate ratings. We describe these concepts below.

3.1 Multiple Dimensions

As stated before, the MD model provides recommendations not only over the User × Item dimensions, as the classical (2D) recommender systems do, but over several dimensions, such as User, Item, Time, Place, and so on. When considering multiple dimensions, we will follow the multidimensional data model used for data warehousing and OLAP applications in databases [Kimball 1996; Chaudhuri and Dayal 1997]. Formally, let D1, D2, . . . , Dn be dimensions, each dimension Di being a subset of a Cartesian product of some attributes (or fields) A_ij (j = 1, . . . , k_i), i.e., Di ⊆ A_i1 × A_i2 × · · · × A_ik_i, where each attribute defines a domain (or a set) of values. Moreover, one or several attributes form a key: they uniquely define the rest of the attributes [Ramakrishnan and Gehrke 2000]. In some cases a dimension can be defined by a single attribute, and k_i = 1 in such cases (to simplify the presentation, we will sometimes not distinguish between "dimension" and "attribute" for single-attribute dimensions and will use these terms interchangeably when it is clear from the context).

For example, consider the three-dimensional recommendation space User × Item × Time, where the User dimension is defined as User ⊆ UName × Address × Income × Age and consists of a set of users having certain names, addresses, incomes, and ages. Similarly, the Item dimension is defined as Item ⊆ IName × Type × Price and consists of a set of items defined by their names, types, and prices. Finally, the Time dimension can be defined as Time ⊆ Year × Month × Day and consists of a list of days from the starting to the ending date (e.g., from January 1, 2003 to December 31, 2003). The set of attributes describing a particular dimension is sometimes called a profile. For
example, the user's name, address, income, and age would constitute the user's profile, and the item's name, type, and price would constitute the item's profile. (It is possible to have more complex profiles that are based on rules and sequences—these types of profiles are sometimes called factual [Adomavicius and Tuzhilin 2001b]. There are other more complex profiles based on rules, sequences, signatures, and other profiling methods [Adomavicius and Tuzhilin 2001b], as will be mentioned in Section 6; however, we do not currently use them in our MD approach.)

Given dimensions D1, D2, . . . , Dn, we define the recommendation space for these dimensions as the Cartesian product S = D1 × D2 × · · · × Dn. Moreover, let Ratings be a rating domain representing the ordered set of all possible rating values. Then the rating function is defined over the space D1 × · · · × Dn as

    R : D_1 × · · · × D_n → Ratings.                                    (9)

For instance, continuing the User × Item × Time example considered above, we can define a rating function R on the recommendation space User × Item × Time specifying how much user u ∈ User liked item i ∈ Item at time t ∈ Time, R(u, i, t). For example, John Doe rated a vacation that he took at the Almond Resort Club on Barbados on January 7–14, 2003 as 6 (out of 7). Visually, ratings R(d_1, . . . , d_n) on the recommendation space S = D1 × D2 × · · · × Dn can be stored in a multidimensional cube, such as the one shown in Figure 1. For example, the cube in Figure 1 stores ratings R(u, i, t) for the recommendation space User × Item × Time, where the three tables define the sets of users, items, and times associated with the User, Item, and Time dimensions respectively. For example, rating R(101, 7, 1) = 6 in Figure 1 means that for the user with User ID 101 and the item with Item ID 7, rating 6 was specified during the weekday.

The rating function R in (9) is usually defined as a partial function, where the initial set of ratings is either explicitly specified by the user or is inferred from the application [Konstan et al. 1997; Caglayan et al. 1997; Oard and Kim 2001]. Then one of the central problems in recommender systems is to estimate the unknown ratings—make the rating function R total. In the multidimensional model of recommender systems this rating estimation problem has its caveats, which will be described in Section 4 after we present other parts of the multidimensional model. Therefore, in the rest of Section 3 we assume that the unknown values of the rating function have already been estimated and that R is already a total function defined on the whole recommendation space.

Given the recommendation space S = D1 × D2 × · · · × Dn and the rating function (9), the recommendation problem is defined by selecting certain "what" dimensions D_{i1}, . . . , D_{ik} (k < n) and certain "for whom" dimensions D_{j1}, . . . , D_{jl} (l < n) that do not overlap, i.e., {D_{i1}, . . . , D_{ik}} ∩ {D_{j1}, . . . , D_{jl}} = ∅, and recommending for each tuple (d_{j1}, . . . , d_{jl}) ∈ D_{j1} × · · · × D_{jl} the tuple (d_{i1}, . . . , d_{ik}) ∈ D_{i1} × · · · × D_{ik} that maximizes the rating R(d_1, . . . , d_n), i.e.,

    ∀(d_{j1}, . . . , d_{jl}) ∈ D_{j1} × · · · × D_{jl}:
    (d_{i1}, . . . , d_{ik}) = arg max_{(d′_{i1}, . . . , d′_{ik}) ∈ D_{i1} × · · · × D_{ik}, (d′_{j1}, . . . , d′_{jl}) = (d_{j1}, . . . , d_{jl})} R(d′_1, . . . , d′_n).        (10)
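To ground (10), the sketch below stores a small ratings cube as a dictionary keyed by (user, item, time) and answers a query of the form "best item for a given (user, time) pair." The IDs follow the Figure 1 example (user 101, item 7, weekday rating 6); the remaining entries and the time encoding are invented for illustration, and in practice the cube would first be completed by a rating estimation method.

```python
from typing import Dict, Tuple, Optional

# Ratings cube for the space User x Item x Time, keyed by (UserID, ItemID, TimeID).
# In this toy encoding, TimeID 1 denotes "weekday" and TimeID 2 denotes "weekend".
cube: Dict[Tuple[int, int, int], float] = {
    (101, 7, 1): 6.0,   # the example rating from Figure 1
    (101, 3, 1): 4.0,   # the entries below are hypothetical
    (101, 7, 2): 8.0,
    (102, 7, 1): 5.0,
}

def best_item_for(user: int, time: int) -> Optional[int]:
    """'What' dimension = Item; 'for whom' dimensions = (User, Time), as in equation (10)."""
    candidates = {item: r for (u, item, t), r in cube.items() if u == user and t == time}
    return max(candidates, key=candidates.get) if candidates else None

print(best_item_for(101, 1))   # 7: the highest-rated item for user 101 on a weekday
```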


Fig. 1. Multidimensional model for the User × Item × Time recommendation space.

For example, consider the application that recommends movies to the users and that has the following dimensions:

- Movie: represents all the movies that can be recommended in a given application; it is defined by the attributes Movie(MovieID, Name, Studio, Director, Year, Genre, MainActors).
- Person: represents all the people for whom movies are recommended in an application; it is defined by the attributes Person(UserID, Name, Address, Age, Occupation, etc.).
- Place: represents the places where the movie can be seen. Place consists of a single attribute defining the listing of movie theaters and also the choices of the home TV, VCR, and DVD.
- Time: represents the time when the movie can be or has been seen; it is defined by the attributes Time(TimeOfDay, DayOfWeek, Month, Year).
- Companion: represents a person or a group of persons with whom one can see the movie. Companion consists of a single attribute having the values "alone," "friends," "girlfriend/boyfriend," "family," "co-workers," and "others."

Then the rating assigned to a movie by a person also depends on where and how the movie has been seen, with whom, and at what time. For example, the type of movie to recommend to college student Jane Doe can differ significantly depending on whether she is planning to see it on a Saturday night with her boyfriend vs. on a weekday with her parents. Some additional applications
where multidimensional recommendations are useful were mentioned in Section 1 and include personalized Web content presentation (having the recommendation space S = User × Content × Time) and a "smart" shopping cart (having the recommendation space S = Customer × Product × Time × Store × Location).

An important question is what dimensions should be included in a multidimensional recommendation model. This issue is related to the problem of feature selection that has been extensively addressed in machine learning [Koller and Sahami 1996], data mining [Liu and Motoda 1998], and statistics [Chatterjee et al. 2000]. To understand the issues involved, consider a simple case of a single-attribute dimension X having two possible values, X = h and X = t. If the distributions of ratings for X = h and X = t were the same, then dimension X would not matter for recommendation purposes. For example, assume that the single-attribute Place dimension has only two values: Theater and Home. Also assume that the ratings given to the movies watched in the movie theater (Place = Theater) have the same distribution as the ratings for the movies watched at home (Place = Home). This means that the place where movies are watched (at home or in a movie theater) does not affect movie-watching experiences and, therefore, the Place dimension can be removed from the MD model. There is a rich body of work in statistics that tests whether two distributions or their moment generating functions are equal [Kachigan 1986], and this work can be applied to our case to determine which dimensions should be kept in the MD model (a small sketch of such a screening test appears at the end of this subsection). Finally, this example can easily be extended to situations where the attribute (or the whole dimension) has data types other than binary.

While most traditional recommender systems provide recommendations of only one particular type ("recommend top N items to a user"), multidimensional recommender systems offer many more possibilities, including recommending more than one dimension, as expressed in (10). For example, in the personalized Web content application described above, one could ask for the "best N user/time combinations to recommend for each item," or the "best N items to recommend for each user/time combination," or the "best N times to recommend for each user/item combination," and so on. Similarly, in the movie recommender system, one can recommend a movie and a place to see it to a person at a certain time (e.g., tomorrow, Joe should see "Harry Potter" in a movie theater). Alternatively, one can recommend a place and a time to see a movie to a person with a companion (e.g., Jane and her boyfriend should see "About Schmidt" on a DVD at home over the weekend). These examples demonstrate that recommendations in the multidimensional case can be significantly more complex than in the classical 2D case. Therefore, there is a need for a special language to express such recommendations; Adomavicius and Tuzhilin [2001a] propose a recommendation query language, RQL, for the purpose of expressing different types of multidimensional recommendations. However, coverage of the RQL language is outside the scope of this article, and the reader is referred to Adomavicius and Tuzhilin [2001a] to learn more about it.
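As a concrete illustration of the dimension-screening idea mentioned above, one simple check is a two-sample test comparing the rating distributions under the two values of a candidate contextual attribute. The sketch below assumes SciPy is available, uses fabricated ratings, and applies a Kolmogorov-Smirnov test; any suitable statistical test could be substituted.

```python
from scipy.stats import ks_2samp

# Hypothetical movie ratings split by the single-attribute Place dimension.
theater_ratings = [9, 8, 10, 7, 9, 8, 11, 10, 9, 8]
home_ratings    = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]

stat, p_value = ks_2samp(theater_ratings, home_ratings)
# A large p-value suggests the two distributions look the same, so Place adds little
# and could be dropped from the MD model; a small p-value suggests the dimension matters.
print(f"KS statistic = {stat:.2f}, p-value = {p_value:.4f}")
keep_place_dimension = p_value < 0.05
print("keep Place dimension:", keep_place_dimension)
```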


3.2 Aggregation Capabilities

One of the key characteristics of multidimensional databases is the ability to store measurements (such as sales numbers) in multidimensional cubes, to support aggregation hierarchies for different dimensions (e.g., a hierarchical classification of products or a time hierarchy), and to provide capabilities to aggregate the measurements at different levels of the hierarchy [Kimball 1996; Chaudhuri and Dayal 1997]. For example, we can aggregate individual sales for the carbonated beverages category for a major soft-drink distributor in the Northwest region over the last month. Our multidimensional recommendation model supports the same aggregation capability as the multidimensional data model. However, there are certain idiosyncrasies pertaining to our model that make it different from the classical OLAP models used in databases [Kimball 1996; Chaudhuri and Dayal 1997]. We describe these aggregation capabilities in the rest of this section.

As a starting point, we assume that some of the dimensions Di of the multidimensional recommendation model have hierarchies associated with them; for example, the Products dimension can use a standard industrial product hierarchy, such as the North American Industry Classification System (NAICS—see www.naics.com), and the Time dimension can use one of the temporal hierarchies, such as minutes, hours, days, months, seasons, and so on. For example, in the movie recommendation application described in Section 3.1, all the movies can be grouped into sub-genres, and sub-genres can be further grouped into genres. The Person dimension can be grouped based on age and/or occupation, or using one of the standard marketing classifications. Also, all the movie theaters for the Place dimension can be grouped into the "Movie Theater" category, while other categories, such as "TV at home," "VCR at home," and "DVD at home," can remain intact. Finally, the Companion dimension does not have any aggregation hierarchy associated with it.

As mentioned before, some dimensions, such as Time, can have more than one hierarchy associated with them. For example, Time can be aggregated into Days/Weeks/Months/Years or into Days/{Weekdays, Weekends} hierarchies. Selecting appropriate hierarchies is a standard OLAP problem: either the user has to specify a hierarchy or these hierarchies can be learned from the data [Han and Kamber 2001, Ch. 3]. In this article, we assume that a particular hierarchy has already been selected or learned for each dimension, and we use it in our multidimensional model.

Given aggregation hierarchies, the enhanced n-dimensional recommendation model consists of:

- Profiles for each of the n dimensions, defined by a set of attributes describing these dimensions, as explained in Section 3.1. For example, for the Movie dimension, we store profiles of all the movies (including movie title, studio, director, year of release, and genre).
- Aggregation hierarchies associated with each dimension, as previously described in this section.
- The multidimensional cube of ratings, with each dimension being defined by the key for this dimension, and with the available rating information stored in the cells of the cube. For example, a multidimensional cube of ratings for the recommendation space Users × Items × Time is discussed in Section 3.1 and presented in Figure 1.

Given this enhanced multidimensional recommendation model defined by the dimensional profiles, hierarchies, and a ratings cube, a recommender system can provide more complex recommendations that deal not only with individual items but also with groups of items. For example, we may want to know not only how individual users like individual movies, e.g., R(John Doe, Gladiator) = 7, but also how they may like categories (genres) of movies, e.g., R(John Doe, action movies) = 5. Similarly, we may want to group users and other dimensions as well. For example, we may want to know how graduate students like "Gladiator," e.g., R(graduate students, Gladiator) = 9.

More generally, given individual ratings in the multidimensional cube, we may want to use hierarchies to compute aggregated ratings. For example, assume that movies can be grouped based on their genres and assume that we know how John Doe likes each action movie individually. Then, as shown in Figure 2, we can compute an overall rating of how John Doe likes action movies as a genre by aggregating his individual action movie ratings:

    R(John Doe, action) := AGGR_{x.genre=action} R(John Doe, x).        (11)

Fig. 2. Aggregation capabilities for recommender systems: aggregating the ratings.

Most traditional OLAP systems use a similar aggregation operation (known as roll-up) that often is a simple summation of all the underlying elements [Kimball 1996; Chaudhuri and Dayal 1997]. Such an approach, however, is not applicable to recommender systems because ratings usually are not additive quantities. For example, if John Doe saw two action movies and rated them 5 and 9 respectively, then the overall rating for action movies should not be the sum of these two ratings (i.e., 14). Therefore, more appropriate aggregation functions AGGR for recommender systems are AVG, AVG of TOP k, or more complicated mathematical or even special user-defined functions.


For example, the cumulative rating of action movies can be computed for John Doe as

    R(John Doe, action) := AVG_{x.genre=action} R(John Doe, x).        (12)
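A minimal sketch of such a non-additive roll-up, aggregating a user's individual movie ratings to the genre level with AVG; the ratings and genre assignments are invented for illustration.

```python
from statistics import mean
from typing import Dict

def aggregate_rating(user_ratings: Dict[str, float], genre_of: Dict[str, str], target: str) -> float:
    """AVG over the user's ratings of movies in the target genre, per equation (12).
    An AVG-of-TOP-k or any user-defined function could replace mean() here."""
    values = [r for movie, r in user_ratings.items() if genre_of.get(movie) == target]
    return mean(values)

# Hypothetical individual ratings and genre assignments for the Movie dimension.
ratings = {"Gladiator": 5.0, "Terminator": 9.0, "Titanic": 7.0}
genres = {"Gladiator": "action", "Terminator": "action", "Titanic": "drama"}
print(aggregate_rating(ratings, genres, "action"))   # 7.0, not the additive roll-up of 14
```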

One of the central issues in recommender systems is how to obtain ratings for the multidimensional cube described in this section and in Section 3.1. As in the standard two-dimensional case, we start with an initial set of user-specified ratings on some subset of D1 × · · · × Dn. This initial set of ratings can be obtained either explicitly from the user or using various implicit rating elicitation methods [Konstan et al. 1997; Caglayan et al. 1997; Oard and Kim 2001; Kelly and Teevan 2003]. Moreover, the initial set of ratings does not have to be specified at the bottom level of the hierarchy. In fact, the initial ratings can be specified for higher levels of the cube. For example, we can get a rating of how much John Doe likes action movies. Then we need to determine the "missing" ratings at all the levels of the entire multidimensional cube, which we describe in the next section.

4. RATING ESTIMATION IN MULTIDIMENSIONAL RECOMMENDER SYSTEMS

An important research question is how to estimate unknown ratings in a multidimensional recommendation space. As in traditional recommender systems, the key problem in multidimensional systems is the extrapolation of the rating function from a (usually small) subset of ratings that are specified by the users for different levels of the aggregation hierarchies in the multidimensional cube of ratings. For example, some ratings are specified at the bottom level of individual ratings, such as John Doe assigned rating 7 to "Gladiator"—R(JD, Gladiator) = 7—whereas others are specified as aggregate ratings, such as John Doe assigned rating 6 to action movies—R(JD, action) = 6. Then, the general rating estimation problem can be formulated as follows:

Multi-level Multidimensional Rating Estimation Problem: Given the initial (small) set of user-assigned ratings specified for different levels of the multidimensional cube of ratings, estimate all other ratings in the cube at all the levels of the OLAP hierarchies.

This rating estimation problem is formulated in its most general form, where the ratings are specified and estimated at multiple levels of OLAP hierarchies. Although there are many methods proposed for estimating ratings in traditional two-dimensional recommender systems, as described in Section 2, not all of these methods can be directly extended to the multidimensional case, because the extra dimensions and aggregation hierarchies complicate the problem. We will first describe how to estimate multidimensional ratings without aggregation hierarchies using the proposed reduction-based approach. We present algorithms for performing these reduction-based estimations in Sections 4.1 and 4.2, and describe in Section 5 how we tested them empirically on multidimensional ratings data that we collected for this purpose. In addition, we address the rating estimation problem for multiple levels of the aggregation hierarchy; we present a theoretical discussion of this problem in Section 4.3. Since multilevel multidimensional rating estimation constitutes a complex problem, in this
article we address only certain aspects of it. We discuss possible future extensions to this problem in Section 6.

4.1 An Overview of the Reduction-Based Approach

The reduction-based approach reduces the problem of multidimensional recommendations to the traditional two-dimensional User × Item recommendation space. Therefore, it does not consider aggregation hierarchies and operates at the level of individual cells of the multidimensional cube of ratings described in Section 3. Furthermore, one of the advantages of the reduction-based approach is that all previous research on two-dimensional recommender systems is directly applicable in the multidimensional case—any of the methods described in Section 2 can be applied after the reduction is done.

To see how this reduction can be done, consider the content presentation system discussed in Section 2. Furthermore, assume that

    R^D_{User×Content} : U × C → rating                                 (13)

is a two-dimensional rating estimation function that, given existing ratings D (i.e., D contains records of the form (user, content, rating) for each of the user-specified ratings), can calculate a prediction for any rating, for example, R^D_{User×Content}(John, DowJonesReport). A three-dimensional rating prediction function supporting time can be defined similarly as

    R^D_{User×Content×Time} : U × C × T → rating,                        (14)

where D contains records of the form (user, content, time, rating) for the user-specified ratings. Then the three-dimensional prediction function can be expressed through a two-dimensional prediction function as follows:

    ∀(u, c, t) ∈ U × C × T,   R^D_{User×Content×Time}(u, c, t) = R^{D[Time=t](User, Content, rating)}_{User×Content}(u, c),        (15)

where D[Time = t](User, Content, rating) denotes a rating set obtained from D by selecting only those records where the Time dimension has value t and keeping only the corresponding values of the User and Content dimensions, as well as the value of the rating itself. In other words, if we treat the set of three-dimensional ratings D as a relation, then D[Time = t](User, Content, rating) is simply another relation obtained from D by performing two relational operations: a selection followed by a projection.

Note that in some cases the relation D[Time = t](User, Content, rating) may not contain enough ratings for the two-dimensional recommender algorithm to accurately predict R(u, c). Therefore, a more general approach to reducing the multidimensional recommendation space to two dimensions is to use not the exact context t of the rating (u, c, t), but some contextual segment S_t, which typically denotes some superset of the context t. For example, if we would like to predict how much John Doe would like to see the "Gladiator" movie on Monday—if we would like to predict the rating R^D_{User×Content×Time}(JohnDoe, Gladiator, Monday)—we may want to use for prediction not only the other user-specified Monday ratings, but weekday ratings in
general. In other words, for every (u, c, t) where t ∈ weekday, we can predict the rating as follows:

    R^D_{User×Content×Time}(u, c, t) = R^{D[Time∈weekday](User, Content, AGGR(rating))}_{User×Content}(u, c).

More generally, in order to estimate some rating R(u, c, t), we can use some specific contextual segment S_t as follows:

    R^D_{User×Content×Time}(u, c, t) = R^{D[Time∈S_t](User, Content, AGGR(rating))}_{User×Content}(u, c).
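A minimal sketch of this reduction step in code: select the ratings whose Time value falls in the chosen segment, project onto (User, Content), aggregate duplicate contexts with AVG, and hand the resulting 2D ratings to any standard estimator. The function and variable names are hypothetical.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Set, Tuple

Record = Tuple[str, str, str, float]   # (user, content, time, rating)

def reduce_to_2d(D: List[Record], segment: Set[str]) -> Dict[Tuple[str, str], float]:
    """D[Time in segment](User, Content, AGGR(rating)) with AGGR = AVG."""
    grouped = defaultdict(list)
    for user, content, time, rating in D:
        if time in segment:                              # selection on the contextual dimension
            grouped[(user, content)].append(rating)      # projection onto the main dimensions
    return {key: mean(vals) for key, vals in grouped.items()}   # aggregate repeated contexts

# Hypothetical three-dimensional ratings.
D = [("John", "DowJonesReport", "Mon", 6.0),
     ("John", "DowJonesReport", "Tue", 8.0),
     ("John", "WorldNews",      "Sat", 9.0),
     ("Jane", "DowJonesReport", "Wed", 5.0)]

weekday = {"Mon", "Tue", "Wed", "Thu", "Fri"}
ratings_2d = reduce_to_2d(D, weekday)
print(ratings_2d)   # {('John', 'DowJonesReport'): 7.0, ('Jane', 'DowJonesReport'): 5.0}
# ratings_2d can now be passed to any two-dimensional estimator from Section 2.
```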

Note that we have used the AGGR(rating) notation in the above expressions, since there may be several user-specified ratings with the same User and Content values for different Time instances in dataset D (e.g., different ratings for Monday and Tuesday). Therefore, we have to aggregate these values using some aggregation function, for example AVG, when reducing the dimensionality of the recommendation space.

The above three-dimensional reduction-based approach can be extended to a general method reducing an arbitrary n-dimensional recommendation space to an m-dimensional one (where m < n). However, in most applications we have m = 2, because traditional recommendation algorithms are designed for the two-dimensional case, as described in Section 2. Therefore, we assume that m = 2 in the rest of the article. We will refer to the two dimensions onto which the ratings are projected as the main dimensions; usually these are the User and Item dimensions. All the remaining dimensions, such as Time, will be called contextual dimensions, since they identify the context in which recommendations are made (e.g., at a specific time). We will also follow the standard marketing terminology and refer to the reduced relations defined by fixing some of the values of the contextual dimensions, such as Time = t, as segments [Kotler 2003]. For instance, we will refer to the time-related segment in the previous example, since all the ratings are restricted to the specific time t. We reiterate that the segments are not arbitrary subsets of the overall set of ratings D, but rather subsets of ratings that are selected based on the values of attributes of the contextual dimensions or combinations of these values. For example, the Weekend segment of D contains all the ratings of movies watched on weekends: Weekend = {d ∈ D | d.Time.weekend = yes}. Similarly, the Theater-Weekend segment contains all the ratings of movies watched in the theater over the weekends: TheaterWeekend = {d ∈ D | (d.Location.place = theater) ∧ (d.Time.weekend = yes)}.

We illustrate how the reduction-based approach works with the following example. Assume we want to predict how John would like the Dow Jones Report in the morning. In order to calculate R^D_{User×Content×Time}(John, DowJonesReport, Morning), the reduction-based approach would proceed as follows. First, it would eliminate the Time dimension by selecting only the morning ratings from the set of all ratings D. As a result, the problem is reduced to the standard Users × Items case on the set of morning ratings. Then, using any of the 2D rating estimation techniques described in Section 2, we can calculate how John likes the Dow Jones Report based on the set of these morning ratings. In other words, this approach would use the two-dimensional prediction function R^{D[Time=Morning](User, Content, AGGR(rating))}_{User×Content} to estimate ratings for the
User × Content domain on the Morning segment. The intuition behind this approach is simple: if we want to predict a "morning" rating for a certain user and a certain content item, we should consider only the previously specified "morning" ratings for rating estimation purposes—that is, work only with the Morning segment of ratings.

Although, as pointed out in the previous example, the reduction-based approach can use any of the 2D rating estimation methods described in Section 2, we will focus on collaborative filtering (CF) in the rest of the article, since CF constitutes one of the main methods in recommender systems. In other words, we will assume that a CF method is used to estimate ratings on the 2D segments produced by the reduction-based approach.

The reduction-based approach is related to the problem of building local models in machine learning and data mining [Atkeson et al. 1997; Fan and Li 2003; Hand et al. 2001, Sections 6.3.2-6.3.3]. Rather than building a global rating estimation CF model utilizing all the available ratings, the reduction-based approach builds a local rating estimation CF model, which uses only the ratings pertaining to the user-specified context in which a recommendation is made (e.g., morning). It is important to know whether a local model generated by the reduction-based approach outperforms the global model of the standard collaborative filtering approach, where all the information associated with the contextual dimensions is simply ignored. This is one of the central questions addressed in the remaining part of the article. As the next example demonstrates, the reduction-based CF approach outperforms the standard CF approach in some cases.

Example. Consider the following three-dimensional recommendation space User × Item × X, where X is a dimension consisting of a single binary attribute having two possible values, h and t (for example, X can represent a single-attribute Place dimension having only two values: Theater (t) and Home (h)). Also assume that all user-specified rating values for X = t are the same; let us denote this value as n_t. Similarly, let us also assume that all user-specified rating values for X = h are the same as well, and denote this value as n_h. Also, assume that n_t ≠ n_h. Under these assumptions, the reduction-based approach always estimates the unknown ratings correctly. It is easy to see this because, as mentioned in Section 2, the traditional CF approach computes the rating of item i by user u as

    r_{u,i} = k Σ_{u′∈Û} sim(u, u′) × r_{u′,i}.

Therefore, if we use the contextual information about the item being rated, then in the case of the X = t segment, all the ratings r_{u′,i} in the sum are the t ratings, and therefore r_{u,i} = n_t (regardless of the similarity measure). Similarly, for the X = h segment we get r_{u,i} = n_h. Therefore, the estimated rating always coincides with the actual rating for the reduction-based approach. In contrast, the general two-dimensional collaborative filtering approach will not be able to predict all the ratings precisely, because it uses a
mixture of nt and nh ratings. Depending on the distribution of these ratings and on the particular rating that is being predicted, an estimation error can vary from 0 (when only the correct ratings are used) to |nt – nh |, when only the incorrect ratings are used for estimation. The reason why the reduction-based CF outperformed the traditional CF approach in this example is that dimension X clearly separates the ratings in two distinct groups (all ratings for X = t are nt and all ratings for X = h are nh and nt = nh ). However, if nt = nh , then dimension X would not separate the ratings for conditions X = h and X = t, and dimension X would not matter for recommendation purposes, as was discussed in Section 3.1. In this case, the reduction-based CF approach would not outperform standard CF. These observations can also be applied to individual segments. In particular, it is possible that for some segments of the ratings data the reduction-based CF dominates the traditional CF method while for other segments it is the other way around. For example, it is possible that it is better to use the reductionbased approach to recommend movies to see in the movie theaters on weekends and the traditional CF approach for movies to see at home on VCRs. This is the case because the reduction-based approach, on the one hand, focuses recommendations on a particular segment and builds a local prediction model for this segment, but, on the other hand, computes these recommendations based on a smaller number of points limited to the considered segment. This tradeoff between having more relevant data for calculating an unknown rating based only on the ratings with the same or similar context and having fewer data points used in this calculation belonging to a particular segment (i.e., the sparsity effect) explains why the reduction-based CF method can outperform traditional CF on some segments and underperform on others. Which of these two factors dominates on a particular segment may depend on the application domain and on the specifics of the available data. One solution to this problem is to combine the reduction-based and the traditional CF approaches as explained in the next section. 4.2 Combined Reduction-Based and Traditional CF Approaches Before describing the combined method, we first present some preliminary concepts. In order to combine the two methods, we need some performance metric to determine which method “outperforms” the other one on various segments. There are several performance metrics that are traditionally used to evaluate performance of recommender systems, such as mean absolute error (MAE), mean squared error (MSE), correlation between predictions and actual ratings, precision, recall, F-measure, and the Receiver Operating Characteristic (ROC) [Mooney and Roy 1999, Herlocker et al. 1999]. Moreover, [Herlocker et al. 1999] classifies these metrics into statistical accuracy and decision-support accuracy metrics. Statistical accuracy metrics compare the predicted ratings against the actual user ratings on the test data. The MAE measure is a representative example of a statistical accuracy measure. The decision-support accuracy metrics measure how well a recommender system can predict which of the unknown ACM Transactions on Information Systems, Vol. 23, No. 1, January 2005.


items will be highly rated. The F-measure is a representative example of a decision-support accuracy metric [Herlocker et al. 1999]. Although both types of measures are important, it has been argued in the literature [Herlocker et al. 1999] that the decision-support metrics are better suited for recommender systems because they focus on recommending high-quality items, which is the primary target of recommender systems.

In this section, we will use an abstract performance metric µ_{A,X}(Y) for a recommendation algorithm A trained on the set of known ratings X and evaluated on the set of known ratings Y, where X ∩ Y = ∅. Before proceeding further, we introduce some notation. For each d ∈ Y, let d.R be the actual (user-specified) rating for that data point and let d.R_{A,X} be the rating predicted by algorithm A trained on dataset X and applied to point d. Then µ_{A,X}(Y) is defined as some statistic on the two sets of ratings {d.R | d ∈ Y} and {d.R_{A,X} | d ∈ Y}. For example, the mean absolute error (MAE) measure is defined as

$\mu_{A,X}(Y) = \frac{1}{|Y|} \sum_{d \in Y} \left| d.R_{A,X} - d.R \right|.$

As mentioned earlier, when discussing the performance measure µ_{A,X}(Y), we always imply that X ∩ Y = ∅; that is, training and testing data should be kept separate. In practice, researchers often use multiple pairs of disjoint training and test sets obtained from the same initial data by employing model evaluation techniques such as n-fold cross-validation or resampling (bootstrapping) [Mitchell 1997; Hastie et al. 2001], and we use these techniques extensively in our experiments, as described in Section 5. Specifically, given some set T of known ratings, cross-validation or resampling techniques can be used to obtain training and test data sets X_i and Y_i (i = 1, 2, ...), where X_i ∩ Y_i = ∅ and X_i ∪ Y_i = T, and the actual prediction of a given rating d.R is often computed as an average of its predictions by the individual models:

$d.R_{A,T} = \frac{1}{|C|} \sum_{i \in C} d.R_{A,X_i}, \quad \text{where } C = \{\, i \mid d \in Y_i \,\}.$   (16)
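To make this averaging concrete, the following is a minimal Python sketch (ours, not the authors' implementation) of Equation (16) and of the MAE form of µ_{A,X}(Y); the dictionary-based data layout and function names are illustrative assumptions only.

    # Each rating point d is identified by an arbitrary hashable key, e.g. a
    # (user, item, context) tuple.  actual[d] holds the user-specified rating
    # d.R; fold_predictions is a list with one dict per test fold, mapping d
    # to the rating d.R_{A,X_i} predicted by the model of fold i.

    def average_predictions(fold_predictions):
        """Equation (16): average each point's predictions over the folds
        in which it appeared as a test point."""
        sums, counts = {}, {}
        for fold in fold_predictions:
            for d, r in fold.items():
                sums[d] = sums.get(d, 0.0) + r
                counts[d] = counts.get(d, 0) + 1
        return {d: sums[d] / counts[d] for d in sums}

    def mae(actual, estimated):
        """MAE form of mu_{A,X}(Y): mean absolute difference between the
        estimated and the user-specified ratings."""
        points = [d for d in estimated if d in actual]
        return sum(abs(estimated[d] - actual[d]) for d in points) / len(points)

    # Tiny usage example with made-up numbers:
    actual = {("u1", "m1"): 9, ("u1", "m2"): 4}
    folds = [{("u1", "m1"): 8.0}, {("u1", "m1"): 10.0}, {("u1", "m2"): 5.0}]
    print(mae(actual, average_predictions(folds)))   # -> 0.5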

When using cross-validation or resampling, it is often the case that $\bigcup_i X_i = T$ and $\bigcup_i Y_i = T$. To keep the notation simple, we will denote the performance measure as µ_{A,T}(T). Note, however, that this does not mean that testing was performed on the same data set as training; it simply happens that the combination of all the training (X_i) and testing (Y_i) sets (where each pair is disjoint, as mentioned before) reflects the same initial data set T.

As stated before, algorithm A in the definition of µ_{A,T}(S) can be an arbitrary two-dimensional rating estimation method, including collaborative filtering or any of the other heuristic-based and model-based methods discussed in Section 2. However, to illustrate how the reduction-based approach works, we will assume that A is a traditional collaborative filtering method in the remainder of this section and in our pilot study in Section 5.

Having introduced these preliminary concepts, we are ready to present the combined approach, which consists of two phases. First, using known user-specified ratings (i.e., training data), we determine the contextual segments on which the reduction-based approach outperforms the traditional CF method. Second, in order to predict a rating, we choose the best contextual segment for that particular rating and use


Fig. 3. The algorithm for determining high-performing contextual segments.

the two-dimensional recommendation algorithm on this contextual segment. We describe each of these phases below.

The first phase, presented in Figure 3, is a preprocessing phase that is usually performed "offline." It consists of the following three steps.

Step 1 determines all the "large" contextual segments, that is, the segments where the number of user-specified (known) ratings belonging to the segment exceeds a predetermined threshold N. If the recommendation space is "small" (in the number of dimensions and the ranges of attributes in each dimension), we can obtain all the large segments straightforwardly by an exhaustive search of the space of all possible segments. Otherwise, if the search space is too large for an exhaustive search, we can use either standard heuristic search methods [Winston 1992] or the help of a domain expert (e.g., a marketing manager) to determine the set of most important segments for the application and test them for "largeness." Alternatively, the hierarchies used for rating aggregation purposes, as described in Section 3.2, can also be used to identify large segments.

In Step 2, for each large segment S determined in Step 1, we run algorithm A on segment S and determine its performance µ_{A,S}(S) using standard cross-validation, resampling, and other performance evaluation techniques that split the data into training and testing parts multiple times, evaluate performance separately on each split, and aggregate (e.g., average) the performance results at the end [Hastie et al. 2001].

Footnote 5: In practice, we use the term "better" to mean not only that µ_{A,S}(S) > µ_{A,T}(S), but also that the difference between the performances is substantial; this amounts to performing a statistical test that depends on the specific metric µ, as discussed in Section 5.


Fig. 4. The combined approach for rating estimation.

One example of such a performance evaluation technique, which we use in our experiments, is bootstrapping [Hastie et al. 2001]. As will be described in Section 5, we take K random resamples of a certain portion p of the data from S and estimate ratings on the remaining 1 − p portion of S using CF methods (Footnote 6). Then we average these estimations using Equation (16) and compute the measure µ_{A,S}(S) using one of the methods described above (in Section 5 we used MAE and some of the decision-support accuracy metrics). We also run algorithm A on the whole data set T and compute its performance µ_{A,T}(S) on the test set S using the same performance evaluation method as for µ_{A,S}(S). Then we compare the results and determine which method outperforms the other on the same data for this segment. We keep only those segments for which the performance of the reduction-based algorithm A exceeds the performance of the pure algorithm A.

Finally, in Step 3 we also remove from SEGM(T) any segment S for which there exists a strictly more general segment Q on which the reduction-based approach performs better. It will also be seen from the rating estimation algorithm presented in Figure 4 that such underperforming segments would never be used for rating estimation purposes and, therefore, can be removed from SEGM(T). As a result, the algorithm produces the set of contextual segments SEGM(T) on which the reduction-based algorithm A outperforms the pure algorithm A.

Once we have the set of high-performing contextual segments SEGM(T), we can perform the second phase of the combined approach and determine which method to use in "real time" when we need to produce an actual recommendation. The actual algorithm is presented in Figure 4. Given a data point d for which we want to estimate the rating, we first go over the contextual segments SEGM(T) = {S_1, ..., S_k}, ordered in decreasing order of their performance, and select the best-performing segment to which point d belongs.

Footnote 6: We set K = 500 and p = 29/30 in Section 5.
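As an illustration of the resampling evaluation used in Step 2 above (and of the K and p values in Footnote 6), here is a small Python sketch; it is our own rendering, and predict_2d (the underlying two-dimensional algorithm A) and score (the metric µ) are hypothetical placeholders supplied by the caller.

    import random

    def bootstrap_performance(ratings, predict_2d, score, K=500, p=29/30):
        """Estimate mu_{A,S}(S) for one data set S, given as a dict mapping a
        rating point d to its known rating.  Each of the K resamples trains
        on a random p-portion of S and predicts the remaining points; the
        per-point predictions are then averaged as in Equation (16)."""
        points = list(ratings)
        sums, counts = {}, {}
        for _ in range(K):
            random.shuffle(points)
            cut = int(len(points) * p)
            train = {d: ratings[d] for d in points[:cut]}
            held_out = points[cut:]
            for d, r in predict_2d(train, held_out).items():
                sums[d] = sums.get(d, 0.0) + r
                counts[d] = counts.get(d, 0) + 1
        averaged = {d: sums[d] / counts[d] for d in sums}
        # Score only the points for which at least one prediction was made.
        return score({d: ratings[d] for d in averaged}, averaged)

A variant that trains on the whole data set T but scores only the held-out points falling in S would yield the comparison value µ_{A,T}(S).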


If d does not belong to any segment, we use the pure algorithm A (i.e., trained on the whole training data T) for rating prediction and return the estimated rating R_{A,T}(d). Otherwise, we take the best-performing segment S_j to which point d belongs, use the reduction-based algorithm A on that segment, and return the estimated rating R_{A,S_j}(d).

Note that the combined method uses the reduction-based approach only on those contextual segments for which it outperforms the pure two-dimensional algorithm; it reverts to the pure two-dimensional approach to predict the ratings that do not fall into any of the "advantageous" contextual segments. Therefore, the combined approach is expected to perform at least as well as the pure two-dimensional approach in practice (this is not an absolute theoretical guarantee, however, since the actual performance ultimately depends on the underlying data). The extent to which the combined approach can outperform the two-dimensional approach depends on many different factors, such as the application domain, the quality of the data, and the performance metric (i.e., adding contextual information to recommender systems may improve some metrics more significantly than others, as will be shown below). The main advantage of the combined reduction-based approach described in this section is that it uses the reduction-based approach only in those contextual situations where it outperforms the standard 2D recommendation algorithm, and continues to use the latter where there is no improvement.

To illustrate how the combined approach presented in Figures 3 and 4 performs in practice, we evaluated it on a real-world data set and compared its performance with the traditional two-dimensional CF method. We describe our study in Section 5.
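The two phases described above can be summarized in the following Python sketch, which is our own illustrative rendering of Figures 3 and 4 rather than the authors' code. The segment objects (with a set of matching known ratings, a context-matching test, and a generality test), the metric mu, and the predictor predict are hypothetical placeholders, and mu is assumed to be a metric for which larger values are better (e.g., the F-measure).

    def select_segments(T, candidates, mu, N):
        """Offline phase (Figure 3).  T is the set of all known ratings; each
        candidate segment S exposes S.ratings (the known ratings matching its
        context), S.matches(d), and S.is_generalization_of(other)."""
        large = [S for S in candidates if len(S.ratings) >= N]           # Step 1
        kept = [S for S in large                                         # Step 2
                if mu(train=S.ratings, test=S.ratings) >
                   mu(train=T, test=S.ratings)]
        kept = [S for S in kept                                          # Step 3
                if not any(Q.is_generalization_of(S) and
                           mu(train=Q.ratings, test=Q.ratings) >
                           mu(train=S.ratings, test=S.ratings)
                           for Q in kept)]
        # Return SEGM(T), ordered by decreasing segment-based performance.
        return sorted(kept, key=lambda S: mu(train=S.ratings, test=S.ratings),
                      reverse=True)

    def estimate_rating(d, T, segments, predict):
        """Online phase (Figure 4): use the best-performing kept segment whose
        context matches point d; otherwise fall back to the whole data set."""
        for S in segments:                  # already ordered by performance
            if S.matches(d):
                return predict(train=S.ratings, point=d)
        return predict(train=T, point=d)

Since evaluating mu on every candidate segment is computationally expensive, the segment selection phase is run once offline, as noted in Section 5.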


4.3 Multi-Level Rating Estimation Problem

So far in Section 4 we have studied only individual ratings and have not considered aggregation hierarchies. However, aggregate ratings, described in Section 3.2, are important since they can be useful in various applications. For instance, one can estimate some of the unknown individual ratings in terms of the known aggregate and known individual ratings. For example, assume that John Doe has provided the following ratings for the overall action movie genre and also for the specific action movies that he has seen: R(JD, action) = 6, R(JD, Gladiator) = 7, and R(JD, Matrix) = 3. How can we then use the cumulative rating of action movies, R(JD, action) = 6, to estimate the ratings of other individual action movies that John has not seen yet?

This problem can be addressed as follows. Let R_a(JD, action) be the actual aggregate rating that John assigned to the action movies (e.g., 6 in the case above). Let R_c(JD, action) be the aggregate rating computed from the individual ratings R(JD, x) assigned to all the action movies in the set action using expression (11) and some aggregation function AGGR. Let X_r be the set of action movies that John has already rated and X_nr the action movies that he has not rated yet and whose ratings we try to estimate (note that X_r ∪ X_nr = action). Then one way to assign ratings to the action movies in X_nr that John has not rated yet is to minimize the difference between the actual rating R_a(JD, action) and the computed rating R_c(JD, action) (Footnote 7). More formally:

$\min_{\{R(JD,x)\}_{x \in X_{nr}}} \left| R_a(JD, action) - R_c(JD, action) \right| = \min_{\{R(JD,x)\}_{x \in X_{nr}}} \left| R_a(JD, action) - \mathrm{AGGR}_{x \in X_r \cup X_{nr}} R(JD, x) \right|.$

In other words, we want to assign ratings to the movies in X_nr so that they yield the computed aggregate rating that is closest to the rating R_a(JD, action) assigned by John himself. One issue with this minimization problem is that in some situations it can have too many (even an infinite number of) solutions. To see this, assume that the function AGGR is the average function AVG and that X_nr = {y_1, ..., y_k}. Then the above optimization problem reduces to the problem of finding R(JD, y_1), ..., R(JD, y_k) such that R(JD, y_1) + · · · + R(JD, y_k) = c, where

$c = \left( |X_r| + |X_{nr}| \right) \cdot R_a(JD, action) - \sum_{x \in X_r} R(JD, x).$
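For the AVG case, c can be computed directly from the known quantities, as in the following small Python sketch (ours; the function name and the choice of k = 2 unseen movies are made up for illustration, while the rating values come from the John Doe example above).

    def avg_constraint(aggregate_rating, rated, num_unrated):
        """Return c such that the unknown ratings must satisfy
        R(y_1) + ... + R(y_k) = c when the aggregation function is AVG.
        `rated` lists the known individual ratings in the genre."""
        return (len(rated) + num_unrated) * aggregate_rating - sum(rated)

    # John rated Gladiator (7) and Matrix (3) and gave the genre an aggregate
    # rating of 6; with, say, k = 2 unseen action movies:
    print(avg_constraint(6, [7, 3], 2))   # (2 + 2) * 6 - 10 = 14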

In this case, the knowledge of the aggregate rating R_a(JD, action) was reduced to a linear constraint on the set of ratings for the unseen action movies y_1, ..., y_k. We can then use any of the rating estimation methods discussed in Section 2 (and adapted to the multidimensional case as discussed in Section 4) to estimate the ratings R(JD, y_1), ..., R(JD, y_k) for the unseen action movies. However, as explained above, we also have the additional constraint R(JD, y_1) + · · · + R(JD, y_k) = c. Therefore, an interesting research problem would be to incorporate such constraints into some of the methods described in Section 2. Moreover, if we use other aggregation functions (besides AVG), then the optimization problem can take different forms. The idea of converting a solution to this optimization problem into a set of constraints is nevertheless generalizable to other aggregation functions, and finding efficient solutions to this problem constitutes an interesting topic for future research.

Another reason for using aggregate ratings is that, under certain assumptions, they can have smaller estimation errors than individual ratings. To see this, consider the two-dimensional User × Item case, in which the rating estimation function is modeled as

$R_c(u, i) = R_a(u, i) + \varepsilon(\mu, \sigma^2),$

where R_a(u, i) is the true rating of item i made by user u, R_c(u, i) is its estimated value, and ε(µ, σ²) is an error term represented as a random variable with an unspecified distribution having mean µ and variance σ²; this distribution is assumed to be the same across all users and items. Moreover, assume that the aggregation function is the average (AVG) function. Then the aggregated rating for a group of items I is

$R_a(u, I) = \frac{1}{|I|} \sum_{i \in I} R_a(u, i).$

Footnote 7: Note that this difference depends on the assignment of ratings to the movies in X_nr.


Without loss of generality, assume that the ratings for all items in I are estimated (Footnote 8). Then

$R_c(u, I) = \frac{1}{|I|} \sum_{i \in I} R_c(u, i),$

and the aggregate estimation error is

$R_c(u, I) - R_a(u, I) = \frac{1}{|I|} \sum_{i \in I} \left( R_c(u, i) - R_a(u, i) \right) = \frac{1}{|I|} \sum_{i \in I} \varepsilon(\mu, \sigma^2) = \varepsilon\!\left( \mu, \frac{\sigma^2}{|I|} \right).$

The last equality follows from a well-known theorem (e.g., see Section 3.1 in Mood et al. [1974]) about the mean and variance of the average of independent identically distributed random variables. One special case is when the error ε(µ, σ²) is normally distributed as N(0, σ²); in such a case the aggregate estimation error will be normally distributed as N(0, σ²/|I|). In fact, normality of the aggregate distribution $\frac{1}{|I|} \sum_{i \in I} \varepsilon(\mu, \sigma^2)$ should hold for an arbitrary error distribution ε(µ, σ²) and large values of |I|, since this average converges to the normal distribution N(µ, σ²/|I|) according to the Central Limit Theorem.

To summarize this discussion, under the assumptions considered above, the estimation error for the aggregate rating is always smaller than the estimation error for the individual ratings. Moreover, it is easy to see that this would also hold at all levels of the aggregation hierarchy. However, the above assumptions are fairly restrictive; therefore, it is not true in general that estimates of aggregate ratings are always more accurate than estimates of individual ratings. This depends on various factors, including (a) the rating estimation function, (b) the rating aggregation function (e.g., AVG, AVG-of-Top-k, etc.), and (c) the accuracy measure (mean absolute error, mean squared error, F-measure, etc.) used in the model. Therefore, an important research problem is to determine under what conditions an estimated aggregate rating is more accurate than the individual estimated ratings. This question also constitutes an interesting topic for future research.

Footnote 8: If some ratings R(u, i) in I are already explicitly defined, we can use them instead of the actual estimates and carry the same argument through.
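The variance-reduction claim above is easy to check numerically. The following is a small simulation sketch (our own, with Gaussian errors chosen purely for illustration) comparing the spread of individual estimation errors with the spread of their average over |I| items.

    import random
    import statistics

    def simulate(sigma=2.0, group_size=10, trials=20000):
        """Compare the variance of a single i.i.d. estimation error with the
        variance of the average error over |I| = group_size items."""
        single = [random.gauss(0.0, sigma) for _ in range(trials)]
        aggregate = [statistics.fmean(random.gauss(0.0, sigma)
                                      for _ in range(group_size))
                     for _ in range(trials)]
        return statistics.variance(single), statistics.variance(aggregate)

    v_single, v_aggregate = simulate()
    print(v_single, v_aggregate)   # roughly sigma^2 = 4 versus sigma^2/|I| = 0.4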

5. IMPLEMENTATION AND EVALUATION OF THE MULTIDIMENSIONAL APPROACH

5.1 Experimental Setup for a Multidimensional Movie Recommender System

In order to compare the multidimensional recommendation approach to the traditional approaches, we decided to test our MD recommender system on a movie recommendation application. However, unlike traditional movie recommender systems, such as MovieLens [movielens.umn.edu], we wanted to take into consideration the contextual information about the viewing when recommending a movie, such as when the movie was seen, where, and with whom.


Since such data was not available in any of the existing systems, we were not able to use any publicly available data. Instead, we built our own Web site and asked end-users to enter their ratings for movies they had seen, as well as the relevant contextual information.

We decided not to use aggregation hierarchies in our study for the following reasons. First, our main goal is to compare the MD and the 2D models and to show that the contextual information does matter; we can demonstrate this effect without using aggregation hierarchies. Second, we use the reduction-based approach to rating estimation in our studies, and this method does not fit well with aggregation hierarchies, as was explained before. Third, the use of aggregation hierarchies would require collecting significantly larger amounts of more detailed data. This would be a problem since, as described below, data collection was an expensive and time-consuming process, and we chose not to put this extra burden on our subjects.

We set up our data collection Web site in such a way that users could select movies to rate either from memory or from the list of all available movies obtained from the Internet Movie Database site [imdb.com]. To help users rate a movie, our data collection Web site also provided access to all the information about that movie at the Internet Movie Database. While designing our Web survey, we developed a specific list of contextual dimensions and attributes by brainstorming among ourselves and then pretesting them on students similar to those who would be participating in the survey. In particular, we decided to include the following contextual dimensions in addition to the traditional Person and Movie dimensions:

• Time: when the movie was seen (choices: weekday, weekend, don't remember); furthermore, if seen on a weekend, whether it was the opening weekend for the movie (choices: yes, no, don't remember);
• Place: where the movie was seen (choices: in a movie theater, at home, don't remember);
• Companion: with whom the movie was seen (choices: alone, with friends, with boyfriend/girlfriend, with family or others).

Moreover, we decided to use "coarse granularity" for each of the contextual dimensions (e.g., partitioning the Time dimension only into weekend vs. weekday) for the following reasons. First, coarse granularities partitioned the data in a meaningful way from the consumer's perspective. Second, coarse granularities helped with the sparsity problem because they led to more data points per segment. Third, some of the ratings were for movies that the users saw in the not-so-recent past, so there was a tradeoff between recalling the specifics of the viewing context from memory and the accuracy of the data recalled. Also note that dimensions can have multiple overlapping aggregations; for example, for the Time dimension we could have the aggregations {weekdays, weekends} and {spring, summer, fall, winter}. In our case, based on our understanding of consumer behavior from pretests, we chose to use only one of these aggregations: {weekends, weekdays}.


Participants rated the movies on a scale from 1 to 13, a more sensitive scale than the scales used in such applications as EachMovie, MovieLens, and Amazon.com. The scale anchors were 1 (absolutely hated) and 13 (absolutely loved), and the midpoint was 7 (neither loved nor hated the movie). The scale took the form of a drop-down menu of rating choices, with each point depicted numerically as well as graphically (i.e., showing the number of stars from 1 to 13) to ensure that it was well understood by subjects.

The participants in our study were college students. Overall, 1755 ratings were entered by 117 students over a period of 12 months, from May 2001 to April 2002. Since some students rated very few movies, we decided to drop those who had rated fewer than 10 movies from our analysis. As a result, a few movies that were rated only by such students were also dropped. Therefore, from an initial list of 117 students and 210 movies, we ended up with 62 students, 202 movies, and 1457 total ratings. Throughout the rest of the article, we will use D to denote this set of 1457 ratings.

As was pointed out in Section 3.1, it is important to understand which dimensions really matter in terms of making a "significant difference" in rating estimations. For example, perhaps the Time dimension does not really affect movie-watching experiences (i.e., perhaps it really does not matter whether you see a movie on a weekday or a weekend) and should be dropped from consideration and from the multidimensional cube of ratings. Therefore, after the data had been collected, we tested each dimension to see whether it significantly affected ratings, using the feature selection techniques mentioned in Section 3.1. In particular, for each dimension, we partitioned the ratings based on all the values (categories) of this dimension. For example, for the Time dimension, we partitioned all the ratings into the two categories "weekend" and "weekday." Then for each student (Footnote 9) we computed the average rating for each category and applied a paired-comparison t-test [Kachigan 1986] on the set of all users to determine whether there was a significant difference between the average ratings in these categories. For the binary dimensions Time, Place, and OpeningWeekend, the application of the paired t-test was straightforward and showed that the differences in ratings for the weekend/weekday, theater/home, and opening/non-opening values were significant. Since the Companion dimension had four values, we applied the paired t-test to various pairs of values. From all these tests, we concluded that each of the contextual dimensions affected ratings in a significant way and therefore should be kept for further consideration (Footnote 10).

Footnote 9: We dropped from this test those respondents who had not seen a certain minimum number of movies in each category (e.g., at least 3 movies on weekdays and weekends in our study).

Footnote 10: We would like to make the following observations pertaining to this test. First, it should be viewed only as a heuristic and not as a necessary or sufficient condition for the usefulness of the tested dimensions in a rigorous mathematical sense. Second, the t-test relies on the normality assumption of the underlying distribution; as an alternative, the Wilcoxon test can be used for non-normal distributions [Kachigan 1986]. However, that test would still remain just another heuristic since, again, we cannot use it as a necessary or sufficient condition for the usefulness of the tested dimension in the reduction-based recommendation approach. Third, yet another method for eliminating some of the dimensions and features and, therefore, speeding up the process of identifying "good" segments would be to estimate multicollinearity [Kachigan 1986] among the features in various dimensions, and then drop some of the correlated features. However, we did not deploy this method in the project.
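As an illustration of this dimension-screening step, the following Python sketch (ours, assuming SciPy is available; the data layout is a made-up convenience) applies a paired t-test to each user's average weekday rating versus average weekend rating.

    from statistics import mean
    from scipy.stats import ttest_rel   # paired-comparison t-test

    def time_dimension_matters(ratings_by_user, alpha=0.05, min_per_cell=3):
        """ratings_by_user maps each user to a dict with 'weekday' and
        'weekend' lists of that user's ratings.  Users with too few ratings
        in either cell are dropped, as in Footnote 9."""
        weekday_avgs, weekend_avgs = [], []
        for cells in ratings_by_user.values():
            if min(len(cells["weekday"]), len(cells["weekend"])) >= min_per_cell:
                weekday_avgs.append(mean(cells["weekday"]))
                weekend_avgs.append(mean(cells["weekend"]))
        statistic, p_value = ttest_rel(weekday_avgs, weekend_avgs)
        return p_value < alpha   # True: keep the Time dimension

The same pattern applies to the other binary dimensions, and to pairs of values of the Companion dimension.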


We implemented the algorithms described in Figures 3 and 4 (in Section 4) and tested them on the movie data described above. Before providing a detailed description of the testing procedure and presenting the results, we first give a broad overview of its major steps.

1. We split the set of ratings D into 10 parts. We then selected one part containing 10% of the initial dataset (145 ratings) as the evaluation dataset (D_E) and the remaining 9 parts (90% of the data, containing 1312 ratings) as the modeling dataset (D_M). The modeling dataset D_M was used for contextual segment selection as described in Section 4. The evaluation dataset D_E was used only to evaluate our recommendation approach for the real-time estimation of ratings (as described in Figure 4) and was not used in any of the contextual segment selection procedures described below. Based on our definitions, we have D_E ∩ D_M = ∅ and D_E ∪ D_M = D.

2. We ran our segment selection algorithm presented in Figure 3 on dataset D_M. As a result, we identified the segments on which the reduction-based approach significantly outperformed the standard CF method. The details of this step are presented in Section 5.2.

3. We evaluated the proposed combined recommendation approach by running the corresponding algorithm (i.e., from Figure 4) with the well-performing segments discovered in Step 2. In particular, we compared the performance of the combined approach to that of the standard CF using 10-fold cross-validation as follows. We took the 10 subsets of ratings produced in Step 1 and used them for cross-validation. In other words, we performed the evaluation procedure on 10 different 10%–90% splits of our initial dataset D into non-overlapping datasets D_E and D_M, where each time D_E contained 145 ratings and constituted one of the 10 parts of D produced in Step 1, and D_M contained 1312 ratings and constituted the remaining 9 parts of D. Moreover, all 10 evaluation datasets D_E obtained during the 10-fold cross-validation process are pairwise nonoverlapping. The details of this step are presented in Section 5.3.

We used the F-measure as the predictive performance measure µ (see Section 4.2), where a movie is considered "good" if it is rated above 10 (on the 13-point scale) and "bad" otherwise, and the precision and recall measures were defined in the standard way using these definitions of "good" and "bad."

One caveat in the evaluation procedure outlined above is that we used only one 90%–10% split to select the set of segments on which the reduction-based approach significantly outperformed the standard CF method, and we used this same set of segments in the cross-validation procedure for the combined approach described in Step 3. Alternatively, we could have identified the set of outperforming segments anew for each fold of the 10-fold cross-validation (i.e., inside the cross-validation loop). We followed the single-split


approach for the following reasons. First, identification of the outperforming segments (as described in Figure 3) is a computationally expensive process. Second, there is a significant overlap in training data between the folds of the 10-fold cross-validation, and we therefore do not expect to obtain a significantly different set of segments for different folds. For these reasons, we decided to use the same set of outperforming segments for each fold of cross-validation.

5.2 Selecting the Pertinent Segments: Evaluating the Reduction-Based Approach

Although any available 2D recommendation algorithm A can be used in the combined approach (as described in Section 4.2 and presented in Figure 3), we decided to compare µ_{A,S}(S) and µ_{A,T}(S) in terms of collaborative filtering (i.e., in both cases A is the same collaborative filtering method CF), since CF constitutes one of the most widely used tasks in recommender systems. For the purpose of our analysis, we implemented one of the traditionally used versions of CF, which computes the ratings using the adjusted weighted sum, as described in Equation (5c) in Section 2. We used the cosine similarity measure (7) as the measure of similarity between users.

As mentioned earlier, in this segment selection phase we use dataset D_M to find the most pertinent contextual segments for a given application. Initially, we ran the standard 2D CF method on dataset D_M, containing 1312 ratings, using the bootstrapping method [Mitchell 1997; Hastie et al. 2001] with 500 random resamples in order to obtain a baseline performance. In each resample, 29/30 of D_M was used for training and the remaining 1/30 of D_M for testing. More specifically, for each sample we estimated the ratings in its testing set using the ratings from its training set, and we repeated this operation 500 times. As a result, we obtained a data set X ⊆ D_M, containing 1235 ratings (|X| = 1235), on which at least one rating from D_M was tested. In other words, X represents the set of ratings that 2D CF was able to predict. Note that in this particular case we have X ⊂ D_M (and not X = D_M), because CF was not able to predict all the ratings due to the sparsity-related limitations of the data. After that, we used Equation (16) to compute the average predicted rating for every point in X. Then we used MAE, precision, recall, and the F-measure to determine the predictive performance of the standard CF method. We use precision and recall in the standard manner: precision is the portion of truly "good" ratings among those predicted as "good" by the recommender system, and recall is the portion of correctly predicted "good" ratings among all the ratings known to be "good." The F-measure is defined in the standard way [Baeza-Yates and Ribeiro-Neto 1999] as the harmonic mean of precision and recall:

$F\text{-measure} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$
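With "good" defined as a rating above 10 on the 13-point scale (as in Section 5.1), these three measures can be computed as in the following Python sketch (our own illustration; the dictionary layout is an assumption).

    def precision_recall_f(actual, predicted, good_threshold=10):
        """actual and predicted map each rating point to a rating on the
        1-13 scale; a rating is 'good' if it exceeds good_threshold."""
        recommended = {d for d, r in predicted.items() if r > good_threshold}
        relevant = {d for d, r in actual.items() if r > good_threshold}
        if not recommended or not relevant:
            return 0.0, 0.0, 0.0
        precision = len(recommended & relevant) / len(recommended)
        recall = len(recommended & relevant) / len(relevant)
        if precision + recall == 0:
            return precision, recall, 0.0
        return precision, recall, 2 * precision * recall / (precision + recall)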

The results of the predictive performance of the standard CF method are reported in Table I. After this, we ran our segment selection algorithm (presented in Figure 3) on dataset D_M. In Step 1 of the algorithm, we performed an exhaustive search




Table I. Performance Results of Two-Dimensional CF

Performance Metric, µ    Value, µ_{CF,D_M}(X)
MAE                      2.0
Precision                0.617
Recall                   0.356
F-measure                0.452

Table II. Large Contextual Segments Generated by Step 1 of the Segment Selection Algorithm

Name              Size   Description
Home              727    Movies watched at home
Friends           565    Movies watched with friends
NonRelease        551    Movies watched not during the 1st weekend of their release
Weekend           538    Movies watched on weekends
Theater           526    Movies watched in the movie theater
Weekday           340    Movies watched on weekdays
GBFriend          319    Movies watched with girlfriend/boyfriend
Theater-Weekend   301    Movies watched in the movie theater on weekends
Theater-Friends   274    Movies watched in the movie theater with friends

through the contextual space (Footnote 11) and obtained 9 large contextual segments (subsets of D_M) that had more than 262 user-specified ratings (i.e., at least 20% of the original dataset D_M). These segments are presented in Table II.

In Step 2 of the segment selection algorithm, we contrasted the performance of the CF algorithm trained on each of these 9 contextual segments with the performance of the same CF algorithm trained on the whole dataset. Again, we used the same bootstrapping method as described above for the whole dataset D_M, except that we applied it to the ratings data from the segments described in Table II. The comparisons between the reduction-based and standard CF approaches can be made in terms of MAE, as a representative example of a statistical accuracy metric [Herlocker et al. 1999], and the F-measure, as a representative example of a decision-support accuracy metric [Herlocker et al. 1999]. As was explained in Section 4.2, decision-support metrics such as the F-measure are better suited for recommender systems than statistical accuracy metrics such as MAE, since recommender systems mainly focus on recommending high-quality items, for which decision-support metrics are more appropriate. Therefore, while we focused primarily on the F-measure in this study, for the sake of completeness we performed some tests on MAE as well.

More specifically, we performed the paired z-test for MAE and the z-test for proportions [Kachigan 1986] for precision and recall to identify the segments on which the reduction-based CF approach significantly outperformed the standard CF method. Different statistical tests were used for MAE and for precision/recall because of the different nature of these evaluation metrics. A paired z-test can be used for MAE because we can measure the absolute error of both the combined reduction-based CF method and the traditional 2D CF method for each

Footnote 11: Since the search space was relatively small, we could obtain the resulting 9 segments through an exhaustive search. As was explained in Section 4, one could use various AI search methods for larger search spaces.


Table III. Performance of the Reduction-Based Approach on Large Segments Segment Home Segment size: 727 Predicted: 658 Friends Segment size: 565 Predicted: 467 NonRelease Segment size: 551 Predicted: 483 Weekend Segment size: 538 Predicted: 463 Theater Segment size: 526 Predicted: 451 Weekday Segment size: 340 Predicted: 247 GBFriend Segment size: 319 Predicted: 233 Theater-Weekend Segment size: 301 Predicted: 205 Theater-Friends Segment size: 274 Predicted: 150

Method (CF) Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values Segment-based Whole-data-based z-values

Precision 0.527 0.556 0.427 0.526 0.643 1.710 0.495 0.500 0.065 0.596 0.655 0.983 0.622 0.694 1.258 0.415 0.531 1.041 0.513 0.627 1.292 0.660 0.754 1.234 0.657 0.732 0.814

Recall 0.319 0.357 0.776 0.444 0.333 −2.051 0.383 0.333 −0.869 0.497 0.383 −2.256 0.595 0.366 −4.646 0.349 0.270 −0.964 0.451 0.352 −1.361 0.623 0.406 −3.161 0.564 0.385 −2.245

F-measure 0.397 0.435 0.482 0.439 0.432 0.400 0.542* 0.484 0.608* 0.479 0.379 0.358 0.480 0.451 0.641* 0.528 0.607* 0.504

predicted rating. It turned out that the differences were not statistically significant for MAE on any of the 9 segments. On the other hand, precision and recall measure the accuracy of classifying ratings as "good" or "bad" and, therefore, deal not with numeric outcomes (as MAE does) but with binary outcomes for each rating. Therefore, the z-test for proportions was appropriate for evaluating the statistical significance of differences in these measures.
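The two-proportion z-test used here can be sketched as follows (standard pooled form; this is our own illustration rather than the authors' code).

    import math

    def two_proportion_z(successes1, n1, successes2, n2):
        """z statistic for comparing two observed proportions, e.g. the
        precision of the whole-data-based and the segment-based predictors
        (number of correct 'good' predictions out of those recommended)."""
        p1, p2 = successes1 / n1, successes2 / n2
        pooled = (successes1 + successes2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # |z| > 1.96 corresponds to a statistically significant difference at
    # the 95% confidence level (two-sided).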

The F-measure results are reported in Table III; the z-values allow checking the statistical significance of the precision and recall differences at the 95% confidence level, and substantial differences in F-measure are indicated with the symbol *. Note that, since there are no standard definitions of a "substantial difference" for the F-measure, for the purpose of this article we consider a difference in F-measure substantial if the actual difference between the F-measures is more than 0.05 (Footnote 12) and the difference in at least one of its components (i.e., precision or recall) is statistically significant (determined using the z-test for proportions at the 0.05 significance level). Also note that, since during this segment selection phase we compute the F-measure for each segment, the whole-data-based error rates are different for each segment. As mentioned in Section 4.2 (Figure 3), we evaluate each segment

Footnote 12: This is analogous to the approach taken in the information retrieval literature, where a 5% performance improvement is typically considered substantial [Sparck Jones 1974].


Table IV. Large Contextual Segments on which Reduction-Based Approach Substantially Outperformed the Standard CF Method in Terms of the F-Measure Segment Theater-Weekend Theater Theater-Friends Weekend

Segment-based F-measure 0.641 0.608 0.607 0.542

Whole-data-based F-measure 0.528 0.479 0.504 0.484

S by comparing how well the reduction-based CF approach predicts the ratings of this segment (i.e., we calculate µ_{A,S}(S)) with how well the standard 2D CF approach (i.e., the CF trained on the whole dataset) predicts the ratings of the same segment (i.e., we calculate µ_{A,T}(S)). Clearly, therefore, the whole-data-based error rate µ_{A,T}(S) can be different for different segments S. As Table III shows, the differences in F-measure turned out to be substantial on four segments out of 9; on these segments the reduction-based CF approach substantially outperformed the standard CF method in terms of the F-measure. These four segments are presented in Table IV in decreasing order of their F-measure.

In Step 3 of the segment selection algorithm, we identified the segments that passed the performance test in Step 2 but that also (a) were sub-segments of more general segments and (b) had worse performance results. In our case, the segment Theater-Friends (i.e., the ratings submitted by people who saw the movie in a movie theater with friends) was a subset of the Theater segment. Moreover, the Theater segment provided a performance improvement in terms of the absolute F-measure value compared to the Theater-Friends segment (Footnote 13). Therefore, using the segment selection algorithm, we discarded the smaller segment with the lower performance (i.e., Theater-Friends) and obtained the set SEGM(T) = {Theater-Weekend, Theater, Weekend}, which constituted the final result of the algorithm. Figure 5 provides graphical illustrations of the Theater and Weekend segments within the cube of contextual dimensions Companion × Place × Time (Footnote 14). The Theater-Weekend segment is simply the intersection of these two segments.

5.3 Evaluation of the Combined Approach

We used the three segments from SEGM(T) produced in Section 5.2 in the combined recommendation algorithm described in Figure 4 for real-time recommendations. We tested the performance of the combined reduction-based CF approach against the standard CF using the 10-fold cross-validation method described in Section 5.1. In particular, we used the same split of dataset D into 10 parts, as described in Section 5.1, that we used for identifying the outperforming segments in Section 5.2

Footnote 13: Instead of comparing absolute F-measure values of segments, an alternative approach would be to compare segments in terms of their relative F-measure improvement over the whole-data-based F-measure. In our future work, we plan to explore this and other possible approaches to segment comparison.

Footnote 14: We omitted the User and Item dimensions in Figure 5 for the sake of clarity of visualization.


Fig. 5. Examples of two high-performing contextual segments.

Table V. Comparison of the Combined Reduction-Based and the Standard CF Methods on a Holdout Sample

Performance metric   Standard two-dimensional CF   Combined reduction-based CF
Precision            0.710                         0.623
Recall               0.489                         0.733
F-measure            0.579                         0.673*

(recall that we split our initial dataset D into nonoverlapping datasets D_E and D_M, where D_M has 1312 and D_E has 145 ratings, respectively, and that all 10 evaluation datasets D_E obtained during the 10-fold cross-validation process are pairwise nonoverlapping). Among these 10 different D_E and D_M pairs, we focused in particular on the one that we used to generate the segments presented in Table IV. For this split, we obtained the following results for the combined reduction-based CF approach utilizing the three segments of SEGM(T) listed above. We also compared the performance of the combined approach with the standard CF method on the evaluation dataset D_E (i.e., the holdout set of 145 ratings that were not used in the segment selection procedure and were kept purely for testing purposes). Both approaches were able to predict 140 ratings. For the combined approach, out of these 140 ratings, the contextual segments from SEGM(T) were able to match and calculate predictions for 78 ratings; the remaining 62 ratings were predicted by the standard CF approach (i.e., trained on the whole D_M dataset). The evaluation results are reported in Table V.

As was explained in Section 4, the combined algorithm (Figures 3 and 4) should in practice outperform the standard CF method, and Table V supports this claim for the F-measure. It also shows by how much the combined approach improves the performance of CF: in this case the F-measure increased from 0.579 to 0.673. Moreover, the performance improvement for the F-measure is substantial based on our definition presented in Section 5.2.

In addition to this particular D_E/D_M split, we also tested the performance of the combined approach on the other 9 folds of the 10-fold cross-validation. The summary of the evaluation results is presented in Table VI. As can be seen


Table VI. Comparison of the Combined Reduction-Based CF and the Standard 2D CF Methods Using 10-Fold Cross-Validation

Test No   Standard 2D CF F-measure   Combined reduction-based CF F-measure   Difference
1         0.579                      0.673                                   0.094
2         0.418                      0.411                                   −0.007
3         0.500                      0.577                                   0.077
4         0.387                      0.548                                   0.161
5         0.384                      0.488                                   0.104
6         0.458                      0.447                                   −0.011
7         0.432                      0.390                                   −0.042
8         0.367                      0.435                                   0.068
9         0.535                      0.667                                   0.132
10        0.513                      0.548                                   0.035
AVG       0.457                      0.518                                   0.061

Table VII. Overall Performance Comparison Based on the F-Measure

Comparison                                          Standard 2D CF   Combined reduction-based CF   Difference
All predictions (1373 ratings)                      0.463            0.526                         0.063*
Predictions on ratings from SEGM(T) (743 ratings)   0.450            0.545                         0.095*

Note: The symbol * denotes a substantial difference as per our definition in Section 5.2.

from this table, although there were a few cases where the standard 2-dimensional CF slightly outperformed the combined reduction-based CF approach, in the majority of cases the combined reduction-based CF approach outperformed the standard 2D CF in terms of the F-measure.

Furthermore, since all 10 evaluation datasets (D_E) were nonoverlapping, we could simply calculate the overall F-measure by combining all the different prediction results into one set. The results are presented in Table VII. Overall, there were 1373 (out of 1457) ratings that both the 2D CF and the combined reduction-based CF approaches were able to predict. Note that not all the ratings could be predicted because of the sparsity-related limitations of the data. As would be expected, the overall F-measure values for both approaches are very close to their respective average F-measure values based on the 10-fold cross-validation procedure (Table VI). Furthermore, the combined reduction-based CF approach substantially outperformed the traditional 2D CF (Footnote 15).

Note that the combined reduction-based CF approach incorporates the standard CF approach, as discussed earlier (e.g., see Figure 4). More specifically, the combined approach uses the standard 2D CF to predict the value of any

Footnote 15: As defined in Section 5.2, this means that the difference between the two F-measures is at least 0.05 and that the difference between either their precision or recall components is statistically significant.


rating that does not belong to any of the discovered high-performing contextual segments SEGM(T) = {Theater-Weekend, Theater, Weekend}. Consequently, in our application, the predictions of the two approaches are identical for all ratings that do not belong to segments in SEGM(T). Since the ratings outside of SEGM(T) do not contribute to the differentiation between the two approaches, it is important to determine how well the two approaches do on the ratings from SEGM(T). In our case, there were 743 such ratings (out of the 1373 ratings predicted by both approaches), and the difference in performance of the two approaches on the ratings in SEGM(T) is 0.095, as shown in Table VII, which is even more substantial in terms of the F-measure than in the previously described all-ratings case.

In conclusion, our study shows that the combined reduction-based CF approach can substantially outperform the standard two-dimensional CF method on "real-world" problems. However, the actual performance results depend very much on the context-dependent nature of the application at hand. As the example in Section 4.1 illustrates, in some cases the reduction-based CF approach can outperform the standard CF method on every rating, and therefore the difference in recommendation accuracy can be substantial. Conversely, as was argued in Section 3.1, in other applications the extra dimensions may not make any difference in terms of recommendation performance; in such a case, as indicated in Figure 4, our approach reduces to the underlying 2D recommendation algorithm, since no contextual segments would be found. How much the combined reduction-based approach outperforms the standard CF method therefore depends critically on the contextual parameters of the application at hand.

6. CONCLUSIONS AND FUTURE WORK

In this article we have presented a multidimensional recommendation model that incorporates contextual information into the recommendation process and makes recommendations based on multiple dimensions, profiles, and aggregation hierarchies. We have proposed a reduction-based approach for rating estimation that can take any traditional two-dimensional recommendation technique and extend it to incorporate contextual segments. The importance of context for preferences has been extensively studied and established in the consumer behavior literature. Drawing upon that, we empirically demonstrated in this article that, in situations where context matters, multidimensional recommender systems can provide better recommendations by taking this contextual information into consideration. Since typically not every contextual dimension significantly affects a given recommendation task, the reduction-based approach may provide better performance on some contextual segments and worse performance on others. To address this problem, we proposed a combined reduction-based method and tested it using a standard two-dimensional collaborative filtering algorithm. We also demonstrated empirically that the combined reduction-based collaborative filtering approach substantially outperformed the standard 2D collaborative filtering method in terms of the F-measure.


The results presented in this article can be further extended by studying the following topics:

• Other rating estimation methods, in addition to the reduction-based approach presented in the article;
• Leveraging the hierarchical and profile information in order to provide better recommendations;
• Performance-related issues in multidimensional recommender systems.

We will examine each of these topics in the rest of this section.

Other Rating Estimation Methods. As stated in Section 2, the rating estimation techniques for classical two-dimensional collaborative filtering systems are classified into heuristic-based (or memory-based) and model-based [Breese et al. 1998]. Some of these two-dimensional techniques can be directly extended to the multidimensional case. Therefore, multidimensional approaches can be grouped into heuristic-based, model-based, and reduction-based approaches. It is important to develop these extensions and compare their performance to the classical two-dimensional methods in a way similar to how our method was compared to the standard 2D CF approach in this article. It is also important to compare the performance of new multidimensional heuristic- and model-based methods to the performance of our reduction-based approach. Moreover, we studied only a particular type of the reduction-based approach, where CF was used for the local models generated in the manner described in this article. Clearly, there are several alternative two-dimensional recommendation methods for generating local models using the reduction-based approach, and it is important to develop and study these alternatives. In addition to using other types of rating estimation methods to build the local models (besides collaborative filtering), one can build several local models on various segments, each model possibly using a different reduction-, model-, or heuristic-based estimation method. Then one can combine these different local models into one global (mixture) model using various combination techniques, including different ensemble methods [Dietterich 2000]. Furthermore, the multi-level rating estimation problem, as formulated in Section 4, constitutes still another open and interesting problem that we mention below. In summary, multidimensional rating estimation is a multifaceted and complex problem requiring considerable research effort in order to explore it fully.

Leveraging Hierarchical and Profile Information. The hierarchical aggregation was described in Section 3.2, and the multi-level rating estimation problem was formulated in Section 4 and discussed in Section 4.3. However, additional work is required for a more comprehensive understanding of this problem. In particular, one needs to solve the minimization problem formulated in Section 4.3 and combine it with the previously described rating estimation methods. We also need additional studies of the rating aggregation problem at the theoretical level, beyond the approach presented in Section 4.3. Finally, an interesting research problem is how to combine hierarchical aggregation with the segmentation approaches described in Section 4. Hierarchies naturally define segments, and it is important to incorporate these hierarchies into novel


segmentation methods that would produce superior segment-specific rating estimators.

In Section 3.1, we introduced profiles consisting of the set of attributes defining a dimension. We also pointed out (in Footnote 3) that these profiles are simple and limited in the sense that they capture only factual information about the dimension. For example, the user profile may contain some demographic information and some statistics about that user's behavior, such as a seating preference on an airplane or the maximal purchase the user made on a website. It has been argued in Adomavicius and Tuzhilin [2001b] that more advanced user profiles are useful for capturing more intricate patterns of user behavior: profiles supporting behavioral rules (e.g., "John Doe prefers to see action movies on weekends," i.e., MovieGenre = "action" → TimeOfWeek = "weekend"), sets of popular sequences (e.g., sequences of Web browsing activities [Mobasher et al. 2002; Spiliopoulou et al. 2003], such as "when Jim visits the book Web site XYZ, he usually first accesses the home page, then goes to the Home&Gardening section of the site, then browses the Gardening section, and then leaves the Web site," i.e., XYZ: StartPage → Home&Gardening → Gardening → Exit), and signatures (data structures used to capture the evolving behavior learned from large data streams of simple transactions [Cortes et al. 2000]). The very same argument applies to profiles for other dimensions, such as Product, and it is important to incorporate these more complex profiles into the multidimensional model.

Furthermore, as was observed in Adomavicius and Tuzhilin [2001a], incorporating contextual information into recommender systems makes it possible to provide significantly more complex recommendations than a standard two-dimensional approach. Therefore, it is important to empower end-users with a flexible query language so that they can express the different types of multidimensional recommendations that are of interest to them. For example, John Doe may want to be recommended at most three action movies (provided their ratings are at least 6 out of 10), along with the places to see them with his girlfriend over the weekend. In Adomavicius and Tuzhilin [2001a], we developed an initial version of the Recommendation Query Language (RQL), and we are currently working on extending this query language to capture more meaning and to provide a theoretical justification for it. For example, the previous query can be expressed in RQL as

    RECOMMEND Movie, Place TO User
    BASED ON AVG(Rating) WITH AVG(Rating) >= 6
    SHOW TOP 3
    FROM MovieRecommender
    WHERE Time.TimeOfWeek = 'Weekend' AND Movie.Genre = 'Action'
      AND Company.Type = 'Girlfriend' AND User.Name = 'John Doe'.

Performance Improvements. In the article, we demonstrated empirically that the contextual information provides performance improvements by collecting ratings data, testing the combined reduction-based approach, and demonstrating that it outperforms the standard 2D CF method on this data. To further advance our understanding of the reduction-based approach, it is important to


study the causes of this performance improvement at a theoretical level, and we intend to study this issue in our future work. Finally, it is important to develop recommendation algorithms that are not only accurate but scalable as well. Due to our limited resources, we collected only 1755 ratings, which was sufficient for the purposes of conducting an initial study but would not be enough for more extensive studies of the multidimensional recommendation problem, including studies of scalability and data sparsity issues. Therefore, it is important to collect significantly more data (e.g., millions of ratings) and to test the performance of our and other approaches on such large datasets. Our conjecture is that the performance of both the reduction-based and the standard 2D approaches would grow on larger datasets, but that the performance of the reduction-based approach would grow faster with more data. As a consequence, we would expect to see more segments outperforming the standard 2D approach and the performance differences being more pronounced than in our study, resulting in better performance improvements of the combined approach. We intend to test this conjecture in practice once we can obtain an appropriate dataset (or datasets) for this purpose.

In summary, we believe that this article is only the first step in studying the multidimensional recommendation problem and that much additional work is required to understand this problem fully. In this section, we outlined several directions and some of the issues that the research community can explore to advance the field.

ACKNOWLEDGMENTS

The authors would like to thank the five anonymous reviewers for providing many useful comments and, in particular, the Associate Editor for highly diligent and thorough reviews, all of which significantly improved the article. The authors also thank Joseph Pancras for discussions on relevant marketing research.

REFERENCES

ADOMAVICIUS, G. AND TUZHILIN, A. 2001a. Multidimensional recommender systems: A data warehousing approach. In Proceedings of the 2nd International Workshop on Electronic Commerce (WELCOM'01). Lecture Notes in Computer Science, vol. 2232, Springer-Verlag, 180–192.
ADOMAVICIUS, G. AND TUZHILIN, A. 2001b. Expert-driven validation of rule-based user models in personalization applications. Data Mining and Knowledge Discovery 5, 1/2, 33–58.
AGGARWAL, C. C., WOLF, J. L., WU, K.-L., AND YU, P. S. 1999. Horting hatches an egg: A new graph-theoretic approach to collaborative filtering. In Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
ANSARI, A., ESSEGAIER, S., AND KOHLI, R. 2000. Internet recommendation systems. J. Market. Res. 37, 3, 363–375.
ATKESON, C. G., MOORE, A. W., AND SCHAAL, S. 1997. Locally weighted learning. Artif. Intell. Rev. 11, 11–73.
BAEZA-YATES, R. AND RIBEIRO-NETO, B. 1999. Modern Information Retrieval. Addison-Wesley.
BALABANOVIC, M. AND SHOHAM, Y. 1997. Fab: Content-based, collaborative recommendation. Comm. ACM 40, 3, 66–72.
BASU, C., HIRSH, H., AND COHEN, W. 1998. Recommendation as classification: Using social and content-based information in recommendation. In Recommender Systems: Papers from the 1998 Workshop. Tech. Rep. WS-98-08, AAAI Press.



Received January 2003; revised July 2003, February 2004, June 2004; accepted August 2004
