Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
4747
Sašo Džeroski Jan Struyf (Eds.)
Knowledge Discovery in Inductive Databases
5th International Workshop, KDID 2006
Berlin, Germany, September 18, 2006
Revised Selected and Invited Papers
Volume Editors

Sašo Džeroski
Jožef Stefan Institute, Department of Knowledge Technologies
Jamova 39, 1000 Ljubljana, Slovenia
E-mail: [email protected]

Jan Struyf
Katholieke Universiteit Leuven, Department of Computer Science
Celestijnenlaan 200A, 3001 Leuven, Belgium
E-mail: [email protected]
Library of Congress Control Number: 2007937944
CR Subject Classification (1998): H.2, I.2
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
ISSN 0302-9743
ISBN-10 3-540-75548-9 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-75548-7 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media
springer.com

© Springer-Verlag Berlin Heidelberg 2007
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12171675 06/3180 543210
Preface
The 5th International Workshop on Knowledge Discovery in Inductive Databases (KDID 2006) was held on September 18, 2006 in Berlin, Germany, in conjunction with ECML/PKDD 2006: The 17th European Conference on Machine Learning (ECML) and the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD).

Inductive databases (IDBs) represent a database view on data mining and knowledge discovery. IDBs contain not only data, but also generalizations (patterns and models) valid in the data. In an IDB, ordinary queries can be used to access and manipulate data, while inductive queries can be used to generate (mine), manipulate, and apply patterns. In the IDB framework, patterns become "first-class citizens", and KDD becomes an extended querying process in which both the data and the patterns/models that hold in the data are queried.

The IDB framework is appealing as a general framework for data mining, because it employs declarative queries instead of ad-hoc procedural constructs. As declarative queries are often formulated using constraints, inductive querying is closely related to constraint-based data mining. The IDB framework is also appealing for data mining applications, as it supports the entire KDD process, i.e., nontrivial multi-step KDD scenarios, rather than just individual data mining operations.

The goal of the workshop was to bring together database and data mining researchers interested in the areas of inductive databases, inductive queries, constraint-based data mining, and data mining query languages. This workshop followed the previous four successful KDID workshops organized in conjunction with ECML/PKDD: KDID 2002 held in Helsinki, Finland, KDID 2003 held in Cavtat-Dubrovnik, Croatia, KDID 2004 held in Pisa, Italy, and KDID 2005 held in Porto, Portugal. Its scientific program included nine regular presentations and two short ones, as well as an invited talk by Kiri L. Wagstaff (Jet Propulsion Laboratory, California Institute of Technology, USA).

This volume bundles all papers presented at the workshop and, in addition, includes three contributions that cover relevant research presented at other venues. We also include an article by one of the editors (SD) that attempts to unify existing research in the area and outline directions for further research towards a general framework for data mining.

We wish to thank the invited speaker, all the authors of submitted papers, the program committee members and additional reviewers, and the ECML/PKDD organization committee. KDID 2006 was supported by the European project IQ ("Inductive Queries for Mining Patterns and Models", IST FET FP6-516169).

July 2007
Sašo Džeroski
Jan Struyf
Organization
Program Chairs

Sašo Džeroski
Department of Knowledge Technologies, Jožef Stefan Institute
Jamova 39, 1000 Ljubljana, Slovenia
[email protected]
http://www-ai.ijs.si/SasoDzeroski/

Jan Struyf
Department of Computer Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, 3001 Leuven, Belgium
[email protected]
http://www.cs.kuleuven.be/~jan/
Program Committee

Hiroki Arimura, Hokkaido University, Japan
Hendrik Blockeel, Katholieke Universiteit Leuven, Belgium
Francesco Bonchi, ISTI-C.N.R., Italy
Jean-François Boulicaut, INSA Lyon, France
Toon Calders, University of Antwerp, Belgium
Luc De Raedt, Katholieke Universiteit Leuven, Belgium
Minos N. Garofalakis, Intel Research Berkeley, USA
Fosca Giannotti, ISTI-C.N.R., Italy
Bart Goethals, University of Antwerp, Belgium
Jiawei Han, University of Illinois at Urbana-Champaign, USA
Ross D. King, University of Wales, Aberystwyth, UK
Giuseppe Manco, ICAR-C.N.R., Italy
Rosa Meo, University of Turin, Italy
Ryszard S. Michalski, George Mason University, USA
Taneli Mielikäinen, University of Helsinki, Finland
Shinichi Morishita, University of Tokyo, Japan
Siegfried Nijssen, Katholieke Universiteit Leuven, Belgium
Céline Robardet, INSA Lyon, France
Arno Siebes, Utrecht University, The Netherlands
Takashi Washio, Osaka University, Japan
Philip S. Yu, IBM Thomas J. Watson, USA
Mohammed Zaki, Rensselaer Polytechnic Institute, USA
Carlo Zaniolo, UCLA, USA
Additional Reviewers

Annalisa Appice, Marko Bohanec, Emma L. Byrne, Hong Cheng, Amanda Clare, Francesco Folino, Gemma Garriga, Kenneth A. Kaufman, Elio Masciari, Riccardo Ortale, Jimeng Sun, Janusz Wojtusiak
Table of Contents
Invited Talk

Value, Cost, and Sharing: Open Issues in Constrained Clustering ..... 1
Kiri L. Wagstaff

Contributed Papers

Mining Bi-sets in Numerical Data ..... 11
Jérémy Besson, Céline Robardet, Luc De Raedt, and Jean-François Boulicaut

Extending the Soft Constraint Based Mining Paradigm ..... 24
Stefano Bistarelli and Francesco Bonchi

On Interactive Pattern Mining from Relational Databases ..... 42
Francesco Bonchi, Fosca Giannotti, Claudio Lucchese, Salvatore Orlando, Raffaele Perego, and Roberto Trasarti

Analysis of Time Series Data with Predictive Clustering Trees ..... 63
Sašo Džeroski, Valentin Gjorgjioski, Ivica Slavkov, and Jan Struyf

Integrating Decision Tree Learning into Inductive Databases ..... 81
Élisa Fromont, Hendrik Blockeel, and Jan Struyf

Using a Reinforced Concept Lattice to Incrementally Mine Association Rules from Closed Itemsets ..... 97
Arianna Gallo and Rosa Meo

An Integrated Multi-task Inductive Database VINLEN: Initial Implementation and Early Results ..... 116
Kenneth A. Kaufman, Ryszard S. Michalski, Jaroslaw Pietrzykowski, and Janusz Wojtusiak

Beam Search Induction and Similarity Constraints for Predictive Clustering Trees ..... 134
Dragi Kocev, Jan Struyf, and Sašo Džeroski

Frequent Pattern Mining and Knowledge Indexing Based on Zero-Suppressed BDDs ..... 152
Shin-ichi Minato and Hiroki Arimura

Extracting Trees of Quantitative Serial Episodes ..... 170
Mirco Nanni and Christophe Rigotti

IQL: A Proposal for an Inductive Query Language ..... 189
Siegfried Nijssen and Luc De Raedt

Mining Correct Properties in Incomplete Databases ..... 208
François Rioult and Bruno Crémilleux

Efficient Mining Under Rich Constraints Derived from Various Datasets ..... 223
Arnaud Soulet, Jiří Kléma, and Bruno Crémilleux

Three Strategies for Concurrent Processing of Frequent Itemset Queries Using FP-Growth ..... 240
Marek Wojciechowski, Krzysztof Galecki, and Krzysztof Gawronek

Discussion Paper

Towards a General Framework for Data Mining ..... 259
Sašo Džeroski

Author Index ..... 301
Value, Cost, and Sharing: Open Issues in Constrained Clustering

Kiri L. Wagstaff

Jet Propulsion Laboratory, California Institute of Technology,
Mail Stop 126-347, 4800 Oak Grove Drive, Pasadena CA 91109, USA
[email protected]
Abstract. Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began by incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
1 Introduction
Clustering methods are used to analyze data sets that lack any supervisory information such as data labels. They identify major patterns or trends based on a combination of the assumed cluster structure (e.g., Gaussian distribution) and the observed data distribution. Recently, semi-supervised clustering methods have become very popular because they can also take advantage of supervisory information when it is available. This supervision often takes the form of a set of pairwise constraints that specify known relationships between pairs of data items. Constrained clustering methods incorporate and enforce these constraints. This process is not just a fix for suboptimal distance metrics; it is quite possible for different users to have different goals in mind when analyzing the same data set. Constrained clustering methods permit the clustering results to be individually tailored for these different goals.

The initial work in constrained clustering has led to further study of the impact of incorporating constraints into clustering algorithms, particularly when applied to large, real-world data sets. Important issues that have arisen include:

1. Given the recent observation that some constraint sets can adversely impact performance, how can we determine the utility of a given constraint set, prior to clustering?
2. How can we minimize the effort required of the user, by actively soliciting only the most useful constraints?
3. When and how should constraints be propagated or shared with neighboring points?

This paper begins with a description of the constrained clustering problem and surveys existing methods for finding satisfying solutions (Section 2). This overview is meant to be representative rather than comprehensive. Section 3 contributes more detailed descriptions of each of these open questions. In identifying these challenges, and the state of the art in addressing them, we highlight several directions for future research.
2 Constrained Clustering
We specify a clustering problem as a scenario in which a user wishes to obtain a partition P of a data set D, containing n items, into k clusters or groups. A constrained clustering problem is one in which the user has some pre-existing knowledge about their desired P∗. Usually, P∗ is not fully known; if it were, no clustering would be necessary. Instead, the user is only able to provide a partial view V(P∗). In this case, rather than returning the P that best satisfies the (generic) objective function used by the clustering algorithm, we require that the algorithm adapt its solution to accommodate V(P∗).

2.1 Pairwise Constraints
A partition P can be completely specified by stating, for each pairwise relationship (di, dj) where di, dj ∈ D and di ≠ dj, whether the pair of items is in the same cluster or split between different clusters. When used to specify requirements about the output partition, we refer to these statements as must-link and cannot-link constraints, respectively [1,2]. The number of distinct constraints ranges from 1 to ½n(n − 1), since constraints are by definition symmetric.

It is often the case that additional information can be automatically inferred from the partial set of constraints specified by the user. Cluster membership is an equivalence relation, so the must-link relationships are symmetric and transitive. Cannot-link relationships are symmetric but not necessarily transitive. When constraints of both kinds are present, an entailment relationship permits the discovery of additional constraints implied by the user-specified set [2,3].

The first work in this area proposed a modified version of COBWEB that enforced pairwise must-link and cannot-link constraints [1]. It was followed by an enhanced version of the widely used k-means algorithm that could also accommodate constraints, called cop-kmeans [2]. Table 1 reproduces the details of this algorithm. cop-kmeans takes in a set of must-link (Con=) and cannot-link (Con≠) constraints. The essential change from the basic k-means algorithm occurs in step (2), where the decision about where to assign a given item di is constrained so that no constraints in Con= or Con≠ are violated. The satisfying condition is checked by the violate-constraints function. Note that it is possible for there to be no solutions that satisfy all constraints, in which case the algorithm exits prematurely.
Table 1. Constrained K-means Algorithm for hard, pairwise constraints [2]

cop-kmeans(data set D, number of clusters k, must-link constraints Con= ⊂ D × D, cannot-link constraints Con≠ ⊂ D × D)
1. Let C1 . . . Ck be the k initial cluster centers.
2. For each point di ∈ D, assign it to the closest cluster Cj such that violate-constraints(di, Cj, Con=, Con≠) is false. If no such cluster exists, fail (return {}).
3. For each cluster Ci, update its center by averaging all of the points dj that have been assigned to it.
4. Iterate between (2) and (3) until convergence.
5. Return {C1 . . . Ck}.

violate-constraints(data point d, cluster C, must-link constraints Con= ⊂ D × D, cannot-link constraints Con≠ ⊂ D × D)
1. For each (d, d=) ∈ Con=: If d= ∉ C, return true.
2. For each (d, d≠) ∈ Con≠: If d≠ ∈ C, return true.
3. Otherwise, return false.
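As an illustration of how the hard constraints of Table 1 interact with the assignment step, the following Python sketch implements the same logic in a simplified form. It is our illustrative transcription, not the authors' code; the data layout (NumPy arrays, integer point indices) and the function names are assumptions made for this sketch.

import numpy as np

def violates(i, cluster_id, assign, must, cannot):
    # True if assigning point i to cluster_id breaks a must-link or cannot-link
    # with respect to the points assigned so far (mirrors violate-constraints).
    for a, b in must:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] is not None and assign[j] != cluster_id:
            return True
    for a, b in cannot:
        j = b if a == i else a if b == i else None
        if j is not None and assign[j] == cluster_id:
            return True
    return False

def cop_kmeans(X, k, must, cannot, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(n_iter):
        assign = [None] * len(X)
        for i in range(len(X)):
            # Try clusters from closest to farthest; fail if none is feasible.
            order = np.argsort(((centers - X[i]) ** 2).sum(axis=1))
            feasible = [c for c in order
                        if not violates(i, c, assign, must, cannot)]
            if not feasible:
                return None  # premature exit: no satisfying assignment found
            assign[i] = int(feasible[0])
        for c in range(k):
            members = X[np.array(assign) == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return assign, centers

Because points are assigned greedily and in a fixed order, a failed run does not prove that no satisfying solution exists; in the spirit of the workaround described below, one would typically retry with several random orderings of the data.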
A drawback of this approach is that it may fail to find a satisfying solution even when one exists. This happens because of the greedy fashion in which items are assigned; early assignments can constrain later ones due to potential conflicts, and there is no mechanism for backtracking. As a result, the algorithm is sensitive to the order in which it processes the data set D. In practice, this is resolved by running the algorithm multiple times with different orderings of the data, but for data sets with a large number of constraints (especially cannot-link constraints), early termination without a solution can be a persistent problem. We previously assessed the hardness of this problem by generating constraint sets of varying sizes for the same data set and found that convergence failures happened most often for problems with an intermediate number of constraints, with respect to the number of items in the data set. This is consistent with the finding that 3-SAT formulas with intermediate complexity tend to be most difficult to solve [4].

In practice, however, this algorithm has proven very effective on a variety of data sets. Initial experiments used several data sets from the UCI repository [5], using constraints artificially generated from the known data labels. In addition, experimental results on a real-world problem showed the benefits of using a constrained clustering method when pre-existing knowledge is available. In this application, data from cars with GPS receivers were collected as they traversed repeatedly over the same roads. The goal was to cluster the data points to identify the road lanes, permitting the automatic refinement of digital maps to the individual lane level. By expressing domain knowledge about the contiguity of a given car's trajectory and a maximum reasonable separation between lanes in the form of pairwise constraints, lane-finding performance increased from 58.0% without constraints to 98.6% with constraints [2]. A natural follow-on to this work was the development of a constrained version of the EM clustering algorithm [6].
Soft Constraints. When the constraints are known to be completely reliable, treating them as hard constraints is an appropriate approach. However, since the constraints may be derived from heuristic domain knowledge, it is also useful to have a more flexible approach. There are two kinds of uncertainty that we may wish to capture: (1) the constraints are noisy, so we should permit some of them to be violated if there is overwhelming evidence against them (from other data items), and (2) we have knowledge about the likelihood that a given constraint should be satisfied, so we should permit the expression of a probabilistic constraint. The scop-kmeans algorithm is a more general version of the cop-kmeans algorithm that treats constraint statements as soft constraints, addressing the issue of noise in the constraints [7]. Rather than requiring that every constraint be satisfied, it instead trades off the objective function (variance) against constraint violations, penalizing for each violation but permitting a violation if it provides a significant boost to the quality of the solution. Other approaches, such as the MPCK-means algorithm, permit the specification of an individual weight for each constraint, addressing the issue of variable per-constraint confidences [3]. MPCK-means imposes a penalty for constraint violations that is proportional to the violated constraint's weight.

Metric Learning. It was recognized early on that constraints could provide information not only about the desired solution, but also more general information about the metric space in which the clusters reside. A must-link constraint (di, dj) can be interpreted as a hint that the conceptual distance between di and dj is small. Likewise, a cannot-link constraint implies that the distance between di and dj is so great that they should never be clustered together. Rather than using a modified clustering algorithm to enforce these individual constraints, it is also possible to use the constraints to learn a new metric over the feature space and then apply regular clustering algorithms, using the new metric. Several such metric learning approaches have been developed; some are restricted to learning from must-link constraints only [8], while others can also accommodate cannot-link constraints [9,10]. The MPCK-means algorithm fuses both of these approaches (direct constraint satisfaction and metric learning) into a single architecture [3].
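A toy version of the metric-learning idea can be written in a few lines. The sketch below is our illustration only, not the methods of [8,9,10]: it learns a per-feature weighting from must-link pairs, so that directions along which must-linked items differ a lot are down-weighted and constrained pairs are pulled together under the resulting distance.

import numpy as np

def learn_diagonal_metric(X, must_links, eps=1e-8):
    # Toy metric learning from must-link pairs only (illustrative sketch):
    # features with large spread within must-linked pairs get small weights.
    X = np.asarray(X, dtype=float)
    diffs = np.array([X[a] - X[b] for a, b in must_links])
    within_var = (diffs ** 2).mean(axis=0)     # per-feature spread over must-links
    weights = 1.0 / (within_var + eps)         # inverse-variance weighting
    def dist(u, v):
        d = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
        return float(np.sqrt((weights * d ** 2).sum()))
    return dist

A standard clustering algorithm can then be run with the returned distance function, which is the "learn a metric, then cluster as usual" strategy described above.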
2.2 Beyond Pairwise Constraints
There are other kinds of knowledge that a user may have about the desired partition P∗, aside from pairwise constraints. Cluster-level constraints include existential constraints, which require that a cluster contain at least c_min items [11,12], and capacity constraints, which require that a cluster must have less than c_max items [13]. The user may also wish to express constraints on the features. Co-clustering is the process of identifying subsets of items in the data set that are similar with respect to a subset of the features. That is, both the items and the features are clustered. In essence, co-clustering combines data clustering with feature selection and can provide new insights into a data set. For data sets in which the
features have a pre-defined ordering, such as a temporal (time series) or spatial ordering, it can be useful to express interval/non-interval constraints on how the features are selected by a co-clustering algorithm [14].
3 Open Questions
The large body of existing work on constrained clustering has achieved several important algorithmic advances. We have now reached the point where more fundamental issues have arisen, challenging the prevailing view that constraints are always beneficial and examining how constraints can be used for real problems, in which scalability and the user effort required to provide constraints may impose an unreasonable burden. In this section, we examine these important questions, including how the utility of a given constraint set can be quantified (Section 3.1), how we can minimize the cost of constraint acquisition (Section 3.2), and how we can propagate constraint information to nearby regions to minimize the number of constraints needed (Section 3.3).

3.1 Value: How Useful Is a Given Set of Constraints?
It is to be expected that some constraint sets will be more useful than others, in terms of the benefit they provide to a given clustering algorithm. For example, if the constraints contain information that the clustering algorithm is able to deduce on its own, then they will not provide any improvement in clustering performance. However, virtually all work to date values constraint sets only in terms of the number of constraints they contain. The ability to more accurately quantify the utility of a given constraint set, prior to clustering, will permit practitioners to decide whether to use a given constraint set, or to choose the best constraint set to use, when several are available.

The need for a constraint set utility measure has become imperative with the recent observation that some constraint sets, even when completely accurate with respect to the evaluation labels, can actually decrease clustering performance [15]. The usual practice when describing the results of constrained clustering experiments is to report the clustering performance averaged over multiple trials, where each trial consists of a set of constraints that is randomly generated from the data labels. While it is generally the case that average performance does increase as more constraints are provided, a closer examination of the individual trials reveals that some, or even many, of them instead cause a drop in accuracy. Table 2 shows the results of 1000 trials, each with a different set of 25 randomly selected constraints, conducted over four UCI data sets [5] using four different k-means-based constrained clustering algorithms. The table reports the fraction of trials in which the performance was lower than the default (unconstrained) k-means result, which ranges from 0% up to 87% of the trials. The average performance numbers obscure this effect because the "good" trials tend to have a larger magnitude change in performance than the "bad" trials do.
Table 2. Fraction of 1000 randomly selected 25-constraint sets that caused a drop in accuracy, compared to an unconstrained run with the same centroid initialization (table from Davidson et al. [15])

Algorithm:      CKM [2]      PKM [3]      MKM [3]   MPKM [3]
                Constraint   Constraint   Metric    Enforcement and
Data Set        enforcement  enforcement  learning  metric learning
Glass           28%          1%           11%       0%
Ionosphere      26%          77%          0%        77%
Iris            29%          19%          36%       36%
Wine            38%          34%          87%       74%
However, the fact that any of the constraint sets can cause a decrease in performance is unintuitive, and even worrisome, since the constraints are known to be noise-free and should not lead the algorithm astray.

To better understand the reasons for this effect, Davidson et al. [15] defined two constraint set properties and provided a quantitative way to measure them. Informativeness is the fraction of information in the constraint set that the algorithm cannot determine on its own. Coherence is the amount of agreement between the constraints in the set. Constraint sets with low coherence will be difficult to completely satisfy and can lead the algorithm into unpromising areas of the search space. Both high informativeness and high coherence tend to result in an increase in clustering performance. However, these properties do not fully explain some clustering behavior. For example, a set of just three randomly selected constraints, with high informativeness and coherence, can increase clustering performance on the iris data set significantly, while a constraint set with similarly high values for both properties has no effect on the ionosphere data set. Additional work must be done to refine these measures or propose additional ones that better characterize the utility of the constraint set.

Two challenges for future progress in this area are: 1) to identify other constraint set properties that correlate with utility for constrained clustering algorithms, and 2) to learn to predict the overall utility of a new constraint set, based on extracted attributes such as these properties. It is likely that the latter will require the combination of several different constraint set properties, rather than being a single quantity, so using machine learning techniques to identify the mapping from properties to utility may be a useful approach.
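Both properties can be estimated directly from a constraint set. The sketch below is a simplified reading of the two notions (our illustration, not the exact measures defined by Davidson et al. [15]): informativeness is approximated as the fraction of constraints that a baseline, unconstrained clustering violates, and coherence as the fraction of must-link/cannot-link pairs whose distances do not contradict each other.

def informativeness(must, cannot, baseline_labels):
    # Fraction of constraints the unconstrained clustering gets wrong, i.e.,
    # information the algorithm could not deduce on its own.
    total = len(must) + len(cannot)
    if total == 0:
        return 0.0
    bad = sum(1 for a, b in must if baseline_labels[a] != baseline_labels[b])
    bad += sum(1 for a, b in cannot if baseline_labels[a] == baseline_labels[b])
    return bad / total

def coherence(must, cannot, dist):
    # Crude agreement measure: a must-link and a cannot-link "agree" when the
    # must-linked pair is not farther apart than the cannot-linked pair.
    pairs = [(m, c) for m in must for c in cannot]
    if not pairs:
        return 1.0
    agreeing = sum(1 for m, c in pairs if dist(*m) <= dist(*c))
    return agreeing / len(pairs)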
3.2 Cost: How Can We Make Constraints Cheaper to Acquire?
A single pairwise constraint specifies a relationship between two data points. For a data set with n items, there are ½n(n − 1) possible constraints. Therefore, the number of constraints needed to specify a given percentage of the relationships (say, 10%) increases quadratically with the data set size. For large data sets, the constraint specification effort can become a significant burden.
There are several ways to mitigate the cost of collecting constraints. If constraints are derived from a set of labeled items, we obtain L(L−1) constraints for the cost of labeling only L items. If the constraints arise independently (not from labels), most constrained clustering algorithms can leverage constraint properties such as transitivity and entailment to deduce additional constraints automatically.

A more efficient way to obtain the most useful constraints for the least effort is to permit the algorithm to actively solicit only the constraints it needs. Klein et al. [9] suggested an active constraint acquisition method in which a hierarchical clustering algorithm can identify the m best queries to issue to the oracle. Recent work has also explored constraint acquisition methods for partitional clustering based on a farthest-first traversal scheme [16] or identifying points that are most likely to lie on cluster boundaries [17]. When constraints are derived from data labels, it is also possible to use an unsupervised support vector machine (SVM) to identify "pivot points" that are most useful to label [18].

A natural next step would be to combine methods for active constraint acquisition with methods for quantifying constraint set utility. In an ideal world, we would like to request the constraint(s) which will result in the largest increase in utility for the existing constraint set. Davidson et al. [15] showed that when restricting evaluation to the most coherent constraint sets, the average performance increased for most of the data sets studied. This early result suggests that coherence, and other utility measures, could be used to guide active constraint acquisition.

Challenges in this area are: 1) to incorporate measures of constraint set utility into an active constraint selection heuristic, akin to the MaxMin heuristic for classification [19], so that the best constraint can be identified and queried prior to knowing its designation (must/cannot), and 2) to identify efficient ways to query the user for constraint information at a higher level, such as a cluster description or heuristic rule that can be propagated down to individual items to produce a batch of constraints from a single user statement.
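A minimal active-acquisition loop can be sketched in the spirit of the farthest-first scheme mentioned above; the code below is our simplified illustration, not the published algorithm of [16]. It repeatedly picks the unqueried point farthest from the points explored so far and asks the oracle whether it belongs with a representative of each known neighborhood.

def active_queries(X, same_cluster, n_queries, dist):
    # same_cluster(i, j) is the oracle; dist(x, y) is a distance on feature vectors.
    neighborhoods = [[0]]            # start from an arbitrary first point
    must, cannot = [], []
    queries = 0
    while queries < n_queries:
        covered = [i for hood in neighborhoods for i in hood]
        candidates = [i for i in range(len(X)) if i not in covered]
        if not candidates:
            break
        # farthest-first: maximize the distance to the closest covered point
        x = max(candidates, key=lambda i: min(dist(X[i], X[j]) for j in covered))
        placed = False
        for hood in neighborhoods:
            queries += 1
            if same_cluster(x, hood[0]):
                must.append((x, hood[0])); hood.append(x); placed = True
                break
            cannot.append((x, hood[0]))
            if queries >= n_queries:
                break
        if not placed and queries < n_queries:
            neighborhoods.append([x])   # x starts a new neighborhood
    return must, cannot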
3.3 Sharing: When and How Should Constraints Be Propagated to Neighboring Points?
Another way to get the most out of a set of constraints is to determine how they can be propagated to other nearby points. Existing methods that learn distance metrics use the constraints to "warp" the original distance metric to bring must-linked points closer together and to push cannot-linked points farther apart [9,10,8,3]. They implicitly rely on the assumption that it is "safe" to propagate constraints locally, in feature space. For example, if a is must-linked to b, and the distance dist(a, c) is small, then when the distance metric is warped to bring a closer to b, it is also likely that the distance dist(b, c) will shrink and the algorithm will cluster b and c together as well. The performance gains that have been achieved when adapting the distance metric to the constraints are a testament to the common reliability of this assumption.

However, the assumption that proximity can be used to propagate constraints is not always a valid one. It is only reasonable if the distance in feature space is
Fig. 1. Three items (endgame boards) from the tic-tac-toe data set: Board A (win for X), Board B (loss for X), and Board C (win for X). For clarity, blanks are represented as blanks, rather than spaces marked 'b'. The Hamming distances between each pair of boards are shown on the right: dist(A, B) = 2, dist(B, C) = 8, dist(A, C) = 8.
consistent with the distances that are implied by the constraint set. This often holds true, since the features that are chosen to describe the data points are consistent with the data labels, which are commonly the source of the constraints. One exception is the tic-tac-toe data set from the UCI archive [5]. In this data set, each item is a 3x3 tic-tac-toe board that represents an end state for the game, assuming that the 'x' player played first. The boards are represented with nine features, one for each position on the board, and each one can take on a value of 'x', 'o', or 'b' (for blank). The goal is to separate the boards into two clusters: one with boards that show a win for 'x' and one with all other boards (losses and draws). This data set is challenging because proximity in the feature space does not correlate well with similarity in terms of assigned labels.

Consider the examples shown in Figure 1. Hamming distance is used with this data set, since the features have symbolic values. Boards A and B are very similar (Hamming distance of 2), but they should be joined by a cannot-link constraint. In contrast, boards A and C are very different (Hamming distance of 8), but they should be joined by a must-link constraint. In this situation, propagating constraints to nearby (similar) items will not help improve performance (and may even degrade it).

Clustering performance on this data set is typically poor, unless a large number of constraints are available. The basic k-means algorithm achieves a Rand Index of 51%; COP-KMEANS requires 500 randomly selected constraints to increase performance to 92% [2]. COP-COBWEB is unable to increase its performance above the 49% baseline, regardless of the number of constraints provided [1]. In fact, when we examine performance on a held-out subset of the data (held-out in the sense that no constraints were generated on the subset, although it was clustered along with all of the other items once the constraints were introduced), performance only increases to 55% for COP-KMEANS, far lower than the 92% performance on the rest of the data set. For most data sets, the held-out performance is much higher [2]. The low held-out performance indicates that the algorithm is unable to generalize the constraint information beyond the exact items that participate in constraints. This is a sign that the constraints and the features are not consistent, and that propagating constraints may be dangerous. The results of applying metric learning methods to this data set have not yet been published, probably because the feature values are symbolic rather than real-valued. However, we expect that metric learning would be ineffective, or even damaging, in this case.

Challenges to be addressed in this area are: 1) to characterize data sets in terms of whether or not constraints should be propagated (when is it "safe" and when should the data overrule the constraints?), and 2) to determine the degree to which the constraints should be propagated (e.g., how far should the local neighborhood extend, for each constraint?). It is possible that constraint set coherence [15] could be used to help estimate the relevant neighborhood for each point.
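The mismatch between feature-space proximity and class labels is easy to reproduce on this kind of representation. The two boards below are hypothetical (they are not the boards of Figure 1), but they show how a single-cell difference, i.e., a Hamming distance of 1, can separate a win from a non-win:

def hamming(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

# Boards encoded row by row with 'b' for blank (illustrative only).
board_a = "xxxoobbbb"   # 'x' completes the top row: a win for x
board_b = "xxboobbbb"   # the same board with one 'x' removed: no win
print(hamming(board_a, board_b))   # -> 1, yet the two boards belong to different classes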
4 Conclusions
This paper outlines several important unanswered questions that relate to the practice of constrained clustering. To use constrained clustering methods effectively, it is important that we have tools for estimating the value of a given constraint set prior to clustering. We also seek to minimize the cost of acquiring constraints. Finally, we require guidance in determining when and how to share or propagate constraints to their local neighborhoods. In addressing each of these subjects, we will make it possible to confidently apply constrained clustering methods to very large data sets in an efficient, principled fashion.

Acknowledgments. I would like to thank Sugato Basu and Ian Davidson for ongoing discussions on constrained clustering issues and their excellent tutorial, "Clustering with Constraints: Theory and Practice," presented at KDD 2006. The research described in this paper was funded by the NSF ITR Program (award #0325329) and was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
References

1. Wagstaff, K., Cardie, C.: Clustering with instance-level constraints. In: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 1103–1110 (2000)
2. Wagstaff, K., Cardie, C., Rogers, S., Schroedl, S.: Constrained k-means clustering with background knowledge. In: Proceedings of the Eighteenth International Conference on Machine Learning, pp. 577–584 (2001)
3. Bilenko, M., Basu, S., Mooney, R.J.: Integrating constraints and metric learning in semi-supervised clustering. In: Proceedings of the Twenty-First International Conference on Machine Learning, pp. 11–18 (2004)
4. Selman, B., Mitchell, D.G., Levesque, H.J.: Generating hard satisfiability problems. Artificial Intelligence 81, 17–29 (1996)
5. Blake, C.L., Merz, C.J.: UCI repository of machine learning databases (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html
6. Shental, N., Bar-Hillel, A., Hertz, T., Weinshall, D.: Computing Gaussian mixture models with EM using equivalence constraints. In: Advances in Neural Information Processing Systems 16 (2004)
7. Wagstaff, K.L.: Intelligent Clustering with Instance-Level Constraints. PhD thesis, Cornell University (2002)
8. Bar-Hillel, A., Hertz, T., Shental, N., Weinshall, D.: Learning a Mahalanobis metric from equivalence constraints. Journal of Machine Learning Research 6, 937–965 (2005)
9. Klein, D., Kamvar, S.D., Manning, C.D.: From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering. In: Proceedings of the Nineteenth International Conference on Machine Learning, pp. 307–313 (2002)
10. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning, with application to clustering with side-information. In: Advances in Neural Information Processing Systems 15 (2003)
11. Bradley, P.S., Bennett, K.P., Demiriz, A.: Constrained k-means clustering. Technical Report MSR-TR-2000-65, Microsoft Research, Redmond, WA (2000)
12. Tung, A.K.H., Ng, R.T., Lakshmanan, L.V.S., Han, J.: Constraint-based clustering in large databases. In: Van den Bussche, J., Vianu, V. (eds.) ICDT 2001. LNCS, vol. 1973, pp. 405–419. Springer, Heidelberg (2000)
13. Murtagh, F.: A survey of algorithms for contiguity-constrained clustering and related problems. The Computer Journal 28(1), 82–88 (1985)
14. Pensa, R.G., Robardet, C., Boulicaut, J.F.: Towards constrained co-clustering in ordered 0/1 data sets. In: Esposito, F., Raś, Z.W., Malerba, D., Semeraro, G. (eds.) ISMIS 2006. LNCS (LNAI), vol. 4203, pp. 425–434. Springer, Heidelberg (2006)
15. Davidson, I., Wagstaff, K.L., Basu, S.: Measuring constraint-set utility for partitional clustering algorithms. In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) PKDD 2006. LNCS (LNAI), vol. 4213, pp. 115–126. Springer, Heidelberg (2006)
16. Basu, S., Banerjee, A., Mooney, R.J.: Active semi-supervision for pairwise constrained clustering. In: Proceedings of the SIAM International Conference on Data Mining, pp. 333–344 (2004)
17. Xu, Q., DesJardins, M., Wagstaff, K.L.: Active constrained clustering by examining spectral eigenvectors. In: Hoffmann, A., Motoda, H., Scheffer, T. (eds.) DS 2005. LNCS (LNAI), vol. 3735, pp. 294–307. Springer, Heidelberg (2005)
18. Xu, Q.: Active Querying for Semi-supervised Clustering. PhD thesis, University of Maryland, Baltimore County (2006)
19. Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. Journal of Machine Learning Research 2, 45–66 (2002)
Mining Bi-sets in Numerical Data

Jérémy Besson¹,², Céline Robardet¹, Luc De Raedt³, and Jean-François Boulicaut¹

¹ LIRIS UMR 5205 CNRS/INSA Lyon, Bâtiment Blaise Pascal, F-69621 Villeurbanne, France
² UMR INRA/INSERM 1235, F-69372 Lyon cedex 08, France
³ Albert-Ludwigs-Universität Freiburg, Georges-Köhler-Allee, Gebäude 079, D-79110 Freiburg, Germany
[email protected]
Abstract. Thanks to a major research effort over the last few years, inductive queries on set patterns and complete solvers which can evaluate them on large 0/1 data sets have proved extremely useful. However, for many application domains, the raw data is numerical (matrices of real numbers whose dimensions denote objects and properties). Therefore, using efficient 0/1 mining techniques requires tedious Boolean property encoding phases. This is, e.g., the case when considering microarray data mining and its impact for knowledge discovery in molecular biology. We consider the possibility to mine directly numerical data to extract collections of relevant bi-sets, i.e., couples of associated sets of objects and attributes which satisfy some user-defined constraints. We not only propose a new pattern domain but also introduce a complete solver for computing the so-called numerical bi-sets. A preliminary experimental validation is given.
1 Introduction
Popular data mining techniques concern 0/1 data analysis by means of set patterns (i.e., frequent sets, association rules, closed sets, formal concepts). The huge research effort of the last 10 years has given rise to efficient complete solvers, i.e., algorithms which can compute complete collections of the set patterns which satisfy user-defined constraints (e.g., minimal frequency, minimal confidence, closeness or maximality). It is however common that the considered raw data is available as matrices where we get numerical values for a collection of attributes describing a collection of objects. Therefore, using the efficient techniques for 0/1 data has to start with Boolean property encoding, i.e., the computation of Boolean values for new sets of attributes. For instance, raw microarray data can be considered as a matrix whose rows denote biological samples and columns denote genes. In that context, each cell of the matrix is a quantitative measure of the activity of a given gene in a given biological sample. Several researchers have considered how to encode Boolean gene expression properties like, e.g., gene over-expression [1,7,12,11]. In such papers, the computed Boolean matrix has the same number of attributes
as the raw data but it encodes only one specific property. Given such datasets, efficient techniques like association rule mining (see, e.g., [1,7]) or formal concept discovery (see, e.g., [4]) have been considered. Such a Boolean encoding phase is however tedious. For instance, we still lack a consensus on how the over-expression property of a gene can be specified or assessed. As a result, different views on over-expression will lead to different Boolean encodings and thus potentially quite different collections of patterns. To overcome these problems, we investigate the possibility to mine directly the numerical data to find interesting local patterns.

Global pattern mining from numerical data, e.g., clustering and bi-clustering, has been extensively studied (see [10] for a survey). Heuristic search for local patterns has been studied as well (see, e.g., [2]). However, very few researchers have investigated the non-heuristic, say complete, search of well-specified local patterns from numerical data. In this paper, we introduce Numerical Bi-Sets (NBS) as a new pattern domain. Intuitively, we specify collections of bi-sets, i.e., associated sets of rows and columns such that the specified cells (for each row-column pair) of the matrix contain similar values. This property is formalized in terms of constraints, and we provide a complete solver for computing NBS patterns. We start from a recent formalization of constraint-based bi-set mining from 0/1 data (extension of formal concepts towards fault-tolerance introduced in [3]) both for the design of the pattern domain and its associated solver.

The next section concerns the formalization of the NBS pattern domain and its properties. Section 3 sketches our algorithm and Section 4 provides preliminary experimental results. Section 5 discusses related work and, finally, Section 6 concludes.
2 A New Pattern Domain for Numerical Data Analysis
Let us consider a set of objects O and a set of properties P such that |O| = n and |P| = m. Let us denote by M a real valued matrix of dimension n × m such that M(i, j) denotes the value of property j ∈ P for the object i ∈ O (see Table 1 for an example). Our language of patterns is the language of bi-sets, i.e., couples made of a set of rows (objects) and a set of columns (properties). Intuitively, a bi-set (X, Y) with X ∈ 2^O and Y ∈ 2^P can be considered as a rectangle or sub-matrix within M modulo row and column permutations.

Definition 1 (NBS). Numerical Bi-Sets (or NBS patterns) in a matrix are the bi-sets (X, Y) such that |X| ≥ 1 and |Y| ≥ 1 (X ⊆ O, Y ⊆ P) which satisfy the constraint C_in ∧ C_out:

  C_in(X, Y) ≡ | max_{i∈X, j∈Y} M(i, j) − min_{i∈X, j∈Y} M(i, j) | ≤ ε                    (1)

  C_out(X, Y) ≡ ∀y ∈ P \ Y, | max_{i∈X, j∈Y∪{y}} M(i, j) − min_{i∈X, j∈Y∪{y}} M(i, j) | > ε
                ∧ ∀x ∈ O \ X, | max_{i∈X∪{x}, j∈Y} M(i, j) − min_{i∈X∪{x}, j∈Y} M(i, j) | > ε     (2)

where ε is a user-defined parameter.
Table 1. A real valued matrix; the bold rectangles indicate two NBS patterns

        p1   p2   p3   p4   p5
  o1     1    2    2    1    6
  o2     2    1    1    0    6
  o3     2    2    1    7    6
  o4     8    9    2    6    7
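Definition 1 can be read directly as a membership test for a candidate bi-set. The following sketch (our illustration) encodes the Table 1 matrix and checks C_in and C_out for a given (X, Y) and ε; row and column indices 0–3 and 0–4 stand for o1–o4 and p1–p5.

import numpy as np

M = np.array([[1, 2, 2, 1, 6],    # o1
              [2, 1, 1, 0, 6],    # o2
              [2, 2, 1, 7, 6],    # o3
              [8, 9, 2, 6, 7]])   # o4

def spread(M, rows, cols):
    sub = M[np.ix_(sorted(rows), sorted(cols))]
    return sub.max() - sub.min()

def c_in(M, X, Y, eps):
    return spread(M, X, Y) <= eps

def c_out(M, X, Y, eps):
    other_rows = set(range(M.shape[0])) - set(X)
    other_cols = set(range(M.shape[1])) - set(Y)
    return all(spread(M, set(X) | {x}, Y) > eps for x in other_rows) and \
           all(spread(M, X, set(Y) | {y}) > eps for y in other_cols)

# ({o1, o2, o3}, {p1, p2, p3}) is one of the NBS patterns for eps = 1 (Fig. 1, left):
X, Y = {0, 1, 2}, {0, 1, 2}
print(c_in(M, X, Y, 1) and c_out(M, X, Y, 1))   # -> True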
Each NBS pattern defines a sub-matrix S of M such that the absolute value of the difference between the maximum value and the minimum value on S is less than or equal to ε (see C_in). Furthermore, no object or property can be added to the bi-set without violating this constraint (see C_out). This ensures the maximality of the specified bi-sets. In Figure 1 (left), we can find the complete collection of NBS patterns which hold in the data from Table 1 when we have ε = 1. In Table 1, the two bold rectangles are two examples of such NBS patterns (i.e., the underlined patterns of Figure 1 (left)). Figure 1 (right) is an alternative representation for them: each cross in the 3D-diagram corresponds to an element in the matrix from Table 1.

The search space for bi-sets can be ordered thanks to a specialization relation.

Definition 2 (Specialization and monotonicity). Our specialization relation on bi-sets, denoted ⪯, is defined as follows: (⊥_O, ⊥_P) ⪯ (⊤_O, ⊤_P) iff ⊥_O ⊆ ⊤_O and ⊥_P ⊆ ⊤_P. We say that (⊤_O, ⊤_P) extends or is an extension of (⊥_O, ⊥_P). A constraint C is anti-monotonic w.r.t. ⪯ iff ∀B and D ∈ 2^O × 2^P s.t. B ⪯ D, C(D) ⇒ C(B). Dually, C is monotonic w.r.t. ⪯ iff C(B) ⇒ C(D).

Assume W_ε denotes the whole collection of NBS patterns for a given threshold ε. Let us now discuss some interesting properties of this new pattern domain:

– C_in and C_out are respectively anti-monotonic and monotonic w.r.t. ⪯ (see Property 1).
– Each NBS pattern (X, Y) from W_ε is maximal w.r.t. ⪯ (see Property 2).
– If there exists a bi-set (X, Y) with similar values (belonging to an interval of size ε), then there exists a NBS (X', Y') from W_ε such that (X, Y) ⪯ (X', Y') (see Property 3).
– When ε increases, the size of NBS patterns increases too, whereas some new NBS patterns which are not extensions of previous ones can appear (see Property 4).
– The collection of numerical bi-sets is paving the dataset (see Corollary 1), i.e., any data item belongs to at least one NBS pattern.

Property 1 (Monotonicity). The constraint C_in is anti-monotonic and the constraint C_out is monotonic.

Proof. Let (X, Y) be a bi-set s.t. C_in(X, Y) is true, and let (X', Y') be a bi-set s.t. (X', Y') ⪯ (X, Y). This implies that C_in(X', Y') is also true:

  | max_{i∈X', j∈Y'} M(i, j) − min_{i∈X', j∈Y'} M(i, j) | ≤ | max_{i∈X, j∈Y} M(i, j) − min_{i∈X, j∈Y} M(i, j) | ≤ ε
Fig. 1. Examples of NBS. Left: the complete collection of NBS patterns holding in the data of Table 1 for ε = 1: ((o1, o2, o3, o4), (p5)), ((o3, o4), (p4, p5)), ((o4), (p1, p5)), ((o1, o2, o3, o4), (p3)), ((o4), (p1, p2)), ((o2), (p2, p3, p4)), ((o1, o2), (p4)), ((o1), (p1, p2, p3, p4)), ((o1, o2, o3), (p1, p2, p3)). Right: a 3D view of the data (legend: Data, NBS 1, NBS 2) in which the two example patterns of Table 1 are marked.
If (X, Y) satisfies C_out and (X, Y) ⪯ (X', Y'), then C_out(X', Y') is also true:

  ∀y ∈ P \ Y, | max_{i∈X, j∈Y∪{y}} M(i, j) − min_{i∈X, j∈Y∪{y}} M(i, j) | > ε
  implies ∀y ∈ P \ Y', | max_{i∈X', j∈Y'∪{y}} M(i, j) − min_{i∈X', j∈Y'∪{y}} M(i, j) | > ε

(the reasoning for the objects in O \ X' is symmetric).
Property 2 (Maximality). The NBS patterns are maximal bi-sets w.r.t. our specialization relation ⪯, i.e., if (X, ⊥_P) and (X, ⊤_P) are two NBS patterns from W_ε, then ⊥_P ⊄ ⊤_P and ⊤_P ⊄ ⊥_P.

Proof. Assume ⊥_P ⊆ ⊤_P. Then (X, ⊥_P) does not satisfy Equation 2, because for y ∈ ⊤_P \ ⊥_P, | max_{i∈X} M(i, y) − min_{i∈X} M(i, y) | ≤ ε.

Property 3 (NBS patterns extending bi-sets of close values). Let I1, I2 ∈ R, I1 ≤ I2, and (X, Y) be a bi-set such that ∀i ∈ X, ∀j ∈ Y, M(i, j) ∈ [I1, I2]. Then, there exists a NBS (U, V) with ε = |I1 − I2| such that X ⊆ U and Y ⊆ V. Thus, if there are bi-sets of which all values are within a small range, there exists at least one NBS pattern which extends it.
Proof. V can be recursively constructed from Y' = Y by adding a property y s.t. y ∈ P \ Y' to Y' if | max_{i∈X, j∈Y'∪{y}} M(i, j) − min_{i∈X, j∈Y'∪{y}} M(i, j) | ≤ ε, continuing until no further property can be added. At the end, Y' = V. After that, we extend in a similar way the set X towards U. By construction, (U, V) is a NBS pattern with ε = |I1 − I2|. Notice that we can have several (U, V) which extend (X, Y).

When ε = 0, the NBS pattern collection contains all maximal bi-sets of identical values. As a result, we get a paving (with overlapping) of the whole dataset.

Property 4 (NBS pattern size is growing with ε). Let (X, Y) be a NBS pattern from W_ε. There exists (X', Y') ∈ W_ε' with ε' > ε such that X ⊆ X' and Y ⊆ Y'.

Proof. The proof is trivial given Property 3.

Corollary 1. As W_0 is paving the data, W_ε is paving the data as well.
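The constructive argument used in this proof translates directly into a greedy extension routine. The sketch below (ours, for illustration) grows a seed bi-set whose values lie within ε into a bi-set that can no longer be extended, i.e., an NBS pattern; since additions can be tried in any order, different orders may yield different extensions (U, V), as noted above.

import numpy as np

def extend(M, X, Y, eps):
    X, Y = set(X), set(Y)
    def spread(rows, cols):
        sub = M[np.ix_(sorted(rows), sorted(cols))]
        return sub.max() - sub.min()
    changed = True
    while changed:
        changed = False
        for y in set(range(M.shape[1])) - Y:
            if spread(X, Y | {y}) <= eps:   # keep the value range within eps
                Y.add(y); changed = True
        for x in set(range(M.shape[0])) - X:
            if spread(X | {x}, Y) <= eps:
                X.add(x); changed = True
    return X, Y   # maximal by construction: no single addition stays within eps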
3 Algorithm
The whole collection of bi-sets ordered by ⪯ forms a lattice whose bottom is (⊥_O, ⊥_P) = (∅, ∅) and top is (⊤_O, ⊤_P) = (O, P). Let us denote by B the set of sublattices of ((∅, ∅), (O, P)) (X is a sublattice of Y if Y is a lattice, X is a subset of Y, and X is a lattice with the same join and meet operations as Y):

  B = {((⊥_O, ⊥_P), (⊤_O, ⊤_P)) s.t. ⊥_O, ⊤_O ∈ 2^O, ⊥_P, ⊤_P ∈ 2^P and ⊥_O ⊆ ⊤_O, ⊥_P ⊆ ⊤_P}

where the first (resp. the second) bi-set is the bottom (resp. the top) element.

Property 5. Let NBS_F = ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) ∈ B. For all (X, Y) ∈ NBS_F we have the following properties:

– e ∈ ⊥_O ⇒ e ∈ X
– e ∈ ⊥_P ⇒ e ∈ Y
– e ∉ ⊤_O ⇒ e ∉ X
– e ∉ ⊤_P ⇒ e ∉ Y
NBS_F = ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) is the set of all the bi-sets (X, Y) s.t. ⊥_O ⊆ X ⊆ ⊤_O and ⊥_P ⊆ Y ⊆ ⊤_P. A sublattice explicitly represents a search space for bi-sets. Our algorithm NBS-Miner explores some of the sublattices of B built by means of three mechanisms: enumeration, pruning and propagation. It starts with the sublattice ((∅, ∅), (O, P)), i.e., the lattice containing all the possible bi-sets. Table 2 introduces the algorithm NBS-Miner. We now provide details about the three mechanisms.

3.1 Candidate Enumeration
The enumeration function recursively splits the current sublattice (the candidate), say NBS_F, into two new sublattices containing all the bi-sets of NBS_F.
Property 6. Let NBS_F = ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) ∈ B and e ∈ ⊤_O \ ⊥_O, then NBS_1 = ((⊥_O ∪ {e}, ⊥_P), (⊤_O, ⊤_P)) and NBS_2 = ((⊥_O, ⊥_P), (⊤_O \ {e}, ⊤_P)) is a partition of NBS_F. NBS_1 contains all the bi-sets of NBS_F which contain e and NBS_2 contains all the bi-sets of NBS_F which do not contain e. If e ∈ ⊤_P \ ⊥_P, NBS_1 = ((⊥_O, ⊥_P ∪ {e}), (⊤_O, ⊤_P)) and NBS_2 = ((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {e})) is a partition of NBS_F as well.

The enumeration function selects an element e ∈ (⊤_O \ ⊥_O) ∪ (⊤_P \ ⊥_P) and it generates two new sublattices. More formally, we use the following functions Enum and Choose. Let Enum : B × (O ∪ P) → B² be such that

  Enum(((⊥_O, ⊥_P), (⊤_O, ⊤_P)), e) =
    (((⊥_O ∪ {e}, ⊥_P), (⊤_O, ⊤_P)), ((⊥_O, ⊥_P), (⊤_O \ {e}, ⊤_P)))   if e ∈ O
    (((⊥_O, ⊥_P ∪ {e}), (⊤_O, ⊤_P)), ((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {e})))   if e ∈ P

where e ∈ ⊤_O \ ⊥_O or e ∈ ⊤_P \ ⊥_P. Enum generates two new sublattices which are a partition of its input parameter. Let Choose : B → O ∪ P be a function which returns one of the elements e ∈ (⊤_O \ ⊥_O) ∪ (⊤_P \ ⊥_P).

3.2 Candidate Pruning
Obviously, we do not want to explore all the bi-sets. We want either to stop the enumeration when one can ensure that no bi-set of NBS_F is a NBS (pruning), or to reduce the search space when a part of NBS_F can be removed without losing any NBS pattern (propagation). The sublattice allows us to compute bounds for any (anti-)monotonic constraint w.r.t. ⪯. For instance, C_min_area(X, Y) ≡ #X × #Y > 20 is a monotonic constraint and C_max_area(X, Y) ≡ #X × #Y < 3 is an anti-monotonic constraint, where #E denotes the size of the set E. If NBS_F = (({o1, o3}, {p1, p2}), ({o1, o2, o3, o4}, {p1, p2, p3, p4})), then none of the bi-sets of NBS_F satisfies C_min_area or C_max_area. Indeed, we have #{o1, o3} × #{p1, p2} > 3 and #{o1, o2, o3, o4} × #{p1, p2, p3, p4} < 20, so no bi-set satisfies C_min_area or C_max_area, whatever the enumeration. Intuitively, monotonic constraints use the top of the sublattice to compute a bound whereas anti-monotonic constraints use its bottom.

For the pruning, we use the following functions. Let Prune_{C^m} : B → {true, false} be a function which returns true iff the monotonic constraint C^m (w.r.t. ⪯) is satisfied by the top of the sublattice:

  Prune_{C^m}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) ≡ C^m(⊤_O, ⊤_P)

If Prune_{C^m}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) is false then none of the bi-sets contained in the sublattice satisfies C^m. Let Prune_{C^am} : B → {true, false} be a function which returns true iff the anti-monotonic constraint C^am (w.r.t. ⪯) is satisfied by the bottom of the sublattice:

  Prune_{C^am}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) ≡ C^am(⊥_O, ⊥_P)
If Prune_{C^am}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) is false then none of the bi-sets contained in the sublattice satisfies C^am.

Let Prune_{C_NBS} : B → {true, false} be the pruning function. Due to Property 1, we have

  Prune_{C_NBS}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) ≡ C_in(⊥_O, ⊥_P) ∧ C_out(⊤_O, ⊤_P)

When Prune_{C_NBS}((⊥_O, ⊥_P), (⊤_O, ⊤_P)) is false then no NBS pattern is contained in the sublattice ((⊥_O, ⊥_P), (⊤_O, ⊤_P)).

3.3 Propagation
The propagation plays another role. It makes it possible to reduce the size of the search space, i.e., to consider not the entire current sublattice NBS_F but a smaller sublattice NBS_P ∈ B such that NBS_P ⊂ NBS_F. For instance, if ((⊥_O ∪ {e1}, ⊥_P), (⊤_O, ⊤_P)) and ((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {e2})) do not contain any NBS pattern, then we can continue the enumeration process with ((⊥_O, ⊥_P ∪ {e2}), (⊤_O \ {e1}, ⊤_P)) instead of NBS_F. C_in and C_out can be used to reduce the size of the sublattices by moving objects of ⊤_O \ ⊥_O into ⊥_O or outside ⊤_O, and similarly for attributes. The functions Prop_in : B → B and Prop_out : B → B are used to do this as follows:

  Prop_in((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = {((⊥¹_O, ⊥¹_P), (⊤_O, ⊤_P)) ∈ B |
    ⊥¹_O = ⊥_O ∪ {x ∈ ⊤_O \ ⊥_O | C_out((⊥_O, ⊥_P), (⊤_O \ {x}, ⊤_P)) is false},
    ⊥¹_P = ⊥_P ∪ {x ∈ ⊤_P \ ⊥_P | C_out((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {x})) is false}}

  Prop_out((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = {((⊥_O, ⊥_P), (⊤¹_O, ⊤¹_P)) ∈ B |
    ⊤¹_O = ⊤_O \ {x ∈ ⊤_O \ ⊥_O | C_in((⊥_O ∪ {x}, ⊥_P), (⊤_O, ⊤_P)) is false},
    ⊤¹_P = ⊤_P \ {x ∈ ⊤_P \ ⊥_P | C_in((⊥_O, ⊥_P ∪ {x}), (⊤_O, ⊤_P)) is false}}

Let Prop : B → B be the function s.t. Prop_in(Prop_out(L)) is recursively applied as long as its result changes. We call a leaf a sublattice L = ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) which contains only one bi-set, i.e., (⊥_O, ⊥_P) = (⊤_O, ⊤_P). NBS patterns are these leaves.

Example 1. Here are examples of the function Prop with the data of Table 1:

– ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = (({o1}, {p1}), ({o1, o2, o3, o4}, {p1, p2, p3, p4, p5}))
  Prop((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = ((⊥_O, ⊥_P), (⊤_O \ {o4}, ⊤_P \ {p5}))
– ((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = (({o1, o2}, {p1}), ({o1, o2, o3}, {p1, p2, p3, p4}))
  Prop_out((⊥_O, ⊥_P), (⊤_O, ⊤_P)) = ((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {p4}))
  Prop_in((⊥_O, ⊥_P), (⊤_O, ⊤_P \ {p4})) = (({o1, o2, o3}, {p1, p2, p3}), ({o1, o2, o3}, {p1, p2, p3}))
J. Besson et al. Table 2. NBS-Miner pseudo-code M is a real valued matrix, C a conjunction of monotonic and anti-monotonic constraints on 2O × 2P and is a positive value. NBS-Miner Generate((∅, ∅), (O, P)) End NBS-Miner Generate(L) Let L = ((⊥O , ⊥P ), (O , P )) L ← P rop(L) If P rune(L) then If (⊥O , ⊥P ) = (O , P ) then (L1 , L2 ) ← Enum(L, Choose(L)) Generate(L1 ) Generate(L2 ) Else Store L End if End if End Generate
4 Experiments
We report a preliminary experimental evaluation of the NBS pattern domain and its implemented solver. We have been considering the "peaks" matrix of Matlab (a 30×30 matrix with values ranging between -10 and +9). We used ε = 4.5 and we obtained 1700 NBS patterns. In Figure 2, we plot in white one extracted NBS. The two axes ranging from 0 to 30 correspond to the two matrix dimensions and the third one indicates their corresponding values (row-column pairs). In a second experiment, we enforced the values inside the extracted patterns to be greater than 1.95 (minimal value constraint). Figure 3 shows the 228 extracted NBS patterns when ε = 0.1. Indeed, the white area corresponds to the union of the 228 extracted patterns.

To study the impact of the ε parameter, we used the malaria dataset [5]. It records the numerical gene expression values of 3 719 genes of Plasmodium falciparum during its complete lifecycle (a time series of 46 biological situations). We used a minimal size constraint on both dimensions, i.e., looking for the NBS patterns (X, Y) s.t. |X| > 4 and |Y| > 4. Furthermore, we have been adding a minimal value constraint. Figure 4 provides the mean and standard deviation of the area of the NBS patterns from this dataset w.r.t. the ε value. As was expected owing to Property 4, the mean area increases with ε.

Figure 5 reports on the number of NBS patterns in the malaria dataset. From ε = 75 to ε = 300, this number decreases. It shows that the size of the NBS
Fig. 2. An example of a NBS pattern (the two horizontal axes, ranging from 0 to 30, are the matrix dimensions; the vertical axis, from −8 to 10, gives the values)

Fig. 3. Examples of extracted NBS (same axes as Figure 2)
pattern collection tends to decrease when ε increases. Intuitively, many patterns are merged together when ε increases, whereas few patterns are extended in a way that generates more than one new pattern. Moreover, the minimal size constraint can explain the increase of the collection size. Finally, when the pattern size increases with ε, new NBS patterns can appear in the collection.
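For readers who want to reproduce this kind of input data, the Matlab "peaks" surface can be approximated with NumPy as follows; the grid bounds and the formula are our own reconstruction of the standard peaks function and are given only as an illustration:

    import numpy as np

    def peaks(n=30):
        """Approximation of Matlab's 'peaks' surface on an n-by-n grid."""
        x, y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
        z = (3 * (1 - x) ** 2 * np.exp(-x ** 2 - (y + 1) ** 2)
             - 10 * (x / 5 - x ** 3 - y ** 5) * np.exp(-x ** 2 - y ** 2)
             - (1 / 3) * np.exp(-(x + 1) ** 2 - y ** 2))
        return z

    M = peaks()  # a 30x30 real-valued matrix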
5 Related Work
[14,6,13] propose to extend classical frequent itemset and association rule definitions for numerical data. In [14], the authors generalize the classical notion of itemset support in 0/1 data when considering other data types, e.g., numerical ones. Support computation requires data normalization, first translating the
Fig. 4. Mean area of the NBS w.r.t. ε (x-axis: epsilon, from 0 to 300; y-axis: mean area, from 0 to 90)

Fig. 5. Collection sizes w.r.t. ε (x-axis: epsilon, from 0 to 300; y-axis: number of NBS, from 200 to 1100)
values to be positive, and then dividing each column entry by the sum of the column entries. After such a treatment, each entry is between 0 and 1, and the sum of the values for a column is equal to 1. The support of an itemset is then computed as the sum, over the rows, of the minimum of the entries of this itemset. If the items have identical values on all the rows, then the support is equal to 1, and the more the items differ, the closer the support value gets to 0. This support function is anti-monotonic, and thus the authors propose to adapt an Apriori algorithm to compute the frequent itemsets according to this new support definition. [6] proposes new methods to measure the support of itemsets in numerical and categorical data. They adapt three well-known correlation measures: Kendall's τ, Spearman's ρ and Spearman's Footrule F. These measures are based on the ranks of the values of the objects for each attribute, not on the
values themselves. They extend these measures to sets of attributes (instead of two variables). Efficient algorithms are proposed. [13] uses an optimization setting for finding association rules in numerical data. The type of extracted association rules is: "if the weighted sum of some variables is greater than a threshold, then a different weighted sum of variables is, with high probability, greater than a second threshold". They propose to use hyperplanes to represent the left-hand and right-hand sides of such rules. Confidence and coverage measures are used. It is unclear whether it is possible to extend these approaches to bi-set computation.

Hartigan proposes a bi-clustering algorithm whose output can be considered as a specific collection of bi-sets [8]. He introduced a partition-based algorithm called "Block Clustering". It splits the original data matrix into bi-sets and uses the variance of the values inside the bi-sets to evaluate the quality of each bi-set. A so-called ideal constant cluster has a variance equal to zero. To avoid partitioning the dataset into bi-sets with only one row and one column (i.e., trivially ideal clusters), the algorithm searches for K bi-sets within the data. The quality of a collection of K bi-sets is taken as the sum of the variances of the K bi-sets. Unfortunately, this approach uses a local optimization procedure which can lead to unstable results.

In [15], the authors propose a method to isolate subspace clusters (bi-sets) containing objects varying similarly on a subset of columns. They propose to compute bi-sets (X, Y) such that, given a, b ∈ X and c, d ∈ Y, the 2 × 2 sub-matrix of (X, Y) on rows a, b and columns c, d satisfies |M(a, c) + M(b, d) − (M(a, d) + M(b, c))| ≤ δ. Intuitively, this constraint enforces that the change of value on the two attributes between the two objects is bounded by δ. Thus, inside the bi-sets, the values follow the same profile. The algorithm first considers all pairs of objects and all pairs of attributes, and then combines them to compute all the bi-sets satisfying this anti-monotonic constraint.

Liu and Wang [9] have proposed an exhaustive bi-cluster enumeration algorithm. They look for order-preserving bi-sets with a minimum number of rows and a minimum number of columns. This means that for each extracted bi-set (X, Y), there exists an order on Y such that, according to this order and for each element of X, the values are increasing. They want to provide all the bi-clusters that, after column reordering, represent coherent evolutions of the symbols in the matrix. This is achieved by using a pattern discovery algorithm heavily inspired by sequential pattern mining algorithms.

These two local pattern types are well defined and efficient solvers are proposed. Notice however that these patterns are not symmetrical: they capture similar variations on one dimension, not similar values. Except for the bi-clustering method of [8], all these methods focus on one of the two dimensions. We have proposed to compute bi-sets with a symmetrical definition, which is one of the main difficulties in bi-set mining. This is indeed one of the lessons from all the previous work on bi-set mining from 0/1 data and, among others, from the several attempts to mine fault-tolerant extensions of formal concepts instead of fault-tolerant itemsets [3].
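As an illustration of the support generalization of [14] recalled at the beginning of this section, a minimal NumPy sketch (ours, not the original authors' code) could look as follows:

    import numpy as np

    def generalized_support(data, itemset):
        """Support of 'itemset' (a list of column indices) in the numerical
        matrix 'data', following the normalization described in [14]."""
        m = data - data.min()        # shift all values so they are non-negative
        m = m / m.sum(axis=0)        # each column now sums to 1
        # Row-wise minimum over the selected columns, summed over all rows.
        return m[:, itemset].min(axis=1).sum()

    # Example (hypothetical data): support of columns {0, 2} in a 5x4 matrix.
    # generalized_support(np.random.rand(5, 4), [0, 2])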
6 Conclusion
Efficient data mining techniques tackle 0/1 data analysis by means of set patterns. It is however common, for instance in the context of gene expression data analysis, that the considered raw data is available as a collection of real numbers. Therefore, using the available algorithms requires a preliminary Boolean property encoding. To overcome such a tedious task, we started to investigate the possibility of mining set patterns directly from the numerical data. We introduced the Numerical Bi-Sets as a new pattern domain, and we discussed some useful properties of NBS patterns. We have described our implemented solver NBS-Miner in quite generic terms, i.e., emphasizing the fundamental operations for the complete computation of NBS patterns. Notice also that other monotonic or anti-monotonic constraints can be used in conjunction with C_in ∧ C_out, i.e., the constraint which specifies the pattern domain. This means that search space pruning can be enhanced for mining real-life datasets, provided that further user-defined constraints are given. The perspectives are obviously related to further experimental validation, especially the study of scalability issues. Furthermore, we still need an in-depth understanding of the complementarity between NBS pattern mining and bi-set mining from 0/1 data.

Acknowledgments. This research is partially funded by the EU contract IQ FP6-516169 (FET arm of the IST programme). J. Besson is funded by INRA (ASC post-doc).
References
1. Becquet, C., Blachon, S., Jeudy, B., Boulicaut, J.-F., Gandrillon, O.: Strong-association-rule mining for large-scale gene-expression data analysis: a case study on human SAGE data. Genome Biology 12 (November 2002)
2. Bergmann, S., Ihmels, J., Barkai, N.: Iterative signature algorithm for the analysis of large-scale gene expression data. Physical Review 67 (March 2003)
3. Besson, J., Pensa, R., Robardet, C., Boulicaut, J.-F.: Constraint-based mining of fault-tolerant patterns from Boolean data. In: Bonchi, F., Boulicaut, J.-F. (eds.) KDID 2005. LNCS, vol. 3933, pp. 55–71. Springer, Heidelberg (2006)
4. Besson, J., Robardet, C., Boulicaut, J.-F., Rome, S.: Constraint-based concept mining and its application to microarray data analysis. Intelligent Data Analysis 9(1), 59–82 (2005)
5. Bozdech, Z., Llinás, M., Pulliam, B., Wong, E., Zhu, J., DeRisi, J.: The transcriptome of the intraerythrocytic developmental cycle of Plasmodium falciparum. PLoS Biology 1(1), 1–16 (2003)
6. Calders, T., Goethals, B., Jaroszewicz, S.: Mining rank correlated sets of numerical attributes. In: Proceedings ACM SIGKDD 2006, Philadelphia, USA, August 2006, pp. 96–105 (2006)
7. Creighton, C., Hanash, S.: Mining gene expression databases for association rules. Bioinformatics 19(1), 79–86 (2002)
8. Hartigan, J.: Direct clustering of a data matrix. Journal of the American Statistical Association 67(337), 123–129 (1972)
9. Liu, J., Wang, W.: OP-Cluster: Clustering by tendency in high dimensional space. In: Proceedings IEEE ICDM 2003, Melbourne, USA, December 2003, pp. 187–194 (2003)
10. Madeira, S.C., Oliveira, A.L.: Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Transactions on Computational Biology and Bioinformatics 1(1), 24–45 (2004)
11. Pensa, R., Boulicaut, J.-F.: Boolean property encoding for local set pattern discovery: an application to gene expression data analysis. In: Morik, K., Boulicaut, J.-F., Siebes, A. (eds.) Local Pattern Detection. LNCS (LNAI), vol. 3539, pp. 114–134. Springer, Heidelberg (2005)
12. Pensa, R.G., Leschi, C., Besson, J., Boulicaut, J.-F.: Assessment of discretization techniques for relevant pattern discovery from gene expression data. In: Proceedings ACM BIOKDD 2004, Seattle, USA, August 2004, pp. 24–30 (2004)
13. Rückert, U., Richter, L., Kramer, S.: Quantitative association rules based on half-spaces: An optimization approach. In: Proceedings IEEE ICDM 2004, Brighton, UK, November 2004, pp. 507–510 (2004)
14. Steinbach, M., Tan, P.-N., Xiong, H., Kumar, V.: Generalizing the notion of support. In: Proceedings ACM SIGKDD 2004, Seattle, USA, pp. 689–694 (2004)
15. Wang, H., Wang, W., Yang, J., Yu, P.S.: Clustering by pattern similarity in large data sets. In: Proceedings ACM SIGMOD 2002, Madison, USA, June 2002, pp. 394–405 (2002)
Extending the Soft Constraint Based Mining Paradigm

Stefano Bistarelli¹,² and Francesco Bonchi³

¹ Dipartimento di Scienze, Università degli Studi "G. D'Annunzio", Pescara, Italy
² Istituto di Informatica e Telematica, CNR, Pisa, Italy
³ Pisa KDD Laboratory, ISTI - C.N.R., Pisa, Italy
[email protected], [email protected]
Abstract. The paradigm of pattern discovery based on constraints has been recognized as a core technique in inductive querying: constraints provide the user with a tool to drive the discovery process towards potentially interesting patterns, with the positive side effect of achieving a more efficient computation. So far the research on this paradigm has mainly focused on the latter aspect: the development of efficient algorithms for the evaluation of constraint-based mining queries. Due to the lack of research on methodological issues, the constraint-based pattern mining framework still suffers from many problems which limit its practical relevance. In our previous work [5], we analyzed such limitations and showed how they all flow from the same source: the fact that in classical constraint-based mining a constraint is a rigid boolean function which returns either true or false. To overcome these limitations we introduced the new paradigm of pattern discovery based on Soft Constraints, and instantiated our idea to fuzzy soft constraints. In this paper we extend the framework to deal with probabilistic and weighted soft constraints: we provide the theoretical basis and a detailed experimental analysis. We also discuss a straightforward solution to deal with top-k queries. Finally we show how the ideas presented in this paper have been implemented in a real Inductive Database system.
1 Introduction
The paradigm of pattern discovery based on constraints was introduced with the aim of providing the user with a tool to drive the discovery process towards potentially interesting patterns, with the positive side effect of achieving a more efficient computation. So far the research on this paradigm has mainly focused on the latter aspect: the study of constraint properties and, on the basis of these properties, the development of efficient algorithms for the evaluation of constraint-based mining queries. Despite such algorithmic research effort, and regardless of some successful applications, e.g., in medical domains [13,18] or in biological domains [4], the constraint-based pattern mining framework still suffers from many problems which limit its practical relevance. In our previous work [5], we analyzed such limitations and showed how they flow out from the
same source: the fact that in classical constraint-based mining, a constraint is a rigid boolean function which returns either true or false. Indeed, interestingness is not a dichotomy. Following this consideration, we introduced in [5] the new paradigm of pattern discovery based on Soft Constraints, where constraints are no longer rigid boolean functions. In particular we adopted a definition of soft constraints based on the mathematical concept of semiring. Albeit based on a simple idea, our proposal has the merit of providing a rigorous theoretical framework, which is very general (having the classical paradigm as a particular instance), and which overcomes all the major methodological drawbacks of the classical constraint-based paradigm, representing a step further towards practical pattern discovery. While in our previous paper we instantiated the framework to the fuzzy semiring, in this paper we extend the framework to deal with the probabilistic and the weighted semirings: these different constraint instances can be used to model different situations, depending on the application at hand. We provide the formal problem definition and the theoretical basis to develop concrete solvers for the mining problems we define. In particular, we show how to build a concrete soft-constraint based pattern discovery system by means of a set of appropriate wrappers around a crisp constraint pattern mining system. The mining system for classical constraint-based pattern discovery that we adopted is ConQueSt, a system which we have developed at the Pisa KDD Laboratory [8]. Such a system is based on a mining engine, a general Apriori-like algorithm which, by means of data reduction and search space pruning, is able to push a wide variety of constraints (practically all kinds of constraints which have been studied and characterized) into the frequent itemset computation. Finally, we discuss how to answer top-k queries.
2 Soft Constraint Based Pattern Mining
Classical constraints (or crisp constraints) are used to discriminate between admissible and non-admissible values for a specific variable (or set of variables). However, sometimes this discrimination does not help to select a set of assignments for the variables (consider for instance over-constrained problems, or constraints that are not discriminating enough). In such cases it is preferable to use soft constraints, where a specific cost/preference is assigned to each variable assignment and the best solution is selected by looking for the least expensive/most preferable complete assignment. Several formalizations of the concept of soft constraints are currently available. In the following, we refer to the formalization based on c-semirings [7]. Using this framework, classical/crisp constraints are represented by using the boolean values true and false, representing the admissible and non-admissible values; when costs or preferences are used, the values are instead taken from a partially ordered set (for instance, the reals, or the interval [0,1]). Moreover, the formalism must provide suitable operations for the combination (×) of constraint satisfaction levels, and for the comparison (+) of patterns under a combination of constraints. This is why this formalization is based on the mathematical concept of semiring.
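To make the algebra concrete, the following Python sketch (ours, purely illustrative) models a c-semiring together with the boolean, fuzzy, probabilistic and weighted instances discussed in this paper; the induced order a ≤S b holds iff a + b = b:

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass(frozen=True)
    class Semiring:
        plus: Callable[[Any, Any], Any]    # comparison / least upper bound
        times: Callable[[Any, Any], Any]   # combination
        zero: Any                          # worst element (minimum of <=_S)
        one: Any                           # best element (maximum of <=_S)

        def leq(self, a, b):
            # Induced order: a <=_S b  iff  a + b = b.
            return self.plus(a, b) == b

    # Boolean (crisp), fuzzy, probabilistic and weighted instances.
    S_bool = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
    S_fuzzy = Semiring(max, min, 0.0, 1.0)
    S_prob = Semiring(max, lambda a, b: a * b, 0.0, 1.0)
    S_weighted = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)

For instance, S_prob.leq(0.35, 0.46) evaluates to True: under the probabilistic instance, a pattern with combined level 0.46 is preferred to one with level 0.35, while under the weighted instance lower values (costs) are preferred.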
Definition 1 (c-semirings [7]). A semiring is a tuple ⟨A, +, ×, 0, 1⟩ such that: A is a set and 0, 1 ∈ A; + is commutative, associative and 0 is its unit element; × is associative, distributes over +, 1 is its unit element and 0 is its absorbing element. A c-semiring ("c" stands for "constraint-based") is a semiring ⟨A, +, ×, 0, 1⟩ such that + is idempotent with 1 as its absorbing element and × is commutative.

Definition 2 (soft constraint on a c-semiring [7]). Given a c-semiring S = ⟨A, +, ×, 0, 1⟩ and an ordered set of variables V over a finite domain D, a constraint is a function which, given an assignment η : V → D of the variables, returns a value of the c-semiring. By using this notation we define C = η → A as the set of all possible constraints that can be built starting from S, D and V.

In the following we always use the word semiring as standing for c-semiring.

Example 1. The following example illustrates the definition of soft constraints based on semirings, using the example mining query
Q : supp_D(X) ≥ 1500 ∧ avg(X.weight) ≤ 5 ∧ sum(X.price) ≥ 20
which requires to mine, from database D, all patterns which are frequent (have a support of at least 1500), have an average weight of at most 5 and a sum of prices of at least 20. In this context, the ordered set of variables V is ⟨supp_D(X), avg(X.weight), sum(X.price)⟩; the domain D is: D(supp_D(X)) = N, D(avg(X.weight)) = R+, and D(sum(X.price)) = N. If we consider the classical crisp framework (i.e., hard constraints), we are on the boolean semiring: S_Bool = ⟨{true, false}, ∨, ∧, false, true⟩. A soft constraint C is a function V → D → A; e.g., supp_D(X) → 1700 → true.

The + operator is what we use to compare the level of constraint satisfaction for various patterns. Let us consider the relation ≤_S (where S stands for the specified semiring) over A such that a ≤_S b iff a + b = b. It is possible to prove that: ≤_S is a partial order; + and × are monotone on ≤_S; 0 is its minimum and 1 its maximum; and ⟨A, ≤_S⟩ is a complete lattice with least upper bound operator +. In the context of pattern discovery, a ≤_S b means that the pattern b is more interesting than a, where interestingness is defined by a combination of soft constraints. When using (soft) constraints it is necessary to specify, via suitable combination operators, how the level of interest of a combination of constraints is obtained from the interest level of each constraint. The combined weight (or interest) of a combination of constraints is computed by using the operator ⊗ : C × C → C defined as (C1 ⊗ C2)η = C1η ×_S C2η.

Example 2. In this example, and in the rest of the paper, we use for patterns the notation p : ⟨v1, v2, v3⟩, where p is an itemset and v1, v2, v3 denote the three values supp_D(p), avg(p.weight), sum(p.price) corresponding to the three constraints in the conjunction in the query Q of Example 1. Consider, for instance, the following three patterns: p1 : ⟨1700, 0.8, 19⟩, p2 : ⟨1550, 4.8, 54⟩, p3 :
⟨1550, 2.2, 26⟩. If we adopt the classical crisp framework, in the mining query Q we have to combine the three constraints using the ∧ operator (which is the × of the boolean semiring S_Bool). Consider for instance the pattern p1 : ⟨1700, 0.8, 19⟩ for the ordered set of variables V = ⟨supp_D(X), avg(X.weight), sum(X.price)⟩. The first and the second constraint are satisfied, leading to the semiring level true, while the third one is not satisfied and has the associated level false. Combining the three values with ∧ we obtain true ∧ true ∧ false = false, and we can conclude that the pattern ⟨1700, 0.8, 19⟩ is not interesting w.r.t. our purposes. Similarly, we compute the level true for pattern p3 : ⟨1550, 2.2, 26⟩, which is therefore interesting w.r.t. our goals.

However, dividing patterns into interesting and non-interesting ones is sometimes neither meaningful nor useful. Most of the time we want to say that each pattern is interesting with a specific level of preference. This idea is at the basis of the soft constraint based pattern mining paradigm [5].

Definition 3 (Soft Constraint Based Pattern Mining). Let P denote the domain of possible patterns. A soft constraint on patterns is a function C : P → A where A is the carrier set of a semiring S = ⟨A, +, ×, 0, 1⟩. Given a combination of soft constraints ⊗C, i.e., a description of what is considered by the user an interesting pattern, we define two different problems:
λ-interesting: given a minimum interest threshold λ ∈ A, it is required to mine the set of all λ-interesting patterns, i.e., {p ∈ P | ⊗C(p) ≥_S λ}.
top-k: given a threshold k ∈ N, it is required to mine the top-k patterns p ∈ P w.r.t. the order ≤_S.

In the rest of the paper we adopt the notation int^P_S(λ) to denote the problem of mining λ-interesting patterns (from pattern domain P) on the semiring S, and similarly top^P_S(k) for the corresponding top-k mining problem. Note that the Soft Constraint Based Pattern Mining paradigm just defined has many degrees of freedom. In particular, it can be instantiated:
1. on the domain of patterns P in analysis (e.g., itemsets, sequences, trees or graphs),
2. on the semiring S = ⟨A, +, ×, 0, 1⟩ (e.g., boolean, fuzzy, weighted or probabilistic), and
3. on one of the two possible mining problems, i.e., λ-interesting or top-k mining.
In other terms, by means of Definition 3, we have defined many different mining problems. It is worth noting that classical constraint-based frequent itemset mining is just a particular instance of our framework: it corresponds to the mining of λ-interesting itemsets on the boolean semiring, where λ = true, i.e., int^I_b(true). In our previous paper [5] we have shown how to deal with the mining problem int^I_f(λ) (i.e., λ-interesting Itemsets on the Fuzzy Semiring); in this paper we show how to extend our framework to deal with (i) int^I_p(λ) (i.e., λ-interesting Itemsets on the Probabilistic Semiring), (ii) int^I_w(λ)
(i.e., λ-interesting Itemsets on the Weighted Semiring), and (iii) mining top-k itemsets on any semiring. The methodology we adopt is based on the property that in a c-semiring S = ⟨A, +, ×, 0, 1⟩ the ×-operator is extensive [7], i.e., a × b ≤_S a for all a, b ∈ A. Thanks to this property, we can easily prune away some patterns from the set of possibly interesting ones. In particular this result directly applies when we want to solve a λ-interesting problem. In fact, for any semiring (fuzzy, weighted, probabilistic) we have that [7]:

Proposition 1. Given a combination of soft constraints ⊗C = C1 ⊗ . . . ⊗ Cn based on a semiring S, for any pattern p ∈ P: ⊗C(p) ≥_S λ ⇒ ∀i ∈ {1, . . . , n} : Ci(p) ≥_S λ.

Proof. Straightforward from the extensivity of ×.

Therefore, computing all the λ-interesting patterns can be done by solving a crisp problem where all the constraint instances with a semiring level lower than λ are assigned level false, and all the instances with a semiring level greater than or equal to λ are assigned level true. In fact, if a pattern does not satisfy such a conjunction of crisp constraints, it cannot be interesting w.r.t. the soft constraints either. Using this theoretical result and some simple arithmetic, we can transform each soft constraint into a corresponding crisp constraint, push the crisp constraint into the mining computation to prune uninteresting patterns, and, when needed, post-process the solution of the crisp problem to remove uninteresting patterns from it.
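A minimal sketch of this soft-to-crisp reduction, reusing the Semiring sketch given earlier, may clarify the two steps; soft_constraints, candidates and lam are our own names, and in practice the crisp relaxation of step 1 would be pushed into a constraint-based miner such as ConQueSt rather than applied to an explicit candidate list:

    def lambda_interesting(semiring, soft_constraints, candidates, lam):
        """Mine the lambda-interesting patterns from 'candidates'."""
        # Step 1: crisp relaxation (necessary condition, by Proposition 1):
        # keep p only if C_i(p) >=_S lambda for every soft constraint C_i.
        def crisp_ok(p):
            return all(semiring.leq(lam, c(p)) for c in soft_constraints)

        survivors = [p for p in candidates if crisp_ok(p)]

        # Step 2: post-process with the combined constraint (x over all C_i).
        def combined(p):
            level = semiring.one
            for c in soft_constraints:
                level = semiring.times(level, c(p))
            return level

        return [p for p in survivors if semiring.leq(lam, combined(p))]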
3 Mining int^I_p(λ) (λ-Interesting Itemsets on the Probabilistic Semiring)
Probabilistic CSPs (Prob-CSPs) were introduced to model situations where each constraint c has a certain probability p(c), independent of the probability of the other constraints, of being part of the given problem (actually, the probability is not of the constraint, but of the situation which corresponds to the constraint: saying that c has probability p means that the situation corresponding to c has probability p of occurring in the real-life problem). Using the probabilistic constraint framework [14], we suppose each constraint to have an independent probability law, and combination is computed by performing the product of the semiring values of the constraint instantiations. As a result, the semiring corresponding to the probabilistic framework is S_P = ⟨[0, 1], max, ×, 0, 1⟩. Consider the graphical representations of the constraints in Figure 1, where the semiring values between 0 and 1 are this time interpreted as probabilities. In this situation, for the pattern p1 = ⟨1700, 0.8, 19⟩ we obtain: C1(p1) = 0.83, C2(p1) = 1 and C3(p1) = 0.45. Since in the probabilistic semiring the
combination operator × is the arithmetic multiplication, we obtain that the interest level of p1 is 0.37. Similarly for p2 and p3:
– p1 : C1 ⊗ C2 ⊗ C3(⟨1700, 0.8, 19⟩) = ×(0.83, 1, 0.45) = 0.37
– p2 : C1 ⊗ C2 ⊗ C3(⟨1550, 4.8, 54⟩) = ×(0.58, 0.6, 1) = 0.35
– p3 : C1 ⊗ C2 ⊗ C3(⟨1550, 2.2, 26⟩) = ×(0.58, 1, 0.8) = 0.46
Therefore, with this particular instance we obtain that p2