Interpretable Sequence Clustering
- URL: http://arxiv.org/abs/2309.01140v1
- Date: Sun, 3 Sep 2023 11:31:44 GMT
- Title: Interpretable Sequence Clustering
- Authors: Junjie Dong, Xinyi Yang, Mudi Jiang, Lianyu Hu and Zengyou He
- Abstract summary: We propose a method called Interpretable Sequence Clustering Tree (ISCT)
ISCT leverages k-1 patterns to generate k leaf nodes, corresponding to k clusters, providing an intuitive explanation of how each cluster is formed.
Experimental results on 14 real-world data sets demonstrate that our proposed method provides an interpretable tree structure.
- Score: 3.280979689839737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Categorical sequence clustering plays a crucial role in various fields, but
the lack of interpretability in cluster assignments poses significant
challenges. Sequences inherently lack explicit features, and existing sequence
clustering algorithms heavily rely on complex representations, making it
difficult to explain their results. To address this issue, we propose a method
called Interpretable Sequence Clustering Tree (ISCT), which combines sequential
patterns with a concise and interpretable tree structure. ISCT leverages k-1
patterns to generate k leaf nodes, corresponding to k clusters, which provides
an intuitive explanation on how each cluster is formed. More precisely, ISCT
first projects sequences into random subspaces and then utilizes the k-means
algorithm to obtain high-quality initial cluster assignments. Subsequently, it
constructs a pattern-based decision tree using a boosting-based construction
strategy in which sequences are re-projected and re-clustered at each node
before mining the top-1 discriminative splitting pattern. Experimental results
on 14 real-world data sets demonstrate that our proposed method provides an
interpretable tree structure while delivering fast and accurate cluster
assignments.
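The abstract outlines a concrete pipeline: random-subspace projection, k-means initialization, and a tree built by splitting on one discriminative pattern per node. The sketch below illustrates that flow under loose assumptions; the random "subspaces" are approximated as random symbol subsets, the top-1 discriminative pattern is approximated by the single symbol that best separates the provisional clusters, and helper names such as `project_random_subspaces` and `top_pattern` are hypothetical, not the authors' code.

```python
# Rough sketch of an ISCT-style pipeline (not the authors' implementation).
import random
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans


def project_random_subspaces(seqs, n_subspaces=20, subset_size=3, seed=0):
    """Map each categorical sequence to counts over random symbol subsets."""
    rng = random.Random(seed)
    alphabet = sorted({s for seq in seqs for s in seq})
    subspaces = [rng.sample(alphabet, min(subset_size, len(alphabet)))
                 for _ in range(n_subspaces)]
    feats = np.zeros((len(seqs), n_subspaces))
    for i, seq in enumerate(seqs):
        counts = Counter(seq)
        for j, subset in enumerate(subspaces):
            feats[i, j] = sum(counts[sym] for sym in subset)
    return feats


def top_pattern(seqs, labels):
    """Pick the symbol whose presence best separates the two provisional clusters."""
    best, best_gap = None, -1.0
    labels = np.asarray(labels)
    for sym in {s for seq in seqs for s in seq}:
        present = np.array([sym in seq for seq in seqs])
        if present.all() or (~present).all():
            continue
        # crude "discriminativeness": difference in cluster-1 rate with vs. without sym
        gap = abs(labels[present].mean() - labels[~present].mean())
        if gap > best_gap:
            best, best_gap = sym, gap
    return best


def build_tree(seqs, k, depth=0):
    """Recursively split until (about) k leaves; each split re-projects and re-clusters.
    May stop early if no splitting pattern can be found."""
    if k == 1 or len(seqs) < 2:
        return {"leaf": True, "sequences": seqs}
    feats = project_random_subspaces(seqs, seed=depth)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    pattern = top_pattern(seqs, labels)
    if pattern is None:
        return {"leaf": True, "sequences": seqs}
    left = [s for s in seqs if pattern in s]
    right = [s for s in seqs if pattern not in s]
    if not left or not right:
        return {"leaf": True, "sequences": seqs}
    k_left = min(max(1, round(k * len(left) / len(seqs))), k - 1)
    return {"leaf": False, "pattern": pattern,
            "contains": build_tree(left, k_left, depth + 1),
            "lacks": build_tree(right, k - k_left, depth + 1)}


toy = [list("aabba"), list("abab"), list("aaba"), list("ccdc"), list("cdcd"), list("dccd")]
tree = build_tree(toy, k=2)
print(tree.get("pattern"))  # the single splitting pattern used for k = 2 clusters
```

The real ISCT mines sequential patterns and orders the splits with a boosting-based strategy; the sketch only preserves the overall project, cluster, and split-by-pattern structure.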
Related papers
- Deep Embedding Clustering Driven by Sample Stability [16.53706617383543]
We propose a deep embedding clustering algorithm driven by sample stability (DECS).
Specifically, we start by constructing the initial feature space with an autoencoder and then learn a cluster-oriented embedding constrained by sample stability (the initialization step is sketched after this entry).
The experimental results on five datasets illustrate that the proposed method achieves superior performance compared to state-of-the-art clustering approaches.
arXiv Detail & Related papers (2024-01-29T09:19:49Z)
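A generic version of the "autoencoder feature space, then cluster" initialization mentioned in the entry above can be sketched as follows. This is a minimal illustration assuming plain tabular inputs; it omits the sample-stability constraint that DECS actually uses to refine the embedding.

```python
# Autoencoder pretraining followed by k-means on the embedding (a generic
# initialization sketch, not the DECS algorithm itself).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class AutoEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def init_clusters(x, n_clusters, epochs=100, lr=1e-3):
    """Pretrain with a reconstruction loss, then run k-means on the embeddings."""
    model = AutoEncoder(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        _, recon = model(x)
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        z, _ = model(x)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())


labels = init_clusters(torch.randn(200, 30), n_clusters=5)
```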
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches the instance-specific lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product rather than, for example, by minimum distance or within-cluster variance (a minimal sketch follows this entry).
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
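The recommended merge rule, joining the pair of clusters with the largest average pairwise dot product, is easy to state in code. A minimal sketch, not the authors' implementation, and quadratic in the number of clusters per merge:

```python
# Agglomerative clustering that merges by maximum average dot product.
import numpy as np


def dot_product_agglomeration(X, n_clusters):
    clusters = [[i] for i in range(len(X))]   # start with singleton clusters
    G = X @ X.T                               # all pairwise dot products
    merges = []
    while len(clusters) > n_clusters:
        best, best_score = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average dot product between members of the two clusters
                score = G[np.ix_(clusters[a], clusters[b])].mean()
                if score > best_score:
                    best, best_score = (a, b), score
        a, b = best
        merges.append((clusters[a], clusters[b]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters, merges


X = np.random.default_rng(0).normal(size=(12, 4))
parts, history = dot_product_agglomeration(X, n_clusters=3)
```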
- k-MS: A novel clustering algorithm based on morphological reconstruction [0.0]
k-MS is faster than the CPU-parallel k-Means in worst-case scenarios.
It is also faster than similar clustering methods that are sensitive to density and shapes, such as Mitosis and TRICLUST.
arXiv Detail & Related papers (2022-08-30T16:55:21Z)
- Perfect Spectral Clustering with Discrete Covariates [68.8204255655161]
We propose a spectral algorithm that achieves perfect clustering with high probability on a class of large, sparse networks.
Our method is the first to offer a guarantee of consistent latent structure recovery using spectral clustering.
arXiv Detail & Related papers (2022-05-17T01:41:06Z)
- Natural Hierarchical Cluster Analysis by Nearest Neighbors with Near-Linear Time Complexity [0.0]
We propose a nearest neighbor based clustering algorithm that results in a naturally defined hierarchy of clusters.
In contrast to agglomerative and divisive hierarchical clustering algorithms, our approach does not depend on iterative refinement.
arXiv Detail & Related papers (2022-03-15T16:03:42Z)
- Fast and explainable clustering based on sorting [0.0]
We introduce a fast and explainable clustering method called CLASSIX.
The algorithm is controlled by two scalar parameters, namely a distance parameter for the aggregation and another parameter controlling the minimal cluster size (a simplified sketch of the aggregation step follows this entry).
Our experiments demonstrate that CLASSIX competes with state-of-the-art clustering algorithms.
arXiv Detail & Related papers (2022-02-03T08:24:21Z)
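A simplified illustration of the sorting-based aggregation idea referenced in the entry above: points are sorted along a principal direction and swept once, with a distance parameter controlling aggregation and a second parameter controlling the minimal cluster size. This is a rough sketch under those assumptions, not the actual CLASSIX implementation.

```python
# Greedy single-sweep aggregation after sorting (sketch only; the real CLASSIX
# additionally merges overlapping groups instead of discarding tiny ones).
import numpy as np


def sorted_aggregation(X, radius=0.5, min_size=2):
    # sort points by their projection onto the first principal direction
    Xc = X - X.mean(axis=0)
    direction = np.linalg.svd(Xc, full_matrices=False)[2][0]
    order = np.argsort(Xc @ direction)
    labels = -np.ones(len(X), dtype=int)
    cluster = 0
    for pos, i in enumerate(order):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        # later points within `radius` of the group's starting point join it
        for j in order[pos + 1:]:
            if labels[j] == -1 and np.linalg.norm(X[i] - X[j]) <= radius:
                labels[j] = cluster
        cluster += 1
    # demote groups below the minimal size to "noise" (-1) in this sketch
    for c in range(cluster):
        if (labels == c).sum() < min_size:
            labels[labels == c] = -1
    return labels


rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
print(np.unique(sorted_aggregation(X, radius=0.7)))
```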
- Hierarchical clustering by aggregating representatives in sub-minimum-spanning-trees [5.877624540482919]
We propose a novel hierarchical clustering algorithm that effectively detects representative points while building the clustering dendrogram.
Under our analysis, the proposed algorithm has O(n log n) time complexity and O(log n) space complexity, indicating that it scales to massive data.
arXiv Detail & Related papers (2021-11-11T07:36:55Z)
- Self-supervised Contrastive Attributed Graph Clustering [110.52694943592974]
We propose a novel attributed graph clustering network, namely Self-supervised Contrastive Attributed Graph Clustering (SCAGC).
In SCAGC, a self-supervised contrastive loss is designed for node representation learning by leveraging inaccurate clustering labels.
For out-of-sample (OOS) nodes, SCAGC can directly calculate their clustering labels.
arXiv Detail & Related papers (2021-10-15T03:25:28Z)
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, the resulting model, TCC, is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.