Ensemble Clustering via Co-association Matrix Self-enhancement
- URL: http://arxiv.org/abs/2205.05937v1
- Date: Thu, 12 May 2022 07:54:32 GMT
- Title: Ensemble Clustering via Co-association Matrix Self-enhancement
- Authors: Yuheng Jia, Sirui Tao, Ran Wang, Yongheng Wang
- Abstract summary: Ensemble clustering integrates a set of base clustering results to generate a stronger one.
Existing methods usually rely on a co-association (CA) matrix that measures how many times two samples are grouped into the same cluster.
We propose a simple yet effective CA matrix self-enhancement framework that can improve the CA matrix to achieve better clustering performance.
- Score: 16.928049559092454
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble clustering integrates a set of base clustering results to generate a
stronger one. Existing methods usually rely on a co-association (CA) matrix
that measures how many times two samples are grouped into the same cluster
according to the base clusterings to achieve ensemble clustering. However, when
the constructed CA matrix is of low quality, the performance will degrade. In
this paper, we propose a simple yet effective CA matrix self-enhancement
framework that can improve the CA matrix to achieve better clustering
performance. Specifically, we first extract the high-confidence (HC)
information from the base clusterings to form a sparse HC matrix. By
propagating the highly-reliable information of the HC matrix to the CA matrix
and complementing the HC matrix according to the CA matrix simultaneously, the
proposed method generates an enhanced CA matrix for better clustering.
Technically, the proposed model is formulated as a symmetric constrained convex
optimization problem, which is efficiently solved by an alternating iterative
algorithm with convergence and global optimum theoretically guaranteed.
Extensive experimental comparisons with twelve state-of-the-art methods on
eight benchmark datasets substantiate the effectiveness, flexibility and
efficiency of the proposed model in ensemble clustering. The codes and datasets
can be downloaded at https://github.com/Siritao/EC-CMS.
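The two building blocks of the abstract, the co-association (CA) matrix and the sparse high-confidence (HC) matrix, can be sketched as follows. This is a minimal illustration of the definitions only, not the paper's enhancement model; the 0.8 confidence threshold and the toy base clusterings are illustrative assumptions.

```python
import numpy as np

def co_association(base_labels):
    """Build the co-association (CA) matrix: entry (i, j) is the
    fraction of base clusterings that put samples i and j in the
    same cluster."""
    base_labels = np.asarray(base_labels)  # shape (m, n): m base clusterings, n samples
    m, n = base_labels.shape
    ca = np.zeros((n, n))
    for labels in base_labels:
        # 1 where a pair of samples shares a cluster in this base clustering
        ca += (labels[:, None] == labels[None, :]).astype(float)
    return ca / m

def high_confidence(ca, threshold=0.8):
    """Keep only the highly reliable entries of the CA matrix,
    forming a sparse high-confidence (HC) matrix (zeros elsewhere).
    The threshold is an assumed illustrative value."""
    return np.where(ca >= threshold, ca, 0.0)

# Three toy base clusterings of five samples (label vectors)
base = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0],
]
ca = co_association(base)
hc = high_confidence(ca, threshold=0.8)
```

Here samples 0 and 1 co-cluster in all three base clusterings, so their CA entry is 1.0 and survives into the HC matrix, while samples 2 and 3 co-cluster in only two of three, so their 2/3 entry is zeroed out as low-confidence. The paper's contribution is then to propagate the reliable HC entries back into the full CA matrix via a convex optimization, which this sketch does not implement.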
Related papers
- Fuzzy K-Means Clustering without Cluster Centroids [79.19713746387337]
Fuzzy K-Means clustering is a critical computation technique in unsupervised data analysis.
This paper proposes a novel Fuzzy K-Means clustering algorithm that entirely eliminates the reliance on cluster centroids.
arXiv Detail & Related papers (2024-04-07T12:25:03Z)
- One-Step Late Fusion Multi-view Clustering with Compressed Subspace [29.02032034647922]
We propose an integrated framework named One-Step Late Fusion Multi-view Clustering with Compressed Subspace (OS-LFMVC-CS)
We use the consensus subspace to align the partition matrix while optimizing the partition fusion, and utilize the fused partition matrix to guide the learning of discrete labels.
arXiv Detail & Related papers (2024-01-03T06:18:30Z)
- Linear time Evidence Accumulation Clustering with KMeans [0.0]
This work describes a trick that mimics the behavior of average linkage clustering.
We found a way to compute the density of a partitioning efficiently, reducing the cost from quadratic to linear complexity.
The k-means results are comparable to the best state of the art in terms of NMI while keeping the computational cost low.
arXiv Detail & Related papers (2023-11-15T14:12:59Z)
- Deep Double Self-Expressive Subspace Clustering [7.875193047472789]
We propose a double self-expressive subspace clustering algorithm.
The proposed algorithm can achieve better clustering than state-of-the-art methods.
arXiv Detail & Related papers (2023-06-20T15:10:35Z)
- Late Fusion Multi-view Clustering via Global and Local Alignment Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- Clustering Ensemble Meets Low-rank Tensor Approximation [50.21581880045667]
This paper explores the problem of clustering ensemble, which aims to combine multiple base clusterings to produce better performance than any individual one.
We propose a novel low-rank tensor approximation-based method to solve the problem from a global perspective.
Experimental results over 7 benchmark data sets show that the proposed model achieves a breakthrough in clustering performance, compared with 12 state-of-the-art methods.
arXiv Detail & Related papers (2020-12-16T13:01:37Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
On a set of 16 data tables generated by a quasi-Monte Carlo experiment, one of the aggregation criteria, using L1 dissimilarity, is compared against hierarchical clustering and a k-means variant: partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.