Representation Learning for Clustering via Building Consensus
- URL: http://arxiv.org/abs/2105.01289v1
- Date: Tue, 4 May 2021 05:04:03 GMT
- Title: Representation Learning for Clustering via Building Consensus
- Authors: Aniket Anand Deshmukh, Jayanth Reddy Regatti, Eren Manavoglu, and Urun Dogan
- Abstract summary: We propose Consensus Clustering using Unsupervised Representation Learning (ConCURL).
ConCURL improves clustering performance over state-of-the-art methods on four out of five image datasets.
We extend the evaluation procedure for clustering to reflect the challenges of real-world clustering tasks.
- Score: 3.7434090710577608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we focus on deep clustering and unsupervised representation
learning for images. Recent advances in deep clustering and unsupervised
representation learning are based on the idea that different views of an input
image (generated through data augmentation techniques) must be closer in the
representation space (exemplar consistency), and/or similar images have a
similar cluster assignment (population consistency). We define an additional
notion of consistency, consensus consistency, which ensures that
representations are learnt to induce similar partitions for variations in the
representation space, different clustering algorithms or different
initializations of a clustering algorithm. We define a clustering loss by
performing variations in the representation space and seamlessly integrate all
three consistencies (consensus, exemplar and population) into an end-to-end
learning framework. The proposed algorithm, Consensus Clustering using
Unsupervised Representation Learning (ConCURL), improves clustering
performance over state-of-the-art methods on four out of five image datasets.
Further, we extend the evaluation procedure for clustering to reflect the
challenges of real-world clustering tasks, such as clustering performance in
the case of distribution shift. We also perform a detailed ablation study for a
deeper understanding of the algorithm.
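The consensus-consistency idea above can be sketched in a few lines: cluster assignments computed in the original representation space should agree with assignments computed under variations of that space. The following is a minimal NumPy sketch, not the authors' implementation; the random linear projections as "variations", the softmax-over-distances soft assignment, and the temperature are illustrative assumptions.

```python
import numpy as np

def soft_assignments(z, centers, temp=1.0):
    # Soft cluster assignment: softmax over negative squared
    # distances from each embedding to each cluster center.
    d = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -d / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def consensus_loss(z, centers, n_views=4, proj_dim=16, seed=0):
    # Consensus consistency (sketch): assignments in the original
    # space serve as the target; assignments under random linear
    # projections of the space should match them (cross-entropy).
    rng = np.random.default_rng(seed)
    target = soft_assignments(z, centers)
    loss = 0.0
    for _ in range(n_views):
        W = rng.normal(size=(z.shape[1], proj_dim)) / np.sqrt(proj_dim)
        p = soft_assignments(z @ W, centers @ W)  # projected-space assignments
        loss += -(target * np.log(p + 1e-12)).sum(axis=1).mean()
    return loss / n_views
```

In the full method this loss would be combined with the exemplar- and population-consistency terms and backpropagated through the encoder; here the encoder and the other two terms are omitted for brevity.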
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations according to the original dataset.
We build anchors from different views based on these representations, which improves the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z) - Reinforcement Graph Clustering with Unknown Cluster Number [91.4861135742095]
We propose a new deep graph clustering method termed Reinforcement Graph Clustering.
In our proposed method, cluster-number determination and unsupervised representation learning are unified into a single framework.
To conduct feedback actions, a clustering-oriented reward function is proposed to enhance cohesion within clusters and separation between clusters.
arXiv Detail & Related papers (2023-08-13T18:12:28Z) - Deep Clustering With Consensus Representations [10.058084837348366]
We introduce the idea of learning consensus representations for heterogeneous clusterings, a novel notion to approach consensus clustering.
We propose DECCS, the first deep clustering method that jointly improves the representation and clustering results of multiple heterogeneous clustering algorithms.
arXiv Detail & Related papers (2022-10-13T14:40:48Z) - ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z) - Clustering by Maximizing Mutual Information Across Views [62.21716612888669]
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
Our method significantly outperforms state-of-the-art single-stage clustering methods across a variety of image datasets.
arXiv Detail & Related papers (2021-07-24T15:36:49Z) - Learning the Precise Feature for Cluster Assignment [39.320210567860485]
We propose a framework which integrates representation learning and clustering into a single pipeline for the first time.
The proposed framework exploits the powerful ability of recently developed generative models for learning intrinsic features.
Experimental results show that the performance of the proposed method is superior, or at least comparable to, the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-11T04:08:54Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Unsupervised Visual Representation Learning by Online Constrained K-Means [44.38989920488318]
Cluster discrimination is an effective pretext task for unsupervised representation learning.
We propose a novel clustering-based pretext task with online Constrained K-means (CoKe).
Our online assignment method has a theoretical guarantee to approach the global optimum.
arXiv Detail & Related papers (2021-05-24T20:38:32Z) - Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework, which is then applied to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
arXiv Detail & Related papers (2021-04-03T15:32:49Z) - Consensus Clustering With Unsupervised Representation Learning [4.164845768197489]
We study the clustering ability of Bootstrap Your Own Latent (BYOL) and observe that features learnt using BYOL may not be optimal for clustering.
We propose a novel consensus clustering based loss function, and train BYOL with the proposed loss in an end-to-end way that improves the clustering ability and outperforms similar clustering based methods.
arXiv Detail & Related papers (2020-10-03T01:16:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.