Contrastive Clustering
- URL: http://arxiv.org/abs/2009.09687v1
- Date: Mon, 21 Sep 2020 08:54:40 GMT
- Title: Contrastive Clustering
- Authors: Yunfan Li, Peng Hu, Zitao Liu, Dezhong Peng, Joey Tianyi Zhou, Xi Peng
- Abstract summary: We propose Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning.
In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, up to a 19% (39%) improvement over the best baseline.
- Score: 57.71729650297379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a one-stage online clustering method called
Contrastive Clustering (CC), which explicitly performs instance- and
cluster-level contrastive learning. Specifically, for a given dataset,
positive and negative instance pairs are constructed through data augmentations
and then projected into a feature space. Therein, instance- and
cluster-level contrastive learning are conducted in the row and column
spaces, respectively, by maximizing the similarities of positive pairs while minimizing
those of negative ones. Our key observation is that the rows of the feature
matrix could be regarded as soft labels of instances, and accordingly the
columns could be further regarded as cluster representations. By simultaneously
optimizing the instance- and cluster-level contrastive loss, the model jointly
learns representations and cluster assignments in an end-to-end manner.
Extensive experimental results show that CC remarkably outperforms 17
competitive clustering methods on six challenging image benchmarks. In
particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100)
dataset, up to a 19% (39%) improvement over the best baseline.
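To make the row/column idea concrete, below is a minimal PyTorch sketch, not the authors' code: two augmented views of a batch are encoded into N-by-K outputs (K = target cluster number), rows are contrasted as instances, and columns of the softmaxed matrix are contrasted as clusters. The helper name `info_nce` and the temperature values are assumptions; the published method additionally uses separate projection heads and a cluster-size regularizer, omitted here for brevity.

```python
# Minimal sketch of row/column contrastive learning (illustrative only).
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Symmetric InfoNCE over two aligned sets of vectors.

    a[i] and b[i] form a positive pair; all other rows act as negatives.
    """
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    reps = torch.cat([a, b], dim=0)                   # (2N, D)
    sim = reps @ reps.t() / tau                       # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                 # exclude self-pairs
    n = a.size(0)
    # Each row's positive sits at offset n: i <-> i + n.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

def contrastive_clustering_loss(z1: torch.Tensor, z2: torch.Tensor,
                                tau_i: float = 0.5, tau_c: float = 1.0):
    """z1, z2: (N, K) pre-softmax outputs for two views of the same batch."""
    p1, p2 = z1.softmax(dim=1), z2.softmax(dim=1)     # rows act as soft labels
    instance_loss = info_nce(z1, z2, tau_i)           # contrast rows (instances)
    cluster_loss = info_nce(p1.t(), p2.t(), tau_c)    # contrast columns (clusters)
    return instance_loss + cluster_loss

# Usage: after training, row-wise soft labels directly give assignments.
z1, z2 = torch.randn(256, 10), torch.randn(256, 10)  # stand-ins for encoder outputs
loss = contrastive_clustering_loss(z1, z2)
assignments = z1.softmax(dim=1).argmax(dim=1)         # one cluster id per instance
```

Because the assignments come straight from the feature matrix, clustering is one-stage and online, with no separate k-means step after representation learning.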
Related papers
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality-reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in a self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- CLC: Cluster Assignment via Contrastive Representation Learning [9.631532215759256]
We propose Contrastive Learning-based Clustering (CLC), which uses contrastive learning to directly learn cluster assignment.
We achieve 53.4% accuracy on the full ImageNet dataset and outperform existing methods by large margins.
arXiv Detail & Related papers (2023-06-08T07:15:13Z)
- Dynamic Clustering and Cluster Contrastive Learning for Unsupervised Person Re-identification [29.167783500369442]
Unsupervised Re-ID methods aim at learning robust and discriminative features from unlabeled data.
We propose a dynamic clustering and cluster contrastive learning (DCCC) method.
Experiments on several widely used public datasets validate the effectiveness of our proposed DCCC.
arXiv Detail & Related papers (2023-03-13T01:56:53Z)
- C3: Cross-instance guided Contrastive Clustering [8.953252452851862]
Clustering is the task of gathering similar data samples into clusters without using any predefined labels.
We propose a novel contrastive clustering method, Cross-instance guided Contrastive Clustering (C3).
Our proposed method can outperform state-of-the-art algorithms on benchmark computer vision datasets.
arXiv Detail & Related papers (2022-11-14T06:28:07Z)
- Twin Contrastive Learning for Online Clustering [15.9794051341163]
This paper proposes to perform online clustering by conducting twin contrastive learning (TCL) at the instance and cluster level.
We find that when the data is projected into a feature space whose dimensionality equals the target cluster number, the rows and columns of the resulting feature matrix correspond to the instance and cluster representations, respectively.
arXiv Detail & Related papers (2022-10-21T02:12:48Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Exploring Non-Contrastive Representation Learning for Deep Clustering [23.546602131801205]
Non-contrastive representation learning for deep clustering, termed NCC, is based on BYOL, a representative method without negative examples.
NCC forms an embedding space where all clusters are well-separated and within-cluster examples are compact.
Experimental results on several clustering benchmark datasets including ImageNet-1K demonstrate that NCC outperforms the state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2021-11-23T12:21:53Z)
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, the resulting model, TCC, is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
- Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework and apply it to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, a graph-Laplacian-based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments (a generic sketch of a Laplacian term of this flavor follows this list).
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
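The GCC summary above mentions a graph-Laplacian-based loss, but its exact formulation is not given here. The sketch below is therefore only a generic Laplacian smoothness term, tr(P^T (D - W) P), a standard building block that pulls kNN neighbors toward similar soft assignments; the helper names (`knn_affinity`, `laplacian_smoothness`), the choice of k, and all shapes are illustrative assumptions, not GCC's implementation.

```python
# Generic graph-Laplacian smoothness penalty (a common building block,
# not reproduced from the GCC paper). Given soft assignments P (N, K) and
# a kNN affinity graph W (N, N), tr(P^T (D - W) P) encourages neighboring
# samples to receive similar cluster assignments.
import torch

def knn_affinity(x: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Symmetric 0/1 kNN affinity matrix from features x of shape (N, D)."""
    dist = torch.cdist(x, x)                      # pairwise Euclidean distances
    dist.fill_diagonal_(float("inf"))             # disallow self-edges
    idx = dist.topk(k, largest=False).indices     # k nearest neighbors per row
    w = torch.zeros_like(dist).scatter_(1, idx, 1.0)
    return torch.maximum(w, w.t())                # symmetrize the graph

def laplacian_smoothness(p: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """tr(P^T (D - W) P), normalized by the total edge weight."""
    d = torch.diag(w.sum(dim=1))                  # degree matrix D
    lap = d - w                                   # unnormalized Laplacian
    return torch.trace(p.t() @ lap @ p) / w.sum()

x = torch.randn(128, 64)                          # stand-in encoder features
p = torch.randn(128, 10).softmax(dim=1)           # stand-in soft assignments
reg = laplacian_smoothness(p, knn_affinity(x))    # add to a contrastive loss
```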
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.