Deep Clustering by Semantic Contrastive Learning
- URL: http://arxiv.org/abs/2103.02662v1
- Date: Wed, 3 Mar 2021 20:20:48 GMT
- Title: Deep Clustering by Semantic Contrastive Learning
- Authors: Jiabo Huang and Shaogang Gong
- Abstract summary: We introduce a novel variant of contrastive learning called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
- Score: 67.28140787010447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Whilst contrastive learning has achieved remarkable success in
self-supervised representation learning, its potential for deep clustering
remains unknown. This is due to its fundamental limitation that the instance
discrimination strategy it takes is not class sensitive and hence unable to
reason about the underlying decision boundaries between semantic concepts or
classes. In this work, we solve this problem by introducing a novel variant
called Semantic Contrastive Learning (SCL). It explores the characteristics of
both conventional contrastive learning and deep clustering by imposing
distance-based cluster structures on unlabelled training data and also
introducing a discriminative contrastive loss formulation. For explicitly
modelling class boundaries on-the-fly, we further formulate a clustering
consistency condition on the two different predictions given by visual
similarities and semantic decision boundaries. By advancing implicit
representation learning towards explicit understandings of visual semantics,
SCL can amplify jointly the strengths of contrastive learning and deep
clustering in a unified approach. Extensive experiments show that the proposed
model outperforms the state-of-the-art deep clustering methods on six
challenging object recognition benchmarks, especially on finer-grained and
larger datasets.
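The abstract names two concrete ingredients: a discriminative contrastive loss over unlabelled instances, and a consistency condition between the cluster assignment implied by visual similarity (distance to cluster structures) and the one implied by semantic decision boundaries (a classifier). The PyTorch sketch below is a minimal, hedged reading of that combination; the function names, the KL-based consistency term, and the temperature values are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the two ingredients described in the abstract:
# (1) an instance-level contrastive (InfoNCE) loss, and
# (2) a consistency term between cluster assignments derived from visual
#     similarity (nearest centroid) and from a learned classifier head.
# All names and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard instance-discrimination loss over two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def assignment_consistency(z, logits, centroids, temperature=0.1):
    """Encourage the classifier's prediction (semantic boundary) to agree
    with the nearest-centroid prediction (visual similarity)."""
    z = F.normalize(z, dim=1)
    c = F.normalize(centroids, dim=1)
    sim_assign = F.softmax(z @ c.t() / temperature, dim=1)   # distance-based
    cls_assign = F.log_softmax(logits, dim=1)                # boundary-based
    return F.kl_div(cls_assign, sim_assign, reduction="batchmean")

# Usage with random tensors standing in for encoder outputs.
N, D, K = 32, 128, 10
z1, z2 = torch.randn(N, D), torch.randn(N, D)
centroids = torch.randn(K, D, requires_grad=True)   # assumed learnable centres
logits = torch.randn(N, K, requires_grad=True)      # assumed classifier output
loss = info_nce(z1, z2) + assignment_consistency(z1, logits, centroids)
```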
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for its limitations is the lack of a clustering-friendly property in the embedding space.
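As a concrete illustration of clustering-based pseudo-labeling, the sketch below clusters unlabelled embeddings with k-means and samples few-shot tasks from the resulting cluster ids, in the spirit of CACTUs. The cluster count, embedding source, and task sampler are assumptions for illustration.

```python
# Hedged sketch of clustering-based pseudo-labeling: treat k-means cluster
# ids over unlabelled embeddings as pseudo-labels, then sample few-shot
# tasks from them. Cluster count and sampler are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))   # stand-in for learned embeddings

pseudo_labels = KMeans(n_clusters=50, n_init=10,
                       random_state=0).fit_predict(embeddings)

def sample_task(labels, n_way=5, k_shot=1, rng=rng):
    """Sample an n-way, k-shot task using pseudo-labels as classes."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    return {c: rng.choice(np.where(labels == c)[0], size=k_shot, replace=False)
            for c in classes}

print(sample_task(pseudo_labels))
```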
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Cluster-based Contrastive Disentangling for Generalized Zero-Shot
Learning [25.92340532509084]
Generalized Zero-Shot Learning (GZSL) aims to recognize both seen and unseen classes while training on only the seen classes.
We propose a Cluster-based Contrastive Disentangling (CCD) method to improve GZSL by alleviating the semantic gap and domain shift problems.
arXiv Detail & Related papers (2022-03-05T02:50:12Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - Cluster Analysis with Deep Embeddings and Contrastive Learning [0.0]
This work proposes a novel framework for performing image clustering from deep embeddings.
Our approach jointly learns representations and predicts cluster centers in an end-to-end manner.
Our framework performs on par with widely accepted clustering methods and outperforms the state-of-the-art contrastive learning method on the CIFAR-10 dataset.
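The summary says representations and cluster centers are learned jointly, end to end. One common way to realise this is a Deep-Embedded-Clustering-style soft assignment with learnable centers; whether this matches the paper's exact formulation is an assumption, so read the sketch as illustrating the general idea only.

```python
# Hedged sketch of end-to-end cluster-center learning in the style of
# Deep Embedded Clustering (an assumption, not necessarily this paper's
# exact formulation): soft-assign embeddings to learnable centres and
# match a sharpened target distribution.
import torch

def soft_assign(z, centroids, alpha=1.0):
    """Student's-t soft assignment of embeddings to learnable centres."""
    d2 = torch.cdist(z, centroids).pow(2)
    q = (1.0 + d2 / alpha).pow(-(alpha + 1) / 2)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpened targets that emphasise high-confidence assignments."""
    p = q.pow(2) / q.sum(dim=0)
    return p / p.sum(dim=1, keepdim=True)

z = torch.randn(32, 16, requires_grad=True)        # stand-in embeddings
centroids = torch.randn(10, 16, requires_grad=True)
q = soft_assign(z, centroids)
loss = torch.nn.functional.kl_div(q.log(), target_distribution(q).detach(),
                                  reduction="batchmean")
loss.backward()   # gradients flow to both embeddings and centres
```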
arXiv Detail & Related papers (2021-09-26T22:18:15Z) - Learning the Precise Feature for Cluster Assignment [39.320210567860485]
We propose a framework which integrates representation learning and clustering into a single pipeline for the first time.
The proposed framework exploits the powerful ability of recently developed generative models for learning intrinsic features.
Experimental results show that the proposed method performs better than, or at least comparably to, state-of-the-art methods.
arXiv Detail & Related papers (2021-06-11T04:08:54Z) - Clustering-friendly Representation Learning via Instance Discrimination
and Feature Decorrelation [0.0]
We propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation.
In evaluations of image clustering on CIFAR-10 and ImageNet-10, our method achieves accuracies of 81.5% and 95.4%, respectively.
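A plausible reading of combining instance discrimination with feature decorrelation is an InfoNCE term plus a penalty on the off-diagonal entries of the feature correlation matrix, sketched below; the weight `lam` and the exact decorrelation form are assumptions, not the paper's formulation.

```python
# Hedged sketch: instance discrimination (InfoNCE) combined with a feature
# decorrelation penalty. The weighting `lam` and the penalty's exact form
# are illustrative assumptions.
import torch
import torch.nn.functional as F

def decorrelation_loss(z):
    """Penalise off-diagonal correlations between feature dimensions."""
    z = (z - z.mean(0)) / (z.std(0) + 1e-6)
    corr = (z.t() @ z) / z.size(0)                    # (D, D) correlation
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return off_diag.pow(2).sum()

def total_loss(z1, z2, lam=0.01, temperature=0.1):
    z1n, z2n = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1n @ z2n.t() / temperature              # instance discrimination
    targets = torch.arange(z1.size(0), device=z1.device)
    instance = F.cross_entropy(logits, targets)
    return instance + lam * (decorrelation_loss(z1) + decorrelation_loss(z2))

loss = total_loss(torch.randn(32, 64), torch.randn(32, 64))
```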
arXiv Detail & Related papers (2021-05-31T22:59:31Z) - Deep Fair Discriminative Clustering [24.237000220172906]
We study a general notion of group-level fairness for binary and multi-state protected status variables (PSVs).
We propose a refinement learning algorithm to combine the clustering goal with the fairness objective to learn fair clusters adaptively.
Our framework shows promising results for novel clustering tasks including flexible fairness constraints, multi-state PSVs and predictive clustering.
arXiv Detail & Related papers (2021-05-28T23:50:48Z) - Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
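One hedged interpretation of a "median triplet loss" that addresses both failure modes: rather than the hardest negative (which risks over-clustering on false negatives) or a random one (which under-clusters), use the negative at the median similarity rank. The margin and ranking details below are illustrative assumptions.

```python
# Hedged sketch of a median triplet loss: select the negative at the median
# similarity rank instead of the hardest or a random one. Margin value and
# ranking scheme are illustrative assumptions.
import torch
import torch.nn.functional as F

def median_triplet_loss(anchor, positive, negatives, margin=0.2):
    """anchor/positive: (D,) embeddings; negatives: (M, D) embeddings."""
    a = F.normalize(anchor, dim=0)
    p = F.normalize(positive, dim=0)
    n = F.normalize(negatives, dim=1)
    neg_sims = n @ a                                   # (M,) similarities
    median_neg = neg_sims.sort().values[neg_sims.numel() // 2]
    pos_sim = torch.dot(a, p)
    return F.relu(margin + median_neg - pos_sim)

loss = median_triplet_loss(torch.randn(64), torch.randn(64),
                           torch.randn(100, 64))
```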
arXiv Detail & Related papers (2021-04-18T07:47:10Z) - Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework, which we apply to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
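A graph Laplacian based regulariser is commonly written as tr(Z^T L Z), which pulls together representations of samples joined in a neighbourhood graph; the toy kNN graph construction below is an assumption for illustration, not GCC's exact loss.

```python
# Hedged sketch of a graph-Laplacian regulariser, tr(Z^T L Z), which
# penalises distance between graph-connected representations. The kNN
# graph construction is a toy assumption.
import torch

def laplacian_penalty(z, adjacency):
    """adjacency: (N, N) symmetric, non-negative edge weights."""
    degree = torch.diag(adjacency.sum(dim=1))
    laplacian = degree - adjacency
    return torch.trace(z.t() @ laplacian @ z) / z.size(0)

z = torch.randn(32, 64, requires_grad=True)          # stand-in features
# Toy kNN-style graph: connect each sample to its 3 nearest neighbours.
sim = z @ z.t()
knn = sim.topk(k=4, dim=1).indices[:, 1:]            # skip self-similarity
adj = torch.zeros(32, 32)
adj.scatter_(1, knn, 1.0)
adj = ((adj + adj.t()) > 0).float()                  # symmetrise the graph
penalty = laplacian_penalty(z, adj)                  # differentiable in z
```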
arXiv Detail & Related papers (2021-04-03T15:32:49Z)