Semi-supervised learning made simple with self-supervised clustering
- URL: http://arxiv.org/abs/2306.07483v1
- Date: Tue, 13 Jun 2023 01:09:18 GMT
- Title: Semi-supervised learning made simple with self-supervised clustering
- Authors: Enrico Fini and Pietro Astolfi and Karteek Alahari and Xavier
Alameda-Pineda and Julien Mairal and Moin Nabi and Elisa Ricci
- Abstract summary: Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
- Score: 65.98152950607707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning models have been shown to learn rich visual
representations without requiring human annotations. However, in many
real-world scenarios, labels are partially available, motivating a recent line
of work on semi-supervised methods inspired by self-supervised principles. In
this paper, we propose a conceptually simple yet empirically powerful approach
to turn clustering-based self-supervised methods such as SwAV or DINO into
semi-supervised learners. More precisely, we introduce a multi-task framework
merging a supervised objective using ground-truth labels and a self-supervised
objective relying on clustering assignments with a single cross-entropy loss.
This approach may be interpreted as constraining the cluster centroids to act as class
prototypes. Despite its simplicity, we provide empirical evidence that our
approach is highly effective and achieves state-of-the-art performance on
CIFAR100 and ImageNet.
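To make the abstract's mechanism concrete, below is a minimal PyTorch sketch of the idea as described: labeled and unlabeled samples share one set of prototype vectors, and both objectives reduce to a cross-entropy against those prototypes. The function and tensor names are illustrative rather than the authors' code, and the plain softmax targets stand in for the Sinkhorn-style cluster assignments that SwAV-like methods actually compute.

```python
import torch
import torch.nn.functional as F

def joint_prototype_loss(z_view1, z_view2, labels, prototypes, temp=0.1):
    """Hypothetical sketch: one cross-entropy over shared prototypes.

    z_view1, z_view2: L2-normalized embeddings of two augmentations, (B, D).
    labels: class ids, with -1 marking unlabeled samples, (B,).
    prototypes: L2-normalized vectors serving as both cluster centroids
        and class prototypes, (K, D) with K >= number of classes.
    """
    logits1 = z_view1 @ prototypes.t() / temp  # (B, K)
    logits2 = z_view2 @ prototypes.t() / temp

    labeled = labels >= 0
    loss = torch.zeros((), device=z_view1.device)

    # Supervised term: ground-truth labels index their class prototype.
    if labeled.any():
        loss = loss + F.cross_entropy(logits2[labeled], labels[labeled])

    # Self-supervised term: the soft cluster assignment of one view is the
    # cross-entropy target for the other view (stop-gradient on the target).
    if (~labeled).any():
        with torch.no_grad():
            targets = F.softmax(logits1[~labeled], dim=-1)
        log_p = F.log_softmax(logits2[~labeled], dim=-1)
        loss = loss + (-(targets * log_p).sum(-1).mean())

    return loss
```

Because both terms are cross-entropies over the same prototype logits, the two objectives can indeed be folded into a single loss, which is the simplification the abstract emphasizes.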
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- A Bayesian Unification of Self-Supervised Clustering and Energy-Based Models [11.007541337967027]
We perform a Bayesian analysis of state-of-the-art self-supervised learning objectives.
We show that our objective function makes it possible to outperform existing self-supervised learning strategies.
We also demonstrate that GEDI can be integrated into a neuro-symbolic framework.
arXiv Detail & Related papers (2023-12-30T04:46:16Z)
- Representation Learning via Consistent Assignment of Views to Clusters [0.7614628596146599]
Consistent Assignment for Representation Learning (CARL) is an unsupervised learning method to learn visual representations.
By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations by learning a set of general prototypes.
Unlike contemporary work on contrastive learning with deep clustering, CARL proposes to learn the set of general prototypes in an online fashion.
arXiv Detail & Related papers (2021-12-31T12:59:23Z)
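As a rough illustration of CARL's consistent-assignment idea, the sketch below assigns two views of the same image to a shared set of learnable prototypes and penalizes disagreement; it is a hypothetical reconstruction from the summary above, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def consistent_assignment_loss(z1, z2, prototypes, temp=0.1):
    """Hypothetical sketch: two views should receive the same soft
    assignment over a set of general prototypes."""
    p1 = F.softmax(z1 @ prototypes.t() / temp, dim=-1)  # (B, K)
    p2 = F.softmax(z2 @ prototypes.t() / temp, dim=-1)
    # Symmetric cross-view prediction, stop-gradient on the target side.
    loss_12 = -(p1.detach() * torch.log(p2 + 1e-8)).sum(-1).mean()
    loss_21 = -(p2.detach() * torch.log(p1 + 1e-8)).sum(-1).mean()
    return 0.5 * (loss_12 + loss_21)
```

Learning the prototypes "in an online fashion" amounts, in this sketch, to making `prototypes` an `nn.Parameter` updated by the same optimizer as the encoder, rather than recomputing it with offline k-means.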
- Towards the Generalization of Contrastive Self-Supervised Learning [11.889992921445849]
We present a theoretical explanation of how contrastive self-supervised pre-trained models generalize to downstream tasks.
We further explore SimCLR and Barlow Twins, which are two canonical contrastive self-supervised methods.
arXiv Detail & Related papers (2021-11-01T07:39:38Z)
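Since the analysis above singles out SimCLR and Barlow Twins, here is the standard Barlow Twins objective as a compact reference point. This is a minimal reimplementation sketch, not code from the paper.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Standard Barlow Twins objective (simplified): align the two views
    dimension-wise while decorrelating different embedding dimensions.
    z1, z2: embeddings of two augmented views, shape (N, D)."""
    N, _ = z1.shape
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / N  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()  # push diagonal to 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # rest to 0
    return on_diag + lam * off_diag
```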
- Hybrid Dynamic Contrast and Probability Distillation for Unsupervised Person Re-Id [109.1730454118532]
Unsupervised person re-identification (Re-Id) has attracted increasing attention due to its practical applications in real-world video surveillance systems.
We present the hybrid dynamic cluster contrast and probability distillation algorithm.
It formulates the unsupervised Re-Id problem as a unified local-to-global dynamic contrastive learning and self-supervised probability distillation framework.
arXiv Detail & Related papers (2021-09-29T02:56:45Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
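The mechanism described above is essentially a supervised contrastive term; a generic sketch of such a loss follows (not the paper's exact formulation, and the debiasing-specific details are omitted).

```python
import torch
import torch.nn.functional as F

def same_class_contrastive_loss(z, labels, temp=0.1):
    """Generic supervised-contrastive sketch: instances sharing a class
    label are pulled together relative to all other instances."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temp  # (N, N) pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    # Log-probability of each pair, with self-similarity excluded
    # from the softmax denominator.
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    # Average over each instance's positives (clamp avoids division by zero).
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```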
- Information Maximization Clustering via Multi-View Self-Labelling [9.947717243638289]
We propose a novel single-phase clustering method that simultaneously learns meaningful representations and assigns the corresponding annotations.
This is achieved by integrating a discrete representation into the self-supervised paradigm through a network.
Our empirical results show that the proposed framework outperforms state-of-the-art techniques, with average accuracies of 89.1% and 49.0% on its two evaluation benchmarks, respectively.
arXiv Detail & Related papers (2021-03-12T16:04:41Z)
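For context on the information-maximization objective behind clustering methods like the one above: the classic criterion balances confident per-sample assignments against balanced cluster usage. A minimal sketch follows; the multi-view self-labelling component is not shown.

```python
import torch
import torch.nn.functional as F

def info_max_loss(logits):
    """Classic mutual-information clustering criterion (sketch only).
    logits: network outputs over K clusters, shape (N, K)."""
    p = F.softmax(logits, dim=-1)
    # Conditional entropy H(y|x): low when each sample is assigned confidently.
    cond_ent = -(p * torch.log(p + 1e-8)).sum(-1).mean()
    # Marginal entropy H(y): high when all clusters are used evenly.
    p_mean = p.mean(0)
    marg_ent = -(p_mean * torch.log(p_mean + 1e-8)).sum()
    # Minimizing cond_ent - marg_ent maximizes I(x; y) = H(y) - H(y|x).
    return cond_ent - marg_ent
```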
- Revisiting Contrastive Learning for Few-Shot Classification [74.78397993160583]
Instance discrimination based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations.
We show how one can incorporate supervision in the instance discrimination based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks.
We propose a novel model selection algorithm that can be used in conjunction with a universal embedding trained using CIDS to outperform state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
arXiv Detail & Related papers (2021-01-26T19:58:08Z)
- CLASTER: Clustering with Reinforcement Learning for Zero-Shot Action Recognition [52.66360172784038]
We propose a clustering-based model, which considers all training samples at once, instead of optimizing for each instance individually.
We call the proposed method CLASTER and observe that it consistently improves over the state of the art on all standard datasets.
arXiv Detail & Related papers (2021-01-18T12:46:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.