Revisiting Contrastive Learning for Few-Shot Classification
- URL: http://arxiv.org/abs/2101.11058v1
- Date: Tue, 26 Jan 2021 19:58:08 GMT
- Title: Revisiting Contrastive Learning for Few-Shot Classification
- Authors: Orchid Majumder, Avinash Ravichandran, Subhransu Maji, Marzia Polito,
Rahul Bhotika, Stefano Soatto
- Abstract summary: Instance discrimination based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations.
We show how one can incorporate supervision in the instance discrimination based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks.
We propose a novel model selection algorithm that can be used in conjunction with a universal embedding trained using CIDS to outperform state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
- Score: 74.78397993160583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance discrimination based contrastive learning has emerged as a leading
approach for self-supervised learning of visual representations. Yet, its
generalization to novel tasks remains elusive when compared to representations
learned with supervision, especially in the few-shot setting. We demonstrate
how one can incorporate supervision in the instance discrimination based
contrastive self-supervised learning framework to learn representations that
generalize better to novel tasks. We call our approach CIDS (Contrastive
Instance Discrimination with Supervision). CIDS performs favorably compared to
existing algorithms on popular few-shot benchmarks like Mini-ImageNet or
Tiered-ImageNet. We also propose a novel model selection algorithm that can be
used in conjunction with a universal embedding trained using CIDS to outperform
state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
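The abstract includes no code, but the mechanism it describes (folding supervision into an instance-discrimination contrastive objective by treating all same-class samples as positives) can be illustrated with a SupCon-style loss. The following is a minimal PyTorch sketch of that general idea, not the authors' CIDS implementation; the function name and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style sketch: every same-label pair in the batch is a positive.

    features: (N, D) embeddings; L2-normalized below.
    labels:   (N,) integer class labels.
    """
    z = F.normalize(features, dim=1)               # compare in cosine space
    sim = z @ z.t() / temperature                  # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)         # drop i == j from the softmax

    # positives: same label, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                         # anchors with >= 1 positive
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

With exactly one positive per anchor (e.g. only the other augmented view of the same image), this reduces to the standard unsupervised instance-discrimination loss that the abstract contrasts against.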
Related papers
- Harmony: A Joint Self-Supervised and Weakly-Supervised Framework for Learning General Purpose Visual Representations [6.990891188823598]
We present Harmony, a framework that combines vision-language training with discriminative and generative self-supervision to learn visual features.
Our framework is specifically designed to work on web-scraped data by not relying on negative examples and addressing the one-to-one correspondence issue.
arXiv Detail & Related papers (2024-05-23T07:18:08Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies superpixels with Felzenszwalb's algorithm and performs local contrastive learning using a novel contrastive sampling loss (a generic sketch of the superpixel pooling step follows this entry).
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
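For context on the superpixel step above: Felzenszwalb's graph-based segmentation is available in scikit-image, and region embeddings can be obtained by average-pooling dense features over each superpixel. The sketch below shows that generic pipeline, not the paper's code; the function name and parameter values are assumptions.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def superpixel_region_features(image: np.ndarray,
                               feature_map: np.ndarray) -> np.ndarray:
    """Average-pool a dense feature map over Felzenszwalb superpixels.

    image:       (H, W, 3) float image in [0, 1], used only for segmentation.
    feature_map: (H, W, D) per-pixel features (e.g. upsampled CNN features).
    Returns an (R, D) array with one pooled embedding per region.
    """
    segments = felzenszwalb(image, scale=100, sigma=0.5, min_size=50)  # (H, W) labels
    regions = np.unique(segments)
    # one embedding per region: candidates for a region-level contrastive loss
    return np.stack([feature_map[segments == r].mean(axis=0) for r in regions])
```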
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to address the class-collision problem that arises when instance discrimination treats semantically similar images as negatives.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- On Mutual Information in Contrastive Learning for Visual Representations [19.136685699971864]
Unsupervised "contrastive" learning algorithms in vision have been shown to learn representations that perform remarkably well on transfer tasks.
We show that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image (a minimal sketch of this bound follows this entry).
We find that the choice of negative samples and views are critical to the success of these algorithms.
arXiv Detail & Related papers (2020-05-27T04:21:53Z)
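The lower bound referenced above is the InfoNCE bound: with a batch of K paired views, I(z1; z2) >= log K - L_InfoNCE, so minimizing the loss maximizes a lower bound on the mutual information between views. Below is a minimal, generic PyTorch sketch of the estimator, not the paper's code; the function name and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over two views: z1[i] and z2[i] come from the same image.

    With batch size K, the loss L satisfies I(z1; z2) >= log(K) - L,
    so minimizing L tightens a lower bound on mutual information.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature             # (K, K): row i vs. all of z2
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)        # diagonal entries are positives
```

The choice of negatives (the off-diagonal entries here) and of the views themselves is exactly what the paper identifies as critical to these algorithms' success.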
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.