Spatial Contrastive Learning for Few-Shot Classification
- URL: http://arxiv.org/abs/2012.13831v1
- Date: Sat, 26 Dec 2020 23:39:41 GMT
- Title: Spatial Contrastive Learning for Few-Shot Classification
- Authors: Yassine Ouali, Céline Hudelot, Myriam Tami
- Abstract summary: We propose a novel attention-based spatial contrastive objective to learn locally discriminative and class-agnostic features.
With extensive experiments, we show that the proposed method outperforms state-of-the-art approaches.
- Score: 9.66840768820136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing few-shot classification methods rely to some degree on the
cross-entropy (CE) loss to learn transferable representations that facilitate
test-time adaptation to unseen classes with limited data. However, the CE
loss has several shortcomings, e.g., inducing representations with excessive
discrimination towards seen classes, which reduces their transferability to
unseen classes and results in sub-optimal generalization. In this work, we
explore contrastive learning as an additional auxiliary training objective,
acting as a data-dependent regularizer to promote more general and transferable
features. Instead of using the standard contrastive objective, which suppresses
local discriminative features, we propose a novel attention-based spatial
contrastive objective to learn locally discriminative and class-agnostic
features. With extensive experiments, we show that the proposed method
outperforms state-of-the-art approaches, confirming the importance of learning
good and transferable embeddings for few-shot learning.
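As a rough illustration of the recipe the abstract describes -- supervised CE on seen classes plus a contrastive term acting as a data-dependent regularizer -- here is a minimal PyTorch sketch. The encoder/classifier names, the InfoNCE form, the temperature, and the weight `lam` are illustrative assumptions; the paper's actual objective is an attention-based spatial variant over local features, which this global sketch does not reproduce.
```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    # Standard InfoNCE between two views; positives are matching rows.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def training_step(encoder, classifier, x1, x2, labels, lam=0.5):
    # x1, x2: two augmentations of the same batch; lam is an assumed weight.
    h1, h2 = encoder(x1), encoder(x2)             # (B, D) global features
    ce = F.cross_entropy(classifier(h1), labels)  # discriminative CE on seen classes
    con = info_nce(h1, h2)                        # class-agnostic regularizer
    return ce + lam * con
```
Because the contrastive term only ties together views of the same image, it stays class-agnostic and does not sharpen decision boundaries around the seen classes the way CE alone does.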
Related papers
- CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning [52.63674911541416]
Few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and forgetting.
Our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL.
We find that encouraging features to spread within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability.
arXiv Detail & Related papers (2024-10-08T02:23:16Z)
- Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
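The summary leaves the Bayesian balancing mechanism unspecified; as a generic stand-in, the sketch below adapts the CE/contrastive trade-off with homoscedastic-uncertainty weighting via learnable log-variances. This is a common technique for loss balancing, not BLCL's actual formulation.
```python
import torch
import torch.nn as nn

class AdaptiveLossBalance(nn.Module):
    # Learnable weighting of two loss terms via per-term log-variances --
    # an uncertainty-weighting stand-in, NOT BLCL's Bayesian scheme.
    def __init__(self):
        super().__init__()
        self.log_var = nn.Parameter(torch.zeros(2))  # one per loss term

    def forward(self, ce_loss, con_loss):
        losses = torch.stack([ce_loss, con_loss])
        precision = torch.exp(-self.log_var)         # higher = more weight
        # the +log_var term keeps the weights from collapsing to zero
        return (precision * losses + self.log_var).sum()
```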
arXiv Detail & Related papers (2024-05-17T19:49:02Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
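A minimal sketch of the idea as stated -- inducing Grad-CAM from an SSL objective instead of a class logit -- assuming a CNN backbone that exposes its last convolutional feature map; `ssl_loss` stands for any differentiable self-supervised loss computed from the same forward pass.
```python
import torch
import torch.nn.functional as F

def gradcam_from_ssl(feature_map, ssl_loss):
    # feature_map: (B, C, H, W) conv activations kept in the autograd graph;
    # ssl_loss: scalar SSL objective from the same forward pass.
    grads = torch.autograd.grad(ssl_loss, feature_map, retain_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)    # GAP of gradients -> channel weights
    cam = F.relu((weights * feature_map).sum(dim=1))  # (B, H, W) heatmap
    # normalize per image to [0, 1] so it can be thresholded into a rationale mask
    return cam / cam.amax(dim=(1, 2), keepdim=True).clamp_min(1e-8)
```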
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we analyze for the first time the unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
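As a hedged illustration of region-level cross-modality contrastive learning, the sketch below applies a symmetric InfoNCE between matched image-region and text features; the 1:1 region-text matching and the temperature are assumptions, and the paper's actual positive/negative construction may differ.
```python
import torch
import torch.nn.functional as F

def region_contrastive(region_feats, text_feats, temperature=0.07):
    # region_feats, text_feats: (N, D); row i of each forms a positive pair
    # (the 1:1 matching is an illustrative assumption).
    v = F.normalize(region_feats, dim=1)
    t = F.normalize(text_feats, dim=1)
    logits = v @ t.t() / temperature
    targets = torch.arange(len(v), device=v.device)
    # symmetric InfoNCE over both matching directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```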
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
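The mechanism stated here -- instances sharing a label pulled toward similar representations -- matches the supervised contrastive (SupCon) formulation; below is a compact sketch with an assumed temperature, not necessarily the exact regularizer used in the paper.
```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    # z: (N, D) embeddings, labels: (N,) integer class ids.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))     # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # average log-probability over each anchor's positives; where() avoids
    # the -inf * 0 NaN on the masked diagonal
    pos_log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    return -(pos_log_prob.sum(1) / pos.sum(1).clamp_min(1)).mean()
```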
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
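A sketch of the acquisition step as described: score each unlabeled sample by the disagreement between the two auxiliary classifier heads and query the most discrepant ones. The L1 discrepancy measure and the function names are assumptions.
```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mcdal_acquire(encoder, head_a, head_b, unlabeled_loader, budget, device):
    # Rank unlabeled samples by prediction discrepancy between two heads
    # and return the indices of the top-`budget` most ambiguous ones.
    scores = []
    for x in unlabeled_loader:                     # loader yields input batches
        feats = encoder(x.to(device))
        pa = F.softmax(head_a(feats), dim=1)
        pb = F.softmax(head_b(feats), dim=1)
        scores.append((pa - pb).abs().sum(dim=1))  # per-sample disagreement
    scores = torch.cat(scores)
    return scores.topk(budget).indices             # samples to send for labeling
```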
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive learning (MPSCL) model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
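The summary names the ingredients (class prototypes, a margin, a contrastive loss) but not the exact formula; the sketch below shows one generic margin-preserving prototype contrastive loss under those assumptions, not necessarily MPSCL's form.
```python
import torch
import torch.nn.functional as F

def margin_prototype_contrastive(z, prototypes, labels, margin=0.2, temperature=0.1):
    # z: (B, D) feature embeddings, prototypes: (K, D), labels: (B,) in [0, K).
    z = F.normalize(z, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = z @ p.t()                              # cosine similarities (B, K)
    # subtract the margin from the true-class logit so it must win by `margin`
    logits = logits - margin * F.one_hot(labels, p.size(0)).float()
    return F.cross_entropy(logits / temperature, labels)
```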
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Semi-Discriminative Representation Loss for Online Continual Learning [16.414031859647874]
Gradient-based approaches have been developed to make more efficient use of compact episodic memory.
We propose a simple method -- Semi-Discriminative Representation Loss (SDRL) -- for continual learning.
arXiv Detail & Related papers (2020-06-19T17:13:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.