BECLR: Batch Enhanced Contrastive Few-Shot Learning
- URL: http://arxiv.org/abs/2402.02444v1
- Date: Sun, 4 Feb 2024 10:52:43 GMT
- Title: BECLR: Batch Enhanced Contrastive Few-Shot Learning
- Authors: Stylianos Poulakakis-Daktylidis and Hadi Jamali-Rad
- Abstract summary: Unsupervised few-shot learning aspires to bridge the gap between machines and humans by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage.
- Score: 1.450405446885067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning quickly from very few labeled samples is a fundamental attribute
that separates machines and humans in the era of deep representation learning.
Unsupervised few-shot learning (U-FSL) aspires to bridge this gap by discarding
the reliance on annotations at training time. Intrigued by the success of
contrastive learning approaches in the realm of U-FSL, we structurally approach
their shortcomings in both pretraining and downstream inference stages. We
propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly
separable latent representation space for enhancing positive sampling at the
pretraining phase and infusing implicit class-level insights into unsupervised
contrastive learning. We then tackle the somewhat overlooked yet critical
issue of sample bias at the few-shot inference stage. We propose an iterative
Optimal Transport-based distribution Alignment (OpTA) strategy and demonstrate
that it efficiently addresses the problem, especially in low-shot scenarios
where FSL approaches suffer the most from sample bias. We further discuss how
DyCE and OpTA are two intertwined pieces of a novel end-to-end approach (which
we coin BECLR), constructively magnifying each other's impact. We then present
a suite of extensive quantitative and qualitative experimentation to
corroborate that BECLR sets a new state-of-the-art across ALL existing U-FSL
benchmarks (to the best of our knowledge), and significantly outperforms the
best of the current baselines (codebase available at:
https://github.com/stypoumic/BECLR).
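To make the OpTA idea above more concrete, below is a minimal sketch of an iterative optimal-transport-based alignment step: support-class prototypes are repeatedly shifted toward the (unlabeled) query distribution via an entropy-regularized (Sinkhorn) transport plan. The function names (`sinkhorn`, `opta_align`) and hyperparameters (`eps`, `n_iters`, `n_steps`) are illustrative assumptions, not the paper's actual implementation; refer to the codebase linked above for the real method.

```python
# Hypothetical OpTA-style alignment sketch (NumPy only); names and defaults are assumptions.
import numpy as np

def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropy-regularized optimal transport plan between two uniform marginals."""
    cost = cost / (cost.max() + 1e-9)                # normalize costs for numerical stability
    K = np.exp(-cost / eps)                          # Gibbs kernel
    r = np.full(cost.shape[0], 1.0 / cost.shape[0])  # uniform row marginal (prototypes)
    c = np.full(cost.shape[1], 1.0 / cost.shape[1])  # uniform column marginal (queries)
    u, v = np.ones_like(r), np.ones_like(c)
    for _ in range(n_iters):                         # alternating Sinkhorn scaling updates
        u = r / (K @ v + 1e-9)
        v = c / (K.T @ u + 1e-9)
    return u[:, None] * K * v[None, :]               # plan of shape (n_protos, n_queries)

def opta_align(prototypes, queries, n_steps=3):
    """Iteratively move class prototypes toward the query embedding distribution."""
    for _ in range(n_steps):
        # pairwise squared Euclidean cost between prototypes and query embeddings
        cost = ((prototypes[:, None, :] - queries[None, :, :]) ** 2).sum(-1)
        plan = sinkhorn(cost)
        # barycentric mapping: each prototype becomes the plan-weighted mean of the queries
        prototypes = (plan @ queries) / plan.sum(axis=1, keepdims=True)
    return prototypes

# toy usage: a 5-way episode with 64-d embeddings and 75 query samples
protos = np.random.randn(5, 64)
queries = np.random.randn(75, 64)
aligned_protos = opta_align(protos, queries)
```

The intuition is that, with very few shots, the support prototypes are biased estimates of the class centers; transporting them toward the much larger query set reduces that bias before the final (e.g., nearest-prototype) classification step.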
Related papers
- CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning [52.63674911541416]
Few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and forgetting.
Our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL.
We find that trying to secure the spread of features within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability.
arXiv Detail & Related papers (2024-10-08T02:23:16Z)
- ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation [24.743048965822297]
This paper introduces a novel semi-supervised LiDAR semantic segmentation framework called ItTakesTwo (IT2).
IT2 is designed to ensure consistent predictions from peer LiDAR representations, thereby improving the perturbation effectiveness in consistency learning.
Results on public benchmarks show that our approach achieves remarkable improvements over the previous state-of-the-art (SOTA) methods in the field.
arXiv Detail & Related papers (2024-07-09T18:26:53Z)
- FUSSL: Fuzzy Uncertain Self Supervised Learning [8.31483061185317]
Self-supervised learning (SSL) has become a very successful technique for harnessing the power of unlabeled data with no annotation effort.
In this paper, for the first time, we recognize the fundamental limits of SSL coming from the use of a single supervisory signal.
We propose a robust and general standard hierarchical learning/training protocol for any SSL baseline.
arXiv Detail & Related papers (2022-10-28T01:06:10Z)
- Self-Attention Message Passing for Contrastive Few-Shot Learning [2.1485350418225244]
Unsupervised few-shot learning is the pursuit of bridging the gap between machines and humans.
We propose a novel self-attention based message passing contrastive learning approach (coined as SAMP-CLR) for U-FSL pre-training.
We also propose an optimal transport (OT) based fine-tuning strategy (we call OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer).
arXiv Detail & Related papers (2022-10-12T15:57:44Z)
- Few-Shot Classification with Contrastive Learning [10.236150550121163]
We propose a novel contrastive learning-based framework that seamlessly integrates contrastive learning into both stages.
In the meta-training stage, we propose a cross-view episodic training mechanism to perform the nearest centroid classification on two different views of the same episode.
These two strategies force the model to overcome the bias between views and promote the transferability of representations.
arXiv Detail & Related papers (2022-09-17T02:39:09Z)
- Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
arXiv Detail & Related papers (2022-07-22T06:30:44Z)
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
arXiv Detail & Related papers (2021-01-23T13:22:44Z)