Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning
- URL: http://arxiv.org/abs/2103.01315v1
- Date: Mon, 1 Mar 2021 21:14:33 GMT
- Title: Exploring Complementary Strengths of Invariant and Equivariant
Representations for Few-Shot Learning
- Authors: Mamshad Nayeem Rizve, Salman Khan, Fahad Shahbaz Khan, Mubarak Shah
- Abstract summary: In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
- Score: 96.75889543560497
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In many real-world problems, collecting a large number of labeled samples is
infeasible. Few-shot learning (FSL) is the dominant approach to address this
issue, where the objective is to quickly adapt to novel categories in the presence
of a limited number of samples. FSL tasks have been predominantly solved by
leveraging the ideas from gradient-based meta-learning and metric learning
approaches. However, recent works have demonstrated the significance of
powerful feature representations with a simple embedding network that can
outperform existing sophisticated FSL algorithms. In this work, we build on
this insight and propose a novel training mechanism that simultaneously
enforces equivariance and invariance to a general set of geometric
transformations. Equivariance and invariance have each been employed standalone
in previous works; however, to the best of our knowledge, they have not been
used jointly. Simultaneously optimizing these two contrasting objectives allows
the model to learn features that are not only independent of the input
transformation but also features that encode the structure of geometric
transformations. These complementary sets of features generalize well to novel
classes with only a few data samples. We achieve additional
improvements by incorporating a novel self-supervised distillation objective.
Our extensive experimentation shows that even without knowledge distillation
our proposed method can outperform current state-of-the-art FSL methods on five
popular benchmark datasets.
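
To make the joint objective concrete, below is a minimal PyTorch-style sketch, our illustration rather than the authors' released code: rotations act as the geometric transformation set, an auxiliary head must predict which rotation was applied (equivariance), and an extra penalty pulls the embeddings of all rotated views together (invariance). The function name, heads, and loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_equi_inv_loss(encoder, cls_head, rot_head, x, y,
                        lam_eq=1.0, lam_inv=1.0):
    """Sketch of a joint loss: supervised cross-entropy on base classes,
    an equivariance term (predict the applied rotation), and an invariance
    term (embeddings of all rotated views should coincide). Assumes square
    images of shape (B, C, H, W)."""
    views = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]  # 0/90/180/270
    z = [encoder(v) for v in views]                             # each (B, D)

    # Supervised classification on the untransformed view.
    loss_cls = F.cross_entropy(cls_head(z[0]), y)

    # Equivariance: features must retain which transformation was applied.
    rot_logits = torch.cat([rot_head(zi) for zi in z])          # (4B, 4)
    rot_labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    loss_eq = F.cross_entropy(rot_logits, rot_labels)

    # Invariance: pull every view's embedding toward the views' mean.
    z_all = torch.stack(z)                                      # (4, B, D)
    loss_inv = ((z_all - z_all.mean(dim=0)) ** 2).mean()

    return loss_cls + lam_eq * loss_eq + lam_inv * loss_inv
```

In a full pipeline this joint loss would train the embedding network on base classes before few-shot evaluation; the self-supervised distillation stage mentioned above is omitted from the sketch.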
Related papers
- Dual Adaptive Representation Alignment for Cross-domain Few-shot
Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and the novel query samples come from the same domain.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z) - Sufficient Invariant Learning for Distribution Shift [20.88069274935592]
- Sufficient Invariant Learning for Distribution Shift [20.88069274935592]
We introduce a novel learning principle called the Sufficient Invariant Learning (SIL) framework.
SIL focuses on learning a sufficient subset of invariant features rather than relying on a single feature.
We propose a new algorithm, Adaptive Sharpness-aware Group Distributionally Robust Optimization (ASGDRO), to learn diverse invariant features by seeking common flat minima.
arXiv Detail & Related papers (2022-10-24T18:34:24Z) - Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model sets a new state of the art across various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z) - Few-shot Learning via Dependency Maximization and Instance Discriminant
Analysis [21.8311401851523]
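
For readers unfamiliar with consistency regularization, here is a minimal pseudo-labeling variant in PyTorch. This is a generic sketch of the idea only; the actual FeatDistLoss formulation additionally manipulates feature-level distances and is defined in the cited paper.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong, threshold=0.95):
    """Generic consistency regularization (illustrative): pseudo-label
    confident predictions on the weakly augmented view and make the
    strongly augmented view match them."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (mask * loss).mean()
```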
- Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis [21.8311401851523]
We study the few-shot learning problem, where a model learns to recognize new objects with extremely few labeled samples per category.
We propose a simple approach to exploit unlabeled data accompanying the few-shot task for improving few-shot performance.
arXiv Detail & Related papers (2021-09-07T02:19:01Z) - On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z) - Hybrid Consistency Training with Prototype Adaptation for Few-Shot
Learning [11.873143649261362]
Few-Shot Learning aims to improve a model's generalization capability in low data regimes.
Recent FSL works have made steady progress via metric learning, meta learning, representation learning, etc.
arXiv Detail & Related papers (2020-11-19T19:51:33Z) - Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z) - Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems.
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z)