Bridging Few-Shot Learning and Adaptation: New Challenges of
Support-Query Shift
- URL: http://arxiv.org/abs/2105.11804v1
- Date: Tue, 25 May 2021 10:10:09 GMT
- Title: Bridging Few-Shot Learning and Adaptation: New Challenges of
Support-Query Shift
- Authors: Etienne Bennequin, Victor Bouvier, Myriam Tami, Antoine Toubhans,
Céline Hudelot
- Abstract summary: Few-Shot Learning algorithms have made substantial progress in learning novel concepts with just a handful of labelled data.
To classify query instances from novel classes encountered at test time, they require only a support set composed of a few labelled samples.
In a realistic setting, the data distribution is plausibly subject to change, a situation referred to as Distribution Shift (DS).
- Score: 4.374837991804085
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Few-Shot Learning (FSL) algorithms have made substantial progress in learning
novel concepts with just a handful of labelled data. To classify query
instances from novel classes encountered at test time, they require only a
support set composed of a few labelled samples. FSL benchmarks commonly assume
that those queries come from the same distribution as instances in the support
set. However, in a realistic setting, the data distribution is plausibly subject
to change, a situation referred to as Distribution Shift (DS). The present work
addresses the new and challenging problem of Few-Shot Learning under
Support/Query Shift (FSQS), i.e., when support and query instances are sampled
from related but different distributions. Our contributions are the following.
First, we release a testbed for FSQS, including datasets, relevant baselines
and a protocol for a rigorous and reproducible evaluation. Second, we observe
that well-established FSL algorithms unsurprisingly suffer from a considerable
drop in accuracy when facing FSQS, stressing the significance of our study.
Finally, we show that transductive algorithms can limit the adverse effect
of DS. In particular, we study both the role of Batch-Normalization and Optimal
Transport (OT) in aligning distributions, bridging Unsupervised Domain
Adaptation with FSL. This results in a new method that efficiently combines OT
with the celebrated Prototypical Networks. We present compelling experiments
demonstrating the advantage of our method. Our work opens an exciting line of
research by providing a testbed and strong baselines. Our code is available at
https://github.com/ebennequin/meta-domain-shift.
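To make the method concrete, here is a minimal, self-contained PyTorch sketch of the core idea: entropic Optimal Transport maps query features onto the support distribution before prototype distances are computed. This is not the repository's actual implementation; the Sinkhorn solver, the barycentric mapping, the pooled-statistics standardization (a lightweight stand-in for the transductive batch-norm discussed above), and all hyper-parameters are illustrative assumptions.

```python
# Hedged sketch: OT-based support/query alignment for Prototypical Networks.
import torch


def sinkhorn(cost: torch.Tensor, eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropic-regularized OT plan between uniform marginals."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):            # alternate marginal projections
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan, shape (n, m)


def transported_prototype_logits(z_support, y_support, z_query, n_way):
    """Classify queries after OT-aligning them with the support set."""
    # Transductive standardization with pooled episode statistics, a
    # lightweight analogue of transductive batch normalization.
    pooled = torch.cat([z_support, z_query])
    mean, std = pooled.mean(dim=0), pooled.std(dim=0) + 1e-6
    z_support = (z_support - mean) / std
    z_query = (z_query - mean) / std
    # Squared-Euclidean cost, rescaled so exp(-cost/eps) stays stable.
    cost = torch.cdist(z_query, z_support) ** 2
    plan = sinkhorn(cost / cost.max())
    # Barycentric mapping: each query becomes a weighted average of
    # support features, pushing queries onto the support domain.
    z_query = (plan / plan.sum(dim=1, keepdim=True)) @ z_support
    # Standard Prototypical Networks classification on aligned features.
    prototypes = torch.stack(
        [z_support[y_support == c].mean(dim=0) for c in range(n_way)]
    )
    return -torch.cdist(z_query, prototypes) ** 2  # negative sq. distances


if __name__ == "__main__":
    # A toy 5-way 5-shot episode with 15 queries and 64-d features.
    z_s, z_q = torch.randn(25, 64), torch.randn(15, 64)
    y_s = torch.arange(5).repeat_interleave(5)
    print(transported_prototype_logits(z_s, y_s, z_q, n_way=5).argmax(dim=1))
```

The barycentric mapping is the standard way to turn an OT plan into a feature-space alignment; other couplings would change only that one step.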
Related papers
- Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
arXiv Detail & Related papers (2023-10-28T20:42:47Z)
- Dual Adversarial Alignment for Realistic Support-Query Shift Few-shot Learning [15.828113109152069]
Support-Query Shift few-shot learning aims to classify unseen examples (the query set) against labeled data (the support set) based on an embedding learned in a low-dimensional space.
In this paper, we propose a new and more difficult challenge, Realistic Support-Query Shift few-shot learning (RSQS).
In addition, we propose a unified adversarial feature alignment method called the DUal adversarial ALignment framework (DuaL) to mitigate RSQS from two aspects, i.e., inter-domain bias and intra-domain variance.
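As a heavily hedged illustration of what "adversarial feature alignment" means in this context, the sketch below shows a generic DANN-style gradient-reversal objective that makes support and query features indistinguishable to a domain discriminator. The discriminator architecture, the `lambd` weight, and all names are generic assumptions, not DuaL's actual design.

```python
# Hedged sketch: generic domain-adversarial alignment of support/query features.
import torch
from torch import nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def domain_adversarial_loss(z_support, z_query, discriminator, lambd=1.0):
    """Encourage support and query features to be indistinguishable."""
    feats = torch.cat([z_support, z_query])
    domains = torch.cat([
        torch.zeros(len(z_support)),   # 0 = support domain
        torch.ones(len(z_query)),      # 1 = query domain
    ])
    logits = discriminator(GradReverse.apply(feats, lambd)).squeeze(-1)
    # The discriminator learns to tell the domains apart; the reversed
    # gradient pushes the encoder toward domain-invariant features.
    return nn.functional.binary_cross_entropy_with_logits(logits, domains)


# Usage: a tiny MLP discriminator over 64-d features.
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
loss = domain_adversarial_loss(torch.randn(25, 64), torch.randn(15, 64), disc)
loss.backward()
```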
arXiv Detail & Related papers (2023-09-05T09:50:31Z)
- Addressing Distribution Shift at Test Time in Pre-trained Language Models [3.655021726150369]
State-of-the-art pre-trained language models (PLMs) outperform other models when applied to the majority of language processing tasks.
However, PLMs have been found to degrade in performance under distribution shift.
We present an approach that improves the performance of PLMs at test-time under distribution shift.
arXiv Detail & Related papers (2022-12-05T16:04:54Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously aligns target features with the source distribution and encourages the model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
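For intuition, the sketch below shows one common way to enforce both objectives over image rotations: an invariance term that matches features across rotated views, and an equivariance term that requires features to retain enough information to recover which rotation was applied. The loss weights, the rotation-prediction head, and the restriction to rotations are assumptions for illustration, not the paper's exact mechanism.

```python
# Hedged sketch: joint invariance and equivariance losses over rotations.
import torch
from torch import nn


def invariance_equivariance_loss(encoder, rot_head, images, w_inv=1.0, w_eq=1.0):
    # Four rotations: 0, 90, 180, 270 degrees.
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    feats = [encoder(x) for x in rotated]
    # Invariance: rotated views should map close to the 0-degree view.
    inv = sum(nn.functional.mse_loss(f, feats[0]) for f in feats[1:]) / 3
    # Equivariance: a head must predict which rotation was applied,
    # so the features cannot discard the transformation entirely.
    logits = torch.cat([rot_head(f) for f in feats])
    labels = torch.arange(4).repeat_interleave(len(images))
    eq = nn.functional.cross_entropy(logits, labels)
    return w_inv * inv + w_eq * eq


# Usage with a toy encoder over 3x32x32 images.
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
head = nn.Linear(64, 4)
loss = invariance_equivariance_loss(enc, head, torch.randn(8, 3, 32, 32))
loss.backward()
```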
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning [11.873143649261362]
Few-Shot Learning aims to improve a model's generalization capability in low-data regimes.
Recent FSL works have made steady progress via metric learning, meta-learning, representation learning, etc.
arXiv Detail & Related papers (2020-11-19T19:51:33Z)
- Domain-Adaptive Few-Shot Learning [124.51420562201407]
We propose a novel domain-adversarial prototypical network (DAPN) model for domain-adaptive few-shot learning.
Our solution is to explicitly enhance the source/target per-class separation before domain-adaptive feature embedding learning.
arXiv Detail & Related papers (2020-03-19T08:31:14Z)
- TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification [50.358839666165764]
We show that the Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost the performance in Few-Shot Learning scenarios.
Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than 5%.
arXiv Detail & Related papers (2020-03-14T16:59:17Z)
- Few-Shot Learning as Domain Adaptation: Algorithm and Analysis [120.75020271706978]
Few-shot learning uses prior knowledge learned from the seen classes to recognize the unseen classes; the difference between the two sets of classes induces a shift in the data distribution.
This class-difference-caused distribution shift can be considered a special case of domain shift.
We propose a prototypical domain adaptation network with attention (DAPNA) to explicitly tackle such a domain shift problem in a meta-learning framework.
arXiv Detail & Related papers (2020-02-06T01:04:53Z)