TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot
classification
- URL: http://arxiv.org/abs/2003.06670v1
- Date: Sat, 14 Mar 2020 16:59:17 GMT
- Title: TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot
classification
- Authors: Moshe Lichtenstein and Prasanna Sattigeri and Rogerio Feris and Raja
Giryes and Leonid Karlinsky
- Abstract summary: We show that the Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost the performance in Few-Shot Learning scenarios.
Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than $5\%$.
- Score: 50.358839666165764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of Few-Shot Learning (FSL), or learning from very few (typically
$1$ or $5$) examples per novel class (unseen during training), has received a
lot of attention and seen significant performance advances in the recent literature.
While a number of techniques have been proposed for FSL, several factors have
emerged as most important for FSL performance, awarding SOTA even to the
simplest of techniques. These are: the backbone architecture (bigger is
better), type of pre-training on the base classes (meta-training vs regular
multi-class, currently regular wins), quantity and diversity of the base
classes set (the more the merrier, resulting in richer and better adaptive
features), and the use of self-supervised tasks during pre-training (serving as
a proxy for increasing the diversity of the base set). In this paper we propose
yet another simple technique that is important for few-shot learning
performance: a search for a compact feature sub-space that is discriminative
for a given few-shot test task. We show that the Task-Adaptive Feature
Sub-Space Learning (TAFSSL) can significantly boost the performance in FSL
scenarios when some additional unlabeled data accompanies the novel few-shot
task, be it the set of unlabeled queries (transductive FSL) or some
additional set of unlabeled data samples (semi-supervised FSL). Specifically,
we show that on the challenging miniImageNet and tieredImageNet benchmarks,
TAFSSL can improve the current state-of-the-art in both transductive and
semi-supervised FSL settings by more than $5\%$, while increasing the benefit
of using unlabeled data in FSL to above $10\%$ performance gain.
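To make the idea concrete, here is a minimal sketch of a task-adaptive sub-space step for a single episode: a compact sub-space is fitted on all features available for the task (support samples plus unlabeled queries, i.e. the transductive setting), and queries are then classified by nearest prototype inside it. PCA is used as the sub-space learner here, and the function names, dimensions, and nearest-prototype classifier are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of task-adaptive feature sub-space learning for one
# few-shot episode. Features are assumed to come from a backbone
# pre-trained on the base classes; PCA and nearest-prototype
# classification are illustrative choices, not the paper's exact code.
import numpy as np
from sklearn.decomposition import PCA

def tafssl_predict(support_x, support_y, query_x, n_components=10):
    """support_x: (n_support, d) support-set features
    support_y: (n_support,) integer class labels
    query_x:   (n_query, d) unlabeled query features"""
    # Task-adaptive step: fit the sub-space on every feature vector
    # available for this task, labeled and unlabeled alike.
    task_x = np.concatenate([support_x, query_x], axis=0)
    pca = PCA(n_components=n_components).fit(task_x)
    s_proj = pca.transform(support_x)
    q_proj = pca.transform(query_x)

    # Nearest-prototype classification inside the compact sub-space.
    classes = np.unique(support_y)
    prototypes = np.stack([s_proj[support_y == c].mean(axis=0)
                           for c in classes])
    dists = np.linalg.norm(q_proj[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    return classes[dists.argmin(axis=1)]
```

In the semi-supervised setting, the additional unlabeled pool would simply be concatenated into `task_x` alongside the queries, so the sub-space benefits from it in the same way.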
Related papers
- Instance-based Max-margin for Practical Few-shot Recognition [32.26577845735846]
IbM2 is a novel instance-based max-margin method for few-shot learning.
This paper shows that IbM2 almost always leads to improvements over the respective baseline methods.
arXiv Detail & Related papers (2023-05-27T04:55:13Z)
- Exploring Efficient Few-shot Adaptation for Vision Transformers [70.91692521825405]
We propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs on few-shot learning tasks.
Key novelties come from the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA).
We conduct extensive experiments to show the efficacy of our model.
arXiv Detail & Related papers (2023-01-06T08:42:05Z)
- Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning [93.63638405586354]
We propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML).
First, we train a classifier via common semi-supervised learning (SSL) and use it to obtain pseudo-labels for the unlabeled data.
We then build few-shot tasks from the labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression (see the sketch after this list).
arXiv Detail & Related papers (2022-07-14T10:53:53Z)
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference [74.80730361332711]
Few-shot learning is an important and topical problem in computer vision.
We show that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
arXiv Detail & Related papers (2022-04-15T02:55:58Z)
- A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm for mitigating ambiguity between base and novel classes caused by unreliable pseudo labels, and (2) a model adaptation mechanism to learn discriminative features for novel classes while preserving base knowledge using few labeled and all the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z)
- The Role of Global Labels in Few-Shot Classification and How to Infer Them [55.64429518100676]
Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks.
We propose Meta Label Learning (MeLa), a novel algorithm that infers global labels and obtains robust few-shot models via standard classification.
arXiv Detail & Related papers (2021-08-09T14:07:46Z)
- Bridging Few-Shot Learning and Adaptation: New Challenges of Support-Query Shift [4.374837991804085]
Few-shot learning algorithms have made substantial progress in learning novel concepts with just a handful of labelled data.
To classify query instances from novel classes encountered at test time, they require only a support set composed of a few labelled samples.
In a realistic setting, the data distribution is plausibly subject to change, a situation referred to as Distribution Shift (DS).
arXiv Detail & Related papers (2021-05-25T10:10:09Z)
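As a rough illustration of the pseudo-labeling step described in the PLML entry above, the sketch below trains a classifier on labeled base data, assigns confidence-filtered pseudo-labels to an unlabeled pool, and merges the result for building training tasks. The logistic-regression stand-in for the SSL-trained classifier, the confidence threshold, and all names are assumptions for illustration only.

```python
# Rough sketch of confidence-filtered pseudo-labeling, as described in
# the PLML entry above. A logistic-regression classifier stands in for
# the SSL-trained model; all names and the threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(labeled_x, labeled_y, unlabeled_x, threshold=0.9):
    """Return (features, pseudo_labels) for unlabeled samples whose
    top predicted probability exceeds `threshold` -- a simple form of
    the noise suppression the entry mentions."""
    clf = LogisticRegression(max_iter=1000).fit(labeled_x, labeled_y)
    probs = clf.predict_proba(unlabeled_x)
    keep = probs.max(axis=1) >= threshold
    pseudo_y = clf.classes_[probs.argmax(axis=1)]
    return unlabeled_x[keep], pseudo_y[keep]

# The pseudo-labeled pool can then be merged with the labeled set to
# sample few-shot meta-training tasks, e.g.:
# extra_x, extra_y = pseudo_label(base_x, base_y, pool_x)
# train_x = np.concatenate([base_x, extra_x])
# train_y = np.concatenate([base_y, extra_y])
```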