The Role of Global Labels in Few-Shot Classification and How to Infer Them
- URL: http://arxiv.org/abs/2108.04055v1
- Date: Mon, 9 Aug 2021 14:07:46 GMT
- Title: The Role of Global Labels in Few-Shot Classification and How to Infer Them
- Authors: Ruohan Wang, Massimiliano Pontil, Carlo Ciliberto
- Abstract summary: Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks.
We propose Meta Label Learning (MeLa), a novel algorithm that infers global labels and obtains robust few-shot models via standard classification.
- Score: 55.64429518100676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning (FSL) is a central problem in meta-learning, where learners
must quickly adapt to new tasks given limited training data. Surprisingly,
recent works have outperformed meta-learning methods tailored to FSL by casting
it as standard supervised learning to jointly classify all classes shared
across tasks. However, this approach violates the standard FSL setting by
requiring global labels shared across tasks, which are often unavailable in
practice. In this paper, we show why solving FSL via standard classification is
theoretically advantageous. This motivates us to propose Meta Label Learning
(MeLa), a novel algorithm that infers global labels and obtains robust few-shot
models via standard classification. Empirically, we demonstrate that MeLa
outperforms meta-learning competitors and is comparable to the oracle setting
where ground truth labels are given. We provide extensive ablation studies to
highlight the key properties of the proposed strategy.
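To make the abstract's high-level description concrete, the sketch below illustrates one way such global label inference could look: merge task-local classes into global classes by matching class prototypes across tasks, then train a single standard classifier on the relabeled data. The embedding function, the greedy cosine-matching heuristic, and all names here are illustrative assumptions, not the authors' actual procedure.

```python
# Minimal, illustrative sketch of the idea described in the abstract:
# merge task-local classes into inferred global classes, then train one
# standard classifier. All details below are assumptions for illustration.
import numpy as np

def class_prototypes(task_embeddings, task_labels):
    """Mean embedding per task-local class."""
    labels = np.asarray(task_labels)
    local_classes = sorted(set(task_labels))
    protos = [task_embeddings[labels == c].mean(axis=0) for c in local_classes]
    return np.stack(protos), local_classes

def infer_global_labels(tasks, embed, merge_threshold=0.8):
    """Greedily match each task-local class prototype to an existing global
    class by cosine similarity, or open a new global class."""
    global_protos = []      # one prototype per inferred global class
    relabeled_tasks = []    # tasks rewritten with global label ids
    for images, local_labels in tasks:
        emb = embed(images)                          # assumed (n, d) array
        protos, local_classes = class_prototypes(emb, local_labels)
        mapping = {}
        for proto, c in zip(protos, local_classes):
            if global_protos:
                sims = [proto @ g / (np.linalg.norm(proto) * np.linalg.norm(g))
                        for g in global_protos]
                best = int(np.argmax(sims))
                if sims[best] > merge_threshold:
                    mapping[c] = best                # merge into existing class
                    continue
            global_protos.append(proto)              # open a new global class
            mapping[c] = len(global_protos) - 1
        relabeled_tasks.append((images, [mapping[c] for c in local_labels]))
    return relabeled_tasks, len(global_protos)
```

Once every task is relabeled this way, the few-shot problem reduces to ordinary multi-class classification over the inferred global classes, which is the "standard classification" route the abstract refers to.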
Related papers
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Robust Meta-Representation Learning via Global Label Inference and Classification [42.81340522184904]
We introduce Meta Label Learning (MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across tasks.
MeLa outperforms existing methods across a diverse range of benchmarks, in particular under a more challenging setting where the number of training tasks is limited and labels are task-specific.
arXiv Detail & Related papers (2022-12-22T13:46:47Z)
- Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning [93.63638405586354]
We propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML).
First, we train a classifier via common semi-supervised learning (SSL) and use it to obtain pseudo-labels for the unlabeled data.
We then build few-shot tasks from the labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression (a rough sketch of this recipe appears after the list below).
arXiv Detail & Related papers (2022-07-14T10:53:53Z)
- A Strong Baseline for Semi-Supervised Incremental Few-Shot Learning [54.617688468341704]
Few-shot learning aims to learn models that generalize to novel classes with limited training samples.
We propose a novel paradigm containing two parts: (1) a well-designed meta-training algorithm that mitigates ambiguity between base and novel classes caused by unreliable pseudo labels, and (2) a model adaptation mechanism that learns discriminative features for novel classes while preserving base knowledge, using the few labeled samples and all of the unlabeled data.
arXiv Detail & Related papers (2021-10-21T13:25:52Z)
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems (an illustrative margin-augmented loss is sketched after the list below).
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z)
- TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot classification [50.358839666165764]
We show that the Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost the performance in Few-Shot Learning scenarios.
Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than 5%.
arXiv Detail & Related papers (2020-03-14T16:59:17Z)
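As referenced in the PLML entry above, the following is a minimal sketch of the pseudo-labeling recipe it summarizes: fit a semi-supervised classifier on labeled and unlabeled features, pseudo-label the unlabeled pool, then sample N-way K-shot episodes from the combined data. The scikit-learn self-training choice and the episode sampler are assumptions for illustration only; the paper's finetuning with feature smoothing and noise suppression is not shown.

```python
# Illustrative pseudo-labeling + episode construction, not the PLML implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

def pseudo_label(x_labeled, y_labeled, x_unlabeled):
    """Steps 1-2: semi-supervised training, then pseudo-labels for the pool."""
    x = np.vstack([x_labeled, x_unlabeled])
    y = np.concatenate([y_labeled, -np.ones(len(x_unlabeled), dtype=int)])  # -1 marks unlabeled
    ssl = SelfTrainingClassifier(LogisticRegression(max_iter=1000)).fit(x, y)
    return ssl.predict(x_unlabeled)

def sample_episode(x, y, n_way=5, k_shot=1, n_query=15, rng=None):
    """Step 3: build one N-way K-shot task from (pseudo-)labeled features."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(y), size=n_way, replace=False)
    support, query = [], []
    for local_id, c in enumerate(classes):
        idx = rng.permutation(np.where(y == c)[0])[: k_shot + n_query]
        support += [(x[i], local_id) for i in idx[:k_shot]]
        query += [(x[i], local_id) for i in idx[k_shot:]]
    return support, query
```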
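For the adaptive margin entry above, the snippet below sketches a margin-augmented prototypical loss, assuming PyTorch and a placeholder per-class-pair margin matrix. The original work derives its margins from class semantics, which is not reproduced here; this is only one plausible way to attach margins to a metric-based few-shot loss.

```python
# Illustrative margin-augmented prototypical loss (assumed form, not the paper's).
import torch
import torch.nn.functional as F

def margin_proto_loss(support, support_y, query, query_y, n_way, margins):
    """support/query: (n, d) embedding tensors; *_y: integer class ids in
    [0, n_way); margins: (n_way, n_way) tensor with a zero diagonal, where
    margins[y, k] is the extra separation demanded between true class y
    and competitor class k."""
    # Class prototypes: mean support embedding per class.
    protos = torch.stack([support[support_y == c].mean(dim=0) for c in range(n_way)])
    # Negative squared Euclidean distances to each prototype act as logits.
    logits = -torch.cdist(query, protos).pow(2)
    # Add each query's margin row to the competing logits, so the true
    # prototype must beat class k by at least margins[y, k].
    logits = logits + margins[query_y]
    return F.cross_entropy(logits, query_y)

# Example with uniform margins (every class pair gets the same separation):
# margins = 0.5 * (1 - torch.eye(n_way))
```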
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.