Episodic-free Task Selection for Few-shot Learning
- URL: http://arxiv.org/abs/2402.00092v1
- Date: Wed, 31 Jan 2024 10:52:15 GMT
- Title: Episodic-free Task Selection for Few-shot Learning
- Authors: Tao Zhang
- Abstract summary: We propose a novel meta-training framework beyond episodic training.
Episodic tasks are not used directly for training, but rather for evaluating the effectiveness of selected episodic-free tasks.
In experiments, the training task set contains some promising types, e.g., contrastive learning and classification.
- Score: 2.508902852545462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Episodic training is a mainstream training strategy for few-shot learning. In
few-shot scenarios, however, this strategy is often inferior to some
non-episodic training strategies, e.g., Neighbourhood Component Analysis (NCA),
which challenges the principle that training conditions must match testing
conditions. Thus, a question naturally arises: how can we search for
episodic-free tasks for better few-shot learning? In this work, we propose a
novel meta-training framework beyond episodic training. In this framework,
episodic tasks are not used directly for training, but rather for evaluating
the effectiveness of episodic-free tasks selected from a task set, which are
then used to train the meta-learners. The selection criterion is designed
around the affinity, which measures the degree to which the loss on the
target tasks decreases after training with the selected tasks. In
experiments, the training task set contains some promising types, e.g.,
contrastive learning and classification, and the target few-shot tasks are
performed with nearest centroid classifiers on the miniImageNet,
tieredImageNet and CIFAR-FS datasets. The experimental results demonstrate the
effectiveness of our approach.
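The affinity criterion described in the abstract lends itself to a short sketch. The following is a hypothetical illustration (all function and variable names are assumptions, not the paper's code): clone the meta-learner, run a probe training step on a candidate episodic-free task, and score the candidate by how much the loss on the episodic target tasks drops.

```python
import copy

def affinity(meta_learner, candidate_task, target_episodes, train_step, eval_loss):
    """Affinity of a candidate episodic-free task: the decrease in loss on
    the episodic target tasks after a probe update on the candidate.
    A larger value means the candidate helps the target tasks more."""
    loss_before = eval_loss(meta_learner, target_episodes)
    trial = copy.deepcopy(meta_learner)   # probe copy; original left untouched
    train_step(trial, candidate_task)     # e.g. a contrastive or classification step
    loss_after = eval_loss(trial, target_episodes)
    return loss_before - loss_after

def select_tasks(meta_learner, task_pool, target_episodes, train_step, eval_loss, k=2):
    """Keep the k candidate tasks with the highest affinity."""
    ranked = sorted(
        task_pool,
        key=lambda t: affinity(meta_learner, t, target_episodes, train_step, eval_loss),
        reverse=True,
    )
    return ranked[:k]
```

Note how this matches the framework's division of labor: the episodic target tasks supply only the evaluation signal, while the selected episodic-free tasks do the actual training.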
Related papers
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Selecting task with optimal transport self-supervised learning for few-shot classification [15.088213168796772]
Few-shot classification aims at solving problems in which only a few samples are available during training.
We propose a novel task selecting algorithm, named Optimal Transport Task Selecting (OTTS), to construct a training set by selecting similar tasks for Few-Shot learning.
OTTS measures the task similarity by calculating the optimal transport distance and completes the model training via a self-supervised strategy.
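As a rough illustration of the kind of similarity measure OTTS relies on: for two equal-sized one-dimensional samples, the optimal transport (Wasserstein-1) distance reduces to the mean gap between their sorted values. This is a simplified stand-in for the full optimal transport computation on task embeddings, and all names below are hypothetical:

```python
def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-sized 1-D samples:
    the mean absolute gap between their sorted values."""
    a, b = sorted(a), sorted(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def most_similar_tasks(query_feats, candidate_tasks, k=1):
    """Rank candidate tasks by OT distance of their feature samples to the
    query task; the smallest distance marks the most similar task."""
    ranked = sorted(candidate_tasks,
                    key=lambda t: wasserstein_1d(query_feats, t["feats"]))
    return ranked[:k]
```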
arXiv Detail & Related papers (2022-04-01T08:45:29Z)
- Curriculum Meta-Learning for Few-shot Classification [1.5039745292757671]
We propose an adaptation of the curriculum training framework, applicable to state-of-the-art meta learning techniques for few-shot classification.
Our experiments with the MAML algorithm on two few-shot image classification tasks show significant gains with the curriculum training framework.
arXiv Detail & Related papers (2021-12-06T10:29:23Z)
- MetaICL: Learning to Learn In Context [87.23056864536613]
We introduce MetaICL, a new meta-training framework for few-shot learning in which a pretrained language model is tuned to do in-context learning on a large set of training tasks.
We show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task training data, and outperforms models with nearly 8x more parameters.
arXiv Detail & Related papers (2021-10-29T17:42:08Z)
- Uniform Sampling over Episode Difficulty [55.067544082168624]
We propose a method to approximate episode sampling distributions based on their difficulty.
As the proposed sampling method is algorithm agnostic, we can leverage these insights to improve few-shot learning accuracies.
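One simple way to realize sampling that is uniform over difficulty rather than over episodes is to bucket episodes by a difficulty score, pick a non-empty bucket uniformly, then pick an episode within it. This is a hypothetical sketch of the idea, not the paper's estimator; all names are assumptions:

```python
import random
from collections import defaultdict

def sample_uniform_over_difficulty(episodes, difficulty, n_bins=5, rng=random):
    """Draw one episode roughly uniformly over difficulty: bucket episodes
    by difficulty score, choose a bucket uniformly, then choose an episode
    inside that bucket."""
    scores = [difficulty(e) for e in episodes]
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins or 1.0       # guard against all-equal scores
    bins = defaultdict(list)
    for e, s in zip(episodes, scores):
        idx = min(int((s - lo) / width), n_bins - 1)
        bins[idx].append(e)
    bucket = rng.choice(sorted(bins))       # uniform over non-empty buckets
    return rng.choice(bins[bucket])
```

Because the sampler only re-weights which episode is drawn, it can wrap any episodic training loop, which is what makes this family of methods algorithm agnostic.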
arXiv Detail & Related papers (2021-08-03T17:58:54Z)
- Meta-Reinforcement Learning for Heuristic Planning [12.462608802359936]
In Meta-Reinforcement Learning (meta-RL) an agent is trained on a set of tasks to prepare for and learn faster in new, unseen, but related tasks.
We show that given a set of training tasks, learning can be both faster and more effective if the training tasks are appropriately selected.
We propose a task selection algorithm, Information-Theoretic Task Selection (ITTS), based on information theory.
arXiv Detail & Related papers (2021-07-06T13:25:52Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work we address the limitation that a single shared representation may not suit every task by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Few-Shot Image Classification via Contrastive Self-Supervised Learning [5.878021051195956]
We propose a new paradigm of unsupervised few-shot learning to address these deficiencies.
The few-shot tasks are solved in two phases, the first of which meta-trains a transferable feature extractor via contrastive self-supervised learning.
Our method achieves state-of-the-art performance on a variety of established few-shot tasks on the standard few-shot visual classification datasets.
arXiv Detail & Related papers (2020-08-23T02:24:31Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.