Adaptive Task Sampling for Meta-Learning
- URL: http://arxiv.org/abs/2007.08735v1
- Date: Fri, 17 Jul 2020 03:15:53 GMT
- Title: Adaptive Task Sampling for Meta-Learning
- Authors: Chenghao Liu and Zhihao Wang and Doyen Sahoo and Yuan Fang and Kun
Zhang and Steven C.H. Hoi
- Abstract summary: The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
- Score: 79.61146834134459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning methods have been extensively studied and applied in computer
vision, especially for few-shot classification tasks. The key idea of
meta-learning for few-shot classification is to mimic the few-shot situations
faced at test time by randomly sampling classes in meta-training data to
construct few-shot tasks for episodic training. While a rich line of work
focuses solely on how to extract meta-knowledge across tasks, we exploit the
complementary problem on how to generate informative tasks. We argue that the
randomly sampled tasks could be sub-optimal and uninformative (e.g., the task
of classifying "dog" from "laptop" is often trivial) to the meta-learner. In
this paper, we propose an adaptive task sampling method to improve the
generalization performance. Unlike instance based sampling, task based sampling
is much more challenging due to the implicit definition of the task in each
episode. Therefore, we accordingly propose a greedy class-pair based sampling
method, which selects difficult tasks according to class-pair potentials. We
evaluate our adaptive task sampling method on two few-shot classification
benchmarks, and it achieves consistent improvements across different feature
backbones, meta-learning algorithms and datasets.
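As a rough illustration of greedy class-pair based sampling, the sketch below builds an N-way episode by greedily adding classes with high accumulated potential against the classes already selected, and boosts the potential of class pairs the meta-learner recently confused. The potential matrix, the multiplicative update, and all function names are illustrative assumptions, not the authors' reference implementation.
```python
import numpy as np

def sample_task(potentials, n_way, rng=None):
    """Greedily assemble an n_way class set from a symmetric class-pair potential matrix.

    potentials[i, j] is assumed to encode how hard (hence how informative) it is to
    discriminate class i from class j; larger values mean harder pairs.
    """
    rng = rng or np.random.default_rng()
    num_classes = potentials.shape[0]
    selected = [int(rng.integers(num_classes))]      # seed with a random class for diversity
    while len(selected) < n_way:
        remaining = [c for c in range(num_classes) if c not in selected]
        # Score each candidate by its total potential against the classes picked so far.
        scores = np.array([potentials[c, selected].sum() for c in remaining], dtype=float)
        probs = scores / scores.sum() if scores.sum() > 0 else None
        selected.append(int(rng.choice(remaining, p=probs)))   # stochastic, not pure argmax
    return selected

def update_potentials(potentials, selected, confusion, lr=0.1):
    """Multiplicatively raise the potential of class pairs the meta-learner confused in
    the last episode (confusion[i, j] = error rate when separating i from j)."""
    for i in selected:
        for j in selected:
            if i != j:
                potentials[i, j] *= np.exp(lr * confusion[i, j])
    return potentials
```
A sampler of this kind would replace uniform class sampling when each training episode is constructed, with the potentials refreshed from the meta-learner's per-episode errors.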
Related papers
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
Heterogeneous Tasks Robust Meta-learning (HeTRoM) uses an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
arXiv Detail & Related papers (2024-10-24T16:32:23Z)
- Robust Meta-Reinforcement Learning with Curriculum-Based Task Sampling [0.0]
We show that Robust Meta Reinforcement Learning with Guided Task Sampling (RMRL-GTS) is an effective method that restricts task sampling based on scores and epochs.
In order to achieve robust meta-RL, it is necessary not only to intensively sample tasks with poor scores, but also to restrict and expand the task regions of the tasks to be sampled.
arXiv Detail & Related papers (2022-03-31T05:16:24Z)
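The score- and epoch-based restriction described in the RMRL-GTS entry above might look roughly like the following sketch: tasks with poor scores are sampled more often, and the admissible task region grows as training proceeds. The linear region schedule, the softmax weighting, and the function name are guesses for illustration only, not the published algorithm.
```python
import numpy as np

def sample_task_indices(scores, epoch, total_epochs, batch_size, rng=None):
    """Pick task indices for one meta-training iteration, favouring poorly-scoring tasks
    inside a region that expands linearly over epochs (an illustrative guess, not RMRL-GTS).

    scores: per-task evaluation scores from the latest rollout (higher is better).
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    num_tasks = len(scores)
    # Restrict the candidate pool early on, then widen it as training progresses.
    region = max(batch_size, int(num_tasks * (epoch + 1) / total_epochs))
    candidates = np.argsort(scores)[:region]          # lowest-scoring (hardest) tasks first
    # Softmax over negated scores: the worse a task scores, the more likely it is sampled.
    logits = -scores[candidates]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(candidates, size=batch_size, replace=False, p=probs)
```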
- Meta-Sampler: Almost-Universal yet Task-Oriented Sampling for Point Clouds [46.33828400918886]
We show how we can train an almost-universal meta-sampler across multiple tasks.
This meta-sampler can then be rapidly fine-tuned when applied to different datasets, networks, or even different tasks.
arXiv Detail & Related papers (2022-03-30T02:21:34Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our approach, meta-learning with task interpolation (MLTI), effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
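A minimal mixup-style sketch of the task interpolation summarized in the MLTI entry above: two sampled tasks are combined by linearly interpolating their features and (soft) labels with a Beta-distributed mixing coefficient. The Beta prior, the one-hot label format, and the helper names are assumptions; see the MLTI paper for the exact interpolation scheme.
```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=2.0, rng=None):
    """Form a synthetic task by mixing two sampled tasks.

    Each task is assumed to be a (features, one_hot_labels) pair of equally-shaped
    numpy arrays; the mixing weight is drawn from Beta(alpha, alpha) as in mixup.
    """
    rng = rng or np.random.default_rng()
    (xa, ya), (xb, yb) = task_a, task_b
    lam = rng.beta(alpha, alpha)
    x_new = lam * xa + (1.0 - lam) * xb    # interpolate features
    y_new = lam * ya + (1.0 - lam) * yb    # interpolate soft labels
    return x_new, y_new

# Usage sketch: mix two hypothetical 5-way, 5-shot tasks with 64-dimensional features.
labels = np.eye(5)[np.repeat(np.arange(5), 5)]
task_a = (np.random.randn(25, 64), labels)
task_b = (np.random.randn(25, 64), labels)
mixed_x, mixed_y = interpolate_tasks(task_a, task_b)
```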
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work we overcome this issue by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Meta-Regularization by Enforcing Mutual-Exclusiveness [0.8057006406834467]
We propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training.
Our proposed regularization function shows an accuracy boost of approximately 36% on the Omniglot dataset.
arXiv Detail & Related papers (2021-01-24T22:57:19Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, including MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [62.10696018098057]
We propose an easy-to-hard expert meta-training strategy to arrange the training tasks properly.
A task hardness aware module is designed and integrated into the training procedure to estimate the hardness of a task.
Experimental results on the miniImageNet and tieredImageNetSketch datasets show that the meta-learners can obtain better results with our expert training strategy.
arXiv Detail & Related papers (2020-07-13T08:49:00Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
arXiv Detail & Related papers (2020-07-09T18:03:12Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)