Meta-Learning with Fewer Tasks through Task Interpolation
- URL: http://arxiv.org/abs/2106.02695v1
- Date: Fri, 4 Jun 2021 20:15:34 GMT
- Title: Meta-Learning with Fewer Tasks through Task Interpolation
- Authors: Huaxiu Yao, Linjun Zhang, Chelsea Finn
- Abstract summary: Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
- Score: 67.03769747726666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning enables algorithms to quickly learn a newly encountered task
with just a few labeled examples by transferring previously learned knowledge.
However, the bottleneck of current meta-learning algorithms is the requirement
of a large number of meta-training tasks, which may not be accessible in
real-world scenarios. To address the challenge that available tasks may not
densely sample the space of tasks, we propose to augment the task set through
interpolation. By meta-learning with task interpolation (MLTI), our approach
effectively generates additional tasks by randomly sampling a pair of tasks and
interpolating the corresponding features and labels. Under both gradient-based
and metric-based meta-learning settings, our theoretical analysis shows MLTI
corresponds to a data-adaptive meta-regularization and further improves the
generalization. Empirically, in our experiments on eight datasets from diverse
domains including image recognition, pose prediction, molecule property
prediction, and medical image classification, we find that the proposed general
MLTI framework is compatible with representative meta-learning algorithms and
consistently outperforms other state-of-the-art strategies.
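
The interpolation step described above can be sketched as a mixup-style operation on pairs of tasks. Below is a minimal illustrative Python sketch, not the authors' implementation: helper names such as `interpolate_tasks` and `augment_task_set` are hypothetical, labels are assumed to be one-hot so they can be mixed linearly, and the mixing ratio is drawn from a Beta distribution as in mixup (the paper's MLTI variants may instead interpolate hidden representations, depending on whether tasks share a label space).

```python
# Illustrative mixup-style task interpolation (hypothetical helper names; not
# the authors' code). Labels are assumed one-hot so they can be mixed linearly.
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Build a synthetic task by interpolating the features and labels
    of two sampled tasks with arrays of matching shape."""
    rng = rng if rng is not None else np.random.default_rng()
    (xa, ya), (xb, yb) = task_a, task_b
    lam = rng.beta(alpha, alpha)           # mixing ratio, as in mixup
    x_new = lam * xa + (1.0 - lam) * xb    # interpolated features
    y_new = lam * ya + (1.0 - lam) * yb    # interpolated soft labels
    return x_new, y_new

def augment_task_set(tasks, n_new, alpha=0.5, seed=0):
    """Augment a small meta-training task set with n_new interpolated tasks."""
    rng = np.random.default_rng(seed)
    augmented = list(tasks)
    for _ in range(n_new):
        i, j = rng.choice(len(tasks), size=2, replace=False)  # random task pair
        augmented.append(interpolate_tasks(tasks[i], tasks[j], alpha, rng))
    return augmented
```

The augmented task set can then be passed to any gradient-based or metric-based meta-learner, consistent with the abstract's claim that MLTI is compatible with representative meta-learning algorithms.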
Related papers
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
Heterogeneous Tasks Robust Meta-learning (HeTRoM) uses an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
arXiv Detail & Related papers (2024-10-24T16:32:23Z)
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
- Towards Task Sampler Learning for Meta-Learning [37.02030832662183]
Meta-learning aims to learn general knowledge from diverse training tasks constructed from limited data, and then transfer it to new tasks.
It is commonly believed that increasing task diversity will enhance the generalization ability of meta-learning models.
This paper challenges this view through empirical and theoretical analysis.
arXiv Detail & Related papers (2023-07-18T01:53:18Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- ST-MAML: A Stochastic-Task based Method for Task-Heterogeneous Meta-Learning [12.215288736524268]
This paper proposes a novel method, ST-MAML, that empowers model-agnostic meta-learning (MAML) to learn from multiple task distributions.
We demonstrate that ST-MAML matches or outperforms the state-of-the-art on two few-shot image classification tasks, one curve regression benchmark, one image completion problem, and a real-world temperature prediction application.
arXiv Detail & Related papers (2021-09-27T18:54:50Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Information-Theoretic Generalization Bounds for Meta-Learning and Applications [42.275148861039895]
A key performance measure for meta-learning is the meta-generalization gap.
This paper presents novel information-theoretic upper bounds on the meta-generalization gap.
arXiv Detail & Related papers (2020-05-09T05:48:01Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.