Meta-learning Amidst Heterogeneity and Ambiguity
- URL: http://arxiv.org/abs/2107.02228v1
- Date: Mon, 5 Jul 2021 18:54:31 GMT
- Title: Meta-learning Amidst Heterogeneity and Ambiguity
- Authors: Kyeongryeol Go, Seyoung Yun
- Abstract summary: We devise a novel meta-learning framework, called Meta-learning Amidst Heterogeneity and Ambiguity (MAHA).
Through extensive experiments in regression and classification, we demonstrate the validity of our model.
- Score: 11.061517140668961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning aims to learn a model that can handle multiple tasks generated
from an unknown but shared distribution. However, typical meta-learning
algorithms assume that tasks are similar enough for a single meta-learner
to aggregate the variation across all of them. In addition, little attention
has been paid to the uncertainty that arises when only limited information is
given as context. In this paper, we devise a novel meta-learning framework,
called Meta-learning Amidst Heterogeneity and Ambiguity (MAHA), that
outperforms previous works in prediction thanks to its ability to identify
tasks. Through extensive experiments in regression and classification, we
demonstrate the validity of our model, which turns out to be robust to both
task heterogeneity and ambiguity.
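As context for the setting the abstract assumes, here is a minimal Python sketch of episodic meta-learning: tasks drawn from a shared but unknown distribution, each split into a small context set and a query set. The sinusoid task family and sizes are illustrative assumptions, not necessarily the paper's benchmark; note how a tiny context set leaves the task ambiguous.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw one task from a shared-but-unknown distribution.
    Here: a sinusoid with random amplitude and phase (illustrative only)."""
    amp = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def sample_episode(n_context=5, n_query=20):
    """Split each task into a small context set and a query set.
    With only 5 context points, many tasks are ambiguous: several
    distinct (amp, phase) pairs can explain the same context."""
    f = sample_task()
    x = rng.uniform(-5.0, 5.0, size=n_context + n_query)
    y = f(x)
    return (x[:n_context], y[:n_context]), (x[n_context:], y[n_context:])

(context_x, context_y), (query_x, query_y) = sample_episode()
print(context_x.shape, query_x.shape)  # (5,) (20,)
```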
Related papers
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
We propose Heterogeneous Tasks Robust Meta-learning (HeTRoM), together with an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
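The entry names bi-level optimization without further detail. Purely as an illustration (not HeTRoM's actual algorithm), the sketch below shows a generic first-order bi-level meta-update for linear regression: the inner loop adapts per-task weights on a support set, and the outer loop updates the shared initialization from query-set gradients.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_mse(w, X, y):
    """Gradient of mean squared error for a linear model X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def meta_step(w, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=3):
    """One bi-level update (first-order approximation): the inner loop
    adapts w on each task's support set; the outer loop moves the shared
    initialization using gradients taken at the adapted weights."""
    outer_grad = np.zeros_like(w)
    for (Xs, ys), (Xq, yq) in tasks:
        w_task = w.copy()
        for _ in range(inner_steps):               # inner problem
            w_task -= inner_lr * grad_mse(w_task, Xs, ys)
        outer_grad += grad_mse(w_task, Xq, yq)     # outer objective
    return w - outer_lr * outer_grad / len(tasks)

# Toy batch of tasks: (support, query) pairs with task-specific targets.
def make_task():
    w_true = rng.normal(size=3)
    X = rng.normal(size=(20, 3))
    y = X @ w_true
    return (X[:10], y[:10]), (X[10:], y[10:])

w = np.zeros(3)
w = meta_step(w, [make_task() for _ in range(4)])
```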
arXiv Detail & Related papers (2024-10-24T16:32:23Z)
- MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks [63.016244188951696]
We propose MetaModulation, a method for few-shot learning with fewer tasks.
We modify parameters at various batch levels to increase the number of meta-training tasks.
We also introduce learning variational feature hierarchies by incorporating variational MetaModulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z)
- Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification [28.38744876121834]
We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
arXiv Detail & Related papers (2022-07-25T17:01:29Z)
- Multidimensional Belief Quantification for Label-Efficient Meta-Learning [7.257751371276488]
We propose a novel uncertainty-aware task selection model for label-efficient meta-learning.
The proposed model formulates a multidimensional belief measure, which can quantify the known uncertainty and lower bound the unknown uncertainty of any given task.
Experiments conducted over multiple real-world few-shot image classification tasks demonstrate the effectiveness of the proposed model.
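The paper's multidimensional belief measure is not specified in this summary. One common way to separate known from unknown uncertainty, assumed here purely for illustration, is subjective logic over Dirichlet evidence, where per-class belief quantifies observed support and vacuity lower-bounds what is unknown.

```python
import numpy as np

def dirichlet_beliefs(evidence):
    """Subjective-logic style decomposition of class evidence
    (illustrative; not necessarily the paper's formulation).
    belief_k = e_k / S and vacuity = K / S, with S = sum(e) + K
    (i.e., the Dirichlet strength for alpha = evidence + 1)."""
    evidence = np.asarray(evidence, dtype=float)
    K = len(evidence)
    S = evidence.sum() + K
    belief = evidence / S           # per-class support ("known")
    vacuity = K / S                 # missing evidence ("unknown")
    return belief, vacuity

# A confidently classified example vs. an ambiguous one.
print(dirichlet_beliefs([40.0, 1.0, 1.0]))  # low vacuity
print(dirichlet_beliefs([0.5, 0.5, 0.5]))   # high vacuity -> worth labeling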
arXiv Detail & Related papers (2022-03-23T23:37:16Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our approach, Meta-Learning with Task Interpolation (MLTI), effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
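The interpolation step is mixup applied across tasks; a minimal sketch of the idea follows (the Beta hyperparameter and data shapes are illustrative assumptions, not the authors' exact recipe).

```python
import numpy as np

rng = np.random.default_rng(2)

def interpolate_tasks(task_a, task_b, alpha=0.5):
    """Create a new training task by mixing two sampled tasks:
    features and labels are convexly combined with a Beta-drawn
    weight, mixup-style (for classification, labels would be
    one-hot vectors before mixing)."""
    (Xa, ya), (Xb, yb) = task_a, task_b
    lam = rng.beta(alpha, alpha)
    X_new = lam * Xa + (1.0 - lam) * Xb
    y_new = lam * ya + (1.0 - lam) * yb
    return X_new, y_new

Xa, ya = rng.normal(size=(10, 4)), rng.normal(size=10)
Xb, yb = rng.normal(size=(10, 4)), rng.normal(size=10)
X_mix, y_mix = interpolate_tasks((Xa, ya), (Xb, yb))
```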
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Is Support Set Diversity Necessary for Meta-Learning? [14.231486872262531]
We propose a modification to traditional meta-learning approaches in which we keep the support sets fixed across tasks, thus reducing task diversity.
Surprisingly, we find that not only does this modification not result in adverse effects, it almost always improves the performance for a variety of datasets and meta-learning methods.
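The modification is easy to state in code. Below is a sketch under assumed names (the example pool and sampler are hypothetical, and class labels are omitted for brevity): standard training resamples the support set per episode, while the studied variant draws it once and reuses it.

```python
import numpy as np

rng = np.random.default_rng(3)
pool = rng.normal(size=(1000, 4))  # stand-in for a dataset of examples

def sample_episode(support=None, n_support=5, n_query=15):
    """Standard meta-learning resamples the support set every episode;
    the fixed-support variant reuses one support set and only resamples
    queries, deliberately reducing task diversity."""
    if support is None:
        support = pool[rng.choice(len(pool), n_support, replace=False)]
    query = pool[rng.choice(len(pool), n_query, replace=False)]
    return support, query

fixed_support, _ = sample_episode()                 # drawn once
episodes = [sample_episode(support=fixed_support)   # reused across tasks
            for _ in range(100)]
```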
arXiv Detail & Related papers (2020-11-28T02:28:42Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
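No sampling mechanism is given in this summary. One plausible instance of adaptive task sampling, shown purely as an assumption rather than the paper's method, is to pick tasks with probability increasing in their recent loss, so harder tasks are rehearsed more often.

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_task_choice(recent_losses, temperature=1.0):
    """Sample a task index via a softmax over running per-task losses,
    biasing training toward currently difficult tasks (an illustrative
    scheme, not necessarily the paper's)."""
    scores = np.asarray(recent_losses) / temperature
    probs = np.exp(scores - scores.max())  # stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

losses = [0.2, 1.5, 0.9, 0.1]       # running per-task losses
idx = adaptive_task_choice(losses)  # task 1 is picked most often
```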
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model, using its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
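The evaluation metric here is few-shot nearest-centroid classification with cosine similarity; a sketch of that meta-test step follows (embeddings are assumed to come from the pre-trained whole-classification backbone, and the temperature value is an assumption).

```python
import numpy as np

def nearest_centroid_logits(support_emb, support_labels, query_emb, tau=10.0):
    """Classify queries by temperature-scaled cosine similarity to
    per-class centroids of support embeddings (a sketch of the
    nearest-centroid metric Meta-Baseline meta-learns over)."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_emb[support_labels == c].mean(axis=0)
                          for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    return tau * q @ centroids.T  # shape: (n_query, n_classes)

rng = np.random.default_rng(5)
emb = rng.normal(size=(10, 16))           # 5-way 2-shot support embeddings
labels = np.repeat(np.arange(5), 2)
queries = rng.normal(size=(3, 16))
print(nearest_centroid_logits(emb, labels, queries).argmax(axis=1))
```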
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.