Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
- URL: http://arxiv.org/abs/2305.07625v1
- Date: Fri, 12 May 2023 17:25:19 GMT
- Title: Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
- Authors: Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li,
Henry Gouk, Li Guo, Timothy Hospedales
- Abstract summary: We introduce Meta Omnium, a dataset-of-datasets spanning multiple vision tasks.
We analyze the ability of popular few-shot meta-learning baselines to generalize across tasks and to transfer knowledge between them.
- Score: 15.0841751679151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning and other approaches to few-shot learning are widely studied
for image recognition, and are increasingly applied to other vision tasks such
as pose estimation and dense prediction. This naturally raises the question of
whether there is any few-shot meta-learning algorithm capable of generalizing
across these diverse task types. To support the community in answering this
question, we introduce Meta Omnium, a dataset-of-datasets spanning multiple
vision tasks including recognition, keypoint localization, semantic
segmentation and regression. We experiment with popular few-shot meta-learning
baselines and analyze their ability to generalize across tasks and to transfer
knowledge between them. Meta Omnium enables meta-learning researchers to
evaluate model generalization to a much wider array of tasks than previously
possible, and provides a single framework for evaluating meta-learners across a
wide suite of vision applications in a consistent manner.
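To make the evaluation setting concrete, below is a minimal sketch of what a unified few-shot episode interface across heterogeneous vision tasks could look like. It is not the Meta Omnium codebase or API; the names (Episode, adapt_and_predict, evaluate_episode) and the toy nearest-centroid / mean-prediction learners are illustrative assumptions only.
```python
# Minimal sketch (NOT the official Meta Omnium API): one episode interface shared
# by several vision task types, plus a generic per-episode evaluation loop.
from dataclasses import dataclass
import numpy as np

@dataclass
class Episode:
    task_type: str          # "classification", "keypoint", "segmentation", "regression"
    support_x: np.ndarray   # (n_support, feat_dim) pre-extracted features for simplicity
    support_y: np.ndarray   # labels / coordinates / masks / targets, task-dependent
    query_x: np.ndarray
    query_y: np.ndarray

def adapt_and_predict(episode: Episode) -> np.ndarray:
    """Trivial per-episode learner used only to illustrate the interface:
    nearest-centroid for classification, support-mean prediction otherwise."""
    if episode.task_type == "classification":
        classes = np.unique(episode.support_y)
        centroids = np.stack([episode.support_x[episode.support_y == c].mean(0) for c in classes])
        dists = ((episode.query_x[:, None, :] - centroids[None]) ** 2).sum(-1)
        return classes[dists.argmin(1)]
    # keypoint / dense / regression targets collapse to mean prediction in this toy sketch
    return np.repeat(episode.support_y.mean(0, keepdims=True), len(episode.query_x), axis=0)

def evaluate_episode(episode: Episode) -> float:
    """Task-appropriate metric: accuracy for classification, negative MSE otherwise."""
    pred = adapt_and_predict(episode)
    if episode.task_type == "classification":
        return float((pred == episode.query_y).mean())
    return float(-((pred - episode.query_y) ** 2).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ep = Episode("classification",
                 support_x=rng.normal(size=(10, 32)), support_y=rng.integers(0, 5, 10),
                 query_x=rng.normal(size=(15, 32)), query_y=rng.integers(0, 5, 15))
    print("episode score:", evaluate_episode(ep))
```
In a benchmark of this kind, the same loop would be driven by episodes sampled from many source datasets, with a genuine meta-learner (e.g., MAML or Prototypical Networks) standing in for the toy per-episode learner.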
Related papers
- Multimodality in Meta-Learning: A Comprehensive Survey [34.69292359136745]
This survey provides a comprehensive overview of the multimodality-based meta-learning landscape.
We first formalize the definition of meta-learning and multimodality, along with the research challenges in this growing field.
We then propose a new taxonomy to systematically discuss typical meta-learning algorithms combined with multimodal tasks.
arXiv Detail & Related papers (2021-09-28T09:16:12Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta-learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks.
Such a setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (see the sketch after this list).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art methods by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning and Bayesian meta-learning framework.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
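As a concrete illustration of the task-interpolation idea summarized in the Meta-Learning with Fewer Tasks through Task Interpolation entry above, here is a minimal mixup-style sketch. It assumes pre-extracted features and one-hot labels; the function name and the Beta(alpha, alpha) mixing coefficient are illustrative assumptions, not the paper's exact procedure.
```python
# Minimal sketch of mixup-style task interpolation: sample two tasks and
# interpolate their features and (one-hot) labels to synthesize an extra task.
import numpy as np

def interpolate_tasks(task_a, task_b, alpha: float = 0.5, rng=None):
    """task_a, task_b: dicts with 'x' (n, d) features and 'y' (n, c) one-hot labels.
    Returns a synthetic task of the same shape."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                 # mixing coefficient in [0, 1]
    n = min(len(task_a["x"]), len(task_b["x"]))  # align task sizes by truncation
    x = lam * task_a["x"][:n] + (1.0 - lam) * task_b["x"][:n]
    y = lam * task_a["y"][:n] + (1.0 - lam) * task_b["y"][:n]
    return {"x": x, "y": y}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    make_task = lambda: {"x": rng.normal(size=(20, 64)),
                         "y": np.eye(5)[rng.integers(0, 5, 20)]}
    extra_task = interpolate_tasks(make_task(), make_task(), rng=rng)
    print(extra_task["x"].shape, extra_task["y"].shape)  # (20, 64) (20, 5)
```
A task synthesized this way would simply be added to the meta-training pool, densifying the task distribution when only a few real tasks are available.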