Revisiting Meta-Learning as Supervised Learning
- URL: http://arxiv.org/abs/2002.00573v1
- Date: Mon, 3 Feb 2020 06:13:01 GMT
- Title: Revisiting Meta-Learning as Supervised Learning
- Authors: Wei-Lun Chao, Han-Jia Ye, De-Chuan Zhan, Mark Campbell, Kilian Q.
Weinberger
- Abstract summary: We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
- Score: 69.2067288158133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed an abundance of new publications and approaches
on meta-learning. This community-wide enthusiasm has sparked great insights but
has also created a plethora of seemingly different frameworks, which can be
hard to compare and evaluate. In this paper, we aim to provide a principled,
unifying framework by revisiting and strengthening the connection between
meta-learning and traditional supervised learning. By treating pairs of
task-specific data sets and target models as (feature, label) samples, we can
reduce many meta-learning algorithms to instances of supervised learning. This
view not only unifies meta-learning into an intuitive and practical framework
but also allows us to transfer insights from supervised learning directly to
improve meta-learning. For example, we obtain a better understanding of
generalization properties, and we can readily transfer well-understood
techniques, such as model ensemble, pre-training, joint training, data
augmentation, and even nearest neighbor based methods. We provide an intuitive
analogy of these methods in the context of meta-learning and show that they
give rise to significant improvements in model performance on few-shot
learning.
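To make the reduction concrete, below is a minimal, self-contained sketch (our illustration, not the authors' code) of the (feature, label) view: each meta-training task contributes a pair of a task-data embedding ("feature") and its trained target model's parameters ("label"), and a model for an unseen task is then predicted with an ordinary supervised learner, here the nearest-neighbor method the abstract mentions. The helpers embed_task and train_target_model are simplified stand-ins, not functions from the paper.

    import numpy as np

    def embed_task(X_support):
        # Hypothetical set embedding: mean-pool the support examples so a
        # whole data set becomes one fixed-length "feature" vector.
        return X_support.mean(axis=0)

    def train_target_model(X_support, y_support):
        # Stand-in base learner: least-squares linear weights, playing the
        # role of the "label" in the reduction.
        return np.linalg.lstsq(X_support, y_support, rcond=None)[0]

    rng = np.random.default_rng(0)

    # Meta-training set of (feature, label) = (embedding, weights) pairs.
    meta_features, meta_labels = [], []
    for _ in range(20):                   # 20 synthetic meta-training tasks
        X = rng.normal(size=(10, 5))      # 10 support examples, 5 dimensions
        y = rng.normal(size=10)
        meta_features.append(embed_task(X))
        meta_labels.append(train_target_model(X, y))

    # "Meta-inference" on a new task: plain supervised 1-nearest-neighbor
    # over task embeddings, reusing the stored model of the closest task.
    X_new = rng.normal(size=(10, 5))
    dists = [np.linalg.norm(embed_task(X_new) - f) for f in meta_features]
    predicted_weights = meta_labels[int(np.argmin(dists))]
    print(predicted_weights)              # parameters proposed for the new task

Under this view, swapping 1-nearest-neighbor for a parametric regressor from embeddings to weights recovers the flavor of learned meta-models, which is exactly the kind of supervised-learning transfer the abstract advertises.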
Related papers
- More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms [15.621144215664769]
We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.
Its main advantage is that it allows more flexibility in how the transfer of knowledge between tasks is realized.
arXiv Detail & Related papers (2024-02-06T15:00:08Z)
- Concept Discovery for Fast Adaptation [42.81705659613234]
We introduce concept discovery to the few-shot learning problem, where we achieve more effective adaptation by meta-learning the structure among the data features.
Our proposed method, Concept-Based Model-Agnostic Meta-Learning (COMAML), achieves consistent improvements on structured data for both synthesized and real-world datasets.
arXiv Detail & Related papers (2023-01-19T02:33:58Z)
- A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning [0.0]
This paper briefly reviews the connections between meta-learning and self-supervised learning.
We show that integrating meta-learning and self-supervised learning models can best improve model generalization capability.
arXiv Detail & Related papers (2021-03-01T08:31:28Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches, however, depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach (a generic sketch of the gradient-based adaptation loop it builds on appears after this list).
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and Bayesian meta-learning frameworks.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric (a rough sketch of this recipe appears after this list).
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning (ARML) framework that automatically extracts cross-task relations and constructs a meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification; the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
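As referenced in the 'Meta-learning the Learning Trends Shared Across Tasks' entry above, here is a generic sketch of the gradient-based (MAML-style) adaptation loop that this line of work builds on. It uses the first-order approximation on synthetic linear-regression tasks and is our illustration, not the proposed Path-aware variant.

    import numpy as np

    def grad_mse(w, X, y):
        # Gradient of mean squared error for a linear model X @ w.
        return 2.0 * X.T @ (X @ w - y) / len(y)

    rng = np.random.default_rng(2)
    w_meta = np.zeros(5)                  # meta-learned initialization
    alpha, beta = 0.05, 0.01              # inner / outer learning rates

    for step in range(200):
        # Sample a synthetic task: a random linear-regression problem.
        w_task = rng.normal(size=5)
        X_s, X_q = rng.normal(size=(10, 5)), rng.normal(size=(10, 5))
        y_s, y_q = X_s @ w_task, X_q @ w_task

        # Inner loop: one gradient step on the task's support set.
        w_adapted = w_meta - alpha * grad_mse(w_meta, X_s, y_s)

        # Outer loop (first-order approximation): improve the shared
        # initialization using the adapted weights' query-set gradient.
        w_meta -= beta * grad_mse(w_adapted, X_q, y_q)

Note that the inner update uses only the current task's support set; per the entry above, the Path-aware approach is motivated by additionally conditioning adaptation on learning trends shared across tasks.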
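And as promised in the Meta-Baseline entry, a rough sketch (assumed details, not the released code) of evaluating a whole-classification pre-trained model with a few-shot metric: queries are assigned to the class whose support-set centroid is most cosine-similar in the pre-trained feature space.

    import numpy as np

    def cosine_centroid_classify(support_feats, support_labels, query_feats):
        # support_feats: (N, D) features from a pre-trained backbone
        # (assumed given); returns a predicted class id per query.
        classes = np.unique(support_labels)
        centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                              for c in classes])
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
        q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
        return classes[np.argmax(q @ centroids.T, axis=1)]

    rng = np.random.default_rng(1)
    support = rng.normal(size=(15, 64))   # 5-way 3-shot support features
    labels = np.repeat(np.arange(5), 3)   # class ids 0..4, three shots each
    queries = rng.normal(size=(10, 64))
    print(cosine_centroid_classify(support, labels, queries))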