A Comprehensive Overview and Survey of Recent Advances in Meta-Learning
- URL: http://arxiv.org/abs/2004.11149v7
- Date: Mon, 26 Oct 2020 06:18:08 GMT
- Title: A Comprehensive Overview and Survey of Recent Advances in Meta-Learning
- Authors: Huimin Peng
- Abstract summary: Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and the Bayesian meta-learning framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article reviews meta-learning, also known as
learning-to-learn, which seeks rapid and accurate model adaptation to unseen
tasks, with applications in highly automated AI, few-shot learning, natural
language processing and robotics. Unlike deep learning, meta-learning can be
applied to few-shot high-dimensional datasets and aims to further improve
model generalization to unseen tasks. Deep learning focuses on in-sample
prediction, whereas meta-learning concerns model adaptation for out-of-sample
prediction. Meta-learning can continually perform self-improvement toward
highly autonomous AI, and it may serve as an additional generalization block
complementary to the original deep learning model. Meta-learning seeks to
adapt machine learning models to unseen tasks that are vastly different from
the training tasks. Meta-learning with coevolution between agent and
environment provides solutions for complex tasks that are unsolvable by
training from scratch. Meta-learning methodology spans a wide range of ideas
and approaches. We briefly introduce meta-learning methodologies in the
following categories: black-box meta-learning, metric-based meta-learning,
layered meta-learning, and the Bayesian meta-learning framework. Recent
applications concentrate upon the integration of meta-learning with other
machine learning frameworks to provide feasible integrated problem solutions.
We briefly present recent meta-learning advances and discuss potential future
research directions.
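The layered (optimization-based) category above can be made concrete with a short sketch. Below is a minimal first-order MAML-style loop in NumPy; the 1-D linear-regression task family, single inner gradient step, and step sizes are illustrative assumptions for this summary, not details taken from the survey.

```python
# Minimal first-order MAML sketch (FOMAML) in NumPy: meta-learn an
# initialization that adapts to a new task in one gradient step.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a 1-D linear-regression task y = a*x + b and return a sampler."""
    a, b = rng.uniform(-2, 2), rng.uniform(-1, 1)
    def batch(n):
        x = rng.uniform(-1, 1, n)
        return x, a * x + b
    return batch

def mse_grad(w, b, x, y):
    """Gradient of mean squared error for the linear model w*x + b."""
    err = w * x + b - y
    return 2 * np.mean(err * x), 2 * np.mean(err)

inner_lr, outer_lr = 0.1, 0.01
w, b = 0.0, 0.0                          # meta-parameters: the shared init

for step in range(2000):
    batch = sample_task()
    xs, ys = batch(5)                    # support set: adapt on these
    xq, yq = batch(20)                   # query set: evaluate the adaptation
    # Inner loop: one gradient step away from the shared initialization.
    gw, gb = mse_grad(w, b, xs, ys)
    w_t, b_t = w - inner_lr * gw, b - inner_lr * gb
    # Outer loop (first-order approximation): move the initialization toward
    # parameters that perform well *after* adaptation.
    gq_w, gq_b = mse_grad(w_t, b_t, xq, yq)
    w, b = w - outer_lr * gq_w, b - outer_lr * gq_b
```

The first-order variant drops the second derivatives of the full MAML objective, which keeps the sketch short at some cost in fidelity.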
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms (a minimal metric-based sketch follows this entry).
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
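Metric-based meta-learning, one of the categories named in the abstract, classifies a query by its distance to class representatives built from the support set. Here is a minimal nearest-prototype sketch in NumPy (in the spirit of prototypical networks, not ConML itself); the identity embedding and the toy episode are illustrative assumptions, since a real system would meta-learn the embedding.

```python
# Nearest-prototype classification sketch: prototype = class mean embedding.
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query point to the class with the nearest prototype."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# 2-way 3-shot toy episode with 2-D features.
rng = np.random.default_rng(0)
sx = np.concatenate([rng.normal(0, 0.3, (3, 2)), rng.normal(2, 0.3, (3, 2))])
sy = np.array([0, 0, 0, 1, 1, 1])
qx = np.array([[0.1, -0.2], [1.9, 2.1]])
print(classify(qx, prototypes(sx, sy, 2)))   # expected: [0 1]
```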
Awesome-META+ is a complete and reliable meta-learning framework application and learning platform.
The project aims to promote the development of meta-learning and the expansion of the community.
arXiv Detail & Related papers (2023-04-24T03:09:25Z)
- General-Purpose In-Context Learning by Meta-Learning Transformers [45.63069059498147]
We show that Transformers and other black-box models can be meta-trained to act as general-purpose in-context learners (see the sketch after this entry).
We characterize transitions between algorithms that generalize, algorithms that memorize, and algorithms that fail to meta-train at all.
We propose practical interventions such as biasing the training distribution that improve the meta-training and meta-generalization of general-purpose in-context learning algorithms.
arXiv Detail & Related papers (2022-12-08T18:30:22Z)
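To make the black-box flavor concrete, here is a minimal in-context learner sketch in NumPy: a small MLP reads an entire support set plus a query input as one vector and is meta-trained across tasks to predict the query label. The linear task family, network size, and plain per-episode SGD are illustrative assumptions; the paper itself uses Transformers.

```python
# Black-box / in-context meta-learning sketch: the "learning algorithm" is
# just the forward pass of a network that conditions on the support set.
import numpy as np

rng = np.random.default_rng(0)
K, H = 5, 64                              # shots per task, hidden width
W1 = rng.normal(0, 0.1, (H, 2 * K + 1)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H);              b2 = 0.0

def sample_episode():
    """One task y = a*x + b: K support pairs plus one held-out query point."""
    a, b = rng.uniform(-2, 2), rng.uniform(-1, 1)
    x = rng.uniform(-1, 1, K + 1)
    y = a * x + b
    inp = np.concatenate([x[:K], y[:K], x[K:]])   # support pairs + query x
    return inp, y[K]

lr = 0.01
for step in range(20000):                 # meta-training over many tasks
    inp, target = sample_episode()
    h = np.tanh(W1 @ inp + b1)
    pred = W2 @ h + b2
    err = pred - target
    # Backprop for squared error (gradient of 0.5 * err**2).
    gW2, gb2 = err * h, err
    gh = err * W2 * (1 - h ** 2)
    gW1, gb1 = np.outer(gh, inp), gh
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

After enough meta-training steps the network "learns" a new task purely by reading its support set, with no gradient updates at test time.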
- Learning with Limited Samples -- Meta-Learning and Applications to Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms that can adapt to new tasks quickly and efficiently.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta-learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks (see the sketch after this entry).
This setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
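A minimal sketch of the idea above, with two loudly labeled assumptions (the hand-crafted moment-based task embedding and the softplus link are illustrative choices, not the paper's design): a small parametric function maps support-set statistics to a per-task inner-loop learning rate.

```python
# Hyperparameter prediction function conditioned on a task: embed the task,
# then map the embedding to a positive learning rate.
import numpy as np

def task_embedding(xs, ys):
    """Crude task descriptor: first/second moments of the support set
    (an illustrative assumption; real systems learn richer embeddings)."""
    return np.array([xs.mean(), xs.std(), ys.mean(), ys.std()])

def predict_lr(v, embed):
    """Map a task embedding to a learning rate; softplus keeps it > 0."""
    return np.log1p(np.exp(v @ embed))

# v would itself be meta-learned across tasks; fixed here for brevity.
v = np.array([0.0, 0.1, 0.0, 0.2])
xs = np.linspace(-1, 1, 5); ys = 1.5 * xs + 0.3
lr_task = predict_lr(v, task_embedding(xs, ys))
# lr_task then drives the inner-loop gradient step for this task,
# e.g. w_adapted = w - lr_task * grad (as in the FOMAML sketch above).
```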
- A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning [0.0]
This paper briefly reviews the connections between meta-learning and self-supervised learning.
We show that integrating meta-learning with self-supervised learning can best improve model generalization capability.
arXiv Detail & Related papers (2021-03-01T08:31:28Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z)
- Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning (see the sketch after this entry).
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
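As a closing illustration of the reduction described in the last entry, the sketch below treats each task's data set as a feature vector and the task's target model parameters as the label, then fits an ordinary least-squares regressor across tasks. The moment-based featurization and the linear-regression task family are illustrative assumptions, not the paper's construction.

```python
# Meta-learning as supervised learning: (data set, target model) pairs
# become (feature, label) samples for an ordinary regressor.
import numpy as np

rng = np.random.default_rng(0)

def featurize(xs, ys):
    """Fixed-length feature for a whole task data set (simple statistics)."""
    return np.array([1.0, xs.mean(), ys.mean(),
                     (xs * ys).mean(), (xs ** 2).mean()])

feats, labels = [], []
for _ in range(500):                          # meta-training tasks
    a, b = rng.uniform(-2, 2), rng.uniform(-1, 1)
    xs = rng.uniform(-1, 1, 10); ys = a * xs + b
    feats.append(featurize(xs, ys))
    labels.append([a, b])                     # "label" = the task's target model
F, Y = np.array(feats), np.array(labels)
W, *_ = np.linalg.lstsq(F, Y, rcond=None)     # plain supervised regression

# At meta-test time, a new task's model is predicted in one shot:
xs = rng.uniform(-1, 1, 10); ys = 0.7 * xs - 0.4
print(featurize(xs, ys) @ W)                  # roughly recovers [0.7, -0.4]
```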