Meta-Learning in Neural Networks: A Survey
- URL: http://arxiv.org/abs/2004.05439v2
- Date: Sat, 7 Nov 2020 20:22:57 GMT
- Title: Meta-Learning in Neural Networks: A Survey
- Authors: Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey
- Abstract summary: This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
- Score: 4.588028371034406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of meta-learning, or learning-to-learn, has seen a dramatic rise in
interest in recent years. Contrary to conventional approaches to AI where tasks
are solved from scratch using a fixed learning algorithm, meta-learning aims to
improve the learning algorithm itself, given the experience of multiple
learning episodes. This paradigm provides an opportunity to tackle many
conventional challenges of deep learning, including data and computation
bottlenecks, as well as generalization. This survey describes the contemporary
meta-learning landscape. We first discuss definitions of meta-learning and
position it with respect to related fields, such as transfer learning and
hyperparameter optimization. We then propose a new taxonomy that provides a
more comprehensive breakdown of the space of meta-learning methods today. We
survey promising applications and successes of meta-learning such as few-shot
learning and reinforcement learning. Finally, we discuss outstanding challenges
and promising areas for future research.
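The learning-to-learn loop the abstract describes — improving the learning algorithm itself from many learning episodes — can be sketched with a first-order MAML-style update. This is a minimal illustration, not taken from the survey: the 1-D linear task family, the scalar model, and the learning rates are all assumptions for the sake of a runnable toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical task family: 1-D regression y = a * x with random slope a
    return rng.uniform(0.5, 1.5)

def sample_data(a, n=10):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x

def mse_grad(w, x, y):
    # Gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w
    return np.mean((w * x - y) * x)

def fomaml(meta_steps=2000, inner_lr=0.5, outer_lr=0.05):
    """First-order MAML: learn an initialization w that adapts well
    after a single inner gradient step on each new task."""
    w = 0.0  # meta-initialization to be learned
    for _ in range(meta_steps):
        a = sample_task()
        x_s, y_s = sample_data(a)  # support set: inner adaptation
        w_adapted = w - inner_lr * mse_grad(w, x_s, y_s)
        x_q, y_q = sample_data(a)  # query set: outer meta-update
        # First-order approximation: evaluate the outer gradient at the
        # adapted weights instead of differentiating through the inner step
        w -= outer_lr * mse_grad(w_adapted, x_q, y_q)
    return w
```

After meta-training, the initialization sits near the centre of the task distribution, so one inner gradient step on a handful of examples from a new task already gives a good fit — the "experience of multiple learning episodes" is baked into the starting point.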
Related papers
- Meta-Learning and representation learner: A short theoretical note [0.0]
Meta-learning is a subfield of machine learning where the goal is to develop models and algorithms that can learn from various tasks.
Unlike traditional machine learning methods, which focus on learning a specific task, meta-learning aims to leverage experience from previous tasks to enhance future learning.
arXiv Detail & Related papers (2024-07-04T23:47:10Z)
- When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z)
- Meta-learning approaches for few-shot learning: A survey of recent advances [12.052118555436081]
Despite its success in learning from complex, multi-dimensional data, the performance of deep learning declines on new, unseen tasks.
Deep learning is notorious for poor generalization from few samples.
This survey first briefly introduces meta-learning and then investigates state-of-the-art meta-learning methods.
arXiv Detail & Related papers (2023-03-13T22:20:39Z)
- Learning with Limited Samples -- Meta-Learning and Applications to Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms so that they can adapt to new tasks quickly and efficiently.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z)
- Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in the natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field of machine learning that studies approaches to learning better learning algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning and Bayesian meta-learning framework.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
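The reduction in the last entry — treating pairs of task-specific data sets and target models as (feature, label) samples — can be made concrete with a small sketch. This is an assumed toy construction, not the paper's actual formulation: tasks are noisy 1-D linear regressions on a fixed grid, the "feature" is a task's observed data, the "label" is the task's true parameter, and the meta-learner is plain least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
x_grid = np.linspace(-1.0, 1.0, 10)  # fixed design shared by all tasks

def make_meta_example():
    # One (feature, label) sample of the meta-dataset:
    # feature = the task's observed targets, label = the task's parameter
    a = rng.uniform(-2.0, 2.0)                       # hypothetical slope family
    y = a * x_grid + 0.05 * rng.standard_normal(10)  # noisy linear task
    return y, a

# Stack many tasks into one standard supervised regression problem
examples = [make_meta_example() for _ in range(500)]
F = np.array([y for y, _ in examples])  # (500, 10) "features": task data
t = np.array([a for _, a in examples])  # (500,)   "labels":   task parameters

# The "meta-learner" is ordinary least squares over the meta-dataset:
# it learns to map a task's data directly to that task's model parameter
w_meta, *_ = np.linalg.lstsq(F, t, rcond=None)

def predict_parameter(y_observed):
    # Zero-shot parameter estimate for a new task from its data alone
    return float(y_observed @ w_meta)
```

For a fresh task with slope 1.3, `predict_parameter(1.3 * x_grid)` returns an estimate close to 1.3: the meta-learning problem here is literally an instance of supervised learning over tasks, which is the view the entry above advocates.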
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.