A Survey of Deep Meta-Learning
- URL: http://arxiv.org/abs/2010.03522v2
- Date: Wed, 21 Apr 2021 13:33:40 GMT
- Title: A Survey of Deep Meta-Learning
- Authors: Mike Huisman and Jan N. van Rijn and Aske Plaat
- Abstract summary: Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources.
However, their ability to learn new concepts quickly is limited.
Deep Meta-Learning is one approach to address this issue, by enabling the network to learn how to learn.
- Score: 1.2891210250935143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks can achieve great successes when presented with large
data sets and sufficient computational resources. However, their ability to
learn new concepts quickly is limited. Meta-learning is one approach to address
this issue, by enabling the network to learn how to learn. The field of Deep
Meta-Learning advances at great speed, but lacks a unified, in-depth overview
of current techniques. With this work, we aim to bridge this gap. After
providing the reader with a theoretical foundation, we investigate and
summarize key methods, which are categorized into i)~metric-, ii)~model-, and
iii)~optimization-based techniques. In addition, we identify the main open
challenges, such as performance evaluations on heterogeneous benchmarks, and
reduction of the computational costs of meta-learning.
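Of the three families above, optimization-based methods such as MAML are perhaps the easiest to sketch: a shared initialization is meta-learned so that a few gradient steps on a new task already perform well. Below is a minimal first-order sketch on linear regression tasks; the linear model, the task distribution, and the step sizes `alpha`/`beta` are illustrative assumptions, not the survey's formulation:

```python
import numpy as np

def inner_update(w, X, y, alpha=0.1):
    """Task-specific adaptation (inner loop): one gradient step
    on the mean-squared error of a linear model y ~ X @ w."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - alpha * grad

def maml_step(w, tasks, alpha=0.1, beta=0.01):
    """Meta-update (outer loop), first-order approximation:
    the validation-loss gradient is evaluated at the adapted
    weights, ignoring second derivatives."""
    meta_grad = np.zeros_like(w)
    for X_tr, y_tr, X_val, y_val in tasks:
        w_adapted = inner_update(w, X_tr, y_tr, alpha)
        meta_grad += 2.0 * X_val.T @ (X_val @ w_adapted - y_val) / len(y_val)
    return w - beta * meta_grad / len(tasks)
```

Repeating `maml_step` over many sampled tasks drives the initialization toward a point from which each task is reachable in a single inner step; full MAML would additionally backpropagate through the inner update.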
Related papers
- When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z) - Concept Discovery for Fast Adaptation [42.81705659613234]
We introduce concept discovery to the few-shot learning problem, where we achieve more effective adaptation by meta-learning the structure among the data features.
Our proposed method Concept-Based Model-Agnostic Meta-Learning (COMAML) has been shown to achieve consistent improvements in the structured data for both synthesized datasets and real-world datasets.
arXiv Detail & Related papers (2023-01-19T02:33:58Z) - Learning with Limited Samples -- Meta-Learning and Applications to
Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms so that they can adapt efficiently to new tasks.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z) - State-of-the-art Techniques in Deep Edge Intelligence [0.0]
Edge Intelligence (EI) has quickly emerged as a powerful alternative that enables learning using the concepts of Edge Computing.
In this article, we provide an overview of the major constraints in operationalizing Deep Edge Intelligence (DEI).
arXiv Detail & Related papers (2020-08-03T12:17:23Z) - Meta-Gradient Reinforcement Learning with an Objective Discovered Online [54.15180335046361]
We propose an algorithm based on meta-gradient descent that discovers its own objective, flexibly parameterised by a deep neural network.
Because the objective is discovered online, it can adapt to changes over time.
On the Atari Learning Environment, the meta-gradient algorithm adapts over time to learn with greater efficiency.
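The core mechanics of online meta-gradient descent can be illustrated on a toy problem: a base learner's hyperparameter (here a scalar step size, rather than the full neural objective the paper learns) is updated by differentiating the next-step loss with respect to it. The quadratic objective and all constants below are illustrative assumptions:

```python
import numpy as np

def run_meta_gradient(w0=5.0, eta0=0.01, beta=1e-4, steps=200):
    """Online meta-gradient descent on f(w) = w**2: the step size
    eta is itself adapted by the gradient of the post-update loss
    f(w_next) with respect to eta."""
    w, eta = w0, eta0
    for _ in range(steps):
        g = 2.0 * w                # gradient of f at w
        w_next = w - eta * g       # inner (base-learner) update
        # Meta-gradient: d f(w_next) / d eta = f'(w_next) * (-g)
        meta_g = 2.0 * w_next * (-g)
        eta = max(1e-4, eta - beta * meta_g)
        w = w_next
    return w, eta
```

Because the meta-objective is evaluated online at each step, the step size keeps adjusting as optimization proceeds, which is the adaptivity the summary above refers to.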
arXiv Detail & Related papers (2020-07-16T16:17:09Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
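The idea of classifying along concept dimensions can be sketched with per-concept class prototypes, in the spirit of metric-based few-shot learning. The fixed `concept_slices` partition of the feature vector below is a simplifying assumption; COMET itself learns concept embeddings rather than taking them as given:

```python
import numpy as np

def concept_prototype_predict(support, support_labels, query, concept_slices):
    """For each class and each concept dimension, build a prototype
    from the support set; score a query by its summed negative
    distances to the prototypes and return the best class."""
    classes = np.unique(support_labels)
    scores = np.zeros((len(query), len(classes)))
    for ci, c in enumerate(classes):
        members = support[support_labels == c]
        for sl in concept_slices:
            proto = members[:, sl].mean(axis=0)            # concept prototype
            dist = np.linalg.norm(query[:, sl] - proto, axis=1)
            scores[:, ci] -= dist                          # closer => higher score
    return classes[np.argmax(scores, axis=1)]
```

Scoring each concept separately lets a prediction be traced back to the concept dimensions that contributed most, which is the source of the interpretability claimed above.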
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - Offline Reinforcement Learning: Tutorial, Review, and Perspectives on
Open Problems [108.81683598693539]
Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines.
We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods.
arXiv Detail & Related papers (2020-05-04T17:00:15Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z) - Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z) - Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G
Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.