Meta-learning approaches for few-shot learning: A survey of recent
advances
- URL: http://arxiv.org/abs/2303.07502v1
- Date: Mon, 13 Mar 2023 22:20:39 GMT
- Title: Meta-learning approaches for few-shot learning: A survey of recent
advances
- Authors: Hassan Gharoun, Fereshteh Momenifar, Fang Chen, and Amir H. Gandomi
- Abstract summary: Despite its success in learning deeper multi-dimensional data, the performance of deep learning declines on new unseen tasks.
Deep learning is notorious for poor generalization from few samples.
This survey first briefly introduces meta-learning and then investigates state-of-the-art meta-learning methods.
- Score: 12.052118555436081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite its astounding success in learning deeper multi-dimensional data, the
performance of deep learning declines on new unseen tasks mainly due to its
focus on same-distribution prediction. Moreover, deep learning is notorious for
poor generalization from few samples. Meta-learning is a promising approach
that addresses these issues by adapting to new tasks with few-shot datasets.
This survey first briefly introduces meta-learning and then investigates
state-of-the-art meta-learning methods and recent advances in: (I)
metric-based, (II) memory-based, and (III) learning-based methods. Finally,
current challenges and insights for future research are discussed.
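The metric-based family mentioned in the abstract can be illustrated with a minimal nearest-prototype classifier in the style of prototypical networks. This is a hypothetical sketch (embeddings are assumed to be precomputed by some encoder), not code from the surveyed papers:

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Classify query embeddings by their nearest class prototype (Euclidean)."""
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's few-shot support examples.
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Squared distance from each query embedding to each prototype.
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a full metric-based method the embeddings would come from a learned network; here the classifier itself is the only moving part.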
Related papers
- When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z)
- Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML to introduce additional representation for each sentence learned from the extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z)
- Lessons from Chasing Few-Shot Learning Benchmarks: Rethinking the Evaluation of Meta-Learning Methods [9.821362920940631]
We introduce a simple baseline for meta-learning, FIX-ML.
We explore two possible goals of meta-learning: to develop methods that generalize (i) to the same task distribution that generates the training set (in-distribution), or (ii) to new, unseen task distributions (out-of-distribution).
Our results highlight that in order to reason about progress in this space, it is necessary to provide a clearer description of the goals of meta-learning, and to develop more appropriate evaluation strategies.
arXiv Detail & Related papers (2021-02-23T05:34:30Z)
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches only depend on the current task information during the adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
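The gradient-based adaptation this entry builds on can be sketched with a first-order MAML-style loop on toy linear regression tasks. This is an illustrative simplification (scalar model, hand-coded gradient, first-order update), not the 'Path-aware' method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w, x, y):
    # Gradient of mean squared error for the linear model y_hat = w * x.
    return 2.0 * np.mean(x * (w * x - y))

w_meta, inner_lr, outer_lr = 0.0, 0.1, 0.05
for step in range(500):
    slope = rng.uniform(1.0, 3.0)                # each task: y = slope * x
    x_s, x_q = rng.normal(size=5), rng.normal(size=5)
    y_s, y_q = slope * x_s, slope * x_q
    # Inner loop: one gradient step of adaptation on the task's support set.
    w_task = w_meta - inner_lr * grad(w_meta, x_s, y_s)
    # Outer loop (first-order MAML): apply the query-set gradient,
    # evaluated at the adapted weights, to the meta-initialization.
    w_meta -= outer_lr * grad(w_task, x_q, y_q)
```

The meta-initialization drifts toward a point from which one inner step fits any slope in the task distribution well; note that, as the entry observes, only the current task's data enters each adaptation step.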
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- A Survey of Deep Meta-Learning [1.2891210250935143]
Deep neural networks can achieve great successes when presented with large data sets and sufficient computational resources.
However, their ability to learn new concepts quickly is limited.
Deep Meta-Learning is one approach to address this issue, by enabling the network to learn how to learn.
arXiv Detail & Related papers (2020-10-07T17:09:02Z)
- Deep Learning for Change Detection in Remote Sensing Images: Comprehensive Review and Meta-Analysis [12.462608802359936]
We first introduce the fundamentals of deep learning methods which are frequently adopted for change detection.
Then, we focus on deep learning-based change detection methodologies for remote sensing images by giving a general overview of the existing methods.
As a result of these investigations, promising new directions were identified for future research.
arXiv Detail & Related papers (2020-06-10T02:14:08Z)
- Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
- Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning [74.07455280246212]
Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones.
We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario.
We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario.
arXiv Detail & Related papers (2020-03-12T15:47:16Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
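The two-stage recipe in the entry above — learn a common set of linear features from many related tasks, then transfer them to a new task with few samples — can be sketched with ordinary least squares and an SVD. This is a toy illustration under noiseless assumptions, not the paper's provable, sample-efficient algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, T = 10, 2, 50                                 # ambient dim, shared rank, # tasks
B_true, _ = np.linalg.qr(rng.normal(size=(d, k)))   # ground-truth shared features

# Meta-training: per-task least-squares estimates, then an SVD of the stacked
# estimates to recover the shared low-dimensional subspace.
W = []
for _ in range(T):
    a = rng.normal(size=k)                          # task-specific coefficients
    x = rng.normal(size=(100, d))
    y = x @ (B_true @ a)
    W.append(np.linalg.lstsq(x, y, rcond=None)[0])
U, _, _ = np.linalg.svd(np.stack(W, axis=1), full_matrices=False)
B_hat = U[:, :k]                                    # estimated shared representation

# Transfer: fit a new task from only 8 samples inside the learned subspace,
# far fewer than the d = 10 samples a full regression would need.
a_new = rng.normal(size=k)
x_new = rng.normal(size=(8, d))
y_new = x_new @ (B_true @ a_new)
coef = np.linalg.lstsq(x_new @ B_hat, y_new, rcond=None)[0]
w_transfer = B_hat @ coef
```

With noiseless data the subspace is recovered exactly, so the few-shot fit matches the true task weights; the paper's contribution is making this work sample-efficiently under noise, with matching lower bounds.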
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.