Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
- URL: http://arxiv.org/abs/2003.04390v4
- Date: Thu, 19 Aug 2021 06:15:12 GMT
- Title: Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
- Authors: Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang
- Abstract summary: We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
- Score: 79.25478727351604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning has been the most common framework for few-shot learning in
recent years. It learns the model from collections of few-shot classification
tasks, which is believed to have a key advantage of making the training
objective consistent with the testing objective. However, some recent works
report that training for whole-classification, i.e., classification over the
whole label set, can yield embeddings comparable to or even better than those
of many meta-learning algorithms. The boundary between these two lines of work
remains underexplored, and the effectiveness of meta-learning in few-shot learning
remains unclear. In this paper, we explore a simple process: meta-learning over
a whole-classification pre-trained model on its evaluation metric. We observe
that this simple method achieves performance competitive with state-of-the-art
methods on standard benchmarks. Our further analysis sheds light on
the trade-offs between the meta-learning objective and the whole-classification
objective in few-shot learning.
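As a concrete illustration of "meta-learning over a whole-classification pre-trained model on its evaluation metric", the sketch below classifies query examples by cosine similarity to class centroids computed from the support set, and meta-learns by backpropagating the query loss through that metric. The backbone, temperature value, and episode shapes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def cosine_centroid_logits(support, support_labels, query, n_way, tau=10.0):
    """Nearest-centroid classification with cosine similarity.

    support: [n_way * k_shot, d] embeddings from the pre-trained backbone.
    support_labels: [n_way * k_shot] integer class ids in [0, n_way).
    query: [n_query, d] embeddings to classify.
    tau: scaling temperature; a fixed scalar here (assumption).
    """
    # Class centroids: mean of the support embeddings of each class.
    centroids = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    # Cosine similarity between every query and every centroid, scaled by tau.
    logits = tau * F.cosine_similarity(
        query.unsqueeze(1), centroids.unsqueeze(0), dim=-1
    )
    return logits  # [n_query, n_way]

# Meta-learning stage (sketch): sample an episode, compute the cross-entropy
# loss on its query set, and update the backbone by gradient descent.
# backbone = ...  # whole-classification pre-trained encoder (assumed given)
# support_emb, query_emb = backbone(support_imgs), backbone(query_imgs)
# loss = F.cross_entropy(
#     cosine_centroid_logits(support_emb, support_labels, query_emb, n_way=5),
#     query_labels)
# loss.backward(); optimizer.step()
```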
Related papers
- Lessons from Chasing Few-Shot Learning Benchmarks: Rethinking the Evaluation of Meta-Learning Methods [9.821362920940631]
We introduce a simple baseline for meta-learning, FIX-ML.
We explore two possible goals of meta-learning: to develop methods that generalize (i) to the same task distribution that generates the training set (in-distribution), or (ii) to new, unseen task distributions (out-of-distribution).
Our results highlight that in order to reason about progress in this space, it is necessary to provide a clearer description of the goals of meta-learning, and to develop more appropriate evaluation strategies.
arXiv Detail & Related papers (2021-02-23T05:34:30Z)
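To make the two goals concrete, the sketch below shows one way to draw N-way K-shot evaluation episodes either from the class pool that generated the training tasks (in-distribution) or from a disjoint pool (out-of-distribution). The pool layout and episode shape are illustrative assumptions, not the paper's protocol.

```python
import random

def sample_episode(class_pool, n_way=5, k_shot=1, n_query=15):
    """Draw one N-way K-shot episode from a pool of classes.

    class_pool: dict mapping class id -> list of examples.
    Returns (support, query) lists of (example, episode_label) pairs.
    """
    classes = random.sample(sorted(class_pool), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(class_pool[cls], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# In-distribution evaluation: episodes come from the same class pool that
# generated the meta-training tasks. Out-of-distribution evaluation: episodes
# come from a disjoint pool (e.g., a different dataset). Both pools below are
# hypothetical placeholders.
# id_episode = sample_episode(train_class_pool)
# ood_episode = sample_episode(unseen_dataset_class_pool)
```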
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
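As a rough sketch of optimization-based meta-learning with few inner iterations: adapt a linear head on the support set for a small, fixed number of gradient steps, then meta-train by backpropagating the query loss through the unrolled steps. The linear head, step count, and learning rate here are assumptions, not the paper's actual base learner.

```python
import torch
import torch.nn.functional as F

def few_iteration_adapt(features_s, labels_s, n_way, steps=3, inner_lr=0.1):
    """Adapt a linear classifier head on the support set for a few steps.

    features_s: [n_support, d] support embeddings (backbone assumed given).
    Returns adapted weights; gradients flow through the inner loop, so the
    backbone (and even inner_lr) could themselves be meta-learned.
    """
    d = features_s.shape[1]
    w = torch.zeros(n_way, d, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(features_s @ w.t(), labels_s)
        (grad,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - inner_lr * grad  # differentiable update
    return w

# Meta-training step (sketch): the outer loss on the query set updates the
# backbone through the unrolled inner iterations.
# w = few_iteration_adapt(backbone(support_x), support_y, n_way=5)
# outer_loss = F.cross_entropy(backbone(query_x) @ w.t(), query_y)
# outer_loss.backward()
```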
- Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation [26.296412053816233]
We propose a meta-learning framework for few-shot word sense disambiguation.
The goal is to learn to disambiguate unseen words from only a few labeled instances.
We extend several popular meta-learning approaches to this scenario, and analyze their strengths and weaknesses.
arXiv Detail & Related papers (2020-04-29T17:33:31Z)
- Meta-Meta Classification for One-Shot Learning [11.27833234287093]
We present a new approach, called meta-meta classification, to learning in small-data settings.
In this approach, one uses a large set of learning problems to design an ensemble of learners, where each learner has high bias and low variance.
We evaluate the approach on a one-shot, one-class-versus-all classification task and show that it is able to outperform traditional meta-learning as well as ensembling approaches.
arXiv Detail & Related papers (2020-04-17T07:05:03Z)
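One way to read the idea above: each ensemble member is a specialized (high-bias, low-variance) learner, and a problem-conditioned gate decides how much weight each member gets on a new task. The sketch below is a loose, hypothetical rendering of that structure, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MetaMetaEnsemble(nn.Module):
    """Weight an ensemble of few-shot learners per problem (sketch).

    learners: modules mapping (support_emb, support_y, query_emb) -> logits.
    The gating net sees a summary of the problem (here the mean support
    embedding, an illustrative choice) and outputs mixture weights.
    """

    def __init__(self, learners, emb_dim):
        super().__init__()
        self.learners = nn.ModuleList(learners)
        self.gate = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, len(learners))
        )

    def forward(self, support_emb, support_y, query_emb):
        weights = torch.softmax(self.gate(support_emb.mean(dim=0)), dim=0)
        member_logits = torch.stack(
            [m(support_emb, support_y, query_emb) for m in self.learners]
        )  # [n_learners, n_query, n_way]
        # Problem-conditioned mixture of the members' predictions.
        return (weights[:, None, None] * member_logits).sum(dim=0)
```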
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
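A minimal version of such a baseline: freeze the representation learned on the meta-training set and fit a simple linear classifier on each episode's support embeddings. The logistic-regression probe and feature normalization below are common choices for this kind of baseline, assumed here rather than taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_episode(embed, support_x, support_y, query_x):
    """Few-shot classification with a frozen embedding + linear classifier.

    embed: function mapping raw inputs to fixed feature vectors
           (the representation learned on the meta-training set).
    """
    z_s = np.stack([embed(x) for x in support_x])
    z_q = np.stack([embed(x) for x in query_x])
    # L2-normalizing features is a common trick for such baselines.
    z_s /= np.linalg.norm(z_s, axis=1, keepdims=True)
    z_q /= np.linalg.norm(z_q, axis=1, keepdims=True)
    clf = LogisticRegression(max_iter=1000).fit(z_s, support_y)
    return clf.predict(z_q)
```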
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
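Under the (feature, label) view stated above, one training sample pairs a summary of a task-specific data set with a target model for that task, and the meta-learner is fit by ordinary supervised regression. The mean-embedding summary and MLP regressor in this sketch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DatasetToModel(nn.Module):
    """Treat (dataset, target model) pairs as (feature, label) samples.

    The 'feature' is a fixed-size summary of a few-shot dataset (here the
    per-class mean embedding, an illustrative choice); the 'label' is the
    parameter vector of a good classifier for that dataset. Fitting this
    mapping is ordinary supervised regression.
    """

    def __init__(self, emb_dim):
        super().__init__()
        # Map one class's summary to that class's classifier weights.
        self.net = nn.Sequential(
            nn.Linear(emb_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim)
        )

    def forward(self, support_emb, support_y, n_way):
        summaries = torch.stack(
            [support_emb[support_y == c].mean(dim=0) for c in range(n_way)]
        )  # [n_way, emb_dim]
        return self.net(summaries)  # predicted weights, one row per class

# Supervised training (sketch): regress toward target weights obtained by
# fully training a classifier on each task, e.g.
# loss = ((model(s_emb, s_y, n_way) - target_weights) ** 2).mean()
```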
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.