Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
- URL: http://arxiv.org/abs/2003.11539v2
- Date: Wed, 17 Jun 2020 08:11:10 GMT
- Title: Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
- Authors: Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, and Phillip Isola
- Abstract summary: We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set followed by training a linear classifier on top of it, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
- Score: 72.00712736992618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The focus of recent meta-learning research has been on the development of
learning algorithms that can quickly adapt to test-time tasks with limited data
and low computational cost. Few-shot learning is widely used as one of the
standard benchmarks in meta-learning. In this work, we show that a simple
baseline: learning a supervised or self-supervised representation on the
meta-training set, followed by training a linear classifier on top of this
representation, outperforms state-of-the-art few-shot learning methods. An
additional boost can be achieved through the use of self-distillation. This
demonstrates that using a good learned embedding model can be more effective
than sophisticated meta-learning algorithms. We believe that our findings
motivate a rethinking of few-shot image classification benchmarks and the
associated role of meta-learning algorithms. Code is available at:
http://github.com/WangYueFt/rfs/.
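As an illustration of the baseline the abstract describes, the sketch below fits a linear classifier on top of frozen embeddings for a single N-way K-shot episode. It is a minimal sketch, not the authors' released code (see the repository above); the torchvision ResNet-18 backbone and the scikit-learn logistic regression are stand-in assumptions for the paper's embedding network and linear classifier.

```python
# Minimal sketch: frozen embedding + linear classifier per few-shot episode.
# Assumptions: a torchvision ResNet-18 stands in for the paper's embedding
# network, and scikit-learn logistic regression is the linear classifier.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Frozen backbone with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(images):
    """Return L2-normalized features from the frozen backbone."""
    with torch.no_grad():
        feats = backbone(images)  # shape (N, 512)
    return torch.nn.functional.normalize(feats, dim=1).numpy()

def evaluate_episode(support_x, support_y, query_x, query_y):
    """Fit a linear classifier on the support set; report query accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(support_x), support_y.numpy())
    preds = clf.predict(embed(query_x))
    return (preds == query_y.numpy()).mean()
```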
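The self-distillation boost mentioned in the abstract can be read as sequential (Born-Again style) distillation: a fresh student with the same architecture is trained against both the ground-truth labels and the previous generation's soft predictions. The loss below is a hedged sketch of that idea; the temperature T and weight alpha are illustrative assumptions, not the paper's reported settings.

```python
# Sketch of a self-distillation loss: cross-entropy on hard labels plus
# KL divergence to the previous generation's temperature-softened outputs.
# T and alpha are illustrative assumptions, not the paper's settings.
import torch.nn.functional as F

def self_distill_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term's magnitude
    return alpha * ce + (1.0 - alpha) * kd
```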
Related papers
- Context-Aware Meta-Learning [52.09326317432577]
We propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning.
Our approach exceeds or matches the state-of-the-art algorithm, P>M>F, on 8 out of 11 meta-learning benchmarks.
arXiv Detail & Related papers (2023-10-17T03:35:27Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the heavy data demands of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results in low-shot settings and strong experimental results under various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Does MAML Only Work via Feature Re-use? A Data Centric Perspective [19.556093984142418]
We provide empirical results that shed some light on how meta-learned MAML representations function.
We show that it is possible to define a family of synthetic benchmarks that result in a low degree of feature re-use.
We conjecture that the core challenge in rethinking meta-learning lies in the design of few-shot learning datasets and benchmarks.
arXiv Detail & Related papers (2021-12-24T20:18:38Z)
- A Closer Look at Few-Shot Video Classification: A New Baseline and Benchmark [33.86872697028233]
We present an in-depth study on few-shot video classification by making three contributions.
First, we perform a consistent comparative study of the existing metric-based methods to identify their limitations in representation learning.
Second, we discover a high correlation between novel action classes and ImageNet object classes, which is problematic in the few-shot recognition setting.
Third, we present a new benchmark with more base data to facilitate future few-shot video classification without pre-training.
arXiv Detail & Related papers (2021-10-24T06:01:46Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z)