Fast Few-Shot Classification by Few-Iteration Meta-Learning
- URL: http://arxiv.org/abs/2010.00511v3
- Date: Sun, 20 Mar 2022 19:22:34 GMT
- Title: Fast Few-Shot Classification by Few-Iteration Meta-Learning
- Authors: Ardhendu Shekhar Tripathi, Martin Danelljan, Luc Van Gool, Radu
Timofte
- Abstract summary: We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
- Score: 173.32497326674775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous agents interacting with the real world need to learn new concepts
efficiently and reliably. This requires learning in a low-data regime, which is
a highly challenging problem. We address this task by introducing a fast
optimization-based meta-learning method for few-shot classification. It
consists of an embedding network, providing a general representation of the
image, and a base learner module. The latter learns a linear classifier during
inference through an unrolled optimization procedure. We design an inner
learning objective composed of (i) a robust classification loss on the support
set and (ii) an entropy loss, allowing transductive learning from unlabeled
query samples. By employing an efficient initialization module and a
steepest-descent-based optimization algorithm, our base learner predicts a powerful
classifier within only a few iterations. Further, our strategy enables
important aspects of the base learner objective to be learned during
meta-training. To the best of our knowledge, this work is the first to
integrate both induction and transduction into the base learner in an
optimization-based meta-learning framework. We perform a comprehensive
experimental analysis, demonstrating the speed and effectiveness of our
approach on four few-shot classification datasets. The code is available at
https://github.com/4rdhendu/FIML.
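To make the inner loop concrete, here is a minimal PyTorch-style sketch of the transductive objective described above. It is not the authors' implementation: the zero initialization, the plain gradient step standing in for the paper's steepest-descent update, and the entropy weight `lam` are all assumptions.

```python
import torch
import torch.nn.functional as F

def inner_loop(feat_s, y_s, feat_q, num_classes, steps=5, lr=0.1, lam=0.1):
    """Unrolled optimization of a linear classifier over frozen embeddings.

    Inner objective: cross-entropy on the labeled support set plus a
    lam-weighted entropy term on the unlabeled query set (transduction).
    """
    # Zero init as a stand-in for the paper's learned initialization module.
    W = torch.zeros(feat_s.shape[1], num_classes, requires_grad=True)
    for _ in range(steps):
        ce = F.cross_entropy(feat_s @ W, y_s)
        log_pq = F.log_softmax(feat_q @ W, dim=1)
        entropy = -(log_pq.exp() * log_pq).sum(dim=1).mean()
        loss = ce + lam * entropy
        # create_graph=True keeps each step differentiable so the outer
        # (meta-training) loss can backpropagate through the inner loop.
        (grad,) = torch.autograd.grad(loss, W, create_graph=True)
        # Plain gradient step; the paper instead derives steepest-descent
        # updates with analytically computed step lengths.
        W = W - lr * grad
    return W
```

In the actual method, the update direction and step length come from a steepest-descent derivation and the initialization is produced by a learned module, which is what makes a few iterations sufficient; at a high level, inference amounts to calling such an inner loop on embedded support and query sets and classifying queries with feat_q @ W.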
Related papers
- Achieving More with Less: A Tensor-Optimization-Powered Ensemble Method [53.170053108447455]
Ensemble learning is a method that leverages weak learners to produce a strong learner.
We design a smooth and convex objective function that leverages the concept of margin, making the strong learner more discriminative (an illustrative form is sketched below).
We then compare our algorithm with random forests ten times its size and with other classical methods across numerous datasets.
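The summary does not give the objective itself; purely as an illustration of a smooth, convex, margin-based formulation, one could penalize small ensemble margins $y_i \sum_j w_j h_j(x_i)$ (with hypothetical weak learners $h_j$ and non-negative weights $w_j$) via a regularized logistic loss:
\[
\min_{w \ge 0}\; \sum_{i=1}^{n} \log\!\left(1 + \exp\!\left(-\,y_i \sum_{j} w_j h_j(x_i)\right)\right) + \lambda \lVert w \rVert_2^2
\]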
arXiv Detail & Related papers (2024-08-06T03:42:38Z)
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% on the Omniglot dataset in the 20-way 1-shot setting.
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model, as sketched below.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
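A rough sketch of this freeze-then-fine-tune idea follows; it is not the paper's exact recipe, and the ResNet-18 backbone, the choice of which stage to leave trainable, and the optimizer settings are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical backbone standing in for a model trained on the base set.
model = resnet18(num_classes=64)

# Transfer partial knowledge: freeze everything except the last residual
# stage, then attach a fresh head for the novel classes.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("layer4")
model.fc = nn.Linear(model.fc.in_features, 5)  # e.g. a 5-way novel task

# Fine-tune only the unfrozen parameters on the novel-class support set.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-2, momentum=0.9
)
```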
arXiv Detail & Related papers (2021-02-08T03:27:05Z)
- Generalized Reinforcement Meta Learning for Few-Shot Optimization [3.7675996866306845]
We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
Our framework could be easily extended to do network architecture search.
arXiv Detail & Related papers (2020-05-04T03:21:05Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation (sketched below).
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
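A minimal sketch of a self-distillation loss of the kind mentioned above; the temperature `T`, the mixing weight `alpha`, and the use of KL divergence against a frozen copy of the same model are assumptions, not details from the paper.

```python
import copy
import torch
import torch.nn.functional as F

def self_distill_loss(student, teacher, x, y, T=4.0, alpha=0.5):
    """Hard-label cross-entropy mixed with a temperature-softened KL term
    toward a frozen teacher (an earlier generation of the same model)."""
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    ce = F.cross_entropy(s_logits, y)
    kl = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl

# Usage: teacher = copy.deepcopy(trained_model).eval(); then train a fresh
# student with self_distill_loss in place of plain cross-entropy.
```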
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a model pre-trained on whole-classification, using its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.