MetaDelta: A Meta-Learning System for Few-shot Image Classification
- URL: http://arxiv.org/abs/2102.10744v1
- Date: Mon, 22 Feb 2021 02:57:22 GMT
- Title: MetaDelta: A Meta-Learning System for Few-shot Image Classification
- Authors: Yudong Chen, Chaoyu Guan, Zhikun Wei, Xin Wang, Wenwu Zhu
- Abstract summary: We propose MetaDelta, a novel practical meta-learning system for few-shot image classification.
Each meta-learner in MetaDelta is composed of a unique pretrained encoder fine-tuned by batch training and a parameter-free decoder used for prediction.
- Score: 71.06324527247423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning aims at learning quickly on novel tasks with limited data by
transferring generic experience learned from previous tasks. Naturally,
few-shot learning has been one of the most popular applications of
meta-learning. However, existing meta-learning algorithms rarely consider
time and resource efficiency or the generalization capacity on unknown
datasets, which limits their applicability in real-world scenarios. In this
paper, we propose MetaDelta, a novel practical meta-learning system for
few-shot image classification. MetaDelta consists of two core components: i)
multiple meta-learners supervised by a central controller to ensure efficiency,
and ii) a meta-ensemble module in charge of integrated inference and better
generalization. In particular, each meta-learner in MetaDelta is composed of a
unique pretrained encoder fine-tuned by batch training and a parameter-free
decoder used for prediction. MetaDelta ranks first in the final phase of the
AAAI 2021 MetaDL Challenge (https://competitions.codalab.org/competitions/26638),
demonstrating the advantages of our proposed system. The code is publicly
available at https://github.com/Frozenmad/MetaDelta.
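Below is a minimal sketch of what a "parameter-free decoder" over a pretrained encoder typically looks like: a nearest-centroid (prototype) classifier with no trainable weights. The function names, the frozen-encoder usage, and the distance choice are illustrative assumptions, not necessarily MetaDelta's exact design (see the repository above for that).

```python
import torch

def prototype_decode(encoder, support_x, support_y, query_x, n_way):
    """Parameter-free decoder: classify queries by distance to class centroids.

    One common instantiation of a parameter-free decoder; MetaDelta's actual
    decoder may differ (names and distance choice are hypothetical).
    """
    with torch.no_grad():  # the (fine-tuned) encoder acts as a fixed feature extractor
        s = encoder(support_x)  # (n_support, d) support features
        q = encoder(query_x)    # (n_query, d) query features
    # Mean embedding (centroid) per class -- no trainable parameters involved.
    protos = torch.stack([s[support_y == c].mean(dim=0) for c in range(n_way)])
    logits = -torch.cdist(q, protos)  # negative distance as the class score
    return logits.argmax(dim=-1)      # predicted label per query
```

Because the decoder has no parameters, adapting to a new task costs only a forward pass over the support set, which fits the system's stated emphasis on time and resource efficiency.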
Related papers
- Does MAML Only Work via Feature Re-use? A Data Centric Perspective [19.556093984142418]
We provide empirical results that shed some light on how meta-learned MAML representations function.
We show that it is possible to define a family of synthetic benchmarks that result in a low degree of feature re-use.
We conjecture the core challenge of re-thinking meta-learning is in the design of few-shot learning data sets and benchmarks.
arXiv Detail & Related papers (2021-12-24T20:18:38Z)
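One common way to probe feature re-use empirically is to compare full inner-loop adaptation against adapting only the final head on frozen meta-learned features; if the two perform similarly, the features are being re-used rather than re-learned. The sketch below illustrates that probe; the module names and hyperparameters are assumptions, not this paper's protocol.

```python
import torch
import torch.nn.functional as F

def adapt(backbone, head, support_x, support_y, steps=5, lr=0.01, head_only=True):
    """Inner-loop adaptation on one task's support set.

    Comparing head_only=True against head_only=False is a common probe of
    feature re-use: if freezing the backbone barely hurts query accuracy,
    the meta-learned features are re-used, not re-learned (illustrative sketch).
    """
    params = list(head.parameters())
    if not head_only:
        params += list(backbone.parameters())
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(head(backbone(support_x)), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return backbone, head
```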
- Combining Domain-Specific Meta-Learners in the Parameter Space for Cross-Domain Few-Shot Classification [6.945139522691311]
We propose an optimization-based meta-learning method called Combining Domain-Specific Meta-Learners (CosML).
Our experiments show that CosML outperforms a range of state-of-the-art methods and achieves strong cross-domain generalization ability.
arXiv Detail & Related papers (2020-10-31T03:33:39Z)
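CosML combines domain-specific meta-learners "in the parameter space". One plausible reading, sketched below, is a weighted combination of per-domain parameter sets used to initialize a learner for an unseen domain; the weights and their selection rule are assumptions here, not the paper's exact procedure.

```python
def combine_in_parameter_space(domain_state_dicts, weights):
    """Weighted average of several models' parameters (PyTorch state_dicts).

    `weights` should sum to 1; how they are chosen (uniform, learned, or
    similarity-based) is the interesting part and is only assumed here.
    """
    combined = {}
    for name in domain_state_dicts[0]:
        combined[name] = sum(w * sd[name].float()
                             for w, sd in zip(weights, domain_state_dicts))
    return combined  # load with model.load_state_dict(combined)
```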
- MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization [14.531741503372764]
We propose an approach called MetaMix, which generates virtual feature-target pairs within each episode to regularize the backbone models.
It can be integrated with any MAML-based algorithm and learns decision boundaries that generalize better to new tasks.
arXiv Detail & Related papers (2020-09-29T02:44:13Z)
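"Virtual feature-target pairs" generally refers to mixup-style interpolation. The sketch below shows that standard construction applied within an episode; the mixing layer, the Beta(alpha, alpha) coefficient, and all names are assumptions rather than the paper's exact recipe.

```python
import torch

def mix_pairs(features, targets_onehot, alpha=2.0):
    """Create virtual feature-target pairs by interpolation (mixup-style).

    features: (n, d) episode features; targets_onehot: (n, n_way), e.g. from
    F.one_hot(y, n_way).float(). The exact layer mixed and the Beta(alpha,
    alpha) setting are assumptions.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient
    perm = torch.randperm(features.size(0))                # random pairing
    mixed_x = lam * features + (1 - lam) * features[perm]
    mixed_y = lam * targets_onehot + (1 - lam) * targets_onehot[perm]
    return mixed_x, mixed_y  # extra supervision to regularize the backbone
```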
- BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning [52.90643948602659]
BOML is a modularized optimization library that unifies several meta-learning algorithms into a common bilevel optimization framework.
It provides a hierarchical optimization pipeline together with a variety of iteration modules, which can be used to implement the mainstream categories of meta-learning methods.
arXiv Detail & Related papers (2020-09-28T14:21:55Z)
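The bilevel pattern BOML unifies pairs an inner loop that adapts task parameters on support data with an outer loop that updates meta-parameters through the inner trajectory. The sketch below illustrates that generic structure on a toy linear model; it is not BOML's actual API (consult the library for that), and all sizes and names are assumptions.

```python
import torch
import torch.nn.functional as F

# Meta-parameters of a toy 5-way linear classifier over 64-dim features;
# the outer (upper) level optimizes these. Sizes are illustrative.
w = torch.randn(5, 64, requires_grad=True)
outer_opt = torch.optim.Adam([w], lr=1e-3)

def bilevel_step(tasks, inner_lr=0.1, inner_steps=3):
    """One outer update over a batch of tasks (generic bilevel pattern)."""
    outer_loss = 0.0
    for sx, sy, qx, qy in tasks:            # support/query tensors per task
        fast_w = w
        for _ in range(inner_steps):        # inner (lower) level: adapt on support
            inner_loss = F.cross_entropy(sx @ fast_w.t(), sy)
            (g,) = torch.autograd.grad(inner_loss, fast_w, create_graph=True)
            fast_w = fast_w - inner_lr * g  # differentiable update
        outer_loss = outer_loss + F.cross_entropy(qx @ fast_w.t(), qy)
    outer_opt.zero_grad()
    outer_loss.backward()                   # outer (upper) level: query loss
    outer_opt.step()
```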
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods: MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets.
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
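The "evaluation metric" meta-learned in Meta-Baseline is typically a cosine nearest-centroid rule over features from the whole-classification pre-trained model. A hedged sketch, with the temperature value and all names as assumptions:

```python
import torch
import torch.nn.functional as F

def cosine_centroid_logits(feats_s, ys, feats_q, n_way, temperature=10.0):
    """Cosine nearest-centroid metric of the kind Meta-Baseline meta-learns over.

    feats_s/feats_q: support/query features from the pre-trained encoder;
    the temperature and shapes here are illustrative assumptions.
    """
    centroids = torch.stack([feats_s[ys == c].mean(dim=0) for c in range(n_way)])
    sims = F.normalize(feats_q, dim=-1) @ F.normalize(centroids, dim=-1).t()
    return temperature * sims  # (n_query, n_way) logits for cross-entropy
```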
- Meta-Learning across Meta-Tasks for Few-Shot Learning [107.44950540552765]
We argue that inter-meta-task relationships should be exploited and that tasks should be sampled strategically to assist in meta-learning.
We consider the relationships defined over two types of meta-task pairs and propose different strategies to exploit them.
arXiv Detail & Related papers (2020-02-11T09:25:13Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.