MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization
- URL: http://arxiv.org/abs/2009.13735v2
- Date: Sat, 10 Oct 2020 05:36:55 GMT
- Title: MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization
- Authors: Yangbin Chen, Yun Ma, Tom Ko, Jianping Wang, Qing Li
- Abstract summary: We propose an approach called MetaMix.
It generates virtual feature-target pairs within each episode to regularize the backbone models.
It can be integrated with any MAML-based algorithm to learn decision boundaries that generalize better to new tasks.
- Score: 14.531741503372764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-Agnostic Meta-Learning (MAML) and its variants are popular few-shot
classification methods. They train an initializer across a variety of sampled
learning tasks (also known as episodes) such that the initialized model can
adapt quickly to new tasks. However, current MAML-based algorithms have
limitations in forming generalizable decision boundaries. In this paper, we
propose an approach called MetaMix. It generates virtual feature-target pairs
within each episode to regularize the backbone models. MetaMix can be
integrated with any of the MAML-based algorithms to learn decision
boundaries that generalize better to new tasks. Experiments on the mini-ImageNet,
CUB, and FC100 datasets show that MetaMix improves the performance of
MAML-based algorithms and achieves state-of-the-art results when integrated with
Meta-Transfer Learning.
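To make the core idea concrete, here is a minimal sketch of the interpolation step the abstract describes: virtual feature-target pairs are formed by convexly mixing randomly paired examples within an episode, mixup-style. The Beta prior, function name, and tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
from torch.distributions import Beta

def metamix_virtual_pairs(features, targets_onehot, alpha=2.0):
    """Generate virtual feature-target pairs within one episode by
    convexly interpolating randomly paired examples (mixup-style).

    features:       (N, D) backbone features of the episode's examples
    targets_onehot: (N, C) one-hot labels for the same examples
    alpha:          Beta(alpha, alpha) concentration (illustrative default)
    """
    lam = Beta(alpha, alpha).sample().item()   # mixing coefficient in (0, 1)
    perm = torch.randperm(features.size(0))    # random pairing within the episode
    mixed_x = lam * features + (1 - lam) * features[perm]
    mixed_y = lam * targets_onehot + (1 - lam) * targets_onehot[perm]
    return mixed_x, mixed_y
```

In a MAML-style inner/outer loop, the loss on these virtual pairs would be added to the episode's loss as a regularization term; exactly where the interpolation is applied (e.g., on which split of the episode) is specified in the paper itself.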
Related papers
- How to Train Your MAML to Excel in Few-Shot Classification [26.51244463209443]
We show how to train MAML to excel in few-shot classification.
Our approach, which we name UNICORN-MAML, performs on a par with or even outperforms state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-30T17:56:15Z)
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea, sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z)
- MetaDelta: A Meta-Learning System for Few-shot Image Classification [71.06324527247423]
We propose MetaDelta, a novel practical meta-learning system for few-shot image classification.
Each meta-learner in MetaDelta is composed of a unique pretrained encoder fine-tuned by batch training and a parameter-free decoder used for prediction (one common form of such a decoder is sketched below).
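The abstract does not spell out the decoder, so the following is only a plausible instance of a parameter-free decoder, a nearest-centroid classifier over support features; all names are illustrative, and MetaDelta's actual decoder may differ.

```python
import torch
import torch.nn.functional as F

def centroid_decode(support_feats, support_labels, query_feats, num_classes):
    """Nearest-centroid prediction: classify each query by cosine
    similarity to the per-class mean of the support features. No
    learnable parameters are involved, hence "parameter-free".

    support_feats:  (N, D) encoder features of the support set
    support_labels: (N,)   integer class labels
    query_feats:    (Q, D) encoder features of the query set
    """
    centroids = torch.stack([
        support_feats[support_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])  # (num_classes, D)
    sims = F.normalize(query_feats, dim=-1) @ F.normalize(centroids, dim=-1).T
    return sims.argmax(dim=-1)  # predicted class per query
```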
arXiv Detail & Related papers (2021-02-22T02:57:22Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M$^3$L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- A Nested Bi-level Optimization Framework for Robust Few Shot Learning [10.147225934340877]
NestedMAML learns to assign weights to training tasks or instances.
Experiments on synthetic and real-world datasets demonstrate that NestedMAML efficiently mitigates the effects of "unwanted" tasks or instances.
arXiv Detail & Related papers (2020-11-13T06:41:22Z)
- BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning [52.90643948602659]
BOML is a modularized optimization library that unifies several meta-learning algorithms into a common bilevel optimization framework.
It provides a hierarchical optimization pipeline together with a variety of iteration modules, which can be used to solve the mainstream categories of meta-learning methods (the shared bilevel structure is sketched below).
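For readers unfamiliar with the bilevel view of meta-learning, here is a generic sketch of the structure such libraries organize, not BOML's actual API: an inner loop adapts per-task copies of the meta-parameters on support data, and the outer objective evaluates the adapted parameters on query data. The `tasks` representation is an assumption made for self-containment.

```python
import torch

def maml_bilevel_step(theta, tasks, inner_lr=0.01, inner_steps=1):
    """Generic bilevel step. `tasks` is assumed to be a list of
    (support_loss_fn, query_loss_fn) pairs, each taking a parameter list
    and returning a scalar loss.
    """
    outer_losses = []
    for support_loss_fn, query_loss_fn in tasks:
        fast = [p.clone() for p in theta]  # task-specific copy of the meta-parameters
        for _ in range(inner_steps):       # inner (lower-level) problem
            grads = torch.autograd.grad(support_loss_fn(fast), fast, create_graph=True)
            fast = [p - inner_lr * g for p, g in zip(fast, grads)]
        outer_losses.append(query_loss_fn(fast))  # outer (upper-level) objective
    return torch.stack(outer_losses).mean()       # call .backward() on this w.r.t. theta
```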
arXiv Detail & Related papers (2020-09-28T14:21:55Z)
- La-MAML: Look-ahead Meta Learning for Continual Learning [14.405620521842621]
We propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory.
La-MAML achieves performance superior to other replay-based, prior-based, and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
arXiv Detail & Related papers (2020-07-27T23:07:01Z)
- Improving Generalization in Meta-learning via Task Augmentation [69.83677015207527]
We propose two task augmentation methods, including MetaMix and Channel Shuffle.
Both MetaMix and Channel Shuffle outperform state-of-the-art methods by a large margin across many datasets (a hedged sketch of the channel-shuffle idea follows).
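The summary gives no details of Channel Shuffle, so the following is only a plausible reading of the idea: replacing a random subset of a sample's channels with the corresponding channels of another sample from the same class. Names and defaults are illustrative, and the cited paper's exact procedure may differ.

```python
import torch

def channel_shuffle(x_a, x_b, swap_prob=0.5):
    """Replace a random subset of channels of x_a with the corresponding
    channels of x_b, a sample assumed to come from the same class.

    x_a, x_b: (C, H, W) feature maps (or images) of the same class
    """
    mask = torch.rand(x_a.size(0)) < swap_prob  # per-channel Bernoulli mask
    out = x_a.clone()
    out[mask] = x_b[mask]                       # swap the selected channels
    return out
```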
arXiv Detail & Related papers (2020-07-26T01:50:42Z)
- Weighted Meta-Learning [21.522768804834616]
Many popular meta-learning algorithms, such as model-agnostic meta-learning (MAML), only assume access to the target samples for fine-tuning.
In this work, we provide a general framework for meta-learning based on weighting the loss of different source tasks.
We develop a learning algorithm based on minimizing the error bound with respect to an empirical IPM (integral probability metric), including a weighted MAML algorithm (the weighting idea is sketched below).
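The skeleton of the weighting idea is simple, a convex combination of per-source-task losses; the substance of the method, choosing the weights via the IPM-based bound, is omitted here, and the function name is illustrative.

```python
import torch

def weighted_meta_loss(task_losses, task_weights):
    """Weighted meta-objective: a convex combination of per-source-task
    losses. How the weights are chosen (in the paper, by minimizing an
    IPM-based error bound) is not shown here.

    task_losses:  (T,) post-adaptation loss of each source task
    task_weights: (T,) nonnegative task weights
    """
    w = task_weights / task_weights.sum()  # normalize onto the simplex
    return (w * task_losses).sum()
```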
arXiv Detail & Related papers (2020-03-20T19:00:42Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.