Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot
Image Recognition
- URL: http://arxiv.org/abs/2111.04993v2
- Date: Thu, 11 Nov 2021 16:54:45 GMT
- Title: Incremental Meta-Learning via Episodic Replay Distillation for Few-Shot
Image Recognition
- Authors: Kai Wang, Xialei Liu, Andy Bagdanov, Luis Herranz, Shangling Jui,
Joost van de Weijer
- Abstract summary: We consider the problem of Incremental Meta-Learning (IML) in which classes are presented incrementally in discrete tasks.
We propose an approach to IML, which we call Episodic Replay Distillation (ERD).
ERD mixes classes from the current task with class exemplars from previous tasks when sampling episodes for meta-learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most meta-learning approaches assume the existence of a very large set of
labeled data available for episodic meta-learning of base knowledge. This
contrasts with the more realistic continual learning paradigm in which data
arrives incrementally in the form of tasks containing disjoint classes. In this
paper we consider this problem of Incremental Meta-Learning (IML) in which
classes are presented incrementally in discrete tasks. We propose an approach
to IML, which we call Episodic Replay Distillation (ERD), that mixes classes
from the current task with class exemplars from previous tasks when sampling
episodes for meta-learning. These episodes are then used for knowledge
distillation to minimize catastrophic forgetting. Experiments on four datasets
demonstrate that ERD surpasses the state-of-the-art. In particular, on the more
challenging one-shot, long task sequence incremental meta-learning scenarios,
we reduce the gap between IML and the joint-training upper bound from 3.5% /
10.1% / 13.4% with the current state-of-the-art to 2.6% / 2.9% / 5.0% with our
method on Tiered-ImageNet / Mini-ImageNet / CIFAR100, respectively.
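The abstract sketches the two mechanisms behind ERD: episodes are sampled by mixing current-task classes with exemplars replayed from earlier tasks, and those episodes also feed a distillation loss against the model frozen after the previous task. The Python sketch below is a hypothetical illustration of that idea, assuming a prototypical-network embedding model; the function names (sample_mixed_episode, erd_loss), the half-and-half class split, and the temperature-scaled KL distillation term are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of ERD-style episode sampling and distillation (illustrative only,
# not the authors' code). Assumes dictionaries mapping class name -> tensor of images
# for the current task and for the exemplar memory of previous tasks.
import random
import torch
import torch.nn.functional as F

def sample_mixed_episode(current_data, exemplar_memory, n_way=5, k_shot=1, n_query=5):
    # Mix classes from the current task with exemplar classes replayed from previous tasks.
    n_old = min(n_way // 2, len(exemplar_memory))
    old_cls = random.sample(list(exemplar_memory), n_old)
    new_cls = random.sample(list(current_data), n_way - n_old)
    support, query, y_s, y_q = [], [], [], []
    for label, cls in enumerate(old_cls + new_cls):
        pool = exemplar_memory[cls] if cls in exemplar_memory else current_data[cls]
        idx = random.sample(range(len(pool)), k_shot + n_query)
        imgs = pool[idx]
        support.append(imgs[:k_shot]); y_s += [label] * k_shot
        query.append(imgs[k_shot:]);   y_q += [label] * n_query
    return (torch.cat(support), torch.tensor(y_s),
            torch.cat(query),   torch.tensor(y_q))

def proto_logits(model, support, y_s, query, n_way):
    # Prototypical-network logits: negative distance from query embeddings to class prototypes.
    emb_s, emb_q = model(support), model(query)
    protos = torch.stack([emb_s[y_s == c].mean(0) for c in range(n_way)])
    return -torch.cdist(emb_q, protos)

def erd_loss(model, teacher, episode, n_way, temperature=2.0, lam=1.0):
    # Episode classification loss plus KL distillation against the frozen model
    # from the previous task, to limit catastrophic forgetting.
    support, y_s, query, y_q = episode
    logits = proto_logits(model, support, y_s, query, n_way)
    ce = F.cross_entropy(logits, y_q)
    with torch.no_grad():
        t_logits = proto_logits(teacher, support, y_s, query, n_way)
    kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                  F.softmax(t_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return ce + lam * kd

In an incremental run, teacher would be a frozen copy of the model saved at the end of the previous task, and exemplar_memory would hold a small number of stored images per previously seen class.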
Related papers
- Accelerating Meta-Learning by Sharing Gradients [12.090942406595637]
We show, using two popular few-shot classification datasets, that gradient sharing enables meta-learning under bigger inner-loop learning rates and can accelerate the meta-training process by up to 134%.
arXiv Detail & Related papers (2023-12-13T04:34:48Z) - Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning
in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference Machine Learning (ML) tree-based and monolithic DL models.
We show that (i) using large datasets we can obtain more general representations, (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods, by reusing learned representations, reach the performance of tree-based models on small tasks as well.
arXiv Detail & Related papers (2023-05-21T11:20:49Z) - Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
arXiv Detail & Related papers (2023-04-06T04:53:13Z) - Continual Few-Shot Learning with Adversarial Class Storage [44.04528506999142]
We propose Continual Meta-Learner (CML) to solve the continual few-shot learning problem.
CML integrates metric-based classification and a memory-based mechanism along with adversarial learning into a meta-learning framework.
Experimental results show that CML delivers state-of-the-art performance on few-shot learning tasks without catastrophic forgetting.
arXiv Detail & Related papers (2022-07-10T03:40:38Z) - Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and
Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand for specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only a few samples given for each class.
Although progress has been made on coarse-grained actions, existing few-shot recognition methods encounter two issues when handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z) - La-MAML: Look-ahead Meta Learning for Continual Learning [14.405620521842621]
We propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory.
La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
arXiv Detail & Related papers (2020-07-27T23:07:01Z) - Learning to Segment the Tail [91.38061765836443]
Real-world visual recognition requires handling the extreme sample imbalance in large-scale long-tailed data.
We propose a "divide&conquer" strategy for the challenging LVIS task: divide the whole data into balanced parts and then apply incremental learning to conquer each one.
arXiv Detail & Related papers (2020-04-02T09:39:08Z) - iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
In deep neural networks, previously learned knowledge can quickly fade when the network is trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z) - Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)