Continual Few-Shot Learning with Adversarial Class Storage
- URL: http://arxiv.org/abs/2207.12303v1
- Date: Sun, 10 Jul 2022 03:40:38 GMT
- Title: Continual Few-Shot Learning with Adversarial Class Storage
- Authors: Kun Wu, Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang, Dejun Yang
- Abstract summary: We propose Continual Meta-Learner (CML) to solve the continual few-shot learning problem.
CML integrates metric-based classification and a memory-based mechanism along with adversarial learning into a meta-learning framework.
Experimental results show that CML delivers state-of-the-art performance on few-shot learning tasks without catastrophic forgetting.
- Score: 44.04528506999142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans have a remarkable ability to quickly and effectively learn new
concepts in a continuous manner without forgetting old knowledge. Though deep
learning has achieved tremendous success on various computer vision tasks, it
faces challenges in achieving such human-level intelligence. In this paper, we
define a new problem called continual few-shot learning, in which tasks arrive
sequentially and each task is associated with a few training samples. We
propose Continual Meta-Learner (CML) to solve this problem. CML integrates
metric-based classification and a memory-based mechanism along with adversarial
learning into a meta-learning framework, which leads to the desirable
properties: 1) it can quickly and effectively learn to handle a new task; 2) it
overcomes catastrophic forgetting; 3) it is model-agnostic. We conduct
extensive experiments on two image datasets, MiniImageNet and CIFAR100.
Experimental results show that CML delivers state-of-the-art performance in
terms of classification accuracy on few-shot learning tasks without
catastrophic forgetting.
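The abstract describes CML only at a high level; as a rough, hypothetical sketch of its metric-based classification plus class-memory ingredients (the adversarial component is omitted, and names such as `ClassMemory` are invented here, not taken from the paper), a prototype-style classifier over a growing class store might look like this:

```python
# Hypothetical sketch of metric-based classification over a class memory,
# loosely in the spirit of CML's metric + memory components (adversarial
# part omitted). Not the authors' implementation.
import torch
import torch.nn.functional as F

class ClassMemory:
    """Stores one prototype embedding per class, accumulated across tasks."""
    def __init__(self, embed_dim: int):
        self.prototypes = {}  # class id -> (sum of embeddings, count)
        self.embed_dim = embed_dim

    def write(self, embeddings: torch.Tensor, labels: torch.Tensor):
        # Update running class prototypes with the few support samples.
        for emb, y in zip(embeddings, labels.tolist()):
            s, n = self.prototypes.get(y, (torch.zeros(self.embed_dim), 0))
            self.prototypes[y] = (s + emb.detach(), n + 1)

    def read(self):
        ids = sorted(self.prototypes)
        protos = torch.stack([s / n for s, n in (self.prototypes[i] for i in ids)])
        return ids, protos

def classify(memory: ClassMemory, query_emb: torch.Tensor):
    # Nearest-prototype (metric-based) classification over ALL classes seen
    # so far, which is what keeps earlier tasks classifiable.
    ids, protos = memory.read()
    dists = torch.cdist(query_emb, protos)  # (num_query, num_classes)
    probs = F.softmax(-dists, dim=1)
    return [ids[i] for i in probs.argmax(dim=1).tolist()], probs

# Toy usage with random "embeddings" standing in for an encoder's output.
memory = ClassMemory(embed_dim=64)
support, labels = torch.randn(10, 64), torch.randint(0, 5, (10,))
memory.write(support, labels)
preds, _ = classify(memory, torch.randn(3, 64))
```

Because queries are matched against prototypes for every class seen so far, adding a new task's classes does not overwrite old ones, which is the usual intuition for why metric-based approaches resist catastrophic forgetting.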
Related papers
- Learning to Learn with Indispensable Connections [6.040904021861969]
We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves classification accuracy by approximately 2% on the Omniglot dataset in the 20-way 1-shot setting.
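The summary does not say how Meta-LTH identifies indispensable connections; if they are selected lottery-ticket style by weight magnitude, a minimal (purely illustrative, not the paper's algorithm) mask could be:

```python
# Illustrative magnitude-based mask, assuming "indispensable" connections
# are the largest-magnitude weights (lottery-ticket style). Hypothetical,
# not the Meta-LTH procedure.
import torch

def indispensable_mask(weight: torch.Tensor, keep_ratio: float = 0.2) -> torch.Tensor:
    k = max(1, int(weight.numel() * keep_ratio))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).float()

w = torch.randn(128, 64)
mask = indispensable_mask(w, keep_ratio=0.2)
w_pruned = w * mask  # only "necessary" connections survive
```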
arXiv Detail & Related papers (2023-04-06T04:53:13Z)
- Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation [18.345183818638475]
Continual learning (CL) can serve as a remedy through enabling knowledge-transfer across sequentially arriving tasks.
We develop a transformer-based CL architecture for learning bimodal vision-and-language tasks.
Our approach scales to a large number of tasks because it requires little memory and time overhead.
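As a generic illustration of the knowledge-distillation ingredient (not this paper's task-attentive transformer), the following sketch penalizes divergence between a frozen teacher's logits and the current model's logits; the temperature value is an assumption:

```python
# Generic knowledge-distillation loss, as commonly used to transfer
# knowledge across sequentially arriving tasks. Illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    # KL divergence between temperature-softened distributions; the T**2
    # factor keeps gradient magnitudes comparable across temperatures.
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

student = torch.randn(8, 10, requires_grad=True)  # current model's logits
teacher = torch.randn(8, 10)                      # frozen old model's logits
loss = distillation_loss(student, teacher)
loss.backward()
```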
arXiv Detail & Related papers (2023-03-25T10:16:53Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Continual learning of quantum state classification with gradient episodic memory [0.20646127669654826]
A phenomenon called catastrophic forgetting emerges when a machine learning model is trained across multiple tasks.
Some continual learning strategies have been proposed to address the catastrophic forgetting problem.
In this work, we incorporate the gradient episodic memory method to train a variational quantum classifier.
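Gradient episodic memory (GEM, Lopez-Paz & Ranzato, 2017) constrains updates so that the loss on stored examples from earlier tasks does not increase. A minimal single-constraint version of that projection, independent of the quantum setting of this paper, looks roughly like this:

```python
# Minimal single-memory version of the GEM gradient projection
# (Lopez-Paz & Ranzato, 2017). Illustrative; the paper applies this idea
# to a variational quantum classifier, which is not modeled here.
import torch

def gem_project(grad: torch.Tensor, memory_grad: torch.Tensor) -> torch.Tensor:
    """Project `grad` so it cannot increase the loss on memory examples.

    If the proposed update conflicts with the memory gradient (negative
    dot product), remove the conflicting component; otherwise keep it.
    """
    dot = torch.dot(grad, memory_grad)
    if dot < 0:
        grad = grad - (dot / torch.dot(memory_grad, memory_grad)) * memory_grad
    return grad

g_new = torch.tensor([1.0, -2.0])    # gradient on the current task
g_mem = torch.tensor([1.0, 1.0])     # gradient on replayed memory data
g_safe = gem_project(g_new, g_mem)   # conflicting component removed
assert torch.dot(g_safe, g_mem) >= 0  # no longer increases memory loss
```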
arXiv Detail & Related papers (2022-03-26T09:28:26Z)
- What Matters For Meta-Learning Vision Regression Tasks? [19.373532562905208]
This paper makes two main contributions that help in understanding this barely explored area.
First, we design two new types of cross-category level vision regression tasks, namely object discovery and pose estimation.
Second, we propose the addition of functional contrastive learning (FCL) over the task representations in Conditional Neural Processes (CNPs) and train in an end-to-end fashion.
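The exact FCL objective is not given in the summary; as a stand-in, here is a standard InfoNCE-style contrastive loss where two embeddings of the same task form a positive pair (the pairing scheme and temperature are assumptions, not the paper's formulation):

```python
# Standard InfoNCE-style contrastive loss over task representations, as a
# stand-in for the paper's functional contrastive learning (FCL) term.
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z_a[i] and z_b[i] are two representations of the same task (positive
    # pair); all other rows in the batch act as negatives.
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau          # scaled cosine similarities
    targets = torch.arange(z_a.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

task_repr_1 = torch.randn(16, 128)  # e.g. CNP task embeddings, view 1
task_repr_2 = torch.randn(16, 128)  # view 2 of the same 16 tasks
loss = info_nce(task_repr_1, task_repr_2)
```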
arXiv Detail & Related papers (2022-03-09T17:28:16Z)
- Few-shot Continual Learning: a Brain-inspired Approach [34.306678703379944]
We provide a first systematic study on few-shot continual learning (FSCL) and present an effective solution with deep neural networks.
Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization.
Drawing inspiration from the robust brain system, we develop a method that (1) interdependently updates a pair of fast and slow weights for continual learning and few-shot learning, disentangling their divergent objectives, inspired by the biological model of meta-plasticity and fast/slow synapses; and (2) applies a brain-inspired two-step consolidation strategy to learn a task sequence without forgetting.
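One simple way to realize such a fast/slow split, offered here only as a guess at the general mechanism rather than this paper's exact rule, is to adapt fast weights per task by gradient steps and consolidate them into slow weights with an exponential moving average:

```python
# One simple realization of a fast/slow weight pair: fast weights adapt to
# the current few-shot task, slow weights consolidate gradually. A sketch
# of the general idea, not this paper's method.
import torch

def adapt_fast(fast: torch.Tensor, grad: torch.Tensor, lr: float = 0.1) -> torch.Tensor:
    # Fast weights: quick, task-specific gradient step (few-shot learning).
    return fast - lr * grad

def consolidate_slow(slow: torch.Tensor, fast: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    # Slow weights: small, stable drift toward the adapted fast weights
    # (continual learning without abrupt overwriting).
    return (1 - tau) * slow + tau * fast

slow = torch.zeros(8)
for _ in range(5):              # a sequence of tasks
    fast = slow.clone()         # start each task from the slow weights
    fake_grad = torch.randn(8)  # stand-in for a task's loss gradient
    fast = adapt_fast(fast, fake_grad)
    slow = consolidate_slow(slow, fast)
```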
arXiv Detail & Related papers (2021-04-19T03:40:48Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
- Self-supervised Knowledge Distillation for Few-shot Learning [123.10294801296926]
Few-shot learning is a promising learning paradigm due to its ability to quickly learn new distributions from only a few samples.
We propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks.
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods.
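The summary does not name the self-supervised signal; rotation prediction is a common choice for strengthening few-shot representations and is sketched below as an auxiliary head (an assumption, not necessarily this paper's pretext task):

```python
# Rotation-prediction auxiliary loss, one common self-supervised signal for
# improving few-shot representations. Whether this matches the paper's
# exact pretext task is an assumption.
import torch
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor):
    # Create 4 rotated copies (0/90/180/270 degrees) with rotation labels.
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

def self_supervised_loss(features: torch.Tensor, rot_head: torch.nn.Linear,
                         labels: torch.Tensor) -> torch.Tensor:
    # The model must predict which rotation was applied, which forces the
    # representation to encode object structure.
    return F.cross_entropy(rot_head(features), labels)

images = torch.randn(8, 3, 32, 32)
rotated, labels = rotate_batch(images)
features = rotated.flatten(1)                    # stand-in for an encoder
rot_head = torch.nn.Linear(features.size(1), 4)  # predicts the rotation
loss = self_supervised_loss(features, rot_head, labels)
```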
arXiv Detail & Related papers (2020-06-17T11:27:00Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Previously learned knowledge in deep neural networks can quickly fade when the networks are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)