Generative Feature Replay For Class-Incremental Learning
- URL: http://arxiv.org/abs/2004.09199v1
- Date: Mon, 20 Apr 2020 10:58:20 GMT
- Title: Generative Feature Replay For Class-Incremental Learning
- Authors: Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu,
Andrew D. Bagdanov, Shangling Jui, Joost van de Weijer
- Abstract summary: We consider a class-incremental setting which means that the task-ID is unknown at inference time.
The imbalance between old and new classes typically results in a bias of the network towards the newest ones.
We propose a solution based on generative feature replay which does not require any exemplars.
- Score: 46.88667212214957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans are capable of learning new tasks without forgetting previous ones,
while neural networks fail due to catastrophic forgetting between new and
previously-learned tasks. We consider a class-incremental setting which means
that the task-ID is unknown at inference time. The imbalance between old and
new classes typically results in a bias of the network towards the newest ones.
This imbalance problem can either be addressed by storing exemplars from
previous tasks, or by using image replay methods. However, the latter can only
be applied to toy datasets since image generation for complex datasets is a
hard problem.
We propose a solution to the imbalance problem based on generative feature
replay which does not require any exemplars. To do this, we split the network
into two parts: a feature extractor and a classifier. To prevent forgetting, we
combine generative feature replay in the classifier with feature distillation
in the feature extractor. Through feature generation, our method reduces the
complexity of generative replay and prevents the imbalance problem. Our
approach is computationally efficient and scalable to large datasets.
Experiments confirm that our approach achieves state-of-the-art results on
CIFAR-100 and ImageNet, while requiring only a fraction of the storage needed
for exemplar-based continual learning. Code available at
https://github.com/xialeiliu/GFR-IL.
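To make the described split concrete, the PyTorch-style sketch below shows one way the pieces could fit together in a single training step: a feature extractor and classifier trained jointly, generated features standing in for old-class exemplars, and a feature-distillation term against a frozen copy of the previous extractor. All class and function names (FeatureExtractor, Classifier, feature_generator, train_step) are illustrative placeholders rather than the authors' implementation; the official code is in the linked repository.

```python
# Minimal sketch under assumed names; not the authors' code
# (official implementation: https://github.com/xialeiliu/GFR-IL).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Backbone mapping images to a feature vector (e.g. a ResNet trunk)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    """Linear head over the feature space; its output grows with new classes."""
    def __init__(self, feat_dim=512, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, f):
        return self.fc(f)

def train_step(x_new, y_new, extractor, old_extractor, classifier,
               feature_generator, old_class_ids, optimizer,
               distill_weight=1.0, replay_weight=1.0):
    """One incremental step combining (1) cross-entropy on current-task images,
    (2) cross-entropy on *generated* features of old classes (feature replay),
    and (3) feature distillation keeping the extractor close to its old copy."""
    optimizer.zero_grad()

    # (1) current-task classification loss
    feats = extractor(x_new)
    loss_new = F.cross_entropy(classifier(feats), y_new)

    # (2) generative feature replay: sample old-class labels, generate features
    y_old = old_class_ids[torch.randint(len(old_class_ids), (x_new.size(0),))]
    with torch.no_grad():
        replayed = feature_generator(y_old)   # assumed conditional feature GAN
    loss_replay = F.cross_entropy(classifier(replayed), y_old)

    # (3) feature distillation against the frozen pre-task extractor
    with torch.no_grad():
        old_feats = old_extractor(x_new)
    loss_distill = F.mse_loss(feats, old_feats)

    loss = loss_new + replay_weight * loss_replay + distill_weight * loss_distill
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sketch omits how the feature generator itself is updated and protected from forgetting after each task, which the full method also has to handle.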
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new ones.
However, this is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free replay methods have been proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- Density Map Distillation for Incremental Object Counting [37.982124268097]
A naive approach to incremental object counting would suffer from catastrophic forgetting, with a dramatic performance drop on previous tasks.
We propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD).
During training, we introduce a new counter head for each task and add a distillation loss to prevent forgetting of previous tasks.
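As a loose illustration of the per-task counter head and distillation loss mentioned above, the sketch below shows one way to wire them together; the names (IncrementalCounter, distillation_counting_loss) and the choice to distill the predicted density maps are assumptions for illustration, not the DMD authors' code.

```python
# Loose sketch of a per-task counting head with a distillation loss;
# names and the density-map distillation target are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IncrementalCounter(nn.Module):
    def __init__(self, backbone, feat_channels=256):
        super().__init__()
        self.backbone = backbone          # shared extractor, assumed to output
        self.feat_channels = feat_channels  # feature maps with feat_channels channels
        self.heads = nn.ModuleList()      # one density-map head per task

    def add_task_head(self):
        """Add a fresh density-map head when a new counting task arrives."""
        self.heads.append(nn.Conv2d(self.feat_channels, 1, kernel_size=1))

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

def distillation_counting_loss(model, frozen_old_model, x, gt_density, task_id):
    """Counting loss on the current task plus a term that keeps all previous
    heads' predictions close to those of a frozen copy of the old model."""
    loss = F.mse_loss(model(x, task_id), gt_density)
    with torch.no_grad():
        old_preds = [frozen_old_model(x, t) for t in range(task_id)]
    for t, target in enumerate(old_preds):
        loss = loss + F.mse_loss(model(x, t), target)
    return loss
```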
arXiv Detail & Related papers (2023-04-11T14:46:21Z)
- Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study [68.75670223005716]
We find that pre-trained language models like BERT have a potential ability to learn sequentially, even without any sparse memory replay.
Our experiments reveal that BERT can still produce high-quality representations for previously learned tasks over the long term, under extremely sparse replay or even no replay at all.
arXiv Detail & Related papers (2023-03-02T09:03:43Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to activate and select only a sparse set of neurons for learning current and past tasks at any stage.
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Match What Matters: Generative Implicit Feature Replay for Continual Learning [0.0]
We propose GenIFeR (Generative Implicit Feature Replay) for class-incremental learning.
The main idea is to train a generative adversarial network (GAN) to generate images that contain realistic features.
We empirically show that GenIFeR is superior to both conventional generative image and feature replay.
arXiv Detail & Related papers (2021-06-09T19:29:41Z)
- Class-incremental Learning using a Sequence of Partial Implicitly Regularized Classifiers [0.0]
In class-incremental learning, the objective is to learn a number of classes sequentially without having access to the whole training data.
Our experiments on the CIFAR-100 dataset show that the proposed method improves performance over the state of the art by a large margin.
arXiv Detail & Related papers (2021-04-04T10:02:45Z)
- Generative Feature Replay with Orthogonal Weight Modification for Continual Learning [20.8966035274874]
Generative replay is a promising strategy that generates and replays pseudo-data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model, and 2) leverage a self-supervised auxiliary task to further enhance feature stability.
Empirical results on several datasets show our method consistently achieves substantial improvements over the strong OWM baseline.
arXiv Detail & Related papers (2020-05-07T13:56:22Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift of features, which we call semantic drift, and compensate for it without the need for any exemplars.
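A compact sketch of the drift-compensation idea follows, assuming class prototypes (mean embeddings) are kept for nearest-class-mean classification; the function name, the Gaussian weighting, and the sigma value are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch of compensating stored class prototypes for feature drift;
# names and the specific weighting are illustrative assumptions.
import torch

def compensate_prototypes(old_prototypes, feats_before, feats_after, sigma=0.3):
    """Shift each old-class prototype by a weighted average of the drift that
    current-task samples exhibit between the old and the updated extractor.

    old_prototypes: (C, D) class means computed with the old extractor
    feats_before:   (N, D) current-task features from the old extractor
    feats_after:    (N, D) the same samples through the updated extractor
    """
    drift = feats_after - feats_before                   # per-sample drift (N, D)
    dists = torch.cdist(old_prototypes, feats_before)    # prototype-sample distances (C, N)
    weights = torch.exp(-dists ** 2 / (2 * sigma ** 2))  # nearer samples weigh more
    weights = weights / (weights.sum(dim=1, keepdim=True) + 1e-8)
    return old_prototypes + weights @ drift              # compensated prototypes (C, D)
```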
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
- Adversarial Incremental Learning [0.0]
Deep learning models can forget previously learned information when learning new tasks for which previous data is no longer available.
We propose an adversarial discriminator based method that does not make use of old data at all while training on new tasks.
We are able to outperform other state-of-the-art methods on CIFAR-100, SVHN, and MNIST datasets.
arXiv Detail & Related papers (2020-01-30T02:25:35Z)