Generative Negative Replay for Continual Learning
- URL: http://arxiv.org/abs/2204.05842v1
- Date: Tue, 12 Apr 2022 14:38:00 GMT
- Title: Generative Negative Replay for Continual Learning
- Authors: Gabriele Graffieti, Davide Maltoni, Lorenzo Pellegrini, Vincenzo
Lomonaco
- Abstract summary: One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying it interleaved with new experiences.
Generative replay uses generative models to provide replay patterns on demand.
We show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples to better learn the new classes.
- Score: 13.492896179777835
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning continually is a key aspect of intelligence and a necessary ability
to solve many real-life problems. One of the most effective strategies to
control catastrophic forgetting, the Achilles' heel of continual learning, is
storing part of the old data and replaying them interleaved with new
experiences (also known as the replay approach). Generative replay, which uses
generative models to provide replay patterns on demand, is particularly
intriguing; however, it has been shown to be effective mainly under simplified
assumptions, such as simple scenarios and low-dimensional data. In this paper,
we show that, while the generated data are usually not able to improve the
classification accuracy for the old classes, they can be effective as negative
examples (or antagonists) to better learn the new classes, especially when the
learning experiences are small and contain examples of just one or a few classes.
The proposed approach is validated on complex class-incremental and
data-incremental continual learning scenarios (CORe50 and ImageNet-1000)
composed of high-dimensional data and a large number of training experiences: a
setup where existing generative replay approaches usually fail.
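
To make the negative-replay idea concrete, below is a minimal, hedged PyTorch-style sketch of one possible training step: real samples from the current experience receive a standard cross-entropy loss, while generated replay samples act only as negative examples, i.e., the classifier is discouraged from assigning them to the new classes. This is an illustrative reading of the abstract, not the authors' implementation; `model`, `generator.sample`, and `new_class_ids` are hypothetical placeholders.

```python
# Sketch of negative replay for a single experience: generated data are used
# only as negatives for the new classes, not as (possibly mislabeled)
# positives for the old ones. The interfaces below are assumptions.
import torch
import torch.nn.functional as F

def negative_replay_step(model, generator, optimizer,
                         new_x, new_y, new_class_ids, replay_batch_size=32):
    model.train()
    optimizer.zero_grad()

    # Standard supervised loss on the new experience's (few) classes.
    logits_new = model(new_x)
    loss_pos = F.cross_entropy(logits_new, new_y)

    # Draw replay patterns from the generative model (kept frozen here).
    with torch.no_grad():
        replay_x = generator.sample(replay_batch_size)  # hypothetical API

    # Negative term: minimize the probability mass the classifier assigns
    # to the NEW classes on generated, old-looking data.
    probs_replay = model(replay_x).softmax(dim=1)
    loss_neg = probs_replay[:, new_class_ids].sum(dim=1).mean()

    (loss_pos + loss_neg).backward()
    optimizer.step()
    return loss_pos.item(), loss_neg.item()
```

Note the design choice this sketch tries to capture: because generated samples never contribute a positive target for an old class, even a weak generator can constrain the decision boundary of the new classes without injecting wrong labels for the old ones.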
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing old data is often impractical due to memory constraints or data privacy issues.
As an alternative, data-free replay methods have been proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Segue: Side-information Guided Generative Unlearnable Examples for
Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z) - Looking through the past: better knowledge retention for generative
replay in continual learning [18.695587430349438]
VAE-based generative replay is not powerful enough to generate more complex data with a greater number of classes.
We propose three modifications that allow the model to learn and generate complex data.
Our method outperforms other generative replay methods in various scenarios.
arXiv Detail & Related papers (2023-09-18T13:45:49Z) - Bypassing Logits Bias in Online Class-Incremental Learning with a
Generative Framework [15.345043222622158]
We focus on the online class-incremental learning setting, in which new classes emerge over time.
Almost all existing methods are replay-based with a softmax classifier.
We propose a novel generative framework based on the feature space.
arXiv Detail & Related papers (2022-05-19T06:54:20Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - New Insights on Reducing Abrupt Representation Change in Online
Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
arXiv Detail & Related papers (2022-03-08T01:37:00Z) - Training Deep Networks from Zero to Hero: avoiding pitfalls and going
beyond [59.94347858883343]
This tutorial covers the basic steps as well as more recent options to improve models.
It can be particularly useful in datasets that are not as well-prepared as those in challenges.
arXiv Detail & Related papers (2021-09-06T21:31:42Z) - Reducing Representation Drift in Online Continual Learning [87.71558506591937]
We study the online continual learning paradigm, where agents must learn from a changing distribution with constrained memory and compute.
In this work we instead focus on the change in representations of previously observed data due to the introduction of previously unobserved class samples in the incoming data stream.
arXiv Detail & Related papers (2021-04-11T15:19:30Z) - Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
In the training procedure of Class-IL, since the model has no knowledge of subsequent tasks, it extracts only the features needed for the tasks learned so far, which is insufficient information for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z) - Generative Feature Replay with Orthogonal Weight Modification for
Continual Learning [20.8966035274874]
Generative replay is a promising strategy which generates and replays pseudo data for previous tasks to alleviate catastrophic forgetting.
We propose to 1) replay penultimate-layer features with a generative model and 2) leverage a self-supervised auxiliary task to further enhance feature stability.
Empirical results on several datasets show that our method consistently achieves substantial improvements over the powerful OWM baseline.
arXiv Detail & Related papers (2020-05-07T13:56:22Z)