Distilled Replay: Overcoming Forgetting through Synthetic Samples
- URL: http://arxiv.org/abs/2103.15851v1
- Date: Mon, 29 Mar 2021 18:02:05 GMT
- Title: Distilled Replay: Overcoming Forgetting through Synthetic Samples
- Authors: Andrea Rosasco, Antonio Carta, Andrea Cossu, Vincenzo Lomonaco, Davide Bacciu
- Abstract summary: Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experience.
This work introduces Distilled Replay, a novel replay strategy for Continual Learning which is able to mitigate forgetting by keeping a very small buffer.
We show the effectiveness of our Distilled Replay against naive replay, which randomly samples patterns from the dataset, on four popular Continual Learning benchmarks.
- Score: 11.240947363668242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Replay strategies are Continual Learning techniques which mitigate
catastrophic forgetting by keeping a buffer of patterns from previous
experience, which are interleaved with new data during training. The amount of
patterns stored in the buffer is a critical parameter which largely influences
the final performance and the memory footprint of the approach. This work
introduces Distilled Replay, a novel replay strategy for Continual Learning
which is able to mitigate forgetting by keeping a very small buffer (up to $1$
pattern per class) of highly informative samples. Distilled Replay builds the
buffer through a distillation process which compresses a large dataset into a
tiny set of informative examples. We show the effectiveness of our Distilled
Replay against naive replay, which randomly samples patterns from the dataset,
on four popular Continual Learning benchmarks.
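
The core mechanism lends itself to a compact illustration. Below is a minimal sketch of the buffer-distillation idea in PyTorch, assuming a linear classifier and a single inner SGD step for clarity; the shapes, learning rates, and step counts are illustrative assumptions, not the paper's actual procedure.

```python
# Minimal sketch of dataset-distillation-style buffer learning:
# optimize one synthetic pattern per class so that a model taking a
# single SGD step on the buffer fits the real data (bi-level setup).
import torch

def distill_buffer(real_x, real_y, n_classes, steps=200, inner_lr=0.1):
    """real_x: (N, d) float tensor, real_y: (N,) long tensor."""
    d = real_x.shape[1]
    syn_x = torch.randn(n_classes, d, requires_grad=True)  # learnable buffer
    syn_y = torch.arange(n_classes)                        # one pattern per class
    opt = torch.optim.Adam([syn_x], lr=1e-2)
    for _ in range(steps):
        w = torch.zeros(d, n_classes, requires_grad=True)  # fresh linear model
        # Inner step: train the model on the synthetic buffer...
        inner_loss = torch.nn.functional.cross_entropy(syn_x @ w, syn_y)
        g, = torch.autograd.grad(inner_loss, w, create_graph=True)
        w_new = w - inner_lr * g
        # Outer step: ...and ask the updated model to fit the real data,
        # backpropagating through the inner update into the buffer itself.
        outer_loss = torch.nn.functional.cross_entropy(real_x @ w_new, real_y)
        opt.zero_grad(); outer_loss.backward(); opt.step()
    return syn_x.detach(), syn_y
```

The returned `syn_x`, `syn_y` would then be interleaved with batches from new tasks during training, as in any replay strategy, while occupying only one pattern per class of memory.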
Related papers
- Watch Your Step: Optimal Retrieval for Continual Learning at Scale [1.7265013728931]
In continual learning, a model learns incrementally over time while minimizing interference between old and new tasks.
One of the most widely used approaches in continual learning is referred to as replay.
We propose a framework for evaluating selective retrieval strategies, categorized by simple, independent class- and sample-selective primitives.
We propose a set of strategies to prevent duplicate replays and explore whether new samples with low loss values can be learned without replay.
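
As a concrete illustration, here is a minimal sketch of a buffer with a sample-selective retrieval primitive that prevents duplicate replays; the list-based storage and uniform selection are illustrative assumptions, not the paper's exact strategies.

```python
# Minimal sketch: uniform retrieval restricted to samples not yet
# replayed, resetting the guard once every sample has been seen.
import random

class Buffer:
    def __init__(self):
        self.items, self.replayed = [], set()

    def add(self, sample):
        self.items.append(sample)

    def retrieve(self, k):
        # Only consider samples not yet replayed in the current pass.
        fresh = [i for i in range(len(self.items)) if i not in self.replayed]
        if not fresh:                        # everything seen: reset the guard
            self.replayed.clear()
            fresh = list(range(len(self.items)))
        picked = random.sample(fresh, min(k, len(fresh)))
        self.replayed.update(picked)
        return [self.items[i] for i in picked]
```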
arXiv Detail & Related papers (2024-04-16T17:35:35Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previous tasks when learning new ones.
However, storing raw data is often impractical due to memory constraints or data privacy issues.
As a replacement, data-free data replay methods are proposed by inverting samples from the classification model.
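
A minimal sketch of the inversion idea, assuming a trained `model` and gradient ascent on random inputs toward a target class; the regularizer and step counts are illustrative, not the paper's exact inversion objective.

```python
# Minimal sketch of data-free replay via model inversion: optimize
# random inputs so the frozen classifier assigns them a target class.
import torch

def invert_samples(model, target_class, n, input_shape, steps=500, lr=0.05):
    model.eval()
    x = torch.randn(n, *input_shape, requires_grad=True)
    y = torch.full((n,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss = loss + 1e-4 * x.pow(2).mean()  # mild prior keeping inputs bounded
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()  # synthetic "old task" samples for replay
```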
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Continual Learning with Strong Experience Replay [32.154995019080594]
We propose a CL method with Strong Experience Replay (SER).
Besides distilling past experience from the memory buffer, SER utilizes future experiences mimicked on the current training data.
Experimental results on multiple image classification datasets show that our SER method surpasses the state-of-the-art methods by a noticeable margin.
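
A minimal sketch of how the two distillation terms described above could be combined, assuming a frozen copy `prev_model` of the model from the previous task; the specific losses and weights here are assumptions, not SER's exact formulation.

```python
# Minimal sketch: cross-entropy on new data plus two distillation terms,
# one on buffered old samples ("past") and one on current-task inputs
# ("future" experience mimicked on the current training data).
import torch.nn.functional as F

def ser_style_loss(model, prev_model, new_x, new_y, buf_x, buf_y,
                   alpha=1.0, beta=1.0):
    loss = F.cross_entropy(model(new_x), new_y)              # learn the new task
    loss = loss + F.cross_entropy(model(buf_x), buf_y)       # rehearse the buffer
    # Past experience: match the previous model's logits on buffered samples.
    loss = loss + alpha * F.mse_loss(model(buf_x), prev_model(buf_x).detach())
    # "Future" experience: mimic the previous model on current data too.
    loss = loss + beta * F.mse_loss(model(new_x), prev_model(new_x).detach())
    return loss
```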
arXiv Detail & Related papers (2023-05-23T02:42:54Z) - PCR: Proxy-based Contrastive Replay for Online Class-Incremental
Continual Learning [16.67238259139417]
Existing replay-based methods effectively alleviate catastrophic forgetting by saving part of the old data and replaying it in a proxy-based or contrastive-based manner.
We propose a novel replay-based method called proxy-based contrastive replay (PCR).
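
For illustration, a minimal sketch of a proxy-based contrastive loss over new and replayed samples, assuming one learnable proxy per class and cosine similarity; PCR's exact anchor-to-proxy formulation is more involved than this.

```python
# Minimal sketch: samples are contrasted against learnable class
# proxies rather than against other samples, which keeps the loss
# well-defined even with a tiny replay batch.
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(features, labels, proxies, temperature=0.1):
    """features: (B, D) embeddings of new + replayed samples,
    labels: (B,) long tensor, proxies: (C, D) learnable class proxies."""
    f = F.normalize(features, dim=1)
    p = F.normalize(proxies, dim=1)
    logits = f @ p.t() / temperature  # similarity of each sample to each proxy
    return F.cross_entropy(logits, labels)
```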
arXiv Detail & Related papers (2023-04-10T06:35:19Z) - Adiabatic replay for continual learning [138.7878582237908]
Generative replay spends an increasing amount of time just re-learning what is already known.
We propose a replay-based CL strategy that we term adiabatic replay (AR).
We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
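
A minimal sketch of the VAE-based generative replay baseline mentioned above, assuming a trained `vae` exposing a `decode` method and a frozen `old_model` for labeling; both helper names are hypothetical.

```python
# Minimal sketch of generative replay: pseudo-samples of past tasks come
# from a generator instead of a raw buffer, labeled by the old model.
import torch

def replay_batch(vae, old_model, n, latent_dim):
    with torch.no_grad():
        z = torch.randn(n, latent_dim)         # sample the VAE prior
        x_fake = vae.decode(z)                 # pseudo-samples of past data
        y_fake = old_model(x_fake).argmax(1)   # label them with the old model
    return x_fake, y_fake
```

Training then interleaves these pseudo-batches with real new-task batches, so past knowledge is rehearsed without storing raw old data.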
arXiv Detail & Related papers (2023-03-23T10:18:06Z) - Analysis of Stochastic Processes through Replay Buffers [50.52781475688759]
We analyze a system where a process X is pushed into a replay buffer, from which a process Y is then randomly sampled.
Our theoretical analysis sheds light on why a replay buffer may be a good de-correlator.
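
The de-correlation claim is easy to check numerically. A minimal sketch, assuming an AR(1) process as X and uniform buffer draws as Y; parameters are illustrative.

```python
# Minimal sketch: an AR(1) process X is highly autocorrelated, but
# samples drawn uniformly from a buffer holding X are nearly independent.
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(10_000)
for t in range(1, len(x)):                 # X_t = 0.95 * X_{t-1} + noise
    x[t] = 0.95 * x[t - 1] + rng.normal()
y = rng.choice(x, size=len(x))             # process Y: uniform replay draws

def lag1(v):                               # lag-1 autocorrelation
    return np.corrcoef(v[:-1], v[1:])[0, 1]

print(lag1(x))   # close to 0.95
print(lag1(y))   # close to 0
```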
arXiv Detail & Related papers (2022-06-26T11:20:44Z) - Sample Condensation in Online Continual Learning [13.041782266237]
Online continual learning is a challenging scenario in which the model must learn from a non-stationary stream of data.
We propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory.
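
As a much simplified stand-in for knowledge condensation, the sketch below merges the two most similar same-class samples whenever the buffer must shrink; OLCGM's actual condensation is gradient-based, so this only illustrates the compress-rather-than-drop idea.

```python
# Minimal sketch: shrink the buffer by one entry by averaging the two
# closest samples of the same class (assumes such a pair exists).
import numpy as np

def condense(buf_x, buf_y):
    """buf_x: (N, D) array, buf_y: (N,) labels; returns buffers of size N-1."""
    best, pair = np.inf, None
    for i in range(len(buf_x)):
        for j in range(i + 1, len(buf_x)):
            if buf_y[i] != buf_y[j]:
                continue
            d = np.linalg.norm(buf_x[i] - buf_x[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    merged = (buf_x[i] + buf_x[j]) / 2
    keep = [k for k in range(len(buf_x)) if k not in (i, j)]
    return np.vstack([buf_x[keep], merged[None]]), np.append(buf_y[keep], buf_y[i])
```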
arXiv Detail & Related papers (2022-06-23T17:23:42Z) - Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
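
A minimal sketch of the storage-side mechanics, assuming images are stored as JPEG bytes via Pillow and decoded at rehearsal time; the quality value is an illustrative knob trading memory for fidelity.

```python
# Minimal sketch: keep compressed bytes in the replay buffer instead of
# raw arrays, so the same memory budget holds many more old samples.
import io
import numpy as np
from PIL import Image

def compress(img_array, quality=50):
    """img_array: uint8 HxWxC image; returns JPEG bytes for the buffer."""
    buf = io.BytesIO()
    Image.fromarray(img_array).save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def decompress(jpeg_bytes):
    """Decode stored bytes back to an array at rehearsal time."""
    return np.array(Image.open(io.BytesIO(jpeg_bytes)))
```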
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
- Replay For Safety [51.11953997546418]
In experience replay, past transitions are stored in a memory buffer and re-used during learning.
We show that using an appropriately biased sampling scheme can allow us to achieve a safe policy.
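
A minimal sketch of one way to bias replay sampling toward unsafe transitions, assuming a per-transition cost signal; the weighting scheme is an assumption, not the paper's exact method.

```python
# Minimal sketch: replay probability grows with a transition's safety
# cost, so the agent rehearses dangerous situations more often.
import numpy as np

def sample_biased(transitions, costs, k, rng, power=2.0):
    """transitions: list of (s, a, r, s'); costs: per-transition safety cost."""
    w = (np.asarray(costs) + 1e-6) ** power   # unsafe (high-cost) weighted up
    p = w / w.sum()
    idx = rng.choice(len(transitions), size=k, p=p)
    return [transitions[i] for i in idx]
```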
arXiv Detail & Related papers (2021-12-08T11:10:57Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)