Sample Condensation in Online Continual Learning
- URL: http://arxiv.org/abs/2206.11849v1
- Date: Thu, 23 Jun 2022 17:23:42 GMT
- Title: Sample Condensation in Online Continual Learning
- Authors: Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu
- Abstract summary: Online continual learning is a challenging learning scenario where the model must learn from a non-stationary stream of data.
We propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory.
- Score: 13.041782266237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online continual learning is a challenging learning scenario where the model
must learn from a non-stationary stream of data where each sample is seen only
once. The main challenge is to incrementally learn while avoiding catastrophic
forgetting, namely the problem of forgetting previously acquired knowledge
while learning from new data. A popular solution in this scenario is to use a
small memory to retain old data and rehearse it over time. Unfortunately, due
to the limited memory size, the quality of the memory will deteriorate over
time. In this paper we propose OLCGM, a novel replay-based continual learning
strategy that uses knowledge condensation techniques to continuously compress
the memory and achieve a better use of its limited size. The sample
condensation step compresses old samples instead of removing them, as other
replay strategies do. As a result, experiments show that, whenever the memory
budget is limited compared to the complexity of the data, OLCGM improves the
final accuracy compared to state-of-the-art replay strategies.
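The abstract describes sample condensation only at a high level. As a rough, non-authoritative illustration of the general idea (a replay memory that condenses stored samples instead of evicting them when full), the Python sketch below uses a naive per-class averaging step as a stand-in for the paper's knowledge-condensation procedure; the class name, the averaging rule, and the random-replacement fallback are assumptions for illustration, not the authors' OLCGM implementation.

```python
# Illustrative sketch only: a replay buffer that condenses stored samples
# instead of evicting them when the memory is full. The per-class averaging
# below is an assumed stand-in for OLCGM's knowledge-condensation step.
import random

import torch


class CondensingReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples: list[tuple[torch.Tensor, int]] = []

    def add(self, x: torch.Tensor, y: int) -> None:
        if len(self.samples) < self.capacity:
            self.samples.append((x.clone(), y))
            return
        # Memory is full: group stored indices by class label.
        idx_by_class: dict[int, list[int]] = {}
        for i, (_, label) in enumerate(self.samples):
            idx_by_class.setdefault(label, []).append(i)
        pairs = [idxs for idxs in idx_by_class.values() if len(idxs) >= 2]
        if not pairs:
            # No same-class pair to condense: fall back to random replacement.
            self.samples[random.randrange(self.capacity)] = (x.clone(), y)
            return
        # Condense two same-class samples into one (naive average), freeing a
        # slot for the new sample instead of dropping an old one outright.
        i, j = random.sample(random.choice(pairs), 2)
        (xi, yi), (xj, _) = self.samples[i], self.samples[j]
        self.samples[i] = (0.5 * (xi + xj), yi)
        self.samples[j] = (x.clone(), y)

    def sample_batch(self, batch_size: int):
        picked = random.sample(self.samples, min(batch_size, len(self.samples)))
        xs = torch.stack([x for x, _ in picked])
        ys = torch.tensor([y for _, y in picked])
        return xs, ys
```

In a training loop, each incoming sample would be added to the buffer and mixed with a rehearsal batch drawn from it; the only point of the sketch is that, under a fixed capacity, old information is merged rather than discarded.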
Related papers
- Reducing Catastrophic Forgetting in Online Class Incremental Learning Using Self-Distillation [3.8506666685467343]
In continual learning, previous knowledge is forgotten when a model learns new tasks.
In this paper, we address this problem by acquiring transferable knowledge through self-distillation.
Our proposed method outperforms conventional methods in experiments on the CIFAR10, CIFAR100, and MiniImageNet datasets.
arXiv Detail & Related papers (2024-09-17T16:26:33Z)
- Lifelong Event Detection with Embedding Space Separation and Compaction [30.05158209938146]
Existing lifelong event detection methods typically maintain a memory module and replay the stored memory data during the learning of a new task.
The simple combination of memory data and new-task samples can still result in substantial forgetting of previously acquired knowledge.
We propose a novel method based on embedding space separation and compaction.
arXiv Detail & Related papers (2024-04-03T06:51:49Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, one line of methods proposes to replay data from previously experienced tasks when learning new ones.
However, storing raw data is often impractical in view of memory constraints or data privacy issues.
As an alternative, data-free replay methods synthesize samples by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Summarizing Stream Data for Memory-Constrained Online Continual Learning [17.40956484727636]
We propose to Summarize the knowledge from the Stream Data (SSD) into more informative samples by distilling the training characteristics of real images.
We demonstrate that, with limited extra computational overhead, SSD provides more than a 3% accuracy boost on sequential CIFAR-100 under an extremely restricted memory buffer.
arXiv Detail & Related papers (2023-05-26T05:31:51Z)
- Adiabatic replay for continual learning [138.7878582237908]
Generative replay spends an increasing amount of time merely re-learning what is already known.
We propose a replay-based CL strategy that we term adiabatic replay (AR).
We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
arXiv Detail & Related papers (2023-03-23T10:18:06Z)
- Semiparametric Language Models Are Scalable Continual Learners [83.74414880208334]
Semiparametric language models (LMs) have shown promise in continuously learning from new text data.
We present a simple and intuitive approach called Selective Memorization (SeMem).
SeMem only memorizes difficult samples that the model is likely to struggle with.
arXiv Detail & Related papers (2023-03-02T17:15:02Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning [133.39254981496146]
Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge.
We propose to leverage "free" external unlabeled data querying in continual learning.
We show that queried unlabeled data continues to be beneficial, and we seamlessly extend CIL-QUD into its robustified versions.
arXiv Detail & Related papers (2022-06-15T22:53:23Z)
- Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving; a rough sketch of the storage-side compression idea appears after this list.
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
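The "Memory Replay with Data Compression" entry above hinges on a simple storage trade-off: compressing exemplars lets more of them fit in the same memory budget. The sketch below illustrates that trade-off with JPEG compression under an explicit byte budget; the class and method names, the quality setting, and the budget bookkeeping are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: storing JPEG-compressed exemplars under an explicit
# byte budget, in the spirit of memory replay with data compression. Names,
# quality setting, and budget handling are assumptions, not the paper's method.
import io

from PIL import Image


class CompressedExemplarMemory:
    def __init__(self, byte_budget: int, jpeg_quality: int = 50):
        self.byte_budget = byte_budget
        self.jpeg_quality = jpeg_quality
        self.exemplars: list[tuple[bytes, int]] = []
        self.used_bytes = 0

    def add(self, image: Image.Image, label: int) -> bool:
        """Try to store a JPEG-compressed copy of the image; return success."""
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=self.jpeg_quality)
        data = buf.getvalue()
        if self.used_bytes + len(data) > self.byte_budget:
            return False  # a full strategy would evict or lower the quality
        self.exemplars.append((data, label))
        self.used_bytes += len(data)
        return True

    def get(self, index: int):
        """Decode a stored exemplar back into an RGB image for rehearsal."""
        data, label = self.exemplars[index]
        return Image.open(io.BytesIO(data)).convert("RGB"), label
```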