Memory Replay with Data Compression for Continual Learning
- URL: http://arxiv.org/abs/2202.06592v1
- Date: Mon, 14 Feb 2022 10:26:23 GMT
- Title: Memory Replay with Data Compression for Continual Learning
- Authors: Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li,
Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu
- Abstract summary: We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
- Score: 80.95444077825852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning needs to overcome catastrophic forgetting of the past.
Memory replay of representative old training samples has been shown as an
effective solution, and achieves the state-of-the-art (SOTA) performance.
However, existing work is mainly built on a small memory buffer containing a
few original samples, which cannot fully characterize the old data distribution.
In this work, we propose memory replay with data compression to reduce the
storage cost of old training samples and thus increase the number of samples
that can be stored in the memory buffer. Observing that the trade-off between the quality
and quantity of compressed data is highly nontrivial for the efficacy of memory
replay, we propose a novel method based on determinantal point processes (DPPs)
to efficiently determine an appropriate compression quality for
currently-arrived training samples. In this way, using a naive data compression
algorithm with a properly selected quality can largely boost recent strong
baselines by saving more compressed data in a limited storage space. We
extensively validate this across several benchmarks of class-incremental
learning and in a realistic scenario of object detection for autonomous
driving.
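As a rough, hedged illustration of the quality-quantity trade-off described above, the sketch below compresses candidate exemplars with JPEG at several qualities, estimates how many samples fit in a fixed byte budget at each quality, and scores each affordable subset with a DPP-style log-determinant diversity measure, keeping the quality with the best score. The JPEG codec (via Pillow), the RBF kernel over downsampled grayscale pixels (a stand-in for learned features), and all function names are assumptions for illustration, not the authors' exact algorithm.

```python
# Sketch: pick a JPEG quality for memory replay by trading off how many
# compressed samples fit in a byte budget against the diversity of the
# resulting subset, scored with a DPP-style log-determinant.
import io

import numpy as np
from PIL import Image


def jpeg_roundtrip(img: Image.Image, quality: int):
    """Compress an image to JPEG at `quality`; return (decoded image, byte size)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    size = buf.getbuffer().nbytes
    buf.seek(0)
    return Image.open(buf).convert("RGB"), size


def dpp_diversity(features: np.ndarray) -> float:
    """Log-determinant of an RBF similarity kernel; larger means a more diverse set."""
    gamma = 1.0 / features.shape[1]  # scale kernel width with feature dimension
    sq_dists = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-gamma * sq_dists) + 1e-6 * np.eye(len(features))
    sign, logdet = np.linalg.slogdet(kernel)
    return logdet if sign > 0 else float("-inf")


def select_quality(images, byte_budget, qualities=(25, 50, 75, 90)):
    """Choose the JPEG quality whose affordable, decoded subset is most diverse."""
    best_q, best_score = qualities[0], float("-inf")
    for q in qualities:
        decoded, sizes = zip(*(jpeg_roundtrip(im, q) for im in images))
        n_fit = min(len(images), int(byte_budget // max(np.mean(sizes), 1.0)))
        if n_fit < 2:
            continue  # budget too tight at this quality to store a useful subset
        # Stand-in features: downsampled grayscale pixels of the decoded samples.
        feats = np.stack([
            np.asarray(im.convert("L").resize((16, 16)), dtype=np.float32).ravel() / 255.0
            for im in decoded[:n_fit]
        ])
        score = dpp_diversity(feats)
        if score > best_score:
            best_q, best_score = q, score
    return best_q
```

In the paper the diversity is measured on feature representations of the model being trained; the pixel-based kernel here only keeps the sketch self-contained.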
Related papers
- Hybrid Memory Replay: Blending Real and Distilled Data for Class Incremental Learning [19.671792951465]
Incremental learning (IL) aims to acquire new knowledge from current tasks while retaining knowledge learned from previous tasks.
Replay-based IL methods store a set of exemplars from previous tasks in a buffer and replay them when learning new tasks.
Data distillation (DD) can reduce the exemplar buffer's size, by condensing a large real dataset into a much smaller set of more information-compact synthetic exemplars.
We propose an innovative modification to DD that distills synthetic data from a sliding window of checkpoints in history.
arXiv Detail & Related papers (2024-10-20T12:13:32Z)
- FETCH: A Memory-Efficient Replay Approach for Continual Learning in Image Classification [7.29168794682254]
Class-incremental continual learning is an important area of research.
In previous works, promising results were achieved using replay and compressed replay techniques.
This work evaluates compressed replay within the GDumb pipeline.
arXiv Detail & Related papers (2024-07-17T07:54:03Z)
- Probing Image Compression For Class-Incremental Learning [8.711266563753846]
Continual machine learning (ML) systems rely on storing representative samples, also known as exemplars, under a limited memory budget to maintain performance on previously learned data.
In this paper, we explore the use of image compression as a strategy to enhance the buffer's capacity, thereby increasing exemplar diversity.
We introduce a new framework to incorporate image compression for continual ML including a pre-processing data compression step and an efficient compression rate/algorithm selection method.
arXiv Detail & Related papers (2024-03-10T18:58:14Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new tasks.
However, storing raw data is often impractical because of memory constraints or data privacy concerns.
As an alternative, data-free replay methods synthesize old-task samples by inverting the classification model (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
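A minimal sketch of the model-inversion idea mentioned in the entry above, assuming a frozen PyTorch classifier that maps image tensors to class logits. Pseudo-exemplars for an old class are synthesized by optimizing random-noise inputs toward that class; the regularizer, step counts, and function names are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: data-free replay via model inversion. Synthesize pseudo-exemplars
# for a previously seen class by optimizing inputs so a frozen classifier
# assigns them to that class.
import torch
import torch.nn.functional as F


def invert_class(model: torch.nn.Module, target_class: int,
                 num_samples: int = 8, image_shape=(3, 32, 32),
                 steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    """Gradient-based inversion of a frozen classifier for one target class."""
    model.eval()
    # Treat the pixels themselves as the learnable parameters.
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    labels = torch.full((num_samples,), target_class, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Push the inputs toward the old class, with a mild magnitude prior.
        loss = F.cross_entropy(logits, labels) + 1e-4 * x.pow(2).mean()
        loss.backward()
        optimizer.step()
    return x.detach()
```
- Summarizing Stream Data for Memory-Constrained Online Continual Learning [17.40956484727636]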
We propose to Summarize the knowledge from the Stream Data (SSD) into more informative samples by distilling the training characteristics of real images.
We demonstrate that, with limited extra computational overhead, SSD provides an accuracy boost of more than 3% on sequential CIFAR-100 under an extremely restricted memory buffer.
arXiv Detail & Related papers (2023-05-26T05:31:51Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing approaches is an exemplar memory: a subset of past data is saved into a memory bank and replayed when training on future tasks to prevent forgetting (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
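A minimal sketch of the exemplar memory bank described in the entry above, assuming a fixed per-class budget and uniform random replay; the buffer policy is an illustrative assumption, not the paper's transformer-based method.

```python
# Sketch: an exemplar memory bank with a fixed per-class budget.
# Stored samples are mixed back into later tasks' batches to reduce forgetting.
import random
from collections import defaultdict


class ExemplarMemory:
    def __init__(self, per_class_budget: int = 20):
        self.per_class_budget = per_class_budget
        self.bank = defaultdict(list)  # class label -> stored samples

    def add(self, sample, label) -> None:
        """Keep a sample only while its class still has room in the budget."""
        if len(self.bank[label]) < self.per_class_budget:
            self.bank[label].append(sample)

    def replay_batch(self, batch_size: int):
        """Draw a random batch of stored (sample, label) pairs for replay."""
        pool = [(s, c) for c, samples in self.bank.items() for s in samples]
        if not pool:
            return []
        return random.sample(pool, min(batch_size, len(pool)))
```
- Sample Condensation in Online Continual Learning [13.041782266237]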
Online continual learning is a challenging scenario in which the model must learn from a non-stationary stream of data.
We propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory.
arXiv Detail & Related papers (2022-06-23T17:23:42Z)
- A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model that adapts to new classes without forgetting old ones under a limited memory budget.
We show that when the model size is counted into the total budget and methods are compared at aligned memory sizes, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z)
- Neural Network Compression for Noisy Storage Devices [71.4102472611862]
Conventionally, model compression and physical storage are decoupled.
This approach forces the storage to treat each bit of the compressed model equally, and to dedicate the same amount of resources to each bit.
We propose a radically different approach that: (i) employs analog memories to maximize the capacity of each memory cell, and (ii) jointly optimizes model compression and physical storage to maximize memory utility.
arXiv Detail & Related papers (2021-02-15T18:19:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.