GPS: Distilling Compact Memories via Grid-based Patch Sampling for Efficient Online Class-Incremental Learning
- URL: http://arxiv.org/abs/2504.10409v2
- Date: Tue, 15 Apr 2025 03:20:18 GMT
- Title: GPS: Distilling Compact Memories via Grid-based Patch Sampling for Efficient Online Class-Incremental Learning
- Authors: Mingchuan Ma, Yuhao Zhou, Jindi Lv, Yuxin Tian, Dan Si, Shujian Li, Qing Ye, Jiancheng Lv
- Abstract summary: We introduce Grid-based Patch Sampling (GPS), a lightweight strategy for distilling informative memory samples without relying on a trainable model. GPS generates informative samples by sampling a subset of pixels from the original image, yielding compact low-resolution representations. GPS can be seamlessly integrated into existing replay frameworks, leading to 3%-4% improvements in average end accuracy under memory-constrained settings.
- Score: 20.112448377660854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online class-incremental learning aims to enable models to continuously adapt to new classes with limited access to past data, while mitigating catastrophic forgetting. Replay-based methods address this by maintaining a small memory buffer of previous samples, achieving competitive performance. For effective replay under constrained storage, recent approaches leverage distilled data to enhance the informativeness of memory. However, such approaches often involve significant computational overhead due to the use of bi-level optimization. Motivated by these limitations, we introduce Grid-based Patch Sampling (GPS), a lightweight and effective strategy for distilling informative memory samples without relying on a trainable model. GPS generates informative samples by sampling a subset of pixels from the original image, yielding compact low-resolution representations that preserve both semantic content and structural information. During replay, these representations are reassembled to support training and evaluation. Extensive experiments on benchmarks demonstrate that GPS can be seamlessly integrated into existing replay frameworks, leading to 3%-4% improvements in average end accuracy under memory-constrained settings, with limited computational overhead.
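The mechanism described in the abstract can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the grid simply keeps one pixel per stride x stride cell to form the compact memory sample, and that replay reassembles it by upsampling back to the original resolution; the stride, offset, and interpolation mode are illustrative assumptions.

```python
# Minimal sketch of grid-based pixel sampling for a replay buffer
# (illustrative only, not the GPS reference implementation).
import torch
import torch.nn.functional as F

def grid_sample_compress(img: torch.Tensor, stride: int = 4, offset: int = 0) -> torch.Tensor:
    # Keep one pixel per stride x stride grid cell of a (C, H, W) image.
    return img[:, offset::stride, offset::stride].clone()

def reassemble(compact: torch.Tensor, out_hw) -> torch.Tensor:
    # Upsample the compact representation back to the original (H, W) for replay.
    return F.interpolate(compact.unsqueeze(0), size=out_hw, mode="bilinear",
                         align_corners=False).squeeze(0)

x = torch.rand(3, 32, 32)                      # e.g. one CIFAR-scale image
compact = grid_sample_compress(x, stride=4)    # (3, 8, 8): ~16x fewer pixels to store
replayed = reassemble(compact, out_hw=(32, 32))
print(compact.shape, replayed.shape)           # (3, 8, 8) and (3, 32, 32)
```

With stride 4, each 32x32 exemplar shrinks to 8x8, so a fixed memory budget can hold roughly 16x more samples; this is the general trade-off the abstract refers to, independent of the paper's exact sampling and reassembly rules.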
Related papers
- Prototype-Based Continual Learning with Label-free Replay Buffer and Cluster Preservation Loss [3.824522034247845]
Continual learning techniques employ simple replay sample selection processes and use them during subsequent tasks. In this paper, we depart from this by automatically selecting prototypes stored without labels. "Push-away" and "pull-toward" mechanisms are also introduced for class-incremental and domain-incremental scenarios.
arXiv Detail & Related papers (2025-04-09T19:26:26Z) - Strike a Balance in Continual Panoptic Segmentation [60.26892488010291]
We introduce past-class backtrace distillation to balance the stability of existing knowledge with the adaptability to new information.
We also introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data.
We present a new method named Continual Panoptic Balanced (BalConpas).
arXiv Detail & Related papers (2024-07-23T09:58:20Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [93.90047628101155]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks. To address this, some methods propose replaying data from previous tasks during new task learning. However, this is often infeasible in practice due to memory constraints and data privacy issues.
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Class-Wise Buffer Management for Incremental Object Detection: An Effective Buffer Training Strategy [11.109975137910881]
Class-incremental learning aims to solve the problem that arises when unseen class instances are continuously added to an existing model.
We introduce an effective buffer training strategy (eBTS) that constructs an optimized replay buffer for object detection.
arXiv Detail & Related papers (2023-12-14T17:10:09Z) - Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z) - A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
arXiv Detail & Related papers (2022-10-10T08:27:28Z) - Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples (a minimal sketch of this idea appears after this list).
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z) - Always Be Dreaming: A New Approach for Data-Free Class-Incremental
Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z) - IB-DRR: Incremental Learning with Information-Back Discrete Representation Replay [4.8666876477091865]
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes.
Saving a subset of training samples of previously seen classes in the memory and replaying them during new training phases is proven to be an efficient and effective way to fulfil this aim.
However, finding a trade-off between the model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning.
arXiv Detail & Related papers (2021-04-21T15:32:11Z)
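As referenced in the "Memory Replay with Data Compression" entry above, a related way to stretch a fixed memory budget is to store exemplars in compressed form and decode them on demand during replay. The following is a minimal, hypothetical sketch of that general idea using JPEG encoding; the class name, quality setting, and buffer policy are illustrative assumptions, not that paper's implementation.

```python
# Illustrative sketch: store replay exemplars as JPEG bytes instead of raw arrays.
import io
from PIL import Image

class CompressedReplayBuffer:
    def __init__(self, quality=75):
        self.quality = quality   # lower quality -> smaller buffer, stronger artifacts
        self.storage = []        # list of (jpeg_bytes, label) pairs

    def add(self, img, label):
        # Compress an RGB PIL image and keep only the encoded bytes.
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=self.quality)
        self.storage.append((buf.getvalue(), label))

    def get(self, idx):
        # Decode an exemplar on demand when it is replayed.
        data, label = self.storage[idx]
        return Image.open(io.BytesIO(data)).convert("RGB"), label

# Usage: buffer = CompressedReplayBuffer(); buffer.add(pil_image, label=3)
```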