Prompt-Based Exemplar Super-Compression and Regeneration for
Class-Incremental Learning
- URL: http://arxiv.org/abs/2311.18266v1
- Date: Thu, 30 Nov 2023 05:59:31 GMT
- Title: Prompt-Based Exemplar Super-Compression and Regeneration for
Class-Incremental Learning
- Authors: Ruxiao Duan, Yaoyao Liu, Jieneng Chen, Adam Kortylewski, Alan Yuille
- Abstract summary: The exemplar super-compression and regeneration method, ESCORT, substantially increases the quantity and enhances the diversity of exemplars.
To minimize the domain gap between generated exemplars and real images, we propose partial compression and diffusion-based data augmentation.
- Score: 22.676222987218555
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Replay-based methods in class-incremental learning (CIL) have attained
remarkable success, as replaying the exemplars of old classes can significantly
mitigate catastrophic forgetting. Despite their effectiveness, the inherent
memory restrictions of CIL result in saving a limited number of exemplars with
poor diversity, leading to data imbalance and overfitting issues. In this
paper, we introduce a novel exemplar super-compression and regeneration method,
ESCORT, which substantially increases the quantity and enhances the diversity
of exemplars. Rather than storing past images, we compress images into visual
and textual prompts, e.g., edge maps and class tags, and save the prompts
instead, reducing the memory usage of each exemplar to 1/24 of the original
size. In subsequent learning phases, diverse high-resolution exemplars are
generated from the prompts by a pre-trained diffusion model, e.g., ControlNet.
To minimize the domain gap between generated exemplars and real images, we
propose partial compression and diffusion-based data augmentation, allowing us
to utilize an off-the-shelf diffusion model without fine-tuning it on the
target dataset. Therefore, the same diffusion model can be downloaded whenever
it is needed, incurring no memory consumption. Comprehensive experiments
demonstrate that our method significantly improves model performance across
multiple CIL benchmarks, e.g., 5.0 percentage points higher than the previous
state-of-the-art on the 10-phase Caltech-256 dataset.
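The compress-then-regenerate pipeline described in the abstract can be illustrated with off-the-shelf tools. Below is a minimal sketch assuming the Canny variant of ControlNet from the Hugging Face diffusers library; the checkpoints, edge thresholds, and prompt template are illustrative assumptions rather than the paper's exact configuration, and the partial-compression step (keeping some real images uncompressed) is omitted.

```python
# Sketch: store an edge map plus a class tag instead of the full RGB exemplar,
# then regenerate diverse exemplars later with a pre-trained ControlNet.
# Model names and hyperparameters below are assumptions, not the paper's setup.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline


def compress_exemplar(image_path: str, class_tag: str):
    """Replace a stored image with a (visual prompt, textual prompt) pair."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)      # single-channel edge map (visual prompt)
    return edges, class_tag                # far smaller than the original RGB image


def regenerate_exemplars(edges: np.ndarray, class_tag: str, n: int = 4):
    """Regenerate several high-resolution exemplars from the stored prompts."""
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    cond = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel conditioning image
    prompt = f"a photo of a {class_tag}"
    # Different random seeds yield diverse exemplars from the same pair of prompts.
    return [
        pipe(prompt, image=cond,
             generator=torch.Generator("cuda").manual_seed(seed),
             num_inference_steps=30).images[0]
        for seed in range(n)
    ]
```

Because the diffusion model is used off the shelf, it can be re-downloaded at each learning phase and need not count against the exemplar memory budget.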
Related papers
- Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models [51.3915762595891]
This paper presents an efficient LoRA-based personalization approach for on-device subject-driven generation.
Our method, termed Hollowed Net, enhances memory efficiency during fine-tuning by modifying the architecture of a diffusion U-Net.
arXiv Detail & Related papers (2024-11-02T08:42:48Z) - Iterative Ensemble Training with Anti-Gradient Control for Mitigating Memorization in Diffusion Models [20.550324116099357]
Diffusion models are known for their tremendous ability to generate novel and high-quality samples.
Recent approaches to mitigating memorization have either focused only on the text modality in cross-modal generation tasks or relied on data augmentation strategies.
We propose a novel training framework for diffusion models from the perspective of visual modality, which is more generic and fundamental for mitigating memorization.
arXiv Detail & Related papers (2024-07-22T02:19:30Z) - Feature Expansion and enhanced Compression for Class Incremental Learning [3.3425792454347616]
We propose a new algorithm that enhances the compression of previous class knowledge by cutting and mixing patches of previous class samples with the new images during compression.
We show that this new data augmentation reduces catastrophic forgetting by specifically targeting past class information and improving its compression.
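A rough sketch of the patch cut-and-mix operation described above, assuming a CutMix-style paste of an old-class patch onto a new image; the patch size, location sampling, and label handling are assumptions, since the summary does not specify them.

```python
import numpy as np


def cut_and_mix(old_exemplar: np.ndarray, new_image: np.ndarray,
                patch_frac: float = 0.3, rng=np.random) -> np.ndarray:
    """Paste a random patch from an old-class exemplar onto a new image.

    Both inputs are assumed to be (H, W, C) arrays of the same shape.
    """
    h, w = new_image.shape[:2]
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    y, x = rng.randint(0, h - ph), rng.randint(0, w - pw)
    mixed = new_image.copy()
    mixed[y:y + ph, x:x + pw] = old_exemplar[y:y + ph, x:x + pw]
    return mixed
```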
arXiv Detail & Related papers (2024-05-13T06:57:18Z) - Probing Image Compression For Class-Incremental Learning [8.711266563753846]
Continual machine learning (ML) systems rely on storing representative samples, also known as exemplars, within a limited memory budget to maintain performance on previously learned data.
In this paper, we explore the use of image compression as a strategy to enhance the buffer's capacity, thereby increasing exemplar diversity.
We introduce a new framework to incorporate image compression for continual ML including a pre-processing data compression step and an efficient compression rate/algorithm selection method.
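As a minimal illustration of trading per-exemplar storage for exemplar count, the sketch below JPEG-encodes exemplars before buffering them; the fixed quality setting is an assumption standing in for the paper's compression rate/algorithm selection step.

```python
import io
from PIL import Image


def compress_for_buffer(image: Image.Image, quality: int = 40) -> bytes:
    """JPEG-encode an exemplar so more samples fit in the same memory budget."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()


def load_from_buffer(data: bytes) -> Image.Image:
    """Decode a buffered exemplar back to an image for replay."""
    return Image.open(io.BytesIO(data)).convert("RGB")
```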
arXiv Detail & Related papers (2024-03-10T18:58:14Z) - Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
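A small sketch of diffusion-based image-to-image augmentation with an off-the-shelf model, using the diffusers img2img pipeline; the checkpoint, prompt template, and strength value are assumptions rather than that paper's configuration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Off-the-shelf text-to-image model reused for image editing (no fine-tuning).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def augment(image: Image.Image, class_name: str, strength: float = 0.5) -> Image.Image:
    """Edit a labelled image with a text-guided diffusion pass to diversify it."""
    prompt = f"a photo of a {class_name}"
    return pipe(prompt=prompt, image=image, strength=strength).images[0]
```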
arXiv Detail & Related papers (2023-02-07T20:42:28Z) - A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental
Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to adapt a model to new classes without forgetting old ones while staying within a limited memory budget.
We show that when the model size is counted into the total budget and methods are compared at an aligned memory size, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z) - Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z) - Always Be Dreaming: A New Approach for Data-Free Class-Incremental
Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
arXiv Detail & Related papers (2021-06-17T17:56:08Z) - IB-DRR: Incremental Learning with Information-Back Discrete
Representation Replay [4.8666876477091865]
Incremental learning aims to enable machine learning models to continuously acquire new knowledge given new classes.
Saving a subset of training samples from previously seen classes in memory and replaying them during new training phases has proven to be an efficient and effective way to fulfil this aim.
However, finding a trade-off between the model performance and the number of samples to save for each class is still an open problem for replay-based incremental learning.
arXiv Detail & Related papers (2021-04-21T15:32:11Z) - Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
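The memory saving here comes from storing low-dimensional feature descriptors instead of pixels; below is a minimal sketch with a frozen torchvision backbone (the backbone choice and preprocessing are assumptions, and the paper's feature-adaptation step is not included).

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Frozen backbone used purely as a feature extractor (ResNet-18 is illustrative).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def embed_exemplar(image: Image.Image) -> torch.Tensor:
    """Store a 512-d feature vector instead of the raw image pixels."""
    x = preprocess(image.convert("RGB")).unsqueeze(0)
    return backbone(x).squeeze(0)        # much smaller memory footprint than an image
```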
arXiv Detail & Related papers (2020-04-01T21:16:05Z)