Future-Proofing Class Incremental Learning
- URL: http://arxiv.org/abs/2404.03200v1
- Date: Thu, 4 Apr 2024 05:08:51 GMT
- Title: Future-Proofing Class Incremental Learning
- Authors: Quentin Jodelet, Xin Liu, Yin Jun Phua, Tsuyoshi Murata
- Abstract summary: We propose using a pre-trained text-to-image diffusion model to generate synthetic images of future classes, which are then used to train the feature extractor.
Experiments on the standard benchmarks CIFAR100 and ImageNet-Subset demonstrate that our proposed method can be used to improve state-of-the-art methods for exemplar-free class incremental learning.
- Score: 4.666429553461402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-Free Class Incremental Learning is a highly challenging setting where replay memory is unavailable. Methods relying on frozen feature extractors have drawn attention recently in this setting due to their impressive performance and lower computational costs. However, those methods are highly dependent on the data used to train the feature extractor and may struggle when an insufficient number of classes is available during the first incremental step. To overcome this limitation, we propose to use a pre-trained text-to-image diffusion model to generate synthetic images of future classes and use them to train the feature extractor. Experiments on the standard benchmarks CIFAR100 and ImageNet-Subset demonstrate that our proposed method can be used to improve state-of-the-art methods for exemplar-free class incremental learning, especially in the most difficult settings where the first incremental step contains only a few classes. Moreover, we show that using synthetic samples of future classes achieves higher performance than using real data from different classes, paving the way for better and less costly pre-training methods for incremental learning.
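The core idea is simple to prototype. Below is a minimal sketch, assuming the diffusers library, a Stable Diffusion checkpoint, and a ResNet-18 backbone; the class names, prompt template, and toy training loop are illustrative assumptions rather than the paper's exact protocol.

```python
import torch
from torch import nn
from diffusers import StableDiffusionPipeline
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Generate synthetic images for hypothetical future class names.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

future_classes = ["otter", "lamp", "tractor"]  # placeholder future labels
to_tensor = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])
images, labels = [], []
for idx, name in enumerate(future_classes):
    out = pipe(f"a photo of a {name}", num_images_per_prompt=4)
    images += [to_tensor(img) for img in out.images]
    labels += [idx] * len(out.images)

x = torch.stack(images).to(device)
y = torch.tensor(labels, device=device)

# 2) Pre-train the feature extractor on the synthetic classes.
model = models.resnet18(num_classes=len(future_classes)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
for _ in range(10):  # toy loop; real pre-training uses many epochs and batches
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

After pre-training, the backbone would be frozen and the usual exemplar-free incremental procedure applied on top of it.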
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
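A hedged sketch of what feature-level replay can look like: instead of keeping one prototype per old class, synthesize many pseudo-features per class and feed them to the classifier head. Sampling from a Gaussian around stored per-class statistics is an assumption here; the paper's synthesis mechanism may differ.

```python
import torch

def synthesize_features(mean: torch.Tensor, std: torch.Tensor, n: int) -> torch.Tensor:
    """Draw n pseudo-features for one old class from N(mean, diag(std^2))."""
    return mean + std * torch.randn(n, mean.numel())

old_mean = torch.zeros(512)        # stored per-class feature mean (512-d assumed)
old_std = 0.1 * torch.ones(512)    # stored per-class feature std
replay_feats = synthesize_features(old_mean, old_std, n=64)
# replay_feats are trained against the old-class label alongside new-class features.
```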
arXiv Detail & Related papers (2024-11-03T07:19:11Z)
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning [13.264972882846966]
Continual learning methods are known to suffer from catastrophic forgetting.
Existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks.
We propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space.
We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly.
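The two steps above can be sketched directly. In this minimal sketch, old_model and new_model are assumed to be frozen feature extractors mapping images to embeddings; the step sizes and iteration count are illustrative assumptions.

```python
import torch

def perturb_towards_prototype(x, old_model, prototype, steps=10, lr=0.01):
    """Perturb images so their old-model embeddings approach an old-class prototype."""
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.SGD([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (old_model(x_adv) - prototype).pow(2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return x_adv.detach()

def compensate_prototype(prototype, x_adv, old_model, new_model):
    """Shift the prototype by the embedding drift measured on the perturbed images."""
    with torch.no_grad():
        drift = (new_model(x_adv) - old_model(x_adv)).mean(dim=0)
    return prototype + drift  # prototype expressed in the new embedding space
```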
arXiv Detail & Related papers (2024-05-29T13:31:42Z)
- Class-Incremental Learning using Diffusion Model for Distillation and Replay [5.0977390531431634]
Class-incremental learning aims to learn new classes in an incremental fashion without forgetting the previously learned ones.
We propose the use of a pretrained Stable Diffusion model as a source of additional data for class-incremental learning.
arXiv Detail & Related papers (2023-06-30T11:23:49Z)
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has been recently proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT, that overcomes limitations with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- Cross-Class Feature Augmentation for Class Incremental Learning [45.91253737682168]
We propose a novel class incremental learning approach by incorporating a feature augmentation technique motivated by adversarial attacks.
The proposed approach has a unique perspective to utilize the previous knowledge in class incremental learning since it augments features of arbitrary target classes.
Our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios.
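A hedged sketch of adversarial-style feature augmentation: push an existing feature toward an arbitrary target class by descending the classifier's loss for that class. The step size and iteration count are illustrative assumptions, not the paper's settings.

```python
import torch
from torch import nn

def augment_feature(feat, classifier: nn.Module, target: int, steps=5, lr=0.1):
    """Nudge a (1, d) feature toward class `target` via gradient steps on the classifier loss."""
    f = feat.clone().requires_grad_(True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(
            classifier(f), torch.tensor([target], device=f.device)
        )
        (grad,) = torch.autograd.grad(loss, f)
        f = (f - lr * grad).detach().requires_grad_(True)
    return f.detach()  # a synthetic feature nudged toward the target class
```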
arXiv Detail & Related papers (2023-04-04T15:48:09Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
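That baseline is a few lines of code. A minimal sketch, assuming a torchvision ResNet-50 as the pre-trained model (PTM) and 100 classes; only the linear head receives gradients.

```python
import torch
from torch import nn
from torchvision import models

# Frozen pre-trained backbone exposing 2048-d features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

classifier = nn.Linear(2048, 100)  # the only learnable component
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)

x = torch.randn(8, 3, 224, 224)    # dummy batch
with torch.no_grad():
    feats = backbone(x)
logits = classifier(feats)
```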
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
- PIVOT: Prompting for Video Continual Learning [50.80141083993668]
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
arXiv Detail & Related papers (2022-12-09T13:22:27Z)
- SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning [168.89470249446023]
We present SURF, a semi-supervised reward learning framework that utilizes a large amount of unlabeled samples with data augmentation.
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
Our experiments demonstrate that our approach significantly improves the feedback-efficiency of the preference-based method on a variety of locomotion and robotic manipulation tasks.
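The pseudo-labeling step can be sketched as a confidence filter on the preference predictor's outputs. The predictor interface and the 0.95 threshold are illustrative assumptions.

```python
import torch

def pseudo_label(predictor, seg_a, seg_b, threshold=0.95):
    """Keep only unlabeled segment pairs on which the preference predictor is confident."""
    with torch.no_grad():
        p = predictor(seg_a, seg_b)          # P(seg_a preferred), shape (batch,)
    confident = (p > threshold) | (p < 1 - threshold)
    labels = (p > 0.5).float()               # 1 -> seg_a preferred, 0 -> seg_b
    return seg_a[confident], seg_b[confident], labels[confident]
```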
arXiv Detail & Related papers (2022-03-18T16:50:38Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
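A minimal sketch of the two loss terms named above, under assumptions: the modified cross-entropy is restricted to the current task's logits, and the feature distillation against the frozen old model is weighted per dimension. The exact weighting scheme in the paper may differ.

```python
import torch
from torch import nn

def dfcil_loss(logits, y, feat_new, feat_old, importance, new_class_offset):
    # Modified cross-entropy over current-task logits only.
    ce = nn.functional.cross_entropy(
        logits[:, new_class_offset:], y - new_class_offset
    )
    # Importance-weighted feature distillation against the frozen old model.
    fd = (importance * (feat_new - feat_old).pow(2)).mean()
    return ce + fd
```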
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
- Class-incremental Learning using a Sequence of Partial Implicitly Regularized Classifiers [0.0]
In class-incremental learning, the objective is to learn a number of classes sequentially without having access to the whole training data.
Our experiments on the CIFAR100 dataset show that the proposed method outperforms SOTA by a large margin.
arXiv Detail & Related papers (2021-04-04T10:02:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.