PlaStIL: Plastic and Stable Memory-Free Class-Incremental Learning
- URL: http://arxiv.org/abs/2209.06606v2
- Date: Tue, 4 Jul 2023 09:48:35 GMT
- Title: PlaStIL: Plastic and Stable Memory-Free Class-Incremental Learning
- Authors: Grégoire Petit, Adrian Popescu, Eden Belouadah, David Picard, Bertrand Delezoide
- Abstract summary: Plasticity and stability are both needed in class-incremental learning in order to learn from new data while preserving past knowledge.
We propose a method which has a similar number of parameters but distributes them differently to find a better balance between plasticity and stability.
- Score: 49.0417577439298
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Plasticity and stability are needed in class-incremental learning in order to
learn from new data while preserving past knowledge. Due to catastrophic
forgetting, finding a compromise between these two properties is particularly
challenging when no memory buffer is available. Mainstream methods need to
store two deep models since they integrate new classes using fine-tuning with
knowledge distillation from the previous incremental state. We propose a method
which has a similar number of parameters but distributes them differently in
order to find a better balance between plasticity and stability. Following an
approach already deployed by transfer-based incremental methods, we freeze the
feature extractor after the initial state. Classes in the oldest incremental
states are trained with this frozen extractor to ensure stability. Recent
classes are predicted using partially fine-tuned models in order to introduce
plasticity. Our proposed plasticity layer can be incorporated into any
transfer-based method designed for exemplar-free incremental learning, and we
apply it to two such methods. Evaluation is done with three large-scale
datasets. Results show that performance gains are obtained in all tested
configurations compared to existing methods.
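The plasticity/stability split described in the abstract can be pictured with a short PyTorch sketch: a frozen extractor scores old classes while a partially fine-tuned copy scores recent ones. This is a minimal illustration under assumed choices (a ResNet-18 backbone, `layer4` as the fine-tuned part, one linear head per group of classes); the class and variable names are hypothetical, not the authors' code.

```python
import copy
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridIncrementalNet(nn.Module):
    """Stability from a frozen extractor, plasticity from a partly tuned copy."""

    def __init__(self, n_old: int, n_new: int, feat_dim: int = 512):
        super().__init__()
        self.frozen = resnet18()
        self.frozen.fc = nn.Identity()            # expose the 512-d pooled features
        for p in self.frozen.parameters():        # frozen after the initial state
            p.requires_grad_(False)
        self.plastic = copy.deepcopy(self.frozen)
        for name, p in self.plastic.named_parameters():
            p.requires_grad_(name.startswith("layer4"))  # only the last block adapts
        self.old_head = nn.Linear(feat_dim, n_old)   # old classes: frozen features
        self.new_head = nn.Linear(feat_dim, n_new)   # recent classes: adapted features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        old_scores = self.old_head(self.frozen(x))
        new_scores = self.new_head(self.plastic(x))
        return torch.cat([old_scores, new_scores], dim=1)  # one score per seen class

model = HybridIncrementalNet(n_old=50, n_new=10)
logits = model(torch.randn(2, 3, 224, 224))       # -> shape (2, 60)
```

Only the heads and the last plastic block receive gradients, so the parameter budget stays close to a single backbone while old-class decision boundaries never drift.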
Related papers
- PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning [49.240408681098906]
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes.
Most existing CIL methods are exemplar-based, i.e., storing a part of old data for retraining.
We present a simple and novel dual bias reduction framework that employs self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space.
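A minimal sketch of the prototype-augmentation idea (protoAug), assuming stored per-class mean features and a Gaussian noise radius; the function and variable names are illustrative, not the paper's code.

```python
import torch

def proto_aug_batch(prototypes: torch.Tensor,  # (n_old, d) stored class means
                    radius: float,             # noise scale, e.g. mean intra-class std
                    n_per_class: int = 4):
    """Replay old classes as noisy copies of their stored prototypes."""
    n_old, _ = prototypes.shape
    protos = prototypes.repeat_interleave(n_per_class, dim=0)    # (n_old * k, d)
    pseudo_feats = protos + radius * torch.randn_like(protos)    # jitter around means
    labels = torch.arange(n_old).repeat_interleave(n_per_class)  # matching labels
    return pseudo_feats, labels

# The pseudo-features are mixed with real new-class features when training the
# classifier head, so old decision regions are rehearsed without stored images.
```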
arXiv Detail & Related papers (2024-07-19T05:03:16Z)
- CEAT: Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning [34.59310641291726]
In real-world applications, dynamic scenarios require models to learn new tasks continuously without forgetting old knowledge.
We propose a new architecture, named the Continual Expansion and Absorption Transformer (CEAT).
The model learns novel knowledge by extending expanded-fusion layers in parallel with the frozen previous parameters.
To improve the learning ability of the model, we design a novel prototype contrastive loss to reduce the overlap between old and new classes in the feature space.
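One plausible reading of such a prototype contrastive loss, sketched below with hypothetical names: each feature is pulled toward its own class prototype and pushed away from all others, including old-class prototypes, via a softmax over cosine similarities.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(feats, labels, prototypes, tau: float = 0.1):
    """feats: (B, d); labels: (B,) prototype indices; prototypes: (C, d)."""
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau       # cosine similarity to every prototype
    return F.cross_entropy(logits, labels)  # attract own prototype, repel the rest

feats = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 20, (8,))
protos = torch.randn(20, 128)               # would include frozen old prototypes
loss = prototype_contrastive_loss(feats, labels, protos)
loss.backward()
```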
arXiv Detail & Related papers (2024-03-11T12:40:12Z)
- SRIL: Selective Regularization for Class-Incremental Learning [5.810252620242912]
Class-incremental learning aims to create an integrated model that balances plasticity and stability to overcome catastrophic forgetting.
We propose a selective regularization method that accepts new knowledge while maintaining previous knowledge.
We validate the effectiveness of the proposed method through extensive experimental protocols using CIFAR-100, ImageNet-Subset, and ImageNet-Full.
arXiv Detail & Related papers (2023-05-09T05:04:35Z)
- On the Stability-Plasticity Dilemma of Class-Incremental Learning [50.863180812727244]
A primary goal of class-incremental learning is to strike a balance between stability and plasticity.
This paper aims to shed light on how effectively recent class-incremental learning algorithms address the stability-plasticity trade-off.
arXiv Detail & Related papers (2023-04-04T09:34:14Z)
- Online Hyperparameter Optimization for Class-Incremental Learning [99.70569355681174]
Class-incremental learning (CIL) aims to train a classification model while the number of classes increases phase-by-phase.
An inherent challenge of CIL is the stability-plasticity tradeoff: CIL models should stay stable to retain old knowledge while remaining plastic enough to absorb new knowledge.
We propose an online learning method that can adaptively optimize the tradeoff without knowing the setting a priori.
arXiv Detail & Related papers (2023-01-11T17:58:51Z)
- FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning [40.74872446895684]
A balance between stability and plasticity of the incremental process is needed in order to obtain good accuracy for past as well as new classes.
Existing exemplar-free class-incremental methods focus either on successive fine-tuning of the model, thus favoring plasticity, or on using a feature extractor fixed after the initial incremental state.
We introduce a method which combines a fixed feature extractor and a pseudo-features generator to improve the stability-plasticity balance.
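The pseudo-features generator can be sketched as a geometric translation: features of a new class are shifted so that their mean coincides with the stored prototype of a past class. This is a minimal sketch under that assumption, with illustrative names, not the paper's code.

```python
import torch

def translate_features(new_feats: torch.Tensor,   # (n, d) frozen-extractor features
                       new_proto: torch.Tensor,   # (d,) mean of the new class
                       old_proto: torch.Tensor):  # (d,) stored mean of a past class
    """Shift a new class's feature cloud onto a past class's prototype."""
    return new_feats + (old_proto - new_proto)

# A linear classifier over all classes can then be retrained on real features
# for new classes plus translated pseudo-features standing in for past classes.
```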
arXiv Detail & Related papers (2022-11-23T17:04:20Z)
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose FOSTER, a novel two-stage feature boosting and compression paradigm that empowers the model to learn new categories adaptively.
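The compression stage suggested by the title can be sketched as standard knowledge distillation: the expanded two-branch teacher is distilled into a single compact student so the parameter count does not grow with each increment. A hedged sketch under that assumption, not the paper's code:

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T: float = 2.0):
    """Soft-label KL between the expanded teacher and the compact student."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

student_logits = torch.randn(4, 60, requires_grad=True)
teacher_logits = torch.randn(4, 60)   # outputs of the frozen expanded model
loss = distill_loss(student_logits, teacher_logits)
loss.backward()
```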
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- Few-Shot Lifelong Learning [35.05196800623617]
Few-Shot Lifelong Learning enables deep learning models to perform lifelong/continual learning on few-shot data.
Our method selects very few parameters of the model to train for every new set of classes, instead of training the full model.
We experimentally show that our method significantly outperforms existing methods on the miniImageNet, CIFAR-100, and CUB-200 datasets.
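A minimal sketch of training only a small parameter subset per session: gradients of unselected weights are zeroed before each optimizer step. The magnitude-based selection criterion below is an assumption for illustration, not the paper's actual rule.

```python
import torch

def make_masks(model, keep_frac: float = 0.05):
    """Mark a small fraction of low-magnitude weights as trainable this session."""
    masks = {}
    for name, p in model.named_parameters():
        k = max(1, int(keep_frac * p.numel()))
        thresh = p.detach().abs().flatten().kthvalue(k).values
        # Assumed criterion: weights least used by earlier tasks get updated,
        # leaving high-magnitude (presumably important) weights intact.
        masks[name] = (p.detach().abs() <= thresh).float()
    return masks

def mask_gradients(model, masks):
    """Call after loss.backward() and before optimizer.step()."""
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
```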
arXiv Detail & Related papers (2021-03-01T13:26:57Z)
- Adaptive Aggregation Networks for Class-Incremental Learning [102.20140790771265]
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
An inherent problem in CIL is the stability-plasticity dilemma between the learning of old and new classes.
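The stable/plastic tension can be pictured as two parallel branches whose outputs are mixed by a learned weight; the sketch below is an illustrative reading of such an aggregation, not the paper's architecture in detail.

```python
import torch
import torch.nn as nn

class AggregatedBlock(nn.Module):
    """Mix a frozen stable branch with a trainable plastic branch."""

    def __init__(self, stable: nn.Module, plastic: nn.Module):
        super().__init__()
        self.stable, self.plastic = stable, plastic
        for p in self.stable.parameters():
            p.requires_grad_(False)                   # stability: no further updates
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learned mixing weight

    def forward(self, x):
        a = torch.sigmoid(self.alpha)                 # keep the weight in (0, 1)
        return a * self.stable(x) + (1 - a) * self.plastic(x)

block = AggregatedBlock(nn.Linear(16, 16), nn.Linear(16, 16))
y = block(torch.randn(2, 16))                         # starts as an even 50/50 mix
```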
arXiv Detail & Related papers (2020-10-10T18:24:24Z)