Non-exemplar Class-incremental Learning by Random Auxiliary Classes Augmentation and Mixed Features
- URL: http://arxiv.org/abs/2304.07707v2
- Date: Fri, 25 Aug 2023 15:17:41 GMT
- Title: Non-exemplar Class-incremental Learning by Random Auxiliary Classes Augmentation and Mixed Features
- Authors: Ke Song, Quan Xia, Guoqiang Liang, Zhaojie Chen, Yanning Zhang
- Abstract summary: Non-exemplar class-incremental learning refers to classifying new and old classes without storing samples of old classes.
We propose an effective non-exemplar method called RAMF consisting of Random Auxiliary classes augmentation and Mixed Features.
- Score: 37.51376572211081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-exemplar class-incremental learning refers to classifying new and old
classes without storing samples of old classes. Since only new class samples
are available for optimization, catastrophic forgetting of old knowledge often
occurs. To alleviate this problem, many methods have been proposed, such as
model distillation and class augmentation. In this paper, we propose an
effective non-exemplar method called RAMF, consisting of Random Auxiliary
classes augmentation and Mixed Features. On the one hand, we design a novel
random auxiliary classes augmentation method, in which one of three
augmentations is randomly selected and applied to the input to generate
augmented samples and extra class labels. By extending the data and label
space, it allows the model to learn more diverse representations, which
prevents the model from being biased towards task-specific features; when
learning new tasks, this reduces changes to the feature space and improves
model generalization. On the other hand, we employ mixed features to replace
the new features, since optimizing the model with new features alone would
disturb the representations previously embedded in the feature space. Instead,
by mixing new and old features, old knowledge can be retained without
increasing the computational complexity. Extensive experiments on three
benchmarks demonstrate the superiority of our approach, which outperforms
state-of-the-art non-exemplar methods and is comparable to high-performance
replay-based methods.
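To make the two components concrete, here is a minimal PyTorch sketch. The abstract does not name the three augmentations, so the rotation trio below (90°/180°/270°, each mapped to its own auxiliary label per class, as in rotation-based self-supervision) is an assumption, as are the 0.5 mixing coefficient and the use of a frozen copy of the previous backbone as the source of old features; this illustrates the idea, not the authors' exact implementation.

```python
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # base label space; auxiliary labels expand it to 4x

def random_auxiliary_augment(x, y):
    """Randomly select one of three label-expanding augmentations (assumed
    here to be 90/180/270-degree rotations) and assign the augmented
    samples extra class labels, enlarging both data and label space."""
    k = random.randint(1, 3)
    x_aug = torch.rot90(x, k, dims=(2, 3))   # rotate an NCHW image batch
    y_aug = y + k * NUM_CLASSES              # auxiliary class labels
    return torch.cat([x, x_aug]), torch.cat([y, y_aug])

def mixed_features(feat_new, feat_old, lam=0.5):
    """Blend current features with features from the frozen previous model
    so old knowledge keeps shaping the classifier (lam is assumed)."""
    return lam * feat_new + (1.0 - lam) * feat_old

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
old_backbone = copy.deepcopy(backbone).eval()   # frozen snapshot of task t-1
classifier = nn.Linear(16, 4 * NUM_CLASSES)     # original + auxiliary classes

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, NUM_CLASSES, (8,))
x, y = random_auxiliary_augment(x, y)
with torch.no_grad():
    feat_old = old_backbone(x)
feat = mixed_features(backbone(x), feat_old)
loss = F.cross_entropy(classifier(feat), y)
loss.backward()
```

At inference only the first NUM_CLASSES logits would be used; the auxiliary classes exist purely to diversify the learned representation.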
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-03T07:19:11Z)
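The entry above does not spell out how the retrospective features are produced. A generic stand-in used across NECIL work is to sample surrogate old-class features from stored per-class statistics; the Gaussian sampling below is that stand-in, offered purely for illustration, not this paper's actual synthesis procedure.

```python
import torch

def synthesize_old_features(stats, n_per_class):
    """Sample surrogate features for each old class from stored per-class
    (mean, std) statistics; a generic placeholder for the paper's
    retrospective synthesis."""
    feats, labels = [], []
    for cls, (mu, sigma) in stats.items():
        feats.append(mu + sigma * torch.randn(n_per_class, mu.numel()))
        labels.append(torch.full((n_per_class,), cls, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

# toy per-class statistics for two old classes with 64-dim features
stats = {0: (torch.zeros(64), torch.ones(64)),
         1: (torch.ones(64), 0.5 * torch.ones(64))}
feats, labels = synthesize_old_features(stats, n_per_class=16)  # (32, 64), (32,)
```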
- PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning [49.240408681098906]
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes.
Most existing CIL methods are exemplar-based, i.e., they store part of the old data for retraining.
We present a simple and novel dual bias reduction framework that employs self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space.
arXiv Detail & Related papers (2024-07-19T05:03:16Z)
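In the PASS line of work, protoAug keeps one mean feature per old class and perturbs it with Gaussian noise each step, so old classes continue to receive gradient signal without stored exemplars. The sketch below shows that mechanism under an assumed noise scale and omits the SST branch entirely.

```python
import torch
import torch.nn.functional as F

def proto_aug_logits(classifier, prototypes, proto_labels, n_samples=16, radius=0.1):
    """Perturb stored old-class mean prototypes with Gaussian noise and
    return classifier logits plus labels for the noisy copies, giving old
    classes gradient signal without exemplars (radius is assumed)."""
    idx = torch.randint(0, prototypes.size(0), (n_samples,))
    noisy = prototypes[idx] + radius * torch.randn(n_samples, prototypes.size(1))
    return classifier(noisy), proto_labels[idx]

prototypes = torch.randn(5, 64)          # one stored mean feature per old class
proto_labels = torch.arange(5)           # the old classes' ids
classifier = torch.nn.Linear(64, 10)     # e.g. 5 old + 5 new classes
logits, y = proto_aug_logits(classifier, prototypes, proto_labels)
loss_old = F.cross_entropy(logits, y)    # added to the current-task loss
```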
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- CEAT: Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning [34.59310641291726]
In real-world applications, dynamic scenarios require the models to possess the capability to learn new tasks continuously without forgetting the old knowledge.
We propose a new architecture, named continual expansion and absorption transformer (CEAT).
The model learns novel knowledge by extending expanded-fusion layers in parallel with the frozen previous parameters.
To improve the learning ability of the model, we designed a novel prototype contrastive loss to reduce the overlap between old and new classes in the feature space.
arXiv Detail & Related papers (2024-03-11T12:40:12Z)
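A prototype contrastive loss of the kind CEAT describes can be sketched as cross-entropy over feature-prototype similarities: each feature is attracted to its own class prototype and repelled from the others, shrinking the overlap between old and new classes. The cosine similarity and temperature below are assumptions; CEAT's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(feats, labels, prototypes, tau=0.1):
    """Cross-entropy over cosine similarities between features and all class
    prototypes: attracts each feature to its own prototype and repels it
    from the rest (generic form; tau is an assumed temperature)."""
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau        # (batch, num_classes)
    return F.cross_entropy(logits, labels)

feats = torch.randn(8, 64)                   # new-task features
labels = torch.randint(0, 10, (8,))
prototypes = torch.randn(10, 64)             # old + new class prototypes
loss = prototype_contrastive_loss(feats, labels, prototypes)
```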
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while allowing the model to learn new patterns.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Cross-Class Feature Augmentation for Class Incremental Learning [45.91253737682168]
We propose a novel class incremental learning approach by incorporating a feature augmentation technique motivated by adversarial attacks.
The proposed approach offers a unique perspective on utilizing previous knowledge in class incremental learning, since it augments features of arbitrary target classes.
Our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios.
arXiv Detail & Related papers (2023-04-04T15:48:09Z)
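The adversarial flavor of that augmentation can be sketched as an FGSM-style step in feature space: perturb a feature so a frozen classifier from the previous task pulls it toward an arbitrary target class, then treat the result as an augmented feature of that class. The single-step attack and the step size eps are assumptions; the paper's procedure is more elaborate.

```python
import torch
import torch.nn.functional as F

def cross_class_feature_attack(feat, target, old_classifier, eps=0.05):
    """One FGSM-style step in feature space: move the feature so the frozen
    old classifier assigns it to the chosen target class (eps and the
    single-step attack are assumptions)."""
    feat = feat.detach().requires_grad_(True)
    loss = F.cross_entropy(old_classifier(feat), target)
    loss.backward()
    return (feat - eps * feat.grad.sign()).detach()  # descend the target loss

old_classifier = torch.nn.Linear(64, 10)
for p in old_classifier.parameters():
    p.requires_grad_(False)                  # classifier frozen from task t-1
feat = torch.randn(4, 64)
target = torch.randint(0, 10, (4,))          # arbitrary target classes
aug_feat = cross_class_feature_attack(feat, target, old_classifier)
```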
- Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning [138.35405462309456]
Non-exemplar class-incremental learning aims to recognize both old and new classes when old class samples cannot be stored.
Our scheme consists of a structure reorganization strategy that fuses main-branch expansion and side-branch updating to maintain the old features.
A prototype selection mechanism is proposed to enhance the discrimination between the old and new classes by selectively incorporating new samples into the distillation process.
arXiv Detail & Related papers (2022-03-12T06:42:20Z)
- Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning [81.10531943939365]
Few-shot class-incremental learning aims to recognize new classes from only a few samples without forgetting the old classes.
We propose a novel incremental prototype learning scheme that adapts the feature representation to various generated incremental episodes.
Experiments on three benchmark datasets demonstrate above-par incremental performance, outperforming state-of-the-art methods by margins of 13%, 17% and 11%, respectively.
arXiv Detail & Related papers (2021-07-19T14:31:33Z)