Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning
- URL: http://arxiv.org/abs/2403.12030v1
- Date: Mon, 18 Mar 2024 17:58:13 GMT
- Title: Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning
- Authors: Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan
- Abstract summary: We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
- Score: 65.57123249246358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often results in the overwriting of old ones. Excessive modification of the network causes forgetting, while minimal adjustments lead to an inadequate fit for new classes. As a result, it is desirable to find a way to update the model efficiently without harming former knowledge. In this paper, we propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model updating without conflict, we train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces. These adapters span a high-dimensional feature space, enabling joint decision-making across multiple subspaces. As data evolves, the expanding subspaces render the old class classifiers incompatible with new-stage spaces. Correspondingly, we design a semantic-guided prototype complement strategy that synthesizes old classes' new features without using any old class instance. Extensive experiments on seven benchmark datasets verify EASE's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/CVPR24-Ease
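To make the abstract's two mechanisms concrete, here is a minimal PyTorch sketch of how an expandable adapter ensemble and a semantic-guided prototype complement could look. It is an illustration under our own assumptions (a frozen backbone producing d-dimensional features, bottleneck adapters, nearest-prototype classification); all names are ours, and the authoritative implementation is the linked repository.

```python
# Minimal sketch of the EASE ideas above, NOT the authors' code.
# Assumptions: a frozen pre-trained backbone, one lightweight adapter
# per task, and a prototype-based (nearest-class-mean) classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Lightweight bottleneck adapter spanning one task-specific subspace."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual bottleneck: keep the frozen feature, add a task-specific shift.
        return x + self.up(F.relu(self.down(x)))


class SubspaceEnsemble(nn.Module):
    """Frozen PTM plus one adapter per task; features are concatenated."""

    def __init__(self, backbone: nn.Module, dim: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # the pre-trained model is never updated
        self.adapters = nn.ModuleList()
        self.dim = dim

    def new_task(self) -> Adapter:
        # Expand the ensemble: only this adapter is trained on the new task,
        # so earlier subspaces are never overwritten.
        adapter = Adapter(self.dim)
        self.adapters.append(adapter)
        return adapter

    def extract(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        # Joint decision-making happens in the concatenated high-dim space.
        return torch.cat([a(h) for a in self.adapters], dim=-1)


def complement_prototypes(old_in_old, new_in_old, new_in_new, tau=1.0):
    """Synthesize old-class prototypes in a new subspace without old data.

    old_in_old: (C_old, d) old-class prototypes in an old subspace
    new_in_old: (C_new, d) new-class prototypes in that same old subspace
    new_in_new: (C_new, d) new-class prototypes in the new subspace
    """
    # Semantic similarity is measured in the subspace where both class
    # groups have real prototypes ...
    sim = F.cosine_similarity(
        old_in_old.unsqueeze(1), new_in_old.unsqueeze(0), dim=-1
    )  # (C_old, C_new)
    w = F.softmax(sim / tau, dim=1)
    # ... and that similarity structure is transferred to the new subspace.
    return w @ new_in_new  # (C_old, d)
```

At inference, each class keeps one prototype segment per subspace (measured where data was available, synthesized via complement_prototypes otherwise), and a query is assigned to the class whose concatenated prototype is nearest, e.g. by cosine similarity.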
Related papers
- Taxonomy-Aware Continual Semantic Segmentation in Hyperbolic Spaces for Open-World Perception [8.625083692154414]
Class-incremental semantic segmentation aims to update models with emerging new classes while preventing catastrophic forgetting of previously learned ones.
We propose Poincaré-Regularized Incremental-Class (TOPICS), a method that learns feature embeddings in hyperbolic space following explicit taxonomy-tree structures.
We also establish eight realistic incremental learning protocols for autonomous driving scenarios, where novel classes can originate from known classes or the background.
arXiv Detail & Related papers (2024-07-25T15:49:26Z)
- Organizing Background to Explore Latent Classes for Incremental Few-shot Semantic Segmentation [7.570798966278471]
Incremental Few-Shot Semantic Segmentation (iFSS) aims to extend pre-trained segmentation models to new classes via a few annotated images.
We propose a network called OINet, i.e., the background embedding space Organization and prototype Inherit Network.
arXiv Detail & Related papers (2024-05-29T23:22:12Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes (see the sketch after this list).
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Incremental Few-Shot Semantic Segmentation via Embedding Adaptive-Update and Hyper-class Representation [30.558312809285905]
Experiments on PASCAL-5i and COCO datasets show that EHNet achieves new state-of-the-art performance with remarkable advantages.
arXiv Detail & Related papers (2022-07-26T15:20:07Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Forward Compatible Few-Shot Class-Incremental Learning [71.2459746681805]
A machine learning model should recognize new classes without forgetting old ones.
Current methods handle incremental learning retrospectively.
We propose ForwArd Compatible Training (FACT) for FSCIL.
arXiv Detail & Related papers (2022-03-14T09:36:35Z)
- Few-Shot Object Detection via Association and DIscrimination [83.8472428718097]
Few-shot object detection via Association and DIscrimination (FADI) builds up a discriminative feature space for each novel class with two integral steps.
Experiments on Pascal VOC and MS-COCO datasets demonstrate that FADI achieves new SOTA performance, significantly improving the baseline in any shot/split by +18.7.
arXiv Detail & Related papers (2021-11-23T05:04:06Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model from streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model update.
After the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
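Since the TEEN entry above describes its calibration only in passing, here is a minimal sketch of the general idea as we read it: pulling a biased few-shot prototype toward semantically similar, well-estimated base prototypes. The fusion weight alpha and temperature tau are illustrative hyper-parameters, not values from the paper.

```python
# Minimal sketch of training-free prototype calibration in the spirit of
# TEEN; function name and hyper-parameters are illustrative assumptions.
import torch
import torch.nn.functional as F


def calibrate_prototype(p_new, base_protos, alpha=0.5, tau=16.0):
    """Calibrate a few-shot new-class prototype with base-class knowledge.

    p_new:       (d,)   prototype estimated from only a few shots (biased)
    base_protos: (B, d) well-estimated base-class prototypes
    """
    sim = F.cosine_similarity(p_new.unsqueeze(0), base_protos, dim=-1)  # (B,)
    w = F.softmax(tau * sim, dim=0)  # weight semantically similar base classes
    delta = w @ base_protos          # similarity-weighted base knowledge
    return alpha * p_new + (1.0 - alpha) * delta  # fused, no training needed
```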
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.