Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer
- URL: http://arxiv.org/abs/2208.03767v1
- Date: Sun, 7 Aug 2022 16:28:02 GMT
- Title: Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer
- Authors: Arjun Ashok, K J Joseph, Vineeth Balasubramanian
- Abstract summary: In class-incremental learning, the model is expected to learn new classes continually while maintaining knowledge of previous classes.
We propose two distillation-based objectives for class-incremental learning.
- Score: 9.356870107137093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In class-incremental learning, the model is expected to learn new
classes continually while maintaining knowledge of previous classes. The
challenge lies in preserving the model's ability to effectively represent
prior classes in the feature space, while adapting it to represent incoming
new classes. We propose two distillation-based objectives for class-incremental
learning that leverage the structure of the feature space to maintain accuracy
on previous classes, as well as to enable learning the new classes. In our
first objective, termed cross-space clustering (CSC), we use the feature-space
structure of the previous model to characterize optimization directions that
maximally preserve each class: directions that all instances of a specific
class should collectively optimize towards, and those that they should
collectively optimize away from. Apart from minimizing forgetting, this
indirectly encourages the model to cluster all instances of a class in the
current feature space, giving rise to a form of herd immunity in which all
samples of a class jointly prevent the model from forgetting the class. Our
second objective, termed controlled transfer (CT), tackles incremental learning
from the understudied perspective of inter-class transfer. CT explicitly
approximates, and conditions the current model on, the semantic similarities
between incrementally arriving classes and prior classes. This allows the model
to learn new classes in a way that maximizes positive forward transfer from
similar prior classes, increasing plasticity, and minimizes negative backward
transfer onto dissimilar prior classes, strengthening stability. We perform
extensive experiments on two benchmark datasets, adding our method (CSCCT) on
top of three prominent class-incremental learning methods, and observe
consistent performance improvements across a variety of experimental settings.
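The abstract states the two objectives only at a high level. As a rough
illustration, the PyTorch sketch below shows one plausible shape for such
distillation losses; the exact formulations, the similarity estimate
`class_sim`, and all function and tensor names here are assumptions made for
illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def csc_loss(feats_cur, labels, old_class_means):
    """Cross-space clustering, simplified sketch.

    Pulls the current model's feature for every sample toward the
    *previous* model's mean of its class, and away from the other
    class means, so all instances of a class optimize collectively
    in a shared direction.
      feats_cur:       (B, D) features from the current model
      labels:          (B,)   class ids indexing old_class_means
      old_class_means: (C, D) class means from the previous model
    """
    feats = F.normalize(feats_cur, dim=1)
    means = F.normalize(old_class_means, dim=1)
    sims = feats @ means.t()                       # (B, C) cosine similarities
    own = sims[torch.arange(len(labels)), labels]  # similarity to own class mean
    # Average similarity to every *other* class mean (to optimize away from).
    other = (sims.sum(dim=1) - own) / (means.size(0) - 1)
    return (1.0 - own + other).mean()


def ct_loss(logits_cur, logits_old, class_sim, new_labels, tau=2.0):
    """Controlled transfer, simplified sketch.

    For samples of new classes, distills the previous model's
    distribution over old classes into the current model, weighted by
    an estimated similarity between the sample's new class and each
    old class, so transfer from similar classes is kept and
    interference with dissimilar ones is damped.
      logits_cur, logits_old: (B, C_old) logits over old classes
      class_sim:  (N_new, C_old) new-to-old class similarity estimate
      new_labels: (B,) new-class ids re-indexed from 0
    """
    p_old = F.softmax(logits_old / tau, dim=1)
    log_p_cur = F.log_softmax(logits_cur / tau, dim=1)
    w = class_sim[new_labels]                      # (B, C_old) per-sample weights
    kl = (w * p_old * (p_old.log() - log_p_cur)).sum(dim=1)
    return (tau ** 2) * kl.mean()
```

In training, both terms would simply be added to the base method's loss on the
current task, which matches how the abstract describes CSCCT being applied on
top of existing class-incremental learning methods.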
Related papers
- Embedding Space Allocation with Angle-Norm Joint Classifiers for Few-Shot Class-Incremental Learning [8.321592316231786]
Few-shot class-incremental learning aims to continually learn new classes from only a few samples.
Current classes occupy the entire feature space, which is detrimental to learning new classes.
The small number of samples available in incremental rounds is insufficient for full training.
arXiv Detail & Related papers (2024-11-14T07:31:12Z)
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
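As a rough companion to this summary, here is a generic sketch of the
adapter-per-task idea, assuming a frozen pre-trained backbone; the class name
`TaskAdapters`, the bottleneck size, and the concatenation of subspaces are
illustrative guesses, not EASE's actual design.

```python
import torch
import torch.nn as nn


class TaskAdapters(nn.Module):
    """One lightweight adapter per task on top of a frozen backbone
    (a generic sketch of the expandable-subspace idea, not EASE itself)."""

    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.adapters = nn.ModuleList()
        self.feat_dim, self.hidden = feat_dim, hidden

    def add_task(self):
        # A small bottleneck adapter defines the new task's subspace.
        self.adapters.append(nn.Sequential(
            nn.Linear(self.feat_dim, self.hidden),
            nn.ReLU(),
            nn.Linear(self.hidden, self.feat_dim),
        ))

    def forward(self, frozen_feats):
        # Concatenate every task-specific view of the frozen features.
        return torch.cat([a(frozen_feats) for a in self.adapters], dim=1)
```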
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- Robust Feature Learning and Global Variance-Driven Classifier Alignment for Long-Tail Class Incremental Learning [20.267257778779992]
This paper introduces a two-stage framework designed to enhance long-tail class incremental learning.
We address the challenge posed by the under-representation of tail classes in long-tail class incremental learning.
The proposed framework can seamlessly integrate as a module with any class incremental learning method.
arXiv Detail & Related papers (2023-11-02T13:28:53Z)
- DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection [39.937724871284665]
Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data.
Existing approaches enhance few-shot generalization at the cost of base-class performance.
We propose a new training framework, DiGeo, to learn Geometry-aware features of inter-class separation and intra-class compactness.
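The entry names the two geometric properties without the loss itself; the
sketch below is a generic prototype-based guess at an inter-class-separation
and intra-class-compactness objective, not DiGeo's actual formulation, and
`separation_compactness_loss` and its margin are invented for illustration.

```python
import torch
import torch.nn.functional as F


def separation_compactness_loss(feats, labels, prototypes, margin=0.5):
    """Pull each feature toward its class prototype (compactness) and
    enforce a cosine margin against the hardest other prototype
    (separation).
      feats: (B, D); labels: (B,); prototypes: (C, D)
    """
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    sims = feats @ protos.t()                        # (B, C)
    own = sims[torch.arange(len(labels)), labels]
    compact = (1.0 - own).mean()
    mask = F.one_hot(labels, protos.size(0)).bool()
    hardest = sims.masked_fill(mask, float('-inf')).max(dim=1).values
    separate = F.relu(hardest - own + margin).mean()  # hinge on hardest negative
    return compact + separate
```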
arXiv Detail & Related papers (2023-03-16T22:37:09Z)
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT)
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning [141.35105358670316]
We study the difference between a naïvely-trained initial-phase model and the oracle model.
We propose Class-wise Decorrelation (CwD) that effectively regularizes representations of each class to scatter more uniformly.
Our CwD is simple to implement and easy to plug into existing methods.
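To make "scatter more uniformly" concrete, here is a minimal sketch of a
class-wise decorrelation penalty, assuming it suffices to suppress
off-diagonal feature correlations within each class; `cwd_penalty` is an
illustrative stand-in, and CwD's published formulation may differ.

```python
import torch


def cwd_penalty(feats, labels, num_classes):
    """For each class, penalize off-diagonal entries of the correlation
    matrix of that class's features, pushing the representation to
    scatter uniformly across dimensions.
      feats: (B, D) feature batch; labels: (B,) class ids
    """
    penalty, counted = feats.new_zeros(()), 0
    for c in range(num_classes):
        z = feats[labels == c]
        if z.size(0) < 2:  # need at least two samples for a correlation
            continue
        z = z - z.mean(dim=0, keepdim=True)
        z = z / (z.std(dim=0, keepdim=True) + 1e-5)
        corr = (z.t() @ z) / (z.size(0) - 1)        # (D, D) correlation matrix
        off_diag = corr - torch.diag(torch.diag(corr))
        penalty = penalty + off_diag.pow(2).mean()
        counted += 1
    return penalty / max(counted, 1)
```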
arXiv Detail & Related papers (2021-12-09T07:20:32Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model cannot clearly discern old-class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)