Adaptive Aggregation Networks for Class-Incremental Learning
- URL: http://arxiv.org/abs/2010.05063v3
- Date: Mon, 29 Mar 2021 22:09:07 GMT
- Title: Adaptive Aggregation Networks for Class-Incremental Learning
- Authors: Yaoyao Liu, Bernt Schiele, Qianru Sun
- Abstract summary: Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
An inherent problem in CIL is the stability-plasticity dilemma between the learning of old and new classes.
- Score: 102.20140790771265
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Class-Incremental Learning (CIL) aims to learn a classification model with
the number of classes increasing phase-by-phase. An inherent problem in CIL is
the stability-plasticity dilemma between the learning of old and new classes,
i.e., high-plasticity models easily forget old classes, while high-stability
models struggle to learn new classes. We alleviate this issue by proposing a
novel network architecture called Adaptive Aggregation Networks (AANets), in
which we explicitly build two types of residual blocks at each residual level
(taking ResNet as the baseline architecture): a stable block and a plastic
block. We aggregate the output feature maps from these two blocks and then feed
the results to the next-level blocks. We adapt the aggregation weights in order
to balance these two types of blocks, i.e., to balance stability and
plasticity, dynamically. We conduct extensive experiments on three CIL
benchmarks: CIFAR-100, ImageNet-Subset, and ImageNet, and show that many
existing CIL methods can be straightforwardly incorporated into the
architecture of AANets to boost their performance.
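To make the aggregation mechanism concrete, here is a minimal PyTorch sketch of a single residual level, using torchvision's BasicBlock as the unit block. The class name AANetLevel, the fully frozen stable block, and the softmax-normalized weights trained by plain backpropagation are simplifying assumptions for illustration; in the paper, the stable block remains partially learnable and the aggregation weights are adapted in a separate optimization step rather than trained jointly.

```python
# Minimal sketch of one AANets residual level (assumptions noted above).
import torch
import torch.nn as nn
from torchvision.models.resnet import BasicBlock


class AANetLevel(nn.Module):
    """One residual level: a stable and a plastic block whose output
    feature maps are aggregated with adaptive weights."""

    def __init__(self, planes: int):
        super().__init__()
        # Stable block: frozen here to preserve old-class knowledge
        # (a simplification; the paper keeps it partially learnable).
        self.stable = BasicBlock(planes, planes)
        for p in self.stable.parameters():
            p.requires_grad = False
        # Plastic block: trained normally to fit new classes.
        self.plastic = BasicBlock(planes, planes)
        # Learnable aggregation weights, one per block type.
        self.alpha = nn.Parameter(torch.zeros(2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.alpha, dim=0)  # balance stability vs. plasticity
        return w[0] * self.stable(x) + w[1] * self.plastic(x)


# The aggregated feature maps are fed to the next-level blocks.
level = AANetLevel(planes=64)
out = level(torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```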
Related papers
- Knowledge Adaptation Network for Few-Shot Class-Incremental Learning [23.90555521006653]
Few-shot class-incremental learning aims to incrementally recognize new classes using a few samples.
One effective approach to this challenge is to construct prototypical evolution classifiers.
Because representations for new classes are weak and biased, we argue such a strategy is suboptimal.
arXiv Detail & Related papers (2024-09-18T07:51:38Z)
- Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning [113.89327264634984]
Few-shot class-incremental learning (FSCIL) confronts the challenge of integrating new classes into a model with minimal training samples.
Traditional methods widely adopt static adaptation relying on a fixed parameter space to learn from data that arrive sequentially.
We propose a dual selective SSM projector that dynamically adjusts the projection parameters based on the intermediate features for dynamic adaptation.
arXiv Detail & Related papers (2024-07-08T17:09:39Z)
- Dynamic Feature Learning and Matching for Class-Incremental Learning [20.432575325147894]
Class-incremental learning (CIL) has emerged as a means to learn new classes without catastrophic forgetting of previous classes.
We propose the Dynamic Feature Learning and Matching (DFLM) model in this paper.
Our proposed model achieves significant performance improvements over existing methods.
arXiv Detail & Related papers (2024-05-14T12:17:19Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes new features for old classes without using any old-class instance; a minimal adapter-per-task sketch follows this entry.
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
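As referenced in the entry above, here is a minimal sketch of the adapter-per-task idea over a frozen pre-trained model. The bottleneck Adapter module, the new_task helper, and the concatenation of subspace features are hypothetical illustrations under those assumptions, not EASE's released code.

```python
# Minimal sketch of expandable task-specific adapters over a frozen
# pre-trained backbone (illustrative assumptions noted above).
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck adapter spanning one task-specific subspace."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))  # residual adapter


class ExpandableModel(nn.Module):
    def __init__(self, backbone: nn.Module, dim: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # the pre-trained model stays frozen
        self.dim = dim
        self.adapters = nn.ModuleList()

    def new_task(self) -> None:
        self.adapters.append(Adapter(self.dim))  # one new subspace per task

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x)
        # Concatenate all task subspaces (assumes new_task() was called).
        return torch.cat([a(h) for a in self.adapters], dim=-1)
```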
- CEAT: Continual Expansion and Absorption Transformer for Non-Exemplar Class-Incremental Learning [34.59310641291726]
In real-world applications, dynamic scenarios require the models to possess the capability to learn new tasks continuously without forgetting the old knowledge.
We propose a new architecture, named Continual Expansion and Absorption Transformer (CEAT).
The model learns novel knowledge by extending expanded-fusion layers in parallel with the frozen previous parameters.
To improve the learning ability of the model, we design a novel prototype contrastive loss to reduce the overlap between old and new classes in the feature space; a generic sketch of such a loss follows this entry.
arXiv Detail & Related papers (2024-03-11T12:40:12Z)
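As referenced in the entry above, here is a generic sketch of a prototype contrastive loss that pulls features toward their own class prototype and pushes them away from all other (old and new) prototypes. The InfoNCE-style cross-entropy form and the temperature tau are assumptions for illustration, not CEAT's exact formulation.

```python
# Generic prototype contrastive loss sketch (assumptions noted above).
import torch
import torch.nn.functional as F


def prototype_contrastive_loss(
    feats: torch.Tensor,       # (B, D) embedded samples
    labels: torch.Tensor,      # (B,) class indices
    prototypes: torch.Tensor,  # (C, D) one prototype per class, old and new
    tau: float = 0.1,          # temperature (assumed value)
) -> torch.Tensor:
    feats = F.normalize(feats, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = feats @ prototypes.t() / tau  # (B, C) cosine similarities
    # Cross-entropy against the own-class prototype is InfoNCE over
    # prototypes: it pulls features to their prototype and pushes them
    # away from all others, shrinking old/new class overlap.
    return F.cross_entropy(logits, labels)
```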
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework that is sensitive to data labels.
When updated on new-class data, they suffer from catastrophic forgetting: the model can no longer clearly discern old-class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.