Class-incremental Learning with Pre-allocated Fixed Classifiers
- URL: http://arxiv.org/abs/2010.08657v3
- Date: Sat, 5 Aug 2023 11:56:03 GMT
- Title: Class-incremental Learning with Pre-allocated Fixed Classifiers
- Authors: Federico Pernici, Matteo Bruni, Claudio Baecchi, Francesco Turchini,
Alberto Del Bimbo
- Abstract summary: In class-incremental learning, a learning agent faces a stream of data with the goal of learning new classes while not forgetting previous ones.
We propose a novel fixed classifier in which a number of pre-allocated output nodes are subject to the classification loss right from the beginning of the learning phase.
- Score: 20.74548175713497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In class-incremental learning, a learning agent faces a stream of data with
the goal of learning new classes while not forgetting previous ones. Neural
networks are known to suffer under this setting, as they forget previously
acquired knowledge. To address this problem, effective methods exploit past
data stored in an episodic memory while expanding the final classifier nodes to
accommodate the new classes.
In this work, we substitute the expanding classifier with a novel fixed
classifier in which a number of pre-allocated output nodes are subject to the
classification loss right from the beginning of the learning phase. In
contrast to the standard expanding classifier, this allows: (a) the output
nodes of future unseen classes to see negative samples from the beginning of
learning, together with the positive samples that arrive incrementally; (b)
features to be learned whose geometric configuration does not change as novel
classes are incorporated into the model.
Experiments with public datasets show that the proposed approach is as
effective as the expanding classifier while exhibiting novel, intriguing
properties of the internal feature representation that are otherwise absent.
Our ablation study on pre-allocating a large number of classes
further validates the approach.
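A minimal sketch of this idea follows (illustrative, not the authors' implementation: the paper fixes the classifier weights to pre-defined directions, whereas here a random initialization is simply frozen; K_MAX, FEAT_DIM, and the toy backbone are assumptions). All K_MAX output nodes exist and receive the classification loss from the first task, so nodes reserved for future classes see negative gradients before their positive samples ever arrive:
```python
import torch
import torch.nn as nn

K_MAX = 100     # pre-allocated number of classes (assumption)
FEAT_DIM = 512  # feature dimension of the backbone (assumption)

class PreAllocatedFixedClassifier(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, k_max=K_MAX):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in feature extractor
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU()
        )
        # Fixed, pre-allocated classifier: never expanded, never trained,
        # so only the features adapt to its geometric configuration.
        self.classifier = nn.Linear(feat_dim, k_max, bias=False)
        self.classifier.weight.requires_grad_(False)

    def forward(self, x):
        return self.classifier(self.backbone(x))

model = PreAllocatedFixedClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01
)

# One training step: the loss is computed over all K_MAX logits, including
# the nodes of classes that have not been observed yet.
x = torch.randn(8, 3, 32, 32)   # dummy batch
y = torch.randint(0, 10, (8,))  # only classes 0-9 seen so far
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```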
Related papers
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while allowing the model to learn new patterns.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches that incrementally learn the classifier while freezing the feature extractor after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
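A minimal sketch of this prototype recipe (illustrative; the plain Euclidean metric shown here is the baseline the summary describes, while FeCAM itself goes further and models heterogeneous class covariances; function names are assumptions):
```python
import torch

@torch.no_grad()
def class_prototypes(features, labels, num_classes):
    """Mean feature per class, computed once with the frozen extractor.

    Assumes every class index in [0, num_classes) has at least one sample.
    """
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        protos[c] = features[labels == c].mean(dim=0)
    return protos

@torch.no_grad()
def classify(features, protos):
    """Predict the class whose prototype is nearest in Euclidean distance."""
    dists = torch.cdist(features, protos)  # (N, num_classes)
    return dists.argmin(dim=1)
```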
arXiv Detail & Related papers (2023-09-25T11:54:33Z)
- Class Incremental Learning with Self-Supervised Pre-Training and Prototype Learning [21.901331484173944]
We analyze the causes of catastrophic forgetting in class incremental learning.
We propose a two-stage learning framework with a fixed encoder and an incrementally updated prototype classifier.
Our method does not rely on preserved samples of old classes and is thus a non-exemplar-based CIL method.
arXiv Detail & Related papers (2023-08-04T14:20:42Z)
- AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning [53.32576252950481]
Continual learning aims to enable a model to incrementally learn knowledge from sequentially arrived data.
In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks.
arXiv Detail & Related papers (2023-05-19T07:39:17Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In doing so, the learner tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class Incremental Learning [120.53458753007851]
Few-shot class-incremental learning (FSCIL) has been a challenging problem as only a few training samples are accessible for each novel class in the new sessions.
We deal with this misalignment dilemma in FSCIL inspired by the recently discovered phenomenon named neural collapse.
We propose a neural collapse inspired framework for FSCIL. Experiments on the miniImageNet, CUB-200, and CIFAR-100 datasets demonstrate that our proposed framework outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T18:39:40Z)
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- Evidential Deep Learning for Class-Incremental Semantic Segmentation [15.563703446465823]
Class-Incremental Learning is a challenging problem in machine learning that aims to extend previously trained neural networks with new classes.
In this paper, we address the problem of how to model unlabeled classes while avoiding spurious feature clustering of future uncorrelated classes.
Our method factorizes the problem into a separate foreground class probability, calculated by the expected value of the Dirichlet distribution, and an unknown class (background) probability corresponding to the uncertainty of the estimate.
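A sketch of this factorization under standard evidential deep-learning assumptions (illustrative; the paper's exact parameterization may differ, and all names here are ours): the network outputs non-negative evidence that parameterizes a Dirichlet distribution over the K foreground classes, the expected value of that Dirichlet gives the foreground class probabilities, and the vacuity K / S serves as the unknown-class probability.
```python
import torch
import torch.nn.functional as F

def evidential_factorization(evidence):
    """evidence: (N, K) non-negative outputs, e.g. softplus of logits."""
    alpha = evidence + 1.0                       # Dirichlet concentrations
    strength = alpha.sum(dim=1, keepdim=True)    # S = sum_k alpha_k
    fg_prob = alpha / strength                   # expected value of the Dirichlet
    k = evidence.size(1)
    unknown_prob = k / strength                  # vacuity: uncertainty of the estimate
    return fg_prob, unknown_prob

logits = torch.randn(4, 5)                       # dummy per-pixel logits
fg, unk = evidential_factorization(F.softplus(logits))
```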
arXiv Detail & Related papers (2022-12-06T10:13:30Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model expansion.
After the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)