Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2004.10956v2
- Date: Fri, 24 Apr 2020 02:12:32 GMT
- Title: Few-Shot Class-Incremental Learning
- Authors: Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei,
Yihong Gong
- Abstract summary: We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
- Score: 68.75462849428196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to incrementally learn new classes is crucial to the development
of real-world artificial intelligence systems. In this paper, we focus on a
challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few
labelled samples, without forgetting the previously learned ones. To address
this problem, we represent the knowledge using a neural gas (NG) network, which
can learn and preserve the topology of the feature manifold formed by different
classes. On this basis, we propose the TOpology-Preserving knowledge
InCrementer (TOPIC) framework. TOPIC mitigates the forgetting of the old
classes by stabilizing NG's topology and improves the representation learning
for few-shot new classes by growing and adapting NG to new training samples.
Comprehensive experimental results demonstrate that our proposed method
significantly outperforms other state-of-the-art class-incremental learning
methods on CIFAR100, miniImageNet, and CUB200 datasets.
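For intuition, here is a minimal sketch of the classic neural-gas update that such a knowledge representation builds on (illustrative only, not the authors' TOPIC code; the node count, learning rate, and neighbourhood decay below are assumed values):
```python
# Minimal neural-gas sketch (illustrative; not the authors' TOPIC code).
import numpy as np

def fit_neural_gas(features, n_nodes=10, epochs=20, lr=0.1, neighborhood=2.0, seed=0):
    """Fit neural-gas reference vectors to a set of feature vectors.

    Per sample, every node is ranked by distance to the sample and pulled
    toward it with a strength that decays exponentially with rank, so the
    nodes come to trace the topology of the feature manifold.
    """
    rng = np.random.default_rng(seed)
    nodes = features[rng.choice(len(features), n_nodes, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(features):
            ranks = np.argsort(np.linalg.norm(nodes - x, axis=1)).argsort()
            nodes += lr * np.exp(-ranks / neighborhood)[:, None] * (x - nodes)
    return nodes
```
In TOPIC itself, nodes covering old classes would additionally be stabilized against drift and new nodes grown for incoming few-shot classes; this sketch covers only the basic topology-learning step.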
Related papers
- CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning [52.63674911541416]
Few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and forgetting.
Our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL.
We find that encouraging features to spread within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability.
arXiv Detail & Related papers (2024-10-08T02:23:16Z)
- Knowledge Adaptation Network for Few-Shot Class-Incremental Learning [23.90555521006653]
Few-shot class-incremental learning aims to incrementally recognize new classes using a few samples.
One effective way to address this challenge is to construct prototype-based evolving classifiers.
However, because the representations of new classes are weak and biased, we argue that such a strategy is suboptimal.
arXiv Detail & Related papers (2024-09-18T07:51:38Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
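A minimal sketch of distribution-based pseudo-feature replay in this spirit (the diagonal-Gaussian assumption, class statistics, and dimensions below are illustrative, not the paper's exact recipe):
```python
# Hedged sketch of distribution-based pseudo-feature replay (illustrative).
import numpy as np

class GaussianFeatureBank:
    """Stores a diagonal Gaussian per old class and samples pseudo-features."""

    def __init__(self):
        self.stats = {}  # class_id -> (mean, std)

    def update(self, class_id, feats):
        self.stats[class_id] = (feats.mean(axis=0), feats.std(axis=0) + 1e-6)

    def sample(self, class_id, n, rng):
        mean, std = self.stats[class_id]
        return rng.normal(mean, std, size=(n, mean.shape[0]))

rng = np.random.default_rng(0)
bank = GaussianFeatureBank()
bank.update(class_id=0, feats=rng.normal(size=(50, 16)))  # stats from an old session
pseudo = bank.sample(class_id=0, n=8, rng=rng)            # replayed in a new session
# `pseudo` would be mixed with current-session features to optimize the prompt.
```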
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes as base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
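One plausible reading of such training-free calibration, sketched below under assumptions (cosine similarity, softmax weighting, and a mixing coefficient alpha are choices made here for illustration): a few-shot prototype is pulled toward similar, better-estimated base-class prototypes.
```python
# Illustrative prototype-calibration sketch; not the paper's exact recipe.
import numpy as np

def calibrate_prototype(new_proto, base_protos, alpha=0.5, temperature=0.1):
    """Fuse a few-shot prototype with base prototypes weighted by similarity."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(base_protos) @ unit(new_proto)  # cosine similarity to each base class
    w = np.exp(sims / temperature)
    w /= w.sum()                                # softmax weights over base prototypes
    return alpha * new_proto + (1 - alpha) * (w @ base_protos)
```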
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- On the Role of Neural Collapse in Transfer Learning [29.972063833424215]
Recent results show that representations learned by a single classifier over many classes are competitive on few-shot learning problems.
We show that neural collapse generalizes to new samples from the training classes, and -- more importantly -- to new classes as well.
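A quick, simplified diagnostic for a neural-collapse-like signal (a rough sketch, not the paper's formal analysis): the ratio of within-class to between-class feature scatter, which approaches zero as samples collapse onto their class means.
```python
# Simplified neural-collapse diagnostic (illustrative sketch only).
import numpy as np

def collapse_ratio(feats, labels):
    """Within-class scatter over between-class scatter; near zero => collapse."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    within = np.mean([((feats[labels == c] - m) ** 2).sum(axis=1).mean()
                      for c, m in zip(classes, means)])
    between = ((means - means.mean(axis=0)) ** 2).sum(axis=1).mean()
    return within / between
```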
arXiv Detail & Related papers (2021-12-30T16:36:26Z)
- Essentials for Class Incremental Learning [43.306374557919646]
Our class-incremental learning results on CIFAR-100 and ImageNet improve over the state of the art by a large margin while keeping the approach simple.
arXiv Detail & Related papers (2021-02-18T18:01:06Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
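A hedged sketch of the embedding-translation idea (the least-squares map and the paired features below are assumptions for illustration; ZSTCI itself estimates the translation without exemplars of old classes):
```python
# Illustrative embedding translation between incremental models (assumption:
# current-session data can be embedded by both the old and the new encoder).
import numpy as np

def fit_translation(old_feats, new_feats):
    """Least-squares W such that old_feats @ W approximates new_feats."""
    W, *_ = np.linalg.lstsq(old_feats, new_feats, rcond=None)
    return W

rng = np.random.default_rng(0)
old = rng.normal(size=(100, 32))        # embeddings from the old encoder
new = old @ rng.normal(size=(32, 32))   # embeddings from the updated encoder
W = fit_translation(old, new)
# Old-class prototypes stored in the old space can now be mapped via protos @ W.
```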
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
During Class-IL training, the model has no knowledge of subsequent tasks, so it extracts only the features needed for the tasks learned so far, which carry insufficient information for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
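As an illustration, rotation prediction is a common pretext task for this purpose (an assumption here; the paper may use a different self-supervised objective):
```python
# Rotation-based self-supervision sketch (illustrative pretext task). Each
# image yields four rotated copies with pseudo-labels 0-3, trained jointly
# with classification so features are not tailored only to seen classes.
import numpy as np

def rotation_pretext_batch(images):
    """images: (N, H, W, C) with H == W -> rotated (4N, H, W, C), labels (4N,)."""
    rotations = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    x = np.concatenate(rotations, axis=0)
    y = np.repeat(np.arange(4), len(images))  # pseudo-label = rotation index
    return x, y
```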
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
- Cognitively-Inspired Model for Incremental Learning Using a Few Examples [11.193504036335503]
Incremental learning attempts to develop a classifier which learns continuously from a stream of data segregated into different classes.
Deep learning approaches suffer from catastrophic forgetting when learning classes incrementally, while most incremental learning approaches require a large amount of training data per class.
We propose a novel approach, inspired by the concept-learning model of the hippocampus and the neocortex, that represents each image class as a set of centroids.
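An illustrative centroid-based representation (the distance threshold and running-mean update are assumptions for demonstration, not the paper's exact hippocampus/neocortex model):
```python
# Greedy centroid allocation sketch: grow a new centroid when a sample is far
# from all existing ones, otherwise fold it into the nearest running mean.
import numpy as np

def build_centroids(feats, distance_threshold=1.0):
    centroids, counts = [feats[0].copy()], [1]
    for x in feats[1:]:
        dists = [np.linalg.norm(x - c) for c in centroids]
        i = int(np.argmin(dists))
        if dists[i] > distance_threshold:
            centroids.append(x.copy())   # far from all centroids: allocate new
            counts.append(1)
        else:
            counts[i] += 1
            centroids[i] += (x - centroids[i]) / counts[i]  # running mean
    return np.stack(centroids)
```
Classification would then assign a test feature to the class owning its nearest centroid.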
arXiv Detail & Related papers (2020-02-27T19:52:42Z)