Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for
Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2304.00426v1
- Date: Sun, 2 Apr 2023 01:51:24 GMT
- Title: Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for
Few-Shot Class-Incremental Learning
- Authors: Zeyin Song, Yifan Zhao, Yujun Shi, Peixi Peng, Li Yuan, Yonghong Tian
- Abstract summary: Few-shot class-incremental learning (FSCIL) aims at learning to classify new classes continually from limited samples without forgetting the old classes.
We propose Semantic-Aware Virtual Contrastive model (SAVC), a novel method that facilitates separation between new classes and base classes.
Our SAVC significantly boosts base class separation and novel class generalization, achieving new state-of-the-art performance on the three widely-used FSCIL benchmark datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot class-incremental learning (FSCIL) aims at learning to classify new
classes continually from limited samples without forgetting the old classes.
The mainstream framework tackling FSCIL is first to adopt the cross-entropy
(CE) loss for training at the base session, then freeze the feature extractor
to adapt to new classes. However, in this work, we find that the CE loss is not
ideal for base session training, as it suffers from poor class separation in
terms of representations, which further degrades generalization to novel
classes. One tempting method to mitigate this problem is to additionally apply
a naive supervised contrastive learning (SCL) loss in the base session. Unfortunately,
we find that although SCL can create a slightly better representation
separation among different base classes, it still struggles to separate base
classes and new classes. Inspired by the observations made, we propose
Semantic-Aware Virtual Contrastive model (SAVC), a novel method that
facilitates separation between new classes and base classes by introducing
virtual classes to SCL. These virtual classes, which are generated via
pre-defined transformations, not only act as placeholders for unseen classes in
the representation space, but also provide diverse semantic information. By
learning to recognize and contrast in the fantasy space fostered by virtual
classes, our SAVC significantly boosts base class separation and novel class
generalization, achieving new state-of-the-art performance on the three
widely-used FSCIL benchmark datasets. Code is available at:
https://github.com/zysong0113/SAVC.
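The abstract's key mechanism is to give each pre-defined transformation of a base class its own "virtual" class label, then apply supervised contrastive learning over the enlarged label set. A minimal stdlib-only sketch is below; the function names and the simple index scheme (virtual id = y * M + t) are illustrative assumptions, not taken from the SAVC codebase:

```python
import math

def expand_with_virtual_classes(labels, num_transforms):
    # Each of the M pre-defined transforms of a class-y sample gets its own
    # virtual class id y * M + t, so transformed views of different classes
    # (and different transforms of the same class) are pushed apart.
    return [y * num_transforms + t for y in labels for t in range(num_transforms)]

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalized embeddings (lists of floats)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Exponentiated similarities to every other sample (the denominator).
        logits = [math.exp(dot(embeddings[i], embeddings[j]) / temperature)
                  for j in range(n) if j != i]
        denom = sum(logits)
        for j in positives:
            idx = j if j < i else j - 1  # index into logits, which skips i
            total += -math.log(logits[idx] / denom)
            count += 1
    return total / max(count, 1)
```

With M = 4 transforms (e.g. 0°/90°/180°/270° rotations), a 100-class base session yields 400 virtual classes, reserving extra regions of the representation space that can act as placeholders for unseen classes.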
Related papers
- Embedding Space Allocation with Angle-Norm Joint Classifiers for Few-Shot Class-Incremental Learning [8.321592316231786]
Few-shot class-incremental learning aims to continually learn new classes from only a few samples.
Current classes occupy the entire feature space, which is detrimental to learning new classes.
The small number of samples in incremental sessions is insufficient for full training.
arXiv Detail & Related papers (2024-11-14T07:31:12Z)
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Organizing Background to Explore Latent Classes for Incremental Few-shot Semantic Segmentation [7.570798966278471]
Incremental Few-shot Semantic Segmentation (iFSS) extends pre-trained segmentation models to new classes via a few annotated images.
We propose a network called OINet, i.e., the background embedding space Organization and prototype Inherit Network.
arXiv Detail & Related papers (2024-05-29T23:22:12Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old ones is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT)
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Few-Shot Object Detection via Association and DIscrimination [83.8472428718097]
Few-shot object detection via Association and DIscrimination builds up a discriminative feature space for each novel class with two integral steps.
Experiments on Pascal VOC and MS-COCO datasets demonstrate FADI achieves new SOTA performance, significantly improving the baseline in any shot/split by +18.7.
arXiv Detail & Related papers (2021-11-23T05:04:06Z)
- Fine-grained Angular Contrastive Learning with Coarse Labels [72.80126601230447]
We introduce a novel 'Angular normalization' module that effectively combines supervised and self-supervised contrastive pre-training.
This work will help to pave the way for future research on this new, challenging, and very practical topic of C2FS classification.
arXiv Detail & Related papers (2020-12-07T08:09:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.