Self-Promoted Prototype Refinement for Few-Shot Class-Incremental
Learning
- URL: http://arxiv.org/abs/2107.08918v1
- Date: Mon, 19 Jul 2021 14:31:33 GMT
- Title: Self-Promoted Prototype Refinement for Few-Shot Class-Incremental
Learning
- Authors: Kai Zhu, Yang Cao, Wei Zhai, Jie Cheng, Zheng-Jun Zha
- Abstract summary: Few-shot class-incremental learning aims to recognize new classes from few samples without forgetting the old classes.
We propose a novel incremental prototype learning scheme that adapts the feature representation to various generated incremental episodes.
Experiments on three benchmark datasets demonstrate above-par incremental performance, outperforming state-of-the-art methods by margins of 13%, 17%, and 11%, respectively.
- Score: 81.10531943939365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot class-incremental learning aims to recognize new classes
from few samples without forgetting the old classes. It is a challenging task since
representation optimization and prototype reorganization can only be achieved
under little supervision. To address this problem, we propose a novel
incremental prototype learning scheme. Our scheme consists of a random episode
selection strategy that adapts the feature representation to various generated
incremental episodes to enhance the corresponding extensibility, and a
self-promoted prototype refinement mechanism which strengthens the expression
ability of the new classes by explicitly considering the dependencies among
different classes. Particularly, a dynamic relation projection module is
proposed to calculate the relation matrix in a shared embedding space and
leverage it as the factor for bootstrapping the update of prototypes. Extensive
experiments on three benchmark datasets demonstrate above-par incremental
performance, outperforming state-of-the-art methods by margins of 13%, 17%, and
11%, respectively.
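To make the refinement mechanism concrete, the sketch below shows one way a relation matrix computed in a shared embedding space could bootstrap the update of new-class prototypes. This is a minimal illustration under assumptions: the function name refine_prototypes, the cosine-similarity relation, and the softmax temperature are hypothetical choices, not the authors' actual dynamic relation projection module.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(old_protos, new_protos, temperature=0.5):
    """Hypothetical relation-driven prototype refinement.

    old_protos: (C_old, D) prototypes of previously learned classes.
    new_protos: (C_new, D) prototypes averaged from few new-class samples.

    A relation (similarity) matrix between new prototypes and all
    prototypes is computed in a shared embedding space, then used as
    weights to pull related-class information into the new prototypes.
    """
    all_protos = torch.cat([old_protos, new_protos], dim=0)          # (C, D)
    sim = F.normalize(new_protos, dim=1) @ F.normalize(all_protos, dim=1).T
    relation = F.softmax(sim / temperature, dim=1)                   # (C_new, C)
    refined = relation @ all_protos                                  # (C_new, D)
    return F.normalize(refined, dim=1)

# Usage: 60 base classes, 5 new classes, 512-dim embeddings.
old = F.normalize(torch.randn(60, 512), dim=1)
new = F.normalize(torch.randn(5, 512), dim=1)
print(refine_prototypes(old, new).shape)  # torch.Size([5, 512])
```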
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-03T07:19:11Z) - Simple-Sampling and Hard-Mixup with Prototypes to Rebalance Contrastive Learning for Text Classification [11.072083437769093]
We propose a novel model named SharpReCL for imbalanced text classification tasks.
Our model even outperforms popular large language models across several datasets.
arXiv Detail & Related papers (2024-05-19T11:33:49Z) - Dynamic Feature Learning and Matching for Class-Incremental Learning [20.432575325147894]
Class-incremental learning (CIL) has emerged as a means to learn new classes without catastrophic forgetting of previous classes.
We propose the Dynamic Feature Learning and Matching (DFLM) model in this paper.
Our proposed model achieves significant performance improvements over existing methods.
arXiv Detail & Related papers (2024-05-14T12:17:19Z) - Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing approach to WILSS generates seed areas for each new class to serve as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z) - Non-exemplar Class-incremental Learning by Random Auxiliary Classes
Augmentation and Mixed Features [37.51376572211081]
Non-exemplar class-incremental learning refers to classifying new and old classes without storing samples of old classes.
We propose an effective non-exemplar method called RAMF consisting of Random Auxiliary classes augmentation and Mixed Feature.
arXiv Detail & Related papers (2023-04-16T06:33:43Z) - Harmonizing Base and Novel Classes: A Class-Contrastive Approach for
Generalized Few-Shot Segmentation [78.74340676536441]
We propose a class contrastive loss and a class relationship loss to regulate prototype updates and encourage a large distance between prototypes.
Our proposed approach achieves new state-of-the-art performance for the generalized few-shot segmentation task on PASCAL VOC and MS COCO datasets.
arXiv Detail & Related papers (2023-03-24T00:30:25Z) - Self-Sustaining Representation Expansion for Non-Exemplar
Class-Incremental Learning [138.35405462309456]
Non-exemplar class-incremental learning aims to recognize both the old and new classes when old-class samples cannot be stored.
Our scheme consists of a structure reorganization strategy that fuses main-branch expansion and side-branch updating to maintain the old features.
A prototype selection mechanism is proposed to enhance the discrimination between the old and new classes by selectively incorporating new samples into the distillation process.
arXiv Detail & Related papers (2022-03-12T06:42:20Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - Dual Prototypical Contrastive Learning for Few-shot Semantic
Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in prototype feature space (a sketch of this idea appears after this list).
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
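Several entries above (the class-contrastive approach for generalized few-shot segmentation and the dual prototypical contrastive learning paper) share the idea of shaping prototype geometry with a contrastive objective. The following is a minimal, hypothetical sketch of such a prototype-level contrastive loss; the function name, InfoNCE formulation, and temperature are illustrative assumptions rather than any specific paper's method.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(features, labels, prototypes, tau=0.1):
    """Hypothetical prototype-level contrastive (InfoNCE-style) loss.

    Pulls each feature toward its own class prototype (reducing
    intra-class distance) while pushing it away from the other
    prototypes (increasing inter-class distance).

    features:   (N, D) embedded samples.
    labels:     (N,)   class indices into `prototypes`.
    prototypes: (C, D) one prototype per class.
    """
    feats = F.normalize(features, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.T / tau  # (N, C) scaled cosine similarities
    # Cross-entropy against the true class index is InfoNCE with the
    # class prototype as the positive key and the rest as negatives.
    return F.cross_entropy(logits, labels)

# Usage with random data: 8 samples, 4 classes, 128-dim features.
x = torch.randn(8, 128)
y = torch.randint(0, 4, (8,))
p = torch.randn(4, 128)
print(prototype_contrastive_loss(x, y, p))
```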
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.