Self-Paced Imbalance Rectification for Class Incremental Learning
- URL: http://arxiv.org/abs/2202.03703v1
- Date: Tue, 8 Feb 2022 07:58:13 GMT
- Authors: Zhiheng Liu, Kai Zhu and Yang Cao
- Abstract summary: We propose a self-paced imbalance rectification scheme that dynamically maintains incremental balance during the representation learning phase.
Experiments on three benchmarks demonstrate stable incremental performance, significantly outperforming state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Exemplar-based class-incremental learning aims to recognize new classes without forgetting old ones, whose samples can be stored only in limited memory. Fluctuation in the ratio of new samples to old exemplars, caused by memory capacity varying across deployment environments, makes it difficult to stabilize the incremental optimization process. To address this problem, we propose a novel self-paced imbalance rectification scheme that dynamically maintains incremental balance during the representation learning phase. Specifically, the scheme consists of a frequency compensation strategy, which adjusts the logit margin between old and new classes according to their sample-count ratio to strengthen the expressive ability of the old classes, and an inheritance transfer strategy, which reduces representation confusion by estimating the similarity of different classes in the old embedding space. Furthermore, a chronological attenuation mechanism is proposed to mitigate repetitive optimization of the older classes over multiple step-wise increments. Extensive experiments on three benchmarks demonstrate stable incremental performance that significantly outperforms state-of-the-art methods.
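The frequency compensation strategy is described above only at a high level. The sketch below gives one plausible reading, borrowing the training-time logit-adjustment trick from long-tailed recognition as a stand-in; the function name, `tau`, and the count tensor are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def frequency_compensated_ce(logits: torch.Tensor,
                             targets: torch.Tensor,
                             class_counts: torch.Tensor,
                             tau: float = 1.0) -> torch.Tensor:
    """Cross-entropy with frequency-dependent logit margins (a sketch,
    assuming the paper's frequency compensation resembles training-time
    logit adjustment). Over-represented new classes receive a larger
    additive handicap during training, which widens the margin in favor
    of old classes whose exemplars are few; inference uses raw logits."""
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(priors + 1e-12)  # broadcast over batch
    return F.cross_entropy(adjusted, targets)

# Usage sketch: 2 old classes with 20 exemplars each vs. 3 new classes
# with 500 samples each, so new-class logits are handicapped in training.
counts = torch.tensor([20, 20, 500, 500, 500])
loss = frequency_compensated_ce(torch.randn(8, 5), torch.randint(0, 5, (8,)), counts)
```

Under this reading, the margin tracks the number ratio automatically: when memory shrinks and the new-to-old ratio grows, so does the handicap on the new classes, matching the dynamic rebalancing the abstract describes.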
Related papers
- Strike a Balance in Continual Panoptic Segmentation
We introduce past-class backtrace distillation to balance the stability of existing knowledge with the adaptability to new information.
We also introduce a class-proportional memory strategy, which aligns the class distribution in the replay sample set with that of the historical training data.
We present a new method named Continual Panoptic Balanced (BalConpas).
arXiv Detail & Related papers (2024-07-23T09:58:20Z)
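The class-proportional memory strategy above is summarized in a single sentence; the sketch below shows one straightforward allocation consistent with it, rounding replay slots per class to mirror the historical class distribution. The helper name and largest-remainder rounding are assumptions, not BalConpas code.

```python
def proportional_memory_slots(historical_counts: dict[str, int],
                              memory_size: int) -> dict[str, int]:
    """Allocate replay-memory slots per class in proportion to the class
    distribution of the historical training data (an assumed reading of
    the class-proportional memory strategy). Largest-remainder rounding
    guarantees the slots sum exactly to memory_size."""
    total = sum(historical_counts.values())
    raw = {c: memory_size * n / total for c, n in historical_counts.items()}
    slots = {c: int(q) for c, q in raw.items()}
    # Hand leftover slots to the classes with the largest fractional parts.
    leftovers = memory_size - sum(slots.values())
    for c in sorted(raw, key=lambda k: raw[k] - slots[k], reverse=True)[:leftovers]:
        slots[c] += 1
    return slots

# Usage sketch: a 100-slot replay memory mirroring a skewed history.
print(proportional_memory_slots({"road": 700, "car": 250, "rider": 50}, 100))
# -> {'road': 70, 'car': 25, 'rider': 5}
```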
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Non-exemplar Class-incremental Learning by Random Auxiliary Classes Augmentation and Mixed Features
Non-exemplar class-incremental learning refers to classifying new and old classes without storing samples of old classes.
We propose an effective non-exemplar method called RAMF, consisting of Random Auxiliary classes augmentation and Mixed Features.
arXiv Detail & Related papers (2023-04-16T06:33:43Z)
- Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning
Non-exemplar class-incremental learning aims to recognize both old and new classes when old-class samples cannot be saved.
Our scheme consists of a structure reorganization strategy that fuses main-branch expansion and side-branch updating to maintain the old features.
A prototype selection mechanism is proposed to enhance the discrimination between the old and new classes by selectively incorporating new samples into the distillation process.
arXiv Detail & Related papers (2022-03-12T06:42:20Z)
- Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning aims to recognize new classes from only a few samples without forgetting the old classes.
We propose a novel incremental prototype learning scheme that adapts the feature representation to various generated incremental episodes.
Experiments on three benchmark datasets demonstrate above-par incremental performance, outperforming state-of-the-art methods by margins of 13%, 17%, and 11%, respectively.
arXiv Detail & Related papers (2021-07-19T14:31:33Z)
- Breadcrumbs: Adversarial Class-Balanced Sampling for Long-tailed Recognition
The problem of long-tailed recognition, where the number of examples per class is highly imbalanced, is considered.
It is hypothesized that overfitting under class-balanced sampling is due to the repeated sampling of the same examples, and that it can be addressed by feature-space augmentation.
A new feature augmentation strategy, EMANATE, based on back-tracking of features across epochs during training, is proposed.
A new sampling procedure, Breadcrumb, is then introduced to implement adversarial class-balanced sampling without extra computation.
arXiv Detail & Related papers (2021-05-01T00:21:26Z)
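EMANATE is described above only as "back-tracking of features across epochs". The sketch below shows one way such back-tracking could work: cache each example's feature at every epoch, then let repeated tail-class samples draw a stale feature from an earlier epoch as a cheap augmentation. The class design, retention window, and 50% replay rate are all assumptions, not the paper's code.

```python
import random
from collections import defaultdict

import torch

class FeatureBank:
    """Per-example feature cache across epochs (a sketch of feature
    back-tracking under assumed details). Replaying features recorded
    at earlier epochs diversifies the repeated sampling of tail-class
    examples without extra forward passes."""

    def __init__(self, max_epochs_kept: int = 5):
        self.max_epochs_kept = max_epochs_kept
        self.bank: dict[int, list[torch.Tensor]] = defaultdict(list)

    def record(self, example_id: int, feature: torch.Tensor) -> None:
        history = self.bank[example_id]
        history.append(feature.detach().cpu())
        if len(history) > self.max_epochs_kept:
            history.pop(0)  # keep only the most recent epochs

    def augmented(self, example_id: int, feature: torch.Tensor) -> torch.Tensor:
        """Return the current feature, or with 50% probability a feature
        back-tracked from a random earlier epoch."""
        history = self.bank[example_id]
        if history and random.random() < 0.5:
            return random.choice(history).to(feature.device)
        return feature
```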