Multi-Granularity Regularized Re-Balancing for Class Incremental
Learning
- URL: http://arxiv.org/abs/2206.15189v1
- Date: Thu, 30 Jun 2022 11:04:51 GMT
- Authors: Huitong Chen, Yu Wang, and Qinghua Hu
- Abstract summary: Deep learning models suffer from catastrophic forgetting when learning new tasks.
Data imbalance between old and new classes is a key issue that leads to performance degradation of the model.
We propose an assumption-agnostic method, Multi-Granularity Regularized re-Balancing, to address this problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models suffer from catastrophic forgetting when learning new
tasks incrementally. Incremental learning has been proposed to retain the
knowledge of old classes while learning to identify new classes. A typical
approach is to use a few exemplars to avoid forgetting old knowledge. In such a
scenario, data imbalance between old and new classes is a key issue that leads
to performance degradation of the model. Several strategies have been designed
to rectify the bias towards the new classes due to data imbalance. However,
they heavily rely on the assumptions of the bias relation between old and new
classes. Therefore, they are not suitable for complex real-world applications.
In this study, we propose an assumption-agnostic method, Multi-Granularity
Regularized re-Balancing (MGRB), to address this problem. Re-balancing methods
are used to alleviate the influence of data imbalance; however, we empirically
discover that they tend to under-fit new classes. To this end, we further design
a novel multi-granularity regularization term that enables the model to
consider the correlations of classes in addition to re-balancing the data. A
class hierarchy is first constructed by grouping the semantically or visually
similar classes. The multi-granularity regularization then transforms the
one-hot label vector into a continuous label distribution, which reflects the
relations between the target class and other classes based on the constructed
class hierarchy. Thus, the model can learn the inter-class relational
information, which helps enhance the learning of both old and new classes.
Experimental results on both public datasets and a real-world fault diagnosis
dataset verify the effectiveness of the proposed method.
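The label-softening step described in the abstract can be illustrated with a small sketch. This is a hypothetical minimal example, not the authors' implementation: the toy hierarchy, the `alpha`/`beta` mass split, and all function names are assumptions made for illustration.

```python
# Sketch of the multi-granularity idea from the abstract: a one-hot label
# is transformed into a continuous label distribution in which classes
# sharing a parent with the target (siblings in the class hierarchy)
# receive more probability mass than unrelated classes.
# Hierarchy, weights, and names are illustrative assumptions.

def soften_label(target, classes, parent, alpha=0.8, beta=0.15):
    """Return a label distribution over `classes` for `target`.

    alpha: probability mass kept on the target class
    beta:  mass shared among siblings (same parent in the hierarchy)
    The remaining mass is spread uniformly over all other classes.
    """
    siblings = [c for c in classes
                if c != target and parent[c] == parent[target]]
    others = [c for c in classes if c != target and c not in siblings]
    dist = {c: 0.0 for c in classes}
    dist[target] = alpha
    for c in siblings:
        dist[c] = beta / len(siblings)
    rest = 1.0 - alpha - (beta if siblings else 0.0)
    for c in others:
        dist[c] = rest / len(others)
    return dist

# Toy two-level hierarchy: "cat" and "dog" share the parent "animal",
# so "dog" gets more mass than the unrelated "truck".
parent = {"cat": "animal", "dog": "animal", "truck": "vehicle"}
classes = ["cat", "dog", "truck"]
print(soften_label("cat", classes, parent))
# {'cat': 0.8, 'dog': 0.15, 'truck': 0.05}
```

Training against such a distribution (e.g. with a KL-divergence loss instead of cross-entropy on one-hot targets) is one plausible way the model could absorb inter-class relational information.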
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find a tendency for existing methods to misclassify the samples of new classes into base classes, which leads to the poor performance of new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
However, CIL tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- New Insights on Reducing Abrupt Representation Change in Online Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
arXiv Detail & Related papers (2022-03-08T01:37:00Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [56.22467011292147]
Several incremental learning methods are proposed to mitigate catastrophic forgetting for object detection.
Despite the effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z)
- Subspace Regularizers for Few-Shot Class Incremental Learning [26.372024890126408]
We present a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes.
Our results show that simple geometric regularization of class representations offers an effective tool for continual learning.
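The subspace-regularization idea summarized above can be sketched in a few lines. This is a minimal pure-Python illustration under assumed names and toy vectors, not the paper's actual method: the penalty measures how far a new class's weight vector lies from the subspace spanned by the old classes' weight vectors.

```python
# Hypothetical sketch: penalize the squared distance between a new
# class's weight vector and its projection onto the span of the old
# classes' weight vectors. All names and numbers are illustrative.

def gram_schmidt(vectors):
    """Orthonormal basis (list of lists) for the span of `vectors`."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 1e-12:
            basis.append([wi / norm for wi in w])
    return basis

def subspace_penalty(new_w, old_ws):
    """Squared distance from new_w to the subspace spanned by old_ws."""
    basis = gram_schmidt(old_ws)
    proj = [0.0] * len(new_w)
    for b in basis:
        dot = sum(wi * bi for wi, bi in zip(new_w, b))
        proj = [pi + dot * bi for pi, bi in zip(proj, b)]
    return sum((wi - pi) ** 2 for wi, pi in zip(new_w, proj))

# Old weights span the xy-plane; an in-span vector incurs no penalty,
# while an off-span one is penalized.
old = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(subspace_penalty([2.0, 3.0, 0.0], old))  # 0.0 (in span)
print(subspace_penalty([0.0, 0.0, 2.0], old))  # 4.0 (off span)
```

In training, such a term would be added to the classification loss so that new-class weights stay geometrically close to the existing class representations.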
arXiv Detail & Related papers (2021-10-13T22:19:53Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts well, but not at the expense of performance and accuracy on old data.
ClaRe is an efficient solution for CIL by remembering the representations of learned classes in each increment.
ClaRe has a better generalization than prior methods thanks to producing diverse instances from the distribution of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed set learning, CIL has two main challenges: 1) Novel class detection.
After the novel classes are detected, the model needs to be updated without re-training using entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.