Robust Feature Learning and Global Variance-Driven Classifier Alignment
for Long-Tail Class Incremental Learning
- URL: http://arxiv.org/abs/2311.01227v1
- Date: Thu, 2 Nov 2023 13:28:53 GMT
- Title: Robust Feature Learning and Global Variance-Driven Classifier Alignment
for Long-Tail Class Incremental Learning
- Authors: Jayateja Kalla and Soma Biswas
- Abstract summary: This paper introduces a two-stage framework designed to enhance long-tail class incremental learning.
We address the challenge posed by the under-representation of tail classes in long-tail class incremental learning.
The proposed framework can seamlessly integrate as a module with any class incremental learning method.
- Score: 20.267257778779992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a two-stage framework designed to enhance long-tail
class incremental learning, enabling the model to progressively learn new
classes, while mitigating catastrophic forgetting in the context of long-tailed
data distributions. To address the challenge posed by the under-representation
of tail classes in long-tail class incremental learning, our approach achieves
classifier alignment in the second stage by leveraging global variance as an
informative measure together with class prototypes. This process effectively
captures class properties and eliminates the need for data balancing or
additional layer tuning. In the first stage, alongside traditional class
incremental learning losses, the proposed approach incorporates mixup classes
to learn robust feature representations, ensuring smoother decision
boundaries. The proposed framework can
seamlessly integrate as a module with any class incremental learning method to
effectively handle long-tail class incremental learning scenarios. Extensive
experimentation on the CIFAR-100 and ImageNet-Subset datasets validates the
approach's efficacy, showcasing its superiority over state-of-the-art
techniques across various long-tail CIL settings.
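To make the first stage concrete, the sketch below shows one way mixup-interpolated samples can be trained alongside a standard classification loss in PyTorch. This is a minimal, hedged illustration: the `model`, `mixup_batch`, and `stage1_loss` names are hypothetical, and the paper's exact "mixup classes" formulation and its combination with the base CIL losses may differ.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    """Interpolate images within a batch and pair up their labels.

    Returns mixed inputs, both label sets, and the mixing weight, so the
    loss can be interpolated the same way as the inputs.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    return x_mixed, y, y[perm], lam

def stage1_loss(model, x, y, alpha=0.2):
    """Cross-entropy on mixup samples; in practice this would be added to
    the base CIL losses (e.g. a distillation term), which are omitted here."""
    x_mixed, y_a, y_b, lam = mixup_batch(x, y, alpha)
    logits = model(x_mixed)
    return (lam * F.cross_entropy(logits, y_a)
            + (1.0 - lam) * F.cross_entropy(logits, y_b))
```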
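For the second stage, one plausible reading of classifier alignment driven by global variance and class prototypes is to draw balanced pseudo-features from per-class Gaussians, using each class prototype as the mean and the globally pooled feature variance as the spread, then tune only the linear classifier on them, so no data rebalancing or extra layers are needed. The sketch below illustrates this reading under stated assumptions (`align_classifier` and its parameters are hypothetical), not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def align_classifier(classifier, prototypes, global_var,
                     n_samples=64, epochs=5, lr=1e-3):
    """Tune a linear classifier on pseudo-features drawn from
    N(prototype_c, global_var) for every class c, giving each class
    (head or tail) the same number of samples.

    prototypes: (num_classes, feat_dim) class-mean features
    global_var: (feat_dim,) feature variance pooled over all classes
    """
    num_classes, feat_dim = prototypes.shape
    std = global_var.sqrt()
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):
        # Balanced pseudo-feature batch: n_samples per class.
        feats = prototypes.repeat_interleave(n_samples, dim=0)
        feats = feats + torch.randn_like(feats) * std
        labels = torch.arange(num_classes).repeat_interleave(n_samples)
        opt.zero_grad()
        loss = F.cross_entropy(classifier(feats), labels)
        loss.backward()
        opt.step()
    return classifier
```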
Related papers
- TaE: Task-aware Expandable Representation for Long Tail Class Incremental Learning [42.630413950957795]
We introduce a novel Task-aware Expandable (TaE) framework to learn diverse representations from each incremental task.
TaE achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-02-08T16:37:04Z)
- SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model [73.80068155830708]
We present an extensive analysis of continual learning on a pre-trained model (CLPM).
We propose a simple but extremely effective approach named Slow Learner with Classifier Alignment (SLCA).
Across a variety of scenarios, our proposal provides substantial improvements for CLPM.
arXiv Detail & Related papers (2023-03-09T08:57:01Z)
- Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer [9.356870107137093]
In class-incremental learning, the model is expected to learn new classes continually while maintaining knowledge of previous classes.
We propose two distillation-based objectives for class incremental learning.
arXiv Detail & Related papers (2022-08-07T16:28:02Z)
- DILF-EN framework for Class-Incremental Learning [9.969403314560179]
We show that the effect of catastrophic forgetting on a model's predictions varies with the orientation of the same image.
We propose a novel data-ensemble approach that combines the predictions for different orientations of the image (a minimal sketch follows the list below).
We also propose a novel dual-incremental learning framework that involves jointly training the network with two incremental learning objectives.
arXiv Detail & Related papers (2021-12-23T06:49:24Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model can no longer clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Class-Balanced Distillation for Long-Tailed Visual Recognition [100.10293372607222]
Real-world imagery is often characterized by a significant imbalance of the number of images per class, leading to long-tailed distributions.
In this work, we introduce a new framework based on the key observation that a feature representation learned with instance sampling is far from optimal in a long-tailed setting.
Our main contribution is a new training method that leverages knowledge distillation to enhance feature representations.
arXiv Detail & Related papers (2021-04-12T08:21:03Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty is that limited data from new classes not only lead to significant overfitting but also exacerbate the notorious catastrophic forgetting problem.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in long-tailed recognition scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Learning to Segment the Tail [91.38061765836443]
Real-world visual recognition requires handling the extreme sample imbalance in large-scale long-tailed data.
We propose a "divide&conquer" strategy for the challenging LVIS task: divide the whole data into balanced parts and then apply incremental learning to conquer each one.
arXiv Detail & Related papers (2020-04-02T09:39:08Z)
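As referenced in the DILF-EN entry above, the orientation-based data ensemble can be illustrated by averaging softmax predictions over rotated copies of each test image. A minimal sketch, assuming a PyTorch classifier; DILF-EN's actual combination rule may differ.

```python
import torch

@torch.no_grad()
def orientation_ensemble(model, x):
    """Average softmax predictions over 0/90/180/270-degree rotations.

    x: (batch, channels, height, width) image tensor.
    """
    probs = 0.0
    for k in range(4):  # rotate by k * 90 degrees in the spatial plane
        logits = model(torch.rot90(x, k, dims=(2, 3)))
        probs = probs + logits.softmax(dim=1)
    return probs / 4.0
```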