Long-Tailed Class Incremental Learning
- URL: http://arxiv.org/abs/2210.00266v1
- Date: Sat, 1 Oct 2022 12:41:48 GMT
- Title: Long-Tailed Class Incremental Learning
- Authors: Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li,
Ming-Ming Cheng
- Abstract summary: In class incremental learning (CIL), a model must learn new classes in a sequential manner without forgetting old ones.
We propose two long-tailed CIL scenarios, which we term ordered and shuffled LT-CIL.
Our results demonstrate the superior performance (up to 6.44 points in average incremental accuracy) of our approach on CIFAR-100 and ImageNet-Subset.
- Score: 72.4894111493918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In class incremental learning (CIL), a model must learn new classes in a
sequential manner without forgetting old ones. However, conventional CIL
methods consider a balanced distribution for each new task, which ignores the
prevalence of long-tailed distributions in the real world. In this work we
propose two long-tailed CIL scenarios, which we term ordered and shuffled
LT-CIL. Ordered LT-CIL considers the scenario in which classes arrive sorted by
frequency: head classes with many samples are learned before tail classes with few. Shuffled LT-CIL,
on the other hand, assumes a completely random long-tailed distribution for
each task. We systematically evaluate existing methods in both LT-CIL scenarios
and demonstrate very different behaviors compared to conventional CIL
scenarios. Additionally, we propose a two-stage learning baseline with a
learnable weight scaling layer that reduces the bias caused by the long-tailed
distribution in LT-CIL and, in turn, also improves the performance of
conventional CIL, where exemplars are limited. Our results demonstrate the
superior performance (up to 6.44 points in average incremental accuracy) of our
approach on CIFAR-100 and ImageNet-Subset. The code is available at
https://github.com/xialeiliu/Long-Tailed-CIL
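As a concrete illustration of the two scenarios, here is a minimal Python sketch of how ordered and shuffled long-tailed task splits could be built from a labeled dataset such as CIFAR-100. The exponential imbalance profile, the function names, and the equal-classes-per-task assumption are ours, not necessarily the authors' exact protocol.
```python
import random
from collections import defaultdict

def longtail_sizes(n_classes, n_max, imb_factor=0.01):
    """Per-class sample counts decaying exponentially from n_max down to
    n_max * imb_factor (a common long-tail profile; the paper's exact
    profile may differ)."""
    return [int(n_max * imb_factor ** (i / (n_classes - 1)))
            for i in range(n_classes)]

def make_lt_cil_tasks(labels, n_tasks, scenario="ordered", seed=0):
    """Split a dataset's label list into long-tailed incremental tasks.

    scenario="ordered":  task 1 gets the head classes (most samples),
                         later tasks get progressively rarer classes.
    scenario="shuffled": class-to-task assignment is random, so every
                         task mixes head and tail classes.
    Assumes n_tasks evenly divides the number of classes.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    classes = sorted(by_class)
    sizes = longtail_sizes(len(classes),
                           n_max=max(len(v) for v in by_class.values()))

    # Subsample each class to its long-tailed size.
    for rank, c in enumerate(classes):
        rng.shuffle(by_class[c])
        by_class[c] = by_class[c][: sizes[rank]]

    if scenario == "shuffled":
        rng.shuffle(classes)  # random head/tail mix in every task

    per_task = len(classes) // n_tasks
    return [
        [i for c in classes[t * per_task:(t + 1) * per_task] for i in by_class[c]]
        for t in range(n_tasks)
    ]
```
With CIFAR-100's 100 classes and n_tasks=10, scenario="ordered" places the ten most frequent classes in the first task, while scenario="shuffled" mixes head and tail classes in every task.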
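The two-stage baseline can be pictured as: train normally first, then freeze the network and learn only a per-class scaling of the classifier logits on class-balanced data. The PyTorch sketch below is our reading of that idea, not the authors' implementation; the loop structure, optimizers, and hyperparameters are placeholders, and a single pass stands in for full training epochs.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledClassifier(nn.Module):
    """Wraps a linear classifier with one learnable scale per class,
    meant to counteract the logit bias toward frequent (head) classes."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        self.scale = nn.Parameter(torch.ones(linear.out_features))  # init: identity

    def forward(self, feats):
        return self.linear(feats) * self.scale

def train_two_stage(backbone, classifier, task_loader, balanced_loader):
    # Stage 1: ordinary end-to-end training on the (long-tailed) task data.
    model = nn.Sequential(backbone, classifier)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for x, y in task_loader:
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

    # Stage 2: freeze backbone and classifier weights; train only the
    # per-class scales on class-balanced data (e.g., stored exemplars).
    scaled = ScaledClassifier(classifier)
    for p in model.parameters():
        p.requires_grad_(False)
    opt2 = torch.optim.SGD([scaled.scale], lr=0.01)
    for x, y in balanced_loader:
        opt2.zero_grad()
        F.cross_entropy(scaled(backbone(x)), y).backward()
        opt2.step()
    return backbone, scaled
```
Initializing the scales to 1 makes stage 2 start from the stage-1 model exactly, so the scaling layer can only move the decision boundaries away from the head-class bias if the balanced data supports it.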
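For reference, the "average incremental accuracy" reported above is the mean of the accuracies measured over all classes seen so far after each incremental task; the helper name below is our own.
```python
def average_incremental_accuracy(step_accuracies):
    """Mean of the accuracies evaluated on all seen classes after each
    incremental task, e.g. [80.0, 70.0, 60.0] -> 70.0."""
    return sum(step_accuracies) / len(step_accuracies)
```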
Related papers
- Three Heads Are Better Than One: Complementary Experts for Long-Tailed Semi-supervised Learning [74.44500692632778]
We propose a novel method named ComPlementary Experts (CPE) to model various class distributions.
CPE achieves state-of-the-art performance on the CIFAR-10-LT, CIFAR-100-LT, and STL-10-LT benchmarks.
arXiv Detail & Related papers (2023-12-25T11:54:07Z)
- Dynamic Residual Classifier for Class Incremental Learning [4.02487511510606]
With imbalanced sample numbers between old and new classes, learning becomes biased.
Existing CIL methods exploit long-tailed (LT) recognition techniques, e.g., adjusted losses and data re-sampling.
A novel Dynamic Residual Classifier (DRC) is proposed to handle this challenging scenario.
arXiv Detail & Related papers (2023-08-25T11:07:11Z)
- RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z)
- Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels [53.68653940062605]
We introduce a novel task, Partial labeling and Long-Tailed Multi-Label Classification (PLT-MLC).
We find that most LT-MLC and PL-MLC approaches fail to address the performance degradation that arises in the PLT-MLC setting.
We propose an end-to-end learning framework: COrrection → ModificatIon → balanCe (COMIC).
arXiv Detail & Related papers (2023-04-20T20:05:08Z)
- Invariant Feature Learning for Generalized Long-Tailed Classification [63.0533733524078]
We introduce Generalized Long-Tailed classification (GLT) to jointly consider both kinds of imbalances.
We argue that most class-wise LT methods degenerate on our two proposed benchmarks: ImageNet-GLT and MSCOCO-GLT.
We propose an Invariant Feature Learning (IFL) method as the first strong baseline for GLT.
arXiv Detail & Related papers (2022-07-19T18:27:42Z)
- Feature Generation for Long-tail Classification [36.186909933006675]
We show how to generate meaningful features by estimating the tail category's distribution.
We also present a qualitative analysis of generated features using t-SNE visualizations and analyze the nearest neighbors used to calibrate the tail class distributions.
arXiv Detail & Related papers (2021-11-10T21:34:29Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)