Dual Compensation Residual Networks for Class Imbalanced Learning
- URL: http://arxiv.org/abs/2308.13165v1
- Date: Fri, 25 Aug 2023 04:06:30 GMT
- Title: Dual Compensation Residual Networks for Class Imbalanced Learning
- Authors: Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan and Xilin Chen
- Abstract summary: We propose Dual Compensation Residual Networks to better fit both tail and head classes.
An important factor causing overfitting is that there is severe feature drift between training and test data on tail classes.
We also propose a Residual Balanced Multi-Proxies classifier to alleviate the under-fitting issue.
- Score: 98.35401757647749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning generalizable representation and classifier for class-imbalanced
data is challenging for data-driven deep models. Most studies attempt to
re-balance the data distribution, which is prone to overfitting on tail classes
and underfitting on head classes. In this work, we propose Dual Compensation
Residual Networks to better fit both tail and head classes. Firstly, we propose
dual Feature Compensation Module (FCM) and Logit Compensation Module (LCM) to
alleviate the overfitting issue. The design of these two modules is based on
the observation that an important factor causing overfitting is severe
feature drift between training and test data on tail classes. In detail, the
test features of a tail category tend to drift towards the feature clouds of
multiple similar head categories. FCM therefore estimates a multi-mode
feature drift direction for each tail category and compensates for it.
Furthermore, LCM translates the deterministic feature drift vector estimated by
FCM along intra-class variations, so as to cover a larger effective
compensation space, thereby better fitting the test features. Secondly, we
propose a Residual Balanced Multi-Proxies Classifier (RBMC) to alleviate the
under-fitting issue. Motivated by the observation that re-balancing strategy
hinders the classifier from learning sufficient head knowledge and eventually
causes underfitting, RBMC utilizes uniform learning with a residual path to
facilitate classifier learning. Comprehensive experiments on Long-tailed and
Class-Incremental benchmarks validate the efficacy of our method.
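The feature-compensation idea can be illustrated with a minimal toy sketch (hypothetical names and synthetic data; not the authors' implementation): estimate a drift direction for a tail class from the mean features of similar head classes, then shift the tail features along it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: two "head" classes with many samples, one "tail" class with few.
head_a = rng.normal(loc=[2.0, 0.0], scale=0.5, size=(200, 2))
head_b = rng.normal(loc=[0.0, 2.0], scale=0.5, size=(200, 2))
tail = rng.normal(loc=[1.0, 1.0], scale=0.5, size=(5, 2))

def drift_compensation(tail_feats, head_means, weights):
    """Shift tail features along a weighted combination of directions
    toward similar head-class centers (a multi-mode drift estimate)."""
    centers = np.stack(head_means)              # (K, D) head-class means
    tail_mean = tail_feats.mean(axis=0)         # (D,) tail-class mean
    directions = centers - tail_mean            # (K, D) per-head drift modes
    drift = (weights[:, None] * directions).sum(axis=0)
    return tail_feats + drift

# Similarity weights (here simply uniform over the two nearest head classes).
weights = np.array([0.5, 0.5])
compensated = drift_compensation(tail, [head_a.mean(0), head_b.mean(0)], weights)
print(compensated.shape)
```

In the paper the drift directions and weights are learned from similarity between tail and head categories; the sketch only shows the geometric intuition of moving tail features toward where their test features are expected to land.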
Related papers
- A dual-branch model with inter- and intra-branch contrastive loss for long-tailed recognition [7.225494453600985]
Models trained on long-tailed datasets have poor adaptability to tail classes and the decision boundaries are ambiguous.
We propose a simple yet effective model, named Dual-Branch Long-Tailed Recognition (DB-LTR), which includes an imbalanced learning branch and a Contrastive Learning Branch (CoLB).
CoLB can improve the capability of the model in adapting to tail classes and assist the imbalanced learning branch to learn a well-represented feature space and discriminative decision boundary.
arXiv Detail & Related papers (2023-09-28T03:31:11Z) - Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z) - Feature Fusion from Head to Tail for Long-Tailed Visual Recognition [39.86973663532936]
The biased decision boundary caused by inadequate semantic information in tail classes is one of the key factors contributing to their low recognition accuracy.
We propose to augment tail classes by grafting the diverse semantic information from head classes, referred to as head-to-tail fusion (H2T).
Both theoretical analysis and practical experimentation demonstrate that H2T can contribute to a more optimized solution for the decision boundary.
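The grafting idea can be sketched as follows (a toy illustration under assumed names, not the paper's H2T implementation): splice a random subset of feature channels from a head-class sample into a tail-class feature to enrich its semantics.

```python
import numpy as np

rng = np.random.default_rng(42)

def head_to_tail_fuse(tail_feat, head_feat, frac=0.3, rng=rng):
    """Replace a random fraction of the tail feature's channels with
    the corresponding channels from a head-class feature."""
    fused = tail_feat.copy()
    n_graft = int(frac * fused.size)
    idx = rng.choice(fused.size, size=n_graft, replace=False)
    fused[idx] = head_feat[idx]
    return fused

tail_feat = np.zeros(10)   # toy tail-class feature
head_feat = np.ones(10)    # toy head-class feature
fused = head_to_tail_fuse(tail_feat, head_feat)
print(int(fused.sum()))    # 3 channels grafted from the head feature
```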
arXiv Detail & Related papers (2023-06-12T08:50:46Z) - Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition from the perspective of feature level.
Two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
Experiments conducted on benchmark datasets demonstrate the superior performance of the proposed method over the state-of-the-art ones.
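For context, the best-known form of logit adjustment (not necessarily the Gaussian variant proposed in this paper) subtracts a scaled log class-prior from each logit, so that rare classes are not systematically under-predicted. A minimal sketch with made-up class counts:

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Standard logit adjustment: subtract tau * log(prior) per class,
    which boosts classes with few training samples."""
    priors = np.asarray(class_counts, dtype=float)
    priors /= priors.sum()
    return logits - tau * np.log(priors)

# Long-tailed toy setup: head class has 1000 samples, tail class has 10.
logits = np.array([2.0, 1.9])          # raw scores slightly favor the head class
adjusted = adjust_logits(logits, class_counts=[1000, 10])
print(adjusted.argmax())               # prints 1: the tail class now wins
```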
arXiv Detail & Related papers (2023-05-18T02:06:06Z) - Leveraging Angular Information Between Feature and Classifier for Long-tailed Learning: A Prediction Reformulation Approach [90.77858044524544]
We reformulate the recognition probabilities through included angles without re-balancing the classifier weights.
Inspired by the performance improvement of the predictive form reformulation, we explore the different properties of this angular prediction.
Our method is able to obtain the best performance among peer methods without pretraining on CIFAR10/100-LT and ImageNet-LT.
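Angular reformulations of this kind typically score a sample by the cosine of the included angle between its feature and each class weight, instead of a raw dot product whose magnitude is biased by weight norms. A hedged sketch (toy numbers, not the paper's exact formulation):

```python
import numpy as np

def cosine_logits(features, weights, scale=16.0):
    """Score classes by the cosine of the included angle between feature
    and class weight, removing the norm bias that favors head classes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return scale * f @ w.T

# The head-class weight has a much larger norm, which dominates dot products.
weights = np.array([[4.0, 0.0],    # head class
                    [0.0, 1.0]])   # tail class
feature = np.array([[0.3, 1.0]])   # points mostly toward the tail class

print((feature @ weights.T).argmax())            # 0: dot product picks the head class
print(cosine_logits(feature, weights).argmax())  # 1: angle picks the tail class
```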
arXiv Detail & Related papers (2022-12-03T07:52:48Z) - Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z) - Exploring Classification Equilibrium in Long-Tailed Object Detection [29.069986049436157]
We propose to use the mean classification score to indicate the classification accuracy for each category during training.
We balance the classification via an Equilibrium Loss (EBL) and a Memory-augmented Feature Sampling (MFS) method.
It improves the detection performance of tail classes by 15.6 AP, and outperforms the most recent long-tailed object detectors by more than 1 AP.
arXiv Detail & Related papers (2021-08-17T08:39:04Z) - Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z) - Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.