Constructing Balance from Imbalance for Long-tailed Image Recognition
- URL: http://arxiv.org/abs/2208.02567v1
- Date: Thu, 4 Aug 2022 10:22:24 GMT
- Title: Constructing Balance from Imbalance for Long-tailed Image Recognition
- Authors: Yue Xu, Yong-Lu Li, Jiefeng Li, Cewu Lu
- Abstract summary: The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
- Score: 50.6210415377178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tailed image recognition presents massive challenges to deep learning
systems since the imbalance between majority (head) classes and minority (tail)
classes severely skews the data-driven deep neural networks. Previous methods
tackle data imbalance from the viewpoints of data distribution, feature
space, model design, etc. In this work, instead of directly learning a
recognition model, we suggest confronting the bottleneck of head-to-tail bias
before classifier learning, from the previously omitted perspective of
balancing label space. To alleviate the head-to-tail bias, we propose a concise
paradigm by progressively adjusting label space and dividing the head classes
and tail classes, dynamically constructing balance from imbalance to facilitate
the classification. With flexible data filtering and label space mapping, we
can easily embed our approach to most classification models, especially the
decoupled training methods. Besides, we find the separability of head-tail
classes varies among different features with different inductive biases. Hence,
our proposed model also provides a feature evaluation method and paves the way
for long-tailed feature learning. Extensive experiments show that our method
can boost the performance of state-of-the-art methods of different types on
widely-used benchmarks. Code is available at https://github.com/silicx/DLSA.
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and classifier training in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - RAHNet: Retrieval Augmented Hybrid Network for Long-tailed Graph
Classification [10.806893809269074]
We propose a novel framework called Retrieval Augmented Hybrid Network (RAHNet) to jointly learn a robust feature extractor and an unbiased classifier.
In the feature extractor training stage, we develop a graph retrieval module to search for relevant graphs that directly enrich the intra-class diversity for the tail classes.
We also innovatively optimize a category-centered supervised contrastive loss to obtain discriminative representations.
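A category-centered supervised contrastive loss of the kind mentioned above can be sketched as follows. This is a generic hypothetical version, not RAHNet's exact formulation: each L2-normalized feature is attracted to its own class center and repelled from the other centers via a softmax over feature-center similarities.

```python
import numpy as np

def center_contrastive_loss(features, labels, centers, temperature=0.1):
    """Center-based contrastive loss: cross-entropy over the softmax of
    cosine similarities between each feature and all class centers."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    cents = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    logits = feats @ cents.T / temperature           # (N, C) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against each sample's true class center.
    return -log_prob[np.arange(len(labels)), labels].mean()

# Features aligned with their class centers give a near-zero loss.
centers = np.eye(2)
features = np.array([[1.0, 0.0], [0.0, 1.0]])
loss_good = center_contrastive_loss(features, [0, 1], centers)
loss_bad = center_contrastive_loss(features, [1, 0], centers)
```

Pulling features toward per-class centers rather than toward other in-batch samples keeps tail classes represented in every batch, which is one motivation for center-based variants in long-tailed settings.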
arXiv Detail & Related papers (2023-08-04T14:06:44Z) - Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition [37.62659619941791]
We study the problem of long-tailed visual recognition from the perspective of feature level.
Two novel logit adjustment methods are proposed to improve model performance at a modest computational overhead.
Experiments conducted on benchmark datasets demonstrate the superior performance of the proposed method over the state-of-the-art ones.
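The paper's specific Gaussian-form adjustments are not detailed in this summary, but the standard prior-based logit adjustment that such methods build on can be sketched as a minimal example: subtracting a scaled log-prior from each class logit so that frequent classes no longer dominate argmax decisions.

```python
import numpy as np

def adjust_logits(logits, class_counts, tau=1.0):
    """Post-hoc logit adjustment: subtract tau * log(prior) from each
    class logit, where the prior is the empirical class frequency."""
    priors = np.asarray(class_counts, dtype=float)
    priors /= priors.sum()
    return logits - tau * np.log(priors)

# Example: a tail class (index 2) narrowly loses under raw logits
# but wins after the prior correction.
logits = np.array([2.0, 1.0, 1.9])
counts = [1000, 100, 10]
raw_pred = int(np.argmax(logits))
adj_pred = int(np.argmax(adjust_logits(logits, counts)))
```

The temperature `tau` trades off head accuracy against tail accuracy; `tau = 1.0` corresponds to the balanced-error-optimal correction under the usual label-shift assumption.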
arXiv Detail & Related papers (2023-05-18T02:06:06Z) - Inducing Neural Collapse in Deep Long-tailed Learning [13.242721780822848]
We propose two explicit feature regularization terms to learn high-quality representation for class-imbalanced data.
With the proposed regularization, Neural Collapse phenomena will appear under the class-imbalanced distribution.
Our method is easily implemented, highly effective, and can be plugged into most existing methods.
arXiv Detail & Related papers (2023-02-24T05:07:05Z) - CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep
Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
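The full bi-level meta-optimization of such a weighting scheme is beyond a short sketch, but its core component, a small network that maps each sample's loss value to a weight, can be illustrated as follows. All parameters here are random placeholders; in the actual meta-learning setup they would themselves be trained on a small clean meta set.

```python
import numpy as np

def weighting_net(losses, w1, b1, w2, b2):
    """Toy loss-to-weight mapping: a one-hidden-layer MLP takes each
    sample's scalar loss and outputs a weight in (0, 1)."""
    x = losses.reshape(-1, 1)                  # (N, 1) loss values
    h = np.maximum(0.0, x @ w1 + b1)           # ReLU hidden layer
    out = h @ w2 + b2                          # (N, 1) raw scores
    return (1.0 / (1.0 + np.exp(-out))).ravel()  # sigmoid -> weights

rng = np.random.default_rng(0)
losses = np.array([0.1, 2.5, 0.3])
w1 = rng.normal(size=(1, 8)); b1 = np.zeros(8)
w2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
weights = weighting_net(losses, w1, b1, w2, b2)
weighted_loss = float((weights * losses).sum() / weights.sum())
```

Whether high-loss samples get up-weighted (class imbalance) or down-weighted (label noise) is exactly what the meta-model is supposed to learn per class, rather than being hand-designed.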
arXiv Detail & Related papers (2022-02-11T13:49:51Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
This causes severe bias of the head classes (with a majority of samples) against the tail ones.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
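A confusion matrix updated during training to accumulate prediction preferences, as PCB describes, can be sketched with a simple exponential moving average. The momentum update and row normalization here are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def update_confusion(conf, preds, labels, momentum=0.9):
    """Accumulate an exponential moving average of the model's prediction
    distribution for each ground-truth class. Rows are true classes,
    columns are predicted classes; off-diagonal mass concentrated on
    head columns reveals head-vs-tail bias."""
    batch = np.zeros_like(conf)
    for p, y in zip(preds, labels):
        batch[y, p] += 1.0
    # Normalize each observed row into a distribution before mixing.
    row_sums = batch.sum(axis=1, keepdims=True)
    observed = row_sums.ravel() > 0
    batch[observed] /= row_sums[observed]
    conf[observed] = momentum * conf[observed] + (1 - momentum) * batch[observed]
    return conf

conf = np.full((3, 3), 1.0 / 3)   # start from a uniform matrix
preds = [0, 0, 0, 1]              # tail samples misrouted to head class 0
labels = [0, 1, 2, 1]
conf = update_confusion(conf, preds, labels)
```

The accumulated matrix can then drive a pairwise correction term that penalizes the specific head classes each tail class is most often confused with.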
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Learning Debiased and Disentangled Representations for Semantic
Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z) - Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z) - The Devil is the Classifier: Investigating Long Tail Relation
Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.