Feature-Balanced Loss for Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2305.10772v1
- Date: Thu, 18 May 2023 07:30:22 GMT
- Title: Feature-Balanced Loss for Long-Tailed Visual Recognition
- Authors: Mengke Li, Yiu-ming Cheung, Juyong Jiang
- Abstract summary: Deep neural networks frequently suffer from performance degradation when the training data is long-tailed.
In this paper, we address the long-tailed problem from the perspective of the feature space and propose the feature-balanced loss.
Experiments on multiple popular long-tailed recognition benchmarks demonstrate that the feature-balanced loss achieves superior performance gains compared with state-of-the-art methods.
- Score: 36.974139534723825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks frequently suffer from performance degradation when the training data is long-tailed, because several majority classes dominate the training and result in a biased model. Recent studies have made great efforts to solve this issue by obtaining good representations from the data space, but few pay attention to the influence of the feature norm on the predicted results. In this paper, we therefore address the long-tailed problem from the perspective of the feature space and propose the feature-balanced loss. Specifically, we encourage larger feature norms for the tail classes by giving them relatively stronger stimuli. Moreover, the stimulus intensity is gradually increased in a curriculum-learning manner, which improves the generalization of the tail classes while maintaining the performance of the head classes. Extensive experiments on multiple popular long-tailed recognition benchmarks demonstrate that the feature-balanced loss achieves superior performance gains compared with state-of-the-art methods.
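The abstract describes the mechanism but not the exact formula. Below is a minimal PyTorch-style sketch of the described idea, assuming the per-class stimulus scales with the log inverse class frequency, acts on the ground-truth logit in proportion to the feature norm, and is ramped up by a quadratic curriculum schedule; all three choices are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def feature_balanced_loss(features, weights, labels, class_counts,
                          epoch, total_epochs, lam_max=0.5):
    """Illustrative sketch (not the paper's exact formulation): add a
    class-dependent stimulus to the ground-truth logit that grows with
    the feature norm, so tail-class samples are rewarded for larger
    feature norms.

    features:     (B, D) penultimate-layer features
    weights:      (C, D) linear classifier weights
    class_counts: (C,) number of training samples per class
    """
    logits = features @ weights.t()                            # (B, C)

    # Assumed stimulus strength: 0 for the largest class, lam_max for
    # the smallest, scaled by log inverse class frequency.
    log_inv = torch.log(class_counts.max().float() / class_counts.float())
    lam = lam_max * log_inv / log_inv.max().clamp(min=1e-12)   # (C,)

    # Assumed curriculum: quadratic ramp of the stimulus over training.
    alpha = (epoch / total_epochs) ** 2

    # Stimulus on the target logit, proportional to the feature norm:
    # enlarging ||f|| on a tail sample now directly lowers the loss.
    stimulus = alpha * lam[labels] * features.norm(dim=1)      # (B,)
    logits = logits.scatter_add(1, labels.unsqueeze(1),
                                stimulus.unsqueeze(1))
    return F.cross_entropy(logits, labels)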
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
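The entry above names a neighbor-silencing loss but gives no formula. Purely as an illustration of the stated goal, the hypothetical sketch below down-weights, inside the softmax denominator, the auxiliary classes most similar to each sample's true class; the prototype similarity, the choice of k, and the silencing factor mu are all assumptions.

```python
import math
import torch
import torch.nn.functional as F

def neighbor_silencing_ce(logits, labels, class_protos, aux_mask,
                          k=5, mu=0.2):
    """Hypothetical 'neighbor-silencing' cross-entropy (an assumed
    form, not the paper's definition): shrink the softmax contribution
    of the k auxiliary classes nearest to each sample's true class.

    logits:       (B, C) scores over real + auxiliary classes
    class_protos: (C, D) one prototype per class (e.g. classifier rows)
    aux_mask:     (C,) bool, True for auxiliary classes (needs >= k)
    """
    protos = F.normalize(class_protos, dim=1)
    # Cosine similarity; non-auxiliary columns are masked out so the
    # top-k picks auxiliary neighbors only.
    sim = (protos @ protos.t()).masked_fill(~aux_mask.unsqueeze(0),
                                            float('-inf'))
    neighbors = sim.topk(k, dim=1).indices          # (C, k)

    # exp(z_j + log mu) = mu * exp(z_j): silenced denominator terms.
    offset = torch.zeros_like(logits)
    rows = torch.arange(logits.size(0)).unsqueeze(1)
    offset[rows, neighbors[labels]] = math.log(mu)

    target = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (torch.logsumexp(logits + offset, dim=1) - target).mean()
```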
- LCReg: Long-Tailed Image Classification with Latent Categories based Recognition [81.5551335554507]
We propose the Latent Categories based long-tail Recognition (LCReg) method.
Our hypothesis is that common latent features shared by the head and tail classes can be used to improve feature representation.
Specifically, we learn a set of class-agnostic latent features shared by both head and tail classes, and then apply semantic data augmentation to the latent features to implicitly increase the diversity of the training samples (one possible realization is sketched after this entry).
arXiv Detail & Related papers (2023-09-13T02:03:17Z)
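The blurb does not specify the augmentation. A common feature-space realization of semantic data augmentation (in the spirit of ISDA) samples class-shaped Gaussian noise, as sketched below; the per-class covariance input and the scale `lam` are assumptions, and the paper may instead use a closed-form, implicit variant rather than explicit sampling.

```python
import torch

def semantic_augment(latent, labels, class_cov, lam=0.5, n_aug=1):
    """Sketch of explicit semantic augmentation on latent features
    (one common realization; the paper's exact scheme may differ):
    perturb each latent vector with noise drawn from its class's
    feature covariance, implicitly diversifying training samples.

    latent:    (B, D) class-agnostic latent features
    labels:    (B,) class indices
    class_cov: (C, D, D) running per-class covariance estimates
    """
    cov = class_cov[labels]                          # (B, D, D)
    d = latent.size(1)
    # Cholesky factor L with L @ L.T = cov (small ridge for stability).
    chol = torch.linalg.cholesky(cov + 1e-4 * torch.eye(d))
    eps = torch.randn(latent.size(0), n_aug, d)
    noise = torch.einsum('bij,bnj->bni', chol, eps)  # class-shaped noise
    return latent.unsqueeze(1) + lam * noise         # (B, n_aug, D)
```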
- Fairness Improves Learning from Noisily Labeled Long-Tailed Data [119.0612617460727]
Long-tailed and noisily labeled data frequently appear in real-world applications and pose significant challenges for learning.
We introduce the Fairness Regularizer (FR), which penalizes the performance gap between any two sub-populations.
We show that the introduced fairness regularizer improves the performance of tail sub-populations as well as the overall learning performance (a rough sketch follows this entry).
arXiv Detail & Related papers (2023-03-22T03:46:51Z)
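The FR's exact form is not given here; one direct reading of "regularizing the performance gap between any two sub-populations" is a penalty on the difference between per-group average losses, sketched below with an assumed head/tail split and weight `beta`.

```python
import torch
import torch.nn.functional as F

def fairness_regularized_loss(logits, labels, head_classes, beta=1.0):
    """Sketch of a fairness-style regularizer (an assumed form, not
    necessarily the paper's FR): cross-entropy plus a penalty on the
    gap between head and tail sub-population losses.

    head_classes: (C,) bool, True for head (frequent) classes
    """
    per_sample = F.cross_entropy(logits, labels, reduction='none')
    is_head = head_classes[labels]                   # (B,) bool
    # Assumes the batch contains both head- and tail-class samples.
    gap = (per_sample[is_head].mean() - per_sample[~is_head].mean()).abs()
    return per_sample.mean() + beta * gap
```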
- Long-tailed Recognition by Learning from Latent Categories [70.6272114218549]
We introduce a Latent Categories based long-tail Recognition (LCReg) method.
Specifically, we learn a set of class-agnostic latent features shared among the head and tail classes.
Then, we implicitly enrich the training sample diversity by applying semantic data augmentation to the latent features.
arXiv Detail & Related papers (2022-06-02T12:19:51Z)
- A Survey on Long-Tailed Visual Recognition [13.138929184395423]
We focus on the problems caused by long-tailed data distributions, survey representative long-tailed visual recognition datasets, and summarize mainstream long-tailed studies.
Based on the Gini coefficient, we quantitatively study 20 widely used, large-scale visual datasets proposed in the last decade (the coefficient is computed as in the sketch after this entry).
arXiv Detail & Related papers (2022-05-27T06:22:55Z)
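The Gini coefficient itself is standard; applied to per-class sample counts it is 0 for a perfectly balanced dataset and approaches 1 as a few head classes dominate. A direct computation:

```python
import numpy as np

def gini_coefficient(class_counts):
    """Standard Gini coefficient of a class-frequency distribution,
    using the mean-absolute-difference formulation
    G = sum_i (2i - n - 1) * x_i / (n * sum(x)) with x sorted ascending."""
    x = np.sort(np.asarray(class_counts, dtype=float))
    n = len(x)
    index = np.arange(1, n + 1)
    return np.sum((2 * index - n - 1) * x) / (n * np.sum(x))

# e.g. an exponentially long-tailed 10-class dataset:
print(gini_coefficient([5000 * 0.5 ** i for i in range(10)]))  # ~0.70
```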
- Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often imbalanced and long-tailed, and deep models struggle to recognize rare classes in the presence of frequent ones.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strengths of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to the state of the art, and an extended ensemble establishes a new state of the art on two long-tailed recognition benchmarks (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
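The blurb describes the ensemble only at a high level. A minimal sketch of the general recipe, assuming class-balanced resampling per expert and softmax averaging at inference (both assumptions; the paper's expert construction and fusion rule may differ):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def balanced_loader(dataset, labels, batch_size=128):
    """Class-balanced sampler: every class is drawn equally often
    (assumed training recipe for each expert)."""
    counts = torch.bincount(labels)
    weights = 1.0 / counts[labels].float()   # inverse-frequency weights
    sampler = WeightedRandomSampler(weights, num_samples=len(labels))
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)

@torch.no_grad()
def ensemble_predict(experts, x):
    """Average the softmax outputs of independently trained experts
    (assumed fusion rule) and take the argmax."""
    probs = torch.stack([expert(x).softmax(dim=1) for expert in experts])
    return probs.mean(dim=0).argmax(dim=1)
```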