A Survey on Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2205.13775v1
- Date: Fri, 27 May 2022 06:22:55 GMT
- Title: A Survey on Long-Tailed Visual Recognition
- Authors: Lu Yang, He Jiang, Qing Song, Jun Guo
- Abstract summary: We focus on the problems caused by long-tailed data distribution, sort out the representative long-tailed visual recognition datasets and summarize some mainstream long-tailed studies.
Based on the Gini coefficient, we quantitatively study 20 widely-used and large-scale visual datasets proposed in the last decade.
- Score: 13.138929184395423
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The heavy reliance on data is one of the major factors currently limiting
the development of deep learning. Data quality largely determines how well deep
learning models perform, and a long-tailed distribution is one of the factors that
degrades it. The long-tailed phenomenon is widespread because power laws are
pervasive in nature. Under such distributions, the performance of deep learning
models is dominated by the head classes, while the learning of the tail classes
remains severely underdeveloped. To learn all classes adequately, many researchers
have studied and preliminarily addressed the long-tailed problem. In this survey,
we focus on the problems caused by long-tailed data distributions, sort out the
representative long-tailed visual recognition datasets, and summarize the
mainstream long-tailed studies. Specifically, we group these studies into ten
categories from the perspective of representation learning, and outline the
highlights and limitations of each category. In addition, we study four
quantitative metrics for evaluating imbalance and suggest using the Gini
coefficient to evaluate the long-tailedness of a dataset. Based on the Gini
coefficient, we quantitatively study 20 widely used, large-scale visual datasets
proposed in the last decade, and find that the long-tailed phenomenon is
widespread and has not been fully studied. Finally, we outline several future
directions for long-tailed learning to offer readers further ideas.
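The Gini coefficient the survey recommends can be computed directly from per-class sample counts. A minimal sketch of the standard sorted-values formula (the exact normalization the survey's authors use may differ):

```python
def gini(counts):
    """Gini coefficient of per-class sample counts.

    0.0 means a perfectly balanced dataset; values approaching 1.0
    indicate an extremely long-tailed class distribution.
    """
    xs = sorted(counts)           # ascending per-class counts
    n = len(xs)
    total = sum(xs)
    # Sorted-values formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n,
    # with classes indexed i = 1..n from smallest to largest.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1) / n

print(gini([100, 100, 100, 100]))   # balanced: 0.0
print(gini([1000, 100, 10, 1]))     # long-tailed: roughly 0.69
```

By this measure, a dataset whose classes follow a steep power law scores close to 1, which is how the survey quantifies "long-tailedness" across the 20 datasets it examines.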
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- A Systematic Review on Long-Tailed Learning [12.122327726952946]
Long-tailed learning aims to build high-performance models on datasets with long-tailed distributions.
We propose a new taxonomy for long-tailed learning, which consists of eight different dimensions.
We present a systematic review of long-tailed learning methods, discussing their commonalities and alignable differences.
arXiv Detail & Related papers (2024-08-01T11:39:45Z)
- LCReg: Long-Tailed Image Classification with Latent Categories based Recognition [81.5551335554507]
We propose the Latent Categories based long-tail Recognition (LCReg) method.
Our hypothesis is that common latent features shared by head and tail classes can be used to improve feature representation.
Specifically, we learn a set of class-agnostic latent features shared by both head and tail classes, and then use semantic data augmentation on the latent features to implicitly increase the diversity of the training sample.
arXiv Detail & Related papers (2023-09-13T02:03:17Z)
- Feature-Balanced Loss for Long-Tailed Visual Recognition [36.974139534723825]
Deep neural networks frequently suffer from performance degradation when the training data is long-tailed.
In this paper, we address the long-tailed problem from feature space and thereby propose the feature-balanced loss.
Experiments on multiple popular long-tailed recognition benchmarks demonstrate that the feature-balanced loss achieves superior performance gains compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-05-18T07:30:22Z)
- Propheter: Prophetic Teacher Guided Long-Tailed Distribution Learning [44.947984354108094]
We propose an innovative long-tailed learning paradigm that breaks the bottleneck by guiding the learning of deep networks with external prior knowledge.
The proposed prophetic paradigm acts as a promising solution to the challenge of limited class knowledge in long-tailed datasets.
arXiv Detail & Related papers (2023-04-09T02:02:19Z)
- Long-tailed Recognition by Learning from Latent Categories [70.6272114218549]
We introduce a Latent Categories based long-tail Recognition (LCReg) method.
Specifically, we learn a set of class-agnostic latent features shared among the head and tail classes.
Then, we implicitly enrich the training sample diversity via applying semantic data augmentation to the latent features.
arXiv Detail & Related papers (2022-06-02T12:19:51Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Existing visual models increasingly face challenges of limited interpretability, robustness, and out-of-distribution generalization.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Deep Long-Tailed Learning: A Survey [163.16874896812885]
Deep long-tailed learning aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution.
Long-tailed class imbalance is a common problem in practical visual recognition tasks.
This paper provides a comprehensive survey on recent advances in deep long-tailed learning.
arXiv Detail & Related papers (2021-10-09T15:25:22Z)
- Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
It enforces the model to focus on learning the subset-specific knowledge.
The proposed framework proved to be effective for the long-tailed retinal diseases recognition task.
arXiv Detail & Related papers (2021-04-22T13:39:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.