Deep Long-Tailed Learning: A Survey
- URL: http://arxiv.org/abs/2110.04596v2
- Date: Sat, 15 Apr 2023 08:20:35 GMT
- Title: Deep Long-Tailed Learning: A Survey
- Authors: Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, Jiashi Feng
- Abstract summary: Deep long-tailed learning aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution.
Long-tailed class imbalance is a common problem in practical visual recognition tasks.
This paper provides a comprehensive survey on recent advances in deep long-tailed learning.
- Score: 163.16874896812885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep long-tailed learning, one of the most challenging problems in visual
recognition, aims to train well-performing deep models from a large number of
images that follow a long-tailed class distribution. In the last decade, deep
learning has emerged as a powerful recognition model for learning high-quality
image representations and has led to remarkable breakthroughs in generic visual
recognition. However, long-tailed class imbalance, a common problem in
practical visual recognition tasks, often limits the practicality of deep
network based recognition models in real-world applications, since they can be
easily biased towards dominant classes and perform poorly on tail classes. To
address this problem, a large number of studies have been conducted in recent
years, making promising progress in the field of deep long-tailed learning.
Considering the rapid evolution of this field, this paper aims to provide a
comprehensive survey on recent advances in deep long-tailed learning. To be
specific, we group existing deep long-tailed learning studies into three main
categories (i.e., class re-balancing, information augmentation and module
improvement), and review these methods following this taxonomy in detail.
Afterward, we empirically analyze several state-of-the-art methods by
evaluating to what extent they address the issue of class imbalance via a newly
proposed evaluation metric, i.e., relative accuracy. We conclude the survey by
highlighting important applications of deep long-tailed learning and
identifying several promising directions for future research.
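As a concrete illustration of the first taxonomy category, the sketch below shows class re-balancing in its simplest form: a cross-entropy loss re-weighted by inverse class frequency, written in PyTorch. It also includes a small helper for the relative-accuracy metric, under the assumption that the metric is the ratio of a method's accuracy to an upper-reference accuracy obtained with balanced training; the class counts and function names are illustrative and are not the survey's reference implementation.

```python
# Minimal sketch of the "class re-balancing" category: a cross-entropy loss
# re-weighted by inverse class frequency. Illustrative only; not the survey's code.
import torch
import torch.nn as nn

def inverse_frequency_weights(class_counts):
    """Per-class weights equal to (mean class count) / (count of class c)."""
    counts = torch.tensor(class_counts, dtype=torch.float32)
    return counts.sum() / (len(counts) * counts)

# Hypothetical long-tailed label distribution: a few head classes dominate.
class_counts = [5000, 2000, 500, 50, 10]
criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(class_counts))

logits = torch.randn(8, len(class_counts))           # dummy model outputs
labels = torch.randint(0, len(class_counts), (8,))   # dummy ground-truth labels
loss = criterion(logits, labels)

def relative_accuracy(method_acc, upper_reference_acc):
    """Assumed form of the survey's metric: accuracy relative to a balanced-training upper bound."""
    return method_acc / upper_reference_acc
```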
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- A Systematic Review on Long-Tailed Learning [12.122327726952946]
Long-tailed learning aims to build high-performance models on datasets with long-tailed distributions.
We propose a new taxonomy for long-tailed learning, which consists of eight different dimensions.
We present a systematic review of long-tailed learning methods, discussing their commonalities and alignable differences.
arXiv Detail & Related papers (2024-08-01T11:39:45Z)
- LTRL: Boosting Long-tail Recognition via Reflective Learning [12.784186450718652]
We propose a novel learning paradigm, called reflecting learning, for handling long-tail recognition.
Our method integrates three processes: reviewing past predictions during training, summarizing and leveraging the feature relations across classes, and correcting gradient conflicts between loss functions.
arXiv Detail & Related papers (2024-07-17T13:51:49Z)
- LCReg: Long-Tailed Image Classification with Latent Categories based Recognition [81.5551335554507]
We propose the Latent Categories based long-tail Recognition (LCReg) method.
Our hypothesis is that common latent features shared by head and tail classes can be used to improve feature representation.
Specifically, we learn a set of class-agnostic latent features shared by both head and tail classes, and then use semantic data augmentation on the latent features to implicitly increase the diversity of the training samples (see the sketch after this list).
arXiv Detail & Related papers (2023-09-13T02:03:17Z)
- Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions [10.502964056448283]
Training models for medical image analysis (MIA) typically requires expensive and time-consuming collection of labeled data.
We extensively investigated over 300 recent papers to provide a comprehensive overview of progress on label-efficient learning strategies in MIA.
Specifically, we provide an in-depth investigation, covering not only canonical semi-supervised, self-supervised, and multi-instance learning schemes, but also recently emerged active and annotation-efficient learning strategies.
arXiv Detail & Related papers (2023-03-22T11:51:49Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Long-tailed Recognition by Learning from Latent Categories [70.6272114218549]
We introduce a Latent Categories based long-tail Recognition (LCReg) method.
Specifically, we learn a set of class-agnostic latent features shared among the head and tail classes.
Then, we implicitly enrich training sample diversity by applying semantic data augmentation to the latent features.
arXiv Detail & Related papers (2022-06-02T12:19:51Z)
- A Survey on Long-Tailed Visual Recognition [13.138929184395423]
We focus on the problems caused by long-tailed data distributions, review representative long-tailed visual recognition datasets, and summarize mainstream long-tailed studies.
Based on the Gini coefficient, we quantitatively study 20 widely-used and large-scale visual datasets proposed in the last decade.
arXiv Detail & Related papers (2022-05-27T06:22:55Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
The lack of interpretability, robustness, and out-of-distribution generalization is becoming a challenge for existing visual models.
Inspired by the strong inference ability of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
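The two LCReg entries above describe learning class-agnostic latent features shared by head and tail classes and then applying semantic data augmentation to those features. The sketch below approximates that idea with class-conditional Gaussian perturbations of latent features; the function name, noise model, and hyperparameters are assumptions for illustration and are not taken from those papers.

```python
# Rough sketch (illustrative assumptions, not the LCReg authors' code):
# "semantic data augmentation on latent features" approximated by perturbing each
# sample's latent feature with noise scaled by its class's feature variance.
import torch

def semantic_augment(features, labels, num_classes, strength=0.5):
    """Perturb latent features with class-conditional Gaussian noise."""
    augmented = features.clone()
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() < 2:
            continue  # need at least two samples to estimate a variance
        class_var = features[mask].var(dim=0, unbiased=True)
        noise = torch.randn_like(features[mask]) * (strength * class_var).sqrt()
        augmented[mask] = features[mask] + noise
    return augmented

# Hypothetical usage on a batch of latent features shared by head and tail classes.
feats = torch.randn(16, 128)             # dummy latent features
labels = torch.randint(0, 4, (16,))      # dummy class labels
aug_feats = semantic_augment(feats, labels, num_classes=4)
```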
This list is automatically generated from the titles and abstracts of the papers on this site.