LCReg: Long-Tailed Image Classification with Latent Categories based Recognition
- URL: http://arxiv.org/abs/2309.07186v1
- Date: Wed, 13 Sep 2023 02:03:17 GMT
- Authors: Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin,
Guosheng Lin
- Abstract summary: We propose the Latent Categories based long-tail Recognition (LCReg) method.
Our hypothesis is that common latent features shared by head and tail classes can be used to improve feature representation.
Specifically, we learn a set of class-agnostic latent features shared by both head and tail classes, and then use semantic data augmentation on the latent features to implicitly increase the diversity of the training samples.
- Score: 81.5551335554507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we tackle the challenging problem of long-tailed image
recognition. Previous long-tailed recognition approaches mainly focus on data
augmentation or re-balancing strategies for the tail classes to give them more
attention during model training. However, these methods are limited by the
small number of training images for the tail classes, which results in poor
feature representations. To address this issue, we propose the Latent
Categories based long-tail Recognition (LCReg) method. Our hypothesis is that
common latent features shared by head and tail classes can be used to improve
feature representation. Specifically, we learn a set of class-agnostic latent
features shared by both head and tail classes, and then use semantic data
augmentation on the latent features to implicitly increase the diversity of the
training samples. We conduct extensive experiments on five long-tailed image
recognition datasets, and the results show that our proposed method
significantly improves the baselines.
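The two-step recipe above (learn shared class-agnostic latent prototypes, then augment in feature space) can be sketched minimally. The sketch below is an illustrative assumption, not the paper's implementation: the prototype values, the soft-assignment rule, and the diagonal-covariance noise model are all placeholders standing in for quantities that would be learned during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D-dim backbone features, K class-agnostic latent
# categories, N features in a batch.
D, K, N = 64, 8, 32

# Latent category prototypes shared by head and tail classes (random
# placeholders here; in training they would be learned with the model).
prototypes = rng.normal(size=(K, D))

def latent_assignments(features, prototypes):
    """Softly assign each feature to the K latent categories via a
    softmax over dot-product similarities."""
    logits = features @ prototypes.T                 # (N, K)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def semantic_augment(features, cov_diag, strength=0.5, rng=rng):
    """Semantic-style augmentation: perturb features with Gaussian noise
    scaled by a (diagonal) covariance estimate, adding diversity in
    feature space rather than pixel space."""
    noise = rng.normal(size=features.shape) * np.sqrt(cov_diag) * strength
    return features + noise

features = rng.normal(size=(N, D))
cov_diag = features.var(axis=0)      # per-dimension variance estimate
assign = latent_assignments(features, prototypes)
augmented = semantic_augment(features, cov_diag)
```

Because the augmentation happens on features shared across classes, tail classes benefit from variation statistics that head classes helped estimate.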
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Towards Long-Tailed Recognition for Graph Classification via Collaborative Experts [10.99232053983369]
We propose a novel long-tailed graph-level classification framework via Collaborative Multi-expert Learning (CoMe)
To equilibrate the contributions of head and tail classes, we first develop balanced contrastive learning from the view of representation learning.
We execute gated fusion and disentangled knowledge distillation among the multiple experts to promote collaboration in the multi-expert framework.
arXiv Detail & Related papers (2023-08-31T10:12:32Z)
- Improving GANs for Long-Tailed Data through Group Spectral Regularization [51.58250647277375]
We propose a novel group Spectral Regularizer (gSR) that prevents spectral explosion, alleviating mode collapse.
We find that gSR effectively combines with existing augmentation and regularization techniques, leading to state-of-the-art image generation performance on long-tailed data.
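The core idea of a group spectral regularizer can be sketched generically: penalize the largest singular value of each parameter group so that no group's spectrum explodes during training. The NumPy sketch below is a simplified illustration under assumed conventions (equal-sized column-wise groups, a plain sum over top singular values), not the gSR paper's exact formulation:

```python
import numpy as np

def group_spectral_penalty(W, num_groups):
    """Sum of the largest singular values of column-wise groups of W.
    Adding this term to the loss discourages any single group's
    spectral norm from exploding."""
    groups = np.split(W, num_groups, axis=1)  # requires equal-sized groups
    return sum(np.linalg.svd(g, compute_uv=False)[0] for g in groups)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))           # a hypothetical weight matrix
penalty = group_spectral_penalty(W, num_groups=4)
```

In practice such a penalty would be weighted and added to the generator/discriminator loss alongside standard regularization.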
arXiv Detail & Related papers (2022-08-21T17:51:05Z)
- Long-tailed Recognition by Learning from Latent Categories [70.6272114218549]
We introduce a Latent Categories based long-tail Recognition (LCReg) method.
Specifically, we learn a set of class-agnostic latent features shared among the head and tail classes.
Then, we implicitly enrich the training sample diversity via applying semantic data augmentation to the latent features.
arXiv Detail & Related papers (2022-06-02T12:19:51Z)
- Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images, one from a class-agnostic sampler and one from a class-aware sampler, and trains the model such that the representation of the interpolated image can be used to retrieve the centroids of both source classes.
Our result shows a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset with a real-world long-tailed distribution.
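The interpolation-and-retrieval idea in ICCL can be sketched schematically. Everything below is an assumed simplification for illustration: random stand-ins for features and centroids, mixup-style linear interpolation, and dot-product similarity in place of the paper's actual contrastive objective.

```python
import numpy as np

rng = np.random.default_rng(1)
D, C = 16, 5   # hypothetical feature dimension and number of classes

# Hypothetical class centroids, e.g. running means of per-class features.
centroids = rng.normal(size=(C, D))

def interpolate(x_a, x_b, lam):
    """Mixup-style linear interpolation between two representations."""
    return lam * x_a + (1.0 - lam) * x_b

# One representation drawn via a class-agnostic sampler, one via a
# class-aware sampler that over-samples tail classes.
x_agnostic, y_agnostic = rng.normal(size=D), 0
x_aware, y_aware = rng.normal(size=D), 3

lam = 0.7
z = interpolate(x_agnostic, x_aware, lam)

# Schematic objective: the interpolated representation should score both
# source centroids highly, weighted by the mixing coefficients.
scores = centroids @ z
loss = -(lam * scores[y_agnostic] + (1 - lam) * scores[y_aware])
```

The class-aware sampler ensures tail classes appear in the interpolated pairs often enough for their centroids to receive a useful training signal.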
arXiv Detail & Related papers (2021-10-19T15:24:48Z)
- Class-Balanced Distillation for Long-Tailed Visual Recognition [100.10293372607222]
Real-world imagery is often characterized by a significant imbalance of the number of images per class, leading to long-tailed distributions.
In this work, we introduce a new framework, motivated by the key observation that a feature representation learned with instance sampling is far from optimal in a long-tailed setting.
Our main contribution is a new training method that leverages knowledge distillation to enhance feature representations.
arXiv Detail & Related papers (2021-04-12T08:21:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.