Breadcrumbs: Adversarial Class-Balanced Sampling for Long-tailed Recognition
- URL: http://arxiv.org/abs/2105.00127v1
- Date: Sat, 1 May 2021 00:21:26 GMT
- Title: Breadcrumbs: Adversarial Class-Balanced Sampling for Long-tailed Recognition
- Authors: Bo Liu, Haoxiang Li, Hao Kang, Gang Hua, Nuno Vasconcelos
- Abstract summary: The problem of long-tailed recognition, where the number of examples per class is highly unbalanced, is considered.
It is hypothesized that this is due to the repeated sampling of examples and can be addressed by feature space augmentation.
A new feature augmentation strategy, EMANATE, based on back-tracking of features across epochs during training, is proposed.
A new sampling procedure, Breadcrumb, is then introduced to implement adversarial class-balanced sampling without extra computation.
- Score: 95.93760490301395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of long-tailed recognition, where the number of examples per
class is highly unbalanced, is considered. While training with class-balanced
sampling has been shown effective for this problem, it is known to over-fit to
few-shot classes. It is hypothesized that this is due to the repeated sampling
of examples and can be addressed by feature space augmentation. A new feature
augmentation strategy, EMANATE, based on back-tracking of features across
epochs during training, is proposed. It is shown that, unlike class-balanced
sampling, this is an adversarial augmentation strategy. A new sampling
procedure, Breadcrumb, is then introduced to implement adversarial
class-balanced sampling without extra computation. Experiments on three popular
long-tailed recognition datasets show that Breadcrumb training produces
classifiers that outperform existing solutions to the problem.
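The abstract does not detail EMANATE or Breadcrumb, but the core mechanism it describes, back-tracking of features across epochs, can be sketched as a per-class feature bank consulted during class-balanced training. Everything below (the class name, buffer depth, and mixing probability) is an illustrative assumption, not the paper's actual procedure.

```python
import random
from collections import defaultdict, deque

import torch

class FeatureBreadcrumbs:
    """Illustrative per-class feature bank ("breadcrumbs"): features
    computed in earlier epochs are kept and later mixed into batches,
    so repeatedly sampled few-shot examples see varied features."""

    def __init__(self, epochs_kept: int = 3):
        # For each class id, a bounded queue of features back-tracked
        # from previous epochs (hypothetical buffer depth).
        self.bank = defaultdict(lambda: deque(maxlen=epochs_kept))

    def store(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        """Record detached features after a forward pass; no extra
        forward or backward computation is needed."""
        for f, y in zip(features.detach().cpu(), labels.cpu()):
            self.bank[int(y)].append(f)

    def augment(self, features: torch.Tensor, labels: torch.Tensor):
        """With probability 0.5 (an assumed rate), swap each feature
        for one stored from a past epoch of the same class."""
        out = features.clone()
        for i, y in enumerate(labels):
            past = self.bank.get(int(y))
            if past and random.random() < 0.5:
                out[i] = random.choice(list(past)).to(features.device)
        return out
```

During the class-balanced phase, the classifier head would be trained on `augment(features, labels)` rather than the raw features; whether this matches the paper's exact schedule cannot be determined from the abstract alone.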
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
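The neighbor-silencing loss above is only named, not defined. One plausible form, assumed here purely for illustration, dampens the softmax competition from designated neighbor (auxiliary) classes; the mask construction and `alpha` are hypothetical.

```python
import torch

def neighbor_silencing_ce(logits, labels, neighbor_mask, alpha: float = 0.1):
    """Cross-entropy whose denominator down-weights classes flagged as
    visual neighbors of the target, so auxiliary classes do not
    dominate the gradient. A guess at the loss's shape, not the
    paper's definition. neighbor_mask: (B, C) bool, True where class k
    is a neighbor of sample i's label."""
    w = torch.where(neighbor_mask,
                    torch.full_like(logits, alpha),
                    torch.ones_like(logits))
    w.scatter_(1, labels.unsqueeze(1), 1.0)  # never silence the true class
    log_z = torch.logsumexp(logits + w.log(), dim=1)  # weighted partition
    z_y = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (log_z - z_y).mean()
```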
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
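The summary does not give LORT's loss. For orientation, the block below shows the classic logit-adjustment baseline (Menon et al., 2021), a different but well-known logits-retargeting scheme; note that it needs per-class counts, which LORT explicitly avoids.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, labels, class_counts, tau: float = 1.0):
    """Shift logits by tau times the log class priors so that training
    targets a balanced error; a standard baseline, not LORT itself."""
    log_priors = torch.log(class_counts.float() / class_counts.sum())
    return F.cross_entropy(logits + tau * log_priors, labels)
```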
- How Re-sampling Helps for Long-Tail Learning? [45.187004699024435]
Long-tail learning has received significant attention due to the challenge posed by extremely imbalanced datasets.
Recent studies claim that re-sampling brings negligible performance improvements in modern long-tail learning tasks.
We propose a new context shift augmentation module that generates diverse training images for the tail class.
arXiv Detail & Related papers (2023-10-27T16:20:34Z)
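The context-shift module itself is not specified in the summary, but the re-sampling it builds on is the standard class-balanced kind, which can be written with PyTorch's `WeightedRandomSampler`:

```python
import torch
from torch.utils.data import WeightedRandomSampler

def class_balanced_sampler(labels: torch.Tensor) -> WeightedRandomSampler:
    """Draw every class with equal probability; tail examples repeat,
    which is exactly the repetition the augmentation module targets."""
    counts = torch.bincount(labels)
    per_sample_weight = 1.0 / counts[labels].float()
    return WeightedRandomSampler(per_sample_weight,
                                 num_samples=len(labels),
                                 replacement=True)
```

A `DataLoader(dataset, sampler=class_balanced_sampler(labels))` then yields class-balanced batches.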
- ProBoost: a Boosting Method for Probabilistic Classifiers [55.970609838687864]
ProBoost is a new boosting algorithm for probabilistic classifiers.
It uses the uncertainty of each training sample to determine the most challenging/uncertain ones.
It produces a sequence that progressively focuses on the samples found to have the highest uncertainty.
arXiv Detail & Related papers (2022-09-04T12:49:20Z)
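ProBoost's exact uncertainty measure is not stated in the summary; predictive entropy, used below, is one natural stand-in for ranking the most challenging samples.

```python
import torch

def most_uncertain(probs: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k samples with the highest predictive
    entropy; a generic proxy for ProBoost's selection criterion.
    probs: (N, C) class probabilities from the current classifier."""
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(k).indices
```

Each subsequent weak learner would then up-weight or re-train on these indices.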
- Combating Noisy Labels in Long-Tailed Image Classification [33.40963778043824]
This paper makes an early effort to tackle image classification under both a long-tailed distribution and label noise.
Existing noise-robust learning methods cannot work in this scenario as it is challenging to differentiate noisy samples from clean samples of tail classes.
We propose a new learning paradigm based on matching between inferences on weak and strong data augmentations to screen out noisy samples.
arXiv Detail & Related papers (2022-09-01T07:31:03Z)
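A minimal version of the weak/strong matching rule might keep a sample only when the two augmented views agree confidently with each other and with the given label; the threshold and the exact agreement test below are assumptions, not the paper's criterion.

```python
import torch

@torch.no_grad()
def screen_noisy(model, weak, strong, labels, threshold: float = 0.9):
    """Boolean mask (True = likely clean) from prediction agreement
    between weak and strong augmentations of the same images."""
    p_weak = model(weak).softmax(dim=1)
    p_strong = model(strong).softmax(dim=1)
    agree = p_weak.argmax(dim=1) == p_strong.argmax(dim=1)
    confident = p_weak.max(dim=1).values > threshold
    matches_label = p_weak.argmax(dim=1) == labels
    return agree & confident & matches_label
```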
- Semi-supervised Long-tailed Recognition using Alternate Sampling [95.93760490301395]
The main challenges in long-tailed recognition come from the imbalanced data distribution and sample scarcity in the tail classes.
We propose a new recognition setting, namely semi-supervised long-tailed recognition.
We demonstrate significant accuracy improvements over other competitive methods on two datasets.
arXiv Detail & Related papers (2021-05-01T00:43:38Z)
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
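"Attentive relation routing" is only named above; one plausible reading, sketched here with hypothetical shapes, computes softmax attention between an instance encoding and per-relation prototypes and aggregates the prototypes with those soft weights.

```python
import torch

def attentive_routing(instance: torch.Tensor,
                      relation_protos: torch.Tensor) -> torch.Tensor:
    """Soft-weighted aggregation of relation prototypes.
    instance: (B, D) encodings; relation_protos: (R, D) prototypes.
    A guess at the mechanism, not the paper's architecture."""
    scores = instance @ relation_protos.t()   # (B, R) similarities
    weights = scores.softmax(dim=1)           # soft routing weights
    return weights @ relation_protos          # (B, D) routed features
```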
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN recognizes highly overlapping classes with far fewer parameters than a convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
- M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks struggle to generalize to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
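The translation idea lends itself to a short sketch: perturb a majority-class input by gradient descent until the classifier assigns it to a chosen minority class, then treat the result as a synthetic minority sample. The step count and learning rate below are illustrative, and the paper's actual objective involves more than this bare loop.

```python
import torch
import torch.nn.functional as F

def major_to_minor(model, x_major: torch.Tensor, minor_class: int,
                   steps: int = 10, lr: float = 0.1) -> torch.Tensor:
    """Translate majority-class images toward a minority class by
    minimizing the classifier's loss for that class w.r.t. the input."""
    x = x_major.clone().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    target = torch.full((x.size(0),), minor_class,
                        dtype=torch.long, device=x_major.device)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), target).backward()
        opt.step()
    return x.detach()  # synthetic minority samples
```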