DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation
for Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2207.02173v1
- Date: Tue, 5 Jul 2022 17:01:27 GMT
- Title: DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation
for Long-Tailed Visual Recognition
- Authors: Jae Soon Baik, In Young Yoon, Jun Won Choi
- Abstract summary: We develop a simple yet effective method to improve the performance of DBN without cumulative learning.
We present class-conditional temperature scaling that mitigates bias toward the majority class for the proposed DBN architecture.
- Score: 7.94190631530826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing interest in the challenging visual perception task of
learning from long-tailed class distributions. The extreme class imbalance in
the training dataset biases the model toward recognizing majority-class data
over minority-class data. Recently, the dual branch network (DBN) framework
has been proposed, in which two branch networks, a conventional branch and a
re-balancing branch, are employed to improve the accuracy of long-tailed
visual recognition. The re-balancing branch uses a reversed sampler
to generate class-balanced training samples to mitigate bias due to class
imbalance. Although this strategy has been quite successful in handling bias,
training with a reversed sampler can degrade representation learning
performance. To alleviate this issue, the conventional method used a carefully
designed cumulative learning strategy, in which the influence of the
re-balancing branch gradually increases over the course of training. In this
study, we aim to develop a simple yet effective method that improves the
performance of the DBN without cumulative learning, which is difficult to
optimize.
We devise a simple data augmentation method termed bilateral mixup
augmentation, which combines one sample from the uniform sampler with another
sample from the reversed sampler to produce a training sample. Furthermore, we
present class-conditional temperature scaling that mitigates bias toward the
majority class for the proposed DBN architecture. Our experiments performed on
widely used long-tailed visual recognition datasets show that bilateral mixup
augmentation is quite effective in improving the representation learning
performance of DBNs, and that the proposed method achieves state-of-the-art
performance for some categories.
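Since the abstract describes both components concretely, the following minimal PyTorch sketch shows one way bilateral mixup and class-conditional temperature scaling could fit together. The function names, the Beta(alpha, alpha) mixing coefficient, and the frequency-based form of the per-class temperature are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def bilateral_mixup(x_uniform, x_reversed, alpha=1.0):
    """Mix a batch from the uniform sampler with a batch from the reversed
    (class-balanced) sampler; lambda is drawn from Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_uniform + (1.0 - lam) * x_reversed, lam

def class_conditional_scale(logits, class_counts, gamma=0.5):
    """Damp each class's logit with a temperature that grows with class
    frequency; one plausible instantiation, not the paper's exact formula."""
    temp = (class_counts / class_counts.min()).pow(gamma)  # >= 1, largest for head classes
    return logits / temp  # majority-class logits are scaled down the most

# Toy usage: a linear classifier on random data stands in for a branch network.
torch.manual_seed(0)
num_classes, dim = 10, 32
class_counts = torch.logspace(3, 0, num_classes)  # long-tailed class sizes
model = torch.nn.Linear(dim, num_classes)
x_u, y_u = torch.randn(8, dim), torch.randint(num_classes, (8,))  # uniform sampler
x_r, y_r = torch.randn(8, dim), torch.randint(num_classes, (8,))  # reversed sampler

x_mixed, lam = bilateral_mixup(x_u, x_r)
logits = class_conditional_scale(model(x_mixed), class_counts)
loss = lam * F.cross_entropy(logits, y_u) + (1 - lam) * F.cross_entropy(logits, y_r)
loss.backward()
```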
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Debiased Sample Selection for Combating Noisy Labels [24.296451733127956]
We propose a noIse-Tolerant Expert Model (ITEM) for debiased learning in sample selection.
Specifically, to mitigate the training bias, we design a robust network architecture that integrates multiple experts.
By training on the mixture of two class-discriminative mini-batches, the model mitigates the effect of the imbalanced training set.
arXiv Detail & Related papers (2024-01-24T10:37:28Z) - Integrating Local Real Data with Global Gradient Prototypes for
Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Leveraging Angular Information Between Feature and Classifier for
Long-tailed Learning: A Prediction Reformulation Approach [90.77858044524544]
We reformulate the recognition probabilities through the included angles between features and classifier weights, without re-balancing the classifier weights.
Inspired by the performance gains of this reformulated predictive form, we further explore the properties of angular prediction.
Our method obtains the best performance among peer methods without pretraining on CIFAR10/100-LT and ImageNet-LT (a generic cosine-classifier sketch follows this list).
arXiv Detail & Related papers (2022-12-03T07:52:48Z) - Supervised Contrastive Learning on Blended Images for Long-tailed
Recognition [32.876647081080655]
Real-world data often have a long-tailed distribution, where the number of samples per class is not equal over training classes.
In this paper, we propose a novel long-tailed recognition method to balance the latent feature space.
arXiv Detail & Related papers (2022-11-22T01:19:00Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
It causes severe biases of the head classes (with majority samples) against the tailed ones.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix that is updated during training to accumulate the ongoing prediction preferences (a sketch of this confusion-matrix bookkeeping follows this list).
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images drawn from a class-agnostic sampler and a class-aware sampler, and trains the model such that the representation of the interpolated image can be used to retrieve the centroids of both source classes (a minimal sketch of this centroid retrieval follows this list).
Our result shows a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset with a real-world long-tailed distribution.
arXiv Detail & Related papers (2021-10-19T15:24:48Z) - Class-Balanced Distillation for Long-Tailed Visual Recognition [100.10293372607222]
Real-world imagery is often characterized by a significant imbalance of the number of images per class, leading to long-tailed distributions.
In this work, we introduce a new framework, motivated by the key observation that a feature representation learned with instance sampling is far from optimal in a long-tailed setting.
Our main contribution is a new training method that leverages knowledge distillation to enhance feature representations.
arXiv Detail & Related papers (2021-04-12T08:21:03Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The resulting capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than a convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)