Mix from Failure: Confusion-Pairing Mixup for Long-Tailed Recognition
- URL: http://arxiv.org/abs/2411.07621v1
- Date: Tue, 12 Nov 2024 08:08:31 GMT
- Title: Mix from Failure: Confusion-Pairing Mixup for Long-Tailed Recognition
- Authors: Youngseok Yoon, Sangwoo Hong, Hyungjoon Joo, Yao Qin, Haewon Jeong, Jungwoo Lee
- Abstract summary: Long-tailed image recognition is a problem that considers a real-world class distribution rather than an artificial uniform one.
In this paper, we tackle the problem from a different perspective: augmenting the training dataset to enhance the sample diversity of minority classes.
Our method, namely Confusion-Pairing Mixup (CP-Mix), estimates the confusion distribution of the model and handles the data deficiency problem.
- Score: 14.009773753739282
- License:
- Abstract: Long-tailed image recognition is a computer vision problem that considers a real-world class distribution rather than an artificial uniform one. Existing methods typically sidestep the problem by i) adjusting a loss function, ii) decoupling classifier learning, or iii) proposing a new multi-head architecture called experts. In this paper, we tackle the problem from a different perspective: augmenting the training dataset to enhance the sample diversity of minority classes. Specifically, our method, namely Confusion-Pairing Mixup (CP-Mix), estimates the confusion distribution of the model and handles the data deficiency problem by augmenting samples from confusion pairs in real time. In this way, CP-Mix trains the model to mitigate its weakness and distinguish pairs of classes it frequently misclassifies. In addition, CP-Mix utilizes a novel mixup formulation to handle the bias in decision boundaries that originates from the imbalanced dataset. Extensive experiments demonstrate that CP-Mix outperforms existing methods for long-tailed image recognition and successfully relieves the confusion of the classifier.
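To make the abstract concrete, below is a minimal PyTorch sketch of the confusion-pairing idea. It is not the authors' released implementation: the confusion-matrix estimation, the pair-sampling scheme, and the Beta(alpha, alpha) mixing coefficient are illustrative assumptions, and the mixing step uses the standard mixup formulation rather than the paper's novel one.
```python
import torch
import torch.nn.functional as F

def estimate_confusion(model, loader, num_classes, device="cpu"):
    """Count how often true class i is predicted as class j; off-diagonal entries are confusions."""
    conf = torch.zeros(num_classes, num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            for t, p in zip(y.tolist(), pred.tolist()):
                conf[t, p] += 1
    conf.fill_diagonal_(0)  # keep only misclassifications
    return conf

def sample_confusion_pairs(conf, n_pairs):
    """Draw class pairs (i, j) with probability proportional to how often i is mistaken for j."""
    probs = conf.flatten()
    probs = probs / probs.sum().clamp(min=1e-8)
    idx = torch.multinomial(probs, n_pairs, replacement=True)
    return idx // conf.size(0), idx % conf.size(0)  # (true class, confused-with class)

def pair_mixup(x_i, x_j, y_i, y_j, num_classes, alpha=1.0):
    """Mix two samples drawn from a confused class pair, with matching soft labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_mix = lam * x_i + (1 - lam) * x_j
    y_mix = lam * F.one_hot(y_i, num_classes).float() + (1 - lam) * F.one_hot(y_j, num_classes).float()
    return x_mix, y_mix
```
In a training loop one would periodically refresh the confusion matrix, draw one image per class of each sampled pair from per-class pools, and optimize a cross-entropy-style loss against the soft mixed labels.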
Related papers
- Adaptive Mix for Semi-Supervised Medical Image Segmentation [22.69909762038458]
We propose an Adaptive Mix algorithm (AdaMix) for image mix-up in a self-paced learning manner.
We develop three frameworks with our AdaMix, i.e., AdaMix-ST, AdaMix-MT, and AdaMix-CT, for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2024-07-31T13:19:39Z)
- SUMix: Mixup with Semantic and Uncertain Information [41.99721365685618]
Mixup data augmentation approaches have been applied for various tasks of deep learning.
We propose a novel approach named SUMix to learn the mixing ratio as well as the uncertainty for the mixed samples during the training process.
arXiv Detail & Related papers (2024-07-10T16:25:26Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
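The minority/majority pairing described in the entry above can be sketched in a few lines; the Beta parameter and the choice to bias the coefficient toward the minority sample are assumptions for illustration, not that paper's exact procedure.
```python
import torch
import torch.nn.functional as F

def mix_minority_majority(x_min, y_min, x_maj, y_maj, num_classes, alpha=0.5):
    """Synthesize a sample by interpolating a minority-class image toward a majority-class one."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.maximum(lam, 1 - lam)  # bias toward the minority sample (assumption)
    x_new = lam * x_min + (1 - lam) * x_maj
    y_new = lam * F.one_hot(y_min, num_classes).float() + (1 - lam) * F.one_hot(y_maj, num_classes).float()
    return x_new, y_new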
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) are trained with a distribution adjustment regularizer as a solution.
Our method is benchmarked on the CIFAR100/CIFAR100LT datasets and shows outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
- DualMix: Unleashing the Potential of Data Augmentation for Online Class-Incremental Learning [14.194817677415065]
We show that augmented samples with lower correlation to the original data are more effective in preventing forgetting.
We propose the Enhanced Mixup (EnMix) method that mixes the augmented samples and their labels simultaneously.
To solve the class imbalance problem, we design an Adaptive Mixup (AdpMix) method to calibrate the decision boundaries.
arXiv Detail & Related papers (2023-03-14T12:55:42Z)
- Supervised Contrastive Learning on Blended Images for Long-tailed Recognition [32.876647081080655]
Real-world data often have a long-tailed distribution, where the number of samples per class is not equal across training classes.
In this paper, we propose a novel long-tailed recognition method to balance the latent feature space.
arXiv Detail & Related papers (2022-11-22T01:19:00Z)
- Contrastive-mixup learning for improved speaker verification [17.93491404662201]
This paper proposes a novel formulation of prototypical loss with mixup for speaker verification.
Mixup is a simple yet efficient data augmentation technique that fabricates a weighted combination of random data point and label pairs.
arXiv Detail & Related papers (2022-02-22T05:09:22Z)
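The mixup operation referenced in the Contrastive-mixup entry above is the standard batch-level formulation from the original mixup paper; a minimal sketch follows (the Beta parameter is a common default, not a value taken from that paper).
```python
import torch

def mixup_batch(x, y_soft, alpha=0.2):
    """Standard mixup: convex combination of a batch with a shuffled copy of itself.

    x: (B, ...) inputs; y_soft: (B, C) one-hot or soft labels.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], lam * y_soft + (1 - lam) * y_soft[perm]
```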
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- ReMix: Towards Image-to-Image Translation with Limited Data [154.71724970593036]
We propose a data augmentation method (ReMix) to tackle this issue.
We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples.
The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results.
arXiv Detail & Related papers (2021-03-31T06:24:10Z)
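The feature-level interpolation used by ReMix can be sketched generically; the encoder below is a placeholder and the paper's perceptual content loss is omitted.
```python
import torch

def feature_interpolate(encoder, x_i, x_j, alpha=1.0):
    """Interpolate two training samples in feature space rather than pixel space."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    h_i, h_j = encoder(x_i), encoder(x_j)
    return lam * h_i + (1 - lam) * h_j, lam
```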
- XEM: An Explainable-by-Design Ensemble Method for Multivariate Time Series Classification [61.33695273474151]
We present XEM, an eXplainable-by-design Ensemble method for Multivariate time series classification.
XEM relies on a new hybrid ensemble method that combines an explicit boosting-bagging approach and an implicit divide-and-conquer approach.
Our evaluation shows that XEM outperforms the state-of-the-art MTS classifiers on the public UEA datasets.
arXiv Detail & Related papers (2020-05-07T17:50:18Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that achieves better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
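The triplet loss mentioned in the Relation GANs entry is, in its generic form, a margin-based ranking loss; the sketch below uses that generic form, and the margin value and the choice of anchor/positive/negative features are illustrative assumptions rather than the paper's exact design.
```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull the anchor toward the positive and push it away from the negative by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```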
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.