The Majority Can Help The Minority: Context-rich Minority Oversampling
for Long-tailed Classification
- URL: http://arxiv.org/abs/2112.00412v2
- Date: Fri, 3 Dec 2021 08:54:45 GMT
- Title: The Majority Can Help The Minority: Context-rich Minority Oversampling
for Long-tailed Classification
- Authors: Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun and Jin Young
Choi
- Abstract summary: We propose a novel minority over-sampling method to augment diversified minority samples.
Our key idea is to paste a foreground patch from a minority class onto a background image from a majority class that has rich context.
Our method achieves state-of-the-art performance on various long-tailed classification benchmarks.
- Score: 20.203461156516937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem with class-imbalanced data is that the generalization
performance of the classifier deteriorates due to the scarcity of
minority-class data. In this paper, we propose a novel minority over-sampling
method to augment diversified minority samples by leveraging the rich context
of the majority classes as background images. To diversify the minority
samples, our key idea is to paste a foreground patch from a minority class onto
a background image from a majority class that has rich context. Our method is
simple and can be easily combined with existing long-tailed recognition
methods. We empirically demonstrate the effectiveness of the proposed
oversampling method through extensive experiments and ablation studies. Without
any architectural changes or complex algorithms, our method achieves
state-of-the-art performance on various long-tailed classification benchmarks.
Our code will be publicly available at link.
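The pasting operation is easy to prototype. Below is a minimal sketch in NumPy, assuming CHW image arrays of equal size and a CutMix-style rectangular patch; `rand_bbox`, `paste_minority_on_majority`, and the area-based label weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rand_bbox(h, w, lam, rng):
    """Sample a box covering roughly a (1 - lam) fraction of the image."""
    cut_ratio = np.sqrt(1.0 - lam)
    ch, cw = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - ch // 2, 0, h), np.clip(cy + ch // 2, 0, h)
    x1, x2 = np.clip(cx - cw // 2, 0, w), np.clip(cx + cw // 2, 0, w)
    return y1, y2, x1, x2

def paste_minority_on_majority(bg_img, fg_img, fg_label, alpha=1.0, seed=None):
    """Paste a patch of a minority-class image (foreground) onto a
    majority-class image (background); the result carries the minority
    label with a weight proportional to the pasted area."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)            # background area proportion
    _, h, w = bg_img.shape                  # assumes CHW arrays
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    mixed = bg_img.copy()
    mixed[:, y1:y2, x1:x2] = fg_img[:, y1:y2, x1:x2]
    fg_weight = (y2 - y1) * (x2 - x1) / (h * w)  # fraction of minority pixels
    return mixed, fg_label, fg_weight
```

In the paper's setting, the background image would be drawn from a majority-biased sampler and the foreground from a minority-biased (e.g., inverse class frequency) sampler; the sketch covers only the pasting step itself.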
Related papers
- Confronting Discrimination in Classification: Smote Based on
Marginalized Minorities in the Kernel Space for Imbalanced Data [0.0]
We propose a novel classification oversampling approach based on the decision boundary and sample proximity relationships.
We test the proposed method on a classic financial fraud dataset; a generic SMOTE sketch follows this entry.
arXiv Detail & Related papers (2024-02-13T04:03:09Z) - Combating Representation Learning Disparity with Geometric Harmonization [50.29859682439571]
- Combating Representation Learning Disparity with Geometric Harmonization [50.29859682439571]
We propose a novel Geometric Harmonization (GH) method to encourage category-level uniformity in representation learning.
Our proposal does not alter the setting of SSL and can be easily integrated into existing methods in a low-cost manner.
arXiv Detail & Related papers (2023-10-26T17:41:11Z) - Generative Oversampling for Imbalanced Data via Majority-Guided VAE [15.93867386081279]
We propose a novel over-sampling model, called Majority-Guided VAE (MGVAE), which generates new minority samples under the guidance of a majority-based prior.
In this way, the newly generated minority samples can inherit the diversity and richness of the majority ones, mitigating overfitting in downstream tasks; a two-stage sketch of this idea follows this entry.
arXiv Detail & Related papers (2023-02-14T06:35:23Z) - Don't Play Favorites: Minority Guidance for Diffusion Models [59.75996752040651]
- Don't Play Favorites: Minority Guidance for Diffusion Models [59.75996752040651]
We present a novel framework that makes the generation process of diffusion models focus on minority samples.
We develop minority guidance, a sampling technique that can guide the generation process toward regions with desired likelihood levels.
arXiv Detail & Related papers (2023-01-29T03:08:47Z) - Few-shot Forgery Detection via Guided Adversarial Interpolation [56.59499187594308]
Existing forgery detection methods suffer from significant performance drops when applied to unseen novel forgery approaches.
We propose Guided Adversarial Interpolation (GAI) to overcome the few-shot forgery detection problem.
Our method is shown to be robust to the choice of majority and minority forgery approaches.
arXiv Detail & Related papers (2022-04-12T16:05:10Z) - Few-Shot Learning with Part Discovery and Augmentation from Unlabeled
Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method outperforms the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z) - Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification; a minimal sketch of attentive pooling follows this entry.
arXiv Detail & Related papers (2021-03-30T00:48:28Z) - Counterfactual-based minority oversampling for imbalanced classification [11.140929092818235]
- Counterfactual-based minority oversampling for imbalanced classification [11.140929092818235]
A key challenge for oversampling in imbalanced classification is that the generation of new minority samples often fails to make use of the majority classes.
We present a new oversampling framework based on the counterfactual theory.
arXiv Detail & Related papers (2020-08-21T14:13:15Z) - M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks struggle to generalize to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue: augmenting less-frequent classes by translating samples from more-frequent classes (a minimal sketch follows this entry).
Our experimental results on a variety of class-imbalanced datasets show that the proposed method significantly improves generalization on minority classes compared to existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.