Removing Class Imbalance using Polarity-GAN: An Uncertainty Sampling
Approach
- URL: http://arxiv.org/abs/2012.04937v1
- Date: Wed, 9 Dec 2020 09:40:07 GMT
- Title: Removing Class Imbalance using Polarity-GAN: An Uncertainty Sampling
Approach
- Authors: Kumari Deepshikha and Anugunj Naman
- Abstract summary: We propose a Generative Adversarial Network (GAN) equipped with a generator network G, a discriminator network D and a classifier network C to remove the class-imbalance in visual data sets.
We achieve state-of-the-art performance on extreme visual classification tasks on FashionMNIST, MNIST, SVHN, ExDark, the MVTec Anomaly Detection dataset, the Chest X-Ray dataset and others.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Class imbalance is a challenging issue in practical classification problems
for deep learning models as well as for traditional models. Traditionally
successful countermeasures such as synthetic over-sampling have had limited
success with the complex, structured data handled by deep learning models. In this
work, we propose to use a Generative Adversarial Network (GAN) equipped with a
generator network G, a discriminator network D and a classifier network C to
remove the class imbalance in visual data sets. The generator network is
initialized with an auto-encoder to make its training stable. The discriminator D
ensures that G adheres to the class distribution of the imbalanced class. Whereas
in conventional methods the generator G competes with the discriminator D in a
min-max game, we propose to add an additional classifier network to the original
setup, so that the generator now competes in a min-max game with the
discriminator as well as with the new classifier. An additional condition is
enforced on the generator network G to produce points in the convex hull of the
desired imbalanced class. Further, the contention of the adversarial game with
classifier C pushes the conditional distribution learned by G towards the
periphery of the respective class, compensating for the class imbalance.
Experimental evidence shows that the auto-encoder initialization results
in stable training of the network. We achieve state-of-the-art performance on
extreme visual classification tasks on FashionMNIST, MNIST, SVHN, ExDark, the
MVTec Anomaly Detection dataset, the Chest X-Ray dataset and others.
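Two ideas from the abstract can be illustrated concretely: (1) the convex-hull condition on G can be realized by emitting convex combinations of real minority-class features, and (2) the adversarial game with classifier C amounts to adding an uncertainty-seeking term to the generator objective. The sketch below is a minimal NumPy illustration of these two ideas under our own simplifying assumptions; the function names, the Dirichlet weighting, and the exact entropy-based loss form are illustrative and not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def convex_hull_samples(minority_feats, n_samples, rng):
    """Synthetic points as convex combinations of real minority-class
    features, so every sample lies inside the class's convex hull."""
    k = minority_feats.shape[0]
    # Dirichlet weights are non-negative and sum to 1 -> convex combination.
    weights = rng.dirichlet(np.ones(k), size=n_samples)
    return weights @ minority_feats

def generator_objective(d_fake, c_probs, lam=1.0):
    """Toy three-player generator objective (illustrative form, not the
    paper's exact loss): fool the discriminator, and reward high classifier
    entropy so generated samples drift toward the class periphery."""
    adv = -np.mean(np.log(d_fake + 1e-12))                      # standard GAN term
    entropy = -np.sum(c_probs * np.log(c_probs + 1e-12), axis=1).mean()
    return adv - lam * entropy                                   # minimized by G

minority = rng.normal(size=(10, 4))   # 10 minority samples, 4-dim features
synth = convex_hull_samples(minority, 5, rng)
```

Because each synthetic point is a convex combination, every coordinate stays within the component-wise minimum and maximum of the real minority features, which is the hull guarantee the abstract describes.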
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF (simplex equiangular tight frame) and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Adaptive DropBlock Enhanced Generative Adversarial Networks for
Hyperspectral Image Classification [36.679303770326264]
We propose an Adaptive DropBlock-enhanced Generative Adversarial Networks (ADGAN) for hyperspectral image (HSI) classification.
The discriminator in a GAN often contradicts itself and tries to assign fake labels to minority-class samples, which impairs classification performance.
Experimental results on three HSI datasets demonstrated that the proposed ADGAN achieved superior performance over state-of-the-art GAN-based methods.
arXiv Detail & Related papers (2022-01-22T01:43:59Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements over the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions, achieving better performance than existing approaches on multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z) - Self-supervised GANs with Label Augmentation [43.78253518292111]
We propose a novel self-supervised GANs framework with label augmentation, i.e., augmenting the GAN labels (real or fake) with the self-supervised pseudo-labels.
We demonstrate that the proposed method significantly outperforms competitive baselines on both generative modeling and representation learning.
arXiv Detail & Related papers (2021-06-16T07:58:00Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Supervised Anomaly Detection via Conditional Generative Adversarial
Network and Ensemble Active Learning [24.112455929818484]
Anomaly detection has wide applications in machine intelligence but is still a difficult unsolved problem.
Traditional unsupervised anomaly detectors are suboptimal while supervised models can easily make biased predictions.
We present a new supervised anomaly detector by introducing the novel Ensemble Active Learning Generative Adversarial Network (EAL-GAN).
arXiv Detail & Related papers (2021-04-24T13:47:50Z) - eGAN: Unsupervised approach to class imbalance using transfer learning [8.100450025624443]
Class imbalance is an inherent problem in many machine learning classification tasks.
We explore an unsupervised approach to address these imbalances by leveraging transfer learning from pre-trained image classification models to an encoder-based Generative Adversarial Network (eGAN).
A best F1-score of 0.69 was obtained on the CIFAR-10 classification task with an imbalance ratio of 1:2500.
arXiv Detail & Related papers (2021-04-09T02:37:55Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images, by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z) - Generalized Zero-Shot Learning Via Over-Complete Distribution [79.5140590952889]
We propose to generate an Over-Complete Distribution (OCD) using Conditional Variational Autoencoder (CVAE) of both seen and unseen classes.
The effectiveness of the framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot Learning protocols.
arXiv Detail & Related papers (2020-04-01T19:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.