Conditional Variational Autoencoder with Balanced Pre-training for
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2201.04809v1
- Date: Thu, 13 Jan 2022 06:52:58 GMT
- Title: Conditional Variational Autoencoder with Balanced Pre-training for
Generative Adversarial Networks
- Authors: Yuchong Yao, Xiaohui Wang, Yuanbang Ma, Han Fang, Jiaying Wei, Liyuan
Chen, Ali Anaissi and Ali Braytee
- Abstract summary: Class imbalance occurs in many real-world applications, including image classification, where the number of images in each class differs significantly.
With imbalanced data, generative adversarial networks (GANs) lean towards majority class samples.
We propose a novel Conditional Variational Autoencoder with Balanced Pre-training for Generative Adversarial Networks (CAPGAN) as an augmentation tool to generate realistic synthetic images.
- Score: 11.46883762268061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class imbalance occurs in many real-world applications, including image
classification, where the number of images in each class differs significantly.
With imbalanced data, generative adversarial networks (GANs) lean towards
majority class samples. Two recent methods, Balancing GAN (BAGAN) and
improved BAGAN (BAGAN-GP), are proposed as an augmentation tool to handle this
problem and restore the balance to the data. The former pre-trains the
autoencoder weights in an unsupervised manner. However, it is unstable when the
images from different categories have similar features. The latter is improved
based on BAGAN by facilitating supervised autoencoder training, but the
pre-training is biased towards the majority classes. In this work, we propose a
novel Conditional Variational Autoencoder with Balanced Pre-training for
Generative Adversarial Networks (CAPGAN) as an augmentation tool to generate
realistic synthetic images. In particular, we utilize a conditional
convolutional variational autoencoder with supervised and balanced pre-training
for the GAN initialization and training with gradient penalty. Our proposed
method outperforms other state-of-the-art methods on the
highly imbalanced version of MNIST, Fashion-MNIST, CIFAR-10, and two medical
imaging datasets. Our method can synthesize high-quality minority samples in
terms of Fréchet inception distance, structural similarity index measure, and
perceptual quality.
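The core idea of balanced pre-training is that every class contributes equally to each pre-training batch, regardless of how rare it is in the dataset. The following is a minimal sketch of such a balanced sampler; the helper name and interface are illustrative assumptions, not the paper's actual implementation, and minority classes are simply oversampled with replacement.

```python
import random
from collections import defaultdict

def balanced_batches(labels, batch_size, num_batches, seed=0):
    """Yield index batches with an equal number of samples per class.

    labels: list of integer class labels for the dataset.
    Each batch draws the same number of indices from every class,
    oversampling minority classes with replacement.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)
    per_class = max(1, batch_size // len(classes))
    for _ in range(num_batches):
        batch = []
        for c in classes:
            # sample with replacement so minority classes keep up
            batch.extend(rng.choices(by_class[c], k=per_class))
        yield batch

# usage: 1000 majority-class samples vs. 50 minority-class samples
labels = [0] * 1000 + [1] * 50
batch = next(balanced_batches(labels, batch_size=64, num_batches=1))
# each class contributes the same number of indices per batch
```

Pre-training the conditional autoencoder on batches drawn this way avoids the majority-class bias that plain shuffled batches would introduce.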
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z) - TWINS: A Fine-Tuning Framework for Improved Transferability of
Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z) - Learning to Re-weight Examples with Optimal Transport for Imbalanced
Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
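The OT formulation in that paper is beyond a short snippet, but the basic idea of re-weighting can be illustrated with the much simpler inverse-frequency scheme: each class's loss weight is proportional to one over its sample count. This is a stand-in sketch, not the OT-based method itself.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights proportional to 1 / class frequency,
    normalized so the weighted sample count equals the dataset size."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

weights = inverse_frequency_weights([0] * 900 + [1] * 100)
# majority class 0 is down-weighted, minority class 1 up-weighted
```

Applying these weights to the per-sample loss makes every class contribute equally to the total objective; the OT approach instead learns the weights by matching the weighted training distribution to a balanced reference distribution.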
arXiv Detail & Related papers (2022-08-05T01:23:54Z) - FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - eGAN: Unsupervised approach to class imbalance using transfer learning [8.100450025624443]
Class imbalance is an inherent problem in many machine learning classification tasks.
We explore an unsupervised approach to address these imbalances by leveraging transfer learning from pre-trained image classification models to an encoder-based Generative Adversarial Network (eGAN).
A best F1-score of 0.69 was obtained on the CIFAR-10 classification task with an imbalance ratio of 1:2500.
arXiv Detail & Related papers (2021-04-09T02:37:55Z) - Insta-RS: Instance-wise Randomized Smoothing for Improved Robustness and
Accuracy [9.50143683501477]
Insta-RS is a multiple-start search algorithm that assigns customized Gaussian variances to test examples.
Insta-RS Train is a novel two-stage training algorithm that adaptively adjusts and customizes the noise level of each training example.
We show that our method significantly enhances the average certified radius (ACR) as well as the clean data accuracy.
arXiv Detail & Related papers (2021-03-07T19:46:07Z) - Enhanced Balancing GAN: Minority-class Image Generation [0.7310043452300734]
Generative adversarial networks (GANs) are one of the most powerful generative models.
Balancing GAN (BAGAN) is proposed to mitigate this problem, but it is unstable when images in different classes look similar.
In this work, we propose a supervised autoencoder with an intermediate embedding model to disperse the labeled latent vectors.
Our proposed model overcomes the unstable issue in original BAGAN and converges faster to high quality generations.
arXiv Detail & Related papers (2020-10-31T05:03:47Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
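The re-normalization step behind perturbing feature statistics can be sketched as follows: normalize features by their batch mean and standard deviation, then re-scale and re-shift with perturbed statistics. In AdvBN the perturbations are chosen adversarially during training; here they are plain inputs, so this is only an illustration of the mechanics, not the training procedure.

```python
import numpy as np

def perturb_feature_stats(feats, delta_mean, delta_std):
    """Shift per-channel feature statistics instead of image pixels.

    feats: array of shape (N, C) -- a batch of C-dimensional features.
    delta_mean, delta_std: per-channel perturbations of shape (C,).
    """
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0) + 1e-8  # avoid division by zero
    normalized = (feats - mu) / sigma
    # re-apply perturbed statistics to the normalized features
    return normalized * (sigma + delta_std) + (mu + delta_mean)
```

With zero perturbations the features are recovered unchanged, which makes the layer a drop-in identity at inference time.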
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Mitigating Dataset Imbalance via Joint Generation and Classification [17.57577266707809]
Supervised deep learning methods are enjoying enormous success in many practical applications of computer vision.
However, their marked performance degradation under biased and imbalanced data calls the reliability of these methods into question.
We introduce a joint dataset repairment strategy by combining a neural network classifier with Generative Adversarial Networks (GANs).
We show that the combined training helps to improve the robustness of both the classifier and the GAN against severe class imbalance.
arXiv Detail & Related papers (2020-08-12T18:40:38Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images, by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.