Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2102.12033v1
- Date: Wed, 24 Feb 2021 02:31:50 GMT
- Authors: Jinhee Lee, Haeri Kim, Youngkyu Hong, Hye Won Chung
- Abstract summary: We propose a method to diagnose and emphasize underrepresented samples during training of a Generative Adversarial Network (GAN).
Based on the observation that underrepresented samples have a high average discrepancy or high variability in discrepancy, the method emphasizes those samples during training.
Our experimental results demonstrate that the proposed method improves GAN performance on various datasets.
- Score: 5.754152248672317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite remarkable performance in producing realistic samples, Generative
Adversarial Networks (GANs) often produce low-quality samples near low-density
regions of the data manifold. Recently, many techniques have been developed to
improve the quality of generated samples, either by rejecting low-quality
samples after training or by pre-processing the empirical data distribution
before training, but at the cost of reduced diversity. To guarantee both the
quality and the diversity, we propose a simple yet effective method to diagnose
and emphasize underrepresented samples during training of a GAN. The main idea
is to use the statistics of the discrepancy between the data distribution and
the model distribution at each data instance. Based on the observation that the
underrepresented samples have a high average discrepancy or high variability in
discrepancy, we propose a method to emphasize those samples during training of
a GAN. Our experimental results demonstrate that the proposed method improves
GAN performance on various datasets, and it is especially effective in
improving the quality of generated samples with minor features.
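To make the mechanism concrete, below is a minimal sketch in Python (illustrative code, not the authors' implementation): the per-sample discrepancy is approximated here by the discriminator logit, which for a GAN estimates the log density ratio log(p_data(x)/p_model(x)); a running mean and variance of that score are maintained per training example via Welford's algorithm, and samples with a high mean or high variability receive larger sampling weights. The class name `DiscrepancyTracker` and the mean + lambda * std weighting rule are assumptions for illustration.
```python
import numpy as np

class DiscrepancyTracker:
    """Tracks per-sample discrepancy statistics across GAN training.

    The score for sample i is assumed to estimate
    log(p_data(x_i) / p_model(x_i)), e.g. the discriminator logit.
    """

    def __init__(self, num_samples):
        self.count = np.zeros(num_samples)
        self.mean = np.zeros(num_samples)
        self.m2 = np.zeros(num_samples)  # running sum of squared deviations

    def update(self, indices, scores):
        """Welford online update for the samples seen in this batch."""
        for i, s in zip(indices, scores):
            self.count[i] += 1
            delta = s - self.mean[i]
            self.mean[i] += delta / self.count[i]
            self.m2[i] += delta * (s - self.mean[i])

    def weights(self, lam=1.0):
        """Sampling weights emphasizing high-mean or high-variance samples."""
        var = np.where(self.count > 1, self.m2 / np.maximum(self.count - 1, 1), 0.0)
        score = self.mean + lam * np.sqrt(var)
        score -= score.min()                  # shift to be non-negative
        return score / (score.sum() + 1e-12)  # normalize to a distribution
```
The returned weights could drive an oversampler such as torch.utils.data.WeightedRandomSampler so that diagnosed (underrepresented) examples are shown to the discriminator more often.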
Related papers
- Assessing Sample Quality via the Latent Space of Generative Models [44.59115390303591]
We propose to examine the latent space of a trained generative model to infer generated sample quality.
This is feasible because the quality of a generated sample directly relates to the amount of training data resembling it.
We show that the proposed score correlates highly with sample quality for various generative models, including VAEs, GANs, and Latent Diffusion Models; a toy instantiation of the idea is sketched after this entry.
arXiv Detail & Related papers (2024-07-21T14:05:06Z)
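A toy instantiation of that intuition in Python (hypothetical; the paper derives its score differently) rates a generated sample by how densely training latents populate its neighborhood:
```python
import numpy as np

def latent_quality_score(z, train_latents, k=5):
    """Quality proxy: negative mean distance to the k nearest training
    latents (higher = better supported by training data).

    z: latent code of the generated sample, shape (d,)
    train_latents: latent codes of the training set, shape (n, d)
    """
    dists = np.linalg.norm(train_latents - z, axis=1)
    return -np.sort(dists)[:k].mean()
```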
- How Low Can You Go? Surfacing Prototypical In-Distribution Samples for Unsupervised Anomaly Detection [48.30283806131551]
We show that UAD with extremely few training samples can already match -- and in some cases even surpass -- the performance of training with the whole training dataset.
We propose an unsupervised method to reliably identify prototypical samples to further boost UAD performance.
arXiv Detail & Related papers (2023-12-06T15:30:47Z)
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) counter class-imbalanced training with a distribution adjustment regularizer; a schematic of such a term is sketched after this entry.
The method is benchmarked on the CIFAR100/CIFAR100LT datasets and shows outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
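As a rough schematic only (an assumed simplification, not necessarily CBDM's exact regularizer), a distribution-adjustment term can pull the denoiser's prediction for uniformly resampled class labels toward its stop-gradient prediction for the true labels, letting tail classes borrow signal from head classes:
```python
import torch

def class_balancing_loss(eps_model, x_t, t, noise, y, num_classes, tau=1.0):
    """Standard denoising loss plus a hypothetical distribution-adjustment
    regularizer over uniformly drawn class labels."""
    pred = eps_model(x_t, t, y)                # conditional noise prediction
    loss_dm = torch.mean((pred - noise) ** 2)  # usual diffusion objective
    y_uniform = torch.randint(0, num_classes, y.shape, device=y.device)
    reg = torch.mean((eps_model(x_t, t, y_uniform) - pred.detach()) ** 2)
    return loss_dm + tau * reg
```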
- Selectively increasing the diversity of GAN-generated samples [8.980453507536017]
We propose a novel method to selectively increase the diversity of GAN-generated samples.
We show the superiority of our method on a synthetic benchmark as well as a real-life scenario simulating data from the Zero Degree Calorimeter of the ALICE experiment at CERN.
arXiv Detail & Related papers (2022-07-04T16:27:06Z)
- Fake It Till You Make It: Near-Distribution Novelty Detection by Score-Based Generative Models [54.182955830194445]
Existing models either fail or suffer a dramatic performance drop under the so-called "near-distribution" setting.
We propose to exploit a score-based generative model to produce synthetic near-distribution anomalous data; one plausible realization is sketched after this entry.
Our method improves near-distribution novelty detection by 6% and surpasses the state of the art by 1% to 5% across nine novelty detection benchmarks.
arXiv Detail & Related papers (2022-05-28T02:02:53Z)
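One plausible realization in Python (a sketch under assumptions; the paper's pipeline may differ): noise normal data and only partially denoise it with the frozen score model, so the output lands near, but off, the data manifold and can be labeled anomalous when training a detector:
```python
import numpy as np

def synth_near_anomalies(x, score_fn, noise=0.5, steps=3, lr=0.1, seed=0):
    """Perturb normal data and take only a few denoising steps along the
    (assumed pre-trained, frozen) score function, yielding near-distribution
    anomalies. score_fn(x) should approximate grad_x log p(x)."""
    rng = np.random.default_rng(seed)
    x_t = x + noise * rng.standard_normal(x.shape)
    for _ in range(steps):  # deliberately too few steps to fully denoise
        x_t = x_t + lr * score_fn(x_t)
    return x_t
```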
- ReSmooth: Detecting and Utilizing OOD Samples when Training with Data Augmentation [57.38418881020046]
Recent data augmentation (DA) techniques increasingly pursue diversity in augmented training samples.
However, a highly diverse augmentation strategy usually introduces out-of-distribution (OOD) augmented samples.
We propose ReSmooth, a framework that first detects OOD samples among the augmented data and then leverages them; the detect-then-leverage pattern is sketched after this entry.
arXiv Detail & Related papers (2022-05-25T09:29:27Z)
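The detect-then-leverage pattern might look as follows in Python (illustrative; ReSmooth's actual detector and reuse strategy differ in detail): fit a two-component Gaussian mixture to per-sample losses of the augmented data, treat the high-loss component as OOD, and soften only those samples' targets:
```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ood_mask_from_losses(per_sample_losses):
    """True where an augmented sample falls into the high-loss (OOD) mode."""
    losses = np.asarray(per_sample_losses).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    ood_component = int(np.argmax(gmm.means_.ravel()))
    return gmm.predict(losses) == ood_component

def targets_with_selective_smoothing(labels, num_classes, ood_mask, eps=0.2):
    """One-hot targets; label smoothing is applied only to OOD samples."""
    onehot = np.eye(num_classes)[labels]
    smooth = onehot * (1.0 - eps) + eps / num_classes
    return np.where(ood_mask[:, None], smooth, onehot)
```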
- Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality [44.37533757879762]
We introduce Differentiable Diffusion Sampler Search (DDSS), a method that optimizes fast samplers for any pre-trained diffusion model; a toy version of the idea is sketched after this entry.
We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models.
Our method is compatible with any pre-trained diffusion model, with no fine-tuning or re-training required.
arXiv Detail & Related papers (2022-02-11T18:53:18Z)
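In spirit (a toy sketch with stand-in components, not the actual DDSS algorithm), the sampler's step sizes become learnable parameters and are optimized by backpropagating a differentiable quality loss through the unrolled sampling chain:
```python
import torch

score_model = lambda x: -x                 # stand-in for a frozen pre-trained score net
quality_loss = lambda x: (x ** 2).mean()   # stand-in for a differentiable quality metric

steps = torch.nn.Parameter(torch.full((8,), 0.1))  # learnable sampler step sizes
opt = torch.optim.Adam([steps], lr=1e-2)

for _ in range(100):
    x = torch.randn(64, 16)         # start sampling from noise
    for s in steps:                 # unrolled, fully differentiable chain
        x = x + s * score_model(x)
    loss = quality_loss(x)
    opt.zero_grad()
    loss.backward()                 # gradients flow through every sampler step
    opt.step()
```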
- Implicit Data Augmentation Using Feature Interpolation for Diversified Low-Shot Image Generation [11.4559888429977]
Training of generative models can easily diverge in the low-data setting.
We propose a novel implicit data augmentation approach that facilitates stable training and synthesizes diverse samples; a mixup-style sketch follows this entry.
arXiv Detail & Related papers (2021-12-04T23:55:46Z)
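A mixup-style sketch of the idea in Python (hypothetical; the paper interpolates features inside the network and may be more elaborate): mix each sample's intermediate features with those of a random partner to implicitly enlarge the training distribution:
```python
import numpy as np

def interpolate_features(feats, alpha_max=0.5, seed=0):
    """Mix each row of `feats` (n, d) with a randomly paired row, using a
    per-sample interpolation coefficient drawn from [0, alpha_max]."""
    rng = np.random.default_rng(seed)
    partner = rng.permutation(len(feats))
    alpha = rng.uniform(0.0, alpha_max, size=(len(feats), 1))
    return (1.0 - alpha) * feats + alpha * feats[partner]
```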
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation (NDA) samples provide information on the support of the data distribution.
We introduce a new GAN training objective where NDA samples serve as an additional source of synthetic (negative) data for the discriminator, as sketched after this entry.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
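A minimal sketch of such an objective in Python (assuming the standard non-saturating GAN loss; the `nda` batch could be, e.g., jigsaw-shuffled real images, one of several augmentations considered): NDA samples are simply scored as an extra batch of fakes:
```python
import torch
import torch.nn.functional as F

def d_loss_with_nda(disc, real, fake, nda):
    """Discriminator loss treating Negative-Data-Augmentation samples
    as an additional source of fakes."""
    real_logits = disc(real)
    fake_logits = disc(fake.detach())  # don't backprop into the generator here
    nda_logits = disc(nda)
    return (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
        + F.binary_cross_entropy_with_logits(nda_logits, torch.zeros_like(nda_logits))
    )
```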
- Informative Sample Mining Network for Multi-Domain Image-to-Image Translation [101.01649070998532]
We show that improving the sample selection strategy is an effective solution for image-to-image translation tasks.
We propose a novel multi-stage sample training scheme to reduce sample hardness while preserving sample informativeness.
arXiv Detail & Related papers (2020-01-05T05:48:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.