Robust Generative Adversarial Network
- URL: http://arxiv.org/abs/2004.13344v1
- Date: Tue, 28 Apr 2020 07:37:01 GMT
- Title: Robust Generative Adversarial Network
- Authors: Shufei Zhang, Zhuang Qian, Kaizhu Huang, Jimin Xiao, Yuan He
- Abstract summary: We aim to improve the generalization capability of GANs by promoting local robustness within a small neighborhood of the training samples.
We design a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball.
We prove that our robust method obtains a tighter generalization upper bound than traditional GANs under mild assumptions.
- Score: 37.015223009069175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are powerful generative models, but they usually suffer from instability and generalization problems, which may lead to poor generated samples. Most existing work focuses on stabilizing the training of the discriminator while ignoring generalization properties. In this work, we aim to improve the generalization capability of GANs by promoting local robustness within a small neighborhood of the training samples. We also prove that robustness in a small neighborhood of the training set leads to better generalization. In particular, we design a robust optimization framework in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than the Gaussian distribution used in most GANs) to the real data distribution, while the discriminator attempts to distinguish the real and fake distributions under the worst perturbation. We prove that our robust method obtains a tighter generalization upper bound than traditional GANs under mild assumptions, ensuring a theoretical superiority of RGAN over GANs. A series of experiments on the CIFAR-10, STL-10 and CelebA datasets indicate that our proposed robust framework improves on five baseline GAN models substantially and consistently.
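To make the worst-case game concrete, below is a minimal training-step sketch. It assumes projected gradient ascent inside an L2 ball of radius eps as a tractable stand-in for the paper's Wasserstein-ball constraint; G, D, the optimizers, and all step sizes are placeholders, not the authors' implementation.

```python
import torch

def worst_case_perturb(loss_fn, x, eps=0.05, steps=3, lr=0.01):
    # Inner maximization: find the perturbation of x inside an L2 ball of
    # radius eps that maximizes loss_fn (a PGD stand-in for the paper's
    # Wasserstein-ball constraint).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(x + delta)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad                          # ascend the loss
            norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            scale = (eps / norms).clamp(max=1.0)        # project into the ball
            delta *= scale.view(-1, *([1] * (x.dim() - 1)))
    return (x + delta).detach()

def rgan_step(G, D, real, z, opt_G, opt_D, eps=0.05):
    # Discriminator step: distinguish real from fake under the worst
    # perturbation of each (WGAN-style critic losses for simplicity).
    fake = G(z).detach()
    worst_real = worst_case_perturb(lambda v: -D(v).mean(), real, eps)
    worst_fake = worst_case_perturb(lambda v: D(v).mean(), fake, eps)
    d_loss = -D(worst_real).mean() + D(worst_fake).mean()
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()
    # Generator step: map the worst input distribution (a perturbed
    # latent) to the real data distribution.
    worst_z = worst_case_perturb(lambda v: -D(G(v)).mean(), z, eps)
    g_loss = -D(G(worst_z)).mean()
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

The discriminator trains on the worst perturbation of both real and generated samples, and the generator trains on the worst perturbation of its latent input, mirroring the two worst-case roles described in the abstract.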
Related papers
- DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data [13.50061291734299]
We propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN.
DigGAN augments existing GANs by narrowing the gap between the norm of the gradient of the discriminator's prediction w.r.t. real images and w.r.t. generated samples (see the sketch after this entry).
We observe that this formulation avoids bad attractors within the GAN loss landscape, and we find that DigGAN significantly improves GAN training when limited data is available.
arXiv Detail & Related papers (2022-11-27T01:03:58Z)
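The gradient-gap penalty can be sketched directly from the description above: square the difference between the average gradient norm of the discriminator on real and on generated images. The weighting of the penalty is left to the caller; this is a sketch, not the official implementation.

```python
import torch

def dig_gap_penalty(D, real, fake):
    # Penalize the gap between ||dD/dx|| on real and on generated samples;
    # added to the discriminator loss with a caller-chosen weight.
    real = real.clone().requires_grad_(True)
    fake = fake.clone().requires_grad_(True)
    g_real, = torch.autograd.grad(D(real).sum(), real, create_graph=True)
    g_fake, = torch.autograd.grad(D(fake).sum(), fake, create_graph=True)
    n_real = g_real.flatten(1).norm(dim=1).mean()
    n_fake = g_fake.flatten(1).norm(dim=1).mean()
    return (n_real - n_fake) ** 2
```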
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second-moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
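As a toy illustration of GAN-based robust statistics, the sketch below performs robust mean estimation with a logistic (TV-GAN-style) discriminator. The paper's smoothed Kolmogorov-Smirnov losses are more general, so treat this only as the flavour of the approach.

```python
import torch
import torch.nn.functional as F

def robust_mean_gan(x, steps=500, lr=0.05):
    # Toy GAN-style robust mean estimation: a logistic discriminator (w, b)
    # separates the possibly corrupted data x from N(theta, I), and theta
    # is trained to fool it. Not the paper's exact loss.
    d = x.shape[1]
    theta = x.median(dim=0).values.clone().requires_grad_(True)  # robust init
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt_t = torch.optim.SGD([theta], lr=lr)
    opt_d = torch.optim.SGD([w, b], lr=lr)
    for _ in range(steps):
        # Discriminator step: real data vs. samples from N(theta, I).
        fake = (theta + torch.randn_like(x)).detach()
        d_loss = (F.softplus(-(x @ w + b)).mean()
                  + F.softplus(fake @ w + b).mean())
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
        # Estimator step: move theta so its samples look "real".
        fake = theta + torch.randn_like(x)
        g_loss = F.softplus(-(fake @ w + b)).mean()
        opt_t.zero_grad()
        g_loss.backward()
        opt_t.step()
    return theta.detach()
```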
- Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs [44.4200799586461]
We show that standard cryptographic assumptions imply that even a stronger minimax-optimality condition is still insufficient for distribution learning.
Our techniques reveal a deep connection between GANs and pseudorandom generators (PRGs), which we believe will lead to further insights into the computational landscape of GANs.
arXiv Detail & Related papers (2022-01-18T18:59:21Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find that SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model; a sketch of a typical self-ensembling update follows this entry.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
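Self-ensembling models of this kind usually maintain the teacher as an exponential moving average (EMA) of the student; the summary does not spell out the update, so the following is a sketch under that assumption.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Self-ensembling: the teacher's weights track an exponential moving
    # average of the student's weights.
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
    for t_buf, s_buf in zip(teacher.buffers(), student.buffers()):
        t_buf.copy_(s_buf)   # keep BatchNorm statistics in sync
```

After each student optimization step, ema_update(teacher, student) refreshes the teacher whose segmentation maps the discriminator then scores.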
- Inferential Wasserstein Generative Adversarial Networks [9.859829604054127]
We introduce a novel inferential Wasserstein GAN (iWGAN), a principled framework that fuses auto-encoders and WGANs.
The iWGAN greatly mitigates mode collapse, speeds up convergence, and provides a quality-check measure for each individual sample (illustrated below).
arXiv Detail & Related papers (2021-09-13T00:43:21Z)
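The per-sample quality check can be illustrated with a hypothetical score combining the critic's disagreement between a sample and its re-generation with the autoencoding residual; D, G, and E stand for the critic, generator, and encoder, and the score itself is an illustrative stand-in, not iWGAN's exact statistic.

```python
import torch

@torch.no_grad()
def iwgan_like_quality(D, G, E, x):
    # Per-sample quality check, sketched: critic disagreement between a
    # sample and its re-generation G(E(x)), plus the autoencoding error.
    # Assumes D returns one score per sample. Higher = more suspect.
    recon = G(E(x))
    critic_gap = (D(x) - D(recon)).squeeze(-1)      # critic disagreement
    recon_err = (x - recon).flatten(1).norm(dim=1)  # autoencoding residual
    return critic_gap + recon_err
```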
- DO-GAN: A Double Oracle Framework for Generative Adversarial Networks [28.904057977044374]
We propose a new approach to training Generative Adversarial Networks (GANs).
We deploy a double-oracle framework using the generator and discriminator oracles.
We apply our framework to established GAN architectures such as vanilla GAN, Deep Convolutional GAN, Spectral Normalization GAN and Stacked GAN.
arXiv Detail & Related papers (2021-02-17T05:11:18Z)
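A double-oracle loop keeps populations of generator and discriminator checkpoints and repeatedly solves the restricted meta-game between them. The sketch below solves that meta-game with fictitious play; DO-GAN's actual meta-solver and oracle training steps differ, so the payoff matrix and solver here are illustrative.

```python
import numpy as np

def double_oracle_meta(payoffs):
    # Solve the restricted meta-game over stored generator/discriminator
    # checkpoints with fictitious play. payoffs[i, j] is generator i's
    # payoff against discriminator j; returns mixed strategies over the
    # two populations of checkpoints.
    n_g, n_d = payoffs.shape
    g_counts, d_counts = np.ones(n_g), np.ones(n_d)
    for _ in range(1000):
        g_mix = g_counts / g_counts.sum()
        d_mix = d_counts / d_counts.sum()
        g_counts[np.argmax(payoffs @ d_mix)] += 1   # G best response
        d_counts[np.argmin(g_mix @ payoffs)] += 1   # D best response
    return g_counts / g_counts.sum(), d_counts / d_counts.sum()
```

In the full loop, a new generator (or discriminator) trained as a best response to the other side's current mixture is added to its population, until neither side can improve.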
- Teaching a GAN What Not to Learn [20.03447539784024]
Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution.
In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi.
In the GAN framework, we not only provide the GAN positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid.
This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable.
arXiv Detail & Related papers (2020-10-29T14:44:24Z)
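The "what not to learn" idea maps to a three-term discriminator loss: positives must be accepted, while both negative samples and generated samples must be rejected. The 0.5 weights below are a hypothetical choice, not the paper's.

```python
import torch
import torch.nn.functional as F

def rumi_d_loss(D, pos, neg, fake):
    # Discriminator loss with negative samples, sketched: positives are
    # "real"; negatives and generated samples are both pushed to "fake".
    p, n, f = D(pos), D(neg), D(fake)
    loss_pos = F.binary_cross_entropy_with_logits(p, torch.ones_like(p))
    loss_neg = F.binary_cross_entropy_with_logits(n, torch.zeros_like(n))
    loss_fake = F.binary_cross_entropy_with_logits(f, torch.zeros_like(f))
    return loss_pos + 0.5 * (loss_neg + loss_fake)
```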
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose LCCGAN, a model with local coordinate coding (LCC), to improve the quality of generated data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
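Local coordinate coding replaces a global prior with latents expressed in terms of a few nearby learned bases. The sampling rule below (an affine combination of the k nearest anchors) is an illustrative sketch; LCCGAN learns the coding rather than drawing random weights.

```python
import torch

def lcc_sample(anchors, k=4):
    # Sample a latent by local coordinate coding, sketched: pick a random
    # anchor, then form an affine combination of its k nearest anchors
    # (including itself). `anchors` is a learned (n, d) basis.
    n, d = anchors.shape
    i = torch.randint(n, (1,)).item()
    dist = (anchors - anchors[i]).norm(dim=1)
    idx = dist.topk(k, largest=False).indices   # k nearest bases
    w = torch.rand(k)
    w = w / w.sum()                             # coefficients sum to one
    return w @ anchors[idx]                     # point on the local patch
```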
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
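Treating the (negative) discriminator output as an energy suggests refining generator samples with a few steps of Langevin dynamics, which is the mechanism behind discriminator-driven sampling schemes of this kind; the step sizes below are illustrative.

```python
import torch

def langevin_refine(D, x, steps=10, step_size=0.01, noise=0.01):
    # Refine generated samples by Langevin ascent on the critic's score,
    # treating -D(x) as an energy: move x toward higher critic score
    # while injecting Gaussian noise.
    x = x.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        grad, = torch.autograd.grad(D(x).sum(), x)
        x = (x + step_size * grad + noise * torch.randn_like(x)).detach()
    return x
```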
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
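The model-based design can be sketched as one small discriminator per graph neighborhood, with the local losses summed, which is where the subadditivity of the divergence enters. The architecture below is a minimal illustration, with `neighborhoods` given as lists of variable indices.

```python
import torch
import torch.nn as nn

class NeighborhoodDiscriminators(nn.Module):
    # One small discriminator per Bayes-net/MRF neighborhood: each sees
    # only its own subset of variables, and the local scores are summed.
    def __init__(self, neighborhoods, hidden=32):
        super().__init__()
        self.neighborhoods = neighborhoods
        self.discs = nn.ModuleList(
            nn.Sequential(nn.Linear(len(nb), hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for nb in neighborhoods)

    def forward(self, x):
        # Sum of local discriminator scores over all neighborhoods.
        return sum(d(x[:, nb]).mean()
                   for d, nb in zip(self.discs, self.neighborhoods))
```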