Inferential Wasserstein Generative Adversarial Networks
- URL: http://arxiv.org/abs/2109.05652v1
- Date: Mon, 13 Sep 2021 00:43:21 GMT
- Title: Inferential Wasserstein Generative Adversarial Networks
- Authors: Yao Chen, Qingyi Gao and Xiao Wang
- Abstract summary: We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs.
The iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample.
- Score: 9.859829604054127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have been impactful on many problems
and applications but suffer from unstable training. The Wasserstein GAN (WGAN)
leverages the Wasserstein distance to avoid the pitfalls of the min-max
two-player training of GANs, but it has other defects, such as mode collapse and
the lack of a metric to detect convergence. We introduce a novel inferential
Wasserstein GAN (iWGAN) model, which is a principled framework to fuse
auto-encoders and WGANs. The iWGAN model jointly learns an encoder network and
a generator network motivated by the iterative primal dual optimization
process. The encoder network maps the observed samples to the latent space and
the generator network maps the samples from the latent space to the data space.
We establish the generalization error bound of the iWGAN to theoretically
justify its performance. We further provide a rigorous probabilistic
interpretation of our model under the framework of maximum likelihood
estimation. The iWGAN, with a clear stopping criterion, has many advantages over
other autoencoder GANs. The empirical experiments show that the iWGAN greatly
mitigates the symptom of mode collapse, speeds up the convergence, and is able
to provide a measurement of quality check for each individual sample. We
illustrate the ability of the iWGAN by obtaining competitive and stable
performances for benchmark datasets.
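The Wasserstein distance at the core of both WGAN and the iWGAN has a simple closed form in one dimension: the optimal transport plan pairs sorted samples, so the W1 distance between two equal-size empirical samples is the mean absolute difference after sorting. A minimal numpy sketch of the metric itself (an illustration, not the authors' code):

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size 1-D empirical samples.

    In one dimension the optimal transport plan matches sorted samples,
    so W1 reduces to the mean absolute difference after sorting.
    """
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed for simplicity"
    return float(np.mean(np.abs(x - y)))

# Identical samples have zero distance; shifting by a constant c gives W1 ~ |c|.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
print(wasserstein_1d(a, a))        # → 0.0
print(wasserstein_1d(a, a + 2.0))  # ≈ 2.0
```

In higher dimensions no such closed form exists, which is why WGAN-style models estimate the distance with a discriminator (critic) network instead.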
Related papers
- Generative Modeling for Tabular Data via Penalized Optimal Transport Network [2.0319002824093015]
The Wasserstein generative adversarial network (WGAN) is a notable improvement in generative modeling.
We propose POTNet, a generative deep neural network based on a novel, robust, and interpretable marginally-penalized Wasserstein (MPW) loss.
arXiv Detail & Related papers (2024-02-16T05:27:05Z)
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
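Flow-based likelihood estimators of this kind rest on the change-of-variables formula: for an invertible map z = f(x), log p_X(x) = log p_Z(f(x)) + log |df/dx|. A toy sketch with a simple affine flow (a generic illustration, not the paper's one-way flow; `affine_flow_logpdf` is a hypothetical helper name):

```python
import numpy as np

def affine_flow_logpdf(x, mu, sigma):
    """Log-density of x under an affine flow z = (x - mu) / sigma
    with a standard-normal base distribution on z.

    log p_X(x) = log N(z; 0, 1) + log |df/dx|, where |df/dx| = 1/sigma.
    """
    z = (x - mu) / sigma                            # forward pass of the flow
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))    # standard-normal log-density
    log_det = -np.log(sigma)                        # log |df/dx|
    return log_base + log_det

# Cross-check: the result must equal the closed-form Normal(mu=1, sigma=2)
# log-density, since the affine flow of a standard normal is exactly that.
x = np.linspace(-3, 3, 7)
direct = -0.5 * ((x - 1.0) / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
print(np.allclose(affine_flow_logpdf(x, 1.0, 2.0), direct))  # → True
```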
arXiv Detail & Related papers (2023-07-19T10:26:29Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions [31.952858521063277]
We analyze the impact of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality.
We further demonstrate the effect of different activation functions in the discriminator.
arXiv Detail & Related papers (2021-07-12T18:33:49Z)
- Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Training Generative Adversarial Networks (GANs) requires solving non-concave min-max optimization problems.
Recent theory has highlighted the importance of gradient descent (GD) dynamics for reaching globally optimal solutions.
We show that in an overparameterized GAN with a one-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-concave min-max problem.
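Convergence of simultaneous gradient descent-ascent (GDA) to a saddle point can be illustrated on a toy objective (a hedged sketch of the GDA dynamics only, far simpler than the paper's overparameterized GAN setting):

```python
# Simultaneous gradient descent-ascent (GDA) on the convex-concave objective
#   f(x, y) = 0.5*x**2 - 0.5*y**2,
# whose unique saddle point is (0, 0): descend on x, ascend on y.
def gda(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        gx, gy = x, -y       # df/dx = x, df/dy = -y (both read before updating)
        x = x - lr * gx      # descent step on the min player
        y = y + lr * gy      # ascent step on the max player
        # Each update contracts: x *= (1 - lr), y *= (1 - lr),
        # so the iterates converge geometrically to the saddle (0, 0).
    return x, y

x, y = gda(3.0, -2.0)
print(abs(x) + abs(y) < 1e-6)  # → True: iterates reached the saddle
```

On bilinear or non-concave objectives, plain GDA can cycle or diverge, which is why results like the one above (global convergence in the overparameterized regime) are notable.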
arXiv Detail & Related papers (2021-04-12T16:23:37Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to accelerate large-scale heterogeneous outlier detection.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.