Approximating Probability Distributions by using Wasserstein Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2103.10060v4
- Date: Fri, 30 Jun 2023 03:45:05 GMT
- Title: Approximating Probability Distributions by using Wasserstein Generative
Adversarial Networks
- Authors: Yihang Gao, Michael K. Ng, Mingjie Zhou
- Abstract summary: Wasserstein generative adversarial networks (WGANs) with GroupSort neural networks as their discriminators are studied.
It is shown that the error bound of the approximation for the target distribution depends on the width and depth (capacity) of the generators and discriminators.
- Score: 16.005358327268194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studied here are Wasserstein generative adversarial networks (WGANs) with
GroupSort neural networks as their discriminators. It is shown that the error
bound of the approximation for the target distribution depends on the width and
depth (capacity) of the generators and discriminators and the number of samples
in training. A quantified generalization bound is established for the
Wasserstein distance between the generated and target distributions. According
to the theoretical results, WGANs have a higher requirement for the capacity of
discriminators than that of generators, which is consistent with some existing
results. More importantly, results obtained with overly deep and wide
(high-capacity) generators may be worse than those with low-capacity generators
if the discriminators are not sufficiently strong. Numerical results obtained using
Swiss roll and MNIST datasets confirm the theoretical results.
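The GroupSort activation used in the paper's discriminators sorts disjoint groups of pre-activations; it is gradient-norm-preserving, which is why it appears in Lipschitz-constrained (Wasserstein) discriminators. A minimal NumPy sketch of the activation (the function name `group_sort` and the demo values are ours, not the paper's):

```python
import numpy as np

def group_sort(x, group_size=2):
    """GroupSort activation: split the last axis into groups of
    `group_size` features and sort within each group.
    The operation is a permutation of inputs, so it preserves
    gradient norms (useful for 1-Lipschitz discriminators)."""
    n = x.shape[-1]
    assert n % group_size == 0, "feature dim must be divisible by group size"
    g = x.reshape(*x.shape[:-1], n // group_size, group_size)
    return np.sort(g, axis=-1).reshape(x.shape)

print(group_sort(np.array([3.0, 1.0, 4.0, 2.0])))  # [1. 3. 2. 4.]
```

With `group_size=2` this is the "MaxMin" variant often used in practice; larger groups trade a cheaper permutation structure for more expressive sorting.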
Related papers
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
arXiv Detail & Related papers (2023-07-19T10:26:29Z) - GANs Settle Scores! [16.317645727944466]
We propose a unified variational approach to analyzing generator optimization.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
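Score matching, as invoked above, compares the score function $\nabla_x \log p(x)$ of the model's output distribution with that of the data; the Fisher divergence between the two is zero exactly when the scores agree. A toy NumPy sketch under an assumed 1-D Gaussian setup (the names and the grid search here are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=10_000)   # "data distribution": N(0, 1)

def score_model(x, mu):
    return -(x - mu)                        # score of N(mu, 1)

def score_data(x):
    return -x                               # score of N(0, 1)

def fisher_div(mu):
    """Empirical Fisher divergence between model and data scores."""
    return np.mean((score_model(data, mu) - score_data(data)) ** 2)

# Grid search over the model mean: the divergence is mu^2, minimized at mu = 0,
# i.e. score matching recovers the data distribution's mean.
mus = np.linspace(-2.0, 2.0, 41)
best = mus[np.argmin([fisher_div(m) for m in mus])]
```

Here the divergence reduces algebraically to $\mu^2$, so the minimizer coincides with the data mean, mirroring the summary's claim that the optimal generator matches scores.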
arXiv Detail & Related papers (2023-06-02T16:24:07Z) - On some theoretical limitations of Generative Adversarial Networks [77.34726150561087]
It is a general assumption that GANs can generate any probability distribution.
We provide a new result based on Extreme Value Theory showing that GANs can't generate heavy tailed distributions.
arXiv Detail & Related papers (2021-10-21T06:10:38Z) - Wasserstein Generative Adversarial Uncertainty Quantification in
Physics-Informed Neural Networks [19.15477953428763]
Wasserstein Generative Adversarial Networks (WGANs) are designed to learn the uncertainty in solutions of partial differential equations.
We show that our physics-informed WGANs have higher requirement for the capacity of discriminators than that of generators.
arXiv Detail & Related papers (2021-08-30T08:18:58Z) - An error analysis of generative adversarial networks for learning
distributions [11.842861158282265]
Generative adversarial networks (GANs) learn probability distributions from finite samples.
GANs can adaptively learn data distributions with low-dimensional structure or with Hölder densities.
Our analysis is based on a new oracle inequality decomposing the estimation error into generator and discriminator approximation error and statistical error.
arXiv Detail & Related papers (2021-05-27T08:55:19Z) - Sampling-Decomposable Generative Adversarial Recommender [84.05894139540048]
We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR)
In this framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
arXiv Detail & Related papers (2020-11-02T13:19:10Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling [106.68533003806276]
We show that improved sampling from a GAN can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods that work in the high-dimensional pixel space.
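The latent sampling idea above can be sketched with unadjusted Langevin dynamics on the energy $E(z) = -\log p(z) - d(G(z))$. The toy one-dimensional generator and discriminator below are hypothetical stand-ins of our own, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    return z                           # identity "generator" (toy stand-in)

def discriminator_logit(x):
    return -0.5 * (x - 2.0) ** 2       # prefers samples near x = 2 (toy stand-in)

def energy(z):
    # E(z) = -log p(z) - d(G(z)) with a standard normal latent prior
    return 0.5 * z ** 2 - discriminator_logit(generator(z))

def grad_energy(z, h=1e-4):
    return (energy(z + h) - energy(z - h)) / (2 * h)   # finite differences

def ddls_chain(z0=0.0, steps=5000, step=1e-2):
    """Unadjusted Langevin dynamics on the latent energy:
    z <- z - (step/2) * grad E(z) + sqrt(step) * noise."""
    zs = np.empty(steps)
    z = z0
    for t in range(steps):
        z = z - 0.5 * step * grad_energy(z) + np.sqrt(step) * rng.normal()
        zs[t] = z
    return zs

zs = ddls_chain()
# The chain concentrates near z = 1, the minimizer of
# E(z) = 0.5 z^2 + 0.5 (z - 2)^2 = (z - 1)^2 + 1.
```

In the real method the gradient comes from backpropagation through the discriminator and generator rather than finite differences; everything else is the same update.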
arXiv Detail & Related papers (2020-03-12T23:33:50Z) - GANs with Conditional Independence Graphs: On Subadditivity of
Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z) - Distribution Approximation and Statistical Estimation Guarantees of
Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.