Non-Asymptotic Error Bounds for Bidirectional GANs
- URL: http://arxiv.org/abs/2110.12319v1
- Date: Sun, 24 Oct 2021 00:12:03 GMT
- Title: Non-Asymptotic Error Bounds for Bidirectional GANs
- Authors: Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang
- Abstract summary: We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error under the Dudley distance.
This is the first theoretical guarantee for the bidirectional GAN learning approach.
- Score: 10.62911757343557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation
error under the Dudley distance between the latent joint distribution and the
data joint distribution with appropriately specified architecture of the neural
networks used in the model. To the best of our knowledge, this is the first
theoretical guarantee for the bidirectional GAN learning approach. An appealing
feature of our results is that they do not assume the reference and the data
distributions to have the same dimensions or these distributions to have
bounded support. Such conditions are commonly imposed in the existing
convergence analyses of unidirectional GANs but may not be satisfied in
practice. Our results are also applicable to the Wasserstein bidirectional GAN
if the target distribution is assumed to have bounded support. To prove these
results, we construct neural network functions that push forward an empirical
distribution to another arbitrary empirical distribution on a possibly
different-dimensional space. We also develop a novel decomposition of the
integral probability metric for the error analysis of bidirectional GANs. These
basic theoretical results are of independent interest and can be applied to
other related learning problems.
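For readers who want the objects pinned down, here is a minimal sketch in our own notation (the paper's exact definitions and constants may differ): the Dudley distance is the integral probability metric (IPM) generated by bounded Lipschitz functions, and the BiGAN estimation error is this distance between the two joint distributions.

```latex
% Sketch in our own notation; the paper's exact terms and constants differ.
% Dudley (bounded-Lipschitz) distance as an integral probability metric:
\[
  d_{\mathrm{BL}}(\mu,\nu)
  \;=\; \sup_{\|f\|_\infty \le 1,\ \mathrm{Lip}(f)\le 1}
  \bigl|\, \mathbb{E}_{\mu} f - \mathbb{E}_{\nu} f \,\bigr|.
\]
% For a BiGAN with generator g and encoder e, mu is the "latent joint" law of
% (Z, g(Z)) and nu is the "data joint" law of (e(X), X). The error analysis
% splits the IPM for the estimator hat{mu}_n built from n samples, roughly, as
\[
  d_{\mathrm{BL}}(\hat\mu_n,\nu)
  \;\lesssim\;
  \underbrace{\varepsilon_{\mathrm{approx}}}_{\text{network approximation}}
  \;+\;
  \underbrace{\varepsilon_{\mathrm{stat}}}_{\text{finite-sample fluctuation}}.
\]
```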
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- On the Statistical Properties of Generative Adversarial Models for Low Intrinsic Data Dimension [38.964624328622]
We derive statistical guarantees on the estimated densities in terms of the intrinsic dimension of the data and the latent space.
We demonstrate that GANs can effectively achieve the minimax optimal rate even for non-smooth underlying distributions.
arXiv Detail & Related papers (2024-01-28T23:18:10Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends this framework to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance; a toy version is sketched after this entry.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
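A toy numerical version of that remark, under our own simplifications (the paper's losses and smoothing are more general): the one-dimensional KS statistic is the IPM over half-line indicators, and replacing each indicator with a sigmoid ramp of width h gives a differentiable, "smoothed KS" objective.

```python
import numpy as np

def smoothed_ks(x, y, h=0.1, num_thresholds=200):
    """Smoothed Kolmogorov-Smirnov distance between two 1-D samples.

    Toy illustration (our construction, not the paper's exact loss): the hard
    indicator 1{v <= t} in the usual KS statistic is replaced by a sigmoid
    ramp of width h, maximizing the discrepancy over a grid of thresholds t.
    """
    ts = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), num_thresholds)
    # sigmoid((t - v) / h) -> 1{v <= t} as h -> 0, recovering the KS statistic
    fx = 1.0 / (1.0 + np.exp(-(ts[:, None] - x[None, :]) / h))  # shape (T, n)
    fy = 1.0 / (1.0 + np.exp(-(ts[:, None] - y[None, :]) / h))  # shape (T, m)
    return np.abs(fx.mean(axis=1) - fy.mean(axis=1)).max()

rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 2000), rng.normal(0.5, 1.0, 2000)
print(smoothed_ks(x, y))  # approaches the KS distance as h -> 0
```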
- Investigating Shifts in GAN Output-Distributions [5.076419064097734]
We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN-generated data.
Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms.
arXiv Detail & Related papers (2021-12-28T09:16:55Z)
- An error analysis of generative adversarial networks for learning distributions [11.842861158282265]
This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.
GANs are able to adaptively learn data distributions with low-dimensional structure or with Hölder densities.
Our analysis is based on a new oracle inequality decomposing the estimation error into generator and discriminator approximation errors and statistical error; a schematic version is given after this entry.
arXiv Detail & Related papers (2021-05-27T08:55:19Z)
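A schematic version of that oracle inequality, in our own notation (the paper's constants and exact error terms differ):

```latex
% Schematic only. mu: target distribution; hat{mu}_n: GAN estimate from n
% samples; G: generator class pushing forward a reference rho; F: the
% discriminator class defining the IPM d_F.
\[
  d_{\mathcal F}(\hat\mu_n, \mu)
  \;\lesssim\;
  \underbrace{\inf_{g\in\mathcal G} d_{\mathcal F}(g_{\#}\rho,\, \mu)}_{\text{generator approximation}}
  \;+\; \underbrace{\varepsilon_{\mathcal F}}_{\text{discriminator approximation}}
  \;+\; \underbrace{\varepsilon_n}_{\text{statistical error}}.
\]
```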
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and analyze AI instabilities.
Errors or clusters of errors can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and high dimensionality induces vulnerabilities caused by the same separability; a toy numerical illustration follows this entry.
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
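The separability phenomenon referenced above can be illustrated with a tiny experiment (our construction, not the paper's theorems): in high dimension, a random point x is typically linearly separable from the rest of a random sample by the single functional y ↦ ⟨x, y⟩ with threshold α‖x‖².

```python
import numpy as np

def separable_fraction(dim, n=1000, alpha=0.8, trials=20, seed=0):
    """Fraction of trials in which a random point is linearly separable from
    the rest of the sample by y -> <x, y> with threshold alpha * ||x||^2.

    Illustrative experiment only; see the paper for the precise statements.
    """
    rng = np.random.default_rng(seed)
    separable = 0
    for _ in range(trials):
        pts = rng.uniform(-1.0, 1.0, size=(n, dim))  # sample from a cube
        x, others = pts[0], pts[1:]
        # x is separated from every other point y if <x, y> < alpha * <x, x>
        if np.all(others @ x < alpha * (x @ x)):
            separable += 1
    return separable / trials

for d in (2, 20, 200):
    print(d, separable_fraction(d))  # separability improves with dimension
```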
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF; a minimal sketch follows this entry.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
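A minimal sketch of that design idea, under our own simplified setup (names and architecture are ours): instead of one discriminator on the full joint vector, attach a small discriminator to each neighborhood of a known dependence graph and sum their scores; subadditivity is what lets the sum of local divergences bound the global one.

```python
import torch
import torch.nn as nn

# Hypothetical chain-structured dependence graph over 6 variables:
# the neighborhoods are the cliques {i, i+1}, one small discriminator each.
NEIGHBORHOODS = [(i, i + 1) for i in range(5)]

def make_discriminator(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, 1))

discs = [make_discriminator(len(nb)) for nb in NEIGHBORHOODS]

def local_critic_score(real, fake):
    """Sum of per-neighborhood IPM-style discrepancies (difference of means).

    The critic maximizes this sum and the generator minimizes it; by
    subadditivity over the graph's neighborhoods, driving each local term
    down controls the divergence between the full joint distributions.
    """
    score = 0.0
    for nb, d in zip(NEIGHBORHOODS, discs):
        idx = list(nb)
        score = score + d(real[:, idx]).mean() - d(fake[:, idx]).mean()
    return score

real = torch.randn(32, 6)  # stand-in for data samples
fake = torch.randn(32, 6)  # stand-in for generator output
print(local_critic_score(real, fake))
```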
- Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)