An error analysis of generative adversarial networks for learning distributions
- URL: http://arxiv.org/abs/2105.13010v1
- Date: Thu, 27 May 2021 08:55:19 GMT
- Title: An error analysis of generative adversarial networks for learning distributions
- Authors: Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang
- Abstract summary: This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.
GANs can adaptively learn data distributions that have low-dimensional structure or Hölder densities.
Our analysis is based on a new oracle inequality that decomposes the estimation error into the generator and discriminator approximation errors and the statistical error.
- Score: 11.842861158282265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies how well generative adversarial networks (GANs)
learn probability distributions from finite samples. Our main results estimate
the convergence rates of GANs under a collection of integral probability
metrics defined through Hölder classes, including the Wasserstein distance as
a special case. We also show that GANs can adaptively learn data distributions
that have low-dimensional structure or Hölder densities, when the network
architectures are chosen properly. In particular, for distributions
concentrated around a low-dimensional set, we prove that the learning rates of
GANs do not depend on the high ambient dimension, but only on the lower
intrinsic dimension. Our analysis is based on a new oracle inequality that
decomposes the estimation error into the generator approximation error, the
discriminator approximation error, and the statistical error, which may be of
independent interest.
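For concreteness, the integral probability metric (IPM) in question takes the
standard form (notation ours, not necessarily the paper's):

  d_{\mathcal{F}}(\mu, \nu) = \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \right|,

where \mathcal{F} is a Hölder ball; taking \mathcal{F} to be the 1-Lipschitz
functions recovers the Wasserstein-1 distance. Schematically, an oracle
inequality of the kind described bounds the error of the GAN estimate
\hat{\mu}_n of the target \mu as

  d_{\mathcal{F}}(\hat{\mu}_n, \mu) \lesssim \mathcal{E}_{\mathrm{gen}} + \mathcal{E}_{\mathrm{disc}} + \mathcal{E}_{\mathrm{stat}},

with the three terms denoting the generator approximation error, the
discriminator approximation error, and the statistical error incurred by
having only finitely many samples.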
Related papers
- Score-based generative models break the curse of dimensionality in learning a family of sub-Gaussian probability distributions [5.801621787540268]
We introduce a notion of complexity for probability distributions in terms of their relative density with respect to the standard Gaussian measure.
We prove that if the log-relative density can be locally approximated by a neural network whose parameters can be suitably bounded, then the distribution generated by empirical score matching approximates the target distribution.
An essential ingredient of our proof is to derive a dimension-free deep neural network approximation rate for the true score function associated with the forward process.
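For reference, in the standard diffusion-model notation (assumed here, not
quoted from the paper), if p_t denotes the marginal density of the forward
noising process at time t, the true score function is

  s(x, t) = \nabla_x \log p_t(x),

and score matching fits a network s_\theta by minimizing
\mathbb{E}_{t,\, x \sim p_t} \| s_\theta(x, t) - \nabla_x \log p_t(x) \|^2
(implemented in practice through its denoising form).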
arXiv Detail & Related papers (2024-02-12T22:02:23Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The distribution generated from the estimated score function captures the geometric structure of the data and converges to a neighborhood of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning Distributions by Generative Adversarial Networks: Approximation and Generalization [0.6768558752130311]
We study how well generative adversarial networks learn from finite samples by analyzing the convergence rates of these models.
Our analysis is based on a new oracle inequality that decomposes the estimation error of GANs into the discriminator and generator approximation errors.
For the generator approximation error, we show that a neural network can approximately transform a low-dimensional source distribution into a high-dimensional target distribution.
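As a toy illustration of this push-forward view (hypothetical code, not the
paper's construction), a fixed ReLU network maps 2-dimensional Gaussian noise
to 100-dimensional samples:

import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, hidden = 2, 100, 64
# Random weights of a two-layer ReLU network g: R^2 -> R^100.
W1 = rng.normal(size=(hidden, d_src)) / np.sqrt(d_src)
W2 = rng.normal(size=(d_tgt, hidden)) / np.sqrt(hidden)

def generator(z):
    # Push a low-dimensional source sample forward into the target space.
    return W2 @ np.maximum(W1 @ z, 0.0)

z = rng.normal(size=d_src)  # low-dimensional source sample
x = generator(z)            # high-dimensional generated sample
print(x.shape)              # (100,)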
arXiv Detail & Related papers (2022-05-25T09:26:17Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these techniques to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
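One way to unpack that remark (our reading, in standard notation): the
Kolmogorov-Smirnov distance over halfspaces is

  d_{\mathrm{KS}}(\mu, \nu) = \sup_{u, t} \big| \mu(\{x : u^\top x \ge t\}) - \nu(\{x : u^\top x \ge t\}) \big|,

and a smoothed variant replaces the indicator 1\{u^\top x \ge t\} with the
sigmoid \sigma(u^\top x - t), i.e. with the function class computed by a
one-neuron discriminator.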
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Non-Asymptotic Error Bounds for Bidirectional GANs [10.62911757343557]
We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error under the Dudley distance.
This is the first theoretical guarantee for the bidirectional GAN learning approach.
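For reference, the Dudley distance is the bounded-Lipschitz metric

  d_{\mathrm{BL}}(\mu, \nu) = \sup \big\{ |\mathbb{E}_{\mu} f - \mathbb{E}_{\nu} f| \,:\, \|f\|_\infty + \mathrm{Lip}(f) \le 1 \big\};

since every such test function is in particular 1-Lipschitz, this metric is
dominated by the Wasserstein-1 distance.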
arXiv Detail & Related papers (2021-10-24T00:12:03Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers fully decentralized settings.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
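For context, the method builds on the classical extra-gradient update for a VI
with operator F, step size \gamma, and feasible set \mathcal{X}:

  x_{k+1/2} = \mathrm{proj}_{\mathcal{X}}\big(x_k - \gamma F(x_k)\big), \qquad
  x_{k+1} = \mathrm{proj}_{\mathcal{X}}\big(x_k - \gamma F(x_{k+1/2})\big);

the decentralized local variant runs stochastic versions of this step on each
device between communication rounds (our paraphrase of the setup).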
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- A likelihood approach to nonparametric estimation of a singular distribution using deep generative models [4.329951775163721]
We investigate a likelihood approach to nonparametric estimation of a singular distribution using deep generative models.
We prove that a novel and effective solution exists by perturbing the data with instance noise.
We also characterize the class of distributions that can be efficiently estimated via deep generative models.
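Concretely, in the standard instance-noise formulation (assumed here), each
observation is replaced by \tilde{X}_i = X_i + \sigma \epsilon_i with
\epsilon_i \sim N(0, I_D); the perturbed distribution then has a density even
when the original distribution is singular, which makes likelihood-based
training well-posed.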
arXiv Detail & Related papers (2021-05-09T23:13:58Z)
- General stochastic separation theorems with optimal bounds [68.8204255655161]
The phenomenon of separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities.
Errors, or clusters of errors, can be separated from the rest of the data.
The ability to correct an AI system also opens up the possibility of an attack on it, and high dimensionality induces vulnerabilities caused by the same separability.
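A toy numeric illustration of the phenomenon (hypothetical code, using the
Fisher-separability criterion \langle x, y \rangle \le \alpha \langle x, x \rangle
from this line of work):

import numpy as np

rng = np.random.default_rng(0)
d, n, alpha = 2000, 10000, 0.8
# Sample n points uniformly from the unit ball in R^d
# (uniform direction, radius distributed as U^(1/d)).
g = rng.normal(size=(n, d))
g /= np.linalg.norm(g, axis=1, keepdims=True)
pts = g * rng.uniform(size=(n, 1)) ** (1.0 / d)
# Check from how many of the remaining points the first point is
# separable by the simple linear test <x, y> <= alpha * <x, x>.
x, others = pts[0], pts[1:]
frac = np.mean(others @ x <= alpha * (x @ x))
print(f"separated fraction: {frac:.4f}")  # typically very close to 1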
arXiv Detail & Related papers (2020-10-11T13:12:41Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
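Schematically (our paraphrase), subadditivity here means that for
distributions p and q that factorize over the same graph with neighborhoods
\{N_i\},

  d(p, q) \le \sum_i d(p_{N_i}, q_{N_i})

for suitable divergences d, where p_{N_i} is the marginal on neighborhood N_i;
this is what licenses training against a set of local discriminators instead
of a single global one.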
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
- Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.