Distribution Approximation and Statistical Estimation Guarantees of
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2002.03938v3
- Date: Thu, 21 Jul 2022 01:36:31 GMT
- Authors: Minshuo Chen, Wenjing Liao, Hongyuan Zha, Tuo Zhao
- Abstract summary: Generative Adversarial Networks (GANs) have achieved great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
- Score: 82.61546580149427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have achieved great success in
unsupervised learning. Despite their remarkable empirical performance, there are
limited theoretical studies on the statistical properties of GANs. This paper
provides approximation and statistical guarantees of GANs for the estimation of
data distributions that have densities in a Hölder space. Our main result
shows that, if the generator and discriminator network architectures are
properly chosen, GANs are consistent estimators of data distributions under
strong discrepancy metrics, such as the Wasserstein-1 distance. Furthermore,
when the data distribution exhibits low-dimensional structures, we show that
GANs are capable of capturing the unknown low-dimensional structures in data
and enjoy a fast statistical convergence rate that is free of the curse of the
ambient dimensionality. Our analysis for low-dimensional data builds upon a universal
approximation theory of neural networks with Lipschitz continuity guarantees,
which may be of independent interest.
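The Wasserstein-1 consistency claim above can be illustrated numerically. For one-dimensional empirical distributions with equal sample sizes, the Wasserstein-1 distance reduces to the mean absolute difference between sorted samples. The following is a minimal NumPy sketch (not from the paper; the function name and the equal-sample-size simplification are illustrative assumptions):

```python
import numpy as np

def wasserstein1_empirical(x, y):
    """Wasserstein-1 distance between two 1-D empirical distributions
    with the same number of samples: mean |sorted(x) - sorted(y)|."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(x - y)))

# Consistency in action: two independent samples from the same
# distribution move closer in Wasserstein-1 distance as n grows.
rng = np.random.default_rng(0)
d_small = wasserstein1_empirical(rng.normal(size=100), rng.normal(size=100))
d_large = wasserstein1_empirical(rng.normal(size=100_000), rng.normal(size=100_000))
print(d_small, d_large)  # d_large is much smaller than d_small
```

The paper's point is that a well-chosen GAN generator achieves an analogous shrinkage: the distance between the generated distribution and the data distribution vanishes as the sample size grows.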
Related papers
- On the Statistical Properties of Generative Adversarial Models for Low
Intrinsic Data Dimension [38.964624328622]
We derive statistical guarantees on the estimated densities in terms of the intrinsic dimension of the data and the latent space.
We demonstrate that GANs can effectively achieve the minimax optimal rate even for non-smooth underlying distributions.
arXiv Detail & Related papers (2024-01-28T23:18:10Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion
Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Statistical Theory of Differentially Private Marginal-based Data
Synthesis Algorithms [30.330715718619874]
Marginal-based methods achieve promising performance in the synthetic data competition hosted by the National Institute of Standards and Technology (NIST).
Despite their promising performance in practice, the statistical properties of marginal-based methods are rarely studied in the literature.
arXiv Detail & Related papers (2023-01-21T01:32:58Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends this framework to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- An error analysis of generative adversarial networks for learning
distributions [11.842861158282265]
Generative adversarial networks (GANs) learn probability distributions from finite samples.
GANs can adaptively learn data distributions that have low-dimensional structure or Hölder densities.
Our analysis is based on a new oracle inequality decomposing the estimation error into generator and discriminator approximation error and statistical error.
arXiv Detail & Related papers (2021-05-27T08:55:19Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- GANs with Conditional Independence Graphs: On Subadditivity of
Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
- Distributionally Robust Chance Constrained Programming with Generative
Adversarial Networks (GANs) [0.0]
A novel generative adversarial network (GAN) based data-driven distributionally robust chance constrained programming framework is proposed.
A GAN is applied to fully extract distributional information from historical data in a nonparametric and unsupervised way.
The proposed framework is then applied to supply chain optimization under demand uncertainty.
arXiv Detail & Related papers (2020-02-28T00:05:22Z)
- Brainstorming Generative Adversarial Networks (BGANs): Towards
Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed using which multiple agents can generate real-like data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.