Minimax Optimality (Probably) Doesn't Imply Distribution Learning for
GANs
- URL: http://arxiv.org/abs/2201.07206v1
- Date: Tue, 18 Jan 2022 18:59:21 GMT
- Title: Minimax Optimality (Probably) Doesn't Imply Distribution Learning for
GANs
- Authors: Sitan Chen, Jerry Li, Yuanzhi Li, Raghu Meka
- Abstract summary: We show that standard cryptographic assumptions imply that this stronger condition is still insufficient.
Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.
- Score: 44.4200799586461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arguably the most fundamental question in the theory of generative
adversarial networks (GANs) is to understand to what extent GANs can actually
learn the underlying distribution. Theoretical and empirical evidence suggests
local optimality of the empirical training objective is insufficient. Yet, it
does not rule out the possibility that achieving a true population minimax
optimal solution might imply distribution learning.
In this paper, we show that standard cryptographic assumptions imply that
this stronger condition is still insufficient. Namely, we show that if local
pseudorandom generators (PRGs) exist, then for a large family of natural
continuous target distributions, there are ReLU network generators of constant
depth and polynomial size which take Gaussian random seeds so that (i) the
output is far in Wasserstein distance from the target distribution, but (ii) no
polynomially large Lipschitz discriminator ReLU network can detect this. This
implies that even achieving a population minimax optimal solution to the
Wasserstein GAN objective is likely insufficient for distribution learning in
the usual statistical sense. Our techniques reveal a deep connection between
GANs and PRGs, which we believe will lead to further insights into the
computational landscape of GANs.
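To make the gap concrete: the population Wasserstein GAN objective measures distance only through the discriminator class, while distribution learning requires closeness in the full Wasserstein distance. A minimal formalization of the separation the abstract describes (notation ours, not the paper's):

```latex
% Neural-net IPM over a class F of polynomial-size Lipschitz ReLU discriminators:
d_{\mathcal{F}}(\mu,\nu) \;=\; \sup_{f\in\mathcal{F}}
  \Big|\,\mathbb{E}_{x\sim\mu}[f(x)] - \mathbb{E}_{x\sim\nu}[f(x)]\,\Big|

% Wasserstein-1 distance: the same supremum, but over ALL 1-Lipschitz functions:
W_1(\mu,\nu) \;=\; \sup_{\mathrm{Lip}(f)\le 1}
  \Big|\,\mathbb{E}_{\mu}[f] - \mathbb{E}_{\nu}[f]\,\Big|

% Under the local-PRG assumption, the paper exhibits a constant-depth,
% polynomial-size ReLU generator G fed by a Gaussian seed Z with
d_{\mathcal{F}}\big(G(Z),\,\mu_{\mathrm{target}}\big) \approx 0
\quad\text{yet}\quad
W_1\big(G(Z),\,\mu_{\mathrm{target}}\big) = \Omega(1).
```

Since every discriminator in F is Lipschitz, d_F is (up to the Lipschitz scale) a lower bound on W_1; the point is the converse fails badly, so driving d_F to near zero, i.e. population minimax optimality, leaves W_1 unconstrained.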
Related papers
- Generative Conditional Distributions by Neural (Entropic) Optimal Transport [12.152228552335798]
We introduce a novel neural entropic optimal transport method designed to learn generative models of conditional distributions.
Our method relies on the minimax training of two neural networks.
Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques.
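As an illustration of the entropic optimal transport machinery this line of work builds on (not the authors' method; all names and parameter choices are ours), a minimal Sinkhorn sketch for the entropic OT cost between two empirical samples:

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.5, n_iters=200):
    """Entropic OT cost between empirical samples x (n,d) and y (m,d):
    solves min_P <P, C> + eps*KL(P || a b^T) with uniform marginals a, b
    via Sinkhorn fixed-point updates on the scaling vectors u, v."""
    n, m = len(x), len(y)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean costs
    K = np.exp(-C / eps)                                # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return (P * C).sum()

rng = np.random.default_rng(0)
print(sinkhorn_cost(rng.normal(0, 1, (64, 2)), rng.normal(1, 1, (64, 2))))
```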
arXiv Detail & Related papers (2024-06-04T13:45:35Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high-dimensional robust statistics problems.
Our work extends this framework to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
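For context, the classical (unsmoothed) Kolmogorov-Smirnov distance between two 1-D samples, which the losses above smooth and generalize, can be computed as follows (illustrative sketch only, not the paper's loss):

```python
import numpy as np

def ks_distance(x, y):
    """Kolmogorov-Smirnov distance sup_t |F_x(t) - F_y(t)| between the
    empirical CDFs of two 1-D samples; the sup is attained at sample points."""
    pts = np.sort(np.concatenate([x, y]))   # candidate locations of the sup
    Fx = np.searchsorted(np.sort(x), pts, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), pts, side="right") / len(y)
    return np.abs(Fx - Fy).max()

rng = np.random.default_rng(0)
print(ks_distance(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))
```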
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- On some theoretical limitations of Generative Adversarial Networks [77.34726150561087]
It is a common assumption that GANs can generate any probability distribution.
We provide a new result, based on Extreme Value Theory, showing that GANs cannot generate heavy-tailed distributions.
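The intuition (our illustration, not the paper's EVT argument) is that a Lipschitz ReLU generator maps light-tailed Gaussian seeds to light-tailed outputs, so its extreme quantiles stay bounded; a quick empirical check with a random two-layer net:

```python
import numpy as np

rng = np.random.default_rng(0)

# Push Gaussian seeds through a random 2-layer ReLU net: since the map is
# Lipschitz, the output inherits the Gaussian's light tails.
z = rng.normal(size=(100_000, 8))
W1 = rng.normal(size=(8, 32)) / np.sqrt(8)
W2 = rng.normal(size=32) / np.sqrt(32)
gan_like = np.maximum(z @ W1, 0) @ W2

heavy = rng.pareto(a=1.5, size=100_000)   # a genuinely heavy-tailed target

for q in (0.99, 0.999, 0.9999):
    print(q, np.quantile(gan_like, q), np.quantile(heavy, q))
# The ReLU-net quantiles grow slowly; the Pareto quantiles blow up.
```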
arXiv Detail & Related papers (2021-10-21T06:10:38Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem that often arises in real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
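A minimal sketch of the idea, with fixed 1-D Gaussians standing in for the probabilistic representation network (all names and values hypothetical): the KL term is estimated as a minibatch average of log-density differences under samples from p.

```python
import numpy as np
from scipy.stats import norm

def mc_kl(mu_p, sigma_p, mu_q, sigma_q, n=10_000, rng=None):
    """Monte Carlo estimate of KL(p || q) for 1-D Gaussians:
    KL = E_{z~p}[log p(z) - log q(z)], averaged over n samples."""
    rng = rng or np.random.default_rng(0)
    z = rng.normal(mu_p, sigma_p, size=n)   # "minibatch" of samples from p
    return np.mean(norm.logpdf(z, mu_p, sigma_p) - norm.logpdf(z, mu_q, sigma_q))

# Closed form for Gaussians, to sanity-check the estimator:
# KL = log(s_q/s_p) + (s_p^2 + (mu_p - mu_q)^2) / (2 s_q^2) - 1/2
exact = np.log(2.0) + (1.0 + 0.25) / (2 * 4.0) - 0.5
print(mc_kl(0.0, 1.0, 0.5, 2.0), exact)    # the two should roughly agree
```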
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Forward Super-Resolution: How Can GANs Learn Hierarchical Generative Models for Real-World Distributions [66.05472746340142]
Generative adversarial networks (GANs) are among the most successful models for learning high-complexity, real-world distributions.
In this paper we show how GANs can efficiently learn the distribution of real-life images.
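As a schematic of the hierarchical, coarse-to-fine picture (a hypothetical PyTorch architecture, not the paper's construction): each stage upsamples the previous output and applies a residual refinement, i.e. a forward super-resolution step.

```python
import torch
import torch.nn as nn

class CoarseToFineGenerator(nn.Module):
    """Schematic hierarchical generator: produce a coarse image from the
    seed, then repeatedly upsample and refine (a super-resolution step)."""
    def __init__(self, seed_dim=64, base=8, steps=3):
        super().__init__()
        self.base = base
        self.to_coarse = nn.Linear(seed_dim, 3 * base * base)
        self.refiners = nn.ModuleList(
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
            for _ in range(steps)
        )
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, z):
        x = self.to_coarse(z).view(-1, 3, self.base, self.base)
        for refine in self.refiners:     # each stage doubles the resolution
            x = self.up(x)
            x = x + refine(x)            # residual refinement at the new scale
        return x

z = torch.randn(4, 64)
print(CoarseToFineGenerator()(z).shape)  # torch.Size([4, 3, 64, 64])
```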
arXiv Detail & Related papers (2021-06-04T17:33:29Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
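A schematic of that design (hypothetical PyTorch sketch; the neighborhood structure and network sizes are ours): one small discriminator per MRF neighborhood, with the total loss summing the local real-vs-fake losses, which is where subadditivity of the divergence enters.

```python
import torch
import torch.nn as nn

# Hypothetical index sets of the MRF neighborhoods over 5 variables.
neighborhoods = [[0, 1], [1, 2, 3], [3, 4]]

def make_disc(k):
    """A simple discriminator on a k-dimensional neighborhood."""
    return nn.Sequential(nn.Linear(k, 16), nn.ReLU(), nn.Linear(16, 1))

discs = [make_disc(len(nb)) for nb in neighborhoods]
bce = nn.BCEWithLogitsLoss()

def local_gan_loss(real, fake):
    """Sum of per-neighborhood discriminator losses (real vs fake)."""
    loss = 0.0
    for nb, d in zip(neighborhoods, discs):
        r, f = d(real[:, nb]), d(fake[:, nb])
        loss = loss + bce(r, torch.ones_like(r)) + bce(f, torch.zeros_like(f))
    return loss

real = torch.randn(32, 5)   # stand-in samples over the 5 variables
fake = torch.randn(32, 5)
print(local_gan_loss(real, fake))
```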
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.