On some theoretical limitations of Generative Adversarial Networks
- URL: http://arxiv.org/abs/2110.10915v1
- Date: Thu, 21 Oct 2021 06:10:38 GMT
- Title: On some theoretical limitations of Generative Adversarial Networks
- Authors: Benoît Oriol and Alexandre Miot
- Abstract summary: It is a general assumption that GANs can generate any probability distribution.
We provide a new result based on Extreme Value Theory showing that GANs cannot generate heavy-tailed distributions.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks have become a core technique in Machine
Learning for generating unknown distributions from data samples. They have been
used in a wide range of contexts without much attention being paid to the possible
theoretical limitations of these models. Indeed, because of the universal
approximation properties of Neural Networks, it is a common assumption that
GANs can generate any probability distribution. Recently, this assumption has
begun to be questioned, and this article is in line with that thinking. We
provide a new result, based on Extreme Value Theory, showing that GANs cannot
generate heavy-tailed distributions. The full proof of this result is given.
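To make the result above concrete, here is a minimal numerical sketch (not taken from the paper): it feeds Gaussian noise through a random, untrained ReLU generator, which is a Lipschitz map, and compares a crude Hill estimate of its tail index with that of a genuinely heavy-tailed Pareto sample. The network widths, sample sizes, and estimator choice are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_generator(z, widths=(64, 64, 1)):
    """Random, untrained MLP with ReLU activations (hence a Lipschitz map)."""
    x = z
    for i, w in enumerate(widths):
        W = rng.normal(scale=1.0 / np.sqrt(x.shape[1]), size=(x.shape[1], w))
        x = x @ W
        if i < len(widths) - 1:
            x = np.maximum(x, 0.0)
    return x[:, 0]

def hill_estimator(samples, k=500):
    """Crude tail-index estimate from the k largest order statistics."""
    s = np.sort(np.abs(samples))[::-1][: k + 1]
    return 1.0 / np.mean(np.log(s[:-1] / s[-1]))

n = 100_000
z = rng.normal(size=(n, 8))
gan_like = relu_generator(z)               # Lipschitz image of Gaussian noise
pareto = rng.pareto(a=1.5, size=n) + 1.0   # heavy-tailed sample, tail index 1.5

print("Hill tail index, ReLU generator output:", hill_estimator(gan_like))
print("Hill tail index, Pareto(1.5) sample:   ", hill_estimator(pareto))
# Expected: the Pareto estimate lands near 1.5, while the generator output
# gives a much larger value, i.e. no power-law tail.
```

Heuristically (and only as a reading of the abstract, not the paper's proof), a ReLU network is Lipschitz, so the image of a light-tailed latent distribution keeps light tails and cannot fall in the Fréchet maximum domain of attraction that characterizes heavy-tailed laws.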
Related papers
- A Non-negative VAE: the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
We introduce the generalized gamma belief network (Generalized GBN) in this paper, which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
arXiv Detail & Related papers (2024-08-06T18:18:37Z)
- Indeterminate Probability Neural Network [20.993728880886994]
In this paper, we propose a new general probability theory, which is an extension of classical probability theory.
In our proposed neural network framework, the outputs of the neural network are defined as probability events.
IPNN can perform very large-scale classification with a very small network; for example, a model with 100 output nodes can classify 10 billion categories (a rough counting sketch of this claim appears after the list below).
arXiv Detail & Related papers (2023-03-21T01:57:40Z)
- Redes Generativas Adversarias (GAN) Fundamentos Teóricos y Aplicaciones (Generative Adversarial Networks: Theoretical Foundations and Applications) [0.40611352512781856]
Generative adversarial networks (GANs) are a method based on the training of two neural networks, one called the generator and the other the discriminator.
GANs have a wide range of applications in fields such as computer vision, semantic segmentation, time series synthesis, image editing, natural language processing, and image generation from text.
arXiv Detail & Related papers (2023-02-18T14:39:51Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs [44.4200799586461]
We show that standard cryptographic assumptions imply that this stronger condition is still insufficient.
Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.
arXiv Detail & Related papers (2022-01-18T18:59:21Z)
- Predicting Unreliable Predictions by Shattering a Neural Network [145.3823991041987]
Piecewise linear neural networks can be split into subfunctions.
Subfunctions have their own activation pattern, domain, and empirical error.
Empirical error for the full network can be written as an expectation over subfunctions.
arXiv Detail & Related papers (2021-06-15T18:34:41Z)
- A t-distribution based operator for enhancing out of distribution robustness of neural network classifiers [14.567354306568296]
Neural Network (NN) classifiers can assign extreme probabilities to samples that have not appeared during training.
One of the causes for this unwanted behaviour lies in the use of the standard softmax operator.
In this paper, a novel operator is proposed, derived from $t$-distributions, which are capable of providing a better description of uncertainty.
arXiv Detail & Related papers (2020-06-09T16:39:07Z)
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
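As an aside on the Indeterminate Probability Neural Network entry above, one plausible reading of the "100 output nodes can classify 10 billion categories" claim is purely combinatorial: the outputs are split into groups and the joint assignment across groups indexes the class. The sketch below only checks that arithmetic; the grouping scheme is an assumption for illustration, not the paper's actual construction.

```python
# Hypothetical counting argument only: 100 output nodes split into 10 groups
# of 10 give 10**10 = 10 billion joint assignments, each of which could
# index one category.
groups, nodes_per_group = 10, 10
total_nodes = groups * nodes_per_group        # 100 output nodes
total_categories = nodes_per_group ** groups  # 10_000_000_000 combinations
print(total_nodes, total_categories)          # 100 10000000000
```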
This list is automatically generated from the titles and abstracts of the papers on this site.