Learning disconnected manifolds: a no GANs land
- URL: http://arxiv.org/abs/2006.04596v3
- Date: Thu, 10 Dec 2020 12:46:25 GMT
- Title: Learning disconnected manifolds: a no GANs land
- Authors: Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jeremie Mary
- Abstract summary: Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator.
We establish a no free lunch theorem for disconnected manifold learning, stating an upper bound on the precision of the targeted distribution.
We derive a rejection sampling method based on the norm of the generator's Jacobian and show its efficiency on several generators, including BigGAN.
- Score: 15.4867805276559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typical architectures of Generative Adversarial Networks make use of a
unimodal latent distribution transformed by a continuous generator.
Consequently, the modeled distribution always has connected support, which is
cumbersome when learning a disconnected set of manifolds. We formalize this
problem by establishing a no free lunch theorem for disconnected manifold
learning, stating an upper bound on the precision of the targeted distribution.
This is done by building on the necessary existence of a low-quality region
where the generator continuously samples data between two disconnected modes.
Finally, we derive a rejection sampling method based on the norm of the
generator's Jacobian and show its efficiency on several generators, including
BigGAN.
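The rejection step can be pictured concretely. Below is a minimal PyTorch sketch of the idea from the abstract: score each latent by the Frobenius norm of the generator's Jacobian and discard the high-norm ones, since those are the latents the generator stretches across the gap between modes. The toy generator, the oversampling factor, and the quantile threshold are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained GAN generator (the paper evaluates real
# generators such as BigGAN); architecture and sizes are illustrative.
class Generator(nn.Module):
    def __init__(self, latent_dim=2, data_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, data_dim),
        )

    def forward(self, z):
        return self.net(z)

def jacobian_norm(generator, z):
    """Frobenius norm of dG/dz at one latent point z of shape (latent_dim,)."""
    jac = torch.autograd.functional.jacobian(generator, z.unsqueeze(0))
    return jac.flatten().norm()

def reject_high_jacobian(generator, n_samples, latent_dim=2, quantile=0.9):
    """Oversample latents, then keep those whose Jacobian norm falls below a
    quantile threshold. Large norms flag regions where the generator stretches
    latent space, which the paper associates with low-quality samples lying
    between disconnected modes."""
    z = torch.randn(4 * n_samples, latent_dim)  # oversampling factor is arbitrary
    norms = torch.stack([jacobian_norm(generator, zi) for zi in z])
    keep = norms <= torch.quantile(norms, quantile)
    return generator(z[keep][:n_samples])

gen = Generator()
samples = reject_high_jacobian(gen, n_samples=64)
print(samples.shape)  # at most (64, 2)
```

Computing a full Jacobian per sample is expensive at BigGAN scale; in practice one would batch this or approximate the norm, but the sketch keeps the logic explicit.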
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Improving Out-of-Distribution Robustness of Classifiers via Generative
Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z) - A New Paradigm for Generative Adversarial Networks based on Randomized
Decision Rules [8.36840154574354]
The Generative Adversarial Network (GAN) was recently introduced in the literature as a novel machine learning method for training generative models.
It has many applications in statistics such as nonparametric clustering and nonparametric conditional independence tests.
In this paper, we identify the reasons why the GAN suffers from this issue, and to address it, we propose a new formulation for the GAN based on randomized decision rules.
arXiv Detail & Related papers (2023-06-23T17:50:34Z) - StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs).
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents (a toy sketch of this sampling scheme appears after this list).
We take inspiration from the encoding of information in biological organisms.
arXiv Detail & Related papers (2023-04-30T23:28:46Z) - Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to biases, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when deviating from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z) - Robust Estimation for Nonparametric Families via Generative Adversarial
Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z) - Generation of data on discontinuous manifolds via continuous stochastic
non-invertible networks [6.201770337181472]
We show how to generate discontinuous distributions using continuous networks.
We derive a link between the cost functions and the information-theoretic formulation.
We apply our approach to synthetic 2D distributions to demonstrate both reconstruction and generation of discontinuous distributions.
arXiv Detail & Related papers (2021-12-17T17:39:59Z) - Mode Penalty Generative Adversarial Network with adapted Auto-encoder [0.15229257192293197]
We propose a mode penalty GAN combined with a pre-trained auto-encoder for explicit representation of generated and real data samples in the encoded space.
We demonstrate through experimental evaluations that applying the proposed method to GANs makes the generator's optimization more stable and speeds up its convergence.
arXiv Detail & Related papers (2020-11-16T03:39:53Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Making Method of Moments Great Again? -- How can GANs learn
distributions [34.91089650516183]
Generative Adversarial Networks (GANs) are widely used models to learn complex real-world distributions.
In GANs, the training of the generator usually stops when the discriminator can no longer distinguish the generator's output from the set of training examples.
We establish theoretical results towards understanding this generator-discriminator training process.
arXiv Detail & Related papers (2020-03-09T10:50:35Z)
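The StyleGenes entry above proposes sampling latents from a finite set of learnable values, which makes the prior's support disconnected by construction. Here is a hedged toy version in PyTorch; the class name, the per-coordinate codebook layout, and the sizes are assumptions for illustration, not the StyleGenes implementation.

```python
import torch
import torch.nn as nn

# Toy discrete latent prior: every latent coordinate ("gene") takes one of
# K learnable values, so the prior's support is a finite grid rather than
# a connected Gaussian. Names and sizes are illustrative assumptions.
class DiscreteLatentPrior(nn.Module):
    def __init__(self, latent_dim=128, values_per_coord=32):
        super().__init__()
        # One learnable table of candidate values per latent coordinate.
        self.table = nn.Parameter(torch.randn(latent_dim, values_per_coord))

    def sample(self, batch_size):
        d, k = self.table.shape
        idx = torch.randint(k, (batch_size, d))   # pick one value index per coordinate
        return self.table[torch.arange(d), idx]   # (batch_size, latent_dim)

prior = DiscreteLatentPrior()
z = prior.sample(16)
print(z.shape)  # torch.Size([16, 128])
```

Even this tiny prior yields values_per_coord ** latent_dim distinct latent combinations, so expressiveness need not suffer, while the finite support sidesteps the connected-support limitation that the main paper formalizes.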
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.