Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling
- URL: http://arxiv.org/abs/2003.06060v3
- Date: Wed, 7 Jul 2021 17:57:49 GMT
- Title: Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling
- Authors: Tong Che, Ruixiang Zhang, Jascha Sohl-Dickstein, Hugo Larochelle, Liam
Paull, Yuan Cao, Yoshua Bengio
- Abstract summary: We show that sampling from the corrected generator density can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods which work in the high-dimensional pixel space.
- Score: 106.68533003806276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that the sum of the implicit generator log-density $\log p_g$ of a
GAN with the logit score of the discriminator defines an energy function which
yields the true data density when the generator is imperfect but the
discriminator is optimal, thus making it possible to improve on the typical
generator (with implicit density $p_g$). To make that practical, we show that
sampling from this modified density can be achieved by sampling in latent space
according to an energy-based model induced by the sum of the latent prior
log-density and the discriminator output score. This can be achieved by running
a Langevin MCMC in latent space and then applying the generator function, which
we call Discriminator Driven Latent Sampling~(DDLS). We show that DDLS is
highly efficient compared to previous methods which work in the
high-dimensional pixel space and can be applied to improve on previously
trained GANs of many types. We evaluate DDLS on both synthetic and real-world
datasets qualitatively and quantitatively. On CIFAR-10, DDLS substantially
improves the Inception Score of an off-the-shelf pre-trained
SN-GAN~\citep{sngan} from $8.22$ to $9.09$ which is even comparable to the
class-conditional BigGAN~\citep{biggan} model. This achieves a new
state-of-the-art in the unconditional image synthesis setting without introducing
extra parameters or additional training.
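For concreteness, below is a minimal sketch of the DDLS sampler described in the abstract. It assumes a pre-trained generator G, a discriminator D that returns its logit (pre-sigmoid) score, a standard Gaussian latent prior, and PyTorch as the framework; the function name and the hyperparameters (step size, number of Langevin steps) are illustrative choices, not values taken from the paper.

```python
import torch

def ddls_sample(G, D, z_dim, n_samples=64, n_steps=1000, step_size=0.01, device="cpu"):
    """Sketch of Discriminator Driven Latent Sampling (DDLS).

    Assumes a standard Gaussian latent prior, so -log p(z) = 0.5 * ||z||^2 + const,
    and that D(x) returns the discriminator logit d(x). The latent energy is
    E(z) = -log p(z) - d(G(z)); we run unadjusted Langevin dynamics,
        z <- z - (step_size / 2) * grad_z E(z) + sqrt(step_size) * noise,
    and decode the final latents with the generator.
    """
    z = torch.randn(n_samples, z_dim, device=device, requires_grad=True)
    for _ in range(n_steps):
        # Energy of the current latents: negative log-prior minus discriminator logit of G(z).
        energy = 0.5 * (z ** 2).sum(dim=1) - D(G(z)).squeeze()
        grad = torch.autograd.grad(energy.sum(), z)[0]
        with torch.no_grad():
            # Langevin update: gradient step on the energy plus Gaussian noise.
            z = z - 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    with torch.no_grad():
        return G(z)
```

In the paper this procedure is applied to off-the-shelf pre-trained GANs such as SN-GAN; the sketch above omits details such as tuning the Langevin step size and chain length per dataset, so it should be read as an illustration of the idea rather than a faithful reproduction of the reported results.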
Related papers
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
arXiv Detail & Related papers (2023-07-19T10:26:29Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- GANs Settle Scores! [16.317645727944466]
We propose a unified approach to analyzing generator optimization through a variational formulation.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
arXiv Detail & Related papers (2023-06-02T16:24:07Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Sampling-Decomposable Generative Adversarial Recommender [84.05894139540048]
We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR)
In the framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
arXiv Detail & Related papers (2020-11-02T13:19:10Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.