GANs Settle Scores!
- URL: http://arxiv.org/abs/2306.01654v1
- Date: Fri, 2 Jun 2023 16:24:07 GMT
- Title: GANs Settle Scores!
- Authors: Siddarth Asokan, Nishanth Shetty, Aadithya Srikanth, Chandra Sekhar
Seelamantula
- Abstract summary: We propose a unified variational approach to analyzing generator optimization.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
- Score: 16.317645727944466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) comprise a generator, trained to learn
the underlying distribution of the desired data, and a discriminator, trained
to distinguish real samples from those output by the generator. A majority of
GAN literature focuses on understanding the optimality of the discriminator
through integral probability metric (IPM) or divergence based analysis. In this
paper, we propose a unified variational approach to analyzing generator
optimization. In $f$-divergence-minimizing GANs, we show that
the optimal generator is the one that matches the score of its output
distribution with that of the data distribution, while in IPM GANs, we show
that this optimal generator matches score-like functions, involving the
flow-field of the kernel associated with a chosen IPM constraint space.
Further, the IPM-GAN optimization can be seen as one of smoothed
score-matching, where the scores of the data and the generator distributions
are convolved with the kernel associated with the constraint. The proposed
approach serves to unify score-based training and existing GAN flavors,
leveraging results from normalizing flows, while also providing explanations
for empirical phenomena such as the stability of non-saturating GAN losses.
Based on these results, we propose novel alternatives to $f$-GAN and IPM-GAN
training based on score and flow matching, and discriminator-guided Langevin
sampling.
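As a minimal illustration of the score-matching view (notation assumed here: $p_d$ is the data density, $p_g$ the generator's output density, and $k$ the kernel associated with the IPM constraint space), the two optimality conditions stated in the abstract can be written as:

```latex
% f-divergence GANs: the optimal generator matches scores exactly
\nabla_x \log p_g(x) = \nabla_x \log p_d(x)
% IPM GANs: smoothed score matching, scores convolved with the kernel k
(k * \nabla_x \log p_g)(x) = (k * \nabla_x \log p_d)(x)
```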
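A minimal sketch of discriminator-guided Langevin sampling, assuming a trained discriminator `d` whose input gradient acts as a score-like drift; the step size, iteration count, and the exact use of `d` are illustrative choices, not necessarily the paper's scheme:

```python
import torch

def discriminator_guided_langevin(d, x0, n_steps=100, step_size=0.01):
    """Refine samples x0 with Langevin dynamics driven by the gradient of a
    trained discriminator d (treated as a score-like correction term)."""
    x = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(d(x).sum(), x)[0]  # drift: grad_x d(x)
        noise = torch.randn_like(x)
        x = (x + 0.5 * step_size * grad
               + step_size ** 0.5 * noise).detach().requires_grad_(True)
    return x.detach()
```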
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
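For context, a sketch of the consensus-ADMM splitting that such distributed schemes build on, shown here on the optimization analogue (a toy distributed least-squares problem); the objective and all names are illustrative, not the paper's sampler:

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, n_iters=100):
    """Consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2, with each term
    held by one worker; a global z-update enforces agreement."""
    d, n = A_list[0].shape[1], len(A_list)
    x = [np.zeros(d) for _ in range(n)]  # local primal variables
    u = [np.zeros(d) for _ in range(n)]  # scaled dual variables
    z = np.zeros(d)                      # global consensus variable
    for _ in range(n_iters):
        for i in range(n):
            # local x-update: regularized least squares pulled toward z - u_i
            H = A_list[i].T @ A_list[i] + rho * np.eye(d)
            g = A_list[i].T @ b_list[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(H, g)
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)  # consensus step
        for i in range(n):
            u[i] += x[i] - z  # dual update
    return z
```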
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
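For reference, the exact-likelihood property that flow-based models contribute here follows from the standard change-of-variables formula (notation assumed: an invertible map $f$ to a base variable with density $p_Z$):

```latex
\log p_X(x) = \log p_Z\big(f(x)\big) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```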
arXiv Detail & Related papers (2023-07-19T10:26:29Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
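A minimal sketch of standard AIS with a fixed geometric annealing path, to fix ideas; the one-dimensional Metropolis kernel and all function names are illustrative, and this is not the constant-rate scheme of the paper:

```python
import numpy as np

def ais(log_p0, log_p1, sample_p0, betas, n_samples=1000, mh_std=0.5):
    """AIS along p_beta proportional to p0^(1-beta) * p1^beta.
    Returns final samples and their log importance weights."""
    rng = np.random.default_rng(0)
    x = sample_p0(n_samples)  # draws from the tractable base p0
    logw = np.zeros(n_samples)
    log_pb = lambda x, b: (1.0 - b) * log_p0(x) + b * log_p1(x)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += log_pb(x, b) - log_pb(x, b_prev)  # incremental weight
        # one Metropolis step leaving p_b invariant (symmetric proposal)
        prop = x + mh_std * rng.standard_normal(n_samples)
        accept = np.log(rng.random(n_samples)) < log_pb(prop, b) - log_pb(x, b)
        x = np.where(accept, prop, x)
    return x, logw
```

A normalizing-constant ratio can then be estimated from `logw` via a log-sum-exp average.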
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - cGANs with Auxiliary Discriminative Classifier [43.78253518292111]
Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GANs) have been widely used, but suffer from low intra-class diversity in the generated samples.
We propose a novel cGAN with an auxiliary discriminative classifier (ADC-GAN) to address this issue in AC-GANs.
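For background, the original AC-GAN objective that ADC-GAN revisits, in Odena et al.'s notation ($L_S$ scores the real/fake source, $L_C$ the class label):

```latex
L_S = \mathbb{E}[\log P(S = \mathrm{real} \mid x_{\mathrm{real}})] + \mathbb{E}[\log P(S = \mathrm{fake} \mid x_{\mathrm{fake}})]
L_C = \mathbb{E}[\log P(C = c \mid x_{\mathrm{real}})] + \mathbb{E}[\log P(C = c \mid x_{\mathrm{fake}})]
```

The discriminator is trained to maximize $L_S + L_C$ and the generator to maximize $L_C - L_S$; the generator's incentive to make classes maximally separable is one common account of the low intra-class diversity noted above.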
arXiv Detail & Related papers (2021-07-21T13:06:32Z) - Mode Penalty Generative Adversarial Network with adapted Auto-encoder [0.15229257192293197]
We propose a mode penalty GAN combined with a pre-trained autoencoder for explicit representation of generated and real data samples in the encoded space.
We demonstrate through experimental evaluations that applying the proposed method to GANs makes the generator's optimization more stable and speeds up convergence.
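A hedged sketch of the recipe described above: encode a generated batch with a pre-trained, frozen autoencoder and penalize the codes for clustering onto a single mode. The pairwise-distance penalty is an illustrative stand-in, not necessarily the paper's exact formulation:

```python
import torch

def mode_penalty(encoder, fake_batch):
    """Illustrative mode-collapse penalty in a pre-trained autoencoder's code
    space: collapsed samples yield clustered codes, shrinking the mean
    pairwise distance, so we penalize its negative."""
    z = encoder(fake_batch)                  # encoder weights assumed frozen
    dists = torch.cdist(z, z, p=2)           # pairwise code distances
    n = z.shape[0]
    mean_dist = dists.sum() / (n * (n - 1))  # average, excluding zero diagonal
    return -mean_dist                        # add to generator loss, scaled by a weight
```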
arXiv Detail & Related papers (2020-11-16T03:39:53Z) - Sampling-Decomposable Generative Adversarial Recommender [84.05894139540048]
We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR).
In this framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
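A minimal sketch of the self-normalized importance sampling estimator named above, in its generic form (the integrand `f`, unnormalized target, and proposal are placeholders); it estimates an expectation under `p` using samples from `q` when `p` is known only up to a constant:

```python
import numpy as np

def snis(f, log_p_unnorm, log_q, samples):
    """Self-normalized importance sampling: E_p[f] is approximated by
    sum_i w_i f(x_i) / sum_i w_i with w_i = p(x_i)/q(x_i) up to a constant."""
    logw = log_p_unnorm(samples) - log_q(samples)
    logw -= logw.max()  # stabilize before exponentiating
    w = np.exp(logw)
    return np.sum(w * f(samples)) / np.sum(w)
```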
arXiv Detail & Related papers (2020-11-02T13:19:10Z) - Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
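A hedged sketch of the local coordinate coding idea behind LCCGAN: a latent point is expressed as a weighted combination of nearby learned anchor points. The nearest-neighbor weighting below is purely illustrative; LCCGAN learns the anchors and samples local coordinates rather than computing them this way:

```python
import numpy as np

def lcc_code(z, anchors, n_neighbors=4):
    """Express z as a convex combination of its nearest anchors (locality)."""
    d2 = np.sum((anchors - z) ** 2, axis=1)  # squared distances to all anchors
    idx = np.argsort(d2)[:n_neighbors]       # keep only the nearest anchors
    w = np.exp(-d2[idx])                     # closer anchors get larger weight
    w /= w.sum()                             # local coordinates sum to one
    return idx, w, w @ anchors[idx]          # indices, coordinates, reconstruction
```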
arXiv Detail & Related papers (2020-07-28T09:17:50Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs), where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GATs).
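A hedged sketch of the bandit view: each candidate neighbor is an arm, and sampling probabilities are updated from observed rewards (e.g., variance-reduction gains). The EXP3 update below is a generic adversarial-bandit rule, shown only to illustrate the mechanism:

```python
import numpy as np

def exp3_step(weights, gamma, reward_fn, rng):
    """One EXP3 step over K arms (candidate neighbors); reward_fn(arm) is
    assumed to return a reward in [0, 1]."""
    K = len(weights)
    probs = (1.0 - gamma) * weights / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)
    est = reward_fn(arm) / probs[arm]        # importance-weighted reward estimate
    weights[arm] *= np.exp(gamma * est / K)  # exponential-weights update
    return arm, weights
```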
arXiv Detail & Related papers (2020-06-10T12:48:37Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
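The motivating property referenced above is the known link between a (near-)optimal discriminator and a density ratio, which lets the generator's distribution be reweighted by the discriminator's energy; schematically (a standard relation used by discriminator-based energy and rejection methods, with notation assumed):

```latex
p^{*}(x) \propto p_g(x)\, e^{D(x)}
```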
arXiv Detail & Related papers (2020-04-05T01:50:16Z)