Mode Penalty Generative Adversarial Network with adapted Auto-encoder
- URL: http://arxiv.org/abs/2011.07706v1
- Date: Mon, 16 Nov 2020 03:39:53 GMT
- Title: Mode Penalty Generative Adversarial Network with adapted Auto-encoder
- Authors: Gahye Lee and Seungkyu Lee
- Abstract summary: We propose a mode penalty GAN combined with a pre-trained auto-encoder for explicit representation of generated and real data samples in the encoded space.
Experimental evaluations demonstrate that applying the proposed method to GANs makes the generator's optimization more stable and its convergence faster.
- Score: 0.15229257192293197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are trained to generate sample images
from a distribution of interest. To this end, the generator network of a GAN learns the
implicit distribution of the real data set from the classification of candidate generated
samples. Recently, various GANs have suggested novel ideas for stable optimization of
their networks. However, in real implementations they sometimes still represent only a
narrow part of the true distribution or fail to converge. We assume this ill-posed
problem comes from poor gradients of the discriminator's objective function, which easily
trap the generator in a bad situation. To address this problem, we propose a mode penalty
GAN combined with a pre-trained auto-encoder for explicit representation of generated and
real data samples in the encoded space. In this space, we make the generator manifold
follow the real manifold by finding all modes of the target distribution. In addition, a
penalty for uncovered modes of the target distribution is imposed on the generator, which
encourages it to cover the entire target distribution. Experimental evaluations
demonstrate that applying the proposed method to GANs makes the generator's optimization
more stable and its convergence faster.
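Below is a minimal PyTorch sketch of the encoded-space mode penalty idea described in the abstract, not the authors' implementation. The names (`Encoder`, `Generator`, `mode_penalty`, `lambda_mode`), the network sizes, and the anchor-based approximation of the real modes are illustrative assumptions; the paper itself finds modes of the target distribution, for which a pre-computed clustering (e.g. k-means centroids) would be a closer stand-in.

```python
# A sketch of a mode penalty computed in the latent space of a frozen,
# pre-trained auto-encoder.  All names and sizes are assumptions.
import torch
import torch.nn as nn

latent_dim, code_dim, n_modes = 64, 32, 10

class Encoder(nn.Module):
    """Encoder half of a pre-trained auto-encoder; its weights stay frozen."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                 nn.Linear(256, code_dim))

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784))

    def forward(self, z):
        return self.net(z)

def mode_penalty(real_codes, fake_codes, k=n_modes):
    """Penalize real-data modes that no generated sample comes close to.

    Modes are approximated by k randomly chosen real codes; a pre-computed
    clustering (e.g. k-means centroids) would be a more faithful choice.
    """
    anchors = real_codes[torch.randperm(real_codes.size(0))[:k]]
    dist = torch.cdist(anchors, fake_codes)   # (k, batch) pairwise distances
    nearest, _ = dist.min(dim=1)              # distance of each mode to its closest fake
    return nearest.mean()                     # large when some modes stay uncovered

encoder, generator = Encoder(), Generator()
encoder.requires_grad_(False)                 # pre-trained auto-encoder, kept fixed

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
lambda_mode = 1.0                             # penalty weight (assumed value)

real = torch.rand(128, 784)                   # stand-in for a batch of real images
fake = generator(torch.randn(128, latent_dim))

adv_loss = torch.zeros(())                    # placeholder for the usual adversarial generator loss
g_loss = adv_loss + lambda_mode * mode_penalty(encoder(real), encoder(fake))

opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

The penalty grows whenever some encoded real mode has no nearby generated sample, which is the role the abstract attributes to the uncovered-mode term; it is simply added to whatever adversarial generator loss is in use.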
Related papers
- Generative Conditional Distributions by Neural (Entropic) Optimal Transport [12.152228552335798]
We introduce a novel neural entropic optimal transport method designed to learn generative models of conditional distributions.
Our method relies on the minimax training of two neural networks.
Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques.
arXiv Detail & Related papers (2024-06-04T13:45:35Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - GANs Settle Scores! [16.317645727944466]
We propose a unified variational approach to analyzing generator optimization.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
arXiv Detail & Related papers (2023-06-02T16:24:07Z) - Distribution Fitting for Combating Mode Collapse in Generative
Adversarial Networks [1.5769569085442372]
Mode collapse is a significant unsolved issue of generative adversarial networks.
We propose a global distribution fitting (GDF) method with a penalty term to confine the generated data distribution.
We also propose a local distribution fitting (LDF) method to deal with the circumstance when the overall real data is unreachable.
arXiv Detail & Related papers (2022-12-03T03:39:44Z) - Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to bias, learned from either the unbalanced data, or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating away from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z) - Generation of data on discontinuous manifolds via continuous stochastic
non-invertible networks [6.201770337181472]
We show how to generate discontinuous distributions using continuous networks.
We derive a link between the cost functions and the information-theoretic formulation.
We apply our approach to synthetic 2D distributions to demonstrate both reconstruction and generation of discontinuous distributions.
arXiv Detail & Related papers (2021-12-17T17:39:59Z) - IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse [82.49564071049366]
Generative adversarial networks (GANs) still suffer from mode collapse.
We analyze and seek to regularize this issue with an independent and identically distributed (IID) sampling perspective.
We propose a new loss to encourage the closeness between inverse samples of real data and the Gaussian source in latent space to regularize the generation to be IID from the target distribution.
arXiv Detail & Related papers (2021-06-01T15:20:34Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - Making Method of Moments Great Again? -- How can GANs learn
distributions [34.91089650516183]
Generative Adversarial Networks (GANs) are widely used models to learn complex real-world distributions.
In GANs, the training of the generator usually stops when the discriminator can no longer distinguish the generator's output from the set of training examples.
We establish theoretical results towards understanding this generator-discriminator training process.
arXiv Detail & Related papers (2020-03-09T10:50:35Z)