Shared Loss between Generators of GANs
- URL: http://arxiv.org/abs/2211.07234v1
- Date: Mon, 14 Nov 2022 09:47:42 GMT
- Title: Shared Loss between Generators of GANs
- Authors: Xin Wang
- Abstract summary: Generative adversarial networks are generative models that are capable of replicating the implicit probability distribution of the input data with high accuracy.
Traditionally, GANs consist of a Generator and a Discriminator which interact with each other to produce highly realistic artificial data.
We show that making the generators compete against each other dramatically reduces GAN training time without affecting performance.
- Score: 7.33811357166334
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Generative adversarial networks are generative models that are capable of
replicating the implicit probability distribution of the input data with high
accuracy. Traditionally, GANs consist of a Generator and a Discriminator which
interact with each other to produce highly realistic artificial data.
Traditional GANs fall prey to the mode collapse problem, which means that they
are unable to generate the different variations of data present in the input
dataset. Recently, multiple generators have been used to produce more realistic
output by mitigating the mode collapse problem. We use this multiple generator
framework. The novelty in this paper lies in making the generators compete
against each other while interacting with the discriminator simultaneously. We
show that this causes a dramatic reduction in the training time for GANs
without affecting their performance.
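The abstract does not spell out the shared-loss coupling, so the following is a minimal PyTorch sketch of the setup it describes: several generators trained against one discriminator, with each generator's update weighted by how its loss compares to its peers'. The softmax weighting, network sizes, and hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

LATENT, DATA, N_GEN = 64, 784, 4  # illustrative sizes

def make_generator():
    return nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, DATA))

generators = [make_generator() for _ in range(N_GEN)]
discriminator = nn.Sequential(
    nn.Linear(DATA, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
g_opts = [torch.optim.Adam(g.parameters(), lr=2e-4) for g in generators]
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real):
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # The discriminator sees real data plus samples from every generator.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), ones)
    for g in generators:
        fake = g(torch.randn(n, LATENT)).detach()
        d_loss = d_loss + bce(discriminator(fake), zeros) / N_GEN
    d_loss.backward()
    d_opt.step()

    # Each generator computes the usual non-saturating loss ...
    g_losses = [bce(discriminator(g(torch.randn(n, LATENT))), ones)
                for g in generators]
    # ... but updates are weighted by relative performance, so the generators
    # compete: the further behind a generator is, the larger its update.
    # (This coupling is an assumption for illustration.)
    weights = torch.softmax(torch.stack([l.detach() for l in g_losses]), dim=0)
    for opt, w, loss in zip(g_opts, weights, g_losses):
        opt.zero_grad()
        (N_GEN * w * loss).backward()
        opt.step()

train_step(torch.randn(32, DATA))  # smoke test with random "real" data
```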
Related papers
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
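The summary does not specify how the per-domain models are fused, so here is one hedged reading as a sketch: draw samples from generators trained on different domains and convexly interpolate them, mixup-style, to synthesize samples that lie between domains. The generator stand-ins and the sample-space interpolation are assumptions; the paper's actual fusion may operate on weights or latents instead.

```python
import torch
import torch.nn as nn

def make_gen():  # stand-in for a generator pretrained on one domain
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784))

gen_a, gen_b = make_gen(), make_gen()

def ood_samples(n):
    z = torch.randn(n, 64)
    alpha = torch.rand(n, 1)  # a random mixing ratio per sample
    # Convex combination of the two domains' outputs: samples that fall
    # "between" the training distributions, usable as OoD augmentation.
    return alpha * gen_a(z) + (1 - alpha) * gen_b(z)

augmented = ood_samples(32)
```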
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating away from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z)
- Generation of data on discontinuous manifolds via continuous stochastic non-invertible networks [6.201770337181472]
We show how to generate discontinuous distributions using continuous networks.
We derive a link between the cost functions and the information-theoretic formulation.
We apply our approach to synthetic 2D distributions to demonstrate both reconstruction and generation of discontinuous distributions.
arXiv Detail & Related papers (2021-12-17T17:39:59Z)
- DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks [71.6879432974126]
We introduce DECAF: a GAN-based fair synthetic data generator for tabular data.
We show that DECAF successfully removes undesired bias and is capable of generating high-quality synthetic data.
We provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
arXiv Detail & Related papers (2021-10-25T12:39:56Z)
- VARGAN: Variance Enforcing Network Enhanced GAN [0.6445605125467573]
We introduce a new GAN architecture called variance enforcing GAN (VARGAN).
VARGAN incorporates a third network to introduce diversity in the generated samples.
High diversity and low computational complexity, as well as fast convergence, make VARGAN a promising model to alleviate mode collapse.
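As a rough illustration of a third, diversity-enforcing network, the sketch below adds a feature extractor whose batch variance the generator is pushed to keep high; the concrete architecture and penalty are assumptions, since the summary does not give VARGAN's exact formulation.

```python
import torch
import torch.nn as nn

variance_net = nn.Sequential(nn.Linear(784, 64), nn.ReLU())  # the "third network"

def variance_penalty(fake_batch):
    feats = variance_net(fake_batch)
    # A collapsed generator emits near-identical samples, so the feature
    # variance across the batch shrinks; penalizing low variance pushes
    # the generator toward diverse outputs.
    return -feats.var(dim=0).mean()

penalty = variance_penalty(torch.randn(16, 784))
# Generator objective (sketch): adversarial_loss + lam * variance_penalty(fake)
```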
arXiv Detail & Related papers (2021-09-05T16:28:21Z)
- Mode Penalty Generative Adversarial Network with adapted Auto-encoder [0.15229257192293197]
We propose a mode penalty GAN combined with a pre-trained autoencoder for explicit representation of generated and real data samples in the encoded space.
Through experimental evaluations, we demonstrate that applying the proposed method to GANs makes the generator's optimization more stable and speeds up convergence.
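A hedged sketch of the encoded-space idea: a frozen, pre-trained encoder maps real and generated samples into a shared code space, and the generator is penalized when its codes collapse toward a single region instead of covering the real codes. The concrete penalty below (mean and variance matching) is an assumption, not the paper's exact term.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(784, 32)  # stand-in for the pre-trained autoencoder's encoder
for p in encoder.parameters():
    p.requires_grad_(False)   # frozen; gradients still flow to the generator

def mode_penalty(fake, real):
    zf, zr = encoder(fake), encoder(real)
    spread = (zr.var(dim=0) - zf.var(dim=0)).clamp(min=0).mean()  # fakes under-spread
    shift = (zr.mean(dim=0) - zf.mean(dim=0)).pow(2).mean()       # fakes off-center
    return spread + shift

p = mode_penalty(torch.randn(16, 784), torch.randn(16, 784))
```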
arXiv Detail & Related papers (2020-11-16T03:39:53Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
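One standard way to build such a variational bound is an auxiliary decoder q(z|x) that recovers the latent from the generated sample; its log-likelihood lower-bounds the mutual information I(X; Z), and hence the entropy of the samples. The sketch below assumes a unit-variance Gaussian q, which reduces the bound to a negative reconstruction error; the paper's exact bound may differ.

```python
import torch
import torch.nn as nn

recover_z = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))

def entropy_lower_bound(fake, z):
    # log q(z | x) for a unit-variance Gaussian is a negative squared error
    # (up to an additive constant), so maximizing it raises the bound.
    return -((recover_z(fake) - z) ** 2).mean()

z = torch.randn(16, 64)
bound = entropy_lower_bound(torch.randn(16, 784), z)
# Generator objective (sketch): adversarial_loss - lam * entropy_lower_bound(fake, z)
```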
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- Generative models with kernel distance in data space [10.002379593718471]
The LCW generator resembles a classical GAN in transforming Gaussian noise into the data space.
First, an autoencoder-based architecture, using kernel measures, is built to model a manifold of the data.
We then propose a Latent Trick that maps a Gaussian to the latent space in order to obtain the final model.
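A sketch of the second stage under stated assumptions: given latent codes from the pretrained autoencoder, train a small mapper from Gaussian noise into that latent space by minimizing an RBF-kernel MMD, used here as a stand-in for the paper's kernel distance.

```python
import torch
import torch.nn as nn

latent_gen = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def rbf_mmd(x, y, sigma=1.0):
    def k(a, b):
        d = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

z_data = torch.randn(64, 16)  # stand-in for codes from the pretrained autoencoder
loss = rbf_mmd(latent_gen(torch.randn(64, 32)), z_data)
```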
arXiv Detail & Related papers (2020-09-15T19:11:47Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce Discriminator Contrastive Divergence, which is well motivated by the properties of the WGAN discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
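The idea of exploiting the discriminator's energy can be sketched as Langevin-style refinement: nudge generated samples along the gradient of the discriminator's score, plus noise. The step count, step size, and noise scale below are illustrative assumptions.

```python
import torch
import torch.nn as nn

disc = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

def langevin_refine(x, steps=10, step_size=0.01, noise=0.01):
    x = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = disc(x).sum()             # treat the score as a negative energy
        grad, = torch.autograd.grad(score, x)
        with torch.no_grad():             # ascend the score, with injected noise
            x += step_size * grad + noise * torch.randn_like(x)
    return x.detach()

refined = langevin_refine(torch.randn(8, 784))  # refine raw generator samples
```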
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.