GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue
- URL: http://arxiv.org/abs/2009.11921v1
- Date: Thu, 24 Sep 2020 19:34:37 GMT
- Title: GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue
- Authors: Pirazh Khorramshahi, Hossein Souri, Rama Chellappa, and Soheil Feizi
- Abstract summary: Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
- Score: 95.23775347605923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building on the success of deep learning, Generative Adversarial Networks
(GANs) provide a modern approach to learn a probability distribution from
observed samples. GANs are often formulated as a zero-sum game between two sets
of functions: the generator and the discriminator. Although GANs have shown
great potential in learning complex distributions such as images, they often
suffer from the mode collapse issue where the generator fails to capture all
existing modes of the input distribution. As a consequence, the diversity of
generated samples is lower than that of the observed ones. To tackle this
issue, we take an information-theoretic approach and maximize a variational
lower bound on the entropy of the generated samples to increase their
diversity. We call this approach GANs with Variational Entropy Regularizers
(GAN+VER). Existing remedies for the mode collapse issue in GANs can be easily
coupled with our proposed variational entropy regularization. Through extensive
experimentation on standard benchmark datasets, we show that all existing
evaluation metrics that measure the discrepancy between real and generated
samples improve significantly with GAN+VER.
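The abstract does not spell out the estimator it maximizes, but a standard variational lower bound on the entropy of generated samples (in the Barber-Agakov style) is H(G(z)) >= H(z) + E[log q(z | G(z))] for any auxiliary posterior q, so maximizing the latent-reconstruction term raises the bound. The PyTorch sketch below illustrates that generic construction; the LatentPosterior network, the unit-variance Gaussian form of q, and the weight lam are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a variational entropy regularizer for a GAN generator.
# Assumption: q(z | x) is a unit-variance Gaussian whose mean is predicted by
# an auxiliary encoder, so log q(z | x) reduces to -||z - Q(x)||^2 / 2 + const.
import torch
import torch.nn as nn

LATENT_DIM = 128

class LatentPosterior(nn.Module):
    """Auxiliary network Q approximating q(z | x) by its Gaussian mean."""
    def __init__(self, img_dim=3 * 32 * 32, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(img_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, LATENT_DIM),
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(G, D, Q, batch_size, lam=0.1, device="cpu"):
    z = torch.randn(batch_size, LATENT_DIM, device=device)
    x_fake = G(z)
    adv = -D(x_fake).mean()  # critic-style adversarial term; swap in your GAN loss
    # E[log q(z | x_fake)] up to constants; maximizing it raises the entropy bound
    log_q = -0.5 * ((z - Q(x_fake)) ** 2).sum(dim=1).mean()
    return adv - lam * log_q
```

Because the regularizer only adds a term to the generator objective, it composes with other training schemes, which matches the abstract's claim that existing mode-collapse remedies can be easily coupled with it.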
Related papers
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noise added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers for diffusion models, called Gaussian Mixture Solvers (GMS).
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance when learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs).
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms.
arXiv Detail & Related papers (2023-04-30T23:28:46Z)
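The StyleGenes summary above only states that latents are drawn from a finite set of learnable vectors. A minimal sketch of that idea follows; the split into per-position "genes" with a small codebook of variants, and the uniform sampling, are assumptions inspired by the biological-encoding remark, not the paper's verified design.

```python
# Sketch of a finite, learnable latent distribution for a GAN (assumed design).
import torch
import torch.nn as nn

class DiscreteLatentBank(nn.Module):
    """Each of n_genes positions picks one of n_variants learnable embeddings;
    concatenating the picks forms the latent code fed to the generator."""
    def __init__(self, n_genes=16, n_variants=32, gene_dim=8):
        super().__init__()
        self.variants = nn.Parameter(torch.randn(n_genes, n_variants, gene_dim))

    def sample(self, batch_size):
        n_genes, n_variants, gene_dim = self.variants.shape
        idx = torch.randint(n_variants, (batch_size, n_genes))  # uniform pick per gene
        genes = self.variants[torch.arange(n_genes), idx]       # (batch, n_genes, gene_dim)
        return genes.reshape(batch_size, n_genes * gene_dim)
```

Even with these small illustrative sizes, 16 positions with 32 variants each give 32^16 distinct latent codes, so discreteness need not limit diversity.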
- Normalizing Flow with Variational Latent Representation [20.038183566389794]
We propose a new framework based on a variational latent representation to improve the practical performance of Normalizing Flow (NF).
The idea is to replace the standard normal latent variable with a more general latent representation, jointly learned via Variational Bayes.
The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes.
arXiv Detail & Related papers (2022-11-21T16:51:49Z)
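The summary above replaces the standard normal latent of a normalizing flow with a richer, jointly learned representation. As a loose stand-in (the paper's Variational Bayes construction is more general), the sketch below gives a one-layer affine flow a learnable mixture-of-Gaussians base, the simplest way to let the latent itself carry multiple modes; all names and sizes are illustrative.

```python
# Affine flow with a learnable multi-modal base distribution (assumed stand-in).
import torch
import torch.nn as nn
import torch.distributions as dist

class MixtureBaseFlow(nn.Module):
    def __init__(self, dim=2, n_components=5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.means = nn.Parameter(torch.randn(n_components, dim))
        self.log_scales = nn.Parameter(torch.zeros(n_components, dim))
        self.s = nn.Parameter(torch.zeros(dim))  # flow log-scale
        self.t = nn.Parameter(torch.zeros(dim))  # flow shift

    def base(self):
        mix = dist.Categorical(logits=self.logits)
        comp = dist.Independent(dist.Normal(self.means, self.log_scales.exp()), 1)
        return dist.MixtureSameFamily(mix, comp)

    def log_prob(self, x):
        z = (x - self.t) * torch.exp(-self.s)  # inverse of x = exp(s) * z + t
        log_det = -self.s.sum()                # log |d z / d x|
        return self.base().log_prob(z) + log_det
```

Training is plain maximum likelihood, e.g. loss = -model.log_prob(x_batch).mean().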
- Selectively increasing the diversity of GAN-generated samples [8.980453507536017]
We propose a novel method to selectively increase the diversity of GAN-generated samples.
We show the superiority of our method in a synthetic benchmark as well as a real-life scenario simulating data from the Zero Degree Calorimeter of the ALICE experiment at CERN.
arXiv Detail & Related papers (2022-07-04T16:27:06Z)
- VARGAN: Variance Enforcing Network Enhanced GAN [0.6445605125467573]
We introduce a new GAN architecture called variance enforcing GAN (VARGAN).
VARGAN incorporates a third network to introduce diversity in the generated samples.
High diversity and low computational complexity, as well as fast convergence, make VARGAN a promising model to alleviate mode collapse.
arXiv Detail & Related papers (2021-09-05T16:28:21Z)
- The Bures Metric for Generative Adversarial Networks [10.69910379275607]
Generative Adversarial Networks (GANs) are performant generative methods yielding high-quality samples.
We propose to match the real batch diversity to the fake batch diversity.
We observe that diversity matching reduces mode collapse substantially and has a positive effect on sample quality.
arXiv Detail & Related papers (2020-06-16T12:04:41Z)
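For the Bures-metric entry above: the squared Bures distance between two PSD covariance matrices is B(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}). A sketch of using it to match real and fake batch diversity follows; computing covariances over feature batches and adding the term to the generator loss are assumptions drawn from the summary, not verified details of the paper.

```python
# Squared Bures distance between batch covariances (assumed usage sketch).
import torch

def sqrtm_psd(mat, eps=1e-8):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp_min(eps).sqrt()) @ vecs.T

def bures_loss(feat_real, feat_fake):
    # feat_*: (batch, dim) feature matrices, e.g. from a discriminator layer.
    A = torch.cov(feat_real.T)
    B = torch.cov(feat_fake.T)
    rA = sqrtm_psd(A)
    cross = sqrtm_psd(rA @ B @ rA)
    # B(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})
    return A.trace() + B.trace() - 2.0 * cross.trace()
```

One caveat: gradients through eigh can be unstable when eigenvalues nearly coincide; the clamp is a cheap guard.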
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability remains a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
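The Relation GANs entry above names a triplet loss for the discriminator without giving its form. For reference, the standard triplet margin loss is sketched below; treating embeddings of real/real/generated samples as anchor/positive/negative is an assumption about the pairing, not the paper's confirmed scheme.

```python
# Standard triplet margin loss (reference sketch).
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Pull the anchor toward the positive embedding; push it at least
    # `margin` farther from the negative one (squared Euclidean distances).
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 64-dim embeddings for a batch of 8:
a, p, n = (torch.randn(8, 64) for _ in range(3))
loss = triplet_loss(a, p, n)
```

PyTorch also ships nn.TripletMarginLoss with the same semantics (on unsquared distances by default).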
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.