Distribution Fitting for Combating Mode Collapse in Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2212.01521v2
- Date: Fri, 19 Jan 2024 03:21:28 GMT
- Title: Distribution Fitting for Combating Mode Collapse in Generative
Adversarial Networks
- Authors: Yanxiang Gong, Zhiwei Xie, Guozhen Duan, Zheng Ma, Mei Xie
- Abstract summary: Mode collapse is a significant unsolved issue of generative adversarial networks.
We propose a global distribution fitting (GDF) method with a penalty term to confine the generated data distribution.
We also propose a local distribution fitting (LDF) method to deal with the circumstance when the overall real data is unreachable.
- Score: 1.5769569085442372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mode collapse is a significant unsolved issue of generative adversarial
networks. In this work, we examine the causes of mode collapse from a novel
perspective. Due to the nonuniform sampling in the training process, some
sub-distributions may be missed when sampling data. As a result, even when the
generated distribution differs from the real one, the GAN objective can still
achieve the minimum. To address the issue, we propose a global distribution
fitting (GDF) method with a penalty term to confine the generated data
distribution. When the generated distribution differs from the real one, GDF
makes it harder for the objective to reach its minimum, while leaving the
original global minimum unchanged. To handle the case in which the overall
real data is unreachable, we also propose a local distribution fitting (LDF)
method. Experiments on several benchmarks demonstrate the effectiveness and
competitive performance of GDF and LDF.
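The paper's exact penalty is not reproduced in this abstract. As a rough illustration only, the sketch below adds a hypothetical penalty term that is zero when simple empirical statistics of a generated batch match those of a real batch and positive otherwise, so the original global minimum is preserved while mismatched distributions are penalized. The function names and the choice of mean/variance matching are assumptions for illustration, not the authors' actual GDF formulation.

```python
import numpy as np

def distribution_penalty(real, fake, weight=1.0):
    """Hypothetical GDF-style penalty: zero when the empirical mean and
    variance of the generated batch match those of the real batch,
    positive otherwise. A stand-in for the paper's penalty term."""
    mean_gap = np.mean(real, axis=0) - np.mean(fake, axis=0)
    var_gap = np.var(real, axis=0) - np.var(fake, axis=0)
    return weight * (np.sum(mean_gap ** 2) + np.sum(var_gap ** 2))

def penalized_generator_loss(base_loss, real, fake, weight=1.0):
    """Original GAN generator loss plus the confinement penalty.
    When the generated batch matches the real one, the penalty
    vanishes, so the original global minimum is unchanged."""
    return base_loss + distribution_penalty(real, fake, weight)
```

For identical batches the penalty is exactly zero; shifting the generated batch away from the real one makes the penalized objective strictly larger, which mirrors the stated intent of confining the generated distribution.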
Related papers
- Generative Conditional Distributions by Neural (Entropic) Optimal Transport [12.152228552335798]
We introduce a novel neural entropic optimal transport method designed to learn generative models of conditional distributions.
Our method relies on the minimax training of two neural networks.
Our experiments on real-world datasets show the effectiveness of our algorithm compared to state-of-the-art conditional distribution learning techniques.
arXiv Detail & Related papers (2024-06-04T13:45:35Z)
- Delta-AI: Local objectives for amortized inference in sparse graphical models [64.5938437823851]
We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs)
Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective.
We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure.
arXiv Detail & Related papers (2023-10-03T20:37:03Z)
- Boundary of Distribution Support Generator (BDSG): Sample Generation on the Boundary [0.0]
We use the recently developed Invertible Residual Network (IResNet) and Residual Flow (ResFlow) for density estimation.
These models have not yet been used for anomaly detection.
arXiv Detail & Related papers (2021-07-21T09:00:32Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only a bias in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse [82.49564071049366]
Generative adversarial networks (GANs) still suffer from mode collapse.
We analyze and seek to regularize this issue with an independent and identically distributed (IID) sampling perspective.
We propose a new loss to encourage the closeness between inverse samples of real data and the Gaussian source in latent space to regularize the generation to be IID from the target distribution.
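As a loose illustration of such a latent-matching idea (not IID-GAN's actual loss), the sketch below scores how far a batch of latent codes is from the standard Gaussian source by penalizing deviations of the empirical mean from zero and of the empirical covariance from the identity. The function name is hypothetical, and the inverse mapping of real data into latent space is assumed to be done elsewhere.

```python
import numpy as np

def gaussian_closeness_loss(z):
    """Hypothetical stand-in for a loss encouraging inverse samples z
    (real data mapped back to latent space) to look like draws from the
    standard Gaussian source: penalize deviations of the empirical mean
    from 0 and of the empirical covariance from the identity matrix."""
    z = np.asarray(z, dtype=float)
    mean = z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    identity = np.eye(z.shape[1])
    return float(np.sum(mean ** 2) + np.sum((cov - identity) ** 2))
```

A large batch actually drawn from a standard Gaussian yields a loss near zero, while shifted or correlated codes are penalized, matching the stated goal of keeping inverse samples close to the Gaussian source.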
arXiv Detail & Related papers (2021-06-01T15:20:34Z)
- Mode Penalty Generative Adversarial Network with adapted Auto-encoder [0.15229257192293197]
We propose a mode penalty GAN combined with a pre-trained auto-encoder for explicit representation of generated and real data samples in the encoded space.
Through experimental evaluations, we demonstrate that applying the proposed method to GANs makes the generator's optimization more stable and its convergence faster.
arXiv Detail & Related papers (2020-11-16T03:39:53Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.