IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse
- URL: http://arxiv.org/abs/2106.00563v3
- Date: Tue, 20 Jun 2023 10:56:28 GMT
- Title: IID-GAN: an IID Sampling Perspective for Regularizing Mode Collapse
- Authors: Yang Li, Liangliang Shi, Junchi Yan
- Abstract summary: Generative adversarial networks (GANs) still suffer from mode collapse.
We analyze and seek to regularize this issue from an independent and identically distributed (IID) sampling perspective.
We propose a new loss that encourages inverse samples of real data to be close to the Gaussian source in latent space, regularizing the generation to be IID from the target distribution.
- Score: 82.49564071049366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their success, generative adversarial networks (GANs) still suffer from
mode collapse, i.e., the generator can only map latent variables to a partial
set of modes in the target distribution. In this paper, we analyze and seek to
regularize this issue from an independent and identically distributed (IID)
sampling perspective, and emphasize that maintaining the IID property with
respect to the target distribution naturally avoids mode collapse. This rests
on the basic IID assumption for real data in machine learning. However,
although the source samples {z} are IID, the generations {G(z)} are not
necessarily IID samples from the target distribution. Based on this
observation, and using a necessary condition of IID generation (the inverse
samples of target data should also be IID in the source distribution), we
propose a new loss that encourages inverse samples of real data to be close to
the Gaussian source in latent space, regularizing the generation to be IID
from the target distribution. Experiments on both synthetic and real-world
data show the effectiveness of our model.
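The regularizer in this abstract can be sketched concretely. Below is a minimal illustration, assuming an inverse network F that maps real data back to latent space and a simple moment-matching penalty as the closeness measure between inverse samples and N(0, I); the paper's actual inverse model and divergence may differ.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784

# Stand-in generator G (latent -> data) and inverse network F (data -> latent).
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
F = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))

def iid_regularizer(x_real: torch.Tensor) -> torch.Tensor:
    """Encourage inverse samples F(x) of real data to match N(0, I)."""
    z_inv = F(x_real)                              # inverse samples in latent space
    mean_penalty = z_inv.mean(dim=0).pow(2).sum()  # first moment: zero mean
    cov_penalty = (torch.cov(z_inv.T)
                   - torch.eye(latent_dim)).pow(2).sum()  # second moment: identity
    return mean_penalty + cov_penalty

x_real = torch.randn(128, data_dim)  # stand-in for a batch of real data
loss = iid_regularizer(x_real)       # added on top of the usual GAN losses
```

This targets the necessary condition stated above: inverse samples of real data from every mode must jointly look like the Gaussian source, so a dropped mode would show up as a distorted latent fit.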
Related papers
- FedUV: Uniformity and Variance for Heterogeneous Federated Learning [5.9330433627374815]
Federated learning is a promising framework for training neural networks on widely distributed data, but performance can degrade when that data is heterogeneous.
Recent work has shown this degradation is largely due to the final layer of the network being most prone to local bias.
We investigate the training dynamics of the classifier by applying SVD to its weights, motivated by the observation that freezing the weights results in constant singular values.
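As a rough illustration of this summary, the sketch below inspects the singular values of a stand-in final layer and applies hedged "variance" and "uniformity" penalties; the VICReg-style variance hinge and hyperspherical uniformity term are assumptions, not necessarily FedUV's exact objectives.

```python
import torch
import torch.nn.functional as F

classifier = torch.nn.Linear(128, 10)        # stand-in final (classifier) layer
S = torch.linalg.svdvals(classifier.weight)  # near-constant values would mimic
                                             # the frozen-weight regime noted above

def variance_loss(logits: torch.Tensor) -> torch.Tensor:
    # Keep each class logit's std across the batch from collapsing (assumed form).
    return F.relu(1.0 - logits.std(dim=0)).mean()

def uniformity_loss(feats: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Spread L2-normalized features uniformly on the hypersphere (assumed form).
    z = F.normalize(feats, dim=1)
    return torch.pdist(z).pow(2).mul(-t).exp().mean().log()

feats, logits = torch.randn(32, 128), torch.randn(32, 10)  # stand-in local batch
reg = variance_loss(logits) + uniformity_loss(feats)
```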
arXiv Detail & Related papers (2024-02-27T15:53:15Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both local-transition and trajectory-level approaches: motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
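A hypothetical sketch of the inference procedure this summary describes: roll out a learned transition policy step by step, then use a learned trajectory-level energy to rank candidate samples. The network shapes, horizon, and noise scale are stand-ins.

```python
import torch
import torch.nn as nn

state_dim, horizon, n_candidates = 8, 20, 16
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
energy = nn.Sequential(nn.Linear(horizon * state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def sample_trajectory(x0: torch.Tensor) -> torch.Tensor:
    """Iteratively sample a trajectory from the local transition policy."""
    steps, x = [], x0
    for _ in range(horizon):
        x = x + policy(x) + 0.1 * torch.randn_like(x)  # stochastic local step
        steps.append(x)
    return torch.stack(steps)                          # (horizon, state_dim)

candidates = [sample_trajectory(torch.zeros(state_dim)) for _ in range(n_candidates)]
scores = torch.stack([energy(t.flatten()) for t in candidates]).squeeze(-1)
best = candidates[int(scores.argmin())]  # lower energy = better sample quality
```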
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Delta-AI: Local objectives for amortized inference in sparse graphical models [64.5938437823851]
We present $\Delta$-AI, a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs).
Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, the sparsity of the PGM enables local credit assignment in the agent's policy-learning objective.
We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and for training latent variable models with sparse factor structure.
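To make the local-credit-assignment point concrete, here is a loose sketch for a chain-structured PGM over binary variables: each variable's sampling policy is trained with a loss that evaluates only its own local factor, never the full joint. The squared log-odds matching loss and all names are illustrative assumptions, not $\Delta$-AI's exact objective.

```python
import torch
import torch.nn as nn

n_vars = 5
# One tiny policy head per variable, conditioned on its parent in the chain.
policies = nn.ModuleList(nn.Linear(1, 2) for _ in range(n_vars))

def log_factor(x_prev: torch.Tensor, x_i: torch.Tensor) -> torch.Tensor:
    return 0.5 * x_prev * x_i  # stand-in pairwise log-potential

def local_loss(i: int, x_prev: torch.Tensor) -> torch.Tensor:
    # Match the policy's log-odds to the local factor's log-odds: a purely
    # local objective, since only variable i's own factor is evaluated.
    log_pi = torch.log_softmax(policies[i](x_prev.unsqueeze(-1)), dim=-1)
    target = log_factor(x_prev, torch.tensor(1.0)) - log_factor(x_prev, torch.tensor(0.0))
    return (log_pi[..., 1] - log_pi[..., 0] - target).pow(2)

x_prev = torch.tensor(1.0)  # sampled value of the parent variable
loss = sum(local_loss(i, x_prev) for i in range(1, n_vars))
```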
arXiv Detail & Related papers (2023-10-03T20:37:03Z)
- Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling [77.15766509677348]
Conditional generative models often inherit spurious correlations from the training dataset.
This can result in label-conditional distributions that are imbalanced with respect to another latent attribute.
We propose a general two-step strategy to mitigate this issue.
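A hedged sketch of what a corrective-sampling step could look like: rejection-sample label-conditional generations until a latent attribute, predicted by an auxiliary classifier, is balanced within the batch. The helper names are hypothetical, and the paper's two-step strategy may differ.

```python
import torch

def corrective_sample(generate, attr_classifier, label, n, target_frac=0.5):
    """Rejection-sample until the batch is balanced in the latent attribute."""
    kept, count_a = [], 0
    while len(kept) < n:
        x = generate(label)          # one label-conditional sample
        a = attr_classifier(x)       # predicted latent attribute in {0, 1}
        if a == 1 and count_a >= target_frac * n:
            continue                 # enough attribute-1 samples already
        if a == 0 and len(kept) - count_a >= (1 - target_frac) * n:
            continue                 # enough attribute-0 samples already
        count_a += a
        kept.append(x)
    return torch.stack(kept)

# Hypothetical stand-ins for the conditional generator and attribute classifier:
generate = lambda label: torch.randn(3, 32, 32)
attr_classifier = lambda x: int(x.mean() > 0)
batch = corrective_sample(generate, attr_classifier, label=0, n=16)
```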
arXiv Detail & Related papers (2022-12-05T08:09:33Z)
- Distribution Fitting for Combating Mode Collapse in Generative Adversarial Networks [1.5769569085442372]
Mode collapse is a significant unsolved issue of generative adversarial networks.
We propose a global distribution fitting (GDF) method with a penalty term to confine the generated data distribution.
We also propose a local distribution fitting (LDF) method to deal with the circumstance when the overall real data is unreachable.
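As one plausible instantiation of such a penalty term, the sketch below adds an RBF-kernel maximum mean discrepancy between a real batch and a generated batch to the generator objective; the kernel choice and the exact form of GDF's penalty are assumptions.

```python
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy with a Gaussian kernel."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

x_real = torch.randn(64, 128)      # stand-in real batch (feature space)
x_fake = torch.randn(64, 128)      # stand-in generated batch
penalty = rbf_mmd(x_real, x_fake)  # added to the usual generator loss
```

Per the summary, LDF would apply the same style of fitting batch-by-batch when the overall real data is unreachable.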
arXiv Detail & Related papers (2022-12-03T03:39:44Z)
- Learning by Erasing: Conditional Entropy based Transferable Out-Of-Distribution Detection [17.31471594748061]
Out-of-distribution (OOD) detection is essential to handle the distribution shifts between training and test scenarios.
Existing methods require retraining to capture the dataset-specific feature representation or data distribution.
We propose a deep generative model (DGM) based transferable OOD detection method, which does not need to be retrained on a new in-distribution (ID) dataset.
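A hedged sketch of retraining-free OOD scoring with a fixed deep generative model: score inputs with the pretrained model and calibrate a threshold on the new ID dataset. A Gaussian log-density stands in for the DGM here; the paper's conditional-entropy score is not reproduced.

```python
import torch

def ood_scores(log_prob, x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return -log_prob(x)  # higher score = more likely OOD

log_prob = lambda x: -0.5 * x.pow(2).sum(dim=1)  # stand-in pretrained DGM
id_batch = torch.randn(256, 16)                  # new ID dataset, no retraining
test_batch = torch.randn(32, 16) + 3.0           # shifted inputs, likely OOD
threshold = ood_scores(log_prob, id_batch).quantile(0.95)  # calibrate on ID data
is_ood = ood_scores(log_prob, test_batch) > threshold
```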
arXiv Detail & Related papers (2022-04-23T10:19:58Z)
- Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but that it plays a key role in the collapse of latent-traversal editing methods when deviating away from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z)
- Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent [32.906658998929394]
We focus on the problem of gradual domain adaptation, where the goal is to shift the model towards the target distribution when intermediate distributions are absent.
We propose GIFT, a method that creates virtual samples from intermediate distributions by interpolating representations of examples from source and target domains.
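A minimal sketch of the interpolation step, assuming random pairing of source and target examples in representation space and a hand-picked schedule for the mixing ratio; GIFT's actual pairing and schedule may differ.

```python
import torch

def virtual_batch(feat_src: torch.Tensor, feat_tgt: torch.Tensor, lam: float):
    """Convexly mix source and target representations into a virtual domain."""
    perm = torch.randperm(feat_tgt.size(0))  # random source-target pairing
    return (1 - lam) * feat_src + lam * feat_tgt[perm]

feat_src = torch.randn(32, 256)     # stand-in source representations
feat_tgt = torch.randn(32, 256)     # stand-in target representations
for lam in (0.25, 0.5, 0.75, 1.0):  # gradually shift toward the target domain
    virt = virtual_batch(feat_src, feat_tgt, lam)
    # ...pseudo-label `virt` with the current model and adapt on it
```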
arXiv Detail & Related papers (2021-06-10T22:47:06Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail to correctly identify positive and negative samples through distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
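One way to express such a global separation constraint, sketched under assumptions: summarize positive-pair and negative-pair distances by their batch means and standard deviations, then penalize overlap through a margin on the standardized gap; the paper's exact constraint may differ.

```python
import torch
import torch.nn.functional as F

def separation_loss(d_pos: torch.Tensor, d_neg: torch.Tensor, margin: float = 1.0):
    """Push the negative-distance distribution above the positive one."""
    mu_p, std_p = d_pos.mean(), d_pos.std()
    mu_n, std_n = d_neg.mean(), d_neg.std()
    gap = (mu_n - mu_p) / (std_p + std_n + 1e-6)  # standardized separation
    return F.relu(margin - gap)

d_pos = torch.rand(128)        # stand-in positive-pair distances
d_neg = torch.rand(128) + 0.5  # stand-in negative-pair distances
loss = separation_loss(d_pos, d_neg)
```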
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.