Investigating Shifts in GAN Output-Distributions
- URL: http://arxiv.org/abs/2112.14061v1
- Date: Tue, 28 Dec 2021 09:16:55 GMT
- Title: Investigating Shifts in GAN Output-Distributions
- Authors: Ricard Durall, Janis Keuper
- Abstract summary: We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data.
Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms.
- Score: 5.076419064097734
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: A fundamental and still largely unsolved question in the context of
Generative Adversarial Networks is whether they are truly able to capture the
real data distribution and, consequently, to sample from it. In particular, the
multidimensional nature of image distributions leads to a complex evaluation of
the diversity of GAN distributions. Existing approaches provide only a partial
understanding of this issue, leaving the question unanswered. In this work, we
introduce a loop-training scheme for the systematic investigation of observable
shifts between the distributions of real training data and GAN generated data.
Additionally, we introduce several bounded measures for distribution shifts,
which are both easy to compute and to interpret. Overall, the combination of
these methods allows an explorative investigation of innate limitations of
current GAN algorithms. Our experiments on different data-sets and multiple
state-of-the-art GAN architectures reveal large shifts between input and output
distributions, indicating that existing theoretical guarantees on the
convergence of output distributions appear not to hold in practice.
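The listing does not reproduce the paper's actual shift measures, so the following is an illustrative sketch only, not the authors' metric: one bounded, easy-to-compute, easy-to-interpret measure of shift between real and GAN-generated data is the total variation distance between the class histograms that a fixed classifier assigns to the two sample sets. It is always in [0, 1] and reads directly as the fraction of probability mass that would have to move to match the distributions.

```python
import numpy as np

def total_variation(p_counts, q_counts):
    """Total variation distance between two empirical categorical
    distributions, given as raw counts. Bounded in [0, 1]."""
    p = np.asarray(p_counts, dtype=float)
    q = np.asarray(q_counts, dtype=float)
    p /= p.sum()  # normalize counts to probabilities
    q /= q.sum()
    return 0.5 * np.abs(p - q).sum()

# Hypothetical example: class histograms of real vs. generated samples
real_hist = [500, 500]  # balanced real training data
gen_hist = [800, 200]   # generator over-represents class 0
print(total_variation(real_hist, gen_hist))  # ≈ 0.3 (30% of mass shifted)
```

A value of 0 means the two histograms agree exactly; a value of 1 means the supports are disjoint, i.e. the generator's output distribution shares no class mass with the training data.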
Related papers
- Generative Assignment Flows for Representing and Learning Joint Distributions of Discrete Data [2.6499018693213316]
We introduce a novel generative model for the representation of joint probability distributions of a possibly large number of discrete random variables.
The embedding of the flow via the Segre map in the meta-simplex of all discrete joint distributions ensures that any target distribution can be represented in principle.
Our approach has strong motivation from first principles of modeling coupled discrete variables.
arXiv Detail & Related papers (2024-06-06T21:58:33Z)
- Generative Modeling of Discrete Joint Distributions by E-Geodesic Flow Matching on Assignment Manifolds [0.8594140167290099]
General non-factorizing discrete distributions can be approximated by embedding the submanifold into the meta-simplex of all joint discrete distributions.
Efficient training of the generative model is demonstrated by matching the flow of geodesics of factorizing discrete distributions.
arXiv Detail & Related papers (2024-02-12T17:56:52Z)
- Variational DAG Estimation via State Augmentation With Stochastic Permutations [16.57658783816741]
Estimating the structure of a Bayesian network from observational data is a statistically and computationally hard problem.
From a probabilistic inference perspective, the main challenges are (i) representing distributions over graphs that satisfy the DAG constraint and (ii) estimating a posterior over the underlying space.
We propose an approach that addresses these challenges by formulating a joint distribution on an augmented space of DAGs and permutations.
arXiv Detail & Related papers (2024-02-04T23:51:04Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Global Distance-distributions Separation for Unsupervised Person Re-identification [93.39253443415392]
Existing unsupervised ReID approaches often fail in correctly identifying the positive samples and negative samples through the distance-based matching/ranking.
We introduce a global distance-distributions separation constraint over the two distributions to encourage the clear separation of positive and negative samples from a global view.
We show that our method leads to significant improvement over the baselines and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-06-01T07:05:39Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.