Structure-preserving GANs
- URL: http://arxiv.org/abs/2202.01129v1
- Date: Wed, 2 Feb 2022 16:40:04 GMT
- Title: Structure-preserving GANs
- Authors: Jeremiah Birrell, Markos A. Katsoulakis, Luc Rey-Bellet, Wei Zhu
- Abstract summary: We introduce structure-preserving GANs as a data-efficient framework for learning distributions.
We show that we can reduce the discriminator space to its projection on the invariant discriminator space.
We contextualize our framework by building symmetry-preserving GANs for distributions with intrinsic group symmetry.
- Score: 6.438897276587413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs), a class of distribution-learning
methods based on a two-player game between a generator and a discriminator, can
generally be formulated as a minimax problem based on the variational
representation of a divergence between the unknown and the generated
distributions. We introduce structure-preserving GANs as a data-efficient
framework for learning distributions with additional structure such as group
symmetry, by developing new variational representations for divergences. Our
theory shows that we can reduce the discriminator space to its projection on
the invariant discriminator space, using the conditional expectation with
respect to the $\sigma$-algebra associated to the underlying structure. In
addition, we prove that the discriminator space reduction must be accompanied
by a careful design of structured generators, as flawed designs may easily lead
to a catastrophic "mode collapse" of the learned distribution. We contextualize
our framework by building symmetry-preserving GANs for distributions with
intrinsic group symmetry, and demonstrate that both players, namely the
equivariant generator and invariant discriminator, play important but distinct
roles in the learning process. Empirical experiments and ablation studies
across a broad range of data sets, including real-world medical imaging,
validate our theory, and show our proposed methods achieve significantly
improved sample fidelity and diversity -- almost an order of magnitude measured
in Fréchet Inception Distance -- especially in the small data regime.
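For intuition, when the symmetry is a finite group, the conditional expectation onto the invariant discriminator space reduces to averaging over the group orbit. The following minimal NumPy sketch (an illustration of the principle, not the authors' implementation) shows both players for the C4 group of 90-degree image rotations; `d` and `g_out` are placeholders for any discriminator network and raw generator output:

```python
import numpy as np

def c4_orbit(x):
    """Orbit of an image under C4, the group of 0/90/180/270-degree rotations."""
    return [np.rot90(x, k) for k in range(4)]

def invariant_discriminator(d, x):
    """Project an arbitrary discriminator d onto the C4-invariant space.
    For a finite group, the conditional expectation with respect to the
    invariant sigma-algebra is exactly this average over the orbit."""
    return np.mean([d(gx) for gx in c4_orbit(x)])

def symmetrized_generator_sample(g_out, rng):
    """Make the generated distribution C4-invariant by composing a raw
    generator sample with a uniformly random group element."""
    return np.rot90(g_out, k=int(rng.integers(4)))
```

Note that an invariant discriminator cannot tell a distribution from its group-symmetrized version, which is why the abstract stresses that the generator side of the pairing must be designed with equal care.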
Related papers
- Uniform Transformation: Refining Latent Representation in Variational Autoencoders [7.4316292428754105]
We introduce a novel adaptable three-stage Uniform Transformation (UT) module to address irregular latent distributions.
By reconfiguring irregular distributions into a uniform distribution in the latent space, our approach significantly enhances the disentanglement and interpretability of latent representations.
Empirical evaluations demonstrate the efficacy of the proposed UT module in improving disentanglement metrics across benchmark datasets.
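The three-stage UT module itself is not described in this summary; as a loose illustration of the underlying idea of reconfiguring an irregular distribution into a uniform one, the classic probability integral transform does exactly this in one dimension via the (empirical) CDF:

```python
import numpy as np
from scipy.stats import rankdata

# Probability integral transform: if Z ~ F with F continuous, then F(Z) ~ Uniform(0, 1).
# When F is unknown, normalized ranks (the empirical CDF) approximate the transform.
rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)     # "irregular" (here Gaussian) latent samples
u = (rankdata(z) - 0.5) / len(z)    # approximately Uniform(0, 1)
```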
arXiv Detail & Related papers (2024-07-02T21:46:23Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Heterogeneous Target Speech Separation [52.05046029743995]
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts.
Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts.
arXiv Detail & Related papers (2022-04-07T17:14:20Z)
- Cycle Consistent Probability Divergences Across Different Spaces [38.43511529063335]
Discrepancy measures between probability distributions are at the core of statistical inference and machine learning.
This work proposes a novel unbalanced Monge optimal transport formulation for matching, up to isometries, distributions on different spaces.
arXiv Detail & Related papers (2021-11-22T16:35:58Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly monotone, monotone, and non-monotone settings.
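For reference, the extra-gradient step in the title is the classical look-ahead update for variational inequalities; a minimal single-device sketch (the paper's decentralized, local-update variant builds on this basic step):

```python
import numpy as np

def extragradient(F, x0, step=0.1, iters=2000):
    """Extra-gradient method for the VI: find x* with <F(x*), x - x*> >= 0.
    Each iteration takes an exploratory step, then updates using the
    operator evaluated at the look-ahead point."""
    x = x0
    for _ in range(iters):
        x_half = x - step * F(x)     # extrapolation (look-ahead) step
        x = x - step * F(x_half)     # update with the look-ahead operator
    return x

# Bilinear saddle point min_u max_v u*v, i.e. F(u, v) = (v, -u): plain
# gradient descent-ascent spirals outward here, extra-gradient contracts to (0, 0).
F = lambda x: np.array([x[1], -x[0]])
print(extragradient(F, np.array([1.0, 1.0])))   # close to [0, 0]
```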
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- A Convenient Infinite Dimensional Framework for Generative Adversarial Learning [4.396860522241306]
We propose an infinite dimensional theoretical framework for generative adversarial learning.
In our framework, the Jensen-Shannon divergence between the distribution induced by the generator from the adversarial learning procedure and the data-generating distribution converges to zero.
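For concreteness, the Jensen-Shannon divergence that this convergence result concerns has a simple closed form for discrete distributions; a small self-contained sketch:

```python
import numpy as np
from scipy.stats import entropy

def js_divergence(p, q):
    """JS(P, Q) = 0.5*KL(P || M) + 0.5*KL(Q || M) with mixture M = (P + Q)/2.
    Symmetric, bounded by log 2 (in nats), and zero iff P = Q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * entropy(p, m) + 0.5 * entropy(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))   # 0.0
print(js_divergence([1.0, 0.0], [0.0, 1.0]))   # log(2) ~= 0.693
```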
arXiv Detail & Related papers (2020-11-24T13:45:17Z)
- Generative Model without Prior Distribution Matching [26.91643368299913]
Variational Autoencoder (VAE) and its variations are classic generative models by learning a low-dimensional latent representation to satisfy some prior distribution.
We propose to let the prior match the embedding distribution rather than imposing the latent variables to fit the prior.
arXiv Detail & Related papers (2020-09-23T09:33:24Z)
- Generalization Properties of Optimal Transport GANs with Latent Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
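The decomposition mentioned here boils down to linear algebra on the pre-trained weights; a minimal sketch of that idea, where A is assumed to be the weight matrix of the first layer that transforms the latent code:

```python
import numpy as np

def closed_form_directions(A, k=5):
    """Return k candidate semantic directions as the top eigenvectors of
    A^T A, i.e. the unit latent directions that A stretches the most."""
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]            # re-sort descending
    return eigvecs[:, order[:k]].T               # one direction per row
```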
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- AI Giving Back to Statistics? Discovery of the Coordinate System of Univariate Distributions by Beta Variational Autoencoder [0.0]
The article discusses experiences of training neural networks to classify univariate empirical distributions and to represent them in a two-dimensional latent space, forcing disentanglement based on cumulative distribution function (CDF) inputs.
The representation in the latent two-dimensional coordinate system can be seen as additional metadata of the real-world data that disentangles important distribution characteristics, such as the shape of the CDF, classification probabilities of underlying theoretical distributions and their parameters, information entropy, and skewness.
arXiv Detail & Related papers (2020-04-06T14:11:13Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
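The exact loss the paper designs is not spelled out in this summary; for background, the generic triplet loss it builds on hinges on the gap between anchor-positive and anchor-negative distances:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Zero once the anchor is closer to the positive than to the
    negative by at least `margin`; positive (driving updates) otherwise."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```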
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.