Partition-Guided GANs
- URL: http://arxiv.org/abs/2104.00816v1
- Date: Fri, 2 Apr 2021 00:06:53 GMT
- Title: Partition-Guided GANs
- Authors: Mohammadreza Armandpour, Ali Sadeghian, Chunyuan Li, Mingyuan Zhou
- Abstract summary: We design a partitioner that breaks the space into smaller regions, each having a simpler distribution, and train a different generator for each partition.
This is done in an unsupervised manner without requiring any labels.
Experimental results on various standard benchmarks show that the proposed unsupervised model outperforms several recent methods.
- Score: 63.980473635585234
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the success of Generative Adversarial Networks (GANs), their training
suffers from several well-known problems, including mode collapse and
difficulties learning a disconnected set of manifolds. In this paper, we break
down the challenging task of learning complex high-dimensional distributions,
supporting diverse data samples, into simpler sub-tasks. Our solution relies on
designing a partitioner that breaks the space into smaller regions, each having
a simpler distribution, and training a different generator for each partition.
This is done in an unsupervised manner without requiring any labels.
We formulate two desired criteria for the space partitioner that aid the
training of our mixture of generators: 1) producing connected partitions and
2) providing a proxy of the distance between partitions and data samples, along with
a direction for reducing that distance. These criteria are developed to avoid
producing samples from places with non-existent data density, and also
facilitate training by providing additional direction to the generators. We
develop theoretical constraints for a space partitioner to satisfy the above
criteria. Guided by our theoretical analysis, we design an effective neural
architecture for the space partitioner that empirically assures these
conditions. Experimental results on various standard benchmarks show that the
proposed unsupervised model outperforms several recent methods.
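A minimal, self-contained sketch of the mixture-of-generators idea on 2-D toy data follows. It is not the paper's implementation: a plain k-means partitioner and a centroid-distance guide term stand in for the learned neural partitioner and its distance proxy, and K, guide_weight, and the tiny networks are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Toy data: a mixture of well-separated 2-D Gaussians (disconnected modes).
centers = torch.tensor([[-4.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
real = centers[torch.randint(0, 3, (3000,))] + 0.3 * torch.randn(3000, 2)

# 1) Unsupervised partitioner. Here a plain k-means; the paper instead designs a
#    neural partitioner whose regions are connected and which exposes a
#    differentiable distance proxy toward each region.
K = 3
km = KMeans(n_clusters=K, n_init=10).fit(real.numpy())
labels = torch.as_tensor(km.labels_)
centroids = torch.as_tensor(km.cluster_centers_, dtype=torch.float32)

def mlp(d_in, d_out):  # tiny generator / discriminator bodies
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, d_out))

latent_dim, guide_weight = 8, 0.1
gens = [mlp(latent_dim, 2) for _ in range(K)]
discs = [mlp(2, 1) for _ in range(K)]
g_opts = [torch.optim.Adam(g.parameters(), lr=1e-3) for g in gens]
d_opts = [torch.optim.Adam(d.parameters(), lr=1e-3) for d in discs]

# 2) One generator/discriminator pair per partition, trained only on that region.
for step in range(2000):
    for k in range(K):
        region = real[labels == k]
        real_k = region[torch.randint(0, len(region), (64,))]
        z = torch.randn(64, latent_dim)

        # Discriminator step (non-saturating GAN loss, restricted to region k).
        fake_k = gens[k](z).detach()
        d_loss = F.softplus(-discs[k](real_k)).mean() + F.softplus(discs[k](fake_k)).mean()
        d_opts[k].zero_grad(); d_loss.backward(); d_opts[k].step()

        # Generator step plus partition guidance: a crude distance proxy (squared
        # distance to the region centroid) stands in for the paper's learned proxy;
        # its gradient pulls generated samples back toward their own region.
        fake_k = gens[k](z)
        guide = ((fake_k - centroids[k]) ** 2).sum(dim=1).mean()
        g_loss = F.softplus(-discs[k](fake_k)).mean() + guide_weight * guide
        g_opts[k].zero_grad(); g_loss.backward(); g_opts[k].step()
```

At sampling time, one first picks a partition (for instance in proportion to its share of the data) and then draws from that partition's generator, so the mixture can cover disconnected modes without generating from the empty space between them.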
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI [29.13807697733638]
We build on the remarkable achievements in generative sampling of natural images.
We propose an innovative, and potentially overly ambitious, challenge: generating samples that resemble images.
The statistical challenge lies in the small sample size, sometimes consisting of a few hundred subjects.
arXiv Detail & Related papers (2024-04-10T22:35:06Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Smoothing the Generative Latent Space with Mixup-based Distance Learning [32.838539968751924]
We consider the situation where neither a large-scale dataset of interest nor a transferable source dataset is available.
We propose latent mixup-based distance regularization on the feature space of both a generator and the counterpart discriminator.
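Below is one plausible, hedged reading of such a regularizer, not the paper's exact formulation: the feature-space distances from the output of a mixed latent code to the outputs of its two endpoints are pushed to follow the mixup coefficient. The `generator` and `feat_extractor` modules (e.g., an intermediate layer of the generator or discriminator) are assumptions.

```python
import torch
import torch.nn.functional as F

def mixup_distance_reg(generator, feat_extractor, z1, z2):
    """Hedged sketch: make feature-space similarity to the two endpoints of a
    latent mixup follow the mixup coefficient, smoothing the latent space."""
    lam = torch.rand(z1.shape[0], 1, device=z1.device)  # mixup coefficients in [0, 1)
    z_mix = lam * z1 + (1 - lam) * z2

    f1, f2 = feat_extractor(generator(z1)), feat_extractor(generator(z2))
    f_mix = feat_extractor(generator(z_mix))

    # Distances from the mixed sample's features to each endpoint's features ...
    d = torch.stack([(f_mix - f1).pow(2).sum(1), (f_mix - f2).pow(2).sum(1)], dim=1)
    # ... turned into a 2-way assignment and matched to (lam, 1 - lam).
    logp = F.log_softmax(-d, dim=1)
    target = torch.cat([lam, 1 - lam], dim=1)
    return F.kl_div(logp, target, reduction="batchmean")
```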
arXiv Detail & Related papers (2021-11-23T06:39:50Z)
- Latent reweighting, an almost free improvement for GANs [12.605607949417033]
A line of work aims at improving sampling quality from pre-trained generators at the expense of increased computational cost.
We introduce an additional network to predict latent importance weights and two associated sampling methods to avoid the poorest samples.
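A hedged sketch of one such scheme, rejection sampling on latent codes: `weight_net` (the importance-weight predictor), `w_max`, and the 4x oversampling factor are illustrative assumptions, not details from the abstract.

```python
import torch

@torch.no_grad()
def rejection_sample_latents(generator, weight_net, n_samples, latent_dim, w_max):
    """Keep a latent code with probability w / w_max, where w is its predicted
    importance weight, so the poorest latents are rarely used for sampling."""
    kept = []
    while sum(b.shape[0] for b in kept) < n_samples:
        z = torch.randn(4 * n_samples, latent_dim)   # oversample candidate latents
        w = weight_net(z).squeeze(-1)                # predicted importance weights
        accept = torch.rand_like(w) < (w / w_max).clamp(max=1.0)
        kept.append(z[accept])
    return generator(torch.cat(kept)[:n_samples])
```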
arXiv Detail & Related papers (2021-10-19T08:33:57Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, which makes their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields a larger performance gain than increasing the network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- The Bures Metric for Generative Adversarial Networks [10.69910379275607]
Generative Adversarial Networks (GANs) are performant generative methods yielding high-quality samples.
We propose to match the real batch diversity to the fake batch diversity.
We observe that diversity matching reduces mode collapse substantially and has a positive effect on the sample quality.
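A minimal sketch of what such diversity matching can look like, assuming batch diversity is summarized by the covariance of per-sample features (e.g., taken from the discriminator) and compared with the Bures metric; the resulting distance would be added to the generator loss as a penalty. The helper names are assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def batch_covariance(feats):
    """Covariance of a batch of feature vectors, shape (batch, dim)."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    return centered.T @ centered / (feats.shape[0] - 1)

def bures_distance_sq(cov_real, cov_fake, eps=1e-8):
    """Squared Bures distance between two covariance matrices:
    tr(A) + tr(B) - 2 tr((A^1/2 B A^1/2)^1/2)."""
    cov_real = cov_real + eps * np.eye(cov_real.shape[0])  # numerical stability
    sqrt_real = sqrtm(cov_real)
    cross = sqrtm(sqrt_real @ cov_fake @ sqrt_real)
    return np.trace(cov_real) + np.trace(cov_fake) - 2.0 * np.real(np.trace(cross))

# Usage sketch: penalty = bures_distance_sq(batch_covariance(real_feats),
#                                           batch_covariance(fake_feats))
```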
arXiv Detail & Related papers (2020-06-16T12:04:41Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability remains a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
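For reference, a minimal sketch of the triplet loss involved; how the relation discriminator would form the anchor, positive, and negative pair embeddings (e.g., real-real pairs as positives, real-fake pairs as negatives) is an assumption here, not something stated in the abstract.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: the anchor embedding should be closer to the
    positive than to the negative by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```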