Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
- URL: http://arxiv.org/abs/2305.07613v1
- Date: Fri, 12 May 2023 17:03:18 GMT
- Title: Spider GAN: Leveraging Friendly Neighbors to Accelerate GAN Training
- Authors: Siddarth Asokan and Chandra Sekhar Seelamantula
- Abstract summary: We propose a novel approach for training GANs with images as inputs, but without enforcing any pairwise constraints.
The process can be made efficient by identifying closely related datasets, or a ``friendly neighborhood'' of the target distribution.
We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets.
- Score: 20.03447539784024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training Generative adversarial networks (GANs) stably is a challenging task.
The generator in GANs transforms noise vectors, typically Gaussian distributed,
into realistic data such as images. In this paper, we propose a novel approach
for training GANs with images as inputs, but without enforcing any pairwise
constraints. The intuition is that images are more structured than noise, which
the generator can leverage to learn a more robust transformation. The process
can be made efficient by identifying closely related datasets, or a ``friendly
neighborhood'' of the target distribution, inspiring the moniker, Spider GAN.
To define friendly neighborhoods leveraging proximity between datasets, we
propose a new measure called the signed inception distance (SID), inspired by
the polyharmonic kernel. We show that the Spider GAN formulation results in
faster convergence, as the generator can discover correspondence even between
seemingly unrelated datasets, for instance, between Tiny-ImageNet and CelebA
faces. Further, we demonstrate cascading Spider GAN, where the output
distribution from a pre-trained GAN generator is used as the input to the
subsequent network. Effectively, one distribution is transported to another in
a cascaded fashion until the target is learnt -- a new flavor of transfer
learning. We demonstrate the efficacy of the Spider approach on DCGAN,
conditional GAN, PGGAN, StyleGAN2 and StyleGAN3. The proposed approach achieves
state-of-the-art Frechet inception distance (FID) values, with one-fifth of the
training iterations, in comparison to their baseline counterparts on
high-resolution small datasets such as MetFaces, Ukiyo-E Faces and AFHQ-Cats.
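The FID values reported above compare Gaussian fits to Inception-v3 feature statistics of real and generated images. As a point of reference, here is a minimal NumPy sketch of the metric itself, operating on precomputed feature matrices; the function name and the eigenvalue-based matrix square root are illustrative choices, not the paper's implementation:

```python
import numpy as np

def fid(feats_real, feats_fake):
    """Frechet inception distance between Gaussian fits to two feature sets.

    feats_real, feats_fake: (n_samples, dim) arrays; in practice these
    would be Inception-v3 activations, but any feature vectors work here.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    # Tr((sigma1 @ sigma2)^(1/2)) via eigenvalues; clip tiny negative
    # values that arise from floating-point error
    eigvals = np.linalg.eigvals(sigma1 @ sigma2).real
    tr_covmean = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_covmean

rng = np.random.default_rng(0)
base = rng.normal(size=(2000, 4))
fid_same = fid(base, base)         # identical statistics -> ~0
fid_shift = fid(base, base + 3.0)  # equal covariances, mean shift of 3 per dim -> 36
```

Lower is better: identical feature distributions give a distance near zero, and with equal covariances the metric reduces to the squared mean difference.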
Related papers
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt)
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging, since the high-dimensional nature of this kind of data hinders the convergence of GAN training, leading to suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages the linear separability to cluster GAN's features.
KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z)
- Distilling Representations from GAN Generator via Squeeze and Span [55.76208869775715]
We propose to distill knowledge from GAN generators by squeezing and spanning their representations.
We span the distilled representation of the synthetic domain to the real domain by also using real training data to remedy the mode collapse of GANs.
arXiv Detail & Related papers (2022-11-06T01:10:28Z)
- Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGAN) generate realistic images by incorporating class information into GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
However, ACGAN tends to generate easily classifiable samples that lack diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z)
- Towards Discovery and Attribution of Open-world GAN Generated Images [18.10496076534083]
We present an iterative algorithm for discovering images generated from previously unseen GANs.
Our algorithm consists of multiple components including network training, out-of-distribution detection, clustering, merge and refine steps.
Our experiments demonstrate the effectiveness of our approach in discovering new GANs, and show that it can be used in an open-world setup.
arXiv Detail & Related papers (2021-05-10T18:00:13Z)
- MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains [77.46963293257912]
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain.
This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain.
We show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods.
arXiv Detail & Related papers (2021-04-28T13:10:56Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random input fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, making their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields greater performance gains than increasing the network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- Towards GANs' Approximation Ability [8.471366736328811]
This paper first analyzes GANs' approximation property theoretically.
We prove that the generator with the input latent variable in GANs can universally approximate the potential data distribution.
On practical datasets, four GANs using SDG also outperform the corresponding traditional GANs when the model architectures are smaller.
arXiv Detail & Related papers (2020-04-10T02:40:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.