GLeaD: Improving GANs with A Generator-Leading Task
- URL: http://arxiv.org/abs/2212.03752v2
- Date: Wed, 7 Jun 2023 03:34:34 GMT
- Title: GLeaD: Improving GANs with A Generator-Leading Task
- Authors: Qingyan Bai, Ceyuan Yang, Yinghao Xu, Xihui Liu, Yujiu Yang, Yujun
Shen
- Abstract summary: A generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D).
We propose a new paradigm for adversarial training, which makes G assign a task to D as well.
- Score: 44.14659523033865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A generative adversarial network (GAN) is formulated as a two-player game
between a generator (G) and a discriminator (D), where D is asked to
differentiate whether an image comes from real data or is produced by G. Under
such a formulation, D plays as the rule maker and hence tends to dominate the
competition. Towards a fairer game in GANs, we propose a new paradigm for
adversarial training, which makes G assign a task to D as well. Specifically,
given an image, we expect D to extract representative features that can be
adequately decoded by G to reconstruct the input. That way, instead of learning
freely, D is urged to align with the view of G for domain classification.
Experimental results on various datasets demonstrate the substantial
superiority of our approach over the baselines. For instance, we improve the
FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on
LSUN Church. We believe that the pioneering attempt presented in this work could
inspire the community with better-designed generator-leading tasks for GAN
improvement.
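To make the idea concrete, here is a minimal PyTorch-style sketch of such a generator-leading task, assuming toy encoder/decoder modules and a simple pixel-space MSE as the reconstruction term: D returns a feature code alongside its real/fake logit, and G must decode that code back into the input image. The module names, shapes, and the weight lambda_rec are illustrative assumptions, not the paper's StyleGAN2-based implementation.

```python
# Illustrative sketch only: toy modules standing in for the paper's
# StyleGAN2-based generator and discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyDiscriminator(nn.Module):
    """Outputs a real/fake logit and a feature code that G should be able to decode."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.logit_head = nn.Linear(128, 1)
        self.feat_head = nn.Linear(128, feat_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.logit_head(h), self.feat_head(h)


class ToyGenerator(nn.Module):
    """Decodes a 128-d code (a latent z or a feature from D) into a 32x32 image."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128 * 8 * 8), nn.LeakyReLU(0.2),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, code):
        return self.net(code)


def d_loss_with_generator_leading_task(G, D, real, z, lambda_rec=1.0):
    """Discriminator loss: non-saturating adversarial term plus a reconstruction
    term in which G must rebuild the input from the features extracted by D."""
    fake = G(z).detach()
    real_logit, real_feat = D(real)
    fake_logit, fake_feat = D(fake)
    adv = F.softplus(-real_logit).mean() + F.softplus(fake_logit).mean()
    # Pixel MSE stands in for the paper's reconstruction objective. Only D's
    # optimizer should step on this loss, so the term pushes D (not G) to
    # extract features that the current G can decode back into the input.
    rec = F.mse_loss(G(real_feat), real) + F.mse_loss(G(fake_feat), fake)
    return adv + lambda_rec * rec
```

In a training loop, only D's optimizer would step on this loss (e.g. opt_d.zero_grad(); loss.backward(); opt_d.step()), and G's gradient buffers would be zeroed before the usual generator update, so the reconstruction term shapes D rather than G.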
Related papers
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- Distilling Representations from GAN Generator via Squeeze and Span [55.76208869775715]
We propose to distill knowledge from GAN generators by squeezing and spanning their representations.
We span the distilled representation of the synthetic domain to the real domain by also using real training data to remedy the mode collapse of GANs.
arXiv Detail & Related papers (2022-11-06T01:10:28Z)
- DGL-GAN: Discriminator Guided Learning for GAN Compression [57.6150859067392]
Generative Adversarial Networks (GANs) with high computation costs have achieved remarkable results in synthesizing high-resolution images from random noise.
We propose a novel yet simple Discriminator Guided Learning approach for compressing the vanilla GAN, dubbed DGL-GAN.
arXiv Detail & Related papers (2021-12-13T09:24:45Z)
- Improving GAN Equilibrium by Raising Spatial Awareness [80.71970464638585]
Generative Adversarial Networks (GANs) are built upon the adversarial training between a generator (G) and a discriminator (D), which ideally reach an equilibrium.
In practice this equilibrium is difficult to achieve; instead, D almost always surpasses G.
We propose to align the spatial awareness of G with the attention map induced from D.
arXiv Detail & Related papers (2021-12-01T18:55:51Z)
- Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
However, ACGAN tends to generate easily classifiable samples that lack diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z)
- DeepNAG: Deep Non-Adversarial Gesture Generation [4.46895288699085]
Generative adversarial networks (GANs) have shown superior performance for image data augmentation.
However, GANs require simultaneous training of a generator and a discriminator network.
We first discuss a novel, device-agnostic GAN model for gesture synthesis called DeepGAN.
arXiv Detail & Related papers (2020-11-18T08:00:12Z)
- Exploring DeshuffleGANs in Self-Supervised Generative Adversarial Networks [0.0]
We study the contribution of the self-supervision task, deshuffling, to the generalizability of DeshuffleGANs.
We show that the DeshuffleGAN obtains the best FID results for several datasets compared to the other self-supervised GANs.
We design the conditional DeshuffleGAN called cDeshuffleGAN to evaluate the quality of the learnt representations.
arXiv Detail & Related papers (2020-11-03T14:22:54Z)
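As a companion sketch for the deshuffling task mentioned in the last entry, the snippet below shuffles a 2x2 grid of image tiles and trains an auxiliary head on the discriminator's features to predict which permutation was applied. The 2x2 grid, the helper names, and the way the loss is attached are assumptions for illustration; the original DeshuffleGAN formulation may differ in its details.

```python
# Illustrative tile-deshuffling auxiliary task; not the official DeshuffleGAN code.
import itertools

import torch
import torch.nn.functional as F

# The 24 possible orderings of a 2x2 tile grid; the auxiliary head predicts
# which one was applied to a shuffled image.
PERMS = list(itertools.permutations(range(4)))


def shuffle_tiles(images, perm_ids):
    """Split each image into a 2x2 grid of tiles and reorder the tiles."""
    b, c, h, w = images.shape
    th, tw = h // 2, w // 2
    tiles = [images[:, :, i * th:(i + 1) * th, j * tw:(j + 1) * tw]
             for i in range(2) for j in range(2)]  # 4 tiles per image, row-major
    out = torch.empty_like(images)
    for n in range(b):
        perm = PERMS[int(perm_ids[n])]
        for dst, src in enumerate(perm):
            i, j = divmod(dst, 2)
            out[n, :, i * th:(i + 1) * th, j * tw:(j + 1) * tw] = tiles[src][n]
    return out


def deshuffle_loss(feature_extractor, permutation_head, images):
    """Auxiliary self-supervision loss: predict the permutation id of a
    shuffled image from the discriminator's features."""
    perm_ids = torch.randint(len(PERMS), (images.size(0),), device=images.device)
    shuffled = shuffle_tiles(images, perm_ids)
    logits = permutation_head(feature_extractor(shuffled))  # (B, 24)
    return F.cross_entropy(logits, perm_ids)
```

Such a term would typically be added to the discriminator objective with a small weighting coefficient, alongside the usual adversarial loss.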