Generator Knows What Discriminator Should Learn in Unconditional GANs
- URL: http://arxiv.org/abs/2207.13320v1
- Date: Wed, 27 Jul 2022 06:49:26 GMT
- Title: Generator Knows What Discriminator Should Learn in Unconditional GANs
- Authors: Gayoung Lee, Hyunsu Kim, Junho Kim, Seonghyeon Kim, Jung-Woo Ha,
Yunjey Choi
- Abstract summary: We propose a new generator-guided discriminator regularization (GGDR), in which generator feature maps supervise the discriminator to learn rich semantic representations in unconditional generation.
Specifically, we employ a U-Net architecture for the discriminator, which is trained to predict the generator feature maps given fake images as inputs.
- Score: 18.913330654689496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods for conditional image generation benefit from dense
supervision, such as segmentation label maps, to achieve high fidelity. However,
dense supervision has rarely been explored for unconditional image generation.
Here we study the efficacy of dense supervision in unconditional generation and
find that generator feature maps can serve as an alternative to costly semantic
label maps. Based on this empirical evidence, we propose a new generator-guided
discriminator regularization (GGDR), in which generator feature maps supervise
the discriminator to learn rich semantic representations in unconditional
generation. Specifically, we employ a U-Net architecture for the discriminator,
which is trained to predict the generator feature maps given fake images as
inputs. Extensive experiments on multiple datasets show that GGDR consistently
improves baseline methods both quantitatively and qualitatively. Code is
available at
https://github.com/naver-ai/GGDR
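Below is a minimal PyTorch-style sketch of the GGDR regularizer under assumed interfaces: a generator that also returns an intermediate feature map, and a U-Net discriminator that returns a real/fake logit plus a dense feature prediction. Function and argument names are illustrative, not the repository's API; see https://github.com/naver-ai/GGDR for the actual implementation.

import torch.nn.functional as F


def ggdr_regularizer(disc_unet, generator, z):
    # Fake image plus the generator feature map used as the dense target;
    # return_feature is an assumed convenience flag, not the official API.
    fake_img, gen_feat = generator(z, return_feature=True)
    # The U-Net discriminator's encoder head gives the usual real/fake logit,
    # while its decoder head predicts a feature map matching the target shape.
    logit, pred_feat = disc_unet(fake_img)
    # Match prediction and target; the target is detached so the regularizer
    # only shapes the discriminator. Per-location cosine distance is one
    # plausible choice of distance.
    target = gen_feat.detach()
    reg = (1.0 - F.cosine_similarity(pred_feat, target, dim=1)).mean()
    return logit, reg

# Usage: add lambda_ggdr * reg to the discriminator loss on fake samples;
# the generator's adversarial loss is left unchanged.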
Related papers
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a point where authentic and manipulated images are increasingly difficult to distinguish.
Although a number of face forgery datasets are publicly available, their forged faces are mostly generated with GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Return of Unconditional Generation: A Self-supervised Representation Generation Method [36.27605000082541]
Unconditional generation is the problem of modeling a data distribution without relying on human-annotated labels.
In this work, we show that one can close the quality gap to conditional generation by generating semantic representations in the representation space produced by a self-supervised encoder.
This framework, called Representation-Conditioned Generation (RCG), provides an effective solution to the unconditional generation problem without using labels; a minimal sketch of the two-stage idea follows.
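A rough sketch of that two-stage sampling pipeline; rep_generator and pixel_generator are placeholder names for illustration, not the RCG authors' API.

import torch

@torch.no_grad()
def rcg_style_sample(rep_generator, pixel_generator, num_samples, rep_dim=256):
    # Stage 1: sample a semantic representation in the space of a
    # self-supervised encoder; the representation generator is assumed to have
    # been trained to model that distribution, so no labels are involved.
    noise = torch.randn(num_samples, rep_dim)
    rep = rep_generator(noise)
    # Stage 2: a pixel generator conditioned on the sampled representation
    # produces the image, turning unconditional generation into
    # representation-conditioned generation.
    return pixel_generator(rep)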
arXiv Detail & Related papers (2023-12-06T18:59:31Z)
- Hierarchical Forgery Classifier On Multi-modality Face Forgery Clues [61.37306431455152]
We propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD).
The HFC-MFFD learns a robust patch-based hybrid representation to enhance forgery authentication in multi-modality scenarios.
A specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance.
arXiv Detail & Related papers (2022-12-30T10:54:29Z) - Latent Space is Feature Space: Regularization Term for GANs Training on
Limited Dataset [1.8634083978855898]
I propose an additional structure and loss function for GANs, called LFM, trained to maximize the feature diversity between the different dimensions of the latent space.
In experiments, the system is built upon DCGAN and shows an improvement in Frechet Inception Distance (FID) when training from scratch on the CelebA dataset.
arXiv Detail & Related papers (2022-10-28T16:34:48Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN; a minimal sketch follows.
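A minimal sketch of this encoder-to-latent idea under assumed names (ConditionToLatent, latent_dim, noise_dim are illustrative); the paper's actual architecture and training objective differ in detail.

import torch
import torch.nn as nn

class ConditionToLatent(nn.Module):
    # Maps a class label (plus noise) to the latent input of a frozen,
    # pre-trained non-conditional generator so the pair acts like a cGAN.
    def __init__(self, num_classes, latent_dim=512, noise_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_classes, 128)
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(128 + noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, labels):
        # Concatenating noise with the label embedding lets each class map to
        # a distribution over latents rather than to a single point.
        eps = torch.randn(labels.shape[0], self.noise_dim, device=labels.device)
        return self.mlp(torch.cat([self.embed(labels), eps], dim=1))

# Usage with a frozen pre-trained generator g (only the encoder is trained):
# fake = g(ConditionToLatent(num_classes=10)(labels))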
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Learning High-Resolution Domain-Specific Representations with a GAN Generator [5.8720142291102135]
We show that representations learnt by a GAN generator can be easily projected onto a semantic segmentation map using a lightweight decoder.
We propose the LayerMatch scheme for approximating the representation of a GAN generator, which can be used for unsupervised domain-specific pretraining.
We find that using a LayerMatch-pretrained backbone leads to superior accuracy compared to standard supervised pretraining on ImageNet.
arXiv Detail & Related papers (2020-06-18T11:57:18Z)
- Classify and Generate: Using Classification Latent Space Representations for Image Generations [17.184760662429834]
We propose a discriminative modeling framework that employs manipulated supervised latent representations to reconstruct and generate new samples belonging to a given class.
ReGene has higher classification accuracy than existing conditional generative models while being competitive in terms of FID.
arXiv Detail & Related papers (2020-04-16T09:13:44Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high-dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the properties of the WGAN discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.