Inducing Optimal Attribute Representations for Conditional GANs
- URL: http://arxiv.org/abs/2003.06472v2
- Date: Tue, 8 Sep 2020 13:36:11 GMT
- Title: Inducing Optimal Attribute Representations for Conditional GANs
- Authors: Binod Bhattarai and Tae-Kyun Kim
- Abstract summary: Conditional GANs are widely used in translating an image from one category to another.
Existing conditional GANs commonly encode target domain label information as hard-coded categorical vectors in the form of 0s and 1s.
We propose a novel end-to-end learning framework with Graph Convolutional Networks to learn attribute representations on which to condition the generator.
- Score: 61.24506213440997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional GANs are widely used in translating an image from one category to
another. Meaningful conditions to GANs provide greater flexibility and control
over the nature of the target domain synthetic data. Existing conditional GANs
commonly encode target domain label information as hard-coded categorical
vectors in the form of 0s and 1s. The major drawbacks of such representations are the inability to encode the high-order semantic information of target categories and their relative dependencies. We propose a novel end-to-end
learning framework with Graph Convolutional Networks to learn the attribute
representations that condition the generator. The GAN losses, i.e. the discriminator and attribute-classification losses, are fed back to the Graph, resulting in synthetic images that are more natural and clearer in their attributes. Moreover, prior art gives priority to conditioning on the generator side of GANs, not the discriminator side. We apply conditions to the discriminator side as well via multi-task learning. We enhance four state-of-the-art cGAN architectures: StarGAN, StarGAN-JNT, AttGAN, and STGAN.
Our extensive qualitative and quantitative evaluations on the challenging face-attribute manipulation datasets CelebA, LFWA, and RaFD show that cGANs enhanced by our method outperform their counterparts and other conditioning methods by a large margin, in terms of both target-attribute recognition rates and quality measures such as PSNR and SSIM.
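The contrast between hard-coded categorical conditioning and GCN-learned attribute representations can be sketched in a few lines of NumPy. This is a minimal illustration of one symmetrically normalized graph-convolution layer over an attribute graph; the random adjacency, feature sizes, and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_attr, feat_dim, emb_dim = 5, 8, 4

# Hard-coded conditioning used by existing cGANs: the target attributes
# as a flat 0/1 vector, with no notion of inter-attribute dependencies.
target = np.array([1.0, 0.0, 1.0, 0.0, 0.0])

# Learned conditioning: one graph-convolution layer H' = ReLU(A_norm H W)
# over an attribute graph (here a random symmetric co-occurrence graph;
# in practice adjacency and features would be derived from data and trained).
A = (rng.random((n_attr, n_attr)) > 0.5).astype(float)
A = np.maximum(A, A.T)                    # make the graph undirected
A_hat = A + np.eye(n_attr)                # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A_hat D^-1/2

H = rng.normal(size=(n_attr, feat_dim))   # attribute node features (illustrative)
W = rng.normal(size=(feat_dim, emb_dim))  # layer weight (trainable in practice)
H_out = np.maximum(A_norm @ H @ W, 0.0)   # per-attribute embeddings, (n_attr, emb_dim)

# Condition vector fed to the generator: sum of embeddings of active attributes.
cond = target @ H_out                     # shape (emb_dim,)
print(cond.shape)
```

Unlike the 0/1 vector, `cond` mixes information across related attributes through the normalized adjacency, which is the property the abstract attributes to the graph-based representation.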
Related papers
- Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z) - Cyclically Disentangled Feature Translation for Face Anti-spoofing [61.70377630461084]
We propose a novel domain adaptation method called cyclically disentangled feature translation network (CDFTN).
CDFTN generates pseudo-labeled samples that possess: 1) source domain-invariant liveness features and 2) target domain-specific content features, which are disentangled through domain adversarial training.
A robust classifier is trained based on the synthetic pseudo-labeled images under the supervision of source domain labels.
arXiv Detail & Related papers (2022-12-07T14:12:34Z) - Collapse by Conditioning: Training Class-conditional GANs with Limited
Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
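The gradual injection of conditional information described above can be sketched as a scalar schedule that scales the class-conditional embedding from zero (purely unconditional training) toward full strength. The linear ramp and warm-up length below are illustrative assumptions, not the paper's exact schedule.

```python
def condition_weight(step: int, warmup_steps: int = 10_000) -> float:
    """Fraction of conditional information injected at a given training step.
    0.0 -> the GAN trains unconditionally; 1.0 -> fully conditional."""
    return min(1.0, max(0.0, step / warmup_steps))

def mixed_condition(class_embedding: float, step: int,
                    warmup_steps: int = 10_000) -> float:
    """Scale the class-conditional embedding by the current schedule weight;
    at weight 0 the generator input reduces to the unconditional case."""
    return condition_weight(step, warmup_steps) * class_embedding

print(condition_weight(0), condition_weight(5_000), condition_weight(20_000))
```

Early in training the conditional term contributes nothing, avoiding the mode collapse the paper associates with conditioning on limited data; by the end of the warm-up the objective is fully conditional.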
arXiv Detail & Related papers (2022-01-17T18:59:23Z) - Are conditional GANs explicitly conditional? [0.0]
This paper proposes two contributions for conditional Generative Adversarial Networks (cGANs).
The first main contribution is an analysis of cGANs to show that they are not explicitly conditional.
The second contribution is a new method, called acontrario, that explicitly models conditionality for both parts of the adversarial architecture.
arXiv Detail & Related papers (2021-06-28T22:49:27Z) - Label Geometry Aware Discriminator for Conditional Generative Networks [40.89719383597279]
Conditional Generative Adversarial Networks (GANs) can generate highly photorealistic images of desired target classes.
These synthetic images, however, have not always proved helpful for improving downstream supervised tasks such as image classification.
arXiv Detail & Related papers (2021-05-12T08:17:25Z) - Guiding GANs: How to control non-conditional pre-trained GANs for
conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z) - Self-labeled Conditional GANs [2.9189409618561966]
This paper introduces a novel and fully unsupervised framework for conditional GAN training in which labels are automatically obtained from data.
We incorporate a clustering network into the standard conditional GAN framework that plays against the discriminator.
arXiv Detail & Related papers (2020-12-03T18:46:46Z) - Generalized Zero-Shot Learning via VAE-Conditioned Generative Flow [83.27681781274406]
Generalized zero-shot learning aims to recognize both seen and unseen classes by transferring knowledge from semantic descriptions to visual representations.
Recent generative methods formulate GZSL as a missing data problem, which mainly adopts GANs or VAEs to generate visual features for unseen classes.
We propose a conditional version of generative flows for GZSL, i.e., VAE-Conditioned Generative Flow (VAE-cFlow).
arXiv Detail & Related papers (2020-09-01T09:12:31Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.