Exploring DeshuffleGANs in Self-Supervised Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2011.01730v2
- Date: Wed, 1 Sep 2021 14:00:57 GMT
- Title: Exploring DeshuffleGANs in Self-Supervised Generative Adversarial
Networks
- Authors: Gulcin Baykal, Furkan Ozcelik, Gozde Unal
- Abstract summary: We study the contribution of the deshuffling self-supervision task used in DeshuffleGANs to generalizability across GAN architectures.
We show that the DeshuffleGAN obtains the best FID results for several datasets compared to the other self-supervised GANs.
We design a conditional DeshuffleGAN, called cDeshuffleGAN, to evaluate the quality of the learnt representations.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative Adversarial Networks (GANs) have become the most widely used
networks for solving the problem of image generation. Self-supervised GANs were
later proposed to avoid catastrophic forgetting in the discriminator and to
improve image generation quality without needing class labels. However, the
generalizability of self-supervision tasks across different GAN architectures
has not been studied before. To that end, we extensively analyze the
contribution of a previously proposed self-supervision task, the deshuffling
task of the DeshuffleGANs, in the generalizability context. We assign the
deshuffling task to two different GAN discriminators and study the effects of
the task on both architectures. We extend the evaluations of the previously
proposed DeshuffleGANs to various datasets. We show that the DeshuffleGAN
obtains the best FID results on several datasets compared to other
self-supervised GANs. Furthermore, we compare deshuffling with rotation
prediction, the first self-supervision task deployed in GAN training, and
demonstrate that its contribution exceeds that of rotation prediction. We
design a conditional DeshuffleGAN, called cDeshuffleGAN, to evaluate the
quality of the learnt representations. Lastly, we show the contribution of the
self-supervision tasks to GAN training on the loss landscape and demonstrate
that the effects of these tasks may not be cooperative with the adversarial
training in some settings. Our code can be found at
https://github.com/gulcinbaykal/DeshuffleGAN.
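The deshuffling task described above can be sketched as follows: split an image into a grid of tiles, shuffle the tiles with a randomly chosen permutation, and train the discriminator's auxiliary head to predict which permutation was applied. A minimal NumPy sketch of the tile-shuffling step, with function and parameter names that are illustrative rather than taken from the authors' repository:

```python
import numpy as np

def shuffle_tiles(image, grid=2, rng=None):
    """Split a square HxWxC image into grid*grid tiles, rearrange them with a
    random permutation, and return (shuffled_image, permutation).

    The permutation serves as the self-supervision target: a deshuffling head
    on the discriminator is trained to predict it.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    # Extract tiles in row-major order.
    tiles = [image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
             for r in range(grid) for c in range(grid)]
    perm = rng.permutation(grid * grid)
    # Place tile perm[dst] at destination slot dst.
    shuffled = np.zeros_like(image)
    for dst, src in enumerate(perm):
        r, c = divmod(dst, grid)
        shuffled[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tiles[src]
    return shuffled, perm
```

In the DeshuffleGAN setup, the permutation index acts as a classification label for an auxiliary cross-entropy loss on the discriminator, added alongside the adversarial loss.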
Related papers
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGANs) generate realistic images by incorporating class information into the GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN also tends to generate easily classifiable samples with a lack of diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z)
- Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- ErGAN: Generative Adversarial Networks for Entity Resolution [8.576633582363202]
A major challenge in learning-based entity resolution is how to reduce the label cost for training.
We propose a novel deep learning method, called ErGAN, to address the challenge.
We have conducted extensive experiments to empirically verify the labeling and learning efficiency of ErGAN.
arXiv Detail & Related papers (2020-12-18T01:33:58Z)
- Teaching a GAN What Not to Learn [20.03447539784024]
Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution.
In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi.
In the GAN framework, we not only provide the GAN positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid.
This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable.
arXiv Detail & Related papers (2020-10-29T14:44:24Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- DeshuffleGAN: A Self-Supervised GAN to Improve Structure Learning [0.0]
We argue that one of the crucial points to improve the GAN performance is to be able to provide the model with a capability to learn the spatial structure in data.
We introduce a deshuffling task that solves a puzzle of randomly shuffled image tiles, which in turn helps the DeshuffleGAN learn to increase its expressive capacity for spatial structure and realistic appearance.
arXiv Detail & Related papers (2020-06-15T19:06:07Z)
- Improving GAN Training with Probability Ratio Clipping and Sample Reweighting [145.5106274085799]
Generative adversarial networks (GANs) often suffer from inferior performance due to unstable training.
We propose a new variational GAN training framework which enjoys superior training stability.
By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks.
arXiv Detail & Related papers (2020-06-12T01:39:48Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss can provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
- Interpreting Galaxy Deblender GAN from the Discriminator's Perspective [50.12901802952574]
This research focuses on behaviors of one of the network's major components, the Discriminator, which plays a vital role but is often overlooked.
We demonstrate that our method clearly reveals attention areas of the Discriminator when differentiating generated galaxy images from ground truth images.
arXiv Detail & Related papers (2020-01-17T04:05:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.