S2cGAN: Semi-Supervised Training of Conditional GANs with Fewer Labels
- URL: http://arxiv.org/abs/2010.12622v1
- Date: Fri, 23 Oct 2020 19:13:44 GMT
- Title: S2cGAN: Semi-Supervised Training of Conditional GANs with Fewer Labels
- Authors: Arunava Chakraborty, Rahul Ragesh, Mahir Shah, Nipun Kwatra
- Abstract summary: Conditional GANs (cGANs) provide a mechanism to control the generation process by conditioning the output on a user-defined input.
We propose a framework for semi-supervised training of cGANs which utilizes sparse labels to learn the conditional mapping.
We demonstrate the effectiveness of our method on multiple datasets and different conditional tasks.
- Score: 1.3764085113103222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have been remarkably successful in
learning complex high-dimensional real-world distributions and generating
realistic samples. However, they provide limited control over the generation
process. Conditional GANs (cGANs) provide a mechanism to control the generation
process by conditioning the output on a user-defined input. Although training
GANs requires only unsupervised data, training cGANs requires labelled data
which can be very expensive to obtain. We propose a framework for
semi-supervised training of cGANs which utilizes sparse labels to learn the
conditional mapping, and at the same time leverages a large amount of
unsupervised data to learn the unconditional distribution. We demonstrate the
effectiveness of our method on multiple datasets and different conditional
tasks.
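Below is a minimal PyTorch sketch of this kind of objective, assuming a projection-style discriminator with an unconditional head used on all data and a conditional term applied only to the sparse labeled batch. The architecture, hinge losses, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, z_dim=64, n_classes=10, x_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, z, y):
        return self.net(z * self.embed(y))  # conditioning via a class embedding

class Discriminator(nn.Module):
    def __init__(self, n_classes=10, x_dim=784):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.uncond_head = nn.Linear(256, 1)       # unconditional real/fake score
        self.embed = nn.Embedding(n_classes, 256)  # projection-style conditioning

    def forward(self, x, y=None):
        h = self.trunk(x)
        out = self.uncond_head(h).squeeze(1)
        if y is not None:                          # add conditional term when a label exists
            out = out + (self.embed(y) * h).sum(dim=1)
        return out

def d_loss(G, D, x_lab, y_lab, x_unlab, n_classes=10, z_dim=64, lam=1.0):
    """Unconditional hinge loss on the large unlabeled pool plus a
    conditional hinge loss on the small labeled batch."""
    y_f = torch.randint(0, n_classes, (x_unlab.size(0),))
    x_f = G(torch.randn(x_unlab.size(0), z_dim), y_f).detach()
    loss_u = F.relu(1 - D(x_unlab)).mean() + F.relu(1 + D(x_f)).mean()
    y_fl = torch.randint(0, n_classes, (x_lab.size(0),))
    x_fl = G(torch.randn(x_lab.size(0), z_dim), y_fl).detach()
    loss_c = F.relu(1 - D(x_lab, y_lab)).mean() + F.relu(1 + D(x_fl, y_fl)).mean()
    return loss_u + lam * loss_c
```

The generator loss would analogously combine an unconditional term with a conditional term on sampled labels, so the sparse labels shape the conditional mapping while the unlabeled pool anchors the overall distribution.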
Related papers
- Leveraging Contaminated Datasets to Learn Clean-Data Distribution with
Purified Generative Adversarial Networks [15.932410447038697]
Generative adversarial networks (GANs) are known for their ability to capture the underlying distribution of training instances.
Existing GANs are almost all built on the assumption that the training dataset is clean.
In many real-world applications, this may not hold, that is, the training dataset may be contaminated by a proportion of undesired instances.
Two purified generative adversarial networks (PuriGANs) are developed, in which the discriminators are augmented with the capability to distinguish between target and contaminated instances.
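A rough sketch of that discriminator idea, assuming a hinge-loss setup where contaminated samples are pushed to the same side as generated ones (an illustration, not PuriGAN's exact objective):

```python
import torch
import torch.nn.functional as F

def purified_d_loss(D, x_target, x_contam, x_fake):
    """Hinge-style loss: target data counts as 'real'; both generated
    samples and known-contaminated samples are pushed to the 'fake' side,
    steering the generator toward the clean target distribution."""
    loss_real = F.relu(1 - D(x_target)).mean()
    loss_fake = F.relu(1 + D(x_fake)).mean()
    loss_contam = F.relu(1 + D(x_contam)).mean()
    return loss_real + loss_fake + loss_contam
```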
arXiv Detail & Related papers (2023-02-03T13:18:52Z)
- A Distinct Unsupervised Reference Model From The Environment Helps Continual Learning [5.332329421663282]
Open-Set Semi-Supervised Continual Learning (OSSCL) is a more realistic semi-supervised continual learning setting.
We present a model with two distinct parts: (i) the reference network captures general-purpose and task-agnostic knowledge in the environment by using a broad spectrum of unlabeled samples, and (ii) the learner network is designed to learn task-specific representations by exploiting supervised samples.
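As a schematic illustration of such a two-part design (the names and the frozen-feature choice are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class TwoPartModel(nn.Module):
    """A frozen, task-agnostic 'reference' network paired with a
    task-specific 'learner' head trained on supervised samples."""
    def __init__(self, reference: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.reference = reference               # general-purpose features
        for p in self.reference.parameters():    # kept fixed for the learner
            p.requires_grad = False
        self.learner = nn.Linear(feat_dim, n_classes)  # task-specific part

    def forward(self, x):
        with torch.no_grad():
            h = self.reference(x)
        return self.learner(h)
```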
arXiv Detail & Related papers (2023-01-11T15:05:36Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in the generation of high-quality images.
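A hedged sketch of what such gradual injection could look like, assuming a linear schedule that scales a class embedding from zero to full strength (the paper's actual mechanism may differ):

```python
import torch
import torch.nn as nn

class GradualCondEmbedding(nn.Module):
    def __init__(self, n_classes, dim):
        super().__init__()
        self.embed = nn.Embedding(n_classes, dim)

    def forward(self, y, t):
        # t in [0, 1]: 0 -> conditioning switched off, 1 -> full conditioning
        return t * self.embed(y)

def transition_weight(step, start=10_000, end=20_000):
    """Linear ramp from unconditional to conditional training."""
    return min(max((step - start) / (end - start), 0.0), 1.0)
```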
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions, achieving better performance than existing approaches on multiple datasets.
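One plausible form of such a regularizer, sketched here as a KL penalty between the frozen classifier's average prediction on generated samples and the uniform distribution (an assumption, not necessarily the paper's exact term):

```python
import torch
import torch.nn.functional as F

def class_balance_reg(classifier, x_fake, n_classes):
    probs = F.softmax(classifier(x_fake), dim=1)   # (batch, n_classes)
    avg = probs.mean(dim=0)                        # marginal class usage
    uniform = torch.full_like(avg, 1.0 / n_classes)
    # KL(avg || uniform), added to the generator loss as a balance penalty
    return F.kl_div(uniform.log(), avg, reduction="sum")
```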
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN.
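A minimal sketch of that setup, assuming a frozen pre-trained generator and a frozen classifier supplying the training signal (the function names and one-hot conditioning are illustrative):

```python
import torch
import torch.nn.functional as F

def encoder_step(encoder, frozen_G, frozen_clf, y, z_dim, n_classes=10):
    """Train only the encoder to produce class-consistent latents."""
    noise = torch.randn(y.size(0), z_dim)
    cond = F.one_hot(y, n_classes).float()
    z = encoder(torch.cat([cond, noise], dim=1))
    x = frozen_G(z)                       # generator weights stay fixed
    logits = frozen_clf(x)
    return F.cross_entropy(logits, y)     # encourages the requested class
```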
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the quality of generated data.
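A rough illustration of the local-coordinate-coding idea for latent sampling, assuming latents are formed as convex combinations of a few nearby learned anchors (a sketch of the concept, not LCCGAN's exact sampler):

```python
import torch

def lcc_latent(basis, k=3):
    """basis: (num_bases, dim) learned anchor vectors."""
    query = torch.randn(basis.size(1))
    # take the k anchors nearest a random query as the "local" bases
    dists = (basis - query).pow(2).sum(dim=1)
    idx = dists.topk(k, largest=False).indices
    w = torch.rand(k)
    w = w / w.sum()                        # convex combination weights
    return (w.unsqueeze(1) * basis[idx]).sum(dim=0)
```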
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, which makes their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs yields larger performance gains than increasing the network depth or width.
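A schematic of such a mixture, where each sample is drawn from one of several independently trained generators chosen at random (an illustration of the ensemble being compared, not the paper's training procedure):

```python
import torch

def sample_mixture(generators, n, z_dim):
    """Draw n samples, each from a uniformly chosen generator."""
    choices = torch.randint(0, len(generators), (n,))
    out = []
    for i in range(n):
        z = torch.randn(1, z_dim)
        out.append(generators[choices[i]](z))
    return torch.cat(out, dim=0)
```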
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- Learning to Count in the Crowd from Limited Labeled Data [109.2954525909007]
We focus on reducing annotation effort by learning to count in the crowd from a limited number of labeled samples.
Specifically, we propose a Gaussian Process-based iterative learning mechanism that involves estimation of pseudo-ground truth for the unlabeled data.
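A hedged sketch of that iterative loop, using scikit-learn's GP regressor on precomputed image features as a stand-in (the feature representation and the retraining step are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def pseudo_label_rounds(feats_lab, counts_lab, feats_unlab, rounds=3):
    """Alternate between fitting a GP on labeled counts and assigning
    pseudo ground-truth counts to the unlabeled pool."""
    X, y = feats_lab.copy(), counts_lab.copy()
    for _ in range(rounds):
        gp = GaussianProcessRegressor().fit(X, y)
        pseudo = gp.predict(feats_unlab)   # pseudo ground-truth counts
        X = np.concatenate([feats_lab, feats_unlab])
        y = np.concatenate([counts_lab, pseudo])
        # a counting model would be retrained on (X, y) here each round
    return X, y
```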
arXiv Detail & Related papers (2020-07-07T04:17:01Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
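A minimal sketch of the transfer-learning recipe implied here, assuming a sequential generator whose early layers are frozen while later layers are fine-tuned on the small target set (the paper's exact techniques may differ):

```python
import torch.nn as nn

def prepare_for_finetune(pretrained_G: nn.Sequential, n_frozen: int):
    """Freeze the first n_frozen layers, which carry general knowledge
    from large-scale pretraining; leave the rest trainable."""
    for i, layer in enumerate(pretrained_G):
        requires_grad = i >= n_frozen
        for p in layer.parameters():
            p.requires_grad = requires_grad
    return pretrained_G
```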
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.