Data-Efficient Instance Generation from Instance Discrimination
- URL: http://arxiv.org/abs/2106.04566v1
- Date: Tue, 8 Jun 2021 17:52:59 GMT
- Title: Data-Efficient Instance Generation from Instance Discrimination
- Authors: Ceyuan Yang, Yujun Shen, Yinghao Xu, Bolei Zhou
- Abstract summary: We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
- Score: 40.71055888512495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have significantly advanced image
synthesis; however, synthesis quality drops sharply when only a limited amount of
training data is available. To improve the data efficiency of GAN training, prior
work typically employs data augmentation to mitigate the overfitting of the
discriminator, yet still trains the discriminator on a bi-classification (i.e.,
real vs. fake) task. In this work, we propose a data-efficient Instance
Generation (InsGen) method based on instance discrimination. Concretely,
besides differentiating the real domain from the fake domain, the discriminator
is required to distinguish every individual image, regardless of whether it comes
from the training set or from the generator. In this way, the discriminator can
benefit from the infinite supply of synthesized samples for training, alleviating
the overfitting problem caused by insufficient training data. A noise perturbation
strategy is further introduced to improve its discriminative power. Meanwhile, the
instance discrimination capability learned by the discriminator is in turn
exploited to encourage the generator toward diverse generation. Extensive
experiments demonstrate the effectiveness of our method on a variety of datasets
and training settings. Notably, in the setting of 2K training images from the
FFHQ dataset, we outperform the state-of-the-art approach with a 23.5% FID
improvement.
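The core mechanism described above, a discriminator trained to tell individual images apart, with noise-perturbed copies of a feature serving as its positive pair, can be sketched as a generic InfoNCE contrastive loss. The sketch below is a minimal NumPy illustration, not the authors' implementation: the function names, feature dimensionality, noise scale, and temperature are all assumptions for the example.

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """InfoNCE loss: queries[i] should match keys[i] among all keys."""
    # L2-normalize embeddings so similarity is cosine similarity
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: query i matches key i
    return -np.mean(np.diag(log_probs))

def perturb(features, sigma=0.05, rng=None):
    """Noise perturbation: builds a second 'view' of each feature vector."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return features + sigma * rng.standard_normal(features.shape)

rng = np.random.default_rng(42)
feats = rng.standard_normal((8, 16))  # stand-in discriminator features, 8 images
loss = info_nce_loss(feats, perturb(feats, rng=rng))
print(float(loss))
```

In the paper's setting, real and generated images would each contribute instances to such a loss, which is why the discriminator can draw on an unlimited stream of synthesized samples rather than only the small real training set.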
Related papers
- Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real/generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z) - Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed as DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z) - Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z) - Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z) - MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators [47.19216713803009]
We propose a framework of generative adversarial networks with multiple discriminators.
We guide each discriminator to have expertise in a subset of the entire data.
Despite the use of multiple discriminators, the backbone networks are shared across the discriminators.
arXiv Detail & Related papers (2021-07-15T11:35:08Z) - Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.