Learn distributed GAN with Temporary Discriminators
- URL: http://arxiv.org/abs/2007.09221v1
- Date: Fri, 17 Jul 2020 20:45:57 GMT
- Title: Learn distributed GAN with Temporary Discriminators
- Authors: Hui Qu, Yikai Zhang, Qi Chang, Zhennan Yan, Chao Chen, Dimitris
Metaxas
- Abstract summary: We propose a method for training a distributed GAN with sequential temporary discriminators.
We show that our loss function design indeed learns the correct distribution, with provable guarantees.
- Score: 16.33621293935067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a method for training a distributed GAN with
sequential temporary discriminators. Our proposed method tackles the challenge
of training a GAN in the federated learning setting: how can the generator be
updated with a flow of temporary discriminators? We apply our proposed method
to learn a self-adaptive generator with a series of local discriminators from
multiple data centers. We show that our loss function design indeed learns the
correct distribution, with provable guarantees. Empirical experiments show that
our approach can generate synthetic data that is practical for real-world
applications such as training a segmentation model.
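Below is a minimal sketch of the sequential scheme the abstract describes, assuming a PyTorch-style setup: the generator persists across data centers, while each center's discriminator is created fresh for its visit and discarded afterwards. The standard non-saturating BCE loss is used as a stand-in for the paper's actual loss design, and all names (`Generator`, `train_with_temporary_discriminators`, etc.) are illustrative, not the authors' code.

```python
# Minimal sketch of sequential training with temporary local discriminators.
# Illustrative only: the paper's actual loss design differs; BCE stands in here.
import torch
import torch.nn as nn

LATENT_DIM = 64

class Generator(nn.Module):
    def __init__(self, data_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, data_dim))

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, data_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x)

def train_with_temporary_discriminators(generator, local_loaders):
    """Visit each data center in sequence; only the generator persists."""
    bce = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    for loader in local_loaders:          # one temporary D per data center
        disc = Discriminator()
        d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
        for real in loader:
            b = real.size(0)
            z = torch.randn(b, LATENT_DIM)
            fake = generator(z)
            # Discriminator step: separate real from generated samples.
            d_loss = (bce(disc(real), torch.ones(b, 1))
                      + bce(disc(fake.detach()), torch.zeros(b, 1)))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
            # Generator step against the current temporary discriminator.
            g_loss = bce(disc(generator(z)), torch.ones(b, 1))
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        del disc, d_opt                   # the discriminator is temporary
    return generator
```

In the federated setting of the paper, each loader would live at a different data center: only the generator (or its gradients) crosses the network, and a center's discriminator never leaves it, which is what makes the discriminators temporary.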
Related papers
- Discriminator Guidance for Autoregressive Diffusion Models [12.139222986297264]
We introduce discriminator guidance in the setting of Autoregressive Diffusion Models.
We derive ways of using a discriminator together with a pretrained generative model in the discrete case.
arXiv Detail & Related papers (2023-10-24T13:14:22Z)
- Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real and generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z)
- Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts [10.647970046084916]
We argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes.
We show that we can combine discriminators with image reconstruction costs (via decoders) to further improve the model.
Our method is simple and improves the test-time performance of pre-trained GANs.
arXiv Detail & Related papers (2021-08-26T17:31:46Z)
- MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators [47.19216713803009]
We propose a framework of generative adversarial networks with multiple discriminators.
We guide each discriminator to have expertise in a subset of the entire data.
Despite the use of multiple discriminators, the backbone networks are shared across them; a minimal sketch of this shared-backbone design appears after this list.
arXiv Detail & Related papers (2021-07-15T11:35:08Z)
- Exploring Dropout Discriminator for Domain Adaptation [27.19677042654432]
Adaptation of a classifier to new domains is one of the challenging problems in machine learning.
We propose a curriculum-based dropout discriminator that gradually increases the variance of the sample-based distribution.
An ensemble of discriminators helps the model to learn the data distribution efficiently.
arXiv Detail & Related papers (2021-07-09T06:11:34Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers fully decentralized computation.
We theoretically analyze the method's convergence rate in the strongly monotone, monotone, and non-monotone settings; a single-machine sketch of the extra-gradient step appears after this list.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
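For the MCL-GAN entry above, here is a minimal sketch of a shared-backbone, multi-head discriminator, assuming PyTorch. The winner-take-all assignment of real samples to heads is a simple stand-in heuristic, not necessarily the paper's exact expert-assignment rule.

```python
# Sketch: several discriminator heads over one shared backbone (MCL-GAN-style).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadDiscriminator(nn.Module):
    def __init__(self, data_dim=2, num_heads=4):
        super().__init__()
        self.backbone = nn.Sequential(    # shared across all heads
            nn.Linear(data_dim, 128), nn.LeakyReLU(0.2))
        self.heads = nn.ModuleList(
            [nn.Linear(128, 1) for _ in range(num_heads)])

    def forward(self, x):
        h = self.backbone(x)
        return torch.cat([head(h) for head in self.heads], dim=1)  # (B, K) logits

def d_real_loss(disc, real):
    """Train only the best-scoring head on each real sample, so heads
    specialize in subsets of the data (winner-take-all assignment)."""
    scores = disc(real)                            # (B, K)
    expert = scores.argmax(dim=1, keepdim=True)    # (B, 1) head indices
    chosen = scores.gather(1, expert)
    return F.binary_cross_entropy_with_logits(chosen, torch.ones_like(chosen))

def d_fake_loss(disc, fake):
    """Every head learns to reject generated samples."""
    scores = disc(fake.detach())
    return F.binary_cross_entropy_with_logits(scores, torch.zeros_like(scores))
```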
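And for the decentralized extra-gradient entry, a single-machine sketch of the extra-gradient step itself (the paper adds local steps and decentralized communication on top): extrapolate to a midpoint, then update with the gradient taken at that midpoint. The bilinear saddle example is a standard illustration, not taken from the paper.

```python
# Sketch: extra-gradient for a min-max problem min_x max_y f(x, y).
import torch

def extragradient_step(x, y, f, lr=0.1):
    """Extrapolate to a midpoint, then update using the midpoint's gradient."""
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    x_half, y_half = x - lr * gx, y + lr * gy          # extrapolation step
    gx, gy = torch.autograd.grad(f(x_half, y_half), (x_half, y_half))
    x_new = (x - lr * gx).detach().requires_grad_()    # update with midpoint grad
    y_new = (y + lr * gy).detach().requires_grad_()
    return x_new, y_new

# On the bilinear saddle f(x, y) = x * y, plain gradient descent-ascent
# cycles around the solution, while extra-gradient converges to (0, 0).
x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(1.0, requires_grad=True)
for _ in range(2000):
    x, y = extragradient_step(x, y, lambda a, b: a * b)
print(float(x), float(y))  # both near 0
```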
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.