Dynamically Masked Discriminator for Generative Adversarial Networks
- URL: http://arxiv.org/abs/2306.07716v3
- Date: Thu, 4 Jan 2024 13:58:50 GMT
- Title: Dynamically Masked Discriminator for Generative Adversarial Networks
- Authors: Wentian Zhang, Haozhe Liu, Bing Li, Jinheng Xie, Yawen Huang, Yuexiang
Li, Yefeng Zheng, Bernard Ghanem
- Abstract summary: Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real and generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
- Score: 71.33631511762782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training Generative Adversarial Networks (GANs) remains a challenging
problem. The discriminator trains the generator by learning the distribution of
real/generated data. However, the distribution of generated data changes
throughout the training process, which is difficult for the discriminator to
learn. In this paper, we propose a novel method for GANs from the viewpoint of
online continual learning. We observe that the discriminator model, trained on
historically generated data, often slows down its adaptation to changes in the
newly arriving generated data, which in turn degrades the quality of the
generated results. By treating the generated data seen during training as a
stream, we propose to detect whether the discriminator has slowed its learning
of new knowledge from the generated data, so that we can explicitly force the
discriminator to learn the new knowledge quickly. In particular, we propose a
new discriminator that automatically detects when its learning slows down and
then dynamically masks its features, so that it can adaptively learn the
time-varying distribution of generated data. Experimental results show that our
method outperforms state-of-the-art approaches.
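As a rough illustration of the mechanism described above, the following PyTorch
sketch tracks how quickly the discriminator's scores on newly generated batches
change and, when that change stalls, randomly masks part of the intermediate
features so the discriminator is pushed to learn from the new data. The
slowdown detector (an EMA of the mean fake score) and the channel-wise random
mask are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    class DynamicallyMaskedDiscriminator(nn.Module):
        """Toy discriminator that masks features when its learning appears to stall."""

        def __init__(self, in_ch=3, feat_ch=64, mask_ratio=0.3, ema=0.9, tol=1e-3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.Conv2d(feat_ch, feat_ch * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            )
            self.head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch * 2, 1)
            )
            self.mask_ratio = mask_ratio
            self.ema, self.tol = ema, tol
            self.register_buffer("score_ema", torch.zeros(1))
            self.register_buffer("prev_ema", torch.zeros(1))

        def _learning_stalled(self, fake_scores):
            # Hypothetical slowdown signal: an EMA of the mean score on generated
            # data; if it barely moves between updates, treat learning as stalled.
            self.prev_ema.copy_(self.score_ema)
            self.score_ema.mul_(self.ema).add_(
                (1.0 - self.ema) * fake_scores.mean().detach()
            )
            return (self.score_ema - self.prev_ema).abs().item() < self.tol

        def forward(self, x, is_generated_batch=False):
            h = self.features(x)
            scores = self.head(h)
            if self.training and is_generated_batch and self._learning_stalled(scores):
                # Zero a random fraction of feature channels and re-score, forcing
                # the discriminator to rely on (and re-learn) other cues.
                keep = (torch.rand(h.shape[1], device=h.device) > self.mask_ratio).float()
                scores = self.head(h * keep.view(1, -1, 1, 1))
            return scores

In training, one would call disc(fake_images, is_generated_batch=True) on
generated batches and disc(real_images) on real ones; the mask only fires when
the stall signal triggers, which is the "dynamic" part of the idea.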
Related papers
- Prompt Optimization via Adversarial In-Context Learning [51.18075178593142]
adv-ICL is implemented as a two-player game between a generator and a discriminator.
The generator tries to generate realistic enough output to fool the discriminator.
We show that adv-ICL results in significant improvements over state-of-the-art prompt optimization techniques.
arXiv Detail & Related papers (2023-12-05T09:44:45Z) - Generative Adversarial Networks Unlearning [13.342749941357152]
Machine unlearning has emerged as a solution to erase training data from trained machine learning models.
Research on Generative Adversarial Networks (GANs) is limited due to their unique architecture, including a generator and a discriminator.
We propose a cascaded unlearning approach for both item and class unlearning within GAN models, in which the unlearning and learning processes run in a cascaded manner.
arXiv Detail & Related papers (2023-08-19T02:21:21Z) - Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z) - Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data (a rough sketch of this idea appears after the list).
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Discriminative-Generative Representation Learning for One-Class Anomaly
Detection [22.500931323372303]
We propose a self-supervised learning framework combining generative methods and discriminative methods.
Our method significantly outperforms several state-of-the-art methods on multiple benchmark datasets.
arXiv Detail & Related papers (2021-07-27T11:46:15Z) - Data-Efficient Instance Generation from Instance Discrimination [40.71055888512495]
We propose a data-efficient Instance Generation (InsGen) method based on instance discrimination.
arXiv Detail & Related papers (2021-06-08T17:52:59Z) - FairCVtest Demo: Understanding Bias in Multimodal Learning with a
Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning that eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z) - Learn distributed GAN with Temporary Discriminators [16.33621293935067]
We propose a method for training a distributed GAN with sequential temporary discriminators.
We show that our loss-function design learns the correct distribution, with provable guarantees.
arXiv Detail & Related papers (2020-07-17T20:45:57Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
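The augmentation-aware self-supervised discriminator mentioned above (the
2022-05-31 entry) can be sketched roughly as follows: an auxiliary head on the
discriminator is trained to predict which augmentation was applied to its
input. The augmentation set (four 90-degree rotations on square images), the
tiny feature extractor, and the loss weighting are simplifying assumptions, not
that paper's actual setup.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AugAwareDiscriminator(nn.Module):
        """Toy discriminator with an auxiliary head predicting the augmentation id."""

        def __init__(self, in_ch=3, feat_ch=64, n_augs=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 4, 2, 1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.adv_head = nn.Linear(feat_ch, 1)       # real/fake score
            self.aug_head = nn.Linear(feat_ch, n_augs)  # which augmentation was applied

        def forward(self, x):
            h = self.features(x)
            return self.adv_head(h), self.aug_head(h)

    def aug_aware_loss(disc, imgs, adv_loss_fn, lambda_aug=1.0):
        # Apply a random 90-degree rotation (id in {0, 1, 2, 3}) to each square
        # image and add a cross-entropy term for predicting that rotation id.
        aug_ids = torch.randint(0, 4, (imgs.size(0),), device=imgs.device)
        rotated = torch.stack(
            [torch.rot90(img, int(k), dims=(1, 2)) for img, k in zip(imgs, aug_ids)]
        )
        adv_scores, aug_logits = disc(rotated)
        return adv_loss_fn(adv_scores) + lambda_aug * F.cross_entropy(aug_logits, aug_ids)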
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.