Improving Model Compatibility of Generative Adversarial Networks by
Boundary Calibration
- URL: http://arxiv.org/abs/2111.02316v1
- Date: Wed, 3 Nov 2021 16:08:09 GMT
- Title: Improving Model Compatibility of Generative Adversarial Networks by
Boundary Calibration
- Authors: Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
- Abstract summary: Boundary-Calibration GANs (BCGANs) are proposed to improve the model compatibility of GANs.
BCGANs generate images as realistic as those of the original GANs while achieving better model compatibility.
- Score: 24.28407308818025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are a powerful family of models that
learn an underlying distribution to generate synthetic data. Many existing
studies of GANs focus on improving the realness of the generated image data for
visual applications; few are concerned with improving the quality of the
generated data for training other classifiers -- a task known as the model
compatibility problem. As a consequence, existing GANs often prefer generating
`easier' synthetic data that are far from the boundaries of the classifiers and
refrain from generating near-boundary data, which are known to play an
important role in training the classifiers. To improve GANs in terms of model
compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage
the boundary information from a set of pre-trained classifiers using the
original data. In particular, we introduce an auxiliary Boundary-Calibration
loss (BC-loss) into the generator of the GAN to match the statistics between the
posterior distributions of original data and generated data with respect to the
boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and
can be easily coupled with different GAN variants to improve their model
compatibility. Experimental results demonstrate that BCGANs not only generate
realistic images like the original GANs but also achieve superior model
compatibility compared to the original GANs.
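The BC-loss is only described at a high level above; the following is a minimal, hypothetical sketch of how such an auxiliary term could be attached to a generator objective, assuming the matched statistics are the batch-mean softmax posteriors of a frozen pre-trained classifier compared with a squared-error distance (the paper's exact statistic, distance, and weighting may differ).

```python
# Hypothetical sketch of an auxiliary Boundary-Calibration (BC) loss for a GAN
# generator. Assumptions: the pre-trained classifier returns logits, and
# "matching the statistics" is approximated by an L2 distance between the
# batch-mean softmax posteriors of real and generated samples.
import torch


def bc_loss(classifier: torch.nn.Module,
            real_batch: torch.Tensor,
            fake_batch: torch.Tensor) -> torch.Tensor:
    """Auxiliary Boundary-Calibration penalty (illustrative form only)."""
    with torch.no_grad():  # real-data statistics need no gradient
        p_real = torch.softmax(classifier(real_batch), dim=1).mean(dim=0)
    # Gradients flow through the classifier to the generator; the classifier
    # itself stays frozen because its parameters are never updated.
    p_fake = torch.softmax(classifier(fake_batch), dim=1).mean(dim=0)
    return ((p_real - p_fake) ** 2).sum()


def generator_loss(generator, discriminator, classifier,
                   real_batch, z, lambda_bc=1.0):
    """Ordinary non-saturating GAN generator loss plus the weighted BC term.

    generator, discriminator, classifier, real_batch, z and lambda_bc are
    placeholders; only the loss composition is the point here.
    """
    fake_batch = generator(z)
    adv_loss = -torch.log(
        torch.sigmoid(discriminator(fake_batch)) + 1e-8).mean()
    return adv_loss + lambda_bc * bc_loss(classifier, real_batch, fake_batch)
```

Because the penalty depends only on the classifier's posteriors over the generated batch, it can be added on top of the generator loss of different GAN variants without modifying the discriminator, which is consistent with the abstract's claim that the BC-loss can be easily coupled with different GANs.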
Related papers
- Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming Generative Adversarial Networks [3.3903891679981593]
We present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based synthetic data generator specifically designed for the healthcare domain.
Our results demonstrate that Bt-GAN achieves SOTA accuracy while significantly improving fairness and minimizing bias.
arXiv Detail & Related papers (2024-04-21T12:16:38Z) - Fake It Till Make It: Federated Learning with Consensus-Oriented
Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z) - SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z) - Improving Out-of-Distribution Robustness of Classifiers via Generative
Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z) - Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z) - Generative Model Based Noise Robust Training for Unsupervised Domain
Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z) - Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited
Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
arXiv Detail & Related papers (2021-11-12T18:13:45Z) - Improving the quality of generative models through Smirnov
transformation [1.3492000366723798]
We propose a novel activation function to be used as output of the generator agent.
It is based on the Smirnov probabilistic transformation and it is specifically designed to improve the quality of the generated data.
arXiv Detail & Related papers (2021-10-29T17:01:06Z) - On the Fairness of Generative Adversarial Networks (GANs) [1.061960673667643]
Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years.
In this paper, we analyze and highlight fairness concerns of GANs model.
arXiv Detail & Related papers (2021-03-01T12:25:01Z) - HGAN: Hybrid Generative Adversarial Network [25.940501417539416]
We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to simple GAN training approach.
arXiv Detail & Related papers (2021-02-07T03:54:12Z) - Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)