MSGDD-cGAN: Multi-Scale Gradients Dual Discriminator Conditional
Generative Adversarial Network
- URL: http://arxiv.org/abs/2109.05614v1
- Date: Sun, 12 Sep 2021 21:08:37 GMT
- Title: MSGDD-cGAN: Multi-Scale Gradients Dual Discriminator Conditional
Generative Adversarial Network
- Authors: Mohammadreza Naderi, Zahra Nabizadeh, Nader Karimi, Shahram Shirani,
Shadrokh Samavi
- Abstract summary: MSGDD-cGAN is proposed to stabilize the performance of Conditional Generative Adversarial Networks (cGANs).
Our model shows a 3.18% increase in F1 score compared with the pix2pix version of cGANs.
- Score: 14.08122854653421
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conditional Generative Adversarial Networks (cGANs) have been used in many
image processing tasks. However, they still struggle to balance conditioning the
output on the input against generating output that follows the desired
distribution defined by the corresponding ground truth. Traditional cGANs, like
most conventional GANs, suffer from vanishing gradients backpropagated from the
discriminator to the generator. Moreover, these gradient problems make
traditional cGANs sensitive to architectural changes, so balancing the
architecture of a cGAN is almost impossible. Recently, MSG-GAN was proposed to
stabilize GAN training by adding multiple connections between the generator and
the discriminator. In this work, we propose a method called MSGDD-cGAN, which
first stabilizes cGAN training using gradient flow over these multiple
connections. Second, the proposed network architecture balances the output's
correlation with the input against its fit to the target distribution. This
balance is achieved through the proposed dual-discrimination procedure. We
tested our model on segmentation of fetal ultrasound images. Our model shows a
3.18% increase in F1 score compared with the pix2pix version of cGANs.
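The dual-discrimination idea above (one discriminator tying the output to the input, another tying it to the target distribution) can be sketched as a combined generator objective. This is a minimal numpy illustration under stated assumptions: the function names, the logit inputs, and the `alpha` balancing weight are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def bce(logits, labels):
    """Binary cross-entropy on raw sigmoid logits."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # avoid log(0)
    return float(-np.mean(labels * np.log(p + eps)
                          + (1 - labels) * np.log(1 - p + eps)))

def generator_loss(d_cond_fake_logits, d_dist_fake_logits, alpha=0.5):
    """Combined generator objective for a dual-discriminator cGAN.

    d_cond_fake_logits: logits from the conditioning discriminator on
        (input, generated output) pairs -- ties the output to the input.
    d_dist_fake_logits: logits from the distribution discriminator on
        generated outputs alone -- ties the output to the target distribution.
    alpha: balancing weight between the two terms (an illustrative knob;
        the paper's exact weighting scheme may differ).
    """
    ones = np.ones_like(d_cond_fake_logits)  # generator wants both to say "real"
    return (alpha * bce(d_cond_fake_logits, ones)
            + (1 - alpha) * bce(d_dist_fake_logits, ones))
```

For example, when both discriminators are undecided (all logits zero, so p = 0.5), `generator_loss(np.zeros(4), np.zeros(4))` evaluates to -log(0.5) ≈ 0.693, and the loss decreases as either discriminator is fooled more convincingly.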
Related papers
- DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation in terms of performance.
arXiv Detail & Related papers (2023-05-24T07:59:44Z)
- EGC: Image Generation and Classification via a Diffusion Energy-Based Model [59.591755258395594]
This work introduces an energy-based classifier and generator, namely EGC, which can achieve superior performance in both tasks using a single neural network.
EGC achieves competitive generation results compared with state-of-the-art approaches on ImageNet-1k, CelebA-HQ and LSUN Church.
This work represents the first successful attempt to simultaneously excel in both tasks using a single set of network parameters.
arXiv Detail & Related papers (2023-04-04T17:59:14Z)
- Latent Multi-Relation Reasoning for GAN-Prior based Image Super-Resolution [61.65012981435095]
LAREN is a graph-based disentanglement that constructs a superior disentangled latent space via hierarchical multi-relation reasoning.
We show that LAREN achieves superior large-factor image SR and outperforms the state-of-the-art consistently across multiple benchmarks.
arXiv Detail & Related papers (2022-08-04T19:45:21Z)
- A Unified View of cGANs with and without Classifiers [24.28407308818025]
Conditional Generative Adversarial Networks (cGANs) are implicit generative models which allow sampling from class-conditional distributions.
Some representative cGANs avoid the shortcoming and reach state-of-the-art performance without having classifiers.
In this work, we demonstrate that classifiers can be properly leveraged to improve cGANs.
arXiv Detail & Related papers (2021-11-01T15:36:33Z)
- Are conditional GANs explicitly conditional? [0.0]
This paper proposes two contributions for conditional Generative Adversarial Networks (cGANs).
The first main contribution is an analysis of cGANs to show that they are not explicitly conditional.
The second contribution is a new method, called acontrario, that explicitly models conditionality for both parts of the adversarial architecture.
arXiv Detail & Related papers (2021-06-28T22:49:27Z)
- Combining Transformer Generators with Convolutional Discriminators [9.83490307808789]
The recently proposed TransGAN is the first GAN built solely on transformer-based architectures.
TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism.
We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results.
arXiv Detail & Related papers (2021-05-21T07:56:59Z)
- Rethinking conditional GAN training: An approach using geometrically structured latent manifolds [58.07468272236356]
Conditional GANs (cGAN) suffer from critical drawbacks such as the lack of diversity in generated outputs.
We propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN.
arXiv Detail & Related papers (2020-11-25T22:54:11Z)
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation [130.30465659190773]
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computation and storage costs impede deployment on mobile devices.
We introduce a novel GAN compression method, termed DMAD, by proposing a Differentiable Mask and a co-Attention Distillation.
Experiments show DMAD can reduce the Multiply Accumulate Operations (MACs) of CycleGAN by 13x and that of Pix2Pix by 4x while retaining a comparable performance against the full model.
arXiv Detail & Related papers (2020-11-17T02:39:19Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.