CDGAN: Cyclic Discriminative Generative Adversarial Networks for
Image-to-Image Transformation
- URL: http://arxiv.org/abs/2001.05489v2
- Date: Sat, 27 Nov 2021 02:09:41 GMT
- Title: CDGAN: Cyclic Discriminative Generative Adversarial Networks for
Image-to-Image Transformation
- Authors: Kancharagunta Kishan Babu, Shiv Ram Dubey
- Abstract summary: We introduce a new Image-to-Image Transformation network named Cyclic Discriminative Generative Adversarial Networks (CDGAN).
The proposed CDGAN generates higher-quality, more realistic images by incorporating additional discriminator networks for the cycled images.
The quantitative and qualitative results are analyzed and compared with the state-of-the-art methods.
- Score: 17.205434613674104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have opened a new
direction for tackling the image-to-image transformation problem. Different
GANs use generator and discriminator networks with different losses in the
objective function, yet a gap remains in both the quality of the generated
images and their closeness to the ground-truth images. In this work, we
introduce a new Image-to-Image Transformation network named Cyclic
Discriminative Generative Adversarial Networks (CDGAN) that fills the
above-mentioned gaps. The proposed CDGAN generates higher-quality and more
realistic images by adding discriminator networks for the cycled images on
top of the original CycleGAN architecture. CDGAN is evaluated over three
image-to-image transformation datasets; the quantitative and qualitative
results are analyzed and compared with the state-of-the-art methods, which
CDGAN outperforms on all three benchmark datasets. The code is available
at https://github.com/KishanKancharagunta/CDGAN.
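To make the cyclic discriminative idea above concrete, the following is a
minimal PyTorch-style sketch of one direction (A -> B -> A) of such a
generator objective. The module names (G_AB, G_BA, D_B, D_cyc_A) and the
weighting lambda_cyc are illustrative assumptions for this sketch, not the
authors' implementation; the linked repository contains the actual code.

```python
# Hedged sketch: a CycleGAN-style generator loss plus an additional
# adversarial term from a discriminator on the cycled image, mirroring
# the CDGAN idea. All names and weights are assumptions, not the
# official code.
import torch
import torch.nn.functional as F

def cdgan_generator_loss(G_AB, G_BA, D_B, D_cyc_A, real_A, lambda_cyc=10.0):
    """One direction (A -> B -> A) of the generator objective."""
    fake_B = G_AB(real_A)    # translated image
    cycled_A = G_BA(fake_B)  # cycle-reconstructed image

    # Standard least-squares adversarial loss on the translated image,
    # as in CycleGAN: the generator tries to make D_B output "real" (1).
    pred_fake = D_B(fake_B)
    adv_translated = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # The cyclic discriminative addition: a second adversarial loss in
    # which an extra discriminator judges the cycled image, so that
    # reconstructions look realistic rather than merely pixel-close.
    pred_cycled = D_cyc_A(cycled_A)
    adv_cycled = F.mse_loss(pred_cycled, torch.ones_like(pred_cycled))

    # Usual L1 cycle-consistency term.
    cycle = F.l1_loss(cycled_A, real_A)

    return adv_translated + adv_cycled + lambda_cyc * cycle
```

The symmetric B -> A -> B direction and the discriminator updates follow
the same pattern as in CycleGAN, with one extra discriminator per domain.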
Related papers
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that enables generating highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- SRTransGAN: Image Super-Resolution using Transformer based Generative Adversarial Network [16.243363392717434]
We propose a transformer-based encoder-decoder network as a generator to produce 2x and 4x super-resolved images.
The proposed SRTransGAN outperforms the existing methods by 4.38% on average in terms of PSNR and SSIM scores.
arXiv Detail & Related papers (2023-12-04T16:22:39Z)
- DCN-T: Dual Context Network with Transformer for Hyperspectral Image Classification [109.09061514799413]
Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions.
We propose a tri-spectral image generation pipeline that transforms HSI into high-quality tri-spectral images.
Our proposed method outperforms state-of-the-art methods for HSI classification.
arXiv Detail & Related papers (2023-04-19T18:32:52Z)
- Guided Image-to-Image Translation by Discriminator-Generator Communication [71.86347329356244]
The goal of image-to-image (I2I) translation is to transfer an image from a source domain to a target domain.
One major branch of this research formulates I2I translation based on Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2023-03-07T02:29:36Z)
- D3T-GAN: Data-Dependent Domain Transfer GANs for Few-shot Image Generation [17.20913584422917]
Few-shot image generation aims at generating realistic images through training a GAN model given few samples.
A typical solution for few-shot generation is to transfer a well-trained GAN model from a data-rich source domain to the data-deficient target domain.
We propose a novel self-supervised transfer scheme termed D3T-GAN, addressing the cross-domain GANs transfer in few-shot image generation.
arXiv Detail & Related papers (2022-05-12T11:32:39Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Progressively Unfreezing Perceptual GAN [28.330940021951438]
Generative adversarial networks (GANs) are widely used in image generation tasks, yet the generated images usually lack texture details.
We propose a general framework, called Progressively Unfreezing Perceptual GAN (PUPGAN), which can generate images with fine texture details.
arXiv Detail & Related papers (2020-06-18T03:12:41Z)
- A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture provides detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of standard distribution and image-quality metrics.
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
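To illustrate the per-pixel feedback idea from the last entry, here is a
minimal, hedged PyTorch sketch of a U-Net-style discriminator that returns
both an image-level score (from the encoder bottleneck) and a per-pixel
score map (from the decoder). Layer sizes and names are illustrative
assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a U-Net-style GAN discriminator: the encoder path
# yields a global real/fake score, while the decoder path yields dense
# per-pixel real/fake feedback. Assumes inputs with H, W divisible by 4.
import torch
import torch.nn as nn

class UNetDiscriminator(nn.Module):
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))      # H/2
        self.enc2 = nn.Sequential(
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2))   # H/4
        # Image-level head on the bottleneck features.
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 2, 1))
        # Decoder path back to full resolution with a skip connection.
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.pixel_head = nn.ConvTranspose2d(base * 2, 1, 4, 2, 1)

    def forward(self, x):
        e1 = self.enc1(x)                 # (N, base,   H/2, W/2)
        e2 = self.enc2(e1)                # (N, base*2, H/4, W/4)
        g = self.global_head(e2)          # (N, 1) global real/fake score
        d1 = self.dec1(e2)                # (N, base,   H/2, W/2)
        p = self.pixel_head(torch.cat([d1, e1], dim=1))  # (N, 1, H, W)
        return g, p
```

Training both heads with the usual real/fake targets gives the generator
dense spatial feedback in addition to a single global decision.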