Guided Image-to-Image Translation by Discriminator-Generator
Communication
- URL: http://arxiv.org/abs/2303.03598v1
- Date: Tue, 7 Mar 2023 02:29:36 GMT
- Title: Guided Image-to-Image Translation by Discriminator-Generator
Communication
- Authors: Yuanjiang Cao, Lina Yao, Le Pan, Quan Z. Sheng, and Xiaojun Chang
- Abstract summary: The goal of image-to-image (I2I) translation is to transfer an image from a source domain to a target domain.
One major branch of this research formulates I2I translation based on Generative Adversarial Networks (GANs).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of Image-to-image (I2I) translation is to transfer an image from a
source domain to a target domain, which has recently drawn increasing
attention. One major branch of this research is to formulate I2I translation
based on Generative Adversarial Network (GAN). As a zero-sum game, GAN can be
reformulated as a Partially Observable Markov Decision Process (POMDP) for
generators, where generators cannot access full state information of their
environments. This formulation illustrates the information insufficiency in the
GAN training. To mitigate this problem, we propose to add a communication
channel between discriminators and generators. We explore multiple architecture
designs to integrate the communication mechanism into the I2I translation
framework. To validate the performance of the proposed approach, we have
conducted extensive experiments on various benchmark datasets. The experimental
results confirm the superiority of our proposed method.
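The core idea above can be illustrated with a toy sketch: the discriminator returns not only a real/fake score but also a feedback message that the generator conditions on in its next pass. This is a minimal illustration of the communication-channel idea, not the paper's actual architecture; the message size, feature extraction, and fusion rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
MSG_DIM = 8          # size of the discriminator-to-generator message (assumed)

def discriminator(image):
    """Toy discriminator: returns a real/fake score plus a feedback
    message for the generator -- the communication channel."""
    features = np.tanh(image)                        # crude feature extraction
    score = 1.0 / (1.0 + np.exp(-features.mean()))   # sigmoid of pooled features
    message = features[:MSG_DIM]                     # state information shared back
    return score, message

def generator(noise, message=None):
    """Toy generator: conditions its output on the received message,
    partially observing the discriminator's internal state."""
    out = np.tanh(noise)
    if message is not None:
        out = out.copy()
        out[:MSG_DIM] += 0.1 * message               # inject the feedback signal
    return out

noise = rng.normal(size=16)
fake = generator(noise)                 # first pass: no message yet
score, msg = discriminator(fake)
refined = generator(noise, msg)         # second pass: uses the feedback
```

In a real GAN both networks would be trained adversarially; the sketch only shows how the extra message augments the generator's otherwise partial view of its environment.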
Related papers
- I2I-Galip: Unsupervised Medical Image Translation Using Generative Adversarial CLIP
Unpaired image-to-image translation is a challenging task due to the absence of paired examples.
We propose a new image-to-image translation framework named Image-to-Image-Generative-Adversarial-CLIP (I2I-Galip).
arXiv Detail & Related papers (2024-09-19T01:44:50Z)
- SCONE-GAN: Semantic Contrastive learning-based Generative Adversarial Network for an end-to-end image translation
SCONE-GAN is shown to be effective for learning to generate realistic and diverse scenery images.
To encourage more realistic and diverse image generation, we introduce a style reference image.
We validate the proposed algorithm for image-to-image translation and stylizing outdoor images.
arXiv Detail & Related papers (2023-11-07T10:29:16Z)
- IR-GAN: Image Manipulation with Linguistic Instruction by Increment Reasoning
The Increment Reasoning Generative Adversarial Network (IR-GAN) aims to reason about the consistency between the visual increment in images and the semantic increment in instructions.
First, we introduce word-level and instruction-level instruction encoders to learn the user's intention from history-correlated instructions as the semantic increment.
Second, we embed the representation of the semantic increment into that of the source image to generate the target image, where the source image serves as a referring auxiliary.
arXiv Detail & Related papers (2022-04-02T07:48:39Z)
- Gated SwitchGAN for multi-domain facial image translation
We propose a switch generative adversarial network (SwitchGAN) with a more adaptive discriminator structure and a matched generator to perform delicate image translation.
A feature-switching operation is proposed to achieve feature selection and fusion in our conditional modules.
Experiments on the Morph, RaFD and CelebA databases visually and quantitatively show that our extended SwitchGAN can achieve better translation results than StarGAN, AttGAN and STGAN.
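A feature-switching operation of this kind can be sketched as a condition-driven gate that selects and fuses two feature maps. This is a hypothetical stand-in for SwitchGAN's conditional module, with the gating function and fusion rule assumed for illustration.

```python
import numpy as np

def feature_switch(feat_a, feat_b, condition):
    """Hypothetical feature-switching operation: a sigmoid gate, driven
    by a domain condition, selects and fuses two feature maps."""
    gate = 1.0 / (1.0 + np.exp(-condition))   # gate in (0, 1)
    return gate * feat_a + (1.0 - gate) * feat_b

a = np.ones((4, 4))
b = np.zeros((4, 4))
balanced = feature_switch(a, b, condition=0.0)    # gate = 0.5: equal blend
selected = feature_switch(a, b, condition=10.0)   # gate near 1: mostly feat_a
```

A learned version would produce the gate from the target-domain label via a small network rather than a fixed scalar.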
arXiv Detail & Related papers (2021-11-28T10:24:43Z)
- Unsupervised Image-to-Image Translation via Pre-trained StyleGAN2 Network
We propose a new I2I translation method that generates a new model in the target domain via a series of model transformations.
By feeding the latent vector into the generated model, we can perform I2I translation between the source domain and target domain.
arXiv Detail & Related papers (2020-10-12T13:51:40Z)
- MI^2GAN: Generative Adversarial Network for Medical Image Domain Adaptation using Mutual Information Constraint
We propose a novel GAN to maintain image-contents during cross-domain I2I translation.
Particularly, we disentangle the content features from domain information for both the source and translated images.
The proposed MI^2GAN is evaluated on two tasks: polyp segmentation using colonoscopic images, and segmentation of the optic disc and cup in fundus images.
arXiv Detail & Related papers (2020-07-22T03:19:54Z)
- A U-Net Based Discriminator for Generative Adversarial Networks
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture provides detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of the standard distribution and image quality metrics.
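The dual output described above can be sketched as a discriminator that returns both a global score from its encoder bottleneck and a per-pixel decision map from its decoder. This toy version uses average pooling, nearest-neighbour upsampling, and an additive skip connection as assumed stand-ins for the learned U-Net layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unet_discriminator(img):
    """Toy U-Net-style discriminator: a global real/fake score from the
    encoder bottleneck plus a per-pixel decision map from the decoder."""
    h, w = img.shape
    # encoder: 2x2 average pooling
    enc = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    global_score = sigmoid(enc.mean())               # image-level decision
    # decoder: nearest-neighbour upsampling + skip connection from the input
    dec = np.repeat(np.repeat(enc, 2, axis=0), 2, axis=1) + img
    per_pixel = sigmoid(dec)                         # pixel-level decisions
    return global_score, per_pixel

img = np.random.default_rng(1).normal(size=(8, 8))
g, p = unet_discriminator(img)
```

The per-pixel map is what gives the generator localized feedback, while the bottleneck score preserves a single image-level judgement.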
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
- Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation
We propose a novel model named Multi-Channel Attention Selection Generative Adversarial Network (SelectionGAN) for guided image-to-image translation.
The proposed framework and modules are unified solutions and can be applied to solve other generation tasks such as semantic image synthesis.
arXiv Detail & Related papers (2020-02-03T23:17:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.