Learning Unsupervised Cross-domain Image-to-Image Translation Using a
Shared Discriminator
- URL: http://arxiv.org/abs/2102.04699v1
- Date: Tue, 9 Feb 2021 08:26:23 GMT
- Authors: Rajiv Kumar, Rishabh Dabral, G. Sivakumar
- Abstract summary: Unsupervised image-to-image translation is used to transform images from a source domain to generate images in a target domain without using source-target image pairs.
We propose a new method that uses a single shared discriminator between the two GANs, which improves the overall efficacy.
Our results indicate that even without adding attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
- Score: 2.1377923666134118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised image-to-image translation is used to transform images from a
source domain to generate images in a target domain without using source-target
image pairs. Promising results have been obtained for this problem in an
adversarial setting using two independent GANs and attention mechanisms. We
propose a new method that uses a single shared discriminator between the two
GANs, which improves the overall efficacy. We assess the qualitative and
quantitative results on image transfiguration, a cross-domain translation task,
in a setting where the target domain shares similar semantics to the source
domain. Our results indicate that even without adding attention mechanisms, our
method performs on par with attention-based methods and generates images of
comparable quality.
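The shared-discriminator idea can be sketched as follows. This is a minimal, hypothetical illustration only: the linear "networks", loss, and shapes are placeholders, not the paper's architecture; the point is that both generators are scored by one discriminator instead of one per GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w):
    # Shared linear discriminator: maps a sample to a "realness" score in (0, 1).
    return 1.0 / (1.0 + np.exp(-x @ w))

def generator(z, w):
    # Toy linear generator mapping noise to a fake sample.
    return z @ w

dim = 4
w_d = rng.normal(size=dim)            # ONE discriminator, shared by both domains
w_g_ab = rng.normal(size=(dim, dim))  # generator for direction A -> B
w_g_ba = rng.normal(size=(dim, dim))  # generator for direction B -> A

z = rng.normal(size=(8, dim))
fake_b = generator(z, w_g_ab)
fake_a = generator(z, w_g_ba)

# Both translation directions are scored by the *same* discriminator,
# rather than maintaining an independent discriminator per GAN.
score_b = discriminator(fake_b, w_d)
score_a = discriminator(fake_a, w_d)

# Non-saturating generator loss for each direction, using the shared scores.
loss_ab = -np.log(score_b + 1e-8).mean()
loss_ba = -np.log(score_a + 1e-8).mean()
```

Sharing the discriminator halves the number of discriminator parameters relative to the usual two-GAN cycle setup, which is the efficiency the abstract refers to.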
Related papers
- Domain Agnostic Image-to-image Translation using Low-Resolution Conditioning [6.470760375991825]
We propose a domain-agnostic i2i method for fine-grained problems, where the domains are related.
We present a novel approach that relies on training the generative model to produce images that share the distinctive information of the associated source image.
We validate our method on the CelebA-HQ and AFHQ datasets by demonstrating improvements in terms of visual quality.
arXiv Detail & Related papers (2023-05-08T19:58:49Z)
- Multi-cropping Contrastive Learning and Domain Consistency for Unsupervised Image-to-Image Translation [5.562419999563734]
We propose a novel unsupervised image-to-image translation framework based on multi-cropping contrastive learning and domain consistency, called MCDUT.
In many image-to-image translation tasks, our method achieves state-of-the-art results, and the advantages of our method have been proven through comparison experiments and ablation research.
arXiv Detail & Related papers (2023-04-24T16:20:28Z)
- Unsupervised Domain Adaptation for Semantic Segmentation using One-shot Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
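The latent-mixing step described above can be sketched roughly as below. This is a hypothetical stand-in: the real method uses trained encoder-decoder networks, while here the latent codes and the convex-mix rule are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical latent content codes, one per domain, standing in for
# the outputs of a trained encoder on source- and target-domain images.
z_source = rng.normal(size=(1, 16))
z_target = rng.normal(size=(1, 16))

def mix_latents(z_a, z_b, alpha):
    # Convex mix of latent content representations across domains.
    return alpha * z_a + (1.0 - alpha) * z_b

z_mixed = mix_latents(z_source, z_target, alpha=0.5)
# A decoder (omitted here) would map z_mixed back to image space,
# yielding a sample that blends content statistics of both domains.
```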
arXiv Detail & Related papers (2022-12-07T18:16:17Z)
- Multi-domain Unsupervised Image-to-Image Translation with Appearance Adaptive Convolution [62.4972011636884]
We propose a novel multi-domain unsupervised image-to-image translation (MDUIT) framework.
We exploit the decomposed content feature and appearance adaptive convolution to translate an image into a target appearance.
We show that the proposed method produces visually diverse and plausible results in multiple domains compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-02-06T14:12:34Z)
- Image-to-image Translation as a Unique Source of Knowledge [91.3755431537592]
This article performs translations of labelled datasets from the optical domain to the SAR domain with different I2I algorithms from the state-of-the-art.
Stacking is proposed as a way of combining the knowledge learned from the different I2I translations and is evaluated against single models.
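Stacking in this sense can be sketched as follows. The setup is hypothetical (synthetic scores in place of real model outputs, a least-squares combiner in place of whatever meta-model the paper uses), but it shows the mechanism: fit a combiner on the base models' predictions so the ensemble is at least as good in-sample as any single model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-sample scores from three models trained on three
# different I2I-translated versions of the same dataset.
n = 200
y = rng.integers(0, 2, size=n).astype(float)        # ground-truth labels
base_preds = np.stack([
    np.clip(y + rng.normal(scale=s, size=n), 0, 1)  # progressively noisier models
    for s in (0.2, 0.4, 0.6)
], axis=1)                                          # shape (n, 3)

# Stacking: fit a linear meta-model on the base predictions.
w, *_ = np.linalg.lstsq(base_preds, y, rcond=None)
stacked = base_preds @ w

def mse(p):
    return float(np.mean((p - y) ** 2))
```

Because each single model is itself a point in the meta-model's search space (a unit-weight combination), the stacked in-sample error can never exceed the best single model's.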
arXiv Detail & Related papers (2021-12-03T12:12:04Z)
- RPCL: A Framework for Improving Cross-Domain Detection with Auxiliary Tasks [74.10747285807315]
Cross-Domain Detection (XDD) aims to train an object detector using labeled images from a source domain so that it performs well in a target domain where only unlabeled images are available.
This paper provides a complementary solution to align domains by learning the same auxiliary tasks in both domains simultaneously.
arXiv Detail & Related papers (2021-04-18T02:56:19Z)
- Deep Symmetric Adaptation Network for Cross-modality Medical Image Segmentation [40.95845629932874]
Unsupervised domain adaptation (UDA) methods have shown promising performance in cross-modality medical image segmentation tasks.
We present a novel deep symmetric architecture of UDA for medical image segmentation, which consists of a segmentation sub-network and two symmetric source and target domain translation sub-networks.
Our method has remarkable advantages compared to the state-of-the-art methods in both cross-modality Cardiac and BraTS segmentation tasks.
arXiv Detail & Related papers (2021-01-18T02:54:30Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effective signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
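The trajectory between sign-symmetric attribute vectors can be illustrated as below. This is a hedged sketch: it assumes simple linear interpolation between `-attr` and `+attr`, whereas the paper's actual trajectory construction may be more elaborate.

```python
import numpy as np

# A hypothetical attribute vector and its sign-symmetric counterpart.
attr = np.array([0.8, -0.3, 0.5])
neg_attr = -attr

def trajectory(t):
    # Linear path from -attr (t = 0) to +attr (t = 1).
    return (1.0 - t) * neg_attr + t * attr

# Sampling t in [0, 1] yields a continuum of intermediate attribute
# vectors that could condition a generator for smooth translation.
samples = np.stack([trajectory(t) for t in np.linspace(0, 1, 5)])
```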
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Label-Driven Reconstruction for Domain Adaptation in Semantic Segmentation [43.09068177612067]
Unsupervised domain adaptation alleviates the need for pixel-wise annotation in semantic segmentation.
One of the most common strategies is to translate images from the source domain to the target domain and then align their marginal distributions in the feature space using adversarial learning.
Here, we present an innovative framework, designed to mitigate the image translation bias and align cross-domain features with the same category.
arXiv Detail & Related papers (2020-03-10T10:06:35Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA).
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.