A Domain Gap Aware Generative Adversarial Network for Multi-domain Image
Translation
- URL: http://arxiv.org/abs/2110.10837v1
- Date: Thu, 21 Oct 2021 00:33:06 GMT
- Title: A Domain Gap Aware Generative Adversarial Network for Multi-domain Image
Translation
- Authors: Wenju Xu and Guanghui Wang
- Abstract summary: The paper proposes a unified model to translate images across multiple domains with significant domain gaps.
With a single unified generator, the model can maintain consistency over the global shapes as well as the local texture information across multiple domains.
- Score: 22.47113158859034
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent image-to-image translation models have shown great success in mapping
local textures between two domains. Existing approaches rely on a
cycle-consistency constraint that supervises the generators to learn an inverse
mapping. However, learning the inverse mapping introduces extra trainable
parameters and it is unable to learn the inverse mapping for some domains. As a
result, they are ineffective in the scenarios where (i) multiple visual image
domains are involved; (ii) both structure and texture transformations are
required; and (iii) semantic consistency is preserved. To solve these
challenges, the paper proposes a unified model to translate images across
multiple domains with significant domain gaps. Unlike previous models that
constrain the generators with the ubiquitous cycle-consistency constraint to
achieve the content similarity, the proposed model employs a perceptual
self-regularization constraint. With a single unified generator, the model can
maintain consistency over the global shapes as well as the local texture
information across multiple domains. Extensive qualitative and quantitative
evaluations demonstrate the model's effectiveness and its superior performance over
state-of-the-art models. It is more effective at representing shape deformation
in challenging mappings with significant dataset variation across multiple
domains.
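The difference between the ubiquitous cycle-consistency constraint and the proposed perceptual self-regularization can be sketched in a few lines of NumPy. Everything below is an illustrative assumption: the generators are toy shifts and the frozen "feature extractor" is a random linear map standing in for perceptual features (e.g. activations of a pretrained CNN); only the structure of the two losses is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extractor" phi: a toy stand-in for perceptual
# features (e.g. activations of a pretrained CNN).
W_feat = rng.standard_normal((8, 16))

def phi(x):
    return np.tanh(W_feat @ x)

def cycle_consistency_loss(x, g_ab, g_ba):
    """||G_BA(G_AB(x)) - x||_1: requires a second, inverse generator."""
    return np.abs(g_ba(g_ab(x)) - x).mean()

def perceptual_self_regularization(x, g_ab):
    """||phi(G_AB(x)) - phi(x)||_1: a single generator, no inverse mapping."""
    return np.abs(phi(g_ab(x)) - phi(x)).mean()

# Toy generators: small shifts standing in for learned translators.
g_ab = lambda x: x + 0.1
g_ba = lambda x: x - 0.1

x = rng.standard_normal(16)
loss_cycle = cycle_consistency_loss(x, g_ab, g_ba)
loss_perc = perceptual_self_regularization(x, g_ab)
```

Note that the self-regularization term never needs `g_ba`, which is why a single unified generator suffices across many domains.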
Related papers
- Boundless Across Domains: A New Paradigm of Adaptive Feature and Cross-Attention for Domain Generalization in Medical Image Segmentation [1.93061220186624]
Domain-invariant representation learning is a powerful method for domain generalization.
Previous approaches face challenges such as high computational demands, training instability, and limited effectiveness with high-dimensional data.
We propose an Adaptive Feature Blending (AFB) method that generates out-of-distribution samples while exploring the in-distribution space.
arXiv Detail & Related papers (2024-11-22T12:06:24Z)
- Semantic Segmentation for Real-World and Synthetic Vehicle's Forward-Facing Camera Images [0.8562182926816566]
This paper presents a solution to the semantic segmentation problem for both real-world and synthetic images from a vehicle's forward-facing camera.
We concentrate on building a robust model that performs well across various outdoor domains.
The paper studies the effectiveness of employing real-world and synthetic data to handle domain adaptation for semantic segmentation.
arXiv Detail & Related papers (2024-07-07T17:28:45Z)
- Domain-Scalable Unpaired Image Translation via Latent Space Anchoring [88.7642967393508]
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data.
We propose a new domain-scalable UNIT method, termed as latent space anchoring.
Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models.
In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning.
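The mix-and-match inference described above can be sketched with toy linear encoders and decoders. The shared latent space, the linear maps, and the dimensions below are illustrative assumptions; the actual method anchors images to the latent space of frozen GANs with learned lightweight models.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_LAT = 12, 4   # assumed toy image / latent dimensions

# Per-domain lightweight encoders and decoders into ONE shared latent
# space; plain linear maps stand in for the paper's learned models.
enc = {d: rng.standard_normal((D_LAT, D_IMG)) * 0.1 for d in "AB"}
dec = {d: rng.standard_normal((D_IMG, D_LAT)) * 0.1 for d in "AB"}

def translate(x, src, dst):
    """Arbitrary domain pairing at inference: encode with the source
    encoder, decode with the target decoder, no fine-tuning."""
    z = enc[src] @ x      # anchor the image into the shared latent space
    return dec[dst] @ z   # render it in the target domain

x_a = rng.standard_normal(D_IMG)
x_ab = translate(x_a, "A", "B")   # translate domain A -> domain B
```

Because every domain shares one latent space, adding a new domain only requires training its encoder/decoder pair, which is what makes the scheme domain-scalable.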
arXiv Detail & Related papers (2023-06-26T17:50:02Z)
- Unsupervised Domain Adaptation for Semantic Segmentation using One-shot Image-to-Image Translation via Latent Representation Mixing [9.118706387430883]
We propose a new unsupervised domain adaptation method for the semantic segmentation of very high resolution images.
An image-to-image translation paradigm is proposed, based on an encoder-decoder principle where latent content representations are mixed across domains.
Cross-city comparative experiments have shown that the proposed method outperforms state-of-the-art domain adaptation methods.
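One hypothetical reading of mixing latent content representations across domains is a convex blend of the two codes before decoding. The blending rule and the `alpha` parameter below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def mix_latents(z_src, z_tgt, alpha=0.5):
    """Convex blend of latent content codes from two domains; a decoder
    would then reconstruct an image from the mixed code."""
    return alpha * z_src + (1.0 - alpha) * z_tgt

rng = np.random.default_rng(2)
z_s = rng.standard_normal(8)   # latent content code, source domain
z_t = rng.standard_normal(8)   # latent content code, target domain
z_mix = mix_latents(z_s, z_t, alpha=0.7)
```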
arXiv Detail & Related papers (2022-12-07T18:16:17Z)
- Disentangled Unsupervised Image Translation via Restricted Information Flow [61.44666983942965]
Many state-of-the-art methods hard-code the desired shared-vs-specific split into their architecture.
We propose a new method that does not rely on inductive architectural biases.
We show that the proposed method achieves consistently high manipulation accuracy across two synthetic and one natural dataset.
arXiv Detail & Related papers (2021-11-26T00:27:54Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
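A minimal sketch of the intermediate-domain idea: blending a source and a target image yields samples that lie between the two domains, which can ease adversarial alignment. The fixed pixel-space blend below is a toy stand-in; AFAN's actual intermediate-domain image generation is learned, not a hand-set interpolation.

```python
import numpy as np

def intermediate_domain(x_src, x_tgt, lam):
    """Blend a source-domain and a target-domain image into an
    intermediate-domain sample (toy pixel-space stand-in)."""
    return lam * x_src + (1.0 - lam) * x_tgt

rng = np.random.default_rng(4)
x_s = rng.random((8, 8, 3))   # fake source-domain image in [0, 1]
x_t = rng.random((8, 8, 3))   # fake target-domain image in [0, 1]
x_mid = intermediate_domain(x_s, x_t, lam=0.5)   # halfway between domains
```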
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Cross-Domain Latent Modulation for Variational Transfer Learning [1.9212368803706577]
We propose a cross-domain latent modulation mechanism within a variational autoencoder (VAE) framework to enable improved transfer learning.
We apply the proposed model to a number of transfer learning tasks including unsupervised domain adaptation and image-to-image translation.
arXiv Detail & Related papers (2020-12-21T22:45:00Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effectively signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
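The trajectory between two sign-symmetrical attribute vectors can be sketched as linear interpolation from v to -v, with the neutral attribute at the midpoint. This is an illustrative reading of the abstract; a real model would decode each interpolated vector into an image to obtain the continuous translation.

```python
import numpy as np

def trajectory(v, steps=5):
    """Linearly interpolate from an attribute vector v to its
    sign-symmetric counterpart -v, tracing a continuous translation path."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * v + t * (-v) for t in ts]

v = np.array([1.0, -2.0, 0.5])
path = trajectory(v)   # path[0] == v, path[-1] == -v, midpoint is neutral
```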
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Image-to-image Mapping with Many Domains by Sparse Attribute Transfer [71.28847881318013]
Unsupervised image-to-image translation consists of learning a pair of mappings between two domains without known pairwise correspondences between points.
Current convention is to approach this task with cycle-consistent GANs.
We propose an alternate approach that directly restricts the generator to performing a simple sparse transformation in a latent layer.
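A sparse latent transformation can be illustrated with L1 soft-thresholding, which zeroes small coordinates of a latent edit so that only a few attributes change. The operator and the threshold below are illustrative assumptions rather than the paper's exact mechanism.

```python
import numpy as np

def soft_threshold(d, lam):
    """Proximal operator of the L1 norm: zeroes entries smaller than lam,
    producing a sparse edit vector."""
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

rng = np.random.default_rng(3)
z = rng.standard_normal(10)                   # latent code of the input image
raw_delta = rng.standard_normal(10) * 0.3     # dense candidate edit
delta = soft_threshold(raw_delta, lam=0.25)   # sparse latent transformation
z_translated = z + delta                      # only a few coordinates change
```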
arXiv Detail & Related papers (2020-06-23T19:52:23Z)
- CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency [119.45667331836583]
Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another.
We present a novel pixel-wise adversarial domain adaptation algorithm.
arXiv Detail & Related papers (2020-01-09T19:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.