Domain-Specific Mappings for Generative Adversarial Style Transfer
- URL: http://arxiv.org/abs/2008.02198v1
- Date: Wed, 5 Aug 2020 15:55:25 GMT
- Title: Domain-Specific Mappings for Generative Adversarial Style Transfer
- Authors: Hsin-Yu Chang, Zhixiang Wang, Yung-Yu Chuang
- Abstract summary: Style transfer generates an image whose content comes from one image and style from the other.
Previous methods often assume a shared domain-invariant content space, which could compromise the content representation power.
This paper leverages domain-specific mappings for remapping latent features in the shared content space to domain-specific content spaces.
- Score: 30.50889066030244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer generates an image whose content comes from one image and
style from the other. Image-to-image translation approaches with disentangled
representations have been shown effective for style transfer between two image
categories. However, previous methods often assume a shared domain-invariant
content space, which could compromise the content representation power. To address this issue, this paper leverages domain-specific mappings for
remapping latent features in the shared content space to domain-specific
content spaces. This way, images can be encoded more properly for style
transfer. Experiments show that the proposed method outperforms previous style
transfer methods, particularly on challenging scenarios that would require
semantic correspondences between images. Code and results are available at
https://acht7111020.github.io/DSMAP-demo/.
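The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration (not the authors' released code; module names such as DomainMapper and StyleTransferModel are assumptions): a shared encoder extracts domain-invariant content, a small per-domain mapping network remaps it into the target domain's content space, and the decoder combines that remapped content with the exemplar's style code.

```python
# Hypothetical sketch of the domain-specific mapping idea; NOT the released DSMAP code.
import torch
import torch.nn as nn

class DomainMapper(nn.Module):
    """Remaps shared, domain-invariant content features into one domain's content space."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, shared_content: torch.Tensor) -> torch.Tensor:
        return self.net(shared_content)

class StyleTransferModel(nn.Module):
    def __init__(self, content_encoder: nn.Module, style_encoder: nn.Module,
                 decoder: nn.Module, dim: int, num_domains: int = 2):
        super().__init__()
        self.content_encoder = content_encoder   # shared content encoder
        self.style_encoder = style_encoder       # per-image style encoder
        self.decoder = decoder                   # generator conditioned on content + style
        # one mapping network per domain, remapping the shared content space
        self.mappers = nn.ModuleList(DomainMapper(dim) for _ in range(num_domains))

    def transfer(self, content_img: torch.Tensor, style_img: torch.Tensor,
                 target_domain: int) -> torch.Tensor:
        shared = self.content_encoder(content_img)      # domain-invariant content features
        content = self.mappers[target_domain](shared)   # remap to target-domain content space
        style = self.style_encoder(style_img)           # style code from the exemplar
        return self.decoder(content, style)             # stylized output
```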
Related papers
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another given image that provides the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- DSI2I: Dense Style for Unpaired Image-to-Image Translation [70.93865212275412]
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar.
We propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information.
Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars when compared to the state-of-the-art methods.
arXiv Detail & Related papers (2022-12-26T18:45:25Z)
- DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation [23.588766224169493]
DRANet is a network architecture that disentangles image representations and transfers the visual attributes in a latent space for unsupervised cross-domain adaptation.
Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images.
It adapts the domain by incorporating the transferred style factor into the content factor along with learnable weights specified for each domain.
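The weighted combination described here can be illustrated with a small, hedged sketch (the class name DomainWeightedCombiner and the tensor shapes are assumptions for illustration, not DRANet's actual implementation):

```python
# Hypothetical sketch: combining a transferred style factor with a content factor
# using per-domain learnable weights, in the spirit of the description above.
import torch
import torch.nn as nn

class DomainWeightedCombiner(nn.Module):
    def __init__(self, num_domains: int, feat_dim: int):
        super().__init__()
        # one learnable weight vector per domain, applied to the transferred style factor
        self.domain_weights = nn.Parameter(torch.ones(num_domains, feat_dim))

    def forward(self, content: torch.Tensor, style: torch.Tensor, domain: int) -> torch.Tensor:
        # content, style: (batch, feat_dim); `domain` selects that domain's weights
        return content + self.domain_weights[domain] * style
```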
arXiv Detail & Related papers (2021-03-24T18:54:23Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, whether guided by random noise or by example images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task.
arXiv Detail & Related papers (2020-08-11T20:11:53Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions of the output and the style image to share similar style patterns.
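As a loose illustration of what sharing style patterns between semantically matched regions can mean in practice, here is a hypothetical per-region statistic alignment (an AdaIN-style sketch that assumes paired region masks are available; this is not the paper's manifold-alignment algorithm):

```python
# Hypothetical sketch: per-region feature statistic alignment, assuming binary masks
# that pair a content region with a semantically similar style region.
import torch

def align_region_stats(content_feat, style_feat, content_mask, style_mask, eps=1e-5):
    # content_feat, style_feat: (C, H, W); masks: (H, W) with 1s inside the region
    c = content_feat[:, content_mask.bool()]            # (C, Nc) features in the content region
    s = style_feat[:, style_mask.bool()]                 # (C, Ns) features in the style region
    c_mean, c_std = c.mean(1, keepdim=True), c.std(1, keepdim=True) + eps
    s_mean, s_std = s.mean(1, keepdim=True), s.std(1, keepdim=True) + eps
    aligned = (c - c_mean) / c_std * s_std + s_mean      # match the style region's statistics
    out = content_feat.clone()
    out[:, content_mask.bool()] = aligned
    return out
```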
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
- TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation [82.52514546441247]
We propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks.
Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style and the content.
We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-04-19T05:07:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.