Manifold Alignment for Semantically Aligned Style Transfer
- URL: http://arxiv.org/abs/2005.10777v2
- Date: Thu, 2 Sep 2021 05:41:18 GMT
- Title: Manifold Alignment for Semantically Aligned Style Transfer
- Authors: Jing Huo, Shiyin Jin, Wenbin Li, Jing Wu, Yu-Kun Lai, Yinghuan Shi,
Yang Gao
- Abstract summary: We make a new assumption that image features from the same semantic region form a manifold and an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions between the output and the style image to share similar style patterns.
- Score: 61.1274057338588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing style transfer methods follow the assumption that styles can be
represented with global statistics (e.g., Gram matrices or covariance
matrices), and thus address the problem by forcing the output and style images
to have similar global statistics. An alternative is the assumption of local
style patterns, where algorithms are designed to swap similar local features of
content and style images. However, the limitation of these existing methods is
that they neglect the semantic structure of the content image, which may lead to
corrupted content structure in the output. In this paper, we make a new
assumption that image features from the same semantic region form a manifold
and an image with multiple semantic regions follows a multi-manifold
distribution. Based on this assumption, the style transfer problem is
formulated as aligning two multi-manifold distributions and a Manifold
Alignment based Style Transfer (MAST) framework is proposed. The proposed
framework allows semantically similar regions between the output and the style
image to share similar style patterns. Moreover, the proposed manifold alignment
method is flexible, allowing user editing or semantic segmentation maps to be
used as guidance for style transfer. To make the method applicable to
photorealistic style transfer, we propose a new adaptive weight skip connection
network structure to preserve the content details. Extensive experiments verify
the effectiveness of the proposed framework for both artistic and
photorealistic style transfer. Code is available at
https://github.com/NJUHuoJing/MAST.
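To make the contrast drawn in the abstract concrete, here is a minimal, hypothetical PyTorch sketch. `gram` computes the kind of global style statistic (a Gram matrix) that global-statistics methods match, while `region_wise_align` instead matches channel statistics per semantic region, so that semantically corresponding regions share style patterns. All names are illustrative; this is not the authors' MAST algorithm, which aligns multi-manifold feature distributions rather than simple per-region statistics.

```python
# Hypothetical illustration, not the authors' MAST implementation:
# MAST aligns multi-manifold feature distributions; this toy version
# only matches per-region channel statistics (AdaIN-style) between
# semantically corresponding regions.
import torch

def gram(feat):
    """Global style statistic: Gram matrix of a (C, H, W) feature map."""
    f = feat.flatten(1)                      # (C, H*W)
    return f @ f.t() / f.shape[1]            # (C, C); spatial layout discarded

def region_wise_align(content_feat, style_feat, content_seg, style_seg, eps=1e-5):
    """Shift each semantic region of the content features to the statistics
    of the matching region in the style features.

    content_feat, style_feat: (C, H, W) feature maps (e.g., VGG relu4_1).
    content_seg, style_seg:   (H, W) integer label maps with shared label ids.
    """
    out = content_feat.clone()
    for lab in torch.unique(content_seg):
        c_mask = content_seg == lab          # pixels of this region in the content
        s_mask = style_seg == lab            # matching region in the style image
        if not s_mask.any():
            continue                         # no matching style region: keep content
        c = content_feat[:, c_mask]          # (C, Nc) region features
        s = style_feat[:, s_mask]            # (C, Ns)
        c_mu, c_std = c.mean(1, keepdim=True), c.std(1, keepdim=True) + eps
        s_mu, s_std = s.mean(1, keepdim=True), s.std(1, keepdim=True)
        out[:, c_mask] = (c - c_mu) / c_std * s_std + s_mu   # normalize, re-style
    return out
```

Decoding the aligned features back to an image through a trained decoder would, under these assumptions, restyle each region only from its semantic counterpart, which is the behavior the paper's multi-manifold assumption is designed to guarantee.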
Related papers
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given reference image onto another given image that provides the content.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature (see the sketch after this list).
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Consistent Style Transfer [23.193302706359464]
Recently, attentional arbitrary style transfer methods have been proposed to achieve fine-grained results, but they can render semantic regions inconsistently.
We propose progressive attentional manifold alignment (PAMA) to alleviate this problem.
We show that PAMA achieves state-of-the-art performance while avoiding the inconsistency of semantic regions.
arXiv Detail & Related papers (2022-01-06T20:19:35Z)
- Towards Controllable and Photorealistic Region-wise Image Manipulation [11.601157452472714]
We present a generative model with auto-encoder architecture for per-region style manipulation.
We apply a code consistency loss to enforce an explicit disentanglement between content and style latent representations.
The model is constrained by a content alignment loss to ensure that foreground editing does not interfere with background contents.
arXiv Detail & Related papers (2021-08-19T13:29:45Z)
- Domain-Specific Mappings for Generative Adversarial Style Transfer [30.50889066030244]
Style transfer generates an image whose content comes from one image and style from the other.
Previous methods often assume a shared domain-invariant content space, which could compromise the content representation power.
This paper leverages domain-specific mappings to remap latent features from the shared content space to domain-specific content spaces.
arXiv Detail & Related papers (2020-08-05T15:55:25Z)
- Distribution Aligned Multimodal and Multi-Domain Image Stylization [76.74823384524814]
We propose a unified framework for multimodal and multi-domain style transfer.
The key component of our method is a novel style distribution alignment module.
We validate our proposed framework on painting style transfer with a variety of different artistic styles and genres.
arXiv Detail & Related papers (2020-06-02T07:25:53Z)
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
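The adaptive contrastive scheme in the UCAST entry above can be illustrated with a small, hypothetical PyTorch sketch: an InfoNCE-style loss whose temperature is predicted from the input's style code rather than fixed as a constant. The head, shapes, and floor constant below are assumptions made for illustration, not the paper's actual architecture or loss.

```python
# Hypothetical sketch of a contrastive style loss with an input-dependent
# temperature (the UCAST paper's exact networks and losses are not
# reproduced here).
import torch
import torch.nn.functional as F

class AdaptiveTempContrastiveLoss(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Small head predicting a per-sample temperature from the style code.
        self.temp_head = torch.nn.Linear(dim, 1)

    def forward(self, anchor, positive, negatives):
        """anchor, positive: (B, D) style codes; negatives: (B, K, D)."""
        # softplus keeps the predicted temperature positive; the floor is arbitrary.
        tau = F.softplus(self.temp_head(anchor)) + 1e-2      # (B, 1)
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        n = F.normalize(negatives, dim=-1)
        pos = (a * p).sum(-1, keepdim=True)                  # (B, 1) cosine sims
        neg = torch.einsum('bd,bkd->bk', a, n)               # (B, K)
        logits = torch.cat([pos, neg], dim=1) / tau          # per-sample scaling
        target = torch.zeros(a.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, target)               # positive at index 0
```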