Consistent Style Transfer
- URL: http://arxiv.org/abs/2201.02233v1
- Date: Thu, 6 Jan 2022 20:19:35 GMT
- Title: Consistent Style Transfer
- Authors: Xuan Luo, Zhen Han, Lingkang Yang, Lingling Zhang
- Abstract summary: Recently, attentional arbitrary style transfer methods have been proposed to achieve fine-grained results.
We propose progressive attentional manifold alignment (PAMA) to alleviate the inconsistent stylization these methods can produce across semantic regions.
We show that PAMA achieves state-of-the-art performance while avoiding the inconsistency of semantic regions.
- Score: 23.193302706359464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, attentional arbitrary style transfer methods have been proposed to
achieve fine-grained results; these methods manipulate the point-wise similarity
between content and style features for stylization. However, the attention
mechanism based on feature points ignores the feature multi-manifold
distribution, where each feature manifold corresponds to a semantic region in
the image. Consequently, a uniform content semantic region is rendered with
highly different patterns from various style semantic regions, producing
inconsistent stylization results with visual artifacts. We propose the
progressive attentional manifold alignment (PAMA) to alleviate this problem,
which repeatedly applies attention operations and space-aware interpolations.
The attention operation rearranges style features dynamically according to the
spatial distribution of content features. This makes the content and style
manifolds correspond on the feature map. Then the space-aware interpolation
adaptively interpolates between the corresponding content and style manifolds
to increase their similarity. By gradually aligning the content manifolds to
style manifolds, the proposed PAMA achieves state-of-the-art performance while
avoiding inconsistency across semantic regions. Code is available at
https://github.com/computer-vision2022/PAMA.
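To make the two operations concrete, here is a minimal PyTorch sketch of one attention-plus-interpolation stage in the spirit of the abstract. It is not the authors' implementation (see the repository above for that); the module name `AttnAlignStage`, the 1x1 projection layout, and the learned per-pixel blending weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttnAlignStage(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)  # query from content
        self.k = nn.Conv2d(channels, channels, 1)  # key from style
        self.v = nn.Conv2d(channels, channels, 1)  # value from style
        # predicts a per-pixel interpolation weight from the fused features
        self.weight = nn.Conv2d(channels * 2, 1, 3, padding=1)

    def forward(self, fc: torch.Tensor, fs: torch.Tensor) -> torch.Tensor:
        b, c, h, w = fc.shape
        q = self.q(fc).flatten(2).transpose(1, 2)       # (b, hw, c)
        k = self.k(fs).flatten(2)                       # (b, c, hw)
        v = self.v(fs).flatten(2).transpose(1, 2)       # (b, hw, c)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)  # (b, hw, hw)
        # attention rearranges style features to follow the content layout
        fs_aligned = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # space-aware interpolation: per-pixel blend of content and style
        a = torch.sigmoid(self.weight(torch.cat([fc, fs_aligned], dim=1)))
        return a * fs_aligned + (1 - a) * fc

# "progressive": repeating the stage lets the content manifolds drift
# gradually toward the style manifolds
stages = nn.ModuleList([AttnAlignStage(64) for _ in range(3)])
fc, fs = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
for stage in stages:
    fc = stage(fc, fs)
```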
Related papers
- Learning Dynamic Style Kernels for Artistic Style Transfer [26.19086645743083]
We propose a new scheme that learns spatially adaptive kernels for per-pixel stylization.
The proposed method outperforms state-of-the-art approaches in both visual quality and efficiency.
arXiv Detail & Related papers (2023-04-02T00:26:43Z)
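As a rough sketch of the per-pixel "dynamic kernel" idea summarized above, the snippet below predicts a depthwise KxK kernel at every spatial position from the style features and applies it to the content features. Conditioning on style features alone, the softmax normalization, and the class name `PerPixelKernels` are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelKernels(nn.Module):
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.k = k
        # predicts one k*k depthwise kernel per spatial position
        self.predict = nn.Conv2d(channels, k * k, 3, padding=1)

    def forward(self, fc: torch.Tensor, fs: torch.Tensor) -> torch.Tensor:
        b, c, h, w = fc.shape
        kernels = torch.softmax(self.predict(fs), dim=1)     # (b, k*k, h, w)
        # extract k*k neighborhoods of the content features
        patches = F.unfold(fc, self.k, padding=self.k // 2)  # (b, c*k*k, h*w)
        patches = patches.view(b, c, self.k * self.k, h * w)
        kernels = kernels.view(b, 1, self.k * self.k, h * w)
        out = (patches * kernels).sum(dim=2)                 # (b, c, h*w)
        return out.view(b, c, h, w)

# usage: content and style feature maps of matching spatial size (assumption)
out = PerPixelKernels(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```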
- All-to-key Attention for Arbitrary Style Transfer [98.83954812536521]
We propose a novel all-to-key attention mechanism -- each position of content features is matched to stable key positions of style features.
The resultant module, dubbed StyA2K, shows extraordinary performance in preserving the semantic structure and rendering consistent style patterns.
arXiv Detail & Related papers (2022-12-08T06:46:35Z)
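Schematically, the all-to-key idea above can be approximated with top-k attention: each content query attends only to its most similar style key positions instead of all of them. The function below is a hedged sketch; the actual StyA2K module is more elaborate, and the `top_k` value and tensor shapes are assumptions.

```python
import torch

def all_to_key_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                         top_k: int = 8) -> torch.Tensor:
    """Each content query (q: b,n,c) attends to only its `top_k` most similar
    style key positions (k, v: b,m,c). A hedged approximation of the idea."""
    scores = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5  # (b, n, m)
    vals, idx = scores.topk(top_k, dim=-1)               # keep top-k keys only
    attn = torch.softmax(vals, dim=-1)                   # (b, n, top_k)
    b, n, _ = idx.shape
    # gather the selected style value vectors for each query position
    gathered = torch.gather(
        v.unsqueeze(1).expand(b, n, -1, -1), 2,
        idx.unsqueeze(-1).expand(b, n, top_k, v.shape[-1]))  # (b, n, top_k, c)
    return (attn.unsqueeze(-1) * gathered).sum(dim=2)        # (b, n, c)
```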
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- Towards Controllable and Photorealistic Region-wise Image Manipulation [11.601157452472714]
We present a generative model with an auto-encoder architecture for per-region style manipulation.
We apply a code consistency loss to enforce an explicit disentanglement between content and style latent representations.
The model is constrained by a content alignment loss to ensure that foreground editing does not interfere with background content.
arXiv Detail & Related papers (2021-08-19T13:29:45Z)
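For intuition, here is a hedged sketch of what the two losses above might look like; the encoder/decoder callables, the L1 distances, and the foreground-mask convention are assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def code_consistency_loss(enc, dec, content_code, style_code):
    """Re-encode the generated image and require the recovered codes to match
    the inputs. `enc` and `dec` are hypothetical encoder/decoder callables."""
    fake = dec(content_code, style_code)
    c_rec, s_rec = enc(fake)
    return F.l1_loss(c_rec, content_code) + F.l1_loss(s_rec, style_code)

def content_alignment_loss(fake, real, fg_mask):
    """Penalize any change outside the foreground mask so that foreground
    edits do not leak into the background."""
    return F.l1_loss(fake * (1 - fg_mask), real * (1 - fg_mask))
```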
- Arbitrary Video Style Transfer via Multi-Channel Correlation [84.75377967652753]
We propose the Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly on the feature space of the style and content domains, where it learns to rearrange and fuse style features based on their similarity with content features.
The outputs generated by MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures.
arXiv Detail & Related papers (2020-09-17T01:30:46Z)
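One hedged reading of "rearranging and fusing style features based on similarity with content": re-weight each style channel by its correlation with the matching content channel before fusing. The sketch below is an illustrative approximation of that reading, not MCCNet's actual module.

```python
import torch
import torch.nn as nn

class ChannelCorrelationFuse(nn.Module):
    """Gates each style channel by its cosine similarity with the matching
    content channel before fusing; an illustrative approximation."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, fc: torch.Tensor, fs: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = fc.shape
        # per-channel similarity between flattened content and style maps
        corr = torch.cosine_similarity(fc.flatten(2), fs.flatten(2), dim=2)
        gate = torch.sigmoid(corr).view(b, c, 1, 1)  # (b, c) -> spatial gate
        return self.proj(gate * fs + fc)
```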
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
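The hypercolumn composition step itself is simple; the paper's contribution is learning which layers to select per image pair. The sketch below shows only the composition, with `layer_ids` supplied by hand as an assumption.

```python
import torch
import torch.nn.functional as F

def compose_hypercolumn(feature_maps, layer_ids, size=(64, 64)):
    """Upsample the selected layers' feature maps to a common resolution and
    concatenate along channels. In the paper the selection is predicted per
    image pair; here `layer_ids` is just given (assumption)."""
    selected = [feature_maps[i] for i in layer_ids]
    resized = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
               for f in selected]
    return torch.cat(resized, dim=1)  # (b, sum of channels, *size)
```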
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and a referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract the main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
- Manifold Alignment for Semantically Aligned Style Transfer [61.1274057338588]
We make a new assumption that image features from the same semantic region form a manifold, and that an image with multiple semantic regions follows a multi-manifold distribution.
Based on this assumption, the style transfer problem is formulated as aligning two multi-manifold distributions.
The proposed framework allows semantically similar regions between the output and the style image to share similar style patterns.
arXiv Detail & Related papers (2020-05-21T16:52:37Z)
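Read schematically, the multi-manifold formulation above amounts to an objective of the following shape; the notation (the manifold partitions, the assignment pi, and the distance d) is shorthand introduced here, not taken from the paper.

```latex
% Content features partitioned into manifolds M^c_1, ..., M^c_K and style
% features into M^s_1, ..., M^s_L; \pi assigns each content manifold to a
% semantically matched style manifold, and d(.,.) is a distance between
% feature distributions. The transfer f aligns each matched pair:
\min_{f} \sum_{i=1}^{K} d\left( f(\mathcal{M}^{c}_{i}),\; \mathcal{M}^{s}_{\pi(i)} \right)
```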