Style Transfer with Target Feature Palette and Attention Coloring
- URL: http://arxiv.org/abs/2111.04028v1
- Date: Sun, 7 Nov 2021 08:09:20 GMT
- Title: Style Transfer with Target Feature Palette and Attention Coloring
- Authors: Suhyeon Ha, Guisik Kim, Junseok Kwon
- Abstract summary: A novel artistic stylization method with target feature palettes is proposed, which can transfer key features accurately.
Our stylized images exhibit state-of-the-art performance, with strength in preserving core structures and details of the content image.
- Score: 15.775618544581885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style transfer has attracted a lot of attention, as it can change a given
image into one with splendid artistic styles while preserving the image
structure. However, conventional approaches easily lose image details and tend
to produce unpleasant artifacts during style transfer. In this paper, to solve
these problems, a novel artistic stylization method with target feature
palettes is proposed, which can transfer key features accurately. Specifically,
our method contains two modules, namely feature palette composition (FPC) and
attention coloring (AC) modules. The FPC module captures representative
features based on K-means clustering and produces a feature target palette. The
following AC module calculates attention maps between content and style images,
and transfers colors and patterns based on the attention map and the target
palette. These modules enable the proposed stylization to focus on key features
and generate plausibly transferred images. Thus, the contributions of this work
are a novel deep learning-based style transfer method, the target feature
palette and attention coloring modules, and an in-depth analysis of the method
via an exhaustive ablation study. Qualitative and quantitative results show that our stylized
images exhibit state-of-the-art performance, with strength in preserving core
structures and details of the content image.
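As a rough illustration only, the following sketch shows how the two modules described in the abstract could fit together at the feature level, assuming pre-extracted encoder features (e.g., VGG feature maps). The function names, tensor shapes, and the plain K-means loop are assumptions made for readability, not the authors' implementation.

```python
# Illustrative sketch of an FPC-like and AC-like step on pre-extracted features.
# All names and shapes are hypothetical; the actual method operates inside a
# trained encoder-decoder stylization network.
import torch
import torch.nn.functional as F


def build_feature_palette(style_feat: torch.Tensor, k: int = 8, iters: int = 10) -> torch.Tensor:
    """FPC sketch: K-means over style feature vectors yields k representative entries."""
    c, h, w = style_feat.shape
    pts = style_feat.reshape(c, h * w).t()                # (N, C) style feature vectors
    centers = pts[torch.randperm(pts.size(0))[:k]]        # random initial centroids
    for _ in range(iters):
        assign = torch.cdist(pts, centers).argmin(dim=1)  # nearest-centroid assignment
        for j in range(k):
            members = pts[assign == j]
            if members.numel() > 0:                       # update non-empty clusters
                centers[j] = members.mean(dim=0)
    return centers                                        # (k, C) target feature palette


def attention_coloring(content_feat: torch.Tensor, palette: torch.Tensor) -> torch.Tensor:
    """AC sketch: content positions attend over palette entries and are recolored."""
    c, h, w = content_feat.shape
    q = F.normalize(content_feat.reshape(c, h * w).t(), dim=1)  # (N, C) queries
    kv = F.normalize(palette, dim=1)                            # (k, C) keys
    attn = torch.softmax(q @ kv.t() / c ** 0.5, dim=1)          # (N, k) attention map
    colored = attn @ palette                                    # blend palette features
    return colored.t().reshape(c, h, w)


if __name__ == "__main__":
    content = torch.randn(256, 32, 32)   # stand-in for encoder features
    style = torch.randn(256, 32, 32)
    palette = build_feature_palette(style, k=8)
    stylized_feat = attention_coloring(content, palette)
    print(stylized_feat.shape)           # torch.Size([256, 32, 32])
```

In the paper, the recolored features would then be decoded back into an image; this sketch only isolates the palette construction and the attention-weighted blending that the abstract describes.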
Related papers
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z) - AEANet: Affinity Enhanced Attentional Networks for Arbitrary Style Transfer [4.639424509503966]
Arbitrary style transfer is a research area that combines rational academic study with emotive artistic creation.
It aims to create a new image from a content image according to a target artistic style, while maintaining the content image's textural and structural information.
Existing style transfer methods often significantly damage the texture lines of the content image during the style transformation.
We propose an affinity-enhanced attentional network, which includes the content affinity-enhanced attention (CAEA) module, the style affinity-enhanced attention (SAEA) module, and the hybrid attention (HA) module.
arXiv Detail & Related papers (2024-09-23T01:39:11Z) - MagicStyle: Portrait Stylization Based on Reference Image [0.562479170374811]
We propose a diffusion model-based reference image stylization method specifically for portraits, called MagicStyle.
The CSDI phase involves a reverse denoising process, where DDIM Inversion is performed separately on the content image and the style image, storing the self-attention query, key, and value features of both images during the inversion process.
The FFF phase executes forward denoising, integrating the texture and color information from the pre-stored feature queries, keys, and values into the diffusion generation process based on our well-designed Feature Fusion Attention (FFA).
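As a loose sketch of the kind of feature fusion attention described in this summary: a content query attends over keys and values stored from both the content and style inversion passes. The shapes, names, and the simple concatenation used here are assumptions for illustration, not the MagicStyle code.

```python
# Hedged sketch: fuse stored self-attention keys/values from the content and
# style inversion passes and let the content query attend over both.
# Names and shapes are illustrative assumptions only.
import torch


def feature_fusion_attention(q_content, kv_content, kv_style):
    """q_content: (N, d); kv_content and kv_style: (key, value) pairs of shape (N, d)."""
    k = torch.cat([kv_content[0], kv_style[0]], dim=0)   # fused keys   (2N, d)
    v = torch.cat([kv_content[1], kv_style[1]], dim=0)   # fused values (2N, d)
    attn = torch.softmax(q_content @ k.t() / q_content.size(-1) ** 0.5, dim=-1)
    return attn @ v                                       # (N, d) fused features


# Toy usage with random stand-ins for the stored inversion features.
q = torch.randn(1024, 64)
content_kv = (torch.randn(1024, 64), torch.randn(1024, 64))
style_kv = (torch.randn(1024, 64), torch.randn(1024, 64))
print(feature_fusion_attention(q, content_kv, style_kv).shape)  # torch.Size([1024, 64])
```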
arXiv Detail & Related papers (2024-09-12T15:51:09Z) - ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z) - Implicit Style-Content Separation using B-LoRA [61.664293840163865]
We introduce B-LoRA, a method that implicitly separates the style and content components of a single image.
By analyzing the architecture of SDXL combined with LoRA, we find that jointly learning the LoRA weights of two specific blocks achieves style-content separation.
arXiv Detail & Related papers (2024-03-21T17:20:21Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive
Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
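For intuition about the adaptive scheme mentioned above, here is a hedged sketch of a contrastive style loss whose temperature is predicted from the input; the small temperature head and the loss form are assumptions for illustration, not UCAST's published formulation.

```python
# Illustrative contrastive loss with an input-dependent temperature.
# The temperature head and loss layout are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveContrastiveLoss(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # Tiny head that predicts a per-sample temperature from the anchor style code.
        self.temp_head = nn.Sequential(nn.Linear(dim, 1), nn.Softplus())

    def forward(self, anchor, positive, negatives):
        """anchor, positive: (B, D) style codes; negatives: (B, M, D)."""
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        n = F.normalize(negatives, dim=-1)
        tau = self.temp_head(anchor) + 1e-3                      # (B, 1) adaptive temperature
        pos_logit = (a * p).sum(dim=-1, keepdim=True) / tau      # (B, 1)
        neg_logits = torch.einsum("bd,bmd->bm", a, n) / tau      # (B, M)
        logits = torch.cat([pos_logit, neg_logits], dim=1)
        target = torch.zeros(anchor.size(0), dtype=torch.long)   # positive sits at index 0
        return F.cross_entropy(logits, target)


# Toy usage with random style codes.
loss_fn = AdaptiveContrastiveLoss(dim=128)
print(float(loss_fn(torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 16, 128))))
```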
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - Arbitrary Style Transfer with Structure Enhancement by Combining the
Global and Local Loss [51.309905690367835]
We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
arXiv Detail & Related papers (2022-07-23T07:02:57Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - UMFA: A photorealistic style transfer method based on U-Net and
multi-layer feature aggregation [0.0]
We propose a photorealistic style transfer network to emphasize the natural effect of photorealistic image stylization.
In particular, an encoder based on dense blocks and a decoder forming a symmetrical U-Net structure are jointly stacked to realize effective feature extraction and image reconstruction.
arXiv Detail & Related papers (2021-08-13T08:06:29Z) - Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.