Style Transfer with Target Feature Palette and Attention Coloring
- URL: http://arxiv.org/abs/2111.04028v1
- Date: Sun, 7 Nov 2021 08:09:20 GMT
- Title: Style Transfer with Target Feature Palette and Attention Coloring
- Authors: Suhyeon Ha, Guisik Kim, Junseok Kwon
- Abstract summary: A novel artistic stylization method with target feature palettes is proposed, which can transfer key features accurately.
Our stylized images exhibit state-of-the-art performance, with strength in preserving core structures and details of the content image.
- Score: 15.775618544581885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style transfer has attracted a great deal of attention, as it can change a given
image into one with splendid artistic styles while preserving the image
structure. However, conventional approaches easily lose image details and tend
to produce unpleasant artifacts during style transfer. In this paper, to solve
these problems, a novel artistic stylization method with target feature
palettes is proposed, which can transfer key features accurately. Specifically,
our method contains two modules, namely feature palette composition (FPC) and
attention coloring (AC) modules. The FPC module captures representative
features based on K-means clustering and produces a feature target palette. The
following AC module calculates attention maps between content and style images,
and transfers colors and patterns based on the attention map and the target
palette. These modules enable the proposed stylization to focus on key features
and generate plausibly transferred images. Thus, the contributions of this work
are to propose a novel deep learning-based style transfer method, to present
the target feature palette and attention coloring modules, and to provide
in-depth analysis of and insight into the proposed method via an exhaustive
ablation study. Qualitative and quantitative results show that our stylized
images exhibit state-of-the-art performance, with strength in preserving core
structures and details of the content image.
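The two-module pipeline described in the abstract can be sketched in simplified form. The toy example below is not the authors' implementation: it builds a "feature palette" by running k-means over style feature vectors (standing in for the FPC module) and then blends palette entries into content features via a softmax attention map (standing in for the AC module). All function names, shapes, and the choice of plain NumPy are illustrative assumptions.

```python
import numpy as np

def feature_palette(features, k=4, iters=10, seed=0):
    """Toy FPC: cluster feature vectors with k-means and return the
    k centroids as a 'feature palette' (simplified assumption)."""
    rng = np.random.default_rng(seed)
    n, _ = features.shape
    centroids = features[rng.choice(n, size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for j in range(k):
            members = features[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids

def attention_coloring(content_feats, palette):
    """Toy AC: soft-attend from each content feature to the palette
    entries and blend palette vectors by the attention weights."""
    scores = content_feats @ palette.T / np.sqrt(palette.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ palette

content = np.random.default_rng(1).normal(size=(64, 8))  # stand-in content features
style = np.random.default_rng(2).normal(size=(256, 8))   # stand-in style features
palette = feature_palette(style, k=4)
stylized = attention_coloring(content, palette)
print(palette.shape, stylized.shape)  # → (4, 8) (64, 8)
```

In the paper the features would come from a pretrained encoder and the stylized features would be decoded back into an image; this sketch only illustrates the palette-then-attention flow of the two modules.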
Related papers
- Palette-based Color Transfer between Images [9.471264982229508]
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
arXiv Detail & Related papers (2024-05-14T01:41:19Z)
- Implicit Style-Content Separation using B-LoRA [61.664293840163865]
We introduce B-LoRA, a method that implicitly separates the style and content components of a single image.
By analyzing the architecture of SDXL combined with LoRA, we find that jointly learning the LoRA weights of two specific blocks achieves style-content separation.
arXiv Detail & Related papers (2024-03-21T17:20:21Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
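The "input-dependent temperature" mentioned above can be illustrated with a toy InfoNCE-style contrastive loss in which each sample carries its own temperature. This sketch is an assumption for illustration only, not the UCAST implementation; the per-sample temperatures here are random stand-ins for values a network would predict from its input.

```python
import numpy as np

def info_nce(anchor, positives, temperature):
    """Toy InfoNCE loss with a per-sample temperature, illustrating
    the idea of an input-dependent temperature (assumed form)."""
    # Normalize embeddings so dot products are cosine similarities.
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Scale each row of the similarity matrix by that sample's temperature.
    logits = (a @ p.T) / temperature[:, None]
    # Row i's positive is column i; all other columns act as negatives.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
p = a + 0.05 * rng.normal(size=(8, 16))  # positives close to anchors
tau = 0.1 + 0.4 * rng.random(8)          # hypothetical per-input temperatures
loss = info_nce(a, p, tau)
print(float(loss))
```

A smaller temperature sharpens the softmax over negatives for that sample, so predicting the temperature from the input lets the loss be harder or softer per example.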
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- PalGAN: Image Colorization with Palette Generative Adversarial Networks [51.59276436217957]
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z)
- Inharmonious Region Localization with Auxiliary Style Feature [19.146209624835322]
Inharmonious region localization aims to localize the inharmonious region in a synthetic image.
We propose a novel color mapping module and a style feature loss to extract discriminative style features.
Based on the extracted style features, we also propose a novel style voting module to guide the localization of the inharmonious region.
arXiv Detail & Related papers (2022-10-05T05:37:35Z)
- Arbitrary Style Transfer with Structure Enhancement by Combining the Global and Local Loss [51.309905690367835]
We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
arXiv Detail & Related papers (2022-07-23T07:02:57Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- UMFA: A photorealistic style transfer method based on U-Net and multi-layer feature aggregation [0.0]
We propose a photorealistic style transfer network to emphasize the natural effect of photorealistic image stylization.
In particular, an encoder based on dense blocks and a decoder form a symmetrical U-Net structure and are jointly stacked to realize effective feature extraction and image reconstruction.
arXiv Detail & Related papers (2021-08-13T08:06:29Z)
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.