SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance
Normalization
- URL: http://arxiv.org/abs/2105.06129v1
- Date: Thu, 13 May 2021 08:01:01 GMT
- Title: SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance
Normalization
- Authors: Aaditya Singh, Shreeshail Hingane, Xinyu Gong, Zhangyang Wang
- Abstract summary: Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-Attention-based approaches have tackled this issue with partial success but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
- Score: 71.85169368997738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artistic style transfer aims to transfer the style characteristics of one
image onto another image while retaining its content. Existing approaches
commonly leverage various normalization techniques, although these face
limitations in adequately transferring diverse textures to different spatial
locations. Self-Attention-based approaches have tackled this issue with partial
success but suffer from unwanted artifacts. Motivated by these observations,
this paper aims to combine the best of both worlds: self-attention and
normalization. That yields a new plug-and-play module that we name
Self-Attentive Factorized Instance Normalization (SAFIN). SAFIN is
essentially a spatially adaptive normalization module whose parameters are
inferred through attention on the content and style image. We demonstrate that
plugging SAFIN into the base network of another state-of-the-art method results
in enhanced stylization. We also develop a novel base network composed of
Wavelet Transform for multi-scale style transfer, which when combined with
SAFIN, produces visually appealing results with fewer unwanted textures.
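To make the mechanism concrete, here is a minimal PyTorch-style sketch of a spatially adaptive normalization layer whose affine parameters are predicted by cross-attention between content and style features, as the abstract describes. The class name, projection layers, and tensor shapes are our own illustrative assumptions, not the authors' SAFIN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveAdaptiveNorm(nn.Module):
    """Illustrative spatially adaptive instance norm whose affine
    parameters come from cross-attention between content and style
    features (a hypothetical sketch, not the authors' SAFIN code)."""

    def __init__(self, channels, key_dim=64):
        super().__init__()
        self.to_q = nn.Conv2d(channels, key_dim, 1)   # queries from content
        self.to_k = nn.Conv2d(channels, key_dim, 1)   # keys from style
        self.to_v = nn.Conv2d(channels, channels, 1)  # values from style
        self.to_gamma = nn.Conv2d(channels, channels, 1)
        self.to_beta = nn.Conv2d(channels, channels, 1)

    def forward(self, content, style):
        b, c, h, w = content.shape
        # 1) parameter-free instance normalization of the content features
        normed = F.instance_norm(content)
        # 2) cross-attention: each content location attends over style locations
        q = self.to_q(content).flatten(2).transpose(1, 2)       # (b, hw, d)
        k = self.to_k(style).flatten(2)                         # (b, d, hw_s)
        v = self.to_v(style).flatten(2).transpose(1, 2)         # (b, hw_s, c)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1) # (b, hw, hw_s)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h, w)    # attended style
        # 3) spatially varying affine parameters inferred from attended style
        gamma, beta = self.to_gamma(ctx), self.to_beta(ctx)
        return (1 + gamma) * normed + beta
```

A typical call would be `AttentiveAdaptiveNorm(512)(content_feats, style_feats)` on VGG-style feature maps; the `(1 + gamma)` parameterization keeps the module close to an identity mapping at initialization.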
Related papers
- Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer [19.355744690301403]
We introduce a novel artistic style transfer method based on a pre-trained large-scale diffusion model without any optimization.
Our experimental results demonstrate that our proposed method surpasses state-of-the-art methods in both conventional and diffusion-based style transfer baselines.
arXiv Detail & Related papers (2023-12-11T09:53:12Z)
- Retinex-guided Channel-grouping based Patch Swap for Arbitrary Style Transfer [54.25418866649519]
The basic principle of patch-matching based style transfer is to substitute patches of the content image feature maps with the closest patches from the style image feature maps (a generic sketch of this baseline mechanism appears after this list).
Existing techniques treat the full-channel style feature patches as simple signal tensors and create new style feature patches via signal-level fusion.
We propose a Retinex theory guided, channel-grouping based patch swap technique to solve the above challenges.
arXiv Detail & Related papers (2023-09-19T11:13:56Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Styleverse: Towards Identity Stylization across Heterogeneous Domains [70.13327076710269]
We propose a new and challenging task, namely IDentity Stylization (IDS), across heterogeneous domains.
We use an effective heterogeneous-network-based framework, $Styleverse$, that uses a single domain-aware generator.
$Styleverse$ achieves higher-fidelity identity stylization compared with other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-02T04:23:01Z)
- UMFA: A photorealistic style transfer method based on U-Net and multi-layer feature aggregation [0.0]
We propose a photorealistic style transfer network to emphasize the natural effect of photorealistic image stylization.
In particular, an encoder based on dense blocks and a decoder, forming a symmetrical U-Net structure, are jointly stacked to realize effective feature extraction and image reconstruction.
arXiv Detail & Related papers (2021-08-13T08:06:29Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Anisotropic Stroke Control for Multiple Artists Style Transfer [36.92721585146738]
A Stroke Control Multi-Artist Style Transfer framework is developed.
The Anisotropic Stroke Module (ASM) endows the network with adaptive semantic consistency across various styles.
In contrast to the single-scale conditional discriminator, our discriminator is able to capture multi-scale texture clues.
arXiv Detail & Related papers (2020-10-16T05:32:26Z)
- Geometric Style Transfer [74.58782301514053]
We introduce a neural architecture that supports transfer of geometric style.
The new architecture runs prior to a network that transfers texture style.
Users can input a content/style pair as is common, or they can choose to input a content/texture-style/geometry-style triple.
arXiv Detail & Related papers (2020-07-10T16:33:23Z)
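The patch-matching principle referenced in the Retinex-guided entry above goes back to the style-swap baseline: replace each content feature patch with its most correlated style feature patch. Below is a generic, self-contained sketch of that baseline, not the proposed Retinex-guided, channel-grouping variant; the function name and the normalized cross-correlation matching are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

def style_swap(content_feat, style_feat, patch=3, stride=1):
    """Generic patch-swap baseline: replace each content feature patch
    with its nearest style patch under normalized cross-correlation.
    Illustrative sketch only; expects (1, c, H, W) feature maps."""
    # extract style patches and reshape them into conv filters: (n, c, p, p)
    patches = F.unfold(style_feat, patch, stride=stride)       # (1, c*p*p, n)
    n = patches.shape[-1]
    c = style_feat.shape[1]
    filters = patches.transpose(1, 2).reshape(n, c, patch, patch)
    norm = filters.flatten(1).norm(dim=1).clamp_min(1e-8)
    # correlate content features with every (normalized) style patch
    scores = F.conv2d(content_feat, filters / norm.view(-1, 1, 1, 1),
                      stride=stride)                           # (1, n, H', W')
    best = scores.argmax(dim=1, keepdim=True)                  # nearest patch id
    one_hot = torch.zeros_like(scores).scatter_(1, best, 1.0)
    # paste the winning style patches back; average overlapping writes
    out = F.conv_transpose2d(one_hot, filters, stride=stride)
    overlap = F.conv_transpose2d(one_hot, torch.ones_like(filters),
                                 stride=stride)
    return out / overlap.clamp_min(1e-8)
```

Overlapping patch writes are averaged by dividing by a transposed convolution of ones, a common trick for fold-style reconstruction.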
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.