Anisotropic Stroke Control for Multiple Artists Style Transfer
- URL: http://arxiv.org/abs/2010.08175v2
- Date: Mon, 14 Jun 2021 14:25:27 GMT
- Title: Anisotropic Stroke Control for Multiple Artists Style Transfer
- Authors: Xuanhong Chen, Xirui Yan, Naiyuan Liu, Ting Qiu and Bingbing Ni
- Abstract summary: A Stroke Control Multi-Artist Style Transfer framework is developed.
The Anisotropic Stroke Module (ASM) endows the network with adaptive semantic consistency across various styles.
In contrast to a single-scale conditional discriminator, our discriminator is able to capture multi-scale texture clues.
- Score: 36.92721585146738
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Though significant progress has been made in artistic style
transfer, semantic information is usually difficult to preserve in a
fine-grained, locally consistent manner by most existing methods, especially
when multiple artists' styles must be transferred within a single model. To
circumvent this issue, we propose a Stroke Control Multi-Artist Style Transfer
framework. First, we develop a multi-condition single-generator structure that
performs multi-artist style transfer. Second, we design an Anisotropic Stroke
Module (ASM) that dynamically adjusts the style stroke between non-trivial and
trivial regions; ASM endows the network with adaptive semantic consistency
across various styles. Third, we present a novel Multi-Scale Projection
Discriminator to realize texture-level conditional generation. In contrast to
a single-scale conditional discriminator, our discriminator captures
multi-scale texture clues and thus effectively distinguishes a wide range of
artistic styles (an illustrative sketch of this multi-scale projection idea
follows the abstract). Extensive experimental results demonstrate the
feasibility and effectiveness of our approach. Our framework can transform a
photograph into oil paintings of different artistic styles with only ONE
single model, and the results exhibit distinctive artistic style while
retaining anisotropic semantic information. The code is available on GitHub:
https://github.com/neuralchen/ASMAGAN.
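The Multi-Scale Projection Discriminator described above combines projection-style class conditioning with discriminator features taken at several scales. The minimal PyTorch sketch below is an assumption-based illustration of that general idea only; the class name, layer widths, and pooling choices are hypothetical and are not taken from the paper or the ASMAGAN repository.

```python
# Hedged sketch: a multi-scale discriminator with projection conditioning on the
# artist label. Architecture details here are illustrative assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn


class MultiScaleProjectionDiscriminator(nn.Module):
    """Scores real/fake at several feature scales, conditioning each scale on the
    artist label via a projection (inner product with a learned embedding)."""

    def __init__(self, num_artists: int, base_channels: int = 64):
        super().__init__()
        chans = [base_channels, base_channels * 2, base_channels * 4]
        self.blocks = nn.ModuleList()
        in_ch = 3
        for out_ch in chans:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            in_ch = out_ch
        # one unconditional head and one artist embedding per scale
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, 1, kernel_size=3, padding=1) for c in chans])
        self.embeds = nn.ModuleList(
            [nn.Embedding(num_artists, c) for c in chans])

    def forward(self, x: torch.Tensor, artist_id: torch.Tensor):
        scores = []
        h = x
        for block, head, embed in zip(self.blocks, self.heads, self.embeds):
            h = block(h)                                  # downsample features
            uncond = head(h).mean(dim=(2, 3))             # unconditional score
            pooled = h.mean(dim=(2, 3))                   # global feature at this scale
            proj = (pooled * embed(artist_id)).sum(dim=1, keepdim=True)  # projection term
            scores.append(uncond + proj)                  # conditional score per scale
        return scores                                     # list of (batch, 1) tensors


if __name__ == "__main__":
    # illustrative usage: 4 photos, 3 artist styles
    disc = MultiScaleProjectionDiscriminator(num_artists=3)
    imgs = torch.randn(4, 3, 128, 128)
    labels = torch.randint(0, 3, (4,))
    print([s.shape for s in disc(imgs, labels)])
```

In this sketch each scale contributes an unconditional score plus a projection term, so texture mismatches can be penalized at both coarse and fine resolutions, which is the intuition behind conditioning on multi-scale texture clues.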
Related papers
- Diffusion-based Human Motion Style Transfer with Semantic Guidance [23.600154466988073]
We propose a novel framework for few-shot style transfer learning based on the diffusion model.
In the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior.
In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer.
arXiv Detail & Related papers (2024-03-20T05:52:11Z)
- DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer [27.39248034592382]
We propose using a new class of models to perform style transfer while enabling deformable style transfer.
We show how leveraging the priors of these models can expose new artistic controls at inference time.
arXiv Detail & Related papers (2023-07-09T12:13:43Z)
- ArtFusion: Controllable Arbitrary Style Transfer using Dual Conditional Latent Diffusion Models [0.0]
Arbitrary Style Transfer (AST) aims to transform images by adopting the style from any selected artwork.
We propose a new approach, ArtFusion, which provides a flexible balance between content and style.
arXiv Detail & Related papers (2023-06-15T17:58:36Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity [94.5479418998225]
We propose a new style transfer framework called QuantArt for high visual-fidelity stylization.
Our framework achieves significantly higher visual fidelity compared with the existing style transfer methods.
arXiv Detail & Related papers (2022-12-20T17:09:53Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance Normalization [71.85169368997738]
Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-Attention-based approaches have tackled this issue with partial success but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
arXiv Detail & Related papers (2021-05-13T08:01:01Z)
- StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [119.03470556503942]
The cross-modal matching problem is typically solved by learning a joint embedding space where semantic content shared between the photo and sketch modalities is preserved.
An effective model needs to explicitly account for this style diversity and, crucially, generalize to unseen user styles.
Our model can not only disentangle the cross-modal shared semantic content, but can also adapt the disentanglement to any unseen user style, making the model truly agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z)