Name Your Style: An Arbitrary Artist-aware Image Style Transfer
- URL: http://arxiv.org/abs/2202.13562v1
- Date: Mon, 28 Feb 2022 06:21:38 GMT
- Title: Name Your Style: An Arbitrary Artist-aware Image Style Transfer
- Authors: Zhi-Song Liu, Li-Wen Wang, Wan-Chi Siu, Vicky Kalogeiton
- Abstract summary: We propose a text-driven image style transfer (TxST) that leverages advanced image-text encoders to control arbitrary style transfer.
We introduce a contrastive training strategy to effectively extract style descriptions from the image-text model.
We also propose a novel and efficient attention module that explores cross-attentions to fuse style and content features.
- Score: 38.41608300670523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image style transfer has attracted widespread attention in the past few
years. Despite its remarkable results, it requires additional style images
available as references, making it less flexible and inconvenient. Using text
is the most natural way to describe the style. More importantly, text can
describe implicit abstract styles, like styles of specific artists or art
movements. In this paper, we propose a text-driven image style transfer (TxST)
that leverages advanced image-text encoders to control arbitrary style
transfer. We introduce a contrastive training strategy to effectively extract
style descriptions from the image-text model (i.e., CLIP), which aligns
stylization with the text description. To this end, we also propose a novel and
efficient attention module that explores cross-attentions to fuse style and
content features. Finally, we achieve an arbitrary artist-aware image style
transfer to learn and transfer specific artistic characters such as Picasso,
oil painting, or a rough sketch. Extensive experiments demonstrate that our
approach outperforms the state-of-the-art methods on both image and textual
styles. Moreover, it can mimic the styles of one or many artists to achieve
attractive results, thus highlighting a promising direction in image style
transfer.
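The abstract only sketches the contrastive training strategy. The snippet below is a minimal illustration, not the authors' code: it assumes OpenAI's clip package and a hypothetical contrastive_style_loss that pulls each stylized image toward its own style description in CLIP space and pushes it away from the other descriptions in the batch.

```python
# Hedged sketch of a CLIP-based contrastive style loss (names and details assumed).
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
for p in clip_model.parameters():        # CLIP stays frozen; only the stylizer trains
    p.requires_grad_(False)

def contrastive_style_loss(stylized, style_texts, tau=0.07):
    """InfoNCE-style loss: each stylized image should be closest (in CLIP space)
    to its own style description, e.g. "an oil painting by Picasso".
    stylized    : (B, 3, 224, 224) images, already CLIP-normalized
    style_texts : list of B style strings, one per image
    """
    img = F.normalize(clip_model.encode_image(stylized), dim=-1)    # (B, D)
    tokens = clip.tokenize(style_texts).to(stylized.device)
    txt = F.normalize(clip_model.encode_text(tokens), dim=-1)       # (B, D)
    logits = img @ txt.t() / tau                                    # (B, B) similarities
    target = torch.arange(len(style_texts), device=stylized.device)
    # Symmetric cross-entropy over image-to-text and text-to-image matching.
    return 0.5 * (F.cross_entropy(logits, target) +
                  F.cross_entropy(logits.t(), target))
```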
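The cross-attention fusion module is likewise only described at a high level. The sketch below shows one plausible reading, with all module names, token counts, and dimensions assumed: content features act as queries, while a handful of tokens projected from a CLIP style embedding supply the keys and values.

```python
# Hedged sketch of a style/content cross-attention block (not the paper's implementation).
import torch
import torch.nn as nn

class StyleCrossAttention(nn.Module):
    """Content features attend to a projected CLIP style embedding.
    content : (B, C, H, W) feature map from the content encoder
    style   : (B, D) CLIP embedding of a style image or style text
    """
    def __init__(self, content_dim=512, style_dim=512, num_tokens=4, heads=8):
        super().__init__()
        # Expand the single style vector into a few learnable "style tokens".
        self.to_style_tokens = nn.Linear(style_dim, num_tokens * content_dim)
        self.num_tokens = num_tokens
        self.attn = nn.MultiheadAttention(content_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(content_dim)

    def forward(self, content, style):
        b, c, h, w = content.shape
        q = content.flatten(2).transpose(1, 2)                         # (B, HW, C) queries
        kv = self.to_style_tokens(style).view(b, self.num_tokens, c)   # (B, T, C) keys/values
        fused, _ = self.attn(self.norm(q), kv, kv)                     # cross-attention
        fused = q + fused                                              # residual connection
        return fused.transpose(1, 2).view(b, c, h, w)

# Usage: the fused feature map would go to the decoder that renders the stylized image.
content_feat = torch.randn(2, 512, 32, 32)
style_emb = torch.randn(2, 512)
out = StyleCrossAttention()(content_feat, style_emb)   # (2, 512, 32, 32)
```

The residual connection keeps content structure intact when the style signal is weak; the number of style tokens is a free design choice here.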
Related papers
- Bridging Text and Image for Artist Style Transfer via Contrastive Learning [21.962361974579036]
We propose a Contrastive Learning for Artistic Style Transfer (CLAST) to control arbitrary style transfer.
We introduce a supervised contrastive training strategy to effectively extract style descriptions from the image-text model.
We also propose a novel and efficient adaLN-based state space model that explores style-content fusion.
arXiv Detail & Related papers (2024-10-12T15:27:57Z)
- DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models [11.164432246850247]
We introduce DreamStyler, a novel framework designed for artistic image synthesis.
DreamStyler is proficient in both text-to-image synthesis and style transfer.
With content and style guidance, DreamStyler exhibits flexibility to accommodate a range of style references.
arXiv Detail & Related papers (2023-09-13T13:13:29Z)
- Any-to-Any Style Transfer: Making Picasso and Da Vinci Collaborate [58.83278629019384]
Style transfer aims to render the style of a given style reference image onto another given content image.
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
We propose Any-to-Any Style Transfer, which enables users to interactively select styles of regions in the style image and apply them to the prescribed content regions.
arXiv Detail & Related papers (2023-04-19T15:15:36Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- Inversion-Based Style Transfer with Diffusion Models [78.93863016223858]
Previous arbitrary example-guided artistic image generation methods often fail to control shape changes or convey elements.
We propose an inversion-based style transfer method (InST), which can efficiently and accurately learn the key information of an image.
arXiv Detail & Related papers (2022-11-23T18:44:25Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture that controls the balance between the content and style of the diffused results.
We propose learnable noise based on the content image as the starting point of the reverse denoising process, enabling the stylization results to better preserve the structural information of the content image.
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- CLIPstyler: Image Style Transfer with a Single Text Condition [34.24876359759408]
Existing neural style transfer methods require reference style images to transfer texture information of style images to content images.
We propose a new framework that enables style transfer without a style image, using only a text description of the desired style.
arXiv Detail & Related papers (2021-12-01T09:48:53Z)
- Language-Driven Image Style Transfer [72.36790598245096]
We introduce a new task, language-driven image style transfer (LDIST), to manipulate the style of a content image, guided by a text.
The discriminator considers the correlation between language and patches of style images or transferred results to jointly embed style instructions.
Experiments show that our CLVA is effective and achieves superb transferred results on LDIST.
arXiv Detail & Related papers (2021-06-01T01:58:50Z)