Edge Enhanced Image Style Transfer via Transformers
- URL: http://arxiv.org/abs/2301.00592v1
- Date: Mon, 2 Jan 2023 10:39:31 GMT
- Title: Edge Enhanced Image Style Transfer via Transformers
- Authors: Chiyu Zhang, Jun Yang, Zaiyan Dai, Peng Cao
- Abstract summary: Arbitrary image style transfer has attracted increasing attention.
It is difficult to maintain the trade-off between content details and style features.
We present a new transformer-based method named STT for image style transfer, together with an edge loss.
- Score: 6.666912981102909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, arbitrary image style transfer has attracted increasing
attention. Given a pair of content and style images, the goal is to produce a stylized
image that retains the content of the former while capturing the style patterns of the
latter. However, it is difficult to maintain the trade-off between content details and
style features. Stylizing an image with sufficient style patterns can damage the
content details, sometimes to the point that objects in the image can no longer be
distinguished clearly. For this reason, we present a new transformer-based method for
image style transfer, named STT, together with an edge loss that noticeably enhances
content details and avoids the blurred results caused by excessive rendering of style
features. Qualitative and quantitative experiments demonstrate that STT achieves
performance comparable to state-of-the-art image style transfer methods while
alleviating the content leak problem.
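The abstract does not spell out the form of the edge loss. A minimal sketch of one plausible formulation, assuming PyTorch and a Sobel edge detector with an L1 penalty between the edge maps of the content and stylized images (all names here are illustrative, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal and vertical gradients (shape: 1 x 1 x 3 x 3).
SOBEL_X = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_map(img: torch.Tensor) -> torch.Tensor:
    """Gradient-magnitude edge map of a batch of RGB images (B x 3 x H x W)."""
    gray = img.mean(dim=1, keepdim=True)          # collapse RGB to grayscale
    gx = F.conv2d(gray, SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(gray, SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)   # eps keeps sqrt differentiable at 0

def edge_loss(stylized: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """L1 distance between edge maps of the stylized output and the content image."""
    return F.l1_loss(edge_map(stylized), edge_map(content))
```

A loss of this shape penalizes the stylized image whenever its edges drift from those of the content image, which matches the stated goal of keeping object boundaries distinguishable under heavy stylization.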
Related papers
- D2Styler: Advancing Arbitrary Style Transfer with Discrete Diffusion Methods [2.468658581089448]
We propose a novel framework called D2Styler (Discrete Diffusion Styler).
Our method uses Adaptive Instance Normalization (AdaIN) features as a context guide for the reverse diffusion process; a sketch of AdaIN follows below.
Experimental results demonstrate that D2Styler produces high-quality style-transferred images.
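AdaIN itself is standard (Huang and Belongie, 2017): it normalizes content features per channel and re-scales them with the style statistics. A minimal PyTorch sketch of the operation (how D2Styler feeds these features into its diffusion process is not specified in this summary):

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization: shift and scale the content features so
    their per-channel mean/std match the style features (inputs: B x C x H x W)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps  # eps avoids divide-by-zero
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    return s_std * (content_feat - c_mean) / c_std + s_mean
```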
arXiv Detail & Related papers (2024-08-07T05:47:06Z)
- Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network [32.12413686394824]
Style transfer aims to render an image with the artistic features of a style image, while maintaining the original structure.
It is difficult for CNN-based methods to handle global information and long-range dependencies between input images.
We propose a novel network termed Puff-Net, i.e., pure content and style feature fusion network.
arXiv Detail & Related papers (2024-05-30T07:41:07Z)
- InfoStyler: Disentanglement Information Bottleneck for Artistic Style Transfer [22.29381866838179]
Artistic style transfer aims to transfer the style of an artwork to a photograph while maintaining its original overall content.
We propose a novel information disentanglement method, named InfoStyler, to capture the minimal sufficient information for both content and style representations.
arXiv Detail & Related papers (2023-07-30T13:38:56Z)
- StyleStegan: Leak-free Style Transfer Based on Feature Steganography [19.153040728118285]
Existing style transfer methods suffer from a serious content leakage issue.
We propose a leak-free style transfer method based on feature steganography.
The results demonstrate that StyleStegan successfully mitigates the content leakage issue in serial and reversible style transfer tasks.
arXiv Detail & Related papers (2023-07-01T05:00:19Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture to control the balance between the content and style of diffused results.
We propose content image-based learnable noise on which the reverse denoising process is based, enabling the stylization results to better preserve the structure of the content image; see the sketch below.
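The summary does not define the learnable noise; a common pattern it resembles is seeding the reverse diffusion process with a noised version of the content image rather than pure Gaussian noise. A rough sketch under that assumption, where alpha_bar_T is the cumulative noise-schedule coefficient at the final step (hypothetical names, not DiffStyler's actual code):

```python
import math
import torch

def content_guided_init(content_latent: torch.Tensor, alpha_bar_T: float) -> torch.Tensor:
    """Start reverse denoising from a noised content latent (the DDPM forward
    marginal at step T) so the trajectory retains the content structure."""
    noise = torch.randn_like(content_latent)
    return math.sqrt(alpha_bar_T) * content_latent + math.sqrt(1.0 - alpha_bar_T) * noise
```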
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- Improving the Latent Space of Image Style Transfer [24.37383949267162]
In some cases, the feature statistics from the pre-trained encoder may not be consistent with the visual style we perceive.
In such an inappropriate latent space, the objective function of the existing methods will be optimized in the wrong direction.
We propose two contrastive training schemes to obtain a refined encoder that is more suitable for this task; a generic sketch of the contrastive objective follows below.
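The two schemes themselves are not described in this summary. For reference, contrastive training typically optimizes an InfoNCE-style objective such as the following PyTorch sketch; the choice of anchor/positive pairs here is an assumption, not the paper's scheme:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: pull each anchor embedding toward its positive and push it
    away from every other sample in the batch (inputs: B x D embeddings)."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature   # B x B cosine similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)         # diagonal pairs are the positives
```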
arXiv Detail & Related papers (2022-05-24T15:13:01Z)
- Language-Driven Image Style Transfer [72.36790598245096]
We introduce a new task -- language-driven image style transfer (LDIST) -- to manipulate the style of a content image, guided by text.
The discriminator considers the correlation between language and patches of style images or transferred results to jointly embed style instructions.
Experiments show that our CLVA is effective and achieves superb transferred results on LDIST.
arXiv Detail & Related papers (2021-06-01T01:58:50Z)
- StyTr^2: Unbiased Image Style Transfer with Transformers [59.34108877969477]
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
Traditional neural style transfer methods are usually biased, and content leak can be observed by running the style transfer process several times with the same reference image.
We propose a transformer-based approach, namely StyTr^2, to address this critical issue.
arXiv Detail & Related papers (2021-05-30T15:57:09Z)
- ArtFlow: Unbiased Image Style Transfer via Reversible Neural Flows [101.16791104543492]
ArtFlow is proposed to prevent content leak during universal style transfer.
It supports both forward and backward inferences and operates in a projection-transfer-reversion scheme.
It achieves comparable performance to state-of-the-art style transfer methods while avoiding content leak.
arXiv Detail & Related papers (2021-03-31T07:59:02Z)
- Arbitrary Style Transfer via Multi-Adaptation Network [109.6765099732799]
A desired style transfer, given a content image and referenced style painting, would render the content image with the color tone and vivid stroke patterns of the style painting.
A new disentanglement loss function enables our network to extract main style patterns and exact content structures to adapt to various input images.
arXiv Detail & Related papers (2020-05-27T08:00:22Z)
- Parameter-Free Style Projection for Arbitrary Style Transfer [64.06126075460722]
This paper proposes a new feature-level style transformation technique, named Style Projection, for parameter-free, fast, and effective content-style transformation.
This paper further presents a real-time feed-forward model to leverage Style Projection for arbitrary image style transfer.
arXiv Detail & Related papers (2020-03-17T13:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.