Neural Style Transfer for Vector Graphics
- URL: http://arxiv.org/abs/2303.03405v1
- Date: Mon, 6 Mar 2023 16:57:45 GMT
- Title: Neural Style Transfer for Vector Graphics
- Authors: Valeria Efimova, Artyom Chebykin, Ivan Jarsky, Evgenii Prosvirnin,
Andrey Filchenkov
- Abstract summary: Style transfer between vector images has hardly been considered.
Applying standard content and style losses changes the drawing style of a vector image only insignificantly.
A new method based on differentiable rasterization can change the color and shape parameters of the content image to match the drawing style of the style image.
- Score: 3.8983556368110226
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural style transfer has drawn researchers' attention, but that interest has
focused on bitmap images. Various models have been developed for bitmap image
generation, both online and offline, with arbitrary and pre-trained styles.
However, style transfer between vector images has hardly been considered. Our
research shows that applying standard content and style losses changes the
drawing style of a vector image only insignificantly, because the structure of
vector primitives differs greatly from that of pixels. To handle this problem,
we introduce new loss functions. We also develop a new method based on
differentiable rasterization that uses these loss functions and can change the
color and shape parameters of the content image to match the drawing style of
the style image. Qualitative experiments demonstrate the effectiveness of the
proposed VectorNST method compared with state-of-the-art neural style transfer
approaches for bitmap images and with the only existing approach for stylizing
vector images, DiffVG. Although the proposed model does not achieve the quality
and smoothness of style transfer between bitmap images, we consider our work an
important early step in this area. VectorNST code and a demo service are
available at https://github.com/IzhanVarsky/VectorNST.
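The abstract describes an optimization loop that backpropagates image-space losses through a differentiable rasterizer into the color and shape parameters of vector primitives. A minimal sketch of that setup follows, using the pydiffvg rasterizer from DiffVG; since the paper's new vector-aware losses are not spelled out here, the classic Gatys content and Gram-matrix style losses stand in for them, and the file names are placeholders.

```python
# Sketch: optimizing vector shape/color parameters through a differentiable
# rasterizer. 'content.svg' and 'style.png' are placeholder paths; the Gatys
# losses below are stand-ins for the paper's vector-aware losses.
import torch
import torchvision
import pydiffvg  # differentiable rasterizer, https://github.com/BachiLi/diffvg
from torchvision.io import read_image, ImageReadMode

pydiffvg.set_use_gpu(torch.cuda.is_available())
device = pydiffvg.get_device()

# Load the content SVG as a differentiable scene.
w, h, shapes, groups = pydiffvg.svg_to_scene('content.svg')

# Parameters the optimizer is allowed to change: path control points and fills.
points = [s.points.requires_grad_(True) for s in shapes if hasattr(s, 'points')]
colors = [g.fill_color.requires_grad_(True) for g in groups
          if g.fill_color is not None]

vgg = torchvision.models.vgg19(weights='DEFAULT').features.to(device).eval()
mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def features(img, layers=(1, 6, 11, 20)):
    feats, x = [], (img - mean) / std
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f):
    b, c, hh, ww = f.shape
    f = f.view(b, c, hh * ww)
    return f @ f.transpose(1, 2) / (c * hh * ww)

def rasterize():
    args = pydiffvg.RenderFunction.serialize_scene(w, h, shapes, groups)
    img = pydiffvg.RenderFunction.apply(w, h, 2, 2, 0, None, *args)
    # (H, W, RGBA) composited over white -> (1, 3, H, W)
    img = img[:, :, 3:4] * img[:, :, :3] + (1 - img[:, :, 3:4])
    return img.permute(2, 0, 1).unsqueeze(0)

style_img = read_image('style.png', ImageReadMode.RGB).float().div(255)
style_img = style_img.unsqueeze(0).to(device)
with torch.no_grad():
    style_grams = [gram(f) for f in features(style_img)]
    content_feats = features(rasterize())

opt = torch.optim.Adam(points + colors, lr=0.01)
for step in range(500):
    opt.zero_grad()
    feats = features(rasterize())
    content_loss = torch.nn.functional.mse_loss(feats[-1], content_feats[-1])
    style_loss = sum(torch.nn.functional.mse_loss(gram(f), g)
                     for f, g in zip(feats, style_grams))
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
    for c in colors:                     # keep RGBA values in a valid range
        c.data.clamp_(0.0, 1.0)

pydiffvg.save_svg('stylized.svg', w, h, shapes, groups)
```

Re-serializing the scene on every iteration is what lets gradients reach the path control points and fill colors directly, so the output stays a vector image throughout.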
Related papers
- SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis [66.44553285020066]
SuperSVG is a superpixel-based vectorization model that achieves fast and high-precision image vectorization.
We propose a two-stage self-training framework in which a coarse-stage model reconstructs the main structure and a refinement-stage model enriches the details.
Experiments demonstrate the superior performance of our method in terms of reconstruction accuracy and inference time compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-14T07:43:23Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components: a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of the style distribution, and a generative network for style transfer.
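The summary names an input-dependent temperature as the key ingredient. A minimal sketch of what such an adaptive contrastive loss could look like in PyTorch is below; the projector, the `temp_head` that predicts a per-sample temperature, and the InfoNCE form are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

class AdaptiveContrastiveLoss(torch.nn.Module):
    """InfoNCE-style loss whose temperature is predicted from the input."""
    def __init__(self, feat_dim=512, proj_dim=128):
        super().__init__()
        self.proj = torch.nn.Linear(feat_dim, proj_dim)   # style projector (assumed)
        self.temp_head = torch.nn.Linear(feat_dim, 1)     # hypothetical temperature head

    def forward(self, anchor, positive, negatives):
        # anchor, positive: (B, D); negatives: (B, K, D)
        z_a = F.normalize(self.proj(anchor), dim=-1)
        z_p = F.normalize(self.proj(positive), dim=-1)
        z_n = F.normalize(self.proj(negatives), dim=-1)
        # Input-dependent temperature: each anchor sets its own sharpness.
        tau = F.softplus(self.temp_head(anchor)) + 1e-2   # (B, 1), strictly positive
        pos = (z_a * z_p).sum(-1, keepdim=True) / tau     # (B, 1)
        neg = torch.einsum('bd,bkd->bk', z_a, z_n) / tau  # (B, K)
        logits = torch.cat([pos, neg], dim=1)
        # The positive pair is class 0 of a (K+1)-way classification.
        target = torch.zeros(len(logits), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, target)
```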
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Scaling Painting Style Transfer [10.059627473725508]
Neural style transfer (NST) is a technique that produces an unprecedentedly rich style transfer from a style image to a content image.
This paper presents a method for solving the original global optimization for ultra-high resolution (UHR) images.
We show that our method produces style transfer of unmatched quality for such high-resolution painting styles.
arXiv Detail & Related papers (2022-12-27T12:03:38Z)
- DSI2I: Dense Style for Unpaired Image-to-Image Translation [70.93865212275412]
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar.
We propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information.
Our results show that the translations produced by our approach are more diverse, preserve the source content better, and are closer to the exemplars when compared to the state-of-the-art methods.
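A rough illustration of the dense-style idea: instead of pooling the exemplar's features into one global style vector, keep a spatial style map and transport it onto the source layout via feature correspondence. The soft nearest-neighbour matching below is an assumed stand-in for however DSI2I actually aligns the two images.

```python
import torch
import torch.nn.functional as F

def global_style(style_feat):
    # Conventional approach: pool to a single style code per exemplar.
    return style_feat.mean(dim=(2, 3))                     # (B, C)

def dense_style(src_feat, style_feat, tau=0.07):
    # Keep a per-location style map and transport it to the source layout
    # with a soft nearest-neighbour match in feature space (tau: softmax
    # temperature, an illustrative choice).
    B, C, H, W = src_feat.shape
    src = F.normalize(src_feat.flatten(2), dim=1)          # (B, C, HW)
    sty = F.normalize(style_feat.flatten(2), dim=1)        # (B, C, H'W')
    attn = torch.softmax(src.transpose(1, 2) @ sty / tau, dim=-1)  # (B, HW, H'W')
    warped = attn @ style_feat.flatten(2).transpose(1, 2)  # (B, HW, C)
    return warped.transpose(1, 2).view(B, C, H, W)         # dense style map
```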
arXiv Detail & Related papers (2022-12-26T18:45:25Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
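As a hedged sketch of the normalizing-flow idea behind a TSFlow-like module: invertible affine couplings map tone-style codes to a Gaussian latent, so sampling that latent and inverting the flow yields diverse plausible styles. Dimensions, depth, and the coupling design here are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        # Predicts a scale and shift for the second half from the first half.
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1), s.sum(-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)

class StyleFlow(nn.Module):
    """Train by maximizing standard-normal log-likelihood of z plus log-det."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(depth))

    def forward(self, x):                 # style code -> latent z, log-det
        logdet = torch.zeros(len(x))
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(-1)                # swap halves between couplings
            logdet = logdet + ld
        return x, logdet

    def sample(self, n):                  # Gaussian latent -> diverse style codes
        z = torch.randn(n, self.dim)
        for layer in reversed(self.layers):
            z = layer.inverse(z.flip(-1))
        return z
```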
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Non-Parametric Style Transfer [0.9137554315375919]
Recent feed-forward neural methods for arbitrary image style transfer have mainly utilized the encoded feature map up to its second-order statistics.
We extend this second-order statistical feature matching to a general distribution matching, based on the understanding that the style of an image is represented by the distribution of responses from receptive fields.
Our results show that the stylized images obtained with our method are more similar to the target style images under all existing style measures, without losing content clearness.
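To make the distinction concrete: second-order matching constrains only the feature covariance (Gram matrix), while distribution matching constrains the whole empirical distribution of responses. The sketch below contrasts the two, using simple per-channel sorted matching as the distributional example; the paper's actual matching scheme may differ.

```python
import torch

def second_order_loss(f, g):
    # f, g: (C, N) feature responses; matches only the C x C Gram statistics.
    gram = lambda x: x @ x.t() / x.shape[1]
    return (gram(f) - gram(g)).pow(2).mean()

def distribution_loss(f, g):
    # Matches the whole per-channel response distribution, not just moments:
    # sorting each channel aligns the empirical CDFs (1-D optimal transport).
    # Assumes f and g carry the same number of samples N; resample otherwise.
    return (f.sort(dim=1).values - g.sort(dim=1).values).pow(2).mean()
```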
arXiv Detail & Related papers (2022-06-26T16:34:37Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components: a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of the style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
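One simple way to impose such a saliency constraint is to restrict the style statistics to salient regions on both sides, as in the sketch below; the Gram matching and the externally supplied saliency maps are stand-ins for the paper's SIFT-based correspondence.

```python
import torch
import torch.nn.functional as F

def masked_gram(feat, sal):
    # feat: (1, C, H, W) VGG-like features; sal: (1, 1, Hs, Ws) saliency in [0, 1]
    m = F.interpolate(sal, size=feat.shape[2:], mode='bilinear', align_corners=False)
    f = (feat * m).flatten(2).squeeze(0)          # suppress non-salient responses
    return f @ f.t() / m.sum().clamp(min=1.0)     # saliency-weighted Gram matrix

def saliency_style_loss(content_feat, style_feat, content_sal, style_sal):
    # Style statistics are compared only where both images are salient,
    # so background texture does not leak into foreground objects.
    return (masked_gram(content_feat, content_sal)
            - masked_gram(style_feat, style_sal)).pow(2).mean()
```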
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- StyTr^2: Unbiased Image Style Transfer with Transformers [59.34108877969477]
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
Traditional neural style transfer methods are usually biased, and content leakage can be observed by running the style transfer process several times with the same reference image.
We propose a transformer-based approach, StyTr^2, to address this critical issue.
arXiv Detail & Related papers (2021-05-30T15:57:09Z)
- Stylized Neural Painting [0.0]
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
arXiv Detail & Related papers (2020-11-16T17:24:21Z)