VectorPainter: Advanced Stylized Vector Graphics Synthesis Using Stroke-Style Priors
- URL: http://arxiv.org/abs/2405.02962v2
- Date: Thu, 26 Dec 2024 08:39:26 GMT
- Title: VectorPainter: Advanced Stylized Vector Graphics Synthesis Using Stroke-Style Priors
- Authors: Juncheng Hu, Ximing Xing, Jing Zhang, Qian Yu
- Abstract summary: We introduce VectorPainter, a novel framework designed for reference-guided text-to-vector-graphics synthesis.
Our method first converts the pixels of the reference image into a series of vector strokes, and then generates a vector graphic based on the input text description.
To preserve the style of the strokes throughout the generation process, we introduce a style-preserving loss function.
- Score: 18.477188153621125
- Abstract: We introduce VectorPainter, a novel framework designed for reference-guided text-to-vector-graphics synthesis. Based on our observation that the style of strokes can be an important aspect distinguishing different artists, our method reformulates the task as synthesizing a desired vector graphic by rearranging stylized strokes, which are vectorized from the reference image. Specifically, our method first converts the pixels of the reference image into a series of vector strokes, and then generates a vector graphic based on the input text description by optimizing the positions and colors of these vector strokes. To precisely capture the style of the reference image in the vectorized strokes, we propose an innovative vectorization method that employs an imitation learning strategy. To preserve the style of the strokes throughout the generation process, we introduce a style-preserving loss function. Extensive experiments demonstrate the superiority of our approach over existing works in stylized vector graphics synthesis, as well as the effectiveness of the various components of our method.
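The optimization described in the abstract can be pictured with a toy sketch. The following is a minimal, hypothetical illustration, not the paper's implementation: stroke colors vectorized from a reference image are pulled toward a stand-in "content" target (standing in for text-driven guidance) while a style-preserving term anchors them to their reference values. The quadratic losses, the target, and all names here are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of the abstract's idea (NOT the paper's code): strokes
# vectorized from a reference image are optimized toward a content target
# while a style-preserving term keeps them near their original values.

rng = np.random.default_rng(0)

ref_col = rng.random((16, 3))             # reference stroke colors (RGB)
target_color = np.array([0.8, 0.2, 0.1])  # stand-in for text-driven guidance

col = ref_col.copy()                      # stroke colors being optimized
lam = 0.5                                 # weight of the style-preserving loss
lr = 0.1

for _ in range(200):
    # Per-element gradient of ||col - target||^2 + lam * ||col - ref||^2.
    grad = 2.0 * (col - target_color) + lam * 2.0 * (col - ref_col)
    col -= lr * grad

# The combined quadratic loss has a closed-form optimum: a blend of the
# content target and the reference stroke color, weighted by lam.
expected = (target_color + lam * ref_col) / (1 + lam)
print(np.allclose(col, expected, atol=1e-6))  # → True
```

With a nonzero `lam`, each stroke never fully abandons its reference color, which is the intuition behind a style-preserving loss.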
Related papers
- SVGDreamer++: Advancing Editability and Diversity in Text-Guided SVG Generation [31.76771064173087]
We propose a novel text-guided vector graphics synthesis method to address limitations of existing methods.
We introduce a Hierarchical Image VEctorization (HIVE) framework that operates at the semantic object level.
We also present a Vectorized Particle-based Score Distillation (VPSD) approach to improve the diversity of output SVGs.
arXiv Detail & Related papers (2024-11-26T19:13:38Z)
- Segmentation-guided Layer-wise Image Vectorization with Gradient Fills
We propose a segmentation-guided vectorization framework to convert images into concise vector graphics with gradient fills.
With the guidance of an embedded gradient-aware segmentation, our approach progressively appends gradient-filled Bézier paths to the output.
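For intuition, the geometry of such a path is built from cubic Bézier segments, the standard SVG path primitive. A minimal evaluation of one segment looks like the sketch below; the function name and control points are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Evaluate a cubic Bézier segment at parameter values t in [0, 1]
# using the Bernstein-polynomial form. p0..p3 are 2D control points.
def bezier_point(p0, p1, p2, p3, t):
    t = np.asarray(t, dtype=float)
    return ((1 - t) ** 3)[..., None] * p0 \
         + (3 * (1 - t) ** 2 * t)[..., None] * p1 \
         + (3 * (1 - t) * t ** 2)[..., None] * p2 \
         + (t ** 3)[..., None] * p3

# An arch: endpoints on the x-axis, control points lifted to y = 1.
p0, p1, p2, p3 = map(np.array, ([0., 0.], [0., 1.], [1., 1.], [1., 0.]))
pts = bezier_point(p0, p1, p2, p3, np.array([0.0, 0.5, 1.0]))
print(pts)  # endpoints [0,0] and [1,0]; midpoint [0.5, 0.75]
```

A vectorizer that "appends paths" is, at bottom, adding more such segments and optimizing their control points and fill parameters.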
arXiv Detail & Related papers (2024-08-28T12:08:25Z)
- SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis [66.44553285020066]
SuperSVG is a superpixel-based vectorization model that achieves fast and high-precision image vectorization.
We propose a two-stage self-training framework, where a coarse-stage model is employed to reconstruct the main structure and a refinement-stage model is used for enriching the details.
Experiments demonstrate the superior performance of our method in terms of reconstruction accuracy and inference time compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-14T07:43:23Z)
- Layered Image Vectorization via Semantic Simplification [46.23779847614095]
This work presents a novel progressive image vectorization technique aimed at generating layered vectors that represent the original image from coarse to fine detail levels.
Our approach introduces semantic simplification, which combines Score Distillation Sampling and semantic segmentation to iteratively simplify the input image.
Our method provides robust optimization, which avoids local minima and enables adjustable detail levels in the final output.
arXiv Detail & Related papers (2024-06-08T08:54:35Z)
- StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis [112.25071764647683]
StrokeNUWA is a pioneering work exploring a better visual representation, "stroke tokens", for vector graphics.
Equipped with stroke tokens, StrokeNUWA can significantly surpass traditional LLM-based and optimization-based methods.
StrokeNUWA achieves up to a 94× inference speedup over prior methods, with an exceptional SVG code compression ratio of 6.9%.
arXiv Detail & Related papers (2024-01-30T15:20:26Z)
- Text-Guided Vector Graphics Customization [31.41266632288932]
We propose a novel pipeline that generates high-quality customized vector graphics based on textual prompts.
Our method harnesses the capabilities of large pre-trained text-to-image models.
We evaluate our method using multiple metrics from vector-level, image-level and text-level perspectives.
arXiv Detail & Related papers (2023-09-21T17:59:01Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
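The idea behind a distance-transform loss can be sketched as follows. This is a hedged, brute-force illustration of the general concept only; the paper's formulation is differentiable, which this version is not, and `dt_loss` and the masks are hypothetical names.

```python
import numpy as np

# Sketch of a distance-transform loss: each pixel painted by the stroke
# renderer is penalized by its Euclidean distance to the nearest pixel
# of the target stroke mask. (A real implementation would use an O(n)
# distance transform and a differentiable relaxation; this brute-force
# pairwise version is for intuition only.)

def dt_loss(rendered: np.ndarray, target: np.ndarray) -> float:
    """rendered, target: binary HxW stroke masks."""
    ry, rx = np.nonzero(rendered)
    ty, tx = np.nonzero(target)
    if ry.size == 0 or ty.size == 0:
        return 0.0
    # Pairwise distances rendered -> target, then the nearest-target
    # distance per rendered pixel, averaged over painted pixels.
    d = np.hypot(ry[:, None] - ty[None, :], rx[:, None] - tx[None, :])
    return float(d.min(axis=1).mean())

target = np.zeros((8, 8), dtype=np.uint8)
target[4, :] = 1                  # horizontal target stroke on row 4
shifted = np.zeros_like(target)
shifted[6, :] = 1                 # same stroke painted two rows too low

print(dt_loss(target, target))    # 0.0: strokes coincide
print(dt_loss(shifted, target))   # 2.0: every painted pixel is 2 away
```

Unlike a plain pixel-wise loss, this penalty decreases smoothly as a misplaced stroke approaches the target, which gives the painter a useful gradient signal even when strokes do not yet overlap.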
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- Neural Style Transfer for Vector Graphics [3.8983556368110226]
Style transfer between vector images has not been considered.
Applying standard content and style losses changes the drawing style of a vector image only insignificantly.
Our new method, based on differentiable rasterization, can change the color and shape parameters of the content image to match the drawing style of the style image.
arXiv Detail & Related papers (2023-03-06T16:57:45Z)
- VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models [82.93345261434943]
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling.
Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches.
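Score Distillation Sampling can be sketched in a toy setting. In the sketch below the "renderer" is the identity (pixels are optimized directly, standing in for SVG path parameters rendered differentiably) and the "diffusion model" is a stand-in whose optimal noise prediction assumes the data distribution is a single target image; both are illustrative assumptions, not VectorFusion's actual components.

```python
import numpy as np

# Toy Score Distillation Sampling (SDS) loop. At each step: add noise to
# the current image, ask the (stand-in) denoiser to predict that noise,
# and step the parameters along (predicted noise - injected noise),
# skipping backprop through the diffusion model as SDS prescribes.

rng = np.random.default_rng(0)
target = np.full((4, 4), 0.7)   # mode of the toy "data distribution"
params = rng.random((4, 4))     # optimized image (stands in for SVG params)

alpha_bar = 0.5                 # single fixed noise level for simplicity
lr = 0.05

for _ in range(500):
    eps = rng.standard_normal(params.shape)
    x_t = np.sqrt(alpha_bar) * params + np.sqrt(1 - alpha_bar) * eps
    # Stand-in denoiser: the optimal noise prediction if data == target.
    eps_hat = (x_t - np.sqrt(alpha_bar) * target) / np.sqrt(1 - alpha_bar)
    # SDS gradient: (eps_hat - eps) times d x_t / d params = sqrt(alpha_bar).
    grad = (eps_hat - eps) * np.sqrt(alpha_bar)
    params -= lr * grad

print(np.allclose(params, target, atol=1e-5))  # → True
```

The loop drives the optimized parameters toward a mode of the data distribution the denoiser has learned; VectorFusion applies the same update through a differentiable SVG rasterizer with a pretrained text-conditioned diffusion model.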
arXiv Detail & Related papers (2022-11-21T10:04:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.