StarVector: Generating Scalable Vector Graphics Code from Images and Text
- URL: http://arxiv.org/abs/2312.11556v3
- Date: Thu, 05 Dec 2024 22:32:50 GMT
- Title: StarVector: Generating Scalable Vector Graphics Code from Images and Text
- Authors: Juan A. Rodriguez, Abhay Puri, Shubham Agarwal, Issam H. Laradji, Pau Rodriguez, Sai Rajeswar, David Vazquez, Christopher Pal, Marco Pedersoli
- Abstract summary: We introduce StarVector, a multimodal large language model for SVG generation. It performs image vectorization by understanding image semantics and using SVG primitives for compact, precise outputs. To train StarVector, we create SVG-Stack, a diverse dataset of 2M samples that enables generalization across vectorization tasks.
- Score: 15.32194071443065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scalable Vector Graphics (SVGs) are vital for modern image rendering due to their scalability and versatility. Previous SVG generation methods have focused on curve-based vectorization, lacking semantic understanding, often producing artifacts, and struggling with SVG primitives beyond path curves. To address these issues, we introduce StarVector, a multimodal large language model for SVG generation. It performs image vectorization by understanding image semantics and using SVG primitives for compact, precise outputs. Unlike traditional methods, StarVector works directly in the SVG code space, leveraging visual understanding to apply accurate SVG primitives. To train StarVector, we create SVG-Stack, a diverse dataset of 2M samples that enables generalization across vectorization tasks and precise use of primitives like ellipses, polygons, and text. We address challenges in SVG evaluation, showing that pixel-based metrics like MSE fail to capture the unique qualities of vector graphics. We introduce SVG-Bench, a benchmark across 10 datasets, and 3 tasks: Image-to-SVG, Text-to-SVG generation, and diagram generation. Using this setup, StarVector achieves state-of-the-art performance, producing more compact and semantically rich SVGs.
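To make the evaluation point concrete, here is a minimal sketch (not the SVG-Bench implementation; cairosvg, Pillow, and the file names are assumptions) that rasterizes a generated SVG and scores it with pixel MSE:

```python
# Minimal sketch (not the SVG-Bench code): rasterize an SVG and compute pixel
# MSE against a target raster. File names and sizes are placeholders.
import io

import cairosvg
import numpy as np
from PIL import Image


def rasterize_svg(svg_code: str, size: int = 224) -> np.ndarray:
    """Render SVG markup to a float RGB array of shape (size, size, 3)."""
    png_bytes = cairosvg.svg2png(
        bytestring=svg_code.encode("utf-8"),
        output_width=size,
        output_height=size,
        background_color="white",
    )
    img = Image.open(io.BytesIO(png_bytes)).convert("RGB")
    return np.asarray(img, dtype=np.float32) / 255.0


def pixel_mse(svg_code: str, target_png: str, size: int = 224) -> float:
    """Pixel-level MSE between the rasterized SVG and a target image."""
    pred = rasterize_svg(svg_code, size)
    target = Image.open(target_png).convert("RGB").resize((size, size))
    target = np.asarray(target, dtype=np.float32) / 255.0
    return float(np.mean((pred - target) ** 2))


if __name__ == "__main__":
    svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">'
           '<circle cx="32" cy="32" r="20" fill="red"/></svg>')
    print(pixel_mse(svg, "target.png"))  # "target.png" is a placeholder
```

A low MSE at one raster resolution says nothing about which primitives were used, how compact the markup is, or how editable it remains, which is exactly the gap the abstract attributes to pixel-based metrics.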
Related papers
- OmniSVG: A Unified Scalable Vector Graphics Generation Model [70.26163703054979]
We propose OmniSVG, a unified framework that leverages pre-trained Vision-Language Models for end-to-end multimodal SVG generation.
By parameterizing SVG commands and coordinates into discrete tokens, OmniSVG decouples structural logic from low-level geometry for efficient training while maintaining the synthesis of complex SVG structures (a toy tokenization is sketched after this entry).
We introduce MMSVG-2M, a multimodal dataset with two million annotated SVG assets, along with a standardized evaluation protocol for conditional SVG generation tasks.
arXiv Detail & Related papers (2025-04-08T17:59:49Z)
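As a toy illustration of the command/coordinate tokenization described above for OmniSVG (the token names, canvas size, and 8-bit grid are assumptions, not the paper's vocabulary), an SVG path can be quantized into discrete tokens like this:

```python
# Toy sketch of command/coordinate tokenization for SVG paths. The token
# names, canvas size, and coordinate grid are illustrative assumptions.
import re

GRID = 255  # coordinates are quantized to integers in [0, GRID]


def tokenize_path(d: str, canvas: float = 100.0) -> list[str]:
    """Turn an SVG path 'd' string into discrete command/coordinate tokens."""
    tokens = []
    for part in re.findall(r"[A-Za-z]|-?\d+(?:\.\d+)?", d):
        if part.isalpha():
            tokens.append(f"<{part}>")  # command token, e.g. <M>, <L>, <Z>
        else:
            value = min(max(float(part), 0.0), canvas)
            tokens.append(f"<c{round(value / canvas * GRID)}>")  # coord token
    return tokens


print(tokenize_path("M 12 20 L 80.5 40 Z"))
# ['<M>', '<c31>', '<c51>', '<L>', '<c205>', '<c102>', '<Z>']
```

Once commands and coordinates share one discrete vocabulary, a single autoregressive decoder can emit structure and geometry in the same stream, which is roughly the decoupling the summary refers to.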
- NeuralSVG: An Implicit Representation for Text-to-Vector Generation [54.4153300455889]
We propose NeuralSVG, an implicit neural representation for generating vector graphics from text prompts.
To encourage a layered structure in the generated SVG, we introduce a dropout-based regularization technique.
We demonstrate that NeuralSVG outperforms existing methods in generating structured and flexible SVG.
arXiv Detail & Related papers (2025-01-07T18:50:06Z)
- SVGBuilder: Component-Based Colored SVG Generation with Text-Guided Autoregressive Transformers [5.921625661186367]
This paper introduces a component-based, autoregressive model for generating high-quality colored SVGs from textual input.
It significantly reduces computational overhead and improves efficiency compared to traditional methods.
To address the limitations of existing SVG datasets and support our research, we introduce ColorSVG-100K, the first large-scale dataset of colored SVGs.
arXiv Detail & Related papers (2024-12-13T15:24:11Z)
- SVGFusion: Scalable Text-to-SVG Generation via Vector Space Diffusion [32.01103570298614]
We introduce SVGFusion, a Text-to-SVG model capable of scaling to real-world SVG data.
The core idea of SVGFusion is to utilize a popular Text-to-Image framework to learn a continuous latent space for vector graphics.
To effectively train and evaluate SVGFusion, we construct SVGX-Dataset, a large-scale, high-quality SVG dataset.
arXiv Detail & Related papers (2024-12-11T09:02:25Z)
- Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision [20.325246638505714]
We introduce GRIMOIRE, a text-guided generative model that learns to map images onto a discrete codebook by reconstructing them as vector shapes (a generic codebook lookup is sketched after this entry).
Unlike existing models that require direct supervision from vector data, GRIMOIRE learns using only image supervision, which opens up vector generative modeling to significantly more data.
arXiv Detail & Related papers (2024-10-08T12:41:31Z)
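The discrete codebook mentioned for GRIMOIRE can be illustrated with a generic vector-quantization lookup (a standard VQ sketch with arbitrary sizes, not GRIMOIRE's architecture):

```python
# Generic vector-quantization (codebook lookup) sketch; codebook size and
# embedding dimension are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # 512 learnable shape codes of dim 64


def quantize(z: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Snap each row of z to its nearest codebook entry (L2 distance)."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)       # one discrete token per shape embedding
    return idx, codebook[idx]        # indices and the quantized embeddings


z = rng.normal(size=(8, 64))         # e.g. 8 encoder outputs for 8 shapes
indices, z_q = quantize(z)
print(indices.shape, z_q.shape)      # (8,) (8, 64)
```

The generative model then only has to predict the integer indices; per the summary above, supervision comes from reconstructing raster images rather than from vector data.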
- SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis [66.44553285020066]
SuperSVG is a superpixel-based vectorization model that achieves fast and high-precision image vectorization.
We propose a two-stage self-training framework, where a coarse-stage model reconstructs the main structure and a refinement-stage model enriches the details (a toy coarse stage is sketched after this entry).
Experiments demonstrate the superior performance of our method in terms of reconstruction accuracy and inference time compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-14T07:43:23Z)
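The coarse stage of a superpixel pipeline can be approximated with an off-the-shelf SLIC segmentation (a toy sketch, not SuperSVG's model; scikit-image and the file names are assumptions, and the refinement stage is omitted):

```python
# Toy coarse stage in the spirit of superpixel-based vectorization: SLIC
# superpixels filled with their mean color. "input.png" is a placeholder.
import numpy as np
from skimage import io, segmentation


def coarse_reconstruction(path: str, n_segments: int = 200) -> np.ndarray:
    image = io.imread(path)[..., :3].astype(np.float64) / 255.0
    labels = segmentation.slic(image, n_segments=n_segments, compactness=10)
    recon = np.zeros_like(image)
    for label in np.unique(labels):
        mask = labels == label
        recon[mask] = image[mask].mean(axis=0)  # flat color fill per region
    return recon


recon = coarse_reconstruction("input.png")
io.imsave("coarse.png", (recon * 255).astype(np.uint8))
```

In the actual method the coarse regions become vector shapes and a separate refinement-stage model enriches the details, as the summary above describes.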
- SVGDreamer: Text Guided SVG Generation with Diffusion Model [31.76771064173087]
We propose a novel text-guided vector graphics synthesis method called SVGDreamer.
The SIVE (semantic-driven image vectorization) process decomposes the synthesis into foreground objects and background.
The VPSD (vectorized particle-based score distillation) approach addresses issues of shape over-smoothing, color over-saturation, limited diversity, and slow convergence.
arXiv Detail & Related papers (2023-12-27T08:50:01Z)
- Beyond Pixels: Exploring Human-Readable SVG Generation for Simple Images with Vision Language Models [19.145503353922038]
We introduce our method, Simple-SVG-Generation (S²VG²).
Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.
On simple images, we evaluate our method on reasoning tasks together with advanced language models; the results show a clear improvement over previous SVG generation methods.
arXiv Detail & Related papers (2023-11-27T05:20:11Z)
- SAMVG: A Multi-stage Image Vectorization Model with the Segment-Anything Model [59.40189857428461]
We propose SAMVG, a multi-stage model to vectorize images into SVG (Scalable Vector Graphics).
Firstly, SAMVG uses general image segmentation provided by the Segment-Anything Model and uses a novel filtering method to identify the best dense segmentation map for the entire image.
Secondly, SAMVG identifies missing components and adds more detailed components to the SVG.
arXiv Detail & Related papers (2023-11-09T11:11:56Z)
- VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models [82.93345261434943]
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling (the gradient is sketched after this entry).
Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches.
arXiv Detail & Related papers (2022-11-21T10:04:27Z)
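Score Distillation Sampling, used by VectorFusion as noted above, can be written out roughly in its DreamFusion-style form; here θ are the vector (path and color) parameters, x = g(θ) is the differentiably rasterized image, y the caption, and ε̂_φ the frozen diffusion model's noise prediction:

```latex
% SDS gradient that pulls the rasterized SVG toward the caption; w(t) is a
% timestep weighting and x_t the noised rendering.
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon,
\quad \epsilon \sim \mathcal{N}(0, I).
```

Because the gradient flows only through ∂x/∂θ, the diffusion model stays frozen and just the vector parameters are optimized.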
- Towards Layer-wise Image Vectorization [57.26058135389497]
We propose Layerwise Image Vectorization, namely LIVE, to convert images to SVGs while simultaneously maintaining the image topology.
LIVE generates compact forms with layer-wise structures that are semantically consistent with human perspective.
LIVE produces human-editable SVGs for both designers and other applications.
arXiv Detail & Related papers (2022-06-09T17:55:02Z)
- SVG-Net: An SVG-based Trajectory Prediction Model [67.68864911674308]
Anticipating motions of vehicles in a scene is an essential problem for safe autonomous driving systems.
To this end, the comprehension of the scene's infrastructure is often the main clue for predicting future trajectories.
Most of the proposed approaches represent the scene in a rasterized format, and some of the more recent approaches leverage custom vectorized formats.
arXiv Detail & Related papers (2021-10-07T18:00:08Z)
- DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation [217.86315551526235]
We propose a novel hierarchical generative network, called DeepSVG, for complex SVG icon generation and manipulation.
Our architecture effectively disentangles high-level shapes from the low-level commands that encode the shape itself.
We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool.
arXiv Detail & Related papers (2020-07-22T09:36:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.