SVGCraft: Beyond Single Object Text-to-SVG Synthesis with Comprehensive Canvas Layout
- URL: http://arxiv.org/abs/2404.00412v1
- Date: Sat, 30 Mar 2024 16:43:40 GMT
- Title: SVGCraft: Beyond Single Object Text-to-SVG Synthesis with Comprehensive Canvas Layout
- Authors: Ayan Banerjee, Nityanand Mathur, Josep Lladós, Umapada Pal, Anjan Dutta
- Abstract summary: This work introduces a novel end-to-end framework for the creation of vector graphics depicting entire scenes from textual descriptions.
The resulting SVG is optimized using a pre-trained encoder and an LPIPS loss with opacity modulation to maximize similarity.
It is demonstrated to surpass prior works in abstraction, recognizability, and detail.
- Score: 14.824205628841158
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generating VectorArt from text prompts is a challenging vision task, requiring diverse yet realistic depictions of the seen as well as unseen entities. However, existing research has been mostly limited to the generation of single objects, rather than comprehensive scenes comprising multiple elements. In response, this work introduces SVGCraft, a novel end-to-end framework for the creation of vector graphics depicting entire scenes from textual descriptions. Utilizing a pre-trained LLM for layout generation from text prompts, this framework introduces a technique for producing masked latents in specified bounding boxes for accurate object placement. It introduces a fusion mechanism for integrating attention maps and employs a diffusion U-Net for coherent composition, speeding up the drawing process. The resulting SVG is optimized using a pre-trained encoder and LPIPS loss with opacity modulation to maximize similarity. Additionally, this work explores the potential of primitive shapes in facilitating canvas completion in constrained environments. Through both qualitative and quantitative assessments, SVGCraft is demonstrated to surpass prior works in abstraction, recognizability, and detail, as evidenced by its performance metrics (CLIP-T: 0.4563, Cosine Similarity: 0.6342, Confusion: 0.66, Aesthetic: 6.7832). The code will be available at https://github.com/ayanban011/SVGCraft.
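The optimization stage described in the abstract (fitting the SVG to a target via a perceptual loss with opacity modulation) can be sketched in miniature. The toy below substitutes a plain pixel-space L2 loss for the perceptual LPIPS term (which requires a pretrained network) and a trivial additive compositor for a real differentiable rasterizer; all names are illustrative stand-ins, not SVGCraft's actual code:

```python
import numpy as np

def render(masks, opacities):
    """Trivial additive 'renderer': opacity-weighted sum of per-shape masks.

    A stand-in for a differentiable vector rasterizer; SVGCraft uses a real
    renderer, which this sketch does not attempt to reproduce.
    """
    return np.tensordot(opacities, masks, axes=1)

def optimize_opacities(masks, target, steps=200, lr=0.05):
    """Gradient descent on per-shape opacities against a pixel L2 loss.

    L2 stands in for the LPIPS loss of the paper, which is a learned
    perceptual metric and is omitted here for self-containment.
    """
    opacities = np.full(len(masks), 0.5)
    for _ in range(steps):
        residual = render(masks, opacities) - target
        # d/d(o_i) of 0.5 * ||render - target||^2 is <mask_i, residual>
        grad = np.tensordot(masks, residual, axes=([1, 2], [0, 1]))
        opacities = np.clip(opacities - lr * grad, 0.0, 1.0)
    return opacities

# Two non-overlapping shapes; the target shows shape 0 fully, shape 1 absent.
m0 = np.zeros((8, 8)); m0[:4] = 1.0
m1 = np.zeros((8, 8)); m1[4:] = 1.0
target = m0.copy()
opac = optimize_opacities(np.stack([m0, m1]), target)
print(np.round(opac, 2))  # shape 0's opacity -> ~1.0, shape 1's -> ~0.0
```

The clipping step plays the role of the paper's opacity modulation in this toy: opacities stay in [0, 1] while the loss drives occluded or spurious shapes toward transparency.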
Related papers
- Semantic Document Derendering: SVG Reconstruction via Vision-Language Modeling [32.22298939812003]
We introduce SliDer, a novel framework that uses Vision-Language Models to derender slide images as compact and editable SVG representations.
SliDer achieves a reconstruction LPIPS of 0.069 and is favored by human evaluators in 82.9% of cases compared to the strongest zero-shot VLM baseline.
arXiv Detail & Related papers (2025-11-17T15:16:13Z)
- SVGThinker: Instruction-Aligned and Reasoning-Driven Text-to-SVG Generation [47.390332111383294]
We present SVGThinker, a reasoning-driven framework that aligns the production of SVG code with the visualization process.
Our pipeline first renders each primitive in sequence and uses a multimodal model to annotate the image and code.
Experiments against state-of-the-art baselines show that SVGThinker produces more stable, editable, and higher-quality SVGs.
arXiv Detail & Related papers (2025-09-29T05:25:00Z)
- SVGen: Interpretable Vector Graphics Generation with Large Language Models [61.62816031675714]
We introduce SVG-1M, a large-scale dataset of high-quality SVGs paired with natural language descriptions.
We create well-aligned Text to SVG training pairs, including a subset with Chain of Thought annotations for enhanced semantic guidance.
Based on this dataset, we propose SVGen, an end-to-end model that generates SVG code from natural language inputs.
arXiv Detail & Related papers (2025-08-06T15:00:24Z)
- OmniSVG: A Unified Scalable Vector Graphics Generation Model [70.26163703054979]
We propose OmniSVG, a unified framework that leverages pre-trained Vision-Language Models for end-to-end multimodal SVG generation.
By parameterizing SVG commands and coordinates into discrete tokens, OmniSVG decouples structural logic from low-level geometry for efficient training while maintaining the synthesis of complex SVG structure.
We introduce MMSVG-2M, a multimodal dataset with two million annotated SVG assets, along with a standardized evaluation protocol for conditional SVG generation tasks.
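Parameterizing SVG commands and coordinates into discrete tokens, as described above, can be illustrated with a minimal sketch. The vocabulary layout, bin count, and coordinate range below are illustrative assumptions, not OmniSVG's actual scheme:

```python
# Sketch of command/coordinate tokenization for SVG paths.
# Vocabulary layout and quantization are illustrative assumptions.
COMMANDS = ["M", "L", "C", "Q", "Z"]  # path command tokens
N_BINS = 256                          # coordinate quantization bins
COORD_OFFSET = len(COMMANDS)          # coordinate tokens follow command tokens

def quantize(v, lo=0.0, hi=100.0):
    """Map a float coordinate into one of N_BINS discrete bins."""
    b = int((v - lo) / (hi - lo) * (N_BINS - 1))
    return max(0, min(N_BINS - 1, b))

def tokenize_path(path):
    """path: list of (command, [coords...]) pairs -> flat list of token ids."""
    tokens = []
    for cmd, coords in path:
        tokens.append(COMMANDS.index(cmd))
        tokens.extend(COORD_OFFSET + quantize(c) for c in coords)
    return tokens

path = [("M", [10.0, 10.0]), ("L", [90.0, 50.0]), ("Z", [])]
print(tokenize_path(path))  # [0, 30, 30, 1, 234, 132, 4]
```

Keeping command ids and quantized coordinates in disjoint token ranges is one simple way to realize the decoupling of structural logic from low-level geometry that the abstract describes.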
arXiv Detail & Related papers (2025-04-08T17:59:49Z)
- NeuralSVG: An Implicit Representation for Text-to-Vector Generation [54.4153300455889]
We propose NeuralSVG, an implicit neural representation for generating vector graphics from text prompts.
To encourage a layered structure in the generated SVG, we introduce a dropout-based regularization technique.
We demonstrate that NeuralSVG outperforms existing methods in generating structured and flexible SVG.
arXiv Detail & Related papers (2025-01-07T18:50:06Z)
- SVGDreamer++: Advancing Editability and Diversity in Text-Guided SVG Generation [31.76771064173087]
We propose a novel text-guided vector graphics synthesis method to address limitations of existing methods.
We introduce a Hierarchical Image VEctorization (HIVE) framework that operates at the semantic object level.
We also present a Vectorized Particle-based Score Distillation (VPSD) approach to improve the diversity of output SVGs.
arXiv Detail & Related papers (2024-11-26T19:13:38Z)
- Chat2SVG: Vector Graphics Generation with Large Language Models and Image Diffusion Models [14.917583676464266]
Chat2SVG is a hybrid framework that combines Large Language Models and image diffusion models for text-to-SVG generation.
Our system enables intuitive editing through natural language instructions, making professional vector graphics creation accessible to all users.
arXiv Detail & Related papers (2024-11-25T17:31:57Z)
- SVGDreamer: Text Guided SVG Generation with Diffusion Model [31.76771064173087]
We propose a novel text-guided vector graphics synthesis method called SVGDreamer.
The SIVE process enables decomposition of the synthesis into foreground objects and background.
The VPSD approach addresses issues of shape over-smoothing, color over-saturation, limited diversity, and slow convergence.
arXiv Detail & Related papers (2023-12-27T08:50:01Z)
- StarVector: Generating Scalable Vector Graphics Code from Images [13.995963187283321]
This paper introduces StarVector, a multimodal SVG generation model that integrates Code Generation Large Language Models (CodeLLMs) and vision models.
Our approach utilizes a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module.
Our results demonstrate significant enhancements in visual quality and complexity over current methods, marking a notable advancement in SVG generation technology.
arXiv Detail & Related papers (2023-12-17T08:07:32Z)
- GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs [74.98581417902201]
We propose a novel framework to generate compositional 3D scenes from scene graphs.
By exploiting node and edge information in scene graphs, our method makes better use of the pretrained text-to-image diffusion model.
We conduct both qualitative and quantitative experiments to validate the effectiveness of GraphDreamer.
arXiv Detail & Related papers (2023-11-30T18:59:58Z)
- Text-Guided Vector Graphics Customization [31.41266632288932]
We propose a novel pipeline that generates high-quality customized vector graphics based on textual prompts.
Our method harnesses the capabilities of large pre-trained text-to-image models.
We evaluate our method using multiple metrics from vector-level, image-level and text-level perspectives.
arXiv Detail & Related papers (2023-09-21T17:59:01Z)
- VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models [82.93345261434943]
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling.
Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches.
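Score Distillation Sampling, borrowed from text-to-3D work, backpropagates a frozen diffusion model's denoising error through a differentiable rasterizer into the SVG parameters. A sketch of the standard SDS gradient follows (notation is the common DreamFusion-style formulation, not necessarily VectorFusion's exact symbols):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,
    \big(\hat{\epsilon}_\phi(\mathbf{x}_t;\, y, t) - \epsilon\big)\,
    \frac{\partial \mathbf{x}}{\partial \theta} \,\right]
```

Here x = R(θ) is the rasterized SVG, x_t its noised version at timestep t, ε̂_φ the frozen U-Net's noise prediction conditioned on the caption y, and w(t) a timestep-dependent weighting.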
arXiv Detail & Related papers (2022-11-21T10:04:27Z)
- Towards Layer-wise Image Vectorization [57.26058135389497]
We propose Layerwise Image Vectorization, namely LIVE, to convert images to SVGs while simultaneously maintaining their image topology.
LIVE generates compact forms with layer-wise structures that are semantically consistent with human perspective.
LIVE produces human-editable SVGs for designers and can be used in other applications.
arXiv Detail & Related papers (2022-06-09T17:55:02Z)
- Graph-to-3D: End-to-End Generation and Manipulation of 3D Scenes Using Scene Graphs [85.54212143154986]
Controllable scene synthesis consists of generating 3D information that satisfies underlying specifications.
Scene graphs are representations of a scene composed of objects (nodes) and inter-object relationships (edges).
We propose the first work that directly generates shapes from a scene graph in an end-to-end manner.
arXiv Detail & Related papers (2021-08-19T17:59:07Z)
- DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation [217.86315551526235]
We propose a novel hierarchical generative network, called DeepSVG, for complex SVG icons generation and manipulation.
Our architecture effectively disentangles high-level shapes from the low-level commands that encode the shape itself.
We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool.
arXiv Detail & Related papers (2020-07-22T09:36:31Z)
- 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior [50.73148041205675]
The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation.
We devise a new geometry-based strategy to embed depth information with a low-resolution voxel representation.
Our proposed geometric embedding works better than the depth feature learning used in conventional SSC frameworks.
arXiv Detail & Related papers (2020-03-31T09:33:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.