SAMVG: A Multi-stage Image Vectorization Model with the Segment-Anything
Model
- URL: http://arxiv.org/abs/2311.05276v2
- Date: Mon, 25 Dec 2023 14:16:07 GMT
- Title: SAMVG: A Multi-stage Image Vectorization Model with the Segment-Anything
Model
- Authors: Haokun Zhu, Juang Ian Chong, Teng Hu, Ran Yi, Yu-Kun Lai, Paul L.
Rosin
- Abstract summary: We propose SAMVG, a multi-stage model to vectorize images into SVG (Scalable Vector Graphics).
Firstly, SAMVG uses general image segmentation provided by the Segment-Anything Model and a novel filtering method to identify the best dense segmentation map for the entire image.
Secondly, SAMVG identifies missing components and adds more detailed components to the SVG.
- Score: 59.40189857428461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vector graphics are widely used in graphical designs and have received more
and more attention. However, unlike raster images which can be easily obtained,
acquiring high-quality vector graphics, typically by automatic conversion from
raster images, remains a significant challenge, especially for
more complex images such as photos or artworks. In this paper, we propose
SAMVG, a multi-stage model to vectorize raster images into SVG (Scalable Vector
Graphics). Firstly, SAMVG uses general image segmentation provided by the
Segment-Anything Model and uses a novel filtering method to identify the best
dense segmentation map for the entire image. Secondly, SAMVG identifies
missing components and adds more detailed components to the SVG. Through a
series of extensive experiments, we demonstrate that SAMVG can produce
high-quality SVGs in any domain while requiring less computation time and
complexity than previous state-of-the-art methods.
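The two stages described in the abstract (filter the best dense segmentation map, then add components for missing regions) can be sketched in simplified form. The code below is an illustrative toy, not SAMVG's actual algorithm: the greedy coverage score, the thresholds, and the mock masks are all assumptions standing in for the paper's filtering method and SAM's real output.

```python
import numpy as np

def filter_best_masks(masks, image_shape, min_score=0):
    """Stage 1 (illustrative): greedily keep masks that add coverage
    of the image while penalizing overlap with already-kept masks."""
    covered = np.zeros(image_shape, dtype=bool)
    kept = []
    # Visit larger masks first, mimicking a coarse-to-fine ordering.
    for m in sorted(masks, key=lambda m: m.sum(), reverse=True):
        new_pixels = (m & ~covered).sum()     # pixels this mask would add
        overlap = (m & covered).sum()         # pixels already explained
        if new_pixels - overlap > min_score:
            kept.append(m)
            covered |= m
    return kept, covered

def find_missing_components(covered, min_area=4):
    """Stage 2 (illustrative): report uncovered pixels that would be
    vectorized as additional, more detailed SVG components."""
    missing = ~covered
    return missing if missing.sum() >= min_area else np.zeros_like(missing)

# Toy example: an 8x8 "image" with two candidate masks from a segmenter.
h, w = 8, 8
m1 = np.zeros((h, w), dtype=bool); m1[:4, :] = True   # top half
m2 = np.zeros((h, w), dtype=bool); m2[4:, :4] = True  # bottom-left quarter
kept, covered = filter_best_masks([m1, m2], (h, w))
missing = find_missing_components(covered)
print(len(kept), int(missing.sum()))  # 2 kept masks; 16 uncovered pixels
```

In the real system, the kept masks would each be fitted with vector paths to form the SVG, and the uncovered regions would trigger a second, more detailed segmentation pass.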
Related papers
- Chat2SVG: Vector Graphics Generation with Large Language Models and Image Diffusion Models [14.917583676464266]
Chat2SVG is a hybrid framework that combines Large Language Models and image diffusion models for text-to-SVG generation.
Our system enables intuitive editing through natural language instructions, making professional vector graphics creation accessible to all users.
arXiv Detail & Related papers (2024-11-25T17:31:57Z)
- Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision [20.325246638505714]
We introduce GRIMOIRE, a text-guided generative model that learns to map images onto a discrete codebook by reconstructing them as vector shapes.
Unlike existing models that require direct supervision from data, GRIMOIRE learns using only image supervision which opens up vector generative modeling to significantly more data.
arXiv Detail & Related papers (2024-10-08T12:41:31Z)
- SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis [66.44553285020066]
SuperSVG is a superpixel-based vectorization model that achieves fast and high-precision image vectorization.
We propose a two-stage self-training framework, where a coarse-stage model is employed to reconstruct the main structure and a refinement-stage model is used for enriching the details.
Experiments demonstrate the superior performance of our method in terms of reconstruction accuracy and inference time compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-14T07:43:23Z)
- StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis [112.25071764647683]
StrokeNUWA is a pioneering work exploring a better visual representation, ''stroke tokens'', for vector graphics.
Equipped with stroke tokens, StrokeNUWA can significantly surpass traditional LLM-based and optimization-based methods.
StrokeNUWA achieves up to a 94x inference speedup over prior methods with an exceptional SVG code compression ratio of 6.9%.
arXiv Detail & Related papers (2024-01-30T15:20:26Z)
- StarVector: Generating Scalable Vector Graphics Code from Images [13.995963187283321]
This paper introduces StarVector, a multimodal SVG generation model that integrates Code Generation Large Language Models (CodeLLMs) and vision models.
Our approach uses a CLIP image encoder to extract visual representations from pixel-based images, which are then transformed into visual tokens via an adapter module.
Our results demonstrate significant enhancements in visual quality and complexity over current methods, marking a notable advancement in SVG generation technology.
arXiv Detail & Related papers (2023-12-17T08:07:32Z)
- VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models [82.93345261434943]
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling.
Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches.
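For reference, Score Distillation Sampling (introduced in the text-to-3D work the summary alludes to) optimizes the parameters of a differentiable renderer so that rendered images score well under a frozen diffusion model. A sketch of the gradient, using the standard DreamFusion-style notation (VectorFusion's exact variant may differ):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,
      \frac{\partial x}{\partial \theta}
    \right]
```

Here $x = g(\theta)$ is the rasterized SVG parameterized by path parameters $\theta$, $x_t$ is its noised version at timestep $t$, $y$ is the caption, $\hat{\epsilon}_\phi$ is the diffusion model's noise prediction, and $w(t)$ is a timestep weighting.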
arXiv Detail & Related papers (2022-11-21T10:04:27Z)
- Towards Layer-wise Image Vectorization [57.26058135389497]
We propose Layer-wise Image Vectorization, namely LIVE, to convert images to SVGs while maintaining their image topology.
LIVE generates compact forms with layer-wise structures that are semantically consistent with human perspective.
LIVE produces human-editable SVGs for designers and can be used in other applications.
arXiv Detail & Related papers (2022-06-09T17:55:02Z)
- DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation [217.86315551526235]
We propose a novel hierarchical generative network, called DeepSVG, for complex SVG icon generation and manipulation.
Our architecture effectively disentangles high-level shapes from the low-level commands that encode the shape itself.
We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool.
arXiv Detail & Related papers (2020-07-22T09:36:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.