MARVEL: Raster Manga Vectorization via Primitive-wise Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2110.04830v2
- Date: Tue, 18 Jul 2023 21:13:25 GMT
- Title: MARVEL: Raster Manga Vectorization via Primitive-wise Deep Reinforcement
Learning
- Authors: Hao Su, Jianwei Niu, Xuefeng Liu, Jiahe Cui, Ji Wan
- Abstract summary: Manga is a fashionable Japanese-style comic form that is composed of black-and-white strokes and is generally displayed as images on digital devices.
We propose MARVEL, a primitive-wise approach for vectorizing mangas by Deep Reinforcement Learning (DRL).
Unlike previous learning-based methods which predict vector parameters for an entire image, MARVEL introduces a new perspective that regards an entire manga as a collection of basic primitives, i.e., stroke lines.
- Score: 29.14983719525674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manga is a fashionable Japanese-style comic form that is composed of
black-and-white strokes and is generally displayed as raster images on digital
devices. Typical mangas have simple textures, wide lines, and few color
gradients, features that make them well suited to vectorization and to the
merits of vector graphics, e.g., adaptive resolutions and small file sizes. In this paper, we
propose MARVEL (MAnga's Raster to VEctor Learning), a primitive-wise approach
for vectorizing raster mangas by Deep Reinforcement Learning (DRL). Unlike
previous learning-based methods which predict vector parameters for an entire
image, MARVEL introduces a new perspective that regards an entire manga as a
collection of basic primitives, i.e., stroke lines, and designs a DRL model
to decompose the target image into a primitive sequence for achieving accurate
vectorization. To improve vectorization accuracies and decrease file sizes, we
further propose a stroke accuracy reward to predict accurate stroke lines, and
a pruning mechanism to avoid generating erroneous and repeated strokes.
Extensive subjective and objective experiments show that our MARVEL can
generate impressive results and reaches the state-of-the-art level. Our code is
open-source at: https://github.com/SwordHolderSH/Mang2Vec.
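The primitive-wise idea in the abstract, decomposing an image into a stroke sequence, rewarding strokes that improve accuracy, and pruning erroneous or repeated ones, can be illustrated with a toy greedy loop. This is a hypothetical sketch, not the authors' code: the horizontal-stroke primitive, the L2 reward, and the greedy acceptance rule are all simplifying assumptions standing in for the paper's DRL agent.

```python
# Toy sketch of primitive-wise vectorization: accept a candidate stroke only
# when it reduces reconstruction error (the "stroke accuracy reward"), and
# prune strokes that do not help (erroneous or repeated strokes).

def render_stroke(canvas, stroke):
    """Draw a horizontal stroke (row, col_start, col_end) with ink=1 on a copy."""
    row, c0, c1 = stroke
    out = [r[:] for r in canvas]
    for c in range(c0, c1 + 1):
        out[row][c] = 1
    return out

def l2_error(a, b):
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def vectorize(target, candidates):
    """Greedy decomposition: keep strokes whose reward (error drop) is positive."""
    canvas = [[0] * len(target[0]) for _ in target]
    kept = []
    for stroke in candidates:
        trial = render_stroke(canvas, stroke)
        reward = l2_error(canvas, target) - l2_error(trial, target)
        if reward > 0:  # pruning: duplicate or wrong strokes have reward <= 0
            canvas, kept = trial, kept + [stroke]
    return kept, canvas

# Toy 3x4 "manga patch": a single dark line on row 1.
target = [[0, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 0]]
strokes, recon = vectorize(target, [(1, 0, 3), (1, 0, 3), (0, 0, 1)])
print(strokes)  # → [(1, 0, 3)]  (duplicate and off-target strokes pruned)
```

In the paper the candidate strokes come from a learned DRL policy rather than a fixed list, but the acceptance logic above captures why pruning both shrinks file size and removes repeated strokes.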
Related papers
- SuperSVG: Superpixel-based Scalable Vector Graphics Synthesis [66.44553285020066]
SuperSVG is a superpixel-based vectorization model that achieves fast and high-precision image vectorization.
We propose a two-stage self-training framework, where a coarse-stage model is employed to reconstruct the main structure and a refinement-stage model is used for enriching the details.
Experiments demonstrate the superior performance of our method in terms of reconstruction accuracy and inference time compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-14T07:43:23Z)
- Text-to-Vector Generation with Neural Path Representation [27.949704002538944]
We propose a novel neural path representation that learns the path latent space from both sequence and image modalities.
In the first stage, a pre-trained text-to-image diffusion model guides the initial generation of complex vector graphics.
In the second stage, we refine the graphics using a layer-wise image vectorization strategy to achieve clearer elements and structure.
arXiv Detail & Related papers (2024-05-16T17:59:22Z)
- Text-Based Reasoning About Vector Graphics [76.42082386029206]
We propose the Visually Descriptive Language Model (VDLM), which performs text-based reasoning about vector graphics.
VDLM bridges vector graphics and pretrained language models through a newly introduced symbolic representation, Primal Visual Description (PVD).
Our framework offers better interpretability due to its disentangled perception and reasoning processes.
arXiv Detail & Related papers (2024-04-09T17:30:18Z)
- Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models [26.010509997863196]
We propose a novel sketch-to-manga framework that first generates a color illustration from the sketch and then generates a screentoned manga.
Our method significantly outperforms existing methods in generating high-quality manga with shaded high-frequency screentones.
arXiv Detail & Related papers (2024-03-13T05:33:52Z)
- Deep Geometrized Cartoon Line Inbetweening [98.35956631655357]
Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem.
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
arXiv Detail & Related papers (2023-09-28T17:50:05Z)
- Text-Guided Vector Graphics Customization [31.41266632288932]
We propose a novel pipeline that generates high-quality customized vector graphics based on textual prompts.
Our method harnesses the capabilities of large pre-trained text-to-image models.
We evaluate our method using multiple metrics from vector-level, image-level and text-level perspectives.
arXiv Detail & Related papers (2023-09-21T17:59:01Z)
- VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models [82.93345261434943]
We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics.
Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling.
Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches.
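The Score Distillation Sampling loss mentioned above can be sketched in a deliberately minimal one-dimensional form. This is an illustrative toy, not VectorFusion's implementation: the "rendered image" is a single scalar parameter, and the "diffusion model" is a stand-in noise predictor that pulls the sample toward a target value.

```python
import random

# Toy SDS step: gradient is (eps_hat - eps) * dx/dtheta, with dx/dtheta = 1
# because the "renderer" here is just x = theta. In this linear stand-in the
# injected noise cancels analytically, so the loop is deterministic; a real
# pretrained denoiser would make the gradient stochastic.

def sds_step(theta, target, lr=0.1, sigma=0.5):
    eps = random.gauss(0.0, 1.0)           # noise added to the rendered image
    x_noisy = theta + sigma * eps
    eps_hat = (x_noisy - target) / sigma   # stand-in noise prediction
    grad = eps_hat - eps                   # SDS gradient for dx/dtheta = 1
    return theta - lr * grad

random.seed(0)
theta = 5.0
for _ in range(200):
    theta = sds_step(theta, target=1.0)
print(round(theta, 2))  # → 1.0
```

The point of SDS is that the vector-graphics parameters (here, `theta`) are optimized through the renderer using only the frozen diffusion model's noise prediction, with no ground-truth SVG supervision.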
arXiv Detail & Related papers (2022-11-21T10:04:27Z)
- Character Generation through Self-Supervised Vectorization [9.36599317326032]
We present a drawing agent that operates on stroke-level representation of images.
When a 'draw' decision is made, the agent outputs a program indicating the stroke to be drawn.
We present successful results on all three generation tasks and the parsing task.
arXiv Detail & Related papers (2022-08-03T12:31:55Z)
- Towards Layer-wise Image Vectorization [57.26058135389497]
We propose Layer-wise Image Vectorization, namely LIVE, to convert images to SVGs while simultaneously maintaining their image topology.
LIVE generates compact forms with layer-wise structures that are semantically consistent with human perspective.
LIVE produces human-editable SVGs for designers and can be used in other applications.
arXiv Detail & Related papers (2022-06-09T17:55:02Z)
- Cloud2Curve: Generation and Vectorization of Parametric Sketches [109.02932608241227]
We present Cloud2Curve, a generative model for scalable high-resolution vector sketches.
We evaluate the generation and vectorization capabilities of our model on Quick, Draw! and KMNIST datasets.
arXiv Detail & Related papers (2021-03-29T12:09:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.