Neural Image Abstraction Using Long Smoothing B-Splines
- URL: http://arxiv.org/abs/2511.05360v1
- Date: Fri, 07 Nov 2025 15:50:48 GMT
- Title: Neural Image Abstraction Using Long Smoothing B-Splines
- Authors: Daniel Berio, Michael Stroh, Sylvain Calinon, Frederic Fol Leymarie, Oliver Deussen, Ariel Shamir
- Abstract summary: We show how to generate smooth and arbitrarily long paths within image-based deep learning systems. We take advantage of derivative-based smoothing costs for parametric control of fidelity vs. simplicity tradeoffs.
- Score: 33.22485341851476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We integrate smoothing B-splines into a standard differentiable vector graphics (DiffVG) pipeline through linear mapping, and show how this can be used to generate smooth and arbitrarily long paths within image-based deep learning systems. We take advantage of derivative-based smoothing costs for parametric control of fidelity vs. simplicity tradeoffs, while also enabling stylization control in geometric and image spaces. The proposed pipeline is compatible with recent vector graphics generation and vectorization methods. We demonstrate the versatility of our approach with four applications aimed at the generation of stylized vector graphics: stylized space-filling path generation, stroke-based image abstraction, closed-area image abstraction, and stylized text generation.
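The core idea described in the abstract, evaluating a B-spline as a linear map from control points to curve samples, with a derivative-based smoothing cost controlling the fidelity vs. simplicity trade-off, can be illustrated in a few lines of NumPy. This is a minimal toy sketch, not the paper's DiffVG pipeline; `bspline_matrix`, `smoothing_cost`, and the regularization weight `lam` are names chosen here for illustration.

```python
import numpy as np

def bspline_matrix(n_ctrl, samples_per_seg=20):
    """Linear map B from control points to curve samples (uniform cubic B-spline).
    Because the map is a single matrix multiply, it composes cleanly with
    differentiable pipelines, and any quadratic smoothing cost stays quadratic."""
    n_seg = n_ctrl - 3
    rows = []
    for i in range(n_seg):
        for u in np.linspace(0.0, 1.0, samples_per_seg, endpoint=(i == n_seg - 1)):
            b = np.array([(1 - u)**3,
                          3*u**3 - 6*u**2 + 4,
                          -3*u**3 + 3*u**2 + 3*u + 1,
                          u**3]) / 6.0
            row = np.zeros(n_ctrl)
            row[i:i + 4] = b
            rows.append(row)
    return np.array(rows)

def smoothing_cost(samples):
    """Derivative-based penalty: squared discrete second difference of the path."""
    d2 = samples[:-2] - 2.0 * samples[1:-1] + samples[2:]
    return float(np.sum(d2 ** 2))

# Fidelity-vs-simplicity trade-off on a noisy 2-D target; lam is the knob.
rng = np.random.default_rng(0)
n_ctrl = 12
B = bspline_matrix(n_ctrl)                        # shape (180, 12)
t = np.linspace(0.0, 2.0 * np.pi, B.shape[0])
Y = np.stack([t, np.sin(t) + 0.1 * rng.standard_normal(t.size)], axis=1)

# Second-difference operator on the control polygon (penalizes wiggles).
D = np.zeros((n_ctrl - 2, n_ctrl))
for i in range(n_ctrl - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

def fit(lam):
    """Regularized least squares: min ||B P - Y||^2 + lam ||D P||^2."""
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ Y)

P_tight, P_smooth = fit(1e-8), fit(10.0)
```

Raising `lam` trades reconstruction fidelity for a simpler, lower-curvature path, which is the parametric control the abstract refers to.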
Related papers
- Aligned Novel View Image and Geometry Synthesis via Cross-modal Attention Instillation [62.87088388345378]
We introduce a diffusion-based framework that performs aligned novel view image and geometry generation via a warping-and-inpainting methodology. The method leverages off-the-shelf geometry predictors to predict partial geometries viewed from reference images. Cross-modal attention distillation is proposed to ensure accurate alignment between generated images and geometry.
arXiv Detail & Related papers (2025-06-13T16:19:00Z) - SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation [57.47730473674261]
We introduce SwiftSketch, a model for image-conditioned vector sketch generation that can produce high-quality sketches in less than a second. SwiftSketch operates by progressively denoising stroke control points sampled from a Gaussian distribution. ControlSketch is a method that enhances SDS-based techniques by incorporating precise spatial control through a depth-aware ControlNet.
arXiv Detail & Related papers (2025-02-12T18:57:12Z) - LinPrim: Linear Primitives for Differentiable Volumetric Rendering [51.56484100374058]
We introduce two new scene representations based on linear primitives. We present a differentiable rasterizer that runs efficiently on GPU. We demonstrate comparable performance to state-of-the-art methods.
arXiv Detail & Related papers (2025-01-27T18:49:38Z) - Segmentation-guided Layer-wise Image Vectorization with Gradient Fills [6.037332707968933]
We propose a segmentation-guided vectorization framework to convert images into concise vector graphics with gradient fills.
With the guidance of an embedded gradient-aware segmentation, our approach progressively appends gradient-filled Bézier paths to the output.
arXiv Detail & Related papers (2024-08-28T12:08:25Z) - Text-to-Vector Generation with Neural Path Representation [27.949704002538944]
We propose a novel neural path representation that learns the path latent space from both sequence and image modalities.
In the first stage, a pre-trained text-to-image diffusion model guides the initial generation of complex vector graphics.
In the second stage, we refine the graphics using a layer-wise image vectorization strategy to achieve clearer elements and structure.
arXiv Detail & Related papers (2024-05-16T17:59:22Z) - VectorPainter: Advanced Stylized Vector Graphics Synthesis Using Stroke-Style Priors [18.477188153621125]
We introduce VectorPainter, a novel framework designed for reference-guided text-to-vector-graphics synthesis. Our method first converts the pixels of the reference image into a series of vector strokes, and then generates a vector graphic based on the input text description. To preserve the style of the strokes throughout the generation process, we introduce a style-preserving loss function.
arXiv Detail & Related papers (2024-05-05T15:01:29Z) - StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis [112.25071764647683]
StrokeNUWA is a pioneering work exploring a better visual representation, "stroke tokens", for vector graphics.
Equipped with stroke tokens, StrokeNUWA can significantly surpass traditional LLM-based and optimization-based methods.
StrokeNUWA achieves up to a 94x inference speedup over prior methods, with an exceptional SVG code compression ratio of 6.9%.
arXiv Detail & Related papers (2024-01-30T15:20:26Z) - Sketch Video Synthesis [52.134906766625164]
We propose a novel framework for sketching videos represented by the frame-wise Bézier curve.
Our method unlocks applications in sketch-based video editing and video doodling, enabled through video composition.
arXiv Detail & Related papers (2023-11-26T14:14:04Z) - Flow-Guided Controllable Line Drawing Generation [6.200483285433661]
We present an Image-to-Flow network (I2FNet) to efficiently and robustly create the vector flow field in a learning-based manner.
We then introduce our well-designed Double Flow Generator (DFG) framework to fuse features from learned vector flow and input image flow.
In order to allow for controllable character line drawing generation, we integrate a Line Control Matrix into DFG and train a Line Control Regressor.
arXiv Detail & Related papers (2023-07-14T14:09:09Z) - ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Differentiable Drawing and Sketching [0.0]
We present a differentiable relaxation of the process of drawing points, lines and curves into a pixel raster.
This relaxation allows end-to-end differentiable programs and deep networks to be learned and optimised.
arXiv Detail & Related papers (2021-03-30T09:25:55Z)
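The differentiable-relaxation idea above, replacing the hard in/out test of rasterization with a smooth falloff in distance to the primitive so that gradients flow back to the geometric parameters, can be illustrated with a minimal NumPy sketch. The `soft_line` function and its Gaussian falloff are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def soft_line(p0, p1, size=32, sigma=1.0):
    """Smooth relaxation of line rasterization: pixel intensity decays with
    squared distance to the segment, so the image is differentiable with
    respect to the endpoints p0 and p1 (no hard coverage test)."""
    ys, xs = np.mgrid[0:size, 0:size]
    px = np.stack([xs, ys], axis=-1).astype(float)       # pixel centers, (H, W, 2) in (x, y)
    p0 = np.asarray(p0, float)
    p1 = np.asarray(p1, float)
    d = p1 - p0
    # Project each pixel onto the segment, clamping to the endpoints.
    t = np.clip(((px - p0) @ d) / (d @ d), 0.0, 1.0)
    closest = p0 + t[..., None] * d
    dist2 = np.sum((px - closest) ** 2, axis=-1)
    return np.exp(-dist2 / (2.0 * sigma ** 2))           # smooth everywhere

img = soft_line((4.0, 4.0), (27.0, 27.0))                # a soft diagonal stroke
```

Because every pixel's value varies smoothly with the endpoints, an end-to-end program can optimize stroke geometry by gradient descent against an image loss, which is the capability the summary describes.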
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences of its use.