Order Matters: 3D Shape Generation from Sequential VR Sketches
- URL: http://arxiv.org/abs/2512.04761v1
- Date: Thu, 04 Dec 2025 12:53:31 GMT
- Title: Order Matters: 3D Shape Generation from Sequential VR Sketches
- Authors: Yizi Chen, Sidi Wu, Tianyi Xiao, Nina Wiedemann, Loic Landrieu,
- Abstract summary: We introduce VRSketch2Shape, the first framework and multi-category dataset for generating 3D shapes from sequential VR sketches. Our contributions are threefold: (i) an automated pipeline that generates sequential VR sketches from arbitrary shapes, (ii) a dataset of over 20k synthetic and 900 hand-drawn sketch-shape pairs, and (iii) an order-aware sketch encoder coupled with a diffusion-based 3D generator.
- Score: 14.101692464280097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: VR sketching lets users explore and iterate on ideas directly in 3D, offering a faster and more intuitive alternative to conventional CAD tools. However, existing sketch-to-shape models ignore the temporal ordering of strokes, discarding crucial cues about structure and design intent. We introduce VRSketch2Shape, the first framework and multi-category dataset for generating 3D shapes from sequential VR sketches. Our contributions are threefold: (i) an automated pipeline that generates sequential VR sketches from arbitrary shapes, (ii) a dataset of over 20k synthetic and 900 hand-drawn sketch-shape pairs across four categories, and (iii) an order-aware sketch encoder coupled with a diffusion-based 3D generator. Our approach yields higher geometric fidelity than prior work, generalizes effectively from synthetic to real sketches with minimal supervision, and performs well even on partial sketches. All data and models will be released open-source at https://chenyizi086.github.io/VRSketch2Shape_website.
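The abstract describes an order-aware sketch encoder whose latent conditions a diffusion-based 3D generator. As a rough illustration only (not the paper's actual architecture), the NumPy sketch below shows one common way to make stroke order matter: mean-pool each stroke's points into a feature, add sinusoidal positional encodings over drawing order, and mix the strokes with a single self-attention layer. All names and weights here (`encode_sketch`, `W`, the latent size) are hypothetical.

```python
import numpy as np

def sinusoidal_pe(n, d):
    """Standard sinusoidal positional encoding over stroke index."""
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def encode_sketch(strokes, d=16):
    """Order-aware latent for a sequential VR sketch.

    strokes: list of (n_i, 3) point arrays, in drawing order.
    Returns a (d,) latent that a diffusion generator could condition on.
    """
    rng = np.random.default_rng(0)                # fixed random weights for the demo
    W = rng.standard_normal((3, d)) / np.sqrt(3)  # hypothetical point-embedding matrix
    # Per-stroke feature: mean-pool the embedded points of each stroke.
    S = np.stack([(s @ W).mean(axis=0) for s in strokes])  # (T, d)
    S = S + sinusoidal_pe(len(strokes), d)        # inject drawing order
    # One self-attention layer over the stroke sequence.
    scores = S @ S.T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return (A @ S).mean(axis=0)                   # pooled global sketch latent
```

Because the positional encoding is added before attention, feeding the same strokes in reversed order produces a different latent, which is the property the title ("order matters") highlights:

```python
strokes = [np.random.default_rng(i).standard_normal((5, 3)) for i in range(4)]
z = encode_sketch(strokes)
z_rev = encode_sketch(strokes[::-1])  # same strokes, reversed drawing order
# z and z_rev differ, so the encoder is sensitive to stroke order
```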
Related papers
- 4-Doodle: Text to 3D Sketches that Move! [60.89021458068987]
4-Doodle is the first training-free framework for generating dynamic 3D sketches from text. Our method produces temporally realistic and structurally stable 3D sketch animations, outperforming existing baselines in both fidelity and controllability.
arXiv Detail & Related papers (2025-10-29T09:33:29Z)
- S3D: Sketch-Driven 3D Model Generation [26.557326163693215]
S3D is a framework that converts simple hand-drawn sketches into detailed 3D models. Our method utilizes a U-Net-based encoder-decoder architecture to convert sketches into face segmentation masks.
arXiv Detail & Related papers (2025-05-07T07:34:37Z)
- VRsketch2Gaussian: 3D VR Sketch Guided 3D Object Generation with Gaussian Splatting [17.92139776515526]
We propose VRSketch2Gaussian, the first VR sketch-guided, multi-modal, native 3D object generation framework. VRSS is the first large-scale paired dataset containing VR sketches, text, images, and 3DGS.
arXiv Detail & Related papers (2025-03-16T07:03:13Z)
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss.
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Fine-Grained VR Sketching: Dataset and Insights [140.0579567561475]
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs of the chair category with large shape diversity.
Our dataset supports the recent trend in the sketch community on fine-grained data analysis.
arXiv Detail & Related papers (2022-09-20T21:30:54Z)
- DifferSketching: How Differently Do People Sketch 3D Objects? [78.44544977215918]
Multiple sketch datasets have been proposed to understand how people draw 3D objects.
These datasets are often of small scale and cover a small set of objects or categories.
We analyze the collected data at three levels, i.e., sketch-level, stroke-level, and pixel-level, under both spatial and temporal characteristics.
arXiv Detail & Related papers (2022-09-19T06:52:18Z)
- SingleSketch2Mesh: Generating 3D Mesh model from Sketch [1.6973426830397942]
Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
arXiv Detail & Related papers (2022-03-07T06:30:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.