sketch2symm: Symmetry-aware sketch-to-shape generation via semantic bridging
- URL: http://arxiv.org/abs/2510.11303v1
- Date: Mon, 13 Oct 2025 11:49:45 GMT
- Authors: Yan Zhou, Mingji Li, Xiantao Zeng, Jie Lin, Yuexia Zhou
- Abstract summary: We propose Sketch2Symm, a two-stage generation method that produces geometrically consistent 3D shapes from sketches. Our approach introduces semantic bridging via sketch-to-image translation to enrich sparse sketch representations. Our method achieves superior performance compared to existing sketch-based reconstruction methods in terms of Chamfer Distance, Earth Mover's Distance, and F-Score.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketch-based 3D reconstruction remains a challenging task due to the abstract and sparse nature of sketch inputs, which often lack sufficient semantic and geometric information. To address this, we propose Sketch2Symm, a two-stage generation method that produces geometrically consistent 3D shapes from sketches. Our approach introduces semantic bridging via sketch-to-image translation to enrich sparse sketch representations, and incorporates symmetry constraints as geometric priors to leverage the structural regularity commonly found in everyday objects. Experiments on mainstream sketch datasets demonstrate that our method achieves superior performance compared to existing sketch-based reconstruction methods in terms of Chamfer Distance, Earth Mover's Distance, and F-Score, verifying the effectiveness of the proposed semantic bridging and symmetry-aware design.
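The evaluation metrics named in the abstract (Chamfer Distance, F-Score) and a reflective symmetry prior of the kind the paper describes can be sketched on raw point clouds. This is a minimal NumPy illustration under assumptions of our own: the function names, the F-Score threshold, and the mirror-plane formulation are illustrative, not taken from the paper.

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3):
    # mean nearest-neighbour distance in both directions.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def f_score(p, q, tau=0.01):
    # F-Score at threshold tau: harmonic mean of precision (fraction of predicted
    # points within tau of the target) and recall (the reverse direction).
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    return 2 * precision * recall / (precision + recall + 1e-8)

def reflective_symmetry_loss(p, axis=0):
    # One plausible symmetry prior: Chamfer Distance between a cloud and its
    # mirror image across a coordinate plane (zero for a perfectly symmetric shape).
    mirrored = p.copy()
    mirrored[:, axis] = -mirrored[:, axis]
    return chamfer_distance(p, mirrored)
```

A shape that is symmetric about the chosen plane incurs zero symmetry loss, which is what makes such a term usable as a geometric prior during generation.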
Related papers
- Sketch2PoseNet: Efficient and Generalized Sketch to 3D Human Pose Prediction [34.19632657034878]
We introduce an end-to-end data-driven framework for estimating human poses and shapes from diverse sketch styles. Our framework combines existing 2D pose detectors and generative diffusion priors for sketch feature extraction with a feed-forward neural network for efficient 2D pose estimation. Our model substantially surpasses previous ones in both estimation accuracy and speed for sketch-to-pose tasks.
arXiv Detail & Related papers (2025-10-30T07:13:46Z) - From One Single Sketch to 3D Detailed Face Reconstruction [0.5937476291232802]
3D face reconstruction from a single sketch is a critical yet underexplored task with significant practical applications. We introduce Sketch-1-to-3, a novel framework for realistic 3D face reconstruction from a single sketch. We show that Sketch-1-to-3 achieves state-of-the-art performance in sketch-based 3D face reconstruction.
arXiv Detail & Related papers (2025-02-25T04:58:17Z) - Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes Sketch3D, a novel generation paradigm that produces realistic 3D assets whose shape is aligned with the input sketch and whose color matches the textual description.
Three strategies are designed to optimize 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss.
arXiv Detail & Related papers (2024-04-02T11:03:24Z) - Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z) - Uncertainty-Aware Cross-Modal Transfer Network for Sketch-Based 3D Shape Retrieval [8.765045867163646]
This paper presents an uncertainty-aware cross-modal transfer network (UACTN) that addresses this issue.
We first introduce an end-to-end classification-based approach that simultaneously learns sketch features and uncertainty.
Then, 3D shape features are mapped into the pre-learned sketch embedding space for feature alignment.
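The alignment step this entry describes, mapping 3D shape features into a pre-learned sketch embedding space, can be illustrated with a toy linear projection fitted by least squares. All names, dimensions, and the linear form here are assumptions for illustration; UACTN itself learns this mapping with deep networks.

```python
import numpy as np

# Hypothetical paired features: rows are paired (shape, sketch) training examples.
rng = np.random.default_rng(0)
shape_feats = rng.normal(size=(100, 16))   # features from a 3D-shape encoder (assumed)
true_map = rng.normal(size=(16, 8))
sketch_embed = shape_feats @ true_map       # frozen, pre-learned sketch embeddings (assumed)

# Fit the alignment map W by minimizing ||shape_feats @ W - sketch_embed||_F^2,
# leaving the sketch embedding space untouched.
W, *_ = np.linalg.lstsq(shape_feats, sketch_embed, rcond=None)
aligned = shape_feats @ W                   # shape features projected into sketch space
```

Keeping the sketch space fixed and moving only the shape features into it is the design choice the summary highlights: the retrieval space is anchored by the sketch branch.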
arXiv Detail & Related papers (2023-08-11T05:46:52Z) - Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for the sketch to mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z) - Joint Deep Multi-Graph Matching and 3D Geometry Learning from Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
We in addition obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z) - On Learning Semantic Representations for Million-Scale Free-Hand Sketches [146.52892067335128]
We study learning semantic representations for million-scale free-hand sketches.
We propose a dual-branch CNN-RNN network architecture to represent sketches.
We explore learning the sketch-oriented semantic representations in hashing retrieval and zero-shot recognition.
arXiv Detail & Related papers (2020-07-07T15:23:22Z) - SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.