3D Shape Reconstruction from Free-Hand Sketches
- URL: http://arxiv.org/abs/2006.09694v2
- Date: Wed, 19 Jan 2022 03:35:23 GMT
- Title: 3D Shape Reconstruction from Free-Hand Sketches
- Authors: Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu
- Abstract summary: Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from insufficient training data and the diversity of free-hand sketches.
- Score: 42.15888734492648
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketches are the most abstract 2D representations of real-world objects.
Although a sketch usually has geometrical distortion and lacks visual cues,
humans can effortlessly envision a 3D object from it. This suggests that
sketches encode the information necessary for reconstructing 3D shapes. Despite
great progress achieved in 3D reconstruction from distortion-free line
drawings, such as CAD and edge maps, little effort has been made to reconstruct
3D shapes from free-hand sketches. We study this task and aim to enhance the
power of sketches in 3D-related applications such as interactive design and
VR/AR games.
Unlike previous works, which mostly study distortion-free line drawings, our
3D shape reconstruction is based on free-hand sketches. A major challenge for
free-hand sketch 3D reconstruction comes from insufficient training data
and the diversity of free-hand sketches, e.g., individualized sketching styles. We thus
propose data generation and standardization mechanisms. Instead of
distortion-free line drawings, synthesized sketches are adopted as input
training data. Additionally, we propose a sketch standardization module to
handle different sketch distortions and styles. Extensive experiments
demonstrate the effectiveness of our model and its strong generalizability to
various free-hand sketches. Our code is publicly available at
https://github.com/samaonline/3D-Shape-Reconstruction-from-Free-Hand-Sketches.
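The linked repository contains the authors' implementation; the outline below is only a minimal, hypothetical PyTorch sketch of the two-stage idea the abstract describes (a standardization module that normalizes a free-hand sketch, followed by a reconstruction network). All module names, architectures, and tensor shapes are assumptions for illustration, not the released code.

```python
# Hypothetical two-stage pipeline: (1) standardize a free-hand sketch into a
# normalized line drawing, (2) reconstruct a 3D point cloud from it.
# Names and shapes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class SketchStandardizer(nn.Module):
    """Image-to-image network that reduces stroke distortion and style variation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, sketch):               # (B, 1, H, W) in [0, 1]
        return self.net(sketch)

class SketchTo3D(nn.Module):
    """CNN encoder + MLP decoder producing an N-point point cloud."""
    def __init__(self, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )
    def forward(self, sketch):
        z = self.encoder(sketch)
        return self.decoder(z).view(-1, self.num_points, 3)

standardize, reconstruct = SketchStandardizer(), SketchTo3D()
sketch = torch.rand(2, 1, 128, 128)           # batch of free-hand sketches
points = reconstruct(standardize(sketch))
print(points.shape)                           # torch.Size([2, 1024, 3])
```

One plausible reading of the proposed mechanisms is that the standardizer is trained on synthesized, deliberately distorted sketches so that it learns to map diverse styles back toward clean line drawings before reconstruction.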
Related papers
- Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering [17.918603435615335]
3D sketches are widely used for visually representing the 3D shape and structure of objects or scenes.
We propose Diff3DS, a novel differentiable framework for generating view-consistent 3D sketches.
Our framework bridges the domains of 3D sketches and customized images, achieving end-to-end optimization of 3D sketches.
arXiv Detail & Related papers (2024-05-24T07:48:14Z) - Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets whose shape is aligned with the input sketch and whose color matches the textual description.
Three strategies are designed to optimize 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss (a toy combination of these three terms is sketched after this entry).
arXiv Detail & Related papers (2024-04-02T11:03:24Z) - SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D
- SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction [18.443079472919635]
We propose a sketch-driven multi-faceted decoder network termed SketchBodyNet to address this task.
Our network achieves superior performance in reconstructing 3D human meshes from freehand sketches.
arXiv Detail & Related papers (2023-10-10T12:38:34Z) - 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z) - Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch
to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating stereoscopic, 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z) - Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples 2D modalities and implicit 3D representations through shared latent spaces (a toy illustration of this coupling follows this entry).
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z) - SingleSketch2Mesh : Generating 3D Mesh model from Sketch [1.6973426830397942]
- SingleSketch2Mesh: Generating 3D Mesh model from Sketch [1.6973426830397942]
Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
arXiv Detail & Related papers (2022-03-07T06:30:36Z) - Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.