Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
- URL: http://arxiv.org/abs/2309.13006v1
- Date: Fri, 22 Sep 2023 17:12:13 GMT
- Title: Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
- Authors: Tianrun Chen, Chenglong Fu, Ying Zang, Lanyun Zhu, Jia Zhang, Papa Mao, Lingyun Sun
- Abstract summary: We introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling from only a single free-hand sketch, without requiring multiple sketches or view information.
Experiments demonstrate the effectiveness of our approach, achieving state-of-the-art (SOTA) performance on both synthetic and real datasets.
- Score: 15.426513559370086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid development of AR/VR brings tremendous demands for 3D content.
While the widely-used Computer-Aided Design (CAD) method requires a
time-consuming and labor-intensive modeling process, sketch-based 3D modeling
offers a potential solution as a natural form of computer-human interaction.
However, the sparsity and ambiguity of sketches make it challenging to generate
high-fidelity content reflecting creators' ideas. Precise drawing from multiple
views or strategic step-by-step drawings is often required to tackle the
challenge but is not friendly to novice users. In this work, we introduce a
novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only
a single free-hand sketch without inputting multiple sketches or view
information. Specifically, we introduce a lightweight generation network for
efficient real-time inference and a structure-aware adversarial training
approach with a Stroke Enhancement Module (SEM) that captures structural
information to facilitate learning of realistic, finely detailed shape
structures for high-fidelity results. Extensive experiments demonstrate the
effectiveness of our approach, achieving state-of-the-art (SOTA) performance
on both synthetic and real datasets.
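To make the pipeline concrete, below is a minimal PyTorch sketch of the architecture as we read the abstract: a lightweight encoder maps a single sketch to a latent code, which is decoded into per-vertex offsets of a template mesh, with the Stroke Enhancement Module modeled as a learned feature re-weighting. All module names, shapes, and the SEM design here are our assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class StrokeEnhancementModule(nn.Module):
    """Assumed SEM: re-weights encoder features with a stroke-attention mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats * self.mask(feats)  # emphasize stroke-like feature locations

class Deep3DSketchPlusSketch(nn.Module):
    """Single free-hand sketch (1xHxW) -> per-vertex offsets of a template mesh."""
    def __init__(self, num_vertices: int = 642, latent_dim: int = 256):
        super().__init__()
        self.num_vertices = num_vertices
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.sem = StrokeEnhancementModule(128)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.decoder = nn.Sequential(
            nn.Linear(128, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, num_vertices * 3),
        )

    def forward(self, sketch: torch.Tensor) -> torch.Tensor:
        feats = self.sem(self.encoder(sketch))   # stroke-enhanced feature map
        z = self.pool(feats).flatten(1)          # (B, 128) shape code
        return self.decoder(z).view(-1, self.num_vertices, 3)

model = Deep3DSketchPlusSketch()
offsets = model(torch.rand(1, 1, 256, 256))
print(offsets.shape)  # torch.Size([1, 642, 3])
```

Per the abstract, training is adversarial and structure-aware; a discriminator over rendered views would sit on top of this generator, but that training loop is not shown here.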
Related papers
- Magic3DSketch: Create Colorful 3D Models From Sketch-Based 3D Modeling Guided by Text and Language-Image Pre-Training [2.9600148687385786]
Traditional methods like Computer-Aided Design (CAD) are often too labor-intensive and skill-demanding, making them challenging for novice users.
Our proposed method, Magic3DSketch, employs a novel technique that encodes sketches to predict a 3D mesh, guided by text descriptions.
Our method is also more useful and offers a higher degree of controllability compared to existing text-to-3D approaches.
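As a rough illustration of what guidance from language-image pre-training can look like in such a pipeline, here is a hedged sketch of a CLIP-based guidance loss applied to differentiably rendered views of a predicted mesh. The rendering step is stubbed out, and this is a generic reading inferred from the title, not the Magic3DSketch implementation.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

# Load on CPU for this illustrative snippet; the model stays frozen.
model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

def clip_guidance_loss(rendered: torch.Tensor, prompt: str) -> torch.Tensor:
    """rendered: (B, 3, 224, 224) differentiably rendered mesh views.
    CLIP's input normalization is omitted here for brevity."""
    text = clip.tokenize([prompt])
    image_feat = model.encode_image(rendered)  # (B, 512)
    text_feat = model.encode_text(text)        # (1, 512)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    # Pull every rendered view toward the text prompt in CLIP space.
    return (1.0 - image_feat @ text_feat.t()).mean()

loss = clip_guidance_loss(torch.rand(2, 3, 224, 224), "a red sports car")
print(loss.item())
```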
arXiv Detail & Related papers (2024-07-27T09:59:13Z)
- CharNeRF: 3D Character Generation from Concept Art [3.8061090528695786]
We present a novel approach to create volumetric representations of 3D characters from consistent turnaround concept art.
We train the network to make use of these priors for various 3D points through a learnable view-direction-attended multi-head self-attention layer.
Our model is able to generate high-quality 360-degree views of characters.
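A minimal sketch of the "view-direction-attended" self-attention described above, as we interpret it: a query built from a 3D point and the target view direction attends over per-view features from the turnaround concept art. The shapes and conditioning scheme are our assumptions.

```python
import torch
import torch.nn as nn

class ViewDirectionAttention(nn.Module):
    def __init__(self, feat_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(3 + 3, feat_dim)  # point xyz + view direction
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, xyz, view_dir, view_feats):
        """xyz, view_dir: (B, 3); view_feats: (B, V, feat_dim) from V concept-art views."""
        q = self.query_proj(torch.cat([xyz, view_dir], dim=-1)).unsqueeze(1)  # (B, 1, D)
        fused, _ = self.attn(q, view_feats, view_feats)  # attend over the V views
        return fused.squeeze(1)  # aggregated feature, fed to the radiance-field MLP

layer = ViewDirectionAttention()
out = layer(torch.rand(2, 3), torch.rand(2, 3), torch.rand(2, 4, 128))
print(out.shape)  # torch.Size([2, 128])
```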
arXiv Detail & Related papers (2024-02-27T01:22:08Z)
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation [13.47191379827792]
We investigate how large pre-trained models can be used to generate 3D shapes from sketches.
We find that conditioning a 3D generative model on the features of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time.
This suggests that the large pre-trained vision model features carry semantic signals that are resilient to domain shifts.
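A hedged sketch of the conditioning scheme this summary describes: a frozen, large pre-trained vision encoder featurizes synthetic renderings during training and free-hand sketches at inference, while only the 3D generator is trained. The encoder choice (a torchvision ResNet-50 stand-in) and the voxel-grid generator are placeholders, not the paper's actual components.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = nn.Identity()              # expose 2048-d pooled features
backbone.eval().requires_grad_(False)    # frozen: never trained

generator = nn.Sequential(               # stand-in for the 3D generative model
    nn.Linear(2048, 512), nn.ReLU(),
    nn.Linear(512, 32 ** 3),             # e.g. a 32^3 occupancy grid
)

def generate(image_batch: torch.Tensor) -> torch.Tensor:
    """image_batch: (B, 3, 224, 224) renderings (train) or sketches (test)."""
    with torch.no_grad():
        cond = backbone(image_batch)     # frozen features carry the semantics
    return generator(cond).view(-1, 32, 32, 32)

voxels = generate(torch.rand(2, 3, 224, 224))
print(voxels.shape)  # torch.Size([2, 32, 32, 32])
```

The key design choice mirrored here is that the same frozen feature space serves both domains, which is what makes the sketch-time domain shift survivable.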
arXiv Detail & Related papers (2023-07-08T00:45:01Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM).
It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
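A minimal illustration of encoding sketch parts into ViT-style patch tokens, per the SENS summary above. The patch size, dimensions, and the use of a soft part mask to pool tokens per part are our assumptions for illustration.

```python
import torch
import torch.nn as nn

patch, dim = 16, 256
to_tokens = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # patch embedding
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=4,
)

def encode_parts(sketch: torch.Tensor, part_mask: torch.Tensor) -> torch.Tensor:
    """sketch: (B, 1, 224, 224); part_mask: (B, P, 14, 14) soft part assignment
    per patch. Returns one embedding per part: (B, P, dim)."""
    tokens = to_tokens(sketch).flatten(2).transpose(1, 2)  # (B, 196, dim)
    tokens = encoder(tokens)                               # contextualized patches
    weights = part_mask.flatten(2)                         # (B, P, 196)
    weights = weights / weights.sum(dim=-1, keepdim=True).clamp(min=1e-6)
    return weights @ tokens                                # per-part pooled tokens

parts = encode_parts(torch.rand(2, 1, 224, 224), torch.rand(2, 5, 14, 14))
print(parts.shape)  # torch.Size([2, 5, 256])
```

Per-part embeddings like these are what make the part-level refinement and reconstruction described above possible.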
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
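For context on the tri-plane representation that such 3D-aware generators rely on, here is a small sketch of the standard tri-plane lookup: a 3D point is projected onto three axis-aligned feature planes, and the bilinearly sampled features are summed. This is a generic illustration; resolutions and channel counts are arbitrary, and it is not the SSSP code.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
    """planes: (B, 3, C, H, W) for the XY, XZ, YZ planes; xyz: (B, N, 3) in [-1, 1].
    Returns per-point features (B, N, C)."""
    coords = [xyz[..., [0, 1]], xyz[..., [0, 2]], xyz[..., [1, 2]]]
    feats = 0
    for i, uv in enumerate(coords):
        grid = uv.unsqueeze(1)  # (B, 1, N, 2) sampling grid
        f = F.grid_sample(planes[:, i], grid, align_corners=True)  # (B, C, 1, N)
        feats = feats + f.squeeze(2).transpose(1, 2)  # accumulate (B, N, C)
    return feats

feats = sample_triplane(torch.rand(2, 3, 32, 64, 64), torch.rand(2, 100, 3) * 2 - 1)
print(feats.shape)  # torch.Size([2, 100, 32])
```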
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z)
- Monocular Real-time Hand Shape and Motion Capture using Multi-modal Data [77.34069717612493]
We present a novel method for monocular hand shape and pose estimation at an unprecedented runtime of 100 fps.
This is enabled by a new learning-based architecture designed so that it can make use of all available sources of hand training data.
It features a 3D hand joint detection module and an inverse kinematics module which regresses not only 3D joint positions but also maps them to joint rotations in a single feed-forward pass.
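A schematic of the two-module design this summary describes: a detector regresses 3D joint positions from the image, and an inverse kinematics network maps them to per-joint rotations within the same forward pass. The joint count and the axis-angle rotation parameterization are assumptions, and both sub-networks are simplified stand-ins.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 21  # a common hand-skeleton convention

class HandCaptureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.detector = nn.Sequential(       # stand-in for the 3D joint detector
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, NUM_JOINTS * 3),
        )
        self.ik_net = nn.Sequential(         # joint positions -> joint rotations
            nn.Linear(NUM_JOINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, NUM_JOINTS * 3),  # axis-angle per joint
        )

    def forward(self, image: torch.Tensor):
        joints = self.detector(image).view(-1, NUM_JOINTS, 3)
        rotations = self.ik_net(joints.flatten(1)).view(-1, NUM_JOINTS, 3)
        return joints, rotations             # both from a single feed-forward pass

net = HandCaptureNet()
joints, rots = net(torch.rand(1, 3, 128, 128))
print(joints.shape, rots.shape)  # (1, 21, 3) (1, 21, 3)
```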
arXiv Detail & Related papers (2020-03-21T03:51:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.