Sketch2Model: View-Aware 3D Modeling from Single Free-Hand Sketches
- URL: http://arxiv.org/abs/2105.06663v1
- Date: Fri, 14 May 2021 06:27:48 GMT
- Title: Sketch2Model: View-Aware 3D Modeling from Single Free-Hand Sketches
- Authors: Song-Hai Zhang, Yuan-Chen Guo, Qing-Wen Gu
- Abstract summary: We investigate the problem of generating 3D meshes from single free-hand sketches, aiming at fast 3D modeling for novice users.
We address the importance of viewpoint specification for overcoming ambiguities, and propose a novel view-aware generation approach.
- Score: 4.781615891172263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the problem of generating 3D meshes from single free-hand
sketches, aiming at fast 3D modeling for novice users. It can be regarded as a
single-view reconstruction problem, but with unique challenges, brought by the
variation and conciseness of sketches. Ambiguities in poorly-drawn sketches
could make it hard to determine how the sketched object is posed. In this
paper, we address the importance of viewpoint specification for overcoming such
ambiguities, and propose a novel view-aware generation approach. By explicitly
conditioning the generation process on a given viewpoint, our method can
generate plausible shapes automatically with predicted viewpoints, or with
specified viewpoints to help users better express their intentions. Extensive
evaluations on various datasets demonstrate the effectiveness of our view-aware
design in solving sketch ambiguities and improving reconstruction quality.
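The key mechanism, explicitly conditioning generation on a viewpoint, can be illustrated with a minimal PyTorch-style sketch. All module names, layer sizes, and the template-mesh setup below are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class ViewAwareSketch2Mesh(nn.Module):
    """Minimal sketch of view-aware generation: a sketch encoder whose
    latent code is fused with an explicit viewpoint embedding before a
    decoder predicts per-vertex offsets of a template mesh. Dimensions
    and layer choices are illustrative, not the paper's architecture."""
    def __init__(self, latent_dim=256, num_vertices=642):
        super().__init__()
        self.num_vertices = num_vertices
        # Encode a 1-channel sketch image into a latent shape code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Embed the viewpoint (azimuth, elevation) into the same space.
        self.view_embed = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, latent_dim),
        )
        # Decode the fused code into vertex offsets of a template mesh.
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3),
        )

    def forward(self, sketch, viewpoint):
        z_shape = self.encoder(sketch)            # (B, latent_dim)
        z_view = self.view_embed(viewpoint)       # (B, latent_dim)
        z = torch.cat([z_shape, z_view], dim=-1)  # explicit view conditioning
        offsets = self.decoder(z)
        return offsets.view(-1, self.num_vertices, 3)

model = ViewAwareSketch2Mesh()
sketch = torch.randn(1, 1, 64, 64)       # dummy sketch image
viewpoint = torch.tensor([[0.5, 0.1]])   # azimuth, elevation (radians)
vertex_offsets = model(sketch, viewpoint)  # (1, 642, 3)
```

At inference time, the viewpoint can either come from a separate prediction head or be supplied by the user, which is how the view-aware design lets users disambiguate poorly-drawn sketches.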
Related papers
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
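The ViT patch-encoding step, splitting the sketch into fixed-size patches that are linearly projected into tokens, can be sketched as follows; the patch size and embedding width are assumptions for illustration, not SENS's actual hyperparameters:

```python
import torch
import torch.nn as nn

class SketchPatchEncoder(nn.Module):
    """Minimal ViT-style patch encoder for a sketch image; the
    hyperparameters below are illustrative, not those of SENS."""
    def __init__(self, patch_size=16, embed_dim=192, img_size=224):
        super().__init__()
        # A strided convolution is the standard patchify-and-project trick.
        self.proj = nn.Conv2d(1, embed_dim,
                              kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, embed_dim))

    def forward(self, sketch):                      # (B, 1, 224, 224)
        tokens = self.proj(sketch)                  # (B, D, 14, 14)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, 196, D)
        return tokens + self.pos                    # one token per patch

tokens = SketchPatchEncoder()(torch.randn(1, 1, 224, 224))  # (1, 196, 192)
```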
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- Sketch2Cloth: Sketch-based 3D Garment Generation with Unsigned Distance Fields [12.013968508918634]
We propose Sketch2Cloth, a sketch-based 3D garment generation system using the unsigned distance fields from the user's sketch input.
Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, then extracts the mesh from the estimated field with Marching Cubes.
We also provide a model-editing function for modifying the generated mesh.
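The extraction step can be sketched with scikit-image's Marching Cubes. Since Marching Cubes needs a level-set crossing, a common workaround for unsigned fields, assumed here rather than taken from Sketch2Cloth, is to extract a thin iso-surface at a small positive threshold:

```python
import numpy as np
from skimage import measure

def udf_to_mesh(udf, level=0.5, spacing=(1.0, 1.0, 1.0)):
    """Extract a surface from a dense unsigned distance field (D, H, W).
    Marching Cubes needs values on both sides of the level, so for an
    unsigned field we extract the iso-surface at a small positive level;
    a common approximation, not necessarily Sketch2Cloth's procedure."""
    verts, faces, normals, _ = measure.marching_cubes(
        udf, level=level, spacing=spacing)
    return verts, faces, normals

# Toy UDF: distance to the surface of a sphere of radius 8 on a 32^3 grid.
grid = np.mgrid[:32, :32, :32].astype(np.float32)
dist = np.abs(np.linalg.norm(grid - 16.0, axis=0) - 8.0)
verts, faces, normals = udf_to_mesh(dist, level=0.5)
```

Note that thresholding a UDF this way produces a thin double-walled shell, a known artifact that dedicated UDF meshing methods try to avoid.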
arXiv Detail & Related papers (2023-03-01T01:45:28Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
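A tri-plane model stores features on three axis-aligned planes and queries a 3D point by projecting it onto each plane and summing the bilinearly interpolated features. The lookup below is a minimal illustration of that representation, not SSSP's code:

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, points):
    """Query tri-plane features at 3D points.
    planes: (3, C, H, W) feature maps for the XY, XZ, YZ planes.
    points: (N, 3) coordinates in [-1, 1]^3.
    Returns (N, C) features summed over the three plane projections."""
    # Project each point onto the three coordinate planes.
    coords = torch.stack([points[:, [0, 1]],   # XY plane
                          points[:, [0, 2]],   # XZ plane
                          points[:, [1, 2]]])  # YZ plane -> (3, N, 2)
    grid = coords.unsqueeze(2)                 # (3, N, 1, 2) for grid_sample
    feats = F.grid_sample(planes, grid, align_corners=True)  # (3, C, N, 1)
    return feats.squeeze(-1).sum(dim=0).transpose(0, 1)      # (N, C)

planes = torch.randn(3, 32, 64, 64)         # learned plane features
points = torch.rand(100, 3) * 2 - 1         # query points in [-1, 1]^3
features = sample_triplane(planes, points)  # (100, 32)
```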
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate potential grasp configurations for the objects depicted in a sketch.
Our model is trained and tested end-to-end, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- SingleSketch2Mesh : Generating 3D Mesh model from Sketch [1.6973426830397942]
Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
arXiv Detail & Related papers (2022-03-07T06:30:36Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
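Once per-point descriptors are learned, establishing correspondences across views reduces to nearest-neighbor matching in descriptor space. The mutual-nearest-neighbor check below is a generic matching scheme, assumed for illustration rather than taken from SketchDesc:

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Match two sets of L2-normalized descriptors (Na, D) and (Nb, D)
    by mutual nearest neighbors; a generic scheme, not SketchDesc's own."""
    sim = desc_a @ desc_b.T             # cosine similarity matrix
    nn_ab = sim.argmax(axis=1)          # best match in B for each point in A
    nn_ba = sim.argmax(axis=0)          # best match in A for each point in B
    # Keep only pairs that agree in both directions.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

rng = np.random.default_rng(0)
a = rng.normal(size=(50, 128)); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = rng.normal(size=(60, 128)); b /= np.linalg.norm(b, axis=1, keepdims=True)
matches = mutual_nearest_matches(a, b)  # list of (index_a, index_b) pairs
```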
arXiv Detail & Related papers (2020-01-16T11:31:21Z)