SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D
Animalmorphic Head Design
- URL: http://arxiv.org/abs/2108.02548v1
- Date: Thu, 5 Aug 2021 12:17:36 GMT
- Title: SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D
Animalmorphic Head Design
- Authors: Zhongjin Luo and Jie Zhou and Heming Zhu and Dong Du and Xiaoguang Han
and Hongbo Fu
- Abstract summary: We propose SimpModeling, a novel sketch-based system for helping users, especially amateur users, easily model 3D animalmorphic heads.
We use advanced implicit-based shape inference methods, which have a strong ability to handle the domain gap between freehand sketches and the synthetic ones used for training.
We also contribute a dataset of high-quality 3D animal heads manually created by artists.
- Score: 40.821865912127635
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Head shapes play an important role in 3D character design. In this
work, we propose SimpModeling, a novel sketch-based system that helps users,
especially amateur users, easily model 3D animalmorphic heads - a prevalent
kind of head in character design. Although sketching provides an easy way to
depict desired shapes, it is challenging to infer dense geometric information
from sparse line drawings. Recently, deep-network-based approaches have been
proposed to address this challenge and to produce rich geometric details from
very few strokes. However, while such methods reduce users' workload, they
offer less control over the target shape, mainly due to the uncertainty of
neural prediction. Our system tackles this issue and provides good
controllability in three respects: 1) we separate coarse shape design and
geometric detail specification into two stages and provide different sketching
means for each; 2) in coarse shape design, sketches serve both for shape
inference and as geometric constraints that determine the global geometry,
while in geometric detail crafting, sketches are used to carve surface details;
3) in both stages, we use advanced implicit-based shape inference methods,
which have a strong ability to handle the domain gap between freehand sketches
and the synthetic ones used for training. Experimental results confirm the
effectiveness of our method and the usability of our interactive system. We
also contribute a dataset of high-quality 3D animal heads manually created by
artists.
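To make the two-stage workflow concrete, below is a minimal, hypothetical Python sketch of how a sketch-conditioned implicit field can guide explicit mesh extraction. The `SketchConditionedImplicitNet` stub, its input signature, and the use of `skimage.measure.marching_cubes` for mesh extraction are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical two-stage sketch: coarse shape design, then geometric detail
# crafting, each driven by a sketch-conditioned implicit field. The network
# below is an untrained stand-in, NOT the SimpModeling release.
import torch
import torch.nn as nn
from skimage import measure


class SketchConditionedImplicitNet(nn.Module):
    """Stub: maps (sketch features, 3D query points) to occupancy in [0, 1]."""

    def __init__(self, sketch_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sketch_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, sketch_feat: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # sketch_feat: (B, sketch_dim); points: (B, N, 3) -> occupancy (B, N)
        cond = sketch_feat.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([cond, points], dim=-1)).squeeze(-1)


def implicit_to_mesh(net: nn.Module, sketch_feat: torch.Tensor, res: int = 64):
    """Sample the implicit field on a dense grid and extract a mesh (marching cubes)."""
    lin = torch.linspace(-1.0, 1.0, res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1).reshape(1, -1, 3)
    with torch.no_grad():
        occ = net(sketch_feat, grid).reshape(res, res, res).numpy()
    # A trained occupancy network would use level=0.5; the mean keeps the untrained stub robust.
    verts, faces, _, _ = measure.marching_cubes(occ, level=float(occ.mean()))
    return verts, faces


# Stage 1: coarse shape design -- a coarse-sketch encoding drives the global geometry.
coarse_net, coarse_feat = SketchConditionedImplicitNet(), torch.randn(1, 256)
coarse_verts, coarse_faces = implicit_to_mesh(coarse_net, coarse_feat, res=64)

# Stage 2: geometric detail crafting -- detail strokes condition a second network
# evaluated at a finer resolution to carve surface details.
detail_net, detail_feat = SketchConditionedImplicitNet(), torch.randn(1, 256)
detail_verts, detail_faces = implicit_to_mesh(detail_net, detail_feat, res=96)
```

In the actual system, the coarse stage also uses strokes as geometric constraints on the inferred shape and the detail stage carves the existing surface rather than regenerating it; the stub above only illustrates the implicit-to-explicit hand-off shared by both stages.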
Related papers
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity
3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM).
It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z) - 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z) - Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch
to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z) - SingleSketch2Mesh : Generating 3D Mesh model from Sketch [1.6973426830397942]
Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
arXiv Detail & Related papers (2022-03-07T06:30:36Z) - Sketch2PQ: Freeform Planar Quadrilateral Mesh Design via a Single Sketch [36.10997511325458]
We present a novel sketch-based system to bridge the concept design and digital modeling of freeform roof-like shapes.
Our system allows the user to sketch the surface boundary and contour lines under axonometric projection.
We propose a deep neural network to infer in real-time the underlying surface shape along with a dense conjugate direction field.
arXiv Detail & Related papers (2022-01-23T21:09:59Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z) - 3D Shape Reconstruction from Free-Hand Sketches [42.15888734492648]
Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from insufficient training data and the diversity of free-hand sketches.
arXiv Detail & Related papers (2020-06-17T07:43:10Z)