SketchMetaFace: A Learning-based Sketching Interface for High-fidelity
3D Character Face Modeling
- URL: http://arxiv.org/abs/2307.00804v2
- Date: Tue, 4 Jul 2023 12:21:18 GMT
- Authors: Zhongjin Luo, Dong Du, Heming Zhu, Yizhou Yu, Hongbo Fu, Xiaoguang Han
- Abstract summary: SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM).
It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Modeling 3D avatars benefits various application scenarios such as AR/VR,
gaming, and filming. Character faces contribute significant diversity and
vividness as a vital component of avatars. However, building 3D character face
models usually requires a heavy workload with commercial tools, even for
experienced artists. Various existing sketch-based tools fail to support
amateurs in modeling diverse facial shapes and rich geometric details. In this
paper, we present SketchMetaFace - a sketching system targeting amateur users
to model high-fidelity 3D faces in minutes. We carefully design both the user
interface and the underlying algorithm. First, curvature-aware strokes are
adopted to better support the controllability of carving facial details.
Second, considering the key problem of mapping a 2D sketch map to a 3D model,
we develop a novel learning-based method termed "Implicit and Depth Guided Mesh
Modeling" (IDGMM). It fuses the advantages of mesh, implicit, and depth
representations to achieve high-quality results with high efficiency. In
addition, to further support usability, we present a coarse-to-fine 2D
sketching interface design and a data-driven stroke suggestion tool. User
studies demonstrate the superiority of our system over existing modeling tools
in terms of ease of use and the visual quality of results. Experimental
analyses also show that IDGMM reaches a better trade-off between accuracy and
efficiency. SketchMetaFace is available at
https://zhongjinluo.github.io/SketchMetaFace/.
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z) - En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch
to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z) - DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance
Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z) - SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D
Animalmorphic Head Design [40.821865912127635]
We propose SimpModeling, a novel sketch-based system for helping users, especially amateur users, easily model 3D animalmorphic heads.
We use advanced implicit-based shape inference methods, which have a strong ability to handle the domain gap between freehand sketches and the synthetic ones used for training.
We also contribute a dataset of high-quality 3D animal heads, which are manually created by artists.
arXiv Detail & Related papers (2021-08-05T12:17:36Z) - Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z) - JNR: Joint-based Neural Rig Representation for Compact 3D Face Modeling [22.584569656416864]
We introduce a novel approach to learn a 3D face model using a joint-based face rig and a neural skinning network.
Thanks to the joint-based representation, our model enjoys some significant advantages over prior blendshape-based models.
arXiv Detail & Related papers (2020-07-14T01:21:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.