High-Fidelity 3D Digital Human Head Creation from RGB-D Selfies
- URL: http://arxiv.org/abs/2010.05562v2
- Date: Tue, 29 Jun 2021 09:51:51 GMT
- Title: High-Fidelity 3D Digital Human Head Creation from RGB-D Selfies
- Authors: Linchao Bao, Xiangkai Lin, Yajing Chen, Haoxian Zhang, Sheng Wang,
Xuefei Zhe, Di Kang, Haozhi Huang, Xinwei Jiang, Jue Wang, Dong Yu, Zhengyou
Zhang
- Abstract summary: We present a fully automatic system that can produce high-fidelity, photo-realistic 3D digital human heads with a consumer RGB-D selfie camera.
The system only needs the user to take a short selfie RGB-D video while rotating his/her head, and can produce a high-quality head reconstruction in less than 30 seconds.
- Score: 41.74253269778287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a fully automatic system that can produce high-fidelity,
photo-realistic 3D digital human heads with a consumer RGB-D selfie camera. The
system only needs the user to take a short selfie RGB-D video while rotating
his/her head, and can produce a high-quality head reconstruction in less than
30 seconds. Our main contribution is a new facial geometry modeling and
reflectance synthesis procedure that significantly improves the
state-of-the-art. Specifically, given the input video, a two-stage frame
selection procedure is first employed to select a few high-quality frames for
reconstruction. Then a differentiable-renderer-based 3D Morphable Model (3DMM)
fitting algorithm is applied to recover facial geometries from multiview RGB-D
data, which takes advantage of a powerful 3DMM basis constructed with
extensive data generation and perturbation. Our 3DMM has a much larger
expressive capacity than conventional 3DMMs, allowing us to recover more
accurate facial geometry using merely a linear basis. For reflectance
synthesis, we present a hybrid approach that combines parametric fitting and
CNNs to synthesize high-resolution albedo/normal maps with realistic
hair/pore/wrinkle details. Results show that our system can produce faithful
3D digital human faces with extremely realistic details. The main code and the
newly constructed 3DMM basis are publicly available.
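To make the linear-3DMM idea concrete, here is a minimal PyTorch sketch of recovering 3DMM coefficients by gradient descent. Everything in it is illustrative: the dimensions, the random stand-in mean shape and basis, and the simple point-to-point loss are assumptions, not the paper's pipeline, which optimizes photometric and depth terms through a differentiable renderer over multiview RGB-D frames.

```python
# Minimal sketch of linear 3DMM fitting (illustrative only; the sizes,
# stand-in data, and loss are placeholders, not the paper's actual system).
import torch

N_VERTS, N_BASIS = 5000, 200  # hypothetical mesh and basis sizes

# Linear 3DMM: vertices = mean_shape + basis @ coeffs
mean_shape = torch.randn(N_VERTS * 3)             # stand-in for a real mean shape
basis = torch.randn(N_VERTS * 3, N_BASIS) * 0.01  # stand-in for a learned basis

def reconstruct(coeffs: torch.Tensor) -> torch.Tensor:
    """Reconstruct (N_VERTS, 3) vertex positions from 3DMM coefficients."""
    return (mean_shape + basis @ coeffs).view(N_VERTS, 3)

# Synthetic "observed" geometry; in the paper this signal would come from
# multiview RGB-D frames compared through a differentiable renderer.
true_coeffs = torch.randn(N_BASIS)
target = reconstruct(true_coeffs).detach()

coeffs = torch.zeros(N_BASIS, requires_grad=True)
opt = torch.optim.Adam([coeffs], lr=0.05)

for step in range(500):
    opt.zero_grad()
    pred = reconstruct(coeffs)
    # Geometric data term plus a coefficient regularizer; the real objective
    # also includes photometric terms rendered from the fitted model.
    loss = (pred - target).pow(2).mean() + 1e-4 * coeffs.pow(2).mean()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.6f}")
```

Because the model is purely linear in its coefficients, the fit stays a smooth least-squares-style optimization; this is the property the abstract points to when noting that a sufficiently expressive linear basis can recover accurate geometry without nonlinear shape decoders.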
Related papers
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- VRMM: A Volumetric Relightable Morphable Head Model [55.21098471673929]
We introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.
Our framework efficiently disentangles and encodes latent spaces of identity, expression, and lighting into low-dimensional representations.
We demonstrate the versatility and effectiveness of VRMM through various applications like avatar generation, facial reconstruction, and animation.
arXiv Detail & Related papers (2024-02-06T15:55:46Z)
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
- SIRA: Relightable Avatars from a Single Image [19.69326772087838]
We introduce SIRA, a method which reconstructs human head avatars with high-fidelity geometry and factorized lights and surface materials.
Our key ingredients are two data-driven statistical models based on neural fields that resolve the ambiguities of single-view 3D surface reconstruction and appearance factorization.
arXiv Detail & Related papers (2022-09-07T09:47:46Z)
- FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable Model from a Hybrid Dataset [36.688730105295015]
FaceVerse is built from hybrid East Asian face datasets containing 60K fused RGB-D images and 2K high-fidelity 3D head scan models.
In the coarse module, we generate a base parametric model from large-scale RGB-D images, which is able to predict accurate rough 3D face models across different genders, ages, etc.
In the fine module, a conditional StyleGAN architecture trained with high-fidelity scan models is introduced to enrich elaborate facial geometry and texture details.
arXiv Detail & Related papers (2022-03-26T12:13:14Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing art by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.