HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
- URL: http://arxiv.org/abs/2303.11225v2
- Date: Wed, 23 Aug 2023 11:46:57 GMT
- Title: HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details
- Authors: Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltrušaitis, HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian
- Abstract summary: HiFace aims at high-fidelity 3D face reconstruction with dynamic and static details.
We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets.
- Score: 66.74088288846491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Morphable Models (3DMMs) demonstrate great potential for reconstructing
faithful and animatable 3D facial surfaces from a single image. The facial
surface is influenced by the coarse shape, as well as the static detail (e.g.,
person-specific appearance) and dynamic detail (e.g., expression-driven
wrinkles). Previous work struggles to decouple the static and dynamic details
through image-level supervision, leading to reconstructions that are not
realistic. In this paper, we aim at high-fidelity 3D face reconstruction and
propose HiFace to explicitly model the static and dynamic details.
Specifically, the static detail is modeled as the linear combination of a
displacement basis, while the dynamic detail is modeled as the linear
interpolation of two displacement maps with polarized expressions. We exploit
several loss functions to jointly learn the coarse shape and fine details with
both synthetic and real-world datasets, which enable HiFace to reconstruct
high-fidelity 3D shapes with animatable details. Extensive quantitative and
qualitative experiments demonstrate that HiFace presents state-of-the-art
reconstruction quality and faithfully recovers both the static and dynamic
details. Our project page can be found at https://project-hiface.github.io.
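To make the abstract's modeling concrete, below is a minimal NumPy sketch of the two detail components as described: static detail as a linear combination of a displacement basis, and dynamic detail as a linear interpolation of two displacement maps with polarized (e.g., fully compressed vs. fully stretched) expressions. All names, shapes, and the per-pixel interpolation weight are illustrative assumptions rather than the paper's actual interfaces; in HiFace these quantities are predicted from the input image.

```python
import numpy as np

def static_detail(alpha, basis):
    """Static (person-specific) detail: a linear combination of a
    displacement basis. alpha: (K,) coefficients, basis: (K, H, W)."""
    return np.tensordot(alpha, basis, axes=1)  # -> (H, W)

def dynamic_detail(w, d_compressed, d_stretched):
    """Dynamic (expression-driven) detail: linear interpolation between
    two displacement maps with polarized expressions.
    w: interpolation weight(s) in [0, 1], scalar or (H, W)."""
    return w * d_compressed + (1.0 - w) * d_stretched

# Illustrative usage with random stand-ins for learned quantities.
K, H, W = 16, 256, 256
rng = np.random.default_rng(0)
alpha = rng.normal(size=K)            # person-specific coefficients
basis = rng.normal(size=(K, H, W))    # learned displacement basis
d_comp = rng.normal(size=(H, W))      # "compressed"-expression map
d_str = rng.normal(size=(H, W))       # "stretched"-expression map
w = rng.uniform(size=(H, W))          # expression-dependent weight

# Final displacement applied on top of the coarse shape.
displacement = static_detail(alpha, basis) + dynamic_detail(w, d_comp, d_str)
print(displacement.shape)  # (256, 256)
```

Separating the two terms this way is what lets the static component stay fixed per identity while only the interpolation weight changes with expression, which is the decoupling the abstract emphasizes.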
Related papers
- Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture [47.44029968307207]
We propose a novel framework for simultaneous high-fidelity recovery of object shapes and textures from single-view images.
Our approach utilizes the proposed Single-view neural implicit Shape and Radiance field (SSR) representations to leverage both explicit 3D shape supervision and volume rendering.
A distinctive feature of our framework is its ability to generate fine-grained textured meshes while seamlessly integrating rendering capabilities into the single-view 3D reconstruction model.
arXiv Detail & Related papers (2023-11-01T11:46:15Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- JIFF: Jointly-aligned Implicit Face Function for High Quality Single View Clothed Human Reconstruction [24.11991929558466]
Recent implicit-function-based methods have shown impressive results, but they fail to recover fine face details in their reconstructions.
This largely degrades user experience in applications like 3D telepresence.
We propose a novel Jointly-aligned Implicit Face Function (JIFF) that combines the merits of the implicit-function-based approach and the model-based approach.
arXiv Detail & Related papers (2022-04-22T07:43:45Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep video-realistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles; see the sketch after this list.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces, but they lack the control and correspondences that parametric models provide; such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
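The detail-consistency loss mentioned in the DECA entry above lends itself to a schematic illustration: take two images of the same subject, swap the person-specific detail code from one into the reconstruction of the other, and require the swapped result to still match the target. The sketch below encodes that idea in NumPy; the linear decoder and every name, shape, and helper here are hypothetical stand-ins for illustration, not DECA's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K_ID, K_EXP = 64, 64, 128, 53

# Hypothetical linear stand-in for a UV-displacement decoder that maps a
# person-specific detail code (delta) and expression parameters (psi)
# to a displacement map.
W_DEC = rng.normal(size=(K_ID + K_EXP, H * W)) * 0.01

def decode_displacement(delta, psi):
    """Map (detail code, expression) to a UV displacement map."""
    return (np.concatenate([delta, psi]) @ W_DEC).reshape(H, W)

def detail_consistency_loss(delta_i, psi_j, target_j):
    """Swap codes across two images of the same person: image i's detail
    code, combined with image j's expression, should still reproduce
    image j's displacement map."""
    swapped = decode_displacement(delta_i, psi_j)
    return np.mean((swapped - target_j) ** 2)

# Illustrative usage with random stand-ins for regressed quantities.
delta_i = rng.normal(size=K_ID)       # detail code from image i
psi_j = rng.normal(size=K_EXP)        # expression parameters from image j
target_j = rng.normal(size=(H, W))    # detail map to reproduce for image j
print(detail_consistency_loss(delta_i, psi_j, target_j))
```

Penalizing the swapped reconstruction pushes identity-specific detail into the detail code and expression-dependent wrinkles into the expression input, which is the disentanglement the entry describes.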