Robust Geometry and Reflectance Disentanglement for 3D Face
Reconstruction from Sparse-view Images
- URL: http://arxiv.org/abs/2312.06085v1
- Date: Mon, 11 Dec 2023 03:14:58 GMT
- Title: Robust Geometry and Reflectance Disentanglement for 3D Face
Reconstruction from Sparse-view Images
- Authors: Daisheng Jin, Jiangbei Hu, Baixin Xu, Yuxin Dai, Chen Qian, Ying He
- Abstract summary: This paper presents a novel two-stage approach for reconstructing human faces from sparse-view images.
Our method focuses on decomposing key facial attributes, including geometry, diffuse reflectance, and specular reflectance, from ambient light.
- Score: 12.648827250749587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel two-stage approach for reconstructing human faces
from sparse-view images, a task made challenging by the unique geometry and
complex skin reflectance of each individual. Our method focuses on decomposing
key facial attributes, including geometry, diffuse reflectance, and specular
reflectance, from ambient light. Initially, we create a general facial template
from a diverse collection of individual faces, capturing essential geometric
and reflectance characteristics. Guided by this template, we refine each
specific face model in the second stage, which further considers the
interaction between geometry and reflectance, as well as the subsurface
scattering effects on facial skin. Our method enables the reconstruction of
high-quality facial representations from as few as three images, offering
improved geometric accuracy and reflectance detail. Through comprehensive
evaluations and comparisons, our method demonstrates superiority over existing
techniques. Our method effectively disentangles geometry and reflectance
components, leading to enhanced quality in synthesizing new views and opening
up possibilities for applications such as relighting and reflectance editing.
We will make the code publicly available.
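The decomposition the abstract describes — separating appearance into geometry (normals), diffuse reflectance, and specular reflectance — can be illustrated with a toy shading model. This is only a sketch under simplified assumptions (Lambertian diffuse plus Blinn-Phong specular, a single directional light); the paper's actual model additionally handles ambient lighting, subsurface scattering, and a learned facial template, and all names here are hypothetical:

```python
import numpy as np

def shade(normals, view_dirs, albedo, spec_weight, roughness, light_dir, light_rgb):
    """Toy diffuse + specular decomposition (NOT the paper's model).

    normals, view_dirs: (N, 3) unit vectors per surface point
    albedo: (N, 3) diffuse reflectance; spec_weight, roughness: scalars
    light_dir: (3,) unit vector; light_rgb: (3,) light intensity
    """
    # Lambertian diffuse term: albedo * max(n . l, 0)
    ndotl = np.clip(np.sum(normals * light_dir, axis=-1, keepdims=True), 0.0, None)
    diffuse = albedo * ndotl * light_rgb

    # Blinn-Phong specular term: w * max(n . h, 0)^s, with half-vector h
    half = light_dir + view_dirs
    half = half / np.linalg.norm(half, axis=-1, keepdims=True)
    ndoth = np.clip(np.sum(normals * half, axis=-1, keepdims=True), 0.0, None)
    shininess = 2.0 / np.maximum(roughness, 1e-4) ** 2
    specular = spec_weight * ndoth ** shininess * light_rgb

    return diffuse + specular
```

Because the diffuse and specular terms are computed separately, either component can be edited or relit independently once disentangled, which is the practical payoff the abstract points to.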
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Monocular Identity-Conditioned Facial Reflectance Reconstruction [71.90507628715388]
Existing methods rely on a large amount of light-stage captured data to learn facial reflectance models.
We learn the reflectance prior in image space rather than UV space and present a framework named ID2Reflectance.
Our framework can directly estimate the reflectance maps of a single image while using limited reflectance data for training.
arXiv Detail & Related papers (2024-03-30T09:43:40Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- S2F2: Self-Supervised High Fidelity Face Reconstruction from Monocular Image [2.469794902645761]
We present a novel face reconstruction method capable of reconstructing detailed face geometry and spatially varying face reflectance from a single image.
Compared to state-of-the-art methods, our method achieves more visually appealing reconstruction.
arXiv Detail & Related papers (2022-03-15T08:55:45Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Monocular Reconstruction of Neural Face Reflectance Fields [0.0]
The reflectance field of a face describes the reflectance properties responsible for complex lighting effects.
Most existing methods for estimating the face reflectance from a monocular image assume faces to be diffuse with very few approaches adding a specular component.
We present a new neural representation for face reflectance where we can estimate all components of the reflectance responsible for the final appearance from a single monocular image.
arXiv Detail & Related papers (2020-08-24T08:19:05Z)
- Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency [40.56510679634943]
We propose a self-supervised training architecture by leveraging the multi-view geometry consistency.
We design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss.
Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions.
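The three loss functions named above can be sketched in simplified form. These are only plausible shapes inferred from the loss names, not that paper's implementation; the warped inputs and the fundamental matrix F are assumed to come from elsewhere in the pipeline:

```python
import numpy as np

def pixel_consistency_loss(img_a, img_b_warped, mask):
    """Masked L1 photometric difference between view A and view B
    warped into A's frame (warping step assumed, not shown)."""
    return np.abs((img_a - img_b_warped) * mask).sum() / np.maximum(mask.sum(), 1)

def depth_consistency_loss(depth_a, depth_b_warped, mask):
    """Masked L1 agreement of rendered depths across views after warping."""
    return np.abs((depth_a - depth_b_warped) * mask).sum() / np.maximum(mask.sum(), 1)

def epipolar_loss(pts_a, pts_b, F):
    """Landmark-based epipolar constraint: x_b^T F x_a = 0 for
    corresponding facial landmarks, with F the fundamental matrix."""
    pts_a_h = np.hstack([pts_a, np.ones((len(pts_a), 1))])
    pts_b_h = np.hstack([pts_b, np.ones((len(pts_b), 1))])
    residuals = np.einsum("ni,ij,nj->n", pts_b_h, F, pts_a_h)
    return np.mean(residuals ** 2)
```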
arXiv Detail & Related papers (2020-07-24T12:36:09Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.