Monocular Reconstruction of Neural Face Reflectance Fields
- URL: http://arxiv.org/abs/2008.10247v1
- Date: Mon, 24 Aug 2020 08:19:05 GMT
- Title: Monocular Reconstruction of Neural Face Reflectance Fields
- Authors: Mallikarjun B R. (1), Ayush Tewari (1), Tae-Hyun Oh (2), Tim Weyrich
(3), Bernd Bickel (4), Hans-Peter Seidel (1), Hanspeter Pfister (5), Wojciech
Matusik (6), Mohamed Elgharib (1), Christian Theobalt (1) ((1) Max Planck
Institute for Informatics, Saarland Informatics Campus, (2) POSTECH, (3)
University College London, (4) IST Austria, (5) Harvard University, (6) MIT
CSAIL)
- Abstract summary: The reflectance field of a face describes the reflectance properties responsible for complex lighting effects.
Most existing methods for estimating the face reflectance from a monocular image assume faces to be diffuse with very few approaches adding a specular component.
We present a new neural representation for face reflectance where we can estimate all components of the reflectance responsible for the final appearance from a single monocular image.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reflectance field of a face describes the reflectance properties
responsible for complex lighting effects including diffuse, specular,
inter-reflection and self-shadowing. Most existing methods for estimating the
face reflectance from a monocular image assume faces to be diffuse, with very
few approaches adding a specular component. This still leaves out important
perceptual aspects of reflectance, as higher-order global illumination effects
and self-shadowing are not modeled. We present a new neural representation for
face reflectance where we can estimate all components of the reflectance
responsible for the final appearance from a single monocular image. Instead of
modeling each component of the reflectance separately using parametric models,
our neural representation allows us to generate a basis set of faces in a
geometric deformation-invariant space, parameterized by the input light
direction, viewpoint and face geometry. We learn to reconstruct this
reflectance field of a face just from a monocular image, which can be used to
render the face from any viewpoint in any light condition. Our method is
trained on a light-stage training dataset, which captures 300 people
illuminated with 150 light conditions from 8 viewpoints. We show that our
method outperforms existing monocular reflectance reconstruction methods, in
terms of photorealism due to better capturing of physical primitives, such as
sub-surface scattering, specularities, self-shadows and other higher-order
effects.
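The abstract describes reconstructing, from one image, a reflectance field that can then be queried for any light direction and viewpoint to relight the face. A minimal sketch of that query-and-relight interface is shown below, with a closed-form Blinn-Phong shading model standing in for the learned neural representation; all function names and parameters here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def query_reflectance_field(normal, light_dir, view_dir,
                            albedo=0.8, spec=0.3, shininess=16.0):
    """Toy stand-in for a learned reflectance field: returns the reflected
    radiance at a surface point for one light/view pair. (The paper replaces
    a closed form like this with a neural network conditioned on light
    direction, viewpoint and face geometry.)"""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = albedo * max(np.dot(n, l), 0.0)      # Lambertian term
    h = (l + v) / np.linalg.norm(l + v)            # half vector
    specular = spec * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

def relight(normal, view_dir, lights):
    """Render one surface point under an arbitrary set of directional lights
    by summing per-light queries, mimicking light-stage-style relighting."""
    return sum(query_reflectance_field(normal, l, view_dir) for l in lights)
```

Because lighting is linear, any target illumination expressible as a combination of the captured light conditions can be rendered by such a weighted sum of per-light queries; this is the property the light-stage training data (150 light conditions, 8 viewpoints) exploits.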
Related papers
- Photometric Inverse Rendering: Shading Cues Modeling and Surface Reflectance Regularization [46.146783750386994]
We propose a new method for neural inverse rendering.
Our method jointly optimizes the light source position to account for the self-shadows in images.
To enhance surface reflectance decomposition, we introduce a new regularization.
arXiv Detail & Related papers (2024-08-13T11:39:14Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Monocular Identity-Conditioned Facial Reflectance Reconstruction [71.90507628715388]
Existing methods rely on a large amount of light-stage captured data to learn facial reflectance models.
We learn the reflectance prior in image space rather than UV space and present a framework named ID2Reflectance.
Our framework can directly estimate the reflectance maps of a single image while using limited reflectance data for training.
arXiv Detail & Related papers (2024-03-30T09:43:40Z)
- Robust Geometry and Reflectance Disentanglement for 3D Face Reconstruction from Sparse-view Images [12.648827250749587]
This paper presents a novel two-stage approach for reconstructing human faces from sparse-view images.
Our method focuses on decomposing key facial attributes, including geometry, diffuse reflectance, and specular reflectance, from ambient light.
arXiv Detail & Related papers (2023-12-11T03:14:58Z)
- Ref-NeuS: Ambiguity-Reduced Neural Implicit Surface Learning for Multi-View Reconstruction with Reflection [24.23826907954389]
Ref-NeuS aims to reduce ambiguity by attenuating the effect of reflective surfaces.
We show that our model achieves high-quality surface reconstruction on reflective surfaces and outperforms the state of the art by a large margin.
arXiv Detail & Related papers (2023-03-20T03:08:22Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Predicting Surface Reflectance Properties of Outdoor Scenes Under Unknown Natural Illumination [6.767885381740952]
This paper proposes a complete framework to predict surface reflectance properties of outdoor scenes under unknown natural illumination.
We recast the problem into two constituent components involving the BRDF's incoming light and outgoing view directions.
We present experiments that show that rendering with the predicted reflectance properties results in a visually similar appearance to using textures.
arXiv Detail & Related papers (2021-05-14T13:31:47Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Polarized Reflection Removal with Perfect Alignment in the Wild [66.48211204364142]
We present a novel formulation for removing reflections from polarized images in the wild.
We first identify the misalignment issues of existing reflection removal datasets.
We build a new dataset with more than 100 types of glass in which obtained transmission images are perfectly aligned with input mixed images.
arXiv Detail & Related papers (2020-03-28T13:29:31Z)
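Several of the papers listed above (e.g. Neural Reflectance Fields, NeRFactor) render images by compositing per-point quantities along camera rays. A minimal, generic alpha-compositing ray marcher is sketched below, with plain Python callables standing in for the learned density and reflectance networks; the structure is a standard volume-rendering quadrature, not any one paper's exact formulation.

```python
import numpy as np

def march_ray(origin, direction, density_fn, radiance_fn,
              near=0.0, far=2.0, n_samples=64):
    """Alpha-composite radiance along one ray. density_fn and radiance_fn
    are stand-ins for learned networks queried at 3D sample points."""
    ts = np.linspace(near, far, n_samples)
    dt = ts[1] - ts[0]
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    sigma = np.array([density_fn(p) for p in pts])    # volume density
    color = np.array([radiance_fn(p) for p in pts])   # per-point radiance
    alpha = 1.0 - np.exp(-sigma * dt)                 # opacity of each step
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    return np.sum(weights * color)
```

Because every step is differentiable, gradients can flow from rendered pixels back into the density and reflectance networks, which is what makes this family of methods trainable from images alone.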