Learning a 3D Morphable Face Reflectance Model from Low-cost Data
- URL: http://arxiv.org/abs/2303.11686v1
- Date: Tue, 21 Mar 2023 09:08:30 GMT
- Title: Learning a 3D Morphable Face Reflectance Model from Low-cost Data
- Authors: Yuxuan Han, Zhibo Wang, Feng Xu
- Abstract summary: Existing works build parametric models for diffuse and specular albedo using Light Stage data.
This paper proposes the first 3D morphable face reflectance model with spatially varying BRDF using only low-cost publicly-available data.
- Score: 21.37535100469443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling non-Lambertian effects such as facial specularity leads to a more
realistic 3D Morphable Face Model. Existing works build parametric models for
diffuse and specular albedo using Light Stage data. However, diffuse and
specular albedo alone cannot determine the full BRDF. In addition, the requirement
for Light Stage data is hard for research communities to fulfill. This paper
proposes the first 3D morphable face reflectance model with spatially varying
BRDF using only low-cost publicly available data. We apply linear shininess
weighting in parametric modeling to represent spatially varying specular
intensity and shininess. An inverse rendering algorithm is then developed to
reconstruct the reflectance parameters from non-Light Stage data, which are
used to train an initial morphable reflectance model. To enhance the model's
generalization capability and expressive power, we further propose an
update-by-reconstruction strategy to finetune it on an in-the-wild dataset.
Experimental results show that our method obtains decent rendering results with
plausible facial specularities. Our code is released at
https://yxuhan.github.io/ReflectanceMM/index.html.
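To make the linear shininess weighting idea concrete, below is a minimal sketch assuming a Blinn-Phong basis: the specular lobe at each texel is a weighted sum of lobes with fixed shininess exponents, so the per-texel weights jointly encode specular intensity and effective shininess. The basis exponents, the Lambertian diffuse term, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative Blinn-Phong basis: fixed shininess exponents shared by all texels.
SHININESS_BASIS = np.array([1.0, 4.0, 16.0, 64.0, 256.0])

def shade_texel(diffuse_albedo, spec_weights, n, l, v, light_rgb):
    """Shade one texel with a Lambertian term plus a linear combination of
    Blinn-Phong lobes. spec_weights (one per basis exponent) jointly encode
    specular intensity and shininess at this texel."""
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    h = (l + v) / np.linalg.norm(l + v)                # half vector
    n_dot_l = max(float(n @ l), 0.0)
    n_dot_h = max(float(n @ h), 0.0)
    diffuse = diffuse_albedo * n_dot_l                 # Lambertian term
    specular = float(spec_weights @ (n_dot_h ** SHININESS_BASIS)) * n_dot_l
    return light_rgb * (diffuse + specular)

# Example: a fairly shiny texel (weight concentrated on high exponents).
rgb = shade_texel(
    diffuse_albedo=np.array([0.6, 0.45, 0.4]),
    spec_weights=np.array([0.0, 0.02, 0.05, 0.2, 0.1]),
    n=np.array([0.0, 0.0, 1.0]),
    l=np.array([0.3, 0.2, 1.0]),
    v=np.array([0.0, 0.0, 1.0]),
    light_rgb=np.ones(3),
)
print(rgb)
```

In a morphable model, the diffuse albedo and per-texel weight maps would presumably be the spatially varying quantities spanned by the statistical model and recovered by inverse rendering against non-Light Stage images.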
Related papers
- MoSAR: Monocular Semi-Supervised Model for Avatar Reconstruction using Differentiable Shading [3.2586340344073927]
MoSAR is a method for 3D avatar generation from monocular images.
We propose a semi-supervised training scheme that improves generalization by learning from both light stage and in-the-wild datasets.
We also introduce a new dataset, named FFHQ-UV-Intrinsics, the first public dataset providing intrinsic face attributes at scale.
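As a rough sketch of what such a semi-supervised objective could look like, the snippet below combines attribute supervision on light-stage samples (where ground-truth albedo and normals exist) with a photometric re-rendering term, via a toy differentiable shading function, on in-the-wild samples. All names, the Lambertian stand-in, and the weighting are hypothetical, not MoSAR's actual losses.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error."""
    return float(np.mean(np.abs(a - b)))

def lambert_shade(albedo, normals, light_dir):
    """Toy stand-in for a differentiable shading function."""
    return albedo * np.clip(normals @ light_dir, 0.0, None)[..., None]

def semi_supervised_step(pred, light_stage_batch, wild_batch, lam=0.5):
    """One combined objective: attribute supervision where ground truth exists
    (light-stage samples) plus a photometric re-rendering term on in-the-wild
    samples, where only the input image is available."""
    supervised = (l1(pred["albedo"], light_stage_batch["gt_albedo"])
                  + l1(pred["normals"], light_stage_batch["gt_normals"]))
    rerendered = lambert_shade(pred["wild_albedo"], pred["wild_normals"],
                               pred["wild_light_dir"])
    photometric = l1(rerendered, wild_batch["image"])
    return supervised + lam * photometric
```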
arXiv Detail & Related papers (2023-12-20T15:12:53Z)
- RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D [31.77212284992657]
We learn a generalizable Normal-Depth diffusion model for 3D generation.
We introduce an albedo diffusion model to impose data-driven constraints on the albedo component.
Our experiments show that when integrated into existing text-to-3D pipelines, our models significantly enhance detail richness.
arXiv Detail & Related papers (2023-11-28T16:22:33Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- DiFaReli: Diffusion Face Relighting [13.000032155650835]
We present a novel approach to single-view face relighting in the wild.
Handling non-diffuse effects, such as global illumination or cast shadows, has long been a challenge in face relighting.
We achieve state-of-the-art performance on the standard Multi-PIE benchmark and can photorealistically relight in-the-wild images.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points in the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
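For context, here is a minimal numpy sketch of classical dual quaternion blend skinning (Kavan et al.), the technique that NeuDBS builds on; the neural components of NeuDBS (learned weights and correspondences) are not shown, and the helper names are illustrative.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dual_quat(rot_q, t):
    """Build a unit dual quaternion from a rotation quaternion and a translation."""
    dual = 0.5 * quat_mul(np.array([0.0, *t]), rot_q)
    return rot_q, dual

def dqb_transform(point, dual_quats, weights):
    """Deform a point by blending per-bone dual quaternions (classical DQB)."""
    real = np.zeros(4)
    dual = np.zeros(4)
    pivot = dual_quats[0][0]
    for (qr, qd), w in zip(dual_quats, weights):
        if np.dot(qr, pivot) < 0.0:      # keep blended rotations on the same hemisphere
            qr, qd = -qr, -qd
        real += w * qr
        dual += w * qd
    norm = np.linalg.norm(real)
    real, dual = real / norm, dual / norm
    conj = real * np.array([1.0, -1.0, -1.0, -1.0])
    # Rotate the point by the blended rotation, then add the encoded translation.
    p_rot = quat_mul(quat_mul(real, np.array([0.0, *point])), conj)
    t = 2.0 * quat_mul(dual, conj)
    return p_rot[1:] + t[1:]

# Example: blend two bones, one static and one translated along x.
identity = np.array([1.0, 0.0, 0.0, 0.0])
bones = [dual_quat(identity, np.zeros(3)), dual_quat(identity, np.array([2.0, 0.0, 0.0]))]
print(dqb_transform(np.array([0.0, 1.0, 0.0]), bones, weights=[0.5, 0.5]))
# -> roughly [1.0, 1.0, 0.0]
```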
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
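To illustrate why such a factorization enables relighting, here is a toy sketch of shading one surface point once geometry, reflectance, and light visibility are known: outgoing radiance is accumulated over sampled incident directions as lighting x BRDF x cosine x visibility. The Lambertian BRDF, the coarse direction sampling, and the omitted solid-angle weights are simplifications for illustration, not NeRFactor's learned BRDF or its light-probe parameterization.

```python
import numpy as np

def relight_point(albedo, normal, light_dirs, light_rgb, visibility):
    """Approximate outgoing radiance at a surface point by summing, over sampled
    incident directions: lighting * BRDF * cos(theta) * visibility.
    The BRDF here is a simple Lambertian stand-in (albedo / pi)."""
    normal = normal / np.linalg.norm(normal)
    cos_theta = np.clip(light_dirs @ normal, 0.0, None)          # (L,)
    brdf = albedo / np.pi                                         # (3,)
    weights = visibility * cos_theta                              # (L,)
    return (light_rgb * weights[:, None]).sum(axis=0) * brdf      # (3,)

# Example with a few sampled directions standing in for an environment map.
dirs = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [0.0, 0.7, 0.7]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rgb = relight_point(
    albedo=np.array([0.7, 0.5, 0.4]),
    normal=np.array([0.0, 0.0, 1.0]),
    light_dirs=dirs,
    light_rgb=np.full((3, 3), 0.8),        # one RGB radiance per direction
    visibility=np.array([1.0, 1.0, 0.0]),  # third direction is shadowed
)
print(rgb)
```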