SIRA: Relightable Avatars from a Single Image
- URL: http://arxiv.org/abs/2209.03027v1
- Date: Wed, 7 Sep 2022 09:47:46 GMT
- Title: SIRA: Relightable Avatars from a Single Image
- Authors: Pol Caselles, Eduard Ramon, Jaime Garcia, Xavier Giro-i-Nieto,
Francesc Moreno-Noguer, Gil Triginer
- Abstract summary: We introduce SIRA, a method which reconstructs human head avatars with high fidelity geometry and factorized lights and surface materials.
Our key ingredients are two data-driven statistical models based on neural fields that resolve the ambiguities of single-view 3D surface reconstruction and appearance factorization.
- Score: 19.69326772087838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering the geometry of a human head from a single image, while
factorizing the materials and illumination is a severely ill-posed problem that
requires prior information to be solved. Methods based on 3D Morphable Models
(3DMM), and their combination with differentiable renderers, have shown
promising results. However, the expressiveness of 3DMMs is limited, and they
typically yield over-smoothed and identity-agnostic 3D shapes limited to the
face region. Highly accurate full head reconstructions have recently been
obtained with neural fields that parameterize the geometry using multilayer
perceptrons. The versatility of these representations has also proved effective
for disentangling geometry, materials and lighting. However, these methods
require several tens of input images. In this paper, we introduce SIRA, a
method which, from a single image, reconstructs human head avatars with high
fidelity geometry and factorized lights and surface materials. Our key
ingredients are two data-driven statistical models based on neural fields that
resolve the ambiguities of single-view 3D surface reconstruction and appearance
factorization. Experiments show that SIRA obtains state-of-the-art results in
3D head reconstruction while simultaneously disentangling the global
illumination and the diffuse and specular albedos. Furthermore, our
reconstructions are amenable to physically-based appearance editing and head
model relighting.
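The two ingredients the abstract names, a coordinate MLP acting as a neural field that parameterizes geometry (here, a signed distance function), and an appearance model factorized into diffuse and specular albedo under a light, can be illustrated with a generic sketch. This is not SIRA's architecture: the layer sizes, the finite-difference normals, and the Blinn-Phong shading model are all illustrative stand-ins, and the weights below are random rather than fitted to an image.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Randomly initialized fully connected layers: a list of (W, b) pairs."""
    return [(rng.normal(0, np.sqrt(2.0 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def sdf(params, x):
    """Neural field forward pass: (N, 3) points -> (N,) signed distances."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)   # ReLU hidden layers
    W, b = params[-1]
    return (h @ W + b).ravel()           # one scalar SDF value per point

def finite_diff_normals(params, x, eps=1e-4):
    """Surface normals as the normalized SDF gradient (central differences)."""
    grads = np.stack([
        (sdf(params, x + eps * np.eye(3)[i]) - sdf(params, x - eps * np.eye(3)[i]))
        / (2 * eps) for i in range(3)], axis=-1)
    return grads / (np.linalg.norm(grads, axis=-1, keepdims=True) + 1e-9)

def shade(normals, albedo_d, albedo_s, light_dir, view_dir, shininess=32.0):
    """Toy Blinn-Phong shading: radiance factorized into a diffuse term
    (scaled by diffuse albedo) plus a specular term (scaled by specular
    albedo), the kind of decomposition the abstract describes."""
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)
    diff = np.clip(normals @ l, 0.0, None)[:, None] * albedo_d
    spec = np.clip(normals @ h, 0.0, None)[:, None] ** shininess * albedo_s
    return diff + spec

params = init_mlp([3, 64, 64, 1])
pts = rng.normal(size=(8, 3))
d = sdf(params, pts)                     # signed distance per query point
n = finite_diff_normals(params, pts)
rgb = shade(n, albedo_d=np.array([0.6, 0.45, 0.4]),
            albedo_s=np.array([0.04, 0.04, 0.04]),
            light_dir=np.array([0.0, 1.0, 1.0]),
            view_dir=np.array([0.0, 0.0, 1.0]))
print(d.shape, n.shape, rgb.shape)       # (8,) (8, 3) (8, 3)
```

In an actual inverse-rendering pipeline the MLP weights and the albedo fields would be optimized so that the shaded surface reproduces the input photograph; editing the albedos or the light direction afterwards is what makes the result relightable.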
Related papers
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single image or multiple images using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.