AvatarMe++: Facial Shape and BRDF Inference with Photorealistic
Rendering-Aware GANs
- URL: http://arxiv.org/abs/2112.05957v1
- Date: Sat, 11 Dec 2021 11:36:30 GMT
- Title: AvatarMe++: Facial Shape and BRDF Inference with Photorealistic
Rendering-Aware GANs
- Authors: Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Baris
Gecer, Abhijeet Ghosh, Stefanos Zafeiriou
- Abstract summary: We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
- Score: 119.23922747230193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many face analysis tasks have achieved astounding
performance, with applications including face generation and 3D face
reconstruction from a single "in-the-wild" image. Nevertheless, to the best of
our knowledge, there is no method that can produce render-ready
high-resolution 3D faces from "in-the-wild" images, which can be attributed
to (a) the scarcity of available data for training, and (b) the lack of robust
methodologies that can successfully be applied to very high-resolution data. In
this work, we introduce the first method that is able to reconstruct
photorealistic render-ready 3D facial geometry and BRDF from a single
"in-the-wild" image. We capture a large dataset of facial shape and
reflectance, which we have made public. We define a fast facial photorealistic
differentiable rendering methodology with accurate facial skin diffuse and
specular reflection, self-occlusion and subsurface scattering approximation.
With this, we train a network that disentangles the facial diffuse and specular
BRDF components from a shape and texture with baked illumination, reconstructed
with a state-of-the-art 3DMM fitting method. Our method outperforms existing
methods by a significant margin and reconstructs high-resolution 3D faces
from a single low-resolution image, which can be rendered in various
applications and bridge the uncanny valley.
Related papers
- Single-Shot Implicit Morphable Faces with Consistent Texture
Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- A Hierarchical Representation Network for Accurate and Detailed Face
Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction
from a Single Image [19.0074836183624]
We propose a novel self-supervised learning framework for reconstructing high-quality 3D faces from single-view images in-the-wild.
Our framework substantially outperforms state-of-the-art approaches in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2021-11-16T08:10:24Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face
Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity preserving 3D face reconstructions and achieve for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on hand-crafted graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction
"in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images
Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.