Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction
from a Single Image
- URL: http://arxiv.org/abs/2111.08282v1
- Date: Tue, 16 Nov 2021 08:10:24 GMT
- Title: Self-supervised High-fidelity and Re-renderable 3D Facial Reconstruction
from a Single Image
- Authors: Mingxin Yang, Jianwei Guo, Zhanglin Cheng, Xiaopeng Zhang, Dong-Ming
Yan
- Abstract summary: We propose a novel self-supervised learning framework for reconstructing high-quality 3D faces from single-view images in-the-wild.
Our framework substantially outperforms state-of-the-art approaches in both qualitative and quantitative comparisons.
- Score: 19.0074836183624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing a high-fidelity 3D facial texture from a single
image is a challenging task because of the lack of complete face information
and the domain gap between 3D faces and 2D images. Recent works tackle the
facial texture reconstruction problem with either generation-based or
reconstruction-based methods. Although each approach has its own advantages,
neither is capable of recovering a high-fidelity and re-renderable facial
texture, where the term 're-renderable' requires the facial texture to be
spatially complete and disentangled from environmental illumination. In this
paper, we propose a novel self-supervised learning framework for
reconstructing high-quality 3D faces from single-view in-the-wild images. Our
main idea is to first use a prior generation module to produce a prior
albedo, and then leverage a detail refinement module to obtain a detailed
albedo. To further disentangle facial textures from illumination, we present
a novel detailed illumination representation that is reconstructed jointly
with the detailed albedo. We also design several regularization losses on
both the albedo and illumination sides to facilitate the disentanglement of
these two factors. Finally, thanks to differentiable rendering, our neural
network can be trained efficiently in a self-supervised manner.
Extensive experiments on challenging datasets demonstrate that our framework
substantially outperforms state-of-the-art approaches in both qualitative and
quantitative comparisons.
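The abstract above describes a concrete training structure: a prior generation module produces a coarse albedo, a detail refinement module turns it into a detailed albedo plus a detailed illumination representation, and a differentiable renderer closes the self-supervised loop with regularizers on both factors. The following PyTorch sketch is a minimal, hypothetical rendering of that structure; the module names, network sizes, toy Lambertian shading model (pixel = albedo x shading), and loss weights are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage albedo pipeline with a toy
# differentiable "renderer"; none of these names come from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorAlbedoGenerator(nn.Module):
    """Stage 1: image -> coarse prior albedo (stand-in network)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, img):
        return self.net(img)

class DetailRefiner(nn.Module):
    """Stage 2: image + prior albedo -> detailed albedo and illumination."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1))

    def forward(self, img, prior_albedo):
        out = self.net(torch.cat([img, prior_albedo], dim=1))
        detailed_albedo = torch.sigmoid(out[:, :3])
        illumination = F.softplus(out[:, 3:])  # per-pixel detailed shading
        return detailed_albedo, illumination

def self_supervised_loss(img, albedo, illum):
    # Toy Lambertian re-rendering: pixel = albedo * shading. The paper uses
    # a full differentiable renderer with geometry; this is only a proxy.
    photo = F.l1_loss(albedo * illum, img)
    # Disentanglement regularizers (weights are made up): albedo should be
    # piecewise constant, illumination spatially smooth.
    albedo_tv = (albedo[..., 1:] - albedo[..., :-1]).abs().mean()
    illum_tv = (illum[..., 1:] - illum[..., :-1]).abs().mean()
    return photo + 0.1 * albedo_tv + 0.01 * illum_tv

prior_net, refiner = PriorAlbedoGenerator(), DetailRefiner()
img = torch.rand(1, 3, 64, 64)  # stand-in for an in-the-wild face image
albedo, illum = refiner(img, prior_net(img))
self_supervised_loss(img, albedo, illum).backward()  # end-to-end trainable
```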
Related papers
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view setting by enforcing detail consistency across views.
Our method outperforms existing methods in both reconstruction accuracy and visual quality.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Unsupervised High-Fidelity Facial Texture Generation and Reconstruction [20.447635896077454]
We propose a novel unified pipeline for both tasks: generating geometry and texture, and recovering high-fidelity texture.
Our texture model is learned, in an unsupervised fashion, from natural images as opposed to scanned texture maps.
By applying precise 3DMM fitting, we can seamlessly integrate our modeled textures into synthetically generated background images.
arXiv Detail & Related papers (2021-10-10T10:59:04Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic renderings.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator (a hedged sketch of this rotate-and-complete loop appears after this list).
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors that capture sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms existing methods by a significant margin and reconstructs authentic, 4K-by-6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
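To make the mechanism in the OSTeC summary concrete, here is a hedged sketch of the rotate-and-complete loop: freeze a 2D face generator, optimize its latent code so the output matches only the visible pixels of a rotated view, then take the generator's pixels where the view was occluded. TinyGenerator, the visibility mask, and the optimizer settings are stand-ins for illustration, not the paper's StyleGAN-based pipeline.

```python
# Hypothetical GAN-inversion texture completion; names are not from the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a pretrained 2D face generator."""
    def __init__(self, latent_dim=64, size=32):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 3 * size * size)
        self.size = size

    def forward(self, z):
        return torch.sigmoid(self.fc(z)).view(-1, 3, self.size, self.size)

G = TinyGenerator()
for p in G.parameters():
    p.requires_grad_(False)  # generator stays frozen; only the latent moves

rotated_view = torch.rand(1, 3, 32, 32)  # stand-in for the 3D-rotated render
visible = torch.ones(1, 1, 32, 32)
visible[..., 16:] = 0.0                  # right half unseen after rotation

z = torch.zeros(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):  # invert the generator against the visible pixels only
    opt.zero_grad()
    loss = (visible * (G(z) - rotated_view).abs()).mean()
    loss.backward()
    opt.step()

# Keep observed pixels; take generated ones where the view was occluded.
completed = visible * rotated_view + (1 - visible) * G(z).detach()
```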