Facial Geometric Detail Recovery via Implicit Representation
- URL: http://arxiv.org/abs/2203.09692v1
- Date: Fri, 18 Mar 2022 01:42:59 GMT
- Title: Facial Geometric Detail Recovery via Implicit Representation
- Authors: Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma,
Xiaokang Yang, Stefanos Zafeiriou
- Abstract summary: We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
- Score: 147.07961322377685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a dense 3D model with fine-scale details from a single facial image
is highly challenging and ill-posed. To address this problem, many approaches
fit smooth geometries using a facial prior while learning details as additional
displacement maps or a personalized basis. However, these techniques typically
require vast datasets of paired multi-view data or 3D scans, whereas such
datasets are scarce and expensive. To alleviate heavy data dependency, we
present a robust texture-guided geometric detail recovery approach using only a
single in-the-wild facial image. More specifically, our method combines
high-quality texture completion with the powerful expressiveness of implicit
surfaces. Initially, we inpaint occluded facial parts, generate complete
textures, and build an accurate multi-view dataset of the same subject. In
order to estimate the detailed geometry, we define an implicit signed distance
function and employ a physically-based implicit renderer to reconstruct fine
geometric details from the generated multi-view images. Our method not only
recovers accurate facial details but also decomposes normals, albedos, and
shading parts in a self-supervised way. Finally, we register the implicit shape
details to a 3D Morphable Model template, which can be used in traditional
modeling and rendering pipelines. Extensive experiments demonstrate that the
proposed approach can reconstruct impressive facial details from a single
image, especially when compared with state-of-the-art methods trained on large
datasets.
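To make the core optimization in the abstract concrete, below is a minimal sketch, assuming a PyTorch environment, of fitting an implicit SDF and an albedo field to multi-view photometric supervision: surface normals come from the SDF gradient via autograd, and a simple Lambertian shading model ties geometry, albedo, and shading together under a photometric loss. This is not the paper's implementation; the texture completion stage, the physically-based sphere-tracing renderer, and the 3DMM registration are omitted, and all names (`MLP`, `sdf_gradient`, `shade`) and the placeholder data are illustrative assumptions.

```python
# Minimal illustrative sketch (not the authors' code), assuming PyTorch.
# Placeholder tensors stand in for ray-surface intersections and pixel
# colors taken from the generated multi-view images.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MLP(nn.Module):
    """Small coordinate network mapping 3D points to `out_dim` values."""

    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 128), nn.Softplus(beta=100),
            nn.Linear(128, 128), nn.Softplus(beta=100),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)


sdf = MLP(out_dim=1)      # implicit signed distance to the facial surface
albedo = MLP(out_dim=3)   # per-point RGB albedo


def sdf_gradient(points):
    """Gradient of the SDF w.r.t. the input points (via autograd)."""
    points = points.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(sdf(points).sum(), points, create_graph=True)
    return grad


def shade(points, light_dir):
    """Lambertian shading: color = albedo * max(n . l, 0).

    Normals are the normalized SDF gradient, so the photometric loss
    supervises geometry, albedo, and shading jointly (self-supervised
    decomposition in the spirit of the abstract).
    """
    normals = F.normalize(sdf_gradient(points), dim=-1)
    diffuse = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    return albedo(points).sigmoid() * diffuse


# One hypothetical optimization step on a single generated view.
opt = torch.optim.Adam(list(sdf.parameters()) + list(albedo.parameters()), lr=1e-4)
pts = torch.randn(1024, 3) * 0.1    # placeholder: surface points hit by camera rays
gt_rgb = torch.rand(1024, 3)        # placeholder: pixel colors of the generated view
light = F.normalize(torch.tensor([0.0, 0.0, 1.0]), dim=0)

opt.zero_grad()
photometric = (shade(pts, light) - gt_rgb).abs().mean()
# Eikonal regularizer keeps the SDF well-behaved (|grad f| close to 1).
eikonal = (sdf_gradient(torch.randn(1024, 3)).norm(dim=-1) - 1.0).pow(2).mean()
(photometric + 0.1 * eikonal).backward()
opt.step()
```

In the actual pipeline, the placeholder surface points would come from ray-surface intersections found on the optimized SDF, and the single light direction would be replaced by the paper's physically-based shading model; the sketch only conveys how photometric supervision can drive the decomposition into normals, albedo, and shading.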
Related papers
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Topologically Consistent Multi-View Face Inference Using Volumetric Sampling [25.001398662643986]
ToFu is a geometry inference framework that can produce topologically consistent meshes across identities and expressions.
A novel progressive mesh generation network embeds the topological structure of the face in a feature volume.
These high-quality assets are readily usable by production studios for avatar creation, animation and physically-based skin rendering.
arXiv Detail & Related papers (2021-10-06T17:55:08Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
- Towards High-Fidelity 3D Face Reconstruction from In-the-Wild Images Using Graph Convolutional Networks [32.859340851346786]
We introduce a method to reconstruct 3D facial shapes with high-fidelity textures from single-view images in-the-wild.
Our method can generate high-quality results and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2020-03-12T08:06:04Z)