FaceDet3D: Facial Expressions with 3D Geometric Detail Prediction
- URL: http://arxiv.org/abs/2012.07999v3
- Date: Wed, 23 Dec 2020 18:22:48 GMT
- Title: FaceDet3D: Facial Expressions with 3D Geometric Detail Prediction
- Authors: ShahRukh Athar, Albert Pumarola, Francesc Moreno-Noguer, Dimitris Samaras
- Abstract summary: Facial expressions induce a variety of high-level details on the 3D face geometry.
3D Morphable Models (3DMMs) of the human face fail to capture such fine details in their PCA-based representations.
We introduce FaceDet3D, a first-of-its-kind method that generates, from a single image, geometric facial details consistent with any desired target expression.
- Score: 62.5557724039217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial expressions induce a variety of high-level details on the 3D face geometry. For example, a smile causes the wrinkling of cheeks or the formation of dimples, while being angry often causes wrinkling of the forehead. 3D Morphable Models (3DMMs) of the human face fail to capture such fine details in their PCA-based representations and consequently cannot generate such details when used to edit expressions. In this work, we introduce FaceDet3D, a first-of-its-kind method that generates, from a single image, geometric facial details that are consistent with any desired target expression. The facial details are represented as a vertex displacement map and are then used by a Neural Renderer to photo-realistically render novel images of the subject of any single input image, in any desired expression and view. The project website is:
http://shahrukhathar.github.io/2020/12/14/FaceDet3D.html
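As a rough illustration of the detail representation described in the abstract, the sketch below adds expression-dependent detail to a coarse 3DMM mesh by offsetting each vertex along its normal according to a predicted displacement. The function name, mesh size, and the scalar-per-vertex parameterization are assumptions made for the example, not FaceDet3D's actual interface.

```python
# Minimal sketch (NumPy only) of applying a vertex displacement map to a
# coarse 3DMM mesh before neural rendering. Shapes and names are illustrative.
import numpy as np

def apply_displacement(base_vertices, vertex_normals, displacement):
    """Offset each 3DMM vertex along its normal by a scalar displacement.

    base_vertices : (N, 3) coarse 3DMM mesh for the target expression
    vertex_normals: (N, 3) unit normals of the coarse mesh
    displacement  : (N,)   predicted per-vertex detail (e.g. wrinkles, dimples)
    """
    return base_vertices + displacement[:, None] * vertex_normals

# Toy usage with random data standing in for a real face mesh.
rng = np.random.default_rng(0)
verts = rng.normal(size=(5023, 3))               # 3DMM-sized vertex set (assumed)
normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)
detail = 0.001 * rng.normal(size=5023)           # small-scale geometric detail
detailed_verts = apply_displacement(verts, normals, detail)
# The detailed geometry would then be handed to a neural renderer, together
# with appearance and camera inputs, to synthesize the novel-expression image.
```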
Related papers
- FaceGPT: Self-supervised Learning to Chat about 3D Human Faces [69.4651241319356]
We introduce FaceGPT, a self-supervised learning framework for Large Vision-Language Models (VLMs) to reason about 3D human faces from images and text.
FaceGPT achieves this by embedding the parameters of a 3D morphable face model (3DMM) into the token space of a VLM.
We show that FaceGPT achieves high-quality 3D face reconstructions and retains the ability for general-purpose visual instruction following.
arXiv Detail & Related papers (2024-06-11T11:13:29Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- SAFA: Structure Aware Face Animation [9.58882272014749]
We propose a structure aware face animation (SAFA) method which constructs specific geometric structures to model different components of a face image.
We use a 3D morphable model (3DMM) to model the face, multiple affine transforms to model the other foreground components like hair and beard, and an identity transform to model the background.
The 3DMM geometric embedding not only helps generate realistic structure for the driving scene, but also contributes to better perception of occluded area in the generated image.
arXiv Detail & Related papers (2021-11-09T03:22:38Z)
- FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction [29.920622006999732]
We present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction.
By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.
We also use FaceScape data to generate the in-the-wild and in-the-lab benchmark to evaluate recent methods of single-view face reconstruction.
arXiv Detail & Related papers (2021-11-01T16:48:34Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- Face-GCN: A Graph Convolutional Network for 3D Dynamic Face Identification/Recognition [21.116748155592752]
We propose a novel framework for dynamic 3D face identification/recognition based on facial keypoints.
Each dynamic sequence of facial expressions is represented as a spatio-temporal graph, which is constructed using 3D facial landmarks.
We evaluate our approach on a challenging dynamic 3D facial expression dataset.
arXiv Detail & Related papers (2021-04-19T09:05:39Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
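For the displacement-map idea shared by DECA and FaceDet3D, the sketch below shows a small decoder that maps a low-dimensional detail code to a UV displacement map. The layer sizes, the 128-dimensional code, and the 64x64 UV resolution are illustrative assumptions, not the published DECA architecture.

```python
# Illustrative PyTorch sketch: decode a low-dimensional detail latent into a
# single-channel UV displacement map. Architecture details are assumed.
import torch
import torch.nn as nn

class UVDisplacementDecoder(nn.Module):
    def __init__(self, code_dim=128, base_channels=256):
        super().__init__()
        self.fc = nn.Linear(code_dim, base_channels * 4 * 4)
        self.up = nn.Sequential(                      # 4x4 -> 64x64
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, code):
        x = self.fc(code).view(-1, 256, 4, 4)
        # tanh bounds the displacements; a separate scale would map them to
        # metric units before they deform the coarse mesh.
        return torch.tanh(self.up(x))

decoder = UVDisplacementDecoder()
uv_disp = decoder(torch.randn(1, 128))    # (1, 1, 64, 64) displacement map
```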
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.