DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation
- URL: http://arxiv.org/abs/2005.07298v1
- Date: Thu, 14 May 2020 23:56:48 GMT
- Title: DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation
- Authors: Mohammad Rami Koujan, Anastasios Roussos, Stefanos Zafeiriou
- Abstract summary: DeepFaceFlow is a robust, fast, and highly-accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dense 3D facial motion capture from only monocular in-the-wild pairs of RGB
images is a highly challenging problem with numerous applications, ranging from
facial expression recognition to facial reenactment. In this work, we propose
DeepFaceFlow, a robust, fast, and highly-accurate framework for the dense
estimation of 3D non-rigid facial flow between pairs of monocular images. Our
DeepFaceFlow framework was trained and tested on two very large-scale facial
video datasets, one of which we collected and annotated ourselves, with the
aid of an occlusion-aware, 3D-based loss function. We conduct comprehensive
experiments probing different aspects of our approach and demonstrating its
improved performance against state-of-the-art flow and 3D reconstruction
methods. Furthermore, we incorporate our framework into a state-of-the-art
full-head facial video synthesis method and demonstrate its ability to better
represent and capture facial dynamics, resulting in highly realistic facial
video synthesis. Given registered pairs of images, our framework generates 3D
flow maps at ~60 fps.
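The abstract mentions training with an occlusion-aware, 3D-based loss but does not spell it out. As an illustration only, a masked 3D end-point-error term of the kind such a loss typically includes might look like the following sketch; the function name and array layout are assumptions, not the paper's actual implementation.

```python
import numpy as np

def occlusion_aware_epe3d(pred_flow, gt_flow, visibility):
    """Mean 3D end-point error computed over visible pixels only.

    pred_flow, gt_flow: (H, W, 3) arrays of per-pixel 3D motion vectors.
    visibility: (H, W) boolean mask, True where the pixel is unoccluded,
                so occluded regions contribute nothing to the loss.
    """
    # Per-pixel Euclidean distance between predicted and ground-truth 3D flow.
    err = np.linalg.norm(pred_flow - gt_flow, axis=-1)  # shape (H, W)
    # Average only over pixels marked visible.
    return float(err[visibility].mean())
```

Masking out occluded pixels this way keeps unreliable supervision from regions where the ground-truth 3D motion is unobservable.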
Related papers
- G3FA: Geometry-guided GAN for Face Animation [14.488117084637631]
We introduce Geometry-guided GAN for Face Animation (G3FA) to address the lack of 3D geometric information in 2D-only face animation.
Our approach empowers the face animation model to incorporate 3D information using only 2D images.
In our face reenactment model, we leverage 2D motion warping to capture motion dynamics.
arXiv Detail & Related papers (2024-08-23T13:13:24Z)
- 3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow [15.479024531161476]
We propose a novel face tracker, FlowFace, that introduces an innovative 2D alignment network for dense per-vertex alignment.
Unlike prior work, FlowFace is trained on high-quality 3D scan annotations rather than weak supervision or synthetic data.
Our method exhibits superior performance on both custom and publicly available benchmarks.
arXiv Detail & Related papers (2024-04-15T14:20:07Z)
- FitDiff: Robust monocular 3D facial shape and reflectance estimation using Diffusion Models [79.65289816077629]
We present FitDiff, a diffusion-based 3D facial avatar generative model.
Our model accurately generates relightable facial avatars, utilizing an identity embedding extracted from an "in-the-wild" 2D facial image.
Being the first 3D LDM conditioned on face recognition embeddings, FitDiff reconstructs relightable human avatars that can be used as-is in common rendering engines.
arXiv Detail & Related papers (2023-12-07T17:35:49Z)
- Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Reconstruction [5.079602839359523]
We present a method to generate a large-scale synthesised dataset of 250K photorealistic images and their corresponding shape parameters and depth maps, which we call SynthFace.
Our synthesis method conditions Stable Diffusion on depth maps sampled from the FLAME 3D Morphable Model (3DMM) of the human face, allowing us to generate a diverse set of shape-consistent facial images that is designed to be balanced in race and gender.
We propose ControlFace, a deep neural network, trained on SynthFace, which achieves competitive performance on the NoW benchmark, without requiring 3D supervision or manual 3D asset creation.
arXiv Detail & Related papers (2023-07-25T16:42:06Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Video2StyleGAN: Encoding Video in Latent Space for Manipulation [63.03250800510085]
We propose a novel network to encode face videos into the latent space of StyleGAN for semantic face video manipulation.
Our approach can significantly outperform existing single image methods, while achieving real-time (66 fps) speed.
arXiv Detail & Related papers (2022-06-27T06:48:15Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- Head2Head++: Deep Facial Attributes Re-Targeting [6.230979482947681]
We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment.
We manage to capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos.
Our system performs end-to-end reenactment at nearly real-time speed (18 fps).
arXiv Detail & Related papers (2020-06-17T23:38:37Z)