SAFA: Structure Aware Face Animation
- URL: http://arxiv.org/abs/2111.04928v1
- Date: Tue, 9 Nov 2021 03:22:38 GMT
- Title: SAFA: Structure Aware Face Animation
- Authors: Qiulin Wang, Lu Zhang, Bo Li
- Abstract summary: We propose a structure aware face animation (SAFA) method which constructs specific geometric structures to model different components of a face image.
We use a 3D morphable model (3DMM) to model the face, multiple affine transforms to model the other foreground components like hair and beard, and an identity transform to model the background.
The 3DMM geometric embedding not only helps generate realistic structure for the driving scene, but also contributes to better perception of occluded areas in the generated image.
- Score: 9.58882272014749
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The recent success of generative adversarial networks (GANs) has enabled great
progress on the face animation task. However, the complex scene structure of a
face image still makes it challenging to generate videos whose face poses
deviate significantly from the source image. On the one hand, without knowledge
of the facial geometric structure, generated face images may be improperly
distorted. On the other hand, some areas of the generated image may be
occluded in the source image, which makes it difficult for the GAN to generate
a realistic appearance. To address these problems, we propose a structure aware
face animation (SAFA) method that constructs specific geometric structures to
model different components of a face image. Following the well-established
motion-based face animation technique, we use a 3D morphable model (3DMM) to
model the face, multiple affine transforms to model other foreground
components such as hair and beard, and an identity transform to model the
background. The 3DMM geometric embedding not only helps generate realistic
structure for the driving scene, but also contributes to better perception of
occluded areas in the generated image. In addition, we propose to exploit
the widely studied inpainting technique to faithfully recover the occluded
image area. Both quantitative and qualitative experimental results demonstrate
the superiority of our method. Code is available at
https://github.com/Qiulin-W/SAFA.
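
To make the decomposition described in the abstract concrete, below is a minimal sketch (not the authors' released code; see the repository linked above for that) of how a dense motion field could be assembled from a 3DMM-driven face flow, affine transforms for other foreground parts, and an identity transform for the background, with an occlusion map marking regions the generator must inpaint. All tensor shapes, function names, and the soft-mask blending are illustrative assumptions.

```python
# Illustrative sketch of structure-aware motion composition; shapes and names
# are assumptions, not the SAFA implementation.
import torch
import torch.nn.functional as F


def identity_grid(h, w, device=None):
    """Normalized [-1, 1] sampling grid of shape (1, h, w, 2)."""
    theta = torch.eye(2, 3, device=device).unsqueeze(0)  # identity affine
    return F.affine_grid(theta, size=(1, 1, h, w), align_corners=False)


def compose_dense_motion(face_flow, affines, masks):
    """Blend per-component motions into one dense sampling grid.

    face_flow: (B, H, W, 2) flow for the face region, assumed to be rendered
               elsewhere from 3DMM motion projected to image space.
    affines:   (B, K, 2, 3) affine transforms for K non-face foreground parts
               (e.g. hair, beard).
    masks:     (B, K+2, H, W) soft assignment over
               [background, face, K foreground parts], summing to 1 per pixel.
    """
    B, _, H, W = masks.shape
    grids = [identity_grid(H, W, masks.device).expand(B, H, W, 2)]  # background
    grids.append(face_flow)                                         # 3DMM face
    for k in range(affines.shape[1]):                               # other parts
        grids.append(F.affine_grid(affines[:, k], size=(B, 1, H, W),
                                   align_corners=False))
    grids = torch.stack(grids, dim=1)          # (B, K+2, H, W, 2)
    weights = masks.unsqueeze(-1)              # (B, K+2, H, W, 1)
    return (weights * grids).sum(dim=1)        # (B, H, W, 2)


def warp_with_occlusion(source_feat, grid, occlusion):
    """Warp source features; zero out regions the source cannot explain.

    occlusion: (B, 1, H, W) in [0, 1]; low values mark areas hidden in the
    source image, which the generator must fill in (the role the abstract
    assigns to the inpainting step).
    """
    warped = F.grid_sample(source_feat, grid, align_corners=False)
    return warped * occlusion


if __name__ == "__main__":
    B, K, H, W = 1, 4, 64, 64
    face_flow = identity_grid(H, W).expand(B, H, W, 2)   # placeholder flow
    affines = torch.eye(2, 3).repeat(B, K, 1, 1)
    masks = torch.softmax(torch.randn(B, K + 2, H, W), dim=1)
    grid = compose_dense_motion(face_flow, affines, masks)
    out = warp_with_occlusion(torch.randn(B, 32, H, W), grid,
                              torch.sigmoid(torch.randn(B, 1, H, W)))
    print(grid.shape, out.shape)   # sanity check
```

In this reading, the occlusion map produced alongside the motion field is what connects the geometric branch to the inpainting step mentioned in the abstract.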
Related papers
- G3FA: Geometry-guided GAN for Face Animation [14.488117084637631]
We introduce Geometry-guided GAN for Face Animation (G3FA) to tackle this limitation.
Our novel approach empowers the face animation model to incorporate 3D information using only 2D images.
In our face reenactment model, we leverage 2D motion warping to capture motion dynamics.
arXiv Detail & Related papers (2024-08-23T13:13:24Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that explicitly models physical attributes of the face a priori, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present a versatile model, FaceAnime, for various video generation tasks from still images.
Our model is versatile for various AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- Learning an Animatable Detailed 3D Face Model from In-The-Wild Images [50.09971525995828]
We present the first approach to jointly learn a model with animatable detail and a detailed 3D face regressor from in-the-wild images.
Our DECA model is trained to robustly produce a UV displacement map from a low-dimensional latent representation.
We introduce a novel detail-consistency loss to disentangle person-specific details and expression-dependent wrinkles.
arXiv Detail & Related papers (2020-12-07T19:30:45Z)
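
SAFA's face branch, like several of the related papers above (MOST-GAN, DECA, HifiFace), builds on a 3D morphable model: a face mesh expressed as a mean shape plus linear identity and expression bases. The following is a minimal sketch of that classical formulation; the basis sizes and variable names are made up and do not correspond to any particular 3DMM such as BFM or FLAME.

```python
# Generic 3DMM sketch: vertices = mean shape + identity basis @ id coeffs
#                                 + expression basis @ exp coeffs.
# Dimensions below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_VERTS, N_ID, N_EXP = 5000, 80, 64                  # made-up dimensions
mean_shape = rng.standard_normal((N_VERTS * 3,))
id_basis   = rng.standard_normal((N_VERTS * 3, N_ID))
exp_basis  = rng.standard_normal((N_VERTS * 3, N_EXP))


def reconstruct_vertices(alpha_id, alpha_exp):
    """S = mean + B_id @ alpha_id + B_exp @ alpha_exp, reshaped to (N, 3)."""
    flat = mean_shape + id_basis @ alpha_id + exp_basis @ alpha_exp
    return flat.reshape(-1, 3)


# Animating a face then amounts to keeping the identity coefficients fixed
# while driving the expression coefficients (and a rigid pose) from the
# driving video, frame by frame.
verts = reconstruct_vertices(rng.standard_normal(N_ID) * 0.1,
                             rng.standard_normal(N_EXP) * 0.1)
print(verts.shape)   # (5000, 3)
```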