Non-Deterministic Face Mask Removal Based On 3D Priors
- URL: http://arxiv.org/abs/2202.09856v1
- Date: Sun, 20 Feb 2022 16:27:44 GMT
- Title: Non-Deterministic Face Mask Removal Based On 3D Priors
- Authors: Xiangnan Yin and Liming Chen
- Abstract summary: The proposed approach integrates a multi-task 3D face reconstruction module with a face inpainting module.
By gradually controlling the 3D shape parameters, our method generates high-quality dynamic inpainting results with different expressions and mouth movements.
- Score: 3.8502825594372703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel image inpainting framework for face mask removal.
Although current methods have demonstrated an impressive ability to recover
damaged face images, they suffer from two main problems: the dependence on
manually labeled missing regions and the fact that each input yields a single,
deterministic result. The proposed approach tackles these problems by
integrating a multi-task 3D face reconstruction module with a face inpainting
module. Given a masked face image, the former predicts a 3DMM-based
reconstructed face together with a binary occlusion map, providing dense
geometrical and textural priors that greatly facilitate the inpainting task of
the latter. By gradually controlling the 3D shape parameters, our method
generates high-quality dynamic inpainting results with different expressions
and mouth movements. Qualitative and quantitative experiments verify the
effectiveness of the proposed method.
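The paper's code and exact interfaces are not reproduced here, so the following is a minimal PyTorch-style sketch of how the two modules described above could be wired together; the class names, layer choices, coefficient dimensions, and the hard 0.5 threshold on the occlusion map are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# All names, layer choices, and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn


class FaceReconstructionModule(nn.Module):
    """Multi-task module: predicts 3DMM coefficients and a binary occlusion map."""

    def __init__(self, n_shape=80, n_expr=64, n_tex=80):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.coeff_head = nn.Linear(32, n_shape + n_expr + n_tex)
        self.occlusion_head = nn.Sequential(     # per-pixel occlusion probability
            nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, masked_img):
        coeffs = self.coeff_head(self.encoder(masked_img))
        occlusion_map = (self.occlusion_head(masked_img) > 0.5).float()
        return coeffs, occlusion_map


class InpaintingModule(nn.Module):
    """Completes the occluded region, conditioned on the rendered 3DMM prior."""

    def __init__(self):
        super().__init__()
        # input channels: masked image (3) + rendered 3DMM face (3) + occlusion map (1)
        self.net = nn.Sequential(
            nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())

    def forward(self, masked_img, rendered_prior, occlusion_map):
        x = torch.cat([masked_img, rendered_prior, occlusion_map], dim=1)
        completed = self.net(x)
        # keep visible pixels as-is; fill only the occluded region
        return masked_img * (1 - occlusion_map) + completed * occlusion_map
```

Under this reading, the dynamic results would come from sweeping the expression portion of the predicted 3DMM coefficient vector, re-rendering the prior for each step, and re-running the inpainting module; the 3DMM renderer itself is omitted from the sketch.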
Related papers
- 3D-GANTex: 3D Face Reconstruction with StyleGAN3-based Multi-View Images and 3DDFA based Mesh Generation [0.8479659578608233]
This paper introduces a method for texture estimation from a single image, first generating multi-view images with StyleGAN and then fitting a 3D Morphable Model.
The results show that the generated mesh is of high quality with near-accurate texture representation.
arXiv Detail & Related papers (2024-10-21T13:42:06Z)
- 3D Facial Expressions through Analysis-by-Neural-Synthesis [30.2749903946587]
SMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics) faithfully reconstructs expressive 3D faces from images.
We identify two key limitations in existing methods: shortcomings in their self-supervised training formulation, and a lack of expression diversity in the training images.
Our qualitative, quantitative, and particularly our perceptual evaluations demonstrate that SMIRK achieves new state-of-the-art performance on accurate expression reconstruction.
arXiv Detail & Related papers (2024-04-05T14:00:07Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use a 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- Segmentation-Reconstruction-Guided Facial Image De-occlusion [48.952656891182826]
Occlusions are very common in face images in the wild, leading to degraded performance on face-related tasks.
This paper proposes a novel face de-occlusion model based on face segmentation and 3D face reconstruction.
arXiv Detail & Related papers (2021-12-15T10:40:08Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- A 3D model-based approach for fitting masks to faces in the wild [9.958467179573235]
We present a 3D model-based approach called WearMask3D for augmenting face images of various poses into their masked-face counterparts.
Our method proceeds by first fitting a 3D morphable model to the input image, then overlaying the mask surface onto the face model and warping the corresponding mask texture, and finally projecting the 3D mask back to 2D (a minimal sketch of this projection step appears after this list).
Experimental results demonstrate that WearMask3D produces more realistic masked images, and that utilizing these images for training improves recognition accuracy on masked faces.
arXiv Detail & Related papers (2021-03-01T06:50:18Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors capturing sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
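The projection-and-compositing step described for WearMask3D above can be sketched with plain NumPy; the pinhole camera model, function names, and the alpha-matte input below are assumptions for illustration, since fitting the 3DMM and warping the mask texture require a full 3DMM toolchain that is not shown here.

```python
# Self-contained sketch of the final WearMask3D step: projecting fitted 3D mask
# vertices back to the image plane and compositing the rendered mask layer over
# the photo. The camera model and inputs are illustrative assumptions.
import numpy as np


def project_points(points_3d: np.ndarray, focal: float, center: tuple) -> np.ndarray:
    """Pinhole projection of Nx3 camera-space points to Nx2 pixel coordinates."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return np.stack([u, v], axis=1)


def composite_mask(face_img: np.ndarray, mask_rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered HxWx3 mask layer over the original face image."""
    a = alpha[..., None]  # HxWx1 coverage of the projected mask surface
    return ((1.0 - a) * face_img + a * mask_rgb).astype(face_img.dtype)
```

In the full pipeline, `mask_rgb` and `alpha` would come from rasterizing the textured mask surface attached to the fitted 3DMM mesh.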