Segmentation-Reconstruction-Guided Facial Image De-occlusion
- URL: http://arxiv.org/abs/2112.08022v1
- Date: Wed, 15 Dec 2021 10:40:08 GMT
- Title: Segmentation-Reconstruction-Guided Facial Image De-occlusion
- Authors: Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen
- Abstract summary: Occlusions are very common in face images in the wild, leading to degraded performance on face-related tasks.
This paper proposes a novel face de-occlusion model based on face segmentation and 3D face reconstruction.
- Score: 48.952656891182826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusions are very common in face images in the wild, leading to
degraded performance on face-related tasks. Although much effort has been
devoted to removing occlusions from face images, the varying shapes and
textures of occlusions still challenge the robustness of current methods. As a
result, current methods either rely on manual occlusion masks or only apply to
specific occlusions. This paper proposes a novel face de-occlusion model based
on face segmentation and 3D face reconstruction, which automatically removes
all kinds of face occlusions, even those with blurred boundaries, e.g., hair. The
proposed model consists of a 3D face reconstruction module, a face segmentation
module, and an image generation module. With the face prior and the occlusion
mask predicted by the first two, respectively, the image generation module can
faithfully recover the missing facial textures. To supervise the training, we
further build a large occlusion dataset, with both manually labeled and
synthetic occlusions. Qualitative and quantitative results demonstrate the
effectiveness and robustness of the proposed method.
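The abstract describes a three-module pipeline: a 3D face reconstruction module supplies a face prior, a segmentation module supplies an occlusion mask, and a generation module recovers the occluded textures. The composition can be sketched as below; this is a minimal illustrative sketch, not the authors' implementation, and the placeholder heuristics inside each function are assumptions standing in for the real reconstruction, segmentation, and generation networks.

```python
import numpy as np

def reconstruct_3d_face(image):
    """Stand-in for the 3D face reconstruction module: returns an
    occlusion-free face prior aligned with the input image.
    Placeholder: broadcast the per-channel mean over the image."""
    return image.mean(axis=(0, 1), keepdims=True) * np.ones_like(image)

def segment_occlusion(image):
    """Stand-in for the face segmentation module: returns a mask
    where 1 marks occluded pixels (e.g., hair, hands, glasses).
    Placeholder heuristic: treat very dark pixels as occlusion."""
    return (image.mean(axis=-1, keepdims=True) < 0.2).astype(image.dtype)

def generate_deoccluded(image, prior, mask):
    """Stand-in for the image generation module: fill the masked region,
    here by naively pasting the face prior into the occluded area."""
    return image * (1 - mask) + prior * mask

# Usage: a tiny 4x4 RGB "face" with a dark occluded corner.
img = np.full((4, 4, 3), 0.8)
img[:2, :2] = 0.05                      # occluded region
prior = reconstruct_3d_face(img)
mask = segment_occlusion(img)
out = generate_deoccluded(img, prior, mask)
```

In the paper the generation module is a learned network conditioned on the prior and mask; here the paste step only illustrates the data flow between the three modules.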
Related papers
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Non-Deterministic Face Mask Removal Based On 3D Priors [3.8502825594372703]
The proposed approach integrates a multi-task 3D face reconstruction module with a face inpainting module.
By gradually controlling the 3D shape parameters, our method generates high-quality dynamic inpainting results with different expressions and mouth movements.
arXiv Detail & Related papers (2022-02-20T16:27:44Z)
- FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction [3.8502825594372703]
Occlusions often occur in face images in the wild, hindering face-related tasks such as landmark detection, 3D reconstruction, and face recognition.
This paper proposes a novel face segmentation dataset with manually labeled face occlusions from the CelebA-HQ and the internet.
We trained a straightforward face segmentation model but obtained SOTA performance, convincingly demonstrating the effectiveness of the proposed dataset.
arXiv Detail & Related papers (2022-01-20T19:44:18Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency [40.56510679634943]
We propose a self-supervised training architecture by leveraging the multi-view geometry consistency.
We design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss.
Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions.
arXiv Detail & Related papers (2020-07-24T12:36:09Z)
- Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting [22.24046752858929]
We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
arXiv Detail & Related papers (2020-07-14T01:30:14Z)
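The self-supervised multi-view paper in the list above names three consistency losses; its occlusion-aware pixel consistency term can be roughly illustrated as a photometric difference restricted to mutually visible pixels. The function name, the visibility-mask formulation, and the L1 form below are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np

def pixel_consistency_loss(view_a, view_b_warped, visibility):
    """Mean absolute photometric difference between one view and another
    view warped into its frame, averaged only over pixels marked visible
    (i.e., not occluded in either view)."""
    diff = np.abs(view_a - view_b_warped) * visibility
    return diff.sum() / np.maximum(visibility.sum(), 1.0)

# Usage: one mismatched pixel that is excluded by the visibility mask.
a = np.ones((2, 2))
b = np.array([[1.0, 1.0], [0.5, 1.0]])
vis = np.array([[1.0, 1.0], [0.0, 1.0]])  # occluded pixel masked out
loss = pixel_consistency_loss(a, b, vis)
```

Masking by visibility is what makes the term occlusion-aware: photometric disagreement at occluded pixels carries no gradient.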
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.