Segmentation-Reconstruction-Guided Facial Image De-occlusion
- URL: http://arxiv.org/abs/2112.08022v1
- Date: Wed, 15 Dec 2021 10:40:08 GMT
- Title: Segmentation-Reconstruction-Guided Facial Image De-occlusion
- Authors: Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen
- Abstract summary: Occlusions are very common in face images in the wild, leading to the degraded performance of face-related tasks.
This paper proposes a novel face de-occlusion model based on face segmentation and 3D face reconstruction.
- Score: 48.952656891182826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusions are very common in face images in the wild, leading to the
degraded performance of face-related tasks. Although much effort has been
devoted to removing occlusions from face images, the varying shapes and
textures of occlusions still challenge the robustness of current methods. As a
result, current methods either rely on manual occlusion masks or only apply to
specific occlusions. This paper proposes a novel face de-occlusion model based
on face segmentation and 3D face reconstruction, which automatically removes
all kinds of face occlusions, even those with blurred boundaries, e.g., hair. The
proposed model consists of a 3D face reconstruction module, a face segmentation
module, and an image generation module. With the face prior and the occlusion
mask predicted by the first two, respectively, the image generation module can
faithfully recover the missing facial textures. To supervise the training, we
further build a large occlusion dataset, with both manually labeled and
synthetic occlusions. Qualitative and quantitative results demonstrate the
effectiveness and robustness of the proposed method.
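The abstract's three-module pipeline (face prior from 3D reconstruction, occlusion mask from segmentation, texture recovery by generation) can be sketched roughly as below. This is a minimal illustrative mock-up with placeholder functions and toy arrays, not the authors' implementation; all names, shapes, and heuristics are assumptions.

```python
import numpy as np

# Hypothetical sketch of the paper's three-module de-occlusion pipeline.
# Module internals are placeholders; only the data flow mirrors the abstract.

def reconstruct_3d_face_prior(image: np.ndarray) -> np.ndarray:
    """Stand-in for the 3D face reconstruction module: produces an
    occlusion-free face prior aligned with the input image."""
    # Placeholder: a constant skin-tone image acts as the rendered prior.
    return np.full_like(image, 0.8)

def segment_occlusion(image: np.ndarray) -> np.ndarray:
    """Stand-in for the face segmentation module: predicts a binary mask
    where 1 marks occluded pixels."""
    # Placeholder heuristic: treat near-black pixels as occlusion.
    return (image < 0.05).astype(np.float32)

def generate_deoccluded(image, prior, mask):
    """Stand-in for the image generation module: fills masked regions
    guided by the face prior while keeping unoccluded pixels intact."""
    return image * (1.0 - mask) + prior * mask

# Toy grayscale "face" with a synthetic black occlusion in the center.
face = np.full((4, 4), 0.8, dtype=np.float32)
face[1:3, 1:3] = 0.0

prior = reconstruct_3d_face_prior(face)
mask = segment_occlusion(face)
result = generate_deoccluded(face, prior, mask)
```

In the actual model all three modules are learned networks; this sketch only shows how the mask and prior jointly condition the generator.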
Related papers
- Generative Face Parsing Map Guided 3D Face Reconstruction Under Occluded Scenes [4.542616945567623]
A complete face parsing map generation method guided by landmarks is proposed.
An excellent anti-occlusion face reconstruction method should ensure the authenticity of the output.
arXiv Detail & Related papers (2024-12-25T14:49:41Z)
- Learning to Decouple the Lights for 3D Face Texture Modeling [71.67854540658472]
We introduce a novel approach to model 3D facial textures under unnatural illumination.
Our framework learns to imitate the unnatural illumination as a composition of multiple separate light conditions.
Experiments on both single images and video sequences demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-12-11T16:36:45Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
It not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Non-Deterministic Face Mask Removal Based On 3D Priors [3.8502825594372703]
The proposed approach integrates a multi-task 3D face reconstruction module with a face inpainting module.
By gradually controlling the 3D shape parameters, our method generates high-quality dynamic inpainting results with different expressions and mouth movements.
arXiv Detail & Related papers (2022-02-20T16:27:44Z)
- FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction [3.8502825594372703]
Occlusions often occur in face images in the wild, troubling face-related tasks such as landmark detection, 3D reconstruction, and face recognition.
This paper proposes a novel face segmentation dataset with manually labeled face occlusions from CelebA-HQ and the internet.
We trained a straightforward face segmentation model yet obtained SOTA performance, convincingly demonstrating the effectiveness of the proposed dataset.
arXiv Detail & Related papers (2022-01-20T19:44:18Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions, based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover corrupted features in deep convolutional neural networks and to clean them with dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting [22.24046752858929]
We propose an end-to-end framework that jointly learns a personalized face model per user and per-frame facial motion parameters.
Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections.
Experimental results show that our personalization accurately captures fine-grained facial dynamics in a wide range of conditions.
arXiv Detail & Related papers (2020-07-14T01:30:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.