Foreground-guided Facial Inpainting with Fidelity Preservation
- URL: http://arxiv.org/abs/2105.03342v1
- Date: Fri, 7 May 2021 15:50:58 GMT
- Title: Foreground-guided Facial Inpainting with Fidelity Preservation
- Authors: Jireh Jam, Connah Kendrick, Vincent Drouard, Kevin Walker, Moi Hoon
Yap
- Abstract summary: We propose a foreground-guided facial inpainting framework that can extract and generate facial features using convolutional neural network layers.
Specifically, we propose a new loss function with semantic capability reasoning of facial expressions, natural and unnatural features (make-up).
Our proposed method achieved comparable quantitative results when compared to the state of the art, but qualitatively it demonstrated high-fidelity preservation of facial components.
- Score: 7.5089719291325325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial image inpainting, with high-fidelity preservation for image
realism, is a very challenging task. This is due to the subtle texture in key
facial features (components) that is not easily transferable. Many image
inpainting techniques have been proposed with outstanding capabilities and
high quantitative performance. However, with facial inpainting, the features
are more conspicuous and the visual quality of the blended inpainted regions
is more important qualitatively. Based on these facts, we design a
foreground-guided facial inpainting framework that can extract and generate
facial features using convolutional neural network layers. It introduces the
use of foreground segmentation masks to preserve fidelity. Specifically, we
propose a new loss function with semantic capability reasoning of facial
expressions, natural and unnatural features (make-up). We conduct our
experiments using the CelebA-HQ dataset, segmentation masks from CelebAMask-HQ
(for foreground guidance) and Quick Draw Mask (for missing regions). Our
proposed method achieved comparable quantitative results when compared to the
state of the art, but qualitatively it demonstrated high-fidelity preservation
of facial components.
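The abstract describes weighting reconstruction quality by a foreground segmentation mask so that facial components are preserved with higher fidelity. The paper's exact loss formulation is not given here, so the following is only a minimal illustrative sketch of the general idea: an L1 reconstruction loss over the missing region in which pixels marked as facial-component foreground receive a larger weight. The function name and the `fg_weight` parameter are assumptions for illustration, not the authors' notation.

```python
import numpy as np

def foreground_weighted_l1(pred, target, hole_mask, fg_mask, fg_weight=2.0):
    """Illustrative foreground-guided reconstruction loss (a sketch, not the
    paper's formulation): L1 error over the missing region, with pixels the
    foreground segmentation marks as facial components weighted more heavily.

    pred, target: float arrays of shape (H, W, C) in [0, 1]
    hole_mask:    binary (H, W) array, 1 where the region is missing
    fg_mask:      binary (H, W) array, 1 on facial-component foreground
    """
    per_pixel = np.abs(pred - target).mean(axis=-1)  # (H, W) mean L1 error
    # Inside the hole, foreground pixels get weight fg_weight, others weight 1.
    weights = hole_mask * (1.0 + (fg_weight - 1.0) * fg_mask)
    return float((per_pixel * weights).sum() / max(weights.sum(), 1e-8))
```

With this weighting, the same pixel error costs more when it falls on a facial component than on plain background inside the hole, which is the fidelity-preservation intuition the abstract points to.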
Related papers
- SimFLE: Simple Facial Landmark Encoding for Self-Supervised Facial
Expression Recognition in the Wild [3.4798852684389963]
We propose a self-supervised simple facial landmark encoding (SimFLE) method that can learn effective encoding of facial landmarks.
We introduce novel FaceMAE module for this purpose.
Experimental results on several FER-W benchmarks prove that the proposed SimFLE is superior in facial landmark localization.
arXiv Detail & Related papers (2023-03-14T06:30:55Z)
- Diverse facial inpainting guided by exemplars [8.360536784609309]
This paper introduces EXE-GAN, a novel diverse and interactive facial inpainting framework.
The proposed facial inpainting is achieved based on generative adversarial networks by leveraging the global style of the input image, the stochastic style, and the exemplar style of the exemplar image.
A variety of experimental results and comparisons on public CelebA-HQ and FFHQ datasets are presented to demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2022-02-13T16:29:45Z)
- FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for
Blind Face Inpainting [77.78305705925376]
Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image.
We propose a novel two-stage blind face inpainting method named Frequency-guided Transformer and Top-Down Refinement Network (FT-TDR) to tackle these challenges.
arXiv Detail & Related papers (2021-08-10T03:12:01Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8x) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Network Architecture Search for Face Enhancement [82.25775020564654]
We present a multi-task face restoration network, called Network Architecture Search for Face Enhancement (NASFE).
NASFE can enhance poor-quality face images containing a single degradation (i.e. noise or blur) or multiple degradations (noise+blur+low-light).
arXiv Detail & Related papers (2021-05-13T19:46:05Z)
- Face Hallucination via Split-Attention in Split-Attention Network [58.30436379218425]
Convolutional neural networks (CNNs) have been widely employed to promote face hallucination.
We propose a novel external-internal split attention group (ESAG) to take into account the overall facial profile and fine texture details simultaneously.
By fusing the features from these two paths, the consistency of facial structure and the fidelity of facial details are strengthened.
arXiv Detail & Related papers (2020-10-22T10:09:31Z)
- Explainable Face Recognition [4.358626952482686]
In this paper, we provide the first comprehensive benchmark and baseline evaluation for explainable face recognition.
We define a new evaluation protocol called the "inpainting game", which is a curated set of 3648 triplets (probe, mate, nonmate) of 95 subjects.
An explainable face matcher is tasked with generating a network attention map which best explains which regions in a probe image match with a mated image.
arXiv Detail & Related papers (2020-08-03T14:47:51Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over the state of the art.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- Domain Embedded Multi-model Generative Adversarial Networks for
Image-based Face Inpainting [44.598234654270584]
We present a domain embedded multi-model generative adversarial model for inpainting of face images with large cropped regions.
Experiments on both CelebA and CelebA-HQ face datasets demonstrate that our proposed approach achieved state-of-the-art performance.
arXiv Detail & Related papers (2020-02-05T17:36:13Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.