Domain Embedded Multi-model Generative Adversarial Networks for
Image-based Face Inpainting
- URL: http://arxiv.org/abs/2002.02909v2
- Date: Sat, 20 Jun 2020 05:47:05 GMT
- Title: Domain Embedded Multi-model Generative Adversarial Networks for
Image-based Face Inpainting
- Authors: Xian Zhang, Xin Wang, Bin Kong, Canghong Shi, Youbing Yin, Qi Song,
Siwei Lyu, Jiancheng Lv, Xiaojie Li
- Abstract summary: We present a domain embedded multi-model generative adversarial model for inpainting of face images with large cropped regions.
Experiments on both CelebA and CelebA-HQ face datasets demonstrate that our proposed approach achieved state-of-the-art performance.
- Score: 44.598234654270584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior knowledge of face shape and structure plays an important role in face
inpainting. However, traditional face inpainting methods mainly focus on the
resolution of the generated missing portion without explicitly considering the
particular structure of the human face, and generally produce discordant facial
parts. To solve this problem, we present a domain embedded multi-model
generative adversarial model for inpainting of face images with large cropped
regions. We first represent only the face regions using a latent variable as
the domain knowledge and combine it with the non-face-part textures to generate
high-quality face images with plausible contents. Two adversarial
discriminators are then used to judge whether the generated distribution is
close to the real distribution. The model can not only synthesize novel image
structures but also explicitly utilize the embedded face domain knowledge to
generate predictions that are consistent in structure and appearance.
Experiments on both the CelebA and CelebA-HQ face datasets demonstrate that our
proposed approach achieves state-of-the-art performance and generates
higher-quality inpainting results than existing methods.
Related papers
- Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces with unconstrained single image input.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z)
- Optimal-Landmark-Guided Image Blending for Face Morphing Attacks [8.024953195407502]
We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
arXiv Detail & Related papers (2024-01-30T03:45:06Z)
- Face Deblurring Based on Separable Normalization and Adaptive Denormalization [25.506065804812522]
Face deblurring aims to restore a clear face image from a blurred input image with more explicit structure and facial details.
We design an effective face deblurring network based on separable normalization and adaptive denormalization.
Experimental results on both CelebA and CelebA-HQ datasets demonstrate that the proposed face deblurring network restores face structure with more facial details.
arXiv Detail & Related papers (2021-12-18T03:42:23Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
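An overall 8× factor like the one above is typically reached progressively rather than in one jump. A toy sketch of three ×2 stages, using nearest-neighbour upsampling as an illustrative stand-in for Pro-UIGAN's learned GAN stages (the function name and data are invented):

```python
def upsample2x(img):
    """Nearest-neighbour 2x upsampling of a 2-D grid (list of lists).
    Illustrative stand-in for one learned upsampling stage."""
    out = []
    for row in img:
        wide = [p for p in row for _ in range(2)]  # double each column
        out.append(wide)
        out.append(list(wide))                     # double each row
    return out

img = [[1, 2], [3, 4]]          # tiny 2x2 input "face"
for _ in range(3):              # three x2 stages -> overall 8x
    img = upsample2x(img)
print(len(img), len(img[0]))    # 16 16
```

Chaining small upsampling steps lets each stage refine detail at its own scale, which is the motivation for multi-stage progressive designs.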
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- Joint Face Image Restoration and Frontalization for Recognition [79.78729632975744]
In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur and noise.
Previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition.
We propose an Multi-Degradation Face Restoration model to restore frontalized high-quality faces from the given low-quality ones.
arXiv Detail & Related papers (2021-05-12T03:52:41Z)
- Foreground-guided Facial Inpainting with Fidelity Preservation [7.5089719291325325]
We propose a foreground-guided facial inpainting framework that can extract and generate facial features using convolutional neural network layers.
Specifically, we propose a new loss function that reasons about the semantics of facial expressions and of natural and unnatural features (make-up).
Our proposed method achieved quantitative results comparable to the state of the art but, qualitatively, demonstrated high-fidelity preservation of facial components.
arXiv Detail & Related papers (2021-05-07T15:50:58Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN)
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.