Diverse facial inpainting guided by exemplars
- URL: http://arxiv.org/abs/2202.06358v2
- Date: Tue, 15 Feb 2022 13:05:47 GMT
- Title: Diverse facial inpainting guided by exemplars
- Authors: Wanglong Lu, Hanli Zhao, Xianta Jiang, Xiaogang Jin, Min Wang, Jiankai
Lyu, and Kaijie Shi
- Abstract summary: This paper introduces EXE-GAN, a novel diverse and interactive facial inpainting framework.
The proposed facial inpainting is built on generative adversarial networks and leverages the global style of the input image, the stochastic style, and the exemplar style of the exemplar image.
A variety of experimental results and comparisons on public CelebA-HQ and FFHQ datasets are presented to demonstrate the superiority of the proposed method.
- Score: 8.360536784609309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial image inpainting is a task of filling visually realistic and
semantically meaningful contents for missing or masked pixels in a face image.
Although existing methods have made significant progress in achieving high
visual quality, the controllable diversity of facial image inpainting remains
an open problem in this field. This paper introduces EXE-GAN, a novel diverse
and interactive facial inpainting framework, which can not only preserve the
high-quality visual effect of the whole image but also complete the face image
with exemplar-like facial attributes. The proposed facial inpainting is built
on generative adversarial networks and leverages the global style of the input
image, the stochastic style, and the exemplar style of the exemplar image. A
novel attribute similarity metric is introduced to encourage networks
to learn the style of facial attributes from the exemplar in a self-supervised
way. To guarantee the natural transition across the boundary of inpainted
regions, a novel spatial variant gradient backpropagation technique is designed
to adjust the loss gradients based on the spatial location. A variety of
experimental results and comparisons on public CelebA-HQ and FFHQ datasets are
presented to demonstrate the superiority of the proposed method in terms of
both the quality and diversity in facial inpainting.
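The abstract does not spell out how the spatial variant gradient backpropagation is implemented, but the general idea of rescaling loss gradients according to pixel location can be sketched with a custom autograd function. The following is a minimal sketch under stated assumptions: a PyTorch-style setup, a weight map built by blurring the binary hole mask, and hypothetical names (`SpatialGradScale`, `spatially_weighted`); it illustrates the technique, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F


class SpatialGradScale(torch.autograd.Function):
    """Identity in the forward pass; rescales the loss gradient per pixel on the way back."""

    @staticmethod
    def forward(ctx, image, weight_map):
        ctx.save_for_backward(weight_map)
        return image.view_as(image)

    @staticmethod
    def backward(ctx, grad_output):
        (weight_map,) = ctx.saved_tensors
        # The gradient w.r.t. the image is modulated by the spatial weights;
        # the weight map itself receives no gradient.
        return grad_output * weight_map, None


def spatially_weighted(generated, hole_mask, blur_kernel=31):
    """Weight gradients with a soft (blurred) version of the inpainting mask.

    generated: (N, C, H, W) generator output.
    hole_mask: (N, 1, H, W), 1 inside the missing region, 0 elsewhere.
    The blurred mask tapers smoothly from 1 to 0 across the hole boundary
    (an assumed weighting), so reconstruction gradients fade out gradually
    instead of stopping at a hard seam.
    """
    soft_mask = F.avg_pool2d(hole_mask, blur_kernel, stride=1,
                             padding=blur_kernel // 2)
    return SpatialGradScale.apply(generated, soft_mask)


# Usage sketch: the loss value is unchanged in the forward pass, but its
# gradients are spatially reweighted before reaching the generator.
# loss = F.l1_loss(spatially_weighted(output, mask), target)
# loss.backward()
```

The design choice here is that the forward pass stays an identity, so the reported loss is unaffected; only the backward signal is shaped by spatial location, which is one plausible way to encourage a natural transition across the inpainting boundary.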
Related papers
- Rank-based No-reference Quality Assessment for Face Swapping [88.53827937914038]
In most face swapping methods, quality is measured by several distances between the manipulated images and the source image.
We present a novel no-reference image quality assessment (NR-IQA) method specifically designed for face swapping.
arXiv Detail & Related papers (2024-06-04T01:36:29Z) - BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed
Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained DM.
BrushNet shows superior performance over existing models across seven key metrics, including image quality, masked region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z) - Self-Supervised Facial Representation Learning with Facial Region
Awareness [13.06996608324306]
Self-supervised pre-training has been proven to be effective in learning transferable representations that benefit various visual tasks.
Recent efforts toward this goal are limited to treating each face image as a whole.
We propose a novel self-supervised facial representation learning framework to learn consistent global and local facial representations.
arXiv Detail & Related papers (2024-03-04T15:48:56Z) - Optimal-Landmark-Guided Image Blending for Face Morphing Attacks [8.024953195407502]
We propose a novel approach for conducting face morphing attacks, which utilizes optimal-landmark-guided image blending.
Our proposed method overcomes the limitations of previous approaches by optimizing the morphing landmarks and using Graph Convolutional Networks (GCNs) to combine landmark and appearance features.
arXiv Detail & Related papers (2024-01-30T03:45:06Z) - Personalized Face Inpainting with Diffusion Models by Parallel Visual
Attention [55.33017432880408]
This paper proposes the use of Parallel Visual Attention (PVA) in conjunction with diffusion models to improve inpainting results.
We train the added attention modules and identity encoder on CelebAHQ-IDI, a dataset proposed for identity-preserving face inpainting.
Experiments demonstrate that PVA attains unparalleled identity resemblance in both face inpainting and face inpainting with language guidance tasks.
arXiv Detail & Related papers (2023-12-06T15:39:03Z) - GaFET: Learning Geometry-aware Facial Expression Translation from
In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expression.
We achieve higher-quality and more accurate facial expression transfer results than state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z) - FT-TDR: Frequency-guided Transformer and Top-Down Refinement Network for
Blind Face Inpainting [77.78305705925376]
Blind face inpainting refers to the task of reconstructing visual contents without explicitly indicating the corrupted regions in a face image.
We propose a novel two-stage blind face inpainting method named Frequency-guided Transformer and Top-Down Refinement Network (FT-TDR) to tackle these challenges.
arXiv Detail & Related papers (2021-08-10T03:12:01Z) - Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach to solve the dilemma, in which the task of face pose manipulation is cast as face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully keeps the identity information as well as the image style unchanged.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
arXiv Detail & Related papers (2021-06-14T11:29:29Z) - Foreground-guided Facial Inpainting with Fidelity Preservation [7.5089719291325325]
We propose a foreground-guided facial inpainting framework that can extract and generate facial features using convolutional neural network layers.
Specifically, we propose a new loss function with semantic reasoning capability for facial expressions and for natural and unnatural features (make-up).
Our proposed method achieves quantitative results comparable to the state of the art, while qualitatively it demonstrates high-fidelity preservation of facial components.
arXiv Detail & Related papers (2021-05-07T15:50:58Z) - Explainable Face Recognition [4.358626952482686]
In this paper, we provide the first comprehensive benchmark and baseline evaluation for explainable face recognition.
We define a new evaluation protocol called the "inpainting game", which is a curated set of 3648 triplets (probe, mate, nonmate) of 95 subjects.
An explainable face matcher is tasked with generating a network attention map which best explains which regions in a probe image match with a mated image.
arXiv Detail & Related papers (2020-08-03T14:47:51Z) - Domain Embedded Multi-model Generative Adversarial Networks for
Image-based Face Inpainting [44.598234654270584]
We present a domain embedded multi-model generative adversarial model for inpainting of face images with large cropped regions.
Experiments on both CelebA and CelebA-HQ face datasets demonstrate that our proposed approach achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-02-05T17:36:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.