Reference-guided Face Component Editing
- URL: http://arxiv.org/abs/2006.02051v2
- Date: Tue, 14 Jul 2020 13:37:59 GMT
- Title: Reference-guided Face Component Editing
- Authors: Qiyao Deng, Jie Cao, Yunfan Liu, Zhenhua Chai, Qi Li and Zhenan Sun
- Abstract summary: We propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing.
Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components.
In order to encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features and the target face component features extracted from the reference image.
- Score: 51.29105560090321
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Face portrait editing has achieved great progress in recent years. However,
previous methods either 1) operate on pre-defined face attributes, lacking the
flexibility of controlling shapes of high-level semantic facial components
(e.g., eyes, nose, mouth), or 2) take a manually edited mask or sketch as an
intermediate representation for observable changes, but such additional input
usually requires extra effort to obtain. To overcome these limitations (i.e.,
reliance on pre-defined attributes, masks, or sketches), we propose a novel
framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable
face component editing with geometric changes. Specifically, r-FACE takes an
image inpainting model as the backbone, utilizing reference images as
conditions for controlling the shape of face components. In order to encourage
the framework to concentrate on the target face components, an example-guided
attention module is designed to fuse attention features and the target face
component features extracted from the reference image. Through extensive
experimental validation and comparisons, we verify the effectiveness of the
proposed framework.
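To make the fused-attention idea concrete, below is a minimal PyTorch sketch of an example-guided attention module in the spirit of the abstract's description. The class name, projection layers, learned fusion weight, and the assumption that backbone and reference features share a channel count are all illustrative choices, not the authors' implementation.

```python
# Minimal sketch of an example-guided attention module in the spirit of
# r-FACE. Shapes, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExampleGuidedAttention(nn.Module):
    """Fuses inpainting-backbone features with reference-component features."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project features into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned fusion weight

    def forward(self, target_feat: torch.Tensor, ref_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = target_feat.shape
        q = self.query(target_feat).flatten(2).transpose(1, 2)  # (B, HWt, C')
        k = self.key(ref_feat).flatten(2)                       # (B, C', HWr)
        v = self.value(ref_feat).flatten(2).transpose(1, 2)     # (B, HWr, C)
        attn = F.softmax(q @ k, dim=-1)                         # (B, HWt, HWr)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Residual fusion: start from the backbone features and gradually
        # mix in reference-guided content as gamma is learned.
        return target_feat + self.gamma * fused
```

In a full system, target_feat would come from the inpainting backbone (with the edited component masked out) and ref_feat from an encoder applied to the reference face component; the cross-attention formulation above tolerates differing spatial sizes between the two feature maps.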
Related papers
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces [103.54337984566877]
We use dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN without altering any model parameters (a minimal sketch of this idea appears after this list).
This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions.
We validate our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks.
arXiv Detail & Related papers (2023-03-10T18:59:33Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- Fine-Grained Face Swapping via Regional GAN Inversion [18.537407253864508]
We present a novel paradigm for high-fidelity face swapping that faithfully preserves the desired subtle geometry and texture details.
We propose a framework based on the explicit disentanglement of the shape and texture of facial components.
At the core of our system lies a novel Regional GAN Inversion (RGI) method, which makes this explicit disentanglement possible.
arXiv Detail & Related papers (2022-11-25T12:40:45Z)
- Semantics-Guided Object Removal for Facial Images: with Broad Applicability and Robust Style Preservation [29.162655333387452]
Object removal and image inpainting in facial images is the task of specifically targeting objects that occlude a face, removing them, and replacing them with properly reconstructed facial content.
Two approaches, based on a U-Net and on a modulated generator respectively, have been widely adopted for this task, each with unique advantages but also innate disadvantages.
Here, we propose the Semantics-Guided Inpainting Network (SGIN), a modification of the modulated generator that aims to exploit its advanced generative capability while preserving the high-fidelity details of the original image.
arXiv Detail & Related papers (2022-09-29T00:09:12Z)
- IA-FaceS: A Bidirectional Method for Semantic Face Editing [8.19063619210761]
This paper proposes a bidirectional method for disentangled face attribute manipulation as well as flexible, controllable component editing.
IA-FaceS is the first such method developed without any visual input guidance, such as segmentation masks or sketches.
Both quantitative and qualitative results indicate that the proposed method outperforms the other techniques in reconstruction, face attribute manipulation, and component transfer.
arXiv Detail & Related papers (2022-03-24T14:44:56Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D faces from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach that resolves this dilemma by casting face pose manipulation as a face inpainting problem.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the editing result faithfully preserves both the identity information and the image style.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method outperforms state-of-the-art methods on attribute manipulation by offering greater control over attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z) - MagGAN: High-Resolution Face Attribute Editing with Mask-Guided
Generative Adversarial Network [145.4591079418917]
MagGAN learns to only edit the facial parts that are relevant to the desired attribute changes.
A novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator (a minimal sketch of this idea appears after this list).
A multi-level patch-wise discriminator structure is proposed to scale the model to high-resolution ($1024 \times 1024$) face editing.
arXiv Detail & Related papers (2020-10-03T20:56:16Z)
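The StyleGANEX entry above rescales receptive fields with dilated convolutions. The snippet below is a minimal, self-contained illustration of that idea in plain PyTorch; the tensor sizes are illustrative assumptions, and this is not the StyleGANEX code.

```python
# Re-running the same convolution weights with a larger dilation enlarges
# the receptive field without adding or altering any parameters.
import torch
import torch.nn.functional as F

weight = torch.randn(64, 64, 3, 3)   # one shared set of 3x3 conv weights
x = torch.randn(1, 64, 32, 32)       # a shallow, fixed-size feature map

# Standard convolution: each output pixel sees a 3x3 neighborhood.
y_small = F.conv2d(x, weight, padding=1, dilation=1)

# Same weights, dilation 2: each output pixel now sees a 5x5 area, so the
# layer can cover a proportionally larger input without retraining.
x_large = torch.randn(1, 64, 64, 64)
y_large = F.conv2d(x_large, weight, padding=2, dilation=2)

print(y_small.shape, y_large.shape)  # spatial sizes are preserved
```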
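The MagGAN entry above introduces mask-guided conditioning so that only the influence region of an attribute is edited. Below is a minimal sketch of that blending idea; the function name, mask source, and region coordinates are hypothetical, not MagGAN's actual conditioning mechanism.

```python
# Restrict an attribute edit to its influence region by blending generator
# output with the input under a soft mask.
import torch

def mask_guided_blend(x: torch.Tensor, edited: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Keep pixels outside the attribute's influence region untouched.

    x:      input image,      (B, 3, H, W)
    edited: generator output, (B, 3, H, W)
    mask:   soft influence region in [0, 1], (B, 1, H, W)
    """
    return mask * edited + (1.0 - mask) * x

# Toy usage: an "eyeglasses" edit should only touch the eye region.
x = torch.rand(1, 3, 128, 128)
edited = torch.rand(1, 3, 128, 128)
mask = torch.zeros(1, 1, 128, 128)
mask[:, :, 40:70, 30:100] = 1.0          # hypothetical eye-region mask
out = mask_guided_blend(x, edited, mask)
```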
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.