FaceController: Controllable Attribute Editing for Face in the Wild
- URL: http://arxiv.org/abs/2102.11464v1
- Date: Tue, 23 Feb 2021 02:47:28 GMT
- Title: FaceController: Controllable Attribute Editing for Face in the Wild
- Authors: Zhiliang Xu, Xiyu Yu, Zhibin Hong, Zhen Zhu, Junyu Han, Jingtuo Liu,
Errui Ding, Xiang Bai
- Abstract summary: We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
- Score: 74.56117807309576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face attribute editing aims to generate faces with one or multiple desired
face attributes manipulated while other details are preserved. Unlike prior
works such as GAN inversion, which require an expensive reverse mapping process, we
propose a simple feed-forward network to generate high-fidelity manipulated
faces. By simply employing existing and easily obtainable prior information,
our method can control, transfer, and edit diverse attributes of faces in the
wild. The proposed method can consequently be applied to various applications
such as face swapping, face relighting, and makeup transfer. In our method, we
decouple identity, expression, pose, and illumination using 3D priors; separate
texture and colors by using region-wise style codes. All the information is
embedded into adversarial learning by our identity-style normalization module.
Disentanglement losses are proposed to enable the generator to extract
information from each attribute independently. Comprehensive quantitative and
qualitative evaluations have been conducted. In a single framework, our method
achieves the best or competitive scores on a variety of face applications.
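The identity-style normalization module is not detailed in the abstract; below is a minimal, hypothetical PyTorch sketch of how such a module might inject an identity embedding and a region-wise style code through AdaIN-style conditional normalization. The class name, dimensions, and concatenation scheme are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an identity-style normalization layer in the spirit
# of the abstract's description; names and dimensions are assumptions.
import torch
import torch.nn as nn

class IdentityStyleNorm(nn.Module):
    def __init__(self, num_features: int, id_dim: int = 512, style_dim: int = 256):
        super().__init__()
        # Normalize without learnable affine parameters; the affine
        # transform is predicted from the injected codes instead.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.to_gamma = nn.Linear(id_dim + style_dim, num_features)
        self.to_beta = nn.Linear(id_dim + style_dim, num_features)

    def forward(self, x, id_code, style_code):
        # x: (B, C, H, W); id_code: (B, id_dim); style_code: (B, style_dim)
        code = torch.cat([id_code, style_code], dim=1)
        gamma = self.to_gamma(code)[:, :, None, None]  # (B, C, 1, 1)
        beta = self.to_beta(code)[:, :, None, None]
        return self.norm(x) * (1 + gamma) + beta
```

A generator block would apply such a layer after each convolution, so that the injected codes modulate feature statistics at every scale.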
Related papers
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
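The abstract does not specify the attention mechanism; a plausible reading is cross-attention from target style tokens to source style tokens. The sketch below is illustrative only, assuming W+-like per-layer style codes.

```python
# Illustrative cross-attention blending of W+-like style tokens; the actual
# module in the paper may differ. Token shape: (B, num_layers, style_dim).
import torch.nn as nn

class StyleBlend(nn.Module):
    def __init__(self, style_dim: int = 512, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(style_dim, heads, batch_first=True)

    def forward(self, target_styles, source_styles):
        # Target styles query the source styles so identity-bearing
        # components can be transferred per generator layer.
        blended, _ = self.attn(target_styles, source_styles, source_styles)
        return target_styles + blended  # residual keeps target attributes
```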
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
- ManiCLIP: Multi-Attribute Face Manipulation from Text [104.30600573306991]
We present a novel multi-attribute face manipulation method based on textual descriptions.
Our method generates natural manipulated faces with minimal text-irrelevant attribute editing.
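ManiCLIP's exact objective is not given here, but text-driven editors of this kind typically minimize a CLIP-space distance between the edited image and the prompt. A generic sketch using the open-source `clip` package (not ManiCLIP's published loss):

```python
# Generic CLIP-guided text loss, assumed for illustration.
# Requires: pip install git+https://github.com/openai/CLIP.git
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_text_loss(edited_image: torch.Tensor, prompt: str) -> torch.Tensor:
    # edited_image: (B, 3, 224, 224), already CLIP-preprocessed; on CUDA the
    # CLIP weights are fp16, so cast the input to match if needed.
    text = clip.tokenize([prompt]).to(device)
    img_emb = model.encode_image(edited_image.to(device))
    txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    # Cosine distance: approaches 0 as the image matches the prompt.
    return 1 - (img_emb * txt_emb).sum(dim=-1).mean()
```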
arXiv Detail & Related papers (2022-10-02T07:22:55Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
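Which modifications are meant is not spelled out in this summary. One common pattern in style-based swapping, assumed here rather than confirmed by the paper, is mapping a face-recognition identity embedding of the source into the generator's per-layer style space:

```python
# Assumed pattern (not confirmed by the paper): map a source identity
# embedding into per-layer W-space styles of a StyleGAN2-like generator.
import torch.nn as nn

class IdentityToStyle(nn.Module):
    def __init__(self, id_dim: int = 512, w_dim: int = 512, num_ws: int = 18):
        super().__init__()
        self.num_ws = num_ws
        self.mlp = nn.Sequential(
            nn.Linear(id_dim, w_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(w_dim, w_dim * num_ws),
        )

    def forward(self, id_emb):
        # id_emb: (B, id_dim) -> per-layer styles (B, num_ws, w_dim)
        w = self.mlp(id_emb)
        return w.view(id_emb.size(0), self.num_ws, -1)
```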
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method, named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
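The semantic-guided fusion module is not described in detail here; at its simplest, semantic guidance can be read as mask-weighted blending of identity and attribute features, as in this hypothetical sketch:

```python
# Hypothetical mask-weighted fusion; the paper's module is likely richer.
import torch

def semantic_fusion(id_feat: torch.Tensor, attr_feat: torch.Tensor,
                    face_mask: torch.Tensor) -> torch.Tensor:
    # id_feat, attr_feat: (B, C, H, W); face_mask: (B, 1, H, W) in [0, 1].
    # Take identity features inside the face region, attributes elsewhere.
    return face_mask * id_feat + (1 - face_mask) * attr_feat
```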
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces [9.664892091493586]
FacialGAN is a novel framework enabling simultaneous rich style transfers and interactive facial attributes manipulation.
We show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification.
arXiv Detail & Related papers (2021-10-18T15:53:38Z)
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach in which the task of face pose manipulation is cast as face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully preserves both the identity information and the image style.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
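The three degrees of freedom correspond to standard Euler-angle rotations. A small worked example of rotating 3D landmarks by yaw, pitch, and roll (the axis convention here is an assumption):

```python
# Euler-angle rotation of 3D landmarks; the axis convention is an assumption.
import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    # Angles in radians: yaw about y, pitch about x, roll about z.
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

landmarks = np.random.rand(68, 3)  # hypothetical 68 3D facial landmarks
rotated = landmarks @ rotation_matrix(0.2, 0.0, 0.0).T  # ~11 degrees of yaw
```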
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method successfully outperforms state-of-the-art methods on attribute manipulation by exploiting greater control of attribute intensity.
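Attribute-intensity control in a latent space is commonly realized by moving along a learned direction vector; the following generic sketch illustrates that mechanism (the direction vector and scaling are assumptions, not S2FGAN's published formulation):

```python
# Generic latent attribute editing with intensity control; assumed for
# illustration, not S2FGAN's exact method.
import torch

def edit_latent(z: torch.Tensor, direction: torch.Tensor,
                intensity: float) -> torch.Tensor:
    # z: (B, D) appearance latent; direction: (D,) attribute vector.
    d = direction / direction.norm()
    return z + intensity * d  # larger |intensity| -> stronger attribute
```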
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
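One way to obtain such a heuristic error signal, consistent with the paper's described approach, is to reconstruct the target from itself and treat the residual as an occlusion cue for the refinement stage. A minimal sketch, where `reconstruct` is a stand-in for the first-stage network:

```python
# Sketch of a heuristic error cue: self-reconstruct the target and treat the
# residual as an occlusion map. `reconstruct` is a hypothetical stand-in for
# the first-stage network, mapping (source, target) -> swapped face.
import torch

def heuristic_error(target: torch.Tensor, reconstruct) -> torch.Tensor:
    with torch.no_grad():
        self_swap = reconstruct(target, target)  # should reproduce target
    return target - self_swap  # large where the stage fails, e.g. occlusions
```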
arXiv Detail & Related papers (2019-12-31T17:57:46Z)