E4S: Fine-grained Face Swapping via Editing With Regional GAN Inversion
- URL: http://arxiv.org/abs/2310.15081v3
- Date: Wed, 27 Mar 2024 13:23:28 GMT
- Title: E4S: Fine-grained Face Swapping via Editing With Regional GAN Inversion
- Authors: Maomao Li, Ge Yuan, Cairong Wang, Zhian Liu, Yong Zhang, Yongwei Nie, Jue Wang, Dong Xu
- Abstract summary: "Editing for swapping" (E4S) is a novel approach to face swapping from the perspective of fine-grained facial editing.
We propose a Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture.
Our E4S outperforms existing methods in preserving texture, shape, and lighting.
- Score: 30.118316634616324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel approach to face swapping from the perspective of fine-grained facial editing, dubbed "editing for swapping" (E4S). Traditional face swapping methods rely on global feature extraction and fail to preserve the detailed source identity. In contrast, we propose a Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture. Specifically, our E4S performs face swapping in the latent space of a pretrained StyleGAN, where a multi-scale mask-guided encoder projects the texture of each facial component into regional style codes and a mask-guided injection module manipulates feature maps with those style codes. Based on this disentanglement, face swapping can be simplified to style and mask swapping. Moreover, owing to the potentially large gap in lighting conditions, transferring the source skin into the target image may lead to lighting disharmony. We propose a re-coloring network that makes the swapped face maintain the target lighting condition while preserving the source skin. Further, to deal with potential mismatch areas during mask exchange, we design a face inpainting module to refine the face shape. Extensive comparisons with state-of-the-art methods demonstrate that our E4S outperforms existing methods in preserving texture, shape, and lighting. Our implementation is available at https://github.com/e4s2024/E4S2024.
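To make the pipeline concrete, the sketch below illustrates the two central ideas: mask-guided regional style extraction (masked average pooling per facial region) and swapping reduced to exchanging style codes for the inner-face regions. This is a minimal, assumption-level illustration in PyTorch; the function names, tensor shapes, and region IDs are hypothetical, and the actual encoder, injection module, re-coloring network, and inpainting module live in the linked repository.

```python
# Illustrative sketch only: assumed shapes and hypothetical names,
# not the E4S implementation from the linked repository.
import torch
import torch.nn.functional as F

def extract_regional_styles(features: torch.Tensor,
                            seg_mask: torch.Tensor,
                            num_regions: int) -> torch.Tensor:
    """Masked average pooling: one style vector per facial region.

    features: (B, C, H, W) encoder feature map.
    seg_mask: (B, H, W) integer region labels in [0, num_regions).
    returns:  (B, num_regions, C) regional style codes.
    """
    onehot = F.one_hot(seg_mask, num_regions).permute(0, 3, 1, 2).float()
    region_sum = torch.einsum('bchw,brhw->brc', features, onehot)
    region_area = onehot.sum(dim=(2, 3)).clamp(min=1.0).unsqueeze(-1)
    return region_sum / region_area  # average feature inside each region

def swap_styles(src_styles: torch.Tensor,
                tgt_styles: torch.Tensor,
                inner_face_ids: list) -> torch.Tensor:
    """Face swapping as style swapping: copy the source's per-region
    textures (skin, eyes, brows, ...) into the target's code tensor.
    Mask swapping, re-coloring, and inpainting are omitted here."""
    swapped = tgt_styles.clone()
    swapped[:, inner_face_ids] = src_styles[:, inner_face_ids]
    return swapped

# Usage with dummy tensors:
feats = torch.randn(1, 512, 64, 64)
mask = torch.randint(0, 12, (1, 64, 64))        # 12 assumed face regions
styles = extract_regional_styles(feats, mask, num_regions=12)  # (1, 12, 512)
```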
Related papers
- 3D Face Arbitrary Style Transfer [18.09280257466941]
We propose a novel method, namely Face-guided Dual Style Transfer (FDST).
FDST employs a 3D decoupling module to separate facial geometry and texture.
We show that FDST can be applied in many downstream tasks, including region-controllable style transfer, high-fidelity face texture reconstruction, and artistic face reconstruction.
arXiv Detail & Related papers (2023-03-14T08:51:51Z)
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces [103.54337984566877]
We use dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN without altering any model parameters.
This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions.
We validate our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks.
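The receptive-field trick is easy to see in isolation: a pretrained convolution's weights can be reused with a larger dilation, which enlarges the receptive field without touching any parameters. The snippet below is an illustrative sketch of that idea, not the StyleGANEX code itself.

```python
# Illustrative sketch (not the authors' code): reusing a conv layer's
# weights with a larger dilation grows its receptive field while the
# parameter tensors stay untouched.
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(512, 512, kernel_size=3, padding=1)  # "pretrained" layer
x = torch.randn(1, 512, 16, 16)

# Same weights, dilation=2: effective receptive field grows from 3x3 to 5x5;
# padding is scaled accordingly so the spatial size is preserved.
y = F.conv2d(x, conv.weight, conv.bias, stride=1, padding=2, dilation=2)
print(y.shape)  # torch.Size([1, 512, 16, 16])
```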
arXiv Detail & Related papers (2023-03-10T18:59:33Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
Our framework, which integrates facial perception and blending into the end-to-end training and testing process, achieves highly realistic face swapping on wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- LC-NeRF: Local Controllable Face Generation in Neural Radiance Field [55.54131820411912]
LC-NeRF is composed of a Local Region Generators Module and a Spatial-Aware Fusion Module.
Our method provides better local editing than state-of-the-art face editing methods.
Our method also performs well in downstream tasks, such as text-driven facial image editing.
arXiv Detail & Related papers (2023-02-19T05:50:08Z)
- Fine-Grained Face Swapping via Regional GAN Inversion [18.537407253864508]
We present a novel paradigm for high-fidelity face swapping that faithfully preserves the desired subtle geometry and texture details.
We propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
At the core of our system lies a novel Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture.
arXiv Detail & Related papers (2022-11-25T12:40:45Z)
- IA-FaceS: A Bidirectional Method for Semantic Face Editing [8.19063619210761]
This paper proposes a bidirectional method for disentangled face attribute manipulation as well as flexible, controllable component editing.
To our knowledge, IA-FaceS is the first such method developed without any input visual guidance, such as segmentation masks or sketches.
Both quantitative and qualitative results indicate that the proposed method outperforms the other techniques in reconstruction, face attribute manipulation, and component transfer.
arXiv Detail & Related papers (2022-03-24T14:44:56Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that explicitly models physical attributes of the face a priori, providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach that casts the task of face pose manipulation as face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully preserves the identity information as well as the image style.
With 3D facial landmarks as guidance, our method can manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, enabling more flexible face pose editing.
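For intuition on the three-degree-of-freedom guidance, the sketch below composes yaw, pitch, and roll rotations and applies them to 3D landmarks. It is only an illustration of the guidance signal under assumed conventions (landmark count, rotation order), not the paper's pipeline, which inpaints the reposed face from sampled pixels.

```python
# Hedged illustration: rotating 3D facial landmarks by yaw/pitch/roll,
# the kind of guidance signal used for 3-DoF pose editing.
import numpy as np

def rotation_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """R = Rz(roll) @ Ry(yaw) @ Rx(pitch), angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    return Rz @ Ry @ Rx

landmarks = np.random.rand(68, 3)              # (N, 3) 3D landmarks (assumed 68)
R = rotation_matrix(np.deg2rad(15), 0.0, 0.0)  # 15 degrees of yaw
rotated = landmarks @ R.T                      # landmarks in the new pose
```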
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes.
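The 3D-prior decoupling can be pictured as block-wise coefficient mixing in a 3DMM-style parameterization: each attribute occupies its own coefficient block, so editing one attribute is just replacing that block. The sketch below uses assumed BFM-style block sizes and hypothetical names, not FaceController's actual interface.

```python
# Illustrative sketch of attribute decoupling with 3D priors; the
# 80/64/3/27 block sizes follow a common BFM-style convention and are
# assumptions, not FaceController's real layout.
import numpy as np

def split_coeffs(coeffs: np.ndarray) -> dict:
    """Split a flat 3DMM coefficient vector into named attribute blocks."""
    return {
        "identity":     coeffs[:80],
        "expression":   coeffs[80:144],
        "pose":         coeffs[144:147],
        "illumination": coeffs[147:174],   # spherical-harmonics lighting
    }

src = split_coeffs(np.random.rand(174))
tgt = split_coeffs(np.random.rand(174))
# Transfer the source identity while keeping the target's expression,
# pose, and lighting: the decoupled edit the summary describes.
edited = np.concatenate([src["identity"], tgt["expression"],
                         tgt["pose"], tgt["illumination"]])
```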
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.