Fine-Grained Face Swapping via Regional GAN Inversion
- URL: http://arxiv.org/abs/2211.14068v2
- Date: Thu, 23 Mar 2023 08:05:52 GMT
- Title: Fine-Grained Face Swapping via Regional GAN Inversion
- Authors: Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang,
Yongwei Nie
- Abstract summary: We present a novel paradigm for high-fidelity face swapping that faithfully preserves the desired subtle geometry and texture details.
We propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
At the core of our system lies a novel Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture.
- Score: 18.537407253864508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel paradigm for high-fidelity face swapping that faithfully
preserves the desired subtle geometry and texture details. We rethink face
swapping from the perspective of fine-grained face editing, i.e., "editing for
swapping" (E4S), and propose a framework that is based on the
explicit disentanglement of the shape and texture of facial components.
Following the E4S principle, our framework enables both global and local
swapping of facial features, as well as controlling the amount of partial
swapping specified by the user. Furthermore, the E4S paradigm is inherently
capable of handling facial occlusions by means of facial masks. At the core of
our system lies a novel Regional GAN Inversion (RGI) method, which allows the
explicit disentanglement of shape and texture. It also allows face swapping to
be performed in the latent space of StyleGAN. Specifically, we design a
multi-scale mask-guided encoder to project the texture of each facial component
into regional style codes. We also design a mask-guided injection module to
manipulate the feature maps with the style codes. Based on the disentanglement,
face swapping is reformulated as a simplified problem of style and mask
swapping. Extensive experiments and comparisons with current state-of-the-art
methods demonstrate the superiority of our approach in preserving texture and
shape details, as well as working with high resolution images. The project page
is http://e4s2022.github.io
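As a rough illustration of the idea, the PyTorch sketch below shows how per-component style codes can be pooled under segmentation masks and injected back into feature maps, so that swapping reduces to exchanging style codes and masks. The module names (`RegionalStyleEncoder`, `MaskGuidedInjection`) and the plain-CNN backbone are illustrative assumptions; the actual E4S/RGI system operates on StyleGAN's multi-scale feature space.

```python
# Minimal sketch of mask-guided regional style encoding and injection.
# Module names and the backbone are hypothetical, not the E4S implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionalStyleEncoder(nn.Module):
    """Pools image features under each facial-component mask into a style code."""
    def __init__(self, feat_dim: int = 64, style_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.to_style = nn.Linear(feat_dim, style_dim)

    def forward(self, img, masks):
        # img: (B, 3, H, W); masks: (B, R, H, W), one soft mask per component
        feats = self.backbone(img)                              # (B, C, H, W)
        masks = F.interpolate(masks, size=feats.shape[-2:])
        area = masks.sum(dim=(2, 3)).clamp(min=1e-6)            # (B, R)
        # Masked average pooling per region -> (B, R, C)
        pooled = torch.einsum('bchw,brhw->brc', feats, masks) / area.unsqueeze(-1)
        return self.to_style(pooled)                            # (B, R, style_dim)

class MaskGuidedInjection(nn.Module):
    """Modulates a feature map with each region's style code inside its mask."""
    def __init__(self, feat_dim: int, style_dim: int = 512):
        super().__init__()
        self.to_scale = nn.Linear(style_dim, feat_dim)
        self.to_shift = nn.Linear(style_dim, feat_dim)

    def forward(self, feats, styles, masks):
        # feats: (B, C, H, W); styles: (B, R, D); masks: (B, R, H, W)
        masks = F.interpolate(masks, size=feats.shape[-2:])
        scale = torch.einsum('brc,brhw->bchw', self.to_scale(styles), masks)
        shift = torch.einsum('brc,brhw->bchw', self.to_shift(styles), masks)
        return feats * (1 + scale) + shift

# Under this disentanglement, swapping a component r (e.g. the nose) is just
# exchanging its style code and its mask between source and target:
#   target_styles[:, r] = source_styles[:, r]
#   target_masks[:, r]  = source_masks[:, r]
```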
Related papers
- MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing [61.014328598895524]
We propose MaTe3D: mask-guided text-based 3D-aware portrait editing.
A new SDF-based 3D generator learns local and global representations with the proposed SDF and density consistency losses.
Conditional Distillation on Geometry and Texture (CDGT) mitigates visual ambiguity and avoids mismatch between texture and geometry.
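The summary only names the consistency losses; as a hedged guess at their shape, the snippet below converts an SDF to volume density in the common VolSDF style and penalizes disagreement between a global field and a local region's field. The function names and the exact loss form are assumptions, not MaTe3D's actual code.

```python
import torch

def sdf_to_density(sdf: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """VolSDF-style conversion: density is a scaled Laplace CDF of the negated SDF."""
    alpha = 1.0 / beta
    return alpha * torch.where(
        sdf >= 0,
        0.5 * torch.exp(-sdf / beta),        # outside the surface: low density
        1.0 - 0.5 * torch.exp(sdf / beta),   # inside the surface: high density
    )

def consistency_losses(sdf_global: torch.Tensor, sdf_local: torch.Tensor):
    """Hypothetical SDF / density consistency terms: penalize disagreement
    between the global field and a local field at the same sample points."""
    sdf_loss = torch.mean((sdf_global - sdf_local) ** 2)
    density_loss = torch.mean(
        (sdf_to_density(sdf_global) - sdf_to_density(sdf_local)) ** 2)
    return sdf_loss, density_loss
```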
arXiv Detail & Related papers (2023-12-12T03:04:08Z)
- E4S: Fine-grained Face Swapping via Editing With Regional GAN Inversion [30.118316634616324]
"editing for swapping" (E4S) is a novel approach to face swapping from the perspective of fine-grained facial editing.
We propose a Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture.
Our E4S outperforms existing methods in preserving texture, shape, and lighting.
arXiv Detail & Related papers (2023-10-23T16:41:13Z)
- FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping [28.714484307143927]
FlowFace++ is a novel face-swapping framework utilizing explicit semantic flow supervision and end-to-end architecture.
The discriminator is shape-aware and relies on a semantic flow-guided operation to explicitly calculate the shape discrepancies between the target and source faces.
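The sketch below shows one way a dense semantic flow can be used to measure shape discrepancy: warp the source segmentation onto the target and take the residual mismatch. The function names and formulation are illustrative assumptions; FlowFace++'s shape-aware discriminator is learned end-to-end.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(seg: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a one-hot segmentation map (B, C, H, W) by a dense flow (B, 2, H, W)."""
    b, _, h, w = seg.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2).to(seg)
    # Normalize pixel-space flow into grid_sample's [-1, 1] coordinates.
    norm = torch.stack((flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)), dim=-1)
    return F.grid_sample(seg, base + norm, align_corners=True)

def shape_discrepancy(src_seg, tgt_seg, flow):
    """Residual mismatch after flowing the source shape onto the target."""
    return torch.mean(torch.abs(warp_with_flow(src_seg, flow) - tgt_seg))
```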
arXiv Detail & Related papers (2023-06-22T06:18:29Z)
- StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces [103.54337984566877]
We use dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN without altering any model parameters.
This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions.
We validate our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks.
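The mechanism of enlarging a shallow layer's receptive field without touching its weights can be illustrated by reusing a trained convolution's kernel with a larger dilation factor, as in this simplified sketch (StyleGANEX applies the idea to StyleGAN's shallow layers specifically):

```python
import torch
import torch.nn as nn

def dilate_conv(conv: nn.Conv2d, dilation: int) -> nn.Conv2d:
    """Return a conv that reuses `conv`'s weights with a larger dilation,
    enlarging the receptive field without altering any model parameters."""
    k = conv.kernel_size[0]
    pad = dilation * (k // 2)  # keeps spatial size for odd kernels, stride 1
    dilated = nn.Conv2d(conv.in_channels, conv.out_channels, k,
                        stride=1, padding=pad, dilation=dilation,
                        bias=conv.bias is not None)
    dilated.weight = conv.weight          # shared, untouched parameters
    if conv.bias is not None:
        dilated.bias = conv.bias
    return dilated

conv = nn.Conv2d(64, 64, 3, padding=1)
x = torch.randn(1, 64, 32, 32)
y = dilate_conv(conv, dilation=2)(x)      # same weights, larger receptive field
print(y.shape)                            # torch.Size([1, 64, 32, 32])
```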
arXiv Detail & Related papers (2023-03-10T18:59:33Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, the framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- LC-NeRF: Local Controllable Face Generation in Neural Radiance Field [55.54131820411912]
LC-NeRF is composed of a Local Region Generators Module and a Spatial-Aware Fusion Module.
Our method provides better local editing than state-of-the-art face editing methods.
Our method also performs well in downstream tasks, such as text-driven facial image editing.
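The summary names the two modules but not their mechanics; one plausible reading of spatial-aware fusion is a soft, mask-like weighting over per-region generator features, sketched below with entirely assumed names and structure:

```python
import torch
import torch.nn as nn

class SpatialAwareFusion(nn.Module):
    """Hypothetical sketch: fuse feature maps from per-region generators with
    predicted soft spatial weights, so editing one region's generator leaves
    the other regions' contributions untouched."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.to_logit = nn.Conv2d(feat_dim, 1, 1)  # one weight map per region

    def forward(self, region_feats: torch.Tensor) -> torch.Tensor:
        # region_feats: (B, R, C, H, W), one feature map per local generator
        b, r, c, h, w = region_feats.shape
        logits = self.to_logit(region_feats.flatten(0, 1)).view(b, r, 1, h, w)
        weights = torch.softmax(logits, dim=1)        # soft region assignment
        return (weights * region_feats).sum(dim=1)    # fused (B, C, H, W)
```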
arXiv Detail & Related papers (2023-02-19T05:50:08Z)
- IA-FaceS: A Bidirectional Method for Semantic Face Editing [8.19063619210761]
This paper proposes a bidirectional method for disentangled face attribute manipulation as well as flexible, controllable component editing.
IA-FaceS is the first such method developed without any input visual guidance, such as segmentation masks or sketches.
Both quantitative and qualitative results indicate that the proposed method outperforms the other techniques in reconstruction, face attribute manipulation, and component transfer.
arXiv Detail & Related papers (2022-03-24T14:44:56Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
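The Semantic Facial Fusion module is described only at a high level; a common realization of "optimizing the combination of encoder and decoder features" is a mask-guided blend, sketched below with assumed module and tensor names rather than HifiFace's actual design:

```python
import torch
import torch.nn as nn

class SemanticFacialFusion(nn.Module):
    """Hypothetical sketch: blend decoder features (the swapped inner face)
    with encoder features (the target's background and hair) under a
    predicted soft face mask, so only the face region is replaced."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.mask_head = nn.Conv2d(feat_dim, 1, 1)

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_head(dec_feat))   # (B, 1, H, W)
        return mask * dec_feat + (1 - mask) * enc_feat
```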
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
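Decoupling identity, expression, pose, and illumination "using 3D priors" typically means recombining groups of 3DMM coefficients; the sketch below shows that recombination step with assumed key names, not FaceController's exact interface:

```python
def recombine_3dmm_coeffs(src: dict, tgt: dict) -> dict:
    """Take identity from the source face; expression, pose, and illumination
    from the target. Keys are assumed 3DMM coefficient groups."""
    return {
        "identity":     src["identity"],       # e.g. shape-basis coefficients
        "expression":   tgt["expression"],
        "pose":         tgt["pose"],
        "illumination": tgt["illumination"],   # e.g. spherical-harmonics lighting
    }
```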
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- Reference-guided Face Component Editing [51.29105560090321]
We propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing.
Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components.
In order to encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features and the target face component features extracted from the reference image.
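A generic cross-attention block captures the gist of an example-guided attention module: let target features attend to the reference component's features so the edited region follows the reference. All names below are assumptions, not r-FACE's implementation:

```python
import torch
import torch.nn as nn

class ExampleGuidedAttention(nn.Module):
    """Hypothetical sketch: target features query reference-component features
    so the inpainted region is steered by the reference's shape."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_feat, ref_component_feat):
        # both: (B, N, dim) token sequences of flattened feature maps
        fused, _ = self.attn(query=target_feat,
                             key=ref_component_feat,
                             value=ref_component_feat)
        return target_feat + fused  # residual fusion
```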
arXiv Detail & Related papers (2020-06-03T05:34:54Z)