High-Fidelity Face Swapping with Style Blending
- URL: http://arxiv.org/abs/2312.10843v1
- Date: Sun, 17 Dec 2023 23:22:37 GMT
- Title: High-Fidelity Face Swapping with Style Blending
- Authors: Xinyu Yang, Hongbo Bo
- Abstract summary: We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
- Score: 16.024260677867076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face swapping has gained significant traction, driven by the plethora of
human face synthesis facilitated by deep learning methods. However, previous
face swapping methods that used generative adversarial networks (GANs) as
backbones have faced challenges such as inconsistency in blending, distortions,
artifacts, and issues with training stability. To address these limitations, we
propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts
essential features from faces and inverts them into a latent style code,
encapsulating indispensable facial attributes for successful face swapping.
Second, we introduce an attention-based style blending module to effectively
transfer Face IDs from source to target. To ensure accurate, high-quality
transfer, we apply a series of constraints, including contrastive face ID
learning, facial landmark alignment, and dual swap consistency.
Finally, the blended style code is translated back to the image space via the
style decoder, which offers high training stability and strong generative capability.
Extensive experiments on the CelebA-HQ dataset highlight the superior visual
quality of generated images from our face-swapping methodology when compared to
other state-of-the-art methods, and the effectiveness of each proposed module.
Source code and weights will be publicly available.
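The abstract describes an attention-based style blending module that transfers the source Face ID into the target's latent style code. As a rough illustration only (the paper's actual module uses learned weights and a specific architecture not detailed here), the following numpy sketch shows scaled dot-product cross-attention over W+-like style tokens; all names, dimensions, and the random projections standing in for learned parameters are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_style_blend(w_target, w_source, d_k=64, seed=0):
    """Blend source-identity style tokens into target style tokens via
    scaled dot-product cross-attention (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = w_target.shape
    # Random projections stand in for learned query/key/value weights.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    q = w_target @ Wq                        # queries from the target code
    k = w_source @ Wk                        # keys from the source code
    v = w_source @ Wv                        # values carry source identity
    attn = softmax(q @ k.T / np.sqrt(d_k))   # (n, n) attention map
    # Residual blend: target attributes plus attended source identity.
    return w_target + attn @ v

# Toy W+-like codes: 18 style tokens of dimension 512.
w_tgt = np.random.default_rng(1).standard_normal((18, 512))
w_src = np.random.default_rng(2).standard_normal((18, 512))
w_blend = attention_style_blend(w_tgt, w_src)
print(w_blend.shape)  # (18, 512)
```

The blended code would then be fed to the style decoder to produce the swapped image; in the actual framework all projections are learned and trained under the ID, landmark, and consistency constraints above.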
Related papers
- BlendFace: Re-designing Identity Encoders for Face-Swapping [2.320417845168326]
BlendFace is a novel identity encoder for face-swapping.
It disentangles identity features and guides the generator properly when used as an identity loss function.
Extensive experiments demonstrate that BlendFace improves the identity-attribute disentanglement in face-swapping models.
arXiv Detail & Related papers (2023-07-20T13:17:30Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- Smooth-Swap: A Simple Enhancement for Face-Swapping with Smoothness [18.555874044296463]
We propose a new face-swapping model called Smooth-Swap.
It focuses on deriving the smoothness of the identity embedding instead of employing complex handcrafted designs.
Our model is quantitatively and qualitatively comparable or even superior to existing methods in terms of identity change.
arXiv Detail & Related papers (2021-12-11T03:26:32Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot Face Swapping (or MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing existing, easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.