High-Fidelity Face Swapping with Style Blending
- URL: http://arxiv.org/abs/2312.10843v1
- Date: Sun, 17 Dec 2023 23:22:37 GMT
- Title: High-Fidelity Face Swapping with Style Blending
- Authors: Xinyu Yang, Hongbo Bo
- Abstract summary: We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
- Score: 16.024260677867076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face swapping has gained significant traction, driven by the abundance of
human face synthesis techniques enabled by deep learning. However, previous
face swapping methods that used generative adversarial networks (GANs) as
backbones have faced challenges such as inconsistency in blending, distortions,
artifacts, and issues with training stability. To address these limitations, we
propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts
essential features from faces and inverts them into a latent style code,
encapsulating indispensable facial attributes for successful face swapping.
Second, we introduce an attention-based style blending module to effectively
transfer Face IDs from source to target. To ensure accurate, high-quality
transfer, a series of constraint measures, including contrastive face ID
learning, facial landmark alignment, and dual swap consistency, is implemented.
Finally, the blended style code is translated back to the image space via the
style decoder, which offers high training stability and strong generative capability.
Extensive experiments on the CelebA-HQ dataset highlight the superior visual
quality of generated images from our face-swapping methodology when compared to
other state-of-the-art methods, and the effectiveness of each proposed module.
Source code and weights will be publicly available.
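The abstract describes an attention-based module that blends the source and target latent style codes before decoding. The paper's equations are not given here, so the following is only an illustrative sketch, assuming StyleGAN-style W+ codes of shape (18, 512), target codes as queries and source codes as keys/values, and no learned projection matrices (all names are hypothetical):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_style_blend(w_target, w_source):
    """Blend two W+ style codes: each target layer attends over source layers.

    w_target, w_source: (num_layers, dim) latent codes, e.g. (18, 512).
    Returns a blended (num_layers, dim) code. Simplified: queries come
    directly from the target code and keys/values from the source code,
    with no learned Q/K/V projections.
    """
    d_k = w_target.shape[-1]
    scores = w_target @ w_source.T / np.sqrt(d_k)  # (L, L) layer affinities
    attn = softmax(scores, axis=-1)                # rows sum to 1
    return attn @ w_source                         # (L, dim) blended code

rng = np.random.default_rng(0)
w_t = rng.standard_normal((18, 512))  # target attributes code
w_s = rng.standard_normal((18, 512))  # source identity code
w_blend = attention_style_blend(w_t, w_s)
print(w_blend.shape)  # prints (18, 512)
```

In an actual model the Q/K/V projections would be learned and trained jointly with the constraint losses the abstract lists; this sketch only shows the blending mechanism's data flow.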
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Our approach is relatively unified, which makes it resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- BlendFace: Re-designing Identity Encoders for Face-Swapping [2.320417845168326]
BlendFace is a novel identity encoder for face-swapping.
It provides disentangled identity features to generators and guides them properly as an identity loss function.
Extensive experiments demonstrate that BlendFace improves the identity-attribute disentanglement in face-swapping models.
arXiv Detail & Related papers (2023-07-20T13:17:30Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel and end-to-end integrated framework for high resolution and attribute preservation face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot face swapping (MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial synthesis cases, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.