SimSwap: An Efficient Framework For High Fidelity Face Swapping
- URL: http://arxiv.org/abs/2106.06340v1
- Date: Fri, 11 Jun 2021 12:23:10 GMT
- Title: SimSwap: An Efficient Framework For High Fidelity Face Swapping
- Authors: Renwang Chen, Xuanhong Chen, Bingbing Ni, Yanhao Ge
- Abstract summary: We propose an efficient framework, called Simple Swap (SimSwap), aiming for generalized and high fidelity face swapping.
Our framework is capable of transferring the identity of an arbitrary source face into an arbitrary target face while preserving the attributes of the target face.
Experiments on wild faces demonstrate that our SimSwap is able to achieve competitive identity performance while preserving attributes better than previous state-of-the-art methods.
- Score: 43.59969679039686
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose an efficient framework, called Simple Swap (SimSwap), aiming for
generalized and high fidelity face swapping. In contrast to previous approaches
that either lack the ability to generalize to arbitrary identity or fail to
preserve attributes like facial expression and gaze direction, our framework is
capable of transferring the identity of an arbitrary source face into an
arbitrary target face while preserving the attributes of the target face. We
overcome the above defects in the following two ways. First, we present the ID
Injection Module (IIM) which transfers the identity information of the source
face into the target face at feature level. By using this module, we extend the
architecture of an identity-specific face swapping algorithm to a framework for
arbitrary face swapping. Second, we propose the Weak Feature Matching Loss
which efficiently helps our framework to preserve the facial attributes in an
implicit way. Extensive experiments on wild faces demonstrate that our SimSwap
is able to achieve competitive identity performance while preserving attributes
better than previous state-of-the-art methods. The code is available on
GitHub: https://github.com/neuralchen/SimSwap.
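The Weak Feature Matching Loss described above can be illustrated with a minimal sketch: instead of matching discriminator features at every layer, only the last few (deepest) feature maps of the swapped result and the target are compared, so low-level target attributes are preserved implicitly rather than pixel-by-pixel. The function name, the `m` parameter, and the use of an L1 distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weak_feature_matching_loss(feats_result, feats_target, m=3):
    """Hypothetical sketch of a weak feature matching loss.

    feats_result, feats_target: lists of numpy feature maps from a
    discriminator, ordered shallow -> deep. Only the last `m` (deepest)
    layers contribute, which is what makes the matching "weak": it
    constrains semantics while leaving low-level details free.
    """
    assert len(feats_result) == len(feats_target)
    loss = 0.0
    for fr, ft in zip(feats_result[-m:], feats_target[-m:]):
        loss += np.mean(np.abs(fr - ft))  # per-layer L1 distance
    return loss
```

In a full training loop this term would be added, with a weighting coefficient, to the adversarial and identity losses; here it is shown in isolation for clarity.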
Related papers
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G2Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes to construct reliable supervision, dubbed cycle triplets, which serve as image-level guidance when the source identity differs from the target one during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
Our framework, which integrates facial perception and blending into the end-to-end training and testing process, can achieve highly realistic face swapping on wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We make two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmarks with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial synthesis cases, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.