FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping
- URL: http://arxiv.org/abs/1912.13457v3
- Date: Tue, 15 Sep 2020 07:43:58 GMT
- Title: FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping
- Authors: Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen
- Abstract summary: We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
- Score: 43.236261887752065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel two-stage framework, called FaceShifter, for
high fidelity and occlusion aware face swapping. Unlike many existing face
swapping works that leverage only limited information from the target image
when synthesizing the swapped face, our framework, in its first stage,
generates the swapped face in high-fidelity by exploiting and integrating the
target attributes thoroughly and adaptively. We propose a novel attributes
encoder for extracting multi-level target face attributes, and a new generator
with carefully designed Adaptive Attentional Denormalization (AAD) layers to
adaptively integrate the identity and the attributes for face synthesis. To
address the challenging facial occlusions, we append a second stage consisting
of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net). It is
trained to recover anomaly regions in a self-supervised way without any manual
annotations. Extensive experiments on wild faces demonstrate that our face
swapping results are not only considerably more perceptually appealing, but
also better identity preserving in comparison to other state-of-the-art
methods.
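The AAD layer described above blends an identity embedding with multi-level target attributes through a learned per-pixel attention mask. The following is a minimal NumPy sketch of that blending logic only; in the paper the scale/shift parameters and the mask are produced by learned convolutions, whereas here `z_id`, `z_att`, and `w_mask` are illustrative placeholders passed in directly:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aad_layer(h, z_id, z_att, w_mask):
    """Simplified Adaptive Attentional Denormalization (AAD) step.

    h      : (C, H, W) input activation
    z_id   : (C,) identity embedding, used here for channel-wise scale and shift
    z_att  : (C, H, W) attribute feature map, used for per-pixel scale and shift
    w_mask : (C, H, W) placeholder weights producing the attention mask
    """
    # 1. Instance-normalize the activation per channel.
    mu = h.mean(axis=(1, 2), keepdims=True)
    sigma = h.std(axis=(1, 2), keepdims=True) + 1e-5
    h_bar = (h - mu) / sigma

    # 2. Identity branch: channel-wise denormalization driven by z_id.
    #    (In the paper, scale and shift are separate learned projections;
    #    here one vector serves as both, for brevity.)
    i = z_id[:, None, None] * h_bar + z_id[:, None, None]

    # 3. Attribute branch: spatially varying denormalization driven by z_att.
    a = z_att * h_bar + z_att

    # 4. Attention mask: decides, per element, which branch dominates.
    m = sigmoid(w_mask * h_bar)

    # 5. Blend: identity information where m is high, target attributes elsewhere.
    return m * i + (1.0 - m) * a
```

The mask is what lets the generator inject identity only in identity-relevant regions (e.g. eyes, nose) while keeping target attributes (lighting, pose, background) untouched elsewhere.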
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Realistic and Efficient Face Swapping: A Unified Approach with Diffusion Models [69.50286698375386]
We propose a novel approach that better harnesses diffusion models for face-swapping.
We introduce a mask shuffling technique during inpainting training, which allows us to create a so-called universal model for swapping.
Our approach is relatively unified, which makes it resilient to errors in other off-the-shelf models.
arXiv Detail & Related papers (2024-09-11T13:43:53Z)
- G²Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
Our framework integrates facial perceiving and blending into the end-to-end training and testing process, achieving highly realistic face swapping on wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We make two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z) - Unconstrained Face Sketch Synthesis via Perception-Adaptive Network and
A New Benchmark [16.126100433405398]
We argue that accurately perceiving facial region and facial components is crucial for unconstrained sketch synthesis.
We propose a novel Perception-Adaptive Network (PANet), which can generate high-quality face sketches under unconstrained conditions.
We introduce a new benchmark termed WildSketch, which contains 800 pairs of face photo-sketch with large variations in pose, expression, ethnic origin, background, and illumination.
arXiv Detail & Related papers (2021-12-02T07:08:31Z) - SimSwap: An Efficient Framework For High Fidelity Face Swapping [43.59969679039686]
We propose an efficient framework, called Simple Swap (SimSwap), aiming for generalized and high fidelity face swapping.
Our framework is capable of transferring the identity of an arbitrary source face into an arbitrary target face while preserving the attributes of the target face.
Experiments on wild faces demonstrate that our SimSwap is able to achieve competitive identity performance while preserving attributes better than previous state-of-the-art methods.
arXiv Detail & Related papers (2021-06-11T12:23:10Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.