FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping
- URL: http://arxiv.org/abs/2210.10473v2
- Date: Fri, 21 Oct 2022 09:46:19 GMT
- Authors: Felix Rosberg, Eren Erdal Aksoy, Fernando Alonso-Fernandez, Cristofer
Englund
- Abstract summary: We present a new single-stage method for subject agnostic face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
- Score: 62.38898610210771
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we present a new single-stage method for subject agnostic face
swapping and identity transfer, named FaceDancer. We have two major
contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature
Similarity Regularization (IFSR). The AFFA module is embedded in the decoder
and adaptively learns to fuse attribute features and features conditioned on
identity information without requiring any additional facial segmentation
process. In IFSR, we leverage the intermediate features in an identity encoder
to preserve important attributes such as head pose, facial expression,
lighting, and occlusion in the target face, while still transferring the
identity of the source face with high fidelity. We conduct extensive
quantitative and qualitative experiments on various datasets and show that the
proposed FaceDancer outperforms other state-of-the-art networks in terms of
identity transfer, while having significantly better pose preservation than
most of the previous methods.
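The abstract describes AFFA as a learned, segmentation-free gate that decides how much of the target's attribute features versus the source's identity-conditioned features to pass through the decoder. A minimal sketch of that idea is below; the per-element sigmoid gate, the weight shapes, and the function names are illustrative assumptions, not the paper's actual architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def affa_style_fusion(attr_feats, id_feats, gate_weights, gate_bias):
    """Gated fusion in the spirit of AFFA: a learned gate chooses, per
    feature element, a blend of the attribute feature (from the target)
    and the identity-conditioned feature (from the source), with no
    facial segmentation masks required. Weights here are hypothetical."""
    fused = []
    for a, i, w, b in zip(attr_feats, id_feats, gate_weights, gate_bias):
        g = sigmoid(w * (a + i) + b)      # gate computed from both inputs
        fused.append(g * a + (1.0 - g) * i)  # convex blend of the two features
    return fused

# Toy example with 4-dimensional feature vectors
attr = [0.2, -1.0, 0.5, 0.0]    # target attribute features
ident = [1.0, 0.3, -0.2, 0.8]   # identity-conditioned features
print(affa_style_fusion(attr, ident, [2.0] * 4, [0.0] * 4))
```

Because the gate produces a convex combination, each fused value stays between the corresponding attribute and identity feature, which is what lets the network trade off attribute preservation against identity transfer per location.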
Related papers
- DiffFAE: Advancing High-fidelity One-shot Facial Appearance Editing with Space-sensitive Customization and Semantic Preservation [84.0586749616249]
This paper presents DiffFAE, a one-stage and highly-efficient diffusion-based framework tailored for high-fidelity Facial Appearance Editing.
For high-fidelity query attributes transfer, we adopt Space-sensitive Physical Customization (SPC), which ensures the fidelity and generalization ability.
In order to preserve source attributes, we introduce the Region-responsive Semantic Composition (RSC).
This module is guided to learn decoupled source-regarding features, thereby better preserving the identity and alleviating artifacts from non-facial attributes such as hair, clothes, and background.
arXiv Detail & Related papers (2024-03-26T12:53:10Z)
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
The existing facial de-identification methods have revealed several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- FICGAN: Facial Identity Controllable GAN for De-identification [34.38379234653657]
We present Facial Identity Controllable GAN (FICGAN) for generating high-quality de-identified face images with ensured privacy protection.
Based on the analysis, we develop FICGAN, an autoencoder-based conditional generative model that learns to disentangle the identity attributes from non-identity attributes on a face image.
arXiv Detail & Related papers (2021-10-02T07:09:27Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging cases in facial synthesis, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.