FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping
- URL: http://arxiv.org/abs/2306.12686v2
- Date: Mon, 26 Jun 2023 05:11:17 GMT
- Title: FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping
- Authors: Yu Zhang, Hao Zeng, Bowen Ma, Wei Zhang, Zhimeng Zhang, Yu Ding,
Tangjie Lv, Changjie Fan
- Abstract summary: FlowFace++ is a novel face-swapping framework utilizing explicit semantic flow supervision and end-to-end architecture.
The discriminator is shape-aware and relies on a semantic flow-guided operation to explicitly calculate the shape discrepancies between the target and source faces.
- Score: 28.714484307143927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a novel face-swapping framework FlowFace++, utilizing
explicit semantic flow supervision and end-to-end architecture to facilitate
shape-aware face-swapping. Specifically, our work pretrains a facial shape
discriminator to supervise the face swapping network. The discriminator is
shape-aware and relies on a semantic flow-guided operation to explicitly
calculate the shape discrepancies between the target and source faces, thus
optimizing the face swapping network to generate highly realistic results. The
face swapping network is a stack of a pre-trained face-masked autoencoder
(MAE), a cross-attention fusion module, and a convolutional decoder. The MAE
provides a fine-grained facial image representation space shared by the target
and source faces, which facilitates realistic final results. The
cross-attention fusion module carries out the source-to-target face swapping in
a fine-grained latent space while preserving other attributes of the target
image (e.g., expression, head pose, hair, background, illumination, etc.).
Lastly, the convolutional decoder further synthesizes the swapping results
according to the face-swapping latent embedding from the cross-attention fusion
module. Extensive quantitative and qualitative experiments on in-the-wild faces
demonstrate that our FlowFace++ outperforms the state-of-the-art significantly,
particularly when the source face is affected by uneven lighting or pose-angle
offsets.
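The abstract describes the swapping network as a stack of a pre-trained face-masked autoencoder (MAE), a cross-attention fusion module, and a convolutional decoder. The following is a minimal PyTorch-style sketch of that stack, assuming ViT-like 14x14 token grids of 768-d features; the layer sizes and the toy decoder are illustrative assumptions, not the actual FlowFace++ configuration.

```python
# Hedged sketch of the described pipeline: MAE-style encoder features ->
# cross-attention fusion -> convolutional decoder. Shapes and layer counts
# are assumptions for illustration only.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Injects source-face tokens into target-face tokens via cross-attention."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # Query with target tokens so non-identity attributes (pose, expression,
        # background) stay anchored to the target; keys/values carry source identity.
        fused, _ = self.attn(query=target_tokens, key=source_tokens, value=source_tokens)
        return self.norm(target_tokens + fused)


class ConvDecoder(nn.Module):
    """Toy decoder: reshapes fused tokens to a feature map and upsamples to an image."""

    def __init__(self, dim: int = 768, grid: int = 14):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Conv2d(dim, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape
        feat = tokens.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        return self.net(feat)


# Usage: random tensors stand in for the pre-trained MAE's token outputs.
fusion, decoder = CrossAttentionFusion(), ConvDecoder()
tgt_tokens = torch.randn(1, 196, 768)  # placeholder for mae_encoder(target_img)
src_tokens = torch.randn(1, 196, 768)  # placeholder for mae_encoder(source_img)
swapped = decoder(fusion(tgt_tokens, src_tokens))  # (1, 3, 224, 224)
```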
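The shape-aware discriminator is said to rely on a semantic flow-guided operation to explicitly compute shape discrepancies between the target and source faces. Below is a hedged illustration of how a dense flow field could warp a target face-region mask and compare it against the source shape; the flow estimator is treated as a given input, and the plain L1 comparison is an assumption made here for illustration rather than the paper's actual objective.

```python
# Hedged illustration of a semantic flow-guided shape-discrepancy measure.
# `flow` stands in for an externally estimated dense flow field; the L1 mask
# difference is a stand-in metric, not the FlowFace++ loss.
import torch
import torch.nn.functional as F


def warp_with_flow(mask: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a (B,1,H,W) face-region mask with a (B,2,H,W) flow field given in pixels."""
    b, _, h, w = mask.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=0).expand(b, -1, -1, -1)  # (B,2,H,W) pixel grid
    coords = base + flow                                       # displaced sampling positions
    # Normalize to [-1, 1] for grid_sample (x channel first, then y).
    coords = torch.stack(
        (2.0 * coords[:, 0] / (w - 1) - 1.0, 2.0 * coords[:, 1] / (h - 1) - 1.0),
        dim=-1,
    )                                                          # (B,H,W,2)
    return F.grid_sample(mask, coords, mode="bilinear", align_corners=True)


def shape_discrepancy(target_mask: torch.Tensor, source_mask: torch.Tensor,
                      flow: torch.Tensor) -> torch.Tensor:
    """Mean L1 gap between the flow-warped target face region and the source region."""
    warped = warp_with_flow(target_mask, flow)
    return (warped - source_mask).abs().mean()


# Toy usage with random tensors in place of real segmentation masks and flow.
tgt = torch.rand(1, 1, 64, 64)
src = torch.rand(1, 1, 64, 64)
flow = torch.zeros(1, 2, 64, 64)  # zero flow -> discrepancy reduces to |tgt - src|
print(shape_discrepancy(tgt, src, flow).item())
```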
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- FlowFace: Semantic Flow-guided Shape-aware Face Swapping [43.166181219154936]
We propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace.
Our FlowFace consists of a face reshaping network and a face swapping network.
We employ a pre-trained face masked autoencoder to extract facial features from both the source face and the target face.
arXiv Detail & Related papers (2022-12-06T07:23:39Z)
- FaceFormer: Scale-aware Blind Face Restoration with Transformers [18.514630131883536]
We propose a novel scale-aware blind face restoration framework, named FaceFormer, which formulates facial feature restoration as scale-aware transformation.
Our proposed method, trained on a synthetic dataset, generalizes better to natural low-quality images than current state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T10:08:34Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- Face Deblurring Based on Separable Normalization and Adaptive Denormalization [25.506065804812522]
Face deblurring aims to restore a clear face image from a blurred input image with more explicit structure and facial details.
We design an effective face deblurring network based on separable normalization and adaptive denormalization.
Experimental results on both CelebA and CelebA-HQ datasets demonstrate that the proposed face deblurring network restores face structure with more facial details.
arXiv Detail & Related papers (2021-12-18T03:42:23Z)
- Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z)
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)