FlowFace: Semantic Flow-guided Shape-aware Face Swapping
- URL: http://arxiv.org/abs/2212.02797v1
- Date: Tue, 6 Dec 2022 07:23:39 GMT
- Title: FlowFace: Semantic Flow-guided Shape-aware Face Swapping
- Authors: Hao Zeng, Wei Zhang, Changjie Fan, Tangjie Lv, Suzhen Wang, Zhimeng Zhang, Bowen Ma, Lincheng Li, Yu Ding, Xin Yu
- Abstract summary: We propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace.
Our FlowFace consists of a face reshaping network and a face swapping network.
We employ a pre-trained face masked autoencoder to extract facial features from both the source face and the target face.
- Score: 43.166181219154936
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a semantic flow-guided two-stage framework for
shape-aware face swapping, namely FlowFace. Unlike most previous methods that
focus on transferring the source inner facial features but neglect facial
contours, our FlowFace can transfer both of them to a target face, thus leading
to more realistic face swapping. Concretely, our FlowFace consists of a face
reshaping network and a face swapping network. The face reshaping network
addresses the shape (outline) differences between the source and target faces. It
first estimates a semantic flow (i.e., face shape differences) between the
source and the target face, and then explicitly warps the target face shape
with the estimated semantic flow. After reshaping, the face swapping network
generates inner facial features that exhibit the identity of the source face.
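To make the reshaping stage concrete: the estimated semantic flow is a dense per-pixel displacement field, so the explicit warp amounts to resampling the target image along that field. Below is a minimal PyTorch sketch of such a warp; the tensor shapes and the pixel-offset convention for `flow` are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_semantic_flow(target: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a target face with a dense semantic flow field.

    target: (B, C, H, W) image tensor.
    flow:   (B, 2, H, W) per-pixel (dx, dy) offsets in pixels (assumed convention).
    """
    B, _, H, W = target.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=target.device, dtype=target.dtype),
        torch.arange(W, device=target.device, dtype=target.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize to [-1, 1], the range grid_sample expects.
    grid_x = 2.0 * grid_x / (W - 1) - 1.0
    grid_y = 2.0 * grid_y / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(target, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```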
We employ a pre-trained face masked autoencoder (MAE) to extract facial
features from both the source face and the target face. In contrast to previous
methods that use identity embedding to preserve identity information, the
features extracted by our encoder can better capture facial appearances and
identity information. Then, we develop a cross-attention fusion module to
adaptively fuse inner facial features from the source face with the target
facial attributes, thus leading to better identity preservation. Extensive
quantitative and qualitative experiments on in-the-wild faces demonstrate that
our FlowFace significantly outperforms the state of the art.
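The fusion step described above can be pictured as standard cross-attention in which the target face's tokens act as queries and the source face's MAE-encoded tokens act as keys and values. A minimal sketch of such a module follows; the dimensions, normalization, and residual wiring are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse source identity features into target attribute features.

    Queries come from the target face tokens; keys/values come from the
    source face tokens produced by a pre-trained MAE encoder. The width
    and head count here are illustrative assumptions.
    """
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, target_tokens: torch.Tensor,
                source_tokens: torch.Tensor) -> torch.Tensor:
        # target_tokens: (B, Nt, dim); source_tokens: (B, Ns, dim)
        q = self.norm_q(target_tokens)
        kv = self.norm_kv(source_tokens)
        fused, _ = self.attn(q, kv, kv)
        # The residual keeps target attributes; attention injects source identity.
        return target_tokens + fused
```

In use, `source_tokens` would come from the pre-trained face MAE applied to the source image, and the fused tokens would feed the swapping network's decoder.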
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping [28.714484307143927]
FlowFace++ is a novel face-swapping framework utilizing explicit semantic flow supervision and end-to-end architecture.
Its discriminator is shape-aware, relying on a semantic flow-guided operation to explicitly measure the shape discrepancies between the target and source faces.
arXiv Detail & Related papers (2023-06-22T06:18:29Z)
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes constructing reliable supervision, dubbed cycle triplets, which serves as image-level guidance when the source identity differs from the target identity during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z)
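The cycle-triplet construction can be read as a data-preparation step: re-pose the source face to match the target, blend it into the target frame, and use the result as image-level pseudo ground truth. A hedged sketch with hypothetical `reenact()` and `blend()` helpers (not ReliableSwap's released code):

```python
# Hypothetical helpers: any off-the-shelf face reenactment model and a
# Poisson/alpha blending routine could play these roles.
def build_cycle_triplet(source_img, target_img, reenact, blend):
    """Construct image-level supervision for face swapping training.

    reenact(driver, actor): returns `actor` re-posed to match `driver`.
    blend(face, frame):     composites `face` into `frame`.
    Returns (source, target, pseudo_ground_truth).
    """
    # Re-pose the source identity to the target's pose and expression.
    reenacted = reenact(driver=target_img, actor=source_img)
    # Composite the reenacted face into the target frame: a plausible
    # "swapped" image that can supervise training when identities differ.
    pseudo_gt = blend(reenacted, target_img)
    return source_img, target_img, pseudo_gt
```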
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Learning Facial Representations from the Cycle-consistency of Face [23.23272327438177]
We introduce cycle-consistency in facial characteristics as a free supervisory signal to learn facial representations from unlabeled facial images.
The learning is realized by jointly imposing facial motion cycle-consistency and identity cycle-consistency constraints.
Our results are competitive with those of existing methods, demonstrating the rich and unique information embedded in the disentangled representations.
arXiv Detail & Related papers (2021-08-07T11:30:35Z)
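Identity cycle-consistency of this kind is usually enforced as a round-trip constraint on disentangled codes: transferring another face's motion must not change the identity code. A minimal sketch, assuming hypothetical `encode_id`, `encode_motion`, and `decode` components (the paper also imposes a facial motion cycle-consistency constraint):

```python
import torch.nn.functional as F

def identity_cycle_loss(encode_id, encode_motion, decode, img_a, img_b):
    """Identity must survive a motion-transfer round trip (assumed decomposition)."""
    id_a = encode_id(img_a)            # identity code of face A
    mot_b = encode_motion(img_b)       # motion (pose/expression) code of face B
    fake = decode(id_a, mot_b)         # A's identity driven by B's motion
    id_fake = encode_id(fake)          # re-encode the synthesized face
    return F.l1_loss(id_fake, id_a)    # penalize identity drift over the cycle
```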
- HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the source face's shape and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Exploiting Semantics for Face Image Deblurring [121.44928934662063]
We propose an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks.
We incorporate face semantic labels as input priors and propose an adaptive structural loss to regularize facial local structures.
The proposed method restores sharp images with more accurate facial features and details.
arXiv Detail & Related papers (2020-01-19T13:06:27Z)
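A common way to realize such semantic priors is to up-weight the restoration loss on parsed facial components so that errors around the eyes, nose, and mouth cost more. A hedged illustration of that idea (the weighting scheme below is our assumption, not the paper's adaptive structural loss):

```python
import torch

def semantic_weighted_l1(pred: torch.Tensor, sharp: torch.Tensor,
                         parsing_mask: torch.Tensor,
                         component_weight: float = 10.0) -> torch.Tensor:
    """L1 restoration loss that emphasizes semantically important regions.

    pred, sharp:  (B, C, H, W) deblurred output and ground-truth images.
    parsing_mask: (B, 1, H, W) binary mask of key components (eyes/nose/mouth),
                  e.g. produced by a face parsing network (assumed input).
    """
    weights = 1.0 + component_weight * parsing_mask  # boost facial structures
    return (weights * (pred - sharp).abs()).mean()
```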