One Shot Face Swapping on Megapixels
- URL: http://arxiv.org/abs/2105.04932v1
- Date: Tue, 11 May 2021 10:41:47 GMT
- Title: One Shot Face Swapping on Megapixels
- Authors: Yuhao Zhu, Qi Li, Jian Wang, Chengzhong Xu, Zhenan Sun
- Abstract summary: This paper proposes the first Megapixel-level method for one-shot Face Swapping (or MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions to the success of our method.
- Score: 65.47443090320955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face swapping has both positive applications such as entertainment,
human-computer interaction, etc., and negative applications such as DeepFake
threats to politics, economics, etc. Nevertheless, it is necessary to
understand how advanced methods achieve high-quality face swapping and to
generate sufficient and representative face swapping images to train DeepFake
detection algorithms. This paper proposes the first Megapixel-level method for
one-shot Face Swapping (or MegaFS for short). First, MegaFS organizes the face
representation hierarchically through the proposed Hierarchical Representation
Face Encoder (HieRFE) in an extended latent space to preserve more facial
details, rather than the compressed representations used in previous face
swapping methods.
Second, a carefully designed Face Transfer Module (FTM) is proposed to
transfer the identity from a source image to the target along a non-linear
trajectory without explicit feature disentanglement. Finally, the swapped faces
can be synthesized by StyleGAN2 with the benefits of its training stability and
powerful generative capability. Each part of MegaFS can be trained separately,
so the GPU memory requirement for megapixel face swapping can be met. In
summary, complete face representation, stable training, and
limited memory usage are the three novel contributions to the success of our
method. Extensive experiments demonstrate the superiority of MegaFS and the
first megapixel-level face swapping database is released for research on
DeepFake detection and face image editing in the public domain. The dataset is
at this link.
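To make the three-stage pipeline described above more concrete, the following is a minimal PyTorch-style sketch of a MegaFS-like flow: a hierarchical encoder maps both faces into an extended latent space, a transfer module injects the source identity into the target code, and a pretrained StyleGAN2-style generator synthesizes the swapped face. All class names, layer sizes, and the generator interface below are illustrative assumptions inferred from the abstract, not the authors' actual implementation.

```python
# Minimal sketch of a MegaFS-like three-stage face swapping pipeline.
# All interfaces here (HieRFE, FTM, the generator call) are illustrative
# assumptions inferred from the abstract, not the authors' implementation.
import torch
import torch.nn as nn


class HieRFE(nn.Module):
    """Hypothetical hierarchical encoder: maps a face image to an extended
    latent code (one 512-d vector per generator layer)."""

    def __init__(self, num_layers: int = 18, latent_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_layers * latent_dim)
        self.num_layers, self.latent_dim = num_layers, latent_dim

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(img)  # (B, 64)
        return self.head(feat).view(-1, self.num_layers, self.latent_dim)


class FTM(nn.Module):
    """Hypothetical face transfer module: pushes the target code toward the
    source identity along a learned non-linear trajectory, without explicit
    disentanglement of identity and attribute features."""

    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Linear(2 * latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, w_src: torch.Tensor, w_tgt: torch.Tensor) -> torch.Tensor:
        return w_tgt + self.mix(torch.cat([w_src, w_tgt], dim=-1))


def swap_faces(encoder: HieRFE, ftm: FTM, generator: nn.Module,
               src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """Encode both faces, transfer identity in latent space, then synthesize
    the swapped face with a frozen, pretrained StyleGAN2-style generator
    (assumed to accept an extended latent code of shape (B, layers, 512))."""
    w_src, w_tgt = encoder(src), encoder(tgt)
    w_swap = ftm(w_src, w_tgt)
    return generator(w_swap)
```

Keeping the encoder, transfer module, and generator as separate modules means each stage can in principle be trained on its own, which is consistent with the abstract's point about keeping GPU memory usage manageable at megapixel resolution.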
Related papers
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z) - MorphGANFormer: Transformer-based Face Morphing and De-Morphing [55.211984079735196]
StyleGAN-based approaches to face morphing are among the leading techniques.
We propose a transformer-based alternative to face morphing and demonstrate its superiority to StyleGAN-based methods.
arXiv Detail & Related papers (2023-02-18T19:09:11Z) - HiMFR: A Hybrid Masked Face Recognition Through Face Inpainting [0.7868449549351486]
We propose an end-to-end hybrid masked face recognition system, namely HiMFR.
The masked face detector module applies a pretrained Vision Transformer to detect whether faces are covered with a mask or not.
The inpainting module uses a fine-tuned image inpainting model based on a Generative Adversarial Network (GAN) to restore faces.
Finally, the hybrid face recognition module, based on ViT with an EfficientNetB3 backbone, recognizes the faces.
arXiv Detail & Related papers (2022-09-19T11:26:49Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - GMFIM: A Generative Mask-guided Facial Image Manipulation Model for
Privacy Preservation [0.7734726150561088]
We propose a Generative Mask-guided Face Image Manipulation model based on GANs to apply imperceptible editing to the input face image.
Our model can achieve better performance against automated face recognition systems in comparison to the state-of-the-art methods.
arXiv Detail & Related papers (2022-01-10T14:09:14Z) - FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer
Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation that decomposes and encodes facial identity and facial expression separately.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z) - HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z) - Joint Face Image Restoration and Frontalization for Recognition [79.78729632975744]
In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur and noise.
Previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition.
We propose a Multi-Degradation Face Restoration model to restore frontalized high-quality faces from the given low-quality ones.
arXiv Detail & Related papers (2021-05-12T03:52:41Z) - DotFAN: A Domain-transferred Face Augmentation Network for Pose and
Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.