End-to-end Face-swapping via Adaptive Latent Representation Learning
- URL: http://arxiv.org/abs/2303.04186v1
- Date: Tue, 7 Mar 2023 19:16:20 GMT
- Title: End-to-end Face-swapping via Adaptive Latent Representation Learning
- Authors: Chenhao Lin, Pengbin Hu, Chao Shen, Qian Li
- Abstract summary: This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on wild faces.
- Score: 12.364688530047786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Taking full advantage of the excellent performance of StyleGAN, style
transfer-based face swapping methods have been extensively investigated
recently. However, these studies require separate face segmentation and
blending modules for successful face swapping, and the fixed selection of the
manipulated latent code in these works is reckless, thus degrading face
swapping quality, generalizability, and practicability. This paper proposes a
novel, end-to-end integrated framework for high-resolution, attribute-preserving
face swapping via Adaptive Latent Representation Learning.
Specifically, we first design a multi-task dual-space face encoder by sharing
the underlying feature extraction network to simultaneously complete the facial
region perception and face encoding. This encoder enables us to control the
face pose and attribute individually, thus enhancing the face swapping quality.
Next, we propose an adaptive latent codes swapping module to adaptively learn
the mapping between the facial attributes and the latent codes and select
effective latent codes for improved retention of facial attributes. Finally,
the initial face swapping image generated by StyleGAN2 is blended with the
facial region mask generated by our encoder to address the background blur
problem. Our framework, which integrates facial perception and blending into
the end-to-end training and testing process, achieves highly realistic
face swapping on wild faces without segmentation masks. Experimental results
demonstrate the superior performance of our approach over state-of-the-art
methods.
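The blending step the abstract describes amounts to an alpha-composite of the StyleGAN2 output with the original target, weighted by the encoder's facial-region mask. A minimal NumPy sketch, assuming a soft mask with values in [0, 1] (the function name and shapes are illustrative, not the paper's code):

```python
import numpy as np

def blend_with_mask(generated, target, mask):
    """Keep the swapped face inside the mask, the original
    background (and its sharpness) outside it."""
    if mask.ndim == 2:              # broadcast a HxW mask over RGB channels
        mask = mask[..., None]
    return mask * generated + (1.0 - mask) * target

# Toy 2x2 RGB example: white "swapped face", black "target background".
gen = np.ones((2, 2, 3))
tgt = np.zeros((2, 2, 3))
m = np.array([[1.0, 0.0],
              [0.0, 1.0]])         # face occupies the diagonal pixels
out = blend_with_mask(gen, tgt, m)
```

Because the mask comes from the same encoder, no separate segmentation network is needed at test time, which is the point of the end-to-end design.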
Related papers
- A Generalist FaceX via Learning Unified Facial Representation [77.74407008931486]
FaceX is a novel facial generalist model capable of handling diverse facial tasks simultaneously.
Our versatile FaceX achieves competitive performance compared to elaborate task-specific models on popular facial editing tasks.
arXiv Detail & Related papers (2023-12-31T17:41:48Z) - High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z) - FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping [28.714484307143927]
FlowFace++ is a novel face-swapping framework utilizing explicit semantic flow supervision and end-to-end architecture.
The discriminator is shape-aware and relies on a semantic flow-guided operation to explicitly calculate the shape discrepancies between the target and source faces.
arXiv Detail & Related papers (2023-06-22T06:18:29Z) - StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces [103.54337984566877]
We use dilated convolutions to rescale the receptive fields of shallow layers in StyleGAN without altering any model parameters.
This allows fixed-size small features at shallow layers to be extended into larger ones that can accommodate variable resolutions.
We validate our method using unaligned face inputs of various resolutions in a diverse set of face manipulation tasks.
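The receptive-field arithmetic behind this trick is standard, independent of StyleGANEX specifics: dilating a kernel spreads its taps apart without adding parameters. A small sketch of the effective kernel extent:

```python
def effective_kernel(k, dilation):
    """Effective spatial extent of a k x k kernel with the given
    dilation: the taps span dilation*(k-1)+1 input positions."""
    return dilation * (k - 1) + 1

# A 3x3 kernel with dilation 2 covers a 5x5 area, and with
# dilation 4 a 9x9 area, using the same 9 weights each time.
sizes = [effective_kernel(3, d) for d in (1, 2, 4)]
```

This is why shallow layers can be rescaled to handle larger, unaligned inputs while every pretrained weight stays untouched.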
arXiv Detail & Related papers (2023-03-10T18:59:33Z) - IA-FaceS: A Bidirectional Method for Semantic Face Editing [8.19063619210761]
This paper proposes a bidirectional method for disentangled face attribute manipulation as well as flexible, controllable component editing.
IA-FaceS is developed for the first time without any input visual guidance, such as segmentation masks or sketches.
Both quantitative and qualitative results indicate that the proposed method outperforms the other techniques in reconstruction, face attribute manipulation, and component transfer.
arXiv Detail & Related papers (2022-03-24T14:44:56Z) - FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment [28.83743270895698]
We present Face Swapping GAN (FSGAN) for face swapping and reenactment.
Unlike previous work, we offer a subject swapping scheme that can be applied to pairs of faces without requiring training on those faces.
We derive a novel iterative deep learning-based approach for face reenactment, which adjusts for significant pose and expression variations and can be applied to a single image or a video sequence.
For video sequences, we introduce a continuous interpolation of the face views based on reenactment, Delaunay Triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network.
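Interpolating per-view quantities inside a Delaunay triangle reduces to computing barycentric coordinates. A minimal NumPy sketch of that computation (illustrative only, not FSGANv2's implementation):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (w_a, w_b, w_c) of point p with
    respect to the triangle (a, b, c); they sum to 1, and a
    per-vertex value interpolates as w_a*v_a + w_b*v_b + w_c*v_c."""
    T = np.column_stack((b - a, c - a))   # 2x2 edge matrix
    u, v = np.linalg.solve(T, p - a)      # solve T @ (u, v) = p - a
    return np.array([1.0 - u - v, u, v])

# Point at (0.25, 0.25) in the unit right triangle.
w = barycentric(np.array([0.25, 0.25]),
                np.array([0.0, 0.0]),
                np.array([1.0, 0.0]),
                np.array([0.0, 1.0]))
```

All three weights are non-negative exactly when the point lies inside the triangle, which is what makes the interpolation across face views continuous.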
arXiv Detail & Related papers (2022-02-25T21:04:39Z) - Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails [53.080403912727604]
We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN.
It exploits facial geometry priors to replenish and upsample (8×) the occluded and tiny faces.
Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks.
arXiv Detail & Related papers (2021-08-02T02:29:24Z) - HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping [116.1022638063613]
We propose HifiFace, which can preserve the face shape of the source face and generate photo-realistic results.
We introduce the Semantic Facial Fusion module to optimize the combination of encoder and decoder features.
arXiv Detail & Related papers (2021-06-18T07:39:09Z) - FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z) - Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z) - FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face in high-fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.