StyleSwap: Style-Based Generator Empowers Robust Face Swapping
- URL: http://arxiv.org/abs/2209.13514v1
- Date: Tue, 27 Sep 2022 16:35:16 GMT
- Title: StyleSwap: Style-Based Generator Empowers Robust Face Swapping
- Authors: Zhiliang Xu, Hang Zhou, Zhibin Hong, Ziwei Liu, Jiaming Liu, Zhizhi
Guo, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang
- Abstract summary: We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
- Score: 90.05775519962303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerous attempts have been made at person-agnostic face swapping
given its wide applications. While existing methods mostly rely on tedious
network and loss designs, they still struggle to balance information between
the source and target faces, and tend to produce visible artifacts. In
this work, we introduce a concise and effective framework named StyleSwap. Our
core idea is to leverage a style-based generator to empower high-fidelity and
robust face swapping, so that the generator's advantages can be exploited for
optimizing identity similarity. We identify that with only minimal
modifications, a StyleGAN2 architecture can successfully handle the desired
information from both source and target. Additionally, inspired by the ToRGB
layers, a Swapping-Driven Mask Branch is further devised to improve information
blending. Furthermore, the advantage of StyleGAN inversion can be leveraged:
in particular, a Swapping-Guided ID Inversion strategy is proposed to optimize
identity similarity. Extensive experiments validate that our framework
generates high-quality face swapping results that outperform state-of-the-art
methods both qualitatively and quantitatively.
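To make the core idea above concrete, here is a minimal, hypothetical sketch (not the authors' released code) of how a StyleGAN2-style modulated convolution could take its style vector from a source identity embedding while the spatial features come from the target face, together with a ToRGB-like 1x1 mask head that blends swapped and target content in the spirit of the Swapping-Driven Mask Branch; all module names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch, not the StyleSwap implementation: a StyleGAN2-style
# modulated convolution driven by a source identity code, plus a ToRGB-like
# 1x1 mask head used to blend swapped and target features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConv2d(nn.Module):
    """Weight-modulated 3x3 convolution in the spirit of StyleGAN2."""
    def __init__(self, in_ch, out_ch, style_dim, eps=1e-8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.01)
        self.to_style = nn.Linear(style_dim, in_ch)  # style -> per-channel scale
        self.eps = eps

    def forward(self, x, style):
        b, in_ch, h, w = x.shape
        s = self.to_style(style).reshape(b, 1, in_ch, 1, 1)        # modulation
        w_ = self.weight.unsqueeze(0) * s                          # (b, out, in, 3, 3)
        demod = torch.rsqrt(w_.pow(2).sum(dim=[2, 3, 4]) + self.eps)
        w_ = w_ * demod.reshape(b, -1, 1, 1, 1)                    # demodulation
        # Grouped-convolution trick: fold the batch dimension into groups.
        x = x.reshape(1, b * in_ch, h, w)
        w_ = w_.reshape(-1, in_ch, 3, 3)
        out = F.conv2d(x, w_, padding=1, groups=b)
        return out.reshape(b, -1, h, w)

class SwapBlock(nn.Module):
    """One generator block: target features modulated by the source identity,
    with a small mask head blending swapped and original target content."""
    def __init__(self, channels, id_dim):
        super().__init__()
        self.conv = ModulatedConv2d(channels, channels, id_dim)
        self.to_mask = nn.Conv2d(channels, 1, kernel_size=1)  # ToRGB-like 1x1 head

    def forward(self, target_feat, source_id):
        swapped = F.leaky_relu(self.conv(target_feat, source_id), 0.2)
        mask = torch.sigmoid(self.to_mask(swapped))            # soft blending mask
        return mask * swapped + (1.0 - mask) * target_feat

# Toy usage: an 8x8 feature map from the target, a 512-d identity code from the source.
block = SwapBlock(channels=64, id_dim=512)
out = block(torch.randn(2, 64, 8, 8), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 64, 8, 8])
```

In this reading, the soft mask plays the role the abstract attributes to the Swapping-Driven Mask Branch: it decides, per location, how much identity-modulated content replaces the original target features.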
Related papers
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z) - Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z) - End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perception and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z) - High-resolution Face Swapping via Latent Semantics Disentanglement [50.23624681222619]
We present a novel high-resolution face swapping method using the inherent prior knowledge of a pre-trained GAN model.
We explicitly disentangle the latent semantics by utilizing the progressive nature of the generator.
We extend our method to video face swapping by enforcing two spatio-temporal constraints on the latent space and the image space.
arXiv Detail & Related papers (2022-03-30T00:33:08Z) - Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Experiments show that our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z) - Styleverse: Towards Identity Stylization across Heterogeneous Domains [70.13327076710269]
We propose a new and challenging task, namely IDentity Stylization (IDS) across heterogeneous domains.
We present an effective heterogeneous-network-based framework, $Styleverse$, that uses a single domain-aware generator.
$Styleverse$ achieves higher-fidelity identity stylization than other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-02T04:23:01Z) - Smooth-Swap: A Simple Enhancement for Face-Swapping with Smoothness [18.555874044296463]
We propose a new face-swapping model called Smooth-Swap.
It focuses on building a smooth identity embedding instead of relying on complex handcrafted designs.
Our model is quantitatively and qualitatively comparable or even superior to existing methods in terms of identity change.
arXiv Detail & Related papers (2021-12-11T03:26:32Z) - SimSwap: An Efficient Framework For High Fidelity Face Swapping [43.59969679039686]
We propose an efficient framework, called Simple Swap (SimSwap), aiming for generalized and high fidelity face swapping.
Our framework is capable of transferring the identity of an arbitrary source face into an arbitrary target face while preserving the attributes of the target face.
Experiments on wild faces demonstrate that our SimSwap is able to achieve competitive identity performance while preserving attributes better than previous state-of-the-art methods.
arXiv Detail & Related papers (2021-06-11T12:23:10Z) - FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
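A recurring objective in the papers above (e.g., StyleSwap's Swapping-Guided ID Inversion, SimSwap, Smooth-Swap) is maximizing identity similarity between the swapped face and the source, measured with a face-recognition embedding. The sketch below is a hypothetical illustration of such an identity-similarity loss and an inversion-style latent optimization, assuming generic stand-ins for a pre-trained generator and an ArcFace-like identity encoder; it is not taken from any of these papers' code.

```python
# Hypothetical illustration: identity-similarity loss and a simple
# inversion-style optimization of a latent code toward the source identity.
import torch
import torch.nn.functional as F

def id_similarity_loss(generated, source, id_encoder):
    """1 - cosine similarity between identity embeddings of two face batches."""
    emb_gen = F.normalize(id_encoder(generated), dim=-1)
    emb_src = F.normalize(id_encoder(source), dim=-1)
    return (1.0 - (emb_gen * emb_src).sum(dim=-1)).mean()

def invert_for_identity(generator, id_encoder, source, w_init, steps=200, lr=0.01):
    """Optimize a latent code w so the rendered face keeps the source identity."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        fake = generator(w)                    # render a face from the latent code
        loss = id_similarity_loss(fake, source, id_encoder)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

# Toy usage with stand-in networks, just to show shapes and data flow.
toy_gen = torch.nn.Sequential(torch.nn.Linear(512, 3 * 32 * 32),
                              torch.nn.Unflatten(1, (3, 32, 32)))
toy_id = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
w_opt = invert_for_identity(toy_gen, toy_id, torch.randn(1, 3, 32, 32),
                            torch.randn(1, 512), steps=5)
print(w_opt.shape)  # torch.Size([1, 512])
```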
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.