ShapeEditer: a StyleGAN Encoder for Face Swapping
- URL: http://arxiv.org/abs/2106.13984v1
- Date: Sat, 26 Jun 2021 09:38:45 GMT
- Title: ShapeEditer: a StyleGAN Encoder for Face Swapping
- Authors: Shuai Yang, Kai Qiao
- Abstract summary: We propose a novel encoder, called ShapeEditor, for high-resolution, realistic and high-fidelity face swapping.
Our key idea is to use an advanced pretrained high-quality random face image generator, i.e. StyleGAN, as backbone.
For learning to map into the latent space of StyleGAN, we propose a set of self-supervised loss functions.
- Score: 6.848723869850855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel encoder, called ShapeEditor, for
high-resolution, realistic and high-fidelity face swapping. First of all, in
order to ensure sufficient clarity and authenticity, our key idea is to use an
advanced pretrained high-quality random face image generator, i.e. StyleGAN, as
backbone. Secondly, we design ShapeEditor, a two-step encoder, to make the
swapped face integrate the identity and attribute of the input faces. In the
first step, we extract the identity vector of the source image and the
attribute vector of the target image respectively; in the second step, we map
the concatenation of identity vector and attribute vector into the
$\mathcal{W}+$ latent space. In addition, for learning to map into the
latent space of StyleGAN, we propose a set of self-supervised loss functions
with which the training data do not need to be labeled manually. Extensive
experiments on the test dataset show that the results of our method not only
have a clear advantage in clarity and authenticity over other state-of-the-art
methods, but also reflect the sufficient integration of identity and attribute.
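The two-step encoder described in the abstract can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in: the image size (64×64), the 512-d identity and attribute embeddings, and the random linear maps are assumptions for illustration; the real ShapeEditor components are learned networks, and only the W+ shape (18 layers × 512 dims for a 1024×1024 StyleGAN) follows standard StyleGAN convention.

```python
# Hypothetical sketch of the two-step mapping: (1) encode source identity
# and target attributes, (2) map their concatenation into StyleGAN's W+
# latent space. Linear maps stand in for the learned encoders.
import numpy as np

rng = np.random.default_rng(0)

IMG = 3 * 64 * 64                  # assumed flattened image size
ID_DIM, ATTR_DIM = 512, 512        # assumed embedding dimensions
W_LAYERS, W_DIM = 18, 512          # W+ shape for a 1024x1024 StyleGAN

# Random projections as placeholders for trained encoder weights.
W_id = rng.standard_normal((ID_DIM, IMG)) * 0.01
W_attr = rng.standard_normal((ATTR_DIM, IMG)) * 0.01
W_map = rng.standard_normal((W_LAYERS * W_DIM, ID_DIM + ATTR_DIM)) * 0.01

def encode_identity(source_img: np.ndarray) -> np.ndarray:
    """Step 1a: identity vector from the source image."""
    return W_id @ source_img.reshape(-1)

def encode_attributes(target_img: np.ndarray) -> np.ndarray:
    """Step 1b: attribute vector from the target image."""
    return W_attr @ target_img.reshape(-1)

def map_to_wplus(id_vec: np.ndarray, attr_vec: np.ndarray) -> np.ndarray:
    """Step 2: concatenate the two vectors and project into W+."""
    z = np.concatenate([id_vec, attr_vec])
    return (W_map @ z).reshape(W_LAYERS, W_DIM)

source = rng.standard_normal((3, 64, 64))   # identity donor
target = rng.standard_normal((3, 64, 64))   # attribute donor
w_plus = map_to_wplus(encode_identity(source), encode_attributes(target))
print(w_plus.shape)  # (18, 512)
```

The resulting W+ code would then be fed to the frozen StyleGAN generator; keeping the generator fixed and learning only the encoder is what lets the self-supervised losses operate without manual labels.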
Related papers
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z) - StableIdentity: Inserting Anybody into Anywhere at First Sight [57.99693188913382]
We propose StableIdentity, which allows identity-consistent recontextualization with just one face image.
We are the first to directly inject the identity learned from a single image into video/3D generation without finetuning.
arXiv Detail & Related papers (2024-01-29T09:06:15Z) - Disentangled Representation Learning for Controllable Person Image
Generation [29.719070087384512]
We propose a novel framework named DRL-CPG to learn disentangled latent representation for controllable person image generation.
To our knowledge, we are the first to learn disentangled latent representations with transformers for person image generation.
arXiv Detail & Related papers (2023-12-10T07:15:58Z) - Attribute-preserving Face Dataset Anonymization via Latent Code
Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representations in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images while, crucially, better preserving the facial attributes.
arXiv Detail & Related papers (2023-03-20T17:34:05Z) - FaceDancer: Pose- and Occlusion-Aware High Fidelity Face Swapping [62.38898610210771]
We present a new single-stage method for subject face swapping and identity transfer, named FaceDancer.
We have two major contributions: Adaptive Feature Fusion Attention (AFFA) and Interpreted Feature Similarity Regularization (IFSR).
arXiv Detail & Related papers (2022-10-19T11:31:38Z) - Dynamic Prototype Mask for Occluded Person Re-Identification [88.7782299372656]
Existing methods mainly address this issue by employing body clues provided by an extra network to distinguish the visible part.
We propose a novel Dynamic Prototype Mask (DPM) based on two self-evident prior knowledge.
Under this condition, the occluded representation could be well aligned in a selected subspace spontaneously.
arXiv Detail & Related papers (2022-07-19T03:31:13Z) - Learning Disentangled Representation for One-shot Progressive Face
Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmarks with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z) - Identity-Guided Face Generation with Multi-modal Contour Conditions [15.84849740726513]
We propose a framework that takes the contour and an extra image specifying the identity as the inputs.
An identity encoder extracts the identity-related feature, accompanied by a main encoder to obtain the rough contour information.
Our method can produce photo-realistic results with 1024×1024 resolution.
arXiv Detail & Related papers (2021-10-10T17:08:22Z) - An Efficient Integration of Disentangled Attended Expression and
Identity Features for Facial Expression Transfer and Synthesis [6.383596973102899]
We present an Attention-based Identity Preserving Generative Adversarial Network (AIP-GAN) to overcome the identity leakage problem from a source image to a generated face image.
Our key insight is that the identity preserving network should be able to disentangle and compose shape, appearance, and expression information for efficient facial expression transfer and synthesis.
arXiv Detail & Related papers (2020-05-01T17:14:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.