FACEGAN: Facial Attribute Controllable rEenactment GAN
- URL: http://arxiv.org/abs/2011.04439v1
- Date: Mon, 9 Nov 2020 14:04:15 GMT
- Title: FACEGAN: Facial Attribute Controllable rEenactment GAN
- Authors: Soumya Tripathy, Juho Kannala and Esa Rahtu
- Abstract summary: Face reenactment is a popular animation method where the person's identity is taken from the source image and the facial motion from the driving image.
Recent works have demonstrated high-quality results by combining facial landmark based motion representations with generative adversarial networks.
We propose a novel Facial Attribute Controllable rEenactment GAN (FACEGAN), which transfers the facial motion from the driving face via the Action Unit (AU) representation.
- Score: 24.547319786399743
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face reenactment is a popular facial animation method where the
person's identity is taken from the source image and the facial motion from
the driving image. Recent works have demonstrated high-quality results by
combining facial landmark based motion representations with generative
adversarial networks. These models perform best if the source and driving
images depict the same person or if the facial structures are otherwise very
similar. However, if the identity differs, the driving facial structures
leak into the output, distorting the reenactment result. We propose a novel
Facial Attribute Controllable rEenactment GAN (FACEGAN), which transfers the
facial motion from the driving face via the Action Unit (AU) representation.
Unlike facial landmarks, the AUs are independent of the facial structure,
preventing identity leakage. Moreover, AUs provide a human-interpretable way
to control the reenactment. FACEGAN processes the background and face regions
separately for optimized output quality. Extensive quantitative and
qualitative comparisons show a clear improvement over the state of the art in
the single-source reenactment task. The results are best illustrated in the
reenactment video provided in the supplementary material. The source code
will be made available upon publication of the paper.
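For intuition, the sketch below shows the general shape of an AU-conditioned reenactment generator as the abstract describes it: the source face supplies identity features, the driving frame contributes only a vector of AU intensities, and the two are fused before decoding. This is a minimal illustrative reconstruction, not the paper's actual FACEGAN architecture; the AU count, layer shapes, and additive fusion are assumptions.

```python
import torch
import torch.nn as nn

NUM_AUS = 17  # assumption: number of AU intensities (e.g., OpenFace reports 17)

class AUReenactor(nn.Module):
    """Toy AU-conditioned generator; NOT the actual FACEGAN architecture."""

    def __init__(self, au_dim: int = NUM_AUS, feat_dim: int = 256):
        super().__init__()
        # Encode the source face crop into an identity/appearance feature map.
        self.source_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Map the driving AU vector to a motion code. AUs describe muscle
        # activations rather than facial geometry, which is why the abstract
        # argues they avoid identity leakage.
        self.au_mlp = nn.Sequential(
            nn.Linear(au_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Decode the motion-conditioned identity features into a face crop.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, source_face, driving_aus):
        feat = self.source_encoder(source_face)   # (B, feat_dim, H/4, W/4)
        motion = self.au_mlp(driving_aus)         # (B, feat_dim)
        # Simple additive fusion; the paper's actual conditioning differs.
        feat = feat + motion[:, :, None, None]
        return self.decoder(feat)                 # reenacted face crop

model = AUReenactor()
src = torch.randn(1, 3, 128, 128)   # source face crop (identity)
aus = torch.rand(1, NUM_AUS)        # AU intensities from the driving frame
out = model(src, aus)               # (1, 3, 128, 128) reenacted crop
```

In practice the AU vector would come from an off-the-shelf AU detector such as OpenFace, and the reenacted crop would be composited back onto the source background, mirroring the abstract's separate face/background processing; both steps are omitted from this sketch.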
Related papers
- FSRT: Facial Scene Representation Transformer for Face Reenactment from Factorized Appearance, Head-pose, and Facial Expression Features [17.531847357428454]
The task of face reenactment is to transfer the head motion and facial expressions from a driving video to the appearance of a source image.
Most existing methods are CNN-based and estimate optical flow from the source image to the current driving frame.
We propose a transformer-based encoder for computing a set-latent representation of the source image.
arXiv Detail & Related papers (2024-04-15T12:37:26Z)
- DiffusionAct: Controllable Diffusion Autoencoder for One-shot Face Reenactment [34.821255203019554]
Video-driven neural face reenactment aims to synthesize realistic facial images that successfully preserve the identity and appearance of a source face.
Recent advances in Diffusion Probabilistic Models (DPMs) enable the generation of high-quality realistic images.
We present DiffusionAct, a novel method that leverages the photo-realistic image generation of diffusion models to perform neural face reenactment.
arXiv Detail & Related papers (2024-03-25T21:46:53Z) - HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and
Retarget Faces [47.27033282706179]
We present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity.
Our method operates under the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring subject-specific fine-tuning.
We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2.
arXiv Detail & Related papers (2023-07-20T11:59:42Z) - Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z) - Semantic-aware One-shot Face Re-enactment with Dense Correspondence
Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z) - StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face
Reenactment [47.27033282706179]
We propose a framework that learns to disentangle the identity characteristics of the face from its pose.
We show that the proposed method produces higher quality results even on extreme pose variations.
arXiv Detail & Related papers (2022-09-27T13:22:35Z) - Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a condition identity provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z) - Single Source One Shot Reenactment using Weighted motion From Paired
Feature Points [26.210285908770377]
We propose a new (face) reenactment model that learns shape-independent motion features in a self-supervised setup.
The model faithfully transfers the driving motion to the source while keeping the source identity intact.
arXiv Detail & Related papers (2021-04-07T13:45:34Z) - LI-Net: Large-Pose Identity-Preserving Face Reenactment Network [14.472453602392182]
We propose a large-pose identity-preserving face reenactment network, LI-Net.
Specifically, the Landmark Transformer is adopted to adjust driving landmark images.
The Face Rotation Module and the Expression Enhancing Generator decouple the transformed landmark image into pose and expression features, and reenact those attributes separately to generate identity-preserving faces.
arXiv Detail & Related papers (2021-04-07T01:41:21Z) - FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z) - DotFAN: A Domain-transferred Face Augmentation Network for Pose and
Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.