One-Shot Face Reenactment on Megapixels
- URL: http://arxiv.org/abs/2205.13368v1
- Date: Thu, 26 May 2022 13:52:04 GMT
- Title: One-Shot Face Reenactment on Megapixels
- Authors: Wonjun Kang, Geonsu Lee, Hyung Il Koo, Nam Ik Cho
- Abstract summary: We present a one-shot and high-resolution face reenactment method called MegaFR.
We leverage StyleGAN by using 3DMM-based rendering images and overcome the lack of high-quality video datasets.
We apply MegaFR to various applications such as face frontalization, eye in-painting, and talking head generation.
- Score: 10.93616643559957
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of face reenactment is to transfer a target expression and head pose
to a source face while preserving the source identity. With the popularity of
face-related applications, there has been much research on this topic. However,
the results of existing methods are still limited to low resolutions and lack
photorealism. In this work, we present a one-shot, high-resolution face
reenactment method called MegaFR. Specifically, we leverage StyleGAN by using
3DMM-based rendering images and overcome the lack of high-quality video
datasets by designing a loss function that works without high-quality videos.
Also, we apply iterative refinement to deal with extreme poses and/or
expressions. Since the proposed method controls source images through 3DMM
parameters, we can explicitly manipulate source images. We apply MegaFR to
various applications such as face frontalization, eye in-painting, and talking
head generation. Experimental results show that our method successfully
disentangles identity from expression and head pose, and outperforms
conventional methods.
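The abstract describes the pipeline only at a high level; below is a minimal sketch of what 3DMM-driven control with iterative latent refinement could look like. The generator G, encoder E, renderer render_3dmm, and the update rule are all assumptions for illustration, not the authors' released code.

```python
import torch

def reenact(G, E, render_3dmm, w_src, params_tgt, n_iters=3):
    """Hypothetical sketch: drive a source latent toward target 3DMM parameters.

    G           -- pretrained StyleGAN generator, latent -> image (assumed)
    E           -- encoder, (image, rendering) -> latent offset (assumed)
    render_3dmm -- renders a face image from 3DMM parameters (assumed)
    w_src       -- latent code of the GAN-inverted source image
    params_tgt  -- target 3DMM expression and head-pose parameters
    """
    driving = render_3dmm(params_tgt)  # driving signal rendered from 3DMM params
    w = w_src
    for _ in range(n_iters):           # iterative refinement: re-encode the
        img = G(w)                     # current output and nudge the latent,
        w = w + E(img, driving)        # which helps with extreme poses/expressions
    return G(w)
```

Because the driving signal comes from explicit 3DMM parameters, applications such as face frontalization reduce to editing those parameters (e.g., zeroing the pose) before rendering.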
Related papers
- 3DFlowRenderer: One-shot Face Re-enactment via Dense 3D Facial Flow Estimation [2.048226951354646]
We propose a novel warping technique that integrates the advantages of both 2D and 3D methods to achieve robust face re-enactment.
We generate dense 3D facial flow fields in feature space to warp an input image based on target expressions without depth information.
This enables explicit 3D geometric control for re-enacting misaligned source and target faces.
arXiv Detail & Related papers (2024-04-23T01:51:58Z)
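As a rough illustration of the flow-based warping idea in the entry above (simplified to 2D; the paper's dense 3D facial flow estimation is not reproduced here), the following is a minimal sketch of warping a feature map with a per-pixel flow field using PyTorch's grid_sample; all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """Warp a feature map with a dense 2D flow field (illustrative sketch).

    feat -- (N, C, H, W) source feature map
    flow -- (N, 2, H, W) per-pixel (dx, dy) offsets in pixels
    """
    n, _, h, w = feat.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=feat.device),
        torch.linspace(-1, 1, w, device=feat.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and displace the grid.
    offset = torch.stack(
        (flow[:, 0] * 2 / (w - 1), flow[:, 1] * 2 / (h - 1)), dim=-1
    )
    return F.grid_sample(feat, base + offset, align_corners=True)

# Sanity check: zero flow leaves the features unchanged.
x = torch.randn(1, 64, 32, 32)
assert torch.allclose(warp_features(x, torch.zeros(1, 2, 32, 32)), x, atol=1e-5)
```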
- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces [47.27033282706179]
We present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity.
Our method operates under the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring subject-specific fine-tuning.
We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2.
arXiv Detail & Related papers (2023-07-20T11:59:42Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- Video2StyleGAN: Encoding Video in Latent Space for Manipulation [63.03250800510085]
We propose a novel network to encode face videos into the latent space of StyleGAN for semantic face video manipulation.
Our approach significantly outperforms existing single-image methods while achieving real-time (66 fps) speed.
arXiv Detail & Related papers (2022-06-27T06:48:15Z)
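To illustrate the latent-encoding idea in the Video2StyleGAN entry above, here is a schematic sketch: each frame is inverted with a feed-forward encoder, then the same semantic latent direction is applied to every frame. The encoder E, generator G, and edit direction are stand-ins for illustration, not the paper's architecture.

```python
import torch

@torch.no_grad()
def edit_video(frames, E, G, direction, alpha=1.0):
    """Hypothetical sketch: per-frame StyleGAN inversion plus a shared edit.

    E         -- feed-forward inversion encoder, image -> latent (assumed)
    G         -- pretrained StyleGAN generator, latent -> image (assumed)
    direction -- a semantic latent direction (e.g., smile or pose), assumed given
    """
    latents = [E(f.unsqueeze(0)) for f in frames]       # one latent per frame
    return [G(w + alpha * direction) for w in latents]  # same edit on all frames
```

Feed-forward encoding (rather than per-frame latent optimization) is one plausible way such a pipeline could reach the real-time rates reported above.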
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms prior art by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- One Shot Face Swapping on Megapixels [65.47443090320955]
This paper proposes the first megapixel-level method for one-shot face swapping (MegaFS for short).
Complete face representation, stable training, and limited memory usage are the three novel contributions underlying the success of our method.
arXiv Detail & Related papers (2021-05-11T10:41:47Z)
- HeadGAN: One-shot Neural Head Synthesis and Editing [70.30831163311296]
HeadGAN is a system that conditions head synthesis on 3D face representations adapted to the facial geometry of any reference image.
The 3D face representation enables HeadGAN to be further used as an efficient method for compression and reconstruction and a tool for expression and pose editing.
arXiv Detail & Related papers (2020-12-15T12:51:32Z)
- Head2Head++: Deep Facial Attributes Re-Targeting [6.230979482947681]
We leverage the 3D geometry of faces and Generative Adversarial Networks (GANs) to design a novel deep learning architecture for the task of facial and head reenactment.
We manage to capture the complex non-rigid facial motion from the driving monocular performances and synthesise temporally consistent videos.
Our system performs end-to-end reenactment at near real-time speed (18 fps).
arXiv Detail & Related papers (2020-06-17T23:38:37Z)
- FaR-GAN for One-Shot Face Reenactment [20.894596219099164]
We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
arXiv Detail & Related papers (2020-05-13T16:15:37Z)