FaR-GAN for One-Shot Face Reenactment
- URL: http://arxiv.org/abs/2005.06402v1
- Date: Wed, 13 May 2020 16:15:37 GMT
- Title: FaR-GAN for One-Shot Face Reenactment
- Authors: Hanxiang Hao and Sriram Baireddy and Amy R. Reibman and Edward J. Delp
- Abstract summary: We present a one-shot face reenactment model, FaR-GAN, that takes only one face image of any given source identity and a target expression as input.
The proposed method makes no assumptions about the source identity, facial expression, head pose, or even image background.
- Score: 20.894596219099164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animating a static face image with target facial expressions and movements is
important in the area of image editing and movie production. This face
reenactment process is challenging due to the complex geometry and movement of
human faces. Previous work usually requires a large set of images from the same
person to model the appearance. In this paper, we present a one-shot face
reenactment model, FaR-GAN, that takes only one face image of any given source
identity and a target expression as input, and then produces a face image of
the same source identity but with the target expression. The proposed method
makes no assumptions about the source identity, facial expression, head pose,
or even image background. We evaluate our method on the VoxCeleb1 dataset and show that it generates higher-quality face images than the compared methods.
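The abstract does not spell out the architecture, but the one-shot input/output contract it describes is concrete: one source face image plus one target-expression representation in, one reenacted image of the same identity out. A minimal PyTorch sketch of that contract, with hypothetical module names and placeholder layers (not the paper's actual network), might look like:

```python
import torch
import torch.nn as nn

class OneShotReenactor(nn.Module):
    """Hypothetical sketch of the one-shot contract FaR-GAN describes:
    one source image plus one target-expression map in, one reenacted
    image of the same identity out. Layers are placeholders, not the
    paper's architecture."""

    def __init__(self, ch: int = 64):
        super().__init__()
        # Encode the appearance of the single source frame (RGB).
        self.appearance_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # Encode the target expression (assumed here to arrive as a
        # 3-channel map, e.g. rendered landmarks from a driving frame).
        self.expression_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        # Fuse both feature maps and decode back to an RGB image.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())

    def forward(self, source, target_expr):
        fused = torch.cat([self.appearance_enc(source),
                           self.expression_enc(target_expr)], dim=1)
        return self.decoder(fused)

# One-shot usage: a single source frame and one target-expression map.
source = torch.randn(1, 3, 256, 256)
target_expr = torch.randn(1, 3, 256, 256)
reenacted = OneShotReenactor()(source, target_expr)  # (1, 3, 256, 256)
```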
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time.
At the core of our method is a hierarchical representation of head models that allows us to capture the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- DiffusionAct: Controllable Diffusion Autoencoder for One-shot Face Reenactment [34.821255203019554]
Video-driven neural face reenactment aims to synthesize realistic facial images that successfully preserve the identity and appearance of a source face.
Recent advances in Diffusion Probabilistic Models (DPMs) enable the generation of high-quality realistic images.
We present DiffusionAct, a novel method that leverages the photo-realistic image generation of diffusion models to perform neural face reenactment.
arXiv Detail & Related papers (2024-03-25T21:46:53Z)
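As a rough illustration only of how a diffusion model could be steered by a source identity and a driving pose, a deterministic DDIM-style denoising loop with an assumed conditioning interface (the schedule, step count, and `eps_model` signature are all hypothetical, not DiffusionAct's actual design) could look like:

```python
import torch

def ddim_reenact(eps_model, identity_code, pose_code,
                 steps=50, size=(1, 3, 64, 64)):
    """Denoise Gaussian noise into a reenacted face with a noise
    predictor conditioned on a source-identity code and a driving-pose
    code. The toy alpha-bar schedule and the conditioning signature are
    assumptions for illustration."""
    x = torch.randn(size)                          # start from pure noise
    a_bar = torch.linspace(0.9999, 0.0001, steps)  # toy schedule
    for i in range(steps - 1):
        a_t, a_next = a_bar[i], a_bar[i + 1]
        eps = eps_model(x, identity_code, pose_code, t=i)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # clean estimate
        x = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps  # DDIM step
    return x

# Usage with a stub predictor; a real one would be a trained U-Net.
out = ddim_reenact(lambda x, idc, pc, t: torch.zeros_like(x),
                   identity_code=None, pose_code=None)
```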
- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces [47.27033282706179]
We present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity.
Our method operates under the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring subject-specific fine-tuning.
We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2.
arXiv Detail & Related papers (2023-07-20T11:59:42Z)
- Face Transformer: Towards High Fidelity and Accurate Face Swapping [54.737909435708936]
Face swapping aims to generate swapped images that fuse the identity of source faces and the attributes of target faces.
This paper presents Face Transformer, a novel face swapping network that can accurately preserve source identities and target attributes simultaneously.
arXiv Detail & Related papers (2023-04-05T15:51:44Z)
- Finding Directions in GAN's Latent Space for Neural Face Reenactment [45.67273942952348]
This paper is on face/head reenactment where the goal is to transfer the facial pose (3D head orientation and expression) of a target face to a source face.
We take a different approach that bypasses the training of task-specific reenactment networks by instead using (fine-tuned) pre-trained GANs.
We show that by embedding real images in the GAN latent space, our method can be successfully used for the reenactment of real-world faces.
arXiv Detail & Related papers (2022-01-31T19:14:03Z)
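The mechanism this summary describes, inverting a real frame into the latent space of a pre-trained GAN and then moving it along learned pose/expression directions, reduces to a linear edit. A toy sketch in which `generator`, the inverted latent, and the direction vectors are all stand-ins rather than the paper's models:

```python
import torch

def reenact_in_latent_space(generator, w_source, directions, magnitudes):
    """Shift an inverted latent along learned pose/expression directions,
    then decode. `generator` and `directions` are stand-ins here."""
    w_edit = w_source.clone()
    for d, alpha in zip(directions, magnitudes):
        w_edit = w_edit + alpha * d       # linear edit in latent space
    return generator(w_edit)              # decode the reenacted frame

# Toy usage: a 512-dim latent from a GAN-inversion step and two
# directions (say, head yaw and a smile); the generator is a stub.
w = torch.randn(1, 512)
dirs = [torch.randn(1, 512) for _ in range(2)]
frame = reenact_in_latent_space(lambda w: w, w, dirs, [0.8, -0.3])
```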
- Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with that of a condition identity, provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct a 3D face from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Image-to-Video Generation via 3D Facial Dynamics [78.01476554323179]
We present FaceAnime, a versatile model for various video generation tasks from still images.
It suits a range of AR/VR and entertainment applications, such as face video generation and face video prediction.
arXiv Detail & Related papers (2021-05-31T02:30:11Z)
- FACEGAN: Facial Attribute Controllable rEenactment GAN [24.547319786399743]
Face reenactment is a popular animation method where the person's identity is taken from the source image and the facial motion from the driving image.
Recent works have demonstrated high quality results by combining the facial landmark based motion representations with the generative adversarial networks.
We propose a novel Facial Attribute Controllable rEenactment GAN (FACEGAN), which transfers the facial motion from the driving face via the Action Unit (AU) representation.
arXiv Detail & Related papers (2020-11-09T14:04:15Z)
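The distinguishing choice in FACEGAN's summary is carrying motion as an Action Unit (AU) activation vector rather than driving-frame pixels or raw landmarks, which keeps driving-identity geometry out of the generator. A hypothetical sketch of that interface, where the 17-AU dimensionality and the stub modules are assumptions, not taken from the paper:

```python
import torch

def reenact_with_aus(au_extractor, generator, source_img, driving_img):
    """Carry motion as an AU activation vector so no driving-identity
    pixels or geometry reach the generator. Stub interfaces throughout."""
    aus = au_extractor(driving_img)       # e.g. 17 AU intensities
    return generator(source_img, aus)     # identity comes from source only

# Toy usage with stubs standing in for trained networks.
src = torch.randn(1, 3, 256, 256)
drv = torch.randn(1, 3, 256, 256)
out = reenact_with_aus(au_extractor=lambda img: torch.rand(1, 17),
                       generator=lambda img, aus: img,  # stub: returns source
                       source_img=src, driving_img=drv)
```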
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.