One-Shot Identity-Preserving Portrait Reenactment
- URL: http://arxiv.org/abs/2004.12452v1
- Date: Sun, 26 Apr 2020 18:30:33 GMT
- Title: One-Shot Identity-Preserving Portrait Reenactment
- Authors: Sitao Xiang, Yuming Gu, Pengda Xiang, Mingming He, Koki Nagano, Haiwei Chen, Hao Li
- Abstract summary: We present a deep learning-based framework for portrait reenactment from a single picture of a target (one-shot) and a video of a driving subject.
We aim to address identity preservation in cross-subject portrait reenactment from a single picture.
- Score: 16.889479797252783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deep learning-based framework for portrait reenactment from a
single picture of a target (one-shot) and a video of a driving subject.
Existing facial reenactment methods suffer from identity mismatch and produce
inconsistent identities when a target and a driving subject are different
(cross-subject), especially in one-shot settings. In this work, we aim to
address identity preservation in cross-subject portrait reenactment from a
single picture. We introduce a novel technique that can disentangle identity
from expressions and poses, allowing identity-preserving portrait reenactment
even when the driver's identity is very different from that of the target. This
is achieved by a novel landmark disentanglement network (LD-Net), which
predicts personalized facial landmarks that combine the identity of the target
with expressions and poses from a different subject. To handle portrait
reenactment from unseen subjects, we also introduce a feature dictionary-based
generative adversarial network (FD-GAN), which locally translates 2D landmarks
into a personalized portrait, enabling one-shot portrait reenactment under
large pose and expression variations. We validate the effectiveness of our
identity-disentangling capabilities via an extensive ablation study, showing that our
method produces consistent identities for cross-subject portrait reenactment.
Our comprehensive experiments show that our method significantly outperforms
the state-of-the-art single-image facial reenactment methods. We will release
our code and models for academic use.
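
To make the first stage concrete, here is a minimal PyTorch sketch of the landmark-disentanglement idea behind LD-Net: two encoders factor a landmark set into an identity code and an expression/pose code, and a decoder recombines the target's identity code with the driver's motion code. The 68-landmark count, layer sizes, and all module names are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch of the LD-Net idea (assumed architecture, for illustration).
import torch
import torch.nn as nn

NUM_LANDMARKS = 68  # assumed 2D landmark count

class LDNetSketch(nn.Module):
    """Predicts personalized landmarks: identity from the target subject,
    expression and pose from the driving subject."""
    def __init__(self, id_dim=128, motion_dim=128):
        super().__init__()
        in_dim = NUM_LANDMARKS * 2
        # Two encoders split a landmark set into disentangled codes.
        self.id_encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, id_dim))
        self.motion_encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, motion_dim))
        # Decoder recombines the two codes into a new landmark set.
        self.decoder = nn.Sequential(
            nn.Linear(id_dim + motion_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim))

    def forward(self, target_landmarks, driver_landmarks):
        # Both inputs: (B, NUM_LANDMARKS, 2) coordinates.
        b = target_landmarks.shape[0]
        id_code = self.id_encoder(target_landmarks.reshape(b, -1))
        motion_code = self.motion_encoder(driver_landmarks.reshape(b, -1))
        out = self.decoder(torch.cat([id_code, motion_code], dim=1))
        return out.reshape(b, NUM_LANDMARKS, 2)
```

The disentanglement itself would come from the training objectives (e.g., reconstruction when target and driver are the same subject, plus adversarial or consistency terms when they differ); those losses are omitted here.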
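The second stage can be read as a per-subject dictionary of local appearance features, extracted once from the single target picture and then queried by the landmark layout of each output frame. The sketch below realizes that reading with a simple attention-style lookup; the convolutional stacks and the attention formulation are assumptions for illustration, not FD-GAN's published design, and the adversarial discriminator and losses are omitted.

```python
# Minimal sketch of the feature-dictionary idea behind FD-GAN (assumed design).
import torch
import torch.nn as nn

NUM_LANDMARKS = 68  # must match the LD-Net sketch above

class FDGANSketch(nn.Module):
    """Renders a portrait from personalized landmark heatmaps by querying
    local appearance features taken from the single target image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        # Target image -> spatial feature map (the per-subject "dictionary").
        self.appearance_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU())
        # Landmark heatmaps -> query features at the same resolution.
        self.landmark_encoder = nn.Sequential(
            nn.Conv2d(NUM_LANDMARKS, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU())
        # Retrieved features -> RGB portrait.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_dim, 3, 4, stride=2, padding=1), nn.Tanh())

    def forward(self, target_image, landmark_heatmaps):
        dict_feats = self.appearance_encoder(target_image)   # (B, C, H/4, W/4)
        queries = self.landmark_encoder(landmark_heatmaps)   # (B, C, H/4, W/4)
        b, c, h, w = dict_feats.shape
        k = dict_feats.reshape(b, c, h * w)                  # dictionary entries
        q = queries.reshape(b, c, h * w)
        # Each query location retrieves a soft mixture of dictionary entries,
        # i.e., local appearance features of the target.
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)
        retrieved = (attn @ k.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(retrieved)
```

Under this reading, every output location is assembled from the target's own features, so there is no direct path for the driver's appearance to leak into the result, which matches the paper's emphasis on one-shot identity preservation.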
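At inference time the two stages compose into a simple per-frame loop. A hypothetical driver is sketched below: detect_landmarks stands in for any off-the-shelf 2D facial landmark detector, and rasterize_heatmaps is an assumed Gaussian-heatmap encoding of landmark coordinates; neither is part of the paper's described interface.

```python
import torch

def rasterize_heatmaps(landmarks, size=256, sigma=2.0):
    """Render (B, N, 2) pixel coordinates into (B, N, size, size) Gaussian
    heatmaps -- a common, assumed landmark representation."""
    b, n, _ = landmarks.shape
    ys = torch.arange(size, dtype=torch.float32).view(1, 1, size, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, 1, 1, size)
    lx = landmarks[..., 0].view(b, n, 1, 1)
    ly = landmarks[..., 1].view(b, n, 1, 1)
    d2 = (xs - lx) ** 2 + (ys - ly) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def reenact(target_image, driving_frames, detect_landmarks, ld_net, fd_gan):
    """One-shot reenactment: identity landmarks come from the single target
    picture; expression/pose landmarks come from each driving frame."""
    tgt_lm = detect_landmarks(target_image)        # (1, N, 2), computed once
    frames_out = []
    for frame in driving_frames:
        drv_lm = detect_landmarks(frame)           # (1, N, 2)
        personalized = ld_net(tgt_lm, drv_lm)      # target identity + driver motion
        heatmaps = rasterize_heatmaps(personalized)
        frames_out.append(fd_gan(target_image, heatmaps))
    return frames_out
```

Note that the renderer never sees the driver's raw landmarks: only the personalized landmarks from the first stage reach it, which is where the cross-subject identity preservation comes from in this reading.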
Related papers
- AniFaceDiff: High-Fidelity Face Reenactment via Facial Parametric Conditioned Diffusion Models [33.39336530229545]
Face reenactment refers to the process of transferring the pose and facial expressions from a reference (driving) video onto a static facial (source) image.
Previous research in this domain has made significant progress by training controllable deep generative models to generate faces.
This paper proposes a new method based on Stable Diffusion, called AniFaceDiff, incorporating a new conditioning module for high-fidelity face reenactment.
arXiv Detail & Related papers (2024-06-19T07:08:48Z)
- PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization [92.90392834835751]
PortraitBooth is designed for high efficiency, robust identity preservation, and expression-editable text-to-image generation.
PortraitBooth eliminates computational overhead and mitigates identity distortion.
It incorporates emotion-aware cross-attention control for diverse facial expressions in generated images.
arXiv Detail & Related papers (2023-12-11T13:03:29Z)
- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces [47.27033282706179]
We present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity.
Our method operates under the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring subject-specific fine-tuning.
We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2.
arXiv Detail & Related papers (2023-07-20T11:59:42Z)
- Facial Reenactment Through a Personalized Generator [47.02774886256621]
We propose a novel method for facial reenactment using a personalized generator.
We locate the desired frames in the latent space of the personalized generator using carefully designed latent optimization.
We show that since our reenactment takes place in a semantic latent space, it can be semantically edited and stylized in post-processing.
arXiv Detail & Related papers (2023-07-12T17:09:18Z)
- StyleMask: Disentangling the Style Space of StyleGAN2 for Neural Face Reenactment [47.27033282706179]
We propose a framework that learns to disentangle the identity characteristics of the face from its pose.
We show that the proposed method produces higher quality results even on extreme pose variations.
arXiv Detail & Related papers (2022-09-27T13:22:35Z)
- Graph-based Generative Face Anonymisation with Pose Preservation [49.18049578591058]
AnonyGAN is a GAN-based solution for face anonymisation.
It replaces the visual information corresponding to a source identity with a conditioning identity provided as any single image.
arXiv Detail & Related papers (2021-12-10T12:58:17Z)
- LI-Net: Large-Pose Identity-Preserving Face Reenactment Network [14.472453602392182]
We propose a large-pose identity-preserving face reenactment network, LI-Net.
Specifically, the Landmark Transformer is adopted to adjust driving landmark images.
The Face Rotation Module and the Expression Enhancing Generator decouple the transformed landmark image into pose and expression features, and reenact those attributes separately to generate identity-preserving faces.
arXiv Detail & Related papers (2021-04-07T01:41:21Z)
- ActGAN: Flexible and Efficient One-shot Face Reenactment [1.8431600219151503]
ActGAN is a novel end-to-end generative adversarial network (GAN) for one-shot face reenactment.
We introduce a "many-to-many" approach, which allows arbitrary persons as both source and target without additional retraining.
We also introduce a solution that preserves identity between the synthesized face and the target person by adopting a state-of-the-art approach from the deep face recognition domain.
arXiv Detail & Related papers (2020-03-30T22:03:16Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.