Fine-grained Identity Preserving Landmark Synthesis for Face Reenactment
- URL: http://arxiv.org/abs/2110.04708v2
- Date: Tue, 12 Oct 2021 08:57:05 GMT
- Title: Fine-grained Identity Preserving Landmark Synthesis for Face Reenactment
- Authors: Haichao Zhang, Youcheng Ben, Weixi Zhang, Tao Chen, Gang Yu, Bin Fu
- Abstract summary: A landmark synthesis network is designed to generate fine-grained landmark faces with more detail.
The network refines the manipulated landmarks and generates a smooth, gradually changing face landmark sequence with good identity-preserving ability.
Experiments are conducted on our self-collected BeautySelfie and the public VoxCeleb1 datasets.
- Score: 30.062379710262068
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent face reenactment works are limited by the coarse reference landmarks,
leading to unsatisfactory identity preserving performance due to the
distribution gap between the manipulated landmarks and those sampled from a
real person. To address this issue, we propose a fine-grained
identity-preserving landmark-guided face reenactment approach. The proposed
method has two novelties. First, a landmark synthesis network is designed to
generate fine-grained landmark faces with more detail. The network refines the
manipulated landmarks and generates a smooth, gradually changing face landmark
sequence with good identity-preserving ability. Second, several novel loss
functions are designed, including a synthesized-face identity preserving loss,
a foreground/background mask loss, and a boundary loss, which together aim at
synthesizing clear, sharp, high-quality faces. Experiments are
conducted on our self-collected BeautySelfie and the public VoxCeleb1 datasets.
The presented qualitative and quantitative results show that our method
reenacts fine-grained, higher-quality faces with well-preserved identity and
appearance details, fewer artifacts, and clearer boundaries than
state-of-the-art works.
Code will be released for reproduction.
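The code was not yet released at the time of this version, so the PyTorch sketch below is only one plausible reading of the three losses: an identity term computed with a pretrained face-recognition embedder, a binary cross-entropy mask term, and a Sobel-edge boundary term. All interfaces here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def identity_preserving_loss(embedder, synthesized, reference):
    # Cosine distance between face-recognition embeddings of the synthesized
    # face and a real face of the same person (embedder is assumed pretrained).
    e_syn = F.normalize(embedder(synthesized), dim=1)
    e_ref = F.normalize(embedder(reference), dim=1)
    return (1.0 - (e_syn * e_ref).sum(dim=1)).mean()

def mask_loss(pred_mask, gt_mask):
    # Foreground/background mask loss: binary cross-entropy between the
    # predicted soft mask and the ground-truth segmentation, both in [0, 1].
    return F.binary_cross_entropy(pred_mask, gt_mask)

def boundary_loss(pred_mask, gt_mask):
    # Boundary loss: match the Sobel edge maps of the two masks so that the
    # face/background transition stays sharp; masks are (N, 1, H, W).
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]], device=pred_mask.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)

    def edges(m):
        gx = F.conv2d(m, sobel_x, padding=1)
        gy = F.conv2d(m, sobel_y, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    return F.l1_loss(edges(pred_mask), edges(gt_mask))
```

In training, such terms would typically enter a weighted sum alongside the usual adversarial and reconstruction losses; the weights are hyperparameters, not values taken from the paper.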
Related papers
- Reference-Guided Identity Preserving Face Restoration [54.10295747851343]
Preserving face identity is a critical yet persistent challenge in diffusion-based image restoration.
This paper introduces a novel approach that maximizes reference face utility for improved face restoration and identity preservation.
arXiv Detail & Related papers (2025-05-28T02:46:34Z)
- HonestFace: Towards Honest Face Restoration with One-Step Diffusion Model [36.36629793211904]
HonestFace is a novel approach designed to restore faces with a strong emphasis on such honesty.
A masked face alignment method is presented to enhance fine-grained details and textural authenticity.
Our approach surpasses existing state-of-the-art methods, achieving superior performance in both visual quality and quantitative assessments.
arXiv Detail & Related papers (2025-05-24T02:19:20Z)
- G²Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- HyperReenact: One-Shot Reenactment via Jointly Learning to Refine and Retarget Faces [47.27033282706179]
We present our method for neural face reenactment, called HyperReenact, that aims to generate realistic talking head images of a source identity.
Our method operates under the one-shot setting (i.e., using a single source frame) and allows for cross-subject reenactment, without requiring subject-specific fine-tuning.
We compare our method both quantitatively and qualitatively against several state-of-the-art techniques on the standard benchmarks of VoxCeleb1 and VoxCeleb2.
arXiv Detail & Related papers (2023-07-20T11:59:42Z)
- Attribute-preserving Face Dataset Anonymization via Latent Code Optimization [64.4569739006591]
We present a task-agnostic anonymization procedure that directly optimizes the images' latent representation in the latent space of a pre-trained GAN.
We demonstrate through a series of experiments that our method is capable of anonymizing the identity of the images whilst, crucially, better preserving the facial attributes.
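The summary above only names the idea; under assumed interfaces (a frozen pretrained GAN generator `G` mapping a latent code to an image, and a frozen face-recognition `embedder`), a minimal sketch of latent-code anonymization might look like this:

```python
import torch
import torch.nn.functional as F

def anonymize_latent(G, embedder, w_init, steps=200, lr=0.01, reg=0.1):
    # Illustrative only: push the face-recognition embedding of G(w) away
    # from the source identity while keeping w near its starting point, so
    # that non-identity attributes (pose, expression, lighting) survive.
    with torch.no_grad():
        e_src = F.normalize(embedder(G(w_init)), dim=1)
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        e = F.normalize(embedder(G(w)), dim=1)
        id_sim = (e * e_src).sum(dim=1).mean()       # cosine similarity to source ID
        loss = id_sim + reg * F.mse_loss(w, w_init)  # lower similarity = more anonymous
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```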
arXiv Detail & Related papers (2023-03-20T17:34:05Z)
- Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z)
- Everything's Talkin': Pareidolia Face Reenactment [119.49707201178633]
Pareidolia Face Reenactment is defined as animating a static illusory face to move in tandem with a human face in the video.
Owing to the large differences between pareidolia face reenactment and traditional human face reenactment, two challenges are introduced: shape variance and texture variance.
We propose a novel Parametric Unsupervised Reenactment Algorithm to tackle these two challenges.
arXiv Detail & Related papers (2021-04-07T11:19:13Z)
- LI-Net: Large-Pose Identity-Preserving Face Reenactment Network [14.472453602392182]
We propose a large-pose identity-preserving face reenactment network, LI-Net.
Specifically, the Landmark Transformer is adopted to adjust driving landmark images.
The Face Rotation Module and the Expression Enhancing Generator decouple the transformed landmark image into pose and expression features, and reenact those attributes separately to generate identity-preserving faces.
arXiv Detail & Related papers (2021-04-07T01:41:21Z)
- A recurrent cycle consistency loss for progressive face-to-face synthesis [5.71097144710995]
This paper addresses a major flaw of the cycle consistency loss when used to preserve the input appearance in the face-to-face synthesis domain.
We show that the images generated by a network trained using this loss conceal noise that hinders their use for further tasks.
We propose a "recurrent cycle consistency loss" which, for different sequences of target attributes, minimises the distance between the output images.
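Read literally, the loss says that two different attribute sequences ending at the same target should produce the same image. A minimal sketch of that reading, with a hypothetical generator `G(image, attrs)`:

```python
import torch.nn.functional as F

def recurrent_cycle_consistency_loss(G, x, seq_a, seq_b):
    # Hypothetical reading: feed the same input image through two different
    # sequences of target attributes that end at the same attribute vector,
    # then penalize any pixel difference between the two final outputs.
    out_a, out_b = x, x
    for attrs in seq_a:
        out_a = G(out_a, attrs)
    for attrs in seq_b:
        out_b = G(out_b, attrs)
    return F.l1_loss(out_a, out_b)
```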
arXiv Detail & Related papers (2020-04-14T16:53:41Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
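The summary does not specify DA-GAN's attention modules; the SAGAN-style block below is a common way to give a convolutional generator long-range dependencies and is offered only as a generic illustration:

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    # SAGAN-style self-attention over the spatial positions of a feature map;
    # not DA-GAN's exact module, which the summary above does not specify.
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual gate

    def forward(self, x):
        n, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (n, h*w, c//8)
        k = self.key(x).flatten(2)                    # (n, c//8, h*w)
        attn = torch.softmax(q @ k, dim=-1)           # (n, h*w, h*w)
        v = self.value(x).flatten(2)                  # (n, c, h*w)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return self.gamma * out + x
```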
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping [43.236261887752065]
We propose a novel two-stage framework, called FaceShifter, for high fidelity and occlusion aware face swapping.
In its first stage, our framework generates the swapped face with high fidelity by exploiting and integrating the target attributes thoroughly and adaptively.
To address challenging facial occlusions, we append a second stage consisting of a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net).
arXiv Detail & Related papers (2019-12-31T17:57:46Z)