Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation
- URL: http://arxiv.org/abs/2211.12674v1
- Date: Wed, 23 Nov 2022 03:02:34 GMT
- Title: Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation
- Authors: Yunfan Liu, Qi Li, Zhenan Sun, Tieniu Tan
- Abstract summary: One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
- Score: 100.60938767993088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One-shot face re-enactment is a challenging task due to the identity mismatch
between source and driving faces. Specifically, the suboptimally disentangled
identity information of driving subjects would inevitably interfere with the
re-enactment results and lead to face shape distortion. To solve this problem,
this paper proposes to use 3D Morphable Model (3DMM) for explicit facial
semantic decomposition and identity disentanglement. Instead of using 3D
coefficients alone for re-enactment control, we take advantage of the
generative ability of 3DMM to render textured face proxies. These proxies
contain abundant yet compact geometric and semantic information of human faces,
which enable us to compute the face motion field between source and driving
images by estimating the dense correspondence between them. In this way, we can
approximate re-enactment results by warping source images according to the
motion field, and a Generative Adversarial Network (GAN) is adopted to further
improve the visual quality of warping results. Extensive experiments on various
datasets demonstrate the advantages of the proposed method over existing
state-of-the-art benchmarks in both identity preservation and re-enactment
fulfillment.
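As a rough illustration of the coarse warping stage described in the abstract, the sketch below backward-warps a source image with a dense motion field using PyTorch's grid_sample. The function name, tensor shapes, and the normalized-offset parameterization of the field are assumptions made for illustration; the dense correspondence estimator and the GAN-based refinement network are omitted, and none of this is the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def warp_by_motion_field(source, motion_field):
    """Backward-warp a source face image with a dense motion field.

    source:       (B, C, H, W) source image tensor.
    motion_field: (B, H, W, 2) per-pixel sampling offsets in normalized
                  [-1, 1] coordinates (assumed parameterization; the paper
                  does not specify one).
    """
    B, _, H, W = source.shape
    # Identity sampling grid in normalized (x, y) coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, H), torch.linspace(-1.0, 1.0, W), indexing="ij"
    )
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2)
    # Each output pixel samples the source at identity + offset,
    # i.e. along the estimated dense correspondence.
    grid = identity.to(source) + motion_field
    return F.grid_sample(source, grid, align_corners=True)

# Usage: a zero field is the identity warp; a real field would come from
# dense correspondence between rendered 3DMM proxies of the source and
# driving faces, and a GAN (not shown) would refine the coarse result.
src = torch.rand(1, 3, 256, 256)
flow = torch.zeros(1, 256, 256, 2)
coarse = warp_by_motion_field(src, flow)
```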
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z)
- G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G²Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z)
- Learning Dense Correspondence for NeRF-Based Face Reenactment [24.072019889495966]
We propose a novel framework, which adopts tri-planes as fundamental NeRF representation and decomposes face tri-planes into three components: canonical tri-planes, identity deformations, and motion.
Our framework is the first method that achieves one-shot multi-view face reenactment without a 3D parametric model prior.
arXiv Detail & Related papers (2023-12-16T11:31:34Z)
- Non-Deterministic Face Mask Removal Based On 3D Priors [3.8502825594372703]
The proposed approach integrates a multi-task 3D face reconstruction module with a face inpainting module.
By gradually controlling the 3D shape parameters, our method generates high-quality dynamic inpainting results with different expressions and mouth movements.
arXiv Detail & Related papers (2022-02-20T16:27:44Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Everything's Talkin': Pareidolia Face Reenactment [119.49707201178633]
Pareidolia Face Reenactment is defined as animating a static illusory face to move in tandem with a human face in a video.
Owing to the large differences between pareidolia face reenactment and traditional human face reenactment, two challenges arise: shape variance and texture variance.
We propose a novel Parametric Unsupervised Reenactment Algorithm to tackle these two challenges.
arXiv Detail & Related papers (2021-04-07T11:19:13Z)
- 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z)
- 3D Face Anti-spoofing with Factorized Bilinear Coding [35.30886962572515]
We propose a novel anti-spoofing method from the perspective of fine-grained classification.
By extracting discriminative features from the RGB and YCbCr color spaces and fusing their complementary information, we develop a principled solution to 3D face spoofing detection.
arXiv Detail & Related papers (2020-05-12T03:09:20Z)