FLNeRF: 3D Facial Landmarks Estimation in Neural Radiance Fields
- URL: http://arxiv.org/abs/2211.11202v3
- Date: Fri, 16 Jun 2023 10:52:13 GMT
- Title: FLNeRF: 3D Facial Landmarks Estimation in Neural Radiance Fields
- Authors: Hao Zhang, Tianyuan Dai, Yu-Wing Tai, Chi-Keung Tang
- Abstract summary: This paper presents the first significant work on directly predicting 3D face landmarks on neural radiance fields (NeRFs). Our 3D coarse-to-fine Face Landmarks NeRF (FLNeRF) model efficiently samples from a given face NeRF with individual facial features for accurate landmark detection.
- Score: 64.17946473855382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the first significant work on directly predicting 3D face landmarks on neural radiance fields (NeRFs). Our 3D coarse-to-fine Face Landmarks NeRF (FLNeRF) model efficiently samples from a given face NeRF with individual facial features for accurate landmark detection. Expression augmentation is applied to facial features at a fine scale to simulate a large range of emotions, including exaggerated facial expressions (e.g., cheek blowing, wide mouth opening, eye blinking), for training FLNeRF. Qualitative and quantitative comparisons with related state-of-the-art 3D facial landmark estimation methods demonstrate the efficacy of FLNeRF, which contributes to downstream tasks such as high-quality face editing and swapping with direct control using our NeRF landmarks. Code and data will be available. Github link: https://github.com/ZHANG1023/FLNeRF.
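The coarse-to-fine sampling idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the `nerf_density` function is a hypothetical stand-in for querying a trained face NeRF, and the grid sizes and region width are arbitrary choices for the sketch.

```python
import numpy as np

# Hypothetical stand-in for a trained face NeRF; a real model would
# query an MLP for the density sigma at each 3D point.
def nerf_density(pts):
    # Toy density: a single Gaussian blob standing in for the head.
    center = np.array([0.1, -0.05, 0.2])
    return np.exp(-np.sum((pts - center) ** 2, axis=-1) / 0.02)

def sample_grid(lo, hi, n):
    # Regular n x n x n grid of 3D points inside the box [lo, hi].
    axes = [np.linspace(lo[i], hi[i], n) for i in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    return grid.reshape(-1, 3)

# Coarse pass: sparsely sample the whole volume to locate the face.
coarse_pts = sample_grid(np.array([-1.0] * 3), np.array([1.0] * 3), 16)
density = nerf_density(coarse_pts)
center = coarse_pts[np.argmax(density)]

# Fine pass: densely re-sample around the detected region; a landmark
# regressor would consume this fine feature volume.
half = 0.2
fine_pts = sample_grid(center - half, center + half, 32)
fine_density = nerf_density(fine_pts)
```

The two-pass structure is what makes the sampling efficient: most of the volume is empty, so the coarse pass spends few queries finding the occupied region, and the fine budget is concentrated where the landmarks actually are.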
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- 3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands [51.305421495638434]
Neural radiance fields (NeRFs) are promising 3D representations for scenes, objects, and humans.
This paper proposes a generalizable visibility-aware NeRF framework for interacting hands.
Experiments on the Interhand2.6M dataset demonstrate that our proposed VA-NeRF outperforms conventional NeRFs significantly.
arXiv Detail & Related papers (2024-01-02T00:42:06Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering [10.991274404360194]
The present paper proposes a novel approach, named NeRFlame, which combines the strengths of both NeRF and FLAME methods.
Our approach utilizes the FLAME mesh as a distinct density volume. Consequently, color values exist only in the vicinity of the FLAME mesh.
This FLAME framework is seamlessly incorporated into the NeRF architecture for predicting RGB colors, enabling our model to explicitly represent volume density and implicitly capture RGB colors.
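The "FLAME mesh as a density volume" idea can be illustrated with a small sketch: density is high only within a narrow band around the mesh surface, so opacity and color are confined to its vicinity. The random vertices, the band width `eps`, and the linear falloff below are all hypothetical choices for illustration, not NeRFlame's actual formulation.

```python
import numpy as np

# Hypothetical stand-in for posed FLAME mesh vertices; a real model
# would use the FLAME template deformed by shape/expression/pose.
mesh_vertices = np.random.default_rng(0).uniform(-0.5, 0.5, size=(100, 3))

def flame_density(pts, eps=0.05):
    # Density decays linearly with distance to the nearest mesh vertex
    # and is exactly zero outside a band of width eps around the mesh,
    # so color/opacity exist only near the FLAME surface.
    d = np.min(
        np.linalg.norm(pts[:, None, :] - mesh_vertices[None, :, :], axis=-1),
        axis=1,
    )
    return np.clip(1.0 - d / eps, 0.0, None)

on_surface = mesh_vertices[:1]          # a point lying on the mesh
far_away = np.array([[5.0, 5.0, 5.0]])  # a point far from the mesh
```

Tying density explicitly to the mesh is what gives the model direct geometric control: moving the FLAME mesh moves the rendered head, while the NeRF branch only has to model appearance.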
arXiv Detail & Related papers (2023-03-10T22:21:30Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis [62.297513028116576]
GeneFace is a general and high-fidelity NeRF-based talking face generation method.
A head-aware torso-NeRF is proposed to eliminate the head-torso separation problem.
arXiv Detail & Related papers (2023-01-31T05:56:06Z)
- HeadNeRF: A Real-time NeRF-based Parametric Head Model [39.240265611700735]
HeadNeRF is a novel NeRF-based parametric head model that integrates the neural radiance field to the parametric representation of the human head.
It can render high fidelity head images in real-time, and supports directly controlling the generated images' rendering pose and various semantic attributes.
arXiv Detail & Related papers (2021-12-10T16:10:13Z)
- MoFaNeRF: Morphable Facial Neural Radiance Field [12.443638713719357]
MoFaNeRF is a parametric model that maps free-view images into a vector space encoding facial shape, expression, and appearance.
By introducing identity-specific modulation and a texture encoder, our model synthesizes accurate photometric details.
Our model shows strong ability on multiple applications including image-based fitting, random generation, face rigging, face editing, and novel view synthesis.
arXiv Detail & Related papers (2021-12-04T11:25:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.