HeadNeRF: A Real-time NeRF-based Parametric Head Model
- URL: http://arxiv.org/abs/2112.05637v2
- Date: Mon, 13 Dec 2021 03:05:45 GMT
- Title: HeadNeRF: A Real-time NeRF-based Parametric Head Model
- Authors: Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, Juyong Zhang
- Abstract summary: HeadNeRF is a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head.
It can render high-fidelity head images in real time, and supports direct control of the rendered images' pose and various semantic attributes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose HeadNeRF, a novel NeRF-based parametric head model
that integrates the neural radiance field into the parametric representation of
the human head. It can render high-fidelity head images in real time, and
supports direct control of the rendered images' pose and various semantic
attributes. Unlike existing parametric models, we use neural radiance fields as
a novel 3D proxy instead of the traditional 3D textured mesh, which enables
HeadNeRF to generate high-fidelity images. However, the computationally
expensive rendering process of the original NeRF hinders the construction of a
parametric NeRF model. To address this issue, we integrate 2D neural rendering
into NeRF's rendering process and design novel loss terms. As a result,
rendering is significantly accelerated: the time to render one frame drops from
5 s to 25 ms. The newly designed loss terms also improve rendering accuracy, so
fine-level details of the human head, such as the gaps between teeth, wrinkles,
and beards, can be represented and synthesized by HeadNeRF. Extensive
experimental results and several applications demonstrate its effectiveness.
We will release the code and trained model to the public.
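The acceleration strategy described in the abstract — NeRF-style volume rendering produces a low-resolution feature map instead of RGB, and a 2D network then upsamples it to the final image — can be sketched in a few lines. This is a minimal NumPy sketch: the feature dimension, sample counts, and the nearest-neighbor upsampler (standing in for HeadNeRF's learned 2D neural renderer) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def volume_render_features(feats, sigmas, deltas):
    """NeRF-style alpha compositing, but accumulating feature vectors
    instead of RGB.  feats: (R, S, F), sigmas/deltas: (R, S)."""
    alpha = 1.0 - np.exp(-sigmas * deltas)      # per-sample opacity
    trans = np.cumprod(np.concatenate(          # transmittance before each sample
        [np.ones_like(alpha[:, :1]), 1.0 - alpha[:, :-1] + 1e-10], axis=1), axis=1)
    weights = alpha * trans                     # (R, S)
    return (weights[..., None] * feats).sum(axis=1)  # (R, F) per-ray features

def upsample2x(feat_map):
    """Stand-in for the learned 2D neural renderer: nearest-neighbor
    2x upsampling of a (F, h, w) feature map."""
    return feat_map.repeat(2, axis=1).repeat(2, axis=2)

# Render a low-res 8x8 feature map with 16 samples per ray, then upsample 4x.
h = w = 8; S = 16; F = 4
rng = np.random.default_rng(0)
feats = rng.random((h * w, S, F))
sigmas = rng.random((h * w, S))
deltas = np.full((h * w, S), 0.1)
pixels = volume_render_features(feats, sigmas, deltas)   # (64, 4)
feat_map = pixels.T.reshape(F, h, w)
hi = upsample2x(upsample2x(feat_map))                    # (4, 32, 32)
```

The speedup comes from casting rays only at the low resolution: here the expensive volume rendering touches 8x8 rays while the output is 32x32.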
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- PyNeRF: Pyramidal Neural Radiance Fields [51.25406129834537]
We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions.
At render time, we simply use coarser grids to render samples that cover larger volumes.
Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster.
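The render-time rule above — coarser grids for samples covering larger volumes — can be sketched as a level-selection function. The base cell size, level count, and log-scale selection rule below are illustrative assumptions based only on this abstract.

```python
import numpy as np

def level_for_sample(radius, base_cell=1.0 / 512, num_levels=6):
    """Pick the grid level whose cell size best matches the sample's
    footprint: coarser levels (bigger cells) for samples covering
    larger volumes.  Level 0 is the finest grid."""
    level = int(np.floor(np.log2(max(radius, base_cell) / base_cell)))
    return min(max(level, 0), num_levels - 1)

# Samples far from the camera have larger footprints -> coarser heads.
levels = [level_for_sample(r) for r in (1 / 512, 1 / 128, 1.0)]  # [0, 2, 5]
```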
arXiv Detail & Related papers (2023-11-30T23:52:46Z)
- SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes [17.711755550841385]
SLAM-based methods can reconstruct 3D scene geometry progressively in real time but can not render photorealistic results.
While NeRF-based methods produce promising novel view synthesis results, their long offline optimization times and lack of geometric constraints pose challenges to efficiently handling online input.
We introduce SurfelNeRF, a variant of neural radiance field which employs a flexible and scalable neural surfel representation to store geometric attributes and extracted appearance features from input images.
arXiv Detail & Related papers (2023-04-18T13:11:49Z)
- NeRFlame: FLAME-based conditioning of NeRF for 3D face rendering [10.991274404360194]
The present paper proposes a novel approach, named NeRFlame, which combines the strengths of both NeRF and FLAME methods.
Our approach utilizes the FLAME mesh as a distinct density volume. Consequently, color values exist only in the vicinity of the FLAME mesh.
This FLAME framework is seamlessly incorporated into the NeRF architecture for predicting RGB colors, enabling our model to explicitly represent volume density and implicitly capture RGB colors.
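The idea of using the FLAME mesh as a density volume — density is nonzero only near the mesh surface — can be sketched as follows. The linear falloff, the vertex-distance approximation of surface distance, and the `eps` threshold are illustrative assumptions, not NeRFlame's actual formulation.

```python
import numpy as np

def density_near_mesh(points, mesh_verts, eps=0.05):
    """Toy version of the mesh-as-density idea: volume density is nonzero
    only within eps of the mesh (approximated here by distance to the
    nearest vertex), falling off linearly with distance."""
    # (N, V) pairwise distances from query points to mesh vertices
    d = np.linalg.norm(points[:, None, :] - mesh_verts[None, :, :], axis=-1)
    nearest = d.min(axis=1)                       # distance to closest vertex
    return np.maximum(0.0, 1.0 - nearest / eps)   # 1 on the mesh, 0 beyond eps

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # stand-in "mesh"
pts = np.array([[0.0, 0.0, 0.0],      # on the mesh   -> density 1
                [0.0, 0.0, 0.025],    # within eps    -> density 0.5
                [0.5, 0.5, 0.5]])     # far away      -> density 0
dens = density_near_mesh(pts, verts)
```

Because density vanishes away from the mesh, the radiance field only needs to predict colors in a thin shell around the FLAME surface, which is what makes the geometry explicit.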
arXiv Detail & Related papers (2023-03-10T22:21:30Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentation into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
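A minimal sketch of the worst-case-perturbation idea follows, applied to a single input vector rather than the three pipeline levels the paper targets. The PGD-style ascent loop and the finite-difference gradient are illustrative assumptions for the sake of a self-contained example.

```python
import numpy as np

def worst_case_perturbation(loss_fn, x, eps=0.01, step=0.005, iters=3):
    """Toy PGD-style search for a worst-case (loss-maximizing)
    perturbation of an input, bounded by eps in the infinity norm,
    using finite-difference gradients."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(x.size):                  # central finite differences
            e = np.zeros_like(x); e.flat[i] = 1e-4
            grad.flat[i] = (loss_fn(x + delta + e) - loss_fn(x + delta - e)) / 2e-4
        # ascend the loss, then project back into the eps-ball
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return delta

# The perturbation should increase a simple quadratic loss.
loss = lambda v: float(np.sum(v ** 2))
x = np.array([0.5, -0.5])
d = worst_case_perturbation(loss, x)
```

Training against such perturbations (instead of, or alongside, the clean inputs) is what turns the augmentation into a regularizer.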
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
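The super-sampling idea in the title — casting several sub-pixel rays per output pixel and averaging — can be sketched as follows, with an arbitrary radiance function standing in for rays cast through a trained NeRF; the grid pattern and factor are illustrative assumptions.

```python
import numpy as np

def supersample_pixel(radiance, px, py, factor=4):
    """Toy super-sampling: average a factor x factor grid of sub-pixel
    samples inside pixel (px, py).  `radiance` is any function mapping
    continuous image coordinates to a color."""
    offs = (np.arange(factor) + 0.5) / factor   # sub-pixel offsets in [0, 1)
    samples = [radiance(px + ox, py + oy) for ox in offs for oy in offs]
    return np.mean(samples, axis=0)

# A linear "radiance" makes the result easy to check: the average of the
# sub-pixel grid equals the value at the pixel center.
rad = lambda x, y: np.array([x, y, 0.0])
c = supersample_pixel(rad, 2.0, 3.0)            # ~ rad(2.5, 3.5)
```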
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- FastNeRF: High-Fidelity Neural Rendering at 200FPS [17.722927021159393]
We propose FastNeRF, a system capable of rendering high fidelity images at 200Hz on a high-end consumer GPU.
The proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF.
arXiv Detail & Related papers (2021-03-18T17:09:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.