Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural
Human Rendering
- URL: http://arxiv.org/abs/2112.04312v1
- Date: Wed, 8 Dec 2021 14:42:10 GMT
- Title: Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural
Human Rendering
- Authors: Mingfei Chen, Jianfeng Zhang, Xiangyu Xu, Lijuan Liu, Jiashi Feng,
Shuicheng Yan
- Abstract summary: We develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body synthesis.
To better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach.
For achieving higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline.
- Score: 139.159534903657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we develop a generalizable and efficient Neural Radiance Field
(NeRF) pipeline for high-fidelity free-viewpoint human body synthesis under
settings with sparse camera views. Though existing NeRF-based methods can
synthesize rather realistic details for the human body, they tend to produce poor
results when the input has self-occlusion, especially for unseen humans under
sparse views. Moreover, these methods often require a large number of sampling
points for rendering, which leads to low efficiency and limits their real-world
applicability. To address these challenges, we propose a Geometry-guided
Progressive NeRF~(GP-NeRF). In particular, to better tackle self-occlusion, we
devise a geometry-guided multi-view feature integration approach that utilizes
the estimated geometry prior to integrate the incomplete information from input
views and construct a complete geometry volume for the target human body.
Meanwhile, for achieving higher rendering efficiency, we introduce a
geometry-guided progressive rendering pipeline, which leverages the geometric
feature volume and the predicted density values to progressively reduce the
number of sampling points and speed up the rendering process. Experiments on
the ZJU-MoCap and THUman datasets show that our method outperforms the
state of the art significantly across multiple generalization settings, while
rendering time is reduced by more than 70% through our efficient progressive
rendering pipeline.
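Only the abstract is available here, so the following is a minimal NumPy sketch of what the geometry-guided multi-view feature integration could look like: each voxel of a body-aligned volume gathers 2D features from every input view, weighted by a visibility estimate from the geometry prior so that occluded views contribute little. All function and parameter names (`project`, `integrate_features`, `vis_weights`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def project(points, K, E):
    """Project world-space points (N, 3) to pixel coordinates using
    intrinsics K (3, 3) and world-to-camera extrinsics E (3, 4)."""
    cam = points @ E[:, :3].T + E[:, 3]        # camera-space coordinates
    pix = cam @ K.T                            # perspective projection
    return pix[:, :2] / pix[:, 2:3]            # (N, 2) pixel coordinates

def integrate_features(voxels, feats, Ks, Es, vis_weights):
    """Fuse per-view 2D features into one feature per voxel.

    voxels:      (N, 3) voxel centers of the target-body geometry volume
    feats:       list of V per-view feature maps, each (H, W, C)
    Ks, Es:      per-view intrinsics and extrinsics
    vis_weights: (V, N) visibility of each voxel in each view, derived
                 from the estimated geometry prior (occluded -> ~0)
    """
    V, N = vis_weights.shape
    acc = np.zeros((N, feats[0].shape[-1]))
    for v in range(V):
        uv = project(voxels, Ks[v], Es[v])
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, feats[v].shape[1] - 1)
        w = np.clip(np.round(uv[:, 1]).astype(int), 0, feats[v].shape[0] - 1)
        acc += vis_weights[v][:, None] * feats[v][w, u]   # nearest-neighbor gather
    # visibility-weighted mean completes occluded regions from visible views
    return acc / (vis_weights.sum(axis=0)[:, None] + 1e-8)
```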
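Similarly, here is a sketch of the geometry-guided progressive rendering idea under stated assumptions: a cheap density estimate (e.g., interpolated from the geometric feature volume) prunes most samples along each ray, and only the survivors receive the expensive full network query before standard volume rendering. `coarse_density`, `fine_query`, and `keep_ratio` are hypothetical names.

```python
import numpy as np

def progressive_render(ts, coarse_density, fine_query, keep_ratio=0.25):
    """Render one ray in two passes: prune by predicted density, then
    run the full color+density query only on the surviving samples.

    ts:             (S,) depths of the initial uniform samples
    coarse_density: callable (S,) -> (S,) cheap density estimates,
                    e.g. read from the geometry feature volume
    fine_query:     callable (K,) -> ((K,), (K, 3)) full density and RGB
    """
    sigma = coarse_density(ts)
    k = max(1, int(len(ts) * keep_ratio))
    keep = np.sort(np.argsort(sigma)[-k:])     # top-k densities, depth-ordered
    t = ts[keep]
    sig, rgb = fine_query(t)
    # standard volume rendering over the pruned sample set
    delta = np.diff(t, append=t[-1] + 1e10)
    alpha = 1.0 - np.exp(-sig * delta)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1] + 1e-10)))
    return ((trans * alpha)[:, None] * rgb).sum(axis=0)   # pixel color
```

With `keep_ratio=0.25` most per-ray network queries are skipped, which is the general shape of the >70% time reduction the abstract reports; the paper's actual pruning schedule may differ.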
Related papers
- Few-Shot Multi-Human Neural Rendering Using Geometry Constraints [8.819403814092865]
We present a method for recovering the shape and radiance of a scene consisting of multiple people given only a few images.
Existing approaches using implicit neural representations have achieved impressive results that deliver accurate geometry and appearance.
We propose a neural implicit reconstruction method that addresses the inherent challenges of this task through a set of targeted contributions.
arXiv Detail & Related papers (2025-02-11T00:10:58Z)
- EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View [2.11923215233494]
Generalizable neural radiance field (NeRF) enables neural-based digital human rendering without per-scene retraining.
We propose a generalizable human NeRF framework that achieves high-quality and real-time rendering with sparse input views.
arXiv Detail & Related papers (2024-10-16T05:08:00Z)
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome the limitations of few-shot rendering by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels in pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis.
We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays (see the sampling sketch after this list).
Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering [3.8200916793910973]
Recent advances in Neural Radiance Fields (NeRF) have demonstrated significant potential for representing 3D scene appearances as implicit neural networks.
However, the lengthy training and rendering process hinders the widespread adoption of this promising technique for real-time rendering applications.
We present an effective adaptive multi-NeRF method designed to accelerate the neural rendering process for large scenes.
arXiv Detail & Related papers (2023-10-03T08:34:49Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy (see the attention sketch after this list).
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel level and the geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- NeuralFusion: Neural Volumetric Rendering under Human-object Interactions [46.70371238621842]
We propose a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors.
For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion.
We also introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both spatial and temporal domains.
arXiv Detail & Related papers (2022-02-25T17:10:07Z)
- Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors [67.88097893304274]
We propose a human volumetric capture method that combines temporal fusion and deep implicit functions.
We propose dynamic sliding fusion to fuse neighboring depth observations with topology consistency.
arXiv Detail & Related papers (2021-05-05T04:12:38Z)
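The VoxNeRF entry above mentions voxel-guided sampling without details; this is a generic sketch of the idea, assuming a hypothetical boolean `occupancy` grid from a coarse reconstruction: uniformly spaced ray samples are kept only where they land inside occupied voxels.

```python
import numpy as np

def voxel_guided_samples(origin, direction, occupancy, voxel_size,
                         n_samples=128, t_near=0.1, t_far=6.0):
    """Keep only the ray samples that fall inside occupied voxels, so
    compute is spent on the most relevant segments of each ray.

    occupancy: (X, Y, Z) boolean grid from a coarse scene reconstruction
    """
    ts = np.linspace(t_near, t_far, n_samples)
    pts = origin + ts[:, None] * direction            # (n_samples, 3) points
    idx = np.floor(pts / voxel_size).astype(int)      # voxel indices
    inside = np.all((idx >= 0) & (idx < occupancy.shape), axis=1)
    keep = np.zeros(n_samples, dtype=bool)
    keep[inside] = occupancy[tuple(idx[inside].T)]    # occupied-voxel test
    return ts[keep]
```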
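Likewise, GM-NeRF's geometry-guided attention is only named in its blurb; below is a minimal single-head cross-attention sketch of how per-view appearance features might be registered onto geometry-proxy vertices. The shapes and names are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def geometry_guided_attention(proxy_feats, view_feats):
    """Register multi-view appearance features onto a geometry proxy via
    single-head cross-attention (proxy vertices query the view features).

    proxy_feats: (P, C) per-vertex features of the geometry proxy (queries)
    view_feats:  (M, C) appearance features sampled from 2D views (keys/values)
    Returns:     (P, C) appearance code aligned to the proxy
    """
    scale = np.sqrt(proxy_feats.shape[-1])
    scores = proxy_feats @ view_feats.T / scale        # (P, M) attention logits
    scores -= scores.max(axis=-1, keepdims=True)       # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ view_feats                           # weighted appearance code
```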