Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural
Human Rendering
- URL: http://arxiv.org/abs/2112.04312v1
- Date: Wed, 8 Dec 2021 14:42:10 GMT
- Title: Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural
Human Rendering
- Authors: Mingfei Chen, Jianfeng Zhang, Xiangyu Xu, Lijuan Liu, Jiashi Feng,
Shuicheng Yan
- Abstract summary: We develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body synthesis.
To better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach.
For achieving higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline.
- Score: 139.159534903657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we develop a generalizable and efficient Neural Radiance Field
(NeRF) pipeline for high-fidelity free-viewpoint human body synthesis under
settings with sparse camera views. Though existing NeRF-based methods can
synthesize rather realistic details for the human body, they tend to produce poor
results when the input has self-occlusion, especially for unseen humans under
sparse views. Moreover, these methods often require a large number of sampling
points for rendering, which leads to low efficiency and limits their real-world
applicability. To address these challenges, we propose a Geometry-guided
Progressive NeRF~(GP-NeRF). In particular, to better tackle self-occlusion, we
devise a geometry-guided multi-view feature integration approach that utilizes
the estimated geometry prior to integrate the incomplete information from input
views and construct a complete geometry volume for the target human body.
Meanwhile, for achieving higher rendering efficiency, we introduce a
geometry-guided progressive rendering pipeline, which leverages the geometric
feature volume and the predicted density values to progressively reduce the
number of sampling points and speed up the rendering process. Experiments on
the ZJU-MoCap and THUman datasets show that our method outperforms the
state of the art significantly across multiple generalization settings, while
the time cost is reduced by over 70% by applying our efficient progressive
rendering pipeline.
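To make the progressive idea concrete, here is a minimal single-pass sketch, not the authors' implementation: it assumes a coarse density grid (`density_volume`) from the geometry stage, a radiance MLP (`nerf_mlp`), and a scene normalized to the [-1, 1]^3 cube; all names and the `keep_ratio` parameter are hypothetical, and the paper's pipeline reduces samples progressively over multiple stages rather than in one pruning step.

```python
import torch
import torch.nn.functional as F

def progressive_render(rays_o, rays_d, density_volume, nerf_mlp,
                       n_coarse=64, keep_ratio=0.25, near=0.5, far=2.5):
    """Sketch of geometry-guided sample pruning (hypothetical API).

    rays_o, rays_d:  (R, 3) ray origins and directions.
    density_volume:  (1, 1, D, H, W) coarse density grid from the geometry
                     stage, defined over the [-1, 1]^3 bounding volume.
    nerf_mlp:        callable mapping (N, 3) points to (N, 4) rgb + sigma.
    """
    n_rays = rays_o.shape[0]
    # 1) Uniformly sample candidate points along each ray.
    t = torch.linspace(near, far, n_coarse, device=rays_o.device)      # (S,)
    pts = rays_o[:, None, :] + t[None, :, None] * rays_d[:, None, :]   # (R, S, 3)

    # 2) Cheap trilinear lookup of coarse density at every candidate point.
    grid = pts.view(1, n_rays, n_coarse, 1, 3)
    coarse = F.grid_sample(density_volume, grid, align_corners=True)
    coarse = coarse.view(n_rays, n_coarse)                             # (R, S)

    # 3) Keep only the highest-density candidates; empty space never
    #    reaches the expensive radiance MLP.
    n_keep = max(1, int(n_coarse * keep_ratio))
    keep_idx = coarse.topk(n_keep, dim=1).indices                      # (R, K)
    kept_pts = torch.gather(pts, 1, keep_idx[..., None].expand(-1, -1, 3))

    # 4) Evaluate the full radiance field only on surviving samples.
    out = nerf_mlp(kept_pts.reshape(-1, 3)).view(n_rays, n_keep, 4)
    rgb, sigma = out[..., :3], out[..., 3]

    # 5) Standard alpha compositing over the kept samples, re-sorted
    #    by depth along each ray.
    kept_t, order = torch.gather(t.expand(n_rays, -1), 1, keep_idx).sort(dim=1)
    sigma = torch.gather(sigma, 1, order)
    rgb = torch.gather(rgb, 1, order[..., None].expand(-1, -1, 3))
    delta = torch.cat([kept_t[:, 1:] - kept_t[:, :-1],
                       torch.full_like(kept_t[:, :1], 1e10)], dim=1)
    alpha = 1.0 - torch.exp(-sigma.relu() * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]
    return ((alpha * trans)[..., None] * rgb).sum(dim=1)               # (R, 3)
```

The design point is that the geometry volume costs one interpolation per point to query, so discarding empty space there skips most MLP evaluations; this cheap-filter-then-expensive-evaluate pattern is where the reported >70% time saving plausibly comes from.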
Related papers
- EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View [2.11923215233494]
Generalizable neural radiance field (NeRF) enables neural-based digital human rendering without per-scene retraining.
We propose a generalizable human NeRF framework that achieves high-quality and real-time rendering with sparse input views.
arXiv Detail & Related papers (2024-10-16T05:08:00Z)
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome the limitations of sparse-view rendering by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting [58.41056963451056]
We propose a few-shot view synthesis framework based on 3D Gaussian Splatting.
This framework enables real-time and photo-realistic view synthesis with as few as three training views.
FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets.
arXiv Detail & Related papers (2023-12-01T09:30:02Z)
- Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering [3.8200916793910973]
Recent advances in Neural Radiance Fields (NeRF) have demonstrated significant potential for representing 3D scene appearances as implicit neural networks.
However, the lengthy training and rendering process hinders the widespread adoption of this promising technique for real-time rendering applications.
We present an effective adaptive multi-NeRF method designed to accelerate the neural rendering process for large scenes.
arXiv Detail & Related papers (2023-10-03T08:34:49Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields (GM-NeRF), to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy, as sketched below.
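As a rough illustration of such a geometry-guided attention step (a sketch under assumed shapes, not GM-NeRF's actual architecture; the class name and dimensions below are hypothetical), queries come from features on geometry-proxy points while keys and values come from pixel-aligned features gathered from the input views:

```python
import torch
import torch.nn as nn

class GeometryGuidedAttention(nn.Module):
    """Hypothetical sketch: geometry-proxy features attend over
    pixel-aligned features gathered from V input views."""

    def __init__(self, geo_dim=64, img_dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=geo_dim, num_heads=n_heads,
                                          kdim=img_dim, vdim=img_dim,
                                          batch_first=True)

    def forward(self, geo_feat, view_feat):
        # geo_feat:  (B, N, geo_dim)    features on N proxy points (queries)
        # view_feat: (B, N, V, img_dim) per-point features sampled from V views
        B, N, V, C = view_feat.shape
        q = geo_feat.reshape(B * N, 1, -1)     # one query per proxy point
        kv = view_feat.reshape(B * N, V, C)    # its V view observations
        out, _ = self.attn(q, kv, kv)          # fuse views per point
        return out.reshape(B, N, -1)           # (B, N, geo_dim)
```

Using one query per 3D proxy point lets the attention weights down-weight views in which that point is occluded or poorly observed, which matches the intuition the summary describes.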
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers unseen scenes at both the pixel scale and the geometry scale from only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- NeuralFusion: Neural Volumetric Rendering under Human-object Interactions [46.70371238621842]
We propose a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors.
For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion.
We also introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both spatial and temporal domains.
arXiv Detail & Related papers (2022-02-25T17:10:07Z)
- Few-shot Neural Human Performance Rendering from Sparse RGBD Videos [40.20382131461408]
Recent neural rendering approaches for human activities achieve remarkable view rendering results, but still rely on dense input views for training.
We propose a few-shot neural rendering approach (FNHR) that generates photo-realistic free-viewpoint results from only sparse RGBD inputs.
arXiv Detail & Related papers (2021-07-14T06:28:16Z)
- Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors [67.88097893304274]
We propose a human volumetric capture method that combines temporal fusion and deep implicit functions.
We propose dynamic sliding fusion to fuse neighboring depth observations with topology consistency.
arXiv Detail & Related papers (2021-05-05T04:12:38Z)