EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View
- URL: http://arxiv.org/abs/2410.12242v1
- Date: Wed, 16 Oct 2024 05:08:00 GMT
- Title: EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View
- Authors: Zhaorong Wang, Yoshihiro Kanamori, Yuki Endo
- Abstract summary: Generalizable neural radiance field (NeRF) enables neural-based digital human rendering without per-scene retraining.
We propose a generalizable human NeRF framework that achieves high-quality and real-time rendering with sparse input views.
- Score: 2.11923215233494
- License:
- Abstract: Generalizable neural radiance field (NeRF) enables neural-based digital human rendering without per-scene retraining. When combined with human prior knowledge, high-quality human rendering can be achieved even with sparse input views. However, the inference of these methods is still slow, as a large number of neural network queries on each ray are required to ensure the rendering quality. Moreover, occluded regions often suffer from artifacts, especially when the input views are sparse. To address these issues, we propose a generalizable human NeRF framework that achieves high-quality and real-time rendering with sparse input views by extensively leveraging human prior knowledge. We accelerate the rendering with a two-stage sampling reduction strategy: first constructing boundary meshes around the human geometry to reduce the number of ray samples for sampling guidance regression, and then volume rendering using fewer guided samples. To improve rendering quality, especially in occluded regions, we propose an occlusion-aware attention mechanism to extract occlusion information from the human priors, followed by an image space refinement network to improve rendering quality. Furthermore, for volume rendering, we adopt a signed ray distance function (SRDF) formulation, which allows us to propose an SRDF loss at every sample position to improve the rendering quality further. Our experiments demonstrate that our method outperforms the state-of-the-art methods in rendering quality and has a competitive rendering speed compared with speed-prioritized novel view synthesis methods.
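The two-stage sampling reduction is only outlined above, so the following is a minimal PyTorch-style sketch of the general idea, assuming hypothetical helpers `ray_mesh_interval` (intersection with the boundary meshes) and `guidance_net` (sampling guidance regression); it is an illustration under these assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-stage sampling reduction for a human NeRF.
# Hypothetical helpers: ray_mesh_interval(), guidance_net.
import torch

def two_stage_samples(rays_o, rays_d, ray_mesh_interval, guidance_net,
                      n_coarse=16, n_fine=8):
    """rays_o, rays_d: (N, 3) ray origins and directions."""
    # Stage 1: boundary meshes built around the human limit each ray to a
    # short interval [t_near, t_far]; rays that miss the human are flagged.
    t_near, t_far, hit = ray_mesh_interval(rays_o, rays_d)  # (N,), (N,), (N,) bool

    # A few coarse samples inside that interval feed a small network that
    # regresses a per-ray sampling guidance (here: a depth mean and spread).
    steps = torch.linspace(0.0, 1.0, n_coarse, device=rays_o.device)
    t_coarse = t_near[:, None] + (t_far - t_near)[:, None] * steps          # (N, n_coarse)
    x_coarse = rays_o[:, None] + rays_d[:, None] * t_coarse[..., None]      # (N, n_coarse, 3)
    mu, sigma = guidance_net(x_coarse, rays_d)                              # (N,), (N,)

    # Stage 2: far fewer samples, concentrated around the regressed depth,
    # are then used for the actual volume rendering.
    eps = torch.linspace(-1.0, 1.0, n_fine, device=rays_o.device)
    t_fine = (mu[:, None] + sigma[:, None] * eps).clamp(t_near[:, None], t_far[:, None])
    x_fine = rays_o[:, None] + rays_d[:, None] * t_fine[..., None]          # (N, n_fine, 3)
    return x_fine, t_fine, hit
```

In the paper, the subsequent volume rendering additionally uses the SRDF formulation with a per-sample SRDF loss; the Gaussian-style depth guidance above is only one plausible stand-in for the regressed sampling guidance.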
Related papers
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Volume Feature Rendering for Fast Neural Radiance Field Reconstruction [11.05302598034426]
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used either to represent a scene directly or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector to the final pixel color by a neural network.
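The reordering described above can be sketched as follows, assuming hypothetical feature_field and color_net modules and standard NeRF-style alpha compositing; the paper's actual pipeline may differ in details.

```python
# Sketch: composite per-sample features along the ray first, then decode the
# accumulated feature into a color with a single network call per ray.
# Hypothetical modules: feature_field (density + feature per point), color_net.
import torch

def volume_feature_render(x, deltas, feature_field, color_net):
    """x: (N, S, 3) sample positions, deltas: (N, S) distances between samples."""
    sigma, feat = feature_field(x)                         # (N, S), (N, S, F)
    alpha = 1.0 - torch.exp(-sigma * deltas)               # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alpha * trans                                # (N, S)

    # Standard NeRF would decode a color at every sample and composite colors;
    # here the features are composited instead, so the (potentially large)
    # decoder runs once per ray rather than once per sample.
    ray_feat = (weights[..., None] * feat).sum(dim=1)      # (N, F)
    rgb = color_net(ray_feat)                              # (N, 3)
    return rgb, weights
```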
arXiv Detail & Related papers (2023-05-29T06:58:27Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
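As a rough illustration of such a ray-entropy regularizer (assuming per-sample opacities from a standard NeRF forward pass; the paper's exact formulation, e.g. its masking of near-empty rays and its cross-ray consistency term, differs in details):

```python
import torch

def ray_entropy_loss(alpha, eps=1e-10):
    """alpha: (N, S) per-sample opacities along N rays.

    Normalizes each ray's opacities into a distribution and penalizes its
    Shannon entropy, encouraging mass to concentrate near a single surface.
    A sketch of the idea only, not InfoNeRF's full regularizer.
    """
    p = alpha / (alpha.sum(dim=-1, keepdim=True) + eps)    # (N, S)
    entropy = -(p * torch.log(p + eps)).sum(dim=-1)        # (N,)
    return entropy.mean()
```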
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering [139.159534903657]
We develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body details.
To better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach.
For achieving higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline.
arXiv Detail & Related papers (2021-12-08T14:42:10Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- Few-shot Neural Human Performance Rendering from Sparse RGBD Videos [40.20382131461408]
Recent neural rendering approaches for human activities achieve remarkable view rendering results, but still rely on dense input views for training.
We propose a few-shot neural rendering approach (FNHR) from only sparse RGBD inputs to generate photo-realistic free-viewpoint results.
arXiv Detail & Related papers (2021-07-14T06:28:16Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
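One way such a learned proposer could look, assuming it consumes the coarse pass's per-ray sample depths and weights; the inputs and architecture here are illustrative assumptions, not the paper's chosen variant:

```python
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    """Sketch of a learned sample proposer: maps a coarse pass's per-ray sample
    depths and weights to fine sample depths and importances, trainable
    end-to-end with the fine network."""
    def __init__(self, n_coarse=64, n_fine=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_coarse, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_fine),
        )
        self.n_fine = n_fine

    def forward(self, t_coarse, w_coarse, t_near, t_far):
        # t_coarse, w_coarse: (N, n_coarse); t_near, t_far: (N, 1)
        out = self.net(torch.cat([t_coarse, w_coarse], dim=-1))
        offsets, logits = out.split(self.n_fine, dim=-1)
        t_fine = t_near + (t_far - t_near) * torch.sigmoid(offsets)   # (N, n_fine)
        importance = torch.softmax(logits, dim=-1)                    # (N, n_fine)
        return t_fine, importance
```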
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domains that is based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
arXiv Detail & Related papers (2021-03-01T19:19:38Z)
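For reference, a generic LISTA-style unfolding of ISTA (the idea the last entry builds on) looks roughly like the sketch below; the paper's network is tailored to sub-Nyquist ultrasound acquisition, which this sketch does not model.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnfoldedISTA(nn.Module):
    """Generic LISTA-style unfolding of ISTA for sparse recovery y = A x + n.
    Each iteration becomes a layer with its own learned matrices and
    threshold, trained end-to-end."""
    def __init__(self, m, n, n_layers=10):
        super().__init__()
        self.W = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, m)) for _ in range(n_layers)])
        self.S = nn.ParameterList([nn.Parameter(0.01 * torch.randn(n, n)) for _ in range(n_layers)])
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, y):
        # y: (batch, m) measurements; returns a (batch, n) sparse estimate.
        x = torch.zeros(y.shape[0], self.W[0].shape[0], device=y.device)
        for W, S, th in zip(self.W, self.S, self.theta):
            x = soft_threshold(y @ W.T + x @ S.T, th)
        return x
```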
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.