Point-NeRF: Point-based Neural Radiance Fields
- URL: http://arxiv.org/abs/2201.08845v1
- Date: Fri, 21 Jan 2022 18:59:20 GMT
- Title: Point-NeRF: Point-based Neural Radiance Fields
- Authors: Qiangeng Xu and Zexiang Xu and Julien Philip and Sai Bi and Zhixin Shu and Kalyan Sunkavalli and Ulrich Neumann
- Abstract summary: Point-NeRF uses neural 3D point clouds, with associated neural features, to model a radiance field.
It can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline.
Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers via a novel pruning and growing mechanism.
- Score: 39.38262052015925
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Volumetric neural rendering methods like NeRF generate high-quality view
synthesis results, but they are optimized per scene, leading to prohibitive
reconstruction times. On the other hand, deep multi-view stereo methods can
quickly reconstruct scene geometry via direct network inference. Point-NeRF
combines the advantages of these two approaches by using neural 3D point
clouds, with associated neural features, to model a radiance field. Point-NeRF
can be rendered efficiently by aggregating neural point features near scene
surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can
be initialized via direct inference of a pre-trained deep network to produce a
neural point cloud; this point cloud can be finetuned to surpass the visual
quality of NeRF with 30X faster training time. Point-NeRF can be combined with
other 3D reconstruction methods and handles the errors and outliers in such
methods via a novel pruning and growing mechanism.
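The aggregation step the abstract describes can be pictured with a short sketch. Below is a minimal, simplified rendering loop assuming inverse-distance blending of the K nearest neural points and a user-supplied decoding MLP; the names (`aggregate_point_features`, `radius`, `decode_mlp`) are illustrative, not the paper's actual API.

```python
import torch

def aggregate_point_features(x, point_pos, point_feat, k=8, radius=0.1):
    """Blend features of the K nearest neural points around shading
    location x with inverse-distance weights. Hypothetical simplification:
    the paper also runs per-point MLPs on relative positions."""
    d = torch.norm(point_pos - x, dim=-1)             # (N,) point-to-x distances
    d, idx = torch.topk(d, k, largest=False)          # K nearest neighbors
    w = (1.0 / (d + 1e-8)) * (d < radius)             # inverse-distance, masked
    w = w / w.sum().clamp(min=1e-8)                   # normalize weights
    return (w[:, None] * point_feat[idx]).sum(dim=0)  # (F,) blended feature

def render_ray(origin, direction, point_pos, point_feat, decode_mlp,
               n_samples=64, t_near=0.05, t_far=2.0):
    """Ray marching: at each sample, aggregate nearby neural point
    features, decode them to density and color, and alpha-composite."""
    ts = torch.linspace(t_near, t_far, n_samples)
    delta = (t_far - t_near) / n_samples
    rgb_out, trans = torch.zeros(3), 1.0
    for t in ts:
        feat = aggregate_point_features(origin + t * direction,
                                        point_pos, point_feat)
        sigma, rgb = decode_mlp(feat)                 # assumed decoder API
        alpha = 1.0 - torch.exp(-sigma * delta)
        rgb_out = rgb_out + trans * alpha * rgb
        trans = trans * (1.0 - alpha)
    return rgb_out
```

In the paper's full method, the aggregation also involves per-point processing of relative positions, and learned per-point confidences drive the pruning-and-growing mechanism mentioned above.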
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
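Read literally, that summary suggests something like the following sketch: reflect the view direction at a surface point and composite feature vectors, rather than radiance, along the reflected ray. This is a hypothetical reading of the abstract; `feature_field` and `density_field` are assumed callables, not the paper's actual interfaces.

```python
import torch

def reflect(d, n):
    """Mirror a (unit) view direction d about surface normal n."""
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def cast_feature_ray(x, d_view, normal, feature_field, density_field,
                     n_samples=32, t_far=1.0):
    """March a reflected ray from surface point x and alpha-composite
    per-sample FEATURE vectors instead of colors; the accumulated
    feature would later be decoded to reflected radiance."""
    d_refl = reflect(d_view, normal)
    ts = torch.linspace(1e-2, t_far, n_samples)
    feat_out, trans = 0.0, 1.0
    for t in ts:
        p = x + t * d_refl
        alpha = 1.0 - torch.exp(-density_field(p) * (t_far / n_samples))
        feat_out = feat_out + trans * alpha * feature_field(p)
        trans = trans * (1.0 - alpha)
    return feat_out
```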
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Improving Neural Radiance Field using Near-Surface Sampling with Point Cloud Generation [6.506009070668646]
This paper proposes a near-surface sampling framework to improve the rendering quality of NeRF.
To obtain depth information on a novel view, the paper proposes a 3D point cloud generation method and a simple refining method for projected depth from a point cloud.
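A minimal sketch of such depth-guided sampling, assuming a per-ray depth estimate already projected (and refined) from the point cloud; the band width `eps` and the sample counts are made-up parameters.

```python
import torch

def near_surface_samples(depth, eps=0.05, n_near=48, n_coarse=16, far=4.0):
    """Concentrate ray samples in a narrow band around the depth estimate,
    keeping a few coarse uniform samples as a fallback for depth errors."""
    band = torch.linspace(depth - eps, depth + eps, n_near)   # near-surface band
    coarse = torch.linspace(0.0, far, n_coarse)               # safety net
    ts, _ = torch.sort(torch.cat([band, coarse]))
    return ts.clamp(min=0.0)                                  # sample distances
```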
arXiv Detail & Related papers (2023-10-06T10:55:34Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes [17.711755550841385]
SLAM-based methods can reconstruct 3D scene geometry progressively in real time but cannot render photorealistic results.
While NeRF-based methods produce promising novel view synthesis results, their long offline optimization time and lack of geometric constraints pose challenges to handling online input efficiently.
We introduce SurfelNeRF, a variant of neural radiance field which employs a flexible and scalable neural surfel representation to store geometric attributes and extracted appearance features from input images.
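One way to picture such a representation is a record per surfel that pairs classical surfel geometry with a learned appearance feature; the field layout below is an assumption for illustration, not the paper's actual data structure.

```python
from dataclasses import dataclass
import torch

@dataclass
class NeuralSurfel:
    position: torch.Tensor  # (3,) surfel center in world space
    normal: torch.Tensor    # (3,) oriented surface normal
    radius: float           # disk extent used for rasterization/fusion
    feature: torch.Tensor   # (F,) appearance feature extracted from images

# An online pipeline would fuse a new NeuralSurfel per observed surface
# patch as frames arrive, updating features for surfels seen again.
```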
arXiv Detail & Related papers (2023-04-18T13:11:49Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentation into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
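The "worst-case perturbations" idea can be sketched as projected gradient ascent on the rendering loss. The snippet below shows a single level (input coordinates) with hypothetical names, whereas the paper blends perturbations at three distinct pipeline levels.

```python
import torch

def worst_case_perturbation(coords, render_loss_fn, eps=1e-2, steps=3):
    """Find a small coordinate perturbation that MAXIMIZES the rendering
    loss via projected gradient ascent (only one of the three levels the
    paper perturbs; names here are hypothetical)."""
    delta = torch.zeros_like(coords, requires_grad=True)
    for _ in range(steps):
        loss = render_loss_fn(coords + delta)    # forward through NeRF
        loss.backward()                          # gradient w.r.t. delta
        with torch.no_grad():
            delta += eps * delta.grad.sign()     # ascent step
            delta.clamp_(-eps, eps)              # project back into the ball
            delta.grad.zero_()
    return delta.detach()

# Training then minimizes render_loss_fn(coords + delta), i.e. the model
# is optimized against its own worst-case augmentation.
```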
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes the potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
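In the spirit of that description, a ray-entropy regularizer can be sketched as follows: treat the per-sample opacities along a ray as a discrete distribution and penalize its entropy so that density concentrates at a surface. This is a simplification; the paper's exact formulation (masking and weighting) is not reproduced here.

```python
import torch

def ray_entropy_loss(alphas, eps=1e-10):
    """Normalize the per-sample opacities of one ray into a discrete
    distribution and return its Shannon entropy; adding this term to the
    training loss discourages density smeared along the ray."""
    p = alphas / alphas.sum().clamp(min=eps)
    return -(p * torch.log(p + eps)).sum()

# Usage sketch: total = photometric_loss + lam * ray_entropy_loss(alphas)
```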
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.