NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction
- URL: http://arxiv.org/abs/2203.11283v1
- Date: Mon, 21 Mar 2022 18:56:35 GMT
- Title: NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction
- Authors: Xiaoshuai Zhang, Sai Bi, Kalyan Sunkavalli, Hao Su, Zexiang Xu
- Abstract summary: We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
- Score: 50.54946139497575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While NeRF has shown great success for neural reconstruction and rendering,
its limited MLP capacity and long per-scene optimization times make it
challenging to model large-scale indoor scenes. In contrast, classical 3D
reconstruction methods can handle large-scale scenes but do not produce
realistic renderings. We propose NeRFusion, a method that combines the
advantages of NeRF and TSDF-based fusion techniques to achieve efficient
large-scale reconstruction and photo-realistic rendering. We process the input
image sequence to predict per-frame local radiance fields via direct network
inference. These are then fused using a novel recurrent neural network that
incrementally reconstructs a global, sparse scene representation in real-time
at 22 fps. This global volume can be further fine-tuned to boost rendering
quality. We demonstrate that NeRFusion achieves state-of-the-art quality on
both large-scale indoor and small-scale object scenes, with substantially
faster reconstruction than NeRF and other recent methods.
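To make the pipeline concrete, below is a minimal PyTorch sketch of the three stages the abstract describes: direct per-frame prediction of a local feature volume, a GRU-style recurrent update that fuses each local volume into the global one, and an optional fine-tuning pass. Module names, shapes, and the image-to-volume mapping are illustrative assumptions; the paper's method builds a sparse volume, which this dense toy omits.

```python
# Hypothetical sketch of a NeRFusion-style pipeline; not the authors' code.
import torch
import torch.nn as nn

class LocalFieldPredictor(nn.Module):
    """Predicts a per-frame local feature volume by direct network inference."""
    def __init__(self, feat_dim=8, grid=16):
        super().__init__()
        self.feat_dim, self.grid = feat_dim, grid
        self.net = nn.Sequential(nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim * grid ** 3))

    def forward(self, image):                     # image: (3, 64, 64)
        v = self.net(image.flatten())
        return v.view(self.feat_dim, self.grid, self.grid, self.grid)

class RecurrentFusion(nn.Module):
    """Per-voxel GRU update that fuses a local volume into the global volume."""
    def __init__(self, feat_dim=8):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim, feat_dim)

    def forward(self, global_vol, local_vol):     # both: (C, D, D, D)
        C = global_vol.shape[0]
        g = global_vol.permute(1, 2, 3, 0).reshape(-1, C)
        l = local_vol.permute(1, 2, 3, 0).reshape(-1, C)
        g = self.gru(l, g)                        # incremental recurrent fusion
        return g.reshape(*global_vol.shape[1:], C).permute(3, 0, 1, 2)

predictor, fusion = LocalFieldPredictor(), RecurrentFusion()
global_vol = torch.zeros(8, 16, 16, 16)
for frame in torch.rand(5, 3, 64, 64):            # streaming input sequence
    with torch.no_grad():
        global_vol = fusion(global_vol, predictor(frame))
# global_vol could then be fine-tuned with a rendering loss to boost quality.
```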
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render the detailed specular appearance of distant environment illumination, but they are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
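A toy illustration of that secondary-ray idea, under simplifying assumptions (a stand-in field, a fixed sampling schedule, and standard volume-rendering weights; none of this is the paper's implementation):

```python
import torch

def reflect(d, n):
    """Mirror direction d about unit normal n: r = d - 2 (d . n) n."""
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def trace_features(field, origin, direction, n_samples=32, near=0.05, far=4.0):
    """March a ray through `field` (any callable: points -> (density, features))
    and accumulate features with standard volume-rendering weights."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                    # (S, 3) samples
    density, feat = field(pts)
    alpha = 1.0 - torch.exp(-density * (far - near) / n_samples)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
    return ((trans * alpha)[:, None] * feat).sum(0)          # accumulated feature

def toy_field(p):                                            # stand-in NeRF
    return torch.relu(1.0 - p.norm(dim=-1)), torch.rand(p.shape[0], 8)

x = torch.tensor([0.2, 0.0, 0.5])            # point along a camera ray
view = torch.tensor([0.0, 0.0, -1.0])        # incoming view direction
normal = torch.tensor([0.0, 0.0, 1.0])       # local surface normal
feat = trace_features(toy_field, x, reflect(view, normal))   # reflected-ray feature
```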
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments [3.1969023045814753]
We propose a 3D scene reconstruction and novel view synthesis framework called parent-child neural radiance field (PC-NeRF).
PC-NeRF implements hierarchical spatial partitioning and multi-level scene representation, including scene, segment, and point levels.
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes; a minimal sketch of the hierarchy follows.
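To make the hierarchy concrete, here is a minimal data-structure sketch (1-D for brevity) of a parent scene node split into child segment nodes that route point-level samples; the uniform split rule is an assumption, not the paper's partitioning algorithm.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SegmentNode:
    bounds: Tuple[float, float]                          # 1-D extent, for brevity
    points: List[float] = field(default_factory=list)    # point-level samples

@dataclass
class SceneNode:
    bounds: Tuple[float, float]
    children: List[SegmentNode] = field(default_factory=list)

def build_hierarchy(lo: float, hi: float, n_segments: int) -> SceneNode:
    """Parent (scene) node split into child (segment) nodes."""
    step = (hi - lo) / n_segments
    scene = SceneNode((lo, hi))
    scene.children = [SegmentNode((lo + i * step, lo + (i + 1) * step))
                      for i in range(n_segments)]
    return scene

def insert_point(scene: SceneNode, x: float) -> None:
    """Route a LiDAR point to its segment; queries descend the same way."""
    for seg in scene.children:
        if seg.bounds[0] <= x < seg.bounds[1]:
            seg.points.append(x)
            return

scene = build_hierarchy(0.0, 100.0, n_segments=4)
insert_point(scene, 37.5)                # lands in segment (25.0, 50.0)
```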
arXiv Detail & Related papers (2024-02-14T17:16:39Z)
- BirdNeRF: Fast Neural Reconstruction of Large-Scale Scenes From Aerial Imagery [3.4956406636452626]
We introduce BirdNeRF, an adaptation of Neural Radiance Fields (NeRF) designed specifically for reconstructing large-scale scenes using aerial imagery.
We present a novel bird-view pose-based spatial decomposition algorithm that decomposes a large aerial image set into multiple small sets with appropriately sized overlaps.
We evaluate our approach on existing datasets and on our own drone footage, improving reconstruction speed by 10x over classical photogrammetry software and 50x over a state-of-the-art large-scale NeRF solution.
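A minimal sketch of such a pose-based decomposition, assuming a uniform ground-plane grid: each camera is assigned to every tile whose overlap-expanded extent contains its position. Tile size and overlap values are illustrative, not the paper's.

```python
import numpy as np

def decompose_by_pose(xy_poses, tile=50.0, overlap=10.0):
    """Bin camera ground-plane positions into overlapping tiles; images on
    tile borders are shared by neighboring tiles via the overlap margin."""
    xy = np.asarray(xy_poses)
    lo, hi = xy.min(0), xy.max(0)
    nx, ny = np.maximum(1, np.ceil((hi - lo) / tile).astype(int))
    tiles = {}
    for ix in range(nx):
        for iy in range(ny):
            t_lo = lo + np.array([ix, iy]) * tile - overlap
            t_hi = lo + np.array([ix + 1, iy + 1]) * tile + overlap
            inside = np.all((xy >= t_lo) & (xy <= t_hi), axis=1)
            if inside.any():
                tiles[(ix, iy)] = np.flatnonzero(inside)    # image indices
    return tiles

poses = np.random.rand(200, 2) * 120.0    # 200 aerial camera positions
tiles = decompose_by_pose(poses)          # each small set trains its own NeRF
```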
arXiv Detail & Related papers (2024-02-07T03:18:34Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
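As a schematic of how a diffusion prior can regularize few-view reconstruction, here is a toy loss that noises a rendered novel view, lets a denoiser correct it, and penalizes the gap; the schedule, weighting, and stand-in denoiser are assumptions, not ReconFusion's objective.

```python
import torch

def diffusion_prior_loss(rendered, diffusion_denoiser, t=0.5):
    """Views the prior finds implausible are pulled toward its denoised
    version; well-observed regions incur little loss."""
    noise = torch.randn_like(rendered)
    noisy = (1 - t) * rendered + t * noise        # simple interpolation schedule
    denoised = diffusion_denoiser(noisy, t)
    return ((rendered - denoised.detach()) ** 2).mean()

identity_prior = lambda x, t: x                   # placeholder denoiser
novel_view = torch.rand(3, 64, 64, requires_grad=True)
loss = diffusion_prior_loss(novel_view, identity_prior)
loss.backward()                                   # gradients flow to the scene
```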
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
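A toy rendering of those two components, a NeRF-style degradation simulator for building training pairs and an inter-viewpoint mixer that restores a target view from neighboring references; the degradation model (blur plus noise) and layer sizes are purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simulate_nerf_degradation(img):
    """Blur + noise as a crude stand-in for NeRF rendering artifacts."""
    blurred = F.avg_pool2d(img.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    return (blurred + 0.05 * torch.randn_like(blurred)).clamp(0, 1)

class InterViewpointMixer(nn.Module):
    """Fuses the degraded target view with two clean reference views."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(9, 3, kernel_size=3, padding=1)  # 3 views x RGB

    def forward(self, degraded, ref_a, ref_b):
        stacked = torch.cat([degraded, ref_a, ref_b], dim=0).unsqueeze(0)
        return self.net(stacked).squeeze(0)

clean = torch.rand(3, 64, 64)
pair = (simulate_nerf_degradation(clean), clean)  # self-supervised training pair
mixer = InterViewpointMixer()
restored = mixer(pair[0], torch.rand(3, 64, 64), torch.rand(3, 64, 64))
```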
arXiv Detail & Related papers (2023-06-10T09:19:19Z)
- Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
However, NeRF-based models are susceptible to interference caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, uses joint color adjustment to balance the display of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z)
- SurfelNeRF: Neural Surfel Radiance Fields for Online Photorealistic Reconstruction of Indoor Scenes [17.711755550841385]
SLAM-based methods can reconstruct 3D scene geometry progressively in real time but cannot render photorealistic results.
While NeRF-based methods produce promising novel view synthesis results, their long offline optimization time and lack of geometric constraints pose challenges to efficiently handling online input.
We introduce SurfelNeRF, a variant of neural radiance field which employs a flexible and scalable neural surfel representation to store geometric attributes and extracted appearance features from input images.
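A minimal sketch of what a neural surfel record might hold, plus a running-average fusion step for online updates as frames stream in; the exact attributes and update rule are assumptions, not the paper's.

```python
import torch
from dataclasses import dataclass

@dataclass
class NeuralSurfel:
    position: torch.Tensor     # (3,) surfel center
    normal: torch.Tensor       # (3,) unit orientation
    radius: float              # disk extent in world units
    feature: torch.Tensor      # (F,) appearance feature from an image encoder

def fuse_surfel(existing: NeuralSurfel, new: NeuralSurfel, w=0.2) -> NeuralSurfel:
    """Running-average update when a new observation lands on an existing
    surfel, so the representation grows and refines online."""
    return NeuralSurfel(
        position=(1 - w) * existing.position + w * new.position,
        normal=torch.nn.functional.normalize(
            (1 - w) * existing.normal + w * new.normal, dim=0),
        radius=(1 - w) * existing.radius + w * new.radius,
        feature=(1 - w) * existing.feature + w * new.feature,
    )

s0 = NeuralSurfel(torch.zeros(3), torch.tensor([0., 0., 1.]), 0.010, torch.rand(16))
s1 = NeuralSurfel(0.01 * torch.ones(3), torch.tensor([0., 0.1, 1.]), 0.012, torch.rand(16))
fused = fuse_surfel(s0, s1)
```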
arXiv Detail & Related papers (2023-04-18T13:11:49Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
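The stated decoupling is easy to sketch: two independent MLPs, one producing occupancy supervised by LiDAR and one producing view-dependent color supervised by photometric error. Layer sizes and the toy losses below are illustrative, and the paper's OGM-guided ray sampling is omitted.

```python
import torch
import torch.nn as nn

# Geometry branch: occupancy from 3-D position, supervised by LiDAR.
occupancy_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
# Appearance branch: color from position + view direction, supervised by camera.
color_mlp = nn.Sequential(nn.Linear(3 + 3, 64), nn.ReLU(),
                          nn.Linear(64, 3), nn.Sigmoid())

pts = torch.rand(128, 3)                      # sampled 3-D points along rays
dirs = nn.functional.normalize(torch.rand(128, 3), dim=1)

sigma = occupancy_mlp(pts)                    # trained against LiDAR
rgb = color_mlp(torch.cat([pts, dirs], 1))    # trained against camera pixels

lidar_occ = torch.rand(128, 1).round()        # stand-in LiDAR occupancy labels
geom_loss = nn.functional.binary_cross_entropy_with_logits(sigma, lidar_occ)
photo_loss = ((rgb - torch.rand(128, 3)) ** 2).mean()
(geom_loss + photo_loss).backward()           # branches learn from separate signals
```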
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
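A toy sketch of that generalizable setup: a shared 2-D encoder featurizes the three views, per-view features sampled at a projected point are pooled across views (mean/variance pooling is a common multi-view-stereo choice and an assumption here), and a small decoder maps the pooled feature to color and density by direct inference.

```python
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 8, kernel_size=3, padding=1)                       # shared 2-D CNN
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))   # -> (r, g, b, sigma)

views = torch.rand(3, 3, 32, 32)    # three nearby input views
feats = encoder(views)              # (3, 8, 32, 32) per-view feature maps

# Sample each view's features at the pixel a 3-D point projects to (a fixed
# pixel here for brevity; a real system would use camera projection), then
# pool across views so the decoder sees multi-view consistency cues.
sampled = feats[:, :, 16, 16]                               # (3, 8)
pooled = torch.cat([sampled.mean(0), sampled.var(0)], 0)    # (16,)
rgb_sigma = decoder(pooled)         # radiance field queried without per-scene tuning
```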
arXiv Detail & Related papers (2021-03-29T13:15:23Z)