PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames
in Autonomous Driving Environments
- URL: http://arxiv.org/abs/2402.09325v1
- Date: Wed, 14 Feb 2024 17:16:39 GMT
- Title: PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames
in Autonomous Driving Environments
- Authors: Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi
Ma
- Abstract summary: We propose a 3D scene reconstruction and novel view synthesis framework called the parent-child neural radiance field (PC-NeRF).
PC-NeRF implements hierarchical spatial partitioning and multi-level scene representation, including scene, segment, and point levels.
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
- Score: 3.1969023045814753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale 3D scene reconstruction and novel view synthesis are vital for
autonomous vehicles, especially utilizing temporally sparse LiDAR frames.
However, conventional explicit representations remain a significant bottleneck to
representing reconstructed and synthesized scenes at unlimited resolution.
Although the recently developed neural radiance fields (NeRF) have
shown compelling results in implicit representations, the problem of
large-scale 3D scene reconstruction and novel view synthesis using sparse LiDAR
frames remains unexplored. To bridge this gap, we propose a 3D scene
reconstruction and novel view synthesis framework called parent-child neural
radiance field (PC-NeRF). Based on its two modules, parent NeRF and child NeRF,
the framework implements hierarchical spatial partitioning and multi-level
scene representation, including scene, segment, and point levels. The
multi-level scene representation enhances the efficient utilization of sparse
LiDAR point cloud data and enables the rapid acquisition of an approximate
volumetric scene representation. Extensive experiments show that PC-NeRF
achieves high-precision novel LiDAR view synthesis and 3D reconstruction in
large-scale scenes. Moreover, PC-NeRF effectively handles situations with
sparse LiDAR frames and demonstrates high deployment efficiency with limited
training epochs. Our implementation and the pre-trained models are
available at https://github.com/biter0088/pc-nerf.
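As a rough illustration of the hierarchical design described above, the following sketch routes query points through a parent field covering a whole scene block and child fields covering segments inside it. Everything here is a hypothetical simplification (class names, an axis-aligned split along x, MLP sizes); the actual PC-NeRF architecture is in the repository linked above.
```python
import torch
import torch.nn as nn

class RadianceFieldMLP(nn.Module):
    """Tiny MLP mapping a 3D point to (density, intensity); stands in for a NeRF."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, pts):
        return self.net(pts)

class ParentChildField(nn.Module):
    """Hypothetical two-level field: a parent NeRF spans a whole scene block
    (scene level); child NeRFs cover segments inside it (segment level)."""
    def __init__(self, x_min, x_max, n_segments):
        super().__init__()
        self.x_min, self.x_max, self.n_segments = x_min, x_max, n_segments
        self.parent = RadianceFieldMLP()
        self.child_fields = nn.ModuleList(
            RadianceFieldMLP() for _ in range(n_segments))

    def forward(self, pts):
        coarse = self.parent(pts)  # fast approximate scene-level prediction
        # Route each query point to the child whose segment contains it.
        t = (pts[:, 0] - self.x_min) / (self.x_max - self.x_min)
        idx = (t * self.n_segments).long().clamp(0, self.n_segments - 1)
        fine = torch.zeros_like(coarse)
        for i, child in enumerate(self.child_fields):
            mask = idx == i
            if mask.any():
                fine[mask] = child(pts[mask])  # segment-level refinement
        return coarse, fine

field = ParentChildField(x_min=0.0, x_max=40.0, n_segments=4)
coarse, fine = field(torch.rand(1024, 3) * 40.0)  # per-point (density, intensity)
```
The coarse parent output corresponds to the rapid approximate volumetric representation the abstract mentions, while the child outputs refine individual segments.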
Related papers
- SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields [9.606992888590757]
We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function.
We propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model.
Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on NeRF-Synthetic, LLFF, and TanksAndTemples datasets.
arXiv Detail & Related papers (2024-09-06T03:36:12Z)
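A generic sketch of the distillation idea summarized above, assuming `student` and `teacher` are callable radiance-field modules: a frozen field from a previous scene supervises the new model on sampled points. The per-point `weights` argument is a hypothetical stand-in for SCARF's uncertain-surface weighting, which the paper defines precisely.
```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, pts, weights=None):
    """Match the student field's outputs to a frozen teacher's at sampled
    3D points; `weights` can down-weight unreliable (uncertain) regions."""
    with torch.no_grad():
        target = teacher(pts)          # field trained on a previous scene
    pred = student(pts)                # model currently being trained
    err = F.mse_loss(pred, target, reduction="none").mean(dim=-1)
    return (err * weights).mean() if weights is not None else err.mean()
```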
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- PC-NeRF: Parent-Child Neural Radiance Fields under Partial Sensor Data Loss in Autonomous Driving Environments [3.0170390440173023]
We propose a novel 3D scene reconstruction framework called the parent-child neural radiance field (PC-NeRF).
Extensive experiments show that our proposed PC-NeRF achieves high-precision 3D reconstruction in large-scale scenes.
arXiv Detail & Related papers (2023-10-02T03:32:35Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
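The entry above renders LiDAR measurements with a NeRF; the standard way to turn densities along a ray into a range value is the usual volume-rendering expectation, sketched below. This is the textbook formulation, not necessarily LiDAR-NeRF's exact renderer.
```python
import torch

def expected_ray_depth(sigma, t):
    """Volume-render an expected depth along one ray.
    sigma: densities at N sample points; t: their distances along the ray."""
    delta = torch.diff(t, append=t[-1:] + 1e10)   # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)       # per-sample opacity
    trans = torch.cumprod(                        # accumulated transmittance
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans                       # contribution of each sample
    return (weights * t).sum()                    # expected hit distance

t = torch.linspace(0.5, 50.0, 128)                # sample distances in metres
depth = expected_ray_depth(torch.rand(128), t)
```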
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
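For the feature-grid alternative mentioned in the entry above, the core operation is a trilinear lookup into a dense feature volume, sketched generically below with `grid_sample`; the paper's actual grid-guided architecture combines such lookups with NeRF branches.
```python
import torch
import torch.nn.functional as F

def query_grid(grid, pts):
    """Trilinearly interpolate features at query points.
    grid: [1, C, D, H, W] feature volume; pts: [N, 3] coords in [-1, 1],
    ordered (x, y, z). Returns [N, C] per-point features."""
    coords = pts.reshape(1, -1, 1, 1, 3)          # grid_sample wants 5-D coords
    feats = F.grid_sample(grid, coords, mode="bilinear", align_corners=True)
    return feats.reshape(grid.shape[1], -1).t()   # -> [N, C]

feats = query_grid(torch.randn(1, 16, 32, 32, 32), torch.rand(1024, 3) * 2 - 1)
```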
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
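A minimal sketch of the decoupling summarized in the CLONeR entry above: one MLP predicts geometry (trainable against LiDAR returns), a second predicts color from position and view direction (trainable against camera images). Layer sizes and output activations are illustrative choices, not CLONeR's configuration.
```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, d_out))

occupancy_mlp = mlp(3, 1)   # 3D point -> density      (LiDAR-supervised)
color_mlp = mlp(6, 3)       # point + view dir -> RGB  (camera-supervised)

pts = torch.rand(8, 3)                                    # sample points
dirs = torch.randn(8, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)             # unit view directions
sigma = torch.relu(occupancy_mlp(pts))                    # non-negative density
rgb = torch.sigmoid(color_mlp(torch.cat([pts, dirs], -1)))  # colors in [0, 1]
```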
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)