Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous
Driving
- URL: http://arxiv.org/abs/2403.05907v1
- Date: Sat, 9 Mar 2024 13:14:27 GMT
- Title: Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous
Driving
- Authors: Junyi Cao, Zhichao Li, Naiyan Wang, Chao Ma
- Abstract summary: Lightning NeRF is an efficient hybrid scene representation that effectively utilizes the geometry prior from LiDAR in autonomous driving scenarios.
It significantly improves the novel view synthesis performance of NeRF and reduces computational overheads.
Our approach achieves a five-fold increase in training speed and a ten-fold improvement in rendering speed.
- Score: 28.788240489289212
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have highlighted the promising application of NeRF in
autonomous driving contexts. However, the complexity of outdoor environments,
combined with the restricted viewpoints in driving scenarios, complicates the
task of precisely reconstructing scene geometry. Such challenges often lead to
diminished quality in reconstructions and extended durations for both training
and rendering. To tackle these challenges, we present Lightning NeRF. It uses
an efficient hybrid scene representation that effectively utilizes the geometry
prior from LiDAR in autonomous driving scenarios. Lightning NeRF significantly
improves the novel view synthesis performance of NeRF and reduces computational
overheads. Through evaluations on real-world datasets, such as KITTI-360,
Argoverse2, and our private dataset, we demonstrate that our approach not only
exceeds the current state-of-the-art in novel view synthesis quality but also
achieves a five-fold increase in training speed and a ten-fold improvement in
rendering speed. Codes are available at
https://github.com/VISION-SJTU/Lightning-NeRF .
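The abstract does not detail how the LiDAR prior enters the hybrid representation. As a rough illustration of the general idea only, the Python sketch below (the function name, parameters, and voxel-grid formulation are assumptions, not taken from the paper) builds an occupancy grid from a LiDAR point cloud; a volume renderer can then skip samples in voxels with no returns, which is one way such a prior can cut both training and rendering cost.
```python
import numpy as np

def occupancy_grid_from_lidar(points, aabb_min, aabb_max, resolution=128):
    """Mark voxels that contain at least one LiDAR return as occupied.

    points:             (N, 3) LiDAR points in world coordinates.
    aabb_min, aabb_max: (3,) corners of the scene bounding box.
    """
    aabb_min = np.asarray(aabb_min, dtype=np.float64)
    aabb_max = np.asarray(aabb_max, dtype=np.float64)
    grid = np.zeros((resolution, resolution, resolution), dtype=bool)
    # Normalize points into [0, 1) within the box and bin them to voxel indices.
    u = (points - aabb_min) / (aabb_max - aabb_min)
    inside = np.all((u >= 0.0) & (u < 1.0), axis=1)
    idx = (u[inside] * resolution).astype(np.int64)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```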
Related papers
- Few-shot NeRF by Adaptive Rendering Loss Regularization [78.50710219013301]
Novel view synthesis with sparse inputs poses great challenges to Neural Radiance Fields (NeRF).
Recent works demonstrate that frequency regularization of Positional Encoding (PE) can achieve promising results for few-shot NeRF.
We propose Adaptive Rendering loss regularization for few-shot NeRF, dubbed AR-NeRF.
arXiv Detail & Related papers (2024-10-23T13:05:26Z)
- Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering [106.0057551634008]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing smoothing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot NeRF methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
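The SANeRF entry above attributes its gains to "merely one line of code"; the actual line is defined in that paper. As a hedged sketch of the general spatial-annealing idea only (the linear schedule, names, and default radius are assumptions):
```python
import torch

def annealed_sample_blur(xyz, step, anneal_steps, init_radius=0.02):
    """Jitter sample coordinates with a radius that decays to zero.

    Early in training the large radius acts as a spatial low-pass
    filter that stabilizes coarse geometry; as the radius shrinks,
    fine detail can emerge.
    """
    radius = init_radius * max(1.0 - step / anneal_steps, 0.0)
    return xyz + torch.randn_like(xyz) * radius
```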
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
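The NeRF-Casting summary above describes casting secondary rays from shading points rather than querying an expensive directional network there. A minimal sketch of that pattern, assuming a hypothetical trace_fn that volume-renders features along each reflected ray (all names here are invented, not the paper's API):
```python
import torch

def reflect(view_dirs, normals):
    # Mirror the view direction about the surface normal: r = d - 2 (d . n) n.
    d_dot_n = (view_dirs * normals).sum(dim=-1, keepdim=True)
    return view_dirs - 2.0 * d_dot_n * normals

def reflected_features(points, view_dirs, normals, trace_fn):
    """Cast secondary rays from shading points and trace them through
    the field; trace_fn(origins, directions) -> per-ray features is a
    stand-in for volume-rendering feature vectors along each ray.
    """
    return trace_fn(points, reflect(view_dirs, normals))
```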
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser sampling at higher speeds.
We propose several insights that allow better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning [63.39461847093663]
We propose NeRF-VPT, an innovative view prompt tuning method for novel view synthesis.
Our proposed NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages.
NeRF-VPT only requires sampling RGB data from previous stage renderings as priors at each training stage, without relying on extra guidance or complex techniques.
arXiv Detail & Related papers (2024-03-02T22:08:10Z)
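The cascading prompt scheme in the NeRF-VPT entry above reduces to a simple loop in which each stage's RGB output feeds the next stage. A minimal sketch with invented names and an assumed stage(rays, prompt) signature:
```python
def cascaded_render(stages, rays, prompt=None):
    """Run rendering stages in order; each stage's RGB output becomes
    the visual prompt for the next stage.

    stages: non-empty list of callables stage(rays, prompt) -> rgb.
    """
    rgb = None
    for stage in stages:
        rgb = stage(rays, prompt)
        prompt = rgb  # the previous rendering serves as the next prompt
    return rgb
```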
- PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments [3.1969023045814753]
We propose a 3D scene reconstruction and novel view synthesis framework called parent-child neural radiance field (PC-NeRF).
PC-NeRF implements hierarchical spatial partitioning and multi-level scene representation, including scene, segment, and point levels.
With extensive experiments, PC-NeRF is shown to achieve high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
arXiv Detail & Related papers (2024-02-14T17:16:39Z)
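The scene/segment/point hierarchy described in the PC-NeRF entry above suggests a parent volume that routes queries to child segments. A minimal, assumption-laden sketch (class and method names are invented, and the point level is omitted):
```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# A box is (min corner, max corner), each an (x, y, z) tuple.
Box = Tuple[Tuple[float, float, float], Tuple[float, float, float]]

@dataclass
class Segment:
    """Child volume covering one stretch of the driving trajectory."""
    box: Box

@dataclass
class Scene:
    """Parent volume covering the whole large-scale scene."""
    box: Box
    segments: List[Segment] = field(default_factory=list)

    def locate(self, xyz) -> Optional[Segment]:
        # Route a query point to the child segment whose box contains it;
        # points outside every segment fall back to the parent field.
        for seg in self.segments:
            lo, hi = seg.box
            if all(l <= c <= h for c, l, h in zip(xyz, lo, hi)):
                return seg
        return None
```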
- HarmonicNeRF: Geometry-Informed Synthetic View Augmentation for 3D Scene Reconstruction in Driving Scenarios [2.949710700293865]
HarmonicNeRF is a novel approach for outdoor self-supervised monocular scene reconstruction.
It capitalizes on the strengths of NeRF and enhances surface reconstruction accuracy by augmenting the input space with geometry-informed synthetic views.
Our approach establishes new benchmarks in synthesizing novel depth views and reconstructing scenes, significantly outperforming existing methods.
arXiv Detail & Related papers (2023-10-09T07:42:33Z)
- From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z)