Neural LiDAR Fields for Novel View Synthesis
- URL: http://arxiv.org/abs/2305.01643v2
- Date: Sun, 13 Aug 2023 09:25:18 GMT
- Title: Neural LiDAR Fields for Novel View Synthesis
- Authors: Shengyu Huang, Zan Gojcic, Zian Wang, Francis Williams, Yoni Kasten,
Sanja Fidler, Konrad Schindler, Or Litany
- Abstract summary: We present Neural Fields for LiDAR (NFL), a method to optimise a neural field scene representation from LiDAR measurements.
NFL combines the rendering power of neural fields with a detailed, physically motivated model of the LiDAR sensing process.
We show that the improved realism of the synthesized views narrows the domain gap to real scans and translates to better registration and semantic segmentation performance.
- Score: 80.45307792404685
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present Neural Fields for LiDAR (NFL), a method to optimise a neural field
scene representation from LiDAR measurements, with the goal of synthesizing
realistic LiDAR scans from novel viewpoints. NFL combines the rendering power
of neural fields with a detailed, physically motivated model of the LiDAR
sensing process, thus enabling it to accurately reproduce key sensor behaviors
like beam divergence, secondary returns, and ray dropping. We evaluate NFL on
synthetic and real LiDAR scans and show that it outperforms explicit
reconstruct-then-simulate methods as well as other NeRF-style methods on the LiDAR
novel view synthesis task. Moreover, we show that the improved realism of the
synthesized views narrows the domain gap to real scans and translates to better
registration and semantic segmentation performance.
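The physically motivated sensing model above maps naturally onto volume rendering. As a rough illustration (not NFL's actual implementation), the sketch below computes the expected return depth of a single LiDAR ray from a generic density field; the density function, sampling scheme, and constants are placeholder assumptions, and NFL's handling of beam divergence, secondary returns, and ray dropping is omitted here.

```python
import numpy as np

def expected_depth(density_fn, origin, direction, t_near=0.5, t_far=50.0, n=128):
    """Volume-render the expected return depth along one LiDAR ray."""
    t = np.linspace(t_near, t_far, n)                    # sample distances [m]
    pts = origin[None, :] + t[:, None] * direction[None, :]
    sigma = density_fn(pts)                              # (n,) nonnegative densities
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))   # interval lengths
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-interval opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    w = trans * alpha                                    # rendering weights
    return float(np.sum(w * t) / max(np.sum(w), 1e-8))   # weight-normalized depth

# Toy density field: a diffuse "wall" roughly 10 m from the sensor.
wall = lambda p: 5.0 * np.exp(-0.5 * ((np.linalg.norm(p, axis=-1) - 10.0) / 0.2) ** 2)
print(expected_depth(wall, np.zeros(3), np.array([1.0, 0.0, 0.0])))  # ~10.0
```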
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
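As a rough illustration of the ray-tracing step described above, the sketch below computes the reflected direction such a ray would be cast along; the helper name and test values are our own, and the actual method traces these rays through the NeRF to render feature vectors rather than stopping at this geometric step.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror an incoming view direction about a surface normal.

    Both inputs are 3-vectors; the result is the direction a
    reflection ray would be cast along (r = d - 2 (d . n) n).
    """
    d = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    return d - 2.0 * np.dot(d, n) * n

# A ray hitting a floor (normal +z) at 45 degrees bounces upward.
print(reflect(np.array([1.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
# -> [0.70710678 0.         0.70710678]
```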
- LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis [11.395101473757443]
We propose LiDAR4D, a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis.
In consideration of the sparsity and large-scale characteristics, we design a 4D hybrid representation combined with multi-planar and grid features.
For the realistic synthesis of LiDAR point clouds, we incorporate the global optimization of ray-drop probability to preserve cross-region patterns.
arXiv Detail & Related papers (2024-04-03T13:39:29Z)
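The "4D hybrid representation combined with multi-planar and grid features" above is in the spirit of factored feature grids. Below is a minimal sketch of bilinear lookup on axis-aligned coordinate planes of (x, y, z, t); the plane resolution, channel count, and sum-fusion are illustrative assumptions rather than LiDAR4D's exact design.

```python
import numpy as np

def bilerp(plane, u, v):
    """Bilinearly interpolate a (H, W, C) feature plane at normalized (u, v) in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[y0, x0] + fx * (1 - fy) * plane[y0, x1]
            + (1 - fx) * fy * plane[y1, x0] + fx * fy * plane[y1, x1])

rng = np.random.default_rng(0)
C = 8  # feature channels
# One feature plane per coordinate pair of (x, y, z, t): six planes in total.
pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
planes = {p: rng.normal(size=(32, 32, C)).astype(np.float32) for p in pairs}

def query(xyzt):
    """Fuse plane features for a 4D point with coordinates normalized to [0, 1]."""
    feat = np.zeros(C, dtype=np.float32)
    for (i, j), plane in planes.items():
        feat += bilerp(plane, xyzt[i], xyzt[j])  # sum-fusion across planes
    return feat  # a real system would feed this to a small decoder MLP

print(query(np.array([0.2, 0.5, 0.8, 0.1])).shape)  # (8,)
```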
- Real-Aug: Realistic Scene Synthesis for LiDAR Augmentation in 3D Object Detection [45.102312149413855]
We study the synthesis-based LiDAR data augmentation approach (so-called GT-Aug), which offers maximum controllability over generated data samples.
We propose Real-Aug, a synthesis-based augmentation method that prioritizes generating realistic LiDAR scans.
We achieve a state-of-the-art 0.744 NDS and 0.702 mAP on the nuScenes test set.
arXiv Detail & Related papers (2023-05-22T09:24:55Z)
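GT-Aug, referenced above, copy-pastes ground-truth object points into other scans. The toy sketch below shows the basic insertion with a crude spherical collision test; the data layout and rejection rule are our simplifications, and Real-Aug's contribution lies precisely in making such insertions realistic.

```python
import numpy as np

def gt_aug(scene_pts, obj_pts, obj_center, obj_radius=2.0):
    """Naive GT-Aug: paste an object's points into a scene.

    scene_pts, obj_pts: (N, 3) arrays. The object is skipped if the scene
    already has points inside its (crudely spherical) footprint.
    """
    dist = np.linalg.norm(scene_pts - obj_center, axis=1)
    if np.any(dist < obj_radius):            # collision: placement rejected
        return scene_pts
    return np.vstack([scene_pts, obj_pts])   # accepted: merge point clouds

rng = np.random.default_rng(1)
scene = rng.uniform(-40, 40, size=(1000, 3))
car = rng.normal(loc=[60.0, 0.0, 0.0], scale=0.5, size=(200, 3))
print(gt_aug(scene, car, np.array([60.0, 0.0, 0.0])).shape)  # (1200, 3)
```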
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
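One way to read "encode realistic effects such as dropped points" is a per-point drop classifier. The toy sketch below uses logistic regression over hypothetical point features (range, intensity, a transparency cue); the features and weights are invented for illustration and do not reflect the paper's learned architecture.

```python
import numpy as np

def drop_probability(features, w, b):
    """Toy per-point ray-drop model: logistic regression over point features."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Hypothetical features: [range_m, intensity, is_transparent_surface].
w = np.array([0.02, -3.0, 2.5])  # far, dark, transparent -> more likely dropped
b = -1.0
pts = np.array([[10.0, 0.9, 0.0],   # close, bright, opaque
                [60.0, 0.1, 1.0]])  # far, dim, glass
p_drop = drop_probability(pts, w, b)
keep = np.random.default_rng(2).random(2) > p_drop  # stochastic drop mask
print(p_drop.round(3), keep)
```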
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascade and generalizable neural radiance fields method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
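For context on the limitation discussed above: the vanilla coarse-to-fine heuristic draws fine samples by inverse-transform sampling of the coarse weights, roughly as sketched below; the paper's contribution is to replace this hand-crafted step with a differentiable, end-to-end-trained proposal module.

```python
import numpy as np

def sample_fine(bin_edges, coarse_weights, n_fine, rng):
    """Heuristic hierarchical sampling: inverse-CDF draw from coarse weights.

    bin_edges: (N+1,) depths bounding the coarse bins; coarse_weights: (N,).
    """
    w = coarse_weights + 1e-5                 # avoid a degenerate CDF
    pdf = w / w.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.random(n_fine)                    # uniform draws in [0, 1)
    return np.interp(u, cdf, bin_edges)       # invert the piecewise-linear CDF

rng = np.random.default_rng(3)
edges = np.linspace(2.0, 6.0, 9)              # 8 coarse bins along the ray
weights = np.array([0, 0, 0.1, 0.7, 0.15, 0.05, 0, 0])  # mass near 3.5-4 m
print(np.sort(sample_fine(edges, weights, 5, rng)).round(2))
```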
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.