Neural LiDAR Fields for Novel View Synthesis
- URL: http://arxiv.org/abs/2305.01643v2
- Date: Sun, 13 Aug 2023 09:25:18 GMT
- Title: Neural LiDAR Fields for Novel View Synthesis
- Authors: Shengyu Huang, Zan Gojcic, Zian Wang, Francis Williams, Yoni Kasten,
Sanja Fidler, Konrad Schindler, Or Litany
- Abstract summary: We present Neural Fields for LiDAR (NFL), a method to optimise a neural field scene representation from LiDAR measurements.
NFL combines the rendering power of neural fields with a detailed, physically motivated model of the LiDAR sensing process.
We show that the improved realism of the synthesized views narrows the domain gap to real scans and translates to better registration and semantic segmentation performance.
- Score: 80.45307792404685
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present Neural Fields for LiDAR (NFL), a method to optimise a neural field
scene representation from LiDAR measurements, with the goal of synthesizing
realistic LiDAR scans from novel viewpoints. NFL combines the rendering power
of neural fields with a detailed, physically motivated model of the LiDAR
sensing process, thus enabling it to accurately reproduce key sensor behaviors
like beam divergence, secondary returns, and ray dropping. We evaluate NFL on
synthetic and real LiDAR scans and show that it outperforms explicit
reconstruct-then-simulate methods as well as other NeRF-style methods on the
LiDAR novel view synthesis task. Moreover, we show that the improved realism of the
synthesized views narrows the domain gap to real scans and translates to better
registration and semantic segmentation performance.
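The rendering step NFL builds on can be illustrated with a short sketch: standard NeRF-style volume rendering adapted to return an expected LiDAR range along a ray instead of a color. This is a minimal illustration under that reading, not NFL's actual implementation; the function name and NumPy formulation are assumptions, and the paper's physically motivated sensor model (beam divergence, secondary returns, ray dropping) is deliberately omitted.

```python
import numpy as np

def expected_lidar_range(sigma, t_vals):
    """Expected range along one LiDAR ray via NeRF-style quadrature.

    sigma:  (N,) volume densities at the sampled points (assumed given)
    t_vals: (N,) sorted sample distances from the sensor, in metres
    """
    # widths of the quadrature bins; the last bin is effectively open-ended
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alpha = 1.0 - np.exp(-sigma * deltas)        # per-bin hit probability
    # transmittance: probability the ray survives up to each bin
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
    weights = trans * alpha                      # ray-termination distribution
    return np.sum(weights * t_vals)              # expected return range

# toy check: a dense "wall" around 20 m should yield a range near 20
t = np.linspace(1.0, 50.0, 256)
sigma = np.where(np.abs(t - 20.0) < 0.5, 8.0, 0.0)
print(expected_lidar_range(sigma, t))
```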
Related papers
- GS-LiDAR: Generating Realistic LiDAR Point Clouds with Panoramic Gaussian Splatting [3.376357029373187]
GS-LiDAR is a novel framework for generating realistic LiDAR point clouds with panoramic Gaussian splatting.
We introduce a novel panoramic rendering technique with explicit ray-splat intersection, guided by panoramic LiDAR supervision.
arXiv Detail & Related papers (2025-01-22T11:21:20Z)
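The "explicit ray-splat intersection" named above can be sketched as a ray-plane test against a planar 2D Gaussian splat, followed by evaluating the Gaussian at the hit point. The function below is a hypothetical illustration of that idea; all names and the diagonal-covariance parameterization in the splat's local frame are assumptions, not the paper's API.

```python
import numpy as np

def ray_splat_intersection(origin, direction, center, normal,
                           tangent_u, tangent_v, scale_u, scale_v):
    """Intersect a ray with one planar 2D Gaussian splat (hypothetical).

    direction, normal, tangent_u, tangent_v: unit vectors; tangent_u/v
    span the splat plane and scale_u/v are the Gaussian's std devs.
    Returns (depth, gaussian_weight) or None on a miss.
    """
    denom = np.dot(direction, normal)
    if abs(denom) < 1e-8:
        return None                      # ray parallel to the splat plane
    t = np.dot(center - origin, normal) / denom
    if t <= 0.0:
        return None                      # plane is behind the sensor
    d = origin + t * direction - center  # offset of hit point in the plane
    u = np.dot(d, tangent_u) / scale_u   # local coords in units of std dev
    v = np.dot(d, tangent_v) / scale_v
    return t, np.exp(-0.5 * (u * u + v * v))
```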
- LiDAR-RT: Gaussian-based Ray Tracing for Dynamic LiDAR Re-simulation [31.79143254487969]
LiDAR-RT is a novel framework that supports real-time, physically accurate LiDAR re-simulation for driving scenes.
Our primary contribution is the development of an efficient and effective rendering pipeline.
Our framework supports realistic rendering with flexible scene editing operations and various sensor configurations.
arXiv Detail & Related papers (2024-12-19T18:58:36Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method for real-time, high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large-scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
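The geometric step behind casting rays from points along a camera ray is standard: mirror the view direction about the surface normal. The sketch below shows only that step, not NeRF-Casting's feature rendering or reflection-cone tracing; the function name is illustrative.

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror a unit view direction about a unit surface normal.

    view_dir points from the camera toward the surface point; the
    returned vector is the direction a reflection ray would be cast in.
    """
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

# e.g. a ray hitting a floor (normal +z) straight down bounces straight up
print(reflect(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0])))
```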
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
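One plausible form of such a learned sensor model is a per-point drop-probability predictor applied as a stochastic mask. The sketch below is hypothetical: a logistic regressor over two hand-picked features stands in for the paper's learned model, and every name and parameter here is an assumption.

```python
import numpy as np

def apply_learned_ray_drop(ranges, intensities, w, b, rng):
    """Stochastically drop simulated LiDAR returns (hypothetical model).

    w (2,) and b (scalar) play the role of a trained model's parameters:
    a logistic regressor over (range, intensity) predicts the probability
    that the real sensor would drop each return. Returns a keep mask.
    """
    feats = np.stack([ranges, intensities], axis=-1)        # (N, 2)
    p_drop = 1.0 / (1.0 + np.exp(-(feats @ w + b)))         # sigmoid
    return rng.random(len(ranges)) >= p_drop                # Bernoulli mask

rng = np.random.default_rng(0)
ranges = rng.uniform(1.0, 80.0, 1000)
intensities = rng.uniform(0.0, 1.0, 1000)
keep = apply_learned_ray_drop(ranges, intensities,
                              np.array([0.02, -4.0]), -1.0, rng)
```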
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
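For context, the heuristic this paper replaces is vanilla NeRF's coarse-to-fine step: inverse-transform sampling of fine-network positions from the coarse network's weights. The NumPy sketch below shows that baseline, not the paper's learned proposal module; names and the binning convention are illustrative.

```python
import numpy as np

def importance_sample(t_vals, weights, n_fine, rng):
    """Inverse-transform sample fine positions from coarse weights.

    t_vals:  (N,) sorted sample positions along the ray
    weights: (N-1,) coarse weights of the bins between adjacent t_vals
    """
    pdf = weights / (np.sum(weights) + 1e-8)
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])   # (N,) cumulative mass
    u = rng.random(n_fine)                          # uniforms in [0, 1)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1,
                  0, len(t_vals) - 2)               # containing bin
    denom = np.where(cdf[idx + 1] - cdf[idx] < 1e-8, 1.0,
                     cdf[idx + 1] - cdf[idx])
    frac = (u - cdf[idx]) / denom                   # position within bin
    return t_vals[idx] + frac * (t_vals[idx + 1] - t_vals[idx])
```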
This list is automatically generated from the titles and abstracts of the papers on this site.