NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental
LiDAR Odometry and Mapping
- URL: http://arxiv.org/abs/2303.10709v1
- Date: Sun, 19 Mar 2023 16:40:36 GMT
- Title: NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental
LiDAR Odometry and Mapping
- Authors: Junyuan Deng, Xieyuanli Chen, Songpengcheng Xia, Zhen Sun, Guoqing
Liu, Wenxian Yu, Ling Pei
- Abstract summary: We propose a novel NeRF-based LiDAR odometry and mapping approach, NeRF-LOAM, consisting of three modules: neural odometry, neural mapping, and mesh reconstruction.
Our approach achieves state-of-the-art odometry and mapping performance, as well as a strong generalization in large-scale environments utilizing LiDAR data.
- Score: 14.433784957457632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous odometry and mapping using LiDAR data is an important task for
mobile systems to achieve full autonomy in large-scale environments. However,
most existing LiDAR-based methods prioritize tracking quality over
reconstruction quality. Although the recently developed neural radiance fields
(NeRF) have shown promising advances in implicit reconstruction for indoor
environments, the problem of simultaneous odometry and mapping for large-scale
scenarios using incremental LiDAR data remains unexplored. To bridge this gap,
in this paper, we propose a novel NeRF-based LiDAR odometry and mapping
approach, NeRF-LOAM, consisting of three modules: neural odometry, neural
mapping, and mesh reconstruction. All these modules utilize our proposed neural
signed distance function, which separates LiDAR points into ground and
non-ground points to reduce Z-axis drift, optimizes odometry and voxel
embeddings concurrently, and in the end generates dense smooth mesh maps of the
environment. Moreover, this joint optimization allows our NeRF-LOAM to require
no pre-training and to exhibit strong generalization abilities when applied to
different environments. Extensive evaluations on three publicly available
datasets demonstrate that our approach achieves state-of-the-art odometry and
mapping performance, as well as a strong generalization in large-scale
environments utilizing LiDAR data. Furthermore, we perform multiple ablation
studies to validate the effectiveness of our network design. The implementation
of our approach will be made available at
https://github.com/JunyuanDeng/NeRF-LOAM.
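The abstract states that the neural SDF separates LiDAR points into ground and non-ground points to reduce Z-axis drift, but does not spell out the separation criterion. As one illustrative possibility (a stand-in sketch, not the paper's actual method), a RANSAC-style horizontal-plane fit in NumPy could perform such a split:

```python
import numpy as np

def split_ground(points, n_iters=100, dist_thresh=0.2, seed=0):
    """Split a LiDAR point cloud (N, 3) into (ground, non-ground) arrays
    via a RANSAC-style fit of a roughly horizontal plane. Illustrative
    stand-in for the paper's (unspecified) separation step."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample 3 points and fit the plane through them.
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        # Keep near-horizontal planes only (ground candidates).
        if abs(normal[2]) < 0.9:
            continue
        dist = np.abs((points - p0) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers], points[~best_inliers]
```

Constraining the candidate plane normal to be near-vertical is what makes the fit a *ground* plane rather than an arbitrary dominant plane, mirroring the motivation of anchoring the Z axis against drift.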
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields [10.753993328978542]
We propose a hybrid framework performing alternately global neural reconstruction and pure geometric pose optimization.
Experiments on NuScenes and KITTI-360 datasets demonstrate the superiority of GeoNLF in both novel view synthesis and multi-view registration.
arXiv Detail & Related papers (2024-07-08T04:19:49Z)
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions [93.02515761070201]
We present a novel type of neural fields that uses general radial bases for signal representation.
Our method builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals.
When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
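As a toy analogue of the adaptive radial bases described above (the Gaussian parameterization and least-squares fit here are assumptions for illustration, not NeuRBF's actual formulation), a signal can be represented as a weighted sum of kernels with per-kernel position and shape:

```python
import numpy as np

def rbf_features(x, centers, inv_scales):
    """Gaussian radial-basis features with flexible kernel position and shape.
    x: (N, D) query points; centers: (K, D) kernel positions;
    inv_scales: (K, D) per-axis inverse widths (anisotropic kernels)."""
    diff = (x[:, None, :] - centers[None, :, :]) * inv_scales[None, :, :]
    return np.exp(-0.5 * np.sum(diff**2, axis=-1))  # (N, K)

def fit_signal(x, y, centers, inv_scales):
    """Least-squares fit of RBF weights to a target signal y(x)."""
    phi = rbf_features(x, centers, inv_scales)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

# Usage: reconstruct a 1D signal from 60 adaptive-width Gaussians.
x = np.linspace(0, 1, 200)[:, None]
y = np.sin(8 * np.pi * x[:, 0])
centers = np.linspace(0, 1, 60)[:, None]   # kernel positions
inv_scales = np.full((60, 1), 50.0)        # kernel shapes (inverse widths)
w = fit_signal(x, y, centers, inv_scales)
recon = rbf_features(x, centers, inv_scales) @ w
```

Making both `centers` and `inv_scales` free parameters is what gives radial bases their higher spatial adaptivity compared with grid-anchored features.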
arXiv Detail & Related papers (2023-09-27T06:32:05Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation [81.02742110604161]
State-of-the-art methods for large-scale driving-scene LiDAR segmentation often project the point clouds to 2D space and then process them via 2D convolution.
We propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
Our method achieves 1st place on the Semantic KITTI leaderboard and outperforms existing methods on nuScenes by a noticeable margin of about 4%.
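The cylindrical partition idea can be sketched as follows; the bin counts and range limits below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cylindrical_voxel_ids(points, n_rho=32, n_phi=64, n_z=16,
                          rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Assign each LiDAR point (N, 3) to a cylindrical voxel indexed by
    (radius bin, azimuth bin, height bin). Sketch of the cylindrical
    partition idea; all bin counts and ranges are assumed values."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.hypot(x, y)                  # radial distance from the sensor
    phi = np.arctan2(y, x)                # azimuth angle in [-pi, pi)
    i = np.clip((rho / rho_max * n_rho).astype(int), 0, n_rho - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * n_phi).astype(int), 0, n_phi - 1)
    k = np.clip(((z - z_min) / (z_max - z_min) * n_z).astype(int), 0, n_z - 1)
    return np.stack([i, j, k], axis=1)    # (N, 3) integer voxel indices
```

Because LiDAR returns thin out with distance, equal-angle cylindrical cells keep the per-voxel point density more balanced than a uniform Cartesian grid, which is the motivation behind this partition.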
arXiv Detail & Related papers (2020-11-19T18:53:11Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
- LodoNet: A Deep Neural Network with 2D Keypoint Matching for 3D LiDAR Odometry Estimation [22.664095688406412]
We propose to transfer the LiDAR frames to image space and reformulate the problem as image feature extraction.
With the help of the scale-invariant feature transform (SIFT) for feature extraction, we are able to generate matched keypoint pairs (MKPs).
A convolutional neural network pipeline is designed to estimate LiDAR odometry from the extracted MKPs.
The proposed scheme, namely LodoNet, is then evaluated on the KITTI odometry benchmark, achieving results on par with or even better than the state-of-the-art.
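The first step described above, transferring LiDAR frames to image space, is commonly done with a spherical (range-image) projection so that 2D feature extractors such as SIFT can be applied. A minimal sketch in NumPy; the field-of-view values are assumptions taken from a typical 64-beam LiDAR, not LodoNet's exact configuration:

```python
import numpy as np

def lidar_to_range_image(points, h=64, w=1024,
                         fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Project a LiDAR point cloud (N, 3) to an (h, w) spherical range
    image. Pixel value = depth of the nearest return, 0 = no return."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)     # horizontal angle -> image column
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-9), -1.0, 1.0))
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int),
                0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return per pixel by writing far points first.
    order = np.argsort(-depth)
    img[v[order], u[order]] = depth[order]
    return img
```

On such a range image, any off-the-shelf 2D detector-descriptor (SIFT in LodoNet's case) can then produce the matched keypoint pairs fed to the odometry network.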
arXiv Detail & Related papers (2020-09-01T01:09:41Z)
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.