KN-LIO: Geometric Kinematics and Neural Field Coupled LiDAR-Inertial Odometry
- URL: http://arxiv.org/abs/2501.04263v1
- Date: Wed, 08 Jan 2025 04:14:09 GMT
- Title: KN-LIO: Geometric Kinematics and Neural Field Coupled LiDAR-Inertial Odometry
- Authors: Zhong Wang, Lele Ren, Yue Wen, Hesheng Wang
- Abstract summary: Recently emerging neural field technology has great potential for dense mapping, but pure LiDAR-based mapping struggles on highly dynamic vehicles. We present a new solution that tightly couples geometric kinematics with neural fields to enhance simultaneous state estimation and dense mapping. Our KN-LIO achieves performance on par with or superior to existing state-of-the-art solutions in pose estimation and offers improved dense mapping accuracy over pure LiDAR-based methods.
- Score: 11.851882531837244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in LiDAR-Inertial Odometry (LIO) have enabled a wide range of applications. However, traditional LIO systems tend to focus on localization rather than mapping, producing maps that consist mostly of sparse geometric elements, which is not ideal for downstream tasks. Recently emerging neural field technology has great potential for dense mapping, but pure LiDAR-based mapping struggles on highly dynamic vehicles. To mitigate this challenge, we present a new solution that tightly couples geometric kinematics with neural fields to enhance simultaneous state estimation and dense mapping. We propose both semi-coupled and tightly coupled Kinematic-Neural LIO (KN-LIO) systems that leverage online SDF decoding and iterated error-state Kalman filtering to fuse laser and inertial data. Our KN-LIO minimizes information loss and improves accuracy in state estimation, while also accommodating asynchronous multi-LiDAR inputs. Evaluations on diverse high-dynamic datasets demonstrate that our KN-LIO achieves performance on par with or superior to existing state-of-the-art solutions in pose estimation and offers improved dense mapping accuracy over pure LiDAR-based methods. The relevant code and datasets will be made available at https://**.
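The abstract describes fusing laser and inertial data with an iterated error-state Kalman filter (IESKF) whose measurement residuals come from online SDF decoding of the neural field. The sketch below is a minimal, illustrative version of such a measurement update, not the authors' implementation: LiDAR points are transformed into the world frame, a signed distance and its gradient are queried at each point, and the pose is iteratively corrected. The toy 4-DoF state layout, the `decode_sdf` callable, and the scalar measurement noise are assumptions made for illustration; a full IESKF would also carry full rotation, velocity, and IMU biases and propagate the prior through each re-linearization.

```python
import numpy as np

def iterated_eskf_update(x, P, points_body, decode_sdf, R_meas=0.01, iters=3):
    """Minimal iterated error-state Kalman filter (IESKF) measurement update.

    x           : nominal state [px, py, pz, yaw] (toy 4-DoF state for illustration).
    P           : 4x4 state covariance.
    points_body : (N, 3) LiDAR points in the body frame.
    decode_sdf  : callable returning (sdf_value, sdf_gradient) at a world point;
                  stands in for the neural field's online SDF decoding (assumed).
    """
    for _ in range(iters):
        px, py, pz, yaw = x
        c, s = np.cos(yaw), np.sin(yaw)
        Rwb = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        H_rows, r_rows = [], []
        for pb in points_body:
            pw = Rwb @ pb + np.array([px, py, pz])   # point in world frame
            sdf, grad = decode_sdf(pw)               # residual = signed distance to surface
            # Jacobian of the SDF residual w.r.t. [position, yaw]
            dpw_dyaw = np.array([-s * pb[0] - c * pb[1],
                                  c * pb[0] - s * pb[1],
                                  0.0])
            H_rows.append(np.hstack([grad, grad @ dpw_dyaw]))
            r_rows.append(sdf)

        H = np.asarray(H_rows)                       # (N, 4) measurement Jacobian
        r = np.asarray(r_rows)                       # (N,) point-to-surface residuals
        S = H @ P @ H.T + R_meas * np.eye(len(r))    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        dx = K @ (-r)                                # error-state correction
        x = x + dx                                   # inject correction, then re-linearize
        if np.linalg.norm(dx) < 1e-6:
            break
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For a quick sanity check, `decode_sdf = lambda p: (p[2], np.array([0.0, 0.0, 1.0]))` models a flat ground plane at z = 0, so the update pulls the estimated pose toward making the scan lie on that plane.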
Related papers
- SN-LiDAR: Semantic Neural Fields for Novel Space-time View LiDAR Synthesis [11.615282010184917]
We propose SN-LiDAR, a method that jointly performs accurate semantic segmentation, high-quality geometric reconstruction, and realistic LiDAR synthesis.
Specifically, we employ a coarse-to-fine planar-grid feature representation to extract global features from multi-frame point clouds.
Experiments on Semantic KITTI and KITTI-360 demonstrate the superiority of SN-LiDAR in both semantic and geometric reconstruction.
arXiv Detail & Related papers (2025-04-11T08:51:23Z)
- Incorporating GNSS Information with LIDAR-Inertial Odometry for Accurate Land-Vehicle Localization [1.9684593154403558]
We propose a novel LIDAR-based localization framework, which achieves high accuracy and provides robust localization in 3D pointcloud maps.
The system integrates global information with LIDAR-based odometry to optimize the localization state.
The algorithm is tested on various maps from different datasets and shows higher robustness and accuracy than other localization algorithms.
arXiv Detail & Related papers (2025-03-29T19:41:31Z)
- GS-SDF: LiDAR-Augmented Gaussian Splatting and Neural SDF for Geometrically Consistent Rendering and Reconstruction [12.293953058837653]
We propose a unified LiDAR-visual system that synergizes Gaussian splatting with a neural signed distance field.
Experiments demonstrate superior reconstruction accuracy and rendering quality across diverse trajectories.
arXiv Detail & Related papers (2025-03-13T08:53:38Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Parametric Taylor series based latent dynamics identification neural networks [0.3139093405260182]
A new latent dynamics identification method for nonlinear systems, P-TLDINets, is introduced.
It relies on a novel neural network structure based on Taylor series expansion and ResNets.
arXiv Detail & Related papers (2024-10-05T15:10:32Z)
- Scale-Translation Equivariant Network for Oceanic Internal Solitary Wave Localization [7.444865250744234]
Internal solitary waves (ISWs) are gravity waves that are often observed in the ocean interior rather than at the surface.
Cloud cover in optical remote sensing images variably obscures ground information, leading to blurred or missing surface observations.
This paper aims at altimeter-based machine learning solutions to automatically locate ISWs.
arXiv Detail & Related papers (2024-06-18T21:09:56Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping [14.433784957457632]
We propose a novel NeRF-based LiDAR odometry and mapping approach, NeRF-LOAM, consisting of three modules: neural odometry, neural mapping, and mesh reconstruction.
Our approach achieves state-of-the-art odometry and mapping performance, as well as strong generalization in large-scale environments utilizing LiDAR data.
arXiv Detail & Related papers (2023-03-19T16:40:36Z)
- CodeVIO: Visual-Inertial Odometry with Learned Optimizable Dense Depth [83.77839773394106]
We present a lightweight, tightly-coupled deep depth network and visual-inertial odometry system.
We provide the network with previously marginalized sparse features from VIO to increase the accuracy of initial depth prediction.
We show that it can run in real-time with single-thread execution while utilizing GPU acceleration only for the network and code Jacobian.
arXiv Detail & Related papers (2020-12-18T09:42:54Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)