Traj-LIO: A Resilient Multi-LiDAR Multi-IMU State Estimator Through Sparse Gaussian Process
- URL: http://arxiv.org/abs/2402.09189v1
- Date: Wed, 14 Feb 2024 14:08:06 GMT
- Title: Traj-LIO: A Resilient Multi-LiDAR Multi-IMU State Estimator Through Sparse Gaussian Process
- Authors: Xin Zheng, Jianke Zhu
- Abstract summary: We introduce a multi-LiDAR multi-IMU state estimator that takes advantage of a Gaussian Process (GP).
Our proposed approach handles different sensor configurations and is resilient to sensor failures.
To contribute to the community, we will make our source code publicly available.
- Score: 20.452961476175812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nowadays, sensor suites are equipped with redundant LiDARs and IMUs to mitigate the risks associated with sensor failure. It is challenging for previous discrete-time, IMU-driven kinematic systems to incorporate multiple asynchronous sensors, and such systems are susceptible to abnormal IMU data. To address these limitations, we introduce a multi-LiDAR multi-IMU state estimator that takes advantage of a Gaussian Process (GP) to predict a non-parametric continuous-time trajectory, capturing the sensors' spatial-temporal movement with a limited number of control states. Since the kinematic model, driven by three types of linear time-invariant stochastic differential equations, is independent of external sensor measurements, our proposed approach can handle different sensor configurations and is resilient to sensor failures. Moreover, we replace the conventional $\mathrm{SE}(3)$ state representation with the combination of $\mathrm{SO}(3)$ and vector space, which enables the GP-based LiDAR-inertial system to fulfill real-time requirements. Extensive experiments on public datasets demonstrate the versatility and resilience of our proposed multi-LiDAR multi-IMU state estimator. To contribute to the community, we will make our source code publicly available.
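The continuous-time machinery behind such trajectories is standard sparse-GP regression. As a rough illustration only (not the authors' implementation; on a plain vector space rather than the paper's $\mathrm{SO}(3)$-plus-vector-space state; and using a white-noise-on-acceleration prior as one common instance of an LTI SDE), the trajectory can be queried between two control states in closed form:

```python
# A minimal sketch, not the authors' code: sparse-GP trajectory interpolation
# under a white-noise-on-acceleration (WNOA) LTI SDE prior on a vector space.
# Traj-LIO applies this kind of machinery on SO(3) x vector space instead.
import numpy as np

def transition(dt, d):
    """State transition Phi(dt) for the state [position; velocity] in R^(2d)."""
    Phi = np.eye(2 * d)
    Phi[:d, d:] = dt * np.eye(d)
    return Phi

def process_noise(dt, Qc):
    """Covariance accumulated by the WNOA prior over an interval of length dt."""
    d = Qc.shape[0]
    Q = np.zeros((2 * d, 2 * d))
    Q[:d, :d] = dt ** 3 / 3.0 * Qc
    Q[:d, d:] = Q[d:, :d] = dt ** 2 / 2.0 * Qc
    Q[d:, d:] = dt * Qc
    return Q

def interpolate(x_k, x_k1, tau, dt, Qc):
    """GP posterior mean at time tau in (0, dt) between two control states."""
    d = Qc.shape[0]
    Psi = process_noise(tau, Qc) @ transition(dt - tau, d).T \
          @ np.linalg.inv(process_noise(dt, Qc))
    Lam = transition(tau, d) - Psi @ transition(dt, d)
    return Lam @ x_k + Psi @ x_k1

# Example: query the state 25 ms into a 100 ms interval between control states.
Qc = 0.01 * np.eye(3)
x0 = np.zeros(6)                          # position and velocity at t_k
x1 = np.r_[0.1, 0.0, 0.0, 1.0, 0.0, 0.0]  # position and velocity at t_{k+1}
print(interpolate(x0, x1, 0.025, 0.1, Qc))
```

Because the query involves only the two neighbouring control states, high-rate LiDAR and IMU measurements can be tied to the trajectory without adding a state per measurement, which is what keeps multi-sensor fusion tractable with limited control states.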
Related papers
- From One to the Power of Many: Augmentations for Invariance to Multi-LiDAR Perception from Single-Sensor Datasets [12.712896458348515]
LiDAR perception methods for autonomous vehicles, powered by deep neural networks, have experienced steep growth in performance on classic benchmarks.
There are still large gaps in performance when deploying models trained on single-sensor setups to modern multi-sensor vehicles.
We propose some initial solutions in the form of application-specific data augmentations, which can facilitate better transfer to multi-sensor LiDAR setups.
arXiv Detail & Related papers (2024-09-27T09:51:45Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
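The laser beam mixing that LaserMix++ builds on can be sketched quickly. The following toy is illustrative only (bin count, field-of-view bounds, and function names are mine, not the paper's): two scans are partitioned into inclination-angle bins and alternating bins are swapped, so the mixed sample keeps a LiDAR-plausible scene layout.

```python
# Illustrative toy of laser beam mixing (bin count, FOV bounds, and names are
# mine): swap alternating inclination-angle bins between two LiDAR scans so
# the mixed sample keeps a plausible scene layout.
import numpy as np

def inclination(points):
    """Pitch angle of each (x, y, z) point relative to the sensor origin."""
    return np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))

def laser_mix(scan_a, scan_b, n_bins=6, fov=(-0.4363, 0.0873)):
    """Mix two (N, 3+) point clouds by alternating inclination bins (radians)."""
    edges = np.linspace(fov[0], fov[1], n_bins + 1)
    bins_a = np.digitize(inclination(scan_a), edges)
    bins_b = np.digitize(inclination(scan_b), edges)
    # Even-indexed bins come from scan A, odd-indexed bins from scan B.
    return np.concatenate([scan_a[bins_a % 2 == 0], scan_b[bins_b % 2 == 1]])
```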
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Multi-Visual-Inertial System: Analysis, Calibration and Estimation [26.658649118048032]
We study state estimation of multi-visual-inertial systems (MVIS) and develop sensor fusion algorithms.
We are interested in the full calibration of the associated visual-inertial sensors.
arXiv Detail & Related papers (2023-08-10T02:47:36Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- AFT-VO: Asynchronous Fusion Transformers for Multi-View Visual Odometry Estimation [39.351088248776435]
We propose AFT-VO, a novel transformer-based sensor fusion architecture to estimate VO from multiple sensors.
Our framework combines predictions from asynchronous multi-view cameras and accounts for the time discrepancies of measurements coming from different sources.
Our experiments demonstrate that multi-view fusion for VO estimation provides robust and accurate trajectories, outperforming the state of the art in both challenging weather and lighting conditions.
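As a toy sketch of the asynchronous-fusion idea (module names and dimensions are illustrative, not the released AFT-VO architecture): each camera's pose estimate is embedded together with its timestamp offset, so the attention layers can account for when each measurement was taken.

```python
# Toy sketch, not the released AFT-VO architecture: fuse asynchronous
# per-camera VO estimates with a transformer; names/dims are illustrative.
import torch
import torch.nn as nn

class AsyncFusion(nn.Module):
    def __init__(self, pose_dim=6, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Each token is one camera's pose estimate plus its timestamp offset,
        # letting attention account for time discrepancies between sources.
        self.embed = nn.Linear(pose_dim + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, pose_dim)  # fused relative pose

    def forward(self, poses, t_offsets):
        # poses: (batch, n_cams, 6); t_offsets: (batch, n_cams, 1), seconds.
        tokens = self.embed(torch.cat([poses, t_offsets], dim=-1))
        return self.head(self.encoder(tokens).mean(dim=1))  # pool over cameras

fusion = AsyncFusion()
out = fusion(torch.randn(2, 3, 6), torch.randn(2, 3, 1))  # 2 batches, 3 cams
```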
arXiv Detail & Related papers (2022-06-26T19:29:08Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- SensiX++: Bringing MLOPs and Multi-tenant Model Serving to Sensory Edge Devices [69.1412199244903]
We present a multi-tenant runtime for adaptive model execution with integrated MLOps on edge devices, e.g., a camera, a microphone, or IoT sensors.
SensiX++ operates on two fundamental principles: highly modular componentisation to externalise data operations with clear abstractions, and document-centric manifestation for system-wide orchestration.
We report on the overall throughput and quantified benefits of various automation components of SensiX++ and demonstrate its efficacy to significantly reduce operational complexity and lower the effort to deploy, upgrade, reconfigure and serve embedded models on edge devices.
arXiv Detail & Related papers (2021-09-08T22:06:16Z)
- Multi-Objective Bayesian Optimisation and Joint Inversion for Active Sensor Fusion [22.04258832800079]
We propose a framework for multi-objective optimisation and inverse problems given an expensive cost function for allocating new measurements.
This new method is devised to jointly solve multi-linear forward models of 2D-sensor data and 3D-geophysical properties.
We demonstrate the advantages on a specific example of a joint inverse problem, recommending where to place new drill-core measurements given 2D gravity and magnetic sensor data.
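One common way to realise such a measurement-allocation acquisition is scalarised expected improvement over the objectives; the following is a minimal sketch under that assumption (the paper's actual acquisition and joint-inversion coupling are more involved, and all names here are mine).

```python
# Hypothetical sketch: weighted-sum expected improvement over two objectives
# for picking the next measurement location; not the paper's implementation.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """EI for minimisation, given posterior mean/std at candidate locations."""
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def next_measurement(candidates, mu1, s1, best1, mu2, s2, best2, w=0.5):
    """Return the candidate maximising a weighted sum of per-objective EIs."""
    acq = w * expected_improvement(mu1, s1, best1) \
        + (1 - w) * expected_improvement(mu2, s2, best2)
    return candidates[np.argmax(acq)]
```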
arXiv Detail & Related papers (2020-10-12T01:23:41Z)
- MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System [44.76768683036822]
We propose MIMC-VINS, a real-time consistent multi-IMU multi-camera estimator for visual-inertial navigation systems.
Within an efficient multi-state constraint filter, the proposed MIMC-VINS algorithm optimally fuses asynchronous measurements from all sensors.
The proposed MIMC-VINS is validated in both Monte-Carlo simulations and real-world experiments.
arXiv Detail & Related papers (2020-06-28T20:16:08Z)
- siaNMS: Non-Maximum Suppression with Siamese Networks for Multi-Camera 3D Object Detection [65.03384167873564]
A Siamese network is integrated into the pipeline of a well-known 3D object detector, and the resulting cross-camera associations are exploited to enhance the 3D box regression of the object.
The experimental evaluation on the nuScenes dataset shows that the proposed method outperforms traditional NMS approaches.
arXiv Detail & Related papers (2020-02-19T15:32:38Z)
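The association-then-regression step above reads, in spirit, like the following toy (embedding-similarity grouping with a score-weighted box merge; all names and thresholds are mine, and the yaw handling is deliberately naive):

```python
# Toy sketch, not the siaNMS release: detections whose Siamese embeddings
# match across cameras are grouped as one object, and their 3D boxes are
# merged (score-weighted) rather than naively suppressed.
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def siamese_nms(boxes, embeddings, scores, sim_thresh=0.8):
    """boxes: (N, 7) [x, y, z, w, l, h, yaw]; embeddings: (N, D); scores: (N,)."""
    order = np.argsort(-scores)
    used = np.zeros(len(boxes), dtype=bool)
    merged = []
    for i in order:
        if used[i]:
            continue
        # Group everything the embedding similarity says is the same object.
        group = [j for j in order
                 if not used[j] and cosine(embeddings[i], embeddings[j]) > sim_thresh]
        used[group] = True
        # Score-weighted merge refines the regression (naive for the yaw angle).
        w = scores[group] / scores[group].sum()
        merged.append((w[:, None] * boxes[group]).sum(axis=0))
    return np.stack(merged)
```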