Robust Odometry and Mapping for Multi-LiDAR Systems with Online
Extrinsic Calibration
- URL: http://arxiv.org/abs/2010.14294v2
- Date: Wed, 5 May 2021 15:32:33 GMT
- Title: Robust Odometry and Mapping for Multi-LiDAR Systems with Online
Extrinsic Calibration
- Authors: Jianhao Jiao, Haoyang Ye, Yilong Zhu, Ming Liu
- Abstract summary: This paper proposes a system to achieve robust and simultaneous extrinsic calibration, odometry, and mapping for multiple LiDARs.
We validate our approach's performance with extensive experiments on ten sequences (4.60km total length) for the calibration and SLAM.
We demonstrate that the proposed work is a complete, robust, and extensible system for various multi-LiDAR setups.
- Score: 15.946728828122385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining multiple LiDARs enables a robot to maximize its perceptual
awareness of environments and obtain sufficient measurements, which is
promising for simultaneous localization and mapping (SLAM). This paper proposes
a system to achieve robust and simultaneous extrinsic calibration, odometry,
and mapping for multiple LiDARs. Our approach starts with measurement
preprocessing to extract edge and planar features from raw measurements. After
a motion and extrinsic initialization procedure, a sliding window-based
multi-LiDAR odometry runs onboard to estimate poses with online calibration
refinement and convergence identification. We further develop a mapping
algorithm to construct a global map and optimize poses with sufficient features
together with a method to model and reduce data uncertainty. We validate our
approach's performance with extensive experiments on ten sequences (4.60km
total length) for the calibration and SLAM and compare them against the
state-of-the-art. We demonstrate that the proposed work is a complete, robust,
and extensible system for various multi-LiDAR setups. The source code,
datasets, and demonstrations are available at
https://ram-lab.com/file/site/m-loam.
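The measurement-preprocessing step described in the abstract (extracting edge and planar features from raw scans) is commonly implemented with a LOAM-style local-smoothness score: points whose neighborhoods are highly non-smooth become edge candidates, while very smooth ones become planar candidates. The following is a minimal illustrative sketch of that idea, not the authors' M-LOAM implementation; the function name, thresholds, and normalization are assumptions.

```python
# Illustrative sketch (not the authors' M-LOAM code): LOAM-style extraction of
# edge and planar features from one LiDAR scan ring using a local smoothness
# score over neighboring points. Thresholds and names are assumed.
import numpy as np

def extract_features(scan_line, k=5, edge_thresh=0.2, planar_thresh=0.02):
    """scan_line: (N, 3) array of points along one laser ring, ordered by angle."""
    n = len(scan_line)
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        # Sum of differences between the point and its 2k neighbors (LOAM's smoothness value).
        diff = np.sum(scan_line[i - k:i + k + 1] - scan_line[i], axis=0)
        curvature[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan_line[i]) + 1e-9)

    valid = ~np.isnan(curvature)
    edge_idx = np.where(valid & (curvature > edge_thresh))[0]      # sharp points
    planar_idx = np.where(valid & (curvature < planar_thresh))[0]  # flat points
    return scan_line[edge_idx], scan_line[planar_idx]
```

In a full multi-LiDAR system, features of this kind from every sensor would then feed the sliding-window odometry, online extrinsic refinement, and mapping stages described above.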
Related papers
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior representations by enabling faster rendering, scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z) - Multiway Point Cloud Mosaicking with Diffusion and Global Optimization [74.3802812773891]
We introduce a novel framework for multiway point cloud mosaicking (named Wednesday).
At the core of our approach is ODIN, a learned pairwise registration algorithm that identifies overlaps and refines attention scores.
Tested on four diverse, large-scale datasets, our method achieves state-of-the-art pairwise and rotation registration results by a large margin on all benchmarks.
arXiv Detail & Related papers (2024-03-30T17:29:13Z) - Tightly-Coupled LiDAR-Visual SLAM Based on Geometric Features for Mobile
Agents [43.137917788594926]
We propose a tightly-coupled LiDAR-visual SLAM based on geometric features.
The complete line segments detected by the visual subsystem overcome the limitations of the LiDAR subsystem.
Our system achieves more accurate and robust pose estimation compared to current state-of-the-art multi-modal methods.
arXiv Detail & Related papers (2023-07-15T10:06:43Z) - MV-JAR: Masked Voxel Jigsaw and Reconstruction for LiDAR-Based
Self-Supervised Pre-Training [58.07391711548269]
We propose the Masked Voxel Jigsaw and Reconstruction (MV-JAR) method for LiDAR-based self-supervised pre-training.
arXiv Detail & Related papers (2023-03-23T17:59:02Z) - End-To-End Optimization of LiDAR Beam Configuration for 3D Object
Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z) - An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z) - Real-time Multi-Adaptive-Resolution-Surfel 6D LiDAR Odometry using
Continuous-time Trajectory Optimization [33.67478846305404]
We propose a real-time method for 6D LiDAR odometry.
Our approach combines a continuous-time B-Spline trajectory representation with a Gaussian Mixture Model (GMM) formulation to jointly align local multi-resolution surfel maps.
A thorough experimental evaluation shows the performance of our approach on two datasets and during real-robot experiments.
arXiv Detail & Related papers (2021-05-05T12:14:39Z) - MULLS: Versatile LiDAR SLAM via Multi-metric Linear Least Square [4.449835214520727]
MULLS is an efficient, low-drift, and versatile 3D LiDAR SLAM system.
For the front-end, roughly classified feature points are extracted from each frame using dual-threshold ground filtering and principal component analysis (an illustrative PCA-based classification sketch appears after this list).
For the back-end, hierarchical pose graph optimization is conducted among regularly stored history submaps to reduce the drift resulting from dead reckoning.
On the KITTI benchmark, MULLS ranks among the top LiDAR-only SLAM systems with real-time performance.
arXiv Detail & Related papers (2021-02-07T10:42:42Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Statistical Outlier Identification in Multi-robot Visual SLAM using
Expectation Maximization [18.259478519717426]
This paper introduces a novel and distributed method for detecting inter-map loop closure outliers in simultaneous localization and mapping (SLAM).
The proposed algorithm does not rely on a good initialization and can handle more than two maps at a time.
arXiv Detail & Related papers (2020-02-07T06:34:44Z) - TCM-ICP: Transformation Compatibility Measure for Registering Multiple
LIDAR Scans [4.5412347600435465]
We present an algorithm for registering multiple, overlapping LiDAR scans.
In this work, we introduce a geometric metric called Transformation Compatibility Measure (TCM) which aids in choosing the most similar point clouds for registration.
We evaluate the proposed algorithm on four different real-world scenes, and experimental results show that the registration performance of the proposed method is comparable or superior to traditionally used registration methods.
arXiv Detail & Related papers (2020-01-04T21:05:27Z)
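As referenced in the MULLS entry above, front-end feature classification in LiDAR SLAM is often done with principal component analysis of each point's local neighborhood: the eigenvalue spectrum of the neighborhood covariance indicates whether the point lies on a line, a plane, or scattered structure. Below is a minimal sketch under that assumption; the thresholds and function names are illustrative and not taken from the MULLS implementation.

```python
# Illustrative sketch: PCA-based classification of a point's local neighborhood
# into linear / planar / scattered, as commonly used in LiDAR SLAM front-ends.
# Thresholds and names are assumptions, not the MULLS implementation.
import numpy as np

def classify_neighborhood(neighbors, linear_thresh=0.6, planar_thresh=0.6):
    """neighbors: (K, 3) array of points around a query point."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / max(len(neighbors) - 1, 1)
    # Eigenvalues sorted in descending order: l1 >= l2 >= l3.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    linearity = (l1 - l2) / (l1 + 1e-9)   # close to 1 on edges and poles
    planarity = (l2 - l3) / (l1 + 1e-9)   # close to 1 on walls and ground
    if linearity > linear_thresh:
        return "linear"
    if planarity > planar_thresh:
        return "planar"
    return "scattered"
```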