Visual-LiDAR Odometry and Mapping with Monocular Scale Correction and
Visual Bootstrapping
- URL: http://arxiv.org/abs/2304.08978v2
- Date: Sat, 8 Jul 2023 09:07:10 GMT
- Title: Visual-LiDAR Odometry and Mapping with Monocular Scale Correction and
Visual Bootstrapping
- Authors: Hanyu Cai, Ni Ou and Junzheng Wang
- Abstract summary: We present a novel visual-LiDAR odometry and mapping method with low-drift characteristics.
The proposed method is based on two popular approaches, ORB-SLAM and A-LOAM, with monocular scale correction.
Our method significantly outperforms standalone ORB-SLAM2 and A-LOAM.
- Score: 0.7734726150561089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel visual-LiDAR odometry and mapping method with
low-drift characteristics. The proposed method is based on two popular
approaches, ORB-SLAM and A-LOAM, extended with monocular scale correction and
visual-bootstrapped LiDAR pose initialization. The scale
corrector calculates the proportion between the depth of image keypoints
recovered by triangulation and that provided by LiDAR, using an outlier
rejection process to improve accuracy. For LiDAR pose
initialization, the visual odometry approach provides initial guesses of LiDAR
motion for better performance. This methodology is not only applicable to
high-resolution LiDAR but can also adapt to low-resolution LiDAR. To evaluate
the proposed SLAM system's robustness and accuracy, we conducted experiments on
the KITTI Odometry and S3E datasets. Experimental results illustrate that our
method significantly outperforms standalone ORB-SLAM2 and A-LOAM. Furthermore,
regarding the accuracy of visual odometry with scale correction, our method
performs similarly to the stereo-mode ORB-SLAM2.
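The abstract describes the scale corrector as computing the proportion between triangulated keypoint depths and LiDAR-measured depths, with outlier rejection. The paper's exact procedure is not given here; the following is a minimal NumPy sketch of one plausible realization, where the function name `estimate_scale` and the RANSAC-style consensus loop are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_scale(triangulated_depths, lidar_depths,
                   n_iters=100, inlier_thresh=0.05, seed=0):
    """Estimate the monocular scale factor as the ratio of LiDAR depth to
    triangulated keypoint depth, with a simple RANSAC-style consensus loop
    to reject outlier correspondences (illustrative sketch)."""
    tri = np.asarray(triangulated_depths, dtype=float)
    lid = np.asarray(lidar_depths, dtype=float)
    ratios = lid / tri  # one scale hypothesis per keypoint correspondence
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(ratios), dtype=bool)
    for _ in range(n_iters):
        # sample one correspondence as the candidate scale
        candidate = ratios[rng.integers(len(ratios))]
        # inliers: ratios within a relative deviation of the candidate
        inliers = np.abs(ratios - candidate) / candidate < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final scale: mean over the largest consensus set
    return float(ratios[best_inliers].mean())
```

Once estimated, such a factor would multiply the monocular trajectory and map points to bring them to metric scale before fusion with the LiDAR odometry.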
Related papers
- LiVisSfM: Accurate and Robust Structure-from-Motion with LiDAR and Visual Cues [7.911698650147302]
LiVisSfM is an SfM-based reconstruction system that fully combines LiDAR and visual cues.
We propose a LiDAR-visual SfM method that innovatively registers LiDAR frames to a LiDAR voxel map with a Point-to-Gaussian residual metric.
arXiv Detail & Related papers (2024-10-29T16:41:56Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- Superresolving optical ruler based on spatial mode demultiplexing for systems evolving under Brownian motion [0.0]
We study the impact of Brownian motion of the center of the system of two weak incoherent sources of arbitrary relative brightness on adaptive SPADE measurement precision limits.
We find that Rayleigh's curse is present in such a scenario; however, SPADE measurement can outperform perfect direct imaging.
arXiv Detail & Related papers (2024-07-18T17:23:14Z)
- OSPC: Online Sequential Photometric Calibration [0.0]
Photometric calibration is essential to many computer vision applications.
We propose a novel method that solves for photometric parameters using a sequential estimation approach.
arXiv Detail & Related papers (2023-05-28T09:44:58Z)
- A Tightly Coupled LiDAR-IMU Odometry through Iterated Point-Level Undistortion [10.399676936364527]
Scan undistortion is a key module for LiDAR odometry in highly dynamic environments.
We propose an optimization-based, tightly coupled LiDAR-IMU odometry addressing iterated point-level undistortion.
arXiv Detail & Related papers (2022-09-25T15:48:42Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency, and robustness of conventional monocular SLAM systems.
Our approach runs up to 10x faster with comparable accuracy against the state of the art on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
- D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [57.5549733585324]
D3VO is a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation.
We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision.
We model the photometric uncertainties of pixels on the input images, which improves the depth estimation accuracy.
arXiv Detail & Related papers (2020-03-02T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.