SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep Reconstruction
- URL: http://arxiv.org/abs/2312.16800v1
- Date: Thu, 28 Dec 2023 03:06:49 GMT
- Title: SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep
Reconstruction
- Authors: Zikang Yuan, Jie Deng, Ruiye Ming, Fengtian Lang and Xin Yang
- Abstract summary: SR-LIVO is an advanced and novel LIV-SLAM system employing sweep reconstruction to align reconstructed sweeps with image timestamps.
We have released our source code to contribute to the community development in this field.
- Score: 5.479262483638832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing LiDAR-inertial-visual odometry and mapping (LIV-SLAM) systems mainly
utilize the LiDAR-inertial odometry (LIO) module for structure reconstruction
and the visual-inertial odometry (VIO) module for color rendering. However, the
accuracy of VIO is often compromised by photometric changes, weak textures and
motion blur, unlike the more robust LIO. This paper introduces SR-LIVO, an
advanced and novel LIV-SLAM system employing sweep reconstruction to align
reconstructed sweeps with image timestamps. This allows the LIO module to
accurately determine states at all imaging moments, enhancing pose accuracy and
processing efficiency. Experimental results on two public datasets demonstrate
that: 1) our SR-LIVO outperforms existing state-of-the-art LIV-SLAM systems in
both pose accuracy and time efficiency; 2) our LIO-based pose estimation proves
more accurate than VIO-based estimation in several mainstream LIV-SLAM systems
(including ours). We have released our source code to contribute to the
community development in this field.
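The core idea of sweep reconstruction, as described in the abstract, is to re-split the continuous LiDAR point stream so that each reconstructed sweep ends exactly at an image timestamp, letting the LIO module solve for the state at every imaging moment. A minimal sketch of that re-splitting step is below; the function name, point format, and timestamp handling are illustrative assumptions, not SR-LIVO's actual implementation.

```python
from bisect import bisect_left

def reconstruct_sweeps(points, image_times):
    """Re-split a time-sorted LiDAR point stream so each reconstructed
    sweep ends at a camera timestamp rather than at the sensor's native
    sweep boundary. Hypothetical sketch, not SR-LIVO's real code.

    points: list of (t, x, y, z) tuples, sorted by timestamp t
    image_times: sorted list of camera timestamps
    Returns: list of (image_time, sweep_points) pairs.
    """
    timestamps = [p[0] for p in points]
    sweeps = []
    start = 0
    for t_img in image_times:
        # Every point observed since the previous image moment, up to
        # this image's timestamp, forms one reconstructed sweep.
        end = bisect_left(timestamps, t_img, lo=start)
        sweeps.append((t_img, points[start:end]))
        start = end
    return sweeps
```

With sweeps aligned this way, the LIO state estimated at each sweep's end time coincides with an imaging moment, so no VIO-based interpolation of the pose is needed for color rendering.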
Related papers
- LiVisSfM: Accurate and Robust Structure-from-Motion with LiDAR and Visual Cues [7.911698650147302]
LiVisSfM is an SfM-based reconstruction system that fully combines LiDAR and visual cues.
We propose a LiDAR-visual SfM method that innovatively registers LiDAR frames to a LiDAR voxel map using Point-to-Gaussian residual metrics.
arXiv Detail & Related papers (2024-10-29T16:41:56Z)
- ECMamba: Consolidating Selective State Space Model with Retinex Guidance for Efficient Multiple Exposure Correction [48.77198487543991]
We introduce a novel framework based on Mamba for Exposure Correction (ECMamba) with dual pathways, each dedicated to the restoration of reflectance and illumination map.
Specifically, we build on Retinex theory and train a Retinex estimator capable of mapping inputs into two intermediary spaces.
We develop a novel 2D Selective State-space layer guided by Retinex information (Retinex-SS2D) as the core operator of ECMM.
arXiv Detail & Related papers (2024-10-28T21:02:46Z)
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry [28.606325312582218]
We propose FAST-LIVO2, a fast, direct LiDAR-inertial-visual odometry framework to achieve accurate and robust state estimation in SLAM tasks.
FAST-LIVO2 fuses the IMU, LiDAR and image measurements efficiently through a sequential update strategy.
We show three applications of FAST-LIVO2, including real-time onboard navigation, airborne mapping, and 3D model rendering.
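The sequential update strategy mentioned above processes each sensor's measurement in turn against the same state, rather than stacking all residuals into one large system. A minimal linear-Kalman sketch of that idea follows; it is an illustrative toy, not FAST-LIVO2's actual error-state iterated Kalman filter.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Toy sequential Kalman update: each measurement (z, H, R) refines
    the same state estimate in turn. Assumes linear measurement models;
    illustrative only, not FAST-LIVO2's real filter."""
    for z, H, R in measurements:
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # refined state
        P = (np.eye(len(x)) - K @ H) @ P    # refined covariance
    return x, P
```

Processing measurements one by one keeps each matrix inversion small, which is one reason sequential updates are attractive for fusing heterogeneous LiDAR and image observations efficiently.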
arXiv Detail & Related papers (2024-08-26T06:01:54Z)
- Simultaneous Map and Object Reconstruction [66.66729715211642]
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR.
We take inspiration from recent novel view synthesis methods and pose the reconstruction problem as a global optimization.
By careful modeling of continuous-time motion, our reconstructions can compensate for the rolling shutter effects of rotating LiDAR sensors.
arXiv Detail & Related papers (2024-06-19T23:53:31Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RG
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- Visual-LiDAR Odometry and Mapping with Monocular Scale Correction and Visual Bootstrapping [0.7734726150561089]
We present a novel visual-LiDAR odometry and mapping method with low-drift characteristics.
The proposed method is based on two popular approaches, ORB-SLAM and A-LOAM, with monocular scale correction.
Our method significantly outperforms standalone ORB-SLAM2 and A-LOAM.
arXiv Detail & Related papers (2023-04-18T13:20:33Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without blur kernel prior incorporation for blind SR.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves the state-of-the-art against existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- R$^3$LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator [5.972044427549262]
Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications.
This work proposes a novel LiDAR-inertial-visual fusion framework termed R$^3$LIVE++ to achieve robust and accurate state estimation while simultaneously reconstructing the radiance map on the fly.
arXiv Detail & Related papers (2022-09-08T09:26:20Z)
- R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package [7.7016529229597035]
R3LIVE takes advantage of measurement of LiDAR, inertial, and visual sensors to achieve robust and accurate state estimation.
R3LIVE is a versatile and well-colored system toward various possible applications.
We open R3LIVE, including all our codes, software utilities, and the mechanical design of our device.
arXiv Detail & Related papers (2021-09-10T22:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.