LiVisSfM: Accurate and Robust Structure-from-Motion with LiDAR and Visual Cues
- URL: http://arxiv.org/abs/2410.22213v1
- Date: Tue, 29 Oct 2024 16:41:56 GMT
- Title: LiVisSfM: Accurate and Robust Structure-from-Motion with LiDAR and Visual Cues
- Authors: Hanqing Jiang, Liyang Zhou, Zhuang Zhang, Yihao Yu, Guofeng Zhang,
- Abstract summary: LiVisSfM is an SfM-based reconstruction system that fully combines LiDAR and visual cues.
We propose a LiDAR-visual SfM method which innovatively registers LiDAR frames to a LiDAR voxel map using Point-to-Gaussian residual metrics.
- Score: 7.911698650147302
- License:
- Abstract: This paper presents an accurate and robust Structure-from-Motion (SfM) pipeline named LiVisSfM, an SfM-based reconstruction system that fully combines LiDAR and visual cues. Unlike most existing LiDAR-inertial odometry (LIO) and LiDAR-inertial-visual odometry (LIVO) methods, which rely heavily on LiDAR registration coupled with an Inertial Measurement Unit (IMU), we propose a LiDAR-visual SfM method that innovatively registers LiDAR frames to a LiDAR voxel map using Point-to-Gaussian residual metrics, combined with LiDAR-visual BA and explicit loop closure in a bundle-optimization manner, to achieve accurate and robust LiDAR pose estimation without depending on an IMU. Besides, we propose an incremental voxel updating strategy for efficient voxel map updating during LiDAR frame registration and LiDAR-visual BA optimization. Experiments demonstrate the superior effectiveness of our LiVisSfM framework over state-of-the-art LIO and LIVO works in more accurate and robust LiDAR pose recovery and dense point cloud reconstruction on both the public KITTI benchmark and a variety of self-captured datasets.
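To make the registration idea concrete, below is a minimal, assumption-based sketch of a Point-to-Gaussian residual computed against a voxel map of Gaussians, together with an incremental voxel update. All names (`VoxelGaussian`, `point_to_gaussian_residual`, the voxel size, the minimum point count) are hypothetical and inferred only from the abstract; the authors' actual system additionally couples this residual with visual BA and explicit loop closure.

```python
# Minimal sketch (assumption-based) of a Point-to-Gaussian residual against a
# voxel map of Gaussians, plus incremental voxel statistics updates.
# Names and details are illustrative, NOT taken from the LiVisSfM implementation.
import numpy as np


class VoxelGaussian:
    """One voxel of the map, storing running Gaussian statistics of its points."""

    def __init__(self):
        self.n = 0                      # number of points accumulated
        self.sum = np.zeros(3)          # sum of points
        self.outer = np.zeros((3, 3))   # sum of outer products p p^T

    def insert(self, p):
        """Incrementally absorb one point (sufficient-statistics update)."""
        self.n += 1
        self.sum += p
        self.outer += np.outer(p, p)

    @property
    def mean(self):
        return self.sum / self.n

    @property
    def cov(self):
        mu = self.mean
        cov = self.outer / self.n - np.outer(mu, mu)
        return cov + 1e-6 * np.eye(3)   # regularize near-degenerate voxels


def voxel_key(p, size=0.5):
    return tuple(np.floor(p / size).astype(int))


def point_to_gaussian_residual(p_world, voxel):
    """Mahalanobis-style residual of a transformed LiDAR point w.r.t. the
    Gaussian stored in its voxel (one plausible form of the metric)."""
    d = p_world - voxel.mean
    return float(d @ np.linalg.inv(voxel.cov) @ d)


def frame_registration_cost(points_lidar, R, t, voxel_map, size=0.5):
    """Sum of Point-to-Gaussian residuals for a candidate pose (R, t).
    A real system would minimize this (e.g., Gauss-Newton) over the pose."""
    cost = 0.0
    for p in points_lidar:
        p_w = R @ p + t
        v = voxel_map.get(voxel_key(p_w, size))
        if v is not None and v.n >= 5:   # need enough points for a stable Gaussian
            cost += point_to_gaussian_residual(p_w, v)
    return cost


def update_voxel_map(points_world, voxel_map, size=0.5):
    """Incremental voxel updating: only the voxels touched by new points change."""
    for p in points_world:
        voxel_map.setdefault(voxel_key(p, size), VoxelGaussian()).insert(p)
```

In the real pipeline such a cost would be minimized over the frame pose and jointly refined in LiDAR-visual BA; the sketch only conveys the geometry of the residual and why an incremental strategy needs to update only the touched voxels.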
Related papers
- LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting [50.808933338389686]
LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving.
We present LiDAR-GS, the first LiDAR Gaussian Splatting method, for real-time high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes.
Our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large scene datasets.
arXiv Detail & Related papers (2024-10-07T15:07:56Z)
- FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry [28.606325312582218]
We propose FAST-LIVO2, a fast, direct LiDAR-inertial-visual odometry framework to achieve accurate and robust state estimation in SLAM tasks.
FAST-LIVO2 fuses the IMU, LiDAR and image measurements efficiently through a sequential update strategy (see the sketch below).
We show three applications of FAST-LIVO2, including real-time onboard navigation, airborne mapping, and 3D model rendering.
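A "sequential update strategy" here means applying each sensor's measurement update one after another on the same filter state rather than stacking all measurements into a single large update. The following is a generic, hedged sketch of that idea in a Kalman-style filter; it is not FAST-LIVO2's actual implementation, and the state layout, models and noise terms are placeholders.

```python
# Generic sketch of sequential measurement updates in a Kalman-style filter.
# This is NOT the FAST-LIVO2 code; dimensions and models are placeholders.
import numpy as np


def kalman_update(x, P, z, h, H, R_noise):
    """One standard EKF measurement update: (x, P) -> corrected (x, P)."""
    y = z - h(x)                                   # innovation
    S = H @ P @ H.T + R_noise                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


def sequential_fuse(x, P, measurements):
    """Apply LiDAR and visual updates one after another on the same state,
    so each modality refines the estimate produced by the previous one."""
    for z, h, H, R_noise in measurements:          # e.g. [(z_lidar, ...), (z_cam, ...)]
        x, P = kalman_update(x, P, z, h, H, R_noise)
    return x, P
```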
arXiv Detail & Related papers (2024-08-26T06:01:54Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep Reconstruction [5.479262483638832]
SR-LIVO is an advanced and novel LIV-SLAM system employing sweep reconstruction to align reconstructed sweeps with image timestamps.
We have released our source code to contribute to the community development in this field.
arXiv Detail & Related papers (2023-12-28T03:06:49Z)
- Traj-LO: In Defense of LiDAR-Only Odometry Using an Effective Continuous-Time Trajectory [20.452961476175812]
This letter explores the capability of LiDAR-only odometry through a continuous-time perspective.
Our proposed Traj-LO approach tries to recover the spatially and temporally consistent movement of the LiDAR.
Our implementation is open-sourced on GitHub.
arXiv Detail & Related papers (2023-09-25T03:05:06Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Visual-LiDAR Odometry and Mapping with Monocular Scale Correction and Visual Bootstrapping [0.7734726150561089]
We present a novel visual-LiDAR odometry and mapping method with low-drift characteristics.
The proposed method is based on two popular approaches, ORB-SLAM and A-LOAM, with monocular scale correction.
Our method significantly outperforms standalone ORB-SLAM2 and A-LOAM.
arXiv Detail & Related papers (2023-04-18T13:20:33Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Learning Moving-Object Tracking with FMCW LiDAR [53.05551269151209]
We propose a learning-based moving-object tracking method utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave (FMCW) LiDAR.
Given the labels, we propose a contrastive learning framework, which pulls together the features from the same instance in embedding space and pushes apart the features from different instances to improve the tracking quality.
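The "pull together / push apart" objective described above is the standard contrastive-learning recipe. A minimal sketch of an InfoNCE-style loss over per-instance embeddings is given below as an illustration of the general idea; it is not the exact loss used in that paper, and the function name and temperature value are assumptions.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over instance embeddings.
# Illustrates the generic "pull same instance together, push others apart" idea;
# it is not the exact loss of the FMCW-LiDAR tracking paper.
import numpy as np


def contrastive_loss(emb_a, emb_b, temperature=0.1):
    """emb_a[i] and emb_b[i] are two embeddings of the SAME instance i
    (e.g. from consecutive scans); all j != i act as negatives."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # cosine similarities, scaled
    # Softmax cross-entropy with the matching index as the positive class.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))
```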
arXiv Detail & Related papers (2022-03-02T09:11:36Z)
- End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z)