R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual
tightly-coupled state Estimation and mapping package
- URL: http://arxiv.org/abs/2109.07982v1
- Date: Fri, 10 Sep 2021 22:43:59 GMT
- Title: R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual
tightly-coupled state Estimation and mapping package
- Authors: Jiarong Lin and Fu Zhang
- Abstract summary: R3LIVE takes advantage of measurements from LiDAR, inertial, and visual sensors to achieve robust and accurate state estimation.
R3LIVE is a versatile and well-engineered system toward various possible applications.
We open source R3LIVE, including all our codes, software utilities, and the mechanical design of our device.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this letter, we propose a novel LiDAR-Inertial-Visual sensor fusion
framework termed R3LIVE, which takes advantage of measurements from LiDAR,
inertial, and visual sensors to achieve robust and accurate state estimation.
R3LIVE consists of two subsystems: a LiDAR-inertial odometry (LIO) and a
visual-inertial odometry (VIO). The LIO subsystem (FAST-LIO) uses the
measurements from the LiDAR and inertial sensors to build the geometric
structure of the global map (i.e., the positions of 3D points). The VIO
subsystem utilizes the data from the visual-inertial sensors to render the
map's texture (i.e., the color of 3D points). More specifically, the VIO
subsystem fuses the visual data directly and effectively by minimizing the
frame-to-map photometric error (sketched below). R3LIVE is developed based on
our previous work R2LIVE, with careful architecture design and implementation.
Experimental results show that the resultant system achieves greater robustness
and higher accuracy in state estimation than current counterparts (see our
attached video).
R3LIVE is a versatile and well-engineered system toward various possible
applications: it can not only serve as a SLAM system for real-time robotic
applications, but can also reconstruct dense, precise, RGB-colored 3D maps
for applications like surveying and mapping. Moreover, to make R3LIVE more
extensible, we develop a series of offline utilities for reconstructing and
texturing meshes, which further narrows the gap between R3LIVE and various
3D applications such as simulators and video games (see our demo videos).
To share our findings and contribute to the community, we open source
R3LIVE on our GitHub, including all of our codes, software utilities, and the
mechanical design of our device.
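The frame-to-map photometric error mentioned in the abstract can be illustrated with a minimal sketch: each colored point of the global map is projected into the current image with the estimated camera pose, and the difference between the observed pixel color and the color stored for that map point forms the residual that the VIO minimizes. The code below is a simplified illustration assuming a pinhole camera model and grayscale intensities; the function and variable names (project, photometric_residuals, K, T_cw) are illustrative, not R3LIVE's actual API.

```python
import numpy as np

def project(K, T_cw, p_w):
    """Project a world point p_w into the image using the world-to-camera pose T_cw (4x4)."""
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]   # point in the camera frame
    uv = K @ (p_c / p_c[2])                  # pinhole projection
    return uv[:2], p_c[2]

def photometric_residuals(image, K, T_cw, map_points, map_colors):
    """Frame-to-map photometric residuals: observed intensity minus the map point's stored color."""
    residuals = []
    for p_w, c_map in zip(map_points, map_colors):
        (u, v), depth = project(K, T_cw, p_w)
        if depth <= 0:                        # skip points behind the camera
            continue
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
            residuals.append(image[vi, ui] - c_map)   # nearest-neighbor lookup for brevity
    return np.asarray(residuals)

# The camera pose would then be refined by iteratively minimizing
# 0.5 * np.sum(photometric_residuals(...) ** 2) over T_cw, e.g. with a
# Gauss-Newton step on an error-state parameterization coupled with the IMU.
```

In the full system the stored colors of the map points are themselves updated from the incoming images, so the same residual ties state estimation to map texturing.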
Related papers
- FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry [28.606325312582218]
We propose FAST-LIVO2, a fast, direct LiDAR-inertial-visual odometry framework to achieve accurate and robust state estimation in SLAM tasks.
FAST-LIVO2 fuses the IMU, LiDAR and image measurements efficiently through a sequential update strategy (a generic sketch of such an update follows this entry).
We show three applications of FAST-LIVO2, including real-time onboard navigation, airborne mapping, and 3D model rendering.
arXiv Detail & Related papers (2024-08-26T06:01:54Z)
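As a rough illustration of what a sequential update strategy typically means (not FAST-LIVO2's actual implementation), the sketch below corrects a single filter state with one sensor's measurement at a time, so the visual correction starts from the state already corrected by the LiDAR measurement instead of stacking both into one joint update. All names, models, and measurement tuples here are placeholders.

```python
import numpy as np

def kalman_update(x, P, z, h, H, R):
    """One EKF correction of state x / covariance P with measurement z, model h, Jacobian H, noise R."""
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def sequential_fusion(x, P, lidar_meas, visual_meas):
    """Apply the LiDAR correction first, then the visual correction on the already-updated state."""
    x, P = kalman_update(x, P, *lidar_meas)   # lidar_meas = (z, h, H, R)
    x, P = kalman_update(x, P, *visual_meas)  # visual_meas = (z, h, H, R)
    return x, P
```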
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering approaches by enabling faster scale awareness and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Volumetric Environment Representation for Vision-Language Navigation [66.04379819772764]
Vision-language navigation (VLN) requires an agent to navigate through a 3D environment based on visual observations and natural language instructions.
We introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
VER predicts 3D occupancy, 3D room layout, and 3D bounding boxes jointly.
arXiv Detail & Related papers (2024-03-21T06:14:46Z)
- SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep Reconstruction [5.479262483638832]
SR-LIVO is an advanced and novel LIV-SLAM system employing sweep reconstruction to align reconstructed sweeps with image timestamps.
We have released our source code to contribute to the community development in this field.
arXiv Detail & Related papers (2023-12-28T03:06:49Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision (see the sketch after this entry).
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
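The shortest-eigenvector idea from the GIR entry above is small enough to sketch directly: a 3D Gaussian's covariance describes an ellipsoid, and the eigenvector associated with the smallest eigenvalue points along its flattest direction, which serves as the normal. The snippet below only illustrates that step; the simple viewer-facing flip stands in for the paper's directional masking scheme, and the covariance is an assumed toy example.

```python
import numpy as np

def gaussian_normal(cov, view_dir=None):
    """Normal of a 3D Gaussian: eigenvector of its covariance with the smallest eigenvalue."""
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # column for the smallest eigenvalue
    if view_dir is not None and normal @ view_dir > 0:
        normal = -normal                       # orient the normal toward the viewer
    return normal / np.linalg.norm(normal)

# Toy example: a disc-like Gaussian flattened along z has a normal along the z axis.
cov = np.diag([1.0, 1.0, 1e-4])
print(gaussian_normal(cov, view_dir=np.array([0.0, 0.0, -1.0])))   # -> [0. 0. 1.]
```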
- Tightly-Coupled LiDAR-Visual SLAM Based on Geometric Features for Mobile Agents [43.137917788594926]
We propose a tightly-coupled LiDAR-visual SLAM based on geometric features.
The entire line segment detected by the visual subsystem overcomes the limitation of the LiDAR subsystem.
Our system achieves more accurate and robust pose estimation compared to current state-of-the-art multi-modal methods.
arXiv Detail & Related papers (2023-07-15T10:06:43Z)
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- BS3D: Building-scale 3D Reconstruction from RGB-D Images [25.604775584883413]
We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera.
Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms.
arXiv Detail & Related papers (2023-01-03T11:46:14Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- R$^3$LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator [5.972044427549262]
Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications.
This work proposes a novel LiDAR-inertial-visual fusion framework termed R$^3$LIVE++ to achieve robust and accurate state estimation while simultaneously reconstructing the radiance map on the fly.
arXiv Detail & Related papers (2022-09-08T09:26:20Z)