R$^3$LIVE++: A Robust, Real-time, Radiance reconstruction package with a
tightly-coupled LiDAR-Inertial-Visual state Estimator
- URL: http://arxiv.org/abs/2209.03666v1
- Date: Thu, 8 Sep 2022 09:26:20 GMT
- Title: R$^3$LIVE++: A Robust, Real-time, Radiance reconstruction package with a
tightly-coupled LiDAR-Inertial-Visual state Estimator
- Authors: Jiarong Lin and Fu Zhang
- Abstract summary: Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications.
This work proposes a novel LiDAR-inertial-visual fusion framework termed R$^3$LIVE++ to achieve robust and accurate state estimation while simultaneously reconstructing the radiance map on the fly.
- Score: 5.972044427549262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simultaneous localization and mapping (SLAM) is crucial for autonomous
robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and
AR/VR applications. This work proposes a novel LiDAR-inertial-visual fusion
framework termed R$^3$LIVE++ to achieve robust and accurate state estimation
while simultaneously reconstructing the radiance map on the fly. R$^3$LIVE++
consists of a LiDAR-inertial odometry (LIO) and a visual-inertial odometry
(VIO), both running in real-time. The LIO subsystem utilizes the measurements
from a LiDAR for reconstructing the geometric structure (i.e., the positions of
3D points), while the VIO subsystem simultaneously recovers the radiance
information of the geometric structure from the input images. R$^3$LIVE++ is
developed based on R$^3$LIVE and further improves the accuracy in localization
and mapping by accounting for the camera photometric calibration (e.g.,
non-linear response function and lens vignetting) and the online estimation of
camera exposure time. We conduct more extensive experiments on both public and
our private datasets to compare our proposed system against other
state-of-the-art SLAM systems. Quantitative and qualitative results show that
our proposed system has significant improvements over others in both accuracy
and robustness. In addition, to demonstrate the extendability of our work, we
developed several applications based on our reconstructed radiance maps, such
as high dynamic range (HDR) imaging, virtual environment exploration, and 3D
video gaming. Lastly, to share our findings and make contributions to the
community, we make our codes, hardware design, and dataset publicly available
on our GitHub: github.com/hku-mars/r3live
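
As background on the photometric terms above, a minimal sketch of the standard
image-formation model used in photometric calibration (the notation here is
illustrative and not necessarily the paper's own) is

$$ I(\mathbf{p}) = g\big( t_{\exp}\, V(\mathbf{p})\, L(\mathbf{p}) \big)
\quad\Longrightarrow\quad
L(\mathbf{p}) = \frac{g^{-1}\big(I(\mathbf{p})\big)}{t_{\exp}\, V(\mathbf{p})}, $$

where $I(\mathbf{p})$ is the observed intensity at pixel $\mathbf{p}$, $g$ is the
camera's non-linear response function, $V(\mathbf{p})$ is the lens vignetting
factor, $t_{\exp}$ is the (online-estimated) exposure time, and $L(\mathbf{p})$
is the scene radiance stored in the map. The Python sketch below illustrates
radiance recovery under this model, assuming $g^{-1}$ and $V$ come from an
offline calibration; the names and values are hypothetical and not taken from
the R$^3$LIVE++ codebase:

```python
import numpy as np

def recover_radiance(intensity, inv_response, exposure_time, vignetting):
    """Recover scene radiance L(p) from observed intensity I(p) (illustrative)."""
    irradiance = inv_response(intensity)              # undo non-linear response g
    return irradiance / (exposure_time * vignetting)  # undo exposure time and vignetting V(p)

# Toy usage with a gamma curve standing in for the calibrated inverse response:
rng = np.random.default_rng(0)
img = np.clip(rng.random((4, 4)), 1e-3, 1.0)          # fake 4x4 image in [0, 1]
inv_gamma = lambda x: np.power(x, 2.2)                # placeholder g^{-1}
vignette = np.full((4, 4), 0.9)                       # placeholder V(p)
print(recover_radiance(img, inv_gamma, exposure_time=0.01, vignetting=vignette))
```

Because auto-exposure changes $t_{\exp}$ from frame to frame, estimating it
online keeps the recovered radiance consistent across frames, which is what
lets the map store radiance rather than raw, exposure-dependent pixel colors.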
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry [28.606325312582218]
We propose FAST-LIVO2, a fast, direct LiDAR-inertial-visual odometry framework to achieve accurate and robust state estimation in SLAM tasks.
FAST-LIVO2 fuses the IMU, LiDAR and image measurements efficiently through a sequential update strategy.
We show three applications of FAST-LIVO2, including real-time onboard navigation, airborne mapping, and 3D model rendering.
arXiv Detail & Related papers (2024-08-26T06:01:54Z) - MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z) - SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep
Reconstruction [5.479262483638832]
SR-LIVO is an advanced and novel LIV-SLAM system employing sweep reconstruction to align reconstructed sweeps with image timestamps.
We have released our source code to contribute to the community development in this field.
arXiv Detail & Related papers (2023-12-28T03:06:49Z) - SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and
Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z) - BS3D: Building-scale 3D Reconstruction from RGB-D Images [25.604775584883413]
We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera.
Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms.
arXiv Detail & Related papers (2023-01-03T11:46:14Z) - Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object
Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z) - Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z) - R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual
tightly-coupled state Estimation and mapping package [7.7016529229597035]
R3LIVE takes advantage of measurements from LiDAR, inertial, and visual sensors to achieve robust and accurate state estimation.
R3LIVE is a versatile and well-engineered system aimed at various possible applications.
We open-source R3LIVE, including all our codes, software utilities, and the mechanical design of our device.
arXiv Detail & Related papers (2021-09-10T22:43:59Z) - It's All Around You: Range-Guided Cylindrical Network for 3D Object
Detection [4.518012967046983]
This work presents a novel approach for analyzing 3D data produced by 360-degree depth scanners.
We introduce a novel notion of range-guided convolutions, adapting the receptive field by distance from the ego vehicle and the object's scale.
Our network demonstrates powerful results on the nuScenes challenge, comparable to current state-of-the-art architectures.
arXiv Detail & Related papers (2020-12-05T21:02:18Z) - OmniSLAM: Omnidirectional Localization and Dense Mapping for
Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and light-weighted deep neural networks for the omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.