Structure PLP-SLAM: Efficient Sparse Mapping and Localization using
Point, Line and Plane for Monocular, RGB-D and Stereo Cameras
- URL: http://arxiv.org/abs/2207.06058v1
- Date: Wed, 13 Jul 2022 09:05:35 GMT
- Authors: Fangwen Shu, Jiaxuan Wang, Alain Pagani, Didier Stricker
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper demonstrates a visual SLAM system that utilizes a point
and line cloud for robust camera localization, together with an embedded
piece-wise planar reconstruction (PPR) module that provides a structural map.
Building a scale-consistent map in parallel with tracking, for instance when
employing a single camera, brings the challenge of reconstructing geometric
primitives with scale ambiguity, and further complicates the graph optimization
of bundle adjustment (BA). We address these problems by proposing several
run-time optimizations on the reconstructed lines and planes. The
system is then extended with depth and stereo sensors based on the design of
the monocular framework. The results show that our proposed SLAM tightly
incorporates semantic features to boost both frontend tracking and backend
optimization. We evaluate our system exhaustively on various datasets, and
open-source our code for the community
(https://github.com/PeterFWS/Structure-PLP-SLAM).
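The plane primitives described above typically enter the backend through point-to-plane residuals. As a rough, hypothetical sketch (not the authors' implementation; function names are illustrative), fitting a plane to a cluster of map points and computing such residuals can look like:

```python
import numpy as np

def fit_plane(points):
    # Least-squares plane fit via SVD: returns a unit normal n and an
    # offset d such that n @ p + d = 0 for points p on the plane.
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value of the
    # centered cloud is the direction of least variance, i.e. the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d

def point_plane_residuals(points, n, d):
    # Signed point-to-plane distances; in a planar SLAM backend these
    # can serve as regularization residuals on points assigned to a plane.
    return points @ n + d
```

A backend would minimize such residuals jointly with the usual reprojection error, pulling co-planar map points onto a consistent plane even under monocular scale ambiguity.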
Related papers
- CP-SLAM: Collaborative Neural Point-based SLAM System
This paper presents a collaborative implicit neural localization and mapping (SLAM) system with RGB-D image sequences.
In order to enable all these modules in a unified framework, we propose a novel neural point based 3D scene representation.
A distributed-to-centralized learning strategy is proposed for the collaborative implicit SLAM to improve consistency and cooperation.
arXiv Detail & Related papers (2023-11-14T09:17:15Z)
- Tightly-Coupled LiDAR-Visual SLAM Based on Geometric Features for Mobile Agents
We propose a tightly-coupled LiDAR-visual SLAM based on geometric features.
Entire line segments detected by the visual subsystem overcome the limitations of the LiDAR subsystem.
Our system achieves more accurate and robust pose estimation compared to current state-of-the-art multi-modal methods.
arXiv Detail & Related papers (2023-07-15T10:06:43Z)
- DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments.
Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, in a unified reference frame.
Our implementation's research-based Python API is publicly available on GitHub.
arXiv Detail & Related papers (2023-06-02T19:52:13Z)
- VIP-SLAM: An Efficient Tightly-Coupled RGB-D Visual Inertial Planar SLAM
We propose a tightly-coupled SLAM system fused with RGB, Depth, IMU and structured plane information.
We use homography constraints to eliminate the parameters of numerous plane points in the optimization.
The global bundle adjustment is nearly 2 times faster than the sparse-point-based SLAM algorithm.
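The homography constraint referenced here is the standard plane-induced homography from two-view geometry. A minimal sketch (illustrative, not the paper's code): for a plane n^T X = d in the first camera's frame and a relative pose (R, t), every point on the plane satisfies X2 = H X, so plane points are captured by four plane parameters instead of three parameters per point.

```python
import numpy as np

def plane_induced_homography(R, t, n, d):
    # For a plane n^T X = d in the first camera frame, a point X on the
    # plane maps to the second frame as X2 = R X + t = H X, with
    # H = R + (t n^T) / d. This eliminates per-point parameters from
    # the optimization, which is how a plane constraint can speed up BA.
    return R + np.outer(t, n) / d
```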
arXiv Detail & Related papers (2022-07-04T01:45:24Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
- DSP-SLAM: Object Oriented SLAM with Deep Shape Priors
We propose an object-oriented SLAM system that builds a rich and accurate joint map of dense 3D models for foreground objects.
DSP-SLAM takes as input the 3D point cloud reconstructed by a feature-based SLAM system.
Our evaluation shows improvements in object pose and shape reconstruction with respect to recent deep prior-based reconstruction methods.
arXiv Detail & Related papers (2021-08-21T10:00:12Z)
- Visual SLAM with Graph-Cut Optimized Multi-Plane Reconstruction
This paper presents a semantic planar SLAM system that improves pose estimation and mapping using cues from an instance planar segmentation network.
While the mainstream approaches are using RGB-D sensors, employing a monocular camera with such a system still faces challenges such as robust data association and precise geometric model fitting.
arXiv Detail & Related papers (2021-08-09T18:16:08Z)
- Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications
A novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views.
We demonstrate the high performance and robustness of the proposed framework compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-24T08:16:32Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
- Augmented Parallel-Pyramid Net for Attention Guided Pose-Estimation
This paper proposes an augmented parallel-pyramid net with attention partial module and differentiable auto-data augmentation.
We define a new pose search space where the sequences of data augmentations are formulated as a trainable and operational CNN component.
Notably, our method achieves top-1 accuracy on the challenging COCO keypoint benchmark and state-of-the-art results on the MPII dataset.
arXiv Detail & Related papers (2020-03-17T03:52:17Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.