PLD-SLAM: A Real-Time Visual SLAM Using Points and Line Segments in
Dynamic Scenes
- URL: http://arxiv.org/abs/2207.10916v1
- Date: Fri, 22 Jul 2022 07:40:00 GMT
- Title: PLD-SLAM: A Real-Time Visual SLAM Using Points and Line Segments in
Dynamic Scenes
- Authors: BaoSheng Zhang
- Abstract summary: This paper proposes a real-time stereo indirect visual SLAM system, PLD-SLAM, which combines point and line features.
We also present a novel global gray similarity (GGS) algorithm to achieve reasonable keyframe selection and efficient loop closure detection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider problems that arise in the practical
application of visual simultaneous localization and mapping (SLAM). As the
technology is deployed more widely, the practicability of SLAM systems has
become a new focus after accuracy and robustness: how to keep the system
stable and estimate poses accurately in low-texture and dynamic environments,
and how to improve generality and real-time performance in real scenes. This
paper proposes a real-time stereo indirect visual SLAM system, PLD-SLAM,
which combines point and line features and avoids the impact of dynamic
objects in highly dynamic environments. We also present a novel global gray
similarity (GGS) algorithm to achieve reasonable keyframe selection and
efficient loop closure detection (LCD). Benefiting from the GGS, PLD-SLAM can
perform real-time, accurate pose estimation in most real scenes without
pre-training or loading a huge feature dictionary model. To verify the
performance of the proposed system, we compare it with existing
state-of-the-art (SOTA) methods on the public KITTI and EuRoC MAV datasets
and on the indoor stereo datasets provided by us. The experiments show that
PLD-SLAM achieves better real-time performance while remaining stable and
accurate in most scenarios. In addition, analysis of the experimental results
shows that the GGS performs excellently in keyframe selection and LCD.
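The abstract does not spell out how the global gray similarity is computed, so the following is only an illustrative sketch of what a global gray-level comparison between frames could look like: the intersection of normalized grayscale histograms, with thresholds driving keyframe selection and loop-closure candidate search. All names and threshold values here (`global_gray_similarity`, `bins=32`, `0.85`, `0.95`) are hypothetical and not taken from the paper.

```python
import numpy as np

def global_gray_similarity(img_a, img_b, bins=32):
    """Histogram intersection of normalized grayscale histograms.

    Hypothetical stand-in for the paper's GGS: returns 1.0 for identical
    gray-level distributions and approaches 0.0 for disjoint ones.
    """
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())

def is_new_keyframe(frame, last_keyframe, threshold=0.85):
    # Insert a keyframe once the view has changed enough that the global
    # gray statistics diverge from the last keyframe (threshold is made up).
    return global_gray_similarity(frame, last_keyframe) < threshold

def loop_candidates(frame, keyframes, threshold=0.95):
    # Old keyframes whose gray statistics closely match the current frame
    # become cheap loop-closure candidates, with no feature dictionary needed.
    return [i for i, kf in enumerate(keyframes)
            if global_gray_similarity(frame, kf) > threshold]
```

A global statistic like this is cheap to compute per frame, which is consistent with the abstract's claim of running without a pre-trained bag-of-words dictionary, though the paper's actual GGS formulation may differ.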
Related papers
- OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB [40.62577054196799]
We introduce a large-scale synthetic dataset OmniPose6D, crafted to mirror the diversity of real-world conditions.
We present a benchmarking framework for a comprehensive comparison of pose tracking algorithms.
arXiv Detail & Related papers (2024-10-09T09:01:40Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a huge human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking and Loop-Closing [13.50980509878613]
Our system employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of keypoint extraction networks.
To mitigate cumulative positioning errors, DK-SLAM incorporates a novel online learning module that utilizes binary features for loop closure detection.
Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning-based SLAM systems.
arXiv Detail & Related papers (2024-01-17T12:08:30Z)
- NID-SLAM: Neural Implicit Representation-based RGB-D SLAM in dynamic environments [9.706447888754614]
We present NID-SLAM, which significantly improves the performance of neural SLAM in dynamic environments.
We propose a new approach to enhance inaccurate regions in semantic masks, particularly in marginal areas.
We also introduce a selection strategy for dynamic scenes, which enhances camera tracking robustness against large-scale objects.
arXiv Detail & Related papers (2024-01-02T12:35:03Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build object-level semantic maps for dynamic scenarios in real time using only one commonly used Intel Core i7 CPU.
We evaluate RDS-SLAM on the TUM RGB-D dataset; experimental results show that it runs at 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
- Det-SLAM: A semantic visual SLAM for highly dynamic scenes using Detectron2 [0.0]
This research combines the visual SLAM systems ORB-SLAM3 and Detectron2 to present the Det-SLAM system.
Det-SLAM is more resilient than previous dynamic SLAM systems and can lower the estimated camera pose error in dynamic indoor scenarios.
arXiv Detail & Related papers (2022-10-01T13:25:11Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- Greedy-Based Feature Selection for Efficient LiDAR SLAM [12.257338124961622]
This paper demonstrates that actively selecting a subset of features significantly improves both the accuracy and efficiency of a LiDAR SLAM (L-SLAM) system.
We show that our approach exhibits low localization error and speedup compared to the state-of-the-art L-SLAM systems.
arXiv Detail & Related papers (2021-03-24T11:03:16Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.