DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
- URL: http://arxiv.org/abs/2306.01891v3
- Date: Sun, 9 Jun 2024 13:56:51 GMT
- Title: DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And Mapping System
- Authors: Abanob Soliman, Fabien Bonardi, Désiré Sidibé, Samia Bouchafa
- Abstract summary: This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments.
Our proposed method combines the strengths of heterogeneous multi-modal visual sensors in a unified reference frame.
Our implementation's research-based Python API is publicly available on GitHub.
- Score: 1.443696537295348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a robust approach for a visual parallel tracking and mapping (PTAM) system that excels in challenging environments. Our proposed method combines the strengths of heterogeneous multi-modal visual sensors, including stereo event-based and frame-based sensors, in a unified reference frame through a novel spatio-temporal synchronization of stereo visual frames and stereo event streams. We employ deep learning-based feature extraction and description for estimation to enhance robustness further. We also introduce an end-to-end parallel tracking and mapping optimization layer complemented by a simple loop-closure algorithm for efficient SLAM behavior. Through comprehensive experiments on both small-scale and large-scale real-world sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM) demonstrates superior performance in terms of robustness and accuracy in adverse conditions, especially in large-scale HDR scenarios. Our implementation's research-based Python API is publicly available on GitHub for further research and development: https://github.com/AbanobSoliman/DH-PTAM.
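The spatio-temporal synchronization is only summarized above; as a rough illustration of the idea (not DH-PTAM's actual implementation, and every name below is hypothetical), the sketch accumulates an event stream into an image-like array over a time window centered on a camera frame's timestamp, so frames and events share one temporal reference:

```python
import numpy as np

def events_to_frame(events, t_frame, half_window, height, width):
    """Accumulate polarity events into an image aligned with a frame timestamp.

    events: structured array with fields 't' (seconds), 'x', 'y', 'p' (+1/-1).
    A simplified stand-in for spatio-temporal synchronization: only events
    inside [t_frame - half_window, t_frame + half_window] contribute, so the
    event image shares the frame's reference time.
    """
    mask = np.abs(events['t'] - t_frame) <= half_window
    img = np.zeros((height, width), dtype=np.float32)
    # Signed accumulation: ON events add, OFF events subtract.
    np.add.at(img, (events['y'][mask], events['x'][mask]),
              events['p'][mask].astype(np.float32))
    return img
```

In a full stereo pipeline, both event cameras' windows would additionally be centered on the stereo frame pair's common timestamp before feature extraction.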
Related papers
- Vector-Symbolic Architecture for Event-Based Optical Flow [18.261064372829164]
We introduce an effective and robust high-dimensional (HD) feature descriptor for event frames, utilizing Vector Symbolic Architectures (VSA).
We propose a novel feature matching framework for event-based optical flow, encompassing both model-based (VSA-Flow) and self-supervised learning (VSA-SM) methods.
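For readers unfamiliar with VSA, here is a minimal, self-contained sketch of hyperdimensional descriptors compared by cosine similarity. It is illustrative only: the paper's encoder and matching framework are more elaborate, and all names and the scaling-based binding below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8192  # hypervector dimensionality

def random_hv():
    """Random bipolar hypervector, the basic VSA symbol."""
    return rng.choice([-1.0, 1.0], size=D)

def encode_patch(patch, pixel_codes):
    """Bundle (sum) per-pixel hypervectors weighted by event counts.

    patch: flattened event-frame patch; pixel_codes: one hypervector per
    pixel position. Scaling by magnitude is a simplification of VSA binding.
    """
    return (pixel_codes * patch[:, None]).sum(axis=0)

def similarity(a, b):
    """Cosine similarity used for descriptor matching."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Usage: match a patch against candidates by highest similarity.
codes = np.stack([random_hv() for _ in range(25)])   # 5x5 patch
desc1 = encode_patch(rng.random(25), codes)
desc2 = encode_patch(rng.random(25), codes)
print(similarity(desc1, desc2))
```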
arXiv Detail & Related papers (2024-05-14T03:50:07Z)
- DBA-Fusion: Tightly Integrating Deep Dense Visual Bundle Adjustment with Multiple Sensors for Large-Scale Localization and Mapping [3.5047603107971397]
We tightly integrate the trainable deep dense bundle adjustment (DBA) with multi-sensor information through a factor graph.
A pipeline for visual-inertial integration is developed first, providing the minimum capability of metric-scale localization and mapping.
The results validate the superior localization performance of our approach, which enables real-time dense mapping in large-scale environments.
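The factor-graph coupling can be pictured with a minimal pose-graph example. The sketch below uses GTSAM as an assumption of convenience, not the paper's confirmed backend, and it stands in plain odometry factors for the dense-BA and inertial factors the paper actually fuses:

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

# Minimal pose-graph sketch of multi-sensor fusion in a factor graph.
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.2))

# Anchor the first pose, then chain relative-pose factors standing in
# for visual/inertial constraints between consecutive keyframes.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
delta = gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0]))
for i in range(3):
    graph.add(gtsam.BetweenFactorPose3(X(i), X(i + 1), delta, odom_noise))

initial = gtsam.Values()
for i in range(4):
    initial.insert(X(i), gtsam.Pose3())  # deliberately poor initial guess

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(3)).translation())  # recovered ~(3, 0, 0)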
arXiv Detail & Related papers (2024-03-20T16:20:54Z)
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments [55.864869961717424]
It is typically challenging for visual or visual-inertial odometry systems to handle the problems of dynamic scenes and pure rotation.
We design a novel visual-inertial odometry (VIO) system called RD-VIO to handle both of these problems.
arXiv Detail & Related papers (2023-10-23T16:30:39Z)
- PSNet: Parallel Symmetric Network for Video Salient Object Detection [85.94443548452729]
We propose PSNet, a video salient object detection (VSOD) network with up-and-down parallel symmetry.
Two parallel branches with different dominant modalities are set to achieve complete video saliency decoding.
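A toy two-branch network conveys the parallel-branch idea; the layer sizes, modalities (RGB plus optical flow), and late-fusion rule below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoBranchSaliency(nn.Module):
    """Two parallel branches, each dominated by a different modality,
    fused into a single saliency map (shapes are illustrative)."""
    def __init__(self, ch=16):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.motion_branch = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * ch, 1, 1)  # fuse and predict saliency

    def forward(self, rgb, flow):
        f = torch.cat([self.rgb_branch(rgb), self.motion_branch(flow)], dim=1)
        return torch.sigmoid(self.head(f))

model = TwoBranchSaliency()
out = model(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```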
arXiv Detail & Related papers (2022-10-12T04:11:48Z)
- Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras [13.693353009049773]
This paper demonstrates a visual SLAM system that utilizes point and line clouds for robust camera localization, together with an embedded piece-wise planar reconstruction (PPR) module.
We address the challenge of reconstructing geometric primitives with scale ambiguity by proposing several run-time optimizations on the reconstructed lines and planes.
The results show that our proposed SLAM tightly incorporates the semantic features to boost both tracking and backend optimization.
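One ingredient of such a planar-reconstruction module is a point-to-plane residual. Below is a minimal sketch, our simplification rather than the paper's exact run-time optimization:

```python
import numpy as np

def plane_fit(points):
    """Least-squares plane (n, d) with n.p + d = 0 through 3-D points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                       # normal = smallest singular direction
    return n, -n @ centroid

def point_plane_residuals(points, n, d):
    """Signed point-to-plane distances, usable as refinement residuals."""
    return points @ n + d

pts = np.random.randn(100, 3) * np.array([1.0, 1.0, 0.01])  # near z = 0
n, d = plane_fit(pts)
print(np.abs(point_plane_residuals(pts, n, d)).mean())
```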
arXiv Detail & Related papers (2022-07-13T09:05:35Z)
- IBISCape: A Simulated Benchmark for multi-modal SLAM Systems Evaluation in Large-scale Dynamic Environments [0.0]
IBISCape is a simulated benchmark for high-fidelity SLAM systems.
We offer 34 multi-modal datasets suitable for autonomous vehicle navigation.
We evaluate four ORB-SLAM3 systems on various sequences collected in simulated large-scale dynamic environments.
arXiv Detail & Related papers (2022-06-27T17:04:06Z)
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
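The core cost in photometric bundle adjustment is a robustly weighted brightness difference between corresponding pixels. The sketch below is a heavily simplified stand-in (TANDEM additionally warps pixels through depth and pose across the keyframe window; the Huber constant here is an arbitrary choice):

```python
import numpy as np

def photometric_residual(I_ref, I_cur, u_ref, u_cur, k=10.0):
    """Huber-weighted brightness differences between matched pixels.

    u_ref, u_cur: (N, 2) integer pixel coordinates as (x, y)."""
    r = (I_cur[u_cur[:, 1], u_cur[:, 0]].astype(np.float32)
         - I_ref[u_ref[:, 1], u_ref[:, 0]].astype(np.float32))
    # Down-weight outliers from occlusions / lighting changes.
    w = np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-6))
    return w * r

I_ref = np.random.rand(40, 60).astype(np.float32)
I_cur = I_ref.copy()
uv = np.array([[10, 5], [20, 15], [30, 25]])
print(photometric_residual(I_ref, I_cur, uv, uv))  # zeros: identical images
```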
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
- MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking [72.65494220685525]
We propose a new dynamic modality-aware filter generation module (named MFGNet) to boost the message communication between visible and thermal data.
We generate dynamic modality-aware filters with two independent networks; the visible and thermal filters are then used to perform dynamic convolution on their corresponding input feature maps.
To address issues caused by heavy occlusion, fast motion, and out-of-view, we propose to conduct a joint local and global search by exploiting a new direction-aware target-driven attention mechanism.
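The dynamic-filter idea can be sketched compactly: a small network predicts convolution kernels from the input itself and applies them on the fly. Everything below (shapes, the pooling-based generator, batch size of one) is an illustrative assumption, not MFGNet's actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterConv(nn.Module):
    """Predict per-input conv kernels from a feature map, then apply them."""
    def __init__(self, ch=8, k=3):
        super().__init__()
        self.ch, self.k = ch, k
        self.gen = nn.Sequential(              # filter-generating network
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch * ch * k * k))

    def forward(self, x):                      # x: (1, ch, H, W)
        w = self.gen(x).view(self.ch, self.ch, self.k, self.k)
        return F.conv2d(x, w, padding=self.k // 2)

layer = DynamicFilterConv()
feat = torch.randn(1, 8, 32, 32)               # e.g., thermal features
print(layer(feat).shape)                        # torch.Size([1, 8, 32, 32])
```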
arXiv Detail & Related papers (2021-07-22T03:10:51Z)
- Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-Aliased Indoor Environments [35.663671249819124]
We present novel research that simultaneously addresses viewpoint change and perceptual aliasing.
We show that our integration of VPR with SLAM significantly boosts the performance of VPR, feature correspondence, and pose graph submodules.
For the first time, we demonstrate a localization system capable of state-of-the-art performance despite perceptual aliasing and extreme 180-degree-rotated viewpoint change.
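A loop-closure front end of this kind ultimately reduces to scoring descriptor similarity between temporally distant frames. The following naive pass is much weaker than the paper's VPR, and the gap and threshold values are hypothetical:

```python
import numpy as np

def loop_candidates(descs, min_gap=50, thresh=0.9):
    """Flag frame pairs whose global descriptors are nearly parallel.

    descs: (N, D) L2-normalized image descriptors."""
    sim = descs @ descs.T                         # cosine similarity matrix
    pairs = []
    for i in range(len(descs)):
        for j in range(i + min_gap, len(descs)):  # skip near-in-time frames
            if sim[i, j] > thresh:
                pairs.append((i, j, float(sim[i, j])))
    return pairs

descs = np.random.randn(120, 64)
descs /= np.linalg.norm(descs, axis=1, keepdims=True)
print(loop_candidates(descs)[:3])
```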
arXiv Detail & Related papers (2020-10-03T20:18:55Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and lightweight deep neural networks for omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
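Lifting omnidirectional depth into 3-D points requires a per-pixel ray model. As a simpler stand-in for the paper's calibrated fisheye rigs, the sketch below uses an equirectangular model (an assumption, not the paper's camera model):

```python
import numpy as np

def equirect_ray(u, v, width, height):
    """Unit viewing ray for an equirectangular pixel (u, v):
    longitude spans [-pi, pi], latitude spans [-pi/2, pi/2]."""
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

point = 2.5 * equirect_ray(640, 200, 1024, 512)  # depth * ray -> 3-D point
print(point)
```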
arXiv Detail & Related papers (2020-03-18T05:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.