OV$^{2}$SLAM : A Fully Online and Versatile Visual SLAM for Real-Time
Applications
- URL: http://arxiv.org/abs/2102.04060v1
- Date: Mon, 8 Feb 2021 08:39:23 GMT
- Title: OV$^{2}$SLAM : A Fully Online and Versatile Visual SLAM for Real-Time
Applications
- Authors: Maxime Ferrera, Alexandre Eudes, Julien Moras, Martial Sanfourche, Guy
Le Besnerais
- Abstract summary: We describe OV$^{2}$SLAM, a fully online algorithm, handling both monocular and stereo camera setups, various map scales, and frame-rates ranging from a few Hertz up to several hundred.
For the benefit of the community, we release the source code: https://github.com/ov2slam/ov2slam.
- Score: 59.013743002557646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many applications of Visual SLAM, such as augmented reality, virtual reality,
robotics or autonomous driving, require versatile, robust and precise
solutions, most often with real-time capability. In this work, we describe
OV$^{2}$SLAM, a fully online algorithm, handling both monocular and stereo
camera setups, various map scales and frame-rates ranging from a few Hertz up
to several hundred. It combines numerous recent contributions in visual
localization within an efficient multi-threaded architecture. Extensive
comparisons with competing algorithms show the state-of-the-art accuracy and
real-time performance of the resulting algorithm. For the benefit of the
community, we release the source code:
https://github.com/ov2slam/ov2slam.
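The core engineering claim is the decoupled multi-threaded architecture: a front-end that tracks every frame at camera rate and a back-end that performs the expensive map optimization asynchronously. Below is a minimal illustrative sketch of that split; all names, the queue size, and the keyframe policy are hypothetical, not OV$^{2}$SLAM's actual code.

```python
# Illustrative front-end/back-end split of a multi-threaded visual SLAM.
# All names, the queue size, and the keyframe policy are hypothetical.
import queue
import threading

keyframe_queue = queue.Queue(maxsize=8)

def track_frame(frame_id):
    """Stand-in for fast per-frame tracking; decides whether this frame
    becomes a keyframe (real criteria: parallax, feature loss, ...)."""
    return frame_id % 5 == 0  # hypothetical keyframe policy

def optimize_local_map(keyframe_id):
    """Stand-in for slow back-end work: triangulation, local bundle
    adjustment, loop-closure search."""
    print(f"mapping: refined local map around keyframe {keyframe_id}")

def frontend(n_frames):
    # Runs at camera rate and never blocks on the expensive back-end.
    for frame_id in range(n_frames):
        if track_frame(frame_id):
            keyframe_queue.put(frame_id)
    keyframe_queue.put(None)  # sentinel: sequence finished

def backend():
    while (kf := keyframe_queue.get()) is not None:
        optimize_local_map(kf)

mapper = threading.Thread(target=backend)
mapper.start()
frontend(20)
mapper.join()
```

The point of the queue is that a slow bundle adjustment never stalls frame-rate tracking, which is what makes operation from a few Hertz up to several hundred plausible.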
Related papers
- Coarse Correspondence Elicit 3D Spacetime Understanding in Multimodal Language Model [52.27297680947337]
Multimodal language models (MLLMs) are increasingly being implemented in real-world environments.
Despite their potential, current top models within our community still fall short in adequately understanding spatial and temporal dimensions.
We introduce Coarse Correspondence, a training-free, effective, and general-purpose visual prompting method to elicit 3D and temporal understanding.
arXiv Detail & Related papers (2024-08-01T17:57:12Z)
- Multicam-SLAM: Non-overlapping Multi-camera SLAM for Indirect Visual Localization and Navigation [1.3654846342364308]
This paper presents a novel approach to visual simultaneous localization and mapping (SLAM) using multiple RGB-D cameras.
The proposed method, Multicam-SLAM, significantly enhances the robustness and accuracy of SLAM systems.
Experiments in various environments demonstrate the superior accuracy and robustness of the proposed method compared to conventional single-camera SLAM systems.
arXiv Detail & Related papers (2024-06-10T15:36:23Z)
- NGD-SLAM: Towards Real-Time Dynamic SLAM without GPU [4.959552873584984]
This paper proposes an open-source real-time dynamic SLAM system that runs solely on CPU by incorporating a mask prediction mechanism.
Our system maintains high localization accuracy in dynamic environments while achieving a tracking frame rate of 56 FPS on a laptop CPU.
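The "mask prediction mechanism" amounts to rejecting features that land on predicted dynamic regions before pose estimation, so moving objects do not corrupt tracking. A minimal sketch of that filtering step follows, with a synthetic mask and hypothetical names (not NGD-SLAM's actual API):

```python
# Minimal sketch: discard keypoints on dynamic objects before tracking.
# The mask would come from a (possibly predicted) segmentation model;
# here it is synthetic. Names are hypothetical, not NGD-SLAM's API.
import numpy as np

def filter_static_keypoints(keypoints, dynamic_mask):
    """keypoints: (N, 2) array of (x, y) pixel coordinates.
    dynamic_mask: (H, W) boolean array, True where pixels are dynamic."""
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    keep = ~dynamic_mask[ys, xs]          # reject points on dynamic pixels
    return keypoints[keep]

h, w = 480, 640
mask = np.zeros((h, w), dtype=bool)
mask[100:300, 200:400] = True             # hypothetical moving object
pts = np.random.rand(500, 2) * [w - 1, h - 1]
static_pts = filter_static_keypoints(pts, mask)
print(f"kept {len(static_pts)} of {len(pts)} keypoints")
```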
arXiv Detail & Related papers (2024-05-12T23:00:53Z)
- VMamba: Visual State Space Model [92.83984290020891]
VMamba is a vision backbone that operates with linear time complexity.
At the core of VMamba lies a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module.
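SS2D achieves linear complexity by unfolding the 2D feature map into four 1D traversals (row-major, column-major, and their reverses), running a linear-time scan along each, and merging the spatially aligned outputs. The toy sketch below substitutes a plain exponential moving average for the actual selective (S6) scan, purely to show the cross-scan/cross-merge plumbing; it is an assumption about structure, not VMamba's implementation:

```python
# Toy cross-scan / cross-merge. A plain EMA stands in for the selective
# (S6) scan; this shows structure only, not VMamba's real SS2D module.
import numpy as np

def scan_1d(seq, decay=0.9):
    """Linear-time causal scan: out[t] = decay*out[t-1] + (1-decay)*seq[t]."""
    out = np.empty_like(seq)
    state = np.zeros(seq.shape[1:])
    for t, x in enumerate(seq):
        state = decay * state + (1.0 - decay) * x
        out[t] = state
    return out

def ss2d_like(feat):
    """feat: (H, W, C). Scan along four routes, then average (cross-merge)."""
    h, w, c = feat.shape
    routes = [
        feat.reshape(h * w, c),                          # row-major
        feat.reshape(h * w, c)[::-1],                    # row-major reversed
        feat.transpose(1, 0, 2).reshape(h * w, c),       # column-major
        feat.transpose(1, 0, 2).reshape(h * w, c)[::-1]  # column-major reversed
    ]
    scanned = [scan_1d(r) for r in routes]
    # Undo each traversal so all outputs align spatially, then merge.
    merged = (
        scanned[0].reshape(h, w, c)
        + scanned[1][::-1].reshape(h, w, c)
        + scanned[2].reshape(w, h, c).transpose(1, 0, 2)
        + scanned[3][::-1].reshape(w, h, c).transpose(1, 0, 2)
    ) / 4.0
    return merged

out = ss2d_like(np.random.rand(8, 8, 4))
print(out.shape)  # (8, 8, 4)
```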
arXiv Detail & Related papers (2024-01-18T17:55:39Z)
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured $\textbf{Surf}$ace reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- Shelving, Stacking, Hanging: Relational Pose Diffusion for Multi-modal Rearrangement [49.888011242939385]
We propose a system for rearranging objects in a scene to achieve a desired object-scene placing relationship.
The pipeline generalizes to novel geometries, poses, and layouts of both scenes and objects.
arXiv Detail & Related papers (2023-07-10T17:56:06Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns memory-efficient dense 3D geometry and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
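One way to picture the "neural fields-based mapping" half of such a pipeline: once classical tracking supplies a pose, depth back-projects pixels to surface points, and a small MLP is fit online to their colors and semantic labels. The sketch below collapses volume rendering into direct point supervision to stay short; the network, loss weights, and names are our assumptions, not the paper's method.

```python
# One neural-field mapping step: fit a small MLP to color + semantics at
# surface points back-projected from a tracked RGB-D frame. Volume rendering
# is deliberately collapsed into direct point supervision for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10  # hypothetical number of semantic classes

field = nn.Sequential(            # 3D point -> RGB (3) + class logits
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 3 + NUM_CLASSES),
)
opt = torch.optim.Adam(field.parameters(), lr=1e-3)

def mapping_step(points_w, colors, labels):
    """points_w: (N, 3) world-frame surface points (depth + tracked pose),
    colors: (N, 3) observed RGB in [0, 1], labels: (N,) semantic ids."""
    pred = field(points_w)
    rgb_loss = F.mse_loss(torch.sigmoid(pred[:, :3]), colors)
    sem_loss = F.cross_entropy(pred[:, 3:], labels)
    loss = rgb_loss + 0.1 * sem_loss   # hypothetical weighting
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic stand-in for one keyframe's observations:
n = 1024
loss = mapping_step(torch.randn(n, 3), torch.rand(n, 3),
                    torch.randint(0, NUM_CLASSES, (n,)))
print(f"mapping loss: {loss:.4f}")
```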
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Orbeez-SLAM: A Real-time Monocular Visual SLAM with ORB Features and NeRF-realized Mapping [18.083667773491083]
We develop a visual SLAM that adapts to new scenes without pre-training and generates dense maps for downstream tasks in real-time.
Orbeez-SLAM collaborates with implicit neural representation (NeRF) and visual odometry to achieve our goals.
Results show that our SLAM is up to 800x faster than the strong baseline with superior rendering outcomes.
arXiv Detail & Related papers (2022-09-27T09:37:57Z)
- Keeping Less is More: Point Sparsification for Visual SLAM [1.370633147306388]
This study proposes an efficient graph optimization for sparsifying map points in SLAM systems.
Specifically, we formulate a maximum pose-visibility and maximum spatial diversity problem as a minimum-cost maximum-flow graph optimization problem.
The proposed method works as an additional step in existing SLAM systems, so it can be used in both conventional and learning-based SLAM systems.
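The interesting move is encoding "keep points seen from many poses, spread across the map" as a flow problem: poses inject unit flow, points have unit capacity, and edge costs favor widely observed points. A toy version with networkx follows; the paper's exact graph construction and weights differ, so this is only a sketch of the idea.

```python
# Toy min-cost max-flow point selection: each pose routes unit flow to one
# of its observed points; point->sink edges are cheaper for widely seen
# points. The paper's exact graph construction and weights differ.
import networkx as nx

observations = {                  # pose -> observed point ids (hypothetical)
    "pose0": ["p0", "p1", "p2"],
    "pose1": ["p1", "p2", "p3"],
    "pose2": ["p2", "p3", "p4"],
}
visibility = {}                   # how many poses observe each point
for pts in observations.values():
    for p in pts:
        visibility[p] = visibility.get(p, 0) + 1

G = nx.DiGraph()
for pose, pts in observations.items():
    G.add_edge("s", pose, capacity=1, weight=0)  # one kept point per pose here
    for p in pts:
        G.add_edge(pose, p, capacity=1, weight=0)
for p, vis in visibility.items():
    # Unit capacity: a point is selected at most once; cost drops with
    # visibility, so widely observed points are preferred.
    G.add_edge(p, "t", capacity=1, weight=max(0, 5 - vis))

flow = nx.max_flow_min_cost(G, "s", "t")
kept = sorted(p for p in visibility if flow[p].get("t", 0) > 0)
print("kept points:", kept)       # singly-observed p0/p4 are dropped
```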
arXiv Detail & Related papers (2022-07-01T06:39:38Z)
- DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features [5.319556638040589]
This paper shows that feature extraction with deep convolutional neural networks (CNNs) can be seamlessly incorporated into a modern SLAM framework.
The proposed SLAM system utilizes a state-of-the-art CNN to detect keypoints in each image frame, and to give not only keypoint descriptors, but also a global descriptor of the whole image.
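The described interface is a single CNN pass that yields keypoints, per-keypoint local descriptors, and one global descriptor for the whole image. A hedged sketch of that interface with a toy backbone (DXSLAM uses a full state-of-the-art CNN, not this placeholder network):

```python
# Sketch of a CNN front-end emitting keypoints, local descriptors, and a
# global image descriptor in one pass. The tiny backbone is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFeatureNet(nn.Module):
    def __init__(self, desc_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, desc_dim, 3, padding=1), nn.ReLU(),
        )
        self.score_head = nn.Conv2d(desc_dim, 1, 1)   # keypoint score map

    def forward(self, img, top_k=200):
        feat = self.backbone(img)                     # (1, D, H, W)
        score = self.score_head(feat)[0, 0]           # (H, W)
        # Keypoints: top-k responses of the score map.
        flat = score.flatten()
        idx = flat.topk(min(top_k, flat.numel())).indices
        ys, xs = idx // score.shape[1], idx % score.shape[1]
        # Local descriptors: dense map sampled at keypoints, L2-normalized.
        dense = feat.squeeze(0)                       # (D, H, W)
        local = F.normalize(dense[:, ys, xs].t(), dim=1)     # (K, D)
        # Global descriptor: pooled dense features for place recognition.
        global_desc = F.normalize(feat.mean(dim=(2, 3)), dim=1)  # (1, D)
        return (xs, ys), local, global_desc

net = ToyFeatureNet()
img = torch.rand(1, 1, 120, 160)
(kx, ky), desc, gdesc = net(img)
print(desc.shape, gdesc.shape)  # torch.Size([200, 64]) torch.Size([1, 64])
```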
arXiv Detail & Related papers (2020-08-12T16:14:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.