OpenVSLAM: A Versatile Visual SLAM Framework
- URL: http://arxiv.org/abs/1910.01122v3
- Date: Thu, 6 Apr 2023 12:34:01 GMT
- Title: OpenVSLAM: A Versatile Visual SLAM Framework
- Authors: Shinya Sumikura, Mikiya Shibuya, Ken Sakurada
- Abstract summary: We introduce OpenVSLAM, a visual SLAM framework with high usability.
This software is designed to be easily used and extended.
It incorporates several useful features and functions for research and development.
- Score: 13.268738551141107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce OpenVSLAM, a visual SLAM framework with high
usability and extensibility. Visual SLAM systems are essential for AR devices,
autonomous control of robots and drones, etc. However, conventional open-source
visual SLAM frameworks are not designed to be called as libraries from
third-party programs. To overcome this situation, we have developed a novel
visual SLAM framework. This software is designed to be easily used and
extended. It incorporates several useful features and functions for research
and development.
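The abstract's central claim is architectural: a SLAM system should be a library the host application drives, not a monolithic executable. A minimal sketch of that usage pattern, in Python with entirely hypothetical class and method names (this is not OpenVSLAM's actual API, which is C++):

```python
# "SLAM as a library": the third-party program owns the main loop,
# constructs the system, feeds it frames, and reads back poses.
# SlamSystem/feed_frame are illustrative stand-ins, not a real API.
from dataclasses import dataclass, field

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    theta: float = 0.0

@dataclass
class SlamSystem:
    """Stand-in for a SLAM library entry point."""
    poses: list = field(default_factory=list)

    def feed_frame(self, frame_id: int, image) -> Pose:
        # A real system would track features and optimize a map here;
        # this stub just records a placeholder pose per frame.
        pose = Pose(x=float(frame_id))
        self.poses.append(pose)
        return pose

# Third-party usage: the host application stays in control of the loop.
slam = SlamSystem()
for i in range(3):
    slam.feed_frame(i, image=None)
print(len(slam.poses))  # 3 tracked frames
```

The point of the pattern is inversion of control: the application decides when frames arrive and what to do with poses, which is what makes the framework embeddable in AR devices or robot controllers.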
Related papers
- VSLAM-LAB: A Comprehensive Framework for Visual SLAM Methods and Datasets [64.57742015099531]
VSLAM-LAB is a unified framework designed to streamline the development, evaluation, and deployment of VSLAM systems.
It enables seamless compilation and configuration of VSLAM algorithms, automated dataset downloading and preprocessing, and standardized experiment design, execution, and evaluation.
arXiv Detail & Related papers (2025-04-06T12:02:19Z)
- pySLAM: An Open-Source, Modular, and Extensible Framework for SLAM [0.0]
pySLAM is an open-source Python framework for Visual SLAM.
It supports monocular, stereo, and RGB-D cameras.
pySLAM encourages community contributions, fostering collaborative development in the field of Visual SLAM.
arXiv Detail & Related papers (2025-02-17T16:05:31Z)
- Self-Organizing Edge Computing Distribution Framework for Visual SLAM [0.6749750044497732]
We propose a novel edge-assisted SLAM framework capable of self-organizing fully distributed SLAM execution across a network of devices.
The architecture consists of three layers and is designed to be device-agnostic, resilient to network failures, and minimally invasive to the core SLAM system.
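The self-organizing idea above can be caricatured as a task-assignment policy that survives device loss. The sub-task names and the round-robin policy below are illustrative assumptions, not the paper's actual design:

```python
# SLAM sub-tasks are distributed over whatever devices are currently alive,
# and simply reassigned when a device drops out of the network.
# Task names and the round-robin policy are illustrative, not the paper's.
def assign_tasks(tasks, devices):
    """Round-robin each task onto one of the live devices."""
    if not devices:
        raise RuntimeError("no devices available")
    return {task: devices[i % len(devices)] for i, task in enumerate(tasks)}

tasks = ["tracking", "local_mapping", "loop_closure"]
plan = assign_tasks(tasks, ["edge-0", "edge-1"])
# Simulate a network failure: edge-1 disappears, the plan re-forms.
plan_after_failure = assign_tasks(tasks, ["edge-0"])
print(plan_after_failure["loop_closure"])  # edge-0
```

Keeping the assignment logic outside the SLAM sub-tasks themselves is one way to stay "minimally invasive to the core SLAM system", as the abstract puts it.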
arXiv Detail & Related papers (2025-01-15T07:24:15Z)
- Large Action Models: From Inception to Implementation [51.81485642442344]
Large Action Models (LAMs) are designed for action generation and execution within dynamic environments.
LAMs hold the potential to transform AI from passive language understanding to active task completion.
We present a comprehensive framework for developing LAMs, offering a systematic approach to their creation, from inception to deployment.
arXiv Detail & Related papers (2024-12-13T11:19:56Z)
- XRDSLAM: A Flexible and Modular Framework for Deep Learning based SLAM [5.092026311165656]
XRDSLAM is a flexible SLAM framework that adopts a modular code design and a multi-process running mechanism.
Within this framework, we integrate several state-of-the-art SLAM algorithms of different types, including NeRF- and 3DGS-based SLAM, and even odometry and reconstruction algorithms.
We contribute all the code, configuration, and data to the open-source community, aiming to promote widespread research and development of SLAM technology.
arXiv Detail & Related papers (2024-10-31T07:25:39Z)
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models [61.14336781917986]
We introduce OpenR, an open-source framework for enhancing the reasoning capabilities of large language models (LLMs).
OpenR unifies data acquisition, reinforcement learning training, and non-autoregressive decoding into a cohesive software platform.
Our work is the first to provide an open-source framework that explores the core techniques of OpenAI's o1 model with reinforcement learning.
arXiv Detail & Related papers (2024-10-12T23:42:16Z)
- Collaborative, Code-Proximal Dynamic Software Visualization within Code Editors [55.57032418885258]
This paper introduces the design and proof-of-concept implementation for a software visualization approach that can be embedded into code editors.
Our contribution differs from related work in that we use dynamic analysis of a software system's runtime behavior.
Our visualization approach enhances common remote pair programming tools and is collaboratively usable by employing shared code cities.
arXiv Detail & Related papers (2023-08-30T06:35:40Z)
- Orbeez-SLAM: A Real-time Monocular Visual SLAM with ORB Features and NeRF-realized Mapping [18.083667773491083]
We develop a visual SLAM that adapts to new scenes without pre-training and generates dense maps for downstream tasks in real-time.
Orbeez-SLAM combines an implicit neural representation (NeRF) with visual odometry to achieve our goals.
Results show that our SLAM is up to 800x faster than the strong baseline with superior rendering outcomes.
arXiv Detail & Related papers (2022-09-27T09:37:57Z)
- RWT-SLAM: Robust Visual SLAM for Highly Weak-textured Environments [1.1024591739346294]
We propose a novel visual SLAM system named RWT-SLAM to tackle this problem.
We modify the LoFTR network, which can produce dense point matches in low-textured scenes, to generate feature descriptors.
The resulting RWT-SLAM is tested on various public datasets such as TUM and OpenLORIS.
arXiv Detail & Related papers (2022-07-07T19:24:03Z)
- NICE-SLAM: Neural Implicit Scalable Encoding for SLAM [112.6093688226293]
NICE-SLAM is a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation.
Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust.
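The "hierarchical scene representation" amounts to querying several grids at different resolutions and combining their contributions. A toy illustration of that indexing idea (real NICE-SLAM uses learned multi-level feature grids decoded by neural networks; the dictionaries and fill values here are made up):

```python
# Coarse-to-fine lookup: each level is a grid at a different resolution,
# and a query at a 2D point sums the per-level contributions.
# This only illustrates the multi-level indexing, not the learned decoders.
def make_level(resolution, fill):
    return {"res": resolution, "cells": {}, "fill": fill}

def query(levels, x, y):
    total = 0.0
    for lvl in levels:
        cell = (int(x * lvl["res"]), int(y * lvl["res"]))
        total += lvl["cells"].get(cell, lvl["fill"])
    return total

levels = [make_level(2, 0.5), make_level(8, 0.1), make_level(32, 0.0)]
levels[1]["cells"][(4, 4)] = 0.9  # a "mapped" mid-resolution cell near (0.5, 0.5)
print(query(levels, 0.5, 0.5))  # coarse fill + mapped mid-level value + fine fill
```

The scalability argument follows from this layout: fine detail is only stored where it has been observed, while coarse levels cover the whole scene cheaply.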
arXiv Detail & Related papers (2021-12-22T18:45:44Z)
- DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras [71.41252518419486]
DROID-SLAM is a new deep learning based SLAM system.
It can leverage stereo or RGB-D video to achieve improved performance at test time.
arXiv Detail & Related papers (2021-08-24T17:50:10Z)
- OV$^{2}$SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications [59.013743002557646]
We describe OV$2$SLAM, a fully online algorithm, handling both monocular and stereo camera setups, various map scales and frame-rates ranging from a few Hertz up to several hundreds.
For the benefit of the community, we release the source code: https://github.com/ov2slam/ov2slam
arXiv Detail & Related papers (2021-02-08T08:39:23Z)
- DXSLAM: A Robust and Efficient Visual SLAM System with Deep Features [5.319556638040589]
This paper shows that feature extraction with deep convolutional neural networks (CNNs) can be seamlessly incorporated into a modern SLAM framework.
The proposed SLAM system utilizes a state-of-the-art CNN to detect keypoints in each image frame, and to give not only keypoint descriptors, but also a global descriptor of the whole image.
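A whole-image global descriptor, as described above, is typically used for place recognition: compare the current image's descriptor against past keyframes and take the most similar one as a loop-closure candidate. A sketch with toy three-dimensional descriptors (a real CNN descriptor would have hundreds of dimensions, and DXSLAM's exact matching pipeline may differ):

```python
# Place recognition with global descriptors: pick the past keyframe whose
# descriptor has the highest cosine similarity to the current image's.
# Keyframe IDs and descriptor values here are made-up toy data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query, keyframes):
    """Return (keyframe_id, similarity) of the closest global descriptor."""
    return max(((kf_id, cosine(query, desc)) for kf_id, desc in keyframes.items()),
               key=lambda t: t[1])

keyframes = {"kf0": [1.0, 0.0, 0.0], "kf1": [0.0, 1.0, 0.0], "kf2": [0.7, 0.7, 0.0]}
kf_id, sim = best_match([0.6, 0.8, 0.0], keyframes)
print(kf_id)  # kf2: its descriptor points in nearly the same direction
```

Getting the global descriptor "for free" from the same CNN pass that produces keypoints is what makes this attractive inside a real-time SLAM loop.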
arXiv Detail & Related papers (2020-08-12T16:14:46Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.