Swarm-SLAM: Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems
- URL: http://arxiv.org/abs/2301.06230v3
- Date: Fri, 12 Jan 2024 21:53:31 GMT
- Title: Swarm-SLAM: Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems
- Authors: Pierre-Yves Lajoie, Giovanni Beltrame
- Abstract summary: In this paper, we introduce Swarm-SLAM, an open-source C-SLAM system that is designed to be scalable, flexible, decentralized, and sparse.
Our system supports inertial, lidar, stereo, and RGB-D sensing, and it includes a novel inter-robot loop closure prioritization technique.
- Score: 12.751394886873664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative Simultaneous Localization And Mapping (C-SLAM) is a vital
component for successful multi-robot operations in environments without an
external positioning system, such as indoors, underground or underwater. In
this paper, we introduce Swarm-SLAM, an open-source C-SLAM system that is
designed to be scalable, flexible, decentralized, and sparse, which are all key
properties in swarm robotics. Our system supports inertial, lidar, stereo, and
RGB-D sensing, and it includes a novel inter-robot loop closure prioritization
technique that reduces communication and accelerates convergence. We evaluated
our ROS-2 implementation on five different datasets, and in a real-world
experiment with three robots communicating through an ad-hoc network. Our code
is publicly available: https://github.com/MISTLab/Swarm-SLAM
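The abstract's key algorithmic idea is a communication budget on inter-robot loop closures: candidates are scored and only the most promising are verified and exchanged. The sketch below is a minimal illustration of that idea, not Swarm-SLAM's actual algorithm or API; the descriptor-similarity scores and the `prioritize_loop_closures` helper are assumptions for illustration.

```python
import heapq

def prioritize_loop_closures(candidates, budget):
    """Hypothetical helper: keep only the `budget` most promising
    inter-robot loop closure candidates, so that only those are
    geometrically verified and transmitted over the network.

    candidates: list of (similarity_score, keyframe_a, keyframe_b)
    """
    # Highest similarity first; everything else is never sent,
    # which is where the communication savings come from.
    return heapq.nlargest(budget, candidates, key=lambda c: c[0])

# Made-up candidates: (descriptor similarity, (robot, keyframe id) pairs).
candidates = [
    (0.91, ("robot_0", 42), ("robot_1", 17)),
    (0.55, ("robot_0", 43), ("robot_1", 18)),
    (0.87, ("robot_0", 90), ("robot_2", 5)),
    (0.30, ("robot_1", 12), ("robot_2", 66)),
]

# With a budget of 2, only the two best matches are verified and sent.
for score, kf_a, kf_b in prioritize_loop_closures(candidates, budget=2):
    print(f"verify {kf_a} <-> {kf_b} (similarity {score:.2f})")
```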
Related papers
- A Benchmark Dataset for Collaborative SLAM in Service Environments [17.866535357818474]
We introduce a new multi-modal C-SLAM dataset for multiple service robots in various indoor service environments.
By using simulation, we can provide accurate and precisely time-synchronized sensor data, such as stereo RGB, stereo depth, IMU, and ground truth (GT) poses.
We demonstrate our dataset by evaluating diverse state-of-the-art single-robot SLAM and multi-robot SLAM methods.
arXiv Detail & Related papers (2024-11-22T07:33:33Z)
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the coverage control problem.
A convolutional neural network (CNN) processes each robot's localized perception, and a graph neural network (GNN) facilitates communication between robots.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
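To make the loop concrete, the sketch below mimics the data flow with random linear maps standing in for the learned CNN and GNN; all shapes, weights, and the `lpac_step` function are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned components (assumptions, not the paper's weights):
W_enc = rng.standard_normal((8, 4))  # "CNN": local observation -> feature
W_msg = rng.standard_normal((8, 8))  # "GNN": mixes neighbor features
W_act = rng.standard_normal((2, 8))  # decoder: fused feature -> velocity

def lpac_step(observations, adjacency):
    """One perception-action-communication round for all robots.

    observations: (n_robots, 4) local sensor summaries
    adjacency:    (n_robots, n_robots) 0/1 communication graph
    Each robot uses only its own observation plus features broadcast
    by its neighbors, so the computation is fully decentralized.
    """
    features = np.tanh(observations @ W_enc.T)          # perception (local)
    neighbor_sum = adjacency @ features                 # communication
    fused = np.tanh(features + neighbor_sum @ W_msg.T)  # GNN-style update
    return fused @ W_act.T                              # action (local)

obs = rng.standard_normal((3, 4))                  # three robots
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # line topology
print(lpac_step(obs, adj))                         # (3, 2) velocity commands
```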
arXiv Detail & Related papers (2024-01-10T00:08:00Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using
LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and Human-Robot Interaction [17.093672006793984]
We present the ROS-PyBullet Interface, a framework that bridges the reliable contact/impact simulator PyBullet and the Robot Operating System (ROS).
We also provide utilities that facilitate Human-Robot Interaction (HRI) in the simulated environment.
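As a rough sketch of what such a bridge involves (this is not the ROS-PyBullet Interface's actual API; the node below uses only stock pybullet and rclpy calls and assumes both are installed):

```python
import pybullet as p
import pybullet_data
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class PyBulletBridge(Node):
    """Hypothetical minimal bridge: step PyBullet, republish joint states."""

    def __init__(self):
        super().__init__("pybullet_bridge")
        self.pub = self.create_publisher(JointState, "joint_states", 10)
        p.connect(p.DIRECT)  # headless physics server
        p.setAdditionalSearchPath(pybullet_data.getDataPath())
        self.robot = p.loadURDF("r2d2.urdf")  # example model shipped with pybullet
        self.create_timer(1.0 / 240.0, self.step)  # PyBullet's default timestep

    def step(self):
        p.stepSimulation()  # advance the contact simulation by one tick
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        for i in range(p.getNumJoints(self.robot)):
            pos, vel, *_ = p.getJointState(self.robot, i)
            msg.name.append(p.getJointInfo(self.robot, i)[1].decode())
            msg.position.append(pos)
            msg.velocity.append(vel)
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(PyBulletBridge())

if __name__ == "__main__":
    main()
```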
arXiv Detail & Related papers (2022-10-13T10:31:36Z)
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework, SAGCI-system, aimed at these four requirements: sample efficiency, generalizability, compositionality, and incrementality.
Our system first takes as input the raw point clouds gathered by a camera mounted on the robot's wrist and produces an initial model of the surrounding environment, represented as a URDF.
The robot then uses interactive perception, physically interacting with the environment to verify and modify the URDF online.
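As a toy illustration of the point-cloud-to-URDF step (the real system builds much richer, articulated models; the box-fitting heuristic and `box_urdf_from_points` helper below are assumptions):

```python
import numpy as np

def box_urdf_from_points(points, name="object"):
    """Fit an axis-aligned bounding box to a point cloud and emit a
    minimal URDF for it. A deliberately crude stand-in for the
    environment-modeling step described in the abstract.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    size, center = hi - lo, (lo + hi) / 2.0
    return f"""<robot name="{name}">
  <link name="{name}_link">
    <visual>
      <origin xyz="{center[0]:.3f} {center[1]:.3f} {center[2]:.3f}"/>
      <geometry><box size="{size[0]:.3f} {size[1]:.3f} {size[2]:.3f}"/></geometry>
    </visual>
  </link>
</robot>"""

# Made-up wrist-camera point cloud (N x 3, metres).
cloud = np.random.default_rng(1).uniform(
    [-0.1, -0.2, 0.0], [0.1, 0.2, 0.3], size=(500, 3))
print(box_urdf_from_points(cloud, name="scanned_box"))
```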
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and we handle uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks (RNNs) provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A deep Q-learning agent serves as a high-level path planner, providing the SMPC with target positions that move the robots toward a desired global goal.
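The chance-constraint idea admits a small worked example: if the position error at a horizon step is Gaussian with standard deviation sigma, enforcing an obstacle-distance constraint with probability 1 - delta reduces to padding the obstacle radius by z(1 - delta) * sigma. The per-step uncertainties below are made-up stand-ins for the RNN's estimates:

```python
from statistics import NormalDist

def tightened_radius(r_obstacle, sigma, delta=0.05):
    """Deterministic margin enforcing a 1-delta chance constraint for a
    scalar Gaussian position error: keep the nominal trajectory at least
    r + z_{1-delta} * sigma from the obstacle center. One-dimensional
    simplification of constraint tightening in stochastic MPC.
    """
    z = NormalDist().inv_cdf(1.0 - delta)  # ~1.645 for delta = 0.05
    return r_obstacle + z * sigma

# Assumed uncertainty estimates growing over the SMPC horizon (metres).
sigmas = [0.05, 0.08, 0.12, 0.17]
for k, s in enumerate(sigmas):
    print(f"step {k}: stay at least {tightened_radius(0.5, s):.3f} m away")
```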
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
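One standard ingredient for rejecting bad loop closures is checking which measurements are mutually consistent and keeping the largest agreeing subset. The sketch below is a translation-only toy version of that idea, not Kimera-Multi's actual pipeline (which works on full SE(3) poses with robust optimization):

```python
import numpy as np
from itertools import combinations

def largest_consistent_subset(closures, tol=0.25):
    """Translation-only pairwise consistency check (brute force).

    Each inter-robot closure (a, b, m) claims that point a in robot A's
    map and point b in robot B's map are related by measured offset m,
    implying a frame offset a + m - b. Closures implying (nearly) the
    same offset agree with each other; outliers do not.
    """
    offsets = [np.asarray(a, float) + np.asarray(m, float) - np.asarray(b, float)
               for a, b, m in closures]
    for size in range(len(offsets), 0, -1):
        for subset in combinations(range(len(offsets)), size):
            if all(np.linalg.norm(offsets[i] - offsets[j]) < tol
                   for i, j in combinations(subset, 2)):
                return list(subset)  # largest mutually consistent subset
    return []

closures = [
    ((0.0, 0.0), (5.0, 1.0), (5.1, 0.9)),  # agrees with closure 1
    ((2.0, 0.0), (7.1, 1.1), (5.0, 1.0)),  # agrees with closure 0
    ((1.0, 1.0), (3.0, 3.0), (9.0, 9.0)),  # spurious match
]
print("inlier closures:", largest_consistent_subset(closures))  # [0, 1]
```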
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Graph Neural Networks for Decentralized Multi-Robot Submodular Action Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture for submodular maximization at scale, with decentralized communication.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
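For context, a classical reference point for team submodular objectives is the centralized greedy algorithm, which attains a (1 - 1/e) approximation guarantee; the appeal of a GNN-based approach is getting comparable selections with only decentralized communication. A minimal max-coverage sketch of greedy selection (the target sets are hypothetical):

```python
def greedy_max_coverage(candidate_sets, k):
    """Greedy submodular maximization for max coverage: repeatedly pick
    the candidate covering the most not-yet-covered targets. Guarantees
    at least (1 - 1/e) of the optimal coverage for k picks.
    """
    covered, chosen = set(), []
    for _ in range(k):
        best = max(candidate_sets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered

# Hypothetical targets each robot action would cover.
actions = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(actions, k=2)
print("chosen:", chosen, "-> covered targets:", covered)
```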
arXiv Detail & Related papers (2021-05-18T15:32:07Z)
- Kimera-Multi: a System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping [57.173793973480656]
We present the first fully distributed multi-robot system for dense metric-semantic SLAM.
Our system, dubbed Kimera-Multi, is implemented by a team of robots equipped with visual-inertial sensors.
Kimera-Multi builds a 3D mesh model of the environment in real-time, where each face of the mesh is annotated with a semantic label.
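The representation can be pictured as a plain data structure in which every mesh face carries a semantic label; the field names below are assumptions for illustration, not Kimera-Multi's types:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticFace:
    """One triangle of the reconstructed mesh plus its semantic label."""
    vertices: tuple  # three (x, y, z) points, metres
    label: str       # e.g. "wall", "floor", "chair"

@dataclass
class SemanticMesh:
    faces: list = field(default_factory=list)

    def count_by_label(self):
        """Summarize how much of the map carries each semantic class."""
        counts = {}
        for f in self.faces:
            counts[f.label] = counts.get(f.label, 0) + 1
        return counts

mesh = SemanticMesh([
    SemanticFace(((0, 0, 0), (1, 0, 0), (0, 1, 0)), "floor"),
    SemanticFace(((0, 0, 2), (1, 0, 2), (0, 1, 2)), "ceiling"),
])
print(mesh.count_by_label())  # {'floor': 1, 'ceiling': 1}
```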
arXiv Detail & Related papers (2020-11-08T21:38:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.