COVINS: Visual-Inertial SLAM for Centralized Collaboration
- URL: http://arxiv.org/abs/2108.05756v1
- Date: Thu, 12 Aug 2021 13:50:44 GMT
- Title: COVINS: Visual-Inertial SLAM for Centralized Collaboration
- Authors: Patrik Schmuck, Thomas Ziegler, Marco Karrer, Jonathan Perraudin,
Margarita Chli
- Abstract summary: Collaborative SLAM enables a group of agents to simultaneously co-localize and jointly map an environment.
This article presents COVINS, a novel collaborative SLAM system that enables multi-agent, scalable SLAM in large environments.
- Score: 11.65456841016608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative SLAM enables a group of agents to simultaneously co-localize
and jointly map an environment, thus paving the way to wide-ranging
applications of multi-robot perception and multi-user AR experiences by
eliminating the need for external infrastructure or pre-built maps. This
article presents COVINS, a novel collaborative SLAM system that enables
multi-agent, scalable SLAM in large environments and for large teams of more
than 10 agents. The paradigm here is that each agent runs visual-inertial
odometry independently onboard in order to ensure its autonomy, while sharing
map information with the COVINS server back-end running on a powerful local PC
or a remote cloud server. The server back-end establishes an accurate
collaborative global estimate from the contributed data, refining the joint
estimate by means of place recognition, global optimization and removal of
redundant data, in order to ensure an accurate, but also efficient SLAM
process. A thorough evaluation of COVINS reveals increased accuracy of the
collaborative SLAM estimates, as well as efficiency in both removing redundant
information and reducing the coordination overhead, and demonstrates successful
operation in a large-scale mission with 12 agents jointly performing SLAM.
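The agent/server split described above can be illustrated with a minimal sketch: each agent runs odometry locally and contributes keyframes to a central back-end, which keeps a joint map and discards redundant contributions. All class and method names below (`Keyframe`, `ServerBackend`, `contribute`) are illustrative assumptions for this sketch, not the actual COVINS API, and the distance-threshold test stands in for the paper's full redundancy-removal criterion.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    agent_id: int
    pose: tuple  # (x, y) position estimate from the agent's onboard VIO

@dataclass
class ServerBackend:
    min_dist: float = 0.5                 # redundancy threshold (metres)
    global_map: list = field(default_factory=list)

    def contribute(self, kf: Keyframe) -> bool:
        """Accept a keyframe into the joint map unless it is redundant,
        i.e. too close to a keyframe any agent has already contributed."""
        for existing in self.global_map:
            dx = kf.pose[0] - existing.pose[0]
            dy = kf.pose[1] - existing.pose[1]
            if (dx * dx + dy * dy) ** 0.5 < self.min_dist:
                return False              # redundant: discard
        self.global_map.append(kf)
        return True

server = ServerBackend()
# Two agents explore; agent 1 revisits a place agent 0 already mapped,
# so its keyframe is rejected as redundant by the server back-end.
accepted = [server.contribute(Keyframe(0, (0.0, 0.0))),
            server.contribute(Keyframe(0, (1.0, 0.0))),
            server.contribute(Keyframe(1, (1.1, 0.0)))]
print(accepted, len(server.global_map))   # [True, True, False] 2
```

In the real system the server additionally runs place recognition and global optimization over the contributed data; this sketch only shows how a centralized back-end can keep the joint estimate compact as the team size grows.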
Related papers
- Self-Localized Collaborative Perception [49.86110931859302]
We propose CoBEVGlue, a novel self-localized collaborative perception system.
CoBEVGlue includes a novel spatial alignment module, which provides the relative poses between agents.
CoBEVGlue achieves state-of-the-art detection performance under arbitrary localization noises and attacks.
arXiv Detail & Related papers (2024-06-18T15:26:54Z) - What Makes Good Collaborative Views? Contrastive Mutual Information Maximization for Multi-Agent Perception [52.41695608928129]
Multi-agent perception (MAP) allows autonomous systems to understand complex environments by interpreting data from multiple sources.
This paper investigates intermediate collaboration for MAP with a specific focus on exploring "good" properties of collaborative view.
We propose a novel framework named CMiMC for intermediate collaboration.
arXiv Detail & Related papers (2024-03-15T07:18:55Z) - Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z) - APGL4SR: A Generic Framework with Adaptive and Personalized Global
Collaborative Information in Sequential Recommendation [86.29366168836141]
We propose a graph-driven framework, named Adaptive and Personalized Graph Learning for Sequential Recommendation (APGL4SR)
APGL4SR incorporates adaptive and personalized global collaborative information into sequential recommendation systems.
As a generic framework, APGL4SR can outperform other baselines with significant margins.
arXiv Detail & Related papers (2023-11-06T01:33:24Z) - Collaborative Mean Estimation over Intermittently Connected Networks
with Peer-To-Peer Privacy [86.61829236732744]
This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity.
The goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes.
arXiv Detail & Related papers (2023-02-28T19:17:03Z) - COVINS-G: A Generic Back-end for Collaborative Visual-Inertial SLAM [13.190581566723917]
Collaborative SLAM is at the core of perception in multi-robot systems.
COVINS-G is a generalized back-end building upon the COVINS framework.
We show on-par accuracy with state-of-the-art multi-session and collaborative SLAM systems.
arXiv Detail & Related papers (2023-01-17T19:23:54Z) - Learning Efficient Multi-Agent Cooperative Visual Exploration [18.42493808094464]
We consider the task of visual indoor exploration with multiple agents, where the agents need to cooperatively explore the entire indoor region using as few steps as possible.
We extend the state-of-the-art single-agent RL solution, Active Neural SLAM (ANS), to the multi-agent setting by introducing a novel RL-based global-goal planner, the Spatial Coordination Planner (SCP).
SCP leverages spatial information from each individual agent in an end-to-end manner and effectively guides the agents to navigate towards different spatial goals with high exploration efficiency.
arXiv Detail & Related papers (2021-10-12T04:48:10Z) - Locality Matters: A Scalable Value Decomposition Approach for
Cooperative Multi-Agent Reinforcement Learning [52.7873574425376]
Cooperative multi-agent reinforcement learning (MARL) faces significant scalability issues due to state and action spaces that are exponentially large in the number of agents.
We propose a novel, value-based multi-agent algorithm called LOMAQ, which incorporates local rewards in the Centralized Training, Decentralized Execution paradigm.
arXiv Detail & Related papers (2021-09-22T10:08:15Z) - Collaborative Visual Inertial SLAM for Multiple Smart Phones [2.680317409645303]
Multi-agent cooperative SLAM is the precondition of multi-user AR interaction.
We propose a collaborative monocular visual-inertial SLAM system deployed on multiple iOS mobile devices with a centralized architecture.
The accuracy of mapping and fusion of the proposed system is comparable to VINS-Mono which requires higher computing resources.
arXiv Detail & Related papers (2021-06-23T06:24:04Z) - Distributed Resource Scheduling for Large-Scale MEC Systems: A
Multi-Agent Ensemble Deep Reinforcement Learning with Imitation Acceleration [44.40722828581203]
We propose a distributed intelligent resource scheduling (DIRS) framework, which includes centralized training relying on the global information and distributed decision making by each agent deployed in each MEC server.
We first introduce a novel multi-agent ensemble-assisted distributed deep reinforcement learning (DRL) architecture, which can simplify the overall neural network structure of each agent.
Secondly, we apply action refinement to enhance the exploration ability of the proposed DIRS framework, where the near-optimal state-action pairs are obtained by a novel Lévy flight search.
arXiv Detail & Related papers (2020-05-21T20:04:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.