Enhancing Multi-Robot Perception via Learned Data Association
- URL: http://arxiv.org/abs/2107.00769v1
- Date: Thu, 1 Jul 2021 22:45:26 GMT
- Title: Enhancing Multi-Robot Perception via Learned Data Association
- Authors: Nathaniel Glaser, Yen-Cheng Liu, Junjiao Tian, Zsolt Kira
- Abstract summary: We address the multi-robot collaborative perception problem, specifically in the context of multi-view infilling for distributed semantic segmentation.
We propose the Multi-Agent Infilling Network: a neural architecture that can be deployed to each agent in a robotic swarm.
Specifically, each robot is in charge of locally encoding and decoding visual information, and a neural mechanism allows for an uncertainty-aware and context-based exchange of intermediate features.
- Score: 37.866254392010454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address the multi-robot collaborative perception problem,
specifically in the context of multi-view infilling for distributed semantic
segmentation. This setting entails several real-world challenges, especially
those relating to unregistered multi-agent image data. Solutions must
effectively leverage multiple, non-static, and intermittently-overlapping RGB
perspectives. To this end, we propose the Multi-Agent Infilling Network: an
extensible neural architecture that can be deployed (in a distributed manner)
to each agent in a robotic swarm. Specifically, each robot is in charge of
locally encoding and decoding visual information, and an extensible neural
mechanism allows for an uncertainty-aware and context-based exchange of
intermediate features. We demonstrate improved performance on a realistic
multi-robot AirSim dataset.
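The abstract's core idea, each robot encoding features locally and exchanging intermediate features weighted by uncertainty, can be illustrated with a minimal sketch. The function name and the confidence-weighted averaging rule below are illustrative assumptions; the paper's actual mechanism is a learned neural exchange, not a fixed fusion formula.

```python
# Illustrative sketch of uncertainty-aware feature fusion between agents.
# Confidence-weighted averaging is an assumption for illustration only;
# the Multi-Agent Infilling Network learns its exchange mechanism.

def fuse_features(local_feat, local_conf, shared):
    """Fuse a robot's local feature vector with features received from
    neighbors, weighting each contribution by its sender's confidence.

    local_feat : list[float] -- locally encoded feature vector
    local_conf : float       -- confidence (inverse uncertainty) in [0, 1]
    shared     : list of (feature, confidence) pairs from other robots
    """
    sources = [(local_feat, local_conf)] + list(shared)
    total = sum(conf for _, conf in sources)
    if total == 0.0:
        return local_feat[:]  # no confident source: keep the local estimate
    dim = len(local_feat)
    fused = [0.0] * dim
    for feat, conf in sources:
        for i in range(dim):
            fused[i] += conf * feat[i] / total
    return fused


# A robot that is uncertain (e.g. its view is occluded) leans on a
# confident neighbor's features.
fused = fuse_features([0.0, 0.0], 0.1, [([1.0, 2.0], 0.9)])
```

In this toy case the fused vector sits close to the confident neighbor's features, which is the qualitative behavior an uncertainty-aware exchange aims for.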
Related papers
- Federated Multi-Agent Mapping for Planetary Exploration [0.4143603294943439]
Federated learning (FL) is a promising approach for distributed mapping, addressing the challenges of decentralized data in collaborative learning.
Our approach leverages implicit neural mapping, representing maps as continuous functions learned by neural networks, for compact and adaptable representations.
We rigorously evaluate this approach, demonstrating its effectiveness for real-world deployment in multi-agent exploration scenarios.
arXiv Detail & Related papers (2024-04-02T20:32:32Z) - An Interactive Agent Foundation Model [49.77861810045509]
We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents.
Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction.
We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare.
arXiv Detail & Related papers (2024-02-08T18:58:02Z) - ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple
yet General Complementary Transformer [91.43066633305662]
We propose a novel Complementary transformer, ComPtr, for diverse bi-source dense prediction tasks.
ComPtr treats different inputs equally and builds an efficient dense interaction model in the form of sequence-to-sequence on top of the transformer.
arXiv Detail & Related papers (2023-07-23T15:17:45Z) - Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation mask generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z) - Graph Neural Networks for Multi-Robot Active Information Acquisition [15.900385823366117]
A team of mobile robots, communicating through an underlying graph, estimates a hidden state expressing a phenomenon of interest.
Existing approaches are not scalable, cannot handle dynamic phenomena, or are not robust to changes in the communication graph.
We propose an Information-aware Graph Block Network (I-GBNet) that aggregates information over the graph representation and provides sequential-decision making in a distributed manner.
arXiv Detail & Related papers (2022-09-24T21:45:06Z) - Multi-Robot Collaborative Perception with Graph Neural Networks [6.383576104583731]
We propose a general-purpose Graph Neural Network (GNN) with the main goal of increasing perception accuracy in multi-robot perception tasks.
We show that the proposed framework can address multi-view visual perception problems such as monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2022-01-05T18:47:07Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic
Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z) - Overcoming Obstructions via Bandwidth-Limited Multi-Agent Spatial
Handshaking [37.866254392010454]
We propose an end-to-end learnable Multi-Agent Spatial Handshaking network (MASH) to process, compress, and propagate visual information across a robotic swarm.
Our method achieves an absolute 11% IoU improvement over strong baselines.
arXiv Detail & Related papers (2021-07-01T22:56:47Z) - Graph Neural Networks for Decentralized Multi-Robot Submodular Action
Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture for submodular maximization at scale, with decentralized communications.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
arXiv Detail & Related papers (2021-05-18T15:32:07Z)
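Several of the related papers above (I-GBNet, the collaborative-perception GNN, and the decentralized submodular-selection GNN) share one core operation: each robot aggregates features over its communication graph. A hypothetical single round of message passing is sketched below; mean aggregation is an assumption chosen for simplicity, whereas the cited papers use learned aggregation and update functions.

```python
# Hypothetical one-round message passing on a robot communication graph:
# every robot averages its own feature vector with its neighbors' vectors.
# Mean aggregation stands in for the learned layers of the cited GNNs.

def message_passing_step(features, edges):
    """features : dict mapping robot id -> feature vector (list[float])
       edges    : iterable of (u, v) undirected communication links
    """
    neighbors = {node: [] for node in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    updated = {}
    for node, feat in features.items():
        msgs = [features[m] for m in neighbors[node]] + [feat]
        dim = len(feat)
        updated[node] = [sum(m[i] for m in msgs) / len(msgs)
                         for i in range(dim)]
    return updated


# Two connected robots pull each other's estimates toward agreement;
# repeating the step diffuses information across the whole graph.
step = message_passing_step({"a": [0.0], "b": [3.0]}, [("a", "b")])
```

Stacking such rounds is what lets information propagate beyond immediate neighbors, which is why these architectures remain usable when the communication graph changes between rounds.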
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.