VGAI: End-to-End Learning of Vision-Based Decentralized Controllers for
Robot Swarms
- URL: http://arxiv.org/abs/2002.02308v2
- Date: Thu, 10 Dec 2020 14:10:23 GMT
- Title: VGAI: End-to-End Learning of Vision-Based Decentralized Controllers for
Robot Swarms
- Authors: Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Zhangyang Wang, Alejandro
Ribeiro, Brian M. Sadler
- Abstract summary: We propose to learn decentralized controllers based solely on raw visual inputs.
For the first time, this framework integrates the learning of two key components: communication and visual perception.
Our proposed learning framework combines a convolutional neural network (CNN) for each robot to extract messages from the visual inputs, and a graph neural network (GNN) over the entire swarm to transmit, receive and process these messages.
- Score: 237.25930757584047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized coordination of a robot swarm requires addressing the tension
between local perceptions and actions, and the accomplishment of a global
objective. In this work, we propose to learn decentralized controllers based
solely on raw visual inputs. For the first time, this integrates the learning of
two key components, communication and visual perception, in one end-to-end
framework. More specifically, we consider that each robot has access to a
visual perception of the immediate surroundings, and communication capabilities
to transmit and receive messages from other neighboring robots. Our proposed
learning framework combines a convolutional neural network (CNN) for each robot
to extract messages from the visual inputs, and a graph neural network (GNN)
over the entire swarm to transmit, receive and process these messages in order
to decide on actions. The use of a GNN and locally-run CNNs results naturally
in a decentralized controller. We jointly train the CNNs and the GNN so that
each robot learns to extract messages from the images that are adequate for the
team as a whole. Our experiments demonstrate the proposed architecture in the
problem of drone flocking and show its promising performance and scalability,
e.g., achieving successful decentralized flocking for large-sized swarms
consisting of up to 75 drones.
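The abstract's per-robot CNN followed by a swarm-wide GNN can be sketched as below. This is a minimal illustration, not the paper's architecture: the flattened-patch encoder, message size, neighbor-averaging aggregation, and linear readout are all assumptions, with a fixed random linear map standing in for the CNN.

```python
import math
import random

random.seed(0)

N, IMG, MSG, ACT = 5, 16, 8, 2  # robots, flattened patch size, message dim, action dim (all assumed)

# Hypothetical per-robot "CNN": a fixed random linear map over a flattened
# local image patch, standing in for the paper's visual encoder.
W_cnn = [[random.gauss(0, 1) for _ in range(IMG)] for _ in range(MSG)]
# Hypothetical readout mapping [own message ; mean neighbor message] to an action.
W_out = [[random.gauss(0, 1) for _ in range(2 * MSG)] for _ in range(ACT)]

def extract_message(patch):
    """Visual encoder: local observation -> fixed-size message in [-1, 1]^MSG."""
    return [math.tanh(sum(w * x for w, x in zip(row, patch))) for row in W_cnn]

def gnn_step(messages, neighbors):
    """One graph-convolution hop: each robot averages the messages received
    from its neighbors, concatenates that with its own message, and maps
    the result to an action. Only local communication is used, so the
    controller is decentralized by construction."""
    actions = []
    for i, msg in enumerate(messages):
        nbrs = neighbors[i] or [i]  # isolated robot falls back to itself
        mean = [sum(messages[j][d] for j in nbrs) / len(nbrs) for d in range(MSG)]
        combined = msg + mean
        actions.append([sum(w * x for w, x in zip(row, combined)) for row in W_out])
    return actions

patches = [[random.gauss(0, 1) for _ in range(IMG)] for _ in range(N)]
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2, 4], 4: [3]}  # toy comm graph

msgs = [extract_message(p) for p in patches]
actions = gnn_step(msgs, neighbors)
print(len(actions), len(actions[0]))  # 5 robots, one 2-D action each
```

Training end to end, as the paper describes, would backpropagate a team-level loss through both maps so the messages become useful for the swarm as a whole.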
Related papers
- Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning [72.86540018081531]
Unlabeled motion planning involves assigning a set of robots to target locations while ensuring collision avoidance.
This problem forms an essential building block for multi-robot systems in applications such as exploration, surveillance, and transportation.
We address this problem in a decentralized setting where each robot knows only the positions of its $k$-nearest robots and $k$-nearest targets.
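This $k$-nearest information structure could be sketched as follows, assuming 2-D positions and Euclidean distance (both assumptions; the paper's exact setup may differ):

```python
import math

def knn_neighbors(positions, k):
    """For each robot, return the indices of its k nearest other robots.
    The resulting graph is directed: i seeing j does not imply j sees i."""
    out = {}
    for i, (xi, yi) in enumerate(positions):
        ranked = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(positions)
            if j != i
        )
        out[i] = [j for _, j in ranked[:k]]
    return out

pos = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(knn_neighbors(pos, 2))  # {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [1, 2]}
```

Each robot's observation and communication set is then limited to these indices, which is what makes the planner decentralized.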
arXiv Detail & Related papers (2024-09-29T23:57:25Z)
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem.
A convolutional neural network (CNN) processes localized perception, while a graph neural network (GNN) facilitates robot communication.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
- Asynchronous Perception-Action-Communication with Graph Neural Networks [93.58250297774728]
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments.
The robots must execute a Perception-Action-Communication loop -- they perceive their local environment, communicate with other robots, and take actions in real time.
Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control.
This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication.
arXiv Detail & Related papers (2023-09-18T21:20:50Z)
- Fully neuromorphic vision and control for autonomous drone flight [5.358212984063069]
Event-based vision and spiking neural hardware promise to exhibit similar characteristics.
Here, we present a fully learned neuromorphic pipeline for controlling a flying drone.
Results illustrate the potential of neuromorphic sensing and processing for enabling smaller networks for drone flight.
arXiv Detail & Related papers (2023-03-15T17:19:45Z)
- Graph Neural Networks for Relational Inductive Bias in Vision-based Deep Reinforcement Learning of Robot Control [0.0]
This work introduces a neural network architecture that combines relational inductive bias and visual feedback to learn an efficient position control policy.
We derive a graph representation that models the robot's internal state with a low-dimensional description of the visual scene generated by an image encoding network.
We show the ability of the model to improve sample efficiency for a 6-DoF robot arm in a visually realistic 3D environment.
arXiv Detail & Related papers (2022-03-11T15:11:54Z)
- Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks [208.15591625749272]
We present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI).
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning.
We demonstrate that VGAI yields performance comparable to or better than other decentralized controllers.
arXiv Detail & Related papers (2021-06-24T23:57:21Z)
- Graph Neural Networks for Decentralized Multi-Robot Submodular Action Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture for maximizing submodular objectives at scale, with decentralized communication.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
arXiv Detail & Related papers (2021-05-18T15:32:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.