Communication-Efficient Reinforcement Learning in Swarm Robotic Networks
for Maze Exploration
- URL: http://arxiv.org/abs/2305.17087v1
- Date: Fri, 26 May 2023 16:56:00 GMT
- Title: Communication-Efficient Reinforcement Learning in Swarm Robotic Networks
for Maze Exploration
- Authors: Ehsan Latif and WenZhan Song and Ramviyas Parasuraman
- Abstract summary: Communication is key to the successful coordination of swarm robots.
This paper proposes a new communication-efficient decentralized cooperative reinforcement learning algorithm for coordinating swarm robots.
- Score: 2.958532752589616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smooth coordination within a swarm robotic system is essential for the
effective execution of collective robot missions. Having efficient
communication is key to the successful coordination of swarm robots. This paper
proposes a new communication-efficient decentralized cooperative reinforcement
learning algorithm for coordinating swarm robots. It achieves efficiency by
building hierarchically on local information exchanges. We consider
a case study application of maze solving through cooperation among a group of
robots, where the time and costs are minimized while avoiding inter-robot
collisions and path overlaps during exploration. With a solid theoretical
basis, we extensively analyze the algorithm with realistic CORE network
simulations and evaluate it against state-of-the-art solutions in terms of maze
coverage percentage and efficiency under communication-degraded environments.
The results demonstrate significantly higher coverage accuracy and efficiency
while reducing costs and overlaps even in high packet loss and low
communication range scenarios.
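To make the core idea concrete, the sketch below shows a minimal decentralized maze-coverage loop built around local information exchange: robots only merge their visited-cell maps with peers inside communication range and greedily expand into unexplored frontier cells. This is an illustrative toy, not the paper's reinforcement learning algorithm; the function names, the greedy frontier policy, and the Manhattan-distance communication test are all invented for this sketch.

```python
import random

def neighbors(cell, maze, visited):
    """Open, unvisited 4-connected cells (maze modeled as a set of open cells)."""
    x, y = cell
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in cand if c in maze and c not in visited]

def explore(maze, starts, comm_range=2, seed=0):
    """Greedy decentralized exploration: each robot expands into unvisited
    cells, merging visited sets only with robots within comm_range."""
    rng = random.Random(seed)
    pos = list(starts)
    visited = [{s} for s in starts]           # per-robot local maps
    for _ in range(4 * len(maze)):            # fixed step budget
        # local information exchange: merge maps with nearby robots only
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                dist = abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1])
                if dist <= comm_range:
                    merged = visited[i] | visited[j]
                    visited[i], visited[j] = merged, set(merged)
        for i, p in enumerate(pos):
            opts = neighbors(p, maze, visited[i])
            if not opts:                      # no frontier nearby: backtrack
                opts = neighbors(p, maze, set())
            if opts:
                pos[i] = rng.choice(opts)     # prefer frontier to limit overlap
            visited[i].add(pos[i])
    covered = set().union(*visited)
    return len(covered) / len(maze)           # coverage fraction in [0, 1]
```

Sharing maps only within communication range keeps per-step message cost bounded by the local neighborhood size rather than the swarm size, which is the intuition behind the paper's communication-efficiency claim.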
Related papers
- Communication- and Computation-Efficient Distributed Decision-Making in Multi-Robot Networks [2.8936428431504164]
We provide a distributed coordination paradigm that enables scalable and near-optimal joint motion planning among multiple robots.
Our algorithm is up to two orders of magnitude faster than competitive near-optimal algorithms.
In simulations of surveillance tasks with up to 45 robots, it enables real-time planning at the order of 1 Hz with superior coverage performance.
arXiv Detail & Related papers (2024-07-15T01:25:39Z)
- LPAC: Learnable Perception-Action-Communication Loops with Applications to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the problem.
A CNN processes localized perception; a graph neural network (GNN) facilitates inter-robot communication.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
- Asynchronous Perception-Action-Communication with Graph Neural Networks [93.58250297774728]
Collaboration in large robot swarms to achieve a common global objective is a challenging problem in large environments.
The robots must execute a Perception-Action-Communication loop -- they perceive their local environment, communicate with other robots, and take actions in real time.
Recently, this has been addressed using Graph Neural Networks (GNNs) for applications such as flocking and coverage control.
This paper proposes a framework for asynchronous PAC in robot swarms, where decentralized GNNs are used to compute navigation actions and generate messages for communication.
arXiv Detail & Related papers (2023-09-18T21:20:50Z)
- Simulation of robot swarms for learning communication-aware coordination [0.0]
We train end-to-end Neural Networks that take as input local observations obtained from an omniscient centralised controller.
Experiments are run in Enki, a high-performance open-source simulator for planar robots.
arXiv Detail & Related papers (2023-02-25T17:17:40Z)
- AdverSAR: Adversarial Search and Rescue via Multi-Agent Reinforcement Learning [4.843554492319537]
We propose an algorithm that allows robots to efficiently coordinate their strategies in the presence of adversarial inter-agent communications.
It is assumed that the robots have no prior knowledge of the target locations, and they can interact with only a subset of neighboring robots at any time.
The effectiveness of our approach is demonstrated on a collection of prototype grid-world environments.
arXiv Detail & Related papers (2022-12-20T08:13:29Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Centralizing State-Values in Dueling Networks for Multi-Robot Reinforcement Learning Mapless Navigation [87.85646257351212]
We study the problem of multi-robot mapless navigation in the popular Centralized Training and Decentralized Execution (CTDE) paradigm.
This problem is challenging when each robot considers its path without explicitly sharing observations with other robots.
We propose a novel architecture for CTDE that uses a centralized state-value network to compute a joint state-value.
arXiv Detail & Related papers (2021-12-16T16:47:00Z)
- Graph Neural Networks for Decentralized Multi-Robot Submodular Action Selection [101.38634057635373]
We focus on applications where robots are required to jointly select actions to maximize team submodular objectives.
We propose a general-purpose learning architecture for submodular maximization at scale, with decentralized communications.
We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.
arXiv Detail & Related papers (2021-05-18T15:32:07Z)
- Learning Connectivity for Data Distribution in Robot Teams [96.39864514115136]
We propose a task-agnostic, decentralized, low-latency method for data distribution in ad-hoc networks using Graph Neural Networks (GNN).
Our approach enables multi-agent algorithms based on global state information to function by ensuring it is available at each robot.
We train the distributed GNN communication policies via reinforcement learning using the average Age of Information as the reward function and show that it improves training stability compared to task-specific reward functions.
arXiv Detail & Related papers (2021-03-08T21:48:55Z)
- With Whom to Communicate: Learning Efficient Communication for Multi-Robot Collision Avoidance [17.18628401523662]
This paper presents an efficient communication method that solves the problem of "when" and with "whom" to communicate in multi-robot collision avoidance scenarios.
In this approach, every robot learns to reason about other robots' states and considers the risk of future collisions before asking for the trajectory plans of other robots.
arXiv Detail & Related papers (2020-09-25T09:49:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.