Subdimensional Expansion Using Attention-Based Learning For Multi-Agent
Path Finding
- URL: http://arxiv.org/abs/2109.14695v1
- Date: Wed, 29 Sep 2021 20:01:04 GMT
- Title: Subdimensional Expansion Using Attention-Based Learning For Multi-Agent
Path Finding
- Authors: Lakshay Virmani, Zhongqiang Ren, Sivakumar Rathinam and Howie Choset
- Abstract summary: Multi-Agent Path Finding (MAPF) finds conflict-free paths for multiple agents from their respective start to goal locations.
We develop a novel multi-agent planner called LM* by integrating a learning-based single-agent planner with M*.
Our results show that for both "seen" and "unseen" maps, in comparison with M*, LM* has fewer conflicts to be resolved and thus runs faster and enjoys higher success rates.
- Score: 9.2127262112464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Agent Path Finding (MAPF) finds conflict-free paths for multiple agents
from their respective start to goal locations. MAPF is challenging as the joint
configuration space grows exponentially with respect to the number of agents.
Among MAPF planners, search-based methods, such as CBS and M*, effectively
bypass the curse of dimensionality by employing a dynamically-coupled strategy:
agents are planned in a fully decoupled manner at first, where potential
conflicts between agents are ignored; and then agents either follow their
individual plans or are coupled together for planning to resolve the conflicts
between them. In general, the number of conflicts to be resolved determines the
run time of these planners, and most existing work focuses on how to resolve
these conflicts efficiently. In this work, we take a different view and
aim to reduce the number of conflicts (and thus improve the overall search
efficiency) by improving each agent's individual plan. By leveraging a Visual
Transformer, we develop a learning-based single-agent planner, which plans for
a single agent while paying attention to both the structure of the map and
other agents with whom conflicts may happen. We then develop a novel
multi-agent planner called LM* by integrating this learning-based single-agent
planner with M*. Our results show that for both "seen" and "unseen" maps, in
comparison with M*, LM* has fewer conflicts to be resolved and thus runs
faster and enjoys higher success rates. We empirically show that MAPF solutions
computed by LM* are near-optimal. Our code is available at
https://github.com/lakshayvirmani/learning-assisted-mstar .
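To make the dynamically-coupled strategy concrete, the sketch below detects conflicts between the agents' individual plans; in M*, the agents involved in such conflicts are coupled and replanned jointly, while LM* aims to shrink this set by giving each agent a better individual plan. The grid representation, function name, and toy paths are illustrative assumptions, not code from the linked repository.

```python
from itertools import combinations

def detect_conflicts(paths):
    """Return pairs of agents whose individual plans collide.

    `paths` maps agent id -> list of grid cells, one per timestep.
    Checks vertex conflicts (same cell, same time) and edge conflicts
    (two agents swapping cells between consecutive timesteps).
    """
    conflicts = set()
    horizon = max(len(p) for p in paths.values())
    for i, j in combinations(sorted(paths), 2):
        pi, pj = paths[i], paths[j]
        for t in range(horizon):
            a, b = pi[min(t, len(pi) - 1)], pj[min(t, len(pj) - 1)]
            a_next = pi[min(t + 1, len(pi) - 1)]
            b_next = pj[min(t + 1, len(pj) - 1)]
            if a == b or (a_next == b and b_next == a):
                conflicts.add((i, j))
                break
    return conflicts

# Toy example: agents 0 and 1 both occupy cell (0, 1) at t = 1, so an
# M*-style planner would couple them and replan their joint subspace.
paths = {
    0: [(0, 0), (0, 1), (0, 2)],
    1: [(1, 1), (0, 1), (1, 1)],
    2: [(3, 3), (3, 3), (3, 3)],
}
print(detect_conflicts(paths))   # {(0, 1)}
```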
Related papers
- Cooperative Reward Shaping for Multi-Agent Pathfinding [4.244426154524592]
The primary objective of Multi-Agent Pathfinding (MAPF) is to plan efficient and conflict-free paths for all agents.
Traditional algorithms struggle to achieve efficient distributed path planning for multiple agents.
This letter introduces a reward-shaping technique based on Independent Q-Learning (IQL).
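As a rough illustration of reward shaping on top of Independent Q-Learning, the snippet below adds a cooperative shaping term to a standard tabular Q-update. The shaping function, penalty value, and hyperparameters are assumptions made for illustration; they are not taken from the letter.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95                 # illustrative hyperparameters
ACTIONS = ["up", "down", "left", "right", "wait"]

def shaped_reward(env_reward, next_state, neighbour_cells):
    """Add a small penalty for stepping into a cell occupied by a nearby
    agent -- an assumed cooperative shaping term, not the letter's."""
    return env_reward + (-0.5 if next_state in neighbour_cells else 0.0)

def iql_update(q, state, action, env_reward, next_state, neighbour_cells):
    """One Independent Q-Learning step for a single agent."""
    r = shaped_reward(env_reward, next_state, neighbour_cells)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])

q = defaultdict(float)
iql_update(q, state=(0, 0), action="right", env_reward=-1.0,
           next_state=(0, 1), neighbour_cells={(0, 1)})
print(q[((0, 0), "right")])              # ≈ -0.15 with this toy transition
```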
arXiv Detail & Related papers (2024-07-15T02:44:41Z)
- Scalable Mechanism Design for Multi-Agent Path Finding [87.40027406028425]
Multi-Agent Path Finding (MAPF) involves determining paths for multiple agents to travel simultaneously and collision-free through a shared area toward given goal locations.
Finding an optimal solution is often computationally infeasible, making the use of approximate, suboptimal algorithms essential.
We introduce the problem of scalable mechanism design for MAPF and propose three strategyproof mechanisms, two of which even use approximate MAPF algorithms.
arXiv Detail & Related papers (2024-01-30T14:26:04Z)
- Decentralized Monte Carlo Tree Search for Partially Observable Multi-agent Pathfinding [49.730902939565986]
The Multi-Agent Pathfinding problem involves finding a set of conflict-free paths for a group of agents confined to a graph.
In this study, we focus on the decentralized MAPF setting, where the agents may observe the other agents only locally.
We propose a decentralized multi-agent Monte Carlo Tree Search (MCTS) method for MAPF tasks.
arXiv Detail & Related papers (2023-12-26T06:57:22Z)
- Learn to Follow: Decentralized Lifelong Multi-agent Pathfinding via Planning and Learning [46.354187895184154]
The Multi-agent Pathfinding (MAPF) problem asks for a set of conflict-free paths for a set of agents confined to a graph.
In this work, we investigate the decentralized MAPF setting, in which the central controller that possesses all information on the agents' locations and goals is absent.
We focus on the practically important lifelong variant of MAPF, which involves continuously assigning new goals to the agents upon arrival at the previous ones.
arXiv Detail & Related papers (2023-10-02T13:51:32Z)
- Traffic Flow Optimisation for Lifelong Multi-Agent Path Finding [29.76466191644455]
Multi-Agent Path Finding (MAPF) is a fundamental problem in robotics that asks us to compute collision-free paths for a team of agents.
We propose a new approach for MAPF where agents are guided to their destination by following congestion-avoiding paths.
We evaluate the idea in two large-scale settings: one-shot MAPF, where each agent has a single destination, and lifelong MAPF, where agents are continuously assigned new destinations.
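One way to picture congestion-avoiding guidance is to plan each agent on a graph whose edge costs grow with how many previously routed agents already use each edge. The linear penalty and the toy graph below are illustrative assumptions, not the paper's cost model or algorithm.

```python
import heapq
from collections import defaultdict

def congestion_cost(base_cost, usage, beta=0.5):
    """Illustrative penalty: cost grows linearly with the number of
    agents already routed through the edge (an assumption)."""
    return base_cost * (1.0 + beta * usage)

def shortest_path(graph, usage, start, goal):
    """Dijkstra over congestion-weighted edge costs.
    `graph` maps node -> {neighbour: base_cost}; `usage` counts how many
    previously planned agents traverse each (node, neighbour) edge."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u].items():
            nd = d + congestion_cost(c, usage[(u, v)])
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

graph = {"A": {"B": 1, "C": 1}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
usage = defaultdict(int, {("A", "B"): 3})     # edge A->B is already congested
print(shortest_path(graph, usage, "A", "D"))  # ['A', 'C', 'D']
```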
arXiv Detail & Related papers (2023-08-22T07:17:39Z)
- Monte-Carlo Tree Search for Multi-Agent Pathfinding: Preliminary Results [60.4817465598352]
We introduce an original variant of Monte-Carlo Tree Search (MCTS) tailored to multi-agent pathfinding.
Specifically, we use individual paths to assist the agents with goal-reaching behavior.
We also use a dedicated decomposition technique to reduce the branching factor of the tree search procedure.
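A simple reading of "individual paths assist goal-reaching" is a bonus in the tree-policy score for the action that an agent's precomputed individual path would take next. The UCT variant below is only a sketch of that idea; the constants and the form of the bias are assumptions, not the paper's method.

```python
import math

def uct_score(node_stats, action, suggested_action, c=1.4, bias=0.3):
    """UCT selection score with an extra bonus for the action that the
    agent's individual (single-agent) path suggests next.
    `node_stats` maps action -> (visits, total_value)."""
    visits, value = node_stats[action]
    parent_visits = sum(v for v, _ in node_stats.values())
    if visits == 0:
        return float("inf")                  # expand unvisited actions first
    exploit = value / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    bonus = bias if action == suggested_action else 0.0
    return exploit + explore + bonus

stats = {"up": (4, 2.0), "right": (3, 1.8), "wait": (2, 0.2)}
best = max(stats, key=lambda a: uct_score(stats, a, suggested_action="right"))
print(best)   # 'right': the path-suggested action wins in this toy node
```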
arXiv Detail & Related papers (2023-07-25T12:33:53Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
Diffusion models (DMs) have recently achieved huge success in various scenarios, including offline reinforcement learning.
We propose MADiff, a novel generative multi-agent learning framework to tackle this problem.
Our experiments show the superior performance of MADiff compared to baseline algorithms in a wide range of multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)
- Decentralised Approach for Multi Agent Path Finding [6.599344783327053]
Multi Agent Path Finding (MAPF) requires identification of conflict-free paths for spatially-extended agents.
These find application in real-world problems such as the Convoy Movement Problem and Train Scheduling.
Our proposed approach, Decentralised Multi Agent Path Finding (DeMAPF), handles MAPF as a sequence of path-planning and allocation problems.
arXiv Detail & Related papers (2021-06-03T18:07:26Z)
- Dynamic Multi-Agent Path Finding based on Conflict Resolution using Answer Set Programming [0.0]
We introduce a new method to solve D-MAPF based on conflict resolution.
The idea is that, when a set of new agents joins the team and conflicts arise, instead of replanning for the whole team we replan only for a minimal subset of agents whose plans conflict with each other.
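The replanning idea can be sketched as: keep the existing plans, detect which agents' plans conflict, and replan only a subset of agents that covers those conflicts. The greedy selection below is an illustrative stand-in for the minimal-subset selection that the paper formulates with Answer Set Programming; the helper names and toy plans are assumptions.

```python
def conflicting_pairs(plans):
    """Pairs of agents whose plans share a cell at the same timestep
    (vertex conflicts only, for brevity)."""
    pairs = set()
    agents = sorted(plans)
    for idx, i in enumerate(agents):
        for j in agents[idx + 1:]:
            horizon = max(len(plans[i]), len(plans[j]))
            for t in range(horizon):
                a = plans[i][min(t, len(plans[i]) - 1)]
                b = plans[j][min(t, len(plans[j]) - 1)]
                if a == b:
                    pairs.add((i, j))
                    break
    return pairs

def agents_to_replan(pairs):
    """Greedy cover of the conflict pairs -- a simple stand-in for the
    minimal subset computed exactly in the paper."""
    chosen, remaining = set(), set(pairs)
    while remaining:
        counts = {}
        for i, j in remaining:          # count conflicts per agent
            counts[i] = counts.get(i, 0) + 1
            counts[j] = counts.get(j, 0) + 1
        agent = max(counts, key=counts.get)
        chosen.add(agent)
        remaining = {p for p in remaining if agent not in p}
    return chosen

plans = {
    "a1": [(0, 0), (0, 1), (0, 2)],
    "a2": [(1, 1), (0, 1), (1, 1)],   # newly joined agent, collides with a1 at t=1
    "a3": [(5, 5), (5, 6), (5, 7)],
}
print(agents_to_replan(conflicting_pairs(plans)))  # one agent covering the a1/a2 conflict
```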
arXiv Detail & Related papers (2020-09-22T00:50:35Z)
- Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints [52.58352707495122]
We present a multi-robot allocation algorithm that decouples the key computational challenges of sequential decision-making under uncertainty and multi-agent coordination.
We validate our results over a wide range of simulations on two distinct domains: multi-arm conveyor belt pick-and-place and multi-drone delivery dispatch in a city.
arXiv Detail & Related papers (2020-05-27T01:10:41Z)