Mobile Robot Path Planning in Dynamic Environments through Globally
Guided Reinforcement Learning
- URL: http://arxiv.org/abs/2005.05420v2
- Date: Fri, 11 Sep 2020 21:14:15 GMT
- Title: Mobile Robot Path Planning in Dynamic Environments through Globally
Guided Reinforcement Learning
- Authors: Binyu Wang and Zhe Liu and Qingbiao Li and Amanda Prorok
- Abstract summary: We introduce a globally guided reinforcement learning approach (G2RL) to solve the multi-robot path planning problem.
G2RL incorporates a novel path reward structure that generalizes to arbitrary environments.
We evaluate our method across different map types, obstacle densities, and numbers of robots.
- Score: 12.813442161633116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Path planning for mobile robots in large dynamic environments is a
challenging problem, as the robots are required to efficiently reach their
given goals while simultaneously avoiding potential conflicts with other robots
or dynamic objects. In the presence of dynamic obstacles, traditional solutions
usually employ re-planning strategies, which re-call a planning algorithm to
search for an alternative path whenever the robot encounters a conflict.
However, such re-planning strategies often cause unnecessary detours. To
address this issue, we propose a learning-based technique that exploits
environmental spatio-temporal information. Different from existing
learning-based methods, we introduce a globally guided reinforcement learning
approach (G2RL), which incorporates a novel reward structure that generalizes
to arbitrary environments. We apply G2RL to solve the multi-robot path planning
problem in a fully distributed reactive manner. We evaluate our method across
different map types, obstacle densities, and numbers of robots. Experimental
results show that G2RL generalizes well, outperforming existing distributed
methods, and performing very similarly to fully centralized state-of-the-art
benchmarks.
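The globally guided reward idea can be sketched as follows. This is a minimal illustration only: the grid representation, the reward constants, and the helper names are assumptions for exposition, not the paper's exact design, beyond the core idea that a precomputed global guide path (e.g. from A*) shapes the per-step RL reward.

```python
# Hedged sketch of a globally guided reward: a dense global guide path shapes
# the RL reward so the agent is paid for progress along it, while remaining
# free to deviate locally around dynamic obstacles.
# STEP_COST and GUIDE_BONUS are illustrative values, not the paper's.

STEP_COST = -0.01   # small penalty every step, discourages detours
GUIDE_BONUS = 0.1   # reward per newly reached guide-path cell

def guided_reward(new_cell, guide_path, reached):
    """Reward for one transition.

    new_cell   -- (row, col) the agent moved to
    guide_path -- list of (row, col) cells of the precomputed global path
    reached    -- set of guide-path indices already credited (mutated here)
    """
    reward = STEP_COST
    for i, cell in enumerate(guide_path):
        if cell == new_cell and i not in reached:
            reached.add(i)          # each guide cell is credited once
            reward += GUIDE_BONUS
    return reward
```

Because the bonus is earned only once per guide cell, circling back onto the guide path yields no extra reward, so the agent is pushed forward along the global route rather than oscillating on it.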
Related papers
- Generalizability of Graph Neural Networks for Decentralized Unlabeled Motion Planning [72.86540018081531]
Unlabeled motion planning involves assigning a set of robots to target locations while ensuring collision avoidance.
This problem forms an essential building block for multi-robot systems in applications such as exploration, surveillance, and transportation.
We address this problem in a decentralized setting where each robot knows only the positions of its $k$-nearest robots and $k$-nearest targets.
arXiv Detail & Related papers (2024-09-29T23:57:25Z)
- Multi-Robot Informative Path Planning for Efficient Target Mapping using Deep Reinforcement Learning
We propose a novel deep reinforcement learning approach for multi-robot informative path planning.
We train our reinforcement learning policy via the centralized training and decentralized execution paradigm.
Our approach outperforms other state-of-the-art multi-robot target mapping approaches by 33.75% in terms of the number of discovered targets-of-interest.
arXiv Detail & Related papers (2024-09-25T14:27:37Z)
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of jointly finding a symbolic task plan and the continuous robot motions that realize it.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a wide variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games [6.532258098619471]
We focus on a motion planning task for an evasive target in partially observable multi-agent adversarial pursuit-evasion games (PEGs).
These pursuit-evasion problems are relevant to various applications, such as search and rescue operations and surveillance robots.
We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data.
arXiv Detail & Related papers (2024-03-16T03:53:55Z)
- Multi-Robot Path Planning Combining Heuristics and Multi-Agent Reinforcement Learning [0.0]
While moving, robots must avoid collisions with other moving robots while minimizing their travel distance.
Previous methods for this problem either continuously replan paths using search methods to avoid conflicts or choose appropriate collision avoidance strategies based on learning approaches.
We propose a path planning method, MAPPOHR, which combines a search, empirical rules, and multi-agent reinforcement learning.
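The combination of a searched path, empirical rules, and a learned policy can be sketched in miniature as follows. The arbitration rule and the policy stand-in here are illustrative assumptions for exposition, not MAPPOHR's actual components.

```python
# Hedged sketch of layering an empirical rule and a learned policy on top of a
# search-based plan, loosely in the spirit of MAPPOHR. The conflict rule and
# the fallback policy below are illustrative stand-ins, not the paper's design.

def next_move(robot_pos, planned_next, other_next_cells, policy_fallback):
    """Follow the searched path unless a rule detects a conflict.

    robot_pos        -- (row, col) current cell of this robot
    planned_next     -- next cell on the search-planned path
    other_next_cells -- set of cells claimed by other robots this step
    policy_fallback  -- learned policy invoked only on conflict
    """
    if planned_next in other_next_cells:      # empirical rule: avoid claimed cells
        return policy_fallback(robot_pos)     # let the learned policy resolve it
    return planned_next

# trivial stand-in "policy": wait in place when a conflict is detected
wait = lambda pos: pos
```

The design point is that the search result handles the common case cheaply, the rule filters out obvious conflicts, and the learned component is consulted only where the plan breaks down.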
arXiv Detail & Related papers (2023-06-02T05:07:37Z)
- Scalable Multi-robot Motion Planning for Congested Environments With Topological Guidance [2.846144602096543]
Multi-robot motion planning (MRMP) is the problem of finding collision-free paths for a set of robots in a continuous state space.
We extend an existing single-robot motion planning method to leverage the improved efficiency provided by topological guidance.
We demonstrate our method's ability to efficiently plan paths in complex environments with many narrow passages, scaling to robot teams of size up to 25 times larger than existing methods.
arXiv Detail & Related papers (2022-10-13T16:26:01Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
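The two-level structure described above can be sketched as follows. The greedy waypoint picker stands in for the Deep Q-learning planner and the proportional step stands in for the SMPC controller; both, along with all constants and the point-mass dynamics, are illustrative assumptions, not SABER's actual components.

```python
# Hedged sketch of a high-level/low-level navigation hierarchy: a high-level
# policy (a DQN agent in the paper; a greedy stand-in here) emits intermediate
# targets, and a low-level controller (SMPC in the paper; a one-step
# proportional move here) drives the robot toward them.

def high_level_target(pos, goal, horizon=3.0):
    """Pick a waypoint at most `horizon` away, in the direction of the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= horizon:
        return goal
    scale = horizon / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale)

def low_level_step(pos, target, gain=0.5):
    """Move a fraction of the way toward the current waypoint."""
    return (pos[0] + gain * (target[0] - pos[0]),
            pos[1] + gain * (target[1] - pos[1]))

def navigate(start, goal, tol=0.1, max_steps=100):
    """Alternate waypoint selection and low-level steps until near the goal."""
    pos = start
    for _ in range(max_steps):
        if ((pos[0] - goal[0]) ** 2 + (pos[1] - goal[1]) ** 2) ** 0.5 <= tol:
            break
        target = high_level_target(pos, goal)   # replan the waypoint each step
        pos = low_level_step(pos, target)
    return pos
```

The split mirrors the framework's division of labor: the high level reasons about where to go next toward the global goal, while the low level worries only about how to get to the current waypoint.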
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Learning to Generalize Across Long-Horizon Tasks from Human Demonstrations [52.696205074092006]
Generalization Through Imitation (GTI) is a two-stage offline imitation learning algorithm.
GTI exploits a structure where demonstrated trajectories for different tasks intersect at common regions of the state space.
In the first stage of GTI, we train a policy that leverages intersections to have the capacity to compose behaviors from different demonstration trajectories together.
In the second stage of GTI, we train a goal-directed agent to generalize to novel start and goal configurations.
arXiv Detail & Related papers (2020-03-13T02:25:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.