Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games
- URL: http://arxiv.org/abs/2403.10794v1
- Date: Sat, 16 Mar 2024 03:53:55 GMT
- Title: Diffusion-Reinforcement Learning Hierarchical Motion Planning in Adversarial Multi-agent Games
- Authors: Zixuan Wu, Sean Ye, Manisha Natarajan, Matthew C. Gombolay
- Abstract summary: We focus on a motion planning task for an evasive target in a partially observable multi-agent adversarial pursuit-evasion game (PEG).
These pursuit-evasion problems are relevant to various applications, such as search and rescue operations and surveillance robots.
We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data.
- Score: 6.532258098619471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement Learning (RL)-based motion planning has recently shown the potential to outperform traditional approaches from autonomous navigation to robot manipulation. In this work, we focus on a motion planning task for an evasive target in a partially observable multi-agent adversarial pursuit-evasion game (PEG). These pursuit-evasion problems are relevant to various applications, such as search and rescue operations and surveillance robots, where robots must effectively plan their actions to gather intelligence or accomplish mission tasks while avoiding detection or capture. We propose a hierarchical architecture that integrates a high-level diffusion model to plan global paths responsive to environment data, while a low-level RL algorithm reasons about evasive versus global path-following behavior. Our approach outperforms baselines by 51.2% by leveraging the diffusion model to guide the RL algorithm toward more efficient exploration, and it improves explainability and predictability.
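As an illustration only (not the authors' code), the sketch below shows one way the hierarchical interface described in the abstract could be wired together: a high-level diffusion-style planner produces a global waypoint path conditioned on the observation, and a low-level policy chooses between path-following and evasive actions. All class and function names, the placeholder denoising rule, and the evasion heuristic are assumptions.

```python
import numpy as np

class DiffusionGlobalPlanner:
    """Hypothetical stand-in for the high-level diffusion planner: iteratively
    refines a noisy waypoint sequence toward a path conditioned on the
    observation. A trained denoising network would replace the placeholder
    update used here."""
    def __init__(self, horizon=16, denoise_steps=20):
        self.horizon = horizon
        self.denoise_steps = denoise_steps

    def plan(self, obs, goal):
        path = np.random.randn(self.horizon, 2)          # start from pure noise
        for t in range(self.denoise_steps):
            alpha = (t + 1) / self.denoise_steps
            prior = np.linspace(obs["position"], goal, self.horizon)
            path = (1 - alpha) * path + alpha * prior    # placeholder "denoising"
        return path

class EvasionPolicy:
    """Hypothetical low-level policy: tracks the global path unless a pursuer
    is nearby, in which case it outputs an evasive direction."""
    def act(self, obs, path, evade_radius=2.0):
        to_waypoint = path[0] - obs["position"]
        pursuer_vecs = [p - obs["position"] for p in obs["pursuers"]]
        nearest = min(pursuer_vecs, key=np.linalg.norm, default=None)
        if nearest is not None and np.linalg.norm(nearest) < evade_radius:
            return -nearest / (np.linalg.norm(nearest) + 1e-8)     # evade
        return to_waypoint / (np.linalg.norm(to_waypoint) + 1e-8)  # follow path

# Usage: replan the global path at a low rate, act locally every step.
planner, policy = DiffusionGlobalPlanner(), EvasionPolicy()
obs = {"position": np.zeros(2), "pursuers": [np.array([1.5, 0.5])]}
path = planner.plan(obs, goal=np.array([10.0, 10.0]))
action = policy.act(obs, path)
```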
Related papers
- Action abstractions for amortized sampling [49.384037138511246]
We propose an approach to incorporate the discovery of action abstractions, or high-level actions, into the policy optimization process.
Our approach involves iteratively extracting action subsequences commonly used across many high-reward trajectories and 'chunking' them into a single action that is added to the action space.
arXiv Detail & Related papers (2024-10-19T19:22:50Z)
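A minimal, hypothetical sketch of the chunking step described in the entry above: count length-k action subsequences in the highest-return trajectories and return the most frequent one as a candidate macro-action. The data layout (trajectories as lists of discrete action ids) and the selection rule are assumptions, not the paper's exact procedure.

```python
from collections import Counter

def extract_macro_action(trajectories, rewards, k=3, top_frac=0.2):
    """Return the most common length-k action subsequence among the
    highest-reward trajectories as a candidate macro-action."""
    n_top = max(1, int(len(trajectories) * top_frac))
    top = [traj for _, traj in sorted(zip(rewards, trajectories),
                                      key=lambda p: p[0], reverse=True)[:n_top]]
    counts = Counter()
    for traj in top:
        for i in range(len(traj) - k + 1):
            counts[tuple(traj[i:i + k])] += 1
    if not counts:
        return None
    return max(counts, key=counts.get)

# Usage: the returned tuple, e.g. (2, 2, 1), would be added to the action
# space as a single high-level action and the process repeated.
trajs = [[0, 2, 2, 1, 3], [2, 2, 1, 0], [1, 0, 3, 2]]
rets = [5.0, 4.0, 0.5]
print(extract_macro_action(trajs, rets, k=3, top_frac=0.7))
```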
- LDP: A Local Diffusion Planner for Efficient Robot Navigation and Collision Avoidance [16.81917489473445]
The conditional diffusion model has been demonstrated as an efficient tool for learning robot policies.
The intricate nature of real-world scenarios, characterized by dynamic obstacles and maze-like structures, underscores the complexity of robot local navigation decision-making.
arXiv Detail & Related papers (2024-07-02T04:53:35Z)
- Improving Generalization in Aerial and Terrestrial Mobile Robots Control Through Delayed Policy Learning [0.19638749905454383]
Deep Reinforcement Learning (DRL) has emerged as a promising approach to enhancing motion control and decision-making.
This paper explores the impact of the Delayed Policy Updates (DPU) technique on fostering generalization to new situations.
arXiv Detail & Related papers (2024-06-04T04:16:38Z)
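Delayed Policy Updates, referenced in the entry above, are commonly understood in the TD3 sense: the actor and target networks are updated less frequently than the critic. The skeleton below illustrates only that schedule, with an assumed `agent` interface (update_critic / update_actor / update_targets) rather than the paper's implementation.

```python
def train(agent, replay_buffer, total_steps=100_000, policy_delay=2, batch_size=256):
    """Actor-critic loop with delayed policy updates: the critic is updated
    every step, while the actor and target networks are updated only every
    `policy_delay` steps."""
    for step in range(total_steps):
        batch = replay_buffer.sample(batch_size)
        agent.update_critic(batch)              # critic learns every step
        if step % policy_delay == 0:            # actor and targets lag behind
            agent.update_actor(batch)
            agent.update_targets()
```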
- Trial and Error: Exploration-Based Trajectory Optimization for LLM Agents [49.85633804913796]
We present an exploration-based trajectory optimization approach, referred to as ETO.
This learning method is designed to enhance the performance of open LLM agents.
Our experiments on three complex tasks demonstrate that ETO consistently surpasses baseline performance by a large margin.
arXiv Detail & Related papers (2024-03-04T21:50:29Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm, which has the capability to learn control policies at a notably faster rate compared to similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- Distributed multi-agent target search and tracking with Gaussian process and reinforcement learning [26.499110405106812]
We propose a multi-agent reinforcement learning technique with target map building based on a distributed Gaussian process.
We evaluate the performance and transferability of the trained policy in simulation and demonstrate the method on a swarm of micro unmanned aerial vehicles.
arXiv Detail & Related papers (2023-08-29T01:53:14Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
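As a self-contained illustration of one ingredient named in the entry above (chance-constrained obstacle avoidance toward a target provided by a high-level planner), the sketch below approximates the chance constraint with Monte Carlo rollouts and rejects candidate controls whose estimated collision probability exceeds a bound. The dynamics, noise model, and parameters are illustrative and not SABER's formulation.

```python
import numpy as np

def chance_constrained_step(x, subgoal, obstacles, delta=0.05, noise_std=0.05,
                            n_candidates=50, n_samples=200, horizon=5, dt=0.1,
                            radius=0.3):
    """Pick a velocity command toward `subgoal` (e.g. supplied by a high-level
    DQN planner) whose Monte Carlo collision-probability estimate stays below
    delta. Simple single-integrator dynamics with additive Gaussian noise."""
    rng = np.random.default_rng(0)
    best_u, best_cost = None, np.inf
    for _ in range(n_candidates):
        u = rng.uniform(-1.0, 1.0, size=2)                # candidate velocity
        hits = 0
        for _ in range(n_samples):                         # sampled rollouts
            xs = x.copy()
            for _ in range(horizon):
                xs = xs + dt * u + rng.normal(0.0, noise_std, size=2)
                if any(np.linalg.norm(xs - o) < radius for o in obstacles):
                    hits += 1
                    break
        if hits / n_samples > delta:                       # constraint violated
            continue
        cost = np.linalg.norm(x + horizon * dt * u - subgoal)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Usage with a hand-picked subgoal standing in for the learned high-level planner.
u = chance_constrained_step(np.zeros(2), np.array([1.0, 1.0]),
                            obstacles=[np.array([0.5, 0.5])])
```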
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Mobile Robot Path Planning in Dynamic Environments through Globally Guided Reinforcement Learning [12.813442161633116]
We introduce a globally guided reinforcement learning approach (G2RL) to solve the multi-robot planning problem.
G2RL incorporates a novel path reward structure that generalizes to arbitrary environments.
We evaluate our method across different map types, obstacle densities, and numbers of robots.
arXiv Detail & Related papers (2020-05-11T20:42:29Z)
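The "path reward structure" mentioned in the G2RL entry above is not spelled out here; the following is one plausible form of globally guided reward shaping (a small step penalty, a bonus for progressing to a farther waypoint on a precomputed guidance path, and a sparse goal reward). The exact reward used in G2RL may differ.

```python
import numpy as np

def guided_reward(pos, prev_best_idx, guidance_path, goal, goal_tol=0.5,
                  r_goal=10.0, r_progress=1.0, r_step=-0.01):
    """Reward = small per-step penalty + bonus for reaching a farther waypoint
    on the global guidance path + large sparse reward at the goal."""
    dists = np.linalg.norm(guidance_path - pos, axis=1)
    best_idx = int(np.argmin(dists))
    reward = r_step
    if best_idx > prev_best_idx:                    # progressed along the path
        reward += r_progress * (best_idx - prev_best_idx)
    if np.linalg.norm(pos - goal) < goal_tol:       # sparse terminal reward
        reward += r_goal
    return reward, max(best_idx, prev_best_idx)

# Usage: guidance_path would come from a global planner such as A*.
path = np.linspace([0.0, 0.0], [5.0, 5.0], 20)
r, idx = guided_reward(np.array([0.6, 0.5]), 0, path, goal=np.array([5.0, 5.0]))
```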
- Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous [66.6895109554163]
Underlying the human ability to align goals with other agents is their ability to predict the intentions of others and actively update their own plans.
We propose hierarchical predictive planning (HPP), a model-based reinforcement learning method for decentralized multiagent rendezvous.
arXiv Detail & Related papers (2020-03-15T19:49:20Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
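As a toy illustration of a timing-based attack like the one summarized in the last entry, the sketch below perturbs the observation only on frames where the policy's preference between actions is strong, a common trigger rule for strategically-timed attacks. The trigger rule, noise model, and toy policy are assumptions rather than the paper's method.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def timed_attack(obs, policy_logits_fn, trigger=0.6, noise_std=0.5, rng=None):
    """Inject observation noise only on 'critical' frames, i.e. when the
    policy's preference gap between its best and worst action is large."""
    rng = rng or np.random.default_rng(0)
    probs = softmax(policy_logits_fn(obs))
    if probs.max() - probs.min() > trigger:          # attack only critical frames
        return obs + rng.normal(0.0, noise_std, size=obs.shape)
    return obs                                        # leave other frames untouched

# Toy linear policy over 3 actions, just to exercise the trigger rule.
W = np.array([[2.0, -1.0], [0.0, 0.5], [-2.0, 1.0]])
obs = np.array([1.0, 0.2])
attacked_obs = timed_attack(obs, lambda o: W @ o, trigger=0.6)
```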
This list is automatically generated from the titles and abstracts of the papers in this site.