Improving the Generalization of Unseen Crowd Behaviors for Reinforcement Learning based Local Motion Planners
- URL: http://arxiv.org/abs/2410.12232v1
- Date: Wed, 16 Oct 2024 04:46:21 GMT
- Title: Improving the Generalization of Unseen Crowd Behaviors for Reinforcement Learning based Local Motion Planners
- Authors: Wen Zheng Terence Ng, Jianda Chen, Sinno Jialin Pan, Tianwei Zhang
- Abstract summary: Current Reinforcement Learning-based motion planners rely on a single policy to simulate pedestrian movements.
We introduce an efficient method that enhances agent diversity within a single policy by maximizing an information-theoretic objective.
In assessing an agent's robustness against unseen crowds, we propose diverse scenarios inspired by pedestrian crowd behaviors.
- Abstract: Deploying a safe mobile robot policy in scenarios with human pedestrians is challenging due to their unpredictable movements. Current Reinforcement Learning-based motion planners rely on a single policy to simulate pedestrian movements and can suffer from overfitting. Alternatively, framing collision avoidance as a multi-agent problem, where agents generate dynamic movements while learning to reach their goals, can lead to conflicts with human pedestrians because the agents are homogeneous. To tackle this problem, we introduce an efficient method that enhances agent diversity within a single policy by maximizing an information-theoretic objective. This diversity enriches each agent's experiences, improving its adaptability to unseen crowd behaviors. To assess an agent's robustness against unseen crowds, we propose diverse evaluation scenarios inspired by pedestrian crowd behaviors. Our behavior-conditioned policies outperform existing works in these challenging scenes, reducing potential collisions without additional travel time or distance.
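The abstract describes the diversity objective only at a high level. A common way to realize such an information-theoretic objective (in the style of DIAYN-like skill discovery; this is an illustrative assumption, not the paper's exact formulation) is to condition each agent on a behavior latent z and reward it for visiting states from which a learned discriminator can recover z, i.e. r_div = log q(z|s) - log p(z). The sketch below uses a minimal linear discriminator in NumPy; all class and function names are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class BehaviorDiscriminator:
    """Linear classifier q(z | s) over n_behaviors, trained by gradient
    ascent on the log-likelihood of each state's true behavior label."""

    def __init__(self, state_dim, n_behaviors, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((state_dim, n_behaviors))
        self.lr = lr
        self.n = n_behaviors

    def probs(self, states):
        # q(z | s) for a batch of states, shape (batch, n_behaviors)
        return softmax(states @ self.W)

    def update(self, states, z):
        # one step of gradient ascent on mean log q(z|s)
        p = self.probs(states)
        onehot = np.eye(self.n)[z]
        self.W += self.lr * states.T @ (onehot - p) / len(states)

    def diversity_reward(self, states, z):
        # r_div = log q(z|s) - log p(z), with a uniform prior p(z) = 1/n;
        # positive when the discriminator identifies the behavior better
        # than chance, pushing behaviors toward distinct state regions
        q = self.probs(states)[np.arange(len(states)), z]
        return np.log(q + 1e-8) - np.log(1.0 / self.n)
```

In a full pipeline, r_div would be added to the task reward of each behavior-conditioned policy, and the discriminator would be updated on the states those policies visit, so that agents are driven apart in state space while still pursuing their goals.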
Related papers
- Towards Learning Scalable Agile Dynamic Motion Planning for Robosoccer Teams with Policy Optimization [0.0]
Dynamic Motion Planning for Multi-Agent Systems in the presence of obstacles is a universal and unsolved problem.
We present a learning-based dynamic navigation model and demonstrate it in a simple Robosoccer game environment.
arXiv Detail & Related papers (2025-02-08T11:13:07Z)
- QuadrupedGPT: Towards a Versatile Quadruped Agent in Open-ended Worlds [51.05639500325598]
We introduce QuadrupedGPT, designed to follow diverse commands with agility comparable to that of a pet.
Our agent shows proficiency in handling diverse tasks and intricate instructions, representing a significant step toward the development of versatile quadruped agents.
arXiv Detail & Related papers (2024-06-24T12:14:24Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- Adapt On-the-Go: Behavior Modulation for Single-Life Robot Deployment [88.06408322210025]
We study the problem of adapting on-the-fly to novel scenarios during deployment.
Our approach, RObust Autonomous Modulation (ROAM), introduces a mechanism based on the perceived value of pre-trained behaviors.
We demonstrate that ROAM enables a robot to adapt rapidly to changes in dynamics both in simulation and on a real Go1 quadruped.
arXiv Detail & Related papers (2023-11-02T08:22:28Z)
- Robust multi-agent coordination via evolutionary generation of auxiliary adversarial attackers [23.15190337027283]
We propose Robust Multi-Agent Coordination via Generation of Auxiliary Adversarial Attackers (ROMANCE).
ROMANCE enables the trained policy to encounter diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations.
The quality objective aims to minimize the ego-system's coordination effect, while a novel diversity regularizer diversifies the behaviors among attackers.
arXiv Detail & Related papers (2023-05-10T05:29:47Z)
- An Energy-aware and Fault-tolerant Deep Reinforcement Learning based approach for Multi-agent Patrolling Problems [0.5008597638379226]
We propose an approach based on model-free, deep multi-agent reinforcement learning.
Agents are trained to patrol an environment with various unknown dynamics and factors.
They can automatically recharge themselves to support continuous collective patrolling.
This architecture provides a patrolling system that can tolerate agent failures and allow supplementary agents to be added to replace failed agents or to increase the overall patrol performance.
arXiv Detail & Related papers (2022-12-16T01:38:35Z)
- Enhanced method for reinforcement learning based dynamic obstacle avoidance by assessment of collision risk [0.0]
This paper proposes a general training environment where we gain control over the difficulty of the obstacle avoidance task.
We found that shifting the training towards a greater task difficulty can massively increase the final performance.
arXiv Detail & Related papers (2022-12-08T07:46:42Z)
- Influencing Towards Stable Multi-Agent Interactions [12.477674452685756]
Learning in multi-agent environments is difficult due to the non-stationarity introduced by an opponent's or partner's changing behaviors.
We propose an algorithm that proactively influences the other agent's strategy so that it stabilizes.
We demonstrate that stabilizing the other agent's strategy improves efficiency in maximizing the task reward across a variety of simulated environments.
arXiv Detail & Related papers (2021-10-05T16:46:04Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.