Deep Reinforcement Learning-Based Mapless Crowd Navigation with
Perceived Risk of the Moving Crowd for Mobile Robots
- URL: http://arxiv.org/abs/2304.03593v2
- Date: Sat, 23 Sep 2023 16:56:15 GMT
- Title: Deep Reinforcement Learning-Based Mapless Crowd Navigation with
Perceived Risk of the Moving Crowd for Mobile Robots
- Authors: Hafiq Anas, Ong Wee Hong, Owais Ahmed Malik
- Abstract summary: Current state-of-the-art crowd navigation approaches are mainly deep reinforcement learning (DRL)-based.
We propose a method that includes a Collision Probability (CP) in the observation space to give the robot a sense of the level of danger of the moving crowd.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current state-of-the-art crowd navigation approaches are mainly deep
reinforcement learning (DRL)-based. However, DRL-based methods suffer from the
issues of generalization and scalability. To overcome these challenges, we
propose a method that includes a Collision Probability (CP) in the observation
space to give the robot a sense of the level of danger of the moving crowd to
help the robot navigate safely through crowds with unseen behaviors. We
studied the effect of varying the number of moving obstacles the robot
attends to during navigation. During training, we generated local waypoints
to increase the
reward density and improve the learning efficiency of the system. Our approach
was developed using deep reinforcement learning (DRL) and trained using the
Gazebo simulator in a non-cooperative crowd environment with obstacles moving
at randomized speeds and directions. We then evaluated our model on four
different crowd-behavior scenarios. The results show that our method achieved a
100% success rate in all test settings. We compared our approach with a current
state-of-the-art DRL-based approach, and ours performed significantly
better, especially in terms of social safety. Importantly, our method can
navigate across different crowd behaviors and requires no fine-tuning
after being trained once. We further demonstrated the crowd navigation
capability of our model in real-world tests.
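To make the abstract's two main ideas concrete, here is a minimal Python sketch of a CP-augmented observation and a waypoint-shaped reward. This is an illustration only: the CP formula, the number of attended obstacles, and all coefficients below are assumptions chosen for readability, not the paper's implementation.

```python
import numpy as np

def collision_probability(rel_pos, rel_vel, horizon=2.0, robot_radius=0.3):
    """Rough per-obstacle collision probability over a short time horizon.

    rel_pos, rel_vel: 2D obstacle position/velocity relative to the robot.
    Returns a value in [0, 1]: high if the obstacle's current velocity
    brings it near the robot soon, decaying with time-to-closest-approach.
    """
    speed_sq = np.dot(rel_vel, rel_vel)
    if speed_sq < 1e-9:
        return 0.0  # obstacle effectively static relative to the robot
    # Time at which the obstacle is closest to the robot, clamped to horizon.
    t_star = np.clip(-np.dot(rel_pos, rel_vel) / speed_sq, 0.0, horizon)
    closest_dist = np.linalg.norm(rel_pos + t_star * rel_vel)
    proximity = np.exp(-max(closest_dist - robot_radius, 0.0))
    urgency = 1.0 - t_star / horizon  # sooner encounters are more dangerous
    return float(proximity * urgency)

def build_observation(lidar_scan, goal_vec, obstacles, k_attend=4):
    """Observation = raw sensing + goal + CPs of the k most dangerous obstacles."""
    cps = sorted((collision_probability(o["rel_pos"], o["rel_vel"])
                  for o in obstacles), reverse=True)[:k_attend]
    cps += [0.0] * (k_attend - len(cps))  # pad if fewer obstacles are visible
    return np.concatenate([lidar_scan, goal_vec, cps])

def shaped_reward(dist_to_waypoint, prev_dist, reached_waypoint,
                  collided, reached_goal):
    """Dense reward via local waypoints (all coefficients are placeholders)."""
    if collided:
        return -20.0
    if reached_goal:
        return 20.0
    r = 2.0 * (prev_dist - dist_to_waypoint)  # progress toward local waypoint
    if reached_waypoint:
        r += 5.0  # bonus keeps the reward signal dense along the route
    return r
```

Generating waypoints along the route means the agent is rewarded frequently for local progress rather than only at the final goal, which is the reward-density effect the abstract describes.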
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Avoidance Navigation Based on Offline Pre-Training Reinforcement Learning [0.0]
This paper presents a pre-training deep reinforcement learning (DRL) method for mapless avoidance navigation for mobile robots.
An efficient offline training strategy is proposed to speed up the inefficient random exploration of the early training stage.
It is demonstrated that the DRL model has a general capability across different environments.
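The general pattern behind such offline pre-training can be sketched as follows; the function names are placeholders standing in for a DRL framework, not this paper's pipeline:

```python
import random

def pretrain_offline(policy_update, dataset, epochs=10, batch_size=64):
    """Warm-start phase: fit the policy to logged (observation, action) pairs
    before any online rollouts, so early online exploration is not random.

    `policy_update` is a placeholder for one gradient step (e.g. a
    behavior-cloning loss) of whatever DRL framework is actually used.
    """
    for _ in range(epochs):
        random.shuffle(dataset)
        for i in range(0, len(dataset), batch_size):
            batch = dataset[i:i + batch_size]
            policy_update(batch)  # supervised step on logged behavior
```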
arXiv Detail & Related papers (2023-08-03T06:19:46Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study analyzing the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
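Multistep look-ahead planning of this kind generally follows the pattern below; this is a generic sketch with placeholder names, not TRAVL's actual architecture:

```python
def plan_with_lookahead(state, candidate_trajectories, dynamics_model,
                        value_fn, horizon=5):
    """Score each candidate action sequence by rolling it out `horizon`
    steps through a (learned) dynamics model and evaluating the final state."""
    best_traj, best_score = None, float("-inf")
    for traj in candidate_trajectories:
        s = state
        for action in traj[:horizon]:
            s = dynamics_model(s, action)  # simulated step, no real execution
        score = value_fn(s)
        if score > best_score:
            best_traj, best_score = traj, score
    return best_traj
```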
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning [86.06110576808824]
Deep reinforcement learning is a promising approach to learning policies in uncontrolled environments.
Recent advancements in machine learning algorithms and libraries, combined with a carefully tuned robot controller, enable a quadruped robot to learn to walk in only 20 minutes in the real world.
arXiv Detail & Related papers (2022-08-16T17:37:36Z)
- Constrained Reinforcement Learning for Robotics via Scenario-Based Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z)
- Relative velocity-based reward functions for crowd navigation of robots [7.671375709255977]
How to navigate crowd environments to socially acceptable standards remains a key problem to be solved in the development of mobile robots.
Recent work has shown the effectiveness of deep reinforcement learning in addressing crowd navigation, but the learning becomes progressively less effective as the speed of pedestrians increases.
To improve the effectiveness of deep reinforcement learning, we redesigned the reward function to include a penalty term based on the relative speed between robot and pedestrians.
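A minimal sketch of such a relative-velocity penalty (the functional form, safety radius, and coefficient are illustrative assumptions, not the paper's exact reward):

```python
import numpy as np

def relative_velocity_penalty(robot_pos, robot_vel, ped_pos, ped_vel,
                              d_safe=1.5, k=0.5):
    """Hypothetical penalty that grows when robot and pedestrian close in
    on each other at high relative speed inside a safety radius."""
    rel_pos = np.asarray(ped_pos) - np.asarray(robot_pos)
    rel_vel = np.asarray(ped_vel) - np.asarray(robot_vel)
    dist = np.linalg.norm(rel_pos)
    if dist >= d_safe:
        return 0.0  # no penalty outside the safety radius
    # Closing speed: positive when the robot-pedestrian gap is shrinking.
    closing_speed = -np.dot(rel_pos, rel_vel) / max(dist, 1e-6)
    return -k * max(closing_speed, 0.0) * (d_safe - dist) / d_safe
```

Penalizing closing speed rather than distance alone is what lets the agent keep learning as pedestrian speeds increase, which is the failure mode this entry highlights.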
arXiv Detail & Related papers (2021-12-28T03:49:01Z)
- Human-Aware Robot Navigation via Reinforcement Learning with Hindsight Experience Replay and Curriculum Learning [28.045441768064215]
Reinforcement learning approaches have shown superior ability in solving sequential decision making problems.
In this work, we consider the task of training an RL agent without employing demonstration data.
We propose to incorporate the hindsight experience replay (HER) and curriculum learning (CL) techniques with RL to efficiently learn the optimal navigation policy in the dense crowd.
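The core of HER is relabeling stored transitions with goals the agent actually reached; below is a generic sketch for a goal-conditioned navigation setting (the buffer layout, sparse reward rule, and toy curriculum schedule are standard-technique assumptions, not this paper's specific variant):

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Augment an episode with transitions whose goals are replaced by
    states actually reached later in the same episode ('future' strategy)."""
    relabeled = []
    for t, (state, action, _goal, next_state) in enumerate(episode):
        future_steps = episode[t:]
        for _ in range(k):
            # Pretend a later achieved state was the goal all along.
            new_goal = random.choice(future_steps)[3]
            reward = reward_fn(next_state, new_goal)
            relabeled.append((state, action, new_goal, next_state, reward))
    return relabeled

def sparse_reward(achieved, goal, tol=0.3):
    """Typical sparse navigation reward: 0 on success, -1 otherwise."""
    dx, dy = achieved[0] - goal[0], achieved[1] - goal[1]
    return 0.0 if (dx * dx + dy * dy) ** 0.5 < tol else -1.0

def curriculum_level(success_rate, level, max_level=5):
    """Toy curriculum schedule: densify the crowd once the agent is reliable."""
    return min(level + 1, max_level) if success_rate > 0.8 else level
```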
arXiv Detail & Related papers (2021-10-09T13:18:11Z)
- CLAMGen: Closed-Loop Arm Motion Generation via Multi-view Vision-Based RL [4.014524824655106]
We propose a vision-based reinforcement learning (RL) approach for closed-loop trajectory generation in an arm reaching problem.
Arm trajectory generation is a fundamental robotics problem which entails finding collision-free paths to move the robot's body.
arXiv Detail & Related papers (2021-03-24T15:33:03Z)
- An A* Curriculum Approach to Reinforcement Learning for RGBD Indoor Robot Navigation [6.660458629649825]
Recently released photo-realistic simulators such as Habitat allow for the training of networks that output control actions directly from perception.
Our paper tries to overcome the difficulty of training such networks end-to-end by separating the training of the perception and control neural networks and by increasing the path complexity gradually.
arXiv Detail & Related papers (2021-01-05T20:35:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.