A comparative evaluation of machine learning methods for robot
navigation through human crowds
- URL: http://arxiv.org/abs/2012.08822v1
- Date: Wed, 16 Dec 2020 09:40:47 GMT
- Title: A comparative evaluation of machine learning methods for robot
navigation through human crowds
- Authors: Anastasia Gaydashenko, Daniel Kudenko, Aleksei Shpilman
- Abstract summary: We compare pathfinding/prediction and reinforcement learning approaches on a crowd movement dataset collected from surveillance videos taken at Grand Central Station in New York.
Results demonstrate the strong superiority of state-of-the-art reinforcement learning approaches over pathfinding with state-of-the-art behaviour prediction techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot navigation through crowds poses a difficult challenge for AI
systems: methods must produce fast and efficient movement while never
compromising safety. Most approaches to date have focused
on the combination of pathfinding algorithms with machine learning for
pedestrian walking prediction. More recently, reinforcement learning techniques
have been proposed in the research literature. In this paper, we perform a
comparative evaluation of pathfinding/prediction and reinforcement learning
approaches on a crowd movement dataset collected from surveillance videos taken
at Grand Central Station in New York. The results demonstrate the strong
superiority of state-of-the-art reinforcement learning approaches over
pathfinding with state-of-the-art behaviour prediction techniques.
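The two families compared in the abstract can be illustrated with a toy sketch. The code below is not the paper's implementation or dataset; it is a minimal, assumed setup contrasting an A* planner that routes around predicted pedestrian cells with a tabular Q-learning agent trained on the same grid. All names and parameters are illustrative.

```python
# Toy comparison: pathfinding + prediction (A*) vs. reinforcement learning
# (tabular Q-learning) on an 8x8 grid with "predicted" pedestrian cells.
import heapq
import random

SIZE = 8
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def astar(start, goal, blocked):
    """A* on a 4-connected grid; `blocked` holds predicted pedestrian cells."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        _, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for dx, dy in ACTIONS:
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in blocked:
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])  # Manhattan
                heapq.heappush(frontier, (len(path) + h, nxt, path + [nxt]))
    return None

def train_q(goal, blocked, episodes=3000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning; bumping a wall or a pedestrian cell is penalised."""
    Q = {}
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(4 * SIZE):
            if pos == goal:
                break
            qs = Q.setdefault(pos, [0.0] * 4)
            a = random.randrange(4) if random.random() < eps else qs.index(max(qs))
            nxt = (pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1])
            if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or nxt in blocked:
                nxt, r = pos, -1.0            # collision: stay put, penalty
            else:
                r = 10.0 if nxt == goal else -0.1
            qs[a] += alpha * (r + gamma * max(Q.setdefault(nxt, [0.0] * 4)) - qs[a])
            pos = nxt
    return Q

def rollout(Q, goal, blocked):
    """Follow the greedy policy; return the path, or None if the goal is missed."""
    pos, path = (0, 0), [(0, 0)]
    for _ in range(4 * SIZE):
        if pos == goal:
            return path
        qs = Q.get(pos, [0.0] * 4)
        a = qs.index(max(qs))
        nxt = (pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1])
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in blocked:
            pos = nxt
        path.append(pos)
    return None

random.seed(0)
blocked = {(3, 3), (3, 4), (4, 3)}        # "predicted" pedestrian cells
plan = astar((0, 0), (7, 7), blocked)     # pathfinding + prediction baseline
learned = rollout(train_q((7, 7), blocked), (7, 7), blocked)
```

On a static grid both find a route; the paper's point is how the two families compare once pedestrians actually move and predictions err, which this toy setup does not capture.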
Related papers
- Online Context Learning for Socially-compliant Navigation [49.609656402450746]
This letter introduces an online context learning method that enables robots to adapt to new social environments online.
Experiments using a community-wide simulator show that the method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2024-06-17T12:59:13Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Deep Reinforcement Learning for Autonomous Vehicle Intersection Navigation [0.24578723416255746]
Reinforcement learning algorithms have emerged as a promising approach to address these challenges.
Here, we address the problem of efficiently and safely navigating T-intersections using a lower-cost, single-agent approach.
Our results reveal that the proposed approach enables the AV to effectively navigate T-intersections, outperforming previous methods in terms of travel delays, collision minimization, and overall cost.
arXiv Detail & Related papers (2023-09-30T10:54:02Z)
- Machine Learning for Autonomous Vehicle's Trajectory Prediction: A comprehensive survey, Challenges, and Future Research Directions [3.655021726150368]
We have examined over two hundred studies related to trajectory prediction in the context of AVs.
This review conducts a comprehensive evaluation of several deep learning-based techniques.
By identifying challenges in the existing literature and outlining potential research directions, this review significantly contributes to the advancement of knowledge in the domain of AV trajectory prediction.
arXiv Detail & Related papers (2023-07-12T10:20:19Z)
- Silver-Bullet-3D at ManiSkill 2021: Learning-from-Demonstrations and Heuristic Rule-based Methods for Object Manipulation [118.27432851053335]
This paper presents an overview and comparative analysis of the systems designed for two tracks of the SAPIEN ManiSkill Challenge 2021, including the No Interaction track.
The No Interaction track targets learning policies from pre-collected demonstration trajectories.
In this track, we design a Heuristic Rule-based Method (HRM) to trigger high-quality object manipulation by decomposing the task into a series of sub-tasks.
For each sub-task, simple rule-based control strategies are adopted to predict actions that can be applied to the robotic arms.
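The decomposition idea can be sketched in miniature. The following is a hypothetical illustration, not the authors' HRM code: a 1-D pick-and-place task split into ordered sub-tasks (approach, grasp, transport, release), each driven by a trivial rule that maps the observed state to an action.

```python
# Hypothetical rule-based sub-task pipeline on a 1-D toy world.
# State: gripper position, object position, target position, holding flag.

def approach(s):                      # move the gripper toward the object
    if s["grip"] < s["obj"]:
        return ("move", 1)
    if s["grip"] > s["obj"]:
        return ("move", -1)
    return ("done", 0)

def grasp(s):                         # close the gripper on the object
    return ("done", 0) if s["holding"] else ("grasp", 0)

def transport(s):                     # carry the object to the target
    if s["grip"] < s["target"]:
        return ("move", 1)
    if s["grip"] > s["target"]:
        return ("move", -1)
    return ("done", 0)

def release(s):                       # open the gripper
    return ("release", 0) if s["holding"] else ("done", 0)

SUBTASKS = [approach, grasp, transport, release]

def step(s, action):
    kind, d = action
    if kind == "move":
        s["grip"] += d
        if s["holding"]:              # a held object moves with the gripper
            s["obj"] = s["grip"]
    elif kind == "grasp":
        s["holding"] = True
    elif kind == "release":
        s["holding"] = False
    return s

def run(s, max_steps=100):
    stage = 0
    for _ in range(max_steps):
        if stage == len(SUBTASKS):
            break
        action = SUBTASKS[stage](s)
        if action[0] == "done":
            stage += 1                # current rule finished its sub-task
        else:
            s = step(s, action)
    return s

state = run({"grip": 0, "obj": 5, "target": 9, "holding": False})
```

The design point is that each rule only needs to solve its own narrow sub-problem; the real system applies the same decomposition to high-dimensional arm control.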
arXiv Detail & Related papers (2022-06-13T16:20:42Z)
- Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not present a fair trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z)
- Review of Pedestrian Trajectory Prediction Methods: Comparing Deep Learning and Knowledge-based Approaches [0.0]
This paper compares deep learning algorithms with classical knowledge-based models that are widely used to simulate pedestrian dynamics.
The ability of deep-learning algorithms for large-scale simulation and the description of collective dynamics remains to be demonstrated.
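For reference, the knowledge-based side of that comparison can be sketched with a minimal model in the spirit of the classical social force model (Helbing & Molnár): pedestrians relax toward a desired velocity pointing at their goal and are repelled exponentially by nearby pedestrians. Parameter values and the Euler integration below are illustrative assumptions, not taken from the paper.

```python
# Minimal social-force-style pedestrian step (illustrative parameters).
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.4):
    """One Euler step: relaxation toward the desired velocity plus
    exponential repulsion from each other pedestrian."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    # driving force: relax toward walking at speed v0 toward the goal
    fx = (v0 * gx / dist - vel[0]) / tau
    fy = (v0 * gy / dist - vel[1]) / tau
    # repulsive force from each other pedestrian
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        mag = A * math.exp((2 * radius - d) / B)
        fx += mag * dx / d
        fy += mag * dy / d
    vx, vy = vel[0] + fx * dt, vel[1] + fy * dt
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# walk from (0, 0) toward (10, 0) past a standing pedestrian at (5, 0.1)
pos, vel, others = (0.0, 0.0), (0.0, 0.0), [(5.0, 0.1)]
min_gap = float("inf")
for _ in range(300):
    pos, vel = social_force_step(pos, vel, (10.0, 0.0), others)
    min_gap = min(min_gap, math.hypot(pos[0] - 5.0, pos[1] - 0.1))
```

Such hand-crafted models are interpretable and scale to large crowds cheaply, which is exactly the axis on which the review compares them against learned predictors.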
arXiv Detail & Related papers (2021-11-11T08:35:14Z)
- Robot Navigation in a Crowd by Integrating Deep Reinforcement Learning and Online Planning [8.211771115758381]
Navigating along time-efficient and collision-free paths in a crowd remains an open and challenging problem for mobile robots.
Deep reinforcement learning is a promising solution to this problem.
We propose a graph-based deep reinforcement learning method, SG-DQN.
Our model can help the robot better understand the crowd and achieve a high success rate of more than 0.99 in the crowd navigation task.
arXiv Detail & Related papers (2021-02-26T02:17:13Z)
- NavRep: Unsupervised Representations for Reinforcement Learning of Robot Navigation in Dynamic Human Environments [28.530962677406627]
We train two end-to-end and 18 unsupervised-learning-based architectures, and compare them, along with existing approaches, on unseen test cases.
Our results show that unsupervised learning methods are competitive with end-to-end methods.
This release also includes OpenAI-gym-compatible environments designed to emulate the training conditions described by other papers.
arXiv Detail & Related papers (2020-12-08T12:51:14Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large scale interaction-centric benchmark TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.