Occlusion-Aware Crowd Navigation Using People as Sensors
- URL: http://arxiv.org/abs/2210.00552v3
- Date: Fri, 28 Apr 2023 21:27:08 GMT
- Title: Occlusion-Aware Crowd Navigation Using People as Sensors
- Authors: Ye-Ji Mun, Masha Itkina, Shuijing Liu, and Katherine Driggs-Campbell
- Abstract summary: Occlusions are highly prevalent in such settings due to a limited sensor field of view.
Previous work has shown that observed interactive behaviors of human agents can be used to estimate potential obstacles.
We propose integrating such social inference techniques into the planning pipeline.
- Score: 8.635930195821263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous navigation in crowded spaces poses a challenge for mobile robots
due to the highly dynamic, partially observable environment. Occlusions are
highly prevalent in such settings due to a limited sensor field of view and
obstructing human agents. Previous work has shown that observed interactive
behaviors of human agents can be used to estimate potential obstacles despite
occlusions. We propose integrating such social inference techniques into the
planning pipeline. We use a variational autoencoder with a specially designed
loss function to learn representations that are meaningful for occlusion
inference. This work adopts a deep reinforcement learning approach to
incorporate the learned representation for occlusion-aware planning. In
simulation, our occlusion-aware policy achieves comparable collision avoidance
performance to fully observable navigation by estimating agents in occluded
spaces. We demonstrate successful policy transfer from simulation to the
real-world Turtlebot 2i. To the best of our knowledge, this work is the first
to use social occlusion inference for crowd navigation.
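The pipeline the abstract describes — a variational autoencoder whose latent code captures occlusion-relevant structure, with that code then fed into an RL planner's observation — can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, weight shapes, and the exact form of the occlusion-supervision loss term are assumptions.

```python
# Minimal sketch of the occlusion-inference pipeline: a VAE encodes observed
# agent states into a latent code, a custom loss term supervises the decoded
# map against (simulation-only) occluded occupancy, and the RL policy consumes
# the raw observation concatenated with the latent estimate.
# All names and dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_vae(obs_dim=32, latent_dim=8):
    """Random-weight stand-in for a trained linear VAE encoder/decoder."""
    return {
        "W_enc": rng.normal(0, 0.1, (obs_dim, 2 * latent_dim)),  # -> (mu, logvar)
        "W_dec": rng.normal(0, 0.1, (latent_dim, obs_dim)),
    }

def encode(vae, x):
    h = x @ vae["W_enc"]
    mu, logvar = np.split(h, 2, axis=-1)
    return mu, logvar

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def vae_loss(vae, x, occupancy_target, z, mu, logvar, beta=1.0, gamma=1.0):
    """Reconstruction + KL, plus an occlusion-inference term that supervises
    the decoded output against ground-truth occupancy available only in
    simulation (the form of this extra term is an assumption)."""
    recon = z @ vae["W_dec"]
    recon_err = np.mean((recon - x) ** 2)
    kl = -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))
    occ_err = np.mean((recon - occupancy_target) ** 2)
    return recon_err + beta * kl + gamma * occ_err

vae = init_vae()
obs = rng.standard_normal(32)          # e.g. flattened neighboring-agent states
occ_target = rng.standard_normal(32)   # ground-truth occupancy (simulation only)
mu, logvar = encode(vae, obs)
z = reparameterize(mu, logvar)
loss = vae_loss(vae, obs, occ_target, z, mu, logvar)

# Occlusion-aware planning: the RL policy observes the raw state plus the
# inferred occlusion code (the mean of the latent distribution).
policy_input = np.concatenate([obs, mu])
print(policy_input.shape, bool(np.isfinite(loss)))
```

At deployment the decoder and the supervision term are no longer needed; only the encoder runs online to augment the policy's observation.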
Related papers
- CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z)
- Structured Graph Network for Constrained Robot Crowd Navigation with Low Fidelity Simulation [10.201765067255147]
We investigate the feasibility of deploying reinforcement learning (RL) policies for constrained crowd navigation using a low-fidelity simulator.
We introduce a representation of the dynamic environment, separating human and obstacle representations.
This representation enables RL policies trained in a low-fidelity simulator to deploy in real world with a reduced sim2real gap.
arXiv Detail & Related papers (2024-05-27T04:53:09Z)
- Resilient Legged Local Navigation: Learning to Traverse with Compromised Perception End-to-End [16.748853375988013]
We model perception failures as invisible obstacles and pits.
We train a reinforcement learning based local navigation policy to guide our legged robot.
We validate our approach in simulation and on the real quadruped robot ANYmal running in real-time.
arXiv Detail & Related papers (2023-10-05T15:01:31Z)
- Navigating to Objects in the Real World [76.1517654037993]
We present a large-scale empirical study of semantic visual navigation methods comparing methods from classical, modular, and end-to-end learning approaches.
We find that modular learning works well in the real world, attaining a 90% success rate.
In contrast, end-to-end learning does not, dropping from 77% simulation to 23% real-world success rate due to a large image domain gap between simulation and reality.
arXiv Detail & Related papers (2022-12-02T01:10:47Z)
- Exploiting Socially-Aware Tasks for Embodied Social Navigation [17.48110264302196]
We propose an end-to-end architecture that injects Socially-Aware Tasks into a reinforcement learning navigation policy.
To this end, our tasks exploit the notion of immediate and future dangers of collision.
We validate our approach on Gibson4+ and Habitat-Matterport3D datasets.
arXiv Detail & Related papers (2022-12-01T18:52:46Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach in four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Intention Aware Robot Crowd Navigation with Attention-Based Interaction Graph [3.8461692052415137]
We study the problem of safe and intention-aware robot navigation in dense and interactive crowds.
We propose a novel recurrent graph neural network with attention mechanisms to capture heterogeneous interactions among agents.
We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios.
arXiv Detail & Related papers (2022-03-03T16:26:36Z)
- Vision-Based Mobile Robotics Obstacle Avoidance With Deep Reinforcement Learning [49.04274612323564]
Obstacle avoidance is a fundamental and challenging problem for autonomous navigation of mobile robots.
In this paper, we consider the problem of obstacle avoidance in simple 3D environments where the robot has to solely rely on a single monocular camera.
We tackle the obstacle avoidance problem as a data-driven end-to-end deep learning approach.
arXiv Detail & Related papers (2021-03-08T13:05:46Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.