Robot Navigation in Constrained Pedestrian Environments using
Reinforcement Learning
- URL: http://arxiv.org/abs/2010.08600v2
- Date: Mon, 16 Nov 2020 06:26:16 GMT
- Title: Robot Navigation in Constrained Pedestrian Environments using
Reinforcement Learning
- Authors: Claudia Pérez-D'Arpino, Can Liu, Patrick Goebel, Roberto
Martín-Martín, Silvio Savarese
- Abstract summary: Navigating fluently around pedestrians is a necessary capability for mobile robots deployed in human environments.
We present an approach based on reinforcement learning to learn policies capable of dynamic adaptation to the presence of moving pedestrians.
We show transfer of the learned policy to unseen 3D reconstructions of two real environments.
- Score: 32.454250811667904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Navigating fluently around pedestrians is a necessary capability for mobile
robots deployed in human environments, such as buildings and homes. While
research on social navigation has focused mainly on the scalability with the
number of pedestrians in open spaces, typical indoor environments present the
additional challenge of constrained spaces such as corridors and doorways that
limit maneuverability and influence patterns of pedestrian interaction. We
present an approach based on reinforcement learning (RL) to learn policies
capable of dynamic adaptation to the presence of moving pedestrians while
navigating between desired locations in constrained environments. The policy
network receives guidance from a motion planner that provides waypoints to
follow a globally planned trajectory, whereas RL handles the local
interactions. We explore a compositional principle for multi-layout training
and find that policies trained in a small set of geometrically simple layouts
successfully generalize to more complex unseen layouts that exhibit composition
of the structural elements available during training. Going beyond walls-world
like domains, we show transfer of the learned policy to unseen 3D
reconstructions of two real environments. These results support the
applicability of the compositional principle to navigation in real-world
buildings and indicate promising usage of multi-agent simulation within
reconstructed environments for tasks that involve interaction.
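The hybrid scheme described in the abstract, where a motion planner supplies waypoints along a globally planned trajectory while the RL policy handles local interactions, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the helper names `next_waypoint` and `build_observation`, and the lidar-plus-waypoint observation layout, are hypothetical choices for the sketch.

```python
import numpy as np

def next_waypoint(global_path, robot_xy, lookahead=1.0):
    """Pick the first waypoint on the globally planned path that lies at
    least `lookahead` meters from the robot (hypothetical helper)."""
    for wp in global_path:
        if np.linalg.norm(np.asarray(wp, dtype=float) - np.asarray(robot_xy)) >= lookahead:
            return np.asarray(wp, dtype=float)
    return np.asarray(global_path[-1], dtype=float)

def build_observation(lidar_scan, robot_xy, robot_heading, global_path):
    """Combine local perception with planner guidance: the RL policy
    observes the lidar scan plus the next waypoint in the robot frame."""
    wp = next_waypoint(global_path, robot_xy)
    dx, dy = wp - np.asarray(robot_xy, dtype=float)
    # Rotate the waypoint offset into the robot's local frame.
    c, s = np.cos(-robot_heading), np.sin(-robot_heading)
    local_wp = np.array([c * dx - s * dy, s * dx + c * dy])
    return np.concatenate([np.asarray(lidar_scan, dtype=float), local_wp])
```

The RL policy would then map this observation to local velocity commands, so the planner constrains the route while learning is focused on reacting to nearby pedestrians.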
Related papers
- IN-Sight: Interactive Navigation through Sight [20.184155117341497]
IN-Sight is a novel approach to self-supervised path planning.
It calculates traversability scores and incorporates them into a semantic map.
To precisely navigate around obstacles, IN-Sight employs a local planner.
arXiv Detail & Related papers (2024-08-01T07:27:54Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- RSPT: Reconstruct Surroundings and Predict Trajectories for Generalizable Active Object Tracking [17.659697426459083]
We present RSPT, a framework that forms a structure-aware motion representation by Reconstructing the Surroundings and Predicting the target Trajectory.
We evaluate RSPT on various simulated scenarios and show that it outperforms existing methods in unseen environments.
arXiv Detail & Related papers (2023-04-07T12:52:24Z)
- ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [56.194988818341976]
Vision-language navigation is a task that requires an agent to follow instructions to navigate in environments.
We propose ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability of obstacle-avoiding control in continuous environments.
ETPNav yields more than 10% and 20% improvements over prior state-of-the-art on R2R-CE and RxR-CE datasets.
arXiv Detail & Related papers (2023-04-06T13:07:17Z)
- Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration [83.96729205383501]
We introduce prompt-based learning to achieve fast adaptation for language embeddings.
Our model can adapt to diverse vision-language navigation tasks, including VLN and REVERIE.
arXiv Detail & Related papers (2022-03-08T11:01:24Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
Mobile construction, the process of navigating an environment while modifying its structure according to a geometric design, calls for intelligent robots.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Learning Synthetic to Real Transfer for Localization and Navigational Tasks [7.019683407682642]
Navigation sits at the crossroads of multiple disciplines, combining notions from computer vision, robotics, and control.
This work aims to create, in simulation, a navigation pipeline that can be transferred to the real world with as little effort as possible.
Designing the navigation pipeline raises four main challenges: environment, localization, navigation, and planning.
arXiv Detail & Related papers (2020-11-20T08:37:03Z)
- Embodied Visual Navigation with Automatic Curriculum Learning in Real Environments [20.017277077448924]
NavACL is a method of automatic curriculum learning tailored to the navigation task.
Deep reinforcement learning agents trained using NavACL significantly outperform state-of-the-art agents trained with uniform sampling.
Our agents can navigate through unknown cluttered indoor environments to semantically-specified targets using only RGB images.
arXiv Detail & Related papers (2020-09-11T13:28:26Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
- Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.