NavDreams: Towards Camera-Only RL Navigation Among Humans
- URL: http://arxiv.org/abs/2203.12299v1
- Date: Wed, 23 Mar 2022 09:46:44 GMT
- Title: NavDreams: Towards Camera-Only RL Navigation Among Humans
- Authors: Daniel Dugas, Olov Andersson, Roland Siegwart and Jen Jen Chung
- Abstract summary: We investigate whether the world model concept, which has shown results for modeling and learning policies in Atari games, can also be applied to the camera-based navigation problem.
We create simulated environments where a robot must navigate past static and moving humans without colliding in order to reach its goal.
We find that state-of-the-art methods can solve the navigation problem and generate dream-like predictions of future image sequences.
- Score: 35.57943738219839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomously navigating a robot in everyday crowded spaces requires solving
complex perception and planning challenges. When using only monocular image
sensor data as input, classical two-dimensional planning approaches cannot be
used. While images present a significant challenge when it comes to perception
and planning, they also allow capturing potentially important details, such as
complex geometry, body movement, and other visual cues. In order to
successfully solve the navigation task from only images, algorithms must be
able to model the scene and its dynamics using only this channel of
information. We investigate whether the world model concept, which has shown
state-of-the-art results for modeling and learning policies in Atari games as
well as promising results in 2D LiDAR-based crowd navigation, can also be
applied to the camera-based navigation problem. To this end, we create
simulated environments where a robot must navigate past static and moving
humans without colliding in order to reach its goal. We find that
state-of-the-art methods are able to achieve success in solving the navigation
problem, and can generate dream-like predictions of future image-sequences
which show consistent geometry and moving persons. We are also able to show
that policy performance in our high-fidelity sim2real simulation scenario
transfers to the real world by testing the policy on a real robot. We make our
simulator, models and experiments available at
https://github.com/danieldugas/NavDreams.
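The world-model idea described in the abstract can be made concrete with a short sketch: encode each camera frame into a compact latent state, learn a latent dynamics model conditioned on the robot's actions, and let the policy act on (and eventually be trained against) imagined latent rollouts instead of raw pixels. The code below is a minimal illustrative sketch only, not the NavDreams architecture (the actual models are in the linked repository); the module names, layer sizes, and two-dimensional action space are assumptions.

```python
# Minimal world-model sketch (illustrative assumptions, not the NavDreams models):
# encoder (pixels -> latent), latent dynamics ("dream" model), and a latent-space policy.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Compress a 64x64 RGB camera frame into a small latent vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(latent_dim)  # infers the flattened size on first call

    def forward(self, image):
        return self.fc(self.conv(image))


class LatentDynamics(nn.Module):
    """Predict the next latent state from the current latent and the chosen action."""
    def __init__(self, latent_dim=32, action_dim=2, hidden_dim=200):
        super().__init__()
        self.cell = nn.GRUCell(latent_dim + action_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, latent, action, hidden):
        hidden = self.cell(torch.cat([latent, action], dim=-1), hidden)
        return self.to_latent(hidden), hidden


class Policy(nn.Module):
    """Navigation policy that acts on the latent state, never on raw pixels."""
    def __init__(self, latent_dim=32, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))

    def forward(self, latent):
        return torch.tanh(self.net(latent))  # e.g. bounded [linear, angular] velocity


if __name__ == "__main__":
    encoder, dynamics, policy = Encoder(), LatentDynamics(), Policy()
    frame = torch.rand(1, 3, 64, 64)          # one (batched) camera observation
    latent = encoder(frame)
    hidden = torch.zeros(1, 200)
    for _ in range(5):                        # "dream": imagined rollout, no simulator calls
        action = policy(latent)
        latent, hidden = dynamics(latent, action, hidden)
    print(latent.shape)                       # torch.Size([1, 32])
```

Once trained on logged image-action sequences, such a dynamics model can be rolled forward without querying the simulator, which is the mechanism behind the dream-like predictions of future image sequences described in the abstract (decoding latents back to images would require an additional decoder, omitted here).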
Related papers
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views through the transformation of pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Transformers for Image-Goal Navigation [0.0]
We present a generative Transformer based model that jointly models image goals, camera observations and the robot's past actions to predict future actions.
Our model demonstrates capability in capturing and associating visual information across long time horizons, helping in effective navigation.
arXiv Detail & Related papers (2024-05-23T03:01:32Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation and deploy it directly in the real world (a minimal sketch of this translate-then-act idea appears after this list).
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach for the four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Out of the Box: Embodied Navigation in the Real World [45.97756658635314]
We show how to transfer knowledge acquired in simulation into the real world.
We deploy our models on a LoCoBot equipped with a single Intel RealSense camera.
Our experiments indicate that it is possible to achieve satisfying results when deploying the obtained model in the real world.
arXiv Detail & Related papers (2021-05-12T18:00:14Z)
- Learning a State Representation and Navigation in Cluttered and Dynamic Environments [6.909283975004628]
We present a learning-based pipeline to realise local navigation with a quadrupedal robot in cluttered environments.
The robot is able to safely locomote to a target location based on frames from a depth camera without any explicit mapping of the environment.
We show that our system can handle noisy depth images, avoid dynamic obstacles unseen during training, and is endowed with local spatial awareness.
arXiv Detail & Related papers (2021-03-07T13:19:06Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
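For the Sim-to-Real via Sim-to-Seg entry above, the translate-then-act mechanism can be illustrated with a short sketch: a translation network maps RGB frames to a segmentation-style representation, and the driving policy only ever consumes that representation, so a policy trained on randomized simulator frames can be fed real camera frames at deployment time. The class names, layer sizes, and number of segmentation classes below are assumptions for illustration, not the Sim2Seg implementation, and the depth-map branch mentioned in the summary is omitted for brevity.

```python
# Illustrative translate-then-act sketch (assumed structure, not the Sim2Seg code):
# the policy never sees RGB directly, only the translated segmentation representation.
import torch
import torch.nn as nn


class ImageToSeg(nn.Module):
    """Translate an RGB frame into per-pixel class logits (e.g. traversable / obstacle / other)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, rgb):
        return self.net(rgb)  # (batch, num_classes, H, W)


class SegPolicy(nn.Module):
    """Driving policy that consumes only the segmentation, so sim and real share one input space."""
    def __init__(self, num_classes=3, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 16, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, action_dim), nn.Tanh(),
        )

    def forward(self, seg_logits):
        return self.net(seg_logits.softmax(dim=1))


if __name__ == "__main__":
    translator, policy = ImageToSeg(), SegPolicy()
    frame = torch.rand(1, 3, 64, 64)  # randomized sim frame in training, real frame at deployment
    action = policy(translator(frame))
    print(action.shape)               # torch.Size([1, 2]), e.g. [steering, throttle]
```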