Fast Traversability Estimation for Wild Visual Navigation
- URL: http://arxiv.org/abs/2305.08510v2
- Date: Tue, 16 May 2023 08:49:49 GMT
- Title: Fast Traversability Estimation for Wild Visual Navigation
- Authors: Jonas Frey and Matias Mattamala and Nived Chebrolu and Cesar Cadena
and Maurice Fallon and Marco Hutter
- Abstract summary: We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
- Score: 17.015268056925745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural environments such as forests and grasslands are challenging for
robotic navigation because of the false perception of rigid obstacles from high
grass, twigs, or bushes. In this work, we propose Wild Visual Navigation (WVN),
an online self-supervised learning system for traversability estimation which
uses only vision. The system is able to continuously adapt from a short human
demonstration in the field. It leverages high-dimensional features from
self-supervised visual transformer models, with an online scheme for
supervision generation that runs in real-time on the robot. We demonstrate the
advantages of our approach with experiments and ablation studies in challenging
environments in forests, parks, and grasslands. Our system is able to bootstrap
the traversable terrain segmentation in less than 5 min of in-field training
time, enabling the robot to navigate complex outdoor terrains, negotiating
obstacles in high grass and following a 1.4 km footpath. While our
experiments were executed with a quadruped robot, ANYmal, the approach
presented can generalize to any ground robot.
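To make the online learning scheme concrete, below is a minimal sketch, not the authors' released implementation: a frozen self-supervised ViT (DINO ViT-S/16 is an assumed stand-in for the "self-supervised visual transformer models" in the abstract) yields per-patch image features, and a small traversability head is updated online from labels marking terrain the robot actually traversed during the short demonstration. The model choice, feature dimension, optimizer settings, and label construction are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Frozen self-supervised ViT backbone (DINO ViT-S/16 is an assumed stand-in).
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Lightweight traversability head trained online on the robot (384 = ViT-S width).
head = nn.Sequential(nn.Linear(384, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.BCELoss()


def patch_features(image: torch.Tensor) -> torch.Tensor:
    """Per-patch features for a (1, 3, H, W) image, with H and W multiples of 16."""
    with torch.no_grad():
        tokens = backbone.get_intermediate_layers(image, n=1)[0]  # (1, 1 + N, 384)
    return tokens[0, 1:, :]  # drop the CLS token -> (N, 384)


def online_update(image: torch.Tensor, traversed_mask: torch.Tensor) -> float:
    """One online supervision step.

    traversed_mask: (N,) per-patch labels, 1.0 where the projected robot
    footprint indicates the terrain was actually traversed (the self-supervision
    signal), 0.0 elsewhere. Building this mask is omitted from the sketch.
    """
    features = patch_features(image)
    prediction = head(features).squeeze(-1)  # (N,) traversability scores in [0, 1]
    loss = criterion(prediction, traversed_mask.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A real deployment would additionally have to project the robot's footprint into the camera image to build traversed_mask; that supervision-generation step is what the abstract describes as running in real time on the robot, and it is omitted here.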
Related papers
- Learning Humanoid Locomotion over Challenging Terrain [84.35038297708485]
We present a learning-based approach for blind humanoid locomotion capable of traversing challenging natural and man-made terrains.
Our model is first pre-trained on a dataset of flat-ground trajectories with sequence modeling, and then fine-tuned on uneven terrain using reinforcement learning.
We evaluate our model on a real humanoid robot across a variety of terrains, including rough, deformable, and sloped surfaces.
arXiv Detail & Related papers (2024-10-04T17:57:09Z) - Wild Visual Navigation: Fast Traversability Learning via Pre-Trained Models and Online Self-Supervision [27.65408575883111]
We present Wild Visual Navigation (WVN), an online self-supervised learning system for visual traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field, only using onboard sensing and computing.
We demonstrate our approach through diverse real-world deployments in forests, parks, and grasslands.
arXiv Detail & Related papers (2024-04-10T15:47:35Z) - STERLING: Self-Supervised Terrain Representation Learning from
Unconstrained Robot Experience [43.49602846732077]
We introduce Self-supervised TErrain Representation LearnING (STERLING).
STERLING is a novel approach for learning terrain representations that relies solely on easy-to-collect, unconstrained (e.g., non-expert) and unlabelled robot experience.
We evaluate STERLING features on the task of preference-aligned visual navigation and find that STERLING features perform on par with fully supervised approaches.
arXiv Detail & Related papers (2023-09-26T22:55:32Z) - How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle
Traversability [7.305104984234086]
Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains.
We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback (see the sketch after this list).
arXiv Detail & Related papers (2022-09-22T05:18:35Z) - Learning Semantics-Aware Locomotion Skills from Human Demonstration [35.996425893483796]
We present a framework that learns semantics-aware locomotion skills from perception for quadrupedal robots.
Our framework learns to adjust the speed and gait of the robot based on perceived terrain semantics, and enables the robot to walk over 6 km without failure.
arXiv Detail & Related papers (2022-06-27T21:08:03Z) - Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual
Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z) - ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z) - Robot Perception enables Complex Navigation Behavior via Self-Supervised
Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z) - Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z) - BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)