How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle
Traversability
- URL: http://arxiv.org/abs/2209.10788v1
- Date: Thu, 22 Sep 2022 05:18:35 GMT
- Authors: Mateo Guaman Castro, Samuel Triest, Wenshan Wang, Jason M. Gregory,
Felix Sanchez, John G. Rogers III, Sebastian Scherer
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating terrain traversability in off-road environments requires reasoning
about complex interaction dynamics between the robot and these terrains.
However, it is challenging to build an accurate physics model, or create
informative labels to learn a model in a supervised manner, for these
interactions. We propose a method that learns to predict traversability
costmaps by combining exteroceptive environmental information with
proprioceptive terrain interaction feedback in a self-supervised manner.
Additionally, we propose a novel way of incorporating robot velocity in the
costmap prediction pipeline. We validate our method in multiple short and
large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV)
on challenging off-road terrains, and demonstrate ease of integration on a
separate large ground robot. Our short-scale navigation results show that using
our learned costmaps leads to overall smoother navigation, and provides the
robot with a more fine-grained understanding of the interactions between the
robot and different terrain types, such as grass and gravel. Our large-scale
navigation trials show that we can reduce the number of interventions by up to
57% compared to an occupancy-based navigation baseline in challenging off-road
courses ranging from 400 m to 3150 m.
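The self-supervised labeling idea described above — projecting proprioceptive terrain-interaction feedback onto the map cells the robot traversed, and conditioning the cost prediction on velocity — can be sketched in a few lines. This is a minimal illustration under stated assumptions: the IMU-based "bumpiness" proxy, the channel-concatenation form of velocity conditioning, and all function names are hypothetical, not the paper's implementation.

```python
import numpy as np

def proprioceptive_cost(imu_z_accel):
    """Scalar 'bumpiness' cost from a window of vertical IMU readings.

    Assumption for illustration: standard deviation of z-acceleration.
    The paper derives cost from terrain-interaction feedback; the exact
    functional used there may differ.
    """
    return float(np.std(imu_z_accel))

def label_costmap(grid_shape, trajectory_cells, imu_windows):
    """Self-supervised labels: each traversed map cell receives the
    proprioceptive cost measured while the robot crossed it.
    Untraversed cells stay NaN (no supervision signal)."""
    labels = np.full(grid_shape, np.nan)
    for (r, c), window in zip(trajectory_cells, imu_windows):
        labels[r, c] = proprioceptive_cost(window)
    return labels

def costmap_features(extero_features, speed):
    """Velocity conditioning (assumed form): tile the commanded speed
    as an extra channel alongside exteroceptive map features, so the
    predictor can learn speed-dependent costs."""
    h, w, _ = extero_features.shape
    speed_channel = np.full((h, w, 1), speed)
    return np.concatenate([extero_features, speed_channel], axis=-1)
```

A cost predictor would then regress `label_costmap` outputs from `costmap_features` inputs on the traversed cells only, ignoring NaN cells in the loss.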
Related papers
- RoadRunner M&M -- Learning Multi-range Multi-resolution Traversability Maps for Autonomous Off-road Navigation [12.835198004089385]
RoadRunner (M&M) is an end-to-end learning-based framework that directly predicts the traversability and elevation maps at multiple ranges.
RoadRunner M&M achieves a significant improvement of up to 50% for elevation mapping and 30% for traversability estimation over RoadRunner.
arXiv Detail & Related papers (2024-09-17T07:21:03Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation, by fusing sensory information, handling of uncertainty, and generation of contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- Pixel to Elevation: Learning to Predict Elevation Maps at Long Range using Images for Autonomous Offroad Navigation [10.898724668444125]
We present a learning-based approach capable of predicting terrain elevation maps at long-range using only onboard egocentric images in real-time.
We experimentally validate the applicability of our proposed approach for autonomous offroad robotic navigation in complex and unstructured terrain.
arXiv Detail & Related papers (2024-01-30T22:37:24Z)
- Fast Traversability Estimation for Wild Visual Navigation [17.015268056925745]
We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
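The continuous-adaptation idea behind systems like WVN can be illustrated with a toy per-cell estimator in which traversed regions accumulate positive traversability evidence online. The exponential-moving-average update and the class name are assumptions chosen for illustration; WVN's actual model is a learned visual classifier, not a grid of scalars.

```python
import numpy as np

class OnlineTraversability:
    """Toy online estimator: cells the robot (or a human demonstrator)
    actually traverses provide positive traversability evidence, and
    scores adapt continuously as new traversals arrive."""

    def __init__(self, grid_shape, alpha=0.2):
        self.score = np.zeros(grid_shape)  # 0 = no evidence yet
        self.alpha = alpha                 # adaptation rate (assumed)

    def observe_traversal(self, cells):
        for r, c in cells:
            # Move the score toward 1 for cells proven traversable;
            # repeated traversals increase confidence asymptotically.
            self.score[r, c] += self.alpha * (1.0 - self.score[r, c])
```

Each additional traversal of the same cell shrinks the remaining gap to 1 by a factor of `1 - alpha`, so scores converge quickly for frequently visited terrain while unvisited cells stay at their prior.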
arXiv Detail & Related papers (2023-05-15T10:19:30Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Complex Terrain Navigation via Model Error Prediction [5.937673383513695]
We train with an on-policy approach, resulting in successful navigation policies using as little as 50 minutes of training data split across simulation and real world.
Our learning-based navigation system is a sample efficient short-term planner that we demonstrate on a Clearpath Husky navigating through a variety of terrain.
arXiv Detail & Related papers (2021-11-18T15:55:04Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to exactly achieve the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)