RoadRunner - Learning Traversability Estimation for Autonomous Off-road
Driving
- URL: http://arxiv.org/abs/2402.19341v2
- Date: Sun, 3 Mar 2024 15:21:03 GMT
- Title: RoadRunner - Learning Traversability Estimation for Autonomous Off-road
Driving
- Authors: Jonas Frey and Shehryar Khattak and Manthan Patel and Deegan Atha and
Julian Nubert and Curtis Padgett and Marco Hutter and Patrick Spieler
- Abstract summary: We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation by fusing sensory information, handling uncertainty, and generating contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
- Score: 13.918488267013558
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous navigation at high speeds in off-road environments necessitates
robots to comprehensively understand their surroundings using onboard sensing
only. The extreme conditions posed by the off-road setting can cause degraded
camera image quality due to poor lighting and motion blur, as well as limited
sparse geometric information available from LiDAR sensing when driving at high
speeds. In this work, we present RoadRunner, a novel framework capable of
predicting terrain traversability and an elevation map directly from camera and
LiDAR sensor inputs. RoadRunner enables reliable autonomous navigation by
fusing sensory information, handling uncertainty, and generating
contextually informed predictions about the geometry and traversability of the
terrain, all while operating at low latency. In contrast to existing methods
that rely on classifying handcrafted semantic classes and on heuristics to predict
traversability costs, our method is trained end-to-end in a self-supervised
fashion. The RoadRunner network architecture builds upon popular sensor fusion
network architectures from the autonomous driving domain, which embed LiDAR and
camera information into a common Bird's Eye View perspective. Training is
enabled by utilizing an existing traversability estimation stack to generate
training data in hindsight in a scalable manner from real-world off-road
driving datasets. Furthermore, RoadRunner improves the system latency by a
factor of roughly 4, from 500 ms to 140 ms, while improving the accuracy for
traversability costs and elevation map predictions. We demonstrate the
effectiveness of RoadRunner in enabling safe and reliable off-road navigation
at high speeds in multiple real-world driving scenarios through unstructured
desert environments.
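The fusion step described in the abstract embeds LiDAR and camera information into a common Bird's Eye View (BEV) grid before the learned network predicts traversability and elevation. The following is a minimal, hedged sketch of the LiDAR half of that pipeline only: rasterizing a point cloud into a BEV elevation/occupancy grid. The grid size, resolution, and max-height elevation rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative BEV grid parameters (assumptions, not from the paper).
GRID_RES = 0.5   # meters per BEV cell
GRID_SIZE = 64   # cells per side; covers a 32 m x 32 m area around the vehicle

def lidar_to_bev(points):
    """Rasterize a LiDAR point cloud into a vehicle-centered BEV grid.

    points: (N, 3) array of x (forward), y (left), z (up) in meters.
    Returns (elevation, occupancy): per-cell max height and point count.
    """
    elevation = np.full((GRID_SIZE, GRID_SIZE), np.nan)
    occupancy = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int32)
    # Map metric coordinates to cell indices, centered on the vehicle.
    ix = np.floor(points[:, 0] / GRID_RES).astype(int) + GRID_SIZE // 2
    iy = np.floor(points[:, 1] / GRID_RES).astype(int) + GRID_SIZE // 2
    inside = (ix >= 0) & (ix < GRID_SIZE) & (iy >= 0) & (iy < GRID_SIZE)
    for x, y, z in zip(ix[inside], iy[inside], points[inside, 2]):
        occupancy[x, y] += 1
        # Keep the highest return per cell as a simple elevation estimate.
        elevation[x, y] = z if np.isnan(elevation[x, y]) else max(elevation[x, y], z)
    return elevation, occupancy

# Three toy points: two land in the same cell ahead, one behind-left.
points = np.array([[1.0, 0.0, 0.2], [1.1, 0.1, 0.5], [-3.0, 2.0, 0.0]])
elev, occ = lidar_to_bev(points)
```

In the full system, such a geometric BEV raster would be only one input stream; camera features are lifted into the same BEV frame and fused by the network, which then regresses traversability cost and elevation per cell.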
Related papers
- RoadRunner M&M -- Learning Multi-range Multi-resolution Traversability Maps for Autonomous Off-road Navigation [12.835198004089385]
RoadRunner (M&M) is an end-to-end learning-based framework that directly predicts the traversability and elevation maps at multiple ranges.
RoadRunner M&M achieves a significant improvement of up to 50% for elevation mapping and 30% for traversability estimation over RoadRunner.
arXiv Detail & Related papers (2024-09-17T07:21:03Z)
- UFO: Uncertainty-aware LiDAR-image Fusion for Off-road Semantic Terrain Map Estimation [2.048226951354646]
This paper presents a learning-based fusion method for generating dense terrain classification maps in BEV.
Our approach enhances the accuracy of semantic maps generated from an RGB image and a single-sweep LiDAR scan.
arXiv Detail & Related papers (2024-03-05T04:20:03Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability [7.305104984234086]
Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains.
We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback.
arXiv Detail & Related papers (2022-09-22T05:18:35Z)
- Risk-Aware Off-Road Navigation via a Learned Speed Distribution Map [39.54575497596679]
This work proposes a new representation of traversability based exclusively on robot speed that can be learned from data.
The proposed algorithm learns to predict a distribution of speeds the robot could achieve, conditioned on the environment semantics and commanded speed.
Numerical simulations demonstrate that the proposed risk-aware planning algorithm leads to faster average time-to-goals.
arXiv Detail & Related papers (2022-03-25T03:08:02Z)
- WayFAST: Traversability Predictive Navigation for Field Robots [5.914664791853234]
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots.
Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models.
We show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
arXiv Detail & Related papers (2022-03-22T22:02:03Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Learning High-Speed Flight in the Wild [101.33104268902208]
We propose an end-to-end approach that can autonomously fly quadrotors through complex natural and man-made environments at high speeds.
The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion.
By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments.
arXiv Detail & Related papers (2021-10-11T09:43:11Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.