WayFAST: Traversability Predictive Navigation for Field Robots
- URL: http://arxiv.org/abs/2203.12071v1
- Date: Tue, 22 Mar 2022 22:02:03 GMT
- Title: WayFAST: Traversability Predictive Navigation for Field Robots
- Authors: Mateus Valverde Gasparino, Arun Narenthiran Sivakumar, Yixiao Liu,
Andres Eduardo Baquero Velasquez, Vitor Akihiro Hisano Higuti, John Rogers,
Huy Tran, Girish Chowdhary
- Abstract summary: We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots.
Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models.
We show that our training pipeline based on online traction estimates is more data-efficient than other heuristic-based methods.
- Score: 5.914664791853234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a self-supervised approach for learning to predict traversable
paths for wheeled mobile robots that require good traction to navigate. Our
algorithm, termed WayFAST (Waypoint Free Autonomous Systems for
Traversability), uses RGB and depth data, along with navigation experience, to
autonomously generate traversable paths in outdoor unstructured environments.
Our key inspiration is that traction can be estimated for rolling robots using
kinodynamic models. Using traction estimates provided by an online receding
horizon estimator, we are able to train a traversability prediction neural
network in a self-supervised manner, without requiring heuristics utilized by
previous methods. We demonstrate the effectiveness of WayFAST through extensive
field testing in varying environments, ranging from sandy dry beaches to forest
canopies and snow-covered grass fields. Our results clearly demonstrate that
WayFAST can learn to avoid geometric obstacles as well as untraversable
terrain, such as snow, which would be difficult to avoid with sensors that
provide only geometric data, such as LiDAR. Furthermore, we show that our
training pipeline based on online traction estimates is more data-efficient
than other heuristic-based methods.
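The self-supervision loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a one-parameter slip model (measured velocity ≈ mu × commanded velocity) in place of the full receding-horizon estimator, and the function names (`estimate_traction`, `make_labels`) are hypothetical.

```python
import numpy as np

def estimate_traction(cmd_v, meas_v, eps=1e-6):
    """Least-squares traction coefficient over a window of samples.

    Assumes a simplified kinodynamic model meas_v = mu * cmd_v, where
    mu in [0, 1] captures how much of the commanded velocity the robot
    actually achieves (mu ~ 1 on firm ground, mu ~ 0 when slipping).
    The paper's receding-horizon estimator fuses more states; this
    one-parameter fit is only a stand-in.
    """
    cmd_v = np.asarray(cmd_v, dtype=float)
    meas_v = np.asarray(meas_v, dtype=float)
    mu = (cmd_v @ meas_v) / (cmd_v @ cmd_v + eps)
    return float(np.clip(mu, 0.0, 1.0))

def make_labels(traj_pixels, tractions, image_shape):
    """Self-supervised labels: traction estimates along the driven path
    become sparse per-pixel regression targets (plus a validity mask)
    for training a traversability prediction network."""
    labels = np.zeros(image_shape, dtype=float)
    mask = np.zeros(image_shape, dtype=bool)
    for (u, v), mu in zip(traj_pixels, tractions):
        labels[v, u] = mu
        mask[v, u] = True
    return labels, mask
```

A network trained against these sparse targets then scores every pixel, including terrain the robot never drove over, which is what makes the approach heuristic-free.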
Related papers
- Autonomous Hiking Trail Navigation via Semantic Segmentation and Geometric Analysis [2.1149122372776743]
This work introduces a novel approach to autonomous hiking trail navigation that balances trail adherence with the flexibility to adapt to off-trail routes when necessary.
The solution is a Traversability Analysis module that integrates semantic data from camera images with geometric information from LiDAR to create a comprehensive understanding of the surrounding terrain.
A planner uses this traversability map to navigate safely, adhering to trails while allowing off-trail movement when necessary to avoid on-trail hazards or for safe off-trail shortcuts.
arXiv Detail & Related papers (2024-09-24T02:21:10Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation by fusing sensory information, handling uncertainty, and generating contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- EVORA: Deep Evidential Traversability Learning for Risk-Aware Off-Road Autonomy [34.19779754333234]
This work proposes a unified framework to learn uncertainty-aware traction model and plan risk-aware trajectories.
We parameterize Dirichlet distributions with the network outputs and propose a novel uncertainty-aware squared Earth Mover's distance loss.
Our approach is extensively validated in simulation and on wheeled and quadruped robots.
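As a rough illustration of the loss described above (not EVORA's exact formulation): for ordered traction bins, the Earth Mover's distance reduces to the distance between CDFs, and a Dirichlet's mean gives the expected histogram. The names below are hypothetical, and the sketch compares only the Dirichlet mean rather than integrating over the full distribution.

```python
import numpy as np

def squared_emd(p, q):
    """Squared Earth Mover's distance between two 1-D histograms.

    For distributions over ordered bins, EMD reduces to the distance
    between CDFs; the squared variant penalizes each CDF gap
    quadratically, so mass misplaced by many bins costs more."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    cdf_gap = np.cumsum(p - q)
    return float(np.sum(cdf_gap ** 2))

def dirichlet_mean(alpha):
    """Expected categorical distribution under Dirichlet(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha / alpha.sum()

def evidential_emd_loss(alpha, target_hist):
    """Compare the Dirichlet's expected histogram against the
    empirical traction histogram (mean-only simplification)."""
    return squared_emd(dirichlet_mean(alpha), target_hist)
```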
arXiv Detail & Related papers (2023-11-10T18:49:53Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- VAPOR: Legged Robot Navigation in Outdoor Vegetation Using Offline Reinforcement Learning [53.13393315664145]
We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments.
Our method trains a novel RL policy using an actor-critic network and arbitrary data collected in real outdoor vegetation.
We observe that VAPOR's actions improve success rates by up to 40%, decrease the average current consumption by up to 2.9%, and decrease the normalized trajectory length by up to 11.2%.
arXiv Detail & Related papers (2023-09-14T16:21:27Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Offline Reinforcement Learning for Visual Navigation [66.88830049694457]
ReViND is the first offline RL system for robotic navigation that can leverage previously collected data to optimize user-specified reward functions in the real world.
We show that ReViND can navigate to distant goals using only offline training from this dataset, and exhibit behaviors that qualitatively differ based on the user-specified reward function.
arXiv Detail & Related papers (2022-12-16T02:23:50Z)
- OctoPath: An OcTree Based Self-Supervised Learning Approach to Local Trajectory Planning for Mobile Robots [0.0]
We introduce OctoPath, an encoder-decoder deep neural network trained in a self-supervised manner to predict the local optimal trajectory for the ego-vehicle.
During training, OctoPath minimizes the error between the predicted and the manually driven trajectories in a given training dataset.
We evaluate the predictions of OctoPath in different driving scenarios, both indoor and outdoor, while benchmarking our system against a baseline hybrid A-Star algorithm.
arXiv Detail & Related papers (2021-06-02T07:10:54Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be generally applied to most robotic platforms.
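One common way such covariance estimates enter an MPC formulation, assumed here for illustration rather than taken from the paper's exact constraints, is to inflate each obstacle's keep-out radius by a multiple of the predicted position standard deviation, so the planner keeps extra clearance when the network reports high uncertainty.

```python
import math

def inflated_radius(obstacle_radius, sigma, kappa=2.0):
    """Chance-constraint style inflation: grow the keep-out radius by
    kappa standard deviations of the predicted position uncertainty.
    kappa trades conservatism against path efficiency."""
    return obstacle_radius + kappa * sigma

def collision_free(p, obstacle, obstacle_radius, sigma):
    """Check one predicted robot position against an uncertain obstacle.
    An MPC would apply this test at every step of the horizon."""
    dist = math.hypot(p[0] - obstacle[0], p[1] - obstacle[1])
    return dist > inflated_radius(obstacle_radius, sigma)
```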
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
- PLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving [8.105493956485583]
We use a conditional imitation learning algorithm to predict trajectories for the ego vehicle and its neighbors.
Our approach is computationally efficient and relies only on on-board sensors.
We evaluate our method offline on the publicly available dataset nuScenes.
arXiv Detail & Related papers (2020-03-09T16:55:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.