VI-IKD: High-Speed Accurate Off-Road Navigation using Learned
Visual-Inertial Inverse Kinodynamics
- URL: http://arxiv.org/abs/2203.15983v1
- Date: Wed, 30 Mar 2022 01:43:15 GMT
- Title: VI-IKD: High-Speed Accurate Off-Road Navigation using Learned
Visual-Inertial Inverse Kinodynamics
- Authors: Haresh Karnan, Kavan Singh Sikand, Pranav Atreya, Sadegh Rabiee, Xuesu
Xiao, Garrett Warnell, Peter Stone, Joydeep Biswas
- Abstract summary: Visual-Inertial Inverse Kinodynamics (VI-IKD) is a novel learning-based IKD model conditioned on visual information from a terrain patch ahead of the robot.
We show that VI-IKD enables more accurate and robust off-road navigation on a variety of different terrains at speeds of up to 3.5 m/s.
- Score: 42.92648945058518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the key challenges in high-speed off-road navigation on ground
vehicles is that the kinodynamics of the vehicle-terrain interaction can differ
dramatically depending on the terrain. Previous approaches to addressing this
challenge have considered learning an inverse kinodynamics (IKD) model,
conditioned on inertial information of the vehicle, to sense the kinodynamic
interactions. In this paper, we hypothesize that to enable accurate high-speed
off-road navigation using a learned IKD model, in addition to inertial
information from the past, one must also anticipate the kinodynamic
interactions of the vehicle with the terrain in the future. To this end, we
introduce Visual-Inertial Inverse Kinodynamics (VI-IKD), a novel learning-based
IKD model that is conditioned on visual information from a terrain patch ahead
of the robot in addition to past inertial information, enabling it to
anticipate kinodynamic interactions in the future. We validate the
effectiveness of VI-IKD in accurate high-speed off-road navigation
experimentally on a 1/5-scale UT-AlphaTruck off-road autonomous vehicle in both
indoor and outdoor environments, and show that compared to other
state-of-the-art approaches, VI-IKD enables more accurate and robust off-road
navigation on a variety of different terrains at speeds of up to 3.5 m/s.
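To make the idea in the abstract concrete, below is a minimal PyTorch sketch of a visual-inertial IKD model: it encodes a terrain patch ahead of the robot and a window of past IMU readings, and maps a desired velocity to the control command to issue. The layer sizes, input dimensions, and output parameterization here are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a visual-inertial IKD model (PyTorch).
# All dimensions and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class VisualInertialIKD(nn.Module):
    def __init__(self, imu_history_len=200, imu_dim=6):
        super().__init__()
        # CNN encoder for the terrain patch ahead of the robot (3x64x64 assumed).
        self.patch_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
        )
        # MLP encoder for the recent inertial (IMU) history.
        self.imu_encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(imu_history_len * imu_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Head maps (desired velocity, inertial context, visual context)
        # to the control command that should achieve the desired velocity.
        self.head = nn.Sequential(
            nn.Linear(2 + 64 + 64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # commanded linear and angular velocity
        )

    def forward(self, desired_vel, imu_history, terrain_patch):
        z_img = self.patch_encoder(terrain_patch)  # anticipates future terrain
        z_imu = self.imu_encoder(imu_history)      # senses past interactions
        return self.head(torch.cat([desired_vel, z_imu, z_img], dim=-1))
```

In the IKD formulation, training pairs would come from driving data: the velocities the vehicle actually achieved serve as the "desired" input, and the commands that produced them serve as the regression target.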
Related papers
- Stag-1: Towards Realistic 4D Driving Simulation with Video Generation Model [83.31688383891871]
We propose a Spatial-Temporal simulAtion for drivinG (Stag-1) model to reconstruct real-world scenes.
Stag-1 constructs continuous 4D point cloud scenes using surround-view data from autonomous vehicles.
It decouples spatial-temporal relationships and produces coherent driving videos.
arXiv Detail & Related papers (2024-12-06T18:59:56Z)
- DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving [67.46481099962088]
Current vision-centric pre-training typically relies on either 2D or 3D pretext tasks, overlooking the temporal characteristics of autonomous driving as a 4D scene understanding task.
We introduce DriveWorld, which is capable of pre-training from multi-camera driving videos in a spatio-temporal fashion.
DriveWorld delivers promising results on various autonomous driving tasks.
arXiv Detail & Related papers (2024-05-07T15:14:20Z)
- DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
arXiv Detail & Related papers (2024-03-09T12:22:46Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation by fusing sensory information, handling uncertainty, and generating contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- Dynamic V2X Autonomous Perception from Road-to-Vehicle Vision [14.666587433945363]
We propose to build V2X perception from road-to-vehicle vision and present the Adaptive Road-to-Vehicle Perception (AR2VP) method.
AR2VP is devised to tackle both intra-scene and inter-scene changes.
We conduct perception experiments on 3D object detection and segmentation, and the results show that AR2VP excels in both performance-bandwidth trade-offs and adaptability within dynamic environments.
arXiv Detail & Related papers (2023-10-29T19:01:20Z)
- Safe Navigation: Training Autonomous Vehicles using Deep Reinforcement Learning in CARLA [0.0]
The goal of this project is to train autonomous vehicles to make navigation decisions in uncertain environments using deep reinforcement learning techniques.
The CARLA simulator provides a realistic urban environment for training and testing self-driving models.
arXiv Detail & Related papers (2023-10-23T04:23:07Z)
- Learning Terrain-Aware Kinodynamic Model for Autonomous Off-Road Rally Driving With Model Predictive Path Integral Control [4.23755398158039]
We propose a method for learning a terrain-aware kinodynamic model conditioned on both proprioceptive and exteroceptive information.
The proposed model generates reliable predictions of 6-degree-of-freedom motion and can even estimate contact interactions.
We demonstrate the effectiveness of our approach through experiments on a simulated off-road track, showing that our proposed model-controller pair outperforms the baseline (a minimal MPPI sketch follows after this list).
arXiv Detail & Related papers (2023-05-01T06:09:49Z)
- D&D: Learning Human Dynamics from Dynamic Camera [55.60512353465175]
We present D&D (Learning Human Dynamics from Dynamic Camera), which leverages the laws of physics to reconstruct 3D human motion from in-the-wild videos with a moving camera.
Our approach is entirely neural-based and runs without offline optimization or simulation in physics engines.
arXiv Detail & Related papers (2022-09-19T06:51:02Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Learning Interpretable End-to-End Vision-Based Motion Planning for Autonomous Driving with Optical Flow Distillation [11.638798976654327]
IVMP is an interpretable end-to-end vision-based motion planning approach for autonomous driving.
We develop an optical flow distillation paradigm, which can effectively enhance the network while still maintaining its real-time performance.
Our IVMP significantly outperforms the state-of-the-art approaches in imitating human drivers with a much higher success rate.
arXiv Detail & Related papers (2021-04-18T13:51:25Z)
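The terrain-aware kinodynamic entry above pairs its learned model with Model Predictive Path Integral (MPPI) control. For reference, here is a minimal, generic MPPI step in NumPy; the `dynamics` and `cost` functions are placeholders (in that paper's setting, the dynamics would be the learned terrain-aware model), and the hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal, generic MPPI (Model Predictive Path Integral) control step.
# `dynamics` and `cost` are user-supplied placeholders; hyperparameters
# are illustrative assumptions.
import numpy as np

def mppi_step(x0, u_nominal, dynamics, cost, n_samples=256, sigma=0.5, lam=1.0):
    """One MPPI update: perturb the nominal control sequence, roll out the
    dynamics for each sample, and reweight by exponentiated trajectory cost."""
    horizon, u_dim = u_nominal.shape
    noise = np.random.randn(n_samples, horizon, u_dim) * sigma
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, u_nominal[t] + noise[k, t])  # rollout one step
            costs[k] += cost(x)
    # Softmin weights: low-cost rollouts dominate the control update.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nominal + np.einsum("k,ktu->tu", w, noise)
```

Here `dynamics` can be any function mapping (state, control) to the next state, so a learned kinodynamic model conditioned on proprioceptive and exteroceptive features slots in directly.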