Learning Inverse Kinodynamics for Autonomous Vehicle Drifting
- URL: http://arxiv.org/abs/2402.14928v1
- Date: Thu, 22 Feb 2024 19:24:56 GMT
- Title: Learning Inverse Kinodynamics for Autonomous Vehicle Drifting
- Authors: M. Suvarna, O. Tehrani
- Abstract summary: We learn the kinodynamic model of a small autonomous vehicle and observe its effect on motion planning, specifically autonomous drifting.
Our approach learns a kinodynamic model for high-speed circular navigation and avoids obstacles during high-speed autonomous drifts by correcting the executed curvature when drifts run loose.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we explore a data-driven, learning-based approach to learning
the kinodynamic model of a small autonomous vehicle, and observe its effect on
motion planning, specifically autonomous drifting. When a motion plan is executed
in the real world, numerous sources of error mean that what is planned is often
not what the actual car executes. Learning a kinodynamic planner from inertial
measurements and executed commands can help us estimate the world state. In our
case, we focus on drifting: a complex maneuver that requires a sufficiently
smooth surface, a sufficiently high speed, and a drastic change in velocity. We
learn the kinodynamic model for these drifting maneuvers and attempt to tighten
the slip of the car. Our approach learns a kinodynamic model for high-speed
circular navigation, and avoids obstacles during a high-speed autonomous drift
by correcting the executed curvature when drifts run loose. In future work, we
seek to adjust our kinodynamic model to succeed in tighter drifts.
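The abstract's core idea, learning a map from executed commands and inertial measurements back to the command that would realize a desired curvature, can be sketched as a simple regression. This is a minimal illustration only: the data values, variable names, and the linear model are assumptions for exposition, not the paper's actual method or data.

```python
import numpy as np

# Hypothetical training data from circular drifting runs: pairs of
# (commanded curvature, curvature actually observed via the IMU).
# Loose drifts mean the observed curvature is smaller (a wider arc)
# than what was commanded.
commanded = np.array([0.8, 1.0, 1.2, 1.4, 1.6])      # 1/m
observed = np.array([0.55, 0.72, 0.88, 1.05, 1.2])   # 1/m

# Inverse kinodynamic model: fit observed -> commanded, so that given a
# desired curvature from the planner we can look up the command that is
# expected to actually produce it on the vehicle.
A = np.vstack([observed, np.ones_like(observed)]).T
w, b = np.linalg.lstsq(A, commanded, rcond=None)[0]

def ikd_correct(desired_curvature: float) -> float:
    """Return the curvature command expected to realize the desired curvature."""
    return w * desired_curvature + b

# A planner asking for a 1.0 1/m drift circle issues a tighter command,
# since every training drift ran looser than commanded:
cmd = ikd_correct(1.0)
```

In practice a small neural network conditioned on richer inertial state would replace the linear fit, but the correction direction is the same: command tighter than you want, by a learned amount.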
Related papers
- MagicDriveDiT: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control [68.74166535159311]
We introduce MagicDriveDiT, a novel approach based on the DiT architecture.
By incorporating spatial-temporal conditional encoding, MagicDriveDiT achieves precise control over spatial-temporal latents.
Experiments show its superior performance in generating realistic street scene videos with higher resolution and more frames.
arXiv Detail & Related papers (2024-11-21T03:13:30Z)
- Optimal-state Dynamics Estimation for Physics-based Human Motion Capture from Videos [6.093379844890164]
We propose a novel method to selectively incorporate the physics models with the kinematics observations in an online setting.
A recurrent neural network is introduced to realize a Kalman filter that attentively balances the kinematics input and simulated motion.
The proposed approach excels in the physics-based human pose estimation task and demonstrates the physical plausibility of the predictive dynamics.
arXiv Detail & Related papers (2024-10-10T10:24:59Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- D&D: Learning Human Dynamics from Dynamic Camera [55.60512353465175]
We present D&D (Learning Human Dynamics from Dynamic Camera), which leverages the laws of physics to reconstruct 3D human motion from the in-the-wild videos with a moving camera.
Our approach is entirely neural-based and runs without offline optimization or simulation in physics engines.
arXiv Detail & Related papers (2022-09-19T06:51:02Z)
- Motion Planning and Control for Multi Vehicle Autonomous Racing at High Speeds [100.61456258283245]
This paper presents a multi-layer motion planning and control architecture for autonomous racing.
The proposed solution has been deployed on a Dallara AV-21 racecar and tested on oval race tracks, achieving lateral accelerations up to 25 m/s².
arXiv Detail & Related papers (2022-07-22T15:16:54Z)
- VI-IKD: High-Speed Accurate Off-Road Navigation using Learned Visual-Inertial Inverse Kinodynamics [42.92648945058518]
Visual-Inertial Inverse Kinodynamics (VI-IKD) is a novel learning-based IKD model conditioned on visual information from a terrain patch ahead of the robot.
We show that VI-IKD enables more accurate and robust off-road navigation on a variety of different terrains at speeds of up to 3.5 m/s.
arXiv Detail & Related papers (2022-03-30T01:43:15Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum model as a reference.
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
- High-speed Autonomous Drifting with Deep Reinforcement Learning [15.766089739894207]
We propose a robust drift controller without explicit motion equations.
Our controller is capable of making the vehicle drift through various sharp corners quickly and stably in the unseen map.
arXiv Detail & Related papers (2020-01-06T03:05:52Z)
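Several entries above, like the main paper, work with the curvature a vehicle actually executes as recovered from inertial measurements. The standard kinematic relations behind that are simple; the helper names below are hypothetical, and the numbers are illustrative only (the 3.5 m/s figure echoes the VI-IKD speed, not any reported curvature).

```python
def observed_curvature(yaw_rate: float, speed: float) -> float:
    """Curvature actually driven, estimated from IMU yaw rate (rad/s)
    and forward speed (m/s): kappa = omega / v, in units of 1/m."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return yaw_rate / speed

def lateral_acceleration(speed: float, curvature: float) -> float:
    """Lateral acceleration on a circular arc: a_lat = v**2 * kappa (m/s^2)."""
    return speed ** 2 * curvature

# Example: at 3.5 m/s with a 2.1 rad/s yaw rate, the vehicle traces a
# 0.6 1/m arc and experiences 7.35 m/s^2 of lateral acceleration.
kappa = observed_curvature(2.1, 3.5)
a_lat = lateral_acceleration(3.5, kappa)
```

Comparing this observed curvature against the commanded one is exactly the signal an inverse kinodynamic model trains on.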
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.