OTTR: Off-Road Trajectory Tracking using Reinforcement Learning
- URL: http://arxiv.org/abs/2110.02332v1
- Date: Tue, 5 Oct 2021 20:04:37 GMT
- Title: OTTR: Off-Road Trajectory Tracking using Reinforcement Learning
- Authors: Akhil Nagariya, Dileep Kalathil, Srikanth Saripalli
- Abstract summary: We present a novel Reinforcement Learning (RL) algorithm for the off-road trajectory tracking problem.
Our approach efficiently exploits the limited real-world data available to adapt the baseline RL policy.
Compared to the standard ILQR approach, our proposed approach achieves a 30% and 50% reduction in cross track error in Warthog and Moose, respectively.
- Score: 6.033086397437647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we present a novel Reinforcement Learning (RL) algorithm for
the off-road trajectory tracking problem. Off-road environments involve varying
terrain types and elevations, and it is difficult to model the interaction
dynamics of specific off-road vehicles with such a diverse and complex
environment. Standard RL policies trained on a simulator will fail to operate
in such challenging real-world settings. Instead of using a naive domain
randomization approach, we propose an innovative supervised-learning based
approach for overcoming the sim-to-real gap problem. Our approach efficiently
exploits the limited real-world data available to adapt the baseline RL policy
obtained using a simple kinematics simulator. This avoids the need for modeling
the diverse and complex interaction of the vehicle with off-road environments.
We evaluate the performance of the proposed algorithm using two different
off-road vehicles, Warthog and Moose. Compared to the standard ILQR approach,
our proposed approach achieves a 30% and 50% reduction in cross track error in
Warthog and Moose, respectively, by utilizing only 30 minutes of real-world
driving data.
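To make the abstract's ingredients concrete, below is a minimal sketch of a "simple kinematics simulator," the cross-track error metric used in the comparison against ILQR, and a toy supervised fit on logged real-world transitions. The unicycle model, state layout, and linear residual fit are illustrative assumptions made for this summary, not the authors' implementation; the paper's actual adaptation scheme is described in the full text.
```python
# Minimal sketch (not the authors' code): a simple kinematics simulator, a
# cross-track error metric, and a toy supervised fit on logged transitions.
# The unicycle model and the linear residual fit are illustrative assumptions.
import numpy as np

def kinematic_step(state, cmd, dt=0.05):
    """Simple unicycle kinematics: state = [x, y, heading], cmd = [v, omega]."""
    x, y, th = state
    v, w = cmd
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

def cross_track_error(position, reference_xy):
    """Distance from the vehicle (x, y) to the nearest reference waypoint."""
    return np.linalg.norm(reference_xy - position[None, :], axis=1).min()

def fit_transition_residual(states, cmds, next_states, dt=0.05):
    """Toy stand-in for the supervised adaptation idea: fit a linear map from
    (state, command) to the gap between logged real transitions and the
    kinematic model, using only a small amount of real driving data.
    states: (N, 3), cmds: (N, 2), next_states: (N, 3) NumPy arrays."""
    pred = np.array([kinematic_step(s, c, dt) for s, c in zip(states, cmds)])
    features = np.hstack([states, cmds])   # shape (N, 5)
    residuals = next_states - pred         # shape (N, 3)
    coef, *_ = np.linalg.lstsq(features, residuals, rcond=None)
    return coef                            # shape (5, 3)
```
The cross_track_error function corresponds to the quantity behind the reported 30% and 50% reductions; the residual fit only illustrates the kind of supervised use of limited real-world data the abstract refers to.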
Related papers
- WROOM: An Autonomous Driving Approach for Off-Road Navigation [17.74237088460657]
We design an end-to-end reinforcement learning (RL) system for an autonomous vehicle in off-road environments.
We warm-start the agent by imitating a rule-based controller and utilize Proximal Policy Optimization (PPO) to improve the policy.
We propose a novel simulation environment to replicate off-road driving scenarios and deploy our proposed approach on a real buggy RC car.
arXiv Detail & Related papers (2024-04-12T23:55:59Z)
- Data-efficient Deep Reinforcement Learning for Vehicle Trajectory Control [6.144517901919656]
Reinforcement learning (RL) promises to achieve control performance superior to classical approaches.
Standard RL approaches like soft actor-critic (SAC) require extensive amounts of training data to be collected.
We apply recently developed data-efficient deep RL methods to vehicle trajectory control.
arXiv Detail & Related papers (2023-11-30T09:38:59Z)
- Eco-Driving Control of Connected and Automated Vehicles using Neural Network based Rollout [0.0]
Connected and autonomous vehicles have the potential to minimize energy consumption.
Existing deterministic and stochastic methods created to solve the eco-driving problem generally suffer from high computational and memory requirements.
This work proposes a hierarchical multi-horizon optimization framework implemented via a neural network.
arXiv Detail & Related papers (2023-10-16T23:13:51Z)
- Integrating Higher-Order Dynamics and Roadway-Compliance into Constrained ILQR-based Trajectory Planning for Autonomous Vehicles [3.200238632208686]
Trajectory planning aims to produce a globally optimal route for Autonomous Passenger Vehicles.
Existing implementations utilizing the vehicle bicycle kinematic model may not guarantee controllable trajectories.
We augment this model with higher-order terms, including the first- and second-order derivatives of curvature and longitudinal jerk.
arXiv Detail & Related papers (2023-09-25T22:30:18Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- Hybrid Reinforcement Learning-Based Eco-Driving Strategy for Connected and Automated Vehicles at Signalized Intersections [3.401874022426856]
Vision-perceptive methods are integrated with vehicle-to-infrastructure (V2I) communications to achieve higher mobility and energy efficiency.
The HRL framework has three components, including a rule-based driving manager that coordinates the collaboration between the rule-based policies and the RL policy.
Experiments show that our HRL method can reduce energy consumption by 12.70% and save 11.75% travel time when compared with a state-of-the-art model-based Eco-Driving approach.
arXiv Detail & Related papers (2022-01-19T19:31:12Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns a ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm with baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
arXiv Detail & Related papers (2021-03-08T05:34:05Z)
- Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z)
- Path Planning Followed by Kinodynamic Smoothing for Multirotor Aerial Vehicles (MAVs) [61.94975011711275]
We propose a geometrically based motion planning technique, "RRT*", for this purpose.
In the proposed technique, we modified the original RRT* by introducing an adaptive search space and a steering function.
We have tested the proposed technique in various simulated environments.
arXiv Detail & Related papers (2020-08-29T09:55:49Z)