Driving in Real Life with Inverse Reinforcement Learning
- URL: http://arxiv.org/abs/2206.03004v1
- Date: Tue, 7 Jun 2022 04:36:46 GMT
- Title: Driving in Real Life with Inverse Reinforcement Learning
- Authors: Tung Phan-Minh, Forbes Howington, Ting-Sheng Chu, Sang Uk Lee, Momchil S. Tomov, Nanxiang Li, Caglayan Dicle, Samuel Findler, Francisco Suarez-Ruiz, Robert Beaudoin, Bo Yang, Sammy Omari, and Eric M. Wolff
- Abstract summary: We introduce the first learning-based planner to drive a car in dense, urban traffic using Inverse Reinforcement Learning (IRL).
DriveIRL generates a diverse set of trajectory proposals, filters these with a lightweight and interpretable safety filter, and then uses a learned model to score each remaining trajectory.
We validated DriveIRL on the Las Vegas Strip and demonstrated fully autonomous driving in heavy traffic.
- Score: 4.366642479205039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce the first learning-based planner to drive a car
in dense, urban traffic using Inverse Reinforcement Learning (IRL). Our
planner, DriveIRL, generates a diverse set of trajectory proposals, filters
these trajectories with a lightweight and interpretable safety filter, and then
uses a learned model to score each remaining trajectory. The best trajectory is
then tracked by the low-level controller of our self-driving vehicle. We train
our trajectory scoring model on a 500+ hour real-world dataset of expert
driving demonstrations in Las Vegas within the maximum entropy IRL framework.
DriveIRL's benefits include: a simple design due to only learning the
trajectory scoring function, relatively interpretable features, and strong
real-world performance. We validated DriveIRL on the Las Vegas Strip and
demonstrated fully autonomous driving in heavy traffic, including scenarios
involving cut-ins, abrupt braking by the lead vehicle, and hotel pickup/dropoff
zones. Our dataset will be made public to help further research in this area.
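The abstract outlines a three-stage pipeline: propose a diverse set of candidate trajectories, discard clearly unsafe ones with a lightweight, interpretable safety filter, and rank the survivors with a scoring model trained by maximum entropy IRL on expert demonstrations. The Python sketch below illustrates that flow under stated assumptions; every name in it (trajectory_features, passes_safety_filter, the specific feature set, the linear reward) is hypothetical and not taken from the DriveIRL implementation.

```python
import numpy as np

# Minimal sketch of a DriveIRL-style selection loop (hypothetical names and
# features, not the authors' implementation). A proposal generator supplies
# candidate trajectories, a rule-based safety filter removes unsafe ones, and
# a learned scoring model picks the best remaining trajectory.

def trajectory_features(traj, scene):
    """Hypothetical interpretable features for one candidate trajectory."""
    return np.array([
        traj["progress_along_route"],      # forward progress (m)
        traj["min_gap_to_lead_vehicle"],   # smallest headway (m)
        traj["max_abs_lateral_accel"],     # comfort proxy (m/s^2)
        traj["max_abs_jerk"],              # comfort proxy (m/s^3)
    ])

def score(traj, scene, theta):
    """Linear reward on interpretable features, as in max-entropy IRL."""
    return float(theta @ trajectory_features(traj, scene))

def select_trajectory(proposals, scene, theta, passes_safety_filter):
    """Filter proposals, then return the highest-scoring survivor."""
    safe = [t for t in proposals if passes_safety_filter(t, scene)]
    if not safe:
        return None  # a real stack would fall back to an emergency maneuver
    return max(safe, key=lambda t: score(t, scene, theta))

def expert_log_likelihood(expert_traj, proposals, scene, theta):
    """Max-entropy IRL objective for one scene: the expert is modeled as
    choosing a trajectory with probability proportional to exp(score),
    so theta is fit by maximizing this log-likelihood over logged data."""
    scores = np.array([score(t, scene, theta) for t in proposals + [expert_traj]])
    log_z = scores.max() + np.log(np.exp(scores - scores.max()).sum())
    return scores[-1] - log_z
```

In this reading, only the scoring function is learned; proposal generation and the safety filter stay hand-designed and auditable, which is the simplicity and interpretability benefit the abstract claims.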
Related papers
- Reference-Free Formula Drift with Reinforcement Learning: From Driving Data to Tire Energy-Inspired, Real-World Policies [1.3499500088995464]
Real-time drifting strategies put the car where needed while bypassing expensive trajectory optimization.
We design a reinforcement learning agent that builds on the concept of tire energy absorption to autonomously drift through changing and complex waypoint configurations.
Experiments on a Toyota GR Supra and a Lexus LC 500 show that the agent is capable of drifting smoothly through varying waypoint configurations with tracking error as low as 10 cm while stably pushing the vehicles to sideslip angles of up to 63 degrees.
arXiv Detail & Related papers (2024-10-28T13:10:15Z)
- HE-Drive: Human-Like End-to-End Driving with Vision Language Models [11.845309076856365]
We propose HE-Drive: the first human-like-centric end-to-end autonomous driving system.
We show that HE-Drive achieves state-of-the-art performance (i.e., reduces the average collision rate by 71% compared to VAD) and efficiency (i.e., 1.9X faster than SparseDrive) across the evaluated datasets.
arXiv Detail & Related papers (2024-10-07T14:06:16Z)
- Enhancing End-to-End Autonomous Driving with Latent World Model [78.22157677787239]
We propose a novel self-supervised learning approach using the LAtent World model (LAW) for end-to-end driving.
LAW predicts future scene features based on current features and ego trajectories.
This self-supervised task can be seamlessly integrated into perception-free and perception-based frameworks.
arXiv Detail & Related papers (2024-06-12T17:59:21Z)
- Traffic Smoothing Controllers for Autonomous Vehicles Using Deep Reinforcement Learning and Real-World Trajectory Data [45.13152172664334]
We design traffic-smoothing cruise controllers that can be deployed onto autonomous vehicles.
We leverage real-world trajectory data from the I-24 highway in Tennessee.
We show that at a low 4% autonomous vehicle penetration rate, we achieve significant fuel savings of over 15% on trajectories exhibiting many stop-and-go waves.
arXiv Detail & Related papers (2024-01-18T00:50:41Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, and approach the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Safe Real-World Autonomous Driving by Learning to Predict and Plan with a Mixture of Experts [3.2230833657560503]
We propose modeling a distribution over multiple future trajectories for both the self-driving vehicle and other road agents.
During inference, we select the planning trajectory that minimizes a cost taking into account safety and the predicted probabilities.
We successfully deploy it on a self-driving vehicle on urban public roads, confirming that it drives safely without compromising comfort.
arXiv Detail & Related papers (2022-11-03T20:16:24Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board the real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- PlayVirtual: Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning [84.30765628008207]
We propose a novel method, dubbed PlayVirtual, which augments cycle-consistent virtual trajectories to enhance the data efficiency for RL feature representation learning.
Our method outperforms the current state-of-the-art methods by a large margin on both benchmarks.
arXiv Detail & Related papers (2021-06-08T07:37:37Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Learn-to-Race: A Multimodal Control Environment for Autonomous Racing [23.798765519590734]
We introduce a new environment, where agents Learn-to-Race (L2R) in simulated Formula-E style racing.
Our environment, which includes a simulator and an interfacing training framework, accurately models vehicle dynamics and racing conditions.
Next, we propose the L2R task with challenging metrics, inspired by learning-to-drive challenges, Formula-E racing, and multimodal trajectory prediction for autonomous driving.
arXiv Detail & Related papers (2021-03-22T04:03:06Z)
- End-to-end Interpretable Neural Motion Planner [78.69295676456085]
We propose a neural motion planner (NMP) for learning to drive autonomously in complex urban scenarios.
We design a holistic model that takes as input raw LIDAR data and an HD map and produces interpretable intermediate representations.
We demonstrate the effectiveness of our approach in real-world driving data captured in several cities in North America.
arXiv Detail & Related papers (2021-01-17T14:16:12Z)
- LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation in safety-critical scenarios.
arXiv Detail & Related papers (2020-06-16T17:44:35Z)