Spatiotemporal Costmap Inference for MPC via Deep Inverse Reinforcement Learning
- URL: http://arxiv.org/abs/2201.06539v1
- Date: Mon, 17 Jan 2022 17:36:29 GMT
- Title: Spatiotemporal Costmap Inference for MPC via Deep Inverse Reinforcement Learning
- Authors: Keuntaek Lee, David Isele, Evangelos A. Theodorou, Sangjae Bae
- Abstract summary: We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function.
The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task.
- Score: 27.243603228431564
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It can be difficult to autonomously produce driver behavior so that it
appears natural to other traffic participants. Through Inverse Reinforcement
Learning (IRL), we can automate this process by learning the underlying reward
function from human demonstrations. We propose a new IRL algorithm that learns
a goal-conditioned spatiotemporal reward function. The resulting costmap is
used by Model Predictive Controllers (MPCs) to perform a task without any
hand-designing or hand-tuning of the cost function. We evaluate our proposed
Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL
framework together with MPC in the CARLA simulator for autonomous driving, lane
keeping, and lane changing tasks in a challenging dense traffic highway
scenario. Our proposed methods show higher success rates compared to other
baseline methods including behavior cloning, state-of-the-art RL policies, and
MPC with a learning-based behavior prediction model.
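The paper itself ships no code, but the core loop is easy to sketch: a goal-conditioned network predicts a spatiotemporal costmap, and a sampling-based MPC scores candidate trajectories against it. The following is a minimal illustrative sketch; the architecture, shapes, and names are all assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a goal-conditioned network
# outputs a spatiotemporal costmap, and a sampling-based MPC scores
# candidate trajectories against it. Shapes and names are assumptions.
import torch
import torch.nn as nn

H, W, T = 64, 64, 10  # spatial grid size and planning horizon (assumed)

class CostmapNet(nn.Module):
    """Maps a BEV observation + goal encoding to a T x H x W costmap."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, T, 1),  # one cost channel per future timestep
        )

    def forward(self, obs, goal_map):
        # goal conditioning via channel concatenation (one assumed option)
        x = torch.cat([obs, goal_map], dim=1)
        return self.net(x)  # (B, T, H, W)

def trajectory_cost(costmap, trajs):
    """Sum costmap values along each sampled trajectory.

    costmap: (T, H, W); trajs: (K, T, 2) integer grid indices (row, col).
    """
    t = torch.arange(T)
    return costmap[t, trajs[:, :, 0], trajs[:, :, 1]].sum(dim=1)  # (K,)

# MPC step: sample K candidate trajectories, pick the cheapest.
net = CostmapNet()
obs = torch.randn(1, 3, H, W)                 # placeholder BEV observation
goal = torch.zeros(1, 1, H, W)
goal[0, 0, 5, 32] = 1.0                       # mark the goal cell
costmap = net(obs, goal)[0]                   # (T, H, W)
trajs = torch.randint(0, H, (256, T, 2))      # stand-in for dynamics rollouts
best = trajectory_cost(costmap, trajs).argmin()
print("selected trajectory index:", best.item())
```

In the actual framework the candidate trajectories would come from the MPC's dynamics rollouts rather than random grid cells, and the network would be trained with the (GSTZ)-MEDIRL objective on human demonstrations.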
Related papers
- MetaFollower: Adaptable Personalized Autonomous Car Following [63.90050686330677]
We propose an adaptable personalized car-following framework - MetaFollower.
We first utilize Model-Agnostic Meta-Learning (MAML) to extract common driving knowledge from various car-following (CF) events.
We additionally combine Long Short-Term Memory (LSTM) and Intelligent Driver Model (IDM) to reflect temporal heterogeneity with high interpretability.
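The IDM component is a standard closed-form car-following law, so it can be written down directly; the parameter values below are common textbook defaults, not the ones fitted in the paper.

```python
import math

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, a_max=1.0, b=1.5,
                     delta=4.0, s0=2.0):
    """Intelligent Driver Model (IDM) acceleration.

    v  : ego speed (m/s)
    dv : approach rate v_ego - v_leader (m/s)
    s  : bumper-to-bumper gap to the leader (m)
    Parameter values are illustrative defaults, not the paper's.
    """
    # desired dynamic gap s* grows with speed and closing rate
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# e.g. following a slightly slower leader at a 20 m gap:
print(idm_acceleration(v=25.0, dv=2.0, s=20.0))
```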
arXiv Detail & Related papers (2024-06-23T15:30:40Z)
- GAN-MPC: Training Model Predictive Controllers with Parameterized Cost Functions using Demonstrations from Non-identical Experts [14.291720751625585]
We propose a generative adversarial network (GAN) to minimize the Jensen-Shannon divergence between the state-trajectory distributions of the demonstrator and the imitator.
We evaluate our approach on a variety of simulated robotics tasks from the DeepMind Control Suite.
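As a rough illustration of the trajectory-matching idea (not the authors' implementation): training a discriminator to separate demonstrator from imitator state trajectories gives the standard GAN objective, whose optimum is an affine function of the Jensen-Shannon divergence between the two distributions, so fooling the discriminator drives that divergence down. Sizes below are assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM, TRAJ_LEN = 8, 25  # illustrative sizes

# Discriminator over flattened state trajectories.
D = nn.Sequential(
    nn.Linear(STATE_DIM * TRAJ_LEN, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(D.parameters(), lr=1e-4)

def discriminator_step(expert_trajs, imitator_trajs):
    """One discriminator update. At the optimum the GAN loss equals
    2*JSD(p_expert || p_imitator) - log 4, so training the generator
    (here, the MPC's cost parameters) to fool D minimizes the
    Jensen-Shannon divergence between trajectory distributions."""
    logits_e = D(expert_trajs.flatten(1))
    logits_i = D(imitator_trajs.flatten(1))
    loss = (bce(logits_e, torch.ones_like(logits_e)) +
            bce(logits_i, torch.zeros_like(logits_i)))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

expert = torch.randn(64, TRAJ_LEN, STATE_DIM)    # placeholder demonstrations
imitator = torch.randn(64, TRAJ_LEN, STATE_DIM)  # placeholder MPC rollouts
print(discriminator_step(expert, imitator))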
arXiv Detail & Related papers (2023-05-30T15:15:30Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
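A control head of this general shape, small enough for onboard deployment, might look like the sketch below; the layer sizes and feature dimension are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative tiny control head: state features in, two commands out.
# Layer sizes are assumptions, not the paper's architecture.
policy = nn.Sequential(
    nn.Linear(32, 64), nn.Tanh(),
    nn.Linear(64, 2),  # [acceleration, steering angle]
    nn.Tanh(),         # squash both commands to [-1, 1]
)

state = torch.randn(1, 32)  # placeholder state encoding
accel, steer = policy(state)[0]
print(f"accel={accel:.3f}, steer={steer:.3f}")
```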
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
Our deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
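One way to picture that supervision pattern (purely illustrative, with made-up shapes and a placeholder reward, not the paper's method): evaluate every candidate action with a frozen forward model, then distill the resulting action values into a reactive policy.

```python
import torch
import torch.nn as nn

A = 9  # discrete action set size (assumed)

forward_model = nn.Linear(16 + A, 16)  # stand-in for learned dynamics
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, A))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(next_state):
    # placeholder reward, e.g. progress minus lane deviation
    return next_state[..., 0] - next_state[..., 1].abs()

def distill_step(state):
    """Score every action with the frozen forward model, then train the
    policy toward the resulting soft action targets (cross_entropy with
    probability targets requires PyTorch >= 1.10)."""
    with torch.no_grad():
        actions = torch.eye(A)            # all one-hot actions
        states = state.expand(A, -1)      # (A, 16)
        q = reward(forward_model(torch.cat([states, actions], dim=1)))
        target = torch.softmax(q, dim=0)  # soft action target
    logits = policy(state)
    loss = nn.functional.cross_entropy(logits.unsqueeze(0),
                                       target.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(distill_step(torch.randn(16)))
```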
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns a ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm with baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
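A minimal sketch of batch value learning in that spirit (illustrative only; the state features, reward, and network size are assumptions, not the paper's algorithm): fit V(s) on logged ride transitions with a TD(0) target.

```python
import torch
import torch.nn as nn

GAMMA = 0.99
V = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

def batch_td_step(s, r, s_next):
    """One batch TD(0) update on logged ride transitions.

    s, s_next: (B, 6) state features (e.g. location/time encodings, assumed)
    r: (B,) per-trip reward (e.g. fare income)
    """
    with torch.no_grad():
        target = r + GAMMA * V(s_next).squeeze(-1)  # bootstrapped target
    loss = nn.functional.mse_loss(V(s).squeeze(-1), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# placeholder batch of logged transitions
s, s_next = torch.randn(32, 6), torch.randn(32, 6)
r = torch.rand(32)
print(batch_td_step(s, r, s_next))
```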
arXiv Detail & Related papers (2021-03-08T05:34:05Z)
- Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z)
- Approximate Inverse Reinforcement Learning from Vision-based Imitation Learning [34.5366377122507]
We present a method for obtaining an implicit objective function for vision-based navigation.
The proposed methodology relies on Imitation Learning, Model Predictive Control (MPC), and an interpretation technique used in Deep Neural Networks.
arXiv Detail & Related papers (2020-04-17T03:36:50Z)