Developing Driving Strategies Efficiently: A Skill-Based Hierarchical
Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2302.02179v2
- Date: Sun, 17 Sep 2023 17:48:48 GMT
- Title: Developing Driving Strategies Efficiently: A Skill-Based Hierarchical
Reinforcement Learning Approach
- Authors: Yigit Gurses, Kaan Buyukdemirci, and Yildiray Yildiz
- Abstract summary: Reinforcement learning is a common tool to model driver policies.
We propose "skill-based" hierarchical driving strategies, where motion primitives are designed and used as high-level actions.
- Score: 0.7373617024876725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driving in dense traffic with human and autonomous drivers is a challenging
task that requires high-level planning and reasoning. Human drivers can achieve
this task comfortably, and there have been many efforts to model human driver
strategies. These strategies can be used as inspiration for developing
autonomous driving algorithms or to create high-fidelity simulators.
Reinforcement learning is a common tool to model driver policies, but
conventional training of these models can be computationally expensive and
time-consuming. To address this issue, in this paper, we propose "skill-based"
hierarchical driving strategies, where motion primitives, i.e., skills, are
designed and used as high-level actions. This reduces the training time for
applications that require multiple models with varying behavior. Simulation
results in a merging scenario demonstrate that the proposed approach yields
driver models that achieve higher performance with less training compared to
baseline reinforcement learning methods.
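The core idea in the abstract, letting the high-level agent choose among hand-designed motion primitives ("skills") instead of raw controls, can be sketched as follows. This is a minimal illustration under assumed names and toy dynamics, not the authors' implementation: the specific skills, horizon, and update rule are placeholders.

```python
import random

# Hypothetical motion primitives ("skills"): each maps the current speed to a
# short open-loop sequence of (acceleration, steering) commands.
SKILL_HORIZON = 5  # low-level steps executed per high-level action (assumed)

def maintain_speed(speed):
    return [(0.0, 0.0)] * SKILL_HORIZON

def accelerate(speed):
    return [(1.5, 0.0)] * SKILL_HORIZON

def brake(speed):
    return [(-2.0, 0.0)] * SKILL_HORIZON

def merge_left(speed):
    # steer left for half the horizon, then straighten back out
    half = SKILL_HORIZON // 2
    return [(0.0, 0.1)] * half + [(0.0, -0.1)] * (SKILL_HORIZON - half)

SKILLS = [maintain_speed, accelerate, brake, merge_left]

def rollout_skill(skill, state):
    """Execute one skill to completion. The high-level agent only observes the
    state before and after, so one decision covers SKILL_HORIZON env steps."""
    speed, lane_offset = state
    for accel, steer in skill(speed):
        speed = max(0.0, speed + accel * 0.1)  # crude Euler step, dt = 0.1 s
        lane_offset += steer
    return (speed, lane_offset)

def choose_skill(q_values, epsilon=0.1):
    """Epsilon-greedy high-level policy over the discrete skill set."""
    if random.random() < epsilon:
        return random.randrange(len(SKILLS))
    return max(range(len(SKILLS)), key=lambda i: q_values[i])
```

Because each high-level action spans several environment steps, the effective decision horizon shrinks, which is the source of the training-time savings the abstract claims over flat RL baselines acting on raw controls.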
Related papers
- Efficient Motion Prediction: A Lightweight & Accurate Trajectory Prediction Model With Fast Training and Inference Speed [56.27022390372502]
We propose a new efficient motion prediction model, which achieves highly competitive benchmark results while training for only a few hours on a single GPU.
Its low inference latency makes it particularly suitable for deployment in autonomous applications with limited computing resources.
arXiv Detail & Related papers (2024-09-24T14:58:27Z)
- Guiding Attention in End-to-End Driving Models [49.762868784033785]
Vision-based end-to-end driving models trained by imitation learning can lead to affordable solutions for autonomous driving.
We study how to guide the attention of these models to improve their driving quality by adding a loss term during training.
In contrast to previous work, our method does not require these salient semantic maps to be available during testing time.
arXiv Detail & Related papers (2024-04-30T23:18:51Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study that analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- DQ-GAT: Towards Safe and Efficient Autonomous Driving with Deep Q-Learning and Graph Attention Networks [12.714551756377265]
Traditional planning methods are largely rule-based and scale poorly in complex dynamic scenarios.
We propose DQ-GAT to achieve scalable and proactive autonomous driving.
Our method can better trade-off safety and efficiency in both seen and unseen scenarios.
arXiv Detail & Related papers (2021-08-11T04:55:23Z)
- Learning to drive from a world on rails [78.28647825246472]
We learn an interactive vision-based driving policy from pre-recorded driving logs via a model-based approach.
A forward model of the world supervises a driving policy that predicts the outcome of any potential driving trajectory.
Our method ranks first on the CARLA leaderboard, attaining a 25% higher driving score while using 40 times less data.
arXiv Detail & Related papers (2021-05-03T05:55:30Z)
- Autonomous Overtaking in Gran Turismo Sport Using Curriculum Reinforcement Learning [39.757652701917166]
This work proposes a new learning-based method to tackle the autonomous overtaking problem.
We evaluate our approach using Gran Turismo Sport -- a world-leading car racing simulator.
arXiv Detail & Related papers (2021-03-26T18:06:50Z)
- Affordance-based Reinforcement Learning for Urban Driving [3.507764811554557]
We propose a deep reinforcement learning framework to learn optimal control policy using waypoints and low-dimensional visual representations.
We demonstrate that our agents, when trained from scratch, learn the tasks of lane following, driving around intersections, and stopping in front of other actors or traffic lights, even in dense traffic.
arXiv Detail & Related papers (2021-01-15T05:21:25Z)
- Action-Based Representation Learning for Autonomous Driving [8.296684637620551]
We propose to use action-based driving data for learning representations.
Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery.
arXiv Detail & Related papers (2020-08-21T10:49:13Z)
- Driver Modeling through Deep Reinforcement Learning and Behavioral Game Theory [0.0]
It is estimated that for an autonomous vehicle to reach the same safety level as human-driven cars, millions of miles of driving tests are required.
The modeling framework presented in this paper may be used in a high-fidelity traffic simulator consisting of multiple human decision makers to reduce the time and effort spent on testing by allowing safe and quick assessment of self-driving algorithms.
arXiv Detail & Related papers (2020-03-24T18:59:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.