A Reinforcement Learning Benchmark for Autonomous Driving in
Intersection Scenarios
- URL: http://arxiv.org/abs/2109.10557v1
- Date: Wed, 22 Sep 2021 07:38:23 GMT
- Title: A Reinforcement Learning Benchmark for Autonomous Driving in
Intersection Scenarios
- Authors: Yuqi Liu, Qichao Zhang and Dongbin Zhao
- Abstract summary: We propose a benchmark for training and testing RL-based autonomous driving agents in complex intersection scenarios, which is called RL-CIS.
The test benchmark and baselines aim to provide a fair and comprehensive training and testing platform for the study of RL for autonomous driving in intersection scenarios.
- Score: 11.365750371241154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, control under urban intersection scenarios has become
an emerging research topic. In such scenarios, the autonomous vehicle confronts
complicated situations, since it must handle interactions with social vehicles
in a timely manner while obeying traffic rules. Generally, the autonomous
vehicle is supposed to avoid collisions while pursuing better efficiency.
Existing work fails to provide a framework that emphasizes the integrity of the
scenarios while supporting the deployment and testing of reinforcement learning
(RL) methods. To address this, we propose a benchmark for training and testing
RL-based autonomous driving agents in complex intersection scenarios, called
RL-CIS. We then deploy a set of baselines consisting of various algorithms. The
benchmark and baselines aim to provide a fair and comprehensive training and
testing platform for the study of RL for autonomous driving in intersection
scenarios, advancing the progress of RL-based methods for intersection
autonomous driving control. The code of our proposed framework can be found at
https://github.com/liuyuqi123/ComplexUrbanScenarios.
Related papers
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z) - FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z) - NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z) - DeFIX: Detecting and Fixing Failure Scenarios with Reinforcement
Learning in Imitation Learning Based Autonomous Driving [0.0]
We present a Reinforcement Learning (RL) based methodology to DEtect and FIX failures of an IL agent.
DeFIX is a continuous learning framework, where extraction of failure scenarios and training of RL agents are executed in an infinite loop.
It is demonstrated that even with only one RL agent trained on the failure scenarios of an IL agent, the DeFIX method is either competitive with or outperforms state-of-the-art IL- and RL-based autonomous urban driving benchmarks.
arXiv Detail & Related papers (2022-10-29T10:58:43Z) - Adaptive Decision Making at the Intersection for Autonomous Vehicles
Based on Skill Discovery [13.134487965031667]
In urban environments, the complex and uncertain intersection scenarios are challenging for autonomous driving.
To ensure safety, it is crucial to develop an adaptive decision making system that can handle the interaction with other vehicles.
We propose a hierarchical framework that can autonomously accumulate and reuse knowledge.
arXiv Detail & Related papers (2022-07-24T11:56:45Z) - Evaluating the Robustness of Deep Reinforcement Learning for Autonomous
Policies in a Multi-agent Urban Driving Environment [3.8073142980733]
We propose a benchmarking framework for comparing deep reinforcement learning algorithms in vision-based autonomous driving.
We run the experiments in vision-only, high-fidelity simulated urban driving environments.
The results indicate that only some of the deep reinforcement learning algorithms perform consistently well across both single- and multi-agent scenarios.
arXiv Detail & Related papers (2021-12-22T15:14:50Z) - Carl-Lead: Lidar-based End-to-End Autonomous Driving with Contrastive
Deep Reinforcement Learning [10.040113551761792]
In this work, we use deep reinforcement learning (DRL) to train lidar-based end-to-end driving policies that naturally account for imperfect partial observations.
Our method achieves higher success rates than the state-of-the-art (SOTA) lidar-based end-to-end driving network.
arXiv Detail & Related papers (2021-09-17T11:24:10Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - CARLA Real Traffic Scenarios -- novel training ground and benchmark for
autonomous driving [8.287331387095545]
This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic.
We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems.
arXiv Detail & Related papers (2020-12-16T13:20:39Z) - SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for
Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z) - Intelligent Roundabout Insertion using Deep Reinforcement Learning [68.8204255655161]
We present a maneuver planning module able to negotiate the entering in busy roundabouts.
The proposed module is based on a neural network trained to predict when and how to enter the roundabout throughout the whole duration of the maneuver.
arXiv Detail & Related papers (2020-01-03T11:16:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.