CARLA Real Traffic Scenarios -- novel training ground and benchmark for
autonomous driving
- URL: http://arxiv.org/abs/2012.11329v1
- Date: Wed, 16 Dec 2020 13:20:39 GMT
- Title: CARLA Real Traffic Scenarios -- novel training ground and benchmark for
autonomous driving
- Authors: Błażej Osiński, Piotr Miłoś, Adam Jakubowski, Paweł Zięcina,
Michał Martyniak, Christopher Galias, Antonia Breuer, Silviu Homoceanu,
Henryk Michalewski
- Abstract summary: This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic.
We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems.
- Score: 8.287331387095545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces interactive traffic scenarios in the CARLA simulator,
which are based on real-world traffic. We concentrate on tactical tasks lasting
several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and
testing ground for autonomous driving systems. To this end, we open-source the
code under a permissive license and present a set of baseline policies. CRTS
combines the realism of traffic scenarios and the flexibility of simulation. We
use it to train agents using a reinforcement learning algorithm. We show how to
obtain competitive policies and evaluate experimentally how observation types
and reward schemes affect the training process and the resulting agent's
behavior.
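For concreteness, here is a minimal sketch of how an agent could be rolled out on CRTS-style scenarios. It assumes a classic gym-style interface; the environment id, observation contents, and the info success flag are hypothetical placeholders rather than the actual API of the open-sourced CRTS code.

import numpy as np

def evaluate(env, policy, episodes=10):
    # Roll out a policy on short tactical scenarios (a few seconds each)
    # and report the fraction of successfully completed episodes.
    successes = 0
    for _ in range(episodes):
        obs = env.reset()           # e.g. a bird's-eye-view raster or a vector state
        done, info = False, {}
        while not done:
            action = policy(obs)    # e.g. (acceleration, steering) in [-1, 1]
            obs, reward, done, info = env.step(action)
        successes += int(info.get("success", False))
    return successes / episodes

def random_policy(obs):
    # Placeholder policy over a 2-D continuous action space; a trained RL
    # policy would be plugged in here instead.
    return np.random.uniform(-1.0, 1.0, size=2)

# Hypothetical usage, assuming CRTS registers gym-style scenario environments:
# env = gym.make("CRTS-LaneChange-v0")
# print(evaluate(env, random_policy))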
Related papers
- Prompt to Transfer: Sim-to-Real Transfer for Traffic Signal Control with
Prompt Learning [4.195122359359966]
Large Language Models (LLMs) are trained on vast amounts of knowledge and have proven to possess strong inference abilities.
In this work, we leverage LLMs to understand and profile system dynamics via a prompt-based grounded action transformation.
arXiv Detail & Related papers (2023-08-28T03:49:13Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- DeFIX: Detecting and Fixing Failure Scenarios with Reinforcement Learning in
Imitation Learning Based Autonomous Driving [0.0]
We present a Reinforcement Learning (RL) based methodology to DEtect and FIX failures of an imitation learning (IL) agent.
DeFIX is a continuous learning framework, where extraction of failure scenarios and training of RL agents are executed in an infinite loop.
It is demonstrated that even with only one RL agent trained on the failure scenarios of an IL agent, DeFIX is competitive with or outperforms state-of-the-art IL- and RL-based methods on autonomous urban driving benchmarks.
arXiv Detail & Related papers (2022-10-29T10:58:43Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning
[63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- Learning Interactive Driving Policies via Data-driven Simulation
[125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z)
- A Reinforcement Learning Benchmark for Autonomous Driving in Intersection
Scenarios [11.365750371241154]
We propose a benchmark for training and testing RL-based autonomous driving agents in complex intersection scenarios, which is called RL-CIS.
The benchmark and baselines aim to provide a fair and comprehensive platform for studying RL-based autonomous driving in intersection scenarios.
arXiv Detail & Related papers (2021-09-22T07:38:23Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors
[74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Affordance-based Reinforcement Learning for Urban Driving [3.507764811554557]
We propose a deep reinforcement learning framework to learn optimal control policy using waypoints and low-dimensional visual representations.
We demonstrate that our agents, when trained from scratch, learn lane-following, driving around intersections, and stopping in front of other actors or traffic lights, even in dense traffic.
arXiv Detail & Related papers (2021-01-15T05:21:25Z)
- Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z)
- Development of A Stochastic Traffic Environment with Generative Time-Series
Models for Improving Generalization Capabilities of Autonomous Driving Agents
[0.0]
We develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data.
The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles.
We demonstrate through simulations that RL agents trained in the GAN-based traffic simulator have stronger generalization capabilities than RL agents trained in simple rule-driven simulators (a rough sketch of this idea follows the list).
arXiv Detail & Related papers (2020-06-10T13:14:34Z)
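As a rough illustration of the GAN-based simulator idea from the last entry above, the sketch below trains a generator to produce short vehicle trajectories that a discriminator cannot distinguish from recorded ones. The network sizes, trajectory length, and optimizer settings are illustrative assumptions, not the cited paper's actual implementation.

import torch
import torch.nn as nn

T, D, Z = 20, 2, 16  # trajectory length, state dimension (x, y), latent size

generator = nn.Sequential(nn.Linear(Z, 64), nn.ReLU(), nn.Linear(64, T * D))
discriminator = nn.Sequential(nn.Linear(T * D, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_traj):
    # One adversarial update on a batch of flattened real trajectories
    # with shape (batch, T * D).
    batch = real_traj.size(0)
    fake_traj = generator(torch.randn(batch, Z))

    # Discriminator: push recorded trajectories towards label 1, generated towards 0.
    d_loss = bce(discriminator(real_traj), torch.ones(batch, 1)) + \
             bce(discriminator(fake_traj.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator label its samples as real.
    g_loss = bce(discriminator(fake_traj), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()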
This list is automatically generated from the titles and abstracts of the papers on this site.