Development of A Stochastic Traffic Environment with Generative
Time-Series Models for Improving Generalization Capabilities of Autonomous
Driving Agents
- URL: http://arxiv.org/abs/2006.05821v1
- Date: Wed, 10 Jun 2020 13:14:34 GMT
- Title: Development of A Stochastic Traffic Environment with Generative
Time-Series Models for Improving Generalization Capabilities of Autonomous
Driving Agents
- Authors: Anil Ozturk, Mustafa Burak Gunel, Melih Dal, Ugur Yavas, Nazim Kemal
Ure
- Abstract summary: We develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data.
The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles.
We demonstrate through simulations that RL agents trained on the GAN-based traffic simulator have stronger generalization capabilities than RL agents trained on simple rule-driven simulators.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated lane changing is a critical feature for advanced autonomous driving
systems. In recent years, reinforcement learning (RL) algorithms trained on
traffic simulators yielded successful results in computing lane-changing
policies that strike a balance between safety, agility, and robustness to
traffic uncertainty. However, many RL algorithms exhibit simulator bias, and
policies trained on simple simulators do not generalize well to realistic
traffic scenarios. In this work, we develop a data-driven traffic simulator by
training a generative adversarial network (GAN) on real-life trajectory data.
The simulator generates randomized trajectories that resemble real-life
traffic interactions between vehicles, which enables training the RL agent on
much richer and more realistic scenarios. We demonstrate through simulations
that RL agents trained on the GAN-based traffic simulator have stronger
generalization capabilities than RL agents trained on simple rule-driven
simulators.
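The core idea in the abstract (surrounding traffic drawn from a generative model rather than fixed rules, so each RL episode sees a fresh randomized scenario) can be sketched minimally as below. All names and dynamics are illustrative assumptions, and a smoothed random walk stands in for the trained GAN generator so the sketch stays self-contained:

```python
import random


def sample_trajectory(horizon, speed=30.0, noise=0.5, rng=None):
    """Stand-in for the trained generator: one longitudinal trajectory."""
    rng = rng or random.Random()
    pos, vel, traj = 0.0, speed, []
    for _ in range(horizon):
        vel += rng.gauss(0.0, noise)   # stochastic acceleration
        pos += max(vel, 0.0) * 0.1     # 0.1 s timestep, no reversing
        traj.append(pos)
    return traj


class GenerativeTrafficEnv:
    """Gym-style loop where surrounding traffic is sampled, not scripted."""

    def __init__(self, n_vehicles=3, horizon=50, seed=None):
        self.n_vehicles = n_vehicles
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        # Every episode draws a fresh randomized scenario, which is the
        # source of the richer training distribution argued for above.
        self.t = 0
        self.ego_pos = 0.0
        self.trajs = [sample_trajectory(self.horizon, rng=self.rng)
                      for _ in range(self.n_vehicles)]
        return self._obs()

    def _obs(self):
        # Observation: gap between each sampled vehicle and the ego.
        return [traj[self.t] - self.ego_pos for traj in self.trajs]

    def step(self, accel):
        self.ego_pos += 3.0 + 0.1 * accel   # toy ego dynamics
        self.t += 1
        gaps = self._obs()
        reward = -1.0 if any(abs(g) < 2.0 for g in gaps) else 0.1
        done = self.t >= self.horizon - 1
        return gaps, reward, done
```

In a full implementation, `sample_trajectory` would be replaced by a forward pass of the trained GAN generator, and the observation and reward would encode lane geometry and lane-change safety rather than simple longitudinal gaps.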
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive human-like handling.
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
- CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning [38.63187494867502]
CtRL-Sim is a method that leverages return-conditioned offline reinforcement learning (RL) to efficiently generate reactive and controllable traffic agents.
We show that CtRL-Sim can generate realistic safety-critical scenarios while providing fine-grained control over agent behaviours.
arXiv Detail & Related papers (2024-03-29T02:10:19Z)
- Purpose in the Machine: Do Traffic Simulators Produce Distributionally Equivalent Outcomes for Reinforcement Learning Applications? [35.719833726363085]
This work focuses on two simulators commonly used to train reinforcement learning (RL) agents for traffic applications, CityFlow and SUMO.
A controlled virtual experiment varying driver behavior and simulation scale finds evidence against distributional equivalence in RL-relevant measures from these simulators.
While granular real-world validation generally remains infeasible, these findings suggest that traffic simulators are not a deus ex machina for RL training.
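The distributional-equivalence question above can be probed with a two-sample Kolmogorov-Smirnov statistic over an RL-relevant measure (e.g. per-episode mean speed) logged from the two simulators. This is a generic sketch of that kind of comparison, not the cited study's actual methodology:

```python
from bisect import bisect_right


def ks_statistic(a, b):
    """Largest gap between the empirical CDFs of two samples."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    # bisect_right(s, x) counts the elements of sorted s that are <= x,
    # so dividing by len(s) evaluates the empirical CDF at x.
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in grid)
```

A statistic near 0 is consistent with the two simulators producing similar distributions of the measure; a large value (identical samples give 0.0, fully separated samples give 1.0) is evidence against distributional equivalence.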
arXiv Detail & Related papers (2023-11-14T01:05:14Z)
- Learning Realistic Traffic Agents in Closed-loop [36.38063449192355]
Reinforcement learning (RL) can train traffic agents to avoid infractions, but RL alone results in unrealistic, non-human-like driving behaviors.
We propose Reinforcing Traffic Rules (RTR) to match expert demonstrations under a traffic compliance constraint.
Our experiments show that RTR learns more realistic and generalizable traffic simulation policies.
arXiv Detail & Related papers (2023-11-02T16:55:23Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction [149.5716746789134]
We show data-driven traffic simulation can be formulated as a world model.
We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving.
Experiments on the open motion dataset show TrafficBots can simulate realistic multi-agent behaviors.
arXiv Detail & Related papers (2023-03-07T18:28:41Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- CARLA Real Traffic Scenarios -- novel training ground and benchmark for autonomous driving [8.287331387095545]
This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic.
We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems.
arXiv Detail & Related papers (2020-12-16T13:20:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.