LemgoRL: An open-source Benchmark Tool to Train Reinforcement Learning
Agents for Traffic Signal Control in a real-world simulation scenario
- URL: http://arxiv.org/abs/2103.16223v1
- Date: Tue, 30 Mar 2021 10:11:09 GMT
- Title: LemgoRL: An open-source Benchmark Tool to Train Reinforcement Learning
Agents for Traffic Signal Control in a real-world simulation scenario
- Authors: Arthur Müller, Vishal Rangras, Georg Schnittker, Michael Waldmann,
Maxim Friesen, Tobias Ferfers, Lukas Schreckenberg, Florian Hufen, Jürgen
Jasperneite, Marco Wiering
- Abstract summary: Sub-optimal control policies in intersection traffic signal controllers (TSC) contribute to congestion and lead to negative effects on human health and the environment.
We propose LemgoRL, a benchmark tool to train RL agents as TSC in a realistic simulation environment of Lemgo, a medium-sized town in Germany.
LemgoRL offers the same interface as the well-known OpenAI gym toolkit to enable easy deployment in existing research work.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sub-optimal control policies in intersection traffic signal controllers (TSC)
contribute to congestion and lead to negative effects on human health and the
environment. Reinforcement learning (RL) for traffic signal control is a
promising approach to design better control policies and has attracted
considerable research interest in recent years. However, most work done in this
area used simplified simulation environments of traffic scenarios to train
RL-based TSC. To deploy RL in real-world traffic systems, the gap between
simplified simulation environments and real-world applications has to be
closed. Therefore, we propose LemgoRL, a benchmark tool to train RL agents as
TSC in a realistic simulation environment of Lemgo, a medium-sized town in
Germany. In addition to the realistic simulation model, LemgoRL encompasses a
traffic signal logic unit that ensures compliance with all regulatory and
safety requirements. LemgoRL offers the same interface as the well-known OpenAI
gym toolkit to enable easy deployment in existing research work. Our benchmark
tool drives the development of RL algorithms towards real-world applications.
We provide LemgoRL as an open-source tool at https://github.com/rl-ina/lemgorl.
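Because LemgoRL exposes the standard OpenAI gym interface, an agent interacts with it through the usual reset/step loop. The sketch below illustrates that loop; the environment id "LemgoRL-v0" and the constructor are assumptions for illustration, and the actual entry point is documented in the repository at https://github.com/rl-ina/lemgorl.

```python
# Minimal sketch of the gym-style control loop that LemgoRL mirrors.
# "LemgoRL-v0" is a hypothetical environment id (assumption); the repository
# documents the real registration name and constructor.
import gym

env = gym.make("LemgoRL-v0")  # hypothetical id, for illustration only
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Random action as a stand-in for a trained RL policy; per the abstract,
    # a traffic signal logic unit inside LemgoRL enforces regulatory and
    # safety requirements on whatever the agent requests.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
```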
Related papers
- IntersectionZoo: Eco-driving for Benchmarking Multi-Agent Contextual Reinforcement Learning [4.80862277413422]
We propose IntersectionZoo, a comprehensive benchmark suite for multi-agent reinforcement learning.
By grounding IntersectionZoo in a real-world application, we naturally capture real-world problem characteristics.
IntersectionZoo is built on data-informed simulations of 16,334 signalized intersections from 10 major US cities.
arXiv Detail & Related papers (2024-10-19T21:34:24Z)
- Adaptive Transit Signal Priority based on Deep Reinforcement Learning and Connected Vehicles in a Traffic Microsimulation Environment [0.0]
This study extends RL-based traffic control to include adaptive transit signal priority (TSP) algorithms.
The agent is shown to reduce bus travel time by about 21%, with marginal impacts on general traffic at a saturation rate of 0.95.
arXiv Detail & Related papers (2024-07-31T18:17:22Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning [18.2541182874636]
We propose a fully Data-Driven and simulator-free framework for realistic Traffic Signal Control (D2TSC).
We combine well-established traffic flow theory with machine learning to infer the reward signals from coarse-grained traffic data.
Our approach achieves superior performance over conventional and offline RL baselines, and also enjoys much better real-world applicability.
arXiv Detail & Related papers (2023-11-27T15:29:21Z)
- Purpose in the Machine: Do Traffic Simulators Produce Distributionally Equivalent Outcomes for Reinforcement Learning Applications? [35.719833726363085]
This work focuses on two simulators commonly used to train reinforcement learning (RL) agents for traffic applications, CityFlow and SUMO.
A controlled virtual experiment varying driver behavior and simulation scale finds evidence against distributional equivalence in RL-relevant measures from these simulators.
While granular real-world validation generally remains infeasible, these findings suggest that traffic simulators are not a deus ex machina for RL training.
arXiv Detail & Related papers (2023-11-14T01:05:14Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interactions between the agent and the environment.
We propose a new method that uses unsupervised model-based RL to pre-train the agent.
We show robust performance on the Real-World RL benchmark, hinting at resiliency to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
- LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning [78.2286146954051]
LCRL implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs).
We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL.
arXiv Detail & Related papers (2022-09-21T13:21:00Z)
- DriverGym: Democratising Reinforcement Learning for Autonomous Driving [75.91049219123899]
We propose DriverGym, an open-source environment for developing reinforcement learning algorithms for autonomous driving.
DriverGym provides access to more than 1000 hours of expert logged data and also supports reactive and data-driven agent behavior.
The performance of an RL policy can be easily validated on real-world data using our extensive and flexible closed-loop evaluation protocol.
arXiv Detail & Related papers (2021-11-12T11:47:08Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
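A hypothetical sketch of such a consistency term (not the paper's implementation): the generator is penalized whenever translating an observation changes the Q-values a fixed critic assigns to it; `q_net` and `generator` are stand-in modules introduced here for illustration.

```python
# Hypothetical RL-scene consistency penalty (assumption, not the authors'
# code): image translation should leave the critic's Q-values unchanged.
import torch.nn.functional as F

def rl_scene_consistency_loss(q_net, generator, obs, actions):
    q_original = q_net(obs, actions)               # Q-values on source images
    q_translated = q_net(generator(obs), actions)  # Q-values after translation
    # Penalize the generator for Q-value drift; the critic is held fixed.
    return F.mse_loss(q_translated, q_original.detach())
```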
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
- Development of A Stochastic Traffic Environment with Generative Time-Series Models for Improving Generalization Capabilities of Autonomous Driving Agents [0.0]
We develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data.
The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles.
We demonstrate through simulations that RL agents trained on the GAN-based traffic simulator have stronger generalization capabilities than RL agents trained on simple rule-driven simulators.
arXiv Detail & Related papers (2020-06-10T13:14:34Z)