The Real Deal: A Review of Challenges and Opportunities in Moving
Reinforcement Learning-Based Traffic Signal Control Systems Towards Reality
- URL: http://arxiv.org/abs/2206.11996v1
- Date: Thu, 23 Jun 2022 22:05:38 GMT
- Title: The Real Deal: A Review of Challenges and Opportunities in Moving
Reinforcement Learning-Based Traffic Signal Control Systems Towards Reality
- Authors: Rex Chen, Fei Fang, Norman Sadeh
- Abstract summary: Traffic signal control (TSC) is a high-stakes domain that is growing in importance as traffic volume grows globally.
Reinforcement learning (RL) can draw on an abundance of traffic data to improve signalling efficiency.
However, RL-based signal controllers have never been deployed.
We focus on four challenges involving (1) uncertainty in detection, (2) reliability of communications, (3) compliance and interpretability, and (4) heterogeneous road users.
- Score: 35.22273933799107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic signal control (TSC) is a high-stakes domain that is growing in
importance as traffic volume grows globally. An increasing number of works are
applying reinforcement learning (RL) to TSC; RL can draw on an abundance of
traffic data to improve signalling efficiency. However, RL-based signal
controllers have never been deployed. In this work, we provide the first review
of challenges that must be addressed before RL can be deployed for TSC. We
focus on four challenges involving (1) uncertainty in detection, (2)
reliability of communications, (3) compliance and interpretability, and (4)
heterogeneous road users. We show that the literature on RL-based TSC has made
some progress towards addressing each challenge. However, more work should take
a systems-thinking approach that considers the impacts of other pipeline
components on RL.
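
To make challenge (1) concrete, the sketch below shows one way a simulation study might inject detection noise into the queue observations a signal controller acts on. The miss/false-call rates and the greedy fallback policy are illustrative assumptions, not methods from the review.

```python
import random

# Illustrative only: corrupt ground-truth queue counts with detector misses and
# false calls before a controller sees them (challenge 1: uncertainty in detection).
MISS_RATE = 0.10    # assumed probability that a queued vehicle goes undetected
GHOST_RATE = 0.05   # assumed probability of one spurious detection per approach

def noisy_counts(true_counts, miss_rate=MISS_RATE, ghost_rate=GHOST_RATE):
    """Return per-approach queue counts as an imperfect detector might report them."""
    observed = []
    for n in true_counts:
        seen = sum(1 for _ in range(n) if random.random() > miss_rate)  # dropped detections
        seen += 1 if random.random() < ghost_rate else 0                # occasional false call
        observed.append(seen)
    return observed

def choose_phase(observed_counts):
    """Toy stand-in for a learned policy: give green to the longest observed queue."""
    return max(range(len(observed_counts)), key=lambda i: observed_counts[i])

if __name__ == "__main__":
    random.seed(0)
    true_queues = [7, 2, 5, 1]   # vehicles actually queued on approaches N, E, S, W
    obs = noisy_counts(true_queues)
    print("true:", true_queues, "observed:", obs, "-> green:", choose_phase(obs))
```

A policy trained only on the clean counts can behave quite differently once it is fed the observed ones, which is exactly the gap the review highlights.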
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334]
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
arXiv Detail & Related papers (2024-08-28T12:35:56Z)
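
For a concrete picture of the turn-based idea in the entry above, here is a minimal tabular Q-learning sketch in which the agent observes binned queue lengths and chooses which approach receives green. The queue dynamics, binning, and negative-total-queue reward are assumptions made for illustration; the paper's actual formulation may differ.

```python
import random
from collections import defaultdict

# Minimal tabular sketch of a "turn-based" signal agent: state = binned queue
# lengths, action = which approach gets green. Dynamics and reward are assumed.
N_APPROACHES = 4
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def bin_queues(queues):
    # Discretise each queue into {0: empty, 1: short, 2: long} to keep the table small.
    return tuple(0 if q == 0 else 1 if q <= 4 else 2 for q in queues)

def step(queues, green):
    # Toy dynamics: the green approach discharges up to 3 vehicles; all approaches get random arrivals.
    queues = list(queues)
    queues[green] = max(0, queues[green] - 3)
    for i in range(N_APPROACHES):
        queues[i] += random.choice([0, 0, 1, 1, 2])
    return queues, -sum(queues)   # reward: fewer vehicles left waiting is better

Q = defaultdict(lambda: [0.0] * N_APPROACHES)

def act(state):
    if random.random() < EPS:
        return random.randrange(N_APPROACHES)
    return max(range(N_APPROACHES), key=lambda a: Q[state][a])

random.seed(1)
queues = [random.randint(0, 6) for _ in range(N_APPROACHES)]
for _ in range(5000):
    s = bin_queues(queues)
    a = act(s)
    queues, r = step(queues, a)
    s_next = bin_queues(queues)
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])   # standard Q-learning update
print("learned preferences for the current state:", Q[bin_queues(queues)])
```

The time-based variant described in the same entry would instead keep a fixed phase order and learn how long to hold each phase.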
- iLLM-TSC: Integration reinforcement learning and large language model for traffic signal control policy improvement [5.078593258867346]
We introduce a novel integration framework that combines a large language model (LLM) with reinforcement learning (RL).
Our approach reduces the average waiting time by 17.5% in degraded communication conditions as compared to traditional RL methods.
arXiv Detail & Related papers (2024-07-08T15:22:49Z)
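
The iLLM-TSC entry above describes layering an LLM on top of an RL policy so that unreasonable decisions, e.g. under degraded communication, can be corrected. The sketch below mimics only that architecture: review_action is a hypothetical rule-based stand-in for the LLM call, not the paper's interface.

```python
from dataclasses import dataclass
from typing import List

# Architectural sketch only: an RL policy proposes a phase and a reviewing
# component (an LLM in iLLM-TSC; here a hypothetical rule-based stand-in)
# may override proposals that look unreasonable under degraded communication.

@dataclass
class Observation:
    queue_counts: List[int]   # per-approach queue estimates (possibly stale)
    packets_lost: bool        # detector/vehicle messages were dropped this cycle

def rl_policy(obs: Observation) -> int:
    # Placeholder for a trained policy: serve the longest reported queue.
    return max(range(len(obs.queue_counts)), key=lambda i: obs.queue_counts[i])

def review_action(obs: Observation, proposed: int) -> int:
    # Hypothetical reviewer: with packet loss and an all-zero (likely stale)
    # observation, fall back to round-robin instead of trusting the proposal.
    if obs.packets_lost and sum(obs.queue_counts) == 0:
        return (proposed + 1) % len(obs.queue_counts)
    return proposed

obs = Observation(queue_counts=[0, 0, 0, 0], packets_lost=True)
proposed = rl_policy(obs)
print("proposed:", proposed, "-> served after review:", review_action(obs, proposed))
```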
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, less idling time, and lower CO2 emissions.
In this study, we explore a computer vision approach to TSC that modulates on-road traffic flows through visual observation.
We introduce TrafficDojo, a holistic traffic simulation framework for developing and benchmarking vision-based TSC.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
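
Any vision-based TSC pipeline like the one benchmarked above ultimately has to turn raw detections into the per-approach quantities a controller consumes. The aggregation sketch below is generic and assumes a simple quadrant camera layout; it is not TrafficDojo's actual interface.

```python
from collections import Counter

# Generic sketch (not TrafficDojo's API): aggregate per-frame vehicle detections
# into per-approach counts that a signal-control policy could consume.
# Each detection: (x_center, y_center, confidence) in image coordinates.
DETECTIONS = [(120, 40, 0.91), (130, 55, 0.88), (620, 45, 0.40), (330, 600, 0.95)]
CONF_THRESHOLD = 0.5

def approach_of(x, y, width=640, height=640):
    """Assumed camera layout: split the frame into four quadrant 'approaches'."""
    if y < height / 2:
        return "north" if x < width / 2 else "east"
    return "west" if x < width / 2 else "south"

def counts_from_detections(detections):
    counts = Counter()
    for x, y, conf in detections:
        if conf >= CONF_THRESHOLD:          # drop low-confidence boxes
            counts[approach_of(x, y)] += 1
    return counts

print(counts_from_detections(DETECTIONS))   # e.g. Counter({'north': 2, 'south': 1})
```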
- A Fully Data-Driven Approach for Realistic Traffic Signal Control Using Offline Reinforcement Learning [18.2541182874636]
We propose a fully Data-Driven and simulator-free framework for realistic Traffic Signal Control (D2TSC).
We combine well-established traffic flow theory with machine learning to infer the reward signals from coarse-grained traffic data.
Our approach achieves superior performance over conventional and offline RL baselines, and also enjoys much better real-world applicability.
arXiv Detail & Related papers (2023-11-27T15:29:21Z)
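
The key ingredient of the D2TSC entry above is inferring a reward from coarse-grained traffic data instead of a simulator. A heavily simplified version of that idea is sketched below: cumulative arrival/departure counts give an approximate queue, and the accumulated delay (negated) serves as the reward. The estimator and interval length are assumptions for illustration, not the paper's formulation.

```python
# Illustrative reward inference (not D2TSC's actual estimator): use per-interval
# arrival/departure counts to approximate total delay on an approach, then take
# the negative delay as the reward for the signal-timing decision in force.
INTERVAL_S = 30  # assumed aggregation interval of the coarse-grained data, in seconds

def reward_from_counts(arrivals, departures, interval_s=INTERVAL_S):
    """arrivals/departures: vehicles counted entering/leaving the approach per interval."""
    queue, total_delay_veh_s = 0, 0.0
    for arr, dep in zip(arrivals, departures):
        queue = max(0, queue + arr - dep)        # conservation of vehicles on the approach
        total_delay_veh_s += queue * interval_s  # every queued vehicle waits one interval
    return -total_delay_veh_s                    # less accumulated delay -> higher reward

# Coarse counts for four 30 s intervals under some candidate timing plan.
print(reward_from_counts(arrivals=[6, 8, 5, 4], departures=[4, 6, 7, 6]))   # -240.0
```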
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
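
The contrast DenseLight draws between delayed travel-time feedback and dense per-decision feedback can be illustrated with a common RL-TSC surrogate: rewarding the change in total waiting time after every decision. This surrogate is an assumption for illustration, not necessarily DenseLight's exact unbiased reward.

```python
# Illustrative dense-vs-sparse reward comparison (a common RL-TSC surrogate,
# not necessarily DenseLight's exact formulation).
def dense_reward(prev_total_wait, curr_total_wait):
    """Per-step feedback: positive when accumulated waiting time went down."""
    return prev_total_wait - curr_total_wait

def sparse_reward(finished_trip_travel_times):
    """Episodic feedback: only available once vehicles complete their trips."""
    if not finished_trip_travel_times:
        return 0.0
    return -sum(finished_trip_travel_times) / len(finished_trip_travel_times)

# Network-wide waiting-time totals (seconds) measured after consecutive decisions.
waits = [120.0, 95.0, 110.0, 70.0]
print([dense_reward(a, b) for a, b in zip(waits, waits[1:])])  # [25.0, -15.0, 40.0]
print(sparse_reward([310.0, 420.0, 275.0]))                    # one number, long after the fact
```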
- Reinforcement Learning Approaches for Traffic Signal Control under Missing Data [5.896742981602458]
In real-world urban scenarios, missing observations of traffic states may frequently occur due to a lack of sensors.
We propose two solutions: the first one imputes the traffic states to enable adaptive control, and the second one imputes both states and rewards to enable adaptive control and the training of RL agents.
arXiv Detail & Related papers (2023-04-21T03:26:33Z)
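
A toy version of the first solution in the entry above, imputing missing traffic states so that adaptive control can continue, is sketched below. The neighbour-mean fill rule is an assumption for illustration, not the paper's imputation model.

```python
# Toy state imputation (illustrative only): fill unobserved lane queues with the
# mean of lanes that do have working detectors, so the controller still receives
# a complete observation vector.
def impute_states(observed):
    """observed: list of per-lane queue counts, with None where the sensor is missing."""
    known = [v for v in observed if v is not None]
    fill = sum(known) / len(known) if known else 0.0
    return [v if v is not None else fill for v in observed]

raw = [5, None, 2, None]
print(impute_states(raw))   # [5, 3.5, 2, 3.5]
```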
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has recently been applied to traffic signal control and has demonstrated promising performance, with each traffic signal regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process: better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z)
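
The federated RL approach summarised above shares model updates rather than raw sensor data between vehicles. A minimal federated-averaging step over per-vehicle policy parameters is sketched below; the flat parameter layout and uniform weighting are illustrative assumptions.

```python
import numpy as np

# Minimal FedAvg-style aggregation sketch (illustrative): each vehicle trains a
# local copy of the policy, and only parameter vectors are averaged centrally.
def federated_average(local_params, weights=None):
    """local_params: list of 1-D parameter arrays, one per vehicle."""
    stacked = np.stack(local_params)
    if weights is None:
        weights = np.ones(len(local_params)) / len(local_params)
    return np.average(stacked, axis=0, weights=weights)

# Three vehicles' locally updated parameters after one training round.
vehicle_params = [np.array([0.2, -1.0, 0.5]),
                  np.array([0.4, -0.8, 0.7]),
                  np.array([0.3, -1.2, 0.6])]
global_params = federated_average(vehicle_params)
print(global_params)   # averaged model broadcast back to all vehicles
```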