Efficient Pressure: Improving efficiency for signalized intersections
- URL: http://arxiv.org/abs/2112.02336v1
- Date: Sat, 4 Dec 2021 13:49:58 GMT
- Title: Efficient Pressure: Improving efficiency for signalized intersections
- Authors: Qiang Wu, Liang Zhang, Jun Shen, Linyuan Lü, Bo Du, Jianqing Wu
- Abstract summary: Reinforcement learning (RL) has attracted increasing attention as a way to solve the traffic signal control (TSC) problem.
Existing RL-based methods are rarely deployed because they are neither cost-effective in terms of computing resources nor more robust than traditional approaches.
We demonstrate how to construct an RL-based adaptive controller for TSC with less training and reduced complexity.
- Score: 24.917612761503996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since conventional approaches cannot adapt to dynamic traffic conditions,
reinforcement learning (RL) has attracted increasing attention as a way to solve the
traffic signal control (TSC) problem. However, existing RL-based methods are rarely
deployed because they are neither cost-effective in terms of computing resources nor
more robust than traditional approaches, which raises a critical research question:
how can an RL-based adaptive controller for TSC be constructed with less training and
reduced complexity? To address this question, in this paper we (1) specify the traffic
movement representation as a simple but efficient pressure of vehicle queues in a
traffic network, namely efficient pressure (EP); (2) build a traffic signal settings
protocol, including phase duration, signal phase number, and EP for TSC; (3) design a
TSC approach based on the traditional max pressure (MP) approach, namely efficient max
pressure (Efficient-MP), which uses the EP to capture the traffic state; and (4)
develop a general RL-based TSC algorithm template, efficient Xlight (Efficient-XLight),
under EP. Through comprehensive experiments on multiple real-world datasets under our
traffic signal settings protocol for TSC, we demonstrate that efficient pressure is
complementary to both traditional and RL-based modeling for designing better TSC
methods. Our code is released on GitHub.
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334] (2024-08-28)
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
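A minimal sketch of the two action abstractions, with greedy stand-ins where the paper would use learned policies; the function names and bounds are assumptions.

```python
def turn_based_action(queue_per_phase: dict) -> str:
    """Turn-based abstraction: grant right-of-way to the phase with
    the longest current queue (greedy stand-in for the RL policy)."""
    return max(queue_per_phase, key=queue_per_phase.get)

def time_based_action(inflow_veh_per_min: float,
                      base_s: float = 20.0, gain: float = 2.0) -> float:
    """Time-based abstraction: scale the green duration with observed
    demand, clamped to a plausible 5-90 second range."""
    return min(90.0, max(5.0, base_s + gain * inflow_veh_per_min))
```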
- Adaptive traffic signal safety and efficiency improvement by multi objective deep reinforcement learning approach [0.0] (2024-08-01)
This research introduces an innovative method for adaptive traffic signal control (ATSC) through the utilization of multi-objective deep reinforcement learning (DRL) techniques.
The proposed approach aims to enhance control strategies at intersections while simultaneously addressing safety, efficiency, and decarbonization objectives.
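One common way to encode the three objectives is a scalarized reward; the terms and weights below are illustrative assumptions, not the paper's exact formulation.

```python
def multi_objective_reward(delay_s: float, conflicts: int, co2_g: float,
                           w=(1.0, 10.0, 0.01)) -> float:
    """Weighted penalty over efficiency (vehicle delay), safety
    (surrogate conflict events), and decarbonization (CO2 mass)."""
    w_eff, w_safe, w_co2 = w
    return -(w_eff * delay_s + w_safe * conflicts + w_co2 * co2_g)
```

A true multi-objective agent may instead keep a vector-valued reward; the weighted sum is only the simplest reduction.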
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085] (2024-03-11)
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
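A hypothetical reduction from visual observations to a TSC state: count detected vehicles per approach. The detector output format and the lane-assignment function are assumptions; TrafficDojo's actual interfaces may differ.

```python
from collections import Counter

def lane_counts(detections, lane_of):
    """detections: iterable of (x, y) vehicle centers from a detector;
    lane_of: maps an image/ground position to a lane id (e.g., via a
    camera-to-ground homography). Returns vehicles per lane."""
    return Counter(lane_of(x, y) for x, y in detections)
```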
- CycLight: learning traffic signal cooperation with a cycle-level strategy [10.303270722832924] (2024-01-16)
This study introduces CycLight, a novel cycle-level deep reinforcement learning (RL) approach for network-level adaptive traffic signal control (NATSC) systems.
Unlike most traditional RL-based traffic controllers that focus on step-by-step decision making, CycLight adopts a cycle-level strategy, optimizing cycle length and splits simultaneously.
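The cycle-level action can be pictured as one vector per cycle that jointly fixes cycle length and splits; the decoding below (sigmoid for length, softmax for splits) is an illustrative assumption rather than CycLight's exact parameterization.

```python
import numpy as np

def decode_cycle_action(raw: np.ndarray,
                        min_cycle: float = 60.0,
                        max_cycle: float = 120.0):
    """raw[0] selects the cycle length; raw[1:] are split logits
    softmaxed into green fractions that sum to one."""
    frac = 1.0 / (1.0 + np.exp(-raw[0]))            # sigmoid -> (0, 1)
    cycle = min_cycle + (max_cycle - min_cycle) * frac
    logits = raw[1:] - raw[1:].max()                # numerically stable
    splits = np.exp(logits) / np.exp(logits).sum()  # softmax
    return cycle, cycle * splits                    # green time per phase
```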
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487] (2023-06-27)
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL learning framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) link and a UHF Wide Band (UHF) radio link.
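The bottleneck transition can be emulated as a timed switch between two link profiles; the numbers below are placeholders, not MARLIN's measured SATCOM/UHF parameters.

```python
SATCOM = {"bandwidth_kbps": 512,  "rtt_ms": 550}   # placeholder values
UHF    = {"bandwidth_kbps": 1500, "rtt_ms": 120}   # placeholder values

def link_profile(t_s: float, switch_at_s: float = 30.0) -> dict:
    """Active link profile at simulation time t_s: the bottleneck
    hands over from the SATCOM link to the UHF link at switch_at_s."""
    return SATCOM if t_s < switch_at_s else UHF
```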
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498] (2023-06-13)
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
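Dense feedback typically means a per-step penalty rather than a sparse end-of-trip travel time; the speed-based delay proxy below is a common choice and only an illustration, since DenseLight's unbiased reward has its own specific form.

```python
def dense_reward(speeds, free_flow_speed: float, dt: float = 1.0) -> float:
    """Per-step penalty: total time lost this step by all vehicles
    relative to free-flow travel (speeds in m/s, dt in seconds)."""
    lost = sum(max(0.0, 1.0 - v / free_flow_speed) for v in speeds)
    return -lost * dt
```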
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673] (2023-02-02)
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
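"Maximize both entropy and return" refers to the standard Soft Actor-Critic objective, an entropy-regularized expected return with temperature alpha:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
         \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```

The entropy bonus keeps the policy exploratory, which helps when shifting background traffic changes the environment under the agent.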
- Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP [62.81300791178381] (2022-11-03)
The bottleneck of distributed edge learning (DEL) over wireless networks has shifted from computing to communication.
Existing TCP-based data networking schemes for DEL are application-agnostic and fail to deliver adjustments according to application-layer requirements.
We develop a hybrid multipath TCP (MPTCP) scheme by combining model-based and deep reinforcement learning (DRL) based MPTCP for DEL.
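How the model-based and DRL parts are combined is not spelled out in the summary; one plausible shape, shown purely as an assumption, is a gate that defers to the model-based scheduler whenever the learned policy is not trusted.

```python
def hybrid_schedule(state, drl_policy, model_based, confidence,
                    tau: float = 0.8):
    """Hypothetical hybrid MPTCP scheduling gate: use the DRL agent's
    decision only when its confidence estimate clears tau, otherwise
    fall back to the model-based scheduler."""
    return drl_policy(state) if confidence(state) >= tau else model_based(state)
```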
- Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic Signal Control Optimization [3.0309252269809264] (2021-12-30)
We present a novel approach to traffic signal control (TSC) that utilizes queue length as an efficient state representation.
Comprehensive experiments on multiple real-world datasets demonstrate the effectiveness of our approach.
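A compact sketch of the title's two ingredients: the state is the vector of per-lane queue lengths, and a soft attention reweights lanes so congested ones dominate. The softmax weighting is an illustrative assumption, not the paper's attention mechanism.

```python
import numpy as np

def attended_queue_state(queue_lengths: np.ndarray) -> np.ndarray:
    """Reweight per-lane queue lengths by a softmax over the queues,
    emphasizing the most congested approaches."""
    logits = queue_lengths - queue_lengths.max()   # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()
    return weights * queue_lengths
```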
- Expression is enough: Improving traffic signal control with advanced traffic state representation [24.917612761503996] (2021-12-19)
We present advanced max pressure (Advanced-MP), a novel, flexible, and straightforward method for traffic signal control.
We also develop an RL-based algorithm template, Advanced-XLight, by combining the advanced traffic state (ATS) with current RL approaches, generating two RL algorithms: "Advanced-MPLight" and "Advanced-CoLight".
Comprehensive experiments on multiple real-world datasets show that (1) Advanced-MP outperforms baseline methods and is efficient and reliable for deployment, and (2) Advanced-MPLight and Advanced-CoLight achieve new state-of-the-art performance.
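Relative to the plain queue pressure sketched earlier, the advanced traffic state can be read as pressure plus demand from vehicles already moving toward the stop line; the combination below is an assumption for illustration, not the paper's definition.

```python
def advanced_movement_feature(queue_in: int, queue_out: int,
                              running_in_range: int) -> float:
    """Queue pressure (in minus out) plus effective demand from
    running vehicles close enough to use the coming green."""
    return (queue_in - queue_out) + running_in_range
```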
- Remote Multilinear Compressive Learning with Adaptive Compression [107.87219371697063] (2021-09-02)
Multilinear Compressive Learning (MCL) is an efficient signal acquisition and learning paradigm for multidimensional signals.
We propose a novel optimization scheme that enables adaptive compression for MCL models.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.