Generalized Phase Pressure Control Enhanced Reinforcement Learning for Traffic Signal Control
- URL: http://arxiv.org/abs/2503.20205v1
- Date: Wed, 26 Mar 2025 04:03:12 GMT
- Title: Generalized Phase Pressure Control Enhanced Reinforcement Learning for Traffic Signal Control
- Authors: Xiao-Cheng Liao, Yi Mei, Mengjie Zhang, Xiang-Ling Chen
- Abstract summary: We develop a flexible, efficient, and theoretically grounded method for learning traffic signal control policies. We extend the pressure control theory to a general form for multi-homogeneous-lane road networks based on queueing theory. We develop a reinforcement learning (RL)-based algorithm template named G2P-XLight, and two RL algorithms, G2P-MPLight and G2P-CoLight, by combining the generalized phase state representation with MPLight and CoLight.
- Score: 2.704899832646868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Appropriate traffic state representation is crucial for learning traffic signal control policies. However, most current traffic state representations are heuristically designed, with insufficient theoretical support. In this paper, we (1) develop a flexible, efficient, and theoretically grounded method, namely generalized phase pressure (G2P) control, which takes only simple lane features into consideration to decide which phase to actuate; (2) extend the pressure control theory to a general form for multi-homogeneous-lane road networks based on queueing theory; (3) design a new traffic state representation based on the generalized phase state features from G2P control; and (4) develop a reinforcement learning (RL)-based algorithm template named G2P-XLight, and two RL algorithms, G2P-MPLight and G2P-CoLight, by combining the generalized phase state representation with MPLight and CoLight, two well-performing RL methods for learning traffic signal control policies. Extensive experiments conducted on multiple real-world datasets demonstrate that G2P control outperforms the state-of-the-art (SOTA) heuristic method in the transportation field and other recent human-designed heuristic methods, and that the newly proposed G2P-XLight significantly outperforms SOTA learning-based approaches. Our code is available online.
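To make the pressure-control idea concrete, the sketch below implements classic max-pressure phase selection, the baseline that G2P control generalizes: a phase's pressure is the sum, over the movements it permits, of the upstream queue minus the downstream queue, and the phase with maximum pressure is actuated. The lane names, queue counts, and phase definitions are illustrative assumptions, not details from the paper.

```python
def phase_pressure(phase_movements, queue_in, queue_out):
    """Pressure of a phase: sum over its (in_lane, out_lane) movements of
    (queue on the incoming lane - queue on the downstream lane)."""
    return sum(queue_in[i] - queue_out[o] for i, o in phase_movements)

def choose_phase(phases, queue_in, queue_out):
    """Actuate the phase whose movements have the maximum total pressure."""
    return max(phases, key=lambda p: phase_pressure(phases[p], queue_in, queue_out))

# Toy four-arm intersection: two phases, each permitting two through movements.
phases = {
    "NS_through": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_through": [("E_in", "W_out"), ("W_in", "E_out")],
}
queue_in = {"N_in": 8, "S_in": 5, "E_in": 2, "W_in": 1}
queue_out = {"N_out": 0, "S_out": 1, "E_out": 3, "W_out": 2}

print(choose_phase(phases, queue_in, queue_out))  # NS_through (pressure 12 vs -2)
```

G2P control, per the abstract, replaces this hand-designed pressure with a generalized phase state derived from queueing theory for multi-homogeneous-lane networks; the selection loop stays structurally the same.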
Related papers
- Learning Traffic Signal Control via Genetic Programming [2.954908748487635]
We propose a new learning-based method for signal control in complex intersections. In our approach, we design a concept of phase urgency for each signal phase. The urgency function can calculate the phase urgency for a specific phase based on the current road conditions.
arXiv Detail & Related papers (2024-03-26T02:22:08Z) - A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion that leads to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z) - Federated Reinforcement Learning for Resource Allocation in V2X Networks [46.6256432514037]
Resource allocation significantly impacts the performance of vehicle-to-everything (V2X) networks.
Most existing algorithms for resource allocation are based on optimization or machine learning.
In this paper, we explore resource allocation in a V2X network under the framework of federated reinforcement learning.
arXiv Detail & Related papers (2023-10-15T15:26:54Z) - Reinforcement Learning Approaches for Traffic Signal Control under Missing Data [5.896742981602458]
In real-world urban scenarios, missing observation of traffic states may frequently occur due to the lack of sensors.
We propose two solutions: the first one imputes the traffic states to enable adaptive control, and the second one imputes both states and rewards to enable adaptive control and the training of RL agents.
arXiv Detail & Related papers (2023-04-21T03:26:33Z) - Demonstration-guided Deep Reinforcement Learning for Coordinated Ramp Metering and Perimeter Control in Large Scale Networks [12.296779112932741]
This study considers two representative control approaches: ramp metering for freeways and perimeter control for homogeneous urban roads.
We propose a novel meso-macro dynamic network model and, for the first time, develop a demonstration-guided DRL method.
The research outcome reveals the great potential of combining traditional controllers with DRL for coordinated control in large-scale networks.
arXiv Detail & Related papers (2023-03-04T11:49:49Z) - Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
arXiv Detail & Related papers (2022-04-05T17:25:22Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, bench-marking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z) - Expression is enough: Improving traffic signal control with advanced traffic state representation [24.917612761503996]
We present a novel, flexible, and straightforward method, advanced max pressure (Advanced-MP).
We also develop an RL-based algorithm template, Advanced-XLight, by combining ATS with current RL approaches, and generate two RL algorithms, "Advanced-MPLight" and "Advanced-CoLight".
Comprehensive experiments on multiple real-world datasets show that: (1) Advanced-MP outperforms baseline methods and is efficient and reliable for deployment; (2) Advanced-MPLight and Advanced-CoLight achieve new state-of-the-art performance.
arXiv Detail & Related papers (2021-12-19T10:28:39Z) - Efficient Pressure: Improving efficiency for signalized intersections [24.917612761503996]
Reinforcement learning (RL) has attracted increasing attention for solving the traffic signal control (TSC) problem.
Existing RL-based methods are rarely deployed, considering that they are neither cost-effective in terms of computing resources nor more robust than traditional approaches.
We demonstrate how to construct an adaptive controller for TSC with less training and reduced complexity based on an RL approach.
arXiv Detail & Related papers (2021-12-04T13:49:58Z) - MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z) - Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T17:35:32Z)
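The RAD entry above lists random translate and crop among its augmentations; the sketch below shows one common way such a crop is realized on image observations (pad with zeros, then take a random window of the original size). The array shape and pad-then-crop scheme are illustrative assumptions, not RAD's exact implementation.

```python
import numpy as np

def random_crop(obs, pad=4, rng=None):
    """Zero-pad an (H, W, C) observation by `pad` pixels on each side,
    then crop a random H x W window, i.e. a random translate of up to
    `pad` pixels in each direction."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w, c = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)))  # zeros outside
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w, :]

obs = np.arange(84 * 84 * 3, dtype=np.float32).reshape(84, 84, 3)
aug = random_crop(obs, rng=np.random.default_rng(0))
print(aug.shape)  # same shape as the input observation
```

Because the augmented observation keeps the original shape, it can be dropped into an existing RL pipeline without changing the network, which is the "plug-and-play" property the RAD summary emphasizes.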
This list is automatically generated from the titles and abstracts of the papers in this site.