Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic
Signal Control Optimization
- URL: http://arxiv.org/abs/2201.00006v3
- Date: Mon, 25 Sep 2023 07:50:54 GMT
- Title: Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic
Signal Control Optimization
- Authors: Liang Zhang, Shubin Xie, Jianming Deng
- Abstract summary: We present a novel approach to traffic signal control (TSC) that utilizes queue length as an efficient state representation.
Comprehensive experiments on multiple real-world datasets demonstrate the effectiveness of our approach.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) techniques for traffic signal control (TSC) have
gained increasing popularity in recent years. However, most existing RL-based
TSC methods tend to focus primarily on the RL model structure while neglecting
the significance of proper traffic state representation. Furthermore, some
RL-based methods heavily rely on expert-designed traffic signal phase
competition. In this paper, we present a novel approach to TSC that utilizes
queue length as an efficient state representation. We propose two new methods:
(1) Max Queue-Length (M-QL), an optimization-based traditional method designed
based on the property of queue length; and (2) AttentionLight, an RL model that
employs the self-attention mechanism to capture the signal phase correlation
without requiring human knowledge of phase relationships. Comprehensive
experiments on multiple real-world datasets demonstrate the effectiveness of
our approach: (1) the M-QL method outperforms the latest RL-based methods; (2)
AttentionLight achieves a new state-of-the-art performance; and (3) our results
highlight the significance of proper state representation, which is as crucial
as neural network design in TSC methods. Our findings have important
implications for advancing the development of more effective and efficient TSC
methods. Our code is released on Github (https://github.com/LiangZhang1996/AttentionLight).
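The M-QL idea above, activating the phase whose served lanes hold the most queued vehicles, can be sketched as follows. This is a minimal illustration rather than the paper's exact formulation; the lane ids, phase ids, and data layout are assumptions for the example.

```python
# Hypothetical sketch of a Max Queue-Length (M-QL) controller: at each
# decision step, pick the phase whose served lanes have the largest
# total queue length.

def max_queue_length_phase(queue_lengths, phase_lanes):
    """Return the phase id serving the largest total queue.

    queue_lengths: dict mapping lane id -> number of queued vehicles
    phase_lanes:   dict mapping phase id -> list of lanes that phase serves
    """
    return max(
        phase_lanes,
        key=lambda phase: sum(queue_lengths[lane] for lane in phase_lanes[phase]),
    )

# Example: a two-phase intersection (north-south vs east-west movement).
queues = {"N": 5, "S": 3, "E": 9, "W": 2}
phases = {"NS": ["N", "S"], "EW": ["E", "W"]}
print(max_queue_length_phase(queues, phases))  # -> EW  (9 + 2 = 11 > 5 + 3)
```

Note that, unlike an RL policy, this rule needs no training: queue length alone determines the action, which is what makes it a useful optimization-based baseline.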
Related papers
- Boosting CNN-based Handwriting Recognition Systems with Learnable Relaxation Labeling [48.78361527873024]
We propose a novel approach to handwriting recognition that integrates the strengths of two distinct methodologies.
We introduce a sparsification technique that accelerates the convergence of the algorithm and enhances the overall system's performance.
arXiv Detail & Related papers (2024-09-09T15:12:28Z) - Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
arXiv Detail & Related papers (2024-05-28T07:11:05Z) - Learning Traffic Signal Control via Genetic Programming [2.954908748487635]
We propose a new learning-based method for signal control in complex intersections.
In our approach, we design a concept of phase urgency for each signal phase.
The urgency function can calculate the phase urgency for a specific phase based on the current road conditions.
arXiv Detail & Related papers (2024-03-26T02:22:08Z) - Improving the generalizability and robustness of large-scale traffic
signal control [3.8028221877086814]
We study the robustness of deep reinforcement-learning (RL) approaches to control traffic signals.
We show that recent methods remain brittle in the face of missing data.
We propose using a combination of distributional and vanilla reinforcement learning through a policy ensemble.
arXiv Detail & Related papers (2023-06-02T21:30:44Z) - Graph Neural Network Autoencoders for Efficient Quantum Circuit
Optimisation [69.43216268165402]
We present for the first time how to use graph neural network (GNN) autoencoders for the optimisation of quantum circuits.
We construct directed acyclic graphs from the quantum circuits, encode the graphs and use the encodings to represent RL states.
Our method is a first realistic step towards very-large-scale RL quantum circuit optimisation.
arXiv Detail & Related papers (2023-03-06T16:51:30Z) - MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion
Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z) - INFOrmation Prioritization through EmPOWERment in Visual Model-Based RL [90.06845886194235]
We propose a modified objective for model-based reinforcement learning (RL).
We integrate a term inspired by variational empowerment into a state-space model based on mutual information.
We evaluate the approach on a suite of vision-based robot control tasks with natural video backgrounds.
arXiv Detail & Related papers (2022-04-18T23:09:23Z) - Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z) - Expression is enough: Improving traffic signal control with advanced
traffic state representation [24.917612761503996]
We present advanced max pressure (Advanced-MP), a novel, flexible, and straightforward method.
We also develop an RL-based algorithm template, Advanced-XLight, by combining ATS with current RL approaches, generating two RL algorithms, "Advanced-MPLight" and "Advanced-CoLight".
Comprehensive experiments on multiple real-world datasets show that: (1) Advanced-MP outperforms baseline methods and is efficient and reliable for deployment; (2) Advanced-MPLight and Advanced-CoLight achieve new state-of-the-art performance.
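For context, classic max pressure control, on which Advanced-MP builds, activates the phase with the largest total "pressure", i.e. the sum over that phase's movements of incoming queue minus outgoing queue. A minimal sketch, with illustrative lane names and without the paper's refinements:

```python
# Hedged sketch of classic max pressure signal control. The movement
# pairs and lane ids below are illustrative assumptions, not the exact
# definition used by Advanced-MP.

def max_pressure_phase(queues, phase_movements):
    """Return the phase id with the largest total pressure.

    queues:          dict mapping lane id -> queue length
    phase_movements: dict mapping phase id -> list of
                     (incoming_lane, outgoing_lane) pairs served by the phase
    """
    def pressure(phase):
        return sum(queues[i] - queues[o] for i, o in phase_movements[phase])
    return max(phase_movements, key=pressure)

queues = {"N_in": 6, "S_in": 2, "E_in": 4, "W_in": 4,
          "N_out": 1, "S_out": 1, "E_out": 3, "W_out": 3}
movements = {
    "NS": [("N_in", "S_out"), ("S_in", "N_out")],  # pressure (6-1)+(2-1) = 6
    "EW": [("E_in", "W_out"), ("W_in", "E_out")],  # pressure (4-3)+(4-3) = 2
}
print(max_pressure_phase(queues, movements))  # -> NS
```

Subtracting the downstream queue is what distinguishes pressure from the queue-length criterion of M-QL: a phase is deprioritized when its receiving lanes are already congested.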
arXiv Detail & Related papers (2021-12-19T10:28:39Z) - Efficient Pressure: Improving efficiency for signalized intersections [24.917612761503996]
Reinforcement learning (RL) has attracted more attention to help solve the traffic signal control (TSC) problem.
Existing RL-based methods are rarely deployed considering that they are neither cost-effective in terms of computing resources nor more robust than traditional approaches.
We demonstrate how to construct an adaptive controller for TSC with less training and reduced complexity using an RL-based approach.
arXiv Detail & Related papers (2021-12-04T13:49:58Z) - POAR: Efficient Policy Optimization via Online Abstract State
Representation Learning [6.171331561029968]
State Representation Learning (SRL) is proposed to specifically learn to encode task-relevant features from complex sensory data into low-dimensional states.
We introduce a new SRL prior called domain resemblance to leverage expert demonstration to improve SRL interpretations.
We empirically verify POAR to efficiently handle tasks in high dimensions and facilitate training real-life robots directly from scratch.
arXiv Detail & Related papers (2021-09-17T16:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.