Area-wide traffic signal control based on a deep graph Q-Network (DGQN)
trained in an asynchronous manner
- URL: http://arxiv.org/abs/2008.01950v1
- Date: Wed, 5 Aug 2020 06:13:58 GMT
- Authors: Gyeongjun Kim and Keemin Sohn
- Abstract summary: Reinforcement learning (RL) algorithms have been widely applied in traffic signal studies.
There are, however, several problems in jointly controlling traffic lights for a large transportation network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) algorithms have been widely applied in traffic
signal studies. There are, however, several problems in jointly controlling
traffic lights for a large transportation network. First, the action space
exponentially explodes as the number of intersections to be jointly controlled
increases. Although a multi-agent RL algorithm has been used to address the curse
of dimensionality, it neither guaranteed a global optimum nor could it break
ties between joint actions. The problem was circumvented by revising the
output structure of a deep Q-network (DQN) within the framework of a
single-agent RL algorithm. Second, when mapping traffic states into an action
value, it is difficult to consider spatio-temporal correlations over a large
transportation network. A deep graph Q-network (DGQN) was devised to
efficiently accommodate spatio-temporal dependencies on a large scale. Finally,
training an RL model to jointly control traffic lights in a large transportation
network requires much time to converge. An asynchronous update methodology was
devised for a DGQN to quickly reach an optimal policy. Using these three
remedies, a DGQN succeeded in jointly controlling the traffic lights in a large
transportation network in Seoul. This approach outperformed other
state-of-the-art RL algorithms as well as an actual fixed-signal operation.
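The two architectural ideas in the abstract can be illustrated together: a graph-style layer mixes each intersection's traffic state with its neighbors' (spatial dependencies), and the revised output structure emits one small Q-vector per intersection instead of one value per joint action, so the output size grows as N*K rather than K^N. The sketch below is purely illustrative, not the authors' DGQN: the network sizes, the ring-shaped adjacency, and the random weights are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INTERSECTIONS = 9   # intersections jointly controlled (hypothetical)
N_PHASES = 4          # candidate signal phases per intersection (hypothetical)
N_FEATURES = 8        # traffic-state features per intersection (hypothetical)
HIDDEN = 16

# --- Spatial aggregation (graph-convolution style) ---
# A: adjacency of the road network; a simple ring here, purely illustrative.
A = np.eye(N_INTERSECTIONS)
for i in range(N_INTERSECTIONS):
    A[i, (i + 1) % N_INTERSECTIONS] = 1.0
    A[i, (i - 1) % N_INTERSECTIONS] = 1.0
A_norm = A / A.sum(axis=1, keepdims=True)   # row-normalized propagation

X = rng.standard_normal((N_INTERSECTIONS, N_FEATURES))  # per-intersection state
W1 = rng.standard_normal((N_FEATURES, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, N_PHASES)) * 0.1

H = np.maximum(A_norm @ X @ W1, 0.0)   # neighbor states mixed in, then ReLU
Q = A_norm @ H @ W2                    # per-intersection Q-values, shape (N, K)

# --- Factorized output structure ---
# A naive joint-action DQN would need K**N outputs; a per-intersection
# head needs only N*K.
joint_space = N_PHASES ** N_INTERSECTIONS
factorized_outputs = N_INTERSECTIONS * N_PHASES

# Joint action: each intersection takes the argmax over its own K values.
joint_action = Q.argmax(axis=1)

print(joint_space, factorized_outputs)   # 262144 36
print(joint_action.shape)                # (9,)
```

In a trained model the weights would come from Q-learning updates; the point of the sketch is only the shapes: the action representation stays linear in the number of intersections while still producing one joint signal plan per step.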
Related papers
- Improving Traffic Flow Predictions with SGCN-LSTM: A Hybrid Model for Spatial and Temporal Dependencies [55.2480439325792]
This paper introduces the Signal-Enhanced Graph Convolutional Network Long Short Term Memory (SGCN-LSTM) model for predicting traffic speeds across road networks.
Experiments on the PEMS-BAY road network traffic dataset demonstrate the SGCN-LSTM model's effectiveness.
arXiv Detail & Related papers (2024-11-01T00:37:00Z)
- Towards Multi-agent Reinforcement Learning based Traffic Signal Control through Spatio-temporal Hypergraphs [19.107744041461316]
Traffic signal control systems (TSCSs) are integral to intelligent traffic management, fostering efficient vehicle flow.
Traditional approaches often simplify road networks into standard graphs.
We propose a novel TSCS framework to realize intelligent traffic control.
arXiv Detail & Related papers (2024-04-17T02:46:18Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- A Novel Multi-Agent Deep RL Approach for Traffic Signal Control [13.927155702352131]
We propose a Friend-Deep Q-network (Friend-DQN) approach for multiple traffic signal control in urban networks.
In particular, the cooperation between multiple agents can reduce the state-action space and thus speed up the convergence.
arXiv Detail & Related papers (2023-06-05T08:20:37Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Large-Scale Traffic Signal Control by a Nash Deep Q-network Approach [7.23135508361981]
We introduce an off-policy Nash deep Q-network (OPNDQN) algorithm, which mitigates the weaknesses of both fully centralized and MARL approaches.
One of the main advantages of OPNDQN is that it mitigates the non-stationarity of the multi-agent Markov process.
We show the clear superiority of OPNDQN over several existing MARL approaches in terms of average queue length, episode training reward and average waiting time.
arXiv Detail & Related papers (2023-01-02T12:58:51Z)
- Teal: Learning-Accelerated Optimization of WAN Traffic Engineering [68.7863363109948]
We present Teal, a learning-based TE algorithm that leverages the parallel processing power of GPUs to accelerate TE control.
To reduce the problem scale and make learning tractable, Teal employs a multi-agent reinforcement learning (RL) algorithm to independently allocate each traffic demand.
Compared with other TE acceleration schemes, Teal satisfies 6--32% more traffic demand and yields 197--625x speedups.
arXiv Detail & Related papers (2022-10-25T04:46:30Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- Independent Reinforcement Learning for Weakly Cooperative Multiagent Traffic Control Problem [22.733542222812158]
We use independent reinforcement learning (IRL) to solve a complex traffic cooperative control problem in this study.
To this end, we model the traffic control problem as a partially observable weak cooperative traffic model (PO-WCTM) to optimize the overall traffic situation of a group of intersections.
Experimental results show that CIL-DDQN outperforms other methods in almost all performance indicators of the traffic control problem.
arXiv Detail & Related papers (2021-04-22T07:55:46Z)
- Constructing Geographic and Long-term Temporal Graph for Traffic Forecasting [88.5550074808201]
We propose Geographic and Long term Temporal Graph Convolutional Recurrent Neural Network (GLT-GCRNN) for traffic forecasting.
In this work, we propose a novel framework for traffic forecasting that learns the rich interactions between roads sharing similar geographic or longterm temporal patterns.
arXiv Detail & Related papers (2020-04-23T03:50:46Z)
- IG-RL: Inductive Graph Reinforcement Learning for Massive-Scale Traffic Signal Control [4.273991039651846]
Scaling adaptive traffic-signal control involves dealing with very large state and action spaces.
We introduce Inductive Graph Reinforcement Learning (IG-RL) based on graph-convolutional networks.
Our model can generalize to new road networks, traffic distributions, and traffic regimes.
arXiv Detail & Related papers (2020-03-06T17:17:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences.