Deep Reinforcement Learning for Traffic Light Control in Intelligent
Transportation Systems
- URL: http://arxiv.org/abs/2302.03669v1
- Date: Sat, 4 Feb 2023 02:49:12 GMT
- Title: Deep Reinforcement Learning for Traffic Light Control in Intelligent
Transportation Systems
- Authors: Xiao-Yang Liu, Ming Zhu, Sem Borst, and Anwar Walid
- Abstract summary: Deep reinforcement learning (DRL) is a promising approach to adaptively control traffic lights based on the real-time traffic situation in a road network.
We use two DRL algorithms for the traffic light control problems in two scenarios.
The delivered policies both in a single road intersection and a grid road network demonstrate the scalability of DRL algorithms.
- Score: 19.318117948342362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smart traffic lights in intelligent transportation systems (ITSs) are
envisioned to greatly increase traffic efficiency and reduce congestion. Deep
reinforcement learning (DRL) is a promising approach to adaptively control
traffic lights based on the real-time traffic situation in a road network.
However, conventional methods may suffer from poor scalability. In this paper,
we investigate deep reinforcement learning to control traffic lights, and both
theoretical analysis and numerical experiments show that the intelligent
behavior "greenwave" (i.e., a vehicle sees a progressive cascade of green
lights and does not have to brake at any intersection) emerges naturally in a
grid road network, which is proved to be the optimal policy in an avenue with
multiple cross streets. As a first step, we use two DRL algorithms for the
traffic light control problems in two scenarios. In a single road intersection,
we verify that the deep Q-network (DQN) algorithm delivers a thresholding
policy; and in a grid road network, we adopt the deep deterministic policy
gradient (DDPG) algorithm. Secondly, numerical experiments show that the DQN
algorithm delivers the optimal control, and the DDPG algorithm with passive
observations has the capability to produce on its own a high-level intelligent
behavior in a grid road network, namely, the "greenwave" policy emerges. We
also verify the "greenwave" patterns in a $5 \times 10$ grid road network.
Thirdly, the "greenwave" patterns demonstrate that DRL algorithms produce
favorable solutions, since the "greenwave" policy shown in the experimental
results is proved to be optimal in a specified traffic model (an avenue with multiple
cross streets). The delivered policies both in a single road intersection and a
grid road network demonstrate the scalability of DRL algorithms.
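The thresholding behavior reported for the single-intersection case can be sketched as a simple rule: keep the current green phase until the queue on the red direction exceeds a threshold, then switch. This is a minimal illustration only; the function name, state encoding, and threshold value are assumptions, not taken from the paper.

```python
# Hedged sketch of the "thresholding" policy the DQN agent is reported to
# converge to at a single intersection. The threshold value (5) and the
# NS/EW phase encoding are illustrative assumptions.

def threshold_policy(queue_ns: int, queue_ew: int, current_green: str,
                     threshold: int = 5) -> str:
    """Keep the current green phase unless the queue on the red
    direction exceeds `threshold`; then switch phases."""
    waiting = queue_ew if current_green == "NS" else queue_ns
    if waiting > threshold:
        return "EW" if current_green == "NS" else "NS"
    return current_green

# Green is NS and 7 vehicles wait on EW, so the policy switches to EW.
assert threshold_policy(queue_ns=2, queue_ew=7, current_green="NS") == "EW"
```

Such a policy is attractive because it is interpretable: verifying that a trained DQN reduces to this rule is what lets the paper argue its control is optimal for the single-intersection model.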
Related papers
- A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce TrafficDojo, a holistic traffic simulation framework for vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Traffic Management of Autonomous Vehicles using Policy Based Deep
Reinforcement Learning and Intelligent Routing [0.26249027950824505]
We propose a DRL-based signal control system that adjusts traffic signals according to the current congestion at intersections.
To deal with congestion on the roads behind an intersection, we use a re-routing technique to load-balance vehicles across the road network.
arXiv Detail & Related papers (2022-06-28T02:46:20Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- A Deep Reinforcement Learning Approach for Traffic Signal Control
Optimization [14.455497228170646]
Inefficient traffic signal control methods may cause numerous problems, such as traffic congestion and waste of energy.
This paper first proposes a multi-agent deep deterministic policy gradient (MADDPG) method by extending the actor-critic policy gradient algorithms.
arXiv Detail & Related papers (2021-07-13T14:11:04Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions.
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
arXiv Detail & Related papers (2021-02-23T15:33:57Z)
- A Traffic Light Dynamic Control Algorithm with Deep Reinforcement
Learning Based on GNN Prediction [5.585321463602587]
GPlight is a deep reinforcement learning algorithm integrated with a graph neural network (GNN).
In GPlight, the GNN is first used to predict future short-term traffic flow at the intersections.
arXiv Detail & Related papers (2020-09-29T01:09:24Z)
- PDLight: A Deep Reinforcement Learning Traffic Light Control Algorithm
with Pressure and Dynamic Light Duration [5.585321463602587]
We propose PDlight, a deep reinforcement learning (DRL) traffic light control algorithm with a novel reward, PRCOL (Pressure with Remaining Capacity of Outgoing Lane).
As an improvement over the pressure used in traffic control algorithms, PRCOL considers not only the number of vehicles on the incoming lane but also the remaining capacity of the outgoing lane.
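The PRCOL idea can be sketched as a small reward function: plain "pressure" counts only the incoming queue, while PRCOL discounts it by how full the outgoing lane is. The function name, scaling, and capacity handling below are illustrative assumptions, not PDLight's exact formula.

```python
# Hedged sketch of a PRCOL-style reward: pressure on the incoming lane,
# discounted by the remaining capacity of the outgoing lane, so the
# controller avoids pushing vehicles into an already blocked lane.

def prcol(incoming_vehicles: int, outgoing_vehicles: int,
          outgoing_capacity: int) -> float:
    remaining = max(outgoing_capacity - outgoing_vehicles, 0)
    return incoming_vehicles * remaining / outgoing_capacity

# A full outgoing lane zeroes the reward even with a long incoming queue.
assert prcol(10, 30, 30) == 0.0
```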
arXiv Detail & Related papers (2020-09-29T01:07:49Z)
- IG-RL: Inductive Graph Reinforcement Learning for Massive-Scale Traffic
Signal Control [4.273991039651846]
Scaling adaptive traffic-signal control involves dealing with combinatorial state and action spaces.
We introduce Inductive Graph Reinforcement Learning (IG-RL) based on graph-convolutional networks.
Our model can generalize to new road networks, traffic distributions, and traffic regimes.
arXiv Detail & Related papers (2020-03-06T17:17:59Z)
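The inductive property behind IG-RL's generalization can be sketched with a single graph-convolution step: the same learned weight matrix applies to any adjacency matrix, so one trained model transfers to road networks of any size. Shapes, the mean-aggregation rule, and the ReLU are illustrative assumptions about a generic GCN layer, not IG-RL's exact architecture.

```python
import numpy as np

# Hedged sketch of one graph-convolution step: each intersection (node)
# averages its neighbors' features and applies shared weights W, so the
# layer is independent of the number of nodes in the road network.

def gcn_step(adj: np.ndarray, feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)    # neighborhood sizes
    return np.maximum((a_hat / deg) @ feats @ W, 0.0)  # mean-aggregate, ReLU

# The same W works unchanged for a 3- or 300-intersection network.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
feats = np.ones((3, 4))
W = np.ones((4, 2))
out = gcn_step(adj, feats, W)
assert out.shape == (3, 2)
```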
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.