Batch-Augmented Multi-Agent Reinforcement Learning for Efficient Traffic
Signal Optimization
- URL: http://arxiv.org/abs/2005.09624v1
- Date: Tue, 19 May 2020 17:53:05 GMT
- Title: Batch-Augmented Multi-Agent Reinforcement Learning for Efficient Traffic
Signal Optimization
- Authors: Yueh-Hua Wu, I-Hau Yeh, David Hu, Hong-Yuan Mark Liao
- Abstract summary: Our experiments show that the proposed framework reduces traffic congestion by 36% in terms of waiting time compared with the currently used fixed-time traffic signal plan.
- Score: 9.456254189014127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of this work is to provide a viable solution based on reinforcement
learning for traffic signal control problems. Although the state-of-the-art
reinforcement learning approaches have yielded great success in a variety of
domains, directly applying them to alleviate traffic congestion can be
challenging, given the requirement for high sample efficiency and the way
training data is gathered. In this work, we address several challenges that we
encountered when we attempted to mitigate serious traffic congestion occurring
in a metropolitan area. Specifically, we are required to provide a solution
that is able to (1) handle traffic signal control when certain surveillance
cameras that retrieve information for reinforcement learning are down, (2)
learn from batch data without a traffic simulator, and (3) make control
decisions without shared information across intersections. We present a
two-stage framework to deal with the above-mentioned situations. The framework
can be decomposed into an Evolution Strategies approach that gives a fixed-time
traffic signal control schedule and a multi-agent off-policy reinforcement
learning method that is capable of learning from batch data with the aid of
three proposed components: bounded action, batch augmentation, and surrogate reward
clipping. Our experiments show that the proposed framework reduces traffic
congestion by 36% in terms of waiting time compared with the currently used
fixed-time traffic signal plan. Furthermore, the framework requires only 600
queries to a simulator to achieve the result.
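The abstract names three components (bounded action, batch augmentation, and surrogate reward clipping) without spelling out their implementation, so the sketch below only illustrates how such pieces could slot into an off-policy batch update for a single intersection agent. The linear Q-function, the Gaussian-noise augmentation, the discrete offsets around the fixed-time plan, and the clipping range are assumptions made for illustration, not the authors' actual design.

```python
# Illustrative sketch only -- NOT the paper's implementation.
# Assumptions (not taken from the paper): discrete phase-duration offsets as
# actions, a linear Q-function, Gaussian noise for batch augmentation, and a
# fixed clipping range for the surrogate reward.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8          # per-intersection observation size (assumed)
N_ACTIONS = 5           # e.g. offsets {-10, -5, 0, +5, +10} s around the fixed-time plan
ACTION_BOUND = 10.0     # "bounded action": stay close to the fixed-time schedule
REWARD_CLIP = 1.0       # "surrogate reward clipping": limit the reward magnitude
GAMMA = 0.95
LR = 1e-2

W = np.zeros((N_ACTIONS, N_FEATURES))  # linear Q-function, one row per action


def augment(batch, noise_std=0.05, copies=4):
    """Batch augmentation (assumed form): jitter observations with small noise."""
    out = list(batch)
    for s, a, r, s2 in batch:
        for _ in range(copies):
            out.append((s + rng.normal(0, noise_std, s.shape), a, r,
                        s2 + rng.normal(0, noise_std, s2.shape)))
    return out


def q_update(batch):
    """One fitted-Q style pass over an augmented, reward-clipped batch."""
    for s, a, r, s2 in augment(batch):
        r = float(np.clip(r, -REWARD_CLIP, REWARD_CLIP))   # surrogate reward clipping
        target = r + GAMMA * np.max(W @ s2)                # off-policy bootstrap
        td_error = target - W[a] @ s
        W[a] += LR * td_error * s                          # item update of the global weights


def act(obs, offsets=(-10.0, -5.0, 0.0, 5.0, 10.0)):
    """Greedy action, mapped to a bounded offset around the fixed-time plan."""
    a = int(np.argmax(W @ obs))
    return float(np.clip(offsets[a], -ACTION_BOUND, ACTION_BOUND))


# Tiny synthetic batch: (obs, action index, reward, next obs) for one intersection.
batch = [(rng.normal(size=N_FEATURES), int(rng.integers(N_ACTIONS)),
          float(rng.normal()), rng.normal(size=N_FEATURES)) for _ in range(32)]
q_update(batch)
print(act(rng.normal(size=N_FEATURES)))
```

Presumably, "bounded action" in the paper constrains how far the learned controller may deviate from the first-stage Evolution Strategies schedule, and the batch would come from logged intersection data rather than synthetic samples; the clip above is just one way to express such a constraint.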
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334]
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
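For readers unfamiliar with the turn-based idea mentioned above, the toy function below shows the kind of decision such an agent has to make: picking which approach to serve from real-time queue lengths. The cited paper trains an RL agent for this; the greedy rule and the minimum-green constraint here are simplifying assumptions, not the paper's method.

```python
# Toy illustration only: a greedy stand-in for the "turn-based" idea above,
# serving the approach with the longest queue subject to a minimum green time.
from typing import Sequence


def pick_phase(queue_lengths: Sequence[int], current_phase: int,
               elapsed_green_s: float, min_green_s: float = 10.0) -> int:
    """Return the phase (approach index) to serve next."""
    if elapsed_green_s < min_green_s:
        return current_phase            # respect the minimum green time
    return max(range(len(queue_lengths)), key=lambda i: queue_lengths[i])


print(pick_phase([4, 12, 7, 3], current_phase=0, elapsed_green_s=15.0))  # -> 1
```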
arXiv Detail & Related papers (2024-08-28T12:35:56Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and lower CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo for vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Deep Reinforcement Learning for Autonomous Vehicle Intersection Navigation [0.24578723416255746]
Reinforcement learning algorithms have emerged as a promising approach to the challenges of autonomous intersection navigation.
Here, we address the problem of efficiently and safely navigating T-intersections using a lower-cost, single-agent approach.
Our results reveal that the proposed approach enables the AV to effectively navigate T-intersections, outperforming previous methods in terms of travel delays, collision minimization, and overall cost.
arXiv Detail & Related papers (2023-09-30T10:54:02Z)
- Random Ensemble Reinforcement Learning for Traffic Signal Control [5.191217870404512]
An efficient traffic signal control strategy can reduce traffic congestion, improve urban road traffic efficiency, and make people's daily lives easier.
Existing reinforcement learning approaches for traffic signal control mainly focus on learning through a separate neural network.
arXiv Detail & Related papers (2022-03-10T08:45:47Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Reinforcement Learning for Mixed Autonomy Intersections [4.771833920251869]
We propose a model-free reinforcement learning method for controlling mixed autonomy traffic in simulated traffic networks.
Our method utilizes multi-agent policy decomposition, which allows decentralized control based on local observations for an arbitrary number of controlled vehicles.
arXiv Detail & Related papers (2021-11-08T18:03:18Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from its coarse-grained counterpart is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
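As a purely illustrative sketch (not the cited paper's network), the snippet below shows a minimal policy head that maps a local observation to a bounded (acceleration, steering angle) pair, the kind of continuous action this summary refers to; the layer sizes and action limits are assumed.

```python
# Minimal sketch (assumed sizes and bounds): observation -> (acceleration, steering).
import numpy as np

rng = np.random.default_rng(1)
OBS_DIM, HID = 16, 32
W1, b1 = rng.normal(0, 0.1, (HID, OBS_DIM)), np.zeros(HID)
W2, b2 = rng.normal(0, 0.1, (2, HID)), np.zeros(2)
MAX_ACCEL, MAX_STEER = 3.0, 0.5          # m/s^2 and radians (assumed limits)


def policy(obs: np.ndarray) -> tuple[float, float]:
    """Forward pass: observation -> bounded (acceleration, steering angle)."""
    h = np.tanh(W1 @ obs + b1)
    accel, steer = np.tanh(W2 @ h + b2)   # squash both outputs to [-1, 1]
    return float(accel) * MAX_ACCEL, float(steer) * MAX_STEER


print(policy(rng.normal(size=OBS_DIM)))
```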
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Multi-intersection Traffic Optimisation: A Benchmark Dataset and a Strong Baseline [85.9210953301628]
Control of traffic signals is fundamental and critical to alleviate traffic congestion in urban areas.
Because of the high complexity of modelling the problem, experimental settings of current works are often inconsistent.
We propose a novel and strong baseline model based on deep reinforcement learning with the encoder-decoder structure.
arXiv Detail & Related papers (2021-01-24T03:55:39Z)
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- IG-RL: Inductive Graph Reinforcement Learning for Massive-Scale Traffic Signal Control [4.273991039651846]
Scaling adaptive traffic-signal control involves dealing with combinatorial state and action spaces.
We introduce Inductive Graph Reinforcement Learning (IG-RL) based on graph-convolutional networks.
Our model can generalize to new road networks, traffic distributions, and traffic regimes.
arXiv Detail & Related papers (2020-03-06T17:17:59Z)