Expression is enough: Improving traffic signal control with advanced
traffic state representation
- URL: http://arxiv.org/abs/2112.10107v1
- Date: Sun, 19 Dec 2021 10:28:39 GMT
- Title: Expression is enough: Improving traffic signal control with advanced
traffic state representation
- Authors: Liang Zhang, Qiang Wu, Jun Shen, Linyuan Lü, Jianqing Wu, Bo Du
- Abstract summary: We present advanced max pressure (Advanced-MP), a novel, flexible, and straightforward method.
We also develop Advanced-XLight, an RL-based algorithm template that combines ATS with current RL approaches, and use it to generate two RL algorithms, "Advanced-MPLight" and "Advanced-CoLight".
Comprehensive experiments on multiple real-world datasets show that: (1) Advanced-MP outperforms baseline methods and is efficient and reliable for deployment; (2) Advanced-MPLight and Advanced-CoLight achieve new state-of-the-art performance.
- Score: 24.917612761503996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, finding fundamental properties for traffic state representation has
become more critical than designing complex algorithms for traffic signal control (TSC). In this
paper, we (1) present advanced max pressure (Advanced-MP), a novel, flexible, and
straightforward method that takes both running and queueing vehicles into
consideration when deciding whether to change the current phase; (2) design a novel
traffic movement representation, the advanced traffic state (ATS), built from the
efficient pressure and effective running vehicles of Advanced-MP; (3)
develop an RL-based algorithm template, Advanced-XLight, by combining ATS with
current RL approaches, generating two RL algorithms, "Advanced-MPLight" and
"Advanced-CoLight". Comprehensive experiments on multiple real-world datasets
show that: (1) Advanced-MP outperforms baseline methods and is efficient
and reliable for deployment; (2) Advanced-MPLight and Advanced-CoLight
achieve new state-of-the-art performance. Our code is released on GitHub.
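The abstract describes Advanced-MP only at a high level. As a rough, hypothetical illustration (not the authors' implementation; all names, weights, and the switch condition are assumptions), a pressure-style controller that weighs both queueing and running vehicles before deciding whether to change the current phase might look like:

```python
# Hypothetical sketch of a pressure-style phase chooser that, like the
# Advanced-MP idea, considers both queueing and running vehicles.
# Weights, data layout, and the switch margin are illustrative assumptions.

def movement_demand(queueing, running, running_weight=0.5):
    """Combined demand of one traffic movement: queued vehicles count
    fully; running (approaching) vehicles are discounted."""
    return queueing + running_weight * running

def choose_phase(phases, current_phase, switch_margin=1.0):
    """Pick the phase whose movements carry the highest total demand.
    `phases` maps phase id -> list of (queueing, running) tuples.
    Keep the current phase unless another exceeds it by `switch_margin`,
    a crude stand-in for a phase-switch condition."""
    scores = {
        pid: sum(movement_demand(q, r) for q, r in movements)
        for pid, movements in phases.items()
    }
    best = max(scores, key=scores.get)
    if best != current_phase and scores[best] <= scores[current_phase] + switch_margin:
        return current_phase  # not enough extra pressure to justify switching
    return best

phases = {
    "NS": [(6, 2), (4, 0)],  # north-south movements: (queueing, running)
    "EW": [(1, 3), (2, 1)],
}
print(choose_phase(phases, current_phase="EW"))  # NS demand 11.0 vs EW 5.0 -> "NS"
```

The margin keeps the controller from oscillating between phases with near-equal demand; the real method's switch rule and representation differ in detail.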
Related papers
- Reinforcement Learning for Adaptive Traffic Signal Control: Turn-Based and Time-Based Approaches to Reduce Congestion [2.733700237741334]
This paper explores the use of Reinforcement Learning to enhance traffic signal operations at intersections.
We introduce two RL-based algorithms: a turn-based agent, which dynamically prioritizes traffic signals based on real-time queue lengths, and a time-based agent, which adjusts signal phase durations according to traffic conditions.
Simulation results demonstrate that both RL algorithms significantly outperform conventional traffic signal control systems.
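The turn-based and time-based ideas above can be caricatured in a few lines. The actual agents in that paper are learned with RL; this greedy stand-in is purely an assumption for illustration, and all parameter names are made up:

```python
# Toy stand-ins for the two ideas: serve the approach with the longest
# real-time queue (turn-based), and stretch green time with demand
# (time-based). The paper's agents are learned; this is not their code.

def next_green(queues):
    """`queues` maps approach name -> number of waiting vehicles."""
    return max(queues, key=queues.get)

def phase_duration(queue_len, base=10, per_vehicle=2, max_green=60):
    """Lengthen the green phase with queue size, capped at max_green."""
    return min(base + per_vehicle * queue_len, max_green)

print(next_green({"north": 4, "south": 2, "east": 9, "west": 1}))  # east
print(phase_duration(9))  # 28
```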
arXiv Detail & Related papers (2024-08-28T12:35:56Z)
- AD-H: Autonomous Driving with Hierarchical Agents [64.49185157446297]
We propose to connect high-level instructions and low-level control signals with mid-level language-driven commands.
We implement this idea through a hierarchical multi-agent driving system named AD-H.
arXiv Detail & Related papers (2024-06-05T17:25:46Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion that leads to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- On Transforming Reinforcement Learning by Transformer: The Development Trajectory [97.79247023389445]
Transformer, originally devised for natural language processing, has also achieved significant success in computer vision.
We group existing developments into two categories: architecture enhancement and trajectory optimization.
We examine the main applications of TRL in robotic manipulation, text-based games, navigation, and autonomous driving.
arXiv Detail & Related papers (2022-12-29T03:15:59Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Leveraging Queue Length and Attention Mechanisms for Enhanced Traffic Signal Control Optimization [3.0309252269809264]
We present a novel approach to traffic signal control (TSC) that utilizes queue length as an efficient state representation.
Comprehensive experiments on multiple real-world datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2021-12-30T09:24:09Z)
- Efficient Pressure: Improving efficiency for signalized intersections [24.917612761503996]
Reinforcement learning (RL) has attracted more attention to help solve the traffic signal control (TSC) problem.
Existing RL-based methods are rarely deployed because they are neither cost-effective in terms of computing resources nor more robust than traditional approaches.
We demonstrate how to construct an adaptive controller for TSC with less training and reduced complexity using an RL-based approach.
arXiv Detail & Related papers (2021-12-04T13:49:58Z)
- ModelLight: Model-Based Meta-Reinforcement Learning for Traffic Signal Control [5.219291917441908]
This paper proposes a novel model-based meta-reinforcement learning framework (ModelLight) for traffic signal control.
Within ModelLight, an ensemble of models for road intersections and the optimization-based meta-learning method are used to improve the data efficiency of an RL-based traffic light control method.
Experiments on real-world datasets demonstrate that ModelLight can outperform state-of-the-art traffic light control algorithms.
arXiv Detail & Related papers (2021-11-15T20:25:08Z)
- Deep Reinforcement Q-Learning for Intelligent Traffic Signal Control with Partial Detection [0.0]
Intelligent traffic signal controllers that apply DQN algorithms to traffic light policy optimization efficiently reduce congestion by adapting signals to real-time traffic.
Most propositions in the literature however consider that all vehicles at the intersection are detected, an unrealistic scenario.
We propose a deep reinforcement Q-learning model to optimize traffic signal control at an isolated intersection, in a partially observable environment with connected vehicles.
arXiv Detail & Related papers (2021-09-29T10:42:33Z)
- Reinforcement Learning with Latent Flow [78.74671595139613]
Flow of Latents for Reinforcement Learning (Flare) is a network architecture for RL that explicitly encodes temporal information through latent vector differences.
We show that Flare recovers optimal performance in state-based RL without explicit access to the state velocity.
We also show that Flare achieves state-of-the-art performance on pixel-based challenging continuous control tasks within the DeepMind control benchmark suite.
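Flare's core trick, encoding temporal information through latent vector differences, can be sketched as follows. The shapes and the concatenation choice here are illustrative assumptions, not the actual Flare architecture:

```python
# Illustrative sketch of the latent-difference idea behind Flare:
# concatenate recent per-frame latents with their consecutive
# differences, so downstream layers see velocity-like information
# without explicit access to state velocity. Not the real architecture.
import numpy as np

def flare_features(latents):
    """Given a stack of per-frame latent vectors of shape (T, D),
    return the latest T-1 latents fused with their temporal differences."""
    latents = np.asarray(latents, dtype=float)
    diffs = latents[1:] - latents[:-1]  # (T-1, D) frame-to-frame differences
    return np.concatenate([latents[1:].ravel(), diffs.ravel()])

feats = flare_features([[0.0, 1.0], [0.5, 1.0], [1.5, 0.0]])
print(feats.shape)  # (8,)
```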
arXiv Detail & Related papers (2021-01-06T03:50:50Z)
- Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T17:35:32Z)
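One of the augmentations RAD lists, random translate, can be sketched as a pad-then-crop on image observations. This is a simplified stand-in, not the RAD codebase, and the pad size is an arbitrary assumption:

```python
# Simplified random-translate augmentation in the spirit of RAD:
# zero-pad an (H, W, C) observation and crop back to the original
# size at a random offset. Not the authors' implementation.
import numpy as np

def random_translate(obs, pad=4, rng=None):
    """Shift the image by up to `pad` pixels in each direction,
    filling revealed borders with zeros."""
    rng = rng or np.random.default_rng()
    h, w, c = obs.shape
    padded = np.zeros((h + 2 * pad, w + 2 * pad, c), dtype=obs.dtype)
    padded[pad:pad + h, pad:pad + w] = obs
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

obs = np.ones((84, 84, 3), dtype=np.uint8)
print(random_translate(obs).shape)  # (84, 84, 3)
```

Because the module only transforms observations, it can wrap any replay-buffer sampling step without touching the underlying RL algorithm, which is the "plug-and-play" point of the paper.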
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.