Large-Scale Traffic Signal Control Using Constrained Network Partition
and Adaptive Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2303.11899v5
- Date: Thu, 7 Sep 2023 04:42:45 GMT
- Title: Large-Scale Traffic Signal Control Using Constrained Network Partition
and Adaptive Deep Reinforcement Learning
- Authors: Hankang Gu, Shangbo Wang, Xiaoguang Ma, Dongyao Jia, Guoqiang Mao, Eng
Gee Lim, Cheuk Pong Ryan Wong
- Abstract summary: Multi-agent Deep Reinforcement Learning (MADRL) based traffic signal control has become a popular research topic in recent years.
Some literature utilizes a regional control approach where the whole network is partitioned into multiple disjoint regions.
We propose a novel RL training framework named RegionLight to tackle the above limitations.
- Score: 19.914106989483987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-agent Deep Reinforcement Learning (MADRL) based traffic signal control
has become a popular research topic in recent years. To alleviate the scalability
issue of completely centralized RL techniques and the non-stationarity issue of
completely decentralized RL techniques on large-scale traffic networks, some
literature utilizes a regional control approach where the whole network is
first partitioned into multiple disjoint regions, followed by applying the
centralized RL approach to each region. However, the existing partitioning
rules either have no constraints on the topology of regions or require the same
topology for all regions. Meanwhile, no existing regional control approach
explores the performance of the optimal joint action in a regional action space
that grows exponentially when intersections are controlled by 4-phase traffic
signals (EW, EWL, NS, NSL). In this paper, we propose a novel RL training
framework named RegionLight to tackle the above limitations. Specifically, the
topology of regions is first constrained to a star network, which comprises
one center and an arbitrary number of leaves. Next, the network partitioning
problem is modeled as an optimization problem to minimize the number of
regions. Then, an Adaptive Branching Dueling Q-Network (ABDQ) model is proposed
to decompose the regional control task into several joint signal control
sub-tasks corresponding to particular intersections. These sub-tasks are then
solved cooperatively to maximize the regional benefit. Finally, the global
control strategy for the whole network is obtained by concatenating the optimal
joint actions of all regions. Experimental results demonstrate the superiority
of our proposed framework over all baselines on both real and synthetic
datasets across all evaluation metrics.
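A minimal sketch of a branching dueling Q-head in the spirit of ABDQ is shown below, assuming PyTorch, arbitrary hidden sizes, and the 4-phase action set (EW, EWL, NS, NSL); the paper's exact ABDQ architecture, including its adaptive handling of regions with different numbers of leaves, is not reproduced here. A shared encoder of the regional state feeds one state-value stream and one advantage stream per intersection, so a greedy joint action is read off branch by branch instead of searching the 4^k joint action space of a k-intersection region.

```python
# Illustrative branching dueling Q-head (a sketch in the spirit of ABDQ,
# not the authors' exact model). The PyTorch framework and hidden sizes
# are assumptions; each branch scores the 4 phases of one intersection.
import torch
import torch.nn as nn


class BranchingDuelingQHead(nn.Module):
    def __init__(self, state_dim: int, num_branches: int,
                 num_phases: int = 4, hidden: int = 128):
        super().__init__()
        # Shared representation of the regional state.
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # One state-value stream for the whole region.
        self.value = nn.Linear(hidden, 1)
        # One advantage stream per intersection (branch).
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, num_phases) for _ in range(num_branches)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.shared(state)                      # (batch, hidden)
        v = self.value(h)                           # (batch, 1)
        q_per_branch = []
        for adv in self.advantages:
            a = adv(h)                              # (batch, num_phases)
            # Dueling aggregation per branch: Q_d = V + A_d - mean(A_d)
            q_per_branch.append(v + a - a.mean(dim=1, keepdim=True))
        return torch.stack(q_per_branch, dim=1)     # (batch, branches, phases)


# Greedy joint action: per-branch argmax, so the 4^k regional action space
# is never enumerated explicitly.
q = BranchingDuelingQHead(state_dim=32, num_branches=5)(torch.randn(1, 32))
joint_action = q.argmax(dim=-1)                     # shape (1, 5)
```

The per-branch argmax is what keeps regional control tractable in this kind of architecture: the number of Q-values grows linearly (4k) with the number of intersections in a region rather than exponentially (4^k).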
Related papers
- Joint Optimal Transport and Embedding for Network Alignment [66.49765320358361]
We propose a joint optimal transport and embedding framework for network alignment named JOENA.
With a unified objective, the mutual benefits of both methods can be achieved by an alternating optimization schema with guaranteed convergence.
Experiments on real-world networks validate the effectiveness and scalability of JOENA, achieving up to 16% improvement in MRR and 20x speedup.
arXiv Detail & Related papers (2025-02-26T17:28:08Z)
- Toward Dependency Dynamics in Multi-Agent Reinforcement Learning for Traffic Signal Control [8.312659530314937]
Reinforcement learning (RL) has emerged as a promising data-driven approach for adaptive traffic signal control.
In this paper, we propose a novel Dynamic Reinforcement Update Strategy for Deep Q-Network (DQN-DPUS)
We show that the proposed strategy can speed up the convergence rate without sacrificing optimal exploration.
arXiv Detail & Related papers (2025-02-23T15:29:12Z)
- Decentralized Federated Learning Over Imperfect Communication Channels [68.08499874460857]
This paper analyzes the impact of imperfect communication channels on decentralized federated learning (D-FL)
It determines the optimal number of local aggregations per training round, adapting to the network topology and imperfect channels.
It is seen that D-FL, with an optimal number of local aggregations, can outperform its potential alternatives by over 10% in training accuracy.
arXiv Detail & Related papers (2024-05-21T16:04:32Z)
- Region-aware Distribution Contrast: A Novel Approach to Multi-Task Partially Supervised Learning [50.88504784466931]
Multi-task dense prediction involves semantic segmentation, depth estimation, and surface normal estimation.
Existing solutions typically rely on learning global image representations for global cross-task image matching.
Our proposal involves modeling region-wise representations using Gaussian Distributions.
arXiv Detail & Related papers (2024-03-15T12:41:30Z)
- SINR-Aware Deep Reinforcement Learning for Distributed Dynamic Channel Allocation in Cognitive Interference Networks [10.514231683620517]
This paper focuses on real-world systems experiencing inter-carrier interference (ICI) and channel reuse by multiple large-scale networks.
We propose a novel multi-agent reinforcement learning framework for distributed DCA, named Channel Allocation RL To Overlapped Networks (CARLTON)
Our results demonstrate exceptional performance and robust generalization, showcasing superior efficiency compared to alternative state-of-the-art methods.
arXiv Detail & Related papers (2024-02-17T20:03:02Z)
- A Novel Reinforcement Learning Routing Algorithm for Congestion Control in Complex Networks [0.0]
This article introduces a routing algorithm leveraging reinforcement learning to address two primary objectives: congestion control and optimizing path length based on the shortest path algorithm.
Notably, the proposed method proves effective not only in Barabási-Albert scale-free networks but also in other network models such as Watts-Strogatz (small-world) and Erdős-Rényi (random network).
arXiv Detail & Related papers (2023-12-30T18:21:13Z)
- Hierarchical Multi-Marginal Optimal Transport for Network Alignment [52.206006379563306]
Multi-network alignment is an essential prerequisite for joint learning on multiple networks.
We propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment.
Our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
arXiv Detail & Related papers (2023-10-06T02:35:35Z)
- Large-Scale Traffic Signal Control by a Nash Deep Q-network Approach [7.23135508361981]
We introduce an off-policy Nash deep Q-Network (OPNDQN) algorithm, which mitigates the weaknesses of both fully centralized and MARL approaches.
One of the main advantages of OPNDQN is that it mitigates the non-stationarity of the multi-agent Markov process.
We show the dominant superiority of OPNDQN over several existing MARL approaches in terms of average queue length, episode training reward and average waiting time.
arXiv Detail & Related papers (2023-01-02T12:58:51Z)
- Feudal Multi-Agent Reinforcement Learning with Adaptive Network Partition for Traffic Signal Control [44.09601435685123]
Multi-agent reinforcement learning (MARL) has been applied and shown great potential in traffic signal control.
Previous work partitions the traffic network into several regions and learns policies for agents in a feudal structure.
We propose a novel feudal MARL approach with adaptive network partition.
arXiv Detail & Related papers (2022-05-27T09:02:10Z)
- Region-Based Semantic Factorization in GANs [67.90498535507106]
We present a highly efficient algorithm to factorize the latent semantics learned by Generative Adversarial Networks (GANs) concerning an arbitrary image region.
Through an appropriately defined generalized Rayleigh quotient, we solve such a problem without any annotations or training.
Experimental results on various state-of-the-art GAN models demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-02-19T17:46:02Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Area-wide traffic signal control based on a deep graph Q-Network (DGQN) trained in an asynchronous manner [3.655021726150368]
Reinforcement learning (RL) algorithms have been widely applied in traffic signal studies.
There are, however, several problems in jointly controlling traffic lights for a large transportation network.
arXiv Detail & Related papers (2020-08-05T06:13:58Z)
- LRC-Net: Learning Discriminative Features on Point Clouds by Encoding Local Region Contexts [65.79931333193016]
We present a novel Local-Region-Context Network (LRC-Net) to learn discriminative features on point clouds.
LRC-Net encodes fine-grained contexts inside and among local regions simultaneously.
Results show LRC-Net is competitive with state-of-the-art methods in shape classification and shape segmentation applications.
arXiv Detail & Related papers (2020-03-18T14:34:08Z)