Joint-Local Grounded Action Transformation for Sim-to-Real Transfer in Multi-Agent Traffic Control
- URL: http://arxiv.org/abs/2507.15174v1
- Date: Mon, 21 Jul 2025 01:33:59 GMT
- Title: Joint-Local Grounded Action Transformation for Sim-to-Real Transfer in Multi-Agent Traffic Control
- Authors: Justin Turnau, Longchao Da, Khoa Vo, Ferdous Al Rafi, Shreyas Bachiraju, Tiejin Chen, Hua Wei,
- Abstract summary: Traffic Signal Control (TSC) is essential for managing urban traffic flow and reducing congestion. Reinforcement Learning (RL) offers an adaptive method for TSC by responding to dynamic traffic patterns. However, implementing MARL-based TSC policies in the real world often leads to a significant performance drop, known as the sim-to-real gap. In this work, we introduce JL-GAT, an application of GAT to MARL-based TSC that balances scalability with enhanced grounding capability by incorporating information from neighboring agents.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic Signal Control (TSC) is essential for managing urban traffic flow and reducing congestion. Reinforcement Learning (RL) offers an adaptive method for TSC by responding to dynamic traffic patterns, with multi-agent RL (MARL) gaining traction as intersections naturally function as coordinated agents. However, due to shifts in environmental dynamics, implementing MARL-based TSC policies in the real world often leads to a significant performance drop, known as the sim-to-real gap. Grounded Action Transformation (GAT) has successfully mitigated this gap in single-agent RL for TSC, but real-world traffic networks, which involve numerous interacting intersections, are better suited to a MARL framework. In this work, we introduce JL-GAT, an application of GAT to MARL-based TSC that balances scalability with enhanced grounding capability by incorporating information from neighboring agents. JL-GAT adopts a decentralized approach to GAT, allowing for the scalability often required in real-world traffic networks while still capturing key interactions between agents. Comprehensive experiments on various road networks under simulated adverse weather conditions, along with ablation studies, demonstrate the effectiveness of JL-GAT. The code is publicly available at https://github.com/DaRL-LibSignal/JL-GAT/.
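To make the grounding idea concrete, here is a minimal sketch of a joint-local Grounded Action Transformation step for one agent, based only on the abstract's description (a forward model of real-world dynamics plus an inverse model of the simulator, each conditioned on neighboring agents). All names and the dictionary structure are illustrative assumptions, not the actual JL-GAT implementation.

```python
# Hedged sketch of one joint-local GAT grounding step.
# Names (ground_action, forward_model, inverse_model, the neighbor
# dict layout) are hypothetical, not taken from the JL-GAT codebase.

def ground_action(local_state, policy_action, neighbors,
                  forward_model, inverse_model):
    """Transform a policy's action so the simulator mimics real dynamics.

    forward_model: learned from real-world data; predicts the next local
        state given the joint-local states and actions.
    inverse_model: learned in simulation; recovers the action that would
        produce a desired next state in the simulator.
    neighbors: list of dicts with each neighboring agent's observed
        "state" and "action" (the 'joint-local' augmentation).
    """
    # Build the joint-local observation: own state plus neighbors'.
    joint_obs = [local_state] + [n["state"] for n in neighbors]
    joint_act = [policy_action] + [n["action"] for n in neighbors]

    # 1) Predict where the real world would go under this joint action.
    predicted_real_next = forward_model(joint_obs, joint_act)

    # 2) Ask the simulator's inverse model which action reproduces that
    #    state, and execute the grounded action in simulation instead.
    grounded_action = inverse_model(joint_obs, predicted_real_next)
    return grounded_action
```

The decentralized aspect the abstract emphasizes would correspond to calling a step like this per intersection with only its local neighborhood, rather than fitting one joint model over the whole network.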
Related papers
- Smart Traffic Signals: Comparing MARL and Fixed-Time Strategies [0.0]
Urban traffic congestion, particularly at intersections, significantly impacts travel time, fuel consumption, and emissions. Traditional fixed-time signal control systems often lack the adaptability to manage dynamic traffic patterns effectively. This study explores the application of multi-agent reinforcement learning to optimize traffic signal coordination across multiple intersections.
arXiv Detail & Related papers (2025-05-20T15:59:44Z) - Federated Hierarchical Reinforcement Learning for Adaptive Traffic Signal Control [5.570882985800125]
Multi-agent reinforcement learning (MARL) has shown promise for adaptive traffic signal control (ATSC). However, MARL faces constraints due to extensive data sharing and communication requirements. We propose Hierarchical Federated Reinforcement Learning (HFRL) for ATSC.
arXiv Detail & Related papers (2025-04-07T23:02:59Z) - CoLLMLight: Cooperative Large Language Model Agents for Network-Wide Traffic Signal Control [7.0964925117958515]
Traffic Signal Control (TSC) plays a critical role in urban traffic management by optimizing traffic flow and mitigating congestion. Existing approaches fail to address the essential need for inter-agent coordination. We propose CoLLMLight, a cooperative LLM agent framework for TSC.
arXiv Detail & Related papers (2025-03-14T15:40:39Z) - Toward Dependency Dynamics in Multi-Agent Reinforcement Learning for Traffic Signal Control [8.312659530314937]
Reinforcement learning (RL) emerges as a promising data-driven approach for adaptive traffic signal control. In this paper, we propose a novel Dynamic Reinforcement Update Strategy for Deep Q-Network (DQN-DPUS). We show that the proposed strategy can speed up the convergence rate without sacrificing optimal exploration.
arXiv Detail & Related papers (2025-02-23T15:29:12Z) - Improving Traffic Flow Predictions with SGCN-LSTM: A Hybrid Model for Spatial and Temporal Dependencies [55.2480439325792]
This paper introduces the Signal-Enhanced Graph Convolutional Network Long Short Term Memory (SGCN-LSTM) model for predicting traffic speeds across road networks.
Experiments on the PEMS-BAY road network traffic dataset demonstrate the SGCN-LSTM model's effectiveness.
arXiv Detail & Related papers (2024-11-01T00:37:00Z) - LLM-Assisted Light: Leveraging Large Language Model Capabilities for Human-Mimetic Traffic Signal Control in Complex Urban Environments [3.7788636451616697]
This work introduces an innovative approach that integrates Large Language Models into traffic signal control systems.
A hybrid framework that augments LLMs with a suite of perception and decision-making tools is proposed.
The findings from our simulations attest to the system's adeptness in adjusting to a multiplicity of traffic environments.
arXiv Detail & Related papers (2024-03-13T08:41:55Z) - A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion that leads to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z) - Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning Framework for Congestion Control in Tactical Environments [53.08686495706487]
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.