Large-Scale Mixed-Traffic and Intersection Control using Multi-agent Reinforcement Learning
- URL: http://arxiv.org/abs/2504.04691v1
- Date: Mon, 07 Apr 2025 02:52:39 GMT
- Title: Large-Scale Mixed-Traffic and Intersection Control using Multi-agent Reinforcement Learning
- Authors: Songyang Liu, Muyang Fan, Weizi Li, Jing Du, Shuai Li
- Abstract summary: This study presents the first attempt to use decentralized multi-agent reinforcement learning for large-scale mixed traffic control. We evaluate a real-world network in Colorado Springs, CO, USA with 14 intersections. At 80% RV penetration rate, our method reduces waiting time from 6.17 s to 5.09 s and increases throughput from 454 vehicles per 500 seconds to 493 vehicles per 500 seconds.
- Score: 9.05328054083722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic congestion remains a significant challenge in modern urban networks. Autonomous driving technologies have emerged as a potential solution. Among traffic control methods, reinforcement learning has shown superior performance over traffic signals in various scenarios. However, prior research has largely focused on small-scale networks or isolated intersections, leaving large-scale mixed traffic control largely unexplored. This study presents the first attempt to use decentralized multi-agent reinforcement learning for large-scale mixed traffic control in which some intersections are managed by traffic signals and others by robot vehicles. Evaluating a real-world network in Colorado Springs, CO, USA with 14 intersections, we measure traffic efficiency via the average waiting time of vehicles at intersections and the number of vehicles reaching their destinations within a time window (i.e., throughput). At 80% RV penetration rate, our method reduces waiting time from 6.17 s to 5.09 s and increases throughput from 454 vehicles per 500 seconds to 493 vehicles per 500 seconds, outperforming the baseline of fully signalized intersections. These findings suggest that integrating reinforcement learning-based control into large-scale traffic can improve overall efficiency and may inform future urban planning strategies.
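The decentralized scheme described in the abstract can be illustrated with a minimal sketch: one independent agent per intersection, each acting only on its own local observation. The class, observation encoding, and two-action space below are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
import random

class IntersectionAgent:
    """One decentralized agent per intersection (hypothetical sketch).

    Each agent observes only its own queues and selects an action
    independently, as in decentralized multi-agent RL.
    """
    def __init__(self, n_actions, epsilon=0.1, seed=0):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.q = {}  # local observation -> list of action values
        self.rng = random.Random(seed)

    def act(self, local_obs):
        # Epsilon-greedy over locally learned action values.
        values = self.q.setdefault(local_obs, [0.0] * self.n_actions)
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_actions)
        return max(range(self.n_actions), key=values.__getitem__)

    def update(self, obs, action, reward, alpha=0.5):
        # One-step value update driven by the agent's own reward signal.
        values = self.q.setdefault(obs, [0.0] * self.n_actions)
        values[action] += alpha * (reward - values[action])

# 14 independent agents, one per intersection, mirroring the evaluated network.
agents = [IntersectionAgent(n_actions=2, seed=i) for i in range(14)]
obs = ("queue_low",)
actions = [a.act(obs) for a in agents]
```

Because no agent reads another agent's state, the controller scales with the number of intersections without a centralized coordinator.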
Related papers
- Joint Pedestrian and Vehicle Traffic Optimization in Urban Environments using Reinforcement Learning [11.107470982920262]
Reinforcement learning holds significant promise for adaptive traffic signal control.
We present a deep RL framework for adaptive control of eight traffic signals along a real-world urban corridor.
Results demonstrate significant performance improvements over traditional fixed-time signals.
arXiv Detail & Related papers (2025-04-07T12:41:58Z) - Neighbor-Aware Reinforcement Learning for Mixed Traffic Optimization in Large-scale Networks [1.9413548770753521]
This paper proposes a reinforcement learning framework for coordinating mixed traffic across interconnected intersections.
Our key contribution is a neighbor-aware reward mechanism that enables RVs to maintain balanced distribution across the network.
Results show that our method reduces average waiting times by 39.2% compared to the state-of-the-art single-intersection control policy.
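The neighbor-aware reward mechanism described above can be sketched as a local waiting-time penalty augmented by a weighted share of neighboring intersections' waiting times. The function name, the averaging over neighbors, and the weight `beta` are assumptions for illustration, not the paper's exact formulation.

```python
def neighbor_aware_reward(local_wait, neighbor_waits, beta=0.5):
    """Hypothetical neighbor-aware reward.

    Penalizes the agent's own average waiting time plus a weighted
    average of its neighbors', so an RV is discouraged from pushing
    congestion onto adjacent intersections (beta is an assumed weight).
    """
    if neighbor_waits:
        neighbor_term = sum(neighbor_waits) / len(neighbor_waits)
    else:
        neighbor_term = 0.0
    return -(local_wait + beta * neighbor_term)
```

With `beta = 0`, this reduces to a purely local reward; increasing `beta` trades local greed for network-level balance.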
arXiv Detail & Related papers (2024-12-17T07:35:56Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z) - Learning to Control and Coordinate Mixed Traffic Through Robot Vehicles at Complex and Unsignalized Intersections [33.0086333735748]
We propose a multi-agent reinforcement learning approach for the control and coordination of mixed traffic by RVs at real-world, complex intersections.
Our method can prevent congestion formation via merely 5% RVs under a real-world traffic demand of 700 vehicles per hour.
Our method is robust against blackout events, sudden RV percentage drops, and V2V communication error.
arXiv Detail & Related papers (2023-01-12T21:09:58Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion for an approach(es).
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Reinforcement Learning for Mixed Autonomy Intersections [4.771833920251869]
We propose a model-free reinforcement learning method for controlling mixed autonomy traffic in simulated traffic networks.
Our method utilizes multi-agent policy decomposition which allows decentralized control based on local observations for an arbitrary number of controlled vehicles.
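The policy decomposition described above can be illustrated by reusing one parameter-shared policy for every controlled vehicle, so the controller works for an arbitrary number of vehicles. The gap-based rule and observation format below are hypothetical stand-ins for a learned policy.

```python
def shared_policy(local_obs):
    """A single shared policy applied per vehicle (illustrative rule).

    Stands in for a learned policy: proceed only if the gap ahead
    exceeds a threshold and no cross traffic is approaching.
    """
    gap, cross_traffic_approaching = local_obs
    return "go" if gap > 10.0 and not cross_traffic_approaching else "yield"

def decentralized_control(observations):
    # The same policy is evaluated on each vehicle's local observation,
    # so the number of controlled vehicles can vary freely.
    return [shared_policy(obs) for obs in observations]

actions = decentralized_control([(15.0, False), (4.0, False), (20.0, True)])
```

Parameter sharing is what makes the approach agnostic to fleet size: adding or removing controlled vehicles changes only the length of the observation list, not the policy.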
arXiv Detail & Related papers (2021-11-08T18:03:18Z) - Courteous Behavior of Automated Vehicles at Unsignalized Intersections via Reinforcement Learning [30.00761722505295]
We propose a novel approach to optimize traffic flow at intersections in mixed traffic situations using deep reinforcement learning.
Our reinforcement learning agent learns a policy for a centralized controller to let connected autonomous vehicles at unsignalized intersections give up their right of way and yield to other vehicles to optimize traffic flow.
arXiv Detail & Related papers (2021-06-11T13:16:48Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences arising from their use.