Neighbor-Aware Reinforcement Learning for Mixed Traffic Optimization in Large-scale Networks
- URL: http://arxiv.org/abs/2412.12622v1
- Date: Tue, 17 Dec 2024 07:35:56 GMT
- Title: Neighbor-Aware Reinforcement Learning for Mixed Traffic Optimization in Large-scale Networks
- Authors: Iftekharul Islam, Weizi Li
- Abstract summary: This paper proposes a reinforcement learning framework for coordinating mixed traffic across interconnected intersections. Our key contribution is a neighbor-aware reward mechanism that enables RVs to maintain balanced distribution across the network. Results show that our method reduces average waiting times by 39.2% compared to the state-of-the-art single-intersection control policy.
- Score: 1.9413548770753521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Managing mixed traffic comprising human-driven and robot vehicles (RVs) across large-scale networks presents unique challenges beyond single-intersection control. This paper proposes a reinforcement learning framework for coordinating mixed traffic across multiple interconnected intersections. Our key contribution is a neighbor-aware reward mechanism that enables RVs to maintain balanced distribution across the network while optimizing local intersection efficiency. We evaluate our approach using a real-world network, demonstrating its effectiveness in managing realistic traffic patterns. Results show that our method reduces average waiting times by 39.2% compared to the state-of-the-art single-intersection control policy and 79.8% compared to traditional traffic signals. The framework's ability to coordinate traffic across multiple intersections while maintaining balanced RV distribution provides a foundation for deploying learning-based solutions in urban traffic systems.
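The abstract's central idea is a reward that trades off local intersection efficiency against balanced RV distribution over neighboring intersections. The following is a minimal illustrative sketch of how such a neighbor-aware reward could be composed; the function name, inputs, and weights are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of a neighbor-aware reward. Assumes each intersection
# can observe its average waiting time and the RV counts of its neighbors;
# all names and coefficients here are illustrative.

def neighbor_aware_reward(local_wait, neighbor_rv_counts, local_rv_count,
                          alpha=1.0, beta=0.5):
    """Combine local efficiency with a penalty for unbalanced RV distribution."""
    # Local term: shorter average waiting time yields a higher reward.
    local_term = -local_wait
    # Balance term: penalize deviation of this intersection's RV count
    # from the mean RV count over its neighborhood (neighbors + itself).
    counts = list(neighbor_rv_counts) + [local_rv_count]
    mean_count = sum(counts) / len(counts)
    balance_penalty = abs(local_rv_count - mean_count)
    return alpha * local_term - beta * balance_penalty
```

With this shape, an intersection that hoards RVs while its neighbors are starved receives a lower reward than one holding its share of the neighborhood mean, even at the same local waiting time, which is the balancing behavior the abstract describes.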
Related papers
- Unicorn: A Universal and Collaborative Reinforcement Learning Approach Towards Generalizable Network-Wide Traffic Signal Control [13.106167353085878]
Adaptive traffic signal control (ATSC) is crucial in reducing congestion, maximizing throughput, and improving mobility in rapidly growing urban areas.
Recent advancements in parameter-sharing multi-agent reinforcement learning (MARL) have greatly enhanced the scalable and adaptive optimization of complex, dynamic flows in large-scale homogeneous networks.
We present Unicorn, a universal and collaborative MARL framework designed for efficient and adaptable network-wide ATSC.
arXiv Detail & Related papers (2025-03-14T15:13:42Z)
- Joint Optimal Transport and Embedding for Network Alignment [66.49765320358361]
We propose a joint optimal transport and embedding framework for network alignment named JOENA.
With a unified objective, the mutual benefits of both methods can be achieved by an alternating optimization schema with guaranteed convergence.
Experiments on real-world networks validate the effectiveness and scalability of JOENA, achieving up to 16% improvement in MRR and 20x speedup.
arXiv Detail & Related papers (2025-02-26T17:28:08Z)
- Towards Multi-agent Reinforcement Learning based Traffic Signal Control through Spatio-temporal Hypergraphs [19.107744041461316]
Traffic signal control systems (TSCSs) are integral to intelligent traffic management, fostering efficient vehicle flow.
Traditional approaches often simplify road networks into standard graphs.
We propose a novel TSCS framework to realize intelligent traffic control.
arXiv Detail & Related papers (2024-04-17T02:46:18Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z)
- SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control [7.387226437589183]
SocialLight is a new multi-agent reinforcement learning method for traffic signal control.
It learns cooperative traffic control policies by estimating the individual marginal contribution of agents on their local neighborhood.
We benchmark our trained network against state-of-the-art traffic signal control methods on standard benchmarks in two traffic simulators.
arXiv Detail & Related papers (2023-04-20T12:41:25Z)
- Learning to Control and Coordinate Mixed Traffic Through Robot Vehicles at Complex and Unsignalized Intersections [33.0086333735748]
We propose a multi-agent reinforcement learning approach for the control and coordination of mixed traffic by RVs at real-world, complex intersections.
Our method can prevent congestion formation with merely 5% RVs under a real-world traffic demand of 700 vehicles per hour.
It is robust against blackout events, sudden drops in RV percentage, and V2V communication errors.
arXiv Detail & Related papers (2023-01-12T21:09:58Z)
- Reinforcement Learning for Mixed Autonomy Intersections [4.771833920251869]
We propose a model-free reinforcement learning method for controlling mixed autonomy traffic in simulated traffic networks.
Our method utilizes multi-agent policy decomposition which allows decentralized control based on local observations for an arbitrary number of controlled vehicles.
arXiv Detail & Related papers (2021-11-08T18:03:18Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from its coarse-grained counterpart is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- Courteous Behavior of Automated Vehicles at Unsignalized Intersections via Reinforcement Learning [30.00761722505295]
We propose a novel approach to optimize traffic flow at intersections in mixed traffic situations using deep reinforcement learning.
Our reinforcement learning agent learns a policy for a centralized controller to let connected autonomous vehicles at unsignalized intersections give up their right of way and yield to other vehicles to optimize traffic flow.
arXiv Detail & Related papers (2021-06-11T13:16:48Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.