CoSLight: Co-optimizing Collaborator Selection and Decision-making to Enhance Traffic Signal Control
- URL: http://arxiv.org/abs/2405.17152v3
- Date: Wed, 19 Jun 2024 10:07:02 GMT
- Title: CoSLight: Co-optimizing Collaborator Selection and Decision-making to Enhance Traffic Signal Control
- Authors: Jingqing Ruan, Ziyue Li, Hua Wei, Haoyuan Jiang, Jiaming Lu, Xuantang Xiong, Hangyu Mao, Rui Zhao
- Abstract summary: Existing work mainly chooses neighboring intersections as collaborators.
We propose to separate the collaborator selection as a second policy to be learned.
Specifically, the selection policy adaptively selects the best teammates in real time according to phase- and intersection-level features.
- Score: 14.134128926121711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective multi-intersection collaboration is pivotal for reinforcement-learning-based traffic signal control to alleviate congestion. Existing work mainly chooses neighboring intersections as collaborators. However, a substantial amount of congestion, including some wide-range congestion, is caused by non-neighbors failing to collaborate. To address these issues, we propose to separate the collaborator selection into a second policy to be learned, updated concurrently with the original signal-control policy. Specifically, the selection policy adaptively selects the best teammates in real time according to phase- and intersection-level features. Empirical results on both synthetic and real-world datasets provide robust validation for the superiority of our approach, offering significant improvements over existing state-of-the-art methods. The code is available at https://github.com/bonaldli/CoSLight.
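The two-policy idea in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the feature vectors, the linear scorer, and the phase rule are placeholders standing in for learned networks, and in CoSLight both policies would be updated concurrently by RL.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

class SelectionPolicy:
    """Scores candidate intersections from their features; the top-k become collaborators."""
    def __init__(self, n_features):
        self.w = [0.0] * n_features  # linear scorer; a learned network in practice

    def select(self, features, k=2):
        scores = [sum(wi * fi for wi, fi in zip(self.w, f)) for f in features]
        probs = softmax(scores)
        ranked = sorted(range(len(probs)), key=lambda i: -probs[i])  # stable sort
        return ranked[:k]

class ControlPolicy:
    """Chooses a signal phase from the local state plus the collaborators' states."""
    def __init__(self, n_phases):
        self.n_phases = n_phases

    def act(self, own_state, collab_states):
        # placeholder rule: phase index driven by pooled queue lengths
        pooled = sum(own_state) + sum(sum(s) for s in collab_states)
        return pooled % self.n_phases

# One decision step at intersection 0, choosing 2 collaborators among intersections 1-3.
candidate_ids = [1, 2, 3]
features = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.8]]        # phase-/intersection-level features
states = {0: [3, 1], 1: [2, 2], 2: [5, 0], 3: [1, 4]}  # e.g. queue lengths per approach

selection = SelectionPolicy(n_features=2)
control = ControlPolicy(n_phases=4)
team = [candidate_ids[i] for i in selection.select(features, k=2)]
phase = control.act(states[0], [states[i] for i in team])
print(team, phase)  # with untrained (zero) weights: [1, 2] 1
```

Note that the selection policy is not restricted to neighbors: any intersection in the candidate pool can be chosen, which is exactly what lets it capture non-neighbor congestion dependencies.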
Related papers
- Offline Multi-agent Reinforcement Learning via Score Decomposition [51.23590397383217]
Offline cooperative multi-agent reinforcement learning (MARL) faces unique challenges due to distributional shifts.
This work is the first to explicitly address the distributional gap between offline and online MARL.
arXiv Detail & Related papers (2025-05-09T11:42:31Z)
- Joint Optimal Transport and Embedding for Network Alignment [66.49765320358361]
We propose a joint optimal transport and embedding framework for network alignment named JOENA.
With a unified objective, the mutual benefits of both methods can be achieved by an alternating optimization schema with guaranteed convergence.
Experiments on real-world networks validate the effectiveness and scalability of JOENA, achieving up to 16% improvement in MRR and 20x speedup.
arXiv Detail & Related papers (2025-02-26T17:28:08Z)
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- PARCO: Learning Parallel Autoregressive Policies for Efficient Multi-Agent Combinatorial Optimization [17.392822956504848]
This paper introduces PARCO, a novel approach that learns fast surrogate solvers for multi-agent problems with reinforcement learning.
We propose a model with a Multiple Pointer Mechanism to efficiently decode multiple decisions simultaneously by different agents, enhanced by a Priority-based Conflict Handling scheme.
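The priority-based conflict handling described above can be illustrated with a small sketch. The agent names, priorities, and targets below are hypothetical, and PARCO's actual mechanism operates inside a learned pointer decoder; this only shows the resolution rule: when several agents decode the same target simultaneously, the highest-priority agent keeps it and the rest fall back to a no-op.

```python
def resolve_conflicts(proposals, priorities):
    """Parallel decoding step: each agent proposes a target; if several agents
    claim the same target, only the highest-priority agent keeps it and the
    others fall back to a no-op (None)."""
    winners = {}
    for agent, target in proposals.items():
        if target not in winners or priorities[agent] > priorities[winners[target]]:
            winners[target] = agent
    return {agent: (target if winners[target] == agent else None)
            for agent, target in proposals.items()}

proposals = {"a1": "node_3", "a2": "node_3", "a3": "node_7"}   # simultaneous decisions
priorities = {"a1": 0.2, "a2": 0.9, "a3": 0.5}                 # e.g. learned priorities
print(resolve_conflicts(proposals, priorities))
# {'a1': None, 'a2': 'node_3', 'a3': 'node_7'}
```

Agents that lose a conflict simply re-decode on the next parallel step, so all decisions stay feasible without serializing the agents.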
arXiv Detail & Related papers (2024-09-05T17:49:18Z)
- CityLight: A Neighborhood-inclusive Universal Model for Coordinated City-scale Traffic Signal Control [23.5766158697276]
CityLight learns a universal policy based on representations obtained with two major modules.
Experiments on five city-scale datasets, ranging from 97 to 13,952 intersections, confirm the efficacy of CityLight.
arXiv Detail & Related papers (2024-06-04T09:10:14Z)
- SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control [7.387226437589183]
SocialLight is a new multi-agent reinforcement learning method for traffic signal control.
It learns cooperative traffic control policies by estimating the individual marginal contribution of agents on their local neighborhood.
We benchmark our trained network against state-of-the-art traffic signal control methods on standard benchmarks in two traffic simulators.
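The "individual marginal contribution" estimated by SocialLight is a counterfactual form of credit assignment, which can be sketched as follows. The toy reward function below is a hypothetical stand-in for the simulator's neighborhood return; it only illustrates the credit rule: an agent's credit is the team reward minus the reward under a counterfactual where that agent takes a default action.

```python
def team_reward(actions):
    # toy neighborhood reward: a hypothetical stand-in for the simulator's return
    return -sum((a - 1) ** 2 for a in actions)

def marginal_contribution(actions, i, default=0):
    """Credit agent i with the change in team reward versus a counterfactual
    where it takes a default action instead of its actual one."""
    counterfactual = list(actions)
    counterfactual[i] = default
    return team_reward(actions) - team_reward(counterfactual)

actions = [1, 2, 0]
credits = [marginal_contribution(actions, i) for i in range(len(actions))]
print(credits)  # [1, 0, 0]
```

Computing this per local neighborhood rather than over the whole network is what keeps the method distributed and scalable.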
arXiv Detail & Related papers (2023-04-20T12:41:25Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
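"Maximize both entropy and return" refers to the Soft Actor-Critic objective, which values a policy by its expected return plus an entropy bonus. The sketch below is a generic discrete-action illustration of that quantity (the alpha value and Q-values are made up), not MARLIN's continuous-action implementation.

```python
import math

def soft_value(q_values, probs, alpha):
    """Entropy-regularized value V = E_pi[Q] + alpha * H(pi):
    the quantity a Soft Actor-Critic agent maximizes."""
    expected_q = sum(p * q for p, q in zip(probs, q_values))
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return expected_q + alpha * entropy

q = [1.0, 0.5]
greedy = soft_value(q, [1.0, 0.0], alpha=0.2)  # deterministic: no entropy bonus
mixed = soft_value(q, [0.5, 0.5], alpha=0.2)   # stochastic: entropy bonus added
print(greedy, mixed)
```

The entropy term keeps the policy exploratory, which is useful when, as here, the agent must cope with varying background traffic rather than a single fixed environment.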
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Cooperative Reinforcement Learning on Traffic Signal Control [3.759936323189418]
Traffic signal control is a challenging real-world problem aiming to minimize overall travel time by coordinating vehicle movements at road intersections.
Existing traffic signal control systems in use still rely heavily on oversimplified information and rule-based methods.
This paper proposes a cooperative, multi-objective architecture with age-decaying weights to better estimate multiple reward terms for traffic signal control optimization.
arXiv Detail & Related papers (2022-05-23T13:25:15Z)
- Learning to Help Emergency Vehicles Arrive Faster: A Cooperative Vehicle-Road Scheduling Approach [24.505687255063986]
Vehicle-centric scheduling approaches recommend optimal paths for emergency vehicles.
Road-centric scheduling approaches aim to improve the traffic condition and assign a higher priority to emergency vehicles (EVs) passing an intersection.
We propose LEVID, a cooperative VehIcle-roaD scheduling approach including a real-time route planning module and a collaborative traffic signal control module.
arXiv Detail & Related papers (2022-02-20T10:25:15Z)
- Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has recently been applied to traffic signal control and has demonstrated promising performance, with each traffic signal regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- A Multi-intersection Vehicular Cooperative Control based on End-Edge-Cloud Computing [25.05518638792962]
We propose a Multi-intersection Vehicular Cooperative Control (MiVeCC) scheme to enable cooperation among vehicles in a large area with multiple intersections.
Firstly, a vehicular end-edge-cloud computing framework is proposed to facilitate end-edge-cloud vertical cooperation and horizontal cooperation among vehicles.
To deal with high-density traffic, vehicle selection methods are proposed to reduce the state space and accelerate algorithm convergence without performance degradation.
arXiv Detail & Related papers (2020-12-01T14:15:14Z)
- Non-Stationary Off-Policy Optimization [50.41335279896062]
We study the novel problem of off-policy optimization in piecewise-stationary contextual bandits.
In the offline learning phase, we partition logged data into categorical latent states and learn a near-optimal sub-policy for each state.
In the online deployment phase, we adaptively switch between the learned sub-policies based on their performance.
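The online deployment rule described above can be sketched simply: keep a running average reward per learned sub-policy and switch to the best one. The sub-policy names and numbers below are hypothetical; the paper's actual switching rule may differ in detail.

```python
def pick_subpolicy(avg_rewards):
    """Switch to the sub-policy with the best running-average reward."""
    return max(avg_rewards, key=avg_rewards.get)

def update_avg(avg, count, reward):
    # incremental mean update for a sub-policy's observed reward
    return avg + (reward - avg) / (count + 1)

avg_rewards = {"subpolicy_A": 0.4, "subpolicy_B": 0.7}
choice = pick_subpolicy(avg_rewards)
avg_rewards[choice] = update_avg(avg_rewards[choice], count=10, reward=0.1)
print(choice, round(avg_rewards[choice], 3))  # subpolicy_B 0.645
```

A poorly performing sub-policy thus loses its lead over time, so the deployment naturally tracks changes in the latent state.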
arXiv Detail & Related papers (2020-06-15T09:16:09Z)
- Learning Scalable Multi-Agent Coordination by Spatial Differentiation for Traffic Signal Control [8.380832628205372]
We design a multi-agent coordination framework based on deep reinforcement learning methods for traffic signal control.
Specifically, we propose the Spatial Differentiation method for coordination which uses the temporal-spatial information in the replay buffer to amend the reward of each action.
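A simplified take on that reward amendment can be written in one function: an action's reward is augmented with its neighbors' rewards drawn from the replay buffer. The mixing coefficient and averaging rule below are hypothetical; the paper's exact formula may weight neighbors differently.

```python
def amended_reward(own_reward, neighbor_rewards, gamma=0.5):
    """Amend an action's reward with its neighbors' rewards taken from the
    replay buffer, so each agent is credited for its spatial impact."""
    if not neighbor_rewards:
        return own_reward
    return own_reward + gamma * sum(neighbor_rewards) / len(neighbor_rewards)

r = amended_reward(own_reward=-2.0, neighbor_rewards=[-1.0, -3.0])
print(r)  # -3.0
```

Because the amendment only reads rewards already stored in the replay buffer, it adds coordination signal without extra communication at decision time.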
arXiv Detail & Related papers (2020-02-27T02:16:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.