Learning to Control and Coordinate Mixed Traffic Through Robot Vehicles at Complex and Unsignalized Intersections
- URL: http://arxiv.org/abs/2301.05294v2
- Date: Fri, 20 Oct 2023 00:12:49 GMT
- Authors: Dawei Wang, Weizi Li, Lei Zhu, Jia Pan
- Abstract summary: We propose a decentralized multi-agent reinforcement learning approach for the control and coordination of mixed traffic at real-world, complex intersections.
In particular, we show that using 5% RVs, we can prevent congestion formation inside a complex intersection under the actual traffic demand of 700 vehicles per hour.
Our method is also robust to both blackout events and sudden RV percentage drops, and enjoys excellent generalizability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intersections are essential road infrastructures for traffic in modern
metropolises. However, they can also be the bottleneck of traffic flows as a
result of traffic incidents or the absence of traffic coordination mechanisms
such as traffic lights. Recently, various control and coordination mechanisms
that are beyond traditional control methods have been proposed to improve the
efficiency of intersection traffic. Amongst these methods, the control of
foreseeable mixed traffic that consists of human-driven vehicles (HVs) and
robot vehicles (RVs) has emerged. In this project, we propose a decentralized
multi-agent reinforcement learning approach for the control and coordination of
mixed traffic at real-world, complex intersections--a topic that has not been
previously explored. Comprehensive experiments are conducted to show the
effectiveness of our approach. In particular, we show that using 5% RVs, we can
prevent congestion formation inside a complex intersection under the actual
traffic demand of 700 vehicles per hour. In contrast, without RVs, congestion
starts to develop when the traffic demand reaches as low as 200 vehicles per
hour. When RVs make up more than 60% of the traffic, our method achieves
performance comparable to or even better than traffic signals in terms of the
average waiting time of all vehicles at the intersection. Our method is also
robust to both blackout events and sudden drops in the RV percentage, and
enjoys excellent generalizability, which is illustrated by its successful
deployment at two unseen intersections.
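The abstract describes a decentralized setup in which each RV decides its own behavior from local observations alone, with no central controller. The sketch below illustrates only that decentralized interface; the observation fields, the threshold rule, and all names are hypothetical stand-ins for what the paper actually learns with multi-agent RL.

```python
from dataclasses import dataclass


@dataclass
class LocalObservation:
    """What a single robot vehicle (RV) can sense near the intersection.

    Field names are illustrative assumptions, not the paper's actual
    observation space.
    """
    own_wait: float          # seconds this RV has been waiting
    own_queue: int           # vehicles queued behind this RV
    conflicting_queue: int   # vehicles queued on conflicting approaches


def rv_policy(obs: LocalObservation, threshold: float = 1.0) -> str:
    """Decentralized Go/Stop decision from local observations only.

    In the paper this mapping is a policy trained with multi-agent RL;
    here a hand-written priority rule stands in, purely to show that
    each RV acts on its own local view.
    """
    # Pressure-style score: favor the movement with more demand,
    # breaking ties toward vehicles that have waited longer.
    score = (obs.own_queue + 0.1 * obs.own_wait) - obs.conflicting_queue
    return "GO" if score >= threshold else "STOP"
```

Because every RV evaluates such a function independently, the scheme scales with the number of vehicles and needs no signal infrastructure, which is what makes it applicable to unsignalized intersections.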
Related papers
- Origin-Destination Pattern Effects on Large-Scale Mixed Traffic Control via Multi-Agent Reinforcement Learning [7.813738581616868]
Large-scale mixed traffic control, involving both human-driven and robotic vehicles, remains underexplored.
We propose a decentralized multi-agent reinforcement learning framework for managing large-scale mixed traffic networks.
We evaluate our approach on a real-world network of 14 intersections in Colorado Springs, Colorado, USA.
arXiv Detail & Related papers (2025-05-19T01:36:05Z)
- Large-Scale Mixed-Traffic and Intersection Control using Multi-agent Reinforcement Learning [9.05328054083722]
This study presents the first attempt to use decentralized multi-agent reinforcement learning for large-scale mixed traffic control.
We evaluate a real-world network in Colorado Springs, CO, USA with 14 intersections.
At 80% RV penetration rate, our method reduces waiting time from 6.17 s to 5.09 s and increases throughput from 454 vehicles per 500 seconds to 493 vehicles per 500 seconds.
arXiv Detail & Related papers (2025-04-07T02:52:39Z)
- Neighbor-Aware Reinforcement Learning for Mixed Traffic Optimization in Large-scale Networks [1.9413548770753521]
This paper proposes a reinforcement learning framework for coordinating mixed traffic across interconnected intersections.
Our key contribution is a neighbor-aware reward mechanism that enables RVs to maintain balanced distribution across the network.
Results show that our method reduces average waiting times by 39.2% compared to the state-of-the-art single-intersection control policy.
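The neighbor-aware reward described above couples each intersection's local efficiency with how evenly RVs are spread across adjacent intersections. The function below is a minimal sketch of one plausible form of such a reward; the actual functional form, the balance metric, and the weight `alpha` are assumptions, not the paper's formula.

```python
import statistics


def neighbor_aware_reward(local_wait: float,
                          local_count: int,
                          neighbor_counts: list[int],
                          alpha: float = 0.5) -> float:
    """Illustrative neighbor-aware reward for one intersection's RVs.

    Combines a local efficiency term (negative waiting time) with a
    balance term penalizing deviation from the mean vehicle count over
    the neighborhood. `alpha` is a hypothetical weighting factor.
    """
    efficiency = -local_wait
    mean_load = statistics.mean(neighbor_counts + [local_count])
    balance = -abs(local_count - mean_load)
    return efficiency + alpha * balance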
arXiv Detail & Related papers (2024-12-17T07:35:56Z)
- Cooperative Cruising: Reinforcement Learning-Based Time-Headway Control for Increased Traffic Efficiency [4.982603129041808]
This paper proposes a novel AI system, the first shown to improve highway traffic efficiency relative to human-like driving.
At the core of our approach is a reinforcement learning based controller that communicates time-headways to automated vehicles.
arXiv Detail & Related papers (2024-12-03T16:13:42Z)
- Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
arXiv Detail & Related papers (2024-03-18T16:13:02Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z)
- HumanLight: Incentivizing Ridesharing via Human-centric Deep Reinforcement Learning in Traffic Signal Control [3.402002554852499]
We present HumanLight, a novel decentralized adaptive traffic signal control algorithm.
Our proposed controller is founded on reinforcement learning with the reward function embedding the transportation-inspired concept of pressure at the person-level.
By rewarding HOV commuters with travel time savings for their efforts to merge into a single ride, HumanLight achieves equitable allocation of green times.
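HumanLight's person-level pressure, as summarized above, replaces the vehicle counts of classic max-pressure control with occupant counts, so a full high-occupancy vehicle weighs more than a solo car. The sketch below is a simplified reading of that idea; the function name and argument shapes are illustrative, not the paper's API.

```python
def person_pressure(incoming_occupancies: list[int],
                    outgoing_occupancies: list[int]) -> int:
    """Pressure of a traffic movement counted in people, not vehicles.

    Each list entry is the occupant count of one queued vehicle on the
    incoming or outgoing side of the movement. A 4-person HOV therefore
    contributes four times the pressure of a solo driver.
    """
    return sum(incoming_occupancies) - sum(outgoing_occupancies)
```

Under this metric, prioritizing movements with high person-pressure naturally grants more green time to approaches carrying HOVs, which is the equity mechanism the summary describes.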
arXiv Detail & Related papers (2023-04-05T17:42:30Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Integrated Decision and Control at Multi-Lane Intersections with Mixed Traffic Flow [6.233422723925688]
This paper develops a learning-based algorithm to deal with complex intersections with mixed traffic flows.
We first consider different velocity models for green and red lights in the training process and use a finite state machine to handle the different modes of traffic-light transitions.
We then design different types of distance constraints for vehicles, traffic lights, pedestrians, and bicycles, respectively, and formulate the constrained optimal control problems.
arXiv Detail & Related papers (2021-08-30T07:55:32Z)
- Courteous Behavior of Automated Vehicles at Unsignalized Intersections via Reinforcement Learning [30.00761722505295]
We propose a novel approach to optimize traffic flow at intersections in mixed traffic situations using deep reinforcement learning.
Our reinforcement learning agent learns a policy for a centralized controller to let connected autonomous vehicles at unsignalized intersections give up their right of way and yield to other vehicles to optimize traffic flow.
arXiv Detail & Related papers (2021-06-11T13:16:48Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free deep reinforcement learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Scalable Multiagent Driving Policies For Reducing Traffic Congestion [32.08636346620938]
Past research has shown that in small scale mixed traffic scenarios with both AVs and human-driven vehicles, a small fraction of AVs executing a controlled multiagent driving policy can mitigate congestion.
In this paper, we scale up existing approaches and develop new multiagent driving policies for AVs in scenarios with greater complexity.
arXiv Detail & Related papers (2021-02-26T21:29:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.