Cooperative Highway Work Zone Merge Control based on Reinforcement
Learning in A Connected and Automated Environment
- URL: http://arxiv.org/abs/2001.08581v1
- Date: Tue, 21 Jan 2020 21:39:44 GMT
- Title: Cooperative Highway Work Zone Merge Control based on Reinforcement
Learning in A Connected and Automated Environment
- Authors: Tianzhu Ren, Yuanchang Xie, Liming Jiang
- Abstract summary: This paper proposes and evaluates a novel highway work zone merge control strategy based on cooperative driving behavior enabled by artificial intelligence.
The proposed method assumes that all vehicles are fully automated, connected and cooperative.
The results show that this cooperative RL-based merge control significantly outperforms popular strategies such as late merge and early merge in terms of both mobility and safety measures.
- Score: 6.402634424631123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the aging infrastructure and the anticipated growing number of highway
work zones in the United States, it is important to investigate work zone merge
control, which is critical for improving work zone safety and capacity. This
paper proposes and evaluates a novel highway work zone merge control strategy
based on cooperative driving behavior enabled by artificial intelligence. The
proposed method assumes that all vehicles are fully automated, connected and
cooperative. It inserts two metering zones in the open lane to make space for
merging vehicles in the closed lane. In addition, each vehicle in the closed
lane learns how to optimally adjust its longitudinal position to find a safe
gap in the open lane using an off-policy soft actor critic (SAC) reinforcement
learning (RL) algorithm, considering the traffic conditions in its surroundings.
The learning results are captured in convolutional neural networks and used to
control individual vehicles in the testing phase. By adding the metering zones
and taking the locations, speeds, and accelerations of surrounding vehicles
into account, cooperation among vehicles is implicitly considered. This
RL-based model is trained and evaluated using a microscopic traffic simulator.
The results show that this cooperative RL-based merge control significantly
outperforms popular strategies such as late merge and early merge in terms of
both mobility and safety measures.
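To make the control loop described in the abstract concrete, the following is a minimal sketch, assuming a PyTorch implementation, of an off-policy SAC agent whose observation is a small grid of surrounding vehicles' positions, speeds, and accelerations encoded by a CNN, and whose action is a bounded longitudinal acceleration. The grid shape, network sizes, action bound, and hyperparameters are illustrative assumptions, not the authors' released code; the replay buffer, metering-zone logic, reward design, and simulator interface are omitted.

```python
# Illustrative SAC sketch (assumed PyTorch implementation, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = (3, 11, 3)   # assumed observation: (pos/speed/accel channels) x cells x lanes
ACT_LIMIT = 3.0     # assumed longitudinal acceleration bound [m/s^2]

class Encoder(nn.Module):
    """CNN that summarizes the surrounding-traffic grid into a feature vector."""
    def __init__(self, feat=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(GRID[0], 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.Linear(32 * GRID[1] * GRID[2], feat)

    def forward(self, x):
        return F.relu(self.fc(self.conv(x)))

class Actor(nn.Module):
    """Squashed-Gaussian policy producing a bounded acceleration command."""
    def __init__(self, feat=64):
        super().__init__()
        self.enc = Encoder(feat)
        self.mu = nn.Linear(feat, 1)
        self.log_std = nn.Linear(feat, 1)

    def forward(self, obs):
        h = self.enc(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        u = dist.rsample()                        # reparameterized sample
        a = torch.tanh(u)                         # squash to (-1, 1)
        # Log-probability with the tanh change-of-variables correction.
        logp = (dist.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)).sum(-1, keepdim=True)
        return ACT_LIMIT * a, logp

class Critic(nn.Module):
    """Q(s, a) head on top of its own CNN encoder."""
    def __init__(self, feat=64):
        super().__init__()
        self.enc = Encoder(feat)
        self.q = nn.Sequential(nn.Linear(feat + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs, act):
        return self.q(torch.cat([self.enc(obs), act / ACT_LIMIT], dim=-1))

def sac_update(batch, actor, q1, q2, q1_targ, q2_targ, opt_pi, opt_q,
               gamma=0.99, alpha=0.2, tau=0.005):
    """One soft actor-critic step; batch tensors rew and done have shape (B, 1)."""
    obs, act, rew, nxt, done = batch
    with torch.no_grad():
        a2, logp2 = actor(nxt)
        q_min = torch.min(q1_targ(nxt, a2), q2_targ(nxt, a2))
        backup = rew + gamma * (1 - done) * (q_min - alpha * logp2)
    loss_q = F.mse_loss(q1(obs, act), backup) + F.mse_loss(q2(obs, act), backup)
    opt_q.zero_grad(); loss_q.backward(); opt_q.step()

    a, logp = actor(obs)
    loss_pi = (alpha * logp - torch.min(q1(obs, a), q2(obs, a))).mean()
    opt_pi.zero_grad(); loss_pi.backward(); opt_pi.step()

    # Polyak-average the target critics toward the online critics.
    for targ, src in ((q1_targ, q1), (q2_targ, q2)):
        for p_t, p in zip(targ.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```

At test time, each closed-lane vehicle would be driven by the trained Actor alone, consistent with the paper's train-then-deploy description.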
Related papers
- A Systematic Study of Multi-Agent Deep Reinforcement Learning for Safe and Robust Autonomous Highway Ramp Entry [0.0]
We study a highway ramp function that controls the vehicles' forward-moving actions to minimize collisions with the stream of highway traffic into which a merging (ego) vehicle enters.
We take a game-theoretic multi-agent (MA) approach to this problem and study the use of controllers based on deep reinforcement learning (DRL).
The work presented in this paper extends existing work by studying the interaction of more than two vehicles (agents) and does so by systematically expanding the road scene with additional traffic and ego vehicles.
arXiv Detail & Related papers (2024-11-21T21:23:46Z) - Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - A Conflicts-free, Speed-lossless KAN-based Reinforcement Learning Decision System for Interactive Driving in Roundabouts [17.434924472015812]
This paper introduces a learning-based algorithm tailored to foster safe and efficient driving behaviors in roundabouts.
The proposed algorithm employs a deep Q-learning network to learn safe and efficient driving strategies in complex multi-vehicle roundabouts.
The results show that our proposed system consistently achieves safe and efficient driving whilst maintaining a stable training process.
arXiv Detail & Related papers (2024-08-15T16:10:25Z) - DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic
Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
Such wirelessly connected ATSCs expand the cyber-attack surface and increase their vulnerability to various cyber-attack modes.
One such mode is a 'Sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of Sybil vehicle injection to create congestion for one or more approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Real-time Cooperative Vehicle Coordination at Unsignalized Road
Intersections [7.860567520771493]
Cooperative coordination at unsignalized road intersections aims to improve driving safety and traffic throughput for connected and automated vehicles.
We formulate the problem as a Markov Decision Process (MDP) and tackle it with a model-free, Twin Delayed Deep Deterministic Policy Gradient (TD3)-based strategy in the deep reinforcement learning framework; a generic sketch of the TD3 update appears after this list.
We show that the proposed strategy could achieve near-optimal performance in sub-static coordination scenarios and significantly improve control in realistic continuous traffic flow.
arXiv Detail & Related papers (2022-05-03T02:56:02Z) - Decentralized Cooperative Lane Changing at Freeway Weaving Areas Using
Multi-Agent Deep Reinforcement Learning [1.6752182911522522]
Frequent lane changes during congestion at freeway bottlenecks such as merge and weaving areas further reduce roadway capacity.
The emergence of deep reinforcement learning (RL) and connected and automated vehicle technology provides a possible solution to improve mobility and energy efficiency at freeway bottlenecks through cooperative lane changing.
In this study, a decentralized cooperative lane-changing controller was developed using a multi-agent deep RL paradigm.
The results of this study show that cooperative lane changing enabled by multi-agent deep RL yields superior performance to human drivers in terms of traffic throughput, vehicle speed, number of stops per vehicle, vehicle fuel efficiency, and emissions.
arXiv Detail & Related papers (2021-10-05T18:29:13Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z) - Vehicular Cooperative Perception Through Action Branching and Federated
Reinforcement Learning [101.64598586454571]
A novel framework is proposed to allow reinforcement learning-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs).
A federated RL approach is introduced in order to speed up the training process across vehicles.
Results show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
arXiv Detail & Related papers (2020-12-07T02:09:15Z) - A Multi-intersection Vehicular Cooperative Control based on
End-Edge-Cloud Computing [25.05518638792962]
We propose a Multi-intersection Vehicular Cooperative Control (MiVeCC) to enable cooperation among vehicles in a large area with multiple intersections.
Firstly, a vehicular end-edge-cloud computing framework is proposed to facilitate end-edge-cloud vertical cooperation and horizontal cooperation among vehicles.
To deal with high-density traffic, vehicle selection methods are proposed to reduce the state space and accelerate algorithm convergence without performance degradation.
arXiv Detail & Related papers (2020-12-01T14:15:14Z)
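As referenced in the unsignalized-intersection coordination entry above, several related works rely on a TD3-based strategy. Below is a minimal, generic sketch of the TD3 update (twin critics, target-policy smoothing, delayed actor and target updates), assuming flat state/action vectors and illustrative network sizes and hyperparameters; it is not code from any of the cited papers, and the replay buffer and environment interface are omitted.

```python
# Generic TD3 update sketch (illustrative assumptions, not the cited papers' code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(sizes):
    """Small fully connected network with ReLU hidden activations."""
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    layers += [nn.Linear(sizes[-2], sizes[-1])]
    return nn.Sequential(*layers)

class TD3:
    def __init__(self, obs_dim, act_dim, act_limit=1.0):
        self.act_limit = act_limit
        self.actor = mlp([obs_dim, 128, 128, act_dim])
        self.q1 = mlp([obs_dim + act_dim, 128, 128, 1])
        self.q2 = mlp([obs_dim + act_dim, 128, 128, 1])
        self.actor_t = copy.deepcopy(self.actor)
        self.q1_t, self.q2_t = copy.deepcopy(self.q1), copy.deepcopy(self.q2)
        self.opt_pi = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
        self.opt_q = torch.optim.Adam(
            list(self.q1.parameters()) + list(self.q2.parameters()), lr=3e-4)
        self.n_updates = 0

    def update(self, obs, act, rew, nxt, done,
               gamma=0.99, noise=0.2, clip=0.5, tau=0.005, policy_delay=2):
        """One TD3 step on a replay batch; rew and done have shape (B, 1)."""
        with torch.no_grad():
            # Target-policy smoothing: perturb the target action with clipped noise.
            eps = (torch.randn_like(act) * noise).clamp(-clip, clip)
            a2 = (torch.tanh(self.actor_t(nxt)) * self.act_limit + eps).clamp(
                -self.act_limit, self.act_limit)
            q_t = torch.min(self.q1_t(torch.cat([nxt, a2], -1)),
                            self.q2_t(torch.cat([nxt, a2], -1)))
            backup = rew + gamma * (1 - done) * q_t
        sa = torch.cat([obs, act], -1)
        loss_q = F.mse_loss(self.q1(sa), backup) + F.mse_loss(self.q2(sa), backup)
        self.opt_q.zero_grad(); loss_q.backward(); self.opt_q.step()

        self.n_updates += 1
        if self.n_updates % policy_delay == 0:   # delayed actor and target updates
            a = torch.tanh(self.actor(obs)) * self.act_limit
            loss_pi = -self.q1(torch.cat([obs, a], -1)).mean()
            self.opt_pi.zero_grad(); loss_pi.backward(); self.opt_pi.step()
            for t, s in ((self.actor_t, self.actor),
                         (self.q1_t, self.q1), (self.q2_t, self.q2)):
                for p_t, p in zip(t.parameters(), s.parameters()):
                    p_t.data.mul_(1 - tau).add_(tau * p.data)
```

In a cooperative setting, one such agent could be trained per vehicle, or a single agent could act on a joint state, against a traffic simulator.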
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences arising from its use.