COOR-PLT: A hierarchical control model for coordinating adaptive
platoons of connected and autonomous vehicles at signal-free intersections
based on deep reinforcement learning
- URL: http://arxiv.org/abs/2207.07195v1
- Date: Fri, 1 Jul 2022 02:22:31 GMT
- Title: COOR-PLT: A hierarchical control model for coordinating adaptive
platoons of connected and autonomous vehicles at signal-free intersections
based on deep reinforcement learning
- Authors: Duowei Li (1 and 2), Jianping Wu (1), Feng Zhu (2), Tianyi Chen (2),
Yiik Diew Wong (2) ((1) Department of Civil Engineering, Tsinghua University,
China, (2) School of Civil and Environmental Engineering, Nanyang
Technological University, Singapore)
- Abstract summary: This study proposes a hierarchical control model, named COOR-PLT, to coordinate adaptive CAV platoons at a signal-free intersection.
The model is validated and examined on the Simulation of Urban Mobility (SUMO) simulator.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Platooning and coordination are two implementation strategies that are
frequently proposed for traffic control of connected and autonomous vehicles
(CAVs) at signal-free intersections instead of using conventional traffic
signals. However, few studies have attempted to integrate both strategies to
better facilitate CAV control at signal-free intersections. To this end,
this study proposes a hierarchical control model, named COOR-PLT, to coordinate
adaptive CAV platoons at a signal-free intersection based on deep reinforcement
learning (DRL). COOR-PLT has a two-layer framework. The first layer uses a
centralized control strategy to form adaptive platoons. The optimal size of
each platoon is determined by considering multiple objectives (i.e.,
efficiency, fairness and energy saving). The second layer employs a
decentralized control strategy to coordinate multiple platoons passing through
the intersection. Each platoon is labeled with either coordinated or independent
status, which determines its passing priority. As an efficient DRL algorithm,
the Deep Q-network (DQN) is adopted in the two layers to determine platoon
sizes and passing priorities, respectively. The model is validated and examined
on the Simulation of Urban Mobility (SUMO) simulator.
The simulation results demonstrate that the model is able to: (1) achieve
satisfactory convergence performance; (2) adaptively determine platoon size in
response to varying traffic conditions; and (3) completely avoid deadlocks at
the intersection. Comparisons with other control methods demonstrate the
superiority of its adaptive platooning and DRL-based coordination strategies.
The model also outperforms several state-of-the-art methods in reducing travel
time and fuel consumption under different traffic conditions.
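To make the two-layer design concrete, here is a minimal Python sketch that sets up one DQN per layer: a centralized agent choosing a platoon size, and a decentralized agent choosing a passing status per platoon. All state and action dimensions, the network sizes, the action bounds, and the epsilon value are illustrative assumptions; the paper's actual observation design, reward terms, and architecture are not reproduced here.

```python
# Hypothetical sketch of COOR-PLT's two-layer DQN structure. Dimensions
# and hyperparameters are invented placeholders, not the paper's values.
import random

import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small MLP Q-network mapping a state vector to per-action values."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def epsilon_greedy(qnet: QNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection, as used by standard DQN."""
    if random.random() < epsilon:
        return random.randrange(qnet.net[-1].out_features)
    with torch.no_grad():
        return int(qnet(state).argmax().item())


# Layer 1 (centralized): pick a platoon size from 1..MAX_PLATOON_SIZE; the
# reward would blend efficiency, fairness, and energy-saving terms.
MAX_PLATOON_SIZE = 8                    # assumed upper bound
platoon_agent = QNet(state_dim=16, n_actions=MAX_PLATOON_SIZE)

# Layer 2 (decentralized): each platoon picks a status that sets its
# passing priority: 0 = independent, 1 = coordinated.
priority_agent = QNet(state_dim=12, n_actions=2)

state_l1 = torch.randn(16)              # placeholder intersection observation
platoon_size = epsilon_greedy(platoon_agent, state_l1, epsilon=0.1) + 1

state_l2 = torch.randn(12)              # placeholder platoon observation
status = epsilon_greedy(priority_agent, state_l2, epsilon=0.1)
```

Both agents would be trained with the usual DQN machinery (replay buffer, target network), omitted here for brevity.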
Related papers
- Combat Urban Congestion via Collaboration: Heterogeneous GNN-based MARL
for Coordinated Platooning and Traffic Signal Control
This paper proposes an innovative solution to urban congestion based on heterogeneous graph multi-agent reinforcement learning and traffic theories.
The approach involves: 1) designing platoon and signal control as distinct reinforcement learning agents, each with its own observations, actions, and reward function, to optimize traffic flow; and 2) coordinating the agents by incorporating graph neural networks into multi-agent reinforcement learning to facilitate information exchange at a regional scale (a minimal sketch of such message passing follows this entry).
arXiv Detail & Related papers (2023-10-17T02:46:04Z)
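As a rough illustration of the coordination idea above, the sketch below implements type-aware neighbor aggregation between hypothetical "signal" and "platoon" agents. The feature size, mean aggregation, and per-relation linear maps are all assumptions for illustration, not the paper's actual heterogeneous GNN.

```python
# Rough sketch of type-aware message passing between hypothetical "signal"
# and "platoon" agents; not the paper's architecture.
import torch
import torch.nn as nn


class HeteroMessagePassing(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.from_signal = nn.Linear(dim, dim)   # messages from signal agents
        self.from_platoon = nn.Linear(dim, dim)  # messages from platoon agents
        self.update = nn.Linear(2 * dim, dim)    # combine self and neighbors

    def forward(self, h, neighbors_by_type):
        msgs = []
        for kind, proj in (("signal", self.from_signal),
                           ("platoon", self.from_platoon)):
            neigh = neighbors_by_type.get(kind, [])
            if neigh:
                msgs.append(proj(torch.stack(neigh).mean(dim=0)))
        agg = torch.stack(msgs).sum(dim=0) if msgs else torch.zeros_like(h)
        return torch.relu(self.update(torch.cat([h, agg], dim=-1)))


mp = HeteroMessagePassing()
h_signal = torch.randn(32)               # one signal agent's embedding
out = mp(h_signal, {"platoon": [torch.randn(32), torch.randn(32)]})
```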
- Convergence of Communications, Control, and Machine Learning for Secure
and Autonomous Vehicle Navigation
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z)
- Learning to Sail Dynamic Networks: The MARLIN Reinforcement Learning
Framework for Congestion Control in Tactical Environments
This paper proposes an RL framework that leverages an accurate and parallelizable emulation environment to reenact the conditions of a tactical network.
We evaluate our RL framework by training a MARLIN agent in conditions replicating a bottleneck link transition between a Satellite Communication (SATCOM) link and a UHF Wide Band (UHF) radio link.
arXiv Detail & Related papers (2023-06-27T16:15:15Z)
- Adaptive Hierarchical SpatioTemporal Network for Traffic Forecasting
We propose an Adaptive Hierarchical SpatioTemporal Network (AHSTN) to improve traffic forecasting.
AHSTN exploits the spatial hierarchy and models multi-scale spatial correlations.
Experiments on two real-world datasets show that AHSTN achieves better performance than several strong baselines.
arXiv Detail & Related papers (2023-06-15T14:50:27Z)
- Lyapunov Function Consistent Adaptive Network Signal Control with Back
Pressure and Reinforcement Learning
This study introduces a unified framework using Lyapunov control theory, defining a specific Lyapunov function for each control approach.
Building on insights from Lyapunov theory, the study designs a reward function for Reinforcement Learning (RL)-based network signal control (a hedged sketch of such a drift-based reward follows this entry).
The proposed algorithm is compared with several traditional and RL-based methods under pure passenger-car flow and heterogeneous traffic flow including freight.
arXiv Detail & Related papers (2022-10-06T00:22:02Z)
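One standard way to derive such a reward is to penalize the drift of a queue-based Lyapunov function. The sketch below uses V(s) = the sum of squared lane queue lengths, a common back-pressure-flavored choice; this specific function and reward shape are assumptions, not necessarily the paper's definitions.

```python
# Hedged sketch: Lyapunov-drift-style reward for signal control.
# V(s) = sum of squared lane queue lengths is an assumed choice.
from typing import Sequence


def lyapunov(queues: Sequence[float]) -> float:
    """Quadratic Lyapunov function over lane queue lengths."""
    return sum(q * q for q in queues)


def reward(queues_before: Sequence[float], queues_after: Sequence[float]) -> float:
    """Negative Lyapunov drift: positive when the action shrinks queues."""
    return lyapunov(queues_before) - lyapunov(queues_after)


# Example: serving the longest queue yields a positive reward.
print(reward([4.0, 2.0, 1.0], [1.0, 2.0, 1.0]))  # prints 15.0
```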
- Development of a CAV-based Intersection Control System and Corridor
Level Impact Assessment
This paper presents a signal-free intersection control system for CAVs that combines a pixel reservation algorithm with Deep Reinforcement Learning (DRL) decision-making logic (a minimal reservation sketch follows this entry).
The proposed model reduces delay by 50%, 29%, and 23% in moderate, high, and extreme volume regimes, respectively, compared to another CAV-based control system.
arXiv Detail & Related papers (2022-08-21T21:56:20Z)
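Pixel reservation schemes typically discretize the intersection into grid cells over time and admit a trajectory only if every (cell, time) slot it needs is free. The sketch below shows that conflict check under those assumptions; the grid and time discretization are illustrative, not the paper's exact algorithm.

```python
# Minimal sketch of a pixel-reservation check: each approved trajectory
# reserves (row, col, time_step) slots on an assumed intersection grid.
from typing import Iterable, Set, Tuple

Slot = Tuple[int, int, int]  # (row, col, time_step)


class PixelReservation:
    def __init__(self) -> None:
        self.reserved: Set[Slot] = set()

    def try_reserve(self, trajectory: Iterable[Slot]) -> bool:
        """Atomically reserve all slots a trajectory needs, or none."""
        slots = set(trajectory)
        if slots & self.reserved:   # any conflict -> reject the request
            return False
        self.reserved |= slots
        return True


table = PixelReservation()
print(table.try_reserve([(0, 0, 1), (0, 1, 2)]))  # True: slots were free
print(table.try_reserve([(0, 1, 2), (1, 1, 3)]))  # False: (0, 1, 2) taken
```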
- Unified Automatic Control of Vehicular Systems with Reinforcement
Learning
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Modeling Adaptive Platoon and Reservation Based Autonomous Intersection
Control: A Deep Reinforcement Learning Approach
This study proposes an adaptive platoon-based autonomous intersection control model powered by deep reinforcement learning (DRL).
When tested on a traffic micro-simulator, the proposed model exhibits superior performance in travel efficiency and fuel conservation compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-24T08:50:36Z)
- Deep Reinforcement Learning Aided Platoon Control Relying on V2X
Information
The impact of Vehicle-to-Everything (V2X) communications on platoon control performance is investigated.
The objective is to find the specific set of information that should be shared among the vehicles to construct the most appropriate state space.
More meritorious information is given higher priority in transmission, since including it in the state space is more likely to offset the negative effect of a higher state dimension (a hedged sketch of this merit-ordered selection follows this entry).
arXiv Detail & Related papers (2022-03-28T02:11:54Z)
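The selection idea can be illustrated as a greedy fill of the state vector in order of estimated merit, subject to a dimension budget. The candidate items, merit scores, and budget below are invented placeholders, not the paper's quantities.

```python
# Hedged sketch: build a platoon-control state vector by admitting
# higher-merit V2X items first, under a fixed state-dimension budget.
candidates = [
    # (name, estimated merit, dimensionality) -- placeholder values
    ("leader_acceleration", 0.9, 1),
    ("preceding_gap", 0.8, 1),
    ("preceding_speed", 0.7, 1),
    ("leader_jerk", 0.3, 1),
]
BUDGET = 3  # assumed cap on state dimensions

state_features = []
used = 0
for name, merit, dim in sorted(candidates, key=lambda c: -c[1]):
    if used + dim <= BUDGET:
        state_features.append(name)
        used += dim

print(state_features)
# ['leader_acceleration', 'preceding_gap', 'preceding_speed']
```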
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step (a minimal policy-head sketch follows this entry).
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
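A generic continuous-action policy head for this setting maps an observation to two tanh-squashed outputs, acceleration and steering angle. The observation size and action bounds below are assumptions; the paper's network and training algorithm are not specified here.

```python
# Sketch of a continuous-action policy head for intersection driving:
# one network outputs acceleration and steering angle via tanh squashing.
import torch
import torch.nn as nn

MAX_ACCEL = 3.0   # m/s^2, assumed bound
MAX_STEER = 0.5   # rad, assumed bound


class DrivingActor(nn.Module):
    def __init__(self, obs_dim: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2), nn.Tanh())

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        accel, steer = self.body(obs).unbind(-1)
        return torch.stack([accel * MAX_ACCEL, steer * MAX_STEER], dim=-1)


actor = DrivingActor()
action = actor(torch.randn(32))  # -> tensor([accel, steer])
```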
- Learning Scalable Multi-Agent Coordination by Spatial Differentiation
for Traffic Signal Control
We design a multi-agent coordination framework based on Deep Reinforcement Learning methods for traffic signal control.
Specifically, we propose the Spatial Differentiation method for coordination, which uses temporal-spatial information in the replay buffer to amend the reward of each action (a hedged sketch follows this entry).
arXiv Detail & Related papers (2020-02-27T02:16:00Z)
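One plausible reading of this reward amendment is to mix each agent's stored reward with its neighbors' rewards from the same time step in the replay buffer. The adjacency, reward values, and mixing weight below are invented placeholders, not the paper's formulation.

```python
# Hedged sketch of a spatial-differentiation-style reward amendment:
# each intersection's logged reward is blended with its neighbors'
# rewards from the replay buffer. ALPHA is an assumed weighting.
ALPHA = 0.5

# rewards[agent] = reward logged at one time step; neighbors gives adjacency.
rewards = {"A": 1.0, "B": -0.5, "C": 0.2}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

amended = {
    agent: r + ALPHA * sum(rewards[n] for n in neighbors[agent]) / len(neighbors[agent])
    for agent, r in rewards.items()
}
print(amended)  # e.g. A: 1.0 + 0.5 * (-0.5) = 0.75
```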