Development of a CAV-based Intersection Control System and Corridor
Level Impact Assessment
- URL: http://arxiv.org/abs/2208.09973v1
- Date: Sun, 21 Aug 2022 21:56:20 GMT
- Title: Development of a CAV-based Intersection Control System and Corridor
Level Impact Assessment
- Authors: Ardeshir Mirbakhsh, Joyoung Lee, Dejan Besenski
- Abstract summary: This paper presents a signal-free intersection control system for CAVs that combines a pixel reservation algorithm with Deep Reinforcement Learning (DRL) decision-making logic.
The proposed model reduces delay by 50%, 29%, and 23% in the moderate, high, and extreme volume regimes, respectively, compared to the LQF CAV-based control system.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a signal-free intersection control system for
CAVs that combines a pixel reservation algorithm with Deep Reinforcement
Learning (DRL) decision-making logic, followed by a corridor-level impact
assessment of the proposed model. The pixel reservation algorithm detects
potentially colliding maneuvers, and the DRL logic optimizes vehicles'
movements to avoid collisions and minimize overall delay at the intersection.
The proposed control system is called the Decentralized Sparse Coordination
System (DSCLS), since each vehicle has its own control logic and interacts
with other vehicles only in coordinated states. Owing to the chain impact of
taking random actions during the DRL training course, the trained model can
handle unprecedented volume conditions, which pose the main challenge in
intersection management. The performance of the developed model is compared
with conventional and CAV-based control systems, including fixed traffic
lights, actuated traffic lights, and the Longest Queue First (LQF) control
system, under three volume regimes in a corridor of four intersections in
VISSIM software. The simulation results reveal that the proposed model
reduces delay by 50%, 29%, and 23% in the moderate, high, and extreme volume
regimes, respectively, compared to the LQF CAV-based control system.
Improvements in travel time, fuel consumption, emissions, and Surrogate
Safety Measures (SSM) are also noticeable.
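The pixel-reservation idea described above can be sketched as a grid-based conflict check: the intersection area is discretized into "pixels", each vehicle requests the (timestep, pixel) cells its planned trajectory occupies, and a maneuver is flagged as potentially colliding when it requests a cell another vehicle has already reserved. The class, cell encoding, and trajectory format below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a pixel-reservation conflict check (illustrative
# assumptions only; not the paper's implementation).
from typing import Iterable

# A cell is (timestep, row, col): one pixel of the intersection grid
# occupied at one discrete timestep.
Cell = tuple[int, int, int]

class PixelReservationTable:
    def __init__(self) -> None:
        # Maps each reserved cell to the id of the vehicle holding it.
        self.reserved: dict[Cell, str] = {}

    def conflicts(self, vehicle_id: str,
                  trajectory: Iterable[Cell]) -> list[str]:
        """Return ids of other vehicles whose reservations this trajectory hits."""
        hits = []
        for cell in trajectory:
            holder = self.reserved.get(cell)
            if holder is not None and holder != vehicle_id:
                hits.append(holder)
        return hits

    def reserve(self, vehicle_id: str,
                trajectory: Iterable[Cell]) -> bool:
        """Reserve all cells of the trajectory if none conflict; True on success."""
        cells = list(trajectory)
        if self.conflicts(vehicle_id, cells):
            return False
        for cell in cells:
            self.reserved[cell] = vehicle_id
        return True

table = PixelReservationTable()
# Vehicle A crosses pixel (0,0) at timestep 0 and pixel (0,1) at timestep 1.
assert table.reserve("A", [(0, 0, 0), (1, 0, 1)])
# Vehicle B wants pixel (0,1) at timestep 1 too: a potential collision.
assert table.conflicts("B", [(1, 0, 1)]) == ["A"]
assert not table.reserve("B", [(1, 0, 1)])
```

In the paper's decentralized setting, a detected conflict would then be handed to the DRL logic, which adjusts vehicle movements rather than simply rejecting the reservation as this sketch does.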
Related papers
- A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Reinforcement Learning with Model Predictive Control for Highway Ramp
Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimal problem as a function approximation for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Lyapunov Function Consistent Adaptive Network Signal Control with Back
Pressure and Reinforcement Learning [9.797994846439527]
This study introduces a unified framework based on Lyapunov control theory, defining a specific Lyapunov function for each control approach.
Building on insights from Lyapunov theory, the study designs a reward function for Reinforcement Learning (RL)-based network signal control.
The proposed algorithm is compared with several traditional and RL-based methods under pure passenger-car flow and heterogeneous traffic flow including freight.
arXiv Detail & Related papers (2022-10-06T00:22:02Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement
Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Proximal Policy Optimization Learning based Control of Congested Freeway
Traffic [3.816579519746557]
This study proposes a delay-compensated feedback controller based on proximal policy optimization (PPO) reinforcement learning.
For a delay-free system, the PPO control has faster convergence rate and less control effort than the Lyapunov control.
arXiv Detail & Related papers (2022-04-12T08:36:21Z)
- Deep Reinforcement Learning Aided Platoon Control Relying on V2X
Information [78.18186960475974]
The impact of Vehicle-to-Everything (V2X) communications on platoon control performance is investigated.
Our objective is to find the specific set of information that should be shared among the vehicles for the construction of the most appropriate state space.
More meritorious information is given higher priority in transmission, since including it in the state space is more likely to offset the negative effect of a higher state dimensionality.
arXiv Detail & Related papers (2022-03-28T02:11:54Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, bench-marking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
- Deep Reinforcement Q-Learning for Intelligent Traffic Signal Control
with Partial Detection [0.0]
Intelligent traffic signal controllers that apply DQN algorithms to traffic light policy optimization efficiently reduce traffic congestion by adjusting signals to real-time traffic.
Most proposals in the literature, however, assume that all vehicles at the intersection are detected, which is an unrealistic scenario.
We propose a deep reinforcement Q-learning model to optimize traffic signal control at an isolated intersection, in a partially observable environment with connected vehicles.
arXiv Detail & Related papers (2021-09-29T10:42:33Z)
- Data-Driven Intersection Management Solutions for Mixed Traffic of
Human-Driven and Connected and Automated Vehicles [0.0]
This dissertation proposes two solutions for urban traffic control in the presence of connected and automated vehicles.
First, a centralized platoon-based controller is proposed for the cooperative intersection management problem.
Second, a data-driven approach is proposed for adaptive signal control in the presence of connected vehicles.
arXiv Detail & Related papers (2020-12-10T01:44:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.