Model-free Learning of Corridor Clearance: A Near-term Deployment Perspective
- URL: http://arxiv.org/abs/2312.10339v1
- Date: Sat, 16 Dec 2023 06:08:53 GMT
- Title: Model-free Learning of Corridor Clearance: A Near-term Deployment Perspective
- Authors: Dajiang Suo, Vindula Jayawardana, Cathy Wu
- Abstract summary: An emerging public health application of connected and automated vehicle (CAV) technologies is to reduce response times of emergency medical service (EMS) by indirectly coordinating traffic.
Existing research on this topic often overlooks the impact of EMS vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on real-time traffic signal timing data and queue lengths at intersections, and makes various assumptions about traffic settings when deriving optimal model-based CAV control strategies.
To overcome these challenges and enhance real-world applicability in the near term, we propose a model-free approach employing deep reinforcement learning (DRL) for designing CAV control strategies.
- Score: 5.39179984304986
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An emerging public health application of connected and automated vehicle
(CAV) technologies is to reduce response times of emergency medical service
(EMS) by indirectly coordinating traffic. Therefore, in this work we study CAV-assisted corridor clearance for EMS vehicles from a short-term deployment perspective. Existing research on this topic often overlooks the impact of EMS
vehicle disruptions on regular traffic, assumes 100% CAV penetration, relies on
real-time traffic signal timing data and queue lengths at intersections, and
makes various assumptions about traffic settings when deriving optimal
model-based CAV control strategies. However, these assumptions pose significant
challenges for near-term deployment and limit the real-world applicability of
such methods. To overcome these challenges and enhance real-world applicability in the near term, we propose a model-free approach employing deep reinforcement learning (DRL) for designing CAV control strategies, showing its reduced design overhead and greater scalability and performance compared to
model-based methods. Our qualitative analysis highlights the complexities of
designing scalable EMS corridor clearance controllers for diverse traffic settings, in which the DRL controller provides greater ease of design compared to
model-based methods. In numerical evaluations, the model-free DRL controller
outperforms the model-based counterpart by improving traffic flow and even improving EMS travel times in scenarios in which a single CAV is present. Across the 19 considered settings, the learned DRL controller reduces travel time by 25% in six instances, achieving an average improvement of 9%. These
findings underscore the potential and promise of model-free DRL strategies in
advancing EMS response and traffic flow coordination, with a focus on practical
near-term deployment.
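As a rough illustration of the model-free DRL approach described in the abstract, the sketch below trains a single-CAV acceleration policy with a plain REINFORCE update against a stubbed corridor environment. The CorridorEnv class, state features, reward weights, and network sizes are illustrative assumptions, not the paper's implementation; in practice the environment would wrap a microscopic traffic simulator.

```python
# Minimal sketch (not the paper's implementation): a model-free policy-gradient
# controller for a single CAV assisting EMS corridor clearance.
import numpy as np
import torch
import torch.nn as nn

class CorridorEnv:
    """Toy stand-in environment: state = [cav_speed, gap_to_ems, dist_to_signal, queue_est]."""
    def reset(self):
        self.t = 0
        return np.random.rand(4).astype(np.float32)

    def step(self, accel):
        self.t += 1
        next_state = np.random.rand(4).astype(np.float32)
        # Placeholder reward: a state-dependent term minus a control-effort penalty.
        reward = float(next_state[1]) - 0.1 * abs(accel)
        return next_state, reward, self.t >= 50

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))  # outputs mean, log_std
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def select_action(state):
    mean, log_std = policy(torch.as_tensor(state)).unbind(-1)
    dist = torch.distributions.Normal(mean, log_std.exp())
    action = dist.sample()
    return float(action.clamp(-3.0, 3.0)), dist.log_prob(action)

env = CorridorEnv()
for episode in range(200):                      # REINFORCE training loop
    state, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        accel, logp = select_action(state)
        state, reward, done = env.step(accel)
        log_probs.append(logp)
        rewards.append(reward)
    returns = torch.tensor(np.cumsum(rewards[::-1])[::-1].copy(), dtype=torch.float32)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```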
Related papers
- Improving Traffic Flow Predictions with SGCN-LSTM: A Hybrid Model for Spatial and Temporal Dependencies [55.2480439325792]
This paper introduces the Signal-Enhanced Graph Convolutional Network Long Short Term Memory (SGCN-LSTM) model for predicting traffic speeds across road networks.
Experiments on the PEMS-BAY road network traffic dataset demonstrate the SGCN-LSTM model's effectiveness.
arXiv Detail & Related papers (2024-11-01T00:37:00Z)
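The SGCN-LSTM entry above only names the architecture; as a rough, hypothetical sketch (not the authors' model), one way to combine a graph convolution over the road network with an LSTM over time for speed forecasting is:

```python
# Illustrative graph-convolution + LSTM hybrid for traffic speed forecasting.
# The adjacency matrix, feature shapes, and layer sizes are assumptions.
import torch
import torch.nn as nn

class GraphConvLSTM(nn.Module):
    def __init__(self, num_nodes, in_feats, hidden, horizon):
        super().__init__()
        self.theta = nn.Linear(in_feats, hidden)              # per-node feature transform
        self.lstm = nn.LSTM(num_nodes * hidden, 128, batch_first=True)
        self.head = nn.Linear(128, num_nodes * horizon)
        self.num_nodes, self.horizon = num_nodes, horizon

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, feats); a_hat: normalized adjacency (nodes, nodes)
        h = torch.relu(a_hat @ self.theta(x))                 # spatial mixing per time step
        h = h.flatten(start_dim=2)                            # (batch, time, nodes * hidden)
        out, _ = self.lstm(h)                                 # temporal dependencies
        pred = self.head(out[:, -1])                          # last step -> multi-step forecast
        return pred.view(-1, self.num_nodes, self.horizon)

# Example usage with random data standing in for PEMS-BAY style inputs.
nodes, feats, T, horizon = 8, 2, 12, 3
model = GraphConvLSTM(nodes, feats, hidden=16, horizon=horizon)
a_hat = torch.softmax(torch.rand(nodes, nodes), dim=-1)       # placeholder normalized adjacency
x = torch.rand(4, T, nodes, feats)
print(model(x, a_hat).shape)                                  # torch.Size([4, 8, 3])
```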
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Generalizing Cooperative Eco-driving via Multi-residual Task Learning [6.864745785996583]
Multi-residual Task Learning (MRTL) is a generic learning framework based on multi-task learning.
MRTL decomposes control into nominal components, which are effectively handled by conventional control methods, and residual terms.
We employ MRTL for fleet-level emission reduction in mixed traffic using autonomous vehicles as a means of system control.
arXiv Detail & Related papers (2024-03-07T05:25:34Z)
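The residual decomposition described in the MRTL entry above can be illustrated with a toy sketch: a conventional car-following law supplies the nominal control and a small network adds a learned residual correction. The IDM-style nominal law, input features, and network size are illustrative assumptions rather than the paper's setup.

```python
# Toy residual control sketch: applied control = nominal controller + learned residual.
import torch
import torch.nn as nn

def nominal_accel(speed, gap, lead_speed, v0=15.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model style car-following acceleration (nominal component)."""
    s_star = s0 + speed * T + speed * (speed - lead_speed) / (2 * (a * b) ** 0.5)
    return a * (1 - (speed / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

residual_policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))  # learned residual

def control(speed, gap, lead_speed):
    u_nominal = nominal_accel(speed, gap, lead_speed)
    state = torch.tensor([speed, gap, lead_speed], dtype=torch.float32)
    u_residual = residual_policy(state).item()
    return u_nominal + u_residual            # residual term corrects the nominal law

print(control(speed=10.0, gap=25.0, lead_speed=9.0))
```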
- MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
arXiv Detail & Related papers (2024-01-06T21:04:31Z)
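Model-based value expansion, mentioned in the MOTO entry above, can be sketched generically as a short model rollout followed by a critic bootstrap; every callable below is an illustrative stand-in rather than the paper's component.

```python
# Generic model-based value expansion sketch: roll the learned model forward a few
# steps, accumulate predicted rewards, then bootstrap with the learned value function.
def expanded_value(dynamics, reward_fn, policy, value_fn, state, horizon=3, gamma=0.99):
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        action = policy(state)
        total += discount * reward_fn(state, action)   # model-predicted reward
        state = dynamics(state, action)                # model-predicted next state
        discount *= gamma
    return total + discount * value_fn(state)          # bootstrap with the critic

# Toy usage with scalar stand-ins for the learned components.
print(expanded_value(dynamics=lambda s, a: 0.9 * s + a,
                     reward_fn=lambda s, a: -abs(s),
                     policy=lambda s: -0.1 * s,
                     value_fn=lambda s: -10.0 * abs(s),
                     state=1.0))
```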
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimal problem as a function approximator for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
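The entry above describes using the MPC optimal problem as a function approximator for RL; a heavily simplified sketch of that idea (scalar state, known linear model, parametric quadratic stage cost, none of which come from the paper) might look like:

```python
# Toy sketch of an MPC-based Q-function approximator: Q_theta(s, a0) is the
# negative optimal cost of a short-horizon problem with the first action fixed.
import numpy as np
from scipy.optimize import minimize

def q_mpc(theta, s0, a0, horizon=5):
    def rollout_cost(actions):
        a_seq = np.concatenate([[a0], actions])         # first action fixed to a0
        s, cost = s0, 0.0
        for a in a_seq:
            cost += theta[0] * s**2 + theta[1] * a**2   # parametric stage cost
            s = 0.9 * s + 0.5 * a                       # simple known linear model
        return cost
    result = minimize(rollout_cost, np.zeros(horizon - 1))  # optimize remaining actions
    return -result.fun                                       # reward = negative cost

print(q_mpc(theta=np.array([1.0, 0.1]), s0=2.0, a0=-0.5))
```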
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high-performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
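The proximal policy optimization (PPO) method named in the entry above centers on a clipped surrogate objective; the generic sketch below shows that loss (standard PPO, not the paper's M2M traffic controller).

```python
# Standard PPO clipped surrogate loss, shown generically.
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)                     # pi_new(a|s) / pi_old(a|s)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Toy usage with random log-probabilities and advantages.
print(ppo_clip_loss(torch.randn(32), torch.randn(32), torch.randn(32)))
```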
- A Deep Reinforcement Learning Approach for Traffic Signal Control Optimization [14.455497228170646]
Inefficient traffic signal control methods may cause numerous problems, such as traffic congestion and waste of energy.
This paper first proposes a multi-agent deep deterministic policy gradient (MADDPG) method by extending actor-critic policy gradient algorithms.
arXiv Detail & Related papers (2021-07-13T14:11:04Z)
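The MADDPG method in the entry above follows the centralized-critic, decentralized-actor pattern; the sketch below illustrates that structure with arbitrary shapes and random inputs, not the paper's configuration.

```python
# MADDPG-style structure: decentralized actors, centralized critics that see the
# joint observation and joint action of all agents.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 8, 2

actors = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
          for _ in range(n_agents)]
critics = [nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
           for _ in range(n_agents)]

obs = torch.rand(n_agents, obs_dim)                       # one observation per agent
acts = torch.stack([actor(o) for actor, o in zip(actors, obs)])
joint = torch.cat([obs.flatten(), acts.flatten()])        # centralized critic input
q_values = [critic(joint) for critic in critics]          # per-agent centralized value
print([float(q) for q in q_values])
```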
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- Leveraging the Capabilities of Connected and Autonomous Vehicles and Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion [2.0010674945048468]
We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even at a CAV share of corridor traffic as low as 10%, CAVs can significantly mitigate bottlenecks in highway traffic.
arXiv Detail & Related papers (2020-10-12T03:52:10Z)