EMVLight: A Decentralized Reinforcement Learning Framework for
Efficient Passage of Emergency Vehicles
- URL: http://arxiv.org/abs/2109.05429v1
- Date: Sun, 12 Sep 2021 04:21:50 GMT
- Title: EMVLight: A Decentralized Reinforcement Learning Framework for
Efficient Passage of Emergency Vehicles
- Authors: Haoran Su, Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
- Abstract summary: Emergency vehicles (EMVs) play a crucial role in responding to time-critical events such as medical emergencies and fire outbreaks in an urban area.
To reduce the travel time of EMVs, prior work has used route optimization based on historical traffic-flow data and traffic signal pre-emption based on the optimal route.
We propose EMVLight, a decentralized reinforcement learning framework for simultaneous dynamic routing and traffic signal control.
- Score: 8.91479401538491
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emergency vehicles (EMVs) play a crucial role in responding to time-critical
events such as medical emergencies and fire outbreaks in an urban area. The
less time EMVs spend traveling through traffic, the more likely they are to
help save people's lives and reduce property loss. To reduce the travel time of
EMVs, prior work has used route optimization based on historical traffic-flow
data and traffic signal pre-emption based on the optimal route. However,
traffic signal pre-emption dynamically changes the traffic flow which, in turn,
modifies the optimal route of an EMV. In addition, traffic signal pre-emption
practices usually lead to significant disturbances in traffic flow and
subsequently increase the travel time for non-EMVs. In this paper, we propose
EMVLight, a decentralized reinforcement learning (RL) framework for
simultaneous dynamic routing and traffic signal control. EMVLight extends
Dijkstra's algorithm to efficiently update the optimal route for an EMV in
real time as it travels through the traffic network. The decentralized RL
agents learn network-level cooperative traffic signal phase strategies that not
only reduce EMV travel time but also reduce the average travel time of non-EMVs
in the network. This benefit has been demonstrated through comprehensive
experiments with synthetic and real-world maps. These experiments show that
EMVLight outperforms benchmark transportation engineering techniques and
existing RL-based signal control methods.
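The abstract states that EMVLight extends Dijkstra's algorithm to keep the EMV's optimal route current as traffic conditions change. The paper's exact extension is not detailed here, so the following is a minimal sketch of the underlying idea only: re-running a shortest-path computation from the EMV's current intersection whenever travel-time estimates are updated. The function names (`dijkstra`, `replan_route`) and the adjacency-list graph representation are illustrative assumptions, not the authors' implementation.

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from `source` over a weighted digraph.

    `graph` maps node -> list of (neighbor, travel_time) pairs.
    Returns (dist, prev) so routes can be reconstructed from `prev`.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale priority-queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def replan_route(graph, current_node, destination):
    """Re-run Dijkstra from the EMV's current intersection using the
    latest travel-time estimates; return the node sequence, or None
    if the destination is unreachable under current estimates."""
    dist, prev = dijkstra(graph, current_node)
    if destination not in dist:
        return None
    path = [destination]
    while path[-1] != current_node:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical 4-intersection network: edge weights are travel times.
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 2.0)], "C": [("D", 1.0)]}
print(replan_route(graph, "A", "D"))  # initially routes via B
graph["B"] = [("D", 9.0)]             # congestion on B->D is reported
print(replan_route(graph, "A", "D"))  # replanning now routes via C
```

Recomputing from the vehicle's current position (rather than the original origin) is what lets the route react to the traffic-flow changes that signal pre-emption itself induces, which is the feedback loop the abstract highlights.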
Related papers
- A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Traffic Smoothing Controllers for Autonomous Vehicles Using Deep
Reinforcement Learning and Real-World Trajectory Data [45.13152172664334]
We design traffic-smoothing cruise controllers that can be deployed onto autonomous vehicles.
We leverage real-world trajectory data from the I-24 highway in Tennessee.
We show that at a low 4% autonomous vehicle penetration rate, we achieve significant fuel savings of over 15% on trajectories exhibiting many stop-and-go waves.
arXiv Detail & Related papers (2024-01-18T00:50:41Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Deep Reinforcement Learning to Maximize Arterial Usage during Extreme
Congestion [4.934817254755007]
We propose a Deep Reinforcement Learning (DRL) approach to reduce traffic congestion on multi-lane freeways during extreme congestion.
The agent is trained to learn adaptive detouring strategies for congested freeway traffic.
The agent improves average traffic speed by 21% compared to taking no action during severe congestion.
arXiv Detail & Related papers (2023-05-16T16:53:27Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- EMVLight: a Multi-agent Reinforcement Learning Framework for an
Emergency Vehicle Decentralized Routing and Traffic Signal Control System [4.622745478006317]
Emergency vehicles (EMVs) play a crucial role in responding to time-critical calls such as medical emergencies and fire outbreaks in urban areas.
Existing methods for EMV dispatch typically optimize routes based on historical traffic-flow data and design traffic signal pre-emption accordingly.
We propose EMVLight, a decentralized reinforcement learning framework for joint dynamic EMV routing and traffic signal pre-emption.
arXiv Detail & Related papers (2022-06-27T16:46:20Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- A Decentralized Reinforcement Learning Framework for Efficient Passage
of Emergency Vehicles [6.748225062396441]
Emergency vehicles (EMVs) play a critical role in a city's response to time-critical events.
The existing approaches to reduce EMV travel time employ route optimization and traffic signal pre-emption.
We introduce EMVLight, a framework for simultaneous dynamic routing and traffic signal control.
arXiv Detail & Related papers (2021-10-30T16:13:48Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained measurements is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- PDLight: A Deep Reinforcement Learning Traffic Light Control Algorithm
with Pressure and Dynamic Light Duration [5.585321463602587]
We propose PDLight, a deep reinforcement learning (DRL) traffic light control algorithm with a novel reward, PRCOL (Pressure with Remaining Capacity of Outgoing Lane).
Serving as an improvement over the pressure metric used in existing traffic control algorithms, PRCOL considers not only the number of vehicles on the incoming lane but also the remaining capacity of the outgoing lane.
arXiv Detail & Related papers (2020-09-29T01:07:49Z)
- DRLE: Decentralized Reinforcement Learning at the Edge for Traffic Light
Control in the IoV [19.520162113896635]
Internet of Vehicles (IoV) enables real-time data exchange among vehicles and roadside units.
We propose DRLE, a Decentralized Reinforcement Learning scheme at the Edge for traffic light control in the IoV.
DRLE operates within the coverage of the edge servers and uses aggregated data from neighboring edge servers to provide city-scale traffic light control.
arXiv Detail & Related papers (2020-09-03T08:09:04Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.