EMVLight: a Multi-agent Reinforcement Learning Framework for an
Emergency Vehicle Decentralized Routing and Traffic Signal Control System
- URL: http://arxiv.org/abs/2206.13441v3
- Date: Wed, 29 Jun 2022 04:00:27 GMT
- Title: EMVLight: a Multi-agent Reinforcement Learning Framework for an
Emergency Vehicle Decentralized Routing and Traffic Signal Control System
- Authors: Haoran Su, Yaofeng D. Zhong, Joseph Y.J. Chow, Biswadip Dey and Li Jin
- Abstract summary: Emergency vehicles (EMVs) play a crucial role in responding to time-critical calls such as medical emergencies and fire outbreaks in urban areas.
Existing methods for EMV dispatch typically optimize routes based on historical traffic-flow data and design traffic signal pre-emption accordingly.
We propose EMVLight, a decentralized reinforcement learning framework for joint dynamic EMV routing and traffic signal pre-emption.
- Score: 4.622745478006317
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emergency vehicles (EMVs) play a crucial role in responding to time-critical
calls such as medical emergencies and fire outbreaks in urban areas. Existing
methods for EMV dispatch typically optimize routes based on historical
traffic-flow data and design traffic signal pre-emption accordingly; however,
we still lack a systematic methodology to address the coupling between EMV
routing and traffic signal control. In this paper, we propose EMVLight, a
decentralized reinforcement learning (RL) framework for joint dynamic EMV
routing and traffic signal pre-emption. We adopt the multi-agent advantage
actor-critic method with policy sharing and a spatial discount factor. This
framework addresses the coupling between EMV navigation and traffic signal
control via an innovative design of multi-class RL agents and a novel
pressure-based reward function. The proposed methodology enables EMVLight to
learn network-level cooperative traffic signal phasing strategies that not only
reduce EMV travel time but also shorten the travel time of non-EMVs.
Simulation-based experiments indicate that EMVLight enables up to a $42.6\%$
reduction in EMV travel time as well as a $23.5\%$ shorter average travel time
compared with existing approaches.
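The "pressure-based reward" in the abstract builds on the notion of intersection pressure from max-pressure traffic signal control. A minimal sketch, assuming the textbook definition of pressure (total queue length on incoming lanes minus outgoing lanes) and a simple exponential form for the spatial discount factor; this is an illustration of the underlying concepts, not EMVLight's exact reward formulation:

```python
def pressure(incoming_queues, outgoing_queues):
    """Pressure of an intersection under max-pressure control:
    total queue length on incoming lanes minus total queue
    length on outgoing lanes. A signal agent that minimizes
    pressure tends to drain congested approaches."""
    return sum(incoming_queues) - sum(outgoing_queues)

def spatially_discounted_reward(neighbor_rewards, distances, alpha=0.9):
    """Aggregate rewards from neighboring intersections, weighting
    each by alpha**distance so that nearby intersections contribute
    more. `alpha` plays the role of the spatial discount factor."""
    return sum(alpha ** d * r for r, d in zip(neighbor_rewards, distances))
```

In a multi-agent actor-critic setup, each intersection agent would receive the negative (spatially discounted) pressure as its reward signal, encouraging network-level cooperation rather than purely local optimization.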
Related papers
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and lower CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can derive policies that, if adopted by 5% of vehicles, may improve the energy efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- A Decentralized Reinforcement Learning Framework for Efficient Passage of Emergency Vehicles [6.748225062396441]
Emergency vehicles (EMVs) play a critical role in a city's response to time-critical events.
Existing approaches to reducing EMV travel time employ route optimization and traffic signal pre-emption.
We introduce EMVLight, a framework for simultaneous dynamic routing and traffic signal control.
arXiv Detail & Related papers (2021-10-30T16:13:48Z)
- EMVLight: A Decentralized Reinforcement Learning Framework for Efficient Passage of Emergency Vehicles [8.91479401538491]
Emergency vehicles (EMVs) play a crucial role in responding to time-critical events such as medical emergencies and fire outbreaks in an urban area.
To reduce the travel time of EMVs, prior work has used route optimization based on historical traffic-flow data and traffic signal pre-emption based on the optimal route.
We propose EMVLight, a decentralized reinforcement learning framework for simultaneous dynamic routing and traffic signal control.
arXiv Detail & Related papers (2021-09-12T04:21:50Z)
- A Deep Reinforcement Learning Approach for Traffic Signal Control Optimization [14.455497228170646]
Inefficient traffic signal control methods may cause numerous problems, such as traffic congestion and wasted energy.
This paper first proposes a multi-agent deep deterministic policy gradient (MADDPG) method by extending the actor-critic policy gradient algorithms.
arXiv Detail & Related papers (2021-07-13T14:11:04Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- MetaVIM: Meta Variationally Intrinsic Motivated Reinforcement Learning for Decentralized Traffic Signal Control [54.162449208797334]
Traffic signal control aims to coordinate traffic signals across intersections to improve the traffic efficiency of a district or a city.
Deep reinforcement learning (RL) has been applied to traffic signal control recently and demonstrated promising performance where each traffic signal is regarded as an agent.
We propose a novel Meta Variationally Intrinsic Motivated (MetaVIM) RL method to learn the decentralized policy for each intersection that considers neighbor information in a latent way.
arXiv Detail & Related papers (2021-01-04T03:06:08Z)
- V2I Connectivity-Based Dynamic Queue-Jump Lane for Emergency Vehicles: A Deep Reinforcement Learning Approach [3.39322931607753]
A main reason behind EMV service delay is the lack of communication and cooperation between vehicles blocking EMVs.
We consider the establishment of dynamic queue jump lanes (DQJLs) based on real-time coordination of connected vehicles.
We propose a deep neural network-based reinforcement learning algorithm that efficiently computes the optimal coordination instructions.
arXiv Detail & Related papers (2020-08-01T20:34:16Z)
- Dynamic Queue-Jump Lane for Emergency Vehicles under Partially Connected Settings: A Multi-Agent Deep Reinforcement Learning Approach [3.39322931607753]
Emergency vehicle (EMV) service is a key function of cities and is exceedingly challenging due to urban traffic congestion.
In this paper, we study the improvement of EMV service under V2X connectivity.
We consider the establishment of dynamic queue jump lanes (DQJLs) based on real-time coordination of connected vehicles in the presence of non-connected human-driven vehicles.
arXiv Detail & Related papers (2020-03-02T16:59:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.