Traffic Management of Autonomous Vehicles using Policy Based Deep
Reinforcement Learning and Intelligent Routing
- URL: http://arxiv.org/abs/2206.14608v1
- Date: Tue, 28 Jun 2022 02:46:20 GMT
- Title: Traffic Management of Autonomous Vehicles using Policy Based Deep
Reinforcement Learning and Intelligent Routing
- Authors: Anum Mushtaq, Irfan ul Haq, Muhammad Azeem Sarwar, Asifullah Khan,
Omair Shafiq
- Abstract summary: We propose a DRL-based signal control system that adjusts traffic signals according to the current congestion situation at intersections.
To deal with congestion on the roads behind the intersection, we use a re-routing technique to load-balance vehicles across the road network.
- Score: 0.26249027950824505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Reinforcement Learning (DRL) uses diverse, unstructured data and makes
RL capable of learning complex policies in high-dimensional environments.
Intelligent Transportation Systems (ITS) based on Autonomous Vehicles (AVs)
offer an excellent playground for policy-based DRL. Deep learning
architectures address the computational challenges of traditional algorithms
while supporting the real-world adoption and deployment of AVs. One of the main
challenges in AV implementation is that AVs can worsen traffic congestion if
they are not reliably and efficiently managed. Considering each vehicle's
holistic effect and using efficient and reliable techniques could genuinely
help optimise traffic flow and reduce congestion. For this purpose, we propose
an intelligent traffic control system that deals with complex traffic
congestion scenarios at and behind intersections. We propose a DRL-based
signal control system that dynamically adjusts traffic signals according to
the current congestion situation at intersections. To deal with congestion on
the roads behind the intersection, we use a re-routing technique to
load-balance vehicles across the road network. To realise the full benefits of
the proposed approach, we break down data silos and combine the data coming
from sensors, detectors, vehicles and roads to achieve sustainable results. We
use the SUMO micro-simulator for our simulations. The significance of the
proposed approach is evident from the results.
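The abstract describes two coordinated mechanisms, signal control at the intersection and re-routing behind it, both driven by live congestion data in SUMO. The paper does not include code, so the following Python sketch only illustrates such a control loop through SUMO's TraCI API: the scenario file name `intersection.sumocfg`, the hold and re-route intervals, and the greedy phase choice are all hypothetical placeholders, with the greedy rule standing in for the learned policy-based DRL agent.

```python
"""Minimal sketch of a congestion-aware signal controller with periodic
re-routing in SUMO via TraCI. All names and thresholds are assumptions;
the greedy phase choice below is a stand-in for a learned DRL policy."""
import traci

SUMO_CMD = ["sumo", "-c", "intersection.sumocfg"]  # hypothetical scenario file
PHASE_HOLD = 10       # steps to hold a chosen phase (assumed value)
REROUTE_PERIOD = 50   # steps between network-wide re-routing passes (assumed)


def lane_pressure(tls_id):
    """Halting vehicles per incoming lane of a traffic light -- a simple
    congestion observation; the paper's state representation may differ."""
    return {lane: traci.lane.getLastStepHaltingNumber(lane)
            for lane in set(traci.trafficlight.getControlledLanes(tls_id))}


def choose_phase(tls_id, pressure):
    """Placeholder policy: pick the phase giving green to the most congested
    lanes. A policy-based DRL agent would map the observation to this action."""
    logic = traci.trafficlight.getAllProgramLogics(tls_id)[0]
    lanes = traci.trafficlight.getControlledLanes(tls_id)
    best_phase, best_score = 0, -1.0
    for idx, phase in enumerate(logic.phases):
        # Sum congestion over the links that are green ('g'/'G') in this phase.
        score = sum(pressure[lane]
                    for lane, sig in zip(lanes, phase.state) if sig in "gG")
        if score > best_score:
            best_phase, best_score = idx, score
    return best_phase


def run():
    traci.start(SUMO_CMD)
    tls_ids = traci.trafficlight.getIDList()
    step = 0
    try:
        while traci.simulation.getMinExpectedNumber() > 0:
            if step % PHASE_HOLD == 0:
                for tls in tls_ids:
                    traci.trafficlight.setPhase(
                        tls, choose_phase(tls, lane_pressure(tls)))
            if step % REROUTE_PERIOD == 0:
                # Load-balance vehicles over the network using current travel times.
                for veh in traci.vehicle.getIDList():
                    traci.vehicle.rerouteTraveltime(veh)
            traci.simulationStep()
            step += 1
    finally:
        traci.close()


if __name__ == "__main__":
    run()
```

In the paper's setup, `choose_phase` would presumably be replaced by a trained policy network fed with the fused sensor, detector, vehicle and road data the abstract mentions, and the blanket `rerouteTraveltime` call by a more selective load-balancing rule for the congested approaches behind the intersection.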
Related papers
- A Holistic Framework Towards Vision-based Traffic Signal Control with
Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce a holistic traffic simulation framework called TrafficDojo towards vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- A Fully Data-Driven Approach for Realistic Traffic Signal Control Using
Offline Reinforcement Learning [18.2541182874636]
We propose a fully Data-Driven and simulator-free framework for realistic Traffic Signal Control (D2TSC).
We combine well-established traffic flow theory with machine learning to infer the reward signals from coarse-grained traffic data.
Our approach achieves superior performance over conventional and offline RL baselines, and also enjoys much better real-world applicability.
arXiv Detail & Related papers (2023-11-27T15:29:21Z)
- Joint Optimization of Traffic Signal Control and Vehicle Routing in
Signalized Road Networks using Multi-Agent Deep Reinforcement Learning [19.024527400852968]
We propose a joint optimization approach for traffic signal control and vehicle routing in signalized road networks.
The objective is to enhance network performance by simultaneously controlling signal timings and route choices using Multi-Agent Deep Reinforcement Learning (MADRL).
Our work is the first to utilize MADRL in determining the optimal joint policy for signal control and vehicle routing.
arXiv Detail & Related papers (2023-10-16T22:10:47Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense
Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Reinforcement Learning Approaches for Traffic Signal Control under
Missing Data [5.896742981602458]
In real-world urban scenarios, missing observation of traffic states may frequently occur due to the lack of sensors.
We propose two solutions: the first one imputes the traffic states to enable adaptive control, and the second one imputes both states and rewards to enable adaptive control and the training of RL agents.
arXiv Detail & Related papers (2023-04-21T03:26:33Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic
Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity enlarges the cyber-attack surface of ATSCs and increases their vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion on a target approach (or approaches).
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained observations is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- A Deep Reinforcement Learning Approach for Traffic Signal Control
Optimization [14.455497228170646]
Inefficient traffic signal control methods may cause numerous problems, such as traffic congestion and waste of energy.
This paper first proposes a multi-agent deep deterministic policy gradient (MADDPG) method by extending actor-critic policy gradient algorithms.
arXiv Detail & Related papers (2021-07-13T14:11:04Z)
- Courteous Behavior of Automated Vehicles at Unsignalized Intersections
via Reinforcement Learning [30.00761722505295]
We propose a novel approach to optimize traffic flow at intersections in mixed traffic situations using deep reinforcement learning.
Our reinforcement learning agent learns a policy for a centralized controller to let connected autonomous vehicles at unsignalized intersections give up their right of way and yield to other vehicles to optimize traffic flow.
arXiv Detail & Related papers (2021-06-11T13:16:48Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement
Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.