Decentralized Semantic Traffic Control in AVs Using RL and DQN for Dynamic Roadblocks
- URL: http://arxiv.org/abs/2406.18741v1
- Date: Wed, 26 Jun 2024 20:12:48 GMT
- Title: Decentralized Semantic Traffic Control in AVs Using RL and DQN for Dynamic Roadblocks
- Authors: Emanuel Figetakis, Yahuza Bello, Ahmed Refaey, Abdallah Shami
- Abstract summary: We present a novel semantic traffic control system that entrusts semantic encoding responsibilities to the vehicles themselves.
This system processes driving decisions obtained from a Reinforcement Learning (RL) agent, streamlining the decision-making process.
- Score: 9.485363025495225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous Vehicles (AVs), furnished with sensors capable of capturing essential vehicle dynamics such as speed, acceleration, and precise location, possess the capacity to execute intelligent maneuvers, including lane changes, in anticipation of approaching roadblocks. Nevertheless, the sheer volume of sensory data and the processing necessary to derive informed decisions can often overwhelm the vehicles, rendering them unable to handle the task independently. Consequently, a common approach in traffic scenarios involves transmitting the data to servers for processing, a practice that introduces challenges, particularly in situations demanding real-time processing. In response to this challenge, we present a novel DL-based semantic traffic control system that entrusts semantic encoding responsibilities to the vehicles themselves. This system processes driving decisions obtained from a Reinforcement Learning (RL) agent, streamlining the decision-making process. Specifically, our framework envisions scenarios where abrupt roadblocks materialize due to factors such as road maintenance, accidents, or vehicle repairs, necessitating vehicles to make determinations concerning lane-keeping or lane-changing actions to navigate past these obstacles. To formulate this scenario mathematically, we employ a Markov Decision Process (MDP) and harness the Deep Q-Network (DQN) algorithm to unearth viable solutions.
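The abstract casts the roadblock scenario as an MDP solved with DQN. Below is a minimal sketch of how such a lane-decision agent might be set up in PyTorch; the state features (speed, lane index, distance to the roadblock), the two actions (keep lane / change lane), the network width, and the hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal lane-decision DQN sketch. State layout, actions, and hyperparameters
# below are assumptions for illustration, not the paper's exact formulation.
import random

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3   # assumed features: [speed, lane index, distance to roadblock]
N_ACTIONS = 2   # assumed actions: 0 = keep lane, 1 = change lane
GAMMA = 0.99    # discount factor for the MDP


class QNetwork(nn.Module):
    """Small MLP mapping a state vector to Q-values for the two lane actions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice between lane-keeping and lane-changing."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())


def dqn_update(q_net, target_net, optimizer, batch):
    """One DQN step: minimise the TD error against a frozen target network.

    `batch` is a tuple of tensors (states, actions, rewards, next_states, dones)
    sampled from a replay buffer (the buffer is omitted here for brevity).
    """
    states, actions, rewards, next_states, dones = batch
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        max_next_q = target_net(next_states).max(dim=1).values
        target = rewards + GAMMA * max_next_q * (1.0 - dones)
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    q_net, target_net = QNetwork(), QNetwork()
    target_net.load_state_dict(q_net.state_dict())
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
    state = torch.tensor([12.0, 1.0, 55.0])  # speed (m/s), lane, metres to block
    print(select_action(q_net, state, epsilon=0.1))
```

The frozen target network and Huber (smooth L1) loss are used here because they are standard choices for stabilizing the DQN temporal-difference update; the paper may make different choices.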
Related papers
- Agent-Agnostic Centralized Training for Decentralized Multi-Agent Cooperative Driving [17.659812774579756]
We propose an asymmetric actor-critic model that learns decentralized cooperative driving policies for autonomous vehicles.
By employing attention neural networks with masking, our approach efficiently manages real-world traffic dynamics and partial observability.
arXiv Detail & Related papers (2024-03-18T16:13:02Z) - Decision-Making for Autonomous Vehicles with Interaction-Aware Behavioral Prediction and Social-Attention Neural Network [7.812717451846781]
We propose a behavioral model that encodes drivers' interacting intentions into latent social-psychological parameters.
We develop a receding-horizon optimization-based controller for autonomous vehicle decision-making.
We conduct extensive evaluations of the proposed decision-making module in forced highway merging scenarios.
arXiv Detail & Related papers (2023-10-31T03:31:09Z) - Convergence of Communications, Control, and Machine Learning for Secure and Autonomous Vehicle Navigation [78.60496411542549]
Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks. Reaping these benefits requires CAVs to autonomously navigate to target destinations.
This article proposes solutions using the convergence of communication theory, control theory, and machine learning to enable effective and secure CAV navigation.
arXiv Detail & Related papers (2023-07-05T21:38:36Z) - Prediction Based Decision Making for Autonomous Highway Driving [3.6818636539023175]
This paper proposes a Prediction-based Deep Reinforcement Learning (PDRL) decision-making model.
It considers the manoeuvre intentions of surrounding vehicles in the decision-making process for highway driving.
The results show that the proposed PDRL model improves decision-making performance compared to a Deep Reinforcement Learning (DRL) model by reducing the number of collisions.
arXiv Detail & Related papers (2022-09-05T19:28:30Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z) - Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z) - Encoding Integrated Decision and Control for Autonomous Driving with Mixed Traffic Flow [5.7440882048331705]
Reinforcement learning (RL) has been widely adopted to learn intelligent driving policies in autonomous driving.
This paper proposes the encoding integrated decision and control (E-IDC) to handle complicated driving tasks with mixed traffic flows.
arXiv Detail & Related papers (2021-10-24T06:06:27Z) - Towards formalization and monitoring of microscopic traffic parameters using temporal logic [1.3706331473063877]
We develop specification-based monitoring for the analysis of traffic networks using the formal language Signal Temporal Logic.
We develop monitors that identify safety-related behavior such as conforming to speed limits and maintaining appropriate headway.
This work can be utilized by traffic management centers to study traffic stream properties, identify possible hazards, and provide valuable feedback for automating traffic monitoring systems (a minimal monitoring sketch follows this list).
arXiv Detail & Related papers (2021-10-12T17:59:26Z) - Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained measurements is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z) - End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system that uses a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
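As a companion to the temporal-logic monitoring entry above, the following is a minimal sketch of discrete-time specification monitoring over a sampled vehicle trace. The trace fields, thresholds, and the simplified "always" robustness computation are illustrative assumptions and do not reproduce that paper's formalization or tooling.

```python
# Minimal sketch of specification-based monitoring over a sampled vehicle trace.
# It computes discrete-time robustness margins for two "always" properties:
#   G (speed <= speed_limit)  and  G (headway >= min_headway).
# Field names, thresholds, and the robustness convention are assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    t: float        # timestamp in seconds
    speed: float    # vehicle speed in m/s
    headway: float  # distance to the lead vehicle in metres


def always_robustness(margins: List[float]) -> float:
    """Robustness of an 'always' property: positive iff every sample satisfies it."""
    return min(margins)


def monitor(trace: List[Sample], speed_limit: float, min_headway: float) -> dict:
    """Return per-property margins; a negative margin flags a violation."""
    speed_margin = always_robustness([speed_limit - s.speed for s in trace])
    headway_margin = always_robustness([s.headway - min_headway for s in trace])
    return {"speed_limit": speed_margin, "min_headway": headway_margin}


if __name__ == "__main__":
    trace = [Sample(0.0, 12.0, 30.0), Sample(0.1, 13.5, 24.0), Sample(0.2, 15.2, 18.0)]
    print(monitor(trace, speed_limit=14.0, min_headway=20.0))
    # Both margins come out negative, so both properties are violated on this trace.
```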