DRL-Based RAT Selection in a Hybrid Vehicular Communication Network
- URL: http://arxiv.org/abs/2407.00828v1
- Date: Wed, 3 Apr 2024 08:13:07 GMT
- Title: DRL-Based RAT Selection in a Hybrid Vehicular Communication Network
- Authors: Badreddine Yacine Yacheur, Toufik Ahmed, Mohamed Mosbah
- Abstract summary: Cooperative intelligent transport systems rely on a set of Vehicle-to-Everything (V2X) applications to enhance road safety.
New V2X applications depend on a significant amount of shared data and require high reliability, low end-to-end (E2E) latency, and high throughput.
We propose an intelligent, scalable hybrid vehicular communication architecture that leverages the performance of multiple Radio Access Technologies (RATs) to meet the needs of these applications.
- Score: 2.345902601618188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative intelligent transport systems rely on a set of Vehicle-to-Everything (V2X) applications to enhance road safety. Emerging V2X applications like Advanced Driver Assistance Systems (ADASs) and Connected Autonomous Driving (CAD) applications depend on a significant amount of shared data and require high reliability, low end-to-end (E2E) latency, and high throughput. However, present V2X communication technologies such as ITS-G5 and C-V2X (Cellular V2X) cannot satisfy these requirements alone. In this paper, we propose an intelligent, scalable hybrid vehicular communication architecture that leverages the performance of multiple Radio Access Technologies (RATs) to meet the needs of these applications. Then, we propose a communication mode selection algorithm based on Deep Reinforcement Learning (DRL) to maximize the network's reliability while limiting resource consumption. Finally, we assess our work using the platooning scenario, which requires high reliability. Numerical results reveal that the hybrid vehicular communication architecture has the potential to enhance the packet reception rate (PRR) by up to 30% compared to both the static RAT selection strategy and the multi-criteria decision-making (MCDM) selection algorithm. Additionally, it improves the resource-consumption efficiency of the redundant communication mode by 20%.
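The mode-selection idea can be illustrated compactly. The abstract does not spell out the state space, reward, or DRL architecture, so the sketch below substitutes a tabular Q-learning agent over an assumed two-RAT link-quality state, with actions ITS-G5 only, C-V2X only, or redundant transmission on both, and a reward that trades packet delivery against a resource cost; all names and constants are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical RAT/mode selection sketch: states are discretized (ITS-G5, C-V2X)
# link qualities; the reward rewards delivery and penalizes radio-resource use.
ACTIONS = ("ITS-G5", "C-V2X", "REDUNDANT")
N_LEVELS = 4                                              # link-quality levels per RAT
COST = {"ITS-G5": 0.1, "C-V2X": 0.1, "REDUNDANT": 0.25}   # assumed resource cost per mode
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_LEVELS, N_LEVELS, len(ACTIONS)))

def delivery_prob(state, action):
    """Assumed channel model: quality level maps to delivery probability; the
    redundant mode succeeds if either RAT delivers the packet."""
    p_its = (state[0] + 1) / (N_LEVELS + 1)
    p_cv2x = (state[1] + 1) / (N_LEVELS + 1)
    if action == "ITS-G5":
        return p_its
    if action == "C-V2X":
        return p_cv2x
    return 1.0 - (1.0 - p_its) * (1.0 - p_cv2x)           # redundant transmission

def step(state, a_idx):
    """One packet: sample success, compute reward, draw the next channel state."""
    action = ACTIONS[int(a_idx)]
    delivered = rng.random() < delivery_prob(state, action)
    reward = (1.0 if delivered else 0.0) - COST[action]
    next_state = tuple(rng.integers(0, N_LEVELS, size=2)) # i.i.d. channel for brevity
    return reward, next_state

state = tuple(rng.integers(0, N_LEVELS, size=2))
for _ in range(20000):
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[state]))
    r, nxt = step(state, a)
    Q[state][a] += ALPHA * (r + GAMMA * np.max(Q[nxt]) - Q[state][a])
    state = nxt

# Greedy mode per (ITS-G5 quality, C-V2X quality) state after training.
for s0 in range(N_LEVELS):
    print([ACTIONS[int(np.argmax(Q[s0, s1]))] for s1 in range(N_LEVELS)])
```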
Related papers
- Joint Channel Selection using FedDRL in V2X [20.96900576250422]
Vehicle-to-everything (V2X) communication technology is revolutionizing transportation by enabling interactions between vehicles, devices, and infrastructures.
In this paper, we study the problem of joint channel selection, where vehicles with different technologies choose one or more Access Points (APs) to transmit messages in a network.
We propose an approach based on Federated Deep Reinforcement Learning (FedDRL), which enables each vehicle to benefit from other vehicles' experiences.
arXiv Detail & Related papers (2024-10-03T14:04:08Z)
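The FedDRL approach above can be illustrated with a rough sketch (all details below are assumptions, not taken from that paper): each vehicle trains a local value table for channel/AP selection on its own observations, and a server periodically averages the local models, FedAvg-style, so vehicles benefit from each other's experience without exchanging raw data.

```python
import numpy as np

# Toy federated-RL sketch: per-vehicle Q-tables plus FedAvg aggregation.
N_VEHICLES, N_STATES, N_CHANNELS = 5, 8, 3
rng = np.random.default_rng(1)
local_q = [np.zeros((N_STATES, N_CHANNELS)) for _ in range(N_VEHICLES)]
channel_quality = [rng.random((N_STATES, N_CHANNELS)) for _ in range(N_VEHICLES)]

def local_update(q, quality, steps=200, alpha=0.2, gamma=0.9, eps=0.2):
    """Epsilon-greedy Q-learning on a toy channel-selection task."""
    s = rng.integers(N_STATES)
    for _ in range(steps):
        a = rng.integers(N_CHANNELS) if rng.random() < eps else int(np.argmax(q[s]))
        r = quality[s, a]                      # reward: locally observed channel quality
        s_next = rng.integers(N_STATES)
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next

for _round in range(10):                       # federated rounds
    for q, quality in zip(local_q, channel_quality):
        local_update(q, quality)               # local training on private data
    global_q = np.mean(local_q, axis=0)        # FedAvg: average the local models
    for q in local_q:
        q[:] = global_q                        # broadcast the global model back

print("greedy channel per state:", np.argmax(global_q, axis=1))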
- Joint Optimization of Age of Information and Energy Consumption in NR-V2X System based on Deep Reinforcement Learning [13.62746306281161]
This paper considers Vehicle-to-Everything (V2X) specifications based on 5G New Radio (NR) technology.
Mode 2 Side-Link (SL) communication resembles Mode 4 in LTE-V2X, allowing direct communication between vehicles.
An interference cancellation method is employed to mitigate the impact of interference.
arXiv Detail & Related papers (2024-07-11T12:54:38Z)
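The AoI/energy trade-off that such a formulation optimizes can be made concrete with a toy slot-based model (the parameters and channel model below are assumptions, not that paper's system model): the age grows every slot and resets when an update is delivered, while each transmission attempt costs energy.

```python
import numpy as np

# Toy Age-of-Information vs. energy trade-off under a fixed transmit probability.
def simulate(tx_prob, p_success=0.7, energy_per_tx=1.0, slots=50_000, seed=0):
    rng = np.random.default_rng(seed)
    aoi, aoi_sum, energy = 1, 0, 0.0
    for _ in range(slots):
        aoi_sum += aoi
        sent = rng.random() < tx_prob          # decide whether to transmit this slot
        if sent:
            energy += energy_per_tx
        if sent and rng.random() < p_success:
            aoi = 1                            # fresh status update delivered
        else:
            aoi += 1                           # information keeps ageing
    return aoi_sum / slots, energy / slots

# Sweeping the transmission probability exposes the trade-off a DRL agent
# would optimize jointly.
for q in (0.1, 0.3, 0.5, 0.9):
    mean_aoi, mean_energy = simulate(q)
    print(f"tx_prob={q:.1f}  mean AoI={mean_aoi:6.2f}  energy/slot={mean_energy:.2f}")
```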
- Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for RIS-Aided IoV Networks [43.443526528832145]
We propose a RIS-assisted Internet of Vehicles (IoV) network, considering the vehicle-to-everything (V2X) communication method.
In order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model.
arXiv Detail & Related papers (2024-06-17T06:16:07Z)
- Deep Reinforcement Learning Algorithms for Hybrid V2X Communication: A Benchmarking Study [39.214784277182304]
This paper addresses the vertical handover problem in V2X using Deep Reinforcement Learning (DRL) algorithms.
The benchmarked algorithms outperform the current state-of-the-art approaches in terms of redundancy and usage rate of V-VLC headlights.
arXiv Detail & Related papers (2023-10-04T12:32:14Z)
- Energy-Efficient On-Board Radio Resource Management for Satellite Communications via Neuromorphic Computing [59.40731173370976]
We investigate the application of energy-efficient brain-inspired machine learning models for on-board radio resource management.
For relevant workloads, spiking neural networks (SNNs) implemented on Loihi 2 yield higher accuracy, while reducing power consumption by more than 100× compared to the CNN-based reference platform.
arXiv Detail & Related papers (2023-08-22T03:13:57Z)
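For readers unfamiliar with spiking models, the following plain-NumPy leaky integrate-and-fire layer illustrates the kind of event-driven computation that neuromorphic hardware such as Loihi 2 runs natively; it is a generic sketch, not Loihi/Lava code, and every constant is assumed.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) layer: leak, integrate input spikes,
# fire when the membrane potential crosses threshold, then reset.
def lif_layer(spikes_in, weights, v_th=1.0, decay=0.9):
    """spikes_in: (T, n_in) binary spike trains; weights: (n_in, n_out)."""
    T = spikes_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)                           # membrane potentials
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        v = decay * v + spikes_in[t] @ weights    # leak + integrate input current
        fired = v >= v_th
        spikes_out[t] = fired
        v[fired] = 0.0                            # reset neurons that spiked
    return spikes_out

rng = np.random.default_rng(0)
inputs = (rng.random((50, 16)) < 0.2).astype(float)   # Poisson-like input spikes
w = rng.normal(0.0, 0.3, size=(16, 4))
out = lif_layer(inputs, w)
print("output spike counts per neuron:", out.sum(axis=0))
```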
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
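The heart of PPO, as used in the traffic-control scheme above, is the clipped surrogate objective. The snippet below only evaluates that objective on made-up sample data to show the clipping mechanism; it is not that paper's controller, and all numbers are placeholders.

```python
import numpy as np

# PPO's clipped surrogate objective on a small batch of sample data.
def ppo_clip_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    ratio = np.exp(new_logp - old_logp)                  # pi_new(a|s) / pi_old(a|s)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Elementwise minimum so overly large policy updates are not rewarded.
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))

old_logp = np.log(np.array([0.25, 0.10, 0.40, 0.05]))    # assumed old action probabilities
new_logp = np.log(np.array([0.30, 0.05, 0.55, 0.02]))    # assumed new action probabilities
advantages = np.array([1.0, -0.5, 2.0, -1.0])            # assumed advantage estimates
print("clipped surrogate objective:", ppo_clip_objective(new_logp, old_logp, advantages))
```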
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances in deep reinforcement learning to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Deep Learning-based Resource Allocation For Device-to-Device Communication [66.74874646973593]
We propose a framework for the optimization of the resource allocation in multi-channel cellular systems with device-to-device (D2D) communication.
A deep learning (DL) framework is proposed, where the optimal resource allocation strategy for arbitrary channel conditions is approximated by deep neural network (DNN) models.
Our simulation results confirm that near-optimal performance can be attained with low computation time, which underlines the real-time capability of the proposed scheme.
arXiv Detail & Related papers (2020-11-25T14:19:23Z)
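The "approximate the optimizer with a DNN" idea used above can be sketched with a toy stand-in: label each channel realization with a simple optimal choice (here, the strongest channel) and fit a small softmax model to reproduce it. The data model, sizes, and single-layer classifier below are assumptions for illustration only, not that paper's DNN architecture.

```python
import numpy as np

# Learn to imitate a simple "optimal" allocation rule from channel conditions.
rng = np.random.default_rng(0)
N, n_ch = 4000, 4
X = rng.exponential(1.0, size=(N, n_ch))         # channel gains (toy model)
y = np.argmax(X, axis=1)                         # "optimal" allocation per sample
Y = np.eye(n_ch)[y]                              # one-hot targets

W = np.zeros((n_ch, n_ch))
b = np.zeros(n_ch)
lr = 0.5
for _ in range(300):                             # batch gradient descent on cross-entropy
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    dZ = (P - Y) / N                             # softmax cross-entropy gradient
    W -= lr * (X.T @ dZ)
    b -= lr * dZ.sum(axis=0)

pred = np.argmax(X @ W + b, axis=1)
print("agreement with the reference rule:", (pred == y).mean())
```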
- Path Design and Resource Management for NOMA enhanced Indoor Intelligent Robots [58.980293789967575]
A communication-enabled indoor intelligent robots (IRs) service framework is proposed.
A Lego modeling method is proposed, which can deterministically describe the indoor layout and channel state.
The investigated radio map is invoked as a virtual environment to train the reinforcement learning agent.
arXiv Detail & Related papers (2020-11-23T21:45:01Z)
- Multi-Agent Reinforcement Learning for Channel Assignment and Power Allocation in Platoon-Based C-V2X Systems [15.511438222357489]
We consider the problem of joint channel assignment and power allocation in underlaid cellular vehicular-to-everything (C-V2X) systems.
Our proposed distributed resource allocation algorithm achieves performance close to that of the well-known exhaustive search algorithm.
arXiv Detail & Related papers (2020-11-09T16:55:09Z)
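A minimal stand-in for the distributed joint channel and power selection described above is a set of independent Q-learners, each picking a (channel, power) pair under a toy interference and energy model. The reward shaping and sizes below are assumptions, unrelated to that paper's exact formulation.

```python
import numpy as np

# Independent (stateless) Q-learners for joint channel and power selection:
# sharing a channel splits the toy rate, and higher power costs more energy.
rng = np.random.default_rng(0)
N_AGENTS, N_CH, POWERS = 3, 2, (0.5, 1.0)
n_actions = N_CH * len(POWERS)
Q = np.zeros((N_AGENTS, n_actions))
alpha, eps = 0.1, 0.1

def decode(a):
    return int(a) // len(POWERS), POWERS[int(a) % len(POWERS)]   # (channel, power)

for _ in range(20000):
    acts = [rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(N_AGENTS)]
    chans = [decode(a)[0] for a in acts]
    for i, a in enumerate(acts):
        ch, p = decode(a)
        sharing = chans.count(ch)                # agents on the same channel interfere
        rate = p / sharing                       # toy rate under channel sharing
        reward = rate - 0.2 * p                  # penalize energy consumption
        Q[i, a] += alpha * (reward - Q[i, a])

print("learned (channel, power) per agent:",
      [decode(np.argmax(Q[i])) for i in range(N_AGENTS)])
```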
- Reinforcement Learning Based Vehicle-cell Association Algorithm for Highly Mobile Millimeter Wave Communication [53.47785498477648]
This paper investigates the problem of vehicle-cell association in millimeter wave (mmWave) communication networks.
We first formulate the vehicle user (VU) association problem as a discrete non-convex optimization problem.
The proposed solution achieves up to 15% gains in terms of user sum rate and a 20% reduction in VUE compared to several baseline designs.
arXiv Detail & Related papers (2020-01-22T08:51:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.