Scheduling Out-of-Coverage Vehicular Communications Using Reinforcement
Learning
- URL: http://arxiv.org/abs/2207.06537v1
- Date: Wed, 13 Jul 2022 22:47:48 GMT
- Title: Scheduling Out-of-Coverage Vehicular Communications Using Reinforcement
Learning
- Authors: Taylan Şahin, Ramin Khalili, Mate Boban, Adam Wolisz
- Abstract summary: We propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications.
We evaluate the performance of VRLS under varying mobility, network load, wireless channel, and resource configurations.
- Score: 3.058685580689605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Performance of vehicle-to-vehicle (V2V) communications depends highly on the
employed scheduling approach. While centralized network schedulers offer high
V2V communication reliability, their operation is conventionally restricted to
areas with full cellular network coverage. In contrast, in
out-of-cellular-coverage areas, comparatively inefficient distributed radio
resource management is used. To exploit the benefits of the centralized
approach for enhancing the reliability of V2V communications on roads lacking
cellular coverage, we propose VRLS (Vehicular Reinforcement Learning
Scheduler), a centralized scheduler that proactively assigns resources for
out-of-coverage V2V communications before vehicles leave the cellular
network coverage. By training in simulated vehicular environments, VRLS can
learn a scheduling policy that is robust and adaptable to environmental
changes, thus eliminating the need for targeted (re-)training in complex
real-life environments. We evaluate the performance of VRLS under varying
mobility, network load, wireless channel, and resource configurations. VRLS
outperforms the state-of-the-art distributed scheduling algorithm in zones
without cellular network coverage by reducing the packet error rate by half in
highly loaded conditions and achieving near-maximum reliability in low-load
scenarios.
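As a rough illustration of the idea behind an RL-based scheduler, the sketch below uses tabular Q-learning to assign one of a handful of sidelink resources to each vehicle heading out of coverage, rewarding collision-free assignments. The resource count, state encoding, and the tabular learner are illustrative assumptions; the paper's VRLS is a neural policy trained in simulated vehicular environments, not this code.

```python
import random
from collections import defaultdict

# Toy stand-in for a centralized RL scheduler: assign one of NUM_RESOURCES
# orthogonal sidelink resources to each vehicle about to leave coverage.
# (Assumed setup; the actual VRLS uses a trained neural policy.)
NUM_RESOURCES = 4
EPSILON, ALPHA, GAMMA = 0.1, 0.2, 0.9

q_table = defaultdict(float)  # (state, action) -> estimated value


def choose_resource(state):
    """Epsilon-greedy choice of a resource index for a discretized load state."""
    if random.random() < EPSILON:
        return random.randrange(NUM_RESOURCES)
    return max(range(NUM_RESOURCES), key=lambda a: q_table[(state, a)])


def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_state, a)] for a in range(NUM_RESOURCES))
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])


def train(num_vehicles=6, steps=200):
    """Vehicles request resources one by one; collisions are penalized."""
    for _ in range(steps):
        usage = [0] * NUM_RESOURCES  # how many vehicles share each resource
        for _ in range(num_vehicles):
            state = tuple(min(u, 3) for u in usage)
            action = choose_resource(state)
            usage[action] += 1
            reward = 1.0 if usage[action] == 1 else -1.0
            update(state, action, reward, tuple(min(u, 3) for u in usage))


train()
```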
Related papers
- Communication-Aware Consistent Edge Selection for Mobile Users and Autonomous Vehicles [1.2453219864236245]
Offloading time-sensitive, computationally intensive tasks can enhance service efficiency.
This paper proposes a deep reinforcement learning framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm.
A joint method for communication resource allocation and switching of access points (APs) is proposed to minimize computational load, service latency, and interruptions.
arXiv Detail & Related papers (2024-08-06T20:21:53Z)
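As a hedged illustration of the DDPG building blocks named in the paper above, the sketch below defines only the actor and critic networks; the state and action dimensions are assumptions, and the training machinery (replay buffer, target networks, soft updates) is omitted.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # assumed sizes for illustration


class Actor(nn.Module):
    """Deterministic policy: maps a state to a bounded continuous action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


actor, critic = Actor(), Critic()
q_value = critic(torch.zeros(1, STATE_DIM), actor(torch.zeros(1, STATE_DIM)))
```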
- Joint Optimization of Age of Information and Energy Consumption in NR-V2X System based on Deep Reinforcement Learning [13.62746306281161]
The paper considers Vehicle-to-Everything (V2X) specifications based on 5G New Radio (NR) technology.
Mode 2 Side-Link (SL) communication resembles Mode 4 in LTE-V2X, allowing direct communication between vehicles.
An interference cancellation method is employed to mitigate the impact of mutual interference between vehicles.
arXiv Detail & Related papers (2024-07-11T12:54:38Z)
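Age of Information (AoI), the freshness metric the paper above optimizes jointly with energy consumption, grows by one per slot and resets when a fresh update is delivered. A toy trace under an assumed slotted delivery model:

```python
def aoi_trace(delivery_slots, num_slots):
    """AoI grows by 1 each slot and drops to 1 in a slot with a successful delivery.
    (Simplified model: delivery is assumed to take exactly one slot.)"""
    aoi, trace = 0, []
    for t in range(num_slots):
        aoi = 1 if t in delivery_slots else aoi + 1
        trace.append(aoi)
    return trace


# Updates delivered in slots 2 and 6 of a 10-slot window.
print(aoi_trace({2, 6}, 10))  # [1, 2, 1, 2, 3, 4, 1, 2, 3, 4]
```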
- Graph Neural Networks and Deep Reinforcement Learning Based Resource Allocation for V2X Communications [43.443526528832145]
This paper proposes a method that integrates Graph Neural Networks (GNN) with Deep Reinforcement Learning (DRL) to address the resource allocation challenge in V2X communications.
By constructing a dynamic graph with communication links as nodes, the model aims to ensure a high success rate for V2V communication.
The proposed method retains the global feature learning capabilities of GNN and supports distributed network deployment.
arXiv Detail & Related papers (2024-07-09T03:14:11Z)
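A minimal sketch of the "communication links as graph nodes" construction mentioned above: each V2V link becomes a node, and two nodes are connected when their links share a vehicle and may interfere. The interference rule here is an assumption for illustration, not the paper's graph model.

```python
import itertools


def build_link_graph(links):
    """links: list of (tx, rx) V2V pairs. Returns adjacency sets over link indices:
    two links are neighbors if they share a vehicle (assumed interference rule)."""
    adj = {i: set() for i in range(len(links))}
    for (i, a), (j, b) in itertools.combinations(enumerate(links), 2):
        if set(a) & set(b):  # shared transmitter or receiver
            adj[i].add(j)
            adj[j].add(i)
    return adj


links = [(1, 2), (2, 3), (4, 5)]  # vehicle IDs forming V2V links
print(build_link_graph(links))    # {0: {1}, 1: {0}, 2: set()}
```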
- Deep-Reinforcement-Learning-Based AoI-Aware Resource Allocation for RIS-Aided IoV Networks [43.443526528832145]
We propose a RIS-assisted Internet of Vehicles (IoV) network based on vehicle-to-everything (V2X) communication.
In order to improve the timeliness of vehicle-to-infrastructure (V2I) links and the stability of vehicle-to-vehicle (V2V) links, we introduce the age of information (AoI) model and the payload transmission probability model.
arXiv Detail & Related papers (2024-06-17T06:16:07Z)
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network (O-RAN) systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
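To give a flavor of the online-learning setting above (pick a vBS configuration each round, observe a reward trading throughput against energy, adapt), here is a generic exponential-weights update; the candidate configurations, reward signal, and learning rate are assumptions, not the paper's algorithm.

```python
import math
import random

configs = ["low_power", "balanced", "high_throughput"]  # assumed vBS policies
weights = [1.0] * len(configs)
ETA = 0.1  # assumed learning rate


def pick_config():
    """Sample a configuration index proportionally to its weight."""
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1


def observe_reward(i, reward):
    """Exponential-weights update; reward in [0, 1] trades throughput vs. energy."""
    weights[i] *= math.exp(ETA * reward)


for _ in range(100):
    i = pick_config()
    observe_reward(i, random.random())  # placeholder reward signal
```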
- A Deep RL Approach on Task Placement and Scaling of Edge Resources for Cellular Vehicle-to-Network Service Provisioning [6.625994697789603]
We tackle the interdependent problems of service task placement and scaling of edge resources.
We introduce Deep Hybrid Policy Gradient (DHPG), a Deep Reinforcement Learning (DRL) approach for hybrid action spaces.
The performance of DHPG is evaluated against several state-of-the-art (SoA) solutions through simulations employing a real-world C-V2N traffic dataset.
arXiv Detail & Related papers (2023-05-16T22:19:19Z)
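The hybrid action space handled by DHPG pairs a discrete decision (e.g., where to place a task) with continuous ones (e.g., how much edge capacity to scale). A minimal policy network with one discrete head and one continuous head, with all dimensions assumed for illustration:

```python
import torch
import torch.nn as nn

STATE_DIM, NUM_PLACEMENTS, NUM_SCALES = 10, 3, 2  # assumed sizes


class HybridPolicy(nn.Module):
    """Shared trunk with a discrete head (placement logits)
    and a continuous head (resource scaling in [0, 1])."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.discrete_head = nn.Linear(64, NUM_PLACEMENTS)
        self.continuous_head = nn.Sequential(nn.Linear(64, NUM_SCALES), nn.Sigmoid())

    def forward(self, state):
        h = self.trunk(state)
        return self.discrete_head(h), self.continuous_head(h)


policy = HybridPolicy()
logits, scales = policy(torch.zeros(1, STATE_DIM))
placement = torch.argmax(logits, dim=-1)  # discrete part of the hybrid action
```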
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- An Energy Consumption Model for Electrical Vehicle Networks via Extended Federated-learning [50.85048976506701]
This paper proposes a novel solution to range anxiety based on a federated-learning model.
It is capable of estimating battery consumption and providing energy-efficient route planning for vehicle networks.
arXiv Detail & Related papers (2021-11-13T15:03:44Z)
- Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the autonomous vehicle (AV) make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability of the AV by up to 67% compared to other conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z)
- Deep Learning-based Resource Allocation For Device-to-Device Communication [66.74874646973593]
We propose a framework for the optimization of the resource allocation in multi-channel cellular systems with device-to-device (D2D) communication.
A deep learning (DL) framework is proposed, where the optimal resource allocation strategy for arbitrary channel conditions is approximated by deep neural network (DNN) models.
Our simulation results confirm that near-optimal performance can be attained with low computation time, which underlines the real-time capability of the proposed scheme.
arXiv Detail & Related papers (2020-11-25T14:19:23Z)