Optimal Beam Association for High Mobility mmWave Vehicular Networks:
Lightweight Parallel Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2005.00694v2
- Date: Mon, 7 Jun 2021 06:47:25 GMT
- Title: Optimal Beam Association for High Mobility mmWave Vehicular Networks:
Lightweight Parallel Reinforcement Learning Approach
- Authors: Nguyen Van Huynh, Diep N. Nguyen, Dinh Thai Hoang, and Eryk Dutkiewicz
- Abstract summary: We develop an optimal beam association framework for mmWave vehicular networks under high mobility.
We use the semi-Markov decision process to capture the dynamics and uncertainty of the environment.
Our proposed solution can increase the data rate by 47% and reduce the disconnection probability by 29% compared to other solutions.
- Score: 34.71313117637721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In intelligent transportation systems (ITS), vehicles are expected to feature
advanced applications and services that demand ultra-high data rates and
low-latency communications. To meet these demands, millimeter wave (mmWave)
communication has emerged as a very promising solution. However,
incorporating mmWave into ITS is particularly challenging due to the high
mobility of vehicles and the inherent sensitivity of mmWave beams to dynamic
blockages. This article addresses these problems by developing an optimal beam
association framework for mmWave vehicular networks under high mobility.
Specifically, we use the semi-Markov decision process to capture the dynamics
and uncertainty of the environment. The Q-learning algorithm is commonly used
to find the optimal policy of such a process. However, Q-learning is notorious for its
slow convergence. Instead of adopting deep reinforcement learning structures
(like most works in the literature), we leverage the fact that there are
usually multiple vehicles on the road to speed up the learning process. To that
end, we develop a lightweight yet very effective parallel Q-learning algorithm
to quickly obtain the optimal policy by simultaneously learning from various
vehicles. Extensive simulations demonstrate that our proposed solution can
increase the data rate by 47% and reduce the disconnection probability by 29%
compared to other solutions.
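To make the core idea concrete, the sketch below illustrates how several vehicles could feed their experience into a single shared Q-table at every step, which is the intuition behind the "lightweight parallel" speed-up described in the abstract. The state/action sizes, hyperparameters, and the env_step simulator are illustrative assumptions only; the paper's actual semi-Markov decision process formulation and mmWave simulator are not reproduced here.

```python
# Minimal sketch (not the authors' code): parallel tabular Q-learning in which
# N_VEHICLES learners share one Q-table. State/action encodings, rewards, and
# the environment model are placeholder assumptions for illustration.
import numpy as np

N_STATES, N_ACTIONS = 64, 8            # assumed discretized network states and beam choices
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration rate
N_VEHICLES, N_STEPS = 10, 20_000

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))    # single Q-table shared by all vehicles

def env_step(state, action):
    """Placeholder environment: returns (reward, next_state).
    A real simulator would model mmWave beam association under mobility and blockage."""
    reward = rng.normal(loc=1.0 if action == state % N_ACTIONS else 0.0, scale=0.1)
    next_state = int(rng.integers(N_STATES))
    return reward, next_state

states = rng.integers(N_STATES, size=N_VEHICLES)
for _ in range(N_STEPS):
    for v in range(N_VEHICLES):        # every vehicle contributes an update each step
        s = int(states[v])
        # epsilon-greedy beam selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(np.argmax(Q[s]))
        r, s_next = env_step(s, a)
        # standard Q-learning update applied to the shared table
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])
        states[v] = s_next

policy = Q.argmax(axis=1)  # learned beam-association decision per state
```

Because every vehicle contributes an update at every step, the shared table accumulates experience roughly N_VEHICLES times faster than a single learner would, which is the mechanism the abstract credits for faster convergence than plain Q-learning without resorting to deep reinforcement learning structures.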
Related papers
- Generalized Multi-Objective Reinforcement Learning with Envelope Updates in URLLC-enabled Vehicular Networks [12.323383132739195]
We develop a novel multi-objective reinforcement learning framework to jointly optimize wireless network selection and autonomous driving policies.
The proposed framework is designed to maximize the traffic flow and minimize collisions by controlling the vehicle's motion dynamics.
The proposed policies enable autonomous vehicles to adopt safe driving behaviors with improved connectivity.
arXiv Detail & Related papers (2024-05-18T16:31:32Z) - Learning for Semantic Knowledge Base-Guided Online Feature Transmission
in Dynamic Channels [41.59960455142914]
We propose an online optimization framework to address the challenge of dynamic channel conditions and device mobility in an end-to-end communication system.
Our approach builds upon existing methods by leveraging a semantic knowledge base to drive multi-level feature transmission.
To solve the online optimization problem, we design a novel soft actor-critic-based deep reinforcement learning system with a carefully designed reward function for real-time decision-making.
arXiv Detail & Related papers (2023-11-30T07:35:56Z) - Eco-Driving Control of Connected and Automated Vehicles using Neural
Network based Rollout [0.0]
Connected and autonomous vehicles have the potential to minimize energy consumption.
Existing deterministic and stochastic methods created to solve the eco-driving problem generally suffer from high computational and memory requirements.
This work proposes a hierarchical multi-horizon optimization framework implemented via a neural network.
arXiv Detail & Related papers (2023-10-16T23:13:51Z) - MOB-FL: Mobility-Aware Federated Learning for Intelligent Connected
Vehicles [21.615151912285835]
We consider a base station coordinating nearby ICVs to train a neural network in a collaborative yet distributed manner.
Due to the mobility of vehicles, the connections between the base station and ICVs are short-lived.
We propose an accelerated FL-ICV framework, by optimizing the duration of each training round and the number of local iterations.
arXiv Detail & Related papers (2022-12-07T08:53:53Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - Bayesian Optimization and Deep Learning for steering wheel angle
prediction [58.720142291102135]
This work aims to obtain an accurate model for the prediction of the steering angle in an automated driving system.
BO was able to identify, within a limited number of trials, a model -- namely BOST-LSTM -- which proved to be the most accurate when compared to classical end-to-end driving models.
arXiv Detail & Related papers (2021-10-22T15:25:14Z) - Cellular traffic offloading via Opportunistic Networking with
Reinforcement Learning [0.5758073912084364]
We propose an adaptive offloading solution based on the Reinforcement Learning framework.
We evaluate and compare the performance of two well-known learning algorithms: Actor-Critic and Q-Learning.
Our solution achieves a higher level of offloading with respect to other state-of-the-art approaches.
arXiv Detail & Related papers (2021-10-01T13:34:12Z) - Transferable Deep Reinforcement Learning Framework for Autonomous
Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the AV make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the AV's obstacle miss-detection probability by up to 67% compared to conventional deep reinforcement learning approaches.
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with
Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces
Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape the wireless channels by controlling individual scattering elements' phase shifts.
Due to the large size of scattering elements, the passive beamforming is typically challenged by the high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.