Deep Reinforcement Learning Based Multi-Access Edge Computing Schedule
for Internet of Vehicle
- URL: http://arxiv.org/abs/2202.08972v1
- Date: Tue, 15 Feb 2022 17:14:58 GMT
- Title: Deep Reinforcement Learning Based Multi-Access Edge Computing Schedule
for Internet of Vehicle
- Authors: Xiaoyu Dai, Kaoru Ota, Mianxiong Dong
- Abstract summary: We propose a UAVs-assisted approach to help provide a better wireless network service retaining the maximum Quality of Experience (QoE) of the Internet of Vehicles (IoVs) on the lane.
In the paper, we present a Multi-Agent Graph Convolutional Deep Reinforcement Learning (M-AGCDRL) algorithm which combines local observations of each agent with a low-resolution global map as input to learn a policy for each agent.
- Score: 16.619839349229437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As intelligent transportation systems are being implemented broadly and
unmanned aerial vehicles (UAVs) can assist terrestrial base stations, acting as
multi-access edge computing (MEC) nodes to provide better wireless network
communication for Internet of Vehicles (IoVs), we propose a UAV-assisted
approach that provides better wireless network service while retaining the
maximum Quality of Experience (QoE) of the IoVs on the lane. In this paper, we
present a Multi-Agent Graph Convolutional Deep Reinforcement Learning
(M-AGCDRL) algorithm that combines the local observations of each agent with a
low-resolution global map as input to learn a policy for each agent. The agents
can share their information with others through graph attention networks, resulting
in an effective joint policy. Simulation results show that the M-AGCDRL method
enables a better QoE of the IoVs and achieves good performance.
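As a rough illustration of the input construction and information sharing the abstract describes (not the authors' code), the following NumPy sketch concatenates each agent's local observation with a shared low-resolution global map and runs one graph-attention round over a fully connected agent graph; all shapes, weights, and the attention form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 4      # e.g. UAV-mounted MEC servers (illustrative)
OBS_DIM = 8       # per-agent local observation size (assumption)
MAP_DIM = 16      # flattened low-resolution global map (assumption)
HID = 32          # hidden feature size (assumption)

# Per-agent local observations and one shared low-resolution global map.
local_obs = rng.normal(size=(N_AGENTS, OBS_DIM))
global_map = rng.normal(size=(MAP_DIM,))

# Each agent's input = [local observation ; global map], as the abstract describes.
x = np.concatenate([local_obs, np.tile(global_map, (N_AGENTS, 1))], axis=1)

# One graph-attention round so agents can share information (illustrative weights).
W = rng.normal(size=(x.shape[1], HID)) * 0.1
a_src = rng.normal(size=(HID,)) * 0.1
a_dst = rng.normal(size=(HID,)) * 0.1
adj = np.ones((N_AGENTS, N_AGENTS))       # fully connected communication graph (assumption)

h = x @ W                                  # projected per-agent features
scores = h @ a_src[:, None] + (h @ a_dst)[None, :]   # pairwise attention logits
scores = np.where(adj > 0, scores, -1e9)   # mask non-neighbours
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = attn / attn.sum(axis=1, keepdims=True)

h_shared = attn @ h                        # attention-weighted neighbour aggregation
print("per-agent shared features:", h_shared.shape)   # (N_AGENTS, HID)
```

In a full M-AGCDRL implementation these weights would be learned end-to-end with the policy, and the communication graph would follow the UAV topology rather than being fully connected.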
Related papers
- Optimizing Age of Information in Vehicular Edge Computing with Federated Graph Neural Network Multi-Agent Reinforcement Learning [44.17644657738893]
This paper focuses on the Age of Information (AoI) as a key metric for data freshness and explores task offloading issues for vehicles under RSU communication resource constraints.
We propose an innovative distributed federated learning framework that combines Graph Neural Networks (GNN) with multi-agent reinforcement learning, named Federated Graph Neural Network Multi-Agent Reinforcement Learning (FGNN-MADRL), to optimize AoI across the system.
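For readers unfamiliar with the AoI metric that FGNN-MADRL optimizes, here is a minimal sketch of its bookkeeping: AoI grows by one slot until an update is delivered, then resets to the age of the freshest delivered sample. The slot length and delivery pattern are illustrative assumptions, not taken from the paper.

```python
SLOT = 1.0                      # seconds per time slot (assumption)
aoi = 0.0                       # AoI of one vehicle's status at the RSU
generated_at = 0.0              # timestamp of the freshest sample delivered so far

for t in range(1, 11):
    now = t * SLOT
    delivered = (t % 4 == 0)    # pretend an update is delivered every 4th slot
    if delivered:
        generated_at = now - 1.5 * SLOT   # sample was generated ~1.5 slots ago (assumption)
        aoi = now - generated_at          # AoI resets to the age of that sample
    else:
        aoi += SLOT                       # otherwise AoI grows by one slot
    print(f"t={now:4.1f}s  AoI={aoi:4.1f}s")
```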
arXiv Detail & Related papers (2024-07-01T15:37:38Z)
- Multi-Agent Proximal Policy Optimization For Data Freshness in UAV-assisted Networks [4.042622147977782]
We focus on the case where the collected data is time-sensitive, and it is critical to maintain its timeliness.
Our objective is to optimally design the UAVs' trajectories and the subsets of visited IoT devices such that the global Age-of-Updates (AoU) is minimized.
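As a reference for the optimizer named in the title, the snippet below sketches the standard PPO clipped surrogate loss that each agent would minimize on its own trajectories; the batch values are synthetic and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective used by (multi-agent) PPO; returns a loss to minimize."""
    ratio = np.exp(logp_new - logp_old)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))  # maximize surrogate = minimize its negative

# Toy batch: each agent would evaluate this on its own trajectories.
rng = np.random.default_rng(1)
logp_old = rng.normal(-1.0, 0.1, size=64)
logp_new = logp_old + rng.normal(0.0, 0.05, size=64)
adv = rng.normal(size=64)
print("clipped PPO loss:", ppo_clip_loss(logp_new, logp_old, adv))
```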
arXiv Detail & Related papers (2023-03-15T15:03:09Z)
- Cooperative Multi-Agent Deep Reinforcement Learning for Reliable and Energy-Efficient Mobile Access via Multi-UAV Control [13.692977942834627]
This paper proposes a novel multi-agent deep reinforcement learning (MADRL)-based positioning algorithm for the collaboration of multiple unmanned aerial vehicles (UAVs).
The primary objective of the proposed algorithm is to establish dependable mobile access networks for cellular vehicle-to-everything (C-V2X) communication.
arXiv Detail & Related papers (2022-10-03T14:01:52Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence [76.96698721128406]
Mobile edge computing (MEC) is considered a novel paradigm for computation- and delay-sensitive tasks in fifth generation (5G) networks and beyond.
This paper provides a comprehensive research review of RL-enabled MEC and offers insight for further development.
arXiv Detail & Related papers (2022-01-27T10:02:54Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
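The option-based hierarchical structure mentioned in the title can be sketched as a two-level control loop: a high-level policy picks a temporally extended option and a low-level policy issues primitive actions until the option terminates. The option names and termination rule below are placeholders, not the paper's design.

```python
import random

# High level: choose an option (a temporally extended sub-task); low level: act until it terminates.
OPTIONS = ["recharge_auav", "reposition_irs", "serve_cluster"]   # illustrative option set

def high_level_policy(state):
    return random.choice(OPTIONS)            # placeholder for the learned option policy

def low_level_policy(option, state):
    return f"{option}:primitive_action"      # placeholder for the learned intra-option policy

def option_terminated(option, state, steps):
    return steps >= 3                        # placeholder termination condition (assumption)

state = {"t": 0}
for episode_step in range(2):                # two option selections, for illustration
    option = high_level_policy(state)
    steps = 0
    while not option_terminated(option, state, steps):
        action = low_level_policy(option, state)
        state["t"] += 1
        steps += 1
        print(f"t={state['t']:2d}  option={option:15s}  action={action}")
```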
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
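To make the two-phase protocol and the sum-rate objective concrete, here is a toy calculation in which devices harvest energy for a fraction of the slot and then transmit with whatever they collected; every constant (powers, gains, bandwidth, noise, harvesting efficiency) is an illustrative assumption rather than a value from the paper.

```python
import numpy as np

# Illustrative two-phase model: devices harvest energy for a fraction TAU of the slot,
# then transmit in the remaining (1 - TAU) with the harvested energy.
P_UAV = 10.0            # UAV transmit power for wireless power transfer [W] (assumption)
ETA = 0.6               # energy-harvesting efficiency (assumption)
TAU = 0.5               # fraction of the slot spent harvesting (assumption)
BW = 1e6                # bandwidth [Hz] (assumption)
NOISE = 1e-12           # noise power [W] (assumption)

g = np.array([1e-6, 5e-7, 2e-7])      # channel gains for 3 IoT devices (assumption)

harvested = ETA * P_UAV * g * TAU               # energy harvested in phase 1 (per unit time)
p_tx = harvested / (1.0 - TAU)                  # spend it all transmitting in phase 2
rates = (1.0 - TAU) * BW * np.log2(1.0 + p_tx * g / NOISE)

print("per-device rates [bit/s]:", rates.round(1))
print("network sum-rate [bit/s]:", rates.sum().round(1))
```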
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- Minimizing Age-of-Information for Fog Computing-supported Vehicular Networks with Deep Q-learning [15.493225546165627]
Age of Information (AoI) is a metric to evaluate the performance of wireless links between vehicles and cloud/fog servers.
This paper introduces a novel proactive and data-driven approach to optimize the driving route with a main objective of guaranteeing the confidence of AoI.
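As a stand-in for the deep Q-learning component, the sketch below runs the tabular Q-learning update (the same Bellman target a DQN regresses to) on a toy route-selection environment whose states, actions, and AoI-flavoured reward are illustrative assumptions.

```python
import numpy as np

# Tabular stand-in for the DQN update: Q(s,a) <- Q(s,a) + lr * (r + gamma*max_a' Q(s',a') - Q(s,a)).
# States = road segments, actions = next segment; sizes and reward are illustrative assumptions.
N_STATES, N_ACTIONS = 6, 3
GAMMA, LR, EPS = 0.95, 0.1, 0.2

rng = np.random.default_rng(2)
Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Toy environment: reward is higher when the chosen route keeps AoI low (assumption)."""
    s_next = (s + a + 1) % N_STATES
    reward = 1.0 if a == 0 else -0.1        # pretend action 0 is the AoI-friendly route
    return s_next, reward

s = 0
for _ in range(500):
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(Q[s].argmax())
    s_next, r = step(s, a)
    td_target = r + GAMMA * Q[s_next].max()      # bootstrapped Q-learning target
    Q[s, a] += LR * (td_target - Q[s, a])        # gradient-free analogue of the DQN loss step
    s = s_next

print("greedy action per state:", Q.argmax(axis=1))
```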
arXiv Detail & Related papers (2020-04-04T05:19:25Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, the UAV-BSs obtain an effective real-time trajectory policy that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
- Artificial Intelligence Aided Next-Generation Networks Relying on UAVs [140.42435857856455]
Artificial intelligence (AI) assisted unmanned aerial vehicle (UAV) aided next-generation networking is proposed for dynamic environments.
In the AI-enabled UAV-aided wireless networks (UAWN), multiple UAVs are employed as aerial base stations, which are capable of rapidly adapting to the dynamic environment.
As a benefit of the AI framework, several challenges of conventional UAWN may be circumvented, leading to enhanced network performance, improved reliability and agile adaptivity.
arXiv Detail & Related papers (2020-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.