Multi-Agent Deep Reinforcement Learning Based Trajectory Planning for
Multi-UAV Assisted Mobile Edge Computing
- URL: http://arxiv.org/abs/2009.11277v1
- Date: Wed, 23 Sep 2020 17:44:07 GMT
- Title: Multi-Agent Deep Reinforcement Learning Based Trajectory Planning for
Multi-UAV Assisted Mobile Edge Computing
- Authors: Liang Wang, Kezhi Wang, Cunhua Pan, Wei Xu, Nauman Aslam and Lajos
Hanzo
- Abstract summary: An unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) framework is proposed.
We aim to jointly optimize the geographical fairness among all the user equipments (UEs) and the fairness of each UAV's UE-load.
We show that our proposed solution outperforms traditional benchmark algorithms.
- Score: 99.27205900403578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) framework
is proposed, where several UAVs having different trajectories fly over the
target area and support the user equipments (UEs) on the ground. We aim to
jointly optimize the geographical fairness among all the UEs, the fairness of
each UAV's UE-load and the overall energy consumption of UEs. The resulting
optimization problem involves both integer and continuous variables and is
challenging to solve. To address it, a multi-agent deep
reinforcement learning based trajectory control algorithm is proposed for
managing the trajectory of each UAV independently, where the popular
Multi-Agent Deep Deterministic Policy Gradient (MADDPG) method is applied.
Given the UAVs' trajectories, a low-complexity approach is introduced for
optimizing the offloading decisions of UEs. We show that our proposed solution
outperforms traditional algorithms in terms of the fairness of serving UEs, the
fairness of the UE-load at each UAV, and the energy consumption of all the UEs.
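The centralized-training, decentralized-execution structure of MADDPG referenced in the abstract can be sketched as follows. This is an illustrative skeleton, not the authors' implementation: the `Actor` and `CentralCritic` classes use linear maps as stand-ins for the paper's neural networks, and all dimensions and noise scales are assumed for the example.

```python
# Structural sketch of MADDPG-style trajectory control: each UAV runs its
# own actor on local observations (decentralized execution), while one
# centralized critic scores the joint observation-action vector during
# training. Linear maps stand in for neural networks; sizes are illustrative.
import math
import random

random.seed(0)

def rand_vec(n, scale=0.1):
    return [random.gauss(0.0, scale) for _ in range(n)]

class Actor:
    """Maps one UAV's local observation to a bounded continuous action
    (e.g. a 2-D velocity command steering its trajectory)."""
    def __init__(self, obs_dim, act_dim):
        self.W = [rand_vec(obs_dim) for _ in range(act_dim)]

    def act(self, obs, noise=0.0):
        a = [math.tanh(sum(w * o for w, o in zip(row, obs))) for row in self.W]
        # exploration noise, clipped back into the action bounds [-1, 1]
        return [max(-1.0, min(1.0, x + random.gauss(0.0, noise))) for x in a]

class CentralCritic:
    """Q(o_1..o_N, a_1..a_N): evaluates the concatenation of every
    agent's observation and action (centralized training)."""
    def __init__(self, joint_dim):
        self.w = rand_vec(joint_dim)

    def q(self, joint):
        return sum(w * x for w, x in zip(self.w, joint))

n_uavs, obs_dim, act_dim = 3, 4, 2
actors = [Actor(obs_dim, act_dim) for _ in range(n_uavs)]
critic = CentralCritic(n_uavs * (obs_dim + act_dim))

obs = [rand_vec(obs_dim, scale=1.0) for _ in range(n_uavs)]
acts = [actor.act(o, noise=0.1) for actor, o in zip(actors, obs)]
joint = [x for vec in obs + acts for x in vec]   # critic input
q_value = critic.q(joint)
```

At execution time each UAV needs only its own actor and local observation; the critic (and the joint vector it consumes) is used only while training.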
Related papers
- Multi-UAV Multi-RIS QoS-Aware Aerial Communication Systems using DRL and PSO [34.951735976771765]
Unmanned Aerial Vehicles (UAVs) have attracted the attention of researchers in academia and industry for providing wireless services to ground users.
However, the limited resources of UAVs can pose challenges to adopting them for such applications.
Our system model considers a UAV swarm that navigates an area, providing wireless communication to ground users with RIS support to improve the coverage of the UAVs.
arXiv Detail & Related papers (2024-06-16T17:53:56Z)
- UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning [79.16150966434299]
We formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP) to maximize the transmission rate of the UVAA and minimize the energy consumption of all UAVs.
We use the heterogeneous-agent trust region policy optimization (HATRPO) as the basic framework, and then propose an improved HATRPO algorithm, namely HATRPO-UCB.
arXiv Detail & Related papers (2024-04-11T03:19:22Z)
- Multi-Agent Reinforcement Learning for Offloading Cellular Communications with Cooperating UAVs [21.195346908715972]
Unmanned aerial vehicles present an alternative means to offload data traffic from terrestrial BSs.
This paper presents a novel approach to efficiently serve multiple UAVs for data offloading from terrestrial BSs.
arXiv Detail & Related papers (2024-02-05T12:36:08Z)
- Joint User Association, Interference Cancellation and Power Control for Multi-IRS Assisted UAV Communications [80.35959154762381]
Intelligent reflecting surface (IRS)-assisted unmanned aerial vehicle (UAV) communications are expected to alleviate the load of ground base stations in a cost-effective way.
Existing studies mainly focus on the deployment and resource allocation of a single IRS instead of multiple IRSs.
We propose a new optimization algorithm for joint IRS-user association, trajectory optimization of UAVs, successive interference cancellation (SIC) decoding order scheduling and power allocation.
arXiv Detail & Related papers (2023-12-08T01:57:10Z)
- Multi-Agent Proximal Policy Optimization For Data Freshness in UAV-assisted Networks [4.042622147977782]
We focus on the case where the collected data is time-sensitive, and it is critical to maintain its timeliness.
Our objective is to optimally design the UAVs' trajectories and the subsets of visited IoT devices such that the global Age-of-Updates (AoU) is minimized.
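The AoU objective in that paper can be illustrated with a minimal sketch. The function below is a common mean-staleness formulation assumed for intuition only; the paper's exact AoU definition may differ.

```python
# Hypothetical illustration of a mean Age-of-Updates (AoU) objective:
# each IoT device's age grows with time and resets when a UAV collects
# its data, so trajectory planning trades off which devices to visit.
def global_aou(last_update_times, now):
    """Mean staleness across all devices; lower is better."""
    return sum(now - t for t in last_update_times) / len(last_update_times)

before = global_aou([0.0, 2.0, 5.0], now=10.0)   # per-device ages 10, 8, 5
# a UAV visits device 0 at t = 10, resetting its age to zero
after = global_aou([10.0, 2.0, 5.0], now=10.0)   # per-device ages 0, 8, 5
```

Visiting the stalest device first yields the largest drop in the global average, which is the intuition the trajectory optimization exploits.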
arXiv Detail & Related papers (2023-03-15T15:03:09Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- Multi-Agent Reinforcement Learning in NOMA-aided UAV Networks for Cellular Offloading [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T20:22:05Z)
- Mobile Cellular-Connected UAVs: Reinforcement Learning for Sky Limits [71.28712804110974]
We propose a novel general multi-armed bandit (MAB) algorithm to reduce the disconnectivity time, handover rate, and energy consumption of the UAV.
We show how each of these performance indicators (PIs) is improved by adopting a proper range for the corresponding learning parameter.
arXiv Detail & Related papers (2020-09-21T12:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.