Density-Aware Reinforcement Learning to Optimise Energy Efficiency in
UAV-Assisted Networks
- URL: http://arxiv.org/abs/2306.08785v1
- Date: Wed, 14 Jun 2023 23:43:18 GMT
- Title: Density-Aware Reinforcement Learning to Optimise Energy Efficiency in
UAV-Assisted Networks
- Authors: Babatunji Omoniwa, Boris Galkin, Ivana Dusparic
- Abstract summary: We propose a density-aware communication-enabled multi-agent decentralised double deep Q-network (DACEMAD-DDQN) approach.
Our approach outperforms state-of-the-art MARL approaches in terms of energy efficiency (EE) by as much as 65%-85%.
- Score: 2.6985600125290907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unmanned aerial vehicles (UAVs) serving as aerial base stations can be
deployed to provide wireless connectivity to mobile users, such as vehicles.
However, the density of vehicles on roads often varies spatially and temporally
primarily due to mobility and traffic situations in a geographical area, making
it difficult to provide ubiquitous service. Moreover, as energy-constrained
UAVs hover in the sky while serving mobile users, they may be faced with
interference from nearby UAV cells or other access points sharing the same
frequency band, thereby impacting the system's energy efficiency (EE). Recent
multi-agent reinforcement learning (MARL) approaches applied to optimise user
coverage worked well in reasonably even densities but might not perform as well
under uneven user distributions, e.g., in urban road networks with uneven
concentrations of vehicles. In this work, we propose a density-aware
communication-enabled multi-agent decentralised double deep Q-network
(DACEMAD-DDQN) approach that maximises the total system's EE by jointly
optimising the trajectory of each UAV, the number of connected users, and the
UAVs' energy consumption while keeping track of dense and uneven user
distributions. Our approach outperforms state-of-the-art MARL approaches in
terms of EE by as much as 65%-85%.
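The core learning rule behind a decentralised double deep Q-network agent like the one described above is the double-DQN target: the online network selects the greedy next action and the target network evaluates it, which reduces over-estimation bias. The sketch below is generic and illustrative, not the paper's implementation; the 7-action trajectory space and the users-per-joule reward shaping are hypothetical stand-ins.

```python
import numpy as np

GAMMA = 0.99  # discount factor (illustrative value)

def double_dqn_target(reward, q_next_online, q_next_target, gamma=GAMMA):
    """Double DQN: select the greedy action with the online network,
    then evaluate that action with the target network."""
    a_star = int(np.argmax(q_next_online))
    return reward + gamma * q_next_target[a_star]

# Hypothetical EE-shaped reward for one UAV: users served per joule consumed.
connected_users, energy_joules = 12, 30.0
reward = connected_users / energy_joules  # 0.4

# Q-values over 7 hypothetical trajectory actions (hover, +/- x, y, z moves).
q_next_online = np.array([0.1, 0.5, 0.2, 0.0, 0.3, 0.4, 0.1])
q_next_target = np.array([0.2, 0.6, 0.1, 0.0, 0.2, 0.5, 0.3])

y = double_dqn_target(reward, q_next_online, q_next_target)
# online net picks action 1; target net values it at 0.6, so y = 0.4 + 0.99 * 0.6
```

Each agent would regress its online Q-values toward `y`; the target network is refreshed periodically from the online weights.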
Related papers
- UAV Based 5G Network: A Practical Survey Study [0.0]
Unmanned aerial vehicles (UAVs) are anticipated to significantly contribute to the development of new wireless networks.
UAVs may transfer massive volumes of data in real time by utilizing the low-latency and high-speed capabilities of 5G networks.
arXiv Detail & Related papers (2022-12-27T00:34:59Z)
- Cooperative Multi-Agent Deep Reinforcement Learning for Reliable and Energy-Efficient Mobile Access via Multi-UAV Control [13.692977942834627]
This paper addresses a novel multi-agent deep reinforcement learning (MADRL)-based positioning algorithm for multiple unmanned aerial vehicles (UAVs) collaboration.
The primary objective of the proposed algorithm is to establish dependable mobile access networks for cellular vehicle-to-everything (C-V2X) communication.
arXiv Detail & Related papers (2022-10-03T14:01:52Z)
- Optimising Energy Efficiency in UAV-Assisted Networks using Deep Reinforcement Learning [2.6985600125290907]
We study the energy efficiency (EE) optimisation of unmanned aerial vehicles (UAVs).
Recent multi-agent reinforcement learning approaches optimise the system's EE using a 2D trajectory design.
We propose a cooperative Multi-Agent Decentralised Double Deep Q-Network (MAD-DDQN) approach.
arXiv Detail & Related papers (2022-04-04T15:47:59Z)
- Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency of Fixed-Wing UAV Cellular Access Points [3.502112118170715]
We propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points.
In our approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps.
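The Dueling architecture named above splits each Q-value into a state value and per-action advantages, recombined with a mean-subtracted aggregation. This is a minimal numeric sketch of that standard aggregation step, not the paper's network; the three-action advantage vector is hypothetical.

```python
import numpy as np

def dueling_aggregate(value, advantages):
    """Dueling DQN head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    adv = np.asarray(advantages, dtype=float)
    return value + adv - adv.mean()

# Hypothetical 3D-trajectory actions for a fixed-wing UAV access point.
q = dueling_aggregate(value=2.0, advantages=[0.5, -0.5, 0.0])
# mean advantage is 0.0, so Q = V + A elementwise: [2.5, 1.5, 2.0]
```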
arXiv Detail & Related papers (2021-11-03T14:49:17Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
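A network sum-rate objective like the one above is typically a sum of Shannon capacities over the served devices. The sketch below shows that objective in isolation; the bandwidth and SNR values are hypothetical, and the actual paper optimises this quantity through its RL agents rather than in closed form.

```python
import math

def network_sum_rate(bandwidth_hz, snrs_linear):
    """Total network throughput (bit/s) as a sum of Shannon capacities:
    sum over devices of B * log2(1 + SNR)."""
    return sum(bandwidth_hz * math.log2(1.0 + snr) for snr in snrs_linear)

# Three IoT devices with illustrative linear SNRs sharing a 1 MHz channel.
rate = network_sum_rate(1e6, [3.0, 1.0, 0.5])
# log2(4) + log2(2) + log2(1.5) ≈ 3.585, i.e. about 3.585 Mbit/s
```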
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- A Multi-UAV System for Exploration and Target Finding in Cluttered and GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system improves time-cost, the proportion of search area surveyed, and success rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now being deployed to enhance network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Energy-aware placement optimization of UAV base stations via decentralized multi-agent Q-learning [3.502112118170715]
Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be deployed to provide wireless connectivity to ground devices in events of increased network demand, points-of-failure in existing infrastructure, or disasters.
It is challenging to conserve the energy of UAVs during prolonged coverage tasks, considering their limited on-board battery capacity.
We propose a decentralized Q-learning approach, where each UAV-BS is equipped with an autonomous agent that maximizes the connectivity to ground devices while improving its energy utilization.
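Decentralized Q-learning of the kind summarized above rests on the standard tabular update each agent can run locally. The sketch below is generic, not the paper's formulation; the 4-state/3-action toy space and the users-minus-hover-power reward shaping are hypothetical.

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount (illustrative values)

def q_update(Q, s, a, r, s_next, alpha=ALPHA, gamma=GAMMA):
    """Standard tabular Q-learning update an individual UAV-BS agent
    could run on its own local observations."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Hypothetical toy problem: 4 candidate placements, 3 actions (stay, up, down).
Q = np.zeros((4, 3))
# Reward trades connectivity against energy use, e.g. users served minus a
# hover-power penalty (numbers purely illustrative).
reward = 5.0 - 1.2
Q = q_update(Q, s=0, a=1, r=reward, s_next=2)
# Q[0, 1] moves from 0 toward the TD target: 0.1 * 3.8 = 0.38
```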
arXiv Detail & Related papers (2021-06-01T22:49:42Z)
- A Comprehensive Overview on 5G-and-Beyond Networks with UAVs: From Communications to Sensing and Intelligence [152.89360859658296]
5G networks need to support three typical usage scenarios, namely, enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC).
On the one hand, UAVs can be leveraged as cost-effective aerial platforms to provide ground users with enhanced communication services by exploiting their high cruising altitude and controllable maneuverability in 3D space.
On the other hand, providing such communication services simultaneously for both UAV and ground users poses new challenges due to the need for ubiquitous 3D signal coverage as well as the strong air-ground network interference.
arXiv Detail & Related papers (2020-10-19T08:56:04Z)
- Multi-Agent Deep Reinforcement Learning Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing [99.27205900403578]
An unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) framework is proposed.
We aim to jointly optimize the geographical fairness among all the user equipments (UEs) and the fairness of each UAV's UE-load.
We show that our proposed solution considerably outperforms other traditional algorithms.
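Fairness objectives like the geographical and UE-load fairness above are commonly scored with Jain's fairness index; whether this paper uses that exact metric is not stated here, so the sketch below is a generic illustration with hypothetical UE-load numbers.

```python
def jains_fairness(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one agent takes everything) to 1.0 (perfectly even)."""
    n = len(xs)
    s = sum(xs)
    return (s * s) / (n * sum(x * x for x in xs))

# Perfectly even UE loads across 3 UAVs vs. a heavily skewed split.
even = jains_fairness([10, 10, 10])   # 1.0
skewed = jains_fairness([28, 1, 1])   # well below 1.0
```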
arXiv Detail & Related papers (2020-09-23T17:44:07Z)
- Artificial Intelligence Aided Next-Generation Networks Relying on UAVs [140.42435857856455]
Artificial intelligence (AI) assisted unmanned aerial vehicle (UAV) aided next-generation networking is proposed for dynamic environments.
In the AI-enabled UAV-aided wireless networks (UAWN), multiple UAVs are employed as aerial base stations, which are capable of rapidly adapting to the dynamic environment.
As a benefit of the AI framework, several challenges of conventional UAWN may be circumvented, leading to enhanced network performance, improved reliability and agile adaptivity.
arXiv Detail & Related papers (2020-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.