Energy-aware placement optimization of UAV base stations via
decentralized multi-agent Q-learning
- URL: http://arxiv.org/abs/2106.00845v1
- Date: Tue, 1 Jun 2021 22:49:42 GMT
- Title: Energy-aware placement optimization of UAV base stations via
decentralized multi-agent Q-learning
- Authors: Babatunji Omoniwa, Boris Galkin, Ivana Dusparic
- Abstract summary: Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be deployed to provide wireless connectivity to ground devices in events of increased network demand, points-of-failure in existing infrastructure, or disasters.
It is challenging to conserve the energy of UAVs during prolonged coverage tasks, considering their limited on-board battery capacity.
We propose a decentralized Q-learning approach, where each UAV-BS is equipped with an autonomous agent that maximizes the connectivity to ground devices while improving its energy utilization.
- Score: 3.502112118170715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be
deployed to provide wireless connectivity to ground devices in events of
increased network demand, points-of-failure in existing infrastructure, or
disasters. However, it is challenging to conserve the energy of UAVs during
prolonged coverage tasks, considering their limited on-board battery capacity.
Reinforcement learning-based (RL) approaches have previously been used to
improve the energy utilization of multiple UAVs; however, a central cloud
controller is assumed to have complete knowledge of the end-devices' locations,
i.e., the controller periodically scans and sends updates for UAV
decision-making. This assumption is impractical in dynamic network environments
with mobile ground devices. To address this problem, we propose a decentralized
Q-learning approach, where each UAV-BS is equipped with an autonomous agent
that maximizes the connectivity to ground devices while improving its energy
utilization. Experimental results show that the proposed design significantly
outperforms the centralized approaches in jointly maximizing the number of
connected ground devices and the energy utilization of the UAV-BSs.
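The decentralized scheme described in the abstract can be sketched as one independent tabular Q-learning agent per UAV-BS. The state encoding, action set, reward weights, and hyperparameters below are illustrative assumptions for a minimal sketch, not the paper's actual formulation.

```python
import random


class UAVAgent:
    """Independent Q-learning agent for one UAV-BS (illustrative sketch).

    The state is assumed to be a discretized grid cell, actions move the
    UAV, and the reward trades off connected ground devices against
    energy spent. All of these choices are assumptions for illustration.
    """

    ACTIONS = ["north", "south", "east", "west", "hover"]

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.q = {}             # (state, action) -> estimated value

    def choose_action(self, state):
        # epsilon-greedy action selection over the local Q-table
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)


def reward(connected_devices, energy_used, w_conn=1.0, w_energy=0.5):
    """Connectivity-vs-energy trade-off; the weights are assumed values."""
    return w_conn * connected_devices - w_energy * energy_used
```

Because each agent keeps only its own Q-table and observes only its local connectivity and energy use, no central controller with global device locations is needed, which is the point of the decentralized design.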
Related papers
- UAV Swarm-enabled Collaborative Secure Relay Communications with
Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAV to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z)
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual
Antenna Arrays [55.736718475856726]
The unmanned aerial vehicle (UAV) network is a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into both IoT devices and UAVs to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- Density-Aware Reinforcement Learning to Optimise Energy Efficiency in
UAV-Assisted Networks [2.6985600125290907]
We propose a density-aware communication-enabled multi-agent decentralised double deep Q-network (DACEMAD-DDQN) approach.
Our approach outperforms state-of-the-art MARL approaches in terms of energy efficiency (EE) by as much as 65%-85%.
arXiv Detail & Related papers (2023-06-14T23:43:18Z)
- 5G Network on Wings: A Deep Reinforcement Learning Approach to the
UAV-based Integrated Access and Backhaul [11.197456628712846]
Unmanned aerial vehicle (UAV) based aerial networks offer a promising alternative for fast, flexible, and reliable wireless communications.
In this paper, we study how to control multiple UAV-BSs in both static and dynamic environments.
A deep reinforcement learning algorithm is developed to jointly optimize the three-dimensional placement of these multiple UAV-BSs.
arXiv Detail & Related papers (2022-02-04T07:45:06Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT
Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement
Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points [3.502112118170715]
We propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points.
In our approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps.
arXiv Detail & Related papers (2021-11-03T14:49:17Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer
Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now being deployed to enhance network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Constrained Deep Reinforcement Learning for Energy Sustainable Multi-UAV
based Random Access IoT Networks with NOMA [20.160827428161898]
We apply the Non-Orthogonal Multiple Access technique to improve massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.
IoT devices contend for accessing the shared wireless channel using an adaptive $p$-persistent slotted Aloha protocol; and the solar-powered UAVs adopt Successive Interference Cancellation (SIC) to decode multiple received data from IoT devices to improve access efficiency.
arXiv Detail & Related papers (2020-01-31T22:05:30Z)
- Artificial Intelligence Aided Next-Generation Networks Relying on UAVs [140.42435857856455]
Artificial intelligence (AI) assisted unmanned aerial vehicle (UAV) aided next-generation networking is proposed for dynamic environments.
In the AI-enabled UAV-aided wireless networks (UAWN), multiple UAVs are employed as aerial base stations, which are capable of rapidly adapting to the dynamic environment.
As a benefit of the AI framework, several challenges of conventional UAWN may be circumvented, leading to enhanced network performance, improved reliability and agile adaptivity.
arXiv Detail & Related papers (2020-01-28T15:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.