Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points
- URL: http://arxiv.org/abs/2111.02258v1
- Date: Wed, 3 Nov 2021 14:49:17 GMT
- Title: Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points
- Authors: Boris Galkin, Babatunji Omoniwa, Ivana Dusparic
- Abstract summary: We propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points.
In our approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps.
- Score: 3.502112118170715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unmanned Aerial Vehicles (UAVs) promise to become an intrinsic part of next
generation communications, as they can be deployed to provide wireless
connectivity to ground users to supplement existing terrestrial networks. The
majority of the existing research into the use of UAV access points for
cellular coverage considers rotary-wing UAV designs (i.e. quadcopters).
However, we expect fixed-wing UAVs to be more appropriate for connectivity
purposes in scenarios where long flight times are necessary (such as for rural
coverage), as fixed-wing UAVs rely on a more energy-efficient form of flight
when compared to the rotary-wing design. As fixed-wing UAVs are typically
incapable of hovering in place, their deployment optimisation involves
optimising their individual flight trajectories in a way that allows them to
deliver high quality service to the ground users in an energy-efficient manner.
In this paper, we propose a multi-agent deep reinforcement learning approach to
optimise the energy efficiency of fixed-wing UAV cellular access points while
still allowing them to deliver high-quality service to users on the ground. In
our decentralized approach, each UAV is equipped with a Dueling Deep Q-Network
(DDQN) agent which can adjust the 3D trajectory of the UAV over a series of
timesteps. By coordinating with their neighbours, the UAVs adjust their
individual flight trajectories in a manner that optimises the total system
energy efficiency. We benchmark the performance of our approach against a
series of heuristic trajectory planning strategies, and demonstrate that our
method can improve the system energy efficiency by as much as 70%.
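The dueling architecture mentioned in the abstract splits Q-value estimation into a state-value stream and an action-advantage stream, recombined as Q(s,a) = V(s) + A(s,a) - mean(A). The sketch below is a minimal linear illustration of that aggregation, not the paper's actual deep network; the feature size, weights, and six-action space are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q_values(features, w_value, w_adv):
    """Combine value and advantage streams into Q-values:
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""
    v = features @ w_value   # scalar state value, shape (1,)
    a = features @ w_adv     # per-action advantages, shape (n_actions,)
    return v + a - a.mean()  # mean-subtraction makes the decomposition identifiable

# Toy state features and randomly initialised stream weights (illustrative only).
n_features, n_actions = 8, 6  # e.g. 6 discrete heading/altitude adjustments
features = rng.normal(size=n_features)
w_value = rng.normal(size=(n_features, 1))
w_adv = rng.normal(size=(n_features, n_actions))

q = dueling_q_values(features, w_value, w_adv)
greedy_action = int(np.argmax(q))  # trajectory adjustment the agent would pick
```

Subtracting the advantage mean is the standard identifiability trick in dueling networks: it forces the advantage stream to carry only relative preferences between actions, while the value stream captures how good the state is overall.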
Related papers
- Multi-UAV Multi-RIS QoS-Aware Aerial Communication Systems using DRL and PSO [34.951735976771765]
Unmanned Aerial Vehicles (UAVs) have attracted the attention of researchers in academia and industry for providing wireless services to ground users.
However, the limited resources of UAVs can pose challenges to adopting them for such applications.
Our system model considers a UAV swarm that navigates an area, providing wireless communication to ground users with RIS support to improve the coverage of the UAVs.
arXiv Detail & Related papers (2024-06-16T17:53:56Z)
- UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning [79.16150966434299]
We formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP) to maximize the transmission rate of the UAV-enabled virtual antenna array (UVAA) and minimize the energy consumption of all UAVs.
We use the heterogeneous-agent trust region policy optimization (HATRPO) as the basic framework, and then propose an improved HATRPO algorithm, namely HATRPO-UCB.
arXiv Detail & Related papers (2024-04-11T03:19:22Z)
- UAV Swarm-enabled Collaborative Secure Relay Communications with Time-domain Colluding Eavesdropper [115.56455278813756]
Unmanned aerial vehicles (UAVs) serving as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize UAVs to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z)
- Optimising Energy Efficiency in UAV-Assisted Networks using Deep Reinforcement Learning [2.6985600125290907]
We study the energy efficiency (EE) optimisation of unmanned aerial vehicles (UAVs).
Recent multi-agent reinforcement learning approaches optimise the system's EE using a 2D trajectory design.
We propose a cooperative Multi-Agent Decentralised Double Deep Q-Network (MAD-DDQN) approach.
arXiv Detail & Related papers (2022-04-04T15:47:59Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
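The two-phase harvest-then-transmit scheme described above lends itself to a simple rate calculation that a DRL agent would seek to maximise across devices. The sketch below uses a linear energy-harvesting model and Shannon-capacity uplink rate; the model choice and every numeric parameter are illustrative assumptions, not values from the paper.

```python
import math

def harvested_energy(p_tx, channel_gain, t_harvest, efficiency=0.6):
    """Phase 1: RF energy an IoT device harvests from the UAV (linear EH model, assumed)."""
    return efficiency * p_tx * channel_gain * t_harvest

def uplink_rate(e_harvested, channel_gain, t_transmit, bandwidth, noise_power):
    """Phase 2: achievable rate when the device spends its harvested energy on uplink."""
    p_device = e_harvested / t_transmit            # transmit power budget from phase 1
    snr = p_device * channel_gain / noise_power
    return bandwidth * math.log2(1.0 + snr)        # Shannon capacity of the uplink

# Illustrative numbers (all assumed): 1 W UAV transmit power, -60 dB channel,
# 0.5 s per phase, 1 MHz bandwidth, 1e-14 W noise power.
g = 1e-6
e = harvested_energy(p_tx=1.0, channel_gain=g, t_harvest=0.5)
r = uplink_rate(e, channel_gain=g, t_transmit=0.5, bandwidth=1e6, noise_power=1e-14)
sum_rate = r  # with several devices, the agent would maximise the sum of such rates
```

In the actual MDP formulation, the UAV's position (which shapes `channel_gain`) and the time split between the two phases would be the decision variables the DRL algorithms optimise.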
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are increasingly being deployed to enhance network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system that relies on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Energy-aware placement optimization of UAV base stations via decentralized multi-agent Q-learning [3.502112118170715]
Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be deployed to provide wireless connectivity to ground devices in events of increased network demand, points-of-failure in existing infrastructure, or disasters.
It is challenging to conserve the energy of UAVs during prolonged coverage tasks, considering their limited on-board battery capacity.
We propose a decentralized Q-learning approach, where each UAV-BS is equipped with an autonomous agent that maximizes the connectivity to ground devices while improving its energy utilization.
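The decentralized Q-learning approach above rests on the standard tabular update each autonomous agent applies after every step. The sketch below illustrates that update rule; the coarse state grid, action meanings, and reward signs are illustrative assumptions, not the paper's actual formulation.

```python
import random

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])."""
    td_target = reward + gamma * max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])

n_states, n_actions = 4, 3  # e.g. coarse grid cells x {reposition up, down, hold}
Q = [[0.0] * n_actions for _ in range(n_states)]

random.seed(1)
# Toy episode: the reward trades off connected ground devices against energy
# spent (assumed: action 0 improves connectivity per unit of energy used).
for _ in range(200):
    s = random.randrange(n_states)
    a = random.randrange(n_actions)
    r = 1.0 if a == 0 else -0.1
    q_update(Q, s, a, r, random.randrange(n_states))
```

In the decentralized setting, each UAV-BS runs this loop independently on its own local observations; coordination emerges only through the shared environment rather than a central controller.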
arXiv Detail & Related papers (2021-06-01T22:49:42Z)
- Efficient UAV Trajectory-Planning using Economic Reinforcement Learning [65.91405908268662]
We introduce REPlanner, a novel reinforcement learning algorithm inspired by economic transactions to distribute tasks between UAVs.
We formulate the path planning problem as a multi-agent economic game, where agents can cooperate and compete for resources.
As the system computes task distributions via UAV cooperation, it is highly resilient to any change in the swarm size.
arXiv Detail & Related papers (2021-03-03T20:54:19Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.