RL-Based Cargo-UAV Trajectory Planning and Cell Association for Minimum
Handoffs, Disconnectivity, and Energy Consumption
- URL: http://arxiv.org/abs/2312.02478v1
- Date: Tue, 5 Dec 2023 04:06:09 GMT
- Title: RL-Based Cargo-UAV Trajectory Planning and Cell Association for Minimum
Handoffs, Disconnectivity, and Energy Consumption
- Authors: Nesrine Cherif, Wael Jaafar, Halim Yanikomeroglu, Abbas Yongacoglu
- Abstract summary: Unmanned aerial vehicles (UAVs) are a promising technology for last-mile cargo delivery.
Existing cellular networks were primarily designed to serve ground users.
We propose a novel approach for joint cargo-UAV trajectory planning and cell association.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unmanned aerial vehicles (UAVs) are a promising technology for
last-mile cargo delivery. However, limited on-board battery capacity, cellular
unreliability, and frequent handoffs in the airspace are the main obstacles to
unleashing their full potential. Given that existing cellular networks were
primarily designed to serve ground users, re-using the same architecture for
highly mobile aerial users, e.g., cargo-UAVs, is challenging. Indeed, to
ensure safe delivery using cargo-UAVs, it is crucial to utilize the available
energy efficiently, while guaranteeing reliable connectivity for
command-and-control and avoiding frequent handoffs. To achieve this goal, we
propose a novel approach for joint cargo-UAV trajectory planning and cell
association. Specifically, we formulate the cargo-UAV mission as a
multi-objective problem aiming to 1) minimize energy consumption, 2) reduce
handoff events, and 3) guarantee cellular reliability along the trajectory. We
leverage reinforcement learning (RL) to jointly optimize the cargo-UAV's
trajectory and cell association. Simulation results demonstrate the
performance improvement of our proposed method, in terms of handoffs,
disconnectivity, and energy consumption, compared to benchmarks.
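The joint trajectory/association idea in the abstract can be sketched with
tabular Q-learning on a toy model. Everything below (the 1-D corridor of
waypoints, the two-cell coverage map, the reward weights) is a hypothetical
illustration for intuition only, not the paper's actual system model,
algorithm, or parameters:

```python
import random

# Hypothetical toy setup: a 1-D corridor of waypoints 0..N, two cells
# whose coverage probability depends on the UAV's position.
N = 6                                 # goal waypoint; the UAV starts at 0
CELLS = (0, 1)
COVERAGE = {0: [0.9, 0.9, 0.9, 0.5, 0.2, 0.2, 0.2],
            1: [0.2, 0.2, 0.5, 0.9, 0.9, 0.9, 0.9]}
# Weights for the three (assumed) objectives: energy, handoffs, outage.
W_ENERGY, W_HANDOFF, W_DISCONNECT = 1.0, 2.0, 5.0

def step(pos, cell, action):
    """action = (move, new_cell); move in {0, 1} (hover / advance)."""
    move, new_cell = action
    new_pos = min(pos + move, N)
    energy = 1.0                      # unit energy per time step
    handoff = 1.0 if new_cell != cell else 0.0
    disconnect = 1.0 - COVERAGE[new_cell][new_pos]   # expected outage
    reward = -(W_ENERGY * energy + W_HANDOFF * handoff
               + W_DISCONNECT * disconnect)
    return new_pos, new_cell, reward

ACTIONS = [(m, c) for m in (0, 1) for c in CELLS]
Q = {}

def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):                 # epsilon-greedy Q-learning episodes
    pos, cell = 0, 0
    for _ in range(3 * N):
        s = (pos, cell)
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda a: q(s, a)))
        pos, cell, r = step(pos, cell, a)
        s2 = (pos, cell)
        target = r + (0.0 if pos == N else
                      gamma * max(q(s2, b) for b in ACTIONS))
        Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
        if pos == N:
            break

# Greedy rollout: the learned joint trajectory / cell-association policy
# advances toward the goal and switches cells once where coverage crosses.
pos, cell, path = 0, 0, []
for _ in range(3 * N):
    a = max(ACTIONS, key=lambda a: q((pos, cell), a))
    pos, cell, _ = step(pos, cell, a)
    path.append((pos, cell))
    if pos == N:
        break
```

Because the three objectives are folded into one scalar reward, the learned
policy trades a single handoff penalty against the larger disconnectivity
penalty it would otherwise accumulate; the paper's actual reward design and
state/action spaces are not specified in this listing.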
Related papers
- Meta Reinforcement Learning for Strategic IoT Deployments Coverage in
Disaster-Response UAV Swarms
Unmanned Aerial Vehicles (UAVs) have grabbed the attention of researchers in academia and industry for their potential use in critical emergency applications.
These applications include providing wireless services to ground users and collecting data from areas affected by disasters.
UAVs' limited resources, energy budget, and strict mission completion time have posed challenges in adopting UAVs for these applications.
arXiv Detail & Related papers (2024-01-20T05:05:39Z) - UAV Swarm-enabled Collaborative Secure Relay Communications with
Time-domain Colluding Eavesdropper
Unmanned aerial vehicles (UAVs) acting as aerial relays are practically appealing for assisting Internet of Things (IoT) networks.
In this work, we aim to utilize the UAV to assist secure communication between the UAV base station and terminal devices.
arXiv Detail & Related papers (2023-10-03T11:47:01Z) - Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual
Antenna Arrays
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into both the IoT devices and UAVs simultaneously to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z) - Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points
We propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points.
In our approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps.
arXiv Detail & Related papers (2021-11-03T14:49:17Z) - RIS-assisted UAV Communications for IoT with Wireless Power Transfer
Using Deep Reinforcement Learning
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In a first phase, IoT devices harvest energy from the UAV through wireless power transfer; and then in a second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
arXiv Detail & Related papers (2021-08-05T23:55:44Z) - 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning
Unmanned aerial vehicles (UAVs) are now beginning to be deployed to enhance network performance and coverage in wireless communications.
It is challenging to obtain an optimal resource-allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z) - Energy-aware placement optimization of UAV base stations via
decentralized multi-agent Q-learning
Unmanned aerial vehicles serving as aerial base stations (UAV-BSs) can be deployed to provide wireless connectivity to ground devices in events of increased network demand, points-of-failure in existing infrastructure, or disasters.
It is challenging to conserve the energy of UAVs during prolonged coverage tasks, considering their limited on-board battery capacity.
We propose a decentralized Q-learning approach, where each UAV-BS is equipped with an autonomous agent that maximizes the connectivity to ground devices while improving its energy utilization.
arXiv Detail & Related papers (2021-06-01T22:49:42Z) - Reinforcement Learning-based Joint Path and Energy Optimization of
Cellular-Connected Unmanned Aerial Vehicles
We use reinforcement learning (RL) hierarchically to extend typical short-range path planners to account for battery recharging, solving the problem of UAVs on long missions.
The problem is simulated for a UAV flying over a large area, and a Q-learning algorithm enables the UAV to find the optimal path and recharge policy.
arXiv Detail & Related papers (2020-11-27T14:16:55Z) - Mobile Cellular-Connected UAVs: Reinforcement Learning for Sky Limits
We propose a novel, general multi-armed bandit (MAB) algorithm to reduce the disconnectivity time, handover rate, and energy consumption of the UAV.
We show how each of these performance indicators (PIs) is improved by adopting a proper range for the corresponding learning parameter.
arXiv Detail & Related papers (2020-09-21T12:35:23Z) - Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
The proposed trained model yields an effective real-time trajectory policy for the UAV-BSs that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.