Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm
Networks
- URL: http://arxiv.org/abs/2209.07367v1
- Date: Thu, 15 Sep 2022 15:29:57 GMT
- Title: Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm
Networks
- Authors: Anne Catherine Nguyen, Turgay Pamuklu, Aisha Syed, W. Sean Kennedy,
Melike Erol-Kantarci
- Abstract summary: We introduce a Deep Q-Learning (DQL) approach to solve this multi-objective problem.
We show that the proposed DQL-based method achieves results comparable to the baselines in terms of the UAVs' remaining battery levels and the percentage of deadline violations.
- Score: 3.6118662460334527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fifth and sixth generations of wireless communication networks are
enabling tools such as Internet of Things (IoT) devices, unmanned aerial vehicles
(UAVs), and artificial intelligence to improve the agricultural landscape using a
network of devices that automatically monitor farmland. Surveying a large area
requires performing a large number of image classification tasks within a specific
period of time in order to prevent damage to the farm in case of an incident such
as fire or flood. UAVs have limited energy and computing power, and may not be able
to perform all of the computationally intensive image classification tasks locally
within an appropriate amount of time. Hence, it is assumed that the UAVs can
partially offload their workload to nearby multi-access edge computing (MEC)
devices. The UAVs need a decision-making algorithm that decides where each task
will be performed, while also considering the time constraints and the energy
levels of the other UAVs in the network. In this paper, we introduce a Deep
Q-Learning (DQL) approach to solve this multi-objective problem. The proposed
method is compared with Q-Learning and three heuristic baselines, and the
simulation results show that the proposed DQL-based method achieves results
comparable to the baselines in terms of the UAVs' remaining battery levels and the
percentage of deadline violations. In addition, our method reaches convergence 13
times faster than Q-Learning.
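The abstract names the technique but not its structure, so the sketch below illustrates what a DQL agent for this kind of offloading decision could look like. It is a minimal illustration, not the authors' implementation: the state features (own battery, mean battery of the other UAVs, queue length, time to deadline), the two actions (run the task locally vs. offload it to an MEC server), the scalarised reward weights, and all network sizes and hyperparameters are assumptions made for the example.

```python
# Minimal DQN sketch for a per-task offloading decision (illustrative assumptions only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4   # assumed features: [own battery, mean battery of other UAVs, queue length, time to deadline]
N_ACTIONS = 2   # 0 = classify the image locally on the UAV, 1 = offload to a nearby MEC server


class QNetwork(nn.Module):
    """Small MLP that maps a state vector to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def reward(battery_after, deadline_violated, w_energy=1.0, w_deadline=5.0):
    """Scalarised multi-objective reward: keep batteries high, penalise deadline misses.
    The weights are assumptions, not values from the paper."""
    return w_energy * battery_after - w_deadline * float(deadline_violated)


q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state, done) tuples
gamma, eps = 0.99, 0.1


def select_action(state):
    """Epsilon-greedy offloading decision for one task."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(torch.tensor(state).float()).argmax().item()


def train_step(batch_size=64):
    """One DQN update from uniformly sampled replay transitions."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s2.float()).max(1).values
        target = r.float() + gamma * q_next * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # In a full loop, target_net would periodically be synced with q_net.
```

Scalarising the two objectives into a single reward is only one possible way to handle the multi-objective aspect; the weights trade remaining battery against deadline violations, which are the two metrics the abstract reports.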
Related papers
- Multi-Agent Reinforcement Learning for Offloading Cellular Communications with Cooperating UAVs [21.195346908715972]
Unmanned aerial vehicles (UAVs) present an alternative means of offloading data traffic from terrestrial base stations (BSs).
This paper presents a novel approach to efficiently serve multiple UAVs for data offloading from terrestrial BSs.
arXiv Detail & Related papers (2024-02-05T12:36:08Z) - Hardware Acceleration for Real-Time Wildfire Detection Onboard Drone
Networks [6.313148708539912]
Wildfire detection in remote and forest areas is crucial for minimizing devastation and preserving ecosystems.
Drones offer agile access to remote, challenging terrains, equipped with advanced imaging technology.
Limited computation and battery resources pose challenges in implementing efficient image classification models.
This paper aims to develop a real-time image classification and fire segmentation model.
arXiv Detail & Related papers (2024-01-16T04:16:46Z) - Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual
Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into the IoT devices and UAVs simultaneously to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z) - Trajectory Design for UAV-Based Internet-of-Things Data Collection: A
Deep Reinforcement Learning Approach [93.67588414950656]
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a 3D environment.
We present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm.
Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning based baseline methods.
arXiv Detail & Related papers (2021-07-23T03:33:29Z) - A Multi-UAV System for Exploration and Target Finding in Cluttered and
GPS-Denied Environments [68.31522961125589]
We propose a framework for a team of UAVs to cooperatively explore and find a target in complex GPS-denied environments with obstacles.
The team of UAVs autonomously navigates, explores, detects, and finds the target in a cluttered environment with a known map.
Results indicate that the proposed multi-UAV system has improvements in terms of time-cost, the proportion of search area surveyed, as well as successful rates for search and rescue missions.
arXiv Detail & Related papers (2021-07-19T12:54:04Z) - 3D UAV Trajectory and Data Collection Optimisation via Deep
Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z) - Towards Deep Learning Assisted Autonomous UAVs for Manipulation Tasks in
GPS-Denied Environments [10.02675366919811]
This paper is primarily focused on the task of assembling large 3D structures in outdoor, GPS-denied environments.
Our framework is deployed on the specified UAV in order to report a performance analysis of the individual modules.
arXiv Detail & Related papers (2021-01-16T09:20:46Z) - Multi-Agent Reinforcement Learning in NOMA-aided UAV Networks for
Cellular Offloading [59.32570888309133]
A novel framework is proposed for cellular offloading with the aid of multiple unmanned aerial vehicles (UAVs).
The non-orthogonal multiple access (NOMA) technique is employed at each UAV to further improve the spectrum efficiency of the wireless network.
A mutual deep Q-network (MDQN) algorithm is proposed to jointly determine the optimal 3D trajectory and power allocation of UAVs.
arXiv Detail & Related papers (2020-10-18T20:22:05Z) - UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement
Learning Approach [18.266087952180733]
We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
arXiv Detail & Related papers (2020-07-01T15:14:16Z) - Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
Applying the proposed trained model yields an effective real-time trajectory policy for the UAV-BSs that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.