UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement
Learning Approach
- URL: http://arxiv.org/abs/2007.00544v2
- Date: Mon, 26 Oct 2020 12:14:45 GMT
- Title: UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement
Learning Approach
- Authors: Harald Bayerlein, Mirco Theile, Marco Caccamo, David Gesbert
- Abstract summary: We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
- Score: 18.266087952180733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous deployment of unmanned aerial vehicles (UAVs) supporting
next-generation communication networks requires efficient trajectory planning
methods. We propose a new end-to-end reinforcement learning (RL) approach to
UAV-enabled data collection from Internet of Things (IoT) devices in an urban
environment. An autonomous drone is tasked with gathering data from distributed
sensor nodes subject to limited flying time and obstacle avoidance. While
previous approaches, both learning and non-learning based, must perform expensive
recomputations or relearn a behavior when important scenario parameters, such as
the number of sensors, sensor positions, or maximum flying time, change, we
train a double deep Q-network (DDQN) with combined experience replay to learn a
UAV control policy that generalizes over changing scenario parameters. By
exploiting a multi-layer map of the environment fed through convolutional
network layers to the agent, we show that our proposed network architecture
enables the agent to make movement decisions for a variety of scenario
parameters that balance the data collection goal with flight time efficiency
and safety constraints. Considerable advantages in learning efficiency from
using a map centered on the UAV's position over a non-centered map are also
illustrated.
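The two architectural ideas above, double-DQN bootstrapping and a map centered on the UAV's position, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the batch shapes, discount factor, action count, and map sizes below are assumptions chosen for the toy example.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.95):
    """Double DQN bootstrap targets: the online network selects the greedy
    next action, while the separate target network evaluates it, reducing
    the overestimation bias of vanilla Q-learning."""
    greedy = np.argmax(q_online_next, axis=1)               # action selection
    q_eval = q_target_next[np.arange(len(greedy)), greedy]  # action evaluation
    return rewards + gamma * (1.0 - dones) * q_eval         # terminal states do not bootstrap

def centered_local_map(env_map, uav_pos, crop=2, pad_value=0.0):
    """Crop a (2*crop+1)-square window of the environment map centered on
    the UAV, padding cells beyond the map border with a fill value, so the
    convolutional layers always see the agent at the same location."""
    padded = np.pad(env_map, crop, constant_values=pad_value)
    r, c = uav_pos[0] + crop, uav_pos[1] + crop  # UAV position in the padded frame
    return padded[r - crop : r + crop + 1, c - crop : c + crop + 1]

rng = np.random.default_rng(0)
# Toy batch of 4 transitions with 5 discrete UAV actions (e.g. N/E/S/W/hover)
targets = ddqn_targets(rng.normal(size=(4, 5)), rng.normal(size=(4, 5)),
                       rewards=np.array([1.0, 0.0, 0.5, 0.0]),
                       dones=np.array([0.0, 0.0, 0.0, 1.0]))
# 10x10 toy environment map; crop a 5x5 window around a UAV in the corner
local = centered_local_map(np.arange(100.0).reshape(10, 10), uav_pos=(0, 0))
print(targets.shape, local.shape)  # (4,) (5, 5)
```

Centering the map input means the UAV always appears at the same pixel of the tensor fed to the convolutional layers, which is one plausible reading of why the paper reports considerably better learning efficiency than with a non-centered map.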
Related papers
- Multi-Agent Reinforcement Learning for Offloading Cellular Communications with Cooperating UAVs [21.195346908715972]
Unmanned aerial vehicles (UAVs) present an alternative means of offloading data traffic from terrestrial base stations (BSs).
This paper presents a novel approach to efficiently serving multiple UAVs for data offloading from terrestrial BSs.
arXiv Detail & Related papers (2024-02-05T12:36:08Z)
- Integrated Sensing, Computation, and Communication for UAV-assisted Federated Edge Learning [52.7230652428711]
Federated edge learning (FEEL) enables privacy-preserving model training through periodic communication between edge devices and the server.
Unmanned aerial vehicle (UAV)-mounted edge devices are particularly advantageous for FEEL due to their flexibility and mobility in efficient data collection.
arXiv Detail & Related papers (2023-06-05T16:01:33Z)
- Federated Deep Learning Meets Autonomous Vehicle Perception: Design and Verification [168.67190934250868]
Federated learning-empowered connected autonomous vehicle (FLCAV) systems have been proposed.
FLCAV preserves privacy while reducing communication and annotation costs.
It is challenging to determine the network resources and road sensor poses for multi-stage training.
arXiv Detail & Related papers (2022-06-03T23:55:45Z)
- Adaptive Path Planning for UAVs for Multi-Resolution Semantic Segmentation [28.104584236205405]
A key challenge is planning missions to maximize the value of acquired data in large environments.
This is, for example, relevant for monitoring agricultural fields.
We propose an online planning algorithm which adapts the UAV paths to obtain high-resolution semantic segmentations.
arXiv Detail & Related papers (2022-03-03T11:03:28Z)
- Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks Relying on Real Flight Data: From Single-Objective to Near-Pareto Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim for simultaneously minimizing the delay, maximizing the path capacity, and maximizing the path lifetime.
arXiv Detail & Related papers (2021-10-28T14:18:22Z)
- Trajectory Design for UAV-Based Internet-of-Things Data Collection: A Deep Reinforcement Learning Approach [93.67588414950656]
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a 3D environment.
We present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm.
Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning based baseline methods.
arXiv Detail & Related papers (2021-07-23T03:33:29Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system that relies on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Jamming-Resilient Path Planning for Multiple UAVs via Deep Reinforcement Learning [1.2330326247154968]
Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks.
In this paper, we aim to find collision-free paths for multiple cellular-connected UAVs.
We propose an offline temporal difference (TD) learning algorithm with online signal-to-interference-plus-noise ratio mapping to solve the problem.
arXiv Detail & Related papers (2021-04-09T16:52:33Z)
- Multi-UAV Path Planning for Wireless Data Harvesting with Deep Reinforcement Learning [18.266087952180733]
We propose a multi-agent reinforcement learning (MARL) approach that can adapt to profound changes in the scenario parameters defining the data harvesting mission.
We show that our proposed network architecture enables the agents to cooperate effectively by carefully dividing the data collection task among themselves.
arXiv Detail & Related papers (2020-10-23T14:59:30Z)
- UAV Path Planning using Global and Local Map Information with Deep Reinforcement Learning [16.720630804675213]
This work presents a method for autonomous UAV path planning based on deep reinforcement learning (DRL).
We compare coverage path planning (CPP), where the UAV's goal is to survey an area of interest, with data harvesting (DH), where the UAV collects data from distributed Internet of Things (IoT) sensor devices.
By exploiting structured map information of the environment, we train double deep Q-networks (DDQNs) with identical architectures on both distinctly different mission scenarios.
arXiv Detail & Related papers (2020-10-14T09:59:10Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, an effective real-time trajectory policy for the UAV-BSs captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.