Deep Reinforcement Learning Aided Packet-Routing For Aeronautical Ad-Hoc
Networks Formed by Passenger Planes
- URL: http://arxiv.org/abs/2110.15146v1
- Date: Thu, 28 Oct 2021 14:18:56 GMT
- Title: Deep Reinforcement Learning Aided Packet-Routing For Aeronautical Ad-Hoc
Networks Formed by Passenger Planes
- Authors: Dong Liu, Jingjing Cui, Jiankang Zhang, Chenyang Yang, Lajos Hanzo
- Abstract summary: We invoke deep reinforcement learning for routing in AANETs aiming at minimizing the end-to-end (E2E) delay.
A deep Q-network (DQN) is conceived for capturing the relationship between the optimal routing decision and the local geographic information observed by the forwarding node.
We further exploit the knowledge concerning the system's dynamics by using a deep value network (DVN) conceived with a feedback mechanism.
- Score: 99.54065757867554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data packet routing in aeronautical ad-hoc networks (AANETs) is challenging
due to their high-dynamic topology. In this paper, we invoke deep reinforcement
learning for routing in AANETs aiming at minimizing the end-to-end (E2E) delay.
Specifically, a deep Q-network (DQN) is conceived for capturing the
relationship between the optimal routing decision and the local geographic
information observed by the forwarding node. The DQN is trained in an offline
manner based on historical flight data and then stored by each airplane for
assisting their routing decisions during flight. To boost the learning
efficiency and the online adaptability of the proposed DQN-routing, we further
exploit the knowledge concerning the system's dynamics by using a deep value
network (DVN) conceived with a feedback mechanism. Our simulation results show
that both DQN-routing and DVN-routing achieve lower E2E delay than the
benchmark protocol, and DVN-routing performs similarly to the optimal routing
that relies on perfect global information.
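To make the routing idea above concrete, the sketch below shows one way a forwarding aircraft might score its current neighbours with a trained Q-network and greedily pick the next hop. This is a minimal PyTorch illustration: the feature encoding (e.g. neighbour position, distance to the destination ground station, relative velocity), the layer sizes, and the per-neighbour scoring design are assumptions made for this sketch, and the offline training on historical flight data and the DVN feedback mechanism described in the abstract are not reproduced here.

```python
import torch
import torch.nn as nn


class RoutingDQN(nn.Module):
    """Minimal Q-network sketch: scores one candidate next hop from the
    local geographic features observed by the forwarding node.
    Feature choice and layer sizes are assumptions, not taken from the paper."""

    def __init__(self, feature_dim: int = 6, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            # One scalar Q-value per candidate neighbour
            # (interpreted here as an estimate of negative E2E delay).
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


def select_next_hop(dqn: RoutingDQN, neighbor_features: torch.Tensor) -> int:
    """Greedy next-hop choice: evaluate every neighbour currently in range
    and forward the packet to the one with the highest Q-value.

    neighbor_features: (num_neighbors, feature_dim) tensor of assumed local
    geographic observations for each candidate neighbour.
    """
    with torch.no_grad():
        q_values = dqn(neighbor_features).squeeze(-1)  # shape: (num_neighbors,)
    return int(torch.argmax(q_values).item())


if __name__ == "__main__":
    dqn = RoutingDQN()
    # Toy example: 4 neighbours in range, 6 geographic features each.
    neighbors = torch.randn(4, 6)
    print("forward packet to neighbour", select_next_hop(dqn, neighbors))
```

In deployment as described in the abstract, such a network would be trained offline and a copy stored on each airplane, so that at flight time the forwarding node only performs the cheap greedy evaluation above.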
Related papers
- ARDDQN: Attention Recurrent Double Deep Q-Network for UAV Coverage Path Planning and Data Harvesting [3.746548465186206]
Unmanned Aerial Vehicles (UAVs) have gained popularity in data harvesting (DH) and coverage path planning (CPP).
We propose the ARDDQN (Attention-based Recurrent Double Deep Q Network), which integrates double deep Q-networks (DDQN) with recurrent neural networks (RNNs).
We employ a structured environment map comprising a compressed global environment map and a local map of the UAV agent's vicinity, enabling efficient scaling to large environments.
arXiv Detail & Related papers (2024-05-17T16:53:19Z) - An Intelligent SDWN Routing Algorithm Based on Network Situational
Awareness and Deep Reinforcement Learning [4.085916808788356]
This article introduces an intelligent routing algorithm (DRL-PPONSA) based on deep reinforcement learning with network situational awareness.
Experimental results show that DRL-PPONSA outperforms traditional routing methods in network throughput, delay, packet loss rate, and wireless node distance.
arXiv Detail & Related papers (2023-05-12T14:18:09Z) - Deep Learning Aided Packet Routing in Aeronautical Ad-Hoc Networks
Relying on Real Flight Data: From Single-Objective to Near-Pareto
Multi-Objective Optimization [79.96177511319713]
We invoke deep learning (DL) to assist routing in aeronautical ad-hoc networks (AANETs).
A deep neural network (DNN) is conceived for mapping the local geographic information observed by the forwarding node into the information required for determining the optimal next hop.
We extend the DL-aided routing algorithm to a multi-objective scenario, where we aim for simultaneously minimizing the delay, maximizing the path capacity, and maximizing the path lifetime (a simple scalarization sketch for trading off these objectives appears after this list).
arXiv Detail & Related papers (2021-10-28T14:18:22Z) - Deep Learning Aided Routing for Space-Air-Ground Integrated Networks
Relying on Real Satellite, Flight, and Shipping Data [79.96177511319713]
Current maritime communications mainly rely on satellites having meager transmission resources, hence suffering from poorer performance than modern terrestrial wireless networks.
With the growth of transcontinental air traffic, the promising concept of aeronautical ad hoc networking relying on commercial passenger airplanes is potentially capable of enhancing satellite-based maritime communications via air-to-ground and multi-hop air-to-air links.
We propose space-air-ground integrated networks (SAGINs) for supporting ubiquitous maritime communications, where the low-earth-orbit satellite constellations, passenger airplanes, terrestrial base stations, and ships serve as the space, air, ground, and sea layers, respectively.
arXiv Detail & Related papers (2021-10-28T14:12:10Z) - Trajectory Design for UAV-Based Internet-of-Things Data Collection: A
Deep Reinforcement Learning Approach [93.67588414950656]
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted Internet-of-Things (IoT) system in a 3D environment.
We present a TD3-based trajectory design for completion time minimization (TD3-TDCTM) algorithm.
Our simulation results show the superiority of the proposed TD3-TDCTM algorithm over three conventional non-learning based baseline methods.
arXiv Detail & Related papers (2021-07-23T03:33:29Z) - Jamming-Resilient Path Planning for Multiple UAVs via Deep Reinforcement
Learning [1.2330326247154968]
Unmanned aerial vehicles (UAVs) are expected to be an integral part of wireless networks.
In this paper, we aim to find collision-free paths for multiple cellular-connected UAVs.
We propose an offline temporal difference (TD) learning algorithm with online signal-to-interference-plus-noise ratio mapping to solve the problem.
arXiv Detail & Related papers (2021-04-09T16:52:33Z) - UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement
Learning Approach [18.266087952180733]
We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
arXiv Detail & Related papers (2020-07-01T15:14:16Z) - Constructing Geographic and Long-term Temporal Graph for Traffic
Forecasting [88.5550074808201]
In this work, we propose the Geographic and Long-term Temporal Graph Convolutional Recurrent Neural Network (GLT-GCRNN), a traffic forecasting framework that learns the rich interactions between roads sharing similar geographic or long-term temporal patterns.
arXiv Detail & Related papers (2020-04-23T03:50:46Z) - Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep
Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, the UAV-BSs obtain an effective real-time trajectory policy that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
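For the multi-objective DL-aided routing entry above (delay, path capacity, path lifetime), the following sketch illustrates one simple way such objectives can be traded off: a weighted-sum scalarization over normalized path metrics. The class and function names are hypothetical and the weighted-sum approach is an assumption used purely for illustration; the cited paper targets near-Pareto multi-objective optimization, and its actual algorithm is not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class PathMetrics:
    """Candidate end-to-end path, with all metrics assumed pre-normalized to [0, 1]."""
    e2e_delay: float   # lower is better
    capacity: float    # higher is better
    lifetime: float    # time until aircraft mobility breaks the path; higher is better


def weighted_path_score(m: PathMetrics,
                        w_delay: float = 1.0,
                        w_capacity: float = 1.0,
                        w_lifetime: float = 1.0) -> float:
    """Weighted-sum scalarization of the three objectives.

    Sweeping the weights and keeping the non-dominated results is one simple
    way to trace an approximate Pareto front.
    """
    # Negate the delay term so that a larger score is always better.
    return (-w_delay * m.e2e_delay
            + w_capacity * m.capacity
            + w_lifetime * m.lifetime)


def best_path(candidates: list[PathMetrics], **weights) -> PathMetrics:
    """Pick the candidate path with the highest scalarized score."""
    return max(candidates, key=lambda m: weighted_path_score(m, **weights))


if __name__ == "__main__":
    candidates = [
        PathMetrics(e2e_delay=0.2, capacity=0.9, lifetime=0.5),
        PathMetrics(e2e_delay=0.6, capacity=0.7, lifetime=0.9),
    ]
    # Emphasizing delay twice as much as the other objectives.
    print(best_path(candidates, w_delay=2.0))
```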