Traffic Learning and Proactive UAV Trajectory Planning for Data Uplink
in Markovian IoT Models
- URL: http://arxiv.org/abs/2401.13827v1
- Date: Wed, 24 Jan 2024 21:57:55 GMT
- Title: Traffic Learning and Proactive UAV Trajectory Planning for Data Uplink
in Markovian IoT Models
- Authors: Eslam Eldeeb, Mohammad Shehab and Hirley Alves
- Abstract summary: In IoT networks, the traditional resource management schemes rely on a message exchange between the devices and the base station.
We present a novel learning-based framework that estimates the traffic arrival of IoT devices based on Markovian events.
We propose a deep reinforcement learning approach to derive the optimal policy of each UAV.
- Score: 6.49537221266081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The age of information (AoI) is used to measure the freshness of the data. In
IoT networks, the traditional resource management schemes rely on a message
exchange between the devices and the base station (BS) before communication
which causes high AoI, high energy consumption, and low reliability. Unmanned
aerial vehicles (UAVs) acting as flying BSs offer many advantages for minimizing
the AoI, saving energy, and improving throughput. In this paper, we present a
novel learning-based framework that estimates the traffic arrival of IoT
devices based on Markovian events. The learning proceeds to optimize the
trajectory of multiple UAVs and their scheduling policy. First, the BS predicts
the future traffic of the devices. We compare two traffic predictors: the
forward algorithm (FA) and the long short-term memory (LSTM). Afterward, we
propose a deep reinforcement learning (DRL) approach to derive the optimal
policy of each UAV. Finally, we tune the reward function of the proposed DRL
approach. Simulation results show that the proposed algorithm
outperforms the random-walk (RW) baseline model regarding the AoI, scheduling
accuracy, and transmission power.
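The pipeline described in the abstract has two concrete parts: a forward-algorithm (or LSTM) predictor of Markovian traffic arrivals, and a DRL scheduler whose reward trades off AoI against transmission power. The following is a minimal Python sketch of both parts, assuming a hypothetical two-state (idle/active) hidden Markov traffic model per device and illustrative reward weights; the paper's actual state space, parameters, and reward shaping may differ.

```python
import numpy as np

# Hypothetical HMM parameters for one IoT device (illustrative values only).
A = np.array([[0.9, 0.1],    # transition probabilities: idle  -> {idle, active}
              [0.3, 0.7]])   #                            active -> {idle, active}
B = np.array([[0.95, 0.05],  # emission probabilities: P(no packet | state), P(packet | state)
              [0.20, 0.80]])
pi = np.array([0.8, 0.2])    # initial state distribution


def forward_filter(obs):
    """Forward algorithm: filtered state belief after a packet/no-packet observation sequence."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()             # normalise to avoid numerical underflow
    return alpha


def predict_arrival(obs):
    """Probability that the device generates a packet in the next time slot."""
    alpha = forward_filter(obs)
    next_state = alpha @ A               # one-step state prediction
    return next_state @ B[:, 1]          # P(packet at t+1)


def reward(aoi_per_device, tx_power, w_aoi=1.0, w_pow=0.1):
    """Illustrative AoI-aware reward: penalise stale information and transmission power.
    The weights w_aoi and w_pow are assumptions, not taken from the paper."""
    return -(w_aoi * np.mean(aoi_per_device) + w_pow * tx_power)


if __name__ == "__main__":
    history = [0, 0, 1, 1, 0, 1]         # observed packet arrivals for one device
    print("P(arrival in next slot):", predict_arrival(history))
    print("example step reward:", reward(aoi_per_device=[3, 1, 5], tx_power=2.0))
```

In a full system, the predicted arrival probabilities would form part of each UAV agent's observation, and a reward of this shape would be accumulated per flight step during DRL training.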
Related papers
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
The unmanned aerial vehicle (UAV) network is a promising technology for assisting the Internet-of-Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming into the IoT devices and UAVs simultaneously to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- A Learning-Based Trajectory Planning of Multiple UAVs for AoI Minimization in IoT Networks [13.2742178284328]
Age of Information (AoI) is a metric that quantifies information timeliness, i.e., the freshness of the received information or status update.
We formulate an optimization problem to jointly plan the UAVs' trajectories while minimizing the AoI of the received messages.
The complex optimization problem is efficiently solved using a deep reinforcement learning (DRL) algorithm.
arXiv Detail & Related papers (2022-09-13T12:39:23Z)
- AI-aided Traffic Control Scheme for M2M Communications in the Internet of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed for enhancing the network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- UAV Path Planning for Wireless Data Harvesting: A Deep Reinforcement Learning Approach [18.266087952180733]
We propose a new end-to-end reinforcement learning approach to UAV-enabled data collection from Internet of Things (IoT) devices.
An autonomous drone is tasked with gathering data from distributed sensor nodes subject to limited flying time and obstacle avoidance.
We show that our proposed network architecture enables the agent to make movement decisions for a variety of scenario parameters.
arXiv Detail & Related papers (2020-07-01T15:14:16Z)
- Machine Learning for Predictive Deployment of UAVs with Multiple Access [37.49465317156625]
In this paper, a machine learning deployment framework of unmanned aerial vehicles (UAVs) is studied.
Due to time-varying traffic distribution, a long short-term memory (LSTM) based prediction is introduced to predict the future cellular traffic.
The proposed method can reduce the total power consumption by up to 24% compared to the conventional method.
arXiv Detail & Related papers (2020-03-02T00:15:09Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) where mobile base stations (BSs) are deployed.
We incorporate different contextual information such as energy and age of information (AoI) constraints to ensure the data freshness at the ground BS.
By applying the proposed trained model, the UAV-BSs obtain an effective real-time trajectory policy that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)
- Federated Learning in the Sky: Joint Power Allocation and Scheduling with UAV Swarms [98.78553146823829]
Unmanned aerial vehicle (UAV) swarms must exploit machine learning (ML) in order to execute various tasks.
In this paper, a novel framework is proposed to implement federated learning (FL) algorithms within a UAV swarm.
arXiv Detail & Related papers (2020-02-19T14:04:01Z)