Age and Power Minimization via Meta-Deep Reinforcement Learning in UAV Networks
- URL: http://arxiv.org/abs/2501.14603v1
- Date: Fri, 24 Jan 2025 16:17:53 GMT
- Title: Age and Power Minimization via Meta-Deep Reinforcement Learning in UAV Networks
- Authors: Sankani Sarathchandra, Eslam Eldeeb, Mohammad Shehab, Hirley Alves, Konstantin Mikhaylov, Mohamed-Slim Alouini
- Abstract summary: This study examines a power-limited internet of things (IoT) network supported by a flying unmanned aerial vehicle (UAV) that collects data.
Our aim is to optimize the UAV flight trajectory and scheduling policy to minimize a varying combination of AoI and transmission power.
- Score: 42.14963369042011
- License:
- Abstract: Age-of-information (AoI) and transmission power are crucial performance metrics in low-energy wireless networks, where information freshness is of paramount importance. This study examines a power-limited internet of things (IoT) network supported by a flying unmanned aerial vehicle (UAV) that collects data. Our aim is to optimize the UAV flight trajectory and scheduling policy to minimize a varying combination of AoI and transmission power. To tackle this variation, this paper proposes a meta-deep reinforcement learning (RL) approach that integrates deep Q-networks (DQNs) with model-agnostic meta-learning (MAML). DQNs determine optimal UAV decisions, while MAML enables scalability across varying objective functions. Numerical results indicate that the proposed algorithm converges faster and adapts to new objectives more effectively than traditional deep RL methods, achieving minimal AoI and transmission power overall.
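As an illustration of the approach described in the abstract, below is a minimal first-order MAML-over-DQN sketch in PyTorch. Everything concrete in it is an assumption rather than the authors' code: the state/action dimensions, the synthetic sample_batch placeholder standing in for a replay buffer fed by the UAV environment, and the per-task reward r = -(w*AoI + (1-w)*power) that encodes one AoI/power weighting. It only shows the structure: an inner DQN adaptation per objective weighting, followed by an outer meta-update.

```python
# Minimal first-order MAML + DQN sketch (PyTorch). Illustrative reconstruction
# of the idea in the abstract, not the authors' implementation. The UAV
# environment, state/action sizes, and task weights are hypothetical.
import copy
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 6, 5, 0.95   # e.g. UAV position + device AoI; 4 moves + hover

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

def sample_batch(weight, batch=32):
    """Placeholder for replay-buffer sampling from the UAV environment.

    Each task is one trade-off weight in the reward
    r = -(weight * AoI + (1 - weight) * transmit_power).
    Synthetic transitions are drawn here only to keep the sketch runnable.
    """
    s = torch.rand(batch, STATE_DIM)
    a = torch.randint(N_ACTIONS, (batch,))
    aoi, power = torch.rand(batch), torch.rand(batch)
    r = -(weight * aoi + (1.0 - weight) * power)
    s_next = torch.rand(batch, STATE_DIM)
    return s, a, r, s_next

def td_loss(qnet, target, batch):
    s, a, r, s_next = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + GAMMA * target(s_next).max(dim=1).values
    return nn.functional.mse_loss(q, y)

meta_net = QNet()
meta_opt = torch.optim.Adam(meta_net.parameters(), lr=1e-3)
inner_lr, inner_steps = 1e-2, 3

for meta_iter in range(200):
    meta_opt.zero_grad()
    for weight in torch.rand(4).tolist():      # sample a batch of tasks (objective weightings)
        adapted = copy.deepcopy(meta_net)      # first-order MAML: adapt a copy of the meta-parameters
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        target = copy.deepcopy(adapted)        # frozen target network for the inner DQN updates
        for _ in range(inner_steps):
            loss = td_loss(adapted, target, sample_batch(weight))
            inner_opt.zero_grad(); loss.backward(); inner_opt.step()
        # Outer loss: TD error of the adapted network on fresh data from the same task.
        outer = td_loss(adapted, target, sample_batch(weight))
        grads = torch.autograd.grad(outer, list(adapted.parameters()))
        for p, g in zip(meta_net.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```

The first-order approximation keeps the sketch short; exact MAML would instead differentiate through the inner-loop updates (e.g. with functional parameter updates or a library such as higher).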
Related papers
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
The task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- AoI in Context-Aware Hybrid Radio-Optical IoT Networks [8.467370216900107]
We study hybrid IoT networks that employ Optical Communication (OC) as a reinforcing medium alongside Radio Frequency (RF).
We adopt a multi-objective optimization strategy to balance throughput collection against minimizing energy consumption and the frequency of switching between the two technologies.
Simulation results show that supplementing RF with OC enhances the network's overall performance and significantly reduces both the Mean AoI and the Peak AoI.
arXiv Detail & Related papers (2024-12-17T13:48:25Z)
- Meta Reinforcement Learning Approach for Adaptive Resource Optimization in O-RAN [6.326120268549892]
Open Radio Access Network (O-RAN) addresses the variable demands of modern networks with unprecedented efficiency and adaptability.
This paper proposes a novel Meta Deep Reinforcement Learning (Meta-DRL) strategy, inspired by Model-Agnostic Meta-Learning (MAML), to advance resource block and downlink power allocation in O-RAN.
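For reference, the MAML-style adaptation invoked here (and in the abstract above) follows the standard two-level update; the notation below is generic and illustrative, not taken from either paper, with alpha and beta as the inner and meta learning rates and L_{T_i} the loss on task T_i (e.g., one AoI/power weighting or one O-RAN traffic profile):

```latex
% Generic MAML update (illustrative; symbols are not from either paper).
% Inner adaptation on each sampled task T_i, then one meta-update across tasks.
\begin{align}
  \theta_i' &= \theta - \alpha\, \nabla_{\theta} \mathcal{L}_{\mathcal{T}_i}(\theta) \\
  \theta &\leftarrow \theta - \beta\, \nabla_{\theta} \sum_{i} \mathcal{L}_{\mathcal{T}_i}(\theta_i')
\end{align}
```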
arXiv Detail & Related papers (2024-09-30T23:04:30Z)
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet-of-Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between the IoT devices and access points.
We introduce collaborative beamforming at both the IoT devices and the UAVs to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- AI-based Radio and Computing Resource Allocation and Path Planning in NOMA NTNs: AoI Minimization under CSI Uncertainty [23.29963717212139]
We develop a hierarchical aerial computing framework composed of a high altitude platform (HAP) and unmanned aerial vehicles (UAVs).
It is shown that task scheduling significantly reduces the average AoI.
It is shown that power allocation has a marginal effect on the average AoI compared to using full transmission power for all users.
arXiv Detail & Related papers (2023-05-01T11:52:15Z)
- Optimization for Master-UAV-powered Auxiliary-Aerial-IRS-assisted IoT Networks: An Option-based Multi-agent Hierarchical Deep Reinforcement Learning Approach [56.84948632954274]
This paper investigates a master unmanned aerial vehicle (MUAV)-powered Internet of Things (IoT) network.
We propose using a rechargeable auxiliary UAV (AUAV) equipped with an intelligent reflecting surface (IRS) to enhance the communication signals from the MUAV.
Under the proposed model, we investigate the optimal collaboration strategy of these energy-limited UAVs to maximize the accumulated throughput of the IoT network.
arXiv Detail & Related papers (2021-12-20T15:45:28Z)
- Joint Cluster Head Selection and Trajectory Planning in UAV-Aided IoT Networks by Reinforcement Learning with Sequential Model [4.273341750394231]
We formulate the problem of jointly designing the UAV's trajectory and selecting cluster heads in the Internet-of-Things network.
We propose a novel deep reinforcement learning (DRL) approach with a sequential model strategy that effectively learns a policy represented by a sequence-to-sequence neural network.
Extensive simulation results show that the proposed DRL method finds UAV trajectories that require much less energy consumption.
arXiv Detail & Related papers (2021-12-01T07:59:53Z)
- 3D UAV Trajectory and Data Collection Optimisation via Deep Reinforcement Learning [75.78929539923749]
Unmanned aerial vehicles (UAVs) are now beginning to be deployed to enhance network performance and coverage in wireless communication.
It is challenging to obtain an optimal resource allocation scheme for the UAV-assisted Internet of Things (IoT).
In this paper, we design a new UAV-assisted IoT system relying on the shortest flight path of the UAVs while maximising the amount of data collected from IoT devices.
arXiv Detail & Related papers (2021-06-06T14:08:41Z)
- Data Freshness and Energy-Efficient UAV Navigation Optimization: A Deep Reinforcement Learning Approach [88.45509934702913]
We design a navigation policy for multiple unmanned aerial vehicles (UAVs) deployed as mobile base stations (BSs).
We incorporate contextual information such as energy and age-of-information (AoI) constraints to ensure data freshness at the ground BS.
Applying the proposed trained model yields an effective real-time trajectory policy for the UAV-BSs that captures the observable network states over time.
arXiv Detail & Related papers (2020-02-21T07:29:15Z)