Energy-Efficient Flying LoRa Gateways: A Multi-Agent Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2502.03377v2
- Date: Sun, 13 Apr 2025 17:52:57 GMT
- Title: Energy-Efficient Flying LoRa Gateways: A Multi-Agent Reinforcement Learning Approach
- Authors: Abdullahi Isa Ahmed, Jamal Bentahar, El Mehdi Amhoud
- Abstract summary: We deploy flying LoRa gateways mounted on unmanned aerial vehicles (UAVs) to collect data from LoRa end devices and transmit it to a central server. Our primary objective is to maximize the global system energy efficiency of wireless LoRa networks by joint optimization of transmission power, spreading factor, bandwidth, and user association.
- Score: 8.65375886947338
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As next-generation Internet of Things (NG-IoT) networks continue to grow, the number of connected devices is rapidly increasing, along with their energy demands. This creates challenges for resource management and sustainability. Energy-efficient communication, particularly for power-limited IoT devices, is therefore a key research focus. In this paper, we deploy flying LoRa gateways (GWs) mounted on unmanned aerial vehicles (UAVs) to collect data from LoRa end devices and transmit it to a central server. Our primary objective is to maximize the global system energy efficiency (EE) of wireless LoRa networks through joint optimization of transmission power, spreading factor, bandwidth, and user association. To solve this challenging problem, we model it as a partially observable Markov decision process (POMDP), where each flying LoRa GW acts as a learning agent within a cooperative multi-agent reinforcement learning (MARL) framework. Simulation results demonstrate that our proposed method, based on the multi-agent proximal policy optimization (MAPPO) algorithm, significantly improves the global system EE and surpasses conventional MARL schemes.
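To make the optimization target concrete, the minimal Python sketch below computes the global system EE from per-device LoRa parameters, using the standard LoRa bit-rate formula Rb = SF * (BW / 2^SF) * 4/(4+CR). The parameter sets, circuit-power constant, and dBm-to-Watt conversion are illustrative assumptions, not the paper's exact system model or reward.

```python
# Illustrative LoRa parameter sets (not the paper's exact configuration).
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
BANDWIDTHS_HZ = [125e3, 250e3, 500e3]
TX_POWERS_DBM = [2, 5, 8, 11, 14]

def lora_bit_rate(sf: int, bw_hz: float, coding_rate: int = 1) -> float:
    """LoRa physical-layer bit rate in bit/s: Rb = SF * (BW / 2^SF) * 4/(4+CR)."""
    return sf * (bw_hz / 2 ** sf) * (4.0 / (4 + coding_rate))

def global_energy_efficiency(assignments) -> float:
    """Global system EE: total delivered bits per second per Watt consumed.

    `assignments` is a list of (sf, bw_hz, tx_power_dbm) tuples, one per
    served end device; the per-link circuit power is a hypothetical constant.
    """
    circuit_power_w = 0.1  # assumed fixed overhead per active link
    total_rate = sum(lora_bit_rate(sf, bw) for sf, bw, _ in assignments)
    total_power = sum(10 ** (p / 10) / 1000 + circuit_power_w
                      for _, _, p in assignments)  # dBm -> W, plus overhead
    return total_rate / total_power  # bit/s per Watt (equivalently bit/Joule)

# Example: two end devices associated with one flying gateway.
print(global_energy_efficiency([(7, 125e3, 14), (9, 125e3, 8)]))
```

In the MARL formulation, each gateway's action would select such (SF, bandwidth, power, association) tuples for its devices, with this EE quantity shaping the shared reward.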
Related papers
- Multi-Agent DRL for Queue-Aware Task Offloading in Hierarchical MEC-Enabled Air-Ground Networks [4.0948483603286245]
Mobile edge computing (MEC)-enabled air-ground networks are a key component of 6G.
This paper tackles the overall energy problem in MEC-enabled air-ground integrated networks (MAGIN).
We propose a novel variant of multi-agent proximal policy optimization with a Beta distribution (MAPPO-BD) to solve it.
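For readers unfamiliar with Beta-distribution policies: on bounded action spaces they avoid the boundary-clipping bias of Gaussian policies. The PyTorch sketch below shows a generic Beta policy head; the layer shapes and the softplus(+1) parameterization are common conventions, not MAPPO-BD's published architecture.

```python
import torch.nn as nn
from torch.distributions import Beta

class BetaPolicyHead(nn.Module):
    """Maps features to a Beta(alpha, beta) policy over a bounded action
    (a generic construction; dimensions here are hypothetical)."""
    def __init__(self, feat_dim: int, act_dim: int):
        super().__init__()
        self.alpha = nn.Linear(feat_dim, act_dim)
        self.beta = nn.Linear(feat_dim, act_dim)

    def forward(self, feats, low, high):
        # softplus(+1) keeps both concentrations > 1, giving a unimodal density.
        a = nn.functional.softplus(self.alpha(feats)) + 1.0
        b = nn.functional.softplus(self.beta(feats)) + 1.0
        dist = Beta(a, b)
        u = dist.rsample()                 # reparameterized sample in (0, 1)
        action = low + (high - low) * u    # rescale to the action bounds
        return action, dist.log_prob(u).sum(-1)
```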
arXiv Detail & Related papers (2025-03-05T11:12:40Z)
- Task Delay and Energy Consumption Minimization for Low-altitude MEC via Evolutionary Multi-objective Deep Reinforcement Learning [52.64813150003228]
The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring.
In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas.
Task offloading is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing task delay and minimizing the UAV's energy consumption.
arXiv Detail & Related papers (2025-01-11T02:32:42Z)
- Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
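As a rough illustration of the attention construction such methods rely on, the PyTorch sketch below computes scaled dot-product attention across agents' observation encodings; MALoRa's actual architecture is not reproduced here, and the dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class AgentAttention(nn.Module):
    """Scaled dot-product attention letting each agent weight its neighbors'
    observation encodings, a common building block in attention-based MARL."""
    def __init__(self, obs_dim: int, embed_dim: int = 64):
        super().__init__()
        self.q = nn.Linear(obs_dim, embed_dim)
        self.k = nn.Linear(obs_dim, embed_dim)
        self.v = nn.Linear(obs_dim, embed_dim)

    def forward(self, obs):                       # obs: (n_agents, obs_dim)
        q, k, v = self.q(obs), self.k(obs), self.v(obs)
        scores = q @ k.t() / k.shape[-1] ** 0.5   # (n_agents, n_agents)
        weights = torch.softmax(scores, dim=-1)
        return weights @ v                        # attended features per agent
```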
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- Multi-Objective Optimization for UAV Swarm-Assisted IoT with Virtual Antenna Arrays [55.736718475856726]
Unmanned aerial vehicle (UAV) networks are a promising technology for assisting the Internet of Things (IoT).
Existing UAV-assisted data harvesting and dissemination schemes require UAVs to frequently fly between IoT devices and access points.
We introduce collaborative beamforming into the IoT devices and the UAVs simultaneously to achieve energy- and time-efficient data harvesting and dissemination.
arXiv Detail & Related papers (2023-08-03T02:49:50Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
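For context, a communication-efficient FL scheme still builds on the standard weighted model-averaging (FedAvg) rule; a minimal sketch follows, with hypothetical client sizes and none of the wireless communication and resource optimization that this paper contributes.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average client models weighted by
    local dataset size (the standard rule; the wireless-side optimization
    in the paper above is not modeled here)."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Example: two clients, each model a list of two parameter arrays.
w1 = [np.ones(3), np.zeros(2)]
w2 = [np.zeros(3), np.ones(2)]
print(fedavg([w1, w2], client_sizes=[100, 300]))  # weighted 0.25/0.75
```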
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
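The two-phase protocol can be sketched with simple link-budget arithmetic: a harvest-then-transmit slot in which phase-1 harvested energy bounds phase-2 transmit power. All constants below (conversion efficiency eta, time split tau, noise power, channel gains) are illustrative assumptions; the paper's DRL agents would optimize such variables rather than fix them.

```python
import math

def two_phase_sum_rate(p_uav_w, channel_gains, tau=0.5, eta=0.7,
                       bandwidth_hz=1e6, noise_w=1e-12, slot_s=1.0):
    """Harvest-then-transmit sum-rate over one slot (illustrative model).

    Phase 1 (fraction tau): each device harvests E_k = eta * P * h_k * tau * T.
    Phase 2 (fraction 1 - tau): each device transmits with the harvested
    energy, achieving B * log2(1 + p_k * h_k / N0).
    """
    t1, t2 = tau * slot_s, (1 - tau) * slot_s
    rate = 0.0
    for h in channel_gains:
        e_harvested = eta * p_uav_w * h * t1
        p_tx = e_harvested / t2          # spend all harvested energy in phase 2
        rate += bandwidth_hz * math.log2(1 + p_tx * h / noise_w)
    return (t2 / slot_s) * rate          # time-weighted sum-rate in bit/s

print(two_phase_sum_rate(p_uav_w=1.0, channel_gains=[1e-6, 5e-7]))
```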
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- A reinforcement learning approach to improve communication performance and energy utilization in fog-based IoT [3.158346511479111]
We propose a Q-learning-based decentralized approach where each mobile fog relay agent (MFRA) is controlled by an autonomous agent.
Our approach is able to ensure reliable delivery of data and reduce the overall energy cost by 56.76% to 88.03%.
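A minimal tabular Q-learning loop of the kind such a decentralized agent would run is sketched below; the state and action spaces of an actual MFRA are not specified in the summary, so the structures here are hypothetical.

```python
import random
from collections import defaultdict

# Tabular Q-learning as used by decentralized relay agents (illustrative;
# MFRA states/actions are hypothetical placeholders).
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # assumed hyperparameters

def choose_action(state, actions):
    if random.random() < epsilon:                      # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state, a])     # exploit

def update(state, action, reward, next_state, actions):
    best_next = max(Q[next_state, a] for a in actions)
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
```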
arXiv Detail & Related papers (2021-06-01T17:38:20Z)
- Leveraging AI and Intelligent Reflecting Surface for Energy-Efficient Communication in 6G IoT [14.027983498089084]
We propose an artificial intelligence (AI) and intelligent reflecting surface (IRS) empowered energy-efficient communication system for 6G IoT.
First, we design a smart and efficient communication architecture including the IRS-aided data transmission and the AI-driven network resource management mechanisms.
Third, a deep reinforcement learning (DRL) empowered network resource control and allocation scheme is proposed to solve the formulated optimization model.
arXiv Detail & Related papers (2020-12-29T11:56:28Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Constrained Deep Reinforcement Learning for Energy Sustainable Multi-UAV based Random Access IoT Networks with NOMA [20.160827428161898]
We apply the Non-Orthogonal Multiple Access technique to improve massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.
IoT devices contend for accessing the shared wireless channel using an adaptive $p$-persistent slotted Aloha protocol; and the solar-powered UAVs adopt Successive Interference Cancellation (SIC) to decode multiple received data from IoT devices to improve access efficiency.
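The access protocol is easy to illustrate: the Monte-Carlo sketch below estimates the per-slot throughput of p-persistent slotted Aloha and crudely models SIC as the ability to resolve up to k simultaneous packets. The device count, persistence probability, and SIC capability are assumed values, not the paper's.

```python
import random

def simulate_p_persistent_aloha(n_devices=50, p=0.02, n_slots=10000, sic_k=1):
    """Monte-Carlo throughput of p-persistent slotted Aloha (illustrative).

    Each backlogged device transmits in a slot with probability p. With
    sic_k = 1 this is classic slotted Aloha (a slot succeeds only if exactly
    one device transmits); sic_k > 1 crudely models SIC at the UAV decoding
    up to sic_k simultaneous packets.
    """
    delivered = 0
    for _ in range(n_slots):
        transmitters = sum(random.random() < p for _ in range(n_devices))
        if 1 <= transmitters <= sic_k:
            delivered += transmitters
        # otherwise: idle slot or unresolvable collision
    return delivered / n_slots  # packets per slot

print(simulate_p_persistent_aloha())           # ~ n*p*(1-p)**(n-1) for sic_k=1
print(simulate_p_persistent_aloha(sic_k=2))    # SIC lifts the throughput ceiling
```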
arXiv Detail & Related papers (2020-01-31T22:05:30Z)