Deep Reinforcement Learning for Power Control in Next-Generation WiFi
Network Systems
- URL: http://arxiv.org/abs/2211.01107v1
- Date: Wed, 2 Nov 2022 13:32:03 GMT
- Authors: Ziad El Jamous and Kemal Davaslioglu and Yalin E. Sagduyu
- Abstract summary: This paper presents deep reinforcement learning (DRL) for power control in wireless communications.
In a wireless network, each mobile node measures its link quality and signal strength, and controls its transmit power.
DRL is implemented for the embedded platform of each node combining an ARM processor and a WiFi transceiver for 802.11n.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a deep reinforcement learning (DRL) solution for power
control in wireless communications, describes its embedded implementation with
WiFi transceivers for a WiFi network system, and evaluates the performance with
high-fidelity emulation tests. In a multi-hop wireless network, each mobile
node measures its link quality and signal strength, and controls its transmit
power. As a model-free solution, reinforcement learning allows nodes to adapt
their actions by observing the states and maximize their cumulative rewards
over time. For each node, the state consists of transmit power, link quality
and signal strength; the action adjusts the transmit power; and the reward
combines energy efficiency (throughput normalized by energy consumption) and
penalty of changing the transmit power. As the state space is large, Q-learning
is hard to implement on embedded platforms with limited memory and processing
power. By approximating the Q-values with a DQN, DRL is implemented for the
embedded platform of each node combining an ARM processor and a WiFi
transceiver for 802.11n. Controllable and repeatable emulation tests are
performed by inducing realistic channel effects on RF signals. Performance
comparison with benchmark schemes of fixed and myopic power allocations shows
that power control with DRL provides major improvements to energy efficiency
and throughput in WiFi network systems.
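The abstract specifies the agent design concretely: the state is (transmit power, link quality, signal strength), the action raises, holds, or lowers the transmit power, and the reward is energy efficiency (throughput normalized by energy consumption) minus a penalty for changing power. A minimal sketch of that loop is below; the discrete power grid, the toy log-throughput channel model, the linear Q-approximator (a stand-in for the paper's DQN), and all names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3                                # decrease / hold / increase transmit power
POWER_LEVELS = np.linspace(1.0, 10.0, 10)    # assumed discrete transmit-power grid

# Tiny linear Q-approximator, Q(s, a) = W s + b, standing in for the paper's DQN.
W = rng.normal(scale=0.1, size=(N_ACTIONS, 3))
b = np.zeros(N_ACTIONS)

def features(p):
    # Normalized state: (tx power, link-quality proxy, bias term); all assumed.
    return np.array([p / 10.0, np.log1p(p) / np.log1p(10.0), 1.0])

def q_values(state):
    return W @ state + b

def reward(throughput, energy, power_changed, penalty=0.1):
    """Energy efficiency minus a penalty for switching power, per the abstract."""
    return throughput / energy - penalty * float(power_changed)

def step(power_idx, action):
    """Toy environment: throughput grows with power but with diminishing returns."""
    new_idx = int(np.clip(power_idx + (action - 1), 0, len(POWER_LEVELS) - 1))
    p = POWER_LEVELS[new_idx]
    throughput = np.log1p(p) + rng.normal(scale=0.01)   # assumed link model
    return new_idx, reward(throughput, p, new_idx != power_idx)

def train(episodes=100, steps=25, eps=0.1, lr=0.05, gamma=0.9):
    for _ in range(episodes):
        idx = int(rng.integers(len(POWER_LEVELS)))
        for _ in range(steps):
            s = features(POWER_LEVELS[idx])
            # Epsilon-greedy action selection over the approximated Q-values.
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q_values(s)))
            idx2, r = step(idx, a)
            s2 = features(POWER_LEVELS[idx2])
            # One-step TD update on the linear Q-approximator.
            td = r + gamma * np.max(q_values(s2)) - q_values(s)[a]
            W[a] += lr * td * s
            b[a] += lr * td
            idx = idx2
    return idx
```

Because the approximator has a handful of weights rather than a Q-table over the full state space, this kind of design fits the memory and processing limits of an embedded ARM platform, which is the motivation the abstract gives for replacing tabular Q-learning with a DQN.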
Related papers
- DRL Optimization Trajectory Generation via Wireless Network Intent-Guided Diffusion Models for Optimizing Resource Allocation [58.62766376631344]
We propose a customized wireless network intent (WNI-G) model to address different state variations of wireless communication networks.
Extensive simulations demonstrate greater stability in spectral efficiency compared with traditional DRL models in dynamic communication systems.
arXiv Detail & Related papers (2024-10-18T14:04:38Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WEC) must handle multiple simultaneous waves coming from different directions called spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Scalable Power Control/Beamforming in Heterogeneous Wireless Networks with Graph Neural Networks [6.631773993784724]
We propose a novel unsupervised learning-based framework named heterogeneous interference graph neural network (HIGNN) to handle these challenges.
HIGNN is scalable to wireless networks of growing sizes with robust performance after trained on small-sized networks.
arXiv Detail & Related papers (2021-04-12T13:36:32Z)
- Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer [42.30741737568212]
We propose the solution of powering devices using wireless power transfer (WPT).
This work aims at deriving guidelines on deploying the resultant wirelessly powered FEEL (WP-FEEL) system.
The results provide useful guidelines on WPT provisioning to guarantee learning performance.
arXiv Detail & Related papers (2021-02-24T15:47:34Z)
- Leveraging AI and Intelligent Reflecting Surface for Energy-Efficient Communication in 6G IoT [14.027983498089084]
We propose an artificial intelligence (AI) and intelligent reflecting surface (IRS) empowered energy-efficiency communication system for 6G IoT.
First, we design a smart and efficient communication architecture including the IRS-aided data transmission and the AI-driven network resource management mechanisms.
Then, a deep reinforcement learning (DRL) empowered network resource control and allocation scheme is proposed to solve the formulated optimization model.
arXiv Detail & Related papers (2020-12-29T11:56:28Z)
- Learning to Charge RF-Energy Harvesting Devices in WiFi Networks [0.0]
We propose two solutions that enable the AP to manage its harvested energy via transmit power control.
The first solution uses a deep Q-network (DQN) whilst the second solution uses Model Predictive Control (MPC) to control the AP's transmit power.
Our results show that our DQN and MPC solutions improve energy efficiency and user satisfaction by 16% to 35% and 10% to 42%, respectively, compared with competing algorithms.
arXiv Detail & Related papers (2020-05-25T10:55:24Z)
- Wireless Power Control via Counterfactual Optimization of Graph Neural Networks [124.89036526192268]
We consider the problem of downlink power control in wireless networks, consisting of multiple transmitter-receiver pairs communicating over a single shared wireless medium.
To mitigate the interference among concurrent transmissions, we leverage the network topology to create a graph neural network architecture.
We then use an unsupervised primal-dual counterfactual optimization approach to learn optimal power allocation decisions.
arXiv Detail & Related papers (2020-02-17T07:54:39Z)
- Constrained Deep Reinforcement Learning for Energy Sustainable Multi-UAV based Random Access IoT Networks with NOMA [20.160827428161898]
We apply the Non-Orthogonal Multiple Access technique to improve massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.
IoT devices contend for access to the shared wireless channel using an adaptive $p$-persistent slotted Aloha protocol, and the solar-powered UAVs adopt Successive Interference Cancellation (SIC) to decode multiple simultaneous transmissions from IoT devices to improve access efficiency.
arXiv Detail & Related papers (2020-01-31T22:05:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.