Learning to Charge RF-Energy Harvesting Devices in WiFi Networks
- URL: http://arxiv.org/abs/2005.12022v1
- Date: Mon, 25 May 2020 10:55:24 GMT
- Title: Learning to Charge RF-Energy Harvesting Devices in WiFi Networks
- Authors: Yizhou Luo and Kwan-Wu Chin
- Abstract summary: We propose two solutions that enable the AP to manage its harvested energy via transmit power control.
The first solution uses a deep Q-network (DQN) whilst the second solution uses Model Predictive Control (MPC) to control the AP's transmit power.
Our results show that our DQN and MPC solutions improve energy efficiency and user satisfaction by 16% to 35% and 10% to 42%, respectively, compared to competing algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider a solar-powered Access Point (AP) that is
tasked with supporting both non-energy-harvesting (legacy) data users, such as
laptops, and devices with Radio Frequency (RF)-energy harvesting and sensing
capabilities. We propose two solutions that enable the AP to manage its
harvested energy via transmit power control and also ensure devices perform
sensing tasks frequently. Advantageously, our solutions are suitable for
current wireless networks and do not require perfect channel gain information
or non-causal knowledge of energy arrivals at devices. The first solution uses a deep
Q-network (DQN) whilst the second solution uses Model Predictive Control (MPC)
to control the AP's transmit power. Our results show that our DQN and MPC
solutions improve energy efficiency and user satisfaction by 16% to 35% and
10% to 42%, respectively, compared to competing algorithms.
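To make the first approach concrete, the sketch below trains a power-control policy on a toy model. It substitutes tabular Q-learning for the paper's DQN so the example stays dependency-free, and the state space (discretized device battery), power levels, harvest model, and reward are all invented for illustration — not the paper's actual formulation.

```python
import random

# Toy model: each slot the AP picks one of three transmit-power levels.
# The reward favors the lowest power that still lets a device complete
# its sensing task. (Hypothetical stand-in for the paper's DQN; tabular
# Q-learning is used here only to keep the sketch self-contained.)
POWER_LEVELS = [0.1, 0.5, 1.0]   # watts (illustrative)
SENSE_COST = 0.3                 # energy a device needs per sensing task

def step(power, battery):
    """Return (reward, new_battery) for one slot under a crude harvest model."""
    battery = min(1.0, battery + 0.8 * power)
    if battery >= SENSE_COST:
        # Device senses; reward energy efficiency (less power is better).
        return 1.0 - power, battery - SENSE_COST
    return -0.5, battery         # sensing task missed

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # State = discretized device battery level (0..10).
    q = [[0.0] * len(POWER_LEVELS) for _ in range(11)]
    for _ in range(episodes):
        battery = 0.0
        for _ in range(20):      # slots per episode
            s = int(battery * 10)
            a = (rng.randrange(len(POWER_LEVELS)) if rng.random() < eps
                 else max(range(len(POWER_LEVELS)), key=lambda i: q[s][i]))
            r, battery = step(POWER_LEVELS[a], battery)
            s2 = int(battery * 10)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q

q = train()
best = max(range(len(POWER_LEVELS)), key=lambda i: q[0][i])
print("preferred power at empty battery:", POWER_LEVELS[best])
```

In this toy, the mid power level (0.5 W) harvests just enough energy per slot to sustain sensing while keeping the efficiency reward high, so the learned policy settles on it.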
Related papers
- Towards Battery-Free Wireless Sensing via Radio-Frequency Energy Harvesting [11.511759874194706]
We propose REHSense, an energy-efficient wireless sensing solution based on Radio-Frequency (RF) energy harvesting.
Instead of relying on a power-hungry Wi-Fi receiver, REHSense leverages an RF energy harvester as the sensor.
We show that REHSense can achieve comparable sensing accuracy with conventional Wi-Fi-based solutions while adapting to different sensing environments.
arXiv Detail & Related papers (2024-08-26T02:01:39Z)
- Monitoring Efficiency of IoT Wireless Charging [0.39373541926236766]
We propose an energy estimation framework that predicts the actual received energy.
Our framework uses two machine learning algorithms, namely XGBoost and Neural Network, to estimate the received energy.
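The estimation idea above — learning a mapping from charger-side features to the energy a device actually receives — can be sketched minimally. Plain least-squares stands in here for the paper's XGBoost and neural-network models so no external libraries are needed, and the feature (distance) and training data are entirely made up:

```python
# Hypothetical sketch: fit a model predicting actually received energy
# from an observable feature. Ordinary least squares replaces the
# paper's XGBoost / neural network; the data below is synthetic.
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic training set: received energy falls off with distance.
distances = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # meters
received  = [9.2, 7.9, 6.8, 6.1, 5.0, 4.1]   # millijoules (made up)
a, b = fit_linear(distances, received)

def predict_energy(distance_m):
    return a * distance_m + b

print(round(predict_energy(1.2), 2))
```

A real deployment would use richer features (transmit power, RSSI, orientation) and the nonlinear models the paper names; the shape of the pipeline — collect (features, measured energy) pairs, fit, then predict — is the same.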
arXiv Detail & Related papers (2023-03-10T00:15:08Z)
- Deep Reinforcement Learning for Power Control in Next-Generation WiFi Network Systems [1.405141917351931]
This work applies deep reinforcement learning (DRL) to power control in wireless communications.
In a wireless network, each mobile node measures its link quality and signal strength, and controls its transmit power.
DRL is implemented for the embedded platform of each node combining an ARM processor and a WiFi transceiver for 802.11n.
arXiv Detail & Related papers (2022-11-02T13:32:03Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- RIS-assisted UAV Communications for IoT with Wireless Power Transfer Using Deep Reinforcement Learning [75.677197535939]
We propose a simultaneous wireless power transfer and information transmission scheme for IoT devices with support from unmanned aerial vehicle (UAV) communications.
In the first phase, IoT devices harvest energy from the UAV through wireless power transfer; in the second phase, the UAV collects data from the IoT devices through information transmission.
We formulate a Markov decision process and propose two deep reinforcement learning algorithms to solve the optimization problem of maximizing the total network sum-rate.
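At the heart of this harvest-then-transmit setup is a time-split trade-off: a longer power-transfer phase gives devices more energy, but leaves less time to send data. The toy below brute-forces that trade-off for a single device; the channel-gain constant and rate model are hypothetical stand-ins for the MDP the paper solves with deep RL.

```python
import math

# Toy harvest-then-transmit trade-off: devices harvest for a fraction
# tau of the slot, then spend the harvested energy transmitting in the
# remaining 1 - tau. All constants are illustrative; the paper optimizes
# a far richer multi-device MDP with deep reinforcement learning.
CHANNEL_GAIN = 4.0   # combined downlink*uplink gain (hypothetical)

def sum_rate(tau):
    """Shannon-style rate when harvested energy is spent over time 1-tau."""
    tx_power = CHANNEL_GAIN * tau / (1.0 - tau)   # energy spread over 1-tau
    return (1.0 - tau) * math.log2(1.0 + tx_power)

best_tau = max((t / 100 for t in range(1, 100)), key=sum_rate)
print(f"best harvest fraction ~ {best_tau:.2f}, rate {sum_rate(best_tau):.3f}")
```

Sweeping tau makes the concavity visible: too little harvesting starves the transmitter, too much leaves no airtime, and the optimum sits in between.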
arXiv Detail & Related papers (2021-08-05T23:55:44Z)
- Cognitive Radio Network Throughput Maximization with Deep Reinforcement Learning [58.44609538048923]
Radio-Frequency-powered Cognitive Radio Networks (RF-CRN) are likely to be the eyes and ears of upcoming modern networks such as the Internet of Things (IoT).
To be considered autonomous, the RF-powered network entities need to make decisions locally to maximize the network throughput under the uncertainty of any network environment.
In this paper, deep reinforcement learning is proposed to overcome the shortcomings and allow a wireless gateway to derive an optimal policy to maximize network throughput.
arXiv Detail & Related papers (2020-07-07T01:49:07Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- Wireless Power Control via Counterfactual Optimization of Graph Neural Networks [124.89036526192268]
We consider the problem of downlink power control in wireless networks consisting of multiple transmitter-receiver pairs communicating over a single shared wireless medium.
To mitigate the interference among concurrent transmissions, we leverage the network topology to create a graph neural network architecture.
We then use an unsupervised primal-dual counterfactual optimization approach to learn optimal power allocation decisions.
arXiv Detail & Related papers (2020-02-17T07:54:39Z)
- Constrained Deep Reinforcement Learning for Energy Sustainable Multi-UAV based Random Access IoT Networks with NOMA [20.160827428161898]
We apply the Non-Orthogonal Multiple Access technique to improve massive channel access of a wireless IoT network where solar-powered Unmanned Aerial Vehicles (UAVs) relay data from IoT devices to remote servers.
IoT devices contend for access to the shared wireless channel using an adaptive $p$-persistent slotted Aloha protocol, and the solar-powered UAVs adopt Successive Interference Cancellation (SIC) to decode multiple concurrent transmissions from IoT devices, improving access efficiency.
arXiv Detail & Related papers (2020-01-31T22:05:30Z)
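The $p$-persistent slotted Aloha contention described in the last entry is easy to sketch with a Monte-Carlo simulation. The toy below uses a single conventional receiver, so unlike the paper's SIC-equipped UAVs, any collision wastes the slot; the device count and transmit probabilities are illustrative only.

```python
import random

# Monte-Carlo sketch of p-persistent slotted Aloha. Hypothetical
# parameters; the paper additionally adapts p online and uses SIC at
# the UAVs, which this single-receiver toy omits: a slot succeeds only
# when exactly one device transmits.
def throughput(n_devices, p, slots=20000, seed=1):
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        senders = sum(1 for _ in range(n_devices) if rng.random() < p)
        if senders == 1:          # without SIC, collisions waste the slot
            successes += 1
    return successes / slots

# Classic result: with n devices, throughput peaks near p = 1/n,
# analytically n*p*(1-p)^(n-1), i.e. (0.9)^9 ~ 0.387 for n=10, p=0.1.
print(f"p=0.10: {throughput(10, 0.10):.3f}")
print(f"p=0.30: {throughput(10, 0.30):.3f}")   # over-aggressive, lower
```

This is why the paper adapts $p$: a fixed, too-aggressive transmit probability collapses throughput, while SIC relaxes the "exactly one sender" constraint and lifts the achievable ceiling.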
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.