Learning and Fairness in Energy Harvesting: A Maximin Multi-Armed
Bandits Approach
- URL: http://arxiv.org/abs/2003.06213v3
- Date: Tue, 16 Jun 2020 10:01:56 GMT
- Title: Learning and Fairness in Energy Harvesting: A Maximin Multi-Armed
Bandits Approach
- Authors: Debamita Ghosh, Arun Verma and Manjesh K. Hanawal
- Abstract summary: Recent advances in wireless radio frequency (RF) energy harvesting allow sensor nodes to increase their lifespan by remotely charging their batteries.
The amount of energy harvested by the nodes varies depending on their ambient environment and proximity to the source.
It is thus important to learn the minimum amount of energy harvested across the nodes on each frequency band so that the source can transmit on the band that maximizes this minimum.
- Score: 4.350783459690612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in wireless radio frequency (RF) energy harvesting allow
sensor nodes to increase their lifespan by remotely charging their batteries.
The amount of energy harvested by the nodes varies depending on their ambient
environment and proximity to the source. The lifespan of the sensor network
depends on the minimum amount of energy a node can harvest in the network. It
is thus important to learn the minimum energy harvested across the nodes on each
frequency band so that the source can transmit on the band that maximizes it. We
model this learning problem as a novel stochastic Maximin Multi-Armed Bandits
(Maximin MAB) problem and propose an Upper Confidence Bound (UCB) based
algorithm named Maximin UCB. Maximin MAB is a generalization of the standard MAB,
and Maximin UCB enjoys the same performance guarantee as the UCB1 algorithm.
Experimental results validate the performance guarantees of our algorithm.
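The abstract describes Maximin UCB only at a high level, so a brief illustration may help. The following is a minimal sketch, not the paper's exact algorithm: for each frequency band the source maintains per-node sample means of harvested energy, adds a UCB1-style confidence bonus, scores each band by the minimum of these optimistic indices over the nodes, and transmits on the band with the highest score. The function names, confidence constant, Bernoulli harvest model, and band/node counts below are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def maximin_ucb(harvest_fn, n_bands, n_nodes, horizon):
    """Sketch of a Maximin-UCB-style band-selection rule (illustrative only).

    harvest_fn(band) -> length-n_nodes array of energy harvested by each node
    when the source transmits on `band`; values assumed to lie in [0, 1].
    """
    counts = np.zeros(n_bands)            # number of times each band was used
    means = np.zeros((n_bands, n_nodes))  # per-band, per-node sample means

    for t in range(horizon):
        if t < n_bands:                   # play every band once to initialize
            band = t
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)[:, None]
            optimistic = means + bonus    # UCB1-style index per (band, node)
            band = int(np.argmax(optimistic.min(axis=1)))  # maximin selection

        energy = np.asarray(harvest_fn(band), dtype=float)
        counts[band] += 1
        means[band] += (energy - means[band]) / counts[band]  # running mean update

    return int(np.argmax(means.min(axis=1)))  # band with the best worst-node mean

# Toy environment (assumed): Bernoulli harvest with per-band, per-node rates.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rates = rng.uniform(0.2, 0.9, size=(5, 4))  # 5 bands, 4 nodes
    best = maximin_ucb(lambda b: rng.binomial(1, rates[b]), 5, 4, 20000)
    print("estimated maximin band:", best,
          "true maximin band:", int(np.argmax(rates.min(axis=1))))
```

Because each per-node index is a standard UCB1 index, the maximin selection rule inherits the usual optimism-in-the-face-of-uncertainty reasoning, which is consistent with the abstract's claim that Maximin UCB matches the UCB1 guarantee.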
Related papers
- Energy-Efficient Sleep Mode Optimization of 5G mmWave Networks Using Deep Contextual MAB [0.0]
An effective strategy to reduce energy consumption in mobile networks is the sleep mode optimization (SMO) of base stations (BSs).
In this paper, we propose a novel SMO approach for mmWave BSs in a 3D urban environment.
Our proposed method outperforms all other SM strategies in terms of the 10th percentile of user rate and average throughput.
arXiv Detail & Related papers (2024-05-15T17:37:28Z) - Sum-Rate Maximization of RSMA-based Aerial Communications with Energy
Harvesting: A Reinforcement Learning Approach [5.35414932422173]
A self-sustainable aerial base station serves multiple users by utilizing the harvested energy.
To maximize the sum-rate from a long-term perspective, we utilize a deep reinforcement learning (DRL) approach.
We show the superiority of the proposed scheme over several baseline methods in terms of the average sum-rate performance.
arXiv Detail & Related papers (2023-06-22T15:38:22Z) - Energy-Efficient Design for a NOMA assisted STAR-RIS Network with Deep
Reinforcement Learning [78.50920340621677]
Simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have been considered promising auxiliary devices for enhancing the performance of wireless networks.
In this paper, the energy efficiency (EE) problem for a non-orthogonal multiple access (NOMA) network is investigated.
A deep deterministic policy gradient-based algorithm is proposed to maximize the EE by jointly optimizing the transmission beamforming vectors at the base station and the coefficient matrices at the STAR-RIS.
arXiv Detail & Related papers (2021-11-30T15:01:19Z) - Deep Reinforcement Learning Based Multidimensional Resource Management
for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Design and Comparison of Reward Functions in Reinforcement Learning for
Energy Management of Sensor Nodes [0.0]
Interest in remote monitoring has grown thanks to recent advancements in Internet-of-Things (IoT) paradigms.
New applications have emerged, using small devices called sensor nodes capable of collecting data from the environment and processing it.
Battery technologies have not improved fast enough to cope with these increasing needs.
Miniature energy harvesting devices have emerged to complement traditional energy sources.
arXiv Detail & Related papers (2021-06-02T12:23:47Z) - Learning to Optimize Energy Efficiency in Energy Harvesting Wireless
Sensor Networks [11.075698140595113]
We study wireless power transmission by an energy source to multiple energy harvesting nodes.
We develop an Upper Confidence Bound based algorithm, which learns the optimal transmit power of the energy source that maximizes the energy efficiency.
arXiv Detail & Related papers (2020-12-30T15:51:39Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence has been proposed to collect distributed data and perform machine learning at the edge.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)