Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2003.02157v3
- Date: Wed, 6 Jan 2021 02:51:28 GMT
- Title: Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach
- Authors: Md. Shirajum Munir, Sarder Fakhrul Abedin, Nguyen H. Tran, Zhu Han,
Eui-Nam Huh, Choong Seon Hong
- Abstract summary: We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
- Score: 82.6692222294594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, multi-access edge computing (MEC) has become a key
enabler for handling the massive expansion of Internet of Things (IoT)
applications and services. However, the energy consumption of a MEC network
depends on volatile tasks that induce risk into energy demand estimation. As an
energy supplier, a microgrid can facilitate seamless energy supply. However,
the risk associated with energy supply also increases due to unpredictable
generation from renewable and non-renewable sources. In particular, the risk of
energy shortfall involves uncertainties in both energy consumption and
generation. In this paper, we study a risk-aware energy scheduling problem for
a microgrid-powered MEC network. First, we formulate an optimization problem
that applies the conditional value-at-risk (CVaR) measure to both energy
consumption and generation, where the objective is to minimize the expected
residual of scheduled energy for the MEC network, and we show that this problem
is NP-hard. Second, we analyze the formulated problem as a multi-agent
stochastic game that admits a joint-policy Nash equilibrium, and we show the
convergence of the proposed model. Third, we derive the solution by applying a
multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage
actor-critic (A3C) algorithm with shared neural networks. This method
mitigates the curse of dimensionality of the state space and chooses the best
policy among the agents. Finally, the experimental results establish a
significant performance gain from considering CVaR: the proposed model
schedules energy with higher accuracy than both the single-agent and
random-agent models.
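The abstract builds its risk measure on the conditional value-at-risk (CVaR), i.e. the expected loss in the worst (1 - alpha) tail of the distribution. The following is a minimal illustrative sketch of how CVaR is computed from loss samples; it is not the paper's formulation, and the shortfall distribution and parameter values below are hypothetical.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: mean of the losses at or above the
    alpha-quantile (the value-at-risk threshold)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)  # value-at-risk at level alpha
    return losses[losses >= var].mean()

# Hypothetical energy-shortfall samples (kWh) for illustration only.
rng = np.random.default_rng(0)
shortfall = rng.normal(loc=2.0, scale=1.0, size=10_000)
print(f"mean shortfall: {shortfall.mean():.2f} kWh")
print(f"CVaR(0.95):     {cvar(shortfall, alpha=0.95):.2f} kWh")
```

Because CVaR averages only the worst-case tail, it always upper-bounds the mean loss, which is what makes it a conservative objective for scheduling against energy shortfall.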
Related papers
- Energy-Aware Dynamic Neural Inference [39.04688735618206]
We introduce an on-device adaptive inference system equipped with an energy-harvester and finite-capacity energy storage.
We show that, as the rate of the ambient energy increases, energy- and confidence-aware control schemes show approximately 5% improvement in accuracy.
We derive a principled policy with theoretical guarantees for confidence-aware and -agnostic controllers.
arXiv Detail & Related papers (2024-11-04T16:51:22Z) - Multiagent Reinforcement Learning with an Attention Mechanism for
Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa)
Simulation results demonstrate that MALoRa significantly improves the system energy efficiency (EE) compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z) - A Safe Genetic Algorithm Approach for Energy Efficient Federated
Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique, where contrary to traditional centralized approaches, devices perform a model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z) - Sustainable Edge Intelligence Through Energy-Aware Early Exiting [0.726437825413781]
We propose energy-adaptive dynamic early exiting to enable efficient and accurate inference in an EH edge intelligence system.
Our approach derives an energy-aware early-exit (EE) policy that determines the optimal amount of computational processing on a per-sample basis.
Results show that accuracy and service rate are improved up to 25% and 35%, respectively, in comparison with an energy-agnostic policy.
arXiv Detail & Related papers (2023-05-23T14:17:44Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Cascaded Deep Hybrid Models for Multistep Household Energy Consumption
Forecasting [5.478764356647437]
This study introduces two hybrid cascaded models for forecasting multistep household power consumption in different resolutions.
The proposed hybrid models achieve superior prediction performance compared to the existing multistep power consumption prediction methods.
arXiv Detail & Related papers (2022-07-06T11:02:23Z) - Deep Reinforcement Learning Based Multidimensional Resource Management
for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - Threshold-Based Data Exclusion Approach for Energy-Efficient Federated
Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
FEEL might significantly shorten energy-constrained participating devices' lifetime due to the power consumed during the model training round.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce up to 11% non-renewable energy usage and by 22.4% the energy cost.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.