Correlated Deep Q-learning based Microgrid Energy Management
- URL: http://arxiv.org/abs/2103.04152v1
- Date: Sat, 6 Mar 2021 16:43:18 GMT
- Title: Correlated Deep Q-learning based Microgrid Energy Management
- Authors: Hao Zhou and Melike Erol-Kantarci
- Abstract summary: This paper proposes a correlated deep Q-learning (CDQN) based technique for microgrid (MG) energy management.
Each electrical entity is modeled as an agent with a neural network that predicts its own Q-values, after which the correlated Q-equilibrium is used to coordinate the agents.
Simulation results show 40.9% and 9.62% higher profit for the energy storage system (ESS) agent and the photovoltaic (PV) agent, respectively.
- Score: 12.013067383415747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Microgrid (MG) energy management is an important part of MG operation.
Various entities are generally involved in the energy management of an MG,
e.g., the energy storage system (ESS), renewable energy resources (RER), and
user loads, and it is crucial to coordinate these entities. Considering the
significant potential of machine learning techniques, this paper proposes a
correlated deep Q-learning (CDQN) based technique for MG energy management.
Each electrical entity is modeled as an agent with a neural network that
predicts its own Q-values, after which the correlated Q-equilibrium is used to
coordinate the operation among agents. In this paper, a Long Short-Term
Memory (LSTM) based deep Q-learning algorithm is introduced, and the
correlated equilibrium is proposed to coordinate the agents. Simulation
results show 40.9% and 9.62% higher profit for the ESS agent and the
photovoltaic (PV) agent, respectively.
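To make the mechanism above concrete, the sketch below pairs a per-agent LSTM Q-network with a correlated-equilibrium solver over the joint action space. This is a minimal sketch in Python (PyTorch, NumPy, SciPy), not the authors' implementation: it assumes each agent's network outputs a Q-value for every joint action, and it selects a utilitarian correlated equilibrium (maximizing the sum of expected Q-values) via a linear program; the paper's exact architecture, state and reward definitions, and equilibrium-selection rule are not specified in this summary, so all names and hyperparameters are illustrative.

```python
# Minimal sketch of the CDQN building blocks (illustrative; see caveats above).
import itertools

import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linprog


class LSTMQNetwork(nn.Module):
    """Per-agent Q-network: an LSTM over a window of past states, with a
    linear head emitting one Q-value per joint action (an assumption; the
    abstract only says each agent predicts its own Q-values)."""

    def __init__(self, state_dim: int, n_joint_actions: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joint_actions)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, state_dim)
        out, _ = self.lstm(states)
        return self.head(out[:, -1, :])  # Q-values at the final time step


def utilitarian_correlated_equilibrium(q_tables, action_sets):
    """Linear program for a correlated equilibrium that maximizes the sum
    of the agents' expected Q-values.

    q_tables[i][a]  -- agent i's Q-value for the joint-action tuple a
    action_sets[i]  -- the list of agent i's actions
    Returns a probability distribution over joint actions.
    """
    joint = list(itertools.product(*action_sets))
    idx = {a: k for k, a in enumerate(joint)}
    n = len(joint)

    # Objective: maximize sum_i E_p[Q_i] (linprog minimizes, so negate).
    c = -np.array([sum(q[a] for q in q_tables) for a in joint])

    # CE constraints: for each agent i and each recommendation/deviation
    # pair (a_i, a_i'), obeying the recommendation must not lose value:
    #   sum_{a : a[i] = a_i} p(a) * (Q_i(a) - Q_i(a with a_i -> a_i')) >= 0
    rows = []
    for i, acts in enumerate(action_sets):
        for a_i in acts:
            for a_dev in acts:
                if a_dev == a_i:
                    continue
                row = np.zeros(n)
                for a in joint:
                    if a[i] == a_i:
                        swapped = a[:i] + (a_dev,) + a[i + 1:]
                        row[idx[a]] = q_tables[i][a] - q_tables[i][swapped]
                rows.append(-row)  # linprog expects A_ub @ p <= b_ub

    res = linprog(c,
                  A_ub=np.array(rows) if rows else None,
                  b_ub=np.zeros(len(rows)) if rows else None,
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0, 1)] * n)
    return dict(zip(joint, res.x))
```

In use, each agent would evaluate its LSTM network on the recent state history to fill its Q-table, a coordinator would solve the program once per step and sample a joint action from the resulting distribution, and the Q-learning target would use the expected equilibrium value at the next state in place of the usual per-agent max, which is the defining step of correlated Q-learning.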
Related papers
- Pointer Networks with Q-Learning for Combinatorial Optimization [55.2480439325792]
We introduce the Pointer Q-Network (PQN), a hybrid neural architecture that integrates model-free Q-value policy approximation with Pointer Networks (Ptr-Nets).
Our empirical results demonstrate the efficacy of this approach; we also test the model in unstable environments.
arXiv Detail & Related papers (2023-11-05T12:03:58Z)
- Smart Home Energy Management: VAE-GAN synthetic dataset generator and Q-learning [15.995891934245334]
We propose a novel variational auto-encoder generative adversarial network (VAE-GAN) technique for generating time-series data on energy consumption in smart homes.
We tested the online performance of a Q-learning-based home energy management system (HEMS) with real-world smart home data.
arXiv Detail & Related papers (2023-05-14T22:22:16Z)
- Federated Multi-Agent Deep Reinforcement Learning Approach via Physics-Informed Reward for Multi-Microgrid Energy Management [34.18923657108073]
This paper proposes a federated multi-agent deep reinforcement learning (F-MADRL) algorithm via a physics-informed reward.
In this algorithm, a federated learning mechanism is introduced to train the F-MADRL algorithm, thus ensuring the privacy and security of data.
Experiments are conducted on the Oak Ridge National Laboratory distributed energy control communication lab microgrid (ORNL-MG) test system.
arXiv Detail & Related papers (2022-12-29T08:35:11Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
The intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Multi-agent Bayesian Deep Reinforcement Learning for Microgrid Energy Management under Communication Failures [10.099371194251052]
We propose a multi-agent Bayesian deep reinforcement learning (BA-DRL) method for MG energy management under communication failures.
BA-DRL achieves 4.1% and 10.3% higher reward than Nash deep Q-learning (Nash-DQN) and the alternating direction method of multipliers (ADMM), respectively, under a 1% communication failure probability.
arXiv Detail & Related papers (2021-11-22T03:08:10Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Information Freshness-Aware Task Offloading in Air-Ground Integrated Edge Computing Systems [49.80033982995667]
This paper studies the problem of information freshness-aware task offloading in an air-ground integrated multi-access edge computing system.
A third-party real-time application service provider provides computing services to the subscribed mobile users (MUs) with the limited communication and computation resources from the infrastructure provider (InP).
We derive a novel deep reinforcement learning (RL) scheme that adopts two separate double deep Q-networks for each MU to approximate the Q-factor and the post-decision Q-factor.
arXiv Detail & Related papers (2020-07-15T21:32:43Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL) based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)