Deep Reinforcement Learning Microgrid Optimization Strategy Considering
Priority Flexible Demand Side
- URL: http://arxiv.org/abs/2211.05946v1
- Date: Fri, 11 Nov 2022 01:43:10 GMT
- Title: Deep Reinforcement Learning Microgrid Optimization Strategy Considering
Priority Flexible Demand Side
- Authors: Jinsong Sang, Hongbin Sun and Lei Kou
- Abstract summary: A microgrid mainly faces the small-scale volatility, intermittency, and uncertainty of DERs, as well as uncertainty on the demand side.
The traditional microgrid takes a single form and cannot support flexible energy dispatch between a complex demand side and the microgrid.
This paper considers the response priority of each unit of the TCLs and ESSs within the overall operating environment of the microgrid.
- Score: 6.129841305145217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an efficient way to integrate multiple distributed energy
resources (DERs) and the user side, a microgrid mainly faces the small-scale
volatility, intermittency, and uncertainty of DERs, together with uncertainty
on the demand side. The traditional microgrid takes a single form and cannot
support flexible energy dispatch between a complex demand side and the
microgrid. In response to this problem, an overall environment comprising wind
power, thermostatically controlled loads (TCLs), energy storage systems
(ESSs), price-responsive loads, and the main grid is proposed. Centralized
control of microgrid operation is convenient for controlling the reactive
power and voltage of the distributed power supply and for adjusting the grid
frequency. However, flexible loads tend to aggregate and create new demand
peaks during the electricity-price valley. Existing research accounts for the
power constraints of the microgrid but fails to ensure a sufficient supply of
electric energy for each individual flexible load. This paper considers the
response priority of each unit of the TCLs and ESSs within the overall
operating environment of the microgrid, so as to guarantee the power supply of
the microgrid's flexible loads while minimizing the cost of purchased power.
Finally, optimization over this simulated environment can be expressed as a
Markov decision process (MDP), and the training procedure combines offline and
online stages. Because the asynchronous threads initially lack historical data
to learn from, learning efficiency is low; an asynchronous advantage
actor-critic (A3C) algorithm augmented with an experience replay pool is
therefore used to address the data-correlation and
non-stationary-distribution problems during training.
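To make the priority-based dispatch idea concrete, the sketch below encodes a toy MDP environment in which flexible demand is covered first by wind, then by ESS units discharged in an agent-chosen priority order, with any shortfall imported from the main grid. This is a minimal illustration under assumed dynamics; the class name, state layout, and all parameters are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative toy environment (assumed dynamics, not the paper's model):
# demand is met first by wind, then by ESS units in an agent-chosen
# priority order; any remaining shortfall is imported from the main grid.

class PriorityMicrogridEnv:
    def __init__(self, n_ess=2, ess_capacity=50.0, price=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_ess = n_ess
        self.ess_capacity = ess_capacity
        self.price = price               # assumed flat purchase price
        self.reset()

    def reset(self):
        # Start each ESS at half charge and draw initial disturbances.
        self.soc = np.full(self.n_ess, 0.5 * self.ess_capacity)
        self.wind = self.rng.uniform(0.0, 60.0)      # volatile wind output (kW)
        self.demand = self.rng.uniform(20.0, 80.0)   # flexible demand (kW)
        return self._state()

    def _state(self):
        return np.concatenate(([self.wind, self.demand], self.soc))

    def step(self, priority_order):
        """priority_order: permutation of ESS indices, highest priority first."""
        residual = max(self.demand - self.wind, 0.0)
        for i in priority_order:                     # discharge in priority order
            draw = min(self.soc[i], residual)
            self.soc[i] -= draw
            residual -= draw
        grid_import = residual                       # shortfall bought from grid
        reward = -self.price * grid_import           # minimize purchase cost
        # Draw next-step disturbances (i.i.d. here for simplicity).
        self.wind = self.rng.uniform(0.0, 60.0)
        self.demand = self.rng.uniform(20.0, 80.0)
        return self._state(), reward, False, {"grid_import": grid_import}
```

An agent interacting with this sketch would choose the discharge ranking each step, e.g. `state, reward, done, info = env.step([1, 0])`; the paper's richer setting additionally ranks TCL units and includes price-responsive loads.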
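The abstract's key algorithmic point is A3C augmented with an experience replay pool. The schematic below shows one way to combine the two in PyTorch: an actor-critic network whose gradient steps use minibatches sampled from a shared replay deque, which breaks temporal correlation in the data. In a full A3C setup, several such workers would update shared parameters asynchronously. Network sizes, hyperparameters, and the replay tuple format are assumptions for illustration, not the authors' implementation.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch: actor-critic updates driven by a shared experience replay pool.

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)   # actor head
        self.value = nn.Linear(hidden, 1)            # critic head

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h)

replay = deque(maxlen=10_000)   # shared replay pool of (s, a, r, s') tuples

def train_step(net, optimizer, batch_size=32, gamma=0.99):
    if len(replay) < batch_size:
        return
    obs, actions, rewards, next_obs = zip(*random.sample(replay, batch_size))
    obs = torch.stack(obs)
    next_obs = torch.stack(next_obs)
    actions = torch.tensor(actions)
    rewards = torch.tensor(rewards, dtype=torch.float32)

    logits, values = net(obs)
    with torch.no_grad():                      # bootstrap target, no gradient
        _, next_values = net(next_obs)
    targets = rewards + gamma * next_values.squeeze(-1)
    advantage = targets - values.squeeze(-1)   # one-step advantage estimate

    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()

    optimizer.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    optimizer.step()
```

A worker thread would push `(obs, action, reward, next_obs)` tuples into `replay` while periodically calling `train_step` on the shared `net` (e.g. with `torch.optim.Adam(net.parameters(), lr=1e-3)`). In the paper's two-stage scheme, one could imagine the offline stage pre-filling the replay pool before the online asynchronous workers start, consistent with the abstract's point that the threads otherwise lack historical data to learn from.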
Related papers
- Optimizing Load Scheduling in Power Grids Using Reinforcement Learning and Markov Decision Processes [0.0]
This paper proposes a reinforcement learning (RL) approach to address the challenges of dynamic load scheduling.
Our results show that the RL-based method provides a robust and scalable solution for real-time load scheduling.
arXiv Detail & Related papers (2024-10-23T09:16:22Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Machine Learning based Optimal Feedback Control for Microgrid Stabilization [6.035279357076201]
An energy-storage-based feedback controller can compensate for undesired dynamics of a microgrid to improve its stability.
This paper proposes a machine learning-based optimal feedback control scheme.
A case study is carried out for a microgrid model based on a modified Kundur two-area system to test the real-time performance of the proposed control scheme.
arXiv Detail & Related papers (2022-03-09T15:44:56Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Threshold-Based Data Exclusion Approach for Energy-Efficient Federated Edge Learning [4.25234252803357]
Federated edge learning (FEEL) is a promising distributed learning technique for next-generation wireless networks.
However, FEEL might significantly shorten the lifetime of energy-constrained participating devices due to the power consumed during model training rounds.
This paper proposes a novel approach that endeavors to minimize computation and communication energy consumption during FEEL rounds.
arXiv Detail & Related papers (2021-03-30T13:34:40Z)
- Multi-Objective Reinforcement Learning based Multi-Microgrid System Optimisation Problem [4.338938227238059]
Microgrids with energy storage systems and distributed renewable energy sources play a crucial role in reducing the consumption from traditional power sources and the emission of $CO_2$.
Connecting multiple microgrids to a distribution power grid facilitates more robust and reliable operation and increases the security and privacy of the system.
The proposed model consists of three layers: the smart grid layer, the independent system operator (ISO) layer, and the power grid layer.
arXiv Detail & Related papers (2021-03-10T23:01:22Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)