Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer
- URL: http://arxiv.org/abs/2109.01659v1
- Date: Thu, 2 Sep 2021 14:48:25 GMT
- Title: Reinforcement Learning for Battery Energy Storage Dispatch augmented with Model-based Optimizer
- Authors: Gayathri Krishnamoorthy and Anamika Dubey
- Abstract summary: This paper proposes a novel approach to combine the physics-based models with learning-based algorithms to solve distribution-level OPF problems.
The effectiveness of the proposed approach is demonstrated using IEEE 34-bus and 123-bus distribution feeders with numerous distribution-level battery storage systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning has been found useful in solving optimal power flow
(OPF) problems in electric power distribution systems. However, the use of
largely model-free reinforcement learning algorithms that completely ignore the
physics-based modeling of the power grid compromises the optimizer performance
and poses scalability challenges. This paper proposes a novel approach to
synergistically combine the physics-based models with learning-based algorithms
using imitation learning to solve distribution-level OPF problems.
Specifically, we propose imitation learning based improvements in deep
reinforcement learning (DRL) methods to solve the OPF problem for a specific
case of battery storage dispatch in the power distribution systems. The
proposed imitation learning algorithm uses the approximate optimal solutions
obtained from a linearized model-based OPF solver to provide a good initial
policy for the DRL algorithms while improving the training efficiency. The
effectiveness of the proposed approach is demonstrated using IEEE 34-bus and
123-bus distribution feeders with numerous distribution-level battery storage
systems.
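As a purely illustrative sketch of the imitation-learning warm start described in the abstract, the snippet below behavior-clones a tiny policy network onto the outputs of a stand-in "linearized OPF solver". The one-dimensional state, the dispatch rule, and the network size are simplifying assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a linearized model-based OPF solver: given a
# grid state (here just the normalized net load at the storage bus), it
# returns an approximate optimal battery dispatch in [-1, 1].
def linearized_opf_dispatch(net_load):
    return np.clip(-0.8 * net_load, -1.0, 1.0)

# "Expert" dataset of (state, action) pairs produced by the solver.
states = rng.uniform(-1.5, 1.5, size=(512, 1))
actions = linearized_opf_dispatch(states)

# Tiny one-hidden-layer policy trained by behavior cloning, i.e.
# supervised regression onto the solver's actions.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def policy(x):
    return np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2)

mse0 = float(np.mean((policy(states) - actions) ** 2))  # before cloning
lr = 0.1
for _ in range(3000):
    h = np.tanh(states @ W1 + b1)
    pred = np.tanh(h @ W2 + b2)
    # Gradient of 0.5 * mean squared error through both tanh layers.
    g_out = (pred - actions) * (1 - pred ** 2) / len(states)
    g_h = (g_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * states.T @ g_h; b1 -= lr * g_h.sum(0)

mse = float(np.mean((policy(states) - actions) ** 2))
print(f"imitation MSE: {mse0:.3f} -> {mse:.3f}")
```

In the paper's pipeline, weights pretrained in this fashion would seed the DRL agent's policy before reinforcement learning begins, which is where the reported training-efficiency gain comes from.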
Related papers
- Energy-efficient Decentralized Learning via Graph Sparsification [6.290202502226849]
This work aims at improving the energy efficiency of decentralized learning by optimizing the mixing matrix, which controls the communication demands during the learning process.
A solution with guaranteed performance is proposed for the special case of a fully-connected base topology, and a greedy algorithm is proposed for the general case.
Simulations based on real topology and dataset show that the proposed solution can lower the energy consumption at the busiest node by 54%-76% while maintaining the quality of the trained model.
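The greedy step for the general case can be pictured with a minimal toy sketch: treat a node's degree as a proxy for its communication energy and remove links at the busiest node while keeping the topology connected. The actual method optimizes the mixing matrix itself; this illustration omits mixing weights and convergence constraints entirely:

```python
# Depth-first search connectivity check on an adjacency matrix.
def connected(adj):
    n = len(adj); seen = {0}; stack = [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v); stack.append(v)
    return len(seen) == n

# Greedily drop edges at the busiest node (degree as an energy proxy)
# until its degree meets the target, never disconnecting the graph.
def greedy_sparsify(adj, max_busiest_degree):
    adj = [row[:] for row in adj]
    while True:
        deg = [sum(r) for r in adj]
        hub = max(range(len(adj)), key=lambda i: deg[i])
        if deg[hub] <= max_busiest_degree:
            return adj
        removed = False
        for v in sorted(range(len(adj)), key=lambda v: -deg[v]):
            if adj[hub][v]:
                adj[hub][v] = adj[v][hub] = 0
                if connected(adj):
                    removed = True
                    break
                adj[hub][v] = adj[v][hub] = 1  # edge was a bridge; restore
        if not removed:
            return adj  # cannot sparsify further without disconnecting

# Fully connected 5-node base topology sparsifies down to a ring.
n = 5
base = [[int(i != j) for j in range(n)] for i in range(n)]
sparse = greedy_sparsify(base, 2)
print([sum(r) for r in sparse])
```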
arXiv Detail & Related papers (2024-01-05T23:00:38Z)
- GP CC-OPF: Gaussian Process based optimization tool for Chance-Constrained Optimal Power Flow [54.94701604030199]
The Gaussian Process (GP) based Chance-Constrained Optimal Power Flow (CC-OPF) is an open-source Python code for the economic dispatch (ED) problem in power grids.
The developed tool presents a novel data-driven approach based on the CC-OPF model for solving the large regression problem with a trade-off between complexity and accuracy.
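For intuition, a bare-bones GP regression in NumPy (RBF kernel, assumed hyperparameters, toy data, and not the actual tool's API) shows the posterior mean and uncertainty that a chance-constrained formulation can exploit:

```python
import numpy as np

# Squared-exponential (RBF) kernel between two sets of points.
def rbf(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=30)  # noisy toy observations

noise = 0.1 ** 2
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

Xs = np.linspace(-3, 3, 50)[:, None]
Ks = rbf(Xs, X)
mean = Ks @ alpha                                   # posterior mean
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))       # posterior std

# A chance constraint P(g(x) <= 0) >= 1 - eps can then be approximated
# by the deterministic surrogate mean + k * std <= 0 for a quantile k.
print(mean[25], std[25])
```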
arXiv Detail & Related papers (2023-02-16T17:59:06Z)
- Proximal Policy Optimization with Graph Neural Networks for Optimal Power Flow [4.27638925658716]
Graph Neural Networks (GNNs) have allowed the natural use of Machine Learning (ML) algorithms on graph-structured data.
Deep Reinforcement Learning (DRL) is known for its powerful capability to solve complex decision-making problems.
We propose an architecture that learns how to solve the problem and that is at the same time able to generalize to unseen scenarios.
arXiv Detail & Related papers (2022-12-23T17:00:00Z)
- Efficient Learning of Voltage Control Strategies via Model-based Deep Reinforcement Learning [9.936452412191326]
This article proposes a model-based deep reinforcement learning (DRL) method to design emergency control strategies for short-term voltage stability problems in power systems.
Recent advances show promising results in model-free DRL-based methods for power systems, but model-free methods suffer from poor sample efficiency and long training times.
We propose a novel model-based-DRL framework where a deep neural network (DNN)-based dynamic surrogate model is utilized with the policy learning framework.
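A heavily simplified sketch of the model-based idea: fit a surrogate of the (hidden) dynamics from logged transitions, then use cheap surrogate predictions to choose control actions. The linear toy system, least-squares surrogate, and one-step lookahead below are illustrative assumptions; the paper uses a DNN surrogate inside a full DRL loop:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "voltage deviation" dynamics standing in for the power system:
# v' = 0.9*v + 0.2*a + noise. The true model is hidden from the agent.
def true_step(v, a):
    return 0.9 * v + 0.2 * a + 0.01 * rng.normal()

# 1) Fit a surrogate dynamics model from logged random transitions
#    (least squares here; the paper fits a DNN surrogate instead).
V = rng.uniform(-1, 1, 200); A = rng.uniform(-1, 1, 200)
Vn = np.array([true_step(v, a) for v, a in zip(V, A)])
X = np.stack([V, A], axis=1)
theta, *_ = np.linalg.lstsq(X, Vn, rcond=None)  # recovers ~[0.9, 0.2]

# 2) Use the surrogate for cheap "imagined" one-step lookahead: pick the
#    action whose predicted next state is closest to the nominal value 0.
def policy(v, candidates=np.linspace(-1, 1, 21)):
    preds = theta[0] * v + theta[1] * candidates
    return candidates[np.argmin(np.abs(preds))]

v = 1.0
for _ in range(20):
    v = true_step(v, policy(v))
print(f"final deviation: {abs(v):.3f}")
```

The appeal is that the surrogate absorbs most of the interaction cost: policy search queries the cheap learned model rather than the real (or fully simulated) system.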
arXiv Detail & Related papers (2022-12-06T02:50:53Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in the cross-silo FL setting.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Model-Informed Generative Adversarial Network (MI-GAN) for Learning Optimal Power Flow [5.407198609685119]
The optimal power flow (OPF) problem, as a critical component of power system operations, becomes increasingly difficult to solve due to the variability, intermittency, and unpredictability of renewable energy brought to the power system.
Deep learning techniques, such as neural networks, have recently been developed to improve computational efficiency in solving OPF problems with the utilization of data.
In this paper, we propose an optimization model-informed generative adversarial network (MI-GAN) framework to solve OPF under uncertainty.
arXiv Detail & Related papers (2022-06-04T00:37:37Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC server with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL [0.0]
This paper shows how reinforcement learning can be used on an operational level on accelerator physics problems.
We compare purely model-based to model-free reinforcement learning applied to the intensity optimisation on the FERMI FEL system.
We find that the model-based approach demonstrates higher representational power and sample efficiency, while the final performance of the model-free method is slightly superior.
arXiv Detail & Related papers (2020-12-17T16:57:27Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.