Scalable Optimal Design of Incremental Volt/VAR Control using Deep
Neural Networks
- URL: http://arxiv.org/abs/2301.01440v1
- Date: Wed, 4 Jan 2023 04:19:12 GMT
- Title: Scalable Optimal Design of Incremental Volt/VAR Control using Deep
Neural Networks
- Authors: Sarthak Gupta, Ali Mehrizi-Sani, Spyros Chatzivasileiadis, Vassilis
Kekatos
- Abstract summary: We propose a scalable solution by reformulating Optimal Rule Design (ORD) as training a deep neural network (DNN).
Analytical findings and numerical tests corroborate that the proposed ORD solution can be neatly adapted to single/multi-phase feeders.
- Score: 2.018732483255139
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volt/VAR control rules facilitate the autonomous operation of distributed
energy resources (DER) to regulate voltage in power distribution grids.
According to non-incremental control rules, such as the one mandated by the
IEEE Standard 1547, the reactive power setpoint of each DER is computed as a
piecewise-linear curve of the local voltage. However, the slopes of such curves
are upper-bounded to ensure stability. On the other hand, incremental rules add
a memory term into the setpoint update, rendering them universally stable. They
can thus attain enhanced steady-state voltage profiles. Optimal rule design
(ORD) for incremental rules can be formulated as a bilevel program. We put
forth a scalable solution by reformulating ORD as training a deep neural
network (DNN). This DNN emulates the Volt/VAR dynamics for incremental rules
derived as iterations of proximal gradient descent (PGD). Analytical findings
and numerical tests corroborate that the proposed ORD solution can be neatly
adapted to single/multi-phase feeders.
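To make the two rule families concrete, here is a single-bus sketch in plain NumPy contrasting a non-incremental IEEE 1547-style curve with a generic integrator-style incremental update. All numbers and helper names are illustrative, not taken from the paper:

```python
import numpy as np

# Toy single-bus model: the local voltage responds linearly to the
# DER's reactive injection, v = V_EXT + X_SENS * q (all in p.u.).
V_EXT, X_SENS, Q_MAX = 1.04, 0.05, 1.0

def volt_var_curve(v):
    """Non-incremental IEEE 1547-style piecewise-linear rule q = f(v).
    np.interp clamps to the endpoint setpoints outside [0.95, 1.05]."""
    return np.interp(v, [0.95, 0.98, 1.02, 1.05], [Q_MAX, 0.0, 0.0, -Q_MAX])

def run_non_incremental(steps=8):
    q, hist = 0.0, []
    for _ in range(steps):
        v = V_EXT + X_SENS * q   # grid responds to the last setpoint
        q = volt_var_curve(v)    # setpoint is a static map of the voltage
        hist.append(round(float(q), 3))
    # Here the curve slope (~33.3) times the sensitivity (0.05) exceeds 1,
    # so the iterates lock into a 2-cycle: exactly the instability that the
    # standard's slope cap is meant to rule out.
    return hist

def run_incremental(alpha=5.0, v_ref=1.0, steps=8):
    """Incremental rule with a memory term: q <- proj(q - alpha*(v - v_ref)).
    A generic integrator-style form (not necessarily the paper's exact
    PGD-derived update); stable for any 0 < alpha < 2/X_SENS, with no
    slope cap needed."""
    q, hist = 0.0, []
    for _ in range(steps):
        v = V_EXT + X_SENS * q
        q = float(np.clip(q - alpha * (v - v_ref), -Q_MAX, Q_MAX))
        hist.append(round(q, 3))
    return hist                  # heads toward q* = -0.8, i.e. v -> 1.0

print("non-incremental:", run_non_incremental())
print("incremental:   ", run_incremental())
```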
Related papers
- Physics-Guided Graph Neural Networks for Real-time AC/DC Power Flow
Analysis [6.9065457480507995]
This letter proposes a physics-guided graph neural network (PG-GNN) for power flow analysis.
Case studies show that only the proposed method matches the AC model-based benchmark, while also beating it in computational efficiency by more than 10 times.
arXiv Detail & Related papers (2023-04-29T09:58:15Z)
- Optimal Design of Volt/VAR Control Rules of Inverters using Deep Learning [4.030910640265943]
To regulate voltage, the IEEE Standard 1547 recommends each DER inject reactive power according to piecewise-affine Volt/var control rules.
This task of optimal rule design (ORD) is challenging, as Volt/var rules introduce nonlinear dynamics and entail trade-offs between stability and steady-state voltage profiles.
Towards a more efficient solution, we reformulate ORD as a deep learning problem.
The idea is to design a DNN that emulates Volt/var dynamics.
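The unrolling can be pictured directly: each Volt/VAR iteration becomes one layer of a DNN whose shared weights are the rule parameters, so rule design reduces to training that network. The toy sketch below (NumPy) uses a hypothetical one-parameter non-incremental droop for brevity, whereas the papers unroll richer rule parameterizations, and finite differences stand in for backpropagation through the unroll:

```python
import numpy as np

V_EXT, X_SENS, Q_MAX, T = 1.04, 0.05, 1.0, 30

def unrolled_voltage(slope):
    """Forward pass of the emulator 'DNN': T identical layers, each one
    Volt/VAR iteration; the rule parameter (a single droop slope here)
    plays the role of the weights shared across layers."""
    q = 0.0
    for _ in range(T):
        v = V_EXT + X_SENS * q
        q = float(np.clip(-slope * (v - 1.0), -Q_MAX, Q_MAX))
    return V_EXT + X_SENS * q

def ord_loss(slope, v_target=1.025):
    """ORD objective: distance of the settled voltage from a target."""
    return (unrolled_voltage(slope) - v_target) ** 2

slope, lr, eps = 2.0, 2e4, 1e-4
for _ in range(300):   # 'training' = gradient descent on the unroll
    g = (ord_loss(slope + eps) - ord_loss(slope - eps)) / (2 * eps)
    slope = max(slope - lr * g, 0.0)
print(round(slope, 2), round(unrolled_voltage(slope), 4))  # ~12.0, ~1.025
```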
arXiv Detail & Related papers (2022-11-17T14:27:52Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
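The unsupervised ingredient is the loss: training minimizes the OPF objective plus constraint penalties directly, with no solver-generated labels. A schematic sketch follows (NumPy; a linear map stands in for the GNN, and the costs, limits, and penalty formulation are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([1.0, 2.0, 4.0])      # generation costs, illustrative
P_MAX = np.array([0.6, 0.8, 1.0])  # generator limits (p.u.)
LAM, MU = 50.0, 50.0               # penalty weights (hypothetical)

# Stand-in for the GNN: a linear map from demand to an allocation.
# The paper parameterizes this map with a graph neural network over
# the grid topology; the unsupervised loss below is the point.
W = rng.normal(0.1, 0.01, size=3)

def unsupervised_loss_grad(d):
    p = W * d                                   # predicted allocation
    imbalance = p.sum() - d                     # power-balance residual
    over = np.maximum(p - P_MAX, 0.0)           # limit violations
    under = np.maximum(-p, 0.0)
    loss = C @ p + LAM * imbalance**2 + MU * (over**2 + under**2).sum()
    dLdp = C + 2*LAM*imbalance + 2*MU*over - 2*MU*under
    return loss, dLdp * d                       # chain rule: dp/dW = d

for _ in range(2000):                           # no labeled OPF solutions
    d = rng.uniform(0.5, 2.0)                   # sample a demand scenario
    _, g = unsupervised_loss_grad(d)
    W -= 1e-3 * g
print(W)  # allocation favors cheap generators within limits and balance
```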
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- Actor-Critic based Improper Reinforcement Learning [61.430513757337486]
We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process.
We propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic scheme and a Natural Actor-Critic scheme.
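"Improper" means the learned policy may combine the base controllers rather than commit to one of them. A toy sketch of the policy-gradient flavor (NumPy; REINFORCE over softmax mixture weights on a hypothetical noisy-integrator environment, not the paper's setting or algorithm details):

```python
import numpy as np

rng = np.random.default_rng(1)
GAINS = [0.2, 0.9, 1.8]            # M = 3 fixed base controllers u = -g*x
theta = np.zeros(3)                # mixture logits (the learned parameters)

def rollout(g, T=30):
    """Episode return of base controller u = -g*x on a noisy integrator
    x' = x + u + w (a stand-in for the unknown MDP)."""
    x, ret = 3.0, 0.0
    for _ in range(T):
        x = x + (-g * x) + 0.1 * rng.normal()
        ret -= x**2                # cost: keep the state near zero
    return ret

baseline = 0.0
for _ in range(3000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    k = rng.choice(3, p=probs)     # pick a base controller to play
    R = rollout(GAINS[k])
    # REINFORCE on the mixture: grad log pi(k) = onehot(k) - probs
    grad_logp = -probs
    grad_logp[k] += 1.0
    theta += 0.01 * (R - baseline) * grad_logp
    baseline += 0.05 * (R - baseline)   # running average, variance reduction
print(np.round(np.exp(theta) / np.exp(theta).sum(), 3))  # mass on gain 0.9
```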
arXiv Detail & Related papers (2022-07-19T05:55:02Z)
- Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z)
- Fast Power Control Adaptation via Meta-Learning for Random Edge Graph Neural Networks [39.59987601426039]
This paper studies the higher-level problem of enabling fast adaptation of the power control policy to time-varying topologies.
We apply first-order meta-learning on data from multiple topologies with the aim of optimizing for a few-shot adaptation to new network configurations.
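A first-order meta-learner such as Reptile needs no second-order derivatives: adapt on a sampled task for a few steps, then move the shared initialization toward the adapted weights. A schematic sketch (NumPy; toy regression tasks stand in for power control over different topologies, and all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
W_CENTER = rng.normal(size=5)          # shared structure across 'topologies'

def sample_task():
    """A task = one network configuration, here a toy regression
    y = A @ w_task with w_task clustered around a common center."""
    w_task = W_CENTER + 0.1 * rng.normal(size=5)
    A = rng.normal(size=(20, 5))
    return A, A @ w_task

def adapt(w, A, y, steps=5, lr=0.02):
    """Few-shot inner loop: a handful of gradient steps on one task."""
    for _ in range(steps):
        w = w - lr * A.T @ (A @ w - y)
    return w

phi = np.zeros(5)                      # meta-initialization
for _ in range(1000):                  # Reptile: first-order meta-update
    A, y = sample_task()
    phi += 0.1 * (adapt(phi, A, y) - phi)  # move init toward adapted weights

# At deployment, a few inner steps from phi suffice on a new 'topology'.
print(np.linalg.norm(phi - W_CENTER))  # init has drifted to the task center
```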
arXiv Detail & Related papers (2021-05-02T12:43:10Z)
- Controlling Smart Inverters using Proxies: A Chance-Constrained DNN-based Approach [4.974932889340055]
Deep neural networks (DNNs) can learn optimal inverter schedules, but guaranteeing feasibility is largely elusive.
This work integrates DNN-based inverter policies into the optimal power flow (OPF) problem.
Numerical tests compare the DNN-based inverter control schemes with the optimal inverter setpoints in terms of optimality and feasibility.
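A common way to make such a scheme trainable is a smooth sample-based surrogate of the chance constraint: estimate the violation probability over scenarios with a sigmoid and penalize its excess over the tolerance. A schematic sketch (NumPy; a hypothetical single-setpoint formulation rather than the paper's exact construction, though the same surrogate loss could train a DNN policy):

```python
import numpy as np

rng = np.random.default_rng(3)
V0 = rng.normal(1.03, 0.01, size=500)   # uncertain uncontrolled voltages
X, EPS, TAU, LAM = 0.05, 0.10, 0.01, 100.0

def surrogate_loss(q):
    """Reactive-support cost plus a smooth chance-constraint penalty:
    Pr(v outside the 0.97-1.03 band) <= EPS, with the indicator replaced
    by a sigmoid and the probability by a sample average over scenarios."""
    v = V0 + X * q
    viol = np.abs(v - 1.0) - 0.03                 # > 0 means out of band
    prob = 1.0 / (1.0 + np.exp(-viol / TAU))      # soft indicator
    return q**2 + LAM * max(prob.mean() - EPS, 0.0)**2

q, h = 0.0, 1e-4
for _ in range(800):                              # numerical gradient descent
    g = (surrogate_loss(q + h) - surrogate_loss(q - h)) / (2 * h)
    q -= 0.002 * g
v = V0 + X * q
print(q, np.mean(np.abs(v - 1.0) > 0.03))         # empirical rate ~ EPS
```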
arXiv Detail & Related papers (2021-05-02T09:21:41Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
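For a scalar control-affine system xdot = f(x) + g(x)u, placing independent GPs on f and g induces a compound kernel k((x,u),(x',u')) = k_f(x,x') + u u' k_g(x,x'). A minimal regression sketch of that idea (NumPy; greatly simplified relative to the paper's min-norm controller setting, with illustrative lengthscales and dynamics):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def compound_kernel(x1, u1, x2, u2):
    """Modeling f and g with independent RBF GPs gives the compound
    kernel k((x,u),(x',u')) = k_f(x,x') + u*u' * k_g(x,x'), which bakes
    the control-affine structure into the prior."""
    return rbf(x1, x2) + np.outer(u1, u2) * rbf(x1, x2, ell=2.0)

# Training data from a toy system: f(x) = -x, g(x) = 1 + 0.5*sin(x)
x = rng.uniform(-2, 2, 40)
u = rng.uniform(-1, 1, 40)
xdot = -x + (1 + 0.5*np.sin(x)) * u + 0.01 * rng.normal(size=40)

K = compound_kernel(x, u, x, u) + 1e-4 * np.eye(40)
alpha = np.linalg.solve(K, xdot)              # GP posterior weights

xs = np.array([0.5, -1.0])                    # query state-input pairs
us = np.array([0.8, -0.3])
pred = compound_kernel(xs, us, x, u) @ alpha  # posterior mean of xdot
print(pred, -xs + (1 + 0.5*np.sin(xs)) * us)  # prediction vs ground truth
```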
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
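Equilibrium propagation trains an energy-based network with two relaxations, a free phase and a weakly nudged phase, and updates each weight from the contrast of dE/dW at the two equilibria. A minimal linear Hopfield-style sketch (NumPy; an illustrative toy, not a model of the paper's resistive hardware):

```python
import numpy as np

rng = np.random.default_rng(5)
DX, DH, DY, BETA, LR = 3, 4, 2, 0.5, 0.1
W1 = 0.1 * rng.normal(size=(DH, DX))      # input -> hidden coupling
W2 = 0.1 * rng.normal(size=(DY, DH))      # hidden -> output coupling
M_TRUE = 0.2 * rng.normal(size=(DY, DX))  # toy target map to learn

def relax(x, y=None, beta=0.0, iters=50):
    """Settle the energy E = 0.5(|h|^2+|o|^2) - h.W1x - o.W2h
    (plus beta*0.5|o-y|^2 when nudged) by fixed-point iteration."""
    h, o = np.zeros(DH), np.zeros(DY)
    for _ in range(iters):
        h = W1 @ x + W2.T @ o
        o = W2 @ h if beta == 0.0 else (W2 @ h + beta * y) / (1 + beta)
    return h, o

for _ in range(2000):
    x = rng.normal(size=DX)
    y = M_TRUE @ x
    h0, o0 = relax(x)                     # free phase
    hb, ob = relax(x, y, beta=BETA)       # weakly nudged phase
    # EqProp update: contrast dE/dW between the two equilibria
    W1 += (LR / BETA) * (np.outer(hb, x) - np.outer(h0, x))
    W2 += (LR / BETA) * (np.outer(ob, hb) - np.outer(o0, h0))

x = rng.normal(size=DX)
print(relax(x)[1], M_TRUE @ x)            # free prediction tracks target
```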
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI).
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
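Newton's iteration (Newton-Schulz) approximates the inverse square root (VV^T)^{-1/2} with matrix products only, which is what makes the orthogonalization cheap and differentiable. A simplified sketch of that step (NumPy; the published ONI adds further normalization details around this core):

```python
import numpy as np

def oni_orthogonalize(W, iters=8):
    """Orthogonalize the rows of W as (V V^T)^{-1/2} V via Newton-Schulz,
    avoiding an explicit eigendecomposition. Assumes W has full row rank."""
    V = W / np.sqrt(np.trace(W @ W.T))   # scale: eigenvalues of S in (0, 1]
    S = V @ V.T
    B = np.eye(W.shape[0])
    for _ in range(iters):               # B converges to S^{-1/2}
        B = 1.5 * B - 0.5 * (B @ B @ B) @ S
    return B @ V                         # rows are approximately orthonormal

W = np.random.default_rng(6).normal(size=(4, 16))
Q = oni_orthogonalize(W)
print(np.round(Q @ Q.T, 3))              # approximately the identity matrix
```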
arXiv Detail & Related papers (2020-04-02T10:14:27Z)