Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks
- URL: http://arxiv.org/abs/2103.14779v1
- Date: Sat, 27 Mar 2021 00:45:23 GMT
- Title: Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural
Networks
- Authors: Manish K. Singh, Vassilis Kekatos, and Georgios B. Giannakis
- Abstract summary: We propose a deep neural network (DNN) to predict the solutions of the AC optimal power flow (AC-OPF).
The proposed SI-DNN is compatible with a broad range of OPF solvers.
It can be seamlessly integrated into other learning-to-OPF schemes.
- Score: 52.32646357164739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To shift the computational burden from real-time to offline in delay-critical
power systems applications, recent works entertain the idea of using a deep
neural network (DNN) to predict the solutions of the AC optimal power flow
(AC-OPF) once presented load demands. As network topologies may change,
training this DNN in a sample-efficient manner becomes a necessity. To improve
data efficiency, this work utilizes the fact that OPF data are not simple training
labels, but constitute the solutions of a parametric optimization problem. We
thus advocate training a sensitivity-informed DNN (SI-DNN) to match not only
the OPF optimizers, but also their partial derivatives with respect to the OPF
parameters (loads). It is shown that the required Jacobian matrices do exist
under mild conditions, and can be readily computed from the related primal/dual
solutions. The proposed SI-DNN is compatible with a broad range of OPF solvers,
including a non-convex quadratically constrained quadratic program (QCQP), its
semidefinite program (SDP) relaxation, and MATPOWER; while SI-DNN can be
seamlessly integrated in other learning-to-OPF schemes. Numerical tests on
three benchmark power systems corroborate the advanced generalization and
constraint satisfaction capabilities for the OPF solutions predicted by an
SI-DNN over a conventionally trained DNN, especially in low-data setups.
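The core idea of the abstract, fitting a surrogate to both the OPF optimizers and their Jacobians with respect to the loads, can be illustrated with a minimal sketch. The snippet below is a hypothetical toy, not the authors' implementation: it uses a linear surrogate y = W x + b, for which the Jacobian dy/dx is exactly W, so the sensitivity term reduces to penalizing the mismatch between W and the OPF Jacobian J. The function names, shapes, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

def si_loss(W, b, loads, y_opf, J_opf, lam=0.1):
    """Sensitivity-informed loss for a linear surrogate y = W @ x + b.

    Combines a fit term on the OPF optimizers with a Jacobian-matching
    term; for a linear model the Jacobian dy/dx is simply W.
    """
    preds = loads @ W.T + b                  # surrogate predictions
    fit = np.mean((preds - y_opf) ** 2)      # match OPF optimizers
    sens = np.mean((W - J_opf) ** 2)         # match OPF sensitivities
    return fit + lam * sens

# Toy usage: a perfectly informed surrogate incurs zero loss.
rng = np.random.default_rng(0)
J = rng.normal(size=(2, 3))                  # stand-in for the OPF Jacobian
W, b = J.copy(), np.zeros(2)
x = rng.normal(size=(5, 3))                  # sampled load demands
y = x @ J.T                                  # stand-in for OPF solutions
print(si_loss(W, b, x, y, J))                # -> 0.0
```

In the paper's setting the surrogate is a DNN, so the Jacobian term would instead be computed by automatic differentiation of the network output with respect to its load inputs.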
Related papers
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Optimal Power Flow Based on Physical-Model-Integrated Neural Network
with Worth-Learning Data Generation [1.370633147306388]
We propose an OPF solver based on a physical-model-integrated neural network (NN) with worth-learning data generation.
We show that the proposed method leads to an over 50% reduction of constraint violations and optimality loss compared to conventional NN solvers.
arXiv Detail & Related papers (2023-01-10T03:06:08Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Controlling Smart Inverters using Proxies: A Chance-Constrained
DNN-based Approach [4.974932889340055]
Deep neural networks (DNNs) can learn optimal inverter schedules, but guaranteeing feasibility is largely elusive.
This work integrates DNN-based inverter policies into the optimal power flow (OPF) problem.
Numerical tests compare the DNN-based inverter control schemes with the optimal inverter setpoints in terms of optimality and feasibility.
arXiv Detail & Related papers (2021-05-02T09:21:41Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- DeepOPF: A Feasibility-Optimized Deep Neural Network Approach for AC
Optimal Power Flow Problems [25.791128241015684]
We develop a Deep Neural Network (DNN) approach, called DeepOPF, for solving AC-OPF problems in a fraction of the time used by conventional solvers.
We show that DeepOPF speeds up the computing time by up to two orders of magnitude as compared to a state-of-the-art solver.
arXiv Detail & Related papers (2020-07-02T10:26:46Z) - High-Fidelity Machine Learning Approximations of Large-Scale Optimal
Power Flow [49.2540510330407]
AC-OPF is a key building block in many power system applications.
Motivated by increased penetration of renewable sources, this paper explores deep learning to deliver efficient approximations to the AC-OPF.
arXiv Detail & Related papers (2020-06-29T20:22:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.