Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks
- URL: http://arxiv.org/abs/2006.11029v1
- Date: Fri, 19 Jun 2020 09:19:14 GMT
- Title: Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks
- Authors: Andreas Venzke, Guannan Qu, Steven Low, Spyros Chatzivasileiadis
- Abstract summary: We formulate mixed-integer linear programs to obtain worst-case guarantees for neural network predictions.
We show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods.
- Score: 1.8352113484137629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces for the first time a framework to obtain provable
worst-case guarantees for neural network performance, using learning for
optimal power flow (OPF) problems as a guiding example. Neural networks have
the potential to substantially reduce the computing time of OPF solutions.
However, the lack of guarantees for their worst-case performance remains a
major barrier for their adoption in practice. This work aims to remove this
barrier. We formulate mixed-integer linear programs to obtain worst-case
guarantees for neural network predictions related to (i) maximum constraint
violations, (ii) maximum distances between predicted and optimal decision
variables, and (iii) maximum sub-optimality. We demonstrate our methods on a
range of PGLib-OPF networks up to 300 buses. We show that the worst-case
guarantees can be up to one order of magnitude larger than the empirical lower
bounds calculated with conventional methods. More importantly, we show that the
worst-case predictions appear at the boundaries of the training input domain,
and we demonstrate how we can systematically reduce the worst-case guarantees
by training on a larger input domain than the domain they are evaluated on.
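
To make the formulation concrete, here is a minimal sketch of the kind of mixed-integer linear program involved: a trained ReLU network encoded exactly via the standard big-M formulation, with the worst-case violation of one output constraint over the input domain as the objective. The toy weights, the big-M constant, the PuLP/CBC toolchain, and the limit `p_max` are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch: big-M MILP encoding of a small trained ReLU network,
# maximising the violation of one linear output constraint over a box input
# domain. Weights, bounds, and p_max are toy placeholders.
import pulp

# Toy "trained" network: 2 inputs -> 3 hidden ReLU units -> 1 output.
W1 = [[0.6, -0.3], [0.2, 0.8], [-0.5, 0.4]]
b1 = [0.1, -0.2, 0.05]
W2 = [1.0, -0.7, 0.9]
b2 = 0.0
M = 100.0    # big-M constant; in practice derived from per-neuron bounds
p_max = 1.0  # illustrative output limit (e.g. a generation bound)

prob = pulp.LpProblem("worst_case_violation", pulp.LpMaximize)

# Inputs restricted to the (normalised) training input domain [0, 1]^2.
x = [pulp.LpVariable(f"x{i}", lowBound=0.0, upBound=1.0) for i in range(2)]

# Hidden layer h = ReLU(W1 x + b1), encoded exactly with one binary per unit.
h = []
for j in range(3):
    z = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
    hj = pulp.LpVariable(f"h{j}", lowBound=0.0)
    aj = pulp.LpVariable(f"a{j}", cat="Binary")
    prob += hj >= z                 # h >= z
    prob += hj <= z + M * (1 - aj)  # h <= z when the unit is active (a = 1)
    prob += hj <= M * aj            # h = 0 when the unit is inactive (a = 0)
    h.append(hj)

# Linear output layer; the objective is the worst-case constraint violation.
y = pulp.lpSum(W2[j] * h[j] for j in range(3)) + b2
prob += y - p_max

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("worst-case violation:", pulp.value(prob.objective))
print("attained at input:", [xi.value() for xi in x])
```

Replacing the objective with a distance to the optimal decision variables or with a sub-optimality gap yields the other two guarantees listed in the abstract.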
Related papers
- Optimization Proxies using Limited Labeled Data and Training Time -- A Semi-Supervised Bayesian Neural Network Approach [2.943640991628177]
Constrained optimization problems arise in various engineering system operations such as inventory management and electric power grids.
This work introduces a learning scheme using Bayesian Neural Networks (BNNs) to solve constrained optimization problems under limited labeled data and restricted model training times.
We show that the proposed learning method outperforms conventional BNN and deep neural network (DNN) architectures.
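
As a rough illustration of the proxy idea (not the paper's semi-supervised scheme), the sketch below uses Monte-Carlo dropout as a crude stand-in for a Bayesian neural network: the proxy predicts a solution to the optimization problem and reports the spread across stochastic forward passes as an uncertainty estimate. All names and sizes are assumptions.

```python
# Hedged stand-in: Monte-Carlo dropout as a crude approximation of a BNN
# optimisation proxy; NOT the paper's semi-supervised method.
import torch
import torch.nn as nn

class DropoutProxy(nn.Module):
    def __init__(self, n_in=4, n_out=2, width=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, width), nn.ReLU(), nn.Dropout(p),
            nn.Linear(width, width), nn.ReLU(), nn.Dropout(p),
            nn.Linear(width, n_out),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)  # predictive mean and spread
```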
arXiv Detail & Related papers (2024-10-04T02:10:20Z)
- Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks [9.771011198361865]
The reliable deployment of neural networks in control systems requires rigorous robustness guarantees.
In this paper, we obtain tight robustness certificates over convex representations of ReLU neural networks.
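
The paper's tight certificate is specific to its min-max convex representation; as a simpler, swapped-in illustration of what a robustness certificate computes, the sketch below uses interval bound propagation (IBP), a standard but looser bound, over a feed-forward ReLU network.

```python
# Swapped-in illustration: interval bound propagation (IBP) gives a looser
# but sound robustness certificate; it is NOT the paper's min-max method.
import numpy as np

def ibp_bounds(Ws, bs, x, eps):
    """Propagate the box [x - eps, x + eps] through affine + ReLU layers."""
    lo, hi = x - eps, x + eps
    for k, (W, b) in enumerate(zip(Ws, bs)):
        Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(Ws) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi
```

If the lower bound of the true-class margin stays positive over the whole box, the prediction is certified robust for that eps.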
arXiv Detail & Related papers (2023-10-07T21:07:45Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
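
For context, one unfolded ISTA layer is a gradient step followed by soft-thresholding; the paper studies a smooth variant of that nonlinearity. A minimal NumPy sketch of plain ISTA (names illustrative):

```python
# Minimal ISTA for sparse recovery: min_x ||A x - y||^2 + lam * ||x||_1.
# Each iteration corresponds to one layer of an unfolded ISTA network.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iters=100):
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```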
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Minimizing Worst-Case Violations of Neural Networks [0.0]
This paper introduces a neural network training procedure designed to achieve both a good average performance and minimum worst-case violations.
We demonstrate the proposed architecture on four different test systems ranging from 39 buses to 162 buses, for both AC-OPF and DC-OPF applications.
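
One plausible shape of such a procedure (an illustrative sketch, not the paper's exact algorithm): alternate ordinary regression steps with a search for the input that maximises constraint violations, and penalise that violation in the loss. The violation function, box domain, and gradient-ascent search below are assumptions.

```python
# Illustrative only: combine average loss with a penalty on constraint
# violations at adversarially chosen inputs (found here by projected
# gradient ascent, not by the paper's procedure).
import torch

def violation(model, x, p_max=1.0):
    return torch.relu(model(x) - p_max).sum(dim=-1)  # toy upper-limit violation

def worst_case_input(model, x0, steps=20, lr=0.1):
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        v = violation(model, x).sum()
        g, = torch.autograd.grad(v, x)
        x = (x + lr * g).clamp(0.0, 1.0).detach().requires_grad_(True)
    return x.detach()

def training_step(model, opt, x, y, mu=10.0):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)   # average performance
    x_adv = worst_case_input(model, x)                 # worst-case search
    loss = loss + mu * violation(model, x_adv).mean()  # violation penalty
    loss.backward()
    opt.step()
    return loss.item()
```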
arXiv Detail & Related papers (2022-12-21T11:20:12Z)
- Physics-Informed Neural Networks for AC Optimal Power Flow [0.0]
This paper introduces, for the first time, physics-informed neural networks to accurately estimate the AC-OPF result.
We show how physics-informed neural networks achieve higher accuracy and lower constraint violations than standard neural networks.
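
For reference, the physics such networks embed are the AC power-balance equations; a physics-informed loss adds their residuals at the predicted operating point to the usual prediction error (notation assumed, with theta_ij = theta_i - theta_j):

```latex
% AC power-balance equations at every bus i:
\begin{align}
  P_i &= V_i \sum_{j} V_j \bigl( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \bigr), \\
  Q_i &= V_i \sum_{j} V_j \bigl( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \bigr).
\end{align}
% Physics-informed loss: prediction error plus weighted balance residuals.
\begin{equation}
  \mathcal{L} \;=\; \lVert \hat{y} - y \rVert_2^2
  \;+\; \lambda \sum_i \bigl( \Delta P_i^2 + \Delta Q_i^2 \bigr).
\end{equation}
```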
arXiv Detail & Related papers (2021-10-06T11:44:59Z)
- Physics-Informed Neural Networks for Minimising Worst-Case Violations in DC Optimal Power Flow [0.0]
Physics-informed neural networks exploit the existing models of the underlying physical systems to generate higher accuracy results with fewer data.
Such approaches can help drastically reduce the computation time and generate a good estimate of computationally intensive processes in power systems.
Such neural networks can be applied in safety-critical applications in power systems and build a high level of trust among power system operators.
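
For reference, a generic DC-OPF that such networks learn to approximate (standard textbook form; notation assumed):

```latex
\begin{align}
  \min_{\mathbf{p}_g}\quad & \mathbf{c}^\top \mathbf{p}_g
     && \text{(generation cost)} \\
  \text{s.t.}\quad & \mathbf{1}^\top \mathbf{p}_g = \mathbf{1}^\top \mathbf{p}_d
     && \text{(power balance)} \\
  & -\mathbf{f}^{\max} \le \mathrm{PTDF}\,(\mathbf{p}_g - \mathbf{p}_d) \le \mathbf{f}^{\max}
     && \text{(line-flow limits)} \\
  & \mathbf{p}_g^{\min} \le \mathbf{p}_g \le \mathbf{p}_g^{\max}
     && \text{(generator limits)}
\end{align}
```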
arXiv Detail & Related papers (2021-06-28T10:45:22Z)
- Improved Branch and Bound for Neural Network Verification via Lagrangian Decomposition [161.09660864941603]
We improve the scalability of Branch and Bound (BaB) algorithms for formally proving input-output properties of neural networks.
We present a novel activation-based branching strategy and a BaB framework, named Branch and Dual Network Bound (BaDNB).
BaDNB outperforms previous complete verification systems by a large margin, cutting average verification times by factors up to 50 on adversarial properties.
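
A generic branch-and-bound skeleton for neural network verification looks as follows; the paper's contribution lies in a stronger Lagrangian-decomposition bound and a better branching rule than anything in this sketch, whose callbacks are deliberately left abstract.

```python
# Generic BaB skeleton for proving "network output > 0 on a domain".
# lower_bound(dom): certified lower bound on the output over dom;
# split(dom): partition dom (e.g. by fixing an unstable ReLU's phase);
# width(dom): domain size, used as the stopping criterion.
import heapq

def branch_and_bound(lower_bound, split, width, root, tol=1e-3):
    tie = 0  # tie-breaker so the heap never compares domains directly
    heap = [(lower_bound(root), tie, root)]
    while heap:
        lb, _, dom = heapq.heappop(heap)
        if lb > 0:
            return True   # smallest bound is positive: everything verified
        if width(dom) < tol:
            return False  # bound still <= 0 on a tiny domain: falsified
        for child in split(dom):  # branch and re-bound the pieces
            tie += 1
            heapq.heappush(heap, (lower_bound(child), tie, child))
    return True
```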
arXiv Detail & Related papers (2021-04-14T09:22:42Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to predict the solutions of the AC optimal power flow (AC-OPF).
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
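
A plausible reading of "sensitivity-informed" (an assumption on our part, sketched below): penalise not only the mismatch of the predicted OPF solution but also the mismatch of its Jacobian with respect to the load input, with target sensitivities supplied by the OPF solver.

```python
# Hedged sketch of a sensitivity-matching loss; names are illustrative.
import torch

def sidnn_loss(model, x, y_target, jac_target, beta=1.0):
    """x: (n_in,) load vector; y_target: (n_out,) OPF solution;
    jac_target: (n_out, n_in) solution sensitivities from the OPF solver."""
    pred_loss = torch.nn.functional.mse_loss(model(x), y_target)
    # Jacobian of the prediction w.r.t. the input, via autograd;
    # create_graph=True lets the sensitivity mismatch be backpropagated.
    jac = torch.autograd.functional.jacobian(model, x, create_graph=True)
    sens_loss = torch.nn.functional.mse_loss(jac, jac_target)
    return pred_loss + beta * sens_loss
```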
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
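
For reference, the CNML distribution that ACNML amortises is (standard definition; the per-query optimisation over each candidate label's parameters is what makes exact CNML intractable for deep networks):

```latex
\begin{equation}
  p_{\mathrm{CNML}}(y \mid x)
  = \frac{p_{\hat{\theta}_y}(y \mid x)}{\sum_{y'} p_{\hat{\theta}_{y'}}(y' \mid x)},
  \qquad
  \hat{\theta}_y = \arg\max_{\theta}\; p_\theta\bigl(\mathcal{D} \cup \{(x, y)\}\bigr).
\end{equation}
```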
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z) - Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks [65.24701908364383]
We show that a sufficient condition for a calibrated uncertainty on a ReLU network is "to be a bit Bayesian".
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations.
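
The mechanism behind this result is the standard probit approximation for a Gaussian (e.g. Laplace) posterior over the last layer, where mu(x) and sigma^2(x) denote the mean and variance of the logit under the approximate posterior (notation assumed); the logit variance, which grows away from the data, shrinks the effective logit and hence the confidence:

```latex
\begin{equation}
  p(y = 1 \mid x, \mathcal{D})
  \;\approx\; \sigma\!\left( \frac{\mu(x)}{\sqrt{1 + \tfrac{\pi}{8}\,\sigma^2(x)}} \right).
\end{equation}
```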
arXiv Detail & Related papers (2020-02-24T08:52:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.