Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks
- URL: http://arxiv.org/abs/2410.00796v1
- Date: Tue, 1 Oct 2024 15:38:09 GMT
- Title: Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks
- Authors: Nicolas Christianson, Wenqi Cui, Steven Low, Weiwei Yang, Baosen Zhang
- Abstract summary: Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent cascading failures and ensure reliable operation.
Checking the feasibility of all $N - k$ contingencies -- every possible simultaneous failure of $k$ grid components -- is computationally intractable for even small $k$.
In this work, we propose using input-convex neural networks (ICNNs) for contingency screening.
- Score: 3.490170135411753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent cascading failures and ensure reliable operation. However, checking the feasibility of all $N - k$ contingencies -- every possible simultaneous failure of $k$ grid components -- is computationally intractable for even small $k$, requiring system operators to resort to heuristic screening methods. Because of the increase in uncertainty and changes in system behaviors, heuristic lists might not include all relevant contingencies, generating false negatives in which unsafe scenarios are misclassified as safe. In this work, we propose to use input-convex neural networks (ICNNs) for contingency screening. We show that ICNN reliability can be determined by solving a convex optimization problem, and by scaling model weights using this problem as a differentiable optimization layer during training, we can learn an ICNN classifier that is both data-driven and has provably guaranteed reliability. Namely, our method can ensure a zero false negative rate. We empirically validate this methodology in a case study on the IEEE 39-bus test network, observing that it yields substantial (10-20x) speedups while having excellent classification accuracy.
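The abstract relies on the defining property of ICNNs: the output is convex in the input because hidden-to-hidden weights are constrained nonnegative and the activations are convex and nondecreasing, which is what makes the reliability check a convex optimization problem. Below is a minimal NumPy sketch of an ICNN forward pass illustrating that property; the layer sizes, parameter names, and weight initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def icnn_forward(x, Ws, As, bs):
    """Forward pass of an input-convex neural network (ICNN).

    z_{k+1} = relu(W_k z_k + A_k x + b_k), with W_k >= 0 elementwise.
    Because relu is convex and nondecreasing and each W_k is nonnegative,
    the scalar output is a convex function of the input x.
    """
    relu = lambda v: np.maximum(v, 0.0)
    z = relu(As[0] @ x + bs[0])              # first layer: direct input path only
    for W, A, b in zip(Ws, As[1:], bs[1:]):
        z = relu(np.abs(W) @ z + A @ x + b)  # abs() enforces the nonnegativity constraint
    return z.sum()                           # scalar output, convex in x

# Illustrative random parameters (sizes are arbitrary assumptions)
rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]  # input dim, two hidden dims, output dim
As = [rng.standard_normal((d, dims[0])) for d in dims[1:]]
bs = [rng.standard_normal(d) for d in dims[1:]]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(1, len(dims) - 1)]

# Convexity check along a random chord: f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
x, y = rng.standard_normal(4), rng.standard_normal(4)
lam = 0.3
lhs = icnn_forward(lam * x + (1 - lam) * y, Ws, As, bs)
rhs = lam * icnn_forward(x, Ws, As, bs) + (1 - lam) * icnn_forward(y, Ws, As, bs)
assert lhs <= rhs + 1e-9
```

Convexity of the output is what lets the paper pose "is this classifier reliable?" as a tractable convex program; the weight scaling learned through the differentiable optimization layer is not sketched here.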
Related papers
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
arXiv Detail & Related papers (2024-06-28T03:10:36Z) - Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z) - Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z) - Adversarial Robustness Certification for Bayesian Neural Networks [22.71265211510824]
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
Our framework is based on weight sampling, integration, and bound propagation techniques, and can be applied to BNNs with a large number of parameters.
arXiv Detail & Related papers (2023-06-23T16:58:25Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
arXiv Detail & Related papers (2022-05-20T13:07:36Z) - Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated in other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z) - PAC Confidence Predictions for Deep Neural Network Classifiers [28.61937254015157]
A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need to provide rigorous ways to quantify their uncertainty.
We propose an algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees.
arXiv Detail & Related papers (2020-11-02T04:09:17Z) - Probabilistic Safety for Bayesian Neural Networks [22.71265211510824]
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
In particular, we evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks.
We apply our methods to BNNs trained on an airborne collision avoidance task, empirically showing that our approach allows one to certify the probabilistic safety of BNNs with millions of parameters.
arXiv Detail & Related papers (2020-04-21T20:25:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.