Revisiting Differential Verification: Equivalence Verification with Confidence
- URL: http://arxiv.org/abs/2410.20207v2
- Date: Wed, 29 Jan 2025 16:21:27 GMT
- Title: Revisiting Differential Verification: Equivalence Verification with Confidence
- Authors: Samuel Teuber, Philipp Kern, Marvin Janzen, Bernhard Beckert
- Abstract summary: When validated neural networks (NNs) are pruned (and retrained) before deployment, it is desirable to prove that the new NN behaves equivalently to the original NN.
This paper revisits the idea of differential verification which performs reasoning on differences between NNs.
- Score: 0.6562256987706128
- Abstract: When validated neural networks (NNs) are pruned (and retrained) before deployment, it is desirable to prove that the new NN behaves equivalently to the (original) reference NN. To this end, our paper revisits the idea of differential verification, which performs reasoning on differences between NNs: On the one hand, our paper proposes a novel abstract domain for differential verification admitting more efficient reasoning about equivalence. On the other hand, we investigate empirically and theoretically which equivalence properties are (not) efficiently solved using differential reasoning. Based on the gained insights, and following a recent line of work on confidence-based verification, we propose a novel equivalence property that is amenable to differential verification while providing guarantees for large parts of the input space instead of small-scale guarantees constructed w.r.t. predetermined input points. We implement our approach in a new tool called VeryDiff and perform an extensive evaluation on numerous old and new benchmark families, including new pruned NNs for particle jet classification in the context of CERN's LHC, where we observe median speedups of more than 300x over the state-of-the-art verifier α,β-CROWN.
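To make the contrast concrete, the following is a minimal, illustrative Python/NumPy sketch of the naive baseline that differential verification improves on: each network is bounded in isolation with interval bound propagation and the two bounds are combined into a bound on the output gap. It is not the abstract domain or the algorithm implemented in VeryDiff; all function names and the toy networks are assumptions made for illustration.

```python
# Minimal illustrative sketch (NOT the VeryDiff algorithm or its abstract domain):
# bound each ReLU network separately with interval bound propagation (IBP) and
# then bound the output gap. Differential verification instead propagates bounds
# on the *difference* between the two networks, which is typically much tighter.
import numpy as np

def ibp_forward(weights, biases, lo, up):
    """Propagate the input box [lo, up] through a ReLU network given as lists
    of weight matrices and bias vectors; returns output lower/upper bounds."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = Wp @ lo + Wn @ up + b
        new_up = Wp @ up + Wn @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_up = np.maximum(new_lo, 0.0), np.maximum(new_up, 0.0)
        lo, up = new_lo, new_up
    return lo, up

def naive_equivalence_bound(f, g, lo, up):
    """Sound but loose upper bound on max_x |f(x) - g(x)| over the box [lo, up]."""
    f_lo, f_up = ibp_forward(*f, lo, up)
    g_lo, g_up = ibp_forward(*g, lo, up)
    return np.maximum(f_up - g_lo, g_up - f_lo).max()

# Toy example: a 2-2-1 reference network and a "pruned" copy with one weight zeroed.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)
W1_pruned = W1.copy()
W1_pruned[0, 1] = 0.0
f = ([W1, W2], [b1, b2])
g = ([W1_pruned, W2], [b1, b2])
gap = naive_equivalence_bound(f, g, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(f"epsilon-equivalence holds on the box for any epsilon >= {gap:.4f}")
```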
Related papers
- Testing Neural Network Verifiers: A Soundness Benchmark with Hidden Counterexamples [52.564617070814485]
We propose to develop a soundness benchmark for NN verification.
Our benchmark contains instances with deliberately inserted counterexamples.
Our benchmark successfully identifies bugs in state-of-the-art NN verifiers, as well as synthetic bugs.
arXiv Detail & Related papers (2024-12-04T09:24:33Z) - What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective, i.e., employing different common corruptions on the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z) - Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for NNCS verification.
Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z) - Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have demonstrated promising performance on face and image recognition recently.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
arXiv Detail & Related papers (2022-09-15T11:50:43Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - iDECODe: In-distribution Equivariance for Conformal Out-of-distribution
Detection [24.518698391381204]
Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
arXiv Detail & Related papers (2022-01-07T05:21:40Z) - Geometric Path Enumeration for Equivalence Verification of Neural
- Geometric Path Enumeration for Equivalence Verification of Neural Networks [2.007262412327553]
We focus on the formal verification problem of NN equivalence which aims to prove that two NNs show equivalent behavior.
We show a theoretical result by proving that the epsilon-equivalence problem is coNP-complete.
Finally, we implement the extended algorithm for equivalence verification and evaluate optimizations necessary for its practical use.
arXiv Detail & Related papers (2021-12-13T11:56:08Z) - Improving the Certified Robustness of Neural Networks via Consistency
- Improving the Certified Robustness of Neural Networks via Consistency Regularization [25.42238710803711]
A range of defense methods have been proposed to improve the robustness of neural networks on adversarial examples.
Most of these provable defense methods treat all examples equally during the training process.
In this paper, we explore the inconsistency caused by misclassified examples and add a novel consistency regularization term to make better use of them.
arXiv Detail & Related papers (2020-12-24T05:00:50Z)