Revisiting Differential Verification: Equivalence Verification with Confidence
- URL: http://arxiv.org/abs/2410.20207v1
- Date: Sat, 26 Oct 2024 15:53:25 GMT
- Title: Revisiting Differential Verification: Equivalence Verification with Confidence
- Authors: Samuel Teuber, Philipp Kern, Marvin Janzen, Bernhard Beckert
- Abstract summary: When validated neural networks (NNs) are pruned (and retrained) before deployment, it is desirable to prove that the new NN behaves equivalently to the original NN.
This paper revisits the idea of differential verification which performs reasoning on differences between NNs.
- Score: 0.6562256987706128
- License:
- Abstract: When validated neural networks (NNs) are pruned (and retrained) before deployment, it is desirable to prove that the new NN behaves equivalently to the (original) reference NN. To this end, our paper revisits the idea of differential verification which performs reasoning on differences between NNs: On the one hand, our paper proposes a novel abstract domain for differential verification admitting more efficient reasoning about equivalence. On the other hand, we investigate empirically and theoretically which equivalence properties are (not) efficiently solved using differential reasoning. Based on the gained insights, and following a recent line of work on confidence-based verification, we propose a novel equivalence property that is amenable to differential verification while providing guarantees for large parts of the input space instead of small-scale guarantees constructed w.r.t. predetermined input points. We implement our approach in a new tool called VeryDiff and perform an extensive evaluation on numerous old and new benchmark families, including new pruned NNs for particle jet classification in the context of CERN's LHC, where we observe median speedups >300x over the state-of-the-art verifier α,β-CROWN.
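To make the idea of differential reasoning more concrete, below is a minimal, illustrative sketch (not VeryDiff's actual abstract domain; the function names and the plain interval arithmetic are assumptions for illustration) of how difference bounds between a reference NN and a pruned/retrained NN with the same architecture can be propagated layer by layer, rather than bounding each network in isolation and subtracting:

```python
import numpy as np

def affine_interval(W, b, lo, hi):
    """Interval bounds for W @ x + b when x ranges over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def diff_bounds(layers_ref, layers_new, x_lo, x_hi):
    """Bound f_new(x) - f_ref(x) over the input box [x_lo, x_hi].

    layers_*: lists of (W, b) pairs; both networks must share the same
    architecture (hidden layers use ReLU). Uses the identity
      z_new - z_ref = (W_new - W_ref) a_ref + W_new (a_new - a_ref) + (b_new - b_ref)
    so that small weight differences directly yield small difference bounds.
    """
    a_lo, a_hi = x_lo, x_hi                # bounds on reference activations
    d_lo = d_hi = np.zeros_like(x_lo)      # identical inputs => zero difference
    for i, ((W_r, b_r), (W_n, b_n)) in enumerate(zip(layers_ref, layers_new)):
        # Pre-activation difference: weight/bias drift plus propagated activation drift.
        t1_lo, t1_hi = affine_interval(W_n - W_r, b_n - b_r, a_lo, a_hi)
        t2_lo, t2_hi = affine_interval(W_n, np.zeros_like(b_n), d_lo, d_hi)
        d_lo, d_hi = t1_lo + t2_lo, t1_hi + t2_hi
        # Pre-activation bounds of the reference network.
        a_lo, a_hi = affine_interval(W_r, b_r, a_lo, a_hi)
        if i < len(layers_ref) - 1:
            # ReLU is monotone and 1-Lipschitz, so the activation difference
            # stays within [min(d_lo, 0), max(d_hi, 0)].
            d_lo, d_hi = np.minimum(d_lo, 0.0), np.maximum(d_hi, 0.0)
            a_lo, a_hi = np.maximum(a_lo, 0.0), np.maximum(a_hi, 0.0)
    return d_lo, d_hi                      # bounds on the output difference
```

If the resulting output-difference interval lies within [-epsilon, epsilon], epsilon-equivalence holds on the whole input box; otherwise the check is inconclusive and would typically be refined, e.g. by splitting the input box or by switching to a tighter abstract domain such as the one proposed in the paper.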
Related papers
- What If the Input is Expanded in OOD Detection? [77.37433624869857]
Out-of-distribution (OOD) detection aims to identify OOD inputs from unknown classes.
Various scoring functions have been proposed to distinguish OOD inputs from in-distribution (ID) data.
We introduce a novel perspective: applying different common corruptions to the input space.
arXiv Detail & Related papers (2024-10-24T06:47:28Z)
- NLP Verification: Towards a General Methodology for Certifying Robustness [9.897538432223714]
We discuss the technical challenge of semantic generalisability of verified subspaces.
We propose a general methodology to analyse the effect of the embedding gap.
We propose falsifiability of semantic subspaces as another fundamental metric to be reported as part of the NLP verification pipeline.
arXiv Detail & Related papers (2024-03-15T09:43:52Z)
- Provably Safe Neural Network Controllers via Differential Dynamic Logic [2.416907802598482]
We present the first general approach that allows reusing control theory results for NNCS verification.
Based on provably safe control envelopes in dL, we derive specifications for the NN, which are then proven via NN verification.
We show that a proof of the NN adhering to the specification is mirrored by a dL proof on the infinite-time safety of the NNCS.
arXiv Detail & Related papers (2024-02-16T16:15:25Z)
- CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks [5.19656787424626]
Uncertainty estimation is increasingly attractive for improving the reliability of neural networks.
We present novel credal-set interval neural networks (CreINNs) designed for classification tasks.
arXiv Detail & Related papers (2024-01-10T10:04:49Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method called VPN, equipped with BaB for global convergence guarantees.
arXiv Detail & Related papers (2022-09-15T11:50:43Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- iDECODe: In-distribution Equivariance for Conformal Out-of-distribution Detection [24.518698391381204]
Machine learning methods such as deep neural networks (DNNs) often generate incorrect predictions with high confidence.
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
We demonstrate the efficacy of iDECODe by experiments on image and audio datasets, obtaining state-of-the-art results.
arXiv Detail & Related papers (2022-01-07T05:21:40Z)
- Geometric Path Enumeration for Equivalence Verification of Neural Networks [2.007262412327553]
We focus on the formal verification problem of NN equivalence which aims to prove that two NNs show equivalent behavior.
We prove that the epsilon-equivalence problem is coNP-complete (a common formalisation of this property is sketched after this list).
We then implement the extended algorithm for equivalence verification and evaluate optimizations necessary for its practical use.
arXiv Detail & Related papers (2021-12-13T11:56:08Z)
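For context on the equivalence notions referenced throughout this list, the following sketch states two properties commonly used in this line of work; the exact norm and notation vary between papers, so treat this as an assumed, illustrative formalisation rather than a definition taken from any of the cited abstracts.

```latex
% epsilon-equivalence of a reference network f_ref and a modified network f_new
% over an input region X (a property of this shape underlies the
% coNP-completeness result mentioned above):
\[ \forall x \in X:\quad \| f_{\mathrm{new}}(x) - f_{\mathrm{ref}}(x) \| \le \varepsilon \]
% Top-1 (classification) equivalence over X:
\[ \forall x \in X:\quad \arg\max\nolimits_i f_{\mathrm{new}}(x)_i \;=\; \arg\max\nolimits_i f_{\mathrm{ref}}(x)_i \]
```

Differential verification, as revisited in the main paper above, bounds f_new(x) - f_ref(x) directly in order to decide such relational properties more efficiently than analyzing the two networks separately.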
This list is automatically generated from the titles and abstracts of the papers on this site.