Proof Minimization in Neural Network Verification
- URL: http://arxiv.org/abs/2511.08198v1
- Date: Wed, 12 Nov 2025 01:46:15 GMT
- Title: Proof Minimization in Neural Network Verification
- Authors: Omri Isac, Idan Refaeli, Haoze Wu, Clark Barrett, Guy Katz
- Abstract summary: DNN verifiers are complex tools that might contain bugs, which could compromise their soundness and undermine the reliability of the verification process. We present algorithms that remove facts which were learned during the verification process but are unnecessary for the proof itself. Our results show that our best-performing algorithm reduces proof size by 37%-82% and proof checking time by 30%-88%, while introducing a runtime overhead of 7%-20% to the verification process itself.
- Score: 3.2006701823251578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread adoption of deep neural networks (DNNs) requires efficient techniques for verifying their safety. DNN verifiers are complex tools, which might contain bugs that could compromise their soundness and undermine the reliability of the verification process. This concern can be mitigated using proofs: artifacts that are checkable by an external and reliable proof checker, and which attest to the correctness of the verification process. However, such proofs tend to be extremely large, limiting their use in many scenarios. In this work, we address this problem by minimizing proofs of unsatisfiability produced by DNN verifiers. We present algorithms that remove facts which were learned during the verification process, but which are unnecessary for the proof itself. Conceptually, our method analyzes the dependencies among facts used to deduce UNSAT, and removes facts that did not contribute. We then further minimize the proof by eliminating remaining unnecessary dependencies, using two alternative procedures. We implemented our algorithms on top of a proof producing DNN verifier, and evaluated them across several benchmarks. Our results show that our best-performing algorithm reduces proof size by 37%-82% and proof checking time by 30%-88%, while introducing a runtime overhead of 7%-20% to the verification process itself.
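To make the dependency-analysis step concrete, here is a minimal sketch (not the authors' implementation, and independent of the verifier's actual proof format) of pruning an UNSAT proof down to the facts its conclusion transitively depends on. The `Fact` structure, its `premises` field, and `minimize_proof` are illustrative assumptions.

```python
# Minimal sketch of dependency-based proof minimization (illustrative only):
# keep exactly the facts that the UNSAT conclusion transitively depends on,
# and drop every fact that was learned during search but never used.
from dataclasses import dataclass, field


@dataclass
class Fact:
    fact_id: int
    premises: list = field(default_factory=list)  # ids of facts used to derive this one


def minimize_proof(facts, unsat_id):
    """Return only the facts reachable backwards from the UNSAT conclusion."""
    needed = set()
    stack = [unsat_id]
    while stack:
        fid = stack.pop()
        if fid in needed:
            continue
        needed.add(fid)
        stack.extend(facts[fid].premises)  # follow dependencies backwards
    return {fid: f for fid, f in facts.items() if fid in needed}


if __name__ == "__main__":
    # Facts 1-3 feed the UNSAT conclusion (id 4); fact 99 was learned but never used.
    proof = {
        1: Fact(1),
        2: Fact(2),
        3: Fact(3, premises=[1, 2]),
        4: Fact(4, premises=[3]),    # the UNSAT conclusion
        99: Fact(99, premises=[1]),  # unnecessary for the proof; gets removed
    }
    print(sorted(minimize_proof(proof, unsat_id=4)))  # -> [1, 2, 3, 4]
```

A real proof of UNSAT produced by a DNN verifier is structured over the case splits of the search, so the analogous pass walks that structure; the paper's further stage of eliminating remaining unnecessary dependencies is not attempted in this sketch.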
Related papers
- SoundnessBench: A Soundness Benchmark for Neural Network Verifiers [45.86647998712757]
We develop a new benchmark for NN verification, named "SoundnessBench", specifically for testing the soundness of NN verifiers. SoundnessBench consists of instances with deliberately inserted counterexamples that are hidden from adversarial attacks. We demonstrate that our training effectively produces hidden counterexamples and that SoundnessBench successfully identifies bugs in state-of-the-art NN verifiers.
arXiv Detail & Related papers (2024-12-04T09:24:33Z) - FIRE: Fact-checking with Iterative Retrieval and Verification [63.67320352038525]
FIRE is a novel framework that integrates evidence retrieval and claim verification in an iterative manner. It achieves slightly better performance while reducing large language model (LLM) costs by an average of 7.6 times and search costs by 16.5 times. These results indicate that FIRE holds promise for application in large-scale fact-checking operations.
arXiv Detail & Related papers (2024-10-17T06:44:18Z) - A Certified Proof Checker for Deep Neural Network Verification in Imandra [0.9895793818721335]
We present an alternative implementation of Marabou's certificate checking in Imandra. It gives stronger independent guarantees for Marabou and opens the way for wider adoption of DNN verifiers in interactive theorem proving.
arXiv Detail & Related papers (2024-05-17T08:16:32Z) - Towards a Certified Proof Checker for Deep Neural Network Verification [1.0485739694839669]
We present a novel implementation of a proof checker for DNN verification.
It improves on existing implementations by offering numerical stability and greater verifiability.
arXiv Detail & Related papers (2023-07-12T16:53:32Z) - The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
arXiv Detail & Related papers (2022-09-15T11:50:43Z) - Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks [9.85360493553261]
Deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their applications in safety-critical domains.
We present a novel abstraction-refinement approach for scalable and exact DNN verification.
arXiv Detail & Related papers (2022-07-02T07:04:20Z) - Neural Network Verification with Proof Production [7.898605407936655]
We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma (the lemma's standard feasibility form is sketched after this list).
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
arXiv Detail & Related papers (2022-06-01T14:14:37Z) - Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present NLProofS (Natural Language Proof Search), a novel stepwise method.
NLProofS learns to generate relevant proof steps conditioned on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z) - Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification [151.62491805851107]
We develop $\beta$-CROWN, a bound-propagation-based verifier that can fully encode per-neuron splits.
$\beta$-CROWN is close to three orders of magnitude faster than LP-based BaB methods for robustness verification.
By terminating BaB early, our method can also be used for incomplete verification.
arXiv Detail & Related papers (2021-03-11T11:56:54Z) - Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
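For context on the Farkas'-lemma-based proof production mentioned in the list above, this is the standard feasibility form of the lemma, which underlies UNSAT certificates for systems of linear constraints (the cited paper adapts it to the Simplex-based verifier; see that paper for the exact construction):

```latex
% Farkas' lemma (feasibility form): for A in R^{m x n} and b in R^m,
% exactly one of the two alternatives holds.
\exists\, x \in \mathbb{R}^{n}:\ Ax \le b
\qquad \text{or} \qquad
\exists\, y \in \mathbb{R}^{m}:\ y \ge 0,\ \ y^{\top}A = 0,\ \ y^{\top}b < 0.
```

In the second case, the vector $y$ is a small, independently checkable certificate that the constraint system is unsatisfiable.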
This list is automatically generated from the titles and abstracts of the papers on this site.