Abstraction-Based Proof Production in Formal Verification of Neural Networks
- URL: http://arxiv.org/abs/2506.09455v1
- Date: Wed, 11 Jun 2025 07:00:09 GMT
- Title: Abstraction-Based Proof Production in Formal Verification of Neural Networks
- Authors: Yizhak Yisrael Elboher, Omri Isac, Guy Katz, Tobias Ladner, Haoze Wu,
- Abstract summary: We introduce a novel framework for proof-producing abstraction-based verification. This work aims to enable scalable and trustworthy verification by supporting common abstraction techniques.
- Score: 0.7048747239308888
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern verification tools for deep neural networks (DNNs) increasingly rely on abstraction to scale to realistic architectures. In parallel, proof production is becoming a critical requirement for increasing the reliability of DNN verification results. However, current proof-producing verifiers do not support abstraction-based reasoning, creating a gap between scalability and provable guarantees. We address this gap by introducing a novel framework for proof-producing abstraction-based DNN verification. Our approach modularly separates the verification task into two components: (i) proving the correctness of an abstract network, and (ii) proving the soundness of the abstraction with respect to the original DNN. The former can be handled by existing proof-producing verifiers, whereas we propose the first method for generating formal proofs for the latter. This preliminary work aims to enable scalable and trustworthy verification by supporting common abstraction techniques within a formal proof framework.
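The soundness obligation in component (ii) can be pictured with intervals: an abstraction is sound if, for every input in the verified region, the abstract network's output bounds contain the original network's output. The sketch below is a hypothetical illustration of that containment property on a toy one-layer network, not the paper's actual proof format; the network, weights, and region are invented for the example.

```python
# Illustrative sketch (hypothetical, not the paper's proof format):
# an abstraction is sound if the abstract output interval contains the
# original network's output for every input in the region.

def affine(w, b, x):
    """y = W x + b for a single layer (lists of lists / lists)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(v):
    return [max(0.0, vi) for vi in v]

def interval_affine(w, b, lo, hi):
    """Tightest interval image of an affine layer over the box [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bi in zip(w, b):
        l = bi + sum(wi * (lo[j] if wi >= 0 else hi[j]) for j, wi in enumerate(row))
        u = bi + sum(wi * (hi[j] if wi >= 0 else lo[j]) for j, wi in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

# Toy original network: one 2-in / 2-out affine layer followed by ReLU.
W = [[1.0, -2.0], [0.5, 1.0]]
b = [0.0, -1.0]
lo, hi = [-1.0, -1.0], [1.0, 1.0]            # input region

# Interval abstraction of the network over the region.
abs_lo, abs_hi = interval_affine(W, b, lo, hi)
abs_lo, abs_hi = relu(abs_lo), relu(abs_hi)  # ReLU is monotone

# Sanity check of soundness on sampled concrete inputs: each original
# output must fall inside the abstract interval.
for x in ([-1.0, 1.0], [0.3, -0.7], [1.0, 1.0]):
    y = relu(affine(W, b, x))
    assert all(l <= yi <= u for l, yi, u in zip(abs_lo, y, abs_hi))
```

A formal proof would establish the containment for all inputs in the region rather than sampled ones; the point of the sketch is only that the soundness claim is a separate, checkable statement from the correctness of the abstract network itself.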
Related papers
- ProofNet++: A Neuro-Symbolic System for Formal Proof Verification with Self-Correction [0.0]
We propose ProofNet++, a neuro-symbolic framework that enhances automated theorem proving. We show that ProofNet++ significantly improves proof accuracy, correctness, and formal verifiability over prior models.
arXiv Detail & Related papers (2025-05-30T05:44:34Z) - Advancing Neural Network Verification through Hierarchical Safety Abstract Interpretation [52.626086874715284]
We introduce a novel problem formulation called Abstract DNN-Verification, which verifies a hierarchical structure of unsafe outputs. By leveraging abstract interpretation and reasoning about output reachable sets, our approach enables assessing multiple safety levels during the formal verification process. Our contributions include a theoretical exploration of the relationship between our novel abstract safety formulation and existing approaches.
arXiv Detail & Related papers (2025-05-08T13:29:46Z) - Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof. Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z) - Towards a Certified Proof Checker for Deep Neural Network Verification [1.0485739694839669]
We present a novel implementation of a proof checker for DNN verification.
It improves on existing implementations by offering numerical stability and greater verifiability.
arXiv Detail & Related papers (2023-07-12T16:53:32Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
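The interval-bound-propagation idea underlying this line of work can be sketched briefly. The example below is hypothetical and not taken from the paper: the network, the toy quantization scheme, and all numbers are invented to show how interval bounds through a quantized layer yield a worst-case logit margin, which is exactly the quantity a certification check inspects.

```python
# Hypothetical IBP sketch for a quantized layer; the quantization scheme
# and all values are illustrative, not from the QA-IBP paper.

def quantize(w, scale=0.25):
    """Toy uniform quantization: snap each weight to a multiple of `scale`."""
    return [[round(wi / scale) * scale for wi in row] for row in w]

def ibp_affine(w, b, lo, hi):
    """Interval image of W x + b over the input box [lo, hi]."""
    bounds = []
    for row, bi in zip(w, b):
        l = bi + sum(wi * (lo[j] if wi >= 0 else hi[j]) for j, wi in enumerate(row))
        u = bi + sum(wi * (hi[j] if wi >= 0 else lo[j]) for j, wi in enumerate(row))
        bounds.append((l, u))
    return bounds

W = [[1.0, 0.4], [-0.6, 0.2]]   # float logits layer, two classes
b = [0.5, -0.5]
Wq = quantize(W)                # weights after quantization

# Robustness region: an L-infinity ball of radius 0.1 around x0 = [1, 0].
lo, hi = [0.9, -0.1], [1.1, 0.1]

(b0_lo, b0_hi), (b1_lo, b1_hi) = ibp_affine(Wq, b, lo, hi)

# Certified if, in the worst case, class 0's logit still beats class 1's.
margin = b0_lo - b1_hi
print(margin > 0)   # robustness of the quantized layer is certified
```

Verifying the float network and then quantizing gives no such guarantee, which is the failure mode the paper's quantization-aware training targets: the bounds must be computed on the quantized weights themselves.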
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have demonstrated promising performance on face and image recognition recently.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
arXiv Detail & Related papers (2022-09-15T11:50:43Z) - Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks [9.85360493553261]
Deep neural networks (DNNs) have been increasingly deployed in practice, but the lack of robustness hinders their application in safety-critical domains.
We present a novel abstraction-refinement approach for scalable and exact DNN verification.
arXiv Detail & Related papers (2022-07-02T07:04:20Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z) - Neural Network Verification with Proof Production [7.898605407936655]
We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma.
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
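The appeal of Farkas'-lemma-based proof production is that the certificate is cheap to check: if the linear constraints A x <= b are infeasible, the lemma guarantees a witness y >= 0 with y^T A = 0 and y^T b < 0, and validating such a witness is simple arithmetic. The sketch below is an illustrative checker with invented numbers, not the cited verifier's actual proof format.

```python
# Hedged sketch of a Farkas-style certificate check (illustrative numbers,
# not the cited verifier's format): a verifier emits the multipliers y as
# a proof of infeasibility, and an independent checker validates them.

def check_farkas(A, b, y):
    """Return True iff y witnesses infeasibility of A x <= b."""
    assert all(yi >= 0 for yi in y), "multipliers must be non-negative"
    # y^T A must vanish so that combining the rows eliminates x entirely.
    yTA = [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]
    # The combined right-hand side must be negative: 0 <= y^T b < 0 is absurd.
    yTb = sum(yi * bi for yi, bi in zip(y, b))
    return all(c == 0 for c in yTA) and yTb < 0

# x <= 1 together with -x <= -2 (i.e. x >= 2) is clearly infeasible.
A = [[1.0], [-1.0]]
b = [1.0, -2.0]
y = [1.0, 1.0]   # witness: summing the rows gives 0 * x <= -1

print(check_farkas(A, b, y))  # → True
```

Because the check is a few dot products, the trusted base shrinks from the whole Simplex-based verifier to this small routine, which is the core of the proof-production argument.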
arXiv Detail & Related papers (2022-06-01T14:14:37Z) - Shared Certificates for Neural Network Verification [8.777291205946444]
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation.
This process is repeated from scratch independently for each input.
We introduce a new method for reducing this verification cost without losing precision.
arXiv Detail & Related papers (2021-09-01T16:59:09Z) - Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs enough tests until soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.