Towards a Certified Proof Checker for Deep Neural Network Verification
- URL: http://arxiv.org/abs/2307.06299v2
- Date: Tue, 13 Feb 2024 15:51:50 GMT
- Title: Towards a Certified Proof Checker for Deep Neural Network Verification
- Authors: Remi Desmartin, Omri Isac, Grant Passmore, Kathrin Stark, Guy Katz and
Ekaterina Komendantskaya
- Abstract summary: We present a novel implementation of a proof checker for DNN verification.
It improves on existing implementations by offering numerical stability and greater verifiability.
- Score: 1.0485739694839669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in deep neural networks (DNNs) have led to their adoption
in safety-critical systems, which in turn has heightened the need for
guaranteeing their safety. These safety properties of DNNs can be proven using
tools developed by the verification community. However, these tools are
themselves prone to implementation bugs and numerical stability problems, which
make their reliability questionable. To overcome this, some verifiers produce
proofs of their results which can be checked by a trusted checker. In this
work, we present a novel implementation of a proof checker for DNN
verification. It improves on existing implementations by offering numerical
stability and greater verifiability. To achieve this, we leverage two key
capabilities of Imandra, an industrial theorem prover: its support of infinite
precision real arithmetic and its formal verification infrastructure. So far,
we have implemented a proof checker in Imandra, specified its correctness
properties and started to verify the checker's compliance with them. Our
ongoing work focuses on completing the formal verification of the checker and
further optimizing its performance.
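The core step of such a proof checker is verifying that a claimed contradiction really follows from a set of linear constraints, typically via a Farkas certificate: a nonnegative combination of the constraints that sums to an impossibility. Below is a minimal, hedged OCaml sketch of that check over exact rationals, using the Zarith library's Q module in place of Imandra's infinite-precision reals; the type and function names are our own illustration, not the authors' code.

```ocaml
(* A minimal, hedged sketch of a Farkas-style proof-check step over
   exact rationals (Zarith's Q stands in for Imandra's reals).
   Illustrative only; not the checker described in the abstract. *)

(* A linear constraint: sum_j coeffs.(j) * x_j <= bound. *)
type constr = { coeffs : Q.t array; bound : Q.t }

(* Check a Farkas certificate [y] for the system [cs]: if every
   y_i >= 0, the combination y^T A is the zero vector, and
   y^T b < 0, then A x <= b has no solution, so the certificate
   proves infeasibility. *)
let check_certificate (cs : constr list) (y : Q.t list) : bool =
  let n = Array.length (List.hd cs).coeffs in
  let combo = Array.make n Q.zero in   (* accumulates y^T A *)
  let rhs = ref Q.zero in              (* accumulates y^T b *)
  let nonneg = ref true in
  List.iter2
    (fun c yi ->
      if Q.lt yi Q.zero then nonneg := false;
      Array.iteri
        (fun j a -> combo.(j) <- Q.add combo.(j) (Q.mul yi a))
        c.coeffs;
      rhs := Q.add !rhs (Q.mul yi c.bound))
    cs y;
  !nonneg
  && Array.for_all (fun q -> Q.equal q Q.zero) combo
  && Q.lt !rhs Q.zero

let () =
  (* x <= 1 and -x <= -2 (i.e. x >= 2) are jointly infeasible;
     y = (1, 1) combines them into 0 <= -1, a contradiction. *)
  let cs = [ { coeffs = [| Q.one |]; bound = Q.one };
             { coeffs = [| Q.neg Q.one |]; bound = Q.of_int (-2) } ] in
  assert (check_certificate cs [ Q.one; Q.one ])
```

Because every operation here is exact rational arithmetic, the check cannot be fooled by floating-point rounding, which is precisely the numerical-stability concern the abstract raises.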
Related papers
- Automated Proof Generation for Rust Code via Self-Evolution [69.25795662658356]
We introduce SAFE, a novel framework that overcomes the lack of human-written proofs to enable automated proof generation for Rust code.
We demonstrate superior efficiency and precision compared to GPT-4o.
This advancement leads to a significant improvement in performance, achieving a 70.50% accuracy rate in a benchmark crafted by human experts.
arXiv Detail & Related papers (2024-10-21T08:15:45Z)
- A Certified Proof Checker for Deep Neural Network Verification [0.9895793818721335]
We present an alternative implementation of the Marabou proof checking algorithm in Imandra.
It allows us to obtain an implementation with formal guarantees, including proofs of mathematical results underlying the algorithm.
arXiv Detail & Related papers (2024-05-17T08:16:32Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees [3.208888890455612]
We propose a novel framework to generate Verification-Friendly Neural Networks (VNNs).
We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness.
arXiv Detail & Related papers (2023-12-15T12:39:27Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-of-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets, obtained via statistical prediction of tolerance limits; one standard construction of such limits is sketched after this entry.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
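One standard way to obtain such statistical tolerance limits is Wilks' nonparametric method: with enough i.i.d. samples, the sample maximum bounds a chosen fraction of the distribution at a chosen confidence. The OCaml sketch below illustrates that construction under our own assumptions; epsilon-ProVe's actual estimator may differ.

```ocaml
(* A hedged sketch of a one-sided nonparametric tolerance limit
   (Wilks' method). Not necessarily epsilon-ProVe's construction. *)

(* Smallest n such that, with confidence 1 - delta, the maximum of
   n i.i.d. samples exceeds at least a fraction [coverage] of the
   distribution: the failure probability is coverage^n <= delta. *)
let samples_needed ~coverage ~delta =
  int_of_float (ceil (log delta /. log coverage))

(* The upper tolerance limit is simply the sample maximum. *)
let upper_tolerance_limit (samples : float list) : float =
  List.fold_left max neg_infinity samples

let () =
  (* Coverage 0.99 at confidence 0.95 needs 299 samples. *)
  Printf.printf "n = %d\n" (samples_needed ~coverage:0.99 ~delta:0.05)
```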
- DelBugV: Delta-Debugging Neural Network Verifiers [0.0]
Deep neural networks (DNNs) are becoming a key component in diverse systems across the board.
Despite their success, they often err miserably, and this has triggered significant interest in formally verifying them.
Here, we present a novel tool, named DelBugV, that applies automated delta-debugging techniques to DNN verifiers; the classic reduction loop behind delta debugging is sketched after this entry.
arXiv Detail & Related papers (2023-05-29T18:42:03Z)
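Delta debugging, in its classic ddmin formulation, shrinks a failure-inducing input by repeatedly deleting chunks while the failure persists. The OCaml sketch below shows a simplified version of that reduction loop; it illustrates the general technique only, not DelBugV's actual algorithm, which operates on DNN verification queries.

```ocaml
(* A hedged, simplified ddmin-style reducer: delete chunk-sized
   slices of the input while [still_fails] keeps holding.
   Usage: ddmin still_fails failing_input 2 *)
let rec ddmin (still_fails : 'a list -> bool) (input : 'a list) (n : int) =
  let len = List.length input in
  if len <= 1 then input
  else
    let chunk = max 1 (len / n) in
    (* Try deleting each chunk-sized slice in turn. *)
    let rec try_deletions i =
      if i * chunk >= len then None
      else
        let reduced =
          List.filteri
            (fun j _ -> j < i * chunk || j >= (i + 1) * chunk) input
        in
        if still_fails reduced then Some reduced
        else try_deletions (i + 1)
    in
    match try_deletions 0 with
    | Some reduced ->
        (* A deletion kept the failure: coarsen and continue. *)
        ddmin still_fails reduced (max (n - 1) 2)
    | None ->
        if chunk <= 1 then input (* minimal at finest granularity *)
        else ddmin still_fails input (min len (2 * n))
```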
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that violate a given safety property; one natural formalization is given after this entry.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
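One natural formalization of this counting problem (our own phrasing of the sentence above, with N the network, P a precondition over a discretized input domain D, and Q a postcondition on outputs):

```latex
\#\mathrm{DNN\text{-}Verification}(N, P, Q) \;=\;
  \bigl|\,\{\, x \in \mathcal{D} \;:\; P(x) \,\wedge\, \neg Q(N(x)) \,\}\,\bigr|
```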
- Neural Network Verification with Proof Production [7.898605407936655]
We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma, which is stated after this entry.
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
arXiv Detail & Related papers (2022-06-01T14:14:37Z)
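For reference, a standard form of Farkas' lemma, on which such unsatisfiability certificates rest: for a rational matrix A and vector b, exactly one of the following holds.

```latex
(1)\ \exists x.\; A x \le b
\qquad\text{or}\qquad
(2)\ \exists y \ge 0.\; y^{\top} A = 0 \;\wedge\; y^{\top} b < 0
```

Case (2) is exactly the certificate a proof-producing verifier emits: the vector y witnesses that no assignment can satisfy all constraints at once.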
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique runs tests until the soundness of a formal probabilistic property can be established; a standard sample-complexity bound of this flavor is sketched after this entry.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
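As a rough illustration of the sample complexity behind such test-driven guarantees (a standard Hoeffding bound, not necessarily this paper's exact analysis): estimating a violation probability p to within plus or minus epsilon at confidence 1 - delta requires

```latex
n \;\ge\; \frac{1}{2\varepsilon^{2}} \,\ln\frac{2}{\delta}
\quad\text{i.i.d. tests, since Hoeffding gives }
\Pr\bigl[\,|\hat{p} - p| \ge \varepsilon\,\bigr]
  \;\le\; 2 e^{-2 n \varepsilon^{2}} \;\le\; \delta .
```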