Neural Network Verification with Proof Production
- URL: http://arxiv.org/abs/2206.00512v1
- Date: Wed, 1 Jun 2022 14:14:37 GMT
- Title: Neural Network Verification with Proof Production
- Authors: Omri Isac and Clark Barrett and Min Zhang and Guy Katz
- Abstract summary: We present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities.
Our proof production is based on an efficient adaptation of the well-known Farkas' lemma.
Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases.
- Score: 7.898605407936655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are increasingly being employed in
safety-critical systems, and there is an urgent need to guarantee their
correctness. Consequently, the verification community has devised multiple
techniques and tools for verifying DNNs. When DNN verifiers discover an input
that triggers an error, that is easy to confirm; but when they report that no
error exists, there is no way to ensure that the verification tool itself is
not flawed. As multiple errors have already been observed in DNN verification
tools, this calls the applicability of DNN verification into question. In this
work, we present a novel mechanism for enhancing Simplex-based DNN verifiers
with proof production capabilities: the generation of an easy-to-check witness
of unsatisfiability, which attests to the absence of errors. Our proof
production is based on an efficient adaptation of the well-known Farkas' lemma,
combined with mechanisms for handling piecewise-linear functions and numerical
precision errors. As a proof of concept, we implemented our technique on top of
the Marabou DNN verifier. Our evaluation on a safety-critical system for
airborne collision avoidance shows that proof production succeeds in almost all
cases and requires only minimal overhead.
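The key idea, an easy-to-check witness of unsatisfiability, can be illustrated with Farkas' lemma for linear constraints: if the system A x <= b is infeasible, there exists a vector y >= 0 with y^T A = 0 and y^T b < 0, and verifying these three conditions is far simpler than finding the certificate. The sketch below shows such a check in Python; it is illustrative only and does not reproduce Marabou's actual proof format or its handling of piecewise-linear constraints and numerical precision errors.

```python
import numpy as np

def check_farkas_certificate(A, b, y, tol=1e-9):
    """Check a Farkas-style proof of infeasibility for the system A x <= b.

    By Farkas' lemma, if y >= 0 satisfies y^T A = 0 and y^T b < 0,
    then A x <= b has no solution. The checker only re-verifies these
    conditions; it never needs to rerun the search that produced y.
    (Illustrative sketch, not Marabou's proof-checking API.)
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    y = np.asarray(y, dtype=float)

    nonnegative = np.all(y >= -tol)                    # y >= 0
    combination_vanishes = np.allclose(y @ A, 0.0, atol=tol)  # y^T A = 0
    strictly_negative = (y @ b) < -tol                 # y^T b < 0
    return bool(nonnegative and combination_vanishes and strictly_negative)

# Example: x <= 1 together with -x <= -2 (i.e. x >= 2) is infeasible.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
y = np.array([1.0, 1.0])  # summing the two rows yields 0 <= -1, a contradiction
print(check_farkas_certificate(A, b, y))  # True
```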
Related papers
- A Certified Proof Checker for Deep Neural Network Verification [0.9895793818721335]
We present an alternative implementation of the Marabou proof checking algorithm in Imandra.
It allows us to obtain an implementation with formal guarantees, including proofs of mathematical results underlying the algorithm.
arXiv Detail & Related papers (2024-05-17T08:16:32Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against competing train-time methods in calibrating both in-domain and out-of-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Towards a Certified Proof Checker for Deep Neural Network Verification [1.0485739694839669]
We present a novel implementation of a proof checker for DNN verification.
It improves on existing implementations by offering numerical stability and greater verifiability.
arXiv Detail & Related papers (2023-07-12T16:53:32Z)
- DelBugV: Delta-Debugging Neural Network Verifiers [0.0]
Deep neural networks (DNNs) are becoming a key component in diverse systems across the board.
Despite their success, they often err miserably; and this has triggered significant interest in formally verifying them.
Here, we present a novel tool, named DelBugV, that applies automated delta-debugging techniques to DNN verifiers.
arXiv Detail & Related papers (2023-05-29T18:42:03Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches for transferring results established in a previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)