Scalable Quantitative Verification For Deep Neural Networks
- URL: http://arxiv.org/abs/2002.06864v2
- Date: Tue, 23 Mar 2021 10:25:06 GMT
- Title: Scalable Quantitative Verification For Deep Neural Networks
- Authors: Teodora Baluta, Zheng Leong Chua, Kuldeep S. Meel and Prateek Saxena
- Abstract summary: We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique runs as many tests as needed until the soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
- Score: 44.570783946111334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the functional success of deep neural networks (DNNs), their
trustworthiness remains a crucial open challenge. To address this challenge,
both testing and verification techniques have been proposed. But these existing
techniques provide either scalability to large networks or formal guarantees,
not both. In this paper, we propose a scalable quantitative verification
framework for deep neural networks, i.e., a test-driven approach that comes
with formal guarantees that a desired probabilistic property is satisfied. Our
technique runs as many tests as needed until the soundness of a formal
probabilistic property can be proven. It can be used to certify properties of both
deterministic and randomized DNNs. We implement our approach in a tool called
PROVERO and apply it in the context of certifying adversarial robustness of
DNNs. In this context, we first present a new attack-agnostic measure of
robustness, which offers an alternative to the purely attack-based methodology
for evaluating robustness commonly reported today. Second, PROVERO provides
certificates of robustness for large DNNs, where existing state-of-the-art
verification tools fail to produce conclusive results. Our work paves the way
forward for verifying properties of distributions captured by real-world deep
neural networks, with provable guarantees, even where testers only have
black-box access to the neural network.
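To make the test-driven idea concrete, the sketch below shows one way a black-box, sample-based certificate for a probabilistic robustness property could look, using a simple Hoeffding bound. It is a minimal illustration under assumed interfaces, not PROVERO's actual algorithm; the `model` and `sample_perturbation` callables are hypothetical stand-ins supplied by the caller.

```python
# Minimal sketch (not PROVERO's algorithm): certify, with confidence 1 - delta,
# that the probability of misclassification inside a perturbation ball is at
# most theta, using only black-box queries and a Hoeffding bound.
# `model` and `sample_perturbation` are hypothetical stand-ins from the caller.
import math

def certify_robustness(model, sample_perturbation, x, label,
                       theta=0.01, delta=1e-3, gap=0.005):
    """Return True only if the misclassification probability in the ball is
    provably <= theta with confidence at least 1 - delta; else return False."""
    # Hoeffding's inequality: this many i.i.d. samples put the empirical mean
    # within gap/2 of the true violation probability with prob. >= 1 - delta.
    n = math.ceil((2.0 / gap ** 2) * math.log(2.0 / delta))
    violations = sum(
        1 for _ in range(n) if model(sample_perturbation(x)) != label
    )
    p_hat = violations / n
    return p_hat + gap / 2.0 <= theta  # certify only with a safety margin
```

With theta = 0.01, delta = 1e-3, and gap = 0.005, this draws roughly 6 × 10^5 black-box queries, which illustrates why more sample-efficient sequential tests are attractive in practice.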
Related papers
- FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks [6.22084835644296]
We propose a method for formally certifying and quantifying individual fairness of deep neural networks (DNNs).
Individual fairness guarantees that any two individuals who are identical except for a legally protected attribute (e.g., gender or race) receive the same treatment (a minimal spot-check of this definition is sketched after this list).
We have implemented our method and evaluated it on four popular fairness research datasets.
arXiv Detail & Related papers (2024-09-05T03:36:05Z)
- VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees [3.208888890455612]
We propose a novel framework to generate Verification-Friendly Neural Networks (VNNs).
We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness.
arXiv Detail & Related papers (2023-12-15T12:39:27Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Data-Driven Assessment of Deep Neural Networks with Random Input Uncertainty [14.191310794366075]
We develop a data-driven optimization-based method capable of simultaneously certifying the safety of network outputs and localizing them.
We experimentally demonstrate the efficacy and tractability of the method on a deep ReLU network.
arXiv Detail & Related papers (2020-10-02T19:13:35Z)
- Toward Reliable Models for Authenticating Multimedia Content: Detecting Resampling Artifacts With Bayesian Neural Networks [9.857478771881741]
We make a first step toward redesigning forensic algorithms with a strong focus on reliability.
We propose to use Bayesian neural networks (BNN), which combine the power of deep neural networks with the rigorous probabilistic formulation of a Bayesian framework.
BNN yields state-of-the-art detection performance, plus excellent capabilities for detecting out-of-distribution samples.
arXiv Detail & Related papers (2020-07-28T11:23:40Z)
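As a complement to the FairQuant entry above, which defines individual fairness as identical treatment for inputs that differ only in a legally protected attribute, the sketch below spells that definition out as an empirical spot check. It is an assumed, testing-style illustration rather than the formal certification the paper performs; the classifier is taken to be a plain Python callable, and `protected_idx` / `protected_values` are hypothetical names.

```python
# Minimal spot check of the individual-fairness definition above: two inputs
# identical except for the protected attribute must get the same prediction.
# This finds empirical counterexamples only; it is not a formal certificate.
from typing import Callable, List, Sequence

def fairness_counterexamples(model: Callable[[Sequence[float]], int],
                             inputs: Sequence[Sequence[float]],
                             protected_idx: int,
                             protected_values: Sequence[float]) -> List[List[float]]:
    """Return every input whose predicted class changes when only the
    protected attribute is replaced by another allowed value."""
    violations = []
    for x in inputs:
        base = model(x)
        for v in protected_values:
            if v == x[protected_idx]:
                continue                      # same value: not a counterfactual
            x_alt = list(x)
            x_alt[protected_idx] = v          # alter only the protected attribute
            if model(x_alt) != base:          # different treatment => violation
                violations.append(list(x))
                break
    return violations
```

An empty result on a test set is evidence of, but not a proof of, individual fairness; certification methods such as those listed above are needed for a guarantee over the whole input domain.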
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.