The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks
- URL: http://arxiv.org/abs/2301.07068v4
- Date: Mon, 19 Jun 2023 13:13:38 GMT
- Title: The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks
- Authors: Luca Marzari, Davide Corsi, Ferdinando Cicalese and Alessandro
Farinelli
- Abstract summary: The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
- Score: 94.63547069706459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks are increasingly adopted in critical tasks that require
a high level of safety, e.g., autonomous driving. While state-of-the-art
verifiers can be employed to check whether a DNN is unsafe w.r.t. some given
property (i.e., whether there is at least one unsafe input configuration),
their yes/no output is not informative enough for other purposes, such as
shielding, model selection, or training improvements. In this paper, we
introduce the #DNN-Verification problem, which involves counting the number of
input configurations of a DNN that result in a violation of a particular safety
property. We analyze the complexity of this problem and propose a novel
approach that returns the exact count of violations. Due to the #P-completeness
of the problem, we also propose a randomized, approximate method that provides
a provable probabilistic bound on the correct count while significantly
reducing computational requirements. We present experimental results on a set
of safety-critical benchmarks that demonstrate the effectiveness of our
approximate method and evaluate the tightness of the bound.
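For intuition on what a randomized count estimate with a probabilistic guarantee looks like, here is a minimal Python sketch: it draws uniform Monte Carlo samples from a box-shaped input domain and uses a Hoeffding bound to fix the number of samples needed for an (eps, delta)-accurate estimate of the violation rate. The toy network, property checker, and input box are illustrative placeholders, and plain uniform sampling is not the exact or approximate algorithm proposed in the paper.

```python
import numpy as np

def estimate_violation_rate(net, is_unsafe, lower, upper, eps=0.01, delta=0.05, rng=None):
    """Monte Carlo estimate of the fraction of the input box [lower, upper]
    that violates the safety property encoded by `is_unsafe`.

    Hoeffding's inequality gives P(|p_hat - p| >= eps) <= 2*exp(-2*n*eps^2),
    so n = ceil(ln(2/delta) / (2*eps^2)) samples make p_hat an
    (eps, delta)-accurate estimate of the true violation rate p.
    """
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    n = int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))
    samples = rng.uniform(lower, upper, size=(n, lower.size))   # uniform points in the box
    violations = sum(bool(is_unsafe(net(x))) for x in samples)  # count property violations
    return violations / n, n

# Toy usage: a hand-coded two-neuron ReLU layer and the property "output stays below 1".
if __name__ == "__main__":
    W = np.array([[1.5, -2.0], [0.5, 1.0]])
    net = lambda x: np.maximum(W @ x, 0.0).sum()
    is_unsafe = lambda y: y >= 1.0
    p_hat, n = estimate_violation_rate(net, is_unsafe, [-1, -1], [1, 1], eps=0.02)
    print(f"estimated violation rate {p_hat:.3f} from {n} samples")
```

Multiplying the estimated rate by the size of a discretized input space turns it into an approximate violation count with the same relative guarantee.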
Related papers
- A Certified Proof Checker for Deep Neural Network Verification [0.9895793818721335]
We present an alternative implementation of the Marabou proof checking algorithm in Imandra.
It allows us to obtain an implementation with formal guarantees, including proofs of mathematical results underlying the algorithm.
arXiv Detail & Related papers (2024-05-17T08:16:32Z)
- Tight Verification of Probabilistic Robustness in Bayesian Neural Networks [17.499817915644467]
We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs).
Our algorithms efficiently search the parameters' space for safe weights by using iterative expansion and the network's gradient.
In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we also evaluate them against the SoA on standard benchmarks.
arXiv Detail & Related papers (2024-01-21T23:41:32Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution for providing provable guarantees on safety.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits (a minimal sketch of the tolerance-limit idea appears after this list).
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs focus on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z)
- Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z)
- Continuous Safety Verification of Neural Networks [1.7056768055368385]
This paper considers approaches to transfer results established in the previous DNN safety verification problem to a modified problem setting.
The overall concept is evaluated on a 1/10-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
arXiv Detail & Related papers (2020-10-12T13:28:04Z)
- Global Optimization of Objective Functions Represented by ReLU Networks [77.55969359556032]
Neural networks can learn complex, non-convex functions, and it is challenging to guarantee their correct behavior in safety-critical contexts.
Many approaches exist to find failures in networks (e.g., adversarial examples), but these cannot guarantee the absence of failures.
We propose an approach that integrates the optimization process into the verification procedure, achieving better performance than the naive approach.
arXiv Detail & Related papers (2020-10-07T08:19:48Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
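As referenced in the Enumerating Safe Regions entry above, the sketch below isolates the non-parametric tolerance-limit idea in its simplest scalar form: from n i.i.d. samples of a network output, the sample maximum exceeds the coverage-quantile with probability 1 - coverage^n, which determines how many samples are needed for a chosen confidence. The sampling function and toy network are assumptions for illustration; this is only the underlying statistical ingredient, not the epsilon-ProVe procedure itself.

```python
import math
import numpy as np

def upper_tolerance_limit(sample_output, coverage=0.99, confidence=0.95, rng=None):
    """One-sided non-parametric tolerance limit for a scalar network output.

    For i.i.d. samples X_1..X_n of a continuous output distribution,
    P(max X_i >= coverage-quantile) = 1 - coverage**n, so choosing
    n >= ln(1 - confidence) / ln(coverage) makes max(X) an upper bound that
    covers at least `coverage` of the output distribution with the requested
    confidence.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = math.ceil(math.log(1.0 - confidence) / math.log(coverage))
    xs = np.array([sample_output(rng) for _ in range(n)])
    return xs.max(), n

# Toy usage: sample a hand-coded ReLU layer on uniform inputs and bound its output.
if __name__ == "__main__":
    W = np.array([[1.5, -2.0], [0.5, 1.0]])
    sample_output = lambda rng: np.maximum(W @ rng.uniform(-1, 1, 2), 0.0).sum()
    limit, n = upper_tolerance_limit(sample_output, coverage=0.99, confidence=0.95)
    print(f"with 95% confidence, >=99% of outputs fall below {limit:.3f} ({n} samples)")
```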
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.