Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees
- URL: http://arxiv.org/abs/2308.09842v2
- Date: Tue, 20 Feb 2024 17:35:48 GMT
- Title: Enumerating Safe Regions in Deep Neural Networks with Provable
Probabilistic Guarantees
- Authors: Luca Marzari, Davide Corsi, Enrico Marchesini, Alessandro Farinelli
and Ferdinando Cicalese
- Abstract summary: We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
- Score: 86.1362094580439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying safe areas is a key step toward guaranteeing trust in
systems based on Deep Neural Networks (DNNs). To this end, we introduce the
AllDNN-Verification problem: given a safety property and a DNN, enumerate the
set of all regions of the property's input domain that are safe, i.e., where
the property holds. Due to the #P-hardness of the problem, we propose an
efficient approximation method called epsilon-ProVe. Our approach exploits a
controllable underestimation of the output reachable sets, obtained via
statistical prediction of tolerance limits, and provides a tight lower
estimate of the safe areas with provable probabilistic guarantees. Our
empirical evaluation on different standard benchmarks shows the scalability
and effectiveness of our method, offering valuable insights for this new type
of DNN verification.
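To make the abstract's key ingredient concrete, here is a minimal sketch of how a non-parametric statistical tolerance limit can underestimate a network's output reachable set over an input region with a probabilistic coverage guarantee. This illustrates the general idea only, not the authors' epsilon-ProVe procedure; the toy network, input box, and "output < 0" property are hypothetical placeholders.

```python
import numpy as np

# Hedged sketch: a one-sided non-parametric tolerance limit on a DNN's output
# over an input region. The network weights below are illustrative only.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def dnn(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return (W2 @ h + b2).item()

def samples_needed(coverage, confidence):
    """Smallest n such that the max of n i.i.d. output samples upper-bounds at
    least a `coverage` fraction of outputs with probability `confidence`
    (non-parametric one-sided tolerance limit: 1 - coverage**n >= confidence)."""
    return int(np.ceil(np.log(1.0 - confidence) / np.log(coverage)))

# Hypothetical input region: the box [-1, 1]^2; property: "output < 0".
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
coverage, confidence = 0.99, 0.999
n = samples_needed(coverage, confidence)

outputs = np.array([dnn(rng.uniform(lo, hi)) for _ in range(n)])
upper_limit = outputs.max()   # statistical upper tolerance limit

print(f"n = {n}, statistical upper limit = {upper_limit:.3f}")
if upper_limit < 0.0:
    print(f"with prob. {confidence}, at least {coverage:.0%} of the region satisfies output < 0")
else:
    print("property not certified on this region")
```

The guarantee is probabilistic rather than absolute: the sampled maximum underestimates the true reachable set, but with the chosen confidence it covers at least the chosen fraction of the region's outputs.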
Related papers
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and
Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on their safety.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural
Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z) - Safety Verification for Neural Networks Based on Set-boundary Analysis [5.487915758677295]
Neural networks (NNs) are increasingly applied in safety-critical systems such as autonomous vehicles.
We propose a set-boundary reachability method to investigate the safety verification problem of NNs from a topological perspective.
arXiv Detail & Related papers (2022-10-09T05:55:37Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs (a minimal interval-propagation sketch appears after this list).
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of a Bayesian Neural Network (BNN) model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- PAC Confidence Predictions for Deep Neural Network Classifiers [28.61937254015157]
A key challenge for deploying deep neural networks (DNNs) in safety-critical settings is the need to provide rigorous ways to quantify their uncertainty.
We propose an algorithm for constructing predicted classification confidences for DNNs that comes with provable correctness guarantees.
arXiv Detail & Related papers (2020-11-02T04:09:17Z)
- Probabilistic Safety for Bayesian Neural Networks [22.71265211510824]
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
In particular, we evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks.
We apply our methods to BNNs trained on an airborne collision avoidance task, empirically showing that our approach allows one to certify the probabilistic safety of BNNs with millions of parameters.
arXiv Detail & Related papers (2020-04-21T20:25:33Z)
- Scalable Quantitative Verification For Deep Neural Networks [44.570783946111334]
We propose a test-driven verification framework for deep neural networks (DNNs).
Our technique performs tests until the soundness of a formal probabilistic property can be proven.
Our work paves the way for verifying properties of distributions captured by real-world deep neural networks, with provable guarantees.
arXiv Detail & Related papers (2020-02-17T09:53:50Z)
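Several of the listed papers (notably the bound-propagation and interval-reachability entries above) bound a network's outputs by propagating input intervals through its layers. The following is a minimal, hypothetical sketch of naive interval bound propagation for a toy ReLU network; the weights, input box, and "output < 0" property are illustrative placeholders and not taken from any of the cited works.

```python
import numpy as np

# Hedged sketch: naive interval bound propagation (IBP) over-approximating the
# output reachable set of a small ReLU network on a box-shaped input region.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def ibp_affine(lo, hi, W, b):
    """Propagate the interval [lo, hi] through x -> W x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_forward(lo, hi):
    lo, hi = ibp_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
    return ibp_affine(lo, hi, W2, b2)

# The resulting [out_lo, out_hi] contains every network output on the box,
# so out_hi < 0 soundly certifies the property "output < 0" on that region.
out_lo, out_hi = ibp_forward(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(f"output reachable set is contained in [{out_lo.item():.3f}, {out_hi.item():.3f}]")
```

Unlike the statistical tolerance-limit sketch after the abstract, this interval over-approximation is sound but can be loose, which is why the cited works focus on tightening and scaling such bounds.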
This list is automatically generated from the titles and abstracts of the papers on this site.