Tight Verification of Probabilistic Robustness in Bayesian Neural
Networks
- URL: http://arxiv.org/abs/2401.11627v2
- Date: Wed, 28 Feb 2024 19:04:12 GMT
- Title: Tight Verification of Probabilistic Robustness in Bayesian Neural
Networks
- Authors: Ben Batten, Mehran Hosseini, Alessio Lomuscio
- Abstract summary: We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs).
Our algorithms efficiently search the parameters' space for safe weights by using iterative expansion and the network's gradient.
In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we evaluate them against the SoA on standard benchmarks.
- Score: 17.499817915644467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce two algorithms for computing tight guarantees on the
probabilistic robustness of Bayesian Neural Networks (BNNs). Computing
robustness guarantees for BNNs is a significantly more challenging task than
verifying the robustness of standard Neural Networks (NNs) because it requires
searching the parameters' space for safe weights. Moreover, tight and complete
approaches for the verification of standard NNs, such as those based on
Mixed-Integer Linear Programming (MILP), cannot be directly used for the
verification of BNNs because of the polynomial terms resulting from the
consecutive multiplication of variables encoding the weights. Our algorithms
efficiently and effectively search the parameters' space for safe weights by
using iterative expansion and the network's gradient and can be used with any
verification algorithm of choice for BNNs. In addition to proving that our
algorithms compute tighter bounds than the SoA, we also evaluate our algorithms
against the SoA on standard benchmarks, such as MNIST and CIFAR10, showing that
our algorithms compute bounds up to 40% tighter than the SoA.
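For context, the probabilistic robustness property being certified can be stated roughly as follows; this is a standard formulation, and the perturbation set $T(x)$, the safe output set $S$, and the threshold $\sigma$ below are notational assumptions rather than the paper's exact definitions:
```latex
% Probabilistic robustness of a BNN f_w with posterior p(w | D):
% the posterior probability that the network is robust on the whole
% perturbation set T(x) must be at least sigma.
\[
  \Pr_{w \sim p(w \mid \mathcal{D})}
  \bigl[\, \forall x' \in T(x) :\; f_w(x') \in S \,\bigr] \;\ge\; \sigma .
\]
% A lower bound on this probability is obtained by finding sets H of "safe"
% weights (every w in H is robust on T(x)) and computing their posterior mass.
% Encoding the weights as decision variables alongside the activations yields
% products of variables, which is why complete MILP encodings for standard NNs
% do not transfer directly to BNNs.
```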
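Below is a minimal sketch, in Python, of the general recipe the abstract hints at: expand a box of weights around the posterior mean, guided by the network's gradient, for as long as a verifier of choice certifies every weight in the box to be safe, then take the posterior mass of the box as a lower bound on the robustness probability. The function names, the `verify_box` interface, and the factorised Gaussian posterior are illustrative assumptions, not the paper's implementation.
```python
# Illustrative sketch only: gradient-guided iterative expansion of a box of
# "safe" weights for a BNN with a factorised Gaussian posterior N(mu, sigma^2).
# `verify_box` stands in for any sound verifier that certifies robustness on
# T(x) for *all* weights inside the given box.
import numpy as np
from scipy.stats import norm


def safe_weight_box(mu, sigma, grad, verify_box, step=0.05, max_iter=50):
    """Iteratively widen a weight box [lo, hi] around the posterior mean,
    expanding less in directions where the gradient w.r.t. the weights is
    large (those weights affect the output most)."""
    lo, hi = mu.copy(), mu.copy()
    scale = 1.0 / (1.0 + np.abs(grad))       # smaller steps for sensitive weights
    for _ in range(max_iter):
        cand_lo = lo - step * scale * sigma
        cand_hi = hi + step * scale * sigma
        if not verify_box(cand_lo, cand_hi):  # expansion broke robustness: stop
            break
        lo, hi = cand_lo, cand_hi
    return lo, hi


def posterior_mass(lo, hi, mu, sigma):
    """Probability that w ~ N(mu, diag(sigma^2)) lies inside the box: a lower
    bound on the probabilistic robustness of the BNN."""
    per_weight = norm.cdf(hi, loc=mu, scale=sigma) - norm.cdf(lo, loc=mu, scale=sigma)
    return float(np.prod(per_weight))
```
With independent Gaussian weights, the mass of an axis-aligned box factorises into per-weight interval probabilities, which is what makes the final product computation valid.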
Related papers
- Adversarial Robustness Certification for Bayesian Neural Networks [22.71265211510824]
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
Our framework is based on weight sampling, integration, and bound propagation techniques, and can be applied to BNNs with a large number of parameters.
arXiv Detail & Related papers (2023-06-23T16:58:25Z)
- Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! [100.19080749267316]
The "Sparsity May Cry" Benchmark (SMC-Bench) is a collection of 4 carefully curated, diverse tasks with 10 datasets.
SMC-Bench is designed to favor and encourage the development of more scalable and generalizable sparse algorithms.
arXiv Detail & Related papers (2023-03-03T18:47:21Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for verifying binarized neural networks (BNNs).
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
arXiv Detail & Related papers (2022-03-11T01:11:29Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z)