Adversarial Robustness Certification for Bayesian Neural Networks
- URL: http://arxiv.org/abs/2306.13614v1
- Date: Fri, 23 Jun 2023 16:58:25 GMT
- Title: Adversarial Robustness Certification for Bayesian Neural Networks
- Authors: Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
- Abstract summary: We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
Our framework is based on weight interval sampling, integration, and bound propagation techniques, and can be applied to BNNs with a large number of parameters.
- Score: 22.71265211510824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of certifying the robustness of Bayesian neural networks
(BNNs) to adversarial input perturbations. Given a compact set of input points
$T \subseteq \mathbb{R}^m$ and a set of output points $S \subseteq
\mathbb{R}^n$, we define two notions of robustness for BNNs in an adversarial
setting: probabilistic robustness and decision robustness. Probabilistic
robustness is the probability that for all points in $T$ the output of a BNN
sampled from the posterior is in $S$. On the other hand, decision robustness
considers the optimal decision of a BNN and checks if for all points in $T$ the
optimal decision of the BNN for a given loss function lies within the output
set $S$. Although exact computation of these robustness properties is
challenging due to the probabilistic and non-convex nature of BNNs, we present
a unified computational framework for efficiently and formally bounding them.
Our approach is based on weight interval sampling, integration, and bound
propagation techniques, and can be applied to BNNs with a large number of
parameters, and independently of the (approximate) inference method employed to
train the BNN. We evaluate the effectiveness of our methods on various
regression and classification tasks, including an industrial regression
benchmark, MNIST, traffic sign recognition, and airborne collision avoidance,
and demonstrate that our approach enables certification of robustness and
uncertainty of BNN predictions.
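To make the two robustness notions concrete, here is a minimal sketch of a certification loop in the spirit of the framework described above: weights are sampled from the (approximate) posterior, interval bound propagation gives sound output bounds over the input box $T$, and the fraction of sampled networks whose output box provably lies in $S$ estimates probabilistic robustness. This is an illustrative, assumption-laden toy (the helper names, the two-layer Gaussian-posterior stub, and plain Monte Carlo in place of the paper's formal bounds are all ours), not the authors' implementation.

```python
import numpy as np

# All names below are illustrative assumptions, not the authors' API.

def sample_posterior_weights(rng):
    """Draw one network from an (approximate) BNN posterior.
    Stub: a 2-layer ReLU MLP with independent Gaussian posteriors."""
    return {
        "W1": rng.normal(0.0, 0.3, size=(8, 2)), "b1": rng.normal(0.0, 0.1, size=8),
        "W2": rng.normal(0.0, 0.3, size=(1, 8)), "b2": rng.normal(0.0, 0.1, size=1),
    }

def affine_bounds(W, b, lo, hi):
    """Sound interval propagation through x -> W x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(w, x_lo, x_hi):
    """Interval bound propagation: sound output bounds over the input box T."""
    lo, hi = affine_bounds(w["W1"], w["b1"], x_lo, x_hi)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return affine_bounds(w["W2"], w["b2"], lo, hi)

def probabilistic_robustness(n, x_lo, x_hi, s_lo, s_hi, seed=0):
    """Monte Carlo estimate of Prob_{w ~ posterior}(forall x in T: f^w(x) in S).
    Each per-sample check is sound (IBP), but the plain sample average shown
    here lacks the formal guarantees the paper derives."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        lo, hi = output_bounds(sample_posterior_weights(rng), x_lo, x_hi)
        hits += bool(np.all(lo >= s_lo) and np.all(hi <= s_hi))
    return hits / n

if __name__ == "__main__":
    T_lo, T_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input box T
    S_lo, S_hi = np.array([-2.0]), np.array([2.0])             # output set S
    print(probabilistic_robustness(1000, T_lo, T_hi, S_lo, S_hi))
```

Decision robustness would instead bound the posterior-averaged prediction over $T$, e.g. by averaging the per-sample output intervals before the containment check.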
Related papers
- Fast and Reliable $N-k$ Contingency Screening with Input-Convex Neural Networks [3.490170135411753]
Power system operators must ensure that dispatch decisions remain feasible in case of grid outages or contingencies to prevent failures and ensure reliable operation.
Checking the feasibility of all $N - k$ contingencies is intractable even for small numbers $k$ of outaged grid components.
In this work, we propose using input-convex neural networks (ICNNs) for contingency screening.
arXiv Detail & Related papers (2024-10-01T15:38:09Z)
- Tight Verification of Probabilistic Robustness in Bayesian Neural Networks [17.499817915644467]
We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs).
Our algorithms efficiently search the parameters' space for safe weights by using iterative expansion and the network's gradient.
In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we also evaluate our algorithms against the SoA on standard benchmarks.
arXiv Detail & Related papers (2024-01-21T23:41:32Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits; a toy sketch of the tolerance-limit idea appears after this list.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming [8.162867143465382]
We introduce BNN-DP, an efficient framework for the analysis of the adversarial robustness of Bayesian Neural Networks (BNNs).
We show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
arXiv Detail & Related papers (2023-06-19T07:19:15Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) with the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, Bayesian ReLU nets (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that the proposed infinite-feature extension can be applied post hoc to any pre-trained ReLU BNN at low cost.
arXiv Detail & Related papers (2020-10-06T13:32:18Z)
- Probabilistic Safety for Bayesian Neural Networks [22.71265211510824]
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
In particular, we evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks.
We apply our methods to BNNs trained on an airborne collision avoidance task, empirically showing that our approach allows one to certify the probabilistic safety of BNNs with millions of parameters.
arXiv Detail & Related papers (2020-04-21T20:25:33Z)
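As a side note on the epsilon-ProVe entry above, which rests on statistical prediction of tolerance limits, the toy sketch below illustrates that general statistical idea (classical Wilks-style non-parametric tolerance intervals), not the paper's actual algorithm: trimming the extreme order statistics of sampled outputs yields a controllable underestimate of the reachable output range, with a Beta-distributed guarantee on how much of the output distribution it covers. All names here are hypothetical.

```python
import numpy as np
from scipy.stats import beta

def trimmed_reachable_range(samples, r):
    """Order-statistic interval [X_(r), X_(n-r+1)]: a controllable
    underestimate of the reachable output range, discarding the r-1
    smallest and r-1 largest sampled outputs."""
    xs = np.sort(np.asarray(samples))
    return xs[r - 1], xs[len(xs) - r]

def coverage_confidence(n, r, eps):
    """Confidence that [X_(r), X_(n-r+1)] covers at least a (1 - eps)
    fraction of the output distribution. For continuous outputs the
    coverage is Beta(n - 2r + 1, 2r) distributed (Wilks), so the
    confidence is the Beta survival function at 1 - eps."""
    return float(beta.sf(1.0 - eps, n - 2 * r + 1, 2 * r))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical network outputs over inputs drawn from the property domain.
    outputs = rng.normal(size=2000)
    lo, hi = trimmed_reachable_range(outputs, r=5)
    print(f"inner range ~ [{lo:.3f}, {hi:.3f}], "
          f"confidence {coverage_confidence(2000, 5, eps=0.01):.4f}")
```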