BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic
Programming
- URL: http://arxiv.org/abs/2306.10742v1
- Date: Mon, 19 Jun 2023 07:19:15 GMT
- Title: BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic
Programming
- Authors: Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
- Abstract summary: We introduce BNN-DP, an efficient framework for the analysis of adversarial robustness of Bayesian Neural Networks (BNNs).
We show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
- Score: 8.162867143465382
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce BNN-DP, an efficient algorithmic framework for
the analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a
compact set of input points $T\subset \mathbb{R}^n$, BNN-DP computes lower and
upper bounds on the BNN's predictions for all the points in $T$. The framework
is based on an interpretation of BNNs as stochastic dynamical systems, which
enables the use of Dynamic Programming (DP) algorithms to bound the prediction
range along the layers of the network. Specifically, the method uses bound
propagation techniques and convex relaxations to derive a backward recursion
procedure to over-approximate the prediction range of the BNN with piecewise
affine functions. The algorithm is general and can handle both regression and
classification tasks. On a set of experiments on various regression and
classification tasks and BNN architectures, we show that BNN-DP outperforms
state-of-the-art methods by up to four orders of magnitude in both tightness of
the bounds and computational efficiency.
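To make the certification task concrete, here is a minimal Python sketch (all names hypothetical) of the coarse baseline that BNN-DP improves on: clip each Gaussian weight posterior to a credible interval mu ± k·sigma and propagate the input box T forward with interval arithmetic. The paper's actual method replaces this forward pass with a backward DP recursion over piecewise-affine bounds, which is far tighter; this snippet only illustrates what "lower and upper bounds on the BNN's predictions over T" means.

```python
# Minimal sketch (NOT the paper's DP recursion): forward interval propagation
# through a small ReLU network whose weights lie in posterior credible boxes.
import numpy as np

def interval_affine(xl, xu, Wl, Wu, bl, bu):
    """Sound bounds on W @ x + b for x in [xl, xu], W in [Wl, Wu], b in [bl, bu]."""
    xc, xr = (xl + xu) / 2, (xu - xl) / 2          # center/radius of the input box
    Wc, Wr = (Wl + Wu) / 2, (Wu - Wl) / 2          # center/radius of the weight box
    yc = Wc @ xc
    yr = np.abs(Wc) @ xr + Wr @ (np.abs(xc) + xr)  # interval-arithmetic radius
    return yc - yr + bl, yc + yr + bu

def bnn_output_bounds(mus, sigmas, xl, xu, k=2.0):
    """Propagate [xl, xu] through layers whose weights lie in mu +/- k*sigma."""
    for i, ((Wmu, bmu), (Wsig, bsig)) in enumerate(zip(mus, sigmas)):
        xl, xu = interval_affine(xl, xu,
                                 Wmu - k * Wsig, Wmu + k * Wsig,
                                 bmu - k * bsig, bmu + k * bsig)
        if i < len(mus) - 1:                       # ReLU is monotone: apply elementwise
            xl, xu = np.maximum(xl, 0), np.maximum(xu, 0)
    return xl, xu

rng = np.random.default_rng(0)
mus = [(rng.normal(size=(8, 2)), np.zeros(8)), (rng.normal(size=(1, 8)), np.zeros(1))]
sigmas = [(0.05 * np.ones((8, 2)), 0.05 * np.ones(8)),
          (0.05 * np.ones((1, 8)), 0.05 * np.ones(1))]
lo, hi = bnn_output_bounds(mus, sigmas, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
print(f"prediction range over T (up to weight-tail mass): [{lo[0]:.3f}, {hi[0]:.3f}]")
```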
Related papers
- A lifted Bregman strategy for training unfolded proximal neural network Gaussian denoisers [8.343594411714934]
Unfolded proximal neural networks (PNNs) form a family of methods that combines deep learning and proximal optimization approaches.
We propose a lifted training formulation based on Bregman distances for unfolded PNNs.
We assess the behaviour of the proposed training approach for PNNs through numerical simulations on image denoising.
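As a rough illustration of what "unfolded proximal neural network" means, the hypothetical sketch below unrolls ISTA iterations for l1-regularized denoising into layers with learnable dictionaries; the paper's lifted Bregman training objective, which replaces end-to-end backpropagation with per-layer Bregman-distance penalties, is not reproduced here.

```python
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    # Proximal operator of lam*||.||_1 -- the step that makes each layer "proximal".
    return torch.sign(x) * torch.clamp(x.abs() - lam, min=0.0)

class UnfoldedPNN(nn.Module):
    """K ISTA iterations unrolled into layers with per-layer learnable dictionaries."""
    def __init__(self, dim, n_layers=5, step=0.5):
        super().__init__()
        self.Ws = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(dim, dim)) for _ in range(n_layers)])
        self.lam = nn.Parameter(torch.tensor(0.1))
        self.step = step

    def forward(self, y):                      # y: (batch, dim) noisy signal
        x = y.clone()
        for W in self.Ws:
            grad = (x @ W.T - y) @ W           # gradient of 0.5 * ||x W^T - y||^2
            x = soft_threshold(x - self.step * grad, self.lam)
        return x

y = torch.randn(4, 16)                         # toy noisy batch
print(UnfoldedPNN(16)(y).shape)                # torch.Size([4, 16])
```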
arXiv Detail & Related papers (2024-08-16T13:41:34Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
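A minimal sketch of direct feedback alignment on a two-layer graph network, assuming mean aggregation, a toy random graph, and full supervision (the paper targets semi-supervised learning): the hidden layer's update projects the output error through a fixed random feedback matrix B1 instead of the transpose of W2, so no backward pass through the network is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F, H, C = 6, 4, 8, 3                        # nodes, features, hidden, classes
A = rng.random((N, N)) < 0.4                   # toy random graph
A = ((A | A.T) | np.eye(N, dtype=bool)).astype(float)
A = A / A.sum(1, keepdims=True)                # row-normalized mean aggregation
X = rng.normal(size=(N, F))
y = rng.integers(0, C, size=N)
Y = np.eye(C)[y]                               # one-hot labels

W1 = 0.1 * rng.normal(size=(F, H))
W2 = 0.1 * rng.normal(size=(H, C))
B1 = rng.normal(size=(C, H))                   # fixed random feedback matrix

def softmax(z):
    z = np.exp(z - z.max(1, keepdims=True))
    return z / z.sum(1, keepdims=True)

for _ in range(200):
    Z1 = A @ X @ W1                            # forward: aggregate, then transform
    H1 = np.maximum(Z1, 0.0)
    P = softmax(A @ H1 @ W2)
    e = P - Y                                  # global output error
    d1 = (e @ B1) * (Z1 > 0)                   # DFA: random projection, not W2.T @ e
    W2 -= 0.1 * (A @ H1).T @ e / N             # output layer uses the true error
    W1 -= 0.1 * (A @ X).T @ d1 / N
print("train accuracy:", (P.argmax(1) == y).mean())
```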
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
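For orientation, the standard snnTorch usage pattern on CPU/GPU looks like the sketch below (the IPU-optimized release targets Graphcore hardware and is not shown): a Leaky integrate-and-fire neuron accumulates synaptic current over time steps and emits spikes when its membrane potential crosses threshold.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)                    # leaky integrate-and-fire neuron
fc = torch.nn.Linear(10, 5)                  # synaptic weights

mem = lif.init_leaky()                       # initialize membrane potential
spikes = []
x = (torch.rand(20, 1, 10) > 0.7).float()    # 20 time steps of random spike input
for t in range(x.shape[0]):
    cur = fc(x[t])                           # synaptic current at step t
    spk, mem = lif(cur, mem)                 # membrane integrates, spikes, resets
    spikes.append(spk)

print(torch.stack(spikes).sum().item(), "output spikes over 20 steps")
```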
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Binary neural networks (BNNs) neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
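The bilinear coupling is easiest to see in the classic closed-form binarization (XNOR-Net-style), sketched here: for a fixed sign pattern b = sign(w), the scale alpha = mean|w| minimizes the reconstruction error, but w and alpha are then optimized independently, which is exactly the decoupling RBONN argues against. The recurrent bilinear optimization itself is not reproduced.

```python
import torch

def binarize(w):
    """Closed form: alpha = mean|w| minimizes ||w - alpha * sign(w)||^2."""
    alpha = w.abs().mean()
    return alpha, torch.sign(w)

w = torch.randn(64, 64)
alpha, b = binarize(w)
print("reconstruction error:", torch.norm(w - alpha * b).item())
# RBONN's point: w and alpha are bilinearly coupled, so solving for each in
# isolation (as above) is suboptimal; the paper couples them through a
# recurrent bilinear optimization, which this closed form does not capture.
```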
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Anisotropic, Sparse and Interpretable Physics-Informed Neural Networks
for PDEs [0.0]
We present ASPINN, an anisotropic extension of our earlier work called SPINN--Sparse, Physics-informed, and Interpretable Neural Networks--to solve PDEs.
ASPINNs generalize radial basis function networks.
We also streamline the training of ASPINNs into a form that is closer to that of supervised learning algorithms.
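To see what "anisotropic" buys over a plain radial basis function, compare a shared scalar width with a full learnable precision matrix M = L Lᵀ, which lets each unit stretch and rotate its receptive field. This is only a sketch of the kernel shape, not the ASPINN architecture or its PDE loss.

```python
import numpy as np

def iso_rbf(x, c, sigma):
    # Isotropic kernel: one width shared by all directions.
    d = x - c
    return np.exp(-(d @ d) / (2 * sigma**2))

def aniso_rbf(x, c, L):
    # Anisotropic kernel: precision M = L @ L.T (L lower-triangular, learnable).
    d = x - c
    return np.exp(-0.5 * d @ (L @ L.T) @ d)

x, c = np.array([0.3, 0.1]), np.zeros(2)
L = np.array([[4.0, 0.0], [1.0, 0.5]])   # elongated, rotated receptive field
print(iso_rbf(x, c, 0.5), aniso_rbf(x, c, L))
```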
arXiv Detail & Related papers (2022-07-01T12:24:43Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
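A naive version of interval reachability for one implicit layer is sketched below, under the assumption that the layer z = relu(W z + U x + b) is made well-posed by bounding the row sums of |W|: iterate the interval extension of the layer map until the enclosure stabilizes. The paper's analysis yields tighter intervals than this simple iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
W = rng.normal(size=(n, n))
W *= 0.8 / np.abs(W).sum(axis=1).max()     # max row sum of |W| < 1 => contraction
U = rng.normal(size=(n, m))
b = rng.normal(size=n)

def implicit_layer_bounds(xl, xu, iters=100):
    """Bounds on the fixed point z = relu(W z + U x + b) for all x in [xl, xu]."""
    xc, xr = (xl + xu) / 2, (xu - xl) / 2
    uc, ur = U @ xc + b, np.abs(U) @ xr    # interval image of U x + b
    zl = zu = np.zeros(n)
    for _ in range(iters):                 # naive interval fixed-point iteration
        zc, zr = (zl + zu) / 2, (zu - zl) / 2
        yc, yr = W @ zc + uc, np.abs(W) @ zr + ur
        zl, zu = np.maximum(yc - yr, 0), np.maximum(yc + yr, 0)
    return zl, zu

zl, zu = implicit_layer_bounds(np.array([-0.1] * m), np.array([0.1] * m))
print(np.all(zl <= zu), zu - zl)
```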
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - A Mixed Integer Programming Approach for Verifying Properties of
Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for BNN verification.
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
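As a toy instance of the idea, the sketch below (hypothetical numbers, PuLP with the bundled CBC solver) encodes a single sign-activation neuron with one binary variable and big-M constraints, then asks whether any input in the box makes the output negative; the paper's formulation scales this kind of encoding to whole binarized networks.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, LpStatus, lpSum

w, bias, M = [1.0, -2.0, 0.5], 0.1, 100.0
prob = LpProblem("can_output_be_negative", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(3)]  # input box
s = LpVariable("s", cat=LpBinary)           # s = 1 <=> pre-activation >= 0
pre = lpSum(wi * xi for wi, xi in zip(w, x)) + bias
prob += 0 * s                               # pure feasibility: constant objective
prob += pre <= M * s                        # s = 0 forces pre <= 0
prob += pre >= -M * (1 - s)                 # s = 1 forces pre >= 0
prob += s == 0                              # property query: can sign(pre) be -1?
prob.solve()
print(LpStatus[prob.status])                # "Optimal" => a violating input exists
```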
arXiv Detail & Related papers (2022-03-11T01:11:29Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces training time and parameter count, which helps scale BNNs efficiently.
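The summary does not spell out the layer's construction, so the following is a generic sketch of the broader pattern it belongs to: keep the backbone deterministic and frozen, and place a mean-field variational (Bayes-by-backprop-style) layer on the features, so only that one layer's posterior parameters are trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Mean-field Gaussian posterior over one layer's weights (reparameterized)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -3.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        W = self.mu + sigma * torch.randn_like(sigma)  # one posterior sample
        return x @ W.T

    def kl(self):
        # KL(q || N(0, I)), the ELBO's regularizer for this layer.
        sigma = F.softplus(self.rho)
        return 0.5 * (sigma**2 + self.mu**2 - 1 - 2 * sigma.log()).sum()

backbone = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # deterministic backbone
for p in backbone.parameters():
    p.requires_grad_(False)                              # only the Bayesian layer trains
head = VariationalLinear(32, 3)
logits = head(backbone(torch.randn(8, 10)))
print(logits.shape, head.kl().item())
```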
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - A comprehensive review of Binary Neural Network [2.918940961856197]
The Binary Neural Network (BNN) method is an extreme form of convolutional neural network (CNN) parameter quantization.
Recent developments in BNNs have produced many algorithms and solutions that help mitigate the accuracy loss such aggressive quantization incurs.
arXiv Detail & Related papers (2021-10-11T22:44:15Z) - Encoding the latent posterior of Bayesian Neural Networks for
uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z) - Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.