Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
- URL: http://arxiv.org/abs/2111.08591v1
- Date: Tue, 16 Nov 2021 16:14:44 GMT
- Title: Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
- Authors: Adaku Uchendu, Daniel Campoy, Christopher Menart, and Alexandra
Hildenbrandt
- Abstract summary: Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
- Score: 55.531896312724555
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bayesian Neural Networks (BNNs), unlike Traditional Neural Networks
(TNNs), are robust and adept at handling adversarial attacks by incorporating
randomness. This randomness improves the estimation of uncertainty, a feature
lacking in TNNs. Thus, we investigate the robustness of BNNs to white-box
attacks using multiple Bayesian neural architectures. Furthermore, we create
our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e.,
variational Bayes) into the DenseNet architecture, and BDAV, by combining this
intervention with adversarial training. Experiments are conducted on the
CIFAR-10 and FGVC-Aircraft datasets. We attack our models with strong white-box
attacks ($l_\infty$-FGSM, $l_\infty$-PGD, $l_2$-PGD, EOT $l_\infty$-FGSM, and
EOT $l_\infty$-PGD). In all experiments, at least one BNN outperforms
traditional neural networks during adversarial attack scenarios. An
adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained
counterpart in most experiments, and often by significant margins. Lastly, we
investigate network calibration and find that BNNs do not make overconfident
predictions, providing evidence that BNNs are also better at measuring
uncertainty.
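The abstract names the attacks but not their mechanics. As a rough illustration (not the authors' code), the sketch below shows how an EOT $l_\infty$-FGSM step and a Monte-Carlo prediction might look in PyTorch against a stochastic model such as a BNN; the model handle `bnn`, the sample counts, and `eps` are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def eot_fgsm(bnn, x, y, eps=8 / 255, n_samples=10):
    # EOT l_inf-FGSM: a BNN draws fresh weight samples on every forward
    # pass, so the loss gradient is averaged over several passes
    # (Expectation over Transformation) before the single signed step.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(bnn(x_adv), y) for _ in range(n_samples)) / n_samples
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()  # one l_inf step of size eps
        x_adv = x_adv.clamp(0.0, 1.0)            # keep pixels in valid range
    return x_adv.detach()

@torch.no_grad()
def mc_predict(bnn, x, n_samples=20):
    # Average the softmax over weight samples; this is the predictive
    # distribution whose calibration the abstract discusses.
    probs = torch.stack([F.softmax(bnn(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0)
```

$l_\infty$-PGD iterates the same signed step with a smaller step size, projecting back into the $\epsilon$-ball around the clean input after each iteration.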
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS-COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Attacking Bayes: On the Adversarial Robustness of Bayesian Neural Networks [10.317475068017961]
We investigate whether it is possible to successfully break state-of-the-art BNN inference methods and prediction pipelines.
We find that BNNs trained with state-of-the-art approximate inference methods, and even BNNs trained with Hamiltonian Monte Carlo, are highly susceptible to adversarial attacks.
arXiv Detail & Related papers (2024-04-27T01:34:46Z)
- Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-23T16:39:24Z)
- ARBiBench: Benchmarking Adversarial Robustness of Binarized Neural Networks [22.497327185841232]
Network binarization exhibits great potential for deployment on resource-constrained devices due to its low computational cost.
Despite its critical importance, the security of binarized neural networks (BNNs) is rarely investigated.
We present ARBiBench, a comprehensive benchmark to evaluate the robustness of BNNs against adversarial perturbations.
arXiv Detail & Related papers (2023-12-21T04:48:34Z)
- Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples [19.227133993690504]
Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance.
Compared with traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped.
We show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique.
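For readers unfamiliar with the term, a minimal, generic sketch of one common surrogate-gradient construction follows (a hard spike with a rectangular backward window); the paper's own surrogate choices are not given in this summary, so the threshold and window width below are assumptions.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    # Forward: hard Heaviside spike. Backward: a rectangular window
    # stands in for the true gradient (zero almost everywhere), which
    # is exactly the piece a white-box attacker must differentiate through.
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # gradient passes only near threshold

spike = SurrogateSpike.apply  # use as: s = spike(membrane_potential - threshold)
```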
arXiv Detail & Related papers (2022-09-07T17:05:48Z)
- Toward Robust Spiking Neural Network Against Adversarial Perturbation [22.56553160359798]
Spiking neural networks (SNNs) are increasingly deployed in real-world, efficiency-critical applications.
Researchers have already demonstrated that an SNN can be attacked with adversarial examples.
To the best of our knowledge, this is the first analysis of robust training for SNNs.
arXiv Detail & Related papers (2022-04-12T21:26:49Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces training time and the number of parameters, which helps scale BNNs efficiently.
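The summary does not spell out the layer construction; purely as a hedged illustration of the general idea of confining variational inference to a single feature layer on top of a deterministic backbone, one might write (all names and priors here are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    # One Gaussian mean-field layer, sampled with the reparameterization
    # trick; keeping the rest of the network deterministic is one way to
    # cut BNN training cost, in the spirit of a single Bayesian feature layer.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # softplus(rho) = std

    def forward(self, x):
        std = F.softplus(self.rho)
        w = self.mu + std * torch.randn_like(std)  # fresh posterior sample per call
        return F.linear(x, w)
```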
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- "BNN - BN = ?": Training Binary Neural Networks without Batch Normalization [92.23297927690149]
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art binary neural networks (BNNs).
We extend a recent BN-free training framework to BNNs, and for the first time demonstrate that BNs can be completely removed from BNN training and inference regimes.
arXiv Detail & Related papers (2021-04-16T16:46:57Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills knowledge from real-valued networks to binary networks via the final prediction distribution.
Our proposed method boosts the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs, and is even comparable to many mainstream supervised BNN methods.
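As a hedged sketch of what distilling a real-valued teacher's final prediction distribution into a binary student can look like (the temperature `T` and the KL form are assumptions, not necessarily the paper's exact objective):

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=1.0):
    # Match the binary student's predictive distribution to the
    # real-valued teacher's, using only the final outputs.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```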
arXiv Detail & Related papers (2021-02-17T18:59:28Z)