Evaluating the Robustness of Bayesian Neural Networks Against Different
Types of Attacks
- URL: http://arxiv.org/abs/2106.09223v1
- Date: Thu, 17 Jun 2021 03:18:59 GMT
- Title: Evaluating the Robustness of Bayesian Neural Networks Against Different
Types of Attacks
- Authors: Yutian Pang, Sheng Cheng, Jueming Hu, Yongming Liu
- Abstract summary: We show that a Bayesian neural network achieves significantly higher robustness against adversarial attacks generated against a deterministic neural network model.
The Bayesian posterior can act as a safety precursor that signals ongoing malicious activity.
This suggests using stochastic layers when building decision-making pipelines in safety-critical domains.
- Score: 2.599882743586164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To evaluate the robustness gain of Bayesian neural networks on image
classification tasks, we apply input perturbations and adversarial attacks to
state-of-the-art Bayesian neural networks, with a benchmark CNN model as
reference. The attacks are selected to simulate signal interference and
cyberattacks on CNN-based machine learning systems. The results show that a
Bayesian neural network achieves significantly higher robustness against
adversarial attacks generated against a deterministic neural network model,
without adversarial training. The Bayesian posterior can act as a safety
precursor that signals ongoing malicious activity. Furthermore, we show that a
stochastic classifier placed after a deterministic CNN feature extractor
already provides sufficient robustness enhancement, so a stochastic feature
extractor before the classifier is not required. This suggests using
stochastic layers when building decision-making pipelines in safety-critical
domains.
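
The architectural takeaway above lends itself to a compact illustration. Below is a minimal sketch (not the authors' released code) of a deterministic CNN feature extractor followed by a stochastic classifier head, using MC dropout as a simple stand-in for the paper's Bayesian layers and assuming 32x32 RGB inputs; the layer sizes, class names, and the predictive-entropy helper are illustrative assumptions, with the entropy showing one way a posterior-style quantity could serve as the "safety precursor" the abstract mentions.

```python
# Minimal sketch: deterministic CNN extractor + stochastic classifier head.
# MC dropout stands in for the Bayesian layers; sizes assume 32x32 RGB inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeterministicExtractorStochasticHead(nn.Module):
    def __init__(self, num_classes: int = 10, dropout_p: float = 0.5):
        super().__init__()
        # Deterministic convolutional feature extractor.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        # Stochastic classifier: dropout stays active at test time, so each
        # forward pass is one sample from an approximate posterior predictive.
        self.classifier = nn.Sequential(
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
            nn.Dropout(p=dropout_p),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


@torch.no_grad()
def predictive_entropy(model: nn.Module, x: torch.Tensor,
                       n_samples: int = 20) -> torch.Tensor:
    """Average class probabilities over stochastic forward passes and return
    the entropy of the mean prediction; a high value can flag inputs that may
    be perturbed before they reach a downstream decision."""
    model.train()  # keep dropout active for Monte Carlo sampling
    probs = torch.stack(
        [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
    ).mean(0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
```

Keeping only the classifier stochastic mirrors the reported finding that randomizing the head is sufficient, while the entropy of the averaged Monte Carlo predictions can be thresholded to flag suspicious inputs in a safety-critical pipeline.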
Related papers
- Compositional Curvature Bounds for Deep Neural Networks [7.373617024876726]
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.
We study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations.
We introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks.
arXiv Detail & Related papers (2024-06-07T17:50:15Z)
- Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
Like convolutional neural networks, SNNs are vulnerable to adversarial attacks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z)
- Bayesian Convolutional Neural Networks for Limited Data Hyperspectral Remote Sensing Image Classification [14.464344312441582]
We use a special class of deep neural networks, namely Bayesian neural networks, to classify HSRS images.
Bayesian neural networks provide an inherent tool for measuring uncertainty.
We show that a Bayesian network can outperform a similarly-constructed non-Bayesian convolutional neural network (CNN) and an off-the-shelf Random Forest (RF).
arXiv Detail & Related papers (2022-05-19T00:02:16Z)
- Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
arXiv Detail & Related papers (2022-01-23T21:18:17Z)
- Robustness against Adversarial Attacks in Neural Networks using Incremental Dissipativity [3.8673567847548114]
Adversarial examples can easily degrade the classification performance in neural networks.
This work proposes an incremental dissipativity-based robustness certificate for neural networks.
arXiv Detail & Related papers (2021-11-25T04:42:57Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples motivated by the observations that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.