BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks
- URL: http://arxiv.org/abs/2103.08031v1
- Date: Sun, 14 Mar 2021 20:43:19 GMT
- Title: BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks
- Authors: Manoj Rohit Vemparala, Alexander Frickenstein, Nael Fasfous, Lukas
Frickenstein, Qi Zhao, Sabine Kuhn, Daniel Ehrhardt, Yuankai Wu, Christian
Unger, Naveen Shankar Nagaraja, Walter Stechele
- Abstract summary: We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
- Score: 65.2021953284622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deploying convolutional neural networks (CNNs) for embedded applications
presents many challenges in balancing resource-efficiency and task-related
accuracy. These two aspects have been well-researched in the field of CNN
compression. In real-world applications, a third important aspect comes into
play, namely the robustness of the CNN. In this paper, we thoroughly study the
robustness of uncompressed, distilled, pruned and binarized neural networks
against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool,
LocalSearch and GenAttack). These new insights facilitate defensive training
schemes or reactive filtering methods, where the attack is detected and the
input is discarded and/or cleaned. Experimental results are shown for distilled
CNNs, agent-based state-of-the-art pruned models, and binarized neural networks
(BNNs) such as XNOR-Net and ABC-Net, trained on CIFAR-10 and ImageNet datasets.
We present evaluation methods to simplify the comparison between CNNs under
different attack schemes using loss/accuracy levels, stress-strain graphs,
box-plots and class activation mapping (CAM). Our analysis reveals that
uncompressed and pruned CNNs are susceptible to all kinds of attacks. The
distilled models are robust against all white-box attacks, with the exception
of C&W. Furthermore, binarized neural networks exhibit resilient behavior
compared to their baselines and other compressed variants.
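As a concrete illustration of the white-box attacks named above, the sketch below shows a minimal FGSM perturbation in PyTorch. It is a sketch under assumed conditions, not the paper's implementation: the classifier `model`, the input range [0, 1], and the epsilon value are illustrative placeholders.

```python
# Minimal FGSM sketch (assumption: a PyTorch classifier `model` and inputs scaled to [0, 1]).
# The model, epsilon, and data handling are illustrative placeholders, not the paper's setup.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x_adv = x + epsilon * sign(grad_x loss), clipped to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true labels y
    loss.backward()                      # populates x.grad
    x_adv = x + epsilon * x.grad.sign()  # single signed-gradient step that increases the loss
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Iterative white-box attacks such as PGD repeat this signed-gradient step with a smaller step size and project the result back into an epsilon-ball around the original input after each iteration.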
Related papers
- Impact of White-Box Adversarial Attacks on Convolutional Neural Networks [0.6138671548064356]
We investigate the susceptibility of Convolutional Neural Networks (CNNs) to white-box adversarial attacks.
Our study provides insights into the robustness of CNNs against adversarial threats.
arXiv Detail & Related papers (2024-10-02T21:24:08Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks based on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Exploiting Vulnerability of Pooling in Convolutional Neural Networks by
Strict Layer-Output Manipulation for Adversarial Attacks [7.540176446791261]
Convolutional neural networks (CNNs) are increasingly applied in mobile robotics, such as intelligent vehicles.
The security of CNNs in robotics applications is an important issue, for which potential adversarial attacks on CNNs are worth researching.
In this paper, we conduct adversarial attacks on CNNs from the perspective of network structure by investigating and exploiting the vulnerability of pooling.
arXiv Detail & Related papers (2020-12-21T15:18:41Z) - Color Channel Perturbation Attacks for Fooling Convolutional Neural
Networks and A Defense Against Such Attacks [16.431689066281265]
Convolutional Neural Networks (CNNs) have emerged as a powerful data-dependent hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
arXiv Detail & Related papers (2020-12-20T11:35:29Z) - Extreme Value Preserving Networks [65.2037926048262]
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures.
This paper aims to leverage good properties of SIFT to renovate CNN architectures towards better accuracy and robustness.
arXiv Detail & Related papers (2020-11-17T02:06:52Z) - The shape and simplicity biases of adversarially robust ImageNet-trained
CNNs [9.707679445925516]
We study the shape bias and internal mechanisms that enable the generalizability of AlexNet, GoogLeNet, and ResNet-50 models trained via adversarial training.
Remarkably, adversarial training induces three simplicity biases into hidden neurons in the process of "robustifying" CNNs.
arXiv Detail & Related papers (2020-06-16T16:38:16Z) - Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder [11.701729403940798]
We propose an attack-agnostic defence framework to enhance the intrinsic robustness of neural networks.
Our framework applies to all block-based convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-05-06T01:40:26Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all information on this page) and is not responsible for any consequences of its use.