SoWaF: Shuffling of Weights and Feature Maps: A Novel Hardware Intrinsic
Attack (HIA) on Convolutional Neural Network (CNN)
- URL: http://arxiv.org/abs/2103.09327v1
- Date: Tue, 16 Mar 2021 21:12:07 GMT
- Title: SoWaF: Shuffling of Weights and Feature Maps: A Novel Hardware Intrinsic
Attack (HIA) on Convolutional Neural Network (CNN)
- Authors: Tolulope A. Odetola and Syed Rafay Hasan
- Abstract summary: Securing the inference-phase deployment of convolutional neural networks (CNNs) on resource-constrained embedded systems is a growing research area.
Third-party FPGA designers can be provided with no knowledge of the initial and final classification layers.
We demonstrate that a hardware intrinsic attack (HIA) on such a "secure" design is still possible.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Securing the inference-phase deployment of convolutional neural networks
(CNNs) on resource-constrained embedded systems (e.g., low-end FPGAs) is a
growing research area. Using secure practices, third-party FPGA designers can
be provided with no knowledge of the initial and final classification layers.
In this work, we demonstrate that a hardware intrinsic attack (HIA) on such a
"secure" design is still possible. The proposed HIA is inserted inside the
mathematical operations of individual CNN layers, which propagates erroneous
operations into all subsequent CNN layers and leads to misclassification. The
attack is non-periodic and completely random, and is therefore difficult to
detect. Five different attack scenarios with respect to each CNN layer are
designed and evaluated based on the resource overhead and the rate of
triggering in comparison to the original implementation. Our results for two
CNN architectures show that in all the attack scenarios the additional latency
is negligible (<0.61%), and the increase in DSPs, LUTs, and FFs is less than
2.36%. Three attack scenarios do not require any additional BRAM resources,
while in two scenarios the BRAM usage increases, which is compensated by a
corresponding decrease in FFs and LUTs. To the authors' best knowledge, this
work is the first to address a hardware intrinsic attack on a CNN in which the
attacker does not have knowledge of the full CNN.
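As a conceptual illustration of what the abstract describes (erroneous operations
injected inside a layer's arithmetic under a random, non-periodic trigger), the
following is a minimal NumPy sketch. It is not the authors' FPGA/RTL design; the
single-channel convolution, the function names, and the trigger probability are
illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng()

    def conv2d(x, w):
        # Plain "valid" 2-D convolution (single channel): the clean layer.
        kh, kw = w.shape
        oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
        return out

    def conv2d_with_hia(x, w, trigger_prob=0.01):
        # Same layer with the illustrative HIA: with a small, random
        # (non-periodic) probability the kernel weights are shuffled before
        # the multiply-accumulate, so the corrupted feature map is passed on
        # unchanged to every subsequent layer and can flip the final label.
        if rng.random() < trigger_prob:
            w = rng.permutation(w.ravel()).reshape(w.shape)
        return conv2d(x, w)

    # Toy usage: force the trigger to observe the corrupted output.
    x = rng.standard_normal((8, 8))
    w = rng.standard_normal((3, 3))
    print(np.allclose(conv2d(x, w), conv2d_with_hia(x, w, trigger_prob=1.0)))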
Related papers
- Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization [57.87950229651958]
Quantized neural networks (QNNs) have received increasing attention in resource-constrained scenarios due to their exceptional generalizability.
Previous studies claim that transferability is difficult to achieve across QNNs with different bitwidths.
We propose quantization aware attack (QAA), which fine-tunes a QNN substitute model with a multiple-bitwidth training objective.
arXiv Detail & Related papers (2023-05-10T03:46:53Z) - Improved techniques for deterministic l2 robustness [63.34032156196848]
Training convolutional neural networks (CNNs) with a strict 1-Lipschitz constraint under the $l_2$ norm is useful for adversarial robustness, interpretable gradients and stable training.
We introduce a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden layer.
We significantly advance the state-of-the-art for standard and provable robust accuracies on CIFAR-10 and CIFAR-100.
arXiv Detail & Related papers (2022-11-15T19:10:12Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - FeSHI: Feature Map Based Stealthy Hardware Intrinsic Attack [0.5872014229110214]
Convolutional Neural Networks (CNN) have shown impressive performance in computer vision, natural language processing, and many other applications.
The use of cloud computing for CNNs is becoming more popular.
This comes with privacy and latency concerns that have motivated the designers to develop embedded hardware accelerators for CNNs.
arXiv Detail & Related papers (2021-06-13T01:50:34Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit
Neural Network Inference [6.320009081099895]
A slowdown attack reduces the efficacy of multi-exit DNNs by 90-100%, and it amplifies the latency by 1.5-5x in a typical IoT deployment.
We show that it is possible to craft universal, reusable perturbations and that the attack can be effective in realistic black-box scenarios.
arXiv Detail & Related papers (2020-10-06T02:06:52Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable against perturbations due to lack of robustness.
Currently, most works on attacking GNNs are mainly using gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason is that they have to use the whole graph for attacks, resulting in the increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z) - How Secure is Distributed Convolutional Neural Network on IoT Edge
Devices? [0.0]
We propose Trojan attacks on CNN deployed across a distributed edge network across different nodes.
These attacks are tested on deep learning models (LeNet, AlexNet).
arXiv Detail & Related papers (2020-06-16T16:10:09Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.