How Secure is Distributed Convolutional Neural Network on IoT Edge
Devices?
- URL: http://arxiv.org/abs/2006.09276v1
- Date: Tue, 16 Jun 2020 16:10:09 GMT
- Title: How Secure is Distributed Convolutional Neural Network on IoT Edge
Devices?
- Authors: Hawzhin Mohammed, Tolulope A. Odetola, Syed Rafay Hasan
- Abstract summary: We propose Trojan attacks on CNNs deployed across different nodes of a distributed edge network.
These attacks are tested on deep learning models (LeNet, AlexNet).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) have found successful adoption in many
applications, but their deployment on resource-constrained edge devices has proved
challenging, so distributed deployment of a CNN across different edge devices has been
adopted. In this paper, we propose Trojan attacks on CNNs deployed across different
nodes of a distributed edge network. We propose five stealthy attack scenarios for
distributed CNN inference; each attack is divided into trigger and payload circuitry.
These attacks are tested on deep learning models (LeNet, AlexNet). The results show
the degree of vulnerability of individual layers and how critical they are to the
final classification.
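The abstract gives no implementation details, so the following is only a rough Python (PyTorch) sketch of the general idea: a LeNet-like model split across two edge nodes, where the node holding the later layers embeds a Trojan made of trigger circuitry and payload circuitry. The names NodeA and TrojanedNodeB, the sum-based trigger condition, the threshold of 50.0, and the target class 3 are all hypothetical illustrations, not the paper's attack scenarios.

```python
# Illustrative sketch only: a LeNet-like CNN partitioned across two edge
# nodes, where the second node hosts a hypothetical Trojan consisting of
# "trigger" circuitry (a condition on the incoming feature map) and
# "payload" circuitry (a perturbation of that node's output).
import torch
import torch.nn as nn

class NodeA(nn.Module):
    """First partition: early convolutional layers run on edge node A."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        return self.features(x)

class TrojanedNodeB(nn.Module):
    """Second partition: later layers plus an illustrative trigger/payload."""
    def __init__(self, trigger_threshold=50.0, target_class=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 4 * 4, 10))
        self.trigger_threshold = trigger_threshold  # hypothetical trigger value
        self.target_class = target_class            # attacker-chosen class

    def forward(self, fmap):
        logits = self.features(fmap)
        # Trigger circuitry: fires only for feature maps with an unusually
        # large total activation (e.g. planted by a marked input), so clean
        # inputs pass through untouched and the attack stays stealthy.
        if fmap.sum() > self.trigger_threshold:
            # Payload circuitry: bias the logits toward the target class.
            logits[:, self.target_class] += 10.0
        return logits

# Distributed inference: node A's feature map is shipped to node B.
x = torch.randn(1, 1, 28, 28)                    # MNIST-sized input
fmap = NodeA()(x)
prediction = TrojanedNodeB()(fmap).argmax(dim=1)
```

In a real deployment the two partitions would run on separate devices and exchange the intermediate feature map over the network; here they are called in sequence only to keep the sketch self-contained.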
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z) - MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via
Automating Deep Neural Network Porting for Mobile Deployment [54.77943671991863]
MatchNAS is a novel scheme for porting Deep Neural Networks to mobile devices.
We optimise a large network family using both labelled and unlabelled data.
We then automatically search for tailored networks for different hardware platforms.
arXiv Detail & Related papers (2024-02-21T04:43:12Z) - DietCNN: Multiplication-free Inference for Quantized CNNs [9.295702629926025]
This paper proposes a new method for replacing multiplications in a CNN by table look-ups.
It is shown that the proposed multiplication-free CNN, based on a single activation codebook, can achieve 4.7x, 5.6x, and 3.5x reduction in energy per inference (a simplified look-up sketch appears after this related-papers list).
arXiv Detail & Related papers (2023-05-09T08:54:54Z) - AutoDiCE: Fully Automated Distributed CNN Inference at the Edge [0.9883261192383613]
We propose a novel framework, called AutoDiCE, for automated splitting of a CNN model into a set of sub-models.
Our experimental results show that AutoDiCE can deliver distributed CNN inference with reduced energy consumption and memory usage per edge device.
arXiv Detail & Related papers (2022-07-20T15:08:52Z) - Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free [126.15842954405929]
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a trigger.
We propose a novel Trojan network detection regime: first locating a "winning Trojan lottery ticket" which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated subnetwork.
arXiv Detail & Related papers (2022-05-24T06:33:31Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - SoWaF: Shuffling of Weights and Feature Maps: A Novel Hardware Intrinsic
Attack (HIA) on Convolutional Neural Network (CNN) [0.0]
Security of the inference-phase deployment of Convolutional Neural Networks (CNNs) on resource-constrained embedded systems is a growing research area.
Third-party FPGA designers can be given no knowledge of the initial and final classification layers.
We demonstrate that a hardware intrinsic attack (HIA) on such a "secure" design is still possible.
arXiv Detail & Related papers (2021-03-16T21:12:07Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Color Channel Perturbation Attacks for Fooling Convolutional Neural
Networks and A Defense Against Such Attacks [16.431689066281265]
Convolutional Neural Networks (CNNs) have emerged as a powerful data-dependent hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
arXiv Detail & Related papers (2020-12-20T11:35:29Z) - Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)