Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush
Deep Neural Network in Multi-Tenant FPGA
- URL: http://arxiv.org/abs/2011.03006v2
- Date: Fri, 8 Oct 2021 19:18:27 GMT
- Title: Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush
Deep Neural Network in Multi-Tenant FPGA
- Authors: Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu and Deliang Fan
- Abstract summary: We propose a novel adversarial attack framework, Deep-Dup, in which the adversarial tenant can inject adversarial faults into the DNN model of the victim tenant on the FPGA.
The proposed Deep-Dup is experimentally validated in a developed multi-tenant FPGA prototype, for two popular deep learning applications.
- Score: 28.753009943116524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The wide deployment of Deep Neural Networks (DNNs) in high-performance cloud
computing platforms has brought to light multi-tenant cloud field-programmable gate
arrays (FPGAs) as a popular accelerator choice for boosting performance, owing to
their hardware reprogramming flexibility. Such a multi-tenant FPGA setup for DNN
acceleration potentially exposes DNN inference tasks to severe threats from
malicious users. This work is, to the best of our knowledge, the first to
explore DNN model vulnerabilities in multi-tenant FPGAs. We propose a novel
adversarial attack framework, Deep-Dup, in which the adversarial tenant can
inject adversarial faults into the DNN model of the victim tenant on the FPGA.
Specifically, she can aggressively overload the shared power distribution
system of the FPGA with malicious power-plundering circuits, achieving an adversarial
weight duplication (AWD) hardware attack that duplicates certain DNN weight
packages during data transmission between off-chip memory and the on-chip buffer,
thereby hijacking the DNN function of the victim tenant. Further, to identify the most
vulnerable DNN weight packages for a given malicious objective, we propose a
generic vulnerable weight package search algorithm, called Progressive
Differential Evolution Search (P-DES), which is, for the first time, adaptive
to both deep learning white-box and black-box attack models. The proposed
Deep-Dup is experimentally validated in a developed multi-tenant FPGA
prototype, for two popular deep learning applications, i.e., Object Detection
and Image Classification. Successful attacks are demonstrated on six popular
DNN architectures (e.g., YOLOv2, ResNet-50, and MobileNet). Toy sketches of the AWD fault model and the P-DES search follow below.
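The AWD fault model can be emulated in software before any hardware experiment. Below is a minimal sketch, assuming the weights are a NumPy array split into fixed-size "weight packages": an AWD fault at package `idx` overwrites it with a copy of the preceding package, as if one DMA burst were transmitted twice. The package size and indexing scheme are illustrative assumptions, not the paper's exact memory layout.

```python
import numpy as np

def awd_duplicate(weights: np.ndarray, pkg_size: int, idx: int) -> np.ndarray:
    """Emulate an AWD fault: weight package `idx` is overwritten by a
    duplicate of package `idx - 1`, mimicking a repeated DMA burst during
    the off-chip-to-on-chip transfer. Layout is an illustrative assumption."""
    flat = weights.flatten()  # flatten() copies, so `weights` stays intact
    start, prev = idx * pkg_size, (idx - 1) * pkg_size
    if prev < 0 or start + pkg_size > flat.size:
        raise IndexError("package index out of range")
    flat[start:start + pkg_size] = flat[prev:prev + pkg_size]
    return flat.reshape(weights.shape)

# Example: corrupt package 3 of a random conv-layer weight tensor.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
w_faulty = awd_duplicate(w, pkg_size=256, idx=3)
```

Injecting such duplications into a framework-level copy of the victim model lets an attacker estimate the accuracy impact of each package without touching hardware.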
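P-DES is described as a progressive differential evolution search over weight packages. A toy reimplementation of the differential-evolution core, under assumptions (integer-coded candidates, a black-box fitness that only observes model outputs, generic DE hyper-parameters rather than the paper's exact operators), might look like:

```python
import random

def de_search(fitness, n_pkgs, pop_size=16, iters=50, f=0.5, cr=0.7):
    """Toy differential evolution over weight-package indices.
    `fitness(idx)` scores the attack objective after faulting package
    `idx`; only model outputs are needed, so it fits a black-box setting."""
    pop = [random.randrange(n_pkgs) for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample(pop, 3)
            trial = int(a + f * (b - c)) % n_pkgs  # DE mutation on indices
            if random.random() >= cr:              # DE crossover: keep target
                trial = pop[i]
            if fitness(trial) > fitness(pop[i]):   # greedy selection
                pop[i] = trial
    return max(pop, key=fitness)
```

The "progressive" part would wrap this search in an outer loop that fixes each discovered package and searches for the next, growing the attacked set one index at a time.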
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
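Link stealing attacks commonly exploit the observation that a GNN's posteriors for two connected nodes tend to be unusually similar. A minimal sketch of that probe, assuming query access to node-level posteriors (the cosine metric and threshold are illustrative choices):

```python
import numpy as np

def steal_link(post_u: np.ndarray, post_v: np.ndarray, thresh: float = 0.9) -> bool:
    """Guess that edge (u, v) exists if the GNN posteriors of u and v are
    highly similar; the similarity measure and threshold are illustrative."""
    cos = post_u @ post_v / (np.linalg.norm(post_u) * np.linalg.norm(post_v))
    return cos > thresh
```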
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
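A 1-hop-view agent, in the spirit of GAgN, aggregates only its own state and its direct neighbors' states, so no message can cross the graph in one step. A minimal sketch under that assumption (plain mean aggregation stands in for the paper's learned inference):

```python
import numpy as np

def one_hop_update(features: dict, adj: dict, node) -> np.ndarray:
    """Each agent sees only itself and its 1-hop neighbors; messages from
    farther nodes cannot reach it in a single update."""
    msgs = [features[n] for n in adj.get(node, [])] + [features[node]]
    return np.mean(msgs, axis=0)  # illustrative mean aggregation
```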
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks [0.5849513679510833]
A disadvantage of Deep Neural Networks (DNNs) is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs.
This paper reports the results of devising a tiny DNN model, robust to both black-box and white-box adversarial attacks, trained with an automatic quantization-aware training framework.
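Quantization-aware training typically inserts "fake quantization" into the forward pass while keeping full-precision weights for the gradient step; the coarse weight grid is one intuition for why such models can resist small perturbations. A minimal sketch (bit-width and rounding are illustrative, not the paper's framework):

```python
import numpy as np

def fake_quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    """Symmetric uniform fake-quantization: round weights to an integer
    grid and rescale. A real QAT loop would backpropagate through this
    with a straight-through estimator."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    if scale == 0:
        return w
    return np.clip(np.round(w / scale), -qmax, qmax) * scale
```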
arXiv Detail & Related papers (2023-04-25T13:56:35Z)
- General Adversarial Defense Against Black-box Attacks via Pixel Level and Feature Level Distribution Alignments [75.58342268895564]
We use Deep Generative Networks (DGNs) with a novel training mechanism to eliminate the distribution gap.
The trained DGNs align the distribution of adversarial samples with clean ones for the target DNNs by translating pixel values.
Our strategy demonstrates its unique effectiveness and generality against black-box attacks.
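Operationally, the defense amounts to passing every (possibly adversarial) input through a generative network before the target classifier sees it. A minimal sketch of that wiring, where `purifier` and `classifier` are placeholder callables for the trained DGN and the protected DNN:

```python
def defended_predict(x, purifier, classifier):
    """Distribution-alignment defense: translate the input back toward the
    clean data distribution, then classify. Both callables are placeholders."""
    x_aligned = purifier(x)       # DGN translates pixel values
    return classifier(x_aligned)  # target DNN only sees aligned inputs
```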
arXiv Detail & Related papers (2022-12-11T01:51:31Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization problem, establishing a trade-off between the latency of co-inference and the privacy level of the data.
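One way to read that optimization is as a scalarized objective over candidate distribution strategies: weigh co-inference latency against privacy leakage and pick the minimizer. A toy sketch under that assumption (the weighting and both cost functions are illustrative placeholders):

```python
def best_strategy(strategies, latency, leakage, alpha=0.5):
    """Pick the distribution strategy minimizing a weighted sum of
    co-inference latency and privacy leakage; `latency` and `leakage`
    are placeholder cost functions."""
    return min(strategies,
               key=lambda s: alpha * latency(s) + (1 - alpha) * leakage(s))
```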
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- FitAct: Error Resilient Deep Neural Networks via Fine-Grained Post-Trainable Activation Functions [0.05249805590164901]
Deep neural networks (DNNs) are increasingly being deployed in safety-critical systems such as personal healthcare devices and self-driving cars.
In this paper, we propose FitAct, a low-cost approach to enhance the error resilience of DNNs by deploying fine-grained post-trainable activation functions.
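The intuition behind fine-grained post-trainable activations is to fit a per-neuron clipping bound after training, so a fault-induced, abnormally large activation saturates instead of propagating. A minimal sketch (the bound `T` would be learned post-training; here it is just a parameter):

```python
import numpy as np

def bounded_act(x: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Bounded ReLU with a per-neuron clipping threshold T: faulty,
    abnormally large activations saturate at T rather than corrupting
    downstream layers."""
    return np.minimum(np.maximum(x, 0.0), T)
```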
arXiv Detail & Related papers (2021-12-27T07:07:50Z)
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids [7.351477761427584]
False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
arXiv Detail & Related papers (2021-02-17T22:26:34Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation of the neural network (NN) input, the white-box attacks can produce infeasible solutions up to 86% of the time.
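White-box attacks of this kind are typically gradient-based: nudge the NN input in the direction that most degrades the power-allocation output. A generic one-step sketch under that assumption (`grad_fn` is a placeholder returning the loss gradient of the attacked model):

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, grad_fn, eps: float = 0.01) -> np.ndarray:
    """One-step white-box perturbation: move each input feature by eps in
    the sign of the loss gradient. `grad_fn(x)` is a placeholder for
    d(loss)/dx of the attacked power-allocation NN."""
    return x + eps * np.sign(grad_fn(x))
```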
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by serial numbers in safeguarding conventional software IP, we propose the first implementation of serial number embedding within DNNs.
arXiv Detail & Related papers (2020-11-17T21:42:40Z)
- ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet reduces the hardware-quantified energy cost of DNN training and inference by over 80%, while offering comparable or better accuracy.
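The arithmetic substitution behind multiplication-less networks is to constrain each weight to a signed power of two, so a multiply becomes a bit shift followed by additions. A toy sketch of that idea on integer inputs (ShiftAddNet itself trains dedicated shift and add layers; this only illustrates the substitution):

```python
import numpy as np

def shift_layer(x: np.ndarray, shifts: np.ndarray, signs: np.ndarray) -> np.ndarray:
    """Replace y = W @ x with shift-and-add: each 'weight' is sign * 2^shift,
    so every multiply becomes a cheap left shift. Integer inputs assumed."""
    contrib = signs * np.left_shift(x[None, :], shifts)  # (out, in) terms
    return contrib.sum(axis=1)

x = np.array([3, 5, 7], dtype=np.int64)
shifts = np.array([[0, 1, 2], [2, 0, 1]])  # power-of-two exponents
signs = np.array([[1, -1, 1], [1, 1, -1]])
y = shift_layer(x, shifts, signs)          # [3 - 10 + 28, 12 + 5 - 14]
```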
arXiv Detail & Related papers (2020-10-24T05:09:14Z)
- DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips [29.34622626909906]
We demonstrate the first hardware-based attack on quantized deep neural networks (DNNs).
DeepHammer is able to successfully tamper with DNN inference behavior at run-time within a few minutes.
Our work highlights the need to incorporate security mechanisms in future deep learning systems.
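A DeepHammer-style fault can be modeled in software as flipping a single bit of a quantized (e.g., int8) weight; in hardware, a rowhammer exploit produces the flip in DRAM. A minimal sketch under those assumptions:

```python
import numpy as np

def flip_bit(w_q: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Flip bit `bit` (0 = LSB, 7 = sign bit for int8) of the quantized
    weight at flat position `index`, modeling a rowhammer-induced flip."""
    w = w_q.copy()
    flat = w.reshape(-1).view(np.uint8)  # reinterpret bytes to XOR safely
    flat[index] ^= np.uint8(1 << bit)
    return w

w_q = np.array([12, -7, 33], dtype=np.int8)
print(flip_bit(w_q, index=0, bit=7))  # sign-bit flip: 12 -> -116
```

Sweeping `index` and `bit` over a model's quantized weights and measuring the accuracy drop is one way to emulate the search for damaging flips in software.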
arXiv Detail & Related papers (2020-03-30T18:51:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.