Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
- URL: http://arxiv.org/abs/2012.07242v1
- Date: Mon, 14 Dec 2020 03:59:08 GMT
- Title: Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
- Authors: Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz
- Abstract summary: We evaluate the security of FPGA-based deep learning accelerators against voltage-based integrity attacks.
We show that aggressive clock gating, an effective power-saving technique, can also be a potential security threat in modern FPGAs.
We achieve 1.18-1.31x higher inference performance by over-clocking the DL accelerator without affecting its prediction accuracy.
- Score: 13.531406531429335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Field-programmable gate arrays (FPGAs) are becoming widely used accelerators
for a myriad of datacenter applications due to their flexibility and energy
efficiency. Among these applications, FPGAs have shown promising results in
accelerating low-latency real-time deep learning (DL) inference, which is
becoming an indispensable component of many end-user applications. With the
emerging research direction towards virtualized cloud FPGAs that can be shared
by multiple users, the security aspect of FPGA-based DL accelerators requires
careful consideration. In this work, we evaluate the security of DL
accelerators against voltage-based integrity attacks in a multi-tenant FPGA
scenario. We first demonstrate the feasibility of such attacks on a
state-of-the-art Stratix 10 card using different attacker circuits that are
logically and physically isolated in a separate attacker role, and cannot be
flagged as malicious circuits by conventional bitstream checkers. We show that
aggressive clock gating, an effective power-saving technique, can also be a
potential security threat in modern FPGAs. Then, we carry out the attack on a
DL accelerator running ImageNet classification in the victim role to evaluate
the inherent resilience of DL models against timing faults induced by the
adversary. We find that even when using the strongest attacker circuit, the
prediction accuracy of the DL accelerator is not compromised when running at
its safe operating frequency. Furthermore, we can achieve 1.18-1.31x higher
inference performance by over-clocking the DL accelerator without affecting its
prediction accuracy.
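
The two results above lend themselves to a small software illustration. The sketch below is not the paper's Stratix 10 setup; it is a minimal approximation in which timing-induced errors are modeled as random bit flips in an 8-bit quantized toy classifier, and the over-clocking gain is simply the frequency ratio applied to a fixed cycle count. All sizes, fault rates, and the 300 MHz base frequency are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DL accelerator's weight memory: an 8-bit quantized
# linear classifier (sizes are arbitrary illustration values).
n_features, n_classes, n_samples = 64, 10, 1000
W = rng.normal(size=(n_features, n_classes)).astype(np.float32)
scale = np.abs(W).max() / 127.0
W_q = np.clip(np.round(W / scale), -128, 127).astype(np.int8)
X = rng.normal(size=(n_samples, n_features)).astype(np.float32)

def predict(w_q):
    """Top-1 class from the quantized weights."""
    return (X @ (w_q.astype(np.float32) * scale)).argmax(axis=1)

baseline = predict(W_q)

def inject_bit_flips(w_q, fault_rate):
    """Flip each bit of each int8 weight independently with probability
    fault_rate -- a crude software model of timing-induced errors."""
    bits = w_q.view(np.uint8).copy()
    mask = rng.random(bits.shape + (8,)) < fault_rate
    flip = (mask * (1 << np.arange(8))).sum(axis=-1).astype(np.uint8)
    return (bits ^ flip).view(np.int8)

for fault_rate in (1e-5, 1e-4, 1e-3, 1e-2):
    faulty = predict(inject_bit_flips(W_q, fault_rate))
    agree = (faulty == baseline).mean()
    print(f"fault rate {fault_rate:.0e}: top-1 unchanged for {agree:.1%} of inputs")

# The reported 1.18-1.31x speedup comes from raising the clock: at a fixed
# cycle count per inference, throughput scales with frequency.
f_safe = 300e6  # assumed safe operating frequency, purely illustrative
for ratio in (1.18, 1.31):
    print(f"over-clock to {f_safe * ratio / 1e6:.0f} MHz -> {ratio:.2f}x throughput")
```

In the real attack the faults come from voltage droop causing timing violations on specific paths, not uniformly random bit flips, so this sketch conveys only the shape of the evaluation, not its numbers.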
Related papers
- Hacking the Fabric: Targeting Partial Reconfiguration for Fault Injection in FPGA Fabrics [2.511032692122208]
We present a novel fault attack methodology capable of causing persistent fault injections in partial bitstreams during the process of FPGA reconfiguration.
This attack leverages power-wasters and is timed to inject faults into bitstreams as they are being loaded onto the FPGA through the reconfiguration manager.
arXiv Detail & Related papers (2024-10-21T20:40:02Z)
- The Impact of Run-Time Variability on Side-Channel Attacks Targeting FPGAs [5.795035584525081]
This work proposes a fine-grained dynamic voltage and frequency scaling actuator to investigate the effectiveness of desynchronization countermeasures.
The goal is to highlight the link between the enforced run-time variability and the vulnerability to side-channel attacks of cryptographic implementations targeting FPGAs.
arXiv Detail & Related papers (2024-09-03T13:22:38Z)
- ElasticAI: Creating and Deploying Energy-Efficient Deep Learning Accelerator for Pervasive Computing [19.835810073852244]
Deep Learning (DL) on embedded devices is a hot trend in pervasive computing.
FPGAs are suitable for deploying DL accelerators for embedded devices, but developing an energy-efficient DL accelerator on an FPGA is not easy.
We propose the ElasticAI-Workflow, which aims to help DL developers create and deploy DL models as hardware accelerators on embedded FPGAs.
arXiv Detail & Related papers (2024-08-29T12:39:44Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
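
FaultGuard's "online adversarial training" ingredient can be illustrated generically. The sketch below is not FaultGuard's generative model; it trains a plain softmax-regression fault-type classifier on a mix of clean samples and FGSM-perturbed copies, with the data, class count, epsilon budget, and learning rate all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a fault-type classifier: softmax regression on synthetic
# "grid measurement" vectors; the four classes stand in for fault types.
n_feat, n_cls, n = 16, 4, 512
X = rng.normal(size=(n, n_feat))
y = (X @ rng.normal(size=(n_feat, n_cls))).argmax(1)   # synthetic labels
Y = np.eye(n_cls)[y]                                   # one-hot targets
W = np.zeros((n_feat, n_cls))
eps, lr = 0.1, 0.5                                     # illustrative attack budget / learning rate

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(100):
    P = softmax(X @ W)
    # FGSM-style perturbation using the analytic input gradient of
    # cross-entropy for this linear model: dL/dx = (p - y) W^T.
    X_adv = X + eps * np.sign((P - Y) @ W.T)
    # "Online" adversarial training: one update on the clean + adversarial batch.
    X_mix, Y_mix = np.vstack([X, X_adv]), np.vstack([Y, Y])
    P_mix = softmax(X_mix @ W)
    W -= lr * X_mix.T @ (P_mix - Y_mix) / len(X_mix)

acc_clean = (softmax(X @ W).argmax(1) == y).mean()
acc_adv = (softmax(X_adv @ W).argmax(1) == y).mean()
print(f"clean accuracy {acc_clean:.2f}, accuracy on the last adversarial batch {acc_adv:.2f}")
```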
- MaliGNNoma: GNN-Based Malicious Circuit Classifier for Secure Cloud FPGAs [1.6273816588362844]
MaliGNNoma is a machine learning-based solution that accurately identifies malicious FPGA configurations.
It can be employed by cloud service providers as an initial security layer within a necessary multi-tiered security system.
MaliGNNoma achieves a classification accuracy and precision of 98.24% and 97.88%, respectively, surpassing state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-04T09:16:12Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
Beyond being an effective attack, G-PGA can serve as a diagnostic tool to reveal elusive robustness caused by gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
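
G-PGA itself is not reproduced here, but the projected gradient descent (PGD) attack it refines is easy to sketch. The snippet below runs standard L-infinity PGD against a toy softmax-regression model with an analytic input gradient; the paper's guided surrogate mechanism is not implemented, and the dimensions, epsilon, step size, and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed toy classifier to attack (softmax regression with known weights).
n_feat, n_cls = 32, 5
W = rng.normal(size=(n_feat, n_cls))
x = rng.normal(size=n_feat)
y = int((x @ W).argmax())                 # original prediction, treated as the label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_grad(x_cur, label):
    """Gradient of cross-entropy w.r.t. the input for this linear model."""
    p = softmax(x_cur @ W)
    p[label] -= 1.0
    return W @ p

# Standard L-infinity PGD: iterate signed gradient ascent on the loss,
# projecting back into the eps-ball around the original input each step.
eps, alpha, steps = 0.25, 0.05, 40        # illustrative budget / step size / iterations
x_adv = x + rng.uniform(-eps, eps, size=n_feat)   # random start, as in standard PGD
for _ in range(steps):
    x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y))
    x_adv = x + np.clip(x_adv - x, -eps, eps)     # project onto the eps-ball

print("original class:", y, "-> class after PGD:", int((x_adv @ W).argmax()))
```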
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defense and show that it significantly improves the robustness of DL-based wireless systems against such attacks.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators [105.60654479548356]
We show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) significantly improves robustness against random and adversarial bit errors in quantized DNN weights.
This enables large energy savings from low-voltage operation and low-precision quantization, while also improving the security of DNN accelerators.
arXiv Detail & Related papers (2021-04-16T19:11:14Z)
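
The RandBET recipe summarized above (fixed-point quantization, weight clipping, random bit errors during training) can be imitated on a toy model. The sketch below is an illustrative recreation of that recipe with a softmax-regression "network", not the authors' code; the bit error rates, clipping range, and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy recreation of the RandBET recipe: clip weights, quantize to 8 bits,
# flip random weight bits in the forward pass, train straight through.
n_feat, n_cls, n = 32, 4, 1024
X = rng.normal(size=(n, n_feat))
y = (X @ rng.normal(size=(n_feat, n_cls))).argmax(1)   # synthetic labels
Y = np.eye(n_cls)[y]
W = np.zeros((n_feat, n_cls))
w_clip, p_flip, lr = 1.0, 0.005, 0.5   # illustrative clipping range / training bit error rate / lr

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def quantize_with_bit_errors(w, p):
    """int8-quantize the clipped weights and flip each bit with probability p."""
    scale = w_clip / 127.0
    q = np.clip(np.round(np.clip(w, -w_clip, w_clip) / scale), -128, 127).astype(np.int8)
    bits = q.view(np.uint8)
    mask = rng.random(bits.shape + (8,)) < p
    flips = (mask * (1 << np.arange(8))).sum(-1).astype(np.uint8)
    return (bits ^ flips).view(np.int8).astype(np.float64) * scale

for step in range(200):
    W_faulty = quantize_with_bit_errors(W, p_flip)   # RandBET-style noisy forward pass
    P = softmax(X @ W_faulty)
    W -= lr * X.T @ (P - Y) / n                      # straight-through update of the real weights

# Evaluate under a higher bit error rate than seen during training.
acc = (softmax(X @ quantize_with_bit_errors(W, 0.02)).argmax(1) == y).mean()
print(f"accuracy with 2% random bit errors in the weights: {acc:.2f}")
```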
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation of the neural network (NN) input, white-box attacks can result in infeasible solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
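
The "infeasible solutions" failure mode is easy to make concrete: a regressor that outputs per-user transmit powers must respect a total power budget, and a small adversarial push on its input can drive the predicted total over that budget. The sketch below uses a toy linear "power allocation" model and an FGSM-style perturbation; it is not the paper's maMIMO setup, and the budget rule, dimensions, and epsilon are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "power allocation" regressor: maps channel features to per-user powers.
n_feat, n_users, n = 64, 8, 2000
W = np.abs(rng.normal(scale=0.05, size=(n_feat, n_users)))  # fixed stand-in for a trained NN
X = np.abs(rng.normal(size=(n, n_feat)))                    # channel-quality features
eps = 0.2                                                   # illustrative L-inf perturbation budget

def total_power(x):
    """Sum of predicted per-user transmit powers for each sample."""
    return np.clip(x @ W, 0.0, None).sum(axis=1)

# Budget chosen so that ~5% of clean allocations already sit at the limit.
P_MAX = np.percentile(total_power(X), 95)

# FGSM-style push: move each input in the direction that increases the total
# predicted power (for this linear model, the gradient of the sum of outputs).
grad = W.sum(axis=1)
X_adv = X + eps * np.sign(grad)

infeasible_clean = (total_power(X) > P_MAX).mean()
infeasible_adv = (total_power(X_adv) > P_MAX).mean()
print(f"allocations over budget: {infeasible_clean:.1%} clean -> {infeasible_adv:.1%} attacked")
```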
- Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA [28.753009943116524]
We propose a novel adversarial attack framework, Deep-Dup, in which an adversarial tenant injects faults into the DNN model of the victim tenant on the same FPGA.
Deep-Dup is experimentally validated on a multi-tenant FPGA prototype for two popular deep learning applications.
arXiv Detail & Related papers (2020-11-05T17:59:14Z)
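
Deep-Dup's effect on the victim (selected weights of the deployed model being overwritten by duplicated weight words) can be loosely approximated in software by corrupting chosen weights of a trained toy classifier and tracking the accuracy. The sketch below is only a software analogue of that fault model, not the paper's actual hardware fault injection or its search for which weights to target; the victim model, data, and number of corrupted weights are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Train a toy victim classifier (softmax regression) so there is accuracy to lose.
n_feat, n_cls, n = 32, 4, 2000
X = rng.normal(size=(n, n_feat))
y = (X @ rng.normal(size=(n_feat, n_cls))).argmax(1)
Y = np.eye(n_cls)[y]
W = np.zeros((n_feat, n_cls))
for _ in range(300):
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.5 * X.T @ (P - Y) / n

def accuracy(w):
    return ((X @ w).argmax(1) == y).mean()

def duplicate_weights(w, k):
    """Loose software analogue of weight duplication: each of the k
    largest-magnitude weights is overwritten by a duplicate of its
    neighbouring weight word."""
    flat = w.flatten()                          # flatten() copies, so W stays intact
    targets = np.argsort(-np.abs(flat))[:k]     # "most important" victim weights
    sources = (targets + 1) % flat.size         # neighbouring word gets duplicated in
    flat[targets] = flat[sources]
    return flat.reshape(w.shape)

print(f"clean accuracy: {accuracy(W):.2f}")
for k in (1, 4, 16, 64):
    print(f"{k:3d} duplicated weights -> accuracy {accuracy(duplicate_weights(W, k)):.2f}")
```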
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.