Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
- URL: http://arxiv.org/abs/2012.07242v1
- Date: Mon, 14 Dec 2020 03:59:08 GMT
- Title: Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs
- Authors: Andrew Boutros, Mathew Hall, Nicolas Papernot, Vaughn Betz
- Abstract summary: We evaluate the security of FPGA-based deep learning accelerators against voltage-based integrity attacks.
We show that aggressive clock gating, an effective power-saving technique, can also be a potential security threat in modern FPGAs.
We achieve 1.18-1.31x higher inference performance by over-clocking the DL accelerator without affecting its prediction accuracy.
- Score: 13.531406531429335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Field-programmable gate arrays (FPGAs) are becoming widely used accelerators
for a myriad of datacenter applications due to their flexibility and energy
efficiency. Among these applications, FPGAs have shown promising results in
accelerating low-latency real-time deep learning (DL) inference, which is
becoming an indispensable component of many end-user applications. With the
emerging research direction towards virtualized cloud FPGAs that can be shared
by multiple users, the security aspect of FPGA-based DL accelerators requires
careful consideration. In this work, we evaluate the security of DL
accelerators against voltage-based integrity attacks in a multi-tenant FPGA
scenario. We first demonstrate the feasibility of such attacks on a
state-of-the-art Stratix 10 card using different attacker circuits that are
logically and physically isolated in a separate attacker role, and cannot be
flagged as malicious circuits by conventional bitstream checkers. We show that
aggressive clock gating, an effective power-saving technique, can also be a
potential security threat in modern FPGAs. Then, we carry out the attack on a
DL accelerator running ImageNet classification in the victim role to evaluate
the inherent resilience of DL models against timing faults induced by the
adversary. We find that even when using the strongest attacker circuit, the
prediction accuracy of the DL accelerator is not compromised when running at
its safe operating frequency. Furthermore, we can achieve 1.18-1.31x higher
inference performance by over-clocking the DL accelerator without affecting its
prediction accuracy.
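The reported 1.18-1.31x gain follows from the fact that, for a compute-bound accelerator, inference throughput scales roughly linearly with clock frequency, so the speedup is simply the ratio of the over-clocked to the safe frequency. A minimal sketch of that arithmetic (the frequency values below are hypothetical illustrations, not taken from the paper):

```python
def overclock_speedup(f_safe_mhz: float, f_oc_mhz: float) -> float:
    """Estimate the throughput gain from over-clocking, assuming
    inference throughput scales linearly with clock frequency."""
    if f_safe_mhz <= 0:
        raise ValueError("safe frequency must be positive")
    return f_oc_mhz / f_safe_mhz

# Hypothetical example: raising a 300 MHz safe clock to 360 MHz
# would give a 1.2x speedup, within the paper's 1.18-1.31x range.
print(round(overclock_speedup(300.0, 360.0), 2))
```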
Related papers
- LaserEscape: Detecting and Mitigating Optical Probing Attacks [5.4511018094405905]
We introduce LaserEscape, the first fully digital and FPGA-compatible countermeasure to detect and mitigate optical probing attacks.
LaserEscape incorporates digital delay-based sensors to reliably detect the physical alteration on the fabric caused by laser beam irradiations in real time.
As a response to the attack, LaserEscape deploys real-time hiding approaches using randomized hardware reconfigurability.
arXiv Detail & Related papers (2024-05-06T16:49:11Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- MaliGNNoma: GNN-Based Malicious Circuit Classifier for Secure Cloud FPGAs [1.6273816588362844]
MaliGNNoma is a machine learning-based solution that accurately identifies malicious FPGA configurations.
It can be employed by cloud service providers as an initial security layer within a necessary multi-tiered security system.
MaliGNNoma achieves a classification accuracy and precision of 98.24% and 97.88%, respectively, surpassing state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-04T09:16:12Z)
- Deep Learning-based Embedded Intrusion Detection System for Automotive CAN [12.084121187559864]
Various intrusion detection approaches have been proposed to detect and tackle such threats, with machine learning models proving highly effective.
We propose a hybrid FPGA-based ECU approach that can transparently integrate IDS functionality through a dedicated off-the-shelf hardware accelerator.
Our results show that the proposed approach provides an average accuracy of over 99% across multiple attack datasets with 0.64% false detection rates.
arXiv Detail & Related papers (2024-01-19T13:13:38Z)
- RandOhm: Mitigating Impedance Side-channel Attacks using Randomized Circuit Configurations [6.388730198692013]
We introduce RandOhm, which exploits a moving target defense (MTD) strategy based on the partial reconfiguration (PR) feature of mainstream FPGAs.
We demonstrate that the information leakage through the PDN impedance could be significantly reduced via runtime reconfiguration of the secret-sensitive parts of the circuitry.
In contrast to existing PR-based countermeasures, RandOhm deploys open-source bitstream manipulation tools to speed up the randomization and provide real-time protection.
arXiv Detail & Related papers (2024-01-17T02:22:28Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA)
Our modified attack does not require random restarts, large number of attack iterations or search for an optimal step-size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators [105.60654479548356]
We show that a combination of robust fixed-point quantization, weight clipping, as well as random bit error training (RandBET) improves robustness against random or adversarial bit errors in quantized DNN weights significantly.
This leads to high energy savings for low-voltage operation as well as low-precision quantization, but also improves security of DNN accelerators.
arXiv Detail & Related papers (2021-04-16T19:11:14Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that with a small perturbation in the input of the neural network (NN), the white-box attacks can result in infeasible solutions up to 86%.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)
- Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA [28.753009943116524]
We propose a novel adversarial attack framework, Deep-Dup, in which the adversarial tenant can inject adversarial faults into the DNN model in the victim tenant of the FPGA.
The proposed Deep-Dup is experimentally validated in a developed multi-tenant FPGA prototype, for two popular deep learning applications.
arXiv Detail & Related papers (2020-11-05T17:59:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.