FLARE: Fault Attack Leveraging Address Reconfiguration Exploits in Multi-Tenant FPGAs
- URL: http://arxiv.org/abs/2502.15578v1
- Date: Fri, 21 Feb 2025 16:38:52 GMT
- Title: FLARE: Fault Attack Leveraging Address Reconfiguration Exploits in Multi-Tenant FPGAs
- Authors: Jayeeta Chaudhuri, Hassan Nassar, Dennis R. E. Gnad, Jörg Henkel, Mehdi B. Tahoori, Krishnendu Chakrabarty
- Abstract summary: We present FLARE, a fault attack that exploits vulnerabilities in the partial reconfiguration process. Unlike traditional fault attacks that operate during module runtime, FLARE injects faults in the bitstream during its reconfiguration. This enables the overwriting of pre-configured co-tenant modules, disrupting their functionality.
- Score: 2.511032692122208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern FPGAs are increasingly supporting multi-tenancy to enable dynamic reconfiguration of user modules. While multi-tenant FPGAs improve utilization and flexibility, this paradigm introduces critical security threats. In this paper, we present FLARE, a fault attack that exploits vulnerabilities in the partial reconfiguration process, specifically while a user bitstream is being uploaded to the FPGA by a reconfiguration manager. Unlike traditional fault attacks that operate during module runtime, FLARE injects faults in the bitstream during its reconfiguration, altering the configuration address and redirecting it to unintended partial reconfigurable regions (PRRs). This enables the overwriting of pre-configured co-tenant modules, disrupting their functionality. FLARE leverages power-wasters that activate briefly during the reconfiguration process, making the attack stealthy and more challenging to detect with existing countermeasures. Experimental results on a Xilinx Pynq FPGA demonstrate the effectiveness of FLARE in compromising multiple user bitstreams during the reconfiguration process.
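The core mechanism described in the abstract, a fault that corrupts the configuration address of a partial bitstream so the write lands in a different partial reconfigurable region (PRR), can be sketched abstractly. The following is an illustrative toy model only, not the paper's implementation; the fault model, addresses, and module names are all hypothetical.

```python
# Toy model of FLARE's effect (hypothetical names and addresses):
# a fault injected while a partial bitstream is loading flips a bit in
# its configuration address, redirecting the write to a co-tenant's PRR.

def flip_bit(addr: int, bit: int) -> int:
    """Model a power-waster-induced fault as a single bit flip in the address."""
    return addr ^ (1 << bit)

def reconfigure(prrs: dict, addr: int, module: str) -> None:
    """Toy reconfiguration manager: writes the module to whichever PRR
    the (possibly faulted) configuration address selects."""
    prrs[addr] = module

# Two tenants: the attacker's bitstream targets PRR 0x2; a victim occupies 0x3.
prrs = {0x2: "empty", 0x3: "victim_module"}
intended_addr = 0x2

# Fault strikes during upload: bit 0 of the configuration address flips.
faulted_addr = flip_bit(intended_addr, 0)   # 0x2 -> 0x3

reconfigure(prrs, faulted_addr, "attacker_module")
# The victim's pre-configured module has been overwritten.
assert prrs[0x3] == "attacker_module"
```

In the real attack the redirection happens at the level of configuration frame addresses inside the bitstream; this sketch only captures the overwrite consequence.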
Related papers
- REDACTOR: eFPGA Redaction for DNN Accelerator Security [0.9831489366502302]
eFPGA redaction is a promising solution to prevent hardware intellectual property theft. This technique selectively conceals critical components of the design, allowing authorized users to restore functionality post-fabrication. In this paper, we explore the redaction of DNN accelerators using eFPGAs, from specification to physical design implementation.
arXiv Detail & Related papers (2025-01-30T20:41:33Z) - Algorithmic Strategies for Sustainable Reuse of Neural Network Accelerators with Permanent Faults [9.89051364546275]
We propose novel approaches that quantify permanent hardware faults in neural network (NN) accelerators by uniquely integrating the behavior of the faulty component instead of bypassing it. We propose several algorithmic mitigation techniques for a subset of stuck-at faults, such as invertible scaling or shifting of activations and weights, or fine-tuning with the faulty behavior. Notably, the proposed techniques do not require any hardware modification, instead relying on existing components of widely used systolic-array-based accelerators.
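The invertible-shifting idea in this summary can be illustrated with a small sketch. Assume, hypothetically, a MAC unit whose fault corrupts negative inputs: shifting the activations into the valid range changes the dot product in a way that can be undone analytically, since w · (x + c) = w · x + c · Σw. This is an assumed fault model for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)

def faulty_dot(w, x):
    """Hypothetical fault model: the unit clamps negative inputs to zero."""
    return w @ np.maximum(x, 0.0)

c = -x.min()                         # shift so every input is non-negative
y_shifted = faulty_dot(w, x + c)     # the faulty unit now sees only valid inputs
y = y_shifted - c * w.sum()          # invert the shift: w.(x+c) = w.x + c*sum(w)

assert np.allclose(y, w @ x)         # fault-free result recovered
```

The appeal of this style of mitigation is visible here: the correction is a scalar subtraction, requiring no change to the multiply-accumulate hardware itself.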
arXiv Detail & Related papers (2024-12-17T18:56:09Z) - Hacking the Fabric: Targeting Partial Reconfiguration for Fault Injection in FPGA Fabrics [2.511032692122208]
We present a novel fault attack methodology capable of causing persistent fault injections in partial bitstreams during the process of FPGA reconfiguration.
This attack leverages power-wasters and is timed to inject faults into bitstreams as they are being loaded onto the FPGA through the reconfiguration manager.
arXiv Detail & Related papers (2024-10-21T20:40:02Z) - Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA [20.629635991749808]
This paper proposes an algorithm and hardware co-design framework that can generate field-programmable gate array (FPGA)-based accelerators for efficient BayesNNs.
At the algorithm level, we propose novel multi-exit dropout-based BayesNNs with reduced computational and memory overheads.
At the hardware level, this paper introduces a transformation framework that can generate FPGA-based accelerators for the proposed efficient BayesNNs.
arXiv Detail & Related papers (2024-06-20T17:08:42Z) - Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition [68.6707284662443]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by poor visibility and noise.
One critical aspect is formulating a consistency constraint specifically for temporal-spatial illumination and appearance enhanced versions.
We present an innovative video Retinex-based decomposition strategy that operates without the need for explicit supervision.
arXiv Detail & Related papers (2024-05-24T15:56:40Z) - Generalized Activation via Multivariate Projection [46.837481855573145]
Activation functions are essential to introduce nonlinearity into neural networks.
We consider ReLU as a projection from R onto the nonnegative half-line R+.
We extend ReLU by substituting it with a generalized projection operator onto a convex cone, such as the Second-Order Cone (SOC) projection.
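The projection view of ReLU summarized above can be made concrete: ReLU is the Euclidean projection onto the nonnegative orthant, applied coordinate-wise, and the generalization replaces it with projection onto a convex cone. Below is the standard closed-form projection onto the second-order cone K = {(x, t) : ||x|| ≤ t}, taken from convex analysis rather than from the paper itself.

```python
import numpy as np

def relu(x):
    """ReLU = coordinate-wise projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def soc_projection(x, t):
    """Euclidean projection of the point (x, t) onto the second-order cone."""
    nx = np.linalg.norm(x)
    if nx <= t:                          # already inside the cone
        return x.copy(), float(t)
    if nx <= -t:                         # inside the polar cone: project to the origin
        return np.zeros_like(x), 0.0
    alpha = (nx + t) / 2.0               # otherwise: project onto the cone's boundary
    return alpha * x / nx, alpha

x = np.array([3.0, 4.0])                 # ||x|| = 5, outside the cone for t = 1
px, pt = soc_projection(x, 1.0)
assert np.isclose(np.linalg.norm(px), pt)   # projected point lies on the boundary
```

Setting the cone to R+^n recovers plain ReLU, which is what makes the SOC projection a natural multivariate generalization.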
arXiv Detail & Related papers (2023-09-29T12:44:27Z) - Reconfigurable Distributed FPGA Cluster Design for Deep Learning Accelerators [59.11160990637615]
We propose a distributed system based on low-power embedded FPGAs designed for edge computing applications.
The proposed system can simultaneously execute diverse Neural Network (NN) models, arrange the graph in a pipeline structure, and manually allocate greater resources to the most computationally intensive layers of the NN graph.
arXiv Detail & Related papers (2023-05-24T16:08:55Z) - LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy Physics [45.666822327616046]
This work presents a novel reconfigurable architecture for Low Latency Graph Neural Network (LL-GNN) designs for particle detectors.
The LL-GNN design advances the next generation of trigger systems by enabling sophisticated algorithms to process experimental data efficiently.
arXiv Detail & Related papers (2022-09-28T12:55:35Z) - Structured Sparsity Learning for Efficient Video Super-Resolution [99.1632164448236]
We develop a structured pruning scheme called Structured Sparsity Learning (SSL) according to the properties of video super-resolution (VSR) models.
In SSL, we design pruning schemes for several key components in VSR models, including residual blocks, recurrent networks, and upsampling networks.
arXiv Detail & Related papers (2022-06-15T17:36:04Z) - ISTR: End-to-End Instance Segmentation with Transformers [147.14073165997846]
We propose an instance segmentation Transformer, termed ISTR, which is the first end-to-end framework of its kind.
ISTR predicts low-dimensional mask embeddings, and matches them with ground truth mask embeddings for the set loss.
Benefiting from the proposed end-to-end mechanism, ISTR demonstrates state-of-the-art performance even with approximation-based suboptimal embeddings.
arXiv Detail & Related papers (2021-05-03T06:00:09Z) - Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs [13.531406531429335]
We evaluate the security of FPGA-based deep learning accelerators against voltage-based integrity attacks.
We show that aggressive clock gating, an effective power-saving technique, can also be a potential security threat in modern FPGAs.
We achieve 1.18-1.31x higher inference performance by over-clocking the DL accelerator without affecting its prediction accuracy.
arXiv Detail & Related papers (2020-12-14T03:59:08Z) - Enabling Efficient and Flexible FPGA Virtualization for Deep Learning in the Cloud [13.439004162406063]
FPGAs have shown great potential in providing low-latency and energy-efficient solutions for deep neural network (DNN) inference applications.
Currently, the majority of FPGA-based DNN accelerators in the cloud are time-division multiplexed across multiple users sharing a single FPGA, and require re-compilation with ~100 s overhead.
arXiv Detail & Related papers (2020-03-26T18:34:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.