The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
- URL: http://arxiv.org/abs/2602.16309v1
- Date: Wed, 18 Feb 2026 09:40:29 GMT
- Title: The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
- Authors: Jakub Breier, Štefan Kučerák, Xiaolu Hou,
- Abstract summary: We investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip, and utilized a low-cost EMFI platform to trigger faults. Our results show that while floating-point representations exhibit almost a complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall.
- Score: 3.2913641623634913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fault injection attacks on embedded neural network models have been shown to be a potent threat. Numerous works have studied the resilience of models from various points of view. As of now, there is no comprehensive study evaluating the influence of the number representations used for model parameters against electromagnetic fault injection (EMFI) attacks. In this paper, we investigate how four different number representations influence the success of an EMFI attack on embedded neural network models. We chose two common floating-point representations (32-bit and 16-bit) and two integer representations (8-bit and 4-bit). We deployed four common image classifiers, ResNet-18, ResNet-34, ResNet-50, and VGG-11, on an embedded memory chip, and utilized a low-cost EMFI platform to trigger faults. Our results show that while floating-point representations exhibit an almost complete degradation in accuracy (Top-1 and Top-5) after a single fault injection, integer representations offer better resistance overall. In particular, for the 8-bit representation on a relatively large network (VGG-11), Top-1 accuracy stays at around 70% and Top-5 at around 90%.
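The contrast the abstract describes can be illustrated with a minimal sketch (not the paper's code): flipping a high exponent bit of an IEEE-754 float32 weight changes it by dozens of orders of magnitude, while the worst-case single flip in an int8 weight is bounded by the quantization range.

```python
import struct

def flip_bit_f32(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 binary32 encoding of x."""
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return y

def flip_bit_i8(x: int, bit: int) -> int:
    """Flip one bit in a signed 8-bit two's-complement weight."""
    raw = (x & 0xFF) ^ (1 << bit)
    return raw - 256 if raw >= 128 else raw

w = 0.5
# Flipping the exponent's most significant bit (bit 30) of a float32
# weight blows it up by dozens of orders of magnitude ...
print(flip_bit_f32(w, 30))   # 0.5 -> ~1.7e38
# ... while the worst single flip in an int8 weight (bit 7, the sign
# bit) changes it by at most 128 quantization steps.
print(flip_bit_i8(64, 7))    # 64 -> -64
```

This is one intuition for why the paper observes near-total accuracy collapse for floating-point models after a single injected fault, while integer models degrade more gracefully.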
Related papers
- Understanding vision transformer robustness through the lens of out-of-distribution detection [59.72757235382676]
Quantization reduces memory and inference costs at the risk of performance loss. We investigate the behaviour of quantized small variants of popular vision transformers (DeiT, DeiT3, and ViT) on common out-of-distribution (OOD) datasets.
arXiv Detail & Related papers (2026-02-01T22:00:59Z) - Domain-Robust Marine Plastic Detection Using Vision Models [0.0]
This study benchmarks models for cross-domain robustness, training convolutional neural networks and vision transformers. Two zero-shot models, CLIP ViT-L14 and Google's Gemini 2.0 Flash, were also assessed; they leverage pretraining to classify images without fine-tuning. Results show that the lightweight MobileNetV2 delivers the strongest cross-domain performance (F1 0.97), surpassing larger models.
arXiv Detail & Related papers (2025-09-29T17:15:07Z) - Harden Deep Neural Networks Against Fault Injections Through Weight Scaling [0.0]
We propose a method to harden DNN weights by multiplying them by constants before storing them to a fault-prone medium. Our method is based on the observation that errors from bit-flips have properties similar to additive noise.
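One plausible reading of why such scaling helps, sketched under the assumption of IEEE-754 float32 storage and an arbitrarily chosen constant k=8 (neither detail is taken from the paper): scaling a small weight so its stored magnitude is at least 2 sets the exponent's most significant bit, so a flip of that bit collapses the value toward zero instead of exploding it, and the residual error after unscaling is bounded by roughly the weight's own magnitude.

```python
import struct

def flip_bit_f32(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 binary32 encoding of x."""
    (raw,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return y

K = 8.0   # scaling constant (arbitrary choice for this sketch)
w = 0.5   # |K * w| >= 2, so the stored exponent's MSB is set

# Unprotected: flipping the exponent MSB of 0.5 explodes it to ~1.7e38.
err_plain = abs(flip_bit_f32(w, 30) - w)

# Protected: store K*w, suffer the same flip, divide by K after loading.
# Because the exponent MSB of K*w is 1, the flip collapses the value
# toward zero, so the error after unscaling is only about |w|.
err_scaled = abs(flip_bit_f32(w * K, 30) / K - w)

print(err_plain, err_scaled)   # ~1.7e38 vs ~0.5
```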
arXiv Detail & Related papers (2024-11-28T08:47:23Z) - FP8 Formats for Deep Learning [49.54015320992368]
We propose an 8-bit floating point (FP8) binary interchange format consisting of two encodings.
E4M3's dynamic range is extended by not representing infinities and having only one mantissa bit-pattern for NaNs.
We demonstrate the efficacy of the FP8 format on a variety of image and language tasks, effectively matching the result quality achieved by 16-bit training sessions.
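The E4M3 encoding described above can be made concrete with a small decoder (a sketch based on the stated properties, not the paper's reference code): 1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits, no infinities, and a single NaN mantissa pattern per sign.

```python
def decode_e4m3(byte: int) -> float:
    """Decode one byte under the E4M3 interpretation: bias 7,
    no infinities, one NaN mantissa pattern per sign."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF
    man = byte & 0x7
    if exp == 0xF and man == 0x7:          # the lone NaN pattern
        return float("nan")
    if exp == 0:                           # subnormals: 2**-6 * (man/8)
        return sign * (man / 8.0) * 2.0 ** -6
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

print(decode_e4m3(0x38))   # 1.0
print(decode_e4m3(0x7E))   # 448.0, the format's maximum finite value
```

Reclaiming the exponent pattern 1111 for ordinary values (rather than infinities) is exactly what extends the dynamic range up to 448.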
arXiv Detail & Related papers (2022-09-12T17:39:55Z) - From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
arXiv Detail & Related papers (2022-04-14T15:14:08Z) - All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks [2.294014185517203]
This paper introduces an extremely flexible 8-bit floating-point (FFP8) format.
It achieves an extremely low accuracy loss of $0.1\%\sim 0.3\%$ for several representative image classification models.
It is easy to turn a classical floating-point processing unit into an FFP8-compliant one, and the extra hardware cost is minor.
arXiv Detail & Related papers (2021-04-15T09:37:23Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the performance decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, at only 1/4 the memory cost, and run 2x faster on modern GPUs.
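A minimal sketch of why a bounded activation helps integer-only inference (illustrative only; the bound of 6 and the 8-bit scale are assumptions, not the paper's settings): clamping fixes the activation range in advance, so a single static scale maps it onto integers, whereas an unbounded ReLU has no fixed range to quantize against.

```python
# Bounded ReLU: clamp activations to a known interval [0, bound].
def bounded_relu(x: float, bound: float = 6.0) -> float:
    return min(max(x, 0.0), bound)

# With the range fixed, activations quantize with one static scale.
def quantize_act(x: float, bound: float = 6.0, bits: int = 8) -> int:
    levels = (1 << bits) - 1
    return round(bounded_relu(x, bound) / bound * levels)

print(quantize_act(-1.0))  # 0   (clipped below)
print(quantize_act(3.0))   # 128 (mid-range)
print(quantize_act(9.0))   # 255 (clipped above)
```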
arXiv Detail & Related papers (2020-06-21T08:23:03Z) - EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks [18.241639570479563]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications.
We propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks.
Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively.
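The ensemble idea can be sketched as a toy majority vote over class labels (the voting scheme here is an assumption for illustration; EMPIR's actual combination mechanism may differ): each member model is quantized to a different precision, and an adversarial perturbation that fools one precision often fails to fool the others.

```python
from collections import Counter

def empir_style_vote(predictions: list[int]) -> int:
    """Majority vote over class labels predicted by ensemble members,
    e.g. a 2-bit, a 4-bit, and a full-precision model."""
    return Counter(predictions).most_common(1)[0][0]

# The full-precision member is fooled into class 7; the two quantized
# members still predict class 3, so the ensemble output stays correct.
print(empir_style_vote([3, 3, 7]))  # 3
```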
arXiv Detail & Related papers (2020-04-21T17:17:09Z) - Cryptanalytic Extraction of Neural Network Models [56.738871473622865]
We introduce a differential attack that can efficiently steal the parameters of the remote model up to floating point precision.
Our attack relies on the fact that ReLU neural networks are piecewise linear functions.
We extract models that are $2^{20}$ times more precise and require 100x fewer queries than prior work.
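The piecewise-linearity observation can be demonstrated on a toy one-neuron "black box" (entirely hypothetical; the real attack targets full networks and recovers parameters to floating-point precision): bisecting on the local slope locates a ReLU critical point, whose position leaks the ratio of a hidden bias to its weight.

```python
def toy_net(x: float) -> float:
    # Hypothetical black box: one hidden ReLU neuron with secret
    # weight 3.0 and bias -1.5, then an affine output layer.
    return 2.0 * max(3.0 * x - 1.5, 0.0) + 0.25

def local_slope(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def find_kink(f, lo=-10.0, hi=10.0, iters=80):
    # Bisection: keep the slope change (the ReLU critical point)
    # inside [lo, hi] by comparing against the slope at lo.
    s_lo = local_slope(f, lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if abs(local_slope(f, mid) - s_lo) < 1e-3:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The recovered kink sits at x = 1.5/3.0 = 0.5, exposing the ratio of
# the secret bias to the secret weight from queries alone.
print(find_kink(toy_net))  # ~0.5
```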
arXiv Detail & Related papers (2020-03-10T17:57:14Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely low computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most of existing methods aim to enhance performance of QNNs especially binary neural networks by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z) - Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks [9.409651543514615]
This work introduces convolutional layers with pre-defined sparse 2D kernels that have support sets that repeat periodically within and across filters.
Due to the efficient storage of our periodic sparse kernels, the parameter savings can translate into considerable improvements in energy efficiency.
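A sketch of what a periodically repeating support set might look like in one dimension (an illustrative construction, not the paper's exact kernel layout): because the pattern repeats with a fixed period, only one period of it needs to be stored.

```python
def periodic_mask(length: int, support: list[int], period: int) -> list[int]:
    """Binary sparsity mask whose support set repeats every `period`
    positions; `support` lists the nonzero offsets within one period."""
    return [1 if (i % period) in support else 0 for i in range(length)]

mask = periodic_mask(12, support=[0, 3], period=4)
print(mask)  # [1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```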
arXiv Detail & Related papers (2020-01-29T07:10:56Z)