No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips
- URL: http://arxiv.org/abs/2502.07408v1
- Date: Tue, 11 Feb 2025 09:40:45 GMT
- Title: No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips
- Authors: Ido Galil, Moshe Kimhi, Ran El-Yaniv
- Abstract summary: Deep Neural Lesion (DNL) is a data-free, lightweight method that locates critical parameters and triggers massive accuracy drops. We validate its efficacy on a wide variety of computer vision models and datasets.
- Score: 17.136832159667204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) can be catastrophically disrupted by flipping only a handful of sign bits in their parameters. We introduce Deep Neural Lesion (DNL), a data-free, lightweight method that locates these critical parameters and triggers massive accuracy drops. We validate its efficacy on a wide variety of computer vision models and datasets. The method requires no training data or optimization and can be carried out via commonly exploited software-, firmware-, or hardware-based attack vectors. An enhanced variant that uses a single forward and backward pass further amplifies the damage beyond DNL's zero-pass approach. Flipping just two sign bits in ResNet50 on ImageNet reduces accuracy by 99.8%. We also show that selectively protecting a small fraction of vulnerable sign bits provides a practical defense against such attacks.
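The attack primitive is easy to make concrete: in IEEE-754 floating point, the most significant bit of a value is its sign bit, so a single-bit corruption of a weight w turns it into -w. Below is a minimal NumPy sketch of a zero-pass, data-free sign flip in the spirit of DNL. The magnitude-based target selection and the helper name flip_sign_bits are illustrative assumptions; the abstract does not specify how DNL ranks critical parameters.

```python
import numpy as np

SIGN_BIT = np.uint32(0x80000000)  # the MSB of an IEEE-754 float32 is its sign bit

def flip_sign_bits(weights: np.ndarray, k: int = 2) -> np.ndarray:
    """Flip the sign bit of the k largest-magnitude weights.

    Hypothetical target-selection rule: large-magnitude parameters are
    assumed to be the most critical, so no data or optimization is needed.
    """
    flat = weights.astype(np.float32).ravel().copy()
    targets = np.argsort(np.abs(flat))[-k:]   # indices of the k largest |w|
    bits = flat.view(np.uint32)               # reinterpret the same bytes as ints
    bits[targets] ^= SIGN_BIT                 # one bit flip per weight: w -> -w
    return flat.reshape(weights.shape)

w = np.array([0.01, -0.2, 3.5, -4.1], dtype=np.float32)
print(flip_sign_bits(w))  # -> [ 0.01 -0.2  -3.5   4.1 ]
```

A sketch of the enhanced one-pass variant follows. The abstract says only that it uses a single forward and backward pass, so the random probe input, the norm-based surrogate loss, and the w * grad saliency score are all assumptions layered on top.

```python
import torch

def one_pass_targets(model: torch.nn.Module, input_shape: tuple, k: int = 2):
    """Score sign-flip candidates with one forward and one backward pass.

    First-order reasoning: negating w changes the loss by roughly
    grad * (-2w), so the most damaging flips are those with the most
    negative w * grad (i.e., flips that push the loss up the most).
    """
    model.zero_grad()
    probe = torch.randn(1, *input_shape)   # random probe: no training data needed
    model(probe).norm().backward()         # assumed data-free surrogate objective
    candidates = []
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        score = (p.detach() * p.grad).flatten()
        idx = int(torch.argmin(score))     # most loss-increasing flip in this tensor
        candidates.append((float(score[idx]), name, idx))
    return sorted(candidates)[:k]          # k most damaging (score, name, index)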
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including the gradient-based optimizer and regularizers, which limits this flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise [0.5755004576310334]
We introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity.
VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction.
It outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-06-05T10:10:03Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures the time an attack takes to converge, and uses it as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes [50.00272243518593]
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains problematically high.
We have developed NeuralFuse, a novel add-on module that handles the energy-accuracy tradeoff in low-voltage regimes.
At a 1% bit-error rate, NeuralFuse can reduce access energy by up to 24% while recovering accuracy by up to 57%.
arXiv Detail & Related papers (2023-06-29T11:38:22Z)
- BDFA: A Blind Data Adversarial Bit-flip Attack on Deep Neural Networks [0.05249805590164901]
Blind Data Adversarial Bit-flip Attack (BDFA) is a novel technique to enable BFA without any access to the training or testing data.
BDFA could decrease the accuracy of ResNet50 significantly, from 75.96% to 13.94%, with only 4 bit flips.
arXiv Detail & Related papers (2021-12-07T03:53:38Z)
- Don't Knock! Rowhammer at the Backdoor of DNN Models [19.13129153353046]
We present an end-to-end backdoor injection attack on a model, realized on actual hardware, using Rowhammer as the fault-injection method.
We propose a novel network training algorithm based on constrained optimization to achieve a realistic backdoor injection attack in hardware.
arXiv Detail & Related papers (2021-10-14T19:43:53Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest technique in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- DeepDyve: Dynamic Verification for Deep Neural Networks [16.20238078882485]
DeepDyve employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification.
We develop efficient and effective architecture and task exploration techniques to achieve optimized risk/overhead trade-off in DeepDyve.
arXiv Detail & Related papers (2020-09-21T07:58:18Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)