BDFA: A Blind Data Adversarial Bit-flip Attack on Deep Neural Networks
- URL: http://arxiv.org/abs/2112.03477v1
- Date: Tue, 7 Dec 2021 03:53:38 GMT
- Title: BDFA: A Blind Data Adversarial Bit-flip Attack on Deep Neural Networks
- Authors: Behnam Ghavami, Mani Sadati, Mohammad Shahidzadeh, Zhenman Fang,
Lesley Shannon
- Abstract summary: Blind Data Adversarial Bit-flip Attack (BDFA) is a novel technique to enable BFA without any access to the training or testing data.
BDFA could decrease the accuracy of ResNet50 significantly from 75.96% to 13.94% with only 4 bit flips.
- Score: 0.05249805590164901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial bit-flip attack (BFA) on Neural Network weights can result in
catastrophic accuracy degradation by flipping a very small number of bits. A
major drawback of prior bit-flip attack techniques is their reliance on test
data, which is frequently unavailable for applications that handle sensitive
or proprietary data. In this paper, we propose Blind Data Adversarial Bit-flip
Attack (BDFA), a novel technique to enable BFA without any access to the
training or testing data. This is achieved by optimizing for a synthetic
dataset, which is engineered to match the statistics of batch normalization
across different layers of the network and the targeted label. Experimental
results show that BDFA can significantly decrease the accuracy of ResNet50,
from 75.96% to 13.94%, with only 4 bit flips.
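The core of the approach is the data-synthesis step: starting from random noise, a surrogate batch is optimized so that its per-layer statistics match the running mean and variance stored in the network's batch-normalization layers, with an additional term steering the batch toward the targeted label. Below is a minimal PyTorch sketch of this idea, not the authors' implementation; the function name, the 224x224 input size, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def synthesize_blind_data(model, targets, steps=500, lr=0.1):
    """Distill a synthetic batch whose per-layer statistics match the
    batch-norm running statistics of a trained model (illustrative
    sketch of the abstract's idea, not the authors' code)."""
    model.eval()
    x = torch.randn(len(targets), 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)

    stats = []  # (bn_module, batch_mean, batch_var) captured per forward pass
    def capture(module, inputs, output):
        inp = inputs[0]
        stats.append((module,
                      inp.mean(dim=[0, 2, 3]),
                      inp.var(dim=[0, 2, 3], unbiased=False)))

    hooks = [m.register_forward_hook(capture)
             for m in model.modules() if isinstance(m, nn.BatchNorm2d)]

    for _ in range(steps):
        stats.clear()
        opt.zero_grad()
        logits = model(x)
        # Match each BN layer's stored training-time statistics...
        bn_loss = sum(F.mse_loss(mean, m.running_mean) +
                      F.mse_loss(var, m.running_var)
                      for m, mean, var in stats)
        # ...and steer the synthetic batch toward the targeted labels.
        loss = bn_loss + F.cross_entropy(logits, targets)
        loss.backward()
        opt.step()

    for h in hooks:
        h.remove()
    return x.detach()
```

The resulting synthetic batch can then stand in for real test data when ranking candidate bit flips, which is what makes the attack "blind" to the training and testing sets.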
Related papers
- Bit-Flip Fault Attack: Crushing Graph Neural Networks via Gradual Bit Search [0.4943822978887544]
Graph Neural Networks (GNNs) have emerged as a powerful machine learning method for graph-structured data.
In this paper, we investigate the vulnerability of GNN models to hardware-based fault attacks.
We propose Gradual Bit-Flip Fault Attack (GBFA), a layer-aware bit-flip fault attack.
arXiv Detail & Related papers (2025-07-07T23:06:29Z) - ObfusBFA: A Holistic Approach to Safeguarding DNNs from Different Types of Bit-Flip Attacks [12.96840649714218]
Bit-flip attacks (BFAs) represent a serious threat to Deep Neural Networks (DNNs).
We propose ObfusBFA, an efficient and holistic methodology to mitigate BFAs.
We design novel algorithms to identify critical bits and insert obfuscation operations.
arXiv Detail & Related papers (2025-06-12T14:31:27Z) - No Data, No Optimization: A Lightweight Method To Disrupt Neural Networks With Sign-Flips [17.136832159667204]
Deep Neural Lesion (DNL) is a data-free, lightweight method that locates critical parameters and triggers massive accuracy drops.
We validate its efficacy on a wide variety of computer vision models and datasets.
arXiv Detail & Related papers (2025-02-11T09:40:45Z) - Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs).
We apply our method to certify a broad range of GNN architectures in node classification tasks.
Our work presents the first exact certificate against a poisoning attack ever derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z) - DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks [4.734824660843964]
We introduce an encoding-based protection method against bit-flip attacks on neural networks, titled DeepNcode.
Our results show an increase in protection margin of up to $7.6\times$ for 4-bit and $12.4\times$ for 8-bit quantized networks.
arXiv Detail & Related papers (2024-05-22T18:01:34Z) - Federated Learning Under Attack: Exposing Vulnerabilities through Data
Poisoning Attacks in Computer Networks [17.857547954232754]
Federated Learning (FL) is a machine learning approach that enables multiple decentralized devices or edge servers to collaboratively train a shared model without exchanging raw data.
During the training and sharing of model updates between clients and servers, data and models are susceptible to different data-poisoning attacks.
We considered two types of data-poisoning attacks, label flipping (LF) and feature poisoning (FP), and applied them with a novel approach.
arXiv Detail & Related papers (2024-03-05T14:03:15Z) - One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
A new weight modification attack called the bit-flip attack (BFA) was proposed, which exploits memory fault injection techniques.
We propose a training-assisted BFA, in which the adversary is involved in the training stage to build a high-risk model for release.
arXiv Detail & Related papers (2023-08-12T09:34:43Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes: the single sample attack (SSA) and the triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to adapt a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based
Repeated Bit Flip Attack [10.31732879936362]
We present the Zero-data Based Repeated bit flip Attack (ZeBRA), which precisely destroys deep neural networks (DNNs).
Our approach makes the adversarial weight attack a more serious threat to the security of DNNs.
arXiv Detail & Related papers (2021-11-01T16:44:20Z) - Iterative Pseudo-Labeling with Deep Feature Annotation and
Confidence-Based Sampling [127.46527972920383]
Training deep neural networks is challenging when large and annotated datasets are unavailable.
We improve a recent iterative pseudo-labeling technique, Deep Feature Annotation (DeepFA), by selecting the most confident unsupervised samples to iteratively train a deep neural network.
We first ascertain the best configuration for the baseline -- a self-trained deep neural network -- and then evaluate our confidence-based DeepFA for different confidence thresholds.
arXiv Detail & Related papers (2021-09-06T20:02:13Z) - Broadly Applicable Targeted Data Sample Omission Attacks [15.077408234311816]
We introduce a novel clean-label targeted poisoning attack on learning mechanisms.
Our attack causes the misclassification of a single, targeted test sample of choice, without manipulating that sample.
We show that, with a low attack budget, our attack's success rate is above 80%, and in some cases 100%, for white-box learning.
arXiv Detail & Related papers (2021-05-04T15:20:54Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight
Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked from the perspective of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic benchmarks and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - T-BFA: Targeted Bit-Flip Adversarial Weight Attack [36.80180060697878]
Bit-Flip-based adversarial weight Attack (BFA) injects a very small number of faults into weight parameters to hijack the function of the executing DNN.
This paper proposes the first targeted BFA-based (T-BFA) adversarial weight attack on DNNs, which can intentionally mislead selected inputs to a target output class. (A minimal sketch of the gradient-guided bit search shared by these BFA-style attacks appears after this list.)
arXiv Detail & Related papers (2020-07-24T03:58:25Z)
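Several of the papers above (BFA, T-BFA, GBFA, ZeBRA, and the training-assisted variant) build on the same progressive, gradient-guided search over the bits of quantized weights. The sketch below illustrates that shared loop under an assumed symmetric 8-bit per-tensor quantization; it uses a first-order estimate of the loss change for exposition and is not the exact algorithm of any one paper.

```python
import torch

def progressive_bit_search(model, loss_fn, data, labels, n_flips=4):
    """At each step, flip the one weight bit whose first-order effect
    g * (w_flipped - w) promises the largest loss increase.
    Illustrative sketch; real attacks refine the ranking and search order."""
    for _ in range(n_flips):
        model.zero_grad()
        loss_fn(model(data), labels).backward()
        best = None  # (estimated gain, parameter, flat index, weight delta)
        for p in model.parameters():
            if p.grad is None:
                continue
            w, g = p.detach().flatten(), p.grad.flatten()
            scale = w.abs().max().clamp(min=1e-8) / 127
            q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int32)
            qu = q & 0xFF                      # two's-complement byte view
            for bit in range(8):
                fu = qu ^ (1 << bit)           # flip one bit of every weight
                fs = torch.where(fu > 127, fu - 256, fu)  # back to signed
                delta = (fs - q).float() * scale
                gain = g * delta               # first-order loss change
                i = int(torch.argmax(gain))
                if best is None or float(gain[i]) > best[0]:
                    best = (float(gain[i]), p, i, float(delta[i]))
        _, p, i, d = best
        with torch.no_grad():
            p.view(-1)[i] += d                 # commit the most damaging flip
    return model
```

In the blind setting of BDFA or ZeBRA, `data` would be the synthetic batch from the sketch after the abstract rather than real test samples.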