Bit-Flip Fault Attack: Crushing Graph Neural Networks via Gradual Bit Search
- URL: http://arxiv.org/abs/2507.05531v1
- Date: Mon, 07 Jul 2025 23:06:29 GMT
- Title: Bit-Flip Fault Attack: Crushing Graph Neural Networks via Gradual Bit Search
- Authors: Sanaz Kazemi Abharian, Sai Manoj Pudukotai Dinakarrao
- Abstract summary: Graph Neural Networks (GNNs) have emerged as a powerful machine learning method for graph-structured data. In this paper, we investigate the vulnerability of GNN models to hardware-based fault attacks. We propose the Gradual Bit-Flip Fault Attack (GBFA), a layer-aware bit-flip fault attack.
- Score: 0.4943822978887544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as a powerful machine learning method for graph-structured data. A plethora of hardware accelerators has been introduced to meet the performance demands of GNNs in real-world applications, yet the security challenges posed by hardware-based attacks have been largely overlooked. In this paper, we investigate the vulnerability of GNN models to hardware-based fault attacks, in which an attacker attempts to induce misclassification by modifying trained weight parameters through fault injection in a memory device. We therefore propose the Gradual Bit-Flip Fault Attack (GBFA), a layer-aware bit-flip fault attack that gradually selects a vulnerable bit in each targeted weight, compromising the GNN's performance while flipping a minimal number of bits. GBFA operates in two steps. First, a Markov model is built to predict the execution sequence of layers from features extracted from memory access patterns, enabling the attack to be launched within a specific layer. Second, GBFA identifies vulnerable bits within the selected weights via gradient ranking through an in-layer search. We evaluate the effectiveness of GBFA on various GNN models for node classification tasks using the Cora and PubMed datasets. Our findings show that GBFA significantly degrades prediction accuracy, and the variation in its impact across layers highlights the importance of adopting a layer-aware attack strategy for GNNs. For example, GBFA degrades GraphSAGE's prediction accuracy by 17% on the Cora dataset with only a single bit flip in the last layer.
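The abstract's second step, the in-layer gradient-ranked bit search, can be illustrated with a short sketch. The code below is a minimal, hypothetical reconstruction in PyTorch, not the authors' implementation: it assumes the target layer has already been chosen (e.g., via the Markov model over memory access patterns), ranks that layer's weights by loss-gradient magnitude, and then greedily tests high-order IEEE-754 bit flips on the top-ranked weights to find the single flip that most increases the loss. The names `flip_bit` and `gbfa_like_step`, the generic `model(data)` forward call, and the candidate/bit ranges are all assumptions.

```python
import struct
import torch
import torch.nn.functional as F

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value via its IEEE-754 bit pattern."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

def gbfa_like_step(model, layer_weight, data, labels, n_candidates=10):
    """One gradual bit-flip step inside a pre-selected layer (illustrative only).

    layer_weight: the nn.Parameter of the targeted GNN layer.
    Returns (flat_index, bit_position, loss) of the flip that most increases the loss.
    """
    # Step 1: gradient ranking -- compute d(loss)/d(weight) for the targeted layer.
    model.zero_grad()
    base_loss = F.cross_entropy(model(data), labels)  # schematic forward call
    base_loss.backward()
    grad = layer_weight.grad.detach().abs().flatten()
    top_idx = torch.topk(grad, n_candidates).indices  # most sensitive weights

    # Step 2: in-layer search -- try high-order bit flips on candidate weights
    # and keep the one that degrades the loss the most.
    best = (None, None, base_loss.item())
    flat_w = layer_weight.data.view(-1)
    for idx in top_idx.tolist():
        original = flat_w[idx].item()
        for bit in range(24, 32):  # sign and exponent bits of float32
            flat_w[idx] = flip_bit(original, bit)
            with torch.no_grad():
                new_loss = F.cross_entropy(model(data), labels).item()
            flat_w[idx] = original  # restore before the next trial
            # NaN/Inf losses (e.g., exponent overflow) compare False and are skipped.
            if new_loss > best[2]:
                best = (idx, bit, new_loss)
    return best
```

Restricting the search to sign and exponent bits mirrors the common observation in the bit-flip attack literature that high-order bits of floating-point weights cause the largest output perturbations; the exact ranking and search strategy used by GBFA may differ.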
Related papers
- Ralts: Robust Aggregation for Enhancing Graph Neural Network Resilience on Bit-flip Errors [10.361566017170295]
We present a comprehensive analysis of GNN robustness against bit-flip errors. We propose Ralts, a generalizable and lightweight solution to bolster GNN resilience to bit-flip errors. Ralts exploits various graph similarity metrics to filter out outliers and recover compromised graph topology.
arXiv Detail & Related papers (2025-07-24T21:03:44Z) - ObfusBFA: A Holistic Approach to Safeguarding DNNs from Different Types of Bit-Flip Attacks [12.96840649714218]
Bit-flip attacks (BFAs) represent a serious threat to Deep Neural Networks (DNNs). We propose ObfusBFA, an efficient and holistic methodology to mitigate BFAs. We design novel algorithms to identify critical bits and insert obfuscation operations.
arXiv Detail & Related papers (2025-06-12T14:31:27Z) - Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs). We apply our method to certify a broad range of GNN architectures in node classification tasks. Our work presents the first exact certificate against a poisoning attack ever derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks [17.723282166737867]
Hypergraph Neural Networks (HGNNs) have been successfully applied in various hypergraph-related tasks.
Recent works have shown that deep learning models are vulnerable to adversarial attacks.
We design a new untargeted attack model for HGNNs, namely MGHGA, which focuses on modifying node features.
arXiv Detail & Related papers (2023-10-24T09:10:45Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Label-Only Membership Inference Attack against Node-Level Graph Neural Networks [30.137860266059004]
Graph Neural Networks (GNNs) are vulnerable to Membership Inference Attacks (MIAs).
We propose a label-only MIA against GNNs for node classification with the help of GNNs' flexible prediction mechanism.
Our attacking method achieves around 60% accuracy, precision, and Area Under the Curve (AUC) for most datasets and GNN models.
arXiv Detail & Related papers (2022-07-27T19:46:26Z) - Transferable Graph Backdoor Attack [13.110473828583725]
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
arXiv Detail & Related papers (2022-06-21T06:25:37Z) - A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem, whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, utilizing the latest techniques in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that the main reason is that they have to use the whole graph for attacks, resulting in increasing time and space complexity as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)