Query-Based and Unnoticeable Graph Injection Attack from Neighborhood Perspective
- URL: http://arxiv.org/abs/2502.01936v1
- Date: Tue, 04 Feb 2025 02:11:57 GMT
- Title: Query-Based and Unnoticeable Graph Injection Attack from Neighborhood Perspective
- Authors: Chang Liu, Hai Huang, Yujie Xing, Xingquan Zuo
- Abstract summary: QUGIA is a Query-based and Unnoticeable Graph Injection Attack.
It injects nodes by first selecting edges based on victim node connections and then generating node features using a Bayesian framework.
This ensures that the injected nodes are similar to the original graph nodes, implicitly preserving homophily and making the attack more unnoticeable.
- Score: 5.29403129046676
- Abstract: The robustness of Graph Neural Networks (GNNs) has become an increasingly important topic due to their expanding range of applications. Various attack methods have been proposed to explore the vulnerabilities of GNNs, ranging from Graph Modification Attacks (GMA) to the more practical and flexible Graph Injection Attacks (GIA). However, existing methods face two key challenges: (i) their reliance on surrogate models, which often leads to reduced attack effectiveness due to structural differences and prior biases, and (ii) existing GIA methods often sacrifice attack success rates in undefended settings to bypass certain defense models, thereby limiting their overall effectiveness. To overcome these limitations, we propose QUGIA, a Query-based and Unnoticeable Graph Injection Attack. QUGIA injects nodes by first selecting edges based on victim node connections and then generating node features using a Bayesian framework. This ensures that the injected nodes are similar to the original graph nodes, implicitly preserving homophily and making the attack more unnoticeable. Unlike previous methods, QUGIA does not rely on surrogate models, thereby avoiding performance degradation and achieving better generalization. Extensive experiments on six real-world datasets with diverse characteristics demonstrate that QUGIA achieves unnoticeable attacks and outperforms state-of-the-art attackers. The code will be released upon acceptance.
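As a rough illustration of the query-based, homophily-preserving injection the abstract describes, the sketch below wires one new node to a victim and samples its features around the victim's 1-hop neighborhood, keeping the candidate that most lowers the model's confidence in the true label. Everything here is an assumption for illustration: `query_model` (a black-box callable returning class probabilities) and the Gaussian sampler, which is only a crude stand-in for the paper's Bayesian feature generation.

```python
# Minimal sketch of a query-based node injection step in the spirit of the
# abstract above; `query_model` and all helper names are illustrative
# assumptions, not the authors' released implementation.
import numpy as np

def inject_one_node(adj, feats, victim, labels, query_model,
                    n_queries=50, seed=0):
    """Wire one injected node to `victim`, sampling its features to resemble
    the victim's neighborhood (implicitly preserving homophily)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]

    # Augmented adjacency: one extra row/column, connected to the victim.
    adj_new = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    adj_new[:n, :n] = adj
    adj_new[n, victim] = adj_new[victim, n] = 1

    # Imitate the victim's 1-hop neighborhood; a Gaussian around the
    # neighborhood mean stands in for the paper's Bayesian generator.
    hood = np.append(np.flatnonzero(adj[victim]), victim)
    mu, sigma = feats[hood].mean(0), feats[hood].std(0) + 1e-6

    best_feats, best_conf = None, np.inf
    for _ in range(n_queries):
        cand = rng.normal(mu, sigma)
        feats_new = np.vstack([feats, cand])
        probs = query_model(adj_new, feats_new)   # direct query, no surrogate
        conf = probs[victim, labels[victim]]      # confidence in the true class
        if conf < best_conf:
            best_feats, best_conf = feats_new, conf
    return adj_new, best_feats
```

Because the loop relies only on model queries, it sidesteps the surrogate-model mismatch the abstract identifies; the Gaussian sampler is the obvious place to substitute a proper Bayesian feature generator.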
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G2-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
arXiv Detail & Related papers (2023-05-04T15:10:41Z)
- Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning [37.4570186471298]
We study the problem of black-box node injection attack, without training a potentially misleading surrogate model.
By directly querying the victim model, G2A2C learns to inject highly malicious nodes with extremely limited attacking budgets.
We demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers.
arXiv Detail & Related papers (2022-11-19T19:37:22Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging the node injection attack, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Black-box Node Injection Attack for Graph Neural Networks [29.88729779937473]
We study the possibility of injecting nodes to evade the victim GNN model.
Specifically, we propose GA2C, a graph reinforcement learning framework.
We demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-02-18T19:17:43Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can avoid the costly iterative optimization that gradient-based attacks require.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against a representative regression-based GAD system, OddBall.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
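Since the Graph Backdoor entry above describes triggers as small subgraphs combining topology and features, a minimal sketch of grafting a trigger subgraph onto a clean graph may help. The fixed triangle topology and all names here are assumptions for this example; GTA itself learns both the trigger's structure and its features rather than fixing them.

```python
# Illustrative sketch of attaching a subgraph trigger, loosely following the
# Graph Backdoor (GTA) entry above; the fixed fully-connected trigger and all
# names are assumptions for this example, not GTA's actual algorithm.
import numpy as np

def attach_trigger(adj, feats, anchor, trigger_feats):
    """Append a fully connected trigger subgraph (a triangle when
    `trigger_feats` has 3 rows) and wire its first node to `anchor`."""
    n, k = adj.shape[0], trigger_feats.shape[0]
    adj_new = np.zeros((n + k, n + k), dtype=adj.dtype)
    adj_new[:n, :n] = adj
    for i in range(k):                    # connect trigger nodes to each other
        for j in range(i + 1, k):
            adj_new[n + i, n + j] = adj_new[n + j, n + i] = 1
    adj_new[n, anchor] = adj_new[anchor, n] = 1   # graft trigger onto the graph
    feats_new = np.vstack([feats, trigger_feats])
    return adj_new, feats_new
```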
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.