Single Node Injection Label Specificity Attack on Graph Neural Networks
via Reinforcement Learning
- URL: http://arxiv.org/abs/2305.02901v1
- Date: Thu, 4 May 2023 15:10:41 GMT
- Title: Single Node Injection Label Specificity Attack on Graph Neural Networks
via Reinforcement Learning
- Authors: Dayuan Chen, Jian Zhang, Yuqian Lv, Jinhuan Wang, Hongjie Ni, Shanqing
Yu, Zhen Wang, and Qi Xuan
- Abstract summary: We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G$^2$-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
- Score: 8.666702832094874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved remarkable success in various
real-world applications. However, recent studies highlight the vulnerability of
GNNs to malicious perturbations. Previous adversaries primarily focus on graph
modifications or node injections to existing graphs, yielding promising results
but with notable limitations. Graph modification attack~(GMA) requires
manipulation of the original graph, which is often impractical, while graph
injection attack~(GIA) necessitates training a surrogate model in the black-box
setting, leading to significant performance degradation due to divergence
between the surrogate architecture and the actual victim model. Furthermore,
most methods concentrate on a single attack goal and lack a generalizable
adversary to develop distinct attack strategies for diverse goals, thus
limiting precise control over victim model behavior in real-world scenarios. To
address these issues, we present a gradient-free generalizable adversary that
injects a single malicious node to manipulate the classification result of a
target node in the black-box evasion setting. We propose Gradient-free
Generalizable Single Node Injection Attack, namely G$^2$-SNIA, a reinforcement
learning framework employing Proximal Policy Optimization. By directly querying
the victim model, G$^2$-SNIA learns patterns from exploration to achieve
diverse attack goals with extremely limited attack budgets. Through
comprehensive experiments over three acknowledged benchmark datasets and four
prominent GNNs in the most challenging and realistic scenario, we demonstrate
the superior performance of our proposed G$^2$-SNIA over the existing
state-of-the-art baselines. Moreover, by comparing G$^2$-SNIA with multiple
white-box evasion baselines, we confirm its capacity to generate solutions
comparable to those of the best adversaries.
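To make the attack procedure concrete, below is a minimal, illustrative sketch of a gradient-free, query-based single-node injection attack in the spirit of G$^2$-SNIA. The paper trains a PPO agent; this sketch substitutes a plain REINFORCE update and a toy victim GNN, so every class name, dimension, and budget in it is an assumption made for illustration rather than a detail taken from the paper.

```python
# Minimal sketch (not the authors' released code) of a gradient-free,
# query-based single-node injection attack. A policy picks discrete features
# for one injected node wired to the target; feedback comes only from
# querying the victim model's predictions. All sizes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_FEATURES, NUM_CLASSES, FEATURE_BUDGET = 32, 7, 5  # assumed sizes


class ToyVictimGCN(nn.Module):
    """Stand-in black-box victim: a single mean-aggregation layer."""

    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(NUM_FEATURES, NUM_CLASSES)

    @torch.no_grad()  # black-box: the attacker only sees output scores
    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return self.lin(adj @ x / deg)


class InjectionPolicy(nn.Module):
    """Maps the target node's features to logits over injectable features."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_FEATURES, 64), nn.ReLU(), nn.Linear(64, NUM_FEATURES)
        )

    def forward(self, target_feat):
        return self.net(target_feat)


def attack(x, adj, target, goal_label, victim, policy, opt, episodes=200):
    """Learn which FEATURE_BUDGET binary features the injected node carries."""
    n = x.size(0)
    inj_feat = torch.zeros(NUM_FEATURES)
    for _ in range(episodes):
        dist = torch.distributions.Categorical(logits=policy(x[target]))
        picks = dist.sample((FEATURE_BUDGET,))  # sampled with replacement
        inj_feat = torch.zeros(NUM_FEATURES).index_fill_(0, picks, 1.0)
        # Perturbed graph: one new node connected only to the target node.
        x_new = torch.cat([x, inj_feat.unsqueeze(0)], dim=0)
        adj_new = torch.zeros(n + 1, n + 1)
        adj_new[:n, :n] = adj
        adj_new[n, target] = adj_new[target, n] = adj_new[n, n] = 1.0
        # Query the victim; reward is the probability of the goal label.
        probs = F.softmax(victim(x_new, adj_new)[target], dim=-1)
        reward = probs[goal_label].item()
        loss = -dist.log_prob(picks).sum() * reward  # REINFORCE objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return inj_feat  # injected features from the final episode


if __name__ == "__main__":
    # Toy usage on a random 50-node graph with an attacker-chosen goal label.
    n = 50
    x = torch.rand(n, NUM_FEATURES)
    adj = ((torch.rand(n, n) < 0.05).float() + torch.eye(n)).clamp(max=1)
    adj = ((adj + adj.T) > 0).float()
    victim, policy = ToyVictimGCN(), InjectionPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    attack(x, adj, target=0, goal_label=3, victim=victim, policy=policy, opt=opt)
```

The essential pattern follows the abstract: the attacker never uses the victim's gradients, only queries its predictions, and the reward steers the injected node's features toward the attacker-specified label.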
Related papers
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph
Neural Networks via Reinforcement Learning [37.4570186471298]
We study the problem of black-box node injection attack, without training a potentially misleading surrogate model.
By directly querying the victim model, G2A2C learns to inject highly malicious nodes with extremely limited attacking budgets.
We demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers.
arXiv Detail & Related papers (2022-11-19T19:37:22Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack that operates by injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z) - Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Black-box Node Injection Attack for Graph Neural Networks [29.88729779937473]
We study the possibility of injecting nodes to evade the victim GNN model.
Specifically, we propose GA2C, a graph reinforcement learning framework.
We demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-02-18T19:17:43Z) - A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem, whose objective is to minimize the number of edges to be perturbed in a graph while maintaining the high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)