Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph
Neural Networks via Reinforcement Learning
- URL: http://arxiv.org/abs/2211.10782v1
- Date: Sat, 19 Nov 2022 19:37:22 GMT
- Title: Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph
Neural Networks via Reinforcement Learning
- Authors: Mingxuan Ju, Yujie Fan, Chuxu Zhang, Yanfang Ye
- Abstract summary: We study the problem of black-box node injection attack, without training a potentially misleading surrogate model.
By directly querying the victim model, G2A2C learns to inject highly malicious nodes with extremely limited attacking budgets.
We demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers.
- Score: 37.4570186471298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have drawn significant attentions over the years
and been broadly applied to essential applications requiring solid robustness
or vigorous security standards, such as product recommendation and user
behavior modeling. Under these scenarios, exploiting GNN's vulnerabilities and
further downgrading its performance become extremely incentive for adversaries.
Previous attackers mainly focus on structural perturbations or node injections
to existing graphs, guided by gradients from surrogate models. Although they
deliver promising results, several limitations remain. To launch a structural
perturbation attack, adversaries must manipulate the existing graph topology,
which is impractical in most circumstances. Node injection attacks, though
more practical, currently require training surrogate models to simulate a
white-box setting, which leads to significant performance degradation when the
surrogate architecture diverges from the actual victim model. To bridge these
gaps, in this paper, we study the problem of black-box node injection attack,
without training a potentially misleading surrogate model. Specifically, we
model the node injection attack as a Markov decision process and propose
Gradient-free Graph Advantage Actor Critic, namely G2A2C, a reinforcement
learning framework in the fashion of advantage actor critic. By directly
querying the victim model, G2A2C learns to inject highly malicious nodes with
extremely limited attacking budgets, while maintaining a similar node feature
distribution. Through our comprehensive experiments over eight acknowledged
benchmark datasets with different characteristics, we demonstrate the superior
performance of our proposed G2A2C over the existing state-of-the-art attackers.
Source code is publicly available at: https://github.com/jumxglhf/G2A2C.
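
Because the abstract describes the method only at a high level, a minimal
sketch may help fix ideas: single-node injection cast as a one-step decision
process, with an advantage actor-critic agent rewarded purely by querying a
black-box victim. The toy victim, network sizes, reward definition, and budget
below are all illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of gradient-free node injection with advantage actor-critic.
# The attacker never backpropagates through the victim; it only queries it
# and turns the change in predicted probabilities into an RL reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, N_CLASSES = 16, 4  # illustrative sizes

class ToyVictim(nn.Module):
    """Stand-in for the unknown victim GNN: one mean-aggregation layer."""
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(FEAT_DIM, N_CLASSES)

    def forward(self, x, edge_index):
        src, dst = edge_index
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(src.size(0), 1))
        return F.softmax(self.lin((x + agg) / (deg + 1)), dim=-1)

def query_victim(victim, x, edge_index, target):
    """Black-box query: class probabilities for `target`, no attacker gradients."""
    with torch.no_grad():
        return victim(x, edge_index)[target]

class ActorCritic(nn.Module):
    """Actor proposes features for the injected node; critic estimates value."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU())
        self.actor = nn.Linear(64, FEAT_DIM)  # mean of the feature policy
        self.critic = nn.Linear(64, 1)

    def forward(self, target_feat):
        h = self.body(target_feat)
        return torch.tanh(self.actor(h)), self.critic(h)

def attack_step(agent, victim, x, edge_index, target, y_true, optim):
    p_before = query_victim(victim, x, edge_index, target)
    mean, value = agent(x[target])
    dist = torch.distributions.Normal(mean, 0.1)  # fixed exploration noise
    feat = dist.sample()
    # Inject one node (budget = 1) wired to the target, then re-query.
    new_id = x.size(0)
    x2 = torch.cat([x, feat.unsqueeze(0)], dim=0)
    link = torch.tensor([[new_id, target], [target, new_id]])
    p_after = query_victim(victim, x2, torch.cat([edge_index, link], dim=1), target)
    # Reward: drop in the victim's confidence in the true label.
    reward = (p_before[y_true] - p_after[y_true]).item()
    advantage = reward - value.squeeze()  # one-step advantage estimate
    loss = -dist.log_prob(feat).sum() * advantage.detach() + advantage.pow(2)
    optim.zero_grad(); loss.backward(); optim.step()
    return reward

# Toy usage: attack node 0 of a random graph for a few episodes.
x, edge_index = torch.randn(10, FEAT_DIM), torch.randint(0, 10, (2, 30))
agent, victim = ActorCritic(), ToyVictim()
optim = torch.optim.Adam(agent.parameters(), lr=1e-3)
for _ in range(20):
    attack_step(agent, victim, x, edge_index, target=0, y_true=2, optim=optim)
```

Note the design point the abstract emphasizes: gradients flow only through the
agent's own policy, never through the victim, so no surrogate model is needed.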
Related papers
- Hard Label Black Box Node Injection Attack on Graph Neural Networks [7.176182084359572]
We propose a non-targeted hard-label black-box node injection attack on Graph Neural Networks.
Our attack builds on an existing edge perturbation attack, whose optimization process we restrict to formulate a node injection attack.
We evaluate the performance of the attack on three datasets.
arXiv Detail & Related papers (2023-11-22T09:02:04Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6x less training time and an 11x smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
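
The DGA entry above invokes differentiability without spelling it out. A
common way to make a structure attack differentiable (sketched here as a
generic illustration, not DGA's actual algorithm) is to relax binary edge
flips into continuous weights, backpropagate the victim's loss through a
white-box model, and discretize afterwards; the white-box requirement is
precisely what G2A2C's query-only design avoids. Names and shapes below are
illustrative.

```python
# Generic illustration of a differentiable structure attack: relax each
# candidate edge flip into a continuous weight and ascend the victim's loss.
import torch
import torch.nn.functional as F

n, d, c = 10, 16, 4                                  # nodes, features, classes
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.2).float()
adj = ((adj + adj.T) > 0).float()                    # symmetric binary adjacency
W = torch.randn(d, c)                                # frozen white-box weights
y = torch.randint(0, c, (n,))

# Relaxed edge flips: sigmoid(perturb) ~ probability of flipping each entry.
perturb = torch.full((n, n), -3.0, requires_grad=True)
opt = torch.optim.Adam([perturb], lr=0.1)
for _ in range(100):
    p = torch.sigmoid(perturb)
    a = adj * (1 - p) + (1 - adj) * p                # soft-flipped adjacency
    a_hat = a / a.sum(1, keepdim=True).clamp(min=1)  # row-normalized propagation
    logits = a_hat @ x @ W                           # one message-passing step
    loss = -F.cross_entropy(logits, y)               # ascend the victim's loss
    opt.zero_grad(); loss.backward(); opt.step()

# Discretize: flip only the few most influential entries (tiny budget).
flip_idx = torch.sigmoid(perturb).flatten().topk(4).indices
```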
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G2-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
arXiv Detail & Related papers (2023-05-04T15:10:41Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
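
The message-passing intuition in the entry above can be checked numerically:
under mean aggregation, one inter-class edge moves the target's representation
far more than an intra-class edge. A toy example with made-up two-dimensional
features:

```python
# Toy check of the inter-class-edge intuition: with mean aggregation,
# wiring a dissimilar neighbor shifts the target's representation most.
import torch

target = torch.tensor([1.0, 0.0])   # feature prototype of class A
same = torch.tensor([0.9, 0.1])     # a similar (intra-class) neighbor
diff = torch.tensor([0.0, 1.0])     # a dissimilar (inter-class) neighbor

base = target                       # representation with no neighbors
with_same = (target + same) / 2     # mean aggregation over {self, same}
with_diff = (target + diff) / 2     # mean aggregation over {self, diff}

print((with_same - base).norm())    # small shift: ~0.07
print((with_diff - base).norm())    # large shift: ~0.71
```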
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we devise a camouflaged node injection attack, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Black-box Node Injection Attack for Graph Neural Networks [29.88729779937473]
We study the possibility of injecting nodes to evade the victim GNN model.
Specifically, we propose GA2C, a graph reinforcement learning framework.
We demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-02-18T19:17:43Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
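
To make the trigger-as-subgraph idea in the Graph Backdoor entry concrete,
the sketch below grafts a fixed trigger (a triangle with its own features,
both illustrative) onto an anchor node. The actual GTA attack optimizes the
trigger jointly with a trojaned model, which this toy helper does not attempt.

```python
# Toy illustration of the trigger-as-subgraph idea: a backdoor trigger is a
# small graph pattern (topology + node features) grafted onto a victim graph.
import torch

FEAT_DIM = 16  # illustrative feature width

def attach_trigger(x, edge_index, anchor, trig_x, trig_edges):
    """Graft a trigger subgraph onto node `anchor`; return the modified graph."""
    offset = x.size(0)                           # fresh ids for trigger nodes
    x2 = torch.cat([x, trig_x], dim=0)           # append trigger features
    shifted = trig_edges + offset                # re-index trigger edges
    bridge = torch.tensor([[anchor], [offset]])  # wire anchor to trigger node 0
    e2 = torch.cat([edge_index, shifted, bridge, bridge.flip(0)], dim=1)
    return x2, e2

# A 3-node triangle trigger with (here random) illustrative features.
trig_x = torch.randn(3, FEAT_DIM)
trig_edges = torch.tensor([[0, 1, 2, 1, 2, 0],
                           [1, 2, 0, 0, 1, 2]])
x, edge_index = torch.randn(10, FEAT_DIM), torch.randint(0, 10, (2, 30))
x2, e2 = attach_trigger(x, edge_index, anchor=4,
                        trig_x=trig_x, trig_edges=trig_edges)
```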