TDGIA: Effective Injection Attacks on Graph Neural Networks
- URL: http://arxiv.org/abs/2106.06663v1
- Date: Sat, 12 Jun 2021 01:53:25 GMT
- Title: TDGIA: Effective Injection Attacks on Graph Neural Networks
- Authors: Xu Zou, Qinkai Zheng, Yuxiao Dong, Xinyu Guan, Evgeny Kharlamov,
Jialiang Lu, Jie Tang
- Abstract summary: We study a recently introduced, realistic attack scenario on graphs: the graph injection attack (GIA).
In the GIA scenario, the adversary cannot modify the existing link structure or node attributes of the input graph; instead, the attack is performed by injecting adversarial nodes into it.
We analyze the topological vulnerability of GNNs under the GIA setting and, based on this analysis, propose the Topological Defective Graph Injection Attack (TDGIA) for effective injection attacks.
- Score: 21.254710171416374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved promising performance in various
real-world applications. However, recent studies have shown that GNNs are
vulnerable to adversarial attacks. In this paper, we study a recently
introduced, realistic attack scenario on graphs: the graph injection attack
(GIA). In the GIA scenario, the adversary cannot modify the existing link
structure or node attributes of the input graph; instead, the attack is
performed by injecting adversarial nodes into it. We present an analysis of
the topological vulnerability of GNNs under the GIA setting, based on which
we propose the Topological Defective Graph Injection Attack (TDGIA) for
effective injection attacks. TDGIA first introduces a topological defective
edge selection strategy to choose which original nodes to connect to the
injected ones. It then designs a smooth feature optimization objective to
generate the features of the injected nodes. Extensive experiments on
large-scale datasets show that TDGIA consistently and significantly
outperforms various attack baselines in attacking dozens of defense GNN
models. Notably, the performance drop on target GNNs caused by TDGIA is more
than double the damage from the best attack among hundreds of submissions
to KDD-CUP 2020.
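To make the two-stage pipeline concrete, here is a minimal, illustrative sketch (not the authors' released code). It assumes a dense adjacency matrix, a differentiable surrogate `model(adj, x)` returning logits, and features bounded via tanh as a stand-in for the paper's feature clipping; the inverse-degree defect score captures only the intuition that low-degree nodes are most easily swayed by a newly injected neighbor, and the smooth loss below is a simple surrogate, not the paper's exact objective.

```python
import torch

def select_defective_targets(adj, n_targets):
    """Stage 1: pick original nodes to receive edges from injected nodes."""
    deg = adj.sum(dim=1)
    defect = 1.0 / (deg + 1.0)            # low degree -> easier to sway
    return torch.topk(defect, n_targets).indices

def smooth_feature_attack(model, adj, x, labels, targets,
                          n_inject, k, steps=100, lr=0.1):
    """Stage 2: optimize injected-node features against a smooth objective.
    Expects len(targets) == n_inject * k; features bounded via tanh."""
    n = x.size(0)
    adj_atk = torch.zeros(n + n_inject, n + n_inject)
    adj_atk[:n, :n] = adj
    for i in range(n_inject):             # wire injected node i to k targets
        for t in targets[i * k:(i + 1) * k].tolist():
            adj_atk[n + i, t] = adj_atk[t, n + i] = 1.0
    x_inj = torch.zeros(n_inject, x.size(1), requires_grad=True)
    opt = torch.optim.Adam([x_inj], lr=lr)
    for _ in range(steps):
        logits = model(adj_atk, torch.cat([x, torch.tanh(x_inj)]))
        p = logits.softmax(-1)[targets, labels[targets]]
        loss = p.mean()                   # drive the targets' true-class
        opt.zero_grad(); loss.backward(); opt.step()  # probability down
    return adj_atk, torch.tanh(x_inj).detach()
```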
Related papers
- Node Injection Attack Based on Label Propagation Against Graph Neural Network [11.410811769066209]
Graph Neural Networks (GNNs) have achieved remarkable success in various graph learning tasks, such as node classification, link prediction, and graph classification.
An attacker can easily perturb the aggregation process by injecting fake nodes, which shows that GNNs are vulnerable to graph injection attacks.
We propose the label-propagation-based global injection attack (LPGIA), which conducts a graph injection attack on the node classification task.
arXiv Detail & Related papers (2024-05-29T07:09:16Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state of the art, DGA achieves nearly equivalent attack performance with 6x less training time and an 11x smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning [8.666702832094874]
We present a gradient-free, generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G²-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets. A gradient-free, query-based sketch in this spirit follows this entry.
arXiv Detail & Related papers (2023-05-04T15:10:41Z)
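The sketch below illustrates the black-box, gradient-free setting with a simple greedy random search under a query budget; it is not the paper's reinforcement-learning formulation. A `query(adj, x)` oracle returning class probabilities and binary node features are assumptions.

```python
import numpy as np

def single_node_injection(query, adj, x, target, y_atk,
                          budget=500, max_deg=3):
    """Greedy random search: keep a proposal only if it raises the
    victim's probability of the attacker's label y_atk on the target."""
    n, d = x.shape
    feat = np.zeros(d)                    # injected node's (binary) features
    edges = [target]                      # always connect to the target node

    def prob(f, e):
        adj2 = np.pad(adj, ((0, 1), (0, 1)))   # room for the injected node
        adj2[n, e] = adj2[e, n] = 1
        return query(adj2, np.vstack([x, f]))[target, y_atk]

    best = prob(feat, edges)
    for _ in range(budget):               # one victim query per proposal
        f, e = feat.copy(), list(edges)
        if np.random.rand() < 0.5:        # flip one feature bit...
            i = np.random.randint(d)
            f[i] = 1 - f[i]
        elif len(e) < max_deg:            # ...or wire to one more node
            e.append(np.random.randint(n))
        p = prob(f, e)
        if p > best:                      # keep improving proposals only
            feat, edges, best = f, e, p
    return feat, edges
```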
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the graph injection attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack that operates by injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbations of the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate. A greedy hard-label sketch of this objective follows this entry.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
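A minimal sketch of that objective under hard-label access: start from a perturbation that already flips the prediction, then greedily revert edge flips while the attack still succeeds. The `predict(adj)` oracle returning only a class label is an assumption, and the authors solve the problem with a more refined optimization than this greedy loop.

```python
import numpy as np

def minimize_perturbation(predict, adj, adj_adv, y_true):
    """predict(adj) returns only a class label (hard-label black box)."""
    assert predict(adj_adv) != y_true, "need a successful starting point"
    flipped = list(zip(*np.nonzero(np.triu(adj != adj_adv))))
    np.random.shuffle(flipped)
    cur = adj_adv.copy()
    for i, j in flipped:                  # try to undo each edge flip
        trial = cur.copy()
        trial[i, j] = trial[j, i] = adj[i, j]
        if predict(trial) != y_true:      # still misclassified: keep it
            cur = trial
    return cur                            # fewer perturbed edges, same effect
```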
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Most existing works on attacking GNNs use gradient information to guide the attack and achieve outstanding performance.
We argue that their main limitation is that they must operate on the whole graph, so time and space complexity grow with the scale of the data.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data. A sketch of computing DAC follows this entry.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
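Degree assortativity is the Pearson correlation between the degrees at the two ends of each edge, so a natural reading of DAC is the shift of that coefficient under attack; taking the absolute difference is an assumption here. A sketch with networkx:

```python
import networkx as nx

def degree_assortativity_change(G_clean, G_attacked):
    r0 = nx.degree_assortativity_coefficient(G_clean)
    r1 = nx.degree_assortativity_coefficient(G_attacked)
    return abs(r0 - r1)

# Attacks that wire hubs to low-degree nodes shift assortativity sharply.
G = nx.barabasi_albert_graph(200, 3, seed=0)
G_atk = G.copy()
G_atk.add_edges_from((0, v) for v in range(190, 199))  # hub-to-leaf edges
print(degree_assortativity_change(G, G_atk))
```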
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, comprising both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks. An illustrative trigger-injection sketch follows this entry.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
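The sketch below grafts a fixed clique trigger with constant features onto a host graph; GTA itself optimizes the trigger topology and features per input, so treat the fixed trigger, `feat_val`, and the `attach_to` choice as illustrative assumptions.

```python
import numpy as np

def inject_trigger(adj, x, trigger_size=3, feat_val=1.0, attach_to=0):
    """Graft a clique trigger with constant features onto a host graph."""
    n = adj.shape[0]
    m = n + trigger_size
    adj2 = np.zeros((m, m))
    adj2[:n, :n] = adj
    adj2[n:, n:] = 1 - np.eye(trigger_size)      # trigger topology: a clique
    adj2[n, attach_to] = adj2[attach_to, n] = 1  # single grafting edge
    x2 = np.vstack([x, np.full((trigger_size, x.shape[1]), feat_val)])
    return adj2, x2

# Poisoning: attach the trigger to a fraction of training graphs and
# relabel them to the attacker's class, e.g. inject_trigger(a, f) + y_target.
```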
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools for representation learning on graphs.
Recent studies show that GNNs are vulnerable to carefully crafted perturbations, called adversarial attacks.
We propose a general framework, Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model. A compact sketch of such a joint objective follows this entry.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
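A compact sketch of joint structure-and-model learning in the spirit of Pro-GNN: the learned structure S is pulled toward sparsity, feature smoothness, and the observed adjacency while the GNN trains on it. The paper's low-rank (nuclear-norm) term and proximal updates are omitted, and the coefficients and `model(S, X)` signature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def prognn_step(model, opt_w, S, opt_s, A, X, y, train_mask,
                alpha=5e-4, beta=1.0, gamma=1.0):
    """One joint gradient step on GNN weights and the learned structure S;
    the paper alternates the two and adds a nuclear-norm term."""
    S_sym = (S + S.t()) / 2                      # keep the graph symmetric
    L = torch.diag(S_sym.sum(1)) - S_sym         # Laplacian of learned graph
    smooth = torch.trace(X.t() @ L @ X)          # feature smoothness tr(X'LX)
    fit = F.cross_entropy(model(S_sym, X)[train_mask], y[train_mask])
    loss = (fit
            + alpha * S_sym.abs().sum()          # sparsity
            + beta * smooth                      # neighbors share features
            + gamma * (S_sym - A).pow(2).sum())  # stay near observed graph
    opt_w.zero_grad(); opt_s.zero_grad()
    loss.backward()
    opt_w.step(); opt_s.step()
    with torch.no_grad():
        S.clamp_(0, 1)                           # valid edge weights
    return loss.item()
```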
This list is automatically generated from the titles and abstracts of the papers on this site.