GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections
- URL: http://arxiv.org/abs/2210.12598v1
- Date: Sun, 23 Oct 2022 02:12:26 GMT
- Title: GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections
- Authors: Junyuan Fang, Haixian Wen, Jiajing Wu, Qi Xuan, Zibin Zheng, Chi K.
Tse
- Abstract summary: Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
- Score: 20.18085461668842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have found successful applications in various
graph-related tasks. However, recent studies have shown that many GNNs are
vulnerable to adversarial attacks. In the vast majority of existing studies,
adversarial attacks on GNNs are launched by directly modifying the original
graph, e.g., adding or removing links, which may not be feasible in
practice. In this paper, we focus on a realistic attack operation via injecting
fake nodes. The proposed Global Attack strategy via Node Injection (GANI) is
designed under the comprehensive consideration of an unnoticeable perturbation
setting from both structure and feature domains. Specifically, to make the node
injections as imperceptible and effective as possible, we propose a sampling
operation to determine the degree of the newly injected nodes, and then
generate features and select neighbors for these injected nodes based on the
statistical information of features and evolutionary perturbations obtained
from a genetic algorithm, respectively. In particular, the proposed feature
generation mechanism is suitable for both binary and continuous node features.
Extensive experimental results on benchmark datasets against both general and
defended GNNs show the strong attack performance of GANI. Moreover,
imperceptibility analyses demonstrate that the injections performed by GANI
remain relatively unnoticeable.
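As a reading aid, the pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: it assumes dense numpy adjacency/feature matrices, the function names and GA hyperparameters are invented for the example, and the attack objective `fitness` (e.g., the accuracy drop of a surrogate model after injection) is left to the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_degree(adj):
    """Draw the injected node's degree from the clean graph's empirical
    degree distribution, so it does not stand out as an unusual hub."""
    degrees = adj.sum(axis=1).astype(int)
    return int(rng.choice(degrees))

def generate_features(feat, binary):
    """Build one feature vector from per-dimension statistics of the
    clean features: frequently active bits for binary features, the
    empirical mean with small noise for continuous ones."""
    if binary:
        freq = feat.mean(axis=0)                   # how often each bit is set
        k = int(round(feat.sum(axis=1).mean()))    # typical count of active bits
        x = np.zeros(feat.shape[1])
        x[np.argsort(freq)[-k:]] = 1.0             # switch on the most common bits
        return x
    return feat.mean(axis=0) + 0.01 * feat.std(axis=0) * rng.standard_normal(feat.shape[1])

def select_neighbors_ga(candidates, degree, fitness, pop=20, gens=30, p_mut=0.1):
    """Evolve neighbor sets of size `degree` with a simple genetic
    algorithm; `fitness(neighbors)` scores attack effectiveness."""
    population = [rng.choice(candidates, size=degree, replace=False)
                  for _ in range(pop)]
    for _ in range(gens):
        scores = [fitness(ind) for ind in population]
        order = np.argsort(scores)[::-1]
        parents = [population[i] for i in order[: pop // 2]]   # selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.choice(len(parents), size=2, replace=False)
            pool = np.union1d(parents[a], parents[b])          # crossover
            child = rng.choice(pool, size=degree, replace=False)
            if rng.random() < p_mut:                           # mutation
                child[rng.integers(degree)] = rng.choice(
                    np.setdiff1d(candidates, child))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```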
Related papers
- Node Injection Attack Based on Label Propagation Against Graph Neural Network [11.410811769066209]
Graph Neural Networks (GNNs) have achieved remarkable success in various graph learning tasks, such as node classification, link prediction, and graph classification.
An attacker can easily perturb the aggregation process by injecting fake nodes, which shows that GNNs are vulnerable to graph injection attacks.
We propose the label-propagation-based global injection attack (LPGIA) which conducts the graph injection attack on the node classification task.
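The summary does not spell out the procedure; for context, a bare label-propagation step (the signal LPGIA reportedly builds on) can be sketched as below. The clamped-propagation form and the `alpha` weight are standard choices, not taken from the paper.

```python
import numpy as np

def propagate_labels(adj, labels_onehot, n_iter=10, alpha=0.9):
    """Classic label propagation on a row-normalized adjacency: labels
    diffuse along edges while being clamped toward the seed labels.
    LPGIA (per its abstract) uses label-propagation signals to decide
    where fake nodes should connect; only the propagation is shown."""
    P = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1)  # row-stochastic
    y = labels_onehot.astype(float)
    for _ in range(n_iter):
        y = alpha * (P @ y) + (1 - alpha) * labels_onehot
    return y.argmax(axis=1)  # hard labels after diffusion
```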
arXiv Detail & Related papers (2024-05-29T07:09:16Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, which aims to assess the performance of a specific GNN model trained on labeled, observed graphs when it is applied to unseen graphs without labels.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
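The two-stage framework is only named above; as a very loose schematic of stage (2), one could regress accuracy from summary statistics of the model's soft predictions. The `confidence_features` construction below is an invented stand-in for the paper's DiscGraph-based supervision.

```python
import numpy as np

def confidence_features(probs):
    """Summary statistics of a GNN's soft predictions on one graph;
    an invented stand-in for the discrepancy features GNNEvaluator
    derives from its DiscGraph set."""
    top = probs.max(axis=1)                                   # top-1 confidence
    margin = top - np.sort(probs, axis=1)[:, -2]              # top-1 vs top-2 gap
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)    # predictive entropy
    return np.array([top.mean(), margin.mean(), entropy.mean()])

def fit_evaluator(prob_sets, accuracies):
    """Least-squares fit from prediction statistics to known accuracies
    on graphs where labels exist; the fitted weights then estimate
    accuracy on an unseen, unlabeled graph."""
    X = np.stack([confidence_features(p) for p in prob_sets])
    X = np.hstack([X, np.ones((len(X), 1))])                  # bias column
    w, *_ = np.linalg.lstsq(X, np.asarray(accuracies), rcond=None)
    return w

# estimated_acc = confidence_features(test_probs) @ w[:-1] + w[-1]
```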
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
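A toy version of the 1-hop-view constraint: one agent round below uses only direct neighbors' states, so influence travels at most one hop per round. The mean aggregation and `tanh` update are illustrative choices, not the paper's agent design.

```python
import numpy as np

def agent_round(adj, states, W):
    """One synchronous round of 1-hop-view agents: each node updates
    from its own state and the mean of its direct neighbors' states.
    A message therefore needs k rounds to reach nodes k hops away,
    which is what limits globally optimized attacks."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    neigh_mean = (adj @ states) / deg          # 1-hop aggregation only
    return np.tanh(np.hstack([states, neigh_mean]) @ W)
```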
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
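"Homophilous augmentation" is not detailed in the summary; one plausible flavor, sketched below under that assumption, is to drop edges whose endpoints have low feature similarity, since injected nodes tend to create heterophilous links. The cosine threshold `tau` is invented.

```python
import numpy as np

def homophilous_prune(adj, feat, tau=0.2):
    """Keep an edge only if its endpoints' features are cosine-similar;
    a guess at the flavor of homophilous augmentation, not CHAGNN's
    actual (cooperative, model-guided) procedure."""
    norms = np.linalg.norm(feat, axis=1, keepdims=True)
    sim = (feat @ feat.T) / np.maximum(norms @ norms.T, 1e-12)
    return adj * (sim >= tau)                  # prune heterophilous edges
```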
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we study camouflaged node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
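One simple way to read "making injected nodes appear normal" is to blend the adversarial features toward local statistics, as sketched below; the fixed blend weight `lam` is an illustrative simplification of the trade-off the paper optimizes.

```python
import numpy as np

def camouflage_features(x_adv, feat, neighbors, lam=0.5):
    """Pull an injected node's adversarial feature vector toward the
    mean of its chosen neighbors, trading raw attack strength for
    imperceptibility to feature-based detectors."""
    return lam * x_adv + (1 - lam) * feat[neighbors].mean(axis=0)
```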
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Detecting Topology Attacks against Graph Neural Networks [39.968619861265395]
We study the victim node detection problem under topology attacks against GNNs.
Our approach builds on a key observation rooted in the intrinsic message-passing nature of GNNs.
arXiv Detail & Related papers (2022-04-21T13:08:25Z)
- TDGIA: Effective Injection Attacks on Graph Neural Networks [21.254710171416374]
We study a recently introduced realistic attack scenario on graphs: the graph injection attack (GIA).
In the GIA scenario, the adversary cannot modify the existing link structure or node attributes of the input graph; instead, the attack is performed by injecting adversarial nodes into it.
We present an analysis of the topological vulnerability of GNNs under the GIA setting, based on which we propose the Topological Defective Graph Injection Attack (TDGIA) for effective injection attacks.
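The GIA constraint stated above is mechanical enough to show directly: injection grows the matrices while every original entry stays frozen. This sketches only the setting, not TDGIA's defective-edge selection or feature optimization.

```python
import numpy as np

def inject_node(adj, feat, neighbors, x_new):
    """Add one injected node wired to `neighbors`. The clean graph's
    adjacency and features are copied unchanged, since a GIA adversary
    may not edit existing links or attributes."""
    n = adj.shape[0]
    new_adj = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    new_adj[:n, :n] = adj                         # clean graph is frozen
    new_adj[n, neighbors] = 1                     # injected node's edges
    new_adj[neighbors, n] = 1
    return new_adj, np.vstack([feat, x_new])
```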
arXiv Detail & Related papers (2021-06-12T01:53:25Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose the Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
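The abstract does not define the matching objective; below is a sketch of one uncertainty-weighted consistency loss between a GNN and a structure-free surrogate. The entropy-based weighting and KL form are assumptions about the general idea, not UM-GNN's exact formulation.

```python
import numpy as np

def uncertainty_matching_loss(p_gnn, p_surrogate):
    """Align a structure-free surrogate with the GNN, but down-weight
    nodes where the GNN is uncertain (high predictive entropy), so
    predictions corrupted by poisoned structure transfer less."""
    eps = 1e-12
    entropy = -(p_gnn * np.log(p_gnn + eps)).sum(axis=1)
    weights = np.exp(-entropy)                     # confident nodes count more
    kl = (p_gnn * (np.log(p_gnn + eps) - np.log(p_surrogate + eps))).sum(axis=1)
    return float((weights * kl).mean())
```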
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
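Since the entry's key point is that triggers are subgraphs, a minimal "attach a trigger subgraph" helper is sketched below; the single bridge edge and static trigger are simplifications, as GTA tailors the trigger's topology and features to each input.

```python
import numpy as np

def attach_trigger(adj, feat, trig_adj, trig_feat, anchor):
    """Append a trigger subgraph to a host graph and wire its first
    node to `anchor`; inputs poisoned this way would carry the
    backdoor pattern at inference time."""
    n, t = adj.shape[0], trig_adj.shape[0]
    out = np.zeros((n + t, n + t), dtype=adj.dtype)
    out[:n, :n] = adj                    # host graph
    out[n:, n:] = trig_adj               # trigger topology
    out[anchor, n] = out[n, anchor] = 1  # single bridge edge
    return out, np.vstack([feat, trig_feat])
```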
arXiv Detail & Related papers (2020-06-21T19:45:30Z)