HGAttack: Transferable Heterogeneous Graph Adversarial Attack
- URL: http://arxiv.org/abs/2401.09945v1
- Date: Thu, 18 Jan 2024 12:47:13 GMT
- Title: HGAttack: Transferable Heterogeneous Graph Adversarial Attack
- Authors: He Zhao, Zhiwei Zeng, Yongwei Wang, Deheng Ye and Chunyan Miao
- Abstract summary: Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
- Score: 63.35560741500611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for
their performance in areas like the web and e-commerce, where resilience
against adversarial attacks is crucial. However, existing adversarial attack
methods, which are primarily designed for homogeneous graphs, fall short when
applied to HGNNs due to their limited ability to address the structural and
semantic complexity of HGNNs. This paper introduces HGAttack, the first
dedicated gray box evasion attack method for heterogeneous graphs. We design a
novel surrogate model to closely resemble the behaviors of the target HGNN and
utilize gradient-based methods for perturbation generation. Specifically, the
proposed surrogate model effectively leverages heterogeneous information by
extracting meta-path induced subgraphs and applying GNNs to learn node
embeddings with distinct semantics from each subgraph. This approach improves
the transferability of generated attacks on the target HGNN and significantly
reduces memory costs. For perturbation generation, we introduce a
semantics-aware mechanism that leverages subgraph gradient information to
autonomously identify vulnerable edges across a wide range of relations within
a constrained perturbation budget. We validate HGAttack's efficacy with
comprehensive experiments on three datasets, providing empirical analyses of
its generated perturbations. Outperforming baseline methods, HGAttack
demonstrates significant efficacy in diminishing the performance of target HGNN
models, affirming the effectiveness of our approach in evaluating the
robustness of HGNNs against adversarial attacks.
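
To make the surrogate design described above concrete, here is a minimal sketch, assuming a HAN-style setup: relation adjacencies are composed into meta-path induced subgraphs, a plain GCN encodes each subgraph, and the per-semantics embeddings are mean-aggregated before classification. The class names, dimensions, and mean aggregation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def metapath_adjacency(rel_adjs):
    """Compose relation adjacencies along a meta-path (e.g. Paper-Author,
    then Author-Paper) and binarize to obtain the induced homogeneous
    adjacency between the endpoint node type."""
    adj = rel_adjs[0]
    for rel in rel_adjs[1:]:
        adj = adj @ rel
    return (adj > 0).float()

class GCNLayer(nn.Module):
    """A single graph convolution with symmetric normalization."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        adj = adj + torch.eye(adj.size(0), device=adj.device)  # self-loops
        d_inv_sqrt = adj.sum(1).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return F.relu(self.lin(norm @ x))

class MetaPathSurrogate(nn.Module):
    """One GCN per meta-path induced subgraph; the per-semantics embeddings
    are mean-aggregated and fed to a linear classifier."""
    def __init__(self, in_dim, hid_dim, n_classes, n_metapaths):
        super().__init__()
        self.gcns = nn.ModuleList(
            [GCNLayer(in_dim, hid_dim) for _ in range(n_metapaths)])
        self.clf = nn.Linear(hid_dim, n_classes)

    def forward(self, metapath_adjs, x):
        zs = [gcn(adj, x) for gcn, adj in zip(self.gcns, metapath_adjs)]
        return self.clf(torch.stack(zs).mean(dim=0))

# Toy usage: a Paper-Author bipartite relation induces the PAP meta-path.
pa = (torch.rand(100, 30) > 0.9).float()    # Paper-Author incidence
pap = metapath_adjacency([pa, pa.t()])      # Paper-Author-Paper adjacency
model = MetaPathSurrogate(in_dim=16, hid_dim=32, n_classes=4, n_metapaths=1)
logits = model([pap], torch.randn(100, 16))
```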
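The perturbation step can likewise be sketched with a standard gray-box heuristic: score every candidate edge flip by a first-order (gradient) estimate of its effect on the surrogate loss, then keep the top-scoring flips within the budget. The function `select_edge_flips` and its scoring rule are hypothetical stand-ins in the spirit of HGAttack's semantics-aware mechanism, not a reproduction of it.

```python
import torch
import torch.nn.functional as F

def select_edge_flips(model, metapath_adjs, feats, labels, budget):
    """Rank candidate edge flips across all meta-path subgraphs by a
    first-order estimate of how much each flip increases the surrogate's
    loss; return the `budget` best (score, relation, src, dst) flips."""
    adjs = [a.detach().clone().requires_grad_(True) for a in metapath_adjs]
    loss = F.cross_entropy(model(adjs, feats), labels)
    grads = torch.autograd.grad(loss, adjs)

    candidates = []
    for rel, (a, g) in enumerate(zip(metapath_adjs, grads)):
        # Adding an absent edge (0 -> 1) helps the attack when the gradient
        # is positive; removing a present edge (1 -> 0) helps when negative.
        score = (g * (1.0 - 2.0 * a)).detach()
        vals, idx = score.flatten().topk(min(budget, score.numel()))
        n = a.size(1)
        candidates += [(v, rel, i // n, i % n)
                       for v, i in zip(vals.tolist(), idx.tolist())]

    candidates.sort(key=lambda c: c[0], reverse=True)
    return candidates[:budget]

# Toy usage with the surrogate sketched above (assumed interface):
# labels = torch.randint(0, 4, (100,))
# flips = select_edge_flips(model, [pap], torch.randn(100, 16),
#                           labels, budget=10)
```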
Related papers
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, especially on heterogeneous graphs, remains underexplored, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - GANI: Global Attacks on Graph Neural Networks via Imperceptible Node
Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z) - Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning
Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), that is aimed at improving the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)