HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure
Attack of Hypergraph Neural Networks
- URL: http://arxiv.org/abs/2302.12407v1
- Date: Fri, 24 Feb 2023 02:15:42 GMT
- Title: HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure
Attack of Hypergraph Neural Networks
- Authors: Chao Hu, Ruishi Yu, Binqi Zeng, Yu Zhan, Ying Fu, Quan Zhang, Rongkai
Liu and Heyuan Shi
- Abstract summary: Hypergraph neural networks (HGNN) have shown superior performance in various deep learning tasks.
While adversarial attacks on Graph Neural Networks (GNNs) are well studied, there are few studies on adversarial attacks against HGNNs.
We introduce HyperAttack, the first white-box adversarial attack framework against hypergraph neural networks.
- Score: 10.937499142803512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hypergraph neural networks (HGNN) have shown superior performance in various
deep learning tasks, leveraging the high-order representation ability to
formulate complex correlations among data by connecting two or more nodes
through hyperedge modeling. Although adversarial attacks on Graph Neural
Networks (GNNs) are well studied, there are few studies on adversarial attacks
against HGNNs, which poses a threat to the safety of HGNN applications. In this
paper, we introduce HyperAttack, the first white-box adversarial attack
framework against hypergraph neural networks. HyperAttack conducts a white-box
structure attack by perturbing hyperedge link status towards the target node
with the guidance of both gradients and integrated gradients. We evaluate
HyperAttack on the widely-used Cora and PubMed datasets and three hypergraph
neural networks with typical hypergraph modeling techniques. Compared to
state-of-the-art white-box structural attack methods for GNNs, HyperAttack
achieves a 10-20X improvement in time efficiency while also increasing attack
success rates by 1.3%-3.7%. The results show that HyperAttack can achieve
efficient adversarial attacks that balance effectiveness and time costs.
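
The abstract describes the attack mechanism only at a high level. As a rough illustration of the idea it names, the sketch below scores candidate hyperedge flips for a target node using both the plain gradient and an integrated-gradients estimate, taken with respect to the incidence matrix; the `model(X, H)` signature, the zero baseline, and all names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def attack_scores(model, H, X, y, target, steps=20):
    """Score candidate hyperedge flips for one target node by combining
    the plain gradient with an integrated-gradients estimate, both taken
    w.r.t. the target node's row of the incidence matrix H.
    Generic sketch only, not the HyperAttack authors' implementation."""
    H = H.clone().float()

    def loss_at(h):
        out = model(X, h)  # assumed signature: model(features, incidence)
        return F.cross_entropy(out[target:target + 1], y[target:target + 1])

    # Plain gradient of the target's loss w.r.t. the incidence matrix.
    H_var = H.clone().requires_grad_(True)
    grad = torch.autograd.grad(loss_at(H_var), H_var)[0][target]

    # Integrated gradients: average gradients along a straight-line path
    # from an all-zeros baseline (an assumption) to the current H.
    ig = torch.zeros_like(grad)
    for k in range(1, steps + 1):
        h_k = ((k / steps) * H).detach().requires_grad_(True)
        ig += torch.autograd.grad(loss_at(h_k), h_k)[0][target]
    ig *= H[target] / steps  # scale by (input - baseline) / steps

    # Orient scores so a positive value always means "this flip increases
    # the target's loss": +1 for absent links, -1 for present ones.
    direction = 1.0 - 2.0 * H[target]
    return direction * (grad + ig)

def perturb(model, H, X, y, target, budget=5):
    """Flip the `budget` highest-scoring hyperedge links of the target."""
    scores = attack_scores(model, H, X, y, target)
    H_adv = H.clone().float()
    for e in torch.topk(scores, budget).indices:
        H_adv[target, e] = 1.0 - H_adv[target, e]
    return H_adv
```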
Related papers
- Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges [1.089691789591201]
Hypergraph Neural Networks (HGNNs) are vulnerable to adversarial attacks.
We present a novel framework, Hypergraph Attacks via Injecting Homogeneous Nodes into Elite Hyperedges (IE-Attack).
arXiv Detail & Related papers (2024-12-24T11:48:41Z)
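
The IE-Attack summary names the idea but gives no details, so the following is a heavily hedged sketch: it takes the largest hyperedges as "elite" and injects a node whose features average one class's existing nodes as the "homogeneous" payload. Both interpretations, and every name here, are assumptions for illustration only.

```python
import torch

def inject_node(H, X, y, target_class, n_elite=3):
    """Hedged sketch of injecting a homogeneous node into elite hyperedges.
    Assumptions (not from the paper): 'elite' hyperedges are those with the
    most members, and the injected node's 'homogeneous' features are the
    mean of one class's existing node features."""
    edge_size = H.sum(dim=0)                          # members per hyperedge
    elite = torch.topk(edge_size, n_elite).indices    # largest hyperedges

    x_new = X[y == target_class].mean(dim=0, keepdim=True)

    # Append the injected node: one new incidence row, one new feature row.
    h_new = torch.zeros(1, H.size(1), dtype=H.dtype)
    h_new[0, elite] = 1.0
    return torch.cat([H, h_new], dim=0), torch.cat([X, x_new], dim=0)
```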
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
arXiv Detail & Related papers (2023-12-18T17:50:52Z)
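
The HyperGT entry above combines global attention over all nodes and hyperedges with preserved local connectivity. Below is a minimal sketch of that design, assuming one token per node and per hyperedge plus an incidence-matrix mixing step; the layer layout and the structural term are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HypergraphAttentionBlock(nn.Module):
    """Sketch of the idea behind HyperGT: treat every node and every
    hyperedge as a token, let full self-attention capture global
    correlations, and use the incidence matrix H to re-inject local
    structure. Sizes and the mixing step are illustrative assumptions."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, node_tok, edge_tok, H):
        # Global attention over the concatenated node + hyperedge tokens.
        tokens = torch.cat([node_tok, edge_tok], dim=1)
        out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + out)
        n = node_tok.size(1)
        nodes, edges = tokens[:, :n], tokens[:, n:]
        # Preserve local connectivity: mix each node with its hyperedges.
        deg = H.sum(dim=1, keepdim=True).clamp(min=1)
        nodes = nodes + (H @ edges.squeeze(0)) / deg
        return nodes, edges
```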
- Hard Label Black Box Node Injection Attack on Graph Neural Networks [7.176182084359572]
We propose a non-targeted Hard Label Black Box Node Injection Attack on Graph Neural Networks.
Our attack builds on an existing edge perturbation attack, restricting its optimization process to formulate a node injection attack.
In this work, we evaluate the performance of the attack using three datasets.
arXiv Detail & Related papers (2023-11-22T09:02:04Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
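
The DGA entry above hinges on making the attack differentiable so that every candidate perturbation is optimized at once. As a generic sketch of that relaxation (not the authors' exact formulation), the code below replaces binary edge flips with continuous scores, optimizes all of them jointly by gradient descent, and then discretizes under a flip budget; all names and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def differentiable_attack(model, A, X, y, budget=10, steps=200, lr=0.1):
    """Sketch of an 'everything at once' differentiable structure attack.
    A continuous flip score is kept for every node pair and optimized
    jointly; the top-`budget` scores are discretized into actual flips.
    Generic illustration, not the DGA authors' exact method."""
    n = A.size(0)
    theta = torch.zeros(n, n, requires_grad=True)     # edge-flip logits
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        p = torch.sigmoid((theta + theta.T) / 2)      # symmetric flip probs
        A_soft = A + (1 - 2 * A) * p                  # soft edge flips
        loss = -F.cross_entropy(model(X, A_soft), y)  # maximize model loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Discretize: flip the `budget` upper-triangular pairs with the
    # highest learned scores, keeping the graph symmetric.
    scores = torch.sigmoid((theta + theta.T) / 2).triu(1)
    idx = torch.topk(scores.flatten(), budget).indices
    rows = torch.div(idx, n, rounding_mode='floor')
    cols = idx % n
    A_adv = A.clone()
    A_adv[rows, cols] = 1 - A_adv[rows, cols]
    A_adv[cols, rows] = A_adv[rows, cols]
    return A_adv
```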
- Augmentations in Hypergraph Contrastive Learning: Fabricated and Generative [126.0985540285981]
We apply the contrastive learning approach from images/graphs (we refer to it as HyperGCL) to improve the generalizability of hypergraph neural networks.
We fabricate two schemes to augment hyperedges with higher-order relations encoded, and adopt three augmentation strategies from graph-structured data.
We propose a hypergraph generative model to generate augmented views, and then an end-to-end differentiable pipeline to jointly learn hypergraph augmentations and model parameters.
arXiv Detail & Related papers (2022-10-07T20:12:20Z)
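
Contrastive approaches like the one above typically train the encoder to agree on two augmented views of the same hypergraph. A minimal sketch of that standard objective follows, assuming an InfoNCE loss over node embeddings from two views (e.g., a fabricated and a generated augmentation); the pairing and names are illustrative.

```python
import torch
import torch.nn.functional as F

def hypergraph_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss between node embeddings z1, z2 produced by the
    same encoder on two augmented views of one hypergraph. Each node is
    its own positive pair across views; all other nodes are negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                             # cross-view similarity
    labels = torch.arange(z1.size(0), device=z1.device)  # node i <-> node i
    return F.cross_entropy(logits, labels)
```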
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve high attack success rates.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack strength for greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
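
The entry above argues that effective structure attacks tend to add inter-class edges, which also makes them easier to detect, hence a loss trading effectiveness against imperceptibility. Below is a hedged sketch of such a combined loss, using cosine feature similarity as the imperceptibility penalty; that penalty and the weighting are illustrative choices, not the paper's exact term.

```python
import torch
import torch.nn.functional as F

def balanced_attack_loss(logits, y, A_adv, X, lam=0.5):
    """Sketch of an attack loss balancing effectiveness and imperceptibility.
    Effectiveness: the victim's classification loss (to be maximized).
    Imperceptibility: penalize perturbed edges joining dissimilar nodes,
    an illustrative stand-in for the paper's actual regularizer."""
    effectiveness = F.cross_entropy(logits, y)

    Xn = F.normalize(X, dim=1)
    sim = Xn @ Xn.T                                   # node feature similarity
    dissim_edges = ((1 - sim) * A_adv).sum() / A_adv.sum().clamp(min=1)

    # Minimizing this maximizes misclassification while keeping the
    # perturbed graph's edges between similar (intra-class-like) nodes.
    return -(effectiveness - lam * dissim_edges)
```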
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
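
Pro-GNN's joint learning can be summarized as fitting a clean structure S close to the observed (possibly poisoned) adjacency A while pushing S toward the sparse, low-rank, feature-smooth structure of real graphs, alongside the GNN task loss. The sketch below writes out that combined objective; the coefficients and exact regularizer forms are illustrative, not taken from the authors' code.

```python
import torch

def prognn_objective(S, A, X, gnn_task_loss,
                     alpha=5e-4, beta=1.5, lam=1.0, gamma=1.0):
    """Sketch of a Pro-GNN-style joint objective: learn a clean adjacency S
    near the observed A, regularized to be sparse (L1), low-rank (nuclear
    norm), and feature-smooth (Laplacian quadratic form), together with
    the GNN's task loss. Coefficients are illustrative; Pro-GNN alternates
    updates of S and the GNN parameters."""
    recon = torch.norm(S - A, p='fro') ** 2             # stay close to A
    sparsity = S.abs().sum()                            # few edges
    low_rank = torch.linalg.matrix_norm(S, ord='nuc')   # low-rank structure
    laplacian = torch.diag(S.sum(dim=1)) - S
    smoothness = torch.trace(X.T @ laplacian @ X)       # neighbors look alike
    return (recon + alpha * sparsity + beta * low_rank
            + lam * smoothness + gamma * gnn_task_loss)
```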
This list is automatically generated from the titles and abstracts of the papers on this site.