AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks
- URL: http://arxiv.org/abs/2412.07468v2
- Date: Thu, 17 Apr 2025 16:13:40 GMT
- Title: AHSG: Adversarial Attack on High-level Semantics in Graph Neural Networks
- Authors: Kai Yuan, Jiahao Zhang, Yidi Wang, Xiaobing Pei
- Abstract summary: Adversarial attacks on Graph Neural Networks aim to degrade the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and limiting changes to graph properties. We propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), a graph structure attack model that ensures the retention of primary semantics.
- Score: 8.512355226572254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks on Graph Neural Networks aim to degrade the performance of the learner by carefully modifying the graph topology and node attributes. Existing methods achieve attack stealthiness by constraining the modification budget and limiting changes to graph properties. However, these methods typically disrupt task-relevant primary semantics directly, which makes the attack easy to defend against and to detect. In this paper, we propose an Adversarial Attack on High-level Semantics for Graph Neural Networks (AHSG), a graph structure attack model that ensures the retention of primary semantics. By combining latent representations with shared primary semantics, our model retains the detectable attributes and relational patterns of the original graph while leveraging more subtle changes to carry out the attack. We then use the Projected Gradient Descent algorithm to map the latent representations carrying the attack effect to an adversarial graph. Through experiments on robust graph deep learning models equipped with defense strategies, we demonstrate that AHSG outperforms other state-of-the-art methods in attack effectiveness. Additionally, detection experiments with Contextual Stochastic Block Models on the attacked graph further validate that our method preserves the primary semantics of the graph.
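For orientation, the following is a minimal sketch of the kind of projected-gradient structure attack the abstract describes; it is not the authors' AHSG implementation, and the `model(feats, adj)` interface, loss, budget projection, and discretization rule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_structure_attack(model, adj, feats, labels, budget, steps=100, lr=0.1):
    """Relax binary edge flips to a continuous matrix, run gradient ascent
    on the task loss, project onto the flip budget, then discretize."""
    n = adj.size(0)
    delta = torch.zeros(n, n, requires_grad=True)  # relaxed flip scores
    for _ in range(steps):
        flips = (delta + delta.t()) / 2            # keep the perturbation symmetric
        adj_pert = adj + flips * (1 - 2 * adj)     # a flip score of 1 toggles an edge
        loss = F.cross_entropy(model(feats, adj_pert), labels)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += lr * grad                     # ascend the attack loss
            delta.clamp_(0, 1)                     # stay in the relaxed [0, 1] box
            if delta.sum() > budget:               # crude rescaling onto the budget
                delta *= budget / delta.sum()
    # Discretize: apply the top-scoring flips within the budget.
    # (Symmetric entries mean each undirected flip is counted twice here;
    # this is a sketch, not a tight projection.)
    scores = ((delta + delta.t()) / 2).detach().flatten()
    mask = torch.zeros_like(scores)
    mask[scores.topk(budget).indices] = 1.0
    return adj + mask.view(n, n) * (1 - 2 * adj)
```

AHSG's defining step, constraining the perturbation so that shared primary semantics are preserved in a latent space, is omitted from this sketch.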
Related papers
- Graph Defense Diffusion Model [26.41730982598055]
Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, which can greatly degrade their performance.
Existing graph purification methods attempt to address this issue by filtering attacked graphs.
We propose a more versatile approach for defending against adversarial attacks on graphs.
arXiv Detail & Related papers (2025-01-20T16:18:40Z)
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, however, remains underexplored on heterogeneous graphs, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Talos: A More Effective and Efficient Adversarial Defense for GNN Models Based on the Global Homophily of Graphs [2.4866716181615467]
Graph neural network (GNN) models are susceptible to adversarial attacks.
We propose a new defense method named Talos, which enhances the global, rather than local, homophily of graphs as a defense.
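For reference, here is a minimal sketch of one common global homophily measure (edge homophily, the fraction of intra-class edges); the paper's exact notion of global homophily may differ.

```python
import networkx as nx

def edge_homophily(G, labels):
    """Fraction of edges joining same-label endpoints: a simple
    graph-level (global) homophily measure."""
    same = sum(1 for u, v in G.edges() if labels[u] == labels[v])
    return same / G.number_of_edges()

# Example: a 4-cycle with two classes.
G = nx.cycle_graph(4)
labels = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_homophily(G, labels))  # 0.5: two of the four edges are intra-class
```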
arXiv Detail & Related papers (2024-06-06T08:08:01Z)
- Attacks on Node Attributes in Graph Neural Networks [32.40598187698689]
This research investigates the vulnerability of graph models through the application of feature-based adversarial attacks.
Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent than poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies.
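A minimal sketch of such a decision-time PGD attack on node features, assuming a pretrained differentiable PyTorch model with a `model(feats, adj)` interface; the L-infinity budget and step sizes are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def pgd_feature_attack(model, feats, adj, labels, eps=0.1, alpha=0.01, steps=20):
    """Decision-time PGD on node features within an L-infinity ball of radius eps."""
    x = feats.clone()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x, adj), labels)
        grad = torch.autograd.grad(loss, x)[0]
        x = x.detach() + alpha * grad.sign()       # signed ascent step
        x = feats + (x - feats).clamp(-eps, eps)   # project back into the ball
    return x.detach()
```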
arXiv Detail & Related papers (2024-02-19T17:52:29Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack: the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
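One plausible way to write out the stated formulation (the notation here is assumed, not taken from the paper):

```latex
\min_{A'} \ \lVert A' - A \rVert_{0}
\quad \text{s.t.} \quad f(A') \neq y,
\qquad A' \in \{0, 1\}^{n \times n}
```

where A is the clean adjacency matrix, f is the target model's hard-label prediction, and y is the graph's original label.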
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
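As a hedged illustration of the discreteness issue a projected gradient module must handle: after a relaxed gradient update, edge values lie in [0, 1] and must be mapped back to a binary graph. One simple recovery scheme (not necessarily GraphMI's exact one) treats the relaxed entries as edge probabilities:

```python
import torch

def discretize_adjacency(A_relaxed):
    """Map a relaxed adjacency with entries in [0, 1] back to a binary,
    symmetric graph by treating entries as edge probabilities."""
    P = A_relaxed.clamp(0, 1)
    P = (P + P.t()) / 2                                 # symmetrize
    upper = torch.bernoulli(torch.triu(P, diagonal=1))  # sample each edge once
    return upper + upper.t()                            # mirror to keep symmetry
```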
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, they are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations due to their lack of robustness.
Most existing attacks on GNNs use gradient information to guide the attack and achieve strong performance.
We argue that such attacks scale poorly mainly because they must operate on the whole graph, so their time and space complexity grow with the data scale.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
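A minimal sketch of the DAC idea using networkx's built-in degree assortativity coefficient; the paper's exact definition and normalization may differ.

```python
import networkx as nx

def degree_assortativity_change(G_clean, G_attacked):
    """How much an attack shifts the graph's degree-degree correlation:
    the absolute change in degree assortativity."""
    r_clean = nx.degree_assortativity_coefficient(G_clean)
    r_attacked = nx.degree_assortativity_coefficient(G_attacked)
    return abs(r_attacked - r_clean)

# Example: perturb a Barabasi-Albert graph with a few degree-preserving rewires.
G = nx.barabasi_albert_graph(100, 3, seed=0)
H = nx.double_edge_swap(G.copy(), nswap=10, seed=0)
print(degree_assortativity_change(G, H))
```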
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, comprising both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
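A toy sketch of subgraph-trigger injection in the spirit of GTA, using networkx. GTA optimizes the trigger's topology and features per input; this version injects a fixed triangle purely for illustration, and the function and node choices are assumptions.

```python
import networkx as nx

def inject_trigger(G, trigger, attach_nodes):
    """Add a fixed trigger motif as fresh nodes and wire it to chosen
    victim nodes; a backdoored model would associate the motif with
    the attacker's target label."""
    offset = max(G.nodes()) + 1
    mapping = {n: n + offset for n in trigger.nodes()}
    G.add_edges_from((mapping[u], mapping[v]) for u, v in trigger.edges())
    for victim, t in zip(attach_nodes, trigger.nodes()):
        G.add_edge(victim, mapping[t])
    return G

G = nx.karate_club_graph()
trigger = nx.complete_graph(3)  # toy trigger: a triangle
poisoned = inject_trigger(G, trigger, attach_nodes=[0, 5])
```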
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
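A minimal sketch of the alternating optimization such joint learning implies, assuming a differentiable PyTorch GNN with a `model(feats, S)` interface; Pro-GNN's actual objective also includes low-rank and feature-smoothness regularizers omitted here, and `alpha`, `beta` are illustrative.

```python
import torch
import torch.nn.functional as F

def alternating_step(model, S, A_obs, feats, labels, opt_w, opt_s,
                     alpha=1e-3, beta=1e-3):
    """One alternating update: fit the GNN on the learned structure S,
    then refine S toward the task while keeping it sparse and close to
    the observed adjacency A_obs."""
    # (1) Update GNN weights on the current learned structure.
    opt_w.zero_grad()
    F.cross_entropy(model(feats, S), labels).backward()
    opt_w.step()
    # (2) Update the structure: task loss + L1 sparsity + fidelity to A_obs.
    opt_s.zero_grad()
    loss_s = (F.cross_entropy(model(feats, S), labels)
              + alpha * S.abs().sum()
              + beta * (S - A_obs).pow(2).sum())
    loss_s.backward()
    opt_s.step()
    with torch.no_grad():
        S.clamp_(0, 1)  # keep learned edge weights in [0, 1]

# Usage sketch: S = torch.nn.Parameter(A_obs.clone());
# opt_w = torch.optim.Adam(model.parameters()); opt_s = torch.optim.SGD([S], lr=0.01)
```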
arXiv Detail & Related papers (2020-05-20T17:07:05Z)