What Does the Gradient Tell When Attacking the Graph Structure
- URL: http://arxiv.org/abs/2208.12815v2
- Date: Wed, 29 Mar 2023 12:19:49 GMT
- Title: What Does the Gradient Tell When Attacking the Graph Structure
- Authors: Zihan Liu, Ge Wang, Yun Luo, Stan Z. Li
- Abstract summary: We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
- Score: 44.44204591087092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has revealed that Graph Neural Networks (GNNs) are
susceptible to adversarial attacks targeting the graph structure. A malicious
attacker can manipulate a limited number of edges, given the training labels,
to impair the victim model's performance. Previous empirical studies indicate
that gradient-based attackers tend to add edges rather than remove them. In
this paper, we present a theoretical demonstration revealing that attackers
tend to increase inter-class edges due to the message passing mechanism of
GNNs, which explains some previous empirical observations. By connecting
dissimilar nodes, attackers can more effectively corrupt node features, making
such attacks more advantageous. However, we demonstrate that the inherent
smoothness of GNN's message passing tends to blur node dissimilarity in the
feature space, leading to the loss of crucial information during the forward
process. To address this issue, we propose a novel surrogate model with
multi-level propagation that preserves the node dissimilarity information. This
model parallelizes the propagation of unaggregated raw features and multi-hop
aggregated features, while introducing batch normalization to enhance the
dissimilarity in node representations and counteract the smoothness resulting
from topological aggregation. Our experiments show significant improvement with
our approach. Furthermore, both theoretical and experimental evidence suggests
that adding inter-class edges constitutes an easily observable attack pattern.
We propose an innovative attack loss that balances attack effectiveness and
imperceptibility, sacrificing some attack effectiveness to attain greater
imperceptibility. We also provide experiments validating the trade-off achieved by this attack loss.
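To make the two methodological ideas above concrete, below is a minimal, hypothetical PyTorch sketch of (i) a surrogate with multi-level propagation that keeps the unaggregated raw features alongside multi-hop aggregated features, batch-normalizing each branch to counteract the smoothing caused by topological aggregation, and (ii) a single gradient-guided edge flip on a dense adjacency matrix. All names, the two-hop default, and the greedy flip rule are illustrative assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the ideas described in the abstract; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """GCN-style symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)


class MultiLevelSurrogate(nn.Module):
    """Surrogate sketch: keeps the raw (0-hop) features in parallel with 1..K-hop
    aggregated features; each branch is batch-normalized so node dissimilarity is
    not blurred away by repeated topological aggregation."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int, hops: int = 2):
        super().__init__()
        self.hops = hops
        self.branch_norms = nn.ModuleList(nn.BatchNorm1d(in_dim) for _ in range(hops + 1))
        self.classifier = nn.Sequential(
            nn.Linear((hops + 1) * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, n_classes)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a_norm = normalized_adjacency(adj)
        branches, h = [], x
        for k in range(self.hops + 1):
            branches.append(self.branch_norms[k](h))  # normalize the k-hop branch
            h = a_norm @ h                            # propagate one more hop
        return self.classifier(torch.cat(branches, dim=1))


def pick_edge_flip(model, x, adj, labels):
    """Greedy structure-attack step: take the gradient of the surrogate loss w.r.t.
    the adjacency and flip the single entry with the largest expected gain."""
    adj = adj.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x, adj), labels)
    grad = torch.autograd.grad(loss, adj)[0]
    grad = grad + grad.t()                            # keep the perturbation symmetric
    # Adding an absent edge helps if its gradient is positive; removing an
    # existing edge helps if its gradient is negative.
    score = torch.where(adj > 0, -grad, grad)
    score.fill_diagonal_(float("-inf"))
    idx = torch.argmax(score)
    return divmod(idx.item(), adj.size(0))            # (row, col) of the chosen flip
```

Repeating pick_edge_flip up to an edge budget yields the familiar greedy gradient attack; in this kind of sketch most selected flips tend to be additions, which is consistent with the observation above that gradient-based attackers favor inserting (often inter-class) edges.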
Related papers
- Towards Reasonable Budget Allocation in Untargeted Graph Structure Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss function is used to evaluate perturbation schemes in classification tasks.
Previous methods use the negative cross-entropy loss as the attack objective when attacking node-level classification models (a conventional form of this objective is sketched after this list).
This paper argues that this attack objective is unreasonable from the perspective of budget allocation.
arXiv Detail & Related papers (2023-03-29T13:02:02Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- Surrogate Representation Learning with Isometric Mapping for Gray-box Graph Adversarial Attacks [27.317964031440546]
Gray-box graph attacks aim to disrupt the victim model's performance using only limited knowledge of that model.
To obtain the gradient on the node attributes or graph structure, the attacker constructs an imaginary surrogate model trained under supervision.
This paper investigates the effect of representation learning of surrogate models on the transferability of gray-box graph adversarial attacks.
arXiv Detail & Related papers (2021-10-20T10:47:34Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying the graph.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attack surface, ironically due to their unique advantage of being able to exploit relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against OddBall, a representative regression-based GAD system.
arXiv Detail & Related papers (2021-06-18T08:20:23Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Towards More Practical Adversarial Attacks on Graph Neural Networks [14.78539966828287]
We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint.
We show that the structural inductive biases of GNN models can be an effective source for this type of attack.
arXiv Detail & Related papers (2020-06-09T05:27:39Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
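As referenced in the budget-allocation entry above, the conventional untargeted structure-attack objective that it critiques is typically stated as maximizing the victim's cross-entropy over the attacked nodes within an edge budget. A hedged restatement in standard notation (the symbols below are conventional, not taken from that paper):

```latex
\max_{\hat{A}} \; \sum_{i \in \mathcal{V}} \mathcal{L}_{\mathrm{CE}}\big(f_\theta(\hat{A}, X)_i,\ y_i\big)
\quad \text{s.t.} \quad \lVert \hat{A} - A \rVert_0 \le 2\Delta ,
```

where A is the clean adjacency matrix, \hat{A} the perturbed one, X the node features, y_i the labels of the attacked nodes, and \Delta the edge budget (the factor of 2 accounts for symmetry). The budget-allocation paper argues that this objective distributes the perturbation budget unreasonably across nodes, while the main paper above instead modifies the objective to trade some attack effectiveness for imperceptibility.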
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.