Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias
- URL: http://arxiv.org/abs/2304.00010v1
- Date: Wed, 29 Mar 2023 13:02:02 GMT
- Title: Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias
- Authors: Zihan Liu, Yun Luo, Lirong Wu, Zicheng Liu, Stan Z. Li
- Abstract summary: The cross-entropy loss function is routinely used to evaluate perturbation schemes in classification tasks.
Previous methods use negative cross-entropy loss as the attack objective when attacking node-level classification models.
This paper argues, from the perspective of budget allocation, that this attack objective is unreasonable.
- Score: 50.628150015907565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has become cognitive inertia to employ the cross-entropy loss function in
classification-related tasks. In untargeted attacks on graph structure, the
gradients derived from the attack objective are the attacker's basis for
evaluating a perturbation scheme. Previous methods use negative cross-entropy
loss as the attack objective in attacking node-level classification models.
However, the suitability of the cross-entropy function for constructing the
untargeted attack objective has not yet been discussed in previous works. This
paper argues, from the perspective of budget allocation, that the previous
attack objective is unreasonable. We demonstrate theoretically and empirically
that negative cross-entropy tends to produce more significant gradients from
nodes with lower confidence in the labeled classes, even if the predicted
classes of these nodes have been misled. To free up these inefficient attack
budgets, we propose a simple attack model for untargeted attacks on graph
structure based on a novel attack objective which generates unweighted
gradients on graph structures that are not affected by the node confidence. By
conducting experiments in gray-box poisoning attack scenarios, we demonstrate
that a reasonable budget allocation can significantly improve the effectiveness
of gradient-based edge perturbations without any extra hyper-parameter.
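The core claim is easiest to see from the softmax cross-entropy gradient: for a labeled node with true-class probability p_y, the gradient of the cross-entropy with respect to the true-class logit is p_y - 1, so its magnitude (1 - p_y) grows as confidence drops, and already-misled nodes keep producing large structural gradients that soak up budget. The sketch below is an illustrative gray-box setup, not the paper's implementation: a tiny dense-adjacency surrogate GCN whose structural gradients can be computed under either the classic negative-cross-entropy objective or a confidence-agnostic margin-style objective; all function and variable names are assumptions.

```python
# Illustrative gray-box setup (not the paper's implementation): structural
# gradients from a small surrogate GCN on a dense adjacency, under either the
# classic negative-cross-entropy objective or a confidence-agnostic
# margin-style objective. All names and shapes here are assumptions.
import torch
import torch.nn.functional as F

def gcn_forward(adj, x, w1, w2):
    """Two-layer GCN on a dense, symmetrically normalized adjacency."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    h = torch.relu(a_norm @ x @ w1)
    return a_norm @ h @ w2  # logits

def structural_gradients(adj, x, labels, w1, w2, objective="neg_ce"):
    """Gradient of the (maximized) attack objective w.r.t. each adjacency entry."""
    adj = adj.clone().requires_grad_(True)
    logits = gcn_forward(adj, x, w1, w2)
    if objective == "neg_ce":
        # Classic choice: maximize the cross-entropy of the labeled nodes.
        # Per node, |d CE / d z_y| = 1 - p_y, so low-confidence (often already
        # misled) nodes contribute the largest gradients and soak up budget.
        obj = F.cross_entropy(logits, labels)
    else:
        # Confidence-agnostic alternative (illustrative, not the paper's exact
        # loss): maximize the negated hinge margin, which weights every
        # still-correct node equally and ignores already-misclassified ones.
        true_logit = logits.gather(1, labels[:, None]).squeeze(1)
        mask = F.one_hot(labels, logits.size(1)).bool()
        runner_up = logits.masked_fill(mask, float("-inf")).max(1).values
        obj = -torch.clamp(true_logit - runner_up, min=0).mean()
    obj.backward()
    return adj.grad

# Toy usage: random graph, features, and surrogate weights.
torch.manual_seed(0)
n, d, c = 6, 4, 2
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
x, labels = torch.randn(n, d), torch.randint(0, c, (n,))
w1, w2 = torch.randn(d, 8), torch.randn(8, c)

grad = structural_gradients(adj, x, labels, w1, w2, objective="neg_ce")
# An attacker would add the non-edge with the most positive gradient or remove
# the existing edge with the most negative one, repeating until the budget is spent.
print(grad.shape, grad.abs().max())
```

A dense adjacency is used only so that autograd exposes a gradient for every candidate edge; attacks on large graphs would need sparse or sampled variants of the same idea.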
Related papers
- AGSOA:Graph Neural Network Targeted Attack Based on Average Gradient and Structure Optimization [16.681157857248436]
Graph Neural Networks (GNNs) are vulnerable to adversarial attacks that cause performance degradation by adding small perturbations to the graph.
This paper proposes an attack on GNNs, called AGSOA, which consists of an average gradient calculation module and a structure optimization module.
arXiv Detail & Related papers (2024-06-19T05:29:20Z) - Minimum Topology Attacks for Graph Neural Networks [70.17791814425148]
The robustness of Graph Neural Networks (GNNs) to adversarial topology attacks has received significant attention.
We propose a new type of topology attack, named minimum-budget topology attack, which adaptively finds the minimum perturbation sufficient for a successful attack on each node (a toy greedy sketch of this per-node search appears after this list).
arXiv Detail & Related papers (2024-03-05T07:29:12Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can more effectively corrupt node features, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility.
arXiv Detail & Related papers (2022-08-26T15:45:20Z) - Are Gradients on Graph Structure Reliable in Gray-box Attacks? [56.346504691615934]
Previous gray-box attackers employ gradients from the surrogate model to locate the vulnerable edges to perturb the graph structure.
In this paper, we discuss and analyze the errors caused by the unreliability of the structural gradients.
We propose a novel attack model with methods to reduce the errors inside the structural gradients.
arXiv Detail & Related papers (2022-08-07T06:43:32Z) - Surrogate Representation Learning with Isometric Mapping for Gray-box
Graph Adversarial Attacks [27.317964031440546]
Gray-box graph attacks aim at disrupting the performance of the victim model by using attacks with limited knowledge of the victim model.
To obtain the gradient on the node attributes or graph structure, the attacker constructs an imaginary surrogate model trained under supervision.
This paper investigates the effect of representation learning of surrogate models on the transferability of gray-box graph adversarial attacks.
arXiv Detail & Related papers (2021-10-20T10:47:34Z) - BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly
Detection [20.666171188140503]
Graph-based Anomaly Detection (GAD) is becoming prevalent due to the powerful representation abilities of graphs.
These GAD tools expose a new attacking surface, ironically due to their unique advantage of being able to exploit the relations among data.
In this paper, we exploit this vulnerability by designing a new type of targeted structural poisoning attack against a representative regression-based GAD system, OddBall.
arXiv Detail & Related papers (2021-06-18T08:20:23Z) - Towards More Practical Adversarial Attacks on Graph Neural Networks [14.78539966828287]
We study the black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint.
We show that the structural inductive biases of GNN models can be an effective source for this type of attack.
arXiv Detail & Related papers (2020-06-09T05:27:39Z) - Topological Effects on Attacks Against Vertex Classification [61.62383779296796]
This paper considers two topological characteristics of graphs and explores how these features affect the amount by which the adversary must perturb the graph in order to succeed.
We show that, if certain vertices are included in the training set, it is possible to substantially increase an adversary's required perturbation budget.
Even for especially easy targets (those misclassified after just one or two perturbations), the degradation of performance is much slower, with much lower probabilities assigned to the incorrect classes.
arXiv Detail & Related papers (2020-03-12T14:37:57Z)
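As a companion to the Minimum Topology Attacks entry above, here is a toy greedy sketch of a per-node minimum-budget search under the same dense-adjacency assumptions; it is not the cited paper's algorithm, and every name below is illustrative.

```python
# Toy greedy sketch of a per-node minimum-budget search (assumption-laden,
# not the cited paper's algorithm): flip one edge at a time, always the flip
# whose gradient most harms the target node, and stop at misclassification.
import torch
import torch.nn.functional as F

def gcn_logits(adj, x, w):
    """One-layer GCN-style classifier on a dense adjacency (illustrative)."""
    a_hat = adj + torch.eye(adj.size(0))
    d = a_hat.sum(1).clamp(min=1e-8).pow(-0.5)
    return (d[:, None] * a_hat * d[None, :]) @ x @ w

def min_budget_attack(adj, x, w, target, label, max_flips=10):
    """Return the number of edge flips needed to misclassify `target`,
    or None if `max_flips` is not enough."""
    adj = adj.clone()
    for used in range(max_flips + 1):
        a = adj.clone().requires_grad_(True)
        logits = gcn_logits(a, x, w)
        if logits[target].argmax().item() != label:
            return used  # misclassified after `used` flips
        if used == max_flips:
            return None  # budget exhausted without success
        # Gradient of the target node's cross-entropy w.r.t. the adjacency.
        F.cross_entropy(logits[target:target + 1], torch.tensor([label])).backward()
        # Adding a non-edge helps if its gradient is positive; removing an
        # existing edge helps if its gradient is negative.
        score = a.grad * (1 - 2 * adj)
        score.fill_diagonal_(float("-inf"))
        i, j = divmod(score.argmax().item(), adj.size(0))
        adj[i, j] = adj[j, i] = 1 - adj[i, j]  # flip symmetrically
    return None

# Toy usage with a random graph and random surrogate weights.
torch.manual_seed(0)
n, d, c = 8, 5, 3
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)
x, w = torch.randn(n, d), torch.randn(d, c)
labels = gcn_logits(adj, x, w).argmax(1)  # pretend predictions are the labels
print(min_budget_attack(adj, x, w, target=0, label=labels[0].item()))
```

The returned flip count gives a per-node notion of attack budget, which connects back to the budget-allocation argument in the main abstract.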
This list is automatically generated from the titles and abstracts of the papers in this site.