Minimum Topology Attacks for Graph Neural Networks
- URL: http://arxiv.org/abs/2403.02723v1
- Date: Tue, 5 Mar 2024 07:29:12 GMT
- Title: Minimum Topology Attacks for Graph Neural Networks
- Authors: Mengmei Zhang, Xiao Wang, Chuan Shi, Lingjuan Lyu, Tianchi Yang,
Junping Du
- Abstract summary: The robustness of Graph Neural Networks (GNNs) to adversarial topology attacks has received significant attention.
We propose a new type of topology attack, named minimum-budget topology attack, aiming to adaptively find the minimum perturbation sufficient for a successful attack on each node.
- Score: 70.17791814425148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the great popularity of Graph Neural Networks (GNNs), their
robustness to adversarial topology attacks has received significant attention.
Although many attack methods have been proposed, they mainly focus on
fixed-budget attacks, which aim to find the most adversarial perturbations
within a fixed budget for each target node. However, given the varied
robustness of individual nodes, a fixed budget creates an inevitable dilemma:
if the budget is relatively small, no successful perturbation may be found,
while if it is too large, the resulting redundant perturbations hurt the
attack's imperceptibility. To break this dilemma, we propose a new type of
topology attack, named the minimum-budget topology attack, which adaptively
finds the minimum perturbation sufficient for a successful attack on each
node. To this end, we propose an attack model, named MiBTack, based on a
dynamic projected gradient descent algorithm, which can effectively solve the
involved non-convex constrained optimization over the discrete topology.
Extensive results on three GNNs and four real-world datasets show that MiBTack
successfully misclassifies all target nodes with the fewest perturbed edges.
Moreover, the obtained minimum budget can be used to measure node robustness,
allowing us to explore the relationships among node robustness, topology, and
uncertainty, which is beyond what current fixed-budget topology attacks can
offer.
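To make the contrast with fixed-budget attacks concrete, the minimum-budget
objective can be written (in notation assumed here, not taken verbatim from
the paper) as

    \min_{\hat{A}} \|\hat{A} - A\|_0 \quad \text{s.t.} \quad \arg\max_c f_\theta(\hat{A}, X)_{t,c} \neq y_t,

i.e., flip as few entries of the adjacency matrix $A$ as possible while still
misclassifying the target node $t$, whereas a fixed-budget attack maximizes an
attack loss subject to $\|\hat{A} - A\|_0 \le \Delta$ for a preset budget
$\Delta$.

The sketch below shows one way such an attack could be organized as projected
gradient descent over relaxed edge-flip scores, sweeping the budget upward
until the smallest successful discrete flip set is found. It is a simplified
illustration under stated assumptions, not MiBTack's released implementation:
`attack_grad` and `is_successful` are hypothetical callables, and MiBTack's
dynamic budget update is replaced here by a plain linear sweep.

```python
# Hedged sketch (not the authors' code): projected gradient descent over
# relaxed edge-flip scores, sweeping the L1 budget upward until the smallest
# successful discrete flip set is found.
import numpy as np


def project_l1_box(s: np.ndarray, budget: float) -> np.ndarray:
    """Project s onto {x : 0 <= x <= 1, sum(x) <= budget} via bisection."""
    boxed = np.clip(s, 0.0, 1.0)
    if boxed.sum() <= budget:
        return boxed
    lo, hi = 0.0, float(s.max())
    for _ in range(50):                       # bisection on the dual shift mu
        mu = 0.5 * (lo + hi)
        if np.clip(s - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(s - hi, 0.0, 1.0)


def minimum_budget_pgd(attack_grad, is_successful, n_edges: int,
                       max_budget: int = 20, steps: int = 200, lr: float = 0.1):
    """Return (budget, flipped_edge_indices) for the smallest budget that
    misclassifies the target node, or None if max_budget is not enough."""
    for budget in range(1, max_budget + 1):   # simplified linear budget sweep
        s = np.zeros(n_edges)                 # relaxed edge-flip scores in [0, 1]
        for _ in range(steps):                # gradient ascent on the attack loss
            s = project_l1_box(s + lr * attack_grad(s), float(budget))
        flips = np.argsort(-s)[:budget]       # discretize: top-`budget` scores
        if is_successful(flips):
            return budget, flips              # smallest sufficient budget found
    return None
```

In practice, `attack_grad` would backpropagate an attack loss (e.g., a margin
loss on the target node) through a surrogate GNN with respect to the relaxed
edge variables, and `is_successful` would query the victim model on the
discretized perturbed graph.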
Related papers
- Graph Protection under Multiple Simultaneous Attacks: A Heuristic Approach [41.94295877935867]
This work focuses on developing an effective meta-heuristic approach to protect against simultaneous attacks on nodes of a network modeled using a graph.
Specifically, we focus on the $k$-strong Roman domination problem, a generalization of the well-known Roman domination problem on graphs.
We propose a variable neighborhood search algorithm in which the feasibility of the solution is checked by introducing the concept of quasi-feasibility.
arXiv Detail & Related papers (2024-03-25T18:46:13Z) - Cost Aware Untargeted Poisoning Attack against Graph Neural Networks [5.660584039688214]
We propose a novel attack loss framework called the Cost Aware Poisoning Attack (CA-attack) to improve the allocation of the attack budget.
Our experiments demonstrate that the proposed CA-attack significantly enhances existing attack strategies.
arXiv Detail & Related papers (2023-12-12T10:54:02Z) - Single Node Injection Label Specificity Attack on Graph Neural Networks
via Reinforcement Learning [8.666702832094874]
We present a gradient-free generalizable adversary that injects a single malicious node to manipulate a target node in the black-box evasion setting.
By directly querying the victim model, G$^2$-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets.
arXiv Detail & Related papers (2023-05-04T15:10:41Z) - Towards Reasonable Budget Allocation in Untargeted Graph Structure
Attacks via Gradient Debias [50.628150015907565]
The cross-entropy loss is commonly used to evaluate perturbation schemes in classification tasks,
and previous methods adopt the negative cross-entropy loss as the attack objective when attacking node-level classification models.
This paper argues that this attack objective is unreasonable from the perspective of budget allocation (a minimal illustration of this objective, next to a margin-style alternative, appears after this list).
arXiv Detail & Related papers (2023-03-29T13:02:02Z) - Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z) - Bandits for Structure Perturbation-based Black-box Attacks to Graph
Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box structure-perturbation attacks on GNNs with theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z) - GUARD: Graph Universal Adversarial Defense [54.81496179947696]
We present a simple yet effective method, named Graph Universal Adversarial Defense (GUARD).
GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node in a graph.
GUARD significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art defense methods by large margins.
arXiv Detail & Related papers (2022-04-20T22:18:12Z) - Adversarial Attack on Graph Neural Networks as An Influence Maximization
Problem [12.88476464580968]
Graph neural networks (GNNs) have attracted increasing interest.
There is an urgent need for understanding the robustness of GNNs under adversarial attacks.
arXiv Detail & Related papers (2021-06-21T00:47:44Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine them with an additional $\ell_\infty$ constraint on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in a unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - AN-GCN: An Anonymous Graph Convolutional Network Defense Against
Edge-Perturbing Attack [53.06334363586119]
Recent studies have revealed the vulnerability of graph convolutional networks (GCNs) to edge-perturbing attacks.
We first generalize the formulation of edge-perturbing attacks and strictly prove the vulnerability of GCNs to such attacks in node classification tasks.
Following this, an anonymous graph convolutional network, named AN-GCN, is proposed to counter edge-perturbing attacks.
arXiv Detail & Related papers (2020-05-06T08:15:24Z)
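As a minimal illustration of the attack objective discussed in the
budget-allocation entry above (hypothetical code, not taken from any of the
cited papers), the snippet below contrasts a cross-entropy-based objective,
which keeps growing even after the target node is already misclassified, with
a clipped margin-style objective that stops growing once misclassification is
achieved.

```python
# Hedged illustration only: two per-node attack objectives over a logit vector.
import numpy as np


def cross_entropy_objective(logits: np.ndarray, label: int) -> float:
    """Cross-entropy of the true class; prior attacks maximize this
    (equivalently, they minimize the negative cross-entropy)."""
    log_probs = logits - logits.max() - np.log(np.exp(logits - logits.max()).sum())
    return -log_probs[label]   # unbounded: keeps growing after misclassification


def margin_objective(logits: np.ndarray, label: int, kappa: float = 0.0) -> float:
    """CW-style margin, clipped at kappa: once the best wrong class beats the
    true class by kappa, the objective saturates and stops attracting budget."""
    best_other = np.max(np.delete(logits, label))
    return min(best_other - logits[label], kappa)
```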